diff --git a/.gitattributes b/.gitattributes index 25e3302a42b40..e032f59a3616d 100644 --- a/.gitattributes +++ b/.gitattributes @@ -9,3 +9,5 @@ docs/theme/js/theme.js linguist-vendored=false *_pb.ts linguist-generated *_pb.grpc-*.ts linguist-generated *_pb.client.ts linguist-generated + +*.golden -text diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS index 5d2098d1b0a99..f709ff96fa8e8 100644 --- a/.github/CODEOWNERS +++ b/.github/CODEOWNERS @@ -1,8 +1,25 @@ # Merge rules are governed by logic in the Workflow Bot. Protect the # .github/workflows directory (and the merge logic) using CODEOWNERS. -/.github/workflows/ @klizhentas @russjones @r0mant @zmb3 @fheinecke @camscale @tcsc @rosstimothy -/build.assets/tooling/cmd/difftest/ @klizhentas @russjones @r0mant @zmb3 +/.github/workflows/ @klizhentas @russjones @r0mant @zmb3 @fheinecke @camscale @doggydogworld @rosstimothy +/.github/actions/ @klizhentas @russjones @r0mant @zmb3 @fheinecke @camscale @doggydogworld @rosstimothy +/build.assets/tooling/cmd/difftest/ @klizhentas @russjones @r0mant @zmb3 @rosstimothy -# Owners for dependency updates in JS packages. -/pnpm-lock.yaml @avatus @gzdunek @ravicious -web/packages/teleterm/package.json @gzdunek @ravicious +# Owners for JS dependency updates. +/pnpm-lock.yaml @avatus @gzdunek @ravicious @zmb3 @r0mant +/web/packages/teleterm/package.json @gzdunek @ravicious + +# Owners for Go dependency updates. 
+/go.mod @russjones @r0mant @zmb3 @rosstimothy +/api/go.mod @russjones @r0mant @zmb3 @rosstimothy +/assets/aws/go.mod @russjones @r0mant @zmb3 @rosstimothy +/assets/backport/go.mod @russjones @r0mant @zmb3 @rosstimothy +/build.assets/tooling/go.mod @russjones @r0mant @zmb3 @rosstimothy @fheinecke @camscale @doggydogworld +/integrations/terraform/go.mod @russjones @r0mant @zmb3 @rosstimothy @hugoShaka @tigrato +/integrations/terraform-mwi/go.mod @russjones @r0mant @zmb3 @rosstimothy @hugoShaka @tigrato @strideynet +/integrations/event-handler/go.mod @russjones @r0mant @zmb3 @rosstimothy @hugoShaka @tigrato + +# Owners for Rust dependency updates. +/Cargo.toml @russjones @r0mant @zmb3 @rosstimothy @fspmarshall @espadolini +/tool/fdpass-teleport/Cargo.toml @russjones @r0mant @zmb3 @rosstimothy @fspmarshall @espadolini +/web/packages/shared/libs/ironrdp/Cargo.toml @russjones @r0mant @zmb3 @rosstimothy @fspmarshall @espadolini @probakowski +/lib/srv/desktop/rdp/rdpclient/Cargo.toml @russjones @r0mant @zmb3 @rosstimothy @fspmarshall @espadolini @probakowski diff --git a/.github/ISSUE_TEMPLATE/test-plan-docs.md b/.github/ISSUE_TEMPLATE/test-plan-docs.md index 7182a1ffc4294..6ea9a66b54f7f 100644 --- a/.github/ISSUE_TEMPLATE/test-plan-docs.md +++ b/.github/ISSUE_TEMPLATE/test-plan-docs.md @@ -102,10 +102,10 @@ version of Teleport. and verify their accuracy while using the newly released major version of Teleport. - - [ ] General [installation page](../../docs/pages/installation.mdx): ensure + - [ ] General [installation page](../../docs/pages/installation/installation.mdx): ensure that installation methods support the new release candidate. - [ ] [Teleport Community - Edition](../../docs/pages/admin-guides/deploy-a-cluster/linux-demo.mdx) demo + Edition](../../docs/pages/get-started/deploy-community.mdx) demo guide. - [ ] [Teleport Enterprise (Cloud)](../../docs/pages/get-started.mdx) getting started guide. 
diff --git a/.github/ISSUE_TEMPLATE/testplan.md b/.github/ISSUE_TEMPLATE/testplan.md index 36fbdb83bf270..f28b95bf408f2 100644 --- a/.github/ISSUE_TEMPLATE/testplan.md +++ b/.github/ISSUE_TEMPLATE/testplan.md @@ -569,6 +569,10 @@ on the remote host. Note that the `--callback` URL must be able to resolve to th [Docs](https://goteleport.com/docs/agents/join-services-to-your-cluster/gcp/) - [ ] Join a Teleport node running in a GCP VM. +### Oracle Node Joining +[Docs](https://goteleport.com/docs/enroll-resources/agents/oracle/) +- [ ] Join a Teleport node running in an OCI VM. + ### Cloud Labels - [ ] Create an EC2 instance with [tags in instance metadata enabled](https://goteleport.com/docs/management/guides/ec2-tags/) and with tag `foo`: `bar`. Verify that a node running on the instance has label @@ -740,6 +744,8 @@ tsh ssh node-that-requires-device-trust - [ ] K8s Access - [ ] App Access NOT enforced in global mode - [ ] Desktop Access NOT enforced in global mode + - [ ] device_trust.mode="required-for-humans" enforces enrolled devices for + humans, but bots (e.g. `tbot`) function on any device - [ ] Role-based authz enforces enrolled devices (device_trust.mode="optional" and role.spec.options.device_trust_mode="required") - [ ] SSH @@ -1039,9 +1045,10 @@ tsh bench web sessions --max=5000 --web user ls - [ ] Verify [AWS console access](https://goteleport.com/docs/application-access/cloud-apis/aws-console/). - [ ] Can log into AWS web console through the web UI. - [ ] Can interact with AWS using `tsh` commands. - - [ ] `tsh aws` - - [ ] `tsh aws --endpoint-url` (this is a hidden flag) -- [ ] Verify [Azure CLI access](https://goteleport.com/docs/application-access/cloud-apis/azure/) with `tsh apps login`. + - [ ] `tsh aws sts get-caller-identity` + - [ ] `tsh aws s3 ls` + - [ ] `tsh aws s3 cp ./file s3:///test` +- [ ] Verify [Azure CLI access](https://goteleport.com/docs/enroll-resources/application-access/cloud-apis/azure/) with `tsh apps login`. 
- [ ] Can interact with Azure using `tsh az` commands. - [ ] Can interact with Azure using a combination of `tsh proxy az` and `az` commands. - [ ] Verify [GCP CLI access](https://goteleport.com/docs/application-access/cloud-apis/google-cloud/) with `tsh apps login`. @@ -1086,6 +1093,7 @@ manualy testing. - [ ] Amazon Redshift Serverless. - [ ] Verify connection to external AWS account works with `assume_role_arn: ""` and `external_id: ""` - [ ] Amazon ElastiCache. + - [ ] Amazon ElastiCache Serverless. - [ ] Amazon MemoryDB. - [ ] Amazon OpenSearch. - [ ] Amazon Dynamodb. @@ -1124,6 +1132,7 @@ manualy testing. - [ ] Amazon Redshift. - [ ] Amazon Redshift Serverless. - [ ] Amazon ElastiCache. + - [ ] Amazon ElastiCache Serverless. - [ ] Amazon MemoryDB. - [ ] Amazon OpenSearch. - [ ] Amazon Dynamodb. @@ -1184,6 +1193,7 @@ manualy testing. - [x] Can detect and register Redshift clusters. (covered by E2E test) - [x] Can detect and register Redshift serverless workgroups, and their VPC endpoints. (covered by E2E test) - [ ] Can detect and register ElastiCache Redis clusters. + - [ ] Can detect and register ElastiCache Serverless Redis/Valkey clusters. - [ ] Can detect and register MemoryDB clusters. - [ ] Can detect and register OpenSearch domains. - [ ] Can detect and register DocumentDB clusters. 
@@ -1639,6 +1649,7 @@ Verify that SSH works, and that resumable SSH is not interrupted across a contro - [ ] New EC2 instances with matching AWS tags are discovered and added to the teleport cluster - [ ] Large numbers of EC2 instances (51+) are all successfully added to the cluster - [ ] Nodes that have been discovered do not have the install script run on the node multiple times + - [ ] EC2 instances can be discovered in multiple accounts ## Azure Discovery diff --git a/.github/ISSUE_TEMPLATE/webtestplan.md b/.github/ISSUE_TEMPLATE/webtestplan.md index 13e0822366107..65edd73cb0983 100644 --- a/.github/ISSUE_TEMPLATE/webtestplan.md +++ b/.github/ISSUE_TEMPLATE/webtestplan.md @@ -132,6 +132,8 @@ For each, test the invite, reset, and login flows - [ ] Verify that error message is shown if an invite/reset is expired/invalid - [ ] Verify that account is locked after several unsuccessful login attempts +- [ ] Verify that after logging in, the user is automatically redirected to the login page when the session expires.
+ #### Auth Connectors For help with setting up auth connectors, check out the [Quick GitHub/SAML/OIDC Setup Tips] diff --git a/.github/dependabot.yml b/.github/dependabot.yml index b8c8bf7719609..3ce9f5f22fe1c 100644 --- a/.github/dependabot.yml +++ b/.github/dependabot.yml @@ -1,19 +1,16 @@ version: 2 updates: - package-ecosystem: gomod - directory: "/" + directory: '/' schedule: interval: monthly - day: "sunday" - time: "09:00" # 9am UTC + day: 'sunday' + time: '09:00' # 9am UTC ignore: - # Breaks backwards compatibility - - dependency-name: github.com/gravitational/ttlmap # Must be kept in-sync with libbpf - dependency-name: github.com/aquasecurity/libbpfgo # Forked/replaced dependencies - dependency-name: github.com/alecthomas/kingpin/v2 - - dependency-name: github.com/coreos/go-oidc - dependency-name: github.com/go-mysql-org/go-mysql - dependency-name: github.com/gogo/protobuf - dependency-name: github.com/julienschmidt/httprouter @@ -25,44 +22,36 @@ updates: groups: go: update-types: - - "minor" - - "patch" - reviewers: - - rosstimothy - - zmb3 - - hugoShaka + - 'minor' + - 'patch' labels: - - "dependencies" - - "go" - - "no-changelog" + - 'dependencies' + - 'go' + - 'no-changelog' - package-ecosystem: gomod - directory: "/api" + directory: '/api' schedule: interval: monthly - day: "sunday" - time: "09:00" # 9am UTC + day: 'sunday' + time: '09:00' # 9am UTC open-pull-requests-limit: 20 groups: go: update-types: - - "minor" - - "patch" - reviewers: - - rosstimothy - - zmb3 - - hugoShaka + - 'minor' + - 'patch' labels: - - "dependencies" - - "go" - - "no-changelog" + - 'dependencies' + - 'go' + - 'no-changelog' - package-ecosystem: gomod - directory: "/assets/aws" + directory: '/assets/aws' schedule: interval: monthly - day: "sunday" - time: "09:00" # 9am UTC + day: 'sunday' + time: '09:00' # 9am UTC ignore: # Forked/replaced dependencies - dependency-name: github.com/alecthomas/kingpin/v2 @@ -70,43 +59,36 @@ updates: groups: go: update-types: - - "minor" - - 
"patch" - reviewers: - - rosstimothy - - tcsc - - zmb3 + - 'minor' + - 'patch' labels: - - "dependencies" - - "go" - - "no-changelog" + - 'dependencies' + - 'go' + - 'no-changelog' - package-ecosystem: gomod - directory: "/assets/backport" + directory: '/assets/backport' schedule: interval: monthly - day: "sunday" - time: "09:00" # 9am UTC + day: 'sunday' + time: '09:00' # 9am UTC open-pull-requests-limit: 20 groups: go: update-types: - - "minor" - - "patch" - reviewers: - - rosstimothy - - zmb3 + - 'minor' + - 'patch' labels: - - "dependencies" - - "go" - - "no-changelog" + - 'dependencies' + - 'go' + - 'no-changelog' - package-ecosystem: gomod - directory: "/build.assets/tooling" + directory: '/build.assets/tooling' schedule: interval: monthly - day: "sunday" - time: "09:00" # 9am UTC + day: 'sunday' + time: '09:00' # 9am UTC ignore: # Forked/replaced dependencies - dependency-name: github.com/alecthomas/kingpin/v2 @@ -114,23 +96,19 @@ updates: groups: go: update-types: - - "minor" - - "patch" - reviewers: - - fheinecke - - rosstimothy - - zmb3 + - 'minor' + - 'patch' labels: - - "dependencies" - - "go" - - "no-changelog" + - 'dependencies' + - 'go' + - 'no-changelog' - package-ecosystem: gomod - directory: "/integrations/terraform" + directory: '/integrations/terraform' schedule: interval: monthly - day: "sunday" - time: "09:00" # 9am UTC + day: 'sunday' + time: '09:00' # 9am UTC ignore: # breaks compatibility - dependency-name: github.com/hashicorp/terraform-plugin-framework @@ -140,136 +118,156 @@ updates: groups: go: update-types: - - "minor" - - "patch" - reviewers: - - rosstimothy - - hugoShaka - - tigrato - - marcoandredinis + - 'minor' + - 'patch' labels: - - "dependencies" - - "go" - - "no-changelog" + - 'dependencies' + - 'go' + - 'no-changelog' - package-ecosystem: gomod - directory: "/integrations/event-handler" + directory: '/integrations/terraform-mwi' schedule: interval: monthly - day: "sunday" - time: "09:00" # 9am UTC + day: 'sunday' + time: 
'09:00' # 9am UTC open-pull-requests-limit: 20 groups: go: update-types: - - "minor" - - "patch" - reviewers: - - rosstimothy - - hugoShaka - - tigrato - - marcoandredinis + - 'minor' + - 'patch' labels: - - "dependencies" - - "go" - - "no-changelog" + - 'dependencies' + - 'go' + - 'no-changelog' + - package-ecosystem: gomod + directory: '/integrations/event-handler' + schedule: + interval: monthly + day: 'sunday' + time: '09:00' # 9am UTC + open-pull-requests-limit: 20 + groups: + go: + update-types: + - 'minor' + - 'patch' + labels: + - 'dependencies' + - 'go' + - 'no-changelog' + + - package-ecosystem: cargo + directory: '/' + schedule: + interval: monthly + day: 'sunday' + time: '09:00' # 9am UTC + open-pull-requests-limit: 20 + groups: + rust: + update-types: + - 'minor' + - 'patch' + labels: + - 'dependencies' + - 'rust' + - 'no-changelog' + + - package-ecosystem: cargo + directory: '/lib/srv/desktop/rdp/rdpclient' + schedule: + interval: monthly + day: 'sunday' + time: '09:00' # 9am UTC + open-pull-requests-limit: 20 + groups: + rust: + update-types: + - 'minor' + - 'patch' + labels: + - 'dependencies' + - 'rust' + - 'no-changelog' - package-ecosystem: cargo - directory: "/" + directory: '/tool/fdpass-teleport' schedule: interval: monthly - day: "sunday" - time: "09:00" # 9am UTC + day: 'sunday' + time: '09:00' # 9am UTC open-pull-requests-limit: 20 groups: rust: update-types: - - "minor" - - "patch" - reviewers: - - rosstimothy - - zmb3 + - 'minor' + - 'patch' labels: - - "dependencies" - - "rust" - - "no-changelog" + - 'dependencies' + - 'rust' + - 'no-changelog' - package-ecosystem: cargo - directory: "/lib/srv/desktop/rdp/rdpclient" + directory: '/web/packages/shared/libs/ironrdp/Cargo.toml' schedule: interval: monthly - day: "sunday" - time: "09:00" # 9am UTC + day: 'sunday' + time: '09:00' # 9am UTC open-pull-requests-limit: 20 groups: rust: update-types: - - "minor" - - "patch" - reviewers: - - rosstimothy - - zmb3 + - 'minor' + - 'patch' labels: - - 
"dependencies" - - "rust" - - "no-changelog" + - 'dependencies' + - 'rust' + - 'no-changelog' - package-ecosystem: npm - directory: "/" + directory: '/' schedule: interval: monthly - day: "sunday" - time: "09:00" # 9am UTC + day: 'sunday' + time: '09:00' # 9am UTC labels: - - "dependencies" - - "ui" - - "no-changelog" + - 'dependencies' + - 'ui' + - 'no-changelog' groups: electron: patterns: - - "electron*" + - 'electron*' ui: update-types: - - "minor" - - "patch" + - 'minor' + - 'patch' exclude-patterns: - - "electron*" + - 'electron*' open-pull-requests-limit: 20 - reviewers: - - avatus - - kimlisa - - rudream - - bl-nero - - gzdunek - - ravicious - - ryanclark - package-ecosystem: github-actions - directory: "/.github/workflows" + directory: '/.github/workflows' schedule: interval: monthly day: monday - time: "09:00" - timezone: "America/Los_Angeles" - reviewers: - - fheinecke - - camscale + time: '09:00' + timezone: 'America/Los_Angeles' labels: - - "dependencies" - - "github-actions" - - "no-changelog" + - 'dependencies' + - 'github-actions' + - 'no-changelog' - package-ecosystem: github-actions - directory: "/.github/actions" + directory: '/.github/actions' schedule: interval: monthly day: monday - time: "09:00" - timezone: "America/Los_Angeles" - reviewers: - - fheinecke - - camscale + time: '09:00' + timezone: 'America/Los_Angeles' labels: - - "dependencies" - - "github-actions" - - "no-changelog" + - 'dependencies' + - 'github-actions' + - 'no-changelog' diff --git a/.github/workflows/assign.yaml b/.github/workflows/assign.yaml index b7f12e5b692a1..8769ace69f8d2 100644 --- a/.github/workflows/assign.yaml +++ b/.github/workflows/assign.yaml @@ -39,7 +39,7 @@ jobs: with: repository: gravitational/shared-workflows path: .github/shared-workflows - ref: main + ref: 664e788d45a7f56935cf63094b4fb52a41b12015 # workflows/v0.0.2 - name: Installing Go uses: actions/setup-go@v5 with: diff --git a/.github/workflows/backport.yaml b/.github/workflows/backport.yaml index 
c93de290739b5..80187158bb814 100644 --- a/.github/workflows/backport.yaml +++ b/.github/workflows/backport.yaml @@ -37,7 +37,7 @@ jobs: with: repository: gravitational/shared-workflows path: .github/shared-workflows - ref: main + ref: 664e788d45a7f56935cf63094b4fb52a41b12015 # workflows/v0.0.2 - name: Installing Go uses: actions/setup-go@v5 with: diff --git a/.github/workflows/bloat.yaml b/.github/workflows/bloat.yaml index 40903ba901b20..2afdc35697080 100644 --- a/.github/workflows/bloat.yaml +++ b/.github/workflows/bloat.yaml @@ -16,7 +16,7 @@ on: jobs: bloat_check: name: Bloat Check - runs-on: ubuntu-latest + runs-on: ubuntu-22.04-4core outputs: base_stats_file: ${{ steps.build_base.outputs.base_stats_file }} current_build_dir: ${{ steps.build_branch.outputs.build_dir }} @@ -41,7 +41,7 @@ jobs: with: repository: gravitational/shared-workflows path: .github/shared-workflows - ref: main + ref: 664e788d45a7f56935cf63094b4fb52a41b12015 # workflows/v0.0.2 - name: Setup base cache uses: actions/cache/restore@v3 @@ -93,7 +93,7 @@ jobs: with: repository: gravitational/shared-workflows path: .github/shared-workflows - ref: main + ref: 664e788d45a7f56935cf63094b4fb52a41b12015 # workflows/v0.0.2 - name: Build Binaries id: build_branch diff --git a/.github/workflows/build-macos.yaml b/.github/workflows/build-macos.yaml index 49715b555888a..3a0410d0d000f 100644 --- a/.github/workflows/build-macos.yaml +++ b/.github/workflows/build-macos.yaml @@ -19,7 +19,7 @@ jobs: build: name: Build on Mac OS if: ${{ !startsWith(github.head_ref, 'dependabot/') }} - runs-on: macos-13-xlarge + runs-on: macos-15-xlarge permissions: contents: read @@ -66,7 +66,7 @@ jobs: run: | rustup override set ${{ env.RUST_VERSION }} - - name: Install wasm-pack + - name: Install wasm-deps run: make ensure-wasm-deps - name: Build diff --git a/.github/workflows/changelog.yaml b/.github/workflows/changelog.yaml index a65b0c034b4d8..e3b57a48386ba 100644 --- a/.github/workflows/changelog.yaml +++ 
b/.github/workflows/changelog.yaml @@ -29,7 +29,7 @@ jobs: with: repository: gravitational/shared-workflows path: .github/shared-workflows - ref: main + ref: 664e788d45a7f56935cf63094b4fb52a41b12015 # workflows/v0.0.2 - name: Installing Go uses: actions/setup-go@v5 with: diff --git a/.github/workflows/check.yaml b/.github/workflows/check.yaml index 22d46fb8769dc..44aaa7dae62d1 100644 --- a/.github/workflows/check.yaml +++ b/.github/workflows/check.yaml @@ -47,7 +47,7 @@ jobs: with: repository: gravitational/shared-workflows path: .github/shared-workflows - ref: main + ref: 664e788d45a7f56935cf63094b4fb52a41b12015 # workflows/v0.0.2 - name: Installing Go uses: actions/setup-go@v5 with: diff --git a/.github/workflows/cla-assistant.yaml b/.github/workflows/cla-assistant.yaml index 626361f3bdae6..fb8e9bc8a60ee 100644 --- a/.github/workflows/cla-assistant.yaml +++ b/.github/workflows/cla-assistant.yaml @@ -7,6 +7,11 @@ on: types: - opened - synchronize # Run on any diff changes to the PR (e.g. code updates) + # merge_group will allow this workflow to be triggered when in merge queue. + # The job will end up being skipped due to conditionals below. 
This will be considered a "Success" + # for the required check as the workflow was triggered but the job was skipped due to conditionals + # See: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/collaborating-on-repositories-with-code-quality-features/troubleshooting-required-status-checks#handling-skipped-but-required-checks + merge_group: # explicitly configure permissions, in case your GITHUB_TOKEN workflow permissions are set to read-only in repository settings permissions: actions: read diff --git a/.github/workflows/codeql.yml b/.github/workflows/codeql.yml deleted file mode 100644 index 0d743abffb9e7..0000000000000 --- a/.github/workflows/codeql.yml +++ /dev/null @@ -1,51 +0,0 @@ -name: "CodeQL" - -on: - schedule: - - cron: '0 13 * * *' # At 1:00 PM UTC every day - pull_request: - paths: - - '.github/workflows/codeql.yml' - # merge_group is intentionally excluded, because we don't require this workflow - -jobs: - analyze: - name: Analyze - runs-on: ubuntu-22.04-16core - permissions: - actions: read - contents: read - security-events: write - - strategy: - fail-fast: false - matrix: - language: [ 'go', 'javascript' ] - - steps: - - name: Checkout repository - uses: actions/checkout@v4 - - - name: Set up Go - uses: actions/setup-go@v5 - with: - cache: false - go-version-file: go.mod - if: ${{ matrix.language == 'go' }} - - - name: Initialize the CodeQL tools for scanning - uses: github/codeql-action/init@v3 - with: - languages: ${{ matrix.language }} - queries: security-extended - timeout-minutes: 5 - - - name: Autobuild - uses: github/codeql-action/autobuild@v3 - timeout-minutes: 30 - - - name: Perform CodeQL Analysis - uses: github/codeql-action/analyze@v3 - with: - category: "/language:${{matrix.language}}" - timeout-minutes: 10 diff --git a/.github/workflows/dependency-review.yaml b/.github/workflows/dependency-review.yaml deleted file mode 100644 index 6e9a1e061b750..0000000000000 --- a/.github/workflows/dependency-review.yaml +++ 
/dev/null @@ -1,80 +0,0 @@ -name: Dependency Review - -on: - pull_request: - merge_group: - -jobs: - dependency-review: - uses: gravitational/shared-workflows/.github/workflows/dependency-review.yaml@main - permissions: - contents: read - pull-requests: write - with: - base-ref: > - ${{ - github.event_name == 'pull_request' && github.event.pull_request.base.sha || - github.event_name == 'merge_group' && github.event.merge_group.base_sha || - 'Invalid reference (workflow bug)' - }} - # 'GHSA-6xf3-5hp7-xqqg' is a false positive. That's an old Teleport Vuln, - # but because of the replace, the dependency cannot find the correct - # Teleport version. - allow-ghsas: 'GHSA-6xf3-5hp7-xqqg' - # IronRDP uses MIT/Apache-2.0 but slashes are not recognized by dependency review action - # - # @swc/core@1.11.24 uses Apache-2.0, but LicenseRef-scancode-unknown-license-reference is also - # detected. - # https://www.npmjs.com/package/@swc/core/v/1.11.24?activeTab=code - # https://scancode-licensedb.aboutcode.org/unknown-license-reference.html - # - # cookie-signature@1.2.2 uses MIT, but LicenseRef-scancode-unknown-license-reference is also - # detected. 
- # https://www.npmjs.com/package/cookie-signature/v/1.2.2?activeTab=code - allow-dependencies-licenses: >- - pkg:cargo/ironrdp-cliprdr, - pkg:cargo/ironrdp-core, - pkg:cargo/ironrdp-async, - pkg:cargo/ironrdp-connector, - pkg:cargo/ironrdp-displaycontrol, - pkg:cargo/ironrdp-dvc, - pkg:cargo/ironrdp-error, - pkg:cargo/ironrdp-graphics, - pkg:cargo/ironrdp-pdu, - pkg:cargo/ironrdp-rdpdr, - pkg:cargo/ironrdp-rdpsnd, - pkg:cargo/ironrdp-session, - pkg:cargo/ironrdp-svc, - pkg:cargo/ironrdp-tokio, - pkg:cargo/ironrdp-tls, - pkg:cargo/asn1-rs, - pkg:cargo/asn1-rs-derive, - pkg:cargo/asn1-rs-impl, - pkg:cargo/curve25519-dalek-derive, - pkg:cargo/der-parser, - pkg:cargo/icu_collections, - pkg:cargo/icu_locid, - pkg:cargo/icu_locid_transform, - pkg:cargo/icu_locid_transform_data, - pkg:cargo/icu_normalizer, - pkg:cargo/icu_normalizer_data, - pkg:cargo/icu_properties, - pkg:cargo/icu_properties_data, - pkg:cargo/icu_provider, - pkg:cargo/icu_provider_macros, - pkg:cargo/litemap, - pkg:cargo/ring, - pkg:cargo/sspi, - pkg:cargo/tokio-boring, - pkg:cargo/tokio-rustls, - pkg:cargo/writeable, - pkg:cargo/yoke, - pkg:cargo/yoke-derive, - pkg:cargo/zerofrom, - pkg:cargo/zerofrom-derive, - pkg:cargo/zerovec, - pkg:cargo/zerovec-derive, - pkg:npm/cspell/dict-en-common-misspellings, - pkg:npm/swc/core, - pkg:npm/cookie-signature, - pkg:npm/prettier diff --git a/.github/workflows/dismiss.yaml b/.github/workflows/dismiss.yaml index b595a7c7411bb..63d93f95ce4b8 100644 --- a/.github/workflows/dismiss.yaml +++ b/.github/workflows/dismiss.yaml @@ -43,7 +43,7 @@ jobs: with: repository: gravitational/shared-workflows path: .github/shared-workflows - ref: main + ref: 664e788d45a7f56935cf63094b4fb52a41b12015 # workflows/v0.0.2 - name: Installing Go uses: actions/setup-go@v5 with: diff --git a/.github/workflows/doc-tests.yaml b/.github/workflows/doc-tests.yaml index ba6fd1ca041b9..b4011d4c5c6a7 100644 --- a/.github/workflows/doc-tests.yaml +++ b/.github/workflows/doc-tests.yaml @@ -69,37 
+69,32 @@ jobs: with: repository: gravitational/shared-workflows path: shared-workflows + ref: 6f388765b8ce993721502083c74cb9af1fc5658f - name: Install Go - uses: actions/setup-go@v5 + uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0 with: go-version: 'stable' + cache-dependency-path: shared-workflows/bot/go.sum - name: Ensure docs changes include redirects env: TOKEN: ${{ steps.generate_token.outputs.token }} REVIEWERS: ${{ secrets.reviewers }} run: cd shared-workflows/bot && go run main.go -workflow=docpaths -token="${TOKEN}" -teleport-path="${GITHUB_WORKSPACE}/teleport" -reviewers="${REVIEWERS}" - # Cache node_modules. Unlike the example in the actions/cache repo, this - # caches the node_modules directory instead of the yarn cache. This is - # because yarn needs to build fresh packages even when it copies files - # from the yarn cache into node_modules. - # See: - # https://github.com/actions/cache/blob/main/examples.md#node---yarn - - uses: actions/cache@v4 - id: yarn-cache # use this to check for `cache-hit` (`steps.yarn-cache.outputs.cache-hit != 'true'`) + + - name: Install Node.js + uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0 with: - path: '${{ github.workspace }}/docs/node_modules' - key: ${{ runner.os }}-yarn-${{ hashFiles(format('{0}/docs/yarn.lock', github.workspace)) }} - restore-keys: | - ${{ runner.os }}-yarn- + node-version: 22 + cache: 'yarn' + cache-dependency-path: '${{ github.workspace }}/docs/yarn.lock' - name: Install docs site dependencies working-directory: docs - if: ${{ steps.yarn-cache.outputs.cache-hit != 'true' }} # Prevent occasional `yarn install` executions that run indefinitely timeout-minutes: 10 - run: yarn install + run: yarn install --frozen-lockfile - name: Prepare docs site configuration working-directory: docs diff --git a/.github/workflows/docs-amplify.yaml b/.github/workflows/docs-amplify.yaml index 6eff40996d143..e5f285093038a 100644 --- 
a/.github/workflows/docs-amplify.yaml +++ b/.github/workflows/docs-amplify.yaml @@ -23,8 +23,8 @@ jobs: role-to-assume: ${{ vars.IAM_ROLE }} - name: Create Amplify preview environment - uses: gravitational/shared-workflows/tools/amplify-preview@tools/amplify-preview/v0.0.1 - continue-on-error: true + uses: gravitational/shared-workflows/tools/amplify-preview@664e788d45a7f56935cf63094b4fb52a41b12015 # tools/amplify-preview/v0.0.2 + id: amplify_preview with: app_ids: ${{ vars.AMPLIFY_APP_IDS }} create_branches: "true" @@ -37,6 +37,7 @@ jobs: ERR_TITLE: Teleport Docs preview build failed ERR_MESSAGE: >- Please refer to the following documentation for help: https://www.notion.so/goteleport/How-to-Amplify-deployments-162fdd3830be8096ba72efa1a49ee7bc?pvs=4 + Execution info: ${{ steps.amplify_preview.outputs.amplify_app_id }} ${{ steps.amplify_preview.outputs.amplify_branch_name }} ${{ steps.amplify_preview.outputs.amplify_job_id }} run: | echo ::error title=$ERR_TITLE::$ERR_MESSAGE exit 1 diff --git a/.github/workflows/flaky-tests.yaml b/.github/workflows/flaky-tests.yaml index b513fcf170e86..68be7043815d6 100644 --- a/.github/workflows/flaky-tests.yaml +++ b/.github/workflows/flaky-tests.yaml @@ -70,7 +70,7 @@ jobs: with: repository: gravitational/shared-workflows path: .github/shared-workflows - ref: main + ref: 664e788d45a7f56935cf63094b4fb52a41b12015 # workflows/v0.0.2 - name: Find excluded tests id: find_excluded @@ -79,7 +79,7 @@ jobs: - name: Run base difftest uses: ./.github/actions/difftest with: - flags: --skip="${{ steps.find_excluded.outputs.FLAKE_SKIP }}" -e "integrations/operator/**/*" -e "integrations/terraform/**/*" -e "integrations/event-handler/**/*" -e "tool/tsh/**/*" -e "integration/**/*" -e "build.assets/**/*" -e "lib/auth/webauthncli/**/*" -e "lib/auth/touchid/**/*" -e "api/**/*" -e "examples/teleport-usage/**/*" -e "integrations/access/**" -e "integrations/lib/**" -e "integrations/lib/backoff/backoff_test.go" -e "e2e/**/*" + flags: --skip="${{ 
steps.find_excluded.outputs.FLAKE_SKIP }}" -e "integrations/operator/**/*" -e "integrations/terraform/**/*" -e "integrations/terraform-mwi/**/*" -e "integrations/event-handler/**/*" -e "tool/tsh/**/*" -e "integration/**/*" -e "build.assets/**/*" -e "lib/auth/webauthncli/**/*" -e "lib/auth/touchid/**/*" -e "api/**/*" -e "examples/teleport-usage/**/*" -e "integrations/access/**" -e "integrations/lib/**" -e "integrations/lib/backoff/backoff_test.go" -e "e2e/**/*" target: test-go-unit - name: Run touch-id difftest diff --git a/.github/workflows/kube-integration-tests-non-root.yaml b/.github/workflows/kube-integration-tests-non-root.yaml index 95efe75f701d8..b7b81729eeb4a 100644 --- a/.github/workflows/kube-integration-tests-non-root.yaml +++ b/.github/workflows/kube-integration-tests-non-root.yaml @@ -92,33 +92,13 @@ jobs: - name: Build Alpine image with webserver run: | - - export SHORT_VERSION=${ALPINE_VERSION%.*} - - # download the alpine image - # store the files in the fixtures/alpine directory + # create the alpine image in the fixtures/alpine directory # to avoid passing all the repository files to the docker build context. 
cd ./fixtures/alpine - - # download alpine minirootfs and signature - curl -fSsLO https://dl-cdn.alpinelinux.org/alpine/v$SHORT_VERSION/releases/x86_64/alpine-minirootfs-$ALPINE_VERSION-x86_64.tar.gz - curl -fSsLO https://dl-cdn.alpinelinux.org/alpine/v$SHORT_VERSION/releases/x86_64/alpine-minirootfs-$ALPINE_VERSION-x86_64.tar.gz.asc - curl -fSsLO https://dl-cdn.alpinelinux.org/alpine/v$SHORT_VERSION/releases/x86_64/alpine-minirootfs-$ALPINE_VERSION-x86_64.tar.gz.sha256 - - # verify the checksum - sha256sum -c alpine-minirootfs-$ALPINE_VERSION-x86_64.tar.gz.sha256 - - # verify the signature - gpg --import ./alpine-ncopa.at.alpinelinux.org.asc - gpg --verify ./alpine-minirootfs-$ALPINE_VERSION-x86_64.tar.gz.asc ./alpine-minirootfs-$ALPINE_VERSION-x86_64.tar.gz - - # build the webserver - CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o ./webserver ./webserver.go - - docker build -t alpine-webserver:v1 --build-arg=ALPINE_VERSION=$ALPINE_VERSION -f ./Dockerfile . + make SHORT_VERSION=${ALPINE_VERSION%.*} ALPINE_VERSION=${ALPINE_VERSION} build-image # load the image into the kind cluster - kind load docker-image alpine-webserver:v1 + make load-image cd - diff --git a/.github/workflows/label.yaml b/.github/workflows/label.yaml index b098f8132819b..cb41586fcfcad 100644 --- a/.github/workflows/label.yaml +++ b/.github/workflows/label.yaml @@ -39,7 +39,7 @@ jobs: with: repository: gravitational/shared-workflows path: .github/shared-workflows - ref: main + ref: 664e788d45a7f56935cf63094b4fb52a41b12015 # workflows/v0.0.2 - name: Installing Go uses: actions/setup-go@v5 with: diff --git a/.github/workflows/lint-ui-bypass.yaml b/.github/workflows/lint-ui-bypass.yaml deleted file mode 100644 index 8637f6ebd08d5..0000000000000 --- a/.github/workflows/lint-ui-bypass.yaml +++ /dev/null @@ -1,47 +0,0 @@ -# This workflow is required to ensure that required Github check passes even if -# the actual "Lint UI" workflow skipped due to path filtering. 
Otherwise -# it will stay forever pending. -# -# See "Handling skipped but required checks" for more info: -# -# https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/troubleshooting-required-status-checks#handling-skipped-but-required-checks -# -# Note both workflows must have the same name. - -name: Lint UI -run-name: Lint UI - ${{ github.run_id }} - @${{ github.actor }} - -on: - pull_request: - paths-ignore: - - 'web/**' - - 'gen/proto/js/**' - - 'gen/proto/ts/**' - - 'package.json' - - 'pnpm-lock.yaml' - - 'Cargo.toml' - - 'Cargo.lock' - - 'tsconfig.json' - - 'tsconfig.node.json' - merge_group: - paths-ignore: - - 'web/**' - - 'gen/proto/js/**' - - 'gen/proto/ts/**' - - 'package.json' - - 'pnpm-lock.yaml' - - 'Cargo.toml' - - 'Cargo.lock' - - 'tsconfig.json' - - 'tsconfig.node.json' - -jobs: - lint: - name: Prettier, ESLint, & TSC - runs-on: ubuntu-latest - - permissions: - contents: none - - steps: - - run: 'echo "No changes to verify"' diff --git a/.github/workflows/lint-ui.yaml b/.github/workflows/lint-ui.yaml deleted file mode 100644 index 50022e8e5026b..0000000000000 --- a/.github/workflows/lint-ui.yaml +++ /dev/null @@ -1,59 +0,0 @@ -name: Lint UI -run-name: Lint UI - ${{ github.run_id }} - @${{ github.actor }} - -on: - pull_request: - paths: - - 'web/**' - - 'gen/proto/js/**' - - 'gen/proto/ts/**' - - 'package.json' - - 'pnpm-lock.yaml' - - 'Cargo.toml' - - 'Cargo.lock' - - 'tsconfig.json' - - 'tsconfig.node.json' - merge_group: - paths: - - 'web/**' - - 'gen/proto/js/**' - - 'gen/proto/ts/**' - - 'package.json' - - 'pnpm-lock.yaml' - - 'Cargo.toml' - - 'Cargo.lock' - - 'tsconfig.json' - - 'tsconfig.node.json' - -jobs: - lint: - name: Prettier, ESLint, & TSC - runs-on: ubuntu-latest - container: - image: ghcr.io/gravitational/teleport-buildbox:teleport18 - steps: - - name: Checkout OSS Teleport - uses: actions/checkout@v4 - - - name: Print Node version - run: | - node 
--version - - - name: Install JS dependencies - run: | - pnpm install --frozen-lockfile - - - name: Build WASM - run: pnpm build-wasm - - - name: Run Type Check - run: pnpm type-check - - - name: Run lint - run: pnpm lint - - - name: Run Storybook smoke test - run: pnpm storybook-smoke-test - - - name: Lint licenses - run: make lint-license diff --git a/.github/workflows/lint.yaml b/.github/workflows/lint.yaml index 540fd1a522feb..949e70f4ce744 100644 --- a/.github/workflows/lint.yaml +++ b/.github/workflows/lint.yaml @@ -1,10 +1,10 @@ -name: Lint (Go) -run-name: Lint (Go and Rust) +name: Lint on: pull_request: merge_group: env: GOFLAGS: '-buildvcs=false' + GOEXPERIMENT: 'synctest' jobs: changes: @@ -17,6 +17,7 @@ jobs: has_rust: ${{ steps.changes.outputs.has_rust }} has_proto: ${{ steps.changes.outputs.has_proto }} has_rfd: ${{ steps.changes.outputs.has_rfd }} + has_ui: ${{ steps.changes.outputs.has_ui }} steps: - name: Checkout if: ${{ github.event_name == 'merge_group' }} @@ -68,9 +69,18 @@ jobs: # rendered doc changes - 'docs/pages/admin-guides/**' - 'docs/pages/enroll-resources/**' - - 'docs/pages/reference/operator-resources/**' - - 'docs/pages/reference/terraform-provider/**' + - 'docs/pages/reference/infrastructure-as-code/operator-resources/**' + - 'docs/pages/reference/infrastructure-as-code/terraform-provider/**' - 'examples/chart/teleport-cluster/charts/teleport-operator/operator-crds' + has_ui: + - '.github/workflows/lint.yaml' + - 'web/**' + - 'gen/proto/js/**' + - 'gen/proto/ts/**' + - 'package.json' + - 'pnpm-lock.yaml' + - 'tsconfig.json' + - 'tsconfig.node.json' lint-go: name: Lint (Go) @@ -155,6 +165,9 @@ jobs: - name: Check if go generated files are up to date run: make go-generate-up-to-date + - name: Check if test symbols are included in binaries + run: make lint-test-symbols + lint-rust: name: Lint (Rust) runs-on: ubuntu-22.04 @@ -229,6 +242,10 @@ jobs: # We have to add the current directory as a safe directory or else git commands will not 
work as expected. run: git config --global --add safe.directory $(realpath .) && make protos-up-to-date/host + - name: Check if resource reference docs are up to date + # We have to add the current directory as a safe directory or else git commands will not work as expected. + run: git config --global --add safe.directory $(realpath .) && make resource-docs-up-to-date + - name: Check if Operator CRDs are up to date # We have to add the current directory as a safe directory or else git commands will not work as expected. run: git config --global --add safe.directory $(realpath .) && make crds-up-to-date @@ -236,7 +253,12 @@ jobs: - name: Check if Terraform resources are up to date # We have to add the current directory as a safe directory or else git commands will not work as expected. # The protoc-gen-terraform version must match the version in integrations/terraform/Makefile - run: git config --global --add safe.directory $(realpath .) && go install github.com/gravitational/protoc-gen-terraform/v3@v3.0.2 && make terraform-resources-up-to-date + run: git config --global --add safe.directory $(realpath .) && go install github.com/gravitational/protoc-gen-terraform/v3@v3.0.3 && make terraform-resources-up-to-date + + - name: Check if the Access Monitoring reference is up to date + # We have to add the current directory as a safe directory or else git commands will not work as expected. + # The protoc-gen-terraform version must match the version in integrations/terraform/Makefile + run: git config --global --add safe.directory $(realpath .) 
&& make access-monitoring-reference-up-to-date lint-rfd: name: Lint (RFD) @@ -248,7 +270,7 @@ jobs: contents: read container: - image: ghcr.io/gravitational/teleport-buildbox:teleport17 + image: ghcr.io/gravitational/teleport-buildbox:teleport18 steps: - name: Checkout @@ -259,3 +281,54 @@ jobs: - name: Check spelling run: pnpm cspell -c ./rfd/cspell.json rfd + + lint-ui: + name: Prettier, ESLint, & TSC + needs: changes + if: ${{ !startsWith(github.head_ref, 'dependabot/') && needs.changes.outputs.has_ui == 'true' }} + runs-on: ubuntu-latest + + permissions: + contents: read + + container: + image: ghcr.io/gravitational/teleport-buildbox:teleport18 + + steps: + - name: Checkout OSS Teleport + uses: actions/checkout@v4 + + - name: Print Node version + run: | + node --version + + - name: Install JS dependencies + run: | + pnpm install --frozen-lockfile + + - name: Install WASM deps + run: make ensure-wasm-deps + + - name: Build WASM + run: pnpm build-wasm + + - name: Run Type Check + run: pnpm type-check + + - name: Run lint + run: pnpm lint + + - name: Run Storybook smoke test + run: pnpm storybook-smoke-test + + - name: Lint licenses + run: make lint-license + + - name: Check icons + # We have to add the current directory as a safe directory or else git commands will not work as expected. + run: git config --global --add safe.directory $(realpath .) && make icons-up-to-date + + - name: Check audit event reference docs + # We have to add the current directory as a safe directory or else git commands will not work as expected. + run: git config --global --add safe.directory $(realpath .) 
&& make audit-event-reference-up-to-date + diff --git a/.github/workflows/terraform-lint.yaml b/.github/workflows/terraform-lint.yaml index 090212fb1e629..0daa7025c5085 100644 --- a/.github/workflows/terraform-lint.yaml +++ b/.github/workflows/terraform-lint.yaml @@ -15,7 +15,7 @@ on: jobs: terraform-lint: - uses: gravitational/shared-workflows/.github/workflows/terraform-lint.yaml@main + uses: gravitational/shared-workflows/.github/workflows/terraform-lint.yaml@664e788d45a7f56935cf63094b4fb52a41b12015 # workflows/v0.0.2 with: # TODO: Fix Terraform linting issues and stop using force to pass the job. tflint_force: true @@ -24,4 +24,3 @@ jobs: actions: read contents: read pull-requests: write - security-events: write diff --git a/.github/workflows/trivy.yaml b/.github/workflows/trivy.yaml deleted file mode 100644 index 7b207844a5850..0000000000000 --- a/.github/workflows/trivy.yaml +++ /dev/null @@ -1,13 +0,0 @@ -name: Trivy - -on: - pull_request: - merge_group: - -jobs: - trivy: - uses: gravitational/shared-workflows/.github/workflows/trivy.yaml@main - permissions: - actions: read - contents: read - security-events: write diff --git a/.github/workflows/unit-tests-integrations.yaml b/.github/workflows/unit-tests-integrations.yaml index a8c901c5de347..60d0dd3cd55b6 100644 --- a/.github/workflows/unit-tests-integrations.yaml +++ b/.github/workflows/unit-tests-integrations.yaml @@ -70,6 +70,10 @@ jobs: run: make test-terraform-provider timeout-minutes: 15 + - name: Run terraform provider OSS tests + run: make test-terraform-provider-mwi + timeout-minutes: 15 + - name: Run integrations event-handler tests run: make test-event-handler-integrations timeout-minutes: 10 diff --git a/.gitignore b/.gitignore index ebec5a42720af..485adc0f6faa7 100644 --- a/.gitignore +++ b/.gitignore @@ -25,6 +25,7 @@ default.etcd out build !/web/packages/build +/web/packages/shared/libs/ironrdp/pkg/** *.o *.a *.so @@ -118,3 +119,7 @@ msgfile/ # Dockerized builds generate .pnpm-store in the 
root, so ignore it .pnpm-store + +# Artifacts generated by gotestsum --jsonfile=unit-tests-foo.json --junitfile=unit-tests-foo.xml +unit-tests*.xml +unit-tests*.json diff --git a/.golangci.yml b/.golangci.yml index 143943ac7fb8d..67f0b78a5fab3 100644 --- a/.golangci.yml +++ b/.golangci.yml @@ -94,7 +94,6 @@ linters: - '!**lib/services/compare.go' - '!**/lib/services/local/access_list.go' - '!**/lib/services/local/users.go' - - '!**/lib/services/server.go' - '!**/lib/services/user.go' deny: - pkg: github.com/google/go-cmp/cmp @@ -113,6 +112,14 @@ linters: deny: - pkg: github.com/gravitational/teleport/integration desc: integration test should not be imported outside of intergation tests + - pkg: github.com/gravitational/teleport/lib/utils/mcptest + desc: testing packages should not be imported outside of _test.go files + - pkg: github.com/testcontainers/testcontainers-go + desc: testing packages should not be imported outside of _test.go files + - pkg: github.com/testcontainers/testcontainers-go/modules/postgres + desc: testing packages should not be imported outside of _test.go files + - pkg: github.com/testcontainers/testcontainers-go/modules/mysql + desc: testing packages should not be imported outside of _test.go files logging: deny: - pkg: github.com/sirupsen/logrus @@ -159,12 +166,70 @@ linters: desc: use "github.com/gravitational/teleport/lib/msgraph" instead - pkg: github.com/cloudflare/cfssl desc: use "crypto" or "x/crypto" instead + - pkg: github.com/gogo/protobuf/jsonpb + desc: gogoproto jsonpb has inconsistent interactions with any + message that directly or indirectly makes use of + gogoproto.casttype, and will happily violate the protobuf json + encoding spec in those situations. It should only be used for + existing marshaling and unmarshaling that can't easily be + migrated away from jsonpb, but it should not be used by new + code. 
If you need to encode and decode a gogoproto-generated + message in protobuf json, use + google.golang.org/protobuf/encoding/protojson together with + google.golang.org/protobuf/protoadapt instead, or, better yet, + see if you can move the code generation for that message to the + modern protoc-gen-go. oidc: deny: - pkg: github.com/coreos/go-oidc desc: 'github.com/zitadel/oidc/v3 should be used instead' - pkg: github.com/zitadel/oidc$ desc: 'github.com/zitadel/oidc/v3 should be used instead' + synctest: + deny: + - pkg: testing/synctest + desc: 'use "github.com/gravitational/teleport/lib/utils/testutils/synctest" instead' + test_packages: + files: + - '!$test' + - '!**/integrations/operator/controllers/resources/testlib/**.go' + - '!**/e/lib/aws/identitycenter/test/**' + - '!**/e/lib/devicetrust/testenv/**' + - '!**/e/lib/jamf/testenv/**' + - '!**/e/lib/intune/testenv/**' + - '!**/e/lib/idp/saml/testenv/**' + - '!**/e/lib/operatortest/**' + - '!**/e/tests/**' + - '!**/integration/db/fixture.go' + - '!**/integration/helpers/**' + - '!**/integrations/lib/testing/**' + deny: + - pkg: github.com/gravitational/teleport/e/lib/aws/identitycenter/test + desc: testing packages should not be imported outside of _test.go files + - pkg: github.com/gravitational/teleport/e/lib/idp/operatortest + desc: testing packages should not be imported outside of _test.go files + - pkg: github.com/gravitational/teleport/lib/events/tests + desc: testing packages should not be imported outside of _test.go files + - pkg: github.com/gravitational/teleport/lib/teleterm/gatewaytest + desc: testing packages should not be imported outside of _test.go files + - pkg: github.com/gravitational/teleport/lib/utils/testutils + desc: testing packages should not be imported outside of _test.go files + - pkg: github.com/gravitational/teleport/lib/test + desc: testing packages should not be imported outside of _test.go files + - pkg: 
github.com/gravitational/teleport/integrations/operator/controllers/resources/testlib + desc: testing packages should not be imported outside of _test.go files + - pkg: github.com/gravitational/teleport/tool/teleport/testenv + desc: testing packages should not be imported outside of _test.go files + - pkg: github.com/gravitational/teleport/lib/srv/db/redis/testing + desc: testing packages should not be imported outside of _test.go files + - pkg: github.com/gravitational/teleport/lib/srv/db/spanner/testing + desc: testing packages should not be imported outside of _test.go files + - pkg: github.com/gravitational/teleport/lib/modules/modulestest + desc: testing packages should not be imported outside of _test.go files + - pkg: github.com/gravitational/teleport/lib/cryptosuites/cryptosuitestest + desc: testing packages should not be imported outside of _test.go files + - pkg: github.com/gravitational/teleport/lib/auth/authtest + desc: testing packages should not be imported outside of _test.go files testify: files: - '!$test' @@ -174,21 +239,14 @@ linters: - '!**/e/lib/idp/saml/testenv/**' - '!**/e/lib/operatortest/**' - '!**/e/tests/**' - - '!**/lib/automaticupgrades/basichttp/servermock.go' - '!**/lib/auth/helpers.go' - '!**/lib/auth/keystore/testhelpers.go' - - '!**/lib/auth/test/**' - '!**/lib/backend/test/**' - - '!**/lib/events/athena/test.go' - '!**/lib/events/test/**' - - '!**/lib/kube/proxy/utils_testing.go' - '!**/lib/services/suite/**' - - '!**/lib/srv/mock.go' - - '!**/lib/srv/db/redis/test.go' - - '!**/lib/tbot/workloadidentity/workloadattest/podman/test_server.go' - '!**/lib/tbot/workloadidentity/workloadattest/sigstore/sigstoretest/sigstoretest.go' - '!**/lib/teleterm/gatewaytest/**' - - '!**/lib/utils/testhelpers.go' + - '!**/lib/utils/mcptest/**' - '!**/lib/utils/testutils/**' - '!**/integration/appaccess/fixtures.go' - '!**/integration/appaccess/jwt.go' @@ -212,7 +270,7 @@ linters: - '!**/integrations/lib/testing/integration/authhelper.go' - 
'!**/integrations/lib/testing/integration/suite.go' - '!**/integrations/operator/controllers/resources/testlib/**' - - '!**/tool/teleport/testenv/**' + - '!**/e/lib/intune/testenv/**' deny: - pkg: github.com/stretchr/testify desc: testify should not be imported outside of test code @@ -226,6 +284,7 @@ linters: - '!**/e/lib/devicetrust/storage/storage.go' - '!**/e/lib/idp/saml/testenv/**' - '!**/e/lib/jamf/testenv/**' + - '!**/e/lib/intune/testenv/**' - '!**/e/lib/okta/api/oktaapitest/**' - '!**/e/lib/operatortest/**' - '!**/e/tests/**' @@ -234,30 +293,23 @@ linters: - '!**/integrations/access/msteams/testlib/**' - '!**/integrations/access/slack/testlib/**' - '!**/integrations/operator/controllers/resources/testlib/**' + - '!**/lib/auth/authtest/**' - '!**/lib/auth/helpers.go' - '!**/lib/auth/keystore/testhelpers.go' - - '!**/lib/auth/test/**' - - '!**/lib/automaticupgrades/basichttp/servermock.go' - '!**/lib/backend/test/**' - '!**/lib/cryptosuites/precompute.go' - '!**/lib/cryptosuites/internal/rsa/rsa.go' - '!**/lib/events/test/**' - - '!**/lib/events/athena/test.go' - - '!**/lib/fixtures/**' - - '!**/lib/kube/proxy/utils_testing.go' + - '!**/lib/modules/modulestest/**' - '!**/lib/modules/test.go' - '!**/lib/service/service.go' - '!**/lib/services/local/users.go' - '!**/lib/services/suite/**' - - '!**/lib/srv/mock.go' - - '!**/lib/srv/db/redis/test.go' - - '!**/lib/tbot/workloadidentity/workloadattest/podman/test_server.go' - '!**/lib/tbot/workloadidentity/workloadattest/sigstore/sigstoretest/sigstoretest.go' - '!**/lib/teleterm/gatewaytest/**' - '!**/lib/utils/cli.go' - - '!**/lib/utils/testhelpers.go' + - '!**/lib/utils/mcptest/**' - '!**/lib/utils/testutils/**' - - '!**/tool/teleport/testenv/**' deny: - pkg: testing desc: testing should not be imported outside of tests @@ -280,7 +332,7 @@ linters: - pattern: ^protojson\.Unmarshal$ msg: use protojson.UnmarshalOptions and consider enabling DiscardUnknown - pattern: 
^jsonpb\.(?:Unmarshal|UnmarshalString|UnmarshalNext)$ - msg: use jsonpb.Unmarshaler and consider enabling AllowUnknownFields + msg: use protojson if possible, or use jsonpb.Unmarshaler and consider enabling AllowUnknownFields misspell: locale: US nolintlint: diff --git a/.npmrc b/.npmrc index 98799a6a4c549..70ef8264dad13 100644 --- a/.npmrc +++ b/.npmrc @@ -3,13 +3,3 @@ update-notifier=false # ESLint editor integrations expect ESLint's binary to be in the root node_modules. public-hoist-pattern[]=eslint -# pnpm v10.3.0+ ships with node-gyp@11.1.0. That version of node-gyp has a bug which prevents -# node-pty, one of Connect deps, from being built on Windows (https://github.com/nodejs/node-gyp/issues/3126). -# -# To work around this, we install node-gyp@11.0.0 as a dev dep and then tell node-pty through the below -# config option to use node-gyp from our node_modules rather than the one bundled with pnpm. -# -# During pnpm install, the script is executed from a directory such as -# /node_modules/.pnpm/node-pty@1.1.0-beta14/node_modules/node-pty -# so we escape five directories and then give a path to node-gyp.js. -node_gyp=../../../../../node_modules/node-gyp/bin/node-gyp.js diff --git a/BUILD_macos.md b/BUILD_macos.md index c1bfd6916f88e..301a0220912d8 100644 --- a/BUILD_macos.md +++ b/BUILD_macos.md @@ -81,8 +81,6 @@ PRs with corrections and updates are welcome! * `brew install node corepack` * `corepack enable pnpm` * The `Rust` and `Cargo` version in [build.assets/Makefile](https://github.com/gravitational/teleport/blob/master/build.assets/versions.mk#L11) (search for `RUST_VERSION`) are required. 
- * The [`wasm-pack`](https://github.com/rustwasm/wasm-pack) version in [build.assets/Makefile](https://github.com/gravitational/teleport/blob/master/build.assets/versions.mk#L12) (search for `WASM_PACK_VERSION`) is required: - `curl https://rustwasm.github.io/wasm-pack/installer/init.sh -sSf | sh` ##### Local Tests Dependencies diff --git a/CHANGELOG.md b/CHANGELOG.md index 4db7fc04ce5f1..037fc667cc5a9 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,10 +1,948 @@ # Changelog -## 18.0.0 (xx/xx/xx) +## 18.6.2 (12/26/25) + +This is a private security release. The changelog will be publicly announced in a later version. + +## 18.6.1 (12/24/25) + +* Fixed an issue preventing text editors in the Web UI from allowing edits. [#62488](https://github.com/gravitational/teleport/pull/62488) +* Acknowledging a cluster alert no longer requires the create permission. [#62468](https://github.com/gravitational/teleport/pull/62468) +* Fixed service health reason formatting for bot instances in the Web UI. [#62328](https://github.com/gravitational/teleport/pull/62328) +* Fixed an issue causing a ref type of "any" to be added when editing GitHub or GitLab join tokens in the Web UI. [#62487](https://github.com/gravitational/teleport/pull/62487) + +## 18.6.0 (12/22/25) + +### Identifier-first login enhancements +Teleport now automatically passes the username to the identity provider when performing identifier-first login with OIDC or SAML IdPs. + +### GitHub Actions Kubernetes Wizard +Teleport now ships with a new guided flow for setting up GitHub Actions workflows that connect to Teleport-protected Kubernetes clusters without secrets. + +### Other changes and improvements + +* Fixed an unspecified proxy address breaking moderated SFTP when mixing IPv4 and IPv6. [#62296](https://github.com/gravitational/teleport/pull/62296) +* Added a full configuration file for the `teleport-plugin-event-handler` Helm chart.
[#62280](https://github.com/gravitational/teleport/pull/62280) +* Added full environment variable configuration for the event handler CLI. [#62280](https://github.com/gravitational/teleport/pull/62280) +* Added support for extraArgs/extraEnv/extraLabels patterns for the `teleport-plugin-event-handler` Helm chart. [#62266](https://github.com/gravitational/teleport/pull/62266) +* Fixed an issue where AltGr key combinations did not work correctly in remote desktop sessions. [#62198](https://github.com/gravitational/teleport/pull/62198) +* Added `annotations` support for the `teleport-plugin-event-handler` Helm chart. [#62188](https://github.com/gravitational/teleport/pull/62188) +* Added a new global configuration section, `auth_connection_config`, allowing users to configure the backoff behavior for Proxy and Agent instances connecting to the Auth Service. [#62139](https://github.com/gravitational/teleport/pull/62139) +* Fixed a potential SSRF vulnerability in the Azure join method implementation. [#62406](https://github.com/gravitational/teleport/pull/62406) +* Support for v8 roles has been added to the Terraform provider. [#62380](https://github.com/gravitational/teleport/pull/62380) +* Added support for selecting Kube agents as Managed Updates v2 canaries. Important: the default update group is corrected to "default" from "stable/cloud". [#62211](https://github.com/gravitational/teleport/pull/62211) + +## 18.5.1 (12/12/25) + +* Fixed Teleport instances running the Auth Service sometimes not becoming ready during initialization. [#62194](https://github.com/gravitational/teleport/pull/62194) +* Fixed an Auth Service bug causing the event handler to miss up to 1 event every 5 minutes when storing audit events in S3. [#62150](https://github.com/gravitational/teleport/pull/62150) +* Fixed a bug where the event handler crashed on malformed session events.
[#62141](https://github.com/gravitational/teleport/pull/62141) +* Updated the event handler to ingest missing session recordings at twice the `concurrency` instead of only 10 sessions at a time. [#62141](https://github.com/gravitational/teleport/pull/62141) +* Changed "tsh --mfa-mode=cross-platform" to favor security keys on current Windows versions. [#62134](https://github.com/gravitational/teleport/pull/62134) +* Fixed "the client connection is closing" error happening under certain conditions in Teleport Connect when connecting to resources with per-session MFA enabled. [#62127](https://github.com/gravitational/teleport/pull/62127) +* Improved the detail of error messages for the `identity` service in `tbot`. [#62120](https://github.com/gravitational/teleport/pull/62120) +* Teleport Connect now supports expanding `~/` home-directory paths in the configuration file. [#62104](https://github.com/gravitational/teleport/pull/62104) +* Added support for the `--format` flag in `tsh request search`. [#62099](https://github.com/gravitational/teleport/pull/62099) +* Fixed a bug where the event handler `types` filter was ignored for Teleport clients using the Athena storage backend. [#62082](https://github.com/gravitational/teleport/pull/62082) +* Fixed intermittent issues with VNet on Windows when other NRPT rules from GPOs are present under `HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient\DnsPolicyConfig`. [#62052](https://github.com/gravitational/teleport/pull/62052) +* Added Terraform provider support for `teleport_integration` resources. [#62040](https://github.com/gravitational/teleport/pull/62040) +* DiscoveryConfig resources can now be managed via the Teleport Terraform Provider. [#62034](https://github.com/gravitational/teleport/pull/62034) +* Reduced memory consumption of the Application service. [#62014](https://github.com/gravitational/teleport/pull/62014) +* Added support for listing application session recordings in `tsh recording ls` and the Web UI.
[#62010](https://github.com/gravitational/teleport/pull/62010) +* Fixed a Web UI issue where the copy button for the session ID did not work for non-interactive session recordings. [#62010](https://github.com/gravitational/teleport/pull/62010) +* Prevented stuck `teleport-cluster` Helm chart rollouts in small Kubernetes clusters. Removed resource requests from configuration check hooks. [#62003](https://github.com/gravitational/teleport/pull/62003) +* Fixed static keypair creation in `tbot keypair create` when the `--static-key-path` flag is used. [#61947](https://github.com/gravitational/teleport/pull/61947) +* Re-enabled MySQL database health checks. MySQL health checks will now authenticate to the database as a user, rather than TCP dialing and closing the connection, to prevent MySQL from automatically blocking the Teleport database service instance host. The health check user name defaults to "teleport-healthchecker". [#61942](https://github.com/gravitational/teleport/pull/61942) +* Added support for templating `secret_labels`, and the `{{.Labels}}` template variable, to tbot's `kubernetes/argo-cd` output. [#61876](https://github.com/gravitational/teleport/pull/61876) + +Enterprise: +* Updated the AWS Identity Center integration sign-in start URL format to support AWS GovCloud accounts. +* Fixed a potential race where Okta assignments may never be cleaned up if the Okta integration is down while the assignment expires. +* Created a dedicated Access Automations feature page within the Web UI. +* Entra ID directory reconciler now overwrites user accounts created by the referenced SAML Auth Connector. + +## 18.5.0 (12/04/25) + +### Kubernetes support for Relay Service +The Relay Service now supports Kubernetes connections. + +### Shared state between tsh and Teleport Connect +Teleport Connect and tsh now share the same local state. Logins from one app will automatically be reflected in the other.
+ +### SCIM PATCH support in SailPoint integration +The Teleport SCIM server now natively supports PATCH operations to improve the reliability of bulk SCIM operations in integrations like SailPoint. + +### Other changes and improvements + +* Updated Go to 1.24.11. [#61953](https://github.com/gravitational/teleport/pull/61953) +* Added support for discovering EC2 instances in all regions, without enumerating them. Requires access to `account.ListRegions` in the IAM Role assumed by the Discovery Service. [#61924](https://github.com/gravitational/teleport/pull/61924) +* Fixed a bug where JWT-SVID timestamp claims would be represented using scientific notation. [#61921](https://github.com/gravitational/teleport/pull/61921) +* Fixed "SSH cert not found" errors in Teleport Connect. [#61846](https://github.com/gravitational/teleport/pull/61846) +* Added support for authenticating Azure resource discovery using Azure OIDC integrations. [#61830](https://github.com/gravitational/teleport/pull/61830) +* Fixed a bug in Proxy recording mode where Teleport Node sessions would result in duplicate audit events with a different session ID. [#61246](https://github.com/gravitational/teleport/pull/61246) +* Tuned the teleport-cluster, teleport-kube-agent, and teleport-relay Helm charts to reduce the probability of Teleport exceeding its memory limits and being OOM-killed. GOMEMLIMIT defaults to 90% of the configured memory limits. + +Enterprise: +* Added support for AWS Account name and ID labels (`teleport.dev/account-id`, `teleport.dev/account-name`) on AWS Identity Center resources (`aws_ic_account_assignment` and `aws_ic_account`). These labels improve compatibility with Access Monitoring Rules, allowing users to more easily target and audit AWS IC accounts. +* Updated the Access Automation Rules dialog to display rules in a paginated view. + +## 18.4.2 (12/01/25) + +* Fixed a bug causing high memory consumption in the Teleport Auth Service when clients were listing large resources.
[#61849](https://github.com/gravitational/teleport/pull/61849) +* Prevented data races when terminating interactive Kubernetes sessions. [#61818](https://github.com/gravitational/teleport/pull/61818) +* Fixed `tsh db connect` failing to connect to databases using separate ports configuration (non-TLS routing mode). [#61812](https://github.com/gravitational/teleport/pull/61812) +* Fixed a bug where Kubernetes App Discovery `poll_interval` was not set correctly. [#61791](https://github.com/gravitational/teleport/pull/61791) +* Fixed an issue that caused a failed upload of an encrypted session recording to block other recordings from uploading. [#61774](https://github.com/gravitational/teleport/pull/61774) +* Fixed relative path evaluation for SFTP in proxy recording mode. [#61760](https://github.com/gravitational/teleport/pull/61760) +* Fixed `tsh kube ls` showing deleted clusters. [#61742](https://github.com/gravitational/teleport/pull/61742) +* Fixed workload identity templating to support certain numeric values that previously gave an "expression did not evaluate to a string" error. [#61738](https://github.com/gravitational/teleport/pull/61738) +* Added a User Details view to the Web UI. [#61737](https://github.com/gravitational/teleport/pull/61737) +* Added a `--roles` flag for `tsh request search`, allowing users to list all requestable roles. This flag is mutually exclusive with `--kind`. [#61699](https://github.com/gravitational/teleport/pull/61699) +* Fixed the EC2 SSM Document setup script used in Enroll New Resource. [#61673](https://github.com/gravitational/teleport/pull/61673) +* Fixed AWS Console access when using AWS IAM Roles Anywhere or AWS OIDC integrations, when IP Pinning is enabled. [#61654](https://github.com/gravitational/teleport/pull/61654) +* Fixed "invalid name syntax" connection error for PostgreSQL auto-provisioned users with email usernames.
[#61631](https://github.com/gravitational/teleport/pull/61631) +* Auth readiness tuned to wait for cache initialization. [#61620](https://github.com/gravitational/teleport/pull/61620) +* Added the ability to update an existing Azure OIDC integration with `tctl`. [#61592](https://github.com/gravitational/teleport/pull/61592) + +Enterprise: +* Added Entra directory sync metrics. +* Improved the initial EntraID user and group synchronization time, reducing the time required for the first full sync. +* Prevented Trivy from reporting false positives when scanning the Teleport binaries. + +## 18.4.1 (11/20/25) + +* Fixed a bug that prevented searching audit log events in the web UI when using Athena audit storage. [#61603](https://github.com/gravitational/teleport/pull/61603) +* Prevented Trivy from reporting false positives when scanning the Teleport binaries. [#61539](https://github.com/gravitational/teleport/pull/61539) +* Added support for `tsh logout --proxy` (or `TELEPORT_PROXY` set) to work without the `--user` flag when one identity exists. [#61404](https://github.com/gravitational/teleport/pull/61404) +* Fixed web upload/download failure behind load balancers when the web listen address is unspecified. [#61393](https://github.com/gravitational/teleport/pull/61393) +* Fixed corrupted private keys breaking tsh. [#61388](https://github.com/gravitational/teleport/pull/61388) +* Resource names are now properly validated for AWS Roles Anywhere integration `Generate Command`. [#61385](https://github.com/gravitational/teleport/pull/61385) +* Added caches to reduce Active Directory user SID lookups and TLS certificate requests. [#61317](https://github.com/gravitational/teleport/pull/61317) +* GOAWAY errors received from Kubernetes API Servers configured with a non-zero `--goaway-chance` are now forwarded to clients to be retried. [#61256](https://github.com/gravitational/teleport/pull/61256) +* Added support for creating and managing scoped tokens using `tctl scoped tokens add/ls/rm`.
SSH nodes can now join a cluster within a particular scope by joining with a scoped token. [#60758](https://github.com/gravitational/teleport/pull/60758) + +Enterprise: +* Removed sync of the model identifier from Intune to avoid mismatches between the identifier reported by Intune vs Teleport clients. +* Added support for Jamf's /v2/computers-inventory API (addresses Jamf's deprecation of /v1/computers-inventory). +* Updated the AWS Identity Center resource synchronizer to handle AWS Account name changes more gracefully. +* Added audit events in response to SCIM provisioning requests. + +## 18.4.0 (11/13/25) + +### Streamable-HTTP and SSE support for MCP Zero-Trust Access +MCP Zero-Trust Access users are now able to secure and audit connections to MCP servers that use HTTP-based transport protocols in addition to stdio. + +### Improved Bot Instances Dashboard +The Bot Instances dashboard now provides a more intuitive interface for managing a fleet of Machine & Workload Identity bot instances. This includes improved filtering, sorting, and searching capabilities, and a high-level overview of the versions of all bot instances in the cluster. + +### Updated Oracle Joining Support +Oracle compute instances are no longer required to have additional IAM permissions granted to them in order to join. Oracle join tokens now also allow restricting which instances may leverage a token to join. + +### Other changes and improvements + +* Fixed an issue where connections to MongoDB Atlas clusters failed if the clusters used certs signed by Google Trust Services (GTS). [#61324](https://github.com/gravitational/teleport/pull/61324) +* Improved reverse tunnel dialing recovery from default route changes by 1 minute on average. [#61319](https://github.com/gravitational/teleport/pull/61319) +* Fixed an issue where a Postgres database could not be accessed via Teleport Connect when per-session MFA is enabled and the role does not have wildcard `db_names`.
[#61299](https://github.com/gravitational/teleport/pull/61299) +* Improved conflict detection between application public addresses and Teleport cluster addresses. [#61290](https://github.com/gravitational/teleport/pull/61290) +* Fixed AWS Roles Anywhere CLI access when using per-session MFA. [#61273](https://github.com/gravitational/teleport/pull/61273) +* Fixed a rare error in the `authorized_keys` secret scanner when running the Teleport agent on macOS. [#61268](https://github.com/gravitational/teleport/pull/61268) +* Updated Go to v1.24.10. [#61212](https://github.com/gravitational/teleport/pull/61212) +* Terraform: the `teleport_bot` resource now supports import and follows the standard resource structure. [#61201](https://github.com/gravitational/teleport/pull/61201) +* Added support for tbot to teleport-update. [#61198](https://github.com/gravitational/teleport/pull/61198) +* Instrumented tbot to better support teleport-update. [#61189](https://github.com/gravitational/teleport/pull/61189) +* Improved the `tsh` error message shown when there is a certificate DNS SAN mismatch while connecting to Auth via Proxy. [#61186](https://github.com/gravitational/teleport/pull/61186) +* Improved error handling during desktop sessions that encounter unknown/invalid smartcard commands. This prevents abrupt desktop session termination with a "PDU error" message when using certain applications. [#61180](https://github.com/gravitational/teleport/pull/61180) +* Fixed an issue causing Access Automation Rules to evaluate incorrectly when users are granted traits via Access Lists. [#61169](https://github.com/gravitational/teleport/pull/61169) +* Added support for tsh copying files between two hosts, e.g. `tsh scp alice@foo:/path/1.txt bob@bar:/path/2.txt`. [#61165](https://github.com/gravitational/teleport/pull/61165) +* Added support for custom reason prompts for Access Requests, per requested role/resource (`role.spec.allow.request.reason.prompt`).
[#61127](https://github.com/gravitational/teleport/pull/61127) +* Fixed the web UI timeout to respect the cluster's WebIdleTimeout configuration. [#61103](https://github.com/gravitational/teleport/pull/61103) +* Added an option to restrict Oracle join tokens to specific instance IDs. [#61078](https://github.com/gravitational/teleport/pull/61078) +* Stabilized tsh paths when run from an agent installation. [#60873](https://github.com/gravitational/teleport/pull/60873) +* Added advanced search and sorting to the bot instances list in the web UI. [#60761](https://github.com/gravitational/teleport/pull/60761) +* Added filter and sort flags to `tctl bots instances ls`. [#60761](https://github.com/gravitational/teleport/pull/60761) +* Added service health to the output of the `tctl bots instances ls` and `tctl bot instance show` commands. [#60761](https://github.com/gravitational/teleport/pull/60761) +* Added a dashboard to visualize bot instances by their version compatibility. [#60761](https://github.com/gravitational/teleport/pull/60761) +* Added bot instance service health to the web UI. [#60761](https://github.com/gravitational/teleport/pull/60761) +* Added a new `env0` join method to support joining within Env0 workflows. [#60710](https://github.com/gravitational/teleport/pull/60710) +* Added a new OCI join method that does not require IAM policies. [#60293](https://github.com/gravitational/teleport/pull/60293) + +## 18.3.2 (11/07/25) + +* Updated the github.com/containerd/containerd dependency to fix https://github.com/advisories/GHSA-pwhc-rpq9-4c8w. [#61143](https://github.com/gravitational/teleport/pull/61143) +* Fixed a regression when connecting to non-AD desktops. [#61117](https://github.com/gravitational/teleport/pull/61117) +* Fixed a bug causing `tsh` to stop waiting for access request approval and incorrectly report that the request had been deleted.
[#61109](https://github.com/gravitational/teleport/pull/61109) +* Fixed an issue where resources in Teleport Connect were not always refreshed correctly after re-logging in as a different user. [#61099](https://github.com/gravitational/teleport/pull/61099) + +Enterprise: +* Added support for Amazon Bedrock to the session recording summarizer (unavailable in Teleport Cloud). [#7463](https://github.com/gravitational/teleport.e/pull/7463) + +## 18.3.1 (11/04/25) + +**Warning:** This release includes a regression that prevents connection to non-AD desktops. +The following workaround is available: +- Upgrade Windows Desktop Service to 18.3.2 + +* Fixed an issue where the MCP session end event was sometimes not sent. [#61009](https://github.com/gravitational/teleport/pull/61009) +* Teleport's Windows Desktop service can now discover the KDC server address via DNS. [#60988](https://github.com/gravitational/teleport/pull/60988) +* Fixed Kubernetes metrics API unmarshaling errors causing `kubectl top` commands to fail in certain scenarios. [#60971](https://github.com/gravitational/teleport/pull/60971) +* Fixed an issue which could lead to session recordings saved on disk being truncated. [#60964](https://github.com/gravitational/teleport/pull/60964) +* Fixed a bug causing unencrypted session recordings to be deleted 24 hours after being created while using `node` and `proxy` recording modes. [#60948](https://github.com/gravitational/teleport/pull/60948) +* Enabled summarization and metadata generation for encrypted session recordings, storing metadata and summaries in encrypted form. [#60945](https://github.com/gravitational/teleport/pull/60945) +* Fixed a bug where encrypted session recordings could not be uploaded to S3. [#60895](https://github.com/gravitational/teleport/pull/60895) +* Added `tsh mcp config/connect` support for custom headers for streamable-HTTP MCP servers.
[#60843](https://github.com/gravitational/teleport/pull/60843) +* Fixed the session recording player being unable to play SSH sessions captured prior to v18.1.6. [#60832](https://github.com/gravitational/teleport/pull/60832) +* Fixed an issue in the web UI where a bot with zero tokens would show a validation error. [#60760](https://github.com/gravitational/teleport/pull/60760) +* Added the ability to set OIDC Integration credentials in the tctl AWS Identity Center plugin installer. [#60712](https://github.com/gravitational/teleport/pull/60712) +* Kubernetes OIDC responses are now cached to improve performance and reliability when joining bots and nodes. [#60711](https://github.com/gravitational/teleport/pull/60711) +* Fixed a MongoDB topology monitoring connection leak in the Teleport Database Service. [#60692](https://github.com/gravitational/teleport/pull/60692) +* Added support for topologySpreadConstraints to the teleport-kube-agent Helm chart. [#58012](https://github.com/gravitational/teleport/pull/58012) +* The teleport-kube-agent Helm chart now tries to spread pods across hosts and zones. [#58012](https://github.com/gravitational/teleport/pull/58012) + +## 18.3.0 (10/28/25) + +### Web UI Workload ID + +Teleport's Web UI now lists all workload identity resources registered in the cluster. -### Breaking changes +### Relay Service + +Teleport now includes a new relay service that acts as a lightweight proxy service. This new service can receive connections from both SSH clients and agents. -#### TLS Cipher Suites +The relay service can be used to avoid routing SSH connections through the broader Teleport control plane, providing the ability to optimize network flows in large or complex deployments. + +### Multi-cluster Discovery + +Multiple Teleport clusters can now discover the same EC2 instances simultaneously through auto-discovery, with each cluster operating independently without interference.
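+As a sketch of what this enables, each cluster can run its own Discovery Service pointed at the same set of instances; the matcher values below (region, tag) are illustrative, not taken from the release notes:
+
+```yaml
+# teleport.yaml fragment for the Discovery Service in each cluster.
+# Two clusters may both match the same EC2 instances; each cluster
+# now tracks its discovered instances independently.
+discovery_service:
+  enabled: true
+  aws:
+    - types: ["ec2"]
+      regions: ["us-east-1"]   # illustrative region
+      tags:
+        "env": "prod"          # illustrative tag filter
+```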
+ +### Kubernetes Health Checks + +Teleport now continuously monitors the health of your registered Kubernetes clusters and displays their status directly in the web UI. When connecting to Kubernetes clusters, Teleport automatically routes you to healthy services, ensuring reliable access to your infrastructure. + +### ElastiCache Serverless + +Teleport Database Access now supports connecting to ElastiCache Serverless databases. + +### Other fixes and improvements + +* The browser window for SSO MFA is slightly taller in order to accommodate larger elements like QR codes. [#60703](https://github.com/gravitational/teleport/pull/60703) +* The Slack access plugin no longer crashes when an access list is unsupported. [#60671](https://github.com/gravitational/teleport/pull/60671) +* Okta-managed apps are now pinned correctly in the web UI. [#60667](https://github.com/gravitational/teleport/pull/60667) +* Added the ability to create and edit GitLab join tokens from the Web UI. [#60649](https://github.com/gravitational/teleport/pull/60649) +* Teleport Connect now displays the profile name (instead of the cluster name) in the UI when referring to the profile; this affects only clusters where the cluster name was specifically set to something other than the proxy hostname during setup. [#60615](https://github.com/gravitational/teleport/pull/60615) +* Fixed tsh scp failing on files that grow during transfer. [#60607](https://github.com/gravitational/teleport/pull/60607) +* Allowed moderated session peers to perform file transfers. [#60604](https://github.com/gravitational/teleport/pull/60604) +* Added support for regular expression conditions for AccessMonitoringRule. [#60598](https://github.com/gravitational/teleport/pull/60598) +* Added support for SSE and streamable-HTTP MCP servers. [#60519](https://github.com/gravitational/teleport/pull/60519) +* Added health checks for enrolled Kubernetes clusters.
[#60492](https://github.com/gravitational/teleport/pull/60492) +* MWI: `tbot`'s auto-generated service names are now simpler and easier to use in the `/readyz` endpoint. [#60458](https://github.com/gravitational/teleport/pull/60458) +* Client tools managed updates now store the OS and architecture in the configuration. This ensures compatibility when the `TELEPORT_HOME` directory is shared with a virtual instance running a different OS or architecture. [#60414](https://github.com/gravitational/teleport/pull/60414) +* Added a Workload Identities page to the web UI to list workload identities. [#59479](https://github.com/gravitational/teleport/pull/59479) + +Enterprise: +* Enabled Access Automation Rule schedule configuration within the web UI. +* Updated the Entra ID plugin installation UI to support group filter configuration. + +## 18.2.10 (10/23/25) + +* Fixed a bug where listing members of an access list resulted in listing members of access lists whose names are prefixed with the original access list name. This could lead to RBAC escalations. [#60587](https://github.com/gravitational/teleport/pull/60587) +* Fixed a startup error `EADDRINUSE: address already in use` in Teleport Connect on macOS and Linux that could occur with long system usernames. [#60576](https://github.com/gravitational/teleport/pull/60576) +* Fixed an issue where the eligibility reconsideration flow could continuously reset the Owner’s eligibility status when the Access List contains a dangling reference to a non-existent user. [#60575](https://github.com/gravitational/teleport/pull/60575) +* Fixed Username AccessList name collision. [#60563](https://github.com/gravitational/teleport/pull/60563) +* Playback speed can be changed in the new SSH/k8s recording player. [#60451](https://github.com/gravitational/teleport/pull/60451) +* Adapted EC2 server auto discovery to send the correct parameters when using the `AWS-RunShellScript` pre-defined SSM Document.
[#60434](https://github.com/gravitational/teleport/pull/60434) +* Updated tsh debug output to include the tsh client version when the `--debug` flag is set. [#60407](https://github.com/gravitational/teleport/pull/60407) +* Updated the LDAP dial timeout from 15 seconds to 30 seconds. [#60388](https://github.com/gravitational/teleport/pull/60388) +* Fixed a bug that prevented using database role names longer than 30 chars for MySQL auto user provisioning. Now role names as long as 32 chars, which is the MySQL limit, can be used. [#60377](https://github.com/gravitational/teleport/pull/60377) +* Fixed a bug in Proxy Recording Mode that caused SSH sessions in the web UI to fail. [#60369](https://github.com/gravitational/teleport/pull/60369) +* Added `extraEnv` and `extraArgs` to the teleport-operator Helm chart. [#60357](https://github.com/gravitational/teleport/pull/60357) +* Fixed an issue with inherited roles interfering with auto role provisioning cleanup in Postgres. [#60345](https://github.com/gravitational/teleport/pull/60345) +* Fixed malformed audit events breaking the audit log. [#60334](https://github.com/gravitational/teleport/pull/60334) +* Enabled use of schedules within automatic review and notification access_monitoring_rules. [#60327](https://github.com/gravitational/teleport/pull/60327) +* Fixed an issue that caused Kubernetes debug containers to fail with a “container not valid” error when launched by a user requiring moderated sessions. [#60302](https://github.com/gravitational/teleport/pull/60302) +* Added a `tbot start ssh-multiplexer` helper to start the SSH multiplexer service without a config file. [#60287](https://github.com/gravitational/teleport/pull/60287) +* Fixed "The server-side graphics subsystem is in an error state" during connection initialization to Windows Desktop. [#60285](https://github.com/gravitational/teleport/pull/60285) +* Fixed a bug where SSH host certificates were missing the `.` principal, breaking SSH access via third-party clients.
[#60276](https://github.com/gravitational/teleport/pull/60276) +* Reduced memory usage when processing a session recording by ~80%. [#60275](https://github.com/gravitational/teleport/pull/60275) +* Fixed AWS CLI access when using the AWS Roles Anywhere integration. [#60227](https://github.com/gravitational/teleport/pull/60227) +* Fixed an issue in Teleport Connect where Ctrl+D would sometimes not close a terminal tab. [#60221](https://github.com/gravitational/teleport/pull/60221) +* Updated the error messages displayed by tsh ssh when access to hosts is denied and when attempting to connect to a host that is offline or not enrolled in the cluster. [#60215](https://github.com/gravitational/teleport/pull/60215) +* Added the ability to edit bot descriptions in the web UI. [#60212](https://github.com/gravitational/teleport/pull/60212) +* Added support for PodSecurityContext to the `tbot` Helm chart. [#60206](https://github.com/gravitational/teleport/pull/60206) +* MWI: Added the `teleport_bot_instances` metric. [#60196](https://github.com/gravitational/teleport/pull/60196) +* The `tbot` Workload API now logs errors encountered when handling requests. [#60193](https://github.com/gravitational/teleport/pull/60193) +* Added an explicit timeout to `tbot` when the Trust Bundle Cache is establishing an event watch. [#60182](https://github.com/gravitational/teleport/pull/60182) +* Fixed a bug where OpenSSH EICE node connections would fail. [#60124](https://github.com/gravitational/teleport/pull/60124) +* Updated Go to 1.24.9. [#60108](https://github.com/gravitational/teleport/pull/60108) +* Fixed SFTP audit events breaking the audit log. [#60069](https://github.com/gravitational/teleport/pull/60069) +* Fixed Access List owners permission inheritance when the nesting depth is one (members of an Access List configured as an Owner of another Access List). [#60056](https://github.com/gravitational/teleport/pull/60056) +* Added support for loading bound keypair joining parameters from the environment.
[#60031](https://github.com/gravitational/teleport/pull/60031) +* Deleting an AWS OIDC integration will remove associated Teleport Discovery Configs and App servers that reference the integration. [#60018](https://github.com/gravitational/teleport/pull/60018) +* Fixed an SELinux warning in teleport-update output and an error during removal. [#59997](https://github.com/gravitational/teleport/pull/59997) +* Fixed tsh scp getting stuck in symlink loops. [#59994](https://github.com/gravitational/teleport/pull/59994) +* Fixed handling of local tsh scp targets that contain a colon. [#59981](https://github.com/gravitational/teleport/pull/59981) +* Fixed the EC2 auto discovery report of failed installations. [#59972](https://github.com/gravitational/teleport/pull/59972) +* Fixed an issue where temporarily unreachable app servers were permanently removed from the session cache, causing persistent connection failures: `no application servers remaining to connect`. [#59956](https://github.com/gravitational/teleport/pull/59956) +* Fixed an issue with automatic access requests for `tsh ssh` when `spec.allow.request.max_duration` is set on the requester role. [#59924](https://github.com/gravitational/teleport/pull/59924) +* Fixed a bug with the check for a running Teleport process in the install-node.sh script. [#59887](https://github.com/gravitational/teleport/pull/59887) +* Fixed handling of SFTP file transfers when the SSH agent is enforced by SELinux. [#59874](https://github.com/gravitational/teleport/pull/59874) +* Periods of inactivity in SSH session playback can now be skipped. [#59701](https://github.com/gravitational/teleport/pull/59701) + +Enterprise: +* Oracle database local proxies started with `tsh proxy db` will now accept connections to any database name. + +## 18.2.9 (10/23/25) + +This is a follow-up to the private security release. The changelog will be publicly announced on 10/24/25.
+ +In addition to the previous release, it includes the following bug fixes: + +* Fixed a crash in EC2 auto discovery when the AWS credentials provided to the Discovery Service are not valid. [#60046](https://github.com/gravitational/teleport/pull/60046) + +## 18.2.8 (10/20/25) + +This is a follow-up to the private security release. The changelog will be publicly announced on 10/24/25. + +In addition to the previous release, it includes the following bug fixes: + +* Fixed an issue with the access list ineligibility status reconciler blocking member updates. +* Fixed an issue with SSH host certificates missing the `.` principal, breaking SSH access via third-party clients. + +## 18.2.7 (10/09/25) + +This is a follow-up to the private security release. The changelog will be publicly announced on 10/24/25. + +In addition to the previous release, it includes the following bug fixes: + +* Fixed an issue with automatic access requests for `tsh ssh` when `spec.allow.request.max_duration` is set on the requester role. + +## 18.2.6 (10/06/25) + +This is a follow-up to the private security release. The changelog will be publicly announced on 10/24/25. + +## 18.2.5 (10/02/25) + +This is a private security release. The changelog will be publicly announced on 10/24/25. + +## 18.2.4 (10/01/25) + +* Fixed an issue where the new SSH/Kubernetes recording player would indefinitely show a loading spinner when seeking into a long period of inactivity. [#59816](https://github.com/gravitational/teleport/pull/59816) +* MWI: Added support for customizing context names with a template in the `kubernetes/v2` output. [#59739](https://github.com/gravitational/teleport/pull/59739) +* Updated mongo-driver to v1.17.4 to include fixes for possible connection leaks that could affect Teleport Database Service instances. [#59732](https://github.com/gravitational/teleport/pull/59732) +* Fixed excessive memory usage on Teleport Proxy Service instances when using the Teleport Web UI MySQL REPL.
[#59719](https://github.com/gravitational/teleport/pull/59719) +* Added support for multiple agents in EC2, GCP, and Azure server auto discovery, allowing server access from different Teleport clusters. [#59688](https://github.com/gravitational/teleport/pull/59688) +* Changed the event-handler plugin to skip over Windows desktop session recording events by default. [#59681](https://github.com/gravitational/teleport/pull/59681) +* Fixed an issue that would cause trusted cluster resource updates to fail silently. [#58886](https://github.com/gravitational/teleport/pull/58886) + +## 18.2.3 (09/29/25) + +* Fixed auto-approvals in the Datadog Incident Management integration by updating the on-call API client. [#59668](https://github.com/gravitational/teleport/pull/59668) +* Fixed auto-approvals in the Datadog Incident Management integration to compare user emails case-insensitively. [#59668](https://github.com/gravitational/teleport/pull/59668) +* Database recordings now show the session summary if it is available. [#59634](https://github.com/gravitational/teleport/pull/59634) +* Added automatic `@.iam` suffix to GCP Postgres usernames (Teleport Connect). [#59629](https://github.com/gravitational/teleport/pull/59629) +* Fixed `tsh play` not returning an error when playing a session fails. [#59625](https://github.com/gravitational/teleport/pull/59625) +* Fixed an issue in Teleport Connect where clicking 'Restart' to apply an update could close the window without actually restarting the app. [#59592](https://github.com/gravitational/teleport/pull/59592) +* Added automatic `@.iam` suffix to GCP Postgres usernames (tsh, web UI). [#59590](https://github.com/gravitational/teleport/pull/59590) +* Introduced an `application-proxy` service to `tbot` for HTTP proxying to applications protected by Teleport. [#59587](https://github.com/gravitational/teleport/pull/59587) +* MWI: Added support for customizing cluster names with a template to the `kubernetes/argo-cd` output.
[#59575](https://github.com/gravitational/teleport/pull/59575) +* Fixed persistence of the `metadata.description` field for the Bot resource. [#59570](https://github.com/gravitational/teleport/pull/59570) +* Fixed a crash in Teleport's Windows Desktop Service introduced in 18.2.0. Compaction of certain shared directory read/write audit events could result in a stack overflow error. [#59515](https://github.com/gravitational/teleport/pull/59515) +* Added the `tctl tokens configure-kube` helper command to easily trust Kubernetes clusters and allow secure repeatable joining. [#59497](https://github.com/gravitational/teleport/pull/59497) +* Made the check for a running Teleport process in the install-node.sh script more robust. [#59496](https://github.com/gravitational/teleport/pull/59496) +* Fixed `tctl edit` producing an error when trying to modify a Bot resource. [#59480](https://github.com/gravitational/teleport/pull/59480) +* Added support for generating VS Code and Claude Code MCP server configurations to the `tsh mcp config` and `tsh mcp db config` commands. [#59473](https://github.com/gravitational/teleport/pull/59473) +* Fixed a bug where session IDs were tied to the client connection, resulting in issues when combined with multiplexed connection features (OpenSSH ControlPath/ControlMaster/ControlPersist). [#59472](https://github.com/gravitational/teleport/pull/59472) +* Improved app access error messages in the case of network errors. [#59468](https://github.com/gravitational/teleport/pull/59468) +* Fixed the database IAM configurator potentially getting stuck and never recovering (#59290). [#59417](https://github.com/gravitational/teleport/pull/59417) +* Added the `tbot copy-binaries` command to simplify using tbot as a Kubernetes sidecar. [#59404](https://github.com/gravitational/teleport/pull/59404) +* Fixed the `tsh config` binary path after managed updates. [#59384](https://github.com/gravitational/teleport/pull/59384) +* Updated the Entra ID integration to support group filters.
[#59378](https://github.com/gravitational/teleport/pull/59378) +* Fixed a regression allowing SAML apps to be included when filtering resources by 'Applications' in the Web UI. [#59327](https://github.com/gravitational/teleport/pull/59327) +* Allowed controlling the description of auto-discovered Kubernetes apps with an annotation. [#58817](https://github.com/gravitational/teleport/pull/58817) +* Fixed an issue that prevented connecting to agents over peered tunnels when proxy peering was enabled. [#59556](https://github.com/gravitational/teleport/pull/59556) + +## 18.2.2 (09/19/25) + +* Fixed a regression in Teleport Connect for Windows that caused the executable to be unsigned. [#59302](https://github.com/gravitational/teleport/pull/59302) +* Fixed an issue that prevented uploading encrypted recordings using the S3 session recording backend. [#59281](https://github.com/gravitational/teleport/pull/59281) +* Fixed an issue preventing auto-enrollment of EKS clusters when using the Web UI. [#59272](https://github.com/gravitational/teleport/pull/59272) +* Terraform provider: Allowed creating access lists without setting `spec.grants`. [#59217](https://github.com/gravitational/teleport/pull/59217) +* Fixed a panic that occurred when creating a Bound Keypair join token with the `spec.onboarding` field unset. [#59178](https://github.com/gravitational/teleport/pull/59178) +* Added the desktop name to Windows Directory and Clipboard audit events. [#59146](https://github.com/gravitational/teleport/pull/59146) +* Added the ability to update the AWS Identity Center SCIM token in `tctl`. [#59114](https://github.com/gravitational/teleport/pull/59114) +* Added services to correctly choose Access Request roles in remote clusters. [#59062](https://github.com/gravitational/teleport/pull/59062) +* The install script now allows specifying a group for agent installation with managed updates V2 enabled.
[#59059](https://github.com/gravitational/teleport/pull/59059) +* Added support for ElastiCache Serverless for Redis OSS and Valkey database access. [#58891](https://github.com/gravitational/teleport/pull/58891) + +Enterprise: +* Fixed an issue in the Entra ID integration where a user account with an unsupported username value could prevent other valid users and groups from being synced to Teleport. Such user accounts are now filtered out. + +## 18.2.1 (09/12/25) + +* Fixed sequential updates for client tools managed updates. [#59086](https://github.com/gravitational/teleport/pull/59086) +* Fixed headless login so that it supports both WebAuthn and SSO for MFA. [#59078](https://github.com/gravitational/teleport/pull/59078) +* When selecting a login for an SSH server, Teleport Connect now shows only logins allowed by RBAC for that specific server rather than showing all logins the user has access to. [#59067](https://github.com/gravitational/teleport/pull/59067) +* The Terraform Provider is now supported on Windows machines. [#59055](https://github.com/gravitational/teleport/pull/59055) +* Enabled Oracle Cloud joining in Machine ID's `tbot` client. [#59040](https://github.com/gravitational/teleport/pull/59040) +* Fixed a bug preventing users from creating access lists with empty grants through Terraform. [#59032](https://github.com/gravitational/teleport/pull/59032) +* Fixed a DynamoDB bug potentially causing event queries to return a different range of events. In the worst-case scenario, this bug would block the event-handler. [#59029](https://github.com/gravitational/teleport/pull/59029) +* Fixed an issue where SSH file copying attempts would be spuriously denied in proxy recording mode. [#59027](https://github.com/gravitational/teleport/pull/59027) +* Updated the Enroll Integration page design. [#58985](https://github.com/gravitational/teleport/pull/58985) +* Teleport Connect now runs in the background by default on macOS and Windows.
On Linux, this behavior can be enabled in the app configuration. [#58923](https://github.com/gravitational/teleport/pull/58923) +* Added the fdpass-teleport binary to the install script for Teleport tar downloads. [#58919](https://github.com/gravitational/teleport/pull/58919) +* Added support for editing multiple resources in `tctl edit` when editing collections. [#58902](https://github.com/gravitational/teleport/pull/58902) +* Added support for browser window resizing to the Teleport Web UI database client terminal. [#58900](https://github.com/gravitational/teleport/pull/58900) +* Fixed a bug that prevented root users from viewing session recordings when they were participants. [#58897](https://github.com/gravitational/teleport/pull/58897) +* Added the ability for users to select whether the IC integration creates roles for all possible Account Assignments. [#58861](https://github.com/gravitational/teleport/pull/58861) +* Updated Go to 1.24.7. [#58835](https://github.com/gravitational/teleport/pull/58835) +* Populated the `user_roles` and `user_traits` fields for SSH audit events. [#58804](https://github.com/gravitational/teleport/pull/58804) +* Added support for wtmpdb as a user accounting backend in addition to wtmp. [#58777](https://github.com/gravitational/teleport/pull/58777) +* Prevented an application from being registered if its public address matches a Teleport cluster address. [#58766](https://github.com/gravitational/teleport/pull/58766) +* Added a preset role `mcp-user` that has access to all MCP servers and their tools. [#58613](https://github.com/gravitational/teleport/pull/58613) + +Enterprise: +* Fixed an issue where sometimes the session summary was marked as a success, even though the summary was empty (this was particularly visible using GPT 5). +* Updated the Enroll Integration page design.
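+As a quick illustration of the `tctl edit` collection support in this release, a single resource can still be edited by kind/name, while a bare kind is intended to open the whole collection; the role name below is illustrative, and the exact collection syntax may vary:
+
+```shell
+# Edit one resource by kind/name, as before:
+tctl edit roles/access
+
+# Editing a collection opens all resources of that kind in one
+# editor buffer; saving applies the changes.
+tctl edit roles
+```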
+ +## 18.2.0 (09/04/25) + +### Encrypted session recordings + +Teleport now provides the ability to integrate with Hardware Security Modules (HSMs) in order to encrypt session recordings prior to uploading them to storage. + +### AI session summaries + +Teleport Identity Security users are now able to view AI-generated summaries for SSH, Kubernetes, and database sessions. + +### Updated session recordings page + +The session recordings page in the Teleport web UI has been updated with a new design that includes session thumbnails and the ability to view session summaries for Identity Security users. + +### Teleport Connect Managed Updates + +Teleport Connect is now able to detect when application updates are available and automatically apply them on the next restart. + +### Teleport Device Trust Intune Support + +Teleport now includes a new hosted plugin for Microsoft's Intune suite, allowing trusted devices to be synchronized from the Intune inventory. + +### Terraform support for Access List members + +Users are now able to provision Access Lists and their members (including other nested Access Lists) with Terraform. + +### Long-term access requests UX + +The access request creation dialog in the Teleport web UI now better differentiates between short- and long-term access requests. + +### Database web terminal for MySQL + +The Teleport web UI now provides a terminal interface for MySQL database access. + +### Database access for AlloyDB + +Teleport now supports database access for GCP AlloyDB databases. + +### Other changes and improvements + +* Improved observability by adding health check metrics for healthy, unhealthy, and unknown states. Database health checks can now be monitored with these metrics. [#58708](https://github.com/gravitational/teleport/pull/58708) +* Removed the AccessList review notification check from the tsh login/status flow. [#58662](https://github.com/gravitational/teleport/pull/58662) +* Added the ability to lock, unlock, and delete bots from the Bot Details page, as well as view lock status.
[#58653](https://github.com/gravitational/teleport/pull/58653) +* Fixed an internal access list membership caching issue that caused high CPU usage when the total number of members exceeded 200. [#58614](https://github.com/gravitational/teleport/pull/58614) +* Fixed an internal cache issue that could cause crashes in AWS IC, Database, and App access flows. [#58611](https://github.com/gravitational/teleport/pull/58611) +* Fixed a panic in `tbot`'s `ssh-multiplexer` service. [#58595](https://github.com/gravitational/teleport/pull/58595) +* Teleport now honors the Entra ID OIDC groups overage claim. The OIDC connector spec in Teleport must be updated to request the OIDC `profile` scope, and the enterprise application in Entra ID must be granted the `User.ReadBasic.All` Graph API permission for this feature to work. By default, Teleport queries the Microsoft Graph API `graph.microsoft.com` endpoint and filters the user's group membership to the "security groups" group type. This behavior can be changed by configuring the `entra_id_groups_provider` field, which is available in the OIDC connector configuration spec. [#58593](https://github.com/gravitational/teleport/pull/58593) +* Enhanced session recordings RBAC to enforce recording access based on rules that reference the creator's roles, traits, and resource properties. [#58563](https://github.com/gravitational/teleport/pull/58563) +* Added support for configuring the SCIM plugin with OIDC or GitHub Teleport connectors. [#58554](https://github.com/gravitational/teleport/pull/58554) +* Added a `user_agent` field to MySQL database session start audit events. [#58523](https://github.com/gravitational/teleport/pull/58523) +* `tbot` now supports the configuration of a default namespace for kubeconfig files generated by the `kubernetes/v2` service. [#58494](https://github.com/gravitational/teleport/pull/58494) +* Reduced audit log clutter by compacting contiguous shared directory read/write events into a single audit log event.
[#58446](https://github.com/gravitational/teleport/pull/58446)
+* Session metadata now appears next to SSH sessions in the UI. [#58405](https://github.com/gravitational/teleport/pull/58405)
+* Refreshed the session recordings list UI with thumbnails, more filtering options, and a card/list view. [#58390](https://github.com/gravitational/teleport/pull/58390)
+* Added thumbnail and metadata generation for session recordings. [#58360](https://github.com/gravitational/teleport/pull/58360)
+* Teleport Connect now supports managed updates. [#58260](https://github.com/gravitational/teleport/pull/58260)
+* Teleport Connect now brings focus back from the browser to itself after a successful SSO login. [#58260](https://github.com/gravitational/teleport/pull/58260)
+* Added support for GCP AlloyDB. [#58202](https://github.com/gravitational/teleport/pull/58202)
+* Added support for encrypting session recordings at rest across all recording modes. Encryption can be enabled statically by setting `auth_server.session_recording_config.enabled: yes` in the Teleport file configuration, or dynamically by editing the `session_recording_config` resource and setting `spec.encryption.enabled: yes`. [#57959](https://github.com/gravitational/teleport/pull/57959)
+* Added SSH SELinux module management to `teleport-update`. [#57660](https://github.com/gravitational/teleport/pull/57660)
+* Added Terraform support for Access List members. [#57058](https://github.com/gravitational/teleport/pull/57058)
+
+## 18.1.8 (08/29/25)
+
+* Fixed an issue introduced in v18.1.5 that caused desktop connection attempts to stall on the loading screen. [#58500](https://github.com/gravitational/teleport/pull/58500)
+* Support setting `"*"` in role `kubernetes_users`.
[#58477](https://github.com/gravitational/teleport/pull/58477)
+* The following Helm charts now support obtaining the plugin credentials using `tbot`: `teleport-plugin-discord`, `teleport-plugin-email`, `teleport-plugin-jira`, `teleport-plugin-mattermost`, `teleport-plugin-msteams`, `teleport-plugin-pagerduty`, `teleport-plugin-event-handler`. [#58301](https://github.com/gravitational/teleport/pull/58301)
+
+## 18.1.7 (08/27/25)
+
+**Warning:** This release includes a regression that prevents connecting to Windows desktops via the Web UI.
+The following workarounds are available:
+- Downgrade proxy servers to 18.1.4
+- Use Teleport Connect instead of the web UI to access desktops
+- Set your preferred keyboard layout (under account settings) to something other than _system_.
+
+* Fixed an issue where VNet could not start because of a "VNet is already running" error. [#58388](https://github.com/gravitational/teleport/pull/58388)
+* Fix the MCP icon displaying as white/black blocks. [#58347](https://github.com/gravitational/teleport/pull/58347)
+* Fix crash when running `teleport backend clone` on non-Linux platforms. [#58332](https://github.com/gravitational/teleport/pull/58332)
+* Disabled MySQL database health checks to avoid MySQL blocking the Teleport Database Service after too many connection errors. MySQL health checks can be re-enabled by setting `max_connect_errors` on MySQL to its maximum value and setting the environment variable `TELEPORT_ENABLE_MYSQL_DB_HEALTH_CHECKS=1` on the Teleport Database Service instance. [#58331](https://github.com/gravitational/teleport/pull/58331)
+* Fixed an incorrect `scp` exit status between OpenSSH clients and servers. [#58327](https://github.com/gravitational/teleport/pull/58327)
+* Fixed SFTP `readdir` failing due to broken symlinks. [#58320](https://github.com/gravitational/teleport/pull/58320)
+* Added an "MCP Servers" filter in the resources view for the Web UI and Teleport Connect.
[#58309](https://github.com/gravitational/teleport/pull/58309)
+* Enable a separate `request_object_mode` setting for the MFA flow in OIDC connectors. [#58281](https://github.com/gravitational/teleport/pull/58281)
+* Allow a namespace to be specified for the `tbot` Kubernetes Secret destination. [#58203](https://github.com/gravitational/teleport/pull/58203)
+* MWI: `tbot` now supports managing Argo CD clusters via the `kubernetes/argo-cd` output service. [#58200](https://github.com/gravitational/teleport/pull/58200)
+* Fixed a failure to close the user accounting session. [#58163](https://github.com/gravitational/teleport/pull/58163)
+* Added a paginated `ListDatabases` API and deprecated `GetDatabases`. [#58105](https://github.com/gravitational/teleport/pull/58105)
+* Prevent modifier keys from getting stuck during remote desktop sessions. [#58103](https://github.com/gravitational/teleport/pull/58103)
+* Fixed AWS app access signature verification for AWS requests that use an unsigned payload. [#58085](https://github.com/gravitational/teleport/pull/58085)
+* Windows desktop LDAP discovery now auto-populates the resource's description field. [#58082](https://github.com/gravitational/teleport/pull/58082)
+
+Enterprise:
+* For OIDC SSO, the IdP app/client configured for MFA checks is no longer expected to return claims that map to Teleport roles. Valid claim-to-role mappings are only required for login flows.
+* Fix the SSO MFA method for applications when Teleport is the SAML identity provider and Per-Session MFA is enabled.
+* Added an optional session recording summarizer that uses OpenAI or a compatible API.
+
+## 18.1.6 (08/20/25)
+
+**Warning:** This release includes a regression that prevents connecting to Windows desktops via the Web UI.
+The following workarounds are available:
+- Downgrade proxy servers to 18.1.4
+- Use Teleport Connect instead of the web UI to access desktops
+- Set your preferred keyboard layout (under account settings) to something other than _system_.
+
+* Fixed an uncaught exception in Teleport Connect on Windows when closing the app while the `TELEPORT_TOOLS_VERSION` environment variable is set. [#58131](https://github.com/gravitational/teleport/pull/58131)
+* Fixed a Teleport Connect crash that occurred when assuming an access request while an application or database connection was active. [#58109](https://github.com/gravitational/teleport/pull/58109)
+* Enable Azure joining with VMSS. [#58094](https://github.com/gravitational/teleport/pull/58094)
+* Add support for JWT-Secured Authorization Requests to the OIDC connector. [#58063](https://github.com/gravitational/teleport/pull/58063)
+* Fixed an issue that could cause some hosts not to register dynamic Windows desktops. [#58061](https://github.com/gravitational/teleport/pull/58061)
+* `tbot` now emits a log message stating the current version on startup. [#58056](https://github.com/gravitational/teleport/pull/58056)
+* Improve the error message shown when a user without any MFA devices enrolled attempts to access a resource that requires MFA. [#58042](https://github.com/gravitational/teleport/pull/58042)
+* Web assets are now pre-compressed with Brotli. [#58039](https://github.com/gravitational/teleport/pull/58039)
+* Added the `TELEPORT_UNSTABLE_GRPC_RECV_SIZE` environment variable, which can be set to override the client-side maximum gRPC message size. [#58029](https://github.com/gravitational/teleport/pull/58029)
+
+## 18.1.5 (08/18/25)
+
+**Warning:** This release includes a regression that prevents connecting to Windows desktops via the Web UI.
+The following workarounds are available:
+- Downgrade proxy servers to 18.1.4
+- Use Teleport Connect instead of the web UI to access desktops
+- Set your preferred keyboard layout (under account settings) to something other than _system_.
+
+* Fix AWS CLI access using the AWS OIDC integration. [#57977](https://github.com/gravitational/teleport/pull/57977)
+* Fixed an issue that could cause revocation checks to fail in Windows environments.
[#57880](https://github.com/gravitational/teleport/pull/57880)
+* Fixed the case where the auto-updated client tools did not use the intended version. [#57870](https://github.com/gravitational/teleport/pull/57870)
+* Bound Keypair Joining: Fix lock generation on sequence desync. [#57863](https://github.com/gravitational/teleport/pull/57863)
+* Fix database PKINIT issues caused by missing CDP information in the certificate. [#57850](https://github.com/gravitational/teleport/pull/57850)
+* Fixed connection issues to Windows Desktop Services (v17 or earlier) in Teleport Connect. [#57842](https://github.com/gravitational/teleport/pull/57842)
+* The `teleport-kube-agent` Helm chart now supports Kubernetes joining. `teleportClusterName` must be set to enable the feature. [#57824](https://github.com/gravitational/teleport/pull/57824)
+* Fixed the web UI's access request submission panel getting stuck when scrolling down the page. [#57797](https://github.com/gravitational/teleport/pull/57797)
+* Enroll new Kubernetes agents in Managed Updates. [#57784](https://github.com/gravitational/teleport/pull/57784)
+* Teleport now supports displaying more than 2k tokens. [#57772](https://github.com/gravitational/teleport/pull/57772)
+* Updated Go to 1.24.6. [#57764](https://github.com/gravitational/teleport/pull/57764)
+* The database MCP server now supports CockroachDB databases. [#57762](https://github.com/gravitational/teleport/pull/57762)
+* Added support for CockroachDB Web Access and interactive CockroachDB session playback. [#57762](https://github.com/gravitational/teleport/pull/57762)
+* Added the `--auth` flag to the `tctl plugins install scim` CLI command to support Bearer token and OAuth authentication methods. [#57759](https://github.com/gravitational/teleport/pull/57759)
+* Fix Alt+Click not being registered in remote desktop sessions.
[#57757](https://github.com/gravitational/teleport/pull/57757)
+* Kubernetes Access: `kubectl port-forward` now exits cleanly when backend pods are removed. [#57738](https://github.com/gravitational/teleport/pull/57738)
+* Kubernetes Access: Fixed a bug when forwarding multiple ports to a single pod. [#57736](https://github.com/gravitational/teleport/pull/57736)
+* Fixed `unlink-package` during upgrade/downgrade. [#57720](https://github.com/gravitational/teleport/pull/57720)
+* Added a new `oidc` joining mode for Kubernetes delegated joining to support providers that can be configured to provide public OIDC endpoints, like EKS, AKS, and GKE. [#57683](https://github.com/gravitational/teleport/pull/57683)
+* The Teleport `event-handler` now accepts HTTP status code 204 from the recipient. This adds support for sending events to Grafana Alloy and newer Fluentd versions. [#57680](https://github.com/gravitational/teleport/pull/57680)
+* Enrich the `windows.desktop.session.start` audit event with additional certificate metadata. [#57676](https://github.com/gravitational/teleport/pull/57676)
+* Allow the use of `ResourceGroupsTaggingApi` for KMS key deletion. [#57671](https://github.com/gravitational/teleport/pull/57671)
+* Added a `--force` option to `tctl workload-identity x509-issuer-overrides sign-csrs` to allow displaying the output of partial failures, intended for use in clusters that make use of HSMs. [#57662](https://github.com/gravitational/teleport/pull/57662)
+* `tctl top` can now display raw Prometheus metrics. [#57632](https://github.com/gravitational/teleport/pull/57632)
+* Enable resource label conditions for notification routing rules. [#57616](https://github.com/gravitational/teleport/pull/57616)
+* Use the bot details page to view and edit bot configuration, and see active instances with their upgrade status.
[#57542](https://github.com/gravitational/teleport/pull/57542)
+* Device Trust: added `required-for-humans` mode to allow bots to run on unenrolled devices, while enforcing checks for human users. [#57222](https://github.com/gravitational/teleport/pull/57222)
+* Add `TeleportDatabaseV3` support to the Teleport Kubernetes Operator. [#56948](https://github.com/gravitational/teleport/pull/56948)
+* Add `TeleportAppV3` support to the Teleport Kubernetes Operator. [#56948](https://github.com/gravitational/teleport/pull/56948)
+* Fix the `TELEPORT_SESSION` and `SSH_SESSION_ID` environment variables not matching in an SSH session. [#55272](https://github.com/gravitational/teleport/pull/55272)
+
+Enterprise:
+* Allow OIDC authentication to complete if email verification is not provided when the OIDC connector is set to enforce verified email addresses.
+
+## 18.1.4 (08/06/25)
+
+* Fixed access denied error messages not being displayed in the Teleport web UI PostgreSQL client. [#57568](https://github.com/gravitational/teleport/pull/57568)
+* Fixed a bug in the default discovery script that could occur when discovering instances whose `PATH` doesn't contain `/usr/local/bin`. [#57530](https://github.com/gravitational/teleport/pull/57530)
+
+## 18.1.3 (08/05/25)
+
+* Fixed a panic that could occur when fetching non-existent resources from the cache. [#57583](https://github.com/gravitational/teleport/pull/57583)
+* Added support for consuming arbitrary JSON OIDC claims using the JSONPath query language. [#57570](https://github.com/gravitational/teleport/pull/57570)
+* Made it easier to identify Windows desktop certificate issuance on the audit log page. [#57521](https://github.com/gravitational/teleport/pull/57521)
+* Fixed a race condition in the Terraform Provider potentially causing "does not exist" errors for the following resources: `auth_preference`, `autoupdate_config`, `autoupdate_version`, `cluster_maintenance_config`, `cluster_network_config`, and `session_recording_config`.
[#57518](https://github.com/gravitational/teleport/pull/57518)
+* Fixed a Terraform provider bug causing resource creation to be retried more times than the `MaxRetries` setting allows. [#57518](https://github.com/gravitational/teleport/pull/57518)
+* Fixed a Terraform provider bug that occurred when `autoupdate_version` or `autoupdate_config` have non-empty metadata. [#57516](https://github.com/gravitational/teleport/pull/57516)
+
+## 18.1.2 (08/05/25)
+
+* Fix a bug on Windows where a forwarded SSH agent would stop working after a single connection using the agent. [#57511](https://github.com/gravitational/teleport/pull/57511)
+* Fixed the usage output for the global `--help` flag. [#57451](https://github.com/gravitational/teleport/pull/57451)
+* Added Cursor and VSCode install buttons to the MCP connect dialog in the Web UI. [#57362](https://github.com/gravitational/teleport/pull/57362)
+* Added "Allowed Tools" to `tsh mcp ls` and show a warning if no tools are allowed. [#57360](https://github.com/gravitational/teleport/pull/57360)
+* `tctl top` now respects the local Teleport config file. [#57354](https://github.com/gravitational/teleport/pull/57354)
+* Fixed an issue backfilling CRLs during startup for long-standing clusters. [#57321](https://github.com/gravitational/teleport/pull/57321)
+* Disable NLA in FIPS mode. [#57307](https://github.com/gravitational/teleport/pull/57307)
+* Added a configurable delay between receiving a termination signal and shutting down. [#57211](https://github.com/gravitational/teleport/pull/57211)
+
+Enterprise:
+* Slightly optimized access token refresh logic for the Jamf integration when using API credentials.
+
+## 18.1.1 (07/29/25)
+
+* Fix CRL publication for Active Directory Windows desktop access. [#57264](https://github.com/gravitational/teleport/pull/57264)
+* Allow YubiKeys running 5.7.4+ firmware to be usable as PIV hardware keys.
[#57216](https://github.com/gravitational/teleport/pull/57216)
+* Append headers to configuration files generated by `teleport-update`. [#56577](https://github.com/gravitational/teleport/pull/56577)
+
+Enterprise:
+* Fixed an application crash that could occur when using GitHub personal access tokens that don't have an expiration date.
+
+## 18.1.0 (07/25/25)
+
+### MCP server access
+
+Teleport now provides the ability to connect to stdio-based MCP servers with
+connection proxying and audit logging support.
+
+### MCP for database access
+
+Teleport now allows MCP clients such as Claude Desktop to execute queries in
+Teleport-protected databases.
+
+### VNet for SSH
+
+Teleport VNet adds native support for SSH, enabling any SSH client to connect to
+Teleport SSH servers with zero configuration. Advanced Teleport features like
+per-session MFA have first-class support for a seamless user experience.
+
+### Identifier-first login
+
+Teleport adds support for identifier-first login flows. When enabled, the
+initial login screen contains only a username prompt. Users are presented with
+the SSO connectors that apply to them after submitting their username.
+
+### Bound keypair joining for Machine ID
+
+The new bound keypair join method for Machine ID is a more secure and
+user-friendly alternative to token joining in both on-prem environments and
+cloud providers without a delegated join method. It allows for automatic
+self-recovery in case of expired client certificates and gives administrators
+new options to manage and automate bot joining.
+
+### SailPoint SCIM integration
+
+Teleport now supports SailPoint as a SCIM provider, allowing administrators to
+synchronize SailPoint entitlement groups with Teleport access lists.
+
+### LDAP server discovery for desktop access
+
+Teleport's `windows_desktop_service` can now locate the LDAP server via DNS as
+an alternative to providing the address in the configuration file.
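+
+As a sketch of what this can look like (the `ldap` fields shown follow the
+existing `windows_desktop_service` configuration; treat the omitted-`addr`
+behavior as illustrative, and note that DNS-based location assumes the domain
+publishes the standard Active Directory `_ldap._tcp` SRV records):
+
+```yaml
+windows_desktop_service:
+  enabled: yes
+  ldap:
+    # addr is omitted; the service can now locate the LDAP server via DNS.
+    domain: example.com
+    username: 'EXAMPLE\svc-teleport'
+```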
+
+### Managed Updates canary support
+
+Managed Updates v2 now supports canary updates. When canary updates
+are enabled for a group, Teleport will update a few agents first and confirm
+they come back healthy before updating the rest of the group.
+
+You can enable canary updates by setting `canary_count` in your
+`autoupdate_config`:
+
+```yaml
+kind: autoupdate_config
+spec:
+  agents:
+    mode: enabled
+    schedules:
+      regular:
+        - name: dev
+          days:
+            - Mon
+            - Tue
+            - Wed
+            - Thu
+          start_hour: 20
+          canary_count: 5
+          strategy: halt-on-error
+```
+
+Each group can have a maximum of 5 canaries; they are picked randomly from the
+connected agents.
+
+Canary updates are currently only supported by Linux agents; Kubernetes
+support will be part of a future release.
+
+### Improved access requests UX
+
+Teleport's web UI makes a better distinction between just-in-time and long-term
+access requests.
+
+### Other changes and improvements
+
+* Fixed a bug causing `tctl`/`tsh` to fail on read-only file systems. [#57147](https://github.com/gravitational/teleport/pull/57147)
+* The `teleport-distroless` container image now disables client tools updates by default (when using tsh/tctl, you will always use the version from the image). You can re-enable them by unsetting the `TELEPORT_TOOLS_VERSION` environment variable. [#57147](https://github.com/gravitational/teleport/pull/57147)
+* Fixed a crash in Teleport Connect that could occur when copying large clipboard content during desktop sessions. [#57130](https://github.com/gravitational/teleport/pull/57130)
+* Audit log events for SPIFFE SVID issuances now include the name/label selector used by the client. [#57129](https://github.com/gravitational/teleport/pull/57129)
+* Fixed an issue with `tsh aws` failing for STS and other AWS services. [#57122](https://github.com/gravitational/teleport/pull/57122)
+* Fixed client tools managed updates downgrading to an older version.
[#57073](https://github.com/gravitational/teleport/pull/57073)
+* Removed unnecessary macOS entitlements from Teleport Connect subprocesses. [#57066](https://github.com/gravitational/teleport/pull/57066)
+* Machine and Workload ID: The `tbot` client will now discard expired identities if needed during renewal to allow automatic recovery without restarting the process. [#57060](https://github.com/gravitational/teleport/pull/57060)
+* Defined the `access-plugin` preset role. [#57056](https://github.com/gravitational/teleport/pull/57056)
+* The `tctl top` command now supports the local Unix socket debug endpoint. [#57025](https://github.com/gravitational/teleport/pull/57025)
+* Added a `--listen` flag to `tsh proxy db` for setting the local listener address. [#57005](https://github.com/gravitational/teleport/pull/57005)
+* Added multi-account support to `teleport discovery bootstrap`. [#56998](https://github.com/gravitational/teleport/pull/56998)
+* Added `TeleportRoleV8` support to the Teleport Kubernetes Operator. [#56946](https://github.com/gravitational/teleport/pull/56946)
+* Fixed a bug in the Teleport install scripts when running on macOS. The install scripts now error instead of trying to install non-existent macOS FIPS binaries. [#56941](https://github.com/gravitational/teleport/pull/56941)
+* Fixed use of a relative path in the `TELEPORT_HOME` environment variable with client tools managed updates. [#56933](https://github.com/gravitational/teleport/pull/56933)
+* Client tools managed updates support multi-cluster environments and track each version in the configuration file. [#56933](https://github.com/gravitational/teleport/pull/56933)
+* Fixed certificate revocation failures in Active Directory environments when Teleport is using HSM-backed key material. [#56924](https://github.com/gravitational/teleport/pull/56924)
+* Fixed the database connect options dialog displaying the wrong database username options.
[#55560](https://github.com/gravitational/teleport/pull/55560)
+
+Enterprise:
+* Fixed SCIM user provisioning when a user already exists and is managed by the same connector as the SCIM integration.
+* Added enrollment for a generic SCIM integration.
+
+## 18.0.2 (07/17/25)
+
+* Fixed a backward compatibility issue introduced in the 17.5.5 / 18.0.1 releases related to the Access List type, causing the `unknown access_list type "dynamic"` validation error. [#56892](https://github.com/gravitational/teleport/pull/56892)
+* Added support for glob-style matching to Spacelift join rules. [#56877](https://github.com/gravitational/teleport/pull/56877)
+* Improve PKINIT compatibility by always including CDP information in the certificate. [#56875](https://github.com/gravitational/teleport/pull/56875)
+* Update Application APIs to use pagination to avoid exceeding message size limitations. [#56727](https://github.com/gravitational/teleport/pull/56727)
+* MWI: `tbot`'s `/readyz` endpoint is now representative of the bot's health. [#56719](https://github.com/gravitational/teleport/pull/56719)
+
+## 18.0.1 (07/15/25)
+
+* Fixed backward compatibility of the Access List 'membershipRequires is missing' error for older Terraform providers. [#56742](https://github.com/gravitational/teleport/pull/56742)
+* Fixed VNet DNS configuration on Windows hosts joined to Active Directory domains. [#56738](https://github.com/gravitational/teleport/pull/56738)
+* Updated the default client timeout and upload rate for Pyroscope. [#56730](https://github.com/gravitational/teleport/pull/56730)
+* Bot instances are now sortable by latest heartbeat time in the web UI. [#56696](https://github.com/gravitational/teleport/pull/56696)
+* Enabled automatic reviews of resource requests. [#56690](https://github.com/gravitational/teleport/pull/56690)
+* Updated Go to 1.24.5. [#56679](https://github.com/gravitational/teleport/pull/56679)
+* Fixed `tbot` SPIFFE Workload API failing to renew SPIFFE SVIDs.
[#56662](https://github.com/gravitational/teleport/pull/56662)
+* Fixed some icons displaying as white/black blocks. [#56619](https://github.com/gravitational/teleport/pull/56619)
+* Fixed Teleport cache `ListUsers` pagination. [#56613](https://github.com/gravitational/teleport/pull/56613)
+* Fixed a duplicated `db_client` CA in `tctl status` and `tctl get cas` output. [#56563](https://github.com/gravitational/teleport/pull/56563)
+* Added cross-account support for EC2 discovery. [#56535](https://github.com/gravitational/teleport/pull/56535)
+* Terraform Provider: added support for skipping proxy certificate verification in development environments. [#56527](https://github.com/gravitational/teleport/pull/56527)
+* Added support for CRD in access requests. [#56496](https://github.com/gravitational/teleport/pull/56496)
+* Added the `tctl autoupdate agents report` command. [#56495](https://github.com/gravitational/teleport/pull/56495)
+* Made VNet DNS available over IPv4. [#56477](https://github.com/gravitational/teleport/pull/56477)
+* Fixed a missing Teleport Kube Operator permission in v18.0.0 causing the operator to fail. [#56466](https://github.com/gravitational/teleport/pull/56466)
+* Trait role templating is now supported in the `workload_identity_labels` field. [#56296](https://github.com/gravitational/teleport/pull/56296)
+* MWI: `tbot` no longer supports providing a proxy server address via `--auth-server` or `auth_server`; use `--proxy-server` or `proxy_server` instead. [#55818](https://github.com/gravitational/teleport/pull/55818)
+* UX: Forbid creating Access Requests to `user_group` resources when Okta bidirectional sync is disabled. [#55585](https://github.com/gravitational/teleport/pull/55585)
+* Teleport Connect: Added support for custom reason prompts. [#55557](https://github.com/gravitational/teleport/pull/55557)
+
+Enterprise:
+* Renamed Access Monitoring Rules to Access Automation Rules within the web UI.
+* Prevent the lack of an `email_verified` OIDC claim from failing authentication when the OIDC connector is set to enforce verified email addresses.
+* Fixed an email integration enrollment documentation link.
+* Fixed a regression in SAML IdP that caused service provider initiated login to fail if the request was made with `http-redirect` binding encoding and the user had an active session in Teleport.
+
+## 18.0.0 (07/03/25)
+
+Teleport 18 brings the following new features and improvements:
+
+* Identity Activity Center
+* Automatic access request reviews
+* Multi-session MFA for database access
+* RBAC and device trust for SAML applications
+* Database health checks
+* Kubernetes CRD
+
+### Description
+
+#### Identity Activity Center
+
+Part of Teleport Identity Security, Identity Activity Center helps teams expose
+and eliminate hidden identity risk in your infrastructure. By correlating user
+activity from multiple sources, it accelerates incident response to
+identity-based attacks. The first iteration will support integrations with AWS
+(CloudTrail), GitHub (Audit Log API), Okta (Audit API), and Teleport (Audit Log).
+
+#### Automatic access request reviews
+
+Teleport 18 includes built-in support for automatic access request reviews,
+eliminating the need to run separate access request plugins. Automatic reviews
+are enabled by setting up Access Monitoring Rules which define the conditions
+that must be satisfied in order for a request to be automatically approved or
+denied.
+
+For more information, refer to the [docs](docs/pages/identity-governance/access-requests/automatic-reviews.mdx).
+
+#### Multi-session MFA for database access
+
+Per-session MFA has been extended to support multi-session reuse, allowing a
+single MFA challenge to authorize multiple database connections using the new
+`tsh db exec` command. This command executes a query across multiple selected
+databases, making it user-friendly for ad-hoc use and script-friendly for
+automation.
+
+For more details, see the *database access examples* in the [per-session MFA
+guide](docs/pages/zero-trust-access/authentication/per-session-mfa.mdx).
+
+#### RBAC and device trust for SAML applications
+
+Access to SAML IdP service provider resources can now be controlled with
+resource labels. The resource labels are matched against `app_labels` defined in
+user roles. Additionally, SAML IdP sessions now enforce device trust.
+
+#### Database health checks
+
+In Teleport 18, the database service performs regular health checks for
+registered databases. Health status and any networking issues are reported in
+the Teleport web UI and reflected in `db_server` resources.
+
+In highly-available deployments with multiple database services, Teleport
+prioritizes healthy services when routing user connections. For more
+information, see the [database health checks
+guide](docs/pages/enroll-resources/database-access/guides/health-checks.mdx).
+
+#### Kubernetes CRD
+
+In Teleport 18, the `kubernetes_resources` control of [role version
+8](https://goteleport.com/docs/reference/resources/#role-versions) has been
+updated to support Kubernetes Custom Resource Definitions, and the behavior of
+the `kind` and `namespace` fields has been updated to allow finer control. When
+`kind: namespace` is set, it now refers only to the Kubernetes namespace itself,
+not to all resources within the namespace. The `kind` field now expects the
+plural version of the resource name (e.g. `pods` instead of `pod`), and a new
+`api_group` field has been added, which must match the `apiGroup` that the
+Kubernetes resource belongs to.
+
+##### Examples
+
+A role which allows access to the CronTab CRD.
+ +```yaml +kind: role +metadata: + name: kube-access-v8 +spec: + allow: + kubernetes_groups: + - '{{internal.kubernetes_groups}}' + kubernetes_labels: + '*': '*' + kubernetes_resources: + - api_group: stable.example.com + kind: crontabs + name: '*' + namespace: '*' + verbs: + - '*' + deny: {} +version: v8 +``` + +Converting a v7 Role to a v8 Role. Note the addition of the now required +`api_group` field and the change from **deployment** to **deployments** and from +**persistentvolume** to **persistentvolumes** for the `kind` field. + +```yaml +kind: role +metadata: + name: kube-access-v7 +spec: + allow: + kubernetes_groups: + - '{{internal.kubernetes_groups}}' + kubernetes_labels: + '*': '*' + kubernetes_resources: + - kind: deployment + name: '*' + namespace: default + verbs: + - '*' + - kind: persistentvolume + name: '*' + verbs: + - '*' + deny: {} +version: v7 +``` + +```yaml +kind: role +metadata: + name: kube-access-v8 +spec: + allow: + kubernetes_groups: + - '{{internal.kubernetes_groups}}' + kubernetes_labels: + '*': '*' + kubernetes_resources: + - api_group: apps + kind: deployments + name: '*' + namespace: default + verbs: + - '*' + - api_group: '' + kind: persistentvolumes + name: '*' + verbs: + - '*' + deny: {} +version: v8 +``` + +Granting access to all items within a namespace. Note that in v8 there are two +entries, the first is for the namespace itself and the second is for all entries +within the namespace. 
+ +```yaml +kind: role +metadata: + name: kube-access-v7-ns +spec: + allow: + kubernetes_groups: + - '{{internal.kubernetes_groups}}' + kubernetes_labels: + '*': '*' + kubernetes_resources: + - kind: namespace + name: default + verbs: + - '*' + deny: {} +version: v7 +``` + +```yaml +kind: role +metadata: + name: kube-access-v8-ns +spec: + allow: + kubernetes_groups: + - '{{internal.kubernetes_groups}}' + kubernetes_labels: + '*': '*' + kubernetes_resources: + - api_group: '' + kind: namespaces + name: default + verbs: + - '*' + - api_group: '*' + kind: '*' + name: '*' + namespace: default + verbs: + - '*' + deny: {} +version: v8 +``` + +For more information, refer to the [docs](docs/pages/enroll-resources/kubernetes-access/controls.mdx#kubernetes_resources). + +### Breaking changes and deprecations + +#### TLS cipher suites TLS cipher suites with known security issues can no longer be manually configured in the Teleport YAML configuration file. If you do not explicitly @@ -12,13 +950,13 @@ configure any of the listed TLS cipher suites, you are not affected by this change. Teleport 18 removes support for: -- `tls-rsa-with-aes-128-cbc-sha` -- `tls-rsa-with-aes-256-cbc-sha` -- `tls-rsa-with-aes-128-cbc-sha256` -- `tls-rsa-with-aes-128-gcm-sha256` -- `tls-rsa-with-aes-256-gcm-sha384` -- `tls-ecdhe-ecdsa-with-aes-128-cbc-sha256` -- `tls-ecdhe-rsa-with-aes-128-cbc-sha256` +* `tls-rsa-with-aes-128-cbc-sha` +* `tls-rsa-with-aes-256-cbc-sha` +* `tls-rsa-with-aes-128-cbc-sha256` +* `tls-rsa-with-aes-128-gcm-sha256` +* `tls-rsa-with-aes-256-gcm-sha384` +* `tls-ecdhe-ecdsa-with-aes-128-cbc-sha256` +* `tls-ecdhe-rsa-with-aes-128-cbc-sha256` #### Terraform provider role defaults @@ -30,7 +968,7 @@ the Kubernetes Operator. This might change the default options of role where not every option was explicitly set. 
For example: -``` +```hcl resource "teleport_role" "one-option-set" { version = "v7" metadata = { @@ -54,6 +992,7 @@ role option differences, please review it and check that the default changes are acceptable. If they are not, you must set the options to `false`. Here's a plan example for the code above: + ``` # teleport_role.one-option-set will be updated in-place ~ resource "teleport_role" "one-option-set" { @@ -75,27 +1014,50 @@ Here's a plan example for the code above: #### AWS endpoint URL mode removed -The AWS endpoint URL mode (`--endpoint-url`) has been removed for `tsh proxy -aws` and `tsh aws`. Users using this mode should use the default HTTPS Proxy -mode from now on. +The `tsh aws` and `tsh proxy aws` commands no longer support being used as +custom service endpoints. Instead, users should use them as `HTTPS_PROXY` proxy +servers. + +For example, the following command will no longer work: `aws s3 ls +--endpoint-url https://localhost:LOCAL_PROXY_PORT`. To achieve a similar result +with Teleport 18, run `HTTPS_PROXY=http://localhost:LOCAL_PROXY_PORT aws s3 ls`. + +#### TOTP for per-session MFA + +Starting with Teleport 18, `tsh` will no longer allow for using TOTP as a second +factor for per-session MFA. TOTP continues to be accepted as a second factor for +the initial login. + +#### Linux kernel 3.2 required + +On Linux, Teleport now requires Linux kernel version 3.2 or later. ### Other changes -#### Configurable keyboard layouts for Windows desktop sessions +#### PKCE support for OpenID Connect + +Teleport 18 includes support for Proof Key for Code Exchange (PKCE) in OpenID +Connect flows. PKCE is a security enhancement that ensures that attackers who +can intercept the authorization code will not be able to exchange it for an +access token. + +To enable PKCE, set `pkce_mode: enabled` in your OIDC connector. Future versions +of Teleport may enable PKCE by default. 
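+
+As a minimal sketch of enabling PKCE (the connector fields other than
+`pkce_mode` are illustrative placeholders):
+
+```yaml
+kind: oidc
+version: v3
+metadata:
+  name: example-oidc-connector
+spec:
+  issuer_url: https://idp.example.com
+  client_id: <client-id>
+  client_secret: <client-secret>
+  pkce_mode: enabled
+```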
-Teleport's Account Settings page now exposes an option to set your preferred -keyboard layout for Windows desktop sessions. +#### Cache improvements -Note: in order for this setting to take affect, agent's running Teleport's -`windows_desktop_service` must be upgraded to v18.0.0 or later. +Teleport 18 ships with an improved cache implementation that stores resources +directly instead of storing their JSON-encoded representation. In addition to +performance gains, this new storage mechanism will also improve compatibility +between older agents and newer versions of resources. #### Windows desktop discovery enhancements Teleport's LDAP-based discovery mechanism for Windows desktops now supports: -- a configurable discovery interval -- custom RDP ports -- the ability to run multiple separate discovery configurations, allowing you to +* a configurable discovery interval +* custom RDP ports +* the ability to run multiple separate discovery configurations, allowing you to configure finely-grained discovery policies without running multiple agents To update your configuration, move the `discovery` section to `discovery_configs`: @@ -113,4932 +1075,49 @@ windows_desktop_service: + rdp_port: 9989 # optional, defaults to 3389 ``` -#### Legacy ALPN connection upgrade mode has been removed - -Teleport v15.1 added WebSocket upgrade support for Teleport proxies behind -layer 7 load balancers and reverse proxies. The legacy ALPN upgrade mode using -`alpn` or `alpn-ping` as upgrade types was left as a fallback until v17. -Teleport v18 removes the legacy upgrade mode entirely including the use of -environment variable `TELEPORT_TLS_ROUTING_CONN_UPGRADE_MODE`. - -## 16.0.0 (xx/xx/xx) - -### Breaking changes - -#### Opsgenie plugin annotations - -Opsgenie plugin users, role annotations must now contain -`teleport.dev/notify-services` to receive notification on Opsgenie. -`teleport.dev/schedules` is now the label used to determine auto approval flow. 
-See [the Opsgenie plugin documentation](docs/pages/admin-guides/access-controls/access-request-plugins/opsgenie.mdx) -for setup instructions. - -#### Teleport Assist has been removed - -Teleport Assist chat has been removed from Teleport 16. `auth_service.assist` and `proxy_service.assist` -options have been removed from the configuration. Teleport will not start if these options are present. - -During the migration from v15 to v16, the options mentioned above should be removed from the configuration. - -#### DynamoDB permission requirements have changed - -Teleport clusters using the dynamodb backend must now have the `dynamodb:ConditionCheckItem` -permission. For a full list of all required permissions see the Teleport [Backend Reference](docs/pages/reference/backends.mdx#dynamodb). - -#### Disabling multi-factor authentication_type - -Support for disabling multi-factor authentication has been removed - -#### Machine ID and OpenSSH client config changes - -Users with custom `ssh_config` should modify their ProxyCommand to use the new, -more performant, `tbot ssh-proxy-command`. See the -[v16 upgrade guide](docs/pages/reference/machine-id/v16-upgrade-guide.mdx) for -more details. - -#### Default keyboard shortcuts in Teleport Connect have been changed - -On Windows and Linux, some of the default shortcuts conflicted with the default bash or nano shortcuts -(e.g. Ctrl + E, Ctrl + K). -On those platforms, the default shortcuts have been changed to a combination of Ctrl + Shift + *. -We also updated the shortcut to open a new terminal on macOS to Control + Shift + \`. -See [configuration](docs/pages/connect-your-client/teleport-connect.mdx#configuration) -for the current list of shortcuts. - -## 15.0.0 (xx/xx/24) - -### New features - -#### FIPS now supported on ARM64 - -Teleport 15 now provides FIPS-compliant Linux builds on ARM64. Users will now -be able to run Teleport in FedRAMP/FIPS mode on ARM64. 
- -#### Hardened AMIs now produced for ARM64 - -Teleport 15 now provides hardened AWS AMIs on ARM64. - -#### Streaming session playback - -Prior to Teleport 15, `tsh play` and the web UI would download the entire -session recording before starting playback. As a result, playback of large -recordings could be slow to start, and may fail to play at all in the browser. - -In Teleport 15, session recordings are streamed from the Auth Service, allowing -playback to start before the entire session is downloaded and unpacked. - -Additionally, `tsh play` now supports a `--speed` flag for adjusting the -playback speed. - -#### Standalone Teleport Operator - -Prior to Teleport 15, the Teleport Kubernetes Operator had to run as a sidecar -of the Teleport auth. It was not possible to use the operator in Teleport -Enterprise (Cloud) or against a Teleport cluster not deployed with the -`teleport-cluster` Helm chart. - -In Teleport 15, the Teleport Operator can reconcile resources in any Teleport -cluster. Teleport Enterprise (Cloud) users can now use the operator to manage -their resources. - -When deployed with the `teleport-cluster` chart, the operator now runs in a -separate pod. This ensures that Teleport's availability won't be impacted if -the operator becomes unready. - -See [the Standalone Operator guide](docs/pages/admin-guides/infrastructure-as-code/teleport-operator/teleport-operator-standalone.mdx) -for installation instructions. - -#### Teleport Operator now supports roles v6 and v7 - -Starting with Teleport 15, newly supported kinds will contain the resource version. -For example: `TeleportRoleV6` and `TeleportRoleV7` kinds will allow users to -create Teleport Roles v6 and v7. - -Existing kinds will remain unchanged in Teleport 15, but will be renamed in -Teleport 16 for consistency. 
- -To migrate an existing Custom Resource (CR) `TeleportRole` to -a `TeleportRoleV7`, you must: -- upgrade Teleport and the operator to v15 -- annotate the exiting `TeleportRole` CR with `teleport.dev/keep: "true"` -- delete the `TeleportRole` CR (it won't delete the role in Teleport thanks to the annotation) -- create a new `TeleportRoleV7` CR with the same name - -### Breaking changes and deprecations - -#### RDP engine requires RemoteFX - -Teleport 15 includes a new RDP engine that leverages the RemoteFX codec for -improved performance. Additional configuration may be required to enable -RemoteFX on your Windows hosts. - -If you are using our authentication package for local users, the v15 installer -will automatically enable RemoteFX for you. - -Alternatively, you can enable RemoteFX by updating the registry: - -```powershell -Set-ItemProperty -Path 'HKLM:\Software\Policies\Microsoft\Windows NT\Terminal Services' -Name 'ColorDepth' -Type DWORD -Value 5 -Set-ItemProperty -Path 'HKLM:\Software\Policies\Microsoft\Windows NT\Terminal Services' -Name 'fEnableVirtualizedGraphics' -Type DWORD -Value 1 -``` - -If you are using Teleport with Windows hosts that are part of an Active -Directory environment, you should enable RemoteFX via group policy. - -Under Computer Configuration > Administrative Templates > Windows Components > -Remote Desktop Services > Remote Desktop Session Host, enable: - -1. Remote Session Environment > RemoteFX for Windows Server 2008 R2 > Configure RemoteFX -1. Remote Session Environment > Enable RemoteFX encoding for RemoteFX clients designed for Windows Server 2008 R2 SP1 -1. Remote Session Environment > Limit maximum color depth - -Detailed instructions are available in the -[setup guide](docs/pages/enroll-resources/desktop-access/active-directory.mdx#enable-remotefx). -A reboot may be required for these changes to take effect. 
- -#### `tsh ssh` - -When running a command on multiple nodes with `tsh ssh`, each line of output -is now labeled with the hostname of the node it was written by. Users that -rely on parsing the output from multiple nodes should pass the `--log-dir` flag -to `tsh ssh`, which will create a directory where the separated output of each node -will be written. - -#### `drop` host user creation mode - -The `drop` host user creation mode has been removed in Teleport 15. It is replaced -by `insecure-drop`, which still creates temporary users but does not create a -home directory. Users who need home directory creation should either wrap `useradd`/`userdel` -or use PAM. - -#### Remove restricted sessions for SSH - -The restricted session feature for SSH has been deprecated since Teleport 14 and -has been removed in Teleport 15. We recommend implementing network restrictions -outside of Teleport (iptables, security groups, etc). - -#### Packages no longer published to legacy Debian and RPM repos - -`deb.releases.teleport.dev` and `rpm.releases.teleport.dev` were deprecated in -Teleport 11. Beginning in Teleport 15, Debian and RPM packages will no longer be -published to these repos. Teleport 14 and prior packages will continue to be -published to these repos for the remainder of those releases' lifecycle. - -All users are recommended to switch to `apt.releases.teleport.dev` and -`yum.releases.teleport.dev` repositories as described in installation -[instructions](docs/pages/installation.mdx). - -The legacy package repos will be shut off in mid 2025 after Teleport 14 -has been out of support for many months. - -#### Container images - -Teleport 15 contains several breaking changes to improve the default security -and usability of Teleport-provided container images. 
- -##### "Heavy" container images are discontinued - -In order to increase default security in 15+, Teleport will no longer publish -[container images containing a shell and command line environment](https://github.com/gravitational/teleport/blob/branch/v14/build.assets/charts/Dockerfile) -to Elastic Container Registry's [gravitational/teleport](https://gallery.ecr.aws/gravitational/teleport) -image repo. Instead, all users should use the [distroless images](https://github.com/gravitational/teleport/blob/branch/v15/build.assets/charts/Dockerfile-distroless) -introduced in Teleport 12. These images can be found at: - -* https://gallery.ecr.aws/gravitational/teleport-distroless -* https://gallery.ecr.aws/gravitational/teleport-ent-distroless - -For users who need a shell in a Teleport container, a "debug" image is -available which contains BusyBox, including a shell and many CLI tools. Find -the debug images at: - -* https://gallery.ecr.aws/gravitational/teleport-distroless-debug -* https://gallery.ecr.aws/gravitational/teleport-ent-distroless-debug - -Do not run debug container images in production environments. - -Heavy container images will continue to be published for Teleport 13 and 14 -throughout the remainder of these releases' lifecycle. - -##### Helm cluster chart FIPS mode changes - -The teleport-cluster chart no longer uses versionOverride and extraArgs to set FIPS mode. - -Instead, you should use the following values file configuration: -``` -enterpriseImage: public.ecr.aws/gravitational/teleport-ent-fips-distroless -authentication: - localAuth: false -``` - -##### Multi-architecture Teleport Operator images - -Teleport Operator container images will no longer be published with architecture -suffixes in their tags (for example: `14.2.1-amd64` and `14.2.1-arm`). Instead, -only a single tag will be published with multi-platform support (e.g., `15.0.0`). 
-If you use Teleport Operator images with an architecture suffix, remove the -suffix and your client should automatically pull the platform-appropriate image. -Individual architectures may be pulled with `docker pull --platform `. - -##### Quay.io registry - -The quay.io container registry was deprecated and Teleport 12 is the last -version to publish images to quay.io. With Teleport 15's release, v12 is no -longer supported and no new container images will be published to quay.io. - -For Teleport 8+, replacement container images can be found in [Teleport's public ECR registry](https://gallery.ecr.aws/gravitational). - -Users who wish to continue to use unsupported container images prior to -Teleport 8 will need to download any quay.io images they depend on and mirror -them elsewhere before July 2024. Following brownouts in May and June, Teleport -will disable pulls from all Teleport quay.io repositories on Wednesday July 3, -2024. - -#### Amazon AMIs - -Teleport 15 contains several breaking changes to improve the default security -and usability of Teleport-provided Amazon AMIs. - -##### Hardened AMIs - -Teleport-provided Amazon Linux 2023 previously only supported x86_64/amd64. -Starting with Teleport 15, arm64-based AMIs will be produced. However, the -naming scheme for these AMIs has been changed to include the architecture. - -- Previous naming scheme: `teleport-oss-14.0.0-$TIMESTAMP` -- New naming scheme: `teleport-oss-15.0.0-x86_64-$TIMESTAMP` - -##### Legacy Amazon Linux 2 AMIs - -Teleport-provided Amazon Linux 2 AMIs were deprecated, and Teleport 14 is the -last version to produce such legacy AMIs. With Teleport 15's release, only -the newer hardened Amazon Linux 2023 AMIs will be produced. - -The legacy AMIs will continue to be published for Teleport 13 and 14 throughout -the remainder of these releases' lifecycle. 
- -#### `windows_desktop_service` no longer writes to the NTAuth store - -In Teleport 15, the process that periodically publishes Teleport's user CA to -the Windows NTAuth store has been removed. It is not necessary for Teleport to -perform this step since it must be done by an administrator at installation -time. As a result, Teleport's service account can use more restrictive -permissions. +#### Customizable keyboard layouts for remote desktop sessions -#### Example AWS cluster deployments updated +The web UI's account settings page now includes an option for +setting your desired keyboard layout for remote desktop sessions. -The AWS terraform examples for Teleport clusters have been updated to use the -newer hardened Amazon Linux 2023 AMIs. Additionally, the default architecture -and instance type has been changed to ARM64/Graviton. +This keyboard layout will be respected by agents running Teleport 18 +or later. -As a result of this modernization, the legacy monitoring stack configuration -used with the legacy AMIs has been removed. +#### Faster user lookups on domain-joined Windows workstations -#### `teleport-cluster` Helm chart changes +Teleport 18 is built with Go 1.24, which includes an optimized user lookup +implementation. As a result, the +[workarounds](https://goteleport.com/docs/faq/#tsh-is-very-slow-on-windows-what-to-do) +for avoiding slow lookups in tsh and Teleport Connect are no longer necessary. -Due to the new separate operator deployment, the operator is deployed by a subchart. -This causes the following breaking changes: -- `installCRDs` has been replaced by `operator.installCRDs` -- `teleportVersionOverride` does not set the operator version anymore, you must - use `operator.teleportVersionOverride` to override the operator version. +#### Agent Managed Updates v2 enhancements -Note: version overrides are dangerous and not recommended. Each chart version -is designed to run a specific Teleport and operator version. 
If you want to
-deploy a specific Teleport version, use Helm's `--version X.Y.Z` instead.
+Managed Updates v2 can now track which versions agents are running and use this
+information to progress the rollout. Only Linux agents are currently supported;
+agent reports for `teleport-kube-agent` will come in a future update. Reports
+are generated every minute and only count agents that have been connected and
+stable for at least a minute.

-The operator now joins using a Kubernetes ServiceAccount token. To validate the
-token, the Teleport Auth Service must have access to the `TokenReview` API.
-The chart configures this for you since v12, unless you disabled `rbac` creation.

+You can now observe the progress of agent managed updates by using
+`tctl autoupdate agents status` and `tctl autoupdate agents report`.

##### Helm cluster chart FIPS mode changes

+If the strategy is `halt-on-error`, the group will be marked as done and the
+rollout will continue only after at least 90% of the agents are updated.

-The teleport-cluster chart no longer uses versionOverride and extraArgs to set FIPS mode.
-
-Instead, you should use the following values file configuration:
-
-```
-enterpriseImage: public.ecr.aws/gravitational/teleport-ent-fips-distroless
-authentication:
-  localAuth: false
+You can now manually trigger a group, mark it as done, or roll back an update
+with `tctl`:
+```shell
+tctl autoupdate agents start-update [group1, group2, ...]
+tctl autoupdate agents mark-done [group1, group2, ...]
+tctl autoupdate agents rollback [group1, group2, ...]
```
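The rollout groups referenced by these commands come from the Managed Updates v2 configuration. The following is a rough sketch of an `autoupdate_config` resource under the assumption of the schema described in the Managed Updates documentation; the group names and hours are illustrative only.

```yaml
kind: autoupdate_config
metadata:
  name: autoupdate-config
spec:
  agents:
    mode: enabled
    # With halt-on-error, a later group starts updating only after
    # earlier groups are done (at least 90% of agents updated).
    strategy: halt-on-error
    schedules:
      regular:
        - name: dev   # illustrative group
          days: ["Mon", "Tue", "Wed", "Thu"]
          start_hour: 4
        - name: prod  # illustrative group
          days: ["Mon", "Tue", "Wed", "Thu"]
          start_hour: 14
```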
Changing a resource version will cause -Terraform to delete and re-create the resource. This ensures the correct -defaults are set. - -Existing resources will continue to work as Terraform already imported their -version. However, new resources will require an explicit version. - -### Other changes - -#### Increased password length - -The minimum password length has been increased to 12 characters. - -#### Increased account lockout interval - -The account lockout interval has been increased to 30 minutes. - -## 14.0.0 (09/20/23) - -Teleport 14 brings the following new major features and improvements: - -- Access lists -- Unified resource view -- ClickHouse support for database access -- Advanced audit log -- Kubernetes apps auto-discovery -- Extended Kubernetes per-resource RBAC -- Oracle database access audit logging support -- Enhanced PuTTY support -- Support for TLS routing in Terraform deployment examples -- Discord and ServiceNow hosted plugins -- Limited passwordless access for local Windows users in Teleport Community - Edition -- Machine ID: Kubernetes Secret destination - -In addition, this release includes several changes that affect existing -functionality listed in the “Breaking changes” section below. Users are advised -to review them before upgrading. - -### New features - -#### Advanced audit log - -Teleport 14 includes support for a new audit log powered by Amazon S3 and Athena -that supports efficient searching, sorting, and filtering operations. Teleport -Enterprise (Cloud) customers will have their audit log automatically migrated to -this new backend. - -See the documentation [here](docs/pages/reference/backends.mdx#athena). - -#### Access lists - -Teleport 14 introduces foundational support for Access Lists, an extension to -the short-lived Access Request system targeted towards longer-term access. -Administrators can add users to Access Lists granting them long-term permissions -within the cluster. 
- -As the feature is being developed, future Teleport releases will add support for -periodic audit reviews and deeper integration of Access Lists with Okta. - -You can find existing Access Lists documentation [here](docs/pages/admin-guides/access-controls/access-lists/guide.mdx). - -#### Unified resources view - -The web UI in Teleport 14 has been updated to show all resources in a single -unified view. - -This is the first step in a series of changes designed to support a -customizable Teleport experience and make it easier to access the resources that -are most important to you. - -#### Kubernetes apps auto-discovery - -Teleport 14 updates its auto-discovery capabilities with support for web -applications in Kubernetes clusters. When connected to a Kubernetes cluster (or -deployed as a Helm chart), the Teleport Discovery Service will automatically find -and enroll web applications with your Teleport cluster. - -See documentation [here](docs/pages/enroll-resources/auto-discovery/kubernetes-applications/kubernetes-applications.mdx). - -#### Extended Kubernetes per-resource RBAC - -Teleport 14 extends resource-based Access Requests to support more Kubernetes -resources than just pods, including custom resources, and verbs. Note that this -feature requires role version `v7`. - -See Kubernetes resources documentation to see a full list of [supported -resources](docs/pages/enroll-resources/kubernetes-access/controls.mdx#kubernetes_resources). - -#### ClickHouse support for database access - -Teleport 14 adds database access support for ClickHouse HTTP and native (TCP) -protocols. When using HTTP protocol, the user's query activity is captured in -the Teleport audit log. - -See how to connect ClickHouse to Teleport -[here](docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/clickhouse-self-hosted.mdx). 
- -#### Oracle database access audit logging support - -In Teleport 14, database access for Oracle integration is updated with query -audit logging support. - -See documentation on how to configure it in the [Oracle guide](docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/oracle-self-hosted.mdx). - -#### Limited passwordless access for local Windows users in Teleport Community Edition - -In Teleport 14, access to Windows desktops with local Windows users has been -extended to Community Edition. Teleport will permit users to register and -connect to up to 5 desktops with local users without an enterprise license. - -For more information on using Teleport with local Windows users, see [docs](docs/pages/enroll-resources/desktop-access/getting-started.mdx). - -#### Discord and ServiceNow hosted plugins - -Teleport 14 includes support for hosted Discord and ServiceNow plugins. Teleport -Enterprise (Cloud) users can configure Discord and ServiceNow integrations to -receive Access Request notifications. - -Discord plugin is available now, ServiceNow is coming in 14.0.1. - -#### Enhanced PuTTY Support - -tsh on Windows now supports the `tsh puttyconfig` command, which can -configure saved sessions inside the well-known PuTTY client to connect to -Teleport-protected servers. - -For more information, see [docs](docs/pages/connect-your-client/putty-winscp.mdx). - -#### Support for TLS routing in Terraform deployment examples - -The ha-autoscale-cluster and starter-cluster Terraform deployment examples now -support a `USE_TLS_ROUTING` variable to enable TLS routing inside the deployed -Teleport cluster. - -#### Machine ID: Kubernetes Secret destination - -In Teleport 14, `tbot` can now be configured to write artifacts such as -credentials and configuration files directly to a Kubernetes secret rather than -a directory on the local file system. - -For more information, see [docs](docs/pages/reference/machine-id/configuration.mdx). 
- -### Breaking changes and deprecations - -Please familiarize yourself with the following potentially disruptive changes in -Teleport 14 before upgrading. - -#### SSH node open dial no longer supported - -Teleport 14 no longer allows connecting to OpenSSH servers not registered with -the cluster. Follow the updated agentless OpenSSH integration [guide](docs/pages/enroll-resources/server-access/openssh/openssh-agentless.mdx) -to register your OpenSSH nodes in the cluster’s inventory. - -You can set `TELEPORT_UNSTABLE_UNLISTED_AGENT_DIALING=yes` environment variable -on Teleport proxy to temporarily re-enable the open dial functionality. The -environment variable will be removed in Teleport 15. - -#### Proxy protocol default change - -Starting from version 14, Teleport will require users to explicitly enable or -disable PROXY protocol in their `proxy_service`/`auth_service` configuration -using `proxy_protocol: on|off` option. - -Users who run their proxies behind L4 load balancers with PROXY protocol -enabled, should set `proxy_protocol: on`. Users who don’t run Teleport behind -PROXY protocol enabled load balancers, should disable `proxy_protocol: off` -explicitly for security reasons. - -By default, Teleport will accept the PROXY line but will prevent connections -with IP pinning enabled. IP pinning users will need to explicitly enable/disable -proxy protocol like explained above. - -See more details in our [documentation](docs/pages/admin-guides/management/security/proxy-protocol.mdx). - -#### Legacy deb/rpm package repositories are deprecated - -Teleport 14 will be the last release published to the legacy package -repositories at `deb.releases.teleport.dev` and `rpm.releases.teleport.dev`. -Starting with Teleport 15, packages will only be published to the new -repositories at `apt.releases.teleport.dev` and `yum.releases.teleport.dev`. 
- -All users are recommended to switch to `apt.releases.teleport.dev` and -`yum.releases.teleport.dev` repositories as described in installation -[instructions](docs/pages/installation.mdx). - -#### `Cf-Access-Token` header no longer included with requests to Teleport-protected applications - -Starting from Teleport 14, the `Cf-Access-Token` header containing the signed -JWT token will no longer be included by default with all requests to -Teleport-protected applications. -All requests will still include `Teleport-JWT-Assertion` containing the JWT -token. - -See documentation for details on how to inject the JWT token into any header -using [header rewriting](docs/pages/enroll-resources/application-access/jwt/introduction.mdx#inject-jwt). - -#### tsh db CLI commands changes - -In Teleport 14 tsh db sub-commands will attempt to select a default value for -`--db-user` or `--db-name` flags if they are not provided by the user by -examining their allowed `db_users` and `db_names`. - -The flags `--cert-file` and `--key-file` for tsh proxy db command were also -removed, in favor of the `--tunnel` flag that opens an authenticated local -database proxy. - -#### MongoDB versions prior to 3.6 are no longer supported - -Teleport 14 includes an update to the MongoDB driver. - -Due to the MongoDB team dropping support for servers prior to version 3.6 (which -reached EOL on April 30, 2021), Teleport also will no longer be able to support -these old server versions. - -#### Symlinks for `~/.tsh/environment` no longer supported - -In order to strengthen the security in Teleport 14, file loading from home -directories where the path includes a symlink is no longer allowed. The most -common use case for this is loading environment variables from the -`~/.tsh/environment` file. This will still work normally as long as the path -includes no symlinks. 
- -#### Deprecated audit event - -Teleport 14 deprecates the `trusted_cluster_token.create` audit event, replacing -it with a new `join_token.create` event. The new event is emitted when any join -token is created, whether it be for trusted clusters or other Teleport services. - -Teleport 14 will emit both events when a trusted cluster join token is created. -Starting in Teleport 15, the `trusted_cluster_token.create` event will no longer -be emitted. - -### Other changes - -#### DynamoDB billing mode defaults to on-demand - -In Teleport 14, when creating new DynamoDB tables, Teleport will now create them -with the billing mode set to `pay_per_request` instead of being set to provisioned -mode. - -The old behavior can be restored by setting the `billing_mode` option in the -storage configuration. - -#### Default role version is v7 - -The default role version in Teleport 14 is `v7` which enables support for extended -Kubernetes per-resource RBAC, and changes the `kubernetes_resources` default to -wildcard for better getting started user experience. - -You can review role versions in the [documentation](docs/pages/reference/access-controls/roles.mdx). - -#### Stricter name validation for auto-discovered databases - -In Teleport 14, database discovery via `db_service` config enforces the same name -validation as for databases created via tctl, static config, and -`discovery_service`. - -As such, database names in AWS, GCP and Azure must start with a letter, contain -only letters, digits, and hyphens and end with a letter or digit (no trailing -hyphens). - -#### Access Request API changes - -Teleport 14 introduces a new and more secure API for submitting Access Requests. -As a result, tsh users may be prompted to upgrade their clients before -submitting an Access Request. - -#### Desktop discovery name change - -Desktops discovered via LDAP will have a short suffix appended to their name to -ensure uniqueness. 
Users will notice duplicate desktops (with and without the -suffix) for up to an hour after upgrading. Connectivity to desktops will not be -affected, and the old record will naturally expire after 1 hour. - -#### Machine ID : New configuration schema - -Teleport 14 introduces a new configuration schema (v2) for Machine ID’s agent -`tbot`. The new schema is designed to be more explicit and more extensible: - -```yaml -version: v2 -onboarding: - token: gcp-bot - join_method: gcp -storage: - type: memory -auth_server: example.teleport.sh:443 -outputs: - - type: identity - destination: - type: kubernetes_secret - name: my-secret -​ - - type: kubernetes - kubernetes_cluster: my-cluster - destination: - type: directory - path: ./k8s -​ - - type: database - service: my-postgres-service - database: postgres - username: postgres - destination: - type: directory - path: ./db -​ - - type: application - app_name: my-app - destination: - type: directory - path: ./app -``` - -`tbot` will continue to support the v1 schema for several Teleport versions but it -is recommended that you migrate to v2 as soon as possible to benefit from new -Machine ID features. - -For more details and guidance on how to upgrade to v2, see [docs](https://github.com/gravitational/teleport/blob/branch/v14/docs/pages/reference/machine-id/v14-upgrade-guide.mdx). - -## 13.0.1 (05/xx/23) - -* Helm Charts - * Fixed issue with invite token being incorrectly overridden when it was manually created. [#26055](https://github.com/gravitational/teleport/pull/26055) - -### Breaking Changes - -Please familiarize yourself with the following potentially disruptive changes in -Teleport 13 before upgrading. 
- -#### Teleport Kubernetes Agent helm chart - -When upgrading to Teleport 13, users of the Teleport Kubernetes Agent Helm chart -that manually create their own Teleport token secret (`secretName=` and no auth token provided) -will need to set the following values: - -```yaml -# Manages the join token secret creation and its name. -joinTokenSecret: - # create controls whether the Helm chart should create and manage the join token - # secret. - # If false, the chart assumes that the secret with the configured name already exists at the - # installation namespace. - create: false - # Name of the Secret to store the teleport join token. - name: -``` - -The Helm chart parameter `secretName` was deprecated in Teleport 13 in favor of -`joinTokenSecret.name`. `joinTokenSecret.create` indicates whether the Helm -chart should create and manage the join token secret. If `create` is set to -`false`, the chart assumes that the secret with the configured name already -exists at the installation namespace. - -## 13.0.0 (05/08/23) - -Teleport 13 brings the following marquee features and improvements: - -* (Preview) Automatic agent upgrades. -* (Preview) TLS routing through ALB for accessing servers, Kubernetes clusters, and applications. -* (Preview, Enterprise-only) Ability to import applications and groups from Okta. -* (Preview) Teleport support for AWS OpenSearch. -* (Preview) View and control access to OpenSSH nodes natively in Teleport. -* Cross-cluster search for Teleport Connect. -* Performance improvements for accessing Kubernetes clusters. -* Universal binaries (including Apple Silicon) for macOS. -* Simplified RDS onboarding flow in Access Management UI. -* Light theme for Web UI. - -### (Preview) Automatic agent upgrades - -In Teleport 13 users can configure their Teleport agents deployed via apt/yum -repositories or a Helm chart to be upgraded automatically. 
- -### (Preview) TLS routing through ALB for accessing servers, Kubernetes clusters, and applications - -Teleport 13 adds single-port TLS routing mode support for servers, Kubernetes -clusters, and applications for clusters deployed behind application layer load -balancers such as AWS ALB. - -### (Preview, Enterprise-only) Ability to import applications and groups from Okta - -In Teleport 13 users can import apps and groups from Okta and use Teleport -Access Requests for requesting short-term access to them. This feature is only -available in the Teleport Enterprise edition. - -### (Preview) Teleport support for AWS OpenSearch - -Teleport users can now connect to AWS OpenSearch databases. - -### (Preview) View and control access to OpenSSH nodes natively in Teleport - -In Teleport 13 users will be able to register OpenSSH nodes as a resource with the -cluster. - -This will allow users to view the OpenSSH nodes in the Web UI and with `tsh ls`, -and use RBAC to control access to them. - -See the updated [OpenSSH integration -guide](docs/pages/enroll-resources/server-access/openssh/openssh-agentless.mdx). - -### Cross-cluster search for Teleport Connect - -Teleport Connect now includes a new search experience, allowing you to search -for and connect to resources across all logged-in clusters. - -### Performance improvements for accessing Kubernetes clusters - -In Teleport 13 we improved the way the Teleport Proxy Service handles Kubernetes -credentials. - -Users will experience better performance when interacting with Kubernetes -clusters using `kubectl` or via the API. - -### Universal binaries (including Apple Silicon) for macOS - -Teleport 13 binaries (including Teleport Connect) will have universal -architecture and run natively on both Intel and ARM macOS systems.
- -### Simplified RDS onboarding flow in Access Management UI - -When connecting an RDS database using Teleport 13 Access Management UI, users -can connect their AWS account and select the RDS database to add instead of -entering details manually. - -To try out the new flow, add an RDS database using the Resource Management UI -in your cluster’s Web UI dashboard. - -### Light theme for Web UI - -Teleport's web UI includes an optional light theme. - -The light theme is enabled by default but can be changed back to the dark theme -via the top-right corner user settings menu. - -### Windows desktop session recording export - -Session recordings for Windows desktop sessions can now be exported to video -format for offline playback with the new `tsh recordings export` command. - -### SFTP in Moderated Sessions - -Teleport 13 adds the ability to transfer files in Moderated Sessions. -This feature requires that both the session originator and the moderator -have joined the session via the web UI. - -### Breaking changes - -Please familiarize yourself with the following potentially disruptive changes -in Teleport 13 before upgrading. - -#### Default session join mode - -Teleport 13 defaults to observer (read-only) mode when joining SSH and Kubernetes -sessions. Prior versions of Teleport would default to peer mode for SSH sessions -and moderator mode for Kubernetes sessions. To override the default join mode, -specify the `--mode` flag with `tsh join`. - -#### CA rotation deprecation - -Teleport 13 removes support for rotating all certificate authorities with -`tctl auth rotate --type=all`. The `type` flag is now required, which ensures -that only one CA is rotated at a time, increasing cluster stability during -rotations. - -#### Join token API changes - -The default 30-minute expiry no longer applies to tokens created via YAML -resource files. If you want to enforce an expiration, ensure this is set in the -`metadata.expires` field.
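For example, a token resource created from a YAML file can carry an explicit expiration in `metadata.expires`. The name, roles, and timestamp below are illustrative, not from the original release notes:

```yaml
# Illustrative token resource; apply with `tctl create -f token.yaml`.
kind: token
version: v2
metadata:
  name: my-node-token            # hypothetical token name
  expires: "2023-12-31T00:00:00Z" # without this field, the token does not expire
spec:
  roles: [Node]
  join_method: token
```
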
Tokens created using `tctl nodes add` and `tctl tokens add` -will continue to have a default 30m expiry applied. - -Additionally, users of Teleport’s API module will note that the `CreateToken` -and `UpsertToken` RPCs are now deprecated in favor of `CreateTokenV2` and -`UpsertTokenV2`. The new V2 variants no longer have a default expiry, so be sure -to set a TTL if you want your tokens to expire. - -The original RPCs are still supported in Teleport 13 and will be removed -completely in Teleport 14. - -#### Enhanced user validation - -Teleport 13 will refuse to create or update users that reference non-existent -roles. In some circumstances, older versions of Teleport would permit you to -create users and assign them invalid roles. In Teleport 13 this is a hard error. - -#### Quay.io registry - -The Quay.io registry was deprecated in Teleport 11 and, starting with Teleport 13, -Teleport container images are no longer published to it. - -Users should use the [public ECR -registry](https://gallery.ecr.aws/gravitational). - -#### Helm chart uses `distroless`-based container image by default - -Starting with Teleport 13, the Helm charts `teleport-cluster` and `teleport-kube-agent` -deploy distroless Teleport images by default. Those images are slimmer -and more secure but contain less tooling (e.g. neither `bash` nor -`apt` is available). - -The Debian-based images are deprecated and will be removed in Teleport 14. -The chart image can be reverted to the Debian-based image by setting: -```yaml -image: "public.ecr.aws/gravitational/teleport" -``` - -For debugging purposes, a "debug" image is available and contains BusyBox, -which includes a shell and most common POSIX executables: -`public.ecr.aws/gravitational/teleport-distroless-debug`. - -## 12.3.0 (05/01/23) - -This release of Teleport contains multiple improvements and bug fixes. - -* Desktop Access - * Added support for automatic Windows user creation.
[#25348](https://github.com/gravitational/teleport/pull/25348) -* CLI - * Fixed MFA permission denied error from `tsh` for non-SSH protocols. [#25430](https://github.com/gravitational/teleport/pull/25430) -* Terraform - * Fixed `AccessControlListNotSupported` error in HA terraform. [#25335](https://github.com/gravitational/teleport/pull/25335) -* Device Trust - * Updated Device Trust audit events to have descriptive types. [#25320](https://github.com/gravitational/teleport/pull/25320) - -## 12.2.5 (04/28/23) - -This release of Teleport contains multiple improvements and bug fixes. - -* Auth - * Fixed issue where Github SSO would fail if a user is a part of more than 30 teams. [#25098](https://github.com/gravitational/teleport/pull/25098) - * Fixed issue with `tsh login` with "required" hardware key policy returning "policy not met" error. [#24956](https://github.com/gravitational/teleport/pull/24956) - * Improved Device Trust logging and error reporting. [#24912](https://github.com/gravitational/teleport/pull/24912) - * Detect and warn about RPID changes when using WebAuthn. [#25289](https://github.com/gravitational/teleport/pull/25289) -* Access Management - * Fixed issue with running install script on macOS for enterprise clusters. [#25076](https://github.com/gravitational/teleport/pull/25076) -* Server Access - * Fixed issue with headless `tsh ssh` not working when used in `rsync -rsh`. [#25242](https://github.com/gravitational/teleport/pull/25242) - * Fixed issue with headless `tsh ssh` prompting users for MFA. [#25187](https://github.com/gravitational/teleport/pull/25187) - * Fixed issue with `tsh ssh` failing to connect over public address with per-session MFA. [#25223](https://github.com/gravitational/teleport/pull/25223) - * Fixed issue with `tsh scp` failing on some destination paths. [#24861](https://github.com/gravitational/teleport/pull/24861) - * Require explicit username in headless `tsh ssh`. 
[#25112](https://github.com/gravitational/teleport/pull/25112) - * Updated automatic user provisioning to sort sudoers lines by role name to ensure stable order. [#24792](https://github.com/gravitational/teleport/pull/24792) - * Updated `tsh` commands to recognize `SSH_` environment variables. [#24470](https://github.com/gravitational/teleport/pull/24470) -* Database Access - * Fixed issue with `tsh db env` and `tsh db config` not recognizing separate MySQL listener. [#24827](https://github.com/gravitational/teleport/pull/24827) -* Kubernetes Access - * Added `--set-context` flag to `tsh kube login` to allow overriding default context name. [#25253](https://github.com/gravitational/teleport/pull/25253) -* IdP - * Fixed issue with SAML IdP not being disabled properly. [#25309](https://github.com/gravitational/teleport/pull/25309) -* IP Pinning - * Fixed interoperability issues with load balancers with proxy protocol v2 enabled. [#25302](https://github.com/gravitational/teleport/pull/25302) -* CLI - * Fixed issue with cluster alerts sometimes not showing up after `tsh login`. [#25300](https://github.com/gravitational/teleport/pull/25300) -* AMIs - * Fixed issue with startup script failing to acquire lock from AWS metadata. [#25296](https://github.com/gravitational/teleport/pull/25296) -* HSM - * Fixed issue with inadvertent deletion of active HSM keys when using YubiHSM2 SDK version 2023.1. [#25208](https://github.com/gravitational/teleport/pull/25208) -* Performance & Scalability - * Improved performance of MFA ceremony. [#24804](https://github.com/gravitational/teleport/pull/24804) - -## 12.2.4 (04/18/23) - -This release of Teleport contains multiple improvements and bug fixes. - -* Auto-discovery - * Added ability to specify discovery group for discovery services. [#24716](https://github.com/gravitational/teleport/pull/24716) -* CLI - * Improved `tsh` performance on some Windows systems. 
[#24573](https://github.com/gravitational/teleport/pull/24573) - * Improved `teleport configure` error/warning reporting. [#24676](https://github.com/gravitational/teleport/pull/24676) - * Added `--raw` flag to `teleport version` command. [#24772](https://github.com/gravitational/teleport/pull/24772) -* Configuration - * Prevent proxies from trying to join cluster over reverse tunnel. [#24668](https://github.com/gravitational/teleport/pull/24668) -* Server Access - * Fixed issue with excessive audit logging when copying files over SFTP. [#24831](https://github.com/gravitational/teleport/pull/24831) - * Fixed issue with `tsh scp` not recognizing wildcard patterns. [#24831](https://github.com/gravitational/teleport/pull/24831) - * Fixed issue with `tsh scp` failing when max sessions is set to 1. [#24831](https://github.com/gravitational/teleport/pull/24831) - * Improved error reporting from `tsh scp` when file copying is disabled. [#24831](https://github.com/gravitational/teleport/pull/24831) -* Kubernetes Access - * Fixed issue with `tctl auth sign` not respecting `kube_public_addr`. [#24516](https://github.com/gravitational/teleport/pull/24516) - * Fixed memory leak when using port forwarding. [#24763](https://github.com/gravitational/teleport/pull/24763) - * Reduced log spam when using port forwarding. [#24658](https://github.com/gravitational/teleport/pull/24658) -* Database Access - * Updated `teleport db configure` to support more AWS databases. [#24494](https://github.com/gravitational/teleport/pull/24494) -* Performance & Scalability - * Reduced thundering herd effect in large clusters. [#24719](https://github.com/gravitational/teleport/pull/24719) -* Web UI - * Fixed issue with downloading files from leaf clusters when per-session MFA is enabled. [#24768](https://github.com/gravitational/teleport/pull/24768) - -## 12.2.3 (04/13/23) - -This release of Teleport contains multiple bug fixes. - -* CLI - * Fixed potential panic in `tsh ssh`. 
[#24490](https://github.com/gravitational/teleport/pull/24490) -* Performance & Scalability - * Improved `tsh ssh` latency. [#24371](https://github.com/gravitational/teleport/pull/24371) -* Kubernetes Access - * Fixed issue with moderator joining session on a cluster they don't have access to. [#23993](https://github.com/gravitational/teleport/pull/23993) -* Security - * Added IP pinning support to SSO users. [#24541](https://github.com/gravitational/teleport/pull/24541) - -## 12.2.2 (04/12/23) - -This release of Teleport contains multiple improvements and bug fixes. - -* Server Access - * Restored `MajorVersion` template variable for EC2 install scripts. [#24434](https://github.com/gravitational/teleport/pull/24434) - * Added `--mlock` flag to headless `tsh` mode to allow memory locking. [#24410](https://github.com/gravitational/teleport/pull/24410) - * Fixed issue with EC2 install script silently failing on errors. [#24034](https://github.com/gravitational/teleport/pull/24034) -* Database Access - * Reduced log spam when AWS database engine name is not recognized. [#24413](https://github.com/gravitational/teleport/pull/24413) -* Machine ID - * Improved post-renewal message by logging correct identity. [#24246](https://github.com/gravitational/teleport/pull/24246) -* Kubernetes Access - * Fixed issue with incorrect status being returned on exec commands. [#24155](https://github.com/gravitational/teleport/pull/24155) -* Proxy Peering - * Improved agent reconnect speed with proxy peering. [#24141](https://github.com/gravitational/teleport/pull/24141) -* Helm Charts - * Fixed issue with `securityContext` and `nodeSelector` not being propagated to job hooks. [#24134](https://github.com/gravitational/teleport/pull/24134) - * Fixed issue with TLS routing being disabled after v12 upgrade when `proxyListenerMode` is empty. 
[#24426](https://github.com/gravitational/teleport/pull/24426) - -## 12.2.1 (04/04/23) - -This release of Teleport contains several new features and improvements. - -* Server Access - * Added support for headless SSO to `tsh ls`, `tsh ssh` and `tsh scp`. [#23360](https://github.com/gravitational/teleport/pull/23360) -* Database Access - * Added support for connecting to Oracle databases. [#23892](https://github.com/gravitational/teleport/pull/23892) -* Moderated Sessions - * Fixed issue with joining moderated sessions via Web UI. [#24018](https://github.com/gravitational/teleport/pull/24018) -* Helm Charts - * Added support for `imagePullSecrets` to `teleport-cluster` chart. [#24017](https://github.com/gravitational/teleport/pull/24017) -* Security - * Added IP pinning support to Kubernetes and database access. [#23418](https://github.com/gravitational/teleport/pull/23418) -* Tooling - * Upgraded Go to `1.20.3`. [#24062](https://github.com/gravitational/teleport/pull/24062) - -## 12.1.5 (03/30/23) - -This release of Teleport contains 2 security fixes as well as multiple improvements and bug fixes. - -### [High] OS authorization bypass in SSH tunneling - -When establishing an SSH port forwarding connection, Teleport did not -sufficiently validate the specified OS principal. - -This could allow an attacker in possession of valid cluster credentials to -establish a TCP tunnel to a node using a non-existent Linux user. - -The connection attempt would show up in the audit log as a "port" audit event -(code T3003I) and include Teleport username in the "user" field. - -### [High] Teleport authorization bypass in Kubernetes cluster access - -When authorizing a request to a Teleport-protected Kubernetes cluster, Teleport -did not adequately validate the target Kubernetes cluster. - -This could allow an attacker in possession of valid Kubernetes agent credentials -or a join token to trick Teleport into forwarding requests to a different -Kubernetes cluster. 
- -Every Kubernetes request would show up in the audit log as a "kube.request" -audit event (code T3009I) and include the Kubernetes cluster metadata. - -### Other improvements and fixes - -* AMIs - * Added support for configuring TLS routing mode in AMIs. [#23678](https://github.com/gravitational/teleport/pull/23678) -* Application Access - * Added support for application access behind ALB. [#23054](https://github.com/gravitational/teleport/pull/23054) - * Fixed requests to Teleport-protected applications being redirected to leaf's public address in some cases. [#23220](https://github.com/gravitational/teleport/pull/23220) - * Reduced log noise. [#23365](https://github.com/gravitational/teleport/pull/23365) - * Added ability to specify command in AWS `tsh` proxy. [#23835](https://github.com/gravitational/teleport/pull/23835) -* Bootstrap - * Added provision tokens support. [#23474](https://github.com/gravitational/teleport/pull/23474) -* CLI - * Added `app_server` support to `tctl` resource commands. [#23136](https://github.com/gravitational/teleport/pull/23136) - * Display year in `tctl` commands output. [#23371](https://github.com/gravitational/teleport/pull/23371) - * Fixed issue with `tsh` reporting errors about missing webauthn.dll on Windows. [#23161](https://github.com/gravitational/teleport/pull/23161) - * Updated `tsh status` to not display internal logins. [#23411](https://github.com/gravitational/teleport/pull/23411) - * Added `--cluster` flag to `tsh kube sessions` command. [#23825](https://github.com/gravitational/teleport/pull/23825) - * Fixed issue with invalid TLS mode when creating database resources. [#23808](https://github.com/gravitational/teleport/pull/23808) -* Database Access - * Added support for canceling in-progress PostgreSQL requests in database access. [#23467](https://github.com/gravitational/teleport/pull/23467) - * Fixed issue with query audit events always having `success: false` status. 
[#23274](https://github.com/gravitational/teleport/pull/23274) -* Desktop Access - * Updated setup script to be idempotent. [#23176](https://github.com/gravitational/teleport/pull/23176) -* Helm Charts - * Added ability to set resource limits and requests for pre-deployment jobs. [#23126](https://github.com/gravitational/teleport/pull/23126) -* Infrastructure - * Introduced distroless Teleport container images. [#22814](https://github.com/gravitational/teleport/pull/22814) -* Kubernetes Access - * Fixed issue with `tsh kube credentials` failing on remote clusters. [#23354](https://github.com/gravitational/teleport/pull/23354) - * Fixed issue with `tsh kube credentials` loading incorrect profile. [#23716](https://github.com/gravitational/teleport/pull/23716) -* Machine ID - * Added ability to specify memory backend using CLI parameters. [#23495](https://github.com/gravitational/teleport/pull/23495) - * Added support for Azure delegated joining. [#23391](https://github.com/gravitational/teleport/pull/23391) - * Added support for Gitlab delegated joining. [#23191](https://github.com/gravitational/teleport/pull/23191) - * Added support for trusted clusters. [#23390](https://github.com/gravitational/teleport/pull/23390) - * Added FIPS support. [#23850](https://github.com/gravitational/teleport/pull/23850) -* Proxy Peering - * Fixed proxy peering issues when running behind a load balancer. [#23506](https://github.com/gravitational/teleport/pull/23506) -* Reverse Tunnels - * Fixed issue when joining leaf cluster over tunnel port with enabled proxy protocol. [#23487](https://github.com/gravitational/teleport/pull/23487) - * Fixed issue with joining agents over reverse tunnel port. [#23332](https://github.com/gravitational/teleport/pull/23332) -* Performance & scalability - * Improved `tsh ls -R` performance in large clusters. [#23596](https://github.com/gravitational/teleport/pull/23596) - * Improved performance when setting session environment variables. 
[#23834](https://github.com/gravitational/teleport/pull/23834) -* Server Access - * Fixed issue with successful SFTP transfers returning non-zero code. [#23729](https://github.com/gravitational/teleport/pull/23729) -* SSO - * Fixed issue with Github Enterprise SSO not working with custom URLs. [#23568](https://github.com/gravitational/teleport/pull/23568) -* Teleport Connect - * Added support for config customization. [#23197](https://github.com/gravitational/teleport/pull/23197) - * Fixed unresponsive terminal on Windows Server 2019. [#22996](https://github.com/gravitational/teleport/pull/22996) -* Tooling - * Updated Electron to `22.3.2`. [#23048](https://github.com/gravitational/teleport/pull/23048) - * Updated Go to `1.20.2`. [#22997](https://github.com/gravitational/teleport/pull/22997) - * Updated Rust to `1.68.0`. [#23101](https://github.com/gravitational/teleport/pull/23101) -* Web UI - * Added MFA support when copying files. [#23195](https://github.com/gravitational/teleport/pull/23195) - * Fixed "ambiguous node" error when downloading files. [#23152](https://github.com/gravitational/teleport/pull/23152) - * Fixed intermittent "client connection is closing" errors in web UI after logging in. [#23733](https://github.com/gravitational/teleport/pull/23733) - -## 12.1.1 - -This release of Teleport contains multiple improvements and bug fixes. - -* Fixed issue with Access Management's connection tester not working with per-session MFA. [#22918](https://github.com/gravitational/teleport/pull/22918), [#22943](https://github.com/gravitational/teleport/pull/22943) -* Fixed Kubernetes access panic when using moderated sessions. [#22930](https://github.com/gravitational/teleport/pull/22930) -* Fixed `tsh db config` reporting incorrect port in TLS routing mode. [#22889](https://github.com/gravitational/teleport/pull/22889) -* Fixed issue with Teleport always performing OS group check even without auto user provisioning enabled. 
[#22805](https://github.com/gravitational/teleport/pull/22805) -* Fixed issue with desktop access crashing on systems that consume many file descriptors. [#22798](https://github.com/gravitational/teleport/pull/22798) -* Fixed issue with `teleport start --bootstrap` command failing on unexpected resource. [#22721](https://github.com/gravitational/teleport/pull/22721) -* Fixed issue with install script not refreshing repository metadata before installing new version. [#22585](https://github.com/gravitational/teleport/pull/22585) -* Added ability to export database CA in DER format via `tctl auth export`. [#22896](https://github.com/gravitational/teleport/pull/22896) -* Reduced log spam from proxy multiplexer. [#22802](https://github.com/gravitational/teleport/pull/22802) -* Updated EC2 auto-discovery install script to use enterprise binaries for enterprise clusters. [#22769](https://github.com/gravitational/teleport/pull/22769) -* Upgraded Go to `v1.19.7`. [#22725](https://github.com/gravitational/teleport/pull/22725) -* Improved idle connections handling. [#22908](https://github.com/gravitational/teleport/pull/22908), [#22893](https://github.com/gravitational/teleport/pull/22893) -* Improved Kubernetes service labels validation upon startup. [#22777](https://github.com/gravitational/teleport/pull/22777) -* Improved `tsh login` error reporting when proxy is not available. [#22763](https://github.com/gravitational/teleport/pull/22763) - -## 12.1.0 - -This release of Teleport contains multiple improvements and bug fixes. - -* Added ability for Teleport to function as SAML IdP (Enterprise edition only). -* Downgraded Go to `v1.19.6` to resolve memory leak issues. [#22691](https://github.com/gravitational/teleport/pull/22691) -* Fixed issue with `tsh scp` overriding copied file permissions without `-p` flag. [#22609](https://github.com/gravitational/teleport/pull/22609) -* Improved performance of fetching remote clusters. 
[#22575](https://github.com/gravitational/teleport/pull/22575) - -## 12.0.5 - -This release of Teleport contains multiple improvements and bug fixes. - -* Fixed issue with `tsh` not respecting HTTPS_PROXY in some cases. [#22492](https://github.com/gravitational/teleport/pull/22492) -* Fixed issue with config validation in Helm charts scratch mode. [#22423](https://github.com/gravitational/teleport/pull/22423) -* Added IAM joining support for Azure VMs. [#22204](https://github.com/gravitational/teleport/pull/22204) -* Added auto-discovery support for Azure VMs. [#22521](https://github.com/gravitational/teleport/pull/22521) -* Added support for `ap-southeast-4` AWS region for IAM joining. [#22486](https://github.com/gravitational/teleport/pull/22486) -* Added ability to specify web terminal scrollback length in proxy config. [#22422](https://github.com/gravitational/teleport/pull/22422) -* Added support for PuTTY's `winadj` channel requests. [#22420](https://github.com/gravitational/teleport/pull/22420) -* Added `--trace-profile` flag to `tsh` that allows generating runtime trace profiles. [#22406](https://github.com/gravitational/teleport/pull/22406) -* Added enhanced session recording support for arm64 architectures. [#22550](https://github.com/gravitational/teleport/pull/22550) -* Updated `tctl alert ack` to allow acknowledging alerts of any severity. [#22582](https://github.com/gravitational/teleport/pull/22582) -* Updated Windows desktop access to display only applicable logins. [#22333](https://github.com/gravitational/teleport/pull/22333) -* Improved Kubernetes access performance when using `kubectl`. [#22508](https://github.com/gravitational/teleport/pull/22508) -* Improved Teleport Connect performance when connecting to large clusters. [#22316](https://github.com/gravitational/teleport/pull/22316) -* Improved performance and scalability in large clusters. 
[#21495](https://github.com/gravitational/teleport/pull/21495) - -## 12.0.4 - -This release of Teleport contains multiple security fixes, improvements and bug fixes. - -### Security fixes - -* Fixed issue with malicious SQL Server packet being able to cause proxy crash. [#21638](https://github.com/gravitational/teleport/pull/21638) -* Fixed issue with session terminated after a short delay instead of being immediately paused when moderator leaves. [#21974](https://github.com/gravitational/teleport/pull/21974) - -### Other improvements and bug fixes - -* Fixed issue with orphaned child processes after session ends. [#22222](https://github.com/gravitational/teleport/pull/22222) -* Fixed issue with not being able to see any pods with an active Access Request. [#22196](https://github.com/gravitational/teleport/pull/22196) -* Fixed issue with remote cluster state not always being correctly updated. [#22088](https://github.com/gravitational/teleport/pull/22088) -* Fixed heartbeat errors from the Database Service. [#22087](https://github.com/gravitational/teleport/pull/22087) -* Fixed issue with applications temporarily disappearing during Application Service restart. [#21807](https://github.com/gravitational/teleport/pull/21807) -* Fixed issue with some Helm values being accidentally shared between Auth Service and Proxy Service configs. [#21768](https://github.com/gravitational/teleport/pull/21768) -* Fixed issues with desktop access flow in Access Management interface. [#21756](https://github.com/gravitational/teleport/pull/21756) -* Fixed "access denied" errors in Teleport Connect on Windows. [#21720](https://github.com/gravitational/teleport/pull/21720) -* Fixed issue with database GUI client connections requiring random taps when per-session MFA is enabled. [#21661](https://github.com/gravitational/teleport/pull/21661) -* Fixed issue with moderated sessions not working on leaf clusters. 
[#21612](https://github.com/gravitational/teleport/pull/21612) -* Fixed issue with missing `--request-id` flag in UI for Kubernetes login instructions. [#21445](https://github.com/gravitational/teleport/pull/21445) -* Fixed issue connecting to AWS resources when using full IAM role ARNs. [#21251](https://github.com/gravitational/teleport/pull/21251) -* Fixed issue with `local_auth: false` setting being ignored without explicitly setting `authentication_type`. [#22215](https://github.com/gravitational/teleport/pull/22215) -* Added `tctl` resource commands for Device Trust. [#22157](https://github.com/gravitational/teleport/pull/22157) -* Added support for assuming roles in `tsh proxy aws`. [#21990](https://github.com/gravitational/teleport/pull/21990) -* Added early feedback for successful security key taps in `tsh`. [#21780](https://github.com/gravitational/teleport/pull/21780) -* Added device lock support. [#21751](https://github.com/gravitational/teleport/pull/21751) -* Added support for security contexts in `teleport-kube-agent` Helm chart. [#21535](https://github.com/gravitational/teleport/pull/21535) -* Updated `tsh version` command to display client version only via `--client` flag. [#22167](https://github.com/gravitational/teleport/pull/22167) -* Updated install script to use enterprise packages for enterprise clusters. [#22109](https://github.com/gravitational/teleport/pull/22109) -* Updated install script to use deb/rpm repositories. [#22108](https://github.com/gravitational/teleport/pull/22108) -* Updated proxy init container in Helm charts to use security context. [#22064](https://github.com/gravitational/teleport/pull/22064) -* Updated `tsh` to include timestamps with debug logs. [#21996](https://github.com/gravitational/teleport/pull/21996) -* Updated AWS access to fetch credentials with TTL matching user's certificate TTL. [#21994](https://github.com/gravitational/teleport/pull/21994) -* Updated Go toolchain to `1.20.1`. 
[#21931](https://github.com/gravitational/teleport/pull/21931) -* Updated `tsh kube login --all` to not require cluster name. [#21765](https://github.com/gravitational/teleport/pull/21765) -* Updated `teleport db configure create` command to support more use-cases. [#21690](https://github.com/gravitational/teleport/pull/21690) -* Improved performance in large clusters with etcd backend. [#21905](https://github.com/gravitational/teleport/pull/21905), [#21496](https://github.com/gravitational/teleport/pull/21496) - -## 12.0.2 - -This release of Teleport contains a security fix as well as multiple improvements and bug fixes. - -### OpenSSL update - -* Updated OpenSSL to `1.1.1t`. [#21425](https://github.com/gravitational/teleport/pull/21425) - -### Other fixes and improvements - -* Fixed issue with Access Manager interface not accepting valid port numbers. [#21651](https://github.com/gravitational/teleport/pull/21651) -* Fixed issue with some requests to Teleport-protected applications failing after proxy restart. [#21615](https://github.com/gravitational/teleport/pull/21615) -* Fixed issue with invalid role template namespaces leading to cluster lockouts. [#21573](https://github.com/gravitational/teleport/pull/21573) -* Fixed issue with Teleport Connect failing to recognize logged in user sometimes. [#21467](https://github.com/gravitational/teleport/pull/21467) -* Fixed issue with the back button not working in Web UI navigation. [#21236](https://github.com/gravitational/teleport/pull/21236) -* Fixed issue with Web UI SSH player having scroll bars. [#20868](https://github.com/gravitational/teleport/pull/20868) -* Added support for `tsh request search --kind=pod` command. [#21456](https://github.com/gravitational/teleport/pull/21456) -* Updated `tsh db configure create` to require flag for dynamic resources matching. [#21395](https://github.com/gravitational/teleport/pull/21395) -* Improved reconnect stability after Database Service restart. 
[#21635](https://github.com/gravitational/teleport/pull/21635) -* Improved reconnect stability after Kubernetes service restart. [#21617](https://github.com/gravitational/teleport/pull/21617) -* Improved `tsh ls -R` performance. [#21577](https://github.com/gravitational/teleport/pull/21577) -* Improved `tsh scp` error message when no remote path is specified. [#21373](https://github.com/gravitational/teleport/pull/21373) -* Improved error message when trying to rename a resource. [#21179](https://github.com/gravitational/teleport/pull/21179) -* Reduced CPU usage when using enhanced session recording. [#21437](https://github.com/gravitational/teleport/pull/21437) - -## 12.0.1 - -Teleport 12 brings the following marquee features and improvements: - -- Device Trust (Preview, Enterprise only) -- Passwordless Windows access for local users (Preview, Enterprise only) -- Per-pod RBAC for Kubernetes access (Preview) -- Azure and GCP CLI support for application access (Preview) -- Support for more databases in database access: - - Amazon DynamoDB - - Amazon Redshift Serverless - - AWS RDS Proxy for PostgreSQL/MySQL - - Azure SQLServer Auto Discovery - - Azure Flexible Servers -- Refactored Helm charts (Preview) -- Dropped support for SHA1 in server access -- Signed/notarized macOS binaries - -### Device Trust (Preview, Enterprise only) - -Teleport 12 includes a preview of our upcoming Device Trust feature, which -allows administrators to require that Teleport access is performed from an -authenticated and trusted device. - -This preview release requires macOS and a native client like `tsh` or Teleport -Connect. These clients leverage the Secure Enclave on macOS to solve device -challenges issued by the Teleport CA, proving their identity as a trusted -device. - -Teleport features requiring the web UI (desktop access, application access) are -not currently supported.

### Passwordless Windows Access for Local Users (Preview, Enterprise only)

Teleport 12 brings passwordless certificate-based authentication to Windows desktops in environments where Active Directory is not available. This feature requires the installation of a Teleport package on each Windows desktop.

### Per-pod RBAC for Kubernetes access (Preview)

Teleport 12 extends RBAC to support controlling access to individual pods in Kubernetes clusters. Pod RBAC integrates with existing Teleport RBAC features such as role templating and Access Requests.

### Azure and GCP CLI support for application access (Preview)

In Teleport 12, administrators can interact with Azure and GCP APIs through the Application Service using `tsh az` and `tsh gcloud` CLI commands, or using standard `az` and `gcloud` tools through the local application proxy.

### Support for more databases in database access

Database access in Teleport 12 brings a number of new integrations to AWS-hosted databases such as DynamoDB (now with audit log support), Redshift Serverless and RDS Proxy for PostgreSQL/MySQL.

On Azure, database access adds SQL Server auto-discovery and support for Azure Flexible Server for PostgreSQL/MySQL.

### Refactored Helm charts (Preview)

The `teleport-cluster` Helm chart underwent significant refactoring in Teleport 12 to provide better scalability and UX. Proxy and Auth are now separate deployments, and the new "scratch" chart mode makes it easier to provide a custom Teleport config.

### Dropped support for SHA1 in Teleport-protected servers

Newer OpenSSH clients connecting to Teleport 12 clusters no longer need the `PubkeyAcceptedKeyTypes` workaround to include the deprecated SHA-1 algorithms.

### Signed/notarized macOS binaries

Users who download Teleport 12 Darwin binaries will no longer get an untrusted software warning from macOS.
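Signing status can be checked with standard macOS tooling; a sketch, assuming `tsh` was installed to `/usr/local/bin`:

```
# Inspect the code signature of the downloaded binary.
codesign --verify --verbose /usr/local/bin/tsh

# Ask Gatekeeper whether it would allow the binary to execute.
spctl --assess --type execute --verbose /usr/local/bin/tsh
```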

### tctl edit

`tctl` now supports an `edit` subcommand, allowing you to edit resources directly in your preferred text editor.

### Breaking Changes

Please familiarize yourself with the following potentially disruptive changes in Teleport 12 before upgrading.

#### Helm charts

The `teleport-cluster` Helm chart underwent significant changes in Teleport 12.

Additionally, PSPs are removed from the chart when installing on Kubernetes 1.23 and higher to account for the deprecation/removal of PSPs by Kubernetes.

#### tctl auth export

The `tctl auth export` command only exports the private key when passing the `--keys` flag. Previously it would output the certificate and private key together.

#### Desktop access

Windows Desktop sessions disable the wallpaper by default, improving performance. To restore the previous behavior, add `show_desktop_wallpaper: true` to your `windows_desktop_service` config.

## 11.3.2

This release of Teleport contains multiple improvements and bug fixes.

* Fixed regression issue with accessing SSO apps behind application access. [#21049](https://github.com/gravitational/teleport/pull/21049)
* Fixed regression performance issue with `tsh scp`. [#20953](https://github.com/gravitational/teleport/pull/20953)
* Fixed issue with `tsh proxy aws --endpoint-url` not working. [#20880](https://github.com/gravitational/teleport/pull/20880)
* Fixed issue with MongoDB queries failing on large datasets. [#21113](https://github.com/gravitational/teleport/pull/21113)
* Fixed issue with direct node dial from the web UI. [#20928](https://github.com/gravitational/teleport/pull/20928)
* Updated install scripts to download binaries from new CDN location. [#21057](https://github.com/gravitational/teleport/pull/21057)
* Updated `tsh` to detect unplugged devices when using hardware-backed keys. [#20949](https://github.com/gravitational/teleport/pull/20949)
* Updated Elasticsearch access to explicitly require `--db-user`.
(#20695) [#20919](https://github.com/gravitational/teleport/pull/20919)
* Updated Rust to `1.67.0`. [#20883](https://github.com/gravitational/teleport/pull/20883)

## 11.3.1

This release of Teleport contains a security fix, as well as multiple improvements and bug fixes.

### Moderated Sessions

* Fixed issue with Moderated Sessions not being disconnected on Ctrl+C. [#20588](https://github.com/gravitational/teleport/pull/20588)

### Other fixes and improvements

* Fixed issue with node install script downloading OSS binaries in Enterprise edition. [#20816](https://github.com/gravitational/teleport/pull/20816)
* Fixed a regression when renewing Kubernetes dynamic credentials that prevented multiple renewals. [#20788](https://github.com/gravitational/teleport/pull/20788)
* Fixed issue with `tctl auth sign` not respecting Ctrl-C. [#20773](https://github.com/gravitational/teleport/pull/20773)
* Fixed occasional key attestation error in `tsh login`. [#20712](https://github.com/gravitational/teleport/pull/20712)
* Fixed issue with being able to create Access Requests with an invalid cluster name. [#20674](https://github.com/gravitational/teleport/pull/20674)
* Fixed issue with EC2 auto-discovery install script for RHEL instances. [#20604](https://github.com/gravitational/teleport/pull/20604)
* Fixed issue connecting with Oracle MySQL client on Windows. [#20599](https://github.com/gravitational/teleport/pull/20599)
* Fixed issue with using `tctl auth sign --format kubernetes` against remote Auth Service instances. [#20571](https://github.com/gravitational/teleport/pull/20571)
* Fixed panic in Azure SQL Server access. [#20483](https://github.com/gravitational/teleport/pull/20483)
* Added support for Moderated Sessions in the Web UI. [#20796](https://github.com/gravitational/teleport/pull/20796)
* Added support for Login Rules for SSO users.
[#20743](https://github.com/gravitational/teleport/pull/20743), [#20738](https://github.com/gravitational/teleport/pull/20738), [#20737](https://github.com/gravitational/teleport/pull/20737), [#20629](https://github.com/gravitational/teleport/pull/20629)
* Added ability to acknowledge alerts. [#20692](https://github.com/gravitational/teleport/pull/20692)
* Added `client_idle_timeout_message` support to Windows access. [#20617](https://github.com/gravitational/teleport/pull/20617)
* Added PodMonitor support in `teleport-cluster` Helm chart. [#20564](https://github.com/gravitational/teleport/pull/20564)
* Added support for passing raw config in `teleport-kube-agent` Helm chart. [#20449](https://github.com/gravitational/teleport/pull/20449)
* Added `nodeSelector` field to `teleport-cluster` Helm chart. [#20441](https://github.com/gravitational/teleport/pull/20441)
* Improved Kubernetes access stability for slow clients. [#20517](https://github.com/gravitational/teleport/pull/20517)
* Updated `teleport-cluster` Helm chart to reload proxy certificate daily. [#20503](https://github.com/gravitational/teleport/pull/20503)

## 11.2.3

This release of Teleport contains multiple improvements and bug fixes.

* Fixed issue with `tsh login` defaulting to passwordless and ignoring the `--auth` and `--mfa-mode` flags. [#20474](https://github.com/gravitational/teleport/pull/20474)
* Fixed regression issue with AWS console access via `tsh aws`. [#20437](https://github.com/gravitational/teleport/pull/20437)
* Fixed issue connecting to MariaDB in non-TLS Routing mode. [#20409](https://github.com/gravitational/teleport/pull/20409)
* Fixed the `*:*` selector in EC2 auto-discovery. [#20390](https://github.com/gravitational/teleport/pull/20390)
* Improved handling of unknown events in the events search API. [#20329](https://github.com/gravitational/teleport/pull/20329)
* Added support for multiple transformations in role templates.
[#20296](https://github.com/gravitational/teleport/pull/20296)
* Added the ability to update a Trusted Cluster's role mappings without recreating the cluster. [#20286](https://github.com/gravitational/teleport/pull/20286)
* Added `dnsConfig` support to the `teleport-kube-agent` Helm chart. [#20107](https://github.com/gravitational/teleport/pull/20107)

## 11.2.2

This release of Teleport contains multiple improvements and bug fixes.

* Fixed issue connecting to leaf cluster nodes via web UI with per-session MFA. [#20238](https://github.com/gravitational/teleport/pull/20238)
* Fixed issue with `max_kubernetes_connections` leading to access denied errors. [#20174](https://github.com/gravitational/teleport/pull/20174)
* Fixed issue with `kube-agent` Helm chart leaving state behind after `helm uninstall`. [#20169](https://github.com/gravitational/teleport/pull/20169)
* Fixed X.509 issue after updating RDS database resource. [#20099](https://github.com/gravitational/teleport/pull/20099)
* Fixed issue with some `tsh` HTTP requests missing extra headers. [#20071](https://github.com/gravitational/teleport/pull/20071)
* Improved auto-discovery config validation. [#20288](https://github.com/gravitational/teleport/pull/20288)
* Improved graceful shutdown stability. [#20225](https://github.com/gravitational/teleport/pull/20225)
* Improved application access authentication flow. [#20165](https://github.com/gravitational/teleport/pull/20165)
* Reduced auth load by ensuring the proxy uses its cache for periodic operations. [#20153](https://github.com/gravitational/teleport/pull/20153)
* Updated Rust to `1.66.1`. [#20201](https://github.com/gravitational/teleport/pull/20201)
* Updated macOS binaries to be signed and notarized. [#20305](https://github.com/gravitational/teleport/pull/20305)

## 11.2.1

This release of Teleport contains multiple improvements and bug fixes.

* Added support for periodically reloading the proxy's TLS certificates [#20040](https://github.com/gravitational/teleport/pull/20040)
* Improved desktop certificate generation by using the proper field for querying a user's SID [#20022](https://github.com/gravitational/teleport/pull/20022)
* Updated the web UI to hide the trusted clusters screen for users who lack the appropriate role [#1494](https://github.com/gravitational/webapps/pull/1494/)
* Fixed an issue resulting in an "invalid bearer token" message [#20102](https://github.com/gravitational/teleport/pull/20102)
* Fixed an issue preventing bots from using IAM joining [#20011](https://github.com/gravitational/teleport/pull/20011)
* Fixed an issue where Machine ID certificates did not respect the provided TTL when using IAM joining [#20001](https://github.com/gravitational/teleport/pull/20001)
* Updated to Go 1.19.5 [#20084](https://github.com/gravitational/teleport/pull/20084)

## 11.2.0

This release of Teleport contains multiple improvements and bug fixes.

### Machine ID GitHub Actions

We're happy to announce a set of GitHub Actions that you can use in your workflows to assist with accessing Teleport resources in your CI/CD pipelines.

Visit the individual repositories to find out more and see usage examples:

- https://github.com/teleport-actions/setup
- https://github.com/teleport-actions/auth
- https://github.com/teleport-actions/auth-k8s

For a more in-depth guide, see our [documentation](./docs/pages/enroll-resources/machine-id/deployment/github-actions.mdx) for using Teleport with GitHub Actions.

### Secure certificate mapping for desktop access

Later this year, Windows will begin requiring a stronger mapping from a certificate to an Active Directory user. In anticipation of this change, Teleport 11.2.0 is compliant with the new requirements.

*Warning:* This feature requires that Teleport's own service account also uses a strong mapping.
In order to support this requirement, you must now set a new Security Identifier (`sid`) field in the LDAP configuration for your Windows Desktop Services. You can find the SID for your service account by running the following PowerShell snippet (replace `svc-teleport` with the name of the service account you are using):

```
Get-AdUser -Identity svc-teleport | Select SID
```

### Other improvements and bugfixes

* Added an improved database joining flow in the web UI [#1487](https://github.com/gravitational/webapps/pull/1487)
* Added support for secure certificate mapping for Windows desktop certificates [#19737](https://github.com/gravitational/teleport/pull/19737)
* Fixed an issue with desktop directory sharing where large files could be corrupted [#1472](https://github.com/gravitational/webapps/pull/1472)
* Fixed an issue where desktop access users may see an error after ending a session [#1470](https://github.com/gravitational/webapps/pull/1470)
* Fixed an issue preventing database agents from joining due to improperly formatted YAML [#19958](https://github.com/gravitational/teleport/pull/19958)
* Updated the web UI to use session storage instead of local storage for Teleport's bearer token [#1470](https://github.com/gravitational/webapps/pull/1470)
* Added rate limiting to SAML/OIDC routes [#19950](https://github.com/gravitational/teleport/pull/19950)
* Fixed an issue connecting to leaf cluster desktops via reverse tunnel [#19945](https://github.com/gravitational/teleport/pull/19945)
* Fixed a backwards compatibility issue with database access in 11.1.4 [#19940](https://github.com/gravitational/teleport/pull/19940)
* Fixed an issue where Access Requests for Kubernetes clusters used improperly cached credentials [#19912](https://github.com/gravitational/teleport/pull/19912)
* Added support for CentOS 7 in ARM64 builds [#19895](https://github.com/gravitational/teleport/pull/19895)
* Added rate limiting to unauthenticated routes
[#19869](https://github.com/gravitational/teleport/pull/19869)
* Added suggested reviewers and requestable roles to Teleport Connect Access Requests [#19846](https://github.com/gravitational/teleport/pull/19846)
* Fixed an issue listing all nodes with `tsh` [#19821](https://github.com/gravitational/teleport/pull/19821)
* Made `gcp.credentialSecretName` optional in the Teleport Cluster Helm chart [#19803](https://github.com/gravitational/teleport/pull/19803)
* Fixed an issue preventing audit events that exceed the maximum size limit from being logged [#19736](https://github.com/gravitational/teleport/pull/19736)
* Fixed an issue preventing some users from being able to play desktop recordings [#19709](https://github.com/gravitational/teleport/pull/19709)
* Added validation of AWS Account IDs when adding databases (#19638) [#19702](https://github.com/gravitational/teleport/pull/19702)
* Added a new audit event for DynamoDB requests via application access [#19667](https://github.com/gravitational/teleport/pull/19667)
* Added the ability to export `tsh` traces even when the Auth Service is not configured for tracing [#19583](https://github.com/gravitational/teleport/pull/19583)
* Added support for linking Teleport Connect's embedded `tsh` binary for use outside of Teleport Connect [#1488](https://github.com/gravitational/webapps/pull/1488)

## 11.1.4

This release of Teleport contains multiple security fixes, improvements and bug fixes.

*Note:* This release of Teleport contains an issue that affects backwards compatibility with database access agents. If you are a database access user we recommend skipping straight to version 11.2.0.

### [Critical] RBAC bypass in SSH TCP tunneling

When establishing a direct-tcpip channel, Teleport did not sufficiently validate RBAC.

This could allow an attacker in possession of valid cluster credentials to establish a TCP tunnel to a node they didn’t have access to.
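The fix makes direct-tcpip channels subject to the same node RBAC as regular SSH sessions. As an illustration, a role like the following (name and label values are examples) now constrains TCP tunnels as well as interactive logins:

```
kind: role
version: v5
metadata:
  name: dev-tunnel
spec:
  allow:
    logins: ["ubuntu"]
    # After the fix, tunneling is limited to the same nodes
    # the role is allowed to SSH into.
    node_labels:
      "env": "dev"
```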

The connection attempt would show up in the audit log as a “port” audit event (code T3003I) and include the Teleport username in the “user” field.

### [High] Application access session hijack

When accepting application Access Requests, Teleport did not sufficiently validate client credentials.

This could allow an attacker in possession of a valid active application session ID to issue requests to this application impersonating the session owner for a limited time window.

Presence of multiple “cert.create” audit events (code TC000I) with the same app session ID in the “route_to_app.session_id” field may indicate an attempt to impersonate an existing user’s application session.

### [Medium] SSH IP pinning bypass

When issuing a user certificate, Teleport did not check for the presence of IP restrictions in the client’s credentials.

This could allow an attacker in possession of valid client credentials with IP restrictions to reissue credentials without IP restrictions.

Presence of a “cert.create” audit event (code TC000I) without a corresponding “user.login” audit event (codes T1000I or T1101I) for users with IP-restricted roles may indicate the issuance of a certificate without IP restrictions.

### [Low] Web API session caching

After logging out via the web UI, a user’s session could remain cached in Teleport’s proxy, allowing continued access to resources for a limited time window.

### Other improvements and bugfixes

* Fixed issue with noisy-square distortions in desktop access. [#19545](https://github.com/gravitational/teleport/pull/19545)
* Fixed issue with LDAP search pagination in desktop access. [#19533](https://github.com/gravitational/teleport/pull/19533)
* Fixed issue with SSH sessions inheriting OOM score of the parent process. [#19521](https://github.com/gravitational/teleport/pull/19521)
* Fixed issue with ambiguous host resolution in web UI.
[#19513](https://github.com/gravitational/teleport/pull/19513)
* Fixed issue with using desktop access with Windows 10. [#19504](https://github.com/gravitational/teleport/pull/19504)
* Fixed issue with `session.start` events being overwritten by `session.exec` events. [#19497](https://github.com/gravitational/teleport/pull/19497)
* Fixed issue with `tsh login --format kubernetes` not setting SNI info. [#19433](https://github.com/gravitational/teleport/pull/19433)
* Fixed issue with WebSockets not working via app access if the upstream web server is using HTTP/2. [#19423](https://github.com/gravitational/teleport/pull/19423)
* Fixed TLS routing in insecure mode. [#19410](https://github.com/gravitational/teleport/pull/19410)
* Fixed issue with connecting to ElastiCache 7.0.4 in database access. [#19400](https://github.com/gravitational/teleport/pull/19400)
* Fixed issue with SAML connector validation calling descriptor URL prior to authz checks. [#19317](https://github.com/gravitational/teleport/pull/19317)
* Fixed issue with database access complaining about "redis" engine not being registered. [#19251](https://github.com/gravitational/teleport/pull/19251)
* Fixed issue with `disconnect_expired_cert` and `require_session_mfa` settings conflicting with each other. [#19178](https://github.com/gravitational/teleport/pull/19178)
* Fixed startup failure when MongoDB URI is not resolvable. [#18984](https://github.com/gravitational/teleport/pull/18984)
* Added resource names for Access Requests in Teleport Connect. [#19549](https://github.com/gravitational/teleport/pull/19549)
* Added support for GitHub Enterprise join method. [#19518](https://github.com/gravitational/teleport/pull/19518)
* Added the ability to supply Access Request TTLs. [#19385](https://github.com/gravitational/teleport/pull/19385)
* Added new `instance.join` and `bot.join` audit events.
[#19343](https://github.com/gravitational/teleport/pull/19343)
* Added support for port-forwarding over the websocket protocol in Kubernetes access. [#19181](https://github.com/gravitational/teleport/pull/19181)
* Reduced latency of `tsh ls -R`. [#19482](https://github.com/gravitational/teleport/pull/19482)
* Updated desktop access config script to disable password prompt. [#19427](https://github.com/gravitational/teleport/pull/19427)
* Updated Go to 1.19.4. [#19127](https://github.com/gravitational/teleport/pull/19127)
* Improved performance when converting traits to roles. [#19170](https://github.com/gravitational/teleport/pull/19170)
* Improved handling of expired database certificates in Teleport Connect. [#19096](https://github.com/gravitational/teleport/pull/19096)

## 11.1.2

This release of Teleport contains multiple improvements and bug fixes.

* Fixed issue with EC2 discovery failing to install Teleport on older Ubuntu instances. [#18965](https://github.com/gravitational/teleport/pull/18965)
* Fixed issue with log spam when cleaning up groups for automatically created Linux users. [#18990](https://github.com/gravitational/teleport/pull/18990)
* Fixed issue with `tctl windows_desktops ls` not producing results in JSON and YAML formats. [#19016](https://github.com/gravitational/teleport/pull/19016)
* Fixed issue with web SSH sessions in proxy recording mode. [#19021](https://github.com/gravitational/teleport/pull/19021)
* Improved handling of corrupted session recordings. [#19040](https://github.com/gravitational/teleport/pull/19040)

## 11.1.1

This release of Teleport contains a security fix as well as multiple improvements and bug fixes.

### Insecure TOTP MFA seed removal

Fixed issue where an attacker with physical access to a user's computer and raw access to the filesystem could potentially recover the seed QR code.

[#18917](https://github.com/gravitational/teleport/pull/18917)

### Other improvements and fixes

* Fixed issue with Teleport Connect not working on macOS. [#18921](https://github.com/gravitational/teleport/pull/18921)
* Added support for Cloud HSM on Google Cloud. [#18835](https://github.com/gravitational/teleport/pull/18835)
* Added `server_hostname` to `session.*` audit events. [#18832](https://github.com/gravitational/teleport/pull/18832)
* Added ability to specify roles when making Access Requests in the web UI. [#18868](https://github.com/gravitational/teleport/pull/18868)
* Improved error reporting from etcd backend. [#18822](https://github.com/gravitational/teleport/pull/18822)
* Improved failed session recording upload logs to include upload and session IDs. [#18872](https://github.com/gravitational/teleport/pull/18872)

## 11.1.0

This release of Teleport contains multiple improvements and bug fixes.

* Added support for self-hosted GitHub Enterprise SSO connectors in Teleport Enterprise edition. [#18521](https://github.com/gravitational/teleport/pull/18521), [#18687](https://github.com/gravitational/teleport/pull/18687)
* Added audit events for DynamoDB via AWS CLI access. [#18035](https://github.com/gravitational/teleport/pull/18035)
* Added auth connectors support in Kubernetes Operator. [#18350](https://github.com/gravitational/teleport/pull/18350)
* Added audit events for desktop access directory sharing. [#18398](https://github.com/gravitational/teleport/pull/18398)
* Added trusted clusters support for desktop access. [#18666](https://github.com/gravitational/teleport/pull/18666)
* Added support for `user.spec` syntax in moderated session filters. [#18455](https://github.com/gravitational/teleport/pull/18455)
* Added support for GKE auto-discovery to Kubernetes access. [#18396](https://github.com/gravitational/teleport/pull/18396)
* Added FIPS support to desktop access.
[#18743](https://github.com/gravitational/teleport/pull/18743)
* Added `teleport discovery bootstrap` command. [#18641](https://github.com/gravitational/teleport/pull/18641)
* Added `windows_desktops` as the correct resource for `tctl` commands. [#18816](https://github.com/gravitational/teleport/pull/18816)
* Updated `tsh db ls` JSON and YAML output to include allowed users. [#18543](https://github.com/gravitational/teleport/pull/18543)
* Updated `tctl auth sign --format kubernetes` to allow merging multiple clusters in the same kubeconfig. [#18525](https://github.com/gravitational/teleport/pull/18525)
* Improved web UI SSH performance. [#18797](https://github.com/gravitational/teleport/pull/18797), [#18839](https://github.com/gravitational/teleport/pull/18839)
* Improved `tsh play` output in JSON and YAML formats. [#18825](https://github.com/gravitational/teleport/pull/18825)
* Fixed issue with RDS auto-discovery failing to start in some cases. [#18590](https://github.com/gravitational/teleport/pull/18590)
* Fixed "cannot read properties of null" error when trying to add a new server using the web UI. [webapps#1356](https://github.com/gravitational/webapps/pull/1356)
* Fixed issue with applications list pagination in the web UI. [#18601](https://github.com/gravitational/teleport/pull/18601)
* Fixed issue with MongoDB commands sometimes failing through database access. [#18738](https://github.com/gravitational/teleport/pull/18738)
* Fixed issue with automatically imported cloud labels not being used in RBAC in App Access. [#18642](https://github.com/gravitational/teleport/pull/18642)
* Fixed issue with Kubernetes sessions lingering after all participants have disconnected. [#18684](https://github.com/gravitational/teleport/pull/18684)
* Fixed issue where Auth Service downtime affected the ability to establish new non-moderated SSH sessions.
[#18441](https://github.com/gravitational/teleport/pull/18441)
* Fixed issue with launching SSH sessions when SELinux is enabled. [#18810](https://github.com/gravitational/teleport/pull/18810)
* Fixed issue with not being able to create SAML connectors with templated role names. [#18766](https://github.com/gravitational/teleport/pull/18766)

## 11.0.3

This release of Teleport contains multiple improvements and bug fixes.

* Fixed issue with validation of U2F devices. [#17876](https://github.com/gravitational/teleport/pull/17876)
* Fixed `tsh ssh -J` not being able to connect to leaf cluster nodes. [#18268](https://github.com/gravitational/teleport/pull/18268)
* Fixed issue with failed database connection when client requests GSS encryption. [#17811](https://github.com/gravitational/teleport/pull/17811)
* Fixed issue with setting Teleport version to v10 in Helm charts resulting in invalid config. [#18008](https://github.com/gravitational/teleport/pull/18008)
* Fixed issue with Teleport Kubernetes resource name conflicting with builtin resources. [#17717](https://github.com/gravitational/teleport/pull/17717)
* Fixed issue with invalid MS Teams plugin systemd service file. [#18028](https://github.com/gravitational/teleport/pull/18028)
* Fixed issue with failing to connect to OpenSSH 7.x servers. [#18248](https://github.com/gravitational/teleport/pull/18248)
* Fixed issue with extra trailing question mark in application Access Requests. [#17955](https://github.com/gravitational/teleport/pull/17955)
* Fixed issue with application access websocket requests sometimes failing in Chrome. [#18002](https://github.com/gravitational/teleport/pull/18002)
* Fixed issue with multiple `tbot` instances concurrently using the same output directory. [#17999](https://github.com/gravitational/teleport/pull/17999)
* Fixed issue with `tbot` failing to parse version on some kernels.
[#18298](https://github.com/gravitational/teleport/pull/18298)
* Fixed panic when a v9 node runs against a v11 Auth Service. [#18383](https://github.com/gravitational/teleport/pull/18383)
* Fixed issue with Kubernetes proxy caching client credentials between sessions. [#18109](https://github.com/gravitational/teleport/pull/18109)
* Fixed issue with agents not being able to reconnect to proxies in some cases. [#18149](https://github.com/gravitational/teleport/pull/18149)
* Fixed issue with remote tunnel connections not being closed properly. [#18224](https://github.com/gravitational/teleport/pull/18224)
* Added CircleCI support to Machine ID. [#17996](https://github.com/gravitational/teleport/pull/17996)
* Added support for `arm` and `arm64` Docker images for Teleport and Operator. [#18222](https://github.com/gravitational/teleport/pull/18222)
* Added PostgreSQL and MySQL RDS Proxy support to database access. [#18045](https://github.com/gravitational/teleport/pull/18045)
* Improved database access denied error messages. [#17856](https://github.com/gravitational/teleport/pull/17856)
* Improved desktop access errors in case of locked sessions. [#17549](https://github.com/gravitational/teleport/pull/17549)
* Improved web UI handling of private key policy errors. [#17991](https://github.com/gravitational/teleport/pull/17991)
* Improved memory usage in clusters with large numbers of active sessions. [#18051](https://github.com/gravitational/teleport/pull/18051)
* Updated `tsh proxy ssh` to support `HTTPS_PROXY`. [#18295](https://github.com/gravitational/teleport/pull/18295)
* Updated Azure-hosted databases to fetch the new CA. [#18172](https://github.com/gravitational/teleport/pull/18172)
* Updated `tsh kube login` to support providing default user, group and namespace. [#18185](https://github.com/gravitational/teleport/pull/18185)
* Updated web UI session listing to include active sessions of all types.
[#18229](https://github.com/gravitational/teleport/pull/18229)
* Updated user locking to terminate in-progress TCP application access connections. [#18187](https://github.com/gravitational/teleport/pull/18187)
* Updated `teleport configure` command to produce v2 config when Auth Service is provided. [#17914](https://github.com/gravitational/teleport/pull/17914)
* Updated all systemd service files to set max open files limit. [#17961](https://github.com/gravitational/teleport/pull/17961)

## 11.0.1

This release of Teleport contains a security fix and multiple bug fixes.

### Block SFTP in Moderated Sessions

Teleport did not block the SFTP protocol in Moderated Sessions.

[#17727](https://github.com/gravitational/teleport/pull/17727)

### Other fixes

* Fixed issue with agent forwarding not working for auto-created users. [#17586](https://github.com/gravitational/teleport/pull/17586)
* Fixed "traits missing" error in application access. [#17737](https://github.com/gravitational/teleport/pull/17737)
* Fixed connection leak issue in IAM joining. [#17737](https://github.com/gravitational/teleport/pull/17737)
* Fixed panic in `tsh db ls`. [#17780](https://github.com/gravitational/teleport/pull/17780)
* Fixed issue with `tsh mfa add` not displaying OTP QR code image on Windows. [#17703](https://github.com/gravitational/teleport/pull/17703)
* Fixed issue with `tctl rm windows_desktop/` removing all desktops. [#17732](https://github.com/gravitational/teleport/pull/17732)
* Fixed issue connecting to Redis 7.0 in cluster mode. [#17849](https://github.com/gravitational/teleport/pull/17849)
* Fixed "failed to open user account database" error after exiting SSH session. [#17825](https://github.com/gravitational/teleport/pull/17825)
* Improved `tctl` UX when using hardware-backed private keys. [#17681](https://github.com/gravitational/teleport/pull/17681)
* Improved `tsh mfa add` error reporting.
[#17580](https://github.com/gravitational/teleport/pull/17580)

## 11.0.0

Teleport 11 brings the following new major features and improvements:

- Hardware-backed private keys support for server access (Enterprise only).
- Replacement of obsolete SCP protocol with SFTP for server access.
- Removal of persistent storage requirement for Helm charts.
- Automatic discovery and enrollment of EKS/AKS clusters for Kubernetes access.
- Richer Azure integrations for server and database access.
- Cassandra and Scylla support for database access, including Amazon Keyspaces.
- GitHub Actions and Terraform support for Machine ID.
- Access Requests and file upload/download support for Teleport Connect.

### Hardware-backed private keys (Enterprise only)

Teleport 11 clients (such as tsh or Connect) support storing their private key material on Yubikey devices instead of the filesystem, which helps prevent credential exfiltration attacks.

See how to enable it in the [documentation](docs/pages/admin-guides/access-controls/guides/hardware-key-support.mdx).

Hardware-backed private keys are an Enterprise-only feature and are currently supported for server access only.

### SFTP protocol

Teleport 11 adds server-side support for the SFTP protocol, which many IDEs such as VSCode or JetBrains PyCharm, GoLand and others use for browsing, copying, and editing files on remote systems.

The following guides explain how to use IDEs to connect to a remote machine via Teleport:

- [VS Code](./docs/pages/enroll-resources/server-access/guides/vscode.mdx)
- [JetBrains](./docs/pages/enroll-resources/server-access/guides/jetbrains-sftp.mdx)

In addition, Teleport 11 clients will use the SFTP protocol for file transfer under the hood instead of the obsolete SCP protocol. Server-side SCP is still supported so existing clients aren’t affected.
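For example, once `tsh config` has emitted an OpenSSH-compatible client configuration, a stock `sftp` client can reach Teleport nodes; host and user names below are placeholders:

```
# Generate OpenSSH client configuration for the logged-in cluster.
tsh config > ~/.ssh/teleport.cfg

# Browse and transfer files over SFTP through Teleport.
sftp -F ~/.ssh/teleport.cfg ubuntu@node.example.com
```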

### Helm charts persistent storage

In Teleport 11, users no longer need persistent storage when deploying Helm
charts. When running on Kubernetes, Teleport services now store their
identities in Kubernetes Secrets, which removes the need for persistent
storage or static join tokens.

For existing deployments, this change involves a migration from a Deployment to
a StatefulSet, which is performed automatically during the Helm upgrade to
Teleport 11.

### EKS/AKS discovery

Teleport 11 adds support for automatic discovery and enrollment of AWS Elastic
Kubernetes Service (EKS) and Azure Kubernetes Service (AKS) clusters.

### Azure integrations

Teleport 11 improves Azure support in multiple areas.

Teleport agents running on Azure VMs now automatically import Azure tags to
label resources.

Teleport database access now supports auto-discovery for Azure-hosted PostgreSQL
and MySQL databases. See the [Azure
guide](docs/pages/enroll-resources/database-access/enroll-azure-databases/azure-postgres-mysql.mdx)
for more details.

In addition, Teleport database access now uses Azure AD managed identity
authentication for Azure-hosted SQL Server databases.

### Cassandra/ScyllaDB

Teleport 11 adds support for Cassandra and ScyllaDB databases in database
access. This includes support for Amazon Keyspaces.

### Machine ID

Teleport 11 adds support for secret-less joining of Machine ID agents in GitHub
Actions workflows. See the guide for more details: TODO

We have also released a GitHub Action for setting up the Teleport binaries
within a GitHub workflow environment. More details can be found in the Teleport
GitHub Actions repository:

https://github.com/gravitational/teleport-actions

In addition, the Teleport Terraform plugin now supports the creation of Machine
ID Bots and Bot Tokens.
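
The secret-less GitHub Actions joining mentioned above is driven by a provision
token that uses the `github` join method. The sketch below is illustrative
only: the token name, bot name, and repository are placeholders, and the
authoritative schema lives in the Machine ID documentation.

```yaml
# Illustrative only: a Bot join token allowing workflows from one
# repository to join without a shared secret. The "github-demo" bot
# name and "example-org/example-repo" repository are placeholders.
kind: token
version: v2
metadata:
  name: github-bot-token
spec:
  roles: [Bot]
  bot_name: github-demo
  join_method: github
  github:
    allow:
      - repository: example-org/example-repo
```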

### tsh MFA on Windows

tsh 11 adds support for MFA and passwordless logins via Windows Hello and
FIDO2 devices.

### Teleport Connect

Teleport Connect has added support for Access Requests and file upload/download.

### Breaking Changes

Please familiarize yourself with the following potentially disruptive changes in
Teleport 11 before upgrading.

#### Removed GitHub external SSO

Beginning in Teleport 11, GitHub SAML SSO will only be available in our
Enterprise Edition. GitHub SSO without SAML will continue to work with OSS
Teleport.

To keep using GitHub SSO with Teleport Community Edition, SAML SSO needs to be
disabled for your GitHub organization. Teleport Community Edition users can
continue to use GitHub SSO on the GitHub Free or Team plans.

#### Changed Terraform OIDC connector redirect_url type to array

In Teleport Plugins 11, the `redirect_url` property in OIDC connectors created
via a Terraform module expects an array:

```hcl
redirect_url = [ "http://example.com" ]
```

#### Deprecated Quay.io registry

Starting with Teleport 11, the Quay.io container registry has been deprecated.
Customers should use the new AWS ECR registry to pull [Teleport Docker
images](./docs/pages/installation.mdx#docker).

Quay.io registry support will be removed in a future release.

#### Deprecated old deb/rpm repositories

In Teleport 11, the old deb/rpm repositories (deb.releases.teleport.dev and
rpm.releases.teleport.dev) have been deprecated. Customers should use the new
repositories (apt.releases.teleport.dev and yum.releases.teleport.dev) to
[install Teleport](docs/pages/installation.mdx#linux).

Support for our old deb/rpm repositories will be removed in a future release.

#### Changed teleport-kube-agent Helm chart to StatefulSet

Teleport 11 agents now store their identities in Kubernetes Secrets when
deployed via a Helm chart, which eliminates the need for persistent storage
or static join tokens.
Due to this change, Teleport agents are now always deployed as part of a
StatefulSet, regardless of whether persistent storage is enabled.

Existing agents that were deployed as Kubernetes Deployments (i.e. without
persistent storage) are automatically converted to StatefulSets during the
Teleport 11 Helm upgrade.

#### Removed PostgreSQL backend

The preview PostgreSQL backend was deleted due to performance and scalability
concerns.

#### Removed desktop access support for 32-bit ARM and 386 architectures

32-bit support for desktop access on ARM and 386 architectures has been removed
due to performance issues on these devices.

This also reduces the binary size for these builds, making them slightly more
convenient for smaller resource-constrained devices.

## 10.0.0

Teleport 10 is a major release that brings the following new features.

Platform:

* Passwordless (Preview)
* Resource Access Requests (Preview)
* Proxy Peering (Preview)

Server access:

* IP-Based Restrictions (Preview)
* Automatic User Provisioning (Preview)

Database access:

* Audit Logging for Microsoft SQL Server database access
* Snowflake database access (Preview)
* ElastiCache/MemoryDB database access (Preview)

Teleport Connect:

* Teleport Connect for server and database access (Preview)

Machine ID:

* Machine ID database access support (Preview)

### Passwordless (Preview)

Teleport 10 introduces passwordless support to your clusters. To go
passwordless, users may register a security key with resident credentials or
use a built-in authenticator, like Touch ID.

See the [documentation](docs/pages/admin-guides/access-controls/guides/passwordless.mdx).

### Resource Access Requests (Preview)

Teleport 10 expands just-in-time Access Requests to allow requesting access
to specific resources. This lets you grant users the least-privileged access
needed for their workflows.
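
As a sketch, a role that lets users search for specific resources and request
access to them might look like the following. The `requester` and `access` role
names are placeholders; consult the Access Request documentation for the
authoritative fields.

```yaml
# Illustrative only: "requester" and "access" are placeholder role names.
kind: role
version: v5
metadata:
  name: requester
spec:
  allow:
    request:
      # Roles the user may assume while searching for resources to request.
      search_as_roles: [access]
```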

Just-in-time Access Requests are only available in Teleport Enterprise Edition.

### Proxy Peering (Preview)

Proxy peering enables Teleport deployments to scale without an increase in load
from the number of agent connections. This is accomplished by allowing Proxy
Services to tunnel client connections to the desired agent through a neighboring
proxy, decoupling the number of agent connections from the number of Proxies.

Proxy peering can be enabled with the following configuration:

```yaml
auth_service:
  tunnel_strategy:
    type: proxy_peering
    agent_connection_count: 1
```

```yaml
proxy_service:
  peer_listen_addr: 0.0.0.0:3021
```

Network connectivity between proxy servers on the `peer_listen_addr` port is
required for this feature to work.

Proxy peering is only available in Teleport Enterprise Edition.

### IP-Based Restrictions (Preview)

Teleport 10 introduces a new role option to pin the source IP in SSH
certificates. When enabled, the source IP that was used to request certificates
is embedded in the certificate, and SSH servers reject connection attempts
from other IPs. This protects against attacks where valid credentials are
exfiltrated from disk and copied into other environments.

IP-based restrictions are only available in Teleport Enterprise Edition.

### Automatic User Provisioning (Preview)

Teleport 10 can be configured to automatically create Linux host users upon
login without having to use Teleport's PAM integration. Users can be added to
specific Linux groups and assigned appropriate "sudoer" privileges.

To learn more about configuring automatic user provisioning, read the
[documentation](docs/pages/enroll-resources/server-access/guides/host-user-creation.mdx).

### Audit Logging for Microsoft SQL Server database access

Teleport 9 introduced a preview of database access support for Microsoft SQL
Server which didn't include audit logging of user queries.
Teleport 10 captures users'
queries and prepared statements and sends them to the audit log, as it does for
other supported database protocols.

Teleport database access for SQL Server remains in preview, with more UX
improvements coming in future releases.

Refer to [the guide](docs/pages/enroll-resources/database-access/enroll-aws-databases/sql-server-ad.mdx)
to set up access to a SQL Server with Active Directory authentication.

### Snowflake database access (Preview)

Teleport 10 brings Snowflake support to database access. Administrators can
set up access to Snowflake databases through Teleport for their users, with
standard database access features like role-based access control and audit
logging, including query activity.

Connect your Snowflake database to Teleport following the
[documentation](docs/pages/enroll-resources/database-access/enroll-managed-databases/snowflake.mdx).

### ElastiCache/MemoryDB database access (Preview)

Teleport 9 added Redis protocol support to database access. Teleport 10 improves
this integration by adding native support for AWS-hosted ElastiCache and
MemoryDB, including auto-discovery and automatic credential management in some
deployment configurations.

Learn more about it in the
[documentation](docs/pages/enroll-resources/database-access/enroll-aws-databases/redis-aws.mdx).

### Teleport Connect for server and database access (Preview)

Teleport Connect is a graphical macOS application that simplifies access to your
Teleport resources. Teleport Connect 10 supports server access and database
access. Other protocols and Windows support are coming in a future release.

Get the Teleport Connect installer from the macOS tab on the downloads page:
https://goteleport.com/download/.

### Machine ID database access support (Preview)

In Teleport 10 we've added database access support to Machine ID. Applications
can use Machine ID to access databases protected by Teleport.

You can find the Machine ID guide for database access in the
[documentation](docs/pages/enroll-resources/machine-id/access-guides/databases.mdx).

### Breaking changes

Please familiarize yourself with the following potentially disruptive changes in
Teleport 10 before upgrading.

#### Auth Service version check

Teleport 10 agents refuse to start if they detect that the Auth Service is
more than one major version behind them. You can use the `--skip-version-check`
flag to bypass the version check.

Take a look at the component compatibility guarantees in the
[documentation](docs/pages/upgrading/upgrading.mdx).

#### HTTP_PROXY for reverse tunnels

Reverse tunnel connections now respect `HTTP_PROXY` environment variables.
This may result in reverse tunnel agents not being able to re-establish
connections if an HTTP proxy is set in their environment and does not allow
connections to the Teleport Proxy Service.

Refer to the
[documentation](docs/pages/reference/networking.mdx#http-connect-proxies)
for more details.

#### New APT repos

With Teleport 10 we've migrated to new APT repositories that support multiple
release channels, Teleport versions, and OS distributions. The new repositories
have been backfilled with Teleport versions starting from 6.2.31, and we
recommend upgrading to them. The old repositories will be maintained for the
foreseeable future.

See the [installation
instructions](docs/pages/enroll-resources/server-access/getting-started.mdx#step-14-install-teleport-on-your-linux-host).

#### Removed `tctl access ls`

The `tctl access ls` command that returned information about user server access
within the cluster was removed. Please use a previous `tctl` version if you'd
like to keep using it.

#### Relaxed session join permissions

In previous versions of Teleport, users needed full access to a Node or
Kubernetes pod in order to join a session. Teleport 10 relaxes this requirement.
Joining
sessions remains deny-by-default, but now only `join_sessions` statements are
checked for session join RBAC.

See the [Moderated Sessions
guide](docs/pages/admin-guides/access-controls/guides/joining-sessions.mdx) for
more details.

#### GitHub connectors

The GitHub authentication connector's `teams_to_logins` field is deprecated in
favor of the new `teams_to_roles` field. The old field will be removed in a
future release.

#### Teleport FIPS AWS endpoints

Teleport 10 now automatically uses FIPS endpoints for AWS S3 and DynamoDB when
started with the `--fips` flag. You can use the `use_fips_endpoint=false`
connection endpoint option to use regular endpoints for Teleport in FIPS mode,
for example:

```
s3://bucket/path?region=us-east-1&use_fips_endpoint=false
```

See the [S3/DynamoDB backend
documentation](docs/pages/reference/backends.mdx) for more information.

## 9.3.9

This release of Teleport contains a security fix, as well as multiple
improvements and bug fixes.

### Auth bypass in Moderated Sessions

When checking a user's roles prior to starting a session, Teleport may have
incorrectly allowed a session to proceed without moderation depending on the
order in which roles were received from the backend.

### Other improvements and fixes

* Fixed issue with per-session MFA swallowing key presses. [#13822](https://github.com/gravitational/teleport/pull/13822)
* Fixed issue with `tsh db ls -R` not showing allowed users. [#13626](https://github.com/gravitational/teleport/pull/13626)
* Fixed vertical and horizontal scroll in desktop access. [#13905](https://github.com/gravitational/teleport/pull/13905)
* Fixed issue with invalid query filters forcing `tsh` relogin. [#13747](https://github.com/gravitational/teleport/pull/13747)
* Fixed issue with TLS routing and proxy jump. [#13928](https://github.com/gravitational/teleport/pull/13928)
* Fixed issue with MongoDB connections timing out in certain scenarios.
[#13859](https://github.com/gravitational/teleport/pull/13859)
* Fixed issue with Machine ID certificate renewal with empty requested roles. [#13893](https://github.com/gravitational/teleport/pull/13893)
* Fixed issue with Windows desktops not being labeled with LDAP attribute labels. [#13681](https://github.com/gravitational/teleport/pull/13681)
* Fixed issue with desktop access streaming not being terminated properly. [#14024](https://github.com/gravitational/teleport/pull/14024)
* Added ability to use FIPS endpoints for S3 and DynamoDB using the `use_fips_endpoint` connection option. [#13703](https://github.com/gravitational/teleport/pull/13703)
* Added ability to specify the CA pin as a file path in the config. [#13089](https://github.com/gravitational/teleport/pull/13089)
* Improved reconnect reliability after root proxy restart. [#13967](https://github.com/gravitational/teleport/pull/13967)
* Improved error messages for failed auth client connections. [#13835](https://github.com/gravitational/teleport/pull/13835)

## 9.3.7

This release of Teleport contains multiple improvements and bug fixes.

* Fixed issue with startup delay caused by the AWS EC2 check. [#13167](https://github.com/gravitational/teleport/pull/13167)
* Added `tsh ls -R`, which displays resources across all clusters and profiles. [#13313](https://github.com/gravitational/teleport/pull/13313)
* Fixed issue with `tsh` not correctly reporting "address in use" errors during port forwarding. [#13679](https://github.com/gravitational/teleport/pull/13679)
* Fixed two potential panics. [#13590](https://github.com/gravitational/teleport/pull/13590), [#13655](https://github.com/gravitational/teleport/pull/13655)
* Fixed issue with enhanced session recording not working on recent Ubuntu versions. [#13650](https://github.com/gravitational/teleport/pull/13650)
* Fixed issue with CA rotation when the Database Service does not contain any databases.
[#13517](https://github.com/gravitational/teleport/pull/13517)
* Fixed issue with desktop access connections failing with an "invalid channel name rdpsnd" error. [#13450](https://github.com/gravitational/teleport/issues/13450)
* Fixed issue with invalid Teleport config when enabling IMDSv2 in Terraform config. [#13537](https://github.com/gravitational/teleport/pull/13537)

## 9.3.6

This release of Teleport contains multiple improvements and bug fixes.

* Added Unicode clipboard support to desktop access. [#13391](https://github.com/gravitational/teleport/pull/13391)
* Fixed backwards compatibility issue with fetching Access Requests from older servers. [#13490](https://github.com/gravitational/teleport/pull/13490)
* Fixed issue with requests to Teleport-protected apps periodically failing with 500 errors. [#13469](https://github.com/gravitational/teleport/pull/13469)
* Fixed issues with pagination when displaying applications. [#13451](https://github.com/gravitational/teleport/pull/13451)
* Fixed file descriptor leak in Machine ID. [#13386](https://github.com/gravitational/teleport/pull/13386)

## 9.3.5

This release of Teleport contains multiple improvements and bug fixes.

* Fixed backwards compatibility issue with fetching Access Requests from older servers. [#13428](https://github.com/gravitational/teleport/pull/13428)
* Fixed issue with using Microsoft SQL Server Management Studio with database access. [#13337](https://github.com/gravitational/teleport/pull/13337)
* Added support for `tsh proxy ssh -J` to improve interoperability with OpenSSH clients. [#13311](https://github.com/gravitational/teleport/pull/13311)
* Added ability to provide a security context in Helm charts. [#13286](https://github.com/gravitational/teleport/pull/13286)
* Added application and database access support to the reference AWS Terraform deployment.
[#13383](https://github.com/gravitational/teleport/pull/13383)
* Improved reliability of dialing the Auth Service through the Proxy Service. [#13399](https://github.com/gravitational/teleport/pull/13399)
* Improved `kubectl exec` auditing by logging access denied attempts. [#12831](https://github.com/gravitational/teleport/pull/12831), [#13400](https://github.com/gravitational/teleport/pull/13400)

## 9.3.4

This release of Teleport contains multiple security fixes, bug fixes, and improvements.

### Escalation attack in agent forwarding

When setting up agent forwarding on the node, Teleport did not handle Unix
socket creation in a secure manner.

This could have given a potential attacker an opportunity to get Teleport to
change arbitrary file permissions to the attacker's user.

### WebSockets CSRF

When handling WebSocket requests, Teleport did not verify that the provided
Bearer token was generated for the correct user.

This could have allowed a malicious low-privileged Teleport user to use a
social engineering attack to gain higher-privileged access on the same Teleport
cluster.

### Denial of service in Access Requests

When accepting an Access Request, Teleport did not enforce the maximum request
reason size.

This could have allowed a malicious actor to mount a DoS attack by creating an
Access Request with a very large request reason.

### Auth bypass in Moderated Sessions

When initializing a Moderated Session, Teleport did not discard participants'
input prior to the moderator joining.

This could have prevented a moderator from interrupting a malicious command
executed by a participant.

### Other fixes

* Fixed issue with stdin hijacking when per-session MFA is enabled. [#13212](https://github.com/gravitational/teleport/pull/13212)
* Added support for automatic tags import when running on AWS EC2. [#12593](https://github.com/gravitational/teleport/pull/12593)
* Added ability to use multiple redirect URLs in OIDC connectors.
[#13046](https://github.com/gravitational/teleport/pull/13046)
* Fixed issue with ANSI escape sequences being broken when using `tsh` on Windows. [#13221](https://github.com/gravitational/teleport/pull/13221)
* Fixed issue with `tsh ssh` printing an extra error upon exit if the last command was unsuccessful. [#12903](https://github.com/gravitational/teleport/pull/12903)
* Added support for Proxy Protocol v2 in the MySQL proxy. [#12993](https://github.com/gravitational/teleport/pull/12993)
* Upgraded to Go `v1.17.11`. [#13104](https://github.com/gravitational/teleport/pull/13104)
* Added Windows desktop labeling based on LDAP attributes. [#13238](https://github.com/gravitational/teleport/pull/13238)
* Improved performance when listing resources for users with many roles. [#13263](https://github.com/gravitational/teleport/pull/13263)

## 9.3.2

This release of Teleport contains two bug fixes.

* Fixed issue with Machine ID's `tsh` version check. [#13037](https://github.com/gravitational/teleport/pull/13037)
* Fixed AWS-related log spam in the database agent when not running on AWS. [#12984](https://github.com/gravitational/teleport/pull/12984)

## 9.3.0

This release of Teleport contains multiple improvements and bug fixes.

* Fixed issue with `tctl` not taking the `TELEPORT_HOME` environment variable into account. [#12738](https://github.com/gravitational/teleport/pull/12738)
* Fixed issue with the Redis `AUTH` command not always authenticating the user in database access. [#12754](https://github.com/gravitational/teleport/pull/12754)
* Fixed issue with Teleport not starting with deprecated U2F configuration. [#12826](https://github.com/gravitational/teleport/pull/12826)
* Fixed issue with `tsh db ls` not showing allowed users for leaf clusters. [#12853](https://github.com/gravitational/teleport/pull/12853)
* Fixed issue with `teleport configure` failing when given a non-existent data directory.
[#12806](https://github.com/gravitational/teleport/pull/12806)
* Fixed issue with `tctl` not outputting debug logs. [#12920](https://github.com/gravitational/teleport/pull/12920)
* Fixed issue with Kubernetes access not working when using the default CA pool. [#12874](https://github.com/gravitational/teleport/pull/12874)
* Fixed issue with Machine ID not working in TLS routing mode. [#12990](https://github.com/gravitational/teleport/pull/12990)
* Improved connection performance in large clusters. [#12832](https://github.com/gravitational/teleport/pull/12832)
* Improved memory usage in large clusters. [#12724](https://github.com/gravitational/teleport/pull/12724)

### Breaking Changes

Teleport 9.3.0 reduces the minimum glibc requirement to 2.18 and enforces more
secure cipher suites for desktop access.

As a result of these changes, desktop access users with desktops running
Windows Server 2012 R2 will need to perform [additional
configuration](docs/pages/enroll-resources/desktop-access/getting-started.mdx)
to force Windows to use compatible cipher suites.

Windows desktops running Windows Server 2016 and newer continue to operate
normally; no additional configuration is required.

## 9.2.4

This release of Teleport contains multiple improvements and bug fixes.

* Fixed compatibility issue with agents connected to older Auth Service instances. [#12728](https://github.com/gravitational/teleport/pull/12728)
* Fixed issue with the TLS routing endpoint advertising a preference for `http/1.1` over `h2`. [#12749](https://github.com/gravitational/teleport/pull/12749)
* Implemented multiple proxy restart stability improvements. [#12632](https://github.com/gravitational/teleport/pull/12632), [#12488](https://github.com/gravitational/teleport/pull/12488), [#12689](https://github.com/gravitational/teleport/pull/12689)
* Improved compatibility with PuTTY.
[#12662](https://github.com/gravitational/teleport/pull/12662)
* Added support for the global tsh config file `/etc/tsh.yaml`. [#12626](https://github.com/gravitational/teleport/pull/12626)
* Added `tbot configure` command. [#12576](https://github.com/gravitational/teleport/pull/12576)
* Fixed issue with desktop access not working in Teleport Enterprise (Cloud). [#12781](https://github.com/gravitational/teleport/pull/12781)
* Improved Web UI performance in large clusters. [#12637](https://github.com/gravitational/teleport/pull/12637)
* Fixed issue with running MySQL stored procedures via database access. [#12734](https://github.com/gravitational/teleport/pull/12734)

## 9.2.3

This release of Teleport contains multiple improvements and bug fixes.

* Fixed issue with `HTTP_PROXY` being inadvertently respected in reverse tunnel connections. [#12335](https://github.com/gravitational/teleport/pull/12335)
* Added `--format` flag to the `tctl token add` command. [#12588](https://github.com/gravitational/teleport/pull/12588)
* Fixed backwards compatibility issues with session upload. [#12535](https://github.com/gravitational/teleport/pull/12535)
* Added support for persistence in custom mode in Helm charts. [#12218](https://github.com/gravitational/teleport/pull/12218)
* Fixed issue with the PostgreSQL backend not respecting the username from the certificate. [#12553](https://github.com/gravitational/teleport/pull/12553)
* Fixed issues with `kubectl cp` and `kubectl exec` not working through Kubernetes access. [#12541](https://github.com/gravitational/teleport/pull/12541)
* Fixed issues with dynamic registration logic for cloud databases. [#12451](https://github.com/gravitational/teleport/pull/12451)
* Fixed issue with the automatic Add Application script failing to join the cluster. [#12539](https://github.com/gravitational/teleport/pull/12539)
* Fixed issue with `tctl` crashing when PAM is enabled.
[#12572](https://github.com/gravitational/teleport/pull/12572)
* Added support for setting the priority class and extra labels in Helm charts. [#12568](https://github.com/gravitational/teleport/pull/12568)
* Fixed issue with App Access JWT tokens not including the `iat` claim. [#12589](https://github.com/gravitational/teleport/pull/12589)
* Added ability to inject App Access JWT tokens in rewritten headers. [#12589](https://github.com/gravitational/teleport/pull/12589)
* Desktop access now automatically adds a `teleport.dev/ou` label for desktops discovered via LDAP. [#12502](https://github.com/gravitational/teleport/pull/12502)
* Updated Machine ID to generate identity files compatible with `tctl` and `tsh`. [#12500](https://github.com/gravitational/teleport/pull/12500)
* Updated internal build infrastructure to Go 1.17.10. [#12607](https://github.com/gravitational/teleport/pull/12607)
* Improved proxy memory usage in clusters with a large number of nodes. [#12573](https://github.com/gravitational/teleport/pull/12573)

## 9.2.1

This release of Teleport contains an improvement and several bug fixes.

* Updated `tctl rm` command to support removing tokens. [#12439](https://github.com/gravitational/teleport/pull/12439)
* Fixed issue with Teleport failing to start when using the DynamoDB backend in pay-per-request mode. [#12461](https://github.com/gravitational/teleport/pull/12461)
* Fixed issue with Kubernetes port forwarding not working. [#12468](https://github.com/gravitational/teleport/pull/12468)
* Fixed issue with the IAM policy limit when using database auto-discovery on Kubernetes. [#12457](https://github.com/gravitational/teleport/pull/12457)

## 9.2.0

This release of Teleport contains multiple improvements, security fixes, and bug fixes.

* Fixed issue with U2F facets not being properly validated. [#12208](https://github.com/gravitational/teleport/pull/12208)
* Hardened SQLite permissions.
[#12360](https://github.com/gravitational/teleport/pull/12360)
* Fixed issue with the OIDC callback not checking the `email_verified` claim. [#12360](https://github.com/gravitational/teleport/pull/12360)
* Added `max_kubernetes_connections` role option for limiting simultaneous Kubernetes connections. [#12360](https://github.com/gravitational/teleport/pull/12360)
* Fixed issue with Teleport failing to start in pay-per-request DynamoDB mode. [#12360](https://github.com/gravitational/teleport/pull/12360)
* Reduced Machine ID verbosity in case of missing secure symlink kernel support. [#12423](https://github.com/gravitational/teleport/pull/12423)
* Fixed `tsh proxy db` tunnel mode not working for CockroachDB connections. [#12400](https://github.com/gravitational/teleport/pull/12400)
* Added support for database access certificates in Machine ID. [#12195](https://github.com/gravitational/teleport/pull/12195)
* Improved shutdown/restart stability in certain scenarios. [#12393](https://github.com/gravitational/teleport/pull/12393)
* Added support for clickable labels in the web UI. [#12422](https://github.com/gravitational/teleport/pull/12422)

## 9.1.3

This release of Teleport contains multiple improvements and bug fixes.

* Fixed issue with some MySQL clients not being able to connect to MySQL 8.0 servers. [#12340](https://github.com/gravitational/teleport/pull/12340)
* Fixed multiple conditions that could lead to SSH sessions freezing. [#12286](https://github.com/gravitational/teleport/pull/12286)
* Fixed issue with `tsh db ls` failing for leaf clusters. [#12320](https://github.com/gravitational/teleport/pull/12320)
* Fixed a scenario in which Teleport's internal cache could potentially become unhealthy. [#12251](https://github.com/gravitational/teleport/pull/12251), [#12002](https://github.com/gravitational/teleport/pull/12002)
* Improved performance when opening new application access sessions.
[#12300](https://github.com/gravitational/teleport/pull/12300)
* Added flags to the `teleport configure` command. [#12267](https://github.com/gravitational/teleport/pull/12267)
* Improved CA rotation stability. [#12333](https://github.com/gravitational/teleport/pull/12333)
* Fixed issue with `mongosh` certificate verification when using TLS routing. [#12363](https://github.com/gravitational/teleport/pull/12363)

## 9.1.2

This release of Teleport contains two bug fixes.

* Fixed issue with Teleport pods not becoming ready on Kubernetes. [#12243](https://github.com/gravitational/teleport/pull/12243)
* Fixed issue with Teleport processes crashing upon restart after failed host UUID generation. [#12222](https://github.com/gravitational/teleport/pull/12222)

## 9.1.1

This release of Teleport contains multiple bug fixes and improvements.

* Fixed regression issue where reverse tunnel connections inadvertently started respecting `HTTP_PROXY`. [#12035](https://github.com/gravitational/teleport/pull/12035)
* Fixed potential deadlock in the SSH server. [#12122](https://github.com/gravitational/teleport/pull/12122)
* Fixed issue with the Kubernetes service not reporting its readiness. [#12152](https://github.com/gravitational/teleport/pull/12152)
* Fixed issue with the JumpCloud identity provider. [#11936](https://github.com/gravitational/teleport/pull/11936)
* Fixed issue with deleting many records from the Firestore backend. [#12177](https://github.com/gravitational/teleport/pull/12177)

## 9.1.0

Teleport 9.1 is a minor release that brings several new features, security
fixes, and bug fixes.

### Security

Teleport build infrastructure was updated to use Go v1.17.9 to fix
CVE-2022-24675, CVE-2022-28327, and CVE-2022-27536.

### SQL backend (preview)

Teleport users can now use PostgreSQL or CockroachDB for storing Auth Service
data.

See the [documentation](docs/pages/reference/backends.mdx) for more information.
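
Enabling the preview backend amounts to pointing the `storage` section of
`teleport.yaml` at the database. The fragment below is a sketch only: the
`type` value, field names, and connection string are assumptions, and the
backend reference linked above is authoritative.

```yaml
# Illustrative only: the storage type, field names, and connection string
# below are placeholders; consult the backend reference for the real schema.
teleport:
  storage:
    type: postgres
    conn_string: postgresql://teleport@db.example.com:5432/teleport_backend?sslmode=verify-full
```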

### Server-side filtering and pagination

Searching and filtering resources is now handled on the server, improving the
efficiency of queries with `tsh`, `tctl`, and the web UI.

The web UI loads resources faster by leveraging server-side pagination.
Additionally, the web UI supports bookmarking searches by including the query in
the URL.

### Other improvements and fixes

* Fixed issue with stdin being ignored after refreshing expired credentials. [#11847](https://github.com/gravitational/teleport/pull/11847)
* Fixed issue with `tsh` requiring host login when using identity files for some commands. [#11793](https://github.com/gravitational/teleport/pull/11793)
* Added support for calling the proxy over plain HTTP in insecure mode. [#11403](https://github.com/gravitational/teleport/pull/11403)
* Fixed multiple issues that could lead to session output freezing. [#11853](https://github.com/gravitational/teleport/pull/11853)
* Added optional gRPC client/server latency metrics. [#11773](https://github.com/gravitational/teleport/pull/11773)
* Fixed issue with connecting to self-hosted databases in TLS insecure mode. [#11758](https://github.com/gravitational/teleport/pull/11758)
* Improved error message when an incorrect auth connector name is used. [#11884](https://github.com/gravitational/teleport/pull/11884)
* Implemented multiple Moderated Sessions stability improvements. [#11803](https://github.com/gravitational/teleport/pull/11803), [#11890](https://github.com/gravitational/teleport/pull/11890)
* Added authenticated tunnel mode to the `tsh proxy db` command. [#11808](https://github.com/gravitational/teleport/pull/11808)
* Fixed issue with application sessions not being deleted upon web logout. [#11956](https://github.com/gravitational/teleport/pull/11956)
* Improved MySQL audit logging to include support for additional commands. [#11949](https://github.com/gravitational/teleport/pull/11949)
* Improved reliability of Teleport services restart.
  [#11795](https://github.com/gravitational/teleport/pull/11795)
* Fixed issue with the Okta OIDC auth connector not working. [#11718](https://github.com/gravitational/teleport/pull/11718)
* Added support for `json` and `yaml` formatting to all `tsh` commands. [#12050](https://github.com/gravitational/teleport/pull/12050)
* Added support for setting `kubernetes_users`, `kubernetes_groups`, `db_names`, `db_users` and `aws_role_arns` traits when creating users. [#12133](https://github.com/gravitational/teleport/pull/12133)
* Fixed potential CA rotation panic. [#12004](https://github.com/gravitational/teleport/pull/12004)
* Updated `tsh db ls` to display allowed database usernames. [#11942](https://github.com/gravitational/teleport/pull/11942)
* Fixed goroutine leak in the OIDC client. [#12078](https://github.com/gravitational/teleport/pull/12078)

## 9.0.4

This release of Teleport contains multiple improvements and fixes.

* Fixed issue with `:` not being allowed in label keys. [#11563](https://github.com/gravitational/teleport/pull/11563)
* Fixed potential panic in Kubernetes access. [#11614](https://github.com/gravitational/teleport/pull/11614)
* Added `teleport_connect_to_node_attempts_total` Prometheus metric. [#11629](https://github.com/gravitational/teleport/pull/11629)
* Multiple CA rotation stability improvements. [#11658](https://github.com/gravitational/teleport/pull/11658)
* Fixed console player Ctrl-C and Ctrl-D functionality. [#11559](https://github.com/gravitational/teleport/pull/11559)
* Improved logging when a node with existing state joins a new cluster. [#11751](https://github.com/gravitational/teleport/pull/11751)
* Added preview of the PostgreSQL/CockroachDB backend. [#11667](https://github.com/gravitational/teleport/pull/11667)
* Fixed compatibility issues with CA loading between old and new `tsh` versions. [#11663](https://github.com/gravitational/teleport/pull/11663)
* Fixed loggers not respecting JSON configuration.
  [#11655](https://github.com/gravitational/teleport/pull/11655)
* Added support for Proxy Protocol v2. [#11722](https://github.com/gravitational/teleport/pull/11722)
* Fixed a number of `tsh` player stability issues. [#11491](https://github.com/gravitational/teleport/pull/11491)
* Reduced network utilization caused by the session uploader. [#11698](https://github.com/gravitational/teleport/pull/11698)
* Improved remote cluster inventory bookkeeping. [#11707](https://github.com/gravitational/teleport/pull/11707)

## 9.0.3

This release of Teleport contains multiple fixes.

* Fixed issue with `tctl` ignoring the `TELEPORT_HOME` environment variable. [#11561](https://github.com/gravitational/teleport/pull/11561)
* Fixed multiple moderated sessions stability issues. [#11494](https://github.com/gravitational/teleport/pull/11494)
* Fixed issue with `tsh version` exiting with an error when the `tsh` config file is not present. [#11571](https://github.com/gravitational/teleport/pull/11571)
* Fixed issue with `tsh` not respecting proxy hosts. [#11496](https://github.com/gravitational/teleport/pull/11496)
* Fixed issue with the Kubernetes forwarder taking HTTP proxies into account. [#11462](https://github.com/gravitational/teleport/pull/11462)
* Fixed issue with stale DynamoDB Auth Service instances disrupting agent reconnect attempts. [#11598](https://github.com/gravitational/teleport/pull/11598)

## 9.0.2

This release of Teleport contains multiple features, improvements, and bug fixes.

* Added support for per-user `tsh` configuration preferences. [#10336](https://github.com/gravitational/teleport/pull/10336)
* Added support for role bootstrapping in OSS. [#11175](https://github.com/gravitational/teleport/pull/11175)
* Added `HTTP_PROXY` support to `tsh`. [#10209](https://github.com/gravitational/teleport/pull/10209)
* Improved the error messages `tsh` and `tctl` show to include usage information on invalid command line invocation.
  [#11174](https://github.com/gravitational/teleport/pull/11174)
* Improved `tctl ls` output to make it consistent across all resources. [#9519](https://github.com/gravitational/teleport/pull/9519)
* Fixed multiple issues with CA rotation, graceful restart, and stability. [#10706](https://github.com/gravitational/teleport/pull/10706) [#11074](https://github.com/gravitational/teleport/pull/11074) [#11283](https://github.com/gravitational/teleport/pull/11283)
* Fixed issue where the MOTD was not always shown. [#10735](https://github.com/gravitational/teleport/pull/10735)
* Fixed an issue where certificate extensions were not included in `tctl auth sign`. [#10949](https://github.com/gravitational/teleport/pull/10949)
* Fixed a panic that could occur in the Web UI. [#11389](https://github.com/gravitational/teleport/pull/11389)

## 9.0.1

This release of Teleport contains multiple improvements and bug fixes.

* Fixed issue with Ctrl-C freezing sessions. [#11188](https://github.com/gravitational/teleport/pull/11188)
* Improved handling of unknown audit events. [#11064](https://github.com/gravitational/teleport/pull/11064)
* Improved calculation of public addresses for dynamically registered apps. [#11139](https://github.com/gravitational/teleport/pull/11139)
* Fixed `tsh aws ecr` returning 500 errors. [#11108](https://github.com/gravitational/teleport/pull/11108)
* Fixed issue with deleting certain users. [#11131](https://github.com/gravitational/teleport/pull/11131)
* Fixed issue with Machine ID not detecting a token in file config. [#11206](https://github.com/gravitational/teleport/pull/11206)

## 9.0.0

Teleport 9.0 is a major release that brings:

- Teleport desktop access GA
- Teleport Machine ID Preview
- Various additions to Teleport database access
- Moderated Sessions for server and Kubernetes access

Desktop access adds support for clipboard sharing, session recording, and
per-session MFA.

Teleport Machine ID Preview extends identity-based access to machines. It's the
easiest way to issue, renew, and manage SSH and X.509 certificates for service
accounts, microservices, CI/CD automation, and all other forms of
machine-to-machine access.

Database access brings self-hosted Redis support, RDS MariaDB (10.6 and higher)
support, auto-discovery for Redshift clusters, and auto-IAM configuration
improvements to GA. Additionally, this release brings Microsoft SQL Server
with AD authentication to Preview.

Moderated Sessions enables the creation of sessions where a moderator has to
be present. This feature can be selectively enabled for specific sessions via
RBAC and can be used in conjunction with per-session MFA.

### Desktop access

#### Clipboard Support

Desktop access now supports copying and pasting text between your local
workstation and a remote Windows desktop. This feature requires a Chromium-based
browser and can be disabled via RBAC.

#### Session Recording

Desktop sessions are now recorded and stored alongside SSH sessions, and can be
viewed in Teleport's web interface. Desktop session recordings are fully
compatible with the RBAC for sessions feature introduced in Teleport 8.1.

#### Per-session MFA

Per-session MFA settings now apply to desktop sessions. This allows cluster
administrators to require an additional MFA "tap" prior to opening a desktop
session. This feature requires a WebAuthn device.

### Machine ID (Preview)

Machine ID allows the creation of machine / bot / service account users who can
automatically issue, renew, and manage SSH and X.509 certificates to facilitate
machine-to-machine access.

Machine ID is a service that programmatically issues and renews short-lived
certificates to any service account (e.g., a CI/CD server) by retrieving
credentials from the Teleport Auth Service. This enables fine-grained role-based
access controls and auditing.
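
As a sketch of the preview workflow, the command and flag names below follow the Machine ID getting-started guide for this release; treat them as assumptions and check the guide if your version differs:

```
# Register a bot user along with the roles it may assume.
tctl bots add robot --roles=access

# Run the bot, renewing short-lived certificates into a destination directory.
tbot start \
  --auth-server=auth.example.com:3025 \
  --token=<bot-join-token> \
  --destination-dir=/opt/machine-id \
  --data-dir=/var/lib/teleport/bot
```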

Some of the things you can do with Machine ID:

- Machines can retrieve short-lived SSH certificates for CI/CD pipelines.
- Machines can retrieve short-lived X.509 certificates for use with databases or
  applications.
- Configure role-based access controls and locking for machines.
- Capture access events in the audit log.

[Machine ID getting started guide](docs/pages/enroll-resources/machine-id/getting-started.mdx)

### Database access

#### Redis

You can now use database access to connect to a self-hosted Redis instance or
Redis cluster and view Redis commands in the Teleport audit log. We will be
adding support for Amazon ElastiCache in the coming weeks.

[Self-hosted Redis guide](docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/redis.mdx)

#### SQL Server (Preview)

Teleport 9 includes a preview release of Microsoft SQL Server with Active
Directory authentication support for database access. Audit logging of query
activity is not included in the preview release and will be implemented in a
later 9.x release.

[SQL Server guide](docs/pages/enroll-resources/database-access/enroll-aws-databases/sql-server-ad.mdx)

#### RDS MariaDB

Teleport 9 updates MariaDB support with auto-discovery and connection to AWS RDS
MariaDB databases using IAM authentication. The minimum MariaDB version that
supports IAM authentication is 10.6.

[Updated RDS guide](docs/pages/enroll-resources/database-access/enroll-aws-databases/rds.mdx)

#### Other Improvements

In addition, Teleport 9 expands auto-discovery to support Redshift databases and
adds two new commands that simplify the database access getting started
experience: `teleport db configure create`, which generates Database Service
configuration, and `teleport db configure bootstrap`, which configures IAM
permissions for the Database Service when running on AWS.
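
The two commands can be combined roughly as follows; the flag names are taken from the database access CLI reference and should be verified for your version:

```
# Generate a Database Service configuration with RDS auto-discovery
# in us-west-1 (the generated YAML is printed to stdout by default).
teleport db configure create --rds-discovery=us-west-1 > /etc/teleport.yaml

# Print the IAM policies the Database Service needs so they can be
# attached manually; without --manual the command attempts to attach them.
teleport db configure bootstrap -c /etc/teleport.yaml --manual
```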

CLI commands reference:
- [`teleport db configure create`](docs/pages/reference/agent-services/database-access-reference/cli.mdx)
- [`teleport db configure bootstrap`](docs/pages/reference/agent-services/database-access-reference/cli.mdx)

### Moderated Sessions

With Moderated Sessions, Teleport administrators can define policies that allow
users to invite other users to participate in SSH or Kubernetes sessions as
observers, moderators, or peers.

[Moderated Sessions guide](docs/pages/admin-guides/access-controls/guides/joining-sessions.mdx)

### Breaking Changes

#### CentOS 6

CentOS 6 support was deprecated in Teleport 8 and has now been removed.

#### Desktop access

Desktop access now authenticates to LDAP using X.509 client certificates.
Support for the `password_file` configuration option has been removed.

## 8.0.0

Teleport 8.0 is a major release of Teleport that contains new features, improvements, and bug fixes.

### New Features

#### Windows desktop access Preview

Teleport 8.0 includes a preview of the Windows desktop access feature, allowing
users to log in to Windows desktops without a password via any modern web
browser.

Teleport users can connect to Active Directory-enrolled Windows hosts running
Windows 10, Windows Server 2012 R2, and newer Windows versions.

To try this feature yourself, check out our
[Getting Started Guide](docs/pages/enroll-resources/desktop-access/getting-started.mdx).

Review the desktop access design in:

- [RFD #33](https://github.com/gravitational/teleport/blob/master/rfd/0033-desktop-access.md)
- [RFD #34](https://github.com/gravitational/teleport/blob/master/rfd/0034-desktop-access-windows.md)
- [RFD #35](https://github.com/gravitational/teleport/blob/master/rfd/0035-desktop-access-windows-authn.md)
- [RFD #37](https://github.com/gravitational/teleport/blob/master/rfd/0037-desktop-access-protocol.md)

#### TLS Routing

In TLS routing mode all client connections are wrapped in TLS and multiplexed on
a single Teleport proxy port.

TLS routing can be enabled by including the following Auth Service configuration:

```yaml
auth_service:
  proxy_listener_mode: multiplex
  ...
```

and setting the proxy configuration version to `v2` to prevent legacy listeners
from being created:

```yaml
version: v2
proxy_service:
  ...
```

#### AWS CLI

Teleport application access extends AWS console support to the AWS CLI. Users
are able to log into their AWS console using `tsh apps login` and use `tsh aws`
commands to interact with AWS APIs.

See more info in the
[documentation](docs/pages/enroll-resources/application-access/cloud-apis/aws-console.mdx).

#### Application and Database Dynamic Registration

With dynamic registration users are able to manage applications and databases
without needing to update static YAML configuration or restart application or
database agents.

See dynamic registration guides for
[apps](docs/pages/enroll-resources/application-access/guides/dynamic-registration.mdx)
and
[databases](docs/pages/enroll-resources/database-access/guides/dynamic-registration.mdx).

#### RDS Automatic Discovery

With RDS auto-discovery Teleport database agents can automatically discover RDS
instances and Aurora clusters in an AWS account.

See the updated
[RDS guide](docs/pages/enroll-resources/database-access/enroll-aws-databases/rds.mdx) for
more information.
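
For example, a database can be registered dynamically by applying a `db` resource with `tctl create`; this sketch follows the dynamic registration guide, and the resource version may differ across releases:

```yaml
# A dynamically registered database; agents whose label selectors match
# pick it up without a restart or config change.
kind: db
version: v3
metadata:
  name: example-postgres
  labels:
    env: dev
spec:
  protocol: postgres
  uri: postgres.example.com:5432
```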

#### WebAuthn

WebAuthn support enables Teleport users to use modern multi-factor options,
including Apple FaceID and TouchID.

In addition, the Teleport Web UI includes new multi-factor management tools,
enabling users to configure and update their multi-factor devices via their
web browser.

Lastly, our UI becomes more secure by requiring an additional multi-factor
confirmation for certain privileged actions (editing roles, for example).

### Improvements

* Added support for [CockroachDB](https://www.cockroachlabs.com) to database
  access. [#8505](https://github.com/gravitational/teleport/pull/8505)
* Reduced network utilization on large clusters during login.
  [#8471](https://github.com/gravitational/teleport/pull/8471)
* Added metrics and added the ability for `tctl top` to show network utilization
  for resource propagation.
  [#8338](https://github.com/gravitational/teleport/pull/8338)
  [#8603](https://github.com/gravitational/teleport/pull/8603)
  [#8491](https://github.com/gravitational/teleport/pull/8491)
* Added support for account recovery and cancellation.
  [#6769](https://github.com/gravitational/teleport/pull/6769)
* Added per-session MFA support to database access.
  [#8270](https://github.com/gravitational/teleport/pull/8270)
* Added support for profile-specific `kubeconfig`.
  [#7840](https://github.com/gravitational/teleport/pull/7840)

### Fixes

* Fixed issues with web applications that utilized
  [EventSource](https://developer.mozilla.org/en-US/docs/Web/API/EventSource)
  with application access.
  [#8359](https://github.com/gravitational/teleport/pull/8359)
* Fixed issue where interactive sessions would always return exit code 0.
  [#8081](https://github.com/gravitational/teleport/pull/8081)
* Fixed issue where the JWT signer was omitted from bootstrap logic.
  [#8119](https://github.com/gravitational/teleport/pull/8119)

### Breaking Changes

#### CentOS 6

CentOS 6 support will be deprecated in Teleport 8 and removed in Teleport 9.

Teleport 8 will continue to receive security patches for about 9 months, after
which it will be EOL. Users are encouraged to upgrade to CentOS 7 in that time
frame.

#### Updated dependencies

New runtime dependencies have been added to Teleport 8 due to the inclusion of
Rust in the build chain. Teleport 8 requires `libgcc_s.so` and `libm.so` to be
installed on systems running Teleport.

Users of [distroless](https://github.com/GoogleContainerTools/distroless)
container images are encouraged to use the
[gcr.io/distroless/cc-debian11](https://github.com/GoogleContainerTools/distroless/blob/main/examples/rust/Dockerfile)
image to run Teleport.

```
FROM gcr.io/distroless/cc-debian11
```

Alpine users are recommended to install the `libgcc` package in addition to any
glibc compatibility layer they have already been using.

```
apk --update --no-cache add libgcc
```

#### Database access certificates

With the `GODEBUG=x509ignoreCN=0` flag removed in Go 1.17, database access users
will no longer be able to connect to databases that include their hostname in
the `CommonName` field of the presented certificate. Users are recommended to
update their database certificates to include the hostname in the
`Subject Alternative Name` extension instead.

Subscribe to GitHub issue
[#7636](https://github.com/gravitational/teleport/issues/7636), which tracks
adding the ability to control the level of TLS verification as a workaround.

#### Role Changes

New clusters will no longer have the default `admin` role; it has been replaced
with three more narrowly scoped roles: `access`, `auditor`, and `editor`.

## 7.0.0

Teleport 7.0 is a major release of Teleport that contains new features, improvements, and bug fixes.

### New Features

#### MongoDB

Added support for [MongoDB](https://www.mongodb.com) to Teleport database access. [#6600](https://github.com/gravitational/teleport/issues/6600)

View the [database access with MongoDB guide](docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/mongodb-self-hosted.mdx) for more details.

#### Cloud SQL MySQL

Added support for [GCP Cloud SQL MySQL](https://cloud.google.com/sql/docs/mysql) to Teleport database access. [#7302](https://github.com/gravitational/teleport/pull/7302)

View the Cloud SQL MySQL [guide](docs/pages/enroll-resources/database-access/enroll-google-cloud-databases/mysql-cloudsql.mdx) for more details.

#### AWS Console

Added support for the [AWS Console](https://aws.amazon.com/console) to Teleport application access. [#7590](https://github.com/gravitational/teleport/pull/7590)

Teleport application access can now automatically sign users into the AWS Management Console using [identity federation](https://aws.amazon.com/identity/federation). View the AWS Management Console [guide](docs/pages/enroll-resources/application-access/cloud-apis/aws-console.mdx) for more details.

#### Restricted Sessions

Added the ability to block network traffic (IPv4 and IPv6) on a per-SSH-session basis. Implemented using BPF tooling, which requires kernel 5.8 or above. [#7099](https://github.com/gravitational/teleport/pull/7099)

#### Enhanced Session Recording

Updated Enhanced Session Recording to no longer require the installation of external compilers like `bcc-tools`. Implemented using BPF tooling, which requires kernel 5.8 or above. [#6027](https://github.com/gravitational/teleport/pull/6027)

### Improvements

* Added the ability to terminate database access sessions when the certificate expires. [#5476](https://github.com/gravitational/teleport/issues/5476)
* Added additional FedRAMP compliance controls, such as custom disconnect and MOTD messages.
  [#6091](https://github.com/gravitational/teleport/issues/6091) [#7396](https://github.com/gravitational/teleport/pull/7396)
* Added the ability to export the Audit Log and session recordings using the Teleport API. [#6731](https://github.com/gravitational/teleport/pull/6731) [#7360](https://github.com/gravitational/teleport/pull/7360)
* Added the ability to partially configure a cluster. [#5857](https://github.com/gravitational/teleport/issues/5857) [RFD #28](https://github.com/gravitational/teleport/blob/master/rfd/0028-cluster-config-resources.md)
* Added the ability to disable port forwarding on a per-host basis. [#6989](https://github.com/gravitational/teleport/pull/6989)
* Added the ability to configure the `tsh` home directory. [#7035](https://github.com/gravitational/teleport/pull/7035/files)
* Added the ability to generate OpenSSH client configuration snippets using `tsh config`. [#7437](https://github.com/gravitational/teleport/pull/7437)
* Added default-port detection to `tsh`. [#6374](https://github.com/gravitational/teleport/pull/6374)
* Improved performance of the Web UI for users with many roles. [#7588](https://github.com/gravitational/teleport/pull/7588)

### Fixes

* Fixed a memory leak that could affect etcd users. [#7631](https://github.com/gravitational/teleport/pull/7631)
* Fixed an issue where `tsh login` could fail if the user had multiple public addresses defined on the proxy. [#7368](https://github.com/gravitational/teleport/pull/7368)

### Breaking Changes

#### Enhanced Session Recording

Enhanced Session Recording has been updated to use CO-RE BPF executables. This means that you no longer have to install `bcc-tools`, but it comes with a higher minimum kernel version of 5.8. [#6027](https://github.com/gravitational/teleport/pull/6027)

#### Kubernetes access

Kubernetes access will no longer automatically register a cluster named after the Teleport cluster if the proxy is running within a Kubernetes cluster.
Users wishing to retain this functionality now have to explicitly set `kube_cluster_name`. [#6786](https://github.com/gravitational/teleport/pull/6786)

#### `tsh`

`tsh login` has been updated to no longer change the current Kubernetes context. While `tsh login` will write credentials to `kubeconfig`, it will only update your context if `tsh login --kube-cluster` or `tsh kube login` is used. [#6045](https://github.com/gravitational/teleport/issues/6045)

## 6.2

Teleport 6.2 contains new features, improvements, and bug fixes.

**Note:** the DynamoDB indexing change described below may cause rate-limiting
errors from AWS APIs and is slow on large deployments (1000+ existing audit
events). The next patch release, v6.2.1, will improve the migration performance.
If you run a large DynamoDB-based cluster, we advise you to wait for v6.2.1
before upgrading.

### New Features

#### Added Amazon Redshift Support

Added support for [Amazon Redshift](https://aws.amazon.com/redshift) to Teleport database access. [#6479](https://github.com/gravitational/teleport/pull/6479)

View the [database access with Redshift on AWS guide](docs/pages/enroll-resources/database-access/enroll-aws-databases/postgres-redshift.mdx) for more details.

### Improvements

* Added pass-through header support for Teleport application access. [#6601](https://github.com/gravitational/teleport/pull/6601)
* Added the ability to propagate claim information from root to leaf clusters. [#6540](https://github.com/gravitational/teleport/pull/6540)
* Added Proxy Protocol support for MySQL database access. [#6594](https://github.com/gravitational/teleport/pull/6594)
* Added prepared statement support for Postgres database access. [#6303](https://github.com/gravitational/teleport/pull/6303)
* Added the `GetSessionEventsRequest` RPC endpoint for Audit Log pagination.
  [RFD 19](https://github.com/gravitational/teleport/blob/master/rfd/0019-event-iteration-api.md) [#6731](https://github.com/gravitational/teleport/pull/6731)
* Changed the DynamoDB indexing strategy for events. [RFD 24](https://github.com/gravitational/teleport/blob/master/rfd/0024-dynamo-event-overflow.md) [#6583](https://github.com/gravitational/teleport/pull/6583)

### Fixes

* Fixed multiple per-session MFA issues. [#6542](https://github.com/gravitational/teleport/pull/6542) [#6567](https://github.com/gravitational/teleport/pull/6567) [#6625](https://github.com/gravitational/teleport/pull/6625) [#6779](https://github.com/gravitational/teleport/pull/6779) [#6948](https://github.com/gravitational/teleport/pull/6948)
* Fixed etcd JWT renewal issue. [#6905](https://github.com/gravitational/teleport/pull/6905)
* Fixed issue where `kubectl exec` sessions were not being recorded when the target pod was killed. [#6068](https://github.com/gravitational/teleport/pull/6068)
* Fixed an issue that prevented Teleport from starting on ARMv7 systems. [#6711](https://github.com/gravitational/teleport/pull/6711)
* Fixed issue that caused Access Requests to inconsistently allow elevated Kubernetes access. [#6492](https://github.com/gravitational/teleport/pull/6492)
* Fixed an issue that could cause `session.end` events not to be emitted. [#6756](https://github.com/gravitational/teleport/pull/6756)
* Fixed an issue with PAM variable interpolation. [#6558](https://github.com/gravitational/teleport/pull/6558)

### Breaking Changes

#### Agent Forwarding

Teleport 6.2 brings a potentially backward-incompatible change to `tsh` agent forwarding.

Prior to Teleport 6.2, `tsh ssh -A` would create an in-memory SSH agent from your `~/.tsh` directory and forward that agent to the target host.

Starting in Teleport 6.2, `tsh ssh -A` by default forwards your system SSH agent (available at `$SSH_AUTH_SOCK`).
Users wishing to retain the prior behavior can use `tsh ssh -o "ForwardAgent local"`.

For more details see [RFD 22](https://github.com/gravitational/teleport/blob/master/rfd/0022-ssh-agent-forwarding.md) and the implementation in [#6525](https://github.com/gravitational/teleport/pull/6525).

#### DynamoDB Indexing Change

DynamoDB users should note that the events backend indexing strategy has
changed and a data migration will be triggered after upgrade. For optimal
performance, perform this migration with only one Auth Service instance online.
It may take some time, and progress will be periodically written to the Auth
Service log. During this migration, only events that have been migrated will
appear in the Web UI. After completion, all events will be available.

For more details see [RFD 24](https://github.com/gravitational/teleport/blob/master/rfd/0024-dynamo-event-overflow.md) and the implementation in [#6583](https://github.com/gravitational/teleport/pull/6583).

## 6.1.5

This release of Teleport contains multiple bug fixes.

* Added additional Prometheus metrics. [#6511](https://github.com/gravitational/teleport/pull/6511)
* Updated the TLS handshake timeout to 5 seconds to avoid timeout issues on large clusters. [#6692](https://github.com/gravitational/teleport/pull/6692)
* Fixed issue that caused non-interactive SSH output to show up in logs. [#6683](https://github.com/gravitational/teleport/pull/6683)
* Fixed two issues that could cause Teleport to panic upon startup. [#6431](https://github.com/gravitational/teleport/pull/6431) [#5712](https://github.com/gravitational/teleport/pull/5712)

## 6.1.3

This release of Teleport contains a bug fix.

* Added support for PROXY protocol to database access (MySQL). [#6517](https://github.com/gravitational/teleport/issues/6517)

## 6.1.2

This release of Teleport contains a new feature.

* Added log formatting and support for enabling timestamps in logs.
  [#5898](https://github.com/gravitational/teleport/pull/5898)

## 6.1.1

This release of Teleport contains a bug fix.

* Fixed an issue where DEB builds were not published to the [Teleport DEB repository](https://deb.releases.teleport.dev/).

## 6.1.0

Teleport 6.1 contains multiple new features, improvements, and bug fixes.

### New Features

#### U2F for Kubernetes and SSH sessions

Added support for U2F authentication on every SSH and Kubernetes "connection" (a single `tsh ssh` or `kubectl` call). This is an advanced security feature that protects users against compromises of their on-disk Teleport certificates. Per-session MFA can be enforced cluster-wide or only for specific roles.

For more details see the [Per-Session
MFA](docs/pages/admin-guides/access-controls/guides/per-session-mfa.mdx) documentation, or
[RFD
14](https://github.com/gravitational/teleport/blob/master/rfd/0014-session-2FA.md)
and [RFD
15](https://github.com/gravitational/teleport/blob/master/rfd/0015-2fa-management.md)
for technical details.

#### Dual Authorization Workflows

Added the ability to require multiple users to review and approve Access Requests.

See [#5071](https://github.com/gravitational/teleport/pull/5071) for technical details.

### Improvements

* Added the ability to propagate SSO claims to PAM modules. [#6158](https://github.com/gravitational/teleport/pull/6158)
* Added support for cluster routing to reduce latency to leaf clusters. [RFD 21](https://github.com/gravitational/teleport/blob/master/rfd/0021-cluster-routing.md)
* Added support for Google Cloud SQL to database access. [#6090](https://github.com/gravitational/teleport/pull/6090)
* Added support for CLI credential issuance for application access. [#5918](https://github.com/gravitational/teleport/pull/5918)
* Added support for Encrypted SAML Assertions. [#5598](https://github.com/gravitational/teleport/pull/5598)
* Added support for user impersonation.
  [#6073](https://github.com/gravitational/teleport/pull/6073)

### Fixes

* Fixed interoperability issues with `gpg-agent`. [RFD 18](http://github.com/gravitational/teleport/blob/master/rfd/0018-agent-loading.md)
* Fixed websocket support in application access. [#6028](https://github.com/gravitational/teleport/pull/6028)
* Fixed file argument issues with `tsh play`. [#1580](https://github.com/gravitational/teleport/issues/1580)
* Fixed `utmp` regressions that caused issues in LXC containers. [#6256](https://github.com/gravitational/teleport/pull/6256)

## 6.0.3

This release of Teleport contains a bug fix.

* Fixed an issue that caused high network utilization on deployments with many leaf Trusted Clusters. [#6263](https://github.com/gravitational/teleport/pull/6263)

## 6.0.2

This release of Teleport contains bug fixes and adds new default roles.

* Fixed an issue with the proxy web endpoint resetting the connection when run with the `--insecure-no-tls` flag. [#5923](https://github.com/gravitational/teleport/pull/5923)
* Introduced role presets: `auditor`, `editor`, and `access`. [#5968](https://github.com/gravitational/teleport/pull/5968)
* Added the ability to inline the `google_service_account` field into the Google Workspace OIDC connector. [#5563](http://github.com/gravitational/teleport/pull/5563)

## 6.0.1

This release of Teleport contains multiple bug fixes.

* Fixed issue that caused the ACME default configuration to fail with the `TLS-ALPN-01` challenge. [#5839](https://github.com/gravitational/teleport/pull/5839)
* Fixed a regression in the ADFS integration. [#5880](https://github.com/gravitational/teleport/pull/5880)

## 6.0.0

Teleport 6.0 is a major release with new features, functionality, and bug fixes.

We have implemented [database access](./docs/pages/enroll-resources/database-access/database-access.mdx),
open sourced role-based access control (RBAC), and added an official API and a Go client library.

Users can review the [6.0 milestone](https://github.com/gravitational/teleport/milestone/33?closed=1) on GitHub for more details.

### New Features

#### Database access

Review the database access design in [RFD #11](https://github.com/gravitational/teleport/blob/master/rfd/0011-database-access.md).

With database access, users can connect to PostgreSQL and MySQL databases using short-lived certificates, configure SSO authentication and role-based access controls for databases, and capture SQL query activity in the audit log.

##### Getting Started

Configure database access by following the [Getting Started](./docs/pages/enroll-resources/database-access/getting-started.mdx/) guide.

##### Guides

* [AWS RDS/Aurora PostgreSQL](./docs/pages/enroll-resources/database-access/enroll-aws-databases/rds.mdx)
* [AWS RDS/Aurora MySQL](./docs/pages/enroll-resources/database-access/enroll-aws-databases/rds.mdx)
* [Self-hosted PostgreSQL](./docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/postgres-self-hosted.mdx)
* [Self-hosted MySQL](./docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/mysql-self-hosted.mdx)
* [GUI clients](docs/pages/connect-your-client/gui-clients.mdx)

##### Resources

To learn more about configuring role-based access control for database access, check out the [RBAC](./docs/pages/enroll-resources/database-access/database-access.mdx) section.

[Architecture](./docs/pages/enroll-resources/database-access/database-access.mdx) provides a more in-depth look at database access internals such as networking and security.

See the [Reference](docs/pages/reference/agent-services/database-access-reference/database-access-reference.mdx) for an overview of database access related configuration and CLI commands.

Finally, check out the [Frequently Asked Questions](docs/pages/enroll-resources/database-access/faq.mdx).
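
To give a flavor of the client workflow, the basic commands look like the following; command names follow the database access reference, and availability may vary by minor version:

```
# List databases available to your user.
tsh db ls

# Retrieve short-lived certificates for a database.
tsh db login example-postgres

# Print connection information for use with psql or a GUI client.
tsh db config example-postgres
```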
-
-#### OSS RBAC
-
-Open source RBAC support was introduced in [RFD #7](https://github.com/gravitational/teleport/blob/master/rfd/0007-rbac-oss.md).
-
-RBAC support gives OSS administrators more granular access controls to servers and other resources within a cluster (like session recording access). An example of an RBAC policy could be: "admins can do anything, developers must never touch production servers, and interns can only SSH into staging servers as guests."
-
-In addition, some Access Workflow plugins are now available to open source users:
-
-* Access Workflows Golang SDK and API
-* Slack
-* Gitlab
-* Mattermost
-* Jira
-* PagerDuty
-
-#### Client libraries and API
-
-API and client library support was introduced in [RFD #10](https://github.com/gravitational/teleport/blob/master/rfd/0010-api.md).
-
-The new API and client library reduce the dependencies needed to use the Teleport API and make it easier to use. An example of using the new API is below.
-
-```go
-ctx := context.Background()
-
-// Create a client connected to the Auth Server with an exported identity file.
-clt, err := client.NewClient(client.Config{
-    Addrs: []string{"auth.example.com:3025"},
-    Credentials: []client.Credentials{
-        client.LoadIdentityFile("identity.pem"),
-    },
-})
-if err != nil {
-    log.Fatalf("Failed to create client: %v.", err)
-}
-defer clt.Close()
-
-// Create an Access Request.
-accessRequest, err := types.NewAccessRequest(uuid.New(), "access-admin", "admin")
-if err != nil {
-    log.Fatalf("Failed to build access request: %v.", err)
-}
-if err = clt.CreateAccessRequest(ctx, accessRequest); err != nil {
-    log.Fatalf("Failed to create access request: %v.", err)
-}
-```
-
-### Improvements
-
-* Added `utmp`/`wtmp` support for SSH in [#5491](https://github.com/gravitational/teleport/pull/5491).
-* Added the ability to set a Kubernetes-specific public address in [#5611](https://github.com/gravitational/teleport/pull/5611).
-* Added Proxy Protocol support to Kubernetes access in [#5299](https://github.com/gravitational/teleport/pull/5299).
-* Added ACME ([Let's Encrypt](https://letsencrypt.org/)) support to make getting and using TLS certificates easier. [#5177](https://github.com/gravitational/teleport/issues/5177).
-* Added the ability to manage local users to the Web UI in [#2945](https://github.com/gravitational/teleport/issues/2945).
-* Added the ability to preserve timestamps when using `tsh scp` in [#2889](https://github.com/gravitational/teleport/issues/2889).
-
-### Fixes
-
-* Fixed an authentication failure when logging in via the CLI with Access Workflows after removing the `.tsh` directory in [#5323](https://github.com/gravitational/teleport/pull/5323).
-* Fixed a `tsh login` failure when `--proxy` differs from the actual proxy public address in [#5380](https://github.com/gravitational/teleport/pull/5380).
-* Fixed session playback issues in [#2945](https://github.com/gravitational/teleport/issues/2945).
-* Fixed several UX issues in [#5559](https://github.com/gravitational/teleport/issues/5559), [#5568](https://github.com/gravitational/teleport/issues/5568), [#4965](https://github.com/gravitational/teleport/issues/4965), and [#5057](https://github.com/gravitational/teleport/pull/5057).
-
-### Upgrade Notes
-
-Please follow our [standard upgrade procedure](docs/pages/admin-guides/management/admin/admin.mdx) to upgrade your cluster.
-
-Note: for clusters using GitHub SSO and Trusted Clusters, SSO users will lose connectivity to leaf clusters when upgrading. Local users will not be affected.
-
-To restore connectivity to leaf clusters for SSO users, leaf admins should update the `trusted_cluster` role mapping resource as shown below.
-
-```yaml
-kind: trusted_cluster
-version: v2
-metadata:
-  name: "zztop-oss"
-spec:
-  enabled: true
-  token: "bar"
-  web_proxy_addr: 172.10.1.1:3080
-  tunnel_addr: 172.10.1.1:3024
-  role_map:
-  - remote: "admin"
-    local: ['admin']
-  - remote: "^(github-.*)$"
-    local: ['admin']
-```
-
-## 5.1.0
-
-This release of Teleport adds a new feature.
-
-* Support for creating and assuming Access Workflow requests from within the Web UI (first step toward full Workflow UI support: [#4937](https://github.com/gravitational/teleport/issues/4937)).
-
-## 5.0.2
-
-This release of Teleport contains a security fix.
-
-* Patched a SAML authentication bypass (see https://github.com/russellhaering/gosaml2/security/advisories/GHSA-xhqq-x44f-9fgg): [#5119](https://github.com/gravitational/teleport/pull/5119).
-
-Any Enterprise SSO users using Okta, Active Directory, OneLogin or custom SAML connectors should upgrade their Auth Service to version 5.0.2 and restart Teleport. If you are unable to upgrade immediately, we suggest disabling SAML connectors for all clusters until the updates can be applied.
-
-
-## 5.0.1
-
-This release of Teleport contains multiple bug fixes.
-
-* Always set expiry times on server resources in heartbeats. [#5008](https://github.com/gravitational/teleport/pull/5008)
-* Fixed streaming Kubernetes responses (`kubectl logs -f`, `kubectl run -it`, etc.). [#5009](https://github.com/gravitational/teleport/pull/5009)
-* Multiple fixes for the Kubernetes forwarder. [#5038](https://github.com/gravitational/teleport/pull/5038)
-
-## 5.0.0
-
-Teleport 5.0 is a major release with new features, functionality, and bug fixes. Users can review [5.0 closed issues](https://github.com/gravitational/teleport/milestone/39?closed=1) on Github for details of all items.
-
-#### New Features
-
-Teleport 5.0 introduces two distinct features: Teleport application access and significant Kubernetes access improvements, including multi-cluster support.
-
-##### Teleport application access
-
-Teleport can now be used to provide secure access to web applications. This new feature was built with the express intention of securing internal apps which might have once lived on a VPN or had an authorization and authentication mechanism with little to no audit trail. Application access works with everything from dashboards to single-page JavaScript applications (SPAs).
-
-Application access uses mutually authenticated reverse tunnels to establish a secure connection with the Teleport Unified Access Platform, which then becomes the single ingress point for all traffic to an internal application.
-
-Adding an application follows the same UX as adding SSH servers or Kubernetes clusters, starting with creating a static or dynamic invite token.
-
-```bash
-$ tctl tokens add --type=app
-```
-
-Then simply start Teleport with a few new flags.
-
-```sh
-$ teleport start --roles=app --token=xyz --auth-server=proxy.example.com:3080 \
-    --app-name="example-app" \
-    --app-uri="http://localhost:8080"
-```
-
-This command will start an app server that proxies the application "example-app" running at `http://localhost:8080` at the public address `https://example-app.example.com`.
-
-Applications can also be configured using the new `app_service` section in `teleport.yaml`.
-
-```yaml
-app_service:
-   # Teleport application access is enabled.
-   enabled: yes
-   # We've added a default sample app that will check
-   # that Teleport application access is working
-   # and output JWT tokens.
-   # https://dumper.teleport.example.com:3080/
-   debug_app: true
-   apps:
-   # Application access can be used to proxy any HTTP endpoint.
- # Note: Name can't include any spaces and should be DNS-compatible A-Za-z0-9-._ - - name: "internal-dashboard" - uri: "http://10.0.1.27:8000" - # By default Teleport will make this application - # available on a sub-domain of your Teleport proxy's hostname - # internal-dashboard.teleport.example.com - # - thus the importance of setting up wildcard DNS. - # If you want, it's possible to set up a custom public url. - # DNS records should point to the proxy server. - # internal-dashboard.teleport.example.com - # Example Public URL for the internal-dashboard app. - # public_addr: "internal-dashboard.acme.com" - # Optional labels - # Labels can be combined with RBAC rules to provide access. - labels: - customer: "acme" - env: "production" - # Optional dynamic labels - commands: - - name: "os" - command: ["/usr/bin/uname"] - period: "5s" - # A proxy can support multiple applications. application access - # can also be deployed with a Teleport node. - - name: "arris" - uri: "http://localhost:3001" - public_addr: "arris.example.com" -``` - -Application access requires two additional changes. DNS must be updated to point the application domain to the proxy and the proxy must be loaded with a TLS certificate for the domain. Wildcard DNS and TLS certificates can be used to simplify deployment. - -```yaml -# When adding the app_service certificates are required to provide a TLS -# connection. The certificates are managed by the proxy_service -proxy_service: - # We've extended support for https certs. Teleport can now load multiple - # TLS certificates. In the below example we've obtained a wildcard cert - # that'll be used for proxying the applications. - # The correct certificate is selected based on the hostname in the HTTPS - # request using SNI. 
- https_keypairs: - - key_file: /etc/letsencrypt/live/teleport.example.com/privkey.pem - cert_file: /etc/letsencrypt/live/teleport.example.com/fullchain.pem - - key_file: /etc/letsencrypt/live/*.teleport.example.com/privkey.pem - cert_file: /etc/letsencrypt/live/*.teleport.example.com/fullchain.pem -``` - -You can learn more in [Introduction to Enrolling Applications](./docs/pages/enroll-resources/application-access/introduction.mdx). - -##### Teleport Kubernetes access - -Teleport 5.0 also introduces two highly requested features for Kubernetes. - -* The ability to connect multiple Kubernetes Clusters to the Teleport Access Platform, greatly reducing operational complexity. -* Complete Kubernetes audit log capture [#4526](https://github.com/gravitational/teleport/pull/4526), going beyond the existing `kubectl exec` capture. - -For a full overview please review the [Kubernetes RFD](https://github.com/gravitational/teleport/blob/master/rfd/0005-kubernetes-service.md). - -To support these changes, we've introduced a new service. This moves Teleport Kubernetes configuration from the `proxy_service` into its own dedicated `kubernetes_service` section. - -When adding the new Kubernetes service, a new type of join token is required. - -```bash -tctl tokens add --type=kube -``` - -Example configuration for the new `kubernetes_service`: - -```yaml -# ... -kubernetes_service: - enabled: yes - listen_addr: 0.0.0.0:3027 - kubeconfig_file: /secrets/kubeconfig -``` - -Note: a Kubernetes port still needs to be configured in the `proxy_service` via `kube_listen_addr`. 
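-
-Putting that note together with the new service, a minimal sketch of the combined configuration might look like the following. Port 3026 for the proxy's Kubernetes listener is an assumption based on common Teleport examples; adjust both addresses for your environment.
-
-```yaml
-proxy_service:
-  enabled: yes
-  # The proxy still terminates Kubernetes traffic on this port.
-  kube_listen_addr: 0.0.0.0:3026
-
-kubernetes_service:
-  enabled: yes
-  listen_addr: 0.0.0.0:3027
-  kubeconfig_file: /secrets/kubeconfig
-```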
- -#### New "tsh kube" commands - -`tsh kube` commands are used to query registered clusters and switch `kubeconfig` context: - -```sh -$ tsh login --proxy=proxy.example.com --user=awly - -# list all registered clusters -$ tsh kube ls -Cluster Name Status -------------- ------ -a.k8s.example.com online -b.k8s.example.com online -c.k8s.example.com online - -# on login, kubeconfig is pointed at the first cluster (alphabetically) -$ kubectl config current-context -proxy.example.com-a.k8s.example.com - -# but all clusters are populated as contexts -$ kubectl config get-contexts -CURRENT NAME CLUSTER AUTHINFO -* proxy.example.com-a.k8s.example.com proxy.example.com proxy.example.com-a.k8s.example.com - proxy.example.com-b.k8s.example.com proxy.example.com proxy.example.com-b.k8s.example.com - proxy.example.com-c.k8s.example.com proxy.example.com proxy.example.com-c.k8s.example.com - -# switch between different clusters: -$ tsh kube login c.k8s.example.com - -# the traditional way is also supported: -$ kubectl config use-context proxy.example.com-c.k8s.example.com - -# check current cluster -$ kubectl config current-context -proxy.example.com-c.k8s.example.com -``` - -Other Kubernetes changes: - -* Support k8s clusters behind firewall/NAT using a single Teleport cluster [#3667](https://github.com/gravitational/teleport/issues/3667) -* Support multiple k8s clusters with a single Teleport proxy instance [#3952](https://github.com/gravitational/teleport/issues/3952) - -##### Additional User and Token Resource - -We've added two new RBAC resources; these provide the ability to limit token creation and to list and modify Teleport users: - -```yaml -- resources: [user] - verbs: [list,create,read,update,delete] -- resources: [token] - verbs: [list,create,read,update,delete] -``` - -Learn more about [Teleport's RBAC Resources](docs/pages/admin-guides/access-controls/access-controls.mdx) - -##### Cluster Labels - -Teleport 5.0 also adds the ability to set labels on Trusted 
Clusters. The labels are set when creating a trusted cluster invite token. This lets teams use the same RBAC controls used on nodes to approve or deny access to clusters. This can be especially useful for MSPs that connect hundreds of customers' clusters; when combined with Access Workflows, cluster access can be delegated. Learn more by reviewing our [Trusted Cluster Setup & RBAC Docs](docs/pages/admin-guides/management/admin/trustedclusters.mdx).
-
-Creating a trusted cluster join token for a production environment:
-
-```bash
-$ tctl tokens add --type=trusted_cluster --labels=env=prod
-```
-
-```yaml
-kind: role
-#...
-  deny:
-    # Cluster labels control which clusters a user can connect to. The wildcard ('*')
-    # means any cluster. By default, deny rules are empty to preserve backwards
-    # compatibility.
-    cluster_labels:
-      'env': 'prod'
-```
-
-##### Teleport UI Updates
-
-Teleport 5.0 also iterates on the UI refresh from 4.3. We've moved the cluster list into our sidebar and have added an Application launcher. For customers moving from 4.4 to 5.0, you'll notice that we have moved session recordings back to their own dedicated section.
-
-Other updates:
-
-* We now provide local user management via `https://[cluster-url]/web/users`, providing the ability to edit, reset and delete local users.
-* Teleport Node & App install scripts. This is currently an Enterprise-only feature that provides customers with an 'auto-magic' installer script. Enterprise customers can enable this feature by modifying the 'token' resource. See note above.
-* We've added a Waiting Room for customers using Access Workflows. [Docs](docs/pages/admin-guides/access-controls/access-request-plugins/access-request-plugins.mdx)
-
-##### Signed RPM and Releases
-
-Starting with Teleport 5.0, we now provide an RPM repo for stable releases of Teleport. We've also started signing our RPMs to provide assurance that you're always using an official build of Teleport.
-
-See https://rpm.releases.teleport.dev/ for more details.
-
-#### Improvements
-
-* Added a `--format=json` playback option for `tsh play`. For example, `tsh play --format=json ~/play/0c0b81ed-91a9-4a2a-8d7c-7495891a6ca0.tar | jq '.event'` can be used to show all events within a local archive. [#4578](https://github.com/gravitational/teleport/issues/4578)
-* Added support for continuous backups and auto scaling for DynamoDB. [#4780](https://github.com/gravitational/teleport/issues/4780)
-* Added a Linux ARM64/ARMv8 (64-bit) release. [#3383](https://github.com/gravitational/teleport/issues/3383)
-* Added the `https_keypairs` field, which replaces `https_key_file` and `https_cert_file`. This allows administrators to load multiple HTTPS certs for Teleport application access. Teleport 5.0 is backwards compatible with the old format, but we recommend updating your configuration to use `https_keypairs`.
-
-Enterprise Only:
-
-* `tctl` can load credentials from `~/.tsh`. [#4678](https://github.com/gravitational/teleport/pull/4678)
-* Teams can require a user-submitted reason when using Access Workflows. [#4573](https://github.com/gravitational/teleport/pull/4573#issuecomment-720777443)
-
-#### Fixes
-
-* Updated `tctl` to always format resources as lists in JSON/YAML. [#4281](https://github.com/gravitational/teleport/pull/4281)
-* Updated `tsh status` to print Kubernetes status. [#4348](https://github.com/gravitational/teleport/pull/4348)
-* Fixed intermittent issues with `loginuid.so`. [#3245](https://github.com/gravitational/teleport/issues/3245)
-* Reduced `access denied to Proxy` log spam. [#2920](https://github.com/gravitational/teleport/issues/2920)
-* Various AMI fixes: paths are now consistent with other Teleport packages, and configuration files will not be overwritten on reboot.
-
-#### Documentation
-
-We've added an [API Guide](docs/pages/admin-guides/api/api.mdx) to simplify developing applications against Teleport.
-
-#### Upgrade Notes
-
-Please follow our [standard upgrade procedure](docs/pages/upgrading/upgrading.mdx).
-
-* Optional: Consider updating `https_key_file` & `https_cert_file` to our new `https_keypairs:` format.
-* Optional: Consider migrating Kubernetes access from `proxy_service` to `kubernetes_service` after the upgrade.
-
-### 4.4.6
-
-This release of Teleport contains a security fix and a bug fix.
-
-* Patched a SAML authentication bypass (see https://github.com/russellhaering/gosaml2/security/advisories/GHSA-xhqq-x44f-9fgg): [#5120](https://github.com/gravitational/teleport/pull/5120).
-
-Any Enterprise SSO users using Okta, Active Directory, OneLogin or custom SAML connectors should upgrade their Auth Service to version 4.4.6 and restart Teleport. If you are unable to upgrade immediately, we suggest disabling SAML connectors for all clusters until the updates can be applied.
-
-* Fixed an issue where `tsh login` would fail with an `AccessDenied` error if
-the user was previously logged into a leaf cluster. [#5105](https://github.com/gravitational/teleport/pull/5105)
-
-### 4.4.5
-
-This release of Teleport contains a bug fix.
-
-* Fixed an issue where a slow or unresponsive Teleport Auth Service instance could hang client connections in async recording mode. [#4696](https://github.com/gravitational/teleport/pull/4696)
-
-### 4.4.4
-
-This release of Teleport adds enhancements to the Access Workflows API.
-
-* Support for creating limited roles that trigger Access Requests
-on login, allowing users to be configured such that no nodes can
-be accessed without externally granted roles.
-
-* Teleport UI support for automatically generating Access Requests and
-assuming new roles upon approval (Access Requests were previously
-only available in `tsh`).
-
-* New `claims_to_roles` mapping that can use claims from external
-identity providers to determine which roles a user can request.
-
-* Various minor API improvements to help make requests easier to
-manage and audit, including support for human-readable
-request/approve/deny reasons and structured annotations.
-
-### 4.4.2
-
-This release of Teleport adds support for a new build architecture.
-
-* Added automatic arm64 builds of Teleport to the download portal.
-
-### 4.4.1
-
-This release of Teleport contains a bug fix.
-
-* Fixed an issue where defining multiple logging configurations would cause Teleport to crash. [#4598](https://github.com/gravitational/teleport/issues/4598)
-
-### 4.4.0
-
-This is a major Teleport release with a focus on new features, functionality, and bug fixes. It’s a substantial release and users can review [4.4 closed issues](https://github.com/gravitational/teleport/milestone/40?closed=1) on Github for details of all items.
-
-#### New Features
-
-##### Concurrent Session Control
-
-This addition to Teleport helps customers meet AC-10 (Concurrent Session Control) compliance. We now provide two new optional configuration values: `max_connections` and `max_sessions`.
-
-###### `max_connections`
-
-This value limits the total number of concurrent connections to nodes running Teleport within a cluster, and is applied at a per-user level. If you set `max_connections` to `1`, a `tsh` user would only be able to `tsh ssh` into one node at a time.
-
-###### `max_sessions` per connection
-
-This value limits the total number of session channels which can be established across a single SSH connection (typically used for interactive terminals or remote exec operations). This is useful for cases where nodes have Teleport set up, but a user is using OpenSSH to connect to them. It is essentially equivalent to the `MaxSessions` configuration value accepted by `sshd`.
- -```yaml -spec: - options: - # Optional: Required to be set for AC-10 Compliance - max_connections: 2 - # Optional: To match OpenSSH behavior set to 10 - max_sessions: 10 -``` - -###### `session_control_timeout` - -A new `session_control_timeout` configuration value has been added to the `auth_service` configuration block of the Teleport config file. It's unlikely that you'll need to modify this. - -```yaml -auth_service: - session_control_timeout: 2m # default -# ... -``` - -#### Session Streaming Improvements - -Teleport 4.4 includes a complete refactoring of our event system. This resolved a few customer bug reports such as [#3800: Events overwritten in DynamoDB](https://github.com/gravitational/teleport/issues/3800) and [#3182: Teleport consuming all disk space with multipart uploads](https://github.com/gravitational/teleport/issues/3182). - -Along with foundational improvements, 4.4 includes two new experimental `session_recording` options: `node-sync` and `proxy-sync`. -NOTE: These experimental modes require all Teleport Auth Service instances, Proxy Service instances, and nodes to be running Teleport 4.4. - -```yaml -# This section configures the Auth Service: -auth_service: - # Optional setting for configuring session recording. Possible values are: - # "node" : sessions will be recorded on the node level (the default) - # "proxy" : recording on the proxy level, see "recording proxy mode" section. - # "off" : session recording is turned off - # - # EXPERIMENTAL *-sync modes: proxy and node send logs directly to S3 or other - # storage without storing the records on disk at all. This mode will kill a - # connection if network connectivity is lost. - # NOTE: These experimental modes require all Teleport Auth Service instances, - # Proxy Service instances, and nodes to be running Teleport 4.4. 
-  #
-  # "node-sync" : session recordings will be streamed from node -> auth -> storage
-  # "proxy-sync" : session recordings will be streamed from proxy -> auth -> storage
-  #
-  session_recording: "node-sync"
-```
-
-#### Improvements
-
-* Added session streaming. [#4045](https://github.com/gravitational/teleport/pull/4045)
-* Added concurrent session control. [#4138](https://github.com/gravitational/teleport/pull/4138)
-* Added the ability to specify a leaf cluster when generating `kubeconfig` via `tctl auth sign`. [#4446](https://github.com/gravitational/teleport/pull/4446)
-* Added output options (like JSON) for `tsh ls`. [#4390](https://github.com/gravitational/teleport/pull/4390)
-* Added node ID to the heartbeat debug log. [#4291](https://github.com/gravitational/teleport/pull/4291)
-* Added the option to trigger `pam_authenticate` on login. [#3966](https://github.com/gravitational/teleport/pull/3966)
-
-#### Fixes
-
-* Fixed an issue that caused some idle `kubectl exec` sessions to terminate. [#4377](https://github.com/gravitational/teleport/pull/4377)
-* Fixed a symlink issue when using `tsh` on Windows. [#4347](https://github.com/gravitational/teleport/pull/4347)
-* Fixed `tctl top` so it runs without the debug flag and on dark terminals. [#4282](https://github.com/gravitational/teleport/pull/4282) [#4231](https://github.com/gravitational/teleport/pull/4231)
-* Fixed an issue that caused DynamoDB not to respect HTTP CONNECT proxies. [#4271](https://github.com/gravitational/teleport/pull/4271)
-* Fixed the `/readyz` endpoint to recover much quicker. [#4223](https://github.com/gravitational/teleport/pull/4223)
-
-#### Documentation
-
-* Updated Google Workspace documentation to add clarification on supported account types. [#4394](https://github.com/gravitational/teleport/pull/4394)
-* Updated IoT instructions on necessary ports. [#4398](https://github.com/gravitational/teleport/pull/4398)
-* Updated Trusted Cluster documentation on how to remove trust from root and leaf clusters.
[#4358](https://github.com/gravitational/teleport/pull/4358)
-* Updated the PAM documentation with PAM authentication usage information. [#4352](https://github.com/gravitational/teleport/pull/4352)
-
-#### Upgrade Notes
-
-Please follow our [standard upgrade
-procedure](docs/pages/upgrading/upgrading.mdx).
-
-## 4.3.9
-
-This release of Teleport contains a security fix.
-
-* Patched a SAML authentication bypass (see https://github.com/russellhaering/gosaml2/security/advisories/GHSA-xhqq-x44f-9fgg): [#5122](https://github.com/gravitational/teleport/pull/5122).
-
-Any Enterprise SSO users using Okta, Active Directory, OneLogin or custom SAML connectors should upgrade their Auth Service to version 4.3.9 and restart Teleport. If you are unable to upgrade immediately, we suggest disabling SAML connectors for all clusters until the updates can be applied.
-
-## 4.3.8
-
-This release of Teleport adds support for a new build architecture.
-
-* Added automatic arm64 builds of Teleport to the download portal.
-
-## 4.3.7
-
-This release of Teleport contains a security fix and a bug fix.
-
-* Mitigated [CVE-2020-15216](https://nvd.nist.gov/vuln/detail/CVE-2020-15216) by updating github.com/russellhaering/goxmldsig.
-
-### Details
-A vulnerability was discovered in the `github.com/russellhaering/goxmldsig` library, which is used by Teleport to validate the
-signatures of XML files used to configure SAML 2.0 connectors. With a carefully crafted XML file, an attacker can completely
-bypass XML signature validation and pass off an altered file as a signed one.
-
-### Actions
-The `goxmldsig` library has been updated upstream and Teleport 4.3.7 includes the fix. Any Enterprise SSO users using Okta,
-Active Directory, OneLogin or custom SAML connectors should upgrade their Auth Service to version 4.3.7 and restart Teleport.
-
-If you are unable to upgrade immediately, we suggest deleting SAML connectors for all clusters until the updates can be applied.
-
-* Fixed an issue where DynamoDB connections made by Teleport would not respect the `HTTP_PROXY` or `HTTPS_PROXY` environment variables. [#4271](https://github.com/gravitational/teleport/pull/4271)
-
-## 4.3.6
-
-This release of Teleport contains multiple bug fixes.
-
-* Fixed an issue with prefix migration that could lead to loss of cluster state. [#4299](https://github.com/gravitational/teleport/pull/4299) [#4345](https://github.com/gravitational/teleport/pull/4345)
-* Fixed an issue that caused excessively slow loading of the UI on large clusters. [#4326](https://github.com/gravitational/teleport/pull/4326)
-* Updated the `/readyz` endpoint to recover faster after a node goes into a degraded state. [#4223](https://github.com/gravitational/teleport/pull/4223)
-* Added node UUID to debug logs to allow correlation between TCP connections and nodes. [#4291](https://github.com/gravitational/teleport/pull/4291)
-
-## 4.3.5
-
-This release of Teleport contains a bug fix.
-
-* Fixed an issue that caused Teleport Docker images to be built incorrectly. [#4201](https://github.com/gravitational/teleport/pull/4201)
-
-## 4.3.4
-
-This release of Teleport contains multiple bug fixes.
-
-* Fixed an issue that caused intermittent login failures when using PAM modules like `pam_loginuid.so` and `pam_selinux.so`. [#4133](https://github.com/gravitational/teleport/pull/4133)
-* Fixed an issue that required users to manually verify a certificate when exporting an identity file. [#4003](https://github.com/gravitational/teleport/pull/4003)
-* Fixed an issue that prevented local user creation using Firestore. [#4160](https://github.com/gravitational/teleport/pull/4160)
-* Fixed an issue that could cause `tsh` to panic when using a PEM file. [#4189](https://github.com/gravitational/teleport/pull/4189)
-
-## 4.3.2
-
-This release of Teleport contains multiple bug fixes.
-
-* Reverted the base OS in container images to Ubuntu.
[#4054](https://github.com/gravitational/teleport/issues/4054)
-* Fixed an issue that prevented changing the path for the Audit Log. [#3771](https://github.com/gravitational/teleport/issues/3771)
-* Fixed an issue that allowed servers with invalid labels to be added to the cluster. [#4034](https://github.com/gravitational/teleport/issues/4034)
-* Fixed an issue that caused Cloud Firestore to panic on startup. [#4041](https://github.com/gravitational/teleport/pull/4041)
-* Fixed an error that would cause Teleport to fail to load with the error "list of proxies empty". [#4005](https://github.com/gravitational/teleport/issues/4005)
-* Fixed an issue that would prevent playback of Kubernetes sessions. [#4055](https://github.com/gravitational/teleport/issues/4055)
-* Fixed regressions in the UI. [#4013](https://github.com/gravitational/teleport/issues/4013) [#4012](https://github.com/gravitational/teleport/issues/4012) [#4035](https://github.com/gravitational/teleport/issues/4035) [#4051](https://github.com/gravitational/teleport/issues/4051) [#4044](https://github.com/gravitational/teleport/issues/4044)
-
-## 4.3.0
-
-This is a major Teleport release with a focus on new features, functionality, and bug fixes. It’s a substantial release and users can review [4.3 closed issues](https://github.com/gravitational/teleport/milestone/37?closed=1) on Github for details of all items.
-
-#### New Features
-
-##### Web UI
-
-Teleport 4.3 includes a completely redesigned Web UI. The new Web UI expands the management functionality of a Teleport cluster and improves the user experience of using Teleport. Teleport's new terminal provides a jumping-off point to access nodes on the current cluster and on other clusters via the web.
-
-Teleport's Web UI now exposes Teleport’s audit log, letting auditors and administrators view Teleport access events, SSH events, recorded sessions, and enhanced session recording all in one view.
-
-##### Teleport Plugins
-
-Teleport 4.3 introduces four new plugins that work out of the box with [Approval Workflow](docs/pages/admin-guides/access-controls/access-request-plugins/access-request-plugins.mdx). These plugins allow you to automatically support role escalation with commonly used third-party services. The built-in plugins are listed below.
-
-* [PagerDuty](docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-pagerduty.mdx)
-* [Jira](docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-jira.mdx)
-* [Slack](docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-slack.mdx)
-* [Mattermost](docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-mattermost.mdx)
-
-#### Improvements
-
-* Added the ability for local users to reset their own passwords. [#2387](https://github.com/gravitational/teleport/pull/3287)
-* Added user impersonation (`kube_users`) support to the Kubernetes Proxy. [#3369](https://github.com/gravitational/teleport/issues/3369)
-* Added support for third-party S3-compatible storage for sessions. [#3057](https://github.com/gravitational/teleport/pull/3057)
-* Added support for GCP backend data stores. [#3766](https://github.com/gravitational/teleport/pull/3766) [#3014](https://github.com/gravitational/teleport/pull/3014)
-* Added support for X11 forwarding to OpenSSH servers. [#3401](https://github.com/gravitational/teleport/issues/3401)
-* Added support for auth plugins in proxy `kubeconfig`. [#3655](https://github.com/gravitational/teleport/pull/3655)
-* Added support for OpenSSH-like escape sequences. [#3752](https://github.com/gravitational/teleport/pull/3752)
-* Added the `--browser` flag to `tsh`. [#3737](https://github.com/gravitational/teleport/pull/3737)
-* Updated `teleport configure` output to be more useful out of the box. [#3429](https://github.com/gravitational/teleport/pull/3429)
-* Updated the ability to show only SSO on the login page.
[#2789](https://github.com/gravitational/teleport/issues/2789)
-* Updated help and support section in Web UI. [#3531](https://github.com/gravitational/teleport/issues/3531)
-* Updated default SSH signing algorithm to SHA-512 for new clusters. [#3777](https://github.com/gravitational/teleport/pull/3777)
-* Standardized audit event fields.
-
-#### Fixes
-
-* Fixed removing existing user definitions in kubeconfig. [#3209](https://github.com/gravitational/teleport/issues/3749)
-* Fixed an issue where port forwarding could fail in certain circumstances. [#3749](https://github.com/gravitational/teleport/issues/3749)
-* Fixed a temporary role grants issue when forwarding Kubernetes requests. [#3624](https://github.com/gravitational/teleport/pull/3624)
-* Fixed an issue that prevented copy/paste in the web terminal. [#92](https://github.com/gravitational/webapps/issues/92)
-* Fixed an issue where the proxy did not test Kubernetes permissions at startup. [#3812](https://github.com/gravitational/teleport/pull/3812)
-* Fixed `tsh` and `gpg-agent` integration. [#3169](https://github.com/gravitational/teleport/issues/3169)
-* Fixed vulnerabilities in the Teleport Docker image. [https://quay.io/repository/gravitational/teleport?tab=tags](https://quay.io/repository/gravitational/teleport?tab=tags)
-
-#### Upgrade Notes
-
-Always follow the [recommended upgrade
-procedure](docs/pages/upgrading/upgrading.mdx) to upgrade to this version.
-
-##### New Signing Algorithm
-
-If you’re upgrading an existing version of Teleport, you may want to consider rotating the CA to SHA-256 or SHA-512 for RSA SSH certificate signatures. The previous default was SHA-1, which is now considered to be weak against brute-force attacks. SHA-1 certificate signatures are also [no longer accepted](https://www.openssh.com/releasenotes.html) by OpenSSH versions 8.2 and above. All new Teleport clusters will default to SHA-512 based signatures.
To upgrade an existing cluster, set the following in your `teleport.yaml`: - -```yaml -teleport: -  ca_signature_algo: "rsa-sha2-512" -``` - -Rotate the cluster CA, following [these -docs](docs/pages/admin-guides/management/operations/ca-rotation.mdx). - -##### Web UI - -Due to the number of changes included in the redesigned Web UI, some URLs and functionality have shifted. Refer to the following ticket for more details. [#3580](https://github.com/gravitational/teleport/issues/3580) - -##### RBAC for Audit Log and Recorded Sessions - -Teleport 4.3 has made the audit log accessible via the Web UI. Enterprise customers -can limit access by changing the options on the new `event` resource. - -```yaml -# list and read audit log, including audit events and recorded sessions -- resources: [event] -  verbs: [list, read] -``` - -##### Kubernetes Permissions - -The minimum set of Kubernetes permissions that need to be granted to Teleport -proxies has been updated. If you use the Kubernetes integration, please make -sure that the ClusterRole used by the proxy has [sufficient -permissions](./docs/pages/enroll-resources/kubernetes-access/controls.mdx). - -##### Path prefix for etcd - -The [etcd backend](docs/pages/reference/backends.mdx#etcd) now correctly uses -the “prefix” config value when storing data. Upgrading from 4.2 to 4.3 will -migrate the data as needed at startup. Make sure you follow our Teleport -[upgrade guidance](docs/pages/upgrading/upgrading.mdx). - -**Note: If you use an etcd backend with a non-default prefix and need to downgrade from 4.3 to 4.2, you should [back up Teleport data and restore it](docs/pages/admin-guides/management/operations/backup-restore.mdx) into the downgraded cluster.** - -## 4.2.12 - -This release of Teleport contains a security fix. - -* Mitigated [CVE-2020-15216](https://nvd.nist.gov/vuln/detail/CVE-2020-15216) by updating github.com/russellhaering/goxmldsig.
- -### Details -A vulnerability was discovered in the `github.com/russellhaering/goxmldsig` library which is used by Teleport to validate the -signatures of XML files used to configure SAML 2.0 connectors. With a carefully crafted XML file, an attacker can completely -bypass XML signature validation and pass off an altered file as a signed one. - -### Actions -The `goxmldsig` library has been updated upstream and Teleport 4.2.12 includes the fix. Any Enterprise SSO users using Okta, -Active Directory, OneLogin or custom SAML connectors should upgrade their Auth Service to version 4.2.12 and restart Teleport. - -If you are unable to upgrade immediately, we suggest deleting SAML connectors for all clusters until the updates can be applied. - -## 4.2.11 - -This release of Teleport contains multiple bug fixes. - -* Fixed an issue that prevented upload of session archives to NFS volumes. [#3780](https://github.com/gravitational/teleport/pull/3780) -* Fixed an issue with port forwarding that prevented TCP connections from being closed correctly. [#3801](https://github.com/gravitational/teleport/pull/3801) -* Fixed an issue in `tsh` that would cause connections to the Auth Service to fail on large clusters. [#3872](https://github.com/gravitational/teleport/pull/3872) -* Fixed an issue that prevented the use of Write-Only roles with S3 and GCS. [#3810](https://github.com/gravitational/teleport/pull/3810) - -## 4.2.10 - -This release of Teleport contains multiple bug fixes. - -* Fixed an issue that caused Teleport environment variables not to be available in PAM modules. [#3725](https://github.com/gravitational/teleport/pull/3725) -* Fixed an issue with `tsh login` not working correctly with Kubernetes clusters. [#3693](https://github.com/gravitational/teleport/issues/3693) - -## 4.2.9 - -This release of Teleport contains multiple bug fixes. - -* Fixed an issue where double `tsh login` would be required to log in to a leaf cluster.
[#3639](https://github.com/gravitational/teleport/pull/3639) -* Fixed an issue that was preventing connection reuse. [#3613](https://github.com/gravitational/teleport/pull/3613) -* Fixed an issue that could cause `tsh ls` to return stale results. [#3536](https://github.com/gravitational/teleport/pull/3536) - -## 4.2.8 - -This release of Teleport contains multiple bug fixes. - -* Fixed an issue where `^C` would not terminate `tsh`. [#3456](https://github.com/gravitational/teleport/pull/3456) -* Fixed an issue where enhanced session recording could cause Teleport to panic. [#3506](https://github.com/gravitational/teleport/pull/3506) - -## 4.2.7 - -As part of a routine security audit of Teleport, a security vulnerability was discovered that affects all recent releases of Teleport. We strongly suggest upgrading to the latest patched release to mitigate this vulnerability. - -### Details - -Due to a flaw in how the Teleport Web UI handled host certificate validation, host certificate validation was disabled for clusters where connections were terminated at the node. This means that an attacker could impersonate a Teleport node without detection when connecting through the Web UI. - -Clusters where sessions were terminated at the proxy (recording proxy mode) are not affected. - -Command line programs like `tsh` (or `ssh`) are not affected by this vulnerability. - -### Actions - -To mitigate this issue, upgrade and restart all Teleport proxy processes. - -## 4.2.6 - -This release of Teleport contains a bug fix. - -* Fixed a regression in reissuing certificates that could cause nodes to not start. [#3449](https://github.com/gravitational/teleport/pull/3449) - -## 4.2.5 - -This release of Teleport contains multiple bug fixes. - -* Added support for custom OIDC prompts. [#3409](https://github.com/gravitational/teleport/pull/3409) -* Added support for `kubernetes_users` in roles.
[#3404](https://github.com/gravitational/teleport/pull/3404) -* Added support for extended variable interpolation. [#3404](https://github.com/gravitational/teleport/pull/3404) -* Added SameSite attribute to CSRF cookie. [#3441](https://github.com/gravitational/teleport/pull/3441) - -## 4.2.4 - -This release of Teleport contains bug fixes. - -* Fixed an issue where Teleport could connect to the wrong node and added support for connecting via UUID. [#2396](https://github.com/gravitational/teleport/issues/2396) -* Fixed an issue where `tsh login` would fail to output identity when using the `--out` parameter. [#3339](https://github.com/gravitational/teleport/issues/3339) - -## 4.2.3 - -This release of Teleport contains bug and security fixes. - -* Mitigated [CVE-2020-9283](https://groups.google.com/forum/#!msg/golang-announce/3L45YRc91SY/ywEPcKLnGQAJ) by updating golang.org/x/crypto. -* Fixed PAM integration to support user creation upon login. [#3317](https://github.com/gravitational/teleport/pull/3317) [#3346](https://github.com/gravitational/teleport/pull/3346) -* Improved Teleport performance on large IoT clusters. [#3227](https://github.com/gravitational/teleport/issues/3227) -* Added support for PluginData to Teleport plugins. [#3286](https://github.com/gravitational/teleport/issues/3286) [#3298](https://github.com/gravitational/teleport/issues/3298) - -## 4.2.2 - -This release of Teleport contains bug fixes and improvements. - -* Fixed a regression in role mapping between trusted clusters. [#3252](https://github.com/gravitational/teleport/issues/3252) -* Fixed a variety of issues with Enhanced Session Recording, including support for more operating systems and installation from packages. [#3279](https://github.com/gravitational/teleport/pull/3279) - -## 4.2.1 - -This release of Teleport contains bug fixes and minor usability improvements. - -* New build command for client-only (`tsh`) .pkg builds.
[#3159](https://github.com/gravitational/teleport/pull/3159) -* Added support for etcd password auth. [#3234](https://github.com/gravitational/teleport/pull/3234) -* Added third-party S3 support. [#3234](https://github.com/gravitational/teleport/pull/3234) -* Fixed an issue where the access-request event system failed when the cache was enabled. [#3223](https://github.com/gravitational/teleport/pull/3223) -* Fixed cgroup resolution so enhanced session recording works on Debian-based distributions. [#3215](https://github.com/gravitational/teleport/pull/3215) - -## 4.2.0 - -This is a minor Teleport release with a focus on new features and bug fixes. - -### Improvements - -* Alpha: Enhanced Session Recording lets you know what's really happening during a Teleport session. [#2948](https://github.com/gravitational/teleport/issues/2948) -* Alpha: Workflows API lets admins escalate RBAC roles in response to user requests. [Read the docs](docs/pages/admin-guides/access-controls/access-requests/access-requests.mdx). [#3006](https://github.com/gravitational/teleport/issues/3006) -* Beta: Teleport provides HA support on Google Cloud Platform using Firestore and Google Cloud Storage. [Read the docs](docs/pages/admin-guides/deploy-a-cluster/deployments/gcp.mdx). [#2821](https://github.com/gravitational/teleport/pull/2821) -* Remote `tctl` execution is now possible. [Read the docs](./docs/pages/reference/cli/tctl.mdx). [#1525](https://github.com/gravitational/teleport/issues/1525) [#2991](https://github.com/gravitational/teleport/issues/2991) - -### Fixes - -* Fixed an issue in SOCKS4 when rendering the remote address. [#3110](https://github.com/gravitational/teleport/issues/3110) - -### Documentation - -* Adopted root/leaf terminology for trusted clusters. [Trusted cluster documentation](docs/pages/admin-guides/management/admin/trustedclusters.mdx). -* Documented Teleport FedRAMP & FIPS support.
[FedRAMP & FIPS documentation](docs/pages/admin-guides/access-controls/compliance-frameworks/fedramp.mdx). - -## 4.1.11 - -This release of Teleport contains a security fix. - -* Mitigated [CVE-2020-15216](https://nvd.nist.gov/vuln/detail/CVE-2020-15216) by updating github.com/russellhaering/goxmldsig. - -### Details -A vulnerability was discovered in the `github.com/russellhaering/goxmldsig` library which is used by Teleport to validate the -signatures of XML files used to configure SAML 2.0 connectors. With a carefully crafted XML file, an attacker can completely -bypass XML signature validation and pass off an altered file as a signed one. - -### Actions -The `goxmldsig` library has been updated upstream and Teleport 4.1.11 includes the fix. Any Enterprise SSO users using Okta, -Active Directory, OneLogin or custom SAML connectors should upgrade their Auth Service to version 4.1.11 and restart Teleport. - -If you are unable to upgrade immediately, we suggest deleting SAML connectors for all clusters until the updates can be applied. - -## 4.1.10 - -As part of a routine security audit of Teleport, a security vulnerability was discovered that affects all recent releases of Teleport. We strongly suggest upgrading to the latest patched release to mitigate this vulnerability. - -### Details - -Due to a flaw in how the Teleport Web UI handled host certificate validation, host certificate validation was disabled for clusters where connections were terminated at the node. This means that an attacker could impersonate a Teleport node without detection when connecting through the Web UI. - -Clusters where sessions were terminated at the proxy (recording proxy mode) are not affected. - -Command line programs like `tsh` (or `ssh`) are not affected by this vulnerability. - -### Actions - -To mitigate this issue, upgrade and restart all Teleport proxy processes. - -## 4.1.9 - -This release of Teleport contains a security fix. 
- -* Mitigated [CVE-2020-9283](https://groups.google.com/forum/#!msg/golang-announce/3L45YRc91SY/ywEPcKLnGQAJ) by updating golang.org/x/crypto. - -## 4.1.8 - -This release of Teleport contains a bug fix. - -* Fixed a regression in role mapping between trusted clusters. [#3252](https://github.com/gravitational/teleport/issues/3252) - -## 4.1.7 - -This release of Teleport contains a bug fix. - -* Fixed issue where the port forwarding option in a role was ignored. [#3208](https://github.com/gravitational/teleport/pull/3208) - -## 4.1.6 - -This release of Teleport contains a bug fix. - -* Fixed an issue that caused Teleport not to start with certain OIDC claims. [#3053](https://github.com/gravitational/teleport/issues/3053) - -## 4.1.5 - -This release of Teleport adds support for an older version of Linux. - -* Added RHEL/CentOS 6.x builds to the build pipeline. [#3175](https://github.com/gravitational/teleport/pull/3175) - -## 4.1.4 - -This release of Teleport contains a bug fix. - -* Fixed GSuite integration by adding support for service accounts. [#3122](https://github.com/gravitational/teleport/pull/3122) - -## 4.1.3 - -This release of Teleport contains multiple bug fixes. - -* Removed `TLS_RSA_WITH_AES_128_GCM_SHA{256,384}` from default ciphersuites due to compatibility issues with HTTP/2. -* Fixed issues with `local_auth` for FIPS builds. [#3100](https://github.com/gravitational/teleport/pull/3100) -* Upgraded Go runtime to 1.13.2 to mitigate [CVE-2019-16276](https://github.com/golang/go/issues/34540) and [CVE-2019-17596](https://github.com/golang/go/issues/34960). - -## 4.1.2 - -This release of Teleport contains improvements to the build code. - -* Added support for building Docker images using the FIPS-compliant version of Teleport. The first of these images is `quay.io/gravitational/teleport-ent:4.1.2-fips`. -* In the future, these images will be automatically built for use by Teleport Enterprise customers. - -## 4.1.1 - -This release of Teleport contains a bug fix.
- -* Fixed an issue with multi-cluster EKS when the Teleport proxy runs outside EKS. [#3070](https://github.com/gravitational/teleport/pull/3070) - -## 4.1.0 - -This is a major Teleport release with a focus on stability and bug fixes. - -### Improvements - -* Added support for IPv6. [#2124](https://github.com/gravitational/teleport/issues/2124) -* Kubernetes support no longer requires SNI. [#2766](https://github.com/gravitational/teleport/issues/2766) -* Added support for using a path for `auth_token` in `teleport.yaml`. [#2515](https://github.com/gravitational/teleport/issues/2515) -* Implemented ProxyJump compatibility. [#2543](https://github.com/gravitational/teleport/issues/2543) -* Audit logs now show roles. [#2823](https://github.com/gravitational/teleport/issues/2823) -* Allowed `tsh` to go into the background without executing a remote command. [#2297](https://github.com/gravitational/teleport/issues/2297) -* Provided a high-level tool to back up and restore the cluster state. [#2480](https://github.com/gravitational/teleport/issues/2480) -* Nodes no longer use a stale list when connecting to proxies (discovery protocol). [#2832](https://github.com/gravitational/teleport/issues/2832) - -### Fixes - -* Fixed an issue where the proxy could hang due to an invalid OIDC connector. [#2690](https://github.com/gravitational/teleport/issues/2690) -* Fixed `-D` flag parsing. [#2663](https://github.com/gravitational/teleport/issues/2663) -* Fixed an issue where `tsh status` did not show the correct cluster name. [#2671](https://github.com/gravitational/teleport/issues/2671) -* Fixed an issue where Teleport truncated the MOTD with PAM. [#2477](https://github.com/gravitational/teleport/issues/2477) -* Miscellaneous fixes around error handling and reporting. - -## 4.0.16 - -As part of a routine security audit of Teleport, a security vulnerability was discovered that affects all recent releases of Teleport. We strongly suggest upgrading to the latest patched release to mitigate this vulnerability.
- -### Details - -Due to a flaw in how the Teleport Web UI handled host certificate validation, host certificate validation was disabled for clusters where connections were terminated at the node. This means that an attacker could impersonate a Teleport node without detection when connecting through the Web UI. - -Clusters where sessions were terminated at the proxy (recording proxy mode) are not affected. - -Command line programs like `tsh` (or `ssh`) are not affected by this vulnerability. - -### Actions - -To mitigate this issue, upgrade and restart all Teleport proxy processes. - -## 4.0.15 - -This release of Teleport contains a security fix. - -* Mitigated [CVE-2020-9283](https://groups.google.com/forum/#!msg/golang-announce/3L45YRc91SY/ywEPcKLnGQAJ) by updating golang.org/x/crypto. - -## 4.0.14 - -This release of Teleport contains a bug fix. - -* Fixed a regression in role mapping between trusted clusters. [#3252](https://github.com/gravitational/teleport/issues/3252) - -## 4.0.13 - -This release of Teleport contains a bug fix. - -* Fixed issue where the port forwarding option in a role was ignored. [#3208](https://github.com/gravitational/teleport/pull/3208) - -## 4.0.12 - -This release of Teleport contains a bug fix. - -* Fixed an issue that caused Teleport not to start with certain OIDC claims. [#3053](https://github.com/gravitational/teleport/issues/3053) - -## 4.0.11 - -This release of Teleport adds support for an older version of Linux. - -* Added RHEL/CentOS 6.x builds to the build pipeline. [#3175](https://github.com/gravitational/teleport/pull/3175) - -## 4.0.10 - -This release of Teleport contains a bug fix. - -* Fixed a goroutine leak that occurred whenever a leaf cluster disconnected from the root cluster. [#3037](https://github.com/gravitational/teleport/pull/3037) - -## 4.0.9 - -This release of Teleport contains a bug fix. - -* Fixed an issue where the Web UI could not connect to older nodes within a cluster.
[#2993](https://github.com/gravitational/teleport/pull/2993) - -## 4.0.8 - -This release of Teleport contains two bug fixes. - -### Description - -* Fixed issue where new versions of `tsh` could not connect to older clusters. [#2969](https://github.com/gravitational/teleport/pull/2969) -* Fixed trait encoding to be more robust. [#2970](https://github.com/gravitational/teleport/pull/2970) - -## 4.0.6 - -This release of Teleport contains a bug fix. - -* Fixed issue introduced in 4.0.5 that broke session recording when using the recording proxy. [#2957](https://github.com/gravitational/teleport/pull/2957) - -## 4.0.4 - -This release of Teleport contains a bug fix. - -* Fixed a memory leak in the cache module. [#2892](https://github.com/gravitational/teleport/pull/2892) - -## 4.0.3 - -* Reduced keep-alive interval to improve interoperability with popular load balancers. [#2845](https://github.com/gravitational/teleport/issues/2845) -* Fixed issue where non-RSA certificates were rejected when not in FIPS mode. [#2879](https://github.com/gravitational/teleport/pull/2879) - -## 4.0.2 - -This release of Teleport contains multiple bug fixes. - -* Fixed an issue that caused active sessions not to be shown. [#2801](https://github.com/gravitational/teleport/issues/2801) -* Fixed further issues with host certificate principal generation. [#2812](https://github.com/gravitational/teleport/pull/2812) -* Fixed issue where fetching the CA would sometimes return "not found". [#2805](https://github.com/gravitational/teleport/pull/2805) - -## 4.0.1 - -This release of Teleport contains multiple bug fixes. - -* Fixed issue that caused processes to be spawned with an incorrect GID. [#2791](https://github.com/gravitational/teleport/pull/2791) -* Fixed host certificate principal generation to only include hosts or IP addresses. [#2790](https://github.com/gravitational/teleport/pull/2790) -* Fixed issue preventing `tsh` 4.0 from connecting to 3.2 clusters.
[#2784](https://github.com/gravitational/teleport/pull/2784) - -## 4.0.0 - -This is a major Teleport release which introduces support for Teleport Internet of Things (IoT). In addition to this new feature, this release includes the usability, performance, and bug fixes listed below. - -### New Features - -#### Teleport for IoT - -With Teleport 4.0, nodes gain the ability to use reverse tunnels to dial back to a Teleport cluster to bypass firewall restrictions. This allows connections even to nodes that a cluster does not have direct network access to. Customers that have been using Trusted Clusters to achieve this can now utilize a unified interface to access all nodes within their infrastructure. - -#### FedRAMP Compliance - -With this release of Teleport, we have built out the foundation to help Teleport Enterprise customers build and meet the requirements in a FedRAMP System Security Plan (SSP). This includes a FIPS 140-2 friendly build of Teleport Enterprise as well as a variety of improvements to aid in complying with security controls even in FedRAMP High environments. - -### Improvements - -* Teleport now supports 10,000 remote connections to a single Teleport cluster. [Using our recommended hardware setup.](docs/pages/admin-guides/management/operations/scaling.mdx) -* Added the ability to delete nodes using `tctl rm`. [#2685](https://github.com/gravitational/teleport/pull/2685) -* Output of `tsh ls` is now sorted by node name. [#2534](https://github.com/gravitational/teleport/pull/2534) - -### Bug Fixes - -* Switched to `xdg-open` to open a browser window on Linux. [#2536](https://github.com/gravitational/teleport/pull/2536) -* Increased SSO callback timeout to 180 seconds. [#2533](https://github.com/gravitational/teleport/pull/2533) -* Set permissions on TTY similar to OpenSSH.
[#2508](https://github.com/gravitational/teleport/pull/2508) - -The lists of improvements and bug fixes above mention only the significant changes; please take a look at the complete list on GitHub for more. - -### Upgrading - -Teleport 4.0 is backwards compatible with Teleport 3.2 and later. [Follow the recommended upgrade procedure to upgrade to this version.](docs/pages/upgrading/upgrading.mdx) - -Note that due to substantial changes between Teleport 3.2 and 4.0, we recommend creating a backup of the backend datastore (DynamoDB, etcd, or dir) before upgrading a cluster to Teleport 4.0 to allow downgrades. - -#### Notes on compatibility - -Teleport has always validated host certificates when a client connects to a server. However, prior to Teleport 4.0, Teleport did not validate that the host the user requested a connection to was in the list of principals on the certificate. To avoid issues during the upgrade, make sure the hosts you connect to have the appropriate address set in `public_addr` in `teleport.yaml` before upgrading. - -## 3.2.15 - -This release of Teleport contains a bug fix. - -* Fixed a regression in role mapping between trusted clusters. [#3252](https://github.com/gravitational/teleport/issues/3252) - -## 3.2.14 - -This release of Teleport contains a bug fix and a feature. - -* Restored the `CreateWebSession` method used by some integrations. [#3076](https://github.com/gravitational/teleport/pull/3076) -* Added Docker registry and Helm repository support to `tsh login`. [#3045](https://github.com/gravitational/teleport/pull/3045) - -## 3.2.13 - -This release of Teleport contains a bug fix. - -### Description - -* Fixed an issue where the TLS certificate was not included in the identity exported by `tctl auth sign`. [#3001](https://github.com/gravitational/teleport/pull/3001) - -## 3.2.12 - -This release of Teleport contains a bug fix. - -* Fixed an issue where the Web UI could not connect to older nodes within a cluster.
[#2993](https://github.com/gravitational/teleport/pull/2993) - -## 3.2.11 - -This release of Teleport contains two bug fixes. - -* Fixed issue where new versions of `tsh` could not connect to older clusters. [#2969](https://github.com/gravitational/teleport/pull/2969) -* Fixed trait encoding to be more robust. [#2970](https://github.com/gravitational/teleport/pull/2970) - -## 3.2.9 - -This release of Teleport contains a bug fix. - -* Fixed issue introduced in 3.2.8 that broke session recording when using the recording proxy. [#2957](https://github.com/gravitational/teleport/pull/2957) - -## 3.2.4 - -This release of Teleport contains multiple bug fixes. - -* Read the cluster name from the `TELEPORT_SITE` environment variable in `tsh`. [#2675](https://github.com/gravitational/teleport/pull/2675) -* Multiple improvements around logging in and saving `tsh` profiles. [#2657](https://github.com/gravitational/teleport/pull/2657) - -## 3.2.2 - -This release of Teleport contains a bug fix. - -#### Changes - -* Fixed issue with `--bind-addr` implementation. [#2650](https://github.com/gravitational/teleport/pull/2650) - -## 3.2.1 - -This release of Teleport contains a new feature. - -#### Changes - -* Added `--bind-addr` to force `tsh` to bind to a specific port during SSO login. [#2620](https://github.com/gravitational/teleport/issues/2620) - -## 3.2 - -This version brings support for Amazon's managed Kubernetes offering (EKS). - -Starting with this release, the Teleport proxy uses [the impersonation API](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation) instead of the [CSR API](https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/#requesting-a-certificate). - -## 3.1.14 - -This release of Teleport contains a bug fix. - -* Fixed an issue where the Web UI could not connect to older nodes within a cluster. [#2993](https://github.com/gravitational/teleport/pull/2993) - -## 3.1.13 - -This release of Teleport contains two bug fixes.
- -* Fixed issue where new versions of `tsh` could not connect to older clusters. [#2969](https://github.com/gravitational/teleport/pull/2969) -* Fixed trait encoding to be more robust. [#2970](https://github.com/gravitational/teleport/pull/2970) - -## 3.1.11 - -This release of Teleport contains a bug fix. - -* Fixed issue introduced in 3.1.10 that broke session recording when using the recording proxy. [#2957](https://github.com/gravitational/teleport/pull/2957) - -## 3.1.8 - -This release of Teleport contains a bug fix. - -#### Changes - -* Fixed issue where SSO users' TTL was set incorrectly. [#2564](https://github.com/gravitational/teleport/pull/2564) - -## 3.1.7 - -This release of Teleport contains a bug fix. - -#### Changes - -* Fixed issue where `tctl users ls` output contained duplicates. [#2569](https://github.com/gravitational/teleport/issues/2569) [#2107](https://github.com/gravitational/teleport/issues/2107) - -## 3.1.6 - -This release of Teleport contains bug fixes, security fixes, and user experience improvements. - -#### Changes - -* Use `xdg-open` instead of `sensible-browser` to open links on Linux. [#2454](https://github.com/gravitational/teleport/issues/2454) -* Increased SSO callback timeout to 180 seconds. [#2483](https://github.com/gravitational/teleport/issues/2483) -* Improved Teleport error messages when it fails to start. [#2525](https://github.com/gravitational/teleport/issues/2525) -* Sort `tsh ls` output by node name. [#2511](https://github.com/gravitational/teleport/issues/2511) -* Support different regions for S3 (sessions) and DynamoDB (audit log). [#2007](https://github.com/gravitational/teleport/issues/2007) -* Fixed syslog output even when Teleport is in debug mode. [#2550](https://github.com/gravitational/teleport/issues/2550) -* Fixed audit log naming conventions. [#2388](https://github.com/gravitational/teleport/issues/2388) -* Fixed issue where `~/.tsh/profile` was deleted upon logout.
[#2546](https://github.com/gravitational/teleport/issues/2546) -* Fixed output of `tctl get` to be compatible with `tctl create`. [#2479](https://github.com/gravitational/teleport/issues/2479) -* Fixed issue where multiple file upload with `scp` did not work correctly. [#2094](https://github.com/gravitational/teleport/issues/2094) -* Correctly set permissions on TTY. [#2540](https://github.com/gravitational/teleport/issues/2540) -* Mitigated `scp` issues when connected to a malicious server. [#2539](https://github.com/gravitational/teleport/issues/2539) - -## 3.1.5 - -Teleport 3.1.5 contains a bug fix and a security fix. - -#### Bug fixes - -* Fixed issue where certificate authorities were not fetched during every login. [#2526](https://github.com/gravitational/teleport/pull/2526) -* Upgraded Go to 1.11.5 to mitigate [CVE-2019-6486](https://groups.google.com/forum/#!topic/golang-announce/mVeX35iXuSw): CPU denial of service in P-521 and P-384 elliptic curve implementation. - -## 3.1.4 - -Teleport 3.1.4 contains one new feature and two bug fixes. - -#### New Feature - -* Added support for GSuite as an SSO provider. [#2455](https://github.com/gravitational/teleport/issues/2455) - -#### Bug fixes - -* Fixed issue where Kubernetes groups were not being passed to remote clusters. [#2484](https://github.com/gravitational/teleport/pull/2484) -* Fixed issue where the client was pulling an incorrect CA for trusted clusters. [#2487](https://github.com/gravitational/teleport/pull/2487) - -## 3.1.3 - -Teleport 3.1.3 contains two security fixes. - -#### Bugfixes - -* Updated xterm.js to mitigate an [RCE in xterm.js](https://github.com/xtermjs/xterm.js/releases/tag/3.10.1). -* Mitigated potential timing attacks during bearer token authentication. [#2482](https://github.com/gravitational/teleport/pull/2482) -* Fixed `x509: certificate signed by unknown authority` error when connecting to DynamoDB within Gravitational's published Docker image.
[#2473](https://github.com/gravitational/teleport/pull/2473) - -## 3.1.2 - -Teleport 3.1.2 contains a security fix. We strongly encourage anyone running Teleport 3.1.1 to upgrade. - -#### Bugfixes - -* Due to a flaw in the internal RBAC verification logic, a compromised node, trusted cluster, or authenticated non-privileged user could craft a special request to Teleport's internal Auth Service API to gain access to the private key material of the cluster's internal certificate authorities and elevate their privileges to full administrative access to the Teleport cluster. This vulnerability only affects authenticated clients; there is no known way for unauthenticated clients to exploit it from outside the cluster. - -## 3.1.1 - -Teleport 3.1.1 contains a security fix. We strongly encourage anyone running Teleport 3.1.0 to upgrade. - -* Upgraded Go to 1.11.4 to mitigate CVE-2018-16875: [CPU denial of service in chain validation](https://golang.org/issue/29233) in Go. For customers using the RHEL 5.x-compatible release of Teleport, we've backported this fix to Go 1.9.7 before releasing RHEL 5.x-compatible binaries. - -## 3.1 - -This is a major Teleport release with a focus on backwards compatibility, stability, and bug fixes. Some of the improvements: - -* Added support for regular expressions in RBAC label keys and values. [#2161](https://github.com/gravitational/teleport/issues/2161) -* Added support for configurable server-side keepalives. [#2334](https://github.com/gravitational/teleport/issues/2334) -* Added support for some `-o` options to improve OpenSSH interoperability. [#2330](https://github.com/gravitational/teleport/issues/2330) -* Added i386 binaries as well as binaries built with an older version of Go to support legacy systems. [#2277](https://github.com/gravitational/teleport/issues/2277) -* Added SOCKS5 support to `tsh`. [#1693](https://github.com/gravitational/teleport/issues/1693) -* Improved UX and security for nodes joining a cluster.
[#2294](https://github.com/gravitational/teleport/issues/2294) -* Improved Kubernetes UX. [#2291](https://github.com/gravitational/teleport/issues/2291) [#2258](https://github.com/gravitational/teleport/issues/2258) [#2304](https://github.com/gravitational/teleport/issues/2304) -* Fixed a bug that prevented copying and pasting text longer than 128 characters in the Web UI. [#2313](https://github.com/gravitational/teleport/issues/2313) -* Fixed issues with `scp` when using the Web UI. [#2300](https://github.com/gravitational/teleport/issues/2300) - -## 3.0.5 - -Teleport 3.0.5 contains a security fix. - -#### Bug fixes - -* Upgraded Go to 1.11.5 to mitigate [CVE-2019-6486](https://groups.google.com/forum/#!topic/golang-announce/mVeX35iXuSw): CPU denial of service in P-521 and P-384 elliptic curve implementation. - -## 3.0.4 - -Teleport 3.0.4 contains two security fixes. - -#### Bugfixes - -* Updated xterm.js to mitigate an [RCE in xterm.js](https://github.com/xtermjs/xterm.js/releases/tag/3.10.1). -* Mitigated potential timing attacks during bearer token authentication. [#2482](https://github.com/gravitational/teleport/pull/2482) - -## 3.0.3 - -Teleport 3.0.3 contains a security fix. We strongly encourage anyone running Teleport 3.0.2 to upgrade. - -#### Bugfixes - -* Due to a flaw in the internal RBAC verification logic, a compromised node, trusted cluster, or authenticated non-privileged user could craft a special request to Teleport's internal Auth Service API to gain access to the private key material of the cluster's internal certificate authorities and elevate their privileges to full administrative access to the Teleport cluster. This vulnerability only affects authenticated clients; there is no known way for unauthenticated clients to exploit it from outside the cluster. - -## 3.0.2 - -Teleport 3.0.2 contains a security fix. We strongly encourage anyone running Teleport 3.0.1 to upgrade.

* Upgraded Go to 1.11.4 to mitigate CVE-2018-16875: [CPU denial of service in chain validation](https://golang.org/issue/29233) in Go. For customers using the RHEL 5.x compatible release of Teleport, we've backported this fix to Go 1.9.7 before releasing RHEL 5.x compatible binaries.

## 3.0.1

This release of Teleport contains the following bug fix:

* Fixed a regression that marked ADFS claims as invalid. [#2293](https://github.com/gravitational/teleport/pull/2293)

## 3.0

This is a major Teleport release which introduces support for Kubernetes
clusters. In addition to this new feature, this release includes several
usability and performance improvements listed below.

#### Kubernetes Support

* `tsh login` can retrieve and install certificates for both Kubernetes and SSH
  at the same time.
* Full audit log support for `kubectl` commands, including session recording
  if the `kubectl exec` command was interactive.
* Unified (AKA "single pane of glass") RBAC for both SSH and Kubernetes permissions.

#### Improvements

* Teleport administrators can now fine-tune the enabled ciphersuites. [#1999](https://github.com/gravitational/teleport/issues/1999)
* Improved the user experience of linking trusted clusters together. [#1971](https://github.com/gravitational/teleport/issues/1971)
* All Teleport components (proxy, auth, and nodes) now support the `public_addr`
  setting, which allows them to be hosted behind NATs/load balancers. [#1793](https://github.com/gravitational/teleport/issues/1793)
* We have documented the previously undocumented monitoring endpoints. [#2103](https://github.com/gravitational/teleport/issues/2103)
* The `etcd` back-end has been updated to implement the 3.3+ protocol. See the upgrading notes below.
* Listing nodes via `tsh ls` or the web UI no longer shows nodes that the currently logged-in user has no access to. [#1954](https://github.com/gravitational/teleport/issues/1954)
* It is now possible to build the `tsh` client on Windows.
Note: only the `tsh login` command is implemented. [#1996](https://github.com/gravitational/teleport/pull/1996)
* The `-i` flag to `tsh login` is now guaranteed to be non-interactive. [#2221](https://github.com/gravitational/teleport/issues/2221)

#### Bugfixes

* Removed the bogus error message "access denied to perform action create on user". [#2132](https://github.com/gravitational/teleport/issues/2132)
* The `scp` implementation in "recording proxy" mode did not work correctly. [#2176](https://github.com/gravitational/teleport/issues/2176)
* Removed the limit of 8 trusted clusters with SSO. [#2192](https://github.com/gravitational/teleport/issues/2192)
* `tsh ls` now works correctly when executed on a remote/trusted cluster. [#2204](https://github.com/gravitational/teleport/milestone/24?closed=1)

The lists of improvements and bug fixes above mention only the significant
changes; please take a look at [the complete list](https://github.com/gravitational/teleport/milestone/24?closed=1)
on GitHub for more.

#### Upgrading to 3.0

Follow the [recommended upgrade
procedure](docs/pages/upgrading/upgrading.mdx) to upgrade to this
version.

**WARNING:** if you are using Teleport with the etcd back-end, make sure your
`etcd` version is 3.3 or newer prior to upgrading to Teleport 3.0.

## 2.7.9

Teleport 2.7.9 contains a security fix.

#### Bug fixes

* Upgraded Go to 1.11.5 to mitigate [CVE-2019-6486](https://groups.google.com/forum/#!topic/golang-announce/mVeX35iXuSw): CPU denial of service in the P-521 and P-384 elliptic curve implementations.

## 2.7.8

Teleport 2.7.8 contains two security fixes.

#### Bugfixes

* Updated xterm.js to mitigate an [RCE in xterm.js](https://github.com/xtermjs/xterm.js/releases/tag/3.10.1).
* Mitigated potential timing attacks during bearer token authentication. [#2482](https://github.com/gravitational/teleport/pull/2482)

## 2.7.7

Teleport 2.7.7 contains two security fixes.
We strongly encourage anyone running Teleport 2.7.6 to upgrade.

#### Bugfixes

* Due to a flaw in the internal RBAC verification logic, a compromised node, trusted cluster, or authenticated non-privileged user could craft a special request to Teleport's internal Auth Service API to gain access to the private key material of the cluster's internal certificate authorities and elevate their privileges to full administrative access to the Teleport cluster. This vulnerability only affects authenticated clients; there is no known way for unauthenticated clients to exploit it from outside the cluster.
* Upgraded Go to 1.11.4 to mitigate CVE-2018-16875: CPU denial of service in chain validation in Go.

## 2.7.6

This release of Teleport contains the following bug fix:

* Fixed a regression that marked ADFS claims as invalid. [#2293](https://github.com/gravitational/teleport/pull/2293)

## 2.7.5

This release of Teleport contains the following bug fix:

* Fixed an issue where Teleport Auth Service instances did not delete temporary files named `/tmp/multipart-`. [#2250](https://github.com/gravitational/teleport/issues/2250)

## 2.7.4

This release of Teleport focuses on bugfixes.

#### Bug Fixes

* Fixed issues with `client_idle_timeout`. [#2166](https://github.com/gravitational/teleport/issues/2166)
* Added support for scalar and list values for `node_labels` in roles. [#2136](https://github.com/gravitational/teleport/issues/2136)
* Improved font support on Ubuntu.

## 2.7.3

This release of Teleport focuses on bugfixes.

#### Bug Fixes

* Fixed an issue that caused a `failed executing request: user agent missing` error when upgrading from 2.6.

## 2.7.2

This release of Teleport focuses on bugfixes.

#### Bug Fixes

* Fixed an issue in Teleport 2.7.1 where the rollback to Go 1.9.7 was not complete for `linux-amd64` binaries.

## 2.7.1

This release of Teleport focuses on bugfixes.

#### Bug Fixes

* Rollback to Go 1.9.7 for users with a custom CA running into `x509: certificate signed by unknown authority`.

## 2.7.0

The primary goal of the 2.7.0 release was to address community feedback and improve performance and flexibility when running Teleport clusters with a large number of nodes.

#### New Features

* The Web UI now includes `scp` (secure copy) functionality. This allows Windows users and other users of the Web UI to upload/download files into SSH nodes using a web browser.
* Fine-grained control over forceful session termination has been added [#1935](https://github.com/gravitational/teleport/issues/1935). It is now possible to:
  * Forcefully disconnect idle clients (no client activity) after a specified timeout.
  * Forcefully disconnect clients when their certificates expire in the middle of an active SSH session.

#### Performance Improvements

* Performance of SSH login commands has been improved on large clusters (thousands of nodes). [#2061](https://github.com/gravitational/teleport/issues/2061)
* DynamoDB storage back-end performance has been improved. [#2021](https://github.com/gravitational/teleport/issues/2021)
* Performance of session recording via a proxy has been improved. [#1966](https://github.com/gravitational/teleport/issues/1966)
* Connections between trusted clusters are managed better. [#2023](https://github.com/gravitational/teleport/issues/2023)

#### Bug Fixes

As always, this release contains several bug fixes. The full list can be seen [here](https://github.com/gravitational/teleport/milestone/25?closed=1). Here are some notable ones:

* It is now possible to issue certificates with a long TTL via the admin's `tctl auth sign` tool. Previously they were limited to 30 hours for an undocumented reason. [#1745](https://github.com/gravitational/teleport/issues/1745)
* Dynamic label values were shown as empty strings.
[#2056](https://github.com/gravitational/teleport/issues/2056)

#### Upgrading

Follow the [recommended upgrade
procedure](docs/pages/upgrading/upgrading.mdx) to upgrade to this
version.

## 2.6.9

This release of Teleport focuses on bugfixes.

#### Bug Fixes

* Fixed an issue in Teleport 2.6.8 where the rollback to Go 1.9.7 was not complete for `linux-amd64` binaries.

## 2.6.8

This release of Teleport focuses on bugfixes.

#### Bug Fixes

* Rollback to Go 1.9.7 for users with a custom CA running into `x509: certificate signed by unknown authority`.

## 2.6.7

This release of Teleport focuses on bugfixes.

#### Bug Fixes

* Resolved dynamic label regression. [#2056](https://github.com/gravitational/teleport/issues/2056)

## 2.6.5

This release of Teleport focuses on bugfixes.

#### Bug Fixes

* Remote clusters no longer try to re-connect to proxies that have been permanently removed. [#2023](https://github.com/gravitational/teleport/issues/2023)
* Sped up login on systems with many users. [#2021](https://github.com/gravitational/teleport/issues/2021)
* Improved overall performance of the etcd backend. [#2030](https://github.com/gravitational/teleport/issues/2030)
* Role login validation now applies after variables have been substituted. [#2022](https://github.com/gravitational/teleport/issues/2022)

## 2.6.3

This release of Teleport focuses on bugfixes.

#### Bug Fixes

* Remote clusters no longer try to re-connect to proxies that have been permanently removed. [#2023](https://github.com/gravitational/teleport/issues/2023)
* Sped up login on systems with many users. [#2021](https://github.com/gravitational/teleport/issues/2021)
* Improved overall performance of the etcd backend. [#2030](https://github.com/gravitational/teleport/issues/2030)
* Role login validation now applies after variables have been substituted.
[#2022](https://github.com/gravitational/teleport/issues/2022)

## 2.6.2

This release of Teleport focuses on bugfixes.

#### Bug Fixes

* Reduced goroutine usage by the forwarding proxy. [#1966](https://github.com/gravitational/teleport/issues/1966)
* Teleport no longer sends the full version in the SSH handshake. [#970](https://github.com/gravitational/teleport/issues/970)
* The force flag works correctly for Trusted Clusters. [#1871](https://github.com/gravitational/teleport/issues/1871)
* Allow manual creation of Certificate Authorities. [#2001](https://github.com/gravitational/teleport/pull/2001)
* Include the Teleport username in port forwarding events. [#2004](https://github.com/gravitational/teleport/pull/2004)
* Allow `tctl auth sign` to create user certificates with arbitrary TTL values. [#1745](https://github.com/gravitational/teleport/issues/1745)
* Upgrade to Go 1.10.3. [#2008](https://github.com/gravitational/teleport/pull/2008)

## 2.6.1

This release of Teleport focuses on bugfixes.

#### Bug Fixes

* Use ciphers, KEX, and MAC algorithms from the Teleport configuration in the reverse tunnel server. [#1984](https://github.com/gravitational/teleport/pull/1984)
* Updated the path sanitizer to allow `@`. [#1985](https://github.com/gravitational/teleport/pull/1985)

## 2.6.0

This release of Teleport brings new features, significant performance and
usability improvements, as well as the usual bugfixes.

During this release cycle, the Teleport source code has been audited for
security vulnerabilities by Cure53, and this release (2.6.0) contains patches
for the discovered problems.

#### New Features

* Support for DynamoDB for storing the audit log events. [#1755](https://github.com/gravitational/teleport/issues/1755)
* Support for Amazon S3 for storing the recorded SSH sessions. [#1755](https://github.com/gravitational/teleport/issues/1755)
* Support for rotating certificate authorities (CA rotation).
  [#1899](https://github.com/gravitational/teleport/pull/1899)
* Integration with the Linux PAM (pluggable authentication modules) subsystem. [#742](https://github.com/gravitational/teleport/issues/742) and [#1766](https://github.com/gravitational/teleport/issues/1766)
* The new CLI command `tsh status` shows users which Teleport clusters they are authenticated with. [#1628](https://github.com/gravitational/teleport/issues/1628)

Additionally, Teleport 2.6.0 has been submitted to the AWS marketplace. Soon
AWS users will be able to create properly configured, secure, and highly
available Teleport clusters with ease.

#### Configuration Changes

* Role templates (deprecated in Teleport 2.3) were fully removed. We recommend
  migrating to role variables, which are documented [here](docs/pages/admin-guides/access-controls/guides/role-templates.mdx).

* Resource names (like roles, connectors, trusted clusters) can no longer
  contain Unicode or other special characters. Update the names of all
  user-created resources to only include alphanumeric characters, hyphens, and dots.

* `advertise_ip` has been deprecated and replaced with the `public_addr` setting. See [#1803](https://github.com/gravitational/teleport/issues/1803).
  Existing configuration files will still work, but we advise Teleport
  administrators to update them to reflect the new format.

* Teleport no longer uses the `boltdb` back-end for storing cluster state _by
  default_. The new default is called `dir` and it uses JSON files
  stored in `/var/lib/teleport/backend`. This change applies to brand-new
  Teleport installations; existing clusters will continue to use `boltdb`.

* The default set of enabled cryptographic primitives has been
  updated to reflect the latest state of SSH and TLS security. [#1856](https://github.com/gravitational/teleport/issues/1856)
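For example, a proxy hosted behind a load balancer can advertise the balancer's external address with the new setting. A minimal, hypothetical `teleport.yaml` fragment (the host name is a placeholder; consult the configuration reference for your version):

```yaml
# Hypothetical sketch: `public_addr` replaces the deprecated `advertise_ip`.
proxy_service:
  enabled: yes
  # Address advertised to clients, e.g. the DNS name of the
  # load balancer sitting in front of the proxy.
  public_addr: proxy.example.com:3080
```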

#### Bug Fixes

The list of the most visible bug fixes in this release:

* `tsh` now properly handles Ctrl+C. [#1882](https://github.com/gravitational/teleport/issues/1882)
* High CPU utilization on ARM platforms during daemon start-up. [#1886](https://github.com/gravitational/teleport/issues/1886)
* Terminal window size can get out of sync on AWS. [#1874](https://github.com/gravitational/teleport/issues/1874)
* Some CLI commands print errors twice. [#1889](https://github.com/gravitational/teleport/issues/1889)
* SSH session playback can be interrupted for long sessions. [#1774](https://github.com/gravitational/teleport/issues/1774)
* Processing of the `HUP` UNIX signal is unreliable when the `teleport` daemon runs under `systemd`. [#1844](https://github.com/gravitational/teleport/issues/1844)

You can see the full list of 2.6.0 changes [here](https://github.com/gravitational/teleport/milestone/22?closed=1).

#### Upgrading

Follow the [recommended upgrade
procedure](docs/pages/upgrading/upgrading.mdx) to upgrade to this
version.

## 2.5.7

This release of Teleport focuses on bugfixes.

#### Bug Fixes

* Allow creation of users via `tctl create`. [#1949](https://github.com/gravitational/teleport/pull/1949)

## 2.5.6

This release of Teleport focuses on bugfixes.

#### Bug Fixes

* Improvements to Teleport `HUP` signal handling for more reliable reload. [#1844](https://github.com/gravitational/teleport/issues/1844)
* Restore the output format of `tctl nodes add --format=json`. [#1846](https://github.com/gravitational/teleport/issues/1846)

## 2.5.5

This release of Teleport focuses on bugfixes.

#### Bug Fixes

* Allow creation of multiple sessions per connection (fixes Ansible issues with the recording proxy). [#1811](https://github.com/gravitational/teleport/issues/1811)

## 2.5.4

This release of Teleport focuses on bugfixes.

#### Bug Fixes

* Only reset the SIGINT handler if it has not been set to ignore.
[#1814](https://github.com/gravitational/teleport/pull/1814)
* Improvement of user-visible errors. [#1798](https://github.com/gravitational/teleport/issues/1798) [#1779](https://github.com/gravitational/teleport/issues/1779)

## 2.5.3

This release of Teleport focuses on bugfixes.

#### Bug Fixes

* Fix logging and collect the status of forked processes. [#1785](https://github.com/gravitational/teleport/issues/1785) [#1776](https://github.com/gravitational/teleport/issues/1776)
* Turn off proxy support when no-tls is used. [#1800](https://github.com/gravitational/teleport/issues/1800)
* Correct the signup URL. [#1777](https://github.com/gravitational/teleport/issues/1777)
* Fix GitHub team pagination issues. [#1734](https://github.com/gravitational/teleport/issues/1734)
* Increase the global dial timeout to 30 seconds. [#1760](https://github.com/gravitational/teleport/issues/1760)
* Reuse the existing signing key. [#1713](https://github.com/gravitational/teleport/issues/1713)
* Don't panic on channel failures. [#1808](https://github.com/gravitational/teleport/pull/1808)

## 2.5.2

This release of Teleport includes bug fixes and regression fixes.

#### Bug Fixes

* Run session migration in the background. [#1784](https://github.com/gravitational/teleport/pull/1784)
* Include the node name in regenerated host certificates. [#1786](https://github.com/gravitational/teleport/issues/1786)

## 2.5.1

This release of Teleport fixes a regression in Teleport binaries.

#### Bug Fixes

* Binaries for macOS have been rebuilt to resolve the "certificate signed by unknown authority" issue.

## 2.5.0

This is a major release of Teleport. Its goal is to make cloud-native
deployments easier. Numerous AWS users have contributed feedback to this
release, which includes:

#### New Features

* Auth servers in a highly available (HA) configuration can share the same `/var/lib/teleport`
  data directory when it's hosted on NFS (or AWS EFS).
  [#1351](https://github.com/gravitational/teleport/issues/1351)

* There is now an AWS reference deployment in the `examples/aws` directory. It
  uses Terraform and demonstrates how to deploy large Teleport clusters on AWS
  using best practices like auto-scaling groups, security groups, secrets
  management, load balancers, etc.

* The Teleport daemon now implements built-in connection draining, which allows
  zero-downtime upgrades. [See
  documentation](docs/pages/upgrading/upgrading.mdx).

* Dynamic join tokens for new nodes can now be explicitly set via `tctl nodes add --token`.
  This allows Teleport admins to use an external mechanism for generating
  cluster invitation tokens.
  [#1615](https://github.com/gravitational/teleport/pull/1615)

* Teleport now correctly manages certificates for accessing proxies behind a
  load balancer with the same domain name. The new configuration parameter
  `public_addr` must be used for this.
  [#1174](https://github.com/gravitational/teleport/issues/1174)

#### Improvements

* Switching to a new TLS-based Auth Service API improves the performance of large clusters.
  [#1528](https://github.com/gravitational/teleport/issues/1528)

* Session recordings are now compressed by default using gzip. This reduces storage
  requirements by up to 80% in our real-world tests.
  [#1579](https://github.com/gravitational/teleport/issues/1579)

* More user-friendly authentication errors in the Teleport audit log help Teleport admins
  troubleshoot configuration errors when integrating with SAML/OIDC providers.
  [#1554](https://github.com/gravitational/teleport/issues/1554),
  [#1553](https://github.com/gravitational/teleport/issues/1553),
  [#1599](https://github.com/gravitational/teleport/issues/1599)

* The `tsh` client will now report if a server's API is no longer compatible.

#### Bug Fixes

* `tsh logout` will now correctly log out from all active Teleport sessions.
  This is useful
  for users who are connected to multiple Teleport clusters at the same time.
  [#1541](https://github.com/gravitational/teleport/issues/1541)

* When parsing YAML, Teleport now supports the `---` list item separator to create
  multiple resources with a single `tctl create` command.
  [#1663](https://github.com/gravitational/teleport/issues/1663)

* Fixed a panic in the Web UI backend. [#1558](https://github.com/gravitational/teleport/issues/1558)

#### Behavior Changes

Certain components of Teleport behave differently in version 2.5. It is important to
note that these changes do not break Teleport functionality. They improve
Teleport behavior on large clusters deployed in highly dynamic cloud
environments such as AWS. This includes:

* The session list in the Web UI is now limited to 1,000 sessions.
* The audit log and recorded session storage have been moved from `/var/lib/teleport/log`
  to `/var/lib/teleport/log/`. This is related to [#1351](https://github.com/gravitational/teleport/issues/1351)
  described above.
* When connecting a trusted cluster, users can no longer pick an arbitrary name for it.
  Its own (local) name will be used, i.e. the `cluster_name` setting now defines how
  the cluster is seen from the outside.
  [#1543](https://github.com/gravitational/teleport/issues/1543)

## 2.4.7

This release of Teleport contains a bugfix.

#### Bug Fixes

* Only reset the SIGINT handler if it has not been set to ignore. [#1814](https://github.com/gravitational/teleport/pull/1814)

## 2.4.6

This release of Teleport focuses on bugfixes.

#### Bug Fixes

* Increase the global dial timeout to 30 seconds. [#1760](https://github.com/gravitational/teleport/issues/1760)
* Don't panic on channel failures. [#1808](https://github.com/gravitational/teleport/pull/1808)

## 2.4.5

This release of Teleport fixes a regression in Teleport binaries.

#### Bug Fixes

* Binaries for macOS have been rebuilt to resolve the "certificate signed by unknown authority" issue.

## 2.4.4

This release of Teleport focuses on bugfixes.

#### Bug Fixes

* Resolved `tsh logout` regression. [#1541](https://github.com/gravitational/teleport/issues/1541)
* Binaries for supported platforms are all built with Go 1.9.2.

## 2.4.3

This release of Teleport focuses on bugfixes.

#### Bug Fixes

* Resolved "access denied" regression in Trusted Clusters. [#1733](https://github.com/gravitational/teleport/issues/1733)
* Key was written with the wrong username to `~/.tsh`. [#1749](https://github.com/gravitational/teleport/issues/1749)
* Resolved Trusted Clusters toggling regression. [#1751](https://github.com/gravitational/teleport/issues/1751)

## 2.4.2

This release of Teleport focuses on bugfixes.

#### Bug Fixes

* Wait for copy to complete before propagating exit-status. [#1646](https://github.com/gravitational/teleport/issues/1646)
* Don't discard initial bytes in HTTP CONNECT tunnel. [#1659](https://github.com/gravitational/teleport/issues/1659)
* Pass the caching key generator to services and use the cache in the recording proxy. [#1639](https://github.com/gravitational/teleport/issues/1639)
* Only display "Change Password" in the UI for local users. [#1669](https://github.com/gravitational/teleport/issues/1669)
* Update the signup URL. [#1643](https://github.com/gravitational/teleport/issues/1643)
* Improved Teleport version reporting. [#1538](https://github.com/gravitational/teleport/issues/1538)
* Fixed regressions in terminal size handling and Trusted Clusters introduced in 2.4.1. [#1674](https://github.com/gravitational/teleport/issues/1674) [#1692](https://github.com/gravitational/teleport/issues/1692)

## 2.4.1

This release is focused on fixing a few regressions in Teleport as well as
adding a new feature.

#### New Features

* Exposed the `--compat` flag to Web UI users.
[#1542](https://github.com/gravitational/teleport/issues/1542)

#### Bug Fixes

* Wrap lines correctly on initial login. [#1087](https://github.com/gravitational/teleport/issues/1087)
* Accept port numbers larger than `32767`. [#1576](https://github.com/gravitational/teleport/issues/1576)
* Don't show the `Join` button when using the recording proxy. [#1421](https://github.com/gravitational/teleport/issues/1421)
* Don't double record sessions when using the recording proxy and Teleport nodes. [#1582](https://github.com/gravitational/teleport/issues/1582)
* Fixed regressions in `tsh login` and `tsh logout`. [#1611](https://github.com/gravitational/teleport/issues/1611) [#1541](https://github.com/gravitational/teleport/issues/1541)

## 2.4.0

This release adds two major new features and a few improvements and bugfixes.

#### New Features

* New commercial Teleport editions: "Pro" and "Business" allow users to
  purchase a Teleport subscription without signing contracts.
* Teleport now supports SSH session recording even for nodes running OpenSSH. [#1327](https://github.com/gravitational/teleport/issues/1327)
  This feature is called "recording proxy mode".
* Users of the open source edition of Teleport can now authenticate against GitHub. [#1445](https://github.com/gravitational/teleport/issues/1445)
* The Web UI now supports persistent URLs to Teleport nodes, which can be
  integrated into 3rd-party web apps. [#1511](https://github.com/gravitational/teleport/issues/1511)
* Session recording can now be turned off. [#1430](https://github.com/gravitational/teleport/pull/1430)

#### Deprecated Features

* The Teleport client `tsh` no longer supports being an SSH agent. We recommend
  using built-in SSH agents on macOS and Linux, like `ssh-agent` from the
  `openssh-client` package.

#### Bug Fixes

There have been numerous small usability and performance improvements, but some
notable fixed bugs are listed below:

* Resource (file descriptor) leak. [#1433](https://github.com/gravitational/teleport/issues/1433)
* Correct handling of the terminal type. [#1402](https://github.com/gravitational/teleport/issues/1402)
* Crash on startup. [#1395](https://github.com/gravitational/teleport/issues/1395)

## 2.3.5

This release is focused on fixing a few regressions in configuration and UI/UX.

#### Improvements

* Updated documentation to accurately reflect 2.3 changes.
* The Web UI can use introspection, so users can skip explicitly specifying the SSH port. [#1410](https://github.com/gravitational/teleport/issues/1410)

#### Bug fixes

* Fixed an issue of MFA users getting prematurely locked out. [#1347](https://github.com/gravitational/teleport/issues/1347)
* UI regression: when an invite link is expired, nothing is shown to the user. [#1400](https://github.com/gravitational/teleport/issues/1400)
* OIDC regression with some providers. [#1371](https://github.com/gravitational/teleport/issues/1371)
* Legacy configuration for trusted clusters regression. [#1381](https://github.com/gravitational/teleport/issues/1381)
* Dynamic tokens for adding nodes: "access denied". [#1348](https://github.com/gravitational/teleport/issues/1348)

## 2.3.1

#### Bug fixes

* Added CSRF protection to the login endpoint. [#1356](https://github.com/gravitational/teleport/issues/1356)
* Proxy subsystem handling is more robust. [#1336](https://github.com/gravitational/teleport/issues/1336)

## 2.3

The focus of this release was to improve the Teleport user experience in the following areas:

* Easier configuration via `tctl` resource commands.
* Improved documentation, with an expanded 'examples' directory.
* Improved CLI interface.
* Web UI improvements.

#### Improvements

* Web UI: users can connect to OpenSSH servers using the Web UI.
* The Web UI now supports arbitrary SSH logins, in addition to role-defined ones, for better compatibility with OpenSSH.
* CLI: trusted clusters can now be managed on the fly without having to edit the Teleport configuration. [#1137](https://github.com/gravitational/teleport/issues/1137)
* CLI: `tsh login` supports exporting a user identity into a file to be used later with OpenSSH.
* The `tsh agent` command has been deprecated: users are expected to use the native SSH agents on their platforms.

#### Teleport Enterprise

* More granular RBAC rules. [#1092](https://github.com/gravitational/teleport/issues/1092)
* Role definitions now support templates. [#1120](https://github.com/gravitational/teleport/issues/1120)
* Authentication: Teleport now supports multiple OIDC/SAML endpoints.
* Configuration: local authentication is always enabled as a fallback if the SAML/OIDC endpoints go offline.
* Configuration: SAML/OIDC endpoints can be created on the fly using `tctl`, without having to edit the configuration file or restart Teleport.
* Web UI: it is now easier to turn a trusted cluster on/off. [#1199](https://github.com/gravitational/teleport/issues/1199)

#### Bug Fixes

* Proper handling of `ENV_SUPATH` from `login.defs`. [#1004](https://github.com/gravitational/teleport/pull/1004)
* Reverse tunnels would periodically lose connectivity. [#1156](https://github.com/gravitational/teleport/issues/1156)
* `tsh` now stores user identities in a format compatible with OpenSSH. [#1171](https://github.com/gravitational/teleport/issues/1171)

## 2.2.7

#### Bug fixes

* Updated the YAML parsing library. [#1226](https://github.com/gravitational/teleport/pull/1226)

## 2.2.6

#### Bug fixes

* Fixed an issue with SSH dial potentially hanging indefinitely. [#1153](https://github.com/gravitational/teleport/issues/1153)

## 2.2.5

#### Bug fixes

* Fixed an issue where a node did not have correct permissions.
[#1151](https://github.com/gravitational/teleport/issues/1151)

## 2.2.4

#### Bug fixes

* Fixed an issue with remote tunnel timeouts. [#1140](https://github.com/gravitational/teleport/issues/1140)

## 2.2.3

### Bug fixes

* Fixed an issue with Trusted Clusters where a cluster could lose its signing keys. [#1050](https://github.com/gravitational/teleport/issues/1050)
* Fixed SAML signing certificate export in Enterprise. [#1109](https://github.com/gravitational/teleport/issues/1109)

## 2.2.2

### Bug fixes

* Fixed an issue where in certain situations `tctl ls` would not work. [#1102](https://github.com/gravitational/teleport/issues/1102)

## 2.2.1

### Improvements

* Added `--compat=oldssh` to both `tsh` and `tctl`, which can be used to request certificates in the legacy format (no roles in extensions). [#1083](https://github.com/gravitational/teleport/issues/1083)

### Bugfixes

* Fixed multiple regressions when using SAML with dynamic roles. [#1080](https://github.com/gravitational/teleport/issues/1080)

## 2.2.0

### Features

* HTTP CONNECT tunneling for Trusted Clusters. [#860](https://github.com/gravitational/teleport/issues/860)
* Long-lived certificates and identity export, which can be used for automation. [#1033](https://github.com/gravitational/teleport/issues/1033)
* New terminal for the Web UI. [#933](https://github.com/gravitational/teleport/issues/933)
* Read user environment files. [#1014](https://github.com/gravitational/teleport/issues/1014)
* Improvements to Auth Service resiliency and availability. [#1071](https://github.com/gravitational/teleport/issues/1071)
* Server-side configuration of supported ciphers, key exchange (KEX) algorithms, and MAC algorithms. [#1062](https://github.com/gravitational/teleport/issues/1062)
* Renaming `tsh` to `ssh` or making a symlink `tsh -> ssh` removes the need to type `tsh ssh`, making it compatible with the familiar `ssh user@host`.
[#929](https://github.com/gravitational/teleport/issues/929)

### Enterprise Features

* SAML 2.0. [#1070](https://github.com/gravitational/teleport/issues/1070)
* Role mapping for Trusted Clusters. [#983](https://github.com/gravitational/teleport/issues/983)
* ACR parsing for OIDC identity providers. [#901](https://github.com/gravitational/teleport/issues/901)

### Improvements

* Improvements to OpenSSH interoperability.
  * Certificate export format changes to match OpenSSH. [#1068](https://github.com/gravitational/teleport/issues/1068)
  * CA export format changes to match OpenSSH. [#918](https://github.com/gravitational/teleport/issues/918)
  * Improvements to the `scp` implementation to fix incompatibility issues. [#1048](https://github.com/gravitational/teleport/issues/1048)
  * OpenSSH keep-alive messages are now processed correctly. [#963](https://github.com/gravitational/teleport/issues/963)
* The `tsh` profile is now always read. [#1047](https://github.com/gravitational/teleport/issues/1047)
* Correct signal handling when Teleport is launched using sysvinit. [#981](https://github.com/gravitational/teleport/issues/981)
* Role templates now automatically fill out default values when omitted. [#912](https://github.com/gravitational/teleport/issues/912)

## 2.0.6

### Bugfixes

* Fixed a regression in TLP-01-009.

## 2.0.5

Teleport 2.0.5 contains a variety of security fixes. We strongly encourage anyone running Teleport 2.0.0 and above to upgrade to 2.0.5.

The most pressing issues (a phishing attack which can potentially be used to extract plaintext credentials, and an attack where an already authenticated user can escalate privileges) can be resolved by upgrading the web proxy. However, all nodes need to be upgraded to mitigate all vulnerabilities.

### Bugfixes

* Patch for TLP-01-001 and TLP-01-003: Check redirect.
* Patch for TLP-01-004: Always check if the namespace is valid.
-* Patch for TLP-01-005: Check user principal when joining session. -* Patch for TLP-01-006 and TLP-01-007: Validate Session ID. -* Patch for TLP-01-008: Use a fake hash for password authentication if user does not exist. -* Patch for TLP-01-009: Command injection in scp. - -## 2.0.4 - -### Bugfixes - -* Roles created in the Web UI now have `node` resource. [#949](https://github.com/gravitational/teleport/pull/949) - -## 2.0.3 - -### Bugfixes - -* Execute commands using user's shell. [#943](https://github.com/gravitational/teleport/pull/943) -* Allow users to read their own roles. [#941](https://github.com/gravitational/teleport/pull/941) -* Fix User CA import. [#919](https://github.com/gravitational/teleport/pull/919) -* Role template defaults. [#916](https://github.com/gravitational/teleport/pull/916) -* Skip UserInfo if not provided. [#915](https://github.com/gravitational/teleport/pull/915) - -## 2.0.2 - -### Bugfixes - -* Agent socket had wrong permissions. [#936](https://github.com/gravitational/teleport/pull/936) - -## 2.0.1 - -### Features - -* Introduced Dynamic Roles. [#897](https://github.com/gravitational/teleport/pull/897) - -### Improvements - -* Improved OpenSSH interoperability. [#902](https://github.com/gravitational/teleport/pull/902), [#911](https://github.com/gravitational/teleport/pull/911) -* Enhanced OIDC Functionality. [#882](https://github.com/gravitational/teleport/pull/882) - -### Bugfixes - -* Fixed Regressions. [#874](https://github.com/gravitational/teleport/pull/874), [#876](https://github.com/gravitational/teleport/pull/876), [#883](https://github.com/gravitational/teleport/pull/883), [#892](https://github.com/gravitational/teleport/pull/892), and [#906](https://github.com/gravitational/teleport/pull/906) - -## 2.0 - -This is a major new release of Teleport. - -### Features - -* Native support for DynamoDB back-end for storing cluster state. -* It is now possible to turn off 2nd factor authentication. -* 2nd factor now uses TOTP. 
#522 -* New framework for implementing secret storage plug-ins. -* Audit log format has been finalized and documented. -* Experimental file-based secret storage back-end. -* SSH agent forwarding. - -### Improvements - -* Friendlier CLI error messages. -* `tsh login` is now compatible with SSH agents. - -### Enterprise Features - -* Role-based access control (RBAC) -* Dynamic configuration: ability to manage roles and trusted clusters at runtime. - -Full list of GitHub issues: -https://github.com/gravitational/teleport/milestone/8 - -## 1.3.2 - -v1.3.2 is a maintenance release which fixes a Web UI issue where in some cases -static web assets like custom fonts would not load properly. - -### Bugfixes - -* Issue #687 - broken web assets on some browsers. - -## 1.3.1 - -v1.3.1 is a maintenance release which fixes a few issues found in 1.3. - -### Bugfixes - -* Teleport session recorder can skip characters. -* U2F was enabled by default in "demo mode" if `teleport.yaml` file was missing. - -### Improvements - -* U2F documentation has been improved. - -## 1.3 - -This release includes several major new features and it's recommended for production use. - -### Features - -* Support for hardware U2F keys for 2nd factor authentication. -* CLI client profiles: `tsh` can now remember its `--proxy` setting. -* `tctl auth sign` command to allow administrators to generate user session keys. -* Web UI is now served directly from the executable. There is no more need for web - assets in `/usr/local/share/teleport`. - -### Bugfixes - -* Multiple Auth Service instances in config don't work if the last one is not reachable. #593 -* `tsh scp -r` does not handle directory upload properly. #606 - -## 1.2 - -This is a maintenance release and it's a drop-in replacement for previous versions. - -### Changes - -* Usability bugfixes as can be seen here -* Updated documentation -* Added examples directory with sample configuration and systemd unit file. 
- -## 1.1.0 - -This is a maintenance release meant to be a drop-in upgrade of previous versions. - -### Changes - -* User experience improvements: nicer error messages -* Better compatibility with ssh command: `-t` flag can be used to force allocation of TTY - -## 1.0.5 - -This release was recommended for production with one reservation: time-limited -certificates did not work correctly in this release due to #529. - -* Improvements in performance and usability of the Web UI -* Smaller binary sizes thanks to Golang v1.7 - -### Bugfixes - -* Wrong URL to register new users. #497 -* Logged-in users inherit Teleport supplemental groups. #507 -* Joining a session running on a trusted cluster does not work. #504 - -## 1.0.4 - -This release only includes the addition of the ability to specify a non-standard -HTTPS port for the Teleport proxy via the `tsh --proxy` flag. - -## 1.0.3 - -This release only includes one major bugfix #486 plus minor changes not exposed -to Teleport Community Edition users. - -### Bugfixes - -* Guessing `advertise_ip` chooses IPv6 address space. #486 +#### Legacy ALPN connection upgrade mode has been removed -## 1.0 +Teleport v15.1 added WebSocket upgrade support for Teleport proxies behind layer +7 load balancers and reverse proxies. The legacy ALPN upgrade mode using `alpn` +or `alpn-ping` as upgrade types was left as a fallback until v17. -The first official release of Teleport! +Teleport v18 removes the legacy upgrade mode entirely, including the use of the +`TELEPORT_TLS_ROUTING_CONN_UPGRADE_MODE` environment variable. 
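For deployments that pinned the legacy mode, the removal above mostly amounts to deleting that environment variable from proxy and client environments before upgrading. A minimal pre-upgrade check might look like the sketch below; the variable name and the legacy values `alpn`/`alpn-ping` come from the changelog entry, but the check itself is illustrative and not part of Teleport:

```shell
# Sketch: classify the (removed) legacy ALPN upgrade setting ahead of a v18
# upgrade. "legacy" values are ignored by Teleport v18, per the changelog note.
check_legacy_upgrade_mode() {
  case "${1:-}" in
    alpn|alpn-ping) echo "legacy" ;;  # legacy ALPN modes, removed in v18
    "")             echo "unset" ;;   # nothing configured, nothing to do
    *)              echo "other" ;;   # some other (non-legacy) value
  esac
}

check_legacy_upgrade_mode "${TELEPORT_TLS_ROUTING_CONN_UPGRADE_MODE:-}"
```

If the check prints `legacy`, drop the variable from the environment before rolling out v18, since the fallback path it selected no longer exists.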
diff --git a/Makefile b/Makefile index f2bbf07f43168..b58272a8cd477 100644 --- a/Makefile +++ b/Makefile @@ -13,7 +13,7 @@ # Stable releases: "1.0.0" # Pre-releases: "1.0.0-alpha.1", "1.0.0-beta.2", "1.0.0-rc.3" # Master/dev branch: "1.0.0-dev" -VERSION=18.0.0-dev +VERSION=18.6.1 DOCKER_IMAGE ?= teleport @@ -36,6 +36,9 @@ TELEPORT_DEBUG ?= false GITTAG=v$(VERSION) CGOFLAG ?= CGO_ENABLED=1 KUSTOMIZE_NO_DYNAMIC_PLUGIN ?= kustomize_disable_go_plugin_support +KUBECTL_VERSION := $(shell go list -m -f '{{.Version}}' k8s.io/kubectl | sed 's/v0/v1/') +KUBECTL_SETVERSION := -X k8s.io/component-base/version.gitVersion=$(KUBECTL_VERSION) + # RELEASE_DIR is where the release artifacts (tarballs, packages, etc) are put. It # should be an absolute directory as it is used by e/Makefile too, from the e/ directory. RELEASE_DIR := $(CURDIR)/$(BUILDDIR)/artifacts @@ -46,14 +49,14 @@ GO_LDFLAGS ?= -w -s $(KUBECTL_SETVERSION) # When TELEPORT_DEBUG is true, set flags to produce # debugger-friendly builds. ifeq ("$(TELEPORT_DEBUG)","true") -BUILDFLAGS ?= $(ADDFLAGS) -gcflags=all="-N -l" -BUILDFLAGS_TBOT ?= $(ADDFLAGS) -gcflags=all="-N -l" -BUILDFLAGS_TELEPORT_UPDATE ?= $(ADDFLAGS) -gcflags=all="-N -l" +BUILDFLAGS ?= $(ADDFLAGS) -gcflags=all="-N -l" -buildvcs=false +BUILDFLAGS_TBOT ?= $(ADDFLAGS) -gcflags=all="-N -l" -buildvcs=false +BUILDFLAGS_TELEPORT_UPDATE ?= $(ADDFLAGS) -gcflags=all="-N -l" -buildvcs=false else -BUILDFLAGS ?= $(ADDFLAGS) -ldflags '$(GO_LDFLAGS)' -trimpath -buildmode=pie -BUILDFLAGS_TBOT ?= $(ADDFLAGS) -ldflags '$(GO_LDFLAGS)' -trimpath +BUILDFLAGS ?= $(ADDFLAGS) -ldflags '$(GO_LDFLAGS)' -trimpath -buildmode=pie -buildvcs=false +BUILDFLAGS_TBOT ?= $(ADDFLAGS) -ldflags '$(GO_LDFLAGS)' -trimpath -buildvcs=false # teleport-update builds with disabled cgo, buildmode=pie is not required. 
-BUILDFLAGS_TELEPORT_UPDATE ?= $(ADDFLAGS) -ldflags '$(GO_LDFLAGS)' -trimpath +BUILDFLAGS_TELEPORT_UPDATE ?= $(ADDFLAGS) -ldflags '$(GO_LDFLAGS)' -trimpath -buildvcs=false endif GO_ENV_OS := $(shell go env GOOS) @@ -64,6 +67,8 @@ ARCH ?= $(GO_ENV_ARCH) FIPS ?= RELEASE = teleport-$(GITTAG)-$(OS)-$(ARCH)-bin +RELEASE_TOOLS = teleport-tools-$(GITTAG)-$(OS)-$(ARCH)-bin +RELEASE_UPDATE = teleport-update-$(GITTAG)-$(OS)-$(ARCH)-bin # If we're building inside the cross-compiling buildbox, include the # cross compilation definitions so we select the correct compilers and @@ -129,6 +134,7 @@ CARGO_TARGET_linux_386 := i686-unknown-linux-gnu CARGO_TARGET_linux_amd64 := x86_64-unknown-linux-gnu CARGO_TARGET := --target=$(RUST_TARGET_ARCH) +CARGO_WASM_TARGET := wasm32-unknown-unknown # If set to 1, Windows RDP client is not built. RDPCLIENT_SKIP_BUILD ?= 0 @@ -248,6 +254,7 @@ BINS_darwin = teleport tctl tsh tbot fdpass-teleport BINS_windows = tsh tctl BINS = $(or $(BINS_$(OS)),$(BINS_default)) BINARIES = $(addprefix $(BUILDDIR)/,$(BINS)) +UPDATE_BINARIES = $(addprefix $(BUILDDIR)/,teleport-update) # Joins elements of the list in arg 2 with the given separator. # 1. Element separator. @@ -314,10 +321,10 @@ ifneq ("$(ARCH)","amd64") $(error "Building for windows requires ARCH=amd64") endif CGOFLAG = CGO_ENABLED=1 CC=x86_64-w64-mingw32-gcc CXX=x86_64-w64-mingw32-g++ -BUILDFLAGS = $(ADDFLAGS) -ldflags '-w -s $(KUBECTL_SETVERSION)' -trimpath -buildmode=pie -BUILDFLAGS_TBOT = $(ADDFLAGS) -ldflags '-w -s $(KUBECTL_SETVERSION)' -trimpath +BUILDFLAGS = $(ADDFLAGS) -ldflags '-w -s $(KUBECTL_SETVERSION)' -trimpath -buildmode=pie -buildvcs=false +BUILDFLAGS_TBOT = $(ADDFLAGS) -ldflags '-w -s $(KUBECTL_SETVERSION)' -trimpath -buildvcs=false # teleport-update builds with disabled cgo, buildmode=pie is not required. 
-BUILDFLAGS_TELEPORT_UPDATE = $(ADDFLAGS) -ldflags '-w -s $(KUBECTL_SETVERSION)' -trimpath +BUILDFLAGS_TELEPORT_UPDATE = $(ADDFLAGS) -ldflags '-w -s $(KUBECTL_SETVERSION)' -trimpath -buildvcs=false endif ifeq ("$(OS)","darwin") @@ -362,7 +369,7 @@ ifeq ("$(GITHUB_REPOSITORY_OWNER)","gravitational") # TELEPORT_LDFLAGS and TOOLS_LDFLAGS if appended will overwrite the previous LDFLAGS set in the BUILDFLAGS. # This is done here to prevent any changes to the (BUI)LDFLAGS passed to the other binaries TELEPORT_LDFLAGS ?= -ldflags '$(GO_LDFLAGS) -X github.com/gravitational/teleport/lib/modules.teleportBuildType=community' -TOOLS_LDFLAGS ?= -ldflags '$(GO_LDFLAGS) -X github.com/gravitational/teleport/lib/modules.teleportBuildType=community' +TOOLS_LDFLAGS ?= -ldflags '$(GO_LDFLAGS) $(KUBECTL_SETVERSION) -X github.com/gravitational/teleport/lib/modules.teleportBuildType=community' endif # By making these 3 targets below (tsh, tctl and teleport) PHONY we are solving @@ -387,8 +394,6 @@ $(BUILDDIR)/teleport: ensure-webassets bpf-bytecode rdpclient # NOTE: Any changes to the `tsh` build here must be copied to `build.assets/windows/build.ps1` # until we can use this Makefile for native Windows builds. .PHONY: $(BUILDDIR)/tsh -$(BUILDDIR)/tsh: KUBECTL_VERSION ?= $(shell go run ./build.assets/kubectl-version/main.go) -$(BUILDDIR)/tsh: KUBECTL_SETVERSION ?= -X k8s.io/component-base/version.gitVersion=$(KUBECTL_VERSION) $(BUILDDIR)/tsh: @if [[ "$(OS)" != "windows" && -z "$(LIBFIDO2_BUILD_TAG)" ]]; then \ echo 'Warning: Building tsh without libfido2. Install libfido2 to have access to MFA.' 
>&2; \ @@ -484,6 +489,26 @@ ifeq ("$(with_rdpclient)", "yes") cargo build -p rdp-client $(if $(FIPS),--features=fips) --release --locked $(CARGO_TARGET) endif +define ironrdp_package_json +{ + "name": "ironrdp", + "version": "0.1.0", + "module": "ironrdp.js", + "types": "ironrdp.d.ts", + "files": ["ironrdp_bg.wasm","ironrdp.js","ironrdp.d.ts"], + "sideEffects": ["./snippets/*"] +} +endef +export ironrdp_package_json + +.PHONY: build-ironrdp-wasm +build-ironrdp-wasm: ironrdp = web/packages/shared/libs/ironrdp +build-ironrdp-wasm: ensure-wasm-deps + cargo build --package ironrdp --lib --target $(CARGO_WASM_TARGET) --release + wasm-opt target/$(CARGO_WASM_TARGET)/release/ironrdp.wasm -o target/$(CARGO_WASM_TARGET)/release/ironrdp.wasm -O + wasm-bindgen target/$(CARGO_WASM_TARGET)/release/ironrdp.wasm --out-dir $(ironrdp)/pkg --typescript --target web + printenv ironrdp_package_json > $(ironrdp)/pkg/package.json + # Build libfido2 and dependencies for MacOS. Uses exported C_ARCH variable defined earlier. .PHONY: build-fido2 build-fido2: @@ -535,7 +560,7 @@ endif rm -f *.zip rm -f gitref.go rm -rf build.assets/tooling/bin - # Clean up wasm-pack build artifacts + # Clean up wasm build artifacts rm -rf web/packages/shared/libs/ironrdp/pkg/ .PHONY: clean-ui @@ -620,18 +645,37 @@ build-archive: | $(RELEASE_DIR) rm -rf teleport @echo "---> Created $(RELEASE).tar.gz." +.PHONY: build-update-archive +build-update-archive: | $(RELEASE_DIR) + @echo "---> Creating OSS update release archive." + mkdir -p teleport + cp -rf $(UPDATE_BINARIES) \ + CHANGELOG.md \ + build.assets/LICENSE-community \ + teleport/ + echo $(GITTAG) > teleport/VERSION + tar $(TAR_FLAGS) -c teleport | gzip -n > $(RELEASE_UPDATE).tar.gz + cp $(RELEASE_UPDATE).tar.gz $(RELEASE_DIR) + # linux-amd64 generates a centos7-compatible archive. Make a copy with the -centos7 label, + # for the releases page. We should probably drop that at some point. 
+ $(if $(filter linux-amd64,$(OS)-$(ARCH)), \ + cp $(RELEASE_UPDATE).tar.gz $(RELEASE_DIR)/$(subst amd64,amd64-centos7,$(RELEASE_UPDATE)).tar.gz \ + ) + rm -rf teleport + @echo "---> Created $(RELEASE_UPDATE).tar.gz." + # # make release-unix - Produces binary release tarballs for both OSS and # Enterprise editions, containing teleport, tctl, tbot and tsh. # .PHONY: release-unix -release-unix: clean full build-archive +release-unix: clean full build-archive build-update-archive @if [ -f e/Makefile ]; then $(MAKE) -C e release; fi # release-unix-preserving-webassets cleans just the build and not the UI # allowing webassets to be built in a prior step before building the release. .PHONY: release-unix-preserving-webassets -release-unix-preserving-webassets: clean-build full build-archive +release-unix-preserving-webassets: clean-build full build-archive build-update-archive @if [ -f e/Makefile ]; then $(MAKE) -C e release; fi include darwin-signing.mk @@ -774,13 +818,16 @@ release-connect: | $(RELEASE_DIR) pnpm build-term pnpm package-term -c.extraMetadata.version=$(VERSION) --$(ELECTRON_BUILDER_ARCH) # Only copy proper builds with tsh.app to $(RELEASE_DIR) - # Drop -universal "arch" from dmg name when copying to $(RELEASE_DIR) + # Drop -universal "arch" from dmg and zip name when copying to $(RELEASE_DIR) if [ -n "$$CONNECT_TSH_APP_PATH" ]; then \ - TARGET_NAME="Teleport Connect-$(VERSION)-$(ARCH).dmg"; \ + DMG_TARGET_NAME="Teleport Connect-$(VERSION)-$(ARCH).dmg"; \ + ZIP_TARGET_NAME="Teleport Connect-$(VERSION)-$(ARCH)-mac.zip"; \ if [ "$(ARCH)" = 'universal' ]; then \ - TARGET_NAME="$${TARGET_NAME/-universal/}"; \ + DMG_TARGET_NAME="$${DMG_TARGET_NAME/-universal/}"; \ + ZIP_TARGET_NAME="$${ZIP_TARGET_NAME/-universal/}"; \ fi; \ - cp web/packages/teleterm/build/release/"Teleport Connect-$(VERSION)-$(ELECTRON_BUILDER_ARCH).dmg" "$(RELEASE_DIR)/$${TARGET_NAME}"; \ + cp web/packages/teleterm/build/release/"Teleport Connect-$(VERSION)-$(ELECTRON_BUILDER_ARCH).dmg" 
"$(RELEASE_DIR)/$${DMG_TARGET_NAME}"; \ + cp web/packages/teleterm/build/release/"Teleport Connect-$(VERSION)-$(ELECTRON_BUILDER_ARCH)-mac.zip" "$(RELEASE_DIR)/$${ZIP_TARGET_NAME}"; \ fi # @@ -830,6 +877,17 @@ ensure-gotestsum: go install gotest.tools/gotestsum@latest endif +# +# Install goda to lint testing symbols +# +.PHONY: ensure-goda +ensure-goda: +# Install goda if it's not already installed + ifeq (, $(shell command -v goda)) + go install github.com/loov/goda@latest +endif + + DIFF_TEST := $(TOOLINGDIR)/bin/difftest $(DIFF_TEST): $(wildcard $(TOOLINGDIR)/cmd/difftest/*.go) cd $(TOOLINGDIR) && go build -o "$@" ./cmd/difftest @@ -867,6 +925,7 @@ helmunit/installed: test-helm: helmunit/installed helm unittest -3 --with-subchart=false examples/chart/teleport-cluster helm unittest -3 --with-subchart=false examples/chart/teleport-kube-agent + helm unittest -3 --with-subchart=false examples/chart/teleport-relay helm unittest -3 --with-subchart=false examples/chart/teleport-cluster/charts/teleport-operator helm unittest -3 --with-subchart=false examples/chart/access/* helm unittest -3 --with-subchart=false examples/chart/event-handler @@ -876,6 +935,7 @@ test-helm: helmunit/installed test-helm-update-snapshots: helmunit/installed helm unittest -3 -u --with-subchart=false examples/chart/teleport-cluster helm unittest -3 -u --with-subchart=false examples/chart/teleport-kube-agent + helm unittest -3 -u --with-subchart=false examples/chart/teleport-relay helm unittest -3 -u --with-subchart=false examples/chart/teleport-cluster/charts/teleport-operator helm unittest -3 -u --with-subchart=false examples/chart/access/* helm unittest -3 -u --with-subchart=false examples/chart/event-handler @@ -916,20 +976,19 @@ test-go-prepare: ensure-webassets bpf-bytecode $(TEST_LOG_DIR) ensure-gotestsum # Runs base unit tests .PHONY: test-go-unit +test-go-unit: rdpclient test-go-unit: FLAGS ?= -race -shuffle on test-go-unit: SUBJECT ?= $(shell go list ./... 
| grep -vE 'teleport/(e2e|integration|tool/tsh|integrations/operator|integrations/access|integrations/lib)') test-go-unit: - $(CGOFLAG) GOEXPERIMENT=synctest go test -cover -json -tags "enablesynctest $(PAM_TAG) $(FIPS_TAG) $(BPF_TAG) $(LIBFIDO2_TEST_TAG) $(TOUCHID_TAG) $(PIV_TEST_TAG) $(VNETDAEMON_TAG)" $(PACKAGES) $(SUBJECT) $(FLAGS) $(ADDFLAGS) \ - | tee $(TEST_LOG_DIR)/unit.json \ - | gotestsum --raw-command -- cat + $(CGOFLAG) GOEXPERIMENT=synctest go test -json -tags "enablesynctest $(PAM_TAG) $(RDPCLIENT_TAG) $(FIPS_TAG) $(BPF_TAG) $(LIBFIDO2_TEST_TAG) $(TOUCHID_TAG) $(PIV_TEST_TAG) $(VNETDAEMON_TAG) $(ADDTAGS)" $(PACKAGES) $(SUBJECT) $(FLAGS) $(ADDFLAGS) \ + | gotestsum --junitfile $(TEST_LOG_DIR)/unit-tests.xml --jsonfile $(TEST_LOG_DIR)/unit-tests.json --raw-command -- cat # Runs tbot unit tests .PHONY: test-go-unit-tbot test-go-unit-tbot: FLAGS ?= -race -shuffle on test-go-unit-tbot: - $(CGOFLAG) go test -cover -json $(FLAGS) $(ADDFLAGS) ./tool/tbot/... ./lib/tbot/... \ - | tee $(TEST_LOG_DIR)/unit.json \ - | gotestsum --raw-command -- cat + $(CGOFLAG) go test -json $(FLAGS) $(ADDFLAGS) ./tool/tbot/... ./lib/tbot/... \ + | gotestsum --junitfile $(TEST_LOG_DIR)/unit-tests-tbot.xml --jsonfile $(TEST_LOG_DIR)/unit-tests-tbot.json --raw-command -- cat # Make sure untagged touchid code build/tests. .PHONY: test-go-touch-id @@ -937,9 +996,8 @@ test-go-touch-id: FLAGS ?= -race -shuffle on test-go-touch-id: SUBJECT ?= ./lib/auth/touchid/... test-go-touch-id: ifneq ("$(TOUCHID_TAG)", "") - $(CGOFLAG) go test -cover -json $(PACKAGES) $(SUBJECT) $(FLAGS) $(ADDFLAGS) \ - | tee $(TEST_LOG_DIR)/unit.json \ - | gotestsum --raw-command -- cat + $(CGOFLAG) go test -json $(PACKAGES) $(SUBJECT) $(FLAGS) $(ADDFLAGS) \ + | gotestsum --junitfile $(TEST_LOG_DIR)/unit-tests-touchid.xml --jsonfile $(TEST_LOG_DIR)/unit-tests-touchid.json --raw-command -- cat endif # Runs benchmarks once to make sure they pass. 
@@ -967,9 +1025,8 @@ test-go-vnet-daemon: FLAGS ?= -race -shuffle on test-go-vnet-daemon: SUBJECT ?= ./lib/vnet/daemon/... test-go-vnet-daemon: ifneq ("$(VNETDAEMON_TAG)", "") - $(CGOFLAG) go test -cover -json $(PACKAGES) $(SUBJECT) $(FLAGS) $(ADDFLAGS) \ - | tee $(TEST_LOG_DIR)/unit.json \ - | gotestsum --raw-command -- cat + $(CGOFLAG) go test -json $(PACKAGES) $(SUBJECT) $(FLAGS) $(ADDFLAGS) \ + | gotestsum --junitfile $(TEST_LOG_DIR)/unit-tests-vnet.xml --jsonfile $(TEST_LOG_DIR)/unit-tests-vnet.json --raw-command -- cat endif # Runs ci tsh tests @@ -977,17 +1034,15 @@ endif test-go-tsh: FLAGS ?= -race -shuffle on test-go-tsh: SUBJECT ?= github.com/gravitational/teleport/tool/tsh/... test-go-tsh: - $(CGOFLAG_TSH) go test -cover -json -tags "$(PAM_TAG) $(FIPS_TAG) $(LIBFIDO2_TEST_TAG) $(TOUCHID_TAG) $(PIV_TEST_TAG) $(VNETDAEMON_TAG)" $(PACKAGES) $(SUBJECT) $(FLAGS) $(ADDFLAGS) \ - | tee $(TEST_LOG_DIR)/unit.json \ - | gotestsum --raw-command -- cat + $(CGOFLAG_TSH) go test -json -tags "$(PAM_TAG) $(FIPS_TAG) $(LIBFIDO2_TEST_TAG) $(TOUCHID_TAG) $(PIV_TEST_TAG) $(VNETDAEMON_TAG)" $(PACKAGES) $(SUBJECT) $(FLAGS) $(ADDFLAGS) \ + | gotestsum --junitfile $(TEST_LOG_DIR)/unit-tests-tsh.xml --jsonfile $(TEST_LOG_DIR)/unit-tests-tsh.json --raw-command -- cat # Chaos tests have high concurrency, run without race detector and have TestChaos prefix. .PHONY: test-go-chaos test-go-chaos: CHAOS_FOLDERS = $(shell find . 
-type f -name '*chaos*.go' | xargs dirname | uniq) test-go-chaos: - $(CGOFLAG) go test -cover -json -tags "$(PAM_TAG) $(FIPS_TAG) $(BPF_TAG)" -test.run=TestChaos $(CHAOS_FOLDERS) \ - | tee $(TEST_LOG_DIR)/chaos.json \ - | gotestsum --raw-command -- cat + $(CGOFLAG) go test -json -tags "$(PAM_TAG) $(FIPS_TAG) $(BPF_TAG)" -test.run=TestChaos $(CHAOS_FOLDERS) \ + | gotestsum --junitfile $(TEST_LOG_DIR)/unit-tests-chaos.xml --jsonfile $(TEST_LOG_DIR)/unit-tests-chaos.json --raw-command -- cat # # Runs all Go tests except integration, end-to-end, and chaos, called by CI/CD. @@ -999,8 +1054,7 @@ test-go-root: FLAGS ?= -race -shuffle on test-go-root: PACKAGES = $(shell go list $(ADDFLAGS) ./... | grep -v -e e2e -e integration -e integrations/operator) test-go-root: $(VERSRC) $(CGOFLAG) go test -json -run "$(UNIT_ROOT_REGEX)" -tags "$(PAM_TAG) $(FIPS_TAG) $(BPF_TAG)" $(PACKAGES) $(FLAGS) $(ADDFLAGS) \ - | tee $(TEST_LOG_DIR)/unit-root.json \ - | gotestsum --raw-command -- cat + | gotestsum --junitfile $(TEST_LOG_DIR)/unit-tests-root.xml --jsonfile $(TEST_LOG_DIR)/unit-tests-root.json --raw-command -- cat # # Runs Go tests on the api module. These have to be run separately as the package name is different. @@ -1011,8 +1065,7 @@ test-api: FLAGS ?= -race -shuffle on test-api: SUBJECT ?= $(shell cd api && go list ./...) test-api: cd api && $(CGOFLAG) go test -json -tags "$(PAM_TAG) $(FIPS_TAG) $(BPF_TAG)" $(PACKAGES) $(SUBJECT) $(FLAGS) $(ADDFLAGS) \ - | tee $(TEST_LOG_DIR)/api.json \ - | gotestsum --raw-command -- cat + | gotestsum --junitfile $(TEST_LOG_DIR)/unit-tests-api.xml --jsonfile $(TEST_LOG_DIR)/unit-tests-api.json --raw-command -- cat # # Runs Teleport Operator tests. @@ -1028,6 +1081,12 @@ test-operator: test-terraform-provider: make -C integrations test-terraform-provider # +# Runs Teleport MWI Terraform provider tests. 
+# +.PHONY: test-terraform-provider-mwi +test-terraform-provider-mwi: + make -C integrations test-terraform-provider-mwi +# # Runs Go tests on the integrations/kube-agent-updater module. These have to be run separately as the package name is different. # .PHONY: test-kube-agent-updater @@ -1036,8 +1095,7 @@ test-kube-agent-updater: FLAGS ?= -race -shuffle on test-kube-agent-updater: SUBJECT ?= $(shell cd integrations/kube-agent-updater && go list ./...) test-kube-agent-updater: cd integrations/kube-agent-updater && $(CGOFLAG) go test -json -tags "$(PAM_TAG) $(FIPS_TAG) $(BPF_TAG)" $(PACKAGES) $(SUBJECT) $(FLAGS) $(ADDFLAGS) \ - | tee $(TEST_LOG_DIR)/kube-agent-updater.json \ - | gotestsum --raw-command --format=testname -- cat + | gotestsum --junitfile $(TEST_LOG_DIR)/unit-tests-kube-agent-updater.xml --jsonfile $(TEST_LOG_DIR)/unit-tests-kube-agent-updater.json --raw-command -- cat .PHONY: test-access-integrations test-access-integrations: @@ -1060,8 +1118,7 @@ test-teleport-usage: FLAGS ?= -race -shuffle on test-teleport-usage: SUBJECT ?= $(shell cd examples/teleport-usage && go list ./...) test-teleport-usage: cd examples/teleport-usage && $(CGOFLAG) go test -json -tags "$(PAM_TAG) $(FIPS_TAG) $(BPF_TAG)" $(PACKAGES) $(SUBJECT) $(FLAGS) $(ADDFLAGS) \ - | tee $(TEST_LOG_DIR)/teleport-usage.json \ - | gotestsum --raw-command -- cat + | gotestsum --junitfile $(TEST_LOG_DIR)/unit-tests-teleport-usage.xml --jsonfile $(TEST_LOG_DIR)/unit-tests-teleport-usage.json --raw-command -- cat # # Flaky test detection. Usually run from CI nightly, overriding these default parameters @@ -1073,9 +1130,10 @@ FLAKY_RUNS ?= 3 FLAKY_TIMEOUT ?= 1h FLAKY_TOP_N ?= 20 FLAKY_SUMMARY_FILE ?= /tmp/flaky-report.txt +test-go-flaky: rdpclient test-go-flaky: FLAGS ?= -race -shuffle on test-go-flaky: SUBJECT ?= $(shell go list ./... 
| grep -v -e e2e -e integration -e tool/tsh -e integrations/operator -e integrations/access -e integrations/lib ) -test-go-flaky: GO_BUILD_TAGS ?= $(PAM_TAG) $(FIPS_TAG) $(BPF_TAG) $(TOUCHID_TAG) $(PIV_TEST_TAG) $(LIBFIDO2_TEST_TAG) $(VNETDAEMON_TAG) +test-go-flaky: GO_BUILD_TAGS ?= $(PAM_TAG) $(FIPS_TAG) $(RDPCLIENT_TAG) $(BPF_TAG) $(TOUCHID_TAG) $(PIV_TEST_TAG) $(LIBFIDO2_TEST_TAG) $(VNETDAEMON_TAG) test-go-flaky: RENDER_FLAGS ?= -report-by flakiness -summary-file $(FLAKY_SUMMARY_FILE) -top $(FLAKY_TOP_N) test-go-flaky: test-go-prepare $(RENDER_TESTS) $(RERUN) $(CGOFLAG) $(RERUN) -n $(FLAKY_RUNS) -t $(FLAKY_TIMEOUT) \ @@ -1122,8 +1180,7 @@ integration: PACKAGES = $(shell go list ./... | grep 'integration\([^s]\|$$\)' | integration: $(TEST_LOG_DIR) ensure-gotestsum @echo KUBECONFIG is: $(KUBECONFIG), TEST_KUBE: $(TEST_KUBE) $(CGOFLAG) go test -timeout 30m -json -tags "$(PAM_TAG) $(FIPS_TAG) $(BPF_TAG)" $(PACKAGES) $(FLAGS) \ - | tee $(TEST_LOG_DIR)/integration.json \ - | gotestsum --raw-command --format=testname -- cat + | gotestsum --junitfile $(TEST_LOG_DIR)/unit-tests-integration.xml --jsonfile $(TEST_LOG_DIR)/unit-tests-integration.json --raw-command -- cat # # Integration tests that run Kubernetes tests in order to complete successfully @@ -1132,12 +1189,11 @@ integration: $(TEST_LOG_DIR) ensure-gotestsum INTEGRATION_KUBE_REGEX := TestKube.* .PHONY: integration-kube integration-kube: FLAGS ?= -v -race -integration-kube: PACKAGES = $(shell go list ./... | grep 'integration\([^s]\|$$\)') +integration-kube: PACKAGES = $(shell go list ./... 
| grep 'integration\([^s]\|$$\)' | grep -v 'integration/autoupdate') integration-kube: $(TEST_LOG_DIR) ensure-gotestsum @echo KUBECONFIG is: $(KUBECONFIG), TEST_KUBE: $(TEST_KUBE) $(CGOFLAG) go test -json -run "$(INTEGRATION_KUBE_REGEX)" $(PACKAGES) $(FLAGS) \ - | tee $(TEST_LOG_DIR)/integration-kube.json \ - | gotestsum --raw-command --format=testname -- cat + | gotestsum --junitfile $(TEST_LOG_DIR)/unit-tests-integration-kube.xml --jsonfile $(TEST_LOG_DIR)/unit-tests-integration-kube.json --raw-command -- cat # # Integration tests which need to be run as root in order to complete successfully @@ -1149,8 +1205,7 @@ integration-root: FLAGS ?= -v -race integration-root: PACKAGES = $(shell go list ./... | grep 'integration\([^s]\|$$\)') integration-root: $(TEST_LOG_DIR) ensure-gotestsum $(CGOFLAG) go test -json -run "$(INTEGRATION_ROOT_REGEX)" $(PACKAGES) $(FLAGS) \ - | tee $(TEST_LOG_DIR)/integration-root.json \ - | gotestsum --raw-command --format=testname -- cat + | gotestsum --junitfile $(TEST_LOG_DIR)/unit-tests-integration-root.xml --jsonfile $(TEST_LOG_DIR)/unit-tests-integration-root.json --raw-command -- cat .PHONY: e2e-aws @@ -1159,8 +1214,7 @@ e2e-aws: PACKAGES = $(shell go list ./... | grep 'e2e/aws') e2e-aws: $(TEST_LOG_DIR) ensure-gotestsum @echo TEST_KUBE: $(TEST_KUBE) TEST_AWS_DB: $(TEST_AWS_DB) $(CGOFLAG) go test -json $(PACKAGES) $(FLAGS) $(ADDFLAGS)\ - | tee $(TEST_LOG_DIR)/e2e-aws.json \ - | gotestsum --raw-command --format=testname -- cat + | gotestsum --junitfile $(TEST_LOG_DIR)/unit-tests-e2e-aws.xml --jsonfile $(TEST_LOG_DIR)/unit-tests-e2e-aws.json --raw-command -- cat # # Lint the source code. @@ -1179,6 +1233,27 @@ lint-no-actions: lint-sh lint-license .PHONY: lint-tools lint-tools: lint-build-tooling lint-backport + +# +# Checks that testing symbols and the testify library are not included in binaries. 
+# +# +.PHONY: lint-test-symbols +lint-test-symbols: ensure-goda + @testing_count=`goda tree "reach(github.com/gravitational/teleport/tool/...:all, testing)" | tee /dev/stderr | wc -l | tr -d ' '`; \ + if [ "$$testing_count" -gt 0 ]; then \ + echo ""; \ + echo "FAIL: \"testing\" is included in binaries"; \ + fi; \ + testify_count=`goda tree "reach(github.com/gravitational/teleport/tool/...:all, github.com/stretchr/testify/...)" | tee /dev/stderr | wc -l | tr -d ' '`; \ + if [ "$$testify_count" -gt 0 ]; then \ + echo ""; \ + echo "FAIL: \"github.com/stretchr/testify\" is included in binaries"; \ + fi; \ + if [ "$$testing_count" -gt 0 ] || [ "$$testify_count" -gt 0 ]; then \ + exit 1; \ + fi + # # Runs the clippy linter and rustfmt on our rust modules # (a no-op if cargo and rustc are not installed) @@ -1269,7 +1344,7 @@ lint-helm: if [ "$${CI}" = "true" ]; then echo "This is a failure when running in CI." && exit 1; fi; \ exit 0; \ fi; \ - for CHART in ./examples/chart/teleport-cluster ./examples/chart/teleport-kube-agent ./examples/chart/teleport-cluster/charts/teleport-operator ./examples/chart/tbot; do \ + for CHART in ./examples/chart/teleport-cluster ./examples/chart/teleport-kube-agent ./examples/chart/teleport-relay ./examples/chart/teleport-cluster/charts/teleport-operator ./examples/chart/tbot; do \ if [ -d $${CHART}/.lint ]; then \ for VALUES in $${CHART}/.lint/*.yaml; do \ export HELM_TEMP=$$(mktemp); \ @@ -1278,6 +1353,7 @@ lint-helm: helm lint --quiet --strict $${CHART} -f $${VALUES} || exit 1; \ helm template test $${CHART} -f $${VALUES} 1>$${HELM_TEMP} || exit 1; \ yamllint -c examples/chart/.lint-config.yaml $${HELM_TEMP} || { cat -en $${HELM_TEMP}; exit 1; }; \ + echo; \ done \ else \ export HELM_TEMP=$$(mktemp); \ @@ -1505,27 +1581,38 @@ enter/arm: BUF := buf +# +# Install buf to lint, format and generate code from protobuf files. 
+# +.PHONY: ensure-buf +ensure-buf: NEED_VERSION = $(shell $(MAKE) --no-print-directory -s -C build.assets print-buf-version) +ensure-buf: +# Install buf if it's not already installed. +ifeq (, $(shell command -v $(BUF))) + go install github.com/bufbuild/buf/cmd/buf@$(NEED_VERSION) +endif + # protos/all runs build, lint and format on all protos. # Use `make grpc` to regenerate protos inside buildbox. .PHONY: protos/all protos/all: protos/build protos/lint protos/format .PHONY: protos/build -protos/build: buf/installed +protos/build: ensure-buf $(BUF) build .PHONY: protos/format -protos/format: buf/installed +protos/format: ensure-buf $(BUF) format -w .PHONY: protos/lint -protos/lint: buf/installed +protos/lint: ensure-buf $(BUF) lint $(BUF) lint --config=buf-legacy.yaml api/proto .PHONY: protos/breaking protos/breaking: BASE=origin/master -protos/breaking: buf/installed +protos/breaking: ensure-buf @echo Checking compatibility against BASE=$(BASE) buf breaking . --against '.git#branch=$(BASE)' @@ -1535,13 +1622,6 @@ lint-protos: protos/lint .PHONY: lint-breaking lint-breaking: protos/breaking -.PHONY: buf/installed -buf/installed: - @if ! type -p $(BUF) >/dev/null; then \ - echo 'Buf is required to build/format/lint protos. Follow https://docs.buf.build/installation.'; \ - exit 1; \ - fi - GODERIVE := $(TOOLINGDIR)/bin/goderive # derive will generate derived functions for our API. # we need to build goderive first otherwise it will not be able to resolve dependencies @@ -1549,7 +1629,7 @@ GODERIVE := $(TOOLINGDIR)/bin/goderive .PHONY: derive derive: cd $(TOOLINGDIR) && go build -o $(GODERIVE) ./cmd/goderive/main.go - $(GODERIVE) ./api/types ./api/types/discoveryconfig + $(GODERIVE) ./api/types ./api/types/discoveryconfig ./api/types/accesslist ./api/types/userloginstate # derive-up-to-date checks if the generated derived functions are up to date. 
.PHONY: derive-up-to-date @@ -1625,6 +1705,15 @@ terraform-resources-up-to-date: must-start-clean/host exit 1; \ fi +# icons-up-to-date checks if icons were pre-processed before being added to the repo. +.PHONY: icons-up-to-date +icons-up-to-date: must-start-clean/host + pnpm process-icons + @if ! git diff --quiet; then \ + ./build.assets/please-run.sh "icons (see web/packages/design/src/Icon/README.md)" "pnpm process-icons"; \ + exit 1; \ + fi + # go-generate will execute `go generate` and generate go code. .PHONY: go-generate go-generate: @@ -1687,6 +1776,8 @@ endif .PHONY: pkg pkg: TELEPORT_PKG_UNSIGNED = $(BUILDDIR)/teleport-$(VERSION).unsigned.pkg pkg: TELEPORT_PKG_SIGNED = $(RELEASE_DIR)/teleport-$(VERSION).pkg +pkg: TELEPORT_TOOLS_PKG_UNSIGNED = $(BUILDDIR)/teleport-tools-$(VERSION).unsigned.pkg +pkg: TELEPORT_TOOLS_PKG_SIGNED = $(RELEASE_DIR)/teleport-tools-$(VERSION).pkg pkg: | $(RELEASE_DIR) mkdir -p $(BUILDDIR)/ @@ -1709,6 +1800,10 @@ pkg: | $(RELEASE_DIR) productbuild --package $(BUILDDIR)/tsh*.pkg --package $(BUILDDIR)/tctl*.pkg --package $(BUILDDIR)/teleport-bin*.pkg $(TELEPORT_PKG_UNSIGNED) $(NOTARIZE_TELEPORT_PKG) + @echo Combining tsh-$(VERSION).pkg and tctl-$(VERSION).pkg into teleport-tools-$(VERSION).pkg + productbuild --package $(BUILDDIR)/tsh*.pkg --package $(BUILDDIR)/tctl*.pkg $(TELEPORT_TOOLS_PKG_UNSIGNED) + $(NOTARIZE_TELEPORT_TOOLS_PKG) + if [ -f e/Makefile ]; then $(MAKE) -C e pkg; fi # build .rpm @@ -1795,29 +1890,18 @@ ensure-js-deps: ifeq ($(WEBASSETS_SKIP_BUILD),1) ensure-wasm-deps: else -ensure-wasm-deps: ensure-wasm-pack ensure-wasm-bindgen - -# Get the version of wasm-bindgen from cargo, as that is what wasm-pack is -# going to do when it checks for the right version. The buildboxes do not -# have jq installed (yet), so have a hacky awk version on standby. -CARGO_GET_VERSION_JQ = cargo metadata --locked --format-version=1 | jq -r 'first(.packages[] | select(.name? 
== "$(1)") | .version)' -CARGO_GET_VERSION_AWK = awk -F '[ ="]+' '/^name = "$(1)"$$/ {inpkg = 1} inpkg && $$1 == "version" {print $$2; exit}' Cargo.lock +ensure-wasm-deps: ensure-wasm-bindgen ensure-wasm-opt rustup-install-wasm-toolchain -BIN_JQ = $(shell which jq 2>/dev/null) -CARGO_GET_VERSION = $(if $(BIN_JQ),$(CARGO_GET_VERSION_JQ),$(CARGO_GET_VERSION_AWK)) +WASM_BINDGEN_VERSION = $(shell awk ' \ + $$1 == "name" && $$3 == "\"wasm-bindgen\"" { in_pkg=1; next } \ + in_pkg && $$1 == "version" { gsub(/"/, "", $$3); print $$3; exit } \ +' Cargo.lock) -ensure-wasm-pack: NEED_VERSION = $(shell $(MAKE) --no-print-directory -s -C build.assets print-wasm-pack-version) -ensure-wasm-pack: INSTALLED_VERSION = $(word 2,$(shell wasm-pack --version 2>/dev/null)) -ensure-wasm-pack: - $(if $(filter-out $(INSTALLED_VERSION),$(NEED_VERSION)),\ - cargo install wasm-pack --force --locked --version "$(NEED_VERSION)", \ - @echo wasm-pack up-to-date: $(INSTALLED_VERSION) \ - ) +.PHONY: print-wasm-bindgen-version +print-wasm-bindgen-version: + @echo $(WASM_BINDGEN_VERSION) -# TODO: Use CARGO_GET_VERSION_AWK instead of hardcoded version -# On 386 Arch, calling the variable produces a malformed command that fails the build. 
-#ensure-wasm-bindgen: NEED_VERSION = $(shell $(call CARGO_GET_VERSION,wasm-bindgen)) -ensure-wasm-bindgen: NEED_VERSION = 0.2.99 +ensure-wasm-bindgen: NEED_VERSION = $(WASM_BINDGEN_VERSION) ensure-wasm-bindgen: INSTALLED_VERSION = $(word 2,$(shell wasm-bindgen --version 2>/dev/null)) ensure-wasm-bindgen: ifneq ($(CI)$(FORCE),) @@ -1834,6 +1918,11 @@ else endif endif +.PHONY: ensure-wasm-opt +ensure-wasm-opt: WASM_OPT_VERSION := $(shell $(MAKE) --no-print-directory -C build.assets print-wasm-opt-version) +ensure-wasm-opt: + cargo install --locked wasm-opt@$(WASM_OPT_VERSION) + .PHONY: build-ui build-ui: ensure-js-deps ensure-wasm-deps @[ "${WEBASSETS_SKIP_BUILD}" -eq 1 ] || pnpm build-ui-oss @@ -1859,6 +1948,10 @@ rustup-set-version: rustup-install-target-toolchain: rustup-set-version rustup target add $(RUST_TARGET_ARCH) +.PHONY: rustup-install-wasm-toolchain +rustup-install-wasm-toolchain: rustup-set-version + rustup target add $(CARGO_WASM_TARGET) + # changelog generates PR changelog between the provided base tag and the tip of # the specified branch. # @@ -1909,3 +2002,54 @@ dump-preset-roles: .PHONY: test-e2e test-e2e: ensure-webassets (cd e2e && pnpm install) && $(CGOFLAG) go test -tags=webassets_embed ./e2e/web_e2e_test.go + +.PHONY: cli-docs-tsh +cli-docs-tsh: + # Not executing go run since we don't want to redirect linker warnings + # along with the docs page content. + go build -o $(BUILDDIR)/tshdocs -tags docs ./tool/tsh && \ + $(BUILDDIR)/tshdocs help 2>docs/pages/reference/cli/tsh.mdx && \ + rm $(BUILDDIR)/tshdocs + +# audit-event-reference generates audit event reference docs using the Web UI +# source. +.PHONY: audit-event-reference +audit-event-reference: + pnpm run -C ./web/packages/teleport event-reference + +# audit-event-reference-up-to-date ensures the audit event reference +# documentation reflects the Web UI source. +.PHONY: audit-event-reference-up-to-date +audit-event-reference-up-to-date: must-start-clean/host audit-event-reference + @if ! 
git diff --quiet; then \ + ./build.assets/please-run.sh "audit event reference docs" "make audit-event-reference"; \ + exit 1; \ + fi + +.PHONY: access-monitoring-reference +access-monitoring-reference: + cd ./build.assets/tooling/cmd/gen-athena-docs && go run main.go > ../../../../docs/pages/includes/access-monitoring-events.mdx + +.PHONY: access-monitoring-reference-up-to-date +access-monitoring-reference-up-to-date: access-monitoring-reference + @if ! git diff --quiet; then \ + ./build.assets/please-run.sh "Access Monitoring event reference docs" "make access-monitoring-reference"; \ + exit 1; \ + fi + +.PHONY: gen-docs +gen-docs: gen-resource-docs audit-event-reference + $(MAKE) -C integrations/terraform docs + $(MAKE) -C integrations/operator crd-docs + $(MAKE) -C examples/chart render-chart-ref + +.PHONY: gen-resource-docs +gen-resource-docs: + cd build.assets/tooling/cmd/resource-ref-generator && go run . -config config.yaml + +.PHONY: resource-docs-up-to-date +resource-docs-up-to-date: must-start-clean/host gen-resource-docs + @if ! git diff --quiet; then \ + ./build.assets/please-run.sh "tctl resource reference docs" "make gen-resource-docs"; \ + exit 1; \ + fi diff --git a/README.md b/README.md index 65fb31b8bc666..2e25df3f8b028 100644 --- a/README.md +++ b/README.md @@ -121,7 +121,7 @@ If your intention is to build and deploy for use in a production infrastructure a released tag should be used. The default branch, `master`, is the current development branch for an upcoming major version. Get the latest release tags listed at https://goteleport.com/download/ and then use that tag in the `git clone`. -For example `git clone https://github.com/gravitational/teleport.git -b v16.0.0` gets release v16.0.0. +For example `git clone https://github.com/gravitational/teleport.git -b v18.0.0` gets release v18.0.0. 
### Dockerized Build diff --git a/api/client/accesslist/accesslist.go b/api/client/accesslist/accesslist.go index 62c8a200696a4..54c518603a483 100644 --- a/api/client/accesslist/accesslist.go +++ b/api/client/accesslist/accesslist.go @@ -61,7 +61,10 @@ func (c *Client) GetAccessLists(ctx context.Context) ([]*accesslist.AccessList, } // ListAccessLists returns a paginated list of access lists. +// TODO (avatus): DELETE IN 21.0.0 func (c *Client) ListAccessLists(ctx context.Context, pageSize int, nextToken string) ([]*accesslist.AccessList, string, error) { + //nolint:staticcheck // SA1019. ListAccessLists is deprecated but will + // continue to be supported for backward compatibility. resp, err := c.grpcClient.ListAccessLists(ctx, &accesslistv1.ListAccessListsRequest{ PageSize: int32(pageSize), NextToken: nextToken, @@ -82,6 +85,24 @@ func (c *Client) ListAccessLists(ctx context.Context, pageSize int, nextToken st return accessLists, resp.GetNextToken(), nil } +// ListAccessListsV2 returns a filtered and sorted paginated list of access lists. +func (c *Client) ListAccessListsV2(ctx context.Context, req *accesslistv1.ListAccessListsV2Request) ([]*accesslist.AccessList, string, error) { + resp, err := c.grpcClient.ListAccessListsV2(ctx, req) + if err != nil { + return nil, "", trace.Wrap(err) + } + + accessLists := make([]*accesslist.AccessList, len(resp.AccessLists)) + for i, accessList := range resp.AccessLists { + accessLists[i], err = conv.FromProto(accessList, conv.WithOwnersIneligibleStatusField(accessList.GetSpec().GetOwners())) + if err != nil { + return nil, "", trace.Wrap(err) + } + } + + return accessLists, resp.GetNextPageToken(), nil +} + // GetAccessList returns the specified access list resource.
func (c *Client) GetAccessList(ctx context.Context, name string) (*accesslist.AccessList, error) { resp, err := c.grpcClient.GetAccessList(ctx, &accesslistv1.GetAccessListRequest{ @@ -219,7 +240,7 @@ func (c *Client) ListAllAccessListMembers(ctx context.Context, pageSize int, pag } // GetAccessListMember returns the specified access list member resource. -func (c *Client) GetAccessListMember(ctx context.Context, accessList string, memberName string) (*accesslist.AccessListMember, error) { +func (c *Client) GetAccessListMember(ctx context.Context, accessList, memberName string) (*accesslist.AccessListMember, error) { resp, err := c.grpcClient.GetAccessListMember(ctx, &accesslistv1.GetAccessListMemberRequest{ AccessList: accessList, MemberName: memberName, @@ -232,6 +253,21 @@ func (c *Client) GetAccessListMember(ctx context.Context, accessList string, mem return member, trace.Wrap(err) } +// GetStaticAccessListMember returns the specified access_list_member resource. It returns an error if +// the target access_list is not of type static. +func (c *Client) GetStaticAccessListMember(ctx context.Context, accessList, memberName string) (*accesslist.AccessListMember, error) { + resp, err := c.grpcClient.GetStaticAccessListMember(ctx, &accesslistv1.GetStaticAccessListMemberRequest{ + AccessList: accessList, + MemberName: memberName, + }) + if err != nil { + return nil, trace.Wrap(err) + } + + member, err := conv.FromMemberProto(resp.Member, conv.WithMemberIneligibleStatusField(resp.Member)) + return member, trace.Wrap(err) +} + // GetAccessListOwners returns a list of all owners in an Access List, including those inherited from nested Access Lists. // // Returned Owners are not validated for ownership requirements – use `IsAccessListOwner` for validation. @@ -264,6 +300,19 @@ func (c *Client) UpsertAccessListMember(ctx context.Context, member *accesslist.
return responseMember, trace.Wrap(err) } +// UpsertStaticAccessListMember creates or updates an access_list_member resource. It returns an error +// and does nothing if the target access_list is not of type static. +func (c *Client) UpsertStaticAccessListMember(ctx context.Context, member *accesslist.AccessListMember) (*accesslist.AccessListMember, error) { + resp, err := c.grpcClient.UpsertStaticAccessListMember(ctx, &accesslistv1.UpsertStaticAccessListMemberRequest{ + Member: conv.ToMemberProto(member), + }) + if err != nil { + return nil, trace.Wrap(err) + } + m, err := conv.FromMemberProto(resp.Member, conv.WithMemberIneligibleStatusField(resp.Member)) + return m, trace.Wrap(err) +} + // UpdateAccessListMember updates an access list member resource using a conditional update. func (c *Client) UpdateAccessListMember(ctx context.Context, member *accesslist.AccessListMember) (*accesslist.AccessListMember, error) { resp, err := c.grpcClient.UpdateAccessListMember(ctx, &accesslistv1.UpdateAccessListMemberRequest{ @@ -276,8 +325,18 @@ func (c *Client) UpdateAccessListMember(ctx context.Context, member *accesslist. return responseMember, trace.Wrap(err) } +// DeleteStaticAccessListMember hard deletes the specified access_list_member. It returns an error and +// does nothing if the target access_list is not of static type. +func (c *Client) DeleteStaticAccessListMember(ctx context.Context, accessList, memberName string) error { + _, err := c.grpcClient.DeleteStaticAccessListMember(ctx, &accesslistv1.DeleteStaticAccessListMemberRequest{ + AccessList: accessList, + MemberName: memberName, + }) + return trace.Wrap(err) +} + // DeleteAccessListMember hard deletes the specified access list member resource.
-func (c *Client) DeleteAccessListMember(ctx context.Context, accessList string, memberName string) error { +func (c *Client) DeleteAccessListMember(ctx context.Context, accessList, memberName string) error { _, err := c.grpcClient.DeleteAccessListMember(ctx, &accesslistv1.DeleteAccessListMemberRequest{ AccessList: accessList, MemberName: memberName, @@ -415,3 +474,22 @@ func (c *Client) GetSuggestedAccessLists(ctx context.Context, accessRequestID st return accessLists, nil } + +// ListUserAccessLists returns a paginated list of all access lists where the +// user is explicitly an owner or member. +func (c *Client) ListUserAccessLists(ctx context.Context, req *accesslistv1.ListUserAccessListsRequest) ([]*accesslist.AccessList, string, error) { + resp, err := c.grpcClient.ListUserAccessLists(ctx, req) + if err != nil { + return nil, "", trace.Wrap(err) + } + + accessLists := make([]*accesslist.AccessList, len(resp.AccessLists)) + for i, accessList := range resp.AccessLists { + accessLists[i], err = conv.FromProto(accessList, conv.WithOwnersIneligibleStatusField(accessList.GetSpec().GetOwners())) + if err != nil { + return nil, "", trace.Wrap(err) + } + } + + return accessLists, resp.GetNextPageToken(), nil +} diff --git a/api/client/client.go b/api/client/client.go index afa1104cda9bc..cb99f34ba6215 100644 --- a/api/client/client.go +++ b/api/client/client.go @@ -24,6 +24,7 @@ import ( "errors" "fmt" "io" + "iter" "log/slog" "net" "slices" @@ -58,6 +59,7 @@ import ( "github.com/gravitational/teleport/api/client/okta" "github.com/gravitational/teleport/api/client/proto" "github.com/gravitational/teleport/api/client/scim" + scopedaccess "github.com/gravitational/teleport/api/client/scopes/access" "github.com/gravitational/teleport/api/client/secreport" statichostuserclient "github.com/gravitational/teleport/api/client/statichostuser" "github.com/gravitational/teleport/api/client/userloginstate" @@ -79,8 +81,8 @@ import ( externalauditstoragev1 
"github.com/gravitational/teleport/api/gen/proto/go/teleport/externalauditstorage/v1" gitserverpb "github.com/gravitational/teleport/api/gen/proto/go/teleport/gitserver/v1" healthcheckconfigv1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/healthcheckconfig/v1" - identitycenterv1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/identitycenter/v1" integrationpb "github.com/gravitational/teleport/api/gen/proto/go/teleport/integration/v1" + joinv1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/join/v1" kubeproto "github.com/gravitational/teleport/api/gen/proto/go/teleport/kube/v1" kubewaitingcontainerpb "github.com/gravitational/teleport/api/gen/proto/go/teleport/kubewaitingcontainer/v1" loginrulepb "github.com/gravitational/teleport/api/gen/proto/go/teleport/loginrule/v1" @@ -89,11 +91,15 @@ import ( oktapb "github.com/gravitational/teleport/api/gen/proto/go/teleport/okta/v1" pluginspb "github.com/gravitational/teleport/api/gen/proto/go/teleport/plugins/v1" presencepb "github.com/gravitational/teleport/api/gen/proto/go/teleport/presence/v1" - provisioningv1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/provisioning/v1" + recordingencryptionv1pb "github.com/gravitational/teleport/api/gen/proto/go/teleport/recordingencryption/v1" + recordingmetadatav1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/recordingmetadata/v1" resourceusagepb "github.com/gravitational/teleport/api/gen/proto/go/teleport/resourceusage/v1" samlidppb "github.com/gravitational/teleport/api/gen/proto/go/teleport/samlidp/v1" + scopedaccessv1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/access/v1" + joiningv1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/joining/v1" secreportsv1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/secreports/v1" stableunixusersv1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/stableunixusers/v1" + summarizerv1 
"github.com/gravitational/teleport/api/gen/proto/go/teleport/summarizer/v1" trustpb "github.com/gravitational/teleport/api/gen/proto/go/teleport/trust/v1" userloginstatev1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/userloginstate/v1" userprovisioningpb "github.com/gravitational/teleport/api/gen/proto/go/teleport/userprovisioning/v2" @@ -110,6 +116,8 @@ import ( "github.com/gravitational/teleport/api/types/events" "github.com/gravitational/teleport/api/types/wrappers" "github.com/gravitational/teleport/api/utils" + "github.com/gravitational/teleport/api/utils/clientutils" + grpcutils "github.com/gravitational/teleport/api/utils/grpc" "github.com/gravitational/teleport/api/utils/grpc/interceptors" ) @@ -127,6 +135,8 @@ type AuthServiceClient struct { auditlogpb.AuditLogServiceClient userpreferencespb.UserPreferencesServiceClient notificationsv1pb.NotificationServiceClient + recordingencryptionv1pb.RecordingEncryptionServiceClient + joiningv1.ScopedJoiningServiceClient } // Client is a gRPC Client that connects to a Teleport Auth server either @@ -146,8 +156,8 @@ type Client struct { conn *grpc.ClientConn // grpc is the gRPC client specification for the auth server. grpc AuthServiceClient - // JoinServiceClient is a client for the JoinService, which runs on both the - // auth and proxy. + // JoinServiceClient is a client for the legacy JoinService, which + // runs on both the auth and proxy. *JoinServiceClient // closedFlag is set to indicate that the connection is closed. // It's a pointer to allow the Client struct to be copied. 
@@ -500,23 +510,24 @@ func (c *Client) dialGRPC(ctx context.Context, addr string) error { if err != nil { return trace.Wrap(err) } - var dialOpts []grpc.DialOption dialOpts = append(dialOpts, grpc.WithContextDialer(c.grpcDialer())) dialOpts = append(dialOpts, + grpc.WithStatsHandler(otelgrpc.NewClientHandler()), grpc.WithChainUnaryInterceptor( - otelUnaryClientInterceptor(), metadata.UnaryClientInterceptor, interceptors.GRPCClientUnaryErrorInterceptor, interceptors.WithMFAUnaryInterceptor(c.PerformMFACeremony), breaker.UnaryClientInterceptor(cb), ), grpc.WithChainStreamInterceptor( - otelStreamClientInterceptor(), metadata.StreamClientInterceptor, interceptors.GRPCClientStreamErrorInterceptor, breaker.StreamClientInterceptor(cb), ), + grpc.WithDefaultCallOptions( + grpc.MaxCallRecvMsgSize(grpcutils.MaxClientRecvMsgSize()), + ), ) // Only set transportCredentials if tlsConfig is set. This makes it possible // to explicitly provide grpc.WithTransportCredentials(insecure.NewCredentials()) @@ -534,35 +545,18 @@ func (c *Client) dialGRPC(ctx context.Context, addr string) error { c.conn = conn c.grpc = AuthServiceClient{ - AuthServiceClient: proto.NewAuthServiceClient(c.conn), - AuditLogServiceClient: auditlogpb.NewAuditLogServiceClient(c.conn), - UserPreferencesServiceClient: userpreferencespb.NewUserPreferencesServiceClient(c.conn), - NotificationServiceClient: notificationsv1pb.NewNotificationServiceClient(c.conn), + AuthServiceClient: proto.NewAuthServiceClient(c.conn), + AuditLogServiceClient: auditlogpb.NewAuditLogServiceClient(c.conn), + UserPreferencesServiceClient: userpreferencespb.NewUserPreferencesServiceClient(c.conn), + NotificationServiceClient: notificationsv1pb.NewNotificationServiceClient(c.conn), + RecordingEncryptionServiceClient: recordingencryptionv1pb.NewRecordingEncryptionServiceClient(c.conn), + ScopedJoiningServiceClient: joiningv1.NewScopedJoiningServiceClient(c.conn), } c.JoinServiceClient = 
NewJoinServiceClient(proto.NewJoinServiceClient(c.conn)) return nil } -// We wrap the creation of the otelgrpc interceptors in a sync.Once - this is -// because each time this is called, they create a new underlying metric. If -// something (e.g tbot) is repeatedly creating new clients and closing them, -// then this leads to a memory leak since the underlying metric is not cleaned -// up. -// See https://github.com/gravitational/teleport/issues/30759 -// See https://github.com/open-telemetry/opentelemetry-go-contrib/issues/4226 -var otelStreamClientInterceptor = sync.OnceValue(func() grpc.StreamClientInterceptor { - //nolint:staticcheck // SA1019. There is a data race in the stats.Handler that is replacing - // the interceptor. See https://github.com/open-telemetry/opentelemetry-go-contrib/issues/4576. - return otelgrpc.StreamClientInterceptor() -}) - -var otelUnaryClientInterceptor = sync.OnceValue(func() grpc.UnaryClientInterceptor { - //nolint:staticcheck // SA1019. There is a data race in the stats.Handler that is replacing - // the interceptor. See https://github.com/open-telemetry/opentelemetry-go-contrib/issues/4576. - return otelgrpc.UnaryClientInterceptor() -}) - // ConfigureALPN configures ALPN SNI cluster routing information in TLS settings allowing for // allowing to dial auth service through Teleport Proxy directly without using SSH Tunnels. func ConfigureALPN(tlsConfig *tls.Config, clusterName string) *tls.Config { @@ -831,6 +825,12 @@ func (c *Client) UpsertDeviceResource(ctx context.Context, res *types.DeviceV1) return types.DeviceToResource(upserted), nil } +// ScopedAccessServiceClient returns an unadorned Scoped Access Service client, using the underlying +// Auth gRPC connection. +func (c *Client) ScopedAccessServiceClient() *scopedaccess.Client { + return scopedaccess.NewClient(scopedaccessv1.NewScopedAccessServiceClient(c.conn)) +} + // LoginRuleClient returns an unadorned Login Rule client, using the underlying // Auth gRPC connection. 
// Clients connecting to non-Enterprise clusters, or older Teleport versions, @@ -934,6 +934,29 @@ func (c *Client) VnetConfigServiceClient() vnet.VnetConfigServiceClient { return vnet.NewVnetConfigServiceClient(c.conn) } +// JoinV1Client returns an unadorned gRPC client for the new Join service. +func (c *Client) JoinV1Client() joinv1.JoinServiceClient { + return joinv1.NewJoinServiceClient(c.conn) +} + +// SummarizerServiceClient returns an unadorned client for the session +// recording summarizer service. +func (c *Client) SummarizerServiceClient() summarizerv1.SummarizerServiceClient { + return summarizerv1.NewSummarizerServiceClient(c.conn) +} + +// RecordingMetadataServiceClient returns an unadorned client for the session +// recording metadata service. +func (c *Client) RecordingMetadataServiceClient() recordingmetadatav1.RecordingMetadataServiceClient { + return recordingmetadatav1.NewRecordingMetadataServiceClient(c.conn) +} + +// RecordingEncryptionServiceClient returns an unadorned client for the session +// recording encryption service. +func (c *Client) RecordingEncryptionServiceClient() recordingencryptionv1pb.RecordingEncryptionServiceClient { + return recordingencryptionv1pb.NewRecordingEncryptionServiceClient(c.conn) +} + // GetVnetConfig returns the singleton VnetConfig resource. func (c *Client) GetVnetConfig(ctx context.Context) (*vnet.VnetConfig, error) { return c.VnetConfigServiceClient().GetVnetConfig(ctx, &vnet.GetVnetConfigRequest{}) @@ -1036,25 +1059,29 @@ func (c *Client) GetCurrentUserRoles(ctx context.Context) ([]types.Role, error) // GetUsers returns all currently registered users. // withSecrets controls whether authentication details are returned. 
func (c *Client) GetUsers(ctx context.Context, withSecrets bool) ([]types.User, error) { - req := userspb.ListUsersRequest{ - WithSecrets: withSecrets, - } + userstream := clientutils.Resources( + ctx, + func(ctx context.Context, limit int, token string) ([]*types.UserV2, string, error) { + rsp, err := c.ListUsers(ctx, &userspb.ListUsersRequest{ + WithSecrets: withSecrets, + PageToken: token, + PageSize: int32(limit), + }) + + if err != nil { + return nil, "", trace.Wrap(err) + } + + return rsp.Users, rsp.NextPageToken, nil + }) var out []types.User - for { - rsp, err := c.ListUsers(ctx, &req) + for user, err := range userstream { if err != nil { return nil, trace.Wrap(err) } - for _, user := range rsp.Users { - out = append(out, user) - } - - req.PageToken = rsp.NextPageToken - if req.PageToken == "" { - break - } + out = append(out, user) } return out, nil @@ -1171,6 +1198,24 @@ func (c *Client) CreateResetPasswordToken(ctx context.Context, req *proto.Create return token, nil } +func (c *Client) ListResetPasswordTokens(ctx context.Context, pageSize int, pageToken string) ([]types.UserToken, string, error) { + req := &proto.ListResetPasswordTokenRequest{ + PageSize: int32(pageSize), + PageToken: pageToken, + } + resp, err := c.grpc.ListResetPasswordTokens(ctx, req) + if err != nil { + return nil, "", trace.Wrap(err) + } + + // Convert concrete type []*types.UserTokenV3 to interface type []types.UserToken + tokens := make([]types.UserToken, len(resp.UserTokens)) + for i, token := range resp.UserTokens { + tokens[i] = token + } + return tokens, resp.NextPageToken, nil +} + // GetAccessRequests retrieves a list of all access requests matching the provided filter. 
func (c *Client) GetAccessRequests(ctx context.Context, filter types.AccessRequestFilter) ([]types.AccessRequest, error) { requests, err := c.ListAllAccessRequests(ctx, &proto.ListAccessRequestsRequest{ @@ -1340,6 +1385,15 @@ func (c *Client) GetAccessCapabilities(ctx context.Context, req types.AccessCapa return caps, nil } +// GetRemoteAccessCapabilities requests the access capabilities of a user. +func (c *Client) GetRemoteAccessCapabilities(ctx context.Context, req types.RemoteAccessCapabilitiesRequest) (*types.RemoteAccessCapabilities, error) { + caps, err := c.grpc.GetRemoteAccessCapabilities(ctx, &req) + if err != nil { + return nil, trace.Wrap(err) + } + return caps, nil +} + // GetPluginData loads all plugin data matching the supplied filter. func (c *Client) GetPluginData(ctx context.Context, filter types.PluginDataFilter) ([]types.PluginData, error) { seq, err := c.grpc.GetPluginData(ctx, &filter) @@ -1393,6 +1447,25 @@ func (c *Client) GetSemaphores(ctx context.Context, filter types.SemaphoreFilter return sems, nil } +// ListSemaphores returns a page of semaphores matching the supplied filter. +func (c *Client) ListSemaphores(ctx context.Context, limit int, start string, filter *types.SemaphoreFilter) ([]types.Semaphore, string, error) { + resp, err := c.grpc.ListSemaphores(ctx, &proto.ListSemaphoresRequest{ + PageSize: int32(limit), + PageToken: start, + Filter: filter, + }) + if err != nil { + return nil, "", trace.Wrap(err) + } + + sems := make([]types.Semaphore, 0, len(resp.Semaphores)) + for _, s := range resp.Semaphores { + sems = append(sems, s) + } + + return sems, resp.NextPageToken, nil +} + // DeleteSemaphore deletes a semaphore matching the supplied filter.
func (c *Client) DeleteSemaphore(ctx context.Context, filter types.SemaphoreFilter) error { _, err := c.grpc.DeleteSemaphore(ctx, &filter) @@ -1526,6 +1599,27 @@ func (c *Client) GetSnowflakeSessions(ctx context.Context) ([]types.WebSession, return out, nil } +// ListSnowflakeSessions returns a page of Snowflake web sessions. +func (c *Client) ListSnowflakeSessions(ctx context.Context, limit int, start string) ([]types.WebSession, string, error) { + resp, err := c.grpc.ListSnowflakeSessions(ctx, &proto.ListSnowflakeSessionsRequest{ + PageSize: int32(limit), + PageToken: start, + }) + if err != nil { + return nil, "", trace.Wrap(err) + } + sessions := make([]types.WebSession, len(resp.Sessions)) + for i := range resp.Sessions { + sessions[i] = resp.Sessions[i] + } + return sessions, resp.NextPageToken, nil +} + +// RangeSnowflakeSessions returns Snowflake web sessions within the range [start, end). +func (c *Client) RangeSnowflakeSessions(ctx context.Context, start, end string) iter.Seq2[types.WebSession, error] { + return clientutils.RangeResources(ctx, start, end, c.ListSnowflakeSessions, types.WebSession.GetName) +} + // CreateAppSession creates an application web session. Application web // sessions represent a browser session the client holds. func (c *Client) CreateAppSession(ctx context.Context, req *proto.CreateAppSessionRequest) (types.WebSession, error) { @@ -1770,6 +1864,16 @@ func (c *Client) ListRoles(ctx context.Context, req *proto.ListRolesRequest) (*p return rsp, nil } +// ListRequestableRoles is a paginated requestable role getter. +func (c *Client) ListRequestableRoles(ctx context.Context, req *proto.ListRequestableRolesRequest) (*proto.ListRequestableRolesResponse, error) { + rsp, err := c.grpc.ListRequestableRoles(ctx, req) + if err != nil { + return nil, trace.Wrap(err) + } + + return rsp, nil +} + // CreateRole creates a new role. 
func (c *Client) CreateRole(ctx context.Context, role types.Role) (types.Role, error) { r, ok := role.(*types.RoleV6) @@ -1878,6 +1982,39 @@ func (c *Client) GetOIDCConnectors(ctx context.Context, withSecrets bool) ([]typ return oidcConnectors, nil } +// ListOIDCConnectors returns a page of valid registered connectors. +// withSecrets adds or removes client secret from return results. +func (c *Client) ListOIDCConnectors(ctx context.Context, limit int, start string, withSecrets bool) ([]types.OIDCConnector, string, error) { + resp, err := c.grpc.ListOIDCConnectors(ctx, &proto.ListOIDCConnectorsRequest{ + PageSize: int32(limit), + PageToken: start, + WithSecrets: withSecrets, + }) + if err != nil { + return nil, "", trace.Wrap(err) + } + + oidcConnectors := make([]types.OIDCConnector, len(resp.Connectors)) + for i, oidcConnector := range resp.Connectors { + oidcConnectors[i] = oidcConnector + } + return oidcConnectors, resp.NextPageToken, nil +} + +// RangeOIDCConnectors returns valid registered connectors within the range [start, end). +// withSecrets adds or removes client secret from return results. +func (c *Client) RangeOIDCConnectors(ctx context.Context, start, end string, withSecrets bool) iter.Seq2[types.OIDCConnector, error] { + return clientutils.RangeResources( + ctx, + start, + end, + func(ctx context.Context, limit int, start string) ([]types.OIDCConnector, string, error) { + return c.ListOIDCConnectors(ctx, limit, start, withSecrets) + }, + types.OIDCConnector.GetName, + ) +} + // CreateOIDCConnector creates an OIDC connector. func (c *Client) CreateOIDCConnector(ctx context.Context, connector types.OIDCConnector) (types.OIDCConnector, error) { oidcConnector, ok := connector.(*types.OIDCConnectorV3) @@ -1991,6 +2128,43 @@ func (c *Client) GetSAMLConnectorsWithValidationOptions(ctx context.Context, wit return samlConnectors, nil } +// ListSAMLConnectorsWithOptions returns a page of valid registered SAML connectors. 
+// withSecrets adds or removes client secret from return results. +func (c *Client) ListSAMLConnectorsWithOptions(ctx context.Context, limit int, start string, withSecrets bool, opts ...types.SAMLConnectorValidationOption) ([]types.SAMLConnector, string, error) { + var options types.SAMLConnectorValidationOptions + for _, opt := range opts { + opt(&options) + } + + resp, err := c.grpc.ListSAMLConnectors(ctx, &proto.ListSAMLConnectorsRequest{ + PageSize: int32(limit), + PageToken: start, + WithSecrets: withSecrets, + NoFollowUrls: options.NoFollowURLs, + }) + if err != nil { + return nil, "", trace.Wrap(err) + } + samlConnectors := make([]types.SAMLConnector, len(resp.Connectors)) + for i, samlConnector := range resp.Connectors { + samlConnectors[i] = samlConnector + } + return samlConnectors, resp.NextPageToken, nil +} + +// RangeSAMLConnectorsWithOptions returns valid registered SAML connectors within the range [start, end). +// withSecrets adds or removes client secret from return results. +func (c *Client) RangeSAMLConnectorsWithOptions(ctx context.Context, start, end string, withSecrets bool, opts ...types.SAMLConnectorValidationOption) iter.Seq2[types.SAMLConnector, error] { + return clientutils.RangeResources( + ctx, + start, + end, + func(ctx context.Context, pageSize int, pageToken string) ([]types.SAMLConnector, string, error) { + return c.ListSAMLConnectorsWithOptions(ctx, pageSize, pageToken, withSecrets, opts...) + }, + types.SAMLConnector.GetName) +} + // CreateSAMLConnector creates a SAML connector. func (c *Client) CreateSAMLConnector(ctx context.Context, connector types.SAMLConnector) (types.SAMLConnector, error) { samlConnectorV2, ok := connector.(*types.SAMLConnectorV2) @@ -2077,6 +2251,39 @@ func (c *Client) GetGithubConnectors(ctx context.Context, withSecrets bool) ([]t return githubConnectors, nil } +// ListGithubConnectors returns a page of valid registered connectors. +// withSecrets adds or removes client secret from return results. 
+func (c *Client) ListGithubConnectors(ctx context.Context, limit int, start string, withSecrets bool) ([]types.GithubConnector, string, error) { + resp, err := c.grpc.ListGithubConnectors(ctx, &proto.ListGithubConnectorsRequest{ + PageSize: int32(limit), + PageToken: start, + WithSecrets: withSecrets, + }) + if err != nil { + return nil, "", trace.Wrap(err) + } + + githubConnectors := make([]types.GithubConnector, 0, len(resp.Connectors)) + for _, githubConnector := range resp.Connectors { + githubConnectors = append(githubConnectors, githubConnector) + } + return githubConnectors, resp.NextPageToken, nil +} + +// RangeGithubConnectors returns valid registered connectors within the range [start, end). +// withSecrets adds or removes client secret from return results. +func (c *Client) RangeGithubConnectors(ctx context.Context, start, end string, withSecrets bool) iter.Seq2[types.GithubConnector, error] { + return clientutils.RangeResources( + ctx, + start, + end, + func(ctx context.Context, limit int, start string) ([]types.GithubConnector, string, error) { + return c.ListGithubConnectors(ctx, limit, start, withSecrets) + }, + types.GithubConnector.GetName, + ) +} + // CreateGithubConnector creates a Github connector. func (c *Client) CreateGithubConnector(ctx context.Context, connector types.GithubConnector) (types.GithubConnector, error) { githubConnector, ok := connector.(*types.GithubConnectorV3) @@ -2234,6 +2441,27 @@ func (c *Client) GetTrustedClusters(ctx context.Context) ([]types.TrustedCluster return trustedClusters, nil } +// ListTrustedClusters returns a page of Trusted Cluster resources. 
+func (c *Client) ListTrustedClusters(ctx context.Context, limit int, start string) ([]types.TrustedCluster, string, error) { + resp, err := c.TrustClient().ListTrustedClusters(ctx, &trustpb.ListTrustedClustersRequest{ + PageSize: int32(limit), + PageToken: start, + }) + if err != nil { + return nil, "", trace.Wrap(err) + } + tcs := make([]types.TrustedCluster, len(resp.TrustedClusters)) + for i := range resp.TrustedClusters { + tcs[i] = resp.TrustedClusters[i] + } + return tcs, resp.NextPageToken, nil +} + +// RangeTrustedClusters returns Trusted Cluster resources within the range [start, end). +func (c *Client) RangeTrustedClusters(ctx context.Context, start, end string) iter.Seq2[types.TrustedCluster, error] { + return clientutils.RangeResources(ctx, start, end, c.ListTrustedClusters, types.TrustedCluster.GetName) +} + // UpsertTrustedCluster creates or updates a Trusted Cluster. // // Deprecated: Use [Client.UpsertTrustedClusterV2] instead. @@ -2313,8 +2541,10 @@ func (c *Client) GetToken(ctx context.Context, name string) (types.ProvisionToke } // GetTokens returns a list of active provision tokens for nodes and users. +// Deprecated: Use [ListProvisionTokens], [GetStaticTokens], and [ListResetPasswordTokens] instead. +// TODO(hugoShaka): DELETE IN 19.0.0 func (c *Client) GetTokens(ctx context.Context) ([]types.ProvisionToken, error) { - resp, err := c.grpc.GetTokens(ctx, &emptypb.Empty{}) + resp, err := c.grpc.GetTokens(ctx, &emptypb.Empty{}) //nolint:staticcheck // Provides backward compatibility, will be removed later. if err != nil { return nil, trace.Wrap(err) } @@ -2326,6 +2556,37 @@ func (c *Client) GetTokens(ctx context.Context) ([]types.ProvisionToken, error) return tokens, nil } +// GetStaticTokens returns the cluster static tokens. 
+func (c *Client) GetStaticTokens(ctx context.Context) (types.StaticTokens, error) { + tokens, err := c.grpc.GetStaticTokens(ctx, &emptypb.Empty{}) + if err != nil { + return nil, trace.Wrap(err) + } + + return tokens, nil +} + +// ListProvisionTokens retrieves a paginated list of provision tokens. +func (c *Client) ListProvisionTokens(ctx context.Context, pageSize int, pageToken string, anyRoles types.SystemRoles, botName string) ([]types.ProvisionToken, string, error) { + resp, err := c.grpc.ListProvisionTokens(ctx, &proto.ListProvisionTokensRequest{ + Limit: int32(pageSize), + StartKey: pageToken, + FilterRoles: anyRoles.StringSlice(), + FilterBotName: botName, + }) + if err != nil { + return nil, "", trace.Wrap(err) + } + + // Convert concrete type []*types.ProvisionTokenV2 to interface type []types.ProvisionToken + tokens := make([]types.ProvisionToken, len(resp.Tokens)) + for i, token := range resp.Tokens { + tokens[i] = token + } + + return tokens, resp.NextKey, nil +} + // UpsertToken creates or updates a provision token. func (c *Client) UpsertToken(ctx context.Context, token types.ProvisionToken) error { tokenV2, ok := token.(*types.ProvisionTokenV2) @@ -2476,6 +2737,65 @@ func (c *Client) StreamSessionEvents(ctx context.Context, sessionID string, star return ch, e } +// UploadEncryptedRecording streams encrypted recording parts to the auth +// server to be saved in long term storage. 
+func (c *Client) UploadEncryptedRecording(ctx context.Context, sessionID string, parts iter.Seq2[[]byte, error]) error { + createRes, err := c.grpc.CreateUpload(ctx, &recordingencryptionv1pb.CreateUploadRequest{ + SessionId: sessionID, + }) + if err != nil { + return trace.Wrap(err) + } + + next, stop := iter.Pull2(parts) + defer stop() + + part, err, ok := next() + if err != nil { + return trace.Wrap(err) + } else if !ok { + return trace.BadParameter("unexpected empty upload") + } + + var uploadedParts []*recordingencryptionv1pb.Part + // S3 requires that part numbers start at 1, so we do that by default regardless of which uploader is + // configured for the auth service + var partNumber int64 = 1 + for { + nextPart, err, hasNext := next() + if err != nil { + return trace.Wrap(err) + } + + uploadRes, err := c.grpc.UploadPart(ctx, &recordingencryptionv1pb.UploadPartRequest{ + Upload: createRes.Upload, + PartNumber: partNumber, + Part: part, + IsLast: !hasNext, + }) + if err != nil { + return trace.Wrap(err) + } + uploadedParts = append(uploadedParts, uploadRes.Part) + + if !hasNext { + break + } + + part = nextPart + partNumber++ + } + + if _, err := c.grpc.CompleteUpload(ctx, &recordingencryptionv1pb.CompleteUploadRequest{ + Upload: createRes.Upload, + Parts: uploadedParts, + }); err != nil { + return trace.Wrap(err) + } + + return nil +} + // SearchEvents allows searching for events with a full pagination support. 
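`UploadEncryptedRecording` pulls one part ahead of the part it is sending so it can set `IsLast` on the final `UploadPart` request before the sequence is exhausted, and numbers parts from 1 for S3 compatibility. A standalone sketch of that look-ahead loop (`markLast` and `partSeq` are hypothetical names; the real method issues gRPC calls instead of formatting strings):

```go
package main

import (
	"fmt"
	"iter"
)

// markLast demonstrates the one-ahead technique: by pulling the next element
// before processing the current one, each part can be emitted with an accurate
// "last" flag, and an empty sequence can be rejected up front.
func markLast(parts iter.Seq2[[]byte, error]) ([]string, error) {
	next, stop := iter.Pull2(parts)
	defer stop()

	part, err, ok := next()
	if err != nil {
		return nil, err
	} else if !ok {
		return nil, fmt.Errorf("unexpected empty upload")
	}

	var sent []string
	partNumber := 1 // S3-style multipart uploads number parts starting at 1
	for {
		nextPart, err, hasNext := next()
		if err != nil {
			return nil, err
		}
		sent = append(sent, fmt.Sprintf("part %d (len %d, last=%v)", partNumber, len(part), !hasNext))
		if !hasNext {
			return sent, nil
		}
		part = nextPart
		partNumber++
	}
}

// partSeq adapts a slice of parts into the iter.Seq2 shape the uploader consumes.
func partSeq(parts [][]byte) iter.Seq2[[]byte, error] {
	return func(yield func([]byte, error) bool) {
		for _, p := range parts {
			if !yield(p, nil) {
				return
			}
		}
	}
}

func main() {
	sent, err := markLast(partSeq([][]byte{[]byte("123"), []byte("456"), []byte("789")}))
	if err != nil {
		panic(err)
	}
	for _, s := range sent {
		fmt.Println(s)
	}
}
```

The `defer stop()` matters: if the caller returns early on an upload error, stopping the pull releases the producing iterator instead of leaving it suspended.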
func (c *Client) SearchEvents(ctx context.Context, fromUTC, toUTC time.Time, namespace string, eventTypes []string, limit int, order types.EventOrder, startKey string) ([]events.AuditEvent, string, error) { request := &proto.GetEventsRequest{ @@ -2656,6 +2976,14 @@ func (c *Client) DynamicDesktopClient() *dynamicwindows.Client { return dynamicwindows.NewClient(dynamicwindowsv1.NewDynamicWindowsServiceClient(c.conn)) } +func (c *Client) ListDynamicWindowsDesktops(ctx context.Context, pageSize int, pageToken string) ([]types.DynamicWindowsDesktop, string, error) { + return c.DynamicDesktopClient().ListDynamicWindowsDesktops(ctx, pageSize, pageToken) +} + +func (c *Client) GetDynamicWindowsDesktop(ctx context.Context, name string) (types.DynamicWindowsDesktop, error) { + return c.DynamicDesktopClient().GetDynamicWindowsDesktop(ctx, name) +} + // ClusterConfigClient returns an unadorned Cluster Configuration client, using the underlying // Auth gRPC connection. func (c *Client) ClusterConfigClient() clusterconfigpb.ClusterConfigServiceClient { @@ -2982,6 +3310,35 @@ func (c *Client) ListAutoUpdateAgentReports(ctx context.Context, pageSize int, p return resp.GetAutoupdateAgentReports(), resp.GetNextKey(), nil } +// UpsertAutoUpdateAgentReport upserts an AutoUpdateAgentReport resource. +func (c *Client) UpsertAutoUpdateAgentReport(ctx context.Context, report *autoupdatev1pb.AutoUpdateAgentReport) (*autoupdatev1pb.AutoUpdateAgentReport, error) { + client := autoupdatev1pb.NewAutoUpdateServiceClient(c.conn) + resp, err := client.UpsertAutoUpdateAgentReport(ctx, &autoupdatev1pb.UpsertAutoUpdateAgentReportRequest{ + AutoupdateAgentReport: report, + }) + if err != nil { + return nil, trace.Wrap(err) + } + return resp, nil +} + +// GetAutoUpdateBotInstanceReport gets the singleton auto-update bot report. 
+func (c *Client) GetAutoUpdateBotInstanceReport(ctx context.Context) (*autoupdatev1pb.AutoUpdateBotInstanceReport, error) { + client := autoupdatev1pb.NewAutoUpdateServiceClient(c.conn) + resp, err := client.GetAutoUpdateBotInstanceReport(ctx, &autoupdatev1pb.GetAutoUpdateBotInstanceReportRequest{}) + if err != nil { + return nil, trace.Wrap(err) + } + return resp, nil +} + +// DeleteAutoUpdateBotInstanceReport deletes the singleton auto-update bot instance report. +func (c *Client) DeleteAutoUpdateBotInstanceReport(ctx context.Context) error { + client := autoupdatev1pb.NewAutoUpdateServiceClient(c.conn) + _, err := client.DeleteAutoUpdateBotInstanceReport(ctx, &autoupdatev1pb.DeleteAutoUpdateBotInstanceReportRequest{}) + return trace.Wrap(err) +} + // GetClusterAccessGraphConfig retrieves the Cluster Access Graph configuration from Auth server. func (c *Client) GetClusterAccessGraphConfig(ctx context.Context) (*clusterconfigpb.AccessGraphConfig, error) { rsp, err := c.ClusterConfigClient().GetClusterAccessGraphConfig(ctx, &clusterconfigpb.GetClusterAccessGraphConfigRequest{}) @@ -2991,7 +3348,7 @@ func (c *Client) GetClusterAccessGraphConfig(ctx context.Context) (*clusterconfi return rsp.AccessGraph, nil } -// GetInstaller gets all installer script resources +// GetInstallers gets all installer script resources func (c *Client) GetInstallers(ctx context.Context) ([]types.Installer, error) { resp, err := c.grpc.GetInstallers(ctx, &emptypb.Empty{}) if err != nil { @@ -3004,6 +3361,29 @@ func (c *Client) GetInstallers(ctx context.Context) ([]types.Installer, error) { return installers, nil } +// ListInstallers returns a page of installer script resources. 
+func (c *Client) ListInstallers(ctx context.Context, limit int, start string) ([]types.Installer, string, error) { + resp, err := c.grpc.ListInstallers(ctx, &proto.ListInstallersRequest{ + PageSize: int32(limit), + PageToken: start, + }) + if err != nil { + return nil, "", trace.Wrap(err) + } + + installers := make([]types.Installer, len(resp.Installers)) + for i, inst := range resp.Installers { + installers[i] = inst + } + + return installers, resp.NextPageToken, nil +} + +// RangeInstallers returns installer script resources within the range [start, end). +func (c *Client) RangeInstallers(ctx context.Context, start, end string) iter.Seq2[types.Installer, error] { + return clientutils.RangeResources(ctx, start, end, c.ListInstallers, types.Installer.GetName) +} + // GetUIConfig gets the configuration for the UI served by the proxy service func (c *Client) GetUIConfig(ctx context.Context) (types.UIConfig, error) { resp, err := c.grpc.GetUIConfig(ctx, &emptypb.Empty{}) @@ -3088,6 +3468,41 @@ func (c *Client) GetLocks(ctx context.Context, inForceOnly bool, targets ...type return locks, nil } +// ListLocks returns a page of locks matching a filter +func (c *Client) ListLocks(ctx context.Context, limit int, startKey string, filter *types.LockFilter) ([]types.Lock, string, error) { + resp, err := c.grpc.ListLocks( + ctx, + &proto.ListLocksRequest{ + PageSize: int32(limit), + PageToken: startKey, + Filter: filter, + }, + ) + if err != nil { + return nil, "", trace.Wrap(err) + } + + locks := make([]types.Lock, 0, len(resp.Locks)) + for _, lock := range resp.Locks { + locks = append(locks, lock) + } + return locks, resp.NextPageToken, nil + +} + +// RangeLocks returns locks within the range [start, end) matching a filter +func (c *Client) RangeLocks(ctx context.Context, start, end string, filter *types.LockFilter) iter.Seq2[types.Lock, error] { + return clientutils.RangeResources( + ctx, + start, + end, + func(ctx context.Context, limit int, start string) ([]types.Lock, 
string, error) { + return c.ListLocks(ctx, limit, start, filter) + }, + types.Lock.GetName, + ) +} + // UpsertLock upserts a lock. func (c *Client) UpsertLock(ctx context.Context, lock types.Lock) error { lockV2, ok := lock.(*types.LockV2) @@ -3219,6 +3634,35 @@ func (c *Client) GetApps(ctx context.Context) ([]types.Application, error) { return apps, nil } +// ListApps returns a page of application resources. +// +// Note that application resources here refers to "dynamically-added" +// applications such as applications created by `tctl create`, or the CreateApp +// API. Applications defined in the `app_service.apps` section of the service +// YAML configuration are not collected in this API. +// +// For a page of registered applications that are served by an application +// service, use ListResources instead. +func (c *Client) ListApps(ctx context.Context, limit int, start string) ([]types.Application, string, error) { + resp, err := c.grpc.ListApps(ctx, &proto.ListAppsRequest{ + Limit: int32(limit), + StartKey: start, + }) + if err != nil { + return nil, "", trace.Wrap(err) + } + apps := make([]types.Application, 0, len(resp.Applications)) + for _, app := range resp.Applications { + apps = append(apps, app) + } + return apps, resp.NextKey, nil +} + +// Apps returns application resources within the range [start, end). +func (c *Client) Apps(ctx context.Context, start, end string) iter.Seq2[types.Application, error] { + return clientutils.RangeResources(ctx, start, end, c.ListApps, types.Application.GetName) +} + // DeleteApp deletes specified application resource. func (c *Client) DeleteApp(ctx context.Context, name string) error { _, err := c.grpc.DeleteApp(ctx, &types.ResourceRequest{Name: name}) @@ -3231,6 +3675,38 @@ func (c *Client) DeleteAllApps(ctx context.Context) error { return trace.Wrap(err) } +// ListAuthServers returns a paginated list of auth servers registered in the cluster. 
+func (c *Client) ListAuthServers(ctx context.Context, pageSize int, pageToken string) ([]types.Server, string, error) { + resp, err := c.PresenceServiceClient().ListAuthServers(ctx, &presencepb.ListAuthServersRequest{ + PageSize: int32(pageSize), + PageToken: pageToken, + }) + if err != nil { + return nil, "", trace.Wrap(err) + } + servers := make([]types.Server, 0, len(resp.Servers)) + for _, server := range resp.Servers { + servers = append(servers, server) + } + return servers, resp.NextPageToken, nil +} + +// ListProxyServers returns a paginated list of proxy servers registered in the cluster. +func (c *Client) ListProxyServers(ctx context.Context, pageSize int, pageToken string) ([]types.Server, string, error) { + resp, err := c.PresenceServiceClient().ListProxyServers(ctx, &presencepb.ListProxyServersRequest{ + PageSize: int32(pageSize), + PageToken: pageToken, + }) + if err != nil { + return nil, "", trace.Wrap(err) + } + servers := make([]types.Server, 0, len(resp.Servers)) + for _, server := range resp.Servers { + servers = append(servers, server) + } + return servers, resp.NextPageToken, nil +} + // CreateKubernetesCluster creates a new kubernetes cluster resource. func (c *Client) CreateKubernetesCluster(ctx context.Context, cluster types.KubeCluster) error { kubeClusterV3, ok := cluster.(*types.KubernetesClusterV3) @@ -3276,6 +3752,27 @@ func (c *Client) GetKubernetesClusters(ctx context.Context) ([]types.KubeCluster return clusters, nil } +// ListKubernetesClusters returns a page of registered kubernetes clusters. 
+func (c *Client) ListKubernetesClusters(ctx context.Context, limit int, start string) ([]types.KubeCluster, string, error) { + resp, err := c.grpc.ListKubernetesClusters(ctx, &proto.ListKubernetesClustersRequest{ + PageSize: int32(limit), + PageToken: start, + }) + if err != nil { + return nil, "", trace.Wrap(err) + } + kubeClusters := make([]types.KubeCluster, len(resp.KubernetesClusters)) + for i := range resp.KubernetesClusters { + kubeClusters[i] = resp.KubernetesClusters[i] + } + return kubeClusters, resp.NextPageToken, nil +} + +// RangeKubernetesClusters returns kubernetes clusters within the range [start, end). +func (c *Client) RangeKubernetesClusters(ctx context.Context, start, end string) iter.Seq2[types.KubeCluster, error] { + return clientutils.RangeResources(ctx, start, end, c.ListKubernetesClusters, types.KubeCluster.GetName) +} + // DeleteKubernetesCluster deletes specified kubernetes cluster resource. func (c *Client) DeleteKubernetesCluster(ctx context.Context, name string) error { _, err := c.grpc.DeleteKubernetesCluster(ctx, &types.ResourceRequest{Name: name}) @@ -3392,6 +3889,34 @@ func (c *Client) GetDatabases(ctx context.Context) ([]types.Database, error) { return databases, nil } +// ListDatabases returns a page of database resources. +// +// Note that database resources here refers to "dynamically-added" databases +// such as databases created by `tctl create`, the discovery service, or the +// CreateDatabase API. Databases discovered by the database agent (legacy +// discovery flow using `database_service.aws/database_service.azure`) and +// static databases defined in the `database_service.databases` section of the +// service YAML configuration are not collected in this API. 
+func (c *Client) ListDatabases(ctx context.Context, limit int, start string) ([]types.Database, string, error) { + resp, err := c.grpc.ListDatabases(ctx, &proto.ListDatabasesRequest{ + PageSize: int32(limit), + PageToken: start, + }) + if err != nil { + return nil, "", trace.Wrap(err) + } + databases := make([]types.Database, len(resp.Databases)) + for i := range resp.Databases { + databases[i] = resp.Databases[i] + } + return databases, resp.NextPageToken, nil +} + +// RangeDatabases returns database resources within the range [start, end). +func (c *Client) RangeDatabases(ctx context.Context, start, end string) iter.Seq2[types.Database, error] { + return clientutils.RangeResources(ctx, start, end, c.ListDatabases, types.Database.GetName) +} + // DeleteDatabase deletes specified database resource. func (c *Client) DeleteDatabase(ctx context.Context, name string) error { _, err := c.grpc.DeleteDatabase(ctx, &types.ResourceRequest{Name: name}) @@ -3472,6 +3997,57 @@ func (c *Client) GetDatabaseObjects(ctx context.Context) ([]*dbobjectv1.Database return out, nil } +// ListWindowsDesktops returns a page of registered Windows desktop hosts. +func (c *Client) ListWindowsDesktops(ctx context.Context, req types.ListWindowsDesktopsRequest) (*types.ListWindowsDesktopsResponse, error) { + resp, err := c.grpc.ListWindowsDesktops(ctx, &proto.ListWindowsDesktopsRequest{ + Limit: int32(req.Limit), + StartKey: req.StartKey, + Labels: req.Labels, + PredicateExpression: req.PredicateExpression, + SearchKeywords: req.SearchKeywords, + WindowsDesktopFilter: req.WindowsDesktopFilter, + }) + if err != nil { + return nil, trace.Wrap(err) + } + + out := &types.ListWindowsDesktopsResponse{ + Desktops: make([]types.WindowsDesktop, 0, len(resp.Desktops)), + NextKey: resp.NextKey, + } + + for _, d := range resp.Desktops { + out.Desktops = append(out.Desktops, d) + } + + return out, nil +} + +// ListWindowsDesktopServices returns a page of Windows desktop services. 
+func (c *Client) ListWindowsDesktopServices(ctx context.Context, req types.ListWindowsDesktopServicesRequest) (*types.ListWindowsDesktopServicesResponse, error) { + resp, err := c.grpc.ListResources(ctx, &proto.ListResourcesRequest{ + ResourceType: types.KindWindowsDesktopService, + Limit: int32(req.Limit), + StartKey: req.StartKey, + Labels: req.Labels, + PredicateExpression: req.PredicateExpression, + SearchKeywords: req.SearchKeywords, + }) + if err != nil { + return nil, trace.Wrap(err) + } + + out := &types.ListWindowsDesktopServicesResponse{ + DesktopServices: make([]types.WindowsDesktopService, 0, len(resp.Resources)), + NextKey: resp.NextKey, + } + + for _, r := range resp.Resources { + out.DesktopServices = append(out.DesktopServices, r.GetWindowsDesktopService()) + } + + return out, nil +} + // GetWindowsDesktopServices returns all registered windows desktop services. func (c *Client) GetWindowsDesktopServices(ctx context.Context) ([]types.WindowsDesktopService, error) { resp, err := c.grpc.GetWindowsDesktopServices(ctx, &emptypb.Empty{}) @@ -3527,6 +4103,42 @@ func (c *Client) DeleteAllWindowsDesktopServices(ctx context.Context) error { return nil } +// GetRelayServer returns the relay server heartbeat with a given name. +func (c *Client) GetRelayServer(ctx context.Context, name string) (*presencepb.RelayServer, error) { + req := &presencepb.GetRelayServerRequest{ + Name: name, + } + resp, err := c.PresenceServiceClient().GetRelayServer(ctx, req) + if err != nil { + return nil, trace.Wrap(err) + } + return resp.GetRelayServer(), nil +} + +// ListRelayServers returns a paginated list of relay server heartbeats.
+func (c *Client) ListRelayServers(ctx context.Context, pageSize int, pageToken string) (_ []*presencepb.RelayServer, nextPageToken string, _ error) { + req := &presencepb.ListRelayServersRequest{ + PageSize: int64(pageSize), + PageToken: pageToken, + } + + resp, err := c.PresenceServiceClient().ListRelayServers(ctx, req) + if err != nil { + return nil, "", trace.Wrap(err) + } + + return resp.GetRelays(), resp.GetNextPageToken(), nil +} + +// DeleteRelayServer deletes a relay server heartbeat by name. +func (c *Client) DeleteRelayServer(ctx context.Context, name string) error { + req := &presencepb.DeleteRelayServerRequest{ + Name: name, + } + _, err := c.PresenceServiceClient().DeleteRelayServer(ctx, req) + return trace.Wrap(err) +} + func (c *Client) GetDesktopBootstrapScript(ctx context.Context) (string, error) { resp, err := c.grpc.GetDesktopBootstrapScript(ctx, &emptypb.Empty{}) if err != nil { @@ -4011,7 +4623,7 @@ func GetResourcePage[T types.ResourceWithLabels](ctx context.Context, clt GetRes resource = respResource.GetGitServer() default: out.Resources = nil - return out, trace.NotImplemented("resource type %s does not support pagination", req.ResourceType) + return out, trace.NotImplemented("resource type %q does not support pagination", req.ResourceType) } t, ok := resource.(T) @@ -5192,18 +5804,6 @@ func (c *Client) GetRemoteClusters(ctx context.Context) ([]types.RemoteCluster, } } -// IdentityCenterClient returns Identity Center service client using an underlying -// gRPC connection. -func (c *Client) IdentityCenterClient() identitycenterv1.IdentityCenterServiceClient { - return identitycenterv1.NewIdentityCenterServiceClient(c.conn) -} - -// ProvisioningServiceClient returns provisioning service client using -// an underlying gRPC connection. -func (c *Client) ProvisioningServiceClient() provisioningv1.ProvisioningServiceClient { - return provisioningv1.NewProvisioningServiceClient(c.conn) -} - // IntegrationsClient returns integrations client. 
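The relay, auth, and proxy listing methods added here all expose the same page-token contract: request a page, append the results, and repeat until the returned next-page token is empty. A self-contained sketch of that client-side drain loop (`listPage` and `drain` are illustrative stand-ins, not Teleport APIs):

```go
package main

import "fmt"

// listPage mimics (as an assumption, not Teleport's real wire format) a
// paginated List endpoint such as ListRelayServers: it returns up to pageSize
// items starting at pageToken, plus the token for the following page, with ""
// signalling that the listing is exhausted.
func listPage(all []string, pageSize int, pageToken string) ([]string, string) {
	i := 0
	for i < len(all) && all[i] < pageToken {
		i++
	}
	end := min(i+pageSize, len(all))
	if end == len(all) {
		return all[i:end], ""
	}
	return all[i:end], all[end]
}

// drain shows the standard consumer loop: keep requesting pages until the
// server returns an empty next-page token.
func drain(all []string, pageSize int) []string {
	var out []string
	token := ""
	for {
		page, next := listPage(all, pageSize, token)
		out = append(out, page...)
		if next == "" {
			return out
		}
		token = next
	}
}

func main() {
	servers := []string{"relay-1", "relay-2", "relay-3", "relay-4", "relay-5"}
	fmt.Println(drain(servers, 2)) // two full pages of 2, then a final page of 1
}
```

Treating the empty token as the only termination signal (rather than a short page) keeps the loop correct even when the server legitimately returns fewer items than `pageSize` mid-listing.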
func (c *Client) IntegrationsClient() integrationpb.IntegrationServiceClient { return c.integrationsClient() @@ -5296,3 +5896,25 @@ func (c *Client) DeleteHealthCheckConfig(ctx context.Context, name string) error ) return trace.Wrap(err) } + +// ListScopedTokens fetches pages of scoped tokens. +func (c *Client) ListScopedTokens(ctx context.Context, req *joiningv1.ListScopedTokensRequest) (*joiningv1.ListScopedTokensResponse, error) { + res, err := c.grpc.ListScopedTokens(ctx, req) + return res, trace.Wrap(err) +} + +// DeleteScopedToken deletes an existing scoped token. +func (c *Client) DeleteScopedToken(ctx context.Context, name string) error { + _, err := c.grpc.DeleteScopedToken(ctx, &joiningv1.DeleteScopedTokenRequest{ + Name: name, + }) + return trace.Wrap(err) +} + +// CreateScopedToken creates a new scoped token. +func (c *Client) CreateScopedToken(ctx context.Context, token *joiningv1.ScopedToken) (*joiningv1.ScopedToken, error) { + res, err := c.grpc.CreateScopedToken(ctx, &joiningv1.CreateScopedTokenRequest{ + Token: token, + }) + return res.GetToken(), trace.Wrap(err) +} diff --git a/api/client/client_test.go b/api/client/client_test.go index ce1a9c510fd55..c43a08dfeade5 100644 --- a/api/client/client_test.go +++ b/api/client/client_test.go @@ -18,6 +18,7 @@ package client import ( "context" + "errors" "flag" "fmt" "net" @@ -28,6 +29,7 @@ import ( "time" "github.com/google/go-cmp/cmp" + "github.com/google/uuid" "github.com/gravitational/trace" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" @@ -35,6 +37,7 @@ import ( "github.com/gravitational/teleport/api" "github.com/gravitational/teleport/api/client/proto" "github.com/gravitational/teleport/api/defaults" + recordingencryptionv1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/recordingencryption/v1" "github.com/gravitational/teleport/api/metadata" "github.com/gravitational/teleport/api/trail" "github.com/gravitational/teleport/api/types" @@ -65,7 +68,7 @@ func (s 
*pingService) userAgentFromLastCall() string { func TestNew(t *testing.T) { t.Parallel() ctx := context.Background() - srv := startMockServer(t, &pingService{}) + srv := startMockServer(t, mockServices{auth: &pingService{}}) tests := []struct { desc string @@ -130,7 +133,7 @@ func TestNewDialBackground(t *testing.T) { require.NoError(t, err) addr := l.Addr().String() ping := &pingService{} - srv := newMockServer(t, addr, ping) + srv := newMockServer(t, addr, mockServices{auth: ping}) // Create client before the server is listening. cfg := srv.clientCfg() @@ -167,7 +170,7 @@ func TestWaitForConnectionReady(t *testing.T) { l, err := net.Listen("tcp", "localhost:") require.NoError(t, err) addr := l.Addr().String() - srv := newMockServer(t, addr, &proto.UnimplementedAuthServiceServer{}) + srv := newMockServer(t, addr, mockServices{auth: &proto.UnimplementedAuthServiceServer{}}) // Create client before the server is listening. cfg := srv.clientCfg() @@ -440,7 +443,7 @@ func testResources[T types.ResourceWithLabels](resourceType, namespace string) ( func TestListResources(t *testing.T) { t.Parallel() ctx := context.Background() - srv := startMockServer(t, &listResourcesService{}) + srv := startMockServer(t, mockServices{auth: &listResourcesService{}}) testCases := map[string]struct { resourceType string @@ -546,7 +549,7 @@ func testGetResources[T types.ResourceWithLabels](t *testing.T, clt *Client, kin func TestGetResources(t *testing.T) { t.Parallel() ctx := context.Background() - srv := startMockServer(t, &listResourcesService{}) + srv := startMockServer(t, mockServices{auth: &listResourcesService{}}) // Create client clt, err := New(ctx, srv.clientCfg()) @@ -586,7 +589,7 @@ func TestGetResources(t *testing.T) { func TestGetResourcesWithFilters(t *testing.T) { t.Parallel() ctx := context.Background() - srv := startMockServer(t, &listResourcesService{}) + srv := startMockServer(t, mockServices{auth: &listResourcesService{}}) // Create client clt, err := New(ctx, 
srv.clientCfg()) @@ -699,3 +702,99 @@ func TestGetUnifiedResourcesWithLogins(t *testing.T) { } } } + +func TestUploadEncryptedRecording(t *testing.T) { + ctx := context.Background() + + recordingEncryptionService := &uploadRecordingService{ + uploads: make(map[string][]*recordingencryptionv1.Part), + } + srv := startMockServer(t, mockServices{recordingEncryption: recordingEncryptionService}) + parts := [][]byte{ + []byte("123"), + []byte("456"), + []byte("789"), + } + partIter := func(yield func([]byte, error) bool) { + for _, part := range parts { + if part == nil { + if !yield(nil, errors.New("invalid part")) { + return + } + } else { + if !yield(part, nil) { + return + } + } + } + } + + clt, err := New(ctx, srv.clientCfg()) + require.NoError(t, err) + + sessionID, err := uuid.NewV7() + require.NoError(t, err) + err = clt.UploadEncryptedRecording(ctx, sessionID.String(), partIter) + require.NoError(t, err) + + uploaded := recordingEncryptionService.uploads[sessionID.String()] + require.Len(t, uploaded, len(parts)) + for idx, part := range uploaded { + // uploaded part numbers should increment starting with 1 + require.Equal(t, int64(idx+1), part.PartNumber) + } +} + +type uploadRecordingService struct { + recordingencryptionv1.UnimplementedRecordingEncryptionServiceServer + + uploads map[string][]*recordingencryptionv1.Part +} + +func (s *uploadRecordingService) CreateUpload(ctx context.Context, req *recordingencryptionv1.CreateUploadRequest) (*recordingencryptionv1.CreateUploadResponse, error) { + id, err := uuid.NewV7() + if err != nil { + return nil, trace.Wrap(err) + } + + s.uploads[req.SessionId] = []*recordingencryptionv1.Part{} + return &recordingencryptionv1.CreateUploadResponse{ + Upload: &recordingencryptionv1.Upload{ + UploadId: id.String(), + SessionId: req.SessionId, + }, + }, nil +} + +func (s *uploadRecordingService) UploadPart(ctx context.Context, req *recordingencryptionv1.UploadPartRequest) (*recordingencryptionv1.UploadPartResponse, error) { + 
sessionID := req.GetUpload().GetSessionId() + parts, ok := s.uploads[sessionID] + if !ok { + return nil, trace.Errorf("no upload found for %s", sessionID) + } + part := &recordingencryptionv1.Part{ + PartNumber: req.PartNumber, + } + s.uploads[sessionID] = append(parts, part) + return &recordingencryptionv1.UploadPartResponse{ + Part: part, + }, nil +} + +func (s *uploadRecordingService) CompleteUpload(ctx context.Context, req *recordingencryptionv1.CompleteUploadRequest) (*recordingencryptionv1.CompleteUploadResponse, error) { + sessionID := req.GetUpload().GetSessionId() + parts, ok := s.uploads[sessionID] + if !ok { + return nil, trace.Errorf("no upload found for %s", sessionID) + } + if len(parts) != len(req.GetParts()) { + return nil, errors.New("number of parts reported as uploaded does not match the number of parts received") + } + for _, part := range req.GetParts() { + uploaded := parts[part.PartNumber-1] + if uploaded.PartNumber != part.PartNumber { + return nil, fmt.Errorf("expected part %d in place of %d", part.PartNumber, uploaded.PartNumber) + } + } + return nil, nil +} diff --git a/api/client/contextdialer.go b/api/client/contextdialer.go index 90f27468764c8..1e53174795be6 100644 --- a/api/client/contextdialer.go +++ b/api/client/contextdialer.go @@ -383,7 +383,7 @@ func newTLSRoutingWithConnUpgradeDialer(ssh ssh.ClientConfig, params connectPara // sshConnect upgrades the underling connection to ssh and connects to the Auth service.
func sshConnect(ctx context.Context, conn net.Conn, ssh ssh.ClientConfig, dialTimeout time.Duration, addr string) (net.Conn, error) { ssh.Timeout = dialTimeout - sconn, err := tracessh.NewClientConnWithDeadline(ctx, conn, addr, &ssh) + sconn, err := tracessh.NewClientWithTimeout(ctx, conn, addr, &ssh) if err != nil { return nil, trace.NewAggregate(err, conn.Close()) } diff --git a/api/client/events.go b/api/client/events.go index fe09adae2a78a..7aec7fcca9e29 100644 --- a/api/client/events.go +++ b/api/client/events.go @@ -28,7 +28,10 @@ import ( kubewaitingcontainerpb "github.com/gravitational/teleport/api/gen/proto/go/teleport/kubewaitingcontainer/v1" machineidv1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/machineid/v1" notificationsv1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/notifications/v1" + presencev1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/presence/v1" provisioningv1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/provisioning/v1" + recordingencryptionv1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/recordingencryption/v1" + scopedaccessv1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/access/v1" userprovisioningpb "github.com/gravitational/teleport/api/gen/proto/go/teleport/userprovisioning/v2" usertasksv1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/usertasks/v1" workloadidentityv1pb "github.com/gravitational/teleport/api/gen/proto/go/teleport/workloadidentity/v1" @@ -127,6 +130,18 @@ func EventToGRPC(in types.Event) (*proto.Event, error) { out.Resource = &proto.Event_AutoUpdateAgentReport{ AutoUpdateAgentReport: r.UnwrapT(), } + case types.Resource153UnwrapperT[*autoupdate.AutoUpdateBotInstanceReport]: + out.Resource = &proto.Event_AutoUpdateBotInstanceReport{ + AutoUpdateBotInstanceReport: r.UnwrapT(), + } + case types.Resource153UnwrapperT[*scopedaccessv1.ScopedRole]: + out.Resource = &proto.Event_ScopedRole{ + ScopedRole: 
r.UnwrapT(), + } + case types.Resource153UnwrapperT[*scopedaccessv1.ScopedRoleAssignment]: + out.Resource = &proto.Event_ScopedRoleAssignment{ + ScopedRoleAssignment: r.UnwrapT(), + } case types.Resource153UnwrapperT[*identitycenterv1.Account]: out.Resource = &proto.Event_IdentityCenterAccount{ IdentityCenterAccount: r.UnwrapT(), @@ -147,10 +162,18 @@ func EventToGRPC(in types.Event) (*proto.Event, error) { out.Resource = &proto.Event_WorkloadIdentityX509Revocation{ WorkloadIdentityX509Revocation: r.UnwrapT(), } + case types.Resource153UnwrapperT[*recordingencryptionv1.RecordingEncryption]: + out.Resource = &proto.Event_RecordingEncryption{ + RecordingEncryption: r.UnwrapT(), + } case types.Resource153UnwrapperT[*healthcheckconfigv1.HealthCheckConfig]: out.Resource = &proto.Event_HealthCheckConfig{ HealthCheckConfig: r.UnwrapT(), } + case types.Resource153UnwrapperT[*presencev1.RelayServer]: + out.Resource = &proto.Event_RelayServer{ + RelayServer: r.UnwrapT(), + } case *types.ResourceHeader: out.Resource = &proto.Event_ResourceHeader{ ResourceHeader: r, @@ -356,6 +379,11 @@ func EventToGRPC(in types.Event) (*proto.Event, error) { out.Resource = &proto.Event_PluginStaticCredentials{ PluginStaticCredentials: r, } + case *types.PluginV1: + out.Resource = &proto.Event_Plugin{ + Plugin: r, + } + default: return nil, trace.BadParameter("resource type %T is not supported", in.Resource) } @@ -610,6 +638,15 @@ func EventFromGRPC(in *proto.Event) (*types.Event, error) { } else if r := in.GetAutoUpdateAgentReport(); r != nil { out.Resource = types.Resource153ToLegacy(r) return &out, nil + } else if r := in.GetAutoUpdateBotInstanceReport(); r != nil { + out.Resource = types.Resource153ToLegacy(r) + return &out, nil + } else if r := in.GetScopedRole(); r != nil { + out.Resource = types.Resource153ToLegacy(r) + return &out, nil + } else if r := in.GetScopedRoleAssignment(); r != nil { + out.Resource = types.Resource153ToLegacy(r) + return &out, nil } else if r := 
in.GetUserTask(); r != nil { out.Resource = types.Resource153ToLegacy(r) return &out, nil @@ -637,6 +674,15 @@ func EventFromGRPC(in *proto.Event) (*types.Event, error) { } else if r := in.GetHealthCheckConfig(); r != nil { out.Resource = types.Resource153ToLegacy(r) return &out, nil + } else if r := in.GetRelayServer(); r != nil { + out.Resource = types.ProtoResource153ToLegacy(r) + return &out, nil + } else if r := in.GetPlugin(); r != nil { + out.Resource = r + return &out, nil + } else if r := in.GetRecordingEncryption(); r != nil { + out.Resource = types.ProtoResource153ToLegacy(r) + return &out, nil } else { return nil, trace.BadParameter("received unsupported resource %T", in.Resource) } diff --git a/api/client/inventory.go b/api/client/inventory.go index 1232a50c36df8..88bad22c1fdcf 100644 --- a/api/client/inventory.go +++ b/api/client/inventory.go @@ -343,6 +343,10 @@ func (i *downstreamICS) runSendLoop(stream proto.AuthService_InventoryControlStr oneOf.Msg = &proto.UpstreamInventoryOneOf_Goodbye{ Goodbye: msg, } + case *proto.UpstreamInventoryStopHeartbeat: + oneOf.Msg = &proto.UpstreamInventoryOneOf_StopHeartbeat{ + StopHeartbeat: msg, + } default: sendMsg.errC <- trace.BadParameter("cannot send unexpected upstream msg type: %T", msg) continue @@ -484,6 +488,8 @@ func (i *upstreamICS) runRecvLoop(stream proto.AuthService_InventoryControlStrea msg = oneOf.GetAgentMetadata() case oneOf.GetGoodbye() != nil: msg = oneOf.GetGoodbye() + case oneOf.GetStopHeartbeat() != nil: + msg = oneOf.GetStopHeartbeat() default: slog.WarnContext(stream.Context(), "received unknown upstream message", "message", oneOf) continue diff --git a/api/client/mock_server_test.go b/api/client/mock_server_test.go index e4113a22ffe48..1aaa55a95f49a 100644 --- a/api/client/mock_server_test.go +++ b/api/client/mock_server_test.go @@ -26,6 +26,7 @@ import ( "google.golang.org/grpc/credentials" "github.com/gravitational/teleport/api/client/proto" + recordingencryptionv1 
"github.com/gravitational/teleport/api/gen/proto/go/teleport/recordingencryption/v1" "github.com/gravitational/teleport/api/testhelpers/mtls" "github.com/gravitational/teleport/api/utils/grpc/interceptors" ) @@ -37,7 +38,12 @@ type mockServer struct { mtlsConfig *mtls.Config } -func newMockServer(t *testing.T, addr string, service proto.AuthServiceServer) *mockServer { +type mockServices struct { + auth proto.AuthServiceServer + recordingEncryption recordingencryptionv1.RecordingEncryptionServiceServer +} + +func newMockServer(t *testing.T, addr string, services mockServices) *mockServer { t.Helper() m := &mockServer{ addr: addr, @@ -50,15 +56,21 @@ func newMockServer(t *testing.T, addr string, service proto.AuthServiceServer) * grpc.StreamInterceptor(interceptors.GRPCServerStreamErrorInterceptor), ) - proto.RegisterAuthServiceServer(m.grpc, service) + if services.auth != nil { + proto.RegisterAuthServiceServer(m.grpc, services.auth) + } + + if services.recordingEncryption != nil { + recordingencryptionv1.RegisterRecordingEncryptionServiceServer(m.grpc, services.recordingEncryption) + } return m } // startMockServer starts a new mock server. Parallel tests cannot use the same addr. -func startMockServer(t *testing.T, service proto.AuthServiceServer) *mockServer { +func startMockServer(t *testing.T, services mockServices) *mockServer { l, err := net.Listen("tcp", "localhost:") require.NoError(t, err) - srv := newMockServer(t, l.Addr().String(), service) + srv := newMockServer(t, l.Addr().String(), services) srv.serve(t, l) return srv } diff --git a/api/client/proto/authservice.pb.go b/api/client/proto/authservice.pb.go index b61231948dc15..249aa365ffd9b 100644 --- a/api/client/proto/authservice.pb.go +++ b/api/client/proto/authservice.pb.go @@ -335,6 +335,10 @@ const ( // tunnel. Requests from this requester allows reuse of the MFA session // response but TTL is limited to single use TTL. 
UserCertsRequest_TSH_DB_EXEC UserCertsRequest_Requester = 5 + // TSH_APP_AWS_CREDENTIALPROCESS is set when tsh provides access to an AWS App that uses client-side credentials. + // When using per-session MFA, this ensures the TTL of the certificate (and thus the AWS session) is the same as the Teleport identity session. + // AWS credentials should not be written to disk when this requester is used, but may be exported as environment variables through stdout. + UserCertsRequest_TSH_APP_AWS_CREDENTIALPROCESS UserCertsRequest_Requester = 6 ) var UserCertsRequest_Requester_name = map[int32]string{ @@ -344,6 +348,7 @@ var UserCertsRequest_Requester_name = map[int32]string{ 3: "TSH_KUBE_LOCAL_PROXY_HEADLESS", 4: "TSH_APP_LOCAL_PROXY", 5: "TSH_DB_EXEC", + 6: "TSH_APP_AWS_CREDENTIALPROCESS", } var UserCertsRequest_Requester_value = map[string]int32{ @@ -353,6 +358,7 @@ var UserCertsRequest_Requester_value = map[string]int32{ "TSH_KUBE_LOCAL_PROXY_HEADLESS": 3, "TSH_APP_LOCAL_PROXY": 4, "TSH_DB_EXEC": 5, + "TSH_APP_AWS_CREDENTIALPROCESS": 6, } func (x UserCertsRequest_Requester) String() string { @@ -423,7 +429,7 @@ func (x DatabaseCertRequest_Requester) String() string { } func (DatabaseCertRequest_Requester) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{76, 0} + return fileDescriptor_0ffcffcda38ae159, []int{82, 0} } // Extensions are the extensions to add to the certificate. @@ -451,7 +457,7 @@ func (x DatabaseCertRequest_Extensions) String() string { } func (DatabaseCertRequest_Extensions) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{76, 1} + return fileDescriptor_0ffcffcda38ae159, []int{82, 1} } // Watch specifies watch parameters @@ -1560,6 +1566,131 @@ func (m *ChangePasswordRequest) GetWebauthn() *webauthn.CredentialAssertionRespo return nil } +type ListSemaphoresRequest struct { + // The maximum number of items to return. + // The server may impose a different page size at its discretion.
+ PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // The next_page_token value returned from a previous List request, if any. + PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + // filter encodes semaphore filtering params. + Filter *types.SemaphoreFilter `protobuf:"bytes,3,opt,name=filter,proto3" json:"filter,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListSemaphoresRequest) Reset() { *m = ListSemaphoresRequest{} } +func (m *ListSemaphoresRequest) String() string { return proto.CompactTextString(m) } +func (*ListSemaphoresRequest) ProtoMessage() {} +func (*ListSemaphoresRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{11} +} +func (m *ListSemaphoresRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListSemaphoresRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListSemaphoresRequest.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListSemaphoresRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListSemaphoresRequest.Merge(m, src) +} +func (m *ListSemaphoresRequest) XXX_Size() int { + return m.Size() +} +func (m *ListSemaphoresRequest) XXX_DiscardUnknown() { + xxx_messageInfo_ListSemaphoresRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_ListSemaphoresRequest proto.InternalMessageInfo + +func (m *ListSemaphoresRequest) GetPageSize() int32 { + if m != nil { + return m.PageSize + } + return 0 +} + +func (m *ListSemaphoresRequest) GetPageToken() string { + if m != nil { + return m.PageToken + } + return "" +} + +func (m *ListSemaphoresRequest) GetFilter() *types.SemaphoreFilter { + if m != nil { + 
return m.Filter + } + return nil +} + +type ListSemaphoresResponse struct { + // a list of semaphores. + Semaphores []*types.SemaphoreV3 `protobuf:"bytes,1,rep,name=semaphores,proto3" json:"semaphores,omitempty"` + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. + NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListSemaphoresResponse) Reset() { *m = ListSemaphoresResponse{} } +func (m *ListSemaphoresResponse) String() string { return proto.CompactTextString(m) } +func (*ListSemaphoresResponse) ProtoMessage() {} +func (*ListSemaphoresResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{12} +} +func (m *ListSemaphoresResponse) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListSemaphoresResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListSemaphoresResponse.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListSemaphoresResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListSemaphoresResponse.Merge(m, src) +} +func (m *ListSemaphoresResponse) XXX_Size() int { + return m.Size() +} +func (m *ListSemaphoresResponse) XXX_DiscardUnknown() { + xxx_messageInfo_ListSemaphoresResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_ListSemaphoresResponse proto.InternalMessageInfo + +func (m *ListSemaphoresResponse) GetSemaphores() []*types.SemaphoreV3 { + if m != nil { + return m.Semaphores + } + return nil +} + +func (m *ListSemaphoresResponse) GetNextPageToken() string { + if m != nil { + return m.NextPageToken + } + return "" +} + // PluginDataSeq is a sequence of plugin 
data. type PluginDataSeq struct { PluginData []*types.PluginDataV3 `protobuf:"bytes,1,rep,name=PluginData,proto3" json:"plugin_data"` @@ -1572,7 +1703,7 @@ func (m *PluginDataSeq) Reset() { *m = PluginDataSeq{} } func (m *PluginDataSeq) String() string { return proto.CompactTextString(m) } func (*PluginDataSeq) ProtoMessage() {} func (*PluginDataSeq) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{11} + return fileDescriptor_0ffcffcda38ae159, []int{13} } func (m *PluginDataSeq) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1640,7 +1771,7 @@ func (m *RequestStateSetter) Reset() { *m = RequestStateSetter{} } func (m *RequestStateSetter) String() string { return proto.CompactTextString(m) } func (*RequestStateSetter) ProtoMessage() {} func (*RequestStateSetter) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{12} + return fileDescriptor_0ffcffcda38ae159, []int{14} } func (m *RequestStateSetter) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1723,7 +1854,7 @@ func (m *RequestID) Reset() { *m = RequestID{} } func (m *RequestID) String() string { return proto.CompactTextString(m) } func (*RequestID) ProtoMessage() {} func (*RequestID) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{13} + return fileDescriptor_0ffcffcda38ae159, []int{15} } func (m *RequestID) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1771,7 +1902,7 @@ func (m *GetResetPasswordTokenRequest) Reset() { *m = GetResetPasswordTo func (m *GetResetPasswordTokenRequest) String() string { return proto.CompactTextString(m) } func (*GetResetPasswordTokenRequest) ProtoMessage() {} func (*GetResetPasswordTokenRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{14} + return fileDescriptor_0ffcffcda38ae159, []int{16} } func (m *GetResetPasswordTokenRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1824,7 +1955,7 @@ func (m 
*CreateResetPasswordTokenRequest) Reset() { *m = CreateResetPass func (m *CreateResetPasswordTokenRequest) String() string { return proto.CompactTextString(m) } func (*CreateResetPasswordTokenRequest) ProtoMessage() {} func (*CreateResetPasswordTokenRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{15} + return fileDescriptor_0ffcffcda38ae159, []int{17} } func (m *CreateResetPasswordTokenRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1874,6 +2005,124 @@ func (m *CreateResetPasswordTokenRequest) GetTTL() Duration { return 0 } +// ListResetPasswordTokenRequest is a request for a page of user tokens. +type ListResetPasswordTokenRequest struct { + // PageSize is the maximum number of resources to retrieve. + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // PageToken is the next_page_token value returned from a previous List + // request, if any. Leave it empty to start from the first page.
+ PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListResetPasswordTokenRequest) Reset() { *m = ListResetPasswordTokenRequest{} } +func (m *ListResetPasswordTokenRequest) String() string { return proto.CompactTextString(m) } +func (*ListResetPasswordTokenRequest) ProtoMessage() {} +func (*ListResetPasswordTokenRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{18} +} +func (m *ListResetPasswordTokenRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListResetPasswordTokenRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListResetPasswordTokenRequest.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListResetPasswordTokenRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListResetPasswordTokenRequest.Merge(m, src) +} +func (m *ListResetPasswordTokenRequest) XXX_Size() int { + return m.Size() +} +func (m *ListResetPasswordTokenRequest) XXX_DiscardUnknown() { + xxx_messageInfo_ListResetPasswordTokenRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_ListResetPasswordTokenRequest proto.InternalMessageInfo + +func (m *ListResetPasswordTokenRequest) GetPageSize() int32 { + if m != nil { + return m.PageSize + } + return 0 +} + +func (m *ListResetPasswordTokenRequest) GetPageToken() string { + if m != nil { + return m.PageToken + } + return "" +} + +// ListResetPasswordTokenResponse contains a page of user tokens. +type ListResetPasswordTokenResponse struct { + // UserTokens is a list of user tokens. 
+ UserTokens []*types.UserTokenV3 `protobuf:"bytes,1,rep,name=user_tokens,json=userTokens,proto3" json:"user_tokens,omitempty"` + // NextPageToken is the token for the next page of user tokens. + NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListResetPasswordTokenResponse) Reset() { *m = ListResetPasswordTokenResponse{} } +func (m *ListResetPasswordTokenResponse) String() string { return proto.CompactTextString(m) } +func (*ListResetPasswordTokenResponse) ProtoMessage() {} +func (*ListResetPasswordTokenResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{19} +} +func (m *ListResetPasswordTokenResponse) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListResetPasswordTokenResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListResetPasswordTokenResponse.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListResetPasswordTokenResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListResetPasswordTokenResponse.Merge(m, src) +} +func (m *ListResetPasswordTokenResponse) XXX_Size() int { + return m.Size() +} +func (m *ListResetPasswordTokenResponse) XXX_DiscardUnknown() { + xxx_messageInfo_ListResetPasswordTokenResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_ListResetPasswordTokenResponse proto.InternalMessageInfo + +func (m *ListResetPasswordTokenResponse) GetUserTokens() []*types.UserTokenV3 { + if m != nil { + return m.UserTokens + } + return nil +} + +func (m *ListResetPasswordTokenResponse) GetNextPageToken() string { + if m != nil { + return m.NextPageToken + } + return "" +} + // RenewableCertsRequest is a request to
generate a first set of renewable // certificates from a bot join token. type RenewableCertsRequest struct { @@ -1890,7 +2139,7 @@ func (m *RenewableCertsRequest) Reset() { *m = RenewableCertsRequest{} } func (m *RenewableCertsRequest) String() string { return proto.CompactTextString(m) } func (*RenewableCertsRequest) ProtoMessage() {} func (*RenewableCertsRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{16} + return fileDescriptor_0ffcffcda38ae159, []int{20} } func (m *RenewableCertsRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1944,7 +2193,7 @@ func (m *PingRequest) Reset() { *m = PingRequest{} } func (m *PingRequest) String() string { return proto.CompactTextString(m) } func (*PingRequest) ProtoMessage() {} func (*PingRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{17} + return fileDescriptor_0ffcffcda38ae159, []int{21} } func (m *PingRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2004,7 +2253,7 @@ func (m *PingResponse) Reset() { *m = PingResponse{} } func (m *PingResponse) String() string { return proto.CompactTextString(m) } func (*PingResponse) ProtoMessage() {} func (*PingResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{18} + return fileDescriptor_0ffcffcda38ae159, []int{22} } func (m *PingResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2191,7 +2440,9 @@ type Features struct { // *Access* to the feature is gated on the `AccessMonitoring` entitlement. AccessMonitoringConfigured bool `protobuf:"varint,36,opt,name=AccessMonitoringConfigured,proto3" json:"AccessMonitoringConfigured,omitempty"` // AccessGraphDemoMode enables the ability to opt-in to a demo mode of Access Graph with limited features. 
- AccessGraphDemoMode bool `protobuf:"varint,38,opt,name=AccessGraphDemoMode,proto3" json:"access_graph_demo_mode,omitempty"` + AccessGraphDemoMode bool `protobuf:"varint,38,opt,name=AccessGraphDemoMode,proto3" json:"access_graph_demo_mode,omitempty"` + // ClientIPRestrictions allows Cloud users to set up a client IP allowlist. + ClientIPRestrictions bool `protobuf:"varint,39,opt,name=ClientIPRestrictions,proto3" json:"client_ip_restrictions,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -2201,7 +2452,7 @@ func (m *Features) Reset() { *m = Features{} } func (m *Features) String() string { return proto.CompactTextString(m) } func (*Features) ProtoMessage() {} func (*Features) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{19} + return fileDescriptor_0ffcffcda38ae159, []int{23} } func (m *Features) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2468,6 +2719,13 @@ func (m *Features) GetAccessGraphDemoMode() bool { return false } +func (m *Features) GetClientIPRestrictions() bool { + if m != nil { + return m.ClientIPRestrictions + } + return false +} + // EntitlementInfo is the state and limits of a particular entitlement type EntitlementInfo struct { // enabled indicates the feature is 'on' if true @@ -2483,7 +2741,7 @@ func (m *EntitlementInfo) Reset() { *m = EntitlementInfo{} } func (m *EntitlementInfo) String() string { return proto.CompactTextString(m) } func (*EntitlementInfo) ProtoMessage() {} func (*EntitlementInfo) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{20} + return fileDescriptor_0ffcffcda38ae159, []int{24} } func (m *EntitlementInfo) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2550,7 +2808,7 @@ func (m *DeviceTrustFeature) Reset() { *m = DeviceTrustFeature{} } func (m *DeviceTrustFeature) String() string { return proto.CompactTextString(m) } func (*DeviceTrustFeature) ProtoMessage() {} func 
(*DeviceTrustFeature) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{21} + return fileDescriptor_0ffcffcda38ae159, []int{25} } func (m *DeviceTrustFeature) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2609,7 +2867,7 @@ func (m *AccessRequestsFeature) Reset() { *m = AccessRequestsFeature{} } func (m *AccessRequestsFeature) String() string { return proto.CompactTextString(m) } func (*AccessRequestsFeature) ProtoMessage() {} func (*AccessRequestsFeature) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{22} + return fileDescriptor_0ffcffcda38ae159, []int{26} } func (m *AccessRequestsFeature) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2660,7 +2918,7 @@ func (m *AccessListFeature) Reset() { *m = AccessListFeature{} } func (m *AccessListFeature) String() string { return proto.CompactTextString(m) } func (*AccessListFeature) ProtoMessage() {} func (*AccessListFeature) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{23} + return fileDescriptor_0ffcffcda38ae159, []int{27} } func (m *AccessListFeature) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2712,7 +2970,7 @@ func (m *AccessMonitoringFeature) Reset() { *m = AccessMonitoringFeature func (m *AccessMonitoringFeature) String() string { return proto.CompactTextString(m) } func (*AccessMonitoringFeature) ProtoMessage() {} func (*AccessMonitoringFeature) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{24} + return fileDescriptor_0ffcffcda38ae159, []int{28} } func (m *AccessMonitoringFeature) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2768,7 +3026,7 @@ func (m *PolicyFeature) Reset() { *m = PolicyFeature{} } func (m *PolicyFeature) String() string { return proto.CompactTextString(m) } func (*PolicyFeature) ProtoMessage() {} func (*PolicyFeature) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{25} + return 
fileDescriptor_0ffcffcda38ae159, []int{29} } func (m *PolicyFeature) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2817,7 +3075,7 @@ func (m *DeleteUserRequest) Reset() { *m = DeleteUserRequest{} } func (m *DeleteUserRequest) String() string { return proto.CompactTextString(m) } func (*DeleteUserRequest) ProtoMessage() {} func (*DeleteUserRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{26} + return fileDescriptor_0ffcffcda38ae159, []int{30} } func (m *DeleteUserRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2865,7 +3123,7 @@ func (m *Semaphores) Reset() { *m = Semaphores{} } func (m *Semaphores) String() string { return proto.CompactTextString(m) } func (*Semaphores) ProtoMessage() {} func (*Semaphores) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{27} + return fileDescriptor_0ffcffcda38ae159, []int{31} } func (m *Semaphores) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2922,7 +3180,7 @@ func (m *AuditStreamRequest) Reset() { *m = AuditStreamRequest{} } func (m *AuditStreamRequest) String() string { return proto.CompactTextString(m) } func (*AuditStreamRequest) ProtoMessage() {} func (*AuditStreamRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{28} + return fileDescriptor_0ffcffcda38ae159, []int{32} } func (m *AuditStreamRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3047,7 +3305,7 @@ func (m *AuditStreamStatus) Reset() { *m = AuditStreamStatus{} } func (m *AuditStreamStatus) String() string { return proto.CompactTextString(m) } func (*AuditStreamStatus) ProtoMessage() {} func (*AuditStreamStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{29} + return fileDescriptor_0ffcffcda38ae159, []int{33} } func (m *AuditStreamStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3095,7 +3353,7 @@ func (m *CreateStream) Reset() { *m = CreateStream{} } 
func (m *CreateStream) String() string { return proto.CompactTextString(m) } func (*CreateStream) ProtoMessage() {} func (*CreateStream) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{30} + return fileDescriptor_0ffcffcda38ae159, []int{34} } func (m *CreateStream) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3146,7 +3404,7 @@ func (m *ResumeStream) Reset() { *m = ResumeStream{} } func (m *ResumeStream) String() string { return proto.CompactTextString(m) } func (*ResumeStream) ProtoMessage() {} func (*ResumeStream) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{31} + return fileDescriptor_0ffcffcda38ae159, []int{35} } func (m *ResumeStream) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3201,7 +3459,7 @@ func (m *CompleteStream) Reset() { *m = CompleteStream{} } func (m *CompleteStream) String() string { return proto.CompactTextString(m) } func (*CompleteStream) ProtoMessage() {} func (*CompleteStream) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{32} + return fileDescriptor_0ffcffcda38ae159, []int{36} } func (m *CompleteStream) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3241,7 +3499,7 @@ func (m *FlushAndCloseStream) Reset() { *m = FlushAndCloseStream{} } func (m *FlushAndCloseStream) String() string { return proto.CompactTextString(m) } func (*FlushAndCloseStream) ProtoMessage() {} func (*FlushAndCloseStream) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{33} + return fileDescriptor_0ffcffcda38ae159, []int{37} } func (m *FlushAndCloseStream) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3283,7 +3541,7 @@ func (m *UpsertApplicationServerRequest) Reset() { *m = UpsertApplicatio func (m *UpsertApplicationServerRequest) String() string { return proto.CompactTextString(m) } func (*UpsertApplicationServerRequest) ProtoMessage() {} func (*UpsertApplicationServerRequest) Descriptor() ([]byte, 
[]int) { - return fileDescriptor_0ffcffcda38ae159, []int{34} + return fileDescriptor_0ffcffcda38ae159, []int{38} } func (m *UpsertApplicationServerRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3336,7 +3594,7 @@ func (m *DeleteApplicationServerRequest) Reset() { *m = DeleteApplicatio func (m *DeleteApplicationServerRequest) String() string { return proto.CompactTextString(m) } func (*DeleteApplicationServerRequest) ProtoMessage() {} func (*DeleteApplicationServerRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{35} + return fileDescriptor_0ffcffcda38ae159, []int{39} } func (m *DeleteApplicationServerRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3399,7 +3657,7 @@ func (m *DeleteAllApplicationServersRequest) Reset() { *m = DeleteAllApp func (m *DeleteAllApplicationServersRequest) String() string { return proto.CompactTextString(m) } func (*DeleteAllApplicationServersRequest) ProtoMessage() {} func (*DeleteAllApplicationServersRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{36} + return fileDescriptor_0ffcffcda38ae159, []int{40} } func (m *DeleteAllApplicationServersRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3457,7 +3715,7 @@ func (m *GenerateAppTokenRequest) Reset() { *m = GenerateAppTokenRequest func (m *GenerateAppTokenRequest) String() string { return proto.CompactTextString(m) } func (*GenerateAppTokenRequest) ProtoMessage() {} func (*GenerateAppTokenRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{37} + return fileDescriptor_0ffcffcda38ae159, []int{41} } func (m *GenerateAppTokenRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3533,7 +3791,7 @@ func (m *GenerateAppTokenResponse) Reset() { *m = GenerateAppTokenRespon func (m *GenerateAppTokenResponse) String() string { return proto.CompactTextString(m) } func (*GenerateAppTokenResponse) ProtoMessage() {} func 
(*GenerateAppTokenResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{38} + return fileDescriptor_0ffcffcda38ae159, []int{42} } func (m *GenerateAppTokenResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3582,7 +3840,7 @@ func (m *GetAppSessionRequest) Reset() { *m = GetAppSessionRequest{} } func (m *GetAppSessionRequest) String() string { return proto.CompactTextString(m) } func (*GetAppSessionRequest) ProtoMessage() {} func (*GetAppSessionRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{39} + return fileDescriptor_0ffcffcda38ae159, []int{43} } func (m *GetAppSessionRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3631,7 +3889,7 @@ func (m *GetAppSessionResponse) Reset() { *m = GetAppSessionResponse{} } func (m *GetAppSessionResponse) String() string { return proto.CompactTextString(m) } func (*GetAppSessionResponse) ProtoMessage() {} func (*GetAppSessionResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{40} + return fileDescriptor_0ffcffcda38ae159, []int{44} } func (m *GetAppSessionResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3689,7 +3947,7 @@ func (m *ListAppSessionsRequest) Reset() { *m = ListAppSessionsRequest{} func (m *ListAppSessionsRequest) String() string { return proto.CompactTextString(m) } func (*ListAppSessionsRequest) ProtoMessage() {} func (*ListAppSessionsRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{41} + return fileDescriptor_0ffcffcda38ae159, []int{45} } func (m *ListAppSessionsRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3755,7 +4013,7 @@ func (m *ListAppSessionsResponse) Reset() { *m = ListAppSessionsResponse func (m *ListAppSessionsResponse) String() string { return proto.CompactTextString(m) } func (*ListAppSessionsResponse) ProtoMessage() {} func (*ListAppSessionsResponse) Descriptor() ([]byte, []int) { - 
return fileDescriptor_0ffcffcda38ae159, []int{42} + return fileDescriptor_0ffcffcda38ae159, []int{46} } func (m *ListAppSessionsResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3811,7 +4069,7 @@ func (m *GetSnowflakeSessionsResponse) Reset() { *m = GetSnowflakeSessio func (m *GetSnowflakeSessionsResponse) String() string { return proto.CompactTextString(m) } func (*GetSnowflakeSessionsResponse) ProtoMessage() {} func (*GetSnowflakeSessionsResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{43} + return fileDescriptor_0ffcffcda38ae159, []int{47} } func (m *GetSnowflakeSessionsResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3869,7 +4127,7 @@ func (m *ListSAMLIdPSessionsRequest) Reset() { *m = ListSAMLIdPSessionsR func (m *ListSAMLIdPSessionsRequest) String() string { return proto.CompactTextString(m) } func (*ListSAMLIdPSessionsRequest) ProtoMessage() {} func (*ListSAMLIdPSessionsRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{44} + return fileDescriptor_0ffcffcda38ae159, []int{48} } func (m *ListSAMLIdPSessionsRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3935,7 +4193,7 @@ func (m *ListSAMLIdPSessionsResponse) Reset() { *m = ListSAMLIdPSessions func (m *ListSAMLIdPSessionsResponse) String() string { return proto.CompactTextString(m) } func (*ListSAMLIdPSessionsResponse) ProtoMessage() {} func (*ListSAMLIdPSessionsResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{45} + return fileDescriptor_0ffcffcda38ae159, []int{49} } func (m *ListSAMLIdPSessionsResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4011,7 +4269,7 @@ func (m *CreateAppSessionRequest) Reset() { *m = CreateAppSessionRequest func (m *CreateAppSessionRequest) String() string { return proto.CompactTextString(m) } func (*CreateAppSessionRequest) ProtoMessage() {} func (*CreateAppSessionRequest) Descriptor() ([]byte, 
[]int) { - return fileDescriptor_0ffcffcda38ae159, []int{46} + return fileDescriptor_0ffcffcda38ae159, []int{50} } func (m *CreateAppSessionRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4123,7 +4381,7 @@ func (m *CreateAppSessionResponse) Reset() { *m = CreateAppSessionRespon func (m *CreateAppSessionResponse) String() string { return proto.CompactTextString(m) } func (*CreateAppSessionResponse) ProtoMessage() {} func (*CreateAppSessionResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{47} + return fileDescriptor_0ffcffcda38ae159, []int{51} } func (m *CreateAppSessionResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4176,7 +4434,7 @@ func (m *CreateSnowflakeSessionRequest) Reset() { *m = CreateSnowflakeSe func (m *CreateSnowflakeSessionRequest) String() string { return proto.CompactTextString(m) } func (*CreateSnowflakeSessionRequest) ProtoMessage() {} func (*CreateSnowflakeSessionRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{48} + return fileDescriptor_0ffcffcda38ae159, []int{52} } func (m *CreateSnowflakeSessionRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4238,7 +4496,7 @@ func (m *CreateSnowflakeSessionResponse) Reset() { *m = CreateSnowflakeS func (m *CreateSnowflakeSessionResponse) String() string { return proto.CompactTextString(m) } func (*CreateSnowflakeSessionResponse) ProtoMessage() {} func (*CreateSnowflakeSessionResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{49} + return fileDescriptor_0ffcffcda38ae159, []int{53} } func (m *CreateSnowflakeSessionResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4291,7 +4549,7 @@ func (m *CreateSAMLIdPSessionRequest) Reset() { *m = CreateSAMLIdPSessio func (m *CreateSAMLIdPSessionRequest) String() string { return proto.CompactTextString(m) } func (*CreateSAMLIdPSessionRequest) ProtoMessage() {} func 
(*CreateSAMLIdPSessionRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{50} + return fileDescriptor_0ffcffcda38ae159, []int{54} } func (m *CreateSAMLIdPSessionRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4353,7 +4611,7 @@ func (m *CreateSAMLIdPSessionResponse) Reset() { *m = CreateSAMLIdPSessi func (m *CreateSAMLIdPSessionResponse) String() string { return proto.CompactTextString(m) } func (*CreateSAMLIdPSessionResponse) ProtoMessage() {} func (*CreateSAMLIdPSessionResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{51} + return fileDescriptor_0ffcffcda38ae159, []int{55} } func (m *CreateSAMLIdPSessionResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4402,7 +4660,7 @@ func (m *GetSnowflakeSessionRequest) Reset() { *m = GetSnowflakeSessionR func (m *GetSnowflakeSessionRequest) String() string { return proto.CompactTextString(m) } func (*GetSnowflakeSessionRequest) ProtoMessage() {} func (*GetSnowflakeSessionRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{52} + return fileDescriptor_0ffcffcda38ae159, []int{56} } func (m *GetSnowflakeSessionRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4451,7 +4709,7 @@ func (m *GetSnowflakeSessionResponse) Reset() { *m = GetSnowflakeSession func (m *GetSnowflakeSessionResponse) String() string { return proto.CompactTextString(m) } func (*GetSnowflakeSessionResponse) ProtoMessage() {} func (*GetSnowflakeSessionResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{53} + return fileDescriptor_0ffcffcda38ae159, []int{57} } func (m *GetSnowflakeSessionResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4500,7 +4758,7 @@ func (m *GetSAMLIdPSessionRequest) Reset() { *m = GetSAMLIdPSessionReque func (m *GetSAMLIdPSessionRequest) String() string { return proto.CompactTextString(m) } func (*GetSAMLIdPSessionRequest) 
ProtoMessage() {} func (*GetSAMLIdPSessionRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{54} + return fileDescriptor_0ffcffcda38ae159, []int{58} } func (m *GetSAMLIdPSessionRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4549,7 +4807,7 @@ func (m *GetSAMLIdPSessionResponse) Reset() { *m = GetSAMLIdPSessionResp func (m *GetSAMLIdPSessionResponse) String() string { return proto.CompactTextString(m) } func (*GetSAMLIdPSessionResponse) ProtoMessage() {} func (*GetSAMLIdPSessionResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{55} + return fileDescriptor_0ffcffcda38ae159, []int{59} } func (m *GetSAMLIdPSessionResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4597,7 +4855,7 @@ func (m *DeleteAppSessionRequest) Reset() { *m = DeleteAppSessionRequest func (m *DeleteAppSessionRequest) String() string { return proto.CompactTextString(m) } func (*DeleteAppSessionRequest) ProtoMessage() {} func (*DeleteAppSessionRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{56} + return fileDescriptor_0ffcffcda38ae159, []int{60} } func (m *DeleteAppSessionRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4645,7 +4903,7 @@ func (m *DeleteSnowflakeSessionRequest) Reset() { *m = DeleteSnowflakeSe func (m *DeleteSnowflakeSessionRequest) String() string { return proto.CompactTextString(m) } func (*DeleteSnowflakeSessionRequest) ProtoMessage() {} func (*DeleteSnowflakeSessionRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{57} + return fileDescriptor_0ffcffcda38ae159, []int{61} } func (m *DeleteSnowflakeSessionRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4693,7 +4951,7 @@ func (m *DeleteSAMLIdPSessionRequest) Reset() { *m = DeleteSAMLIdPSessio func (m *DeleteSAMLIdPSessionRequest) String() string { return proto.CompactTextString(m) } func 
(*DeleteSAMLIdPSessionRequest) ProtoMessage() {} func (*DeleteSAMLIdPSessionRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{58} + return fileDescriptor_0ffcffcda38ae159, []int{62} } func (m *DeleteSAMLIdPSessionRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4742,7 +5000,7 @@ func (m *DeleteUserAppSessionsRequest) Reset() { *m = DeleteUserAppSessi func (m *DeleteUserAppSessionsRequest) String() string { return proto.CompactTextString(m) } func (*DeleteUserAppSessionsRequest) ProtoMessage() {} func (*DeleteUserAppSessionsRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{59} + return fileDescriptor_0ffcffcda38ae159, []int{63} } func (m *DeleteUserAppSessionsRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4791,7 +5049,7 @@ func (m *DeleteUserSAMLIdPSessionsRequest) Reset() { *m = DeleteUserSAML func (m *DeleteUserSAMLIdPSessionsRequest) String() string { return proto.CompactTextString(m) } func (*DeleteUserSAMLIdPSessionsRequest) ProtoMessage() {} func (*DeleteUserSAMLIdPSessionsRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{60} + return fileDescriptor_0ffcffcda38ae159, []int{64} } func (m *DeleteUserSAMLIdPSessionsRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4840,7 +5098,7 @@ func (m *GetWebSessionResponse) Reset() { *m = GetWebSessionResponse{} } func (m *GetWebSessionResponse) String() string { return proto.CompactTextString(m) } func (*GetWebSessionResponse) ProtoMessage() {} func (*GetWebSessionResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{61} + return fileDescriptor_0ffcffcda38ae159, []int{65} } func (m *GetWebSessionResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4889,7 +5147,7 @@ func (m *GetWebSessionsResponse) Reset() { *m = GetWebSessionsResponse{} func (m *GetWebSessionsResponse) String() string { return 
proto.CompactTextString(m) } func (*GetWebSessionsResponse) ProtoMessage() {} func (*GetWebSessionsResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{62} + return fileDescriptor_0ffcffcda38ae159, []int{66} } func (m *GetWebSessionsResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4938,7 +5196,7 @@ func (m *GetWebTokenResponse) Reset() { *m = GetWebTokenResponse{} } func (m *GetWebTokenResponse) String() string { return proto.CompactTextString(m) } func (*GetWebTokenResponse) ProtoMessage() {} func (*GetWebTokenResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{63} + return fileDescriptor_0ffcffcda38ae159, []int{67} } func (m *GetWebTokenResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4987,7 +5245,7 @@ func (m *GetWebTokensResponse) Reset() { *m = GetWebTokensResponse{} } func (m *GetWebTokensResponse) String() string { return proto.CompactTextString(m) } func (*GetWebTokensResponse) ProtoMessage() {} func (*GetWebTokensResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{64} + return fileDescriptor_0ffcffcda38ae159, []int{68} } func (m *GetWebTokensResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5023,6 +5281,124 @@ func (m *GetWebTokensResponse) GetTokens() []*types.WebTokenV3 { return nil } +// ListWebTokensRequest contains all the requested web tokens. +type ListWebTokensRequest struct { + // The maximum number of items to return. + // The server may impose a different page size at its discretion. + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // The next_page_token value returned from a previous List request, if any. 
+ PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListWebTokensRequest) Reset() { *m = ListWebTokensRequest{} } +func (m *ListWebTokensRequest) String() string { return proto.CompactTextString(m) } +func (*ListWebTokensRequest) ProtoMessage() {} +func (*ListWebTokensRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{69} +} +func (m *ListWebTokensRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListWebTokensRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListWebTokensRequest.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListWebTokensRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListWebTokensRequest.Merge(m, src) +} +func (m *ListWebTokensRequest) XXX_Size() int { + return m.Size() +} +func (m *ListWebTokensRequest) XXX_DiscardUnknown() { + xxx_messageInfo_ListWebTokensRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_ListWebTokensRequest proto.InternalMessageInfo + +func (m *ListWebTokensRequest) GetPageSize() int32 { + if m != nil { + return m.PageSize + } + return 0 +} + +func (m *ListWebTokensRequest) GetPageToken() string { + if m != nil { + return m.PageToken + } + return "" +} + +// ListWebTokensResponse contains all the requested web tokens. +type ListWebTokensResponse struct { + // Tokens is a list of web tokens. + Tokens []*types.WebTokenV3 `protobuf:"bytes,1,rep,name=tokens,proto3" json:"tokens,omitempty"` + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. 
+ NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListWebTokensResponse) Reset() { *m = ListWebTokensResponse{} } +func (m *ListWebTokensResponse) String() string { return proto.CompactTextString(m) } +func (*ListWebTokensResponse) ProtoMessage() {} +func (*ListWebTokensResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{70} +} +func (m *ListWebTokensResponse) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListWebTokensResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListWebTokensResponse.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListWebTokensResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListWebTokensResponse.Merge(m, src) +} +func (m *ListWebTokensResponse) XXX_Size() int { + return m.Size() +} +func (m *ListWebTokensResponse) XXX_DiscardUnknown() { + xxx_messageInfo_ListWebTokensResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_ListWebTokensResponse proto.InternalMessageInfo + +func (m *ListWebTokensResponse) GetTokens() []*types.WebTokenV3 { + if m != nil { + return m.Tokens + } + return nil +} + +func (m *ListWebTokensResponse) GetNextPageToken() string { + if m != nil { + return m.NextPageToken + } + return "" +} + // UpsertKubernetesServerRequest are the parameters used to add or update a // kubernetes server. 
type UpsertKubernetesServerRequest struct { @@ -5036,7 +5412,7 @@ func (m *UpsertKubernetesServerRequest) Reset() { *m = UpsertKubernetesS func (m *UpsertKubernetesServerRequest) String() string { return proto.CompactTextString(m) } func (*UpsertKubernetesServerRequest) ProtoMessage() {} func (*UpsertKubernetesServerRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{65} + return fileDescriptor_0ffcffcda38ae159, []int{71} } func (m *UpsertKubernetesServerRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5087,7 +5463,7 @@ func (m *DeleteKubernetesServerRequest) Reset() { *m = DeleteKubernetesS func (m *DeleteKubernetesServerRequest) String() string { return proto.CompactTextString(m) } func (*DeleteKubernetesServerRequest) ProtoMessage() {} func (*DeleteKubernetesServerRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{66} + return fileDescriptor_0ffcffcda38ae159, []int{72} } func (m *DeleteKubernetesServerRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5141,7 +5517,7 @@ func (m *DeleteAllKubernetesServersRequest) Reset() { *m = DeleteAllKube func (m *DeleteAllKubernetesServersRequest) String() string { return proto.CompactTextString(m) } func (*DeleteAllKubernetesServersRequest) ProtoMessage() {} func (*DeleteAllKubernetesServersRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{67} + return fileDescriptor_0ffcffcda38ae159, []int{73} } func (m *DeleteAllKubernetesServersRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5183,7 +5559,7 @@ func (m *UpsertDatabaseServerRequest) Reset() { *m = UpsertDatabaseServe func (m *UpsertDatabaseServerRequest) String() string { return proto.CompactTextString(m) } func (*UpsertDatabaseServerRequest) ProtoMessage() {} func (*UpsertDatabaseServerRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{68} + return 
fileDescriptor_0ffcffcda38ae159, []int{74} } func (m *UpsertDatabaseServerRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5236,7 +5612,7 @@ func (m *DeleteDatabaseServerRequest) Reset() { *m = DeleteDatabaseServe func (m *DeleteDatabaseServerRequest) String() string { return proto.CompactTextString(m) } func (*DeleteDatabaseServerRequest) ProtoMessage() {} func (*DeleteDatabaseServerRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{69} + return fileDescriptor_0ffcffcda38ae159, []int{75} } func (m *DeleteDatabaseServerRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5299,7 +5675,7 @@ func (m *DeleteAllDatabaseServersRequest) Reset() { *m = DeleteAllDataba func (m *DeleteAllDatabaseServersRequest) String() string { return proto.CompactTextString(m) } func (*DeleteAllDatabaseServersRequest) ProtoMessage() {} func (*DeleteAllDatabaseServersRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{70} + return fileDescriptor_0ffcffcda38ae159, []int{76} } func (m *DeleteAllDatabaseServersRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5348,7 +5724,7 @@ func (m *DatabaseServiceV1List) Reset() { *m = DatabaseServiceV1List{} } func (m *DatabaseServiceV1List) String() string { return proto.CompactTextString(m) } func (*DatabaseServiceV1List) ProtoMessage() {} func (*DatabaseServiceV1List) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{71} + return fileDescriptor_0ffcffcda38ae159, []int{77} } func (m *DatabaseServiceV1List) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5397,7 +5773,7 @@ func (m *UpsertDatabaseServiceRequest) Reset() { *m = UpsertDatabaseServ func (m *UpsertDatabaseServiceRequest) String() string { return proto.CompactTextString(m) } func (*UpsertDatabaseServiceRequest) ProtoMessage() {} func (*UpsertDatabaseServiceRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, 
[]int{72} + return fileDescriptor_0ffcffcda38ae159, []int{78} } func (m *UpsertDatabaseServiceRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5444,7 +5820,7 @@ func (m *DeleteAllDatabaseServicesRequest) Reset() { *m = DeleteAllDatab func (m *DeleteAllDatabaseServicesRequest) String() string { return proto.CompactTextString(m) } func (*DeleteAllDatabaseServicesRequest) ProtoMessage() {} func (*DeleteAllDatabaseServicesRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{73} + return fileDescriptor_0ffcffcda38ae159, []int{79} } func (m *DeleteAllDatabaseServicesRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5489,7 +5865,7 @@ func (m *DatabaseCSRRequest) Reset() { *m = DatabaseCSRRequest{} } func (m *DatabaseCSRRequest) String() string { return proto.CompactTextString(m) } func (*DatabaseCSRRequest) ProtoMessage() {} func (*DatabaseCSRRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{74} + return fileDescriptor_0ffcffcda38ae159, []int{80} } func (m *DatabaseCSRRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5547,7 +5923,7 @@ func (m *DatabaseCSRResponse) Reset() { *m = DatabaseCSRResponse{} } func (m *DatabaseCSRResponse) String() string { return proto.CompactTextString(m) } func (*DatabaseCSRResponse) ProtoMessage() {} func (*DatabaseCSRResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{75} + return fileDescriptor_0ffcffcda38ae159, []int{81} } func (m *DatabaseCSRResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5607,7 +5983,10 @@ type DatabaseCertRequest struct { // CertificateExtensions identifies which extensions, if any, should be added to the certificate. 
CertificateExtensions DatabaseCertRequest_Extensions `protobuf:"varint,6,opt,name=CertificateExtensions,proto3,enum=proto.DatabaseCertRequest_Extensions" json:"certificate_extensions"` // CRLEndpoint is a certificate revocation list distribution point. Required for Windows smartcard certs. - CRLEndpoint string `protobuf:"bytes,7,opt,name=CRLEndpoint,proto3" json:"crl_endpoint"` + // DEPRECATED: use CRLDomain instead. + CRLEndpoint string `protobuf:"bytes,7,opt,name=CRLEndpoint,proto3" json:"crl_endpoint"` // Deprecated: Do not use. + // CRLDomain is the Active Directory domain where CRLs are published. + CRLDomain string `protobuf:"bytes,8,opt,name=CRLDomain,proto3" json:"crl_domain"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -5617,7 +5996,7 @@ func (m *DatabaseCertRequest) Reset() { *m = DatabaseCertRequest{} } func (m *DatabaseCertRequest) String() string { return proto.CompactTextString(m) } func (*DatabaseCertRequest) ProtoMessage() {} func (*DatabaseCertRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{76} + return fileDescriptor_0ffcffcda38ae159, []int{82} } func (m *DatabaseCertRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5689,6 +6068,7 @@ func (m *DatabaseCertRequest) GetCertificateExtensions() DatabaseCertRequest_Ext return DatabaseCertRequest_NORMAL } +// Deprecated: Do not use. func (m *DatabaseCertRequest) GetCRLEndpoint() string { if m != nil { return m.CRLEndpoint @@ -5696,6 +6076,13 @@ func (m *DatabaseCertRequest) GetCRLEndpoint() string { return "" } +func (m *DatabaseCertRequest) GetCRLDomain() string { + if m != nil { + return m.CRLDomain + } + return "" +} + // DatabaseCertResponse contains the signed certificate. type DatabaseCertResponse struct { // Cert is the signed certificate. 
@@ -5711,7 +6098,7 @@ func (m *DatabaseCertResponse) Reset() { *m = DatabaseCertResponse{} } func (m *DatabaseCertResponse) String() string { return proto.CompactTextString(m) } func (*DatabaseCertResponse) ProtoMessage() {} func (*DatabaseCertResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{77} + return fileDescriptor_0ffcffcda38ae159, []int{83} } func (m *DatabaseCertResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5767,7 +6154,7 @@ func (m *SnowflakeJWTRequest) Reset() { *m = SnowflakeJWTRequest{} } func (m *SnowflakeJWTRequest) String() string { return proto.CompactTextString(m) } func (*SnowflakeJWTRequest) ProtoMessage() {} func (*SnowflakeJWTRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{78} + return fileDescriptor_0ffcffcda38ae159, []int{84} } func (m *SnowflakeJWTRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5822,7 +6209,7 @@ func (m *SnowflakeJWTResponse) Reset() { *m = SnowflakeJWTResponse{} } func (m *SnowflakeJWTResponse) String() string { return proto.CompactTextString(m) } func (*SnowflakeJWTResponse) ProtoMessage() {} func (*SnowflakeJWTResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{79} + return fileDescriptor_0ffcffcda38ae159, []int{85} } func (m *SnowflakeJWTResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5871,7 +6258,7 @@ func (m *GetRoleRequest) Reset() { *m = GetRoleRequest{} } func (m *GetRoleRequest) String() string { return proto.CompactTextString(m) } func (*GetRoleRequest) ProtoMessage() {} func (*GetRoleRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{80} + return fileDescriptor_0ffcffcda38ae159, []int{86} } func (m *GetRoleRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5920,7 +6307,7 @@ func (m *GetRolesResponse) Reset() { *m = GetRolesResponse{} } func (m *GetRolesResponse) String() string { return 
proto.CompactTextString(m) } func (*GetRolesResponse) ProtoMessage() {} func (*GetRolesResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{81} + return fileDescriptor_0ffcffcda38ae159, []int{87} } func (m *GetRolesResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5975,7 +6362,7 @@ func (m *ListRolesRequest) Reset() { *m = ListRolesRequest{} } func (m *ListRolesRequest) String() string { return proto.CompactTextString(m) } func (*ListRolesRequest) ProtoMessage() {} func (*ListRolesRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{82} + return fileDescriptor_0ffcffcda38ae159, []int{88} } func (m *ListRolesRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6040,7 +6427,7 @@ func (m *ListRolesResponse) Reset() { *m = ListRolesResponse{} } func (m *ListRolesResponse) String() string { return proto.CompactTextString(m) } func (*ListRolesResponse) ProtoMessage() {} func (*ListRolesResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{83} + return fileDescriptor_0ffcffcda38ae159, []int{89} } func (m *ListRolesResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6096,7 +6483,7 @@ func (m *CreateRoleRequest) Reset() { *m = CreateRoleRequest{} } func (m *CreateRoleRequest) String() string { return proto.CompactTextString(m) } func (*CreateRoleRequest) ProtoMessage() {} func (*CreateRoleRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{84} + return fileDescriptor_0ffcffcda38ae159, []int{90} } func (m *CreateRoleRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6145,7 +6532,7 @@ func (m *UpdateRoleRequest) Reset() { *m = UpdateRoleRequest{} } func (m *UpdateRoleRequest) String() string { return proto.CompactTextString(m) } func (*UpdateRoleRequest) ProtoMessage() {} func (*UpdateRoleRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, 
[]int{85} + return fileDescriptor_0ffcffcda38ae159, []int{91} } func (m *UpdateRoleRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6194,7 +6581,7 @@ func (m *UpsertRoleRequest) Reset() { *m = UpsertRoleRequest{} } func (m *UpsertRoleRequest) String() string { return proto.CompactTextString(m) } func (*UpsertRoleRequest) ProtoMessage() {} func (*UpsertRoleRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{86} + return fileDescriptor_0ffcffcda38ae159, []int{92} } func (m *UpsertRoleRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6243,7 +6630,7 @@ func (m *DeleteRoleRequest) Reset() { *m = DeleteRoleRequest{} } func (m *DeleteRoleRequest) String() string { return proto.CompactTextString(m) } func (*DeleteRoleRequest) ProtoMessage() {} func (*DeleteRoleRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{87} + return fileDescriptor_0ffcffcda38ae159, []int{93} } func (m *DeleteRoleRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6312,7 +6699,7 @@ func (m *MFAAuthenticateChallenge) Reset() { *m = MFAAuthenticateChallen func (m *MFAAuthenticateChallenge) String() string { return proto.CompactTextString(m) } func (*MFAAuthenticateChallenge) ProtoMessage() {} func (*MFAAuthenticateChallenge) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{88} + return fileDescriptor_0ffcffcda38ae159, []int{94} } func (m *MFAAuthenticateChallenge) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6386,7 +6773,7 @@ func (m *MFAAuthenticateResponse) Reset() { *m = MFAAuthenticateResponse func (m *MFAAuthenticateResponse) String() string { return proto.CompactTextString(m) } func (*MFAAuthenticateResponse) ProtoMessage() {} func (*MFAAuthenticateResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{89} + return fileDescriptor_0ffcffcda38ae159, []int{95} } func (m *MFAAuthenticateResponse) 
XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6483,7 +6870,7 @@ func (m *TOTPChallenge) Reset() { *m = TOTPChallenge{} } func (m *TOTPChallenge) String() string { return proto.CompactTextString(m) } func (*TOTPChallenge) ProtoMessage() {} func (*TOTPChallenge) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{90} + return fileDescriptor_0ffcffcda38ae159, []int{96} } func (m *TOTPChallenge) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6524,7 +6911,7 @@ func (m *TOTPResponse) Reset() { *m = TOTPResponse{} } func (m *TOTPResponse) String() string { return proto.CompactTextString(m) } func (*TOTPResponse) ProtoMessage() {} func (*TOTPResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{91} + return fileDescriptor_0ffcffcda38ae159, []int{97} } func (m *TOTPResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6577,7 +6964,7 @@ func (m *SSOChallenge) Reset() { *m = SSOChallenge{} } func (m *SSOChallenge) String() string { return proto.CompactTextString(m) } func (*SSOChallenge) ProtoMessage() {} func (*SSOChallenge) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{92} + return fileDescriptor_0ffcffcda38ae159, []int{98} } func (m *SSOChallenge) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6642,7 +7029,7 @@ func (m *SSOResponse) Reset() { *m = SSOResponse{} } func (m *SSOResponse) String() string { return proto.CompactTextString(m) } func (*SSOResponse) ProtoMessage() {} func (*SSOResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{93} + return fileDescriptor_0ffcffcda38ae159, []int{99} } func (m *SSOResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6702,7 +7089,7 @@ func (m *MFARegisterChallenge) Reset() { *m = MFARegisterChallenge{} } func (m *MFARegisterChallenge) String() string { return proto.CompactTextString(m) } func (*MFARegisterChallenge) ProtoMessage() {} func 
(*MFARegisterChallenge) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{94} + return fileDescriptor_0ffcffcda38ae159, []int{100} } func (m *MFARegisterChallenge) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6791,7 +7178,7 @@ func (m *MFARegisterResponse) Reset() { *m = MFARegisterResponse{} } func (m *MFARegisterResponse) String() string { return proto.CompactTextString(m) } func (*MFARegisterResponse) ProtoMessage() {} func (*MFARegisterResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{95} + return fileDescriptor_0ffcffcda38ae159, []int{101} } func (m *MFARegisterResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6894,7 +7281,7 @@ func (m *TOTPRegisterChallenge) Reset() { *m = TOTPRegisterChallenge{} } func (m *TOTPRegisterChallenge) String() string { return proto.CompactTextString(m) } func (*TOTPRegisterChallenge) ProtoMessage() {} func (*TOTPRegisterChallenge) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{96} + return fileDescriptor_0ffcffcda38ae159, []int{102} } func (m *TOTPRegisterChallenge) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6993,7 +7380,7 @@ func (m *TOTPRegisterResponse) Reset() { *m = TOTPRegisterResponse{} } func (m *TOTPRegisterResponse) String() string { return proto.CompactTextString(m) } func (*TOTPRegisterResponse) ProtoMessage() {} func (*TOTPRegisterResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{97} + return fileDescriptor_0ffcffcda38ae159, []int{103} } func (m *TOTPRegisterResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7047,7 +7434,7 @@ func (m *AddMFADeviceRequest) Reset() { *m = AddMFADeviceRequest{} } func (m *AddMFADeviceRequest) String() string { return proto.CompactTextString(m) } func (*AddMFADeviceRequest) ProtoMessage() {} func (*AddMFADeviceRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, 
[]int{98} + return fileDescriptor_0ffcffcda38ae159, []int{104} } func (m *AddMFADeviceRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7087,7 +7474,7 @@ func (m *AddMFADeviceResponse) Reset() { *m = AddMFADeviceResponse{} } func (m *AddMFADeviceResponse) String() string { return proto.CompactTextString(m) } func (*AddMFADeviceResponse) ProtoMessage() {} func (*AddMFADeviceResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{99} + return fileDescriptor_0ffcffcda38ae159, []int{105} } func (m *AddMFADeviceResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7127,7 +7514,7 @@ func (m *DeleteMFADeviceRequest) Reset() { *m = DeleteMFADeviceRequest{} func (m *DeleteMFADeviceRequest) String() string { return proto.CompactTextString(m) } func (*DeleteMFADeviceRequest) ProtoMessage() {} func (*DeleteMFADeviceRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{100} + return fileDescriptor_0ffcffcda38ae159, []int{106} } func (m *DeleteMFADeviceRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7167,7 +7554,7 @@ func (m *DeleteMFADeviceResponse) Reset() { *m = DeleteMFADeviceResponse func (m *DeleteMFADeviceResponse) String() string { return proto.CompactTextString(m) } func (*DeleteMFADeviceResponse) ProtoMessage() {} func (*DeleteMFADeviceResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{101} + return fileDescriptor_0ffcffcda38ae159, []int{107} } func (m *DeleteMFADeviceResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7222,7 +7609,7 @@ func (m *DeleteMFADeviceSyncRequest) Reset() { *m = DeleteMFADeviceSyncR func (m *DeleteMFADeviceSyncRequest) String() string { return proto.CompactTextString(m) } func (*DeleteMFADeviceSyncRequest) ProtoMessage() {} func (*DeleteMFADeviceSyncRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{102} + return 
fileDescriptor_0ffcffcda38ae159, []int{108} } func (m *DeleteMFADeviceSyncRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7302,7 +7689,7 @@ func (m *AddMFADeviceSyncRequest) Reset() { *m = AddMFADeviceSyncRequest func (m *AddMFADeviceSyncRequest) String() string { return proto.CompactTextString(m) } func (*AddMFADeviceSyncRequest) ProtoMessage() {} func (*AddMFADeviceSyncRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{103} + return fileDescriptor_0ffcffcda38ae159, []int{109} } func (m *AddMFADeviceSyncRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7378,7 +7765,7 @@ func (m *AddMFADeviceSyncResponse) Reset() { *m = AddMFADeviceSyncRespon func (m *AddMFADeviceSyncResponse) String() string { return proto.CompactTextString(m) } func (*AddMFADeviceSyncResponse) ProtoMessage() {} func (*AddMFADeviceSyncResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{104} + return fileDescriptor_0ffcffcda38ae159, []int{110} } func (m *AddMFADeviceSyncResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7433,7 +7820,7 @@ func (m *GetMFADevicesRequest) Reset() { *m = GetMFADevicesRequest{} } func (m *GetMFADevicesRequest) String() string { return proto.CompactTextString(m) } func (*GetMFADevicesRequest) ProtoMessage() {} func (*GetMFADevicesRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{105} + return fileDescriptor_0ffcffcda38ae159, []int{111} } func (m *GetMFADevicesRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7481,7 +7868,7 @@ func (m *GetMFADevicesResponse) Reset() { *m = GetMFADevicesResponse{} } func (m *GetMFADevicesResponse) String() string { return proto.CompactTextString(m) } func (*GetMFADevicesResponse) ProtoMessage() {} func (*GetMFADevicesResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{106} + return fileDescriptor_0ffcffcda38ae159, []int{112} 
} func (m *GetMFADevicesResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7528,7 +7915,7 @@ func (m *UserSingleUseCertsRequest) Reset() { *m = UserSingleUseCertsReq func (m *UserSingleUseCertsRequest) String() string { return proto.CompactTextString(m) } func (*UserSingleUseCertsRequest) ProtoMessage() {} func (*UserSingleUseCertsRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{107} + return fileDescriptor_0ffcffcda38ae159, []int{113} } func (m *UserSingleUseCertsRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7568,7 +7955,7 @@ func (m *UserSingleUseCertsResponse) Reset() { *m = UserSingleUseCertsRe func (m *UserSingleUseCertsResponse) String() string { return proto.CompactTextString(m) } func (*UserSingleUseCertsResponse) ProtoMessage() {} func (*UserSingleUseCertsResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{108} + return fileDescriptor_0ffcffcda38ae159, []int{114} } func (m *UserSingleUseCertsResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7617,7 +8004,7 @@ func (m *IsMFARequiredRequest) Reset() { *m = IsMFARequiredRequest{} } func (m *IsMFARequiredRequest) String() string { return proto.CompactTextString(m) } func (*IsMFARequiredRequest) ProtoMessage() {} func (*IsMFARequiredRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{109} + return fileDescriptor_0ffcffcda38ae159, []int{115} } func (m *IsMFARequiredRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7755,7 +8142,7 @@ func (m *StreamSessionEventsRequest) Reset() { *m = StreamSessionEventsR func (m *StreamSessionEventsRequest) String() string { return proto.CompactTextString(m) } func (*StreamSessionEventsRequest) ProtoMessage() {} func (*StreamSessionEventsRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{110} + return fileDescriptor_0ffcffcda38ae159, []int{116} } func (m 
*StreamSessionEventsRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7813,7 +8200,7 @@ func (m *NodeLogin) Reset() { *m = NodeLogin{} } func (m *NodeLogin) String() string { return proto.CompactTextString(m) } func (*NodeLogin) ProtoMessage() {} func (*NodeLogin) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{111} + return fileDescriptor_0ffcffcda38ae159, []int{117} } func (m *NodeLogin) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7867,7 +8254,7 @@ func (m *AdminAction) Reset() { *m = AdminAction{} } func (m *AdminAction) String() string { return proto.CompactTextString(m) } func (*AdminAction) ProtoMessage() {} func (*AdminAction) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{112} + return fileDescriptor_0ffcffcda38ae159, []int{118} } func (m *AdminAction) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7912,7 +8299,7 @@ func (m *IsMFARequiredResponse) Reset() { *m = IsMFARequiredResponse{} } func (m *IsMFARequiredResponse) String() string { return proto.CompactTextString(m) } func (*IsMFARequiredResponse) ProtoMessage() {} func (*IsMFARequiredResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{113} + return fileDescriptor_0ffcffcda38ae159, []int{119} } func (m *IsMFARequiredResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7972,7 +8359,9 @@ type GetEventsRequest struct { StartKey string `protobuf:"bytes,6,opt,name=StartKey,proto3" json:"StartKey,omitempty"` // Order specifies an ascending or descending order of events. // A value of 0 means a descending order and a value of 1 means an ascending order. - Order Order `protobuf:"varint,7,opt,name=Order,proto3,enum=proto.Order" json:"Order,omitempty"` + Order Order `protobuf:"varint,7,opt,name=Order,proto3,enum=proto.Order" json:"Order,omitempty"` + // Search is an optional search term to filter events by (case-insensitive substring match). 
+ Search string `protobuf:"bytes,8,opt,name=Search,proto3" json:"Search,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -7982,7 +8371,7 @@ func (m *GetEventsRequest) Reset() { *m = GetEventsRequest{} } func (m *GetEventsRequest) String() string { return proto.CompactTextString(m) } func (*GetEventsRequest) ProtoMessage() {} func (*GetEventsRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{114} + return fileDescriptor_0ffcffcda38ae159, []int{120} } func (m *GetEventsRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8060,6 +8449,13 @@ func (m *GetEventsRequest) GetOrder() Order { return Order_DESCENDING } +func (m *GetEventsRequest) GetSearch() string { + if m != nil { + return m.Search + } + return "" +} + type GetSessionEventsRequest struct { // StartDate is the oldest date of returned events StartDate time.Time `protobuf:"bytes,1,opt,name=StartDate,proto3,stdtime" json:"StartDate"` @@ -8083,7 +8479,7 @@ func (m *GetSessionEventsRequest) Reset() { *m = GetSessionEventsRequest func (m *GetSessionEventsRequest) String() string { return proto.CompactTextString(m) } func (*GetSessionEventsRequest) ProtoMessage() {} func (*GetSessionEventsRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{115} + return fileDescriptor_0ffcffcda38ae159, []int{121} } func (m *GetSessionEventsRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8163,7 +8559,7 @@ func (m *Events) Reset() { *m = Events{} } func (m *Events) String() string { return proto.CompactTextString(m) } func (*Events) ProtoMessage() {} func (*Events) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{116} + return fileDescriptor_0ffcffcda38ae159, []int{122} } func (m *Events) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8221,7 +8617,7 @@ func (m *GetLocksRequest) Reset() { *m = GetLocksRequest{} } func 
(m *GetLocksRequest) String() string { return proto.CompactTextString(m) } func (*GetLocksRequest) ProtoMessage() {} func (*GetLocksRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{117} + return fileDescriptor_0ffcffcda38ae159, []int{123} } func (m *GetLocksRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8276,7 +8672,7 @@ func (m *GetLocksResponse) Reset() { *m = GetLocksResponse{} } func (m *GetLocksResponse) String() string { return proto.CompactTextString(m) } func (*GetLocksResponse) ProtoMessage() {} func (*GetLocksResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{118} + return fileDescriptor_0ffcffcda38ae159, []int{124} } func (m *GetLocksResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8312,6 +8708,133 @@ func (m *GetLocksResponse) GetLocks() []*types.LockV2 { return nil } +// ListLocksRequest is a request for a page of locks. +type ListLocksRequest struct { + // The maximum number of items to return. + // The server may impose a different page size at its discretion. + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // The next_page_token value returned from a previous List request, if any. + PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + // Filter specifies lock specific filters. 
+ Filter *types.LockFilter `protobuf:"bytes,3,opt,name=filter,proto3" json:"filter,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListLocksRequest) Reset() { *m = ListLocksRequest{} } +func (m *ListLocksRequest) String() string { return proto.CompactTextString(m) } +func (*ListLocksRequest) ProtoMessage() {} +func (*ListLocksRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{125} +} +func (m *ListLocksRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListLocksRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListLocksRequest.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListLocksRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListLocksRequest.Merge(m, src) +} +func (m *ListLocksRequest) XXX_Size() int { + return m.Size() +} +func (m *ListLocksRequest) XXX_DiscardUnknown() { + xxx_messageInfo_ListLocksRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_ListLocksRequest proto.InternalMessageInfo + +func (m *ListLocksRequest) GetPageSize() int32 { + if m != nil { + return m.PageSize + } + return 0 +} + +func (m *ListLocksRequest) GetPageToken() string { + if m != nil { + return m.PageToken + } + return "" +} + +func (m *ListLocksRequest) GetFilter() *types.LockFilter { + if m != nil { + return m.Filter + } + return nil +} + +// ListLocksResponse contains a page of locks. +type ListLocksResponse struct { + // Locks is a list of locks. + Locks []*types.LockV2 `protobuf:"bytes,1,rep,name=locks,proto3" json:"locks,omitempty"` + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. 
+ NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListLocksResponse) Reset() { *m = ListLocksResponse{} } +func (m *ListLocksResponse) String() string { return proto.CompactTextString(m) } +func (*ListLocksResponse) ProtoMessage() {} +func (*ListLocksResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{126} +} +func (m *ListLocksResponse) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListLocksResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListLocksResponse.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListLocksResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListLocksResponse.Merge(m, src) +} +func (m *ListLocksResponse) XXX_Size() int { + return m.Size() +} +func (m *ListLocksResponse) XXX_DiscardUnknown() { + xxx_messageInfo_ListLocksResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_ListLocksResponse proto.InternalMessageInfo + +func (m *ListLocksResponse) GetLocks() []*types.LockV2 { + if m != nil { + return m.Locks + } + return nil +} + +func (m *ListLocksResponse) GetNextPageToken() string { + if m != nil { + return m.NextPageToken + } + return "" +} + type GetLockRequest struct { // Name is the name of the lock to get. 
Name string `protobuf:"bytes,1,opt,name=Name,proto3" json:"Name,omitempty"` @@ -8324,7 +8847,7 @@ func (m *GetLockRequest) Reset() { *m = GetLockRequest{} } func (m *GetLockRequest) String() string { return proto.CompactTextString(m) } func (*GetLockRequest) ProtoMessage() {} func (*GetLockRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{119} + return fileDescriptor_0ffcffcda38ae159, []int{127} } func (m *GetLockRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8372,7 +8895,7 @@ func (m *DeleteLockRequest) Reset() { *m = DeleteLockRequest{} } func (m *DeleteLockRequest) String() string { return proto.CompactTextString(m) } func (*DeleteLockRequest) ProtoMessage() {} func (*DeleteLockRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{120} + return fileDescriptor_0ffcffcda38ae159, []int{128} } func (m *DeleteLockRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8422,7 +8945,7 @@ func (m *ReplaceRemoteLocksRequest) Reset() { *m = ReplaceRemoteLocksReq func (m *ReplaceRemoteLocksRequest) String() string { return proto.CompactTextString(m) } func (*ReplaceRemoteLocksRequest) ProtoMessage() {} func (*ReplaceRemoteLocksRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{121} + return fileDescriptor_0ffcffcda38ae159, []int{129} } func (m *ReplaceRemoteLocksRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8465,6 +8988,240 @@ func (m *ReplaceRemoteLocksRequest) GetLocks() []*types.LockV2 { return nil } +// ListAppsRequest is a request for a page of registered applications. +type ListAppsRequest struct { + // Limit is the maximum amount of resources to retrieve. + Limit int32 `protobuf:"varint,1,opt,name=limit,proto3" json:"limit,omitempty"` + // StartKey is used to start listing resources from a specific spot. It + // should be set to the previous NextKey value if using pagination, or + // left empty. 
+ StartKey string `protobuf:"bytes,2,opt,name=start_key,json=startKey,proto3" json:"start_key,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListAppsRequest) Reset() { *m = ListAppsRequest{} } +func (m *ListAppsRequest) String() string { return proto.CompactTextString(m) } +func (*ListAppsRequest) ProtoMessage() {} +func (*ListAppsRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{130} +} +func (m *ListAppsRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListAppsRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListAppsRequest.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListAppsRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListAppsRequest.Merge(m, src) +} +func (m *ListAppsRequest) XXX_Size() int { + return m.Size() +} +func (m *ListAppsRequest) XXX_DiscardUnknown() { + xxx_messageInfo_ListAppsRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_ListAppsRequest proto.InternalMessageInfo + +func (m *ListAppsRequest) GetLimit() int32 { + if m != nil { + return m.Limit + } + return 0 +} + +func (m *ListAppsRequest) GetStartKey() string { + if m != nil { + return m.StartKey + } + return "" +} + +// ListAppsResponse contains a page of registered applications. +type ListAppsResponse struct { + // Applications is a list of applications. + Applications []*types.AppV3 `protobuf:"bytes,1,rep,name=applications,proto3" json:"applications,omitempty"` + // NextKey is the key for the next page of applications. 
+ NextKey string `protobuf:"bytes,2,opt,name=next_key,json=nextKey,proto3" json:"next_key,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListAppsResponse) Reset() { *m = ListAppsResponse{} } +func (m *ListAppsResponse) String() string { return proto.CompactTextString(m) } +func (*ListAppsResponse) ProtoMessage() {} +func (*ListAppsResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{131} +} +func (m *ListAppsResponse) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListAppsResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListAppsResponse.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListAppsResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListAppsResponse.Merge(m, src) +} +func (m *ListAppsResponse) XXX_Size() int { + return m.Size() +} +func (m *ListAppsResponse) XXX_DiscardUnknown() { + xxx_messageInfo_ListAppsResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_ListAppsResponse proto.InternalMessageInfo + +func (m *ListAppsResponse) GetApplications() []*types.AppV3 { + if m != nil { + return m.Applications + } + return nil +} + +func (m *ListAppsResponse) GetNextKey() string { + if m != nil { + return m.NextKey + } + return "" +} + +type ListDatabasesRequest struct { + // The maximum number of items to return. + // The server may impose a different page size at its discretion. + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // The next_page_token value returned from a previous List request, if any. 
+ PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListDatabasesRequest) Reset() { *m = ListDatabasesRequest{} } +func (m *ListDatabasesRequest) String() string { return proto.CompactTextString(m) } +func (*ListDatabasesRequest) ProtoMessage() {} +func (*ListDatabasesRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{132} +} +func (m *ListDatabasesRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListDatabasesRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListDatabasesRequest.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListDatabasesRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListDatabasesRequest.Merge(m, src) +} +func (m *ListDatabasesRequest) XXX_Size() int { + return m.Size() +} +func (m *ListDatabasesRequest) XXX_DiscardUnknown() { + xxx_messageInfo_ListDatabasesRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_ListDatabasesRequest proto.InternalMessageInfo + +func (m *ListDatabasesRequest) GetPageSize() int32 { + if m != nil { + return m.PageSize + } + return 0 +} + +func (m *ListDatabasesRequest) GetPageToken() string { + if m != nil { + return m.PageToken + } + return "" +} + +type ListDatabasesResponse struct { + // Databases is a list of databases. + Databases []*types.DatabaseV3 `protobuf:"bytes,1,rep,name=databases,proto3" json:"databases,omitempty"` + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. 
+ NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListDatabasesResponse) Reset() { *m = ListDatabasesResponse{} } +func (m *ListDatabasesResponse) String() string { return proto.CompactTextString(m) } +func (*ListDatabasesResponse) ProtoMessage() {} +func (*ListDatabasesResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{133} +} +func (m *ListDatabasesResponse) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListDatabasesResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListDatabasesResponse.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListDatabasesResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListDatabasesResponse.Merge(m, src) +} +func (m *ListDatabasesResponse) XXX_Size() int { + return m.Size() +} +func (m *ListDatabasesResponse) XXX_DiscardUnknown() { + xxx_messageInfo_ListDatabasesResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_ListDatabasesResponse proto.InternalMessageInfo + +func (m *ListDatabasesResponse) GetDatabases() []*types.DatabaseV3 { + if m != nil { + return m.Databases + } + return nil +} + +func (m *ListDatabasesResponse) GetNextPageToken() string { + if m != nil { + return m.NextPageToken + } + return "" +} + // GetWindowsDesktopServicesResponse contains all registered Windows desktop services. type GetWindowsDesktopServicesResponse struct { // Services is a list of Windows desktop services. 
@@ -8478,7 +9235,7 @@ func (m *GetWindowsDesktopServicesResponse) Reset() { *m = GetWindowsDes func (m *GetWindowsDesktopServicesResponse) String() string { return proto.CompactTextString(m) } func (*GetWindowsDesktopServicesResponse) ProtoMessage() {} func (*GetWindowsDesktopServicesResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{122} + return fileDescriptor_0ffcffcda38ae159, []int{134} } func (m *GetWindowsDesktopServicesResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8527,7 +9284,7 @@ func (m *GetWindowsDesktopServiceRequest) Reset() { *m = GetWindowsDeskt func (m *GetWindowsDesktopServiceRequest) String() string { return proto.CompactTextString(m) } func (*GetWindowsDesktopServiceRequest) ProtoMessage() {} func (*GetWindowsDesktopServiceRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{123} + return fileDescriptor_0ffcffcda38ae159, []int{135} } func (m *GetWindowsDesktopServiceRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8576,7 +9333,7 @@ func (m *GetWindowsDesktopServiceResponse) Reset() { *m = GetWindowsDesk func (m *GetWindowsDesktopServiceResponse) String() string { return proto.CompactTextString(m) } func (*GetWindowsDesktopServiceResponse) ProtoMessage() {} func (*GetWindowsDesktopServiceResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{124} + return fileDescriptor_0ffcffcda38ae159, []int{136} } func (m *GetWindowsDesktopServiceResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8625,7 +9382,7 @@ func (m *DeleteWindowsDesktopServiceRequest) Reset() { *m = DeleteWindow func (m *DeleteWindowsDesktopServiceRequest) String() string { return proto.CompactTextString(m) } func (*DeleteWindowsDesktopServiceRequest) ProtoMessage() {} func (*DeleteWindowsDesktopServiceRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{125} + return 
fileDescriptor_0ffcffcda38ae159, []int{137} } func (m *DeleteWindowsDesktopServiceRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8661,6 +9418,160 @@ func (m *DeleteWindowsDesktopServiceRequest) GetName() string { return "" } +// ListWindowsDesktopsRequest is a request for a page of registered Windows desktop hosts. +type ListWindowsDesktopsRequest struct { + // Limit is the maximum amount of resources to retrieve. + Limit int32 `protobuf:"varint,1,opt,name=limit,proto3" json:"limit,omitempty"` + // StartKey is used to start listing resources from a specific spot. It + // should be set to the previous NextKey value if using pagination, or + // left empty. + StartKey string `protobuf:"bytes,2,opt,name=start_key,json=startKey,proto3" json:"start_key,omitempty"` + // Labels is a label-based matcher if non-empty. + Labels map[string]string `protobuf:"bytes,3,rep,name=labels,proto3" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` + // PredicateExpression defines boolean conditions that will be matched against the resource. + PredicateExpression string `protobuf:"bytes,4,opt,name=predicate_expression,json=predicateExpression,proto3" json:"predicate_expression,omitempty"` + // SearchKeywords is a list of search keywords to match against resource field values. + SearchKeywords []string `protobuf:"bytes,5,rep,name=search_keywords,json=searchKeywords,proto3" json:"search_keywords,omitempty"` + // WindowsDesktopFilter specifies windows desktop specific filters. 
+ WindowsDesktopFilter types.WindowsDesktopFilter `protobuf:"bytes,6,opt,name=windows_desktop_filter,json=windowsDesktopFilter,proto3" json:"windows_desktop_filter,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListWindowsDesktopsRequest) Reset() { *m = ListWindowsDesktopsRequest{} } +func (m *ListWindowsDesktopsRequest) String() string { return proto.CompactTextString(m) } +func (*ListWindowsDesktopsRequest) ProtoMessage() {} +func (*ListWindowsDesktopsRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{138} +} +func (m *ListWindowsDesktopsRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListWindowsDesktopsRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListWindowsDesktopsRequest.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListWindowsDesktopsRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListWindowsDesktopsRequest.Merge(m, src) +} +func (m *ListWindowsDesktopsRequest) XXX_Size() int { + return m.Size() +} +func (m *ListWindowsDesktopsRequest) XXX_DiscardUnknown() { + xxx_messageInfo_ListWindowsDesktopsRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_ListWindowsDesktopsRequest proto.InternalMessageInfo + +func (m *ListWindowsDesktopsRequest) GetLimit() int32 { + if m != nil { + return m.Limit + } + return 0 +} + +func (m *ListWindowsDesktopsRequest) GetStartKey() string { + if m != nil { + return m.StartKey + } + return "" +} + +func (m *ListWindowsDesktopsRequest) GetLabels() map[string]string { + if m != nil { + return m.Labels + } + return nil +} + +func (m *ListWindowsDesktopsRequest) GetPredicateExpression() string { + if m != nil { + return m.PredicateExpression + } + return "" +} + +func (m 
*ListWindowsDesktopsRequest) GetSearchKeywords() []string { + if m != nil { + return m.SearchKeywords + } + return nil +} + +func (m *ListWindowsDesktopsRequest) GetWindowsDesktopFilter() types.WindowsDesktopFilter { + if m != nil { + return m.WindowsDesktopFilter + } + return types.WindowsDesktopFilter{} +} + +// ListWindowsDesktopsResponse contains a page of registered Windows desktop hosts. +type ListWindowsDesktopsResponse struct { + // Desktops is a list of Windows desktop hosts. + Desktops []*types.WindowsDesktopV3 `protobuf:"bytes,1,rep,name=desktops,proto3" json:"desktops,omitempty"` + // NextKey is the key for the next page of desktops. + NextKey string `protobuf:"bytes,2,opt,name=next_key,json=nextKey,proto3" json:"next_key,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListWindowsDesktopsResponse) Reset() { *m = ListWindowsDesktopsResponse{} } +func (m *ListWindowsDesktopsResponse) String() string { return proto.CompactTextString(m) } +func (*ListWindowsDesktopsResponse) ProtoMessage() {} +func (*ListWindowsDesktopsResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{139} +} +func (m *ListWindowsDesktopsResponse) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListWindowsDesktopsResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListWindowsDesktopsResponse.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListWindowsDesktopsResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListWindowsDesktopsResponse.Merge(m, src) +} +func (m *ListWindowsDesktopsResponse) XXX_Size() int { + return m.Size() +} +func (m *ListWindowsDesktopsResponse) XXX_DiscardUnknown() { + 
xxx_messageInfo_ListWindowsDesktopsResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_ListWindowsDesktopsResponse proto.InternalMessageInfo + +func (m *ListWindowsDesktopsResponse) GetDesktops() []*types.WindowsDesktopV3 { + if m != nil { + return m.Desktops + } + return nil +} + +func (m *ListWindowsDesktopsResponse) GetNextKey() string { + if m != nil { + return m.NextKey + } + return "" +} + // GetWindowsDesktopsResponse contains all registered Windows desktop hosts. type GetWindowsDesktopsResponse struct { // Servers is a list of Windows desktop hosts. @@ -8674,7 +9585,7 @@ func (m *GetWindowsDesktopsResponse) Reset() { *m = GetWindowsDesktopsRe func (m *GetWindowsDesktopsResponse) String() string { return proto.CompactTextString(m) } func (*GetWindowsDesktopsResponse) ProtoMessage() {} func (*GetWindowsDesktopsResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{126} + return fileDescriptor_0ffcffcda38ae159, []int{140} } func (m *GetWindowsDesktopsResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8727,7 +9638,7 @@ func (m *DeleteWindowsDesktopRequest) Reset() { *m = DeleteWindowsDeskto func (m *DeleteWindowsDesktopRequest) String() string { return proto.CompactTextString(m) } func (*DeleteWindowsDesktopRequest) ProtoMessage() {} func (*DeleteWindowsDesktopRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{127} + return fileDescriptor_0ffcffcda38ae159, []int{141} } func (m *DeleteWindowsDesktopRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8776,9 +9687,12 @@ type WindowsDesktopCertRequest struct { // CSR is the request to sign in PEM format. CSR []byte `protobuf:"bytes,1,opt,name=CSR,proto3" json:"CSR,omitempty"` // CRLEndpoint is the address of the CRL for this certificate. - CRLEndpoint string `protobuf:"bytes,2,opt,name=CRLEndpoint,proto3" json:"CRLEndpoint,omitempty"` + // DEPRECATED: use CRLDomain instead. 
+ CRLEndpoint string `protobuf:"bytes,2,opt,name=CRLEndpoint,proto3" json:"CRLEndpoint,omitempty"` // Deprecated: Do not use. // TTL is the certificate validity period. - TTL Duration `protobuf:"varint,3,opt,name=TTL,proto3,casttype=Duration" json:"TTL,omitempty"` + TTL Duration `protobuf:"varint,3,opt,name=TTL,proto3,casttype=Duration" json:"TTL,omitempty"` + // CRLDomain is the Active Directory domain where CRLs are published. + CRLDomain string `protobuf:"bytes,4,opt,name=CRLDomain,proto3" json:"CRLDomain,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -8788,7 +9702,7 @@ func (m *WindowsDesktopCertRequest) Reset() { *m = WindowsDesktopCertReq func (m *WindowsDesktopCertRequest) String() string { return proto.CompactTextString(m) } func (*WindowsDesktopCertRequest) ProtoMessage() {} func (*WindowsDesktopCertRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{128} + return fileDescriptor_0ffcffcda38ae159, []int{142} } func (m *WindowsDesktopCertRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8824,6 +9738,7 @@ func (m *WindowsDesktopCertRequest) GetCSR() []byte { return nil } +// Deprecated: Do not use. func (m *WindowsDesktopCertRequest) GetCRLEndpoint() string { if m != nil { return m.CRLEndpoint @@ -8838,6 +9753,13 @@ func (m *WindowsDesktopCertRequest) GetTTL() Duration { return 0 } +func (m *WindowsDesktopCertRequest) GetCRLDomain() string { + if m != nil { + return m.CRLDomain + } + return "" +} + // WindowsDesktopCertResponse contains the signed Windows RDP certificate. type WindowsDesktopCertResponse struct { // Cert is the signed certificate in PEM format. 
@@ -8851,7 +9773,7 @@ func (m *WindowsDesktopCertResponse) Reset() { *m = WindowsDesktopCertRe func (m *WindowsDesktopCertResponse) String() string { return proto.CompactTextString(m) } func (*WindowsDesktopCertResponse) ProtoMessage() {} func (*WindowsDesktopCertResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{129} + return fileDescriptor_0ffcffcda38ae159, []int{143} } func (m *WindowsDesktopCertResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8900,7 +9822,7 @@ func (m *DesktopBootstrapScriptResponse) Reset() { *m = DesktopBootstrap func (m *DesktopBootstrapScriptResponse) String() string { return proto.CompactTextString(m) } func (*DesktopBootstrapScriptResponse) ProtoMessage() {} func (*DesktopBootstrapScriptResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{130} + return fileDescriptor_0ffcffcda38ae159, []int{144} } func (m *DesktopBootstrapScriptResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8951,7 +9873,7 @@ func (m *ListSAMLIdPServiceProvidersRequest) Reset() { *m = ListSAMLIdPS func (m *ListSAMLIdPServiceProvidersRequest) String() string { return proto.CompactTextString(m) } func (*ListSAMLIdPServiceProvidersRequest) ProtoMessage() {} func (*ListSAMLIdPServiceProvidersRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{131} + return fileDescriptor_0ffcffcda38ae159, []int{145} } func (m *ListSAMLIdPServiceProvidersRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9011,7 +9933,7 @@ func (m *ListSAMLIdPServiceProvidersResponse) Reset() { *m = ListSAMLIdP func (m *ListSAMLIdPServiceProvidersResponse) String() string { return proto.CompactTextString(m) } func (*ListSAMLIdPServiceProvidersResponse) ProtoMessage() {} func (*ListSAMLIdPServiceProvidersResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{132} + return fileDescriptor_0ffcffcda38ae159, []int{146} } 
func (m *ListSAMLIdPServiceProvidersResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9074,7 +9996,7 @@ func (m *GetSAMLIdPServiceProviderRequest) Reset() { *m = GetSAMLIdPServ func (m *GetSAMLIdPServiceProviderRequest) String() string { return proto.CompactTextString(m) } func (*GetSAMLIdPServiceProviderRequest) ProtoMessage() {} func (*GetSAMLIdPServiceProviderRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{133} + return fileDescriptor_0ffcffcda38ae159, []int{147} } func (m *GetSAMLIdPServiceProviderRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9123,7 +10045,7 @@ func (m *DeleteSAMLIdPServiceProviderRequest) Reset() { *m = DeleteSAMLI func (m *DeleteSAMLIdPServiceProviderRequest) String() string { return proto.CompactTextString(m) } func (*DeleteSAMLIdPServiceProviderRequest) ProtoMessage() {} func (*DeleteSAMLIdPServiceProviderRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{134} + return fileDescriptor_0ffcffcda38ae159, []int{148} } func (m *DeleteSAMLIdPServiceProviderRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9174,7 +10096,7 @@ func (m *ListUserGroupsRequest) Reset() { *m = ListUserGroupsRequest{} } func (m *ListUserGroupsRequest) String() string { return proto.CompactTextString(m) } func (*ListUserGroupsRequest) ProtoMessage() {} func (*ListUserGroupsRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{135} + return fileDescriptor_0ffcffcda38ae159, []int{149} } func (m *ListUserGroupsRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9234,7 +10156,7 @@ func (m *ListUserGroupsResponse) Reset() { *m = ListUserGroupsResponse{} func (m *ListUserGroupsResponse) String() string { return proto.CompactTextString(m) } func (*ListUserGroupsResponse) ProtoMessage() {} func (*ListUserGroupsResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, 
[]int{136} + return fileDescriptor_0ffcffcda38ae159, []int{150} } func (m *ListUserGroupsResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9297,7 +10219,7 @@ func (m *GetUserGroupRequest) Reset() { *m = GetUserGroupRequest{} } func (m *GetUserGroupRequest) String() string { return proto.CompactTextString(m) } func (*GetUserGroupRequest) ProtoMessage() {} func (*GetUserGroupRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{137} + return fileDescriptor_0ffcffcda38ae159, []int{151} } func (m *GetUserGroupRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9346,7 +10268,7 @@ func (m *DeleteUserGroupRequest) Reset() { *m = DeleteUserGroupRequest{} func (m *DeleteUserGroupRequest) String() string { return proto.CompactTextString(m) } func (*DeleteUserGroupRequest) ProtoMessage() {} func (*DeleteUserGroupRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{138} + return fileDescriptor_0ffcffcda38ae159, []int{152} } func (m *DeleteUserGroupRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9395,7 +10317,7 @@ func (m *CertAuthorityRequest) Reset() { *m = CertAuthorityRequest{} } func (m *CertAuthorityRequest) String() string { return proto.CompactTextString(m) } func (*CertAuthorityRequest) ProtoMessage() {} func (*CertAuthorityRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{139} + return fileDescriptor_0ffcffcda38ae159, []int{153} } func (m *CertAuthorityRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9444,7 +10366,7 @@ func (m *CRL) Reset() { *m = CRL{} } func (m *CRL) String() string { return proto.CompactTextString(m) } func (*CRL) ProtoMessage() {} func (*CRL) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{140} + return fileDescriptor_0ffcffcda38ae159, []int{154} } func (m *CRL) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9514,7 +10436,7 @@ func 
(m *ChangeUserAuthenticationRequest) Reset() { *m = ChangeUserAuthe func (m *ChangeUserAuthenticationRequest) String() string { return proto.CompactTextString(m) } func (*ChangeUserAuthenticationRequest) ProtoMessage() {} func (*ChangeUserAuthenticationRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{141} + return fileDescriptor_0ffcffcda38ae159, []int{155} } func (m *ChangeUserAuthenticationRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9600,7 +10522,7 @@ func (m *ChangeUserAuthenticationResponse) Reset() { *m = ChangeUserAuth func (m *ChangeUserAuthenticationResponse) String() string { return proto.CompactTextString(m) } func (*ChangeUserAuthenticationResponse) ProtoMessage() {} func (*ChangeUserAuthenticationResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{142} + return fileDescriptor_0ffcffcda38ae159, []int{156} } func (m *ChangeUserAuthenticationResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9673,7 +10595,7 @@ func (m *StartAccountRecoveryRequest) Reset() { *m = StartAccountRecover func (m *StartAccountRecoveryRequest) String() string { return proto.CompactTextString(m) } func (*StartAccountRecoveryRequest) ProtoMessage() {} func (*StartAccountRecoveryRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{143} + return fileDescriptor_0ffcffcda38ae159, []int{157} } func (m *StartAccountRecoveryRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9748,7 +10670,7 @@ func (m *VerifyAccountRecoveryRequest) Reset() { *m = VerifyAccountRecov func (m *VerifyAccountRecoveryRequest) String() string { return proto.CompactTextString(m) } func (*VerifyAccountRecoveryRequest) ProtoMessage() {} func (*VerifyAccountRecoveryRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{144} + return fileDescriptor_0ffcffcda38ae159, []int{158} } func (m *VerifyAccountRecoveryRequest) 
XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9863,7 +10785,7 @@ func (m *CompleteAccountRecoveryRequest) Reset() { *m = CompleteAccountR func (m *CompleteAccountRecoveryRequest) String() string { return proto.CompactTextString(m) } func (*CompleteAccountRecoveryRequest) ProtoMessage() {} func (*CompleteAccountRecoveryRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{145} + return fileDescriptor_0ffcffcda38ae159, []int{159} } func (m *CompleteAccountRecoveryRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9969,7 +10891,7 @@ func (m *RecoveryCodes) Reset() { *m = RecoveryCodes{} } func (m *RecoveryCodes) String() string { return proto.CompactTextString(m) } func (*RecoveryCodes) ProtoMessage() {} func (*RecoveryCodes) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{146} + return fileDescriptor_0ffcffcda38ae159, []int{160} } func (m *RecoveryCodes) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10032,7 +10954,7 @@ func (m *CreateAccountRecoveryCodesRequest) Reset() { *m = CreateAccount func (m *CreateAccountRecoveryCodesRequest) String() string { return proto.CompactTextString(m) } func (*CreateAccountRecoveryCodesRequest) ProtoMessage() {} func (*CreateAccountRecoveryCodesRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{147} + return fileDescriptor_0ffcffcda38ae159, []int{161} } func (m *CreateAccountRecoveryCodesRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10083,7 +11005,7 @@ func (m *GetAccountRecoveryTokenRequest) Reset() { *m = GetAccountRecove func (m *GetAccountRecoveryTokenRequest) String() string { return proto.CompactTextString(m) } func (*GetAccountRecoveryTokenRequest) ProtoMessage() {} func (*GetAccountRecoveryTokenRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{148} + return fileDescriptor_0ffcffcda38ae159, []int{162} } func (m 
*GetAccountRecoveryTokenRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10131,7 +11053,7 @@ func (m *GetAccountRecoveryCodesRequest) Reset() { *m = GetAccountRecove func (m *GetAccountRecoveryCodesRequest) String() string { return proto.CompactTextString(m) } func (*GetAccountRecoveryCodesRequest) ProtoMessage() {} func (*GetAccountRecoveryCodesRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{149} + return fileDescriptor_0ffcffcda38ae159, []int{163} } func (m *GetAccountRecoveryCodesRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10173,7 +11095,7 @@ func (m *UserCredentials) Reset() { *m = UserCredentials{} } func (m *UserCredentials) String() string { return proto.CompactTextString(m) } func (*UserCredentials) ProtoMessage() {} func (*UserCredentials) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{150} + return fileDescriptor_0ffcffcda38ae159, []int{164} } func (m *UserCredentials) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10227,7 +11149,7 @@ func (m *ContextUser) Reset() { *m = ContextUser{} } func (m *ContextUser) String() string { return proto.CompactTextString(m) } func (*ContextUser) ProtoMessage() {} func (*ContextUser) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{151} + return fileDescriptor_0ffcffcda38ae159, []int{165} } func (m *ContextUser) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10267,7 +11189,7 @@ func (m *Passwordless) Reset() { *m = Passwordless{} } func (m *Passwordless) String() string { return proto.CompactTextString(m) } func (*Passwordless) ProtoMessage() {} func (*Passwordless) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{152} + return fileDescriptor_0ffcffcda38ae159, []int{166} } func (m *Passwordless) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10337,7 +11259,7 @@ func (m *CreateAuthenticateChallengeRequest) Reset() { *m 
= CreateAuthen func (m *CreateAuthenticateChallengeRequest) String() string { return proto.CompactTextString(m) } func (*CreateAuthenticateChallengeRequest) ProtoMessage() {} func (*CreateAuthenticateChallengeRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{153} + return fileDescriptor_0ffcffcda38ae159, []int{167} } func (m *CreateAuthenticateChallengeRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10485,7 +11407,7 @@ func (m *CreatePrivilegeTokenRequest) Reset() { *m = CreatePrivilegeToke func (m *CreatePrivilegeTokenRequest) String() string { return proto.CompactTextString(m) } func (*CreatePrivilegeTokenRequest) ProtoMessage() {} func (*CreatePrivilegeTokenRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{154} + return fileDescriptor_0ffcffcda38ae159, []int{168} } func (m *CreatePrivilegeTokenRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10554,7 +11476,7 @@ func (m *CreateRegisterChallengeRequest) Reset() { *m = CreateRegisterCh func (m *CreateRegisterChallengeRequest) String() string { return proto.CompactTextString(m) } func (*CreateRegisterChallengeRequest) ProtoMessage() {} func (*CreateRegisterChallengeRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{155} + return fileDescriptor_0ffcffcda38ae159, []int{169} } func (m *CreateRegisterChallengeRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10631,7 +11553,7 @@ func (m *IdentityCenterAccount) Reset() { *m = IdentityCenterAccount{} } func (m *IdentityCenterAccount) String() string { return proto.CompactTextString(m) } func (*IdentityCenterAccount) ProtoMessage() {} func (*IdentityCenterAccount) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{156} + return fileDescriptor_0ffcffcda38ae159, []int{170} } func (m *IdentityCenterAccount) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10704,7 +11626,7 @@ 
func (m *IdentityCenterPermissionSet) Reset() { *m = IdentityCenterPermi func (m *IdentityCenterPermissionSet) String() string { return proto.CompactTextString(m) } func (*IdentityCenterPermissionSet) ProtoMessage() {} func (*IdentityCenterPermissionSet) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{157} + return fileDescriptor_0ffcffcda38ae159, []int{171} } func (m *IdentityCenterPermissionSet) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10776,7 +11698,7 @@ func (m *IdentityCenterAccountAssignment) Reset() { *m = IdentityCenterA func (m *IdentityCenterAccountAssignment) String() string { return proto.CompactTextString(m) } func (*IdentityCenterAccountAssignment) ProtoMessage() {} func (*IdentityCenterAccountAssignment) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{158} + return fileDescriptor_0ffcffcda38ae159, []int{172} } func (m *IdentityCenterAccountAssignment) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10854,6 +11776,122 @@ func (m *IdentityCenterAccountAssignment) GetPermissionSet() *IdentityCenterPerm return nil } +type ListInstallersRequest struct { + // The maximum number of items to return. + // The server may impose a different page size at its discretion. + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // The next_page_token value returned from a previous List request, if any. 
+ PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListInstallersRequest) Reset() { *m = ListInstallersRequest{} } +func (m *ListInstallersRequest) String() string { return proto.CompactTextString(m) } +func (*ListInstallersRequest) ProtoMessage() {} +func (*ListInstallersRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{173} +} +func (m *ListInstallersRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListInstallersRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListInstallersRequest.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListInstallersRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListInstallersRequest.Merge(m, src) +} +func (m *ListInstallersRequest) XXX_Size() int { + return m.Size() +} +func (m *ListInstallersRequest) XXX_DiscardUnknown() { + xxx_messageInfo_ListInstallersRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_ListInstallersRequest proto.InternalMessageInfo + +func (m *ListInstallersRequest) GetPageSize() int32 { + if m != nil { + return m.PageSize + } + return 0 +} + +func (m *ListInstallersRequest) GetPageToken() string { + if m != nil { + return m.PageToken + } + return "" +} + +type ListInstallersResponse struct { + // Installers is a list of installer resources. + Installers []*types.InstallerV1 `protobuf:"bytes,1,rep,name=installers,proto3" json:"installers,omitempty"` + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. 
+ NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListInstallersResponse) Reset() { *m = ListInstallersResponse{} } +func (m *ListInstallersResponse) String() string { return proto.CompactTextString(m) } +func (*ListInstallersResponse) ProtoMessage() {} +func (*ListInstallersResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{174} +} +func (m *ListInstallersResponse) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListInstallersResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListInstallersResponse.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListInstallersResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListInstallersResponse.Merge(m, src) +} +func (m *ListInstallersResponse) XXX_Size() int { + return m.Size() +} +func (m *ListInstallersResponse) XXX_DiscardUnknown() { + xxx_messageInfo_ListInstallersResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_ListInstallersResponse proto.InternalMessageInfo + +func (m *ListInstallersResponse) GetInstallers() []*types.InstallerV1 { + if m != nil { + return m.Installers + } + return nil +} + +func (m *ListInstallersResponse) GetNextPageToken() string { + if m != nil { + return m.NextPageToken + } + return "" +} + // PaginatedResource represents one of the supported resources. type PaginatedResource struct { // Resource is the resource itself. 
@@ -10886,7 +11924,7 @@ func (m *PaginatedResource) Reset() { *m = PaginatedResource{} } func (m *PaginatedResource) String() string { return proto.CompactTextString(m) } func (*PaginatedResource) ProtoMessage() {} func (*PaginatedResource) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{159} + return fileDescriptor_0ffcffcda38ae159, []int{175} } func (m *PaginatedResource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11138,7 +12176,7 @@ func (m *ListUnifiedResourcesRequest) Reset() { *m = ListUnifiedResource func (m *ListUnifiedResourcesRequest) String() string { return proto.CompactTextString(m) } func (*ListUnifiedResourcesRequest) ProtoMessage() {} func (*ListUnifiedResourcesRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{160} + return fileDescriptor_0ffcffcda38ae159, []int{176} } func (m *ListUnifiedResourcesRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11275,7 +12313,7 @@ func (m *ListUnifiedResourcesResponse) Reset() { *m = ListUnifiedResourc func (m *ListUnifiedResourcesResponse) String() string { return proto.CompactTextString(m) } func (*ListUnifiedResourcesResponse) ProtoMessage() {} func (*ListUnifiedResourcesResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{161} + return fileDescriptor_0ffcffcda38ae159, []int{177} } func (m *ListUnifiedResourcesResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11371,7 +12409,7 @@ func (m *ListResourcesRequest) Reset() { *m = ListResourcesRequest{} } func (m *ListResourcesRequest) String() string { return proto.CompactTextString(m) } func (*ListResourcesRequest) ProtoMessage() {} func (*ListResourcesRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{162} + return fileDescriptor_0ffcffcda38ae159, []int{178} } func (m *ListResourcesRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11522,7 +12560,7 @@ func (m 
*ResolveSSHTargetRequest) Reset() { *m = ResolveSSHTargetRequest func (m *ResolveSSHTargetRequest) String() string { return proto.CompactTextString(m) } func (*ResolveSSHTargetRequest) ProtoMessage() {} func (*ResolveSSHTargetRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{163} + return fileDescriptor_0ffcffcda38ae159, []int{179} } func (m *ResolveSSHTargetRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11599,7 +12637,7 @@ func (m *ResolveSSHTargetResponse) Reset() { *m = ResolveSSHTargetRespon func (m *ResolveSSHTargetResponse) String() string { return proto.CompactTextString(m) } func (*ResolveSSHTargetResponse) ProtoMessage() {} func (*ResolveSSHTargetResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{164} + return fileDescriptor_0ffcffcda38ae159, []int{180} } func (m *ResolveSSHTargetResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11651,7 +12689,7 @@ func (m *GetSSHTargetsRequest) Reset() { *m = GetSSHTargetsRequest{} } func (m *GetSSHTargetsRequest) String() string { return proto.CompactTextString(m) } func (*GetSSHTargetsRequest) ProtoMessage() {} func (*GetSSHTargetsRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{165} + return fileDescriptor_0ffcffcda38ae159, []int{181} } func (m *GetSSHTargetsRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11707,7 +12745,7 @@ func (m *GetSSHTargetsResponse) Reset() { *m = GetSSHTargetsResponse{} } func (m *GetSSHTargetsResponse) String() string { return proto.CompactTextString(m) } func (*GetSSHTargetsResponse) ProtoMessage() {} func (*GetSSHTargetsResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{166} + return fileDescriptor_0ffcffcda38ae159, []int{182} } func (m *GetSSHTargetsResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11762,7 +12800,7 @@ func (m *ListResourcesResponse) Reset() { *m = 
ListResourcesResponse{} } func (m *ListResourcesResponse) String() string { return proto.CompactTextString(m) } func (*ListResourcesResponse) ProtoMessage() {} func (*ListResourcesResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{167} + return fileDescriptor_0ffcffcda38ae159, []int{183} } func (m *ListResourcesResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11827,7 +12865,7 @@ func (m *CreateSessionTrackerRequest) Reset() { *m = CreateSessionTracke func (m *CreateSessionTrackerRequest) String() string { return proto.CompactTextString(m) } func (*CreateSessionTrackerRequest) ProtoMessage() {} func (*CreateSessionTrackerRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{168} + return fileDescriptor_0ffcffcda38ae159, []int{184} } func (m *CreateSessionTrackerRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11876,7 +12914,7 @@ func (m *GetSessionTrackerRequest) Reset() { *m = GetSessionTrackerReque func (m *GetSessionTrackerRequest) String() string { return proto.CompactTextString(m) } func (*GetSessionTrackerRequest) ProtoMessage() {} func (*GetSessionTrackerRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{169} + return fileDescriptor_0ffcffcda38ae159, []int{185} } func (m *GetSessionTrackerRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11925,7 +12963,7 @@ func (m *RemoveSessionTrackerRequest) Reset() { *m = RemoveSessionTracke func (m *RemoveSessionTrackerRequest) String() string { return proto.CompactTextString(m) } func (*RemoveSessionTrackerRequest) ProtoMessage() {} func (*RemoveSessionTrackerRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{170} + return fileDescriptor_0ffcffcda38ae159, []int{186} } func (m *RemoveSessionTrackerRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11973,7 +13011,7 @@ func (m *SessionTrackerUpdateState) 
Reset() { *m = SessionTrackerUpdateS func (m *SessionTrackerUpdateState) String() string { return proto.CompactTextString(m) } func (*SessionTrackerUpdateState) ProtoMessage() {} func (*SessionTrackerUpdateState) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{171} + return fileDescriptor_0ffcffcda38ae159, []int{187} } func (m *SessionTrackerUpdateState) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12021,7 +13059,7 @@ func (m *SessionTrackerAddParticipant) Reset() { *m = SessionTrackerAddP func (m *SessionTrackerAddParticipant) String() string { return proto.CompactTextString(m) } func (*SessionTrackerAddParticipant) ProtoMessage() {} func (*SessionTrackerAddParticipant) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{172} + return fileDescriptor_0ffcffcda38ae159, []int{188} } func (m *SessionTrackerAddParticipant) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12069,7 +13107,7 @@ func (m *SessionTrackerRemoveParticipant) Reset() { *m = SessionTrackerR func (m *SessionTrackerRemoveParticipant) String() string { return proto.CompactTextString(m) } func (*SessionTrackerRemoveParticipant) ProtoMessage() {} func (*SessionTrackerRemoveParticipant) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{173} + return fileDescriptor_0ffcffcda38ae159, []int{189} } func (m *SessionTrackerRemoveParticipant) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12118,7 +13156,7 @@ func (m *SessionTrackerUpdateExpiry) Reset() { *m = SessionTrackerUpdate func (m *SessionTrackerUpdateExpiry) String() string { return proto.CompactTextString(m) } func (*SessionTrackerUpdateExpiry) ProtoMessage() {} func (*SessionTrackerUpdateExpiry) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{174} + return fileDescriptor_0ffcffcda38ae159, []int{190} } func (m *SessionTrackerUpdateExpiry) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12173,7 
+13211,7 @@ func (m *UpdateSessionTrackerRequest) Reset() { *m = UpdateSessionTracke func (m *UpdateSessionTrackerRequest) String() string { return proto.CompactTextString(m) } func (*UpdateSessionTrackerRequest) ProtoMessage() {} func (*UpdateSessionTrackerRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{175} + return fileDescriptor_0ffcffcda38ae159, []int{191} } func (m *UpdateSessionTrackerRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12297,7 +13335,7 @@ func (m *PresenceMFAChallengeRequest) Reset() { *m = PresenceMFAChalleng func (m *PresenceMFAChallengeRequest) String() string { return proto.CompactTextString(m) } func (*PresenceMFAChallengeRequest) ProtoMessage() {} func (*PresenceMFAChallengeRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{176} + return fileDescriptor_0ffcffcda38ae159, []int{192} } func (m *PresenceMFAChallengeRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12362,7 +13400,7 @@ func (m *PresenceMFAChallengeSend) Reset() { *m = PresenceMFAChallengeSe func (m *PresenceMFAChallengeSend) String() string { return proto.CompactTextString(m) } func (*PresenceMFAChallengeSend) ProtoMessage() {} func (*PresenceMFAChallengeSend) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{177} + return fileDescriptor_0ffcffcda38ae159, []int{193} } func (m *PresenceMFAChallengeSend) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12449,7 +13487,7 @@ func (m *GetDomainNameResponse) Reset() { *m = GetDomainNameResponse{} } func (m *GetDomainNameResponse) String() string { return proto.CompactTextString(m) } func (*GetDomainNameResponse) ProtoMessage() {} func (*GetDomainNameResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{178} + return fileDescriptor_0ffcffcda38ae159, []int{194} } func (m *GetDomainNameResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ 
-12498,7 +13536,7 @@ func (m *GetClusterCACertResponse) Reset() { *m = GetClusterCACertRespon func (m *GetClusterCACertResponse) String() string { return proto.CompactTextString(m) } func (*GetClusterCACertResponse) ProtoMessage() {} func (*GetClusterCACertResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{179} + return fileDescriptor_0ffcffcda38ae159, []int{195} } func (m *GetClusterCACertResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12546,7 +13584,7 @@ func (m *GetLicenseResponse) Reset() { *m = GetLicenseResponse{} } func (m *GetLicenseResponse) String() string { return proto.CompactTextString(m) } func (*GetLicenseResponse) ProtoMessage() {} func (*GetLicenseResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{180} + return fileDescriptor_0ffcffcda38ae159, []int{196} } func (m *GetLicenseResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12594,7 +13632,7 @@ func (m *ListReleasesResponse) Reset() { *m = ListReleasesResponse{} } func (m *ListReleasesResponse) String() string { return proto.CompactTextString(m) } func (*ListReleasesResponse) ProtoMessage() {} func (*ListReleasesResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{181} + return fileDescriptor_0ffcffcda38ae159, []int{197} } func (m *ListReleasesResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12643,7 +13681,7 @@ func (m *GetOIDCAuthRequestRequest) Reset() { *m = GetOIDCAuthRequestReq func (m *GetOIDCAuthRequestRequest) String() string { return proto.CompactTextString(m) } func (*GetOIDCAuthRequestRequest) ProtoMessage() {} func (*GetOIDCAuthRequestRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{182} + return fileDescriptor_0ffcffcda38ae159, []int{198} } func (m *GetOIDCAuthRequestRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12692,7 +13730,7 @@ func (m 
*GetSAMLAuthRequestRequest) Reset() { *m = GetSAMLAuthRequestReq func (m *GetSAMLAuthRequestRequest) String() string { return proto.CompactTextString(m) } func (*GetSAMLAuthRequestRequest) ProtoMessage() {} func (*GetSAMLAuthRequestRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{183} + return fileDescriptor_0ffcffcda38ae159, []int{199} } func (m *GetSAMLAuthRequestRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12741,7 +13779,7 @@ func (m *GetGithubAuthRequestRequest) Reset() { *m = GetGithubAuthReques func (m *GetGithubAuthRequestRequest) String() string { return proto.CompactTextString(m) } func (*GetGithubAuthRequestRequest) ProtoMessage() {} func (*GetGithubAuthRequestRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{184} + return fileDescriptor_0ffcffcda38ae159, []int{200} } func (m *GetGithubAuthRequestRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12777,6 +13815,131 @@ func (m *GetGithubAuthRequestRequest) GetStateToken() string { return "" } +type ListOIDCConnectorsRequest struct { + // The maximum number of items to return. + // The server may impose a different page size at its discretion. + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // The next_page_token value returned from a previous List request, if any. + PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + // WithSecrets specifies whether to load associated secrets. 
+ WithSecrets bool `protobuf:"varint,3,opt,name=with_secrets,json=withSecrets,proto3" json:"with_secrets,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListOIDCConnectorsRequest) Reset() { *m = ListOIDCConnectorsRequest{} } +func (m *ListOIDCConnectorsRequest) String() string { return proto.CompactTextString(m) } +func (*ListOIDCConnectorsRequest) ProtoMessage() {} +func (*ListOIDCConnectorsRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{201} +} +func (m *ListOIDCConnectorsRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListOIDCConnectorsRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListOIDCConnectorsRequest.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListOIDCConnectorsRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListOIDCConnectorsRequest.Merge(m, src) +} +func (m *ListOIDCConnectorsRequest) XXX_Size() int { + return m.Size() +} +func (m *ListOIDCConnectorsRequest) XXX_DiscardUnknown() { + xxx_messageInfo_ListOIDCConnectorsRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_ListOIDCConnectorsRequest proto.InternalMessageInfo + +func (m *ListOIDCConnectorsRequest) GetPageSize() int32 { + if m != nil { + return m.PageSize + } + return 0 +} + +func (m *ListOIDCConnectorsRequest) GetPageToken() string { + if m != nil { + return m.PageToken + } + return "" +} + +func (m *ListOIDCConnectorsRequest) GetWithSecrets() bool { + if m != nil { + return m.WithSecrets + } + return false +} + +type ListOIDCConnectorsResponse struct { + // Connectors is a list of OIDC connectors. 
+ Connectors []*types.OIDCConnectorV3 `protobuf:"bytes,1,rep,name=connectors,proto3" json:"connectors,omitempty"` + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. + NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListOIDCConnectorsResponse) Reset() { *m = ListOIDCConnectorsResponse{} } +func (m *ListOIDCConnectorsResponse) String() string { return proto.CompactTextString(m) } +func (*ListOIDCConnectorsResponse) ProtoMessage() {} +func (*ListOIDCConnectorsResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{202} +} +func (m *ListOIDCConnectorsResponse) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListOIDCConnectorsResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListOIDCConnectorsResponse.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListOIDCConnectorsResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListOIDCConnectorsResponse.Merge(m, src) +} +func (m *ListOIDCConnectorsResponse) XXX_Size() int { + return m.Size() +} +func (m *ListOIDCConnectorsResponse) XXX_DiscardUnknown() { + xxx_messageInfo_ListOIDCConnectorsResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_ListOIDCConnectorsResponse proto.InternalMessageInfo + +func (m *ListOIDCConnectorsResponse) GetConnectors() []*types.OIDCConnectorV3 { + if m != nil { + return m.Connectors + } + return nil +} + +func (m *ListOIDCConnectorsResponse) GetNextPageToken() string { + if m != nil { + return m.NextPageToken + } + return "" +} + // CreateOIDCConnectorRequest is a request for CreateOIDCConnector. 
type CreateOIDCConnectorRequest struct { // Connector to be created. @@ -12790,7 +13953,7 @@ func (m *CreateOIDCConnectorRequest) Reset() { *m = CreateOIDCConnectorR func (m *CreateOIDCConnectorRequest) String() string { return proto.CompactTextString(m) } func (*CreateOIDCConnectorRequest) ProtoMessage() {} func (*CreateOIDCConnectorRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{185} + return fileDescriptor_0ffcffcda38ae159, []int{203} } func (m *CreateOIDCConnectorRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12839,7 +14002,7 @@ func (m *UpdateOIDCConnectorRequest) Reset() { *m = UpdateOIDCConnectorR func (m *UpdateOIDCConnectorRequest) String() string { return proto.CompactTextString(m) } func (*UpdateOIDCConnectorRequest) ProtoMessage() {} func (*UpdateOIDCConnectorRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{186} + return fileDescriptor_0ffcffcda38ae159, []int{204} } func (m *UpdateOIDCConnectorRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12888,7 +14051,7 @@ func (m *UpsertOIDCConnectorRequest) Reset() { *m = UpsertOIDCConnectorR func (m *UpsertOIDCConnectorRequest) String() string { return proto.CompactTextString(m) } func (*UpsertOIDCConnectorRequest) ProtoMessage() {} func (*UpsertOIDCConnectorRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{187} + return fileDescriptor_0ffcffcda38ae159, []int{205} } func (m *UpsertOIDCConnectorRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12937,7 +14100,7 @@ func (m *CreateSAMLConnectorRequest) Reset() { *m = CreateSAMLConnectorR func (m *CreateSAMLConnectorRequest) String() string { return proto.CompactTextString(m) } func (*CreateSAMLConnectorRequest) ProtoMessage() {} func (*CreateSAMLConnectorRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{188} + return fileDescriptor_0ffcffcda38ae159, []int{206} 
} func (m *CreateSAMLConnectorRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12986,7 +14149,7 @@ func (m *UpdateSAMLConnectorRequest) Reset() { *m = UpdateSAMLConnectorR func (m *UpdateSAMLConnectorRequest) String() string { return proto.CompactTextString(m) } func (*UpdateSAMLConnectorRequest) ProtoMessage() {} func (*UpdateSAMLConnectorRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{189} + return fileDescriptor_0ffcffcda38ae159, []int{207} } func (m *UpdateSAMLConnectorRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13035,7 +14198,7 @@ func (m *UpsertSAMLConnectorRequest) Reset() { *m = UpsertSAMLConnectorR func (m *UpsertSAMLConnectorRequest) String() string { return proto.CompactTextString(m) } func (*UpsertSAMLConnectorRequest) ProtoMessage() {} func (*UpsertSAMLConnectorRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{190} + return fileDescriptor_0ffcffcda38ae159, []int{208} } func (m *UpsertSAMLConnectorRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13071,6 +14234,266 @@ func (m *UpsertSAMLConnectorRequest) GetConnector() *types.SAMLConnectorV2 { return nil } +type ListSAMLConnectorsRequest struct { + // The maximum number of items to return. + // The server may impose a different page size at its discretion. + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // The next_page_token value returned from a previous List request, if any. + PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + // WithSecrets specifies whether to load associated secrets. + WithSecrets bool `protobuf:"varint,3,opt,name=with_secrets,json=withSecrets,proto3" json:"with_secrets,omitempty"` + // NoFollowURLs specifies whether to skip following URLs when + // validating SAML connector resources. 
+ NoFollowUrls bool `protobuf:"varint,4,opt,name=no_follow_urls,json=noFollowUrls,proto3" json:"no_follow_urls,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListSAMLConnectorsRequest) Reset() { *m = ListSAMLConnectorsRequest{} } +func (m *ListSAMLConnectorsRequest) String() string { return proto.CompactTextString(m) } +func (*ListSAMLConnectorsRequest) ProtoMessage() {} +func (*ListSAMLConnectorsRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{209} +} +func (m *ListSAMLConnectorsRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListSAMLConnectorsRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListSAMLConnectorsRequest.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListSAMLConnectorsRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListSAMLConnectorsRequest.Merge(m, src) +} +func (m *ListSAMLConnectorsRequest) XXX_Size() int { + return m.Size() +} +func (m *ListSAMLConnectorsRequest) XXX_DiscardUnknown() { + xxx_messageInfo_ListSAMLConnectorsRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_ListSAMLConnectorsRequest proto.InternalMessageInfo + +func (m *ListSAMLConnectorsRequest) GetPageSize() int32 { + if m != nil { + return m.PageSize + } + return 0 +} + +func (m *ListSAMLConnectorsRequest) GetPageToken() string { + if m != nil { + return m.PageToken + } + return "" +} + +func (m *ListSAMLConnectorsRequest) GetWithSecrets() bool { + if m != nil { + return m.WithSecrets + } + return false +} + +func (m *ListSAMLConnectorsRequest) GetNoFollowUrls() bool { + if m != nil { + return m.NoFollowUrls + } + return false +} + +type ListSAMLConnectorsResponse struct { + // Connectors is a list of SAML connectors. 
+ Connectors []*types.SAMLConnectorV2 `protobuf:"bytes,1,rep,name=connectors,proto3" json:"connectors,omitempty"` + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. + NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListSAMLConnectorsResponse) Reset() { *m = ListSAMLConnectorsResponse{} } +func (m *ListSAMLConnectorsResponse) String() string { return proto.CompactTextString(m) } +func (*ListSAMLConnectorsResponse) ProtoMessage() {} +func (*ListSAMLConnectorsResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{210} +} +func (m *ListSAMLConnectorsResponse) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListSAMLConnectorsResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListSAMLConnectorsResponse.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListSAMLConnectorsResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListSAMLConnectorsResponse.Merge(m, src) +} +func (m *ListSAMLConnectorsResponse) XXX_Size() int { + return m.Size() +} +func (m *ListSAMLConnectorsResponse) XXX_DiscardUnknown() { + xxx_messageInfo_ListSAMLConnectorsResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_ListSAMLConnectorsResponse proto.InternalMessageInfo + +func (m *ListSAMLConnectorsResponse) GetConnectors() []*types.SAMLConnectorV2 { + if m != nil { + return m.Connectors + } + return nil +} + +func (m *ListSAMLConnectorsResponse) GetNextPageToken() string { + if m != nil { + return m.NextPageToken + } + return "" +} + +type ListGithubConnectorsRequest struct { + // The maximum number of 
items to return. + // The server may impose a different page size at its discretion. + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // The next_page_token value returned from a previous List request, if any. + PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + // WithSecrets specifies whether to load associated secrets. + WithSecrets bool `protobuf:"varint,3,opt,name=with_secrets,json=withSecrets,proto3" json:"with_secrets,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListGithubConnectorsRequest) Reset() { *m = ListGithubConnectorsRequest{} } +func (m *ListGithubConnectorsRequest) String() string { return proto.CompactTextString(m) } +func (*ListGithubConnectorsRequest) ProtoMessage() {} +func (*ListGithubConnectorsRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{211} +} +func (m *ListGithubConnectorsRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListGithubConnectorsRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListGithubConnectorsRequest.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListGithubConnectorsRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListGithubConnectorsRequest.Merge(m, src) +} +func (m *ListGithubConnectorsRequest) XXX_Size() int { + return m.Size() +} +func (m *ListGithubConnectorsRequest) XXX_DiscardUnknown() { + xxx_messageInfo_ListGithubConnectorsRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_ListGithubConnectorsRequest proto.InternalMessageInfo + +func (m *ListGithubConnectorsRequest) GetPageSize() int32 { + if m != nil { + return m.PageSize + } + return 0 
+} + +func (m *ListGithubConnectorsRequest) GetPageToken() string { + if m != nil { + return m.PageToken + } + return "" +} + +func (m *ListGithubConnectorsRequest) GetWithSecrets() bool { + if m != nil { + return m.WithSecrets + } + return false +} + +type ListGithubConnectorsResponse struct { + // Connectors is a list of Github connectors. + Connectors []*types.GithubConnectorV3 `protobuf:"bytes,1,rep,name=connectors,proto3" json:"connectors,omitempty"` + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. + NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListGithubConnectorsResponse) Reset() { *m = ListGithubConnectorsResponse{} } +func (m *ListGithubConnectorsResponse) String() string { return proto.CompactTextString(m) } +func (*ListGithubConnectorsResponse) ProtoMessage() {} +func (*ListGithubConnectorsResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{212} +} +func (m *ListGithubConnectorsResponse) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListGithubConnectorsResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListGithubConnectorsResponse.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListGithubConnectorsResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListGithubConnectorsResponse.Merge(m, src) +} +func (m *ListGithubConnectorsResponse) XXX_Size() int { + return m.Size() +} +func (m *ListGithubConnectorsResponse) XXX_DiscardUnknown() { + xxx_messageInfo_ListGithubConnectorsResponse.DiscardUnknown(m) +} + +var 
xxx_messageInfo_ListGithubConnectorsResponse proto.InternalMessageInfo + +func (m *ListGithubConnectorsResponse) GetConnectors() []*types.GithubConnectorV3 { + if m != nil { + return m.Connectors + } + return nil +} + +func (m *ListGithubConnectorsResponse) GetNextPageToken() string { + if m != nil { + return m.NextPageToken + } + return "" +} + // CreateGithubConnectorRequest is a request for CreateGithubConnector. type CreateGithubConnectorRequest struct { // Connector to be created. @@ -13084,7 +14507,7 @@ func (m *CreateGithubConnectorRequest) Reset() { *m = CreateGithubConnec func (m *CreateGithubConnectorRequest) String() string { return proto.CompactTextString(m) } func (*CreateGithubConnectorRequest) ProtoMessage() {} func (*CreateGithubConnectorRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{191} + return fileDescriptor_0ffcffcda38ae159, []int{213} } func (m *CreateGithubConnectorRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13133,7 +14556,7 @@ func (m *UpdateGithubConnectorRequest) Reset() { *m = UpdateGithubConnec func (m *UpdateGithubConnectorRequest) String() string { return proto.CompactTextString(m) } func (*UpdateGithubConnectorRequest) ProtoMessage() {} func (*UpdateGithubConnectorRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{192} + return fileDescriptor_0ffcffcda38ae159, []int{214} } func (m *UpdateGithubConnectorRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13182,7 +14605,7 @@ func (m *UpsertGithubConnectorRequest) Reset() { *m = UpsertGithubConnec func (m *UpsertGithubConnectorRequest) String() string { return proto.CompactTextString(m) } func (*UpsertGithubConnectorRequest) ProtoMessage() {} func (*UpsertGithubConnectorRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{193} + return fileDescriptor_0ffcffcda38ae159, []int{215} } func (m *UpsertGithubConnectorRequest) XXX_Unmarshal(b 
[]byte) error { return m.Unmarshal(b) @@ -13233,7 +14656,7 @@ func (m *GetSSODiagnosticInfoRequest) Reset() { *m = GetSSODiagnosticInf func (m *GetSSODiagnosticInfoRequest) String() string { return proto.CompactTextString(m) } func (*GetSSODiagnosticInfoRequest) ProtoMessage() {} func (*GetSSODiagnosticInfoRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{194} + return fileDescriptor_0ffcffcda38ae159, []int{216} } func (m *GetSSODiagnosticInfoRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13299,7 +14722,7 @@ func (m *SystemRoleAssertion) Reset() { *m = SystemRoleAssertion{} } func (m *SystemRoleAssertion) String() string { return proto.CompactTextString(m) } func (*SystemRoleAssertion) ProtoMessage() {} func (*SystemRoleAssertion) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{195} + return fileDescriptor_0ffcffcda38ae159, []int{217} } func (m *SystemRoleAssertion) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13369,7 +14792,7 @@ func (m *SystemRoleAssertionSet) Reset() { *m = SystemRoleAssertionSet{} func (m *SystemRoleAssertionSet) String() string { return proto.CompactTextString(m) } func (*SystemRoleAssertionSet) ProtoMessage() {} func (*SystemRoleAssertionSet) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{196} + return fileDescriptor_0ffcffcda38ae159, []int{218} } func (m *SystemRoleAssertionSet) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13432,7 +14855,7 @@ func (m *InventoryConnectedServiceCountsRequest) Reset() { func (m *InventoryConnectedServiceCountsRequest) String() string { return proto.CompactTextString(m) } func (*InventoryConnectedServiceCountsRequest) ProtoMessage() {} func (*InventoryConnectedServiceCountsRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{197} + return fileDescriptor_0ffcffcda38ae159, []int{219} } func (m *InventoryConnectedServiceCountsRequest) 
XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13474,7 +14897,7 @@ func (m *InventoryConnectedServiceCounts) Reset() { *m = InventoryConnec func (m *InventoryConnectedServiceCounts) String() string { return proto.CompactTextString(m) } func (*InventoryConnectedServiceCounts) ProtoMessage() {} func (*InventoryConnectedServiceCounts) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{198} + return fileDescriptor_0ffcffcda38ae159, []int{220} } func (m *InventoryConnectedServiceCounts) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13528,7 +14951,7 @@ func (m *InventoryPingRequest) Reset() { *m = InventoryPingRequest{} } func (m *InventoryPingRequest) String() string { return proto.CompactTextString(m) } func (*InventoryPingRequest) ProtoMessage() {} func (*InventoryPingRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{199} + return fileDescriptor_0ffcffcda38ae159, []int{221} } func (m *InventoryPingRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13585,7 +15008,7 @@ func (m *InventoryPingResponse) Reset() { *m = InventoryPingResponse{} } func (m *InventoryPingResponse) String() string { return proto.CompactTextString(m) } func (*InventoryPingResponse) ProtoMessage() {} func (*InventoryPingResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{200} + return fileDescriptor_0ffcffcda38ae159, []int{222} } func (m *InventoryPingResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13634,7 +15057,7 @@ func (m *GetClusterAlertsResponse) Reset() { *m = GetClusterAlertsRespon func (m *GetClusterAlertsResponse) String() string { return proto.CompactTextString(m) } func (*GetClusterAlertsResponse) ProtoMessage() {} func (*GetClusterAlertsResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{201} + return fileDescriptor_0ffcffcda38ae159, []int{223} } func (m *GetClusterAlertsResponse) 
XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13681,7 +15104,7 @@ func (m *GetAlertAcksRequest) Reset() { *m = GetAlertAcksRequest{} } func (m *GetAlertAcksRequest) String() string { return proto.CompactTextString(m) } func (*GetAlertAcksRequest) ProtoMessage() {} func (*GetAlertAcksRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{202} + return fileDescriptor_0ffcffcda38ae159, []int{224} } func (m *GetAlertAcksRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13723,7 +15146,7 @@ func (m *GetAlertAcksResponse) Reset() { *m = GetAlertAcksResponse{} } func (m *GetAlertAcksResponse) String() string { return proto.CompactTextString(m) } func (*GetAlertAcksResponse) ProtoMessage() {} func (*GetAlertAcksResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{203} + return fileDescriptor_0ffcffcda38ae159, []int{225} } func (m *GetAlertAcksResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13773,7 +15196,7 @@ func (m *ClearAlertAcksRequest) Reset() { *m = ClearAlertAcksRequest{} } func (m *ClearAlertAcksRequest) String() string { return proto.CompactTextString(m) } func (*ClearAlertAcksRequest) ProtoMessage() {} func (*ClearAlertAcksRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{204} + return fileDescriptor_0ffcffcda38ae159, []int{226} } func (m *ClearAlertAcksRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13822,7 +15245,7 @@ func (m *UpsertClusterAlertRequest) Reset() { *m = UpsertClusterAlertReq func (m *UpsertClusterAlertRequest) String() string { return proto.CompactTextString(m) } func (*UpsertClusterAlertRequest) ProtoMessage() {} func (*UpsertClusterAlertRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{205} + return fileDescriptor_0ffcffcda38ae159, []int{227} } func (m *UpsertClusterAlertRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ 
-13871,7 +15294,7 @@ func (m *GetConnectionDiagnosticRequest) Reset() { *m = GetConnectionDia func (m *GetConnectionDiagnosticRequest) String() string { return proto.CompactTextString(m) } func (*GetConnectionDiagnosticRequest) ProtoMessage() {} func (*GetConnectionDiagnosticRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{206} + return fileDescriptor_0ffcffcda38ae159, []int{228} } func (m *GetConnectionDiagnosticRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13922,7 +15345,7 @@ func (m *AppendDiagnosticTraceRequest) Reset() { *m = AppendDiagnosticTr func (m *AppendDiagnosticTraceRequest) String() string { return proto.CompactTextString(m) } func (*AppendDiagnosticTraceRequest) ProtoMessage() {} func (*AppendDiagnosticTraceRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{207} + return fileDescriptor_0ffcffcda38ae159, []int{229} } func (m *AppendDiagnosticTraceRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13977,7 +15400,7 @@ func (m *SubmitUsageEventRequest) Reset() { *m = SubmitUsageEventRequest func (m *SubmitUsageEventRequest) String() string { return proto.CompactTextString(m) } func (*SubmitUsageEventRequest) ProtoMessage() {} func (*SubmitUsageEventRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{208} + return fileDescriptor_0ffcffcda38ae159, []int{230} } func (m *SubmitUsageEventRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14024,7 +15447,7 @@ func (m *GetLicenseRequest) Reset() { *m = GetLicenseRequest{} } func (m *GetLicenseRequest) String() string { return proto.CompactTextString(m) } func (*GetLicenseRequest) ProtoMessage() {} func (*GetLicenseRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{209} + return fileDescriptor_0ffcffcda38ae159, []int{231} } func (m *GetLicenseRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ 
-14064,7 +15487,7 @@ func (m *ListReleasesRequest) Reset() { *m = ListReleasesRequest{} } func (m *ListReleasesRequest) String() string { return proto.CompactTextString(m) } func (*ListReleasesRequest) ProtoMessage() {} func (*ListReleasesRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{210} + return fileDescriptor_0ffcffcda38ae159, []int{232} } func (m *ListReleasesRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14108,7 +15531,7 @@ func (m *CreateTokenV2Request) Reset() { *m = CreateTokenV2Request{} } func (m *CreateTokenV2Request) String() string { return proto.CompactTextString(m) } func (*CreateTokenV2Request) ProtoMessage() {} func (*CreateTokenV2Request) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{211} + return fileDescriptor_0ffcffcda38ae159, []int{233} } func (m *CreateTokenV2Request) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14185,7 +15608,7 @@ func (m *UpsertTokenV2Request) Reset() { *m = UpsertTokenV2Request{} } func (m *UpsertTokenV2Request) String() string { return proto.CompactTextString(m) } func (*UpsertTokenV2Request) ProtoMessage() {} func (*UpsertTokenV2Request) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{212} + return fileDescriptor_0ffcffcda38ae159, []int{234} } func (m *UpsertTokenV2Request) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14260,7 +15683,7 @@ func (m *GetHeadlessAuthenticationRequest) Reset() { *m = GetHeadlessAut func (m *GetHeadlessAuthenticationRequest) String() string { return proto.CompactTextString(m) } func (*GetHeadlessAuthenticationRequest) ProtoMessage() {} func (*GetHeadlessAuthenticationRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{213} + return fileDescriptor_0ffcffcda38ae159, []int{235} } func (m *GetHeadlessAuthenticationRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14318,7 +15741,7 @@ func (m 
*UpdateHeadlessAuthenticationStateRequest) Reset() { func (m *UpdateHeadlessAuthenticationStateRequest) String() string { return proto.CompactTextString(m) } func (*UpdateHeadlessAuthenticationStateRequest) ProtoMessage() {} func (*UpdateHeadlessAuthenticationStateRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{214} + return fileDescriptor_0ffcffcda38ae159, []int{236} } func (m *UpdateHeadlessAuthenticationStateRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14384,7 +15807,7 @@ func (m *ExportUpgradeWindowsRequest) Reset() { *m = ExportUpgradeWindow func (m *ExportUpgradeWindowsRequest) String() string { return proto.CompactTextString(m) } func (*ExportUpgradeWindowsRequest) ProtoMessage() {} func (*ExportUpgradeWindowsRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{215} + return fileDescriptor_0ffcffcda38ae159, []int{237} } func (m *ExportUpgradeWindowsRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14450,7 +15873,7 @@ func (m *ExportUpgradeWindowsResponse) Reset() { *m = ExportUpgradeWindo func (m *ExportUpgradeWindowsResponse) String() string { return proto.CompactTextString(m) } func (*ExportUpgradeWindowsResponse) ProtoMessage() {} func (*ExportUpgradeWindowsResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{216} + return fileDescriptor_0ffcffcda38ae159, []int{238} } func (m *ExportUpgradeWindowsResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14525,7 +15948,7 @@ func (m *ListAccessRequestsRequest) Reset() { *m = ListAccessRequestsReq func (m *ListAccessRequestsRequest) String() string { return proto.CompactTextString(m) } func (*ListAccessRequestsRequest) ProtoMessage() {} func (*ListAccessRequestsRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{217} + return fileDescriptor_0ffcffcda38ae159, []int{239} } func (m *ListAccessRequestsRequest) 
XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14604,7 +16027,7 @@ func (m *ListAccessRequestsResponse) Reset() { *m = ListAccessRequestsRe func (m *ListAccessRequestsResponse) String() string { return proto.CompactTextString(m) } func (*ListAccessRequestsResponse) ProtoMessage() {} func (*ListAccessRequestsResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{218} + return fileDescriptor_0ffcffcda38ae159, []int{240} } func (m *ListAccessRequestsResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14660,7 +16083,7 @@ func (m *AccessRequestAllowedPromotionRequest) Reset() { *m = AccessRequ func (m *AccessRequestAllowedPromotionRequest) String() string { return proto.CompactTextString(m) } func (*AccessRequestAllowedPromotionRequest) ProtoMessage() {} func (*AccessRequestAllowedPromotionRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{219} + return fileDescriptor_0ffcffcda38ae159, []int{241} } func (m *AccessRequestAllowedPromotionRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14709,7 +16132,7 @@ func (m *AccessRequestAllowedPromotionResponse) Reset() { *m = AccessReq func (m *AccessRequestAllowedPromotionResponse) String() string { return proto.CompactTextString(m) } func (*AccessRequestAllowedPromotionResponse) ProtoMessage() {} func (*AccessRequestAllowedPromotionResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_0ffcffcda38ae159, []int{220} + return fileDescriptor_0ffcffcda38ae159, []int{242} } func (m *AccessRequestAllowedPromotionResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14745,6 +16168,377 @@ func (m *AccessRequestAllowedPromotionResponse) GetAllowedPromotions() *types.Ac return nil } +// ListProvisionTokensRequest is used to retrieve a paginated list of provision tokens. +type ListProvisionTokensRequest struct { + // Limit is the maximum amount of items per page. 
+ Limit int32 `protobuf:"varint,1,opt,name=Limit,proto3" json:"Limit,omitempty"` + // StartKey is used to resume a query in order to enable pagination. + // If the previous response had NextKey set then this should be + // set to its value. Otherwise leave empty. + StartKey string `protobuf:"bytes,2,opt,name=StartKey,proto3" json:"StartKey,omitempty"` + // FilterRoles allows filtering for tokens with the provided roles. Tokens + // with ANY of the provided roles are returned. + FilterRoles []string `protobuf:"bytes,3,rep,name=FilterRoles,proto3" json:"FilterRoles,omitempty"` + // FilterBotName allows filtering for tokens associated with the + // named bot. In addition, only items with a role of Bot are returned. + FilterBotName string `protobuf:"bytes,4,opt,name=FilterBotName,proto3" json:"FilterBotName,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListProvisionTokensRequest) Reset() { *m = ListProvisionTokensRequest{} } +func (m *ListProvisionTokensRequest) String() string { return proto.CompactTextString(m) } +func (*ListProvisionTokensRequest) ProtoMessage() {} +func (*ListProvisionTokensRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{243} +} +func (m *ListProvisionTokensRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListProvisionTokensRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListProvisionTokensRequest.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListProvisionTokensRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListProvisionTokensRequest.Merge(m, src) +} +func (m *ListProvisionTokensRequest) XXX_Size() int { + return m.Size() +} +func (m *ListProvisionTokensRequest) XXX_DiscardUnknown() { + 
xxx_messageInfo_ListProvisionTokensRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_ListProvisionTokensRequest proto.InternalMessageInfo + +func (m *ListProvisionTokensRequest) GetLimit() int32 { + if m != nil { + return m.Limit + } + return 0 +} + +func (m *ListProvisionTokensRequest) GetStartKey() string { + if m != nil { + return m.StartKey + } + return "" +} + +func (m *ListProvisionTokensRequest) GetFilterRoles() []string { + if m != nil { + return m.FilterRoles + } + return nil +} + +func (m *ListProvisionTokensRequest) GetFilterBotName() string { + if m != nil { + return m.FilterBotName + } + return "" +} + +// ListProvisionTokensResponse is used to retrieve a paginated list of provision tokens. +type ListProvisionTokensResponse struct { + // Tokens is the list of tokens. + Tokens []*types.ProvisionTokenV2 `protobuf:"bytes,1,rep,name=Tokens,proto3" json:"Tokens,omitempty"` + // NextKey is used to resume a query in order to enable pagination. + // Leave empty to start at the beginning. 
+ NextKey string `protobuf:"bytes,2,opt,name=NextKey,proto3" json:"NextKey,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListProvisionTokensResponse) Reset() { *m = ListProvisionTokensResponse{} } +func (m *ListProvisionTokensResponse) String() string { return proto.CompactTextString(m) } +func (*ListProvisionTokensResponse) ProtoMessage() {} +func (*ListProvisionTokensResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{244} +} +func (m *ListProvisionTokensResponse) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListProvisionTokensResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListProvisionTokensResponse.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListProvisionTokensResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListProvisionTokensResponse.Merge(m, src) +} +func (m *ListProvisionTokensResponse) XXX_Size() int { + return m.Size() +} +func (m *ListProvisionTokensResponse) XXX_DiscardUnknown() { + xxx_messageInfo_ListProvisionTokensResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_ListProvisionTokensResponse proto.InternalMessageInfo + +func (m *ListProvisionTokensResponse) GetTokens() []*types.ProvisionTokenV2 { + if m != nil { + return m.Tokens + } + return nil +} + +func (m *ListProvisionTokensResponse) GetNextKey() string { + if m != nil { + return m.NextKey + } + return "" +} + +type ListKubernetesClustersRequest struct { + // The maximum number of items to return. + // The server may impose a different page size at its discretion. 
+ PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // The next_page_token value returned from a previous List request, if any. + PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListKubernetesClustersRequest) Reset() { *m = ListKubernetesClustersRequest{} } +func (m *ListKubernetesClustersRequest) String() string { return proto.CompactTextString(m) } +func (*ListKubernetesClustersRequest) ProtoMessage() {} +func (*ListKubernetesClustersRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{245} +} +func (m *ListKubernetesClustersRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListKubernetesClustersRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListKubernetesClustersRequest.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListKubernetesClustersRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListKubernetesClustersRequest.Merge(m, src) +} +func (m *ListKubernetesClustersRequest) XXX_Size() int { + return m.Size() +} +func (m *ListKubernetesClustersRequest) XXX_DiscardUnknown() { + xxx_messageInfo_ListKubernetesClustersRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_ListKubernetesClustersRequest proto.InternalMessageInfo + +func (m *ListKubernetesClustersRequest) GetPageSize() int32 { + if m != nil { + return m.PageSize + } + return 0 +} + +func (m *ListKubernetesClustersRequest) GetPageToken() string { + if m != nil { + return m.PageToken + } + return "" +} + +type ListKubernetesClustersResponse struct { + // KubernetesClusters is a list of kubernetes cluster 
resources. + KubernetesClusters []*types.KubernetesClusterV3 `protobuf:"bytes,1,rep,name=kubernetes_clusters,json=kubernetesClusters,proto3" json:"kubernetes_clusters,omitempty"` + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. + NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListKubernetesClustersResponse) Reset() { *m = ListKubernetesClustersResponse{} } +func (m *ListKubernetesClustersResponse) String() string { return proto.CompactTextString(m) } +func (*ListKubernetesClustersResponse) ProtoMessage() {} +func (*ListKubernetesClustersResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{246} +} +func (m *ListKubernetesClustersResponse) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListKubernetesClustersResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListKubernetesClustersResponse.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListKubernetesClustersResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListKubernetesClustersResponse.Merge(m, src) +} +func (m *ListKubernetesClustersResponse) XXX_Size() int { + return m.Size() +} +func (m *ListKubernetesClustersResponse) XXX_DiscardUnknown() { + xxx_messageInfo_ListKubernetesClustersResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_ListKubernetesClustersResponse proto.InternalMessageInfo + +func (m *ListKubernetesClustersResponse) GetKubernetesClusters() []*types.KubernetesClusterV3 { + if m != nil { + return m.KubernetesClusters + } + return nil +} + +func (m *ListKubernetesClustersResponse) 
GetNextPageToken() string { + if m != nil { + return m.NextPageToken + } + return "" +} + +type ListSnowflakeSessionsRequest struct { + // The maximum number of items to return. + // The server may impose a different page size at its discretion. + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // The next_page_token value returned from a previous List request, if any. + PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListSnowflakeSessionsRequest) Reset() { *m = ListSnowflakeSessionsRequest{} } +func (m *ListSnowflakeSessionsRequest) String() string { return proto.CompactTextString(m) } +func (*ListSnowflakeSessionsRequest) ProtoMessage() {} +func (*ListSnowflakeSessionsRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{247} +} +func (m *ListSnowflakeSessionsRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListSnowflakeSessionsRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListSnowflakeSessionsRequest.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListSnowflakeSessionsRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListSnowflakeSessionsRequest.Merge(m, src) +} +func (m *ListSnowflakeSessionsRequest) XXX_Size() int { + return m.Size() +} +func (m *ListSnowflakeSessionsRequest) XXX_DiscardUnknown() { + xxx_messageInfo_ListSnowflakeSessionsRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_ListSnowflakeSessionsRequest proto.InternalMessageInfo + +func (m *ListSnowflakeSessionsRequest) GetPageSize() int32 { + if m != nil { + return m.PageSize + } + return 0 +} + 
+func (m *ListSnowflakeSessionsRequest) GetPageToken() string { + if m != nil { + return m.PageToken + } + return "" +} + +type ListSnowflakeSessionsResponse struct { + // Sessions is a list of Snowflake web sessions. + Sessions []*types.WebSessionV2 `protobuf:"bytes,1,rep,name=sessions,proto3" json:"sessions,omitempty"` + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. + NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ListSnowflakeSessionsResponse) Reset() { *m = ListSnowflakeSessionsResponse{} } +func (m *ListSnowflakeSessionsResponse) String() string { return proto.CompactTextString(m) } +func (*ListSnowflakeSessionsResponse) ProtoMessage() {} +func (*ListSnowflakeSessionsResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_0ffcffcda38ae159, []int{248} +} +func (m *ListSnowflakeSessionsResponse) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ListSnowflakeSessionsResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ListSnowflakeSessionsResponse.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ListSnowflakeSessionsResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_ListSnowflakeSessionsResponse.Merge(m, src) +} +func (m *ListSnowflakeSessionsResponse) XXX_Size() int { + return m.Size() +} +func (m *ListSnowflakeSessionsResponse) XXX_DiscardUnknown() { + xxx_messageInfo_ListSnowflakeSessionsResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_ListSnowflakeSessionsResponse proto.InternalMessageInfo + +func (m *ListSnowflakeSessionsResponse) GetSessions() []*types.WebSessionV2 { + if m != 
nil { + return m.Sessions + } + return nil +} + +func (m *ListSnowflakeSessionsResponse) GetNextPageToken() string { + if m != nil { + return m.NextPageToken + } + return "" +} + func init() { proto.RegisterEnum("proto.ProductType", ProductType_name, ProductType_value) proto.RegisterEnum("proto.SupportType", SupportType_name, SupportType_value) @@ -14769,11 +16563,15 @@ func init() { proto.RegisterType((*GetUserRequest)(nil), "proto.GetUserRequest") proto.RegisterType((*GetUsersRequest)(nil), "proto.GetUsersRequest") proto.RegisterType((*ChangePasswordRequest)(nil), "proto.ChangePasswordRequest") + proto.RegisterType((*ListSemaphoresRequest)(nil), "proto.ListSemaphoresRequest") + proto.RegisterType((*ListSemaphoresResponse)(nil), "proto.ListSemaphoresResponse") proto.RegisterType((*PluginDataSeq)(nil), "proto.PluginDataSeq") proto.RegisterType((*RequestStateSetter)(nil), "proto.RequestStateSetter") proto.RegisterType((*RequestID)(nil), "proto.RequestID") proto.RegisterType((*GetResetPasswordTokenRequest)(nil), "proto.GetResetPasswordTokenRequest") proto.RegisterType((*CreateResetPasswordTokenRequest)(nil), "proto.CreateResetPasswordTokenRequest") + proto.RegisterType((*ListResetPasswordTokenRequest)(nil), "proto.ListResetPasswordTokenRequest") + proto.RegisterType((*ListResetPasswordTokenResponse)(nil), "proto.ListResetPasswordTokenResponse") proto.RegisterType((*RenewableCertsRequest)(nil), "proto.RenewableCertsRequest") proto.RegisterType((*PingRequest)(nil), "proto.PingRequest") proto.RegisterType((*PingResponse)(nil), "proto.PingResponse") @@ -14825,6 +16623,8 @@ func init() { proto.RegisterType((*GetWebSessionsResponse)(nil), "proto.GetWebSessionsResponse") proto.RegisterType((*GetWebTokenResponse)(nil), "proto.GetWebTokenResponse") proto.RegisterType((*GetWebTokensResponse)(nil), "proto.GetWebTokensResponse") + proto.RegisterType((*ListWebTokensRequest)(nil), "proto.ListWebTokensRequest") + proto.RegisterType((*ListWebTokensResponse)(nil), 
"proto.ListWebTokensResponse") proto.RegisterType((*UpsertKubernetesServerRequest)(nil), "proto.UpsertKubernetesServerRequest") proto.RegisterType((*DeleteKubernetesServerRequest)(nil), "proto.DeleteKubernetesServerRequest") proto.RegisterType((*DeleteAllKubernetesServersRequest)(nil), "proto.DeleteAllKubernetesServersRequest") @@ -14879,13 +16679,22 @@ func init() { proto.RegisterType((*Events)(nil), "proto.Events") proto.RegisterType((*GetLocksRequest)(nil), "proto.GetLocksRequest") proto.RegisterType((*GetLocksResponse)(nil), "proto.GetLocksResponse") + proto.RegisterType((*ListLocksRequest)(nil), "proto.ListLocksRequest") + proto.RegisterType((*ListLocksResponse)(nil), "proto.ListLocksResponse") proto.RegisterType((*GetLockRequest)(nil), "proto.GetLockRequest") proto.RegisterType((*DeleteLockRequest)(nil), "proto.DeleteLockRequest") proto.RegisterType((*ReplaceRemoteLocksRequest)(nil), "proto.ReplaceRemoteLocksRequest") + proto.RegisterType((*ListAppsRequest)(nil), "proto.ListAppsRequest") + proto.RegisterType((*ListAppsResponse)(nil), "proto.ListAppsResponse") + proto.RegisterType((*ListDatabasesRequest)(nil), "proto.ListDatabasesRequest") + proto.RegisterType((*ListDatabasesResponse)(nil), "proto.ListDatabasesResponse") proto.RegisterType((*GetWindowsDesktopServicesResponse)(nil), "proto.GetWindowsDesktopServicesResponse") proto.RegisterType((*GetWindowsDesktopServiceRequest)(nil), "proto.GetWindowsDesktopServiceRequest") proto.RegisterType((*GetWindowsDesktopServiceResponse)(nil), "proto.GetWindowsDesktopServiceResponse") proto.RegisterType((*DeleteWindowsDesktopServiceRequest)(nil), "proto.DeleteWindowsDesktopServiceRequest") + proto.RegisterType((*ListWindowsDesktopsRequest)(nil), "proto.ListWindowsDesktopsRequest") + proto.RegisterMapType((map[string]string)(nil), "proto.ListWindowsDesktopsRequest.LabelsEntry") + proto.RegisterType((*ListWindowsDesktopsResponse)(nil), "proto.ListWindowsDesktopsResponse") 
proto.RegisterType((*GetWindowsDesktopsResponse)(nil), "proto.GetWindowsDesktopsResponse") proto.RegisterType((*DeleteWindowsDesktopRequest)(nil), "proto.DeleteWindowsDesktopRequest") proto.RegisterType((*WindowsDesktopCertRequest)(nil), "proto.WindowsDesktopCertRequest") @@ -14919,6 +16728,8 @@ func init() { proto.RegisterType((*IdentityCenterAccount)(nil), "proto.IdentityCenterAccount") proto.RegisterType((*IdentityCenterPermissionSet)(nil), "proto.IdentityCenterPermissionSet") proto.RegisterType((*IdentityCenterAccountAssignment)(nil), "proto.IdentityCenterAccountAssignment") + proto.RegisterType((*ListInstallersRequest)(nil), "proto.ListInstallersRequest") + proto.RegisterType((*ListInstallersResponse)(nil), "proto.ListInstallersResponse") proto.RegisterType((*PaginatedResource)(nil), "proto.PaginatedResource") proto.RegisterType((*ListUnifiedResourcesRequest)(nil), "proto.ListUnifiedResourcesRequest") proto.RegisterMapType((map[string]string)(nil), "proto.ListUnifiedResourcesRequest.LabelsEntry") @@ -14948,12 +16759,18 @@ func init() { proto.RegisterType((*GetOIDCAuthRequestRequest)(nil), "proto.GetOIDCAuthRequestRequest") proto.RegisterType((*GetSAMLAuthRequestRequest)(nil), "proto.GetSAMLAuthRequestRequest") proto.RegisterType((*GetGithubAuthRequestRequest)(nil), "proto.GetGithubAuthRequestRequest") + proto.RegisterType((*ListOIDCConnectorsRequest)(nil), "proto.ListOIDCConnectorsRequest") + proto.RegisterType((*ListOIDCConnectorsResponse)(nil), "proto.ListOIDCConnectorsResponse") proto.RegisterType((*CreateOIDCConnectorRequest)(nil), "proto.CreateOIDCConnectorRequest") proto.RegisterType((*UpdateOIDCConnectorRequest)(nil), "proto.UpdateOIDCConnectorRequest") proto.RegisterType((*UpsertOIDCConnectorRequest)(nil), "proto.UpsertOIDCConnectorRequest") proto.RegisterType((*CreateSAMLConnectorRequest)(nil), "proto.CreateSAMLConnectorRequest") proto.RegisterType((*UpdateSAMLConnectorRequest)(nil), "proto.UpdateSAMLConnectorRequest") 
proto.RegisterType((*UpsertSAMLConnectorRequest)(nil), "proto.UpsertSAMLConnectorRequest") + proto.RegisterType((*ListSAMLConnectorsRequest)(nil), "proto.ListSAMLConnectorsRequest") + proto.RegisterType((*ListSAMLConnectorsResponse)(nil), "proto.ListSAMLConnectorsResponse") + proto.RegisterType((*ListGithubConnectorsRequest)(nil), "proto.ListGithubConnectorsRequest") + proto.RegisterType((*ListGithubConnectorsResponse)(nil), "proto.ListGithubConnectorsResponse") proto.RegisterType((*CreateGithubConnectorRequest)(nil), "proto.CreateGithubConnectorRequest") proto.RegisterType((*UpdateGithubConnectorRequest)(nil), "proto.UpdateGithubConnectorRequest") proto.RegisterType((*UpsertGithubConnectorRequest)(nil), "proto.UpsertGithubConnectorRequest") @@ -14985,6 +16802,12 @@ func init() { proto.RegisterType((*ListAccessRequestsResponse)(nil), "proto.ListAccessRequestsResponse") proto.RegisterType((*AccessRequestAllowedPromotionRequest)(nil), "proto.AccessRequestAllowedPromotionRequest") proto.RegisterType((*AccessRequestAllowedPromotionResponse)(nil), "proto.AccessRequestAllowedPromotionResponse") + proto.RegisterType((*ListProvisionTokensRequest)(nil), "proto.ListProvisionTokensRequest") + proto.RegisterType((*ListProvisionTokensResponse)(nil), "proto.ListProvisionTokensResponse") + proto.RegisterType((*ListKubernetesClustersRequest)(nil), "proto.ListKubernetesClustersRequest") + proto.RegisterType((*ListKubernetesClustersResponse)(nil), "proto.ListKubernetesClustersResponse") + proto.RegisterType((*ListSnowflakeSessionsRequest)(nil), "proto.ListSnowflakeSessionsRequest") + proto.RegisterType((*ListSnowflakeSessionsResponse)(nil), "proto.ListSnowflakeSessionsResponse") } func init() { @@ -14992,924 +16815,988 @@ func init() { } var fileDescriptor_0ffcffcda38ae159 = []byte{ - // 14666 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xcc, 0xbd, 0x5b, 0x6c, 0x5c, 0x49, - 0x96, 0x20, 0xa6, 0xe4, 0x9b, 0x87, 0xaf, 0x54, 0x90, 
0x14, 0xa9, 0xd4, 0x23, 0xa5, 0xab, 0x92, - 0x4a, 0x55, 0x5d, 0xad, 0x07, 0x55, 0x5d, 0x5d, 0x8f, 0xae, 0xaa, 0xce, 0x24, 0x29, 0x91, 0x12, - 0x1f, 0x59, 0x37, 0x49, 0xaa, 0x5e, 0xd3, 0xd9, 0x57, 0x99, 0x21, 0xf2, 0x8e, 0x92, 0x79, 0xb3, - 0xef, 0xbd, 0x29, 0x95, 0x7a, 0xdc, 0x63, 0xcc, 0x8c, 0x1f, 0xf3, 0x63, 0x7b, 0x0c, 0xd8, 0xc6, - 0x18, 0x63, 0xd8, 0xb0, 0x3d, 0xfe, 0x31, 0x6c, 0xc0, 0x3f, 0xc6, 0x7c, 0xcd, 0x02, 0xfb, 0xb5, - 0xbd, 0x83, 0x5d, 0xec, 0x02, 0x8b, 0xf9, 0xd9, 0x0f, 0xce, 0x6e, 0x03, 0xf3, 0x43, 0xec, 0x7e, - 0x0c, 0x16, 0xbb, 0xc0, 0xf6, 0x62, 0x80, 0x45, 0x9c, 0x78, 0xdc, 0x88, 0xfb, 0x4a, 0x52, 0xa5, - 0xea, 0xd9, 0x1f, 0x89, 0x19, 0x71, 0xce, 0x89, 0xe7, 0x3d, 0x71, 0xce, 0x89, 0x13, 0xe7, 0xc0, - 0xad, 0x90, 0xb6, 0x69, 0xd7, 0xf3, 0xc3, 0xdb, 0x6d, 0xba, 0xef, 0x34, 0x5f, 0xde, 0x6e, 0xb6, - 0x5d, 0xda, 0x09, 0x6f, 0x77, 0x7d, 0x2f, 0xf4, 0x6e, 0x3b, 0xbd, 0xf0, 0x20, 0xa0, 0xfe, 0x73, - 0xb7, 0x49, 0x6f, 0x61, 0x09, 0x19, 0xc6, 0xff, 0x4a, 0x73, 0xfb, 0xde, 0xbe, 0xc7, 0x61, 0xd8, - 0x5f, 0xbc, 0xb2, 0x74, 0x61, 0xdf, 0xf3, 0xf6, 0xdb, 0x94, 0x23, 0x3f, 0xe9, 0x3d, 0xbd, 0x4d, - 0x0f, 0xbb, 0xe1, 0x4b, 0x51, 0x59, 0x8e, 0x57, 0x86, 0xee, 0x21, 0x0d, 0x42, 0xe7, 0xb0, 0x2b, - 0x00, 0xde, 0x52, 0x5d, 0x71, 0xc2, 0x90, 0xd5, 0x84, 0xae, 0xd7, 0xb9, 0xfd, 0xfc, 0xae, 0xfe, - 0x53, 0x80, 0xde, 0xcc, 0xed, 0x75, 0x93, 0xfa, 0x61, 0x70, 0x22, 0x48, 0xfa, 0x9c, 0x76, 0x42, - 0x01, 0xf9, 0x4e, 0x2e, 0xa4, 0xdb, 0x61, 0xa0, 0x9e, 0xff, 0x32, 0xd1, 0x59, 0x01, 0x1d, 0xbe, - 0xec, 0xd2, 0x80, 0x13, 0x94, 0xff, 0x09, 0xd0, 0xab, 0xe9, 0xa0, 0xf8, 0xaf, 0x00, 0xf9, 0x7e, - 0x3a, 0xc8, 0x0b, 0xfa, 0x84, 0xad, 0x40, 0x47, 0xfd, 0xd1, 0x07, 0xdc, 0x77, 0xba, 0x5d, 0xea, - 0x47, 0x7f, 0x08, 0xf0, 0xf3, 0x0a, 0xfc, 0xf0, 0xa9, 0xc3, 0x26, 0xf4, 0xf0, 0xa9, 0x93, 0x18, - 0x46, 0x2f, 0x70, 0xf6, 0xa9, 0xe8, 0xfe, 0xf3, 0xbb, 0xfa, 0x4f, 0x0e, 0x6a, 0xfd, 0x6f, 0x05, - 0x18, 0x7e, 0xec, 0x84, 0xcd, 0x03, 0xf2, 0x29, 0x0c, 0x3f, 0x72, 0x3b, 0xad, 0x60, 0xb1, 
0x70, - 0x65, 0xf0, 0xe6, 0xc4, 0x52, 0xf1, 0x16, 0x1f, 0x0a, 0x56, 0xb2, 0x8a, 0xea, 0xc2, 0x2f, 0x8f, - 0xca, 0x67, 0x8e, 0x8f, 0xca, 0x33, 0xcf, 0x18, 0xd8, 0x3b, 0xde, 0xa1, 0x1b, 0xe2, 0x4e, 0xb0, - 0x39, 0x1e, 0xd9, 0x85, 0xd9, 0x4a, 0xbb, 0xed, 0xbd, 0xa8, 0x39, 0x7e, 0xe8, 0x3a, 0xed, 0x7a, - 0xaf, 0xd9, 0xa4, 0x41, 0xb0, 0x38, 0x70, 0xa5, 0x70, 0x73, 0xac, 0x7a, 0xed, 0xf8, 0xa8, 0x5c, - 0x76, 0x58, 0x75, 0xa3, 0xcb, 0xeb, 0x1b, 0x01, 0x07, 0xd0, 0x08, 0xa5, 0xe1, 0x5b, 0x7f, 0x31, - 0x02, 0xc5, 0x35, 0x2f, 0x08, 0x97, 0xd9, 0xfa, 0xdb, 0xf4, 0x67, 0x3d, 0x1a, 0x84, 0xe4, 0x1a, - 0x8c, 0xb0, 0xb2, 0xf5, 0x95, 0xc5, 0xc2, 0x95, 0xc2, 0xcd, 0xf1, 0xea, 0xc4, 0xf1, 0x51, 0x79, - 0xf4, 0xc0, 0x0b, 0xc2, 0x86, 0xdb, 0xb2, 0x45, 0x15, 0x79, 0x0b, 0xc6, 0xb6, 0xbc, 0x16, 0xdd, - 0x72, 0x0e, 0x29, 0xf6, 0x62, 0xbc, 0x3a, 0x75, 0x7c, 0x54, 0x1e, 0xef, 0x78, 0x2d, 0xda, 0xe8, - 0x38, 0x87, 0xd4, 0x56, 0xd5, 0x64, 0x0f, 0x86, 0x6c, 0xaf, 0x4d, 0x17, 0x07, 0x11, 0xac, 0x7a, - 0x7c, 0x54, 0x1e, 0xf2, 0xbd, 0x36, 0xfd, 0xf5, 0x51, 0xf9, 0xbd, 0x7d, 0x37, 0x3c, 0xe8, 0x3d, - 0xb9, 0xd5, 0xf4, 0x0e, 0x6f, 0xef, 0xfb, 0xce, 0x73, 0x97, 0x6f, 0x59, 0xa7, 0x7d, 0x3b, 0xda, - 0xd8, 0x5d, 0x57, 0xac, 0x7b, 0xfd, 0x65, 0x10, 0xd2, 0x43, 0x46, 0xc9, 0x46, 0x7a, 0xe4, 0x31, - 0xcc, 0x55, 0x5a, 0x2d, 0x97, 0x63, 0xd4, 0x7c, 0xb7, 0xd3, 0x74, 0xbb, 0x4e, 0x3b, 0x58, 0x1c, - 0xba, 0x32, 0x78, 0x73, 0x5c, 0x4c, 0x8a, 0xaa, 0x6f, 0x74, 0x15, 0x80, 0x36, 0x29, 0xa9, 0x04, - 0xc8, 0x3d, 0x18, 0x5b, 0xd9, 0xaa, 0xb3, 0xbe, 0x07, 0x8b, 0xc3, 0x48, 0x6c, 0xe1, 0xf8, 0xa8, - 0x3c, 0xdb, 0xea, 0x04, 0x38, 0x34, 0x9d, 0x80, 0x02, 0x24, 0xef, 0xc1, 0x64, 0xad, 0xf7, 0xa4, - 0xed, 0x36, 0x77, 0x36, 0xea, 0x8f, 0xe8, 0xcb, 0xc5, 0x91, 0x2b, 0x85, 0x9b, 0x93, 0x55, 0x72, - 0x7c, 0x54, 0x9e, 0xee, 0x62, 0x79, 0x23, 0x6c, 0x07, 0x8d, 0x67, 0xf4, 0xa5, 0x6d, 0xc0, 0x45, - 0x78, 0xf5, 0xfa, 0x1a, 0xc3, 0x1b, 0x4d, 0xe0, 0x05, 0xc1, 0x81, 0x8e, 0xc7, 0xe1, 0xc8, 0x6d, - 0x00, 0x9b, 0x1e, 0x7a, 0x21, 
0xad, 0xb4, 0x5a, 0xfe, 0xe2, 0x18, 0xce, 0xed, 0xcc, 0xf1, 0x51, - 0x79, 0xc2, 0xc7, 0xd2, 0x86, 0xd3, 0x6a, 0xf9, 0xb6, 0x06, 0x42, 0x96, 0x61, 0xcc, 0xf6, 0xf8, - 0x04, 0x2f, 0x8e, 0x5f, 0x29, 0xdc, 0x9c, 0x58, 0x9a, 0x11, 0xdb, 0x50, 0x16, 0x57, 0xcf, 0x1d, - 0x1f, 0x95, 0x89, 0x2f, 0x7e, 0xe9, 0xa3, 0x94, 0x10, 0xa4, 0x0c, 0xa3, 0x5b, 0xde, 0xb2, 0xd3, - 0x3c, 0xa0, 0x8b, 0x80, 0x7b, 0x6f, 0xf8, 0xf8, 0xa8, 0x5c, 0xf8, 0xbe, 0x2d, 0x4b, 0xc9, 0x73, - 0x98, 0x88, 0x16, 0x2a, 0x58, 0x9c, 0xc0, 0xe9, 0xdb, 0x39, 0x3e, 0x2a, 0x9f, 0x0b, 0xb0, 0xb8, - 0xc1, 0x96, 0x5e, 0x9b, 0xc1, 0x6f, 0xb1, 0x0b, 0xf4, 0x86, 0xc8, 0xd7, 0x30, 0x1f, 0xfd, 0xac, - 0x04, 0x01, 0xf5, 0x19, 0x8d, 0xf5, 0x95, 0xc5, 0x29, 0x9c, 0x99, 0x1b, 0xc7, 0x47, 0x65, 0x4b, - 0xeb, 0x41, 0xc3, 0x91, 0x20, 0x0d, 0xb7, 0xa5, 0x8d, 0x34, 0x9d, 0xc8, 0xc3, 0xa1, 0xb1, 0xc9, - 0xe2, 0x94, 0x7d, 0x69, 0xb7, 0x13, 0x84, 0xce, 0x93, 0x36, 0x4d, 0x05, 0xb2, 0xfe, 0xb6, 0x00, - 0x64, 0xbb, 0x4b, 0x3b, 0xf5, 0xfa, 0x1a, 0xfb, 0x9e, 0xe4, 0xe7, 0xf4, 0x0e, 0x8c, 0xf3, 0x85, - 0x63, 0xab, 0x3b, 0x80, 0xab, 0x3b, 0x7d, 0x7c, 0x54, 0x06, 0xb1, 0xba, 0x6c, 0x65, 0x23, 0x00, - 0x72, 0x1d, 0x06, 0x77, 0x76, 0x36, 0xf0, 0x5b, 0x19, 0xac, 0xce, 0x1e, 0x1f, 0x95, 0x07, 0xc3, - 0xb0, 0xfd, 0xeb, 0xa3, 0xf2, 0xd8, 0x4a, 0xcf, 0xc7, 0x69, 0xb1, 0x59, 0x3d, 0xb9, 0x0e, 0xa3, - 0xcb, 0xed, 0x5e, 0x10, 0x52, 0x7f, 0x71, 0x28, 0xfa, 0x48, 0x9b, 0xbc, 0xc8, 0x96, 0x75, 0xe4, - 0x7b, 0x30, 0xb4, 0x1b, 0x50, 0x7f, 0x71, 0x18, 0xd7, 0x7b, 0x4a, 0xac, 0x37, 0x2b, 0xda, 0x5b, - 0xaa, 0x8e, 0xb1, 0x2f, 0xb1, 0x17, 0x50, 0xdf, 0x46, 0x20, 0x72, 0x0b, 0x86, 0xf9, 0xa2, 0x8d, - 0x20, 0x93, 0x9a, 0x52, 0xbb, 0xa3, 0x4d, 0xf7, 0xde, 0xab, 0x8e, 0x1f, 0x1f, 0x95, 0x87, 0x71, - 0xf1, 0x6c, 0x0e, 0xf6, 0x70, 0x68, 0xac, 0x50, 0x1c, 0xb0, 0xc7, 0x18, 0x2e, 0xfb, 0x2c, 0xac, - 0xef, 0xc1, 0x84, 0x36, 0x7c, 0x72, 0x11, 0x86, 0xd8, 0xff, 0xc8, 0x44, 0x26, 0x79, 0x63, 0xec, - 0x98, 0xb1, 0xb1, 0xd4, 0xfa, 0x7f, 0x66, 0xa1, 0xc8, 0x30, 0x0d, 
0xce, 0x73, 0x13, 0x14, 0x35, - 0xc1, 0x54, 0x26, 0x8f, 0x8f, 0xca, 0x63, 0x3d, 0x51, 0x16, 0xb5, 0x45, 0xea, 0x30, 0xba, 0xfa, - 0x4d, 0xd7, 0xf5, 0x69, 0x80, 0x53, 0x35, 0xb1, 0x54, 0xba, 0xc5, 0x0f, 0xcb, 0x5b, 0xf2, 0xb0, - 0xbc, 0xb5, 0x23, 0x0f, 0xcb, 0xea, 0x25, 0xc1, 0x5c, 0xcf, 0x52, 0x8e, 0x12, 0xad, 0xf7, 0x1f, - 0xfd, 0x55, 0xb9, 0x60, 0x4b, 0x4a, 0xe4, 0x1d, 0x18, 0xb9, 0xef, 0xf9, 0x87, 0x4e, 0x28, 0xe6, - 0x74, 0xee, 0xf8, 0xa8, 0x5c, 0x7c, 0x8a, 0x25, 0xda, 0x16, 0x11, 0x30, 0xe4, 0x3e, 0x4c, 0xdb, - 0x5e, 0x2f, 0xa4, 0x3b, 0x9e, 0x5c, 0x89, 0x61, 0xc4, 0xba, 0x7c, 0x7c, 0x54, 0x2e, 0xf9, 0xac, - 0xa6, 0x11, 0x7a, 0x0d, 0xb1, 0x24, 0x1a, 0x7e, 0x0c, 0x8b, 0xac, 0xc2, 0x74, 0x05, 0xb9, 0xb1, - 0x98, 0x05, 0x3e, 0xff, 0xe3, 0xd5, 0x4b, 0xc7, 0x47, 0xe5, 0xf3, 0x0e, 0xd6, 0x34, 0x7c, 0x51, - 0xa5, 0x93, 0x31, 0x91, 0xc8, 0x16, 0x9c, 0x7d, 0xd4, 0x7b, 0x42, 0xfd, 0x0e, 0x0d, 0x69, 0x20, - 0x7b, 0x34, 0x8a, 0x3d, 0xba, 0x72, 0x7c, 0x54, 0xbe, 0xf8, 0x4c, 0x55, 0xa6, 0xf4, 0x29, 0x89, - 0x4a, 0x28, 0xcc, 0x88, 0x8e, 0xae, 0x38, 0xa1, 0xf3, 0xc4, 0x09, 0x28, 0x32, 0x99, 0x89, 0xa5, - 0x73, 0x7c, 0x8a, 0x6f, 0xc5, 0x6a, 0xab, 0xd7, 0xc4, 0x2c, 0x5f, 0x50, 0x63, 0x6f, 0x89, 0x2a, - 0xad, 0xa1, 0x38, 0x4d, 0xc6, 0x6b, 0xd5, 0x39, 0x32, 0x8e, 0xbd, 0x45, 0x5e, 0xab, 0xce, 0x11, - 0x9d, 0x0b, 0xa9, 0x13, 0x65, 0x03, 0x86, 0x77, 0xd9, 0x69, 0x8b, 0x3c, 0x68, 0x7a, 0xe9, 0xaa, - 0xe8, 0x51, 0x7c, 0x3f, 0xdd, 0x62, 0x3f, 0x10, 0x10, 0xbf, 0xa4, 0x19, 0x3c, 0xa1, 0xf5, 0xb3, - 0x15, 0xeb, 0xc8, 0x67, 0x00, 0xa2, 0x57, 0x95, 0x6e, 0x77, 0x71, 0x02, 0x07, 0x79, 0xd6, 0x1c, - 0x64, 0xa5, 0xdb, 0xad, 0x5e, 0x16, 0xe3, 0x3b, 0xa7, 0xc6, 0xe7, 0x74, 0xbb, 0x1a, 0x35, 0x8d, - 0x08, 0xf9, 0x14, 0x26, 0x91, 0x45, 0xc9, 0x15, 0x9d, 0xc4, 0x15, 0xbd, 0x70, 0x7c, 0x54, 0x5e, - 0x40, 0xee, 0x93, 0xb2, 0x9e, 0x06, 0x02, 0xf9, 0x5d, 0x98, 0x17, 0xe4, 0x1e, 0xbb, 0x9d, 0x96, - 0xf7, 0x22, 0x58, 0xa1, 0xc1, 0xb3, 0xd0, 0xeb, 0x22, 0x3b, 0x9b, 0x58, 0xba, 0x68, 0x76, 0xcf, - 0x84, 
0xa9, 0xbe, 0x2d, 0x7a, 0x6a, 0xa9, 0x9e, 0xbe, 0xe0, 0x00, 0x8d, 0x16, 0x87, 0xd0, 0x19, - 0x5e, 0x2a, 0x09, 0xb2, 0x0e, 0x33, 0xbb, 0x01, 0x35, 0xc6, 0x30, 0x8d, 0xfc, 0xbe, 0xcc, 0x56, - 0xb8, 0x17, 0xd0, 0x46, 0xd6, 0x38, 0xe2, 0x78, 0xc4, 0x06, 0xb2, 0xe2, 0x7b, 0xdd, 0xd8, 0x1e, - 0x9f, 0xc1, 0x19, 0xb1, 0x8e, 0x8f, 0xca, 0x97, 0x5b, 0xbe, 0xd7, 0x6d, 0x64, 0x6f, 0xf4, 0x14, - 0x6c, 0xf2, 0x13, 0x38, 0xb7, 0xec, 0x75, 0x3a, 0xb4, 0xc9, 0x38, 0xe2, 0x8a, 0xeb, 0xec, 0x77, - 0xbc, 0x20, 0x74, 0x9b, 0xeb, 0x2b, 0x8b, 0xc5, 0x88, 0xdd, 0x37, 0x15, 0x44, 0xa3, 0xa5, 0x40, - 0x4c, 0x76, 0x9f, 0x41, 0x85, 0x7c, 0x05, 0x53, 0xa2, 0x2d, 0xea, 0xe3, 0xd6, 0x3c, 0x9b, 0xbf, - 0xd1, 0x14, 0x30, 0x3f, 0xb8, 0x7d, 0xf9, 0x93, 0x8b, 0x42, 0x26, 0x2d, 0xf2, 0x35, 0x4c, 0x6c, - 0xde, 0xaf, 0xd8, 0x34, 0xe8, 0x7a, 0x9d, 0x80, 0x2e, 0x12, 0x5c, 0xd1, 0xcb, 0x82, 0xf4, 0xe6, - 0xfd, 0x4a, 0xa5, 0x17, 0x1e, 0xd0, 0x4e, 0xe8, 0x36, 0x9d, 0x90, 0x4a, 0xa8, 0x6a, 0x89, 0xed, - 0xbc, 0xc3, 0xa7, 0x4e, 0xc3, 0x17, 0x25, 0xda, 0x28, 0x74, 0x72, 0xa4, 0x04, 0x63, 0xf5, 0xfa, - 0xda, 0x86, 0xb7, 0xef, 0x76, 0x16, 0x67, 0xd9, 0x64, 0xd8, 0xea, 0x37, 0xd9, 0x81, 0xd1, 0x5a, - 0xcf, 0xef, 0x7a, 0x01, 0x5d, 0x9c, 0xc7, 0x01, 0x5d, 0xcb, 0xfb, 0x72, 0x04, 0x68, 0x75, 0x9e, - 0xb1, 0xce, 0x2e, 0xff, 0xa1, 0xb5, 0x2a, 0x49, 0x91, 0x1f, 0xc3, 0x64, 0xbd, 0xbe, 0x16, 0x9d, - 0x71, 0xe7, 0x90, 0xe1, 0x5f, 0x3c, 0x3e, 0x2a, 0x2f, 0x32, 0xd1, 0x25, 0x3a, 0xe7, 0xf4, 0xdd, - 0xae, 0x63, 0x30, 0x0a, 0x3b, 0x1b, 0xf5, 0x88, 0xc2, 0x42, 0x44, 0x81, 0x09, 0x4d, 0xe9, 0x14, - 0x74, 0x0c, 0xf2, 0xff, 0x16, 0xe0, 0x8a, 0x4e, 0xb2, 0x12, 0x29, 0x40, 0xf5, 0xd0, 0x09, 0xe9, - 0x21, 0xed, 0x84, 0x8b, 0xe7, 0x71, 0xa6, 0xbf, 0xaf, 0x14, 0xb8, 0x5b, 0xba, 0x9a, 0xf4, 0xfc, - 0xee, 0xad, 0x34, 0xa4, 0xea, 0xd2, 0xf1, 0x51, 0xf9, 0x96, 0x39, 0x8e, 0x86, 0x86, 0xd7, 0x08, - 0x24, 0xa4, 0xd6, 0xb7, 0xbe, 0x5d, 0xc1, 0xfe, 0xea, 0x03, 0x48, 0xed, 0x6f, 0xe9, 0x95, 0xfb, - 0x6b, 0xce, 0x5a, 0xff, 0xfe, 0xf6, 0xeb, 
0x0a, 0x69, 0xc2, 0x05, 0x9b, 0xba, 0x41, 0xd0, 0x63, - 0xe2, 0x0f, 0xfb, 0xbc, 0xd7, 0x0f, 0x99, 0xba, 0xe4, 0x75, 0xb8, 0x3c, 0x79, 0x01, 0x79, 0xc3, - 0xd5, 0xe3, 0xa3, 0xf2, 0x25, 0x5f, 0x81, 0x71, 0x16, 0xe1, 0xea, 0x80, 0x76, 0x1e, 0x15, 0xeb, - 0x73, 0x18, 0x57, 0x1c, 0x9b, 0x8c, 0xc2, 0x60, 0xa5, 0xdd, 0x2e, 0x9e, 0x61, 0x7f, 0xd4, 0xeb, - 0x6b, 0xc5, 0x02, 0x99, 0x06, 0x88, 0x8e, 0xa9, 0xe2, 0x00, 0x99, 0x84, 0x31, 0x79, 0x8c, 0x14, - 0x07, 0x11, 0xbe, 0xdb, 0x2d, 0x0e, 0x11, 0x02, 0xd3, 0x26, 0x33, 0x2b, 0x0e, 0x5b, 0xff, 0x47, - 0x01, 0xc6, 0xd5, 0x47, 0x48, 0x66, 0x60, 0x62, 0x77, 0xab, 0x5e, 0x5b, 0x5d, 0x5e, 0xbf, 0xbf, - 0xbe, 0xba, 0x52, 0x3c, 0x43, 0x2e, 0xc1, 0xf9, 0x9d, 0xfa, 0x5a, 0x63, 0xa5, 0xda, 0xd8, 0xd8, - 0x5e, 0xae, 0x6c, 0x34, 0x6a, 0xf6, 0xf6, 0xe7, 0x5f, 0x34, 0x76, 0x76, 0xb7, 0xb6, 0x56, 0x37, - 0x8a, 0x05, 0xb2, 0x08, 0x73, 0xac, 0xfa, 0xd1, 0x6e, 0x75, 0x55, 0x07, 0x28, 0x0e, 0x90, 0xab, - 0x70, 0x29, 0xad, 0xa6, 0xb1, 0xb6, 0x5a, 0x59, 0xd9, 0x58, 0xad, 0xd7, 0x8b, 0x83, 0x64, 0x01, - 0x66, 0x19, 0x48, 0xa5, 0x56, 0x33, 0x70, 0x87, 0x58, 0x2f, 0x44, 0xa3, 0xab, 0x9f, 0xaf, 0x2e, - 0x17, 0x87, 0xad, 0x36, 0x4c, 0x68, 0x9f, 0x1d, 0xb9, 0x08, 0x8b, 0xcb, 0xab, 0xf6, 0x4e, 0xa3, - 0xb6, 0x6b, 0xd7, 0xb6, 0xeb, 0xab, 0x0d, 0xb3, 0xcb, 0xf1, 0xda, 0x8d, 0xed, 0x07, 0xeb, 0x5b, - 0x0d, 0x56, 0x54, 0x2f, 0x16, 0x58, 0xbf, 0x8c, 0xda, 0xfa, 0xfa, 0xd6, 0x83, 0x8d, 0xd5, 0xc6, - 0x6e, 0x7d, 0x55, 0x80, 0x0c, 0x70, 0xe9, 0xed, 0xe1, 0xd0, 0xd8, 0x5c, 0x71, 0x5e, 0x93, 0x3f, - 0xed, 0xf9, 0xd4, 0xbd, 0x62, 0xfd, 0xfe, 0x40, 0x42, 0x1c, 0x20, 0x4b, 0x30, 0x51, 0xe7, 0x96, - 0x0e, 0x64, 0x91, 0x5c, 0x59, 0x2c, 0x1e, 0x1f, 0x95, 0x27, 0x85, 0x01, 0x84, 0x73, 0x3f, 0x1d, - 0x88, 0x49, 0x78, 0x35, 0xc6, 0x71, 0x9a, 0x5e, 0x5b, 0x97, 0xf0, 0xba, 0xa2, 0xcc, 0x56, 0xb5, - 0x64, 0x49, 0x93, 0x05, 0xb9, 0xe6, 0x88, 0xda, 0x89, 0x94, 0x05, 0x75, 0xb9, 0x40, 0x49, 0x85, - 0x4b, 0xd1, 0x8e, 0x10, 0x22, 0x1c, 0xe2, 0xa4, 0xc8, 0x21, 0x0a, 0x8e, 0xbc, 
0x25, 0xa5, 0x5e, - 0xae, 0xe9, 0xa1, 0xa0, 0x10, 0xd3, 0x51, 0x84, 0xc0, 0x6b, 0xf5, 0x32, 0x0e, 0x65, 0xf2, 0x51, - 0x7c, 0xcb, 0x89, 0xc9, 0x40, 0x62, 0xb1, 0xb3, 0xd7, 0x8e, 0x81, 0x92, 0x32, 0x0c, 0x73, 0x6e, - 0xcd, 0xe7, 0x03, 0xe5, 0xec, 0x36, 0x2b, 0xb0, 0x79, 0xb9, 0xf5, 0x8f, 0x86, 0x74, 0x01, 0x85, - 0xc9, 0xd5, 0xda, 0x7c, 0xa3, 0x5c, 0x8d, 0xf3, 0x8c, 0xa5, 0x4c, 0x2d, 0xe4, 0x8b, 0x89, 0x6a, - 0xe1, 0x60, 0xa4, 0x16, 0x0a, 0x76, 0xc0, 0xd5, 0xc2, 0x08, 0x84, 0xad, 0xa2, 0x10, 0xf9, 0x90, - 0xea, 0x50, 0xb4, 0x8a, 0x42, 0x4c, 0x14, 0xab, 0xa8, 0x01, 0x91, 0x0f, 0x01, 0x2a, 0x8f, 0xeb, - 0xa8, 0xff, 0xd8, 0x5b, 0x42, 0xec, 0xc5, 0x03, 0xca, 0x79, 0x11, 0x08, 0xf5, 0xca, 0xd7, 0xf5, - 0x47, 0x0d, 0x9a, 0x54, 0x61, 0xaa, 0xf2, 0xf3, 0x9e, 0x4f, 0xd7, 0x5b, 0xec, 0x8c, 0x0b, 0xb9, - 0xa2, 0x3c, 0xce, 0x99, 0xbd, 0xc3, 0x2a, 0x1a, 0xae, 0xa8, 0xd1, 0x08, 0x98, 0x28, 0x64, 0x1b, - 0xce, 0x3e, 0x58, 0xae, 0x89, 0x7d, 0x55, 0x69, 0x36, 0xbd, 0x5e, 0x27, 0x14, 0xb2, 0x2e, 0xf2, - 0xa0, 0xfd, 0x66, 0xb7, 0x21, 0xf7, 0xa0, 0xc3, 0xab, 0x75, 0x61, 0x37, 0x81, 0x4b, 0xae, 0xc1, - 0xe0, 0xae, 0xbd, 0x2e, 0xb4, 0xe8, 0xb3, 0xc7, 0x47, 0xe5, 0xa9, 0x9e, 0xef, 0x6a, 0x28, 0xac, - 0x96, 0x7c, 0x00, 0xb0, 0xe3, 0xf8, 0xfb, 0x34, 0xac, 0x79, 0x7e, 0x88, 0xc2, 0xea, 0x54, 0xf5, - 0xfc, 0xf1, 0x51, 0x79, 0x3e, 0xc4, 0xd2, 0x06, 0x63, 0xd1, 0xfa, 0xa0, 0x23, 0x60, 0xf2, 0x12, - 0xca, 0x95, 0xc7, 0xf5, 0x65, 0x9f, 0xe2, 0x08, 0x9c, 0x76, 0xcd, 0xf7, 0x98, 0x3c, 0x13, 0x15, - 0x04, 0x28, 0xca, 0x8e, 0x57, 0x6f, 0x1f, 0x1f, 0x95, 0xbf, 0xc7, 0x66, 0xb1, 0xa9, 0xaa, 0xba, - 0x1c, 0x56, 0x2b, 0xd1, 0xb7, 0x66, 0x3f, 0xba, 0x0f, 0x87, 0xc6, 0x06, 0x8a, 0x83, 0xf6, 0x78, - 0x9d, 0x06, 0x01, 0x57, 0x53, 0xdb, 0x30, 0xfd, 0x80, 0x86, 0xec, 0x9b, 0x91, 0x6a, 0x57, 0xfe, - 0x8e, 0xfa, 0x11, 0x4c, 0x3c, 0x76, 0xc3, 0x83, 0x3a, 0x6d, 0xfa, 0x34, 0x94, 0x26, 0x27, 0x5c, - 0xed, 0x17, 0x6e, 0x78, 0xd0, 0x08, 0x78, 0xb9, 0x2e, 0x8e, 0x68, 0xe0, 0xd6, 0x2a, 0xcc, 0x88, - 0xd6, 0x94, 0x96, 
0xb7, 0x64, 0x12, 0x2c, 0x20, 0x41, 0xdc, 0x71, 0x3a, 0x41, 0x93, 0xcc, 0xff, - 0x3f, 0x00, 0xf3, 0xcb, 0x07, 0x4e, 0x67, 0x9f, 0xd6, 0x9c, 0x20, 0x78, 0xe1, 0xf9, 0x2d, 0xad, - 0xf3, 0xa8, 0xe2, 0x26, 0x3a, 0x8f, 0x3a, 0xed, 0x12, 0x4c, 0x6c, 0xb7, 0x5b, 0x12, 0x47, 0xa8, - 0xdf, 0xd8, 0x96, 0xd7, 0x6e, 0x35, 0xba, 0x92, 0x96, 0x0e, 0xc4, 0x70, 0xb6, 0xe8, 0x0b, 0x85, - 0x33, 0x18, 0xe1, 0x74, 0xe8, 0x0b, 0x0d, 0x47, 0x03, 0x22, 0xab, 0x70, 0xb6, 0x4e, 0x9b, 0x5e, - 0xa7, 0x75, 0xdf, 0x69, 0x86, 0x9e, 0xbf, 0xe3, 0x3d, 0xa3, 0x1d, 0xf1, 0x2d, 0xa1, 0x3e, 0x13, - 0x60, 0x65, 0xe3, 0x29, 0xd6, 0x36, 0x42, 0x56, 0x6d, 0x27, 0x31, 0xc8, 0x36, 0x8c, 0x3d, 0x16, - 0x86, 0x4b, 0xa1, 0xb3, 0x5f, 0xbf, 0xa5, 0x2c, 0x99, 0xd1, 0xaa, 0x2a, 0xab, 0x83, 0x12, 0x0f, - 0x91, 0x8b, 0x4a, 0x48, 0x5b, 0x11, 0xb1, 0x76, 0x61, 0xaa, 0xd6, 0xee, 0xed, 0xbb, 0x1d, 0xc6, - 0xef, 0xea, 0xf4, 0x67, 0x64, 0x05, 0x20, 0x2a, 0x10, 0xe6, 0xc8, 0x59, 0xa1, 0xe9, 0x47, 0x15, - 0x7b, 0xf7, 0x04, 0xd3, 0xc0, 0x12, 0x54, 0xe4, 0x6c, 0x0d, 0xcf, 0xfa, 0xf7, 0x83, 0x40, 0xc4, - 0x02, 0xa0, 0x8c, 0x50, 0xa7, 0x21, 0x3b, 0x58, 0xcf, 0xc1, 0x80, 0xb2, 0x1a, 0x8e, 0x1c, 0x1f, - 0x95, 0x07, 0xdc, 0x96, 0x3d, 0xb0, 0xbe, 0x42, 0xde, 0x85, 0x61, 0x04, 0xc3, 0xf9, 0x9f, 0x56, - 0xed, 0xe9, 0x14, 0x38, 0xdf, 0xc3, 0x03, 0xc7, 0xe6, 0xc0, 0xe4, 0x07, 0x30, 0xbe, 0x42, 0xdb, - 0x74, 0xdf, 0x09, 0x3d, 0xc9, 0xc9, 0xb8, 0x1d, 0x4e, 0x16, 0x6a, 0x7b, 0x2e, 0x82, 0x64, 0x5a, - 0xbc, 0x4d, 0x9d, 0xc0, 0xeb, 0xe8, 0x5a, 0xbc, 0x8f, 0x25, 0xba, 0x16, 0xcf, 0x61, 0xc8, 0xff, - 0x58, 0x80, 0x89, 0x4a, 0xa7, 0x23, 0xec, 0x5b, 0x81, 0x98, 0xf5, 0xf9, 0x5b, 0xca, 0x20, 0xbc, - 0xe1, 0x3c, 0xa1, 0xed, 0x3d, 0xa7, 0xdd, 0xa3, 0x41, 0xf5, 0x6b, 0xa6, 0x58, 0xfd, 0xf3, 0xa3, - 0xf2, 0x47, 0xa7, 0xb0, 0x58, 0x45, 0xa6, 0xe5, 0x1d, 0xdf, 0x71, 0xc3, 0x80, 0x31, 0x0c, 0x27, - 0x6a, 0x50, 0xff, 0x6e, 0xb4, 0x7e, 0x44, 0xc7, 0xd2, 0x48, 0xbf, 0x63, 0x89, 0x1c, 0xc2, 0x4c, - 0x25, 0x08, 0x7a, 0x87, 0xb4, 0x1e, 0x3a, 0x7e, 0xb8, 
0xe3, 0x1e, 0x52, 0xe4, 0x85, 0xf9, 0x36, - 0x91, 0x37, 0x7f, 0x79, 0x54, 0x2e, 0x30, 0x5d, 0xce, 0x41, 0x54, 0x76, 0xd4, 0xfb, 0x61, 0x23, - 0x74, 0xf5, 0x93, 0x15, 0xad, 0x23, 0x71, 0xda, 0xd6, 0x35, 0x25, 0x4a, 0xad, 0xaf, 0x64, 0xad, - 0xb8, 0xb5, 0x0c, 0x17, 0x1f, 0xd0, 0xd0, 0xa6, 0x01, 0x0d, 0xe5, 0x37, 0x82, 0x3b, 0x3c, 0xb2, - 0x31, 0x8f, 0xe2, 0x6f, 0x85, 0x8c, 0xcb, 0xcf, 0xbf, 0x0b, 0x59, 0x63, 0xfd, 0x17, 0x05, 0x28, - 0x2f, 0xfb, 0x94, 0xab, 0x41, 0x19, 0x84, 0xf2, 0x79, 0xd7, 0x45, 0x18, 0xda, 0x79, 0xd9, 0x95, - 0xc6, 0x24, 0xac, 0x65, 0x8b, 0x62, 0x63, 0xe9, 0x09, 0x6d, 0x6d, 0xd6, 0x53, 0x98, 0xb7, 0x69, - 0x87, 0xbe, 0x60, 0x42, 0xab, 0x61, 0xae, 0x2a, 0xc3, 0x30, 0xff, 0xd0, 0x13, 0x43, 0xe0, 0xe5, - 0xa7, 0x33, 0xfd, 0x59, 0x53, 0x30, 0x51, 0x73, 0x3b, 0xfb, 0x82, 0xba, 0xf5, 0xd7, 0x43, 0x30, - 0xc9, 0x7f, 0x0b, 0xcd, 0x2e, 0x76, 0x52, 0x17, 0x4e, 0x72, 0x52, 0xbf, 0x0f, 0x53, 0xec, 0xa8, - 0xa3, 0xfe, 0x1e, 0xf5, 0x19, 0xff, 0x17, 0x33, 0x81, 0x5a, 0x6a, 0x80, 0x15, 0x8d, 0xe7, 0xbc, - 0xc6, 0x36, 0x01, 0xc9, 0x06, 0x4c, 0xf3, 0x82, 0xfb, 0xd4, 0x09, 0x7b, 0x91, 0xa1, 0x6d, 0x46, - 0xa8, 0x8c, 0xb2, 0x98, 0x6f, 0x4d, 0x41, 0xeb, 0xa9, 0x28, 0xb4, 0x63, 0xb8, 0xe4, 0x53, 0x98, - 0xa9, 0xf9, 0xde, 0x37, 0x2f, 0x35, 0xd9, 0x84, 0x7f, 0x9d, 0x5c, 0xb9, 0x64, 0x55, 0x0d, 0x5d, - 0x42, 0x89, 0x43, 0x93, 0xb7, 0x60, 0x6c, 0x3d, 0xa8, 0x7a, 0xbe, 0xdb, 0xd9, 0xc7, 0x6f, 0x74, - 0x8c, 0xdf, 0x37, 0xb8, 0x41, 0xe3, 0x09, 0x16, 0xda, 0xaa, 0x3a, 0x66, 0x19, 0x1f, 0xed, 0x6f, - 0x19, 0xbf, 0x03, 0xb0, 0xe1, 0x39, 0xad, 0x4a, 0xbb, 0xbd, 0x5c, 0x09, 0x50, 0x08, 0x10, 0xe7, - 0x51, 0xdb, 0x73, 0x5a, 0x0d, 0xa7, 0xdd, 0x6e, 0x34, 0x9d, 0xc0, 0xd6, 0x60, 0xc8, 0x97, 0x70, - 0x3e, 0x70, 0xf7, 0x3b, 0x38, 0xb8, 0x86, 0xd3, 0xde, 0xf7, 0x7c, 0x37, 0x3c, 0x38, 0x6c, 0x04, - 0x3d, 0x37, 0xe4, 0x66, 0xac, 0xe9, 0xa5, 0xcb, 0x82, 0xc9, 0xd5, 0x25, 0x5c, 0x45, 0x82, 0xd5, - 0x19, 0x94, 0xbd, 0x10, 0xa4, 0x57, 0x90, 0xc7, 0x30, 0xb5, 0xe1, 0x36, 0x69, 0x27, 0xa0, 
0x68, - 0x97, 0x7c, 0x89, 0x92, 0x41, 0xfe, 0xc7, 0xcc, 0x26, 0x71, 0xaa, 0xad, 0x23, 0xe1, 0xa7, 0x6b, - 0xd2, 0x79, 0x38, 0x34, 0x36, 0x52, 0x1c, 0xb5, 0x67, 0x44, 0xe1, 0x63, 0xc7, 0xef, 0xb8, 0x9d, - 0xfd, 0xc0, 0xfa, 0x3f, 0x09, 0x8c, 0xa9, 0x75, 0xba, 0xa5, 0xeb, 0x58, 0xe2, 0x68, 0xc6, 0x2d, - 0x1b, 0x99, 0x0f, 0x6d, 0x0d, 0x82, 0x9c, 0x47, 0xad, 0x4b, 0x08, 0x05, 0xa3, 0xec, 0x13, 0x72, - 0xba, 0x5d, 0x9b, 0x95, 0x31, 0xd6, 0xb0, 0x52, 0xc5, 0x4d, 0x33, 0xc6, 0x59, 0x43, 0xeb, 0x89, - 0x3d, 0xb0, 0x52, 0x65, 0xdf, 0xe4, 0xf6, 0xfa, 0xca, 0x32, 0xae, 0xff, 0x18, 0xff, 0x26, 0x3d, - 0xb7, 0xd5, 0xb4, 0xb1, 0x94, 0xd5, 0xd6, 0x2b, 0x9b, 0x1b, 0x62, 0x8d, 0xb1, 0x36, 0x70, 0x0e, - 0xdb, 0x36, 0x96, 0x32, 0x41, 0x9b, 0x5b, 0x82, 0x96, 0xbd, 0x4e, 0xe8, 0x7b, 0xed, 0x00, 0xa5, - 0xc7, 0x31, 0xbe, 0x07, 0x85, 0x09, 0xa9, 0x29, 0xaa, 0xec, 0x18, 0x28, 0x79, 0x0c, 0x0b, 0x95, - 0xd6, 0x73, 0xa7, 0xd3, 0xa4, 0x2d, 0x5e, 0xf3, 0xd8, 0xf3, 0x9f, 0x3d, 0x6d, 0x7b, 0x2f, 0x02, - 0xdc, 0x24, 0x63, 0xc2, 0xe2, 0x2a, 0x40, 0xa4, 0x45, 0xea, 0x85, 0x04, 0xb2, 0xb3, 0xb0, 0x19, - 0x1f, 0x58, 0x6e, 0x7b, 0xbd, 0x96, 0xd8, 0x3a, 0xc8, 0x07, 0x9a, 0xac, 0xc0, 0xe6, 0xe5, 0x6c, - 0x96, 0xd6, 0xea, 0x9b, 0xb8, 0x31, 0xc4, 0x2c, 0x1d, 0x04, 0x87, 0x36, 0x2b, 0x23, 0xd7, 0x61, - 0x54, 0xea, 0x0c, 0xfc, 0x42, 0x05, 0x0d, 0xf9, 0x52, 0x57, 0x90, 0x75, 0xec, 0x3b, 0xb6, 0x69, - 0xd3, 0x7b, 0x4e, 0xfd, 0x97, 0xcb, 0x5e, 0x8b, 0x4a, 0x6b, 0x9c, 0xb0, 0x36, 0xf1, 0x8a, 0x46, - 0x93, 0xd5, 0xd8, 0x26, 0x20, 0x6b, 0x80, 0x1f, 0xdc, 0xc1, 0xe2, 0x4c, 0xd4, 0x00, 0x3f, 0xd8, - 0x03, 0x5b, 0xd6, 0x91, 0x15, 0x38, 0x5b, 0xe9, 0x85, 0xde, 0xa1, 0x13, 0xba, 0xcd, 0xdd, 0xee, - 0xbe, 0xef, 0xb0, 0x46, 0x8a, 0x88, 0x80, 0x3a, 0x94, 0x23, 0x2b, 0x1b, 0x3d, 0x51, 0x6b, 0x27, - 0x11, 0xc8, 0x7b, 0x30, 0xb9, 0x1e, 0x70, 0x8b, 0xab, 0x13, 0xd0, 0x16, 0x9a, 0xcd, 0x44, 0x2f, - 0xdd, 0xa0, 0x81, 0xf6, 0xd7, 0x06, 0xd3, 0xba, 0x5a, 0xb6, 0x01, 0x47, 0x2c, 0x18, 0xa9, 0x04, - 0x81, 0x1b, 0x84, 0x68, 0x0d, 
0x1b, 0xab, 0xc2, 0xf1, 0x51, 0x79, 0xc4, 0xc1, 0x12, 0x5b, 0xd4, - 0x90, 0xc7, 0x30, 0xb1, 0x42, 0x99, 0xd0, 0xbe, 0xe3, 0xf7, 0x82, 0x10, 0x6d, 0x5b, 0x13, 0x4b, - 0xe7, 0x05, 0x37, 0xd2, 0x6a, 0xc4, 0x5e, 0xe6, 0x22, 0x6a, 0x0b, 0xcb, 0x1b, 0x21, 0xab, 0xd0, - 0x8f, 0x5a, 0x0d, 0x9e, 0x69, 0x24, 0x02, 0x67, 0xcd, 0x6d, 0x31, 0xfe, 0x32, 0x87, 0x7d, 0x40, - 0x8d, 0x44, 0x30, 0xb4, 0xc6, 0x01, 0xd6, 0xe8, 0x1a, 0x89, 0x81, 0x42, 0x9a, 0x09, 0x23, 0xfe, - 0xbc, 0x61, 0xa8, 0x35, 0x2b, 0x65, 0x17, 0x4f, 0x69, 0xe2, 0xff, 0x11, 0x4c, 0x2c, 0xf7, 0x82, - 0xd0, 0x3b, 0xdc, 0x39, 0xa0, 0x87, 0x14, 0xed, 0x6c, 0x42, 0xef, 0x6a, 0x62, 0x71, 0x23, 0x64, - 0xe5, 0xfa, 0x30, 0x35, 0x70, 0xf2, 0x19, 0x10, 0xa9, 0x40, 0x3d, 0x60, 0xfb, 0xa3, 0xc3, 0xf6, - 0x32, 0x9a, 0xda, 0x84, 0xe5, 0x46, 0xea, 0x5d, 0x8d, 0x7d, 0x55, 0xad, 0x9b, 0x61, 0x93, 0xc8, - 0xac, 0x43, 0xbc, 0x8b, 0x0f, 0x7c, 0xa7, 0x7b, 0xb0, 0xb8, 0x18, 0xa9, 0x06, 0x62, 0x50, 0xfb, - 0xac, 0xdc, 0x10, 0x71, 0x22, 0x70, 0x52, 0x07, 0xe0, 0x3f, 0x37, 0xd8, 0xc2, 0x73, 0xe3, 0xdc, - 0xa2, 0x31, 0x5f, 0xac, 0x42, 0xce, 0x15, 0x6a, 0x5a, 0x82, 0x6c, 0xdb, 0x35, 0x56, 0x53, 0x23, - 0x43, 0x9e, 0x41, 0x91, 0xff, 0xda, 0xf4, 0x3a, 0x6e, 0xc8, 0xcf, 0x8b, 0x92, 0x61, 0x61, 0x8d, - 0x57, 0xcb, 0x06, 0xd0, 0xb2, 0x2d, 0x1a, 0x38, 0x54, 0xb5, 0x5a, 0x33, 0x09, 0xc2, 0xa4, 0x06, - 0x13, 0x35, 0xdf, 0x6b, 0xf5, 0x9a, 0x21, 0x4a, 0x19, 0x17, 0x90, 0xf1, 0x13, 0xd1, 0x8e, 0x56, - 0xc3, 0xe7, 0xa4, 0xcb, 0x0b, 0x1a, 0xec, 0x5c, 0xd0, 0xe7, 0x44, 0x03, 0x24, 0x55, 0x18, 0xa9, - 0x79, 0x6d, 0xb7, 0xf9, 0x72, 0xf1, 0x22, 0x76, 0x7a, 0x4e, 0x12, 0xc3, 0x42, 0xd9, 0x55, 0x14, - 0x69, 0xbb, 0x58, 0xa4, 0x8b, 0xb4, 0x1c, 0x88, 0x54, 0x60, 0xea, 0x33, 0xb6, 0x61, 0x5c, 0xaf, - 0xd3, 0x71, 0x5c, 0x9f, 0x2e, 0x5e, 0xc2, 0x75, 0xc1, 0xdb, 0x87, 0x9f, 0xe9, 0x15, 0xfa, 0x76, - 0x36, 0x30, 0xc8, 0x3a, 0xcc, 0xac, 0x07, 0xf5, 0xd0, 0x77, 0xbb, 0x74, 0xd3, 0xe9, 0x38, 0xfb, - 0xb4, 0xb5, 0x78, 0x39, 0x32, 0xff, 0xbb, 0x41, 0x23, 0xc0, 0xba, 
0xc6, 0x21, 0xaf, 0xd4, 0xcd, - 0xff, 0x31, 0x3c, 0xf2, 0x39, 0xcc, 0xad, 0x7e, 0x13, 0xb2, 0x1d, 0xd3, 0xae, 0xf4, 0x5a, 0x6e, - 0x58, 0x0f, 0x3d, 0xdf, 0xd9, 0xa7, 0x8b, 0x65, 0xa4, 0xf7, 0xc6, 0xf1, 0x51, 0xf9, 0x0a, 0x15, - 0xf5, 0x0d, 0x87, 0x01, 0x34, 0x02, 0x0e, 0xa1, 0x5f, 0xd3, 0xa7, 0x51, 0x60, 0xb3, 0x5f, 0xef, - 0x75, 0x99, 0xb4, 0x8d, 0xb3, 0x7f, 0xc5, 0x98, 0x7d, 0xad, 0x86, 0xcf, 0x7e, 0xc0, 0x0b, 0x12, - 0xb3, 0xaf, 0x01, 0x12, 0x1b, 0xc8, 0x43, 0xcf, 0xed, 0x54, 0x9a, 0xa1, 0xfb, 0x9c, 0x0a, 0x8d, - 0x39, 0x58, 0xbc, 0x8a, 0x3d, 0xc5, 0xab, 0x8a, 0xdf, 0xf6, 0xdc, 0x4e, 0xc3, 0xc1, 0xea, 0x46, - 0x20, 0xea, 0xf5, 0x6f, 0x24, 0x89, 0x4d, 0x7e, 0x02, 0xe7, 0x36, 0xbd, 0x27, 0x6e, 0x9b, 0x72, - 0x96, 0xc3, 0xa7, 0x05, 0xcd, 0xbb, 0x16, 0xd2, 0xc5, 0xab, 0x8a, 0x43, 0x84, 0x68, 0x08, 0x6e, - 0x75, 0xa8, 0x60, 0xf4, 0xab, 0x8a, 0x74, 0x2a, 0x64, 0x15, 0x26, 0xf1, 0xbb, 0x6c, 0xe3, 0xcf, - 0x60, 0xf1, 0x1a, 0xaa, 0x74, 0x57, 0x63, 0x52, 0xda, 0xad, 0x55, 0x0d, 0x66, 0xb5, 0x13, 0xfa, - 0x2f, 0x6d, 0x03, 0x8d, 0x7c, 0x02, 0xa5, 0xf8, 0xf6, 0x5e, 0xf6, 0x3a, 0x4f, 0xdd, 0xfd, 0x9e, - 0x4f, 0x5b, 0x8b, 0x6f, 0xb0, 0xae, 0xda, 0x39, 0x10, 0x64, 0x0f, 0x66, 0xb5, 0x6f, 0x7b, 0x85, - 0x1e, 0x7a, 0x9b, 0x5e, 0x8b, 0x2e, 0xde, 0x88, 0x56, 0x59, 0x67, 0x09, 0x8d, 0x16, 0x3d, 0xf4, - 0x1a, 0x87, 0x5e, 0x8b, 0x1a, 0x1e, 0x2a, 0x49, 0x02, 0xa5, 0xc7, 0x70, 0x36, 0xd1, 0x75, 0x52, - 0x84, 0xc1, 0x67, 0xf4, 0x25, 0x97, 0x80, 0x6d, 0xf6, 0x27, 0x79, 0x07, 0x86, 0x9f, 0x33, 0x1d, - 0x0d, 0x25, 0x91, 0xe8, 0x8e, 0x52, 0x43, 0x5d, 0xef, 0x3c, 0xf5, 0x6c, 0x0e, 0xf4, 0xe1, 0xc0, - 0xfb, 0x85, 0x87, 0x43, 0x63, 0x13, 0xc5, 0x49, 0x7e, 0xb1, 0xff, 0x70, 0x68, 0x6c, 0xaa, 0x38, - 0xfd, 0x70, 0x68, 0xec, 0x7a, 0xf1, 0x86, 0x3d, 0x8f, 0x47, 0x76, 0xa5, 0xe3, 0x75, 0x5e, 0x1e, - 0xba, 0x3f, 0x47, 0x35, 0x80, 0x09, 0xe7, 0x15, 0x98, 0x89, 0x11, 0x23, 0x8b, 0x30, 0x4a, 0x3b, - 0x4c, 0x29, 0x68, 0x71, 0x41, 0xc9, 0x96, 0x3f, 0xc9, 0x1c, 0x0c, 0xb7, 0xdd, 0x43, 0x37, 0xc4, - 0xde, 
0x0c, 0xdb, 0xfc, 0x87, 0xf5, 0xc7, 0x05, 0x20, 0xc9, 0x73, 0x8a, 0xdc, 0x8e, 0x91, 0xe1, - 0x22, 0xb1, 0x28, 0xd2, 0xef, 0x5b, 0x24, 0xf5, 0xcf, 0x60, 0x96, 0x6f, 0x14, 0x79, 0xa2, 0x6a, - 0x6d, 0x71, 0x4e, 0x9e, 0x52, 0xad, 0xdb, 0xbf, 0x44, 0x35, 0x9e, 0xbf, 0x1b, 0xd8, 0xb5, 0x1e, - 0xcc, 0xa7, 0x9e, 0x50, 0x64, 0x13, 0xe6, 0x0f, 0xbd, 0x4e, 0x78, 0xd0, 0x7e, 0x29, 0x0f, 0x28, - 0xd1, 0x5a, 0x01, 0x5b, 0x43, 0xa6, 0x9c, 0x0a, 0x60, 0xcf, 0x8a, 0x62, 0x41, 0x11, 0xdb, 0x11, - 0xc6, 0x28, 0x39, 0x12, 0xcb, 0x86, 0xb3, 0x09, 0x46, 0x4f, 0x3e, 0x86, 0xc9, 0x26, 0x2a, 0x7d, - 0x46, 0x4b, 0xfc, 0x98, 0xd3, 0xca, 0xf5, 0x6f, 0x98, 0x97, 0xf3, 0xa1, 0xfc, 0x5f, 0x05, 0x58, - 0xc8, 0x60, 0xf1, 0xa7, 0x9f, 0xea, 0x2f, 0xe0, 0xdc, 0xa1, 0xf3, 0x4d, 0xc3, 0x47, 0x9d, 0xbe, - 0xe1, 0x3b, 0x9d, 0xd8, 0x6c, 0xe3, 0xc6, 0x4e, 0x87, 0xd0, 0x37, 0xf6, 0xa1, 0xf3, 0x8d, 0x8d, - 0x00, 0x36, 0xab, 0xe7, 0xfd, 0xfc, 0x31, 0x4c, 0x19, 0x4c, 0xfd, 0xd4, 0x9d, 0xb3, 0xee, 0xc2, - 0xd9, 0x15, 0xda, 0xa6, 0x21, 0x3d, 0xb1, 0x2d, 0xcf, 0xaa, 0x01, 0xd4, 0xe9, 0xa1, 0xd3, 0x3d, - 0xf0, 0x98, 0xb0, 0x5f, 0xd5, 0x7f, 0x09, 0x5b, 0x10, 0x91, 0x6a, 0x8b, 0xac, 0xd8, 0xbb, 0xc7, - 0x15, 0x80, 0x40, 0x41, 0xda, 0x1a, 0x96, 0xf5, 0x8f, 0x07, 0x80, 0x08, 0xae, 0xec, 0x53, 0xe7, - 0x50, 0x76, 0xe3, 0x03, 0x98, 0xe4, 0x9a, 0x3b, 0x2f, 0xc6, 0xee, 0x4c, 0x2c, 0xcd, 0x8a, 0xcf, - 0x52, 0xaf, 0x5a, 0x3b, 0x63, 0x1b, 0xa0, 0x0c, 0xd5, 0xa6, 0xdc, 0xe4, 0x80, 0xa8, 0x03, 0x06, - 0xaa, 0x5e, 0xc5, 0x50, 0xf5, 0xdf, 0xe4, 0x53, 0x98, 0x5e, 0xf6, 0x0e, 0xbb, 0x6c, 0x4e, 0x04, - 0xf2, 0xa0, 0x30, 0xe7, 0x88, 0x76, 0x8d, 0xca, 0xb5, 0x33, 0x76, 0x0c, 0x9c, 0x6c, 0xc1, 0xec, - 0xfd, 0x76, 0x2f, 0x38, 0xa8, 0x74, 0x5a, 0xcb, 0x6d, 0x2f, 0x90, 0x54, 0x86, 0x84, 0x06, 0x26, - 0x78, 0x6a, 0x12, 0x62, 0xed, 0x8c, 0x9d, 0x86, 0x48, 0xae, 0xc3, 0xf0, 0xea, 0x73, 0xc6, 0xeb, - 0xa5, 0x03, 0x8e, 0xf0, 0x0f, 0xdc, 0xee, 0xd0, 0xed, 0xa7, 0x6b, 0x67, 0x6c, 0x5e, 0x5b, 0x1d, - 0x87, 0x51, 0xa9, 0xf5, 0xdf, 0x66, 0x72, 
0xb8, 0x9a, 0xce, 0x7a, 0xe8, 0x84, 0xbd, 0x80, 0x94, - 0x60, 0x6c, 0xb7, 0xcb, 0x94, 0x51, 0x69, 0x2e, 0xb1, 0xd5, 0x6f, 0xeb, 0x1d, 0x73, 0xa6, 0xc9, - 0x45, 0x88, 0x6c, 0xbd, 0x02, 0x58, 0x33, 0xfe, 0xae, 0x99, 0x93, 0x9b, 0x0f, 0x6d, 0xb4, 0x3b, - 0x10, 0x6b, 0xb7, 0x18, 0x9f, 0x6b, 0x6b, 0x3e, 0x75, 0xf2, 0xac, 0xcf, 0xe1, 0xf2, 0x6e, 0x37, - 0xa0, 0x7e, 0x58, 0xe9, 0x76, 0xdb, 0x6e, 0x93, 0x5f, 0x2c, 0xa2, 0x75, 0x40, 0x6e, 0x96, 0xf7, - 0x60, 0x84, 0x17, 0x88, 0x6d, 0x22, 0xf7, 0x60, 0xa5, 0xdb, 0x15, 0x36, 0x89, 0x7b, 0x5c, 0x23, - 0xe0, 0x56, 0x06, 0x5b, 0x40, 0x5b, 0x7f, 0x54, 0x80, 0xcb, 0xfc, 0x0b, 0xc8, 0x24, 0xfd, 0x3d, - 0x18, 0x47, 0xf7, 0xbc, 0xae, 0xd3, 0x94, 0xdf, 0x04, 0xf7, 0x53, 0x94, 0x85, 0x76, 0x54, 0xaf, - 0x39, 0x3e, 0x0e, 0x64, 0x3b, 0x3e, 0xca, 0x0f, 0x6c, 0x30, 0xf5, 0x03, 0xfb, 0x0c, 0x2c, 0xd1, - 0xa3, 0x76, 0x3b, 0xd1, 0xa9, 0xe0, 0x55, 0x7a, 0x65, 0xfd, 0xeb, 0x01, 0x58, 0x78, 0x40, 0x3b, - 0xd4, 0x77, 0x70, 0x9c, 0x86, 0xf5, 0x4b, 0x77, 0x98, 0x2a, 0xe4, 0x3a, 0x4c, 0x95, 0xa5, 0x3d, - 0x71, 0x00, 0xed, 0x89, 0x09, 0x6f, 0x2e, 0xa6, 0xa3, 0xee, 0xda, 0xeb, 0x62, 0x58, 0xa8, 0xa3, - 0xf6, 0x7c, 0x97, 0x5f, 0x7c, 0xac, 0x47, 0xce, 0x56, 0x43, 0x7d, 0x6d, 0x11, 0xb3, 0xc2, 0xf9, - 0x64, 0x54, 0x38, 0x5b, 0x99, 0x2e, 0x56, 0x5b, 0x30, 0xc2, 0xcd, 0xa0, 0x78, 0xdd, 0x36, 0xb1, - 0xf4, 0xb6, 0xf8, 0xa6, 0x32, 0x06, 0x28, 0x6c, 0xa6, 0x78, 0xea, 0xf3, 0x2d, 0x10, 0x62, 0x81, - 0x2d, 0xa8, 0x94, 0x3e, 0x83, 0x09, 0x0d, 0xe4, 0x24, 0x82, 0x81, 0x32, 0xc7, 0x32, 0x31, 0xb5, - 0xb3, 0xcf, 0x2d, 0xbb, 0x9a, 0x60, 0x60, 0x7d, 0x04, 0x8b, 0xc9, 0xde, 0x08, 0x13, 0x5c, 0x3f, - 0x8b, 0x9f, 0xb5, 0x02, 0x73, 0x0f, 0x68, 0x88, 0x1b, 0x17, 0x3f, 0x22, 0xcd, 0x09, 0x30, 0xf6, - 0x9d, 0x49, 0xae, 0x8a, 0x85, 0x6c, 0x83, 0x69, 0x5f, 0x69, 0x1d, 0xe6, 0x63, 0x54, 0x44, 0xfb, - 0x1f, 0xc2, 0xa8, 0x28, 0x52, 0x1c, 0x55, 0x78, 0x12, 0xd3, 0x27, 0xa2, 0x62, 0x6f, 0x89, 0xef, - 0x5b, 0x41, 0xd9, 0x96, 0x08, 0xd6, 0x01, 0x9c, 0x63, 0xc7, 0x6c, 0x44, 0x55, 
0x6d, 0xc7, 0x0b, - 0x30, 0xde, 0x65, 0x82, 0x42, 0xe0, 0xfe, 0x9c, 0x6f, 0xa3, 0x61, 0x7b, 0x8c, 0x15, 0xd4, 0xdd, - 0x9f, 0x53, 0x72, 0x09, 0x00, 0x2b, 0x71, 0x98, 0x82, 0x0b, 0x20, 0x38, 0x37, 0x71, 0x12, 0x40, - 0x17, 0x42, 0xbe, 0x6f, 0x6c, 0xfc, 0xdb, 0xf2, 0x61, 0x21, 0xd1, 0x92, 0x18, 0xc0, 0x6d, 0x18, - 0x93, 0x72, 0x73, 0xec, 0xf2, 0x41, 0x1f, 0x81, 0xad, 0x80, 0xc8, 0x0d, 0x98, 0xe9, 0xd0, 0x6f, - 0xc2, 0x46, 0xa2, 0x0f, 0x53, 0xac, 0xb8, 0x26, 0xfb, 0x61, 0xfd, 0x16, 0x1a, 0x9c, 0xeb, 0x1d, - 0xef, 0xc5, 0xd3, 0xb6, 0xf3, 0x8c, 0x26, 0x1a, 0xfe, 0x18, 0xc6, 0xea, 0xfd, 0x1b, 0xe6, 0x9f, - 0x8f, 0x6c, 0xdc, 0x56, 0x28, 0x56, 0x1b, 0x4a, 0x6c, 0x48, 0xf5, 0xca, 0xe6, 0xc6, 0x7a, 0xab, - 0xf6, 0x5d, 0x4f, 0xe0, 0x73, 0xb8, 0x90, 0xda, 0xda, 0x77, 0x3d, 0x89, 0x7f, 0x3e, 0x04, 0x0b, - 0xfc, 0x30, 0x49, 0xee, 0xe0, 0x93, 0xb3, 0x9a, 0xdf, 0xc8, 0x15, 0xf4, 0x9d, 0x94, 0x2b, 0x68, - 0x44, 0xd1, 0xaf, 0xa0, 0x8d, 0x8b, 0xe7, 0xf7, 0xd3, 0x2f, 0x9e, 0xd1, 0x38, 0x65, 0x5e, 0x3c, - 0xc7, 0xaf, 0x9b, 0x57, 0xb3, 0xaf, 0x9b, 0xf1, 0x42, 0x2a, 0xe5, 0xba, 0x39, 0xed, 0x92, 0x39, - 0xe6, 0xf7, 0x35, 0xf6, 0x7a, 0xfd, 0xbe, 0x6e, 0xc0, 0x68, 0xa5, 0xdb, 0xd5, 0xfc, 0x28, 0x71, - 0x79, 0x9c, 0x6e, 0x97, 0x4f, 0x9e, 0xac, 0x94, 0x7c, 0x1e, 0x52, 0xf8, 0xfc, 0x07, 0x00, 0xcb, - 0xf8, 0x82, 0x03, 0x17, 0x6e, 0x02, 0x21, 0x50, 0xc2, 0xe7, 0xef, 0x3a, 0x70, 0xe1, 0x74, 0xb3, - 0x4b, 0x04, 0xcc, 0x05, 0x7b, 0x6b, 0x0f, 0x16, 0x93, 0xdb, 0xe7, 0x35, 0xb0, 0xae, 0x3f, 0x2b, - 0xc0, 0x25, 0x21, 0xe4, 0xc4, 0x3e, 0xf0, 0xd3, 0xef, 0xce, 0x1f, 0xc0, 0xa4, 0xc0, 0xdd, 0x89, - 0x3e, 0x04, 0x7e, 0xe7, 0x2f, 0x99, 0x31, 0xe7, 0xe8, 0x06, 0x18, 0xf9, 0x01, 0x8c, 0xe1, 0x1f, - 0xd1, 0x85, 0x11, 0x9b, 0x99, 0x71, 0x04, 0x6d, 0xc4, 0xaf, 0x8d, 0x14, 0xa8, 0xf5, 0x35, 0x5c, - 0xce, 0xea, 0xf8, 0x6b, 0x98, 0x97, 0xbf, 0x5f, 0x80, 0x0b, 0x82, 0xbc, 0xc1, 0x2a, 0x5e, 0xe9, - 0xd4, 0x39, 0x85, 0xf7, 0xf5, 0x43, 0x98, 0x60, 0x0d, 0xca, 0x7e, 0x0f, 0x8a, 0xa3, 0x55, 0x68, - 0x0e, 0x51, 0xcd, 
0x8a, 0x13, 0x3a, 0xc2, 0x23, 0xc8, 0x39, 0x6c, 0x4b, 0x8b, 0x89, 0xad, 0x23, - 0x5b, 0x5f, 0xc2, 0xc5, 0xf4, 0x21, 0xbc, 0x86, 0xf9, 0x79, 0x08, 0xa5, 0x94, 0x43, 0xe1, 0xd5, - 0xce, 0xe4, 0x2f, 0xe0, 0x42, 0x2a, 0xad, 0xd7, 0xd0, 0xcd, 0x35, 0x26, 0x71, 0x84, 0xaf, 0x61, - 0x09, 0xad, 0xc7, 0x70, 0x3e, 0x85, 0xd2, 0x6b, 0xe8, 0xe2, 0x03, 0x58, 0x50, 0x92, 0xf6, 0xb7, - 0xea, 0xe1, 0x26, 0x5c, 0xe2, 0x84, 0x5e, 0xcf, 0xaa, 0x3c, 0x82, 0x0b, 0x82, 0xdc, 0x6b, 0x98, - 0xbd, 0x35, 0xb8, 0x18, 0x29, 0xd4, 0x29, 0x72, 0xd2, 0x89, 0x99, 0x8c, 0xb5, 0x01, 0x57, 0x22, - 0x4a, 0x19, 0x42, 0xc3, 0xc9, 0xa9, 0x71, 0x71, 0x30, 0x5a, 0xa5, 0xd7, 0xb2, 0xa2, 0x8f, 0xe1, - 0x9c, 0x41, 0xf4, 0xb5, 0x89, 0x4a, 0xeb, 0x30, 0xcb, 0x09, 0x9b, 0xa2, 0xf3, 0x92, 0x2e, 0x3a, - 0x4f, 0x2c, 0x9d, 0x8d, 0x48, 0x62, 0xf1, 0xde, 0xbd, 0x14, 0x69, 0x7a, 0x13, 0xa5, 0x69, 0x09, - 0x12, 0xf5, 0xf0, 0x07, 0x30, 0xc2, 0x4b, 0x44, 0xff, 0x52, 0x88, 0x71, 0x65, 0x81, 0xa3, 0x09, - 0x60, 0xeb, 0x27, 0x70, 0x89, 0x6b, 0xa2, 0xd1, 0x05, 0xa6, 0xa9, 0x2d, 0x7e, 0x1c, 0x53, 0x44, - 0xcf, 0x0b, 0xba, 0x71, 0xf8, 0x0c, 0x7d, 0xf4, 0x89, 0xdc, 0xdb, 0x59, 0xf4, 0x4f, 0xf4, 0xb2, - 0x4e, 0x2a, 0x98, 0x03, 0xa9, 0x0a, 0xe6, 0x35, 0xb8, 0xaa, 0x14, 0xcc, 0x78, 0x33, 0x72, 0x6b, - 0x59, 0x5f, 0xc2, 0x05, 0x3e, 0x50, 0xe9, 0xe5, 0x68, 0x76, 0xe3, 0xa3, 0xd8, 0x30, 0x17, 0xc4, - 0x30, 0x4d, 0xe8, 0x8c, 0x41, 0xfe, 0x37, 0x05, 0xf9, 0xc9, 0xa5, 0x13, 0xff, 0x4d, 0x6b, 0xdc, - 0x5b, 0x50, 0x56, 0x13, 0x62, 0xf6, 0xe8, 0xd5, 0xd4, 0xed, 0x4d, 0x98, 0xd7, 0xc9, 0xb8, 0x4d, - 0xba, 0x77, 0x17, 0x6f, 0x96, 0xde, 0x65, 0x9f, 0x05, 0x16, 0xc8, 0x6d, 0xb7, 0x98, 0x32, 0x6f, - 0x08, 0x6f, 0x2b, 0x48, 0xab, 0x01, 0x17, 0x93, 0x4b, 0xe1, 0x36, 0xe5, 0xf3, 0x08, 0xf2, 0x29, - 0xfb, 0x84, 0xb1, 0x44, 0x2c, 0x46, 0x26, 0x51, 0xf9, 0x1d, 0x73, 0x74, 0x89, 0x65, 0x59, 0x92, - 0xd5, 0xc4, 0xc6, 0xcf, 0x5a, 0x97, 0xfb, 0xe1, 0x17, 0x40, 0x64, 0xd5, 0x72, 0xdd, 0x96, 0x4d, - 0x9f, 0x87, 0xc1, 0xe5, 0xba, 0x2d, 0xde, 0x67, 0xa1, 
0x24, 0xd8, 0x0c, 0x7c, 0x9b, 0x95, 0xc5, - 0x25, 0xf2, 0x81, 0x13, 0x48, 0xe4, 0x0f, 0x87, 0xc6, 0x06, 0x8b, 0x43, 0x36, 0xa9, 0xbb, 0xfb, - 0x9d, 0xc7, 0x6e, 0x78, 0xa0, 0x1a, 0xac, 0x58, 0x5f, 0xc1, 0xac, 0xd1, 0xbc, 0xf8, 0x8a, 0x73, - 0x1f, 0x88, 0x31, 0x79, 0x76, 0xb9, 0x82, 0xee, 0x36, 0x68, 0xb2, 0x98, 0xe4, 0xfc, 0xa6, 0xe9, - 0x34, 0xf0, 0xad, 0xb2, 0x2d, 0x2b, 0xad, 0x3f, 0x1d, 0xd2, 0xa8, 0x6b, 0xcf, 0xee, 0x72, 0x46, - 0x77, 0x17, 0x80, 0xef, 0x10, 0x6d, 0x70, 0x4c, 0x00, 0x9c, 0x10, 0x5e, 0x2c, 0x9c, 0x25, 0xdb, - 0x1a, 0xd0, 0x49, 0x9f, 0xe5, 0x09, 0x97, 0x68, 0x8e, 0x24, 0x5f, 0xa2, 0x2a, 0x97, 0x68, 0x41, - 0x3a, 0xb0, 0x75, 0x20, 0xf2, 0x93, 0xf8, 0x5b, 0x93, 0x61, 0xbc, 0xc8, 0x7a, 0x43, 0xde, 0x6c, - 0x27, 0xc7, 0x76, 0xba, 0xe7, 0x26, 0x2f, 0x60, 0x9e, 0xe1, 0xba, 0x4f, 0x51, 0xb1, 0x58, 0xfd, - 0x26, 0xa4, 0x1d, 0xce, 0xdb, 0x47, 0xb0, 0x9d, 0xeb, 0x39, 0xed, 0x44, 0xc0, 0xc2, 0xfe, 0x1e, - 0xd1, 0x69, 0x50, 0x55, 0x67, 0xa7, 0xd3, 0xc7, 0x4d, 0x64, 0x6f, 0xac, 0x76, 0x5a, 0x5d, 0xcf, - 0x55, 0x0a, 0x13, 0xdf, 0x44, 0x7e, 0xbb, 0x41, 0x45, 0xb9, 0xad, 0x03, 0x59, 0x37, 0x72, 0xfd, - 0xf4, 0xc7, 0x60, 0x68, 0x67, 0x79, 0x67, 0xa3, 0x58, 0xb0, 0x6e, 0x03, 0x68, 0x2d, 0x01, 0x8c, - 0x6c, 0x6d, 0xdb, 0x9b, 0x95, 0x8d, 0xe2, 0x19, 0x32, 0x0f, 0x67, 0x1f, 0xaf, 0x6f, 0xad, 0x6c, - 0x3f, 0xae, 0x37, 0xea, 0x9b, 0x15, 0x7b, 0x67, 0xb9, 0x62, 0xaf, 0x14, 0x0b, 0xd6, 0xd7, 0x30, - 0x67, 0x8e, 0xf0, 0xb5, 0x6e, 0xc2, 0x10, 0x66, 0x95, 0x3c, 0xf3, 0xf0, 0xf1, 0x8e, 0xe6, 0xe9, - 0x2a, 0x94, 0xbf, 0xb8, 0xc7, 0x96, 0x50, 0x13, 0xc5, 0x67, 0xa4, 0x01, 0x91, 0xb7, 0xb8, 0x58, - 0x10, 0x7f, 0x58, 0xcd, 0xc4, 0x82, 0x46, 0x24, 0x17, 0x20, 0xeb, 0xfb, 0x21, 0xcc, 0x99, 0xad, - 0x9e, 0xd4, 0x4a, 0xf5, 0x06, 0xba, 0x00, 0x6b, 0xaf, 0xb4, 0x08, 0xd1, 0xaf, 0x0d, 0x04, 0x67, - 0xfd, 0x21, 0x14, 0x05, 0x54, 0x74, 0xf2, 0x5e, 0x93, 0x66, 0xc4, 0x42, 0xca, 0x1b, 0x51, 0xe9, - 0x27, 0xef, 0x41, 0x91, 0x71, 0x4c, 0x81, 0xc9, 0x1b, 0x98, 0x83, 0xe1, 0x8d, 0xe8, 0x3a, 
0xc7, - 0xe6, 0x3f, 0xf0, 0xb1, 0x52, 0xe8, 0xf8, 0xa1, 0xf4, 0x8f, 0x1b, 0xb7, 0xd5, 0x6f, 0xf2, 0x16, - 0x8c, 0xdc, 0x77, 0xdb, 0xa1, 0x30, 0x8d, 0x44, 0x87, 0x3c, 0x23, 0xcb, 0x2b, 0x6c, 0x01, 0x60, - 0xd9, 0x70, 0x56, 0x6b, 0xf0, 0x14, 0x5d, 0x25, 0x8b, 0x30, 0xba, 0x45, 0xbf, 0xd1, 0xda, 0x97, - 0x3f, 0xad, 0xf7, 0xe0, 0xac, 0xf0, 0x3d, 0xd4, 0xa6, 0xe9, 0xaa, 0x78, 0xca, 0x5e, 0x30, 0xde, - 0xd3, 0x0a, 0x92, 0x58, 0xc5, 0xf0, 0x76, 0xbb, 0xad, 0x57, 0xc4, 0x63, 0x07, 0xc5, 0x29, 0xf1, - 0xde, 0x94, 0xb7, 0x40, 0xfd, 0x96, 0xf3, 0xbf, 0x1a, 0x80, 0xc5, 0x98, 0x95, 0x61, 0xf9, 0xc0, - 0x69, 0xb7, 0x69, 0x67, 0x9f, 0x92, 0x9b, 0x30, 0xb4, 0xb3, 0xbd, 0x53, 0x13, 0x56, 0x52, 0xe9, - 0x75, 0xc0, 0x8a, 0x14, 0x8c, 0x8d, 0x10, 0xe4, 0x11, 0x9c, 0x95, 0xde, 0xc5, 0xaa, 0x4a, 0xac, - 0xd0, 0xa5, 0x7c, 0x5f, 0xe5, 0x24, 0x1e, 0x79, 0x57, 0x98, 0x44, 0x7e, 0xd6, 0x73, 0x7d, 0xda, - 0x42, 0xcb, 0x4f, 0x74, 0x85, 0xaf, 0xd5, 0xd8, 0x3a, 0x18, 0xf9, 0x21, 0x4c, 0xd6, 0xeb, 0xdb, - 0x51, 0xeb, 0xc3, 0xc6, 0x0d, 0x91, 0x5e, 0x65, 0x1b, 0x80, 0xfc, 0xcd, 0x8b, 0xf5, 0xe7, 0x05, - 0x58, 0xc8, 0x30, 0xb7, 0x90, 0xb7, 0x8c, 0x79, 0x98, 0xd5, 0xe6, 0x41, 0x82, 0xac, 0x9d, 0x11, - 0x13, 0xb1, 0xac, 0xf9, 0x6a, 0x0f, 0x9e, 0xc2, 0x57, 0x7b, 0xed, 0x4c, 0xe4, 0x9f, 0x4d, 0x6e, - 0xc0, 0x60, 0xbd, 0xbe, 0x2d, 0xcc, 0xea, 0x24, 0x1a, 0x81, 0x06, 0xcc, 0x00, 0xaa, 0x00, 0x63, - 0xb2, 0xc8, 0x9a, 0x81, 0x29, 0x63, 0x61, 0x2c, 0x0b, 0x26, 0xf5, 0x1e, 0xb2, 0xd5, 0x5f, 0xf6, - 0x5a, 0x6a, 0xf5, 0xd9, 0xdf, 0xd6, 0x2f, 0xcc, 0x39, 0x23, 0x97, 0x00, 0xe4, 0x7d, 0xad, 0xdb, - 0x92, 0x37, 0x3f, 0xa2, 0x64, 0xbd, 0x45, 0xae, 0xc2, 0xa4, 0x4f, 0x5b, 0xae, 0x4f, 0x9b, 0x61, - 0xa3, 0xe7, 0x8b, 0xb7, 0x3a, 0xf6, 0x84, 0x2c, 0xdb, 0xf5, 0xdb, 0xe4, 0x7b, 0x30, 0xc2, 0x2f, - 0x92, 0xc5, 0xe8, 0xa5, 0x92, 0x50, 0xaf, 0x6f, 0x6f, 0xde, 0xaf, 0xf0, 0x8b, 0x6e, 0x5b, 0x80, - 0x58, 0x55, 0x98, 0xd0, 0x46, 0xd5, 0xaf, 0xf5, 0x39, 0x18, 0xd6, 0xad, 0x94, 0xfc, 0x87, 0xf5, - 0xdf, 0x17, 0x60, 0x0e, 0xb7, 
0xc1, 0xbe, 0xcb, 0x8e, 0x87, 0x68, 0x2c, 0x4b, 0xc6, 0xa2, 0x5d, - 0x34, 0x16, 0x2d, 0x06, 0xab, 0x56, 0xef, 0xc3, 0xc4, 0xea, 0x5d, 0x4c, 0x5b, 0x3d, 0x64, 0x01, - 0xae, 0xd7, 0xd1, 0x17, 0x4d, 0xbf, 0xae, 0xfb, 0xe3, 0x02, 0xcc, 0x6a, 0x7d, 0x52, 0x03, 0xbc, - 0x6b, 0x74, 0xe9, 0x42, 0x4a, 0x97, 0x12, 0xfb, 0xa9, 0x9a, 0xe8, 0xd1, 0x1b, 0x79, 0x3d, 0x4a, - 0xdb, 0x4e, 0xc6, 0x36, 0xf9, 0xeb, 0x02, 0xcc, 0xa7, 0xce, 0x01, 0x39, 0xc7, 0xe4, 0xff, 0xa6, - 0x4f, 0x43, 0x31, 0xf3, 0xe2, 0x17, 0x2b, 0x5f, 0x0f, 0x82, 0x1e, 0xf5, 0xc5, 0xbc, 0x8b, 0x5f, - 0xe4, 0x0d, 0x98, 0xaa, 0x51, 0xdf, 0xf5, 0x5a, 0xfc, 0xc1, 0x02, 0xf7, 0x04, 0x9e, 0xb2, 0xcd, - 0x42, 0x72, 0x11, 0xc6, 0x95, 0x27, 0x2b, 0xb7, 0xe1, 0xda, 0x51, 0x01, 0xa3, 0xbd, 0xe2, 0xee, - 0xf3, 0x8b, 0x1f, 0x86, 0x2c, 0x7e, 0x31, 0x06, 0x2c, 0x2d, 0xaa, 0x23, 0x9c, 0x01, 0x4b, 0x73, - 0xe9, 0x39, 0x18, 0xf9, 0xcc, 0xc6, 0x7d, 0x8c, 0x21, 0x31, 0x6c, 0xf1, 0x8b, 0x4c, 0xa3, 0xcb, - 0x39, 0x3e, 0xd5, 0x41, 0x57, 0xf3, 0x0f, 0x61, 0x2e, 0x6d, 0x5e, 0xd3, 0xbe, 0x02, 0x81, 0x3b, - 0xa0, 0x70, 0xbf, 0x84, 0xd9, 0x4a, 0xab, 0x15, 0x6d, 0x57, 0xbe, 0xaa, 0xea, 0x6d, 0xdc, 0x40, - 0x71, 0x50, 0x88, 0xb5, 0x43, 0xeb, 0x1d, 0x37, 0xb4, 0x67, 0x57, 0xbf, 0x71, 0x83, 0xd0, 0xed, - 0xec, 0x6b, 0x86, 0x57, 0xfb, 0xdc, 0x16, 0x7d, 0x91, 0xb2, 0x05, 0x98, 0xc4, 0x61, 0xd2, 0xe6, - 0xe5, 0x29, 0xc4, 0xe7, 0x34, 0xb2, 0x11, 0xeb, 0x5a, 0x30, 0xe9, 0x46, 0x15, 0x83, 0x95, 0xe6, - 0x33, 0xeb, 0x87, 0x70, 0x8e, 0xb3, 0xfd, 0xbc, 0xce, 0x8b, 0x6e, 0xeb, 0x76, 0x62, 0xeb, 0x7d, - 0x69, 0xc9, 0xc9, 0xed, 0x99, 0x3d, 0x69, 0xf4, 0x05, 0x9b, 0xfc, 0x57, 0x05, 0x28, 0xc5, 0x50, - 0xeb, 0x2f, 0x3b, 0x4d, 0x79, 0xe6, 0xdc, 0x88, 0xbb, 0xf4, 0xa3, 0xac, 0xc4, 0x0d, 0xa4, 0x6e, - 0x4b, 0x79, 0xf5, 0x93, 0xdb, 0x00, 0x1c, 0x59, 0x13, 0x71, 0xf0, 0x7a, 0x40, 0x78, 0x3f, 0xa1, - 0x90, 0xa3, 0x81, 0x90, 0x1e, 0xa4, 0xcd, 0xbb, 0xf8, 0x46, 0xfa, 0xd9, 0xcf, 0x31, 0x0c, 0x0c, - 0x15, 0xe8, 0x8d, 0x0c, 0x43, 0x7a, 0x1a, 0x7d, 0xeb, 0xbf, 0x1d, 
0x84, 0x05, 0x7d, 0x01, 0x5f, - 0x65, 0xac, 0x35, 0x98, 0x58, 0xf6, 0x3a, 0x21, 0xfd, 0x26, 0xd4, 0xc2, 0x70, 0x10, 0xe5, 0x8d, - 0xa0, 0x6a, 0x84, 0x78, 0xcd, 0x0b, 0x1a, 0x4c, 0xd6, 0x33, 0xbc, 0x38, 0x23, 0x40, 0xb2, 0x0c, - 0x53, 0x5b, 0xf4, 0x45, 0x62, 0x02, 0xd1, 0x93, 0xb4, 0x43, 0x5f, 0x34, 0xb4, 0x49, 0xd4, 0xdd, - 0xfb, 0x0c, 0x1c, 0xf2, 0x04, 0xa6, 0xe5, 0xe6, 0x32, 0x26, 0xb3, 0xa4, 0x9f, 0xbc, 0xe6, 0x76, - 0xe6, 0x61, 0x2d, 0x58, 0x0b, 0x19, 0x73, 0x18, 0xa3, 0xc8, 0x86, 0xce, 0x5b, 0xe4, 0x91, 0x1a, - 0xcc, 0xa3, 0x5d, 0xab, 0x31, 0xfc, 0x74, 0xe3, 0x11, 0x1a, 0x74, 0x12, 0x56, 0x0d, 0x16, 0x93, - 0xeb, 0x21, 0x5a, 0x7b, 0x17, 0x46, 0x78, 0xa9, 0x10, 0x95, 0x64, 0x84, 0x25, 0x05, 0xcd, 0x6d, - 0x19, 0x2d, 0x71, 0x2a, 0xf1, 0x32, 0x6b, 0x0d, 0xed, 0x4b, 0x0a, 0x46, 0x09, 0xab, 0x77, 0xe2, - 0xcb, 0x8b, 0x2e, 0xd0, 0x72, 0x79, 0x75, 0x5f, 0x1c, 0xf9, 0x54, 0x65, 0x19, 0x4d, 0x74, 0x3a, - 0x25, 0xd1, 0xb1, 0xb7, 0x61, 0x54, 0x14, 0xc5, 0x62, 0x3f, 0x45, 0x9f, 0x9f, 0x04, 0xb0, 0x3e, - 0x84, 0xf3, 0x68, 0x2f, 0x74, 0x3b, 0xfb, 0x6d, 0xba, 0x1b, 0x18, 0x8f, 0x4d, 0xfa, 0x7d, 0xd6, - 0x3f, 0x82, 0x52, 0x1a, 0x6e, 0xdf, 0x2f, 0x9b, 0x47, 0x63, 0xf9, 0xcb, 0x01, 0x98, 0x5b, 0x0f, - 0x74, 0x81, 0x4b, 0xcc, 0xc4, 0xad, 0xb4, 0xa8, 0x22, 0x38, 0x27, 0x6b, 0x67, 0xd2, 0xa2, 0x86, - 0xbc, 0xab, 0xbd, 0xc0, 0x1d, 0xc8, 0x0b, 0x17, 0xc2, 0x8e, 0x2d, 0xf5, 0x06, 0xf7, 0x06, 0x0c, - 0x6d, 0x31, 0x56, 0x3d, 0x28, 0xd6, 0x8e, 0x63, 0xb0, 0x22, 0x7c, 0x01, 0xcb, 0x8e, 0x48, 0xf6, - 0x83, 0xdc, 0x4f, 0xbc, 0xb3, 0x1d, 0xea, 0x1f, 0x0e, 0x63, 0xed, 0x4c, 0xe2, 0xc9, 0xed, 0x7b, - 0x30, 0x51, 0x69, 0x1d, 0x72, 0x57, 0x4d, 0xaf, 0x13, 0xfb, 0x2c, 0xb5, 0x9a, 0xb5, 0x33, 0xb6, - 0x0e, 0x48, 0xae, 0xf3, 0xd7, 0x0e, 0x23, 0x19, 0x21, 0x42, 0x98, 0xb0, 0x56, 0xe9, 0x76, 0xab, - 0x63, 0x30, 0xc2, 0xdf, 0x7e, 0x5a, 0x5f, 0x42, 0x49, 0x38, 0xf2, 0x70, 0xeb, 0x28, 0xba, 0xfb, - 0x04, 0x91, 0xaf, 0x56, 0x9e, 0xf3, 0xcd, 0x65, 0x00, 0xd4, 0x85, 0xd6, 0x3b, 0x2d, 0xfa, 0x8d, - 0xf0, 
0x24, 0xd4, 0x4a, 0xac, 0x1f, 0xc0, 0xb8, 0x9a, 0x21, 0x14, 0xf8, 0xb5, 0xc3, 0x0e, 0x67, - 0x6b, 0xce, 0x78, 0x58, 0x2c, 0x5f, 0x13, 0x9f, 0x37, 0xc6, 0x2e, 0x82, 0xf8, 0x70, 0x0d, 0xc1, - 0x85, 0xf9, 0xd8, 0x26, 0x88, 0x62, 0x4a, 0x28, 0x19, 0x9d, 0xbb, 0x3a, 0xaa, 0xdf, 0x71, 0x11, - 0x7e, 0xe0, 0x44, 0x22, 0xbc, 0xf5, 0x7f, 0x0f, 0xa0, 0x72, 0x99, 0x98, 0x8f, 0x98, 0x9d, 0x4e, - 0xb7, 0x15, 0x56, 0x61, 0x1c, 0x47, 0xbf, 0x22, 0x1f, 0x12, 0xe6, 0xfb, 0xa1, 0x8c, 0xfd, 0xf2, - 0xa8, 0x7c, 0x06, 0x9d, 0x4f, 0x22, 0x34, 0xf2, 0x09, 0x8c, 0xae, 0x76, 0x5a, 0x48, 0x61, 0xf0, - 0x14, 0x14, 0x24, 0x12, 0x5b, 0x13, 0xec, 0xf2, 0x0e, 0xfb, 0x84, 0xb9, 0x79, 0xc7, 0xd6, 0x4a, - 0x22, 0x2d, 0x77, 0x38, 0x4b, 0xcb, 0x1d, 0x89, 0x69, 0xb9, 0x16, 0x0c, 0x6f, 0xfb, 0x2d, 0x11, - 0xaa, 0x67, 0x7a, 0x69, 0x52, 0x4c, 0x1c, 0x96, 0xd9, 0xbc, 0xca, 0xfa, 0x37, 0x05, 0x58, 0x78, - 0x40, 0xc3, 0xd4, 0x3d, 0x64, 0xcc, 0x4a, 0xe1, 0x5b, 0xcf, 0xca, 0xc0, 0xab, 0xcc, 0x8a, 0x1a, - 0xf5, 0x60, 0xd6, 0xa8, 0x87, 0xb2, 0x46, 0x3d, 0x9c, 0x3d, 0xea, 0x07, 0x30, 0xc2, 0x87, 0xca, - 0x34, 0xf9, 0xf5, 0x90, 0x1e, 0x46, 0x9a, 0xbc, 0xee, 0x45, 0x67, 0xf3, 0x3a, 0x26, 0x48, 0x6e, - 0x38, 0x81, 0xae, 0xc9, 0x8b, 0x9f, 0xd6, 0x4f, 0xf1, 0x09, 0xf2, 0x86, 0xd7, 0x7c, 0xa6, 0x59, - 0x84, 0x47, 0xf9, 0x17, 0x1a, 0xbf, 0x41, 0x60, 0x50, 0xbc, 0xc6, 0x96, 0x10, 0xe4, 0x0a, 0x4c, - 0xac, 0x77, 0xee, 0x7b, 0x7e, 0x93, 0x6e, 0x77, 0xda, 0x9c, 0xfa, 0x98, 0xad, 0x17, 0x09, 0x4b, - 0x89, 0x68, 0x21, 0x32, 0x3f, 0x60, 0x41, 0xcc, 0xfc, 0xc0, 0xca, 0xf6, 0x96, 0x6c, 0x5e, 0x27, - 0x0c, 0x31, 0xec, 0xef, 0x3c, 0xcd, 0x5d, 0xa9, 0xf8, 0xfd, 0x00, 0x9f, 0xc0, 0x79, 0x9b, 0x76, - 0xdb, 0x0e, 0x93, 0xe9, 0x0e, 0x3d, 0x0e, 0xaf, 0xc6, 0x7c, 0x25, 0xe5, 0xf9, 0xa0, 0xe9, 0x53, - 0xa1, 0xba, 0x3c, 0x90, 0xd3, 0xe5, 0x43, 0xb8, 0xfa, 0x80, 0x86, 0x26, 0x43, 0x8d, 0xec, 0xcd, - 0x62, 0xf0, 0x6b, 0x30, 0x16, 0x98, 0xb6, 0x72, 0xf9, 0x1c, 0x2e, 0x15, 0x71, 0xef, 0x9e, 0xbc, - 0x4d, 0x12, 0x74, 0xd4, 0x5f, 0xd6, 0xa7, 
0x50, 0xce, 0x6a, 0xee, 0x64, 0x2e, 0xaf, 0x2e, 0x5c, - 0xc9, 0x26, 0x20, 0xba, 0xbb, 0x0a, 0xd2, 0xae, 0x2e, 0x3e, 0xa1, 0x7e, 0xbd, 0x35, 0x4d, 0xf1, - 0xe2, 0x0f, 0xab, 0x2a, 0x9d, 0xff, 0xbe, 0x45, 0x77, 0x1b, 0x78, 0x65, 0x6d, 0x12, 0x88, 0xe6, - 0xb5, 0x02, 0x63, 0xb2, 0x4c, 0xcc, 0xeb, 0x42, 0x6a, 0x4f, 0xe5, 0x84, 0xb6, 0x24, 0x01, 0x85, - 0x66, 0xfd, 0x54, 0x5e, 0xdf, 0x98, 0x18, 0x27, 0x7b, 0x4f, 0x7b, 0x92, 0xfb, 0x1a, 0xcb, 0x83, - 0xf3, 0x26, 0x6d, 0xdd, 0x2c, 0x5f, 0xd4, 0xcc, 0xf2, 0xdc, 0x1a, 0x7f, 0xc5, 0x34, 0x13, 0x0b, - 0x4b, 0x83, 0x56, 0x44, 0x2e, 0xeb, 0xc6, 0xf7, 0xc9, 0xe4, 0x03, 0xdd, 0x3b, 0x50, 0x4a, 0x6b, - 0x50, 0xd3, 0x03, 0x95, 0x85, 0x57, 0xc8, 0x3b, 0x2b, 0x70, 0x59, 0x06, 0xcb, 0xf2, 0xbc, 0x30, - 0x08, 0x7d, 0xa7, 0x5b, 0x6f, 0xfa, 0x6e, 0x37, 0xc2, 0xb2, 0x60, 0x84, 0x97, 0x88, 0x99, 0xe0, - 0x57, 0x61, 0x1c, 0x46, 0xd4, 0x58, 0xbf, 0x57, 0x00, 0xcb, 0xf0, 0xd3, 0xc2, 0x75, 0xae, 0xf9, - 0xde, 0x73, 0xb7, 0xa5, 0x5d, 0x3f, 0xbd, 0x65, 0x98, 0x3e, 0xf9, 0x5b, 0xc5, 0xb8, 0x8b, 0xb8, - 0xe0, 0x99, 0x77, 0x62, 0xe6, 0x48, 0x2e, 0x78, 0xa2, 0xef, 0x96, 0x19, 0xfd, 0x48, 0x99, 0x29, - 0xff, 0x5d, 0x01, 0xae, 0xe5, 0xf6, 0x41, 0x8c, 0xe7, 0x09, 0x14, 0xe3, 0x75, 0x62, 0x07, 0x95, - 0x35, 0xbf, 0x8d, 0x24, 0x85, 0xbd, 0xbb, 0xdc, 0x0f, 0x5d, 0xfa, 0x37, 0x75, 0x15, 0xe5, 0x04, - 0xbd, 0xd3, 0xf7, 0x1e, 0x43, 0x6a, 0x78, 0xa1, 0xd3, 0x5e, 0x46, 0x03, 0xc0, 0x60, 0xf4, 0xa6, - 0x20, 0x64, 0xa5, 0x8d, 0x78, 0xe4, 0x0e, 0x0d, 0xd8, 0xfa, 0x31, 0x7e, 0xd7, 0xe9, 0x9d, 0x3e, - 0xd9, 0xa7, 0xb6, 0x0c, 0xd7, 0x62, 0xbe, 0x03, 0xaf, 0x40, 0x24, 0x84, 0x79, 0x36, 0xfd, 0x4c, - 0xf6, 0x7e, 0xe0, 0x7b, 0xbd, 0xee, 0x6f, 0x66, 0xd5, 0xff, 0xa2, 0xc0, 0x9d, 0x39, 0xf5, 0x66, - 0xc5, 0x42, 0x2f, 0x03, 0x44, 0xa5, 0x31, 0xa7, 0x7e, 0x55, 0xb1, 0x77, 0x97, 0xab, 0xdc, 0x78, - 0xab, 0xb0, 0xcf, 0x09, 0x68, 0x68, 0xbf, 0xd9, 0x95, 0xbc, 0x87, 0x0e, 0x03, 0xaa, 0xf5, 0x93, - 0xcd, 0xfb, 0x7b, 0xd2, 0xfe, 0x71, 0x4a, 0xbc, 0x03, 0x98, 0x63, 0x1c, 0xa0, 
0xd2, 0x0b, 0x0f, - 0x3c, 0xdf, 0x0d, 0xe5, 0xf3, 0x14, 0x52, 0x13, 0x91, 0x02, 0x38, 0xd6, 0x8f, 0x7e, 0x7d, 0x54, - 0x7e, 0xff, 0x34, 0x61, 0x49, 0x25, 0xcd, 0x1d, 0x15, 0x5d, 0xc0, 0x5a, 0x80, 0xc1, 0x65, 0x7b, - 0x03, 0x19, 0x9e, 0xbd, 0xa1, 0x18, 0x9e, 0xbd, 0x61, 0xfd, 0xcd, 0x00, 0x94, 0x79, 0x2c, 0x13, - 0xf4, 0x33, 0x89, 0xac, 0x16, 0x9a, 0xe3, 0xca, 0x49, 0x0d, 0x0c, 0xb1, 0x58, 0x25, 0x03, 0x27, - 0x89, 0x55, 0xf2, 0x3b, 0x90, 0x61, 0xb2, 0x3a, 0x81, 0x15, 0xe0, 0xcd, 0xe3, 0xa3, 0xf2, 0xb5, - 0xc8, 0x0a, 0xc0, 0x6b, 0xd3, 0xcc, 0x01, 0x19, 0x4d, 0x24, 0xed, 0x17, 0x43, 0xaf, 0x60, 0xbf, - 0xb8, 0x03, 0xa3, 0xa8, 0xcc, 0xac, 0xd7, 0x84, 0xe7, 0x27, 0x6e, 0x4f, 0x0c, 0x9a, 0xd4, 0x70, - 0xf5, 0xe8, 0x86, 0x12, 0xcc, 0xfa, 0x9f, 0x06, 0xe0, 0x4a, 0xf6, 0x9c, 0x8b, 0xbe, 0xad, 0x00, - 0x44, 0x1e, 0x2e, 0x79, 0x1e, 0x35, 0xf8, 0xed, 0xbc, 0xa0, 0x4f, 0x94, 0x47, 0x9b, 0x86, 0xc7, - 0x64, 0x1f, 0xf9, 0x02, 0x3b, 0x76, 0x9d, 0x62, 0x3c, 0xcc, 0x16, 0xc1, 0x76, 0x45, 0x91, 0x11, - 0x6c, 0x57, 0x94, 0x91, 0x27, 0xb0, 0x50, 0xf3, 0xdd, 0xe7, 0x4e, 0x48, 0x1f, 0xd1, 0x97, 0xfc, - 0xb1, 0xd0, 0xaa, 0x78, 0x21, 0xc4, 0x9f, 0xd5, 0xdf, 0x3c, 0x3e, 0x2a, 0xbf, 0xd1, 0xe5, 0x20, - 0x18, 0xe8, 0x8d, 0xbf, 0x09, 0x6d, 0x24, 0x1f, 0x0d, 0x65, 0x11, 0xb2, 0xfe, 0x61, 0x01, 0x2e, - 0xa0, 0x58, 0x2e, 0xcc, 0xae, 0xb2, 0xf1, 0x57, 0x72, 0xac, 0xd4, 0x07, 0x28, 0xf6, 0x22, 0x3a, - 0x56, 0x1a, 0x2f, 0xd4, 0x6d, 0x03, 0x8c, 0xac, 0xc3, 0x84, 0xf8, 0x8d, 0xdf, 0xdf, 0x20, 0x2a, - 0x04, 0xf3, 0x1a, 0xc3, 0xc2, 0xad, 0xce, 0x4d, 0x45, 0xb8, 0xb1, 0x05, 0x31, 0x7c, 0xc8, 0x69, - 0xeb, 0xb8, 0xd6, 0xaf, 0x06, 0xe0, 0xe2, 0x1e, 0xf5, 0xdd, 0xa7, 0x2f, 0x33, 0x06, 0xb3, 0x0d, - 0x73, 0xb2, 0x88, 0xc7, 0x33, 0x31, 0x3e, 0x31, 0x1e, 0x9e, 0x53, 0x76, 0x55, 0x04, 0x44, 0x91, - 0x5f, 0x5c, 0x2a, 0xe2, 0x29, 0x5c, 0x26, 0xdf, 0x85, 0xb1, 0x58, 0x44, 0x21, 0x5c, 0x7f, 0xf9, - 0x85, 0x46, 0x4b, 0xb5, 0x76, 0xc6, 0x56, 0x90, 0xe4, 0x0f, 0xb2, 0xaf, 0xaa, 0x84, 0xe9, 0xa3, - 0x9f, 0xfd, 0x13, 
0x3f, 0x58, 0xf6, 0xb1, 0x3a, 0x5a, 0x6d, 0xca, 0x07, 0xbb, 0x76, 0xc6, 0xce, - 0x6a, 0xa9, 0x3a, 0x01, 0xe3, 0x15, 0xbc, 0xb7, 0x63, 0x9a, 0xfb, 0xbf, 0x1d, 0x80, 0xcb, 0xf2, - 0xe1, 0x4f, 0xc6, 0x34, 0x7f, 0x0e, 0x0b, 0xb2, 0xa8, 0xd2, 0x65, 0x02, 0x03, 0x6d, 0x99, 0x33, - 0xcd, 0x43, 0xe4, 0xca, 0x99, 0x76, 0x04, 0x4c, 0x34, 0xd9, 0x59, 0xe8, 0xaf, 0xc7, 0xfa, 0xf9, - 0x49, 0x5a, 0x7c, 0x27, 0xb4, 0x42, 0xea, 0x3c, 0xd3, 0x98, 0x1a, 0x83, 0x7f, 0xb6, 0x12, 0xd6, - 0xd3, 0xa1, 0x6f, 0x6b, 0x3d, 0x5d, 0x3b, 0x13, 0xb7, 0x9f, 0x56, 0xa7, 0x61, 0x72, 0x8b, 0xbe, - 0x88, 0xe6, 0xfd, 0xbf, 0x2c, 0xc4, 0x42, 0x40, 0x30, 0x09, 0x83, 0xc7, 0x82, 0x28, 0x44, 0x21, - 0x82, 0x30, 0x04, 0x84, 0x2e, 0x61, 0x70, 0xd0, 0x75, 0x18, 0xe5, 0x97, 0xd9, 0xad, 0x13, 0x68, - 0xf8, 0xea, 0x05, 0x0f, 0x7f, 0x56, 0xd9, 0xe2, 0xca, 0xbe, 0xc0, 0xb7, 0x1e, 0xc1, 0x55, 0xe1, - 0xe3, 0x6d, 0x2e, 0x3e, 0x36, 0x74, 0xca, 0xe3, 0xcb, 0x72, 0xe0, 0xf2, 0x03, 0x1a, 0x67, 0x3d, - 0xc6, 0x0b, 0xa7, 0x4f, 0x61, 0xc6, 0x28, 0x57, 0x14, 0x51, 0x2a, 0x55, 0x7b, 0x48, 0x91, 0x8e, - 0x43, 0x5b, 0x57, 0xd2, 0x9a, 0xd0, 0x3b, 0x6b, 0x51, 0x8c, 0x75, 0xeb, 0x6b, 0x41, 0xd3, 0x4e, - 0xc1, 0xf5, 0x6e, 0x6a, 0xdf, 0x35, 0xe7, 0x78, 0x3c, 0xa0, 0xa1, 0x3c, 0x79, 0x55, 0xad, 0x35, - 0x65, 0xdc, 0x05, 0x58, 0xd3, 0x30, 0x29, 0xab, 0xda, 0x34, 0x08, 0xac, 0xff, 0x75, 0x04, 0x2c, - 0x31, 0xb1, 0x69, 0x37, 0xf4, 0x72, 0x3e, 0x9e, 0x24, 0x3a, 0x2b, 0x0e, 0xaa, 0x73, 0x7a, 0x28, - 0xd7, 0xa8, 0x96, 0xef, 0x3c, 0x94, 0xf3, 0x52, 0x03, 0xc8, 0xad, 0x9d, 0xb1, 0x13, 0xa3, 0xff, - 0x2a, 0x83, 0x4d, 0xf2, 0x8f, 0xed, 0xfa, 0xf1, 0x51, 0xf9, 0x6a, 0x06, 0x9b, 0x34, 0xe8, 0xa6, - 0xb3, 0x4c, 0xdb, 0xbc, 0x12, 0x19, 0x7c, 0x95, 0x2b, 0x11, 0xf6, 0x45, 0xea, 0x97, 0x22, 0xbb, - 0xe6, 0x5c, 0x8a, 0xef, 0x51, 0xde, 0xde, 0xeb, 0x55, 0x22, 0x12, 0x83, 0x56, 0x62, 0x50, 0x35, - 0xc8, 0x10, 0x17, 0x8a, 0x9a, 0xcd, 0x72, 0xf9, 0x80, 0x36, 0x9f, 0x09, 0x5b, 0xb1, 0xbc, 0xd0, - 0x4d, 0xb3, 0x99, 0xf3, 0x70, 0xdb, 0xfc, 0x3b, 0xe7, 
0x15, 0x8d, 0x26, 0x43, 0xd5, 0x23, 0x49, - 0xc4, 0xc9, 0x92, 0x9f, 0xc3, 0xac, 0x5a, 0xea, 0x98, 0x8b, 0xd6, 0xc4, 0xd2, 0x1b, 0x51, 0x04, - 0xd8, 0xc3, 0xa7, 0xce, 0xad, 0xe7, 0x77, 0x6f, 0xa5, 0xc0, 0xf2, 0x00, 0x05, 0x4d, 0x59, 0xa1, - 0xf9, 0x67, 0xe9, 0x17, 0x5d, 0x29, 0x88, 0xe4, 0x0b, 0x98, 0xab, 0xd7, 0xb7, 0xf9, 0x63, 0x0e, - 0x5b, 0x5e, 0xf0, 0xdb, 0x1b, 0xc2, 0x61, 0x0b, 0x97, 0x3b, 0x08, 0xbc, 0x86, 0x78, 0x04, 0xa2, - 0xbb, 0x05, 0xe8, 0x21, 0x1a, 0xd2, 0x48, 0x90, 0x4f, 0x61, 0x12, 0x03, 0x39, 0x55, 0x5a, 0x2d, - 0x9f, 0x2d, 0xcc, 0x58, 0x74, 0xd0, 0xf2, 0x98, 0x4f, 0x0e, 0xaf, 0xd0, 0xe3, 0xfa, 0xea, 0x08, - 0xfa, 0x55, 0xfb, 0xff, 0xa0, 0x1e, 0x3b, 0x30, 0x59, 0xc6, 0x6d, 0x53, 0xf1, 0x6a, 0x49, 0x7e, - 0x19, 0x19, 0xd7, 0x84, 0x85, 0xef, 0xf8, 0x9a, 0xf0, 0xff, 0x1b, 0x90, 0x4f, 0x3c, 0x92, 0x37, - 0xb5, 0xa7, 0xbe, 0x2d, 0x4c, 0x1d, 0xc1, 0x89, 0x0e, 0xfa, 0xd4, 0xce, 0x91, 0xaa, 0xbc, 0x6b, - 0x55, 0x51, 0xd0, 0xa6, 0xd5, 0xbd, 0x45, 0x54, 0x61, 0x5c, 0xbf, 0xa2, 0x58, 0xa5, 0x61, 0xc5, - 0x2f, 0xf2, 0x06, 0xbf, 0xfd, 0x45, 0xde, 0x2f, 0x60, 0x5e, 0xbe, 0xad, 0x5a, 0xa6, 0x9d, 0x90, - 0xfa, 0xf2, 0xca, 0x7f, 0x3a, 0x8a, 0x26, 0x87, 0x71, 0x03, 0x8b, 0x30, 0x58, 0xb1, 0xb7, 0x84, - 0x49, 0x88, 0xfd, 0x49, 0xae, 0x98, 0x1e, 0x75, 0xfc, 0xd1, 0x9c, 0xe1, 0x3f, 0x77, 0x85, 0x75, - 0x97, 0x1b, 0x6a, 0x5c, 0x19, 0x03, 0xd0, 0xd6, 0x8b, 0xac, 0x65, 0xb8, 0x60, 0x36, 0x5f, 0xa3, - 0xfe, 0xa1, 0x8b, 0xc2, 0x7b, 0x9d, 0x86, 0xb2, 0xd1, 0x42, 0xd4, 0x28, 0xd1, 0x3d, 0xb2, 0x85, - 0x1e, 0xf9, 0x1f, 0x06, 0xa0, 0x9c, 0x3a, 0x88, 0x4a, 0x10, 0xb8, 0xfb, 0x1d, 0x0c, 0xcd, 0x71, - 0x11, 0x86, 0x1e, 0xb9, 0x9d, 0x96, 0xae, 0x89, 0x3e, 0x73, 0x3b, 0x2d, 0x1b, 0x4b, 0x99, 0x12, - 0x53, 0xef, 0x3d, 0x41, 0x00, 0x4d, 0xc7, 0x0e, 0x7a, 0x4f, 0x1a, 0x0c, 0x48, 0x57, 0x62, 0x04, - 0x18, 0xb9, 0x0e, 0xa3, 0x32, 0x8c, 0xdb, 0x60, 0x64, 0x7e, 0x93, 0xf1, 0xdb, 0x64, 0x1d, 0xf9, - 0x18, 0xc6, 0x36, 0x69, 0xe8, 0xb4, 0x9c, 0xd0, 0x11, 0x7b, 0x47, 0x26, 0xfa, 0x90, 0xc5, 
0xd5, - 0xa2, 0x38, 0xe2, 0xc7, 0x0e, 0x45, 0x89, 0xad, 0x50, 0x70, 0x02, 0xdd, 0xa0, 0xdb, 0x76, 0x5e, - 0x2a, 0x6f, 0x54, 0x36, 0x81, 0x51, 0x11, 0x79, 0xcf, 0xf4, 0xd9, 0x88, 0xee, 0xdf, 0x52, 0x27, - 0x24, 0xf2, 0xe8, 0x58, 0x43, 0x3f, 0x92, 0x68, 0xaa, 0x45, 0x98, 0x42, 0x2b, 0x15, 0xdb, 0x80, - 0xb4, 0x4d, 0x44, 0xeb, 0xef, 0x01, 0x9c, 0xad, 0x39, 0xfb, 0x6e, 0x87, 0x89, 0x24, 0x36, 0x0d, - 0xbc, 0x9e, 0xdf, 0xa4, 0xa4, 0x02, 0xd3, 0xa6, 0x07, 0x78, 0x1f, 0xff, 0x76, 0x26, 0x75, 0x99, - 0x65, 0x64, 0x09, 0xc6, 0xd5, 0xab, 0x73, 0x21, 0x2a, 0xa5, 0xbc, 0x46, 0x5f, 0x3b, 0x63, 0x47, - 0x60, 0xe4, 0x03, 0xe3, 0xf6, 0x72, 0x46, 0x05, 0x50, 0x40, 0xd8, 0x25, 0xee, 0xa2, 0xdb, 0x31, - 0x42, 0x9d, 0xa8, 0x0b, 0xcd, 0x9f, 0x26, 0x2e, 0x34, 0x87, 0x8d, 0x1e, 0x27, 0xac, 0xba, 0x28, - 0xe9, 0x66, 0x46, 0xf3, 0x4f, 0xb9, 0xea, 0xfc, 0x12, 0x26, 0x1e, 0xf5, 0x9e, 0x50, 0x79, 0x75, - 0x3b, 0x22, 0xa4, 0xbf, 0xf8, 0xbb, 0x06, 0x51, 0xbf, 0x77, 0x8f, 0x7f, 0xc5, 0xcf, 0x7a, 0x4f, - 0x68, 0x32, 0x4d, 0x04, 0x3b, 0x76, 0x35, 0x62, 0xe4, 0x00, 0x8a, 0xf1, 0x27, 0x08, 0x62, 0x49, - 0x73, 0x1e, 0x4e, 0x60, 0x04, 0x21, 0x2d, 0x19, 0x05, 0x77, 0x8c, 0x36, 0x1a, 0x49, 0x50, 0x25, - 0xbf, 0x80, 0xf9, 0x54, 0x9b, 0xba, 0x7a, 0x44, 0x99, 0x6f, 0xae, 0xc7, 0x33, 0x2c, 0x36, 0x6b, - 0xf2, 0xc5, 0xa6, 0xd1, 0x72, 0x7a, 0x2b, 0xa4, 0x05, 0x33, 0x31, 0xd7, 0x7a, 0x91, 0x41, 0x27, - 0xdb, 0x59, 0x1f, 0xc5, 0x2e, 0x19, 0x78, 0x3a, 0xb5, 0xad, 0x38, 0x49, 0xb2, 0x01, 0xe3, 0xca, - 0x98, 0x25, 0x82, 0xfe, 0xa5, 0x19, 0xee, 0x16, 0x8f, 0x8f, 0xca, 0x73, 0x91, 0xe1, 0xce, 0xa0, - 0x19, 0x11, 0x20, 0xbf, 0x5f, 0x80, 0x73, 0xe9, 0x86, 0xcd, 0xc5, 0x49, 0xa4, 0xdd, 0xd7, 0xee, - 0x8b, 0xaa, 0x23, 0x3e, 0xdc, 0x73, 0x5b, 0xd1, 0x03, 0x57, 0x69, 0x00, 0x36, 0xda, 0xcd, 0x68, - 0x89, 0xdc, 0x01, 0xd8, 0x77, 0x43, 0xb1, 0xc6, 0x18, 0x7f, 0x2e, 0xf9, 0x81, 0xb0, 0x6e, 0xef, - 0xbb, 0xa1, 0x58, 0xe9, 0x3f, 0x2d, 0xf4, 0xe5, 0xab, 0x18, 0x96, 0x6e, 0x62, 0xe9, 0x46, 0x1e, - 0xd3, 0x89, 0xa0, 0xab, 0x77, 
0x8e, 0x8f, 0xca, 0xef, 0xa8, 0xd8, 0x66, 0x4d, 0x84, 0x92, 0xcf, - 0x74, 0x1b, 0x8e, 0x82, 0x33, 0xc6, 0xd3, 0x97, 0xb5, 0xbf, 0x03, 0x23, 0x68, 0x5a, 0x0a, 0x16, - 0xa7, 0x50, 0xf9, 0xc2, 0x88, 0x5c, 0x68, 0x80, 0xd2, 0x65, 0x19, 0x01, 0x43, 0xd6, 0x98, 0x12, - 0x83, 0xe2, 0x9e, 0x54, 0x3a, 0x44, 0xfc, 0x3e, 0xa1, 0x08, 0xf3, 0x2a, 0x19, 0x40, 0xc7, 0x48, - 0x97, 0x62, 0xa2, 0x55, 0x01, 0xc6, 0x7c, 0xc1, 0xee, 0x1e, 0x0e, 0x8d, 0x0d, 0x15, 0x87, 0x79, - 0x1c, 0x23, 0xfe, 0x5d, 0xca, 0x8b, 0xa6, 0xab, 0x8a, 0x37, 0x6d, 0xfb, 0xe9, 0x0b, 0x63, 0xfd, - 0xe1, 0x18, 0x7f, 0x62, 0xbe, 0xdb, 0x71, 0x9f, 0xba, 0x11, 0x0b, 0xd5, 0xad, 0xd7, 0x51, 0xc2, - 0x32, 0xa1, 0x5b, 0x66, 0xa4, 0x26, 0x53, 0x86, 0xee, 0x81, 0xbe, 0x86, 0xee, 0x7b, 0xda, 0x95, - 0xb0, 0x16, 0xd0, 0x97, 0xeb, 0x10, 0xa6, 0x61, 0x39, 0xba, 0x2b, 0xfe, 0x1a, 0x46, 0x30, 0x06, - 0x2f, 0xbf, 0x6f, 0x9f, 0x58, 0xba, 0x25, 0xd6, 0x3d, 0xa7, 0xfb, 0x3c, 0x68, 0xaf, 0x08, 0x1b, - 0xc1, 0x97, 0x06, 0x0b, 0x8c, 0xa5, 0xc1, 0x12, 0xb2, 0x03, 0xb3, 0x35, 0x26, 0xd2, 0xf2, 0xb7, - 0x0b, 0x5d, 0x5f, 0x18, 0xff, 0xb8, 0x59, 0x11, 0x45, 0xea, 0xae, 0xac, 0x6e, 0x50, 0x55, 0xaf, - 0x0b, 0x85, 0x29, 0xe8, 0x64, 0x15, 0xa6, 0xeb, 0xd4, 0xf1, 0x9b, 0x07, 0x8f, 0xe8, 0x4b, 0xa6, - 0x4e, 0x18, 0x39, 0x7d, 0x02, 0xac, 0x61, 0xe3, 0xc5, 0x2a, 0xdd, 0x87, 0xca, 0x44, 0x22, 0x3f, - 0x86, 0x91, 0xba, 0xe7, 0x87, 0xd5, 0x97, 0x82, 0xad, 0xca, 0x1b, 0x59, 0x5e, 0x58, 0x3d, 0x2f, - 0xf3, 0x1a, 0x05, 0x9e, 0x1f, 0x36, 0x9e, 0x18, 0xb1, 0xe0, 0x38, 0x08, 0x79, 0x09, 0x73, 0x26, - 0x4b, 0x13, 0x2e, 0xf5, 0x63, 0x42, 0x8d, 0x49, 0xe3, 0x9b, 0x1c, 0xa4, 0x7a, 0x53, 0x50, 0xbf, - 0x12, 0x67, 0x9c, 0x4f, 0xb1, 0x5e, 0x97, 0xfd, 0xd3, 0xf0, 0xc9, 0x26, 0x26, 0x78, 0xe2, 0x23, - 0xaa, 0x04, 0xdc, 0x15, 0x7f, 0x3c, 0x8a, 0x36, 0xd8, 0x43, 0xb6, 0x88, 0x33, 0xe1, 0x04, 0xf1, - 0xac, 0x60, 0x76, 0x02, 0x95, 0xd4, 0xe0, 0xec, 0x6e, 0x40, 0x6b, 0x3e, 0x7d, 0xee, 0xd2, 0x17, - 0x92, 0x1e, 0x44, 0xa1, 0xd9, 0x18, 0xbd, 0x2e, 0xaf, 0x4d, 0x23, 
0x98, 0x44, 0x26, 0x1f, 0x00, - 0xd4, 0xdc, 0x4e, 0x87, 0xb6, 0xf0, 0x5a, 0x7f, 0x02, 0x49, 0xe1, 0x95, 0x45, 0x17, 0x4b, 0x1b, - 0x5e, 0xa7, 0xad, 0x4f, 0xa9, 0x06, 0x4c, 0xaa, 0x30, 0xb5, 0xde, 0x69, 0xb6, 0x7b, 0xc2, 0xfd, - 0x26, 0x40, 0x96, 0x2a, 0x42, 0x46, 0xba, 0xbc, 0xa2, 0x91, 0xe0, 0x06, 0x26, 0x0a, 0x79, 0x04, - 0x44, 0x14, 0x88, 0x5d, 0xeb, 0x3c, 0x69, 0x53, 0xc1, 0x17, 0x50, 0x43, 0x92, 0x84, 0x70, 0xbb, - 0x1b, 0x91, 0x18, 0x13, 0x68, 0xa5, 0x0f, 0x60, 0x42, 0xdb, 0xf3, 0x29, 0x71, 0x50, 0xe6, 0xf4, - 0x38, 0x28, 0xe3, 0x7a, 0xbc, 0x93, 0xff, 0xbd, 0x00, 0x17, 0xd3, 0xbf, 0x25, 0xa1, 0x44, 0x6c, - 0xc3, 0xb8, 0x2a, 0x54, 0x2f, 0xdf, 0xa4, 0x6a, 0x1d, 0x93, 0xc1, 0xf8, 0x07, 0x2d, 0x59, 0x94, - 0x3e, 0xfa, 0x88, 0xc6, 0x2b, 0xdc, 0x77, 0xfd, 0xd7, 0x63, 0x30, 0x87, 0x2f, 0x3c, 0xe2, 0x7c, - 0xea, 0x53, 0x8c, 0x67, 0x84, 0x65, 0xda, 0xf5, 0x8d, 0xb0, 0xe4, 0xf2, 0xf2, 0x78, 0xc4, 0x3f, - 0x03, 0x81, 0xfc, 0x40, 0xf7, 0x39, 0x1a, 0xd0, 0x12, 0x50, 0xc9, 0x42, 0x7d, 0x08, 0x91, 0x33, - 0xd2, 0x5b, 0x86, 0xcb, 0xcb, 0x89, 0x99, 0xde, 0xd0, 0x49, 0x99, 0xde, 0xae, 0x62, 0x7a, 0x3c, - 0x4e, 0xce, 0x9b, 0x1a, 0xd3, 0x7b, 0xfd, 0xdc, 0x6e, 0xe4, 0x75, 0x73, 0xbb, 0xd1, 0x6f, 0xc7, - 0xed, 0xc6, 0x5e, 0x91, 0xdb, 0xdd, 0x87, 0xe9, 0x2d, 0x4a, 0x5b, 0xda, 0x45, 0xe4, 0x78, 0x74, - 0xcc, 0x76, 0x28, 0x9a, 0x98, 0xd3, 0x6e, 0x23, 0x63, 0x58, 0x99, 0x5c, 0x13, 0xfe, 0x6e, 0xb8, - 0xe6, 0xc4, 0x6b, 0xe6, 0x9a, 0x93, 0xdf, 0x86, 0x6b, 0x26, 0x58, 0xdf, 0xd4, 0xa9, 0x59, 0xdf, - 0xb7, 0xe1, 0x56, 0xff, 0xcb, 0x00, 0x2c, 0xb0, 0x0f, 0xa0, 0xfd, 0x9c, 0xd6, 0xeb, 0x6b, 0xc2, - 0x55, 0x2b, 0x72, 0x89, 0x3a, 0xf0, 0x02, 0xf9, 0xaa, 0x01, 0xff, 0x66, 0x65, 0x5d, 0xcf, 0x97, - 0x6e, 0x25, 0xf8, 0x37, 0xa9, 0xc2, 0x08, 0xff, 0x42, 0x16, 0x07, 0x8d, 0x20, 0x54, 0x19, 0x74, - 0xf5, 0xef, 0xcb, 0x16, 0x98, 0xe4, 0x2e, 0xcc, 0xa5, 0x7d, 0x2a, 0xc2, 0xde, 0x30, 0xdb, 0x4d, - 0xf9, 0x4c, 0xde, 0x84, 0x99, 0xd8, 0xc7, 0xc0, 0x73, 0xce, 0xd8, 0xd3, 0x81, 0xf1, 0x21, 0x7c, - 0x9b, 
0xe9, 0x59, 0x86, 0xc5, 0xe4, 0x28, 0x04, 0x1f, 0x7f, 0x13, 0xc4, 0x5b, 0x6e, 0xa1, 0x16, - 0xc7, 0x05, 0x71, 0x5b, 0x54, 0x5b, 0x9f, 0xa0, 0x5b, 0xb4, 0x22, 0x10, 0x68, 0xf3, 0xbb, 0xa6, - 0xcd, 0xef, 0x9a, 0x98, 0xdf, 0x9a, 0x36, 0xbf, 0xec, 0x6f, 0xab, 0x8a, 0xce, 0xd0, 0x3a, 0xbe, - 0x7a, 0x5d, 0x35, 0x2a, 0x9e, 0x66, 0x8b, 0x73, 0x24, 0xd1, 0x05, 0x59, 0x6f, 0xfd, 0x65, 0x81, - 0x3b, 0x56, 0xfc, 0xa7, 0x78, 0x1c, 0x7d, 0x1b, 0x67, 0x87, 0x3f, 0x88, 0x42, 0xb6, 0x88, 0xf0, - 0x32, 0xbe, 0xd3, 0x7c, 0x16, 0x79, 0x9b, 0xfc, 0x84, 0xf1, 0x52, 0xbd, 0x42, 0x68, 0x4d, 0x0b, - 0x6a, 0xa6, 0xf4, 0xca, 0xbd, 0xbb, 0x92, 0xc9, 0x8a, 0xc8, 0x35, 0xbc, 0xd8, 0x64, 0xb2, 0x3a, - 0x02, 0xfa, 0xfb, 0xce, 0x58, 0x36, 0x8f, 0x38, 0x92, 0xda, 0x83, 0xf7, 0x92, 0x31, 0x33, 0x50, - 0xe5, 0x8c, 0x62, 0x66, 0xe8, 0xd3, 0x18, 0x45, 0xcf, 0xd8, 0x85, 0x0b, 0x36, 0x3d, 0xf4, 0x9e, - 0xd3, 0xd7, 0x4b, 0xf6, 0x2b, 0x38, 0x6f, 0x12, 0xe4, 0xaf, 0x2b, 0x79, 0x8a, 0x90, 0x4f, 0xd2, - 0x13, 0x8b, 0x08, 0x04, 0x9e, 0x58, 0x84, 0xe7, 0x27, 0x60, 0x7f, 0xea, 0x67, 0x33, 0xd6, 0x59, - 0x1e, 0x5c, 0x34, 0x89, 0x57, 0x5a, 0x2d, 0x4c, 0x91, 0xdc, 0x74, 0xbb, 0x4e, 0x27, 0x24, 0xdb, - 0x30, 0xa1, 0xfd, 0x8c, 0x19, 0x84, 0xb4, 0x1a, 0x21, 0x37, 0x46, 0x05, 0x46, 0x7c, 0xe7, 0xa8, - 0xd8, 0xa2, 0x50, 0x8e, 0x4f, 0x0f, 0x9b, 0x32, 0xbd, 0xcd, 0x2a, 0x4c, 0x69, 0x3f, 0xd5, 0xb5, - 0x0b, 0x32, 0x58, 0xad, 0x05, 0x73, 0xc2, 0x4c, 0x14, 0xab, 0x09, 0xa5, 0xb4, 0x49, 0xe3, 0x89, - 0x00, 0xc8, 0x6a, 0x14, 0xcf, 0xaf, 0xbf, 0xc7, 0xf0, 0x4c, 0x56, 0x2c, 0x3f, 0xeb, 0xbf, 0x1b, - 0x82, 0x0b, 0x62, 0x31, 0x5e, 0xe7, 0x8a, 0x93, 0x9f, 0xc2, 0x84, 0xb6, 0xc6, 0x62, 0xd2, 0xaf, - 0xc8, 0xb7, 0x91, 0x59, 0x7b, 0x81, 0x1b, 0xae, 0x7a, 0x58, 0xd0, 0x88, 0x2d, 0xf7, 0xda, 0x19, - 0x5b, 0x27, 0x49, 0xda, 0x30, 0x6d, 0x2e, 0xb4, 0xb0, 0xdd, 0x5d, 0x4b, 0x6d, 0xc4, 0x04, 0x95, - 0x59, 0x02, 0x5a, 0x8d, 0xd4, 0xe5, 0x5e, 0x3b, 0x63, 0xc7, 0x68, 0x93, 0x6f, 0xe0, 0x6c, 0x62, - 0x95, 0x85, 0x61, 0xf6, 0x46, 0x6a, 0x83, 
0x09, 0x68, 0x7e, 0xa5, 0xe4, 0x63, 0x71, 0x66, 0xb3, - 0xc9, 0x46, 0x48, 0x0b, 0x26, 0xf5, 0x85, 0x17, 0xc6, 0xc5, 0xab, 0x39, 0x53, 0xc9, 0x01, 0xb9, - 0x00, 0x2d, 0xe6, 0x12, 0xd7, 0xfe, 0xa5, 0x79, 0x4d, 0x66, 0x00, 0x8f, 0xc1, 0x08, 0xff, 0x6d, - 0xfd, 0x4d, 0x01, 0x2e, 0xd4, 0x7c, 0x1a, 0xd0, 0x4e, 0x93, 0x1a, 0xaf, 0x4c, 0xbe, 0xe5, 0x8e, - 0xc8, 0xba, 0xa1, 0x1a, 0x78, 0xfd, 0x37, 0x54, 0x83, 0xa7, 0xbc, 0xa1, 0xb2, 0xfe, 0x41, 0x01, - 0x16, 0xd3, 0xc6, 0x5c, 0xa7, 0x9d, 0x16, 0xa9, 0x41, 0x31, 0x3e, 0x09, 0xe2, 0x93, 0xb3, 0x54, - 0x94, 0xf8, 0xcc, 0xe9, 0x5a, 0x3b, 0x63, 0x27, 0xb0, 0xc9, 0x16, 0x9c, 0xd5, 0xca, 0xc4, 0x0d, - 0xd1, 0xc0, 0x49, 0x6e, 0x88, 0xd8, 0x16, 0x49, 0xa0, 0xea, 0x17, 0x6c, 0x6b, 0x78, 0x6c, 0xaf, - 0x78, 0x87, 0x8e, 0xdb, 0x61, 0x9a, 0x8e, 0x16, 0x6f, 0x10, 0xa2, 0x52, 0xb1, 0x6e, 0xfc, 0xca, - 0x08, 0x4b, 0xe5, 0x8b, 0x3d, 0x05, 0x62, 0xfd, 0x08, 0x8f, 0x17, 0x61, 0x26, 0xe6, 0x31, 0x12, - 0x14, 0xb1, 0x2b, 0x30, 0xbc, 0xb3, 0x51, 0x5f, 0xae, 0x88, 0x88, 0x0b, 0x3c, 0x4e, 0x4f, 0x3b, - 0x68, 0x34, 0x1d, 0x9b, 0x57, 0x58, 0x1f, 0x01, 0x79, 0x40, 0x43, 0x91, 0xa6, 0x44, 0xe1, 0x5d, - 0x87, 0x51, 0x51, 0x24, 0x30, 0xf1, 0xf2, 0x43, 0x24, 0x3d, 0xb1, 0x65, 0x9d, 0x55, 0x93, 0x8a, - 0x62, 0x9b, 0x3a, 0x81, 0x26, 0x35, 0xbc, 0x0f, 0x63, 0xbe, 0x28, 0x13, 0x42, 0xc3, 0xb4, 0xca, - 0x42, 0x85, 0xc5, 0xfc, 0x52, 0x4e, 0xc2, 0xd8, 0xea, 0x2f, 0x6b, 0x03, 0x63, 0x6a, 0x6d, 0xaf, - 0xaf, 0x2c, 0xb3, 0x59, 0x15, 0x93, 0x25, 0x97, 0xe3, 0x36, 0x3e, 0xd2, 0x09, 0xa9, 0x1e, 0x6f, - 0x01, 0xa7, 0x06, 0x39, 0x90, 0x88, 0x24, 0xa7, 0x81, 0x58, 0xf7, 0x54, 0x84, 0xae, 0x14, 0x6a, - 0x59, 0xd9, 0x94, 0xb6, 0x30, 0xf6, 0xd8, 0x03, 0xf4, 0x47, 0x7c, 0x1d, 0x9d, 0x70, 0xa0, 0xc4, - 0x65, 0x10, 0x36, 0x2a, 0x91, 0x02, 0xd7, 0x53, 0x7c, 0x7b, 0x19, 0xc6, 0x55, 0x99, 0x72, 0x2e, - 0xe0, 0x73, 0x65, 0xc0, 0xef, 0xdd, 0xe3, 0xa1, 0x29, 0x9a, 0x8a, 0x40, 0x84, 0xc7, 0x9a, 0xe0, - 0x4c, 0xe1, 0x3b, 0x6e, 0x22, 0xa0, 0x7e, 0xf8, 0x9d, 0x36, 0x11, 0x05, 0xa7, 
0x3b, 0x4d, 0x13, - 0x06, 0xfc, 0xde, 0xd2, 0x49, 0x26, 0xea, 0x3b, 0x6e, 0x82, 0x4d, 0xd4, 0x77, 0xd7, 0x04, 0x95, - 0x51, 0xfc, 0xf8, 0x26, 0x4d, 0x34, 0xb2, 0x9a, 0x6c, 0x44, 0xde, 0x9d, 0xc4, 0x30, 0x72, 0xd7, - 0x83, 0xc2, 0x45, 0x3e, 0x59, 0xbf, 0x81, 0x66, 0xd8, 0x84, 0x7d, 0xb7, 0xcd, 0xfc, 0xcf, 0x05, - 0x1e, 0x53, 0xb0, 0xbe, 0xad, 0x25, 0x9f, 0xee, 0x3c, 0xf5, 0x34, 0xdf, 0x27, 0xed, 0x6b, 0xd7, - 0xae, 0x92, 0xd1, 0xf7, 0xc9, 0xe9, 0x85, 0x07, 0x2a, 0xe6, 0x3e, 0xde, 0x2b, 0xc7, 0xa1, 0xc9, - 0x07, 0x30, 0xa5, 0x15, 0x29, 0x51, 0x92, 0x67, 0x4b, 0xd2, 0xd1, 0xdd, 0x96, 0x6d, 0x42, 0x5a, - 0x7f, 0x5b, 0x80, 0xd9, 0xfa, 0xcb, 0x20, 0xa4, 0x87, 0x18, 0x3f, 0x55, 0x06, 0xb1, 0x40, 0x6b, - 0x16, 0xaa, 0x68, 0x8a, 0x51, 0x89, 0xfc, 0x86, 0x18, 0xde, 0xc8, 0x38, 0xc0, 0x15, 0x20, 0xe6, - 0x89, 0x91, 0x14, 0x54, 0x2f, 0x78, 0x9e, 0x18, 0x59, 0x6c, 0xa2, 0xea, 0xe0, 0x24, 0x00, 0x88, - 0x7a, 0x22, 0x0e, 0xe8, 0x3a, 0x93, 0xb7, 0x03, 0x2c, 0x45, 0xa3, 0x45, 0x84, 0xfb, 0xeb, 0xa3, - 0xf2, 0x7b, 0xa7, 0xf1, 0xdc, 0x8e, 0x48, 0xdb, 0x5a, 0x33, 0xd6, 0x1f, 0x0c, 0xc0, 0xb9, 0x94, - 0xf1, 0xd7, 0x69, 0xf8, 0x77, 0x31, 0x05, 0xcf, 0x61, 0x22, 0xea, 0x0c, 0x37, 0x5b, 0x8c, 0x57, - 0x77, 0x30, 0xad, 0x49, 0x34, 0x07, 0xc1, 0x6b, 0x99, 0x04, 0xbd, 0x21, 0xeb, 0x26, 0xdc, 0x58, - 0xef, 0x3c, 0xa7, 0x9d, 0xd0, 0xf3, 0x5f, 0x8a, 0x7d, 0x4b, 0x5b, 0xe2, 0x2e, 0x09, 0xf5, 0x59, - 0xe5, 0x44, 0xf7, 0x7b, 0x03, 0x50, 0xee, 0x03, 0x4a, 0xfe, 0xa4, 0xc0, 0xb3, 0xd1, 0xa9, 0x12, - 0x71, 0x12, 0x7f, 0x20, 0x6f, 0xf2, 0xf2, 0xf1, 0x6f, 0x19, 0xbf, 0xb8, 0xb9, 0xf3, 0xc3, 0xdf, - 0xff, 0xab, 0x57, 0x1e, 0xa9, 0xd9, 0x97, 0xd2, 0x8f, 0x81, 0x24, 0x1b, 0xe8, 0x67, 0x7c, 0x19, - 0xd2, 0x8d, 0x2f, 0x7b, 0x30, 0xa7, 0x86, 0xa0, 0x65, 0xf2, 0xc3, 0x97, 0x90, 0xc6, 0x86, 0xd1, - 0xf6, 0x85, 0x05, 0x20, 0x12, 0x94, 0x6d, 0x78, 0xfb, 0x22, 0x8f, 0xda, 0xc0, 0x62, 0xc1, 0xd6, - 0x4a, 0xad, 0xfb, 0x30, 0x1f, 0xa3, 0x2b, 0x84, 0x9a, 0xef, 0x83, 0x7a, 0x09, 0x85, 0x84, 0x07, - 0xab, 0x67, 0x7f, 
0x7d, 0x54, 0x9e, 0x0a, 0xdd, 0x43, 0x7a, 0x2b, 0x0a, 0x46, 0x2b, 0xff, 0xb2, - 0x36, 0x75, 0xb1, 0xac, 0xd2, 0xd6, 0x5f, 0x88, 0x93, 0xbb, 0x30, 0xc2, 0x4b, 0x62, 0x21, 0x1f, - 0x75, 0xe8, 0xea, 0xd0, 0x2f, 0x8f, 0xca, 0x67, 0x6c, 0x01, 0x68, 0xcd, 0xe3, 0xbb, 0x0d, 0xfc, - 0x51, 0x89, 0xde, 0x19, 0x5a, 0xbb, 0x3c, 0x04, 0x7a, 0x54, 0xac, 0xc2, 0x4a, 0x0e, 0x55, 0xa2, - 0xf7, 0x90, 0xd2, 0x88, 0x2a, 0xe1, 0x3a, 0xde, 0x8b, 0x36, 0x6d, 0xf1, 0x9c, 0x36, 0xd5, 0x49, - 0x61, 0x44, 0x1d, 0x72, 0x18, 0x01, 0x44, 0xb3, 0x3e, 0x85, 0xf9, 0xe5, 0x36, 0x75, 0xfc, 0x78, - 0x7b, 0x18, 0xf8, 0x98, 0x95, 0x99, 0xde, 0x55, 0x0e, 0x2b, 0x42, 0xef, 0x2a, 0x51, 0xc9, 0xe4, - 0x38, 0xce, 0xd4, 0xf5, 0x21, 0x45, 0x22, 0xd4, 0x30, 0xfe, 0x8e, 0x79, 0xfd, 0xa7, 0x8c, 0x9e, - 0xc3, 0x59, 0x9f, 0xa0, 0x5b, 0xa9, 0xd8, 0xa8, 0xae, 0xd7, 0x89, 0x38, 0xf8, 0xc9, 0xde, 0xa1, - 0xfc, 0xe7, 0x70, 0xb1, 0xd2, 0xed, 0xd2, 0x4e, 0x2b, 0x42, 0x64, 0x8a, 0xd8, 0xc9, 0x5e, 0x09, - 0x92, 0x0a, 0x0c, 0x23, 0xb4, 0x52, 0x8e, 0x45, 0x77, 0x53, 0xba, 0x83, 0x70, 0x22, 0x06, 0x18, - 0x36, 0xc0, 0x31, 0xad, 0x16, 0x2c, 0xd4, 0x7b, 0x4f, 0x0e, 0x5d, 0x9e, 0x6e, 0x1d, 0x5f, 0xda, - 0xca, 0xb6, 0xd7, 0x65, 0xd6, 0x0a, 0x3e, 0x19, 0x37, 0x23, 0xf7, 0x43, 0x74, 0xeb, 0x12, 0xaf, - 0x6f, 0x9f, 0xdf, 0xbd, 0x15, 0xa1, 0xe2, 0x53, 0x5c, 0xde, 0x0a, 0x56, 0x8b, 0xcc, 0x16, 0xd6, - 0x2c, 0x9c, 0xd5, 0x65, 0x79, 0xbe, 0x43, 0xe6, 0x61, 0xd6, 0x94, 0xd1, 0x79, 0xf1, 0xd7, 0x30, - 0xc7, 0x65, 0x08, 0x1e, 0xc3, 0x73, 0x29, 0x0a, 0x57, 0x39, 0xb0, 0xb7, 0x14, 0x73, 0xe5, 0xc1, - 0x8b, 0x6c, 0x15, 0x9d, 0x79, 0x6f, 0x89, 0x3f, 0x0d, 0x78, 0xbe, 0x64, 0xa8, 0xa9, 0x03, 0x7b, - 0x4b, 0xd5, 0x51, 0x11, 0x0b, 0x8d, 0x51, 0xe7, 0xcb, 0xff, 0x9d, 0x50, 0x5f, 0xc2, 0xd7, 0x68, - 0x6b, 0xd4, 0x41, 0xcf, 0xd1, 0xf4, 0x37, 0x3d, 0xd3, 0x30, 0xa0, 0x82, 0x1d, 0x0d, 0xb8, 0x2d, - 0xeb, 0xcf, 0x0a, 0x70, 0x93, 0x4b, 0x33, 0xe9, 0x78, 0x28, 0xb0, 0x67, 0x20, 0x93, 0xf7, 0x81, - 0x27, 0xcb, 0x15, 0x96, 0x2f, 0x4b, 0xf4, 0x3c, 0x8f, 
0x12, 0x47, 0x20, 0x15, 0x98, 0xd4, 0xfd, - 0x1b, 0x4f, 0x16, 0x47, 0xc5, 0x9e, 0x38, 0x7c, 0xea, 0x28, 0x9f, 0xc7, 0x67, 0x70, 0x61, 0xf5, - 0x1b, 0xb6, 0x21, 0x44, 0xe2, 0x3f, 0x71, 0x93, 0x11, 0xbd, 0x19, 0x99, 0xd9, 0x11, 0x3b, 0x46, - 0xfa, 0xab, 0xf1, 0x8e, 0xc7, 0x8b, 0x89, 0x05, 0x93, 0x82, 0x84, 0x1f, 0x39, 0xc2, 0xd9, 0x46, - 0x99, 0xf5, 0xcf, 0x0a, 0x70, 0x31, 0xbd, 0x35, 0xc1, 0x58, 0xd6, 0xe1, 0xec, 0xb2, 0xd3, 0xf1, - 0x3a, 0x6e, 0xd3, 0x69, 0xd7, 0x9b, 0x07, 0xb4, 0xd5, 0x53, 0x11, 0xd3, 0x14, 0x97, 0xd9, 0xa7, - 0x1d, 0x89, 0x2e, 0x41, 0xec, 0x24, 0x16, 0x79, 0x0f, 0xce, 0xa1, 0x83, 0x13, 0xe7, 0xbd, 0x6d, - 0xea, 0x2b, 0x7a, 0xbc, 0x67, 0x19, 0xb5, 0xe4, 0x8e, 0x14, 0x96, 0x5a, 0xbb, 0x1d, 0x37, 0x54, - 0x48, 0xdc, 0x3d, 0x31, 0xad, 0xca, 0xfa, 0x27, 0x05, 0x38, 0x8f, 0x49, 0x12, 0x8c, 0xb4, 0x4b, - 0x51, 0xe0, 0x40, 0x19, 0xfb, 0xae, 0x60, 0x38, 0x6c, 0x19, 0xd0, 0x66, 0x10, 0x3c, 0xf2, 0x0e, - 0x0c, 0xd5, 0xa5, 0x25, 0x7e, 0x3a, 0x96, 0x48, 0x4f, 0x66, 0x5a, 0xf6, 0xfc, 0xd0, 0x46, 0x28, - 0x72, 0x19, 0x60, 0x85, 0x06, 0x4d, 0xda, 0xc1, 0x8c, 0x87, 0xf8, 0x9c, 0xc8, 0xd6, 0x4a, 0xa2, - 0x37, 0xfd, 0x43, 0x59, 0x6f, 0xfa, 0x87, 0xcd, 0x37, 0xfd, 0xd6, 0x73, 0x9e, 0x22, 0x21, 0x3e, - 0x20, 0xb1, 0x48, 0x9f, 0x24, 0x12, 0x24, 0xf2, 0x73, 0xe0, 0x5c, 0xda, 0xc8, 0xf6, 0xee, 0x25, - 0x72, 0x1f, 0x66, 0x07, 0xea, 0xab, 0xc1, 0x1b, 0x06, 0x6c, 0xa5, 0xdd, 0xf6, 0x5e, 0xd0, 0x56, - 0xcd, 0xf7, 0x0e, 0xbd, 0xd0, 0x08, 0x11, 0x2f, 0x32, 0x84, 0x46, 0xe2, 0xb0, 0xd8, 0x95, 0xb1, - 0x62, 0xeb, 0x3f, 0x83, 0xeb, 0x7d, 0x28, 0x8a, 0x41, 0xd5, 0xe1, 0xac, 0x13, 0xab, 0x93, 0x26, - 0xd5, 0xeb, 0x69, 0xe3, 0x8a, 0x13, 0x0a, 0xec, 0x24, 0xfe, 0xdb, 0x3b, 0x46, 0x52, 0x41, 0xb2, - 0x08, 0x73, 0x35, 0x7b, 0x7b, 0x65, 0x77, 0x79, 0xa7, 0xb1, 0xf3, 0x45, 0x6d, 0xb5, 0xb1, 0xbb, - 0xf5, 0x68, 0x6b, 0xfb, 0xf1, 0x16, 0x8f, 0x74, 0x69, 0xd4, 0xec, 0xac, 0x56, 0x36, 0x8b, 0x05, - 0x32, 0x07, 0x45, 0xa3, 0x78, 0x75, 0xb7, 0x5a, 0x1c, 0x78, 0xfb, 0x6b, 0x23, 0x59, 0x1e, 
0xb9, - 0x08, 0x8b, 0xf5, 0xdd, 0x5a, 0x6d, 0xdb, 0x56, 0x54, 0xf5, 0x38, 0x9b, 0xf3, 0x70, 0xd6, 0xa8, - 0xbd, 0x6f, 0xaf, 0xae, 0x16, 0x0b, 0xac, 0x2b, 0x46, 0x71, 0xcd, 0x5e, 0xdd, 0x5c, 0xdf, 0xdd, - 0x2c, 0x0e, 0xbc, 0xdd, 0xd0, 0xfd, 0x8c, 0xc9, 0x05, 0x58, 0x58, 0x59, 0xdd, 0x5b, 0x5f, 0x5e, - 0x4d, 0xa3, 0x3d, 0x07, 0x45, 0xbd, 0x72, 0x67, 0x7b, 0xa7, 0xc6, 0x49, 0xeb, 0xa5, 0x8f, 0x57, - 0xab, 0x95, 0xdd, 0x9d, 0xb5, 0xad, 0xe2, 0xa0, 0x35, 0x34, 0x36, 0x50, 0x1c, 0x78, 0xfb, 0xa7, - 0x86, 0x13, 0x32, 0xeb, 0xbe, 0x00, 0xdf, 0xad, 0x57, 0x1e, 0x64, 0x37, 0xc1, 0x6b, 0x37, 0xef, - 0x57, 0x8a, 0x05, 0x72, 0x09, 0xce, 0x1b, 0xa5, 0xb5, 0x4a, 0xbd, 0xfe, 0x78, 0xdb, 0x5e, 0xd9, - 0x58, 0xad, 0xd7, 0x8b, 0x03, 0x6f, 0xef, 0x19, 0x71, 0x4c, 0x58, 0x0b, 0x9b, 0xf7, 0x2b, 0x0d, - 0x7b, 0xf5, 0xb3, 0xdd, 0x75, 0x7b, 0x75, 0x25, 0xd9, 0x82, 0x51, 0xfb, 0xc5, 0x6a, 0xbd, 0x58, - 0x20, 0xb3, 0x30, 0x63, 0x94, 0x6e, 0x6d, 0x17, 0x07, 0xde, 0xbe, 0x21, 0x42, 0x5d, 0x90, 0x69, - 0x80, 0x95, 0xd5, 0xfa, 0xf2, 0xea, 0xd6, 0xca, 0xfa, 0xd6, 0x83, 0xe2, 0x19, 0x32, 0x05, 0xe3, - 0x15, 0xf5, 0xb3, 0xf0, 0x76, 0x55, 0xe6, 0x42, 0xd3, 0xbe, 0x55, 0x32, 0x01, 0xa3, 0x2b, 0xab, - 0xf7, 0x2b, 0xbb, 0x1b, 0x3b, 0xc5, 0x33, 0xec, 0xc7, 0xb2, 0xbd, 0x5a, 0xd9, 0x59, 0x5d, 0x29, - 0x16, 0xc8, 0x38, 0x0c, 0xd7, 0x77, 0x2a, 0x3b, 0xab, 0xc5, 0x01, 0x32, 0x06, 0x43, 0xbb, 0xf5, - 0x55, 0xbb, 0x38, 0xb8, 0xf4, 0xab, 0x3f, 0x29, 0xc0, 0x04, 0x63, 0xdf, 0xd2, 0xa5, 0xf0, 0x6b, - 0x38, 0xa7, 0x0b, 0xd5, 0x8c, 0x6d, 0x89, 0xc4, 0x4f, 0x97, 0xe4, 0x73, 0x91, 0x6e, 0x80, 0x05, - 0x0a, 0x0c, 0x4f, 0xf2, 0x52, 0x59, 0xfa, 0x77, 0x7b, 0x2f, 0x3a, 0x69, 0x00, 0x37, 0x0b, 0x77, - 0x0a, 0xc4, 0x46, 0x43, 0x9d, 0xaa, 0x10, 0xc9, 0xaa, 0x2e, 0xc5, 0xa5, 0x79, 0x5e, 0x2e, 0x86, - 0x55, 0xca, 0xa8, 0xae, 0xf7, 0x0e, 0x0f, 0x1d, 0xff, 0x25, 0xf9, 0x1d, 0xb0, 0x74, 0x9a, 0x19, - 0x9a, 0xc4, 0xf7, 0x4f, 0xa6, 0x31, 0xc8, 0x36, 0x6f, 0x9c, 0x0c, 0x9c, 0x3c, 0x84, 0x29, 0x26, - 0x5f, 0x2b, 0x30, 0x72, 0x21, 
0x8e, 0xa8, 0x89, 0xf5, 0xa5, 0x8b, 0xe9, 0x95, 0x2a, 0x36, 0xfb, - 0x24, 0x0e, 0x24, 0x08, 0x9d, 0x4e, 0x93, 0x06, 0x44, 0x3e, 0x69, 0x94, 0x25, 0x9c, 0x6b, 0x97, - 0xce, 0xc6, 0x8a, 0xf7, 0xee, 0xde, 0x29, 0x90, 0x3a, 0xc6, 0x13, 0x31, 0x04, 0x75, 0x22, 0x7d, - 0x5c, 0x93, 0x12, 0x3c, 0xef, 0x4d, 0x59, 0x65, 0x52, 0xca, 0x90, 0xf0, 0xb7, 0x80, 0x24, 0xe5, - 0x5f, 0x72, 0x25, 0xda, 0x07, 0xe9, 0xa2, 0x71, 0xe9, 0x5c, 0xe2, 0x72, 0x68, 0x95, 0x49, 0x40, - 0x64, 0x15, 0xa6, 0xc5, 0x7b, 0x25, 0x21, 0x91, 0x93, 0x3c, 0x99, 0x3e, 0x93, 0xcc, 0x03, 0x9c, - 0x27, 0x25, 0xd5, 0x93, 0x52, 0x34, 0x8e, 0xb8, 0xa8, 0x5f, 0xba, 0x90, 0x5a, 0x27, 0xc6, 0x77, - 0x1f, 0xa6, 0x4d, 0x05, 0x81, 0xc8, 0x05, 0x4a, 0xd5, 0x1b, 0x32, 0x3b, 0xd4, 0x80, 0x85, 0x4d, - 0xc7, 0xed, 0x84, 0x8e, 0xdb, 0x11, 0x37, 0x10, 0xd2, 0x46, 0x4f, 0xca, 0x39, 0x46, 0xfb, 0x3a, - 0xed, 0xb4, 0xd4, 0x22, 0x64, 0xc5, 0x59, 0xc5, 0xcf, 0xa6, 0x2e, 0xe5, 0x5c, 0xf3, 0x02, 0x86, - 0x58, 0x66, 0x76, 0xbc, 0xb4, 0x3b, 0xb5, 0x52, 0xd6, 0x35, 0x30, 0xd9, 0x44, 0x41, 0x3b, 0x46, - 0x51, 0xdb, 0x13, 0xa7, 0x26, 0xb7, 0x88, 0xaf, 0xe6, 0xb4, 0x54, 0xa6, 0xa2, 0x32, 0x20, 0x19, - 0x13, 0x97, 0x49, 0xec, 0x4e, 0x81, 0x7c, 0x8d, 0x5f, 0x75, 0x2a, 0xb9, 0xc7, 0x6e, 0x78, 0x20, - 0x24, 0x98, 0x0b, 0xa9, 0x04, 0xc4, 0x87, 0x92, 0x43, 0xdd, 0x86, 0xb9, 0xb4, 0x9b, 0x67, 0x35, - 0xa1, 0x39, 0xd7, 0xd2, 0x99, 0xbb, 0xc0, 0x66, 0xea, 0x42, 0x2b, 0x7b, 0x91, 0x72, 0x2e, 0x3e, - 0x33, 0x69, 0xfe, 0x08, 0xa6, 0xd9, 0x2e, 0x79, 0x44, 0x69, 0xb7, 0xd2, 0x76, 0x9f, 0xd3, 0x80, - 0xc8, 0x60, 0x70, 0xaa, 0x28, 0x0b, 0xf7, 0x66, 0x81, 0x7c, 0x0f, 0x26, 0x1e, 0x3b, 0x61, 0xf3, - 0x40, 0x04, 0x45, 0x92, 0x31, 0x93, 0xb0, 0xac, 0x24, 0x7f, 0x61, 0xe5, 0x9d, 0x02, 0xf9, 0x18, - 0x46, 0x1f, 0xd0, 0x10, 0xdf, 0x18, 0x5c, 0x55, 0xf7, 0x1c, 0xdc, 0xe1, 0x61, 0xbd, 0xa3, 0xdc, - 0xd8, 0x64, 0x87, 0xe3, 0x6e, 0x18, 0xe4, 0x36, 0x00, 0x67, 0x08, 0x48, 0x21, 0x5e, 0x5d, 0x4a, - 0x74, 0x9b, 0x3c, 0x60, 0x02, 0x40, 0x9b, 0x86, 0xf4, 0xa4, 0x4d, 
0x66, 0xcd, 0xd1, 0x06, 0x4c, - 0xab, 0x70, 0xf6, 0x5b, 0xf8, 0x76, 0xd5, 0x8a, 0x11, 0x0b, 0x4e, 0x41, 0xed, 0x43, 0xf6, 0x55, - 0xf0, 0x5c, 0x6e, 0xf8, 0xc8, 0x11, 0x39, 0xe9, 0x82, 0xfe, 0x52, 0x52, 0x67, 0xa1, 0x72, 0x12, - 0x39, 0x98, 0x86, 0xbb, 0xe6, 0x05, 0xa1, 0x89, 0xab, 0x4a, 0xd2, 0x71, 0x7f, 0x1b, 0x4a, 0x7a, - 0xbb, 0x66, 0x54, 0xbe, 0x88, 0xe7, 0x66, 0x05, 0xfb, 0x2b, 0x5d, 0xcd, 0x81, 0x10, 0x3a, 0xd8, - 0xe0, 0x1f, 0x0e, 0x14, 0x90, 0x9d, 0xac, 0xc0, 0xac, 0x6c, 0x6b, 0xbb, 0x4b, 0x3b, 0xf5, 0xfa, - 0x1a, 0x86, 0x2e, 0x97, 0x99, 0xd1, 0xb5, 0x32, 0x49, 0x9d, 0x24, 0xab, 0xd8, 0xd1, 0x67, 0x3c, - 0x66, 0x24, 0x79, 0x4f, 0x1c, 0xa3, 0xa3, 0x2f, 0x35, 0x5c, 0xdc, 0x23, 0x6e, 0x18, 0x32, 0x04, - 0xf8, 0xbd, 0x25, 0x92, 0xa3, 0xc4, 0x94, 0x32, 0xd4, 0x80, 0x3b, 0x05, 0xf2, 0x05, 0x90, 0xa4, - 0x5a, 0xa1, 0xa6, 0x30, 0x53, 0x85, 0x52, 0x53, 0x98, 0xa3, 0x93, 0x3c, 0x80, 0x79, 0xf5, 0x94, - 0x59, 0x6b, 0x75, 0x89, 0x64, 0xf4, 0x26, 0xab, 0x97, 0xe4, 0x53, 0x98, 0x15, 0x9b, 0x56, 0xaf, - 0x20, 0x45, 0xc5, 0x7f, 0x84, 0x66, 0x91, 0xb9, 0x4f, 0x1f, 0xc2, 0x7c, 0x3d, 0x36, 0x63, 0xdc, - 0x53, 0xe1, 0xbc, 0x49, 0x02, 0x0b, 0xeb, 0x34, 0xe4, 0x53, 0x96, 0x4e, 0xeb, 0x11, 0x10, 0x6e, - 0xd8, 0x91, 0xe4, 0x9e, 0xbb, 0xf4, 0x05, 0xb9, 0x14, 0xeb, 0x3a, 0x2b, 0x44, 0x30, 0x64, 0x60, - 0x99, 0x23, 0xdb, 0xe1, 0x99, 0x08, 0xb1, 0x74, 0xd9, 0xe9, 0x3a, 0x4f, 0xdc, 0xb6, 0x1b, 0xba, - 0x94, 0x2d, 0x80, 0x8e, 0xa0, 0x57, 0xc9, 0x05, 0x38, 0x9f, 0x09, 0x41, 0x7e, 0x17, 0x63, 0x88, - 0xe5, 0xab, 0x46, 0xe4, 0x7b, 0x69, 0x1a, 0x6c, 0x86, 0x72, 0x57, 0x7a, 0xe7, 0x64, 0xc0, 0x4a, - 0x19, 0x9d, 0x7a, 0x40, 0xc3, 0x5a, 0xbb, 0xb7, 0xef, 0x62, 0x8e, 0x2a, 0xa2, 0x0c, 0x3f, 0xaa, - 0x48, 0xec, 0x4b, 0x95, 0x7f, 0x5d, 0x55, 0xd4, 0xe9, 0xcf, 0xc8, 0x3a, 0x14, 0x39, 0xff, 0xd7, - 0x48, 0x5c, 0x4a, 0x90, 0x10, 0x20, 0x8e, 0xef, 0x1c, 0x06, 0x99, 0xab, 0x75, 0x1b, 0x86, 0x98, - 0xd8, 0x48, 0x54, 0xd6, 0x78, 0x4d, 0xc0, 0x9c, 0x35, 0xca, 0x54, 0x5c, 0x55, 0xb6, 0x22, 0x36, - 0x0d, 
0x68, 0x28, 0x1f, 0x2b, 0xf3, 0x0c, 0x65, 0xd7, 0xa2, 0xc3, 0x3e, 0x59, 0x1b, 0x7d, 0xfa, - 0xb1, 0xc0, 0x1a, 0x7b, 0xf7, 0x88, 0xca, 0xda, 0x96, 0x42, 0xf4, 0x86, 0x21, 0x93, 0x9c, 0x8e, - 0xee, 0xbb, 0x78, 0x06, 0xe1, 0x03, 0xed, 0xf9, 0xa8, 0x6f, 0x5a, 0xb2, 0xe2, 0xd2, 0x94, 0x86, - 0xb5, 0xb7, 0x84, 0x2c, 0x8d, 0x1d, 0x92, 0x4c, 0x84, 0xed, 0xf9, 0x3e, 0xed, 0x70, 0xe4, 0x2c, - 0x79, 0x23, 0x0d, 0xfb, 0x13, 0x64, 0x3d, 0x1a, 0x36, 0x77, 0x5a, 0xed, 0x47, 0x82, 0x47, 0xd4, - 0xbf, 0x53, 0x20, 0xef, 0xc3, 0x98, 0xe8, 0x23, 0x43, 0x32, 0x3a, 0x1d, 0xe4, 0xf4, 0x1a, 0x31, - 0x81, 0x4f, 0x12, 0xf6, 0xd9, 0x84, 0xc9, 0x5a, 0x7d, 0xde, 0xe7, 0xf7, 0xd9, 0x61, 0xdb, 0x7a, - 0x15, 0xcc, 0x65, 0x79, 0xea, 0x22, 0xe6, 0xa2, 0x7a, 0x93, 0x1b, 0x4b, 0x0a, 0x9d, 0x4f, 0x84, - 0xc9, 0xcd, 0x18, 0x19, 0x47, 0x05, 0xb8, 0x50, 0x72, 0xb3, 0x51, 0xdc, 0xef, 0xac, 0x5d, 0x87, - 0x62, 0xa5, 0x89, 0x27, 0x81, 0x4a, 0x0f, 0xad, 0x94, 0x96, 0x78, 0x85, 0xa4, 0x35, 0x1f, 0xcf, - 0x36, 0xbd, 0x41, 0x1d, 0x0c, 0x1f, 0xb8, 0xa0, 0x44, 0x8b, 0x58, 0x55, 0x3a, 0x46, 0x8e, 0x92, - 0x32, 0xb7, 0xcc, 0xd4, 0xaa, 0xf6, 0xb7, 0x23, 0xf3, 0x21, 0x32, 0x0c, 0x2d, 0x75, 0xf6, 0xb9, - 0x38, 0xbe, 0x52, 0xe7, 0xa4, 0x6f, 0x95, 0x02, 0xad, 0xc0, 0x8c, 0x08, 0x33, 0xa6, 0xa6, 0x25, - 0x0b, 0x3b, 0xab, 0xf9, 0x1f, 0xc2, 0xf4, 0x2a, 0x63, 0xe8, 0xbd, 0x96, 0xcb, 0x43, 0xa6, 0x12, - 0x33, 0x06, 0x66, 0x26, 0xe2, 0x9a, 0x4c, 0x62, 0xa1, 0xe5, 0x94, 0x56, 0x67, 0x4a, 0x32, 0x6d, - 0x77, 0x69, 0x4e, 0x92, 0xd5, 0xd3, 0x4f, 0x0b, 0x5d, 0x7f, 0x21, 0x23, 0x8b, 0x33, 0xb9, 0x6e, - 0xa8, 0x90, 0x59, 0xa9, 0x98, 0x53, 0x84, 0xc6, 0xcf, 0xb5, 0xa4, 0x72, 0x19, 0x34, 0xf3, 0xd3, - 0x3b, 0x67, 0x8e, 0x5b, 0x05, 0x39, 0x4c, 0x4d, 0xc3, 0x4c, 0xde, 0x32, 0xa9, 0xe7, 0xa4, 0x6a, - 0xce, 0x6c, 0x01, 0x55, 0x74, 0x33, 0x4b, 0x30, 0xb9, 0x9c, 0x9f, 0xcc, 0x58, 0x53, 0xd1, 0x33, - 0xd2, 0x0b, 0x3f, 0xc4, 0x6d, 0x16, 0x65, 0x9e, 0x23, 0xba, 0xc2, 0x1b, 0x4f, 0xbc, 0xa7, 0x84, - 0xb0, 0xf4, 0x54, 0xc1, 0x35, 0x98, 0x89, 
0x25, 0xe1, 0x55, 0x96, 0x99, 0xf4, 0x34, 0xc0, 0xa5, - 0xcb, 0x59, 0xd5, 0xca, 0xda, 0x59, 0x8c, 0x67, 0xf7, 0x54, 0x43, 0xce, 0xc8, 0x1a, 0xab, 0x86, - 0x9c, 0x99, 0x16, 0xf4, 0x21, 0x14, 0xe3, 0x89, 0x05, 0x15, 0xd1, 0x8c, 0x8c, 0x83, 0x99, 0x6b, - 0x72, 0x1f, 0xe6, 0xf4, 0x15, 0x55, 0xe3, 0xce, 0xe2, 0xfe, 0x59, 0x74, 0x76, 0x60, 0x3e, 0x35, - 0x0f, 0xa0, 0x3a, 0x62, 0xf3, 0xb2, 0x04, 0x66, 0x52, 0xa5, 0x70, 0x2e, 0x3d, 0x15, 0x28, 0x79, - 0xc3, 0x54, 0xfc, 0xd3, 0x13, 0x23, 0x96, 0xae, 0xf7, 0x81, 0x12, 0x13, 0xfa, 0x35, 0x9e, 0x80, - 0x89, 0x36, 0xae, 0x6a, 0xa6, 0x80, 0x8c, 0x06, 0xac, 0x3c, 0x10, 0xb5, 0x07, 0xe6, 0xd2, 0xd2, - 0x2c, 0x67, 0x4e, 0xf1, 0xb5, 0x6c, 0x9a, 0xd1, 0xc6, 0xda, 0x93, 0xb1, 0xfc, 0x32, 0x67, 0x26, - 0x37, 0x65, 0x64, 0x8e, 0x2e, 0x59, 0x52, 0xfb, 0xe1, 0xe4, 0x5d, 0xce, 0xa2, 0xd6, 0x52, 0x66, - 0x1b, 0x23, 0x9f, 0x63, 0xdc, 0x6c, 0x93, 0x96, 0x87, 0x52, 0x4d, 0x43, 0x5e, 0xa6, 0x53, 0x7e, - 0x1a, 0x7f, 0xc5, 0xed, 0x38, 0x66, 0x13, 0xba, 0x1d, 0x27, 0x95, 0xfe, 0x95, 0x6c, 0x00, 0x9d, - 0xb8, 0xc3, 0x2f, 0x5e, 0x63, 0x09, 0x29, 0x89, 0xae, 0x2a, 0xa5, 0x27, 0xab, 0x54, 0x7b, 0x23, - 0x27, 0x2d, 0x35, 0x6f, 0xe2, 0xb1, 0xfc, 0x06, 0x33, 0x66, 0x29, 0x27, 0x5b, 0x67, 0xbe, 0x98, - 0xb2, 0x0d, 0x8b, 0xd1, 0x62, 0xc6, 0x06, 0x70, 0xca, 0xa5, 0x94, 0x93, 0x71, 0x3e, 0x33, 0x47, - 0x27, 0x79, 0x33, 0xf1, 0xa5, 0x67, 0x4c, 0x4c, 0x6e, 0x13, 0x9c, 0x9f, 0x6b, 0xb1, 0x01, 0x2f, - 0x44, 0x46, 0x5c, 0x3d, 0x9d, 0x67, 0x82, 0x9f, 0xa7, 0xe4, 0xfa, 0xdc, 0x40, 0xb9, 0x58, 0xcb, - 0xd7, 0x99, 0x39, 0xea, 0x4b, 0x69, 0x74, 0x62, 0xcb, 0x54, 0x85, 0xb3, 0xfc, 0x88, 0x3f, 0x09, - 0xc1, 0xb4, 0x18, 0x88, 0x77, 0x0a, 0x11, 0xeb, 0xd6, 0x06, 0x28, 0x05, 0xbe, 0x78, 0xc5, 0x69, - 0x58, 0xf7, 0x49, 0xba, 0x94, 0x45, 0x67, 0x05, 0x26, 0xb4, 0x8c, 0xa1, 0xe4, 0xbc, 0x31, 0xdf, - 0xc6, 0x61, 0x5c, 0x32, 0x66, 0xc9, 0x3c, 0x87, 0x97, 0xd1, 0x26, 0xad, 0xf2, 0x8e, 0x66, 0xf6, - 0xe2, 0x42, 0x92, 0x86, 0x61, 0x8f, 0x56, 0xb3, 0xc0, 0x7b, 0x73, 0x31, 0x3e, 
0x39, 0x46, 0x87, - 0xb2, 0x87, 0x44, 0xf4, 0xa9, 0xe9, 0xd3, 0xa5, 0x6c, 0x41, 0x78, 0x56, 0xa4, 0x25, 0xc3, 0xc8, - 0xe0, 0x32, 0x84, 0xc7, 0x39, 0x65, 0x5c, 0xd3, 0x4a, 0xd1, 0xd2, 0x91, 0x4e, 0xa6, 0x06, 0xe7, - 0xd2, 0x53, 0xa8, 0x2a, 0x56, 0x9d, 0x9b, 0x61, 0x35, 0x45, 0x08, 0x54, 0xcc, 0x3f, 0x93, 0x62, - 0x6e, 0x4e, 0xd5, 0xcc, 0x9e, 0xfe, 0x44, 0x63, 0xfe, 0x89, 0x44, 0xa9, 0xe4, 0x66, 0x5c, 0x02, - 0xcc, 0xca, 0xa5, 0x9a, 0x73, 0xb8, 0xcc, 0xa5, 0xe5, 0x58, 0xd5, 0x0c, 0xc4, 0x99, 0x09, 0x58, - 0x53, 0x66, 0xc1, 0x96, 0xfb, 0x3f, 0x83, 0x5a, 0x4e, 0xc6, 0xd5, 0xcc, 0x1e, 0x7e, 0xa9, 0x71, - 0xcc, 0x58, 0x66, 0x54, 0xa5, 0xd7, 0xf7, 0x49, 0x9d, 0x9a, 0x49, 0x7b, 0x0b, 0xe6, 0x53, 0xd3, - 0x9a, 0x2a, 0x11, 0x29, 0x2f, 0xe9, 0x69, 0xaa, 0xfd, 0x78, 0x3e, 0x39, 0x44, 0x46, 0xef, 0x5c, - 0xcc, 0xfa, 0xdb, 0xaf, 0x63, 0x5f, 0x4b, 0xae, 0x9e, 0x92, 0x0e, 0x35, 0xc6, 0xd5, 0xb3, 0x13, - 0xa6, 0xe6, 0xe8, 0x53, 0x33, 0x75, 0x77, 0xbf, 0xa3, 0x65, 0x33, 0x55, 0xda, 0x54, 0x32, 0xc1, - 0xaa, 0x62, 0x31, 0x69, 0xc9, 0x4f, 0xb7, 0x99, 0x20, 0xc5, 0xd5, 0x00, 0x3d, 0x2f, 0x25, 0x29, - 0x65, 0xa7, 0xe3, 0x54, 0xec, 0x26, 0x35, 0x91, 0xa5, 0x46, 0x50, 0x4f, 0x0a, 0xa9, 0x08, 0xa6, - 0xe4, 0xa7, 0x54, 0x04, 0x53, 0xb3, 0x48, 0xde, 0x46, 0xf3, 0x8d, 0xed, 0xb5, 0xa9, 0x6e, 0xbe, - 0xd1, 0xb2, 0x0c, 0xc6, 0xac, 0x27, 0xe4, 0x13, 0x18, 0x57, 0x59, 0x18, 0x95, 0xa1, 0x3c, 0x9e, - 0x08, 0xb2, 0xb4, 0x98, 0xac, 0x50, 0x59, 0x9d, 0x21, 0xca, 0xb8, 0xa8, 0xac, 0x19, 0x89, 0x24, - 0x8c, 0xf1, 0x66, 0x7f, 0x20, 0xcd, 0x27, 0x06, 0x5a, 0x22, 0x07, 0x63, 0x1c, 0xed, 0x87, 0x30, - 0x19, 0xe5, 0x5b, 0xdc, 0x5b, 0xd2, 0x10, 0x63, 0x49, 0x18, 0xe3, 0x88, 0xef, 0xcb, 0xbb, 0x11, - 0x6c, 0xcf, 0xac, 0xcc, 0x17, 0x07, 0x3e, 0x91, 0xe6, 0x1a, 0xa3, 0xa7, 0x89, 0xec, 0x8d, 0x39, - 0xcc, 0x77, 0x52, 0x4f, 0x80, 0xa4, 0x96, 0x36, 0x25, 0x85, 0x99, 0x5a, 0xda, 0xb4, 0x14, 0x64, - 0xd1, 0xdd, 0xc1, 0x17, 0xd2, 0x36, 0x11, 0x11, 0xbd, 0x64, 0x74, 0x2b, 0x41, 0xf7, 0x72, 0x56, - 0x75, 0x9c, 0x74, 
0x1d, 0x8a, 0xf1, 0x6c, 0x4d, 0x4a, 0xb1, 0xcb, 0x48, 0xab, 0xa5, 0xb4, 0xc5, - 0xcc, 0x34, 0x4f, 0x35, 0x69, 0x68, 0x37, 0xe9, 0x5e, 0x4d, 0xef, 0x94, 0x4e, 0x3a, 0xdb, 0xf2, - 0x3e, 0x65, 0x24, 0x6e, 0xd2, 0x55, 0xee, 0x44, 0x62, 0x28, 0x5d, 0x44, 0x4b, 0xc9, 0xf5, 0xe4, - 0xca, 0xa7, 0xbd, 0xe9, 0x39, 0x36, 0xdf, 0x32, 0x75, 0xe1, 0x9c, 0x28, 0x9f, 0x7d, 0xef, 0x91, - 0xc9, 0x6f, 0xc1, 0x42, 0x46, 0xd0, 0x41, 0x72, 0x3d, 0x66, 0xb2, 0x4d, 0x0f, 0x4a, 0xa8, 0x36, - 0x48, 0x6a, 0x46, 0xc5, 0x4d, 0x74, 0x40, 0x30, 0xde, 0xc9, 0x24, 0x2e, 0xf5, 0x1e, 0xbb, 0xe1, - 0x01, 0x4f, 0x1c, 0xa8, 0xb1, 0xcd, 0xd4, 0x07, 0x36, 0xa4, 0x8e, 0x4a, 0x8d, 0x51, 0x9a, 0x72, - 0xaf, 0x97, 0x42, 0xb0, 0x94, 0x4e, 0x10, 0xf3, 0x71, 0xd7, 0x60, 0x36, 0xe5, 0x11, 0x93, 0xda, - 0x0b, 0xd9, 0x0f, 0x9c, 0x32, 0xbb, 0x59, 0x93, 0x32, 0x52, 0x3a, 0xc5, 0xec, 0xf7, 0x4c, 0x99, - 0x14, 0x1f, 0x32, 0x8a, 0x89, 0x27, 0x4a, 0x24, 0x03, 0x3c, 0x9f, 0x7b, 0xd8, 0xf2, 0xc8, 0x35, - 0xb1, 0x96, 0xb4, 0xfe, 0x65, 0x3d, 0x86, 0xca, 0xec, 0xdf, 0xaa, 0xfc, 0x9e, 0xd2, 0xfb, 0x77, - 0xd2, 0x43, 0x57, 0x5d, 0xa4, 0xc5, 0x5e, 0xc9, 0x19, 0x03, 0xd5, 0xca, 0x4b, 0x19, 0xe5, 0x64, - 0x0b, 0x3d, 0x8a, 0xe2, 0xa5, 0x9a, 0x76, 0x9b, 0xfe, 0x0c, 0x2f, 0x93, 0x1e, 0xdf, 0xc7, 0xc6, - 0x33, 0xa6, 0xd3, 0xec, 0xe3, 0xd8, 0xfb, 0x27, 0xb1, 0x8f, 0x8d, 0xd2, 0xd3, 0xed, 0xe3, 0x18, - 0x41, 0x73, 0x1f, 0xc7, 0xbb, 0x19, 0x37, 0x19, 0x64, 0xae, 0x6a, 0xbc, 0x9b, 0x6a, 0x1f, 0xa7, - 0x53, 0xcc, 0x7e, 0x6e, 0x96, 0x49, 0x51, 0xed, 0x63, 0x93, 0x62, 0x06, 0xf8, 0x09, 0xf7, 0x71, - 0xbc, 0x11, 0x73, 0x1f, 0x9f, 0xaa, 0x7f, 0x6a, 0x1f, 0xa7, 0xf7, 0xef, 0xd4, 0xfb, 0x38, 0xf6, - 0x3e, 0xd3, 0x18, 0x68, 0xda, 0x3e, 0x8e, 0xc3, 0xf3, 0x7d, 0x1c, 0x2f, 0x8d, 0x59, 0x69, 0x72, - 0xf6, 0x71, 0x1c, 0xf3, 0x33, 0xa4, 0x17, 0x7b, 0x5b, 0x76, 0x92, 0x9d, 0x9c, 0xf9, 0x2c, 0x8d, - 0x3c, 0x46, 0x3b, 0x61, 0xac, 0xfc, 0x64, 0xbb, 0xf9, 0x62, 0x16, 0x51, 0xdc, 0xcf, 0x7b, 0x72, - 0x12, 0xe3, 0xdd, 0x35, 0x8d, 0x60, 0xe9, 0x4f, 0xeb, 
0x72, 0x3a, 0xbc, 0xc7, 0xf6, 0x4d, 0x2b, - 0x87, 0x6e, 0xde, 0xcb, 0xc0, 0x1c, 0xba, 0x4a, 0x95, 0x89, 0xd3, 0xcd, 0x44, 0xc9, 0xdf, 0xdf, - 0x9f, 0xcb, 0x9b, 0x92, 0x38, 0xde, 0x52, 0x4c, 0x39, 0x3a, 0x75, 0x4f, 0x95, 0x92, 0x14, 0xef, - 0xe9, 0x69, 0xf7, 0xf9, 0xa6, 0x94, 0x1e, 0x12, 0x4f, 0x8a, 0x63, 0x83, 0xd6, 0xf7, 0x7a, 0x66, - 0x0d, 0xd9, 0x41, 0xa3, 0x70, 0xb2, 0x5c, 0x33, 0x28, 0x67, 0xbd, 0x5d, 0xee, 0x4b, 0x35, 0xf1, - 0x38, 0x52, 0xa7, 0x9a, 0xf5, 0x72, 0x52, 0x51, 0x4d, 0x62, 0x7f, 0x8a, 0x66, 0x34, 0xf1, 0xfc, - 0xaa, 0xf3, 0xd4, 0xeb, 0x6f, 0xf5, 0x8a, 0x60, 0xd1, 0xd9, 0xec, 0x47, 0xe2, 0x2a, 0x50, 0x16, - 0x66, 0x4e, 0x7e, 0x1a, 0x3e, 0xf9, 0x14, 0x8a, 0x82, 0xbd, 0x45, 0x04, 0xd2, 0x00, 0x33, 0x97, - 0xae, 0x2a, 0x8d, 0x6e, 0x27, 0xe8, 0xc1, 0x49, 0x8c, 0x6d, 0x27, 0x99, 0x89, 0x6c, 0xcb, 0x14, - 0x3b, 0x0e, 0x77, 0xfc, 0x5e, 0x10, 0xd2, 0x56, 0xd2, 0xa2, 0x64, 0x76, 0x46, 0xba, 0x58, 0x98, - 0xe0, 0x7b, 0x4b, 0x64, 0x1d, 0x79, 0x9b, 0x59, 0x9c, 0x67, 0x72, 0x4b, 0x27, 0x83, 0xac, 0x67, - 0x53, 0xbd, 0xf1, 0x31, 0xfb, 0x94, 0xd5, 0x76, 0x66, 0xa7, 0xe4, 0xcd, 0xb8, 0x98, 0xa7, 0x13, - 0x0e, 0x31, 0x6b, 0x9e, 0x3e, 0x42, 0xa7, 0x02, 0x6e, 0x03, 0xec, 0x37, 0x3d, 0xf1, 0xa7, 0x47, - 0xe4, 0xc7, 0x30, 0x2e, 0x91, 0xfb, 0xcf, 0x4a, 0x1c, 0x1b, 0x67, 0x65, 0x05, 0xa6, 0x8c, 0x77, - 0x55, 0x4a, 0xc5, 0x49, 0x7b, 0x6d, 0x95, 0xb3, 0xd8, 0x53, 0xc6, 0xfb, 0x29, 0x45, 0x25, 0xed, - 0x55, 0x55, 0x26, 0x95, 0x8f, 0x61, 0x42, 0x4c, 0x69, 0xee, 0x6c, 0x64, 0x1b, 0xdd, 0xe6, 0x35, - 0xff, 0xe6, 0x5e, 0xcb, 0x0d, 0x97, 0xbd, 0xce, 0x53, 0x77, 0xbf, 0xef, 0xc4, 0x24, 0x51, 0xf6, - 0x96, 0xc8, 0x57, 0x98, 0x6b, 0x4f, 0x66, 0x40, 0xa4, 0xe1, 0x0b, 0xcf, 0x7f, 0xe6, 0x76, 0xf6, - 0xfb, 0x90, 0xbc, 0x62, 0x92, 0x8c, 0xe3, 0xc9, 0xcd, 0xf3, 0x15, 0x94, 0xea, 0xd9, 0xc4, 0xfb, - 0x12, 0xc9, 0x3f, 0x63, 0xea, 0x70, 0x11, 0x7d, 0x71, 0x4e, 0xdb, 0xf7, 0x5c, 0xa2, 0x5f, 0xf0, - 0xd0, 0x14, 0xd2, 0x60, 0xdf, 0xf4, 0xfc, 0x56, 0x7f, 0x8a, 0x65, 0xd3, 0x2d, 0x37, 0x86, 
0x26, - 0x27, 0xe3, 0x0b, 0x38, 0x5f, 0xcf, 0x24, 0xdd, 0x8f, 0x44, 0x3f, 0x71, 0xf2, 0x02, 0x4e, 0xc5, - 0x29, 0xfb, 0x9d, 0x4b, 0x73, 0x1d, 0x19, 0x1b, 0x3b, 0x8c, 0x6a, 0x3e, 0x7d, 0x4a, 0x7d, 0x74, - 0xfe, 0xee, 0xe7, 0xf6, 0x6c, 0x82, 0xcb, 0x91, 0xaf, 0xc3, 0xd9, 0x7a, 0x82, 0x54, 0x16, 0x4a, - 0xbf, 0xdb, 0xa4, 0x59, 0x1c, 0xe9, 0x09, 0xfb, 0xd5, 0xc7, 0xe7, 0x68, 0xe2, 0x01, 0x0d, 0x77, - 0xd7, 0xfb, 0xcc, 0x92, 0x7c, 0x9d, 0x20, 0x01, 0xf7, 0xee, 0x32, 0xcc, 0xba, 0x86, 0x99, 0x84, - 0xc8, 0xfc, 0x78, 0x7f, 0x2c, 0x2f, 0x44, 0xfa, 0x36, 0x9b, 0x45, 0xe1, 0x1e, 0xf2, 0x42, 0xe1, - 0x00, 0xbd, 0x10, 0xc9, 0x01, 0x46, 0x4a, 0xdc, 0xd2, 0x94, 0xee, 0x0b, 0x1d, 0x90, 0x0a, 0xd7, - 0x01, 0xf5, 0xe4, 0xb9, 0x9a, 0xa7, 0x46, 0x6a, 0x56, 0xdd, 0x38, 0x09, 0x6e, 0x0a, 0xdd, 0xf0, - 0x9a, 0xcf, 0x74, 0x53, 0xa8, 0x96, 0x8d, 0xb5, 0x64, 0xe6, 0x4a, 0x15, 0x1c, 0x1f, 0x13, 0xa6, - 0xea, 0x6e, 0x64, 0x7a, 0x3e, 0xd6, 0xd2, 0x42, 0xa2, 0x5c, 0x98, 0x91, 0xee, 0x49, 0x03, 0x23, - 0x36, 0x68, 0x52, 0xce, 0x9c, 0x1a, 0x65, 0x5b, 0x44, 0x24, 0xd3, 0xb6, 0xa8, 0x77, 0x34, 0xdb, - 0xa0, 0x4f, 0x92, 0xa9, 0x63, 0x95, 0xc6, 0x92, 0x99, 0x55, 0x36, 0xc7, 0x1b, 0x6c, 0x36, 0x25, - 0xf3, 0xb5, 0xd2, 0xf1, 0xb2, 0xb3, 0x62, 0x97, 0x4c, 0xd7, 0xa6, 0x3b, 0x05, 0xb2, 0x05, 0xe7, - 0x1e, 0xd0, 0x50, 0xf0, 0x38, 0x9b, 0x06, 0xa1, 0xef, 0x36, 0xc3, 0xdc, 0xdb, 0x41, 0xa9, 0xa0, - 0xa4, 0xe0, 0xec, 0xbd, 0xcb, 0xe8, 0xd5, 0xd3, 0xe9, 0xe5, 0xe2, 0xe5, 0x38, 0xdc, 0x8a, 0x2b, - 0x87, 0xd3, 0x74, 0x31, 0x7b, 0x8b, 0x8f, 0x72, 0x7f, 0x9e, 0x6c, 0xd4, 0x62, 0x94, 0x90, 0x40, - 0xa8, 0x5c, 0xb7, 0x60, 0x84, 0x23, 0x65, 0x1e, 0xa8, 0x93, 0x3a, 0x0e, 0xb9, 0x0b, 0xe3, 0xca, - 0x21, 0x87, 0x18, 0x55, 0x99, 0xfd, 0xba, 0x0b, 0xe3, 0x5c, 0xbf, 0x3a, 0x39, 0xca, 0x47, 0x30, - 0xae, 0x3c, 0x78, 0x4e, 0x7d, 0xd2, 0x7f, 0x0a, 0x53, 0xba, 0x2f, 0xcf, 0xe9, 0x27, 0xf2, 0x63, - 0xbc, 0xc3, 0x95, 0x57, 0x25, 0xd9, 0xf8, 0xf3, 0xb1, 0x10, 0xfe, 0x62, 0x4a, 0x39, 0x83, 0x54, - 0x19, 0xea, 0xb3, 0xba, 0x7f, 
0x36, 0x81, 0x4d, 0x3e, 0x92, 0xef, 0xa2, 0x14, 0x72, 0x12, 0x28, - 0x67, 0xce, 0xa6, 0xf9, 0x34, 0xbf, 0x0a, 0xb2, 0x62, 0xb0, 0x7d, 0xbb, 0x7d, 0x92, 0xbb, 0xe6, - 0xfe, 0x53, 0x97, 0x45, 0x65, 0x1b, 0xa5, 0xb4, 0x44, 0x72, 0x89, 0x6c, 0x42, 0x97, 0xb3, 0xf3, - 0x51, 0xe0, 0x62, 0x3c, 0x44, 0x55, 0x30, 0x51, 0x9b, 0x39, 0xbc, 0x9c, 0xfc, 0x16, 0x91, 0xee, - 0x9b, 0x24, 0x97, 0x83, 0x96, 0xa7, 0x4a, 0xf3, 0x05, 0x7b, 0x3d, 0xe4, 0xd6, 0xa5, 0x4b, 0xe4, - 0xc9, 0x07, 0x9b, 0xdd, 0xb3, 0x0b, 0x29, 0xb7, 0xdb, 0x7d, 0xd7, 0x22, 0x8b, 0xdc, 0x6f, 0xa1, - 0x74, 0x98, 0x9e, 0xf7, 0x3b, 0x93, 0xd8, 0x4d, 0xcd, 0x41, 0x22, 0x3f, 0x63, 0xf8, 0x33, 0x7c, - 0x70, 0x96, 0x9e, 0x7e, 0xe3, 0x46, 0x1f, 0x2a, 0x72, 0x26, 0xde, 0xec, 0x0b, 0xa7, 0xee, 0x4a, - 0x2f, 0xf0, 0x13, 0x36, 0xbd, 0xbd, 0x3e, 0xe9, 0x44, 0x52, 0xae, 0xaf, 0x33, 0x92, 0x6a, 0x4b, - 0x82, 0xa6, 0xbf, 0x69, 0xee, 0x18, 0xb2, 0xa6, 0xff, 0x33, 0x28, 0x47, 0x5e, 0x20, 0xa7, 0x5b, - 0x84, 0x6c, 0x37, 0x47, 0x92, 0x4c, 0x35, 0x4e, 0xf2, 0xa2, 0x5b, 0x97, 0xae, 0x66, 0xcd, 0xb0, - 0xfe, 0xa8, 0x46, 0xb8, 0xc9, 0xc5, 0x12, 0xd1, 0x64, 0xa5, 0xb4, 0xc9, 0x31, 0xc6, 0x8a, 0x17, - 0x78, 0xaf, 0x85, 0x50, 0x72, 0xb5, 0x4f, 0x4f, 0x48, 0x39, 0x69, 0xc4, 0x08, 0x59, 0x39, 0xcb, - 0xdb, 0xff, 0xfe, 0x71, 0x31, 0x63, 0x5d, 0x4f, 0xbf, 0xa0, 0x4e, 0xf4, 0xea, 0x2c, 0x99, 0x0f, - 0x5d, 0xc9, 0x72, 0x99, 0xb9, 0xd9, 0xd5, 0xea, 0xe6, 0x24, 0x53, 0x5f, 0x66, 0x9f, 0x29, 0x6f, - 0xc2, 0x48, 0xa3, 0xbc, 0x6c, 0x6f, 0x44, 0x66, 0x85, 0x94, 0xfc, 0xca, 0x25, 0x90, 0x95, 0xf6, - 0x06, 0xf9, 0x12, 0x59, 0x49, 0x7a, 0x02, 0xf6, 0xcc, 0x41, 0x47, 0x1e, 0xe1, 0xb9, 0x79, 0xdb, - 0xeb, 0x32, 0x62, 0x5f, 0x5a, 0x2c, 0x1c, 0xf5, 0x48, 0x27, 0xad, 0x32, 0x47, 0x73, 0xa9, 0xcb, - 0x18, 0x7d, 0xaf, 0x93, 0x68, 0x03, 0x16, 0x32, 0x22, 0x08, 0xa9, 0x2b, 0xdc, 0xfc, 0x08, 0x43, - 0xa5, 0xfc, 0x86, 0xc9, 0x57, 0x30, 0x9f, 0x1a, 0x62, 0x48, 0x99, 0xa1, 0xf3, 0x02, 0x10, 0xf5, - 0x23, 0xfe, 0x0c, 0x16, 0xb3, 0xf2, 0x29, 0x47, 0x8f, 0x86, 0xf2, 
0x93, 0x5c, 0x2b, 0x7e, 0xdd, - 0x37, 0x31, 0xf3, 0x16, 0xcc, 0xa5, 0xe5, 0x28, 0x56, 0x1f, 0x5e, 0x4e, 0x02, 0xe3, 0xd4, 0x97, - 0x49, 0x35, 0x98, 0x4f, 0xcd, 0x13, 0xac, 0x66, 0x26, 0x2f, 0x8b, 0x70, 0x2a, 0xc5, 0xcf, 0x61, - 0x21, 0x23, 0x29, 0x6e, 0x74, 0x1f, 0x9f, 0x9b, 0x34, 0x37, 0xc7, 0xa1, 0xa9, 0x94, 0x9d, 0x6f, - 0x55, 0xf9, 0xb1, 0xf5, 0x4d, 0xc9, 0x5a, 0x4a, 0x4d, 0x42, 0x4d, 0x76, 0x70, 0x13, 0xa6, 0x25, - 0x60, 0xd5, 0x37, 0x61, 0x4e, 0x82, 0xd6, 0x8c, 0x17, 0x65, 0x0b, 0x19, 0x39, 0x57, 0x73, 0xa8, - 0x9e, 0xa0, 0xb7, 0x5b, 0xf2, 0x6c, 0x31, 0x33, 0x40, 0xc6, 0x5c, 0xb0, 0x53, 0xd3, 0x43, 0xa6, - 0xf6, 0x53, 0x8b, 0xd1, 0xd0, 0x6e, 0xe7, 0x88, 0x58, 0x44, 0x0f, 0xd2, 0xc0, 0x20, 0xd1, 0x92, - 0x3f, 0xa5, 0xe3, 0xe6, 0x71, 0xeb, 0x04, 0x32, 0x0a, 0xb5, 0x1f, 0xc2, 0x64, 0x5d, 0x6f, 0x3c, - 0xa5, 0x91, 0xcc, 0x4d, 0xa1, 0xde, 0x14, 0xf5, 0xef, 0x7b, 0x8e, 0x43, 0xa8, 0x3a, 0x78, 0x4e, - 0x34, 0x8a, 0x4c, 0xff, 0x19, 0x23, 0x4c, 0xbf, 0x3a, 0x05, 0xd2, 0x32, 0x95, 0x28, 0xff, 0x99, - 0xf4, 0xc8, 0xfe, 0x0d, 0x1e, 0xbb, 0x37, 0x9e, 0x88, 0x86, 0x58, 0xfd, 0x33, 0x3e, 0x29, 0x0f, - 0xfb, 0xdc, 0x4c, 0x36, 0xdc, 0xd9, 0x27, 0x4a, 0x4c, 0xa0, 0x3b, 0xfb, 0x24, 0xd2, 0x1d, 0xe8, - 0xce, 0x3e, 0x29, 0xb9, 0x0c, 0xea, 0x50, 0x8c, 0x67, 0x5a, 0x50, 0x66, 0xa5, 0x8c, 0x44, 0x12, - 0xca, 0xad, 0x27, 0x33, 0x45, 0xc3, 0x2a, 0x76, 0x30, 0x8a, 0xa4, 0x9c, 0x63, 0xe1, 0x50, 0x7d, - 0x4b, 0x09, 0xd8, 0xfc, 0x48, 0x8f, 0x1f, 0xc2, 0xe3, 0x2f, 0xe7, 0x18, 0x70, 0xe3, 0x71, 0x43, - 0x62, 0x01, 0x9b, 0xef, 0x43, 0x91, 0x87, 0xa2, 0x8c, 0x42, 0x27, 0x46, 0x4e, 0x85, 0xc9, 0x08, - 0x99, 0x39, 0x3b, 0xa5, 0x18, 0x0f, 0x38, 0xa7, 0x26, 0x2c, 0x23, 0x12, 0x5d, 0xce, 0xfe, 0x87, - 0x28, 0xac, 0x9c, 0xb2, 0x76, 0x25, 0x22, 0xcd, 0x95, 0xce, 0xa7, 0xd4, 0x28, 0x39, 0x75, 0x52, - 0x0f, 0x42, 0xa7, 0x86, 0x94, 0x12, 0x99, 0xae, 0x74, 0x21, 0xb5, 0x4e, 0x10, 0x0a, 0x79, 0x26, - 0xb5, 0xf4, 0x44, 0x6b, 0xd1, 0x5b, 0xb3, 0x1c, 0x18, 0xd9, 0xcc, 0xdb, 0x27, 0x01, 0x15, 0xad, - 0x52, 
0x15, 0x47, 0x3a, 0x25, 0xed, 0xde, 0x9b, 0x29, 0xcf, 0x41, 0x0c, 0x88, 0x68, 0x43, 0xe6, - 0xe7, 0x00, 0x24, 0x8f, 0x65, 0x5c, 0xdf, 0x8c, 0x96, 0xfa, 0x11, 0xc8, 0x5c, 0xc1, 0xc7, 0x32, - 0x92, 0xef, 0xeb, 0x26, 0xfc, 0x04, 0x2e, 0xc6, 0xde, 0x98, 0x98, 0x84, 0xdf, 0x4e, 0x7f, 0x88, - 0x92, 0x3a, 0x3d, 0xd9, 0x8a, 0xc0, 0x95, 0xe4, 0x5b, 0x94, 0xd8, 0xba, 0x9f, 0x96, 0x91, 0x6e, - 0xc2, 0x34, 0xf2, 0x2e, 0x99, 0xc0, 0x31, 0x0a, 0x5f, 0x63, 0x16, 0xc7, 0xe3, 0x28, 0xc5, 0x6b, - 0xd5, 0x13, 0xf7, 0x49, 0xf1, 0x6e, 0x99, 0xa7, 0x83, 0x2c, 0x99, 0x8f, 0x99, 0xb1, 0x30, 0xed, - 0x68, 0x14, 0x59, 0x26, 0xc9, 0xc7, 0x30, 0x13, 0x3d, 0x67, 0xe6, 0x24, 0x52, 0xc0, 0x72, 0xac, - 0x6f, 0x33, 0xd1, 0x9b, 0xe6, 0xd3, 0xa3, 0xaf, 0xc9, 0xf3, 0x2d, 0x42, 0xbf, 0x94, 0x78, 0x91, - 0x63, 0x8c, 0xe1, 0x24, 0xc7, 0x9c, 0x36, 0xb7, 0xa7, 0x5d, 0x9d, 0x26, 0x7e, 0x6e, 0xe9, 0xd1, - 0x15, 0xf5, 0xcf, 0x2d, 0x37, 0x02, 0xa4, 0x92, 0xa9, 0x33, 0xe8, 0x6c, 0xc2, 0x35, 0x8c, 0xe6, - 0x52, 0xe3, 0x31, 0xf8, 0xd2, 0xa1, 0xb2, 0xfb, 0x1e, 0x8f, 0x01, 0xd3, 0x86, 0xab, 0x7d, 0xc3, - 0x4b, 0x92, 0xdb, 0x86, 0xf3, 0x4c, 0xff, 0x40, 0x94, 0x39, 0xea, 0xcc, 0x5c, 0x5a, 0x94, 0x46, - 0x75, 0x78, 0xe7, 0x04, 0x8c, 0x54, 0x87, 0x77, 0x6e, 0x98, 0xc7, 0xcf, 0x31, 0x58, 0xb6, 0x38, - 0xa3, 0x30, 0x42, 0x13, 0xed, 0x38, 0x9d, 0x26, 0xed, 0x73, 0x97, 0x74, 0xd5, 0xbc, 0x69, 0x4d, - 0x20, 0xa2, 0xa2, 0x74, 0x59, 0xa8, 0x77, 0x59, 0xc4, 0xfb, 0x13, 0xc9, 0x71, 0xda, 0xbe, 0xcc, - 0x37, 0xe0, 0xa9, 0x7b, 0x9e, 0x51, 0x5e, 0x5d, 0xf9, 0xe5, 0xbf, 0xbc, 0x5c, 0xf8, 0xe5, 0xaf, - 0x2e, 0x17, 0xfe, 0xe9, 0xaf, 0x2e, 0x17, 0xfe, 0xc5, 0xaf, 0x2e, 0x17, 0xbe, 0x5c, 0x3a, 0x59, - 0x04, 0x64, 0x9e, 0x1f, 0xe3, 0x36, 0x27, 0x37, 0x82, 0xff, 0xdd, 0xfb, 0x8f, 0x01, 0x00, 0x00, - 0xff, 0xff, 0x8e, 0xf9, 0x1e, 0xc7, 0xcf, 0xe2, 0x00, 0x00, + // 15691 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xcc, 0xbd, 0x59, 0x6c, 0x5c, 0x49, + 0x96, 0x18, 0xaa, 0xe4, 0x9a, 
0x3c, 0xc9, 0x25, 0x15, 0x24, 0xc5, 0x14, 0x29, 0x29, 0xa5, 0xab, + 0x92, 0x4a, 0x55, 0x5d, 0xad, 0x85, 0xaa, 0xaa, 0xae, 0xa5, 0xab, 0xaa, 0x33, 0x93, 0x29, 0x91, + 0x12, 0x97, 0xac, 0x9b, 0x24, 0x55, 0xdb, 0x74, 0xf6, 0x55, 0x66, 0x88, 0xbc, 0xa3, 0xcc, 0x7b, + 0xb3, 0xef, 0xbd, 0x29, 0x95, 0x7a, 0xa6, 0xe7, 0xbd, 0x99, 0x7e, 0xcb, 0xfc, 0xbc, 0xf7, 0x66, + 0x80, 0x37, 0x83, 0x79, 0x78, 0x86, 0x17, 0xc0, 0x86, 0x01, 0x1b, 0x06, 0xfc, 0x63, 0xcc, 0xd7, + 0x18, 0xb0, 0x01, 0xc3, 0xed, 0x81, 0x8d, 0x31, 0x60, 0xcf, 0x8f, 0x3f, 0x38, 0x76, 0x03, 0xf3, + 0x43, 0xd8, 0x1f, 0x03, 0xc3, 0x06, 0xdc, 0xc6, 0x00, 0x46, 0xac, 0x37, 0xe2, 0x2e, 0x99, 0x49, + 0x89, 0x55, 0xe3, 0x1f, 0x89, 0x19, 0x71, 0xce, 0x89, 0xb8, 0xb1, 0x9c, 0x38, 0x71, 0xb6, 0x80, + 0x9b, 0x01, 0x6e, 0xe3, 0xae, 0xeb, 0x05, 0xb7, 0xda, 0xf8, 0xc0, 0x6a, 0xbe, 0xb8, 0xd5, 0x6c, + 0xdb, 0xd8, 0x09, 0x6e, 0x75, 0x3d, 0x37, 0x70, 0x6f, 0x59, 0xbd, 0xe0, 0xd0, 0xc7, 0xde, 0x33, + 0xbb, 0x89, 0x6f, 0xd2, 0x12, 0x34, 0x4e, 0xff, 0x5b, 0x5e, 0x38, 0x70, 0x0f, 0x5c, 0x06, 0x43, + 0xfe, 0x62, 0x95, 0xcb, 0x2b, 0x07, 0xae, 0x7b, 0xd0, 0xc6, 0x0c, 0xf9, 0x71, 0xef, 0xc9, 0x2d, + 0xdc, 0xe9, 0x06, 0x2f, 0x78, 0x65, 0x31, 0x5a, 0x19, 0xd8, 0x1d, 0xec, 0x07, 0x56, 0xa7, 0xcb, + 0x01, 0xde, 0x90, 0x5d, 0xb1, 0x82, 0x80, 0xd4, 0x04, 0xb6, 0xeb, 0xdc, 0x7a, 0x76, 0x47, 0xfd, + 0xc9, 0x41, 0x6f, 0xf4, 0xed, 0x75, 0x13, 0x7b, 0x81, 0x3f, 0x14, 0x24, 0x7e, 0x86, 0x9d, 0x80, + 0x43, 0xbe, 0xd5, 0x17, 0xd2, 0x76, 0x08, 0xa8, 0xeb, 0x89, 0xaf, 0x79, 0xbb, 0x2f, 0xb4, 0x87, + 0x7f, 0xdc, 0x23, 0x5d, 0x7e, 0xdc, 0xc6, 0x0d, 0xcf, 0x6d, 0x63, 0x3f, 0xf6, 0x89, 0x1c, 0x2b, + 0x78, 0xd1, 0xc5, 0x3e, 0xeb, 0x86, 0xf8, 0x8f, 0x83, 0x5e, 0x49, 0x06, 0xa5, 0xff, 0x72, 0x90, + 0xef, 0x26, 0x83, 0x3c, 0xc7, 0x8f, 0xc9, 0xbc, 0x39, 0xf2, 0x8f, 0x01, 0xe0, 0x9e, 0xd5, 0xed, + 0x62, 0x2f, 0xfc, 0x83, 0x83, 0x9f, 0x97, 0xe0, 0x9d, 0x27, 0x16, 0x99, 0x86, 0xce, 0x13, 0x2b, + 0xf6, 0x19, 0x3d, 0xdf, 0x3a, 0xc0, 0xbc, 0xfb, 0xcf, 0xee, 0xa8, 
0x3f, 0x19, 0xa8, 0xf1, 0x37, + 0x32, 0x30, 0xfe, 0xc8, 0x0a, 0x9a, 0x87, 0xe8, 0x13, 0x18, 0x7f, 0x68, 0x3b, 0x2d, 0xbf, 0x90, + 0xb9, 0x3c, 0x7a, 0x23, 0xb7, 0x9a, 0xbf, 0xc9, 0x3e, 0x85, 0x56, 0x92, 0x8a, 0xf2, 0xd2, 0xcf, + 0x8f, 0x8a, 0x67, 0x8e, 0x8f, 0x8a, 0x73, 0x4f, 0x09, 0xd8, 0x5b, 0x6e, 0xc7, 0x0e, 0xe8, 0xfa, + 0x31, 0x19, 0x1e, 0xda, 0x83, 0xf9, 0x52, 0xbb, 0xed, 0x3e, 0xaf, 0x59, 0x5e, 0x60, 0x5b, 0xed, + 0x7a, 0xaf, 0xd9, 0xc4, 0xbe, 0x5f, 0x18, 0xb9, 0x9c, 0xb9, 0x91, 0x2d, 0x5f, 0x3d, 0x3e, 0x2a, + 0x16, 0x2d, 0x52, 0xdd, 0xe8, 0xb2, 0xfa, 0x86, 0xcf, 0x00, 0x14, 0x42, 0x49, 0xf8, 0xc6, 0x1f, + 0x4f, 0x40, 0x7e, 0xdd, 0xf5, 0x83, 0x0a, 0x59, 0x35, 0x26, 0x9b, 0x38, 0x74, 0x15, 0x26, 0x48, + 0xd9, 0xc6, 0x5a, 0x21, 0x73, 0x39, 0x73, 0x63, 0xaa, 0x9c, 0x3b, 0x3e, 0x2a, 0x4e, 0x1e, 0xba, + 0x7e, 0xd0, 0xb0, 0x5b, 0x26, 0xaf, 0x42, 0x6f, 0x40, 0x76, 0xdb, 0x6d, 0xe1, 0x6d, 0xab, 0x83, + 0x69, 0x2f, 0xa6, 0xca, 0x33, 0xc7, 0x47, 0xc5, 0x29, 0xc7, 0x6d, 0xe1, 0x86, 0x63, 0x75, 0xb0, + 0x29, 0xab, 0xd1, 0x3e, 0x8c, 0x99, 0x6e, 0x1b, 0x17, 0x46, 0x29, 0x58, 0xf9, 0xf8, 0xa8, 0x38, + 0x46, 0xd6, 0xc5, 0x2f, 0x8f, 0x8a, 0xef, 0x1e, 0xd8, 0xc1, 0x61, 0xef, 0xf1, 0xcd, 0xa6, 0xdb, + 0xb9, 0x75, 0xe0, 0x59, 0xcf, 0x6c, 0xb6, 0xd0, 0xad, 0xf6, 0xad, 0x70, 0x3b, 0x74, 0x6d, 0x3e, + 0xef, 0xf5, 0x17, 0x7e, 0x80, 0x3b, 0x84, 0x92, 0x49, 0xe9, 0xa1, 0x47, 0xb0, 0x50, 0x6a, 0xb5, + 0x6c, 0x86, 0x51, 0xf3, 0x6c, 0xa7, 0x69, 0x77, 0xad, 0xb6, 0x5f, 0x18, 0xbb, 0x3c, 0x7a, 0x63, + 0x8a, 0x0f, 0x8a, 0xac, 0x6f, 0x74, 0x25, 0x80, 0x32, 0x28, 0x89, 0x04, 0xd0, 0x5d, 0xc8, 0xae, + 0x6d, 0xd7, 0x49, 0xdf, 0xfd, 0xc2, 0x38, 0x25, 0xb6, 0x74, 0x7c, 0x54, 0x9c, 0x6f, 0x39, 0x3e, + 0xfd, 0x34, 0x95, 0x80, 0x04, 0x44, 0xef, 0xc2, 0x74, 0xad, 0xf7, 0xb8, 0x6d, 0x37, 0x77, 0x37, + 0xeb, 0x0f, 0xf1, 0x8b, 0xc2, 0xc4, 0xe5, 0xcc, 0x8d, 0xe9, 0x32, 0x3a, 0x3e, 0x2a, 0xce, 0x76, + 0x69, 0x79, 0x23, 0x68, 0xfb, 0x8d, 0xa7, 0xf8, 0x85, 0xa9, 0xc1, 0x85, 0x78, 0xf5, 0xfa, 0x3a, + 0xc1, 
0x9b, 0x8c, 0xe1, 0xf9, 0xfe, 0xa1, 0x8a, 0xc7, 0xe0, 0xd0, 0x2d, 0x00, 0x13, 0x77, 0xdc, + 0x00, 0x97, 0x5a, 0x2d, 0xaf, 0x90, 0xa5, 0x63, 0x3b, 0x77, 0x7c, 0x54, 0xcc, 0x79, 0xb4, 0xb4, + 0x61, 0xb5, 0x5a, 0x9e, 0xa9, 0x80, 0xa0, 0x0a, 0x64, 0x4d, 0x97, 0x0d, 0x70, 0x61, 0xea, 0x72, + 0xe6, 0x46, 0x6e, 0x75, 0x8e, 0x2f, 0x43, 0x51, 0x5c, 0x3e, 0x77, 0x7c, 0x54, 0x44, 0x1e, 0xff, + 0xa5, 0x7e, 0xa5, 0x80, 0x40, 0x45, 0x98, 0xdc, 0x76, 0x2b, 0x56, 0xf3, 0x10, 0x17, 0x80, 0xae, + 0xbd, 0xf1, 0xe3, 0xa3, 0x62, 0xe6, 0xbb, 0xa6, 0x28, 0x45, 0xcf, 0x20, 0x17, 0x4e, 0x94, 0x5f, + 0xc8, 0xd1, 0xe1, 0xdb, 0x3d, 0x3e, 0x2a, 0x9e, 0xf3, 0x69, 0x31, 0x63, 0x09, 0x21, 0xed, 0x57, + 0x58, 0x05, 0x6a, 0x43, 0xe8, 0x2b, 0x58, 0x0c, 0x7f, 0x96, 0x7c, 0x1f, 0x7b, 0x84, 0xc6, 0xc6, + 0x5a, 0x61, 0x86, 0x8e, 0xcc, 0xf5, 0xe3, 0xa3, 0xa2, 0xa1, 0xf4, 0xa0, 0x61, 0x09, 0x90, 0x86, + 0xdd, 0x52, 0xbe, 0x34, 0x99, 0xc8, 0x83, 0xb1, 0xec, 0x74, 0x7e, 0xc6, 0xbc, 0xb8, 0xe7, 0x30, + 0xbe, 0x96, 0x08, 0x64, 0xfc, 0x65, 0x06, 0xd0, 0x4e, 0x17, 0x3b, 0xf5, 0xfa, 0x3a, 0xd9, 0x4f, + 0x62, 0x3b, 0xbd, 0x05, 0x53, 0x6c, 0xe2, 0xc8, 0xec, 0x8e, 0xd0, 0xd9, 0x9d, 0x3d, 0x3e, 0x2a, + 0x02, 0x9f, 0x5d, 0x32, 0xb3, 0x21, 0x00, 0xba, 0x06, 0xa3, 0xbb, 0xbb, 0x9b, 0x74, 0xaf, 0x8c, + 0x96, 0xe7, 0x8f, 0x8f, 0x8a, 0xa3, 0x41, 0xd0, 0xfe, 0xe5, 0x51, 0x31, 0xbb, 0xd6, 0xf3, 0xe8, + 0xb0, 0x98, 0xa4, 0x1e, 0x5d, 0x83, 0xc9, 0x4a, 0xbb, 0xe7, 0x07, 0xd8, 0x2b, 0x8c, 0x85, 0x9b, + 0xb4, 0xc9, 0x8a, 0x4c, 0x51, 0x87, 0xbe, 0x03, 0x63, 0x7b, 0x3e, 0xf6, 0x0a, 0xe3, 0x74, 0xbe, + 0x67, 0xf8, 0x7c, 0x93, 0xa2, 0xfd, 0xd5, 0x72, 0x96, 0xec, 0xc4, 0x9e, 0x8f, 0x3d, 0x93, 0x02, + 0xa1, 0x9b, 0x30, 0xce, 0x26, 0x6d, 0x82, 0x32, 0xa9, 0x19, 0xb9, 0x3a, 0xda, 0x78, 0xff, 0xdd, + 0xf2, 0xd4, 0xf1, 0x51, 0x71, 0x9c, 0x4e, 0x9e, 0xc9, 0xc0, 0x1e, 0x8c, 0x65, 0x33, 0xf9, 0x11, + 0x33, 0x4b, 0x70, 0xc9, 0xb6, 0x30, 0xbe, 0x03, 0x39, 0xe5, 0xf3, 0xd1, 0x05, 0x18, 0x23, 0xff, + 0x53, 0x26, 0x32, 0xcd, 0x1a, 0x23, 0x87, 
0x93, 0x49, 0x4b, 0x8d, 0x3f, 0x99, 0x87, 0x3c, 0xc1, + 0xd4, 0x38, 0xcf, 0x0d, 0x90, 0xd4, 0x38, 0x53, 0x99, 0x3e, 0x3e, 0x2a, 0x66, 0x7b, 0xbc, 0x2c, + 0x6c, 0x0b, 0xd5, 0x61, 0xb2, 0xfa, 0x75, 0xd7, 0xf6, 0xb0, 0x4f, 0x87, 0x2a, 0xb7, 0xba, 0x7c, + 0x93, 0x1d, 0xb1, 0x37, 0xc5, 0x11, 0x7b, 0x73, 0x57, 0x1c, 0xb1, 0xe5, 0x8b, 0x9c, 0xb9, 0x9e, + 0xc5, 0x0c, 0x25, 0x9c, 0xef, 0xdf, 0xf9, 0xb3, 0x62, 0xc6, 0x14, 0x94, 0xd0, 0x5b, 0x30, 0x71, + 0xcf, 0xf5, 0x3a, 0x56, 0xc0, 0xc7, 0x74, 0xe1, 0xf8, 0xa8, 0x98, 0x7f, 0x42, 0x4b, 0x94, 0x25, + 0xc2, 0x61, 0xd0, 0x3d, 0x98, 0x35, 0xdd, 0x5e, 0x80, 0x77, 0x5d, 0x31, 0x13, 0xe3, 0x14, 0xeb, + 0xd2, 0xf1, 0x51, 0x71, 0xd9, 0x23, 0x35, 0x8d, 0xc0, 0x6d, 0xf0, 0x29, 0x51, 0xf0, 0x23, 0x58, + 0xa8, 0x0a, 0xb3, 0x25, 0xca, 0x8d, 0xf9, 0x28, 0xb0, 0xf1, 0x9f, 0x2a, 0x5f, 0x3c, 0x3e, 0x2a, + 0x9e, 0xb7, 0x68, 0x4d, 0x83, 0x9f, 0xa9, 0x2a, 0xe7, 0x89, 0x20, 0xa1, 0x6d, 0x38, 0xfb, 0xb0, + 0xf7, 0x18, 0x7b, 0x0e, 0x0e, 0xb0, 0x2f, 0x7a, 0x34, 0x49, 0x7b, 0x74, 0xf9, 0xf8, 0xa8, 0x78, + 0xe1, 0xa9, 0xac, 0x4c, 0xe8, 0x53, 0x1c, 0x15, 0x61, 0x98, 0xe3, 0x1d, 0x5d, 0xb3, 0x02, 0xeb, + 0xb1, 0xe5, 0x63, 0xca, 0x64, 0x72, 0xab, 0xe7, 0xd8, 0x10, 0xdf, 0x8c, 0xd4, 0x96, 0xaf, 0xf2, + 0x51, 0x5e, 0x91, 0xdf, 0xde, 0xe2, 0x55, 0x4a, 0x43, 0x51, 0x9a, 0x84, 0xd7, 0xca, 0x73, 0x64, + 0x8a, 0xf6, 0x96, 0xf2, 0x5a, 0x79, 0x8e, 0xa8, 0x5c, 0x48, 0x9e, 0x28, 0x9b, 0x30, 0xbe, 0x47, + 0x4e, 0x5b, 0xca, 0x83, 0x66, 0x57, 0xaf, 0xf0, 0x1e, 0x45, 0xd7, 0xd3, 0x4d, 0xf2, 0x83, 0x02, + 0xd2, 0x9d, 0x34, 0x47, 0x4f, 0x68, 0xf5, 0x6c, 0xa5, 0x75, 0xe8, 0x53, 0x00, 0xde, 0xab, 0x52, + 0xb7, 0x5b, 0xc8, 0xd1, 0x8f, 0x3c, 0xab, 0x7f, 0x64, 0xa9, 0xdb, 0x2d, 0x5f, 0xe2, 0xdf, 0x77, + 0x4e, 0x7e, 0x9f, 0xd5, 0xed, 0x2a, 0xd4, 0x14, 0x22, 0xe8, 0x13, 0x98, 0xa6, 0x2c, 0x4a, 0xcc, + 0xe8, 0x34, 0x9d, 0xd1, 0x95, 0xe3, 0xa3, 0xe2, 0x12, 0xe5, 0x3e, 0x09, 0xf3, 0xa9, 0x21, 0xa0, + 0xdf, 0x80, 0x45, 0x4e, 0xee, 0x91, 0xed, 0xb4, 0xdc, 0xe7, 0xfe, 0x1a, 0xf6, 
0x9f, 0x06, 0x6e, + 0x97, 0xb2, 0xb3, 0xdc, 0xea, 0x05, 0xbd, 0x7b, 0x3a, 0x4c, 0xf9, 0x4d, 0xde, 0x53, 0x43, 0xf6, + 0xf4, 0x39, 0x03, 0x68, 0xb4, 0x18, 0x84, 0xca, 0xf0, 0x12, 0x49, 0xa0, 0x0d, 0x98, 0xdb, 0xf3, + 0xb1, 0xf6, 0x0d, 0xb3, 0x94, 0xdf, 0x17, 0xc9, 0x0c, 0xf7, 0x7c, 0x26, 0xda, 0x25, 0x7d, 0x47, + 0x14, 0x0f, 0x99, 0x80, 0xd6, 0x3c, 0xb7, 0x1b, 0x59, 0xe3, 0x73, 0x74, 0x44, 0x8c, 0xe3, 0xa3, + 0xe2, 0xa5, 0x96, 0xe7, 0x76, 0x1b, 0xe9, 0x0b, 0x3d, 0x01, 0x1b, 0xfd, 0x10, 0xce, 0x55, 0x5c, + 0xc7, 0xc1, 0x4d, 0xc2, 0x11, 0xd7, 0x6c, 0xeb, 0xc0, 0x71, 0xfd, 0xc0, 0x6e, 0x6e, 0xac, 0x15, + 0xf2, 0x21, 0xbb, 0x6f, 0x4a, 0x88, 0x46, 0x4b, 0x82, 0xe8, 0xec, 0x3e, 0x85, 0x0a, 0xfa, 0x12, + 0x66, 0x78, 0x5b, 0xd8, 0xa3, 0x4b, 0xf3, 0x6c, 0xff, 0x85, 0x26, 0x81, 0xd9, 0xc1, 0xed, 0x89, + 0x9f, 0x4c, 0x14, 0xd2, 0x69, 0xa1, 0xaf, 0x20, 0xb7, 0x75, 0xaf, 0x64, 0x62, 0xbf, 0xeb, 0x3a, + 0x3e, 0x2e, 0x20, 0x3a, 0xa3, 0x97, 0x38, 0xe9, 0xad, 0x7b, 0xa5, 0x52, 0x2f, 0x38, 0xc4, 0x4e, + 0x60, 0x37, 0xad, 0x00, 0x0b, 0xa8, 0xf2, 0x32, 0x59, 0x79, 0x9d, 0x27, 0x56, 0xc3, 0xe3, 0x25, + 0xca, 0x57, 0xa8, 0xe4, 0xd0, 0x32, 0x64, 0xeb, 0xf5, 0xf5, 0x4d, 0xf7, 0xc0, 0x76, 0x0a, 0xf3, + 0x64, 0x30, 0x4c, 0xf9, 0x1b, 0xed, 0xc2, 0x64, 0xad, 0xe7, 0x75, 0x5d, 0x1f, 0x17, 0x16, 0xe9, + 0x07, 0x5d, 0xed, 0xb7, 0x73, 0x38, 0x68, 0x79, 0x91, 0xb0, 0xce, 0x2e, 0xfb, 0xa1, 0xb4, 0x2a, + 0x48, 0xa1, 0x1f, 0xc0, 0x74, 0xbd, 0xbe, 0x1e, 0x9e, 0x71, 0xe7, 0x28, 0xc3, 0xbf, 0x70, 0x7c, + 0x54, 0x2c, 0x10, 0xd1, 0x25, 0x3c, 0xe7, 0xd4, 0xd5, 0xae, 0x62, 0x10, 0x0a, 0xbb, 0x9b, 0xf5, + 0x90, 0xc2, 0x52, 0x48, 0x81, 0x08, 0x4d, 0xc9, 0x14, 0x54, 0x0c, 0xf4, 0x0f, 0x32, 0x70, 0x59, + 0x25, 0x59, 0x0a, 0xaf, 0x4d, 0xf5, 0xc0, 0x0a, 0x70, 0x07, 0x3b, 0x41, 0xe1, 0x3c, 0x1d, 0xe9, + 0xef, 0xca, 0x6b, 0xdf, 0x4d, 0xf5, 0x72, 0xf5, 0xec, 0xce, 0xcd, 0x24, 0xa4, 0xf2, 0xea, 0xf1, + 0x51, 0xf1, 0xa6, 0xfe, 0x1d, 0x0d, 0x05, 0xaf, 0xe1, 0x0b, 0x48, 0xa5, 0x6f, 0x03, 0xbb, 0x42, + 0xfb, 0xab, 0x7e, 
0x40, 0x62, 0x7f, 0x97, 0x5f, 0xba, 0xbf, 0xfa, 0xa8, 0x0d, 0xee, 0xef, 0xa0, + 0xae, 0xa0, 0x26, 0xac, 0x98, 0xd8, 0xf6, 0xfd, 0x1e, 0x11, 0x7f, 0xc8, 0xf6, 0xde, 0xe8, 0x90, + 0xeb, 0x92, 0xeb, 0x30, 0x79, 0x72, 0x85, 0xf2, 0x86, 0x2b, 0xc7, 0x47, 0xc5, 0x8b, 0x9e, 0x04, + 0x63, 0x2c, 0xc2, 0x56, 0x01, 0xcd, 0x7e, 0x54, 0x8c, 0xcf, 0x60, 0x4a, 0x72, 0x6c, 0x34, 0x09, + 0xa3, 0xa5, 0x76, 0x3b, 0x7f, 0x86, 0xfc, 0x51, 0xaf, 0xaf, 0xe7, 0x33, 0x68, 0x16, 0x20, 0x3c, + 0xa6, 0xf2, 0x23, 0x68, 0x1a, 0xb2, 0xe2, 0x18, 0xc9, 0x8f, 0x52, 0xf8, 0x6e, 0x37, 0x3f, 0x86, + 0x10, 0xcc, 0xea, 0xcc, 0x2c, 0x3f, 0x6e, 0xfc, 0xb3, 0x0c, 0x4c, 0xc9, 0x4d, 0x88, 0xe6, 0x20, + 0xb7, 0xb7, 0x5d, 0xaf, 0x55, 0x2b, 0x1b, 0xf7, 0x36, 0xaa, 0x6b, 0xf9, 0x33, 0xe8, 0x22, 0x9c, + 0xdf, 0xad, 0xaf, 0x37, 0xd6, 0xca, 0x8d, 0xcd, 0x9d, 0x4a, 0x69, 0xb3, 0x51, 0x33, 0x77, 0x3e, + 0xfb, 0xbc, 0xb1, 0xbb, 0xb7, 0xbd, 0x5d, 0xdd, 0xcc, 0x67, 0x50, 0x01, 0x16, 0x48, 0xf5, 0xc3, + 0xbd, 0x72, 0x55, 0x05, 0xc8, 0x8f, 0xa0, 0x2b, 0x70, 0x31, 0xa9, 0xa6, 0xb1, 0x5e, 0x2d, 0xad, + 0x6d, 0x56, 0xeb, 0xf5, 0xfc, 0x28, 0x5a, 0x82, 0x79, 0x02, 0x52, 0xaa, 0xd5, 0x34, 0xdc, 0x31, + 0xd2, 0x0b, 0xde, 0x68, 0xf5, 0xb3, 0x6a, 0x25, 0x3f, 0x2e, 0x88, 0x11, 0xc8, 0xd2, 0xa3, 0x7a, + 0xa3, 0x62, 0x56, 0xd7, 0xaa, 0xdb, 0xbb, 0x1b, 0xa5, 0xcd, 0x9a, 0xb9, 0x53, 0x21, 0xc4, 0x26, + 0x8c, 0x36, 0xe4, 0x94, 0x9d, 0x89, 0x2e, 0x40, 0xa1, 0x52, 0x35, 0x77, 0x1b, 0xb5, 0x3d, 0xb3, + 0xb6, 0x53, 0xaf, 0x36, 0xf4, 0xaf, 0x8a, 0xd6, 0x6e, 0xee, 0xdc, 0xdf, 0xd8, 0x6e, 0x90, 0xa2, + 0x7a, 0x3e, 0x43, 0x5a, 0xd3, 0x6a, 0xeb, 0x1b, 0xdb, 0xf7, 0x37, 0xab, 0x8d, 0xbd, 0x7a, 0x95, + 0x83, 0x8c, 0x30, 0x01, 0xef, 0xc1, 0x58, 0x76, 0x21, 0xbf, 0xa8, 0x88, 0xa8, 0xe6, 0x62, 0xe2, + 0x72, 0x32, 0x7e, 0x6b, 0x24, 0x26, 0x31, 0xa0, 0x55, 0xc8, 0xd5, 0x99, 0x0a, 0x85, 0x72, 0x51, + 0x76, 0x9f, 0xcc, 0x1f, 0x1f, 0x15, 0xa7, 0xb9, 0x66, 0x85, 0x31, 0x48, 0x15, 0x88, 0x08, 0x81, + 0x35, 0xc2, 0x94, 0x9a, 0x6e, 0x5b, 0x15, 0x02, 0xbb, 
0xbc, 0xcc, 0x94, 0xb5, 0x68, 0x55, 0x11, + 0x17, 0xd9, 0xe5, 0x92, 0x5e, 0x60, 0x84, 0xb8, 0xa8, 0x8a, 0x0e, 0x52, 0x70, 0x5c, 0x0d, 0x17, + 0x0d, 0x97, 0xf2, 0x28, 0x4e, 0x82, 0xa8, 0x22, 0xe1, 0xd0, 0x1b, 0x42, 0x30, 0x66, 0x97, 0x41, + 0x2a, 0x4b, 0x44, 0xae, 0x31, 0x5c, 0x26, 0x36, 0x7a, 0x29, 0xe7, 0x36, 0xfa, 0x30, 0xba, 0x2a, + 0xf9, 0x60, 0x50, 0x62, 0x91, 0xe3, 0xd9, 0x8c, 0x80, 0xa2, 0x22, 0x8c, 0x33, 0x86, 0xce, 0xc6, + 0x83, 0x8a, 0xe2, 0x6d, 0x52, 0x60, 0xb2, 0x72, 0xe3, 0x5f, 0x8e, 0xa9, 0x32, 0x0c, 0x11, 0xbd, + 0x95, 0xf1, 0xa6, 0xa2, 0x37, 0x1d, 0x67, 0x5a, 0x4a, 0x6e, 0x8e, 0x6c, 0x32, 0xe9, 0xcd, 0x71, + 0x34, 0xbc, 0x39, 0x72, 0x8e, 0xc1, 0x6e, 0x8e, 0x21, 0x08, 0x99, 0x45, 0x2e, 0x15, 0x52, 0xaa, + 0x63, 0xe1, 0x2c, 0x72, 0x49, 0x92, 0xcf, 0xa2, 0x02, 0x84, 0x3e, 0x00, 0x28, 0x3d, 0xaa, 0xd3, + 0x2b, 0x92, 0xb9, 0xcd, 0x25, 0x63, 0x7a, 0x86, 0x59, 0xcf, 0x7d, 0x7e, 0x03, 0xf3, 0xd4, 0x2b, + 0xa6, 0x02, 0x8d, 0xca, 0x30, 0x53, 0xfa, 0x49, 0xcf, 0xc3, 0x1b, 0x2d, 0x72, 0x0c, 0x06, 0xec, + 0x2e, 0x3d, 0xc5, 0xce, 0x03, 0x8b, 0x54, 0x34, 0x6c, 0x5e, 0xa3, 0x10, 0xd0, 0x51, 0xd0, 0x0e, + 0x9c, 0xbd, 0x5f, 0xa9, 0xf1, 0x75, 0x55, 0x6a, 0x36, 0xdd, 0x9e, 0x13, 0x70, 0x71, 0x98, 0xb2, + 0xa9, 0x83, 0x66, 0xb7, 0x21, 0xd6, 0xa0, 0xc5, 0xaa, 0x55, 0x79, 0x38, 0x86, 0x8b, 0xae, 0xc2, + 0xe8, 0x9e, 0xb9, 0xc1, 0x2f, 0xda, 0x67, 0x8f, 0x8f, 0x8a, 0x33, 0x3d, 0xcf, 0x56, 0x50, 0x48, + 0x2d, 0x7a, 0x1f, 0x60, 0xd7, 0xf2, 0x0e, 0x70, 0x50, 0x73, 0xbd, 0x80, 0xca, 0xb3, 0x33, 0xe5, + 0xf3, 0xc7, 0x47, 0xc5, 0xc5, 0x80, 0x96, 0x36, 0x08, 0x17, 0x57, 0x3f, 0x3a, 0x04, 0x46, 0x2f, + 0xa0, 0x58, 0x7a, 0x54, 0xaf, 0x78, 0x98, 0x7e, 0x81, 0xd5, 0xae, 0x79, 0x2e, 0x11, 0x79, 0xc2, + 0x02, 0x9f, 0x4a, 0xbb, 0x53, 0xe5, 0x5b, 0xc7, 0x47, 0xc5, 0xef, 0x90, 0x51, 0x6c, 0xca, 0xaa, + 0x2e, 0x83, 0x55, 0x4a, 0xd4, 0xa5, 0x39, 0x88, 0xee, 0x83, 0xb1, 0xec, 0x48, 0x7e, 0xd4, 0x9c, + 0xaa, 0x63, 0xdf, 0x67, 0x37, 0xd9, 0x36, 0xcc, 0xde, 0xc7, 0x01, 0xd9, 0x33, 0xe2, 0x66, 
0xd6, + 0x7f, 0x45, 0x7d, 0x1f, 0x72, 0x8f, 0xec, 0xe0, 0xb0, 0x8e, 0x9b, 0x1e, 0x0e, 0x84, 0x56, 0x8a, + 0xce, 0xf6, 0x73, 0x3b, 0x38, 0x6c, 0xf8, 0xac, 0x5c, 0x95, 0x58, 0x14, 0x70, 0xa3, 0x0a, 0x73, + 0xbc, 0x35, 0x79, 0x11, 0x5c, 0xd5, 0x09, 0x66, 0x28, 0x41, 0xba, 0xe2, 0x54, 0x82, 0x3a, 0x99, + 0x7f, 0x34, 0x02, 0x8b, 0x95, 0x43, 0xcb, 0x39, 0xc0, 0x35, 0xcb, 0xf7, 0x9f, 0xbb, 0x5e, 0x4b, + 0xe9, 0x3c, 0xbd, 0x05, 0xc7, 0x3a, 0x4f, 0xaf, 0xbd, 0xab, 0x90, 0xdb, 0x69, 0xb7, 0x04, 0x0e, + 0xbf, 0xa1, 0xd3, 0xb6, 0xdc, 0x76, 0xab, 0xd1, 0x15, 0xb4, 0x54, 0x20, 0x82, 0xb3, 0x8d, 0x9f, + 0x4b, 0x9c, 0xd1, 0x10, 0xc7, 0xc1, 0xcf, 0x15, 0x1c, 0x05, 0x08, 0x55, 0xe1, 0x6c, 0x1d, 0x37, + 0x5d, 0xa7, 0x75, 0xcf, 0x6a, 0x06, 0xae, 0xb7, 0xeb, 0x3e, 0xc5, 0x0e, 0xdf, 0x4b, 0xf4, 0xca, + 0xe3, 0xd3, 0xca, 0xc6, 0x13, 0x5a, 0xdb, 0x08, 0x48, 0xb5, 0x19, 0xc7, 0x40, 0x3b, 0x90, 0x7d, + 0xc4, 0x75, 0x9b, 0xfc, 0x5a, 0x7f, 0xed, 0xa6, 0x54, 0x76, 0x86, 0xb3, 0x2a, 0x15, 0x13, 0x52, + 0x82, 0xa4, 0x5c, 0x54, 0x40, 0x9a, 0x92, 0x88, 0xf1, 0xb3, 0x0c, 0x2c, 0x6e, 0xda, 0x7e, 0x50, + 0xc7, 0x1d, 0xab, 0x7b, 0xe8, 0x7a, 0x58, 0xce, 0xc2, 0x0a, 0x4c, 0x75, 0xad, 0x03, 0xdc, 0xf0, + 0xed, 0x9f, 0xb0, 0x99, 0x1f, 0x37, 0xb3, 0xa4, 0xa0, 0x6e, 0xff, 0x04, 0xa3, 0x8b, 0x00, 0xb4, + 0x92, 0x76, 0x94, 0x31, 0x26, 0x93, 0x82, 0xb3, 0x6e, 0xde, 0x84, 0x89, 0x27, 0x76, 0x9b, 0xdc, + 0x41, 0x47, 0xf9, 0xad, 0x91, 0xeb, 0x6f, 0x44, 0x2b, 0xf7, 0x68, 0xad, 0xc9, 0xa1, 0x8c, 0x00, + 0xce, 0x45, 0x3b, 0xc1, 0x05, 0xda, 0x55, 0x00, 0x5f, 0x96, 0x72, 0x05, 0x2a, 0x8a, 0x52, 0xdb, + 0xbf, 0x6b, 0x2a, 0x50, 0xe8, 0x3a, 0xcc, 0x39, 0xf8, 0xeb, 0xa0, 0x11, 0xeb, 0xe1, 0x0c, 0x29, + 0xae, 0x89, 0x5e, 0x1a, 0x7b, 0x30, 0x53, 0x6b, 0xf7, 0x0e, 0x6c, 0x87, 0xf0, 0xfa, 0x3a, 0xfe, + 0x31, 0x5a, 0x03, 0x08, 0x0b, 0x78, 0x63, 0xf3, 0xbc, 0xb1, 0xb0, 0x62, 0xff, 0x2e, 0x67, 0x98, + 0xb4, 0x84, 0xde, 0x73, 0x4d, 0x05, 0xcf, 0xf8, 0x6f, 0xa3, 0x80, 0xf8, 0x20, 0x52, 0x11, 0xaa, + 0x8e, 0x03, 0x22, 0x77, 0x9c, 
0x83, 0x11, 0xa9, 0x54, 0x9d, 0x38, 0x3e, 0x2a, 0x8e, 0xd8, 0x2d, + 0x73, 0x64, 0x63, 0x0d, 0xbd, 0x0d, 0xe3, 0x14, 0x8c, 0xf6, 0x71, 0x56, 0xb6, 0xa7, 0x52, 0x60, + 0x3c, 0x9f, 0x1e, 0xb6, 0x26, 0x03, 0x46, 0xef, 0xc0, 0xd4, 0x1a, 0x6e, 0xe3, 0x03, 0x2b, 0x70, + 0x05, 0x17, 0x67, 0x6a, 0x4a, 0x51, 0xa8, 0xec, 0xb7, 0x10, 0x12, 0xbd, 0x05, 0x13, 0x26, 0xb6, + 0x7c, 0xd7, 0x51, 0x95, 0x1c, 0x1e, 0x2d, 0x51, 0x95, 0x1c, 0x0c, 0x06, 0xfd, 0x5e, 0x06, 0x72, + 0x25, 0xc7, 0xe1, 0xea, 0x3f, 0x9f, 0xaf, 0xb8, 0xc5, 0x9b, 0x52, 0x5f, 0xbe, 0x69, 0x3d, 0xc6, + 0xed, 0x7d, 0xab, 0xdd, 0xc3, 0x7e, 0xf9, 0x2b, 0x72, 0xef, 0xfc, 0x77, 0x47, 0xc5, 0x0f, 0x4f, + 0xa0, 0xd0, 0x0b, 0x35, 0xef, 0xbb, 0x9e, 0x65, 0x07, 0x3e, 0x61, 0x96, 0x56, 0xd8, 0xa0, 0xca, + 0x33, 0x94, 0x7e, 0x84, 0x47, 0xf2, 0xc4, 0xa0, 0x23, 0x19, 0x75, 0x60, 0xae, 0xe4, 0xfb, 0xbd, + 0x0e, 0xae, 0x07, 0x96, 0x17, 0xec, 0xda, 0x1d, 0x4c, 0xcf, 0x81, 0xfe, 0x2a, 0xa3, 0xd7, 0x7f, + 0x7e, 0x54, 0xcc, 0x90, 0xab, 0xae, 0x45, 0x51, 0x89, 0x98, 0xe3, 0x05, 0x8d, 0xc0, 0x56, 0xa5, + 0x0a, 0xaa, 0x3c, 0x8a, 0xd2, 0x36, 0xae, 0x4a, 0x49, 0x73, 0x63, 0x2d, 0x6d, 0xc6, 0x8d, 0x0a, + 0x5c, 0xb8, 0x8f, 0x03, 0x13, 0xfb, 0x38, 0x10, 0xfc, 0x81, 0x2e, 0xc8, 0x50, 0x05, 0x3f, 0x49, + 0x7f, 0x4b, 0x64, 0x3a, 0xfd, 0x8c, 0x27, 0x88, 0x1a, 0xe3, 0x7f, 0xcb, 0x40, 0xb1, 0xe2, 0x61, + 0x76, 0x4b, 0x4c, 0x21, 0xd4, 0x9f, 0x6f, 0x5f, 0x80, 0xb1, 0xdd, 0x17, 0x5d, 0xa1, 0x6b, 0xa3, + 0xb5, 0x64, 0x52, 0x4c, 0x5a, 0x3a, 0xa4, 0x2a, 0xd2, 0xf8, 0x12, 0x2e, 0x92, 0x9d, 0x9b, 0xde, + 0x87, 0x57, 0x60, 0x23, 0xc6, 0x4f, 0xe1, 0x52, 0x1a, 0x71, 0xce, 0x1e, 0xee, 0x42, 0x8e, 0x08, + 0x7c, 0x8c, 0x40, 0x94, 0x3f, 0x10, 0x06, 0x4f, 0xc1, 0x09, 0x7f, 0xe8, 0x89, 0x1f, 0xc3, 0xf3, + 0x87, 0x27, 0xb0, 0x68, 0x62, 0x07, 0x3f, 0x27, 0xf7, 0x15, 0x4d, 0x53, 0x59, 0x84, 0x71, 0xc6, + 0xc0, 0x63, 0xd3, 0xc3, 0xca, 0x4f, 0xa6, 0xf5, 0x35, 0x66, 0x20, 0x57, 0xb3, 0x9d, 0x03, 0x4e, + 0xdd, 0xf8, 0xf3, 0x31, 0x98, 0x66, 0xbf, 0x25, 0x0f, 0xd4, 0x24, 
0xb0, 0xcc, 0x30, 0x12, 0xd8, + 0x7b, 0x30, 0x43, 0x44, 0x18, 0xec, 0xed, 0x63, 0x8f, 0x9c, 0xeb, 0x7c, 0x96, 0xa9, 0x82, 0xc2, + 0xa7, 0x15, 0x8d, 0x67, 0xac, 0xc6, 0xd4, 0x01, 0xd1, 0x26, 0xcc, 0xb2, 0x82, 0x7b, 0xd8, 0x0a, + 0x7a, 0xa1, 0x8e, 0x75, 0x8e, 0x6b, 0x0b, 0x44, 0x31, 0xdb, 0x76, 0x9c, 0xd6, 0x13, 0x5e, 0x68, + 0x46, 0x70, 0xd1, 0x27, 0x30, 0x57, 0xf3, 0xdc, 0xaf, 0x5f, 0x28, 0x32, 0x27, 0xe3, 0x3c, 0x4c, + 0xaf, 0x40, 0xaa, 0x1a, 0xaa, 0xe4, 0x19, 0x85, 0x46, 0x6f, 0x40, 0x76, 0xc3, 0x2f, 0xbb, 0x9e, + 0xed, 0x1c, 0x50, 0xfe, 0x93, 0x65, 0xa6, 0x26, 0xdb, 0x6f, 0x3c, 0xa6, 0x85, 0xa6, 0xac, 0x8e, + 0x18, 0x45, 0x26, 0x07, 0x1b, 0x45, 0x6e, 0x03, 0x6c, 0xba, 0x56, 0xab, 0xd4, 0x6e, 0x57, 0x4a, + 0x3e, 0x15, 0xee, 0xb8, 0x9c, 0xd1, 0x76, 0xad, 0x56, 0xc3, 0x6a, 0xb7, 0x1b, 0x4d, 0xcb, 0x37, + 0x15, 0x18, 0xf4, 0x05, 0x9c, 0xf7, 0xed, 0x03, 0x87, 0x7e, 0x5c, 0xc3, 0x6a, 0x1f, 0xb8, 0x9e, + 0x1d, 0x1c, 0x76, 0x1a, 0x7e, 0xcf, 0x0e, 0x98, 0x06, 0x73, 0x76, 0xf5, 0x92, 0x38, 0x9d, 0x04, + 0x5c, 0x49, 0x80, 0xd5, 0x09, 0x94, 0xb9, 0xe4, 0x27, 0x57, 0xa0, 0x47, 0x30, 0xb3, 0x69, 0x37, + 0xb1, 0xe3, 0x63, 0xaa, 0x92, 0x7e, 0x41, 0x25, 0xbe, 0xfe, 0x8c, 0x8a, 0x0c, 0xe2, 0x4c, 0x5b, + 0x45, 0xa2, 0x6c, 0x49, 0xa7, 0xf3, 0x60, 0x2c, 0x3b, 0x91, 0x9f, 0x34, 0xe7, 0x78, 0xe1, 0x23, + 0xcb, 0x73, 0x6c, 0xe7, 0xc0, 0x37, 0xfe, 0x12, 0x41, 0x56, 0xce, 0xd3, 0x4d, 0xf5, 0x7a, 0xcd, + 0x45, 0x2e, 0xba, 0x64, 0x43, 0xcd, 0xb1, 0xa9, 0x40, 0xa0, 0xf3, 0xf4, 0xc2, 0xcd, 0x85, 0xbd, + 0x49, 0xc2, 0x1e, 0xac, 0x6e, 0xd7, 0x24, 0x65, 0x84, 0xed, 0xad, 0x95, 0xe9, 0xa2, 0xc9, 0x32, + 0xb6, 0xd7, 0x7a, 0x6c, 0x8e, 0xac, 0x95, 0x09, 0xbf, 0xd9, 0xd9, 0x58, 0xab, 0xd0, 0xf9, 0xcf, + 0x32, 0x7e, 0xe3, 0xda, 0xad, 0xa6, 0x49, 0x4b, 0x49, 0x6d, 0xbd, 0xb4, 0xb5, 0xc9, 0xe7, 0x98, + 0xd6, 0xfa, 0x56, 0xa7, 0x6d, 0xd2, 0x52, 0x72, 0x81, 0x62, 0x4a, 0xc0, 0x8a, 0xeb, 0x04, 0x9e, + 0xdb, 0xf6, 0xe9, 0xad, 0x20, 0xcb, 0xd6, 0x20, 0xd7, 0x1e, 0x36, 0x79, 0x95, 0x19, 0x01, 0x45, + 0x8f, 
0x60, 0xa9, 0xd4, 0x7a, 0x66, 0x39, 0x4d, 0xdc, 0x62, 0x35, 0x8f, 0x5c, 0xef, 0xe9, 0x93, + 0xb6, 0xfb, 0xdc, 0xa7, 0x8b, 0x24, 0xcb, 0x95, 0xed, 0x1c, 0x44, 0x28, 0x23, 0x9f, 0x0b, 0x20, + 0x33, 0x0d, 0x9b, 0xf0, 0x81, 0x4a, 0xdb, 0xed, 0xb5, 0xf8, 0xd2, 0xa1, 0x7c, 0xa0, 0x49, 0x0a, + 0x4c, 0x56, 0x4e, 0x46, 0x69, 0xbd, 0xbe, 0x45, 0x17, 0x06, 0x1f, 0xa5, 0x43, 0xbf, 0x63, 0x92, + 0x32, 0x74, 0x0d, 0x26, 0xc5, 0x5d, 0x90, 0xd9, 0xd2, 0xa8, 0x0d, 0x47, 0xdc, 0x01, 0x45, 0x1d, + 0xd9, 0xc7, 0x26, 0x6e, 0xba, 0xcf, 0xb0, 0xf7, 0xa2, 0xe2, 0xb6, 0xb0, 0x50, 0xc4, 0x72, 0x45, + 0x23, 0xab, 0x68, 0x34, 0x49, 0x8d, 0xa9, 0x03, 0x92, 0x06, 0x98, 0x50, 0xe2, 0x17, 0xe6, 0xc2, + 0x06, 0x98, 0xd0, 0xe2, 0x9b, 0xa2, 0x0e, 0xad, 0xc1, 0xd9, 0x52, 0x2f, 0x70, 0x3b, 0x56, 0x60, + 0x37, 0xf7, 0xba, 0x07, 0x9e, 0x45, 0x1a, 0xc9, 0x53, 0x04, 0x7a, 0x37, 0xb6, 0x44, 0x65, 0xa3, + 0xc7, 0x6b, 0xcd, 0x38, 0x02, 0x7a, 0x17, 0xa6, 0x37, 0x7c, 0xa6, 0x6c, 0xb7, 0x7c, 0xdc, 0xa2, + 0x1a, 0x53, 0xde, 0x4b, 0xdb, 0x6f, 0x50, 0xd5, 0x7b, 0x83, 0xdc, 0xa6, 0x5b, 0xa6, 0x06, 0x87, + 0x0c, 0x98, 0x28, 0xf9, 0xbe, 0xed, 0x07, 0x54, 0x11, 0x9a, 0x2d, 0xc3, 0xf1, 0x51, 0x71, 0xc2, + 0xa2, 0x25, 0x26, 0xaf, 0x41, 0x8f, 0x20, 0xb7, 0x86, 0xc9, 0x65, 0x6c, 0xd7, 0xeb, 0xf9, 0x01, + 0x55, 0x6b, 0xe6, 0x56, 0xcf, 0x73, 0x6e, 0xa4, 0xd4, 0xf0, 0xb5, 0xcc, 0xae, 0x1e, 0x2d, 0x5a, + 0xde, 0x08, 0x48, 0x85, 0x2a, 0x46, 0x28, 0xf0, 0xe4, 0xa6, 0xc9, 0x71, 0xd6, 0xed, 0x16, 0xe1, + 0x2f, 0x0b, 0xb4, 0x0f, 0xf4, 0xa6, 0xc9, 0x19, 0x5a, 0xe3, 0x90, 0xd6, 0xa8, 0x37, 0x4d, 0x0d, + 0x05, 0x35, 0x63, 0xf6, 0x9b, 0x45, 0x4d, 0x47, 0xaf, 0x57, 0x8a, 0x2e, 0x9e, 0xd0, 0xba, 0xf3, + 0x7d, 0xc8, 0x55, 0x7a, 0x7e, 0xe0, 0x76, 0x76, 0x0f, 0x71, 0x07, 0x53, 0x15, 0x2b, 0xbf, 0x4f, + 0x37, 0x69, 0x71, 0x23, 0x20, 0xe5, 0xea, 0x67, 0x2a, 0xe0, 0xe8, 0x53, 0x40, 0xe2, 0x62, 0x7c, + 0x9f, 0xac, 0x0f, 0x87, 0xac, 0x65, 0xaa, 0x65, 0xe5, 0x4a, 0x3b, 0x71, 0x9f, 0x6e, 0x1c, 0xc8, + 0x6a, 0x55, 0x03, 0x1f, 0x47, 0x26, 0x1d, 
0x62, 0x5d, 0xbc, 0xef, 0x59, 0xdd, 0xc3, 0x42, 0x21, + 0xbc, 0xf2, 0xf1, 0x8f, 0x3a, 0x20, 0xe5, 0x9a, 0xf8, 0x16, 0x82, 0xa3, 0x3a, 0x00, 0xfb, 0x49, + 0x0e, 0x77, 0xae, 0x97, 0x2d, 0x68, 0xe3, 0x45, 0x2a, 0xc4, 0x58, 0xd1, 0x1b, 0x34, 0x27, 0xdb, + 0xb6, 0xb5, 0xd9, 0x54, 0xc8, 0xa0, 0xa7, 0x90, 0x67, 0xbf, 0xb6, 0x5c, 0xc7, 0x0e, 0xd8, 0x79, + 0xb1, 0xac, 0x29, 0xd7, 0xa3, 0xd5, 0xa2, 0x01, 0x6a, 0xd4, 0xe0, 0x0d, 0x74, 0x64, 0xad, 0xd2, + 0x4c, 0x8c, 0x30, 0xaa, 0x41, 0xae, 0xe6, 0xb9, 0xad, 0x5e, 0x33, 0xa0, 0x12, 0xd4, 0x0a, 0x65, + 0xfc, 0x88, 0xb7, 0xa3, 0xd4, 0xb0, 0x31, 0xe9, 0xb2, 0x82, 0x06, 0x39, 0x17, 0xd4, 0x31, 0x51, + 0x00, 0x51, 0x19, 0x26, 0x6a, 0x6e, 0xdb, 0x6e, 0xbe, 0x28, 0x5c, 0xa0, 0x9d, 0x5e, 0x10, 0xc4, + 0x68, 0xa1, 0xe8, 0x2a, 0x15, 0xd7, 0xbb, 0xb4, 0x48, 0x15, 0xd7, 0x19, 0x10, 0x2a, 0xc1, 0xcc, + 0xa7, 0x64, 0xc1, 0xd8, 0xae, 0xe3, 0x58, 0xb6, 0x87, 0x0b, 0x17, 0xe9, 0xbc, 0x50, 0xc3, 0xd3, + 0x8f, 0xd5, 0x0a, 0x75, 0x39, 0x6b, 0x18, 0x68, 0x03, 0xe6, 0x36, 0xfc, 0x7a, 0xe0, 0xd9, 0x5d, + 0xbc, 0x65, 0x39, 0xd6, 0x01, 0x6e, 0x15, 0x2e, 0x85, 0x96, 0x1f, 0xdb, 0x6f, 0xf8, 0xb4, 0xae, + 0xd1, 0x61, 0x95, 0xaa, 0xe5, 0x27, 0x82, 0x87, 0x3e, 0x83, 0x85, 0xea, 0xd7, 0x01, 0x59, 0x31, + 0xed, 0x52, 0xaf, 0x65, 0x07, 0xf5, 0xc0, 0xf5, 0xac, 0x03, 0x5c, 0x28, 0x52, 0x7a, 0xaf, 0x1d, + 0x1f, 0x15, 0x2f, 0x63, 0x5e, 0xdf, 0xb0, 0x08, 0x40, 0xc3, 0x67, 0x10, 0xaa, 0x87, 0x46, 0x12, + 0x05, 0x32, 0xfa, 0xf5, 0x5e, 0x97, 0xdc, 0x24, 0xe8, 0xe8, 0x5f, 0xd6, 0x46, 0x5f, 0xa9, 0x61, + 0xa3, 0xef, 0xb3, 0x82, 0xd8, 0xe8, 0x2b, 0x80, 0xc8, 0x04, 0xf4, 0xc0, 0xb5, 0x9d, 0x52, 0x33, + 0xb0, 0x9f, 0x61, 0xae, 0x09, 0xf1, 0x0b, 0x57, 0x68, 0x4f, 0xa9, 0x95, 0xea, 0x57, 0x5d, 0xdb, + 0x69, 0x58, 0xb4, 0xba, 0xe1, 0xf3, 0x7a, 0x75, 0x8f, 0xc4, 0xb1, 0xd1, 0x0f, 0xe1, 0xdc, 0x96, + 0xfb, 0xd8, 0x6e, 0x63, 0xc6, 0x72, 0xd8, 0xb0, 0x50, 0xcd, 0xbe, 0x41, 0xe9, 0x52, 0x2b, 0x55, + 0x87, 0x42, 0x34, 0x38, 0xb7, 0xea, 0x48, 0x18, 0xd5, 0x4a, 0x95, 0x4c, 0x05, 
0x55, 0x61, 0x9a, + 0xee, 0xcb, 0x36, 0xfd, 0xe9, 0x17, 0xae, 0x52, 0xd9, 0xf7, 0x4a, 0x44, 0x4a, 0xbb, 0x59, 0x55, + 0x60, 0xaa, 0x4e, 0xe0, 0xbd, 0x30, 0x35, 0x34, 0xf4, 0x31, 0x2c, 0x47, 0x97, 0x77, 0xc5, 0x75, + 0x9e, 0xd8, 0x07, 0x3d, 0x0f, 0xb7, 0x0a, 0xaf, 0x91, 0xae, 0x9a, 0x7d, 0x20, 0xd0, 0x3e, 0xcc, + 0x2b, 0x7b, 0x7b, 0x0d, 0x77, 0xdc, 0x2d, 0xb7, 0x85, 0x0b, 0xd7, 0xc3, 0x59, 0x56, 0x59, 0x42, + 0xa3, 0x85, 0x3b, 0x6e, 0xa3, 0xe3, 0xb6, 0xb0, 0xe6, 0x9c, 0x14, 0x27, 0x40, 0x96, 0x4f, 0x85, + 0x3a, 0x96, 0x6d, 0xd4, 0x4c, 0x4c, 0x56, 0x5d, 0x93, 0xdd, 0x41, 0x5f, 0x0f, 0x09, 0x33, 0xc7, + 0xb3, 0x86, 0xdd, 0x6d, 0x78, 0x0a, 0x84, 0xba, 0x7c, 0x92, 0x28, 0x2c, 0x3f, 0x82, 0xb3, 0xb1, + 0x41, 0x41, 0x79, 0x18, 0x7d, 0x8a, 0x5f, 0x30, 0xd9, 0xda, 0x24, 0x7f, 0xa2, 0xb7, 0x60, 0xfc, + 0x19, 0xb9, 0xd9, 0x52, 0x19, 0x27, 0x34, 0x7c, 0x2b, 0xa8, 0x1b, 0xce, 0x13, 0xd7, 0x64, 0x40, + 0x1f, 0x8c, 0xbc, 0x97, 0x79, 0x30, 0x96, 0xcd, 0xe5, 0xa7, 0x99, 0xb7, 0xc8, 0x83, 0xb1, 0xec, + 0x4c, 0x7e, 0xf6, 0xc1, 0x58, 0xf6, 0x5a, 0xfe, 0xba, 0xb9, 0x48, 0x85, 0x81, 0x92, 0xe3, 0x3a, + 0x2f, 0x3a, 0xf6, 0x4f, 0xe8, 0xe5, 0x89, 0x88, 0xfd, 0x25, 0x98, 0x8b, 0x10, 0x43, 0x05, 0x98, + 0xc4, 0x0e, 0xb9, 0x6e, 0xb4, 0x98, 0x08, 0x66, 0x8a, 0x9f, 0x68, 0x01, 0xc6, 0xdb, 0x76, 0xc7, + 0x0e, 0x68, 0x6f, 0xc6, 0x4d, 0xf6, 0xc3, 0xf8, 0x83, 0x0c, 0xa0, 0xf8, 0x09, 0x88, 0x6e, 0x45, + 0xc8, 0x30, 0x61, 0x9b, 0x17, 0xa9, 0x46, 0x3c, 0x41, 0xfd, 0x53, 0x98, 0x67, 0x4b, 0x50, 0x9c, + 0xd5, 0x4a, 0x5b, 0xec, 0x8c, 0x48, 0xa8, 0x56, 0x35, 0xa6, 0xbc, 0x9a, 0x9e, 0xec, 0x9b, 0xb4, + 0x6b, 0x3d, 0x58, 0x4c, 0x3c, 0xfb, 0xd0, 0x16, 0x2c, 0x76, 0x5c, 0x27, 0x38, 0x6c, 0xbf, 0x10, + 0x47, 0x1f, 0x6f, 0x8d, 0x5e, 0x0e, 0x19, 0xbb, 0x4f, 0x04, 0x30, 0xe7, 0x79, 0x31, 0xa7, 0x48, + 0xdb, 0xe1, 0xea, 0x4b, 0xf1, 0x25, 0x86, 0x09, 0x67, 0x63, 0x47, 0x08, 0xfa, 0x08, 0xa6, 0x9b, + 0xf4, 0xaa, 0xac, 0xb5, 0xc4, 0x0e, 0x50, 0xa5, 0x5c, 0xe5, 0x0e, 0xac, 0x9c, 0x7d, 0xca, 0xdf, + 0xc9, 0xc0, 0x52, 
0xca, 0xe1, 0x71, 0xf2, 0xa1, 0xfe, 0x1c, 0xce, 0x75, 0xac, 0xaf, 0x1b, 0x1e, + 0xd5, 0x84, 0x34, 0x3c, 0xcb, 0x89, 0x8c, 0x36, 0x5d, 0xd9, 0xc9, 0x10, 0xea, 0x96, 0xe9, 0x58, + 0x5f, 0x9b, 0x14, 0xc0, 0x24, 0xf5, 0xac, 0x9f, 0x3f, 0x80, 0x19, 0xed, 0xb8, 0x38, 0x71, 0xe7, + 0x8c, 0x3b, 0x70, 0x76, 0x0d, 0xb7, 0x71, 0x80, 0x87, 0xd6, 0xfe, 0x1a, 0x35, 0x80, 0x50, 0x6d, + 0x87, 0xca, 0xea, 0xaf, 0x74, 0x75, 0x1d, 0xbb, 0x5a, 0x84, 0x2a, 0x3b, 0x53, 0xc1, 0x32, 0xfe, + 0xd5, 0x08, 0x20, 0xce, 0xef, 0x3d, 0x6c, 0x75, 0x44, 0x37, 0xde, 0x87, 0x69, 0xa6, 0xef, 0x60, + 0xc5, 0xb4, 0x3b, 0xb9, 0xd5, 0x79, 0xbe, 0x2d, 0xd5, 0xaa, 0xf5, 0x33, 0xa6, 0x06, 0x4a, 0x50, + 0x4d, 0xcc, 0x14, 0x35, 0x14, 0x75, 0x44, 0x43, 0x55, 0xab, 0x08, 0xaa, 0xfa, 0x1b, 0x7d, 0x02, + 0xb3, 0x15, 0xb7, 0xd3, 0x25, 0x63, 0xc2, 0x91, 0x47, 0xb9, 0x12, 0x8c, 0xb7, 0xab, 0x55, 0xae, + 0x9f, 0x31, 0x23, 0xe0, 0x68, 0x1b, 0xe6, 0xef, 0xb5, 0x7b, 0xfe, 0x61, 0xc9, 0x69, 0x55, 0xda, + 0xae, 0x2f, 0xa8, 0x8c, 0xf1, 0xbb, 0x1d, 0xe7, 0xd6, 0x71, 0x88, 0xf5, 0x33, 0x66, 0x12, 0x22, + 0xba, 0x06, 0xe3, 0xd5, 0x67, 0xe4, 0x14, 0x11, 0x5e, 0x5d, 0xdc, 0xe9, 0x74, 0xc7, 0xc1, 0x3b, + 0x4f, 0xd6, 0xcf, 0x98, 0xac, 0xb6, 0x3c, 0x05, 0x93, 0x42, 0x9f, 0x70, 0x8b, 0x48, 0xf8, 0x72, + 0x38, 0xeb, 0x81, 0x15, 0xf4, 0x7c, 0xb4, 0x0c, 0xd9, 0xbd, 0x2e, 0xb9, 0xe6, 0x0a, 0x25, 0x93, + 0x29, 0x7f, 0x1b, 0x6f, 0xe9, 0x23, 0x8d, 0x2e, 0x40, 0x68, 0x1d, 0xe0, 0xc0, 0x8a, 0xb9, 0x60, + 0x5d, 0x1f, 0xdc, 0xfe, 0xd0, 0x5a, 0xbb, 0x23, 0x91, 0x76, 0xf3, 0xd1, 0xb1, 0x36, 0x16, 0x13, + 0x07, 0xcf, 0xf8, 0x0c, 0x2e, 0xed, 0x75, 0x7d, 0xec, 0x05, 0xa5, 0x6e, 0xb7, 0x6d, 0x37, 0x99, + 0xb5, 0x9a, 0xea, 0x1d, 0xc4, 0x62, 0x79, 0x17, 0x26, 0x58, 0x01, 0x5f, 0x26, 0x62, 0x0d, 0x96, + 0xba, 0x5d, 0xae, 0xed, 0xb8, 0xcb, 0xee, 0x1a, 0x4c, 0x7f, 0x61, 0x72, 0x68, 0xe3, 0x77, 0x32, + 0x70, 0x89, 0xed, 0x80, 0x54, 0xd2, 0xdf, 0x81, 0x29, 0xea, 0xf3, 0xd9, 0xb5, 0x9a, 0x62, 0x4f, + 0x30, 0xe7, 0x57, 0x51, 0x68, 0x86, 0xf5, 0x8a, 0x37, 
0xed, 0x48, 0xba, 0x37, 0xad, 0xd8, 0x60, + 0xa3, 0x89, 0x1b, 0xec, 0x53, 0x30, 0x78, 0x8f, 0xda, 0xed, 0x58, 0xa7, 0xfc, 0x97, 0xe9, 0x95, + 0xf1, 0x9f, 0x46, 0x60, 0xe9, 0x3e, 0x76, 0xb0, 0x67, 0xd1, 0xef, 0xd4, 0xf4, 0x75, 0xaa, 0x17, + 0x5e, 0xa6, 0xaf, 0x17, 0x5e, 0x51, 0x68, 0x61, 0x47, 0xa8, 0x16, 0x36, 0xe6, 0x22, 0x48, 0x6e, + 0xbf, 0x7b, 0xe6, 0x06, 0xff, 0x2c, 0x7a, 0xfb, 0xed, 0x79, 0x36, 0x33, 0x95, 0x6d, 0x84, 0x1e, + 0x7c, 0x63, 0x03, 0xb5, 0x1c, 0xf3, 0xdc, 0xa3, 0x69, 0x92, 0x7b, 0xf0, 0xe9, 0x7e, 0x7b, 0xdb, + 0x30, 0xc1, 0x94, 0xc7, 0xd4, 0x40, 0x9b, 0x5b, 0x7d, 0x93, 0xef, 0xa9, 0x94, 0x0f, 0xe4, 0x9a, + 0x66, 0x7a, 0xea, 0xb3, 0x25, 0x10, 0xd0, 0x02, 0x93, 0x53, 0x59, 0xfe, 0x14, 0x72, 0x0a, 0xc8, + 0x30, 0x82, 0x81, 0x54, 0x62, 0x13, 0x01, 0xd8, 0x39, 0x60, 0xfa, 0x70, 0x45, 0x30, 0x30, 0x3e, + 0x84, 0x42, 0xbc, 0x37, 0x5c, 0xb9, 0x37, 0x48, 0x97, 0x68, 0xac, 0xc1, 0xc2, 0x7d, 0x1c, 0xd0, + 0x85, 0x4b, 0x37, 0x91, 0xe2, 0x59, 0x1a, 0xd9, 0x67, 0x82, 0xab, 0xd2, 0x42, 0xb2, 0xc0, 0x94, + 0x5d, 0x5a, 0x87, 0xc5, 0x08, 0x15, 0xde, 0xfe, 0x07, 0x30, 0xc9, 0x8b, 0x24, 0x47, 0xe5, 0xee, + 0xe9, 0xf8, 0x31, 0xaf, 0xd8, 0x5f, 0x65, 0xeb, 0x96, 0x53, 0x36, 0x05, 0x82, 0x71, 0xc8, 0xcc, + 0x36, 0x21, 0xd5, 0x53, 0x31, 0x1e, 0x21, 0xa0, 0x7e, 0xa9, 0x6c, 0xdd, 0x98, 0xf4, 0x6f, 0xc3, + 0x83, 0xa5, 0x58, 0x4b, 0xfc, 0x03, 0x6e, 0x41, 0x56, 0x48, 0xe4, 0x11, 0x93, 0x8d, 0xfa, 0x05, + 0xa6, 0x04, 0x1a, 0x5a, 0xfd, 0xfb, 0x2b, 0x54, 0x4d, 0x5f, 0x77, 0xdc, 0xe7, 0x4f, 0xda, 0xd6, + 0x53, 0x1c, 0x6b, 0xf8, 0x23, 0xc8, 0xd6, 0x07, 0x37, 0xcc, 0xb6, 0x8f, 0x68, 0xdc, 0x94, 0x28, + 0x46, 0x1b, 0x96, 0xa9, 0xcd, 0xab, 0xb4, 0xb5, 0xb9, 0xd1, 0xaa, 0x7d, 0xd3, 0x03, 0xf8, 0x0c, + 0x56, 0x12, 0x5b, 0xfb, 0xa6, 0x07, 0xf1, 0x8f, 0xc6, 0x60, 0x89, 0x1d, 0x26, 0xf1, 0x15, 0x3c, + 0x3c, 0xab, 0xf9, 0x56, 0x9c, 0x16, 0x6e, 0x27, 0x38, 0x2d, 0x50, 0x14, 0xd5, 0x69, 0x41, 0x73, + 0x55, 0x78, 0x2f, 0xd9, 0x55, 0x81, 0xaa, 0xbd, 0x74, 0x57, 0x85, 0xa8, 0x83, 0x42, 0x35, 
0xdd, + 0x41, 0x81, 0x9a, 0xf1, 0x12, 0x1c, 0x14, 0x92, 0xdc, 0x12, 0x22, 0xce, 0x84, 0xd9, 0xd3, 0x75, + 0x26, 0xbc, 0x0e, 0x93, 0xa5, 0x6e, 0x57, 0x71, 0xce, 0xa5, 0xd3, 0x63, 0x75, 0xbb, 0x6c, 0xf0, + 0x44, 0xa5, 0xe0, 0xf3, 0x90, 0xc0, 0xe7, 0xdf, 0x07, 0x60, 0x77, 0x30, 0x3a, 0x71, 0x39, 0x0a, + 0x41, 0x25, 0x7c, 0x7e, 0x77, 0x23, 0x13, 0xa7, 0x2a, 0x74, 0x42, 0x60, 0x26, 0xd8, 0x1b, 0xfb, + 0x50, 0x88, 0x2f, 0x9f, 0x53, 0x60, 0x5d, 0x7f, 0x98, 0x81, 0x8b, 0x5c, 0xc8, 0x89, 0x6c, 0xf0, + 0x93, 0xaf, 0xce, 0x77, 0x60, 0x9a, 0xe3, 0xee, 0x86, 0x1b, 0x81, 0x79, 0x89, 0x08, 0x66, 0xcc, + 0x38, 0xba, 0x06, 0x86, 0xde, 0x81, 0x2c, 0xfd, 0x23, 0x34, 0xb3, 0x91, 0x91, 0x99, 0xa2, 0xa0, + 0x8d, 0xa8, 0xb1, 0x4d, 0x82, 0x1a, 0x5f, 0xc1, 0xa5, 0xb4, 0x8e, 0x9f, 0xc2, 0xb8, 0xfc, 0x93, + 0x0c, 0xac, 0x70, 0xf2, 0x1a, 0xab, 0x78, 0xa9, 0x53, 0xe7, 0x04, 0x2e, 0xfd, 0x0f, 0x20, 0x47, + 0x1a, 0x14, 0xfd, 0x8e, 0xb8, 0x0d, 0x84, 0x35, 0x6b, 0x56, 0x60, 0x71, 0x1f, 0x32, 0xab, 0xd3, + 0x16, 0xba, 0x18, 0x53, 0x45, 0x36, 0xbe, 0x80, 0x0b, 0xc9, 0x9f, 0x70, 0x0a, 0xe3, 0xf3, 0x00, + 0x96, 0x13, 0x0e, 0x85, 0x97, 0x3b, 0x93, 0x3f, 0x87, 0x95, 0x44, 0x5a, 0xa7, 0xd0, 0xcd, 0x75, + 0x22, 0x71, 0x04, 0xa7, 0x30, 0x85, 0xc6, 0x23, 0x38, 0x9f, 0x40, 0xe9, 0x14, 0xba, 0x78, 0x1f, + 0x96, 0xa4, 0xa4, 0xfd, 0x4a, 0x3d, 0xdc, 0x82, 0x8b, 0x8c, 0xd0, 0xe9, 0xcc, 0xca, 0x43, 0x58, + 0xe1, 0xe4, 0x4e, 0x61, 0xf4, 0xd6, 0xe1, 0x42, 0x78, 0xa1, 0x4e, 0x90, 0x93, 0x86, 0x66, 0x32, + 0xc6, 0x26, 0x5c, 0x0e, 0x29, 0xa5, 0x08, 0x0d, 0xc3, 0x53, 0x63, 0xe2, 0x60, 0x38, 0x4b, 0xa7, + 0x32, 0xa3, 0x8f, 0xe0, 0x9c, 0x46, 0xf4, 0xd4, 0x44, 0xa5, 0x0d, 0x98, 0x67, 0x84, 0x75, 0xd1, + 0x79, 0x55, 0x15, 0x9d, 0x73, 0xab, 0x67, 0x43, 0x92, 0xdc, 0xea, 0x9f, 0x20, 0x4d, 0x6f, 0x51, + 0x69, 0x5a, 0x80, 0x84, 0x3d, 0x7c, 0x07, 0x26, 0x76, 0x55, 0x1f, 0x82, 0x04, 0x62, 0xec, 0xb2, + 0xc0, 0xd0, 0x38, 0xb0, 0x61, 0xc2, 0x02, 0x11, 0xab, 0x14, 0x7a, 0xaf, 0xee, 0xf5, 0xf0, 0xab, + 0xcc, 0x23, 0x2b, 0xde, 0xc7, 
0x37, 0x60, 0x22, 0xe8, 0xdf, 0x47, 0x93, 0x03, 0x0c, 0x2d, 0x9e, + 0xfd, 0x10, 0x2e, 0xb2, 0x9b, 0x74, 0x68, 0xda, 0xd5, 0x6f, 0xbb, 0x1f, 0x45, 0x2e, 0xd2, 0xe7, + 0x79, 0x9b, 0x51, 0xf8, 0x94, 0xfb, 0xf4, 0x63, 0xb1, 0x37, 0xd3, 0xe8, 0x0f, 0x15, 0x6e, 0x2a, + 0x2e, 0xc8, 0x23, 0x89, 0x17, 0xe4, 0xab, 0x70, 0x45, 0x5e, 0x90, 0xa3, 0xcd, 0x88, 0x09, 0x31, + 0xbe, 0x80, 0x15, 0xf6, 0xa1, 0xc2, 0xaf, 0x57, 0xef, 0xc6, 0x87, 0x91, 0xcf, 0x5c, 0xe2, 0x9f, + 0xa9, 0x43, 0xa7, 0x7c, 0xe4, 0xff, 0x95, 0x11, 0x2c, 0x23, 0x99, 0xf8, 0xb7, 0xad, 0x31, 0xd8, + 0x86, 0xa2, 0x1c, 0x10, 0xbd, 0x47, 0x2f, 0xa7, 0x2e, 0xd8, 0x82, 0x45, 0x95, 0x8c, 0xdd, 0xc4, + 0xfb, 0x77, 0xa8, 0xcd, 0xed, 0x6d, 0xb2, 0xad, 0x69, 0x81, 0x58, 0x92, 0x85, 0x84, 0x71, 0xa3, + 0xf0, 0xa6, 0x84, 0x34, 0x1a, 0x70, 0x21, 0x3e, 0x15, 0x76, 0x53, 0xc4, 0x0c, 0xa1, 0x4f, 0x08, + 0x0b, 0xa2, 0x25, 0x7c, 0x32, 0x52, 0x89, 0x0a, 0x3e, 0xc4, 0xd0, 0x05, 0x96, 0x61, 0x08, 0x56, + 0x19, 0xf9, 0x7e, 0xd2, 0xba, 0x58, 0x0f, 0x3f, 0x05, 0x24, 0xaa, 0x2a, 0x75, 0x53, 0x34, 0x7d, + 0x1e, 0x46, 0x2b, 0x75, 0x93, 0x07, 0x2d, 0x52, 0x49, 0xb6, 0xe9, 0x7b, 0x26, 0x29, 0x8b, 0xde, + 0x28, 0x46, 0x86, 0xb8, 0x51, 0x3c, 0x18, 0xcb, 0x8e, 0xe6, 0xc7, 0x4c, 0x54, 0xb7, 0x0f, 0x9c, + 0x47, 0x76, 0x70, 0x28, 0x1b, 0x2c, 0x19, 0x5f, 0xc2, 0xbc, 0xd6, 0x3c, 0xdf, 0xe1, 0x7d, 0xa3, + 0x26, 0x89, 0x3c, 0x5e, 0x29, 0x51, 0x47, 0x24, 0xaa, 0x72, 0x99, 0x66, 0xfc, 0xb2, 0x69, 0x35, + 0x68, 0xd8, 0xbf, 0x29, 0x2a, 0x8d, 0x7f, 0x3b, 0xa6, 0x50, 0x57, 0x62, 0x51, 0xfb, 0x7c, 0xdd, + 0x1d, 0x00, 0xb6, 0x42, 0x94, 0x8f, 0x23, 0x02, 0x6c, 0x8e, 0xfb, 0xf7, 0xb0, 0x23, 0xc5, 0x54, + 0x80, 0x86, 0x8d, 0x55, 0xe5, 0x41, 0x00, 0x0c, 0x49, 0x84, 0x67, 0xcb, 0x20, 0x00, 0x4e, 0xda, + 0x37, 0x55, 0x20, 0xf4, 0xc3, 0x68, 0x00, 0xd6, 0x38, 0x35, 0xf1, 0xbd, 0x26, 0x6c, 0xfe, 0xf1, + 0x6f, 0x3b, 0x59, 0x0c, 0xd6, 0x73, 0x58, 0x24, 0xb8, 0xf6, 0x13, 0x7a, 0x31, 0xaa, 0x7e, 0x1d, + 0x60, 0x87, 0x9d, 0x4d, 0x13, 0xb4, 0x9d, 0x6b, 0x7d, 0xda, 0x09, 
0x81, 0xb9, 0xfd, 0x20, 0xa4, + 0xd3, 0xc0, 0xb2, 0xce, 0x4c, 0xa6, 0x8f, 0xde, 0x86, 0x5c, 0xc5, 0xdc, 0xac, 0x3a, 0xad, 0xae, + 0x6b, 0xcb, 0x0b, 0x1f, 0xa2, 0x8b, 0xc8, 0x6b, 0x37, 0xb0, 0x28, 0xcf, 0x98, 0x2a, 0x18, 0x11, + 0x39, 0x2a, 0xe6, 0xe6, 0x9a, 0xdb, 0xb1, 0x6c, 0x87, 0xbb, 0xa0, 0x53, 0x91, 0x83, 0xe0, 0xb4, + 0x68, 0xa9, 0x19, 0x02, 0x18, 0xd7, 0xfb, 0x06, 0xbb, 0x64, 0x61, 0x6c, 0xb7, 0xb2, 0xbb, 0x99, + 0xcf, 0x18, 0xb7, 0x00, 0x94, 0x9e, 0x01, 0x4c, 0x6c, 0xef, 0x98, 0x5b, 0xa5, 0xcd, 0xfc, 0x19, + 0xb4, 0x08, 0x67, 0x1f, 0x6d, 0x6c, 0xaf, 0xed, 0x3c, 0xaa, 0x37, 0xea, 0x5b, 0x25, 0x73, 0xb7, + 0x52, 0x32, 0xd7, 0xf2, 0x19, 0xe3, 0x2b, 0x58, 0xd0, 0x47, 0xe4, 0x54, 0x17, 0x6d, 0x00, 0xf3, + 0x52, 0x7e, 0x7b, 0xf0, 0x68, 0x57, 0xf1, 0x05, 0xe7, 0x97, 0xdd, 0xa8, 0xef, 0x1b, 0xbf, 0x16, + 0xf3, 0x6d, 0xa7, 0x00, 0xa1, 0x37, 0x98, 0x18, 0x14, 0xcd, 0x4e, 0x40, 0x1d, 0x05, 0x43, 0x39, + 0x88, 0xb2, 0xca, 0xef, 0xc1, 0x82, 0xde, 0xea, 0xb0, 0x5a, 0xb9, 0xd7, 0xa8, 0x93, 0xbc, 0x12, + 0xea, 0x88, 0x90, 0x6a, 0x26, 0xe1, 0x9c, 0xf8, 0x7b, 0x90, 0xe7, 0x50, 0xe1, 0x29, 0x7e, 0x55, + 0xa8, 0x4d, 0x33, 0x09, 0x81, 0xd6, 0x22, 0x92, 0xc4, 0x85, 0x3c, 0xf5, 0x7c, 0x64, 0x98, 0xac, + 0x81, 0x05, 0x18, 0xdf, 0x0c, 0xcd, 0x57, 0x26, 0xfb, 0x41, 0x23, 0xfe, 0x02, 0xcb, 0x0b, 0x84, + 0xa7, 0xe1, 0x94, 0x29, 0x7f, 0x13, 0x81, 0xe1, 0x9e, 0xea, 0x86, 0x7d, 0x56, 0x69, 0x4b, 0x78, + 0x60, 0xb3, 0xff, 0x0d, 0x13, 0xce, 0x2a, 0x0d, 0x9e, 0xa0, 0xab, 0xa8, 0x00, 0x93, 0xdb, 0xf8, + 0x6b, 0xa5, 0x7d, 0xf1, 0xd3, 0x78, 0x17, 0xce, 0x72, 0x0f, 0x55, 0x65, 0x98, 0xae, 0xf0, 0x7c, + 0x10, 0x19, 0x2d, 0x28, 0x9d, 0x93, 0xa4, 0x55, 0x04, 0x6f, 0xaf, 0xdb, 0x7a, 0x49, 0x3c, 0x72, + 0xb0, 0x9c, 0x10, 0xef, 0x75, 0x61, 0xf5, 0x1a, 0x34, 0x9d, 0xff, 0xc7, 0x08, 0x14, 0x22, 0x5a, + 0x95, 0xca, 0xa1, 0xd5, 0x6e, 0x63, 0xe7, 0x00, 0xa3, 0x1b, 0x30, 0xb6, 0xbb, 0xb3, 0x5b, 0xe3, + 0x5a, 0x61, 0xe1, 0xbf, 0x41, 0x8a, 0x24, 0x8c, 0x49, 0x21, 0xd0, 0x43, 0x38, 0x2b, 0xfc, 0xef, + 0x65, 
0x15, 0x9f, 0xa1, 0x8b, 0xfd, 0xbd, 0xf9, 0xe3, 0x78, 0x84, 0xa5, 0x50, 0x9d, 0xcd, 0x8f, + 0x7b, 0xb6, 0x87, 0x5b, 0x54, 0xd3, 0x15, 0x3a, 0x43, 0x28, 0x35, 0xa6, 0x0a, 0x86, 0xbe, 0x07, + 0xd3, 0xf5, 0xfa, 0x4e, 0xd8, 0xfa, 0xb8, 0x66, 0x11, 0x53, 0xab, 0x4c, 0x0d, 0x90, 0x45, 0x85, + 0x19, 0x7f, 0x94, 0x81, 0xa5, 0x14, 0xf5, 0x12, 0x7a, 0x43, 0x1b, 0x87, 0x79, 0x65, 0x1c, 0x04, + 0xc8, 0xfa, 0x19, 0x3e, 0x10, 0x15, 0x25, 0x9a, 0x61, 0xf4, 0x04, 0xd1, 0x0c, 0xeb, 0x67, 0xc2, + 0x08, 0x06, 0x74, 0x1d, 0x46, 0xeb, 0xf5, 0x1d, 0x6e, 0x46, 0x40, 0xe1, 0x17, 0x28, 0xc0, 0x04, + 0xa0, 0x0c, 0x90, 0x15, 0x45, 0xc6, 0x1c, 0xcc, 0x68, 0x13, 0x63, 0x18, 0x30, 0xad, 0xf6, 0x90, + 0xcc, 0x7e, 0xc5, 0x6d, 0xc9, 0xd9, 0x27, 0x7f, 0x1b, 0x3f, 0xd5, 0xc7, 0x8c, 0x88, 0xf1, 0xc2, + 0x3e, 0x6d, 0xb7, 0x84, 0xa5, 0x8b, 0x97, 0x6c, 0xb4, 0xd0, 0x15, 0x98, 0xf6, 0x70, 0xcb, 0xf6, + 0x70, 0x33, 0x68, 0xf4, 0x3c, 0x1e, 0xcd, 0x66, 0xe6, 0x44, 0xd9, 0x9e, 0xd7, 0x46, 0xdf, 0x81, + 0x09, 0x66, 0x38, 0xe7, 0x5f, 0x2f, 0x2e, 0x45, 0xf5, 0xfa, 0xce, 0xd6, 0xbd, 0x12, 0x33, 0xec, + 0x9b, 0x1c, 0xc4, 0x28, 0x43, 0x4e, 0xf9, 0xaa, 0x41, 0xad, 0x2f, 0xc0, 0xb8, 0x2a, 0xf6, 0xb3, + 0x1f, 0xc6, 0xef, 0x66, 0x60, 0x81, 0x2e, 0x83, 0x03, 0x9b, 0x1c, 0x0f, 0xe1, 0xb7, 0xac, 0x6a, + 0x93, 0x76, 0x41, 0x9b, 0xb4, 0x08, 0xac, 0x9c, 0xbd, 0x0f, 0x62, 0xb3, 0x77, 0x21, 0x69, 0xf6, + 0x28, 0x0b, 0xb0, 0x5d, 0x47, 0x9d, 0x34, 0xd5, 0x3c, 0xf9, 0x07, 0x19, 0x98, 0x57, 0xfa, 0x24, + 0x3f, 0xf0, 0x8e, 0xd6, 0xa5, 0x95, 0x84, 0x2e, 0xc5, 0xd6, 0x53, 0x39, 0xd6, 0xa3, 0xd7, 0xfa, + 0xf5, 0x28, 0x69, 0x39, 0x69, 0xcb, 0xe4, 0xcf, 0x33, 0xb0, 0x98, 0x38, 0x06, 0xe8, 0x1c, 0xb9, + 0x2f, 0x34, 0x3d, 0x1c, 0xf0, 0x91, 0xe7, 0xbf, 0x48, 0xf9, 0x86, 0xef, 0xf7, 0xb0, 0xc7, 0xc7, + 0x9d, 0xff, 0x42, 0xaf, 0xc1, 0x4c, 0x0d, 0x7b, 0xb6, 0xdb, 0x62, 0x21, 0x3d, 0xcc, 0xa7, 0x7a, + 0xc6, 0xd4, 0x0b, 0xd1, 0x05, 0x98, 0x92, 0x3e, 0xc1, 0x4c, 0x67, 0x6d, 0x86, 0x05, 0x84, 0xf6, + 0x9a, 0x7d, 0xc0, 0x0c, 0x5d, 0x04, 0x99, 
0xff, 0x22, 0x0c, 0x58, 0x68, 0x90, 0x27, 0x18, 0x03, + 0x16, 0xea, 0xe1, 0x73, 0x30, 0xf1, 0xa9, 0x49, 0xd7, 0x31, 0xcd, 0x2b, 0x63, 0xf2, 0x5f, 0x68, + 0x96, 0x06, 0x26, 0x50, 0x49, 0x82, 0x06, 0x24, 0x7c, 0x00, 0x0b, 0x49, 0xe3, 0x9a, 0xb4, 0x0b, + 0x38, 0xee, 0x88, 0xc4, 0xfd, 0x02, 0xe6, 0x4b, 0xad, 0x56, 0xb8, 0x5c, 0xd9, 0xac, 0xca, 0xe8, + 0xd1, 0x91, 0xfc, 0x28, 0x17, 0x83, 0xc7, 0x36, 0x1c, 0x3b, 0x30, 0xe7, 0xab, 0x5f, 0xdb, 0x7e, + 0x60, 0x3b, 0x07, 0x8a, 0xa2, 0xd9, 0x3c, 0xb7, 0x8d, 0x9f, 0x27, 0x2c, 0x01, 0x22, 0x71, 0xe8, + 0xb4, 0x59, 0x79, 0x02, 0xf1, 0x05, 0x85, 0x6c, 0xc8, 0xba, 0x96, 0x74, 0xba, 0x61, 0xc5, 0x68, + 0xa9, 0xf9, 0xd4, 0xf8, 0x1e, 0x9c, 0x63, 0x6c, 0xbf, 0x5f, 0xe7, 0x79, 0xb7, 0x55, 0xbd, 0xb8, + 0xf1, 0x9e, 0xd0, 0x5c, 0xf5, 0xed, 0x99, 0x39, 0xad, 0xf5, 0x85, 0x36, 0xf9, 0x1f, 0x33, 0xb0, + 0x1c, 0x41, 0xad, 0xbf, 0x70, 0x9a, 0xe2, 0xcc, 0xb9, 0x1e, 0x0d, 0xfc, 0xa0, 0xb2, 0x12, 0x53, + 0x08, 0xdb, 0x2d, 0x19, 0xfb, 0x81, 0x6e, 0x01, 0x30, 0x64, 0x45, 0xc4, 0xa1, 0xe6, 0x10, 0xee, + 0x47, 0x46, 0x85, 0x1c, 0x05, 0x04, 0xf5, 0x20, 0x69, 0xdc, 0xf9, 0x1e, 0x19, 0x64, 0x2f, 0xa0, + 0xb9, 0x94, 0x30, 0x47, 0x6f, 0xa4, 0x18, 0x0e, 0x92, 0xe8, 0x1b, 0xff, 0xf7, 0x28, 0x2c, 0xa9, + 0x13, 0xf8, 0x32, 0xdf, 0x5a, 0x83, 0x5c, 0xc5, 0x75, 0x02, 0xfc, 0x75, 0xa0, 0xe4, 0xb2, 0x41, + 0xd2, 0xfb, 0x42, 0xd6, 0x70, 0x71, 0x9c, 0x15, 0x34, 0x88, 0xac, 0xa7, 0xf9, 0xc3, 0x86, 0x80, + 0xa8, 0x02, 0x33, 0xdb, 0xf8, 0x79, 0x6c, 0x00, 0xa9, 0x4f, 0xae, 0x83, 0x9f, 0x37, 0x94, 0x41, + 0x54, 0x1d, 0x25, 0x35, 0x1c, 0xf4, 0x18, 0x66, 0xc5, 0xe2, 0xd2, 0x06, 0x73, 0x59, 0x3d, 0x79, + 0xf5, 0xe5, 0xcc, 0x72, 0xc3, 0x90, 0x16, 0x52, 0xc6, 0x30, 0x42, 0x91, 0x7c, 0x3a, 0x6b, 0x91, + 0xa5, 0x3b, 0xd1, 0x8f, 0x76, 0xa5, 0x46, 0xf3, 0x78, 0x8e, 0xa6, 0x39, 0x51, 0x49, 0x18, 0x35, + 0x28, 0xc4, 0xe7, 0x83, 0xb7, 0xf6, 0x36, 0x4c, 0xb0, 0x52, 0x2e, 0x2a, 0x89, 0x34, 0x65, 0x12, + 0x9a, 0xe9, 0x3e, 0x5a, 0xfc, 0x54, 0x62, 0x65, 0xc6, 0x3a, 0xd5, 0xa7, 0x49, 
0x18, 0x29, 0xac, + 0xde, 0x8e, 0x4e, 0x2f, 0x75, 0x26, 0x17, 0xd3, 0xab, 0xfa, 0x1e, 0x89, 0x80, 0xa6, 0x0a, 0x55, + 0x49, 0xaa, 0x94, 0x78, 0xc7, 0xde, 0x84, 0x49, 0x5e, 0x14, 0x49, 0xa0, 0x16, 0x6e, 0x3f, 0x01, + 0x60, 0x7c, 0x00, 0xe7, 0xa9, 0x7e, 0xd4, 0x76, 0x0e, 0xda, 0x78, 0xcf, 0xd7, 0xc2, 0x76, 0x06, + 0x6d, 0xeb, 0xef, 0xc3, 0x72, 0x12, 0xee, 0xc0, 0x9d, 0xcd, 0x52, 0x1a, 0xfd, 0xe9, 0x08, 0x2c, + 0x6c, 0xf8, 0xaa, 0xc0, 0xc5, 0x47, 0xe2, 0x66, 0x52, 0x6a, 0x1e, 0x3a, 0x26, 0xeb, 0x67, 0x92, + 0x52, 0xef, 0xbc, 0xad, 0xc4, 0xa8, 0x8f, 0xf4, 0xcb, 0xb9, 0x43, 0x8e, 0x2d, 0x19, 0xa5, 0x7e, + 0x1d, 0xc6, 0xb6, 0x09, 0xab, 0x1e, 0xe5, 0x73, 0xc7, 0x30, 0x48, 0x11, 0x8d, 0x11, 0x27, 0x47, + 0x24, 0xf9, 0x81, 0xee, 0xc5, 0x22, 0xd1, 0xc7, 0x06, 0xe7, 0x94, 0x59, 0x3f, 0x13, 0x0b, 0x4a, + 0x7f, 0x17, 0x72, 0xa5, 0x56, 0x87, 0x39, 0xbd, 0xba, 0x4e, 0x64, 0x5b, 0x2a, 0x35, 0xeb, 0x67, + 0x4c, 0x15, 0x10, 0x5d, 0x63, 0x71, 0x23, 0x13, 0x29, 0x79, 0x76, 0x88, 0xb0, 0x56, 0xea, 0x76, + 0xcb, 0x59, 0x98, 0x60, 0xd1, 0xd1, 0xc6, 0x17, 0xb0, 0xcc, 0x1d, 0x97, 0x98, 0x36, 0x98, 0xba, + 0x37, 0xf9, 0xa1, 0x6f, 0x5a, 0x3f, 0x67, 0xa3, 0x4b, 0x00, 0xf4, 0x2e, 0xb4, 0xe1, 0xb4, 0xf0, + 0xd7, 0xdc, 0x73, 0x52, 0x29, 0x31, 0xde, 0x81, 0x29, 0x39, 0x42, 0x54, 0xe0, 0x57, 0x0e, 0x3b, + 0x3a, 0x5a, 0x0b, 0x5a, 0xe8, 0xbd, 0x88, 0xb7, 0x3f, 0xaf, 0x7d, 0x3b, 0xcf, 0x84, 0xc5, 0x6e, + 0x08, 0x36, 0x2c, 0x46, 0x16, 0x41, 0x98, 0x98, 0x45, 0xca, 0xe8, 0xcc, 0xb5, 0x53, 0xfe, 0x8e, + 0x8a, 0xf0, 0x23, 0x43, 0x89, 0xf0, 0xc6, 0x3f, 0x1d, 0xa1, 0x97, 0xcb, 0xd8, 0x78, 0x44, 0xf4, + 0x7a, 0xaa, 0x6e, 0xb1, 0x0c, 0x53, 0xf4, 0xeb, 0xd7, 0x44, 0xb8, 0x69, 0x7f, 0xbf, 0x9b, 0xec, + 0xcf, 0x8f, 0x8a, 0x67, 0xa8, 0xb3, 0x4d, 0x88, 0x86, 0x3e, 0x86, 0xc9, 0xaa, 0xd3, 0xa2, 0x14, + 0x46, 0x4f, 0x40, 0x41, 0x20, 0x91, 0x39, 0xa1, 0x5d, 0xde, 0x25, 0x5b, 0x98, 0xa9, 0x83, 0x4c, + 0xa5, 0x24, 0xbc, 0xe5, 0x8e, 0xa7, 0xdd, 0x72, 0x27, 0x22, 0xb7, 0x5c, 0x03, 0xc6, 0x77, 0xbc, + 0x16, 0xcf, 0x77, 
0x35, 0xbb, 0x3a, 0xcd, 0x07, 0x8e, 0x96, 0x99, 0xac, 0x8a, 0xc9, 0x6b, 0x96, + 0xd7, 0x3c, 0xe4, 0x52, 0x0f, 0xff, 0x65, 0xfc, 0xe7, 0x0c, 0x2c, 0xdd, 0xc7, 0x41, 0xe2, 0xda, + 0xd2, 0x46, 0x2b, 0xf3, 0xca, 0xa3, 0x35, 0xf2, 0x32, 0xa3, 0x25, 0x47, 0x63, 0x34, 0x6d, 0x34, + 0xc6, 0xd2, 0x46, 0x63, 0x3c, 0x75, 0x34, 0x8c, 0xfb, 0x30, 0xc1, 0x3e, 0x95, 0xdc, 0xf0, 0x37, + 0x02, 0xdc, 0x09, 0x6f, 0xf8, 0xaa, 0x37, 0xa1, 0xc9, 0xea, 0x88, 0x80, 0xb9, 0x69, 0xf9, 0xea, + 0x0d, 0x9f, 0xff, 0x34, 0x7e, 0x44, 0x83, 0xf7, 0x37, 0xdd, 0xe6, 0x53, 0x45, 0xb3, 0x3c, 0xc9, + 0x76, 0x6e, 0xd4, 0x4a, 0x41, 0xa0, 0x58, 0x8d, 0x29, 0x20, 0xd0, 0x65, 0xc8, 0x6d, 0x38, 0xf7, + 0x5c, 0xaf, 0x89, 0x77, 0x9c, 0x36, 0xa3, 0x9e, 0x35, 0xd5, 0x22, 0xae, 0x41, 0xe1, 0x2d, 0x84, + 0x6a, 0x09, 0x5a, 0x10, 0x51, 0x4b, 0x90, 0xb2, 0xfd, 0x55, 0x93, 0xd5, 0x19, 0x2f, 0x98, 0x06, + 0x45, 0xeb, 0xdb, 0xab, 0x38, 0xd5, 0xbc, 0x11, 0x09, 0x69, 0x57, 0x3f, 0x2b, 0x12, 0xcd, 0xfe, + 0x23, 0xa6, 0x4b, 0x89, 0x75, 0xba, 0xdd, 0xa7, 0xd3, 0xb4, 0x6e, 0x68, 0xb3, 0x0d, 0xd3, 0x3e, + 0x11, 0xdc, 0x7e, 0xea, 0x0a, 0xa9, 0xd7, 0x18, 0x04, 0xf8, 0x18, 0xce, 0x9b, 0xb8, 0xdb, 0xb6, + 0x88, 0x20, 0xdb, 0x71, 0x19, 0xbc, 0x1c, 0xb4, 0xcb, 0x09, 0xd1, 0xa7, 0xba, 0xe3, 0x8c, 0x9c, + 0x8f, 0x91, 0x3e, 0xf3, 0xb1, 0x06, 0x73, 0xdc, 0x83, 0x4b, 0x55, 0x68, 0xb5, 0x55, 0x85, 0x16, + 0xfd, 0x41, 0x26, 0x89, 0xc5, 0x5b, 0x3f, 0x0d, 0x35, 0x5a, 0x3e, 0x5f, 0xdd, 0x46, 0x83, 0xcd, + 0x2a, 0xa3, 0xc2, 0x47, 0xf6, 0x36, 0x4c, 0x5b, 0xa1, 0x5f, 0xa4, 0x18, 0xe0, 0xe9, 0xd0, 0xe3, + 0x73, 0xff, 0xae, 0xa9, 0x41, 0xa0, 0xf3, 0x90, 0xa5, 0xc3, 0x1c, 0xb6, 0x30, 0xe9, 0x70, 0x9d, + 0x15, 0x37, 0xe8, 0x89, 0x73, 0xf5, 0x54, 0x0c, 0x7a, 0x5d, 0x66, 0xd0, 0x53, 0x68, 0x4a, 0xaf, + 0xab, 0x29, 0x91, 0x7a, 0x26, 0xba, 0x5b, 0x04, 0xf0, 0xfe, 0x5d, 0x33, 0x84, 0x19, 0x7a, 0x7d, + 0x74, 0xe0, 0xca, 0x7d, 0x1c, 0xe8, 0x47, 0x76, 0x68, 0x01, 0xe1, 0xad, 0xaf, 0x43, 0xd6, 0xd7, + 0xad, 0x37, 0x22, 0x74, 0x35, 0x11, 0x71, 0xff, 0xae, 
0xb0, 0xcf, 0x72, 0x3a, 0xf2, 0x2f, 0xe3, + 0x13, 0x28, 0xa6, 0x35, 0x37, 0x9c, 0x13, 0xb9, 0x0d, 0x97, 0xd3, 0x09, 0xf0, 0xee, 0x56, 0x41, + 0x58, 0x7a, 0x38, 0x33, 0x1e, 0xd4, 0x5b, 0xdd, 0x38, 0xc4, 0xff, 0x30, 0xca, 0xc2, 0x9d, 0xf6, + 0x15, 0xba, 0xfb, 0xf7, 0x46, 0x99, 0xef, 0x9e, 0x4e, 0xe2, 0x15, 0xd6, 0x35, 0xaa, 0xc2, 0x44, + 0xdb, 0x7a, 0x8c, 0xdb, 0x7e, 0x61, 0x94, 0xce, 0xc4, 0x77, 0x39, 0xdb, 0x4e, 0x6f, 0x85, 0xe5, + 0x5e, 0xe0, 0x21, 0x3d, 0x1c, 0x19, 0xdd, 0x81, 0x85, 0xae, 0x87, 0x5b, 0xc2, 0x1c, 0xd1, 0xf5, + 0xb8, 0x29, 0x9f, 0x1d, 0x12, 0xf3, 0xb2, 0xae, 0x2a, 0xab, 0xd0, 0xeb, 0x30, 0xe7, 0xd3, 0xb3, + 0x90, 0xf4, 0xeb, 0xb9, 0xeb, 0xb5, 0x78, 0xa2, 0x23, 0x73, 0x96, 0x15, 0x3f, 0xe4, 0xa5, 0xe8, + 0xd7, 0xe0, 0x5c, 0x24, 0x53, 0x51, 0x83, 0x33, 0xc4, 0x09, 0xae, 0xa1, 0x49, 0x9a, 0x0e, 0xc6, + 0x1a, 0xcb, 0x37, 0xb8, 0x0b, 0xef, 0xe5, 0x64, 0x12, 0x6a, 0xcc, 0xce, 0xf3, 0x04, 0xfc, 0xe5, + 0xf7, 0x21, 0xa7, 0x7c, 0x6f, 0x82, 0x53, 0xee, 0x82, 0xea, 0x94, 0x3b, 0xa5, 0x3a, 0xdf, 0x76, + 0x98, 0xe7, 0x63, 0x6c, 0x14, 0x65, 0x06, 0x81, 0x2c, 0xef, 0x8b, 0xd8, 0x05, 0x4b, 0x89, 0x1f, + 0xb2, 0x7f, 0xd7, 0x94, 0x80, 0xfd, 0x18, 0x48, 0x83, 0x3a, 0x08, 0xa5, 0xb5, 0x56, 0x82, 0xec, + 0xda, 0x70, 0xad, 0xb1, 0xcd, 0x26, 0x5a, 0x34, 0x25, 0x9a, 0xf1, 0x23, 0x61, 0x6c, 0xd6, 0x31, + 0x86, 0xcb, 0xf9, 0x30, 0x8c, 0x75, 0xd9, 0xf8, 0xdd, 0x0c, 0x9c, 0xd7, 0x89, 0xab, 0x56, 0xc4, + 0xbc, 0x62, 0x45, 0x64, 0xc6, 0xc3, 0xd7, 0x74, 0xab, 0x16, 0xa3, 0x3c, 0x12, 0xb5, 0x62, 0x5d, + 0x52, 0xed, 0x85, 0xd3, 0x71, 0x43, 0xe1, 0x05, 0xd5, 0xca, 0xc5, 0x95, 0x5f, 0xa1, 0x55, 0xeb, + 0x36, 0x2c, 0x27, 0x75, 0x49, 0x51, 0x54, 0x49, 0x13, 0x14, 0xbf, 0x90, 0xad, 0xc1, 0x25, 0x91, + 0x12, 0xd3, 0x75, 0x03, 0x3f, 0xf0, 0xac, 0x6e, 0xbd, 0xe9, 0xd9, 0xdd, 0x10, 0xcb, 0x80, 0x09, + 0x56, 0xc2, 0x07, 0x8b, 0xd9, 0xf6, 0x19, 0x0c, 0xaf, 0x31, 0x7e, 0x33, 0x03, 0x86, 0xe6, 0x38, + 0x4b, 0xd9, 0x44, 0xcd, 0x73, 0x9f, 0xd9, 0x2d, 0xc5, 0x9e, 0xfe, 0x86, 0x66, 0x9b, 0x61, 
0x61, + 0xe9, 0xd1, 0x98, 0x1d, 0x2e, 0xbc, 0xdd, 0x8e, 0xd8, 0x4b, 0xd8, 0xcd, 0x58, 0x2c, 0x27, 0xf5, + 0x66, 0x2c, 0xec, 0x28, 0xff, 0x35, 0x03, 0x57, 0xfb, 0xf6, 0x81, 0x7f, 0xcf, 0x63, 0xc8, 0x47, + 0xeb, 0xf8, 0x22, 0x2b, 0x2a, 0x8e, 0x74, 0x71, 0x0a, 0xfb, 0x77, 0x58, 0x60, 0x90, 0x70, 0x38, + 0xed, 0x4a, 0xca, 0x31, 0x7a, 0x27, 0xef, 0x3d, 0xcd, 0x8a, 0xe5, 0x06, 0x56, 0xbb, 0x42, 0x35, + 0x94, 0xa3, 0x61, 0x90, 0x57, 0x40, 0x4a, 0x1b, 0xd1, 0xe4, 0x5b, 0x0a, 0xb0, 0xf1, 0x03, 0x7a, + 0x2c, 0x24, 0x77, 0x7a, 0x38, 0x4e, 0x5d, 0x81, 0xab, 0x11, 0x67, 0xae, 0x97, 0x20, 0x12, 0xb0, + 0xf3, 0x7b, 0xcf, 0xc7, 0xde, 0x7d, 0xcf, 0xed, 0x75, 0xbf, 0x9d, 0x59, 0xff, 0xe3, 0x0c, 0xf3, + 0xae, 0x57, 0x9b, 0xe5, 0x13, 0x5d, 0x01, 0x08, 0x4b, 0x13, 0x92, 0x9e, 0xd0, 0x8a, 0xfd, 0x3b, + 0x4c, 0x27, 0x48, 0xcd, 0x9e, 0x07, 0x8c, 0x80, 0x82, 0xf6, 0xed, 0xce, 0xe4, 0x5d, 0xea, 0xc1, + 0x25, 0x5b, 0x1f, 0x6e, 0xdc, 0xdf, 0x15, 0x0a, 0xda, 0x13, 0xe2, 0x1d, 0xc2, 0x02, 0xe1, 0x00, + 0xa5, 0x5e, 0x70, 0xe8, 0x7a, 0x76, 0x20, 0xe2, 0x05, 0x51, 0x8d, 0x27, 0xbc, 0x61, 0x58, 0xdf, + 0xff, 0xe5, 0x51, 0xf1, 0xbd, 0x93, 0x24, 0x1f, 0x17, 0x34, 0x77, 0x65, 0x92, 0x1c, 0x63, 0x09, + 0x46, 0x2b, 0xe6, 0x26, 0x65, 0x89, 0xe6, 0xa6, 0x64, 0x89, 0xe6, 0xa6, 0xf1, 0x17, 0x23, 0x50, + 0x64, 0xe9, 0xc8, 0xa8, 0xe3, 0x5f, 0xa8, 0x56, 0x55, 0x3c, 0x09, 0x87, 0xd5, 0x80, 0x46, 0xd2, + 0x8d, 0x8d, 0x0c, 0x93, 0x6e, 0xec, 0xd7, 0x20, 0x45, 0xa7, 0x3e, 0x84, 0x9a, 0xf2, 0xf5, 0xe3, + 0xa3, 0xe2, 0xd5, 0x50, 0x4d, 0xc9, 0x6a, 0x93, 0xf4, 0x95, 0x29, 0x4d, 0xc4, 0x15, 0xac, 0x63, + 0x2f, 0xa1, 0x60, 0xbd, 0x0d, 0x93, 0x54, 0xdb, 0xb2, 0x51, 0xe3, 0xae, 0xf8, 0x74, 0x79, 0xd2, + 0xbc, 0x87, 0x0d, 0x5b, 0xcd, 0x61, 0x2c, 0xc0, 0x8c, 0xdf, 0x1f, 0x81, 0xcb, 0xe9, 0x63, 0xce, + 0xfb, 0xb6, 0x06, 0x10, 0xba, 0x1c, 0xf6, 0x73, 0x71, 0xa4, 0x7b, 0xe7, 0x39, 0x7e, 0x2c, 0x5d, + 0x8c, 0x15, 0x3c, 0x22, 0x3a, 0x8b, 0x64, 0x1b, 0x11, 0x7b, 0xaf, 0x96, 0x83, 0x83, 0xa7, 0xd4, + 0xe7, 0x45, 0x5a, 0x4a, 0x7d, 
0x5e, 0x86, 0x1e, 0xc3, 0x52, 0xcd, 0xb3, 0x9f, 0x59, 0x01, 0x7e, + 0x88, 0x5f, 0xb0, 0xe8, 0xcd, 0x2a, 0x0f, 0xd9, 0x64, 0x19, 0x54, 0x6e, 0x1c, 0x1f, 0x15, 0x5f, + 0xeb, 0x32, 0x10, 0x9a, 0xce, 0x95, 0x85, 0xff, 0x37, 0xe2, 0x51, 0x9c, 0x69, 0x84, 0x8c, 0x7f, + 0x91, 0x81, 0x15, 0xaa, 0x1f, 0xe0, 0x76, 0x21, 0xd1, 0xf8, 0x4b, 0x79, 0xba, 0xab, 0x1f, 0xc8, + 0xd7, 0x22, 0xf5, 0x74, 0xd7, 0x92, 0x91, 0x98, 0x1a, 0x18, 0xda, 0x80, 0x1c, 0xff, 0x4d, 0xf7, + 0xdf, 0x28, 0xd5, 0x4c, 0x2c, 0x46, 0xb3, 0x34, 0x31, 0x5d, 0x36, 0x5d, 0xd8, 0x9c, 0x18, 0x8d, + 0xd9, 0x37, 0x55, 0x5c, 0xe3, 0x17, 0x23, 0x70, 0x61, 0x1f, 0x7b, 0xf6, 0x93, 0x17, 0x29, 0x1f, + 0xb3, 0x03, 0x0b, 0xa2, 0x88, 0xa5, 0xe5, 0xd2, 0xb6, 0x18, 0x4b, 0xc2, 0x2d, 0xba, 0xca, 0xf3, + 0x7a, 0x89, 0x1d, 0x97, 0x88, 0x78, 0x02, 0x1f, 0xf6, 0xb7, 0x21, 0x1b, 0x49, 0x0a, 0x48, 0xe7, + 0x5f, 0xec, 0xd0, 0x70, 0xaa, 0xd6, 0xcf, 0x98, 0x12, 0x12, 0xfd, 0x2c, 0xdd, 0x96, 0xce, 0x75, + 0xb3, 0x83, 0x0c, 0x34, 0x74, 0xc3, 0x92, 0xcd, 0x6a, 0x29, 0xb5, 0x09, 0x1b, 0x76, 0xfd, 0x8c, + 0x99, 0xd6, 0x52, 0x39, 0x07, 0x53, 0x25, 0xea, 0x58, 0xe0, 0xe1, 0x96, 0xf1, 0x5f, 0x46, 0xe0, + 0x92, 0x88, 0xc4, 0x4c, 0x19, 0xe6, 0xcf, 0x60, 0x49, 0x14, 0x95, 0xba, 0x44, 0x60, 0xc0, 0x2d, + 0x7d, 0xa4, 0x59, 0x22, 0x7c, 0x31, 0xd2, 0x16, 0x87, 0x09, 0x07, 0x3b, 0x0d, 0xfd, 0x74, 0xcc, + 0x33, 0x1f, 0x27, 0xa5, 0x68, 0xa4, 0x66, 0x12, 0x95, 0x67, 0x6a, 0x43, 0xa3, 0xf1, 0xcf, 0x56, + 0xcc, 0xbc, 0x33, 0xf6, 0xaa, 0xe6, 0x9d, 0xf5, 0x33, 0x51, 0x03, 0x4f, 0x79, 0x16, 0xa6, 0xb7, + 0xf1, 0xf3, 0x70, 0xdc, 0xff, 0xf7, 0x4c, 0x24, 0xdb, 0x0f, 0x91, 0x30, 0x58, 0xda, 0x9f, 0x4c, + 0x98, 0xe9, 0x8e, 0x66, 0xfb, 0x51, 0x25, 0x0c, 0x06, 0xba, 0x01, 0x93, 0xcc, 0xdb, 0xa6, 0x35, + 0x84, 0xaa, 0x51, 0x86, 0x54, 0xb2, 0x38, 0xf7, 0x16, 0xd3, 0x3a, 0x72, 0x7c, 0xe3, 0x21, 0x5c, + 0xe1, 0x41, 0x37, 0xfa, 0xe4, 0xd3, 0x86, 0x4e, 0x78, 0x7c, 0x19, 0x16, 0x5c, 0xba, 0x8f, 0xa3, + 0xac, 0x47, 0x0b, 0x39, 0xfd, 0x04, 0xe6, 0xb4, 0x72, 0x49, 0x91, 
0x4a, 0xa5, 0x72, 0x0d, 0x49, + 0xd2, 0x51, 0x68, 0xe3, 0x72, 0x52, 0x13, 0x6a, 0x67, 0x0d, 0x4c, 0x33, 0xda, 0x7b, 0x4a, 0xde, + 0xd3, 0x13, 0x70, 0xbd, 0x1b, 0xca, 0xbe, 0x66, 0x1c, 0x8f, 0xe5, 0x24, 0x16, 0x27, 0xaf, 0xac, + 0x35, 0x66, 0x34, 0x63, 0xa5, 0x31, 0x0b, 0xd3, 0xa2, 0xaa, 0x8d, 0x7d, 0xdf, 0xf8, 0xeb, 0x13, + 0x60, 0xf0, 0x81, 0x4d, 0x72, 0x21, 0x12, 0xe3, 0xf1, 0x38, 0xd6, 0x59, 0x7e, 0x50, 0x9d, 0x53, + 0x13, 0xb6, 0x87, 0xb5, 0x6c, 0xe5, 0x51, 0x39, 0x2f, 0x31, 0x07, 0xec, 0xfa, 0x19, 0x33, 0xf6, + 0xf5, 0x5f, 0xa6, 0xb0, 0x49, 0xb6, 0xd9, 0xae, 0x1d, 0x1f, 0x15, 0xaf, 0xa4, 0xb0, 0x49, 0x8d, + 0x6e, 0x32, 0xcb, 0x34, 0x75, 0x9b, 0xed, 0xe8, 0xcb, 0xd8, 0x6c, 0xc9, 0x8e, 0x54, 0xad, 0xb6, + 0x7b, 0xfa, 0x58, 0xf2, 0xfd, 0x28, 0xdc, 0x8b, 0xd4, 0x2a, 0x9e, 0x74, 0x47, 0x29, 0xd1, 0xa8, + 0x6a, 0x64, 0x90, 0x0d, 0x79, 0xc5, 0xa8, 0x52, 0x39, 0xc4, 0xcd, 0xa7, 0xdc, 0x98, 0x25, 0x3c, + 0x4e, 0x92, 0x8c, 0x7a, 0xec, 0x51, 0x0d, 0xb6, 0xcf, 0x59, 0x45, 0xa3, 0x49, 0x50, 0xd5, 0xa4, + 0x41, 0x51, 0xb2, 0xe8, 0x27, 0x30, 0x2f, 0xa7, 0x3a, 0xe2, 0x73, 0x9a, 0x5b, 0x7d, 0x2d, 0xcc, + 0xf3, 0xde, 0x79, 0x62, 0xdd, 0x7c, 0x76, 0xe7, 0x66, 0x02, 0x2c, 0xcb, 0x45, 0xd3, 0x14, 0x15, + 0x8a, 0xc3, 0xa9, 0x6a, 0x89, 0x4f, 0x40, 0x44, 0x9f, 0xc3, 0x42, 0xbd, 0xbe, 0xc3, 0xa2, 0xeb, + 0x4c, 0xe1, 0x81, 0x64, 0x6e, 0x72, 0x0f, 0x54, 0x3a, 0xdd, 0xbe, 0xef, 0x36, 0x78, 0x54, 0x9e, + 0xea, 0xb7, 0xa4, 0xaa, 0x66, 0x92, 0x48, 0xa0, 0x4f, 0x60, 0x9a, 0xe6, 0xec, 0x2b, 0xb5, 0x5a, + 0x1e, 0x99, 0x98, 0x6c, 0x78, 0xd0, 0xb2, 0xf4, 0x7e, 0x16, 0xab, 0x50, 0xb3, 0xf7, 0xab, 0x08, + 0xaa, 0x2f, 0xd0, 0xff, 0x2b, 0xa3, 0xcf, 0x88, 0x2c, 0x63, 0xb7, 0x31, 0x57, 0x68, 0x8a, 0x9d, + 0x91, 0xe2, 0xc7, 0x90, 0xf9, 0x86, 0xfd, 0x18, 0xfe, 0xe1, 0x88, 0x88, 0xb9, 0x8b, 0xbb, 0x92, + 0x9c, 0xd8, 0x9d, 0x21, 0xf1, 0x0b, 0x86, 0x3a, 0xe8, 0x13, 0x3b, 0x87, 0xca, 0xc2, 0x19, 0x44, + 0x26, 0xf3, 0x9c, 0x95, 0x86, 0xd5, 0xb0, 0x42, 0xf3, 0x0f, 0xa1, 0x62, 0x95, 0x82, 0x15, 0xf5, + 0x34, 
0x18, 0x7d, 0x75, 0x4f, 0x83, 0x9f, 0xc2, 0xa2, 0x08, 0x76, 0xad, 0x60, 0x27, 0xc0, 0x9e, + 0xf0, 0x49, 0x9a, 0x0d, 0x93, 0xa2, 0xd2, 0xf4, 0xb7, 0x79, 0x18, 0x2d, 0x99, 0xdb, 0x5c, 0x8b, + 0x46, 0xfe, 0x44, 0x97, 0x75, 0x97, 0x5f, 0x16, 0xc5, 0xac, 0x39, 0xf8, 0x5e, 0x26, 0xdd, 0x65, + 0x8a, 0x9a, 0x50, 0xbb, 0xa9, 0x16, 0x19, 0x15, 0x58, 0xd1, 0x9b, 0xaf, 0x61, 0xaf, 0x63, 0x53, + 0xe1, 0xbd, 0x8e, 0x03, 0xd1, 0x68, 0x26, 0x6c, 0x14, 0xa9, 0x21, 0x26, 0xfc, 0x1e, 0xf9, 0xdf, + 0x47, 0xa0, 0x98, 0xf8, 0x11, 0x25, 0xdf, 0xb7, 0x0f, 0x1c, 0x9a, 0x85, 0xe9, 0x02, 0x8c, 0x3d, + 0xb4, 0x9d, 0x96, 0x7a, 0x13, 0x7d, 0x6a, 0x3b, 0x2d, 0x93, 0x96, 0x92, 0x4b, 0x4c, 0xbd, 0xf7, + 0x98, 0x02, 0x28, 0x77, 0x6c, 0xbf, 0xf7, 0xb8, 0x41, 0x80, 0xd4, 0x4b, 0x0c, 0x07, 0x43, 0xd7, + 0x60, 0x52, 0x64, 0xec, 0x1c, 0x0d, 0x35, 0x74, 0x22, 0x55, 0xa7, 0xa8, 0x43, 0x1f, 0x41, 0x76, + 0x0b, 0x07, 0x56, 0xcb, 0x0a, 0x2c, 0xbe, 0x76, 0xc4, 0x73, 0x5e, 0xa2, 0xb8, 0x9c, 0xe7, 0x47, + 0x7c, 0xb6, 0xc3, 0x4b, 0x4c, 0x89, 0x42, 0x07, 0xd0, 0xf6, 0xbb, 0x6d, 0xeb, 0x85, 0x74, 0xaf, + 0x27, 0x03, 0x18, 0x16, 0xa1, 0x77, 0x75, 0xa7, 0xb2, 0xd0, 0x41, 0x20, 0x71, 0x40, 0x42, 0x97, + 0xb3, 0x75, 0xea, 0xe8, 0x16, 0x0e, 0x35, 0xcf, 0xb6, 0x6b, 0x24, 0x62, 0x6b, 0x90, 0xa6, 0x8e, + 0x68, 0xd4, 0x99, 0xd6, 0x65, 0xc3, 0xf1, 0x03, 0xb2, 0xd7, 0xbc, 0x53, 0x31, 0xc5, 0xf0, 0x44, + 0xd3, 0x2a, 0xd1, 0x30, 0xd1, 0xb4, 0x2d, 0x4b, 0x23, 0x3a, 0x15, 0x09, 0xbe, 0x7f, 0xc7, 0x54, + 0xa0, 0x86, 0x36, 0xc7, 0xfc, 0x63, 0x80, 0xb3, 0x35, 0xeb, 0xc0, 0x76, 0x88, 0x74, 0x65, 0x62, + 0xdf, 0xed, 0x79, 0x4d, 0x8c, 0x4a, 0x30, 0xab, 0x47, 0xe7, 0x0c, 0x88, 0x3d, 0x22, 0x02, 0xa4, + 0x5e, 0x86, 0x56, 0x61, 0x4a, 0x66, 0x34, 0xe1, 0x52, 0x5f, 0x42, 0xa6, 0x93, 0xf5, 0x33, 0x66, + 0x08, 0x86, 0xde, 0xd7, 0x3c, 0x45, 0xe6, 0x64, 0x72, 0x1e, 0x0a, 0xbb, 0xca, 0xc2, 0x27, 0x1c, + 0x2d, 0x41, 0x97, 0x74, 0x1e, 0xf9, 0x51, 0xcc, 0x79, 0x64, 0x5c, 0xeb, 0x71, 0x4c, 0x87, 0x4d, + 0x85, 0xf6, 0xd4, 0xe7, 0x87, 0x12, 0xdc, 
0x4a, 0xbe, 0x80, 0xdc, 0xc3, 0xde, 0x63, 0x2c, 0xdc, + 0x64, 0x26, 0xb8, 0x20, 0x1b, 0x8d, 0x39, 0xe3, 0xf5, 0xfb, 0x77, 0x19, 0x43, 0x7a, 0xda, 0x7b, + 0x8c, 0xe3, 0xef, 0x5a, 0x11, 0x09, 0x42, 0x21, 0x86, 0x0e, 0x21, 0x1f, 0x0d, 0x0f, 0xe3, 0xab, + 0xb3, 0x4f, 0x50, 0x1b, 0xcd, 0x7b, 0xa7, 0xbc, 0x9e, 0xc5, 0x82, 0x56, 0xb4, 0x46, 0x62, 0x54, + 0xd1, 0x4f, 0x61, 0x31, 0xd1, 0xba, 0x24, 0x03, 0xf4, 0xfb, 0x1b, 0xae, 0xe8, 0x71, 0x1c, 0x35, + 0x94, 0x70, 0xe5, 0xac, 0xd6, 0x72, 0x72, 0x2b, 0xa8, 0x05, 0x73, 0x91, 0xb0, 0x27, 0xfe, 0xe4, + 0x5f, 0x7a, 0x20, 0x15, 0x95, 0x20, 0x85, 0x9d, 0x31, 0xb1, 0xad, 0x28, 0x49, 0xb4, 0x09, 0x53, + 0x52, 0x2f, 0xc7, 0x53, 0xd5, 0x26, 0xe9, 0x20, 0x0b, 0xc7, 0x47, 0xc5, 0x85, 0x50, 0x07, 0xa9, + 0xd1, 0x0c, 0x09, 0xa0, 0xdf, 0xca, 0xc0, 0xb9, 0x64, 0x1d, 0x6d, 0x61, 0x9a, 0xd2, 0x1e, 0xa8, + 0xc2, 0xa6, 0xb7, 0x60, 0x1a, 0x14, 0x6e, 0xb7, 0xc2, 0xe4, 0x09, 0x42, 0x97, 0xad, 0xb5, 0x9b, + 0xd2, 0x12, 0xba, 0x0d, 0x70, 0x60, 0x07, 0x7c, 0x8e, 0x69, 0xd6, 0xd4, 0xf8, 0x06, 0x21, 0xdd, + 0x3e, 0xb0, 0x03, 0x3e, 0xd3, 0x7f, 0x3b, 0x33, 0xf0, 0x88, 0xa0, 0xc9, 0x54, 0x73, 0xab, 0xd7, + 0xfb, 0xf1, 0xcf, 0x10, 0xba, 0x7c, 0xfb, 0xf8, 0xa8, 0xf8, 0x96, 0xcc, 0xc8, 0xd9, 0xa4, 0x50, + 0x22, 0x05, 0x44, 0xc3, 0x92, 0x70, 0xda, 0xf7, 0x0c, 0x3c, 0xa5, 0xde, 0x82, 0x09, 0xaa, 0x25, + 0xf3, 0x0b, 0x33, 0xf4, 0x1e, 0x49, 0xf3, 0x48, 0x52, 0x5d, 0x9a, 0x2a, 0x96, 0x71, 0x18, 0xb4, + 0x4e, 0xee, 0x63, 0x54, 0x72, 0x15, 0x3c, 0x97, 0x67, 0x9d, 0xe5, 0x77, 0x7a, 0x56, 0x25, 0x92, + 0xb3, 0x69, 0xef, 0xbb, 0xe9, 0x68, 0x65, 0x80, 0xac, 0xc7, 0xd9, 0xdd, 0x83, 0xb1, 0xec, 0x58, + 0x7e, 0x9c, 0xe5, 0xc8, 0x63, 0xfb, 0x52, 0x98, 0x5c, 0xaf, 0x48, 0xde, 0xb4, 0xe3, 0x25, 0x4f, + 0x8c, 0xf1, 0xdb, 0x59, 0x66, 0xc4, 0xdb, 0x73, 0xec, 0x27, 0x76, 0xc8, 0x42, 0x55, 0x45, 0x7c, + 0xf8, 0xc2, 0x2a, 0xbf, 0x26, 0xa7, 0xbc, 0xa5, 0x2a, 0x75, 0xf6, 0x23, 0x03, 0x75, 0xf6, 0x77, + 0x15, 0x37, 0x1b, 0x25, 0xc5, 0xbe, 0xb4, 0xe2, 0xaa, 0x0a, 0x3d, 0xe9, 0x7f, 
0xf3, 0x15, 0x4c, + 0x30, 0x4b, 0x25, 0xf5, 0x6d, 0xca, 0xad, 0xde, 0x54, 0x2c, 0xb9, 0x29, 0xdd, 0x57, 0x4d, 0xb9, + 0x7c, 0x6a, 0x68, 0x81, 0x36, 0x35, 0xcc, 0xc0, 0xbb, 0x0b, 0xf3, 0xb5, 0xb8, 0x11, 0x97, 0x6b, + 0x48, 0xe9, 0xed, 0x20, 0xc9, 0xfe, 0xab, 0xca, 0xb7, 0x09, 0xe8, 0xa8, 0x0a, 0xb3, 0x75, 0xcd, + 0xd8, 0xab, 0x3e, 0x42, 0x18, 0xb1, 0x0e, 0xab, 0xfe, 0xaa, 0x3a, 0x12, 0xfa, 0x01, 0x4c, 0xd4, + 0x5d, 0x2f, 0x28, 0xbf, 0xe0, 0x6c, 0x55, 0x38, 0x82, 0xb0, 0xc2, 0xf2, 0x79, 0xf1, 0x10, 0xa3, + 0xef, 0x7a, 0x41, 0xe3, 0xb1, 0x96, 0xc1, 0x94, 0x81, 0xa0, 0x17, 0xb0, 0x90, 0x64, 0x3e, 0xe6, + 0x7c, 0xf3, 0xb4, 0x2c, 0xcc, 0x49, 0xf8, 0x68, 0x8b, 0xbe, 0x48, 0xc9, 0xbe, 0xa8, 0xe4, 0xb3, + 0xb0, 0xa7, 0xa9, 0x30, 0x47, 0x6e, 0x8f, 0xb2, 0x45, 0x3a, 0x12, 0x96, 0x1f, 0x7d, 0xc6, 0xd4, + 0x8c, 0xa1, 0xa2, 0x1a, 0x9c, 0xdd, 0xf3, 0x71, 0xcd, 0xc3, 0xcf, 0x6c, 0xfc, 0x5c, 0xd0, 0x83, + 0x30, 0xa1, 0x28, 0xa1, 0xd7, 0x65, 0xb5, 0x49, 0x04, 0xe3, 0xc8, 0xe8, 0x7d, 0x80, 0x9a, 0xed, + 0x38, 0xb8, 0x45, 0x5d, 0xa5, 0x72, 0x94, 0x14, 0xb5, 0xbe, 0x74, 0x69, 0x69, 0xc3, 0x75, 0xda, + 0xea, 0x90, 0x2a, 0xc0, 0xa8, 0x0c, 0x33, 0x1b, 0x4e, 0xb3, 0xdd, 0xe3, 0xae, 0x8e, 0x3e, 0x65, + 0xa9, 0x3c, 0xd1, 0xb1, 0xcd, 0x2a, 0x1a, 0x31, 0x6e, 0xa0, 0xa3, 0xa0, 0x87, 0x80, 0x78, 0x81, + 0x19, 0x3e, 0xf1, 0xcc, 0xf9, 0x02, 0xbd, 0xec, 0x09, 0x42, 0x74, 0xb9, 0x6b, 0xf9, 0x83, 0x63, + 0x68, 0xaf, 0x62, 0xce, 0xff, 0x5b, 0x19, 0xb8, 0x90, 0xbc, 0x97, 0xb8, 0x20, 0xb7, 0x03, 0x53, + 0xb2, 0x50, 0x46, 0x25, 0x0b, 0x2d, 0x41, 0x44, 0x06, 0x63, 0x1b, 0x5a, 0xb0, 0x28, 0xf5, 0xeb, + 0x43, 0x1a, 0x2f, 0x61, 0xba, 0xfb, 0x3f, 0xb3, 0xcc, 0x8b, 0x28, 0xc6, 0xa7, 0x3e, 0xa1, 0xb9, + 0xf2, 0x68, 0x99, 0x62, 0x89, 0xe2, 0x4a, 0x69, 0x56, 0x1e, 0xcd, 0x53, 0xab, 0x21, 0xa0, 0x77, + 0x54, 0xff, 0xce, 0x11, 0xe5, 0xc5, 0x4c, 0x51, 0xa8, 0x7e, 0x42, 0xe8, 0xf8, 0xf9, 0x86, 0xe6, + 0x46, 0x38, 0x34, 0xd3, 0x1b, 0x1b, 0x96, 0xe9, 0xed, 0x49, 0xa6, 0xc7, 0x72, 0xb0, 0xbd, 0xae, + 0x30, 0xbd, 0xd3, 
0xe7, 0x76, 0x13, 0xa7, 0xcd, 0xed, 0x26, 0x5f, 0x8d, 0xdb, 0x65, 0x5f, 0x92, + 0xdb, 0xdd, 0x83, 0xd9, 0x6d, 0x8c, 0x5b, 0x8a, 0x4d, 0x75, 0x2a, 0x3c, 0x66, 0x1d, 0x4c, 0xb5, + 0xe5, 0x49, 0x86, 0xd5, 0x08, 0x56, 0x2a, 0xd7, 0x84, 0xbf, 0x1a, 0xae, 0x99, 0x3b, 0x65, 0xae, + 0x39, 0xfd, 0x2a, 0x5c, 0x33, 0xc6, 0xfa, 0x66, 0x4e, 0xcc, 0xfa, 0x5e, 0x85, 0x5b, 0xfd, 0xb5, + 0x11, 0x58, 0x22, 0x1b, 0xa0, 0xfd, 0x0c, 0xd7, 0xeb, 0xeb, 0xdc, 0xfd, 0x35, 0xf4, 0xc4, 0x3c, + 0x74, 0x7d, 0x11, 0x41, 0x46, 0xff, 0x26, 0x65, 0x5d, 0xd7, 0x0b, 0x84, 0x1a, 0x82, 0xfc, 0x8d, + 0xca, 0x11, 0xdf, 0xb0, 0x37, 0xc3, 0xbc, 0xa5, 0x49, 0x74, 0xbf, 0x6d, 0xc7, 0xb0, 0x57, 0x19, + 0x9e, 0x0a, 0x14, 0xe2, 0x5f, 0xc1, 0xf9, 0xf8, 0xeb, 0xc0, 0xf3, 0x6c, 0xf0, 0x6b, 0x71, 0x54, + 0x10, 0x37, 0x79, 0xb5, 0xf1, 0x31, 0x0d, 0x41, 0x91, 0x04, 0x7c, 0x65, 0x7c, 0xd7, 0x95, 0xf1, + 0x5d, 0xe7, 0xe3, 0x5b, 0x53, 0xc6, 0x97, 0xfc, 0x6d, 0x94, 0x69, 0xe0, 0x89, 0x8a, 0x2f, 0x23, + 0x59, 0x27, 0x79, 0xda, 0x0c, 0x7e, 0x8e, 0xc4, 0xba, 0x20, 0xea, 0x8d, 0x3f, 0xe5, 0xcf, 0x68, + 0xfd, 0xcf, 0x78, 0x1c, 0xbd, 0x8a, 0xdf, 0xc6, 0xcf, 0xc2, 0x74, 0x60, 0x3c, 0x75, 0x99, 0x67, + 0x35, 0x9f, 0x86, 0x8e, 0x33, 0x3f, 0x24, 0xbc, 0x54, 0xad, 0xe0, 0xb7, 0xa6, 0x25, 0x39, 0x52, + 0x6a, 0xe5, 0xfe, 0x1d, 0xc1, 0x64, 0x79, 0x56, 0x34, 0x56, 0xac, 0x33, 0x59, 0x15, 0x81, 0xc6, + 0x56, 0xcc, 0x19, 0x26, 0xcb, 0x66, 0x95, 0xd8, 0x83, 0x77, 0xe3, 0xf9, 0x98, 0xe8, 0x95, 0x33, + 0xcc, 0xc7, 0xa4, 0x0e, 0x63, 0x98, 0x99, 0x69, 0x0f, 0x56, 0x4c, 0xdc, 0x71, 0x9f, 0xe1, 0xd3, + 0x25, 0xfb, 0x25, 0x9c, 0xd7, 0x09, 0xb2, 0x48, 0x76, 0xf6, 0x68, 0xd7, 0xc7, 0xc9, 0x4f, 0x7d, + 0x71, 0x04, 0xf6, 0xd4, 0x17, 0x7b, 0x55, 0x87, 0xfc, 0xa9, 0x9e, 0xcd, 0xb4, 0xce, 0x70, 0xe1, + 0x82, 0x4e, 0xbc, 0xd4, 0x6a, 0xd5, 0x2c, 0x2f, 0xb0, 0x9b, 0x76, 0xd7, 0x72, 0x02, 0xb4, 0x03, + 0x39, 0xe5, 0x67, 0x44, 0x21, 0xa4, 0xd4, 0x70, 0xb9, 0x31, 0x2c, 0xd0, 0x5e, 0x25, 0x08, 0x8b, + 0x0d, 0x0c, 0xc5, 0xe8, 0xf0, 0x90, 0x21, 0x53, 0xdb, 
0x2c, 0xc3, 0x8c, 0xf2, 0x53, 0x5a, 0x90, + 0x28, 0x83, 0x55, 0x5a, 0xd0, 0x07, 0x4c, 0x47, 0x31, 0x9a, 0xb0, 0x9c, 0x34, 0x68, 0xec, 0xf9, + 0x1a, 0x54, 0x0d, 0x73, 0xc5, 0x0e, 0x8e, 0xc2, 0x98, 0x4b, 0xcb, 0x13, 0x6b, 0xfc, 0x3f, 0x63, + 0xb0, 0xc2, 0x27, 0xe3, 0x34, 0x67, 0x1c, 0xfd, 0x08, 0x72, 0xca, 0x1c, 0xf3, 0x41, 0xbf, 0x2c, + 0xe2, 0xd0, 0xd3, 0xd6, 0x02, 0x53, 0x5c, 0xf5, 0x68, 0x41, 0x23, 0x32, 0xdd, 0xeb, 0x67, 0x4c, + 0x95, 0x24, 0x6a, 0xc3, 0xac, 0x3e, 0xd1, 0x5c, 0x77, 0x77, 0x35, 0xb1, 0x11, 0x1d, 0x54, 0xbc, + 0x6d, 0xd3, 0x6a, 0x24, 0x4e, 0xf7, 0xfa, 0x19, 0x33, 0x42, 0x1b, 0x7d, 0x0d, 0x67, 0x63, 0xb3, + 0xcc, 0x75, 0xcc, 0xd7, 0x13, 0x1b, 0x8c, 0x41, 0x33, 0xeb, 0x98, 0x47, 0x8b, 0x53, 0x9b, 0x8d, + 0x37, 0x82, 0x5a, 0x30, 0xad, 0x4e, 0x3c, 0x57, 0x2e, 0x5e, 0xe9, 0x33, 0x94, 0x0c, 0x90, 0x09, + 0xd0, 0x7c, 0x2c, 0xe9, 0xdc, 0xbf, 0xd0, 0x2d, 0x7e, 0x1a, 0x70, 0x16, 0x26, 0xd8, 0x6f, 0xe3, + 0x2f, 0x32, 0xb0, 0x52, 0xf3, 0xb0, 0x8f, 0x9d, 0x26, 0xd6, 0x22, 0xfa, 0x5e, 0x71, 0x45, 0xa4, + 0x19, 0xdb, 0x46, 0x4e, 0xdf, 0xd8, 0x36, 0x7a, 0x42, 0x63, 0x9b, 0xf1, 0xcf, 0x33, 0x50, 0x48, + 0xfa, 0xe6, 0x3a, 0x76, 0x5a, 0xa8, 0x06, 0xf9, 0xe8, 0x20, 0xf0, 0x2d, 0x67, 0xc8, 0xb7, 0x4d, + 0x52, 0x87, 0x6b, 0xfd, 0x8c, 0x19, 0xc3, 0x46, 0xdb, 0x70, 0x56, 0x29, 0xe3, 0xc6, 0xae, 0x91, + 0x61, 0x8c, 0x5d, 0x64, 0x89, 0xc4, 0x50, 0x55, 0x5b, 0xe1, 0x3a, 0x3d, 0xb6, 0x99, 0x7b, 0x30, + 0xb9, 0xe9, 0x28, 0x51, 0x15, 0x10, 0x96, 0xf2, 0x79, 0x63, 0xd6, 0x2f, 0x5a, 0x2a, 0xa2, 0xa3, + 0x25, 0x88, 0xf1, 0x7d, 0x7a, 0xbc, 0x70, 0x35, 0x31, 0xcb, 0x47, 0x23, 0x89, 0x5d, 0x86, 0xf1, + 0xdd, 0xcd, 0x7a, 0xa5, 0xc4, 0xb3, 0xdb, 0xb0, 0x1c, 0x70, 0x6d, 0xbf, 0xd1, 0xb4, 0x4c, 0x56, + 0x61, 0x7c, 0x08, 0xe8, 0x3e, 0x0e, 0xf8, 0xe3, 0x5a, 0x12, 0xef, 0x1a, 0x4c, 0xf2, 0x22, 0x8e, + 0x49, 0xed, 0x38, 0xfc, 0xa9, 0x2e, 0x53, 0xd4, 0x19, 0x35, 0x71, 0x51, 0x6c, 0x63, 0x2d, 0x32, + 0xe4, 0x3d, 0xc8, 0x7a, 0xbc, 0x8c, 0x0b, 0x0d, 0xb3, 0xf2, 0x5d, 0x48, 0x5a, 0xcc, 0xec, 
0x8b, + 0x02, 0xc6, 0x94, 0x7f, 0x19, 0x9b, 0x34, 0x5f, 0xe3, 0xce, 0xc6, 0x5a, 0x85, 0x8c, 0x2a, 0x1f, + 0x2c, 0x31, 0x1d, 0xb7, 0x68, 0x40, 0x64, 0x80, 0xd5, 0xdc, 0x36, 0x74, 0x68, 0x28, 0x07, 0xe2, + 0x59, 0x4a, 0x15, 0x10, 0xe3, 0xae, 0xcc, 0xfe, 0x98, 0x40, 0x2d, 0xed, 0x7d, 0xc3, 0x6d, 0x9a, + 0xd7, 0xf2, 0x3e, 0x75, 0xad, 0x3c, 0x8d, 0x4e, 0xfc, 0x04, 0xce, 0x93, 0x41, 0x22, 0xdf, 0xc4, + 0x5f, 0xec, 0x77, 0x4f, 0xc5, 0x1a, 0x84, 0xae, 0x80, 0xf6, 0xa2, 0x2c, 0xf3, 0xb8, 0x33, 0x73, + 0xcf, 0x95, 0x77, 0x65, 0x7f, 0x9d, 0x45, 0x7a, 0x44, 0xdb, 0xe6, 0xd3, 0xf4, 0x2e, 0x40, 0x53, + 0x96, 0xf2, 0x89, 0x12, 0x49, 0x4b, 0x35, 0x94, 0xfd, 0xbb, 0xa6, 0x02, 0x39, 0xb4, 0xe1, 0xc8, + 0x82, 0x65, 0x26, 0x7d, 0x69, 0xc4, 0xc4, 0xa7, 0x57, 0x60, 0x4a, 0x96, 0x49, 0x0f, 0x91, 0xc4, + 0xc6, 0x59, 0x02, 0x24, 0xd9, 0x01, 0x33, 0xc4, 0x23, 0x4d, 0x30, 0x76, 0xf8, 0x0d, 0x37, 0xe1, + 0x63, 0x2f, 0xf8, 0x46, 0x9b, 0x08, 0x53, 0xbe, 0x9e, 0xa4, 0x09, 0x0d, 0x7e, 0x7f, 0x75, 0x98, + 0x81, 0xfa, 0x86, 0x9b, 0x20, 0x03, 0xf5, 0xcd, 0x35, 0xf1, 0x37, 0x33, 0x6c, 0x33, 0x69, 0x18, + 0xdf, 0xd2, 0x66, 0x42, 0xaf, 0xc1, 0xac, 0xe3, 0x36, 0x9e, 0xb8, 0xed, 0xb6, 0xfb, 0x9c, 0x1c, + 0x77, 0xcc, 0x8b, 0x27, 0x6b, 0x4e, 0x3b, 0xee, 0x3d, 0x5a, 0xb8, 0xe7, 0xb5, 0xe5, 0x96, 0x8b, + 0xf6, 0x70, 0x88, 0x2d, 0x17, 0x19, 0x86, 0x97, 0xda, 0x72, 0xbf, 0xce, 0x2c, 0x0d, 0x8c, 0x7b, + 0x7d, 0xeb, 0xec, 0xe6, 0x7f, 0xe5, 0xda, 0xcd, 0x78, 0xf3, 0xf2, 0x60, 0x88, 0x7f, 0xbe, 0x30, + 0xeb, 0x45, 0x90, 0x5e, 0x92, 0xe7, 0x60, 0x91, 0x3d, 0x39, 0x42, 0x4e, 0x8c, 0x40, 0x35, 0xbe, + 0x0c, 0x53, 0x3b, 0xd0, 0x67, 0x21, 0x62, 0xb8, 0xc0, 0xb6, 0xd3, 0xb7, 0xd0, 0x0c, 0xd9, 0x52, + 0xdf, 0x6c, 0x33, 0xff, 0x5f, 0x86, 0xe5, 0x72, 0xae, 0xef, 0xac, 0xd9, 0xd6, 0x81, 0xe3, 0xfa, + 0x81, 0xdd, 0xa4, 0xcf, 0x43, 0x85, 0x2e, 0x8e, 0xca, 0x49, 0xa8, 0x78, 0x8c, 0x50, 0x17, 0x47, + 0xab, 0x17, 0x1c, 0xca, 0xb7, 0x8e, 0xa8, 0xfb, 0x48, 0x14, 0x1a, 0xbd, 0x0f, 0x33, 0x4a, 0x91, + 0xbc, 0x66, 0xb1, 0xf7, 0x2f, 
0x55, 0x74, 0xbb, 0x65, 0xea, 0x90, 0xc6, 0x5f, 0x66, 0x60, 0xbe, + 0xfe, 0xc2, 0x0f, 0x70, 0x87, 0xe6, 0xad, 0x17, 0xc9, 0xb4, 0xa8, 0xa6, 0x97, 0xaa, 0x2f, 0xe4, + 0x21, 0xce, 0x5f, 0x22, 0xa7, 0x69, 0x19, 0x35, 0xe1, 0x56, 0x02, 0xd2, 0x97, 0xff, 0x04, 0x05, + 0xd9, 0x0b, 0xf6, 0xf2, 0x9f, 0x28, 0xd6, 0x51, 0x55, 0x70, 0xe4, 0x03, 0x84, 0x3d, 0xe1, 0xc2, + 0x6b, 0x9d, 0xdc, 0x45, 0x7d, 0x5a, 0x4a, 0x15, 0x7a, 0x21, 0xee, 0x2f, 0x8f, 0x8a, 0xef, 0x9e, + 0x24, 0x40, 0x23, 0x24, 0x6d, 0x2a, 0xcd, 0x18, 0x3f, 0x1b, 0x81, 0x73, 0x09, 0xdf, 0x5f, 0xc7, + 0xc1, 0x5f, 0xc5, 0x10, 0x3c, 0x83, 0x5c, 0xd8, 0x19, 0xa6, 0xd2, 0x9b, 0x2a, 0xef, 0xd2, 0x87, + 0xea, 0xc2, 0x31, 0xf0, 0x4f, 0x65, 0x10, 0xd4, 0x86, 0x8c, 0x1b, 0x70, 0x7d, 0xc3, 0x79, 0x86, + 0x9d, 0xc0, 0xf5, 0x5e, 0xf0, 0x75, 0x8b, 0x5b, 0xdc, 0xce, 0x4a, 0x75, 0x3d, 0xd2, 0x57, 0xf6, + 0x37, 0x47, 0xa0, 0x38, 0x00, 0x14, 0xfd, 0xff, 0x19, 0xf6, 0xbe, 0xb0, 0x2c, 0xe1, 0xac, 0xe8, + 0x7d, 0x61, 0xe5, 0xee, 0x8f, 0x7f, 0x53, 0xfb, 0xc5, 0x4c, 0x01, 0x1f, 0xfc, 0xd6, 0x9f, 0xbd, + 0xf4, 0x97, 0xea, 0x7d, 0x59, 0xfe, 0x01, 0xa0, 0x78, 0x03, 0x83, 0x14, 0x93, 0x63, 0xaa, 0x62, + 0x72, 0x1f, 0x16, 0xe4, 0x27, 0x28, 0x6f, 0x33, 0xd3, 0xcc, 0x0b, 0xda, 0x82, 0x51, 0xd6, 0x85, + 0x01, 0xc0, 0x9f, 0x9c, 0xdd, 0x74, 0x0f, 0xf8, 0xcb, 0xb8, 0x23, 0x85, 0x8c, 0xa9, 0x94, 0x1a, + 0xf7, 0x60, 0x31, 0x42, 0x97, 0xf3, 0xf5, 0xef, 0x82, 0x0c, 0x87, 0xa4, 0x84, 0x47, 0xcb, 0x67, + 0x7f, 0x79, 0x54, 0x9c, 0x09, 0xec, 0x0e, 0xbe, 0x19, 0x3e, 0x02, 0x20, 0xfe, 0x32, 0xb6, 0xd4, + 0x2b, 0x4b, 0xa9, 0xad, 0x66, 0xaa, 0x41, 0x77, 0x60, 0x82, 0x95, 0x44, 0x52, 0x6d, 0xab, 0xd0, + 0xe5, 0xb1, 0x9f, 0x1f, 0x15, 0xcf, 0x98, 0x1c, 0xd0, 0x58, 0xa4, 0xe1, 0x59, 0xf4, 0x47, 0x29, + 0x0c, 0xfd, 0x37, 0xf6, 0xd8, 0xd3, 0x33, 0x61, 0xb1, 0x4c, 0xe7, 0x3d, 0x56, 0x0a, 0x53, 0x19, + 0x08, 0x03, 0x83, 0x80, 0x73, 0xdc, 0xe7, 0x6d, 0xdc, 0x62, 0xaf, 0x14, 0x96, 0xa7, 0xb9, 0x81, + 0x61, 0xcc, 0x22, 0x04, 0x28, 0x9a, 0xf1, 0x09, 0x2c, 0x56, 0xda, 
0xd8, 0xf2, 0xa2, 0xed, 0xd1, + 0x07, 0x27, 0x48, 0x99, 0xee, 0x44, 0x69, 0x91, 0x22, 0xea, 0x44, 0xc9, 0x2b, 0xc9, 0x1d, 0x87, + 0x31, 0x75, 0xf5, 0x93, 0xc2, 0xeb, 0xc5, 0x38, 0xfd, 0x1d, 0x09, 0xee, 0x49, 0xf8, 0x7a, 0x06, + 0x67, 0x7c, 0x4c, 0xbd, 0xc7, 0xf9, 0x42, 0xb5, 0x5d, 0x27, 0xe4, 0xe0, 0xc3, 0x85, 0x9b, 0xfd, + 0x2f, 0x70, 0xa1, 0xd4, 0xed, 0x62, 0xa7, 0x15, 0x22, 0xee, 0x7a, 0xd6, 0x90, 0xb1, 0xe4, 0xa8, + 0x04, 0xe3, 0x14, 0x5a, 0x2a, 0x8e, 0x78, 0x77, 0x13, 0xba, 0x43, 0xe1, 0x78, 0x2e, 0x52, 0xda, + 0x00, 0xc3, 0x34, 0x5a, 0xb0, 0x54, 0xef, 0x3d, 0xee, 0xd8, 0x01, 0x75, 0xbd, 0xa4, 0x99, 0x3d, + 0x44, 0xdb, 0x1b, 0xe2, 0xb5, 0x30, 0x36, 0x18, 0x37, 0x42, 0x2f, 0x63, 0xea, 0xbd, 0xc9, 0xb3, + 0x7d, 0x3c, 0xbb, 0x73, 0x33, 0x44, 0xa5, 0xa9, 0x3f, 0x58, 0x2b, 0xb4, 0x9a, 0xbf, 0x28, 0x66, + 0xcc, 0xc3, 0x59, 0xf5, 0x9e, 0xcb, 0x56, 0xc8, 0x22, 0xcc, 0xeb, 0xf7, 0x57, 0x56, 0xfc, 0x15, + 0x2c, 0x30, 0x19, 0x82, 0xe5, 0x25, 0x5f, 0x0d, 0xd3, 0x6c, 0x8f, 0xec, 0xaf, 0x46, 0xdc, 0xdc, + 0xa8, 0x93, 0x87, 0x7c, 0x15, 0x63, 0x7f, 0x95, 0x45, 0x00, 0x3d, 0x5b, 0xd5, 0x54, 0x38, 0x23, + 0xfb, 0xab, 0xe5, 0x49, 0x9e, 0x93, 0x95, 0x50, 0x67, 0xd3, 0xff, 0x8d, 0x50, 0x5f, 0xa5, 0x41, + 0xa7, 0xeb, 0xd8, 0xa2, 0x0e, 0xe2, 0xc9, 0xa1, 0x7b, 0xb3, 0x30, 0x22, 0x93, 0x2e, 0x8e, 0xd8, + 0x2d, 0xe3, 0x0f, 0x33, 0x70, 0x83, 0x49, 0x33, 0xc9, 0x78, 0xf4, 0x32, 0x9b, 0x82, 0x8c, 0xde, + 0x83, 0x71, 0x5f, 0xd1, 0x0a, 0x1b, 0xbc, 0xe7, 0xfd, 0x28, 0x31, 0x04, 0x54, 0x82, 0x69, 0xd5, + 0x8d, 0x79, 0xb8, 0x7c, 0x6e, 0x66, 0xae, 0xf3, 0xc4, 0x92, 0xae, 0xcd, 0x4f, 0x61, 0xa5, 0xfa, + 0x35, 0x59, 0x10, 0xfc, 0x29, 0x67, 0x6e, 0xe5, 0x0b, 0x43, 0xc3, 0xe6, 0x76, 0xf9, 0x8a, 0x11, + 0x6e, 0xa9, 0xac, 0xe3, 0xd1, 0x62, 0x64, 0xc0, 0x34, 0x27, 0xe1, 0x85, 0xfe, 0xae, 0xa6, 0x56, + 0x66, 0xfc, 0x9b, 0x0c, 0x5c, 0x48, 0x6e, 0x8d, 0x33, 0x96, 0x0d, 0x38, 0x5b, 0xb1, 0x1c, 0xd7, + 0xb1, 0x9b, 0x56, 0xbb, 0xde, 0x3c, 0xc4, 0xad, 0x9e, 0xcc, 0xdc, 0x2a, 0xb9, 0xcc, 0x01, 0x76, + 0x04, 
0xba, 0x00, 0x31, 0xe3, 0x58, 0xe8, 0x5d, 0x38, 0x47, 0x9d, 0xff, 0x18, 0xef, 0x6d, 0x63, + 0x4f, 0xd2, 0x63, 0x3d, 0x4b, 0xa9, 0x45, 0xb7, 0x85, 0xb0, 0xd4, 0xda, 0x73, 0xec, 0x40, 0x22, + 0x31, 0x2f, 0xe4, 0xa4, 0x2a, 0xe3, 0x4f, 0xf8, 0x95, 0x4a, 0x7f, 0xee, 0x32, 0x4c, 0x60, 0x2c, + 0x72, 0xf0, 0x66, 0x34, 0x67, 0x46, 0x0d, 0x5a, 0x4f, 0xc6, 0x8b, 0xde, 0x82, 0xb1, 0xba, 0xb0, + 0x52, 0xcd, 0x46, 0x9e, 0x46, 0xe6, 0x18, 0xa4, 0xde, 0xa4, 0x50, 0xe8, 0x12, 0xc0, 0x1a, 0xf6, + 0x9b, 0xd8, 0xa1, 0x6f, 0x58, 0xb3, 0x4b, 0x85, 0x52, 0x12, 0xe6, 0x10, 0x1a, 0x4b, 0xcb, 0x21, + 0x34, 0xae, 0xe7, 0x10, 0x32, 0x9e, 0xb1, 0x1b, 0x58, 0xf4, 0x83, 0xf8, 0x24, 0x7d, 0x1c, 0x7b, + 0xf2, 0x5a, 0xbf, 0x85, 0x69, 0x95, 0xfb, 0x77, 0x63, 0xaf, 0x59, 0xa7, 0x27, 0x0c, 0xae, 0xc1, + 0x6b, 0x1a, 0x6c, 0x89, 0xdc, 0x09, 0x71, 0xab, 0xe6, 0xb9, 0x1d, 0x37, 0xd0, 0x9e, 0xe6, 0xe1, + 0x6f, 0xbe, 0x87, 0xe2, 0x30, 0x5f, 0x95, 0x91, 0x62, 0xe3, 0xd7, 0xe1, 0xda, 0x00, 0x8a, 0xfc, + 0xa3, 0xea, 0x70, 0xd6, 0x8a, 0xd4, 0x09, 0x73, 0xc3, 0xb5, 0xa4, 0xef, 0x8a, 0x12, 0xf2, 0xcd, + 0x38, 0xbe, 0xf1, 0xfb, 0x19, 0x36, 0x90, 0x3a, 0x13, 0x7a, 0x85, 0x84, 0xce, 0x97, 0x21, 0xc7, + 0x97, 0x4a, 0x28, 0x3c, 0x9a, 0x6a, 0x11, 0x7a, 0x0d, 0x66, 0xb8, 0x69, 0xdf, 0x0d, 0xc2, 0xd8, + 0x5b, 0x53, 0x2f, 0x34, 0x0e, 0xd9, 0x25, 0x37, 0xd6, 0x2f, 0xa9, 0x41, 0xd5, 0x1f, 0xc3, 0x48, + 0x63, 0xa6, 0xe2, 0x19, 0x8c, 0x3e, 0x53, 0xfa, 0x25, 0x5c, 0x24, 0x2d, 0xc5, 0x5c, 0x77, 0x4f, + 0xc5, 0x9b, 0xfb, 0xf7, 0x32, 0x70, 0x29, 0x8d, 0x3a, 0xff, 0x94, 0x87, 0x30, 0xaf, 0x78, 0xef, + 0x72, 0x1f, 0x61, 0xf1, 0x5d, 0x7d, 0x1c, 0x8b, 0x4d, 0xf4, 0x34, 0x46, 0x74, 0xe8, 0x2b, 0xf4, + 0x17, 0xec, 0x12, 0x9f, 0xf0, 0x74, 0xdc, 0xab, 0x7f, 0xf3, 0xd7, 0x6c, 0x40, 0xd3, 0x9f, 0xa5, + 0xfb, 0xa6, 0x9e, 0x72, 0x7b, 0x73, 0x57, 0x7b, 0xf4, 0x1c, 0x15, 0x60, 0xa1, 0x66, 0xee, 0xac, + 0xed, 0x55, 0x76, 0x1b, 0xbb, 0x9f, 0xd7, 0xaa, 0x8d, 0xbd, 0xed, 0x87, 0xdb, 0x3b, 0x8f, 0xb6, + 0x59, 0xfe, 0x78, 0xad, 0x66, 0xb7, 0x5a, 
0xda, 0xca, 0x67, 0xd0, 0x02, 0xe4, 0xb5, 0xe2, 0xea, + 0x5e, 0x39, 0x3f, 0xf2, 0xe6, 0x57, 0xda, 0x63, 0xde, 0xe8, 0x02, 0x14, 0xea, 0x7b, 0xb5, 0xda, + 0x8e, 0x29, 0xa9, 0xaa, 0xd9, 0xeb, 0x17, 0xe1, 0xac, 0x56, 0x7b, 0xcf, 0xac, 0x56, 0xf3, 0x19, + 0xd2, 0x15, 0xad, 0xb8, 0x66, 0x56, 0xb7, 0x36, 0xf6, 0xb6, 0xf2, 0x23, 0x6f, 0x36, 0xd4, 0xe0, + 0x18, 0xb4, 0x02, 0x4b, 0x6b, 0xd5, 0xfd, 0x8d, 0x4a, 0x35, 0x89, 0xf6, 0x02, 0xe4, 0xd5, 0xca, + 0xdd, 0x9d, 0xdd, 0x1a, 0x23, 0xad, 0x96, 0x3e, 0xaa, 0x96, 0x4b, 0x7b, 0xbb, 0xeb, 0xdb, 0xf9, + 0x51, 0x63, 0x2c, 0x3b, 0x92, 0x1f, 0x79, 0xf3, 0x47, 0x5a, 0xe4, 0x0c, 0xe9, 0x3e, 0x07, 0xdf, + 0xab, 0x97, 0xee, 0xa7, 0x37, 0xc1, 0x6a, 0xb7, 0xee, 0x95, 0xf2, 0x19, 0x74, 0x11, 0xce, 0x6b, + 0xa5, 0xb5, 0x52, 0xbd, 0xfe, 0x68, 0xc7, 0x5c, 0xdb, 0xac, 0xd6, 0xeb, 0xf9, 0x91, 0x37, 0xf7, + 0xb5, 0xec, 0x80, 0xa4, 0x85, 0xad, 0x7b, 0xa5, 0x86, 0x59, 0xfd, 0x74, 0x6f, 0xc3, 0xac, 0xae, + 0xc5, 0x5b, 0xd0, 0x6a, 0x3f, 0xaf, 0xd6, 0xf3, 0x19, 0x34, 0x0f, 0x73, 0x5a, 0xe9, 0xf6, 0x4e, + 0x7e, 0xe4, 0xcd, 0xeb, 0x3c, 0x51, 0x1c, 0x9a, 0x05, 0x58, 0xab, 0xd6, 0x2b, 0xd5, 0xed, 0xb5, + 0x8d, 0xed, 0xfb, 0xf9, 0x33, 0x68, 0x06, 0xa6, 0x4a, 0xf2, 0x67, 0xe6, 0xcd, 0xb2, 0x78, 0x51, + 0x59, 0x39, 0x79, 0x50, 0x0e, 0x26, 0xd7, 0xaa, 0xf7, 0x4a, 0x7b, 0x9b, 0xbb, 0xf9, 0x33, 0xe4, + 0x47, 0xc5, 0xac, 0x96, 0x76, 0xab, 0x6b, 0xf9, 0x0c, 0x9a, 0x82, 0xf1, 0xfa, 0x6e, 0x69, 0xb7, + 0x9a, 0x1f, 0x41, 0x59, 0x18, 0xdb, 0xab, 0x57, 0xcd, 0xfc, 0xe8, 0xea, 0xdf, 0xff, 0xbb, 0x19, + 0xc8, 0x11, 0x61, 0x44, 0x38, 0x8f, 0x7f, 0x05, 0xe7, 0xd4, 0x2b, 0x22, 0x39, 0x84, 0xf9, 0xf3, + 0xb1, 0x17, 0x45, 0x8c, 0x63, 0xd7, 0xa7, 0x05, 0x12, 0x8c, 0xca, 0xa5, 0xcb, 0x45, 0x11, 0x94, + 0xe4, 0x3e, 0x77, 0x92, 0x00, 0x6e, 0x64, 0x6e, 0x67, 0x90, 0x49, 0x4d, 0x32, 0xb2, 0x82, 0x3f, + 0x79, 0x7b, 0x31, 0x7a, 0x37, 0x65, 0xe5, 0xfc, 0xb3, 0x96, 0x53, 0xaa, 0xeb, 0xbd, 0x4e, 0xc7, + 0xf2, 0x5e, 0xa0, 0x5f, 0x03, 0x43, 0xa5, 0x99, 0x72, 0x2f, 0xfe, 0xee, 0x70, 
0xf7, 0x5f, 0xd1, + 0xe6, 0xf5, 0xe1, 0xc0, 0xd1, 0x03, 0x98, 0x21, 0xb7, 0x45, 0x09, 0x86, 0x56, 0xa2, 0x88, 0xca, + 0x25, 0x75, 0xf9, 0x42, 0x72, 0xa5, 0x7c, 0xe1, 0x69, 0x9a, 0x7e, 0x88, 0x1f, 0x58, 0x4e, 0x13, + 0xfb, 0x68, 0x51, 0x0d, 0x72, 0x71, 0x9a, 0xfc, 0x61, 0x80, 0xe5, 0xb3, 0x91, 0xe2, 0xfd, 0x3b, + 0xb7, 0x33, 0xa8, 0x4e, 0xb3, 0xf1, 0x69, 0xd7, 0x4e, 0x24, 0xa2, 0x19, 0xe2, 0xf7, 0x51, 0xd6, + 0x9b, 0xa2, 0x7c, 0x8f, 0x35, 0xe5, 0xbe, 0xba, 0x0d, 0x28, 0x7e, 0x9b, 0x43, 0x97, 0xc3, 0x75, + 0x90, 0x7c, 0xd1, 0x5b, 0x3e, 0x17, 0x73, 0x03, 0xa8, 0x12, 0x79, 0x1e, 0x55, 0x61, 0x96, 0x07, + 0xd9, 0xf2, 0xfb, 0x25, 0xea, 0x77, 0x43, 0x4d, 0x25, 0x73, 0x9f, 0x8e, 0x93, 0xbc, 0xa3, 0xa2, + 0xe5, 0xf0, 0x3b, 0xa2, 0x17, 0xd7, 0xe5, 0x95, 0xc4, 0x3a, 0xfe, 0x7d, 0xf7, 0x60, 0x56, 0xbf, + 0xee, 0x22, 0x31, 0x41, 0x89, 0xb7, 0xe0, 0xd4, 0x0e, 0x35, 0x60, 0x69, 0xcb, 0xb2, 0x9d, 0xc0, + 0xb2, 0x1d, 0xce, 0xc6, 0x85, 0x35, 0x16, 0x15, 0xfb, 0x98, 0x67, 0xeb, 0xd8, 0x69, 0xc9, 0x49, + 0x48, 0x7b, 0xbd, 0x80, 0x6e, 0x9b, 0xba, 0xb8, 0xb5, 0xe9, 0xa6, 0x76, 0x64, 0xe8, 0x6f, 0x6c, + 0x27, 0x79, 0x4f, 0x2c, 0xa7, 0x39, 0xfc, 0xa0, 0x2d, 0x7a, 0x6d, 0x8c, 0x50, 0x54, 0xd6, 0xc4, + 0x89, 0xc9, 0x15, 0x68, 0xa8, 0x77, 0x60, 0x47, 0x3d, 0x77, 0x7c, 0x94, 0x32, 0x70, 0xa9, 0xc4, + 0x6e, 0x67, 0xd0, 0x57, 0x74, 0x57, 0x27, 0x92, 0x7b, 0x64, 0x07, 0x87, 0x5c, 0x1e, 0x5f, 0x49, + 0x24, 0xc0, 0x37, 0x4a, 0x1f, 0xea, 0x26, 0x2c, 0x24, 0xf9, 0x18, 0xc9, 0x01, 0xed, 0xe3, 0x80, + 0x94, 0xba, 0x0a, 0x4c, 0x72, 0xf9, 0x6d, 0xa5, 0x4f, 0x52, 0x1f, 0x17, 0x97, 0x54, 0x9a, 0xdf, + 0x87, 0x59, 0xb2, 0x4a, 0x1e, 0x62, 0xdc, 0x2d, 0xb5, 0xed, 0x67, 0xd8, 0x47, 0x22, 0xc5, 0xb2, + 0x2c, 0x4a, 0xc3, 0xbd, 0x91, 0x41, 0xdf, 0x81, 0xdc, 0x23, 0x2b, 0x68, 0x1e, 0xf2, 0x94, 0xa2, + 0x22, 0xe3, 0x28, 0x2d, 0x5b, 0x16, 0xbf, 0x68, 0xe5, 0xed, 0x0c, 0xfa, 0x08, 0x26, 0xef, 0xe3, + 0x80, 0x46, 0x93, 0x5d, 0x91, 0x16, 0x6d, 0xe6, 0xda, 0xb6, 0xe1, 0x48, 0x87, 0x65, 0xd1, 0xe1, + 0xa8, 0xc3, 0x1d, 
0xba, 0x05, 0xc0, 0x18, 0x02, 0xa5, 0x10, 0xad, 0x5e, 0x8e, 0x75, 0x1b, 0xdd, + 0x27, 0x02, 0x40, 0x1b, 0x07, 0x78, 0xd8, 0x26, 0xd3, 0xc6, 0x68, 0x13, 0x66, 0xe5, 0xa3, 0x52, + 0xdb, 0x34, 0xe1, 0x82, 0x11, 0x21, 0xe6, 0x9f, 0x80, 0xda, 0x07, 0x64, 0x57, 0xb0, 0x17, 0xa1, + 0x69, 0x64, 0x3e, 0xe5, 0xa4, 0x4b, 0x6a, 0x78, 0xbf, 0xca, 0x42, 0xc5, 0x20, 0x32, 0x30, 0x05, + 0x77, 0xdd, 0xf5, 0x03, 0x1d, 0x57, 0x96, 0x24, 0xe3, 0xfe, 0x2a, 0x2c, 0xab, 0xed, 0xea, 0xb9, + 0xae, 0x43, 0x9e, 0x9b, 0x96, 0x42, 0x7b, 0xf9, 0x4a, 0x1f, 0x08, 0xae, 0x51, 0x18, 0xfd, 0xed, + 0x91, 0x0c, 0x65, 0x27, 0x6b, 0x30, 0x2f, 0xda, 0xda, 0xe9, 0x62, 0xa7, 0x5e, 0x5f, 0xa7, 0x0f, + 0x02, 0x9d, 0x17, 0x19, 0x69, 0xc3, 0x32, 0x41, 0x1d, 0xc5, 0xab, 0xc8, 0xd1, 0xa7, 0x45, 0xe0, + 0xa3, 0x7e, 0x71, 0xf9, 0xe1, 0xd1, 0x97, 0x98, 0x84, 0xf9, 0x21, 0x53, 0x73, 0x6a, 0xd7, 0xd1, + 0xfd, 0x55, 0xd4, 0xe7, 0x4a, 0xbe, 0x9c, 0x72, 0xa9, 0xbd, 0x9d, 0x41, 0x9f, 0x03, 0x8a, 0x5f, + 0x92, 0xe5, 0x10, 0xa6, 0x2a, 0x04, 0xe4, 0x10, 0xf6, 0xb9, 0x61, 0xdf, 0x87, 0x45, 0x99, 0x7f, + 0x43, 0x69, 0x75, 0x15, 0xa5, 0xf4, 0x26, 0xad, 0x97, 0xe8, 0x13, 0x98, 0xe7, 0x8b, 0x56, 0xad, + 0x40, 0x79, 0xc9, 0x7f, 0xf8, 0x3d, 0x39, 0x75, 0x9d, 0x3e, 0x80, 0xc5, 0x7a, 0x64, 0xc4, 0x98, + 0x4f, 0xda, 0x79, 0x9d, 0x04, 0x2d, 0xac, 0xe3, 0x80, 0x0d, 0x59, 0x32, 0xad, 0x87, 0x80, 0x98, + 0x9a, 0x52, 0x90, 0x7b, 0x66, 0xe3, 0xe7, 0xe8, 0x62, 0xa4, 0xeb, 0xa4, 0x90, 0x82, 0x51, 0x06, + 0x96, 0xfa, 0x65, 0xbb, 0xec, 0x3d, 0x73, 0x5a, 0x5a, 0xb1, 0xba, 0xd6, 0x63, 0xbb, 0x6d, 0x07, + 0x36, 0x26, 0x13, 0xa0, 0x22, 0xa8, 0x55, 0x62, 0x02, 0xce, 0xa7, 0x42, 0xa0, 0x27, 0xd4, 0x8a, + 0xc7, 0x92, 0xe0, 0x26, 0x54, 0x5f, 0x97, 0x3b, 0x3e, 0x19, 0x20, 0x14, 0x79, 0xfa, 0xc3, 0xa1, + 0xdf, 0xa0, 0xf9, 0x59, 0xfb, 0x2b, 0x14, 0xd0, 0x77, 0x92, 0xf4, 0x3e, 0x29, 0x2a, 0x91, 0xe5, + 0xb7, 0x86, 0x03, 0x96, 0x2a, 0x9c, 0x99, 0xfb, 0x38, 0xa8, 0xb5, 0x7b, 0x07, 0x36, 0x7d, 0x51, + 0x17, 0xc9, 0x1b, 0xbe, 0x2c, 0xe2, 0xeb, 0x5f, 0xe4, 
0xb5, 0x0a, 0x2b, 0xea, 0xf8, 0xc7, 0x68, + 0x03, 0xf2, 0xec, 0x9c, 0x51, 0x48, 0x5c, 0x8c, 0x91, 0xe0, 0x20, 0x96, 0x67, 0x75, 0xfc, 0xd4, + 0x55, 0x71, 0x0b, 0xc6, 0x88, 0x78, 0x8a, 0xc4, 0xde, 0x57, 0x05, 0xd9, 0x79, 0xad, 0x4c, 0xbe, + 0x8a, 0xb0, 0x48, 0xe7, 0xc8, 0xc7, 0x81, 0xc8, 0xe4, 0xc1, 0xcc, 0xeb, 0x57, 0x43, 0xa1, 0x22, + 0x5e, 0x1b, 0xb2, 0x98, 0x48, 0xd6, 0xa9, 0xfd, 0xbb, 0x48, 0xbe, 0x31, 0x9d, 0x40, 0xf4, 0xba, + 0x26, 0xfb, 0x9c, 0x8c, 0xee, 0x13, 0xf6, 0x68, 0x7d, 0x1c, 0xc9, 0x47, 0xaf, 0xe9, 0x01, 0x32, + 0x29, 0x44, 0xaf, 0x0d, 0x80, 0x92, 0x2f, 0x37, 0x4c, 0xf2, 0xe4, 0x7a, 0x68, 0x31, 0x1c, 0x03, + 0xf2, 0x5b, 0x10, 0x9a, 0x51, 0x7a, 0xb7, 0xbf, 0x4a, 0x59, 0x34, 0x39, 0xf4, 0x89, 0x48, 0xde, + 0xf3, 0x3c, 0xec, 0x30, 0xe4, 0x34, 0xf9, 0x29, 0x09, 0xfb, 0x63, 0xca, 0x4a, 0x15, 0x6c, 0xa6, + 0x7e, 0x1a, 0x44, 0x82, 0xbd, 0xbb, 0x75, 0x3b, 0x83, 0xde, 0x83, 0x2c, 0xef, 0x23, 0x41, 0xd2, + 0x3a, 0xed, 0xf7, 0xe9, 0x35, 0xc5, 0x04, 0x36, 0x19, 0xb4, 0xcf, 0x3a, 0x4c, 0xda, 0x2a, 0x63, + 0x7d, 0x7e, 0x8f, 0x08, 0x0f, 0xad, 0x97, 0xc1, 0xac, 0x08, 0x29, 0x82, 0x62, 0x16, 0x64, 0x62, + 0x0c, 0x51, 0x34, 0xe0, 0xb8, 0x67, 0x44, 0xc8, 0x3d, 0x80, 0xa6, 0xa7, 0x93, 0x59, 0xa6, 0xe4, + 0x3d, 0x40, 0x2b, 0x1e, 0x24, 0x3b, 0x6c, 0x40, 0xbe, 0xd4, 0xa4, 0x27, 0x5b, 0x1d, 0x77, 0xac, + 0xee, 0xa1, 0xeb, 0x61, 0x79, 0x09, 0x8b, 0x56, 0x08, 0x5a, 0x8b, 0x52, 0x52, 0xe2, 0x15, 0x9b, + 0xd8, 0xa2, 0x29, 0xa0, 0x97, 0xa4, 0xa8, 0x14, 0xa9, 0x4a, 0xc6, 0xe8, 0x73, 0xe9, 0x5a, 0xa8, + 0x90, 0x6b, 0x62, 0xfb, 0xd5, 0xc8, 0x7c, 0x40, 0x19, 0x93, 0x04, 0xf6, 0xe5, 0x89, 0x27, 0x8b, + 0xe4, 0xf5, 0x54, 0x78, 0x05, 0x4b, 0xd0, 0x2d, 0x98, 0xa5, 0x9a, 0xb1, 0xb0, 0xe4, 0x82, 0xb2, + 0x7f, 0xc2, 0xe2, 0xe8, 0x85, 0x3f, 0x5a, 0x2b, 0x53, 0xf5, 0xce, 0xf1, 0xd4, 0xa1, 0x72, 0x94, + 0xd3, 0x3a, 0x93, 0xf6, 0x35, 0xdf, 0x83, 0xd9, 0x2a, 0x39, 0xef, 0x7a, 0x2d, 0x9b, 0xbd, 0xd3, + 0x80, 0xf4, 0x04, 0xfb, 0xa9, 0x88, 0xeb, 0xe2, 0xe5, 0x3c, 0x8a, 0xca, 0x35, 0x23, 0xe2, 
0xc8, + 0x55, 0xca, 0xc4, 0xa7, 0x2c, 0x08, 0xb2, 0xfc, 0xa9, 0x0c, 0xaa, 0xb9, 0xe0, 0xaa, 0x90, 0x25, + 0x26, 0x30, 0x97, 0xc2, 0x04, 0xe8, 0x3c, 0x76, 0xfd, 0x9a, 0x76, 0xc3, 0x8e, 0xd5, 0x0b, 0xda, + 0x71, 0x99, 0xfa, 0x33, 0xe5, 0xe5, 0xee, 0x14, 0x9a, 0x29, 0xf5, 0x83, 0x96, 0xb6, 0xcc, 0x6d, + 0x5c, 0x6a, 0xb7, 0x63, 0xc8, 0x3e, 0x7a, 0x43, 0xa7, 0x9e, 0x04, 0x33, 0xa8, 0x05, 0xaa, 0xc1, + 0x60, 0x42, 0x69, 0xa9, 0xdb, 0x65, 0x3c, 0xfe, 0x92, 0xe4, 0x3f, 0x7a, 0x45, 0x5c, 0x83, 0x11, + 0xad, 0xe7, 0x4b, 0xe5, 0x01, 0x5d, 0xb5, 0xe1, 0xf3, 0xde, 0x48, 0xd5, 0x07, 0x44, 0x5f, 0x37, + 0x97, 0x32, 0x6a, 0xa4, 0x52, 0x1e, 0x6f, 0x22, 0x4f, 0xbe, 0xd0, 0xec, 0x22, 0x75, 0xa1, 0xc6, + 0xdf, 0x10, 0x5f, 0xbe, 0x94, 0x56, 0x2d, 0x4d, 0x1b, 0x79, 0xbe, 0x98, 0xc2, 0x0e, 0x5e, 0xd2, + 0x8e, 0xb5, 0x78, 0x1f, 0x8b, 0xa9, 0xf5, 0xf2, 0x93, 0xf3, 0xd1, 0xd7, 0xdb, 0x25, 0xd1, 0x94, + 0x67, 0xdd, 0x53, 0xe7, 0xe4, 0x1e, 0x2c, 0xa8, 0x33, 0x2a, 0xbf, 0x3b, 0xed, 0x30, 0x49, 0xa3, + 0xb3, 0x0b, 0x8b, 0x89, 0x8f, 0xad, 0x4b, 0xc9, 0xa0, 0xdf, 0x53, 0xec, 0xa9, 0x54, 0x31, 0x9c, + 0xe3, 0x8a, 0x8f, 0x88, 0xca, 0x5d, 0x1e, 0xe2, 0xc9, 0xd5, 0xd1, 0x43, 0x3c, 0x0d, 0x8a, 0x0f, + 0xe8, 0x57, 0xf4, 0x40, 0x8d, 0xb5, 0x71, 0x45, 0xd1, 0x94, 0xa4, 0x34, 0x60, 0xf4, 0x03, 0x91, + 0x6b, 0x60, 0x21, 0xa1, 0x3a, 0x7d, 0x88, 0xaf, 0xa6, 0xd3, 0x54, 0xf3, 0x4d, 0x2f, 0x26, 0x9a, + 0x22, 0xe4, 0x78, 0xf7, 0x33, 0x82, 0x2c, 0xbf, 0xd6, 0x1f, 0x88, 0xb7, 0xb1, 0x2f, 0x72, 0x00, + 0xa7, 0x8e, 0x7e, 0xdf, 0xb7, 0xff, 0xfb, 0x5c, 0xe7, 0x97, 0xe5, 0x9a, 0x1b, 0x7e, 0x58, 0xd2, + 0xa8, 0xb5, 0xa4, 0xe6, 0x4c, 0x7b, 0x98, 0x3f, 0xaa, 0x39, 0xd3, 0x2a, 0x45, 0x0f, 0xaf, 0xf6, + 0x85, 0x51, 0x2e, 0xd5, 0xe8, 0x4b, 0xa6, 0x4a, 0xd3, 0x9b, 0x50, 0x55, 0x69, 0x89, 0xf4, 0x2f, + 0xa7, 0x03, 0xa8, 0xc4, 0x2d, 0xe6, 0xc9, 0xa1, 0x83, 0xf8, 0x48, 0xbd, 0xad, 0x46, 0xea, 0xa2, + 0xeb, 0x2f, 0x11, 0x44, 0x6d, 0xe2, 0x91, 0xd8, 0xe7, 0x29, 0xa3, 0x94, 0x54, 0x39, 0x94, 0x64, + 0xb5, 0x03, 0x85, 0x70, 0x32, 
0x23, 0x1f, 0x70, 0xc2, 0xa9, 0x14, 0x83, 0x71, 0x3e, 0xe4, 0x15, + 0x51, 0x8a, 0xaf, 0xc7, 0xb8, 0x49, 0xca, 0xc0, 0xf4, 0x6d, 0x82, 0x9d, 0x19, 0x4a, 0x4e, 0xe1, + 0x95, 0x50, 0x8f, 0x1e, 0x96, 0x26, 0x9c, 0x19, 0x6a, 0x25, 0xdf, 0x24, 0x9b, 0x54, 0x94, 0x0f, + 0x2b, 0xd2, 0xbf, 0xfa, 0x62, 0x12, 0x9d, 0xc8, 0x34, 0x95, 0xe1, 0x2c, 0x13, 0x23, 0x86, 0x21, + 0x98, 0x64, 0x5b, 0xbc, 0x9d, 0x09, 0x8f, 0x07, 0xe5, 0x03, 0x85, 0x8c, 0x1a, 0xad, 0x38, 0xc9, + 0xf1, 0x30, 0x4c, 0x97, 0xd2, 0xe8, 0xac, 0x41, 0x8e, 0x7d, 0x36, 0x3b, 0xf5, 0xcf, 0x6b, 0xe3, + 0xad, 0x1d, 0xf8, 0xcb, 0xda, 0x28, 0xe9, 0x67, 0x7d, 0x85, 0x9a, 0x05, 0x44, 0x71, 0x7a, 0x2f, + 0x56, 0xe2, 0x34, 0x7c, 0x55, 0x60, 0xa0, 0x6f, 0x52, 0x48, 0x2a, 0x2b, 0xea, 0x7b, 0x1f, 0x21, + 0xb8, 0x3e, 0xf9, 0x91, 0xca, 0xd0, 0xbc, 0x20, 0x47, 0x94, 0x7d, 0xd9, 0x85, 0xe8, 0x40, 0x6b, + 0x1f, 0x97, 0x3e, 0x3c, 0x48, 0x1d, 0xe6, 0x01, 0x9f, 0x97, 0x7e, 0x0f, 0x98, 0xe7, 0x6f, 0x37, + 0x53, 0xdd, 0x87, 0xc8, 0xbd, 0x75, 0x4e, 0xd3, 0x88, 0x48, 0x2b, 0x7b, 0x2a, 0x99, 0x1a, 0x9c, + 0x63, 0x62, 0x69, 0x2c, 0xb7, 0xd6, 0x6b, 0x9a, 0xd4, 0x1a, 0xad, 0x4e, 0x17, 0x5a, 0xe5, 0x41, + 0x92, 0x4a, 0x31, 0xb9, 0x7a, 0xd0, 0xb0, 0xfd, 0x50, 0x39, 0x48, 0xa2, 0xb8, 0x3e, 0xba, 0x11, + 0x95, 0x58, 0x63, 0x20, 0x83, 0x0f, 0x2a, 0xee, 0xec, 0x16, 0x49, 0xfd, 0x66, 0x68, 0xe3, 0xa0, + 0x57, 0xa6, 0x8f, 0x82, 0x29, 0xf6, 0x52, 0x0a, 0xb5, 0xa4, 0xca, 0x41, 0x3d, 0xfc, 0x42, 0xe1, + 0xbe, 0x3a, 0xa6, 0x2f, 0xd5, 0x27, 0x69, 0x00, 0x83, 0x68, 0x6f, 0xc3, 0x62, 0xfc, 0x03, 0xed, + 0x26, 0x96, 0x22, 0x46, 0x62, 0x6d, 0xfa, 0xf7, 0xdf, 0x17, 0x22, 0x62, 0x94, 0xde, 0xb9, 0x88, + 0x32, 0x7f, 0x50, 0xc7, 0xbe, 0x12, 0x27, 0x44, 0xe4, 0x9b, 0xec, 0x26, 0x8e, 0x9e, 0x10, 0x09, + 0x10, 0x83, 0xa8, 0xaf, 0xc3, 0x5c, 0xdd, 0x3e, 0x70, 0xe4, 0x73, 0xfb, 0x75, 0x53, 0xde, 0xfe, + 0x94, 0xb2, 0x28, 0xbb, 0xd2, 0xaa, 0x64, 0xfa, 0x81, 0x05, 0x71, 0x6d, 0x51, 0x1f, 0xef, 0x47, + 0x31, 0x1c, 0x45, 0x0d, 0xbf, 0x92, 0x58, 0x17, 0x27, 0xa8, 0xbe, 
0x9c, 0x2f, 0x09, 0x26, 0x3c, + 0xe2, 0x2f, 0x09, 0x26, 0x3e, 0xb5, 0x7f, 0x8b, 0x6a, 0xaf, 0x4c, 0xb7, 0x8d, 0x55, 0xed, 0x95, + 0xf2, 0x14, 0x7b, 0x44, 0x79, 0x84, 0x3e, 0x86, 0x29, 0xf9, 0x54, 0xbd, 0xb4, 0x7b, 0x44, 0x5f, + 0xcb, 0x5f, 0x2e, 0xc4, 0x2b, 0x78, 0x83, 0x0d, 0x11, 0x73, 0x29, 0xf3, 0x11, 0x31, 0x52, 0x86, + 0xa6, 0x6d, 0xd3, 0x2b, 0xa3, 0xc2, 0x5a, 0x32, 0x0c, 0x6f, 0xe0, 0x1d, 0xa1, 0xa1, 0xa2, 0x1f, + 0x55, 0xd0, 0x35, 0x88, 0xe9, 0xdf, 0xf5, 0x8e, 0x50, 0x4f, 0x69, 0x68, 0xb1, 0x97, 0xf0, 0xa3, + 0x68, 0xdf, 0x83, 0xe9, 0xf0, 0xd5, 0xfb, 0xfd, 0x55, 0x05, 0x31, 0xf2, 0x14, 0x7e, 0x14, 0xf1, + 0x3d, 0x61, 0x4b, 0xa3, 0xed, 0xe9, 0x95, 0xfd, 0x65, 0x97, 0x8f, 0x85, 0x3a, 0x4c, 0xeb, 0x69, + 0xec, 0x0d, 0xfd, 0x3e, 0xdc, 0x7d, 0x5a, 0x7d, 0x86, 0x56, 0xae, 0x9d, 0x84, 0x87, 0xa4, 0xe5, + 0xda, 0x49, 0x7a, 0x08, 0x3a, 0xb4, 0x35, 0x7d, 0x2e, 0x94, 0x35, 0x21, 0xd1, 0x8b, 0x5a, 0xb7, + 0x62, 0x74, 0x2f, 0xa5, 0x55, 0x47, 0x49, 0xd7, 0x21, 0x1f, 0x7d, 0x33, 0x57, 0xde, 0x74, 0x53, + 0x1e, 0x37, 0x96, 0xd7, 0xe7, 0xd4, 0xc7, 0x76, 0x6b, 0xc2, 0x30, 0xa3, 0xd3, 0xbd, 0x92, 0xdc, + 0x29, 0x95, 0x74, 0xba, 0xa5, 0x66, 0x46, 0x7b, 0x3e, 0x57, 0xd5, 0x41, 0xc4, 0x9e, 0xe7, 0x55, + 0xe5, 0xc9, 0x84, 0x17, 0x77, 0x6d, 0x91, 0xf4, 0x23, 0xd1, 0x57, 0x40, 0xaa, 0x61, 0x06, 0xa7, + 0x32, 0x1f, 0xe8, 0x77, 0x80, 0x7e, 0x05, 0x96, 0x52, 0x32, 0x2b, 0xa3, 0x6b, 0x11, 0xd5, 0x7b, + 0x72, 0xe6, 0x65, 0xb9, 0x40, 0x12, 0xdf, 0xb5, 0xdf, 0xa2, 0x0e, 0x2b, 0x5a, 0x1c, 0x69, 0xcc, + 0x08, 0xfc, 0x28, 0x8c, 0xbf, 0x0b, 0x07, 0x39, 0x31, 0x00, 0x15, 0xd5, 0xe9, 0x0d, 0x4c, 0x0f, + 0x06, 0x8e, 0xdb, 0x81, 0x13, 0x08, 0x2e, 0x27, 0x13, 0x24, 0x6c, 0x44, 0x18, 0x12, 0x23, 0x54, + 0x55, 0x43, 0x62, 0x62, 0xe4, 0xb3, 0x66, 0x48, 0x4c, 0x89, 0x4f, 0xae, 0xc1, 0x7c, 0x42, 0xfc, + 0xb0, 0x5c, 0x66, 0xe9, 0xb1, 0xc5, 0xa9, 0x23, 0x50, 0x13, 0xf2, 0x5d, 0x32, 0xc5, 0xf4, 0x50, + 0xe2, 0x54, 0x8a, 0x0f, 0x08, 0xc5, 0x58, 0x74, 0x30, 0x4a, 0x01, 0xef, 0xcf, 0x98, 0x4c, 0x21, + 0x2e, 
0xe8, 0x58, 0xab, 0x4a, 0xff, 0xd2, 0xe2, 0x90, 0x53, 0xfb, 0x57, 0x15, 0x5b, 0x35, 0xb9, + 0x7f, 0xc3, 0x0a, 0x0c, 0xd2, 0xa6, 0x1b, 0x09, 0xcd, 0xd7, 0x3e, 0x54, 0x29, 0x5f, 0x4e, 0x29, + 0x47, 0xdb, 0xd4, 0xb9, 0x2d, 0x5a, 0xaa, 0xdc, 0xf2, 0x93, 0x63, 0xff, 0x53, 0xe9, 0xb1, 0x2d, + 0xa2, 0x85, 0xce, 0x9e, 0x64, 0x8b, 0x44, 0x62, 0x6e, 0xf9, 0x16, 0xd1, 0x83, 0x77, 0x4f, 0xb4, + 0x45, 0x22, 0x04, 0xd5, 0x2d, 0x12, 0xa1, 0x7a, 0x39, 0xa2, 0x78, 0xe8, 0xbf, 0x45, 0x52, 0xe2, + 0x89, 0xe5, 0x16, 0x89, 0x8e, 0x40, 0x54, 0x2b, 0x93, 0xba, 0x60, 0xa2, 0x23, 0x20, 0xb7, 0x48, + 0x32, 0xc5, 0xf4, 0x20, 0xf2, 0x54, 0x8a, 0x72, 0x8b, 0xe8, 0x14, 0x53, 0xc0, 0x87, 0xdc, 0x22, + 0xd1, 0x46, 0xf4, 0x2d, 0x72, 0xa2, 0xfe, 0xc9, 0x2d, 0x92, 0xdc, 0xbf, 0x13, 0x6f, 0x91, 0x48, + 0xbe, 0x09, 0xed, 0x43, 0x93, 0xb6, 0x48, 0x14, 0x9e, 0x6d, 0x91, 0x68, 0x69, 0x44, 0x11, 0xd6, + 0x67, 0x8b, 0x44, 0x31, 0x3f, 0xa5, 0xf4, 0x22, 0xf1, 0xc0, 0xc3, 0x6c, 0x92, 0xd4, 0x50, 0x62, + 0xf4, 0x88, 0xaa, 0x7b, 0xa3, 0x61, 0xde, 0x43, 0x6d, 0x94, 0x0b, 0x69, 0x44, 0xe9, 0x56, 0xe1, + 0xd2, 0x6d, 0x02, 0xe5, 0x70, 0x2b, 0xa4, 0x04, 0xb7, 0x6b, 0xd2, 0x6d, 0x6a, 0x04, 0xfa, 0xbe, + 0x98, 0xa5, 0xe8, 0x78, 0xe8, 0x8a, 0xcc, 0xe4, 0x78, 0xeb, 0x3e, 0x23, 0xb2, 0x4f, 0x16, 0x66, + 0xab, 0x0f, 0xdd, 0x7e, 0xe1, 0xe2, 0x7d, 0xe8, 0xca, 0x2b, 0x64, 0x94, 0x6e, 0x2a, 0x4a, 0xff, + 0x0d, 0xf4, 0x99, 0xb0, 0xa8, 0x45, 0xf1, 0x56, 0x23, 0x97, 0xd2, 0x13, 0xf7, 0x54, 0x5e, 0x4e, + 0xa3, 0x3d, 0x3d, 0xe9, 0x46, 0xda, 0x12, 0x42, 0x55, 0x2c, 0x07, 0x4b, 0xe4, 0xa3, 0xd5, 0xcd, + 0x94, 0x5a, 0x83, 0x76, 0xa9, 0xf1, 0x20, 0x5e, 0xae, 0x18, 0x1e, 0xd2, 0x92, 0xbd, 0x0c, 0xa4, + 0x1a, 0x8b, 0x98, 0x57, 0xa9, 0xa6, 0x85, 0xd3, 0x4b, 0xaa, 0x71, 0xec, 0x4f, 0xa8, 0x2a, 0x94, + 0xc7, 0xe4, 0x3a, 0x4f, 0xdc, 0xc1, 0x9a, 0xcb, 0x10, 0x96, 0xfa, 0x6c, 0x7e, 0x9f, 0x5b, 0xa0, + 0x45, 0x61, 0xea, 0xe0, 0x27, 0xe1, 0xa3, 0x4f, 0x20, 0xcf, 0xf9, 0x67, 0x48, 0x20, 0x09, 0x30, + 0x75, 0xea, 0xca, 0x42, 0x71, 0x3a, 0x44, 
0x0f, 0x86, 0x51, 0x98, 0x0e, 0x33, 0x12, 0xe9, 0x1a, + 0x41, 0x72, 0x94, 0xef, 0x7a, 0x3d, 0x3f, 0xc0, 0xad, 0xb8, 0x26, 0x4f, 0xef, 0x8c, 0xf0, 0x20, + 0xd2, 0xc1, 0xf7, 0x57, 0xd1, 0x06, 0x65, 0x9e, 0x7a, 0x71, 0x3f, 0xb5, 0x69, 0x32, 0x19, 0xca, + 0xdb, 0xb6, 0x64, 0xe0, 0xa7, 0xde, 0xa7, 0xb4, 0xb6, 0x53, 0x3b, 0x25, 0x1c, 0x32, 0xf8, 0x38, + 0x0d, 0xf9, 0x89, 0x69, 0xe3, 0xf4, 0x21, 0xf5, 0x65, 0x61, 0xba, 0xd7, 0x41, 0xc3, 0x13, 0x0d, + 0xa1, 0x42, 0x55, 0x98, 0x12, 0xc8, 0x83, 0x47, 0x25, 0x8a, 0x4d, 0x46, 0x85, 0x7d, 0xcb, 0x0f, + 0xe8, 0x4b, 0xfc, 0xf5, 0xc0, 0x0a, 0xec, 0xe6, 0x00, 0x62, 0xd2, 0x25, 0x43, 0x01, 0xde, 0x5f, + 0x45, 0x5f, 0x31, 0x03, 0x50, 0x24, 0x26, 0x4c, 0x33, 0x00, 0x25, 0xc7, 0xb1, 0x69, 0x06, 0xa0, + 0xb4, 0x90, 0xb2, 0x35, 0x98, 0xd1, 0x22, 0x82, 0xe5, 0xf5, 0x34, 0x29, 0x4e, 0xb8, 0xcf, 0x8a, + 0x9c, 0xd1, 0x22, 0x7f, 0x25, 0x95, 0xa4, 0x78, 0xe0, 0x54, 0x2a, 0x1f, 0x41, 0x8e, 0xcf, 0x7b, + 0xdf, 0x29, 0x4b, 0xd7, 0xc8, 0x2e, 0x2a, 0xb1, 0x0c, 0xbd, 0x96, 0x1d, 0x54, 0x5c, 0xe7, 0x89, + 0x7d, 0x30, 0x70, 0xf6, 0xe2, 0x28, 0xfb, 0xab, 0xe8, 0x4b, 0xfa, 0x5e, 0xb4, 0x78, 0x4e, 0x1f, + 0x07, 0xcf, 0x5d, 0xef, 0xa9, 0xed, 0x1c, 0x0c, 0x20, 0x79, 0x59, 0x27, 0x19, 0xc5, 0x13, 0x2b, + 0xfc, 0x4b, 0x58, 0xae, 0xa7, 0x13, 0x1f, 0x48, 0xa4, 0xff, 0x41, 0x58, 0x87, 0x0b, 0xd4, 0x29, + 0xed, 0xa4, 0x7d, 0xef, 0x4b, 0xf4, 0x73, 0x96, 0x70, 0x4c, 0x58, 0x86, 0x9a, 0xae, 0xd7, 0x1a, + 0x4c, 0xb1, 0xa8, 0xbb, 0xe0, 0x47, 0xd0, 0xc4, 0x60, 0x7c, 0x0e, 0xe7, 0xeb, 0xa9, 0xa4, 0x07, + 0x91, 0x18, 0x24, 0x54, 0xaf, 0xd0, 0xa1, 0x38, 0x61, 0xbf, 0xfb, 0xd2, 0xdc, 0xa0, 0xdc, 0x97, + 0x9c, 0x98, 0x35, 0x0f, 0x3f, 0xc1, 0x1e, 0x0d, 0xf4, 0x18, 0x14, 0xe2, 0xa0, 0x83, 0x8b, 0x2f, + 0xdf, 0x80, 0xb3, 0xf5, 0x18, 0xa9, 0x34, 0x94, 0x41, 0x66, 0xcb, 0x79, 0xfa, 0xa5, 0x43, 0xf6, + 0x6b, 0x80, 0x3f, 0x5e, 0xee, 0x3e, 0x0e, 0xf6, 0x36, 0x06, 0x8c, 0x92, 0x88, 0x44, 0x12, 0x80, + 0xfb, 0x77, 0x08, 0x66, 0x5d, 0xc1, 0x8c, 0x43, 0xa4, 0x6e, 0xde, 0x1f, 0x08, 
0x6b, 0xd9, 0xc0, + 0x66, 0xd3, 0x28, 0xdc, 0xa5, 0x0c, 0x9b, 0x07, 0x3b, 0x2c, 0x85, 0xc2, 0x0a, 0x2b, 0x09, 0x75, + 0xad, 0x4a, 0xdc, 0x83, 0x8f, 0x4a, 0xec, 0x92, 0xcd, 0x96, 0x07, 0x2f, 0xbb, 0x14, 0x0b, 0x82, + 0xe9, 0x4b, 0x82, 0xe9, 0xc9, 0x37, 0xdd, 0xe6, 0x53, 0x55, 0x4f, 0x4e, 0x7e, 0x47, 0xf5, 0xbb, + 0xa4, 0x6c, 0x7f, 0x95, 0x1f, 0x4b, 0xe4, 0x87, 0xe6, 0x62, 0x49, 0x0b, 0xc2, 0x63, 0x29, 0x5a, + 0x2e, 0x3d, 0x84, 0xa9, 0x92, 0x9d, 0x61, 0xab, 0x4a, 0x76, 0x0d, 0xbd, 0x10, 0xaf, 0x90, 0xcf, + 0xea, 0x73, 0xe5, 0x32, 0xed, 0xb0, 0xde, 0xb3, 0xd4, 0xa1, 0x95, 0x7a, 0x65, 0x8a, 0xa4, 0xeb, + 0x95, 0xd5, 0x0f, 0x4d, 0xb7, 0x16, 0x21, 0x13, 0x77, 0xdb, 0x34, 0xfe, 0xa2, 0xe3, 0x32, 0x9c, + 0x50, 0x4d, 0x10, 0xaf, 0x1a, 0xec, 0x69, 0x39, 0xcf, 0x1d, 0xea, 0xb4, 0x89, 0x93, 0x19, 0x49, + 0xe3, 0x75, 0xe1, 0x54, 0xa8, 0x7e, 0x7e, 0xb7, 0x33, 0x68, 0x1b, 0xce, 0xdd, 0xc7, 0x01, 0xe7, + 0x91, 0x26, 0xf6, 0x03, 0xcf, 0x6e, 0x06, 0x7d, 0xcd, 0xd8, 0xe2, 0x9a, 0x97, 0x80, 0xb3, 0xff, + 0x36, 0xa1, 0x57, 0x4f, 0xa6, 0xd7, 0x17, 0xaf, 0x8f, 0x73, 0x3e, 0xb7, 0x67, 0x9d, 0xa4, 0x8b, + 0xe9, 0x5b, 0x64, 0x92, 0x39, 0xb7, 0xa5, 0xa3, 0xe6, 0xc3, 0x67, 0xaa, 0xf8, 0xc5, 0xf5, 0x43, + 0xc8, 0x72, 0x0f, 0xb6, 0x70, 0xb9, 0x8a, 0x82, 0xe8, 0x72, 0x0d, 0xcb, 0xf9, 0x72, 0xbb, 0x09, + 0x13, 0xac, 0xc5, 0xd4, 0xd3, 0x7c, 0x5a, 0x6d, 0x10, 0xdd, 0x81, 0x29, 0xe9, 0xda, 0x86, 0xb4, + 0xaa, 0xd4, 0x8f, 0xba, 0x03, 0x53, 0xec, 0x06, 0x3a, 0x3c, 0xca, 0x87, 0x30, 0x25, 0x7d, 0xe1, + 0x4e, 0x2c, 0x66, 0x7c, 0x02, 0x33, 0xaa, 0x57, 0xdc, 0xc9, 0x67, 0xe1, 0x23, 0xea, 0xa9, 0x20, + 0x8c, 0x78, 0x83, 0xe5, 0x41, 0x01, 0xc9, 0xe7, 0x83, 0xfb, 0x28, 0x84, 0xf8, 0xaa, 0x8f, 0x82, + 0x2c, 0x4d, 0xf2, 0x51, 0x50, 0x2a, 0x65, 0xd6, 0xba, 0x9c, 0xd2, 0x95, 0xd4, 0xa1, 0x38, 0x1b, + 0xeb, 0x09, 0xfa, 0x50, 0x04, 0x73, 0x4a, 0xe4, 0x38, 0x50, 0x9f, 0xf1, 0x9f, 0x65, 0x53, 0xf6, + 0x32, 0xc8, 0xf2, 0xa4, 0x18, 0xd8, 0xed, 0x61, 0x3c, 0x2a, 0x06, 0x4f, 0x43, 0x1a, 0x95, 0x1d, + 0x2a, 0x6e, 0xc6, 
0x53, 0x1c, 0xa4, 0x12, 0xba, 0x94, 0x9e, 0xd5, 0x80, 0x4e, 0x2c, 0x66, 0x6f, + 0xe0, 0x25, 0x50, 0x54, 0x5d, 0xf2, 0x52, 0x33, 0x36, 0x68, 0x51, 0x09, 0x7d, 0x32, 0x2f, 0x3c, + 0xa0, 0xf7, 0xfb, 0x18, 0x40, 0xea, 0x28, 0xf6, 0x49, 0xc6, 0x10, 0x2a, 0x34, 0xe2, 0xe4, 0xfa, + 0xa0, 0xf5, 0xd3, 0x8f, 0xb0, 0x75, 0x71, 0x3a, 0xe4, 0x36, 0x84, 0x3f, 0xf4, 0xf0, 0x1f, 0x9b, + 0xde, 0xb3, 0x95, 0x04, 0x57, 0x91, 0x81, 0x53, 0x9e, 0x46, 0xee, 0x57, 0xa8, 0x34, 0x9d, 0xf8, + 0x4a, 0x5c, 0x3a, 0xb1, 0x1b, 0x8a, 0xe7, 0x52, 0x22, 0xa6, 0x9c, 0xe2, 0xa7, 0x34, 0x18, 0x37, + 0xf9, 0x11, 0xba, 0xeb, 0x03, 0xa8, 0x88, 0x91, 0x78, 0x7d, 0x20, 0x9c, 0x74, 0x3c, 0x58, 0x61, + 0x12, 0x45, 0x72, 0x7b, 0x03, 0x1e, 0xd5, 0x4b, 0xf0, 0x05, 0x91, 0xce, 0xe6, 0xc9, 0x04, 0x75, + 0x67, 0xf3, 0xbe, 0xdf, 0x90, 0x36, 0xfc, 0x9f, 0x42, 0x31, 0x74, 0xa9, 0x3a, 0xd9, 0x24, 0xa4, + 0xfb, 0x38, 0xa3, 0xd8, 0x48, 0xf9, 0xa8, 0xdf, 0x1b, 0x2f, 0xcb, 0x57, 0xd2, 0x46, 0xd8, 0x57, + 0x9d, 0x8f, 0xa9, 0x73, 0x59, 0x84, 0xac, 0x7a, 0xf7, 0x8f, 0xa1, 0xc6, 0xef, 0xfe, 0x69, 0xd4, + 0xef, 0x0b, 0xef, 0xd8, 0xc8, 0x63, 0x8f, 0x69, 0xcf, 0x46, 0xf6, 0x31, 0x10, 0xf0, 0xd8, 0xe7, + 0x53, 0x21, 0x14, 0x5f, 0x4b, 0x27, 0x27, 0x24, 0xfd, 0xa9, 0x22, 0x84, 0x8c, 0x3e, 0x8b, 0x67, + 0xb0, 0x25, 0xbf, 0x90, 0xb2, 0x6a, 0x4e, 0xbe, 0x5c, 0xac, 0x30, 0xde, 0x57, 0x27, 0x55, 0x51, + 0x73, 0x2c, 0xc4, 0xab, 0xa2, 0x06, 0xb4, 0x24, 0x08, 0xe9, 0x10, 0x59, 0x10, 0x4d, 0x90, 0x72, + 0x72, 0x31, 0x74, 0x3d, 0x3b, 0x78, 0x51, 0x31, 0x37, 0x43, 0x25, 0x8f, 0x5a, 0x21, 0x68, 0x83, + 0xa8, 0x34, 0x37, 0xd1, 0x17, 0x94, 0x51, 0x71, 0xf2, 0x65, 0xd7, 0x0d, 0xfc, 0xc0, 0xb3, 0xba, + 0x75, 0xfa, 0x9e, 0x6f, 0xea, 0x47, 0x87, 0xc1, 0x26, 0x49, 0x68, 0x8a, 0xef, 0x3b, 0xcf, 0x0d, + 0x9d, 0x94, 0x53, 0x4f, 0x86, 0x2d, 0x26, 0x55, 0xf6, 0xb9, 0x47, 0xd6, 0x45, 0x36, 0xe8, 0xd3, + 0x24, 0xda, 0x80, 0xa5, 0x94, 0x4c, 0x84, 0xd2, 0x19, 0xa2, 0x7f, 0xa6, 0xc2, 0xe5, 0xfe, 0x0d, + 0xa3, 0x2f, 0x61, 0x31, 0x31, 0x55, 0xa1, 0xb4, 0x5c, 
0xf4, 0x4b, 0x64, 0x38, 0x88, 0xf8, 0x53, + 0x28, 0xb0, 0x40, 0x36, 0x1a, 0x60, 0xa1, 0x65, 0xad, 0x0b, 0xc3, 0x28, 0x53, 0x00, 0xa2, 0xa7, + 0x41, 0x3a, 0x9c, 0x4c, 0x1a, 0xb2, 0x40, 0xb3, 0x62, 0x45, 0x1e, 0xfd, 0x97, 0x1b, 0x2f, 0xa9, + 0xb2, 0x5f, 0xac, 0x66, 0x0d, 0x16, 0xf7, 0xb1, 0x67, 0x3f, 0x79, 0x11, 0x25, 0x28, 0x46, 0x26, + 0xb1, 0xb6, 0x1f, 0xc5, 0xcf, 0x60, 0xa9, 0xe2, 0x76, 0xba, 0x3c, 0xfa, 0x5a, 0xa3, 0x29, 0x3d, + 0x5b, 0x92, 0xeb, 0x07, 0xfb, 0x1e, 0x2e, 0xcb, 0xf0, 0x70, 0x15, 0xaf, 0x42, 0xd3, 0x12, 0xdc, + 0xd0, 0xbd, 0x73, 0x12, 0x40, 0xc2, 0xa8, 0x31, 0x71, 0x31, 0x56, 0xf1, 0x77, 0xe9, 0x22, 0x8c, + 0xe0, 0x31, 0x4d, 0xa9, 0xb2, 0x08, 0x93, 0xea, 0xfb, 0xc7, 0xd8, 0x26, 0x50, 0x65, 0x0d, 0xa6, + 0x53, 0x1d, 0xa2, 0xb7, 0xdb, 0xe2, 0x6c, 0xd1, 0x1f, 0x8c, 0x8f, 0x44, 0x5e, 0x24, 0xbe, 0x26, + 0x9f, 0xd8, 0x4f, 0x25, 0x3b, 0x4e, 0xbb, 0xdd, 0x47, 0x80, 0x4b, 0x78, 0x1a, 0x9a, 0xdc, 0xd8, + 0x54, 0xdc, 0x7e, 0xdc, 0x3a, 0x86, 0xcc, 0xed, 0x1b, 0xb3, 0xfa, 0xeb, 0xd4, 0x5a, 0x04, 0x63, + 0xec, 0x25, 0x6c, 0x2d, 0x82, 0x31, 0xe1, 0x49, 0xeb, 0x0f, 0x60, 0xba, 0xae, 0x7e, 0x4b, 0x42, + 0x9f, 0x53, 0xd7, 0x98, 0x8c, 0x7e, 0x1c, 0x3c, 0x14, 0x7d, 0x5c, 0xc1, 0xe5, 0x39, 0x36, 0xd4, + 0xa0, 0xa4, 0x3a, 0xb6, 0x69, 0x2f, 0x6b, 0x69, 0xf7, 0xd0, 0xe8, 0xe3, 0x82, 0xda, 0x3d, 0x34, + 0xfe, 0x18, 0x17, 0x37, 0x8e, 0x47, 0xdf, 0x8e, 0xd4, 0x8c, 0xe3, 0x29, 0x8f, 0xb4, 0x6a, 0xc6, + 0xf1, 0xd4, 0xc7, 0x27, 0x99, 0x17, 0x5e, 0xf8, 0x96, 0x98, 0xea, 0x85, 0x17, 0x7b, 0xa1, 0x4c, + 0xf5, 0xc2, 0x4b, 0x78, 0x7e, 0xac, 0x0e, 0xf9, 0xe8, 0xe3, 0x68, 0x52, 0x67, 0x98, 0xf2, 0xf6, + 0x9b, 0xf4, 0xb7, 0x4b, 0x7d, 0x55, 0xad, 0x4a, 0x3b, 0x18, 0x3e, 0x7e, 0xd2, 0x47, 0xfd, 0x24, + 0xfb, 0x96, 0xf0, 0xc6, 0xca, 0x43, 0x35, 0x11, 0x14, 0x7b, 0x32, 0xa5, 0x8f, 0x76, 0x3e, 0x9a, + 0x00, 0x2a, 0xf2, 0xc6, 0xca, 0x3d, 0xc8, 0xb3, 0x0c, 0xd9, 0x61, 0x46, 0xe7, 0xd0, 0x9d, 0x38, + 0x9e, 0xb8, 0xbb, 0xcf, 0x4a, 0xc9, 0x47, 0xf3, 0xe0, 0xca, 0x01, 0x4b, 0x49, 0x90, 0xdb, 
0x67, + 0xfd, 0x43, 0x98, 0xed, 0x56, 0xaa, 0x22, 0x63, 0x09, 0x70, 0x97, 0xcf, 0x27, 0xd4, 0x48, 0xb1, + 0x77, 0x5a, 0xcd, 0x8d, 0x2b, 0x3f, 0x29, 0x21, 0x61, 0xee, 0xf2, 0x4a, 0x62, 0x1d, 0x27, 0x14, + 0xb0, 0x6c, 0x8d, 0xc9, 0x6f, 0x23, 0x87, 0x51, 0xb1, 0x7d, 0x60, 0x44, 0x33, 0x6f, 0x0e, 0x03, + 0xca, 0x5b, 0xc5, 0xf2, 0xe9, 0x97, 0x84, 0x97, 0xb2, 0x5f, 0x4f, 0x08, 0x2a, 0xd3, 0x20, 0xa2, + 0xa9, 0x30, 0xd2, 0x9e, 0xed, 0x46, 0x8f, 0xc4, 0x73, 0x03, 0x29, 0x2d, 0x0d, 0x22, 0x90, 0x3a, + 0x83, 0x8f, 0xc4, 0x03, 0x03, 0xa7, 0x4d, 0xf8, 0x31, 0x5c, 0x88, 0x44, 0xaa, 0xe9, 0x84, 0xdf, + 0x4c, 0x0e, 0x67, 0x4b, 0x1c, 0x9e, 0xf4, 0x7b, 0xc5, 0xe5, 0x78, 0x44, 0x5b, 0x64, 0xde, 0x4f, + 0xca, 0x48, 0xf9, 0xe9, 0x22, 0xdf, 0x5c, 0xd7, 0x4f, 0x97, 0xb0, 0x38, 0xe9, 0x74, 0x51, 0x6b, + 0xa5, 0x85, 0x60, 0x9a, 0x27, 0x6c, 0x60, 0x2f, 0xb8, 0x2f, 0xeb, 0x59, 0x1c, 0x68, 0x61, 0xd2, + 0x49, 0xcb, 0x1f, 0x86, 0x47, 0x1f, 0xc1, 0x5c, 0x98, 0xc7, 0x81, 0x91, 0x48, 0x00, 0xeb, 0xa3, + 0xdd, 0x9c, 0x0b, 0x93, 0x39, 0x9c, 0x1c, 0x7d, 0x5d, 0x9c, 0x6f, 0x21, 0xfa, 0xc5, 0x58, 0x5c, + 0x9f, 0xf6, 0x0d, 0xc3, 0x1c, 0x73, 0xca, 0xd8, 0x9e, 0x74, 0x76, 0x9a, 0x74, 0xbb, 0x25, 0x27, + 0x7d, 0x56, 0xb7, 0x5b, 0xdf, 0xc4, 0xd4, 0x52, 0x44, 0x4f, 0xa1, 0xb3, 0x05, 0x57, 0x69, 0x5a, + 0xae, 0x1a, 0x4b, 0x0d, 0x9c, 0x0c, 0x95, 0xde, 0xf7, 0x68, 0x32, 0xaf, 0x36, 0x5c, 0x19, 0x98, + 0xf5, 0x1a, 0xdd, 0xd2, 0xdc, 0xb7, 0x06, 0xe7, 0xc7, 0xee, 0x73, 0x3b, 0x5a, 0x48, 0x4a, 0x1e, + 0x2d, 0x0f, 0xef, 0x3e, 0x79, 0xac, 0xe5, 0xe1, 0xdd, 0x37, 0xfb, 0xf4, 0x67, 0x34, 0xfb, 0x0f, + 0x3f, 0xa3, 0x68, 0xaa, 0x3d, 0xec, 0x58, 0x4e, 0x13, 0x0f, 0x30, 0x14, 0x5e, 0xd1, 0xcd, 0xe8, + 0x31, 0x44, 0x7a, 0xef, 0xba, 0xc4, 0x6f, 0x8b, 0x69, 0xc4, 0x07, 0x13, 0xe9, 0x13, 0x4d, 0x71, + 0x89, 0x2d, 0xc0, 0x13, 0xf7, 0x3c, 0xa5, 0xbc, 0xbc, 0xf6, 0xf3, 0xff, 0x70, 0x29, 0xf3, 0xf3, + 0x5f, 0x5c, 0xca, 0xfc, 0xeb, 0x5f, 0x5c, 0xca, 0xfc, 0xfb, 0x5f, 0x5c, 0xca, 0x7c, 0xb1, 0x3a, + 0xdc, 0xc3, 0x0c, 0xec, 0x49, 
0xbb, 0x5b, 0x8c, 0xdc, 0x04, 0xfd, 0xef, 0xee, 0xff, 0x08, 0x00, + 0x00, 0xff, 0xff, 0x38, 0xcc, 0x42, 0xa3, 0x69, 0xf7, 0x00, 0x00, } func (m *Watch) Marshal() (dAtA []byte, err error) { @@ -16781,6 +18668,105 @@ func (m *ChangePasswordRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *ListSemaphoresRequest) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListSemaphoresRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListSemaphoresRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.Filter != nil { + { + size, err := m.Filter.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintAuthservice(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + } + if len(m.PageToken) > 0 { + i -= len(m.PageToken) + copy(dAtA[i:], m.PageToken) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.PageToken))) + i-- + dAtA[i] = 0x12 + } + if m.PageSize != 0 { + i = encodeVarintAuthservice(dAtA, i, uint64(m.PageSize)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + +func (m *ListSemaphoresResponse) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListSemaphoresResponse) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListSemaphoresResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i 
-= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.NextPageToken) > 0 { + i -= len(m.NextPageToken) + copy(dAtA[i:], m.NextPageToken) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.NextPageToken))) + i-- + dAtA[i] = 0x12 + } + if len(m.Semaphores) > 0 { + for iNdEx := len(m.Semaphores) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Semaphores[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintAuthservice(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + func (m *PluginDataSeq) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -16847,12 +18833,12 @@ func (m *RequestStateSetter) MarshalToSizedBuffer(dAtA []byte) (int, error) { copy(dAtA[i:], m.XXX_unrecognized) } if m.AssumeStartTime != nil { - n11, err11 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.AssumeStartTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.AssumeStartTime):]) - if err11 != nil { - return 0, err11 + n12, err12 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.AssumeStartTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.AssumeStartTime):]) + if err12 != nil { + return 0, err12 } - i -= n11 - i = encodeVarintAuthservice(dAtA, i, uint64(n11)) + i -= n12 + i = encodeVarintAuthservice(dAtA, i, uint64(n12)) i-- dAtA[i] = 0x3a } @@ -17018,6 +19004,93 @@ func (m *CreateResetPasswordTokenRequest) MarshalToSizedBuffer(dAtA []byte) (int return len(dAtA) - i, nil } +func (m *ListResetPasswordTokenRequest) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListResetPasswordTokenRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListResetPasswordTokenRequest) MarshalToSizedBuffer(dAtA 
[]byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.PageToken) > 0 { + i -= len(m.PageToken) + copy(dAtA[i:], m.PageToken) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.PageToken))) + i-- + dAtA[i] = 0x12 + } + if m.PageSize != 0 { + i = encodeVarintAuthservice(dAtA, i, uint64(m.PageSize)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + +func (m *ListResetPasswordTokenResponse) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListResetPasswordTokenResponse) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListResetPasswordTokenResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.NextPageToken) > 0 { + i -= len(m.NextPageToken) + copy(dAtA[i:], m.NextPageToken) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.NextPageToken))) + i-- + dAtA[i] = 0x12 + } + if len(m.UserTokens) > 0 { + for iNdEx := len(m.UserTokens) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.UserTokens[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintAuthservice(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + func (m *RenewableCertsRequest) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -17111,12 +19184,12 @@ func (m *PingResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { copy(dAtA[i:], m.XXX_unrecognized) } if m.LicenseExpiry != nil { - n13, err13 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.LicenseExpiry, 
dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.LicenseExpiry):]) - if err13 != nil { - return 0, err13 + n14, err14 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.LicenseExpiry, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.LicenseExpiry):]) + if err14 != nil { + return 0, err14 } - i -= n13 - i = encodeVarintAuthservice(dAtA, i, uint64(n13)) + i -= n14 + i = encodeVarintAuthservice(dAtA, i, uint64(n14)) i-- dAtA[i] = 0x52 } @@ -17212,6 +19285,18 @@ func (m *Features) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.ClientIPRestrictions { + i-- + if m.ClientIPRestrictions { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x2 + i-- + dAtA[i] = 0xb8 + } if m.AccessGraphDemoMode { i-- if m.AccessGraphDemoMode { @@ -18389,12 +20474,12 @@ func (m *GenerateAppTokenRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) dAtA[i] = 0x2a } } - n28, err28 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) - if err28 != nil { - return 0, err28 + n29, err29 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) + if err29 != nil { + return 0, err29 } - i -= n28 - i = encodeVarintAuthservice(dAtA, i, uint64(n28)) + i -= n29 + i = encodeVarintAuthservice(dAtA, i, uint64(n29)) i-- dAtA[i] = 0x22 if len(m.URI) > 0 { @@ -19553,6 +21638,93 @@ func (m *GetWebTokensResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *ListWebTokensRequest) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListWebTokensRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m 
*ListWebTokensRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.PageToken) > 0 { + i -= len(m.PageToken) + copy(dAtA[i:], m.PageToken) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.PageToken))) + i-- + dAtA[i] = 0x12 + } + if m.PageSize != 0 { + i = encodeVarintAuthservice(dAtA, i, uint64(m.PageSize)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + +func (m *ListWebTokensResponse) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListWebTokensResponse) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListWebTokensResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.NextPageToken) > 0 { + i -= len(m.NextPageToken) + copy(dAtA[i:], m.NextPageToken) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.NextPageToken))) + i-- + dAtA[i] = 0x12 + } + if len(m.Tokens) > 0 { + for iNdEx := len(m.Tokens) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Tokens[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintAuthservice(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + func (m *UpsertKubernetesServerRequest) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -19996,6 +22168,13 @@ func (m *DatabaseCertRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.CRLDomain) > 0 { + i -= len(m.CRLDomain) + 
copy(dAtA[i:], m.CRLDomain) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.CRLDomain))) + i-- + dAtA[i] = 0x42 + } if len(m.CRLEndpoint) > 0 { i -= len(m.CRLEndpoint) copy(dAtA[i:], m.CRLEndpoint) @@ -21812,6 +23991,13 @@ func (m *GetEventsRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.Search) > 0 { + i -= len(m.Search) + copy(dAtA[i:], m.Search) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.Search))) + i-- + dAtA[i] = 0x42 + } if m.Order != 0 { i = encodeVarintAuthservice(dAtA, i, uint64(m.Order)) i-- @@ -21838,21 +24024,21 @@ func (m *GetEventsRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { dAtA[i] = 0x22 } } - n66, err66 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.EndDate, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.EndDate):]) - if err66 != nil { - return 0, err66 - } - i -= n66 - i = encodeVarintAuthservice(dAtA, i, uint64(n66)) - i-- - dAtA[i] = 0x1a - n67, err67 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.StartDate, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.StartDate):]) + n67, err67 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.EndDate, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.EndDate):]) if err67 != nil { return 0, err67 } i -= n67 i = encodeVarintAuthservice(dAtA, i, uint64(n67)) i-- + dAtA[i] = 0x1a + n68, err68 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.StartDate, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.StartDate):]) + if err68 != nil { + return 0, err68 + } + i -= n68 + i = encodeVarintAuthservice(dAtA, i, uint64(n68)) + i-- dAtA[i] = 0x12 if len(m.Namespace) > 0 { i -= len(m.Namespace) @@ -21905,21 +24091,21 @@ func (m *GetSessionEventsRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) i-- dAtA[i] = 0x18 } - n68, err68 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.EndDate, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.EndDate):]) - if err68 
!= nil { - return 0, err68 - } - i -= n68 - i = encodeVarintAuthservice(dAtA, i, uint64(n68)) - i-- - dAtA[i] = 0x12 - n69, err69 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.StartDate, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.StartDate):]) + n69, err69 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.EndDate, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.EndDate):]) if err69 != nil { return 0, err69 } i -= n69 i = encodeVarintAuthservice(dAtA, i, uint64(n69)) i-- + dAtA[i] = 0x12 + n70, err70 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.StartDate, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.StartDate):]) + if err70 != nil { + return 0, err70 + } + i -= n70 + i = encodeVarintAuthservice(dAtA, i, uint64(n70)) + i-- dAtA[i] = 0xa return len(dAtA) - i, nil } @@ -22064,6 +24250,105 @@ func (m *GetLocksResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *ListLocksRequest) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListLocksRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListLocksRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.Filter != nil { + { + size, err := m.Filter.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintAuthservice(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + } + if len(m.PageToken) > 0 { + i -= len(m.PageToken) + copy(dAtA[i:], m.PageToken) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.PageToken))) + i-- + dAtA[i] = 0x12 + } + if m.PageSize != 0 { + i = encodeVarintAuthservice(dAtA, i, 
uint64(m.PageSize)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + +func (m *ListLocksResponse) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListLocksResponse) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListLocksResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.NextPageToken) > 0 { + i -= len(m.NextPageToken) + copy(dAtA[i:], m.NextPageToken) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.NextPageToken))) + i-- + dAtA[i] = 0x12 + } + if len(m.Locks) > 0 { + for iNdEx := len(m.Locks) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Locks[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintAuthservice(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + func (m *GetLockRequest) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -22180,6 +24465,180 @@ func (m *ReplaceRemoteLocksRequest) MarshalToSizedBuffer(dAtA []byte) (int, erro return len(dAtA) - i, nil } +func (m *ListAppsRequest) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListAppsRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListAppsRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) 
+ } + if len(m.StartKey) > 0 { + i -= len(m.StartKey) + copy(dAtA[i:], m.StartKey) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.StartKey))) + i-- + dAtA[i] = 0x12 + } + if m.Limit != 0 { + i = encodeVarintAuthservice(dAtA, i, uint64(m.Limit)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + +func (m *ListAppsResponse) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListAppsResponse) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListAppsResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.NextKey) > 0 { + i -= len(m.NextKey) + copy(dAtA[i:], m.NextKey) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.NextKey))) + i-- + dAtA[i] = 0x12 + } + if len(m.Applications) > 0 { + for iNdEx := len(m.Applications) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Applications[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintAuthservice(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + +func (m *ListDatabasesRequest) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListDatabasesRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListDatabasesRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + 
} + if len(m.PageToken) > 0 { + i -= len(m.PageToken) + copy(dAtA[i:], m.PageToken) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.PageToken))) + i-- + dAtA[i] = 0x12 + } + if m.PageSize != 0 { + i = encodeVarintAuthservice(dAtA, i, uint64(m.PageSize)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + +func (m *ListDatabasesResponse) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListDatabasesResponse) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListDatabasesResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.NextPageToken) > 0 { + i -= len(m.NextPageToken) + copy(dAtA[i:], m.NextPageToken) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.NextPageToken))) + i-- + dAtA[i] = 0x12 + } + if len(m.Databases) > 0 { + for iNdEx := len(m.Databases) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Databases[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintAuthservice(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + func (m *GetWindowsDesktopServicesResponse) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -22328,6 +24787,138 @@ func (m *DeleteWindowsDesktopServiceRequest) MarshalToSizedBuffer(dAtA []byte) ( return len(dAtA) - i, nil } +func (m *ListWindowsDesktopsRequest) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListWindowsDesktopsRequest) MarshalTo(dAtA []byte) 
(int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListWindowsDesktopsRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + { + size, err := m.WindowsDesktopFilter.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintAuthservice(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x32 + if len(m.SearchKeywords) > 0 { + for iNdEx := len(m.SearchKeywords) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.SearchKeywords[iNdEx]) + copy(dAtA[i:], m.SearchKeywords[iNdEx]) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.SearchKeywords[iNdEx]))) + i-- + dAtA[i] = 0x2a + } + } + if len(m.PredicateExpression) > 0 { + i -= len(m.PredicateExpression) + copy(dAtA[i:], m.PredicateExpression) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.PredicateExpression))) + i-- + dAtA[i] = 0x22 + } + if len(m.Labels) > 0 { + for k := range m.Labels { + v := m.Labels[k] + baseI := i + i -= len(v) + copy(dAtA[i:], v) + i = encodeVarintAuthservice(dAtA, i, uint64(len(v))) + i-- + dAtA[i] = 0x12 + i -= len(k) + copy(dAtA[i:], k) + i = encodeVarintAuthservice(dAtA, i, uint64(len(k))) + i-- + dAtA[i] = 0xa + i = encodeVarintAuthservice(dAtA, i, uint64(baseI-i)) + i-- + dAtA[i] = 0x1a + } + } + if len(m.StartKey) > 0 { + i -= len(m.StartKey) + copy(dAtA[i:], m.StartKey) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.StartKey))) + i-- + dAtA[i] = 0x12 + } + if m.Limit != 0 { + i = encodeVarintAuthservice(dAtA, i, uint64(m.Limit)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + +func (m *ListWindowsDesktopsResponse) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListWindowsDesktopsResponse) 
MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListWindowsDesktopsResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.NextKey) > 0 { + i -= len(m.NextKey) + copy(dAtA[i:], m.NextKey) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.NextKey))) + i-- + dAtA[i] = 0x12 + } + if len(m.Desktops) > 0 { + for iNdEx := len(m.Desktops) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Desktops[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintAuthservice(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + func (m *GetWindowsDesktopsResponse) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -22434,6 +25025,13 @@ func (m *WindowsDesktopCertRequest) MarshalToSizedBuffer(dAtA []byte) (int, erro i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.CRLDomain) > 0 { + i -= len(m.CRLDomain) + copy(dAtA[i:], m.CRLDomain) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.CRLDomain))) + i-- + dAtA[i] = 0x22 + } if m.TTL != 0 { i = encodeVarintAuthservice(dAtA, i, uint64(m.TTL)) i-- @@ -23284,12 +25882,12 @@ func (m *RecoveryCodes) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - n76, err76 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Created, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Created):]) - if err76 != nil { - return 0, err76 + n79, err79 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Created, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Created):]) + if err79 != nil { + return 0, err79 } - i -= n76 - i = encodeVarintAuthservice(dAtA, i, uint64(n76)) + i -= n79 + i = 
encodeVarintAuthservice(dAtA, i, uint64(n79)) i-- dAtA[i] = 0x12 if len(m.Codes) > 0 { @@ -23925,6 +26523,93 @@ func (m *IdentityCenterAccountAssignment) MarshalToSizedBuffer(dAtA []byte) (int return len(dAtA) - i, nil } +func (m *ListInstallersRequest) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListInstallersRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListInstallersRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.PageToken) > 0 { + i -= len(m.PageToken) + copy(dAtA[i:], m.PageToken) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.PageToken))) + i-- + dAtA[i] = 0x12 + } + if m.PageSize != 0 { + i = encodeVarintAuthservice(dAtA, i, uint64(m.PageSize)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + +func (m *ListInstallersResponse) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListInstallersResponse) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListInstallersResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.NextPageToken) > 0 { + i -= len(m.NextPageToken) + copy(dAtA[i:], m.NextPageToken) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.NextPageToken))) + i-- + dAtA[i] = 0x12 + } + if len(m.Installers) > 0 { + for iNdEx := 
len(m.Installers) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Installers[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintAuthservice(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + func (m *PaginatedResource) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -25070,12 +27755,12 @@ func (m *SessionTrackerUpdateExpiry) MarshalToSizedBuffer(dAtA []byte) (int, err copy(dAtA[i:], m.XXX_unrecognized) } if m.Expires != nil { - n106, err106 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.Expires):]) - if err106 != nil { - return 0, err106 + n109, err109 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.Expires):]) + if err109 != nil { + return 0, err109 } - i -= n106 - i = encodeVarintAuthservice(dAtA, i, uint64(n106)) + i -= n109 + i = encodeVarintAuthservice(dAtA, i, uint64(n109)) i-- dAtA[i] = 0xa } @@ -25580,7 +28265,7 @@ func (m *GetGithubAuthRequestRequest) MarshalToSizedBuffer(dAtA []byte) (int, er return len(dAtA) - i, nil } -func (m *CreateOIDCConnectorRequest) Marshal() (dAtA []byte, err error) { +func (m *ListOIDCConnectorsRequest) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -25590,12 +28275,12 @@ func (m *CreateOIDCConnectorRequest) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *CreateOIDCConnectorRequest) MarshalTo(dAtA []byte) (int, error) { +func (m *ListOIDCConnectorsRequest) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *CreateOIDCConnectorRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *ListOIDCConnectorsRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ 
-25604,22 +28289,32 @@ func (m *CreateOIDCConnectorRequest) MarshalToSizedBuffer(dAtA []byte) (int, err i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.Connector != nil { - { - size, err := m.Connector.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintAuthservice(dAtA, i, uint64(size)) + if m.WithSecrets { + i-- + if m.WithSecrets { + dAtA[i] = 1 + } else { + dAtA[i] = 0 } i-- - dAtA[i] = 0xa + dAtA[i] = 0x18 + } + if len(m.PageToken) > 0 { + i -= len(m.PageToken) + copy(dAtA[i:], m.PageToken) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.PageToken))) + i-- + dAtA[i] = 0x12 + } + if m.PageSize != 0 { + i = encodeVarintAuthservice(dAtA, i, uint64(m.PageSize)) + i-- + dAtA[i] = 0x8 } return len(dAtA) - i, nil } -func (m *UpdateOIDCConnectorRequest) Marshal() (dAtA []byte, err error) { +func (m *ListOIDCConnectorsResponse) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -25629,12 +28324,12 @@ func (m *UpdateOIDCConnectorRequest) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *UpdateOIDCConnectorRequest) MarshalTo(dAtA []byte) (int, error) { +func (m *ListOIDCConnectorsResponse) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *UpdateOIDCConnectorRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *ListOIDCConnectorsResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -25643,22 +28338,31 @@ func (m *UpdateOIDCConnectorRequest) MarshalToSizedBuffer(dAtA []byte) (int, err i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.Connector != nil { - { - size, err := m.Connector.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + if len(m.NextPageToken) > 0 { + i -= len(m.NextPageToken) + copy(dAtA[i:], m.NextPageToken) + i = 
encodeVarintAuthservice(dAtA, i, uint64(len(m.NextPageToken))) + i-- + dAtA[i] = 0x12 + } + if len(m.Connectors) > 0 { + for iNdEx := len(m.Connectors) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Connectors[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintAuthservice(dAtA, i, uint64(size)) } - i -= size - i = encodeVarintAuthservice(dAtA, i, uint64(size)) + i-- + dAtA[i] = 0xa } - i-- - dAtA[i] = 0xa } return len(dAtA) - i, nil } -func (m *UpsertOIDCConnectorRequest) Marshal() (dAtA []byte, err error) { +func (m *CreateOIDCConnectorRequest) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -25668,12 +28372,90 @@ func (m *UpsertOIDCConnectorRequest) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *UpsertOIDCConnectorRequest) MarshalTo(dAtA []byte) (int, error) { +func (m *CreateOIDCConnectorRequest) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *UpsertOIDCConnectorRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *CreateOIDCConnectorRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.Connector != nil { + { + size, err := m.Connector.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintAuthservice(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + +func (m *UpdateOIDCConnectorRequest) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *UpdateOIDCConnectorRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return 
m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *UpdateOIDCConnectorRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.Connector != nil { + { + size, err := m.Connector.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintAuthservice(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + +func (m *UpsertOIDCConnectorRequest) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *UpsertOIDCConnectorRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *UpsertOIDCConnectorRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -25814,6 +28596,210 @@ func (m *UpsertSAMLConnectorRequest) MarshalToSizedBuffer(dAtA []byte) (int, err return len(dAtA) - i, nil } +func (m *ListSAMLConnectorsRequest) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListSAMLConnectorsRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListSAMLConnectorsRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.NoFollowUrls { + i-- + if m.NoFollowUrls { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x20 + } + if m.WithSecrets { + i-- + if m.WithSecrets { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + 
i-- + dAtA[i] = 0x18 + } + if len(m.PageToken) > 0 { + i -= len(m.PageToken) + copy(dAtA[i:], m.PageToken) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.PageToken))) + i-- + dAtA[i] = 0x12 + } + if m.PageSize != 0 { + i = encodeVarintAuthservice(dAtA, i, uint64(m.PageSize)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + +func (m *ListSAMLConnectorsResponse) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListSAMLConnectorsResponse) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListSAMLConnectorsResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.NextPageToken) > 0 { + i -= len(m.NextPageToken) + copy(dAtA[i:], m.NextPageToken) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.NextPageToken))) + i-- + dAtA[i] = 0x12 + } + if len(m.Connectors) > 0 { + for iNdEx := len(m.Connectors) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Connectors[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintAuthservice(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + +func (m *ListGithubConnectorsRequest) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListGithubConnectorsRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListGithubConnectorsRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + 
if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.WithSecrets { + i-- + if m.WithSecrets { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x18 + } + if len(m.PageToken) > 0 { + i -= len(m.PageToken) + copy(dAtA[i:], m.PageToken) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.PageToken))) + i-- + dAtA[i] = 0x12 + } + if m.PageSize != 0 { + i = encodeVarintAuthservice(dAtA, i, uint64(m.PageSize)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + +func (m *ListGithubConnectorsResponse) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListGithubConnectorsResponse) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListGithubConnectorsResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.NextPageToken) > 0 { + i -= len(m.NextPageToken) + copy(dAtA[i:], m.NextPageToken) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.NextPageToken))) + i-- + dAtA[i] = 0x12 + } + if len(m.Connectors) > 0 { + for iNdEx := len(m.Connectors) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Connectors[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintAuthservice(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + func (m *CreateGithubConnectorRequest) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -27050,6 +30036,283 @@ func (m *AccessRequestAllowedPromotionResponse) MarshalToSizedBuffer(dAtA []byte return len(dAtA) - i, nil } +func (m *ListProvisionTokensRequest) Marshal() (dAtA []byte, 
err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListProvisionTokensRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListProvisionTokensRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.FilterBotName) > 0 { + i -= len(m.FilterBotName) + copy(dAtA[i:], m.FilterBotName) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.FilterBotName))) + i-- + dAtA[i] = 0x22 + } + if len(m.FilterRoles) > 0 { + for iNdEx := len(m.FilterRoles) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.FilterRoles[iNdEx]) + copy(dAtA[i:], m.FilterRoles[iNdEx]) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.FilterRoles[iNdEx]))) + i-- + dAtA[i] = 0x1a + } + } + if len(m.StartKey) > 0 { + i -= len(m.StartKey) + copy(dAtA[i:], m.StartKey) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.StartKey))) + i-- + dAtA[i] = 0x12 + } + if m.Limit != 0 { + i = encodeVarintAuthservice(dAtA, i, uint64(m.Limit)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + +func (m *ListProvisionTokensResponse) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListProvisionTokensResponse) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListProvisionTokensResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.NextKey) > 0 { + i -= len(m.NextKey) + 
copy(dAtA[i:], m.NextKey) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.NextKey))) + i-- + dAtA[i] = 0x12 + } + if len(m.Tokens) > 0 { + for iNdEx := len(m.Tokens) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Tokens[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintAuthservice(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + +func (m *ListKubernetesClustersRequest) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListKubernetesClustersRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListKubernetesClustersRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.PageToken) > 0 { + i -= len(m.PageToken) + copy(dAtA[i:], m.PageToken) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.PageToken))) + i-- + dAtA[i] = 0x12 + } + if m.PageSize != 0 { + i = encodeVarintAuthservice(dAtA, i, uint64(m.PageSize)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + +func (m *ListKubernetesClustersResponse) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListKubernetesClustersResponse) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListKubernetesClustersResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], 
m.XXX_unrecognized) + } + if len(m.NextPageToken) > 0 { + i -= len(m.NextPageToken) + copy(dAtA[i:], m.NextPageToken) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.NextPageToken))) + i-- + dAtA[i] = 0x12 + } + if len(m.KubernetesClusters) > 0 { + for iNdEx := len(m.KubernetesClusters) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.KubernetesClusters[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintAuthservice(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + +func (m *ListSnowflakeSessionsRequest) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListSnowflakeSessionsRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListSnowflakeSessionsRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.PageToken) > 0 { + i -= len(m.PageToken) + copy(dAtA[i:], m.PageToken) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.PageToken))) + i-- + dAtA[i] = 0x12 + } + if m.PageSize != 0 { + i = encodeVarintAuthservice(dAtA, i, uint64(m.PageSize)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + +func (m *ListSnowflakeSessionsResponse) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ListSnowflakeSessionsResponse) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ListSnowflakeSessionsResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := 
len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.NextPageToken) > 0 { + i -= len(m.NextPageToken) + copy(dAtA[i:], m.NextPageToken) + i = encodeVarintAuthservice(dAtA, i, uint64(len(m.NextPageToken))) + i-- + dAtA[i] = 0x12 + } + if len(m.Sessions) > 0 { + for iNdEx := len(m.Sessions) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Sessions[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintAuthservice(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + func encodeVarintAuthservice(dAtA []byte, offset int, v uint64) int { offset -= sovAuthservice(v) base := offset @@ -27464,6 +30727,51 @@ func (m *ChangePasswordRequest) Size() (n int) { return n } +func (m *ListSemaphoresRequest) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.PageSize != 0 { + n += 1 + sovAuthservice(uint64(m.PageSize)) + } + l = len(m.PageToken) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.Filter != nil { + l = m.Filter.Size() + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ListSemaphoresResponse) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.Semaphores) > 0 { + for _, e := range m.Semaphores { + l = e.Size() + n += 1 + l + sovAuthservice(uint64(l)) + } + } + l = len(m.NextPageToken) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + func (m *PluginDataSeq) Size() (n int) { if m == nil { return 0 @@ -27576,6 +30884,47 @@ func (m *CreateResetPasswordTokenRequest) Size() (n int) { return n } +func (m *ListResetPasswordTokenRequest) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.PageSize != 0 { + n += 1 + 
sovAuthservice(uint64(m.PageSize)) + } + l = len(m.PageToken) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ListResetPasswordTokenResponse) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.UserTokens) > 0 { + for _, e := range m.UserTokens { + l = e.Size() + n += 1 + l + sovAuthservice(uint64(l)) + } + } + l = len(m.NextPageToken) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + func (m *RenewableCertsRequest) Size() (n int) { if m == nil { return 0 @@ -27777,6 +31126,9 @@ func (m *Features) Size() (n int) { if m.AccessGraphDemoMode { n += 3 } + if m.ClientIPRestrictions { + n += 3 + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -28679,6 +32031,47 @@ func (m *GetWebTokensResponse) Size() (n int) { return n } +func (m *ListWebTokensRequest) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.PageSize != 0 { + n += 1 + sovAuthservice(uint64(m.PageSize)) + } + l = len(m.PageToken) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ListWebTokensResponse) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.Tokens) > 0 { + for _, e := range m.Tokens { + l = e.Size() + n += 1 + l + sovAuthservice(uint64(l)) + } + } + l = len(m.NextPageToken) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + func (m *UpsertKubernetesServerRequest) Size() (n int) { if m == nil { return 0 @@ -28904,6 +32297,10 @@ func (m *DatabaseCertRequest) Size() (n int) { if l > 0 { n += 1 + l + sovAuthservice(uint64(l)) } + l = len(m.CRLDomain) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } if 
m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -29762,6 +33159,10 @@ func (m *GetEventsRequest) Size() (n int) { if m.Order != 0 { n += 1 + sovAuthservice(uint64(m.Order)) } + l = len(m.Search) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -29855,6 +33256,51 @@ func (m *GetLocksResponse) Size() (n int) { return n } +func (m *ListLocksRequest) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.PageSize != 0 { + n += 1 + sovAuthservice(uint64(m.PageSize)) + } + l = len(m.PageToken) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.Filter != nil { + l = m.Filter.Size() + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ListLocksResponse) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.Locks) > 0 { + for _, e := range m.Locks { + l = e.Size() + n += 1 + l + sovAuthservice(uint64(l)) + } + } + l = len(m.NextPageToken) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + func (m *GetLockRequest) Size() (n int) { if m == nil { return 0 @@ -29909,6 +33355,88 @@ func (m *ReplaceRemoteLocksRequest) Size() (n int) { return n } +func (m *ListAppsRequest) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.Limit != 0 { + n += 1 + sovAuthservice(uint64(m.Limit)) + } + l = len(m.StartKey) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ListAppsResponse) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.Applications) > 0 { + for _, e := range m.Applications { + l = e.Size() + n += 1 + l + sovAuthservice(uint64(l)) + } + } + l = len(m.NextKey) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) 
+ } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ListDatabasesRequest) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.PageSize != 0 { + n += 1 + sovAuthservice(uint64(m.PageSize)) + } + l = len(m.PageToken) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ListDatabasesResponse) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.Databases) > 0 { + for _, e := range m.Databases { + l = e.Size() + n += 1 + l + sovAuthservice(uint64(l)) + } + } + l = len(m.NextPageToken) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + func (m *GetWindowsDesktopServicesResponse) Size() (n int) { if m == nil { return 0 @@ -29975,6 +33503,67 @@ func (m *DeleteWindowsDesktopServiceRequest) Size() (n int) { return n } +func (m *ListWindowsDesktopsRequest) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.Limit != 0 { + n += 1 + sovAuthservice(uint64(m.Limit)) + } + l = len(m.StartKey) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if len(m.Labels) > 0 { + for k, v := range m.Labels { + _ = k + _ = v + mapEntrySize := 1 + len(k) + sovAuthservice(uint64(len(k))) + 1 + len(v) + sovAuthservice(uint64(len(v))) + n += mapEntrySize + 1 + sovAuthservice(uint64(mapEntrySize)) + } + } + l = len(m.PredicateExpression) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if len(m.SearchKeywords) > 0 { + for _, s := range m.SearchKeywords { + l = len(s) + n += 1 + l + sovAuthservice(uint64(l)) + } + } + l = m.WindowsDesktopFilter.Size() + n += 1 + l + sovAuthservice(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ListWindowsDesktopsResponse) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = 
l + if len(m.Desktops) > 0 { + for _, e := range m.Desktops { + l = e.Size() + n += 1 + l + sovAuthservice(uint64(l)) + } + } + l = len(m.NextKey) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + func (m *GetWindowsDesktopsResponse) Size() (n int) { if m == nil { return 0 @@ -30030,6 +33619,10 @@ func (m *WindowsDesktopCertRequest) Size() (n int) { if m.TTL != 0 { n += 1 + sovAuthservice(uint64(m.TTL)) } + l = len(m.CRLDomain) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -30737,6 +34330,47 @@ func (m *IdentityCenterAccountAssignment) Size() (n int) { return n } +func (m *ListInstallersRequest) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.PageSize != 0 { + n += 1 + sovAuthservice(uint64(m.PageSize)) + } + l = len(m.PageToken) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ListInstallersResponse) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.Installers) > 0 { + for _, e := range m.Installers { + l = e.Size() + n += 1 + l + sovAuthservice(uint64(l)) + } + } + l = len(m.NextPageToken) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + func (m *PaginatedResource) Size() (n int) { if m == nil { return 0 @@ -31522,6 +35156,50 @@ func (m *GetGithubAuthRequestRequest) Size() (n int) { return n } +func (m *ListOIDCConnectorsRequest) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.PageSize != 0 { + n += 1 + sovAuthservice(uint64(m.PageSize)) + } + l = len(m.PageToken) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.WithSecrets { + n += 2 + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + 
return n +} + +func (m *ListOIDCConnectorsResponse) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.Connectors) > 0 { + for _, e := range m.Connectors { + l = e.Size() + n += 1 + l + sovAuthservice(uint64(l)) + } + } + l = len(m.NextPageToken) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + func (m *CreateOIDCConnectorRequest) Size() (n int) { if m == nil { return 0 @@ -31618,6 +35296,97 @@ func (m *UpsertSAMLConnectorRequest) Size() (n int) { return n } +func (m *ListSAMLConnectorsRequest) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.PageSize != 0 { + n += 1 + sovAuthservice(uint64(m.PageSize)) + } + l = len(m.PageToken) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.WithSecrets { + n += 2 + } + if m.NoFollowUrls { + n += 2 + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ListSAMLConnectorsResponse) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.Connectors) > 0 { + for _, e := range m.Connectors { + l = e.Size() + n += 1 + l + sovAuthservice(uint64(l)) + } + } + l = len(m.NextPageToken) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ListGithubConnectorsRequest) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.PageSize != 0 { + n += 1 + sovAuthservice(uint64(m.PageSize)) + } + l = len(m.PageToken) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.WithSecrets { + n += 2 + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ListGithubConnectorsResponse) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.Connectors) > 0 { + for _, e := range m.Connectors { + l = e.Size() + n += 1 + l + 
sovAuthservice(uint64(l)) + } + } + l = len(m.NextPageToken) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + func (m *CreateGithubConnectorRequest) Size() (n int) { if m == nil { return 0 @@ -32176,6 +35945,139 @@ func (m *AccessRequestAllowedPromotionResponse) Size() (n int) { return n } +func (m *ListProvisionTokensRequest) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.Limit != 0 { + n += 1 + sovAuthservice(uint64(m.Limit)) + } + l = len(m.StartKey) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if len(m.FilterRoles) > 0 { + for _, s := range m.FilterRoles { + l = len(s) + n += 1 + l + sovAuthservice(uint64(l)) + } + } + l = len(m.FilterBotName) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ListProvisionTokensResponse) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.Tokens) > 0 { + for _, e := range m.Tokens { + l = e.Size() + n += 1 + l + sovAuthservice(uint64(l)) + } + } + l = len(m.NextKey) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ListKubernetesClustersRequest) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.PageSize != 0 { + n += 1 + sovAuthservice(uint64(m.PageSize)) + } + l = len(m.PageToken) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ListKubernetesClustersResponse) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.KubernetesClusters) > 0 { + for _, e := range m.KubernetesClusters { + l = e.Size() + n += 1 + l + sovAuthservice(uint64(l)) + } + } + l = len(m.NextPageToken) + if l > 0 { + n += 1 + l + 
sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ListSnowflakeSessionsRequest) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.PageSize != 0 { + n += 1 + sovAuthservice(uint64(m.PageSize)) + } + l = len(m.PageToken) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ListSnowflakeSessionsResponse) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.Sessions) > 0 { + for _, e := range m.Sessions { + l = e.Size() + n += 1 + l + sovAuthservice(uint64(l)) + } + } + l = len(m.NextPageToken) + if l > 0 { + n += 1 + l + sovAuthservice(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + func sovAuthservice(x uint64) (n int) { return (math_bits.Len64(x|1) + 6) / 7 } @@ -34830,7 +38732,7 @@ func (m *ChangePasswordRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *PluginDataSeq) Unmarshal(dAtA []byte) error { +func (m *ListSemaphoresRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -34853,15 +38755,270 @@ func (m *PluginDataSeq) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: PluginDataSeq: wiretype end group for non-group") + return fmt.Errorf("proto: ListSemaphoresRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: PluginDataSeq: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListSemaphoresRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field PageSize", wireType) + } + m.PageSize = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= 
l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.PageSize |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PluginData", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PageToken", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.PageToken = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Filter", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Filter == nil { + m.Filter = &types.SemaphoreFilter{} + } + if err := m.Filter.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ListSemaphoresResponse) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ListSemaphoresResponse: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ListSemaphoresResponse: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Semaphores", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Semaphores = append(m.Semaphores, &types.SemaphoreV3{}) + if err := m.Semaphores[len(m.Semaphores)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field NextPageToken", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) 
+ if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.NextPageToken = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *PluginDataSeq) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: PluginDataSeq: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: PluginDataSeq: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field PluginData", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -35482,7 +39639,7 @@ func (m *CreateResetPasswordTokenRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *RenewableCertsRequest) Unmarshal(dAtA []byte) error { +func (m *ListResetPasswordTokenRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -35505,17 +39662,17 @@ func (m *RenewableCertsRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 
0x7) if wireType == 4 { - return fmt.Errorf("proto: RenewableCertsRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ListResetPasswordTokenRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: RenewableCertsRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListResetPasswordTokenRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Token", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field PageSize", wireType) } - var stringLen uint64 + m.PageSize = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -35525,29 +39682,16 @@ func (m *RenewableCertsRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + m.PageSize |= int32(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthAuthservice - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Token = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PublicKey", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PageToken", wireType) } - var byteLen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -35557,25 +39701,23 @@ func (m *RenewableCertsRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - byteLen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if byteLen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + byteLen + 
postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.PublicKey = append(m.PublicKey[:0], dAtA[iNdEx:postIndex]...) - if m.PublicKey == nil { - m.PublicKey = []byte{} - } + m.PageToken = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -35599,7 +39741,7 @@ func (m *RenewableCertsRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *PingRequest) Unmarshal(dAtA []byte) error { +func (m *ListResetPasswordTokenResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -35622,10 +39764,244 @@ func (m *PingRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: PingRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ListResetPasswordTokenResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: PingRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListResetPasswordTokenResponse: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserTokens", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.UserTokens = append(m.UserTokens, &types.UserTokenV3{}) + if err := m.UserTokens[len(m.UserTokens)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return 
fmt.Errorf("proto: wrong wireType = %d for field NextPageToken", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.NextPageToken = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *RenewableCertsRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: RenewableCertsRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: RenewableCertsRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Token", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Token = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field PublicKey", wireType) + } + var byteLen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + byteLen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if byteLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + byteLen + if postIndex < 0 { + return 
ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.PublicKey = append(m.PublicKey[:0], dAtA[iNdEx:postIndex]...) + if m.PublicKey == nil { + m.PublicKey = []byte{} + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *PingRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: PingRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: PingRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { default: @@ -36868,6 +41244,26 @@ func (m *Features) Unmarshal(dAtA []byte) error { } } m.AccessGraphDemoMode = bool(v != 0) + case 39: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field ClientIPRestrictions", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.ClientIPRestrictions = bool(v != 0) default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -41634,7 +46030,7 @@ func (m *GetWebTokensResponse) 
Unmarshal(dAtA []byte) error { } return nil } -func (m *UpsertKubernetesServerRequest) Unmarshal(dAtA []byte) error { +func (m *ListWebTokensRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -41657,17 +46053,36 @@ func (m *UpsertKubernetesServerRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UpsertKubernetesServerRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ListWebTokensRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UpsertKubernetesServerRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListWebTokensRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field PageSize", wireType) + } + m.PageSize = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.PageSize |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Server", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PageToken", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -41677,27 +46092,23 @@ func (m *UpsertKubernetesServerRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return 
io.ErrUnexpectedEOF } - if m.Server == nil { - m.Server = &types.KubernetesServerV3{} - } - if err := m.Server.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.PageToken = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -41721,7 +46132,7 @@ func (m *UpsertKubernetesServerRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *DeleteKubernetesServerRequest) Unmarshal(dAtA []byte) error { +func (m *ListWebTokensResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -41744,15 +46155,219 @@ func (m *DeleteKubernetesServerRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: DeleteKubernetesServerRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ListWebTokensResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: DeleteKubernetesServerRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListWebTokensResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field HostID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Tokens", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Tokens = append(m.Tokens, &types.WebTokenV3{}) + if err := m.Tokens[len(m.Tokens)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType 
!= 2 { + return fmt.Errorf("proto: wrong wireType = %d for field NextPageToken", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.NextPageToken = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *UpsertKubernetesServerRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: UpsertKubernetesServerRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: UpsertKubernetesServerRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Server", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Server == nil { + m.Server = &types.KubernetesServerV3{} + } + if err := m.Server.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *DeleteKubernetesServerRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: DeleteKubernetesServerRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: DeleteKubernetesServerRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field HostID", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -42877,96 +47492,11 @@ func (m *DatabaseCertRequest) Unmarshal(dAtA []byte) error { } m.CRLEndpoint = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipAuthservice(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthAuthservice - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *DatabaseCertResponse) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: DatabaseCertResponse: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: DatabaseCertResponse: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Cert", wireType) - } - var byteLen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - byteLen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if byteLen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + byteLen - if postIndex < 0 { - return ErrInvalidLengthAuthservice - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Cert = append(m.Cert[:0], dAtA[iNdEx:postIndex]...) 
- if m.Cert == nil { - m.Cert = []byte{} - } - iNdEx = postIndex - case 2: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CACerts", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CRLDomain", wireType) } - var byteLen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -42976,23 +47506,140 @@ func (m *DatabaseCertResponse) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - byteLen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if byteLen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + byteLen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.CACerts = append(m.CACerts, make([]byte, postIndex-iNdEx)) - copy(m.CACerts[len(m.CACerts)-1], dAtA[iNdEx:postIndex]) + m.CRLDomain = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *DatabaseCertResponse) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: DatabaseCertResponse: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: DatabaseCertResponse: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Cert", wireType) + } + var byteLen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + byteLen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if byteLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + byteLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Cert = append(m.Cert[:0], dAtA[iNdEx:postIndex]...) 
+ if m.Cert == nil { + m.Cert = []byte{} + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field CACerts", wireType) + } + var byteLen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + byteLen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if byteLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + byteLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.CACerts = append(m.CACerts, make([]byte, postIndex-iNdEx)) + copy(m.CACerts[len(m.CACerts)-1], dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -47118,6 +51765,38 @@ func (m *GetEventsRequest) Unmarshal(dAtA []byte) error { break } } + case 8: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Search", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Search = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -47634,7 +52313,7 @@ func (m *GetLocksResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *GetLockRequest) Unmarshal(dAtA []byte) error { +func (m *ListLocksRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -47657,15 +52336,34 @@ func (m *GetLockRequest) Unmarshal(dAtA 
[]byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GetLockRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ListLocksRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GetLockRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListLocksRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field PageSize", wireType) + } + m.PageSize = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.PageSize |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PageToken", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -47693,7 +52391,43 @@ func (m *GetLockRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Name = string(dAtA[iNdEx:postIndex]) + m.PageToken = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Filter", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Filter == nil { + m.Filter = &types.LockFilter{} + } + if err := 
m.Filter.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -47717,7 +52451,7 @@ func (m *GetLockRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *DeleteLockRequest) Unmarshal(dAtA []byte) error { +func (m *ListLocksResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -47740,15 +52474,49 @@ func (m *DeleteLockRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: DeleteLockRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ListLocksResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: DeleteLockRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListLocksResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Locks", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Locks = append(m.Locks, &types.LockV2{}) + if err := m.Locks[len(m.Locks)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field NextPageToken", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -47776,7 +52544,7 @@ func (m *DeleteLockRequest) Unmarshal(dAtA []byte) error { 
if postIndex > l { return io.ErrUnexpectedEOF } - m.Name = string(dAtA[iNdEx:postIndex]) + m.NextPageToken = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -47800,7 +52568,7 @@ func (m *DeleteLockRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *ReplaceRemoteLocksRequest) Unmarshal(dAtA []byte) error { +func (m *GetLockRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -47823,15 +52591,15 @@ func (m *ReplaceRemoteLocksRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ReplaceRemoteLocksRequest: wiretype end group for non-group") + return fmt.Errorf("proto: GetLockRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ReplaceRemoteLocksRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: GetLockRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ClusterName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -47859,41 +52627,7 @@ func (m *ReplaceRemoteLocksRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ClusterName = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Locks", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthAuthservice - } - if 
postIndex > l { - return io.ErrUnexpectedEOF - } - m.Locks = append(m.Locks, &types.LockV2{}) - if err := m.Locks[len(m.Locks)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Name = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -47917,7 +52651,7 @@ func (m *ReplaceRemoteLocksRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *GetWindowsDesktopServicesResponse) Unmarshal(dAtA []byte) error { +func (m *DeleteLockRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -47940,17 +52674,17 @@ func (m *GetWindowsDesktopServicesResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GetWindowsDesktopServicesResponse: wiretype end group for non-group") + return fmt.Errorf("proto: DeleteLockRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GetWindowsDesktopServicesResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DeleteLockRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Services", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -47960,25 +52694,23 @@ func (m *GetWindowsDesktopServicesResponse) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.Services = 
append(m.Services, &types.WindowsDesktopServiceV3{}) - if err := m.Services[len(m.Services)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Name = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -48002,7 +52734,7 @@ func (m *GetWindowsDesktopServicesResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *GetWindowsDesktopServiceRequest) Unmarshal(dAtA []byte) error { +func (m *ReplaceRemoteLocksRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -48025,15 +52757,15 @@ func (m *GetWindowsDesktopServiceRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GetWindowsDesktopServiceRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ReplaceRemoteLocksRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GetWindowsDesktopServiceRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ReplaceRemoteLocksRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ClusterName", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -48061,62 +52793,11 @@ func (m *GetWindowsDesktopServiceRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Name = string(dAtA[iNdEx:postIndex]) + m.ClusterName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipAuthservice(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthAuthservice - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *GetWindowsDesktopServiceResponse) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: GetWindowsDesktopServiceResponse: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: GetWindowsDesktopServiceResponse: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Service", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Locks", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -48143,10 +52824,8 @@ func (m *GetWindowsDesktopServiceResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Service == nil { - m.Service = &types.WindowsDesktopServiceV3{} - } - if err := m.Service.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.Locks = append(m.Locks, &types.LockV2{}) + if err := m.Locks[len(m.Locks)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -48172,7 +52851,7 @@ func (m *GetWindowsDesktopServiceResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *DeleteWindowsDesktopServiceRequest) Unmarshal(dAtA []byte) error { +func (m *ListAppsRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -48195,15 +52874,34 @@ func (m *DeleteWindowsDesktopServiceRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 
{ - return fmt.Errorf("proto: DeleteWindowsDesktopServiceRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ListAppsRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: DeleteWindowsDesktopServiceRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListAppsRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Limit", wireType) + } + m.Limit = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Limit |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StartKey", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -48231,7 +52929,7 @@ func (m *DeleteWindowsDesktopServiceRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Name = string(dAtA[iNdEx:postIndex]) + m.StartKey = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -48255,7 +52953,7 @@ func (m *DeleteWindowsDesktopServiceRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *GetWindowsDesktopsResponse) Unmarshal(dAtA []byte) error { +func (m *ListAppsResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -48278,15 +52976,15 @@ func (m *GetWindowsDesktopsResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GetWindowsDesktopsResponse: wiretype end group for non-group") + return fmt.Errorf("proto: ListAppsResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: 
GetWindowsDesktopsResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListAppsResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Desktops", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Applications", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -48313,11 +53011,43 @@ func (m *GetWindowsDesktopsResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Desktops = append(m.Desktops, &types.WindowsDesktopV3{}) - if err := m.Desktops[len(m.Desktops)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.Applications = append(m.Applications, &types.AppV3{}) + if err := m.Applications[len(m.Applications)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field NextKey", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.NextKey = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -48340,7 +53070,7 @@ func (m *GetWindowsDesktopsResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *DeleteWindowsDesktopRequest) Unmarshal(dAtA []byte) error { +func (m *ListDatabasesRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -48363,17 +53093,17 @@ func (m 
*DeleteWindowsDesktopRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: DeleteWindowsDesktopRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ListDatabasesRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: DeleteWindowsDesktopRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListDatabasesRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field PageSize", wireType) } - var stringLen uint64 + m.PageSize = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -48383,27 +53113,14 @@ func (m *DeleteWindowsDesktopRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + m.PageSize |= int32(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthAuthservice - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Name = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field HostID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PageToken", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -48431,7 +53148,7 @@ func (m *DeleteWindowsDesktopRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.HostID = string(dAtA[iNdEx:postIndex]) + m.PageToken = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -48455,7 +53172,7 @@ func (m 
*DeleteWindowsDesktopRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *WindowsDesktopCertRequest) Unmarshal(dAtA []byte) error { +func (m *ListDatabasesResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -48478,17 +53195,17 @@ func (m *WindowsDesktopCertRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: WindowsDesktopCertRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ListDatabasesResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: WindowsDesktopCertRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListDatabasesResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CSR", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Databases", wireType) } - var byteLen int + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -48498,29 +53215,29 @@ func (m *WindowsDesktopCertRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - byteLen |= int(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - if byteLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + byteLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.CSR = append(m.CSR[:0], dAtA[iNdEx:postIndex]...) 
- if m.CSR == nil { - m.CSR = []byte{} + m.Databases = append(m.Databases, &types.DatabaseV3{}) + if err := m.Databases[len(m.Databases)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CRLEndpoint", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field NextPageToken", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -48548,27 +53265,8 @@ func (m *WindowsDesktopCertRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.CRLEndpoint = string(dAtA[iNdEx:postIndex]) + m.NextPageToken = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field TTL", wireType) - } - m.TTL = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.TTL |= Duration(b&0x7F) << shift - if b < 0x80 { - break - } - } default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -48591,7 +53289,7 @@ func (m *WindowsDesktopCertRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *WindowsDesktopCertResponse) Unmarshal(dAtA []byte) error { +func (m *GetWindowsDesktopServicesResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -48614,17 +53312,17 @@ func (m *WindowsDesktopCertResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: WindowsDesktopCertResponse: wiretype end group for non-group") + return fmt.Errorf("proto: GetWindowsDesktopServicesResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: WindowsDesktopCertResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: 
GetWindowsDesktopServicesResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Cert", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Services", wireType) } - var byteLen int + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -48634,24 +53332,24 @@ func (m *WindowsDesktopCertResponse) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - byteLen |= int(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - if byteLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + byteLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.Cert = append(m.Cert[:0], dAtA[iNdEx:postIndex]...) - if m.Cert == nil { - m.Cert = []byte{} + m.Services = append(m.Services, &types.WindowsDesktopServiceV3{}) + if err := m.Services[len(m.Services)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } iNdEx = postIndex default: @@ -48676,7 +53374,7 @@ func (m *WindowsDesktopCertResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *DesktopBootstrapScriptResponse) Unmarshal(dAtA []byte) error { +func (m *GetWindowsDesktopServiceRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -48699,15 +53397,15 @@ func (m *DesktopBootstrapScriptResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: DesktopBootstrapScriptResponse: wiretype end group for non-group") + return fmt.Errorf("proto: GetWindowsDesktopServiceRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: DesktopBootstrapScriptResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: 
GetWindowsDesktopServiceRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Script", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -48735,7 +53433,7 @@ func (m *DesktopBootstrapScriptResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Script = string(dAtA[iNdEx:postIndex]) + m.Name = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -48759,7 +53457,7 @@ func (m *DesktopBootstrapScriptResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *ListSAMLIdPServiceProvidersRequest) Unmarshal(dAtA []byte) error { +func (m *GetWindowsDesktopServiceResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -48782,17 +53480,17 @@ func (m *ListSAMLIdPServiceProvidersRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ListSAMLIdPServiceProvidersRequest: wiretype end group for non-group") + return fmt.Errorf("proto: GetWindowsDesktopServiceResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ListSAMLIdPServiceProvidersRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: GetWindowsDesktopServiceResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Limit", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Service", wireType) } - m.Limit = 0 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -48802,14 +53500,82 @@ func (m *ListSAMLIdPServiceProvidersRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] 
iNdEx++ - m.Limit |= int32(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - case 2: + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Service == nil { + m.Service = &types.WindowsDesktopServiceV3{} + } + if err := m.Service.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *DeleteWindowsDesktopServiceRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: DeleteWindowsDesktopServiceRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: DeleteWindowsDesktopServiceRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field NextKey", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -48837,7 +53603,7 @@ func (m *ListSAMLIdPServiceProvidersRequest) Unmarshal(dAtA 
[]byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.NextKey = string(dAtA[iNdEx:postIndex]) + m.Name = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -48861,7 +53627,7 @@ func (m *ListSAMLIdPServiceProvidersRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *ListSAMLIdPServiceProvidersResponse) Unmarshal(dAtA []byte) error { +func (m *ListWindowsDesktopsRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -48884,15 +53650,66 @@ func (m *ListSAMLIdPServiceProvidersResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ListSAMLIdPServiceProvidersResponse: wiretype end group for non-group") + return fmt.Errorf("proto: ListWindowsDesktopsRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ListSAMLIdPServiceProvidersResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListWindowsDesktopsRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Limit", wireType) + } + m.Limit = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Limit |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServiceProviders", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StartKey", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + 
if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.StartKey = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Labels", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -48919,14 +53736,107 @@ func (m *ListSAMLIdPServiceProvidersResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ServiceProviders = append(m.ServiceProviders, &types.SAMLIdPServiceProviderV1{}) - if err := m.ServiceProviders[len(m.ServiceProviders)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + if m.Labels == nil { + m.Labels = make(map[string]string) + } + var mapkey string + var mapvalue string + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthAuthservice + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey < 0 { + return ErrInvalidLengthAuthservice + } + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var stringLenmapvalue 
uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapvalue |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapvalue := int(stringLenmapvalue) + if intStringLenmapvalue < 0 { + return ErrInvalidLengthAuthservice + } + postStringIndexmapvalue := iNdEx + intStringLenmapvalue + if postStringIndexmapvalue < 0 { + return ErrInvalidLengthAuthservice + } + if postStringIndexmapvalue > l { + return io.ErrUnexpectedEOF + } + mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) + iNdEx = postStringIndexmapvalue + } else { + iNdEx = entryPreIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } } + m.Labels[mapkey] = mapvalue iNdEx = postIndex - case 2: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field NextKey", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PredicateExpression", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -48954,13 +53864,13 @@ func (m *ListSAMLIdPServiceProvidersResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.NextKey = string(dAtA[iNdEx:postIndex]) + m.PredicateExpression = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field TotalCount", wireType) + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SearchKeywords", wireType) } - m.TotalCount = 0 + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -48970,11 +53880,57 @@ func (m *ListSAMLIdPServiceProvidersResponse) 
Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.TotalCount |= int32(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.SearchKeywords = append(m.SearchKeywords, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field WindowsDesktopFilter", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.WindowsDesktopFilter.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -48997,7 +53953,7 @@ func (m *ListSAMLIdPServiceProvidersResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *GetSAMLIdPServiceProviderRequest) Unmarshal(dAtA []byte) error { +func (m *ListWindowsDesktopsResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -49020,15 +53976,49 @@ func (m *GetSAMLIdPServiceProviderRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GetSAMLIdPServiceProviderRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ListWindowsDesktopsResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return 
fmt.Errorf("proto: GetSAMLIdPServiceProviderRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListWindowsDesktopsResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Desktops", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Desktops = append(m.Desktops, &types.WindowsDesktopV3{}) + if err := m.Desktops[len(m.Desktops)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field NextKey", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -49056,7 +54046,7 @@ func (m *GetSAMLIdPServiceProviderRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Name = string(dAtA[iNdEx:postIndex]) + m.NextKey = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -49080,7 +54070,7 @@ func (m *GetSAMLIdPServiceProviderRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *DeleteSAMLIdPServiceProviderRequest) Unmarshal(dAtA []byte) error { +func (m *GetWindowsDesktopsResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -49103,17 +54093,17 @@ func (m *DeleteSAMLIdPServiceProviderRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return 
fmt.Errorf("proto: DeleteSAMLIdPServiceProviderRequest: wiretype end group for non-group") + return fmt.Errorf("proto: GetWindowsDesktopsResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: DeleteSAMLIdPServiceProviderRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: GetWindowsDesktopsResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Desktops", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -49123,23 +54113,25 @@ func (m *DeleteSAMLIdPServiceProviderRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.Name = string(dAtA[iNdEx:postIndex]) + m.Desktops = append(m.Desktops, &types.WindowsDesktopV3{}) + if err := m.Desktops[len(m.Desktops)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -49163,7 +54155,7 @@ func (m *DeleteSAMLIdPServiceProviderRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *ListUserGroupsRequest) Unmarshal(dAtA []byte) error { +func (m *DeleteWindowsDesktopRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -49186,17 +54178,17 @@ func (m *ListUserGroupsRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: 
ListUserGroupsRequest: wiretype end group for non-group") + return fmt.Errorf("proto: DeleteWindowsDesktopRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ListUserGroupsRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DeleteWindowsDesktopRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Limit", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) } - m.Limit = 0 + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -49206,14 +54198,27 @@ func (m *ListUserGroupsRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.Limit |= int32(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Name = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field NextKey", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field HostID", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -49241,7 +54246,7 @@ func (m *ListUserGroupsRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.NextKey = string(dAtA[iNdEx:postIndex]) + m.HostID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -49265,7 +54270,7 @@ func (m *ListUserGroupsRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *ListUserGroupsResponse) Unmarshal(dAtA []byte) error { +func (m *WindowsDesktopCertRequest) Unmarshal(dAtA []byte) error { l := 
len(dAtA) iNdEx := 0 for iNdEx < l { @@ -49288,17 +54293,17 @@ func (m *ListUserGroupsResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ListUserGroupsResponse: wiretype end group for non-group") + return fmt.Errorf("proto: WindowsDesktopCertRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ListUserGroupsResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: WindowsDesktopCertRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserGroups", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CSR", wireType) } - var msglen int + var byteLen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -49308,29 +54313,29 @@ func (m *ListUserGroupsResponse) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + byteLen |= int(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + if byteLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + byteLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.UserGroups = append(m.UserGroups, &types.UserGroupV1{}) - if err := m.UserGroups[len(m.UserGroups)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + m.CSR = append(m.CSR[:0], dAtA[iNdEx:postIndex]...) 
+ if m.CSR == nil { + m.CSR = []byte{} } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field NextKey", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CRLEndpoint", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -49358,13 +54363,13 @@ func (m *ListUserGroupsResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.NextKey = string(dAtA[iNdEx:postIndex]) + m.CRLEndpoint = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field TotalCount", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TTL", wireType) } - m.TotalCount = 0 + m.TTL = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -49374,11 +54379,43 @@ func (m *ListUserGroupsResponse) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.TotalCount |= int32(b&0x7F) << shift + m.TTL |= Duration(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field CRLDomain", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.CRLDomain = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -49401,7 +54438,7 @@ func (m *ListUserGroupsResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *GetUserGroupRequest) Unmarshal(dAtA 
[]byte) error { +func (m *WindowsDesktopCertResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -49424,17 +54461,17 @@ func (m *GetUserGroupRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GetUserGroupRequest: wiretype end group for non-group") + return fmt.Errorf("proto: WindowsDesktopCertResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GetUserGroupRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: WindowsDesktopCertResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Cert", wireType) } - var stringLen uint64 + var byteLen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -49444,24 +54481,26 @@ func (m *GetUserGroupRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + byteLen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if byteLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + byteLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.Name = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex + m.Cert = append(m.Cert[:0], dAtA[iNdEx:postIndex]...) 
+ if m.Cert == nil { + m.Cert = []byte{} + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -49484,7 +54523,7 @@ func (m *GetUserGroupRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *DeleteUserGroupRequest) Unmarshal(dAtA []byte) error { +func (m *DesktopBootstrapScriptResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -49507,15 +54546,15 @@ func (m *DeleteUserGroupRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: DeleteUserGroupRequest: wiretype end group for non-group") + return fmt.Errorf("proto: DesktopBootstrapScriptResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: DeleteUserGroupRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DesktopBootstrapScriptResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Script", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -49543,7 +54582,7 @@ func (m *DeleteUserGroupRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Name = string(dAtA[iNdEx:postIndex]) + m.Script = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -49567,7 +54606,7 @@ func (m *DeleteUserGroupRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *CertAuthorityRequest) Unmarshal(dAtA []byte) error { +func (m *ListSAMLIdPServiceProvidersRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -49590,15 +54629,34 @@ func (m *CertAuthorityRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: 
CertAuthorityRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ListSAMLIdPServiceProvidersRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: CertAuthorityRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListSAMLIdPServiceProvidersRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Limit", wireType) + } + m.Limit = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Limit |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field NextKey", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -49626,7 +54684,7 @@ func (m *CertAuthorityRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Type = github_com_gravitational_teleport_api_types.CertAuthType(dAtA[iNdEx:postIndex]) + m.NextKey = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -49650,7 +54708,7 @@ func (m *CertAuthorityRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *CRL) Unmarshal(dAtA []byte) error { +func (m *ListSAMLIdPServiceProvidersResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -49673,17 +54731,17 @@ func (m *CRL) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: CRL: wiretype end group for non-group") + return fmt.Errorf("proto: ListSAMLIdPServiceProvidersResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: CRL: illegal tag %d (wire type %d)", 
fieldNum, wire) + return fmt.Errorf("proto: ListSAMLIdPServiceProvidersResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CRL", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServiceProviders", wireType) } - var byteLen int + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -49693,26 +54751,77 @@ func (m *CRL) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - byteLen |= int(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - if byteLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + byteLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.CRL = append(m.CRL[:0], dAtA[iNdEx:postIndex]...) - if m.CRL == nil { - m.CRL = []byte{} + m.ServiceProviders = append(m.ServiceProviders, &types.SAMLIdPServiceProviderV1{}) + if err := m.ServiceProviders[len(m.ServiceProviders)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field NextKey", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.NextKey = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field 
TotalCount", wireType) + } + m.TotalCount = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TotalCount |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -49735,7 +54844,7 @@ func (m *CRL) Unmarshal(dAtA []byte) error { } return nil } -func (m *ChangeUserAuthenticationRequest) Unmarshal(dAtA []byte) error { +func (m *GetSAMLIdPServiceProviderRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -49758,15 +54867,15 @@ func (m *ChangeUserAuthenticationRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ChangeUserAuthenticationRequest: wiretype end group for non-group") + return fmt.Errorf("proto: GetSAMLIdPServiceProviderRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ChangeUserAuthenticationRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: GetSAMLIdPServiceProviderRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TokenID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -49794,47 +54903,64 @@ func (m *ChangeUserAuthenticationRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.TokenID = string(dAtA[iNdEx:postIndex]) + m.Name = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field NewPassword", wireType) - } - var byteLen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if 
iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - byteLen |= int(b&0x7F) << shift - if b < 0x80 { - break - } + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err } - if byteLen < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + byteLen - if postIndex < 0 { - return ErrInvalidLengthAuthservice + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF } - if postIndex > l { + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *DeleteSAMLIdPServiceProviderRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { return io.ErrUnexpectedEOF } - m.NewPassword = append(m.NewPassword[:0], dAtA[iNdEx:postIndex]...) 
- if m.NewPassword == nil { - m.NewPassword = []byte{} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break } - iNdEx = postIndex - case 3: + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: DeleteSAMLIdPServiceProviderRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: DeleteSAMLIdPServiceProviderRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field NewMFARegisterResponse", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -49844,33 +54970,80 @@ func (m *ChangeUserAuthenticationRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - if m.NewMFARegisterResponse == nil { - m.NewMFARegisterResponse = &MFARegisterResponse{} - } - if err := m.NewMFARegisterResponse.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.Name = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { return err } - iNdEx = postIndex - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field NewDeviceName", wireType) + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice } - var stringLen uint64 + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } 
+ m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ListUserGroupsRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ListUserGroupsRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ListUserGroupsRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Limit", wireType) + } + m.Limit = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -49880,27 +55053,14 @@ func (m *ChangeUserAuthenticationRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + m.Limit |= int32(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthAuthservice - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.NewDeviceName = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 5: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field LoginIP", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field NextKey", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -49928,7 +55088,7 @@ func (m *ChangeUserAuthenticationRequest) Unmarshal(dAtA []byte) error { if 
postIndex > l { return io.ErrUnexpectedEOF } - m.LoginIP = string(dAtA[iNdEx:postIndex]) + m.NextKey = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -49952,7 +55112,7 @@ func (m *ChangeUserAuthenticationRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *ChangeUserAuthenticationResponse) Unmarshal(dAtA []byte) error { +func (m *ListUserGroupsResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -49975,15 +55135,15 @@ func (m *ChangeUserAuthenticationResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ChangeUserAuthenticationResponse: wiretype end group for non-group") + return fmt.Errorf("proto: ListUserGroupsResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ChangeUserAuthenticationResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListUserGroupsResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WebSession", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserGroups", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -50010,18 +55170,16 @@ func (m *ChangeUserAuthenticationResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.WebSession == nil { - m.WebSession = &types.WebSessionV2{} - } - if err := m.WebSession.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.UserGroups = append(m.UserGroups, &types.UserGroupV1{}) + if err := m.UserGroups[len(m.UserGroups)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Recovery", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field NextKey", wireType) } - var msglen 
int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -50031,33 +55189,29 @@ func (m *ChangeUserAuthenticationResponse) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - if m.Recovery == nil { - m.Recovery = &RecoveryCodes{} - } - if err := m.Recovery.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.NextKey = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field PrivateKeyPolicyEnabled", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TotalCount", wireType) } - var v int + m.TotalCount = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -50067,12 +55221,11 @@ func (m *ChangeUserAuthenticationResponse) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - v |= int(b&0x7F) << shift + m.TotalCount |= int32(b&0x7F) << shift if b < 0x80 { break } } - m.PrivateKeyPolicyEnabled = bool(v != 0) default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -50095,7 +55248,7 @@ func (m *ChangeUserAuthenticationResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *StartAccountRecoveryRequest) Unmarshal(dAtA []byte) error { +func (m *GetUserGroupRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -50118,15 +55271,15 @@ func (m *StartAccountRecoveryRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: StartAccountRecoveryRequest: 
wiretype end group for non-group") + return fmt.Errorf("proto: GetUserGroupRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: StartAccountRecoveryRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: GetUserGroupRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Username", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -50154,61 +55307,8 @@ func (m *StartAccountRecoveryRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Username = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RecoveryCode", wireType) - } - var byteLen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - byteLen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if byteLen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + byteLen - if postIndex < 0 { - return ErrInvalidLengthAuthservice - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.RecoveryCode = append(m.RecoveryCode[:0], dAtA[iNdEx:postIndex]...) 
- if m.RecoveryCode == nil { - m.RecoveryCode = []byte{} - } + m.Name = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field RecoverType", wireType) - } - m.RecoverType = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.RecoverType |= types.UserTokenUsage(b&0x7F) << shift - if b < 0x80 { - break - } - } default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -50231,7 +55331,7 @@ func (m *StartAccountRecoveryRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *VerifyAccountRecoveryRequest) Unmarshal(dAtA []byte) error { +func (m *DeleteUserGroupRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -50254,15 +55354,15 @@ func (m *VerifyAccountRecoveryRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: VerifyAccountRecoveryRequest: wiretype end group for non-group") + return fmt.Errorf("proto: DeleteUserGroupRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: VerifyAccountRecoveryRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DeleteUserGroupRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RecoveryStartTokenID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -50290,11 +55390,62 @@ func (m *VerifyAccountRecoveryRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.RecoveryStartTokenID = string(dAtA[iNdEx:postIndex]) + m.Name = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 
2: + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *CertAuthorityRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: CertAuthorityRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: CertAuthorityRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Username", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -50322,46 +55473,64 @@ func (m *VerifyAccountRecoveryRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Username = string(dAtA[iNdEx:postIndex]) + m.Type = github_com_gravitational_teleport_api_types.CertAuthType(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Password", wireType) - } - var byteLen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - byteLen 
|= int(b&0x7F) << shift - if b < 0x80 { - break - } + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err } - if byteLen < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + byteLen - if postIndex < 0 { - return ErrInvalidLengthAuthservice + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF } - if postIndex > l { + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *CRL) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { return io.ErrUnexpectedEOF } - v := make([]byte, postIndex-iNdEx) - copy(v, dAtA[iNdEx:postIndex]) - m.AuthnCred = &VerifyAccountRecoveryRequest_Password{v} - iNdEx = postIndex - case 4: + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: CRL: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: CRL: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MFAAuthenticateResponse", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CRL", wireType) } - var msglen int + var byteLen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -50371,26 +55540,25 @@ func (m *VerifyAccountRecoveryRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + byteLen |= int(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + if byteLen < 0 { return 
ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + byteLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - v := &MFAAuthenticateResponse{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + m.CRL = append(m.CRL[:0], dAtA[iNdEx:postIndex]...) + if m.CRL == nil { + m.CRL = []byte{} } - m.AuthnCred = &VerifyAccountRecoveryRequest_MFAAuthenticateResponse{v} iNdEx = postIndex default: iNdEx = preIndex @@ -50414,7 +55582,7 @@ func (m *VerifyAccountRecoveryRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *CompleteAccountRecoveryRequest) Unmarshal(dAtA []byte) error { +func (m *ChangeUserAuthenticationRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -50437,15 +55605,15 @@ func (m *CompleteAccountRecoveryRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: CompleteAccountRecoveryRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ChangeUserAuthenticationRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: CompleteAccountRecoveryRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ChangeUserAuthenticationRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RecoveryApprovedTokenID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TokenID", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -50473,13 +55641,13 @@ func (m *CompleteAccountRecoveryRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.RecoveryApprovedTokenID = string(dAtA[iNdEx:postIndex]) + m.TokenID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return 
fmt.Errorf("proto: wrong wireType = %d for field NewDeviceName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field NewPassword", wireType) } - var stringLen uint64 + var byteLen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -50489,29 +55657,31 @@ func (m *CompleteAccountRecoveryRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + byteLen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if byteLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + byteLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.NewDeviceName = string(dAtA[iNdEx:postIndex]) + m.NewPassword = append(m.NewPassword[:0], dAtA[iNdEx:postIndex]...) + if m.NewPassword == nil { + m.NewPassword = []byte{} + } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field NewPassword", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field NewMFARegisterResponse", wireType) } - var byteLen int + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -50521,30 +55691,33 @@ func (m *CompleteAccountRecoveryRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - byteLen |= int(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - if byteLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + byteLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - v := make([]byte, postIndex-iNdEx) - copy(v, dAtA[iNdEx:postIndex]) - m.NewAuthnCred = &CompleteAccountRecoveryRequest_NewPassword{v} + if m.NewMFARegisterResponse == nil { + m.NewMFARegisterResponse = &MFARegisterResponse{} + 
} + if err := m.NewMFARegisterResponse.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field NewMFAResponse", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field NewDeviceName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -50554,26 +55727,55 @@ func (m *CompleteAccountRecoveryRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - v := &MFARegisterResponse{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + m.NewDeviceName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field LoginIP", wireType) } - m.NewAuthnCred = &CompleteAccountRecoveryRequest_NewMFAResponse{v} + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.LoginIP = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -50597,7 +55799,7 @@ func (m *CompleteAccountRecoveryRequest) Unmarshal(dAtA []byte) 
error { } return nil } -func (m *RecoveryCodes) Unmarshal(dAtA []byte) error { +func (m *ChangeUserAuthenticationResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -50620,17 +55822,17 @@ func (m *RecoveryCodes) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: RecoveryCodes: wiretype end group for non-group") + return fmt.Errorf("proto: ChangeUserAuthenticationResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: RecoveryCodes: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ChangeUserAuthenticationResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Codes", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field WebSession", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -50640,27 +55842,31 @@ func (m *RecoveryCodes) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.Codes = append(m.Codes, string(dAtA[iNdEx:postIndex])) + if m.WebSession == nil { + m.WebSession = &types.WebSessionV2{} + } + if err := m.WebSession.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Created", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Recovery", wireType) } var 
msglen int for shift := uint(0); ; shift += 7 { @@ -50687,10 +55893,33 @@ func (m *RecoveryCodes) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.Created, dAtA[iNdEx:postIndex]); err != nil { + if m.Recovery == nil { + m.Recovery = &RecoveryCodes{} + } + if err := m.Recovery.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex + case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field PrivateKeyPolicyEnabled", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.PrivateKeyPolicyEnabled = bool(v != 0) default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -50713,7 +55942,7 @@ func (m *RecoveryCodes) Unmarshal(dAtA []byte) error { } return nil } -func (m *CreateAccountRecoveryCodesRequest) Unmarshal(dAtA []byte) error { +func (m *StartAccountRecoveryRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -50736,15 +55965,15 @@ func (m *CreateAccountRecoveryCodesRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: CreateAccountRecoveryCodesRequest: wiretype end group for non-group") + return fmt.Errorf("proto: StartAccountRecoveryRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: CreateAccountRecoveryCodesRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: StartAccountRecoveryRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TokenID", wireType) + return fmt.Errorf("proto: wrong wireType = %d 
for field Username", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -50772,64 +56001,13 @@ func (m *CreateAccountRecoveryCodesRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.TokenID = string(dAtA[iNdEx:postIndex]) + m.Username = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipAuthservice(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthAuthservice - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *GetAccountRecoveryTokenRequest) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: GetAccountRecoveryTokenRequest: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: GetAccountRecoveryTokenRequest: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RecoveryTokenID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RecoveryCode", wireType) } - var stringLen uint64 + var byteLen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -50839,75 +56017,45 @@ func (m *GetAccountRecoveryTokenRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= 
uint64(b&0x7F) << shift + byteLen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if byteLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + byteLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.RecoveryTokenID = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipAuthservice(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthAuthservice - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *GetAccountRecoveryCodesRequest) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice + m.RecoveryCode = append(m.RecoveryCode[:0], dAtA[iNdEx:postIndex]...) 
+ if m.RecoveryCode == nil { + m.RecoveryCode = []byte{} } - if iNdEx >= l { - return io.ErrUnexpectedEOF + iNdEx = postIndex + case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field RecoverType", wireType) } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break + m.RecoverType = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.RecoverType |= types.UserTokenUsage(b&0x7F) << shift + if b < 0x80 { + break + } } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: GetAccountRecoveryCodesRequest: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: GetAccountRecoveryCodesRequest: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -50930,7 +56078,7 @@ func (m *GetAccountRecoveryCodesRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *UserCredentials) Unmarshal(dAtA []byte) error { +func (m *VerifyAccountRecoveryRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -50953,15 +56101,15 @@ func (m *UserCredentials) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UserCredentials: wiretype end group for non-group") + return fmt.Errorf("proto: VerifyAccountRecoveryRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UserCredentials: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: VerifyAccountRecoveryRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Username", wireType) + return 
fmt.Errorf("proto: wrong wireType = %d for field RecoveryStartTokenID", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -50989,13 +56137,13 @@ func (m *UserCredentials) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Username = string(dAtA[iNdEx:postIndex]) + m.RecoveryStartTokenID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Password", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Username", wireType) } - var byteLen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -51005,128 +56153,92 @@ func (m *UserCredentials) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - byteLen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if byteLen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + byteLen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.Password = append(m.Password[:0], dAtA[iNdEx:postIndex]...) 
- if m.Password == nil { - m.Password = []byte{} - } + m.Username = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipAuthservice(dAtA[iNdEx:]) - if err != nil { - return err + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Password", wireType) } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthAuthservice + var byteLen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + byteLen |= int(b&0x7F) << shift + if b < 0x80 { + break + } } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF + if byteLen < 0 { + return ErrInvalidLengthAuthservice } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *ContextUser) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice + postIndex := iNdEx + byteLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice } - if iNdEx >= l { + if postIndex > l { return io.ErrUnexpectedEOF } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break + v := make([]byte, postIndex-iNdEx) + copy(v, dAtA[iNdEx:postIndex]) + m.AuthnCred = &VerifyAccountRecoveryRequest_Password{v} + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field MFAAuthenticateResponse", wireType) } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: ContextUser: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: ContextUser: illegal tag %d (wire type %d)", fieldNum, wire) - } - 
switch fieldNum { - default: - iNdEx = preIndex - skippy, err := skipAuthservice(dAtA[iNdEx:]) - if err != nil { - return err + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } } - if (skippy < 0) || (iNdEx+skippy) < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *Passwordless) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice } - if iNdEx >= l { + if postIndex > l { return io.ErrUnexpectedEOF } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break + v := &MFAAuthenticateResponse{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: Passwordless: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: Passwordless: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { + m.AuthnCred = &VerifyAccountRecoveryRequest_MFAAuthenticateResponse{v} + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -51149,7 +56261,7 @@ func (m *Passwordless) Unmarshal(dAtA []byte) error { } return nil } -func (m *CreateAuthenticateChallengeRequest) Unmarshal(dAtA []byte) error { +func (m *CompleteAccountRecoveryRequest) Unmarshal(dAtA 
[]byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -51172,17 +56284,17 @@ func (m *CreateAuthenticateChallengeRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: CreateAuthenticateChallengeRequest: wiretype end group for non-group") + return fmt.Errorf("proto: CompleteAccountRecoveryRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: CreateAuthenticateChallengeRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: CompleteAccountRecoveryRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserCredentials", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RecoveryApprovedTokenID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -51192,30 +56304,27 @@ func (m *CreateAuthenticateChallengeRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - v := &UserCredentials{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Request = &CreateAuthenticateChallengeRequest_UserCredentials{v} + m.RecoveryApprovedTokenID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RecoveryStartTokenID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field NewDeviceName", wireType) } var 
stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -51243,13 +56352,13 @@ func (m *CreateAuthenticateChallengeRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Request = &CreateAuthenticateChallengeRequest_RecoveryStartTokenID{string(dAtA[iNdEx:postIndex])} + m.NewDeviceName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ContextUser", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field NewPassword", wireType) } - var msglen int + var byteLen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -51259,30 +56368,28 @@ func (m *CreateAuthenticateChallengeRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + byteLen |= int(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + if byteLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + byteLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - v := &ContextUser{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Request = &CreateAuthenticateChallengeRequest_ContextUser{v} + v := make([]byte, postIndex-iNdEx) + copy(v, dAtA[iNdEx:postIndex]) + m.NewAuthnCred = &CompleteAccountRecoveryRequest_NewPassword{v} iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Passwordless", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field NewMFAResponse", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -51309,87 +56416,66 @@ func (m *CreateAuthenticateChallengeRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &Passwordless{} + v := &MFARegisterResponse{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - 
m.Request = &CreateAuthenticateChallengeRequest_Passwordless{v} + m.NewAuthnCred = &CompleteAccountRecoveryRequest_NewMFAResponse{v} iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MFARequiredCheck", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthAuthservice + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err } - postIndex := iNdEx + msglen - if postIndex < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthAuthservice } - if postIndex > l { + if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - if m.MFARequiredCheck == nil { - m.MFARequiredCheck = &IsMFARequiredRequest{} - } - if err := m.MFARequiredCheck.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 6: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ChallengeExtensions", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthAuthservice + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *RecoveryCodes) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice } - if postIndex > l { + if iNdEx >= l { return io.ErrUnexpectedEOF } - if m.ChallengeExtensions == nil { - m.ChallengeExtensions = &v11.ChallengeExtensions{} - } - if err := m.ChallengeExtensions.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break } - iNdEx = postIndex - case 7: + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: RecoveryCodes: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: RecoveryCodes: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SSOClientRedirectURL", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Codes", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -51417,13 +56503,13 @@ func (m *CreateAuthenticateChallengeRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.SSOClientRedirectURL = string(dAtA[iNdEx:postIndex]) + m.Codes = append(m.Codes, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 8: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ProxyAddress", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Created", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -51433,23 +56519,24 @@ func (m *CreateAuthenticateChallengeRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] 
iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.ProxyAddress = string(dAtA[iNdEx:postIndex]) + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.Created, dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -51473,7 +56560,7 @@ func (m *CreateAuthenticateChallengeRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *CreatePrivilegeTokenRequest) Unmarshal(dAtA []byte) error { +func (m *CreateAccountRecoveryCodesRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -51496,17 +56583,17 @@ func (m *CreatePrivilegeTokenRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: CreatePrivilegeTokenRequest: wiretype end group for non-group") + return fmt.Errorf("proto: CreateAccountRecoveryCodesRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: CreatePrivilegeTokenRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: CreateAccountRecoveryCodesRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ExistingMFAResponse", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TokenID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -51516,27 +56603,23 @@ func (m *CreatePrivilegeTokenRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << 
shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - if m.ExistingMFAResponse == nil { - m.ExistingMFAResponse = &MFAAuthenticateResponse{} - } - if err := m.ExistingMFAResponse.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.TokenID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -51560,7 +56643,7 @@ func (m *CreatePrivilegeTokenRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *CreateRegisterChallengeRequest) Unmarshal(dAtA []byte) error { +func (m *GetAccountRecoveryTokenRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -51583,15 +56666,15 @@ func (m *CreateRegisterChallengeRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: CreateRegisterChallengeRequest: wiretype end group for non-group") + return fmt.Errorf("proto: GetAccountRecoveryTokenRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: CreateRegisterChallengeRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: GetAccountRecoveryTokenRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TokenID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RecoveryTokenID", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -51619,82 +56702,59 @@ func (m *CreateRegisterChallengeRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.TokenID = string(dAtA[iNdEx:postIndex]) + 
m.RecoveryTokenID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field DeviceType", wireType) - } - m.DeviceType = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.DeviceType |= DeviceType(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 3: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field DeviceUsage", wireType) - } - m.DeviceUsage = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.DeviceUsage |= DeviceUsage(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ExistingMFAResponse", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthAuthservice + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err } - postIndex := iNdEx + msglen - if postIndex < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthAuthservice } - if postIndex > l { + if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - if m.ExistingMFAResponse == nil { - m.ExistingMFAResponse = &MFAAuthenticateResponse{} + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *GetAccountRecoveryCodesRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice } - if err := m.ExistingMFAResponse.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + if iNdEx >= l { + return io.ErrUnexpectedEOF } - iNdEx = postIndex + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: GetAccountRecoveryCodesRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: GetAccountRecoveryCodesRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -51717,7 +56777,7 @@ func (m *CreateRegisterChallengeRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *IdentityCenterAccount) Unmarshal(dAtA []byte) error { +func (m *UserCredentials) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -51740,15 +56800,15 @@ func (m *IdentityCenterAccount) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: IdentityCenterAccount: wiretype end group for non-group") + return fmt.Errorf("proto: UserCredentials: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: IdentityCenterAccount: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UserCredentials: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ID", wireType) + return fmt.Errorf("proto: wrong wireType = %d 
for field Username", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -51776,13 +56836,13 @@ func (m *IdentityCenterAccount) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ID = string(dAtA[iNdEx:postIndex]) + m.Username = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ARN", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Password", wireType) } - var stringLen uint64 + var byteLen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -51792,87 +56852,25 @@ func (m *IdentityCenterAccount) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + byteLen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if byteLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + byteLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.ARN = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccountName", wireType) + m.Password = append(m.Password[:0], dAtA[iNdEx:postIndex]...) 
+ if m.Password == nil { + m.Password = []byte{} } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthAuthservice - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.AccountName = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Description", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthAuthservice - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Description = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -51896,7 +56894,7 @@ func (m *IdentityCenterAccount) Unmarshal(dAtA []byte) error { } return nil } -func (m *IdentityCenterPermissionSet) Unmarshal(dAtA []byte) error { +func (m *ContextUser) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -51919,76 +56917,63 @@ func (m *IdentityCenterPermissionSet) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: IdentityCenterPermissionSet: wiretype end group for non-group") + return fmt.Errorf("proto: ContextUser: wiretype end group for non-group") } if fieldNum <= 0 { - 
return fmt.Errorf("proto: IdentityCenterPermissionSet: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ContextUser: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { - case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ARN", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthAuthservice + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err } - postIndex := iNdEx + intStringLen - if postIndex < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthAuthservice } - if postIndex > l { + if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.ARN = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthAuthservice + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *Passwordless) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice } - if postIndex > l { + if iNdEx >= l { return io.ErrUnexpectedEOF } - m.Name = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: Passwordless: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: Passwordless: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -52011,7 +56996,7 @@ func (m *IdentityCenterPermissionSet) Unmarshal(dAtA []byte) error { } return nil } -func (m *IdentityCenterAccountAssignment) Unmarshal(dAtA []byte) error { +func (m *CreateAuthenticateChallengeRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -52034,17 +57019,17 @@ func (m *IdentityCenterAccountAssignment) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: IdentityCenterAccountAssignment: wiretype end group for non-group") + return fmt.Errorf("proto: CreateAuthenticateChallengeRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: IdentityCenterAccountAssignment: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: CreateAuthenticateChallengeRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Kind", wireType) + return fmt.Errorf("proto: wrong 
wireType = %d for field UserCredentials", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -52054,27 +57039,30 @@ func (m *IdentityCenterAccountAssignment) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.Kind = string(dAtA[iNdEx:postIndex]) + v := &UserCredentials{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Request = &CreateAuthenticateChallengeRequest_UserCredentials{v} iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SubKind", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RecoveryStartTokenID", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -52102,13 +57090,13 @@ func (m *IdentityCenterAccountAssignment) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.SubKind = string(dAtA[iNdEx:postIndex]) + m.Request = &CreateAuthenticateChallengeRequest_RecoveryStartTokenID{string(dAtA[iNdEx:postIndex])} iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Version", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ContextUser", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -52118,27 +57106,30 @@ func (m *IdentityCenterAccountAssignment) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift 
if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.Version = string(dAtA[iNdEx:postIndex]) + v := &ContextUser{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Request = &CreateAuthenticateChallengeRequest_ContextUser{v} iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Passwordless", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -52165,15 +57156,17 @@ func (m *IdentityCenterAccountAssignment) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &Passwordless{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Request = &CreateAuthenticateChallengeRequest_Passwordless{v} iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DisplayName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MFARequiredCheck", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -52183,27 +57176,31 @@ func (m *IdentityCenterAccountAssignment) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return 
io.ErrUnexpectedEOF } - m.DisplayName = string(dAtA[iNdEx:postIndex]) + if m.MFARequiredCheck == nil { + m.MFARequiredCheck = &IsMFARequiredRequest{} + } + if err := m.MFARequiredCheck.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Account", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ChallengeExtensions", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -52230,18 +57227,18 @@ func (m *IdentityCenterAccountAssignment) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Account == nil { - m.Account = &IdentityCenterAccount{} + if m.ChallengeExtensions == nil { + m.ChallengeExtensions = &v11.ChallengeExtensions{} } - if err := m.Account.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ChallengeExtensions.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PermissionSet", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SSOClientRedirectURL", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -52251,27 +57248,55 @@ func (m *IdentityCenterAccountAssignment) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - if m.PermissionSet == nil { - m.PermissionSet = &IdentityCenterPermissionSet{} + m.SSOClientRedirectURL = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 
8: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ProxyAddress", wireType) } - if err := m.PermissionSet.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ProxyAddress = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -52295,7 +57320,7 @@ func (m *IdentityCenterAccountAssignment) Unmarshal(dAtA []byte) error { } return nil } -func (m *PaginatedResource) Unmarshal(dAtA []byte) error { +func (m *CreatePrivilegeTokenRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -52318,15 +57343,15 @@ func (m *PaginatedResource) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: PaginatedResource: wiretype end group for non-group") + return fmt.Errorf("proto: CreatePrivilegeTokenRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: PaginatedResource: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: CreatePrivilegeTokenRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseServer", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ExistingMFAResponse", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -52353,17 +57378,69 @@ func (m *PaginatedResource) Unmarshal(dAtA 
[]byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &types.DatabaseServerV3{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.ExistingMFAResponse == nil { + m.ExistingMFAResponse = &MFAAuthenticateResponse{} + } + if err := m.ExistingMFAResponse.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Resource = &PaginatedResource_DatabaseServer{v} iNdEx = postIndex - case 2: + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *CreateRegisterChallengeRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: CreateRegisterChallengeRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: CreateRegisterChallengeRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AppServer", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TokenID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -52373,32 +57450,29 @@ func (m *PaginatedResource) Unmarshal(dAtA []byte) error { } b := 
dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - v := &types.AppServerV3{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Resource = &PaginatedResource_AppServer{v} + m.TokenID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Node", wireType) + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field DeviceType", wireType) } - var msglen int + m.DeviceType = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -52408,32 +57482,16 @@ func (m *PaginatedResource) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.DeviceType |= DeviceType(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthAuthservice - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - v := &types.ServerV2{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Resource = &PaginatedResource_Node{v} - iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WindowsDesktop", wireType) + case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field DeviceUsage", wireType) } - var msglen int + m.DeviceUsage = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -52443,30 +57501,14 @@ func (m *PaginatedResource) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] 
iNdEx++ - msglen |= int(b&0x7F) << shift + m.DeviceUsage |= DeviceUsage(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthAuthservice - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - v := &types.WindowsDesktopV3{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Resource = &PaginatedResource_WindowsDesktop{v} - iNdEx = postIndex - case 6: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubeCluster", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ExistingMFAResponse", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -52493,17 +57535,69 @@ func (m *PaginatedResource) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &types.KubernetesClusterV3{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.ExistingMFAResponse == nil { + m.ExistingMFAResponse = &MFAAuthenticateResponse{} + } + if err := m.ExistingMFAResponse.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Resource = &PaginatedResource_KubeCluster{v} iNdEx = postIndex - case 7: + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *IdentityCenterAccount) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: IdentityCenterAccount: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: IdentityCenterAccount: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubernetesServer", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -52513,32 +57607,29 @@ func (m *PaginatedResource) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - v := &types.KubernetesServerV3{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Resource = &PaginatedResource_KubernetesServer{v} + m.ID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 8: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WindowsDesktopService", wireType) + return fmt.Errorf("proto: wrong 
wireType = %d for field ARN", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -52548,32 +57639,29 @@ func (m *PaginatedResource) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthAuthservice + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - v := &types.WindowsDesktopServiceV3{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Resource = &PaginatedResource_WindowsDesktopService{v} + m.ARN = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 9: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseService", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AccountName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -52583,32 +57671,29 @@ func (m *PaginatedResource) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - v := &types.DatabaseServiceV1{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Resource = &PaginatedResource_DatabaseService{v} + m.AccountName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 
10: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserGroup", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Description", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -52618,65 +57703,78 @@ func (m *PaginatedResource) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - v := &types.UserGroupV1{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Resource = &PaginatedResource_UserGroup{v} + m.Description = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 12: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SAMLIdPServiceProvider", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err } - if msglen < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthAuthservice + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF } - if postIndex > l { + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *IdentityCenterPermissionSet) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { return io.ErrUnexpectedEOF } - v := &types.SAMLIdPServiceProviderV1{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break } - m.Resource = &PaginatedResource_SAMLIdPServiceProvider{v} - iNdEx = postIndex - case 13: + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: IdentityCenterPermissionSet: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: IdentityCenterPermissionSet: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Logins", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ARN", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -52704,68 +57802,13 @@ func (m *PaginatedResource) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Logins = append(m.Logins, string(dAtA[iNdEx:postIndex])) - iNdEx = postIndex - case 14: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field RequiresRequest", wireType) - } - var v int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - v |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - m.RequiresRequest = bool(v != 0) - case 15: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field GitServer", wireType) - } - 
var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthAuthservice - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - v := &types.ServerV2{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Resource = &PaginatedResource_GitServer{v} + m.ARN = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 16: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field IdentityCenterAccountAssignment", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -52775,26 +57818,23 @@ func (m *PaginatedResource) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - v := &IdentityCenterAccountAssignment{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Resource = &PaginatedResource_IdentityCenterAccountAssignment{v} + m.Name = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -52818,7 +57858,7 @@ func (m *PaginatedResource) Unmarshal(dAtA []byte) error { } return nil } -func (m *ListUnifiedResourcesRequest) Unmarshal(dAtA []byte) error { +func (m *IdentityCenterAccountAssignment) 
Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -52841,15 +57881,15 @@ func (m *ListUnifiedResourcesRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ListUnifiedResourcesRequest: wiretype end group for non-group") + return fmt.Errorf("proto: IdentityCenterAccountAssignment: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ListUnifiedResourcesRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: IdentityCenterAccountAssignment: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Kinds", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Kind", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -52877,30 +57917,11 @@ func (m *ListUnifiedResourcesRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Kinds = append(m.Kinds, string(dAtA[iNdEx:postIndex])) + m.Kind = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Limit", wireType) - } - m.Limit = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.Limit |= int32(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field StartKey", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SubKind", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -52928,13 +57949,13 @@ func (m *ListUnifiedResourcesRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.StartKey = string(dAtA[iNdEx:postIndex]) + m.SubKind 
= string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Labels", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Version", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -52944,124 +57965,29 @@ func (m *ListUnifiedResourcesRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - if m.Labels == nil { - m.Labels = make(map[string]string) - } - var mapkey string - var mapvalue string - for iNdEx < postIndex { - entryPreIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - if fieldNum == 1 { - var stringLenmapkey uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapkey |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapkey := int(stringLenmapkey) - if intStringLenmapkey < 0 { - return ErrInvalidLengthAuthservice - } - postStringIndexmapkey := iNdEx + intStringLenmapkey - if postStringIndexmapkey < 0 { - return ErrInvalidLengthAuthservice - } - if postStringIndexmapkey > l { - return io.ErrUnexpectedEOF - } - mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) - iNdEx = 
postStringIndexmapkey - } else if fieldNum == 2 { - var stringLenmapvalue uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapvalue |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapvalue := int(stringLenmapvalue) - if intStringLenmapvalue < 0 { - return ErrInvalidLengthAuthservice - } - postStringIndexmapvalue := iNdEx + intStringLenmapvalue - if postStringIndexmapvalue < 0 { - return ErrInvalidLengthAuthservice - } - if postStringIndexmapvalue > l { - return io.ErrUnexpectedEOF - } - mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) - iNdEx = postStringIndexmapvalue - } else { - iNdEx = entryPreIndex - skippy, err := skipAuthservice(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthAuthservice - } - if (iNdEx + skippy) > postIndex { - return io.ErrUnexpectedEOF - } - iNdEx += skippy - } - } - m.Labels[mapkey] = mapvalue + m.Version = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 5: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PredicateExpression", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -53071,27 +57997,28 @@ func (m *ListUnifiedResourcesRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.PredicateExpression 
= string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 6: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SearchKeywords", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DisplayName", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -53119,11 +58046,11 @@ func (m *ListUnifiedResourcesRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.SearchKeywords = append(m.SearchKeywords, string(dAtA[iNdEx:postIndex])) + m.DisplayName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 7: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SortBy", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Account", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -53150,13 +58077,16 @@ func (m *ListUnifiedResourcesRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SortBy.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.Account == nil { + m.Account = &IdentityCenterAccount{} + } + if err := m.Account.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 8: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WindowsDesktopFilter", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PermissionSet", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -53183,110 +58113,13 @@ func (m *ListUnifiedResourcesRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.WindowsDesktopFilter.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.PermissionSet == nil { + m.PermissionSet = &IdentityCenterPermissionSet{} + } + if err := m.PermissionSet.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex 
- case 9: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field UseSearchAsRoles", wireType) - } - var v int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - v |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - m.UseSearchAsRoles = bool(v != 0) - case 10: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field UsePreviewAsRoles", wireType) - } - var v int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - v |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - m.UsePreviewAsRoles = bool(v != 0) - case 11: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field PinnedOnly", wireType) - } - var v int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - v |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - m.PinnedOnly = bool(v != 0) - case 12: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field IncludeLogins", wireType) - } - var v int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - v |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - m.IncludeLogins = bool(v != 0) - case 14: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field IncludeRequestable", wireType) - } - var v int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - v |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - 
m.IncludeRequestable = bool(v != 0) default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -53309,7 +58142,7 @@ func (m *ListUnifiedResourcesRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *ListUnifiedResourcesResponse) Unmarshal(dAtA []byte) error { +func (m *ListInstallersRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -53332,17 +58165,17 @@ func (m *ListUnifiedResourcesResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ListUnifiedResourcesResponse: wiretype end group for non-group") + return fmt.Errorf("proto: ListInstallersRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ListUnifiedResourcesResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListInstallersRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Resources", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field PageSize", wireType) } - var msglen int + m.PageSize = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -53352,29 +58185,14 @@ func (m *ListUnifiedResourcesResponse) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.PageSize |= int32(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthAuthservice - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Resources = append(m.Resources, &PaginatedResource{}) - if err := m.Resources[len(m.Resources)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong 
wireType = %d for field NextKey", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PageToken", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -53402,7 +58220,7 @@ func (m *ListUnifiedResourcesResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.NextKey = string(dAtA[iNdEx:postIndex]) + m.PageToken = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -53426,7 +58244,7 @@ func (m *ListUnifiedResourcesResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *ListResourcesRequest) Unmarshal(dAtA []byte) error { +func (m *ListInstallersResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -53449,17 +58267,17 @@ func (m *ListResourcesRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ListResourcesRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ListInstallersResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ListResourcesRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListInstallersResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceType", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Installers", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -53469,27 +58287,29 @@ func (m *ListResourcesRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + intStringLen + 
postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.ResourceType = string(dAtA[iNdEx:postIndex]) + m.Installers = append(m.Installers, &types.InstallerV1{}) + if err := m.Installers[len(m.Installers)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Namespace", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field NextPageToken", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -53517,13 +58337,64 @@ func (m *ListResourcesRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Namespace = string(dAtA[iNdEx:postIndex]) + m.NextPageToken = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Limit", wireType) + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err } - m.Limit = 0 + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *PaginatedResource) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: PaginatedResource: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: PaginatedResource: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseServer", wireType) + } + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -53533,16 +58404,32 @@ func (m *ListResourcesRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.Limit |= int32(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - case 4: + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + v := &types.DatabaseServerV3{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Resource = &PaginatedResource_DatabaseServer{v} + iNdEx = postIndex + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field StartKey", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppServer", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -53552,27 +58439,30 @@ func (m *ListResourcesRequest) 
Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.StartKey = string(dAtA[iNdEx:postIndex]) + v := &types.AppServerV3{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Resource = &PaginatedResource_AppServer{v} iNdEx = postIndex - case 5: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Labels", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Node", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -53599,109 +58489,52 @@ func (m *ListResourcesRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Labels == nil { - m.Labels = make(map[string]string) + v := &types.ServerV2{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - var mapkey string - var mapvalue string - for iNdEx < postIndex { - entryPreIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } + m.Resource = &PaginatedResource_Node{v} + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field WindowsDesktop", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice } - fieldNum := int32(wire >> 3) - if fieldNum == 1 { - var stringLenmapkey uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return 
ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapkey |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapkey := int(stringLenmapkey) - if intStringLenmapkey < 0 { - return ErrInvalidLengthAuthservice - } - postStringIndexmapkey := iNdEx + intStringLenmapkey - if postStringIndexmapkey < 0 { - return ErrInvalidLengthAuthservice - } - if postStringIndexmapkey > l { - return io.ErrUnexpectedEOF - } - mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) - iNdEx = postStringIndexmapkey - } else if fieldNum == 2 { - var stringLenmapvalue uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapvalue |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapvalue := int(stringLenmapvalue) - if intStringLenmapvalue < 0 { - return ErrInvalidLengthAuthservice - } - postStringIndexmapvalue := iNdEx + intStringLenmapvalue - if postStringIndexmapvalue < 0 { - return ErrInvalidLengthAuthservice - } - if postStringIndexmapvalue > l { - return io.ErrUnexpectedEOF - } - mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) - iNdEx = postStringIndexmapvalue - } else { - iNdEx = entryPreIndex - skippy, err := skipAuthservice(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthAuthservice - } - if (iNdEx + skippy) > postIndex { - return io.ErrUnexpectedEOF - } - iNdEx += skippy + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break } } - m.Labels[mapkey] = mapvalue + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + v := 
&types.WindowsDesktopV3{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Resource = &PaginatedResource_WindowsDesktop{v} iNdEx = postIndex case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PredicateExpression", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field KubeCluster", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -53711,29 +58544,32 @@ func (m *ListResourcesRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.PredicateExpression = string(dAtA[iNdEx:postIndex]) + v := &types.KubernetesClusterV3{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Resource = &PaginatedResource_KubeCluster{v} iNdEx = postIndex case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SearchKeywords", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesServer", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -53743,27 +58579,30 @@ func (m *ListResourcesRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return 
ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.SearchKeywords = append(m.SearchKeywords, string(dAtA[iNdEx:postIndex])) + v := &types.KubernetesServerV3{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Resource = &PaginatedResource_KubernetesServer{v} iNdEx = postIndex case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SortBy", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field WindowsDesktopService", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -53790,15 +58629,17 @@ func (m *ListResourcesRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SortBy.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &types.WindowsDesktopServiceV3{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Resource = &PaginatedResource_WindowsDesktopService{v} iNdEx = postIndex case 9: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field NeedTotalCount", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseService", wireType) } - var v int + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -53808,15 +58649,30 @@ func (m *ListResourcesRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - v |= int(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - m.NeedTotalCount = bool(v != 0) + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + v := &types.DatabaseServiceV1{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Resource = &PaginatedResource_DatabaseService{v} + iNdEx = postIndex case 10: if wireType != 2 { - return 
fmt.Errorf("proto: wrong wireType = %d for field WindowsDesktopFilter", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserGroup", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -53843,15 +58699,17 @@ func (m *ListResourcesRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.WindowsDesktopFilter.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &types.UserGroupV1{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Resource = &PaginatedResource_UserGroup{v} iNdEx = postIndex - case 11: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field UseSearchAsRoles", wireType) + case 12: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SAMLIdPServiceProvider", wireType) } - var v int + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -53861,17 +58719,32 @@ func (m *ListResourcesRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - v |= int(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - m.UseSearchAsRoles = bool(v != 0) - case 12: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field UsePreviewAsRoles", wireType) + if msglen < 0 { + return ErrInvalidLengthAuthservice } - var v int + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + v := &types.SAMLIdPServiceProviderV1{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Resource = &PaginatedResource_SAMLIdPServiceProvider{v} + iNdEx = postIndex + case 13: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Logins", wireType) + } + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -53881,15 +58754,27 @@ func (m 
*ListResourcesRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - v |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - m.UsePreviewAsRoles = bool(v != 0) - case 13: + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Logins = append(m.Logins, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 14: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field IncludeLogins", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RequiresRequest", wireType) } var v int for shift := uint(0); ; shift += 7 { @@ -53906,7 +58791,77 @@ func (m *ListResourcesRequest) Unmarshal(dAtA []byte) error { break } } - m.IncludeLogins = bool(v != 0) + m.RequiresRequest = bool(v != 0) + case 15: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field GitServer", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + v := &types.ServerV2{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Resource = &PaginatedResource_GitServer{v} + iNdEx = postIndex + case 16: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field IdentityCenterAccountAssignment", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return 
io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + v := &IdentityCenterAccountAssignment{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Resource = &PaginatedResource_IdentityCenterAccountAssignment{v} + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -53929,7 +58884,7 @@ func (m *ListResourcesRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *ResolveSSHTargetRequest) Unmarshal(dAtA []byte) error { +func (m *ListUnifiedResourcesRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -53952,15 +58907,15 @@ func (m *ResolveSSHTargetRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ResolveSSHTargetRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ListUnifiedResourcesRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ResolveSSHTargetRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListUnifiedResourcesRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Host", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Kinds", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -53988,11 +58943,30 @@ func (m *ResolveSSHTargetRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Host = string(dAtA[iNdEx:postIndex]) + m.Kinds = append(m.Kinds, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex case 2: + if wireType != 0 { 
+ return fmt.Errorf("proto: wrong wireType = %d for field Limit", wireType) + } + m.Limit = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Limit |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Port", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StartKey", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -54020,9 +58994,9 @@ func (m *ResolveSSHTargetRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Port = string(dAtA[iNdEx:postIndex]) + m.StartKey = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: + case 4: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Labels", wireType) } @@ -54149,7 +59123,7 @@ func (m *ResolveSSHTargetRequest) Unmarshal(dAtA []byte) error { } m.Labels[mapkey] = mapvalue iNdEx = postIndex - case 4: + case 5: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field PredicateExpression", wireType) } @@ -54181,7 +59155,7 @@ func (m *ResolveSSHTargetRequest) Unmarshal(dAtA []byte) error { } m.PredicateExpression = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 5: + case 6: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field SearchKeywords", wireType) } @@ -54213,29 +59187,1797 @@ func (m *ResolveSSHTargetRequest) Unmarshal(dAtA []byte) error { } m.SearchKeywords = append(m.SearchKeywords, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipAuthservice(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthAuthservice - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong 
wireType = %d for field SortBy", wireType) } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.SortBy.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 8: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field WindowsDesktopFilter", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.WindowsDesktopFilter.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 9: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field UseSearchAsRoles", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.UseSearchAsRoles = bool(v != 0) + case 10: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field UsePreviewAsRoles", 
wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.UsePreviewAsRoles = bool(v != 0) + case 11: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field PinnedOnly", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.PinnedOnly = bool(v != 0) + case 12: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field IncludeLogins", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.IncludeLogins = bool(v != 0) + case 14: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field IncludeRequestable", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.IncludeRequestable = bool(v != 0) + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ListUnifiedResourcesResponse) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ListUnifiedResourcesResponse: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ListUnifiedResourcesResponse: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Resources", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Resources = append(m.Resources, &PaginatedResource{}) + if err := m.Resources[len(m.Resources)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field NextKey", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := 
int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.NextKey = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ListResourcesRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ListResourcesRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ListResourcesRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ResourceType", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return 
ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ResourceType = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Namespace", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Namespace = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Limit", wireType) + } + m.Limit = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Limit |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field StartKey", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.StartKey = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong 
wireType = %d for field Labels", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Labels == nil { + m.Labels = make(map[string]string) + } + var mapkey string + var mapvalue string + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthAuthservice + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey < 0 { + return ErrInvalidLengthAuthservice + } + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var stringLenmapvalue uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapvalue |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapvalue := 
int(stringLenmapvalue) + if intStringLenmapvalue < 0 { + return ErrInvalidLengthAuthservice + } + postStringIndexmapvalue := iNdEx + intStringLenmapvalue + if postStringIndexmapvalue < 0 { + return ErrInvalidLengthAuthservice + } + if postStringIndexmapvalue > l { + return io.ErrUnexpectedEOF + } + mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) + iNdEx = postStringIndexmapvalue + } else { + iNdEx = entryPreIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + m.Labels[mapkey] = mapvalue + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field PredicateExpression", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.PredicateExpression = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SearchKeywords", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 
{ + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.SearchKeywords = append(m.SearchKeywords, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 8: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SortBy", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.SortBy.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 9: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field NeedTotalCount", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.NeedTotalCount = bool(v != 0) + case 10: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field WindowsDesktopFilter", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.WindowsDesktopFilter.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 11: 
+			if wireType != 0 {
+				return fmt.Errorf("proto: wrong wireType = %d for field UseSearchAsRoles", wireType)
+			}
+			var v int
+			for shift := uint(0); ; shift += 7 {
+				if shift >= 64 {
+					return ErrIntOverflowAuthservice
+				}
+				if iNdEx >= l {
+					return io.ErrUnexpectedEOF
+				}
+				b := dAtA[iNdEx]
+				iNdEx++
+				v |= int(b&0x7F) << shift
+				if b < 0x80 {
+					break
+				}
+			}
+			m.UseSearchAsRoles = bool(v != 0)
+		case 12:
+			if wireType != 0 {
+				return fmt.Errorf("proto: wrong wireType = %d for field UsePreviewAsRoles", wireType)
+			}
+			var v int
+			for shift := uint(0); ; shift += 7 {
+				if shift >= 64 {
+					return ErrIntOverflowAuthservice
+				}
+				if iNdEx >= l {
+					return io.ErrUnexpectedEOF
+				}
+				b := dAtA[iNdEx]
+				iNdEx++
+				v |= int(b&0x7F) << shift
+				if b < 0x80 {
+					break
+				}
+			}
+			m.UsePreviewAsRoles = bool(v != 0)
+		case 13:
+			if wireType != 0 {
+				return fmt.Errorf("proto: wrong wireType = %d for field IncludeLogins", wireType)
+			}
+			var v int
+			for shift := uint(0); ; shift += 7 {
+				if shift >= 64 {
+					return ErrIntOverflowAuthservice
+				}
+				if iNdEx >= l {
+					return io.ErrUnexpectedEOF
+				}
+				b := dAtA[iNdEx]
+				iNdEx++
+				v |= int(b&0x7F) << shift
+				if b < 0x80 {
+					break
+				}
+			}
+			m.IncludeLogins = bool(v != 0)
+		default:
+			iNdEx = preIndex
+			skippy, err := skipAuthservice(dAtA[iNdEx:])
+			if err != nil {
+				return err
+			}
+			if (skippy < 0) || (iNdEx+skippy) < 0 {
+				return ErrInvalidLengthAuthservice
+			}
+			if (iNdEx + skippy) > l {
+				return io.ErrUnexpectedEOF
+			}
+			m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ResolveSSHTargetRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ResolveSSHTargetRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ResolveSSHTargetRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Host", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Host = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Port", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := 
iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Port = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Labels", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Labels == nil { + m.Labels = make(map[string]string) + } + var mapkey string + var mapvalue string + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthAuthservice + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey < 0 { + return ErrInvalidLengthAuthservice + } + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var stringLenmapvalue uint64 + for shift := 
uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapvalue |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapvalue := int(stringLenmapvalue) + if intStringLenmapvalue < 0 { + return ErrInvalidLengthAuthservice + } + postStringIndexmapvalue := iNdEx + intStringLenmapvalue + if postStringIndexmapvalue < 0 { + return ErrInvalidLengthAuthservice + } + if postStringIndexmapvalue > l { + return io.ErrUnexpectedEOF + } + mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) + iNdEx = postStringIndexmapvalue + } else { + iNdEx = entryPreIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + m.Labels[mapkey] = mapvalue + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field PredicateExpression", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.PredicateExpression = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SearchKeywords", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return 
io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.SearchKeywords = append(m.SearchKeywords, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ResolveSSHTargetResponse) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ResolveSSHTargetResponse: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ResolveSSHTargetResponse: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Server", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if 
msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Server == nil { + m.Server = &types.ServerV2{} + } + if err := m.Server.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *GetSSHTargetsRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: GetSSHTargetsRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: GetSSHTargetsRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Host", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + 
intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Host = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Port", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Port = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+			iNdEx += skippy
+		}
+	}
+
+	if iNdEx > l {
+		return io.ErrUnexpectedEOF
+	}
+	return nil
+}
+func (m *GetSSHTargetsResponse) Unmarshal(dAtA []byte) error {
+	l := len(dAtA)
+	iNdEx := 0
+	for iNdEx < l {
+		preIndex := iNdEx
+		var wire uint64
+		for shift := uint(0); ; shift += 7 {
+			if shift >= 64 {
+				return ErrIntOverflowAuthservice
+			}
+			if iNdEx >= l {
+				return io.ErrUnexpectedEOF
+			}
+			b := dAtA[iNdEx]
+			iNdEx++
+			wire |= uint64(b&0x7F) << shift
+			if b < 0x80 {
+				break
+			}
+		}
+		fieldNum := int32(wire >> 3)
+		wireType := int(wire & 0x7)
+		if wireType == 4 {
+			return fmt.Errorf("proto: GetSSHTargetsResponse: wiretype end group for non-group")
+		}
+		if fieldNum <= 0 {
+			return fmt.Errorf("proto: GetSSHTargetsResponse: illegal tag %d (wire type %d)", fieldNum, wire)
+		}
+		switch fieldNum {
+		case 1:
+			if wireType != 2 {
+				return fmt.Errorf("proto: wrong wireType = %d for field Servers", wireType)
+			}
+			var msglen int
+			for shift := uint(0); ; shift += 7 {
+				if shift >= 64 {
+					return ErrIntOverflowAuthservice
+				}
+				if iNdEx >= l {
+					return io.ErrUnexpectedEOF
+				}
+				b := dAtA[iNdEx]
+				iNdEx++
+				msglen |= int(b&0x7F) << shift
+				if b < 0x80 {
+					break
+				}
+			}
+			if msglen < 0 {
+				return ErrInvalidLengthAuthservice
+			}
+			postIndex := iNdEx + msglen
+			if postIndex < 0 {
+				return ErrInvalidLengthAuthservice
+			}
+			if postIndex > l {
+				return io.ErrUnexpectedEOF
+			}
+			m.Servers = append(m.Servers, &types.ServerV2{})
+			if err := m.Servers[len(m.Servers)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+				return err
+			}
+			iNdEx = postIndex
+		default:
+			iNdEx = preIndex
+			skippy, err := skipAuthservice(dAtA[iNdEx:])
+			if err != nil {
+				return err
+			}
+			if (skippy < 0) || (iNdEx+skippy) < 0 {
+				return ErrInvalidLengthAuthservice
+			}
+			if (iNdEx + skippy) > l {
+				return io.ErrUnexpectedEOF
+			}
+			m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ListResourcesResponse) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ListResourcesResponse: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ListResourcesResponse: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Resources", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Resources = append(m.Resources, &PaginatedResource{}) + if err := m.Resources[len(m.Resources)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field NextKey", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if 
intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.NextKey = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TotalCount", wireType) + } + m.TotalCount = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TotalCount |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *CreateSessionTrackerRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: CreateSessionTrackerRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: CreateSessionTrackerRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 15: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SessionTracker", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if 
shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.SessionTracker == nil { + m.SessionTracker = &types.SessionTrackerV1{} + } + if err := m.SessionTracker.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *GetSessionTrackerRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: GetSessionTrackerRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: GetSessionTrackerRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SessionID", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + 
return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.SessionID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *RemoveSessionTrackerRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: RemoveSessionTrackerRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: RemoveSessionTrackerRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SessionID", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + 
intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.SessionID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } return nil } -func (m *ResolveSSHTargetResponse) Unmarshal(dAtA []byte) error { +func (m *SessionTrackerUpdateState) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -54258,15 +61000,255 @@ func (m *ResolveSSHTargetResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ResolveSSHTargetResponse: wiretype end group for non-group") + return fmt.Errorf("proto: SessionTrackerUpdateState: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ResolveSSHTargetResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SessionTrackerUpdateState: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field State", wireType) + } + m.State = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.State |= types.SessionState(b&0x7F) << shift + if b < 0x80 { + break + } + } + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + 
return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *SessionTrackerAddParticipant) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SessionTrackerAddParticipant: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SessionTrackerAddParticipant: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Participant", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Participant == nil { + m.Participant = &types.Participant{} + } + if err := m.Participant.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return 
ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *SessionTrackerRemoveParticipant) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SessionTrackerRemoveParticipant: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SessionTrackerRemoveParticipant: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ParticipantID", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ParticipantID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = 
append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *SessionTrackerUpdateExpiry) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SessionTrackerUpdateExpiry: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SessionTrackerUpdateExpiry: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Server", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Expires", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -54293,12 +61275,235 @@ func (m *ResolveSSHTargetResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Server == nil { - m.Server = &types.ServerV2{} + if m.Expires == nil { + m.Expires = new(time.Time) } - if err := m.Server.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(m.Expires, dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *UpdateSessionTrackerRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: UpdateSessionTrackerRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: UpdateSessionTrackerRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SessionID", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.SessionID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UpdateState", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if 
postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + v := &SessionTrackerUpdateState{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Update = &UpdateSessionTrackerRequest_UpdateState{v} + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AddParticipant", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + v := &SessionTrackerAddParticipant{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Update = &UpdateSessionTrackerRequest_AddParticipant{v} + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field RemoveParticipant", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + v := &SessionTrackerRemoveParticipant{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Update = &UpdateSessionTrackerRequest_RemoveParticipant{v} + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UpdateExpiry", wireType) + } + var 
msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + v := &SessionTrackerUpdateExpiry{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Update = &UpdateSessionTrackerRequest_UpdateExpiry{v} iNdEx = postIndex default: iNdEx = preIndex @@ -54322,7 +61527,7 @@ func (m *ResolveSSHTargetResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *GetSSHTargetsRequest) Unmarshal(dAtA []byte) error { +func (m *PresenceMFAChallengeRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -54345,15 +61550,15 @@ func (m *GetSSHTargetsRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GetSSHTargetsRequest: wiretype end group for non-group") + return fmt.Errorf("proto: PresenceMFAChallengeRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GetSSHTargetsRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: PresenceMFAChallengeRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Host", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionID", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -54381,11 +61586,11 @@ func (m *GetSSHTargetsRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Host = string(dAtA[iNdEx:postIndex]) + m.SessionID = 
string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Port", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SSOClientRedirectURL", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -54413,64 +61618,13 @@ func (m *GetSSHTargetsRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Port = string(dAtA[iNdEx:postIndex]) + m.SSOClientRedirectURL = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipAuthservice(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthAuthservice - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *GetSSHTargetsResponse) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: GetSSHTargetsResponse: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: GetSSHTargetsResponse: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Servers", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ProxyAddress", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return 
ErrIntOverflowAuthservice @@ -54480,25 +61634,23 @@ func (m *GetSSHTargetsResponse) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.Servers = append(m.Servers, &types.ServerV2{}) - if err := m.Servers[len(m.Servers)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.ProxyAddress = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -54522,7 +61674,7 @@ func (m *GetSSHTargetsResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *ListResourcesResponse) Unmarshal(dAtA []byte) error { +func (m *PresenceMFAChallengeSend) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -54545,15 +61697,15 @@ func (m *ListResourcesResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ListResourcesResponse: wiretype end group for non-group") + return fmt.Errorf("proto: PresenceMFAChallengeSend: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ListResourcesResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: PresenceMFAChallengeSend: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Resources", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ChallengeRequest", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -54580,116 +61732,15 @@ func (m *ListResourcesResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return 
io.ErrUnexpectedEOF } - m.Resources = append(m.Resources, &PaginatedResource{}) - if err := m.Resources[len(m.Resources)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &PresenceMFAChallengeRequest{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Request = &PresenceMFAChallengeSend_ChallengeRequest{v} iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field NextKey", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthAuthservice - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.NextKey = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 3: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field TotalCount", wireType) - } - m.TotalCount = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.TotalCount |= int32(b&0x7F) << shift - if b < 0x80 { - break - } - } - default: - iNdEx = preIndex - skippy, err := skipAuthservice(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthAuthservice - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *CreateSessionTrackerRequest) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: CreateSessionTrackerRequest: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: CreateSessionTrackerRequest: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 15: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionTracker", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ChallengeResponse", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -54716,12 +61767,11 @@ func (m *CreateSessionTrackerRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.SessionTracker == nil { - m.SessionTracker = &types.SessionTrackerV1{} - } - if err := m.SessionTracker.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &MFAAuthenticateResponse{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Request = &PresenceMFAChallengeSend_ChallengeResponse{v} iNdEx = postIndex default: iNdEx = preIndex @@ -54745,7 +61795,7 @@ func (m *CreateSessionTrackerRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *GetSessionTrackerRequest) Unmarshal(dAtA []byte) error { +func (m *GetDomainNameResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -54768,15 +61818,15 @@ func (m *GetSessionTrackerRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) 
wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GetSessionTrackerRequest: wiretype end group for non-group") + return fmt.Errorf("proto: GetDomainNameResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GetSessionTrackerRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: GetDomainNameResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DomainName", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -54804,7 +61854,7 @@ func (m *GetSessionTrackerRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.SessionID = string(dAtA[iNdEx:postIndex]) + m.DomainName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -54828,7 +61878,7 @@ func (m *GetSessionTrackerRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *RemoveSessionTrackerRequest) Unmarshal(dAtA []byte) error { +func (m *GetClusterCACertResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -54851,17 +61901,17 @@ func (m *RemoveSessionTrackerRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: RemoveSessionTrackerRequest: wiretype end group for non-group") + return fmt.Errorf("proto: GetClusterCACertResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: RemoveSessionTrackerRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: GetClusterCACertResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionID", wireType) + return 
fmt.Errorf("proto: wrong wireType = %d for field TLSCA", wireType) } - var stringLen uint64 + var byteLen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -54871,23 +61921,25 @@ func (m *RemoveSessionTrackerRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + byteLen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if byteLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + byteLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.SessionID = string(dAtA[iNdEx:postIndex]) + m.TLSCA = append(m.TLSCA[:0], dAtA[iNdEx:postIndex]...) + if m.TLSCA == nil { + m.TLSCA = []byte{} + } iNdEx = postIndex default: iNdEx = preIndex @@ -54911,7 +61963,7 @@ func (m *RemoveSessionTrackerRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *SessionTrackerUpdateState) Unmarshal(dAtA []byte) error { +func (m *GetLicenseResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -54934,17 +61986,17 @@ func (m *SessionTrackerUpdateState) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SessionTrackerUpdateState: wiretype end group for non-group") + return fmt.Errorf("proto: GetLicenseResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SessionTrackerUpdateState: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: GetLicenseResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { - case 2: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field State", wireType) + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field License", wireType) } - m.State = 0 + var 
byteLen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -54954,11 +62006,26 @@ func (m *SessionTrackerUpdateState) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.State |= types.SessionState(b&0x7F) << shift + byteLen |= int(b&0x7F) << shift if b < 0x80 { break } } + if byteLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + byteLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.License = append(m.License[:0], dAtA[iNdEx:postIndex]...) + if m.License == nil { + m.License = []byte{} + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -54981,7 +62048,7 @@ func (m *SessionTrackerUpdateState) Unmarshal(dAtA []byte) error { } return nil } -func (m *SessionTrackerAddParticipant) Unmarshal(dAtA []byte) error { +func (m *ListReleasesResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -55004,15 +62071,15 @@ func (m *SessionTrackerAddParticipant) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SessionTrackerAddParticipant: wiretype end group for non-group") + return fmt.Errorf("proto: ListReleasesResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SessionTrackerAddParticipant: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListReleasesResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { - case 2: + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Participant", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Releases", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -55039,10 +62106,8 @@ func (m *SessionTrackerAddParticipant) Unmarshal(dAtA []byte) error { if postIndex > l { return 
io.ErrUnexpectedEOF } - if m.Participant == nil { - m.Participant = &types.Participant{} - } - if err := m.Participant.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.Releases = append(m.Releases, &types.Release{}) + if err := m.Releases[len(m.Releases)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -55068,7 +62133,7 @@ func (m *SessionTrackerAddParticipant) Unmarshal(dAtA []byte) error { } return nil } -func (m *SessionTrackerRemoveParticipant) Unmarshal(dAtA []byte) error { +func (m *GetOIDCAuthRequestRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -55091,15 +62156,15 @@ func (m *SessionTrackerRemoveParticipant) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SessionTrackerRemoveParticipant: wiretype end group for non-group") + return fmt.Errorf("proto: GetOIDCAuthRequestRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SessionTrackerRemoveParticipant: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: GetOIDCAuthRequestRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { - case 2: + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ParticipantID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StateToken", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -55127,7 +62192,7 @@ func (m *SessionTrackerRemoveParticipant) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ParticipantID = string(dAtA[iNdEx:postIndex]) + m.StateToken = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -55151,7 +62216,7 @@ func (m *SessionTrackerRemoveParticipant) Unmarshal(dAtA []byte) error { } return nil } -func (m *SessionTrackerUpdateExpiry) Unmarshal(dAtA []byte) error { +func (m 
*GetSAMLAuthRequestRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -55174,17 +62239,17 @@ func (m *SessionTrackerUpdateExpiry) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SessionTrackerUpdateExpiry: wiretype end group for non-group") + return fmt.Errorf("proto: GetSAMLAuthRequestRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SessionTrackerUpdateExpiry: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: GetSAMLAuthRequestRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Expires", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -55194,27 +62259,23 @@ func (m *SessionTrackerUpdateExpiry) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - if m.Expires == nil { - m.Expires = new(time.Time) - } - if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(m.Expires, dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.ID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -55238,7 +62299,7 @@ func (m *SessionTrackerUpdateExpiry) Unmarshal(dAtA []byte) error { } return nil } -func (m *UpdateSessionTrackerRequest) Unmarshal(dAtA []byte) error { +func (m *GetGithubAuthRequestRequest) 
Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -55261,15 +62322,15 @@ func (m *UpdateSessionTrackerRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UpdateSessionTrackerRequest: wiretype end group for non-group") + return fmt.Errorf("proto: GetGithubAuthRequestRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UpdateSessionTrackerRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: GetGithubAuthRequestRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StateToken", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -55297,147 +62358,7 @@ func (m *UpdateSessionTrackerRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.SessionID = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UpdateState", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthAuthservice - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - v := &SessionTrackerUpdateState{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Update = &UpdateSessionTrackerRequest_UpdateState{v} - iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field 
AddParticipant", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthAuthservice - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - v := &SessionTrackerAddParticipant{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Update = &UpdateSessionTrackerRequest_AddParticipant{v} - iNdEx = postIndex - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RemoveParticipant", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthAuthservice - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - v := &SessionTrackerRemoveParticipant{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Update = &UpdateSessionTrackerRequest_RemoveParticipant{v} - iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UpdateExpiry", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return 
ErrInvalidLengthAuthservice - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - v := &SessionTrackerUpdateExpiry{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Update = &UpdateSessionTrackerRequest_UpdateExpiry{v} + m.StateToken = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -55461,7 +62382,7 @@ func (m *UpdateSessionTrackerRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *PresenceMFAChallengeRequest) Unmarshal(dAtA []byte) error { +func (m *ListOIDCConnectorsRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -55484,17 +62405,17 @@ func (m *PresenceMFAChallengeRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: PresenceMFAChallengeRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ListOIDCConnectorsRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: PresenceMFAChallengeRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListOIDCConnectorsRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionID", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field PageSize", wireType) } - var stringLen uint64 + m.PageSize = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -55504,27 +62425,14 @@ func (m *PresenceMFAChallengeRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + m.PageSize |= int32(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthAuthservice - } - 
if postIndex > l { - return io.ErrUnexpectedEOF - } - m.SessionID = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SSOClientRedirectURL", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PageToken", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -55552,13 +62460,13 @@ func (m *PresenceMFAChallengeRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.SSOClientRedirectURL = string(dAtA[iNdEx:postIndex]) + m.PageToken = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ProxyAddress", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field WithSecrets", wireType) } - var stringLen uint64 + var v int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -55568,24 +62476,12 @@ func (m *PresenceMFAChallengeRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + v |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthAuthservice - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.ProxyAddress = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex + m.WithSecrets = bool(v != 0) default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -55608,7 +62504,7 @@ func (m *PresenceMFAChallengeRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *PresenceMFAChallengeSend) Unmarshal(dAtA []byte) error { +func (m *ListOIDCConnectorsResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -55631,15 +62527,15 @@ func (m *PresenceMFAChallengeSend) Unmarshal(dAtA []byte) error { fieldNum 
:= int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: PresenceMFAChallengeSend: wiretype end group for non-group") + return fmt.Errorf("proto: ListOIDCConnectorsResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: PresenceMFAChallengeSend: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListOIDCConnectorsResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ChallengeRequest", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Connectors", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -55666,17 +62562,16 @@ func (m *PresenceMFAChallengeSend) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &PresenceMFAChallengeRequest{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.Connectors = append(m.Connectors, &types.OIDCConnectorV3{}) + if err := m.Connectors[len(m.Connectors)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Request = &PresenceMFAChallengeSend_ChallengeRequest{v} iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ChallengeResponse", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field NextPageToken", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -55686,26 +62581,23 @@ func (m *PresenceMFAChallengeSend) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return 
ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - v := &MFAAuthenticateResponse{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Request = &PresenceMFAChallengeSend_ChallengeResponse{v} + m.NextPageToken = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -55729,7 +62621,7 @@ func (m *PresenceMFAChallengeSend) Unmarshal(dAtA []byte) error { } return nil } -func (m *GetDomainNameResponse) Unmarshal(dAtA []byte) error { +func (m *CreateOIDCConnectorRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -55752,17 +62644,17 @@ func (m *GetDomainNameResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GetDomainNameResponse: wiretype end group for non-group") + return fmt.Errorf("proto: CreateOIDCConnectorRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GetDomainNameResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: CreateOIDCConnectorRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DomainName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Connector", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -55772,23 +62664,27 @@ func (m *GetDomainNameResponse) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return 
io.ErrUnexpectedEOF } - m.DomainName = string(dAtA[iNdEx:postIndex]) + if m.Connector == nil { + m.Connector = &types.OIDCConnectorV3{} + } + if err := m.Connector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -55812,7 +62708,7 @@ func (m *GetDomainNameResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *GetClusterCACertResponse) Unmarshal(dAtA []byte) error { +func (m *UpdateOIDCConnectorRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -55835,17 +62731,17 @@ func (m *GetClusterCACertResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GetClusterCACertResponse: wiretype end group for non-group") + return fmt.Errorf("proto: UpdateOIDCConnectorRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GetClusterCACertResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UpdateOIDCConnectorRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TLSCA", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Connector", wireType) } - var byteLen int + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -55855,24 +62751,26 @@ func (m *GetClusterCACertResponse) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - byteLen |= int(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - if byteLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + byteLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.TLSCA = append(m.TLSCA[:0], dAtA[iNdEx:postIndex]...) 
- if m.TLSCA == nil { - m.TLSCA = []byte{} + if m.Connector == nil { + m.Connector = &types.OIDCConnectorV3{} + } + if err := m.Connector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } iNdEx = postIndex default: @@ -55897,7 +62795,7 @@ func (m *GetClusterCACertResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *GetLicenseResponse) Unmarshal(dAtA []byte) error { +func (m *UpsertOIDCConnectorRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -55920,17 +62818,17 @@ func (m *GetLicenseResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GetLicenseResponse: wiretype end group for non-group") + return fmt.Errorf("proto: UpsertOIDCConnectorRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GetLicenseResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UpsertOIDCConnectorRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field License", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Connector", wireType) } - var byteLen int + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -55940,24 +62838,26 @@ func (m *GetLicenseResponse) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - byteLen |= int(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - if byteLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + byteLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.License = append(m.License[:0], dAtA[iNdEx:postIndex]...) 
- if m.License == nil { - m.License = []byte{} + if m.Connector == nil { + m.Connector = &types.OIDCConnectorV3{} + } + if err := m.Connector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } iNdEx = postIndex default: @@ -55982,7 +62882,7 @@ func (m *GetLicenseResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *ListReleasesResponse) Unmarshal(dAtA []byte) error { +func (m *CreateSAMLConnectorRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -56005,15 +62905,15 @@ func (m *ListReleasesResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ListReleasesResponse: wiretype end group for non-group") + return fmt.Errorf("proto: CreateSAMLConnectorRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ListReleasesResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: CreateSAMLConnectorRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Releases", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Connector", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -56040,8 +62940,10 @@ func (m *ListReleasesResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Releases = append(m.Releases, &types.Release{}) - if err := m.Releases[len(m.Releases)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.Connector == nil { + m.Connector = &types.SAMLConnectorV2{} + } + if err := m.Connector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -56067,7 +62969,7 @@ func (m *ListReleasesResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *GetOIDCAuthRequestRequest) Unmarshal(dAtA []byte) error { +func (m *UpdateSAMLConnectorRequest) Unmarshal(dAtA 
[]byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -56090,17 +62992,17 @@ func (m *GetOIDCAuthRequestRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GetOIDCAuthRequestRequest: wiretype end group for non-group") + return fmt.Errorf("proto: UpdateSAMLConnectorRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GetOIDCAuthRequestRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UpdateSAMLConnectorRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field StateToken", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Connector", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -56110,23 +63012,27 @@ func (m *GetOIDCAuthRequestRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.StateToken = string(dAtA[iNdEx:postIndex]) + if m.Connector == nil { + m.Connector = &types.SAMLConnectorV2{} + } + if err := m.Connector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -56150,7 +63056,7 @@ func (m *GetOIDCAuthRequestRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *GetSAMLAuthRequestRequest) Unmarshal(dAtA []byte) error { +func (m *UpsertSAMLConnectorRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx 
< l { @@ -56173,17 +63079,17 @@ func (m *GetSAMLAuthRequestRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GetSAMLAuthRequestRequest: wiretype end group for non-group") + return fmt.Errorf("proto: UpsertSAMLConnectorRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GetSAMLAuthRequestRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UpsertSAMLConnectorRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Connector", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -56193,23 +63099,27 @@ func (m *GetSAMLAuthRequestRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.ID = string(dAtA[iNdEx:postIndex]) + if m.Connector == nil { + m.Connector = &types.SAMLConnectorV2{} + } + if err := m.Connector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -56233,7 +63143,7 @@ func (m *GetSAMLAuthRequestRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *GetGithubAuthRequestRequest) Unmarshal(dAtA []byte) error { +func (m *ListSAMLConnectorsRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -56256,15 +63166,34 @@ func (m 
*GetGithubAuthRequestRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GetGithubAuthRequestRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ListSAMLConnectorsRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GetGithubAuthRequestRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListSAMLConnectorsRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field PageSize", wireType) + } + m.PageSize = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.PageSize |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field StateToken", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PageToken", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -56292,8 +63221,48 @@ func (m *GetGithubAuthRequestRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.StateToken = string(dAtA[iNdEx:postIndex]) + m.PageToken = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field WithSecrets", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.WithSecrets = bool(v != 0) + case 4: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field NoFollowUrls", wireType) + } + var v int + for shift := uint(0); 
; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.NoFollowUrls = bool(v != 0) default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -56316,7 +63285,7 @@ func (m *GetGithubAuthRequestRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *CreateOIDCConnectorRequest) Unmarshal(dAtA []byte) error { +func (m *ListSAMLConnectorsResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -56339,15 +63308,15 @@ func (m *CreateOIDCConnectorRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: CreateOIDCConnectorRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ListSAMLConnectorsResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: CreateOIDCConnectorRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListSAMLConnectorsResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Connector", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Connectors", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -56374,69 +63343,16 @@ func (m *CreateOIDCConnectorRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Connector == nil { - m.Connector = &types.OIDCConnectorV3{} - } - if err := m.Connector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.Connectors = append(m.Connectors, &types.SAMLConnectorV2{}) + if err := m.Connectors[len(m.Connectors)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipAuthservice(dAtA[iNdEx:]) - 
if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthAuthservice - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *UpdateOIDCConnectorRequest) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: UpdateOIDCConnectorRequest: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: UpdateOIDCConnectorRequest: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Connector", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field NextPageToken", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -56446,27 +63362,23 @@ func (m *UpdateOIDCConnectorRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - if m.Connector == nil { - m.Connector = &types.OIDCConnectorV3{} - } - if err := 
m.Connector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.NextPageToken = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -56490,7 +63402,7 @@ func (m *UpdateOIDCConnectorRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *UpsertOIDCConnectorRequest) Unmarshal(dAtA []byte) error { +func (m *ListGithubConnectorsRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -56513,17 +63425,36 @@ func (m *UpsertOIDCConnectorRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UpsertOIDCConnectorRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ListGithubConnectorsRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UpsertOIDCConnectorRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListGithubConnectorsRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field PageSize", wireType) + } + m.PageSize = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.PageSize |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Connector", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PageToken", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -56533,28 +63464,44 @@ func (m *UpsertOIDCConnectorRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + 
intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - if m.Connector == nil { - m.Connector = &types.OIDCConnectorV3{} + m.PageToken = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field WithSecrets", wireType) } - if err := m.Connector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } } - iNdEx = postIndex + m.WithSecrets = bool(v != 0) default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -56577,7 +63524,7 @@ func (m *UpsertOIDCConnectorRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *CreateSAMLConnectorRequest) Unmarshal(dAtA []byte) error { +func (m *ListGithubConnectorsResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -56600,15 +63547,15 @@ func (m *CreateSAMLConnectorRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: CreateSAMLConnectorRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ListGithubConnectorsResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: CreateSAMLConnectorRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListGithubConnectorsResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Connector", wireType) + return fmt.Errorf("proto: wrong 
wireType = %d for field Connectors", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -56635,13 +63582,43 @@ func (m *CreateSAMLConnectorRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Connector == nil { - m.Connector = &types.SAMLConnectorV2{} - } - if err := m.Connector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.Connectors = append(m.Connectors, &types.GithubConnectorV3{}) + if err := m.Connectors[len(m.Connectors)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field NextPageToken", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.NextPageToken = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -56664,7 +63641,7 @@ func (m *CreateSAMLConnectorRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *UpdateSAMLConnectorRequest) Unmarshal(dAtA []byte) error { +func (m *CreateGithubConnectorRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -56687,10 +63664,10 @@ func (m *UpdateSAMLConnectorRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UpdateSAMLConnectorRequest: wiretype end group for non-group") + return fmt.Errorf("proto: CreateGithubConnectorRequest: wiretype end group for non-group") 
} if fieldNum <= 0 { - return fmt.Errorf("proto: UpdateSAMLConnectorRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: CreateGithubConnectorRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -56723,7 +63700,7 @@ func (m *UpdateSAMLConnectorRequest) Unmarshal(dAtA []byte) error { return io.ErrUnexpectedEOF } if m.Connector == nil { - m.Connector = &types.SAMLConnectorV2{} + m.Connector = &types.GithubConnectorV3{} } if err := m.Connector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err @@ -56751,7 +63728,7 @@ func (m *UpdateSAMLConnectorRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *UpsertSAMLConnectorRequest) Unmarshal(dAtA []byte) error { +func (m *UpdateGithubConnectorRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -56774,10 +63751,10 @@ func (m *UpsertSAMLConnectorRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UpsertSAMLConnectorRequest: wiretype end group for non-group") + return fmt.Errorf("proto: UpdateGithubConnectorRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UpsertSAMLConnectorRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UpdateGithubConnectorRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -56810,7 +63787,7 @@ func (m *UpsertSAMLConnectorRequest) Unmarshal(dAtA []byte) error { return io.ErrUnexpectedEOF } if m.Connector == nil { - m.Connector = &types.SAMLConnectorV2{} + m.Connector = &types.GithubConnectorV3{} } if err := m.Connector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err @@ -56838,7 +63815,7 @@ func (m *UpsertSAMLConnectorRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *CreateGithubConnectorRequest) Unmarshal(dAtA []byte) error { +func (m 
*UpsertGithubConnectorRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -56861,10 +63838,10 @@ func (m *CreateGithubConnectorRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: CreateGithubConnectorRequest: wiretype end group for non-group") + return fmt.Errorf("proto: UpsertGithubConnectorRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: CreateGithubConnectorRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UpsertGithubConnectorRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -56925,7 +63902,7 @@ func (m *CreateGithubConnectorRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *UpdateGithubConnectorRequest) Unmarshal(dAtA []byte) error { +func (m *GetSSODiagnosticInfoRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -56948,17 +63925,17 @@ func (m *UpdateGithubConnectorRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UpdateGithubConnectorRequest: wiretype end group for non-group") + return fmt.Errorf("proto: GetSSODiagnosticInfoRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UpdateGithubConnectorRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: GetSSODiagnosticInfoRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Connector", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AuthRequestKind", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -56968,27 +63945,55 @@ func (m 
*UpdateGithubConnectorRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - if m.Connector == nil { - m.Connector = &types.GithubConnectorV3{} + m.AuthRequestKind = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AuthRequestID", wireType) } - if err := m.Connector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF } + m.AuthRequestID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -57012,7 +64017,7 @@ func (m *UpdateGithubConnectorRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *UpsertGithubConnectorRequest) Unmarshal(dAtA []byte) error { +func (m *SystemRoleAssertion) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -57035,17 +64040,17 @@ func (m *UpsertGithubConnectorRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UpsertGithubConnectorRequest: wiretype end group for non-group") + return fmt.Errorf("proto: 
SystemRoleAssertion: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UpsertGithubConnectorRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SystemRoleAssertion: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Connector", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -57055,27 +64060,87 @@ func (m *UpsertGithubConnectorRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - if m.Connector == nil { - m.Connector = &types.GithubConnectorV3{} + m.ServerID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AssertionID", wireType) } - if err := m.Connector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.AssertionID = 
string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SystemRole", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF } + m.SystemRole = github_com_gravitational_teleport_api_types.SystemRole(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -57099,7 +64164,7 @@ func (m *UpsertGithubConnectorRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *GetSSODiagnosticInfoRequest) Unmarshal(dAtA []byte) error { +func (m *SystemRoleAssertionSet) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -57122,15 +64187,15 @@ func (m *GetSSODiagnosticInfoRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GetSSODiagnosticInfoRequest: wiretype end group for non-group") + return fmt.Errorf("proto: SystemRoleAssertionSet: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GetSSODiagnosticInfoRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SystemRoleAssertionSet: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AuthRequestKind", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerID", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -57158,11 +64223,11 @@ func (m 
*GetSSODiagnosticInfoRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AuthRequestKind = string(dAtA[iNdEx:postIndex]) + m.ServerID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AuthRequestID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AssertionID", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -57190,7 +64255,39 @@ func (m *GetSSODiagnosticInfoRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AuthRequestID = string(dAtA[iNdEx:postIndex]) + m.AssertionID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SystemRoles", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.SystemRoles = append(m.SystemRoles, github_com_gravitational_teleport_api_types.SystemRole(dAtA[iNdEx:postIndex])) iNdEx = postIndex default: iNdEx = preIndex @@ -57214,7 +64311,7 @@ func (m *GetSSODiagnosticInfoRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *SystemRoleAssertion) Unmarshal(dAtA []byte) error { +func (m *InventoryConnectedServiceCountsRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -57237,17 +64334,68 @@ func (m *SystemRoleAssertion) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - 
return fmt.Errorf("proto: SystemRoleAssertion: wiretype end group for non-group") + return fmt.Errorf("proto: InventoryConnectedServiceCountsRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SystemRoleAssertion: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: InventoryConnectedServiceCountsRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *InventoryConnectedServiceCounts) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: InventoryConnectedServiceCounts: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: InventoryConnectedServiceCounts: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServiceCounts", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -57257,27 +64405,159 @@ func (m *SystemRoleAssertion) Unmarshal(dAtA 
[]byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.ServerID = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 2: + if m.ServiceCounts == nil { + m.ServiceCounts = make(map[github_com_gravitational_teleport_api_types.SystemRole]uint64) + } + var mapkey github_com_gravitational_teleport_api_types.SystemRole + var mapvalue uint64 + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthAuthservice + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey < 0 { + return ErrInvalidLengthAuthservice + } + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = github_com_gravitational_teleport_api_types.SystemRole(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + 
iNdEx++ + mapvalue |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + } else { + iNdEx = entryPreIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + m.ServiceCounts[github_com_gravitational_teleport_api_types.SystemRole(mapkey)] = mapvalue + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *InventoryPingRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: InventoryPingRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: InventoryPingRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AssertionID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerID", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -57305,13 +64585,13 @@ func (m *SystemRoleAssertion) Unmarshal(dAtA []byte) error { if postIndex > l { return 
io.ErrUnexpectedEOF } - m.AssertionID = string(dAtA[iNdEx:postIndex]) + m.ServerID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SystemRole", wireType) + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field ControlLog", wireType) } - var stringLen uint64 + var v int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -57321,24 +64601,12 @@ func (m *SystemRoleAssertion) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + v |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthAuthservice - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.SystemRole = github_com_gravitational_teleport_api_types.SystemRole(dAtA[iNdEx:postIndex]) - iNdEx = postIndex + m.ControlLog = bool(v != 0) default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -57361,7 +64629,7 @@ func (m *SystemRoleAssertion) Unmarshal(dAtA []byte) error { } return nil } -func (m *SystemRoleAssertionSet) Unmarshal(dAtA []byte) error { +func (m *InventoryPingResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -57384,17 +64652,17 @@ func (m *SystemRoleAssertionSet) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SystemRoleAssertionSet: wiretype end group for non-group") + return fmt.Errorf("proto: InventoryPingResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SystemRoleAssertionSet: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: InventoryPingResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum 
{ case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerID", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Duration", wireType) } - var stringLen uint64 + m.Duration = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -57404,61 +64672,67 @@ func (m *SystemRoleAssertionSet) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + m.Duration |= time.Duration(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthAuthservice + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err } - postIndex := iNdEx + intStringLen - if postIndex < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthAuthservice } - if postIndex > l { + if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.ServerID = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AssertionID", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthAuthservice + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *GetClusterAlertsResponse) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice } - if postIndex > l { + if iNdEx >= l { return io.ErrUnexpectedEOF } - m.AssertionID = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 3: + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: GetClusterAlertsResponse: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: GetClusterAlertsResponse: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SystemRoles", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Alerts", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -57468,23 +64742,25 @@ func (m *SystemRoleAssertionSet) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.SystemRoles = append(m.SystemRoles, github_com_gravitational_teleport_api_types.SystemRole(dAtA[iNdEx:postIndex])) + m.Alerts = append(m.Alerts, types.ClusterAlert{}) + if err := m.Alerts[len(m.Alerts)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = 
postIndex default: iNdEx = preIndex @@ -57508,7 +64784,7 @@ func (m *SystemRoleAssertionSet) Unmarshal(dAtA []byte) error { } return nil } -func (m *InventoryConnectedServiceCountsRequest) Unmarshal(dAtA []byte) error { +func (m *GetAlertAcksRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -57531,10 +64807,10 @@ func (m *InventoryConnectedServiceCountsRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: InventoryConnectedServiceCountsRequest: wiretype end group for non-group") + return fmt.Errorf("proto: GetAlertAcksRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: InventoryConnectedServiceCountsRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: GetAlertAcksRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { default: @@ -57559,7 +64835,7 @@ func (m *InventoryConnectedServiceCountsRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *InventoryConnectedServiceCounts) Unmarshal(dAtA []byte) error { +func (m *GetAlertAcksResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -57582,15 +64858,15 @@ func (m *InventoryConnectedServiceCounts) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: InventoryConnectedServiceCounts: wiretype end group for non-group") + return fmt.Errorf("proto: GetAlertAcksResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: InventoryConnectedServiceCounts: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: GetAlertAcksResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServiceCounts", wireType) + return fmt.Errorf("proto: 
wrong wireType = %d for field Acks", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -57617,89 +64893,10 @@ func (m *InventoryConnectedServiceCounts) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.ServiceCounts == nil { - m.ServiceCounts = make(map[github_com_gravitational_teleport_api_types.SystemRole]uint64) - } - var mapkey github_com_gravitational_teleport_api_types.SystemRole - var mapvalue uint64 - for iNdEx < postIndex { - entryPreIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - if fieldNum == 1 { - var stringLenmapkey uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapkey |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapkey := int(stringLenmapkey) - if intStringLenmapkey < 0 { - return ErrInvalidLengthAuthservice - } - postStringIndexmapkey := iNdEx + intStringLenmapkey - if postStringIndexmapkey < 0 { - return ErrInvalidLengthAuthservice - } - if postStringIndexmapkey > l { - return io.ErrUnexpectedEOF - } - mapkey = github_com_gravitational_teleport_api_types.SystemRole(dAtA[iNdEx:postStringIndexmapkey]) - iNdEx = postStringIndexmapkey - } else if fieldNum == 2 { - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - mapvalue |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - } else { - iNdEx = entryPreIndex - skippy, err := skipAuthservice(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return 
ErrInvalidLengthAuthservice - } - if (iNdEx + skippy) > postIndex { - return io.ErrUnexpectedEOF - } - iNdEx += skippy - } + m.Acks = append(m.Acks, types.AlertAcknowledgement{}) + if err := m.Acks[len(m.Acks)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - m.ServiceCounts[github_com_gravitational_teleport_api_types.SystemRole(mapkey)] = mapvalue iNdEx = postIndex default: iNdEx = preIndex @@ -57723,7 +64920,7 @@ func (m *InventoryConnectedServiceCounts) Unmarshal(dAtA []byte) error { } return nil } -func (m *InventoryPingRequest) Unmarshal(dAtA []byte) error { +func (m *ClearAlertAcksRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -57746,15 +64943,15 @@ func (m *InventoryPingRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: InventoryPingRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ClearAlertAcksRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: InventoryPingRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ClearAlertAcksRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AlertID", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -57782,28 +64979,8 @@ func (m *InventoryPingRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ServerID = string(dAtA[iNdEx:postIndex]) + m.AlertID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field ControlLog", wireType) - } - var v int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - 
return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - v |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - m.ControlLog = bool(v != 0) default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -57826,7 +65003,7 @@ func (m *InventoryPingRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *InventoryPingResponse) Unmarshal(dAtA []byte) error { +func (m *UpsertClusterAlertRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -57849,17 +65026,17 @@ func (m *InventoryPingResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: InventoryPingResponse: wiretype end group for non-group") + return fmt.Errorf("proto: UpsertClusterAlertRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: InventoryPingResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UpsertClusterAlertRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Duration", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Alert", wireType) } - m.Duration = 0 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -57869,11 +65046,25 @@ func (m *InventoryPingResponse) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.Duration |= time.Duration(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Alert.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := 
skipAuthservice(dAtA[iNdEx:]) @@ -57896,7 +65087,7 @@ func (m *InventoryPingResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *GetClusterAlertsResponse) Unmarshal(dAtA []byte) error { +func (m *GetConnectionDiagnosticRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -57919,17 +65110,17 @@ func (m *GetClusterAlertsResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GetClusterAlertsResponse: wiretype end group for non-group") + return fmt.Errorf("proto: GetConnectionDiagnosticRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GetClusterAlertsResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: GetConnectionDiagnosticRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Alerts", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -57939,25 +65130,23 @@ func (m *GetClusterAlertsResponse) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.Alerts = append(m.Alerts, types.ClusterAlert{}) - if err := m.Alerts[len(m.Alerts)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Name = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -57981,7 +65170,7 @@ func (m 
*GetClusterAlertsResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *GetAlertAcksRequest) Unmarshal(dAtA []byte) error { +func (m *AppendDiagnosticTraceRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -58004,12 +65193,80 @@ func (m *GetAlertAcksRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GetAlertAcksRequest: wiretype end group for non-group") + return fmt.Errorf("proto: AppendDiagnosticTraceRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GetAlertAcksRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AppendDiagnosticTraceRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Name = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Trace", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + msglen + if postIndex < 
0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Trace == nil { + m.Trace = &types.ConnectionDiagnosticTrace{} + } + if err := m.Trace.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -58032,7 +65289,7 @@ func (m *GetAlertAcksRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *GetAlertAcksResponse) Unmarshal(dAtA []byte) error { +func (m *SubmitUsageEventRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -58055,15 +65312,15 @@ func (m *GetAlertAcksResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GetAlertAcksResponse: wiretype end group for non-group") + return fmt.Errorf("proto: SubmitUsageEventRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GetAlertAcksResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SubmitUsageEventRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Acks", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Event", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -58090,11 +65347,115 @@ func (m *GetAlertAcksResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Acks = append(m.Acks, types.AlertAcknowledgement{}) - if err := m.Acks[len(m.Acks)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.Event == nil { + m.Event = &v12.UsageEventOneOf{} + } + if err := m.Event.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || 
(iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *GetLicenseRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: GetLicenseRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: GetLicenseRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + default: + iNdEx = preIndex + skippy, err := skipAuthservice(dAtA[iNdEx:]) + if err != nil { return err } - iNdEx = postIndex + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthAuthservice + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ListReleasesRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ListReleasesRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ListReleasesRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -58117,7 +65478,7 @@ func (m *GetAlertAcksResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *ClearAlertAcksRequest) Unmarshal(dAtA []byte) error { +func (m *CreateTokenV2Request) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -58140,17 +65501,17 @@ func (m *ClearAlertAcksRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ClearAlertAcksRequest: wiretype end group for non-group") + return fmt.Errorf("proto: CreateTokenV2Request: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ClearAlertAcksRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: CreateTokenV2Request: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AlertID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field V2", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return 
ErrIntOverflowAuthservice @@ -58160,23 +65521,26 @@ func (m *ClearAlertAcksRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - m.AlertID = string(dAtA[iNdEx:postIndex]) + v := &types.ProvisionTokenV2{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Token = &CreateTokenV2Request_V2{v} iNdEx = postIndex default: iNdEx = preIndex @@ -58200,7 +65564,7 @@ func (m *ClearAlertAcksRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *UpsertClusterAlertRequest) Unmarshal(dAtA []byte) error { +func (m *UpsertTokenV2Request) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -58223,15 +65587,15 @@ func (m *UpsertClusterAlertRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UpsertClusterAlertRequest: wiretype end group for non-group") + return fmt.Errorf("proto: UpsertTokenV2Request: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UpsertClusterAlertRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UpsertTokenV2Request: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Alert", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field V2", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -58258,9 +65622,11 @@ func (m *UpsertClusterAlertRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err 
:= m.Alert.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &types.ProvisionTokenV2{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Token = &UpsertTokenV2Request_V2{v} iNdEx = postIndex default: iNdEx = preIndex @@ -58284,7 +65650,7 @@ func (m *UpsertClusterAlertRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *GetConnectionDiagnosticRequest) Unmarshal(dAtA []byte) error { +func (m *GetHeadlessAuthenticationRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -58307,15 +65673,15 @@ func (m *GetConnectionDiagnosticRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GetConnectionDiagnosticRequest: wiretype end group for non-group") + return fmt.Errorf("proto: GetHeadlessAuthenticationRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GetConnectionDiagnosticRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: GetHeadlessAuthenticationRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Id", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -58343,7 +65709,7 @@ func (m *GetConnectionDiagnosticRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Name = string(dAtA[iNdEx:postIndex]) + m.Id = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -58367,7 +65733,7 @@ func (m *GetConnectionDiagnosticRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *AppendDiagnosticTraceRequest) Unmarshal(dAtA []byte) error { +func (m *UpdateHeadlessAuthenticationStateRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -58390,15 
+65756,15 @@ func (m *AppendDiagnosticTraceRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AppendDiagnosticTraceRequest: wiretype end group for non-group") + return fmt.Errorf("proto: UpdateHeadlessAuthenticationStateRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AppendDiagnosticTraceRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UpdateHeadlessAuthenticationStateRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Id", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -58426,11 +65792,30 @@ func (m *AppendDiagnosticTraceRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Name = string(dAtA[iNdEx:postIndex]) + m.Id = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field State", wireType) + } + m.State = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.State |= types.HeadlessAuthenticationState(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Trace", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MfaResponse", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -58457,10 +65842,10 @@ func (m *AppendDiagnosticTraceRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Trace == nil { - m.Trace = &types.ConnectionDiagnosticTrace{} + if m.MfaResponse == nil { + m.MfaResponse = 
&MFAAuthenticateResponse{} } - if err := m.Trace.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.MfaResponse.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -58486,7 +65871,7 @@ func (m *AppendDiagnosticTraceRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *SubmitUsageEventRequest) Unmarshal(dAtA []byte) error { +func (m *ExportUpgradeWindowsRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -58509,17 +65894,17 @@ func (m *SubmitUsageEventRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SubmitUsageEventRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ExportUpgradeWindowsRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SubmitUsageEventRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ExportUpgradeWindowsRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Event", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TeleportVersion", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -58529,27 +65914,55 @@ func (m *SubmitUsageEventRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthAuthservice } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthAuthservice } if postIndex > l { return io.ErrUnexpectedEOF } - if m.Event == nil { - m.Event = &v12.UsageEventOneOf{} + m.TeleportVersion = 
string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UpgraderKind", wireType) } - if err := m.Event.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF } + m.UpgraderKind = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -58573,7 +65986,7 @@ func (m *SubmitUsageEventRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *GetLicenseRequest) Unmarshal(dAtA []byte) error { +func (m *ExportUpgradeWindowsResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -58596,63 +66009,112 @@ func (m *GetLicenseRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GetLicenseRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ExportUpgradeWindowsResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GetLicenseRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ExportUpgradeWindowsResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { - default: - iNdEx = preIndex - skippy, err := skipAuthservice(dAtA[iNdEx:]) - if err != nil { - return err + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field CanonicalSchedule", wireType) } - if (skippy < 0) || (iNdEx+skippy) < 0 { + var msglen 
int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { return ErrInvalidLengthAuthservice } - if (iNdEx + skippy) > l { + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *ListReleasesRequest) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice + if m.CanonicalSchedule == nil { + m.CanonicalSchedule = &types.AgentUpgradeSchedule{} } - if iNdEx >= l { + if err := m.CanonicalSchedule.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field KubeControllerSchedule", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { return io.ErrUnexpectedEOF } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break + m.KubeControllerSchedule = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d 
for field SystemdUnitSchedule", wireType) } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: ListReleasesRequest: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: ListReleasesRequest: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.SystemdUnitSchedule = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) @@ -58675,7 +66137,7 @@ func (m *ListReleasesRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *CreateTokenV2Request) Unmarshal(dAtA []byte) error { +func (m *ListAccessRequestsRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -58698,15 +66160,15 @@ func (m *CreateTokenV2Request) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: CreateTokenV2Request: wiretype end group for non-group") + return fmt.Errorf("proto: ListAccessRequestsRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: CreateTokenV2Request: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListAccessRequestsRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field V2", wireType) + return 
fmt.Errorf("proto: wrong wireType = %d for field Filter", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -58733,11 +66195,102 @@ func (m *CreateTokenV2Request) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &types.ProvisionTokenV2{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.Filter == nil { + m.Filter = &types.AccessRequestFilter{} + } + if err := m.Filter.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Token = &CreateTokenV2Request_V2{v} + iNdEx = postIndex + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Sort", wireType) + } + m.Sort = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Sort |= AccessRequestSort(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Descending", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.Descending = bool(v != 0) + case 4: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Limit", wireType) + } + m.Limit = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Limit |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field StartKey", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := 
dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.StartKey = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -58761,7 +66314,7 @@ func (m *CreateTokenV2Request) Unmarshal(dAtA []byte) error { } return nil } -func (m *UpsertTokenV2Request) Unmarshal(dAtA []byte) error { +func (m *ListAccessRequestsResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -58784,15 +66337,15 @@ func (m *UpsertTokenV2Request) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UpsertTokenV2Request: wiretype end group for non-group") + return fmt.Errorf("proto: ListAccessRequestsResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UpsertTokenV2Request: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListAccessRequestsResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field V2", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AccessRequests", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -58819,11 +66372,42 @@ func (m *UpsertTokenV2Request) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &types.ProvisionTokenV2{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.AccessRequests = append(m.AccessRequests, &types.AccessRequestV3{}) + if err := m.AccessRequests[len(m.AccessRequests)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Token = &UpsertTokenV2Request_V2{v} + iNdEx = 
postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field NextKey", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.NextKey = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -58847,7 +66431,7 @@ func (m *UpsertTokenV2Request) Unmarshal(dAtA []byte) error { } return nil } -func (m *GetHeadlessAuthenticationRequest) Unmarshal(dAtA []byte) error { +func (m *AccessRequestAllowedPromotionRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -58870,15 +66454,15 @@ func (m *GetHeadlessAuthenticationRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GetHeadlessAuthenticationRequest: wiretype end group for non-group") + return fmt.Errorf("proto: AccessRequestAllowedPromotionRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GetHeadlessAuthenticationRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AccessRequestAllowedPromotionRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Id", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AccessRequestID", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -58906,7 +66490,7 @@ func (m *GetHeadlessAuthenticationRequest) Unmarshal(dAtA 
[]byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Id = string(dAtA[iNdEx:postIndex]) + m.AccessRequestID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -58930,7 +66514,7 @@ func (m *GetHeadlessAuthenticationRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *UpdateHeadlessAuthenticationStateRequest) Unmarshal(dAtA []byte) error { +func (m *AccessRequestAllowedPromotionResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -58953,66 +66537,15 @@ func (m *UpdateHeadlessAuthenticationStateRequest) Unmarshal(dAtA []byte) error fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UpdateHeadlessAuthenticationStateRequest: wiretype end group for non-group") + return fmt.Errorf("proto: AccessRequestAllowedPromotionResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UpdateHeadlessAuthenticationStateRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AccessRequestAllowedPromotionResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Id", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthAuthservice - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Id = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 2: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field State", wireType) - } - m.State = 0 - for 
shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.State |= types.HeadlessAuthenticationState(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MfaResponse", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AllowedPromotions", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -59039,10 +66572,10 @@ func (m *UpdateHeadlessAuthenticationStateRequest) Unmarshal(dAtA []byte) error if postIndex > l { return io.ErrUnexpectedEOF } - if m.MfaResponse == nil { - m.MfaResponse = &MFAAuthenticateResponse{} + if m.AllowedPromotions == nil { + m.AllowedPromotions = &types.AccessRequestAllowedPromotions{} } - if err := m.MfaResponse.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.AllowedPromotions.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -59068,7 +66601,7 @@ func (m *UpdateHeadlessAuthenticationStateRequest) Unmarshal(dAtA []byte) error } return nil } -func (m *ExportUpgradeWindowsRequest) Unmarshal(dAtA []byte) error { +func (m *ListProvisionTokensRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -59091,15 +66624,34 @@ func (m *ExportUpgradeWindowsRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ExportUpgradeWindowsRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ListProvisionTokensRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ExportUpgradeWindowsRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListProvisionTokensRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong 
wireType = %d for field Limit", wireType) + } + m.Limit = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Limit |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TeleportVersion", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StartKey", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -59127,11 +66679,11 @@ func (m *ExportUpgradeWindowsRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.TeleportVersion = string(dAtA[iNdEx:postIndex]) + m.StartKey = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UpgraderKind", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field FilterRoles", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -59159,7 +66711,39 @@ func (m *ExportUpgradeWindowsRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.UpgraderKind = string(dAtA[iNdEx:postIndex]) + m.FilterRoles = append(m.FilterRoles, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field FilterBotName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF 
+ } + m.FilterBotName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -59183,7 +66767,7 @@ func (m *ExportUpgradeWindowsRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *ExportUpgradeWindowsResponse) Unmarshal(dAtA []byte) error { +func (m *ListProvisionTokensResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -59206,15 +66790,15 @@ func (m *ExportUpgradeWindowsResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ExportUpgradeWindowsResponse: wiretype end group for non-group") + return fmt.Errorf("proto: ListProvisionTokensResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ExportUpgradeWindowsResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListProvisionTokensResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CanonicalSchedule", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Tokens", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -59241,48 +66825,14 @@ func (m *ExportUpgradeWindowsResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.CanonicalSchedule == nil { - m.CanonicalSchedule = &types.AgentUpgradeSchedule{} - } - if err := m.CanonicalSchedule.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.Tokens = append(m.Tokens, &types.ProvisionTokenV2{}) + if err := m.Tokens[len(m.Tokens)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubeControllerSchedule", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - 
return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthAuthservice - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.KubeControllerSchedule = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SystemdUnitSchedule", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field NextKey", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -59310,7 +66860,7 @@ func (m *ExportUpgradeWindowsResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.SystemdUnitSchedule = string(dAtA[iNdEx:postIndex]) + m.NextKey = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -59334,7 +66884,7 @@ func (m *ExportUpgradeWindowsResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *ListAccessRequestsRequest) Unmarshal(dAtA []byte) error { +func (m *ListKubernetesClustersRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -59357,92 +66907,17 @@ func (m *ListAccessRequestsRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ListAccessRequestsRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ListKubernetesClustersRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ListAccessRequestsRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListKubernetesClustersRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Filter", 
wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthAuthservice - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthAuthservice - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if m.Filter == nil { - m.Filter = &types.AccessRequestFilter{} - } - if err := m.Filter.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 2: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Sort", wireType) - } - m.Sort = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.Sort |= AccessRequestSort(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 3: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Descending", wireType) - } - var v int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowAuthservice - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - v |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - m.Descending = bool(v != 0) - case 4: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Limit", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PageSize", wireType) } - m.Limit = 0 + m.PageSize = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowAuthservice @@ -59452,14 +66927,14 @@ func (m *ListAccessRequestsRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.Limit |= int32(b&0x7F) << shift + m.PageSize |= int32(b&0x7F) << shift if b < 0x80 { break } } - case 5: + case 2: if wireType != 2 { - return 
fmt.Errorf("proto: wrong wireType = %d for field StartKey", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PageToken", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -59487,7 +66962,7 @@ func (m *ListAccessRequestsRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.StartKey = string(dAtA[iNdEx:postIndex]) + m.PageToken = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -59511,7 +66986,7 @@ func (m *ListAccessRequestsRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *ListAccessRequestsResponse) Unmarshal(dAtA []byte) error { +func (m *ListKubernetesClustersResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -59534,15 +67009,15 @@ func (m *ListAccessRequestsResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ListAccessRequestsResponse: wiretype end group for non-group") + return fmt.Errorf("proto: ListKubernetesClustersResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ListAccessRequestsResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListKubernetesClustersResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessRequests", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesClusters", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -59569,14 +67044,14 @@ func (m *ListAccessRequestsResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AccessRequests = append(m.AccessRequests, &types.AccessRequestV3{}) - if err := m.AccessRequests[len(m.AccessRequests)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.KubernetesClusters = 
append(m.KubernetesClusters, &types.KubernetesClusterV3{}) + if err := m.KubernetesClusters[len(m.KubernetesClusters)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field NextKey", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field NextPageToken", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -59604,7 +67079,7 @@ func (m *ListAccessRequestsResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.NextKey = string(dAtA[iNdEx:postIndex]) + m.NextPageToken = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -59628,7 +67103,7 @@ func (m *ListAccessRequestsResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *AccessRequestAllowedPromotionRequest) Unmarshal(dAtA []byte) error { +func (m *ListSnowflakeSessionsRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -59651,15 +67126,34 @@ func (m *AccessRequestAllowedPromotionRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AccessRequestAllowedPromotionRequest: wiretype end group for non-group") + return fmt.Errorf("proto: ListSnowflakeSessionsRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AccessRequestAllowedPromotionRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListSnowflakeSessionsRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field PageSize", wireType) + } + m.PageSize = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.PageSize |= int32(b&0x7F) << 
shift + if b < 0x80 { + break + } + } + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessRequestID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PageToken", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -59687,7 +67181,7 @@ func (m *AccessRequestAllowedPromotionRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AccessRequestID = string(dAtA[iNdEx:postIndex]) + m.PageToken = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -59711,7 +67205,7 @@ func (m *AccessRequestAllowedPromotionRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *AccessRequestAllowedPromotionResponse) Unmarshal(dAtA []byte) error { +func (m *ListSnowflakeSessionsResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -59734,15 +67228,15 @@ func (m *AccessRequestAllowedPromotionResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AccessRequestAllowedPromotionResponse: wiretype end group for non-group") + return fmt.Errorf("proto: ListSnowflakeSessionsResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AccessRequestAllowedPromotionResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ListSnowflakeSessionsResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AllowedPromotions", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Sessions", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -59769,13 +67263,43 @@ func (m *AccessRequestAllowedPromotionResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.AllowedPromotions == nil { - m.AllowedPromotions = 
&types.AccessRequestAllowedPromotions{} - } - if err := m.AllowedPromotions.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.Sessions = append(m.Sessions, &types.WebSessionV2{}) + if err := m.Sessions[len(m.Sessions)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field NextPageToken", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowAuthservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthAuthservice + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthAuthservice + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.NextPageToken = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipAuthservice(dAtA[iNdEx:]) diff --git a/api/client/proto/authservice_grpc.pb.go b/api/client/proto/authservice_grpc.pb.go index 5df6237d0ce0b..39ff81a69d2cb 100644 --- a/api/client/proto/authservice_grpc.pb.go +++ b/api/client/proto/authservice_grpc.pb.go @@ -71,12 +71,14 @@ const ( AuthService_SetAccessRequestState_FullMethodName = "/proto.AuthService/SetAccessRequestState" AuthService_SubmitAccessReview_FullMethodName = "/proto.AuthService/SubmitAccessReview" AuthService_GetAccessCapabilities_FullMethodName = "/proto.AuthService/GetAccessCapabilities" + AuthService_GetRemoteAccessCapabilities_FullMethodName = "/proto.AuthService/GetRemoteAccessCapabilities" AuthService_GetAccessRequestAllowedPromotions_FullMethodName = "/proto.AuthService/GetAccessRequestAllowedPromotions" AuthService_GetPluginData_FullMethodName = "/proto.AuthService/GetPluginData" AuthService_UpdatePluginData_FullMethodName = 
"/proto.AuthService/UpdatePluginData" AuthService_Ping_FullMethodName = "/proto.AuthService/Ping" AuthService_GetResetPasswordToken_FullMethodName = "/proto.AuthService/GetResetPasswordToken" AuthService_CreateResetPasswordToken_FullMethodName = "/proto.AuthService/CreateResetPasswordToken" + AuthService_ListResetPasswordTokens_FullMethodName = "/proto.AuthService/ListResetPasswordTokens" AuthService_GetUser_FullMethodName = "/proto.AuthService/GetUser" AuthService_GetCurrentUser_FullMethodName = "/proto.AuthService/GetCurrentUser" AuthService_GetCurrentUserRoles_FullMethodName = "/proto.AuthService/GetCurrentUserRoles" @@ -89,6 +91,7 @@ const ( AuthService_KeepAliveSemaphoreLease_FullMethodName = "/proto.AuthService/KeepAliveSemaphoreLease" AuthService_CancelSemaphoreLease_FullMethodName = "/proto.AuthService/CancelSemaphoreLease" AuthService_GetSemaphores_FullMethodName = "/proto.AuthService/GetSemaphores" + AuthService_ListSemaphores_FullMethodName = "/proto.AuthService/ListSemaphores" AuthService_DeleteSemaphore_FullMethodName = "/proto.AuthService/DeleteSemaphore" AuthService_EmitAuditEvent_FullMethodName = "/proto.AuthService/EmitAuditEvent" AuthService_CreateAuditStream_FullMethodName = "/proto.AuthService/CreateAuditStream" @@ -105,6 +108,7 @@ const ( AuthService_CreateSnowflakeSession_FullMethodName = "/proto.AuthService/CreateSnowflakeSession" AuthService_GetSnowflakeSession_FullMethodName = "/proto.AuthService/GetSnowflakeSession" AuthService_GetSnowflakeSessions_FullMethodName = "/proto.AuthService/GetSnowflakeSessions" + AuthService_ListSnowflakeSessions_FullMethodName = "/proto.AuthService/ListSnowflakeSessions" AuthService_DeleteSnowflakeSession_FullMethodName = "/proto.AuthService/DeleteSnowflakeSession" AuthService_DeleteAllSnowflakeSessions_FullMethodName = "/proto.AuthService/DeleteAllSnowflakeSessions" AuthService_CreateSAMLIdPSession_FullMethodName = "/proto.AuthService/CreateSAMLIdPSession" @@ -120,6 +124,7 @@ const ( 
AuthService_DeleteAllWebSessions_FullMethodName = "/proto.AuthService/DeleteAllWebSessions" AuthService_GetWebToken_FullMethodName = "/proto.AuthService/GetWebToken" AuthService_GetWebTokens_FullMethodName = "/proto.AuthService/GetWebTokens" + AuthService_ListWebTokens_FullMethodName = "/proto.AuthService/ListWebTokens" AuthService_DeleteWebToken_FullMethodName = "/proto.AuthService/DeleteWebToken" AuthService_DeleteAllWebTokens_FullMethodName = "/proto.AuthService/DeleteAllWebTokens" AuthService_UpdateRemoteCluster_FullMethodName = "/proto.AuthService/UpdateRemoteCluster" @@ -137,6 +142,7 @@ const ( AuthService_GenerateSnowflakeJWT_FullMethodName = "/proto.AuthService/GenerateSnowflakeJWT" AuthService_GetRole_FullMethodName = "/proto.AuthService/GetRole" AuthService_ListRoles_FullMethodName = "/proto.AuthService/ListRoles" + AuthService_ListRequestableRoles_FullMethodName = "/proto.AuthService/ListRequestableRoles" AuthService_CreateRole_FullMethodName = "/proto.AuthService/CreateRole" AuthService_UpdateRole_FullMethodName = "/proto.AuthService/UpdateRole" AuthService_UpsertRoleV2_FullMethodName = "/proto.AuthService/UpsertRoleV2" @@ -151,6 +157,7 @@ const ( AuthService_CreateRegisterChallenge_FullMethodName = "/proto.AuthService/CreateRegisterChallenge" AuthService_GetOIDCConnector_FullMethodName = "/proto.AuthService/GetOIDCConnector" AuthService_GetOIDCConnectors_FullMethodName = "/proto.AuthService/GetOIDCConnectors" + AuthService_ListOIDCConnectors_FullMethodName = "/proto.AuthService/ListOIDCConnectors" AuthService_CreateOIDCConnector_FullMethodName = "/proto.AuthService/CreateOIDCConnector" AuthService_UpdateOIDCConnector_FullMethodName = "/proto.AuthService/UpdateOIDCConnector" AuthService_UpsertOIDCConnector_FullMethodName = "/proto.AuthService/UpsertOIDCConnector" @@ -160,6 +167,7 @@ const ( AuthService_GetOIDCAuthRequest_FullMethodName = "/proto.AuthService/GetOIDCAuthRequest" AuthService_GetSAMLConnector_FullMethodName = 
"/proto.AuthService/GetSAMLConnector" AuthService_GetSAMLConnectors_FullMethodName = "/proto.AuthService/GetSAMLConnectors" + AuthService_ListSAMLConnectors_FullMethodName = "/proto.AuthService/ListSAMLConnectors" AuthService_CreateSAMLConnector_FullMethodName = "/proto.AuthService/CreateSAMLConnector" AuthService_UpdateSAMLConnector_FullMethodName = "/proto.AuthService/UpdateSAMLConnector" AuthService_UpsertSAMLConnector_FullMethodName = "/proto.AuthService/UpsertSAMLConnector" @@ -169,6 +177,7 @@ const ( AuthService_GetSAMLAuthRequest_FullMethodName = "/proto.AuthService/GetSAMLAuthRequest" AuthService_GetGithubConnector_FullMethodName = "/proto.AuthService/GetGithubConnector" AuthService_GetGithubConnectors_FullMethodName = "/proto.AuthService/GetGithubConnectors" + AuthService_ListGithubConnectors_FullMethodName = "/proto.AuthService/ListGithubConnectors" AuthService_CreateGithubConnector_FullMethodName = "/proto.AuthService/CreateGithubConnector" AuthService_UpdateGithubConnector_FullMethodName = "/proto.AuthService/UpdateGithubConnector" AuthService_UpsertGithubConnector_FullMethodName = "/proto.AuthService/UpsertGithubConnector" @@ -188,6 +197,8 @@ const ( AuthService_DeleteTrustedCluster_FullMethodName = "/proto.AuthService/DeleteTrustedCluster" AuthService_GetToken_FullMethodName = "/proto.AuthService/GetToken" AuthService_GetTokens_FullMethodName = "/proto.AuthService/GetTokens" + AuthService_GetStaticTokens_FullMethodName = "/proto.AuthService/GetStaticTokens" + AuthService_ListProvisionTokens_FullMethodName = "/proto.AuthService/ListProvisionTokens" AuthService_CreateTokenV2_FullMethodName = "/proto.AuthService/CreateTokenV2" AuthService_UpsertTokenV2_FullMethodName = "/proto.AuthService/UpsertTokenV2" AuthService_DeleteToken_FullMethodName = "/proto.AuthService/DeleteToken" @@ -208,6 +219,7 @@ const ( AuthService_GetSessionEvents_FullMethodName = "/proto.AuthService/GetSessionEvents" AuthService_GetLock_FullMethodName = "/proto.AuthService/GetLock" 
AuthService_GetLocks_FullMethodName = "/proto.AuthService/GetLocks" + AuthService_ListLocks_FullMethodName = "/proto.AuthService/ListLocks" AuthService_UpsertLock_FullMethodName = "/proto.AuthService/UpsertLock" AuthService_DeleteLock_FullMethodName = "/proto.AuthService/DeleteLock" AuthService_ReplaceRemoteLocks_FullMethodName = "/proto.AuthService/ReplaceRemoteLocks" @@ -216,18 +228,21 @@ const ( AuthService_SetNetworkRestrictions_FullMethodName = "/proto.AuthService/SetNetworkRestrictions" AuthService_DeleteNetworkRestrictions_FullMethodName = "/proto.AuthService/DeleteNetworkRestrictions" AuthService_GetApps_FullMethodName = "/proto.AuthService/GetApps" + AuthService_ListApps_FullMethodName = "/proto.AuthService/ListApps" AuthService_GetApp_FullMethodName = "/proto.AuthService/GetApp" AuthService_CreateApp_FullMethodName = "/proto.AuthService/CreateApp" AuthService_UpdateApp_FullMethodName = "/proto.AuthService/UpdateApp" AuthService_DeleteApp_FullMethodName = "/proto.AuthService/DeleteApp" AuthService_DeleteAllApps_FullMethodName = "/proto.AuthService/DeleteAllApps" AuthService_GetDatabases_FullMethodName = "/proto.AuthService/GetDatabases" + AuthService_ListDatabases_FullMethodName = "/proto.AuthService/ListDatabases" AuthService_GetDatabase_FullMethodName = "/proto.AuthService/GetDatabase" AuthService_CreateDatabase_FullMethodName = "/proto.AuthService/CreateDatabase" AuthService_UpdateDatabase_FullMethodName = "/proto.AuthService/UpdateDatabase" AuthService_DeleteDatabase_FullMethodName = "/proto.AuthService/DeleteDatabase" AuthService_DeleteAllDatabases_FullMethodName = "/proto.AuthService/DeleteAllDatabases" AuthService_GetKubernetesClusters_FullMethodName = "/proto.AuthService/GetKubernetesClusters" + AuthService_ListKubernetesClusters_FullMethodName = "/proto.AuthService/ListKubernetesClusters" AuthService_GetKubernetesCluster_FullMethodName = "/proto.AuthService/GetKubernetesCluster" AuthService_CreateKubernetesCluster_FullMethodName = 
"/proto.AuthService/CreateKubernetesCluster" AuthService_UpdateKubernetesCluster_FullMethodName = "/proto.AuthService/UpdateKubernetesCluster" @@ -239,6 +254,7 @@ const ( AuthService_DeleteWindowsDesktopService_FullMethodName = "/proto.AuthService/DeleteWindowsDesktopService" AuthService_DeleteAllWindowsDesktopServices_FullMethodName = "/proto.AuthService/DeleteAllWindowsDesktopServices" AuthService_GetWindowsDesktops_FullMethodName = "/proto.AuthService/GetWindowsDesktops" + AuthService_ListWindowsDesktops_FullMethodName = "/proto.AuthService/ListWindowsDesktops" AuthService_CreateWindowsDesktop_FullMethodName = "/proto.AuthService/CreateWindowsDesktop" AuthService_UpdateWindowsDesktop_FullMethodName = "/proto.AuthService/UpdateWindowsDesktop" AuthService_UpsertWindowsDesktop_FullMethodName = "/proto.AuthService/UpsertWindowsDesktop" @@ -261,6 +277,7 @@ const ( AuthService_CreatePrivilegeToken_FullMethodName = "/proto.AuthService/CreatePrivilegeToken" AuthService_GetInstaller_FullMethodName = "/proto.AuthService/GetInstaller" AuthService_GetInstallers_FullMethodName = "/proto.AuthService/GetInstallers" + AuthService_ListInstallers_FullMethodName = "/proto.AuthService/ListInstallers" AuthService_SetInstaller_FullMethodName = "/proto.AuthService/SetInstaller" AuthService_DeleteInstaller_FullMethodName = "/proto.AuthService/DeleteInstaller" AuthService_DeleteAllInstallers_FullMethodName = "/proto.AuthService/DeleteAllInstallers" @@ -376,6 +393,8 @@ type AuthServiceClient interface { SubmitAccessReview(ctx context.Context, in *types.AccessReviewSubmission, opts ...grpc.CallOption) (*types.AccessRequestV3, error) // GetAccessCapabilities requests the access capabilities of a user. 
GetAccessCapabilities(ctx context.Context, in *types.AccessCapabilitiesRequest, opts ...grpc.CallOption) (*types.AccessCapabilities, error) + // GetRemoteAccessCapabilities requests the access capabilities for a user from a remote cluster + GetRemoteAccessCapabilities(ctx context.Context, in *types.RemoteAccessCapabilitiesRequest, opts ...grpc.CallOption) (*types.RemoteAccessCapabilities, error) // GetAccessRequestAllowedPromotions returns a list of allowed promotions from an access request to an access list. GetAccessRequestAllowedPromotions(ctx context.Context, in *AccessRequestAllowedPromotionRequest, opts ...grpc.CallOption) (*AccessRequestAllowedPromotionResponse, error) // GetPluginData gets all plugin data matching the supplied filter. @@ -393,6 +412,8 @@ type AuthServiceClient interface { // // Only local users may be reset. CreateResetPasswordToken(ctx context.Context, in *CreateResetPasswordTokenRequest, opts ...grpc.CallOption) (*types.UserTokenV3, error) + // ListResetPasswordTokens returns a page of user tokens. + ListResetPasswordTokens(ctx context.Context, in *ListResetPasswordTokenRequest, opts ...grpc.CallOption) (*ListResetPasswordTokenResponse, error) // Deprecated: Do not use. // GetUser gets a user resource by name. // @@ -438,6 +459,8 @@ type AuthServiceClient interface { CancelSemaphoreLease(ctx context.Context, in *types.SemaphoreLease, opts ...grpc.CallOption) (*emptypb.Empty, error) // GetSemaphores returns a list of all semaphores matching the supplied filter. GetSemaphores(ctx context.Context, in *types.SemaphoreFilter, opts ...grpc.CallOption) (*Semaphores, error) + // ListSemaphores returns a page of all semaphores matching the supplied filter. + ListSemaphores(ctx context.Context, in *ListSemaphoresRequest, opts ...grpc.CallOption) (*ListSemaphoresResponse, error) // DeleteSemaphore deletes a semaphore matching the supplied filter. 
DeleteSemaphore(ctx context.Context, in *types.SemaphoreFilter, opts ...grpc.CallOption) (*emptypb.Empty, error) // EmitAuditEvent emits audit event @@ -472,6 +495,8 @@ type AuthServiceClient interface { GetSnowflakeSession(ctx context.Context, in *GetSnowflakeSessionRequest, opts ...grpc.CallOption) (*GetSnowflakeSessionResponse, error) // GetSnowflakeSessions gets all Snowflake web sessions. GetSnowflakeSessions(ctx context.Context, in *emptypb.Empty, opts ...grpc.CallOption) (*GetSnowflakeSessionsResponse, error) + // ListSnowflakeSessions returns a page of Snowflake web sessions. + ListSnowflakeSessions(ctx context.Context, in *ListSnowflakeSessionsRequest, opts ...grpc.CallOption) (*ListSnowflakeSessionsResponse, error) // DeleteSnowflakeSession removes a Snowflake web session. DeleteSnowflakeSession(ctx context.Context, in *DeleteSnowflakeSessionRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) // DeleteAllSnowflakeSessions removes all Snowflake web sessions. @@ -509,6 +534,8 @@ type AuthServiceClient interface { GetWebToken(ctx context.Context, in *types.GetWebTokenRequest, opts ...grpc.CallOption) (*GetWebTokenResponse, error) // GetWebTokens gets all web tokens. GetWebTokens(ctx context.Context, in *emptypb.Empty, opts ...grpc.CallOption) (*GetWebTokensResponse, error) + // ListWebTokens returns a page of web tokens. + ListWebTokens(ctx context.Context, in *ListWebTokensRequest, opts ...grpc.CallOption) (*ListWebTokensResponse, error) // DeleteWebToken deletes a web token. DeleteWebToken(ctx context.Context, in *types.DeleteWebTokenRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) // DeleteAllWebTokens deletes all web tokens. @@ -547,6 +574,8 @@ type AuthServiceClient interface { GetRole(ctx context.Context, in *GetRoleRequest, opts ...grpc.CallOption) (*types.RoleV6, error) // ListRoles is a paginated role getter. 
ListRoles(ctx context.Context, in *ListRolesRequest, opts ...grpc.CallOption) (*ListRolesResponse, error) + // ListRequestableRoles is a paginated requestable role getter. + ListRequestableRoles(ctx context.Context, in *ListRequestableRolesRequest, opts ...grpc.CallOption) (*ListRequestableRolesResponse, error) // CreateRole creates a new role. CreateRole(ctx context.Context, in *CreateRoleRequest, opts ...grpc.CallOption) (*types.RoleV6, error) // UpdateRole updates an existing role. @@ -617,6 +646,8 @@ type AuthServiceClient interface { GetOIDCConnector(ctx context.Context, in *types.ResourceWithSecretsRequest, opts ...grpc.CallOption) (*types.OIDCConnectorV3, error) // GetOIDCConnectors gets all current OIDC connector resources. GetOIDCConnectors(ctx context.Context, in *types.ResourcesWithSecretsRequest, opts ...grpc.CallOption) (*types.OIDCConnectorV3List, error) + // ListOIDCConnectors returns a page of current OIDC connector resources. + ListOIDCConnectors(ctx context.Context, in *ListOIDCConnectorsRequest, opts ...grpc.CallOption) (*ListOIDCConnectorsResponse, error) // UpsertOIDCConnector creates a new OIDC connector in the backend. CreateOIDCConnector(ctx context.Context, in *CreateOIDCConnectorRequest, opts ...grpc.CallOption) (*types.OIDCConnectorV3, error) // UpsertOIDCConnector updates an existing OIDC connector in the backend. @@ -638,6 +669,8 @@ type AuthServiceClient interface { GetSAMLConnector(ctx context.Context, in *types.ResourceWithSecretsRequest, opts ...grpc.CallOption) (*types.SAMLConnectorV2, error) // GetSAMLConnectors gets all current SAML connector resources. GetSAMLConnectors(ctx context.Context, in *types.ResourcesWithSecretsRequest, opts ...grpc.CallOption) (*types.SAMLConnectorV2List, error) + // ListSAMLConnectors returns a page of current SAML connector resources. 
+ ListSAMLConnectors(ctx context.Context, in *ListSAMLConnectorsRequest, opts ...grpc.CallOption) (*ListSAMLConnectorsResponse, error) // CreateSAMLConnector creates a new SAML connector in the backend. CreateSAMLConnector(ctx context.Context, in *CreateSAMLConnectorRequest, opts ...grpc.CallOption) (*types.SAMLConnectorV2, error) // UpdateSAMLConnector updates an existing SAML connector in the backend. @@ -659,6 +692,8 @@ type AuthServiceClient interface { GetGithubConnector(ctx context.Context, in *types.ResourceWithSecretsRequest, opts ...grpc.CallOption) (*types.GithubConnectorV3, error) // GetGithubConnectors gets all current Github connector resources. GetGithubConnectors(ctx context.Context, in *types.ResourcesWithSecretsRequest, opts ...grpc.CallOption) (*types.GithubConnectorV3List, error) + // ListGithubConnectors returns a page of current Github connector resources. + ListGithubConnectors(ctx context.Context, in *ListGithubConnectorsRequest, opts ...grpc.CallOption) (*ListGithubConnectorsResponse, error) // CreateGithubConnector creates a new Github connector in the backend. CreateGithubConnector(ctx context.Context, in *CreateGithubConnectorRequest, opts ...grpc.CallOption) (*types.GithubConnectorV3, error) // UpdateGithubConnector updates an existing Github connector in the backend. @@ -701,8 +736,15 @@ type AuthServiceClient interface { DeleteTrustedCluster(ctx context.Context, in *types.ResourceRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) // GetToken retrieves a token described by the given request. GetToken(ctx context.Context, in *types.ResourceRequest, opts ...grpc.CallOption) (*types.ProvisionTokenV2, error) + // Deprecated: Do not use. // GetToken retrieves all tokens. + // Deprecated: Use [ListProvisionTokens], [GetStaticTokens], and [ListResetPasswordTokens] instead. 
+ // TODO(hugoShaka): DELETE IN 21.0.0 GetTokens(ctx context.Context, in *emptypb.Empty, opts ...grpc.CallOption) (*types.ProvisionTokenV2List, error) + // GetStaticTokens retrieves all static tokens. + GetStaticTokens(ctx context.Context, in *emptypb.Empty, opts ...grpc.CallOption) (*types.StaticTokensV2, error) + // ListProvisionTokens retrieves a paginated list of filtered provision tokens. + ListProvisionTokens(ctx context.Context, in *ListProvisionTokensRequest, opts ...grpc.CallOption) (*ListProvisionTokensResponse, error) // CreateTokenV2 creates a token in a backend. CreateTokenV2(ctx context.Context, in *CreateTokenV2Request, opts ...grpc.CallOption) (*emptypb.Empty, error) // UpsertTokenV2 upserts a token in a backend. @@ -761,6 +803,8 @@ type AuthServiceClient interface { GetLock(ctx context.Context, in *GetLockRequest, opts ...grpc.CallOption) (*types.LockV2, error) // GetLocks gets all/in-force locks that match at least one of the targets when specified. GetLocks(ctx context.Context, in *GetLocksRequest, opts ...grpc.CallOption) (*GetLocksResponse, error) + // ListLocks returns a page of locks matching a filter. + ListLocks(ctx context.Context, in *ListLocksRequest, opts ...grpc.CallOption) (*ListLocksResponse, error) // UpsertLock upserts a lock. UpsertLock(ctx context.Context, in *types.LockV2, opts ...grpc.CallOption) (*emptypb.Empty, error) // DeleteLock deletes a lock. @@ -777,6 +821,8 @@ type AuthServiceClient interface { DeleteNetworkRestrictions(ctx context.Context, in *emptypb.Empty, opts ...grpc.CallOption) (*emptypb.Empty, error) // GetApps returns all registered applications. GetApps(ctx context.Context, in *emptypb.Empty, opts ...grpc.CallOption) (*types.AppV3List, error) + // ListApps returns a page of registered applications. + ListApps(ctx context.Context, in *ListAppsRequest, opts ...grpc.CallOption) (*ListAppsResponse, error) // GetApp returns an application by name. 
GetApp(ctx context.Context, in *types.ResourceRequest, opts ...grpc.CallOption) (*types.AppV3, error) // CreateApp creates a new application resource. @@ -789,6 +835,8 @@ type AuthServiceClient interface { DeleteAllApps(ctx context.Context, in *emptypb.Empty, opts ...grpc.CallOption) (*emptypb.Empty, error) // GetDatabases returns all registered databases. GetDatabases(ctx context.Context, in *emptypb.Empty, opts ...grpc.CallOption) (*types.DatabaseV3List, error) + // ListDatabases returns a page of registered databases. + ListDatabases(ctx context.Context, in *ListDatabasesRequest, opts ...grpc.CallOption) (*ListDatabasesResponse, error) // GetDatabase returns a database by name. GetDatabase(ctx context.Context, in *types.ResourceRequest, opts ...grpc.CallOption) (*types.DatabaseV3, error) // CreateDatabase creates a new database resource. @@ -801,6 +849,8 @@ type AuthServiceClient interface { DeleteAllDatabases(ctx context.Context, in *emptypb.Empty, opts ...grpc.CallOption) (*emptypb.Empty, error) // GetKubernetesClusters returns all registered kubernetes clusters. GetKubernetesClusters(ctx context.Context, in *emptypb.Empty, opts ...grpc.CallOption) (*types.KubernetesClusterV3List, error) + // ListKubernetesClusters returns a page of registered kubernetes clusters. + ListKubernetesClusters(ctx context.Context, in *ListKubernetesClustersRequest, opts ...grpc.CallOption) (*ListKubernetesClustersResponse, error) // GetKubernetesCluster returns a kubernetes cluster by name. GetKubernetesCluster(ctx context.Context, in *types.ResourceRequest, opts ...grpc.CallOption) (*types.KubernetesClusterV3, error) // CreateKubernetesCluster creates a new kubernetes cluster resource. @@ -823,6 +873,8 @@ type AuthServiceClient interface { DeleteAllWindowsDesktopServices(ctx context.Context, in *emptypb.Empty, opts ...grpc.CallOption) (*emptypb.Empty, error) // GetWindowsDesktops returns all registered Windows desktop hosts matching the supplied filter. 
GetWindowsDesktops(ctx context.Context, in *types.WindowsDesktopFilter, opts ...grpc.CallOption) (*GetWindowsDesktopsResponse, error) + // ListWindowsDesktops returns a page of registered Windows desktop hosts. + ListWindowsDesktops(ctx context.Context, in *ListWindowsDesktopsRequest, opts ...grpc.CallOption) (*ListWindowsDesktopsResponse, error) // CreateWindowsDesktop registers a new Windows desktop host. CreateWindowsDesktop(ctx context.Context, in *types.WindowsDesktopV3, opts ...grpc.CallOption) (*emptypb.Empty, error) // UpdateWindowsDesktop updates an existing Windows desktop host. @@ -920,6 +972,8 @@ type AuthServiceClient interface { GetInstaller(ctx context.Context, in *types.ResourceRequest, opts ...grpc.CallOption) (*types.InstallerV1, error) // GetInstallers retrieves all of installer script resources. GetInstallers(ctx context.Context, in *emptypb.Empty, opts ...grpc.CallOption) (*types.InstallerV1List, error) + // ListInstallers returns a page of installer script resources. + ListInstallers(ctx context.Context, in *ListInstallersRequest, opts ...grpc.CallOption) (*ListInstallersResponse, error) // SetInstaller sets the installer script resource SetInstaller(ctx context.Context, in *types.InstallerV1, opts ...grpc.CallOption) (*emptypb.Empty, error) // DeleteInstaller removes the specified installer script resource @@ -1411,6 +1465,16 @@ func (c *authServiceClient) GetAccessCapabilities(ctx context.Context, in *types return out, nil } +func (c *authServiceClient) GetRemoteAccessCapabilities(ctx context.Context, in *types.RemoteAccessCapabilitiesRequest, opts ...grpc.CallOption) (*types.RemoteAccessCapabilities, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(types.RemoteAccessCapabilities) + err := c.cc.Invoke(ctx, AuthService_GetRemoteAccessCapabilities_FullMethodName, in, out, cOpts...) 
+ if err != nil { + return nil, err + } + return out, nil +} + func (c *authServiceClient) GetAccessRequestAllowedPromotions(ctx context.Context, in *AccessRequestAllowedPromotionRequest, opts ...grpc.CallOption) (*AccessRequestAllowedPromotionResponse, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) out := new(AccessRequestAllowedPromotionResponse) @@ -1471,6 +1535,16 @@ func (c *authServiceClient) CreateResetPasswordToken(ctx context.Context, in *Cr return out, nil } +func (c *authServiceClient) ListResetPasswordTokens(ctx context.Context, in *ListResetPasswordTokenRequest, opts ...grpc.CallOption) (*ListResetPasswordTokenResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListResetPasswordTokenResponse) + err := c.cc.Invoke(ctx, AuthService_ListResetPasswordTokens_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + // Deprecated: Do not use. func (c *authServiceClient) GetUser(ctx context.Context, in *GetUserRequest, opts ...grpc.CallOption) (*types.UserV2, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) @@ -1615,6 +1689,16 @@ func (c *authServiceClient) GetSemaphores(ctx context.Context, in *types.Semapho return out, nil } +func (c *authServiceClient) ListSemaphores(ctx context.Context, in *ListSemaphoresRequest, opts ...grpc.CallOption) (*ListSemaphoresResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListSemaphoresResponse) + err := c.cc.Invoke(ctx, AuthService_ListSemaphores_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + func (c *authServiceClient) DeleteSemaphore(ctx context.Context, in *types.SemaphoreFilter, opts ...grpc.CallOption) (*emptypb.Empty, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) 
out := new(emptypb.Empty) @@ -1778,6 +1862,16 @@ func (c *authServiceClient) GetSnowflakeSessions(ctx context.Context, in *emptyp return out, nil } +func (c *authServiceClient) ListSnowflakeSessions(ctx context.Context, in *ListSnowflakeSessionsRequest, opts ...grpc.CallOption) (*ListSnowflakeSessionsResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListSnowflakeSessionsResponse) + err := c.cc.Invoke(ctx, AuthService_ListSnowflakeSessions_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + func (c *authServiceClient) DeleteSnowflakeSession(ctx context.Context, in *DeleteSnowflakeSessionRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) out := new(emptypb.Empty) @@ -1944,6 +2038,16 @@ func (c *authServiceClient) GetWebTokens(ctx context.Context, in *emptypb.Empty, return out, nil } +func (c *authServiceClient) ListWebTokens(ctx context.Context, in *ListWebTokensRequest, opts ...grpc.CallOption) (*ListWebTokensResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListWebTokensResponse) + err := c.cc.Invoke(ctx, AuthService_ListWebTokens_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + func (c *authServiceClient) DeleteWebToken(ctx context.Context, in *types.DeleteWebTokenRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) out := new(emptypb.Empty) @@ -2114,6 +2218,16 @@ func (c *authServiceClient) ListRoles(ctx context.Context, in *ListRolesRequest, return out, nil } +func (c *authServiceClient) ListRequestableRoles(ctx context.Context, in *ListRequestableRolesRequest, opts ...grpc.CallOption) (*ListRequestableRolesResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) 
+ out := new(ListRequestableRolesResponse) + err := c.cc.Invoke(ctx, AuthService_ListRequestableRoles_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + func (c *authServiceClient) CreateRole(ctx context.Context, in *CreateRoleRequest, opts ...grpc.CallOption) (*types.RoleV6, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) out := new(types.RoleV6) @@ -2263,6 +2377,16 @@ func (c *authServiceClient) GetOIDCConnectors(ctx context.Context, in *types.Res return out, nil } +func (c *authServiceClient) ListOIDCConnectors(ctx context.Context, in *ListOIDCConnectorsRequest, opts ...grpc.CallOption) (*ListOIDCConnectorsResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListOIDCConnectorsResponse) + err := c.cc.Invoke(ctx, AuthService_ListOIDCConnectors_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + func (c *authServiceClient) CreateOIDCConnector(ctx context.Context, in *CreateOIDCConnectorRequest, opts ...grpc.CallOption) (*types.OIDCConnectorV3, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) out := new(types.OIDCConnectorV3) @@ -2354,6 +2478,16 @@ func (c *authServiceClient) GetSAMLConnectors(ctx context.Context, in *types.Res return out, nil } +func (c *authServiceClient) ListSAMLConnectors(ctx context.Context, in *ListSAMLConnectorsRequest, opts ...grpc.CallOption) (*ListSAMLConnectorsResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListSAMLConnectorsResponse) + err := c.cc.Invoke(ctx, AuthService_ListSAMLConnectors_FullMethodName, in, out, cOpts...) 
+ if err != nil { + return nil, err + } + return out, nil +} + func (c *authServiceClient) CreateSAMLConnector(ctx context.Context, in *CreateSAMLConnectorRequest, opts ...grpc.CallOption) (*types.SAMLConnectorV2, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) out := new(types.SAMLConnectorV2) @@ -2445,6 +2579,16 @@ func (c *authServiceClient) GetGithubConnectors(ctx context.Context, in *types.R return out, nil } +func (c *authServiceClient) ListGithubConnectors(ctx context.Context, in *ListGithubConnectorsRequest, opts ...grpc.CallOption) (*ListGithubConnectorsResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListGithubConnectorsResponse) + err := c.cc.Invoke(ctx, AuthService_ListGithubConnectors_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + func (c *authServiceClient) CreateGithubConnector(ctx context.Context, in *CreateGithubConnectorRequest, opts ...grpc.CallOption) (*types.GithubConnectorV3, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) out := new(types.GithubConnectorV3) @@ -2636,6 +2780,7 @@ func (c *authServiceClient) GetToken(ctx context.Context, in *types.ResourceRequ return out, nil } +// Deprecated: Do not use. func (c *authServiceClient) GetTokens(ctx context.Context, in *emptypb.Empty, opts ...grpc.CallOption) (*types.ProvisionTokenV2List, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) out := new(types.ProvisionTokenV2List) @@ -2646,6 +2791,26 @@ func (c *authServiceClient) GetTokens(ctx context.Context, in *emptypb.Empty, op return out, nil } +func (c *authServiceClient) GetStaticTokens(ctx context.Context, in *emptypb.Empty, opts ...grpc.CallOption) (*types.StaticTokensV2, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(types.StaticTokensV2) + err := c.cc.Invoke(ctx, AuthService_GetStaticTokens_FullMethodName, in, out, cOpts...) 
+ if err != nil { + return nil, err + } + return out, nil +} + +func (c *authServiceClient) ListProvisionTokens(ctx context.Context, in *ListProvisionTokensRequest, opts ...grpc.CallOption) (*ListProvisionTokensResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListProvisionTokensResponse) + err := c.cc.Invoke(ctx, AuthService_ListProvisionTokens_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + func (c *authServiceClient) CreateTokenV2(ctx context.Context, in *CreateTokenV2Request, opts ...grpc.CallOption) (*emptypb.Empty, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) out := new(emptypb.Empty) @@ -2855,6 +3020,16 @@ func (c *authServiceClient) GetLocks(ctx context.Context, in *GetLocksRequest, o return out, nil } +func (c *authServiceClient) ListLocks(ctx context.Context, in *ListLocksRequest, opts ...grpc.CallOption) (*ListLocksResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListLocksResponse) + err := c.cc.Invoke(ctx, AuthService_ListLocks_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + func (c *authServiceClient) UpsertLock(ctx context.Context, in *types.LockV2, opts ...grpc.CallOption) (*emptypb.Empty, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) out := new(emptypb.Empty) @@ -2944,6 +3119,16 @@ func (c *authServiceClient) GetApps(ctx context.Context, in *emptypb.Empty, opts return out, nil } +func (c *authServiceClient) ListApps(ctx context.Context, in *ListAppsRequest, opts ...grpc.CallOption) (*ListAppsResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListAppsResponse) + err := c.cc.Invoke(ctx, AuthService_ListApps_FullMethodName, in, out, cOpts...) 
+ if err != nil { + return nil, err + } + return out, nil +} + func (c *authServiceClient) GetApp(ctx context.Context, in *types.ResourceRequest, opts ...grpc.CallOption) (*types.AppV3, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) out := new(types.AppV3) @@ -3004,6 +3189,16 @@ func (c *authServiceClient) GetDatabases(ctx context.Context, in *emptypb.Empty, return out, nil } +func (c *authServiceClient) ListDatabases(ctx context.Context, in *ListDatabasesRequest, opts ...grpc.CallOption) (*ListDatabasesResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListDatabasesResponse) + err := c.cc.Invoke(ctx, AuthService_ListDatabases_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + func (c *authServiceClient) GetDatabase(ctx context.Context, in *types.ResourceRequest, opts ...grpc.CallOption) (*types.DatabaseV3, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) out := new(types.DatabaseV3) @@ -3064,6 +3259,16 @@ func (c *authServiceClient) GetKubernetesClusters(ctx context.Context, in *empty return out, nil } +func (c *authServiceClient) ListKubernetesClusters(ctx context.Context, in *ListKubernetesClustersRequest, opts ...grpc.CallOption) (*ListKubernetesClustersResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListKubernetesClustersResponse) + err := c.cc.Invoke(ctx, AuthService_ListKubernetesClusters_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + func (c *authServiceClient) GetKubernetesCluster(ctx context.Context, in *types.ResourceRequest, opts ...grpc.CallOption) (*types.KubernetesClusterV3, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) 
out := new(types.KubernetesClusterV3) @@ -3174,6 +3379,16 @@ func (c *authServiceClient) GetWindowsDesktops(ctx context.Context, in *types.Wi return out, nil } +func (c *authServiceClient) ListWindowsDesktops(ctx context.Context, in *ListWindowsDesktopsRequest, opts ...grpc.CallOption) (*ListWindowsDesktopsResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListWindowsDesktopsResponse) + err := c.cc.Invoke(ctx, AuthService_ListWindowsDesktops_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + func (c *authServiceClient) CreateWindowsDesktop(ctx context.Context, in *types.WindowsDesktopV3, opts ...grpc.CallOption) (*emptypb.Empty, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) out := new(emptypb.Empty) @@ -3394,6 +3609,16 @@ func (c *authServiceClient) GetInstallers(ctx context.Context, in *emptypb.Empty return out, nil } +func (c *authServiceClient) ListInstallers(ctx context.Context, in *ListInstallersRequest, opts ...grpc.CallOption) (*ListInstallersResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListInstallersResponse) + err := c.cc.Invoke(ctx, AuthService_ListInstallers_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + func (c *authServiceClient) SetInstaller(ctx context.Context, in *types.InstallerV1, opts ...grpc.CallOption) (*emptypb.Empty, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) out := new(emptypb.Empty) @@ -3804,6 +4029,8 @@ type AuthServiceServer interface { SubmitAccessReview(context.Context, *types.AccessReviewSubmission) (*types.AccessRequestV3, error) // GetAccessCapabilities requests the access capabilities of a user. 
GetAccessCapabilities(context.Context, *types.AccessCapabilitiesRequest) (*types.AccessCapabilities, error) + // GetRemoteAccessCapabilities requests the access capabilities for a user from a remote cluster + GetRemoteAccessCapabilities(context.Context, *types.RemoteAccessCapabilitiesRequest) (*types.RemoteAccessCapabilities, error) // GetAccessRequestAllowedPromotions returns a list of allowed promotions from an access request to an access list. GetAccessRequestAllowedPromotions(context.Context, *AccessRequestAllowedPromotionRequest) (*AccessRequestAllowedPromotionResponse, error) // GetPluginData gets all plugin data matching the supplied filter. @@ -3821,6 +4048,8 @@ type AuthServiceServer interface { // // Only local users may be reset. CreateResetPasswordToken(context.Context, *CreateResetPasswordTokenRequest) (*types.UserTokenV3, error) + // ListResetPasswordTokens returns a page of user tokens. + ListResetPasswordTokens(context.Context, *ListResetPasswordTokenRequest) (*ListResetPasswordTokenResponse, error) // Deprecated: Do not use. // GetUser gets a user resource by name. // @@ -3866,6 +4095,8 @@ type AuthServiceServer interface { CancelSemaphoreLease(context.Context, *types.SemaphoreLease) (*emptypb.Empty, error) // GetSemaphores returns a list of all semaphores matching the supplied filter. GetSemaphores(context.Context, *types.SemaphoreFilter) (*Semaphores, error) + // ListSemaphores returns a page of all semaphores matching the supplied filter. + ListSemaphores(context.Context, *ListSemaphoresRequest) (*ListSemaphoresResponse, error) // DeleteSemaphore deletes a semaphore matching the supplied filter. DeleteSemaphore(context.Context, *types.SemaphoreFilter) (*emptypb.Empty, error) // EmitAuditEvent emits audit event @@ -3900,6 +4131,8 @@ type AuthServiceServer interface { GetSnowflakeSession(context.Context, *GetSnowflakeSessionRequest) (*GetSnowflakeSessionResponse, error) // GetSnowflakeSessions gets all Snowflake web sessions. 
GetSnowflakeSessions(context.Context, *emptypb.Empty) (*GetSnowflakeSessionsResponse, error) + // ListSnowflakeSessions returns a page of Snowflake web sessions. + ListSnowflakeSessions(context.Context, *ListSnowflakeSessionsRequest) (*ListSnowflakeSessionsResponse, error) // DeleteSnowflakeSession removes a Snowflake web session. DeleteSnowflakeSession(context.Context, *DeleteSnowflakeSessionRequest) (*emptypb.Empty, error) // DeleteAllSnowflakeSessions removes all Snowflake web sessions. @@ -3937,6 +4170,8 @@ type AuthServiceServer interface { GetWebToken(context.Context, *types.GetWebTokenRequest) (*GetWebTokenResponse, error) // GetWebTokens gets all web tokens. GetWebTokens(context.Context, *emptypb.Empty) (*GetWebTokensResponse, error) + // ListWebTokens returns a page of web tokens. + ListWebTokens(context.Context, *ListWebTokensRequest) (*ListWebTokensResponse, error) // DeleteWebToken deletes a web token. DeleteWebToken(context.Context, *types.DeleteWebTokenRequest) (*emptypb.Empty, error) // DeleteAllWebTokens deletes all web tokens. @@ -3975,6 +4210,8 @@ type AuthServiceServer interface { GetRole(context.Context, *GetRoleRequest) (*types.RoleV6, error) // ListRoles is a paginated role getter. ListRoles(context.Context, *ListRolesRequest) (*ListRolesResponse, error) + // ListRequestableRoles is a paginated requestable role getter. + ListRequestableRoles(context.Context, *ListRequestableRolesRequest) (*ListRequestableRolesResponse, error) // CreateRole creates a new role. CreateRole(context.Context, *CreateRoleRequest) (*types.RoleV6, error) // UpdateRole updates an existing role. @@ -4045,6 +4282,8 @@ type AuthServiceServer interface { GetOIDCConnector(context.Context, *types.ResourceWithSecretsRequest) (*types.OIDCConnectorV3, error) // GetOIDCConnectors gets all current OIDC connector resources. 
GetOIDCConnectors(context.Context, *types.ResourcesWithSecretsRequest) (*types.OIDCConnectorV3List, error) + // ListOIDCConnectors returns a page of current OIDC connector resources. + ListOIDCConnectors(context.Context, *ListOIDCConnectorsRequest) (*ListOIDCConnectorsResponse, error) // UpsertOIDCConnector creates a new OIDC connector in the backend. CreateOIDCConnector(context.Context, *CreateOIDCConnectorRequest) (*types.OIDCConnectorV3, error) // UpsertOIDCConnector updates an existing OIDC connector in the backend. @@ -4066,6 +4305,8 @@ type AuthServiceServer interface { GetSAMLConnector(context.Context, *types.ResourceWithSecretsRequest) (*types.SAMLConnectorV2, error) // GetSAMLConnectors gets all current SAML connector resources. GetSAMLConnectors(context.Context, *types.ResourcesWithSecretsRequest) (*types.SAMLConnectorV2List, error) + // ListSAMLConnectors returns a page of current SAML connector resources. + ListSAMLConnectors(context.Context, *ListSAMLConnectorsRequest) (*ListSAMLConnectorsResponse, error) // CreateSAMLConnector creates a new SAML connector in the backend. CreateSAMLConnector(context.Context, *CreateSAMLConnectorRequest) (*types.SAMLConnectorV2, error) // UpdateSAMLConnector updates an existing SAML connector in the backend. @@ -4087,6 +4328,8 @@ type AuthServiceServer interface { GetGithubConnector(context.Context, *types.ResourceWithSecretsRequest) (*types.GithubConnectorV3, error) // GetGithubConnectors gets all current Github connector resources. GetGithubConnectors(context.Context, *types.ResourcesWithSecretsRequest) (*types.GithubConnectorV3List, error) + // ListGithubConnectors returns a page of current Github connector resources. + ListGithubConnectors(context.Context, *ListGithubConnectorsRequest) (*ListGithubConnectorsResponse, error) // CreateGithubConnector creates a new Github connector in the backend. 
CreateGithubConnector(context.Context, *CreateGithubConnectorRequest) (*types.GithubConnectorV3, error) // UpdateGithubConnector updates an existing Github connector in the backend. @@ -4129,8 +4372,15 @@ type AuthServiceServer interface { DeleteTrustedCluster(context.Context, *types.ResourceRequest) (*emptypb.Empty, error) // GetToken retrieves a token described by the given request. GetToken(context.Context, *types.ResourceRequest) (*types.ProvisionTokenV2, error) + // Deprecated: Do not use. // GetToken retrieves all tokens. + // Deprecated: Use [ListProvisionTokens], [GetStaticTokens], and [ListResetPasswordTokens] instead. + // TODO(hugoShaka): DELETE IN 21.0.0 GetTokens(context.Context, *emptypb.Empty) (*types.ProvisionTokenV2List, error) + // GetStaticTokens retrieves all static tokens. + GetStaticTokens(context.Context, *emptypb.Empty) (*types.StaticTokensV2, error) + // ListProvisionTokens retrieves a paginated list of filtered provision tokens. + ListProvisionTokens(context.Context, *ListProvisionTokensRequest) (*ListProvisionTokensResponse, error) // CreateTokenV2 creates a token in a backend. CreateTokenV2(context.Context, *CreateTokenV2Request) (*emptypb.Empty, error) // UpsertTokenV2 upserts a token in a backend. @@ -4189,6 +4439,8 @@ type AuthServiceServer interface { GetLock(context.Context, *GetLockRequest) (*types.LockV2, error) // GetLocks gets all/in-force locks that match at least one of the targets when specified. GetLocks(context.Context, *GetLocksRequest) (*GetLocksResponse, error) + // ListLocks returns a page of locks matching a filter. + ListLocks(context.Context, *ListLocksRequest) (*ListLocksResponse, error) // UpsertLock upserts a lock. UpsertLock(context.Context, *types.LockV2) (*emptypb.Empty, error) // DeleteLock deletes a lock. @@ -4205,6 +4457,8 @@ type AuthServiceServer interface { DeleteNetworkRestrictions(context.Context, *emptypb.Empty) (*emptypb.Empty, error) // GetApps returns all registered applications.
GetApps(context.Context, *emptypb.Empty) (*types.AppV3List, error) + // ListApps returns a page of registered applications. + ListApps(context.Context, *ListAppsRequest) (*ListAppsResponse, error) // GetApp returns an application by name. GetApp(context.Context, *types.ResourceRequest) (*types.AppV3, error) // CreateApp creates a new application resource. @@ -4217,6 +4471,8 @@ type AuthServiceServer interface { DeleteAllApps(context.Context, *emptypb.Empty) (*emptypb.Empty, error) // GetDatabases returns all registered databases. GetDatabases(context.Context, *emptypb.Empty) (*types.DatabaseV3List, error) + // ListDatabases returns a page of registered databases. + ListDatabases(context.Context, *ListDatabasesRequest) (*ListDatabasesResponse, error) // GetDatabase returns a database by name. GetDatabase(context.Context, *types.ResourceRequest) (*types.DatabaseV3, error) // CreateDatabase creates a new database resource. @@ -4229,6 +4485,8 @@ type AuthServiceServer interface { DeleteAllDatabases(context.Context, *emptypb.Empty) (*emptypb.Empty, error) // GetKubernetesClusters returns all registered kubernetes clusters. GetKubernetesClusters(context.Context, *emptypb.Empty) (*types.KubernetesClusterV3List, error) + // ListKubernetesClusters returns a page of registered kubernetes clusters. + ListKubernetesClusters(context.Context, *ListKubernetesClustersRequest) (*ListKubernetesClustersResponse, error) // GetKubernetesCluster returns a kubernetes cluster by name. GetKubernetesCluster(context.Context, *types.ResourceRequest) (*types.KubernetesClusterV3, error) // CreateKubernetesCluster creates a new kubernetes cluster resource. @@ -4251,6 +4509,8 @@ type AuthServiceServer interface { DeleteAllWindowsDesktopServices(context.Context, *emptypb.Empty) (*emptypb.Empty, error) // GetWindowsDesktops returns all registered Windows desktop hosts matching the supplied filter. 
GetWindowsDesktops(context.Context, *types.WindowsDesktopFilter) (*GetWindowsDesktopsResponse, error) + // ListWindowsDesktops returns a page of registered Windows desktop hosts. + ListWindowsDesktops(context.Context, *ListWindowsDesktopsRequest) (*ListWindowsDesktopsResponse, error) // CreateWindowsDesktop registers a new Windows desktop host. CreateWindowsDesktop(context.Context, *types.WindowsDesktopV3) (*emptypb.Empty, error) // UpdateWindowsDesktop updates an existing Windows desktop host. @@ -4348,6 +4608,8 @@ type AuthServiceServer interface { GetInstaller(context.Context, *types.ResourceRequest) (*types.InstallerV1, error) // GetInstallers retrieves all of installer script resources. GetInstallers(context.Context, *emptypb.Empty) (*types.InstallerV1List, error) + // ListInstallers returns a page of installer script resources. + ListInstallers(context.Context, *ListInstallersRequest) (*ListInstallersResponse, error) // SetInstaller sets the installer script resource SetInstaller(context.Context, *types.InstallerV1) (*emptypb.Empty, error) // DeleteInstaller removes the specified installer script resource @@ -4535,6 +4797,9 @@ func (UnimplementedAuthServiceServer) SubmitAccessReview(context.Context, *types func (UnimplementedAuthServiceServer) GetAccessCapabilities(context.Context, *types.AccessCapabilitiesRequest) (*types.AccessCapabilities, error) { return nil, status.Errorf(codes.Unimplemented, "method GetAccessCapabilities not implemented") } +func (UnimplementedAuthServiceServer) GetRemoteAccessCapabilities(context.Context, *types.RemoteAccessCapabilitiesRequest) (*types.RemoteAccessCapabilities, error) { + return nil, status.Errorf(codes.Unimplemented, "method GetRemoteAccessCapabilities not implemented") +} func (UnimplementedAuthServiceServer) GetAccessRequestAllowedPromotions(context.Context, *AccessRequestAllowedPromotionRequest) (*AccessRequestAllowedPromotionResponse, error) { return nil, status.Errorf(codes.Unimplemented, "method 
GetAccessRequestAllowedPromotions not implemented") } @@ -4553,6 +4818,9 @@ func (UnimplementedAuthServiceServer) GetResetPasswordToken(context.Context, *Ge func (UnimplementedAuthServiceServer) CreateResetPasswordToken(context.Context, *CreateResetPasswordTokenRequest) (*types.UserTokenV3, error) { return nil, status.Errorf(codes.Unimplemented, "method CreateResetPasswordToken not implemented") } +func (UnimplementedAuthServiceServer) ListResetPasswordTokens(context.Context, *ListResetPasswordTokenRequest) (*ListResetPasswordTokenResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListResetPasswordTokens not implemented") +} func (UnimplementedAuthServiceServer) GetUser(context.Context, *GetUserRequest) (*types.UserV2, error) { return nil, status.Errorf(codes.Unimplemented, "method GetUser not implemented") } @@ -4589,6 +4857,9 @@ func (UnimplementedAuthServiceServer) CancelSemaphoreLease(context.Context, *typ func (UnimplementedAuthServiceServer) GetSemaphores(context.Context, *types.SemaphoreFilter) (*Semaphores, error) { return nil, status.Errorf(codes.Unimplemented, "method GetSemaphores not implemented") } +func (UnimplementedAuthServiceServer) ListSemaphores(context.Context, *ListSemaphoresRequest) (*ListSemaphoresResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListSemaphores not implemented") +} func (UnimplementedAuthServiceServer) DeleteSemaphore(context.Context, *types.SemaphoreFilter) (*emptypb.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method DeleteSemaphore not implemented") } @@ -4637,6 +4908,9 @@ func (UnimplementedAuthServiceServer) GetSnowflakeSession(context.Context, *GetS func (UnimplementedAuthServiceServer) GetSnowflakeSessions(context.Context, *emptypb.Empty) (*GetSnowflakeSessionsResponse, error) { return nil, status.Errorf(codes.Unimplemented, "method GetSnowflakeSessions not implemented") } +func (UnimplementedAuthServiceServer) ListSnowflakeSessions(context.Context, 
*ListSnowflakeSessionsRequest) (*ListSnowflakeSessionsResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListSnowflakeSessions not implemented") +} func (UnimplementedAuthServiceServer) DeleteSnowflakeSession(context.Context, *DeleteSnowflakeSessionRequest) (*emptypb.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method DeleteSnowflakeSession not implemented") } @@ -4682,6 +4956,9 @@ func (UnimplementedAuthServiceServer) GetWebToken(context.Context, *types.GetWeb func (UnimplementedAuthServiceServer) GetWebTokens(context.Context, *emptypb.Empty) (*GetWebTokensResponse, error) { return nil, status.Errorf(codes.Unimplemented, "method GetWebTokens not implemented") } +func (UnimplementedAuthServiceServer) ListWebTokens(context.Context, *ListWebTokensRequest) (*ListWebTokensResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListWebTokens not implemented") +} func (UnimplementedAuthServiceServer) DeleteWebToken(context.Context, *types.DeleteWebTokenRequest) (*emptypb.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method DeleteWebToken not implemented") } @@ -4733,6 +5010,9 @@ func (UnimplementedAuthServiceServer) GetRole(context.Context, *GetRoleRequest) func (UnimplementedAuthServiceServer) ListRoles(context.Context, *ListRolesRequest) (*ListRolesResponse, error) { return nil, status.Errorf(codes.Unimplemented, "method ListRoles not implemented") } +func (UnimplementedAuthServiceServer) ListRequestableRoles(context.Context, *ListRequestableRolesRequest) (*ListRequestableRolesResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListRequestableRoles not implemented") +} func (UnimplementedAuthServiceServer) CreateRole(context.Context, *CreateRoleRequest) (*types.RoleV6, error) { return nil, status.Errorf(codes.Unimplemented, "method CreateRole not implemented") } @@ -4775,6 +5055,9 @@ func (UnimplementedAuthServiceServer) GetOIDCConnector(context.Context, *types.R 
func (UnimplementedAuthServiceServer) GetOIDCConnectors(context.Context, *types.ResourcesWithSecretsRequest) (*types.OIDCConnectorV3List, error) { return nil, status.Errorf(codes.Unimplemented, "method GetOIDCConnectors not implemented") } +func (UnimplementedAuthServiceServer) ListOIDCConnectors(context.Context, *ListOIDCConnectorsRequest) (*ListOIDCConnectorsResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListOIDCConnectors not implemented") +} func (UnimplementedAuthServiceServer) CreateOIDCConnector(context.Context, *CreateOIDCConnectorRequest) (*types.OIDCConnectorV3, error) { return nil, status.Errorf(codes.Unimplemented, "method CreateOIDCConnector not implemented") } @@ -4802,6 +5085,9 @@ func (UnimplementedAuthServiceServer) GetSAMLConnector(context.Context, *types.R func (UnimplementedAuthServiceServer) GetSAMLConnectors(context.Context, *types.ResourcesWithSecretsRequest) (*types.SAMLConnectorV2List, error) { return nil, status.Errorf(codes.Unimplemented, "method GetSAMLConnectors not implemented") } +func (UnimplementedAuthServiceServer) ListSAMLConnectors(context.Context, *ListSAMLConnectorsRequest) (*ListSAMLConnectorsResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListSAMLConnectors not implemented") +} func (UnimplementedAuthServiceServer) CreateSAMLConnector(context.Context, *CreateSAMLConnectorRequest) (*types.SAMLConnectorV2, error) { return nil, status.Errorf(codes.Unimplemented, "method CreateSAMLConnector not implemented") } @@ -4829,6 +5115,9 @@ func (UnimplementedAuthServiceServer) GetGithubConnector(context.Context, *types func (UnimplementedAuthServiceServer) GetGithubConnectors(context.Context, *types.ResourcesWithSecretsRequest) (*types.GithubConnectorV3List, error) { return nil, status.Errorf(codes.Unimplemented, "method GetGithubConnectors not implemented") } +func (UnimplementedAuthServiceServer) ListGithubConnectors(context.Context, *ListGithubConnectorsRequest) 
(*ListGithubConnectorsResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListGithubConnectors not implemented") +} func (UnimplementedAuthServiceServer) CreateGithubConnector(context.Context, *CreateGithubConnectorRequest) (*types.GithubConnectorV3, error) { return nil, status.Errorf(codes.Unimplemented, "method CreateGithubConnector not implemented") } @@ -4886,6 +5175,12 @@ func (UnimplementedAuthServiceServer) GetToken(context.Context, *types.ResourceR func (UnimplementedAuthServiceServer) GetTokens(context.Context, *emptypb.Empty) (*types.ProvisionTokenV2List, error) { return nil, status.Errorf(codes.Unimplemented, "method GetTokens not implemented") } +func (UnimplementedAuthServiceServer) GetStaticTokens(context.Context, *emptypb.Empty) (*types.StaticTokensV2, error) { + return nil, status.Errorf(codes.Unimplemented, "method GetStaticTokens not implemented") +} +func (UnimplementedAuthServiceServer) ListProvisionTokens(context.Context, *ListProvisionTokensRequest) (*ListProvisionTokensResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListProvisionTokens not implemented") +} func (UnimplementedAuthServiceServer) CreateTokenV2(context.Context, *CreateTokenV2Request) (*emptypb.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method CreateTokenV2 not implemented") } @@ -4946,6 +5241,9 @@ func (UnimplementedAuthServiceServer) GetLock(context.Context, *GetLockRequest) func (UnimplementedAuthServiceServer) GetLocks(context.Context, *GetLocksRequest) (*GetLocksResponse, error) { return nil, status.Errorf(codes.Unimplemented, "method GetLocks not implemented") } +func (UnimplementedAuthServiceServer) ListLocks(context.Context, *ListLocksRequest) (*ListLocksResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListLocks not implemented") +} func (UnimplementedAuthServiceServer) UpsertLock(context.Context, *types.LockV2) (*emptypb.Empty, error) { return nil, 
status.Errorf(codes.Unimplemented, "method UpsertLock not implemented") } @@ -4970,6 +5268,9 @@ func (UnimplementedAuthServiceServer) DeleteNetworkRestrictions(context.Context, func (UnimplementedAuthServiceServer) GetApps(context.Context, *emptypb.Empty) (*types.AppV3List, error) { return nil, status.Errorf(codes.Unimplemented, "method GetApps not implemented") } +func (UnimplementedAuthServiceServer) ListApps(context.Context, *ListAppsRequest) (*ListAppsResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListApps not implemented") +} func (UnimplementedAuthServiceServer) GetApp(context.Context, *types.ResourceRequest) (*types.AppV3, error) { return nil, status.Errorf(codes.Unimplemented, "method GetApp not implemented") } @@ -4988,6 +5289,9 @@ func (UnimplementedAuthServiceServer) DeleteAllApps(context.Context, *emptypb.Em func (UnimplementedAuthServiceServer) GetDatabases(context.Context, *emptypb.Empty) (*types.DatabaseV3List, error) { return nil, status.Errorf(codes.Unimplemented, "method GetDatabases not implemented") } +func (UnimplementedAuthServiceServer) ListDatabases(context.Context, *ListDatabasesRequest) (*ListDatabasesResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListDatabases not implemented") +} func (UnimplementedAuthServiceServer) GetDatabase(context.Context, *types.ResourceRequest) (*types.DatabaseV3, error) { return nil, status.Errorf(codes.Unimplemented, "method GetDatabase not implemented") } @@ -5006,6 +5310,9 @@ func (UnimplementedAuthServiceServer) DeleteAllDatabases(context.Context, *empty func (UnimplementedAuthServiceServer) GetKubernetesClusters(context.Context, *emptypb.Empty) (*types.KubernetesClusterV3List, error) { return nil, status.Errorf(codes.Unimplemented, "method GetKubernetesClusters not implemented") } +func (UnimplementedAuthServiceServer) ListKubernetesClusters(context.Context, *ListKubernetesClustersRequest) (*ListKubernetesClustersResponse, error) { + return nil, 
status.Errorf(codes.Unimplemented, "method ListKubernetesClusters not implemented") +} func (UnimplementedAuthServiceServer) GetKubernetesCluster(context.Context, *types.ResourceRequest) (*types.KubernetesClusterV3, error) { return nil, status.Errorf(codes.Unimplemented, "method GetKubernetesCluster not implemented") } @@ -5039,6 +5346,9 @@ func (UnimplementedAuthServiceServer) DeleteAllWindowsDesktopServices(context.Co func (UnimplementedAuthServiceServer) GetWindowsDesktops(context.Context, *types.WindowsDesktopFilter) (*GetWindowsDesktopsResponse, error) { return nil, status.Errorf(codes.Unimplemented, "method GetWindowsDesktops not implemented") } +func (UnimplementedAuthServiceServer) ListWindowsDesktops(context.Context, *ListWindowsDesktopsRequest) (*ListWindowsDesktopsResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListWindowsDesktops not implemented") +} func (UnimplementedAuthServiceServer) CreateWindowsDesktop(context.Context, *types.WindowsDesktopV3) (*emptypb.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method CreateWindowsDesktop not implemented") } @@ -5105,6 +5415,9 @@ func (UnimplementedAuthServiceServer) GetInstaller(context.Context, *types.Resou func (UnimplementedAuthServiceServer) GetInstallers(context.Context, *emptypb.Empty) (*types.InstallerV1List, error) { return nil, status.Errorf(codes.Unimplemented, "method GetInstallers not implemented") } +func (UnimplementedAuthServiceServer) ListInstallers(context.Context, *ListInstallersRequest) (*ListInstallersResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListInstallers not implemented") +} func (UnimplementedAuthServiceServer) SetInstaller(context.Context, *types.InstallerV1) (*emptypb.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method SetInstaller not implemented") } @@ -5772,6 +6085,24 @@ func _AuthService_GetAccessCapabilities_Handler(srv interface{}, ctx context.Con return interceptor(ctx, in, info, 
handler) } +func _AuthService_GetRemoteAccessCapabilities_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(types.RemoteAccessCapabilitiesRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AuthServiceServer).GetRemoteAccessCapabilities(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AuthService_GetRemoteAccessCapabilities_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AuthServiceServer).GetRemoteAccessCapabilities(ctx, req.(*types.RemoteAccessCapabilitiesRequest)) + } + return interceptor(ctx, in, info, handler) +} + func _AuthService_GetAccessRequestAllowedPromotions_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(AccessRequestAllowedPromotionRequest) if err := dec(in); err != nil { @@ -5880,6 +6211,24 @@ func _AuthService_CreateResetPasswordToken_Handler(srv interface{}, ctx context. 
return interceptor(ctx, in, info, handler) } +func _AuthService_ListResetPasswordTokens_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListResetPasswordTokenRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AuthServiceServer).ListResetPasswordTokens(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AuthService_ListResetPasswordTokens_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AuthServiceServer).ListResetPasswordTokens(ctx, req.(*ListResetPasswordTokenRequest)) + } + return interceptor(ctx, in, info, handler) +} + func _AuthService_GetUser_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(GetUserRequest) if err := dec(in); err != nil { @@ -6082,6 +6431,24 @@ func _AuthService_GetSemaphores_Handler(srv interface{}, ctx context.Context, de return interceptor(ctx, in, info, handler) } +func _AuthService_ListSemaphores_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListSemaphoresRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AuthServiceServer).ListSemaphores(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AuthService_ListSemaphores_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AuthServiceServer).ListSemaphores(ctx, req.(*ListSemaphoresRequest)) + } + return interceptor(ctx, in, info, handler) +} + func _AuthService_DeleteSemaphore_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := 
new(types.SemaphoreFilter) if err := dec(in); err != nil { @@ -6359,6 +6726,24 @@ func _AuthService_GetSnowflakeSessions_Handler(srv interface{}, ctx context.Cont return interceptor(ctx, in, info, handler) } +func _AuthService_ListSnowflakeSessions_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListSnowflakeSessionsRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AuthServiceServer).ListSnowflakeSessions(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AuthService_ListSnowflakeSessions_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AuthServiceServer).ListSnowflakeSessions(ctx, req.(*ListSnowflakeSessionsRequest)) + } + return interceptor(ctx, in, info, handler) +} + func _AuthService_DeleteSnowflakeSession_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(DeleteSnowflakeSessionRequest) if err := dec(in); err != nil { @@ -6622,6 +7007,24 @@ func _AuthService_GetWebTokens_Handler(srv interface{}, ctx context.Context, dec return interceptor(ctx, in, info, handler) } +func _AuthService_ListWebTokens_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListWebTokensRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AuthServiceServer).ListWebTokens(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AuthService_ListWebTokens_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AuthServiceServer).ListWebTokens(ctx, req.(*ListWebTokensRequest)) + } + return interceptor(ctx, in, info, handler) +} + 
func _AuthService_DeleteWebToken_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(types.DeleteWebTokenRequest) if err := dec(in); err != nil { @@ -6928,6 +7331,24 @@ func _AuthService_ListRoles_Handler(srv interface{}, ctx context.Context, dec fu return interceptor(ctx, in, info, handler) } +func _AuthService_ListRequestableRoles_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListRequestableRolesRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AuthServiceServer).ListRequestableRoles(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AuthService_ListRequestableRoles_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AuthServiceServer).ListRequestableRoles(ctx, req.(*ListRequestableRolesRequest)) + } + return interceptor(ctx, in, info, handler) +} + func _AuthService_CreateRole_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(CreateRoleRequest) if err := dec(in); err != nil { @@ -7158,6 +7579,24 @@ func _AuthService_GetOIDCConnectors_Handler(srv interface{}, ctx context.Context return interceptor(ctx, in, info, handler) } +func _AuthService_ListOIDCConnectors_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListOIDCConnectorsRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AuthServiceServer).ListOIDCConnectors(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AuthService_ListOIDCConnectors_FullMethodName, + } + handler := func(ctx context.Context, 
req interface{}) (interface{}, error) { + return srv.(AuthServiceServer).ListOIDCConnectors(ctx, req.(*ListOIDCConnectorsRequest)) + } + return interceptor(ctx, in, info, handler) +} + func _AuthService_CreateOIDCConnector_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(CreateOIDCConnectorRequest) if err := dec(in); err != nil { @@ -7320,6 +7759,24 @@ func _AuthService_GetSAMLConnectors_Handler(srv interface{}, ctx context.Context return interceptor(ctx, in, info, handler) } +func _AuthService_ListSAMLConnectors_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListSAMLConnectorsRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AuthServiceServer).ListSAMLConnectors(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AuthService_ListSAMLConnectors_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AuthServiceServer).ListSAMLConnectors(ctx, req.(*ListSAMLConnectorsRequest)) + } + return interceptor(ctx, in, info, handler) +} + func _AuthService_CreateSAMLConnector_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(CreateSAMLConnectorRequest) if err := dec(in); err != nil { @@ -7482,6 +7939,24 @@ func _AuthService_GetGithubConnectors_Handler(srv interface{}, ctx context.Conte return interceptor(ctx, in, info, handler) } +func _AuthService_ListGithubConnectors_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListGithubConnectorsRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return 
srv.(AuthServiceServer).ListGithubConnectors(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AuthService_ListGithubConnectors_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AuthServiceServer).ListGithubConnectors(ctx, req.(*ListGithubConnectorsRequest)) + } + return interceptor(ctx, in, info, handler) +} + func _AuthService_CreateGithubConnector_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(CreateGithubConnectorRequest) if err := dec(in); err != nil { @@ -7817,6 +8292,42 @@ func _AuthService_GetTokens_Handler(srv interface{}, ctx context.Context, dec fu return interceptor(ctx, in, info, handler) } +func _AuthService_GetStaticTokens_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(emptypb.Empty) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AuthServiceServer).GetStaticTokens(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AuthService_GetStaticTokens_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AuthServiceServer).GetStaticTokens(ctx, req.(*emptypb.Empty)) + } + return interceptor(ctx, in, info, handler) +} + +func _AuthService_ListProvisionTokens_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListProvisionTokensRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AuthServiceServer).ListProvisionTokens(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AuthService_ListProvisionTokens_FullMethodName, + } + handler := func(ctx context.Context, req 
interface{}) (interface{}, error) { + return srv.(AuthServiceServer).ListProvisionTokens(ctx, req.(*ListProvisionTokensRequest)) + } + return interceptor(ctx, in, info, handler) +} + func _AuthService_CreateTokenV2_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(CreateTokenV2Request) if err := dec(in); err != nil { @@ -8177,6 +8688,24 @@ func _AuthService_GetLocks_Handler(srv interface{}, ctx context.Context, dec fun return interceptor(ctx, in, info, handler) } +func _AuthService_ListLocks_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListLocksRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AuthServiceServer).ListLocks(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AuthService_ListLocks_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AuthServiceServer).ListLocks(ctx, req.(*ListLocksRequest)) + } + return interceptor(ctx, in, info, handler) +} + func _AuthService_UpsertLock_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(types.LockV2) if err := dec(in); err != nil { @@ -8314,6 +8843,24 @@ func _AuthService_GetApps_Handler(srv interface{}, ctx context.Context, dec func return interceptor(ctx, in, info, handler) } +func _AuthService_ListApps_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListAppsRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AuthServiceServer).ListApps(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: 
AuthService_ListApps_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AuthServiceServer).ListApps(ctx, req.(*ListAppsRequest)) + } + return interceptor(ctx, in, info, handler) +} + func _AuthService_GetApp_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(types.ResourceRequest) if err := dec(in); err != nil { @@ -8422,6 +8969,24 @@ func _AuthService_GetDatabases_Handler(srv interface{}, ctx context.Context, dec return interceptor(ctx, in, info, handler) } +func _AuthService_ListDatabases_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListDatabasesRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AuthServiceServer).ListDatabases(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AuthService_ListDatabases_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AuthServiceServer).ListDatabases(ctx, req.(*ListDatabasesRequest)) + } + return interceptor(ctx, in, info, handler) +} + func _AuthService_GetDatabase_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(types.ResourceRequest) if err := dec(in); err != nil { @@ -8530,6 +9095,24 @@ func _AuthService_GetKubernetesClusters_Handler(srv interface{}, ctx context.Con return interceptor(ctx, in, info, handler) } +func _AuthService_ListKubernetesClusters_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListKubernetesClustersRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return 
srv.(AuthServiceServer).ListKubernetesClusters(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AuthService_ListKubernetesClusters_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AuthServiceServer).ListKubernetesClusters(ctx, req.(*ListKubernetesClustersRequest)) + } + return interceptor(ctx, in, info, handler) +} + func _AuthService_GetKubernetesCluster_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(types.ResourceRequest) if err := dec(in); err != nil { @@ -8728,6 +9311,24 @@ func _AuthService_GetWindowsDesktops_Handler(srv interface{}, ctx context.Contex return interceptor(ctx, in, info, handler) } +func _AuthService_ListWindowsDesktops_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListWindowsDesktopsRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AuthServiceServer).ListWindowsDesktops(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AuthService_ListWindowsDesktops_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AuthServiceServer).ListWindowsDesktops(ctx, req.(*ListWindowsDesktopsRequest)) + } + return interceptor(ctx, in, info, handler) +} + func _AuthService_CreateWindowsDesktop_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(types.WindowsDesktopV3) if err := dec(in); err != nil { @@ -9124,6 +9725,24 @@ func _AuthService_GetInstallers_Handler(srv interface{}, ctx context.Context, de return interceptor(ctx, in, info, handler) } +func _AuthService_ListInstallers_Handler(srv interface{}, ctx context.Context, dec 
func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListInstallersRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AuthServiceServer).ListInstallers(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AuthService_ListInstallers_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AuthServiceServer).ListInstallers(ctx, req.(*ListInstallersRequest)) + } + return interceptor(ctx, in, info, handler) +} + func _AuthService_SetInstaller_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(types.InstallerV1) if err := dec(in); err != nil { @@ -9804,6 +10423,10 @@ var AuthService_ServiceDesc = grpc.ServiceDesc{ MethodName: "GetAccessCapabilities", Handler: _AuthService_GetAccessCapabilities_Handler, }, + { + MethodName: "GetRemoteAccessCapabilities", + Handler: _AuthService_GetRemoteAccessCapabilities_Handler, + }, { MethodName: "GetAccessRequestAllowedPromotions", Handler: _AuthService_GetAccessRequestAllowedPromotions_Handler, @@ -9828,6 +10451,10 @@ var AuthService_ServiceDesc = grpc.ServiceDesc{ MethodName: "CreateResetPasswordToken", Handler: _AuthService_CreateResetPasswordToken_Handler, }, + { + MethodName: "ListResetPasswordTokens", + Handler: _AuthService_ListResetPasswordTokens_Handler, + }, { MethodName: "GetUser", Handler: _AuthService_GetUser_Handler, @@ -9868,6 +10495,10 @@ var AuthService_ServiceDesc = grpc.ServiceDesc{ MethodName: "GetSemaphores", Handler: _AuthService_GetSemaphores_Handler, }, + { + MethodName: "ListSemaphores", + Handler: _AuthService_ListSemaphores_Handler, + }, { MethodName: "DeleteSemaphore", Handler: _AuthService_DeleteSemaphore_Handler, @@ -9928,6 +10559,10 @@ var AuthService_ServiceDesc = grpc.ServiceDesc{ MethodName: "GetSnowflakeSessions", Handler: 
_AuthService_GetSnowflakeSessions_Handler, }, + { + MethodName: "ListSnowflakeSessions", + Handler: _AuthService_ListSnowflakeSessions_Handler, + }, { MethodName: "DeleteSnowflakeSession", Handler: _AuthService_DeleteSnowflakeSession_Handler, @@ -9984,6 +10619,10 @@ var AuthService_ServiceDesc = grpc.ServiceDesc{ MethodName: "GetWebTokens", Handler: _AuthService_GetWebTokens_Handler, }, + { + MethodName: "ListWebTokens", + Handler: _AuthService_ListWebTokens_Handler, + }, { MethodName: "DeleteWebToken", Handler: _AuthService_DeleteWebToken_Handler, @@ -10052,6 +10691,10 @@ var AuthService_ServiceDesc = grpc.ServiceDesc{ MethodName: "ListRoles", Handler: _AuthService_ListRoles_Handler, }, + { + MethodName: "ListRequestableRoles", + Handler: _AuthService_ListRequestableRoles_Handler, + }, { MethodName: "CreateRole", Handler: _AuthService_CreateRole_Handler, @@ -10100,6 +10743,10 @@ var AuthService_ServiceDesc = grpc.ServiceDesc{ MethodName: "GetOIDCConnectors", Handler: _AuthService_GetOIDCConnectors_Handler, }, + { + MethodName: "ListOIDCConnectors", + Handler: _AuthService_ListOIDCConnectors_Handler, + }, { MethodName: "CreateOIDCConnector", Handler: _AuthService_CreateOIDCConnector_Handler, @@ -10136,6 +10783,10 @@ var AuthService_ServiceDesc = grpc.ServiceDesc{ MethodName: "GetSAMLConnectors", Handler: _AuthService_GetSAMLConnectors_Handler, }, + { + MethodName: "ListSAMLConnectors", + Handler: _AuthService_ListSAMLConnectors_Handler, + }, { MethodName: "CreateSAMLConnector", Handler: _AuthService_CreateSAMLConnector_Handler, @@ -10172,6 +10823,10 @@ var AuthService_ServiceDesc = grpc.ServiceDesc{ MethodName: "GetGithubConnectors", Handler: _AuthService_GetGithubConnectors_Handler, }, + { + MethodName: "ListGithubConnectors", + Handler: _AuthService_ListGithubConnectors_Handler, + }, { MethodName: "CreateGithubConnector", Handler: _AuthService_CreateGithubConnector_Handler, @@ -10244,6 +10899,14 @@ var AuthService_ServiceDesc = grpc.ServiceDesc{ MethodName: 
"GetTokens", Handler: _AuthService_GetTokens_Handler, }, + { + MethodName: "GetStaticTokens", + Handler: _AuthService_GetStaticTokens_Handler, + }, + { + MethodName: "ListProvisionTokens", + Handler: _AuthService_ListProvisionTokens_Handler, + }, { MethodName: "CreateTokenV2", Handler: _AuthService_CreateTokenV2_Handler, @@ -10324,6 +10987,10 @@ var AuthService_ServiceDesc = grpc.ServiceDesc{ MethodName: "GetLocks", Handler: _AuthService_GetLocks_Handler, }, + { + MethodName: "ListLocks", + Handler: _AuthService_ListLocks_Handler, + }, { MethodName: "UpsertLock", Handler: _AuthService_UpsertLock_Handler, @@ -10352,6 +11019,10 @@ var AuthService_ServiceDesc = grpc.ServiceDesc{ MethodName: "GetApps", Handler: _AuthService_GetApps_Handler, }, + { + MethodName: "ListApps", + Handler: _AuthService_ListApps_Handler, + }, { MethodName: "GetApp", Handler: _AuthService_GetApp_Handler, @@ -10376,6 +11047,10 @@ var AuthService_ServiceDesc = grpc.ServiceDesc{ MethodName: "GetDatabases", Handler: _AuthService_GetDatabases_Handler, }, + { + MethodName: "ListDatabases", + Handler: _AuthService_ListDatabases_Handler, + }, { MethodName: "GetDatabase", Handler: _AuthService_GetDatabase_Handler, @@ -10400,6 +11075,10 @@ var AuthService_ServiceDesc = grpc.ServiceDesc{ MethodName: "GetKubernetesClusters", Handler: _AuthService_GetKubernetesClusters_Handler, }, + { + MethodName: "ListKubernetesClusters", + Handler: _AuthService_ListKubernetesClusters_Handler, + }, { MethodName: "GetKubernetesCluster", Handler: _AuthService_GetKubernetesCluster_Handler, @@ -10444,6 +11123,10 @@ var AuthService_ServiceDesc = grpc.ServiceDesc{ MethodName: "GetWindowsDesktops", Handler: _AuthService_GetWindowsDesktops_Handler, }, + { + MethodName: "ListWindowsDesktops", + Handler: _AuthService_ListWindowsDesktops_Handler, + }, { MethodName: "CreateWindowsDesktop", Handler: _AuthService_CreateWindowsDesktop_Handler, @@ -10532,6 +11215,10 @@ var AuthService_ServiceDesc = grpc.ServiceDesc{ MethodName: 
"GetInstallers", Handler: _AuthService_GetInstallers_Handler, }, + { + MethodName: "ListInstallers", + Handler: _AuthService_ListInstallers_Handler, + }, { MethodName: "SetInstaller", Handler: _AuthService_SetInstaller_Handler, diff --git a/api/client/proto/event.pb.go b/api/client/proto/event.pb.go index 9fc303584626d..72daa606cac6d 100644 --- a/api/client/proto/event.pb.go +++ b/api/client/proto/event.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/legacy/client/proto/event.proto @@ -33,7 +33,10 @@ import ( v15 "github.com/gravitational/teleport/api/gen/proto/go/teleport/kubewaitingcontainer/v1" v19 "github.com/gravitational/teleport/api/gen/proto/go/teleport/machineid/v1" v16 "github.com/gravitational/teleport/api/gen/proto/go/teleport/notifications/v1" + v118 "github.com/gravitational/teleport/api/gen/proto/go/teleport/presence/v1" v113 "github.com/gravitational/teleport/api/gen/proto/go/teleport/provisioning/v1" + v119 "github.com/gravitational/teleport/api/gen/proto/go/teleport/recordingencryption/v1" + v117 "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/access/v1" v13 "github.com/gravitational/teleport/api/gen/proto/go/teleport/secreports/v1" v11 "github.com/gravitational/teleport/api/gen/proto/go/teleport/userloginstate/v1" v2 "github.com/gravitational/teleport/api/gen/proto/go/teleport/userprovisioning/v2" @@ -191,6 +194,12 @@ type Event struct { // *Event_WorkloadIdentityX509Revocation // *Event_HealthCheckConfig // *Event_AutoUpdateAgentReport + // *Event_ScopedRole + // *Event_ScopedRoleAssignment + // *Event_RelayServer + // *Event_RecordingEncryption + // *Event_Plugin + // *Event_AutoUpdateBotInstanceReport Resource isEvent_Resource `protobuf_oneof:"Resource"` unknownFields protoimpl.UnknownFields sizeCache protoimpl.SizeCache @@ -906,6 +915,60 @@ func (x *Event) GetAutoUpdateAgentReport() 
*v111.AutoUpdateAgentReport { return nil } +func (x *Event) GetScopedRole() *v117.ScopedRole { + if x != nil { + if x, ok := x.Resource.(*Event_ScopedRole); ok { + return x.ScopedRole + } + } + return nil +} + +func (x *Event) GetScopedRoleAssignment() *v117.ScopedRoleAssignment { + if x != nil { + if x, ok := x.Resource.(*Event_ScopedRoleAssignment); ok { + return x.ScopedRoleAssignment + } + } + return nil +} + +func (x *Event) GetRelayServer() *v118.RelayServer { + if x != nil { + if x, ok := x.Resource.(*Event_RelayServer); ok { + return x.RelayServer + } + } + return nil +} + +func (x *Event) GetRecordingEncryption() *v119.RecordingEncryption { + if x != nil { + if x, ok := x.Resource.(*Event_RecordingEncryption); ok { + return x.RecordingEncryption + } + } + return nil +} + +func (x *Event) GetPlugin() *types.PluginV1 { + if x != nil { + if x, ok := x.Resource.(*Event_Plugin); ok { + return x.Plugin + } + } + return nil +} + +func (x *Event) GetAutoUpdateBotInstanceReport() *v111.AutoUpdateBotInstanceReport { + if x != nil { + if x, ok := x.Resource.(*Event_AutoUpdateBotInstanceReport); ok { + return x.AutoUpdateBotInstanceReport + } + } + return nil +} + type isEvent_Resource interface { isEvent_Resource() } @@ -1286,6 +1349,35 @@ type Event_AutoUpdateAgentReport struct { AutoUpdateAgentReport *v111.AutoUpdateAgentReport `protobuf:"bytes,79,opt,name=AutoUpdateAgentReport,proto3,oneof"` } +type Event_ScopedRole struct { + // ScopedRole is a role that describes scoped privileges. + ScopedRole *v117.ScopedRole `protobuf:"bytes,80,opt,name=ScopedRole,proto3,oneof"` +} + +type Event_ScopedRoleAssignment struct { + // ScopedRoleAssignment is an assignment of one or more scoped roles to a user.
+ ScopedRoleAssignment *v117.ScopedRoleAssignment `protobuf:"bytes,81,opt,name=ScopedRoleAssignment,proto3,oneof"` +} + +type Event_RelayServer struct { + RelayServer *v118.RelayServer `protobuf:"bytes,82,opt,name=relay_server,json=relayServer,proto3,oneof"` +} + +type Event_RecordingEncryption struct { + // RecordingEncryption is a resource for controlling session recording encryption. + RecordingEncryption *v119.RecordingEncryption `protobuf:"bytes,83,opt,name=RecordingEncryption,proto3,oneof"` +} + +type Event_Plugin struct { + // PluginV1 is a resource for Teleport plugins. + Plugin *types.PluginV1 `protobuf:"bytes,84,opt,name=plugin,proto3,oneof"` +} + +type Event_AutoUpdateBotInstanceReport struct { + // AutoUpdateBotInstanceReport is a resource for counting connected bot instances. + AutoUpdateBotInstanceReport *v111.AutoUpdateBotInstanceReport `protobuf:"bytes,85,opt,name=AutoUpdateBotInstanceReport,proto3,oneof"` +} + func (*Event_ResourceHeader) isEvent_Resource() {} func (*Event_CertAuthority) isEvent_Resource() {} @@ -1434,11 +1526,23 @@ func (*Event_HealthCheckConfig) isEvent_Resource() {} func (*Event_AutoUpdateAgentReport) isEvent_Resource() {} +func (*Event_ScopedRole) isEvent_Resource() {} + +func (*Event_ScopedRoleAssignment) isEvent_Resource() {} + +func (*Event_RelayServer) isEvent_Resource() {} + +func (*Event_RecordingEncryption) isEvent_Resource() {} + +func (*Event_Plugin) isEvent_Resource() {} + +func (*Event_AutoUpdateBotInstanceReport) isEvent_Resource() {} + var File_teleport_legacy_client_proto_event_proto protoreflect.FileDescriptor const file_teleport_legacy_client_proto_event_proto_rawDesc = "" + "\n" + -
"(teleport/legacy/client/proto/event.proto\x12\x05proto\x1a'teleport/accesslist/v1/accesslist.proto\x1a?teleport/accessmonitoringrules/v1/access_monitoring_rules.proto\x1a'teleport/autoupdate/v1/autoupdate.proto\x1a5teleport/clusterconfig/v1/access_graph_settings.proto\x1a'teleport/crownjewel/v1/crownjewel.proto\x1a#teleport/dbobject/v1/dbobject.proto\x1a1teleport/discoveryconfig/v1/discoveryconfig.proto\x1a7teleport/healthcheckconfig/v1/health_check_config.proto\x1a/teleport/identitycenter/v1/identitycenter.proto\x1a;teleport/kubewaitingcontainer/v1/kubewaitingcontainer.proto\x1a!teleport/legacy/types/types.proto\x1a(teleport/machineid/v1/bot_instance.proto\x1a&teleport/machineid/v1/federation.proto\x1a-teleport/notifications/v1/notifications.proto\x1a+teleport/provisioning/v1/provisioning.proto\x1a'teleport/secreports/v1/secreports.proto\x1a/teleport/userloginstate/v1/userloginstate.proto\x1a1teleport/userprovisioning/v2/statichostuser.proto\x1a&teleport/usertasks/v1/user_tasks.proto\x1a+teleport/workloadidentity/v1/resource.proto\x1a6teleport/workloadidentity/v1/revocation_resource.proto\"\xdd,\n" + + 
"(teleport/legacy/client/proto/event.proto\x12\x05proto\x1a'teleport/accesslist/v1/accesslist.proto\x1a?teleport/accessmonitoringrules/v1/access_monitoring_rules.proto\x1a'teleport/autoupdate/v1/autoupdate.proto\x1a5teleport/clusterconfig/v1/access_graph_settings.proto\x1a'teleport/crownjewel/v1/crownjewel.proto\x1a#teleport/dbobject/v1/dbobject.proto\x1a1teleport/discoveryconfig/v1/discoveryconfig.proto\x1a7teleport/healthcheckconfig/v1/health_check_config.proto\x1a/teleport/identitycenter/v1/identitycenter.proto\x1a;teleport/kubewaitingcontainer/v1/kubewaitingcontainer.proto\x1a!teleport/legacy/types/types.proto\x1a(teleport/machineid/v1/bot_instance.proto\x1a&teleport/machineid/v1/federation.proto\x1a-teleport/notifications/v1/notifications.proto\x1a'teleport/presence/v1/relay_server.proto\x1a+teleport/provisioning/v1/provisioning.proto\x1a:teleport/recordingencryption/v1/recording_encryption.proto\x1a*teleport/scopes/access/v1/assignment.proto\x1a$teleport/scopes/access/v1/role.proto\x1a'teleport/secreports/v1/secreports.proto\x1a/teleport/userloginstate/v1/userloginstate.proto\x1a1teleport/userprovisioning/v2/statichostuser.proto\x1a&teleport/usertasks/v1/user_tasks.proto\x1a+teleport/workloadidentity/v1/resource.proto\x1a6teleport/workloadidentity/v1/revocation_resource.proto\"\xe30\n" + "\x05Event\x12$\n" + "\x04Type\x18\x01 \x01(\x0e2\x10.proto.OperationR\x04Type\x12?\n" + "\x0eResourceHeader\x18\x02 \x01(\v2\x15.types.ResourceHeaderH\x00R\x0eResourceHeader\x12>\n" + @@ -1525,7 +1629,15 @@ const file_teleport_legacy_client_proto_event_proto_rawDesc = "" + "\x10WorkloadIdentity\x18L \x01(\v2..teleport.workloadidentity.v1.WorkloadIdentityH\x00R\x10WorkloadIdentity\x12\x86\x01\n" + "\x1eWorkloadIdentityX509Revocation\x18M \x01(\v2<.teleport.workloadidentity.v1.WorkloadIdentityX509RevocationH\x00R\x1eWorkloadIdentityX509Revocation\x12`\n" + "\x11HealthCheckConfig\x18N \x01(\v20.teleport.healthcheckconfig.v1.HealthCheckConfigH\x00R\x11HealthCheckConfig\x12e\n" + 
- "\x15AutoUpdateAgentReport\x18O \x01(\v2-.teleport.autoupdate.v1.AutoUpdateAgentReportH\x00R\x15AutoUpdateAgentReportB\n" + + "\x15AutoUpdateAgentReport\x18O \x01(\v2-.teleport.autoupdate.v1.AutoUpdateAgentReportH\x00R\x15AutoUpdateAgentReport\x12G\n" + + "\n" + + "ScopedRole\x18P \x01(\v2%.teleport.scopes.access.v1.ScopedRoleH\x00R\n" + + "ScopedRole\x12e\n" + + "\x14ScopedRoleAssignment\x18Q \x01(\v2/.teleport.scopes.access.v1.ScopedRoleAssignmentH\x00R\x14ScopedRoleAssignment\x12F\n" + + "\frelay_server\x18R \x01(\v2!.teleport.presence.v1.RelayServerH\x00R\vrelayServer\x12h\n" + + "\x13RecordingEncryption\x18S \x01(\v24.teleport.recordingencryption.v1.RecordingEncryptionH\x00R\x13RecordingEncryption\x12)\n" + + "\x06plugin\x18T \x01(\v2\x0f.types.PluginV1H\x00R\x06plugin\x12w\n" + + "\x1bAutoUpdateBotInstanceReport\x18U \x01(\v23.teleport.autoupdate.v1.AutoUpdateBotInstanceReportH\x00R\x1bAutoUpdateBotInstanceReportB\n" + "\n" + "\bResourceJ\x04\b\a\x10\bJ\x04\b1\x102J\x04\b?\x10@J\x04\bD\x10ER\x12ExternalCloudAuditR\x0eStaticHostUserR\x13AutoUpdateAgentPlan**\n" + "\tOperation\x12\b\n" + @@ -1622,6 +1734,12 @@ var file_teleport_legacy_client_proto_event_proto_goTypes = []any{ (*v115.WorkloadIdentityX509Revocation)(nil), // 70: teleport.workloadidentity.v1.WorkloadIdentityX509Revocation (*v116.HealthCheckConfig)(nil), // 71: teleport.healthcheckconfig.v1.HealthCheckConfig (*v111.AutoUpdateAgentReport)(nil), // 72: teleport.autoupdate.v1.AutoUpdateAgentReport + (*v117.ScopedRole)(nil), // 73: teleport.scopes.access.v1.ScopedRole + (*v117.ScopedRoleAssignment)(nil), // 74: teleport.scopes.access.v1.ScopedRoleAssignment + (*v118.RelayServer)(nil), // 75: teleport.presence.v1.RelayServer + (*v119.RecordingEncryption)(nil), // 76: teleport.recordingencryption.v1.RecordingEncryption + (*types.PluginV1)(nil), // 77: types.PluginV1 + (*v111.AutoUpdateBotInstanceReport)(nil), // 78: teleport.autoupdate.v1.AutoUpdateBotInstanceReport } var 
file_teleport_legacy_client_proto_event_proto_depIdxs = []int32{ 0, // 0: proto.Event.Type:type_name -> proto.Operation @@ -1699,11 +1817,17 @@ var file_teleport_legacy_client_proto_event_proto_depIdxs = []int32{ 70, // 72: proto.Event.WorkloadIdentityX509Revocation:type_name -> teleport.workloadidentity.v1.WorkloadIdentityX509Revocation 71, // 73: proto.Event.HealthCheckConfig:type_name -> teleport.healthcheckconfig.v1.HealthCheckConfig 72, // 74: proto.Event.AutoUpdateAgentReport:type_name -> teleport.autoupdate.v1.AutoUpdateAgentReport - 75, // [75:75] is the sub-list for method output_type - 75, // [75:75] is the sub-list for method input_type - 75, // [75:75] is the sub-list for extension type_name - 75, // [75:75] is the sub-list for extension extendee - 0, // [0:75] is the sub-list for field type_name + 73, // 75: proto.Event.ScopedRole:type_name -> teleport.scopes.access.v1.ScopedRole + 74, // 76: proto.Event.ScopedRoleAssignment:type_name -> teleport.scopes.access.v1.ScopedRoleAssignment + 75, // 77: proto.Event.relay_server:type_name -> teleport.presence.v1.RelayServer + 76, // 78: proto.Event.RecordingEncryption:type_name -> teleport.recordingencryption.v1.RecordingEncryption + 77, // 79: proto.Event.plugin:type_name -> types.PluginV1 + 78, // 80: proto.Event.AutoUpdateBotInstanceReport:type_name -> teleport.autoupdate.v1.AutoUpdateBotInstanceReport + 81, // [81:81] is the sub-list for method output_type + 81, // [81:81] is the sub-list for method input_type + 81, // [81:81] is the sub-list for extension type_name + 81, // [81:81] is the sub-list for extension extendee + 0, // [0:81] is the sub-list for field type_name } func init() { file_teleport_legacy_client_proto_event_proto_init() } @@ -1786,6 +1910,12 @@ func file_teleport_legacy_client_proto_event_proto_init() { (*Event_WorkloadIdentityX509Revocation)(nil), (*Event_HealthCheckConfig)(nil), (*Event_AutoUpdateAgentReport)(nil), + (*Event_ScopedRole)(nil), + (*Event_ScopedRoleAssignment)(nil), + 
(*Event_RelayServer)(nil), + (*Event_RecordingEncryption)(nil), + (*Event_Plugin)(nil), + (*Event_AutoUpdateBotInstanceReport)(nil), } type x struct{} out := protoimpl.TypeBuilder{ diff --git a/api/client/proto/inventory.pb.go b/api/client/proto/inventory.pb.go index 7d12f39db8776..bdb21927db455 100644 --- a/api/client/proto/inventory.pb.go +++ b/api/client/proto/inventory.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/legacy/client/proto/inventory.proto @@ -24,6 +24,7 @@ package proto import ( + v1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/presence/v1" types "github.com/gravitational/teleport/api/types" protoreflect "google.golang.org/protobuf/reflect/protoreflect" protoimpl "google.golang.org/protobuf/runtime/protoimpl" @@ -93,6 +94,54 @@ func (LabelUpdateKind) EnumDescriptor() ([]byte, []int) { return file_teleport_legacy_client_proto_inventory_proto_rawDescGZIP(), []int{0} } +// StopHeartbeatKind is the type of heartbeat to stop. +type StopHeartbeatKind int32 + +const ( + StopHeartbeatKind_STOP_HEARTBEAT_KIND_UNSPECIFIED StopHeartbeatKind = 0 + // STOP_HEARTBEAT_KIND_DATABASE_SERVER means stop a database server heartbeat. + StopHeartbeatKind_STOP_HEARTBEAT_KIND_DATABASE_SERVER StopHeartbeatKind = 1 +) + +// Enum value maps for StopHeartbeatKind. 
+var ( + StopHeartbeatKind_name = map[int32]string{ + 0: "STOP_HEARTBEAT_KIND_UNSPECIFIED", + 1: "STOP_HEARTBEAT_KIND_DATABASE_SERVER", + } + StopHeartbeatKind_value = map[string]int32{ + "STOP_HEARTBEAT_KIND_UNSPECIFIED": 0, + "STOP_HEARTBEAT_KIND_DATABASE_SERVER": 1, + } +) + +func (x StopHeartbeatKind) Enum() *StopHeartbeatKind { + p := new(StopHeartbeatKind) + *p = x + return p +} + +func (x StopHeartbeatKind) String() string { + return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x)) +} + +func (StopHeartbeatKind) Descriptor() protoreflect.EnumDescriptor { + return file_teleport_legacy_client_proto_inventory_proto_enumTypes[1].Descriptor() +} + +func (StopHeartbeatKind) Type() protoreflect.EnumType { + return &file_teleport_legacy_client_proto_inventory_proto_enumTypes[1] +} + +func (x StopHeartbeatKind) Number() protoreflect.EnumNumber { + return protoreflect.EnumNumber(x) +} + +// Deprecated: Use StopHeartbeatKind.Descriptor instead. +func (StopHeartbeatKind) EnumDescriptor() ([]byte, []int) { + return file_teleport_legacy_client_proto_inventory_proto_rawDescGZIP(), []int{1} +} + // UpstreamInventoryOneOf is the upstream message for the inventory control stream, // sent from teleport instances to the auth server. 
type UpstreamInventoryOneOf struct { @@ -106,6 +155,7 @@ type UpstreamInventoryOneOf struct { // *UpstreamInventoryOneOf_Pong // *UpstreamInventoryOneOf_AgentMetadata // *UpstreamInventoryOneOf_Goodbye + // *UpstreamInventoryOneOf_StopHeartbeat Msg isUpstreamInventoryOneOf_Msg `protobuf_oneof:"Msg"` unknownFields protoimpl.UnknownFields sizeCache protoimpl.SizeCache @@ -193,6 +243,15 @@ func (x *UpstreamInventoryOneOf) GetGoodbye() *UpstreamInventoryGoodbye { return nil } +func (x *UpstreamInventoryOneOf) GetStopHeartbeat() *UpstreamInventoryStopHeartbeat { + if x != nil { + if x, ok := x.Msg.(*UpstreamInventoryOneOf_StopHeartbeat); ok { + return x.StopHeartbeat + } + } + return nil +} + type isUpstreamInventoryOneOf_Msg interface { isUpstreamInventoryOneOf_Msg() } @@ -222,6 +281,12 @@ type UpstreamInventoryOneOf_Goodbye struct { Goodbye *UpstreamInventoryGoodbye `protobuf:"bytes,5,opt,name=Goodbye,proto3,oneof"` } +type UpstreamInventoryOneOf_StopHeartbeat struct { + // UpstreamInventoryStopHeartbeat informs the upstream service that a + // heartbeat is stopping. + StopHeartbeat *UpstreamInventoryStopHeartbeat `protobuf:"bytes,6,opt,name=stop_heartbeat,json=stopHeartbeat,proto3,oneof"` +} + func (*UpstreamInventoryOneOf_Hello) isUpstreamInventoryOneOf_Msg() {} func (*UpstreamInventoryOneOf_Heartbeat) isUpstreamInventoryOneOf_Msg() {} @@ -232,6 +297,8 @@ func (*UpstreamInventoryOneOf_AgentMetadata) isUpstreamInventoryOneOf_Msg() {} func (*UpstreamInventoryOneOf_Goodbye) isUpstreamInventoryOneOf_Msg() {} +func (*UpstreamInventoryOneOf_StopHeartbeat) isUpstreamInventoryOneOf_Msg() {} + // DownstreamInventoryOneOf is the downstream message for the inventory control stream, // sent from auth servers to teleport instances. type DownstreamInventoryOneOf struct { @@ -461,7 +528,10 @@ type UpstreamInventoryHello struct { // ExternalUpgraderVersion identifies the external upgrader version. Empty if no upgrader is defined. 
ExternalUpgraderVersion string `protobuf:"bytes,6,opt,name=ExternalUpgraderVersion,proto3" json:"ExternalUpgraderVersion,omitempty"` // UpdaterInfo is used by Teleport to send information about how the Teleport updater is doing. - UpdaterInfo *types.UpdaterV2Info `protobuf:"bytes,8,opt,name=UpdaterInfo,proto3" json:"UpdaterInfo,omitempty"` + UpdaterInfo *types.UpdaterV2Info `protobuf:"bytes,8,opt,name=UpdaterInfo,proto3" json:"UpdaterInfo,omitempty"` + // The advertised scope of the instance. An instance's scope cannot change once assigned, so future + // heartbeats must include a scope value matching the one declared in the hello message. + Scope string `protobuf:"bytes,9,opt,name=scope,proto3" json:"scope,omitempty"` unknownFields protoimpl.UnknownFields sizeCache protoimpl.SizeCache } @@ -545,6 +615,13 @@ func (x *UpstreamInventoryHello) GetUpdaterInfo() *types.UpdaterV2Info { return nil } +func (x *UpstreamInventoryHello) GetScope() string { + if x != nil { + return x.Scope + } + return "" +} + // UpstreamInventoryAgentMetadata is the message sent up the inventory control stream containing // metadata about the instance. type UpstreamInventoryAgentMetadata struct { @@ -855,8 +932,10 @@ type InventoryHeartbeat struct { DatabaseServer *types.DatabaseServerV3 `protobuf:"bytes,3,opt,name=DatabaseServer,proto3" json:"DatabaseServer,omitempty"` // KubeServer is a complete kube server spec to be heartbeated. KubernetesServer *types.KubernetesServerV3 `protobuf:"bytes,4,opt,name=KubernetesServer,proto3" json:"KubernetesServer,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache + // A relay_server to be heartbeated.
+ RelayServer *v1.RelayServer `protobuf:"bytes,5,opt,name=relay_server,json=relayServer,proto3" json:"relay_server,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache } func (x *InventoryHeartbeat) Reset() { @@ -917,6 +996,13 @@ func (x *InventoryHeartbeat) GetKubernetesServer() *types.KubernetesServerV3 { return nil } +func (x *InventoryHeartbeat) GetRelayServer() *v1.RelayServer { + if x != nil { + return x.RelayServer + } + return nil +} + // UpstreamInventoryGoodbye informs the upstream service that instance // is terminating type UpstreamInventoryGoodbye struct { @@ -1105,6 +1191,62 @@ func (x *InventoryStatusSummary) GetServiceCounts() map[string]uint32 { return nil } +// UpstreamInventoryStopHeartbeat informs the upstream service that the +// heartbeat is stopping. +type UpstreamInventoryStopHeartbeat struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Kind is the kind of heartbeat to stop. + Kind StopHeartbeatKind `protobuf:"varint,1,opt,name=kind,proto3,enum=proto.StopHeartbeatKind" json:"kind,omitempty"` + // Name is the name of the heartbeat to stop.
+ Name string `protobuf:"bytes,2,opt,name=name,proto3" json:"name,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UpstreamInventoryStopHeartbeat) Reset() { + *x = UpstreamInventoryStopHeartbeat{} + mi := &file_teleport_legacy_client_proto_inventory_proto_msgTypes[13] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UpstreamInventoryStopHeartbeat) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UpstreamInventoryStopHeartbeat) ProtoMessage() {} + +func (x *UpstreamInventoryStopHeartbeat) ProtoReflect() protoreflect.Message { + mi := &file_teleport_legacy_client_proto_inventory_proto_msgTypes[13] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UpstreamInventoryStopHeartbeat.ProtoReflect.Descriptor instead. +func (*UpstreamInventoryStopHeartbeat) Descriptor() ([]byte, []int) { + return file_teleport_legacy_client_proto_inventory_proto_rawDescGZIP(), []int{13} +} + +func (x *UpstreamInventoryStopHeartbeat) GetKind() StopHeartbeatKind { + if x != nil { + return x.Kind + } + return StopHeartbeatKind_STOP_HEARTBEAT_KIND_UNSPECIFIED +} + +func (x *UpstreamInventoryStopHeartbeat) GetName() string { + if x != nil { + return x.Name + } + return "" +} + // SupportedCapabilities indicate which features of the ICS that // the connect auth server supports. This allows agents to determine // how they should interact with the auth server to maintain compatibility. @@ -1146,13 +1288,17 @@ type DownstreamInventoryHello_SupportedCapabilities struct { KubernetesHeartbeats bool `protobuf:"varint,17,opt,name=KubernetesHeartbeats,proto3" json:"KubernetesHeartbeats,omitempty"` // KubernetesCleanup indicates the ICS supports deleting kubernetes clusters when UpstreamInventoryGoodbye.DeleteResources is set. 
KubernetesCleanup bool `protobuf:"varint,18,opt,name=KubernetesCleanup,proto3" json:"KubernetesCleanup,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache + // Indicates that the ICS supports heartbeating relay_server entries as well as deleting them on disconnect if UpstreamInventoryGoodbye.DeleteResources is set. + RelayServerHeartbeatsCleanup bool `protobuf:"varint,19,opt,name=relay_server_heartbeats_cleanup,json=relayServerHeartbeatsCleanup,proto3" json:"relay_server_heartbeats_cleanup,omitempty"` + // DatabaseHeartbeatGracefulStop indicates the ICS supports stopping an individual database heartbeat. + DatabaseHeartbeatGracefulStop bool `protobuf:"varint,20,opt,name=database_heartbeat_graceful_stop,json=databaseHeartbeatGracefulStop,proto3" json:"database_heartbeat_graceful_stop,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache } func (x *DownstreamInventoryHello_SupportedCapabilities) Reset() { *x = DownstreamInventoryHello_SupportedCapabilities{} - mi := &file_teleport_legacy_client_proto_inventory_proto_msgTypes[13] + mi := &file_teleport_legacy_client_proto_inventory_proto_msgTypes[14] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1164,7 +1310,7 @@ func (x *DownstreamInventoryHello_SupportedCapabilities) String() string { func (*DownstreamInventoryHello_SupportedCapabilities) ProtoMessage() {} func (x *DownstreamInventoryHello_SupportedCapabilities) ProtoReflect() protoreflect.Message { - mi := &file_teleport_legacy_client_proto_inventory_proto_msgTypes[13] + mi := &file_teleport_legacy_client_proto_inventory_proto_msgTypes[14] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1306,17 +1452,32 @@ func (x *DownstreamInventoryHello_SupportedCapabilities) GetKubernetesCleanup() return false } +func (x *DownstreamInventoryHello_SupportedCapabilities) GetRelayServerHeartbeatsCleanup() bool { + if x != 
nil { + return x.RelayServerHeartbeatsCleanup + } + return false +} + +func (x *DownstreamInventoryHello_SupportedCapabilities) GetDatabaseHeartbeatGracefulStop() bool { + if x != nil { + return x.DatabaseHeartbeatGracefulStop + } + return false +} + var File_teleport_legacy_client_proto_inventory_proto protoreflect.FileDescriptor const file_teleport_legacy_client_proto_inventory_proto_rawDesc = "" + "\n" + - ",teleport/legacy/client/proto/inventory.proto\x12\x05proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a!teleport/legacy/types/types.proto\"\xd1\x02\n" + + ",teleport/legacy/client/proto/inventory.proto\x12\x05proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a!teleport/legacy/types/types.proto\x1a'teleport/presence/v1/relay_server.proto\"\xa1\x03\n" + "\x16UpstreamInventoryOneOf\x125\n" + "\x05Hello\x18\x01 \x01(\v2\x1d.proto.UpstreamInventoryHelloH\x00R\x05Hello\x129\n" + "\tHeartbeat\x18\x02 \x01(\v2\x19.proto.InventoryHeartbeatH\x00R\tHeartbeat\x122\n" + "\x04Pong\x18\x03 \x01(\v2\x1c.proto.UpstreamInventoryPongH\x00R\x04Pong\x12M\n" + "\rAgentMetadata\x18\x04 \x01(\v2%.proto.UpstreamInventoryAgentMetadataH\x00R\rAgentMetadata\x12;\n" + - "\aGoodbye\x18\x05 \x01(\v2\x1f.proto.UpstreamInventoryGoodbyeH\x00R\aGoodbyeB\x05\n" + + "\aGoodbye\x18\x05 \x01(\v2\x1f.proto.UpstreamInventoryGoodbyeH\x00R\aGoodbye\x12N\n" + + "\x0estop_heartbeat\x18\x06 \x01(\v2%.proto.UpstreamInventoryStopHeartbeatH\x00R\rstopHeartbeatB\x05\n" + "\x03Msg\"\xde\x01\n" + "\x18DownstreamInventoryOneOf\x127\n" + "\x05Hello\x18\x01 \x01(\v2\x1f.proto.DownstreamInventoryHelloH\x00R\x05Hello\x124\n" + @@ -1327,7 +1488,7 @@ const file_teleport_legacy_client_proto_inventory_proto_rawDesc = "" + "\x02ID\x18\x01 \x01(\x04R\x02ID\"e\n" + "\x15UpstreamInventoryPong\x12\x0e\n" + "\x02ID\x18\x01 \x01(\x04R\x02ID\x12<\n" + - "\vSystemClock\x18\x02 \x01(\v2\x1a.google.protobuf.TimestampR\vSystemClock\"\xb9\x02\n" + + "\vSystemClock\x18\x02 \x01(\v2\x1a.google.protobuf.TimestampR\vSystemClock\"\xcf\x02\n" 
+ "\x16UpstreamInventoryHello\x12\x18\n" + "\aVersion\x18\x01 \x01(\tR\aVersion\x12\x1a\n" + "\bServerID\x18\x02 \x01(\tR\bServerID\x12\x1a\n" + @@ -1335,7 +1496,8 @@ const file_teleport_legacy_client_proto_inventory_proto_rawDesc = "" + "\bHostname\x18\x04 \x01(\tR\bHostname\x12*\n" + "\x10ExternalUpgrader\x18\x05 \x01(\tR\x10ExternalUpgrader\x128\n" + "\x17ExternalUpgraderVersion\x18\x06 \x01(\tR\x17ExternalUpgraderVersion\x126\n" + - "\vUpdaterInfo\x18\b \x01(\v2\x14.types.UpdaterV2InfoR\vUpdaterInfoJ\x04\b\a\x10\bR\rUpdaterV2Info\"\xd4\x02\n" + + "\vUpdaterInfo\x18\b \x01(\v2\x14.types.UpdaterV2InfoR\vUpdaterInfo\x12\x14\n" + + "\x05scope\x18\t \x01(\tR\x05scopeJ\x04\b\a\x10\bR\rUpdaterV2Info\"\xd4\x02\n" + "\x1eUpstreamInventoryAgentMetadata\x12\x0e\n" + "\x02OS\x18\x01 \x01(\tR\x02OS\x12\x1c\n" + "\tOSVersion\x18\x02 \x01(\tR\tOSVersion\x12*\n" + @@ -1344,11 +1506,11 @@ const file_teleport_legacy_client_proto_inventory_proto_rawDesc = "" + "\x0eInstallMethods\x18\x05 \x03(\tR\x0eInstallMethods\x12*\n" + "\x10ContainerRuntime\x18\x06 \x01(\tR\x10ContainerRuntime\x124\n" + "\x15ContainerOrchestrator\x18\a \x01(\tR\x15ContainerOrchestrator\x12*\n" + - "\x10CloudEnvironment\x18\b \x01(\tR\x10CloudEnvironment\"\x9f\b\n" + + "\x10CloudEnvironment\x18\b \x01(\tR\x10CloudEnvironment\"\xaf\t\n" + "\x18DownstreamInventoryHello\x12\x18\n" + "\aVersion\x18\x01 \x01(\tR\aVersion\x12\x1a\n" + "\bServerID\x18\x02 \x01(\tR\bServerID\x12Y\n" + - "\fCapabilities\x18\x03 \x01(\v25.proto.DownstreamInventoryHello.SupportedCapabilitiesR\fCapabilities\x1a\xf1\x06\n" + + "\fCapabilities\x18\x03 \x01(\v25.proto.DownstreamInventoryHello.SupportedCapabilitiesR\fCapabilities\x1a\x81\b\n" + "\x15SupportedCapabilities\x12(\n" + "\x0fProxyHeartbeats\x18\x01 \x01(\bR\x0fProxyHeartbeats\x12\"\n" + "\fProxyCleanup\x18\x02 \x01(\bR\fProxyCleanup\x12&\n" + @@ -1370,7 +1532,9 @@ const file_teleport_legacy_client_proto_inventory_proto_rawDesc = "" + "\x1fWindowsDesktopServiceHeartbeats\x18\x0f 
\x01(\bR\x1fWindowsDesktopServiceHeartbeats\x12B\n" + "\x1cWindowsDesktopServiceCleanup\x18\x10 \x01(\bR\x1cWindowsDesktopServiceCleanup\x122\n" + "\x14KubernetesHeartbeats\x18\x11 \x01(\bR\x14KubernetesHeartbeats\x12,\n" + - "\x11KubernetesCleanup\x18\x12 \x01(\bR\x11KubernetesCleanup\"\xea\x01\n" + + "\x11KubernetesCleanup\x18\x12 \x01(\bR\x11KubernetesCleanup\x12E\n" + + "\x1frelay_server_heartbeats_cleanup\x18\x13 \x01(\bR\x1crelayServerHeartbeatsCleanup\x12G\n" + + " database_heartbeat_graceful_stop\x18\x14 \x01(\bR\x1ddatabaseHeartbeatGracefulStop\"\xea\x01\n" + "\x1cInventoryUpdateLabelsRequest\x12\x1a\n" + "\bServerID\x18\x01 \x01(\tR\bServerID\x12*\n" + "\x04Kind\x18\x02 \x01(\x0e2\x16.proto.LabelUpdateKindR\x04Kind\x12G\n" + @@ -1383,12 +1547,13 @@ const file_teleport_legacy_client_proto_inventory_proto_rawDesc = "" + "\x06Labels\x18\x02 \x03(\v22.proto.DownstreamInventoryUpdateLabels.LabelsEntryR\x06Labels\x1a9\n" + "\vLabelsEntry\x12\x10\n" + "\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n" + - "\x05value\x18\x02 \x01(\tR\x05value:\x028\x01\"\xfd\x01\n" + + "\x05value\x18\x02 \x01(\tR\x05value:\x028\x01\"\xc3\x02\n" + "\x12InventoryHeartbeat\x12-\n" + "\tSSHServer\x18\x01 \x01(\v2\x0f.types.ServerV2R\tSSHServer\x120\n" + "\tAppServer\x18\x02 \x01(\v2\x12.types.AppServerV3R\tAppServer\x12?\n" + "\x0eDatabaseServer\x18\x03 \x01(\v2\x17.types.DatabaseServerV3R\x0eDatabaseServer\x12E\n" + - "\x10KubernetesServer\x18\x04 \x01(\v2\x19.types.KubernetesServerV3R\x10KubernetesServer\"d\n" + + "\x10KubernetesServer\x18\x04 \x01(\v2\x19.types.KubernetesServerV3R\x10KubernetesServer\x12D\n" + + "\frelay_server\x18\x05 \x01(\v2!.teleport.presence.v1.RelayServerR\vrelayServer\"d\n" + "\x18UpstreamInventoryGoodbye\x12(\n" + "\x0fDeleteResources\x18\x01 \x01(\bR\x0fDeleteResources\x12\x1e\n" + "\n" + @@ -1410,10 +1575,16 @@ const file_teleport_legacy_client_proto_inventory_proto_rawDesc = "" + "\x05value\x18\x02 \x01(\rR\x05value:\x028\x01\x1a@\n" + 
"\x12ServiceCountsEntry\x12\x10\n" + "\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n" + - "\x05value\x18\x02 \x01(\rR\x05value:\x028\x01*:\n" + + "\x05value\x18\x02 \x01(\rR\x05value:\x028\x01\"b\n" + + "\x1eUpstreamInventoryStopHeartbeat\x12,\n" + + "\x04kind\x18\x01 \x01(\x0e2\x18.proto.StopHeartbeatKindR\x04kind\x12\x12\n" + + "\x04name\x18\x02 \x01(\tR\x04name*:\n" + "\x0fLabelUpdateKind\x12\r\n" + "\tSSHServer\x10\x00\x12\x18\n" + - "\x14SSHServerCloudLabels\x10\x01B4Z2github.com/gravitational/teleport/api/client/protob\x06proto3" + "\x14SSHServerCloudLabels\x10\x01*a\n" + + "\x11StopHeartbeatKind\x12#\n" + + "\x1fSTOP_HEARTBEAT_KIND_UNSPECIFIED\x10\x00\x12'\n" + + "#STOP_HEARTBEAT_KIND_DATABASE_SERVER\x10\x01B4Z2github.com/gravitational/teleport/api/client/protob\x06proto3" var ( file_teleport_legacy_client_proto_inventory_proto_rawDescOnce sync.Once @@ -1427,65 +1598,71 @@ func file_teleport_legacy_client_proto_inventory_proto_rawDescGZIP() []byte { return file_teleport_legacy_client_proto_inventory_proto_rawDescData } -var file_teleport_legacy_client_proto_inventory_proto_enumTypes = make([]protoimpl.EnumInfo, 1) -var file_teleport_legacy_client_proto_inventory_proto_msgTypes = make([]protoimpl.MessageInfo, 19) +var file_teleport_legacy_client_proto_inventory_proto_enumTypes = make([]protoimpl.EnumInfo, 2) +var file_teleport_legacy_client_proto_inventory_proto_msgTypes = make([]protoimpl.MessageInfo, 20) var file_teleport_legacy_client_proto_inventory_proto_goTypes = []any{ (LabelUpdateKind)(0), // 0: proto.LabelUpdateKind - (*UpstreamInventoryOneOf)(nil), // 1: proto.UpstreamInventoryOneOf - (*DownstreamInventoryOneOf)(nil), // 2: proto.DownstreamInventoryOneOf - (*DownstreamInventoryPing)(nil), // 3: proto.DownstreamInventoryPing - (*UpstreamInventoryPong)(nil), // 4: proto.UpstreamInventoryPong - (*UpstreamInventoryHello)(nil), // 5: proto.UpstreamInventoryHello - (*UpstreamInventoryAgentMetadata)(nil), // 6: proto.UpstreamInventoryAgentMetadata - 
(*DownstreamInventoryHello)(nil), // 7: proto.DownstreamInventoryHello - (*InventoryUpdateLabelsRequest)(nil), // 8: proto.InventoryUpdateLabelsRequest - (*DownstreamInventoryUpdateLabels)(nil), // 9: proto.DownstreamInventoryUpdateLabels - (*InventoryHeartbeat)(nil), // 10: proto.InventoryHeartbeat - (*UpstreamInventoryGoodbye)(nil), // 11: proto.UpstreamInventoryGoodbye - (*InventoryStatusRequest)(nil), // 12: proto.InventoryStatusRequest - (*InventoryStatusSummary)(nil), // 13: proto.InventoryStatusSummary - (*DownstreamInventoryHello_SupportedCapabilities)(nil), // 14: proto.DownstreamInventoryHello.SupportedCapabilities - nil, // 15: proto.InventoryUpdateLabelsRequest.LabelsEntry - nil, // 16: proto.DownstreamInventoryUpdateLabels.LabelsEntry - nil, // 17: proto.InventoryStatusSummary.VersionCountsEntry - nil, // 18: proto.InventoryStatusSummary.UpgraderCountsEntry - nil, // 19: proto.InventoryStatusSummary.ServiceCountsEntry - (*timestamppb.Timestamp)(nil), // 20: google.protobuf.Timestamp - (*types.UpdaterV2Info)(nil), // 21: types.UpdaterV2Info - (*types.ServerV2)(nil), // 22: types.ServerV2 - (*types.AppServerV3)(nil), // 23: types.AppServerV3 - (*types.DatabaseServerV3)(nil), // 24: types.DatabaseServerV3 - (*types.KubernetesServerV3)(nil), // 25: types.KubernetesServerV3 + (StopHeartbeatKind)(0), // 1: proto.StopHeartbeatKind + (*UpstreamInventoryOneOf)(nil), // 2: proto.UpstreamInventoryOneOf + (*DownstreamInventoryOneOf)(nil), // 3: proto.DownstreamInventoryOneOf + (*DownstreamInventoryPing)(nil), // 4: proto.DownstreamInventoryPing + (*UpstreamInventoryPong)(nil), // 5: proto.UpstreamInventoryPong + (*UpstreamInventoryHello)(nil), // 6: proto.UpstreamInventoryHello + (*UpstreamInventoryAgentMetadata)(nil), // 7: proto.UpstreamInventoryAgentMetadata + (*DownstreamInventoryHello)(nil), // 8: proto.DownstreamInventoryHello + (*InventoryUpdateLabelsRequest)(nil), // 9: proto.InventoryUpdateLabelsRequest + (*DownstreamInventoryUpdateLabels)(nil), // 10: 
proto.DownstreamInventoryUpdateLabels + (*InventoryHeartbeat)(nil), // 11: proto.InventoryHeartbeat + (*UpstreamInventoryGoodbye)(nil), // 12: proto.UpstreamInventoryGoodbye + (*InventoryStatusRequest)(nil), // 13: proto.InventoryStatusRequest + (*InventoryStatusSummary)(nil), // 14: proto.InventoryStatusSummary + (*UpstreamInventoryStopHeartbeat)(nil), // 15: proto.UpstreamInventoryStopHeartbeat + (*DownstreamInventoryHello_SupportedCapabilities)(nil), // 16: proto.DownstreamInventoryHello.SupportedCapabilities + nil, // 17: proto.InventoryUpdateLabelsRequest.LabelsEntry + nil, // 18: proto.DownstreamInventoryUpdateLabels.LabelsEntry + nil, // 19: proto.InventoryStatusSummary.VersionCountsEntry + nil, // 20: proto.InventoryStatusSummary.UpgraderCountsEntry + nil, // 21: proto.InventoryStatusSummary.ServiceCountsEntry + (*timestamppb.Timestamp)(nil), // 22: google.protobuf.Timestamp + (*types.UpdaterV2Info)(nil), // 23: types.UpdaterV2Info + (*types.ServerV2)(nil), // 24: types.ServerV2 + (*types.AppServerV3)(nil), // 25: types.AppServerV3 + (*types.DatabaseServerV3)(nil), // 26: types.DatabaseServerV3 + (*types.KubernetesServerV3)(nil), // 27: types.KubernetesServerV3 + (*v1.RelayServer)(nil), // 28: teleport.presence.v1.RelayServer } var file_teleport_legacy_client_proto_inventory_proto_depIdxs = []int32{ - 5, // 0: proto.UpstreamInventoryOneOf.Hello:type_name -> proto.UpstreamInventoryHello - 10, // 1: proto.UpstreamInventoryOneOf.Heartbeat:type_name -> proto.InventoryHeartbeat - 4, // 2: proto.UpstreamInventoryOneOf.Pong:type_name -> proto.UpstreamInventoryPong - 6, // 3: proto.UpstreamInventoryOneOf.AgentMetadata:type_name -> proto.UpstreamInventoryAgentMetadata - 11, // 4: proto.UpstreamInventoryOneOf.Goodbye:type_name -> proto.UpstreamInventoryGoodbye - 7, // 5: proto.DownstreamInventoryOneOf.Hello:type_name -> proto.DownstreamInventoryHello - 3, // 6: proto.DownstreamInventoryOneOf.Ping:type_name -> proto.DownstreamInventoryPing - 9, // 7: 
proto.DownstreamInventoryOneOf.UpdateLabels:type_name -> proto.DownstreamInventoryUpdateLabels - 20, // 8: proto.UpstreamInventoryPong.SystemClock:type_name -> google.protobuf.Timestamp - 21, // 9: proto.UpstreamInventoryHello.UpdaterInfo:type_name -> types.UpdaterV2Info - 14, // 10: proto.DownstreamInventoryHello.Capabilities:type_name -> proto.DownstreamInventoryHello.SupportedCapabilities - 0, // 11: proto.InventoryUpdateLabelsRequest.Kind:type_name -> proto.LabelUpdateKind - 15, // 12: proto.InventoryUpdateLabelsRequest.Labels:type_name -> proto.InventoryUpdateLabelsRequest.LabelsEntry - 0, // 13: proto.DownstreamInventoryUpdateLabels.Kind:type_name -> proto.LabelUpdateKind - 16, // 14: proto.DownstreamInventoryUpdateLabels.Labels:type_name -> proto.DownstreamInventoryUpdateLabels.LabelsEntry - 22, // 15: proto.InventoryHeartbeat.SSHServer:type_name -> types.ServerV2 - 23, // 16: proto.InventoryHeartbeat.AppServer:type_name -> types.AppServerV3 - 24, // 17: proto.InventoryHeartbeat.DatabaseServer:type_name -> types.DatabaseServerV3 - 25, // 18: proto.InventoryHeartbeat.KubernetesServer:type_name -> types.KubernetesServerV3 - 5, // 19: proto.InventoryStatusSummary.Connected:type_name -> proto.UpstreamInventoryHello - 17, // 20: proto.InventoryStatusSummary.VersionCounts:type_name -> proto.InventoryStatusSummary.VersionCountsEntry - 18, // 21: proto.InventoryStatusSummary.UpgraderCounts:type_name -> proto.InventoryStatusSummary.UpgraderCountsEntry - 19, // 22: proto.InventoryStatusSummary.ServiceCounts:type_name -> proto.InventoryStatusSummary.ServiceCountsEntry - 23, // [23:23] is the sub-list for method output_type - 23, // [23:23] is the sub-list for method input_type - 23, // [23:23] is the sub-list for extension type_name - 23, // [23:23] is the sub-list for extension extendee - 0, // [0:23] is the sub-list for field type_name + 6, // 0: proto.UpstreamInventoryOneOf.Hello:type_name -> proto.UpstreamInventoryHello + 11, // 1: 
proto.UpstreamInventoryOneOf.Heartbeat:type_name -> proto.InventoryHeartbeat + 5, // 2: proto.UpstreamInventoryOneOf.Pong:type_name -> proto.UpstreamInventoryPong + 7, // 3: proto.UpstreamInventoryOneOf.AgentMetadata:type_name -> proto.UpstreamInventoryAgentMetadata + 12, // 4: proto.UpstreamInventoryOneOf.Goodbye:type_name -> proto.UpstreamInventoryGoodbye + 15, // 5: proto.UpstreamInventoryOneOf.stop_heartbeat:type_name -> proto.UpstreamInventoryStopHeartbeat + 8, // 6: proto.DownstreamInventoryOneOf.Hello:type_name -> proto.DownstreamInventoryHello + 4, // 7: proto.DownstreamInventoryOneOf.Ping:type_name -> proto.DownstreamInventoryPing + 10, // 8: proto.DownstreamInventoryOneOf.UpdateLabels:type_name -> proto.DownstreamInventoryUpdateLabels + 22, // 9: proto.UpstreamInventoryPong.SystemClock:type_name -> google.protobuf.Timestamp + 23, // 10: proto.UpstreamInventoryHello.UpdaterInfo:type_name -> types.UpdaterV2Info + 16, // 11: proto.DownstreamInventoryHello.Capabilities:type_name -> proto.DownstreamInventoryHello.SupportedCapabilities + 0, // 12: proto.InventoryUpdateLabelsRequest.Kind:type_name -> proto.LabelUpdateKind + 17, // 13: proto.InventoryUpdateLabelsRequest.Labels:type_name -> proto.InventoryUpdateLabelsRequest.LabelsEntry + 0, // 14: proto.DownstreamInventoryUpdateLabels.Kind:type_name -> proto.LabelUpdateKind + 18, // 15: proto.DownstreamInventoryUpdateLabels.Labels:type_name -> proto.DownstreamInventoryUpdateLabels.LabelsEntry + 24, // 16: proto.InventoryHeartbeat.SSHServer:type_name -> types.ServerV2 + 25, // 17: proto.InventoryHeartbeat.AppServer:type_name -> types.AppServerV3 + 26, // 18: proto.InventoryHeartbeat.DatabaseServer:type_name -> types.DatabaseServerV3 + 27, // 19: proto.InventoryHeartbeat.KubernetesServer:type_name -> types.KubernetesServerV3 + 28, // 20: proto.InventoryHeartbeat.relay_server:type_name -> teleport.presence.v1.RelayServer + 6, // 21: proto.InventoryStatusSummary.Connected:type_name -> proto.UpstreamInventoryHello + 
19, // 22: proto.InventoryStatusSummary.VersionCounts:type_name -> proto.InventoryStatusSummary.VersionCountsEntry + 20, // 23: proto.InventoryStatusSummary.UpgraderCounts:type_name -> proto.InventoryStatusSummary.UpgraderCountsEntry + 21, // 24: proto.InventoryStatusSummary.ServiceCounts:type_name -> proto.InventoryStatusSummary.ServiceCountsEntry + 1, // 25: proto.UpstreamInventoryStopHeartbeat.kind:type_name -> proto.StopHeartbeatKind + 26, // [26:26] is the sub-list for method output_type + 26, // [26:26] is the sub-list for method input_type + 26, // [26:26] is the sub-list for extension type_name + 26, // [26:26] is the sub-list for extension extendee + 0, // [0:26] is the sub-list for field type_name } func init() { file_teleport_legacy_client_proto_inventory_proto_init() } @@ -1499,6 +1676,7 @@ func file_teleport_legacy_client_proto_inventory_proto_init() { (*UpstreamInventoryOneOf_Pong)(nil), (*UpstreamInventoryOneOf_AgentMetadata)(nil), (*UpstreamInventoryOneOf_Goodbye)(nil), + (*UpstreamInventoryOneOf_StopHeartbeat)(nil), } file_teleport_legacy_client_proto_inventory_proto_msgTypes[1].OneofWrappers = []any{ (*DownstreamInventoryOneOf_Hello)(nil), @@ -1510,8 +1688,8 @@ func file_teleport_legacy_client_proto_inventory_proto_init() { File: protoimpl.DescBuilder{ GoPackagePath: reflect.TypeOf(x{}).PkgPath(), RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_legacy_client_proto_inventory_proto_rawDesc), len(file_teleport_legacy_client_proto_inventory_proto_rawDesc)), - NumEnums: 1, - NumMessages: 19, + NumEnums: 2, + NumMessages: 20, NumExtensions: 0, NumServices: 0, }, diff --git a/api/client/proto/joinservice.pb.go b/api/client/proto/joinservice.pb.go index f52c07e44d945..0dadaf7ecfc10 100644 --- a/api/client/proto/joinservice.pb.go +++ b/api/client/proto/joinservice.pb.go @@ -1443,9 +1443,11 @@ func (m *RegisterUsingBoundKeypairCertificates) GetPublicKey() string { // RegisterUsingBoundKeypairRotationRequest is the response sent by the server // 
when a keypair rotation is required. type RegisterUsingBoundKeypairRotationRequest struct { - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + // The signature algorithm suite in use by the cluster. + SignatureAlgorithmSuite types.SignatureAlgorithmSuite `protobuf:"varint,1,opt,name=signature_algorithm_suite,json=signatureAlgorithmSuite,proto3,enum=types.SignatureAlgorithmSuite" json:"signature_algorithm_suite,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *RegisterUsingBoundKeypairRotationRequest) Reset() { @@ -1483,6 +1485,13 @@ func (m *RegisterUsingBoundKeypairRotationRequest) XXX_DiscardUnknown() { var xxx_messageInfo_RegisterUsingBoundKeypairRotationRequest proto.InternalMessageInfo +func (m *RegisterUsingBoundKeypairRotationRequest) GetSignatureAlgorithmSuite() types.SignatureAlgorithmSuite { + if m != nil { + return m.SignatureAlgorithmSuite + } + return types.SignatureAlgorithmSuite_SIGNATURE_ALGORITHM_SUITE_UNSPECIFIED +} + // RegisterUsingBoundKeypairMethodResponse is a response sent by the server // during the bound-keypair joining process. Multiple requests and responses are // expected to be exchanged during one join process. 
@@ -1623,86 +1632,88 @@ func init() { } var fileDescriptor_d7e760ce923b836e = []byte{ - // 1254 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x58, 0x4b, 0x6f, 0xdb, 0xc6, - 0x13, 0x17, 0xe5, 0xd8, 0x89, 0x46, 0xca, 0x43, 0x9b, 0x20, 0x7f, 0x59, 0x88, 0x5f, 0xfc, 0x3b, - 0xb1, 0x9c, 0x36, 0x92, 0xab, 0x5e, 0x8a, 0x9c, 0xea, 0x17, 0x20, 0xc7, 0x75, 0x6b, 0x30, 0x29, - 0x8a, 0xf6, 0x42, 0xac, 0xa8, 0xad, 0xbc, 0x11, 0x43, 0xb2, 0xbb, 0x2b, 0x03, 0xea, 0xb9, 0x97, - 0x7e, 0x80, 0x9c, 0xfb, 0x09, 0x0a, 0xf4, 0xd0, 0x73, 0xcf, 0x3d, 0x14, 0x45, 0x6f, 0xbd, 0x16, - 0xfe, 0x0e, 0xbd, 0x17, 0xdc, 0x5d, 0x4a, 0x24, 0x45, 0x4a, 0x32, 0x90, 0xf6, 0x62, 0x99, 0x33, - 0xb3, 0xbf, 0x79, 0xcf, 0x2c, 0x09, 0x4d, 0x41, 0x5c, 0x12, 0xf8, 0x4c, 0xb4, 0x5c, 0xd2, 0xc7, - 0xce, 0xa8, 0xe5, 0xb8, 0x94, 0x78, 0xa2, 0x15, 0x30, 0x5f, 0xf8, 0xad, 0xd7, 0x3e, 0xf5, 0x38, - 0x61, 0x97, 0xd4, 0x21, 0x4d, 0x49, 0x41, 0xcb, 0xf2, 0xa7, 0xde, 0x98, 0x79, 0xcc, 0x21, 0x4c, - 0x70, 0x75, 0xa0, 0xbe, 0x95, 0x96, 0x14, 0xa3, 0x80, 0x70, 0xf5, 0x57, 0x89, 0x98, 0x3f, 0x1b, - 0xb0, 0x66, 0x91, 0x3e, 0xe5, 0x82, 0xb0, 0xcf, 0x39, 0xf5, 0xfa, 0x27, 0xfb, 0x67, 0x67, 0x44, - 0x5c, 0xf8, 0x3d, 0x8b, 0x7c, 0x33, 0x24, 0x5c, 0x20, 0x0c, 0x8f, 0x98, 0x16, 0xb0, 0x87, 0xa1, - 0x84, 0x2d, 0xfc, 0x01, 0xf1, 0x6c, 0xa6, 0xf8, 0x35, 0x63, 0xd3, 0x68, 0x94, 0xdb, 0x9b, 0x4d, - 0x85, 0x9a, 0xc0, 0x7a, 0x15, 0x0a, 0x6a, 0x1c, 0x6b, 0x95, 0xe5, 0xb1, 0xd0, 0x1e, 0x3c, 0xe0, - 0x82, 0xdb, 0xb4, 0x47, 0x3c, 0x41, 0xc5, 0x68, 0x0c, 0x5d, 0xdc, 0x34, 0x1a, 0x15, 0x0b, 0x71, - 0xc1, 0x4f, 0x34, 0x4b, 0x9f, 0x30, 0xbb, 0xb0, 0x9e, 0x67, 0x35, 0x0f, 0x7c, 0x8f, 0x13, 0xf4, - 0x08, 0x4a, 0xce, 0x05, 0x76, 0x5d, 0xe2, 0xf5, 0x89, 0xb4, 0xb1, 0x64, 0x4d, 0x08, 0xc8, 0x84, - 0x65, 0x19, 0x28, 0xa9, 0xa2, 0xdc, 0xae, 0xa8, 0x68, 0x34, 0x0f, 0x43, 0x9a, 0xa5, 0x58, 0xe6, - 0x6f, 0x06, 0x6c, 0x24, 0x94, 0xec, 0x7f, 0x3b, 0x64, 0xe4, 0x3f, 0x0f, 0xce, 0xff, 0xe1, 0x36, - 
0x16, 0x82, 0x70, 0x41, 0x7a, 0x76, 0x0f, 0x0b, 0xac, 0xa3, 0x52, 0x89, 0x88, 0x47, 0x58, 0x60, - 0xb4, 0x05, 0x15, 0xec, 0x38, 0x84, 0x73, 0xa5, 0xbf, 0xb6, 0x24, 0x1d, 0x2e, 0x2b, 0x9a, 0x84, - 0x33, 0x7b, 0xb0, 0x99, 0xef, 0xcd, 0x3b, 0x0b, 0xda, 0x31, 0xec, 0x24, 0xbd, 0x3c, 0xd7, 0x89, - 0x39, 0x8c, 0x60, 0xc6, 0xca, 0xea, 0x70, 0x8b, 0xfb, 0xee, 0x50, 0x50, 0xdf, 0x93, 0xba, 0x2a, - 0xd6, 0xf8, 0xd9, 0xfc, 0xdb, 0x80, 0xed, 0x6c, 0x9c, 0x13, 0x8f, 0x0a, 0x8a, 0xdd, 0x28, 0x3a, - 0x87, 0x50, 0x09, 0x1b, 0xe5, 0xda, 0x01, 0x2f, 0x87, 0xa7, 0x22, 0x90, 0x55, 0xb8, 0x49, 0x06, - 0x76, 0xe8, 0x80, 0x0a, 0x6e, 0xa7, 0x60, 0xad, 0x90, 0x41, 0xe8, 0x17, 0xfa, 0x1f, 0xac, 0x90, - 0x81, 0x3d, 0x20, 0x23, 0x19, 0xd2, 0x90, 0xb3, 0x4c, 0x06, 0xa7, 0x64, 0x84, 0x3e, 0x05, 0xa4, - 0x32, 0x80, 0x43, 0x83, 0xed, 0x00, 0x33, 0xfc, 0x86, 0xd7, 0x6e, 0x48, 0xf5, 0x1b, 0x3a, 0x32, - 0xaf, 0xce, 0xcf, 0xf6, 0x27, 0x32, 0xe7, 0xa1, 0x08, 0x11, 0x84, 0x71, 0xab, 0x8a, 0x53, 0x64, - 0x7e, 0x70, 0x03, 0x8a, 0x64, 0x60, 0xfe, 0x9e, 0x6e, 0xc7, 0xb1, 0xdf, 0x91, 0xad, 0xfb, 0x70, - 0x83, 0x7a, 0x34, 0x72, 0xf4, 0x3d, 0xad, 0x69, 0x91, 0x58, 0x75, 0x0a, 0x96, 0x3c, 0x8a, 0x6c, - 0x40, 0xe3, 0xa4, 0xda, 0x4c, 0xa7, 0x43, 0x27, 0xb5, 0x39, 0x13, 0x70, 0x2a, 0x89, 0x9d, 0x82, - 0x55, 0x75, 0xd2, 0xc4, 0x83, 0x12, 0xdc, 0x0c, 0xf0, 0xc8, 0xf5, 0x71, 0xcf, 0xfc, 0xc1, 0x48, - 0x75, 0x6a, 0xcc, 0x21, 0x5d, 0x07, 0x9f, 0x40, 0x35, 0x6e, 0x4e, 0x3c, 0x8f, 0x6b, 0x93, 0x40, - 0x1e, 0x7b, 0x0e, 0x1b, 0x05, 0x82, 0xf4, 0x0e, 0x19, 0x91, 0xc3, 0x00, 0xbb, 0x9d, 0x82, 0x75, - 0x2f, 0xa6, 0x5c, 0xc5, 0x67, 0x7b, 0x46, 0x91, 0x86, 0xd9, 0x93, 0xcc, 0xb8, 0x85, 0x3f, 0x1a, - 0x50, 0xcb, 0x4b, 0x14, 0x7a, 0x08, 0x2b, 0xc1, 0xb0, 0xeb, 0x52, 0x47, 0x57, 0xa8, 0x7e, 0x42, - 0x1b, 0x50, 0x76, 0x18, 0xc1, 0x82, 0xc4, 0x5b, 0x12, 0x14, 0x49, 0x36, 0xe4, 0x33, 0x40, 0x5a, - 0x20, 0x96, 0x6a, 0x55, 0x43, 0x56, 0x55, 0x71, 0x62, 0x1a, 0xd1, 0x2e, 0xdc, 0xd3, 0xe2, 0x9c, - 0xf6, 0x3d, 0x2c, 0x86, 0x8c, 0xc8, 
0x5a, 0xaa, 0x58, 0x77, 0x15, 0xfd, 0x65, 0x44, 0x36, 0xbf, - 0x84, 0x87, 0xd9, 0xe1, 0x40, 0x3b, 0x10, 0x0a, 0xeb, 0x27, 0xbb, 0xeb, 0xfa, 0x5d, 0x6d, 0xf5, - 0x9d, 0x09, 0xf9, 0xc0, 0xf5, 0xbb, 0xa1, 0x57, 0x9c, 0x38, 0x8c, 0x44, 0x13, 0x56, 0x3f, 0x99, - 0x3f, 0x15, 0xe1, 0xfe, 0x67, 0x0c, 0x3b, 0xae, 0x54, 0x47, 0x62, 0x35, 0x77, 0xf3, 0x82, 0xe0, - 0x1e, 0x61, 0xbc, 0x66, 0x6c, 0x2e, 0x35, 0xca, 0xed, 0x1d, 0x1d, 0xd5, 0x0c, 0xe1, 0x66, 0x47, - 0x49, 0x1e, 0x7b, 0x82, 0x8d, 0xac, 0xe8, 0x1c, 0xfa, 0x02, 0xee, 0xea, 0x80, 0xdb, 0x11, 0x54, - 0x51, 0x42, 0x35, 0x67, 0x40, 0x9d, 0xab, 0x13, 0x09, 0xc4, 0x3b, 0x41, 0x82, 0x58, 0x7f, 0x0e, - 0x95, 0x38, 0x1f, 0xdd, 0x83, 0xa5, 0xb0, 0x5b, 0xd5, 0xf0, 0x0a, 0xff, 0x45, 0x0f, 0x60, 0xf9, - 0x12, 0xbb, 0x43, 0x55, 0xe1, 0x25, 0x4b, 0x3d, 0x3c, 0x2f, 0x7e, 0x64, 0xd4, 0xf7, 0xe1, 0x7e, - 0x86, 0x8a, 0xeb, 0x40, 0x98, 0x7f, 0x1a, 0xa9, 0xb1, 0xaa, 0xfc, 0x48, 0xf6, 0xac, 0xf3, 0x6e, - 0xb6, 0x44, 0xa7, 0x30, 0x6b, 0x4f, 0x1c, 0xc2, 0x1d, 0x5f, 0xea, 0x4e, 0xac, 0xcf, 0x72, 0xbb, - 0x9e, 0x1f, 0xe0, 0x4e, 0xc1, 0xba, 0xad, 0xce, 0x68, 0x42, 0xd8, 0x17, 0xfa, 0xb4, 0x39, 0x84, - 0xad, 0x19, 0x8e, 0xe9, 0xde, 0x5d, 0x9f, 0x5a, 0x18, 0x9d, 0x42, 0x7c, 0x65, 0x2c, 0xd6, 0x8d, - 0x00, 0xb7, 0xa2, 0x31, 0x14, 0x4e, 0xc0, 0xe4, 0x06, 0x39, 0xf0, 0x87, 0x5e, 0xef, 0x94, 0x8c, - 0x02, 0x4c, 0xd9, 0xbf, 0x31, 0xfc, 0x9b, 0x70, 0x9f, 0x2a, 0x58, 0x5b, 0x82, 0xc5, 0x3a, 0xa3, - 0x64, 0x55, 0x35, 0xeb, 0x85, 0x4f, 0xbd, 0x97, 0x92, 0x11, 0xca, 0x07, 0x8c, 0x5c, 0x52, 0x7f, - 0xc8, 0xf5, 0x01, 0x81, 0x05, 0x89, 0x5a, 0x3b, 0x62, 0xc9, 0x03, 0x21, 0xc3, 0xec, 0xc0, 0xd3, - 0x5c, 0x7f, 0xae, 0xb7, 0x14, 0x5f, 0xc0, 0x6e, 0x2e, 0x92, 0xe5, 0xab, 0x49, 0x32, 0x06, 0x5a, - 0x03, 0x50, 0xb3, 0xca, 0x9e, 0xd4, 0x72, 0x49, 0x51, 0x4e, 0xc9, 0xc8, 0xfc, 0xa5, 0x08, 0x4f, - 0x72, 0xc1, 0x92, 0xd5, 0x7b, 0x94, 0xd8, 0x38, 0x99, 0x0b, 0x22, 0x3f, 0x47, 0xe3, 0xa5, 0xd3, - 0x9d, 0xb1, 0x74, 0x3e, 0x98, 0x87, 0xb9, 0xd8, 0xde, 0x41, 0x36, 0x54, 
0x99, 0x8e, 0xc3, 0x44, - 0xc5, 0x92, 0x54, 0xb1, 0x37, 0x4f, 0x45, 0x3a, 0x80, 0xe1, 0x72, 0x61, 0x29, 0x5a, 0x7c, 0x6d, - 0x60, 0x30, 0xe7, 0x9b, 0x3b, 0x27, 0x0b, 0xc9, 0xfb, 0x56, 0x31, 0x75, 0xdf, 0x32, 0xbf, 0x37, - 0xe0, 0x71, 0xbe, 0x0e, 0xc2, 0x04, 0xfd, 0x9a, 0x3a, 0x58, 0x10, 0x3e, 0xb9, 0x99, 0x19, 0xb9, - 0x37, 0xb3, 0xd0, 0x94, 0x58, 0xb9, 0xaa, 0xc1, 0x5f, 0x7a, 0x1d, 0x95, 0x69, 0xca, 0xd2, 0xa5, - 0x74, 0xbd, 0x3c, 0x85, 0xc6, 0x02, 0xa1, 0x53, 0x93, 0xe3, 0x6d, 0x71, 0x46, 0x0b, 0xa7, 0x06, - 0xc8, 0x49, 0x7a, 0x80, 0x94, 0xdb, 0xbb, 0x0b, 0x57, 0x43, 0x72, 0xd6, 0x1c, 0x25, 0x67, 0xcd, - 0xfb, 0x73, 0x61, 0x62, 0x11, 0x1c, 0xcf, 0x22, 0x74, 0x06, 0xb7, 0xa2, 0xb4, 0xeb, 0xd2, 0x69, - 0x2d, 0x5e, 0x3a, 0x51, 0xc9, 0x8f, 0x21, 0xe2, 0xa3, 0xad, 0xfd, 0x76, 0x19, 0xca, 0x6a, 0x90, - 0xc8, 0xb7, 0x3a, 0x44, 0xe1, 0x61, 0xf6, 0x4b, 0x0c, 0xda, 0xce, 0x52, 0x99, 0x7e, 0x33, 0xab, - 0x3f, 0x9e, 0x23, 0xa5, 0xd4, 0x36, 0x8c, 0x3d, 0x03, 0xf9, 0x50, 0xcb, 0xbb, 0xfc, 0xa3, 0x27, - 0x59, 0x30, 0xd3, 0xef, 0x3a, 0xf5, 0x9d, 0xb9, 0x72, 0x31, 0x85, 0x69, 0xdf, 0xc6, 0xd7, 0xbe, - 0x6c, 0xdf, 0xd2, 0xd7, 0xdc, 0x6c, 0xdf, 0xa6, 0xee, 0x8e, 0x52, 0x15, 0x83, 0xd5, 0xdc, 0x45, - 0x85, 0x32, 0x8d, 0xce, 0xd8, 0xd1, 0xf5, 0xc6, 0x7c, 0xc1, 0x98, 0xce, 0xef, 0xd2, 0xef, 0x86, - 0xd3, 0x25, 0x8e, 0x9e, 0xcd, 0xab, 0x9b, 0xa4, 0x01, 0xcd, 0x45, 0xc5, 0x63, 0x66, 0x1c, 0x01, - 0x9a, 0xde, 0x72, 0x68, 0xee, 0x02, 0xac, 0x27, 0x06, 0xc4, 0xc1, 0xc7, 0xbf, 0x5e, 0xad, 0x1b, - 0x7f, 0x5c, 0xad, 0x1b, 0x7f, 0x5d, 0xad, 0x1b, 0x5f, 0xb5, 0xfb, 0x54, 0x5c, 0x0c, 0xbb, 0x4d, - 0xc7, 0x7f, 0xd3, 0xea, 0x33, 0x7c, 0x49, 0x55, 0x25, 0x63, 0xb7, 0x35, 0xfe, 0x9a, 0x80, 0x03, - 0x9a, 0xf8, 0xe8, 0xd0, 0x5d, 0x91, 0x3f, 0x1f, 0xfe, 0x13, 0x00, 0x00, 0xff, 0xff, 0x72, 0x17, - 0x9a, 0x35, 0xd2, 0x10, 0x00, 0x00, + // 1294 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x58, 0xcf, 0x8f, 0xd3, 0xc6, + 0x17, 0x8f, 0xb3, 0xec, 
0x42, 0x5e, 0xc2, 0xc2, 0x0e, 0x08, 0x76, 0x23, 0x08, 0x8b, 0xbf, 0xc0, + 0x86, 0x6f, 0x4b, 0x42, 0xd3, 0x4b, 0xc5, 0xa9, 0xd9, 0x5d, 0xa4, 0x2c, 0x74, 0xdb, 0x95, 0xa1, + 0xaa, 0xca, 0xc5, 0x9a, 0x38, 0xd3, 0xec, 0x10, 0x63, 0xbb, 0x33, 0xe3, 0x95, 0xd2, 0x73, 0x55, + 0xa9, 0x7f, 0x00, 0xe7, 0xfe, 0x05, 0x95, 0x7a, 0xe8, 0xb9, 0xe7, 0x1e, 0xaa, 0xaa, 0xb7, 0x5e, + 0x2b, 0xfe, 0x87, 0xde, 0x2b, 0xcf, 0x8c, 0x1d, 0xdb, 0xb1, 0x93, 0x20, 0xd1, 0x5e, 0x58, 0xfc, + 0xde, 0x9b, 0xcf, 0xfb, 0xfd, 0xde, 0x4c, 0xa0, 0x23, 0x88, 0x4b, 0x02, 0x9f, 0x89, 0xae, 0x4b, + 0xc6, 0xd8, 0x99, 0x76, 0x1d, 0x97, 0x12, 0x4f, 0x74, 0x03, 0xe6, 0x0b, 0xbf, 0xfb, 0xd2, 0xa7, + 0x1e, 0x27, 0xec, 0x8c, 0x3a, 0xa4, 0x23, 0x29, 0x68, 0x5d, 0xfe, 0x69, 0xb6, 0x17, 0x1e, 0x73, + 0x08, 0x13, 0x5c, 0x1d, 0x68, 0xde, 0xce, 0x4b, 0x8a, 0x69, 0x40, 0xb8, 0xfa, 0x57, 0x89, 0x98, + 0x3f, 0x1b, 0x70, 0xd3, 0x22, 0x63, 0xca, 0x05, 0x61, 0x9f, 0x73, 0xea, 0x8d, 0x8f, 0xfa, 0xc7, + 0xc7, 0x44, 0x9c, 0xfa, 0x23, 0x8b, 0x7c, 0x1d, 0x12, 0x2e, 0x10, 0x86, 0x1b, 0x4c, 0x0b, 0xd8, + 0x61, 0x24, 0x61, 0x0b, 0x7f, 0x42, 0x3c, 0x9b, 0x29, 0xfe, 0xb6, 0xb1, 0x6b, 0xb4, 0xeb, 0xbd, + 0xdd, 0x8e, 0x42, 0xcd, 0x60, 0x3d, 0x8f, 0x04, 0x35, 0x8e, 0xb5, 0xc3, 0xca, 0x58, 0xe8, 0x21, + 0x5c, 0xe5, 0x82, 0xdb, 0x74, 0x44, 0x3c, 0x41, 0xc5, 0x34, 0x81, 0xae, 0xee, 0x1a, 0xed, 0x86, + 0x85, 0xb8, 0xe0, 0x47, 0x9a, 0xa5, 0x4f, 0x98, 0x43, 0x68, 0x95, 0x59, 0xcd, 0x03, 0xdf, 0xe3, + 0x04, 0xdd, 0x80, 0x9a, 0x73, 0x8a, 0x5d, 0x97, 0x78, 0x63, 0x22, 0x6d, 0xac, 0x59, 0x33, 0x02, + 0x32, 0x61, 0x5d, 0x06, 0x4a, 0xaa, 0xa8, 0xf7, 0x1a, 0x2a, 0x1a, 0x9d, 0x83, 0x88, 0x66, 0x29, + 0x96, 0xf9, 0x9b, 0x01, 0xb7, 0x32, 0x4a, 0xfa, 0xdf, 0x84, 0x8c, 0xfc, 0xe7, 0xc1, 0xf9, 0x1f, + 0x5c, 0xc4, 0x42, 0x10, 0x2e, 0xc8, 0xc8, 0x1e, 0x61, 0x81, 0x75, 0x54, 0x1a, 0x31, 0xf1, 0x10, + 0x0b, 0x8c, 0x6e, 0x43, 0x03, 0x3b, 0x0e, 0xe1, 0x5c, 0xe9, 0xdf, 0x5e, 0x93, 0x0e, 0xd7, 0x15, + 0x4d, 0xc2, 0x99, 0x23, 0xd8, 0x2d, 0xf7, 0xe6, 0x9d, 0x05, 
0xed, 0x31, 0xec, 0x65, 0xbd, 0x3c, + 0xd1, 0x89, 0x39, 0x88, 0x61, 0x12, 0x65, 0x4d, 0xb8, 0xc0, 0x7d, 0x37, 0x14, 0xd4, 0xf7, 0xa4, + 0xae, 0x86, 0x95, 0x7c, 0x9b, 0x7f, 0x1b, 0x70, 0xa7, 0x18, 0xe7, 0xc8, 0xa3, 0x82, 0x62, 0x37, + 0x8e, 0xce, 0x01, 0x34, 0xa2, 0x46, 0x79, 0xeb, 0x80, 0xd7, 0xa3, 0x53, 0x31, 0xc8, 0x0e, 0x9c, + 0x27, 0x13, 0x3b, 0x72, 0x40, 0x05, 0x77, 0x50, 0xb1, 0x36, 0xc8, 0x24, 0xf2, 0x0b, 0x5d, 0x87, + 0x0d, 0x32, 0xb1, 0x27, 0x64, 0x2a, 0x43, 0x1a, 0x71, 0xd6, 0xc9, 0xe4, 0x29, 0x99, 0xa2, 0x4f, + 0x01, 0xa9, 0x0c, 0xe0, 0xc8, 0x60, 0x3b, 0xc0, 0x0c, 0xbf, 0xe2, 0xdb, 0xe7, 0xa4, 0xfa, 0x5b, + 0x3a, 0x32, 0xcf, 0x4f, 0x8e, 0xfb, 0x33, 0x99, 0x93, 0x48, 0x84, 0x08, 0xc2, 0xb8, 0xb5, 0x85, + 0x73, 0x64, 0xbe, 0x7f, 0x0e, 0xaa, 0x64, 0x62, 0xfe, 0x9e, 0x6f, 0xc7, 0xc4, 0xef, 0xd8, 0xd6, + 0x3e, 0x9c, 0xa3, 0x1e, 0x8d, 0x1d, 0x7d, 0x4f, 0x6b, 0x5a, 0x25, 0x56, 0x83, 0x8a, 0x25, 0x8f, + 0x22, 0x1b, 0x50, 0x92, 0x54, 0x9b, 0xe9, 0x74, 0xe8, 0xa4, 0x76, 0x16, 0x02, 0xce, 0x25, 0x71, + 0x50, 0xb1, 0xb6, 0x9c, 0x3c, 0x71, 0xbf, 0x06, 0xe7, 0x03, 0x3c, 0x75, 0x7d, 0x3c, 0x32, 0x7f, + 0x30, 0x72, 0x9d, 0x9a, 0x72, 0x48, 0xd7, 0xc1, 0x27, 0xb0, 0x95, 0x36, 0x27, 0x9d, 0xc7, 0x9b, + 0xb3, 0x40, 0x3e, 0xf6, 0x1c, 0x36, 0x0d, 0x04, 0x19, 0x1d, 0x30, 0x22, 0x87, 0x01, 0x76, 0x07, + 0x15, 0xeb, 0x72, 0x4a, 0xb9, 0x8a, 0xcf, 0x9d, 0x05, 0x45, 0x1a, 0x65, 0x4f, 0x32, 0xd3, 0x16, + 0xfe, 0x68, 0xc0, 0x76, 0x59, 0xa2, 0xd0, 0x35, 0xd8, 0x08, 0xc2, 0xa1, 0x4b, 0x1d, 0x5d, 0xa1, + 0xfa, 0x0b, 0xdd, 0x82, 0xba, 0xc3, 0x08, 0x16, 0x24, 0xdd, 0x92, 0xa0, 0x48, 0xb2, 0x21, 0x1f, + 0x00, 0xd2, 0x02, 0xa9, 0x54, 0xab, 0x1a, 0xb2, 0xb6, 0x14, 0x27, 0xa5, 0x11, 0xdd, 0x87, 0xcb, + 0x5a, 0x9c, 0xd3, 0xb1, 0x87, 0x45, 0xc8, 0x88, 0xac, 0xa5, 0x86, 0x75, 0x49, 0xd1, 0x9f, 0xc5, + 0x64, 0xf3, 0x4b, 0xb8, 0x56, 0x1c, 0x0e, 0xb4, 0x07, 0x91, 0xb0, 0xfe, 0xb2, 0x87, 0xae, 0x3f, + 0xd4, 0x56, 0x6f, 0xce, 0xc8, 0xfb, 0xae, 0x3f, 0x8c, 0xbc, 0xe2, 0xc4, 0x61, 0x24, 0x9e, 0xb0, + 
0xfa, 0xcb, 0xfc, 0xa9, 0x0a, 0x57, 0x3e, 0x63, 0xd8, 0x71, 0xa5, 0x3a, 0x92, 0xaa, 0xb9, 0xf3, + 0xa7, 0x04, 0x8f, 0x08, 0xe3, 0xdb, 0xc6, 0xee, 0x5a, 0xbb, 0xde, 0xdb, 0xd3, 0x51, 0x2d, 0x10, + 0xee, 0x0c, 0x94, 0xe4, 0x63, 0x4f, 0xb0, 0xa9, 0x15, 0x9f, 0x43, 0x5f, 0xc0, 0x25, 0x1d, 0x70, + 0x3b, 0x86, 0xaa, 0x4a, 0xa8, 0xce, 0x02, 0xa8, 0x13, 0x75, 0x22, 0x83, 0xb8, 0x19, 0x64, 0x88, + 0xcd, 0x47, 0xd0, 0x48, 0xf3, 0xd1, 0x65, 0x58, 0x8b, 0xba, 0x55, 0x0d, 0xaf, 0xe8, 0xbf, 0xe8, + 0x2a, 0xac, 0x9f, 0x61, 0x37, 0x54, 0x15, 0x5e, 0xb3, 0xd4, 0xc7, 0xa3, 0xea, 0x47, 0x46, 0xb3, + 0x0f, 0x57, 0x0a, 0x54, 0xbc, 0x0d, 0x84, 0xf9, 0xa7, 0x91, 0x1b, 0xab, 0xca, 0x8f, 0x6c, 0xcf, + 0x3a, 0xef, 0x66, 0x4b, 0x0c, 0x2a, 0x8b, 0xf6, 0xc4, 0x01, 0x6c, 0xfa, 0x52, 0x77, 0x66, 0x7d, + 0xd6, 0x7b, 0xcd, 0xf2, 0x00, 0x0f, 0x2a, 0xd6, 0x45, 0x75, 0x46, 0x13, 0xa2, 0xbe, 0xd0, 0xa7, + 0xcd, 0x10, 0x6e, 0x2f, 0x70, 0x4c, 0xf7, 0x6e, 0x6b, 0x6e, 0x61, 0x0c, 0x2a, 0xe9, 0x95, 0xb1, + 0x5a, 0x37, 0x02, 0x5c, 0x88, 0xc7, 0x50, 0x34, 0x01, 0xb3, 0x1b, 0x64, 0xdf, 0x0f, 0xbd, 0xd1, + 0x53, 0x32, 0x0d, 0x30, 0x65, 0xff, 0xc6, 0xf0, 0xef, 0xc0, 0x15, 0xaa, 0x60, 0x6d, 0x09, 0x96, + 0xea, 0x8c, 0x9a, 0xb5, 0xa5, 0x59, 0x4f, 0x7c, 0xea, 0x3d, 0x93, 0x8c, 0x48, 0x3e, 0x60, 0xe4, + 0x8c, 0xfa, 0x21, 0xd7, 0x07, 0x04, 0x16, 0x24, 0x6e, 0xed, 0x98, 0x25, 0x0f, 0x44, 0x0c, 0x73, + 0x00, 0xff, 0x2f, 0xf5, 0xe7, 0xed, 0x96, 0xe2, 0x13, 0xb8, 0x5f, 0x8a, 0x64, 0xf9, 0x6a, 0x92, + 0x24, 0x40, 0x37, 0x01, 0xd4, 0xac, 0xb2, 0x67, 0xb5, 0x5c, 0x53, 0x94, 0xa7, 0x64, 0x6a, 0xfe, + 0x52, 0x85, 0x7b, 0xa5, 0x60, 0xd9, 0xea, 0x3d, 0xcc, 0x6c, 0x9c, 0xc2, 0x05, 0x51, 0x9e, 0xa3, + 0x64, 0xe9, 0x0c, 0x17, 0x2c, 0x9d, 0x0f, 0x96, 0x61, 0xae, 0xb6, 0x77, 0x90, 0x0d, 0x5b, 0x4c, + 0xc7, 0x61, 0xa6, 0x62, 0x4d, 0xaa, 0x78, 0xb8, 0x4c, 0x45, 0x3e, 0x80, 0xd1, 0x72, 0x61, 0x39, + 0x5a, 0x7a, 0x6d, 0x60, 0x30, 0x97, 0x9b, 0xbb, 0x24, 0x0b, 0xd9, 0xfb, 0x56, 0x35, 0x77, 0xdf, + 0x32, 0xbf, 0x37, 0xe0, 0x6e, 0xb9, 
0x0e, 0xc2, 0x04, 0xfd, 0x8a, 0x3a, 0x58, 0x10, 0x3e, 0xbb, + 0x99, 0x19, 0xa5, 0x37, 0xb3, 0xc8, 0x94, 0x54, 0xb9, 0xaa, 0xc1, 0x5f, 0x7b, 0x19, 0x97, 0x69, + 0xce, 0xd2, 0xb5, 0x7c, 0xbd, 0x7c, 0x67, 0x40, 0x7b, 0x85, 0xd8, 0xa9, 0x8a, 0x79, 0x01, 0x3b, + 0xc9, 0x1a, 0xb3, 0xb1, 0x3b, 0xf6, 0x19, 0x15, 0xa7, 0xaf, 0x6c, 0x1e, 0x52, 0xa1, 0xa6, 0xc4, + 0x66, 0xaf, 0xa5, 0x9b, 0x34, 0xd9, 0x6b, 0xfd, 0x58, 0xec, 0x59, 0x24, 0x65, 0x5d, 0xe7, 0xc5, + 0x0c, 0xf3, 0x75, 0x75, 0xc1, 0x7c, 0xc8, 0x4d, 0xa7, 0xa3, 0xfc, 0x74, 0xaa, 0xf7, 0xee, 0xaf, + 0x5c, 0x6a, 0xd9, 0x41, 0x76, 0x98, 0x1d, 0x64, 0xef, 0x2f, 0x85, 0x49, 0xa5, 0x27, 0x19, 0x74, + 0xe8, 0x18, 0x2e, 0xc4, 0x35, 0xa5, 0xeb, 0xb2, 0xbb, 0x7a, 0x5d, 0xc6, 0xfd, 0x94, 0x40, 0xa4, + 0xe7, 0x66, 0xef, 0xf5, 0x3a, 0xd4, 0xd5, 0x94, 0x92, 0x4f, 0x46, 0x44, 0xe1, 0x5a, 0xf1, 0x0b, + 0x09, 0xdd, 0x29, 0x52, 0x99, 0x7f, 0xf6, 0x35, 0xef, 0x2e, 0x91, 0x52, 0x6a, 0xdb, 0xc6, 0x43, + 0x03, 0xf9, 0xb0, 0x5d, 0xf6, 0xb2, 0x40, 0xf7, 0x8a, 0x60, 0xe6, 0x1f, 0x52, 0xcd, 0xbd, 0xa5, + 0x72, 0x29, 0x85, 0x79, 0xdf, 0x92, 0x3b, 0x65, 0xb1, 0x6f, 0xf9, 0x3b, 0x74, 0xb1, 0x6f, 0x73, + 0x17, 0x53, 0xa9, 0x8a, 0xc1, 0x4e, 0xe9, 0x16, 0x44, 0x85, 0x46, 0x17, 0x5c, 0x00, 0x9a, 0xed, + 0xe5, 0x82, 0x29, 0x9d, 0xdf, 0xe6, 0x1f, 0x9e, 0xf3, 0x25, 0x8e, 0x1e, 0x2c, 0xab, 0x9b, 0xac, + 0x01, 0x9d, 0x55, 0xc5, 0x53, 0x66, 0x1c, 0x02, 0x9a, 0x5f, 0xa1, 0x68, 0xe9, 0x76, 0x6d, 0x66, + 0xa6, 0xcf, 0xfe, 0xc7, 0xbf, 0xbe, 0x69, 0x19, 0x7f, 0xbc, 0x69, 0x19, 0x7f, 0xbd, 0x69, 0x19, + 0x2f, 0x7a, 0x63, 0x2a, 0x4e, 0xc3, 0x61, 0xc7, 0xf1, 0x5f, 0x75, 0xc7, 0x0c, 0x9f, 0x51, 0x55, + 0xc9, 0xd8, 0xed, 0x26, 0x3f, 0x55, 0xe0, 0x80, 0x66, 0x7e, 0xd1, 0x18, 0x6e, 0xc8, 0x3f, 0x1f, + 0xfe, 0x13, 0x00, 0x00, 0xff, 0xff, 0x83, 0x41, 0x97, 0x10, 0x2f, 0x11, 0x00, 0x00, } func (m *RegisterUsingIAMMethodRequest) Marshal() (dAtA []byte, err error) { @@ -2826,6 +2837,11 @@ func (m *RegisterUsingBoundKeypairRotationRequest) MarshalToSizedBuffer(dAtA []b i -= 
len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.SignatureAlgorithmSuite != 0 { + i = encodeVarintJoinservice(dAtA, i, uint64(m.SignatureAlgorithmSuite)) + i-- + dAtA[i] = 0x8 + } return len(dAtA) - i, nil } @@ -3473,6 +3489,9 @@ func (m *RegisterUsingBoundKeypairRotationRequest) Size() (n int) { } var l int _ = l + if m.SignatureAlgorithmSuite != 0 { + n += 1 + sovJoinservice(uint64(m.SignatureAlgorithmSuite)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -6189,6 +6208,25 @@ func (m *RegisterUsingBoundKeypairRotationRequest) Unmarshal(dAtA []byte) error return fmt.Errorf("proto: RegisterUsingBoundKeypairRotationRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field SignatureAlgorithmSuite", wireType) + } + m.SignatureAlgorithmSuite = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowJoinservice + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.SignatureAlgorithmSuite |= types.SignatureAlgorithmSuite(b&0x7F) << shift + if b < 0x80 { + break + } + } default: iNdEx = preIndex skippy, err := skipJoinservice(dAtA[iNdEx:]) diff --git a/api/client/proto/requestable_roles.pb.go b/api/client/proto/requestable_roles.pb.go new file mode 100644 index 0000000000000..81efc640662b1 --- /dev/null +++ b/api/client/proto/requestable_roles.pb.go @@ -0,0 +1,329 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+// See the License for the specific language governing permissions and +// limitations under the License. + +// Code generated by protoc-gen-go. DO NOT EDIT. +// versions: +// protoc-gen-go v1.36.8 +// protoc (unknown) +// source: teleport/legacy/client/proto/requestable_roles.proto + +// buf:lint:ignore PACKAGE_DIRECTORY_MATCH +// buf:lint:ignore PACKAGE_VERSION_SUFFIX + +package proto + +import ( + protoreflect "google.golang.org/protobuf/reflect/protoreflect" + protoimpl "google.golang.org/protobuf/runtime/protoimpl" + reflect "reflect" + sync "sync" + unsafe "unsafe" +) + +const ( + // Verify that this generated code is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) + // Verify that runtime/protoimpl is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) +) + +// ListRequestableRolesRequest is the request message for AuthService.ListRequestableRoles +type ListRequestableRolesRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // page_size is the maximum number of requestable roles to return per page. + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // page_token is the next_page_token value returned from a previous List request, if any. + PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + // filter is the filtering options for the returned roles. 
+ Filter *ListRequestableRolesRequest_Filter `protobuf:"bytes,3,opt,name=filter,proto3" json:"filter,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListRequestableRolesRequest) Reset() { + *x = ListRequestableRolesRequest{} + mi := &file_teleport_legacy_client_proto_requestable_roles_proto_msgTypes[0] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListRequestableRolesRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListRequestableRolesRequest) ProtoMessage() {} + +func (x *ListRequestableRolesRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_legacy_client_proto_requestable_roles_proto_msgTypes[0] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListRequestableRolesRequest.ProtoReflect.Descriptor instead. +func (*ListRequestableRolesRequest) Descriptor() ([]byte, []int) { + return file_teleport_legacy_client_proto_requestable_roles_proto_rawDescGZIP(), []int{0} +} + +func (x *ListRequestableRolesRequest) GetPageSize() int32 { + if x != nil { + return x.PageSize + } + return 0 +} + +func (x *ListRequestableRolesRequest) GetPageToken() string { + if x != nil { + return x.PageToken + } + return "" +} + +func (x *ListRequestableRolesRequest) GetFilter() *ListRequestableRolesRequest_Filter { + if x != nil { + return x.Filter + } + return nil +} + +// ListRequestableRolesResponse is the response message for AuthService.ListRequestableRoles +type ListRequestableRolesResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // roles is the requestable roles. + Roles []*ListRequestableRolesResponse_RequestableRole `protobuf:"bytes,1,rep,name=roles,proto3" json:"roles,omitempty"` + // next_page_token is the next page token. 
If there are no more results, it will be empty. + NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListRequestableRolesResponse) Reset() { + *x = ListRequestableRolesResponse{} + mi := &file_teleport_legacy_client_proto_requestable_roles_proto_msgTypes[1] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListRequestableRolesResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListRequestableRolesResponse) ProtoMessage() {} + +func (x *ListRequestableRolesResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_legacy_client_proto_requestable_roles_proto_msgTypes[1] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListRequestableRolesResponse.ProtoReflect.Descriptor instead. +func (*ListRequestableRolesResponse) Descriptor() ([]byte, []int) { + return file_teleport_legacy_client_proto_requestable_roles_proto_rawDescGZIP(), []int{1} +} + +func (x *ListRequestableRolesResponse) GetRoles() []*ListRequestableRolesResponse_RequestableRole { + if x != nil { + return x.Roles + } + return nil +} + +func (x *ListRequestableRolesResponse) GetNextPageToken() string { + if x != nil { + return x.NextPageToken + } + return "" +} + +// Filter is the type for the filtering options for the returned roles. 
+type ListRequestableRolesRequest_Filter struct { + state protoimpl.MessageState `protogen:"open.v1"` + // a list of search keywords to match against resource field values, if specified + SearchKeywords []string `protobuf:"bytes,1,rep,name=search_keywords,json=searchKeywords,proto3" json:"search_keywords,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListRequestableRolesRequest_Filter) Reset() { + *x = ListRequestableRolesRequest_Filter{} + mi := &file_teleport_legacy_client_proto_requestable_roles_proto_msgTypes[2] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListRequestableRolesRequest_Filter) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListRequestableRolesRequest_Filter) ProtoMessage() {} + +func (x *ListRequestableRolesRequest_Filter) ProtoReflect() protoreflect.Message { + mi := &file_teleport_legacy_client_proto_requestable_roles_proto_msgTypes[2] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListRequestableRolesRequest_Filter.ProtoReflect.Descriptor instead. +func (*ListRequestableRolesRequest_Filter) Descriptor() ([]byte, []int) { + return file_teleport_legacy_client_proto_requestable_roles_proto_rawDescGZIP(), []int{0, 0} +} + +func (x *ListRequestableRolesRequest_Filter) GetSearchKeywords() []string { + if x != nil { + return x.SearchKeywords + } + return nil +} + +// RequestableRole is the type of a requestable role, containing only its name and description. 
+type ListRequestableRolesResponse_RequestableRole struct { + state protoimpl.MessageState `protogen:"open.v1"` + Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` + Description string `protobuf:"bytes,2,opt,name=description,proto3" json:"description,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListRequestableRolesResponse_RequestableRole) Reset() { + *x = ListRequestableRolesResponse_RequestableRole{} + mi := &file_teleport_legacy_client_proto_requestable_roles_proto_msgTypes[3] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListRequestableRolesResponse_RequestableRole) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListRequestableRolesResponse_RequestableRole) ProtoMessage() {} + +func (x *ListRequestableRolesResponse_RequestableRole) ProtoReflect() protoreflect.Message { + mi := &file_teleport_legacy_client_proto_requestable_roles_proto_msgTypes[3] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListRequestableRolesResponse_RequestableRole.ProtoReflect.Descriptor instead. 
+func (*ListRequestableRolesResponse_RequestableRole) Descriptor() ([]byte, []int) { + return file_teleport_legacy_client_proto_requestable_roles_proto_rawDescGZIP(), []int{1, 0} +} + +func (x *ListRequestableRolesResponse_RequestableRole) GetName() string { + if x != nil { + return x.Name + } + return "" +} + +func (x *ListRequestableRolesResponse_RequestableRole) GetDescription() string { + if x != nil { + return x.Description + } + return "" +} + +var File_teleport_legacy_client_proto_requestable_roles_proto protoreflect.FileDescriptor + +const file_teleport_legacy_client_proto_requestable_roles_proto_rawDesc = "" + + "\n" + + "4teleport/legacy/client/proto/requestable_roles.proto\x12\x05proto\"\xcf\x01\n" + + "\x1bListRequestableRolesRequest\x12\x1b\n" + + "\tpage_size\x18\x01 \x01(\x05R\bpageSize\x12\x1d\n" + + "\n" + + "page_token\x18\x02 \x01(\tR\tpageToken\x12A\n" + + "\x06filter\x18\x03 \x01(\v2).proto.ListRequestableRolesRequest.FilterR\x06filter\x1a1\n" + + "\x06Filter\x12'\n" + + "\x0fsearch_keywords\x18\x01 \x03(\tR\x0esearchKeywords\"\xda\x01\n" + + "\x1cListRequestableRolesResponse\x12I\n" + + "\x05roles\x18\x01 \x03(\v23.proto.ListRequestableRolesResponse.RequestableRoleR\x05roles\x12&\n" + + "\x0fnext_page_token\x18\x02 \x01(\tR\rnextPageToken\x1aG\n" + + "\x0fRequestableRole\x12\x12\n" + + "\x04name\x18\x01 \x01(\tR\x04name\x12 \n" + + "\vdescription\x18\x02 \x01(\tR\vdescriptionB4Z2github.com/gravitational/teleport/api/client/protob\x06proto3" + +var ( + file_teleport_legacy_client_proto_requestable_roles_proto_rawDescOnce sync.Once + file_teleport_legacy_client_proto_requestable_roles_proto_rawDescData []byte +) + +func file_teleport_legacy_client_proto_requestable_roles_proto_rawDescGZIP() []byte { + file_teleport_legacy_client_proto_requestable_roles_proto_rawDescOnce.Do(func() { + file_teleport_legacy_client_proto_requestable_roles_proto_rawDescData = 
protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_teleport_legacy_client_proto_requestable_roles_proto_rawDesc), len(file_teleport_legacy_client_proto_requestable_roles_proto_rawDesc))) + }) + return file_teleport_legacy_client_proto_requestable_roles_proto_rawDescData +} + +var file_teleport_legacy_client_proto_requestable_roles_proto_msgTypes = make([]protoimpl.MessageInfo, 4) +var file_teleport_legacy_client_proto_requestable_roles_proto_goTypes = []any{ + (*ListRequestableRolesRequest)(nil), // 0: proto.ListRequestableRolesRequest + (*ListRequestableRolesResponse)(nil), // 1: proto.ListRequestableRolesResponse + (*ListRequestableRolesRequest_Filter)(nil), // 2: proto.ListRequestableRolesRequest.Filter + (*ListRequestableRolesResponse_RequestableRole)(nil), // 3: proto.ListRequestableRolesResponse.RequestableRole +} +var file_teleport_legacy_client_proto_requestable_roles_proto_depIdxs = []int32{ + 2, // 0: proto.ListRequestableRolesRequest.filter:type_name -> proto.ListRequestableRolesRequest.Filter + 3, // 1: proto.ListRequestableRolesResponse.roles:type_name -> proto.ListRequestableRolesResponse.RequestableRole + 2, // [2:2] is the sub-list for method output_type + 2, // [2:2] is the sub-list for method input_type + 2, // [2:2] is the sub-list for extension type_name + 2, // [2:2] is the sub-list for extension extendee + 0, // [0:2] is the sub-list for field type_name +} + +func init() { file_teleport_legacy_client_proto_requestable_roles_proto_init() } +func file_teleport_legacy_client_proto_requestable_roles_proto_init() { + if File_teleport_legacy_client_proto_requestable_roles_proto != nil { + return + } + type x struct{} + out := protoimpl.TypeBuilder{ + File: protoimpl.DescBuilder{ + GoPackagePath: reflect.TypeOf(x{}).PkgPath(), + RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_legacy_client_proto_requestable_roles_proto_rawDesc), len(file_teleport_legacy_client_proto_requestable_roles_proto_rawDesc)), + NumEnums: 0, + 
NumMessages: 4, + NumExtensions: 0, + NumServices: 0, + }, + GoTypes: file_teleport_legacy_client_proto_requestable_roles_proto_goTypes, + DependencyIndexes: file_teleport_legacy_client_proto_requestable_roles_proto_depIdxs, + MessageInfos: file_teleport_legacy_client_proto_requestable_roles_proto_msgTypes, + }.Build() + File_teleport_legacy_client_proto_requestable_roles_proto = out.File + file_teleport_legacy_client_proto_requestable_roles_proto_goTypes = nil + file_teleport_legacy_client_proto_requestable_roles_proto_depIdxs = nil +} diff --git a/api/client/proto/types.go b/api/client/proto/types.go index 9f074aad7078c..75537847f9725 100644 --- a/api/client/proto/types.go +++ b/api/client/proto/types.go @@ -107,6 +107,8 @@ func (a *UpstreamInventoryAgentMetadata) sealedUpstreamInventoryMessage() {} func (h *UpstreamInventoryGoodbye) sealedUpstreamInventoryMessage() {} +func (h *UpstreamInventoryStopHeartbeat) sealedUpstreamInventoryMessage() {} + // DownstreamInventoryMessage is a sealed interface representing the possible // downstream messages of the inventory controls stream after initial hello. type DownstreamInventoryMessage interface { diff --git a/api/client/proxy/client.go b/api/client/proxy/client.go index 8413e99293ebf..3c2650dec0e4b 100644 --- a/api/client/proxy/client.go +++ b/api/client/proxy/client.go @@ -15,6 +15,7 @@ package proxy import ( + "cmp" "context" "crypto/tls" "crypto/x509" @@ -38,6 +39,7 @@ import ( "github.com/gravitational/teleport/api/defaults" transportv1pb "github.com/gravitational/teleport/api/gen/proto/go/teleport/transport/v1" "github.com/gravitational/teleport/api/metadata" + grpcutils "github.com/gravitational/teleport/api/utils/grpc" "github.com/gravitational/teleport/api/utils/grpc/interceptors" ) @@ -46,6 +48,8 @@ import ( type ClientConfig struct { // ProxyAddress is the address of the Proxy server. ProxyAddress string + // RelayAddress is the address of the relay in use, if any. 
+ RelayAddress string // TLSRoutingEnabled indicates if the cluster is using TLS Routing. TLSRoutingEnabled bool // TLSConfigFunc produces the [tls.Config] required for mTLS connections to a specific cluster. @@ -75,9 +79,15 @@ type ClientConfig struct { PROXYHeaderGetter client.PROXYHeaderGetter // The below items are intended to be used by tests to connect without mTLS. - // The gRPC transport credentials to use when establishing the connection to proxy. + + // creds returns gRPC transport credentials to use with the proxy sshgrpc + // server. creds func(cluster string) (credentials.TransportCredentials, error) - // The client credentials to use when establishing the connection to auth. + // relayCreds returns gRPC transport credentials to use with the relay + // transport server. + relayCreds func() (credentials.TransportCredentials, error) + // clientCreds returns the api client credentials to use when establishing + // the connection to auth. clientCreds func(cluster string) (client.Credentials, error) } @@ -137,6 +147,20 @@ func (c *ClientConfig) CheckAndSetDefaults(ctx context.Context) error { } } + return credentials.NewTLS(tlsCfg), nil + } + c.relayCreds = func() (credentials.TransportCredentials, error) { + const localCluster = "" + tlsCfg, err := c.TLSConfigFunc(localCluster) + if err != nil { + return nil, trace.Wrap(err) + } + + // the [credentials.NewTLS] transport credentials will take care of SNI + // and ALPN + tlsCfg.NextProtos = nil + tlsCfg.ServerName = "" + return credentials.NewTLS(tlsCfg), nil } } else { @@ -146,6 +170,9 @@ func (c *ClientConfig) CheckAndSetDefaults(ctx context.Context) error { c.creds = func(cluster string) (credentials.TransportCredentials, error) { return insecure.NewCredentials(), nil } + c.relayCreds = func() (credentials.TransportCredentials, error) { + return insecure.NewCredentials(), nil + } } return nil @@ -184,6 +211,12 @@ type Client struct { // clusterName as determined by inspecting the certificate presented by 
// the Proxy during the connection handshake. clusterName *clusterName + + // relayGrpcConn, if set, is the gRPC client to the relay transport server. + relayGrpcConn *grpc.ClientConn + // relayTransport, if set, is a [transportv1.Client] for the relay transport + // service. Should be used over the standard one, if set. + relayTransport *transportv1.Client } // protocolProxySSHGRPC is TLS ALPN protocol value used to indicate gRPC @@ -291,7 +324,8 @@ func newGRPCClient(ctx context.Context, cfg *ClientConfig) (_ *Client, err error c := &clusterName{} - creds, err := cfg.creds("") + const localCluster = "" + creds, err := cfg.creds(localCluster) if err != nil { return nil, trace.Wrap(err) } @@ -300,26 +334,24 @@ func newGRPCClient(ctx context.Context, cfg *ClientConfig) (_ *Client, err error dialCtx, cfg.ProxyAddress, append([]grpc.DialOption{ + grpc.WithStatsHandler(otelgrpc.NewClientHandler()), grpc.WithContextDialer(newDialerForGRPCClient(ctx, cfg)), grpc.WithTransportCredentials(&clusterCredentials{TransportCredentials: creds, clusterName: c}), grpc.WithChainUnaryInterceptor( append(cfg.UnaryInterceptors, - //nolint:staticcheck // SA1019. There is a data race in the stats.Handler that is replacing - // the interceptor. See https://github.com/open-telemetry/opentelemetry-go-contrib/issues/4576. - otelgrpc.UnaryClientInterceptor(), metadata.UnaryClientInterceptor, interceptors.GRPCClientUnaryErrorInterceptor, )..., ), grpc.WithChainStreamInterceptor( append(cfg.StreamInterceptors, - //nolint:staticcheck // SA1019. There is a data race in the stats.Handler that is replacing - // the interceptor. See https://github.com/open-telemetry/opentelemetry-go-contrib/issues/4576. 
- otelgrpc.StreamClientInterceptor(), metadata.StreamClientInterceptor, interceptors.GRPCClientStreamErrorInterceptor, )..., ), + grpc.WithDefaultCallOptions( + grpc.MaxCallRecvMsgSize(grpcutils.MaxClientRecvMsgSize()), + ), }, cfg.DialOpts...)..., ) if err != nil { @@ -337,11 +369,47 @@ func newGRPCClient(ctx context.Context, cfg *ClientConfig) (_ *Client, err error return nil, trace.Wrap(err) } + var relayGrpcConn *grpc.ClientConn + var relayTransport *transportv1.Client + if cfg.RelayAddress != "" { + relayCreds, err := cfg.relayCreds() + if err != nil { + return nil, trace.Wrap(err) + } + + cc, err := grpc.NewClient(cfg.RelayAddress, + grpc.WithTransportCredentials(relayCreds), + grpc.WithStatsHandler(otelgrpc.NewClientHandler()), + grpc.WithChainUnaryInterceptor( + metadata.UnaryClientInterceptor, + interceptors.GRPCClientUnaryErrorInterceptor, + ), + grpc.WithChainStreamInterceptor( + metadata.StreamClientInterceptor, + interceptors.GRPCClientStreamErrorInterceptor, + ), + ) + if err != nil { + return nil, trace.Wrap(err) + } + r, err := transportv1.NewClient(transportv1pb.NewTransportServiceClient(cc)) + if err != nil { + _ = cc.Close() + return nil, trace.Wrap(err) + } + + relayGrpcConn = cc + relayTransport = r + } + return &Client{ cfg: cfg, grpcConn: conn, transport: transport, clusterName: c, + + relayGrpcConn: relayGrpcConn, + relayTransport: relayTransport, }, nil } @@ -362,6 +430,9 @@ func (c *Client) ClusterName() string { // Close attempts to close both the gRPC and SSH connections. func (c *Client) Close() error { + if c.relayGrpcConn != nil { + return trace.NewAggregate(c.relayGrpcConn.Close(), c.grpcConn.Close()) + } return trace.Wrap(c.grpcConn.Close()) } @@ -429,9 +500,13 @@ func (c *Client) ClientConfig(ctx context.Context, cluster string) (client.Confi // DialHost establishes a connection to the `target` in cluster named `cluster`. If a keyring // is provided it will only be forwarded if proxy recording mode is enabled in the cluster. 
func (c *Client) DialHost(ctx context.Context, target, cluster string, keyring agent.ExtendedAgent) (net.Conn, ClusterDetails, error) { - conn, details, err := c.transport.DialHost(ctx, target, cluster, nil, keyring) + conn, details, err := cmp.Or(c.relayTransport, c.transport).DialHost(ctx, target, cluster, nil, keyring) if err != nil { - return nil, ClusterDetails{}, trace.ConnectionProblem(err, "failed connecting to host %s: %v", target, err) + host := target + if h, _, err := net.SplitHostPort(target); err == nil { + host = h + } + return nil, ClusterDetails{}, trace.ConnectionProblem(err, "failed connecting to host %s: %v", host, err) } return conn, ClusterDetails{FIPS: details.FipsEnabled}, nil @@ -439,7 +514,7 @@ func (c *Client) DialHost(ctx context.Context, target, cluster string, keyring a // ClusterDetails retrieves cluster information as seen by the Proxy. func (c *Client) ClusterDetails(ctx context.Context) (ClusterDetails, error) { - details, err := c.transport.ClusterDetails(ctx) + details, err := cmp.Or(c.relayTransport, c.transport).ClusterDetails(ctx) if err != nil { return ClusterDetails{}, trace.Wrap(err) } @@ -463,7 +538,7 @@ func (c *Client) Ping(ctx context.Context) error { // TODO(tross): Update to call Ping when it is added to the transport service. // For now we don't really care what method is used we just want to measure // how long it takes to get a reply. 
- _, _ = c.transport.ClusterDetails(ctx) + _, _ = cmp.Or(c.relayTransport, c.transport).ClusterDetails(ctx) return nil } diff --git a/api/client/proxy/transport/transportv1/client.go b/api/client/proxy/transport/transportv1/client.go index 92d92fb59ebc3..eff9c30d485c4 100644 --- a/api/client/proxy/transport/transportv1/client.go +++ b/api/client/proxy/transport/transportv1/client.go @@ -245,7 +245,7 @@ func (c *Client) DialHost(ctx context.Context, hostport, cluster string, src net stream, err := c.clt.ProxySSH(ctx) if err != nil { cancel() - return nil, nil, trace.Wrap(err, "unable to establish proxy stream") + return nil, nil, trace.Wrap(err, "opening proxy stream") } if err := stream.Send(&transportv1pb.ProxySSHRequest{DialTarget: &transportv1pb.TargetHost{ @@ -253,13 +253,13 @@ func (c *Client) DialHost(ctx context.Context, hostport, cluster string, src net Cluster: cluster, }}); err != nil { cancel() - return nil, nil, trace.Wrap(err, "failed to send dial target request") + return nil, nil, trace.Wrap(err, "sending dial target request") } resp, err := stream.Recv() if err != nil { cancel() - return nil, nil, trace.Wrap(err, "failed to receive cluster details response") + return nil, nil, trace.Wrap(err) } // create streams for ssh and agent protocol diff --git a/api/client/proxy/transport/transportv1/client_test.go b/api/client/proxy/transport/transportv1/client_test.go index 6cc7d8b835ba4..b69cdafaa0983 100644 --- a/api/client/proxy/transport/transportv1/client_test.go +++ b/api/client/proxy/transport/transportv1/client_test.go @@ -31,6 +31,7 @@ import ( "github.com/gravitational/trace" "github.com/stretchr/testify/require" "golang.org/x/crypto/ssh/agent" + "golang.org/x/sync/errgroup" "google.golang.org/grpc" "google.golang.org/grpc/credentials/insecure" "google.golang.org/grpc/test/bufconn" @@ -269,13 +270,16 @@ func TestClient_DialHost(t *testing.T) { return trail.ToGRPC(trace.Wrap(err)) } - // wait for the first ssh frame + // wait for the client to write 
some data req, err = server.Recv() if err != nil { + if errors.Is(err, io.EOF) { + return nil + } return trail.ToGRPC(trace.Wrap(err)) } - // write too much data to terminate the stream + // echo the received payload back to the client switch f := req.Frame.(type) { case *transportv1pb.ProxySSHRequest_Ssh: if err := server.Send(&transportv1pb.ProxySSHResponse{ @@ -287,7 +291,11 @@ func TestClient_DialHost(t *testing.T) { case *transportv1pb.ProxySSHRequest_Agent: return trail.ToGRPC(trace.BadParameter("test expects first frame to be ssh. got an agent frame")) } - return nil + + // wait for the client to reply again before exiting the handler + // to ensure that the client processed the echo. + _, err = server.Recv() + return trail.ToGRPC(trace.Wrap(err)) case req.DialTarget.Cluster == "forward": // send the initial cluster details if err := server.Send(&transportv1pb.ProxySSHResponse{Details: &transportv1pb.ClusterDetails{FipsEnabled: true}}); err != nil && !errors.Is(err, io.EOF) { @@ -322,48 +330,53 @@ func TestClient_DialHost(t *testing.T) { return trail.ToGRPC(trace.Wrap(err, "failed constructing ssh agent streamer")) } - // read in agent frames - go func() { - for { - req, err := server.Recv() - if err != nil { - if errors.Is(err, io.EOF) { - return - } + // process agent frames + var eg errgroup.Group + eg.Go(func() error { + // create an agent that will communicate over the agent frames + // and list the keys from the client + clt := agent.NewClient(agentStreamRW) + keys, err := clt.List() + if err != nil { + return trail.ToGRPC(trace.Wrap(err)) + } - return - } + if len(keys) != 1 { + return trail.ToGRPC(fmt.Errorf("expected to receive 1 key. 
got %v", len(keys))) + } - switch frame := req.Frame.(type) { - case *transportv1pb.ProxySSHRequest_Agent: - agentStream.incomingC <- frame.Agent.Payload - default: - continue - } + // send the key blob back via an ssh frame to alert the + // test that we finished listing keys + if err := server.Send(&transportv1pb.ProxySSHResponse{ + Details: nil, + Frame: &transportv1pb.ProxySSHResponse_Ssh{Ssh: &transportv1pb.Frame{Payload: keys[0].Blob}}, + }); err != nil && !errors.Is(err, io.EOF) { + return trail.ToGRPC(trace.Wrap(err)) } - }() - // create an agent that will communicate over the agent frames - // and list the keys from the client - clt := agent.NewClient(agentStreamRW) - keys, err := clt.List() - if err != nil { - return trail.ToGRPC(trace.Wrap(err)) - } + return nil + }) - if len(keys) != 1 { - return trail.ToGRPC(fmt.Errorf("expected to receive 1 key. got %v", len(keys))) - } + for { + req, err := server.Recv() + switch { + case errors.Is(err, io.EOF): + if err := eg.Wait(); err != nil { + return trail.ToGRPC(trace.Wrap(err)) + } - // send the key blob back via an ssh frame to alert the - // test that we finished listing keys - if err := server.Send(&transportv1pb.ProxySSHResponse{ - Details: nil, - Frame: &transportv1pb.ProxySSHResponse_Ssh{Ssh: &transportv1pb.Frame{Payload: keys[0].Blob}}, - }); err != nil && !errors.Is(err, io.EOF) { - return trail.ToGRPC(trace.Wrap(err)) + return nil + case err != nil: + return trace.Wrap(err) + } + + switch frame := req.Frame.(type) { + case *transportv1pb.ProxySSHRequest_Agent: + agentStream.incomingC <- frame.Agent.Payload + default: + continue + } } - return nil default: return trail.ToGRPC(trace.BadParameter("invalid cluster")) } @@ -425,17 +438,23 @@ func TestClient_DialHost(t *testing.T) { require.NoError(t, err) require.NotNil(t, conn) + // Write the message and expect it is echoed back. 
msg := []byte("hello") n, err := conn.Write(msg) require.NoError(t, err) require.Len(t, msg, n) - out := make([]byte, n) n, err = conn.Read(out) require.NoError(t, err) require.Len(t, msg, n) require.Equal(t, msg, out) + // Write the same message again to trigger the server to shut down. + n, err = conn.Write(msg) + require.NoError(t, err) + require.Len(t, msg, n) + + // Expect subsequent reads to result in an EOF. n, err = conn.Read(out) require.ErrorIs(t, err, io.EOF) require.Zero(t, n) @@ -473,7 +492,7 @@ func TestClient_DialHost(t *testing.T) { // the server performs a remote list of keys // via ssh frames. to prevent the test from terminating - // before it can complete it will write the blob of the + // before it can complete, it will write the blob of the // listed key back on the ssh frame. verify that the key // it received matches the one from our local keyring. out = make([]byte, len(keys[0].Blob)) diff --git a/api/client/scim/scim.go b/api/client/scim/scim.go index a0126d93499da..d587b79998a1c 100644 --- a/api/client/scim/scim.go +++ b/api/client/scim/scim.go @@ -83,3 +83,12 @@ func (c *Client) DeleteSCIMResource(ctx context.Context, req *scimpb.DeleteSCIMR } return res, nil } + +// PatchSCIMResource handles a request to patch a resource. +func (c *Client) PatchSCIMResource(ctx context.Context, request *scimpb.PatchSCIMResourceRequest) (*scimpb.Resource, error) { + resp, err := c.grpcClient.PatchSCIMResource(ctx, request) + if err != nil { + return nil, trace.Wrap(err, "handling SCIM patch request") + } + return resp, nil +} diff --git a/api/client/scopes/access/client.go b/api/client/scopes/access/client.go new file mode 100644 index 0000000000000..fa87499636f40 --- /dev/null +++ b/api/client/scopes/access/client.go @@ -0,0 +1,79 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package access + +import ( + "context" + + scopedaccessv1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/access/v1" +) + +// Client is a scoped access client that conforms to the following lib/services interfaces: +// * services.ScopedAccess +type Client struct { + grpcClient scopedaccessv1.ScopedAccessServiceClient +} + +// NewClient creates a new scoped access client. +func NewClient(grpcClient scopedaccessv1.ScopedAccessServiceClient) *Client { + return &Client{ + grpcClient: grpcClient, + } +} + +// GetScopedRole gets a scoped role by name. +func (c *Client) GetScopedRole(ctx context.Context, req *scopedaccessv1.GetScopedRoleRequest) (*scopedaccessv1.GetScopedRoleResponse, error) { + return c.grpcClient.GetScopedRole(ctx, req) +} + +// ListScopedRoles returns a paginated list of scoped roles. +func (c *Client) ListScopedRoles(ctx context.Context, req *scopedaccessv1.ListScopedRolesRequest) (*scopedaccessv1.ListScopedRolesResponse, error) { + return c.grpcClient.ListScopedRoles(ctx, req) +} + +// CreateScopedRole creates a new scoped role. +func (c *Client) CreateScopedRole(ctx context.Context, req *scopedaccessv1.CreateScopedRoleRequest) (*scopedaccessv1.CreateScopedRoleResponse, error) { + return c.grpcClient.CreateScopedRole(ctx, req) +} + +// UpdateScopedRole updates a scoped role.
+func (c *Client) UpdateScopedRole(ctx context.Context, req *scopedaccessv1.UpdateScopedRoleRequest) (*scopedaccessv1.UpdateScopedRoleResponse, error) { + return c.grpcClient.UpdateScopedRole(ctx, req) +} + +// DeleteScopedRole deletes a scoped role. +func (c *Client) DeleteScopedRole(ctx context.Context, req *scopedaccessv1.DeleteScopedRoleRequest) (*scopedaccessv1.DeleteScopedRoleResponse, error) { + return c.grpcClient.DeleteScopedRole(ctx, req) +} + +// GetScopedRoleAssignment gets a scoped role assignment by name. +func (c *Client) GetScopedRoleAssignment(ctx context.Context, req *scopedaccessv1.GetScopedRoleAssignmentRequest) (*scopedaccessv1.GetScopedRoleAssignmentResponse, error) { + return c.grpcClient.GetScopedRoleAssignment(ctx, req) +} + +// ListScopedRoleAssignments returns a paginated list of scoped role assignments. +func (c *Client) ListScopedRoleAssignments(ctx context.Context, req *scopedaccessv1.ListScopedRoleAssignmentsRequest) (*scopedaccessv1.ListScopedRoleAssignmentsResponse, error) { + return c.grpcClient.ListScopedRoleAssignments(ctx, req) +} + +// CreateScopedRoleAssignment creates a new scoped role assignment. +func (c *Client) CreateScopedRoleAssignment(ctx context.Context, req *scopedaccessv1.CreateScopedRoleAssignmentRequest) (*scopedaccessv1.CreateScopedRoleAssignmentResponse, error) { + return c.grpcClient.CreateScopedRoleAssignment(ctx, req) +} + +// DeleteScopedRoleAssignment deletes a scoped role assignment. 
+func (c *Client) DeleteScopedRoleAssignment(ctx context.Context, req *scopedaccessv1.DeleteScopedRoleAssignmentRequest) (*scopedaccessv1.DeleteScopedRoleAssignmentResponse, error) { + return c.grpcClient.DeleteScopedRoleAssignment(ctx, req) +} diff --git a/api/client/sessions.go b/api/client/sessions.go index 6d835204a522f..8bad70f749cc9 100644 --- a/api/client/sessions.go +++ b/api/client/sessions.go @@ -20,11 +20,14 @@ import ( "context" "errors" "io" + "iter" "github.com/gravitational/trace" "google.golang.org/protobuf/types/known/emptypb" + "github.com/gravitational/teleport/api/client/proto" "github.com/gravitational/teleport/api/types" + "github.com/gravitational/teleport/api/utils/clientutils" ) // GetWebSession returns the web session for the specified request. @@ -119,29 +122,18 @@ type webSessions struct { c *Client } -// GetWebToken returns the web token for the specified request. -// Implements ReadAccessPoint +// GetWebToken returns the web token for the specified request func (c *Client) GetWebToken(ctx context.Context, req types.GetWebTokenRequest) (types.WebToken, error) { - return c.WebTokens().Get(ctx, req) -} - -// WebTokens returns the web tokens controller -func (c *Client) WebTokens() types.WebTokenInterface { - return &webTokens{c: c} -} - -// Get returns the web token for the specified request -func (r *webTokens) Get(ctx context.Context, req types.GetWebTokenRequest) (types.WebToken, error) { - resp, err := r.c.grpc.GetWebToken(ctx, &req) + resp, err := c.grpc.GetWebToken(ctx, &req) if err != nil { return nil, trace.Wrap(err) } return resp.Token, nil } -// List returns the list of all web tokens -func (r *webTokens) List(ctx context.Context) ([]types.WebToken, error) { - resp, err := r.c.grpc.GetWebTokens(ctx, &emptypb.Empty{}) +// GetWebTokens returns the list of all web tokens +func (c *Client) GetWebTokens(ctx context.Context) ([]types.WebToken, error) { + resp, err := c.grpc.GetWebTokens(ctx, &emptypb.Empty{}) if err != nil { return 
nil, trace.Wrap(err) } @@ -152,29 +144,82 @@ func (r *webTokens) List(ctx context.Context) ([]types.WebToken, error) { return out, nil } -// Upsert not implemented: can only be called locally. -func (r *webTokens) Upsert(ctx context.Context, token types.WebToken) error { +// ListWebTokens returns a page of web tokens +func (c *Client) ListWebTokens(ctx context.Context, limit int, start string) ([]types.WebToken, string, error) { + resp, err := c.grpc.ListWebTokens(ctx, &proto.ListWebTokensRequest{ + PageToken: start, + PageSize: int32(limit), + }) + if err != nil { + return nil, "", trace.Wrap(err) + } + + tokens := make([]types.WebToken, 0, len(resp.Tokens)) + for _, token := range resp.Tokens { + tokens = append(tokens, token) + } + + return tokens, resp.NextPageToken, nil +} + +// RangeWebTokens returns web tokens within the range [start, end). +func (c *Client) RangeWebTokens(ctx context.Context, start, end string) iter.Seq2[types.WebToken, error] { + return clientutils.RangeResources(ctx, start, end, c.ListWebTokens, types.WebToken.GetName) +} + +// UpsertWebToken not implemented: can only be called locally. 
+func (c *Client) UpsertWebToken(ctx context.Context, token types.WebToken) error { return trace.NotImplemented(notImplementedMessage) } -// Delete deletes the web token specified with the request -func (r *webTokens) Delete(ctx context.Context, req types.DeleteWebTokenRequest) error { - _, err := r.c.grpc.DeleteWebToken(ctx, &req) +// DeleteWebToken deletes the web token specified with the request +func (c *Client) DeleteWebToken(ctx context.Context, req types.DeleteWebTokenRequest) error { + _, err := c.grpc.DeleteWebToken(ctx, &req) if err != nil { return trace.Wrap(err) } return nil } -// DeleteAll deletes all web tokens -func (r *webTokens) DeleteAll(ctx context.Context) error { - _, err := r.c.grpc.DeleteAllWebTokens(ctx, &emptypb.Empty{}) +// DeleteAllWebTokens deletes all web tokens +func (c *Client) DeleteAllWebTokens(ctx context.Context) error { + _, err := c.grpc.DeleteAllWebTokens(ctx, &emptypb.Empty{}) if err != nil { return trace.Wrap(err) } return nil } +// WebTokens returns the web tokens controller +func (c *Client) WebTokens() types.WebTokenInterface { + return &webTokens{c: c} +} + +// Get returns the web token for the specified request +func (r *webTokens) Get(ctx context.Context, req types.GetWebTokenRequest) (types.WebToken, error) { + return r.c.GetWebToken(ctx, req) +} + +// List returns the list of all web tokens +func (r *webTokens) List(ctx context.Context) ([]types.WebToken, error) { + return clientutils.CollectWithFallback(ctx, r.c.ListWebTokens, r.c.GetWebTokens) +} + +// Upsert not implemented: can only be called locally. 
+func (r *webTokens) Upsert(ctx context.Context, token types.WebToken) error { + return r.c.UpsertWebToken(ctx, token) +} + +// Delete deletes the web token specified with the request +func (r *webTokens) Delete(ctx context.Context, req types.DeleteWebTokenRequest) error { + return r.c.DeleteWebToken(ctx, req) +} + +// DeleteAll deletes all web tokens +func (r *webTokens) DeleteAll(ctx context.Context) error { + return r.c.DeleteAllWebTokens(ctx) +} + type webTokens struct { c *Client } diff --git a/api/client/userloginstate/userloginstate.go b/api/client/userloginstate/userloginstate.go index fc328d5d7e3fc..f52ba39b22bb4 100644 --- a/api/client/userloginstate/userloginstate.go +++ b/api/client/userloginstate/userloginstate.go @@ -94,3 +94,23 @@ func (c *Client) DeleteAllUserLoginStates(ctx context.Context) error { _, err := c.grpcClient.DeleteAllUserLoginStates(ctx, &userloginstatev1.DeleteAllUserLoginStatesRequest{}) return trace.Wrap(err) } + +// ListUserLoginStates returns a paginated list of user login state resources. 
+func (c *Client) ListUserLoginStates(ctx context.Context, pageSize int, nextToken string) ([]*userloginstate.UserLoginState, string, error) { + resp, err := c.grpcClient.ListUserLoginStates(ctx, &userloginstatev1.ListUserLoginStatesRequest{ + PageSize: int32(pageSize), + PageToken: nextToken, + }) + if err != nil { + return nil, "", trace.Wrap(err) + } + out := make([]*userloginstate.UserLoginState, 0, len(resp.UserLoginStates)) + for _, v := range resp.UserLoginStates { + item, err := conv.FromProto(v) + if err != nil { + return nil, "", trace.Wrap(err) + } + out = append(out, item) + } + return out, resp.GetNextPageToken(), nil +} diff --git a/api/client/userloginstate/userloginstate_test.go b/api/client/userloginstate/userloginstate_test.go index 7b9d0830e4835..0ed4164346acd 100644 --- a/api/client/userloginstate/userloginstate_test.go +++ b/api/client/userloginstate/userloginstate_test.go @@ -19,6 +19,7 @@ import ( "testing" "github.com/google/go-cmp/cmp" + "github.com/gravitational/trace" "github.com/stretchr/testify/require" "google.golang.org/grpc" "google.golang.org/protobuf/types/known/emptypb" @@ -29,6 +30,7 @@ import ( "github.com/gravitational/teleport/api/types/trait" "github.com/gravitational/teleport/api/types/userloginstate" conv "github.com/gravitational/teleport/api/types/userloginstate/convert/v1" + "github.com/gravitational/teleport/api/utils/clientutils" ) type mockClient struct { @@ -74,7 +76,31 @@ func (m *mockClient) DeleteAllUserLoginStates(_ context.Context, in *userloginst return nil, nil } -func TestGetUserLoginStates(t *testing.T) { +func (m *mockClient) ListUserLoginStates(ctx context.Context, in *userloginstatev1.ListUserLoginStatesRequest, opts ...grpc.CallOption) (*userloginstatev1.ListUserLoginStatesResponse, error) { + if in.PageSize != 1 { + return nil, trace.BadParameter("unsupported page size, expected 1, got %d", in.PageSize) + } + switch in.PageToken { + case "", "uls1": + return 
&userloginstatev1.ListUserLoginStatesResponse{ + UserLoginStates: []*userloginstatev1.UserLoginState{newUserLoginStateProto(m.t, "uls1")}, + NextPageToken: "uls2", + }, nil + case "uls2": + return &userloginstatev1.ListUserLoginStatesResponse{ + UserLoginStates: []*userloginstatev1.UserLoginState{newUserLoginStateProto(m.t, "uls2")}, + NextPageToken: "uls3", + }, nil + case "uls3": + return &userloginstatev1.ListUserLoginStatesResponse{ + UserLoginStates: []*userloginstatev1.UserLoginState{newUserLoginStateProto(m.t, "uls3")}, + NextPageToken: "", + }, nil + } + return nil, trace.BadParameter("unsupported page token") +} + +func TestGetListUserLoginStates(t *testing.T) { t.Parallel() mockClient := &mockClient{t: t} client := NewClient(mockClient) @@ -89,6 +115,20 @@ func TestGetUserLoginStates(t *testing.T) { newUserLoginState(t, "uls2"), newUserLoginState(t, "uls3"), }, states)) + + t.Run("test list user login state with pagination", func(t *testing.T) { + var items []*userloginstate.UserLoginState + for item, err := range clientutils.ResourcesWithPageSize(context.Background(), client.ListUserLoginStates, 1) { + require.NoError(t, err) + items = append(items, item) + } + + require.Empty(t, cmp.Diff([]*userloginstate.UserLoginState{ + newUserLoginState(t, "uls1"), + newUserLoginState(t, "uls2"), + newUserLoginState(t, "uls3"), + }, items)) + }) } func TestGetUserLoginState(t *testing.T) { diff --git a/api/client/webclient/webconfig.go b/api/client/webclient/webconfig.go index 432995a7f2c96..e1de70d3859d5 100644 --- a/api/client/webclient/webconfig.go +++ b/api/client/webclient/webconfig.go @@ -26,13 +26,13 @@ const ( WebConfigAuthProviderOIDCType = "oidc" // WebConfigAuthProviderOIDCURL is OIDC webapi endpoint. // redirect_url MUST be the last query param, see the comment in parseSSORequestParams for an explanation. 
- WebConfigAuthProviderOIDCURL = "/v1/webapi/oidc/login/web?connector_id=:providerName&redirect_url=:redirect" + WebConfigAuthProviderOIDCURL = "/v1/webapi/oidc/login/web?connector_id=:providerName&login_hint=:loginHint?&redirect_url=:redirect" // WebConfigAuthProviderSAMLType is SAML provider type WebConfigAuthProviderSAMLType = "saml" // WebConfigAuthProviderSAMLURL is SAML webapi endpoint. // redirect_url MUST be the last query param, see the comment in parseSSORequestParams for an explanation. - WebConfigAuthProviderSAMLURL = "/v1/webapi/saml/sso?connector_id=:providerName&redirect_url=:redirect" + WebConfigAuthProviderSAMLURL = "/v1/webapi/saml/sso?connector_id=:providerName&login_hint=:loginHint?&redirect_url=:redirect" // WebConfigAuthProviderGitHubType is GitHub provider type WebConfigAuthProviderGitHubType = "github" @@ -82,6 +82,9 @@ type WebConfig struct { // PlayableDatabaseProtocols is a list of database protocols which session // recordings can be played. PlayableDatabaseProtocols []string `json:"playable_db_protocols"` + // SessionSummarizerEnabled indicates whether the session recording + // summarizer is configured. + SessionSummarizerEnabled bool `json:"sessionSummarizerEnabled,omitempty"` // entitlements define a customer’s access to a specific features Entitlements map[string]EntitlementInfo `json:"entitlements,omitempty"` @@ -205,4 +208,6 @@ type WebConfigAuthSettings struct { PrivateKeyPolicy keys.PrivateKeyPolicy `json:"privateKeyPolicy,omitempty"` // MOTD is message of the day. MOTD is displayed to users before login. MOTD string `json:"motd"` + // IdentifierFirstLoginEnabled is whether identifier-first login is enabled. It is true if one or more auth connectors have a `user_matchers` field set.
+ IdentifierFirstLoginEnabled bool `json:"identifierFirstLoginEnabled,omitempty"` } diff --git a/api/constants/constants.go b/api/constants/constants.go index 918af99028fd2..22267566a4480 100644 --- a/api/constants/constants.go +++ b/api/constants/constants.go @@ -198,6 +198,20 @@ const ( OIDCPKCEModeDisabled OIDCPKCEMode = "disabled" ) +// OIDCRequestObjectMode represents the Request Object Mode of an OIDC Connector. +type OIDCRequestObjectMode string + +const ( + // OIDCRequestObjectModeUnknown indicates an unknown or uninitialized state of the request object mode. + OIDCRequestObjectModeUnknown OIDCRequestObjectMode = "" + // OIDCRequestObjectModeNone indicates that request objects should not be used. Parameters should be encoded + // into the URI of the authorization request. + OIDCRequestObjectModeNone OIDCRequestObjectMode = "none" + // OIDCRequestObjectModeSigned indicates that a signed (unencrypted) request object should be encoded into + // the URI of the authorization request. + OIDCRequestObjectModeSigned OIDCRequestObjectMode = "signed" +) + // SecondFactorType is the type of 2FA authentication. type SecondFactorType string @@ -294,6 +308,10 @@ const ( // DeviceTrustModeRequired enforces the presence of device extensions for // sensitive endpoints. DeviceTrustModeRequired DeviceTrustMode = "required" + // DeviceTrustModeRequiredForHumans enforces the presence of device + // extensions for sensitive endpoints if the user is human. In this mode, + // bots are exempt from device trust checks. + DeviceTrustModeRequiredForHumans DeviceTrustMode = "required-for-humans" ) const ( @@ -417,6 +435,13 @@ const ( // TraitGitHubOrgs is the name of the variable to specify the GitHub // organizations for GitHub integration. TraitGitHubOrgs = "github_orgs" + // TraitMCPTools is the name of the variable to specify the MCP tools for + // MCP servers. 
+ TraitMCPTools = "mcp_tools" + + // TraitDefaultRelayAddr is the trait used to specify the default relay + // address passed to clients at login time. + TraitDefaultRelayAddr = "default_relay_addr" ) const ( @@ -527,6 +552,8 @@ const ( EnvVarTerraformIdentityFile = "TF_TELEPORT_IDENTITY_FILE" // EnvVarTerraformIdentityFileBase64 is the environment variable containing the base64-encoded identity file used by the Terraform provider. EnvVarTerraformIdentityFileBase64 = "TF_TELEPORT_IDENTITY_FILE_BASE64" + // EnvVarTerraformInsecure is the environment variable used to control whether the Terraform provider will skip verifying the proxy server's TLS certificate. + EnvVarTerraformInsecure = "TF_TELEPORT_INSECURE" // EnvVarTerraformRetryBaseDuration is the environment variable configuring the base duration between two Terraform provider retries. EnvVarTerraformRetryBaseDuration = "TF_TELEPORT_RETRY_BASE_DURATION" // EnvVarTerraformRetryCapDuration is the environment variable configuring the maximum duration between two Terraform provider retries. @@ -556,3 +583,7 @@ const MaxPIVPINCacheTTL = time.Hour // routine running in every auth server. Any report older than this period should // be considered stale. const AutoUpdateAgentReportPeriod = time.Minute + +// AutoUpdateBotInstanceReportPeriod is the period of the autoupdate bot instance +// reporting routine. 
+const AutoUpdateBotInstanceReportPeriod = time.Minute diff --git a/api/fixtures/fixtures.go b/api/fixtures/fixtures.go index 573e4d3c9f651..d888c4377e1ab 100644 --- a/api/fixtures/fixtures.go +++ b/api/fixtures/fixtures.go @@ -14,6 +14,10 @@ package fixtures +import ( + "time" +) + const ( TLSCACertPEM = `-----BEGIN CERTIFICATE----- MIIDKjCCAhKgAwIBAgIQJtJDJZZBkg/afM8d2ZJCTjANBgkqhkiG9w0BAQsFADBA @@ -62,3 +66,10 @@ LJxgC1GdoEz2ilXW802H9QrdKf9GPqxwi2TVzfO6pzWkdZcmbItu+QCCFz+co+r8 +ki49FmlfbR5YVPN+8X40aLQB4xDkCHwRwTkrigzWQhIOv8NAhDA -----END RSA PRIVATE KEY-----` ) + +var ( + // TLSCACertNotBefore is the "Not before" date of TLSCACertPEM. + TLSCACertNotBefore = time.Date(2017, time.May, 9, 19, 40, 36, 0, time.UTC) + // TLSCACertNotAfter is the "Not after" date of TLSCACertPEM. + TLSCACertNotAfter = time.Date(2027, time.May, 7, 19, 40, 36, 0, time.UTC) +) diff --git a/api/gen/proto/go/teleport/accessgraph/v1/authorized_key.pb.go b/api/gen/proto/go/teleport/accessgraph/v1/authorized_key.pb.go index dae73d25abf70..a0c910968da01 100644 --- a/api/gen/proto/go/teleport/accessgraph/v1/authorized_key.pb.go +++ b/api/gen/proto/go/teleport/accessgraph/v1/authorized_key.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/access_graph/v1/authorized_key.proto diff --git a/api/gen/proto/go/teleport/accessgraph/v1/private_key.pb.go b/api/gen/proto/go/teleport/accessgraph/v1/private_key.pb.go index a38ec50649f19..deb2fdc5ea0e2 100644 --- a/api/gen/proto/go/teleport/accessgraph/v1/private_key.pb.go +++ b/api/gen/proto/go/teleport/accessgraph/v1/private_key.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/access_graph/v1/private_key.proto diff --git a/api/gen/proto/go/teleport/accessgraph/v1/secrets_service.pb.go b/api/gen/proto/go/teleport/accessgraph/v1/secrets_service.pb.go index f9d3acbbf7f7a..fcef88b7667cf 100644 --- a/api/gen/proto/go/teleport/accessgraph/v1/secrets_service.pb.go +++ b/api/gen/proto/go/teleport/accessgraph/v1/secrets_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/access_graph/v1/secrets_service.proto diff --git a/api/gen/proto/go/teleport/accesslist/v1/accesslist.pb.go b/api/gen/proto/go/teleport/accesslist/v1/accesslist.pb.go index 5bb613f074455..8c5906e946529 100644 --- a/api/gen/proto/go/teleport/accesslist/v1/accesslist.pb.go +++ b/api/gen/proto/go/teleport/accesslist/v1/accesslist.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/accesslist/v1/accesslist.proto @@ -411,7 +411,11 @@ type AccessListSpec struct { // title is a plaintext short description of the Access List. Title string `protobuf:"bytes,8,opt,name=title,proto3" json:"title,omitempty"` // owner_grants describes the access granted by owners to this Access List. - OwnerGrants *AccessListGrants `protobuf:"bytes,11,opt,name=owner_grants,json=ownerGrants,proto3" json:"owner_grants,omitempty"` + OwnerGrants *AccessListGrants `protobuf:"bytes,11,opt,name=owner_grants,json=ownerGrants,proto3" json:"owner_grants,omitempty"` + // type can be an empty string, which denotes a regular Access List, "scim", which represents + // an Access List created from a SCIM group, or "static", for Access Lists managed by IaC + // tools.
+ Type string `protobuf:"bytes,12,opt,name=type,proto3" json:"type,omitempty"` unknownFields protoimpl.UnknownFields sizeCache protoimpl.SizeCache } @@ -502,6 +506,13 @@ func (x *AccessListSpec) GetOwnerGrants() *AccessListGrants { return nil } +func (x *AccessListSpec) GetType() string { + if x != nil { + return x.Type + } + return "" +} + // AccessListOwner is an owner of an Access List. type AccessListOwner struct { state protoimpl.MessageState `protogen:"open.v1"` @@ -1298,6 +1309,61 @@ func (x *CurrentUserAssignments) GetMembershipType() AccessListUserAssignmentTyp return AccessListUserAssignmentType_ACCESS_LIST_USER_ASSIGNMENT_TYPE_UNSPECIFIED } +// UserAssignments describes the requested user's ownership and membership assignment types in the access list. +type UserAssignments struct { + state protoimpl.MessageState `protogen:"open.v1"` + // ownership_type represents the requested user's ownership type (explicit, inherited, or none) in the access list. + OwnershipType AccessListUserAssignmentType `protobuf:"varint,1,opt,name=ownership_type,json=ownershipType,proto3,enum=teleport.accesslist.v1.AccessListUserAssignmentType" json:"ownership_type,omitempty"` + // membership_type represents the requested user's membership type (explicit, inherited, or none) in the access list. 
+ MembershipType AccessListUserAssignmentType `protobuf:"varint,2,opt,name=membership_type,json=membershipType,proto3,enum=teleport.accesslist.v1.AccessListUserAssignmentType" json:"membership_type,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UserAssignments) Reset() { + *x = UserAssignments{} + mi := &file_teleport_accesslist_v1_accesslist_proto_msgTypes[14] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UserAssignments) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UserAssignments) ProtoMessage() {} + +func (x *UserAssignments) ProtoReflect() protoreflect.Message { + mi := &file_teleport_accesslist_v1_accesslist_proto_msgTypes[14] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UserAssignments.ProtoReflect.Descriptor instead. +func (*UserAssignments) Descriptor() ([]byte, []int) { + return file_teleport_accesslist_v1_accesslist_proto_rawDescGZIP(), []int{14} +} + +func (x *UserAssignments) GetOwnershipType() AccessListUserAssignmentType { + if x != nil { + return x.OwnershipType + } + return AccessListUserAssignmentType_ACCESS_LIST_USER_ASSIGNMENT_TYPE_UNSPECIFIED +} + +func (x *UserAssignments) GetMembershipType() AccessListUserAssignmentType { + if x != nil { + return x.MembershipType + } + return AccessListUserAssignmentType_ACCESS_LIST_USER_ASSIGNMENT_TYPE_UNSPECIFIED +} + // AccessListStatus contains dynamic fields calculated during retrieval. 
type AccessListStatus struct { state protoimpl.MessageState `protogen:"open.v1"` @@ -1311,13 +1377,15 @@ type AccessListStatus struct { MemberOf []string `protobuf:"bytes,4,rep,name=member_of,json=memberOf,proto3" json:"member_of,omitempty"` // current_user_assignments describes the current user's ownership and membership status in the access list. CurrentUserAssignments *CurrentUserAssignments `protobuf:"bytes,5,opt,name=current_user_assignments,json=currentUserAssignments,proto3" json:"current_user_assignments,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache + // user_assignments describes the requested user's ownership and membership assignment types in the access list. + UserAssignments *UserAssignments `protobuf:"bytes,6,opt,name=user_assignments,json=userAssignments,proto3" json:"user_assignments,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache } func (x *AccessListStatus) Reset() { *x = AccessListStatus{} - mi := &file_teleport_accesslist_v1_accesslist_proto_msgTypes[14] + mi := &file_teleport_accesslist_v1_accesslist_proto_msgTypes[15] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1329,7 +1397,7 @@ func (x *AccessListStatus) String() string { func (*AccessListStatus) ProtoMessage() {} func (x *AccessListStatus) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_proto_msgTypes[14] + mi := &file_teleport_accesslist_v1_accesslist_proto_msgTypes[15] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1342,7 +1410,7 @@ func (x *AccessListStatus) ProtoReflect() protoreflect.Message { // Deprecated: Use AccessListStatus.ProtoReflect.Descriptor instead. 
func (*AccessListStatus) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_proto_rawDescGZIP(), []int{14} + return file_teleport_accesslist_v1_accesslist_proto_rawDescGZIP(), []int{15} } func (x *AccessListStatus) GetMemberCount() uint32 { @@ -1380,6 +1448,13 @@ func (x *AccessListStatus) GetCurrentUserAssignments() *CurrentUserAssignments { return nil } +func (x *AccessListStatus) GetUserAssignments() *UserAssignments { + if x != nil { + return x.UserAssignments + } + return nil +} + var File_teleport_accesslist_v1_accesslist_proto protoreflect.FileDescriptor const file_teleport_accesslist_v1_accesslist_proto_rawDesc = "" + @@ -1389,7 +1464,7 @@ const file_teleport_accesslist_v1_accesslist_proto_rawDesc = "" + "AccessList\x12:\n" + "\x06header\x18\x01 \x01(\v2\".teleport.header.v1.ResourceHeaderR\x06header\x12:\n" + "\x04spec\x18\x02 \x01(\v2&.teleport.accesslist.v1.AccessListSpecR\x04spec\x12@\n" + - "\x06status\x18\x03 \x01(\v2(.teleport.accesslist.v1.AccessListStatusR\x06status\"\xc1\x04\n" + + "\x06status\x18\x03 \x01(\v2(.teleport.accesslist.v1.AccessListStatusR\x06status\"\xd5\x04\n" + "\x0eAccessListSpec\x12 \n" + "\vdescription\x18\x01 \x01(\tR\vdescription\x12?\n" + "\x06owners\x18\x02 \x03(\v2'.teleport.accesslist.v1.AccessListOwnerR\x06owners\x12=\n" + @@ -1398,7 +1473,8 @@ const file_teleport_accesslist_v1_accesslist_proto_rawDesc = "" + "\x12ownership_requires\x18\x05 \x01(\v2*.teleport.accesslist.v1.AccessListRequiresR\x11ownershipRequires\x12@\n" + "\x06grants\x18\x06 \x01(\v2(.teleport.accesslist.v1.AccessListGrantsR\x06grants\x12\x14\n" + "\x05title\x18\b \x01(\tR\x05title\x12K\n" + - "\fowner_grants\x18\v \x01(\v2(.teleport.accesslist.v1.AccessListGrantsR\vownerGrantsJ\x04\b\a\x10\bJ\x04\b\t\x10\n" + + "\fowner_grants\x18\v \x01(\v2(.teleport.accesslist.v1.AccessListGrantsR\vownerGrants\x12\x12\n" + + "\x04type\x18\f \x01(\tR\x04typeJ\x04\b\a\x10\bJ\x04\b\t\x10\n" + "J\x04\b\n" + "\x10\vR\amembersR\n" + 
"membershipR\townership\"\xef\x01\n" + @@ -1460,13 +1536,17 @@ const file_teleport_accesslist_v1_accesslist_proto_rawDesc = "" + "\x1breview_day_of_month_changed\x18\x05 \x01(\x0e2(.teleport.accesslist.v1.ReviewDayOfMonthR\x17reviewDayOfMonthChangedJ\x04\b\x01\x10\x02R\x11frequency_changed\"\xd4\x01\n" + "\x16CurrentUserAssignments\x12[\n" + "\x0eownership_type\x18\x01 \x01(\x0e24.teleport.accesslist.v1.AccessListUserAssignmentTypeR\rownershipType\x12]\n" + - "\x0fmembership_type\x18\x02 \x01(\x0e24.teleport.accesslist.v1.AccessListUserAssignmentTypeR\x0emembershipType\"\xb4\x02\n" + + "\x0fmembership_type\x18\x02 \x01(\x0e24.teleport.accesslist.v1.AccessListUserAssignmentTypeR\x0emembershipType\"\xcd\x01\n" + + "\x0fUserAssignments\x12[\n" + + "\x0eownership_type\x18\x01 \x01(\x0e24.teleport.accesslist.v1.AccessListUserAssignmentTypeR\rownershipType\x12]\n" + + "\x0fmembership_type\x18\x02 \x01(\x0e24.teleport.accesslist.v1.AccessListUserAssignmentTypeR\x0emembershipType\"\x88\x03\n" + "\x10AccessListStatus\x12&\n" + "\fmember_count\x18\x01 \x01(\rH\x00R\vmemberCount\x88\x01\x01\x12/\n" + "\x11member_list_count\x18\x02 \x01(\rH\x01R\x0fmemberListCount\x88\x01\x01\x12\x19\n" + "\bowner_of\x18\x03 \x03(\tR\aownerOf\x12\x1b\n" + "\tmember_of\x18\x04 \x03(\tR\bmemberOf\x12h\n" + - "\x18current_user_assignments\x18\x05 \x01(\v2..teleport.accesslist.v1.CurrentUserAssignmentsR\x16currentUserAssignmentsB\x0f\n" + + "\x18current_user_assignments\x18\x05 \x01(\v2..teleport.accesslist.v1.CurrentUserAssignmentsR\x16currentUserAssignments\x12R\n" + + "\x10user_assignments\x18\x06 \x01(\v2'.teleport.accesslist.v1.UserAssignmentsR\x0fuserAssignmentsB\x0f\n" + "\r_member_countB\x14\n" + "\x12_member_list_count*\xb6\x01\n" + "\x0fReviewFrequency\x12 \n" + @@ -1508,7 +1588,7 @@ func file_teleport_accesslist_v1_accesslist_proto_rawDescGZIP() []byte { } var file_teleport_accesslist_v1_accesslist_proto_enumTypes = make([]protoimpl.EnumInfo, 5) -var 
file_teleport_accesslist_v1_accesslist_proto_msgTypes = make([]protoimpl.MessageInfo, 15) +var file_teleport_accesslist_v1_accesslist_proto_msgTypes = make([]protoimpl.MessageInfo, 16) var file_teleport_accesslist_v1_accesslist_proto_goTypes = []any{ (ReviewFrequency)(0), // 0: teleport.accesslist.v1.ReviewFrequency (ReviewDayOfMonth)(0), // 1: teleport.accesslist.v1.ReviewDayOfMonth @@ -1529,16 +1609,17 @@ var file_teleport_accesslist_v1_accesslist_proto_goTypes = []any{ (*ReviewSpec)(nil), // 16: teleport.accesslist.v1.ReviewSpec (*ReviewChanges)(nil), // 17: teleport.accesslist.v1.ReviewChanges (*CurrentUserAssignments)(nil), // 18: teleport.accesslist.v1.CurrentUserAssignments - (*AccessListStatus)(nil), // 19: teleport.accesslist.v1.AccessListStatus - (*v1.ResourceHeader)(nil), // 20: teleport.header.v1.ResourceHeader - (*timestamppb.Timestamp)(nil), // 21: google.protobuf.Timestamp - (*durationpb.Duration)(nil), // 22: google.protobuf.Duration - (*v11.Trait)(nil), // 23: teleport.trait.v1.Trait + (*UserAssignments)(nil), // 19: teleport.accesslist.v1.UserAssignments + (*AccessListStatus)(nil), // 20: teleport.accesslist.v1.AccessListStatus + (*v1.ResourceHeader)(nil), // 21: teleport.header.v1.ResourceHeader + (*timestamppb.Timestamp)(nil), // 22: google.protobuf.Timestamp + (*durationpb.Duration)(nil), // 23: google.protobuf.Duration + (*v11.Trait)(nil), // 24: teleport.trait.v1.Trait } var file_teleport_accesslist_v1_accesslist_proto_depIdxs = []int32{ - 20, // 0: teleport.accesslist.v1.AccessList.header:type_name -> teleport.header.v1.ResourceHeader + 21, // 0: teleport.accesslist.v1.AccessList.header:type_name -> teleport.header.v1.ResourceHeader 6, // 1: teleport.accesslist.v1.AccessList.spec:type_name -> teleport.accesslist.v1.AccessListSpec - 19, // 2: teleport.accesslist.v1.AccessList.status:type_name -> teleport.accesslist.v1.AccessListStatus + 20, // 2: teleport.accesslist.v1.AccessList.status:type_name -> teleport.accesslist.v1.AccessListStatus 7, 
// 3: teleport.accesslist.v1.AccessListSpec.owners:type_name -> teleport.accesslist.v1.AccessListOwner 8, // 4: teleport.accesslist.v1.AccessListSpec.audit:type_name -> teleport.accesslist.v1.AccessListAudit 11, // 5: teleport.accesslist.v1.AccessListSpec.membership_requires:type_name -> teleport.accesslist.v1.AccessListRequires @@ -1547,35 +1628,38 @@ var file_teleport_accesslist_v1_accesslist_proto_depIdxs = []int32{ 12, // 8: teleport.accesslist.v1.AccessListSpec.owner_grants:type_name -> teleport.accesslist.v1.AccessListGrants 3, // 9: teleport.accesslist.v1.AccessListOwner.ineligible_status:type_name -> teleport.accesslist.v1.IneligibleStatus 2, // 10: teleport.accesslist.v1.AccessListOwner.membership_kind:type_name -> teleport.accesslist.v1.MembershipKind - 21, // 11: teleport.accesslist.v1.AccessListAudit.next_audit_date:type_name -> google.protobuf.Timestamp + 22, // 11: teleport.accesslist.v1.AccessListAudit.next_audit_date:type_name -> google.protobuf.Timestamp 9, // 12: teleport.accesslist.v1.AccessListAudit.recurrence:type_name -> teleport.accesslist.v1.Recurrence 10, // 13: teleport.accesslist.v1.AccessListAudit.notifications:type_name -> teleport.accesslist.v1.Notifications 0, // 14: teleport.accesslist.v1.Recurrence.frequency:type_name -> teleport.accesslist.v1.ReviewFrequency 1, // 15: teleport.accesslist.v1.Recurrence.day_of_month:type_name -> teleport.accesslist.v1.ReviewDayOfMonth - 22, // 16: teleport.accesslist.v1.Notifications.start:type_name -> google.protobuf.Duration - 23, // 17: teleport.accesslist.v1.AccessListRequires.traits:type_name -> teleport.trait.v1.Trait - 23, // 18: teleport.accesslist.v1.AccessListGrants.traits:type_name -> teleport.trait.v1.Trait - 20, // 19: teleport.accesslist.v1.Member.header:type_name -> teleport.header.v1.ResourceHeader + 23, // 16: teleport.accesslist.v1.Notifications.start:type_name -> google.protobuf.Duration + 24, // 17: teleport.accesslist.v1.AccessListRequires.traits:type_name -> 
teleport.trait.v1.Trait + 24, // 18: teleport.accesslist.v1.AccessListGrants.traits:type_name -> teleport.trait.v1.Trait + 21, // 19: teleport.accesslist.v1.Member.header:type_name -> teleport.header.v1.ResourceHeader 14, // 20: teleport.accesslist.v1.Member.spec:type_name -> teleport.accesslist.v1.MemberSpec - 21, // 21: teleport.accesslist.v1.MemberSpec.joined:type_name -> google.protobuf.Timestamp - 21, // 22: teleport.accesslist.v1.MemberSpec.expires:type_name -> google.protobuf.Timestamp + 22, // 21: teleport.accesslist.v1.MemberSpec.joined:type_name -> google.protobuf.Timestamp + 22, // 22: teleport.accesslist.v1.MemberSpec.expires:type_name -> google.protobuf.Timestamp 3, // 23: teleport.accesslist.v1.MemberSpec.ineligible_status:type_name -> teleport.accesslist.v1.IneligibleStatus 2, // 24: teleport.accesslist.v1.MemberSpec.membership_kind:type_name -> teleport.accesslist.v1.MembershipKind - 20, // 25: teleport.accesslist.v1.Review.header:type_name -> teleport.header.v1.ResourceHeader + 21, // 25: teleport.accesslist.v1.Review.header:type_name -> teleport.header.v1.ResourceHeader 16, // 26: teleport.accesslist.v1.Review.spec:type_name -> teleport.accesslist.v1.ReviewSpec - 21, // 27: teleport.accesslist.v1.ReviewSpec.review_date:type_name -> google.protobuf.Timestamp + 22, // 27: teleport.accesslist.v1.ReviewSpec.review_date:type_name -> google.protobuf.Timestamp 17, // 28: teleport.accesslist.v1.ReviewSpec.changes:type_name -> teleport.accesslist.v1.ReviewChanges 11, // 29: teleport.accesslist.v1.ReviewChanges.membership_requirements_changed:type_name -> teleport.accesslist.v1.AccessListRequires 0, // 30: teleport.accesslist.v1.ReviewChanges.review_frequency_changed:type_name -> teleport.accesslist.v1.ReviewFrequency 1, // 31: teleport.accesslist.v1.ReviewChanges.review_day_of_month_changed:type_name -> teleport.accesslist.v1.ReviewDayOfMonth 4, // 32: teleport.accesslist.v1.CurrentUserAssignments.ownership_type:type_name -> 
teleport.accesslist.v1.AccessListUserAssignmentType 4, // 33: teleport.accesslist.v1.CurrentUserAssignments.membership_type:type_name -> teleport.accesslist.v1.AccessListUserAssignmentType - 18, // 34: teleport.accesslist.v1.AccessListStatus.current_user_assignments:type_name -> teleport.accesslist.v1.CurrentUserAssignments - 35, // [35:35] is the sub-list for method output_type - 35, // [35:35] is the sub-list for method input_type - 35, // [35:35] is the sub-list for extension type_name - 35, // [35:35] is the sub-list for extension extendee - 0, // [0:35] is the sub-list for field type_name + 4, // 34: teleport.accesslist.v1.UserAssignments.ownership_type:type_name -> teleport.accesslist.v1.AccessListUserAssignmentType + 4, // 35: teleport.accesslist.v1.UserAssignments.membership_type:type_name -> teleport.accesslist.v1.AccessListUserAssignmentType + 18, // 36: teleport.accesslist.v1.AccessListStatus.current_user_assignments:type_name -> teleport.accesslist.v1.CurrentUserAssignments + 19, // 37: teleport.accesslist.v1.AccessListStatus.user_assignments:type_name -> teleport.accesslist.v1.UserAssignments + 38, // [38:38] is the sub-list for method output_type + 38, // [38:38] is the sub-list for method input_type + 38, // [38:38] is the sub-list for extension type_name + 38, // [38:38] is the sub-list for extension extendee + 0, // [0:38] is the sub-list for field type_name } func init() { file_teleport_accesslist_v1_accesslist_proto_init() } @@ -1583,14 +1667,14 @@ func file_teleport_accesslist_v1_accesslist_proto_init() { if File_teleport_accesslist_v1_accesslist_proto != nil { return } - file_teleport_accesslist_v1_accesslist_proto_msgTypes[14].OneofWrappers = []any{} + file_teleport_accesslist_v1_accesslist_proto_msgTypes[15].OneofWrappers = []any{} type x struct{} out := protoimpl.TypeBuilder{ File: protoimpl.DescBuilder{ GoPackagePath: reflect.TypeOf(x{}).PkgPath(), RawDescriptor: 
unsafe.Slice(unsafe.StringData(file_teleport_accesslist_v1_accesslist_proto_rawDesc), len(file_teleport_accesslist_v1_accesslist_proto_rawDesc)), NumEnums: 5, - NumMessages: 15, + NumMessages: 16, NumExtensions: 0, NumServices: 0, }, diff --git a/api/gen/proto/go/teleport/accesslist/v1/accesslist_service.pb.go b/api/gen/proto/go/teleport/accesslist/v1/accesslist_service.pb.go index 97894b571bf44..2b79fee1ecb43 100644 --- a/api/gen/proto/go/teleport/accesslist/v1/accesslist_service.pb.go +++ b/api/gen/proto/go/teleport/accesslist/v1/accesslist_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/accesslist/v1/accesslist_service.proto @@ -231,6 +231,207 @@ func (x *ListAccessListsResponse) GetNextToken() string { return "" } +// ListAccessListsV2Request is the request for getting filtered and sorted paginated access lists. +type ListAccessListsV2Request struct { + state protoimpl.MessageState `protogen:"open.v1"` + // page_size is the size of the page to request. + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // page_token is the token to begin the next page with. + PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + // sort_by specifies the sort order for the results. + SortBy *types.SortBy `protobuf:"bytes,3,opt,name=sort_by,json=sortBy,proto3" json:"sort_by,omitempty"` + // filter is a collection of fields to filter access lists. 
+ Filter *AccessListsFilter `protobuf:"bytes,4,opt,name=filter,proto3" json:"filter,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListAccessListsV2Request) Reset() { + *x = ListAccessListsV2Request{} + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[4] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListAccessListsV2Request) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListAccessListsV2Request) ProtoMessage() {} + +func (x *ListAccessListsV2Request) ProtoReflect() protoreflect.Message { + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[4] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListAccessListsV2Request.ProtoReflect.Descriptor instead. +func (*ListAccessListsV2Request) Descriptor() ([]byte, []int) { + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{4} +} + +func (x *ListAccessListsV2Request) GetPageSize() int32 { + if x != nil { + return x.PageSize + } + return 0 +} + +func (x *ListAccessListsV2Request) GetPageToken() string { + if x != nil { + return x.PageToken + } + return "" +} + +func (x *ListAccessListsV2Request) GetSortBy() *types.SortBy { + if x != nil { + return x.SortBy + } + return nil +} + +func (x *ListAccessListsV2Request) GetFilter() *AccessListsFilter { + if x != nil { + return x.Filter + } + return nil +} + +// AccessListsFilter is used to collect filter options for listing access lists. +type AccessListsFilter struct { + state protoimpl.MessageState `protogen:"open.v1"` + // search is a search term to filter access lists by name. 
+ Search string `protobuf:"bytes,1,opt,name=search,proto3" json:"search,omitempty"` + // owners indicates returned access lists should be owned by one of the provided owners + Owners []string `protobuf:"bytes,2,rep,name=owners,proto3" json:"owners,omitempty"` + // roles indicates returned access lists should grant one of the provided roles + Roles []string `protobuf:"bytes,3,rep,name=roles,proto3" json:"roles,omitempty"` + // origin is the origin of the resource + Origin string `protobuf:"bytes,4,opt,name=origin,proto3" json:"origin,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *AccessListsFilter) Reset() { + *x = AccessListsFilter{} + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[5] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *AccessListsFilter) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*AccessListsFilter) ProtoMessage() {} + +func (x *AccessListsFilter) ProtoReflect() protoreflect.Message { + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[5] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use AccessListsFilter.ProtoReflect.Descriptor instead.
+func (*AccessListsFilter) Descriptor() ([]byte, []int) { + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{5} +} + +func (x *AccessListsFilter) GetSearch() string { + if x != nil { + return x.Search + } + return "" +} + +func (x *AccessListsFilter) GetOwners() []string { + if x != nil { + return x.Owners + } + return nil +} + +func (x *AccessListsFilter) GetRoles() []string { + if x != nil { + return x.Roles + } + return nil +} + +func (x *AccessListsFilter) GetOrigin() string { + if x != nil { + return x.Origin + } + return "" +} + +// ListAccessListsV2Response is the response for getting paginated access lists. +type ListAccessListsV2Response struct { + state protoimpl.MessageState `protogen:"open.v1"` + // access_lists is the list of access lists. + AccessLists []*AccessList `protobuf:"bytes,1,rep,name=access_lists,json=accessLists,proto3" json:"access_lists,omitempty"` + // next_page_token is the next page token. + NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListAccessListsV2Response) Reset() { + *x = ListAccessListsV2Response{} + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[6] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListAccessListsV2Response) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListAccessListsV2Response) ProtoMessage() {} + +func (x *ListAccessListsV2Response) ProtoReflect() protoreflect.Message { + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[6] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListAccessListsV2Response.ProtoReflect.Descriptor instead. 
+func (*ListAccessListsV2Response) Descriptor() ([]byte, []int) { + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{6} +} + +func (x *ListAccessListsV2Response) GetAccessLists() []*AccessList { + if x != nil { + return x.AccessLists + } + return nil +} + +func (x *ListAccessListsV2Response) GetNextPageToken() string { + if x != nil { + return x.NextPageToken + } + return "" +} + // GetInheritedGrantsRequest is the request for getting inherited grants. type GetInheritedGrantsRequest struct { state protoimpl.MessageState `protogen:"open.v1"` @@ -242,7 +443,7 @@ type GetInheritedGrantsRequest struct { func (x *GetInheritedGrantsRequest) Reset() { *x = GetInheritedGrantsRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[4] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[7] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -254,7 +455,7 @@ func (x *GetInheritedGrantsRequest) String() string { func (*GetInheritedGrantsRequest) ProtoMessage() {} func (x *GetInheritedGrantsRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[4] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[7] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -267,7 +468,7 @@ func (x *GetInheritedGrantsRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use GetInheritedGrantsRequest.ProtoReflect.Descriptor instead. 
func (*GetInheritedGrantsRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{4} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{7} } func (x *GetInheritedGrantsRequest) GetAccessListId() string { @@ -288,7 +489,7 @@ type GetInheritedGrantsResponse struct { func (x *GetInheritedGrantsResponse) Reset() { *x = GetInheritedGrantsResponse{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[5] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[8] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -300,7 +501,7 @@ func (x *GetInheritedGrantsResponse) String() string { func (*GetInheritedGrantsResponse) ProtoMessage() {} func (x *GetInheritedGrantsResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[5] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[8] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -313,7 +514,7 @@ func (x *GetInheritedGrantsResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use GetInheritedGrantsResponse.ProtoReflect.Descriptor instead. 
func (*GetInheritedGrantsResponse) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{5} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{8} } func (x *GetInheritedGrantsResponse) GetGrants() *AccessListGrants { @@ -334,7 +535,7 @@ type GetAccessListRequest struct { func (x *GetAccessListRequest) Reset() { *x = GetAccessListRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[6] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[9] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -346,7 +547,7 @@ func (x *GetAccessListRequest) String() string { func (*GetAccessListRequest) ProtoMessage() {} func (x *GetAccessListRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[6] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[9] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -359,7 +560,7 @@ func (x *GetAccessListRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use GetAccessListRequest.ProtoReflect.Descriptor instead. 
func (*GetAccessListRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{6} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{9} } func (x *GetAccessListRequest) GetName() string { @@ -380,7 +581,7 @@ type UpsertAccessListRequest struct { func (x *UpsertAccessListRequest) Reset() { *x = UpsertAccessListRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[7] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[10] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -392,7 +593,7 @@ func (x *UpsertAccessListRequest) String() string { func (*UpsertAccessListRequest) ProtoMessage() {} func (x *UpsertAccessListRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[7] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[10] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -405,7 +606,7 @@ func (x *UpsertAccessListRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use UpsertAccessListRequest.ProtoReflect.Descriptor instead. 
func (*UpsertAccessListRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{7} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{10} } func (x *UpsertAccessListRequest) GetAccessList() *AccessList { @@ -426,7 +627,7 @@ type UpdateAccessListRequest struct { func (x *UpdateAccessListRequest) Reset() { *x = UpdateAccessListRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[8] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[11] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -438,7 +639,7 @@ func (x *UpdateAccessListRequest) String() string { func (*UpdateAccessListRequest) ProtoMessage() {} func (x *UpdateAccessListRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[8] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[11] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -451,7 +652,7 @@ func (x *UpdateAccessListRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use UpdateAccessListRequest.ProtoReflect.Descriptor instead. 
func (*UpdateAccessListRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{8} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{11} } func (x *UpdateAccessListRequest) GetAccessList() *AccessList { @@ -472,7 +673,7 @@ type DeleteAccessListRequest struct { func (x *DeleteAccessListRequest) Reset() { *x = DeleteAccessListRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[9] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[12] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -484,7 +685,7 @@ func (x *DeleteAccessListRequest) String() string { func (*DeleteAccessListRequest) ProtoMessage() {} func (x *DeleteAccessListRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[9] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[12] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -497,7 +698,7 @@ func (x *DeleteAccessListRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use DeleteAccessListRequest.ProtoReflect.Descriptor instead. 
func (*DeleteAccessListRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{9} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{12} } func (x *DeleteAccessListRequest) GetName() string { @@ -516,7 +717,7 @@ type DeleteAllAccessListsRequest struct { func (x *DeleteAllAccessListsRequest) Reset() { *x = DeleteAllAccessListsRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[10] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[13] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -528,7 +729,7 @@ func (x *DeleteAllAccessListsRequest) String() string { func (*DeleteAllAccessListsRequest) ProtoMessage() {} func (x *DeleteAllAccessListsRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[10] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[13] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -541,7 +742,7 @@ func (x *DeleteAllAccessListsRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use DeleteAllAccessListsRequest.ProtoReflect.Descriptor instead. 
func (*DeleteAllAccessListsRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{10} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{13} } // GetAccessListsToReviewRequest is the request for getting access lists that @@ -554,7 +755,7 @@ type GetAccessListsToReviewRequest struct { func (x *GetAccessListsToReviewRequest) Reset() { *x = GetAccessListsToReviewRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[11] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[14] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -566,7 +767,7 @@ func (x *GetAccessListsToReviewRequest) String() string { func (*GetAccessListsToReviewRequest) ProtoMessage() {} func (x *GetAccessListsToReviewRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[11] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[14] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -579,7 +780,7 @@ func (x *GetAccessListsToReviewRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use GetAccessListsToReviewRequest.ProtoReflect.Descriptor instead. 
func (*GetAccessListsToReviewRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{11} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{14} } // GetAccessListsToReviewResponse is the response for getting access lists that @@ -593,7 +794,7 @@ type GetAccessListsToReviewResponse struct { func (x *GetAccessListsToReviewResponse) Reset() { *x = GetAccessListsToReviewResponse{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[12] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[15] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -605,7 +806,7 @@ func (x *GetAccessListsToReviewResponse) String() string { func (*GetAccessListsToReviewResponse) ProtoMessage() {} func (x *GetAccessListsToReviewResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[12] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[15] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -618,7 +819,7 @@ func (x *GetAccessListsToReviewResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use GetAccessListsToReviewResponse.ProtoReflect.Descriptor instead. 
func (*GetAccessListsToReviewResponse) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{12} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{15} } func (x *GetAccessListsToReviewResponse) GetAccessLists() []*AccessList { @@ -640,7 +841,7 @@ type CountAccessListMembersRequest struct { func (x *CountAccessListMembersRequest) Reset() { *x = CountAccessListMembersRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[13] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[16] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -652,7 +853,7 @@ func (x *CountAccessListMembersRequest) String() string { func (*CountAccessListMembersRequest) ProtoMessage() {} func (x *CountAccessListMembersRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[13] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[16] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -665,7 +866,7 @@ func (x *CountAccessListMembersRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use CountAccessListMembersRequest.ProtoReflect.Descriptor instead. 
func (*CountAccessListMembersRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{13} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{16} } func (x *CountAccessListMembersRequest) GetAccessListName() string { @@ -689,7 +890,7 @@ type CountAccessListMembersResponse struct { func (x *CountAccessListMembersResponse) Reset() { *x = CountAccessListMembersResponse{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[14] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[17] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -701,7 +902,7 @@ func (x *CountAccessListMembersResponse) String() string { func (*CountAccessListMembersResponse) ProtoMessage() {} func (x *CountAccessListMembersResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[14] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[17] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -714,7 +915,7 @@ func (x *CountAccessListMembersResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use CountAccessListMembersResponse.ProtoReflect.Descriptor instead. 
func (*CountAccessListMembersResponse) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{14} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{17} } func (x *CountAccessListMembersResponse) GetCount() uint32 { @@ -747,7 +948,7 @@ type ListAccessListMembersRequest struct { func (x *ListAccessListMembersRequest) Reset() { *x = ListAccessListMembersRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[15] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[18] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -759,7 +960,7 @@ func (x *ListAccessListMembersRequest) String() string { func (*ListAccessListMembersRequest) ProtoMessage() {} func (x *ListAccessListMembersRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[15] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[18] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -772,7 +973,7 @@ func (x *ListAccessListMembersRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use ListAccessListMembersRequest.ProtoReflect.Descriptor instead. 
func (*ListAccessListMembersRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{15} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{18} } func (x *ListAccessListMembersRequest) GetPageSize() int32 { @@ -810,7 +1011,7 @@ type ListAccessListMembersResponse struct { func (x *ListAccessListMembersResponse) Reset() { *x = ListAccessListMembersResponse{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[16] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[19] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -822,7 +1023,7 @@ func (x *ListAccessListMembersResponse) String() string { func (*ListAccessListMembersResponse) ProtoMessage() {} func (x *ListAccessListMembersResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[16] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[19] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -835,7 +1036,7 @@ func (x *ListAccessListMembersResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use ListAccessListMembersResponse.ProtoReflect.Descriptor instead. 
func (*ListAccessListMembersResponse) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{16} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{19} } func (x *ListAccessListMembersResponse) GetMembers() []*Member { @@ -866,7 +1067,7 @@ type ListAllAccessListMembersRequest struct { func (x *ListAllAccessListMembersRequest) Reset() { *x = ListAllAccessListMembersRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[17] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[20] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -878,7 +1079,7 @@ func (x *ListAllAccessListMembersRequest) String() string { func (*ListAllAccessListMembersRequest) ProtoMessage() {} func (x *ListAllAccessListMembersRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[17] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[20] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -891,7 +1092,7 @@ func (x *ListAllAccessListMembersRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use ListAllAccessListMembersRequest.ProtoReflect.Descriptor instead. 
func (*ListAllAccessListMembersRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{17} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{20} } func (x *ListAllAccessListMembersRequest) GetPageSize() int32 { @@ -922,7 +1123,7 @@ type ListAllAccessListMembersResponse struct { func (x *ListAllAccessListMembersResponse) Reset() { *x = ListAllAccessListMembersResponse{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[18] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[21] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -934,7 +1135,7 @@ func (x *ListAllAccessListMembersResponse) String() string { func (*ListAllAccessListMembersResponse) ProtoMessage() {} func (x *ListAllAccessListMembersResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[18] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[21] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -947,7 +1148,7 @@ func (x *ListAllAccessListMembersResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use ListAllAccessListMembersResponse.ProtoReflect.Descriptor instead. 
func (*ListAllAccessListMembersResponse) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{18} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{21} } func (x *ListAllAccessListMembersResponse) GetMembers() []*Member { @@ -978,7 +1179,7 @@ type UpsertAccessListWithMembersRequest struct { func (x *UpsertAccessListWithMembersRequest) Reset() { *x = UpsertAccessListWithMembersRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[19] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[22] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -990,7 +1191,7 @@ func (x *UpsertAccessListWithMembersRequest) String() string { func (*UpsertAccessListWithMembersRequest) ProtoMessage() {} func (x *UpsertAccessListWithMembersRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[19] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[22] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1003,7 +1204,7 @@ func (x *UpsertAccessListWithMembersRequest) ProtoReflect() protoreflect.Message // Deprecated: Use UpsertAccessListWithMembersRequest.ProtoReflect.Descriptor instead. 
func (*UpsertAccessListWithMembersRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{19} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{22} } func (x *UpsertAccessListWithMembersRequest) GetAccessList() *AccessList { @@ -1034,7 +1235,7 @@ type UpsertAccessListWithMembersResponse struct { func (x *UpsertAccessListWithMembersResponse) Reset() { *x = UpsertAccessListWithMembersResponse{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[20] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[23] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1046,7 +1247,7 @@ func (x *UpsertAccessListWithMembersResponse) String() string { func (*UpsertAccessListWithMembersResponse) ProtoMessage() {} func (x *UpsertAccessListWithMembersResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[20] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[23] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1059,7 +1260,7 @@ func (x *UpsertAccessListWithMembersResponse) ProtoReflect() protoreflect.Messag // Deprecated: Use UpsertAccessListWithMembersResponse.ProtoReflect.Descriptor instead. func (*UpsertAccessListWithMembersResponse) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{20} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{23} } func (x *UpsertAccessListWithMembersResponse) GetAccessList() *AccessList { @@ -1076,8 +1277,7 @@ func (x *UpsertAccessListWithMembersResponse) GetMembers() []*Member { return nil } -// GetAccessListMemberRequest is the request for retrieving an access list -// member. 
+// GetAccessListMemberRequest is the request for retrieving an access_list_member. type GetAccessListMemberRequest struct { state protoimpl.MessageState `protogen:"open.v1"` // access_list is the name of the access list that the member belongs to. @@ -1090,7 +1290,7 @@ type GetAccessListMemberRequest struct { func (x *GetAccessListMemberRequest) Reset() { *x = GetAccessListMemberRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[21] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[24] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1102,7 +1302,7 @@ func (x *GetAccessListMemberRequest) String() string { func (*GetAccessListMemberRequest) ProtoMessage() {} func (x *GetAccessListMemberRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[21] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[24] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1115,7 +1315,7 @@ func (x *GetAccessListMemberRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use GetAccessListMemberRequest.ProtoReflect.Descriptor instead. func (*GetAccessListMemberRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{21} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{24} } func (x *GetAccessListMemberRequest) GetAccessList() string { @@ -1132,6 +1332,109 @@ func (x *GetAccessListMemberRequest) GetMemberName() string { return "" } +// GetStaticAccessListMemberRequest is the request for retrieving an access_list_member of a static +// type access_list. +type GetStaticAccessListMemberRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // access_list is the name of the access_list that the member belongs to. 
+ AccessList string `protobuf:"bytes,1,opt,name=access_list,json=accessList,proto3" json:"access_list,omitempty"` + // member_name is the name of the user that belongs to the access_list. + MemberName string `protobuf:"bytes,2,opt,name=member_name,json=memberName,proto3" json:"member_name,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetStaticAccessListMemberRequest) Reset() { + *x = GetStaticAccessListMemberRequest{} + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[25] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetStaticAccessListMemberRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetStaticAccessListMemberRequest) ProtoMessage() {} + +func (x *GetStaticAccessListMemberRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[25] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetStaticAccessListMemberRequest.ProtoReflect.Descriptor instead. +func (*GetStaticAccessListMemberRequest) Descriptor() ([]byte, []int) { + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{25} +} + +func (x *GetStaticAccessListMemberRequest) GetAccessList() string { + if x != nil { + return x.AccessList + } + return "" +} + +func (x *GetStaticAccessListMemberRequest) GetMemberName() string { + if x != nil { + return x.MemberName + } + return "" +} + +// GetStaticAccessListMemberResponse is the response containing the access_list_member of the +// target access_list of static type. +type GetStaticAccessListMemberResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // member of the target static access_list. 
+ Member *Member `protobuf:"bytes,1,opt,name=member,proto3" json:"member,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetStaticAccessListMemberResponse) Reset() { + *x = GetStaticAccessListMemberResponse{} + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[26] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetStaticAccessListMemberResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetStaticAccessListMemberResponse) ProtoMessage() {} + +func (x *GetStaticAccessListMemberResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[26] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetStaticAccessListMemberResponse.ProtoReflect.Descriptor instead. +func (*GetStaticAccessListMemberResponse) Descriptor() ([]byte, []int) { + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{26} +} + +func (x *GetStaticAccessListMemberResponse) GetMember() *Member { + if x != nil { + return x.Member + } + return nil +} + // GetAccessListOwnersRequest is the request for getting a list of all owners // in an Access List, including those inherited from nested Access Lists. 
type GetAccessListOwnersRequest struct { @@ -1144,7 +1447,7 @@ type GetAccessListOwnersRequest struct { func (x *GetAccessListOwnersRequest) Reset() { *x = GetAccessListOwnersRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[22] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[27] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1156,7 +1459,7 @@ func (x *GetAccessListOwnersRequest) String() string { func (*GetAccessListOwnersRequest) ProtoMessage() {} func (x *GetAccessListOwnersRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[22] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[27] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1169,7 +1472,7 @@ func (x *GetAccessListOwnersRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use GetAccessListOwnersRequest.ProtoReflect.Descriptor instead. 
func (*GetAccessListOwnersRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{22} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{27} } func (x *GetAccessListOwnersRequest) GetAccessList() string { @@ -1192,7 +1495,7 @@ type GetAccessListOwnersResponse struct { func (x *GetAccessListOwnersResponse) Reset() { *x = GetAccessListOwnersResponse{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[23] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[28] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1204,7 +1507,7 @@ func (x *GetAccessListOwnersResponse) String() string { func (*GetAccessListOwnersResponse) ProtoMessage() {} func (x *GetAccessListOwnersResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[23] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[28] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1217,7 +1520,7 @@ func (x *GetAccessListOwnersResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use GetAccessListOwnersResponse.ProtoReflect.Descriptor instead. func (*GetAccessListOwnersResponse) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{23} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{28} } func (x *GetAccessListOwnersResponse) GetOwners() []*AccessListOwner { @@ -1227,31 +1530,125 @@ func (x *GetAccessListOwnersResponse) GetOwners() []*AccessListOwner { return nil } -// UpsertAccessListMemberRequest is the request for upserting an access list -// member. -type UpsertAccessListMemberRequest struct { +// UpsertAccessListMemberRequest is the request for upserting an access list +// member. 
+type UpsertAccessListMemberRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // member is the access list member to upsert. + Member *Member `protobuf:"bytes,4,opt,name=member,proto3" json:"member,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UpsertAccessListMemberRequest) Reset() { + *x = UpsertAccessListMemberRequest{} + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[29] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UpsertAccessListMemberRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UpsertAccessListMemberRequest) ProtoMessage() {} + +func (x *UpsertAccessListMemberRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[29] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UpsertAccessListMemberRequest.ProtoReflect.Descriptor instead. +func (*UpsertAccessListMemberRequest) Descriptor() ([]byte, []int) { + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{29} +} + +func (x *UpsertAccessListMemberRequest) GetMember() *Member { + if x != nil { + return x.Member + } + return nil +} + +// UpsertStaticAccessListMemberRequest is the request for upserting an access_list_member to an +// access_list of type static. +type UpsertStaticAccessListMemberRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // member is the access_list_member to upsert. 
+ Member *Member `protobuf:"bytes,1,opt,name=member,proto3" json:"member,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UpsertStaticAccessListMemberRequest) Reset() { + *x = UpsertStaticAccessListMemberRequest{} + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[30] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UpsertStaticAccessListMemberRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UpsertStaticAccessListMemberRequest) ProtoMessage() {} + +func (x *UpsertStaticAccessListMemberRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[30] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UpsertStaticAccessListMemberRequest.ProtoReflect.Descriptor instead. +func (*UpsertStaticAccessListMemberRequest) Descriptor() ([]byte, []int) { + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{30} +} + +func (x *UpsertStaticAccessListMemberRequest) GetMember() *Member { + if x != nil { + return x.Member + } + return nil +} + +// UpsertStaticAccessListMemberResponse is the response of upserting an access_list_member to an +// access_list of type static. +type UpsertStaticAccessListMemberResponse struct { state protoimpl.MessageState `protogen:"open.v1"` - // member is the access list member to upsert. - Member *Member `protobuf:"bytes,4,opt,name=member,proto3" json:"member,omitempty"` + // member is the upserted access_list_member. 
+ Member *Member `protobuf:"bytes,1,opt,name=member,proto3" json:"member,omitempty"` unknownFields protoimpl.UnknownFields sizeCache protoimpl.SizeCache } -func (x *UpsertAccessListMemberRequest) Reset() { - *x = UpsertAccessListMemberRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[24] +func (x *UpsertStaticAccessListMemberResponse) Reset() { + *x = UpsertStaticAccessListMemberResponse{} + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[31] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } -func (x *UpsertAccessListMemberRequest) String() string { +func (x *UpsertStaticAccessListMemberResponse) String() string { return protoimpl.X.MessageStringOf(x) } -func (*UpsertAccessListMemberRequest) ProtoMessage() {} +func (*UpsertStaticAccessListMemberResponse) ProtoMessage() {} -func (x *UpsertAccessListMemberRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[24] +func (x *UpsertStaticAccessListMemberResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[31] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1262,12 +1659,12 @@ func (x *UpsertAccessListMemberRequest) ProtoReflect() protoreflect.Message { return mi.MessageOf(x) } -// Deprecated: Use UpsertAccessListMemberRequest.ProtoReflect.Descriptor instead. -func (*UpsertAccessListMemberRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{24} +// Deprecated: Use UpsertStaticAccessListMemberResponse.ProtoReflect.Descriptor instead. 
+func (*UpsertStaticAccessListMemberResponse) Descriptor() ([]byte, []int) { + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{31} } -func (x *UpsertAccessListMemberRequest) GetMember() *Member { +func (x *UpsertStaticAccessListMemberResponse) GetMember() *Member { if x != nil { return x.Member } @@ -1286,7 +1683,7 @@ type UpdateAccessListMemberRequest struct { func (x *UpdateAccessListMemberRequest) Reset() { *x = UpdateAccessListMemberRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[25] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[32] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1298,7 +1695,7 @@ func (x *UpdateAccessListMemberRequest) String() string { func (*UpdateAccessListMemberRequest) ProtoMessage() {} func (x *UpdateAccessListMemberRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[25] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[32] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1311,7 +1708,7 @@ func (x *UpdateAccessListMemberRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use UpdateAccessListMemberRequest.ProtoReflect.Descriptor instead. 
func (*UpdateAccessListMemberRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{25} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{32} } func (x *UpdateAccessListMemberRequest) GetMember() *Member { @@ -1335,7 +1732,7 @@ type DeleteAccessListMemberRequest struct { func (x *DeleteAccessListMemberRequest) Reset() { *x = DeleteAccessListMemberRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[26] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[33] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1347,7 +1744,7 @@ func (x *DeleteAccessListMemberRequest) String() string { func (*DeleteAccessListMemberRequest) ProtoMessage() {} func (x *DeleteAccessListMemberRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[26] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[33] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1360,7 +1757,7 @@ func (x *DeleteAccessListMemberRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use DeleteAccessListMemberRequest.ProtoReflect.Descriptor instead. func (*DeleteAccessListMemberRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{26} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{33} } func (x *DeleteAccessListMemberRequest) GetAccessList() string { @@ -1377,6 +1774,100 @@ func (x *DeleteAccessListMemberRequest) GetMemberName() string { return "" } +// DeleteStaticAccessListMemberRequest is the request for deleting an access_list_member from an +// access_list of type static. 
+type DeleteStaticAccessListMemberRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // access_list is the name of access list. + AccessList string `protobuf:"bytes,1,opt,name=access_list,json=accessList,proto3" json:"access_list,omitempty"` + // member_name is the name of the user to delete. + MemberName string `protobuf:"bytes,2,opt,name=member_name,json=memberName,proto3" json:"member_name,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *DeleteStaticAccessListMemberRequest) Reset() { + *x = DeleteStaticAccessListMemberRequest{} + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[34] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *DeleteStaticAccessListMemberRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*DeleteStaticAccessListMemberRequest) ProtoMessage() {} + +func (x *DeleteStaticAccessListMemberRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[34] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use DeleteStaticAccessListMemberRequest.ProtoReflect.Descriptor instead. +func (*DeleteStaticAccessListMemberRequest) Descriptor() ([]byte, []int) { + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{34} +} + +func (x *DeleteStaticAccessListMemberRequest) GetAccessList() string { + if x != nil { + return x.AccessList + } + return "" +} + +func (x *DeleteStaticAccessListMemberRequest) GetMemberName() string { + if x != nil { + return x.MemberName + } + return "" +} + +// DeleteStaticAccessListMemberResponse is the response of deleting an access_list_member from an +// access_list of type static. 
+type DeleteStaticAccessListMemberResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *DeleteStaticAccessListMemberResponse) Reset() { + *x = DeleteStaticAccessListMemberResponse{} + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[35] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *DeleteStaticAccessListMemberResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*DeleteStaticAccessListMemberResponse) ProtoMessage() {} + +func (x *DeleteStaticAccessListMemberResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[35] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use DeleteStaticAccessListMemberResponse.ProtoReflect.Descriptor instead. +func (*DeleteStaticAccessListMemberResponse) Descriptor() ([]byte, []int) { + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{35} +} + // DeleteAllAccessListMembersForAccessListRequest is the request for deleting // all members from an access list. 
type DeleteAllAccessListMembersForAccessListRequest struct { @@ -1389,7 +1880,7 @@ type DeleteAllAccessListMembersForAccessListRequest struct { func (x *DeleteAllAccessListMembersForAccessListRequest) Reset() { *x = DeleteAllAccessListMembersForAccessListRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[27] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[36] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1401,7 +1892,7 @@ func (x *DeleteAllAccessListMembersForAccessListRequest) String() string { func (*DeleteAllAccessListMembersForAccessListRequest) ProtoMessage() {} func (x *DeleteAllAccessListMembersForAccessListRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[27] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[36] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1414,7 +1905,7 @@ func (x *DeleteAllAccessListMembersForAccessListRequest) ProtoReflect() protoref // Deprecated: Use DeleteAllAccessListMembersForAccessListRequest.ProtoReflect.Descriptor instead. 
func (*DeleteAllAccessListMembersForAccessListRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{27} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{36} } func (x *DeleteAllAccessListMembersForAccessListRequest) GetAccessList() string { @@ -1434,7 +1925,7 @@ type DeleteAllAccessListMembersRequest struct { func (x *DeleteAllAccessListMembersRequest) Reset() { *x = DeleteAllAccessListMembersRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[28] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[37] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1446,7 +1937,7 @@ func (x *DeleteAllAccessListMembersRequest) String() string { func (*DeleteAllAccessListMembersRequest) ProtoMessage() {} func (x *DeleteAllAccessListMembersRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[28] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[37] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1459,7 +1950,7 @@ func (x *DeleteAllAccessListMembersRequest) ProtoReflect() protoreflect.Message // Deprecated: Use DeleteAllAccessListMembersRequest.ProtoReflect.Descriptor instead. 
func (*DeleteAllAccessListMembersRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{28} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{37} } // ListAccessListReviewsRequest is the request for getting paginated access list @@ -1478,7 +1969,7 @@ type ListAccessListReviewsRequest struct { func (x *ListAccessListReviewsRequest) Reset() { *x = ListAccessListReviewsRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[29] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[38] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1490,7 +1981,7 @@ func (x *ListAccessListReviewsRequest) String() string { func (*ListAccessListReviewsRequest) ProtoMessage() {} func (x *ListAccessListReviewsRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[29] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[38] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1503,7 +1994,7 @@ func (x *ListAccessListReviewsRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use ListAccessListReviewsRequest.ProtoReflect.Descriptor instead. 
func (*ListAccessListReviewsRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{29} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{38} } func (x *ListAccessListReviewsRequest) GetAccessList() string { @@ -1541,7 +2032,7 @@ type ListAccessListReviewsResponse struct { func (x *ListAccessListReviewsResponse) Reset() { *x = ListAccessListReviewsResponse{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[30] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[39] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1553,7 +2044,7 @@ func (x *ListAccessListReviewsResponse) String() string { func (*ListAccessListReviewsResponse) ProtoMessage() {} func (x *ListAccessListReviewsResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[30] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[39] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1566,7 +2057,7 @@ func (x *ListAccessListReviewsResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use ListAccessListReviewsResponse.ProtoReflect.Descriptor instead. 
func (*ListAccessListReviewsResponse) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{30} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{39} } func (x *ListAccessListReviewsResponse) GetReviews() []*Review { @@ -1597,7 +2088,7 @@ type ListAllAccessListReviewsRequest struct { func (x *ListAllAccessListReviewsRequest) Reset() { *x = ListAllAccessListReviewsRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[31] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[40] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1609,7 +2100,7 @@ func (x *ListAllAccessListReviewsRequest) String() string { func (*ListAllAccessListReviewsRequest) ProtoMessage() {} func (x *ListAllAccessListReviewsRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[31] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[40] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1622,7 +2113,7 @@ func (x *ListAllAccessListReviewsRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use ListAllAccessListReviewsRequest.ProtoReflect.Descriptor instead. 
func (*ListAllAccessListReviewsRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{31} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{40} } func (x *ListAllAccessListReviewsRequest) GetPageSize() int32 { @@ -1653,7 +2144,7 @@ type ListAllAccessListReviewsResponse struct { func (x *ListAllAccessListReviewsResponse) Reset() { *x = ListAllAccessListReviewsResponse{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[32] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[41] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1665,7 +2156,7 @@ func (x *ListAllAccessListReviewsResponse) String() string { func (*ListAllAccessListReviewsResponse) ProtoMessage() {} func (x *ListAllAccessListReviewsResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[32] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[41] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1678,7 +2169,7 @@ func (x *ListAllAccessListReviewsResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use ListAllAccessListReviewsResponse.ProtoReflect.Descriptor instead. 
func (*ListAllAccessListReviewsResponse) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{32} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{41} } func (x *ListAllAccessListReviewsResponse) GetReviews() []*Review { @@ -1707,7 +2198,7 @@ type CreateAccessListReviewRequest struct { func (x *CreateAccessListReviewRequest) Reset() { *x = CreateAccessListReviewRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[33] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[42] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1719,7 +2210,7 @@ func (x *CreateAccessListReviewRequest) String() string { func (*CreateAccessListReviewRequest) ProtoMessage() {} func (x *CreateAccessListReviewRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[33] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[42] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1732,7 +2223,7 @@ func (x *CreateAccessListReviewRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use CreateAccessListReviewRequest.ProtoReflect.Descriptor instead. 
func (*CreateAccessListReviewRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{33} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{42} } func (x *CreateAccessListReviewRequest) GetReview() *Review { @@ -1756,7 +2247,7 @@ type CreateAccessListReviewResponse struct { func (x *CreateAccessListReviewResponse) Reset() { *x = CreateAccessListReviewResponse{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[34] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[43] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1768,7 +2259,7 @@ func (x *CreateAccessListReviewResponse) String() string { func (*CreateAccessListReviewResponse) ProtoMessage() {} func (x *CreateAccessListReviewResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[34] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[43] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1781,7 +2272,7 @@ func (x *CreateAccessListReviewResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use CreateAccessListReviewResponse.ProtoReflect.Descriptor instead. 
func (*CreateAccessListReviewResponse) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{34} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{43} } func (x *CreateAccessListReviewResponse) GetReviewName() string { @@ -1812,7 +2303,7 @@ type DeleteAccessListReviewRequest struct { func (x *DeleteAccessListReviewRequest) Reset() { *x = DeleteAccessListReviewRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[35] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[44] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1824,7 +2315,7 @@ func (x *DeleteAccessListReviewRequest) String() string { func (*DeleteAccessListReviewRequest) ProtoMessage() {} func (x *DeleteAccessListReviewRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[35] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[44] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1837,7 +2328,7 @@ func (x *DeleteAccessListReviewRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use DeleteAccessListReviewRequest.ProtoReflect.Descriptor instead. 
func (*DeleteAccessListReviewRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{35} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{44} } func (x *DeleteAccessListReviewRequest) GetReviewName() string { @@ -1870,7 +2361,7 @@ type AccessRequestPromoteRequest struct { func (x *AccessRequestPromoteRequest) Reset() { *x = AccessRequestPromoteRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[36] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[45] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1882,7 +2373,7 @@ func (x *AccessRequestPromoteRequest) String() string { func (*AccessRequestPromoteRequest) ProtoMessage() {} func (x *AccessRequestPromoteRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[36] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[45] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1895,7 +2386,7 @@ func (x *AccessRequestPromoteRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use AccessRequestPromoteRequest.ProtoReflect.Descriptor instead. 
func (*AccessRequestPromoteRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{36} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{45} } func (x *AccessRequestPromoteRequest) GetRequestId() string { @@ -1931,7 +2422,7 @@ type AccessRequestPromoteResponse struct { func (x *AccessRequestPromoteResponse) Reset() { *x = AccessRequestPromoteResponse{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[37] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[46] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1943,7 +2434,7 @@ func (x *AccessRequestPromoteResponse) String() string { func (*AccessRequestPromoteResponse) ProtoMessage() {} func (x *AccessRequestPromoteResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[37] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[46] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1956,7 +2447,7 @@ func (x *AccessRequestPromoteResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use AccessRequestPromoteResponse.ProtoReflect.Descriptor instead. 
func (*AccessRequestPromoteResponse) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{37} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{46} } func (x *AccessRequestPromoteResponse) GetAccessRequest() *types.AccessRequestV3 { @@ -1978,7 +2469,7 @@ type GetSuggestedAccessListsRequest struct { func (x *GetSuggestedAccessListsRequest) Reset() { *x = GetSuggestedAccessListsRequest{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[38] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[47] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1990,7 +2481,7 @@ func (x *GetSuggestedAccessListsRequest) String() string { func (*GetSuggestedAccessListsRequest) ProtoMessage() {} func (x *GetSuggestedAccessListsRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[38] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[47] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -2003,7 +2494,7 @@ func (x *GetSuggestedAccessListsRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use GetSuggestedAccessListsRequest.ProtoReflect.Descriptor instead. 
func (*GetSuggestedAccessListsRequest) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{38} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{47} } func (x *GetSuggestedAccessListsRequest) GetAccessRequestId() string { @@ -2025,7 +2516,7 @@ type GetSuggestedAccessListsResponse struct { func (x *GetSuggestedAccessListsResponse) Reset() { *x = GetSuggestedAccessListsResponse{} - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[39] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[48] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -2037,7 +2528,7 @@ func (x *GetSuggestedAccessListsResponse) String() string { func (*GetSuggestedAccessListsResponse) ProtoMessage() {} func (x *GetSuggestedAccessListsResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[39] + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[48] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -2050,7 +2541,7 @@ func (x *GetSuggestedAccessListsResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use GetSuggestedAccessListsResponse.ProtoReflect.Descriptor instead. func (*GetSuggestedAccessListsResponse) Descriptor() ([]byte, []int) { - return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{39} + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{48} } func (x *GetSuggestedAccessListsResponse) GetAccessLists() []*AccessList { @@ -2060,6 +2551,136 @@ func (x *GetSuggestedAccessListsResponse) GetAccessLists() []*AccessList { return nil } +// ListUserAccessListsRequest is the request for getting access lists where the +// user is an owner or member. 
+type ListUserAccessListsRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // username is the name of the user to get access lists for. + Username string `protobuf:"bytes,1,opt,name=username,proto3" json:"username,omitempty"` + // page_size is the size of the page to request. + PageSize int32 `protobuf:"varint,2,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // page_token is the page token. + PageToken string `protobuf:"bytes,3,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListUserAccessListsRequest) Reset() { + *x = ListUserAccessListsRequest{} + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[49] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListUserAccessListsRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListUserAccessListsRequest) ProtoMessage() {} + +func (x *ListUserAccessListsRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[49] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListUserAccessListsRequest.ProtoReflect.Descriptor instead. 
+func (*ListUserAccessListsRequest) Descriptor() ([]byte, []int) { + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{49} +} + +func (x *ListUserAccessListsRequest) GetUsername() string { + if x != nil { + return x.Username + } + return "" +} + +func (x *ListUserAccessListsRequest) GetPageSize() int32 { + if x != nil { + return x.PageSize + } + return 0 +} + +func (x *ListUserAccessListsRequest) GetPageToken() string { + if x != nil { + return x.PageToken + } + return "" +} + +// ListUserAccessListsResponse is the response for getting access lists where the +// user is an owner or member. +type ListUserAccessListsResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // access_lists are the access lists where the user is a member or owner. + AccessLists []*AccessList `protobuf:"bytes,1,rep,name=access_lists,json=accessLists,proto3" json:"access_lists,omitempty"` + // next_page_token is the next page token. + NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + // total_count is the total number of access lists in all pages. 
+ TotalCount int32 `protobuf:"varint,3,opt,name=total_count,json=totalCount,proto3" json:"total_count,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListUserAccessListsResponse) Reset() { + *x = ListUserAccessListsResponse{} + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[50] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListUserAccessListsResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListUserAccessListsResponse) ProtoMessage() {} + +func (x *ListUserAccessListsResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_accesslist_v1_accesslist_service_proto_msgTypes[50] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListUserAccessListsResponse.ProtoReflect.Descriptor instead. 
+func (*ListUserAccessListsResponse) Descriptor() ([]byte, []int) { + return file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP(), []int{50} +} + +func (x *ListUserAccessListsResponse) GetAccessLists() []*AccessList { + if x != nil { + return x.AccessLists + } + return nil +} + +func (x *ListUserAccessListsResponse) GetNextPageToken() string { + if x != nil { + return x.NextPageToken + } + return "" +} + +func (x *ListUserAccessListsResponse) GetTotalCount() int32 { + if x != nil { + return x.TotalCount + } + return 0 +} + var File_teleport_accesslist_v1_accesslist_service_proto protoreflect.FileDescriptor const file_teleport_accesslist_v1_accesslist_service_proto_rawDesc = "" + @@ -2075,7 +2696,21 @@ const file_teleport_accesslist_v1_accesslist_service_proto_rawDesc = "" + "\x17ListAccessListsResponse\x12E\n" + "\faccess_lists\x18\x01 \x03(\v2\".teleport.accesslist.v1.AccessListR\vaccessLists\x12\x1d\n" + "\n" + - "next_token\x18\x02 \x01(\tR\tnextToken\"A\n" + + "next_token\x18\x02 \x01(\tR\tnextToken\"\xc1\x01\n" + + "\x18ListAccessListsV2Request\x12\x1b\n" + + "\tpage_size\x18\x01 \x01(\x05R\bpageSize\x12\x1d\n" + + "\n" + + "page_token\x18\x02 \x01(\tR\tpageToken\x12&\n" + + "\asort_by\x18\x03 \x01(\v2\r.types.SortByR\x06sortBy\x12A\n" + + "\x06filter\x18\x04 \x01(\v2).teleport.accesslist.v1.AccessListsFilterR\x06filter\"q\n" + + "\x11AccessListsFilter\x12\x16\n" + + "\x06search\x18\x01 \x01(\tR\x06search\x12\x16\n" + + "\x06owners\x18\x02 \x03(\tR\x06owners\x12\x14\n" + + "\x05roles\x18\x03 \x03(\tR\x05roles\x12\x16\n" + + "\x06origin\x18\x04 \x01(\tR\x06origin\"\x8a\x01\n" + + "\x19ListAccessListsV2Response\x12E\n" + + "\faccess_lists\x18\x01 \x03(\v2\".teleport.accesslist.v1.AccessListR\vaccessLists\x12&\n" + + "\x0fnext_page_token\x18\x02 \x01(\tR\rnextPageToken\"A\n" + "\x19GetInheritedGrantsRequest\x12$\n" + "\x0eaccess_list_id\x18\x01 \x01(\tR\faccessListId\"^\n" + "\x1aGetInheritedGrantsResponse\x12@\n" + @@ -2128,21 +2763,38 @@ const 
file_teleport_accesslist_v1_accesslist_service_proto_rawDesc = "" + "\vaccess_list\x18\x01 \x01(\tR\n" + "accessList\x12\x1f\n" + "\vmember_name\x18\x02 \x01(\tR\n" + - "memberName\"=\n" + + "memberName\"d\n" + + " GetStaticAccessListMemberRequest\x12\x1f\n" + + "\vaccess_list\x18\x01 \x01(\tR\n" + + "accessList\x12\x1f\n" + + "\vmember_name\x18\x02 \x01(\tR\n" + + "memberName\"[\n" + + "!GetStaticAccessListMemberResponse\x126\n" + + "\x06member\x18\x01 \x01(\v2\x1e.teleport.accesslist.v1.MemberR\x06member\"=\n" + "\x1aGetAccessListOwnersRequest\x12\x1f\n" + "\vaccess_list\x18\x01 \x01(\tR\n" + "accessList\"^\n" + "\x1bGetAccessListOwnersResponse\x12?\n" + "\x06owners\x18\x01 \x03(\v2'.teleport.accesslist.v1.AccessListOwnerR\x06owners\"\x84\x01\n" + "\x1dUpsertAccessListMemberRequest\x126\n" + - "\x06member\x18\x04 \x01(\v2\x1e.teleport.accesslist.v1.MemberR\x06memberJ\x04\b\x01\x10\x02J\x04\b\x02\x10\x03J\x04\b\x03\x10\x04R\vaccess_listR\x04nameR\x06reason\"W\n" + + "\x06member\x18\x04 \x01(\v2\x1e.teleport.accesslist.v1.MemberR\x06memberJ\x04\b\x01\x10\x02J\x04\b\x02\x10\x03J\x04\b\x03\x10\x04R\vaccess_listR\x04nameR\x06reason\"]\n" + + "#UpsertStaticAccessListMemberRequest\x126\n" + + "\x06member\x18\x01 \x01(\v2\x1e.teleport.accesslist.v1.MemberR\x06member\"^\n" + + "$UpsertStaticAccessListMemberResponse\x126\n" + + "\x06member\x18\x01 \x01(\v2\x1e.teleport.accesslist.v1.MemberR\x06member\"W\n" + "\x1dUpdateAccessListMemberRequest\x126\n" + "\x06member\x18\x01 \x01(\v2\x1e.teleport.accesslist.v1.MemberR\x06member\"m\n" + "\x1dDeleteAccessListMemberRequest\x12\x1f\n" + "\vaccess_list\x18\x01 \x01(\tR\n" + "accessList\x12\x1f\n" + "\vmember_name\x18\x03 \x01(\tR\n" + - "memberNameJ\x04\b\x02\x10\x03R\x04name\"Q\n" + + "memberNameJ\x04\b\x02\x10\x03R\x04name\"g\n" + + "#DeleteStaticAccessListMemberRequest\x12\x1f\n" + + "\vaccess_list\x18\x01 \x01(\tR\n" + + "accessList\x12\x1f\n" + + "\vmember_name\x18\x02 \x01(\tR\n" + + "memberName\"&\n" + + 
"$DeleteStaticAccessListMemberResponse\"Q\n" + ".DeleteAllAccessListMembersForAccessListRequest\x12\x1f\n" + "\vaccess_list\x18\x01 \x01(\tR\n" + "accessList\"6\n" + @@ -2185,10 +2837,21 @@ const file_teleport_accesslist_v1_accesslist_service_proto_rawDesc = "" + "\x1eGetSuggestedAccessListsRequest\x12*\n" + "\x11access_request_id\x18\x01 \x01(\tR\x0faccessRequestId\"h\n" + "\x1fGetSuggestedAccessListsResponse\x12E\n" + - "\faccess_lists\x18\x01 \x03(\v2\".teleport.accesslist.v1.AccessListR\vaccessLists2\xfe\x18\n" + + "\faccess_lists\x18\x01 \x03(\v2\".teleport.accesslist.v1.AccessListR\vaccessLists\"t\n" + + "\x1aListUserAccessListsRequest\x12\x1a\n" + + "\busername\x18\x01 \x01(\tR\busername\x12\x1b\n" + + "\tpage_size\x18\x02 \x01(\x05R\bpageSize\x12\x1d\n" + + "\n" + + "page_token\x18\x03 \x01(\tR\tpageToken\"\xad\x01\n" + + "\x1bListUserAccessListsResponse\x12E\n" + + "\faccess_lists\x18\x01 \x03(\v2\".teleport.accesslist.v1.AccessListR\vaccessLists\x12&\n" + + "\x0fnext_page_token\x18\x02 \x01(\tR\rnextPageToken\x12\x1f\n" + + "\vtotal_count\x18\x03 \x01(\x05R\n" + + "totalCount2\xc8\x1e\n" + "\x11AccessListService\x12o\n" + - "\x0eGetAccessLists\x12-.teleport.accesslist.v1.GetAccessListsRequest\x1a..teleport.accesslist.v1.GetAccessListsResponse\x12r\n" + - "\x0fListAccessLists\x12..teleport.accesslist.v1.ListAccessListsRequest\x1a/.teleport.accesslist.v1.ListAccessListsResponse\x12a\n" + + "\x0eGetAccessLists\x12-.teleport.accesslist.v1.GetAccessListsRequest\x1a..teleport.accesslist.v1.GetAccessListsResponse\x12w\n" + + "\x0fListAccessLists\x12..teleport.accesslist.v1.ListAccessListsRequest\x1a/.teleport.accesslist.v1.ListAccessListsResponse\"\x03\x88\x02\x01\x12x\n" + + "\x11ListAccessListsV2\x120.teleport.accesslist.v1.ListAccessListsV2Request\x1a1.teleport.accesslist.v1.ListAccessListsV2Response\x12a\n" + "\rGetAccessList\x12,.teleport.accesslist.v1.GetAccessListRequest\x1a\".teleport.accesslist.v1.AccessList\x12g\n" + 
"\x10UpsertAccessList\x12/.teleport.accesslist.v1.UpsertAccessListRequest\x1a\".teleport.accesslist.v1.AccessList\x12g\n" + "\x10UpdateAccessList\x12/.teleport.accesslist.v1.UpdateAccessListRequest\x1a\".teleport.accesslist.v1.AccessList\x12[\n" + @@ -2198,11 +2861,14 @@ const file_teleport_accesslist_v1_accesslist_service_proto_rawDesc = "" + "\x16CountAccessListMembers\x125.teleport.accesslist.v1.CountAccessListMembersRequest\x1a6.teleport.accesslist.v1.CountAccessListMembersResponse\x12\x84\x01\n" + "\x15ListAccessListMembers\x124.teleport.accesslist.v1.ListAccessListMembersRequest\x1a5.teleport.accesslist.v1.ListAccessListMembersResponse\x12\x8d\x01\n" + "\x18ListAllAccessListMembers\x127.teleport.accesslist.v1.ListAllAccessListMembersRequest\x1a8.teleport.accesslist.v1.ListAllAccessListMembersResponse\x12i\n" + - "\x13GetAccessListMember\x122.teleport.accesslist.v1.GetAccessListMemberRequest\x1a\x1e.teleport.accesslist.v1.Member\x12~\n" + + "\x13GetAccessListMember\x122.teleport.accesslist.v1.GetAccessListMemberRequest\x1a\x1e.teleport.accesslist.v1.Member\x12\x90\x01\n" + + "\x19GetStaticAccessListMember\x128.teleport.accesslist.v1.GetStaticAccessListMemberRequest\x1a9.teleport.accesslist.v1.GetStaticAccessListMemberResponse\x12~\n" + "\x13GetAccessListOwners\x122.teleport.accesslist.v1.GetAccessListOwnersRequest\x1a3.teleport.accesslist.v1.GetAccessListOwnersResponse\x12o\n" + - "\x16UpsertAccessListMember\x125.teleport.accesslist.v1.UpsertAccessListMemberRequest\x1a\x1e.teleport.accesslist.v1.Member\x12o\n" + + "\x16UpsertAccessListMember\x125.teleport.accesslist.v1.UpsertAccessListMemberRequest\x1a\x1e.teleport.accesslist.v1.Member\x12\x99\x01\n" + + "\x1cUpsertStaticAccessListMember\x12;.teleport.accesslist.v1.UpsertStaticAccessListMemberRequest\x1a<.teleport.accesslist.v1.UpsertStaticAccessListMemberResponse\x12o\n" + "\x16UpdateAccessListMember\x125.teleport.accesslist.v1.UpdateAccessListMemberRequest\x1a\x1e.teleport.accesslist.v1.Member\x12g\n" + - 
"\x16DeleteAccessListMember\x125.teleport.accesslist.v1.DeleteAccessListMemberRequest\x1a\x16.google.protobuf.Empty\x12\x89\x01\n" + + "\x16DeleteAccessListMember\x125.teleport.accesslist.v1.DeleteAccessListMemberRequest\x1a\x16.google.protobuf.Empty\x12\x99\x01\n" + + "\x1cDeleteStaticAccessListMember\x12;.teleport.accesslist.v1.DeleteStaticAccessListMemberRequest\x1a<.teleport.accesslist.v1.DeleteStaticAccessListMemberResponse\x12\x89\x01\n" + "'DeleteAllAccessListMembersForAccessList\x12F.teleport.accesslist.v1.DeleteAllAccessListMembersForAccessListRequest\x1a\x16.google.protobuf.Empty\x12o\n" + "\x1aDeleteAllAccessListMembers\x129.teleport.accesslist.v1.DeleteAllAccessListMembersRequest\x1a\x16.google.protobuf.Empty\x12\x96\x01\n" + "\x1bUpsertAccessListWithMembers\x12:.teleport.accesslist.v1.UpsertAccessListWithMembersRequest\x1a;.teleport.accesslist.v1.UpsertAccessListWithMembersResponse\x12\x84\x01\n" + @@ -2212,7 +2878,8 @@ const file_teleport_accesslist_v1_accesslist_service_proto_rawDesc = "" + "\x16DeleteAccessListReview\x125.teleport.accesslist.v1.DeleteAccessListReviewRequest\x1a\x16.google.protobuf.Empty\x12\x81\x01\n" + "\x14AccessRequestPromote\x123.teleport.accesslist.v1.AccessRequestPromoteRequest\x1a4.teleport.accesslist.v1.AccessRequestPromoteResponse\x12\x8a\x01\n" + "\x17GetSuggestedAccessLists\x126.teleport.accesslist.v1.GetSuggestedAccessListsRequest\x1a7.teleport.accesslist.v1.GetSuggestedAccessListsResponse\x12{\n" + - "\x12GetInheritedGrants\x121.teleport.accesslist.v1.GetInheritedGrantsRequest\x1a2.teleport.accesslist.v1.GetInheritedGrantsResponseBXZVgithub.com/gravitational/teleport/api/gen/proto/go/teleport/accesslist/v1;accesslistv1b\x06proto3" + "\x12GetInheritedGrants\x121.teleport.accesslist.v1.GetInheritedGrantsRequest\x1a2.teleport.accesslist.v1.GetInheritedGrantsResponse\x12~\n" + + 
"\x13ListUserAccessLists\x122.teleport.accesslist.v1.ListUserAccessListsRequest\x1a3.teleport.accesslist.v1.ListUserAccessListsResponseBXZVgithub.com/gravitational/teleport/api/gen/proto/go/teleport/accesslist/v1;accesslistv1b\x06proto3" var ( file_teleport_accesslist_v1_accesslist_service_proto_rawDescOnce sync.Once @@ -2226,136 +2893,165 @@ func file_teleport_accesslist_v1_accesslist_service_proto_rawDescGZIP() []byte { return file_teleport_accesslist_v1_accesslist_service_proto_rawDescData } -var file_teleport_accesslist_v1_accesslist_service_proto_msgTypes = make([]protoimpl.MessageInfo, 40) +var file_teleport_accesslist_v1_accesslist_service_proto_msgTypes = make([]protoimpl.MessageInfo, 51) var file_teleport_accesslist_v1_accesslist_service_proto_goTypes = []any{ (*GetAccessListsRequest)(nil), // 0: teleport.accesslist.v1.GetAccessListsRequest (*GetAccessListsResponse)(nil), // 1: teleport.accesslist.v1.GetAccessListsResponse (*ListAccessListsRequest)(nil), // 2: teleport.accesslist.v1.ListAccessListsRequest (*ListAccessListsResponse)(nil), // 3: teleport.accesslist.v1.ListAccessListsResponse - (*GetInheritedGrantsRequest)(nil), // 4: teleport.accesslist.v1.GetInheritedGrantsRequest - (*GetInheritedGrantsResponse)(nil), // 5: teleport.accesslist.v1.GetInheritedGrantsResponse - (*GetAccessListRequest)(nil), // 6: teleport.accesslist.v1.GetAccessListRequest - (*UpsertAccessListRequest)(nil), // 7: teleport.accesslist.v1.UpsertAccessListRequest - (*UpdateAccessListRequest)(nil), // 8: teleport.accesslist.v1.UpdateAccessListRequest - (*DeleteAccessListRequest)(nil), // 9: teleport.accesslist.v1.DeleteAccessListRequest - (*DeleteAllAccessListsRequest)(nil), // 10: teleport.accesslist.v1.DeleteAllAccessListsRequest - (*GetAccessListsToReviewRequest)(nil), // 11: teleport.accesslist.v1.GetAccessListsToReviewRequest - (*GetAccessListsToReviewResponse)(nil), // 12: teleport.accesslist.v1.GetAccessListsToReviewResponse - (*CountAccessListMembersRequest)(nil), // 13: 
teleport.accesslist.v1.CountAccessListMembersRequest - (*CountAccessListMembersResponse)(nil), // 14: teleport.accesslist.v1.CountAccessListMembersResponse - (*ListAccessListMembersRequest)(nil), // 15: teleport.accesslist.v1.ListAccessListMembersRequest - (*ListAccessListMembersResponse)(nil), // 16: teleport.accesslist.v1.ListAccessListMembersResponse - (*ListAllAccessListMembersRequest)(nil), // 17: teleport.accesslist.v1.ListAllAccessListMembersRequest - (*ListAllAccessListMembersResponse)(nil), // 18: teleport.accesslist.v1.ListAllAccessListMembersResponse - (*UpsertAccessListWithMembersRequest)(nil), // 19: teleport.accesslist.v1.UpsertAccessListWithMembersRequest - (*UpsertAccessListWithMembersResponse)(nil), // 20: teleport.accesslist.v1.UpsertAccessListWithMembersResponse - (*GetAccessListMemberRequest)(nil), // 21: teleport.accesslist.v1.GetAccessListMemberRequest - (*GetAccessListOwnersRequest)(nil), // 22: teleport.accesslist.v1.GetAccessListOwnersRequest - (*GetAccessListOwnersResponse)(nil), // 23: teleport.accesslist.v1.GetAccessListOwnersResponse - (*UpsertAccessListMemberRequest)(nil), // 24: teleport.accesslist.v1.UpsertAccessListMemberRequest - (*UpdateAccessListMemberRequest)(nil), // 25: teleport.accesslist.v1.UpdateAccessListMemberRequest - (*DeleteAccessListMemberRequest)(nil), // 26: teleport.accesslist.v1.DeleteAccessListMemberRequest - (*DeleteAllAccessListMembersForAccessListRequest)(nil), // 27: teleport.accesslist.v1.DeleteAllAccessListMembersForAccessListRequest - (*DeleteAllAccessListMembersRequest)(nil), // 28: teleport.accesslist.v1.DeleteAllAccessListMembersRequest - (*ListAccessListReviewsRequest)(nil), // 29: teleport.accesslist.v1.ListAccessListReviewsRequest - (*ListAccessListReviewsResponse)(nil), // 30: teleport.accesslist.v1.ListAccessListReviewsResponse - (*ListAllAccessListReviewsRequest)(nil), // 31: teleport.accesslist.v1.ListAllAccessListReviewsRequest - (*ListAllAccessListReviewsResponse)(nil), // 32: 
teleport.accesslist.v1.ListAllAccessListReviewsResponse - (*CreateAccessListReviewRequest)(nil), // 33: teleport.accesslist.v1.CreateAccessListReviewRequest - (*CreateAccessListReviewResponse)(nil), // 34: teleport.accesslist.v1.CreateAccessListReviewResponse - (*DeleteAccessListReviewRequest)(nil), // 35: teleport.accesslist.v1.DeleteAccessListReviewRequest - (*AccessRequestPromoteRequest)(nil), // 36: teleport.accesslist.v1.AccessRequestPromoteRequest - (*AccessRequestPromoteResponse)(nil), // 37: teleport.accesslist.v1.AccessRequestPromoteResponse - (*GetSuggestedAccessListsRequest)(nil), // 38: teleport.accesslist.v1.GetSuggestedAccessListsRequest - (*GetSuggestedAccessListsResponse)(nil), // 39: teleport.accesslist.v1.GetSuggestedAccessListsResponse - (*AccessList)(nil), // 40: teleport.accesslist.v1.AccessList - (*AccessListGrants)(nil), // 41: teleport.accesslist.v1.AccessListGrants - (*Member)(nil), // 42: teleport.accesslist.v1.Member - (*AccessListOwner)(nil), // 43: teleport.accesslist.v1.AccessListOwner - (*Review)(nil), // 44: teleport.accesslist.v1.Review - (*timestamppb.Timestamp)(nil), // 45: google.protobuf.Timestamp - (*types.AccessRequestV3)(nil), // 46: types.AccessRequestV3 - (*emptypb.Empty)(nil), // 47: google.protobuf.Empty + (*ListAccessListsV2Request)(nil), // 4: teleport.accesslist.v1.ListAccessListsV2Request + (*AccessListsFilter)(nil), // 5: teleport.accesslist.v1.AccessListsFilter + (*ListAccessListsV2Response)(nil), // 6: teleport.accesslist.v1.ListAccessListsV2Response + (*GetInheritedGrantsRequest)(nil), // 7: teleport.accesslist.v1.GetInheritedGrantsRequest + (*GetInheritedGrantsResponse)(nil), // 8: teleport.accesslist.v1.GetInheritedGrantsResponse + (*GetAccessListRequest)(nil), // 9: teleport.accesslist.v1.GetAccessListRequest + (*UpsertAccessListRequest)(nil), // 10: teleport.accesslist.v1.UpsertAccessListRequest + (*UpdateAccessListRequest)(nil), // 11: teleport.accesslist.v1.UpdateAccessListRequest + 
(*DeleteAccessListRequest)(nil), // 12: teleport.accesslist.v1.DeleteAccessListRequest + (*DeleteAllAccessListsRequest)(nil), // 13: teleport.accesslist.v1.DeleteAllAccessListsRequest + (*GetAccessListsToReviewRequest)(nil), // 14: teleport.accesslist.v1.GetAccessListsToReviewRequest + (*GetAccessListsToReviewResponse)(nil), // 15: teleport.accesslist.v1.GetAccessListsToReviewResponse + (*CountAccessListMembersRequest)(nil), // 16: teleport.accesslist.v1.CountAccessListMembersRequest + (*CountAccessListMembersResponse)(nil), // 17: teleport.accesslist.v1.CountAccessListMembersResponse + (*ListAccessListMembersRequest)(nil), // 18: teleport.accesslist.v1.ListAccessListMembersRequest + (*ListAccessListMembersResponse)(nil), // 19: teleport.accesslist.v1.ListAccessListMembersResponse + (*ListAllAccessListMembersRequest)(nil), // 20: teleport.accesslist.v1.ListAllAccessListMembersRequest + (*ListAllAccessListMembersResponse)(nil), // 21: teleport.accesslist.v1.ListAllAccessListMembersResponse + (*UpsertAccessListWithMembersRequest)(nil), // 22: teleport.accesslist.v1.UpsertAccessListWithMembersRequest + (*UpsertAccessListWithMembersResponse)(nil), // 23: teleport.accesslist.v1.UpsertAccessListWithMembersResponse + (*GetAccessListMemberRequest)(nil), // 24: teleport.accesslist.v1.GetAccessListMemberRequest + (*GetStaticAccessListMemberRequest)(nil), // 25: teleport.accesslist.v1.GetStaticAccessListMemberRequest + (*GetStaticAccessListMemberResponse)(nil), // 26: teleport.accesslist.v1.GetStaticAccessListMemberResponse + (*GetAccessListOwnersRequest)(nil), // 27: teleport.accesslist.v1.GetAccessListOwnersRequest + (*GetAccessListOwnersResponse)(nil), // 28: teleport.accesslist.v1.GetAccessListOwnersResponse + (*UpsertAccessListMemberRequest)(nil), // 29: teleport.accesslist.v1.UpsertAccessListMemberRequest + (*UpsertStaticAccessListMemberRequest)(nil), // 30: teleport.accesslist.v1.UpsertStaticAccessListMemberRequest + (*UpsertStaticAccessListMemberResponse)(nil), // 31: 
teleport.accesslist.v1.UpsertStaticAccessListMemberResponse + (*UpdateAccessListMemberRequest)(nil), // 32: teleport.accesslist.v1.UpdateAccessListMemberRequest + (*DeleteAccessListMemberRequest)(nil), // 33: teleport.accesslist.v1.DeleteAccessListMemberRequest + (*DeleteStaticAccessListMemberRequest)(nil), // 34: teleport.accesslist.v1.DeleteStaticAccessListMemberRequest + (*DeleteStaticAccessListMemberResponse)(nil), // 35: teleport.accesslist.v1.DeleteStaticAccessListMemberResponse + (*DeleteAllAccessListMembersForAccessListRequest)(nil), // 36: teleport.accesslist.v1.DeleteAllAccessListMembersForAccessListRequest + (*DeleteAllAccessListMembersRequest)(nil), // 37: teleport.accesslist.v1.DeleteAllAccessListMembersRequest + (*ListAccessListReviewsRequest)(nil), // 38: teleport.accesslist.v1.ListAccessListReviewsRequest + (*ListAccessListReviewsResponse)(nil), // 39: teleport.accesslist.v1.ListAccessListReviewsResponse + (*ListAllAccessListReviewsRequest)(nil), // 40: teleport.accesslist.v1.ListAllAccessListReviewsRequest + (*ListAllAccessListReviewsResponse)(nil), // 41: teleport.accesslist.v1.ListAllAccessListReviewsResponse + (*CreateAccessListReviewRequest)(nil), // 42: teleport.accesslist.v1.CreateAccessListReviewRequest + (*CreateAccessListReviewResponse)(nil), // 43: teleport.accesslist.v1.CreateAccessListReviewResponse + (*DeleteAccessListReviewRequest)(nil), // 44: teleport.accesslist.v1.DeleteAccessListReviewRequest + (*AccessRequestPromoteRequest)(nil), // 45: teleport.accesslist.v1.AccessRequestPromoteRequest + (*AccessRequestPromoteResponse)(nil), // 46: teleport.accesslist.v1.AccessRequestPromoteResponse + (*GetSuggestedAccessListsRequest)(nil), // 47: teleport.accesslist.v1.GetSuggestedAccessListsRequest + (*GetSuggestedAccessListsResponse)(nil), // 48: teleport.accesslist.v1.GetSuggestedAccessListsResponse + (*ListUserAccessListsRequest)(nil), // 49: teleport.accesslist.v1.ListUserAccessListsRequest + (*ListUserAccessListsResponse)(nil), // 50: 
teleport.accesslist.v1.ListUserAccessListsResponse + (*AccessList)(nil), // 51: teleport.accesslist.v1.AccessList + (*types.SortBy)(nil), // 52: types.SortBy + (*AccessListGrants)(nil), // 53: teleport.accesslist.v1.AccessListGrants + (*Member)(nil), // 54: teleport.accesslist.v1.Member + (*AccessListOwner)(nil), // 55: teleport.accesslist.v1.AccessListOwner + (*Review)(nil), // 56: teleport.accesslist.v1.Review + (*timestamppb.Timestamp)(nil), // 57: google.protobuf.Timestamp + (*types.AccessRequestV3)(nil), // 58: types.AccessRequestV3 + (*emptypb.Empty)(nil), // 59: google.protobuf.Empty } var file_teleport_accesslist_v1_accesslist_service_proto_depIdxs = []int32{ - 40, // 0: teleport.accesslist.v1.GetAccessListsResponse.access_lists:type_name -> teleport.accesslist.v1.AccessList - 40, // 1: teleport.accesslist.v1.ListAccessListsResponse.access_lists:type_name -> teleport.accesslist.v1.AccessList - 41, // 2: teleport.accesslist.v1.GetInheritedGrantsResponse.grants:type_name -> teleport.accesslist.v1.AccessListGrants - 40, // 3: teleport.accesslist.v1.UpsertAccessListRequest.access_list:type_name -> teleport.accesslist.v1.AccessList - 40, // 4: teleport.accesslist.v1.UpdateAccessListRequest.access_list:type_name -> teleport.accesslist.v1.AccessList - 40, // 5: teleport.accesslist.v1.GetAccessListsToReviewResponse.access_lists:type_name -> teleport.accesslist.v1.AccessList - 42, // 6: teleport.accesslist.v1.ListAccessListMembersResponse.members:type_name -> teleport.accesslist.v1.Member - 42, // 7: teleport.accesslist.v1.ListAllAccessListMembersResponse.members:type_name -> teleport.accesslist.v1.Member - 40, // 8: teleport.accesslist.v1.UpsertAccessListWithMembersRequest.access_list:type_name -> teleport.accesslist.v1.AccessList - 42, // 9: teleport.accesslist.v1.UpsertAccessListWithMembersRequest.members:type_name -> teleport.accesslist.v1.Member - 40, // 10: teleport.accesslist.v1.UpsertAccessListWithMembersResponse.access_list:type_name -> 
teleport.accesslist.v1.AccessList - 42, // 11: teleport.accesslist.v1.UpsertAccessListWithMembersResponse.members:type_name -> teleport.accesslist.v1.Member - 43, // 12: teleport.accesslist.v1.GetAccessListOwnersResponse.owners:type_name -> teleport.accesslist.v1.AccessListOwner - 42, // 13: teleport.accesslist.v1.UpsertAccessListMemberRequest.member:type_name -> teleport.accesslist.v1.Member - 42, // 14: teleport.accesslist.v1.UpdateAccessListMemberRequest.member:type_name -> teleport.accesslist.v1.Member - 44, // 15: teleport.accesslist.v1.ListAccessListReviewsResponse.reviews:type_name -> teleport.accesslist.v1.Review - 44, // 16: teleport.accesslist.v1.ListAllAccessListReviewsResponse.reviews:type_name -> teleport.accesslist.v1.Review - 44, // 17: teleport.accesslist.v1.CreateAccessListReviewRequest.review:type_name -> teleport.accesslist.v1.Review - 45, // 18: teleport.accesslist.v1.CreateAccessListReviewResponse.next_audit_date:type_name -> google.protobuf.Timestamp - 46, // 19: teleport.accesslist.v1.AccessRequestPromoteResponse.access_request:type_name -> types.AccessRequestV3 - 40, // 20: teleport.accesslist.v1.GetSuggestedAccessListsResponse.access_lists:type_name -> teleport.accesslist.v1.AccessList - 0, // 21: teleport.accesslist.v1.AccessListService.GetAccessLists:input_type -> teleport.accesslist.v1.GetAccessListsRequest - 2, // 22: teleport.accesslist.v1.AccessListService.ListAccessLists:input_type -> teleport.accesslist.v1.ListAccessListsRequest - 6, // 23: teleport.accesslist.v1.AccessListService.GetAccessList:input_type -> teleport.accesslist.v1.GetAccessListRequest - 7, // 24: teleport.accesslist.v1.AccessListService.UpsertAccessList:input_type -> teleport.accesslist.v1.UpsertAccessListRequest - 8, // 25: teleport.accesslist.v1.AccessListService.UpdateAccessList:input_type -> teleport.accesslist.v1.UpdateAccessListRequest - 9, // 26: teleport.accesslist.v1.AccessListService.DeleteAccessList:input_type -> 
teleport.accesslist.v1.DeleteAccessListRequest - 10, // 27: teleport.accesslist.v1.AccessListService.DeleteAllAccessLists:input_type -> teleport.accesslist.v1.DeleteAllAccessListsRequest - 11, // 28: teleport.accesslist.v1.AccessListService.GetAccessListsToReview:input_type -> teleport.accesslist.v1.GetAccessListsToReviewRequest - 13, // 29: teleport.accesslist.v1.AccessListService.CountAccessListMembers:input_type -> teleport.accesslist.v1.CountAccessListMembersRequest - 15, // 30: teleport.accesslist.v1.AccessListService.ListAccessListMembers:input_type -> teleport.accesslist.v1.ListAccessListMembersRequest - 17, // 31: teleport.accesslist.v1.AccessListService.ListAllAccessListMembers:input_type -> teleport.accesslist.v1.ListAllAccessListMembersRequest - 21, // 32: teleport.accesslist.v1.AccessListService.GetAccessListMember:input_type -> teleport.accesslist.v1.GetAccessListMemberRequest - 22, // 33: teleport.accesslist.v1.AccessListService.GetAccessListOwners:input_type -> teleport.accesslist.v1.GetAccessListOwnersRequest - 24, // 34: teleport.accesslist.v1.AccessListService.UpsertAccessListMember:input_type -> teleport.accesslist.v1.UpsertAccessListMemberRequest - 25, // 35: teleport.accesslist.v1.AccessListService.UpdateAccessListMember:input_type -> teleport.accesslist.v1.UpdateAccessListMemberRequest - 26, // 36: teleport.accesslist.v1.AccessListService.DeleteAccessListMember:input_type -> teleport.accesslist.v1.DeleteAccessListMemberRequest - 27, // 37: teleport.accesslist.v1.AccessListService.DeleteAllAccessListMembersForAccessList:input_type -> teleport.accesslist.v1.DeleteAllAccessListMembersForAccessListRequest - 28, // 38: teleport.accesslist.v1.AccessListService.DeleteAllAccessListMembers:input_type -> teleport.accesslist.v1.DeleteAllAccessListMembersRequest - 19, // 39: teleport.accesslist.v1.AccessListService.UpsertAccessListWithMembers:input_type -> teleport.accesslist.v1.UpsertAccessListWithMembersRequest - 29, // 40: 
teleport.accesslist.v1.AccessListService.ListAccessListReviews:input_type -> teleport.accesslist.v1.ListAccessListReviewsRequest - 31, // 41: teleport.accesslist.v1.AccessListService.ListAllAccessListReviews:input_type -> teleport.accesslist.v1.ListAllAccessListReviewsRequest - 33, // 42: teleport.accesslist.v1.AccessListService.CreateAccessListReview:input_type -> teleport.accesslist.v1.CreateAccessListReviewRequest - 35, // 43: teleport.accesslist.v1.AccessListService.DeleteAccessListReview:input_type -> teleport.accesslist.v1.DeleteAccessListReviewRequest - 36, // 44: teleport.accesslist.v1.AccessListService.AccessRequestPromote:input_type -> teleport.accesslist.v1.AccessRequestPromoteRequest - 38, // 45: teleport.accesslist.v1.AccessListService.GetSuggestedAccessLists:input_type -> teleport.accesslist.v1.GetSuggestedAccessListsRequest - 4, // 46: teleport.accesslist.v1.AccessListService.GetInheritedGrants:input_type -> teleport.accesslist.v1.GetInheritedGrantsRequest - 1, // 47: teleport.accesslist.v1.AccessListService.GetAccessLists:output_type -> teleport.accesslist.v1.GetAccessListsResponse - 3, // 48: teleport.accesslist.v1.AccessListService.ListAccessLists:output_type -> teleport.accesslist.v1.ListAccessListsResponse - 40, // 49: teleport.accesslist.v1.AccessListService.GetAccessList:output_type -> teleport.accesslist.v1.AccessList - 40, // 50: teleport.accesslist.v1.AccessListService.UpsertAccessList:output_type -> teleport.accesslist.v1.AccessList - 40, // 51: teleport.accesslist.v1.AccessListService.UpdateAccessList:output_type -> teleport.accesslist.v1.AccessList - 47, // 52: teleport.accesslist.v1.AccessListService.DeleteAccessList:output_type -> google.protobuf.Empty - 47, // 53: teleport.accesslist.v1.AccessListService.DeleteAllAccessLists:output_type -> google.protobuf.Empty - 12, // 54: teleport.accesslist.v1.AccessListService.GetAccessListsToReview:output_type -> teleport.accesslist.v1.GetAccessListsToReviewResponse - 14, // 55: 
teleport.accesslist.v1.AccessListService.CountAccessListMembers:output_type -> teleport.accesslist.v1.CountAccessListMembersResponse - 16, // 56: teleport.accesslist.v1.AccessListService.ListAccessListMembers:output_type -> teleport.accesslist.v1.ListAccessListMembersResponse - 18, // 57: teleport.accesslist.v1.AccessListService.ListAllAccessListMembers:output_type -> teleport.accesslist.v1.ListAllAccessListMembersResponse - 42, // 58: teleport.accesslist.v1.AccessListService.GetAccessListMember:output_type -> teleport.accesslist.v1.Member - 23, // 59: teleport.accesslist.v1.AccessListService.GetAccessListOwners:output_type -> teleport.accesslist.v1.GetAccessListOwnersResponse - 42, // 60: teleport.accesslist.v1.AccessListService.UpsertAccessListMember:output_type -> teleport.accesslist.v1.Member - 42, // 61: teleport.accesslist.v1.AccessListService.UpdateAccessListMember:output_type -> teleport.accesslist.v1.Member - 47, // 62: teleport.accesslist.v1.AccessListService.DeleteAccessListMember:output_type -> google.protobuf.Empty - 47, // 63: teleport.accesslist.v1.AccessListService.DeleteAllAccessListMembersForAccessList:output_type -> google.protobuf.Empty - 47, // 64: teleport.accesslist.v1.AccessListService.DeleteAllAccessListMembers:output_type -> google.protobuf.Empty - 20, // 65: teleport.accesslist.v1.AccessListService.UpsertAccessListWithMembers:output_type -> teleport.accesslist.v1.UpsertAccessListWithMembersResponse - 30, // 66: teleport.accesslist.v1.AccessListService.ListAccessListReviews:output_type -> teleport.accesslist.v1.ListAccessListReviewsResponse - 32, // 67: teleport.accesslist.v1.AccessListService.ListAllAccessListReviews:output_type -> teleport.accesslist.v1.ListAllAccessListReviewsResponse - 34, // 68: teleport.accesslist.v1.AccessListService.CreateAccessListReview:output_type -> teleport.accesslist.v1.CreateAccessListReviewResponse - 47, // 69: teleport.accesslist.v1.AccessListService.DeleteAccessListReview:output_type -> 
google.protobuf.Empty - 37, // 70: teleport.accesslist.v1.AccessListService.AccessRequestPromote:output_type -> teleport.accesslist.v1.AccessRequestPromoteResponse - 39, // 71: teleport.accesslist.v1.AccessListService.GetSuggestedAccessLists:output_type -> teleport.accesslist.v1.GetSuggestedAccessListsResponse - 5, // 72: teleport.accesslist.v1.AccessListService.GetInheritedGrants:output_type -> teleport.accesslist.v1.GetInheritedGrantsResponse - 47, // [47:73] is the sub-list for method output_type - 21, // [21:47] is the sub-list for method input_type - 21, // [21:21] is the sub-list for extension type_name - 21, // [21:21] is the sub-list for extension extendee - 0, // [0:21] is the sub-list for field type_name + 51, // 0: teleport.accesslist.v1.GetAccessListsResponse.access_lists:type_name -> teleport.accesslist.v1.AccessList + 51, // 1: teleport.accesslist.v1.ListAccessListsResponse.access_lists:type_name -> teleport.accesslist.v1.AccessList + 52, // 2: teleport.accesslist.v1.ListAccessListsV2Request.sort_by:type_name -> types.SortBy + 5, // 3: teleport.accesslist.v1.ListAccessListsV2Request.filter:type_name -> teleport.accesslist.v1.AccessListsFilter + 51, // 4: teleport.accesslist.v1.ListAccessListsV2Response.access_lists:type_name -> teleport.accesslist.v1.AccessList + 53, // 5: teleport.accesslist.v1.GetInheritedGrantsResponse.grants:type_name -> teleport.accesslist.v1.AccessListGrants + 51, // 6: teleport.accesslist.v1.UpsertAccessListRequest.access_list:type_name -> teleport.accesslist.v1.AccessList + 51, // 7: teleport.accesslist.v1.UpdateAccessListRequest.access_list:type_name -> teleport.accesslist.v1.AccessList + 51, // 8: teleport.accesslist.v1.GetAccessListsToReviewResponse.access_lists:type_name -> teleport.accesslist.v1.AccessList + 54, // 9: teleport.accesslist.v1.ListAccessListMembersResponse.members:type_name -> teleport.accesslist.v1.Member + 54, // 10: teleport.accesslist.v1.ListAllAccessListMembersResponse.members:type_name -> 
teleport.accesslist.v1.Member + 51, // 11: teleport.accesslist.v1.UpsertAccessListWithMembersRequest.access_list:type_name -> teleport.accesslist.v1.AccessList + 54, // 12: teleport.accesslist.v1.UpsertAccessListWithMembersRequest.members:type_name -> teleport.accesslist.v1.Member + 51, // 13: teleport.accesslist.v1.UpsertAccessListWithMembersResponse.access_list:type_name -> teleport.accesslist.v1.AccessList + 54, // 14: teleport.accesslist.v1.UpsertAccessListWithMembersResponse.members:type_name -> teleport.accesslist.v1.Member + 54, // 15: teleport.accesslist.v1.GetStaticAccessListMemberResponse.member:type_name -> teleport.accesslist.v1.Member + 55, // 16: teleport.accesslist.v1.GetAccessListOwnersResponse.owners:type_name -> teleport.accesslist.v1.AccessListOwner + 54, // 17: teleport.accesslist.v1.UpsertAccessListMemberRequest.member:type_name -> teleport.accesslist.v1.Member + 54, // 18: teleport.accesslist.v1.UpsertStaticAccessListMemberRequest.member:type_name -> teleport.accesslist.v1.Member + 54, // 19: teleport.accesslist.v1.UpsertStaticAccessListMemberResponse.member:type_name -> teleport.accesslist.v1.Member + 54, // 20: teleport.accesslist.v1.UpdateAccessListMemberRequest.member:type_name -> teleport.accesslist.v1.Member + 56, // 21: teleport.accesslist.v1.ListAccessListReviewsResponse.reviews:type_name -> teleport.accesslist.v1.Review + 56, // 22: teleport.accesslist.v1.ListAllAccessListReviewsResponse.reviews:type_name -> teleport.accesslist.v1.Review + 56, // 23: teleport.accesslist.v1.CreateAccessListReviewRequest.review:type_name -> teleport.accesslist.v1.Review + 57, // 24: teleport.accesslist.v1.CreateAccessListReviewResponse.next_audit_date:type_name -> google.protobuf.Timestamp + 58, // 25: teleport.accesslist.v1.AccessRequestPromoteResponse.access_request:type_name -> types.AccessRequestV3 + 51, // 26: teleport.accesslist.v1.GetSuggestedAccessListsResponse.access_lists:type_name -> teleport.accesslist.v1.AccessList + 51, // 27: 
teleport.accesslist.v1.ListUserAccessListsResponse.access_lists:type_name -> teleport.accesslist.v1.AccessList + 0, // 28: teleport.accesslist.v1.AccessListService.GetAccessLists:input_type -> teleport.accesslist.v1.GetAccessListsRequest + 2, // 29: teleport.accesslist.v1.AccessListService.ListAccessLists:input_type -> teleport.accesslist.v1.ListAccessListsRequest + 4, // 30: teleport.accesslist.v1.AccessListService.ListAccessListsV2:input_type -> teleport.accesslist.v1.ListAccessListsV2Request + 9, // 31: teleport.accesslist.v1.AccessListService.GetAccessList:input_type -> teleport.accesslist.v1.GetAccessListRequest + 10, // 32: teleport.accesslist.v1.AccessListService.UpsertAccessList:input_type -> teleport.accesslist.v1.UpsertAccessListRequest + 11, // 33: teleport.accesslist.v1.AccessListService.UpdateAccessList:input_type -> teleport.accesslist.v1.UpdateAccessListRequest + 12, // 34: teleport.accesslist.v1.AccessListService.DeleteAccessList:input_type -> teleport.accesslist.v1.DeleteAccessListRequest + 13, // 35: teleport.accesslist.v1.AccessListService.DeleteAllAccessLists:input_type -> teleport.accesslist.v1.DeleteAllAccessListsRequest + 14, // 36: teleport.accesslist.v1.AccessListService.GetAccessListsToReview:input_type -> teleport.accesslist.v1.GetAccessListsToReviewRequest + 16, // 37: teleport.accesslist.v1.AccessListService.CountAccessListMembers:input_type -> teleport.accesslist.v1.CountAccessListMembersRequest + 18, // 38: teleport.accesslist.v1.AccessListService.ListAccessListMembers:input_type -> teleport.accesslist.v1.ListAccessListMembersRequest + 20, // 39: teleport.accesslist.v1.AccessListService.ListAllAccessListMembers:input_type -> teleport.accesslist.v1.ListAllAccessListMembersRequest + 24, // 40: teleport.accesslist.v1.AccessListService.GetAccessListMember:input_type -> teleport.accesslist.v1.GetAccessListMemberRequest + 25, // 41: teleport.accesslist.v1.AccessListService.GetStaticAccessListMember:input_type -> 
teleport.accesslist.v1.GetStaticAccessListMemberRequest + 27, // 42: teleport.accesslist.v1.AccessListService.GetAccessListOwners:input_type -> teleport.accesslist.v1.GetAccessListOwnersRequest + 29, // 43: teleport.accesslist.v1.AccessListService.UpsertAccessListMember:input_type -> teleport.accesslist.v1.UpsertAccessListMemberRequest + 30, // 44: teleport.accesslist.v1.AccessListService.UpsertStaticAccessListMember:input_type -> teleport.accesslist.v1.UpsertStaticAccessListMemberRequest + 32, // 45: teleport.accesslist.v1.AccessListService.UpdateAccessListMember:input_type -> teleport.accesslist.v1.UpdateAccessListMemberRequest + 33, // 46: teleport.accesslist.v1.AccessListService.DeleteAccessListMember:input_type -> teleport.accesslist.v1.DeleteAccessListMemberRequest + 34, // 47: teleport.accesslist.v1.AccessListService.DeleteStaticAccessListMember:input_type -> teleport.accesslist.v1.DeleteStaticAccessListMemberRequest + 36, // 48: teleport.accesslist.v1.AccessListService.DeleteAllAccessListMembersForAccessList:input_type -> teleport.accesslist.v1.DeleteAllAccessListMembersForAccessListRequest + 37, // 49: teleport.accesslist.v1.AccessListService.DeleteAllAccessListMembers:input_type -> teleport.accesslist.v1.DeleteAllAccessListMembersRequest + 22, // 50: teleport.accesslist.v1.AccessListService.UpsertAccessListWithMembers:input_type -> teleport.accesslist.v1.UpsertAccessListWithMembersRequest + 38, // 51: teleport.accesslist.v1.AccessListService.ListAccessListReviews:input_type -> teleport.accesslist.v1.ListAccessListReviewsRequest + 40, // 52: teleport.accesslist.v1.AccessListService.ListAllAccessListReviews:input_type -> teleport.accesslist.v1.ListAllAccessListReviewsRequest + 42, // 53: teleport.accesslist.v1.AccessListService.CreateAccessListReview:input_type -> teleport.accesslist.v1.CreateAccessListReviewRequest + 44, // 54: teleport.accesslist.v1.AccessListService.DeleteAccessListReview:input_type -> teleport.accesslist.v1.DeleteAccessListReviewRequest 
+ 45, // 55: teleport.accesslist.v1.AccessListService.AccessRequestPromote:input_type -> teleport.accesslist.v1.AccessRequestPromoteRequest + 47, // 56: teleport.accesslist.v1.AccessListService.GetSuggestedAccessLists:input_type -> teleport.accesslist.v1.GetSuggestedAccessListsRequest + 7, // 57: teleport.accesslist.v1.AccessListService.GetInheritedGrants:input_type -> teleport.accesslist.v1.GetInheritedGrantsRequest + 49, // 58: teleport.accesslist.v1.AccessListService.ListUserAccessLists:input_type -> teleport.accesslist.v1.ListUserAccessListsRequest + 1, // 59: teleport.accesslist.v1.AccessListService.GetAccessLists:output_type -> teleport.accesslist.v1.GetAccessListsResponse + 3, // 60: teleport.accesslist.v1.AccessListService.ListAccessLists:output_type -> teleport.accesslist.v1.ListAccessListsResponse + 6, // 61: teleport.accesslist.v1.AccessListService.ListAccessListsV2:output_type -> teleport.accesslist.v1.ListAccessListsV2Response + 51, // 62: teleport.accesslist.v1.AccessListService.GetAccessList:output_type -> teleport.accesslist.v1.AccessList + 51, // 63: teleport.accesslist.v1.AccessListService.UpsertAccessList:output_type -> teleport.accesslist.v1.AccessList + 51, // 64: teleport.accesslist.v1.AccessListService.UpdateAccessList:output_type -> teleport.accesslist.v1.AccessList + 59, // 65: teleport.accesslist.v1.AccessListService.DeleteAccessList:output_type -> google.protobuf.Empty + 59, // 66: teleport.accesslist.v1.AccessListService.DeleteAllAccessLists:output_type -> google.protobuf.Empty + 15, // 67: teleport.accesslist.v1.AccessListService.GetAccessListsToReview:output_type -> teleport.accesslist.v1.GetAccessListsToReviewResponse + 17, // 68: teleport.accesslist.v1.AccessListService.CountAccessListMembers:output_type -> teleport.accesslist.v1.CountAccessListMembersResponse + 19, // 69: teleport.accesslist.v1.AccessListService.ListAccessListMembers:output_type -> teleport.accesslist.v1.ListAccessListMembersResponse + 21, // 70: 
teleport.accesslist.v1.AccessListService.ListAllAccessListMembers:output_type -> teleport.accesslist.v1.ListAllAccessListMembersResponse + 54, // 71: teleport.accesslist.v1.AccessListService.GetAccessListMember:output_type -> teleport.accesslist.v1.Member + 26, // 72: teleport.accesslist.v1.AccessListService.GetStaticAccessListMember:output_type -> teleport.accesslist.v1.GetStaticAccessListMemberResponse + 28, // 73: teleport.accesslist.v1.AccessListService.GetAccessListOwners:output_type -> teleport.accesslist.v1.GetAccessListOwnersResponse + 54, // 74: teleport.accesslist.v1.AccessListService.UpsertAccessListMember:output_type -> teleport.accesslist.v1.Member + 31, // 75: teleport.accesslist.v1.AccessListService.UpsertStaticAccessListMember:output_type -> teleport.accesslist.v1.UpsertStaticAccessListMemberResponse + 54, // 76: teleport.accesslist.v1.AccessListService.UpdateAccessListMember:output_type -> teleport.accesslist.v1.Member + 59, // 77: teleport.accesslist.v1.AccessListService.DeleteAccessListMember:output_type -> google.protobuf.Empty + 35, // 78: teleport.accesslist.v1.AccessListService.DeleteStaticAccessListMember:output_type -> teleport.accesslist.v1.DeleteStaticAccessListMemberResponse + 59, // 79: teleport.accesslist.v1.AccessListService.DeleteAllAccessListMembersForAccessList:output_type -> google.protobuf.Empty + 59, // 80: teleport.accesslist.v1.AccessListService.DeleteAllAccessListMembers:output_type -> google.protobuf.Empty + 23, // 81: teleport.accesslist.v1.AccessListService.UpsertAccessListWithMembers:output_type -> teleport.accesslist.v1.UpsertAccessListWithMembersResponse + 39, // 82: teleport.accesslist.v1.AccessListService.ListAccessListReviews:output_type -> teleport.accesslist.v1.ListAccessListReviewsResponse + 41, // 83: teleport.accesslist.v1.AccessListService.ListAllAccessListReviews:output_type -> teleport.accesslist.v1.ListAllAccessListReviewsResponse + 43, // 84: 
teleport.accesslist.v1.AccessListService.CreateAccessListReview:output_type -> teleport.accesslist.v1.CreateAccessListReviewResponse + 59, // 85: teleport.accesslist.v1.AccessListService.DeleteAccessListReview:output_type -> google.protobuf.Empty + 46, // 86: teleport.accesslist.v1.AccessListService.AccessRequestPromote:output_type -> teleport.accesslist.v1.AccessRequestPromoteResponse + 48, // 87: teleport.accesslist.v1.AccessListService.GetSuggestedAccessLists:output_type -> teleport.accesslist.v1.GetSuggestedAccessListsResponse + 8, // 88: teleport.accesslist.v1.AccessListService.GetInheritedGrants:output_type -> teleport.accesslist.v1.GetInheritedGrantsResponse + 50, // 89: teleport.accesslist.v1.AccessListService.ListUserAccessLists:output_type -> teleport.accesslist.v1.ListUserAccessListsResponse + 59, // [59:90] is the sub-list for method output_type + 28, // [28:59] is the sub-list for method input_type + 28, // [28:28] is the sub-list for extension type_name + 28, // [28:28] is the sub-list for extension extendee + 0, // [0:28] is the sub-list for field type_name } func init() { file_teleport_accesslist_v1_accesslist_service_proto_init() } @@ -2370,7 +3066,7 @@ func file_teleport_accesslist_v1_accesslist_service_proto_init() { GoPackagePath: reflect.TypeOf(x{}).PkgPath(), RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_accesslist_v1_accesslist_service_proto_rawDesc), len(file_teleport_accesslist_v1_accesslist_service_proto_rawDesc)), NumEnums: 0, - NumMessages: 40, + NumMessages: 51, NumExtensions: 0, NumServices: 1, }, diff --git a/api/gen/proto/go/teleport/accesslist/v1/accesslist_service_grpc.pb.go b/api/gen/proto/go/teleport/accesslist/v1/accesslist_service_grpc.pb.go index 3bdae0029e54f..7e16340b3d4a1 100644 --- a/api/gen/proto/go/teleport/accesslist/v1/accesslist_service_grpc.pb.go +++ b/api/gen/proto/go/teleport/accesslist/v1/accesslist_service_grpc.pb.go @@ -36,6 +36,7 @@ const _ = grpc.SupportPackageIsVersion9 const ( 
AccessListService_GetAccessLists_FullMethodName = "/teleport.accesslist.v1.AccessListService/GetAccessLists" AccessListService_ListAccessLists_FullMethodName = "/teleport.accesslist.v1.AccessListService/ListAccessLists" + AccessListService_ListAccessListsV2_FullMethodName = "/teleport.accesslist.v1.AccessListService/ListAccessListsV2" AccessListService_GetAccessList_FullMethodName = "/teleport.accesslist.v1.AccessListService/GetAccessList" AccessListService_UpsertAccessList_FullMethodName = "/teleport.accesslist.v1.AccessListService/UpsertAccessList" AccessListService_UpdateAccessList_FullMethodName = "/teleport.accesslist.v1.AccessListService/UpdateAccessList" @@ -46,10 +47,13 @@ const ( AccessListService_ListAccessListMembers_FullMethodName = "/teleport.accesslist.v1.AccessListService/ListAccessListMembers" AccessListService_ListAllAccessListMembers_FullMethodName = "/teleport.accesslist.v1.AccessListService/ListAllAccessListMembers" AccessListService_GetAccessListMember_FullMethodName = "/teleport.accesslist.v1.AccessListService/GetAccessListMember" + AccessListService_GetStaticAccessListMember_FullMethodName = "/teleport.accesslist.v1.AccessListService/GetStaticAccessListMember" AccessListService_GetAccessListOwners_FullMethodName = "/teleport.accesslist.v1.AccessListService/GetAccessListOwners" AccessListService_UpsertAccessListMember_FullMethodName = "/teleport.accesslist.v1.AccessListService/UpsertAccessListMember" + AccessListService_UpsertStaticAccessListMember_FullMethodName = "/teleport.accesslist.v1.AccessListService/UpsertStaticAccessListMember" AccessListService_UpdateAccessListMember_FullMethodName = "/teleport.accesslist.v1.AccessListService/UpdateAccessListMember" AccessListService_DeleteAccessListMember_FullMethodName = "/teleport.accesslist.v1.AccessListService/DeleteAccessListMember" + AccessListService_DeleteStaticAccessListMember_FullMethodName = "/teleport.accesslist.v1.AccessListService/DeleteStaticAccessListMember" 
AccessListService_DeleteAllAccessListMembersForAccessList_FullMethodName = "/teleport.accesslist.v1.AccessListService/DeleteAllAccessListMembersForAccessList" AccessListService_DeleteAllAccessListMembers_FullMethodName = "/teleport.accesslist.v1.AccessListService/DeleteAllAccessListMembers" AccessListService_UpsertAccessListWithMembers_FullMethodName = "/teleport.accesslist.v1.AccessListService/UpsertAccessListWithMembers" @@ -60,6 +64,7 @@ const ( AccessListService_AccessRequestPromote_FullMethodName = "/teleport.accesslist.v1.AccessListService/AccessRequestPromote" AccessListService_GetSuggestedAccessLists_FullMethodName = "/teleport.accesslist.v1.AccessListService/GetSuggestedAccessLists" AccessListService_GetInheritedGrants_FullMethodName = "/teleport.accesslist.v1.AccessListService/GetInheritedGrants" + AccessListService_ListUserAccessLists_FullMethodName = "/teleport.accesslist.v1.AccessListService/ListUserAccessLists" ) // AccessListServiceClient is the client API for AccessListService service. @@ -70,8 +75,12 @@ const ( type AccessListServiceClient interface { // GetAccessLists returns a list of all access lists. GetAccessLists(ctx context.Context, in *GetAccessListsRequest, opts ...grpc.CallOption) (*GetAccessListsResponse, error) + // Deprecated: Do not use. // ListAccessLists returns a paginated list of all access lists. + // Deprecated: Use ListAccessListsV2 instead. ListAccessLists(ctx context.Context, in *ListAccessListsRequest, opts ...grpc.CallOption) (*ListAccessListsResponse, error) + // ListAccessListsV2 returns a paginated, filtered, and sorted list of all access lists. + ListAccessListsV2(ctx context.Context, in *ListAccessListsV2Request, opts ...grpc.CallOption) (*ListAccessListsV2Response, error) // GetAccessList returns the specified access list resource. GetAccessList(ctx context.Context, in *GetAccessListRequest, opts ...grpc.CallOption) (*AccessList, error) // UpsertAccessList creates or updates an access list resource. 
@@ -95,16 +104,28 @@ type AccessListServiceClient interface { ListAllAccessListMembers(ctx context.Context, in *ListAllAccessListMembersRequest, opts ...grpc.CallOption) (*ListAllAccessListMembersResponse, error) // GetAccessListMember returns the specified access list member resource. GetAccessListMember(ctx context.Context, in *GetAccessListMemberRequest, opts ...grpc.CallOption) (*Member, error) + // GetStaticAccessListMember returns the specified access_list_member resource. It returns an error + if the target access_list is not of type static. This API is there for the IaC tools to + prevent them from making changes to members of dynamic access lists. + GetStaticAccessListMember(ctx context.Context, in *GetStaticAccessListMemberRequest, opts ...grpc.CallOption) (*GetStaticAccessListMemberResponse, error) // GetAccessListOwners returns a list of all owners in an Access List, // including those inherited from nested Access Lists. GetAccessListOwners(ctx context.Context, in *GetAccessListOwnersRequest, opts ...grpc.CallOption) (*GetAccessListOwnersResponse, error) // UpsertAccessListMember creates or updates an access list member resource. UpsertAccessListMember(ctx context.Context, in *UpsertAccessListMemberRequest, opts ...grpc.CallOption) (*Member, error) + // UpsertStaticAccessListMember creates or updates an access_list_member resource. It returns an + error and does nothing if the target access_list is not of type static. This API is there for + the IaC tools to prevent them from making changes to members of dynamic access lists. + UpsertStaticAccessListMember(ctx context.Context, in *UpsertStaticAccessListMemberRequest, opts ...grpc.CallOption) (*UpsertStaticAccessListMemberResponse, error) // UpdateAccessListMember conditionally updates an access list member resource.
UpdateAccessListMember(ctx context.Context, in *UpdateAccessListMemberRequest, opts ...grpc.CallOption) (*Member, error) // DeleteAccessListMember hard deletes the specified access list member // resource. DeleteAccessListMember(ctx context.Context, in *DeleteAccessListMemberRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) + // DeleteStaticAccessListMember hard deletes the specified access_list_member. It returns an error + and does nothing if the target access_list is not of type static. This API is there for the + IaC tools to prevent them from making changes to members of dynamic access lists. + DeleteStaticAccessListMember(ctx context.Context, in *DeleteStaticAccessListMemberRequest, opts ...grpc.CallOption) (*DeleteStaticAccessListMemberResponse, error) // DeleteAllAccessListMembers hard deletes all access list members for an // access list. DeleteAllAccessListMembersForAccessList(ctx context.Context, in *DeleteAllAccessListMembersForAccessListRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) @@ -132,6 +153,9 @@ type AccessListServiceClient interface { GetSuggestedAccessLists(ctx context.Context, in *GetSuggestedAccessListsRequest, opts ...grpc.CallOption) (*GetSuggestedAccessListsResponse, error) // GetInheritedGrants returns the inherited grants for an access list. GetInheritedGrants(ctx context.Context, in *GetInheritedGrantsRequest, opts ...grpc.CallOption) (*GetInheritedGrantsResponse, error) + // ListUserAccessLists returns a paginated list of all access lists where the + // user is an owner or member. + ListUserAccessLists(ctx context.Context, in *ListUserAccessListsRequest, opts ...grpc.CallOption) (*ListUserAccessListsResponse, error) } type accessListServiceClient struct { @@ -152,6 +176,7 @@ func (c *accessListServiceClient) GetAccessLists(ctx context.Context, in *GetAcc return out, nil } +// Deprecated: Do not use.
func (c *accessListServiceClient) ListAccessLists(ctx context.Context, in *ListAccessListsRequest, opts ...grpc.CallOption) (*ListAccessListsResponse, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) out := new(ListAccessListsResponse) @@ -162,6 +187,16 @@ func (c *accessListServiceClient) ListAccessLists(ctx context.Context, in *ListA return out, nil } +func (c *accessListServiceClient) ListAccessListsV2(ctx context.Context, in *ListAccessListsV2Request, opts ...grpc.CallOption) (*ListAccessListsV2Response, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListAccessListsV2Response) + err := c.cc.Invoke(ctx, AccessListService_ListAccessListsV2_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + func (c *accessListServiceClient) GetAccessList(ctx context.Context, in *GetAccessListRequest, opts ...grpc.CallOption) (*AccessList, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) out := new(AccessList) @@ -262,6 +297,16 @@ func (c *accessListServiceClient) GetAccessListMember(ctx context.Context, in *G return out, nil } +func (c *accessListServiceClient) GetStaticAccessListMember(ctx context.Context, in *GetStaticAccessListMemberRequest, opts ...grpc.CallOption) (*GetStaticAccessListMemberResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(GetStaticAccessListMemberResponse) + err := c.cc.Invoke(ctx, AccessListService_GetStaticAccessListMember_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + func (c *accessListServiceClient) GetAccessListOwners(ctx context.Context, in *GetAccessListOwnersRequest, opts ...grpc.CallOption) (*GetAccessListOwnersResponse, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) 
out := new(GetAccessListOwnersResponse) @@ -282,6 +327,16 @@ func (c *accessListServiceClient) UpsertAccessListMember(ctx context.Context, in return out, nil } +func (c *accessListServiceClient) UpsertStaticAccessListMember(ctx context.Context, in *UpsertStaticAccessListMemberRequest, opts ...grpc.CallOption) (*UpsertStaticAccessListMemberResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(UpsertStaticAccessListMemberResponse) + err := c.cc.Invoke(ctx, AccessListService_UpsertStaticAccessListMember_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + func (c *accessListServiceClient) UpdateAccessListMember(ctx context.Context, in *UpdateAccessListMemberRequest, opts ...grpc.CallOption) (*Member, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) out := new(Member) @@ -302,6 +357,16 @@ func (c *accessListServiceClient) DeleteAccessListMember(ctx context.Context, in return out, nil } +func (c *accessListServiceClient) DeleteStaticAccessListMember(ctx context.Context, in *DeleteStaticAccessListMemberRequest, opts ...grpc.CallOption) (*DeleteStaticAccessListMemberResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(DeleteStaticAccessListMemberResponse) + err := c.cc.Invoke(ctx, AccessListService_DeleteStaticAccessListMember_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + func (c *accessListServiceClient) DeleteAllAccessListMembersForAccessList(ctx context.Context, in *DeleteAllAccessListMembersForAccessListRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) 
out := new(emptypb.Empty) @@ -402,6 +467,16 @@ func (c *accessListServiceClient) GetInheritedGrants(ctx context.Context, in *Ge return out, nil } +func (c *accessListServiceClient) ListUserAccessLists(ctx context.Context, in *ListUserAccessListsRequest, opts ...grpc.CallOption) (*ListUserAccessListsResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListUserAccessListsResponse) + err := c.cc.Invoke(ctx, AccessListService_ListUserAccessLists_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + // AccessListServiceServer is the server API for AccessListService service. // All implementations must embed UnimplementedAccessListServiceServer // for forward compatibility. @@ -410,8 +485,12 @@ func (c *accessListServiceClient) GetInheritedGrants(ctx context.Context, in *Ge type AccessListServiceServer interface { // GetAccessLists returns a list of all access lists. GetAccessLists(context.Context, *GetAccessListsRequest) (*GetAccessListsResponse, error) + // Deprecated: Do not use. // ListAccessLists returns a paginated list of all access lists. + // Deprecated: Use ListAccessListsV2 instead. ListAccessLists(context.Context, *ListAccessListsRequest) (*ListAccessListsResponse, error) + // ListAccessListsV2 returns a paginated, filtered, and sorted list of all access lists. + ListAccessListsV2(context.Context, *ListAccessListsV2Request) (*ListAccessListsV2Response, error) // GetAccessList returns the specified access list resource. GetAccessList(context.Context, *GetAccessListRequest) (*AccessList, error) // UpsertAccessList creates or updates an access list resource. @@ -435,16 +514,28 @@ type AccessListServiceServer interface { ListAllAccessListMembers(context.Context, *ListAllAccessListMembersRequest) (*ListAllAccessListMembersResponse, error) // GetAccessListMember returns the specified access list member resource. 
GetAccessListMember(context.Context, *GetAccessListMemberRequest) (*Member, error) + // GetStaticAccessListMember returns the specified access_list_member resource. It returns an error + if the target access_list is not of type static. This API is there for the IaC tools to + prevent them from making changes to members of dynamic access lists. + GetStaticAccessListMember(context.Context, *GetStaticAccessListMemberRequest) (*GetStaticAccessListMemberResponse, error) // GetAccessListOwners returns a list of all owners in an Access List, // including those inherited from nested Access Lists. GetAccessListOwners(context.Context, *GetAccessListOwnersRequest) (*GetAccessListOwnersResponse, error) // UpsertAccessListMember creates or updates an access list member resource. UpsertAccessListMember(context.Context, *UpsertAccessListMemberRequest) (*Member, error) + // UpsertStaticAccessListMember creates or updates an access_list_member resource. It returns an + error and does nothing if the target access_list is not of type static. This API is there for + the IaC tools to prevent them from making changes to members of dynamic access lists. + UpsertStaticAccessListMember(context.Context, *UpsertStaticAccessListMemberRequest) (*UpsertStaticAccessListMemberResponse, error) // UpdateAccessListMember conditionally updates an access list member resource. UpdateAccessListMember(context.Context, *UpdateAccessListMemberRequest) (*Member, error) // DeleteAccessListMember hard deletes the specified access list member // resource. DeleteAccessListMember(context.Context, *DeleteAccessListMemberRequest) (*emptypb.Empty, error) + // DeleteStaticAccessListMember hard deletes the specified access_list_member. It returns an error + and does nothing if the target access_list is not of type static. This API is there for the + IaC tools to prevent them from making changes to members of dynamic access lists.
+ DeleteStaticAccessListMember(context.Context, *DeleteStaticAccessListMemberRequest) (*DeleteStaticAccessListMemberResponse, error) // DeleteAllAccessListMembers hard deletes all access list members for an // access list. DeleteAllAccessListMembersForAccessList(context.Context, *DeleteAllAccessListMembersForAccessListRequest) (*emptypb.Empty, error) @@ -472,6 +563,9 @@ type AccessListServiceServer interface { GetSuggestedAccessLists(context.Context, *GetSuggestedAccessListsRequest) (*GetSuggestedAccessListsResponse, error) // GetInheritedGrants returns the inherited grants for an access list. GetInheritedGrants(context.Context, *GetInheritedGrantsRequest) (*GetInheritedGrantsResponse, error) + // ListUserAccessLists returns a paginated list of all access lists where the + // user is an owner or member. + ListUserAccessLists(context.Context, *ListUserAccessListsRequest) (*ListUserAccessListsResponse, error) mustEmbedUnimplementedAccessListServiceServer() } @@ -488,6 +582,9 @@ func (UnimplementedAccessListServiceServer) GetAccessLists(context.Context, *Get func (UnimplementedAccessListServiceServer) ListAccessLists(context.Context, *ListAccessListsRequest) (*ListAccessListsResponse, error) { return nil, status.Errorf(codes.Unimplemented, "method ListAccessLists not implemented") } +func (UnimplementedAccessListServiceServer) ListAccessListsV2(context.Context, *ListAccessListsV2Request) (*ListAccessListsV2Response, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListAccessListsV2 not implemented") +} func (UnimplementedAccessListServiceServer) GetAccessList(context.Context, *GetAccessListRequest) (*AccessList, error) { return nil, status.Errorf(codes.Unimplemented, "method GetAccessList not implemented") } @@ -518,18 +615,27 @@ func (UnimplementedAccessListServiceServer) ListAllAccessListMembers(context.Con func (UnimplementedAccessListServiceServer) GetAccessListMember(context.Context, *GetAccessListMemberRequest) (*Member, error) { return nil, 
status.Errorf(codes.Unimplemented, "method GetAccessListMember not implemented") } +func (UnimplementedAccessListServiceServer) GetStaticAccessListMember(context.Context, *GetStaticAccessListMemberRequest) (*GetStaticAccessListMemberResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method GetStaticAccessListMember not implemented") +} func (UnimplementedAccessListServiceServer) GetAccessListOwners(context.Context, *GetAccessListOwnersRequest) (*GetAccessListOwnersResponse, error) { return nil, status.Errorf(codes.Unimplemented, "method GetAccessListOwners not implemented") } func (UnimplementedAccessListServiceServer) UpsertAccessListMember(context.Context, *UpsertAccessListMemberRequest) (*Member, error) { return nil, status.Errorf(codes.Unimplemented, "method UpsertAccessListMember not implemented") } +func (UnimplementedAccessListServiceServer) UpsertStaticAccessListMember(context.Context, *UpsertStaticAccessListMemberRequest) (*UpsertStaticAccessListMemberResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method UpsertStaticAccessListMember not implemented") +} func (UnimplementedAccessListServiceServer) UpdateAccessListMember(context.Context, *UpdateAccessListMemberRequest) (*Member, error) { return nil, status.Errorf(codes.Unimplemented, "method UpdateAccessListMember not implemented") } func (UnimplementedAccessListServiceServer) DeleteAccessListMember(context.Context, *DeleteAccessListMemberRequest) (*emptypb.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method DeleteAccessListMember not implemented") } +func (UnimplementedAccessListServiceServer) DeleteStaticAccessListMember(context.Context, *DeleteStaticAccessListMemberRequest) (*DeleteStaticAccessListMemberResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method DeleteStaticAccessListMember not implemented") +} func (UnimplementedAccessListServiceServer) DeleteAllAccessListMembersForAccessList(context.Context, 
*DeleteAllAccessListMembersForAccessListRequest) (*emptypb.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method DeleteAllAccessListMembersForAccessList not implemented") } @@ -560,6 +666,9 @@ func (UnimplementedAccessListServiceServer) GetSuggestedAccessLists(context.Cont func (UnimplementedAccessListServiceServer) GetInheritedGrants(context.Context, *GetInheritedGrantsRequest) (*GetInheritedGrantsResponse, error) { return nil, status.Errorf(codes.Unimplemented, "method GetInheritedGrants not implemented") } +func (UnimplementedAccessListServiceServer) ListUserAccessLists(context.Context, *ListUserAccessListsRequest) (*ListUserAccessListsResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListUserAccessLists not implemented") +} func (UnimplementedAccessListServiceServer) mustEmbedUnimplementedAccessListServiceServer() {} func (UnimplementedAccessListServiceServer) testEmbeddedByValue() {} @@ -617,6 +726,24 @@ func _AccessListService_ListAccessLists_Handler(srv interface{}, ctx context.Con return interceptor(ctx, in, info, handler) } +func _AccessListService_ListAccessListsV2_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListAccessListsV2Request) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AccessListServiceServer).ListAccessListsV2(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AccessListService_ListAccessListsV2_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AccessListServiceServer).ListAccessListsV2(ctx, req.(*ListAccessListsV2Request)) + } + return interceptor(ctx, in, info, handler) +} + func _AccessListService_GetAccessList_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := 
new(GetAccessListRequest) if err := dec(in); err != nil { @@ -797,6 +924,24 @@ func _AccessListService_GetAccessListMember_Handler(srv interface{}, ctx context return interceptor(ctx, in, info, handler) } +func _AccessListService_GetStaticAccessListMember_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(GetStaticAccessListMemberRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AccessListServiceServer).GetStaticAccessListMember(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AccessListService_GetStaticAccessListMember_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AccessListServiceServer).GetStaticAccessListMember(ctx, req.(*GetStaticAccessListMemberRequest)) + } + return interceptor(ctx, in, info, handler) +} + func _AccessListService_GetAccessListOwners_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(GetAccessListOwnersRequest) if err := dec(in); err != nil { @@ -833,6 +978,24 @@ func _AccessListService_UpsertAccessListMember_Handler(srv interface{}, ctx cont return interceptor(ctx, in, info, handler) } +func _AccessListService_UpsertStaticAccessListMember_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(UpsertStaticAccessListMemberRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AccessListServiceServer).UpsertStaticAccessListMember(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AccessListService_UpsertStaticAccessListMember_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return 
srv.(AccessListServiceServer).UpsertStaticAccessListMember(ctx, req.(*UpsertStaticAccessListMemberRequest)) + } + return interceptor(ctx, in, info, handler) +} + func _AccessListService_UpdateAccessListMember_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(UpdateAccessListMemberRequest) if err := dec(in); err != nil { @@ -869,6 +1032,24 @@ func _AccessListService_DeleteAccessListMember_Handler(srv interface{}, ctx cont return interceptor(ctx, in, info, handler) } +func _AccessListService_DeleteStaticAccessListMember_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(DeleteStaticAccessListMemberRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AccessListServiceServer).DeleteStaticAccessListMember(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AccessListService_DeleteStaticAccessListMember_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AccessListServiceServer).DeleteStaticAccessListMember(ctx, req.(*DeleteStaticAccessListMemberRequest)) + } + return interceptor(ctx, in, info, handler) +} + func _AccessListService_DeleteAllAccessListMembersForAccessList_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(DeleteAllAccessListMembersForAccessListRequest) if err := dec(in); err != nil { @@ -1049,6 +1230,24 @@ func _AccessListService_GetInheritedGrants_Handler(srv interface{}, ctx context. 
return interceptor(ctx, in, info, handler) } +func _AccessListService_ListUserAccessLists_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListUserAccessListsRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AccessListServiceServer).ListUserAccessLists(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AccessListService_ListUserAccessLists_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AccessListServiceServer).ListUserAccessLists(ctx, req.(*ListUserAccessListsRequest)) + } + return interceptor(ctx, in, info, handler) +} + // AccessListService_ServiceDesc is the grpc.ServiceDesc for AccessListService service. // It's only intended for direct use with grpc.RegisterService, // and not to be introspected or modified (even as a copy) @@ -1064,6 +1263,10 @@ var AccessListService_ServiceDesc = grpc.ServiceDesc{ MethodName: "ListAccessLists", Handler: _AccessListService_ListAccessLists_Handler, }, + { + MethodName: "ListAccessListsV2", + Handler: _AccessListService_ListAccessListsV2_Handler, + }, { MethodName: "GetAccessList", Handler: _AccessListService_GetAccessList_Handler, @@ -1104,6 +1307,10 @@ var AccessListService_ServiceDesc = grpc.ServiceDesc{ MethodName: "GetAccessListMember", Handler: _AccessListService_GetAccessListMember_Handler, }, + { + MethodName: "GetStaticAccessListMember", + Handler: _AccessListService_GetStaticAccessListMember_Handler, + }, { MethodName: "GetAccessListOwners", Handler: _AccessListService_GetAccessListOwners_Handler, @@ -1112,6 +1319,10 @@ var AccessListService_ServiceDesc = grpc.ServiceDesc{ MethodName: "UpsertAccessListMember", Handler: _AccessListService_UpsertAccessListMember_Handler, }, + { + MethodName: "UpsertStaticAccessListMember", + Handler: 
_AccessListService_UpsertStaticAccessListMember_Handler, + }, { MethodName: "UpdateAccessListMember", Handler: _AccessListService_UpdateAccessListMember_Handler, @@ -1120,6 +1331,10 @@ var AccessListService_ServiceDesc = grpc.ServiceDesc{ MethodName: "DeleteAccessListMember", Handler: _AccessListService_DeleteAccessListMember_Handler, }, + { + MethodName: "DeleteStaticAccessListMember", + Handler: _AccessListService_DeleteStaticAccessListMember_Handler, + }, { MethodName: "DeleteAllAccessListMembersForAccessList", Handler: _AccessListService_DeleteAllAccessListMembersForAccessList_Handler, @@ -1160,6 +1375,10 @@ var AccessListService_ServiceDesc = grpc.ServiceDesc{ MethodName: "GetInheritedGrants", Handler: _AccessListService_GetInheritedGrants_Handler, }, + { + MethodName: "ListUserAccessLists", + Handler: _AccessListService_ListUserAccessLists_Handler, + }, }, Streams: []grpc.StreamDesc{}, Metadata: "teleport/accesslist/v1/accesslist_service.proto", diff --git a/api/gen/proto/go/teleport/accessmonitoringrules/v1/access_monitoring_rules.pb.go b/api/gen/proto/go/teleport/accessmonitoringrules/v1/access_monitoring_rules.pb.go index 7d57946e156da..1b443c434790c 100644 --- a/api/gen/proto/go/teleport/accessmonitoringrules/v1/access_monitoring_rules.pb.go +++ b/api/gen/proto/go/teleport/accessmonitoringrules/v1/access_monitoring_rules.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/accessmonitoringrules/v1/access_monitoring_rules.proto @@ -135,17 +135,22 @@ type AccessMonitoringRuleSpec struct { // Separate plugins may be used if both notifications and automatic_reviews is // set. Notification *Notification `protobuf:"bytes,4,opt,name=notification,proto3" json:"notification,omitempty"` - // automatic_review defines automatic review configurations for access requests. 
+ // automatic_review defines automatic review configurations for Access Requests. // Both notification and automatic_review may be set within the same // access_monitoring_rule. If both fields are set, the rule will trigger // both notifications and automatic reviews for the same set of access events. // Separate plugins may be used if both notifications and automatic_reviews is // set. AutomaticReview *AutomaticReview `protobuf:"bytes,6,opt,name=automatic_review,json=automaticReview,proto3" json:"automatic_review,omitempty"` - // desired_state defines the desired state of the subject. For access request + // desired_state defines the desired state of the subject. For Access Request // subjects, the desired_state may be set to `reviewed` to indicate that the - // access request should be automatically reviewed. - DesiredState string `protobuf:"bytes,7,opt,name=desired_state,json=desiredState,proto3" json:"desired_state,omitempty"` + // Access Request should be automatically reviewed. + DesiredState string `protobuf:"bytes,7,opt,name=desired_state,json=desiredState,proto3" json:"desired_state,omitempty"` + // schedules specifies a map of schedules that can be used to configure the + // access monitoring rule conditions. + // + // Available in Teleport v18.2.8 or higher. + Schedules map[string]*Schedule `protobuf:"bytes,8,rep,name=schedules,proto3" json:"schedules,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"` unknownFields protoimpl.UnknownFields sizeCache protoimpl.SizeCache } @@ -222,6 +227,13 @@ func (x *AccessMonitoringRuleSpec) GetDesiredState() string { return "" } +func (x *AccessMonitoringRuleSpec) GetSchedules() map[string]*Schedule { + if x != nil { + return x.Schedules + } + return nil +} + // Notification contains configurations for plugin notification rules. 
type Notification struct { state protoimpl.MessageState `protogen:"open.v1"` @@ -334,6 +346,112 @@ func (x *AutomaticReview) GetDecision() string { return "" } +// Schedule specifies a schedule that can be used to configure rule conditions. +type Schedule struct { + state protoimpl.MessageState `protogen:"open.v1"` + // TimeSchedule specifies an in-line schedule. + Time *TimeSchedule `protobuf:"bytes,1,opt,name=time,proto3" json:"time,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *Schedule) Reset() { + *x = Schedule{} + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[4] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *Schedule) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*Schedule) ProtoMessage() {} + +func (x *Schedule) ProtoReflect() protoreflect.Message { + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[4] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use Schedule.ProtoReflect.Descriptor instead. +func (*Schedule) Descriptor() ([]byte, []int) { + return file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDescGZIP(), []int{4} +} + +func (x *Schedule) GetTime() *TimeSchedule { + if x != nil { + return x.Time + } + return nil +} + +// TimeSchedule specifies an in-line schedule. +type TimeSchedule struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Shifts contains a set of shifts that make up the schedule. + Shifts []*TimeSchedule_Shift `protobuf:"bytes,1,rep,name=shifts,proto3" json:"shifts,omitempty"` + // Timezone specifies the schedule timezone. This field is optional and defaults + // to "UTC". 
Accepted values use timezone locations as defined in the IANA + // Time Zone Database, such as "America/Los_Angeles", "Europe/Lisbon", or + // "Asia/Singapore". + // + // See https://data.iana.org/time-zones/tzdb/zone1970.tab for a list of supported values. + Timezone string `protobuf:"bytes,2,opt,name=timezone,proto3" json:"timezone,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *TimeSchedule) Reset() { + *x = TimeSchedule{} + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[5] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *TimeSchedule) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*TimeSchedule) ProtoMessage() {} + +func (x *TimeSchedule) ProtoReflect() protoreflect.Message { + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[5] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use TimeSchedule.ProtoReflect.Descriptor instead. +func (*TimeSchedule) Descriptor() ([]byte, []int) { + return file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDescGZIP(), []int{5} +} + +func (x *TimeSchedule) GetShifts() []*TimeSchedule_Shift { + if x != nil { + return x.Shifts + } + return nil +} + +func (x *TimeSchedule) GetTimezone() string { + if x != nil { + return x.Timezone + } + return "" +} + // CreateAccessMonitoringRuleRequest is the request for CreateAccessMonitoringRule. 
type CreateAccessMonitoringRuleRequest struct { state protoimpl.MessageState `protogen:"open.v1"` @@ -345,7 +463,7 @@ type CreateAccessMonitoringRuleRequest struct { func (x *CreateAccessMonitoringRuleRequest) Reset() { *x = CreateAccessMonitoringRuleRequest{} - mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[4] + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[6] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -357,7 +475,7 @@ func (x *CreateAccessMonitoringRuleRequest) String() string { func (*CreateAccessMonitoringRuleRequest) ProtoMessage() {} func (x *CreateAccessMonitoringRuleRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[4] + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[6] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -370,7 +488,7 @@ func (x *CreateAccessMonitoringRuleRequest) ProtoReflect() protoreflect.Message // Deprecated: Use CreateAccessMonitoringRuleRequest.ProtoReflect.Descriptor instead. 
func (*CreateAccessMonitoringRuleRequest) Descriptor() ([]byte, []int) { - return file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDescGZIP(), []int{4} + return file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDescGZIP(), []int{6} } func (x *CreateAccessMonitoringRuleRequest) GetRule() *AccessMonitoringRule { @@ -391,7 +509,7 @@ type UpdateAccessMonitoringRuleRequest struct { func (x *UpdateAccessMonitoringRuleRequest) Reset() { *x = UpdateAccessMonitoringRuleRequest{} - mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[5] + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[7] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -403,7 +521,7 @@ func (x *UpdateAccessMonitoringRuleRequest) String() string { func (*UpdateAccessMonitoringRuleRequest) ProtoMessage() {} func (x *UpdateAccessMonitoringRuleRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[5] + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[7] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -416,7 +534,7 @@ func (x *UpdateAccessMonitoringRuleRequest) ProtoReflect() protoreflect.Message // Deprecated: Use UpdateAccessMonitoringRuleRequest.ProtoReflect.Descriptor instead. 
func (*UpdateAccessMonitoringRuleRequest) Descriptor() ([]byte, []int) { - return file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDescGZIP(), []int{5} + return file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDescGZIP(), []int{7} } func (x *UpdateAccessMonitoringRuleRequest) GetRule() *AccessMonitoringRule { @@ -437,7 +555,7 @@ type UpsertAccessMonitoringRuleRequest struct { func (x *UpsertAccessMonitoringRuleRequest) Reset() { *x = UpsertAccessMonitoringRuleRequest{} - mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[6] + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[8] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -449,7 +567,7 @@ func (x *UpsertAccessMonitoringRuleRequest) String() string { func (*UpsertAccessMonitoringRuleRequest) ProtoMessage() {} func (x *UpsertAccessMonitoringRuleRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[6] + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[8] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -462,7 +580,7 @@ func (x *UpsertAccessMonitoringRuleRequest) ProtoReflect() protoreflect.Message // Deprecated: Use UpsertAccessMonitoringRuleRequest.ProtoReflect.Descriptor instead. 
func (*UpsertAccessMonitoringRuleRequest) Descriptor() ([]byte, []int) { - return file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDescGZIP(), []int{6} + return file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDescGZIP(), []int{8} } func (x *UpsertAccessMonitoringRuleRequest) GetRule() *AccessMonitoringRule { @@ -483,7 +601,7 @@ type GetAccessMonitoringRuleRequest struct { func (x *GetAccessMonitoringRuleRequest) Reset() { *x = GetAccessMonitoringRuleRequest{} - mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[7] + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[9] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -495,7 +613,7 @@ func (x *GetAccessMonitoringRuleRequest) String() string { func (*GetAccessMonitoringRuleRequest) ProtoMessage() {} func (x *GetAccessMonitoringRuleRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[7] + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[9] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -508,7 +626,7 @@ func (x *GetAccessMonitoringRuleRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use GetAccessMonitoringRuleRequest.ProtoReflect.Descriptor instead. 
func (*GetAccessMonitoringRuleRequest) Descriptor() ([]byte, []int) { - return file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDescGZIP(), []int{7} + return file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDescGZIP(), []int{9} } func (x *GetAccessMonitoringRuleRequest) GetName() string { @@ -529,7 +647,7 @@ type DeleteAccessMonitoringRuleRequest struct { func (x *DeleteAccessMonitoringRuleRequest) Reset() { *x = DeleteAccessMonitoringRuleRequest{} - mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[8] + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[10] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -541,7 +659,7 @@ func (x *DeleteAccessMonitoringRuleRequest) String() string { func (*DeleteAccessMonitoringRuleRequest) ProtoMessage() {} func (x *DeleteAccessMonitoringRuleRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[8] + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[10] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -554,7 +672,7 @@ func (x *DeleteAccessMonitoringRuleRequest) ProtoReflect() protoreflect.Message // Deprecated: Use DeleteAccessMonitoringRuleRequest.ProtoReflect.Descriptor instead. 
func (*DeleteAccessMonitoringRuleRequest) Descriptor() ([]byte, []int) { - return file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDescGZIP(), []int{8} + return file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDescGZIP(), []int{10} } func (x *DeleteAccessMonitoringRuleRequest) GetName() string { @@ -578,7 +696,7 @@ type ListAccessMonitoringRulesRequest struct { func (x *ListAccessMonitoringRulesRequest) Reset() { *x = ListAccessMonitoringRulesRequest{} - mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[9] + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[11] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -590,7 +708,7 @@ func (x *ListAccessMonitoringRulesRequest) String() string { func (*ListAccessMonitoringRulesRequest) ProtoMessage() {} func (x *ListAccessMonitoringRulesRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[9] + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[11] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -603,7 +721,7 @@ func (x *ListAccessMonitoringRulesRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use ListAccessMonitoringRulesRequest.ProtoReflect.Descriptor instead. 
func (*ListAccessMonitoringRulesRequest) Descriptor() ([]byte, []int) { - return file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDescGZIP(), []int{9} + return file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDescGZIP(), []int{11} } func (x *ListAccessMonitoringRulesRequest) GetPageSize() int64 { @@ -641,7 +759,7 @@ type ListAccessMonitoringRulesWithFilterRequest struct { func (x *ListAccessMonitoringRulesWithFilterRequest) Reset() { *x = ListAccessMonitoringRulesWithFilterRequest{} - mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[10] + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[12] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -653,7 +771,7 @@ func (x *ListAccessMonitoringRulesWithFilterRequest) String() string { func (*ListAccessMonitoringRulesWithFilterRequest) ProtoMessage() {} func (x *ListAccessMonitoringRulesWithFilterRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[10] + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[12] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -666,7 +784,7 @@ func (x *ListAccessMonitoringRulesWithFilterRequest) ProtoReflect() protoreflect // Deprecated: Use ListAccessMonitoringRulesWithFilterRequest.ProtoReflect.Descriptor instead. 
func (*ListAccessMonitoringRulesWithFilterRequest) Descriptor() ([]byte, []int) { - return file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDescGZIP(), []int{10} + return file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDescGZIP(), []int{12} } func (x *ListAccessMonitoringRulesWithFilterRequest) GetPageSize() int64 { @@ -718,7 +836,7 @@ type ListAccessMonitoringRulesResponse struct { func (x *ListAccessMonitoringRulesResponse) Reset() { *x = ListAccessMonitoringRulesResponse{} - mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[11] + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[13] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -730,7 +848,7 @@ func (x *ListAccessMonitoringRulesResponse) String() string { func (*ListAccessMonitoringRulesResponse) ProtoMessage() {} func (x *ListAccessMonitoringRulesResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[11] + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[13] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -743,7 +861,7 @@ func (x *ListAccessMonitoringRulesResponse) ProtoReflect() protoreflect.Message // Deprecated: Use ListAccessMonitoringRulesResponse.ProtoReflect.Descriptor instead. 
func (*ListAccessMonitoringRulesResponse) Descriptor() ([]byte, []int) { - return file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDescGZIP(), []int{11} + return file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDescGZIP(), []int{13} } func (x *ListAccessMonitoringRulesResponse) GetRules() []*AccessMonitoringRule { @@ -774,7 +892,7 @@ type ListAccessMonitoringRulesWithFilterResponse struct { func (x *ListAccessMonitoringRulesWithFilterResponse) Reset() { *x = ListAccessMonitoringRulesWithFilterResponse{} - mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[12] + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[14] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -786,7 +904,7 @@ func (x *ListAccessMonitoringRulesWithFilterResponse) String() string { func (*ListAccessMonitoringRulesWithFilterResponse) ProtoMessage() {} func (x *ListAccessMonitoringRulesWithFilterResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[12] + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[14] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -799,7 +917,7 @@ func (x *ListAccessMonitoringRulesWithFilterResponse) ProtoReflect() protoreflec // Deprecated: Use ListAccessMonitoringRulesWithFilterResponse.ProtoReflect.Descriptor instead. 
func (*ListAccessMonitoringRulesWithFilterResponse) Descriptor() ([]byte, []int) { - return file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDescGZIP(), []int{12} + return file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDescGZIP(), []int{14} } func (x *ListAccessMonitoringRulesWithFilterResponse) GetRules() []*AccessMonitoringRule { @@ -816,6 +934,70 @@ func (x *ListAccessMonitoringRulesWithFilterResponse) GetNextPageToken() string return "" } +// Shift contains the weekday, start time, and end time of a shift. +type TimeSchedule_Shift struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Weekday specifies the day of the week, e.g., "Sunday", "Monday", "Tuesday". + Weekday string `protobuf:"bytes,1,opt,name=weekday,proto3" json:"weekday,omitempty"` + // Start specifies the start time in the format HH:MM, e.g., "12:30". + Start string `protobuf:"bytes,2,opt,name=start,proto3" json:"start,omitempty"` + // End specifies the end time in the format HH:MM, e.g., "12:30". + End string `protobuf:"bytes,3,opt,name=end,proto3" json:"end,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *TimeSchedule_Shift) Reset() { + *x = TimeSchedule_Shift{} + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[16] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *TimeSchedule_Shift) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*TimeSchedule_Shift) ProtoMessage() {} + +func (x *TimeSchedule_Shift) ProtoReflect() protoreflect.Message { + mi := &file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes[16] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use TimeSchedule_Shift.ProtoReflect.Descriptor instead. 
+func (*TimeSchedule_Shift) Descriptor() ([]byte, []int) { + return file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDescGZIP(), []int{5, 0} +} + +func (x *TimeSchedule_Shift) GetWeekday() string { + if x != nil { + return x.Weekday + } + return "" +} + +func (x *TimeSchedule_Shift) GetStart() string { + if x != nil { + return x.Start + } + return "" +} + +func (x *TimeSchedule_Shift) GetEnd() string { + if x != nil { + return x.End + } + return "" +} + var File_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto protoreflect.FileDescriptor const file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDesc = "" + @@ -826,14 +1008,18 @@ const file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDe "\x04kind\x18\x02 \x01(\tR\x04kind\x12\x19\n" + "\bsub_kind\x18\x03 \x01(\tR\asubKind\x12\x18\n" + "\aversion\x18\x04 \x01(\tR\aversion\x12O\n" + - "\x04spec\x18\x05 \x01(\v2;.teleport.accessmonitoringrules.v1.AccessMonitoringRuleSpecR\x04spec\"\xdf\x02\n" + + "\x04spec\x18\x05 \x01(\v2;.teleport.accessmonitoringrules.v1.AccessMonitoringRuleSpecR\x04spec\"\xb4\x04\n" + "\x18AccessMonitoringRuleSpec\x12\x1a\n" + "\bsubjects\x18\x01 \x03(\tR\bsubjects\x12\x16\n" + "\x06states\x18\x02 \x03(\tR\x06states\x12\x1c\n" + "\tcondition\x18\x03 \x01(\tR\tcondition\x12S\n" + "\fnotification\x18\x04 \x01(\v2/.teleport.accessmonitoringrules.v1.NotificationR\fnotification\x12]\n" + "\x10automatic_review\x18\x06 \x01(\v22.teleport.accessmonitoringrules.v1.AutomaticReviewR\x0fautomaticReview\x12#\n" + - "\rdesired_state\x18\a \x01(\tR\fdesiredStateJ\x04\b\x05\x10\x06R\x12automatic_approval\"B\n" + + "\rdesired_state\x18\a \x01(\tR\fdesiredState\x12h\n" + + "\tschedules\x18\b \x03(\v2J.teleport.accessmonitoringrules.v1.AccessMonitoringRuleSpec.SchedulesEntryR\tschedules\x1ai\n" + + "\x0eSchedulesEntry\x12\x10\n" + + "\x03key\x18\x01 \x01(\tR\x03key\x12A\n" + + "\x05value\x18\x02 
\x01(\v2+.teleport.accessmonitoringrules.v1.ScheduleR\x05value:\x028\x01J\x04\b\x05\x10\x06R\x12automatic_approval\"B\n" + "\fNotification\x12\x12\n" + "\x04name\x18\x01 \x01(\tR\x04name\x12\x1e\n" + "\n" + @@ -841,7 +1027,16 @@ const file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDe "recipients\"O\n" + "\x0fAutomaticReview\x12 \n" + "\vintegration\x18\x01 \x01(\tR\vintegration\x12\x1a\n" + - "\bdecision\x18\x02 \x01(\tR\bdecision\"p\n" + + "\bdecision\x18\x02 \x01(\tR\bdecision\"O\n" + + "\bSchedule\x12C\n" + + "\x04time\x18\x01 \x01(\v2/.teleport.accessmonitoringrules.v1.TimeScheduleR\x04time\"\xc4\x01\n" + + "\fTimeSchedule\x12M\n" + + "\x06shifts\x18\x01 \x03(\v25.teleport.accessmonitoringrules.v1.TimeSchedule.ShiftR\x06shifts\x12\x1a\n" + + "\btimezone\x18\x02 \x01(\tR\btimezone\x1aI\n" + + "\x05Shift\x12\x18\n" + + "\aweekday\x18\x01 \x01(\tR\aweekday\x12\x14\n" + + "\x05start\x18\x02 \x01(\tR\x05start\x12\x10\n" + + "\x03end\x18\x03 \x01(\tR\x03end\"p\n" + "!CreateAccessMonitoringRuleRequest\x12K\n" + "\x04rule\x18\x01 \x01(\v27.teleport.accessmonitoringrules.v1.AccessMonitoringRuleR\x04rule\"p\n" + "!UpdateAccessMonitoringRuleRequest\x12K\n" + @@ -882,38 +1077,46 @@ func file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDes return file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDescData } -var file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes = make([]protoimpl.MessageInfo, 13) +var file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_msgTypes = make([]protoimpl.MessageInfo, 17) var file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_goTypes = []any{ (*AccessMonitoringRule)(nil), // 0: teleport.accessmonitoringrules.v1.AccessMonitoringRule (*AccessMonitoringRuleSpec)(nil), // 1: teleport.accessmonitoringrules.v1.AccessMonitoringRuleSpec (*Notification)(nil), // 2: teleport.accessmonitoringrules.v1.Notification 
(*AutomaticReview)(nil), // 3: teleport.accessmonitoringrules.v1.AutomaticReview - (*CreateAccessMonitoringRuleRequest)(nil), // 4: teleport.accessmonitoringrules.v1.CreateAccessMonitoringRuleRequest - (*UpdateAccessMonitoringRuleRequest)(nil), // 5: teleport.accessmonitoringrules.v1.UpdateAccessMonitoringRuleRequest - (*UpsertAccessMonitoringRuleRequest)(nil), // 6: teleport.accessmonitoringrules.v1.UpsertAccessMonitoringRuleRequest - (*GetAccessMonitoringRuleRequest)(nil), // 7: teleport.accessmonitoringrules.v1.GetAccessMonitoringRuleRequest - (*DeleteAccessMonitoringRuleRequest)(nil), // 8: teleport.accessmonitoringrules.v1.DeleteAccessMonitoringRuleRequest - (*ListAccessMonitoringRulesRequest)(nil), // 9: teleport.accessmonitoringrules.v1.ListAccessMonitoringRulesRequest - (*ListAccessMonitoringRulesWithFilterRequest)(nil), // 10: teleport.accessmonitoringrules.v1.ListAccessMonitoringRulesWithFilterRequest - (*ListAccessMonitoringRulesResponse)(nil), // 11: teleport.accessmonitoringrules.v1.ListAccessMonitoringRulesResponse - (*ListAccessMonitoringRulesWithFilterResponse)(nil), // 12: teleport.accessmonitoringrules.v1.ListAccessMonitoringRulesWithFilterResponse - (*v1.Metadata)(nil), // 13: teleport.header.v1.Metadata + (*Schedule)(nil), // 4: teleport.accessmonitoringrules.v1.Schedule + (*TimeSchedule)(nil), // 5: teleport.accessmonitoringrules.v1.TimeSchedule + (*CreateAccessMonitoringRuleRequest)(nil), // 6: teleport.accessmonitoringrules.v1.CreateAccessMonitoringRuleRequest + (*UpdateAccessMonitoringRuleRequest)(nil), // 7: teleport.accessmonitoringrules.v1.UpdateAccessMonitoringRuleRequest + (*UpsertAccessMonitoringRuleRequest)(nil), // 8: teleport.accessmonitoringrules.v1.UpsertAccessMonitoringRuleRequest + (*GetAccessMonitoringRuleRequest)(nil), // 9: teleport.accessmonitoringrules.v1.GetAccessMonitoringRuleRequest + (*DeleteAccessMonitoringRuleRequest)(nil), // 10: teleport.accessmonitoringrules.v1.DeleteAccessMonitoringRuleRequest + 
(*ListAccessMonitoringRulesRequest)(nil), // 11: teleport.accessmonitoringrules.v1.ListAccessMonitoringRulesRequest + (*ListAccessMonitoringRulesWithFilterRequest)(nil), // 12: teleport.accessmonitoringrules.v1.ListAccessMonitoringRulesWithFilterRequest + (*ListAccessMonitoringRulesResponse)(nil), // 13: teleport.accessmonitoringrules.v1.ListAccessMonitoringRulesResponse + (*ListAccessMonitoringRulesWithFilterResponse)(nil), // 14: teleport.accessmonitoringrules.v1.ListAccessMonitoringRulesWithFilterResponse + nil, // 15: teleport.accessmonitoringrules.v1.AccessMonitoringRuleSpec.SchedulesEntry + (*TimeSchedule_Shift)(nil), // 16: teleport.accessmonitoringrules.v1.TimeSchedule.Shift + (*v1.Metadata)(nil), // 17: teleport.header.v1.Metadata } var file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_depIdxs = []int32{ - 13, // 0: teleport.accessmonitoringrules.v1.AccessMonitoringRule.metadata:type_name -> teleport.header.v1.Metadata + 17, // 0: teleport.accessmonitoringrules.v1.AccessMonitoringRule.metadata:type_name -> teleport.header.v1.Metadata 1, // 1: teleport.accessmonitoringrules.v1.AccessMonitoringRule.spec:type_name -> teleport.accessmonitoringrules.v1.AccessMonitoringRuleSpec 2, // 2: teleport.accessmonitoringrules.v1.AccessMonitoringRuleSpec.notification:type_name -> teleport.accessmonitoringrules.v1.Notification 3, // 3: teleport.accessmonitoringrules.v1.AccessMonitoringRuleSpec.automatic_review:type_name -> teleport.accessmonitoringrules.v1.AutomaticReview - 0, // 4: teleport.accessmonitoringrules.v1.CreateAccessMonitoringRuleRequest.rule:type_name -> teleport.accessmonitoringrules.v1.AccessMonitoringRule - 0, // 5: teleport.accessmonitoringrules.v1.UpdateAccessMonitoringRuleRequest.rule:type_name -> teleport.accessmonitoringrules.v1.AccessMonitoringRule - 0, // 6: teleport.accessmonitoringrules.v1.UpsertAccessMonitoringRuleRequest.rule:type_name -> teleport.accessmonitoringrules.v1.AccessMonitoringRule - 0, // 7: 
teleport.accessmonitoringrules.v1.ListAccessMonitoringRulesResponse.rules:type_name -> teleport.accessmonitoringrules.v1.AccessMonitoringRule - 0, // 8: teleport.accessmonitoringrules.v1.ListAccessMonitoringRulesWithFilterResponse.rules:type_name -> teleport.accessmonitoringrules.v1.AccessMonitoringRule - 9, // [9:9] is the sub-list for method output_type - 9, // [9:9] is the sub-list for method input_type - 9, // [9:9] is the sub-list for extension type_name - 9, // [9:9] is the sub-list for extension extendee - 0, // [0:9] is the sub-list for field type_name + 15, // 4: teleport.accessmonitoringrules.v1.AccessMonitoringRuleSpec.schedules:type_name -> teleport.accessmonitoringrules.v1.AccessMonitoringRuleSpec.SchedulesEntry + 5, // 5: teleport.accessmonitoringrules.v1.Schedule.time:type_name -> teleport.accessmonitoringrules.v1.TimeSchedule + 16, // 6: teleport.accessmonitoringrules.v1.TimeSchedule.shifts:type_name -> teleport.accessmonitoringrules.v1.TimeSchedule.Shift + 0, // 7: teleport.accessmonitoringrules.v1.CreateAccessMonitoringRuleRequest.rule:type_name -> teleport.accessmonitoringrules.v1.AccessMonitoringRule + 0, // 8: teleport.accessmonitoringrules.v1.UpdateAccessMonitoringRuleRequest.rule:type_name -> teleport.accessmonitoringrules.v1.AccessMonitoringRule + 0, // 9: teleport.accessmonitoringrules.v1.UpsertAccessMonitoringRuleRequest.rule:type_name -> teleport.accessmonitoringrules.v1.AccessMonitoringRule + 0, // 10: teleport.accessmonitoringrules.v1.ListAccessMonitoringRulesResponse.rules:type_name -> teleport.accessmonitoringrules.v1.AccessMonitoringRule + 0, // 11: teleport.accessmonitoringrules.v1.ListAccessMonitoringRulesWithFilterResponse.rules:type_name -> teleport.accessmonitoringrules.v1.AccessMonitoringRule + 4, // 12: teleport.accessmonitoringrules.v1.AccessMonitoringRuleSpec.SchedulesEntry.value:type_name -> teleport.accessmonitoringrules.v1.Schedule + 13, // [13:13] is the sub-list for method output_type + 13, // [13:13] is the sub-list 
for method input_type + 13, // [13:13] is the sub-list for extension type_name + 13, // [13:13] is the sub-list for extension extendee + 0, // [0:13] is the sub-list for field type_name } func init() { file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_init() } @@ -927,7 +1130,7 @@ func file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_init() GoPackagePath: reflect.TypeOf(x{}).PkgPath(), RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDesc), len(file_teleport_accessmonitoringrules_v1_access_monitoring_rules_proto_rawDesc)), NumEnums: 0, - NumMessages: 13, + NumMessages: 17, NumExtensions: 0, NumServices: 0, }, diff --git a/api/gen/proto/go/teleport/accessmonitoringrules/v1/access_monitoring_rules_service.pb.go b/api/gen/proto/go/teleport/accessmonitoringrules/v1/access_monitoring_rules_service.pb.go index 4366fe2636b09..78f130a5eb6b0 100644 --- a/api/gen/proto/go/teleport/accessmonitoringrules/v1/access_monitoring_rules_service.pb.go +++ b/api/gen/proto/go/teleport/accessmonitoringrules/v1/access_monitoring_rules_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/accessmonitoringrules/v1/access_monitoring_rules_service.proto diff --git a/api/gen/proto/go/teleport/auditlog/v1/auditlog.pb.go b/api/gen/proto/go/teleport/auditlog/v1/auditlog.pb.go index deeacaf1a7172..95675762e0d3e 100644 --- a/api/gen/proto/go/teleport/auditlog/v1/auditlog.pb.go +++ b/api/gen/proto/go/teleport/auditlog/v1/auditlog.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/auditlog/v1/auditlog.proto diff --git a/api/gen/proto/go/teleport/autoupdate/v1/autoupdate.pb.go b/api/gen/proto/go/teleport/autoupdate/v1/autoupdate.pb.go index cb963a7017a38..4781f38268b8a 100644 --- a/api/gen/proto/go/teleport/autoupdate/v1/autoupdate.pb.go +++ b/api/gen/proto/go/teleport/autoupdate/v1/autoupdate.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/autoupdate/v1/autoupdate.proto @@ -55,6 +55,9 @@ const ( // AUTO_UPDATE_AGENT_GROUP_STATE_ROLLEDBACK represents that the group has been rolled back. // New agents should run v1, existing agents should update to v1. AutoUpdateAgentGroupState_AUTO_UPDATE_AGENT_GROUP_STATE_ROLLEDBACK AutoUpdateAgentGroupState = 4 + // AUTO_UPDATE_AGENT_GROUP_STATE_CANARY represents that the group is updating a few canary nodes, but that most nodes + // have not started updating yet. + AutoUpdateAgentGroupState_AUTO_UPDATE_AGENT_GROUP_STATE_CANARY AutoUpdateAgentGroupState = 5 ) // Enum value maps for AutoUpdateAgentGroupState. @@ -65,6 +68,7 @@ var ( 2: "AUTO_UPDATE_AGENT_GROUP_STATE_ACTIVE", 3: "AUTO_UPDATE_AGENT_GROUP_STATE_DONE", 4: "AUTO_UPDATE_AGENT_GROUP_STATE_ROLLEDBACK", + 5: "AUTO_UPDATE_AGENT_GROUP_STATE_CANARY", } AutoUpdateAgentGroupState_value = map[string]int32{ "AUTO_UPDATE_AGENT_GROUP_STATE_UNSPECIFIED": 0, @@ -72,6 +76,7 @@ var ( "AUTO_UPDATE_AGENT_GROUP_STATE_ACTIVE": 2, "AUTO_UPDATE_AGENT_GROUP_STATE_DONE": 3, "AUTO_UPDATE_AGENT_GROUP_STATE_ROLLEDBACK": 4, + "AUTO_UPDATE_AGENT_GROUP_STATE_CANARY": 5, } ) @@ -476,7 +481,11 @@ type AgentAutoUpdateGroup struct { StartHour int32 `protobuf:"varint,3,opt,name=start_hour,json=startHour,proto3" json:"start_hour,omitempty"` // wait_hours after last group succeeds before this group can run. 
This can only be used when the strategy is "halt-on-failure". // This field must be positive. - WaitHours int32 `protobuf:"varint,5,opt,name=wait_hours,json=waitHours,proto3" json:"wait_hours,omitempty"` + WaitHours int32 `protobuf:"varint,5,opt,name=wait_hours,json=waitHours,proto3" json:"wait_hours,omitempty"` + // canary_count is the number of canary agents that will be updated before the whole group is updated. + // when set to 0, the group does not enter the canary phase. This number is capped to 5. + // This number must always be lower than the total number of agents in the group, else the rollout will be stuck. + CanaryCount int32 `protobuf:"varint,6,opt,name=canary_count,json=canaryCount,proto3" json:"canary_count,omitempty"` unknownFields protoimpl.UnknownFields sizeCache protoimpl.SizeCache } @@ -539,6 +548,13 @@ func (x *AgentAutoUpdateGroup) GetWaitHours() int32 { return 0 } +func (x *AgentAutoUpdateGroup) GetCanaryCount() int32 { + if x != nil { + return x.CanaryCount + } + return 0 +} + // AutoUpdateVersion is a resource singleton with version required for // tools autoupdate. type AutoUpdateVersion struct { @@ -720,9 +736,9 @@ func (x *AutoUpdateVersionSpecTools) GetTargetVersion() string { // AutoUpdateVersionSpecAgents is the spec for the autoupdate version. type AutoUpdateVersionSpecAgents struct { state protoimpl.MessageState `protogen:"open.v1"` - // start_version is the version to update from. + // start_version is the version used for newly installed agents before their update window. StartVersion string `protobuf:"bytes,1,opt,name=start_version,json=startVersion,proto3" json:"start_version,omitempty"` - // target_version is the version to update to. + // target_version is the version that all agents will update to during their update window. 
TargetVersion string `protobuf:"bytes,2,opt,name=target_version,json=targetVersion,proto3" json:"target_version,omitempty"` // schedule to use for the rollout Schedule string `protobuf:"bytes,3,opt,name=schedule,proto3" json:"schedule,omitempty"` @@ -882,9 +898,9 @@ func (x *AutoUpdateAgentRollout) GetStatus() *AutoUpdateAgentRolloutStatus { // AutoUpdateVersionSpecAgents. type AutoUpdateAgentRolloutSpec struct { state protoimpl.MessageState `protogen:"open.v1"` - // start_version is the version to update from. + // start_version is the version used for newly installed agents before their update window. StartVersion string `protobuf:"bytes,1,opt,name=start_version,json=startVersion,proto3" json:"start_version,omitempty"` - // target_version is the version to update to. + // target_version is the version that all agents will update to during their update window. TargetVersion string `protobuf:"bytes,2,opt,name=target_version,json=targetVersion,proto3" json:"target_version,omitempty"` // schedule to use for the rollout. Supported values are "regular" and "immediate". // - "regular" follows the regular group schedule @@ -1105,6 +1121,12 @@ type AutoUpdateAgentRolloutStatusGroup struct { // to the done state if: // - the ratio present_count/initial_count is above 0.9 (no more than 10% of the nodes dropped during update) UpToDateCount uint64 `protobuf:"varint,12,opt,name=up_to_date_count,json=upToDateCount,proto3" json:"up_to_date_count,omitempty"` + // canary_count represents how many canaries this group should have to leave the AUTO_UPDATE_AGENT_GROUP_STATE_CANARY + // state. + CanaryCount uint64 `protobuf:"varint,13,opt,name=canary_count,json=canaryCount,proto3" json:"canary_count,omitempty"` + // canaries is the list of canary agents that should be updated. + // This list is empty until we enter the AUTO_UPDATE_AGENT_GROUP_STATE_CANARY state. 
+ Canaries []*Canary `protobuf:"bytes,14,rep,name=canaries,proto3" json:"canaries,omitempty"` unknownFields protoimpl.UnknownFields sizeCache protoimpl.SizeCache } @@ -1216,6 +1238,20 @@ func (x *AutoUpdateAgentRolloutStatusGroup) GetUpToDateCount() uint64 { return 0 } +func (x *AutoUpdateAgentRolloutStatusGroup) GetCanaryCount() uint64 { + if x != nil { + return x.CanaryCount + } + return 0 +} + +func (x *AutoUpdateAgentRolloutStatusGroup) GetCanaries() []*Canary { + if x != nil { + return x.Canaries + } + return nil +} + // AutoUpdateAgentReport is a report generated by each Teleport Auth service. // The report tracks per group and per version how many agents are running. // The report is used to track which version agents are running. @@ -1504,6 +1540,314 @@ func (x *AutoUpdateAgentReportSpecOmitted) GetReason() string { return "" } +// AutoUpdateBotInstanceReport is a report generated by an elected instance of the +// Teleport Auth service. The report tracks per group and per version how many +// instances of tbot are running. +type AutoUpdateBotInstanceReport struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The kind of resource represented. This is always `autoupdate_bot_instance_report`. + Kind string `protobuf:"bytes,1,opt,name=kind,proto3" json:"kind,omitempty"` + // Differentiates variations of the same kind. All resources should contain + // one, even if it is never populated. + SubKind string `protobuf:"bytes,2,opt,name=sub_kind,json=subKind,proto3" json:"sub_kind,omitempty"` + // The version of the resource being represented. + Version string `protobuf:"bytes,3,opt,name=version,proto3" json:"version,omitempty"` + // Common metadata that all resources share. + Metadata *v1.Metadata `protobuf:"bytes,4,opt,name=metadata,proto3" json:"metadata,omitempty"` + // Contents of the report. 
+ Spec *AutoUpdateBotInstanceReportSpec `protobuf:"bytes,5,opt,name=spec,proto3" json:"spec,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *AutoUpdateBotInstanceReport) Reset() { + *x = AutoUpdateBotInstanceReport{} + mi := &file_teleport_autoupdate_v1_autoupdate_proto_msgTypes[19] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *AutoUpdateBotInstanceReport) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*AutoUpdateBotInstanceReport) ProtoMessage() {} + +func (x *AutoUpdateBotInstanceReport) ProtoReflect() protoreflect.Message { + mi := &file_teleport_autoupdate_v1_autoupdate_proto_msgTypes[19] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use AutoUpdateBotInstanceReport.ProtoReflect.Descriptor instead. +func (*AutoUpdateBotInstanceReport) Descriptor() ([]byte, []int) { + return file_teleport_autoupdate_v1_autoupdate_proto_rawDescGZIP(), []int{19} +} + +func (x *AutoUpdateBotInstanceReport) GetKind() string { + if x != nil { + return x.Kind + } + return "" +} + +func (x *AutoUpdateBotInstanceReport) GetSubKind() string { + if x != nil { + return x.SubKind + } + return "" +} + +func (x *AutoUpdateBotInstanceReport) GetVersion() string { + if x != nil { + return x.Version + } + return "" +} + +func (x *AutoUpdateBotInstanceReport) GetMetadata() *v1.Metadata { + if x != nil { + return x.Metadata + } + return nil +} + +func (x *AutoUpdateBotInstanceReport) GetSpec() *AutoUpdateBotInstanceReportSpec { + if x != nil { + return x.Spec + } + return nil +} + +// AutoUpdateBotInstanceReportSpec holds the contents of an AutoUpdateBotInstanceReport. +type AutoUpdateBotInstanceReportSpec struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Timestamp is when the report was generated. 
+ Timestamp *timestamppb.Timestamp `protobuf:"bytes,1,opt,name=timestamp,proto3" json:"timestamp,omitempty"` + // Bot counts aggregated by update group. + Groups map[string]*AutoUpdateBotInstanceReportSpecGroup `protobuf:"bytes,2,rep,name=groups,proto3" json:"groups,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *AutoUpdateBotInstanceReportSpec) Reset() { + *x = AutoUpdateBotInstanceReportSpec{} + mi := &file_teleport_autoupdate_v1_autoupdate_proto_msgTypes[20] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *AutoUpdateBotInstanceReportSpec) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*AutoUpdateBotInstanceReportSpec) ProtoMessage() {} + +func (x *AutoUpdateBotInstanceReportSpec) ProtoReflect() protoreflect.Message { + mi := &file_teleport_autoupdate_v1_autoupdate_proto_msgTypes[20] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use AutoUpdateBotInstanceReportSpec.ProtoReflect.Descriptor instead. +func (*AutoUpdateBotInstanceReportSpec) Descriptor() ([]byte, []int) { + return file_teleport_autoupdate_v1_autoupdate_proto_rawDescGZIP(), []int{20} +} + +func (x *AutoUpdateBotInstanceReportSpec) GetTimestamp() *timestamppb.Timestamp { + if x != nil { + return x.Timestamp + } + return nil +} + +func (x *AutoUpdateBotInstanceReportSpec) GetGroups() map[string]*AutoUpdateBotInstanceReportSpecGroup { + if x != nil { + return x.Groups + } + return nil +} + +// AutoUpdateBotInstanceReportSpecGroup holds an update group's bot counts. +type AutoUpdateBotInstanceReportSpecGroup struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Bot counts aggregated by version. 
+ Versions map[string]*AutoUpdateBotInstanceReportSpecGroupVersion `protobuf:"bytes,1,rep,name=versions,proto3" json:"versions,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *AutoUpdateBotInstanceReportSpecGroup) Reset() { + *x = AutoUpdateBotInstanceReportSpecGroup{} + mi := &file_teleport_autoupdate_v1_autoupdate_proto_msgTypes[21] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *AutoUpdateBotInstanceReportSpecGroup) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*AutoUpdateBotInstanceReportSpecGroup) ProtoMessage() {} + +func (x *AutoUpdateBotInstanceReportSpecGroup) ProtoReflect() protoreflect.Message { + mi := &file_teleport_autoupdate_v1_autoupdate_proto_msgTypes[21] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use AutoUpdateBotInstanceReportSpecGroup.ProtoReflect.Descriptor instead. +func (*AutoUpdateBotInstanceReportSpecGroup) Descriptor() ([]byte, []int) { + return file_teleport_autoupdate_v1_autoupdate_proto_rawDescGZIP(), []int{21} +} + +func (x *AutoUpdateBotInstanceReportSpecGroup) GetVersions() map[string]*AutoUpdateBotInstanceReportSpecGroupVersion { + if x != nil { + return x.Versions + } + return nil +} + +// AutoUpdateBotInstanceReportSpecGroupVersion holds a version's bot count. +type AutoUpdateBotInstanceReportSpecGroupVersion struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Count of bot instances running this version. 
+ Count int32 `protobuf:"varint,1,opt,name=count,proto3" json:"count,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *AutoUpdateBotInstanceReportSpecGroupVersion) Reset() { + *x = AutoUpdateBotInstanceReportSpecGroupVersion{} + mi := &file_teleport_autoupdate_v1_autoupdate_proto_msgTypes[22] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *AutoUpdateBotInstanceReportSpecGroupVersion) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*AutoUpdateBotInstanceReportSpecGroupVersion) ProtoMessage() {} + +func (x *AutoUpdateBotInstanceReportSpecGroupVersion) ProtoReflect() protoreflect.Message { + mi := &file_teleport_autoupdate_v1_autoupdate_proto_msgTypes[22] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use AutoUpdateBotInstanceReportSpecGroupVersion.ProtoReflect.Descriptor instead. +func (*AutoUpdateBotInstanceReportSpecGroupVersion) Descriptor() ([]byte, []int) { + return file_teleport_autoupdate_v1_autoupdate_proto_rawDescGZIP(), []int{22} +} + +func (x *AutoUpdateBotInstanceReportSpecGroupVersion) GetCount() int32 { + if x != nil { + return x.Count + } + return 0 +} + +// Canary describes a node that is acting as a canary and being updated before other nodes in its group. +type Canary struct { + state protoimpl.MessageState `protogen:"open.v1"` + // updater_id is reported by the agent in its control stream Hello. This allows us to uniquely identify an updater so + // the proxy can modulate its answer when the request comes from this specific updater. + UpdaterId string `protobuf:"bytes,1,opt,name=updater_id,json=updaterId,proto3" json:"updater_id,omitempty"` + // host_id is the node Host ID, reported by the agent in its control stream Hello. 
+ HostId string `protobuf:"bytes,2,opt,name=host_id,json=hostId,proto3" json:"host_id,omitempty"` + // hostname is the server hostname reported by the agent in its control stream Hello. + // This is purely for debugging purposes: if the agent drops, we won't be able to query the inventory to know which + // agent it was. + Hostname string `protobuf:"bytes,3,opt,name=hostname,proto3" json:"hostname,omitempty"` + // success represents if the agent successfully connected back, running the target version. + Success bool `protobuf:"varint,4,opt,name=success,proto3" json:"success,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *Canary) Reset() { + *x = Canary{} + mi := &file_teleport_autoupdate_v1_autoupdate_proto_msgTypes[23] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *Canary) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*Canary) ProtoMessage() {} + +func (x *Canary) ProtoReflect() protoreflect.Message { + mi := &file_teleport_autoupdate_v1_autoupdate_proto_msgTypes[23] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use Canary.ProtoReflect.Descriptor instead. 
+func (*Canary) Descriptor() ([]byte, []int) { + return file_teleport_autoupdate_v1_autoupdate_proto_rawDescGZIP(), []int{23} +} + +func (x *Canary) GetUpdaterId() string { + if x != nil { + return x.UpdaterId + } + return "" +} + +func (x *Canary) GetHostId() string { + if x != nil { + return x.HostId + } + return "" +} + +func (x *Canary) GetHostname() string { + if x != nil { + return x.Hostname + } + return "" +} + +func (x *Canary) GetSuccess() bool { + if x != nil { + return x.Success + } + return false +} + var File_teleport_autoupdate_v1_autoupdate_proto protoreflect.FileDescriptor const file_teleport_autoupdate_v1_autoupdate_proto_rawDesc = "" + @@ -1526,14 +1870,15 @@ const file_teleport_autoupdate_v1_autoupdate_proto_rawDesc = "" + "\x1bmaintenance_window_duration\x18\x03 \x01(\v2\x19.google.protobuf.DurationR\x19maintenanceWindowDuration\x12N\n" + "\tschedules\x18\x06 \x01(\v20.teleport.autoupdate.v1.AgentAutoUpdateSchedulesR\tschedulesJ\x04\b\x05\x10\x06R\x0fagent_schedules\"b\n" + "\x18AgentAutoUpdateSchedules\x12F\n" + - "\aregular\x18\x01 \x03(\v2,.teleport.autoupdate.v1.AgentAutoUpdateGroupR\aregular\"\x8d\x01\n" + + "\aregular\x18\x01 \x03(\v2,.teleport.autoupdate.v1.AgentAutoUpdateGroupR\aregular\"\xb0\x01\n" + "\x14AgentAutoUpdateGroup\x12\x12\n" + "\x04name\x18\x01 \x01(\tR\x04name\x12\x12\n" + "\x04days\x18\x02 \x03(\tR\x04days\x12\x1d\n" + "\n" + "start_hour\x18\x03 \x01(\x05R\tstartHour\x12\x1d\n" + "\n" + - "wait_hours\x18\x05 \x01(\x05R\twaitHoursJ\x04\b\x04\x10\x05R\twait_days\"\xd9\x01\n" + + "wait_hours\x18\x05 \x01(\x05R\twaitHours\x12!\n" + + "\fcanary_count\x18\x06 \x01(\x05R\vcanaryCountJ\x04\b\x04\x10\x05R\twait_days\"\xd9\x01\n" + "\x11AutoUpdateVersion\x12\x12\n" + "\x04kind\x18\x01 \x01(\tR\x04kind\x12\x19\n" + "\bsub_kind\x18\x02 \x01(\tR\asubKind\x12\x18\n" + @@ -1569,7 +1914,7 @@ const file_teleport_autoupdate_v1_autoupdate_proto_rawDesc = "" + "\x05state\x18\x02 
\x01(\x0e23.teleport.autoupdate.v1.AutoUpdateAgentRolloutStateR\x05state\x129\n" + "\n" + "start_time\x18\x03 \x01(\v2\x1a.google.protobuf.TimestampR\tstartTime\x12?\n" + - "\rtime_override\x18\x04 \x01(\v2\x1a.google.protobuf.TimestampR\ftimeOverride\"\xb3\x04\n" + + "\rtime_override\x18\x04 \x01(\v2\x1a.google.protobuf.TimestampR\ftimeOverride\"\x92\x05\n" + "!AutoUpdateAgentRolloutStatusGroup\x12\x12\n" + "\x04name\x18\x01 \x01(\tR\x04name\x129\n" + "\n" + @@ -1584,7 +1929,9 @@ const file_teleport_autoupdate_v1_autoupdate_proto_rawDesc = "" + "\rinitial_count\x18\n" + " \x01(\x04R\finitialCount\x12#\n" + "\rpresent_count\x18\v \x01(\x04R\fpresentCount\x12'\n" + - "\x10up_to_date_count\x18\f \x01(\x04R\rupToDateCountJ\x04\b\b\x10\tR\x10config_wait_days\"\xe1\x01\n" + + "\x10up_to_date_count\x18\f \x01(\x04R\rupToDateCount\x12!\n" + + "\fcanary_count\x18\r \x01(\x04R\vcanaryCount\x12:\n" + + "\bcanaries\x18\x0e \x03(\v2\x1e.teleport.autoupdate.v1.CanaryR\bcanariesJ\x04\b\b\x10\tR\x10config_wait_days\"\xe1\x01\n" + "\x15AutoUpdateAgentReport\x12\x12\n" + "\x04kind\x18\x01 \x01(\tR\x04kind\x12\x19\n" + "\bsub_kind\x18\x02 \x01(\tR\asubKind\x12\x18\n" + @@ -1607,13 +1954,39 @@ const file_teleport_autoupdate_v1_autoupdate_proto_rawDesc = "" + "\x05count\x18\x01 \x01(\x05R\x05count\"P\n" + " AutoUpdateAgentReportSpecOmitted\x12\x14\n" + "\x05count\x18\x01 \x01(\x03R\x05count\x12\x16\n" + - "\x06reason\x18\x02 \x01(\tR\x06reason*\xf7\x01\n" + + "\x06reason\x18\x02 \x01(\tR\x06reason\"\xed\x01\n" + + "\x1bAutoUpdateBotInstanceReport\x12\x12\n" + + "\x04kind\x18\x01 \x01(\tR\x04kind\x12\x19\n" + + "\bsub_kind\x18\x02 \x01(\tR\asubKind\x12\x18\n" + + "\aversion\x18\x03 \x01(\tR\aversion\x128\n" + + "\bmetadata\x18\x04 \x01(\v2\x1c.teleport.header.v1.MetadataR\bmetadata\x12K\n" + + "\x04spec\x18\x05 \x01(\v27.teleport.autoupdate.v1.AutoUpdateBotInstanceReportSpecR\x04spec\"\xb1\x02\n" + + "\x1fAutoUpdateBotInstanceReportSpec\x128\n" + + "\ttimestamp\x18\x01 
\x01(\v2\x1a.google.protobuf.TimestampR\ttimestamp\x12[\n" + + "\x06groups\x18\x02 \x03(\v2C.teleport.autoupdate.v1.AutoUpdateBotInstanceReportSpec.GroupsEntryR\x06groups\x1aw\n" + + "\vGroupsEntry\x12\x10\n" + + "\x03key\x18\x01 \x01(\tR\x03key\x12R\n" + + "\x05value\x18\x02 \x01(\v2<.teleport.autoupdate.v1.AutoUpdateBotInstanceReportSpecGroupR\x05value:\x028\x01\"\x91\x02\n" + + "$AutoUpdateBotInstanceReportSpecGroup\x12f\n" + + "\bversions\x18\x01 \x03(\v2J.teleport.autoupdate.v1.AutoUpdateBotInstanceReportSpecGroup.VersionsEntryR\bversions\x1a\x80\x01\n" + + "\rVersionsEntry\x12\x10\n" + + "\x03key\x18\x01 \x01(\tR\x03key\x12Y\n" + + "\x05value\x18\x02 \x01(\v2C.teleport.autoupdate.v1.AutoUpdateBotInstanceReportSpecGroupVersionR\x05value:\x028\x01\"C\n" + + "+AutoUpdateBotInstanceReportSpecGroupVersion\x12\x14\n" + + "\x05count\x18\x01 \x01(\x05R\x05count\"v\n" + + "\x06Canary\x12\x1d\n" + + "\n" + + "updater_id\x18\x01 \x01(\tR\tupdaterId\x12\x17\n" + + "\ahost_id\x18\x02 \x01(\tR\x06hostId\x12\x1a\n" + + "\bhostname\x18\x03 \x01(\tR\bhostname\x12\x18\n" + + "\asuccess\x18\x04 \x01(\bR\asuccess*\xa1\x02\n" + "\x19AutoUpdateAgentGroupState\x12-\n" + ")AUTO_UPDATE_AGENT_GROUP_STATE_UNSPECIFIED\x10\x00\x12+\n" + "'AUTO_UPDATE_AGENT_GROUP_STATE_UNSTARTED\x10\x01\x12(\n" + "$AUTO_UPDATE_AGENT_GROUP_STATE_ACTIVE\x10\x02\x12&\n" + "\"AUTO_UPDATE_AGENT_GROUP_STATE_DONE\x10\x03\x12,\n" + - "(AUTO_UPDATE_AGENT_GROUP_STATE_ROLLEDBACK\x10\x04*\x83\x02\n" + + "(AUTO_UPDATE_AGENT_GROUP_STATE_ROLLEDBACK\x10\x04\x12(\n" + + "$AUTO_UPDATE_AGENT_GROUP_STATE_CANARY\x10\x05*\x83\x02\n" + "\x1bAutoUpdateAgentRolloutState\x12/\n" + "+AUTO_UPDATE_AGENT_ROLLOUT_STATE_UNSPECIFIED\x10\x00\x12-\n" + ")AUTO_UPDATE_AGENT_ROLLOUT_STATE_UNSTARTED\x10\x01\x12*\n" + @@ -1634,71 +2007,86 @@ func file_teleport_autoupdate_v1_autoupdate_proto_rawDescGZIP() []byte { } var file_teleport_autoupdate_v1_autoupdate_proto_enumTypes = make([]protoimpl.EnumInfo, 2) -var 
file_teleport_autoupdate_v1_autoupdate_proto_msgTypes = make([]protoimpl.MessageInfo, 21) +var file_teleport_autoupdate_v1_autoupdate_proto_msgTypes = make([]protoimpl.MessageInfo, 28) var file_teleport_autoupdate_v1_autoupdate_proto_goTypes = []any{ - (AutoUpdateAgentGroupState)(0), // 0: teleport.autoupdate.v1.AutoUpdateAgentGroupState - (AutoUpdateAgentRolloutState)(0), // 1: teleport.autoupdate.v1.AutoUpdateAgentRolloutState - (*AutoUpdateConfig)(nil), // 2: teleport.autoupdate.v1.AutoUpdateConfig - (*AutoUpdateConfigSpec)(nil), // 3: teleport.autoupdate.v1.AutoUpdateConfigSpec - (*AutoUpdateConfigSpecTools)(nil), // 4: teleport.autoupdate.v1.AutoUpdateConfigSpecTools - (*AutoUpdateConfigSpecAgents)(nil), // 5: teleport.autoupdate.v1.AutoUpdateConfigSpecAgents - (*AgentAutoUpdateSchedules)(nil), // 6: teleport.autoupdate.v1.AgentAutoUpdateSchedules - (*AgentAutoUpdateGroup)(nil), // 7: teleport.autoupdate.v1.AgentAutoUpdateGroup - (*AutoUpdateVersion)(nil), // 8: teleport.autoupdate.v1.AutoUpdateVersion - (*AutoUpdateVersionSpec)(nil), // 9: teleport.autoupdate.v1.AutoUpdateVersionSpec - (*AutoUpdateVersionSpecTools)(nil), // 10: teleport.autoupdate.v1.AutoUpdateVersionSpecTools - (*AutoUpdateVersionSpecAgents)(nil), // 11: teleport.autoupdate.v1.AutoUpdateVersionSpecAgents - (*AutoUpdateAgentRollout)(nil), // 12: teleport.autoupdate.v1.AutoUpdateAgentRollout - (*AutoUpdateAgentRolloutSpec)(nil), // 13: teleport.autoupdate.v1.AutoUpdateAgentRolloutSpec - (*AutoUpdateAgentRolloutStatus)(nil), // 14: teleport.autoupdate.v1.AutoUpdateAgentRolloutStatus - (*AutoUpdateAgentRolloutStatusGroup)(nil), // 15: teleport.autoupdate.v1.AutoUpdateAgentRolloutStatusGroup - (*AutoUpdateAgentReport)(nil), // 16: teleport.autoupdate.v1.AutoUpdateAgentReport - (*AutoUpdateAgentReportSpec)(nil), // 17: teleport.autoupdate.v1.AutoUpdateAgentReportSpec - (*AutoUpdateAgentReportSpecGroup)(nil), // 18: teleport.autoupdate.v1.AutoUpdateAgentReportSpecGroup - 
(*AutoUpdateAgentReportSpecGroupVersion)(nil), // 19: teleport.autoupdate.v1.AutoUpdateAgentReportSpecGroupVersion - (*AutoUpdateAgentReportSpecOmitted)(nil), // 20: teleport.autoupdate.v1.AutoUpdateAgentReportSpecOmitted - nil, // 21: teleport.autoupdate.v1.AutoUpdateAgentReportSpec.GroupsEntry - nil, // 22: teleport.autoupdate.v1.AutoUpdateAgentReportSpecGroup.VersionsEntry - (*v1.Metadata)(nil), // 23: teleport.header.v1.Metadata - (*durationpb.Duration)(nil), // 24: google.protobuf.Duration - (*timestamppb.Timestamp)(nil), // 25: google.protobuf.Timestamp + (AutoUpdateAgentGroupState)(0), // 0: teleport.autoupdate.v1.AutoUpdateAgentGroupState + (AutoUpdateAgentRolloutState)(0), // 1: teleport.autoupdate.v1.AutoUpdateAgentRolloutState + (*AutoUpdateConfig)(nil), // 2: teleport.autoupdate.v1.AutoUpdateConfig + (*AutoUpdateConfigSpec)(nil), // 3: teleport.autoupdate.v1.AutoUpdateConfigSpec + (*AutoUpdateConfigSpecTools)(nil), // 4: teleport.autoupdate.v1.AutoUpdateConfigSpecTools + (*AutoUpdateConfigSpecAgents)(nil), // 5: teleport.autoupdate.v1.AutoUpdateConfigSpecAgents + (*AgentAutoUpdateSchedules)(nil), // 6: teleport.autoupdate.v1.AgentAutoUpdateSchedules + (*AgentAutoUpdateGroup)(nil), // 7: teleport.autoupdate.v1.AgentAutoUpdateGroup + (*AutoUpdateVersion)(nil), // 8: teleport.autoupdate.v1.AutoUpdateVersion + (*AutoUpdateVersionSpec)(nil), // 9: teleport.autoupdate.v1.AutoUpdateVersionSpec + (*AutoUpdateVersionSpecTools)(nil), // 10: teleport.autoupdate.v1.AutoUpdateVersionSpecTools + (*AutoUpdateVersionSpecAgents)(nil), // 11: teleport.autoupdate.v1.AutoUpdateVersionSpecAgents + (*AutoUpdateAgentRollout)(nil), // 12: teleport.autoupdate.v1.AutoUpdateAgentRollout + (*AutoUpdateAgentRolloutSpec)(nil), // 13: teleport.autoupdate.v1.AutoUpdateAgentRolloutSpec + (*AutoUpdateAgentRolloutStatus)(nil), // 14: teleport.autoupdate.v1.AutoUpdateAgentRolloutStatus + (*AutoUpdateAgentRolloutStatusGroup)(nil), // 15: 
teleport.autoupdate.v1.AutoUpdateAgentRolloutStatusGroup + (*AutoUpdateAgentReport)(nil), // 16: teleport.autoupdate.v1.AutoUpdateAgentReport + (*AutoUpdateAgentReportSpec)(nil), // 17: teleport.autoupdate.v1.AutoUpdateAgentReportSpec + (*AutoUpdateAgentReportSpecGroup)(nil), // 18: teleport.autoupdate.v1.AutoUpdateAgentReportSpecGroup + (*AutoUpdateAgentReportSpecGroupVersion)(nil), // 19: teleport.autoupdate.v1.AutoUpdateAgentReportSpecGroupVersion + (*AutoUpdateAgentReportSpecOmitted)(nil), // 20: teleport.autoupdate.v1.AutoUpdateAgentReportSpecOmitted + (*AutoUpdateBotInstanceReport)(nil), // 21: teleport.autoupdate.v1.AutoUpdateBotInstanceReport + (*AutoUpdateBotInstanceReportSpec)(nil), // 22: teleport.autoupdate.v1.AutoUpdateBotInstanceReportSpec + (*AutoUpdateBotInstanceReportSpecGroup)(nil), // 23: teleport.autoupdate.v1.AutoUpdateBotInstanceReportSpecGroup + (*AutoUpdateBotInstanceReportSpecGroupVersion)(nil), // 24: teleport.autoupdate.v1.AutoUpdateBotInstanceReportSpecGroupVersion + (*Canary)(nil), // 25: teleport.autoupdate.v1.Canary + nil, // 26: teleport.autoupdate.v1.AutoUpdateAgentReportSpec.GroupsEntry + nil, // 27: teleport.autoupdate.v1.AutoUpdateAgentReportSpecGroup.VersionsEntry + nil, // 28: teleport.autoupdate.v1.AutoUpdateBotInstanceReportSpec.GroupsEntry + nil, // 29: teleport.autoupdate.v1.AutoUpdateBotInstanceReportSpecGroup.VersionsEntry + (*v1.Metadata)(nil), // 30: teleport.header.v1.Metadata + (*durationpb.Duration)(nil), // 31: google.protobuf.Duration + (*timestamppb.Timestamp)(nil), // 32: google.protobuf.Timestamp } var file_teleport_autoupdate_v1_autoupdate_proto_depIdxs = []int32{ - 23, // 0: teleport.autoupdate.v1.AutoUpdateConfig.metadata:type_name -> teleport.header.v1.Metadata + 30, // 0: teleport.autoupdate.v1.AutoUpdateConfig.metadata:type_name -> teleport.header.v1.Metadata 3, // 1: teleport.autoupdate.v1.AutoUpdateConfig.spec:type_name -> teleport.autoupdate.v1.AutoUpdateConfigSpec 4, // 2: 
teleport.autoupdate.v1.AutoUpdateConfigSpec.tools:type_name -> teleport.autoupdate.v1.AutoUpdateConfigSpecTools 5, // 3: teleport.autoupdate.v1.AutoUpdateConfigSpec.agents:type_name -> teleport.autoupdate.v1.AutoUpdateConfigSpecAgents - 24, // 4: teleport.autoupdate.v1.AutoUpdateConfigSpecAgents.maintenance_window_duration:type_name -> google.protobuf.Duration + 31, // 4: teleport.autoupdate.v1.AutoUpdateConfigSpecAgents.maintenance_window_duration:type_name -> google.protobuf.Duration 6, // 5: teleport.autoupdate.v1.AutoUpdateConfigSpecAgents.schedules:type_name -> teleport.autoupdate.v1.AgentAutoUpdateSchedules 7, // 6: teleport.autoupdate.v1.AgentAutoUpdateSchedules.regular:type_name -> teleport.autoupdate.v1.AgentAutoUpdateGroup - 23, // 7: teleport.autoupdate.v1.AutoUpdateVersion.metadata:type_name -> teleport.header.v1.Metadata + 30, // 7: teleport.autoupdate.v1.AutoUpdateVersion.metadata:type_name -> teleport.header.v1.Metadata 9, // 8: teleport.autoupdate.v1.AutoUpdateVersion.spec:type_name -> teleport.autoupdate.v1.AutoUpdateVersionSpec 10, // 9: teleport.autoupdate.v1.AutoUpdateVersionSpec.tools:type_name -> teleport.autoupdate.v1.AutoUpdateVersionSpecTools 11, // 10: teleport.autoupdate.v1.AutoUpdateVersionSpec.agents:type_name -> teleport.autoupdate.v1.AutoUpdateVersionSpecAgents - 23, // 11: teleport.autoupdate.v1.AutoUpdateAgentRollout.metadata:type_name -> teleport.header.v1.Metadata + 30, // 11: teleport.autoupdate.v1.AutoUpdateAgentRollout.metadata:type_name -> teleport.header.v1.Metadata 13, // 12: teleport.autoupdate.v1.AutoUpdateAgentRollout.spec:type_name -> teleport.autoupdate.v1.AutoUpdateAgentRolloutSpec 14, // 13: teleport.autoupdate.v1.AutoUpdateAgentRollout.status:type_name -> teleport.autoupdate.v1.AutoUpdateAgentRolloutStatus - 24, // 14: teleport.autoupdate.v1.AutoUpdateAgentRolloutSpec.maintenance_window_duration:type_name -> google.protobuf.Duration + 31, // 14: 
teleport.autoupdate.v1.AutoUpdateAgentRolloutSpec.maintenance_window_duration:type_name -> google.protobuf.Duration 15, // 15: teleport.autoupdate.v1.AutoUpdateAgentRolloutStatus.groups:type_name -> teleport.autoupdate.v1.AutoUpdateAgentRolloutStatusGroup 1, // 16: teleport.autoupdate.v1.AutoUpdateAgentRolloutStatus.state:type_name -> teleport.autoupdate.v1.AutoUpdateAgentRolloutState - 25, // 17: teleport.autoupdate.v1.AutoUpdateAgentRolloutStatus.start_time:type_name -> google.protobuf.Timestamp - 25, // 18: teleport.autoupdate.v1.AutoUpdateAgentRolloutStatus.time_override:type_name -> google.protobuf.Timestamp - 25, // 19: teleport.autoupdate.v1.AutoUpdateAgentRolloutStatusGroup.start_time:type_name -> google.protobuf.Timestamp + 32, // 17: teleport.autoupdate.v1.AutoUpdateAgentRolloutStatus.start_time:type_name -> google.protobuf.Timestamp + 32, // 18: teleport.autoupdate.v1.AutoUpdateAgentRolloutStatus.time_override:type_name -> google.protobuf.Timestamp + 32, // 19: teleport.autoupdate.v1.AutoUpdateAgentRolloutStatusGroup.start_time:type_name -> google.protobuf.Timestamp 0, // 20: teleport.autoupdate.v1.AutoUpdateAgentRolloutStatusGroup.state:type_name -> teleport.autoupdate.v1.AutoUpdateAgentGroupState - 25, // 21: teleport.autoupdate.v1.AutoUpdateAgentRolloutStatusGroup.last_update_time:type_name -> google.protobuf.Timestamp - 23, // 22: teleport.autoupdate.v1.AutoUpdateAgentReport.metadata:type_name -> teleport.header.v1.Metadata - 17, // 23: teleport.autoupdate.v1.AutoUpdateAgentReport.spec:type_name -> teleport.autoupdate.v1.AutoUpdateAgentReportSpec - 25, // 24: teleport.autoupdate.v1.AutoUpdateAgentReportSpec.timestamp:type_name -> google.protobuf.Timestamp - 21, // 25: teleport.autoupdate.v1.AutoUpdateAgentReportSpec.groups:type_name -> teleport.autoupdate.v1.AutoUpdateAgentReportSpec.GroupsEntry - 20, // 26: teleport.autoupdate.v1.AutoUpdateAgentReportSpec.omitted:type_name -> teleport.autoupdate.v1.AutoUpdateAgentReportSpecOmitted - 22, // 27: 
teleport.autoupdate.v1.AutoUpdateAgentReportSpecGroup.versions:type_name -> teleport.autoupdate.v1.AutoUpdateAgentReportSpecGroup.VersionsEntry - 18, // 28: teleport.autoupdate.v1.AutoUpdateAgentReportSpec.GroupsEntry.value:type_name -> teleport.autoupdate.v1.AutoUpdateAgentReportSpecGroup - 19, // 29: teleport.autoupdate.v1.AutoUpdateAgentReportSpecGroup.VersionsEntry.value:type_name -> teleport.autoupdate.v1.AutoUpdateAgentReportSpecGroupVersion - 30, // [30:30] is the sub-list for method output_type - 30, // [30:30] is the sub-list for method input_type - 30, // [30:30] is the sub-list for extension type_name - 30, // [30:30] is the sub-list for extension extendee - 0, // [0:30] is the sub-list for field type_name + 32, // 21: teleport.autoupdate.v1.AutoUpdateAgentRolloutStatusGroup.last_update_time:type_name -> google.protobuf.Timestamp + 25, // 22: teleport.autoupdate.v1.AutoUpdateAgentRolloutStatusGroup.canaries:type_name -> teleport.autoupdate.v1.Canary + 30, // 23: teleport.autoupdate.v1.AutoUpdateAgentReport.metadata:type_name -> teleport.header.v1.Metadata + 17, // 24: teleport.autoupdate.v1.AutoUpdateAgentReport.spec:type_name -> teleport.autoupdate.v1.AutoUpdateAgentReportSpec + 32, // 25: teleport.autoupdate.v1.AutoUpdateAgentReportSpec.timestamp:type_name -> google.protobuf.Timestamp + 26, // 26: teleport.autoupdate.v1.AutoUpdateAgentReportSpec.groups:type_name -> teleport.autoupdate.v1.AutoUpdateAgentReportSpec.GroupsEntry + 20, // 27: teleport.autoupdate.v1.AutoUpdateAgentReportSpec.omitted:type_name -> teleport.autoupdate.v1.AutoUpdateAgentReportSpecOmitted + 27, // 28: teleport.autoupdate.v1.AutoUpdateAgentReportSpecGroup.versions:type_name -> teleport.autoupdate.v1.AutoUpdateAgentReportSpecGroup.VersionsEntry + 30, // 29: teleport.autoupdate.v1.AutoUpdateBotInstanceReport.metadata:type_name -> teleport.header.v1.Metadata + 22, // 30: teleport.autoupdate.v1.AutoUpdateBotInstanceReport.spec:type_name -> 
teleport.autoupdate.v1.AutoUpdateBotInstanceReportSpec + 32, // 31: teleport.autoupdate.v1.AutoUpdateBotInstanceReportSpec.timestamp:type_name -> google.protobuf.Timestamp + 28, // 32: teleport.autoupdate.v1.AutoUpdateBotInstanceReportSpec.groups:type_name -> teleport.autoupdate.v1.AutoUpdateBotInstanceReportSpec.GroupsEntry + 29, // 33: teleport.autoupdate.v1.AutoUpdateBotInstanceReportSpecGroup.versions:type_name -> teleport.autoupdate.v1.AutoUpdateBotInstanceReportSpecGroup.VersionsEntry + 18, // 34: teleport.autoupdate.v1.AutoUpdateAgentReportSpec.GroupsEntry.value:type_name -> teleport.autoupdate.v1.AutoUpdateAgentReportSpecGroup + 19, // 35: teleport.autoupdate.v1.AutoUpdateAgentReportSpecGroup.VersionsEntry.value:type_name -> teleport.autoupdate.v1.AutoUpdateAgentReportSpecGroupVersion + 23, // 36: teleport.autoupdate.v1.AutoUpdateBotInstanceReportSpec.GroupsEntry.value:type_name -> teleport.autoupdate.v1.AutoUpdateBotInstanceReportSpecGroup + 24, // 37: teleport.autoupdate.v1.AutoUpdateBotInstanceReportSpecGroup.VersionsEntry.value:type_name -> teleport.autoupdate.v1.AutoUpdateBotInstanceReportSpecGroupVersion + 38, // [38:38] is the sub-list for method output_type + 38, // [38:38] is the sub-list for method input_type + 38, // [38:38] is the sub-list for extension type_name + 38, // [38:38] is the sub-list for extension extendee + 0, // [0:38] is the sub-list for field type_name } func init() { file_teleport_autoupdate_v1_autoupdate_proto_init() } @@ -1712,7 +2100,7 @@ func file_teleport_autoupdate_v1_autoupdate_proto_init() { GoPackagePath: reflect.TypeOf(x{}).PkgPath(), RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_autoupdate_v1_autoupdate_proto_rawDesc), len(file_teleport_autoupdate_v1_autoupdate_proto_rawDesc)), NumEnums: 2, - NumMessages: 21, + NumMessages: 28, NumExtensions: 0, NumServices: 0, }, diff --git a/api/gen/proto/go/teleport/autoupdate/v1/autoupdate_service.pb.go 
b/api/gen/proto/go/teleport/autoupdate/v1/autoupdate_service.pb.go index cc4e56af6df32..61578cf222afe 100644 --- a/api/gen/proto/go/teleport/autoupdate/v1/autoupdate_service.pb.go +++ b/api/gen/proto/go/teleport/autoupdate/v1/autoupdate_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/autoupdate/v1/autoupdate_service.proto @@ -1173,6 +1173,80 @@ func (x *DeleteAutoUpdateAgentReportRequest) GetName() string { return "" } +// Request for GetAutoUpdateBotInstanceReport. +type GetAutoUpdateBotInstanceReportRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetAutoUpdateBotInstanceReportRequest) Reset() { + *x = GetAutoUpdateBotInstanceReportRequest{} + mi := &file_teleport_autoupdate_v1_autoupdate_service_proto_msgTypes[25] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetAutoUpdateBotInstanceReportRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetAutoUpdateBotInstanceReportRequest) ProtoMessage() {} + +func (x *GetAutoUpdateBotInstanceReportRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_autoupdate_v1_autoupdate_service_proto_msgTypes[25] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetAutoUpdateBotInstanceReportRequest.ProtoReflect.Descriptor instead. +func (*GetAutoUpdateBotInstanceReportRequest) Descriptor() ([]byte, []int) { + return file_teleport_autoupdate_v1_autoupdate_service_proto_rawDescGZIP(), []int{25} +} + +// Request for DeleteAutoUpdateBotInstanceReport. 
+type DeleteAutoUpdateBotInstanceReportRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *DeleteAutoUpdateBotInstanceReportRequest) Reset() { + *x = DeleteAutoUpdateBotInstanceReportRequest{} + mi := &file_teleport_autoupdate_v1_autoupdate_service_proto_msgTypes[26] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *DeleteAutoUpdateBotInstanceReportRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*DeleteAutoUpdateBotInstanceReportRequest) ProtoMessage() {} + +func (x *DeleteAutoUpdateBotInstanceReportRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_autoupdate_v1_autoupdate_service_proto_msgTypes[26] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use DeleteAutoUpdateBotInstanceReportRequest.ProtoReflect.Descriptor instead. 
+func (*DeleteAutoUpdateBotInstanceReportRequest) Descriptor() ([]byte, []int) { + return file_teleport_autoupdate_v1_autoupdate_service_proto_rawDescGZIP(), []int{26} +} + var File_teleport_autoupdate_v1_autoupdate_service_proto protoreflect.FileDescriptor const file_teleport_autoupdate_v1_autoupdate_service_proto_rawDesc = "" + @@ -1228,7 +1302,9 @@ const file_teleport_autoupdate_v1_autoupdate_service_proto_rawDesc = "" + "\"UpsertAutoUpdateAgentReportRequest\x12e\n" + "\x17autoupdate_agent_report\x18\x01 \x01(\v2-.teleport.autoupdate.v1.AutoUpdateAgentReportR\x15autoupdateAgentReport\"8\n" + "\"DeleteAutoUpdateAgentReportRequest\x12\x12\n" + - "\x04name\x18\x01 \x01(\tR\x04name2\xb7\x18\n" + + "\x04name\x18\x01 \x01(\tR\x04name\"'\n" + + "%GetAutoUpdateBotInstanceReportRequest\"*\n" + + "(DeleteAutoUpdateBotInstanceReportRequest2\xcd\x1a\n" + "\x11AutoUpdateService\x12s\n" + "\x13GetAutoUpdateConfig\x122.teleport.autoupdate.v1.GetAutoUpdateConfigRequest\x1a(.teleport.autoupdate.v1.AutoUpdateConfig\x12y\n" + "\x16CreateAutoUpdateConfig\x125.teleport.autoupdate.v1.CreateAutoUpdateConfigRequest\x1a(.teleport.autoupdate.v1.AutoUpdateConfig\x12y\n" + @@ -1253,7 +1329,9 @@ const file_teleport_autoupdate_v1_autoupdate_service_proto_rawDesc = "" + "\x1bCreateAutoUpdateAgentReport\x12:.teleport.autoupdate.v1.CreateAutoUpdateAgentReportRequest\x1a-.teleport.autoupdate.v1.AutoUpdateAgentReport\x12\x88\x01\n" + "\x1bUpdateAutoUpdateAgentReport\x12:.teleport.autoupdate.v1.UpdateAutoUpdateAgentReportRequest\x1a-.teleport.autoupdate.v1.AutoUpdateAgentReport\x12\x88\x01\n" + "\x1bUpsertAutoUpdateAgentReport\x12:.teleport.autoupdate.v1.UpsertAutoUpdateAgentReportRequest\x1a-.teleport.autoupdate.v1.AutoUpdateAgentReport\x12q\n" + - "\x1bDeleteAutoUpdateAgentReport\x12:.teleport.autoupdate.v1.DeleteAutoUpdateAgentReportRequest\x1a\x16.google.protobuf.EmptyBVZTgithub.com/gravitational/teleport/api/gen/proto/go/teleport/autoupdate/v1;autoupdateb\x06proto3" + 
"\x1bDeleteAutoUpdateAgentReport\x12:.teleport.autoupdate.v1.DeleteAutoUpdateAgentReportRequest\x1a\x16.google.protobuf.Empty\x12\x94\x01\n" + + "\x1eGetAutoUpdateBotInstanceReport\x12=.teleport.autoupdate.v1.GetAutoUpdateBotInstanceReportRequest\x1a3.teleport.autoupdate.v1.AutoUpdateBotInstanceReport\x12}\n" + + "!DeleteAutoUpdateBotInstanceReport\x12@.teleport.autoupdate.v1.DeleteAutoUpdateBotInstanceReportRequest\x1a\x16.google.protobuf.EmptyBVZTgithub.com/gravitational/teleport/api/gen/proto/go/teleport/autoupdate/v1;autoupdateb\x06proto3" var ( file_teleport_autoupdate_v1_autoupdate_service_proto_rawDescOnce sync.Once @@ -1267,55 +1345,58 @@ func file_teleport_autoupdate_v1_autoupdate_service_proto_rawDescGZIP() []byte { return file_teleport_autoupdate_v1_autoupdate_service_proto_rawDescData } -var file_teleport_autoupdate_v1_autoupdate_service_proto_msgTypes = make([]protoimpl.MessageInfo, 25) +var file_teleport_autoupdate_v1_autoupdate_service_proto_msgTypes = make([]protoimpl.MessageInfo, 27) var file_teleport_autoupdate_v1_autoupdate_service_proto_goTypes = []any{ - (*GetAutoUpdateConfigRequest)(nil), // 0: teleport.autoupdate.v1.GetAutoUpdateConfigRequest - (*CreateAutoUpdateConfigRequest)(nil), // 1: teleport.autoupdate.v1.CreateAutoUpdateConfigRequest - (*UpdateAutoUpdateConfigRequest)(nil), // 2: teleport.autoupdate.v1.UpdateAutoUpdateConfigRequest - (*UpsertAutoUpdateConfigRequest)(nil), // 3: teleport.autoupdate.v1.UpsertAutoUpdateConfigRequest - (*DeleteAutoUpdateConfigRequest)(nil), // 4: teleport.autoupdate.v1.DeleteAutoUpdateConfigRequest - (*GetAutoUpdateVersionRequest)(nil), // 5: teleport.autoupdate.v1.GetAutoUpdateVersionRequest - (*CreateAutoUpdateVersionRequest)(nil), // 6: teleport.autoupdate.v1.CreateAutoUpdateVersionRequest - (*UpdateAutoUpdateVersionRequest)(nil), // 7: teleport.autoupdate.v1.UpdateAutoUpdateVersionRequest - (*UpsertAutoUpdateVersionRequest)(nil), // 8: teleport.autoupdate.v1.UpsertAutoUpdateVersionRequest - 
(*DeleteAutoUpdateVersionRequest)(nil), // 9: teleport.autoupdate.v1.DeleteAutoUpdateVersionRequest - (*GetAutoUpdateAgentRolloutRequest)(nil), // 10: teleport.autoupdate.v1.GetAutoUpdateAgentRolloutRequest - (*CreateAutoUpdateAgentRolloutRequest)(nil), // 11: teleport.autoupdate.v1.CreateAutoUpdateAgentRolloutRequest - (*UpdateAutoUpdateAgentRolloutRequest)(nil), // 12: teleport.autoupdate.v1.UpdateAutoUpdateAgentRolloutRequest - (*UpsertAutoUpdateAgentRolloutRequest)(nil), // 13: teleport.autoupdate.v1.UpsertAutoUpdateAgentRolloutRequest - (*DeleteAutoUpdateAgentRolloutRequest)(nil), // 14: teleport.autoupdate.v1.DeleteAutoUpdateAgentRolloutRequest - (*TriggerAutoUpdateAgentGroupRequest)(nil), // 15: teleport.autoupdate.v1.TriggerAutoUpdateAgentGroupRequest - (*ForceAutoUpdateAgentGroupRequest)(nil), // 16: teleport.autoupdate.v1.ForceAutoUpdateAgentGroupRequest - (*RollbackAutoUpdateAgentGroupRequest)(nil), // 17: teleport.autoupdate.v1.RollbackAutoUpdateAgentGroupRequest - (*ListAutoUpdateAgentReportsRequest)(nil), // 18: teleport.autoupdate.v1.ListAutoUpdateAgentReportsRequest - (*ListAutoUpdateAgentReportsResponse)(nil), // 19: teleport.autoupdate.v1.ListAutoUpdateAgentReportsResponse - (*GetAutoUpdateAgentReportRequest)(nil), // 20: teleport.autoupdate.v1.GetAutoUpdateAgentReportRequest - (*CreateAutoUpdateAgentReportRequest)(nil), // 21: teleport.autoupdate.v1.CreateAutoUpdateAgentReportRequest - (*UpdateAutoUpdateAgentReportRequest)(nil), // 22: teleport.autoupdate.v1.UpdateAutoUpdateAgentReportRequest - (*UpsertAutoUpdateAgentReportRequest)(nil), // 23: teleport.autoupdate.v1.UpsertAutoUpdateAgentReportRequest - (*DeleteAutoUpdateAgentReportRequest)(nil), // 24: teleport.autoupdate.v1.DeleteAutoUpdateAgentReportRequest - (*AutoUpdateConfig)(nil), // 25: teleport.autoupdate.v1.AutoUpdateConfig - (*AutoUpdateVersion)(nil), // 26: teleport.autoupdate.v1.AutoUpdateVersion - (*AutoUpdateAgentRollout)(nil), // 27: teleport.autoupdate.v1.AutoUpdateAgentRollout - 
(AutoUpdateAgentGroupState)(0), // 28: teleport.autoupdate.v1.AutoUpdateAgentGroupState - (*AutoUpdateAgentReport)(nil), // 29: teleport.autoupdate.v1.AutoUpdateAgentReport - (*emptypb.Empty)(nil), // 30: google.protobuf.Empty + (*GetAutoUpdateConfigRequest)(nil), // 0: teleport.autoupdate.v1.GetAutoUpdateConfigRequest + (*CreateAutoUpdateConfigRequest)(nil), // 1: teleport.autoupdate.v1.CreateAutoUpdateConfigRequest + (*UpdateAutoUpdateConfigRequest)(nil), // 2: teleport.autoupdate.v1.UpdateAutoUpdateConfigRequest + (*UpsertAutoUpdateConfigRequest)(nil), // 3: teleport.autoupdate.v1.UpsertAutoUpdateConfigRequest + (*DeleteAutoUpdateConfigRequest)(nil), // 4: teleport.autoupdate.v1.DeleteAutoUpdateConfigRequest + (*GetAutoUpdateVersionRequest)(nil), // 5: teleport.autoupdate.v1.GetAutoUpdateVersionRequest + (*CreateAutoUpdateVersionRequest)(nil), // 6: teleport.autoupdate.v1.CreateAutoUpdateVersionRequest + (*UpdateAutoUpdateVersionRequest)(nil), // 7: teleport.autoupdate.v1.UpdateAutoUpdateVersionRequest + (*UpsertAutoUpdateVersionRequest)(nil), // 8: teleport.autoupdate.v1.UpsertAutoUpdateVersionRequest + (*DeleteAutoUpdateVersionRequest)(nil), // 9: teleport.autoupdate.v1.DeleteAutoUpdateVersionRequest + (*GetAutoUpdateAgentRolloutRequest)(nil), // 10: teleport.autoupdate.v1.GetAutoUpdateAgentRolloutRequest + (*CreateAutoUpdateAgentRolloutRequest)(nil), // 11: teleport.autoupdate.v1.CreateAutoUpdateAgentRolloutRequest + (*UpdateAutoUpdateAgentRolloutRequest)(nil), // 12: teleport.autoupdate.v1.UpdateAutoUpdateAgentRolloutRequest + (*UpsertAutoUpdateAgentRolloutRequest)(nil), // 13: teleport.autoupdate.v1.UpsertAutoUpdateAgentRolloutRequest + (*DeleteAutoUpdateAgentRolloutRequest)(nil), // 14: teleport.autoupdate.v1.DeleteAutoUpdateAgentRolloutRequest + (*TriggerAutoUpdateAgentGroupRequest)(nil), // 15: teleport.autoupdate.v1.TriggerAutoUpdateAgentGroupRequest + (*ForceAutoUpdateAgentGroupRequest)(nil), // 16: 
teleport.autoupdate.v1.ForceAutoUpdateAgentGroupRequest + (*RollbackAutoUpdateAgentGroupRequest)(nil), // 17: teleport.autoupdate.v1.RollbackAutoUpdateAgentGroupRequest + (*ListAutoUpdateAgentReportsRequest)(nil), // 18: teleport.autoupdate.v1.ListAutoUpdateAgentReportsRequest + (*ListAutoUpdateAgentReportsResponse)(nil), // 19: teleport.autoupdate.v1.ListAutoUpdateAgentReportsResponse + (*GetAutoUpdateAgentReportRequest)(nil), // 20: teleport.autoupdate.v1.GetAutoUpdateAgentReportRequest + (*CreateAutoUpdateAgentReportRequest)(nil), // 21: teleport.autoupdate.v1.CreateAutoUpdateAgentReportRequest + (*UpdateAutoUpdateAgentReportRequest)(nil), // 22: teleport.autoupdate.v1.UpdateAutoUpdateAgentReportRequest + (*UpsertAutoUpdateAgentReportRequest)(nil), // 23: teleport.autoupdate.v1.UpsertAutoUpdateAgentReportRequest + (*DeleteAutoUpdateAgentReportRequest)(nil), // 24: teleport.autoupdate.v1.DeleteAutoUpdateAgentReportRequest + (*GetAutoUpdateBotInstanceReportRequest)(nil), // 25: teleport.autoupdate.v1.GetAutoUpdateBotInstanceReportRequest + (*DeleteAutoUpdateBotInstanceReportRequest)(nil), // 26: teleport.autoupdate.v1.DeleteAutoUpdateBotInstanceReportRequest + (*AutoUpdateConfig)(nil), // 27: teleport.autoupdate.v1.AutoUpdateConfig + (*AutoUpdateVersion)(nil), // 28: teleport.autoupdate.v1.AutoUpdateVersion + (*AutoUpdateAgentRollout)(nil), // 29: teleport.autoupdate.v1.AutoUpdateAgentRollout + (AutoUpdateAgentGroupState)(0), // 30: teleport.autoupdate.v1.AutoUpdateAgentGroupState + (*AutoUpdateAgentReport)(nil), // 31: teleport.autoupdate.v1.AutoUpdateAgentReport + (*emptypb.Empty)(nil), // 32: google.protobuf.Empty + (*AutoUpdateBotInstanceReport)(nil), // 33: teleport.autoupdate.v1.AutoUpdateBotInstanceReport } var file_teleport_autoupdate_v1_autoupdate_service_proto_depIdxs = []int32{ - 25, // 0: teleport.autoupdate.v1.CreateAutoUpdateConfigRequest.config:type_name -> teleport.autoupdate.v1.AutoUpdateConfig - 25, // 1: 
teleport.autoupdate.v1.UpdateAutoUpdateConfigRequest.config:type_name -> teleport.autoupdate.v1.AutoUpdateConfig - 25, // 2: teleport.autoupdate.v1.UpsertAutoUpdateConfigRequest.config:type_name -> teleport.autoupdate.v1.AutoUpdateConfig - 26, // 3: teleport.autoupdate.v1.CreateAutoUpdateVersionRequest.version:type_name -> teleport.autoupdate.v1.AutoUpdateVersion - 26, // 4: teleport.autoupdate.v1.UpdateAutoUpdateVersionRequest.version:type_name -> teleport.autoupdate.v1.AutoUpdateVersion - 26, // 5: teleport.autoupdate.v1.UpsertAutoUpdateVersionRequest.version:type_name -> teleport.autoupdate.v1.AutoUpdateVersion - 27, // 6: teleport.autoupdate.v1.CreateAutoUpdateAgentRolloutRequest.rollout:type_name -> teleport.autoupdate.v1.AutoUpdateAgentRollout - 27, // 7: teleport.autoupdate.v1.UpdateAutoUpdateAgentRolloutRequest.rollout:type_name -> teleport.autoupdate.v1.AutoUpdateAgentRollout - 27, // 8: teleport.autoupdate.v1.UpsertAutoUpdateAgentRolloutRequest.rollout:type_name -> teleport.autoupdate.v1.AutoUpdateAgentRollout - 28, // 9: teleport.autoupdate.v1.TriggerAutoUpdateAgentGroupRequest.desired_state:type_name -> teleport.autoupdate.v1.AutoUpdateAgentGroupState - 29, // 10: teleport.autoupdate.v1.ListAutoUpdateAgentReportsResponse.autoupdate_agent_reports:type_name -> teleport.autoupdate.v1.AutoUpdateAgentReport - 29, // 11: teleport.autoupdate.v1.CreateAutoUpdateAgentReportRequest.autoupdate_agent_report:type_name -> teleport.autoupdate.v1.AutoUpdateAgentReport - 29, // 12: teleport.autoupdate.v1.UpdateAutoUpdateAgentReportRequest.autoupdate_agent_report:type_name -> teleport.autoupdate.v1.AutoUpdateAgentReport - 29, // 13: teleport.autoupdate.v1.UpsertAutoUpdateAgentReportRequest.autoupdate_agent_report:type_name -> teleport.autoupdate.v1.AutoUpdateAgentReport + 27, // 0: teleport.autoupdate.v1.CreateAutoUpdateConfigRequest.config:type_name -> teleport.autoupdate.v1.AutoUpdateConfig + 27, // 1: 
teleport.autoupdate.v1.UpdateAutoUpdateConfigRequest.config:type_name -> teleport.autoupdate.v1.AutoUpdateConfig + 27, // 2: teleport.autoupdate.v1.UpsertAutoUpdateConfigRequest.config:type_name -> teleport.autoupdate.v1.AutoUpdateConfig + 28, // 3: teleport.autoupdate.v1.CreateAutoUpdateVersionRequest.version:type_name -> teleport.autoupdate.v1.AutoUpdateVersion + 28, // 4: teleport.autoupdate.v1.UpdateAutoUpdateVersionRequest.version:type_name -> teleport.autoupdate.v1.AutoUpdateVersion + 28, // 5: teleport.autoupdate.v1.UpsertAutoUpdateVersionRequest.version:type_name -> teleport.autoupdate.v1.AutoUpdateVersion + 29, // 6: teleport.autoupdate.v1.CreateAutoUpdateAgentRolloutRequest.rollout:type_name -> teleport.autoupdate.v1.AutoUpdateAgentRollout + 29, // 7: teleport.autoupdate.v1.UpdateAutoUpdateAgentRolloutRequest.rollout:type_name -> teleport.autoupdate.v1.AutoUpdateAgentRollout + 29, // 8: teleport.autoupdate.v1.UpsertAutoUpdateAgentRolloutRequest.rollout:type_name -> teleport.autoupdate.v1.AutoUpdateAgentRollout + 30, // 9: teleport.autoupdate.v1.TriggerAutoUpdateAgentGroupRequest.desired_state:type_name -> teleport.autoupdate.v1.AutoUpdateAgentGroupState + 31, // 10: teleport.autoupdate.v1.ListAutoUpdateAgentReportsResponse.autoupdate_agent_reports:type_name -> teleport.autoupdate.v1.AutoUpdateAgentReport + 31, // 11: teleport.autoupdate.v1.CreateAutoUpdateAgentReportRequest.autoupdate_agent_report:type_name -> teleport.autoupdate.v1.AutoUpdateAgentReport + 31, // 12: teleport.autoupdate.v1.UpdateAutoUpdateAgentReportRequest.autoupdate_agent_report:type_name -> teleport.autoupdate.v1.AutoUpdateAgentReport + 31, // 13: teleport.autoupdate.v1.UpsertAutoUpdateAgentReportRequest.autoupdate_agent_report:type_name -> teleport.autoupdate.v1.AutoUpdateAgentReport 0, // 14: teleport.autoupdate.v1.AutoUpdateService.GetAutoUpdateConfig:input_type -> teleport.autoupdate.v1.GetAutoUpdateConfigRequest 1, // 15: 
teleport.autoupdate.v1.AutoUpdateService.CreateAutoUpdateConfig:input_type -> teleport.autoupdate.v1.CreateAutoUpdateConfigRequest 2, // 16: teleport.autoupdate.v1.AutoUpdateService.UpdateAutoUpdateConfig:input_type -> teleport.autoupdate.v1.UpdateAutoUpdateConfigRequest @@ -1340,32 +1421,36 @@ var file_teleport_autoupdate_v1_autoupdate_service_proto_depIdxs = []int32{ 22, // 35: teleport.autoupdate.v1.AutoUpdateService.UpdateAutoUpdateAgentReport:input_type -> teleport.autoupdate.v1.UpdateAutoUpdateAgentReportRequest 23, // 36: teleport.autoupdate.v1.AutoUpdateService.UpsertAutoUpdateAgentReport:input_type -> teleport.autoupdate.v1.UpsertAutoUpdateAgentReportRequest 24, // 37: teleport.autoupdate.v1.AutoUpdateService.DeleteAutoUpdateAgentReport:input_type -> teleport.autoupdate.v1.DeleteAutoUpdateAgentReportRequest - 25, // 38: teleport.autoupdate.v1.AutoUpdateService.GetAutoUpdateConfig:output_type -> teleport.autoupdate.v1.AutoUpdateConfig - 25, // 39: teleport.autoupdate.v1.AutoUpdateService.CreateAutoUpdateConfig:output_type -> teleport.autoupdate.v1.AutoUpdateConfig - 25, // 40: teleport.autoupdate.v1.AutoUpdateService.UpdateAutoUpdateConfig:output_type -> teleport.autoupdate.v1.AutoUpdateConfig - 25, // 41: teleport.autoupdate.v1.AutoUpdateService.UpsertAutoUpdateConfig:output_type -> teleport.autoupdate.v1.AutoUpdateConfig - 30, // 42: teleport.autoupdate.v1.AutoUpdateService.DeleteAutoUpdateConfig:output_type -> google.protobuf.Empty - 26, // 43: teleport.autoupdate.v1.AutoUpdateService.GetAutoUpdateVersion:output_type -> teleport.autoupdate.v1.AutoUpdateVersion - 26, // 44: teleport.autoupdate.v1.AutoUpdateService.CreateAutoUpdateVersion:output_type -> teleport.autoupdate.v1.AutoUpdateVersion - 26, // 45: teleport.autoupdate.v1.AutoUpdateService.UpdateAutoUpdateVersion:output_type -> teleport.autoupdate.v1.AutoUpdateVersion - 26, // 46: teleport.autoupdate.v1.AutoUpdateService.UpsertAutoUpdateVersion:output_type -> teleport.autoupdate.v1.AutoUpdateVersion 
- 30, // 47: teleport.autoupdate.v1.AutoUpdateService.DeleteAutoUpdateVersion:output_type -> google.protobuf.Empty - 27, // 48: teleport.autoupdate.v1.AutoUpdateService.GetAutoUpdateAgentRollout:output_type -> teleport.autoupdate.v1.AutoUpdateAgentRollout - 27, // 49: teleport.autoupdate.v1.AutoUpdateService.CreateAutoUpdateAgentRollout:output_type -> teleport.autoupdate.v1.AutoUpdateAgentRollout - 27, // 50: teleport.autoupdate.v1.AutoUpdateService.UpdateAutoUpdateAgentRollout:output_type -> teleport.autoupdate.v1.AutoUpdateAgentRollout - 27, // 51: teleport.autoupdate.v1.AutoUpdateService.UpsertAutoUpdateAgentRollout:output_type -> teleport.autoupdate.v1.AutoUpdateAgentRollout - 30, // 52: teleport.autoupdate.v1.AutoUpdateService.DeleteAutoUpdateAgentRollout:output_type -> google.protobuf.Empty - 27, // 53: teleport.autoupdate.v1.AutoUpdateService.TriggerAutoUpdateAgentGroup:output_type -> teleport.autoupdate.v1.AutoUpdateAgentRollout - 27, // 54: teleport.autoupdate.v1.AutoUpdateService.ForceAutoUpdateAgentGroup:output_type -> teleport.autoupdate.v1.AutoUpdateAgentRollout - 27, // 55: teleport.autoupdate.v1.AutoUpdateService.RollbackAutoUpdateAgentGroup:output_type -> teleport.autoupdate.v1.AutoUpdateAgentRollout - 19, // 56: teleport.autoupdate.v1.AutoUpdateService.ListAutoUpdateAgentReports:output_type -> teleport.autoupdate.v1.ListAutoUpdateAgentReportsResponse - 29, // 57: teleport.autoupdate.v1.AutoUpdateService.GetAutoUpdateAgentReport:output_type -> teleport.autoupdate.v1.AutoUpdateAgentReport - 29, // 58: teleport.autoupdate.v1.AutoUpdateService.CreateAutoUpdateAgentReport:output_type -> teleport.autoupdate.v1.AutoUpdateAgentReport - 29, // 59: teleport.autoupdate.v1.AutoUpdateService.UpdateAutoUpdateAgentReport:output_type -> teleport.autoupdate.v1.AutoUpdateAgentReport - 29, // 60: teleport.autoupdate.v1.AutoUpdateService.UpsertAutoUpdateAgentReport:output_type -> teleport.autoupdate.v1.AutoUpdateAgentReport - 30, // 61: 
teleport.autoupdate.v1.AutoUpdateService.DeleteAutoUpdateAgentReport:output_type -> google.protobuf.Empty - 38, // [38:62] is the sub-list for method output_type - 14, // [14:38] is the sub-list for method input_type + 25, // 38: teleport.autoupdate.v1.AutoUpdateService.GetAutoUpdateBotInstanceReport:input_type -> teleport.autoupdate.v1.GetAutoUpdateBotInstanceReportRequest + 26, // 39: teleport.autoupdate.v1.AutoUpdateService.DeleteAutoUpdateBotInstanceReport:input_type -> teleport.autoupdate.v1.DeleteAutoUpdateBotInstanceReportRequest + 27, // 40: teleport.autoupdate.v1.AutoUpdateService.GetAutoUpdateConfig:output_type -> teleport.autoupdate.v1.AutoUpdateConfig + 27, // 41: teleport.autoupdate.v1.AutoUpdateService.CreateAutoUpdateConfig:output_type -> teleport.autoupdate.v1.AutoUpdateConfig + 27, // 42: teleport.autoupdate.v1.AutoUpdateService.UpdateAutoUpdateConfig:output_type -> teleport.autoupdate.v1.AutoUpdateConfig + 27, // 43: teleport.autoupdate.v1.AutoUpdateService.UpsertAutoUpdateConfig:output_type -> teleport.autoupdate.v1.AutoUpdateConfig + 32, // 44: teleport.autoupdate.v1.AutoUpdateService.DeleteAutoUpdateConfig:output_type -> google.protobuf.Empty + 28, // 45: teleport.autoupdate.v1.AutoUpdateService.GetAutoUpdateVersion:output_type -> teleport.autoupdate.v1.AutoUpdateVersion + 28, // 46: teleport.autoupdate.v1.AutoUpdateService.CreateAutoUpdateVersion:output_type -> teleport.autoupdate.v1.AutoUpdateVersion + 28, // 47: teleport.autoupdate.v1.AutoUpdateService.UpdateAutoUpdateVersion:output_type -> teleport.autoupdate.v1.AutoUpdateVersion + 28, // 48: teleport.autoupdate.v1.AutoUpdateService.UpsertAutoUpdateVersion:output_type -> teleport.autoupdate.v1.AutoUpdateVersion + 32, // 49: teleport.autoupdate.v1.AutoUpdateService.DeleteAutoUpdateVersion:output_type -> google.protobuf.Empty + 29, // 50: teleport.autoupdate.v1.AutoUpdateService.GetAutoUpdateAgentRollout:output_type -> teleport.autoupdate.v1.AutoUpdateAgentRollout + 29, // 51: 
teleport.autoupdate.v1.AutoUpdateService.CreateAutoUpdateAgentRollout:output_type -> teleport.autoupdate.v1.AutoUpdateAgentRollout + 29, // 52: teleport.autoupdate.v1.AutoUpdateService.UpdateAutoUpdateAgentRollout:output_type -> teleport.autoupdate.v1.AutoUpdateAgentRollout + 29, // 53: teleport.autoupdate.v1.AutoUpdateService.UpsertAutoUpdateAgentRollout:output_type -> teleport.autoupdate.v1.AutoUpdateAgentRollout + 32, // 54: teleport.autoupdate.v1.AutoUpdateService.DeleteAutoUpdateAgentRollout:output_type -> google.protobuf.Empty + 29, // 55: teleport.autoupdate.v1.AutoUpdateService.TriggerAutoUpdateAgentGroup:output_type -> teleport.autoupdate.v1.AutoUpdateAgentRollout + 29, // 56: teleport.autoupdate.v1.AutoUpdateService.ForceAutoUpdateAgentGroup:output_type -> teleport.autoupdate.v1.AutoUpdateAgentRollout + 29, // 57: teleport.autoupdate.v1.AutoUpdateService.RollbackAutoUpdateAgentGroup:output_type -> teleport.autoupdate.v1.AutoUpdateAgentRollout + 19, // 58: teleport.autoupdate.v1.AutoUpdateService.ListAutoUpdateAgentReports:output_type -> teleport.autoupdate.v1.ListAutoUpdateAgentReportsResponse + 31, // 59: teleport.autoupdate.v1.AutoUpdateService.GetAutoUpdateAgentReport:output_type -> teleport.autoupdate.v1.AutoUpdateAgentReport + 31, // 60: teleport.autoupdate.v1.AutoUpdateService.CreateAutoUpdateAgentReport:output_type -> teleport.autoupdate.v1.AutoUpdateAgentReport + 31, // 61: teleport.autoupdate.v1.AutoUpdateService.UpdateAutoUpdateAgentReport:output_type -> teleport.autoupdate.v1.AutoUpdateAgentReport + 31, // 62: teleport.autoupdate.v1.AutoUpdateService.UpsertAutoUpdateAgentReport:output_type -> teleport.autoupdate.v1.AutoUpdateAgentReport + 32, // 63: teleport.autoupdate.v1.AutoUpdateService.DeleteAutoUpdateAgentReport:output_type -> google.protobuf.Empty + 33, // 64: teleport.autoupdate.v1.AutoUpdateService.GetAutoUpdateBotInstanceReport:output_type -> teleport.autoupdate.v1.AutoUpdateBotInstanceReport + 32, // 65: 
teleport.autoupdate.v1.AutoUpdateService.DeleteAutoUpdateBotInstanceReport:output_type -> google.protobuf.Empty + 40, // [40:66] is the sub-list for method output_type + 14, // [14:40] is the sub-list for method input_type 14, // [14:14] is the sub-list for extension type_name 14, // [14:14] is the sub-list for extension extendee 0, // [0:14] is the sub-list for field type_name @@ -1383,7 +1468,7 @@ func file_teleport_autoupdate_v1_autoupdate_service_proto_init() { GoPackagePath: reflect.TypeOf(x{}).PkgPath(), RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_autoupdate_v1_autoupdate_service_proto_rawDesc), len(file_teleport_autoupdate_v1_autoupdate_service_proto_rawDesc)), NumEnums: 0, - NumMessages: 25, + NumMessages: 27, NumExtensions: 0, NumServices: 1, }, diff --git a/api/gen/proto/go/teleport/autoupdate/v1/autoupdate_service_grpc.pb.go b/api/gen/proto/go/teleport/autoupdate/v1/autoupdate_service_grpc.pb.go index 7ca9d489f208a..94ac9ed1f28fe 100644 --- a/api/gen/proto/go/teleport/autoupdate/v1/autoupdate_service_grpc.pb.go +++ b/api/gen/proto/go/teleport/autoupdate/v1/autoupdate_service_grpc.pb.go @@ -34,30 +34,32 @@ import ( const _ = grpc.SupportPackageIsVersion9 const ( - AutoUpdateService_GetAutoUpdateConfig_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/GetAutoUpdateConfig" - AutoUpdateService_CreateAutoUpdateConfig_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/CreateAutoUpdateConfig" - AutoUpdateService_UpdateAutoUpdateConfig_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/UpdateAutoUpdateConfig" - AutoUpdateService_UpsertAutoUpdateConfig_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/UpsertAutoUpdateConfig" - AutoUpdateService_DeleteAutoUpdateConfig_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/DeleteAutoUpdateConfig" - AutoUpdateService_GetAutoUpdateVersion_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/GetAutoUpdateVersion" - 
AutoUpdateService_CreateAutoUpdateVersion_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/CreateAutoUpdateVersion" - AutoUpdateService_UpdateAutoUpdateVersion_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/UpdateAutoUpdateVersion" - AutoUpdateService_UpsertAutoUpdateVersion_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/UpsertAutoUpdateVersion" - AutoUpdateService_DeleteAutoUpdateVersion_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/DeleteAutoUpdateVersion" - AutoUpdateService_GetAutoUpdateAgentRollout_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/GetAutoUpdateAgentRollout" - AutoUpdateService_CreateAutoUpdateAgentRollout_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/CreateAutoUpdateAgentRollout" - AutoUpdateService_UpdateAutoUpdateAgentRollout_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/UpdateAutoUpdateAgentRollout" - AutoUpdateService_UpsertAutoUpdateAgentRollout_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/UpsertAutoUpdateAgentRollout" - AutoUpdateService_DeleteAutoUpdateAgentRollout_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/DeleteAutoUpdateAgentRollout" - AutoUpdateService_TriggerAutoUpdateAgentGroup_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/TriggerAutoUpdateAgentGroup" - AutoUpdateService_ForceAutoUpdateAgentGroup_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/ForceAutoUpdateAgentGroup" - AutoUpdateService_RollbackAutoUpdateAgentGroup_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/RollbackAutoUpdateAgentGroup" - AutoUpdateService_ListAutoUpdateAgentReports_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/ListAutoUpdateAgentReports" - AutoUpdateService_GetAutoUpdateAgentReport_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/GetAutoUpdateAgentReport" - AutoUpdateService_CreateAutoUpdateAgentReport_FullMethodName = 
"/teleport.autoupdate.v1.AutoUpdateService/CreateAutoUpdateAgentReport" - AutoUpdateService_UpdateAutoUpdateAgentReport_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/UpdateAutoUpdateAgentReport" - AutoUpdateService_UpsertAutoUpdateAgentReport_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/UpsertAutoUpdateAgentReport" - AutoUpdateService_DeleteAutoUpdateAgentReport_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/DeleteAutoUpdateAgentReport" + AutoUpdateService_GetAutoUpdateConfig_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/GetAutoUpdateConfig" + AutoUpdateService_CreateAutoUpdateConfig_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/CreateAutoUpdateConfig" + AutoUpdateService_UpdateAutoUpdateConfig_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/UpdateAutoUpdateConfig" + AutoUpdateService_UpsertAutoUpdateConfig_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/UpsertAutoUpdateConfig" + AutoUpdateService_DeleteAutoUpdateConfig_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/DeleteAutoUpdateConfig" + AutoUpdateService_GetAutoUpdateVersion_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/GetAutoUpdateVersion" + AutoUpdateService_CreateAutoUpdateVersion_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/CreateAutoUpdateVersion" + AutoUpdateService_UpdateAutoUpdateVersion_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/UpdateAutoUpdateVersion" + AutoUpdateService_UpsertAutoUpdateVersion_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/UpsertAutoUpdateVersion" + AutoUpdateService_DeleteAutoUpdateVersion_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/DeleteAutoUpdateVersion" + AutoUpdateService_GetAutoUpdateAgentRollout_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/GetAutoUpdateAgentRollout" + AutoUpdateService_CreateAutoUpdateAgentRollout_FullMethodName = 
"/teleport.autoupdate.v1.AutoUpdateService/CreateAutoUpdateAgentRollout" + AutoUpdateService_UpdateAutoUpdateAgentRollout_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/UpdateAutoUpdateAgentRollout" + AutoUpdateService_UpsertAutoUpdateAgentRollout_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/UpsertAutoUpdateAgentRollout" + AutoUpdateService_DeleteAutoUpdateAgentRollout_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/DeleteAutoUpdateAgentRollout" + AutoUpdateService_TriggerAutoUpdateAgentGroup_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/TriggerAutoUpdateAgentGroup" + AutoUpdateService_ForceAutoUpdateAgentGroup_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/ForceAutoUpdateAgentGroup" + AutoUpdateService_RollbackAutoUpdateAgentGroup_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/RollbackAutoUpdateAgentGroup" + AutoUpdateService_ListAutoUpdateAgentReports_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/ListAutoUpdateAgentReports" + AutoUpdateService_GetAutoUpdateAgentReport_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/GetAutoUpdateAgentReport" + AutoUpdateService_CreateAutoUpdateAgentReport_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/CreateAutoUpdateAgentReport" + AutoUpdateService_UpdateAutoUpdateAgentReport_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/UpdateAutoUpdateAgentReport" + AutoUpdateService_UpsertAutoUpdateAgentReport_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/UpsertAutoUpdateAgentReport" + AutoUpdateService_DeleteAutoUpdateAgentReport_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/DeleteAutoUpdateAgentReport" + AutoUpdateService_GetAutoUpdateBotInstanceReport_FullMethodName = "/teleport.autoupdate.v1.AutoUpdateService/GetAutoUpdateBotInstanceReport" + AutoUpdateService_DeleteAutoUpdateBotInstanceReport_FullMethodName = 
"/teleport.autoupdate.v1.AutoUpdateService/DeleteAutoUpdateBotInstanceReport" ) // AutoUpdateServiceClient is the client API for AutoUpdateService service. @@ -116,6 +118,10 @@ type AutoUpdateServiceClient interface { UpsertAutoUpdateAgentReport(ctx context.Context, in *UpsertAutoUpdateAgentReportRequest, opts ...grpc.CallOption) (*AutoUpdateAgentReport, error) // DeleteAutoUpdateAgentReport removes the specified AutoUpdateAgentReport resource. DeleteAutoUpdateAgentReport(ctx context.Context, in *DeleteAutoUpdateAgentReportRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) + // GetAutoUpdateBotInstanceReport returns the singleton AutoUpdateBotInstanceReport resource. + GetAutoUpdateBotInstanceReport(ctx context.Context, in *GetAutoUpdateBotInstanceReportRequest, opts ...grpc.CallOption) (*AutoUpdateBotInstanceReport, error) + // DeleteAutoUpdateBotInstanceReport removes the singleton AutoUpdateBotInstanceReport resource. + DeleteAutoUpdateBotInstanceReport(ctx context.Context, in *DeleteAutoUpdateBotInstanceReportRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) } type autoUpdateServiceClient struct { @@ -366,6 +372,26 @@ func (c *autoUpdateServiceClient) DeleteAutoUpdateAgentReport(ctx context.Contex return out, nil } +func (c *autoUpdateServiceClient) GetAutoUpdateBotInstanceReport(ctx context.Context, in *GetAutoUpdateBotInstanceReportRequest, opts ...grpc.CallOption) (*AutoUpdateBotInstanceReport, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(AutoUpdateBotInstanceReport) + err := c.cc.Invoke(ctx, AutoUpdateService_GetAutoUpdateBotInstanceReport_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *autoUpdateServiceClient) DeleteAutoUpdateBotInstanceReport(ctx context.Context, in *DeleteAutoUpdateBotInstanceReportRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) 
+ out := new(emptypb.Empty) + err := c.cc.Invoke(ctx, AutoUpdateService_DeleteAutoUpdateBotInstanceReport_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + // AutoUpdateServiceServer is the server API for AutoUpdateService service. // All implementations must embed UnimplementedAutoUpdateServiceServer // for forward compatibility. @@ -422,6 +448,10 @@ type AutoUpdateServiceServer interface { UpsertAutoUpdateAgentReport(context.Context, *UpsertAutoUpdateAgentReportRequest) (*AutoUpdateAgentReport, error) // DeleteAutoUpdateAgentReport removes the specified AutoUpdateAgentReport resource. DeleteAutoUpdateAgentReport(context.Context, *DeleteAutoUpdateAgentReportRequest) (*emptypb.Empty, error) + // GetAutoUpdateBotInstanceReport returns the singleton AutoUpdateBotInstanceReport resource. + GetAutoUpdateBotInstanceReport(context.Context, *GetAutoUpdateBotInstanceReportRequest) (*AutoUpdateBotInstanceReport, error) + // DeleteAutoUpdateBotInstanceReport removes the singleton AutoUpdateBotInstanceReport resource. + DeleteAutoUpdateBotInstanceReport(context.Context, *DeleteAutoUpdateBotInstanceReportRequest) (*emptypb.Empty, error) mustEmbedUnimplementedAutoUpdateServiceServer() } @@ -504,6 +534,12 @@ func (UnimplementedAutoUpdateServiceServer) UpsertAutoUpdateAgentReport(context. 
func (UnimplementedAutoUpdateServiceServer) DeleteAutoUpdateAgentReport(context.Context, *DeleteAutoUpdateAgentReportRequest) (*emptypb.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method DeleteAutoUpdateAgentReport not implemented") } +func (UnimplementedAutoUpdateServiceServer) GetAutoUpdateBotInstanceReport(context.Context, *GetAutoUpdateBotInstanceReportRequest) (*AutoUpdateBotInstanceReport, error) { + return nil, status.Errorf(codes.Unimplemented, "method GetAutoUpdateBotInstanceReport not implemented") +} +func (UnimplementedAutoUpdateServiceServer) DeleteAutoUpdateBotInstanceReport(context.Context, *DeleteAutoUpdateBotInstanceReportRequest) (*emptypb.Empty, error) { + return nil, status.Errorf(codes.Unimplemented, "method DeleteAutoUpdateBotInstanceReport not implemented") +} func (UnimplementedAutoUpdateServiceServer) mustEmbedUnimplementedAutoUpdateServiceServer() {} func (UnimplementedAutoUpdateServiceServer) testEmbeddedByValue() {} @@ -957,6 +993,42 @@ func _AutoUpdateService_DeleteAutoUpdateAgentReport_Handler(srv interface{}, ctx return interceptor(ctx, in, info, handler) } +func _AutoUpdateService_GetAutoUpdateBotInstanceReport_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(GetAutoUpdateBotInstanceReportRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AutoUpdateServiceServer).GetAutoUpdateBotInstanceReport(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AutoUpdateService_GetAutoUpdateBotInstanceReport_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AutoUpdateServiceServer).GetAutoUpdateBotInstanceReport(ctx, req.(*GetAutoUpdateBotInstanceReportRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _AutoUpdateService_DeleteAutoUpdateBotInstanceReport_Handler(srv 
interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(DeleteAutoUpdateBotInstanceReportRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AutoUpdateServiceServer).DeleteAutoUpdateBotInstanceReport(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AutoUpdateService_DeleteAutoUpdateBotInstanceReport_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AutoUpdateServiceServer).DeleteAutoUpdateBotInstanceReport(ctx, req.(*DeleteAutoUpdateBotInstanceReportRequest)) + } + return interceptor(ctx, in, info, handler) +} + // AutoUpdateService_ServiceDesc is the grpc.ServiceDesc for AutoUpdateService service. // It's only intended for direct use with grpc.RegisterService, // and not to be introspected or modified (even as a copy) @@ -1060,6 +1132,14 @@ var AutoUpdateService_ServiceDesc = grpc.ServiceDesc{ MethodName: "DeleteAutoUpdateAgentReport", Handler: _AutoUpdateService_DeleteAutoUpdateAgentReport_Handler, }, + { + MethodName: "GetAutoUpdateBotInstanceReport", + Handler: _AutoUpdateService_GetAutoUpdateBotInstanceReport_Handler, + }, + { + MethodName: "DeleteAutoUpdateBotInstanceReport", + Handler: _AutoUpdateService_DeleteAutoUpdateBotInstanceReport_Handler, + }, }, Streams: []grpc.StreamDesc{}, Metadata: "teleport/autoupdate/v1/autoupdate_service.proto", diff --git a/api/gen/proto/go/teleport/backendinfo/v1/backendinfo.pb.go b/api/gen/proto/go/teleport/backendinfo/v1/backendinfo.pb.go index 74a2ceb4d3e65..c687584b3278e 100644 --- a/api/gen/proto/go/teleport/backendinfo/v1/backendinfo.pb.go +++ b/api/gen/proto/go/teleport/backendinfo/v1/backendinfo.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/backendinfo/v1/backendinfo.proto diff --git a/api/gen/proto/go/teleport/clusterconfig/v1/access_graph.pb.go b/api/gen/proto/go/teleport/clusterconfig/v1/access_graph.pb.go index 2da20d74a5402..f3bac24badb93 100644 --- a/api/gen/proto/go/teleport/clusterconfig/v1/access_graph.pb.go +++ b/api/gen/proto/go/teleport/clusterconfig/v1/access_graph.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/clusterconfig/v1/access_graph.proto diff --git a/api/gen/proto/go/teleport/clusterconfig/v1/access_graph_settings.pb.go b/api/gen/proto/go/teleport/clusterconfig/v1/access_graph_settings.pb.go index 863d63d278e23..fa5afc39663b7 100644 --- a/api/gen/proto/go/teleport/clusterconfig/v1/access_graph_settings.pb.go +++ b/api/gen/proto/go/teleport/clusterconfig/v1/access_graph_settings.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/clusterconfig/v1/access_graph_settings.proto diff --git a/api/gen/proto/go/teleport/clusterconfig/v1/clusterconfig_service.pb.go b/api/gen/proto/go/teleport/clusterconfig/v1/clusterconfig_service.pb.go index c9eddf30917cf..a6a1fc908c448 100644 --- a/api/gen/proto/go/teleport/clusterconfig/v1/clusterconfig_service.pb.go +++ b/api/gen/proto/go/teleport/clusterconfig/v1/clusterconfig_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/clusterconfig/v1/clusterconfig_service.proto diff --git a/api/gen/proto/go/teleport/componentfeatures/v1/component_features.pb.go b/api/gen/proto/go/teleport/componentfeatures/v1/component_features.pb.go new file mode 100644 index 0000000000000..6edea26358322 --- /dev/null +++ b/api/gen/proto/go/teleport/componentfeatures/v1/component_features.pb.go @@ -0,0 +1,414 @@ +// Code generated by protoc-gen-gogo. DO NOT EDIT. +// source: teleport/componentfeatures/v1/component_features.proto + +package componentfeaturesv1 + +import ( + fmt "fmt" + proto "github.com/gogo/protobuf/proto" + io "io" + math "math" + math_bits "math/bits" +) + +// Reference imports to suppress errors if they are not otherwise used. +var _ = proto.Marshal +var _ = fmt.Errorf +var _ = math.Inf + +// This is a compile-time assertion to ensure that this generated file +// is compatible with the proto package it is being compiled against. +// A compilation error at this line likely means your copy of the +// proto package needs to be updated. +const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package + +// ComponentFeatureID is an identifier for a specific feature supported by a Teleport component. +type ComponentFeatureID int32 + +const ( + ComponentFeatureID_COMPONENT_FEATURE_ID_UNSPECIFIED ComponentFeatureID = 0 + // ResourceConstraintsV1 indicates support for Resource Constraints and ResourceAccessIDs, as defined in RFD 228. 
+ ComponentFeatureID_COMPONENT_FEATURE_ID_RESOURCE_CONSTRAINTS_V1 ComponentFeatureID = 1 +) + +var ComponentFeatureID_name = map[int32]string{ + 0: "COMPONENT_FEATURE_ID_UNSPECIFIED", + 1: "COMPONENT_FEATURE_ID_RESOURCE_CONSTRAINTS_V1", +} + +var ComponentFeatureID_value = map[string]int32{ + "COMPONENT_FEATURE_ID_UNSPECIFIED": 0, + "COMPONENT_FEATURE_ID_RESOURCE_CONSTRAINTS_V1": 1, +} + +func (x ComponentFeatureID) String() string { + return proto.EnumName(ComponentFeatureID_name, int32(x)) +} + +func (ComponentFeatureID) EnumDescriptor() ([]byte, []int) { + return fileDescriptor_a4949687356da643, []int{0} +} + +// ComponentFeatures represents a set of features supported by a given Teleport component. +type ComponentFeatures struct { + // features is a list of supported feature identifiers. + Features []ComponentFeatureID `protobuf:"varint,1,rep,packed,name=features,proto3,enum=teleport.componentfeatures.v1.ComponentFeatureID" json:"features,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ComponentFeatures) Reset() { *m = ComponentFeatures{} } +func (m *ComponentFeatures) String() string { return proto.CompactTextString(m) } +func (*ComponentFeatures) ProtoMessage() {} +func (*ComponentFeatures) Descriptor() ([]byte, []int) { + return fileDescriptor_a4949687356da643, []int{0} +} +func (m *ComponentFeatures) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ComponentFeatures) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ComponentFeatures.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ComponentFeatures) XXX_Merge(src proto.Message) { + xxx_messageInfo_ComponentFeatures.Merge(m, src) +} +func (m *ComponentFeatures) XXX_Size() int { + return m.Size() +} +func (m 
*ComponentFeatures) XXX_DiscardUnknown() { + xxx_messageInfo_ComponentFeatures.DiscardUnknown(m) +} + +var xxx_messageInfo_ComponentFeatures proto.InternalMessageInfo + +func (m *ComponentFeatures) GetFeatures() []ComponentFeatureID { + if m != nil { + return m.Features + } + return nil +} + +func init() { + proto.RegisterEnum("teleport.componentfeatures.v1.ComponentFeatureID", ComponentFeatureID_name, ComponentFeatureID_value) + proto.RegisterType((*ComponentFeatures)(nil), "teleport.componentfeatures.v1.ComponentFeatures") +} + +func init() { + proto.RegisterFile("teleport/componentfeatures/v1/component_features.proto", fileDescriptor_a4949687356da643) +} + +var fileDescriptor_a4949687356da643 = []byte{ + // 256 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x32, 0x2b, 0x49, 0xcd, 0x49, + 0x2d, 0xc8, 0x2f, 0x2a, 0xd1, 0x4f, 0xce, 0xcf, 0x2d, 0xc8, 0xcf, 0x4b, 0xcd, 0x2b, 0x49, 0x4b, + 0x4d, 0x2c, 0x29, 0x2d, 0x4a, 0x2d, 0xd6, 0x2f, 0x33, 0x44, 0x08, 0xc6, 0xc3, 0x44, 0xf5, 0x0a, + 0x8a, 0xf2, 0x4b, 0xf2, 0x85, 0x64, 0x61, 0xfa, 0xf4, 0x30, 0xf4, 0xe9, 0x95, 0x19, 0x2a, 0x25, + 0x71, 0x09, 0x3a, 0xc3, 0xc4, 0xdd, 0xa0, 0xe2, 0x42, 0xbe, 0x5c, 0x1c, 0x30, 0x35, 0x12, 0x8c, + 0x0a, 0xcc, 0x1a, 0x7c, 0x46, 0x86, 0x7a, 0x78, 0x8d, 0xd1, 0x43, 0x37, 0xc3, 0xd3, 0x25, 0x08, + 0x6e, 0x84, 0x56, 0x0e, 0x97, 0x10, 0xa6, 0xbc, 0x90, 0x0a, 0x97, 0x82, 0xb3, 0xbf, 0x6f, 0x80, + 0xbf, 0x9f, 0xab, 0x5f, 0x48, 0xbc, 0x9b, 0xab, 0x63, 0x48, 0x68, 0x90, 0x6b, 0xbc, 0xa7, 0x4b, + 0x7c, 0xa8, 0x5f, 0x70, 0x80, 0xab, 0xb3, 0xa7, 0x9b, 0xa7, 0xab, 0x8b, 0x00, 0x83, 0x90, 0x01, + 0x97, 0x0e, 0x56, 0x55, 0x41, 0xae, 0xc1, 0xfe, 0xa1, 0x41, 0xce, 0xae, 0xf1, 0xce, 0xfe, 0x7e, + 0xc1, 0x21, 0x41, 0x8e, 0x9e, 0x7e, 0x21, 0xc1, 0xf1, 0x61, 0x86, 0x02, 0x8c, 0x4e, 0x45, 0x27, + 0x1e, 0xc9, 0x31, 0x5e, 0x78, 0x24, 0xc7, 0xf8, 0xe0, 0x91, 0x1c, 0x63, 0x54, 0x4a, 0x7a, 0x66, + 0x49, 0x46, 0x69, 0x12, 0xc8, 0xd5, 0xfa, 0xe9, 0x45, 
0x89, 0x65, 0x99, 0x25, 0x89, 0x25, 0x99, + 0xf9, 0x79, 0x89, 0x39, 0xfa, 0xf0, 0xf0, 0x4c, 0x2c, 0xc8, 0xd4, 0x4f, 0x4f, 0xcd, 0xd3, 0x07, + 0x87, 0x97, 0x7e, 0x7a, 0xbe, 0x3e, 0xde, 0x90, 0xb6, 0xc6, 0x10, 0x2c, 0x33, 0x4c, 0x62, 0x03, + 0xeb, 0x35, 0x06, 0x04, 0x00, 0x00, 0xff, 0xff, 0x72, 0x26, 0xf7, 0x8d, 0xa5, 0x01, 0x00, 0x00, +} + +func (m *ComponentFeatures) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ComponentFeatures) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ComponentFeatures) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.Features) > 0 { + dAtA2 := make([]byte, len(m.Features)*10) + var j1 int + for _, num := range m.Features { + for num >= 1<<7 { + dAtA2[j1] = uint8(uint64(num)&0x7f | 0x80) + num >>= 7 + j1++ + } + dAtA2[j1] = uint8(num) + j1++ + } + i -= j1 + copy(dAtA[i:], dAtA2[:j1]) + i = encodeVarintComponentFeatures(dAtA, i, uint64(j1)) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + +func encodeVarintComponentFeatures(dAtA []byte, offset int, v uint64) int { + offset -= sovComponentFeatures(v) + base := offset + for v >= 1<<7 { + dAtA[offset] = uint8(v&0x7f | 0x80) + v >>= 7 + offset++ + } + dAtA[offset] = uint8(v) + return base +} +func (m *ComponentFeatures) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.Features) > 0 { + l = 0 + for _, e := range m.Features { + l += sovComponentFeatures(uint64(e)) + } + n += 1 + sovComponentFeatures(uint64(l)) + l + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func sovComponentFeatures(x uint64) (n int) { + return 
(math_bits.Len64(x|1) + 6) / 7 +} +func sozComponentFeatures(x uint64) (n int) { + return sovComponentFeatures(uint64((x << 1) ^ uint64((int64(x) >> 63)))) +} +func (m *ComponentFeatures) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowComponentFeatures + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ComponentFeatures: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ComponentFeatures: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType == 0 { + var v ComponentFeatureID + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowComponentFeatures + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= ComponentFeatureID(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.Features = append(m.Features, v) + } else if wireType == 2 { + var packedLen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowComponentFeatures + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + packedLen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if packedLen < 0 { + return ErrInvalidLengthComponentFeatures + } + postIndex := iNdEx + packedLen + if postIndex < 0 { + return ErrInvalidLengthComponentFeatures + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + var elementCount int + if elementCount != 0 && len(m.Features) == 0 { + m.Features = make([]ComponentFeatureID, 0, elementCount) + } + for iNdEx < postIndex { + var v ComponentFeatureID + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 
ErrIntOverflowComponentFeatures + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= ComponentFeatureID(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.Features = append(m.Features, v) + } + } else { + return fmt.Errorf("proto: wrong wireType = %d for field Features", wireType) + } + default: + iNdEx = preIndex + skippy, err := skipComponentFeatures(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthComponentFeatures + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func skipComponentFeatures(dAtA []byte) (n int, err error) { + l := len(dAtA) + iNdEx := 0 + depth := 0 + for iNdEx < l { + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowComponentFeatures + } + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + wireType := int(wire & 0x7) + switch wireType { + case 0: + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowComponentFeatures + } + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF + } + iNdEx++ + if dAtA[iNdEx-1] < 0x80 { + break + } + } + case 1: + iNdEx += 8 + case 2: + var length int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowComponentFeatures + } + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + length |= (int(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + if length < 0 { + return 0, ErrInvalidLengthComponentFeatures + } + iNdEx += length + case 3: + depth++ + case 4: + if depth == 0 { + return 0, ErrUnexpectedEndOfGroupComponentFeatures + } + depth-- + case 5: + iNdEx += 4 + default: + return 0, 
fmt.Errorf("proto: illegal wireType %d", wireType) + } + if iNdEx < 0 { + return 0, ErrInvalidLengthComponentFeatures + } + if depth == 0 { + return iNdEx, nil + } + } + return 0, io.ErrUnexpectedEOF +} + +var ( + ErrInvalidLengthComponentFeatures = fmt.Errorf("proto: negative length found during unmarshaling") + ErrIntOverflowComponentFeatures = fmt.Errorf("proto: integer overflow") + ErrUnexpectedEndOfGroupComponentFeatures = fmt.Errorf("proto: unexpected end of group") +) diff --git a/api/gen/proto/go/teleport/crownjewel/v1/crownjewel.pb.go b/api/gen/proto/go/teleport/crownjewel/v1/crownjewel.pb.go index 1173000d5f2bd..09e2c5e80cde1 100644 --- a/api/gen/proto/go/teleport/crownjewel/v1/crownjewel.pb.go +++ b/api/gen/proto/go/teleport/crownjewel/v1/crownjewel.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/crownjewel/v1/crownjewel.proto diff --git a/api/gen/proto/go/teleport/crownjewel/v1/crownjewel_service.pb.go b/api/gen/proto/go/teleport/crownjewel/v1/crownjewel_service.pb.go index 32e4e17577669..a3c95bd63d232 100644 --- a/api/gen/proto/go/teleport/crownjewel/v1/crownjewel_service.pb.go +++ b/api/gen/proto/go/teleport/crownjewel/v1/crownjewel_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/crownjewel/v1/crownjewel_service.proto diff --git a/api/gen/proto/go/teleport/dbobject/v1/dbobject.pb.go b/api/gen/proto/go/teleport/dbobject/v1/dbobject.pb.go index ddde1670feb0a..a70c6b5169ab2 100644 --- a/api/gen/proto/go/teleport/dbobject/v1/dbobject.pb.go +++ b/api/gen/proto/go/teleport/dbobject/v1/dbobject.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/dbobject/v1/dbobject.proto diff --git a/api/gen/proto/go/teleport/dbobject/v1/dbobject_service.pb.go b/api/gen/proto/go/teleport/dbobject/v1/dbobject_service.pb.go index 7e3e511a9e1b0..f66439f480d2f 100644 --- a/api/gen/proto/go/teleport/dbobject/v1/dbobject_service.pb.go +++ b/api/gen/proto/go/teleport/dbobject/v1/dbobject_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/dbobject/v1/dbobject_service.proto diff --git a/api/gen/proto/go/teleport/dbobjectimportrule/v1/dbobjectimportrule.pb.go b/api/gen/proto/go/teleport/dbobjectimportrule/v1/dbobjectimportrule.pb.go index 824babbc7101b..301cc3b123060 100644 --- a/api/gen/proto/go/teleport/dbobjectimportrule/v1/dbobjectimportrule.pb.go +++ b/api/gen/proto/go/teleport/dbobjectimportrule/v1/dbobjectimportrule.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/dbobjectimportrule/v1/dbobjectimportrule.proto diff --git a/api/gen/proto/go/teleport/dbobjectimportrule/v1/dbobjectimportrule_service.pb.go b/api/gen/proto/go/teleport/dbobjectimportrule/v1/dbobjectimportrule_service.pb.go index 162dfa6ba1b11..432360fd976b5 100644 --- a/api/gen/proto/go/teleport/dbobjectimportrule/v1/dbobjectimportrule_service.pb.go +++ b/api/gen/proto/go/teleport/dbobjectimportrule/v1/dbobjectimportrule_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/dbobjectimportrule/v1/dbobjectimportrule_service.proto diff --git a/api/gen/proto/go/teleport/decision/v1alpha1/database_access.pb.go b/api/gen/proto/go/teleport/decision/v1alpha1/database_access.pb.go index d300d7c286d9e..7ce6301001792 100644 --- a/api/gen/proto/go/teleport/decision/v1alpha1/database_access.pb.go +++ b/api/gen/proto/go/teleport/decision/v1alpha1/database_access.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/decision/v1alpha1/database_access.proto diff --git a/api/gen/proto/go/teleport/decision/v1alpha1/decision_service.pb.go b/api/gen/proto/go/teleport/decision/v1alpha1/decision_service.pb.go index 0154e1697a5ee..43e4a6c4e5aed 100644 --- a/api/gen/proto/go/teleport/decision/v1alpha1/decision_service.pb.go +++ b/api/gen/proto/go/teleport/decision/v1alpha1/decision_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/decision/v1alpha1/decision_service.proto diff --git a/api/gen/proto/go/teleport/decision/v1alpha1/denial_metadata.pb.go b/api/gen/proto/go/teleport/decision/v1alpha1/denial_metadata.pb.go index 73ed6a281d03f..00add5e2b6b12 100644 --- a/api/gen/proto/go/teleport/decision/v1alpha1/denial_metadata.pb.go +++ b/api/gen/proto/go/teleport/decision/v1alpha1/denial_metadata.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/decision/v1alpha1/denial_metadata.proto diff --git a/api/gen/proto/go/teleport/decision/v1alpha1/enforcement_feature.pb.go b/api/gen/proto/go/teleport/decision/v1alpha1/enforcement_feature.pb.go index cc064e315501c..d990dced49ab7 100644 --- a/api/gen/proto/go/teleport/decision/v1alpha1/enforcement_feature.pb.go +++ b/api/gen/proto/go/teleport/decision/v1alpha1/enforcement_feature.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/decision/v1alpha1/enforcement_feature.proto diff --git a/api/gen/proto/go/teleport/decision/v1alpha1/permit_metadata.pb.go b/api/gen/proto/go/teleport/decision/v1alpha1/permit_metadata.pb.go index e5a24022e1a7b..a6af9f4e0bb8c 100644 --- a/api/gen/proto/go/teleport/decision/v1alpha1/permit_metadata.pb.go +++ b/api/gen/proto/go/teleport/decision/v1alpha1/permit_metadata.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/decision/v1alpha1/permit_metadata.proto diff --git a/api/gen/proto/go/teleport/decision/v1alpha1/request_metadata.pb.go b/api/gen/proto/go/teleport/decision/v1alpha1/request_metadata.pb.go index 5f7896b6ba042..e125aadff9d06 100644 --- a/api/gen/proto/go/teleport/decision/v1alpha1/request_metadata.pb.go +++ b/api/gen/proto/go/teleport/decision/v1alpha1/request_metadata.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/decision/v1alpha1/request_metadata.proto diff --git a/api/gen/proto/go/teleport/decision/v1alpha1/resource.pb.go b/api/gen/proto/go/teleport/decision/v1alpha1/resource.pb.go index c8df27c64b175..fe214de27cc86 100644 --- a/api/gen/proto/go/teleport/decision/v1alpha1/resource.pb.go +++ b/api/gen/proto/go/teleport/decision/v1alpha1/resource.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/decision/v1alpha1/resource.proto diff --git a/api/gen/proto/go/teleport/decision/v1alpha1/ssh_access.pb.go b/api/gen/proto/go/teleport/decision/v1alpha1/ssh_access.pb.go index afe7acf4332c7..48653c61c8a40 100644 --- a/api/gen/proto/go/teleport/decision/v1alpha1/ssh_access.pb.go +++ b/api/gen/proto/go/teleport/decision/v1alpha1/ssh_access.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/decision/v1alpha1/ssh_access.proto @@ -583,7 +583,13 @@ type LockTarget struct { // Requires Teleport Enterprise. Device string `protobuf:"bytes,7,opt,name=device,proto3" json:"device,omitempty"` // ServerID is the host id of the Teleport instance. - ServerId string `protobuf:"bytes,8,opt,name=server_id,json=serverId,proto3" json:"server_id,omitempty"` + ServerId string `protobuf:"bytes,8,opt,name=server_id,json=serverId,proto3" json:"server_id,omitempty"` + // BotInstanceID is the bot instance ID if this is a bot identity. + BotInstanceId string `protobuf:"bytes,9,opt,name=bot_instance_id,json=botInstanceId,proto3" json:"bot_instance_id,omitempty"` + // JoinToken is the name of the join token used when this identity originally + // joined. 
This only applies to bot identities, and cannot be used to target + // bots that joined via the `token` join method. + JoinToken string `protobuf:"bytes,10,opt,name=join_token,json=joinToken,proto3" json:"join_token,omitempty"` unknownFields protoimpl.UnknownFields sizeCache protoimpl.SizeCache } @@ -674,6 +680,20 @@ func (x *LockTarget) GetServerId() string { return "" } +func (x *LockTarget) GetBotInstanceId() string { + if x != nil { + return x.BotInstanceId + } + return "" +} + +func (x *LockTarget) GetJoinToken() string { + if x != nil { + return x.JoinToken + } + return "" +} + // HostUsersInfo keeps information about groups and sudoers entries // for a particular host user type HostUsersInfo struct { @@ -794,7 +814,7 @@ const file_teleport_decision_v1alpha1_ssh_access_proto_rawDesc = "" + "\fmapped_roles\x18\x17 \x03(\tR\vmappedRoles\x12Q\n" + "\x0fhost_users_info\x18\x18 \x01(\v2).teleport.decision.v1alpha1.HostUsersInfoR\rhostUsersInfoJ\x04\b\x02\x10\x03J\x04\b\x04\x10\x05J\x04\b\a\x10\bJ\x04\b\f\x10\rJ\x04\b\r\x10\x0eJ\x04\b\x0f\x10\x10J\x04\b\x10\x10\x11J\x04\b\x11\x10\x12\"Y\n" + "\x0fSSHAccessDenial\x12F\n" + - "\bmetadata\x18\x01 \x01(\v2*.teleport.decision.v1alpha1.DenialMetadataR\bmetadata\"\xee\x01\n" + + "\bmetadata\x18\x01 \x01(\v2*.teleport.decision.v1alpha1.DenialMetadataR\bmetadata\"\xb5\x02\n" + "\n" + "LockTarget\x12\x12\n" + "\x04user\x18\x01 \x01(\tR\x04user\x12\x12\n" + @@ -805,7 +825,11 @@ const file_teleport_decision_v1alpha1_ssh_access_proto_rawDesc = "" + "\x0fwindows_desktop\x18\x05 \x01(\tR\x0ewindowsDesktop\x12%\n" + "\x0eaccess_request\x18\x06 \x01(\tR\raccessRequest\x12\x16\n" + "\x06device\x18\a \x01(\tR\x06device\x12\x1b\n" + - "\tserver_id\x18\b \x01(\tR\bserverId\"\x9f\x01\n" + + "\tserver_id\x18\b \x01(\tR\bserverId\x12&\n" + + "\x0fbot_instance_id\x18\t \x01(\tR\rbotInstanceId\x12\x1d\n" + + "\n" + + "join_token\x18\n" + + " \x01(\tR\tjoinToken\"\x9f\x01\n" + "\rHostUsersInfo\x12\x16\n" + "\x06groups\x18\x01 
\x03(\tR\x06groups\x12<\n" + "\x04mode\x18\x02 \x01(\x0e2(.teleport.decision.v1alpha1.HostUserModeR\x04mode\x12\x10\n" + diff --git a/api/gen/proto/go/teleport/decision/v1alpha1/ssh_identity.pb.go b/api/gen/proto/go/teleport/decision/v1alpha1/ssh_identity.pb.go index e18ccb4fc744d..2769d5fa8b2d2 100644 --- a/api/gen/proto/go/teleport/decision/v1alpha1/ssh_identity.pb.go +++ b/api/gen/proto/go/teleport/decision/v1alpha1/ssh_identity.pb.go @@ -14,13 +14,14 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/decision/v1alpha1/ssh_identity.proto package decisionpb import ( + v11 "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/v1" v1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/trait/v1" protoreflect "google.golang.org/protobuf/reflect/protoreflect" protoimpl "google.golang.org/protobuf/runtime/protoimpl" @@ -283,8 +284,16 @@ type SSHIdentity struct { // GitHubUsername indicates the GitHub username identified by the GitHub // connector. GithubUsername string `protobuf:"bytes,33,opt,name=github_username,json=githubUsername,proto3" json:"github_username,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache + // JoinToken is the name of the join token used for bot joining. It is unset + // for other identity types, or for bots using the `token` join method. + JoinToken string `protobuf:"bytes,34,opt,name=join_token,json=joinToken,proto3" json:"join_token,omitempty"` + // ScopePin is an optional pin that ties the certificate to a specific scope and set of scoped roles. When + // set, the Roles field must not be set. + ScopePin *v11.Pin `protobuf:"bytes,35,opt,name=scope_pin,json=scopePin,proto3" json:"scope_pin,omitempty"` + // The scope associated with a host identity. 
+ AgentScope string `protobuf:"bytes,36,opt,name=agent_scope,json=agentScope,proto3" json:"agent_scope,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache } func (x *SSHIdentity) Reset() { @@ -548,6 +557,27 @@ func (x *SSHIdentity) GetGithubUsername() string { return "" } +func (x *SSHIdentity) GetJoinToken() string { + if x != nil { + return x.JoinToken + } + return "" +} + +func (x *SSHIdentity) GetScopePin() *v11.Pin { + if x != nil { + return x.ScopePin + } + return nil +} + +func (x *SSHIdentity) GetAgentScope() string { + if x != nil { + return x.AgentScope + } + return "" +} + // CertExtension represents a key/value for a certificate extension. This type must // be kept up to date with types.CertExtension. type CertExtension struct { @@ -630,10 +660,10 @@ var File_teleport_decision_v1alpha1_ssh_identity_proto protoreflect.FileDescript const file_teleport_decision_v1alpha1_ssh_identity_proto_rawDesc = "" + "\n" + - "-teleport/decision/v1alpha1/ssh_identity.proto\x12\x1ateleport.decision.v1alpha1\x1a\x1fgoogle/protobuf/timestamp.proto\x1a-teleport/decision/v1alpha1/tls_identity.proto\x1a\x1dteleport/trait/v1/trait.proto\"X\n" + + "-teleport/decision/v1alpha1/ssh_identity.proto\x12\x1ateleport.decision.v1alpha1\x1a\x1fgoogle/protobuf/timestamp.proto\x1a-teleport/decision/v1alpha1/tls_identity.proto\x1a\x1fteleport/scopes/v1/scopes.proto\x1a\x1dteleport/trait/v1/trait.proto\"X\n" + "\fSSHAuthority\x12!\n" + "\fcluster_name\x18\x01 \x01(\tR\vclusterName\x12%\n" + - "\x0eauthority_type\x18\x02 \x01(\tR\rauthorityType\"\x9a\v\n" + + "\x0eauthority_type\x18\x02 \x01(\tR\rauthorityType\"\x90\f\n" + "\vSSHIdentity\x12\x1f\n" + "\vvalid_after\x18\x01 \x01(\x04R\n" + "validAfter\x12!\n" + @@ -674,7 +704,12 @@ const file_teleport_decision_v1alpha1_ssh_identity_proto_rawDesc = "" + "\x10device_asset_tag\x18\x1e \x01(\tR\x0edeviceAssetTag\x120\n" + "\x14device_credential_id\x18\x1f \x01(\tR\x12deviceCredentialId\x12$\n" + 
"\x0egithub_user_id\x18 \x01(\tR\fgithubUserId\x12'\n" + - "\x0fgithub_username\x18! \x01(\tR\x0egithubUsername\"\xbf\x01\n" + + "\x0fgithub_username\x18! \x01(\tR\x0egithubUsername\x12\x1d\n" + + "\n" + + "join_token\x18\" \x01(\tR\tjoinToken\x124\n" + + "\tscope_pin\x18# \x01(\v2\x17.teleport.scopes.v1.PinR\bscopePin\x12\x1f\n" + + "\vagent_scope\x18$ \x01(\tR\n" + + "agentScope\"\xbf\x01\n" + "\rCertExtension\x12A\n" + "\x04type\x18\x01 \x01(\x0e2-.teleport.decision.v1alpha1.CertExtensionTypeR\x04type\x12A\n" + "\x04mode\x18\x02 \x01(\x0e2-.teleport.decision.v1alpha1.CertExtensionModeR\x04mode\x12\x12\n" + @@ -710,19 +745,21 @@ var file_teleport_decision_v1alpha1_ssh_identity_proto_goTypes = []any{ (*v1.Trait)(nil), // 5: teleport.trait.v1.Trait (*timestamppb.Timestamp)(nil), // 6: google.protobuf.Timestamp (*ResourceId)(nil), // 7: teleport.decision.v1alpha1.ResourceId + (*v11.Pin)(nil), // 8: teleport.scopes.v1.Pin } var file_teleport_decision_v1alpha1_ssh_identity_proto_depIdxs = []int32{ 5, // 0: teleport.decision.v1alpha1.SSHIdentity.traits:type_name -> teleport.trait.v1.Trait 6, // 1: teleport.decision.v1alpha1.SSHIdentity.previous_identity_expires:type_name -> google.protobuf.Timestamp 4, // 2: teleport.decision.v1alpha1.SSHIdentity.certificate_extensions:type_name -> teleport.decision.v1alpha1.CertExtension 7, // 3: teleport.decision.v1alpha1.SSHIdentity.allowed_resource_ids:type_name -> teleport.decision.v1alpha1.ResourceId - 1, // 4: teleport.decision.v1alpha1.CertExtension.type:type_name -> teleport.decision.v1alpha1.CertExtensionType - 0, // 5: teleport.decision.v1alpha1.CertExtension.mode:type_name -> teleport.decision.v1alpha1.CertExtensionMode - 6, // [6:6] is the sub-list for method output_type - 6, // [6:6] is the sub-list for method input_type - 6, // [6:6] is the sub-list for extension type_name - 6, // [6:6] is the sub-list for extension extendee - 0, // [0:6] is the sub-list for field type_name + 8, // 4: 
teleport.decision.v1alpha1.SSHIdentity.scope_pin:type_name -> teleport.scopes.v1.Pin + 1, // 5: teleport.decision.v1alpha1.CertExtension.type:type_name -> teleport.decision.v1alpha1.CertExtensionType + 0, // 6: teleport.decision.v1alpha1.CertExtension.mode:type_name -> teleport.decision.v1alpha1.CertExtensionMode + 7, // [7:7] is the sub-list for method output_type + 7, // [7:7] is the sub-list for method input_type + 7, // [7:7] is the sub-list for extension type_name + 7, // [7:7] is the sub-list for extension extendee + 0, // [0:7] is the sub-list for field type_name } func init() { file_teleport_decision_v1alpha1_ssh_identity_proto_init() } diff --git a/api/gen/proto/go/teleport/decision/v1alpha1/ssh_join.pb.go b/api/gen/proto/go/teleport/decision/v1alpha1/ssh_join.pb.go index e3e11957b1859..d3b4c3707dadc 100644 --- a/api/gen/proto/go/teleport/decision/v1alpha1/ssh_join.pb.go +++ b/api/gen/proto/go/teleport/decision/v1alpha1/ssh_join.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/decision/v1alpha1/ssh_join.proto diff --git a/api/gen/proto/go/teleport/decision/v1alpha1/tls_identity.pb.go b/api/gen/proto/go/teleport/decision/v1alpha1/tls_identity.pb.go index 17d3b9f19a789..c62b8691044e2 100644 --- a/api/gen/proto/go/teleport/decision/v1alpha1/tls_identity.pb.go +++ b/api/gen/proto/go/teleport/decision/v1alpha1/tls_identity.pb.go @@ -14,13 +14,14 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/decision/v1alpha1/tls_identity.proto package decisionpb import ( + v11 "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/v1" v1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/trait/v1" protoreflect "google.golang.org/protobuf/reflect/protoreflect" protoimpl "google.golang.org/protobuf/runtime/protoimpl" @@ -131,7 +132,14 @@ type TLSIdentity struct { // DeviceExtensions holds device-aware extensions for the identity. DeviceExtensions *DeviceExtensions `protobuf:"bytes,34,opt,name=device_extensions,json=deviceExtensions,proto3" json:"device_extensions,omitempty"` // UserType indicates if the User was created by an SSO Provider or locally. - UserType string `protobuf:"bytes,35,opt,name=user_type,json=userType,proto3" json:"user_type,omitempty"` + UserType string `protobuf:"bytes,35,opt,name=user_type,json=userType,proto3" json:"user_type,omitempty"` + // JoinToken is the name of the join token used when a bot joins; it does not + // apply to other identity types, or to bots using the traditional `token` + // join method. + JoinToken string `protobuf:"bytes,36,opt,name=join_token,json=joinToken,proto3" json:"join_token,omitempty"` + // ScopePin is an optional pin that ties the certificate to a specific scope and set of scoped roles. When + // set, the Roles field must not be set. + ScopePin *v11.Pin `protobuf:"bytes,37,opt,name=scope_pin,json=scopePin,proto3" json:"scope_pin,omitempty"` unknownFields protoimpl.UnknownFields sizeCache protoimpl.SizeCache } @@ -411,6 +419,20 @@ func (x *TLSIdentity) GetUserType() string { return "" } +func (x *TLSIdentity) GetJoinToken() string { + if x != nil { + return x.JoinToken + } + return "" +} + +func (x *TLSIdentity) GetScopePin() *v11.Pin { + if x != nil { + return x.ScopePin + } + return nil +} + // RouteToApp holds routing information for applications. 
type RouteToApp struct { state protoimpl.MessageState `protogen:"open.v1"` @@ -785,7 +807,7 @@ var File_teleport_decision_v1alpha1_tls_identity_proto protoreflect.FileDescript const file_teleport_decision_v1alpha1_tls_identity_proto_rawDesc = "" + "\n" + - "-teleport/decision/v1alpha1/tls_identity.proto\x12\x1ateleport.decision.v1alpha1\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x1dteleport/trait/v1/trait.proto\"\xb6\f\n" + + "-teleport/decision/v1alpha1/tls_identity.proto\x12\x1ateleport.decision.v1alpha1\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x1fteleport/scopes/v1/scopes.proto\x1a\x1dteleport/trait/v1/trait.proto\"\x8b\r\n" + "\vTLSIdentity\x12\x1a\n" + "\busername\x18\x01 \x01(\tR\busername\x12\"\n" + "\fimpersonator\x18\x02 \x01(\tR\fimpersonator\x12\x16\n" + @@ -827,7 +849,10 @@ const file_teleport_decision_v1alpha1_tls_identity_proto_rawDesc = "" + "\x12private_key_policy\x18 \x01(\tR\x10privateKeyPolicy\x128\n" + "\x18connection_diagnostic_id\x18! \x01(\tR\x16connectionDiagnosticId\x12Y\n" + "\x11device_extensions\x18\" \x01(\v2,.teleport.decision.v1alpha1.DeviceExtensionsR\x10deviceExtensions\x12\x1b\n" + - "\tuser_type\x18# \x01(\tR\buserType\"\xfb\x02\n" + + "\tuser_type\x18# \x01(\tR\buserType\x12\x1d\n" + + "\n" + + "join_token\x18$ \x01(\tR\tjoinToken\x124\n" + + "\tscope_pin\x18% \x01(\v2\x17.teleport.scopes.v1.PinR\bscopePin\"\xfb\x02\n" + "\n" + "RouteToApp\x12\x1d\n" + "\n" + @@ -883,6 +908,7 @@ var file_teleport_decision_v1alpha1_tls_identity_proto_goTypes = []any{ (*DeviceExtensions)(nil), // 4: teleport.decision.v1alpha1.DeviceExtensions (*timestamppb.Timestamp)(nil), // 5: google.protobuf.Timestamp (*v1.Trait)(nil), // 6: teleport.trait.v1.Trait + (*v11.Pin)(nil), // 7: teleport.scopes.v1.Pin } var file_teleport_decision_v1alpha1_tls_identity_proto_depIdxs = []int32{ 5, // 0: teleport.decision.v1alpha1.TLSIdentity.expires:type_name -> google.protobuf.Timestamp @@ -892,11 +918,12 @@ var 
file_teleport_decision_v1alpha1_tls_identity_proto_depIdxs = []int32{ 5, // 4: teleport.decision.v1alpha1.TLSIdentity.previous_identity_expires:type_name -> google.protobuf.Timestamp 3, // 5: teleport.decision.v1alpha1.TLSIdentity.allowed_resource_ids:type_name -> teleport.decision.v1alpha1.ResourceId 4, // 6: teleport.decision.v1alpha1.TLSIdentity.device_extensions:type_name -> teleport.decision.v1alpha1.DeviceExtensions - 7, // [7:7] is the sub-list for method output_type - 7, // [7:7] is the sub-list for method input_type - 7, // [7:7] is the sub-list for extension type_name - 7, // [7:7] is the sub-list for extension extendee - 0, // [0:7] is the sub-list for field type_name + 7, // 7: teleport.decision.v1alpha1.TLSIdentity.scope_pin:type_name -> teleport.scopes.v1.Pin + 8, // [8:8] is the sub-list for method output_type + 8, // [8:8] is the sub-list for method input_type + 8, // [8:8] is the sub-list for extension type_name + 8, // [8:8] is the sub-list for extension extendee + 0, // [0:8] is the sub-list for field type_name } func init() { file_teleport_decision_v1alpha1_tls_identity_proto_init() } diff --git a/api/gen/proto/go/teleport/devicetrust/v1/assert.pb.go b/api/gen/proto/go/teleport/devicetrust/v1/assert.pb.go index 9270eeeb8a283..f7bdde38bba00 100644 --- a/api/gen/proto/go/teleport/devicetrust/v1/assert.pb.go +++ b/api/gen/proto/go/teleport/devicetrust/v1/assert.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/devicetrust/v1/assert.proto diff --git a/api/gen/proto/go/teleport/devicetrust/v1/authenticate_challenge.pb.go b/api/gen/proto/go/teleport/devicetrust/v1/authenticate_challenge.pb.go index a47f45fcf4962..6d4ced0b2cd75 100644 --- a/api/gen/proto/go/teleport/devicetrust/v1/authenticate_challenge.pb.go +++ b/api/gen/proto/go/teleport/devicetrust/v1/authenticate_challenge.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/devicetrust/v1/authenticate_challenge.proto diff --git a/api/gen/proto/go/teleport/devicetrust/v1/device.pb.go b/api/gen/proto/go/teleport/devicetrust/v1/device.pb.go index 5cc79c6aa89fe..e4aaae7252d74 100644 --- a/api/gen/proto/go/teleport/devicetrust/v1/device.pb.go +++ b/api/gen/proto/go/teleport/devicetrust/v1/device.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/devicetrust/v1/device.proto diff --git a/api/gen/proto/go/teleport/devicetrust/v1/device_collected_data.pb.go b/api/gen/proto/go/teleport/devicetrust/v1/device_collected_data.pb.go index 854accb8f7948..8d3b9dcad9eff 100644 --- a/api/gen/proto/go/teleport/devicetrust/v1/device_collected_data.pb.go +++ b/api/gen/proto/go/teleport/devicetrust/v1/device_collected_data.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/devicetrust/v1/device_collected_data.proto diff --git a/api/gen/proto/go/teleport/devicetrust/v1/device_confirmation_token.pb.go b/api/gen/proto/go/teleport/devicetrust/v1/device_confirmation_token.pb.go index 3b13881886de5..d2767483b8386 100644 --- a/api/gen/proto/go/teleport/devicetrust/v1/device_confirmation_token.pb.go +++ b/api/gen/proto/go/teleport/devicetrust/v1/device_confirmation_token.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/devicetrust/v1/device_confirmation_token.proto diff --git a/api/gen/proto/go/teleport/devicetrust/v1/device_enroll_token.pb.go b/api/gen/proto/go/teleport/devicetrust/v1/device_enroll_token.pb.go index 713cb363701be..75b7cf0b65a07 100644 --- a/api/gen/proto/go/teleport/devicetrust/v1/device_enroll_token.pb.go +++ b/api/gen/proto/go/teleport/devicetrust/v1/device_enroll_token.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/devicetrust/v1/device_enroll_token.proto diff --git a/api/gen/proto/go/teleport/devicetrust/v1/device_profile.pb.go b/api/gen/proto/go/teleport/devicetrust/v1/device_profile.pb.go index 5c670ece8dd01..cb6c2ba029c8d 100644 --- a/api/gen/proto/go/teleport/devicetrust/v1/device_profile.pb.go +++ b/api/gen/proto/go/teleport/devicetrust/v1/device_profile.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/devicetrust/v1/device_profile.proto diff --git a/api/gen/proto/go/teleport/devicetrust/v1/device_source.pb.go b/api/gen/proto/go/teleport/devicetrust/v1/device_source.pb.go index 5e80e96a00d73..5599da55a51db 100644 --- a/api/gen/proto/go/teleport/devicetrust/v1/device_source.pb.go +++ b/api/gen/proto/go/teleport/devicetrust/v1/device_source.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/devicetrust/v1/device_source.proto diff --git a/api/gen/proto/go/teleport/devicetrust/v1/device_web_token.pb.go b/api/gen/proto/go/teleport/devicetrust/v1/device_web_token.pb.go index 770085f40baa9..825f70a2aeead 100644 --- a/api/gen/proto/go/teleport/devicetrust/v1/device_web_token.pb.go +++ b/api/gen/proto/go/teleport/devicetrust/v1/device_web_token.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/devicetrust/v1/device_web_token.proto diff --git a/api/gen/proto/go/teleport/devicetrust/v1/devicetrust_service.pb.go b/api/gen/proto/go/teleport/devicetrust/v1/devicetrust_service.pb.go index 7be523d4582b3..763906d0f8ca0 100644 --- a/api/gen/proto/go/teleport/devicetrust/v1/devicetrust_service.pb.go +++ b/api/gen/proto/go/teleport/devicetrust/v1/devicetrust_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/devicetrust/v1/devicetrust_service.proto diff --git a/api/gen/proto/go/teleport/devicetrust/v1/os_type.pb.go b/api/gen/proto/go/teleport/devicetrust/v1/os_type.pb.go index fe032bac221e4..130a707e7577f 100644 --- a/api/gen/proto/go/teleport/devicetrust/v1/os_type.pb.go +++ b/api/gen/proto/go/teleport/devicetrust/v1/os_type.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/devicetrust/v1/os_type.proto diff --git a/api/gen/proto/go/teleport/devicetrust/v1/tpm.pb.go b/api/gen/proto/go/teleport/devicetrust/v1/tpm.pb.go index 0abc17890ae8f..a16ca14f2f84a 100644 --- a/api/gen/proto/go/teleport/devicetrust/v1/tpm.pb.go +++ b/api/gen/proto/go/teleport/devicetrust/v1/tpm.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/devicetrust/v1/tpm.proto diff --git a/api/gen/proto/go/teleport/devicetrust/v1/usage.pb.go b/api/gen/proto/go/teleport/devicetrust/v1/usage.pb.go index 03bf57e6c23f3..69c1da08ef4a3 100644 --- a/api/gen/proto/go/teleport/devicetrust/v1/usage.pb.go +++ b/api/gen/proto/go/teleport/devicetrust/v1/usage.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/devicetrust/v1/usage.proto diff --git a/api/gen/proto/go/teleport/devicetrust/v1/user_certificates.pb.go b/api/gen/proto/go/teleport/devicetrust/v1/user_certificates.pb.go index 26eef4bb4cc9f..715acb3bd3590 100644 --- a/api/gen/proto/go/teleport/devicetrust/v1/user_certificates.pb.go +++ b/api/gen/proto/go/teleport/devicetrust/v1/user_certificates.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/devicetrust/v1/user_certificates.proto diff --git a/api/gen/proto/go/teleport/discoveryconfig/v1/discoveryconfig.pb.go b/api/gen/proto/go/teleport/discoveryconfig/v1/discoveryconfig.pb.go index d2fe7eb49522e..ffff6129ffeb5 100644 --- a/api/gen/proto/go/teleport/discoveryconfig/v1/discoveryconfig.pb.go +++ b/api/gen/proto/go/teleport/discoveryconfig/v1/discoveryconfig.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/discoveryconfig/v1/discoveryconfig.proto diff --git a/api/gen/proto/go/teleport/discoveryconfig/v1/discoveryconfig_service.pb.go b/api/gen/proto/go/teleport/discoveryconfig/v1/discoveryconfig_service.pb.go index d7c8bb0eb5281..cc7bdae175b03 100644 --- a/api/gen/proto/go/teleport/discoveryconfig/v1/discoveryconfig_service.pb.go +++ b/api/gen/proto/go/teleport/discoveryconfig/v1/discoveryconfig_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/discoveryconfig/v1/discoveryconfig_service.proto diff --git a/api/gen/proto/go/teleport/dynamicwindows/v1/dynamicwindows_service.pb.go b/api/gen/proto/go/teleport/dynamicwindows/v1/dynamicwindows_service.pb.go index 275788654b629..5edba7e6e6385 100644 --- a/api/gen/proto/go/teleport/dynamicwindows/v1/dynamicwindows_service.pb.go +++ b/api/gen/proto/go/teleport/dynamicwindows/v1/dynamicwindows_service.pb.go @@ -17,7 +17,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/dynamicwindows/v1/dynamicwindows_service.proto diff --git a/api/gen/proto/go/teleport/embedding/v1/embedding.pb.go b/api/gen/proto/go/teleport/embedding/v1/embedding.pb.go index cc28555af57c6..70651a648498f 100644 --- a/api/gen/proto/go/teleport/embedding/v1/embedding.pb.go +++ b/api/gen/proto/go/teleport/embedding/v1/embedding.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/embedding/v1/embedding.proto diff --git a/api/gen/proto/go/teleport/externalauditstorage/v1/externalauditstorage.pb.go b/api/gen/proto/go/teleport/externalauditstorage/v1/externalauditstorage.pb.go index ce1b23ac0b651..8931edb6c885a 100644 --- a/api/gen/proto/go/teleport/externalauditstorage/v1/externalauditstorage.pb.go +++ b/api/gen/proto/go/teleport/externalauditstorage/v1/externalauditstorage.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/externalauditstorage/v1/externalauditstorage.proto diff --git a/api/gen/proto/go/teleport/externalauditstorage/v1/externalauditstorage_service.pb.go b/api/gen/proto/go/teleport/externalauditstorage/v1/externalauditstorage_service.pb.go index 761d99d9427ef..90c7b90da4f54 100644 --- a/api/gen/proto/go/teleport/externalauditstorage/v1/externalauditstorage_service.pb.go +++ b/api/gen/proto/go/teleport/externalauditstorage/v1/externalauditstorage_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/externalauditstorage/v1/externalauditstorage_service.proto diff --git a/api/gen/proto/go/teleport/gitserver/v1/git_server_service.pb.go b/api/gen/proto/go/teleport/gitserver/v1/git_server_service.pb.go index 980d3465450e2..cda6c07a39ee5 100644 --- a/api/gen/proto/go/teleport/gitserver/v1/git_server_service.pb.go +++ b/api/gen/proto/go/teleport/gitserver/v1/git_server_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/gitserver/v1/git_server_service.proto diff --git a/api/gen/proto/go/teleport/hardwarekeyagent/v1/hardwarekeyagent_service.pb.go b/api/gen/proto/go/teleport/hardwarekeyagent/v1/hardwarekeyagent_service.pb.go index 685578fda3ba1..5167983b9f115 100644 --- a/api/gen/proto/go/teleport/hardwarekeyagent/v1/hardwarekeyagent_service.pb.go +++ b/api/gen/proto/go/teleport/hardwarekeyagent/v1/hardwarekeyagent_service.pb.go @@ -16,7 +16,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/hardwarekeyagent/v1/hardwarekeyagent_service.proto diff --git a/api/gen/proto/go/teleport/header/v1/metadata.pb.go b/api/gen/proto/go/teleport/header/v1/metadata.pb.go index eba193282c680..bd181322c9f57 100644 --- a/api/gen/proto/go/teleport/header/v1/metadata.pb.go +++ b/api/gen/proto/go/teleport/header/v1/metadata.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/header/v1/metadata.proto diff --git a/api/gen/proto/go/teleport/header/v1/resourceheader.pb.go b/api/gen/proto/go/teleport/header/v1/resourceheader.pb.go index 5367f48cde949..808ffb909c807 100644 --- a/api/gen/proto/go/teleport/header/v1/resourceheader.pb.go +++ b/api/gen/proto/go/teleport/header/v1/resourceheader.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/header/v1/resourceheader.proto diff --git a/api/gen/proto/go/teleport/healthcheckconfig/v1/health_check_config.pb.go b/api/gen/proto/go/teleport/healthcheckconfig/v1/health_check_config.pb.go index a947bd74945c8..33a1c4ebdcd44 100644 --- a/api/gen/proto/go/teleport/healthcheckconfig/v1/health_check_config.pb.go +++ b/api/gen/proto/go/teleport/healthcheckconfig/v1/health_check_config.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/healthcheckconfig/v1/health_check_config.proto @@ -216,8 +216,17 @@ type Matcher struct { // empty value is ignored. The match result is logically ANDed with DBLabels, // if both are non-empty. DbLabelsExpression string `protobuf:"bytes,2,opt,name=db_labels_expression,json=dbLabelsExpression,proto3" json:"db_labels_expression,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache + // KubernetesLabels matches Kubernetes labels. An empty value is ignored. The match + // result is logically ANDed with KubernetesLabelsExpression, if both are non-empty. + KubernetesLabels []*v11.Label `protobuf:"bytes,3,rep,name=kubernetes_labels,json=kubernetesLabels,proto3" json:"kubernetes_labels,omitempty"` + // KubernetesLabelsExpression is a label predicate expression to match Kubernetes. 
An + // empty value is ignored. The match result is logically ANDed with KubernetesLabels, + // if both are non-empty. + KubernetesLabelsExpression string `protobuf:"bytes,4,opt,name=kubernetes_labels_expression,json=kubernetesLabelsExpression,proto3" json:"kubernetes_labels_expression,omitempty"` + // Disabled disables matches for all labels and expressions. + Disabled bool `protobuf:"varint,5,opt,name=disabled,proto3" json:"disabled,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache } func (x *Matcher) Reset() { @@ -264,6 +273,27 @@ func (x *Matcher) GetDbLabelsExpression() string { return "" } +func (x *Matcher) GetKubernetesLabels() []*v11.Label { + if x != nil { + return x.KubernetesLabels + } + return nil +} + +func (x *Matcher) GetKubernetesLabelsExpression() string { + if x != nil { + return x.KubernetesLabelsExpression + } + return "" +} + +func (x *Matcher) GetDisabled() bool { + if x != nil { + return x.Disabled + } + return false +} + var File_teleport_healthcheckconfig_v1_health_check_config_proto protoreflect.FileDescriptor const file_teleport_healthcheckconfig_v1_health_check_config_proto_rawDesc = "" + @@ -280,10 +310,13 @@ const file_teleport_healthcheckconfig_v1_health_check_config_proto_rawDesc = "" "\atimeout\x18\x02 \x01(\v2\x19.google.protobuf.DurationR\atimeout\x125\n" + "\binterval\x18\x03 \x01(\v2\x19.google.protobuf.DurationR\binterval\x12+\n" + "\x11healthy_threshold\x18\x04 \x01(\rR\x10healthyThreshold\x12/\n" + - "\x13unhealthy_threshold\x18\x05 \x01(\rR\x12unhealthyThreshold\"r\n" + + "\x13unhealthy_threshold\x18\x05 \x01(\rR\x12unhealthyThreshold\"\x97\x02\n" + "\aMatcher\x125\n" + "\tdb_labels\x18\x01 \x03(\v2\x18.teleport.label.v1.LabelR\bdbLabels\x120\n" + - "\x14db_labels_expression\x18\x02 \x01(\tR\x12dbLabelsExpressionBfZdgithub.com/gravitational/teleport/api/gen/proto/go/teleport/healthcheckconfig/v1;healthcheckconfigv1b\x06proto3" + "\x14db_labels_expression\x18\x02 
\x01(\tR\x12dbLabelsExpression\x12E\n" + + "\x11kubernetes_labels\x18\x03 \x03(\v2\x18.teleport.label.v1.LabelR\x10kubernetesLabels\x12@\n" + + "\x1ckubernetes_labels_expression\x18\x04 \x01(\tR\x1akubernetesLabelsExpression\x12\x1a\n" + + "\bdisabled\x18\x05 \x01(\bR\bdisabledBfZdgithub.com/gravitational/teleport/api/gen/proto/go/teleport/healthcheckconfig/v1;healthcheckconfigv1b\x06proto3" var ( file_teleport_healthcheckconfig_v1_health_check_config_proto_rawDescOnce sync.Once @@ -313,11 +346,12 @@ var file_teleport_healthcheckconfig_v1_health_check_config_proto_depIdxs = []int 4, // 3: teleport.healthcheckconfig.v1.HealthCheckConfigSpec.timeout:type_name -> google.protobuf.Duration 4, // 4: teleport.healthcheckconfig.v1.HealthCheckConfigSpec.interval:type_name -> google.protobuf.Duration 5, // 5: teleport.healthcheckconfig.v1.Matcher.db_labels:type_name -> teleport.label.v1.Label - 6, // [6:6] is the sub-list for method output_type - 6, // [6:6] is the sub-list for method input_type - 6, // [6:6] is the sub-list for extension type_name - 6, // [6:6] is the sub-list for extension extendee - 0, // [0:6] is the sub-list for field type_name + 5, // 6: teleport.healthcheckconfig.v1.Matcher.kubernetes_labels:type_name -> teleport.label.v1.Label + 7, // [7:7] is the sub-list for method output_type + 7, // [7:7] is the sub-list for method input_type + 7, // [7:7] is the sub-list for extension type_name + 7, // [7:7] is the sub-list for extension extendee + 0, // [0:7] is the sub-list for field type_name } func init() { file_teleport_healthcheckconfig_v1_health_check_config_proto_init() } diff --git a/api/gen/proto/go/teleport/healthcheckconfig/v1/health_check_config_service.pb.go b/api/gen/proto/go/teleport/healthcheckconfig/v1/health_check_config_service.pb.go index 9acd34767a6ef..db6c0af5845af 100644 --- a/api/gen/proto/go/teleport/healthcheckconfig/v1/health_check_config_service.pb.go +++ 
b/api/gen/proto/go/teleport/healthcheckconfig/v1/health_check_config_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/healthcheckconfig/v1/health_check_config_service.proto diff --git a/api/gen/proto/go/teleport/identitycenter/v1/identitycenter.pb.go b/api/gen/proto/go/teleport/identitycenter/v1/identitycenter.pb.go index dc348468246d8..e993559e841ef 100644 --- a/api/gen/proto/go/teleport/identitycenter/v1/identitycenter.pb.go +++ b/api/gen/proto/go/teleport/identitycenter/v1/identitycenter.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/identitycenter/v1/identitycenter.proto diff --git a/api/gen/proto/go/teleport/identitycenter/v1/identitycenter_service.pb.go b/api/gen/proto/go/teleport/identitycenter/v1/identitycenter_service.pb.go deleted file mode 100644 index 0f4a252cbfd6d..0000000000000 --- a/api/gen/proto/go/teleport/identitycenter/v1/identitycenter_service.pb.go +++ /dev/null @@ -1,260 +0,0 @@ -// Copyright 2024 Gravitational, Inc. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -// Code generated by protoc-gen-go. DO NOT EDIT. 
-// versions: -// protoc-gen-go v1.36.6 -// protoc (unknown) -// source: teleport/identitycenter/v1/identitycenter_service.proto - -package identitycenterv1 - -import ( - protoreflect "google.golang.org/protobuf/reflect/protoreflect" - protoimpl "google.golang.org/protobuf/runtime/protoimpl" - emptypb "google.golang.org/protobuf/types/known/emptypb" - reflect "reflect" - sync "sync" - unsafe "unsafe" -) - -const ( - // Verify that this generated code is sufficiently up-to-date. - _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) - // Verify that runtime/protoimpl is sufficiently up-to-date. - _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) -) - -// DeleteAllIdentityCenterAccountsRequest is a request to delete all Identity Center imported accounts. -type DeleteAllIdentityCenterAccountsRequest struct { - state protoimpl.MessageState `protogen:"open.v1"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *DeleteAllIdentityCenterAccountsRequest) Reset() { - *x = DeleteAllIdentityCenterAccountsRequest{} - mi := &file_teleport_identitycenter_v1_identitycenter_service_proto_msgTypes[0] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *DeleteAllIdentityCenterAccountsRequest) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*DeleteAllIdentityCenterAccountsRequest) ProtoMessage() {} - -func (x *DeleteAllIdentityCenterAccountsRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_identitycenter_v1_identitycenter_service_proto_msgTypes[0] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use DeleteAllIdentityCenterAccountsRequest.ProtoReflect.Descriptor instead. 
-func (*DeleteAllIdentityCenterAccountsRequest) Descriptor() ([]byte, []int) { - return file_teleport_identitycenter_v1_identitycenter_service_proto_rawDescGZIP(), []int{0} -} - -// DeleteAllAccountAssignmentsRequest is a request to delete all Identity Center account assignments. -type DeleteAllAccountAssignmentsRequest struct { - state protoimpl.MessageState `protogen:"open.v1"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *DeleteAllAccountAssignmentsRequest) Reset() { - *x = DeleteAllAccountAssignmentsRequest{} - mi := &file_teleport_identitycenter_v1_identitycenter_service_proto_msgTypes[1] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *DeleteAllAccountAssignmentsRequest) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*DeleteAllAccountAssignmentsRequest) ProtoMessage() {} - -func (x *DeleteAllAccountAssignmentsRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_identitycenter_v1_identitycenter_service_proto_msgTypes[1] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use DeleteAllAccountAssignmentsRequest.ProtoReflect.Descriptor instead. -func (*DeleteAllAccountAssignmentsRequest) Descriptor() ([]byte, []int) { - return file_teleport_identitycenter_v1_identitycenter_service_proto_rawDescGZIP(), []int{1} -} - -// DeleteAllPrincipalAssignmentsRequest is a request to delete all Identity Center principal assignments. 
-type DeleteAllPrincipalAssignmentsRequest struct { - state protoimpl.MessageState `protogen:"open.v1"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *DeleteAllPrincipalAssignmentsRequest) Reset() { - *x = DeleteAllPrincipalAssignmentsRequest{} - mi := &file_teleport_identitycenter_v1_identitycenter_service_proto_msgTypes[2] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *DeleteAllPrincipalAssignmentsRequest) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*DeleteAllPrincipalAssignmentsRequest) ProtoMessage() {} - -func (x *DeleteAllPrincipalAssignmentsRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_identitycenter_v1_identitycenter_service_proto_msgTypes[2] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use DeleteAllPrincipalAssignmentsRequest.ProtoReflect.Descriptor instead. -func (*DeleteAllPrincipalAssignmentsRequest) Descriptor() ([]byte, []int) { - return file_teleport_identitycenter_v1_identitycenter_service_proto_rawDescGZIP(), []int{2} -} - -// DeleteAllPermissionSetsRequest is a request to delete all Identity Center permission sets. 
-type DeleteAllPermissionSetsRequest struct { - state protoimpl.MessageState `protogen:"open.v1"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *DeleteAllPermissionSetsRequest) Reset() { - *x = DeleteAllPermissionSetsRequest{} - mi := &file_teleport_identitycenter_v1_identitycenter_service_proto_msgTypes[3] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *DeleteAllPermissionSetsRequest) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*DeleteAllPermissionSetsRequest) ProtoMessage() {} - -func (x *DeleteAllPermissionSetsRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_identitycenter_v1_identitycenter_service_proto_msgTypes[3] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use DeleteAllPermissionSetsRequest.ProtoReflect.Descriptor instead. 
-func (*DeleteAllPermissionSetsRequest) Descriptor() ([]byte, []int) { - return file_teleport_identitycenter_v1_identitycenter_service_proto_rawDescGZIP(), []int{3} -} - -var File_teleport_identitycenter_v1_identitycenter_service_proto protoreflect.FileDescriptor - -const file_teleport_identitycenter_v1_identitycenter_service_proto_rawDesc = "" + - "\n" + - "7teleport/identitycenter/v1/identitycenter_service.proto\x12\x1ateleport.identitycenter.v1\x1a\x1bgoogle/protobuf/empty.proto\"(\n" + - "&DeleteAllIdentityCenterAccountsRequest\"$\n" + - "\"DeleteAllAccountAssignmentsRequest\"&\n" + - "$DeleteAllPrincipalAssignmentsRequest\" \n" + - "\x1eDeleteAllPermissionSetsRequest2\xf7\x03\n" + - "\x15IdentityCenterService\x12}\n" + - "\x1fDeleteAllIdentityCenterAccounts\x12B.teleport.identitycenter.v1.DeleteAllIdentityCenterAccountsRequest\x1a\x16.google.protobuf.Empty\x12u\n" + - "\x1bDeleteAllAccountAssignments\x12>.teleport.identitycenter.v1.DeleteAllAccountAssignmentsRequest\x1a\x16.google.protobuf.Empty\x12y\n" + - "\x1dDeleteAllPrincipalAssignments\x12@.teleport.identitycenter.v1.DeleteAllPrincipalAssignmentsRequest\x1a\x16.google.protobuf.Empty\x12m\n" + - "\x17DeleteAllPermissionSets\x12:.teleport.identitycenter.v1.DeleteAllPermissionSetsRequest\x1a\x16.google.protobuf.EmptyB`Z^github.com/gravitational/teleport/api/gen/proto/go/teleport/identitycenter/v1;identitycenterv1b\x06proto3" - -var ( - file_teleport_identitycenter_v1_identitycenter_service_proto_rawDescOnce sync.Once - file_teleport_identitycenter_v1_identitycenter_service_proto_rawDescData []byte -) - -func file_teleport_identitycenter_v1_identitycenter_service_proto_rawDescGZIP() []byte { - file_teleport_identitycenter_v1_identitycenter_service_proto_rawDescOnce.Do(func() { - file_teleport_identitycenter_v1_identitycenter_service_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_teleport_identitycenter_v1_identitycenter_service_proto_rawDesc), 
len(file_teleport_identitycenter_v1_identitycenter_service_proto_rawDesc))) - }) - return file_teleport_identitycenter_v1_identitycenter_service_proto_rawDescData -} - -var file_teleport_identitycenter_v1_identitycenter_service_proto_msgTypes = make([]protoimpl.MessageInfo, 4) -var file_teleport_identitycenter_v1_identitycenter_service_proto_goTypes = []any{ - (*DeleteAllIdentityCenterAccountsRequest)(nil), // 0: teleport.identitycenter.v1.DeleteAllIdentityCenterAccountsRequest - (*DeleteAllAccountAssignmentsRequest)(nil), // 1: teleport.identitycenter.v1.DeleteAllAccountAssignmentsRequest - (*DeleteAllPrincipalAssignmentsRequest)(nil), // 2: teleport.identitycenter.v1.DeleteAllPrincipalAssignmentsRequest - (*DeleteAllPermissionSetsRequest)(nil), // 3: teleport.identitycenter.v1.DeleteAllPermissionSetsRequest - (*emptypb.Empty)(nil), // 4: google.protobuf.Empty -} -var file_teleport_identitycenter_v1_identitycenter_service_proto_depIdxs = []int32{ - 0, // 0: teleport.identitycenter.v1.IdentityCenterService.DeleteAllIdentityCenterAccounts:input_type -> teleport.identitycenter.v1.DeleteAllIdentityCenterAccountsRequest - 1, // 1: teleport.identitycenter.v1.IdentityCenterService.DeleteAllAccountAssignments:input_type -> teleport.identitycenter.v1.DeleteAllAccountAssignmentsRequest - 2, // 2: teleport.identitycenter.v1.IdentityCenterService.DeleteAllPrincipalAssignments:input_type -> teleport.identitycenter.v1.DeleteAllPrincipalAssignmentsRequest - 3, // 3: teleport.identitycenter.v1.IdentityCenterService.DeleteAllPermissionSets:input_type -> teleport.identitycenter.v1.DeleteAllPermissionSetsRequest - 4, // 4: teleport.identitycenter.v1.IdentityCenterService.DeleteAllIdentityCenterAccounts:output_type -> google.protobuf.Empty - 4, // 5: teleport.identitycenter.v1.IdentityCenterService.DeleteAllAccountAssignments:output_type -> google.protobuf.Empty - 4, // 6: teleport.identitycenter.v1.IdentityCenterService.DeleteAllPrincipalAssignments:output_type -> 
google.protobuf.Empty - 4, // 7: teleport.identitycenter.v1.IdentityCenterService.DeleteAllPermissionSets:output_type -> google.protobuf.Empty - 4, // [4:8] is the sub-list for method output_type - 0, // [0:4] is the sub-list for method input_type - 0, // [0:0] is the sub-list for extension type_name - 0, // [0:0] is the sub-list for extension extendee - 0, // [0:0] is the sub-list for field type_name -} - -func init() { file_teleport_identitycenter_v1_identitycenter_service_proto_init() } -func file_teleport_identitycenter_v1_identitycenter_service_proto_init() { - if File_teleport_identitycenter_v1_identitycenter_service_proto != nil { - return - } - type x struct{} - out := protoimpl.TypeBuilder{ - File: protoimpl.DescBuilder{ - GoPackagePath: reflect.TypeOf(x{}).PkgPath(), - RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_identitycenter_v1_identitycenter_service_proto_rawDesc), len(file_teleport_identitycenter_v1_identitycenter_service_proto_rawDesc)), - NumEnums: 0, - NumMessages: 4, - NumExtensions: 0, - NumServices: 1, - }, - GoTypes: file_teleport_identitycenter_v1_identitycenter_service_proto_goTypes, - DependencyIndexes: file_teleport_identitycenter_v1_identitycenter_service_proto_depIdxs, - MessageInfos: file_teleport_identitycenter_v1_identitycenter_service_proto_msgTypes, - }.Build() - File_teleport_identitycenter_v1_identitycenter_service_proto = out.File - file_teleport_identitycenter_v1_identitycenter_service_proto_goTypes = nil - file_teleport_identitycenter_v1_identitycenter_service_proto_depIdxs = nil -} diff --git a/api/gen/proto/go/teleport/identitycenter/v1/identitycenter_service_grpc.pb.go b/api/gen/proto/go/teleport/identitycenter/v1/identitycenter_service_grpc.pb.go deleted file mode 100644 index de6e1b076ca42..0000000000000 --- a/api/gen/proto/go/teleport/identitycenter/v1/identitycenter_service_grpc.pb.go +++ /dev/null @@ -1,264 +0,0 @@ -// Copyright 2024 Gravitational, Inc. 
-// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -// Code generated by protoc-gen-go-grpc. DO NOT EDIT. -// versions: -// - protoc-gen-go-grpc v1.5.1 -// - protoc (unknown) -// source: teleport/identitycenter/v1/identitycenter_service.proto - -package identitycenterv1 - -import ( - context "context" - grpc "google.golang.org/grpc" - codes "google.golang.org/grpc/codes" - status "google.golang.org/grpc/status" - emptypb "google.golang.org/protobuf/types/known/emptypb" -) - -// This is a compile-time assertion to ensure that this generated file -// is compatible with the grpc package it is being compiled against. -// Requires gRPC-Go v1.64.0 or later. -const _ = grpc.SupportPackageIsVersion9 - -const ( - IdentityCenterService_DeleteAllIdentityCenterAccounts_FullMethodName = "/teleport.identitycenter.v1.IdentityCenterService/DeleteAllIdentityCenterAccounts" - IdentityCenterService_DeleteAllAccountAssignments_FullMethodName = "/teleport.identitycenter.v1.IdentityCenterService/DeleteAllAccountAssignments" - IdentityCenterService_DeleteAllPrincipalAssignments_FullMethodName = "/teleport.identitycenter.v1.IdentityCenterService/DeleteAllPrincipalAssignments" - IdentityCenterService_DeleteAllPermissionSets_FullMethodName = "/teleport.identitycenter.v1.IdentityCenterService/DeleteAllPermissionSets" -) - -// IdentityCenterServiceClient is the client API for IdentityCenterService service. 
-// -// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream. -// -// IdentityCenterService provides methods to manage Identity Center -// resources. -type IdentityCenterServiceClient interface { - // DeleteAllIdentityCenterAccounts deletes all Identity Center accounts. - DeleteAllIdentityCenterAccounts(ctx context.Context, in *DeleteAllIdentityCenterAccountsRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) - // DeleteAllAccountAssignments deletes all Identity Center Account assignments. - DeleteAllAccountAssignments(ctx context.Context, in *DeleteAllAccountAssignmentsRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) - // DeleteAllPrincipalAssignments deletes all Identity Center principal assignments. - DeleteAllPrincipalAssignments(ctx context.Context, in *DeleteAllPrincipalAssignmentsRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) - // DeleteAllPermissionSets deletes all Identity Center permission sets. - DeleteAllPermissionSets(ctx context.Context, in *DeleteAllPermissionSetsRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) -} - -type identityCenterServiceClient struct { - cc grpc.ClientConnInterface -} - -func NewIdentityCenterServiceClient(cc grpc.ClientConnInterface) IdentityCenterServiceClient { - return &identityCenterServiceClient{cc} -} - -func (c *identityCenterServiceClient) DeleteAllIdentityCenterAccounts(ctx context.Context, in *DeleteAllIdentityCenterAccountsRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) { - cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) - out := new(emptypb.Empty) - err := c.cc.Invoke(ctx, IdentityCenterService_DeleteAllIdentityCenterAccounts_FullMethodName, in, out, cOpts...) 
- if err != nil { - return nil, err - } - return out, nil -} - -func (c *identityCenterServiceClient) DeleteAllAccountAssignments(ctx context.Context, in *DeleteAllAccountAssignmentsRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) { - cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) - out := new(emptypb.Empty) - err := c.cc.Invoke(ctx, IdentityCenterService_DeleteAllAccountAssignments_FullMethodName, in, out, cOpts...) - if err != nil { - return nil, err - } - return out, nil -} - -func (c *identityCenterServiceClient) DeleteAllPrincipalAssignments(ctx context.Context, in *DeleteAllPrincipalAssignmentsRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) { - cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) - out := new(emptypb.Empty) - err := c.cc.Invoke(ctx, IdentityCenterService_DeleteAllPrincipalAssignments_FullMethodName, in, out, cOpts...) - if err != nil { - return nil, err - } - return out, nil -} - -func (c *identityCenterServiceClient) DeleteAllPermissionSets(ctx context.Context, in *DeleteAllPermissionSetsRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) { - cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) - out := new(emptypb.Empty) - err := c.cc.Invoke(ctx, IdentityCenterService_DeleteAllPermissionSets_FullMethodName, in, out, cOpts...) - if err != nil { - return nil, err - } - return out, nil -} - -// IdentityCenterServiceServer is the server API for IdentityCenterService service. -// All implementations must embed UnimplementedIdentityCenterServiceServer -// for forward compatibility. -// -// IdentityCenterService provides methods to manage Identity Center -// resources. -type IdentityCenterServiceServer interface { - // DeleteAllIdentityCenterAccounts deletes all Identity Center accounts. - DeleteAllIdentityCenterAccounts(context.Context, *DeleteAllIdentityCenterAccountsRequest) (*emptypb.Empty, error) - // DeleteAllAccountAssignments deletes all Identity Center Account assignments. 
- DeleteAllAccountAssignments(context.Context, *DeleteAllAccountAssignmentsRequest) (*emptypb.Empty, error) - // DeleteAllPrincipalAssignments deletes all Identity Center principal assignments. - DeleteAllPrincipalAssignments(context.Context, *DeleteAllPrincipalAssignmentsRequest) (*emptypb.Empty, error) - // DeleteAllPermissionSets deletes all Identity Center permission sets. - DeleteAllPermissionSets(context.Context, *DeleteAllPermissionSetsRequest) (*emptypb.Empty, error) - mustEmbedUnimplementedIdentityCenterServiceServer() -} - -// UnimplementedIdentityCenterServiceServer must be embedded to have -// forward compatible implementations. -// -// NOTE: this should be embedded by value instead of pointer to avoid a nil -// pointer dereference when methods are called. -type UnimplementedIdentityCenterServiceServer struct{} - -func (UnimplementedIdentityCenterServiceServer) DeleteAllIdentityCenterAccounts(context.Context, *DeleteAllIdentityCenterAccountsRequest) (*emptypb.Empty, error) { - return nil, status.Errorf(codes.Unimplemented, "method DeleteAllIdentityCenterAccounts not implemented") -} -func (UnimplementedIdentityCenterServiceServer) DeleteAllAccountAssignments(context.Context, *DeleteAllAccountAssignmentsRequest) (*emptypb.Empty, error) { - return nil, status.Errorf(codes.Unimplemented, "method DeleteAllAccountAssignments not implemented") -} -func (UnimplementedIdentityCenterServiceServer) DeleteAllPrincipalAssignments(context.Context, *DeleteAllPrincipalAssignmentsRequest) (*emptypb.Empty, error) { - return nil, status.Errorf(codes.Unimplemented, "method DeleteAllPrincipalAssignments not implemented") -} -func (UnimplementedIdentityCenterServiceServer) DeleteAllPermissionSets(context.Context, *DeleteAllPermissionSetsRequest) (*emptypb.Empty, error) { - return nil, status.Errorf(codes.Unimplemented, "method DeleteAllPermissionSets not implemented") -} -func (UnimplementedIdentityCenterServiceServer) mustEmbedUnimplementedIdentityCenterServiceServer() {} 
-func (UnimplementedIdentityCenterServiceServer) testEmbeddedByValue() {} - -// UnsafeIdentityCenterServiceServer may be embedded to opt out of forward compatibility for this service. -// Use of this interface is not recommended, as added methods to IdentityCenterServiceServer will -// result in compilation errors. -type UnsafeIdentityCenterServiceServer interface { - mustEmbedUnimplementedIdentityCenterServiceServer() -} - -func RegisterIdentityCenterServiceServer(s grpc.ServiceRegistrar, srv IdentityCenterServiceServer) { - // If the following call pancis, it indicates UnimplementedIdentityCenterServiceServer was - // embedded by pointer and is nil. This will cause panics if an - // unimplemented method is ever invoked, so we test this at initialization - // time to prevent it from happening at runtime later due to I/O. - if t, ok := srv.(interface{ testEmbeddedByValue() }); ok { - t.testEmbeddedByValue() - } - s.RegisterService(&IdentityCenterService_ServiceDesc, srv) -} - -func _IdentityCenterService_DeleteAllIdentityCenterAccounts_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { - in := new(DeleteAllIdentityCenterAccountsRequest) - if err := dec(in); err != nil { - return nil, err - } - if interceptor == nil { - return srv.(IdentityCenterServiceServer).DeleteAllIdentityCenterAccounts(ctx, in) - } - info := &grpc.UnaryServerInfo{ - Server: srv, - FullMethod: IdentityCenterService_DeleteAllIdentityCenterAccounts_FullMethodName, - } - handler := func(ctx context.Context, req interface{}) (interface{}, error) { - return srv.(IdentityCenterServiceServer).DeleteAllIdentityCenterAccounts(ctx, req.(*DeleteAllIdentityCenterAccountsRequest)) - } - return interceptor(ctx, in, info, handler) -} - -func _IdentityCenterService_DeleteAllAccountAssignments_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, 
error) { - in := new(DeleteAllAccountAssignmentsRequest) - if err := dec(in); err != nil { - return nil, err - } - if interceptor == nil { - return srv.(IdentityCenterServiceServer).DeleteAllAccountAssignments(ctx, in) - } - info := &grpc.UnaryServerInfo{ - Server: srv, - FullMethod: IdentityCenterService_DeleteAllAccountAssignments_FullMethodName, - } - handler := func(ctx context.Context, req interface{}) (interface{}, error) { - return srv.(IdentityCenterServiceServer).DeleteAllAccountAssignments(ctx, req.(*DeleteAllAccountAssignmentsRequest)) - } - return interceptor(ctx, in, info, handler) -} - -func _IdentityCenterService_DeleteAllPrincipalAssignments_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { - in := new(DeleteAllPrincipalAssignmentsRequest) - if err := dec(in); err != nil { - return nil, err - } - if interceptor == nil { - return srv.(IdentityCenterServiceServer).DeleteAllPrincipalAssignments(ctx, in) - } - info := &grpc.UnaryServerInfo{ - Server: srv, - FullMethod: IdentityCenterService_DeleteAllPrincipalAssignments_FullMethodName, - } - handler := func(ctx context.Context, req interface{}) (interface{}, error) { - return srv.(IdentityCenterServiceServer).DeleteAllPrincipalAssignments(ctx, req.(*DeleteAllPrincipalAssignmentsRequest)) - } - return interceptor(ctx, in, info, handler) -} - -func _IdentityCenterService_DeleteAllPermissionSets_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { - in := new(DeleteAllPermissionSetsRequest) - if err := dec(in); err != nil { - return nil, err - } - if interceptor == nil { - return srv.(IdentityCenterServiceServer).DeleteAllPermissionSets(ctx, in) - } - info := &grpc.UnaryServerInfo{ - Server: srv, - FullMethod: IdentityCenterService_DeleteAllPermissionSets_FullMethodName, - } - handler := func(ctx context.Context, req interface{}) 
(interface{}, error) { - return srv.(IdentityCenterServiceServer).DeleteAllPermissionSets(ctx, req.(*DeleteAllPermissionSetsRequest)) - } - return interceptor(ctx, in, info, handler) -} - -// IdentityCenterService_ServiceDesc is the grpc.ServiceDesc for IdentityCenterService service. -// It's only intended for direct use with grpc.RegisterService, -// and not to be introspected or modified (even as a copy) -var IdentityCenterService_ServiceDesc = grpc.ServiceDesc{ - ServiceName: "teleport.identitycenter.v1.IdentityCenterService", - HandlerType: (*IdentityCenterServiceServer)(nil), - Methods: []grpc.MethodDesc{ - { - MethodName: "DeleteAllIdentityCenterAccounts", - Handler: _IdentityCenterService_DeleteAllIdentityCenterAccounts_Handler, - }, - { - MethodName: "DeleteAllAccountAssignments", - Handler: _IdentityCenterService_DeleteAllAccountAssignments_Handler, - }, - { - MethodName: "DeleteAllPrincipalAssignments", - Handler: _IdentityCenterService_DeleteAllPrincipalAssignments_Handler, - }, - { - MethodName: "DeleteAllPermissionSets", - Handler: _IdentityCenterService_DeleteAllPermissionSets_Handler, - }, - }, - Streams: []grpc.StreamDesc{}, - Metadata: "teleport/identitycenter/v1/identitycenter_service.proto", -} diff --git a/api/gen/proto/go/teleport/integration/v1/awsoidc_service.pb.go b/api/gen/proto/go/teleport/integration/v1/awsoidc_service.pb.go index f7c51564607a7..4b8c526cdcf80 100644 --- a/api/gen/proto/go/teleport/integration/v1/awsoidc_service.pb.go +++ b/api/gen/proto/go/teleport/integration/v1/awsoidc_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/integration/v1/awsoidc_service.proto diff --git a/api/gen/proto/go/teleport/integration/v1/awsra_service.pb.go b/api/gen/proto/go/teleport/integration/v1/awsra_service.pb.go new file mode 100644 index 0000000000000..107496ed43dc3 --- /dev/null +++ b/api/gen/proto/go/teleport/integration/v1/awsra_service.pb.go @@ -0,0 +1,598 @@ +// Copyright 2025 Gravitational, Inc +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Code generated by protoc-gen-go. DO NOT EDIT. +// versions: +// protoc-gen-go v1.36.8 +// protoc (unknown) +// source: teleport/integration/v1/awsra_service.proto + +package integrationv1 + +import ( + protoreflect "google.golang.org/protobuf/reflect/protoreflect" + protoimpl "google.golang.org/protobuf/runtime/protoimpl" + reflect "reflect" + sync "sync" + unsafe "unsafe" +) + +const ( + // Verify that this generated code is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) + // Verify that runtime/protoimpl is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) +) + +// AWSRolesAnywherePingRequest is a request for doing a health check against the configured integration.
+type AWSRolesAnywherePingRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Types that are valid to be assigned to Mode: + // + // *AWSRolesAnywherePingRequest_Integration + // *AWSRolesAnywherePingRequest_Custom + Mode isAWSRolesAnywherePingRequest_Mode `protobuf_oneof:"mode"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *AWSRolesAnywherePingRequest) Reset() { + *x = AWSRolesAnywherePingRequest{} + mi := &file_teleport_integration_v1_awsra_service_proto_msgTypes[0] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *AWSRolesAnywherePingRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*AWSRolesAnywherePingRequest) ProtoMessage() {} + +func (x *AWSRolesAnywherePingRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_integration_v1_awsra_service_proto_msgTypes[0] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use AWSRolesAnywherePingRequest.ProtoReflect.Descriptor instead. 
+func (*AWSRolesAnywherePingRequest) Descriptor() ([]byte, []int) { + return file_teleport_integration_v1_awsra_service_proto_rawDescGZIP(), []int{0} +} + +func (x *AWSRolesAnywherePingRequest) GetMode() isAWSRolesAnywherePingRequest_Mode { + if x != nil { + return x.Mode + } + return nil +} + +func (x *AWSRolesAnywherePingRequest) GetIntegration() string { + if x != nil { + if x, ok := x.Mode.(*AWSRolesAnywherePingRequest_Integration); ok { + return x.Integration + } + } + return "" +} + +func (x *AWSRolesAnywherePingRequest) GetCustom() *AWSRolesAnywherePingRequestWithoutIntegration { + if x != nil { + if x, ok := x.Mode.(*AWSRolesAnywherePingRequest_Custom); ok { + return x.Custom + } + } + return nil +} + +type isAWSRolesAnywherePingRequest_Mode interface { + isAWSRolesAnywherePingRequest_Mode() +} + +type AWSRolesAnywherePingRequest_Integration struct { + // Use an integration to perform the Ping operation. + Integration string `protobuf:"bytes,1,opt,name=integration,proto3,oneof"` +} + +type AWSRolesAnywherePingRequest_Custom struct { + // Use a Trust Anchor, Profile and Role to perform the Ping operation. + // This is useful when the integration is not configured. + Custom *AWSRolesAnywherePingRequestWithoutIntegration `protobuf:"bytes,2,opt,name=custom,proto3,oneof"` +} + +func (*AWSRolesAnywherePingRequest_Integration) isAWSRolesAnywherePingRequest_Mode() {} + +func (*AWSRolesAnywherePingRequest_Custom) isAWSRolesAnywherePingRequest_Mode() {} + +// Identifies the Trust Anchor, Profile and Role to use for the Ping operation. +type AWSRolesAnywherePingRequestWithoutIntegration struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The AWS Roles Anywhere Trust Anchor ARN to be used when generating the token. + TrustAnchorArn string `protobuf:"bytes,1,opt,name=trust_anchor_arn,json=trustAnchorArn,proto3" json:"trust_anchor_arn,omitempty"` + // The AWS Roles Anywhere Profile ARN to be used when generating the token. 
+ ProfileArn string `protobuf:"bytes,3,opt,name=profile_arn,json=profileArn,proto3" json:"profile_arn,omitempty"` + // The AWS Role ARN to be used when generating the token. + RoleArn string `protobuf:"bytes,4,opt,name=role_arn,json=roleArn,proto3" json:"role_arn,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *AWSRolesAnywherePingRequestWithoutIntegration) Reset() { + *x = AWSRolesAnywherePingRequestWithoutIntegration{} + mi := &file_teleport_integration_v1_awsra_service_proto_msgTypes[1] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *AWSRolesAnywherePingRequestWithoutIntegration) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*AWSRolesAnywherePingRequestWithoutIntegration) ProtoMessage() {} + +func (x *AWSRolesAnywherePingRequestWithoutIntegration) ProtoReflect() protoreflect.Message { + mi := &file_teleport_integration_v1_awsra_service_proto_msgTypes[1] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use AWSRolesAnywherePingRequestWithoutIntegration.ProtoReflect.Descriptor instead. +func (*AWSRolesAnywherePingRequestWithoutIntegration) Descriptor() ([]byte, []int) { + return file_teleport_integration_v1_awsra_service_proto_rawDescGZIP(), []int{1} +} + +func (x *AWSRolesAnywherePingRequestWithoutIntegration) GetTrustAnchorArn() string { + if x != nil { + return x.TrustAnchorArn + } + return "" +} + +func (x *AWSRolesAnywherePingRequestWithoutIntegration) GetProfileArn() string { + if x != nil { + return x.ProfileArn + } + return "" +} + +func (x *AWSRolesAnywherePingRequestWithoutIntegration) GetRoleArn() string { + if x != nil { + return x.RoleArn + } + return "" +} + +// AWSRolesAnywherePingResponse contains the response for the Ping operation. 
+type AWSRolesAnywherePingResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The AWS account ID number of the account that owns or contains the calling entity. + AccountId string `protobuf:"bytes,1,opt,name=account_id,json=accountId,proto3" json:"account_id,omitempty"` + // The AWS ARN associated with the calling entity. + Arn string `protobuf:"bytes,2,opt,name=arn,proto3" json:"arn,omitempty"` + // The unique identifier of the calling entity. + UserId string `protobuf:"bytes,3,opt,name=user_id,json=userId,proto3" json:"user_id,omitempty"` + // The number of AWS Roles Anywhere Profiles that are active and have at least one associated Role. + ProfileCount int32 `protobuf:"varint,4,opt,name=profile_count,json=profileCount,proto3" json:"profile_count,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *AWSRolesAnywherePingResponse) Reset() { + *x = AWSRolesAnywherePingResponse{} + mi := &file_teleport_integration_v1_awsra_service_proto_msgTypes[2] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *AWSRolesAnywherePingResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*AWSRolesAnywherePingResponse) ProtoMessage() {} + +func (x *AWSRolesAnywherePingResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_integration_v1_awsra_service_proto_msgTypes[2] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use AWSRolesAnywherePingResponse.ProtoReflect.Descriptor instead. 
+func (*AWSRolesAnywherePingResponse) Descriptor() ([]byte, []int) { + return file_teleport_integration_v1_awsra_service_proto_rawDescGZIP(), []int{2} +} + +func (x *AWSRolesAnywherePingResponse) GetAccountId() string { + if x != nil { + return x.AccountId + } + return "" +} + +func (x *AWSRolesAnywherePingResponse) GetArn() string { + if x != nil { + return x.Arn + } + return "" +} + +func (x *AWSRolesAnywherePingResponse) GetUserId() string { + if x != nil { + return x.UserId + } + return "" +} + +func (x *AWSRolesAnywherePingResponse) GetProfileCount() int32 { + if x != nil { + return x.ProfileCount + } + return 0 +} + +// ListRolesAnywhereProfilesRequest is a request to list AWS Roles Anywhere Profiles. +type ListRolesAnywhereProfilesRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Integration is the AWS Roles Anywhere Integration name. + Integration string `protobuf:"bytes,1,opt,name=integration,proto3" json:"integration,omitempty"` + // page_size is the max size of the page to request. + // Depending on the filters, the actual number of profiles returned may be less than this value. + PageSize int32 `protobuf:"varint,2,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // next_page_token is the page token. + NextPageToken string `protobuf:"bytes,3,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + // ProfileNameFilters is a list of filters applied to the profile name. + // Only matching profiles will be returned. + // If empty, no filtering is applied. 
+ // + // Filters can be globs, for example: + // + // profile* + // *name* + // + // Or regexes if they're prefixed and suffixed with ^ and $, for example: + // + // ^profile.*$ + // ^.*name.*$ + ProfileNameFilters []string `protobuf:"bytes,4,rep,name=profile_name_filters,json=profileNameFilters,proto3" json:"profile_name_filters,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListRolesAnywhereProfilesRequest) Reset() { + *x = ListRolesAnywhereProfilesRequest{} + mi := &file_teleport_integration_v1_awsra_service_proto_msgTypes[3] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListRolesAnywhereProfilesRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListRolesAnywhereProfilesRequest) ProtoMessage() {} + +func (x *ListRolesAnywhereProfilesRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_integration_v1_awsra_service_proto_msgTypes[3] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListRolesAnywhereProfilesRequest.ProtoReflect.Descriptor instead. 
+func (*ListRolesAnywhereProfilesRequest) Descriptor() ([]byte, []int) { + return file_teleport_integration_v1_awsra_service_proto_rawDescGZIP(), []int{3} +} + +func (x *ListRolesAnywhereProfilesRequest) GetIntegration() string { + if x != nil { + return x.Integration + } + return "" +} + +func (x *ListRolesAnywhereProfilesRequest) GetPageSize() int32 { + if x != nil { + return x.PageSize + } + return 0 +} + +func (x *ListRolesAnywhereProfilesRequest) GetNextPageToken() string { + if x != nil { + return x.NextPageToken + } + return "" +} + +func (x *ListRolesAnywhereProfilesRequest) GetProfileNameFilters() []string { + if x != nil { + return x.ProfileNameFilters + } + return nil +} + +// ListRolesAnywhereProfilesResponse contains the response for the ListRolesAnywhereProfiles operation. +type ListRolesAnywhereProfilesResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Profiles is a list of AWS Roles Anywhere Profiles. + Profiles []*RolesAnywhereProfile `protobuf:"bytes,1,rep,name=profiles,proto3" json:"profiles,omitempty"` + // NextPageToken is used to paginate the results. 
+ NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListRolesAnywhereProfilesResponse) Reset() { + *x = ListRolesAnywhereProfilesResponse{} + mi := &file_teleport_integration_v1_awsra_service_proto_msgTypes[4] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListRolesAnywhereProfilesResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListRolesAnywhereProfilesResponse) ProtoMessage() {} + +func (x *ListRolesAnywhereProfilesResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_integration_v1_awsra_service_proto_msgTypes[4] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListRolesAnywhereProfilesResponse.ProtoReflect.Descriptor instead. +func (*ListRolesAnywhereProfilesResponse) Descriptor() ([]byte, []int) { + return file_teleport_integration_v1_awsra_service_proto_rawDescGZIP(), []int{4} +} + +func (x *ListRolesAnywhereProfilesResponse) GetProfiles() []*RolesAnywhereProfile { + if x != nil { + return x.Profiles + } + return nil +} + +func (x *ListRolesAnywhereProfilesResponse) GetNextPageToken() string { + if x != nil { + return x.NextPageToken + } + return "" +} + +// RolesAnywhereProfile represents an AWS Roles Anywhere Profile. +type RolesAnywhereProfile struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The AWS Roles Anywhere Profile ARN. + Arn string `protobuf:"bytes,1,opt,name=arn,proto3" json:"arn,omitempty"` + // Whether the AWS Roles Anywhere Profile is enabled. + Enabled bool `protobuf:"varint,2,opt,name=enabled,proto3" json:"enabled,omitempty"` + // The name of the AWS Roles Anywhere Profile. 
+ Name string `protobuf:"bytes,3,opt,name=name,proto3" json:"name,omitempty"` + // Whether the profile accepts role session names. + AcceptRoleSessionName bool `protobuf:"varint,4,opt,name=accept_role_session_name,json=acceptRoleSessionName,proto3" json:"accept_role_session_name,omitempty"` + // The tags associated with the AWS Roles Anywhere Profile. + Tags map[string]string `protobuf:"bytes,5,rep,name=tags,proto3" json:"tags,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"` + // The roles accessible from this AWS Roles Anywhere Profile. + Roles []string `protobuf:"bytes,6,rep,name=roles,proto3" json:"roles,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *RolesAnywhereProfile) Reset() { + *x = RolesAnywhereProfile{} + mi := &file_teleport_integration_v1_awsra_service_proto_msgTypes[5] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *RolesAnywhereProfile) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*RolesAnywhereProfile) ProtoMessage() {} + +func (x *RolesAnywhereProfile) ProtoReflect() protoreflect.Message { + mi := &file_teleport_integration_v1_awsra_service_proto_msgTypes[5] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use RolesAnywhereProfile.ProtoReflect.Descriptor instead. 
+func (*RolesAnywhereProfile) Descriptor() ([]byte, []int) { + return file_teleport_integration_v1_awsra_service_proto_rawDescGZIP(), []int{5} +} + +func (x *RolesAnywhereProfile) GetArn() string { + if x != nil { + return x.Arn + } + return "" +} + +func (x *RolesAnywhereProfile) GetEnabled() bool { + if x != nil { + return x.Enabled + } + return false +} + +func (x *RolesAnywhereProfile) GetName() string { + if x != nil { + return x.Name + } + return "" +} + +func (x *RolesAnywhereProfile) GetAcceptRoleSessionName() bool { + if x != nil { + return x.AcceptRoleSessionName + } + return false +} + +func (x *RolesAnywhereProfile) GetTags() map[string]string { + if x != nil { + return x.Tags + } + return nil +} + +func (x *RolesAnywhereProfile) GetRoles() []string { + if x != nil { + return x.Roles + } + return nil +} + +var File_teleport_integration_v1_awsra_service_proto protoreflect.FileDescriptor + +const file_teleport_integration_v1_awsra_service_proto_rawDesc = "" + + "\n" + + "+teleport/integration/v1/awsra_service.proto\x12\x17teleport.integration.v1\"\xab\x01\n" + + "\x1bAWSRolesAnywherePingRequest\x12\"\n" + + "\vintegration\x18\x01 \x01(\tH\x00R\vintegration\x12`\n" + + "\x06custom\x18\x02 \x01(\v2F.teleport.integration.v1.AWSRolesAnywherePingRequestWithoutIntegrationH\x00R\x06customB\x06\n" + + "\x04mode\"\x95\x01\n" + + "-AWSRolesAnywherePingRequestWithoutIntegration\x12(\n" + + "\x10trust_anchor_arn\x18\x01 \x01(\tR\x0etrustAnchorArn\x12\x1f\n" + + "\vprofile_arn\x18\x03 \x01(\tR\n" + + "profileArn\x12\x19\n" + + "\brole_arn\x18\x04 \x01(\tR\aroleArn\"\x8d\x01\n" + + "\x1cAWSRolesAnywherePingResponse\x12\x1d\n" + + "\n" + + "account_id\x18\x01 \x01(\tR\taccountId\x12\x10\n" + + "\x03arn\x18\x02 \x01(\tR\x03arn\x12\x17\n" + + "\auser_id\x18\x03 \x01(\tR\x06userId\x12#\n" + + "\rprofile_count\x18\x04 \x01(\x05R\fprofileCount\"\xbb\x01\n" + + " ListRolesAnywhereProfilesRequest\x12 \n" + + "\vintegration\x18\x01 \x01(\tR\vintegration\x12\x1b\n" + + 
"\tpage_size\x18\x02 \x01(\x05R\bpageSize\x12&\n" + + "\x0fnext_page_token\x18\x03 \x01(\tR\rnextPageToken\x120\n" + + "\x14profile_name_filters\x18\x04 \x03(\tR\x12profileNameFilters\"\x96\x01\n" + + "!ListRolesAnywhereProfilesResponse\x12I\n" + + "\bprofiles\x18\x01 \x03(\v2-.teleport.integration.v1.RolesAnywhereProfileR\bprofiles\x12&\n" + + "\x0fnext_page_token\x18\x02 \x01(\tR\rnextPageToken\"\xab\x02\n" + + "\x14RolesAnywhereProfile\x12\x10\n" + + "\x03arn\x18\x01 \x01(\tR\x03arn\x12\x18\n" + + "\aenabled\x18\x02 \x01(\bR\aenabled\x12\x12\n" + + "\x04name\x18\x03 \x01(\tR\x04name\x127\n" + + "\x18accept_role_session_name\x18\x04 \x01(\bR\x15acceptRoleSessionName\x12K\n" + + "\x04tags\x18\x05 \x03(\v27.teleport.integration.v1.RolesAnywhereProfile.TagsEntryR\x04tags\x12\x14\n" + + "\x05roles\x18\x06 \x03(\tR\x05roles\x1a7\n" + + "\tTagsEntry\x12\x10\n" + + "\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n" + + "\x05value\x18\x02 \x01(\tR\x05value:\x028\x012\xb4\x02\n" + + "\x17AWSRolesAnywhereService\x12\x83\x01\n" + + "\x14AWSRolesAnywherePing\x124.teleport.integration.v1.AWSRolesAnywherePingRequest\x1a5.teleport.integration.v1.AWSRolesAnywherePingResponse\x12\x92\x01\n" + + "\x19ListRolesAnywhereProfiles\x129.teleport.integration.v1.ListRolesAnywhereProfilesRequest\x1a:.teleport.integration.v1.ListRolesAnywhereProfilesResponseBZZXgithub.com/gravitational/teleport/api/gen/proto/go/teleport/integration/v1;integrationv1b\x06proto3" + +var ( + file_teleport_integration_v1_awsra_service_proto_rawDescOnce sync.Once + file_teleport_integration_v1_awsra_service_proto_rawDescData []byte +) + +func file_teleport_integration_v1_awsra_service_proto_rawDescGZIP() []byte { + file_teleport_integration_v1_awsra_service_proto_rawDescOnce.Do(func() { + file_teleport_integration_v1_awsra_service_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_teleport_integration_v1_awsra_service_proto_rawDesc), 
len(file_teleport_integration_v1_awsra_service_proto_rawDesc))) + }) + return file_teleport_integration_v1_awsra_service_proto_rawDescData +} + +var file_teleport_integration_v1_awsra_service_proto_msgTypes = make([]protoimpl.MessageInfo, 7) +var file_teleport_integration_v1_awsra_service_proto_goTypes = []any{ + (*AWSRolesAnywherePingRequest)(nil), // 0: teleport.integration.v1.AWSRolesAnywherePingRequest + (*AWSRolesAnywherePingRequestWithoutIntegration)(nil), // 1: teleport.integration.v1.AWSRolesAnywherePingRequestWithoutIntegration + (*AWSRolesAnywherePingResponse)(nil), // 2: teleport.integration.v1.AWSRolesAnywherePingResponse + (*ListRolesAnywhereProfilesRequest)(nil), // 3: teleport.integration.v1.ListRolesAnywhereProfilesRequest + (*ListRolesAnywhereProfilesResponse)(nil), // 4: teleport.integration.v1.ListRolesAnywhereProfilesResponse + (*RolesAnywhereProfile)(nil), // 5: teleport.integration.v1.RolesAnywhereProfile + nil, // 6: teleport.integration.v1.RolesAnywhereProfile.TagsEntry +} +var file_teleport_integration_v1_awsra_service_proto_depIdxs = []int32{ + 1, // 0: teleport.integration.v1.AWSRolesAnywherePingRequest.custom:type_name -> teleport.integration.v1.AWSRolesAnywherePingRequestWithoutIntegration + 5, // 1: teleport.integration.v1.ListRolesAnywhereProfilesResponse.profiles:type_name -> teleport.integration.v1.RolesAnywhereProfile + 6, // 2: teleport.integration.v1.RolesAnywhereProfile.tags:type_name -> teleport.integration.v1.RolesAnywhereProfile.TagsEntry + 0, // 3: teleport.integration.v1.AWSRolesAnywhereService.AWSRolesAnywherePing:input_type -> teleport.integration.v1.AWSRolesAnywherePingRequest + 3, // 4: teleport.integration.v1.AWSRolesAnywhereService.ListRolesAnywhereProfiles:input_type -> teleport.integration.v1.ListRolesAnywhereProfilesRequest + 2, // 5: teleport.integration.v1.AWSRolesAnywhereService.AWSRolesAnywherePing:output_type -> teleport.integration.v1.AWSRolesAnywherePingResponse + 4, // 6: 
teleport.integration.v1.AWSRolesAnywhereService.ListRolesAnywhereProfiles:output_type -> teleport.integration.v1.ListRolesAnywhereProfilesResponse + 5, // [5:7] is the sub-list for method output_type + 3, // [3:5] is the sub-list for method input_type + 3, // [3:3] is the sub-list for extension type_name + 3, // [3:3] is the sub-list for extension extendee + 0, // [0:3] is the sub-list for field type_name +} + +func init() { file_teleport_integration_v1_awsra_service_proto_init() } +func file_teleport_integration_v1_awsra_service_proto_init() { + if File_teleport_integration_v1_awsra_service_proto != nil { + return + } + file_teleport_integration_v1_awsra_service_proto_msgTypes[0].OneofWrappers = []any{ + (*AWSRolesAnywherePingRequest_Integration)(nil), + (*AWSRolesAnywherePingRequest_Custom)(nil), + } + type x struct{} + out := protoimpl.TypeBuilder{ + File: protoimpl.DescBuilder{ + GoPackagePath: reflect.TypeOf(x{}).PkgPath(), + RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_integration_v1_awsra_service_proto_rawDesc), len(file_teleport_integration_v1_awsra_service_proto_rawDesc)), + NumEnums: 0, + NumMessages: 7, + NumExtensions: 0, + NumServices: 1, + }, + GoTypes: file_teleport_integration_v1_awsra_service_proto_goTypes, + DependencyIndexes: file_teleport_integration_v1_awsra_service_proto_depIdxs, + MessageInfos: file_teleport_integration_v1_awsra_service_proto_msgTypes, + }.Build() + File_teleport_integration_v1_awsra_service_proto = out.File + file_teleport_integration_v1_awsra_service_proto_goTypes = nil + file_teleport_integration_v1_awsra_service_proto_depIdxs = nil +} diff --git a/api/gen/proto/go/teleport/integration/v1/awsra_service_grpc.pb.go b/api/gen/proto/go/teleport/integration/v1/awsra_service_grpc.pb.go new file mode 100644 index 0000000000000..33b51e617e34a --- /dev/null +++ b/api/gen/proto/go/teleport/integration/v1/awsra_service_grpc.pb.go @@ -0,0 +1,202 @@ +// Copyright 2025 Gravitational, Inc +// +// Licensed under the Apache 
License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Code generated by protoc-gen-go-grpc. DO NOT EDIT. +// versions: +// - protoc-gen-go-grpc v1.5.1 +// - protoc (unknown) +// source: teleport/integration/v1/awsra_service.proto + +package integrationv1 + +import ( + context "context" + grpc "google.golang.org/grpc" + codes "google.golang.org/grpc/codes" + status "google.golang.org/grpc/status" +) + +// This is a compile-time assertion to ensure that this generated file +// is compatible with the grpc package it is being compiled against. +// Requires gRPC-Go v1.64.0 or later. +const _ = grpc.SupportPackageIsVersion9 + +const ( + AWSRolesAnywhereService_AWSRolesAnywherePing_FullMethodName = "/teleport.integration.v1.AWSRolesAnywhereService/AWSRolesAnywherePing" + AWSRolesAnywhereService_ListRolesAnywhereProfiles_FullMethodName = "/teleport.integration.v1.AWSRolesAnywhereService/ListRolesAnywhereProfiles" +) + +// AWSRolesAnywhereServiceClient is the client API for AWSRolesAnywhereService service. +// +// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream. +// +// AWSRolesAnywhereService provides access to AWS APIs using the AWS Roles Anywhere Integration. +type AWSRolesAnywhereServiceClient interface { + // AWSRolesAnywherePing does a health check for the integration. + // Returns the caller identity and the number of AWS Roles Anywhere Profiles that are active.
+ // It uses the following APIs: + // https://docs.aws.amazon.com/STS/latest/APIReference/API_GetCallerIdentity.html + // https://docs.aws.amazon.com/rolesanywhere/latest/APIReference/API_ListProfiles.html + AWSRolesAnywherePing(ctx context.Context, in *AWSRolesAnywherePingRequest, opts ...grpc.CallOption) (*AWSRolesAnywherePingResponse, error) + // ListRolesAnywhereProfiles lists the AWS Roles Anywhere Profiles that are configured in the integration. + // It uses the following APIs: + // https://docs.aws.amazon.com/rolesanywhere/latest/APIReference/API_ListProfiles.html + // https://docs.aws.amazon.com/rolesanywhere/latest/APIReference/API_ListTagsForResource.html + // + // The number of profiles returned is always between 0 and page_size. + // If the number of elements is 0, then there are no more profiles to return and the next page token is empty. + ListRolesAnywhereProfiles(ctx context.Context, in *ListRolesAnywhereProfilesRequest, opts ...grpc.CallOption) (*ListRolesAnywhereProfilesResponse, error) +} + +type aWSRolesAnywhereServiceClient struct { + cc grpc.ClientConnInterface +} + +func NewAWSRolesAnywhereServiceClient(cc grpc.ClientConnInterface) AWSRolesAnywhereServiceClient { + return &aWSRolesAnywhereServiceClient{cc} +} + +func (c *aWSRolesAnywhereServiceClient) AWSRolesAnywherePing(ctx context.Context, in *AWSRolesAnywherePingRequest, opts ...grpc.CallOption) (*AWSRolesAnywherePingResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(AWSRolesAnywherePingResponse) + err := c.cc.Invoke(ctx, AWSRolesAnywhereService_AWSRolesAnywherePing_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *aWSRolesAnywhereServiceClient) ListRolesAnywhereProfiles(ctx context.Context, in *ListRolesAnywhereProfilesRequest, opts ...grpc.CallOption) (*ListRolesAnywhereProfilesResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
+ out := new(ListRolesAnywhereProfilesResponse) + err := c.cc.Invoke(ctx, AWSRolesAnywhereService_ListRolesAnywhereProfiles_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +// AWSRolesAnywhereServiceServer is the server API for AWSRolesAnywhereService service. +// All implementations must embed UnimplementedAWSRolesAnywhereServiceServer +// for forward compatibility. +// +// AWSRolesAnywhereService provides access to AWS APIs using the AWS Roles Anywhere Integration. +type AWSRolesAnywhereServiceServer interface { + // AWSRolesAnywherePing does a health check for the integration. + // Returns the caller identity and the number of AWS Roles Anywhere Profiles that are active. + // It uses the following APIs: + // https://docs.aws.amazon.com/STS/latest/APIReference/API_GetCallerIdentity.html + // https://docs.aws.amazon.com/rolesanywhere/latest/APIReference/API_ListProfiles.html + AWSRolesAnywherePing(context.Context, *AWSRolesAnywherePingRequest) (*AWSRolesAnywherePingResponse, error) + // ListRolesAnywhereProfiles lists the AWS Roles Anywhere Profiles that are configured in the integration. + // It uses the following APIs: + // https://docs.aws.amazon.com/rolesanywhere/latest/APIReference/API_ListProfiles.html + // https://docs.aws.amazon.com/rolesanywhere/latest/APIReference/API_ListTagsForResource.html + // + // The number of profiles returned is always between 0 and page_size. + // If the number of elements is 0, then there are no more profiles to return and the next page token is empty. + ListRolesAnywhereProfiles(context.Context, *ListRolesAnywhereProfilesRequest) (*ListRolesAnywhereProfilesResponse, error) + mustEmbedUnimplementedAWSRolesAnywhereServiceServer() +} + +// UnimplementedAWSRolesAnywhereServiceServer must be embedded to have +// forward compatible implementations. +// +// NOTE: this should be embedded by value instead of pointer to avoid a nil +// pointer dereference when methods are called.
+type UnimplementedAWSRolesAnywhereServiceServer struct{} + +func (UnimplementedAWSRolesAnywhereServiceServer) AWSRolesAnywherePing(context.Context, *AWSRolesAnywherePingRequest) (*AWSRolesAnywherePingResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method AWSRolesAnywherePing not implemented") +} +func (UnimplementedAWSRolesAnywhereServiceServer) ListRolesAnywhereProfiles(context.Context, *ListRolesAnywhereProfilesRequest) (*ListRolesAnywhereProfilesResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListRolesAnywhereProfiles not implemented") +} +func (UnimplementedAWSRolesAnywhereServiceServer) mustEmbedUnimplementedAWSRolesAnywhereServiceServer() { +} +func (UnimplementedAWSRolesAnywhereServiceServer) testEmbeddedByValue() {} + +// UnsafeAWSRolesAnywhereServiceServer may be embedded to opt out of forward compatibility for this service. +// Use of this interface is not recommended, as added methods to AWSRolesAnywhereServiceServer will +// result in compilation errors. +type UnsafeAWSRolesAnywhereServiceServer interface { + mustEmbedUnimplementedAWSRolesAnywhereServiceServer() +} + +func RegisterAWSRolesAnywhereServiceServer(s grpc.ServiceRegistrar, srv AWSRolesAnywhereServiceServer) { + // If the following call panics, it indicates UnimplementedAWSRolesAnywhereServiceServer was + // embedded by pointer and is nil. This will cause panics if an + // unimplemented method is ever invoked, so we test this at initialization + // time to prevent it from happening at runtime later due to I/O.
+ if t, ok := srv.(interface{ testEmbeddedByValue() }); ok { + t.testEmbeddedByValue() + } + s.RegisterService(&AWSRolesAnywhereService_ServiceDesc, srv) +} + +func _AWSRolesAnywhereService_AWSRolesAnywherePing_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(AWSRolesAnywherePingRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AWSRolesAnywhereServiceServer).AWSRolesAnywherePing(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AWSRolesAnywhereService_AWSRolesAnywherePing_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AWSRolesAnywhereServiceServer).AWSRolesAnywherePing(ctx, req.(*AWSRolesAnywherePingRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _AWSRolesAnywhereService_ListRolesAnywhereProfiles_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListRolesAnywhereProfilesRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(AWSRolesAnywhereServiceServer).ListRolesAnywhereProfiles(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: AWSRolesAnywhereService_ListRolesAnywhereProfiles_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(AWSRolesAnywhereServiceServer).ListRolesAnywhereProfiles(ctx, req.(*ListRolesAnywhereProfilesRequest)) + } + return interceptor(ctx, in, info, handler) +} + +// AWSRolesAnywhereService_ServiceDesc is the grpc.ServiceDesc for AWSRolesAnywhereService service. 
+// It's only intended for direct use with grpc.RegisterService, +// and not to be introspected or modified (even as a copy) +var AWSRolesAnywhereService_ServiceDesc = grpc.ServiceDesc{ + ServiceName: "teleport.integration.v1.AWSRolesAnywhereService", + HandlerType: (*AWSRolesAnywhereServiceServer)(nil), + Methods: []grpc.MethodDesc{ + { + MethodName: "AWSRolesAnywherePing", + Handler: _AWSRolesAnywhereService_AWSRolesAnywherePing_Handler, + }, + { + MethodName: "ListRolesAnywhereProfiles", + Handler: _AWSRolesAnywhereService_ListRolesAnywhereProfiles_Handler, + }, + }, + Streams: []grpc.StreamDesc{}, + Metadata: "teleport/integration/v1/awsra_service.proto", +} diff --git a/api/gen/proto/go/teleport/integration/v1/integration_service.pb.go b/api/gen/proto/go/teleport/integration/v1/integration_service.pb.go index 843103fa56089..3aca8ba6ad8fa 100644 --- a/api/gen/proto/go/teleport/integration/v1/integration_service.pb.go +++ b/api/gen/proto/go/teleport/integration/v1/integration_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/integration/v1/integration_service.proto diff --git a/api/gen/proto/go/teleport/join/v1/joinservice.pb.go b/api/gen/proto/go/teleport/join/v1/joinservice.pb.go new file mode 100644 index 0000000000000..143a26cb1a3f5 --- /dev/null +++ b/api/gen/proto/go/teleport/join/v1/joinservice.pb.go @@ -0,0 +1,3185 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+// See the License for the specific language governing permissions and +// limitations under the License. + +// Code generated by protoc-gen-go. DO NOT EDIT. +// versions: +// protoc-gen-go v1.36.8 +// protoc (unknown) +// source: teleport/join/v1/joinservice.proto + +package joinv1 + +import ( + protoreflect "google.golang.org/protobuf/reflect/protoreflect" + protoimpl "google.golang.org/protobuf/runtime/protoimpl" + timestamppb "google.golang.org/protobuf/types/known/timestamppb" + reflect "reflect" + sync "sync" + unsafe "unsafe" +) + +const ( + // Verify that this generated code is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) + // Verify that runtime/protoimpl is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) +) + +// Reason is the reason the client is giving up. +type GivingUp_Reason int32 + +const ( + // REASON_UNSPECIFIED is an unspecified reason. + GivingUp_REASON_UNSPECIFIED GivingUp_Reason = 0 + // REASON_UNSUPPORTED_JOIN_METHOD means the client does not support the + // join method sent by the server. + GivingUp_REASON_UNSUPPORTED_JOIN_METHOD GivingUp_Reason = 1 + // REASON_UNSUPPORTED_MESSAGE_TYPE means the client can not handle a + // message type sent by the server. + GivingUp_REASON_UNSUPPORTED_MESSAGE_TYPE GivingUp_Reason = 2 + // REASON_CHALLENGE_SOLUTION_FAILED means the client failed to solve a + // challenge sent by the server. + GivingUp_REASON_CHALLENGE_SOLUTION_FAILED GivingUp_Reason = 3 +) + +// Enum value maps for GivingUp_Reason. 
+var ( + GivingUp_Reason_name = map[int32]string{ + 0: "REASON_UNSPECIFIED", + 1: "REASON_UNSUPPORTED_JOIN_METHOD", + 2: "REASON_UNSUPPORTED_MESSAGE_TYPE", + 3: "REASON_CHALLENGE_SOLUTION_FAILED", + } + GivingUp_Reason_value = map[string]int32{ + "REASON_UNSPECIFIED": 0, + "REASON_UNSUPPORTED_JOIN_METHOD": 1, + "REASON_UNSUPPORTED_MESSAGE_TYPE": 2, + "REASON_CHALLENGE_SOLUTION_FAILED": 3, + } +) + +func (x GivingUp_Reason) Enum() *GivingUp_Reason { + p := new(GivingUp_Reason) + *p = x + return p +} + +func (x GivingUp_Reason) String() string { + return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x)) +} + +func (GivingUp_Reason) Descriptor() protoreflect.EnumDescriptor { + return file_teleport_join_v1_joinservice_proto_enumTypes[0].Descriptor() +} + +func (GivingUp_Reason) Type() protoreflect.EnumType { + return &file_teleport_join_v1_joinservice_proto_enumTypes[0] +} + +func (x GivingUp_Reason) Number() protoreflect.EnumNumber { + return protoreflect.EnumNumber(x) +} + +// Deprecated: Use GivingUp_Reason.Descriptor instead. +func (GivingUp_Reason) EnumDescriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{27, 0} +} + +// ClientInit is the first message sent from the client during the join process; it +// holds parameters common to all join methods. +type ClientInit struct { + state protoimpl.MessageState `protogen:"open.v1"` + // JoinMethod is the name of the join method that the client is configured to use. + // This parameter is optional; the client can leave it empty to allow the + // server to determine the join method based on the provision token named by + // TokenName; it will be sent to the client in the ServerInit message. + JoinMethod *string `protobuf:"bytes,1,opt,name=join_method,json=joinMethod,proto3,oneof" json:"join_method,omitempty"` + // TokenName is the name of the join token.
+ // This is a secret if using the token join method; otherwise it is a + // non-secret name of a provision token resource. + TokenName string `protobuf:"bytes,2,opt,name=token_name,json=tokenName,proto3" json:"token_name,omitempty"` + // SystemRole is the system role requested, e.g. Proxy, Node, Instance, Bot. + SystemRole string `protobuf:"bytes,3,opt,name=system_role,json=systemRole,proto3" json:"system_role,omitempty"` + // ForwardedByProxy will be set to true when the message is forwarded by the + // Proxy service. When this is set, the Auth service must ignore + // any credentials authenticating the request, except for the purpose of + // accepting ProxySuppliedParams. + ForwardedByProxy bool `protobuf:"varint,4,opt,name=forwarded_by_proxy,json=forwardedByProxy,proto3" json:"forwarded_by_proxy,omitempty"` + ProxySuppliedParameters *ClientInit_ProxySuppliedParams `protobuf:"bytes,5,opt,name=proxy_supplied_parameters,json=proxySuppliedParameters,proto3,oneof" json:"proxy_supplied_parameters,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ClientInit) Reset() { + *x = ClientInit{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[0] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ClientInit) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ClientInit) ProtoMessage() {} + +func (x *ClientInit) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[0] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ClientInit.ProtoReflect.Descriptor instead. 
+func (*ClientInit) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{0} +} + +func (x *ClientInit) GetJoinMethod() string { + if x != nil && x.JoinMethod != nil { + return *x.JoinMethod + } + return "" +} + +func (x *ClientInit) GetTokenName() string { + if x != nil { + return x.TokenName + } + return "" +} + +func (x *ClientInit) GetSystemRole() string { + if x != nil { + return x.SystemRole + } + return "" +} + +func (x *ClientInit) GetForwardedByProxy() bool { + if x != nil { + return x.ForwardedByProxy + } + return false +} + +func (x *ClientInit) GetProxySuppliedParameters() *ClientInit_ProxySuppliedParams { + if x != nil { + return x.ProxySuppliedParameters + } + return nil +} + +// PublicKeys holds the public keys sent by the client as the requested subject keys for +// issued certificates. +type PublicKeys struct { + state protoimpl.MessageState `protogen:"open.v1"` + // PublicTlsKey is the public key requested for the subject of the x509 certificate. + // It must be encoded in PKIX, ASN.1 DER form. + PublicTlsKey []byte `protobuf:"bytes,1,opt,name=public_tls_key,json=publicTlsKey,proto3" json:"public_tls_key,omitempty"` + // PublicSshKey is the public key requested for the subject of the SSH certificate. + // It must be encoded in SSH wire format. 
+ PublicSshKey []byte `protobuf:"bytes,2,opt,name=public_ssh_key,json=publicSshKey,proto3" json:"public_ssh_key,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *PublicKeys) Reset() { + *x = PublicKeys{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[1] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *PublicKeys) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*PublicKeys) ProtoMessage() {} + +func (x *PublicKeys) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[1] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use PublicKeys.ProtoReflect.Descriptor instead. +func (*PublicKeys) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{1} +} + +func (x *PublicKeys) GetPublicTlsKey() []byte { + if x != nil { + return x.PublicTlsKey + } + return nil +} + +func (x *PublicKeys) GetPublicSshKey() []byte { + if x != nil { + return x.PublicSshKey + } + return nil +} + +// HostParams holds parameters required for host joining. +type HostParams struct { + state protoimpl.MessageState `protogen:"open.v1"` + // PublicKeys holds the host public keys. + PublicKeys *PublicKeys `protobuf:"bytes,1,opt,name=public_keys,json=publicKeys,proto3" json:"public_keys,omitempty"` + // HostName is the user-friendly node name for the host. This comes from + // teleport.nodename in the service configuration and defaults to the + // hostname. It is encoded as a valid principal in issued certificates. + HostName string `protobuf:"bytes,2,opt,name=host_name,json=hostName,proto3" json:"host_name,omitempty"` + // AdditionalPrincipals is a list of additional principals requested. 
+ AdditionalPrincipals []string `protobuf:"bytes,3,rep,name=additional_principals,json=additionalPrincipals,proto3" json:"additional_principals,omitempty"` + // DnsNames is a list of DNS names requested for inclusion in the x509 certificate. + DnsNames []string `protobuf:"bytes,4,rep,name=dns_names,json=dnsNames,proto3" json:"dns_names,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *HostParams) Reset() { + *x = HostParams{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[2] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *HostParams) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*HostParams) ProtoMessage() {} + +func (x *HostParams) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[2] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use HostParams.ProtoReflect.Descriptor instead. +func (*HostParams) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{2} +} + +func (x *HostParams) GetPublicKeys() *PublicKeys { + if x != nil { + return x.PublicKeys + } + return nil +} + +func (x *HostParams) GetHostName() string { + if x != nil { + return x.HostName + } + return "" +} + +func (x *HostParams) GetAdditionalPrincipals() []string { + if x != nil { + return x.AdditionalPrincipals + } + return nil +} + +func (x *HostParams) GetDnsNames() []string { + if x != nil { + return x.DnsNames + } + return nil +} + +// BotParams holds parameters required for bot joining. +type BotParams struct { + state protoimpl.MessageState `protogen:"open.v1"` + // PublicKeys holds the bot public keys. 
+ PublicKeys *PublicKeys `protobuf:"bytes,1,opt,name=public_keys,json=publicKeys,proto3" json:"public_keys,omitempty"` + // Expires is the desired expiry time of the returned certificates. + Expires *timestamppb.Timestamp `protobuf:"bytes,2,opt,name=expires,proto3,oneof" json:"expires,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *BotParams) Reset() { + *x = BotParams{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[3] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *BotParams) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*BotParams) ProtoMessage() {} + +func (x *BotParams) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[3] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use BotParams.ProtoReflect.Descriptor instead. +func (*BotParams) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{3} +} + +func (x *BotParams) GetPublicKeys() *PublicKeys { + if x != nil { + return x.PublicKeys + } + return nil +} + +func (x *BotParams) GetExpires() *timestamppb.Timestamp { + if x != nil { + return x.Expires + } + return nil +} + +// ClientParams holds either host or bot join parameters. 
+type ClientParams struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Types that are valid to be assigned to Payload: + // + // *ClientParams_HostParams + // *ClientParams_BotParams + Payload isClientParams_Payload `protobuf_oneof:"payload"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ClientParams) Reset() { + *x = ClientParams{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[4] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ClientParams) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ClientParams) ProtoMessage() {} + +func (x *ClientParams) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[4] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ClientParams.ProtoReflect.Descriptor instead. 
+func (*ClientParams) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{4} +} + +func (x *ClientParams) GetPayload() isClientParams_Payload { + if x != nil { + return x.Payload + } + return nil +} + +func (x *ClientParams) GetHostParams() *HostParams { + if x != nil { + if x, ok := x.Payload.(*ClientParams_HostParams); ok { + return x.HostParams + } + } + return nil +} + +func (x *ClientParams) GetBotParams() *BotParams { + if x != nil { + if x, ok := x.Payload.(*ClientParams_BotParams); ok { + return x.BotParams + } + } + return nil +} + +type isClientParams_Payload interface { + isClientParams_Payload() +} + +type ClientParams_HostParams struct { + HostParams *HostParams `protobuf:"bytes,1,opt,name=host_params,json=hostParams,proto3,oneof"` +} + +type ClientParams_BotParams struct { + BotParams *BotParams `protobuf:"bytes,2,opt,name=bot_params,json=botParams,proto3,oneof"` +} + +func (*ClientParams_HostParams) isClientParams_Payload() {} + +func (*ClientParams_BotParams) isClientParams_Payload() {} + +// TokenInit is sent by the client in response to the ServerInit message for +// the Token join method. +// +// The Token method join flow is: +// 1. client->server: ClientInit +// 2. server->client: ServerInit +// 3. client->server: TokenInit +// 4. server->client: Result +type TokenInit struct { + state protoimpl.MessageState `protogen:"open.v1"` + // ClientParams holds parameters for the specific type of client trying to join. 
+ ClientParams *ClientParams `protobuf:"bytes,1,opt,name=client_params,json=clientParams,proto3" json:"client_params,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *TokenInit) Reset() { + *x = TokenInit{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[5] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *TokenInit) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*TokenInit) ProtoMessage() {} + +func (x *TokenInit) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[5] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use TokenInit.ProtoReflect.Descriptor instead. +func (*TokenInit) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{5} +} + +func (x *TokenInit) GetClientParams() *ClientParams { + if x != nil { + return x.ClientParams + } + return nil +} + +// OIDCInit holds the OIDC identity token used for all OIDC-based join methods. +// +// The join flow for all OIDC-based join methods is: +// 1. client->server: ClientInit +// 2. server->client: ServerInit +// 3. client->server: OIDCInit +// 4. server->client: Result +type OIDCInit struct { + state protoimpl.MessageState `protogen:"open.v1"` + // ClientParams holds parameters for the specific type of client trying to join. + ClientParams *ClientParams `protobuf:"bytes,1,opt,name=client_params,json=clientParams,proto3" json:"client_params,omitempty"` + // IdToken is the OIDC identity token. 
+ IdToken []byte `protobuf:"bytes,2,opt,name=id_token,json=idToken,proto3" json:"id_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *OIDCInit) Reset() { + *x = OIDCInit{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[6] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *OIDCInit) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*OIDCInit) ProtoMessage() {} + +func (x *OIDCInit) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[6] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use OIDCInit.ProtoReflect.Descriptor instead. +func (*OIDCInit) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{6} +} + +func (x *OIDCInit) GetClientParams() *ClientParams { + if x != nil { + return x.ClientParams + } + return nil +} + +func (x *OIDCInit) GetIdToken() []byte { + if x != nil { + return x.IdToken + } + return nil +} + +// BoundKeypairInit is sent from the client in response to the ServerInit +// message for the bound keypair join method. +// The server is expected to respond with a BoundKeypairChallenge. +// +// The bound keypair method join flow is: +// 1. client->server: ClientInit +// 2. server->client: ServerInit +// 3. client->server: BoundKeypairInit +// 4. server->client: BoundKeypairChallenge +// 5. client->server: BoundKeypairChallengeSolution +// (optional additional steps if keypair rotation is required) +// server->client: BoundKeypairRotationRequest +// client->server: BoundKeypairRotationResponse +// server->client: BoundKeypairChallenge +// client->server: BoundKeypairChallengeSolution +// 6. 
server->client: Result containing BoundKeypairResult +type BoundKeypairInit struct { + state protoimpl.MessageState `protogen:"open.v1"` + // ClientParams holds parameters for the specific type of client trying to join. + ClientParams *ClientParams `protobuf:"bytes,1,opt,name=client_params,json=clientParams,proto3" json:"client_params,omitempty"` + // If set, attempts to bind a new keypair using an initial join secret. + // Any value set here will be ignored if a keypair is already bound. + InitialJoinSecret string `protobuf:"bytes,2,opt,name=initial_join_secret,json=initialJoinSecret,proto3" json:"initial_join_secret,omitempty"` + // A document signed by Auth containing join state parameters from the + // previous join attempt. Not required on initial join; required on all + // subsequent joins. + PreviousJoinState []byte `protobuf:"bytes,3,opt,name=previous_join_state,json=previousJoinState,proto3" json:"previous_join_state,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *BoundKeypairInit) Reset() { + *x = BoundKeypairInit{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[7] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *BoundKeypairInit) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*BoundKeypairInit) ProtoMessage() {} + +func (x *BoundKeypairInit) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[7] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use BoundKeypairInit.ProtoReflect.Descriptor instead. 
+func (*BoundKeypairInit) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{7} +} + +func (x *BoundKeypairInit) GetClientParams() *ClientParams { + if x != nil { + return x.ClientParams + } + return nil +} + +func (x *BoundKeypairInit) GetInitialJoinSecret() string { + if x != nil { + return x.InitialJoinSecret + } + return "" +} + +func (x *BoundKeypairInit) GetPreviousJoinState() []byte { + if x != nil { + return x.PreviousJoinState + } + return nil +} + +// BoundKeypairChallenge is a challenge issued by the server that joining +// clients are expected to complete. +// The client is expected to respond with a BoundKeypairChallengeSolution. +type BoundKeypairChallenge struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The desired public key corresponding to the private key that should be used + // to sign this challenge, in SSH authorized keys format. + PublicKey []byte `protobuf:"bytes,1,opt,name=public_key,json=publicKey,proto3" json:"public_key,omitempty"` + // A challenge to sign with the requested public key. During keypair rotation, + // a second challenge will be provided to verify the new keypair before certs + // are returned. 
+ Challenge string `protobuf:"bytes,2,opt,name=challenge,proto3" json:"challenge,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *BoundKeypairChallenge) Reset() { + *x = BoundKeypairChallenge{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[8] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *BoundKeypairChallenge) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*BoundKeypairChallenge) ProtoMessage() {} + +func (x *BoundKeypairChallenge) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[8] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use BoundKeypairChallenge.ProtoReflect.Descriptor instead. +func (*BoundKeypairChallenge) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{8} +} + +func (x *BoundKeypairChallenge) GetPublicKey() []byte { + if x != nil { + return x.PublicKey + } + return nil +} + +func (x *BoundKeypairChallenge) GetChallenge() string { + if x != nil { + return x.Challenge + } + return "" +} + +// BoundKeypairChallengeSolution is sent from the client in response to the +// BoundKeypairChallenge. +// The server is expected to respond with either a Result or a +// BoundKeypairRotationRequest. +type BoundKeypairChallengeSolution struct { + state protoimpl.MessageState `protogen:"open.v1"` + // A solution to a challenge from the server. This is generated by signing the + // challenge as a JWT using the keypair associated with the requested public + // key. 
+ Solution []byte `protobuf:"bytes,1,opt,name=solution,proto3" json:"solution,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *BoundKeypairChallengeSolution) Reset() { + *x = BoundKeypairChallengeSolution{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[9] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *BoundKeypairChallengeSolution) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*BoundKeypairChallengeSolution) ProtoMessage() {} + +func (x *BoundKeypairChallengeSolution) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[9] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use BoundKeypairChallengeSolution.ProtoReflect.Descriptor instead. +func (*BoundKeypairChallengeSolution) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{9} +} + +func (x *BoundKeypairChallengeSolution) GetSolution() []byte { + if x != nil { + return x.Solution + } + return nil +} + +// BoundKeypairRotationRequest is sent by the server in response to a +// BoundKeypairChallenge when a keypair rotation is required. It acts like an +// additional challenge; the client is expected to respond with a +// BoundKeypairRotationResponse. +type BoundKeypairRotationRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The signature algorithm suite in use by the cluster. 
+ SignatureAlgorithmSuite string `protobuf:"bytes,1,opt,name=signature_algorithm_suite,json=signatureAlgorithmSuite,proto3" json:"signature_algorithm_suite,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *BoundKeypairRotationRequest) Reset() { + *x = BoundKeypairRotationRequest{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[10] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *BoundKeypairRotationRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*BoundKeypairRotationRequest) ProtoMessage() {} + +func (x *BoundKeypairRotationRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[10] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use BoundKeypairRotationRequest.ProtoReflect.Descriptor instead. +func (*BoundKeypairRotationRequest) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{10} +} + +func (x *BoundKeypairRotationRequest) GetSignatureAlgorithmSuite() string { + if x != nil { + return x.SignatureAlgorithmSuite + } + return "" +} + +// BoundKeypairRotationResponse is sent by the client in response to a +// BoundKeypairRotationRequest from the server. +// The server is expected to respond with an additional BoundKeypairChallenge +// for the new key. +type BoundKeypairRotationResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The public key to be registered with auth. Clients should expect a + // subsequent challenge against this public key to be sent. This is encoded in + // SSH authorized keys format. 
+ PublicKey []byte `protobuf:"bytes,1,opt,name=public_key,json=publicKey,proto3" json:"public_key,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *BoundKeypairRotationResponse) Reset() { + *x = BoundKeypairRotationResponse{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[11] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *BoundKeypairRotationResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*BoundKeypairRotationResponse) ProtoMessage() {} + +func (x *BoundKeypairRotationResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[11] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use BoundKeypairRotationResponse.ProtoReflect.Descriptor instead. +func (*BoundKeypairRotationResponse) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{11} +} + +func (x *BoundKeypairRotationResponse) GetPublicKey() []byte { + if x != nil { + return x.PublicKey + } + return nil +} + +// BoundKeypairResult holds additional result parameters relevant to the bound +// keypair join method. +type BoundKeypairResult struct { + state protoimpl.MessageState `protogen:"open.v1"` + // A signed join state document to be provided on the next join attempt. + JoinState []byte `protobuf:"bytes,2,opt,name=join_state,json=joinState,proto3" json:"join_state,omitempty"` + // The public key registered with Auth at the end of the joining ceremony. + // After a successful keypair rotation, this should reflect the newly + // registered public key. This is encoded in SSH authorized keys format. 
+ PublicKey []byte `protobuf:"bytes,3,opt,name=public_key,json=publicKey,proto3" json:"public_key,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *BoundKeypairResult) Reset() { + *x = BoundKeypairResult{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[12] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *BoundKeypairResult) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*BoundKeypairResult) ProtoMessage() {} + +func (x *BoundKeypairResult) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[12] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use BoundKeypairResult.ProtoReflect.Descriptor instead. +func (*BoundKeypairResult) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{12} +} + +func (x *BoundKeypairResult) GetJoinState() []byte { + if x != nil { + return x.JoinState + } + return nil +} + +func (x *BoundKeypairResult) GetPublicKey() []byte { + if x != nil { + return x.PublicKey + } + return nil +} + +// IAMInit is sent from the client in response to the ServerInit message for +// the IAM join method. +// +// The IAM method join flow is: +// 1. client->server: ClientInit +// 2. server->client: ServerInit +// 3. client->server: IAMInit +// 4. server->client: IAMChallenge +// 5. client->server: IAMChallengeSolution +// 6. server->client: Result +type IAMInit struct { + state protoimpl.MessageState `protogen:"open.v1"` + // ClientParams holds parameters for the specific type of client trying to join. 
+ ClientParams *ClientParams `protobuf:"bytes,1,opt,name=client_params,json=clientParams,proto3" json:"client_params,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *IAMInit) Reset() { + *x = IAMInit{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[13] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *IAMInit) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*IAMInit) ProtoMessage() {} + +func (x *IAMInit) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[13] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use IAMInit.ProtoReflect.Descriptor instead. +func (*IAMInit) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{13} +} + +func (x *IAMInit) GetClientParams() *ClientParams { + if x != nil { + return x.ClientParams + } + return nil +} + +// IAMChallenge is sent from the server in response to the IAMInit message from the client. +// The client is expected to respond with an IAMChallengeSolution. +type IAMChallenge struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Challenge is a crypto-random string that should be included by the + // client in the IAMChallengeSolution message. 
+ Challenge string `protobuf:"bytes,1,opt,name=challenge,proto3" json:"challenge,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *IAMChallenge) Reset() { + *x = IAMChallenge{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[14] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *IAMChallenge) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*IAMChallenge) ProtoMessage() {} + +func (x *IAMChallenge) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[14] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use IAMChallenge.ProtoReflect.Descriptor instead. +func (*IAMChallenge) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{14} +} + +func (x *IAMChallenge) GetChallenge() string { + if x != nil { + return x.Challenge + } + return "" +} + +// IAMChallengeSolution must be sent from the client in response to the +// IAMChallenge message. +type IAMChallengeSolution struct { + state protoimpl.MessageState `protogen:"open.v1"` + // STSIdentityRequest is a signed sts:GetCallerIdentity API request used + // to prove the AWS identity of a joining node. It must include the + // challenge string as a signed header. 
+ StsIdentityRequest []byte `protobuf:"bytes,1,opt,name=sts_identity_request,json=stsIdentityRequest,proto3" json:"sts_identity_request,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *IAMChallengeSolution) Reset() { + *x = IAMChallengeSolution{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[15] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *IAMChallengeSolution) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*IAMChallengeSolution) ProtoMessage() {} + +func (x *IAMChallengeSolution) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[15] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use IAMChallengeSolution.ProtoReflect.Descriptor instead. +func (*IAMChallengeSolution) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{15} +} + +func (x *IAMChallengeSolution) GetStsIdentityRequest() []byte { + if x != nil { + return x.StsIdentityRequest + } + return nil +} + +// EC2Init is sent from the client in response to the ServerInit message for +// the EC2 join method. +// +// The EC2 method join flow is: +// 1. client->server: ClientInit +// 2. server->client: ServerInit +// 3. client->server: EC2Init +// 4. server->client: Result +type EC2Init struct { + state protoimpl.MessageState `protogen:"open.v1"` + // ClientParams holds parameters for the specific type of client trying to join. + ClientParams *ClientParams `protobuf:"bytes,1,opt,name=client_params,json=clientParams,proto3" json:"client_params,omitempty"` + // Document is a signed EC2 Instance Identity Document used to prove the + // identity of a joining EC2 instance. 
+ Document []byte `protobuf:"bytes,2,opt,name=document,proto3" json:"document,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *EC2Init) Reset() { + *x = EC2Init{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[16] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *EC2Init) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*EC2Init) ProtoMessage() {} + +func (x *EC2Init) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[16] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use EC2Init.ProtoReflect.Descriptor instead. +func (*EC2Init) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{16} +} + +func (x *EC2Init) GetClientParams() *ClientParams { + if x != nil { + return x.ClientParams + } + return nil +} + +func (x *EC2Init) GetDocument() []byte { + if x != nil { + return x.Document + } + return nil +} + +// OracleInit is sent from the client in response to the ServerInit message for +// the Oracle join method. +// +// The Oracle method join flow is: +// 1. client->server: ClientInit +// 2. client<-server: ServerInit +// 3. client->server: OracleInit +// 4. client<-server: OracleChallenge +// 5. client->server: OracleChallengeSolution +// 6. client<-server: Result +type OracleInit struct { + state protoimpl.MessageState `protogen:"open.v1"` + // ClientParams holds parameters for the specific type of client trying to join. 
+ ClientParams *ClientParams `protobuf:"bytes,1,opt,name=client_params,json=clientParams,proto3" json:"client_params,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *OracleInit) Reset() { + *x = OracleInit{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[17] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *OracleInit) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*OracleInit) ProtoMessage() {} + +func (x *OracleInit) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[17] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use OracleInit.ProtoReflect.Descriptor instead. +func (*OracleInit) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{17} +} + +func (x *OracleInit) GetClientParams() *ClientParams { + if x != nil { + return x.ClientParams + } + return nil +} + +// OracleChallenge is sent from the server in response to the OracleInit message from the client. +// The client is expected to respond with an OracleChallengeSolution. +type OracleChallenge struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Challenge is a crypto-random string that should be included by the + // client in the OracleChallengeSolution message.
+ Challenge string `protobuf:"bytes,1,opt,name=challenge,proto3" json:"challenge,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *OracleChallenge) Reset() { + *x = OracleChallenge{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[18] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *OracleChallenge) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*OracleChallenge) ProtoMessage() {} + +func (x *OracleChallenge) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[18] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use OracleChallenge.ProtoReflect.Descriptor instead. +func (*OracleChallenge) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{18} +} + +func (x *OracleChallenge) GetChallenge() string { + if x != nil { + return x.Challenge + } + return "" +} + +// OracleChallengeSolution must be sent from the client in response to the +// OracleChallenge message. +type OracleChallengeSolution struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Cert is the OCI instance identity certificate, an X509 certificate in PEM format. + Cert []byte `protobuf:"bytes,1,opt,name=cert,proto3" json:"cert,omitempty"` + // Intermediate encodes the intermediate CAs that issued the instance + // identity certificate, in PEM format. + Intermediate []byte `protobuf:"bytes,2,opt,name=intermediate,proto3" json:"intermediate,omitempty"` + // Signature is a signature over the challenge, signed by the private key + // matching the instance identity certificate. 
+ Signature []byte `protobuf:"bytes,3,opt,name=signature,proto3" json:"signature,omitempty"` + // SignedRootCaReq is a signed request to the Oracle API for retrieving the + // root CAs that issued the instance identity certificate. + SignedRootCaReq []byte `protobuf:"bytes,4,opt,name=signed_root_ca_req,json=signedRootCaReq,proto3" json:"signed_root_ca_req,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *OracleChallengeSolution) Reset() { + *x = OracleChallengeSolution{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[19] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *OracleChallengeSolution) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*OracleChallengeSolution) ProtoMessage() {} + +func (x *OracleChallengeSolution) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[19] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use OracleChallengeSolution.ProtoReflect.Descriptor instead. +func (*OracleChallengeSolution) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{19} +} + +func (x *OracleChallengeSolution) GetCert() []byte { + if x != nil { + return x.Cert + } + return nil +} + +func (x *OracleChallengeSolution) GetIntermediate() []byte { + if x != nil { + return x.Intermediate + } + return nil +} + +func (x *OracleChallengeSolution) GetSignature() []byte { + if x != nil { + return x.Signature + } + return nil +} + +func (x *OracleChallengeSolution) GetSignedRootCaReq() []byte { + if x != nil { + return x.SignedRootCaReq + } + return nil +} + +// TPMInit is the message sent from the client in response to the ServerInit +// message for the TPM join flow.
+// The server is expected to respond with a TPMEncryptedCredential message. +// +// The TPM method join flow is: +// 1. client->server: ClientInit +// 2. client<-server: ServerInit +// 3. client->server: TPMInit +// 4. client<-server: TPMEncryptedCredential +// 5. client->server: TPMSolution +// 6. client<-server: Result +type TPMInit struct { + state protoimpl.MessageState `protogen:"open.v1"` + // ClientParams holds parameters for the specific type of client trying to join. + ClientParams *ClientParams `protobuf:"bytes,1,opt,name=client_params,json=clientParams,proto3" json:"client_params,omitempty"` + // The encoded TPMT_PUBLIC structure containing the attestation public key + // and signing parameters. + Public []byte `protobuf:"bytes,2,opt,name=public,proto3" json:"public,omitempty"` + // The properties of the attestation key, encoded as a TPMS_CREATION_DATA + // structure. + CreateData []byte `protobuf:"bytes,3,opt,name=create_data,json=createData,proto3" json:"create_data,omitempty"` + // An assertion as to the details of the key, encoded as a TPMS_ATTEST + // structure. + CreateAttestation []byte `protobuf:"bytes,4,opt,name=create_attestation,json=createAttestation,proto3" json:"create_attestation,omitempty"` + // A signature of create_attestation, encoded as a TPMT_SIGNATURE structure. 
+ CreateSignature []byte `protobuf:"bytes,5,opt,name=create_signature,json=createSignature,proto3" json:"create_signature,omitempty"` + // Types that are valid to be assigned to Ek: + // + // *TPMInit_EkCert + // *TPMInit_EkKey + Ek isTPMInit_Ek `protobuf_oneof:"ek"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *TPMInit) Reset() { + *x = TPMInit{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[20] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *TPMInit) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*TPMInit) ProtoMessage() {} + +func (x *TPMInit) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[20] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use TPMInit.ProtoReflect.Descriptor instead. 
+func (*TPMInit) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{20} +} + +func (x *TPMInit) GetClientParams() *ClientParams { + if x != nil { + return x.ClientParams + } + return nil +} + +func (x *TPMInit) GetPublic() []byte { + if x != nil { + return x.Public + } + return nil +} + +func (x *TPMInit) GetCreateData() []byte { + if x != nil { + return x.CreateData + } + return nil +} + +func (x *TPMInit) GetCreateAttestation() []byte { + if x != nil { + return x.CreateAttestation + } + return nil +} + +func (x *TPMInit) GetCreateSignature() []byte { + if x != nil { + return x.CreateSignature + } + return nil +} + +func (x *TPMInit) GetEk() isTPMInit_Ek { + if x != nil { + return x.Ek + } + return nil +} + +func (x *TPMInit) GetEkCert() []byte { + if x != nil { + if x, ok := x.Ek.(*TPMInit_EkCert); ok { + return x.EkCert + } + } + return nil +} + +func (x *TPMInit) GetEkKey() []byte { + if x != nil { + if x, ok := x.Ek.(*TPMInit_EkKey); ok { + return x.EkKey + } + } + return nil +} + +type isTPMInit_Ek interface { + isTPMInit_Ek() +} + +type TPMInit_EkCert struct { + // The device's endorsement certificate in X509, ASN.1 DER form. This + // certificate contains the public key of the endorsement key. This is + // preferred to ek_key. + EkCert []byte `protobuf:"bytes,6,opt,name=ek_cert,json=ekCert,proto3,oneof"` +} + +type TPMInit_EkKey struct { + // The device's public endorsement key in PKIX, ASN.1 DER form. This is + // used when a TPM does not contain any endorsement certificates. + EkKey []byte `protobuf:"bytes,7,opt,name=ek_key,json=ekKey,proto3,oneof"` +} + +func (*TPMInit_EkCert) isTPMInit_Ek() {} + +func (*TPMInit_EkKey) isTPMInit_Ek() {} + +// TPMEncryptedCredential is the message sent from the server in response to the +// TPMInit message. +// The client is expected to respond with a TPMSolution message. 
+type TPMEncryptedCredential struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The `credential_blob` parameter to be used with the `ActivateCredential` + // command. This is used with the decrypted value of `secret` in a + // cryptographic process to decrypt the solution. + CredentialBlob []byte `protobuf:"bytes,1,opt,name=credential_blob,json=credentialBlob,proto3" json:"credential_blob,omitempty"` + // The `secret` parameter to be used with `ActivateCredential`. This is a + // seed which can be decrypted with the EK. The decrypted seed is then used + // when decrypting `credential_blob`. + Secret []byte `protobuf:"bytes,2,opt,name=secret,proto3" json:"secret,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *TPMEncryptedCredential) Reset() { + *x = TPMEncryptedCredential{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[21] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *TPMEncryptedCredential) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*TPMEncryptedCredential) ProtoMessage() {} + +func (x *TPMEncryptedCredential) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[21] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use TPMEncryptedCredential.ProtoReflect.Descriptor instead. 
+func (*TPMEncryptedCredential) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{21} +} + +func (x *TPMEncryptedCredential) GetCredentialBlob() []byte { + if x != nil { + return x.CredentialBlob + } + return nil +} + +func (x *TPMEncryptedCredential) GetSecret() []byte { + if x != nil { + return x.Secret + } + return nil +} + +// TPMSolution is the message sent from the client in response to the +// TPMEncryptedCredential message. The server is expected to respond with a +// Result message. +type TPMSolution struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The client's solution to TPMEncryptedCredential using ActivateCredential. + Solution []byte `protobuf:"bytes,1,opt,name=solution,proto3" json:"solution,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *TPMSolution) Reset() { + *x = TPMSolution{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[22] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *TPMSolution) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*TPMSolution) ProtoMessage() {} + +func (x *TPMSolution) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[22] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use TPMSolution.ProtoReflect.Descriptor instead. +func (*TPMSolution) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{22} +} + +func (x *TPMSolution) GetSolution() []byte { + if x != nil { + return x.Solution + } + return nil +} + +// AzureInit is sent from the client in response to the ServerInit message for +// the Azure join method. +// +// The Azure method join flow is: +// 1. client->server: ClientInit +// 2. 
client<-server: ServerInit +// 3. client->server: AzureInit +// 4. client<-server: AzureChallenge +// 5. client->server: AzureChallengeSolution +// 6. client<-server: Result +type AzureInit struct { + state protoimpl.MessageState `protogen:"open.v1"` + // ClientParams holds parameters for the specific type of client trying to join. + ClientParams *ClientParams `protobuf:"bytes,1,opt,name=client_params,json=clientParams,proto3" json:"client_params,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *AzureInit) Reset() { + *x = AzureInit{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[23] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *AzureInit) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*AzureInit) ProtoMessage() {} + +func (x *AzureInit) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[23] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use AzureInit.ProtoReflect.Descriptor instead. +func (*AzureInit) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{23} +} + +func (x *AzureInit) GetClientParams() *ClientParams { + if x != nil { + return x.ClientParams + } + return nil +} + +// AzureChallenge is sent from the server in response to the AzureInit message from the client. +// The client is expected to respond with an AzureChallengeSolution. +type AzureChallenge struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Challenge is a crypto-random string that should be included by the + // client in the challenge response message.
+ Challenge string `protobuf:"bytes,1,opt,name=challenge,proto3" json:"challenge,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *AzureChallenge) Reset() { + *x = AzureChallenge{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[24] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *AzureChallenge) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*AzureChallenge) ProtoMessage() {} + +func (x *AzureChallenge) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[24] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use AzureChallenge.ProtoReflect.Descriptor instead. +func (*AzureChallenge) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{24} +} + +func (x *AzureChallenge) GetChallenge() string { + if x != nil { + return x.Challenge + } + return "" +} + +// AzureChallengeSolution must be sent from the client in response to the +// AzureChallenge message. +type AzureChallengeSolution struct { + state protoimpl.MessageState `protogen:"open.v1"` + // AttestedData is a signed JSON document from an Azure VM's attested data + // metadata endpoint used to prove the identity of a joining node. It must + // include the challenge string as the nonce. + AttestedData []byte `protobuf:"bytes,1,opt,name=attested_data,json=attestedData,proto3" json:"attested_data,omitempty"` + // Intermediate encodes the intermediate CAs that issued the leaf certificate + // used to sign the attested data document, in x509 DER format. + Intermediate []byte `protobuf:"bytes,2,opt,name=intermediate,proto3" json:"intermediate,omitempty"` + // AccessToken is a JWT signed by Azure, used to prove the identity of a + // joining node. 
+ AccessToken string `protobuf:"bytes,3,opt,name=access_token,json=accessToken,proto3" json:"access_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *AzureChallengeSolution) Reset() { + *x = AzureChallengeSolution{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[25] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *AzureChallengeSolution) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*AzureChallengeSolution) ProtoMessage() {} + +func (x *AzureChallengeSolution) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[25] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use AzureChallengeSolution.ProtoReflect.Descriptor instead. +func (*AzureChallengeSolution) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{25} +} + +func (x *AzureChallengeSolution) GetAttestedData() []byte { + if x != nil { + return x.AttestedData + } + return nil +} + +func (x *AzureChallengeSolution) GetIntermediate() []byte { + if x != nil { + return x.Intermediate + } + return nil +} + +func (x *AzureChallengeSolution) GetAccessToken() string { + if x != nil { + return x.AccessToken + } + return "" +} + +// ChallengeSolution holds a solution to a challenge issued by the server. 
+type ChallengeSolution struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Types that are valid to be assigned to Payload: + // + // *ChallengeSolution_BoundKeypairChallengeSolution + // *ChallengeSolution_BoundKeypairRotationResponse + // *ChallengeSolution_IamChallengeSolution + // *ChallengeSolution_OracleChallengeSolution + // *ChallengeSolution_TpmSolution + // *ChallengeSolution_AzureChallengeSolution + Payload isChallengeSolution_Payload `protobuf_oneof:"payload"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ChallengeSolution) Reset() { + *x = ChallengeSolution{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[26] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ChallengeSolution) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ChallengeSolution) ProtoMessage() {} + +func (x *ChallengeSolution) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[26] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ChallengeSolution.ProtoReflect.Descriptor instead. 
+func (*ChallengeSolution) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{26} +} + +func (x *ChallengeSolution) GetPayload() isChallengeSolution_Payload { + if x != nil { + return x.Payload + } + return nil +} + +func (x *ChallengeSolution) GetBoundKeypairChallengeSolution() *BoundKeypairChallengeSolution { + if x != nil { + if x, ok := x.Payload.(*ChallengeSolution_BoundKeypairChallengeSolution); ok { + return x.BoundKeypairChallengeSolution + } + } + return nil +} + +func (x *ChallengeSolution) GetBoundKeypairRotationResponse() *BoundKeypairRotationResponse { + if x != nil { + if x, ok := x.Payload.(*ChallengeSolution_BoundKeypairRotationResponse); ok { + return x.BoundKeypairRotationResponse + } + } + return nil +} + +func (x *ChallengeSolution) GetIamChallengeSolution() *IAMChallengeSolution { + if x != nil { + if x, ok := x.Payload.(*ChallengeSolution_IamChallengeSolution); ok { + return x.IamChallengeSolution + } + } + return nil +} + +func (x *ChallengeSolution) GetOracleChallengeSolution() *OracleChallengeSolution { + if x != nil { + if x, ok := x.Payload.(*ChallengeSolution_OracleChallengeSolution); ok { + return x.OracleChallengeSolution + } + } + return nil +} + +func (x *ChallengeSolution) GetTpmSolution() *TPMSolution { + if x != nil { + if x, ok := x.Payload.(*ChallengeSolution_TpmSolution); ok { + return x.TpmSolution + } + } + return nil +} + +func (x *ChallengeSolution) GetAzureChallengeSolution() *AzureChallengeSolution { + if x != nil { + if x, ok := x.Payload.(*ChallengeSolution_AzureChallengeSolution); ok { + return x.AzureChallengeSolution + } + } + return nil +} + +type isChallengeSolution_Payload interface { + isChallengeSolution_Payload() +} + +type ChallengeSolution_BoundKeypairChallengeSolution struct { + BoundKeypairChallengeSolution *BoundKeypairChallengeSolution `protobuf:"bytes,1,opt,name=bound_keypair_challenge_solution,json=boundKeypairChallengeSolution,proto3,oneof"` +} + +type 
ChallengeSolution_BoundKeypairRotationResponse struct { + BoundKeypairRotationResponse *BoundKeypairRotationResponse `protobuf:"bytes,2,opt,name=bound_keypair_rotation_response,json=boundKeypairRotationResponse,proto3,oneof"` +} + +type ChallengeSolution_IamChallengeSolution struct { + IamChallengeSolution *IAMChallengeSolution `protobuf:"bytes,3,opt,name=iam_challenge_solution,json=iamChallengeSolution,proto3,oneof"` +} + +type ChallengeSolution_OracleChallengeSolution struct { + OracleChallengeSolution *OracleChallengeSolution `protobuf:"bytes,4,opt,name=oracle_challenge_solution,json=oracleChallengeSolution,proto3,oneof"` +} + +type ChallengeSolution_TpmSolution struct { + TpmSolution *TPMSolution `protobuf:"bytes,5,opt,name=tpm_solution,json=tpmSolution,proto3,oneof"` +} + +type ChallengeSolution_AzureChallengeSolution struct { + AzureChallengeSolution *AzureChallengeSolution `protobuf:"bytes,6,opt,name=azure_challenge_solution,json=azureChallengeSolution,proto3,oneof"` +} + +func (*ChallengeSolution_BoundKeypairChallengeSolution) isChallengeSolution_Payload() {} + +func (*ChallengeSolution_BoundKeypairRotationResponse) isChallengeSolution_Payload() {} + +func (*ChallengeSolution_IamChallengeSolution) isChallengeSolution_Payload() {} + +func (*ChallengeSolution_OracleChallengeSolution) isChallengeSolution_Payload() {} + +func (*ChallengeSolution_TpmSolution) isChallengeSolution_Payload() {} + +func (*ChallengeSolution_AzureChallengeSolution) isChallengeSolution_Payload() {} + +// GivingUp should be sent by clients that fail to complete the join flow so +// that the Auth service can log an informative error message. +type GivingUp struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Reason is the reason the client is giving up. + Reason GivingUp_Reason `protobuf:"varint,1,opt,name=reason,proto3,enum=teleport.join.v1.GivingUp_Reason" json:"reason,omitempty"` + // Msg is an error message related to the failure. 
+ Msg string `protobuf:"bytes,2,opt,name=msg,proto3" json:"msg,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GivingUp) Reset() { + *x = GivingUp{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[27] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GivingUp) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GivingUp) ProtoMessage() {} + +func (x *GivingUp) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[27] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GivingUp.ProtoReflect.Descriptor instead. +func (*GivingUp) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{27} +} + +func (x *GivingUp) GetReason() GivingUp_Reason { + if x != nil { + return x.Reason + } + return GivingUp_REASON_UNSPECIFIED +} + +func (x *GivingUp) GetMsg() string { + if x != nil { + return x.Msg + } + return "" +} + +// JoinRequest is the message type sent from the joining client to the server. 
+type JoinRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Types that are valid to be assigned to Payload: + // + // *JoinRequest_ClientInit + // *JoinRequest_TokenInit + // *JoinRequest_BoundKeypairInit + // *JoinRequest_Solution + // *JoinRequest_IamInit + // *JoinRequest_GivingUp + // *JoinRequest_Ec2Init + // *JoinRequest_OidcInit + // *JoinRequest_OracleInit + // *JoinRequest_TpmInit + // *JoinRequest_AzureInit + Payload isJoinRequest_Payload `protobuf_oneof:"payload"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *JoinRequest) Reset() { + *x = JoinRequest{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[28] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *JoinRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*JoinRequest) ProtoMessage() {} + +func (x *JoinRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[28] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use JoinRequest.ProtoReflect.Descriptor instead. 
+func (*JoinRequest) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{28} +} + +func (x *JoinRequest) GetPayload() isJoinRequest_Payload { + if x != nil { + return x.Payload + } + return nil +} + +func (x *JoinRequest) GetClientInit() *ClientInit { + if x != nil { + if x, ok := x.Payload.(*JoinRequest_ClientInit); ok { + return x.ClientInit + } + } + return nil +} + +func (x *JoinRequest) GetTokenInit() *TokenInit { + if x != nil { + if x, ok := x.Payload.(*JoinRequest_TokenInit); ok { + return x.TokenInit + } + } + return nil +} + +func (x *JoinRequest) GetBoundKeypairInit() *BoundKeypairInit { + if x != nil { + if x, ok := x.Payload.(*JoinRequest_BoundKeypairInit); ok { + return x.BoundKeypairInit + } + } + return nil +} + +func (x *JoinRequest) GetSolution() *ChallengeSolution { + if x != nil { + if x, ok := x.Payload.(*JoinRequest_Solution); ok { + return x.Solution + } + } + return nil +} + +func (x *JoinRequest) GetIamInit() *IAMInit { + if x != nil { + if x, ok := x.Payload.(*JoinRequest_IamInit); ok { + return x.IamInit + } + } + return nil +} + +func (x *JoinRequest) GetGivingUp() *GivingUp { + if x != nil { + if x, ok := x.Payload.(*JoinRequest_GivingUp); ok { + return x.GivingUp + } + } + return nil +} + +func (x *JoinRequest) GetEc2Init() *EC2Init { + if x != nil { + if x, ok := x.Payload.(*JoinRequest_Ec2Init); ok { + return x.Ec2Init + } + } + return nil +} + +func (x *JoinRequest) GetOidcInit() *OIDCInit { + if x != nil { + if x, ok := x.Payload.(*JoinRequest_OidcInit); ok { + return x.OidcInit + } + } + return nil +} + +func (x *JoinRequest) GetOracleInit() *OracleInit { + if x != nil { + if x, ok := x.Payload.(*JoinRequest_OracleInit); ok { + return x.OracleInit + } + } + return nil +} + +func (x *JoinRequest) GetTpmInit() *TPMInit { + if x != nil { + if x, ok := x.Payload.(*JoinRequest_TpmInit); ok { + return x.TpmInit + } + } + return nil +} + +func (x *JoinRequest) GetAzureInit() *AzureInit { + 
if x != nil { + if x, ok := x.Payload.(*JoinRequest_AzureInit); ok { + return x.AzureInit + } + } + return nil +} + +type isJoinRequest_Payload interface { + isJoinRequest_Payload() +} + +type JoinRequest_ClientInit struct { + ClientInit *ClientInit `protobuf:"bytes,1,opt,name=client_init,json=clientInit,proto3,oneof"` +} + +type JoinRequest_TokenInit struct { + TokenInit *TokenInit `protobuf:"bytes,2,opt,name=token_init,json=tokenInit,proto3,oneof"` +} + +type JoinRequest_BoundKeypairInit struct { + BoundKeypairInit *BoundKeypairInit `protobuf:"bytes,3,opt,name=bound_keypair_init,json=boundKeypairInit,proto3,oneof"` +} + +type JoinRequest_Solution struct { + Solution *ChallengeSolution `protobuf:"bytes,4,opt,name=solution,proto3,oneof"` +} + +type JoinRequest_IamInit struct { + IamInit *IAMInit `protobuf:"bytes,5,opt,name=iam_init,json=iamInit,proto3,oneof"` +} + +type JoinRequest_GivingUp struct { + GivingUp *GivingUp `protobuf:"bytes,6,opt,name=giving_up,json=givingUp,proto3,oneof"` +} + +type JoinRequest_Ec2Init struct { + Ec2Init *EC2Init `protobuf:"bytes,7,opt,name=ec2_init,json=ec2Init,proto3,oneof"` +} + +type JoinRequest_OidcInit struct { + OidcInit *OIDCInit `protobuf:"bytes,8,opt,name=oidc_init,json=oidcInit,proto3,oneof"` +} + +type JoinRequest_OracleInit struct { + OracleInit *OracleInit `protobuf:"bytes,9,opt,name=oracle_init,json=oracleInit,proto3,oneof"` +} + +type JoinRequest_TpmInit struct { + TpmInit *TPMInit `protobuf:"bytes,10,opt,name=tpm_init,json=tpmInit,proto3,oneof"` +} + +type JoinRequest_AzureInit struct { + AzureInit *AzureInit `protobuf:"bytes,11,opt,name=azure_init,json=azureInit,proto3,oneof"` +} + +func (*JoinRequest_ClientInit) isJoinRequest_Payload() {} + +func (*JoinRequest_TokenInit) isJoinRequest_Payload() {} + +func (*JoinRequest_BoundKeypairInit) isJoinRequest_Payload() {} + +func (*JoinRequest_Solution) isJoinRequest_Payload() {} + +func (*JoinRequest_IamInit) isJoinRequest_Payload() {} + +func (*JoinRequest_GivingUp) 
isJoinRequest_Payload() {} + +func (*JoinRequest_Ec2Init) isJoinRequest_Payload() {} + +func (*JoinRequest_OidcInit) isJoinRequest_Payload() {} + +func (*JoinRequest_OracleInit) isJoinRequest_Payload() {} + +func (*JoinRequest_TpmInit) isJoinRequest_Payload() {} + +func (*JoinRequest_AzureInit) isJoinRequest_Payload() {} + +// ServerInit is the first message sent from the server in response to the +// ClientInit message. +type ServerInit struct { + state protoimpl.MessageState `protogen:"open.v1"` + // JoinMethod is the name of the selected join method. + JoinMethod string `protobuf:"bytes,1,opt,name=join_method,json=joinMethod,proto3" json:"join_method,omitempty"` + // SignatureAlgorithmSuite is the name of the signature algorithm suite + // currently configured for the cluster. + SignatureAlgorithmSuite string `protobuf:"bytes,2,opt,name=signature_algorithm_suite,json=signatureAlgorithmSuite,proto3" json:"signature_algorithm_suite,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ServerInit) Reset() { + *x = ServerInit{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[29] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ServerInit) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ServerInit) ProtoMessage() {} + +func (x *ServerInit) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[29] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ServerInit.ProtoReflect.Descriptor instead. 
+func (*ServerInit) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{29} +} + +func (x *ServerInit) GetJoinMethod() string { + if x != nil { + return x.JoinMethod + } + return "" +} + +func (x *ServerInit) GetSignatureAlgorithmSuite() string { + if x != nil { + return x.SignatureAlgorithmSuite + } + return "" +} + +// Challenge is a challenge message sent from the server that the client must solve. +type Challenge struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Types that are valid to be assigned to Payload: + // + // *Challenge_BoundKeypairChallenge + // *Challenge_BoundKeypairRotationRequest + // *Challenge_IamChallenge + // *Challenge_OracleChallenge + // *Challenge_TpmEncryptedCredential + // *Challenge_AzureChallenge + Payload isChallenge_Payload `protobuf_oneof:"payload"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *Challenge) Reset() { + *x = Challenge{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[30] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *Challenge) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*Challenge) ProtoMessage() {} + +func (x *Challenge) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[30] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use Challenge.ProtoReflect.Descriptor instead. 
+func (*Challenge) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{30} +} + +func (x *Challenge) GetPayload() isChallenge_Payload { + if x != nil { + return x.Payload + } + return nil +} + +func (x *Challenge) GetBoundKeypairChallenge() *BoundKeypairChallenge { + if x != nil { + if x, ok := x.Payload.(*Challenge_BoundKeypairChallenge); ok { + return x.BoundKeypairChallenge + } + } + return nil +} + +func (x *Challenge) GetBoundKeypairRotationRequest() *BoundKeypairRotationRequest { + if x != nil { + if x, ok := x.Payload.(*Challenge_BoundKeypairRotationRequest); ok { + return x.BoundKeypairRotationRequest + } + } + return nil +} + +func (x *Challenge) GetIamChallenge() *IAMChallenge { + if x != nil { + if x, ok := x.Payload.(*Challenge_IamChallenge); ok { + return x.IamChallenge + } + } + return nil +} + +func (x *Challenge) GetOracleChallenge() *OracleChallenge { + if x != nil { + if x, ok := x.Payload.(*Challenge_OracleChallenge); ok { + return x.OracleChallenge + } + } + return nil +} + +func (x *Challenge) GetTpmEncryptedCredential() *TPMEncryptedCredential { + if x != nil { + if x, ok := x.Payload.(*Challenge_TpmEncryptedCredential); ok { + return x.TpmEncryptedCredential + } + } + return nil +} + +func (x *Challenge) GetAzureChallenge() *AzureChallenge { + if x != nil { + if x, ok := x.Payload.(*Challenge_AzureChallenge); ok { + return x.AzureChallenge + } + } + return nil +} + +type isChallenge_Payload interface { + isChallenge_Payload() +} + +type Challenge_BoundKeypairChallenge struct { + BoundKeypairChallenge *BoundKeypairChallenge `protobuf:"bytes,1,opt,name=bound_keypair_challenge,json=boundKeypairChallenge,proto3,oneof"` +} + +type Challenge_BoundKeypairRotationRequest struct { + BoundKeypairRotationRequest *BoundKeypairRotationRequest `protobuf:"bytes,2,opt,name=bound_keypair_rotation_request,json=boundKeypairRotationRequest,proto3,oneof"` +} + +type Challenge_IamChallenge struct { + IamChallenge 
*IAMChallenge `protobuf:"bytes,3,opt,name=iam_challenge,json=iamChallenge,proto3,oneof"` +} + +type Challenge_OracleChallenge struct { + OracleChallenge *OracleChallenge `protobuf:"bytes,4,opt,name=oracle_challenge,json=oracleChallenge,proto3,oneof"` +} + +type Challenge_TpmEncryptedCredential struct { + TpmEncryptedCredential *TPMEncryptedCredential `protobuf:"bytes,5,opt,name=tpm_encrypted_credential,json=tpmEncryptedCredential,proto3,oneof"` +} + +type Challenge_AzureChallenge struct { + AzureChallenge *AzureChallenge `protobuf:"bytes,6,opt,name=azure_challenge,json=azureChallenge,proto3,oneof"` +} + +func (*Challenge_BoundKeypairChallenge) isChallenge_Payload() {} + +func (*Challenge_BoundKeypairRotationRequest) isChallenge_Payload() {} + +func (*Challenge_IamChallenge) isChallenge_Payload() {} + +func (*Challenge_OracleChallenge) isChallenge_Payload() {} + +func (*Challenge_TpmEncryptedCredential) isChallenge_Payload() {} + +func (*Challenge_AzureChallenge) isChallenge_Payload() {} + +// Result is the final message sent from the cluster back to the client. It +// contains the result of the joining process, including the assigned host ID +// and issued certificates. 
+type Result struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Types that are valid to be assigned to Payload: + // + // *Result_HostResult + // *Result_BotResult + Payload isResult_Payload `protobuf_oneof:"payload"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *Result) Reset() { + *x = Result{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[31] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *Result) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*Result) ProtoMessage() {} + +func (x *Result) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[31] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use Result.ProtoReflect.Descriptor instead. +func (*Result) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{31} +} + +func (x *Result) GetPayload() isResult_Payload { + if x != nil { + return x.Payload + } + return nil +} + +func (x *Result) GetHostResult() *HostResult { + if x != nil { + if x, ok := x.Payload.(*Result_HostResult); ok { + return x.HostResult + } + } + return nil +} + +func (x *Result) GetBotResult() *BotResult { + if x != nil { + if x, ok := x.Payload.(*Result_BotResult); ok { + return x.BotResult + } + } + return nil +} + +type isResult_Payload interface { + isResult_Payload() +} + +type Result_HostResult struct { + HostResult *HostResult `protobuf:"bytes,1,opt,name=host_result,json=hostResult,proto3,oneof"` +} + +type Result_BotResult struct { + BotResult *BotResult `protobuf:"bytes,2,opt,name=bot_result,json=botResult,proto3,oneof"` +} + +func (*Result_HostResult) isResult_Payload() {} + +func (*Result_BotResult) isResult_Payload() {} + +// Certificates holds issued 
certificates and cluster CAs. +type Certificates struct { + state protoimpl.MessageState `protogen:"open.v1"` + // TlsCert is an X.509 certificate encoded in ASN.1 DER form. + TlsCert []byte `protobuf:"bytes,1,opt,name=tls_cert,json=tlsCert,proto3" json:"tls_cert,omitempty"` + // TlsCaCerts is a list of TLS certificate authorities that the client should trust. + // Each certificate is encoded in ASN.1 DER form. + TlsCaCerts [][]byte `protobuf:"bytes,2,rep,name=tls_ca_certs,json=tlsCaCerts,proto3" json:"tls_ca_certs,omitempty"` + // SshCert is an SSH certificate encoded in SSH wire format. + SshCert []byte `protobuf:"bytes,3,opt,name=ssh_cert,json=sshCert,proto3" json:"ssh_cert,omitempty"` + // SshCaKeys is a list of SSH certificate authority public keys that the client should trust. + // Each CA key is encoded in SSH wire format. + SshCaKeys [][]byte `protobuf:"bytes,4,rep,name=ssh_ca_keys,json=sshCaKeys,proto3" json:"ssh_ca_keys,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *Certificates) Reset() { + *x = Certificates{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[32] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *Certificates) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*Certificates) ProtoMessage() {} + +func (x *Certificates) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[32] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use Certificates.ProtoReflect.Descriptor instead. 
+func (*Certificates) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{32} +} + +func (x *Certificates) GetTlsCert() []byte { + if x != nil { + return x.TlsCert + } + return nil +} + +func (x *Certificates) GetTlsCaCerts() [][]byte { + if x != nil { + return x.TlsCaCerts + } + return nil +} + +func (x *Certificates) GetSshCert() []byte { + if x != nil { + return x.SshCert + } + return nil +} + +func (x *Certificates) GetSshCaKeys() [][]byte { + if x != nil { + return x.SshCaKeys + } + return nil +} + +// HostResult holds results for host joining. +type HostResult struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Certificates holds issued certificates and cluster CAs. + Certificates *Certificates `protobuf:"bytes,1,opt,name=certificates,proto3" json:"certificates,omitempty"` + // HostId is the unique ID assigned to the host. + HostId string `protobuf:"bytes,2,opt,name=host_id,json=hostId,proto3" json:"host_id,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *HostResult) Reset() { + *x = HostResult{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[33] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *HostResult) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*HostResult) ProtoMessage() {} + +func (x *HostResult) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[33] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use HostResult.ProtoReflect.Descriptor instead. 
+func (*HostResult) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{33} +} + +func (x *HostResult) GetCertificates() *Certificates { + if x != nil { + return x.Certificates + } + return nil +} + +func (x *HostResult) GetHostId() string { + if x != nil { + return x.HostId + } + return "" +} + +// BotResult holds results for bot joining. +type BotResult struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Certificates holds issued certificates and cluster CAs. + Certificates *Certificates `protobuf:"bytes,1,opt,name=certificates,proto3" json:"certificates,omitempty"` + // BoundKeypairResult holds extra result parameters relevant to the bound keypair join method. + BoundKeypairResult *BoundKeypairResult `protobuf:"bytes,2,opt,name=bound_keypair_result,json=boundKeypairResult,proto3,oneof" json:"bound_keypair_result,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *BotResult) Reset() { + *x = BotResult{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[34] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *BotResult) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*BotResult) ProtoMessage() {} + +func (x *BotResult) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[34] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use BotResult.ProtoReflect.Descriptor instead. 
+func (*BotResult) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{34} +} + +func (x *BotResult) GetCertificates() *Certificates { + if x != nil { + return x.Certificates + } + return nil +} + +func (x *BotResult) GetBoundKeypairResult() *BoundKeypairResult { + if x != nil { + return x.BoundKeypairResult + } + return nil +} + +// JoinResponse is the message type sent from the server to the joining client. +type JoinResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Types that are valid to be assigned to Payload: + // + // *JoinResponse_Init + // *JoinResponse_Challenge + // *JoinResponse_Result + Payload isJoinResponse_Payload `protobuf_oneof:"payload"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *JoinResponse) Reset() { + *x = JoinResponse{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[35] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *JoinResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*JoinResponse) ProtoMessage() {} + +func (x *JoinResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[35] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use JoinResponse.ProtoReflect.Descriptor instead. 
+func (*JoinResponse) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{35} +} + +func (x *JoinResponse) GetPayload() isJoinResponse_Payload { + if x != nil { + return x.Payload + } + return nil +} + +func (x *JoinResponse) GetInit() *ServerInit { + if x != nil { + if x, ok := x.Payload.(*JoinResponse_Init); ok { + return x.Init + } + } + return nil +} + +func (x *JoinResponse) GetChallenge() *Challenge { + if x != nil { + if x, ok := x.Payload.(*JoinResponse_Challenge); ok { + return x.Challenge + } + } + return nil +} + +func (x *JoinResponse) GetResult() *Result { + if x != nil { + if x, ok := x.Payload.(*JoinResponse_Result); ok { + return x.Result + } + } + return nil +} + +type isJoinResponse_Payload interface { + isJoinResponse_Payload() +} + +type JoinResponse_Init struct { + // Init is the initial message sent from the server in response to the + // ClientInit message. It specifies the join method used by the provision token. + Init *ServerInit `protobuf:"bytes,1,opt,name=init,proto3,oneof"` +} + +type JoinResponse_Challenge struct { + // Challenge is a challenge issued by the server that the client must solve + // in order to complete the join flow. The challenge type depends on the join method. + // Each method may issue zero or more challenges that the client must solve. + Challenge *Challenge `protobuf:"bytes,2,opt,name=challenge,proto3,oneof"` +} + +type JoinResponse_Result struct { + // Result is the result of the join flow. It is the final message sent from + // the cluster when the join flow is successful. + // For the token join method, it is sent immediately in response to the ClientInit request. 
+ Result *Result `protobuf:"bytes,3,opt,name=result,proto3,oneof"` +} + +func (*JoinResponse_Init) isJoinResponse_Payload() {} + +func (*JoinResponse_Challenge) isJoinResponse_Payload() {} + +func (*JoinResponse_Result) isJoinResponse_Payload() {} + +// ProxySuppliedParams holds parameters set by the Proxy when nodes join +// via the proxy address. They must only be trusted if the incoming join +// request is authenticated as the Proxy. +type ClientInit_ProxySuppliedParams struct { + state protoimpl.MessageState `protogen:"open.v1"` + // RemoteAddr is the remote address of the host requesting a host certificate. + // It replaces 0.0.0.0 in the list of additional principals. + RemoteAddr string `protobuf:"bytes,1,opt,name=remote_addr,json=remoteAddr,proto3" json:"remote_addr,omitempty"` + // ClientVersion is the Teleport version of the client attempting to join. + ClientVersion string `protobuf:"bytes,2,opt,name=client_version,json=clientVersion,proto3" json:"client_version,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ClientInit_ProxySuppliedParams) Reset() { + *x = ClientInit_ProxySuppliedParams{} + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[36] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ClientInit_ProxySuppliedParams) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ClientInit_ProxySuppliedParams) ProtoMessage() {} + +func (x *ClientInit_ProxySuppliedParams) ProtoReflect() protoreflect.Message { + mi := &file_teleport_join_v1_joinservice_proto_msgTypes[36] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ClientInit_ProxySuppliedParams.ProtoReflect.Descriptor instead. 
+func (*ClientInit_ProxySuppliedParams) Descriptor() ([]byte, []int) { + return file_teleport_join_v1_joinservice_proto_rawDescGZIP(), []int{0, 0} +} + +func (x *ClientInit_ProxySuppliedParams) GetRemoteAddr() string { + if x != nil { + return x.RemoteAddr + } + return "" +} + +func (x *ClientInit_ProxySuppliedParams) GetClientVersion() string { + if x != nil { + return x.ClientVersion + } + return "" +} + +var File_teleport_join_v1_joinservice_proto protoreflect.FileDescriptor + +const file_teleport_join_v1_joinservice_proto_rawDesc = "" + + "\n" + + "\"teleport/join/v1/joinservice.proto\x12\x10teleport.join.v1\x1a\x1fgoogle/protobuf/timestamp.proto\"\xa0\x03\n" + + "\n" + + "ClientInit\x12$\n" + + "\vjoin_method\x18\x01 \x01(\tH\x00R\n" + + "joinMethod\x88\x01\x01\x12\x1d\n" + + "\n" + + "token_name\x18\x02 \x01(\tR\ttokenName\x12\x1f\n" + + "\vsystem_role\x18\x03 \x01(\tR\n" + + "systemRole\x12,\n" + + "\x12forwarded_by_proxy\x18\x04 \x01(\bR\x10forwardedByProxy\x12q\n" + + "\x19proxy_supplied_parameters\x18\x05 \x01(\v20.teleport.join.v1.ClientInit.ProxySuppliedParamsH\x01R\x17proxySuppliedParameters\x88\x01\x01\x1a]\n" + + "\x13ProxySuppliedParams\x12\x1f\n" + + "\vremote_addr\x18\x01 \x01(\tR\n" + + "remoteAddr\x12%\n" + + "\x0eclient_version\x18\x02 \x01(\tR\rclientVersionB\x0e\n" + + "\f_join_methodB\x1c\n" + + "\x1a_proxy_supplied_parameters\"X\n" + + "\n" + + "PublicKeys\x12$\n" + + "\x0epublic_tls_key\x18\x01 \x01(\fR\fpublicTlsKey\x12$\n" + + "\x0epublic_ssh_key\x18\x02 \x01(\fR\fpublicSshKey\"\xba\x01\n" + + "\n" + + "HostParams\x12=\n" + + "\vpublic_keys\x18\x01 \x01(\v2\x1c.teleport.join.v1.PublicKeysR\n" + + "publicKeys\x12\x1b\n" + + "\thost_name\x18\x02 \x01(\tR\bhostName\x123\n" + + "\x15additional_principals\x18\x03 \x03(\tR\x14additionalPrincipals\x12\x1b\n" + + "\tdns_names\x18\x04 \x03(\tR\bdnsNames\"\x91\x01\n" + + "\tBotParams\x12=\n" + + "\vpublic_keys\x18\x01 \x01(\v2\x1c.teleport.join.v1.PublicKeysR\n" + + "publicKeys\x129\n" + + 
"\aexpires\x18\x02 \x01(\v2\x1a.google.protobuf.TimestampH\x00R\aexpires\x88\x01\x01B\n" + + "\n" + + "\b_expires\"\x98\x01\n" + + "\fClientParams\x12?\n" + + "\vhost_params\x18\x01 \x01(\v2\x1c.teleport.join.v1.HostParamsH\x00R\n" + + "hostParams\x12<\n" + + "\n" + + "bot_params\x18\x02 \x01(\v2\x1b.teleport.join.v1.BotParamsH\x00R\tbotParamsB\t\n" + + "\apayload\"P\n" + + "\tTokenInit\x12C\n" + + "\rclient_params\x18\x01 \x01(\v2\x1e.teleport.join.v1.ClientParamsR\fclientParams\"j\n" + + "\bOIDCInit\x12C\n" + + "\rclient_params\x18\x01 \x01(\v2\x1e.teleport.join.v1.ClientParamsR\fclientParams\x12\x19\n" + + "\bid_token\x18\x02 \x01(\fR\aidToken\"\xb7\x01\n" + + "\x10BoundKeypairInit\x12C\n" + + "\rclient_params\x18\x01 \x01(\v2\x1e.teleport.join.v1.ClientParamsR\fclientParams\x12.\n" + + "\x13initial_join_secret\x18\x02 \x01(\tR\x11initialJoinSecret\x12.\n" + + "\x13previous_join_state\x18\x03 \x01(\fR\x11previousJoinState\"T\n" + + "\x15BoundKeypairChallenge\x12\x1d\n" + + "\n" + + "public_key\x18\x01 \x01(\fR\tpublicKey\x12\x1c\n" + + "\tchallenge\x18\x02 \x01(\tR\tchallenge\";\n" + + "\x1dBoundKeypairChallengeSolution\x12\x1a\n" + + "\bsolution\x18\x01 \x01(\fR\bsolution\"Y\n" + + "\x1bBoundKeypairRotationRequest\x12:\n" + + "\x19signature_algorithm_suite\x18\x01 \x01(\tR\x17signatureAlgorithmSuite\"=\n" + + "\x1cBoundKeypairRotationResponse\x12\x1d\n" + + "\n" + + "public_key\x18\x01 \x01(\fR\tpublicKey\"R\n" + + "\x12BoundKeypairResult\x12\x1d\n" + + "\n" + + "join_state\x18\x02 \x01(\fR\tjoinState\x12\x1d\n" + + "\n" + + "public_key\x18\x03 \x01(\fR\tpublicKey\"N\n" + + "\aIAMInit\x12C\n" + + "\rclient_params\x18\x01 \x01(\v2\x1e.teleport.join.v1.ClientParamsR\fclientParams\",\n" + + "\fIAMChallenge\x12\x1c\n" + + "\tchallenge\x18\x01 \x01(\tR\tchallenge\"H\n" + + "\x14IAMChallengeSolution\x120\n" + + "\x14sts_identity_request\x18\x01 \x01(\fR\x12stsIdentityRequest\"j\n" + + "\aEC2Init\x12C\n" + + "\rclient_params\x18\x01 
\x01(\v2\x1e.teleport.join.v1.ClientParamsR\fclientParams\x12\x1a\n" + + "\bdocument\x18\x02 \x01(\fR\bdocument\"Q\n" + + "\n" + + "OracleInit\x12C\n" + + "\rclient_params\x18\x01 \x01(\v2\x1e.teleport.join.v1.ClientParamsR\fclientParams\"/\n" + + "\x0fOracleChallenge\x12\x1c\n" + + "\tchallenge\x18\x01 \x01(\tR\tchallenge\"\x9c\x01\n" + + "\x17OracleChallengeSolution\x12\x12\n" + + "\x04cert\x18\x01 \x01(\fR\x04cert\x12\"\n" + + "\fintermediate\x18\x02 \x01(\fR\fintermediate\x12\x1c\n" + + "\tsignature\x18\x03 \x01(\fR\tsignature\x12+\n" + + "\x12signed_root_ca_req\x18\x04 \x01(\fR\x0fsignedRootCaReq\"\x9b\x02\n" + + "\aTPMInit\x12C\n" + + "\rclient_params\x18\x01 \x01(\v2\x1e.teleport.join.v1.ClientParamsR\fclientParams\x12\x16\n" + + "\x06public\x18\x02 \x01(\fR\x06public\x12\x1f\n" + + "\vcreate_data\x18\x03 \x01(\fR\n" + + "createData\x12-\n" + + "\x12create_attestation\x18\x04 \x01(\fR\x11createAttestation\x12)\n" + + "\x10create_signature\x18\x05 \x01(\fR\x0fcreateSignature\x12\x19\n" + + "\aek_cert\x18\x06 \x01(\fH\x00R\x06ekCert\x12\x17\n" + + "\x06ek_key\x18\a \x01(\fH\x00R\x05ekKeyB\x04\n" + + "\x02ek\"Y\n" + + "\x16TPMEncryptedCredential\x12'\n" + + "\x0fcredential_blob\x18\x01 \x01(\fR\x0ecredentialBlob\x12\x16\n" + + "\x06secret\x18\x02 \x01(\fR\x06secret\")\n" + + "\vTPMSolution\x12\x1a\n" + + "\bsolution\x18\x01 \x01(\fR\bsolution\"P\n" + + "\tAzureInit\x12C\n" + + "\rclient_params\x18\x01 \x01(\v2\x1e.teleport.join.v1.ClientParamsR\fclientParams\".\n" + + "\x0eAzureChallenge\x12\x1c\n" + + "\tchallenge\x18\x01 \x01(\tR\tchallenge\"\x84\x01\n" + + "\x16AzureChallengeSolution\x12#\n" + + "\rattested_data\x18\x01 \x01(\fR\fattestedData\x12\"\n" + + "\fintermediate\x18\x02 \x01(\fR\fintermediate\x12!\n" + + "\faccess_token\x18\x03 \x01(\tR\vaccessToken\"\x86\x05\n" + + "\x11ChallengeSolution\x12z\n" + + " bound_keypair_challenge_solution\x18\x01 \x01(\v2/.teleport.join.v1.BoundKeypairChallengeSolutionH\x00R\x1dboundKeypairChallengeSolution\x12w\n" + + 
"\x1fbound_keypair_rotation_response\x18\x02 \x01(\v2..teleport.join.v1.BoundKeypairRotationResponseH\x00R\x1cboundKeypairRotationResponse\x12^\n" + + "\x16iam_challenge_solution\x18\x03 \x01(\v2&.teleport.join.v1.IAMChallengeSolutionH\x00R\x14iamChallengeSolution\x12g\n" + + "\x19oracle_challenge_solution\x18\x04 \x01(\v2).teleport.join.v1.OracleChallengeSolutionH\x00R\x17oracleChallengeSolution\x12B\n" + + "\ftpm_solution\x18\x05 \x01(\v2\x1d.teleport.join.v1.TPMSolutionH\x00R\vtpmSolution\x12d\n" + + "\x18azure_challenge_solution\x18\x06 \x01(\v2(.teleport.join.v1.AzureChallengeSolutionH\x00R\x16azureChallengeSolutionB\t\n" + + "\apayload\"\xe9\x01\n" + + "\bGivingUp\x129\n" + + "\x06reason\x18\x01 \x01(\x0e2!.teleport.join.v1.GivingUp.ReasonR\x06reason\x12\x10\n" + + "\x03msg\x18\x02 \x01(\tR\x03msg\"\x8f\x01\n" + + "\x06Reason\x12\x16\n" + + "\x12REASON_UNSPECIFIED\x10\x00\x12\"\n" + + "\x1eREASON_UNSUPPORTED_JOIN_METHOD\x10\x01\x12#\n" + + "\x1fREASON_UNSUPPORTED_MESSAGE_TYPE\x10\x02\x12$\n" + + " REASON_CHALLENGE_SOLUTION_FAILED\x10\x03\"\xcb\x05\n" + + "\vJoinRequest\x12?\n" + + "\vclient_init\x18\x01 \x01(\v2\x1c.teleport.join.v1.ClientInitH\x00R\n" + + "clientInit\x12<\n" + + "\n" + + "token_init\x18\x02 \x01(\v2\x1b.teleport.join.v1.TokenInitH\x00R\ttokenInit\x12R\n" + + "\x12bound_keypair_init\x18\x03 \x01(\v2\".teleport.join.v1.BoundKeypairInitH\x00R\x10boundKeypairInit\x12A\n" + + "\bsolution\x18\x04 \x01(\v2#.teleport.join.v1.ChallengeSolutionH\x00R\bsolution\x126\n" + + "\biam_init\x18\x05 \x01(\v2\x19.teleport.join.v1.IAMInitH\x00R\aiamInit\x129\n" + + "\tgiving_up\x18\x06 \x01(\v2\x1a.teleport.join.v1.GivingUpH\x00R\bgivingUp\x126\n" + + "\bec2_init\x18\a \x01(\v2\x19.teleport.join.v1.EC2InitH\x00R\aec2Init\x129\n" + + "\toidc_init\x18\b \x01(\v2\x1a.teleport.join.v1.OIDCInitH\x00R\boidcInit\x12?\n" + + "\voracle_init\x18\t \x01(\v2\x1c.teleport.join.v1.OracleInitH\x00R\n" + + "oracleInit\x126\n" + + "\btpm_init\x18\n" + + " 
\x01(\v2\x19.teleport.join.v1.TPMInitH\x00R\atpmInit\x12<\n" + + "\n" + + "azure_init\x18\v \x01(\v2\x1b.teleport.join.v1.AzureInitH\x00R\tazureInitB\t\n" + + "\apayload\"i\n" + + "\n" + + "ServerInit\x12\x1f\n" + + "\vjoin_method\x18\x01 \x01(\tR\n" + + "joinMethod\x12:\n" + + "\x19signature_algorithm_suite\x18\x02 \x01(\tR\x17signatureAlgorithmSuite\"\xb9\x04\n" + + "\tChallenge\x12a\n" + + "\x17bound_keypair_challenge\x18\x01 \x01(\v2'.teleport.join.v1.BoundKeypairChallengeH\x00R\x15boundKeypairChallenge\x12t\n" + + "\x1ebound_keypair_rotation_request\x18\x02 \x01(\v2-.teleport.join.v1.BoundKeypairRotationRequestH\x00R\x1bboundKeypairRotationRequest\x12E\n" + + "\riam_challenge\x18\x03 \x01(\v2\x1e.teleport.join.v1.IAMChallengeH\x00R\fiamChallenge\x12N\n" + + "\x10oracle_challenge\x18\x04 \x01(\v2!.teleport.join.v1.OracleChallengeH\x00R\x0foracleChallenge\x12d\n" + + "\x18tpm_encrypted_credential\x18\x05 \x01(\v2(.teleport.join.v1.TPMEncryptedCredentialH\x00R\x16tpmEncryptedCredential\x12K\n" + + "\x0fazure_challenge\x18\x06 \x01(\v2 .teleport.join.v1.AzureChallengeH\x00R\x0eazureChallengeB\t\n" + + "\apayload\"\x92\x01\n" + + "\x06Result\x12?\n" + + "\vhost_result\x18\x01 \x01(\v2\x1c.teleport.join.v1.HostResultH\x00R\n" + + "hostResult\x12<\n" + + "\n" + + "bot_result\x18\x02 \x01(\v2\x1b.teleport.join.v1.BotResultH\x00R\tbotResultB\t\n" + + "\apayload\"\x86\x01\n" + + "\fCertificates\x12\x19\n" + + "\btls_cert\x18\x01 \x01(\fR\atlsCert\x12 \n" + + "\ftls_ca_certs\x18\x02 \x03(\fR\n" + + "tlsCaCerts\x12\x19\n" + + "\bssh_cert\x18\x03 \x01(\fR\asshCert\x12\x1e\n" + + "\vssh_ca_keys\x18\x04 \x03(\fR\tsshCaKeys\"i\n" + + "\n" + + "HostResult\x12B\n" + + "\fcertificates\x18\x01 \x01(\v2\x1e.teleport.join.v1.CertificatesR\fcertificates\x12\x17\n" + + "\ahost_id\x18\x02 \x01(\tR\x06hostId\"\xc5\x01\n" + + "\tBotResult\x12B\n" + + "\fcertificates\x18\x01 \x01(\v2\x1e.teleport.join.v1.CertificatesR\fcertificates\x12[\n" + + "\x14bound_keypair_result\x18\x02 
\x01(\v2$.teleport.join.v1.BoundKeypairResultH\x00R\x12boundKeypairResult\x88\x01\x01B\x17\n" + + "\x15_bound_keypair_result\"\xbe\x01\n" + + "\fJoinResponse\x122\n" + + "\x04init\x18\x01 \x01(\v2\x1c.teleport.join.v1.ServerInitH\x00R\x04init\x12;\n" + + "\tchallenge\x18\x02 \x01(\v2\x1b.teleport.join.v1.ChallengeH\x00R\tchallenge\x122\n" + + "\x06result\x18\x03 \x01(\v2\x18.teleport.join.v1.ResultH\x00R\x06resultB\t\n" + + "\apayload2X\n" + + "\vJoinService\x12I\n" + + "\x04Join\x12\x1d.teleport.join.v1.JoinRequest\x1a\x1e.teleport.join.v1.JoinResponse(\x010\x01BLZJgithub.com/gravitational/teleport/api/gen/proto/go/teleport/join/v1;joinv1b\x06proto3" + +var ( + file_teleport_join_v1_joinservice_proto_rawDescOnce sync.Once + file_teleport_join_v1_joinservice_proto_rawDescData []byte +) + +func file_teleport_join_v1_joinservice_proto_rawDescGZIP() []byte { + file_teleport_join_v1_joinservice_proto_rawDescOnce.Do(func() { + file_teleport_join_v1_joinservice_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_teleport_join_v1_joinservice_proto_rawDesc), len(file_teleport_join_v1_joinservice_proto_rawDesc))) + }) + return file_teleport_join_v1_joinservice_proto_rawDescData +} + +var file_teleport_join_v1_joinservice_proto_enumTypes = make([]protoimpl.EnumInfo, 1) +var file_teleport_join_v1_joinservice_proto_msgTypes = make([]protoimpl.MessageInfo, 37) +var file_teleport_join_v1_joinservice_proto_goTypes = []any{ + (GivingUp_Reason)(0), // 0: teleport.join.v1.GivingUp.Reason + (*ClientInit)(nil), // 1: teleport.join.v1.ClientInit + (*PublicKeys)(nil), // 2: teleport.join.v1.PublicKeys + (*HostParams)(nil), // 3: teleport.join.v1.HostParams + (*BotParams)(nil), // 4: teleport.join.v1.BotParams + (*ClientParams)(nil), // 5: teleport.join.v1.ClientParams + (*TokenInit)(nil), // 6: teleport.join.v1.TokenInit + (*OIDCInit)(nil), // 7: teleport.join.v1.OIDCInit + (*BoundKeypairInit)(nil), // 8: teleport.join.v1.BoundKeypairInit + 
(*BoundKeypairChallenge)(nil), // 9: teleport.join.v1.BoundKeypairChallenge + (*BoundKeypairChallengeSolution)(nil), // 10: teleport.join.v1.BoundKeypairChallengeSolution + (*BoundKeypairRotationRequest)(nil), // 11: teleport.join.v1.BoundKeypairRotationRequest + (*BoundKeypairRotationResponse)(nil), // 12: teleport.join.v1.BoundKeypairRotationResponse + (*BoundKeypairResult)(nil), // 13: teleport.join.v1.BoundKeypairResult + (*IAMInit)(nil), // 14: teleport.join.v1.IAMInit + (*IAMChallenge)(nil), // 15: teleport.join.v1.IAMChallenge + (*IAMChallengeSolution)(nil), // 16: teleport.join.v1.IAMChallengeSolution + (*EC2Init)(nil), // 17: teleport.join.v1.EC2Init + (*OracleInit)(nil), // 18: teleport.join.v1.OracleInit + (*OracleChallenge)(nil), // 19: teleport.join.v1.OracleChallenge + (*OracleChallengeSolution)(nil), // 20: teleport.join.v1.OracleChallengeSolution + (*TPMInit)(nil), // 21: teleport.join.v1.TPMInit + (*TPMEncryptedCredential)(nil), // 22: teleport.join.v1.TPMEncryptedCredential + (*TPMSolution)(nil), // 23: teleport.join.v1.TPMSolution + (*AzureInit)(nil), // 24: teleport.join.v1.AzureInit + (*AzureChallenge)(nil), // 25: teleport.join.v1.AzureChallenge + (*AzureChallengeSolution)(nil), // 26: teleport.join.v1.AzureChallengeSolution + (*ChallengeSolution)(nil), // 27: teleport.join.v1.ChallengeSolution + (*GivingUp)(nil), // 28: teleport.join.v1.GivingUp + (*JoinRequest)(nil), // 29: teleport.join.v1.JoinRequest + (*ServerInit)(nil), // 30: teleport.join.v1.ServerInit + (*Challenge)(nil), // 31: teleport.join.v1.Challenge + (*Result)(nil), // 32: teleport.join.v1.Result + (*Certificates)(nil), // 33: teleport.join.v1.Certificates + (*HostResult)(nil), // 34: teleport.join.v1.HostResult + (*BotResult)(nil), // 35: teleport.join.v1.BotResult + (*JoinResponse)(nil), // 36: teleport.join.v1.JoinResponse + (*ClientInit_ProxySuppliedParams)(nil), // 37: teleport.join.v1.ClientInit.ProxySuppliedParams + (*timestamppb.Timestamp)(nil), // 38: 
google.protobuf.Timestamp +} +var file_teleport_join_v1_joinservice_proto_depIdxs = []int32{ + 37, // 0: teleport.join.v1.ClientInit.proxy_supplied_parameters:type_name -> teleport.join.v1.ClientInit.ProxySuppliedParams + 2, // 1: teleport.join.v1.HostParams.public_keys:type_name -> teleport.join.v1.PublicKeys + 2, // 2: teleport.join.v1.BotParams.public_keys:type_name -> teleport.join.v1.PublicKeys + 38, // 3: teleport.join.v1.BotParams.expires:type_name -> google.protobuf.Timestamp + 3, // 4: teleport.join.v1.ClientParams.host_params:type_name -> teleport.join.v1.HostParams + 4, // 5: teleport.join.v1.ClientParams.bot_params:type_name -> teleport.join.v1.BotParams + 5, // 6: teleport.join.v1.TokenInit.client_params:type_name -> teleport.join.v1.ClientParams + 5, // 7: teleport.join.v1.OIDCInit.client_params:type_name -> teleport.join.v1.ClientParams + 5, // 8: teleport.join.v1.BoundKeypairInit.client_params:type_name -> teleport.join.v1.ClientParams + 5, // 9: teleport.join.v1.IAMInit.client_params:type_name -> teleport.join.v1.ClientParams + 5, // 10: teleport.join.v1.EC2Init.client_params:type_name -> teleport.join.v1.ClientParams + 5, // 11: teleport.join.v1.OracleInit.client_params:type_name -> teleport.join.v1.ClientParams + 5, // 12: teleport.join.v1.TPMInit.client_params:type_name -> teleport.join.v1.ClientParams + 5, // 13: teleport.join.v1.AzureInit.client_params:type_name -> teleport.join.v1.ClientParams + 10, // 14: teleport.join.v1.ChallengeSolution.bound_keypair_challenge_solution:type_name -> teleport.join.v1.BoundKeypairChallengeSolution + 12, // 15: teleport.join.v1.ChallengeSolution.bound_keypair_rotation_response:type_name -> teleport.join.v1.BoundKeypairRotationResponse + 16, // 16: teleport.join.v1.ChallengeSolution.iam_challenge_solution:type_name -> teleport.join.v1.IAMChallengeSolution + 20, // 17: teleport.join.v1.ChallengeSolution.oracle_challenge_solution:type_name -> teleport.join.v1.OracleChallengeSolution + 23, // 18: 
teleport.join.v1.ChallengeSolution.tpm_solution:type_name -> teleport.join.v1.TPMSolution + 26, // 19: teleport.join.v1.ChallengeSolution.azure_challenge_solution:type_name -> teleport.join.v1.AzureChallengeSolution + 0, // 20: teleport.join.v1.GivingUp.reason:type_name -> teleport.join.v1.GivingUp.Reason + 1, // 21: teleport.join.v1.JoinRequest.client_init:type_name -> teleport.join.v1.ClientInit + 6, // 22: teleport.join.v1.JoinRequest.token_init:type_name -> teleport.join.v1.TokenInit + 8, // 23: teleport.join.v1.JoinRequest.bound_keypair_init:type_name -> teleport.join.v1.BoundKeypairInit + 27, // 24: teleport.join.v1.JoinRequest.solution:type_name -> teleport.join.v1.ChallengeSolution + 14, // 25: teleport.join.v1.JoinRequest.iam_init:type_name -> teleport.join.v1.IAMInit + 28, // 26: teleport.join.v1.JoinRequest.giving_up:type_name -> teleport.join.v1.GivingUp + 17, // 27: teleport.join.v1.JoinRequest.ec2_init:type_name -> teleport.join.v1.EC2Init + 7, // 28: teleport.join.v1.JoinRequest.oidc_init:type_name -> teleport.join.v1.OIDCInit + 18, // 29: teleport.join.v1.JoinRequest.oracle_init:type_name -> teleport.join.v1.OracleInit + 21, // 30: teleport.join.v1.JoinRequest.tpm_init:type_name -> teleport.join.v1.TPMInit + 24, // 31: teleport.join.v1.JoinRequest.azure_init:type_name -> teleport.join.v1.AzureInit + 9, // 32: teleport.join.v1.Challenge.bound_keypair_challenge:type_name -> teleport.join.v1.BoundKeypairChallenge + 11, // 33: teleport.join.v1.Challenge.bound_keypair_rotation_request:type_name -> teleport.join.v1.BoundKeypairRotationRequest + 15, // 34: teleport.join.v1.Challenge.iam_challenge:type_name -> teleport.join.v1.IAMChallenge + 19, // 35: teleport.join.v1.Challenge.oracle_challenge:type_name -> teleport.join.v1.OracleChallenge + 22, // 36: teleport.join.v1.Challenge.tpm_encrypted_credential:type_name -> teleport.join.v1.TPMEncryptedCredential + 25, // 37: teleport.join.v1.Challenge.azure_challenge:type_name -> teleport.join.v1.AzureChallenge + 
34, // 38: teleport.join.v1.Result.host_result:type_name -> teleport.join.v1.HostResult + 35, // 39: teleport.join.v1.Result.bot_result:type_name -> teleport.join.v1.BotResult + 33, // 40: teleport.join.v1.HostResult.certificates:type_name -> teleport.join.v1.Certificates + 33, // 41: teleport.join.v1.BotResult.certificates:type_name -> teleport.join.v1.Certificates + 13, // 42: teleport.join.v1.BotResult.bound_keypair_result:type_name -> teleport.join.v1.BoundKeypairResult + 30, // 43: teleport.join.v1.JoinResponse.init:type_name -> teleport.join.v1.ServerInit + 31, // 44: teleport.join.v1.JoinResponse.challenge:type_name -> teleport.join.v1.Challenge + 32, // 45: teleport.join.v1.JoinResponse.result:type_name -> teleport.join.v1.Result + 29, // 46: teleport.join.v1.JoinService.Join:input_type -> teleport.join.v1.JoinRequest + 36, // 47: teleport.join.v1.JoinService.Join:output_type -> teleport.join.v1.JoinResponse + 47, // [47:48] is the sub-list for method output_type + 46, // [46:47] is the sub-list for method input_type + 46, // [46:46] is the sub-list for extension type_name + 46, // [46:46] is the sub-list for extension extendee + 0, // [0:46] is the sub-list for field type_name +} + +func init() { file_teleport_join_v1_joinservice_proto_init() } +func file_teleport_join_v1_joinservice_proto_init() { + if File_teleport_join_v1_joinservice_proto != nil { + return + } + file_teleport_join_v1_joinservice_proto_msgTypes[0].OneofWrappers = []any{} + file_teleport_join_v1_joinservice_proto_msgTypes[3].OneofWrappers = []any{} + file_teleport_join_v1_joinservice_proto_msgTypes[4].OneofWrappers = []any{ + (*ClientParams_HostParams)(nil), + (*ClientParams_BotParams)(nil), + } + file_teleport_join_v1_joinservice_proto_msgTypes[20].OneofWrappers = []any{ + (*TPMInit_EkCert)(nil), + (*TPMInit_EkKey)(nil), + } + file_teleport_join_v1_joinservice_proto_msgTypes[26].OneofWrappers = []any{ + (*ChallengeSolution_BoundKeypairChallengeSolution)(nil), + 
(*ChallengeSolution_BoundKeypairRotationResponse)(nil), + (*ChallengeSolution_IamChallengeSolution)(nil), + (*ChallengeSolution_OracleChallengeSolution)(nil), + (*ChallengeSolution_TpmSolution)(nil), + (*ChallengeSolution_AzureChallengeSolution)(nil), + } + file_teleport_join_v1_joinservice_proto_msgTypes[28].OneofWrappers = []any{ + (*JoinRequest_ClientInit)(nil), + (*JoinRequest_TokenInit)(nil), + (*JoinRequest_BoundKeypairInit)(nil), + (*JoinRequest_Solution)(nil), + (*JoinRequest_IamInit)(nil), + (*JoinRequest_GivingUp)(nil), + (*JoinRequest_Ec2Init)(nil), + (*JoinRequest_OidcInit)(nil), + (*JoinRequest_OracleInit)(nil), + (*JoinRequest_TpmInit)(nil), + (*JoinRequest_AzureInit)(nil), + } + file_teleport_join_v1_joinservice_proto_msgTypes[30].OneofWrappers = []any{ + (*Challenge_BoundKeypairChallenge)(nil), + (*Challenge_BoundKeypairRotationRequest)(nil), + (*Challenge_IamChallenge)(nil), + (*Challenge_OracleChallenge)(nil), + (*Challenge_TpmEncryptedCredential)(nil), + (*Challenge_AzureChallenge)(nil), + } + file_teleport_join_v1_joinservice_proto_msgTypes[31].OneofWrappers = []any{ + (*Result_HostResult)(nil), + (*Result_BotResult)(nil), + } + file_teleport_join_v1_joinservice_proto_msgTypes[34].OneofWrappers = []any{} + file_teleport_join_v1_joinservice_proto_msgTypes[35].OneofWrappers = []any{ + (*JoinResponse_Init)(nil), + (*JoinResponse_Challenge)(nil), + (*JoinResponse_Result)(nil), + } + type x struct{} + out := protoimpl.TypeBuilder{ + File: protoimpl.DescBuilder{ + GoPackagePath: reflect.TypeOf(x{}).PkgPath(), + RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_join_v1_joinservice_proto_rawDesc), len(file_teleport_join_v1_joinservice_proto_rawDesc)), + NumEnums: 1, + NumMessages: 37, + NumExtensions: 0, + NumServices: 1, + }, + GoTypes: file_teleport_join_v1_joinservice_proto_goTypes, + DependencyIndexes: file_teleport_join_v1_joinservice_proto_depIdxs, + EnumInfos: file_teleport_join_v1_joinservice_proto_enumTypes, + MessageInfos: 
file_teleport_join_v1_joinservice_proto_msgTypes, + }.Build() + File_teleport_join_v1_joinservice_proto = out.File + file_teleport_join_v1_joinservice_proto_goTypes = nil + file_teleport_join_v1_joinservice_proto_depIdxs = nil +} diff --git a/api/gen/proto/go/teleport/join/v1/joinservice_grpc.pb.go b/api/gen/proto/go/teleport/join/v1/joinservice_grpc.pb.go new file mode 100644 index 0000000000000..269a4f2238c72 --- /dev/null +++ b/api/gen/proto/go/teleport/join/v1/joinservice_grpc.pb.go @@ -0,0 +1,193 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Code generated by protoc-gen-go-grpc. DO NOT EDIT. +// versions: +// - protoc-gen-go-grpc v1.5.1 +// - protoc (unknown) +// source: teleport/join/v1/joinservice.proto + +package joinv1 + +import ( + context "context" + grpc "google.golang.org/grpc" + codes "google.golang.org/grpc/codes" + status "google.golang.org/grpc/status" +) + +// This is a compile-time assertion to ensure that this generated file +// is compatible with the grpc package it is being compiled against. +// Requires gRPC-Go v1.64.0 or later. +const _ = grpc.SupportPackageIsVersion9 + +const ( + JoinService_Join_FullMethodName = "/teleport.join.v1.JoinService/Join" +) + +// JoinServiceClient is the client API for JoinService service. +// +// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream. 
+// +// JoinService provides methods which allow Teleport nodes, proxies, and other +// services to "join" the Teleport cluster by completing a supported join flow +// in order to receive signed certificates issued by the cluster. +// +// It may be used in multiple cases: +// - Teleport agents joining the cluster on their first start to receive their +// initial certificates. These requests do not use mTLS and the client +// authenticates itself using only the join flow and is assigned a new host +// ID. +// - Teleport agents that need certificates authenticated for an additional +// system role allowed by a new provision token. These requests must be +// authenticated with mTLS using their existing certificates so that the +// existing host ID can be maintained. +// - MachineID bots fetching their initial certificates. +// - MachineID bots refreshing their certificates. +// +// It is implemented on both the Auth and Proxy servers to serve the needs of +// - clients connecting to the proxy address for their initial join when they are +// unauthenticated and unable to directly dial the auth service. +// - clients connecting to the auth address for their initial join. +// - clients refreshing existing certificates that are able to make an +// authenticated dial to the auth service via proxy TLS routing. +type JoinServiceClient interface { + // Join is a bidirectional streaming RPC that implements all join methods. + // The client does not need to know the join method ahead of time, all it + // needs is the token name. + // + // The client must send a ClientInit message on the JoinRequest stream to + // initiate the join flow. + // + // The server will reply with a ServerInit message, and subsequent messages + // on the stream will depend on the join method.
+ Join(ctx context.Context, opts ...grpc.CallOption) (grpc.BidiStreamingClient[JoinRequest, JoinResponse], error) +} + +type joinServiceClient struct { + cc grpc.ClientConnInterface +} + +func NewJoinServiceClient(cc grpc.ClientConnInterface) JoinServiceClient { + return &joinServiceClient{cc} +} + +func (c *joinServiceClient) Join(ctx context.Context, opts ...grpc.CallOption) (grpc.BidiStreamingClient[JoinRequest, JoinResponse], error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + stream, err := c.cc.NewStream(ctx, &JoinService_ServiceDesc.Streams[0], JoinService_Join_FullMethodName, cOpts...) + if err != nil { + return nil, err + } + x := &grpc.GenericClientStream[JoinRequest, JoinResponse]{ClientStream: stream} + return x, nil +} + +// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name. +type JoinService_JoinClient = grpc.BidiStreamingClient[JoinRequest, JoinResponse] + +// JoinServiceServer is the server API for JoinService service. +// All implementations must embed UnimplementedJoinServiceServer +// for forward compatibility. +// +// JoinService provides methods which allow Teleport nodes, proxies, and other +// services to "join" the Teleport cluster by completing a supported join flow +// in order to receive signed certificates issued by the cluster. +// +// It may be used in multiple cases: +// - Teleport agents joining the cluster on their first start to receive their +// initial certificates. These requests do not use mTLS and the client +// authenticates itself using only the join flow and is assigned a new host +// ID. +// - Teleport agents that need certificates authenticated for an additional +// system role allowed by a new provision token. These requests must be +// authenticated with mTLS using their existing certificates so that the +// existing host ID can be maintained. +// - MachineID bots fetching their initial certificates. 
+// - MachineID bots refreshing their certificates. +// +// It is implemented on both the Auth and Proxy servers to serve the needs of +// - clients connecting to the proxy address for their initial join when they are +// unauthenticated and unable to directly dial the auth service. +// - clients connecting to the auth address for their initial join. +// - clients refreshing existing certificates that are able to make an +// authenticated dial to the auth service via proxy TLS routing. +type JoinServiceServer interface { + // Join is a bidirectional streaming RPC that implements all join methods. + // The client does not need to know the join method ahead of time, all it + // needs is the token name. + // + // The client must send a ClientInit message on the JoinRequest stream to + // initiate the join flow. + // + // The server will reply with a ServerInit message, and subsequent messages + // on the stream will depend on the join method. + Join(grpc.BidiStreamingServer[JoinRequest, JoinResponse]) error + mustEmbedUnimplementedJoinServiceServer() +} + +// UnimplementedJoinServiceServer must be embedded to have +// forward compatible implementations. +// +// NOTE: this should be embedded by value instead of pointer to avoid a nil +// pointer dereference when methods are called. +type UnimplementedJoinServiceServer struct{} + +func (UnimplementedJoinServiceServer) Join(grpc.BidiStreamingServer[JoinRequest, JoinResponse]) error { + return status.Errorf(codes.Unimplemented, "method Join not implemented") +} +func (UnimplementedJoinServiceServer) mustEmbedUnimplementedJoinServiceServer() {} +func (UnimplementedJoinServiceServer) testEmbeddedByValue() {} + +// UnsafeJoinServiceServer may be embedded to opt out of forward compatibility for this service. +// Use of this interface is not recommended, as added methods to JoinServiceServer will +// result in compilation errors.
+type UnsafeJoinServiceServer interface { + mustEmbedUnimplementedJoinServiceServer() +} + +func RegisterJoinServiceServer(s grpc.ServiceRegistrar, srv JoinServiceServer) { + // If the following call panics, it indicates UnimplementedJoinServiceServer was + // embedded by pointer and is nil. This will cause panics if an + // unimplemented method is ever invoked, so we test this at initialization + // time to prevent it from happening at runtime later due to I/O. + if t, ok := srv.(interface{ testEmbeddedByValue() }); ok { + t.testEmbeddedByValue() + } + s.RegisterService(&JoinService_ServiceDesc, srv) +} + +func _JoinService_Join_Handler(srv interface{}, stream grpc.ServerStream) error { + return srv.(JoinServiceServer).Join(&grpc.GenericServerStream[JoinRequest, JoinResponse]{ServerStream: stream}) +} + +// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name. +type JoinService_JoinServer = grpc.BidiStreamingServer[JoinRequest, JoinResponse] + +// JoinService_ServiceDesc is the grpc.ServiceDesc for JoinService service. +// It's only intended for direct use with grpc.RegisterService, +// and not to be introspected or modified (even as a copy) +var JoinService_ServiceDesc = grpc.ServiceDesc{ + ServiceName: "teleport.join.v1.JoinService", + HandlerType: (*JoinServiceServer)(nil), + Methods: []grpc.MethodDesc{}, + Streams: []grpc.StreamDesc{ + { + StreamName: "Join", + Handler: _JoinService_Join_Handler, + ServerStreams: true, + ClientStreams: true, + }, + }, + Metadata: "teleport/join/v1/joinservice.proto", +} diff --git a/api/gen/proto/go/teleport/kube/v1/kube_service.pb.go b/api/gen/proto/go/teleport/kube/v1/kube_service.pb.go index c63eaa47f5d5b..ff4530261576a 100644 --- a/api/gen/proto/go/teleport/kube/v1/kube_service.pb.go +++ b/api/gen/proto/go/teleport/kube/v1/kube_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT.
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/kube/v1/kube_service.proto diff --git a/api/gen/proto/go/teleport/kubewaitingcontainer/v1/kubewaitingcontainer.pb.go b/api/gen/proto/go/teleport/kubewaitingcontainer/v1/kubewaitingcontainer.pb.go index b34182742c88c..2fc544432548a 100644 --- a/api/gen/proto/go/teleport/kubewaitingcontainer/v1/kubewaitingcontainer.pb.go +++ b/api/gen/proto/go/teleport/kubewaitingcontainer/v1/kubewaitingcontainer.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/kubewaitingcontainer/v1/kubewaitingcontainer.proto diff --git a/api/gen/proto/go/teleport/kubewaitingcontainer/v1/kubewaitingcontainer_service.pb.go b/api/gen/proto/go/teleport/kubewaitingcontainer/v1/kubewaitingcontainer_service.pb.go index f148fecfa18be..8c09ba729686c 100644 --- a/api/gen/proto/go/teleport/kubewaitingcontainer/v1/kubewaitingcontainer_service.pb.go +++ b/api/gen/proto/go/teleport/kubewaitingcontainer/v1/kubewaitingcontainer_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/kubewaitingcontainer/v1/kubewaitingcontainer_service.proto diff --git a/api/gen/proto/go/teleport/label/v1/label.pb.go b/api/gen/proto/go/teleport/label/v1/label.pb.go index e17f86ebc0a9e..560b873b09c9f 100644 --- a/api/gen/proto/go/teleport/label/v1/label.pb.go +++ b/api/gen/proto/go/teleport/label/v1/label.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/label/v1/label.proto diff --git a/api/gen/proto/go/teleport/loginrule/v1/loginrule.pb.go b/api/gen/proto/go/teleport/loginrule/v1/loginrule.pb.go index c1f166b26eeed..af76207b01cd0 100644 --- a/api/gen/proto/go/teleport/loginrule/v1/loginrule.pb.go +++ b/api/gen/proto/go/teleport/loginrule/v1/loginrule.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/loginrule/v1/loginrule.proto diff --git a/api/gen/proto/go/teleport/loginrule/v1/loginrule_service.pb.go b/api/gen/proto/go/teleport/loginrule/v1/loginrule_service.pb.go index c421329831f98..9f6e110815925 100644 --- a/api/gen/proto/go/teleport/loginrule/v1/loginrule_service.pb.go +++ b/api/gen/proto/go/teleport/loginrule/v1/loginrule_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/loginrule/v1/loginrule_service.proto diff --git a/api/gen/proto/go/teleport/machineid/v1/bot.pb.go b/api/gen/proto/go/teleport/machineid/v1/bot.pb.go index a94abffbf6924..c8b6e42d99a73 100644 --- a/api/gen/proto/go/teleport/machineid/v1/bot.pb.go +++ b/api/gen/proto/go/teleport/machineid/v1/bot.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/machineid/v1/bot.proto diff --git a/api/gen/proto/go/teleport/machineid/v1/bot_instance.pb.go b/api/gen/proto/go/teleport/machineid/v1/bot_instance.pb.go index 89c5a3e57f289..1cc8a9b9b58be 100644 --- a/api/gen/proto/go/teleport/machineid/v1/bot_instance.pb.go +++ b/api/gen/proto/go/teleport/machineid/v1/bot_instance.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/machineid/v1/bot_instance.proto @@ -23,6 +23,7 @@ package machineidv1 import ( v1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/header/v1" v11 "github.com/gravitational/teleport/api/gen/proto/go/teleport/workloadidentity/v1" + types "github.com/gravitational/teleport/api/types" protoreflect "google.golang.org/protobuf/reflect/protoreflect" protoimpl "google.golang.org/protobuf/runtime/protoimpl" durationpb "google.golang.org/protobuf/types/known/durationpb" @@ -40,6 +41,126 @@ const ( _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) ) +// BotKind identifies whether the bot is the tbot binary or embedded in another +// component. +type BotKind int32 + +const ( + // The enum zero-value, it means no kind was included. + BotKind_BOT_KIND_UNSPECIFIED BotKind = 0 + // Means the bot is running the tbot binary. + BotKind_BOT_KIND_TBOT BotKind = 1 + // Means the bot is running inside one of our Terraform providers. + BotKind_BOT_KIND_TERRAFORM_PROVIDER BotKind = 2 + // Means the bot is running inside the Teleport Kubernetes operator. + BotKind_BOT_KIND_KUBERNETES_OPERATOR BotKind = 3 + // Means the bot is running inside tctl (e.g. `tctl terraform env`) + BotKind_BOT_KIND_TCTL BotKind = 4 +) + +// Enum value maps for BotKind. 
+var ( + BotKind_name = map[int32]string{ + 0: "BOT_KIND_UNSPECIFIED", + 1: "BOT_KIND_TBOT", + 2: "BOT_KIND_TERRAFORM_PROVIDER", + 3: "BOT_KIND_KUBERNETES_OPERATOR", + 4: "BOT_KIND_TCTL", + } + BotKind_value = map[string]int32{ + "BOT_KIND_UNSPECIFIED": 0, + "BOT_KIND_TBOT": 1, + "BOT_KIND_TERRAFORM_PROVIDER": 2, + "BOT_KIND_KUBERNETES_OPERATOR": 3, + "BOT_KIND_TCTL": 4, + } +) + +func (x BotKind) Enum() *BotKind { + p := new(BotKind) + *p = x + return p +} + +func (x BotKind) String() string { + return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x)) +} + +func (BotKind) Descriptor() protoreflect.EnumDescriptor { + return file_teleport_machineid_v1_bot_instance_proto_enumTypes[0].Descriptor() +} + +func (BotKind) Type() protoreflect.EnumType { + return &file_teleport_machineid_v1_bot_instance_proto_enumTypes[0] +} + +func (x BotKind) Number() protoreflect.EnumNumber { + return protoreflect.EnumNumber(x) +} + +// Deprecated: Use BotKind.Descriptor instead. +func (BotKind) EnumDescriptor() ([]byte, []int) { + return file_teleport_machineid_v1_bot_instance_proto_rawDescGZIP(), []int{0} +} + +// BotInstanceHealthStatus describes the healthiness of a `tbot` service. +type BotInstanceHealthStatus int32 + +const ( + // The enum zero-value, it means no status was included. + BotInstanceHealthStatus_BOT_INSTANCE_HEALTH_STATUS_UNSPECIFIED BotInstanceHealthStatus = 0 + // Means the service is still "starting up" and hasn't reported its status. + BotInstanceHealthStatus_BOT_INSTANCE_HEALTH_STATUS_INITIALIZING BotInstanceHealthStatus = 1 + // Means the service is healthy and ready to serve traffic, or it has + // recently succeeded in generating an output. + BotInstanceHealthStatus_BOT_INSTANCE_HEALTH_STATUS_HEALTHY BotInstanceHealthStatus = 2 + // Means the service is failing to serve traffic or generate output. + BotInstanceHealthStatus_BOT_INSTANCE_HEALTH_STATUS_UNHEALTHY BotInstanceHealthStatus = 3 +) + +// Enum value maps for BotInstanceHealthStatus. 
+var ( + BotInstanceHealthStatus_name = map[int32]string{ + 0: "BOT_INSTANCE_HEALTH_STATUS_UNSPECIFIED", + 1: "BOT_INSTANCE_HEALTH_STATUS_INITIALIZING", + 2: "BOT_INSTANCE_HEALTH_STATUS_HEALTHY", + 3: "BOT_INSTANCE_HEALTH_STATUS_UNHEALTHY", + } + BotInstanceHealthStatus_value = map[string]int32{ + "BOT_INSTANCE_HEALTH_STATUS_UNSPECIFIED": 0, + "BOT_INSTANCE_HEALTH_STATUS_INITIALIZING": 1, + "BOT_INSTANCE_HEALTH_STATUS_HEALTHY": 2, + "BOT_INSTANCE_HEALTH_STATUS_UNHEALTHY": 3, + } +) + +func (x BotInstanceHealthStatus) Enum() *BotInstanceHealthStatus { + p := new(BotInstanceHealthStatus) + *p = x + return p +} + +func (x BotInstanceHealthStatus) String() string { + return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x)) +} + +func (BotInstanceHealthStatus) Descriptor() protoreflect.EnumDescriptor { + return file_teleport_machineid_v1_bot_instance_proto_enumTypes[1].Descriptor() +} + +func (BotInstanceHealthStatus) Type() protoreflect.EnumType { + return &file_teleport_machineid_v1_bot_instance_proto_enumTypes[1] +} + +func (x BotInstanceHealthStatus) Number() protoreflect.EnumNumber { + return protoreflect.EnumNumber(x) +} + +// Deprecated: Use BotInstanceHealthStatus.Descriptor instead. +func (BotInstanceHealthStatus) EnumDescriptor() ([]byte, []int) { + return file_teleport_machineid_v1_bot_instance_proto_rawDescGZIP(), []int{1} +} + // A BotInstance type BotInstance struct { state protoimpl.MessageState `protogen:"open.v1"` @@ -224,7 +345,17 @@ type BotInstanceStatusHeartbeat struct { // runtime.GOARCH. Architecture string `protobuf:"bytes,8,opt,name=architecture,proto3" json:"architecture,omitempty"` // The OS of the host that `tbot` is running on, determined by runtime.GOOS. - Os string `protobuf:"bytes,9,opt,name=os,proto3" json:"os,omitempty"` + Os string `protobuf:"bytes,9,opt,name=os,proto3" json:"os,omitempty"` + // Identifies the external updater process. 
+ ExternalUpdater string `protobuf:"bytes,10,opt,name=external_updater,json=externalUpdater,proto3" json:"external_updater,omitempty"` + // Identifies the external updater version. Empty if no updater is configured. + ExternalUpdaterVersion string `protobuf:"bytes,11,opt,name=external_updater_version,json=externalUpdaterVersion,proto3" json:"external_updater_version,omitempty"` + // Information provided by the external updater, including the update group + // and updater status. + UpdaterInfo *types.UpdaterV2Info `protobuf:"bytes,12,opt,name=updater_info,json=updaterInfo,proto3" json:"updater_info,omitempty"` + // Identifies whether the bot is running in the tbot binary or embedded in + // another component. + Kind BotKind `protobuf:"varint,13,opt,name=kind,proto3,enum=teleport.machineid.v1.BotKind" json:"kind,omitempty"` unknownFields protoimpl.UnknownFields sizeCache protoimpl.SizeCache } @@ -322,6 +453,34 @@ func (x *BotInstanceStatusHeartbeat) GetOs() string { return "" } +func (x *BotInstanceStatusHeartbeat) GetExternalUpdater() string { + if x != nil { + return x.ExternalUpdater + } + return "" +} + +func (x *BotInstanceStatusHeartbeat) GetExternalUpdaterVersion() string { + if x != nil { + return x.ExternalUpdaterVersion + } + return "" +} + +func (x *BotInstanceStatusHeartbeat) GetUpdaterInfo() *types.UpdaterV2Info { + if x != nil { + return x.UpdaterInfo + } + return nil +} + +func (x *BotInstanceStatusHeartbeat) GetKind() BotKind { + if x != nil { + return x.Kind + } + return BotKind_BOT_KIND_UNSPECIFIED +} + // BotInstanceStatusAuthentication contains information about a join or renewal. // This information is entirely sourced by the Auth Server and can be trusted.
type BotInstanceStatusAuthentication struct { @@ -447,8 +606,10 @@ type BotInstanceStatus struct { InitialHeartbeat *BotInstanceStatusHeartbeat `protobuf:"bytes,3,opt,name=initial_heartbeat,json=initialHeartbeat,proto3" json:"initial_heartbeat,omitempty"` // The N most recent heartbeats for this bot instance. LatestHeartbeats []*BotInstanceStatusHeartbeat `protobuf:"bytes,4,rep,name=latest_heartbeats,json=latestHeartbeats,proto3" json:"latest_heartbeats,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache + // The health of the services/output `tbot` is running. + ServiceHealth []*BotInstanceServiceHealth `protobuf:"bytes,5,rep,name=service_health,json=serviceHealth,proto3" json:"service_health,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache } func (x *BotInstanceStatus) Reset() { @@ -509,11 +670,147 @@ func (x *BotInstanceStatus) GetLatestHeartbeats() []*BotInstanceStatusHeartbeat return nil } +func (x *BotInstanceStatus) GetServiceHealth() []*BotInstanceServiceHealth { + if x != nil { + return x.ServiceHealth + } + return nil +} + +// BotInstanceServiceHealth is a snapshot of a `tbot` service's health. +type BotInstanceServiceHealth struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Service identifies the service. + Service *BotInstanceServiceIdentifier `protobuf:"bytes,1,opt,name=service,proto3" json:"service,omitempty"` + // Status describes the service's healthiness. + Status BotInstanceHealthStatus `protobuf:"varint,2,opt,name=status,proto3,enum=teleport.machineid.v1.BotInstanceHealthStatus" json:"status,omitempty"` + // Reason is a human-readable explanation for the service's status. It might + // include an error message. + Reason *string `protobuf:"bytes,3,opt,name=reason,proto3,oneof" json:"reason,omitempty"` + // UpdatedAt is the time at which the service's health last changed. 
+ UpdatedAt *timestamppb.Timestamp `protobuf:"bytes,4,opt,name=updated_at,json=updatedAt,proto3" json:"updated_at,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *BotInstanceServiceHealth) Reset() { + *x = BotInstanceServiceHealth{} + mi := &file_teleport_machineid_v1_bot_instance_proto_msgTypes[5] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *BotInstanceServiceHealth) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*BotInstanceServiceHealth) ProtoMessage() {} + +func (x *BotInstanceServiceHealth) ProtoReflect() protoreflect.Message { + mi := &file_teleport_machineid_v1_bot_instance_proto_msgTypes[5] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use BotInstanceServiceHealth.ProtoReflect.Descriptor instead. +func (*BotInstanceServiceHealth) Descriptor() ([]byte, []int) { + return file_teleport_machineid_v1_bot_instance_proto_rawDescGZIP(), []int{5} +} + +func (x *BotInstanceServiceHealth) GetService() *BotInstanceServiceIdentifier { + if x != nil { + return x.Service + } + return nil +} + +func (x *BotInstanceServiceHealth) GetStatus() BotInstanceHealthStatus { + if x != nil { + return x.Status + } + return BotInstanceHealthStatus_BOT_INSTANCE_HEALTH_STATUS_UNSPECIFIED +} + +func (x *BotInstanceServiceHealth) GetReason() string { + if x != nil && x.Reason != nil { + return *x.Reason + } + return "" +} + +func (x *BotInstanceServiceHealth) GetUpdatedAt() *timestamppb.Timestamp { + if x != nil { + return x.UpdatedAt + } + return nil +} + +// BotInstanceServiceIdentifier uniquely identifies a `tbot` service. +type BotInstanceServiceIdentifier struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Type of service (e.g. database-tunnel, ssh-multiplexer). 
+ Type string `protobuf:"bytes,1,opt,name=type,proto3" json:"type,omitempty"` + // Name of the service, either given by the user or auto-generated. + Name string `protobuf:"bytes,2,opt,name=name,proto3" json:"name,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *BotInstanceServiceIdentifier) Reset() { + *x = BotInstanceServiceIdentifier{} + mi := &file_teleport_machineid_v1_bot_instance_proto_msgTypes[6] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *BotInstanceServiceIdentifier) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*BotInstanceServiceIdentifier) ProtoMessage() {} + +func (x *BotInstanceServiceIdentifier) ProtoReflect() protoreflect.Message { + mi := &file_teleport_machineid_v1_bot_instance_proto_msgTypes[6] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use BotInstanceServiceIdentifier.ProtoReflect.Descriptor instead. 
+func (*BotInstanceServiceIdentifier) Descriptor() ([]byte, []int) { + return file_teleport_machineid_v1_bot_instance_proto_rawDescGZIP(), []int{6} +} + +func (x *BotInstanceServiceIdentifier) GetType() string { + if x != nil { + return x.Type + } + return "" +} + +func (x *BotInstanceServiceIdentifier) GetName() string { + if x != nil { + return x.Name + } + return "" +} + var File_teleport_machineid_v1_bot_instance_proto protoreflect.FileDescriptor const file_teleport_machineid_v1_bot_instance_proto_rawDesc = "" + "\n" + - "(teleport/machineid/v1/bot_instance.proto\x12\x15teleport.machineid.v1\x1a\x1egoogle/protobuf/duration.proto\x1a\x1cgoogle/protobuf/struct.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a!teleport/header/v1/metadata.proto\x1a-teleport/workloadidentity/v1/join_attrs.proto\"\x8e\x02\n" + + "(teleport/machineid/v1/bot_instance.proto\x12\x15teleport.machineid.v1\x1a\x1egoogle/protobuf/duration.proto\x1a\x1cgoogle/protobuf/struct.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a!teleport/header/v1/metadata.proto\x1a!teleport/legacy/types/types.proto\x1a-teleport/workloadidentity/v1/join_attrs.proto\"\x8e\x02\n" + "\vBotInstance\x12\x12\n" + "\x04kind\x18\x01 \x01(\tR\x04kind\x12\x19\n" + "\bsub_kind\x18\x02 \x01(\tR\asubKind\x12\x18\n" + @@ -525,7 +822,7 @@ const file_teleport_machineid_v1_bot_instance_proto_rawDesc = "" + "\bbot_name\x18\x01 \x01(\tR\abotName\x12\x1f\n" + "\vinstance_id\x18\x02 \x01(\tR\n" + "instanceId\x120\n" + - "\x14previous_instance_id\x18\x04 \x01(\tR\x12previousInstanceIdJ\x04\b\x03\x10\x04R\x03ttl\"\xd1\x02\n" + + "\x14previous_instance_id\x18\x04 \x01(\tR\x12previousInstanceIdJ\x04\b\x03\x10\x04R\x03ttl\"\xa3\x04\n" + "\x1aBotInstanceStatusHeartbeat\x12;\n" + "\vrecorded_at\x18\x01 \x01(\v2\x1a.google.protobuf.TimestampR\n" + "recordedAt\x12\x1d\n" + @@ -538,7 +835,12 @@ const file_teleport_machineid_v1_bot_instance_proto_rawDesc = "" + "joinMethod\x12\x19\n" + "\bone_shot\x18\a \x01(\bR\aoneShot\x12\"\n" + 
"\farchitecture\x18\b \x01(\tR\farchitecture\x12\x0e\n" + - "\x02os\x18\t \x01(\tR\x02os\"\xf7\x02\n" + + "\x02os\x18\t \x01(\tR\x02os\x12)\n" + + "\x10external_updater\x18\n" + + " \x01(\tR\x0fexternalUpdater\x128\n" + + "\x18external_updater_version\x18\v \x01(\tR\x16externalUpdaterVersion\x127\n" + + "\fupdater_info\x18\f \x01(\v2\x14.types.UpdaterV2InfoR\vupdaterInfo\x122\n" + + "\x04kind\x18\r \x01(\x0e2\x1e.teleport.machineid.v1.BotKindR\x04kind\"\xf7\x02\n" + "\x1fBotInstanceStatusAuthentication\x12E\n" + "\x10authenticated_at\x18\x01 \x01(\v2\x1a.google.protobuf.TimestampR\x0fauthenticatedAt\x12\x1f\n" + "\vjoin_method\x18\x02 \x01(\tR\n" + @@ -552,12 +854,34 @@ const file_teleport_machineid_v1_bot_instance_proto_rawDesc = "" + "\n" + "public_key\x18\x06 \x01(\fR\tpublicKey\x12F\n" + "\n" + - "join_attrs\x18\b \x01(\v2'.teleport.workloadidentity.v1.JoinAttrsR\tjoinAttrsJ\x04\b\a\x10\bR\vfingerprint\"\xb1\x03\n" + + "join_attrs\x18\b \x01(\v2'.teleport.workloadidentity.v1.JoinAttrsR\tjoinAttrsJ\x04\b\a\x10\bR\vfingerprint\"\x89\x04\n" + "\x11BotInstanceStatus\x12m\n" + "\x16initial_authentication\x18\x01 \x01(\v26.teleport.machineid.v1.BotInstanceStatusAuthenticationR\x15initialAuthentication\x12m\n" + "\x16latest_authentications\x18\x02 \x03(\v26.teleport.machineid.v1.BotInstanceStatusAuthenticationR\x15latestAuthentications\x12^\n" + "\x11initial_heartbeat\x18\x03 \x01(\v21.teleport.machineid.v1.BotInstanceStatusHeartbeatR\x10initialHeartbeat\x12^\n" + - "\x11latest_heartbeats\x18\x04 \x03(\v21.teleport.machineid.v1.BotInstanceStatusHeartbeatR\x10latestHeartbeatsBVZTgithub.com/gravitational/teleport/api/gen/proto/go/teleport/machineid/v1;machineidv1b\x06proto3" + "\x11latest_heartbeats\x18\x04 \x03(\v21.teleport.machineid.v1.BotInstanceStatusHeartbeatR\x10latestHeartbeats\x12V\n" + + "\x0eservice_health\x18\x05 \x03(\v2/.teleport.machineid.v1.BotInstanceServiceHealthR\rserviceHealth\"\x94\x02\n" + + "\x18BotInstanceServiceHealth\x12M\n" + + 
"\aservice\x18\x01 \x01(\v23.teleport.machineid.v1.BotInstanceServiceIdentifierR\aservice\x12F\n" + + "\x06status\x18\x02 \x01(\x0e2..teleport.machineid.v1.BotInstanceHealthStatusR\x06status\x12\x1b\n" + + "\x06reason\x18\x03 \x01(\tH\x00R\x06reason\x88\x01\x01\x129\n" + + "\n" + + "updated_at\x18\x04 \x01(\v2\x1a.google.protobuf.TimestampR\tupdatedAtB\t\n" + + "\a_reason\"F\n" + + "\x1cBotInstanceServiceIdentifier\x12\x12\n" + + "\x04type\x18\x01 \x01(\tR\x04type\x12\x12\n" + + "\x04name\x18\x02 \x01(\tR\x04name*\x8c\x01\n" + + "\aBotKind\x12\x18\n" + + "\x14BOT_KIND_UNSPECIFIED\x10\x00\x12\x11\n" + + "\rBOT_KIND_TBOT\x10\x01\x12\x1f\n" + + "\x1bBOT_KIND_TERRAFORM_PROVIDER\x10\x02\x12 \n" + + "\x1cBOT_KIND_KUBERNETES_OPERATOR\x10\x03\x12\x11\n" + + "\rBOT_KIND_TCTL\x10\x04*\xc4\x01\n" + + "\x17BotInstanceHealthStatus\x12*\n" + + "&BOT_INSTANCE_HEALTH_STATUS_UNSPECIFIED\x10\x00\x12+\n" + + "'BOT_INSTANCE_HEALTH_STATUS_INITIALIZING\x10\x01\x12&\n" + + "\"BOT_INSTANCE_HEALTH_STATUS_HEALTHY\x10\x02\x12(\n" + + "$BOT_INSTANCE_HEALTH_STATUS_UNHEALTHY\x10\x03BVZTgithub.com/gravitational/teleport/api/gen/proto/go/teleport/machineid/v1;machineidv1b\x06proto3" var ( file_teleport_machineid_v1_bot_instance_proto_rawDescOnce sync.Once @@ -571,37 +895,49 @@ func file_teleport_machineid_v1_bot_instance_proto_rawDescGZIP() []byte { return file_teleport_machineid_v1_bot_instance_proto_rawDescData } -var file_teleport_machineid_v1_bot_instance_proto_msgTypes = make([]protoimpl.MessageInfo, 5) +var file_teleport_machineid_v1_bot_instance_proto_enumTypes = make([]protoimpl.EnumInfo, 2) +var file_teleport_machineid_v1_bot_instance_proto_msgTypes = make([]protoimpl.MessageInfo, 7) var file_teleport_machineid_v1_bot_instance_proto_goTypes = []any{ - (*BotInstance)(nil), // 0: teleport.machineid.v1.BotInstance - (*BotInstanceSpec)(nil), // 1: teleport.machineid.v1.BotInstanceSpec - (*BotInstanceStatusHeartbeat)(nil), // 2: teleport.machineid.v1.BotInstanceStatusHeartbeat - 
(*BotInstanceStatusAuthentication)(nil), // 3: teleport.machineid.v1.BotInstanceStatusAuthentication - (*BotInstanceStatus)(nil), // 4: teleport.machineid.v1.BotInstanceStatus - (*v1.Metadata)(nil), // 5: teleport.header.v1.Metadata - (*timestamppb.Timestamp)(nil), // 6: google.protobuf.Timestamp - (*durationpb.Duration)(nil), // 7: google.protobuf.Duration - (*structpb.Struct)(nil), // 8: google.protobuf.Struct - (*v11.JoinAttrs)(nil), // 9: teleport.workloadidentity.v1.JoinAttrs + (BotKind)(0), // 0: teleport.machineid.v1.BotKind + (BotInstanceHealthStatus)(0), // 1: teleport.machineid.v1.BotInstanceHealthStatus + (*BotInstance)(nil), // 2: teleport.machineid.v1.BotInstance + (*BotInstanceSpec)(nil), // 3: teleport.machineid.v1.BotInstanceSpec + (*BotInstanceStatusHeartbeat)(nil), // 4: teleport.machineid.v1.BotInstanceStatusHeartbeat + (*BotInstanceStatusAuthentication)(nil), // 5: teleport.machineid.v1.BotInstanceStatusAuthentication + (*BotInstanceStatus)(nil), // 6: teleport.machineid.v1.BotInstanceStatus + (*BotInstanceServiceHealth)(nil), // 7: teleport.machineid.v1.BotInstanceServiceHealth + (*BotInstanceServiceIdentifier)(nil), // 8: teleport.machineid.v1.BotInstanceServiceIdentifier + (*v1.Metadata)(nil), // 9: teleport.header.v1.Metadata + (*timestamppb.Timestamp)(nil), // 10: google.protobuf.Timestamp + (*durationpb.Duration)(nil), // 11: google.protobuf.Duration + (*types.UpdaterV2Info)(nil), // 12: types.UpdaterV2Info + (*structpb.Struct)(nil), // 13: google.protobuf.Struct + (*v11.JoinAttrs)(nil), // 14: teleport.workloadidentity.v1.JoinAttrs } var file_teleport_machineid_v1_bot_instance_proto_depIdxs = []int32{ - 5, // 0: teleport.machineid.v1.BotInstance.metadata:type_name -> teleport.header.v1.Metadata - 1, // 1: teleport.machineid.v1.BotInstance.spec:type_name -> teleport.machineid.v1.BotInstanceSpec - 4, // 2: teleport.machineid.v1.BotInstance.status:type_name -> teleport.machineid.v1.BotInstanceStatus - 6, // 3: 
teleport.machineid.v1.BotInstanceStatusHeartbeat.recorded_at:type_name -> google.protobuf.Timestamp - 7, // 4: teleport.machineid.v1.BotInstanceStatusHeartbeat.uptime:type_name -> google.protobuf.Duration - 6, // 5: teleport.machineid.v1.BotInstanceStatusAuthentication.authenticated_at:type_name -> google.protobuf.Timestamp - 8, // 6: teleport.machineid.v1.BotInstanceStatusAuthentication.metadata:type_name -> google.protobuf.Struct - 9, // 7: teleport.machineid.v1.BotInstanceStatusAuthentication.join_attrs:type_name -> teleport.workloadidentity.v1.JoinAttrs - 3, // 8: teleport.machineid.v1.BotInstanceStatus.initial_authentication:type_name -> teleport.machineid.v1.BotInstanceStatusAuthentication - 3, // 9: teleport.machineid.v1.BotInstanceStatus.latest_authentications:type_name -> teleport.machineid.v1.BotInstanceStatusAuthentication - 2, // 10: teleport.machineid.v1.BotInstanceStatus.initial_heartbeat:type_name -> teleport.machineid.v1.BotInstanceStatusHeartbeat - 2, // 11: teleport.machineid.v1.BotInstanceStatus.latest_heartbeats:type_name -> teleport.machineid.v1.BotInstanceStatusHeartbeat - 12, // [12:12] is the sub-list for method output_type - 12, // [12:12] is the sub-list for method input_type - 12, // [12:12] is the sub-list for extension type_name - 12, // [12:12] is the sub-list for extension extendee - 0, // [0:12] is the sub-list for field type_name + 9, // 0: teleport.machineid.v1.BotInstance.metadata:type_name -> teleport.header.v1.Metadata + 3, // 1: teleport.machineid.v1.BotInstance.spec:type_name -> teleport.machineid.v1.BotInstanceSpec + 6, // 2: teleport.machineid.v1.BotInstance.status:type_name -> teleport.machineid.v1.BotInstanceStatus + 10, // 3: teleport.machineid.v1.BotInstanceStatusHeartbeat.recorded_at:type_name -> google.protobuf.Timestamp + 11, // 4: teleport.machineid.v1.BotInstanceStatusHeartbeat.uptime:type_name -> google.protobuf.Duration + 12, // 5: teleport.machineid.v1.BotInstanceStatusHeartbeat.updater_info:type_name -> 
types.UpdaterV2Info + 0, // 6: teleport.machineid.v1.BotInstanceStatusHeartbeat.kind:type_name -> teleport.machineid.v1.BotKind + 10, // 7: teleport.machineid.v1.BotInstanceStatusAuthentication.authenticated_at:type_name -> google.protobuf.Timestamp + 13, // 8: teleport.machineid.v1.BotInstanceStatusAuthentication.metadata:type_name -> google.protobuf.Struct + 14, // 9: teleport.machineid.v1.BotInstanceStatusAuthentication.join_attrs:type_name -> teleport.workloadidentity.v1.JoinAttrs + 5, // 10: teleport.machineid.v1.BotInstanceStatus.initial_authentication:type_name -> teleport.machineid.v1.BotInstanceStatusAuthentication + 5, // 11: teleport.machineid.v1.BotInstanceStatus.latest_authentications:type_name -> teleport.machineid.v1.BotInstanceStatusAuthentication + 4, // 12: teleport.machineid.v1.BotInstanceStatus.initial_heartbeat:type_name -> teleport.machineid.v1.BotInstanceStatusHeartbeat + 4, // 13: teleport.machineid.v1.BotInstanceStatus.latest_heartbeats:type_name -> teleport.machineid.v1.BotInstanceStatusHeartbeat + 7, // 14: teleport.machineid.v1.BotInstanceStatus.service_health:type_name -> teleport.machineid.v1.BotInstanceServiceHealth + 8, // 15: teleport.machineid.v1.BotInstanceServiceHealth.service:type_name -> teleport.machineid.v1.BotInstanceServiceIdentifier + 1, // 16: teleport.machineid.v1.BotInstanceServiceHealth.status:type_name -> teleport.machineid.v1.BotInstanceHealthStatus + 10, // 17: teleport.machineid.v1.BotInstanceServiceHealth.updated_at:type_name -> google.protobuf.Timestamp + 18, // [18:18] is the sub-list for method output_type + 18, // [18:18] is the sub-list for method input_type + 18, // [18:18] is the sub-list for extension type_name + 18, // [18:18] is the sub-list for extension extendee + 0, // [0:18] is the sub-list for field type_name } func init() { file_teleport_machineid_v1_bot_instance_proto_init() } @@ -609,18 +945,20 @@ func file_teleport_machineid_v1_bot_instance_proto_init() { if 
File_teleport_machineid_v1_bot_instance_proto != nil { return } + file_teleport_machineid_v1_bot_instance_proto_msgTypes[5].OneofWrappers = []any{} type x struct{} out := protoimpl.TypeBuilder{ File: protoimpl.DescBuilder{ GoPackagePath: reflect.TypeOf(x{}).PkgPath(), RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_machineid_v1_bot_instance_proto_rawDesc), len(file_teleport_machineid_v1_bot_instance_proto_rawDesc)), - NumEnums: 0, - NumMessages: 5, + NumEnums: 2, + NumMessages: 7, NumExtensions: 0, NumServices: 0, }, GoTypes: file_teleport_machineid_v1_bot_instance_proto_goTypes, DependencyIndexes: file_teleport_machineid_v1_bot_instance_proto_depIdxs, + EnumInfos: file_teleport_machineid_v1_bot_instance_proto_enumTypes, MessageInfos: file_teleport_machineid_v1_bot_instance_proto_msgTypes, }.Build() File_teleport_machineid_v1_bot_instance_proto = out.File diff --git a/api/gen/proto/go/teleport/machineid/v1/bot_instance_service.pb.go b/api/gen/proto/go/teleport/machineid/v1/bot_instance_service.pb.go index 3bef9f97a0f17..d9d09bb87e4f1 100644 --- a/api/gen/proto/go/teleport/machineid/v1/bot_instance_service.pb.go +++ b/api/gen/proto/go/teleport/machineid/v1/bot_instance_service.pb.go @@ -14,13 +14,14 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/machineid/v1/bot_instance_service.proto package machineidv1 import ( + types "github.com/gravitational/teleport/api/types" protoreflect "google.golang.org/protobuf/reflect/protoreflect" protoimpl "google.golang.org/protobuf/runtime/protoimpl" emptypb "google.golang.org/protobuf/types/known/emptypb" @@ -108,8 +109,10 @@ type ListBotInstancesRequest struct { PageToken string `protobuf:"bytes,3,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` // A search term used to filter the results. If non-empty, it's used to match against supported fields. 
FilterSearchTerm string `protobuf:"bytes,4,opt,name=filter_search_term,json=filterSearchTerm,proto3" json:"filter_search_term,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache + // The sort config to use for the results. If empty, the default sort field and order is used. + Sort *types.SortBy `protobuf:"bytes,5,opt,name=sort,proto3" json:"sort,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache } func (x *ListBotInstancesRequest) Reset() { @@ -170,6 +173,102 @@ func (x *ListBotInstancesRequest) GetFilterSearchTerm() string { return "" } +func (x *ListBotInstancesRequest) GetSort() *types.SortBy { + if x != nil { + return x.Sort + } + return nil +} + +// Request for ListBotInstancesV2. +// +// Follows the pagination semantics of +// https://cloud.google.com/apis/design/standard_methods#list +type ListBotInstancesV2Request struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The maximum number of items to return. + // The server may impose a different page size at its discretion. + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // The page_token value returned from a previous ListBotInstancesV2 request, + // if any. + PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + // The sort field to use for the results. If empty, the default sort field is + // used. + SortField string `protobuf:"bytes,3,opt,name=sort_field,json=sortField,proto3" json:"sort_field,omitempty"` + // The sort order to use for the results. If empty, the default sort order is + // used. 
+ SortDesc bool `protobuf:"varint,4,opt,name=sort_desc,json=sortDesc,proto3" json:"sort_desc,omitempty"` + // Fields used to filter the results + Filter *ListBotInstancesV2Request_Filters `protobuf:"bytes,5,opt,name=filter,proto3" json:"filter,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListBotInstancesV2Request) Reset() { + *x = ListBotInstancesV2Request{} + mi := &file_teleport_machineid_v1_bot_instance_service_proto_msgTypes[2] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListBotInstancesV2Request) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListBotInstancesV2Request) ProtoMessage() {} + +func (x *ListBotInstancesV2Request) ProtoReflect() protoreflect.Message { + mi := &file_teleport_machineid_v1_bot_instance_service_proto_msgTypes[2] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListBotInstancesV2Request.ProtoReflect.Descriptor instead. +func (*ListBotInstancesV2Request) Descriptor() ([]byte, []int) { + return file_teleport_machineid_v1_bot_instance_service_proto_rawDescGZIP(), []int{2} +} + +func (x *ListBotInstancesV2Request) GetPageSize() int32 { + if x != nil { + return x.PageSize + } + return 0 +} + +func (x *ListBotInstancesV2Request) GetPageToken() string { + if x != nil { + return x.PageToken + } + return "" +} + +func (x *ListBotInstancesV2Request) GetSortField() string { + if x != nil { + return x.SortField + } + return "" +} + +func (x *ListBotInstancesV2Request) GetSortDesc() bool { + if x != nil { + return x.SortDesc + } + return false +} + +func (x *ListBotInstancesV2Request) GetFilter() *ListBotInstancesV2Request_Filters { + if x != nil { + return x.Filter + } + return nil +} + // Response for ListBotInstances. 
type ListBotInstancesResponse struct { state protoimpl.MessageState `protogen:"open.v1"` @@ -184,7 +283,7 @@ type ListBotInstancesResponse struct { func (x *ListBotInstancesResponse) Reset() { *x = ListBotInstancesResponse{} - mi := &file_teleport_machineid_v1_bot_instance_service_proto_msgTypes[2] + mi := &file_teleport_machineid_v1_bot_instance_service_proto_msgTypes[3] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -196,7 +295,7 @@ func (x *ListBotInstancesResponse) String() string { func (*ListBotInstancesResponse) ProtoMessage() {} func (x *ListBotInstancesResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_machineid_v1_bot_instance_service_proto_msgTypes[2] + mi := &file_teleport_machineid_v1_bot_instance_service_proto_msgTypes[3] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -209,7 +308,7 @@ func (x *ListBotInstancesResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use ListBotInstancesResponse.ProtoReflect.Descriptor instead. 
func (*ListBotInstancesResponse) Descriptor() ([]byte, []int) { - return file_teleport_machineid_v1_bot_instance_service_proto_rawDescGZIP(), []int{2} + return file_teleport_machineid_v1_bot_instance_service_proto_rawDescGZIP(), []int{3} } func (x *ListBotInstancesResponse) GetBotInstances() []*BotInstance { @@ -239,7 +338,7 @@ type DeleteBotInstanceRequest struct { func (x *DeleteBotInstanceRequest) Reset() { *x = DeleteBotInstanceRequest{} - mi := &file_teleport_machineid_v1_bot_instance_service_proto_msgTypes[3] + mi := &file_teleport_machineid_v1_bot_instance_service_proto_msgTypes[4] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -251,7 +350,7 @@ func (x *DeleteBotInstanceRequest) String() string { func (*DeleteBotInstanceRequest) ProtoMessage() {} func (x *DeleteBotInstanceRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_machineid_v1_bot_instance_service_proto_msgTypes[3] + mi := &file_teleport_machineid_v1_bot_instance_service_proto_msgTypes[4] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -264,7 +363,7 @@ func (x *DeleteBotInstanceRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use DeleteBotInstanceRequest.ProtoReflect.Descriptor instead. func (*DeleteBotInstanceRequest) Descriptor() ([]byte, []int) { - return file_teleport_machineid_v1_bot_instance_service_proto_rawDescGZIP(), []int{3} + return file_teleport_machineid_v1_bot_instance_service_proto_rawDescGZIP(), []int{4} } func (x *DeleteBotInstanceRequest) GetBotName() string { @@ -285,14 +384,16 @@ func (x *DeleteBotInstanceRequest) GetInstanceId() string { type SubmitHeartbeatRequest struct { state protoimpl.MessageState `protogen:"open.v1"` // The heartbeat data to submit. 
- Heartbeat *BotInstanceStatusHeartbeat `protobuf:"bytes,1,opt,name=heartbeat,proto3" json:"heartbeat,omitempty"` + Heartbeat *BotInstanceStatusHeartbeat `protobuf:"bytes,1,opt,name=heartbeat,proto3" json:"heartbeat,omitempty"` + // The health of the services/output `tbot` is running. + ServiceHealth []*BotInstanceServiceHealth `protobuf:"bytes,2,rep,name=service_health,json=serviceHealth,proto3" json:"service_health,omitempty"` unknownFields protoimpl.UnknownFields sizeCache protoimpl.SizeCache } func (x *SubmitHeartbeatRequest) Reset() { *x = SubmitHeartbeatRequest{} - mi := &file_teleport_machineid_v1_bot_instance_service_proto_msgTypes[4] + mi := &file_teleport_machineid_v1_bot_instance_service_proto_msgTypes[5] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -304,7 +405,7 @@ func (x *SubmitHeartbeatRequest) String() string { func (*SubmitHeartbeatRequest) ProtoMessage() {} func (x *SubmitHeartbeatRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_machineid_v1_bot_instance_service_proto_msgTypes[4] + mi := &file_teleport_machineid_v1_bot_instance_service_proto_msgTypes[5] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -317,7 +418,7 @@ func (x *SubmitHeartbeatRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use SubmitHeartbeatRequest.ProtoReflect.Descriptor instead. 
func (*SubmitHeartbeatRequest) Descriptor() ([]byte, []int) { - return file_teleport_machineid_v1_bot_instance_service_proto_rawDescGZIP(), []int{4} + return file_teleport_machineid_v1_bot_instance_service_proto_rawDescGZIP(), []int{5} } func (x *SubmitHeartbeatRequest) GetHeartbeat() *BotInstanceStatusHeartbeat { @@ -327,6 +428,13 @@ func (x *SubmitHeartbeatRequest) GetHeartbeat() *BotInstanceStatusHeartbeat { return nil } +func (x *SubmitHeartbeatRequest) GetServiceHealth() []*BotInstanceServiceHealth { + if x != nil { + return x.ServiceHealth + } + return nil +} + // The response for SubmitHeartbeat. type SubmitHeartbeatResponse struct { state protoimpl.MessageState `protogen:"open.v1"` @@ -336,7 +444,7 @@ type SubmitHeartbeatResponse struct { func (x *SubmitHeartbeatResponse) Reset() { *x = SubmitHeartbeatResponse{} - mi := &file_teleport_machineid_v1_bot_instance_service_proto_msgTypes[5] + mi := &file_teleport_machineid_v1_bot_instance_service_proto_msgTypes[6] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -348,7 +456,7 @@ func (x *SubmitHeartbeatResponse) String() string { func (*SubmitHeartbeatResponse) ProtoMessage() {} func (x *SubmitHeartbeatResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_machineid_v1_bot_instance_service_proto_msgTypes[5] + mi := &file_teleport_machineid_v1_bot_instance_service_proto_msgTypes[6] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -361,37 +469,119 @@ func (x *SubmitHeartbeatResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use SubmitHeartbeatResponse.ProtoReflect.Descriptor instead. func (*SubmitHeartbeatResponse) Descriptor() ([]byte, []int) { - return file_teleport_machineid_v1_bot_instance_service_proto_rawDescGZIP(), []int{5} + return file_teleport_machineid_v1_bot_instance_service_proto_rawDescGZIP(), []int{6} +} + +// Filters contains fields to be used to filter the results. 
+type ListBotInstancesV2Request_Filters struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The name of the Bot to list BotInstances for. If non-empty, only + // BotInstances for that bot will be listed. + BotName string `protobuf:"bytes,1,opt,name=bot_name,json=botName,proto3" json:"bot_name,omitempty"` + // A search term used to filter the results. If non-empty, it's used to + // match against supported fields. + SearchTerm string `protobuf:"bytes,2,opt,name=search_term,json=searchTerm,proto3" json:"search_term,omitempty"` + // A Teleport predicate language query used to filter the results. + Query string `protobuf:"bytes,3,opt,name=query,proto3" json:"query,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListBotInstancesV2Request_Filters) Reset() { + *x = ListBotInstancesV2Request_Filters{} + mi := &file_teleport_machineid_v1_bot_instance_service_proto_msgTypes[7] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListBotInstancesV2Request_Filters) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListBotInstancesV2Request_Filters) ProtoMessage() {} + +func (x *ListBotInstancesV2Request_Filters) ProtoReflect() protoreflect.Message { + mi := &file_teleport_machineid_v1_bot_instance_service_proto_msgTypes[7] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListBotInstancesV2Request_Filters.ProtoReflect.Descriptor instead. 
+func (*ListBotInstancesV2Request_Filters) Descriptor() ([]byte, []int) { + return file_teleport_machineid_v1_bot_instance_service_proto_rawDescGZIP(), []int{2, 0} +} + +func (x *ListBotInstancesV2Request_Filters) GetBotName() string { + if x != nil { + return x.BotName + } + return "" +} + +func (x *ListBotInstancesV2Request_Filters) GetSearchTerm() string { + if x != nil { + return x.SearchTerm + } + return "" +} + +func (x *ListBotInstancesV2Request_Filters) GetQuery() string { + if x != nil { + return x.Query + } + return "" } var File_teleport_machineid_v1_bot_instance_service_proto protoreflect.FileDescriptor const file_teleport_machineid_v1_bot_instance_service_proto_rawDesc = "" + "\n" + - "0teleport/machineid/v1/bot_instance_service.proto\x12\x15teleport.machineid.v1\x1a\x1bgoogle/protobuf/empty.proto\x1a(teleport/machineid/v1/bot_instance.proto\"S\n" + + "0teleport/machineid/v1/bot_instance_service.proto\x12\x15teleport.machineid.v1\x1a\x1bgoogle/protobuf/empty.proto\x1a!teleport/legacy/types/types.proto\x1a(teleport/machineid/v1/bot_instance.proto\"S\n" + "\x15GetBotInstanceRequest\x12\x19\n" + "\bbot_name\x18\x01 \x01(\tR\abotName\x12\x1f\n" + "\vinstance_id\x18\x02 \x01(\tR\n" + - "instanceId\"\xab\x01\n" + + "instanceId\"\xce\x01\n" + "\x17ListBotInstancesRequest\x12&\n" + "\x0ffilter_bot_name\x18\x01 \x01(\tR\rfilterBotName\x12\x1b\n" + "\tpage_size\x18\x02 \x01(\x05R\bpageSize\x12\x1d\n" + "\n" + "page_token\x18\x03 \x01(\tR\tpageToken\x12,\n" + - "\x12filter_search_term\x18\x04 \x01(\tR\x10filterSearchTerm\"\x8b\x01\n" + + "\x12filter_search_term\x18\x04 \x01(\tR\x10filterSearchTerm\x12!\n" + + "\x04sort\x18\x05 \x01(\v2\r.types.SortByR\x04sort\"\xc2\x02\n" + + "\x19ListBotInstancesV2Request\x12\x1b\n" + + "\tpage_size\x18\x01 \x01(\x05R\bpageSize\x12\x1d\n" + + "\n" + + "page_token\x18\x02 \x01(\tR\tpageToken\x12\x1d\n" + + "\n" + + "sort_field\x18\x03 \x01(\tR\tsortField\x12\x1b\n" + + "\tsort_desc\x18\x04 \x01(\bR\bsortDesc\x12P\n" + + 
"\x06filter\x18\x05 \x01(\v28.teleport.machineid.v1.ListBotInstancesV2Request.FiltersR\x06filter\x1a[\n" + + "\aFilters\x12\x19\n" + + "\bbot_name\x18\x01 \x01(\tR\abotName\x12\x1f\n" + + "\vsearch_term\x18\x02 \x01(\tR\n" + + "searchTerm\x12\x14\n" + + "\x05query\x18\x03 \x01(\tR\x05query\"\x8b\x01\n" + "\x18ListBotInstancesResponse\x12G\n" + "\rbot_instances\x18\x01 \x03(\v2\".teleport.machineid.v1.BotInstanceR\fbotInstances\x12&\n" + "\x0fnext_page_token\x18\x02 \x01(\tR\rnextPageToken\"V\n" + "\x18DeleteBotInstanceRequest\x12\x19\n" + "\bbot_name\x18\x01 \x01(\tR\abotName\x12\x1f\n" + "\vinstance_id\x18\x02 \x01(\tR\n" + - "instanceId\"i\n" + + "instanceId\"\xc1\x01\n" + "\x16SubmitHeartbeatRequest\x12O\n" + - "\theartbeat\x18\x01 \x01(\v21.teleport.machineid.v1.BotInstanceStatusHeartbeatR\theartbeat\"\x19\n" + - "\x17SubmitHeartbeatResponse2\xbd\x03\n" + + "\theartbeat\x18\x01 \x01(\v21.teleport.machineid.v1.BotInstanceStatusHeartbeatR\theartbeat\x12V\n" + + "\x0eservice_health\x18\x02 \x03(\v2/.teleport.machineid.v1.BotInstanceServiceHealthR\rserviceHealth\"\x19\n" + + "\x17SubmitHeartbeatResponse2\xbb\x04\n" + "\x12BotInstanceService\x12b\n" + - "\x0eGetBotInstance\x12,.teleport.machineid.v1.GetBotInstanceRequest\x1a\".teleport.machineid.v1.BotInstance\x12s\n" + - "\x10ListBotInstances\x12..teleport.machineid.v1.ListBotInstancesRequest\x1a/.teleport.machineid.v1.ListBotInstancesResponse\x12\\\n" + + "\x0eGetBotInstance\x12,.teleport.machineid.v1.GetBotInstanceRequest\x1a\".teleport.machineid.v1.BotInstance\x12x\n" + + "\x10ListBotInstances\x12..teleport.machineid.v1.ListBotInstancesRequest\x1a/.teleport.machineid.v1.ListBotInstancesResponse\"\x03\x88\x02\x01\x12w\n" + + "\x12ListBotInstancesV2\x120.teleport.machineid.v1.ListBotInstancesV2Request\x1a/.teleport.machineid.v1.ListBotInstancesResponse\x12\\\n" + "\x11DeleteBotInstance\x12/.teleport.machineid.v1.DeleteBotInstanceRequest\x1a\x16.google.protobuf.Empty\x12p\n" + 
"\x0fSubmitHeartbeat\x12-.teleport.machineid.v1.SubmitHeartbeatRequest\x1a..teleport.machineid.v1.SubmitHeartbeatResponseBVZTgithub.com/gravitational/teleport/api/gen/proto/go/teleport/machineid/v1;machineidv1b\x06proto3" @@ -407,34 +597,43 @@ func file_teleport_machineid_v1_bot_instance_service_proto_rawDescGZIP() []byte return file_teleport_machineid_v1_bot_instance_service_proto_rawDescData } -var file_teleport_machineid_v1_bot_instance_service_proto_msgTypes = make([]protoimpl.MessageInfo, 6) +var file_teleport_machineid_v1_bot_instance_service_proto_msgTypes = make([]protoimpl.MessageInfo, 8) var file_teleport_machineid_v1_bot_instance_service_proto_goTypes = []any{ - (*GetBotInstanceRequest)(nil), // 0: teleport.machineid.v1.GetBotInstanceRequest - (*ListBotInstancesRequest)(nil), // 1: teleport.machineid.v1.ListBotInstancesRequest - (*ListBotInstancesResponse)(nil), // 2: teleport.machineid.v1.ListBotInstancesResponse - (*DeleteBotInstanceRequest)(nil), // 3: teleport.machineid.v1.DeleteBotInstanceRequest - (*SubmitHeartbeatRequest)(nil), // 4: teleport.machineid.v1.SubmitHeartbeatRequest - (*SubmitHeartbeatResponse)(nil), // 5: teleport.machineid.v1.SubmitHeartbeatResponse - (*BotInstance)(nil), // 6: teleport.machineid.v1.BotInstance - (*BotInstanceStatusHeartbeat)(nil), // 7: teleport.machineid.v1.BotInstanceStatusHeartbeat - (*emptypb.Empty)(nil), // 8: google.protobuf.Empty + (*GetBotInstanceRequest)(nil), // 0: teleport.machineid.v1.GetBotInstanceRequest + (*ListBotInstancesRequest)(nil), // 1: teleport.machineid.v1.ListBotInstancesRequest + (*ListBotInstancesV2Request)(nil), // 2: teleport.machineid.v1.ListBotInstancesV2Request + (*ListBotInstancesResponse)(nil), // 3: teleport.machineid.v1.ListBotInstancesResponse + (*DeleteBotInstanceRequest)(nil), // 4: teleport.machineid.v1.DeleteBotInstanceRequest + (*SubmitHeartbeatRequest)(nil), // 5: teleport.machineid.v1.SubmitHeartbeatRequest + (*SubmitHeartbeatResponse)(nil), // 6: 
teleport.machineid.v1.SubmitHeartbeatResponse + (*ListBotInstancesV2Request_Filters)(nil), // 7: teleport.machineid.v1.ListBotInstancesV2Request.Filters + (*types.SortBy)(nil), // 8: types.SortBy + (*BotInstance)(nil), // 9: teleport.machineid.v1.BotInstance + (*BotInstanceStatusHeartbeat)(nil), // 10: teleport.machineid.v1.BotInstanceStatusHeartbeat + (*BotInstanceServiceHealth)(nil), // 11: teleport.machineid.v1.BotInstanceServiceHealth + (*emptypb.Empty)(nil), // 12: google.protobuf.Empty } var file_teleport_machineid_v1_bot_instance_service_proto_depIdxs = []int32{ - 6, // 0: teleport.machineid.v1.ListBotInstancesResponse.bot_instances:type_name -> teleport.machineid.v1.BotInstance - 7, // 1: teleport.machineid.v1.SubmitHeartbeatRequest.heartbeat:type_name -> teleport.machineid.v1.BotInstanceStatusHeartbeat - 0, // 2: teleport.machineid.v1.BotInstanceService.GetBotInstance:input_type -> teleport.machineid.v1.GetBotInstanceRequest - 1, // 3: teleport.machineid.v1.BotInstanceService.ListBotInstances:input_type -> teleport.machineid.v1.ListBotInstancesRequest - 3, // 4: teleport.machineid.v1.BotInstanceService.DeleteBotInstance:input_type -> teleport.machineid.v1.DeleteBotInstanceRequest - 4, // 5: teleport.machineid.v1.BotInstanceService.SubmitHeartbeat:input_type -> teleport.machineid.v1.SubmitHeartbeatRequest - 6, // 6: teleport.machineid.v1.BotInstanceService.GetBotInstance:output_type -> teleport.machineid.v1.BotInstance - 2, // 7: teleport.machineid.v1.BotInstanceService.ListBotInstances:output_type -> teleport.machineid.v1.ListBotInstancesResponse - 8, // 8: teleport.machineid.v1.BotInstanceService.DeleteBotInstance:output_type -> google.protobuf.Empty - 5, // 9: teleport.machineid.v1.BotInstanceService.SubmitHeartbeat:output_type -> teleport.machineid.v1.SubmitHeartbeatResponse - 6, // [6:10] is the sub-list for method output_type - 2, // [2:6] is the sub-list for method input_type - 2, // [2:2] is the sub-list for extension type_name - 2, // [2:2] is the 
sub-list for extension extendee - 0, // [0:2] is the sub-list for field type_name + 8, // 0: teleport.machineid.v1.ListBotInstancesRequest.sort:type_name -> types.SortBy + 7, // 1: teleport.machineid.v1.ListBotInstancesV2Request.filter:type_name -> teleport.machineid.v1.ListBotInstancesV2Request.Filters + 9, // 2: teleport.machineid.v1.ListBotInstancesResponse.bot_instances:type_name -> teleport.machineid.v1.BotInstance + 10, // 3: teleport.machineid.v1.SubmitHeartbeatRequest.heartbeat:type_name -> teleport.machineid.v1.BotInstanceStatusHeartbeat + 11, // 4: teleport.machineid.v1.SubmitHeartbeatRequest.service_health:type_name -> teleport.machineid.v1.BotInstanceServiceHealth + 0, // 5: teleport.machineid.v1.BotInstanceService.GetBotInstance:input_type -> teleport.machineid.v1.GetBotInstanceRequest + 1, // 6: teleport.machineid.v1.BotInstanceService.ListBotInstances:input_type -> teleport.machineid.v1.ListBotInstancesRequest + 2, // 7: teleport.machineid.v1.BotInstanceService.ListBotInstancesV2:input_type -> teleport.machineid.v1.ListBotInstancesV2Request + 4, // 8: teleport.machineid.v1.BotInstanceService.DeleteBotInstance:input_type -> teleport.machineid.v1.DeleteBotInstanceRequest + 5, // 9: teleport.machineid.v1.BotInstanceService.SubmitHeartbeat:input_type -> teleport.machineid.v1.SubmitHeartbeatRequest + 9, // 10: teleport.machineid.v1.BotInstanceService.GetBotInstance:output_type -> teleport.machineid.v1.BotInstance + 3, // 11: teleport.machineid.v1.BotInstanceService.ListBotInstances:output_type -> teleport.machineid.v1.ListBotInstancesResponse + 3, // 12: teleport.machineid.v1.BotInstanceService.ListBotInstancesV2:output_type -> teleport.machineid.v1.ListBotInstancesResponse + 12, // 13: teleport.machineid.v1.BotInstanceService.DeleteBotInstance:output_type -> google.protobuf.Empty + 6, // 14: teleport.machineid.v1.BotInstanceService.SubmitHeartbeat:output_type -> teleport.machineid.v1.SubmitHeartbeatResponse + 10, // [10:15] is the sub-list for method 
output_type + 5, // [5:10] is the sub-list for method input_type + 5, // [5:5] is the sub-list for extension type_name + 5, // [5:5] is the sub-list for extension extendee + 0, // [0:5] is the sub-list for field type_name } func init() { file_teleport_machineid_v1_bot_instance_service_proto_init() } @@ -449,7 +648,7 @@ func file_teleport_machineid_v1_bot_instance_service_proto_init() { GoPackagePath: reflect.TypeOf(x{}).PkgPath(), RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_machineid_v1_bot_instance_service_proto_rawDesc), len(file_teleport_machineid_v1_bot_instance_service_proto_rawDesc)), NumEnums: 0, - NumMessages: 6, + NumMessages: 8, NumExtensions: 0, NumServices: 1, }, diff --git a/api/gen/proto/go/teleport/machineid/v1/bot_instance_service_grpc.pb.go b/api/gen/proto/go/teleport/machineid/v1/bot_instance_service_grpc.pb.go index 766dda1c5d9a5..3637522d47a33 100644 --- a/api/gen/proto/go/teleport/machineid/v1/bot_instance_service_grpc.pb.go +++ b/api/gen/proto/go/teleport/machineid/v1/bot_instance_service_grpc.pb.go @@ -34,10 +34,11 @@ import ( const _ = grpc.SupportPackageIsVersion9 const ( - BotInstanceService_GetBotInstance_FullMethodName = "/teleport.machineid.v1.BotInstanceService/GetBotInstance" - BotInstanceService_ListBotInstances_FullMethodName = "/teleport.machineid.v1.BotInstanceService/ListBotInstances" - BotInstanceService_DeleteBotInstance_FullMethodName = "/teleport.machineid.v1.BotInstanceService/DeleteBotInstance" - BotInstanceService_SubmitHeartbeat_FullMethodName = "/teleport.machineid.v1.BotInstanceService/SubmitHeartbeat" + BotInstanceService_GetBotInstance_FullMethodName = "/teleport.machineid.v1.BotInstanceService/GetBotInstance" + BotInstanceService_ListBotInstances_FullMethodName = "/teleport.machineid.v1.BotInstanceService/ListBotInstances" + BotInstanceService_ListBotInstancesV2_FullMethodName = "/teleport.machineid.v1.BotInstanceService/ListBotInstancesV2" + BotInstanceService_DeleteBotInstance_FullMethodName = 
"/teleport.machineid.v1.BotInstanceService/DeleteBotInstance" + BotInstanceService_SubmitHeartbeat_FullMethodName = "/teleport.machineid.v1.BotInstanceService/SubmitHeartbeat" ) // BotInstanceServiceClient is the client API for BotInstanceService service. @@ -48,8 +49,12 @@ const ( type BotInstanceServiceClient interface { // GetBotInstance returns the specified BotInstance resource. GetBotInstance(ctx context.Context, in *GetBotInstanceRequest, opts ...grpc.CallOption) (*BotInstance, error) + // Deprecated: Do not use. // ListBotInstances returns a page of BotInstance resources. + // Deprecated: Use ListBotInstancesV2 instead ListBotInstances(ctx context.Context, in *ListBotInstancesRequest, opts ...grpc.CallOption) (*ListBotInstancesResponse, error) + // ListBotInstancesV2 returns a page of BotInstance resources. + ListBotInstancesV2(ctx context.Context, in *ListBotInstancesV2Request, opts ...grpc.CallOption) (*ListBotInstancesResponse, error) // DeleteBotInstance hard deletes the specified BotInstance resource. DeleteBotInstance(ctx context.Context, in *DeleteBotInstanceRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) // SubmitHeartbeat submits a heartbeat for a BotInstance. @@ -74,6 +79,7 @@ func (c *botInstanceServiceClient) GetBotInstance(ctx context.Context, in *GetBo return out, nil } +// Deprecated: Do not use. func (c *botInstanceServiceClient) ListBotInstances(ctx context.Context, in *ListBotInstancesRequest, opts ...grpc.CallOption) (*ListBotInstancesResponse, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) out := new(ListBotInstancesResponse) @@ -84,6 +90,16 @@ func (c *botInstanceServiceClient) ListBotInstances(ctx context.Context, in *Lis return out, nil } +func (c *botInstanceServiceClient) ListBotInstancesV2(ctx context.Context, in *ListBotInstancesV2Request, opts ...grpc.CallOption) (*ListBotInstancesResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) 
+ out := new(ListBotInstancesResponse) + err := c.cc.Invoke(ctx, BotInstanceService_ListBotInstancesV2_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + func (c *botInstanceServiceClient) DeleteBotInstance(ctx context.Context, in *DeleteBotInstanceRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) out := new(emptypb.Empty) @@ -112,8 +128,12 @@ func (c *botInstanceServiceClient) SubmitHeartbeat(ctx context.Context, in *Subm type BotInstanceServiceServer interface { // GetBotInstance returns the specified BotInstance resource. GetBotInstance(context.Context, *GetBotInstanceRequest) (*BotInstance, error) + // Deprecated: Do not use. // ListBotInstances returns a page of BotInstance resources. + // Deprecated: Use ListBotInstancesV2 instead ListBotInstances(context.Context, *ListBotInstancesRequest) (*ListBotInstancesResponse, error) + // ListBotInstancesV2 returns a page of BotInstance resources. + ListBotInstancesV2(context.Context, *ListBotInstancesV2Request) (*ListBotInstancesResponse, error) // DeleteBotInstance hard deletes the specified BotInstance resource. DeleteBotInstance(context.Context, *DeleteBotInstanceRequest) (*emptypb.Empty, error) // SubmitHeartbeat submits a heartbeat for a BotInstance. 
@@ -134,6 +154,9 @@ func (UnimplementedBotInstanceServiceServer) GetBotInstance(context.Context, *Ge func (UnimplementedBotInstanceServiceServer) ListBotInstances(context.Context, *ListBotInstancesRequest) (*ListBotInstancesResponse, error) { return nil, status.Errorf(codes.Unimplemented, "method ListBotInstances not implemented") } +func (UnimplementedBotInstanceServiceServer) ListBotInstancesV2(context.Context, *ListBotInstancesV2Request) (*ListBotInstancesResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListBotInstancesV2 not implemented") +} func (UnimplementedBotInstanceServiceServer) DeleteBotInstance(context.Context, *DeleteBotInstanceRequest) (*emptypb.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method DeleteBotInstance not implemented") } @@ -197,6 +220,24 @@ func _BotInstanceService_ListBotInstances_Handler(srv interface{}, ctx context.C return interceptor(ctx, in, info, handler) } +func _BotInstanceService_ListBotInstancesV2_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListBotInstancesV2Request) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(BotInstanceServiceServer).ListBotInstancesV2(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: BotInstanceService_ListBotInstancesV2_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(BotInstanceServiceServer).ListBotInstancesV2(ctx, req.(*ListBotInstancesV2Request)) + } + return interceptor(ctx, in, info, handler) +} + func _BotInstanceService_DeleteBotInstance_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(DeleteBotInstanceRequest) if err := dec(in); err != nil { @@ -248,6 +289,10 @@ var BotInstanceService_ServiceDesc = 
grpc.ServiceDesc{ MethodName: "ListBotInstances", Handler: _BotInstanceService_ListBotInstances_Handler, }, + { + MethodName: "ListBotInstancesV2", + Handler: _BotInstanceService_ListBotInstancesV2_Handler, + }, { MethodName: "DeleteBotInstance", Handler: _BotInstanceService_DeleteBotInstance_Handler, diff --git a/api/gen/proto/go/teleport/machineid/v1/bot_service.pb.go b/api/gen/proto/go/teleport/machineid/v1/bot_service.pb.go index f63c1d4461a51..91c8491303e00 100644 --- a/api/gen/proto/go/teleport/machineid/v1/bot_service.pb.go +++ b/api/gen/proto/go/teleport/machineid/v1/bot_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/machineid/v1/bot_service.proto diff --git a/api/gen/proto/go/teleport/machineid/v1/federation.pb.go b/api/gen/proto/go/teleport/machineid/v1/federation.pb.go index 0bd5c235c8580..09c80c3f77708 100644 --- a/api/gen/proto/go/teleport/machineid/v1/federation.pb.go +++ b/api/gen/proto/go/teleport/machineid/v1/federation.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/machineid/v1/federation.proto diff --git a/api/gen/proto/go/teleport/machineid/v1/federation_service.pb.go b/api/gen/proto/go/teleport/machineid/v1/federation_service.pb.go index 011253f902388..397dc2360a0a3 100644 --- a/api/gen/proto/go/teleport/machineid/v1/federation_service.pb.go +++ b/api/gen/proto/go/teleport/machineid/v1/federation_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/machineid/v1/federation_service.proto diff --git a/api/gen/proto/go/teleport/machineid/v1/workload_identity_service.pb.go b/api/gen/proto/go/teleport/machineid/v1/workload_identity_service.pb.go index 2a8f8f5b2e53b..f8b0dd9633fdb 100644 --- a/api/gen/proto/go/teleport/machineid/v1/workload_identity_service.pb.go +++ b/api/gen/proto/go/teleport/machineid/v1/workload_identity_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/machineid/v1/workload_identity_service.proto diff --git a/api/gen/proto/go/teleport/notifications/v1/notifications.pb.go b/api/gen/proto/go/teleport/notifications/v1/notifications.pb.go index d7b07c6f91a41..ff694fc700aa0 100644 --- a/api/gen/proto/go/teleport/notifications/v1/notifications.pb.go +++ b/api/gen/proto/go/teleport/notifications/v1/notifications.pb.go @@ -17,7 +17,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/notifications/v1/notifications.proto diff --git a/api/gen/proto/go/teleport/notifications/v1/notifications_service.pb.go b/api/gen/proto/go/teleport/notifications/v1/notifications_service.pb.go index d989e1d33f8b2..1d47da38979bd 100644 --- a/api/gen/proto/go/teleport/notifications/v1/notifications_service.pb.go +++ b/api/gen/proto/go/teleport/notifications/v1/notifications_service.pb.go @@ -17,7 +17,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/notifications/v1/notifications_service.proto diff --git a/api/gen/proto/go/teleport/okta/v1/okta_service.pb.go b/api/gen/proto/go/teleport/okta/v1/okta_service.pb.go index b64a4a3c0ce39..8f5a03d65a556 100644 --- a/api/gen/proto/go/teleport/okta/v1/okta_service.pb.go +++ b/api/gen/proto/go/teleport/okta/v1/okta_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/okta/v1/okta_service.proto @@ -287,8 +287,13 @@ type CreateIntegrationRequest struct { EnableSystemLogExport bool `protobuf:"varint,11,opt,name=enable_system_log_export,json=enableSystemLogExport,proto3" json:"enable_system_log_export,omitempty"` // Whether to assign the builtin okta-requester role to all Okta synced users. DisableAssignDefaultRoles bool `protobuf:"varint,12,opt,name=disable_assign_default_roles,json=disableAssignDefaultRoles,proto3" json:"disable_assign_default_roles,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache + // TimeBetweenImports controls the time between Okta syncs. I.e. importing Okta users, apps and + // groups to teleport. This doesn't affect how quickly Teleport changes are propagated to Okta if + // bidirectional sync is enabled. It will be rounded down to the nearest second. The default value + // is 1800 (30 minutes).
+ TimeBetweenImports *durationpb.Duration `protobuf:"bytes,13,opt,name=time_between_imports,json=timeBetweenImports,proto3" json:"time_between_imports,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache } func (x *CreateIntegrationRequest) Reset() { @@ -405,6 +410,13 @@ func (x *CreateIntegrationRequest) GetDisableAssignDefaultRoles() bool { return false } +func (x *CreateIntegrationRequest) GetTimeBetweenImports() *durationpb.Duration { + if x != nil { + return x.TimeBetweenImports + } + return nil +} + // UpdateIntegrationRequest is the request message for updating an existing Okta integration. type UpdateIntegrationRequest struct { state protoimpl.MessageState `protogen:"open.v1"` @@ -428,8 +440,13 @@ type UpdateIntegrationRequest struct { EnableSystemLogExport bool `protobuf:"varint,11,opt,name=enable_system_log_export,json=enableSystemLogExport,proto3" json:"enable_system_log_export,omitempty"` // Whether to assign the builtin okta-requester role to all Okta synced users. DisableAssignDefaultRoles bool `protobuf:"varint,12,opt,name=disable_assign_default_roles,json=disableAssignDefaultRoles,proto3" json:"disable_assign_default_roles,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache + // TimeBetweenImports controls the time between Okta syncs. I.e. importing Okta users, apps and + // groups to teleport. This doesn't affect how quickly Teleport changes are propagated to Okta if + // bidirectional sync is enabled. It will be rounded down to the nearest second. The default + // value is 1800 (30 minutes). 
+ TimeBetweenImports *durationpb.Duration `protobuf:"bytes,13,opt,name=time_between_imports,json=timeBetweenImports,proto3" json:"time_between_imports,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache } func (x *UpdateIntegrationRequest) Reset() { @@ -525,6 +542,13 @@ func (x *UpdateIntegrationRequest) GetDisableAssignDefaultRoles() bool { return false } +func (x *UpdateIntegrationRequest) GetTimeBetweenImports() *durationpb.Duration { + if x != nil { + return x.TimeBetweenImports + } + return nil +} + // AccessListSettings contains the settings for access list synchronization. type AccessListSettings struct { state protoimpl.MessageState `protogen:"open.v1"` @@ -1808,7 +1832,7 @@ const file_teleport_okta_v1_okta_service_proto_rawDesc = "" + "\x06groups\x18\x01 \x03(\v2).teleport.okta.v1.GetGroupsResponse.GroupR\x06groups\x1a=\n" + "\x05Group\x12\x12\n" + "\x04name\x18\x01 \x01(\tR\x04name\x12 \n" + - "\vdescription\x18\x02 \x01(\tR\vdescription\"\xb1\x05\n" + + "\vdescription\x18\x02 \x01(\tR\vdescription\"\xfe\x05\n" + "\x18CreateIntegrationRequest\x122\n" + "\x15okta_organization_url\x18\x01 \x01(\tR\x13oktaOrganizationUrl\x12M\n" + "\x0fapi_credentials\x18\x02 \x01(\v2$.teleport.okta.v1.OktaAPICredentialsR\x0eapiCredentials\x12\x1d\n" + @@ -1823,7 +1847,8 @@ const file_teleport_okta_v1_okta_service_proto_rawDesc = "" + "\x19enable_bidirectional_sync\x18\n" + " \x01(\bR\x17enableBidirectionalSync\x127\n" + "\x18enable_system_log_export\x18\v \x01(\bR\x15enableSystemLogExport\x12?\n" + - "\x1cdisable_assign_default_roles\x18\f \x01(\bR\x19disableAssignDefaultRoles\"\xd9\x04\n" + + "\x1cdisable_assign_default_roles\x18\f \x01(\bR\x19disableAssignDefaultRoles\x12K\n" + + "\x14time_between_imports\x18\r \x01(\v2\x19.google.protobuf.DurationR\x12timeBetweenImports\"\xa6\x05\n" + "\x18UpdateIntegrationRequest\x12M\n" + "\x0fapi_credentials\x18\x02 \x01(\v2$.teleport.okta.v1.OktaAPICredentialsR\x0eapiCredentials\x12\x1d\n" + "\n" + @@ 
-1835,7 +1860,8 @@ const file_teleport_okta_v1_okta_service_proto_rawDesc = "" + "\x19enable_bidirectional_sync\x18\n" + " \x01(\bR\x17enableBidirectionalSync\x127\n" + "\x18enable_system_log_export\x18\v \x01(\bR\x15enableSystemLogExport\x12?\n" + - "\x1cdisable_assign_default_roles\x18\f \x01(\bR\x19disableAssignDefaultRolesJ\x04\b\b\x10\tJ\x04\b\t\x10\n" + + "\x1cdisable_assign_default_roles\x18\f \x01(\bR\x19disableAssignDefaultRoles\x12K\n" + + "\x14time_between_imports\x18\r \x01(\v2\x19.google.protobuf.DurationR\x12timeBetweenImportsJ\x04\b\b\x10\tJ\x04\b\t\x10\n" + "R\x0freuse_connectorR\x10sso_metadata_url\"\x7f\n" + "\x12AccessListSettings\x12#\n" + "\rgroup_filters\x18\x02 \x03(\tR\fgroupFilters\x12\x1f\n" + @@ -1967,11 +1993,11 @@ var file_teleport_okta_v1_okta_service_proto_goTypes = []any{ (*DeleteAllOktaAssignmentsRequest)(nil), // 27: teleport.okta.v1.DeleteAllOktaAssignmentsRequest (*GetAppsResponse_App)(nil), // 28: teleport.okta.v1.GetAppsResponse.App (*GetGroupsResponse_Group)(nil), // 29: teleport.okta.v1.GetGroupsResponse.Group - (*types.PluginV1)(nil), // 30: types.PluginV1 - (*types.OktaImportRuleV1)(nil), // 31: types.OktaImportRuleV1 - (*types.OktaAssignmentV1)(nil), // 32: types.OktaAssignmentV1 - (types.OktaAssignmentSpecV1_OktaAssignmentStatus)(0), // 33: types.OktaAssignmentSpecV1.OktaAssignmentStatus - (*durationpb.Duration)(nil), // 34: google.protobuf.Duration + (*durationpb.Duration)(nil), // 30: google.protobuf.Duration + (*types.PluginV1)(nil), // 31: types.PluginV1 + (*types.OktaImportRuleV1)(nil), // 32: types.OktaImportRuleV1 + (*types.OktaAssignmentV1)(nil), // 33: types.OktaAssignmentV1 + (types.OktaAssignmentSpecV1_OktaAssignmentStatus)(0), // 34: types.OktaAssignmentSpecV1.OktaAssignmentStatus (*emptypb.Empty)(nil), // 35: google.protobuf.Empty } var file_teleport_okta_v1_okta_service_proto_depIdxs = []int32{ @@ -1981,62 +2007,64 @@ var file_teleport_okta_v1_okta_service_proto_depIdxs = []int32{ 29, // 3: 
teleport.okta.v1.GetGroupsResponse.groups:type_name -> teleport.okta.v1.GetGroupsResponse.Group 7, // 4: teleport.okta.v1.CreateIntegrationRequest.api_credentials:type_name -> teleport.okta.v1.OktaAPICredentials 6, // 5: teleport.okta.v1.CreateIntegrationRequest.access_list_settings:type_name -> teleport.okta.v1.AccessListSettings - 7, // 6: teleport.okta.v1.UpdateIntegrationRequest.api_credentials:type_name -> teleport.okta.v1.OktaAPICredentials - 6, // 7: teleport.okta.v1.UpdateIntegrationRequest.access_list_settings:type_name -> teleport.okta.v1.AccessListSettings - 30, // 8: teleport.okta.v1.CreateIntegrationResponse.plugin:type_name -> types.PluginV1 - 10, // 9: teleport.okta.v1.CreateIntegrationResponse.connector_info:type_name -> teleport.okta.v1.ConnectorInfo - 30, // 10: teleport.okta.v1.UpdateIntegrationResponse.plugin:type_name -> types.PluginV1 - 10, // 11: teleport.okta.v1.UpdateIntegrationResponse.connector_info:type_name -> teleport.okta.v1.ConnectorInfo - 7, // 12: teleport.okta.v1.ValidateClientCredentialsRequest.api_credentials:type_name -> teleport.okta.v1.OktaAPICredentials - 31, // 13: teleport.okta.v1.ListOktaImportRulesResponse.import_rules:type_name -> types.OktaImportRuleV1 - 31, // 14: teleport.okta.v1.CreateOktaImportRuleRequest.import_rule:type_name -> types.OktaImportRuleV1 - 31, // 15: teleport.okta.v1.UpdateOktaImportRuleRequest.import_rule:type_name -> types.OktaImportRuleV1 - 32, // 16: teleport.okta.v1.ListOktaAssignmentsResponse.assignments:type_name -> types.OktaAssignmentV1 - 32, // 17: teleport.okta.v1.CreateOktaAssignmentRequest.assignment:type_name -> types.OktaAssignmentV1 - 32, // 18: teleport.okta.v1.UpdateOktaAssignmentRequest.assignment:type_name -> types.OktaAssignmentV1 - 33, // 19: teleport.okta.v1.UpdateOktaAssignmentStatusRequest.status:type_name -> types.OktaAssignmentSpecV1.OktaAssignmentStatus - 34, // 20: teleport.okta.v1.UpdateOktaAssignmentStatusRequest.time_has_passed:type_name -> google.protobuf.Duration - 
13, // 21: teleport.okta.v1.OktaService.ListOktaImportRules:input_type -> teleport.okta.v1.ListOktaImportRulesRequest - 15, // 22: teleport.okta.v1.OktaService.GetOktaImportRule:input_type -> teleport.okta.v1.GetOktaImportRuleRequest - 16, // 23: teleport.okta.v1.OktaService.CreateOktaImportRule:input_type -> teleport.okta.v1.CreateOktaImportRuleRequest - 17, // 24: teleport.okta.v1.OktaService.UpdateOktaImportRule:input_type -> teleport.okta.v1.UpdateOktaImportRuleRequest - 18, // 25: teleport.okta.v1.OktaService.DeleteOktaImportRule:input_type -> teleport.okta.v1.DeleteOktaImportRuleRequest - 19, // 26: teleport.okta.v1.OktaService.DeleteAllOktaImportRules:input_type -> teleport.okta.v1.DeleteAllOktaImportRulesRequest - 20, // 27: teleport.okta.v1.OktaService.ListOktaAssignments:input_type -> teleport.okta.v1.ListOktaAssignmentsRequest - 22, // 28: teleport.okta.v1.OktaService.GetOktaAssignment:input_type -> teleport.okta.v1.GetOktaAssignmentRequest - 23, // 29: teleport.okta.v1.OktaService.CreateOktaAssignment:input_type -> teleport.okta.v1.CreateOktaAssignmentRequest - 24, // 30: teleport.okta.v1.OktaService.UpdateOktaAssignment:input_type -> teleport.okta.v1.UpdateOktaAssignmentRequest - 25, // 31: teleport.okta.v1.OktaService.UpdateOktaAssignmentStatus:input_type -> teleport.okta.v1.UpdateOktaAssignmentStatusRequest - 26, // 32: teleport.okta.v1.OktaService.DeleteOktaAssignment:input_type -> teleport.okta.v1.DeleteOktaAssignmentRequest - 27, // 33: teleport.okta.v1.OktaService.DeleteAllOktaAssignments:input_type -> teleport.okta.v1.DeleteAllOktaAssignmentsRequest - 11, // 34: teleport.okta.v1.OktaService.ValidateClientCredentials:input_type -> teleport.okta.v1.ValidateClientCredentialsRequest - 4, // 35: teleport.okta.v1.OktaService.CreateIntegration:input_type -> teleport.okta.v1.CreateIntegrationRequest - 5, // 36: teleport.okta.v1.OktaService.UpdateIntegration:input_type -> teleport.okta.v1.UpdateIntegrationRequest - 0, // 37: 
teleport.okta.v1.OktaService.GetApps:input_type -> teleport.okta.v1.GetAppsRequest - 2, // 38: teleport.okta.v1.OktaService.GetGroups:input_type -> teleport.okta.v1.GetGroupsRequest - 14, // 39: teleport.okta.v1.OktaService.ListOktaImportRules:output_type -> teleport.okta.v1.ListOktaImportRulesResponse - 31, // 40: teleport.okta.v1.OktaService.GetOktaImportRule:output_type -> types.OktaImportRuleV1 - 31, // 41: teleport.okta.v1.OktaService.CreateOktaImportRule:output_type -> types.OktaImportRuleV1 - 31, // 42: teleport.okta.v1.OktaService.UpdateOktaImportRule:output_type -> types.OktaImportRuleV1 - 35, // 43: teleport.okta.v1.OktaService.DeleteOktaImportRule:output_type -> google.protobuf.Empty - 35, // 44: teleport.okta.v1.OktaService.DeleteAllOktaImportRules:output_type -> google.protobuf.Empty - 21, // 45: teleport.okta.v1.OktaService.ListOktaAssignments:output_type -> teleport.okta.v1.ListOktaAssignmentsResponse - 32, // 46: teleport.okta.v1.OktaService.GetOktaAssignment:output_type -> types.OktaAssignmentV1 - 32, // 47: teleport.okta.v1.OktaService.CreateOktaAssignment:output_type -> types.OktaAssignmentV1 - 32, // 48: teleport.okta.v1.OktaService.UpdateOktaAssignment:output_type -> types.OktaAssignmentV1 - 35, // 49: teleport.okta.v1.OktaService.UpdateOktaAssignmentStatus:output_type -> google.protobuf.Empty - 35, // 50: teleport.okta.v1.OktaService.DeleteOktaAssignment:output_type -> google.protobuf.Empty - 35, // 51: teleport.okta.v1.OktaService.DeleteAllOktaAssignments:output_type -> google.protobuf.Empty - 12, // 52: teleport.okta.v1.OktaService.ValidateClientCredentials:output_type -> teleport.okta.v1.ValidateClientCredentialsResponse - 8, // 53: teleport.okta.v1.OktaService.CreateIntegration:output_type -> teleport.okta.v1.CreateIntegrationResponse - 9, // 54: teleport.okta.v1.OktaService.UpdateIntegration:output_type -> teleport.okta.v1.UpdateIntegrationResponse - 1, // 55: teleport.okta.v1.OktaService.GetApps:output_type -> 
teleport.okta.v1.GetAppsResponse - 3, // 56: teleport.okta.v1.OktaService.GetGroups:output_type -> teleport.okta.v1.GetGroupsResponse - 39, // [39:57] is the sub-list for method output_type - 21, // [21:39] is the sub-list for method input_type - 21, // [21:21] is the sub-list for extension type_name - 21, // [21:21] is the sub-list for extension extendee - 0, // [0:21] is the sub-list for field type_name + 30, // 6: teleport.okta.v1.CreateIntegrationRequest.time_between_imports:type_name -> google.protobuf.Duration + 7, // 7: teleport.okta.v1.UpdateIntegrationRequest.api_credentials:type_name -> teleport.okta.v1.OktaAPICredentials + 6, // 8: teleport.okta.v1.UpdateIntegrationRequest.access_list_settings:type_name -> teleport.okta.v1.AccessListSettings + 30, // 9: teleport.okta.v1.UpdateIntegrationRequest.time_between_imports:type_name -> google.protobuf.Duration + 31, // 10: teleport.okta.v1.CreateIntegrationResponse.plugin:type_name -> types.PluginV1 + 10, // 11: teleport.okta.v1.CreateIntegrationResponse.connector_info:type_name -> teleport.okta.v1.ConnectorInfo + 31, // 12: teleport.okta.v1.UpdateIntegrationResponse.plugin:type_name -> types.PluginV1 + 10, // 13: teleport.okta.v1.UpdateIntegrationResponse.connector_info:type_name -> teleport.okta.v1.ConnectorInfo + 7, // 14: teleport.okta.v1.ValidateClientCredentialsRequest.api_credentials:type_name -> teleport.okta.v1.OktaAPICredentials + 32, // 15: teleport.okta.v1.ListOktaImportRulesResponse.import_rules:type_name -> types.OktaImportRuleV1 + 32, // 16: teleport.okta.v1.CreateOktaImportRuleRequest.import_rule:type_name -> types.OktaImportRuleV1 + 32, // 17: teleport.okta.v1.UpdateOktaImportRuleRequest.import_rule:type_name -> types.OktaImportRuleV1 + 33, // 18: teleport.okta.v1.ListOktaAssignmentsResponse.assignments:type_name -> types.OktaAssignmentV1 + 33, // 19: teleport.okta.v1.CreateOktaAssignmentRequest.assignment:type_name -> types.OktaAssignmentV1 + 33, // 20: 
teleport.okta.v1.UpdateOktaAssignmentRequest.assignment:type_name -> types.OktaAssignmentV1 + 34, // 21: teleport.okta.v1.UpdateOktaAssignmentStatusRequest.status:type_name -> types.OktaAssignmentSpecV1.OktaAssignmentStatus + 30, // 22: teleport.okta.v1.UpdateOktaAssignmentStatusRequest.time_has_passed:type_name -> google.protobuf.Duration + 13, // 23: teleport.okta.v1.OktaService.ListOktaImportRules:input_type -> teleport.okta.v1.ListOktaImportRulesRequest + 15, // 24: teleport.okta.v1.OktaService.GetOktaImportRule:input_type -> teleport.okta.v1.GetOktaImportRuleRequest + 16, // 25: teleport.okta.v1.OktaService.CreateOktaImportRule:input_type -> teleport.okta.v1.CreateOktaImportRuleRequest + 17, // 26: teleport.okta.v1.OktaService.UpdateOktaImportRule:input_type -> teleport.okta.v1.UpdateOktaImportRuleRequest + 18, // 27: teleport.okta.v1.OktaService.DeleteOktaImportRule:input_type -> teleport.okta.v1.DeleteOktaImportRuleRequest + 19, // 28: teleport.okta.v1.OktaService.DeleteAllOktaImportRules:input_type -> teleport.okta.v1.DeleteAllOktaImportRulesRequest + 20, // 29: teleport.okta.v1.OktaService.ListOktaAssignments:input_type -> teleport.okta.v1.ListOktaAssignmentsRequest + 22, // 30: teleport.okta.v1.OktaService.GetOktaAssignment:input_type -> teleport.okta.v1.GetOktaAssignmentRequest + 23, // 31: teleport.okta.v1.OktaService.CreateOktaAssignment:input_type -> teleport.okta.v1.CreateOktaAssignmentRequest + 24, // 32: teleport.okta.v1.OktaService.UpdateOktaAssignment:input_type -> teleport.okta.v1.UpdateOktaAssignmentRequest + 25, // 33: teleport.okta.v1.OktaService.UpdateOktaAssignmentStatus:input_type -> teleport.okta.v1.UpdateOktaAssignmentStatusRequest + 26, // 34: teleport.okta.v1.OktaService.DeleteOktaAssignment:input_type -> teleport.okta.v1.DeleteOktaAssignmentRequest + 27, // 35: teleport.okta.v1.OktaService.DeleteAllOktaAssignments:input_type -> teleport.okta.v1.DeleteAllOktaAssignmentsRequest + 11, // 36: 
teleport.okta.v1.OktaService.ValidateClientCredentials:input_type -> teleport.okta.v1.ValidateClientCredentialsRequest + 4, // 37: teleport.okta.v1.OktaService.CreateIntegration:input_type -> teleport.okta.v1.CreateIntegrationRequest + 5, // 38: teleport.okta.v1.OktaService.UpdateIntegration:input_type -> teleport.okta.v1.UpdateIntegrationRequest + 0, // 39: teleport.okta.v1.OktaService.GetApps:input_type -> teleport.okta.v1.GetAppsRequest + 2, // 40: teleport.okta.v1.OktaService.GetGroups:input_type -> teleport.okta.v1.GetGroupsRequest + 14, // 41: teleport.okta.v1.OktaService.ListOktaImportRules:output_type -> teleport.okta.v1.ListOktaImportRulesResponse + 32, // 42: teleport.okta.v1.OktaService.GetOktaImportRule:output_type -> types.OktaImportRuleV1 + 32, // 43: teleport.okta.v1.OktaService.CreateOktaImportRule:output_type -> types.OktaImportRuleV1 + 32, // 44: teleport.okta.v1.OktaService.UpdateOktaImportRule:output_type -> types.OktaImportRuleV1 + 35, // 45: teleport.okta.v1.OktaService.DeleteOktaImportRule:output_type -> google.protobuf.Empty + 35, // 46: teleport.okta.v1.OktaService.DeleteAllOktaImportRules:output_type -> google.protobuf.Empty + 21, // 47: teleport.okta.v1.OktaService.ListOktaAssignments:output_type -> teleport.okta.v1.ListOktaAssignmentsResponse + 33, // 48: teleport.okta.v1.OktaService.GetOktaAssignment:output_type -> types.OktaAssignmentV1 + 33, // 49: teleport.okta.v1.OktaService.CreateOktaAssignment:output_type -> types.OktaAssignmentV1 + 33, // 50: teleport.okta.v1.OktaService.UpdateOktaAssignment:output_type -> types.OktaAssignmentV1 + 35, // 51: teleport.okta.v1.OktaService.UpdateOktaAssignmentStatus:output_type -> google.protobuf.Empty + 35, // 52: teleport.okta.v1.OktaService.DeleteOktaAssignment:output_type -> google.protobuf.Empty + 35, // 53: teleport.okta.v1.OktaService.DeleteAllOktaAssignments:output_type -> google.protobuf.Empty + 12, // 54: teleport.okta.v1.OktaService.ValidateClientCredentials:output_type -> 
teleport.okta.v1.ValidateClientCredentialsResponse + 8, // 55: teleport.okta.v1.OktaService.CreateIntegration:output_type -> teleport.okta.v1.CreateIntegrationResponse + 9, // 56: teleport.okta.v1.OktaService.UpdateIntegration:output_type -> teleport.okta.v1.UpdateIntegrationResponse + 1, // 57: teleport.okta.v1.OktaService.GetApps:output_type -> teleport.okta.v1.GetAppsResponse + 3, // 58: teleport.okta.v1.OktaService.GetGroups:output_type -> teleport.okta.v1.GetGroupsResponse + 41, // [41:59] is the sub-list for method output_type + 23, // [23:41] is the sub-list for method input_type + 23, // [23:23] is the sub-list for extension type_name + 23, // [23:23] is the sub-list for extension extendee + 0, // [0:23] is the sub-list for field type_name } func init() { file_teleport_okta_v1_okta_service_proto_init() } diff --git a/api/gen/proto/go/teleport/plugins/v1/plugin_service.pb.go b/api/gen/proto/go/teleport/plugins/v1/plugin_service.pb.go index 733750db186b2..d80f7cb549458 100644 --- a/api/gen/proto/go/teleport/plugins/v1/plugin_service.pb.go +++ b/api/gen/proto/go/teleport/plugins/v1/plugin_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/plugins/v1/plugin_service.proto @@ -702,6 +702,197 @@ func (x *SearchPluginStaticCredentialsRequest) GetLabels() map[string]string { return nil } +// CredentialQuery is a set of values to match when searching for credentials +type CredentialQuery struct { + state protoimpl.MessageState `protogen:"open.v1"` + Labels map[string]string `protobuf:"bytes,1,rep,name=labels,proto3" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *CredentialQuery) Reset() { + *x = CredentialQuery{} + mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[12] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *CredentialQuery) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*CredentialQuery) ProtoMessage() {} + +func (x *CredentialQuery) ProtoReflect() protoreflect.Message { + mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[12] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use CredentialQuery.ProtoReflect.Descriptor instead. +func (*CredentialQuery) Descriptor() ([]byte, []int) { + return file_teleport_plugins_v1_plugin_service_proto_rawDescGZIP(), []int{12} +} + +func (x *CredentialQuery) GetLabels() map[string]string { + if x != nil { + return x.Labels + } + return nil +} + +// UpdatePluginStaticCredentialsRequest holds information for updating a plugin +// static credential. The service will attempt to find the credential to update +// based on the supplied plugin name and labels. 
+type UpdatePluginStaticCredentialsRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Types that are valid to be assigned to Target: + // + // *UpdatePluginStaticCredentialsRequest_Name + // *UpdatePluginStaticCredentialsRequest_Query + Target isUpdatePluginStaticCredentialsRequest_Target `protobuf_oneof:"target"` + // Credential is the payload containing the updated credential. Only the spec + // is allowed to be updated via this interface. + Credential *types.PluginStaticCredentialsSpecV1 `protobuf:"bytes,3,opt,name=credential,proto3" json:"credential,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UpdatePluginStaticCredentialsRequest) Reset() { + *x = UpdatePluginStaticCredentialsRequest{} + mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[13] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UpdatePluginStaticCredentialsRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UpdatePluginStaticCredentialsRequest) ProtoMessage() {} + +func (x *UpdatePluginStaticCredentialsRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[13] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UpdatePluginStaticCredentialsRequest.ProtoReflect.Descriptor instead. 
+func (*UpdatePluginStaticCredentialsRequest) Descriptor() ([]byte, []int) { + return file_teleport_plugins_v1_plugin_service_proto_rawDescGZIP(), []int{13} +} + +func (x *UpdatePluginStaticCredentialsRequest) GetTarget() isUpdatePluginStaticCredentialsRequest_Target { + if x != nil { + return x.Target + } + return nil +} + +func (x *UpdatePluginStaticCredentialsRequest) GetName() string { + if x != nil { + if x, ok := x.Target.(*UpdatePluginStaticCredentialsRequest_Name); ok { + return x.Name + } + } + return "" +} + +func (x *UpdatePluginStaticCredentialsRequest) GetQuery() *CredentialQuery { + if x != nil { + if x, ok := x.Target.(*UpdatePluginStaticCredentialsRequest_Query); ok { + return x.Query + } + } + return nil +} + +func (x *UpdatePluginStaticCredentialsRequest) GetCredential() *types.PluginStaticCredentialsSpecV1 { + if x != nil { + return x.Credential + } + return nil +} + +type isUpdatePluginStaticCredentialsRequest_Target interface { + isUpdatePluginStaticCredentialsRequest_Target() +} + +type UpdatePluginStaticCredentialsRequest_Name struct { + // Name is the name of the plugin static credentials resource that we're + // targeting + Name string `protobuf:"bytes,1,opt,name=name,proto3,oneof"` +} + +type UpdatePluginStaticCredentialsRequest_Query struct { + // Query is the search query that a credential must match in order to be + // updated. The update will only proceed if exactly one credential matches + // the search query. 
+ Query *CredentialQuery `protobuf:"bytes,2,opt,name=query,proto3,oneof"` +} + +func (*UpdatePluginStaticCredentialsRequest_Name) isUpdatePluginStaticCredentialsRequest_Target() {} + +func (*UpdatePluginStaticCredentialsRequest_Query) isUpdatePluginStaticCredentialsRequest_Target() {} + +// UpdatePluginStaticCredentialsResponse holds the updated credential returned +// from UpdatePluginStaticCredentials +type UpdatePluginStaticCredentialsResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + Credential *types.PluginStaticCredentialsV1 `protobuf:"bytes,1,opt,name=credential,proto3" json:"credential,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UpdatePluginStaticCredentialsResponse) Reset() { + *x = UpdatePluginStaticCredentialsResponse{} + mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[14] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UpdatePluginStaticCredentialsResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UpdatePluginStaticCredentialsResponse) ProtoMessage() {} + +func (x *UpdatePluginStaticCredentialsResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[14] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UpdatePluginStaticCredentialsResponse.ProtoReflect.Descriptor instead. 
+func (*UpdatePluginStaticCredentialsResponse) Descriptor() ([]byte, []int) { + return file_teleport_plugins_v1_plugin_service_proto_rawDescGZIP(), []int{14} +} + +func (x *UpdatePluginStaticCredentialsResponse) GetCredential() *types.PluginStaticCredentialsV1 { + if x != nil { + return x.Credential + } + return nil +} + // SearchPluginStaticCredentialsResponse is the response type for // SearchPluginStaticCredentials type SearchPluginStaticCredentialsResponse struct { @@ -714,7 +905,7 @@ type SearchPluginStaticCredentialsResponse struct { func (x *SearchPluginStaticCredentialsResponse) Reset() { *x = SearchPluginStaticCredentialsResponse{} - mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[12] + mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[15] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -726,7 +917,7 @@ func (x *SearchPluginStaticCredentialsResponse) String() string { func (*SearchPluginStaticCredentialsResponse) ProtoMessage() {} func (x *SearchPluginStaticCredentialsResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[12] + mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[15] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -739,7 +930,7 @@ func (x *SearchPluginStaticCredentialsResponse) ProtoReflect() protoreflect.Mess // Deprecated: Use SearchPluginStaticCredentialsResponse.ProtoReflect.Descriptor instead. 
func (*SearchPluginStaticCredentialsResponse) Descriptor() ([]byte, []int) { - return file_teleport_plugins_v1_plugin_service_proto_rawDescGZIP(), []int{12} + return file_teleport_plugins_v1_plugin_service_proto_rawDescGZIP(), []int{15} } func (x *SearchPluginStaticCredentialsResponse) GetCredentials() []*types.PluginStaticCredentialsV1 { @@ -762,7 +953,7 @@ type NeedsCleanupRequest struct { func (x *NeedsCleanupRequest) Reset() { *x = NeedsCleanupRequest{} - mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[13] + mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[16] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -774,7 +965,7 @@ func (x *NeedsCleanupRequest) String() string { func (*NeedsCleanupRequest) ProtoMessage() {} func (x *NeedsCleanupRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[13] + mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[16] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -787,7 +978,7 @@ func (x *NeedsCleanupRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use NeedsCleanupRequest.ProtoReflect.Descriptor instead. 
func (*NeedsCleanupRequest) Descriptor() ([]byte, []int) { - return file_teleport_plugins_v1_plugin_service_proto_rawDescGZIP(), []int{13} + return file_teleport_plugins_v1_plugin_service_proto_rawDescGZIP(), []int{16} } func (x *NeedsCleanupRequest) GetType() string { @@ -812,7 +1003,7 @@ type NeedsCleanupResponse struct { func (x *NeedsCleanupResponse) Reset() { *x = NeedsCleanupResponse{} - mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[14] + mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[17] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -824,7 +1015,7 @@ func (x *NeedsCleanupResponse) String() string { func (*NeedsCleanupResponse) ProtoMessage() {} func (x *NeedsCleanupResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[14] + mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[17] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -837,7 +1028,7 @@ func (x *NeedsCleanupResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use NeedsCleanupResponse.ProtoReflect.Descriptor instead. 
func (*NeedsCleanupResponse) Descriptor() ([]byte, []int) { - return file_teleport_plugins_v1_plugin_service_proto_rawDescGZIP(), []int{14} + return file_teleport_plugins_v1_plugin_service_proto_rawDescGZIP(), []int{17} } func (x *NeedsCleanupResponse) GetNeedsCleanup() bool { @@ -874,7 +1065,7 @@ type CleanupRequest struct { func (x *CleanupRequest) Reset() { *x = CleanupRequest{} - mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[15] + mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[18] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -886,7 +1077,7 @@ func (x *CleanupRequest) String() string { func (*CleanupRequest) ProtoMessage() {} func (x *CleanupRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[15] + mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[18] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -899,7 +1090,7 @@ func (x *CleanupRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use CleanupRequest.ProtoReflect.Descriptor instead. func (*CleanupRequest) Descriptor() ([]byte, []int) { - return file_teleport_plugins_v1_plugin_service_proto_rawDescGZIP(), []int{15} + return file_teleport_plugins_v1_plugin_service_proto_rawDescGZIP(), []int{18} } func (x *CleanupRequest) GetType() string { @@ -909,6 +1100,143 @@ func (x *CleanupRequest) GetType() string { return "" } +// CreatePluginOauthTokenRequest is the request type for creating an OAuth token for a plugin. +type CreatePluginOauthTokenRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // plugin_name is the name of the plugin for which the OAuth token is requested. + PluginName string `protobuf:"bytes,1,opt,name=plugin_name,json=pluginName,proto3" json:"plugin_name,omitempty"` + // client_id is the OAuth client identifier issued to the plugin. 
+ ClientId string `protobuf:"bytes,2,opt,name=client_id,json=clientId,proto3" json:"client_id,omitempty"` + // client_secret is the secret associated with the client_id. + ClientSecret string `protobuf:"bytes,3,opt,name=client_secret,json=clientSecret,proto3" json:"client_secret,omitempty"` + // grant_type is the OAuth 2.0 grant type being used. Currently, only "client_credentials" is supported. + GrantType string `protobuf:"bytes,4,opt,name=grant_type,json=grantType,proto3" json:"grant_type,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *CreatePluginOauthTokenRequest) Reset() { + *x = CreatePluginOauthTokenRequest{} + mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[19] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *CreatePluginOauthTokenRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*CreatePluginOauthTokenRequest) ProtoMessage() {} + +func (x *CreatePluginOauthTokenRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[19] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use CreatePluginOauthTokenRequest.ProtoReflect.Descriptor instead. 
+func (*CreatePluginOauthTokenRequest) Descriptor() ([]byte, []int) { + return file_teleport_plugins_v1_plugin_service_proto_rawDescGZIP(), []int{19} +} + +func (x *CreatePluginOauthTokenRequest) GetPluginName() string { + if x != nil { + return x.PluginName + } + return "" +} + +func (x *CreatePluginOauthTokenRequest) GetClientId() string { + if x != nil { + return x.ClientId + } + return "" +} + +func (x *CreatePluginOauthTokenRequest) GetClientSecret() string { + if x != nil { + return x.ClientSecret + } + return "" +} + +func (x *CreatePluginOauthTokenRequest) GetGrantType() string { + if x != nil { + return x.GrantType + } + return "" +} + +// CreatePluginOauthTokenResponse is the response type for a successful OAuth token creation. +type CreatePluginOauthTokenResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // access_token is the generated token issued to the plugin. + AccessToken string `protobuf:"bytes,1,opt,name=access_token,json=accessToken,proto3" json:"access_token,omitempty"` + // token_type describes the type of the token issued + TokenType string `protobuf:"bytes,2,opt,name=token_type,json=tokenType,proto3" json:"token_type,omitempty"` + // expires_in is the number of seconds until the token expires. 
+ ExpiresIn int64 `protobuf:"varint,3,opt,name=expires_in,json=expiresIn,proto3" json:"expires_in,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *CreatePluginOauthTokenResponse) Reset() { + *x = CreatePluginOauthTokenResponse{} + mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[20] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *CreatePluginOauthTokenResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*CreatePluginOauthTokenResponse) ProtoMessage() {} + +func (x *CreatePluginOauthTokenResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_plugins_v1_plugin_service_proto_msgTypes[20] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use CreatePluginOauthTokenResponse.ProtoReflect.Descriptor instead. 
+func (*CreatePluginOauthTokenResponse) Descriptor() ([]byte, []int) { + return file_teleport_plugins_v1_plugin_service_proto_rawDescGZIP(), []int{20} +} + +func (x *CreatePluginOauthTokenResponse) GetAccessToken() string { + if x != nil { + return x.AccessToken + } + return "" +} + +func (x *CreatePluginOauthTokenResponse) GetTokenType() string { + if x != nil { + return x.TokenType + } + return "" +} + +func (x *CreatePluginOauthTokenResponse) GetExpiresIn() int64 { + if x != nil { + return x.ExpiresIn + } + return 0 +} + var File_teleport_plugins_v1_plugin_service_proto protoreflect.FileDescriptor const file_teleport_plugins_v1_plugin_service_proto_rawDesc = "" + @@ -954,7 +1282,23 @@ const file_teleport_plugins_v1_plugin_service_proto_rawDesc = "" + "\x06labels\x18\x01 \x03(\v2E.teleport.plugins.v1.SearchPluginStaticCredentialsRequest.LabelsEntryR\x06labels\x1a9\n" + "\vLabelsEntry\x12\x10\n" + "\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n" + - "\x05value\x18\x02 \x01(\tR\x05value:\x028\x01\"k\n" + + "\x05value\x18\x02 \x01(\tR\x05value:\x028\x01\"\x96\x01\n" + + "\x0fCredentialQuery\x12H\n" + + "\x06labels\x18\x01 \x03(\v20.teleport.plugins.v1.CredentialQuery.LabelsEntryR\x06labels\x1a9\n" + + "\vLabelsEntry\x12\x10\n" + + "\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n" + + "\x05value\x18\x02 \x01(\tR\x05value:\x028\x01\"\xca\x01\n" + + "$UpdatePluginStaticCredentialsRequest\x12\x14\n" + + "\x04name\x18\x01 \x01(\tH\x00R\x04name\x12<\n" + + "\x05query\x18\x02 \x01(\v2$.teleport.plugins.v1.CredentialQueryH\x00R\x05query\x12D\n" + + "\n" + + "credential\x18\x03 \x01(\v2$.types.PluginStaticCredentialsSpecV1R\n" + + "credentialB\b\n" + + "\x06target\"i\n" + + "%UpdatePluginStaticCredentialsResponse\x12@\n" + + "\n" + + "credential\x18\x01 \x01(\v2 .types.PluginStaticCredentialsV1R\n" + + "credential\"k\n" + "%SearchPluginStaticCredentialsResponse\x12B\n" + "\vcredentials\x18\x01 \x03(\v2 .types.PluginStaticCredentialsV1R\vcredentials\")\n" + 
"\x13NeedsCleanupRequest\x12\x12\n" + @@ -964,7 +1308,21 @@ const file_teleport_plugins_v1_plugin_service_proto_rawDesc = "" + "\x14resources_to_cleanup\x18\x02 \x03(\v2\x11.types.ResourceIDR\x12resourcesToCleanup\x12#\n" + "\rplugin_active\x18\x03 \x01(\bR\fpluginActive\"$\n" + "\x0eCleanupRequest\x12\x12\n" + - "\x04type\x18\x01 \x01(\tR\x04type2\xac\b\n" + + "\x04type\x18\x01 \x01(\tR\x04type\"\xa1\x01\n" + + "\x1dCreatePluginOauthTokenRequest\x12\x1f\n" + + "\vplugin_name\x18\x01 \x01(\tR\n" + + "pluginName\x12\x1b\n" + + "\tclient_id\x18\x02 \x01(\tR\bclientId\x12#\n" + + "\rclient_secret\x18\x03 \x01(\tR\fclientSecret\x12\x1d\n" + + "\n" + + "grant_type\x18\x04 \x01(\tR\tgrantType\"\x81\x01\n" + + "\x1eCreatePluginOauthTokenResponse\x12!\n" + + "\faccess_token\x18\x01 \x01(\tR\vaccessToken\x12\x1d\n" + + "\n" + + "token_type\x18\x02 \x01(\tR\ttokenType\x12\x1d\n" + + "\n" + + "expires_in\x18\x03 \x01(\x03R\texpiresIn2\xc9\n" + + "\n" + "\rPluginService\x12P\n" + "\fCreatePlugin\x12(.teleport.plugins.v1.CreatePluginRequest\x1a\x16.google.protobuf.Empty\x12C\n" + "\tGetPlugin\x12%.teleport.plugins.v1.GetPluginRequest\x1a\x0f.types.PluginV1\x12I\n" + @@ -974,9 +1332,11 @@ const file_teleport_plugins_v1_plugin_service_proto_rawDesc = "" + "\x14SetPluginCredentials\x120.teleport.plugins.v1.SetPluginCredentialsRequest\x1a\x16.google.protobuf.Empty\x12V\n" + "\x0fSetPluginStatus\x12+.teleport.plugins.v1.SetPluginStatusRequest\x1a\x16.google.protobuf.Empty\x12\x84\x01\n" + "\x17GetAvailablePluginTypes\x123.teleport.plugins.v1.GetAvailablePluginTypesRequest\x1a4.teleport.plugins.v1.GetAvailablePluginTypesResponse\x12\x96\x01\n" + - "\x1dSearchPluginStaticCredentials\x129.teleport.plugins.v1.SearchPluginStaticCredentialsRequest\x1a:.teleport.plugins.v1.SearchPluginStaticCredentialsResponse\x12c\n" + + 
"\x1dSearchPluginStaticCredentials\x129.teleport.plugins.v1.SearchPluginStaticCredentialsRequest\x1a:.teleport.plugins.v1.SearchPluginStaticCredentialsResponse\x12\x96\x01\n" + + "\x1dUpdatePluginStaticCredentials\x129.teleport.plugins.v1.UpdatePluginStaticCredentialsRequest\x1a:.teleport.plugins.v1.UpdatePluginStaticCredentialsResponse\x12c\n" + "\fNeedsCleanup\x12(.teleport.plugins.v1.NeedsCleanupRequest\x1a).teleport.plugins.v1.NeedsCleanupResponse\x12F\n" + - "\aCleanup\x12#.teleport.plugins.v1.CleanupRequest\x1a\x16.google.protobuf.EmptyBRZPgithub.com/gravitational/teleport/api/gen/proto/go/teleport/plugins/v1;pluginsv1b\x06proto3" + "\aCleanup\x12#.teleport.plugins.v1.CleanupRequest\x1a\x16.google.protobuf.Empty\x12\x81\x01\n" + + "\x16CreatePluginOauthToken\x122.teleport.plugins.v1.CreatePluginOauthTokenRequest\x1a3.teleport.plugins.v1.CreatePluginOauthTokenResponseBRZPgithub.com/gravitational/teleport/api/gen/proto/go/teleport/plugins/v1;pluginsv1b\x06proto3" var ( file_teleport_plugins_v1_plugin_service_proto_rawDescOnce sync.Once @@ -990,7 +1350,7 @@ func file_teleport_plugins_v1_plugin_service_proto_rawDescGZIP() []byte { return file_teleport_plugins_v1_plugin_service_proto_rawDescData } -var file_teleport_plugins_v1_plugin_service_proto_msgTypes = make([]protoimpl.MessageInfo, 18) +var file_teleport_plugins_v1_plugin_service_proto_msgTypes = make([]protoimpl.MessageInfo, 24) var file_teleport_plugins_v1_plugin_service_proto_goTypes = []any{ (*PluginType)(nil), // 0: teleport.plugins.v1.PluginType (*CreatePluginRequest)(nil), // 1: teleport.plugins.v1.CreatePluginRequest @@ -1004,61 +1364,76 @@ var file_teleport_plugins_v1_plugin_service_proto_goTypes = []any{ (*GetAvailablePluginTypesRequest)(nil), // 9: teleport.plugins.v1.GetAvailablePluginTypesRequest (*GetAvailablePluginTypesResponse)(nil), // 10: teleport.plugins.v1.GetAvailablePluginTypesResponse (*SearchPluginStaticCredentialsRequest)(nil), // 11: 
teleport.plugins.v1.SearchPluginStaticCredentialsRequest - (*SearchPluginStaticCredentialsResponse)(nil), // 12: teleport.plugins.v1.SearchPluginStaticCredentialsResponse - (*NeedsCleanupRequest)(nil), // 13: teleport.plugins.v1.NeedsCleanupRequest - (*NeedsCleanupResponse)(nil), // 14: teleport.plugins.v1.NeedsCleanupResponse - (*CleanupRequest)(nil), // 15: teleport.plugins.v1.CleanupRequest - nil, // 16: teleport.plugins.v1.CreatePluginRequest.CredentialLabelsEntry - nil, // 17: teleport.plugins.v1.SearchPluginStaticCredentialsRequest.LabelsEntry - (*types.PluginV1)(nil), // 18: types.PluginV1 - (*types.PluginBootstrapCredentialsV1)(nil), // 19: types.PluginBootstrapCredentialsV1 - (*types.PluginStaticCredentialsV1)(nil), // 20: types.PluginStaticCredentialsV1 - (*types.PluginCredentialsV1)(nil), // 21: types.PluginCredentialsV1 - (*types.PluginStatusV1)(nil), // 22: types.PluginStatusV1 - (*types.ResourceID)(nil), // 23: types.ResourceID - (*emptypb.Empty)(nil), // 24: google.protobuf.Empty + (*CredentialQuery)(nil), // 12: teleport.plugins.v1.CredentialQuery + (*UpdatePluginStaticCredentialsRequest)(nil), // 13: teleport.plugins.v1.UpdatePluginStaticCredentialsRequest + (*UpdatePluginStaticCredentialsResponse)(nil), // 14: teleport.plugins.v1.UpdatePluginStaticCredentialsResponse + (*SearchPluginStaticCredentialsResponse)(nil), // 15: teleport.plugins.v1.SearchPluginStaticCredentialsResponse + (*NeedsCleanupRequest)(nil), // 16: teleport.plugins.v1.NeedsCleanupRequest + (*NeedsCleanupResponse)(nil), // 17: teleport.plugins.v1.NeedsCleanupResponse + (*CleanupRequest)(nil), // 18: teleport.plugins.v1.CleanupRequest + (*CreatePluginOauthTokenRequest)(nil), // 19: teleport.plugins.v1.CreatePluginOauthTokenRequest + (*CreatePluginOauthTokenResponse)(nil), // 20: teleport.plugins.v1.CreatePluginOauthTokenResponse + nil, // 21: teleport.plugins.v1.CreatePluginRequest.CredentialLabelsEntry + nil, // 22: 
teleport.plugins.v1.SearchPluginStaticCredentialsRequest.LabelsEntry + nil, // 23: teleport.plugins.v1.CredentialQuery.LabelsEntry + (*types.PluginV1)(nil), // 24: types.PluginV1 + (*types.PluginBootstrapCredentialsV1)(nil), // 25: types.PluginBootstrapCredentialsV1 + (*types.PluginStaticCredentialsV1)(nil), // 26: types.PluginStaticCredentialsV1 + (*types.PluginCredentialsV1)(nil), // 27: types.PluginCredentialsV1 + (*types.PluginStatusV1)(nil), // 28: types.PluginStatusV1 + (*types.PluginStaticCredentialsSpecV1)(nil), // 29: types.PluginStaticCredentialsSpecV1 + (*types.ResourceID)(nil), // 30: types.ResourceID + (*emptypb.Empty)(nil), // 31: google.protobuf.Empty } var file_teleport_plugins_v1_plugin_service_proto_depIdxs = []int32{ - 18, // 0: teleport.plugins.v1.CreatePluginRequest.plugin:type_name -> types.PluginV1 - 19, // 1: teleport.plugins.v1.CreatePluginRequest.bootstrap_credentials:type_name -> types.PluginBootstrapCredentialsV1 - 20, // 2: teleport.plugins.v1.CreatePluginRequest.static_credentials:type_name -> types.PluginStaticCredentialsV1 - 20, // 3: teleport.plugins.v1.CreatePluginRequest.static_credentials_list:type_name -> types.PluginStaticCredentialsV1 - 16, // 4: teleport.plugins.v1.CreatePluginRequest.credential_labels:type_name -> teleport.plugins.v1.CreatePluginRequest.CredentialLabelsEntry - 18, // 5: teleport.plugins.v1.UpdatePluginRequest.plugin:type_name -> types.PluginV1 - 18, // 6: teleport.plugins.v1.ListPluginsResponse.plugins:type_name -> types.PluginV1 - 21, // 7: teleport.plugins.v1.SetPluginCredentialsRequest.credentials:type_name -> types.PluginCredentialsV1 - 22, // 8: teleport.plugins.v1.SetPluginStatusRequest.status:type_name -> types.PluginStatusV1 + 24, // 0: teleport.plugins.v1.CreatePluginRequest.plugin:type_name -> types.PluginV1 + 25, // 1: teleport.plugins.v1.CreatePluginRequest.bootstrap_credentials:type_name -> types.PluginBootstrapCredentialsV1 + 26, // 2: 
teleport.plugins.v1.CreatePluginRequest.static_credentials:type_name -> types.PluginStaticCredentialsV1 + 26, // 3: teleport.plugins.v1.CreatePluginRequest.static_credentials_list:type_name -> types.PluginStaticCredentialsV1 + 21, // 4: teleport.plugins.v1.CreatePluginRequest.credential_labels:type_name -> teleport.plugins.v1.CreatePluginRequest.CredentialLabelsEntry + 24, // 5: teleport.plugins.v1.UpdatePluginRequest.plugin:type_name -> types.PluginV1 + 24, // 6: teleport.plugins.v1.ListPluginsResponse.plugins:type_name -> types.PluginV1 + 27, // 7: teleport.plugins.v1.SetPluginCredentialsRequest.credentials:type_name -> types.PluginCredentialsV1 + 28, // 8: teleport.plugins.v1.SetPluginStatusRequest.status:type_name -> types.PluginStatusV1 0, // 9: teleport.plugins.v1.GetAvailablePluginTypesResponse.plugin_types:type_name -> teleport.plugins.v1.PluginType - 17, // 10: teleport.plugins.v1.SearchPluginStaticCredentialsRequest.labels:type_name -> teleport.plugins.v1.SearchPluginStaticCredentialsRequest.LabelsEntry - 20, // 11: teleport.plugins.v1.SearchPluginStaticCredentialsResponse.credentials:type_name -> types.PluginStaticCredentialsV1 - 23, // 12: teleport.plugins.v1.NeedsCleanupResponse.resources_to_cleanup:type_name -> types.ResourceID - 1, // 13: teleport.plugins.v1.PluginService.CreatePlugin:input_type -> teleport.plugins.v1.CreatePluginRequest - 2, // 14: teleport.plugins.v1.PluginService.GetPlugin:input_type -> teleport.plugins.v1.GetPluginRequest - 3, // 15: teleport.plugins.v1.PluginService.UpdatePlugin:input_type -> teleport.plugins.v1.UpdatePluginRequest - 6, // 16: teleport.plugins.v1.PluginService.DeletePlugin:input_type -> teleport.plugins.v1.DeletePluginRequest - 4, // 17: teleport.plugins.v1.PluginService.ListPlugins:input_type -> teleport.plugins.v1.ListPluginsRequest - 7, // 18: teleport.plugins.v1.PluginService.SetPluginCredentials:input_type -> teleport.plugins.v1.SetPluginCredentialsRequest - 8, // 19: 
teleport.plugins.v1.PluginService.SetPluginStatus:input_type -> teleport.plugins.v1.SetPluginStatusRequest - 9, // 20: teleport.plugins.v1.PluginService.GetAvailablePluginTypes:input_type -> teleport.plugins.v1.GetAvailablePluginTypesRequest - 11, // 21: teleport.plugins.v1.PluginService.SearchPluginStaticCredentials:input_type -> teleport.plugins.v1.SearchPluginStaticCredentialsRequest - 13, // 22: teleport.plugins.v1.PluginService.NeedsCleanup:input_type -> teleport.plugins.v1.NeedsCleanupRequest - 15, // 23: teleport.plugins.v1.PluginService.Cleanup:input_type -> teleport.plugins.v1.CleanupRequest - 24, // 24: teleport.plugins.v1.PluginService.CreatePlugin:output_type -> google.protobuf.Empty - 18, // 25: teleport.plugins.v1.PluginService.GetPlugin:output_type -> types.PluginV1 - 18, // 26: teleport.plugins.v1.PluginService.UpdatePlugin:output_type -> types.PluginV1 - 24, // 27: teleport.plugins.v1.PluginService.DeletePlugin:output_type -> google.protobuf.Empty - 5, // 28: teleport.plugins.v1.PluginService.ListPlugins:output_type -> teleport.plugins.v1.ListPluginsResponse - 24, // 29: teleport.plugins.v1.PluginService.SetPluginCredentials:output_type -> google.protobuf.Empty - 24, // 30: teleport.plugins.v1.PluginService.SetPluginStatus:output_type -> google.protobuf.Empty - 10, // 31: teleport.plugins.v1.PluginService.GetAvailablePluginTypes:output_type -> teleport.plugins.v1.GetAvailablePluginTypesResponse - 12, // 32: teleport.plugins.v1.PluginService.SearchPluginStaticCredentials:output_type -> teleport.plugins.v1.SearchPluginStaticCredentialsResponse - 14, // 33: teleport.plugins.v1.PluginService.NeedsCleanup:output_type -> teleport.plugins.v1.NeedsCleanupResponse - 24, // 34: teleport.plugins.v1.PluginService.Cleanup:output_type -> google.protobuf.Empty - 24, // [24:35] is the sub-list for method output_type - 13, // [13:24] is the sub-list for method input_type - 13, // [13:13] is the sub-list for extension type_name - 13, // [13:13] is the sub-list for 
extension extendee - 0, // [0:13] is the sub-list for field type_name + 22, // 10: teleport.plugins.v1.SearchPluginStaticCredentialsRequest.labels:type_name -> teleport.plugins.v1.SearchPluginStaticCredentialsRequest.LabelsEntry + 23, // 11: teleport.plugins.v1.CredentialQuery.labels:type_name -> teleport.plugins.v1.CredentialQuery.LabelsEntry + 12, // 12: teleport.plugins.v1.UpdatePluginStaticCredentialsRequest.query:type_name -> teleport.plugins.v1.CredentialQuery + 29, // 13: teleport.plugins.v1.UpdatePluginStaticCredentialsRequest.credential:type_name -> types.PluginStaticCredentialsSpecV1 + 26, // 14: teleport.plugins.v1.UpdatePluginStaticCredentialsResponse.credential:type_name -> types.PluginStaticCredentialsV1 + 26, // 15: teleport.plugins.v1.SearchPluginStaticCredentialsResponse.credentials:type_name -> types.PluginStaticCredentialsV1 + 30, // 16: teleport.plugins.v1.NeedsCleanupResponse.resources_to_cleanup:type_name -> types.ResourceID + 1, // 17: teleport.plugins.v1.PluginService.CreatePlugin:input_type -> teleport.plugins.v1.CreatePluginRequest + 2, // 18: teleport.plugins.v1.PluginService.GetPlugin:input_type -> teleport.plugins.v1.GetPluginRequest + 3, // 19: teleport.plugins.v1.PluginService.UpdatePlugin:input_type -> teleport.plugins.v1.UpdatePluginRequest + 6, // 20: teleport.plugins.v1.PluginService.DeletePlugin:input_type -> teleport.plugins.v1.DeletePluginRequest + 4, // 21: teleport.plugins.v1.PluginService.ListPlugins:input_type -> teleport.plugins.v1.ListPluginsRequest + 7, // 22: teleport.plugins.v1.PluginService.SetPluginCredentials:input_type -> teleport.plugins.v1.SetPluginCredentialsRequest + 8, // 23: teleport.plugins.v1.PluginService.SetPluginStatus:input_type -> teleport.plugins.v1.SetPluginStatusRequest + 9, // 24: teleport.plugins.v1.PluginService.GetAvailablePluginTypes:input_type -> teleport.plugins.v1.GetAvailablePluginTypesRequest + 11, // 25: teleport.plugins.v1.PluginService.SearchPluginStaticCredentials:input_type -> 
teleport.plugins.v1.SearchPluginStaticCredentialsRequest + 13, // 26: teleport.plugins.v1.PluginService.UpdatePluginStaticCredentials:input_type -> teleport.plugins.v1.UpdatePluginStaticCredentialsRequest + 16, // 27: teleport.plugins.v1.PluginService.NeedsCleanup:input_type -> teleport.plugins.v1.NeedsCleanupRequest + 18, // 28: teleport.plugins.v1.PluginService.Cleanup:input_type -> teleport.plugins.v1.CleanupRequest + 19, // 29: teleport.plugins.v1.PluginService.CreatePluginOauthToken:input_type -> teleport.plugins.v1.CreatePluginOauthTokenRequest + 31, // 30: teleport.plugins.v1.PluginService.CreatePlugin:output_type -> google.protobuf.Empty + 24, // 31: teleport.plugins.v1.PluginService.GetPlugin:output_type -> types.PluginV1 + 24, // 32: teleport.plugins.v1.PluginService.UpdatePlugin:output_type -> types.PluginV1 + 31, // 33: teleport.plugins.v1.PluginService.DeletePlugin:output_type -> google.protobuf.Empty + 5, // 34: teleport.plugins.v1.PluginService.ListPlugins:output_type -> teleport.plugins.v1.ListPluginsResponse + 31, // 35: teleport.plugins.v1.PluginService.SetPluginCredentials:output_type -> google.protobuf.Empty + 31, // 36: teleport.plugins.v1.PluginService.SetPluginStatus:output_type -> google.protobuf.Empty + 10, // 37: teleport.plugins.v1.PluginService.GetAvailablePluginTypes:output_type -> teleport.plugins.v1.GetAvailablePluginTypesResponse + 15, // 38: teleport.plugins.v1.PluginService.SearchPluginStaticCredentials:output_type -> teleport.plugins.v1.SearchPluginStaticCredentialsResponse + 14, // 39: teleport.plugins.v1.PluginService.UpdatePluginStaticCredentials:output_type -> teleport.plugins.v1.UpdatePluginStaticCredentialsResponse + 17, // 40: teleport.plugins.v1.PluginService.NeedsCleanup:output_type -> teleport.plugins.v1.NeedsCleanupResponse + 31, // 41: teleport.plugins.v1.PluginService.Cleanup:output_type -> google.protobuf.Empty + 20, // 42: teleport.plugins.v1.PluginService.CreatePluginOauthToken:output_type -> 
teleport.plugins.v1.CreatePluginOauthTokenResponse + 30, // [30:43] is the sub-list for method output_type + 17, // [17:30] is the sub-list for method input_type + 17, // [17:17] is the sub-list for extension type_name + 17, // [17:17] is the sub-list for extension extendee + 0, // [0:17] is the sub-list for field type_name } func init() { file_teleport_plugins_v1_plugin_service_proto_init() } @@ -1066,13 +1441,17 @@ func file_teleport_plugins_v1_plugin_service_proto_init() { if File_teleport_plugins_v1_plugin_service_proto != nil { return } + file_teleport_plugins_v1_plugin_service_proto_msgTypes[13].OneofWrappers = []any{ + (*UpdatePluginStaticCredentialsRequest_Name)(nil), + (*UpdatePluginStaticCredentialsRequest_Query)(nil), + } type x struct{} out := protoimpl.TypeBuilder{ File: protoimpl.DescBuilder{ GoPackagePath: reflect.TypeOf(x{}).PkgPath(), RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_plugins_v1_plugin_service_proto_rawDesc), len(file_teleport_plugins_v1_plugin_service_proto_rawDesc)), NumEnums: 0, - NumMessages: 18, + NumMessages: 24, NumExtensions: 0, NumServices: 1, }, diff --git a/api/gen/proto/go/teleport/plugins/v1/plugin_service_grpc.pb.go b/api/gen/proto/go/teleport/plugins/v1/plugin_service_grpc.pb.go index 1af0d3e7dfcc7..e31865eab4afa 100644 --- a/api/gen/proto/go/teleport/plugins/v1/plugin_service_grpc.pb.go +++ b/api/gen/proto/go/teleport/plugins/v1/plugin_service_grpc.pb.go @@ -44,8 +44,10 @@ const ( PluginService_SetPluginStatus_FullMethodName = "/teleport.plugins.v1.PluginService/SetPluginStatus" PluginService_GetAvailablePluginTypes_FullMethodName = "/teleport.plugins.v1.PluginService/GetAvailablePluginTypes" PluginService_SearchPluginStaticCredentials_FullMethodName = "/teleport.plugins.v1.PluginService/SearchPluginStaticCredentials" + PluginService_UpdatePluginStaticCredentials_FullMethodName = "/teleport.plugins.v1.PluginService/UpdatePluginStaticCredentials" PluginService_NeedsCleanup_FullMethodName = 
"/teleport.plugins.v1.PluginService/NeedsCleanup" PluginService_Cleanup_FullMethodName = "/teleport.plugins.v1.PluginService/Cleanup" + PluginService_CreatePluginOauthToken_FullMethodName = "/teleport.plugins.v1.PluginService/CreatePluginOauthToken" ) // PluginServiceClient is the client API for PluginService service. @@ -75,11 +77,18 @@ type PluginServiceClient interface { // for. Only accessible by RoleAdmin and, in the case of Teleport Assist, // RoleProxy. SearchPluginStaticCredentials(ctx context.Context, in *SearchPluginStaticCredentialsRequest, opts ...grpc.CallOption) (*SearchPluginStaticCredentialsResponse, error) + // UpdatePluginStaticCredentials + UpdatePluginStaticCredentials(ctx context.Context, in *UpdatePluginStaticCredentialsRequest, opts ...grpc.CallOption) (*UpdatePluginStaticCredentialsResponse, error) // NeedsCleanup will indicate whether a plugin of the given type needs cleanup // before it can be created. NeedsCleanup(ctx context.Context, in *NeedsCleanupRequest, opts ...grpc.CallOption) (*NeedsCleanupResponse, error) // Cleanup will clean up the resources for the given plugin type. Cleanup(ctx context.Context, in *CleanupRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) + // CreatePluginOauthToken issues a short-lived OAuth access token for the specified plugin. 
+ // + // This endpoint supports the OAuth 2.0 "client_credentials" grant type, where the plugin + // authenticates using its client ID and client secret. + CreatePluginOauthToken(ctx context.Context, in *CreatePluginOauthTokenRequest, opts ...grpc.CallOption) (*CreatePluginOauthTokenResponse, error) } type pluginServiceClient struct { @@ -180,6 +189,16 @@ func (c *pluginServiceClient) SearchPluginStaticCredentials(ctx context.Context, return out, nil } +func (c *pluginServiceClient) UpdatePluginStaticCredentials(ctx context.Context, in *UpdatePluginStaticCredentialsRequest, opts ...grpc.CallOption) (*UpdatePluginStaticCredentialsResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(UpdatePluginStaticCredentialsResponse) + err := c.cc.Invoke(ctx, PluginService_UpdatePluginStaticCredentials_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + func (c *pluginServiceClient) NeedsCleanup(ctx context.Context, in *NeedsCleanupRequest, opts ...grpc.CallOption) (*NeedsCleanupResponse, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) out := new(NeedsCleanupResponse) @@ -200,6 +219,16 @@ func (c *pluginServiceClient) Cleanup(ctx context.Context, in *CleanupRequest, o return out, nil } +func (c *pluginServiceClient) CreatePluginOauthToken(ctx context.Context, in *CreatePluginOauthTokenRequest, opts ...grpc.CallOption) (*CreatePluginOauthTokenResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(CreatePluginOauthTokenResponse) + err := c.cc.Invoke(ctx, PluginService_CreatePluginOauthToken_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + // PluginServiceServer is the server API for PluginService service. // All implementations must embed UnimplementedPluginServiceServer // for forward compatibility. @@ -227,11 +256,18 @@ type PluginServiceServer interface { // for.
Only accessible by RoleAdmin and, in the case of Teleport Assist, // RoleProxy. SearchPluginStaticCredentials(context.Context, *SearchPluginStaticCredentialsRequest) (*SearchPluginStaticCredentialsResponse, error) + // UpdatePluginStaticCredentials + UpdatePluginStaticCredentials(context.Context, *UpdatePluginStaticCredentialsRequest) (*UpdatePluginStaticCredentialsResponse, error) // NeedsCleanup will indicate whether a plugin of the given type needs cleanup // before it can be created. NeedsCleanup(context.Context, *NeedsCleanupRequest) (*NeedsCleanupResponse, error) // Cleanup will clean up the resources for the given plugin type. Cleanup(context.Context, *CleanupRequest) (*emptypb.Empty, error) + // CreatePluginOauthToken issues a short-lived OAuth access token for the specified plugin. + // + // This endpoint supports the OAuth 2.0 "client_credentials" grant type, where the plugin + // authenticates using its client ID and client secret. + CreatePluginOauthToken(context.Context, *CreatePluginOauthTokenRequest) (*CreatePluginOauthTokenResponse, error) mustEmbedUnimplementedPluginServiceServer() } @@ -269,12 +305,18 @@ func (UnimplementedPluginServiceServer) GetAvailablePluginTypes(context.Context, func (UnimplementedPluginServiceServer) SearchPluginStaticCredentials(context.Context, *SearchPluginStaticCredentialsRequest) (*SearchPluginStaticCredentialsResponse, error) { return nil, status.Errorf(codes.Unimplemented, "method SearchPluginStaticCredentials not implemented") } +func (UnimplementedPluginServiceServer) UpdatePluginStaticCredentials(context.Context, *UpdatePluginStaticCredentialsRequest) (*UpdatePluginStaticCredentialsResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method UpdatePluginStaticCredentials not implemented") +} func (UnimplementedPluginServiceServer) NeedsCleanup(context.Context, *NeedsCleanupRequest) (*NeedsCleanupResponse, error) { return nil, status.Errorf(codes.Unimplemented, "method NeedsCleanup not implemented") }
func (UnimplementedPluginServiceServer) Cleanup(context.Context, *CleanupRequest) (*emptypb.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method Cleanup not implemented") } +func (UnimplementedPluginServiceServer) CreatePluginOauthToken(context.Context, *CreatePluginOauthTokenRequest) (*CreatePluginOauthTokenResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method CreatePluginOauthToken not implemented") +} func (UnimplementedPluginServiceServer) mustEmbedUnimplementedPluginServiceServer() {} func (UnimplementedPluginServiceServer) testEmbeddedByValue() {} @@ -458,6 +500,24 @@ func _PluginService_SearchPluginStaticCredentials_Handler(srv interface{}, ctx c return interceptor(ctx, in, info, handler) } +func _PluginService_UpdatePluginStaticCredentials_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(UpdatePluginStaticCredentialsRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(PluginServiceServer).UpdatePluginStaticCredentials(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: PluginService_UpdatePluginStaticCredentials_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(PluginServiceServer).UpdatePluginStaticCredentials(ctx, req.(*UpdatePluginStaticCredentialsRequest)) + } + return interceptor(ctx, in, info, handler) +} + func _PluginService_NeedsCleanup_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(NeedsCleanupRequest) if err := dec(in); err != nil { @@ -494,6 +554,24 @@ func _PluginService_Cleanup_Handler(srv interface{}, ctx context.Context, dec fu return interceptor(ctx, in, info, handler) } +func _PluginService_CreatePluginOauthToken_Handler(srv interface{}, ctx context.Context, dec 
func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(CreatePluginOauthTokenRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(PluginServiceServer).CreatePluginOauthToken(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: PluginService_CreatePluginOauthToken_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(PluginServiceServer).CreatePluginOauthToken(ctx, req.(*CreatePluginOauthTokenRequest)) + } + return interceptor(ctx, in, info, handler) +} + // PluginService_ServiceDesc is the grpc.ServiceDesc for PluginService service. // It's only intended for direct use with grpc.RegisterService, // and not to be introspected or modified (even as a copy) @@ -537,6 +615,10 @@ var PluginService_ServiceDesc = grpc.ServiceDesc{ MethodName: "SearchPluginStaticCredentials", Handler: _PluginService_SearchPluginStaticCredentials_Handler, }, + { + MethodName: "UpdatePluginStaticCredentials", + Handler: _PluginService_UpdatePluginStaticCredentials_Handler, + }, { MethodName: "NeedsCleanup", Handler: _PluginService_NeedsCleanup_Handler, @@ -545,6 +627,10 @@ var PluginService_ServiceDesc = grpc.ServiceDesc{ MethodName: "Cleanup", Handler: _PluginService_Cleanup_Handler, }, + { + MethodName: "CreatePluginOauthToken", + Handler: _PluginService_CreatePluginOauthToken_Handler, + }, }, Streams: []grpc.StreamDesc{}, Metadata: "teleport/plugins/v1/plugin_service.proto", diff --git a/api/gen/proto/go/teleport/presence/v1/relay_server.pb.go b/api/gen/proto/go/teleport/presence/v1/relay_server.pb.go new file mode 100644 index 0000000000000..fc8c37877d8a4 --- /dev/null +++ b/api/gen/proto/go/teleport/presence/v1/relay_server.pb.go @@ -0,0 +1,297 @@ +// Copyright 2025 Gravitational, Inc. 
+// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Code generated by protoc-gen-go. DO NOT EDIT. +// versions: +// protoc-gen-go v1.36.8 +// protoc (unknown) +// source: teleport/presence/v1/relay_server.proto + +package presencev1 + +import ( + v1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/header/v1" + protoreflect "google.golang.org/protobuf/reflect/protoreflect" + protoimpl "google.golang.org/protobuf/runtime/protoimpl" + reflect "reflect" + sync "sync" + unsafe "unsafe" +) + +const ( + // Verify that this generated code is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) + // Verify that runtime/protoimpl is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) +) + +// A heartbeat for a relay service; this message serves as both the type used in +// the v1 service and as the canonical v1 storage format (in protojson). +type RelayServer struct { + state protoimpl.MessageState `protogen:"open.v1"` + // fixed string, "relay_server". + Kind string `protobuf:"bytes,1,opt,name=kind,proto3" json:"kind,omitempty"` + // fixed string, "". + SubKind string `protobuf:"bytes,2,opt,name=sub_kind,json=subKind,proto3" json:"sub_kind,omitempty"` + // fixed string, "v1". 
+ Version string `protobuf:"bytes,3,opt,name=version,proto3" json:"version,omitempty"` + Metadata *v1.Metadata `protobuf:"bytes,4,opt,name=metadata,proto3" json:"metadata,omitempty"` + Spec *RelayServer_Spec `protobuf:"bytes,5,opt,name=spec,proto3" json:"spec,omitempty"` + // The advertised scope of the server. A server's scope cannot change once assigned, so + // heartbeats must include a scope value matching the one declared in the hello message. + Scope string `protobuf:"bytes,6,opt,name=scope,proto3" json:"scope,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *RelayServer) Reset() { + *x = RelayServer{} + mi := &file_teleport_presence_v1_relay_server_proto_msgTypes[0] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *RelayServer) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*RelayServer) ProtoMessage() {} + +func (x *RelayServer) ProtoReflect() protoreflect.Message { + mi := &file_teleport_presence_v1_relay_server_proto_msgTypes[0] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use RelayServer.ProtoReflect.Descriptor instead.
+func (*RelayServer) Descriptor() ([]byte, []int) { + return file_teleport_presence_v1_relay_server_proto_rawDescGZIP(), []int{0} +} + +func (x *RelayServer) GetKind() string { + if x != nil { + return x.Kind + } + return "" +} + +func (x *RelayServer) GetSubKind() string { + if x != nil { + return x.SubKind + } + return "" +} + +func (x *RelayServer) GetVersion() string { + if x != nil { + return x.Version + } + return "" +} + +func (x *RelayServer) GetMetadata() *v1.Metadata { + if x != nil { + return x.Metadata + } + return nil +} + +func (x *RelayServer) GetSpec() *RelayServer_Spec { + if x != nil { + return x.Spec + } + return nil +} + +func (x *RelayServer) GetScope() string { + if x != nil { + return x.Scope + } + return "" +} + +// resource spec +type RelayServer_Spec struct { + state protoimpl.MessageState `protogen:"open.v1"` + // host IDs of Proxy Service instances that this server is available on + // through a reverse tunnel + ProxyIds []string `protobuf:"bytes,1,rep,name=proxy_ids,json=proxyIds,proto3" json:"proxy_ids,omitempty"` + // configured hostname (or nodename) of the machine, for troubleshooting and + // debugging + Hostname string `protobuf:"bytes,2,opt,name=hostname,proto3" json:"hostname,omitempty"` + // the name of the Relay group this server belongs to + RelayGroup string `protobuf:"bytes,3,opt,name=relay_group,json=relayGroup,proto3" json:"relay_group,omitempty"` + // address and port that this server is reachable at by other Relay Service + // instances of the same group + PeerAddr string `protobuf:"bytes,4,opt,name=peer_addr,json=peerAddr,proto3" json:"peer_addr,omitempty"` + // random string chosen for the duration of the process, for troubleshooting + // and debugging + Nonce string `protobuf:"bytes,5,opt,name=nonce,proto3" json:"nonce,omitempty"` + // set after the Teleport instance has received a termination signal (but + // hasn't necessarily begun shutting down) + Terminating bool `protobuf:"varint,6,opt,name=terminating,proto3" 
json:"terminating,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *RelayServer_Spec) Reset() { + *x = RelayServer_Spec{} + mi := &file_teleport_presence_v1_relay_server_proto_msgTypes[1] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *RelayServer_Spec) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*RelayServer_Spec) ProtoMessage() {} + +func (x *RelayServer_Spec) ProtoReflect() protoreflect.Message { + mi := &file_teleport_presence_v1_relay_server_proto_msgTypes[1] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use RelayServer_Spec.ProtoReflect.Descriptor instead. +func (*RelayServer_Spec) Descriptor() ([]byte, []int) { + return file_teleport_presence_v1_relay_server_proto_rawDescGZIP(), []int{0, 0} +} + +func (x *RelayServer_Spec) GetProxyIds() []string { + if x != nil { + return x.ProxyIds + } + return nil +} + +func (x *RelayServer_Spec) GetHostname() string { + if x != nil { + return x.Hostname + } + return "" +} + +func (x *RelayServer_Spec) GetRelayGroup() string { + if x != nil { + return x.RelayGroup + } + return "" +} + +func (x *RelayServer_Spec) GetPeerAddr() string { + if x != nil { + return x.PeerAddr + } + return "" +} + +func (x *RelayServer_Spec) GetNonce() string { + if x != nil { + return x.Nonce + } + return "" +} + +func (x *RelayServer_Spec) GetTerminating() bool { + if x != nil { + return x.Terminating + } + return false +} + +var File_teleport_presence_v1_relay_server_proto protoreflect.FileDescriptor + +const file_teleport_presence_v1_relay_server_proto_rawDesc = "" + + "\n" + + "'teleport/presence/v1/relay_server.proto\x12\x14teleport.presence.v1\x1a!teleport/header/v1/metadata.proto\"\x9a\x03\n" + + "\vRelayServer\x12\x12\n" + + "\x04kind\x18\x01 
\x01(\tR\x04kind\x12\x19\n" + + "\bsub_kind\x18\x02 \x01(\tR\asubKind\x12\x18\n" + + "\aversion\x18\x03 \x01(\tR\aversion\x128\n" + + "\bmetadata\x18\x04 \x01(\v2\x1c.teleport.header.v1.MetadataR\bmetadata\x12:\n" + + "\x04spec\x18\x05 \x01(\v2&.teleport.presence.v1.RelayServer.SpecR\x04spec\x12\x14\n" + + "\x05scope\x18\x06 \x01(\tR\x05scope\x1a\xb5\x01\n" + + "\x04Spec\x12\x1b\n" + + "\tproxy_ids\x18\x01 \x03(\tR\bproxyIds\x12\x1a\n" + + "\bhostname\x18\x02 \x01(\tR\bhostname\x12\x1f\n" + + "\vrelay_group\x18\x03 \x01(\tR\n" + + "relayGroup\x12\x1b\n" + + "\tpeer_addr\x18\x04 \x01(\tR\bpeerAddr\x12\x14\n" + + "\x05nonce\x18\x05 \x01(\tR\x05nonce\x12 \n" + + "\vterminating\x18\x06 \x01(\bR\vterminatingBTZRgithub.com/gravitational/teleport/api/gen/proto/go/teleport/presence/v1;presencev1b\x06proto3" + +var ( + file_teleport_presence_v1_relay_server_proto_rawDescOnce sync.Once + file_teleport_presence_v1_relay_server_proto_rawDescData []byte +) + +func file_teleport_presence_v1_relay_server_proto_rawDescGZIP() []byte { + file_teleport_presence_v1_relay_server_proto_rawDescOnce.Do(func() { + file_teleport_presence_v1_relay_server_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_teleport_presence_v1_relay_server_proto_rawDesc), len(file_teleport_presence_v1_relay_server_proto_rawDesc))) + }) + return file_teleport_presence_v1_relay_server_proto_rawDescData +} + +var file_teleport_presence_v1_relay_server_proto_msgTypes = make([]protoimpl.MessageInfo, 2) +var file_teleport_presence_v1_relay_server_proto_goTypes = []any{ + (*RelayServer)(nil), // 0: teleport.presence.v1.RelayServer + (*RelayServer_Spec)(nil), // 1: teleport.presence.v1.RelayServer.Spec + (*v1.Metadata)(nil), // 2: teleport.header.v1.Metadata +} +var file_teleport_presence_v1_relay_server_proto_depIdxs = []int32{ + 2, // 0: teleport.presence.v1.RelayServer.metadata:type_name -> teleport.header.v1.Metadata + 1, // 1: teleport.presence.v1.RelayServer.spec:type_name -> 
teleport.presence.v1.RelayServer.Spec + 2, // [2:2] is the sub-list for method output_type + 2, // [2:2] is the sub-list for method input_type + 2, // [2:2] is the sub-list for extension type_name + 2, // [2:2] is the sub-list for extension extendee + 0, // [0:2] is the sub-list for field type_name +} + +func init() { file_teleport_presence_v1_relay_server_proto_init() } +func file_teleport_presence_v1_relay_server_proto_init() { + if File_teleport_presence_v1_relay_server_proto != nil { + return + } + type x struct{} + out := protoimpl.TypeBuilder{ + File: protoimpl.DescBuilder{ + GoPackagePath: reflect.TypeOf(x{}).PkgPath(), + RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_presence_v1_relay_server_proto_rawDesc), len(file_teleport_presence_v1_relay_server_proto_rawDesc)), + NumEnums: 0, + NumMessages: 2, + NumExtensions: 0, + NumServices: 0, + }, + GoTypes: file_teleport_presence_v1_relay_server_proto_goTypes, + DependencyIndexes: file_teleport_presence_v1_relay_server_proto_depIdxs, + MessageInfos: file_teleport_presence_v1_relay_server_proto_msgTypes, + }.Build() + File_teleport_presence_v1_relay_server_proto = out.File + file_teleport_presence_v1_relay_server_proto_goTypes = nil + file_teleport_presence_v1_relay_server_proto_depIdxs = nil +} diff --git a/api/gen/proto/go/teleport/presence/v1/service.pb.go b/api/gen/proto/go/teleport/presence/v1/service.pb.go index 0bf6e894f5374..2cef7822e3061 100644 --- a/api/gen/proto/go/teleport/presence/v1/service.pb.go +++ b/api/gen/proto/go/teleport/presence/v1/service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/presence/v1/service.proto @@ -504,11 +504,519 @@ func (x *DeleteReverseTunnelRequest) GetName() string { return "" } +// Request message for the PresenceService.GetRelayServer rpc. 
+type GetRelayServerRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetRelayServerRequest) Reset() { + *x = GetRelayServerRequest{} + mi := &file_teleport_presence_v1_service_proto_msgTypes[9] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetRelayServerRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetRelayServerRequest) ProtoMessage() {} + +func (x *GetRelayServerRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_presence_v1_service_proto_msgTypes[9] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetRelayServerRequest.ProtoReflect.Descriptor instead. +func (*GetRelayServerRequest) Descriptor() ([]byte, []int) { + return file_teleport_presence_v1_service_proto_rawDescGZIP(), []int{9} +} + +func (x *GetRelayServerRequest) GetName() string { + if x != nil { + return x.Name + } + return "" +} + +// Response message for the PresenceService.GetRelayServer rpc. 
+type GetRelayServerResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + RelayServer *RelayServer `protobuf:"bytes,1,opt,name=relay_server,json=relayServer,proto3" json:"relay_server,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetRelayServerResponse) Reset() { + *x = GetRelayServerResponse{} + mi := &file_teleport_presence_v1_service_proto_msgTypes[10] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetRelayServerResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetRelayServerResponse) ProtoMessage() {} + +func (x *GetRelayServerResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_presence_v1_service_proto_msgTypes[10] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetRelayServerResponse.ProtoReflect.Descriptor instead. +func (*GetRelayServerResponse) Descriptor() ([]byte, []int) { + return file_teleport_presence_v1_service_proto_rawDescGZIP(), []int{10} +} + +func (x *GetRelayServerResponse) GetRelayServer() *RelayServer { + if x != nil { + return x.RelayServer + } + return nil +} + +// Request message for the PresenceService.ListRelayServers rpc. +type ListRelayServersRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The maximum number of items to return. The service may return fewer than + // this value. If unspecified, the service will use a sensible default. + PageSize int64 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // A pagination token returned from a previous request. If empty, the request + // will return the first page. 
+ PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListRelayServersRequest) Reset() { + *x = ListRelayServersRequest{} + mi := &file_teleport_presence_v1_service_proto_msgTypes[11] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListRelayServersRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListRelayServersRequest) ProtoMessage() {} + +func (x *ListRelayServersRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_presence_v1_service_proto_msgTypes[11] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListRelayServersRequest.ProtoReflect.Descriptor instead. +func (*ListRelayServersRequest) Descriptor() ([]byte, []int) { + return file_teleport_presence_v1_service_proto_rawDescGZIP(), []int{11} +} + +func (x *ListRelayServersRequest) GetPageSize() int64 { + if x != nil { + return x.PageSize + } + return 0 +} + +func (x *ListRelayServersRequest) GetPageToken() string { + if x != nil { + return x.PageToken + } + return "" +} + +// Response message for the PresenceService.ListRelayServers rpc. +type ListRelayServersResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + Relays []*RelayServer `protobuf:"bytes,1,rep,name=relays,proto3" json:"relays,omitempty"` + // A token that can be sent as the page_token to retrieve the next page. If + // this field is empty, there are no more pages. 
+ NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListRelayServersResponse) Reset() { + *x = ListRelayServersResponse{} + mi := &file_teleport_presence_v1_service_proto_msgTypes[12] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListRelayServersResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListRelayServersResponse) ProtoMessage() {} + +func (x *ListRelayServersResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_presence_v1_service_proto_msgTypes[12] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListRelayServersResponse.ProtoReflect.Descriptor instead. +func (*ListRelayServersResponse) Descriptor() ([]byte, []int) { + return file_teleport_presence_v1_service_proto_rawDescGZIP(), []int{12} +} + +func (x *ListRelayServersResponse) GetRelays() []*RelayServer { + if x != nil { + return x.Relays + } + return nil +} + +func (x *ListRelayServersResponse) GetNextPageToken() string { + if x != nil { + return x.NextPageToken + } + return "" +} + +// Request message for the PresenceService.DeleteRelayServer rpc. 
+type DeleteRelayServerRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *DeleteRelayServerRequest) Reset() { + *x = DeleteRelayServerRequest{} + mi := &file_teleport_presence_v1_service_proto_msgTypes[13] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *DeleteRelayServerRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*DeleteRelayServerRequest) ProtoMessage() {} + +func (x *DeleteRelayServerRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_presence_v1_service_proto_msgTypes[13] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use DeleteRelayServerRequest.ProtoReflect.Descriptor instead. +func (*DeleteRelayServerRequest) Descriptor() ([]byte, []int) { + return file_teleport_presence_v1_service_proto_rawDescGZIP(), []int{13} +} + +func (x *DeleteRelayServerRequest) GetName() string { + if x != nil { + return x.Name + } + return "" +} + +// Response message for the PresenceService.DeleteRelayServer rpc. 
+type DeleteRelayServerResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *DeleteRelayServerResponse) Reset() { + *x = DeleteRelayServerResponse{} + mi := &file_teleport_presence_v1_service_proto_msgTypes[14] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *DeleteRelayServerResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*DeleteRelayServerResponse) ProtoMessage() {} + +func (x *DeleteRelayServerResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_presence_v1_service_proto_msgTypes[14] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use DeleteRelayServerResponse.ProtoReflect.Descriptor instead. +func (*DeleteRelayServerResponse) Descriptor() ([]byte, []int) { + return file_teleport_presence_v1_service_proto_rawDescGZIP(), []int{14} +} + +// Request message for the PresenceService.ListAuthServers rpc. +type ListAuthServersRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The maximum number of items to return. + // The server may impose a different page size at its discretion. + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // The next_page_token value returned from a previous List request, if any. 
+ PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListAuthServersRequest) Reset() { + *x = ListAuthServersRequest{} + mi := &file_teleport_presence_v1_service_proto_msgTypes[15] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListAuthServersRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListAuthServersRequest) ProtoMessage() {} + +func (x *ListAuthServersRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_presence_v1_service_proto_msgTypes[15] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListAuthServersRequest.ProtoReflect.Descriptor instead. +func (*ListAuthServersRequest) Descriptor() ([]byte, []int) { + return file_teleport_presence_v1_service_proto_rawDescGZIP(), []int{15} +} + +func (x *ListAuthServersRequest) GetPageSize() int32 { + if x != nil { + return x.PageSize + } + return 0 +} + +func (x *ListAuthServersRequest) GetPageToken() string { + if x != nil { + return x.PageToken + } + return "" +} + +// Response message for the PresenceService.ListAuthServers rpc. +type ListAuthServersResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // A list of auth server resources. + Servers []*types.ServerV2 `protobuf:"bytes,1,rep,name=servers,proto3" json:"servers,omitempty"` + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. 
+ NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListAuthServersResponse) Reset() { + *x = ListAuthServersResponse{} + mi := &file_teleport_presence_v1_service_proto_msgTypes[16] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListAuthServersResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListAuthServersResponse) ProtoMessage() {} + +func (x *ListAuthServersResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_presence_v1_service_proto_msgTypes[16] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListAuthServersResponse.ProtoReflect.Descriptor instead. +func (*ListAuthServersResponse) Descriptor() ([]byte, []int) { + return file_teleport_presence_v1_service_proto_rawDescGZIP(), []int{16} +} + +func (x *ListAuthServersResponse) GetServers() []*types.ServerV2 { + if x != nil { + return x.Servers + } + return nil +} + +func (x *ListAuthServersResponse) GetNextPageToken() string { + if x != nil { + return x.NextPageToken + } + return "" +} + +// Request message for the PresenceService.ListProxyServers rpc. +type ListProxyServersRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The maximum number of items to return. + // The server may impose a different page size at its discretion. + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // The next_page_token value returned from a previous List request, if any. 
+ PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListProxyServersRequest) Reset() { + *x = ListProxyServersRequest{} + mi := &file_teleport_presence_v1_service_proto_msgTypes[17] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListProxyServersRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListProxyServersRequest) ProtoMessage() {} + +func (x *ListProxyServersRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_presence_v1_service_proto_msgTypes[17] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListProxyServersRequest.ProtoReflect.Descriptor instead. +func (*ListProxyServersRequest) Descriptor() ([]byte, []int) { + return file_teleport_presence_v1_service_proto_rawDescGZIP(), []int{17} +} + +func (x *ListProxyServersRequest) GetPageSize() int32 { + if x != nil { + return x.PageSize + } + return 0 +} + +func (x *ListProxyServersRequest) GetPageToken() string { + if x != nil { + return x.PageToken + } + return "" +} + +// Response message for the PresenceService.ListProxyServers rpc. +type ListProxyServersResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // A list of proxy server resources. + Servers []*types.ServerV2 `protobuf:"bytes,1,rep,name=servers,proto3" json:"servers,omitempty"` + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. 
+ NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListProxyServersResponse) Reset() { + *x = ListProxyServersResponse{} + mi := &file_teleport_presence_v1_service_proto_msgTypes[18] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListProxyServersResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListProxyServersResponse) ProtoMessage() {} + +func (x *ListProxyServersResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_presence_v1_service_proto_msgTypes[18] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListProxyServersResponse.ProtoReflect.Descriptor instead. +func (*ListProxyServersResponse) Descriptor() ([]byte, []int) { + return file_teleport_presence_v1_service_proto_rawDescGZIP(), []int{18} +} + +func (x *ListProxyServersResponse) GetServers() []*types.ServerV2 { + if x != nil { + return x.Servers + } + return nil +} + +func (x *ListProxyServersResponse) GetNextPageToken() string { + if x != nil { + return x.NextPageToken + } + return "" +} + var File_teleport_presence_v1_service_proto protoreflect.FileDescriptor const file_teleport_presence_v1_service_proto_rawDesc = "" + "\n" + - "\"teleport/presence/v1/service.proto\x12\x14teleport.presence.v1\x1a\x1bgoogle/protobuf/empty.proto\x1a google/protobuf/field_mask.proto\x1a!teleport/legacy/types/types.proto\"-\n" + + "\"teleport/presence/v1/service.proto\x12\x14teleport.presence.v1\x1a\x1bgoogle/protobuf/empty.proto\x1a google/protobuf/field_mask.proto\x1a!teleport/legacy/types/types.proto\x1a'teleport/presence/v1/relay_server.proto\"-\n" + "\x17GetRemoteClusterRequest\x12\x12\n" + "\x04name\x18\x01 
\x01(\tR\x04name\"W\n" + "\x19ListRemoteClustersRequest\x12\x1b\n" + @@ -534,7 +1042,36 @@ const file_teleport_presence_v1_service_proto_rawDesc = "" + "\x1aUpsertReverseTunnelRequest\x12=\n" + "\x0ereverse_tunnel\x18\x01 \x01(\v2\x16.types.ReverseTunnelV2R\rreverseTunnel\"0\n" + "\x1aDeleteReverseTunnelRequest\x12\x12\n" + - "\x04name\x18\x01 \x01(\tR\x04name2\xe2\x05\n" + + "\x04name\x18\x01 \x01(\tR\x04name\"+\n" + + "\x15GetRelayServerRequest\x12\x12\n" + + "\x04name\x18\x01 \x01(\tR\x04name\"^\n" + + "\x16GetRelayServerResponse\x12D\n" + + "\frelay_server\x18\x01 \x01(\v2!.teleport.presence.v1.RelayServerR\vrelayServer\"U\n" + + "\x17ListRelayServersRequest\x12\x1b\n" + + "\tpage_size\x18\x01 \x01(\x03R\bpageSize\x12\x1d\n" + + "\n" + + "page_token\x18\x02 \x01(\tR\tpageToken\"}\n" + + "\x18ListRelayServersResponse\x129\n" + + "\x06relays\x18\x01 \x03(\v2!.teleport.presence.v1.RelayServerR\x06relays\x12&\n" + + "\x0fnext_page_token\x18\x02 \x01(\tR\rnextPageToken\".\n" + + "\x18DeleteRelayServerRequest\x12\x12\n" + + "\x04name\x18\x01 \x01(\tR\x04name\"\x1b\n" + + "\x19DeleteRelayServerResponse\"T\n" + + "\x16ListAuthServersRequest\x12\x1b\n" + + "\tpage_size\x18\x01 \x01(\x05R\bpageSize\x12\x1d\n" + + "\n" + + "page_token\x18\x02 \x01(\tR\tpageToken\"l\n" + + "\x17ListAuthServersResponse\x12)\n" + + "\aservers\x18\x01 \x03(\v2\x0f.types.ServerV2R\aservers\x12&\n" + + "\x0fnext_page_token\x18\x02 \x01(\tR\rnextPageToken\"U\n" + + "\x17ListProxyServersRequest\x12\x1b\n" + + "\tpage_size\x18\x01 \x01(\x05R\bpageSize\x12\x1d\n" + + "\n" + + "page_token\x18\x02 \x01(\tR\tpageToken\"m\n" + + "\x18ListProxyServersResponse\x12)\n" + + "\aservers\x18\x01 \x03(\v2\x0f.types.ServerV2R\aservers\x12&\n" + + "\x0fnext_page_token\x18\x02 \x01(\tR\rnextPageToken2\x9b\n" + + "\n" + "\x0fPresenceService\x12Y\n" + "\x10GetRemoteCluster\x12-.teleport.presence.v1.GetRemoteClusterRequest\x1a\x16.types.RemoteClusterV3\x12w\n" + 
"\x12ListRemoteClusters\x12/.teleport.presence.v1.ListRemoteClustersRequest\x1a0.teleport.presence.v1.ListRemoteClustersResponse\x12_\n" + @@ -542,7 +1079,12 @@ const file_teleport_presence_v1_service_proto_rawDesc = "" + "\x13DeleteRemoteCluster\x120.teleport.presence.v1.DeleteRemoteClusterRequest\x1a\x16.google.protobuf.Empty\x12w\n" + "\x12ListReverseTunnels\x12/.teleport.presence.v1.ListReverseTunnelsRequest\x1a0.teleport.presence.v1.ListReverseTunnelsResponse\x12_\n" + "\x13UpsertReverseTunnel\x120.teleport.presence.v1.UpsertReverseTunnelRequest\x1a\x16.types.ReverseTunnelV2\x12_\n" + - "\x13DeleteReverseTunnel\x120.teleport.presence.v1.DeleteReverseTunnelRequest\x1a\x16.google.protobuf.EmptyBTZRgithub.com/gravitational/teleport/api/gen/proto/go/teleport/presence/v1;presencev1b\x06proto3" + "\x13DeleteReverseTunnel\x120.teleport.presence.v1.DeleteReverseTunnelRequest\x1a\x16.google.protobuf.Empty\x12k\n" + + "\x0eGetRelayServer\x12+.teleport.presence.v1.GetRelayServerRequest\x1a,.teleport.presence.v1.GetRelayServerResponse\x12q\n" + + "\x10ListRelayServers\x12-.teleport.presence.v1.ListRelayServersRequest\x1a..teleport.presence.v1.ListRelayServersResponse\x12t\n" + + "\x11DeleteRelayServer\x12..teleport.presence.v1.DeleteRelayServerRequest\x1a/.teleport.presence.v1.DeleteRelayServerResponse\x12n\n" + + "\x0fListAuthServers\x12,.teleport.presence.v1.ListAuthServersRequest\x1a-.teleport.presence.v1.ListAuthServersResponse\x12q\n" + + "\x10ListProxyServers\x12-.teleport.presence.v1.ListProxyServersRequest\x1a..teleport.presence.v1.ListProxyServersResponseBTZRgithub.com/gravitational/teleport/api/gen/proto/go/teleport/presence/v1;presencev1b\x06proto3" var ( file_teleport_presence_v1_service_proto_rawDescOnce sync.Once @@ -556,7 +1098,7 @@ func file_teleport_presence_v1_service_proto_rawDescGZIP() []byte { return file_teleport_presence_v1_service_proto_rawDescData } -var file_teleport_presence_v1_service_proto_msgTypes = make([]protoimpl.MessageInfo, 9) +var 
file_teleport_presence_v1_service_proto_msgTypes = make([]protoimpl.MessageInfo, 19) var file_teleport_presence_v1_service_proto_goTypes = []any{ (*GetRemoteClusterRequest)(nil), // 0: teleport.presence.v1.GetRemoteClusterRequest (*ListRemoteClustersRequest)(nil), // 1: teleport.presence.v1.ListRemoteClustersRequest @@ -567,36 +1109,62 @@ var file_teleport_presence_v1_service_proto_goTypes = []any{ (*ListReverseTunnelsResponse)(nil), // 6: teleport.presence.v1.ListReverseTunnelsResponse (*UpsertReverseTunnelRequest)(nil), // 7: teleport.presence.v1.UpsertReverseTunnelRequest (*DeleteReverseTunnelRequest)(nil), // 8: teleport.presence.v1.DeleteReverseTunnelRequest - (*types.RemoteClusterV3)(nil), // 9: types.RemoteClusterV3 - (*fieldmaskpb.FieldMask)(nil), // 10: google.protobuf.FieldMask - (*types.ReverseTunnelV2)(nil), // 11: types.ReverseTunnelV2 - (*emptypb.Empty)(nil), // 12: google.protobuf.Empty + (*GetRelayServerRequest)(nil), // 9: teleport.presence.v1.GetRelayServerRequest + (*GetRelayServerResponse)(nil), // 10: teleport.presence.v1.GetRelayServerResponse + (*ListRelayServersRequest)(nil), // 11: teleport.presence.v1.ListRelayServersRequest + (*ListRelayServersResponse)(nil), // 12: teleport.presence.v1.ListRelayServersResponse + (*DeleteRelayServerRequest)(nil), // 13: teleport.presence.v1.DeleteRelayServerRequest + (*DeleteRelayServerResponse)(nil), // 14: teleport.presence.v1.DeleteRelayServerResponse + (*ListAuthServersRequest)(nil), // 15: teleport.presence.v1.ListAuthServersRequest + (*ListAuthServersResponse)(nil), // 16: teleport.presence.v1.ListAuthServersResponse + (*ListProxyServersRequest)(nil), // 17: teleport.presence.v1.ListProxyServersRequest + (*ListProxyServersResponse)(nil), // 18: teleport.presence.v1.ListProxyServersResponse + (*types.RemoteClusterV3)(nil), // 19: types.RemoteClusterV3 + (*fieldmaskpb.FieldMask)(nil), // 20: google.protobuf.FieldMask + (*types.ReverseTunnelV2)(nil), // 21: types.ReverseTunnelV2 + (*RelayServer)(nil), 
// 22: teleport.presence.v1.RelayServer + (*types.ServerV2)(nil), // 23: types.ServerV2 + (*emptypb.Empty)(nil), // 24: google.protobuf.Empty } var file_teleport_presence_v1_service_proto_depIdxs = []int32{ - 9, // 0: teleport.presence.v1.ListRemoteClustersResponse.remote_clusters:type_name -> types.RemoteClusterV3 - 9, // 1: teleport.presence.v1.UpdateRemoteClusterRequest.remote_cluster:type_name -> types.RemoteClusterV3 - 10, // 2: teleport.presence.v1.UpdateRemoteClusterRequest.update_mask:type_name -> google.protobuf.FieldMask - 11, // 3: teleport.presence.v1.ListReverseTunnelsResponse.reverse_tunnels:type_name -> types.ReverseTunnelV2 - 11, // 4: teleport.presence.v1.UpsertReverseTunnelRequest.reverse_tunnel:type_name -> types.ReverseTunnelV2 - 0, // 5: teleport.presence.v1.PresenceService.GetRemoteCluster:input_type -> teleport.presence.v1.GetRemoteClusterRequest - 1, // 6: teleport.presence.v1.PresenceService.ListRemoteClusters:input_type -> teleport.presence.v1.ListRemoteClustersRequest - 3, // 7: teleport.presence.v1.PresenceService.UpdateRemoteCluster:input_type -> teleport.presence.v1.UpdateRemoteClusterRequest - 4, // 8: teleport.presence.v1.PresenceService.DeleteRemoteCluster:input_type -> teleport.presence.v1.DeleteRemoteClusterRequest - 5, // 9: teleport.presence.v1.PresenceService.ListReverseTunnels:input_type -> teleport.presence.v1.ListReverseTunnelsRequest - 7, // 10: teleport.presence.v1.PresenceService.UpsertReverseTunnel:input_type -> teleport.presence.v1.UpsertReverseTunnelRequest - 8, // 11: teleport.presence.v1.PresenceService.DeleteReverseTunnel:input_type -> teleport.presence.v1.DeleteReverseTunnelRequest - 9, // 12: teleport.presence.v1.PresenceService.GetRemoteCluster:output_type -> types.RemoteClusterV3 - 2, // 13: teleport.presence.v1.PresenceService.ListRemoteClusters:output_type -> teleport.presence.v1.ListRemoteClustersResponse - 9, // 14: teleport.presence.v1.PresenceService.UpdateRemoteCluster:output_type -> types.RemoteClusterV3 
- 12, // 15: teleport.presence.v1.PresenceService.DeleteRemoteCluster:output_type -> google.protobuf.Empty - 6, // 16: teleport.presence.v1.PresenceService.ListReverseTunnels:output_type -> teleport.presence.v1.ListReverseTunnelsResponse - 11, // 17: teleport.presence.v1.PresenceService.UpsertReverseTunnel:output_type -> types.ReverseTunnelV2 - 12, // 18: teleport.presence.v1.PresenceService.DeleteReverseTunnel:output_type -> google.protobuf.Empty - 12, // [12:19] is the sub-list for method output_type - 5, // [5:12] is the sub-list for method input_type - 5, // [5:5] is the sub-list for extension type_name - 5, // [5:5] is the sub-list for extension extendee - 0, // [0:5] is the sub-list for field type_name + 19, // 0: teleport.presence.v1.ListRemoteClustersResponse.remote_clusters:type_name -> types.RemoteClusterV3 + 19, // 1: teleport.presence.v1.UpdateRemoteClusterRequest.remote_cluster:type_name -> types.RemoteClusterV3 + 20, // 2: teleport.presence.v1.UpdateRemoteClusterRequest.update_mask:type_name -> google.protobuf.FieldMask + 21, // 3: teleport.presence.v1.ListReverseTunnelsResponse.reverse_tunnels:type_name -> types.ReverseTunnelV2 + 21, // 4: teleport.presence.v1.UpsertReverseTunnelRequest.reverse_tunnel:type_name -> types.ReverseTunnelV2 + 22, // 5: teleport.presence.v1.GetRelayServerResponse.relay_server:type_name -> teleport.presence.v1.RelayServer + 22, // 6: teleport.presence.v1.ListRelayServersResponse.relays:type_name -> teleport.presence.v1.RelayServer + 23, // 7: teleport.presence.v1.ListAuthServersResponse.servers:type_name -> types.ServerV2 + 23, // 8: teleport.presence.v1.ListProxyServersResponse.servers:type_name -> types.ServerV2 + 0, // 9: teleport.presence.v1.PresenceService.GetRemoteCluster:input_type -> teleport.presence.v1.GetRemoteClusterRequest + 1, // 10: teleport.presence.v1.PresenceService.ListRemoteClusters:input_type -> teleport.presence.v1.ListRemoteClustersRequest + 3, // 11: 
teleport.presence.v1.PresenceService.UpdateRemoteCluster:input_type -> teleport.presence.v1.UpdateRemoteClusterRequest + 4, // 12: teleport.presence.v1.PresenceService.DeleteRemoteCluster:input_type -> teleport.presence.v1.DeleteRemoteClusterRequest + 5, // 13: teleport.presence.v1.PresenceService.ListReverseTunnels:input_type -> teleport.presence.v1.ListReverseTunnelsRequest + 7, // 14: teleport.presence.v1.PresenceService.UpsertReverseTunnel:input_type -> teleport.presence.v1.UpsertReverseTunnelRequest + 8, // 15: teleport.presence.v1.PresenceService.DeleteReverseTunnel:input_type -> teleport.presence.v1.DeleteReverseTunnelRequest + 9, // 16: teleport.presence.v1.PresenceService.GetRelayServer:input_type -> teleport.presence.v1.GetRelayServerRequest + 11, // 17: teleport.presence.v1.PresenceService.ListRelayServers:input_type -> teleport.presence.v1.ListRelayServersRequest + 13, // 18: teleport.presence.v1.PresenceService.DeleteRelayServer:input_type -> teleport.presence.v1.DeleteRelayServerRequest + 15, // 19: teleport.presence.v1.PresenceService.ListAuthServers:input_type -> teleport.presence.v1.ListAuthServersRequest + 17, // 20: teleport.presence.v1.PresenceService.ListProxyServers:input_type -> teleport.presence.v1.ListProxyServersRequest + 19, // 21: teleport.presence.v1.PresenceService.GetRemoteCluster:output_type -> types.RemoteClusterV3 + 2, // 22: teleport.presence.v1.PresenceService.ListRemoteClusters:output_type -> teleport.presence.v1.ListRemoteClustersResponse + 19, // 23: teleport.presence.v1.PresenceService.UpdateRemoteCluster:output_type -> types.RemoteClusterV3 + 24, // 24: teleport.presence.v1.PresenceService.DeleteRemoteCluster:output_type -> google.protobuf.Empty + 6, // 25: teleport.presence.v1.PresenceService.ListReverseTunnels:output_type -> teleport.presence.v1.ListReverseTunnelsResponse + 21, // 26: teleport.presence.v1.PresenceService.UpsertReverseTunnel:output_type -> types.ReverseTunnelV2 + 24, // 27: 
teleport.presence.v1.PresenceService.DeleteReverseTunnel:output_type -> google.protobuf.Empty + 10, // 28: teleport.presence.v1.PresenceService.GetRelayServer:output_type -> teleport.presence.v1.GetRelayServerResponse + 12, // 29: teleport.presence.v1.PresenceService.ListRelayServers:output_type -> teleport.presence.v1.ListRelayServersResponse + 14, // 30: teleport.presence.v1.PresenceService.DeleteRelayServer:output_type -> teleport.presence.v1.DeleteRelayServerResponse + 16, // 31: teleport.presence.v1.PresenceService.ListAuthServers:output_type -> teleport.presence.v1.ListAuthServersResponse + 18, // 32: teleport.presence.v1.PresenceService.ListProxyServers:output_type -> teleport.presence.v1.ListProxyServersResponse + 21, // [21:33] is the sub-list for method output_type + 9, // [9:21] is the sub-list for method input_type + 9, // [9:9] is the sub-list for extension type_name + 9, // [9:9] is the sub-list for extension extendee + 0, // [0:9] is the sub-list for field type_name } func init() { file_teleport_presence_v1_service_proto_init() } @@ -604,13 +1172,14 @@ func file_teleport_presence_v1_service_proto_init() { if File_teleport_presence_v1_service_proto != nil { return } + file_teleport_presence_v1_relay_server_proto_init() type x struct{} out := protoimpl.TypeBuilder{ File: protoimpl.DescBuilder{ GoPackagePath: reflect.TypeOf(x{}).PkgPath(), RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_presence_v1_service_proto_rawDesc), len(file_teleport_presence_v1_service_proto_rawDesc)), NumEnums: 0, - NumMessages: 9, + NumMessages: 19, NumExtensions: 0, NumServices: 1, }, diff --git a/api/gen/proto/go/teleport/presence/v1/service_grpc.pb.go b/api/gen/proto/go/teleport/presence/v1/service_grpc.pb.go index ac6ead13f60db..fa595702a3f14 100644 --- a/api/gen/proto/go/teleport/presence/v1/service_grpc.pb.go +++ b/api/gen/proto/go/teleport/presence/v1/service_grpc.pb.go @@ -42,6 +42,11 @@ const ( PresenceService_ListReverseTunnels_FullMethodName = 
"/teleport.presence.v1.PresenceService/ListReverseTunnels" PresenceService_UpsertReverseTunnel_FullMethodName = "/teleport.presence.v1.PresenceService/UpsertReverseTunnel" PresenceService_DeleteReverseTunnel_FullMethodName = "/teleport.presence.v1.PresenceService/DeleteReverseTunnel" + PresenceService_GetRelayServer_FullMethodName = "/teleport.presence.v1.PresenceService/GetRelayServer" + PresenceService_ListRelayServers_FullMethodName = "/teleport.presence.v1.PresenceService/ListRelayServers" + PresenceService_DeleteRelayServer_FullMethodName = "/teleport.presence.v1.PresenceService/DeleteRelayServer" + PresenceService_ListAuthServers_FullMethodName = "/teleport.presence.v1.PresenceService/ListAuthServers" + PresenceService_ListProxyServers_FullMethodName = "/teleport.presence.v1.PresenceService/ListProxyServers" ) // PresenceServiceClient is the client API for PresenceService service. @@ -64,6 +69,16 @@ type PresenceServiceClient interface { UpsertReverseTunnel(ctx context.Context, in *UpsertReverseTunnelRequest, opts ...grpc.CallOption) (*types.ReverseTunnelV2, error) // DeleteReverseTunnel removes an existing ReverseTunnel by name. DeleteReverseTunnel(ctx context.Context, in *DeleteReverseTunnelRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) + // GetRelayServer returns a single relay_server by name. + GetRelayServer(ctx context.Context, in *GetRelayServerRequest, opts ...grpc.CallOption) (*GetRelayServerResponse, error) + // ListRelayServers returns a page of relay_server resources. + ListRelayServers(ctx context.Context, in *ListRelayServersRequest, opts ...grpc.CallOption) (*ListRelayServersResponse, error) + // DeleteRelayServer deletes a relay_server resource by name. + DeleteRelayServer(ctx context.Context, in *DeleteRelayServerRequest, opts ...grpc.CallOption) (*DeleteRelayServerResponse, error) + // ListAuthServers returns a page of Auth servers. 
+ ListAuthServers(ctx context.Context, in *ListAuthServersRequest, opts ...grpc.CallOption) (*ListAuthServersResponse, error) + // ListProxyServers returns a page of Proxy servers. + ListProxyServers(ctx context.Context, in *ListProxyServersRequest, opts ...grpc.CallOption) (*ListProxyServersResponse, error) } type presenceServiceClient struct { @@ -144,6 +159,56 @@ func (c *presenceServiceClient) DeleteReverseTunnel(ctx context.Context, in *Del return out, nil } +func (c *presenceServiceClient) GetRelayServer(ctx context.Context, in *GetRelayServerRequest, opts ...grpc.CallOption) (*GetRelayServerResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(GetRelayServerResponse) + err := c.cc.Invoke(ctx, PresenceService_GetRelayServer_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *presenceServiceClient) ListRelayServers(ctx context.Context, in *ListRelayServersRequest, opts ...grpc.CallOption) (*ListRelayServersResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListRelayServersResponse) + err := c.cc.Invoke(ctx, PresenceService_ListRelayServers_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *presenceServiceClient) DeleteRelayServer(ctx context.Context, in *DeleteRelayServerRequest, opts ...grpc.CallOption) (*DeleteRelayServerResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(DeleteRelayServerResponse) + err := c.cc.Invoke(ctx, PresenceService_DeleteRelayServer_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *presenceServiceClient) ListAuthServers(ctx context.Context, in *ListAuthServersRequest, opts ...grpc.CallOption) (*ListAuthServersResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) 
+ out := new(ListAuthServersResponse) + err := c.cc.Invoke(ctx, PresenceService_ListAuthServers_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *presenceServiceClient) ListProxyServers(ctx context.Context, in *ListProxyServersRequest, opts ...grpc.CallOption) (*ListProxyServersResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListProxyServersResponse) + err := c.cc.Invoke(ctx, PresenceService_ListProxyServers_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + // PresenceServiceServer is the server API for PresenceService service. // All implementations must embed UnimplementedPresenceServiceServer // for forward compatibility. @@ -164,6 +229,16 @@ type PresenceServiceServer interface { UpsertReverseTunnel(context.Context, *UpsertReverseTunnelRequest) (*types.ReverseTunnelV2, error) // DeleteReverseTunnel removes an existing ReverseTunnel by name. DeleteReverseTunnel(context.Context, *DeleteReverseTunnelRequest) (*emptypb.Empty, error) + // GetRelayServer returns a single relay_server by name. + GetRelayServer(context.Context, *GetRelayServerRequest) (*GetRelayServerResponse, error) + // ListRelayServers returns a page of relay_server resources. + ListRelayServers(context.Context, *ListRelayServersRequest) (*ListRelayServersResponse, error) + // DeleteRelayServer deletes a relay_server resource by name. + DeleteRelayServer(context.Context, *DeleteRelayServerRequest) (*DeleteRelayServerResponse, error) + // ListAuthServers returns a page of Auth servers. + ListAuthServers(context.Context, *ListAuthServersRequest) (*ListAuthServersResponse, error) + // ListProxyServers returns a page of Proxy servers. 
+ ListProxyServers(context.Context, *ListProxyServersRequest) (*ListProxyServersResponse, error) mustEmbedUnimplementedPresenceServiceServer() } @@ -195,6 +270,21 @@ func (UnimplementedPresenceServiceServer) UpsertReverseTunnel(context.Context, * func (UnimplementedPresenceServiceServer) DeleteReverseTunnel(context.Context, *DeleteReverseTunnelRequest) (*emptypb.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method DeleteReverseTunnel not implemented") } +func (UnimplementedPresenceServiceServer) GetRelayServer(context.Context, *GetRelayServerRequest) (*GetRelayServerResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method GetRelayServer not implemented") +} +func (UnimplementedPresenceServiceServer) ListRelayServers(context.Context, *ListRelayServersRequest) (*ListRelayServersResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListRelayServers not implemented") +} +func (UnimplementedPresenceServiceServer) DeleteRelayServer(context.Context, *DeleteRelayServerRequest) (*DeleteRelayServerResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method DeleteRelayServer not implemented") +} +func (UnimplementedPresenceServiceServer) ListAuthServers(context.Context, *ListAuthServersRequest) (*ListAuthServersResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListAuthServers not implemented") +} +func (UnimplementedPresenceServiceServer) ListProxyServers(context.Context, *ListProxyServersRequest) (*ListProxyServersResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListProxyServers not implemented") +} func (UnimplementedPresenceServiceServer) mustEmbedUnimplementedPresenceServiceServer() {} func (UnimplementedPresenceServiceServer) testEmbeddedByValue() {} @@ -342,6 +432,96 @@ func _PresenceService_DeleteReverseTunnel_Handler(srv interface{}, ctx context.C return interceptor(ctx, in, info, handler) } +func _PresenceService_GetRelayServer_Handler(srv 
interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(GetRelayServerRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(PresenceServiceServer).GetRelayServer(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: PresenceService_GetRelayServer_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(PresenceServiceServer).GetRelayServer(ctx, req.(*GetRelayServerRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _PresenceService_ListRelayServers_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListRelayServersRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(PresenceServiceServer).ListRelayServers(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: PresenceService_ListRelayServers_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(PresenceServiceServer).ListRelayServers(ctx, req.(*ListRelayServersRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _PresenceService_DeleteRelayServer_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(DeleteRelayServerRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(PresenceServiceServer).DeleteRelayServer(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: PresenceService_DeleteRelayServer_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(PresenceServiceServer).DeleteRelayServer(ctx, 
req.(*DeleteRelayServerRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _PresenceService_ListAuthServers_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListAuthServersRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(PresenceServiceServer).ListAuthServers(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: PresenceService_ListAuthServers_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(PresenceServiceServer).ListAuthServers(ctx, req.(*ListAuthServersRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _PresenceService_ListProxyServers_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListProxyServersRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(PresenceServiceServer).ListProxyServers(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: PresenceService_ListProxyServers_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(PresenceServiceServer).ListProxyServers(ctx, req.(*ListProxyServersRequest)) + } + return interceptor(ctx, in, info, handler) +} + // PresenceService_ServiceDesc is the grpc.ServiceDesc for PresenceService service. 
// It's only intended for direct use with grpc.RegisterService, // and not to be introspected or modified (even as a copy) @@ -377,6 +557,26 @@ var PresenceService_ServiceDesc = grpc.ServiceDesc{ MethodName: "DeleteReverseTunnel", Handler: _PresenceService_DeleteReverseTunnel_Handler, }, + { + MethodName: "GetRelayServer", + Handler: _PresenceService_GetRelayServer_Handler, + }, + { + MethodName: "ListRelayServers", + Handler: _PresenceService_ListRelayServers_Handler, + }, + { + MethodName: "DeleteRelayServer", + Handler: _PresenceService_DeleteRelayServer_Handler, + }, + { + MethodName: "ListAuthServers", + Handler: _PresenceService_ListAuthServers_Handler, + }, + { + MethodName: "ListProxyServers", + Handler: _PresenceService_ListProxyServers_Handler, + }, }, Streams: []grpc.StreamDesc{}, Metadata: "teleport/presence/v1/service.proto", diff --git a/api/gen/proto/go/teleport/provisioning/v1/provisioning.pb.go b/api/gen/proto/go/teleport/provisioning/v1/provisioning.pb.go index f1dbdafe083c3..263e6d8ee4cdb 100644 --- a/api/gen/proto/go/teleport/provisioning/v1/provisioning.pb.go +++ b/api/gen/proto/go/teleport/provisioning/v1/provisioning.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/provisioning/v1/provisioning.proto diff --git a/api/gen/proto/go/teleport/provisioning/v1/provisioning_service.pb.go b/api/gen/proto/go/teleport/provisioning/v1/provisioning_service.pb.go deleted file mode 100644 index 6e3ae761fb8c1..0000000000000 --- a/api/gen/proto/go/teleport/provisioning/v1/provisioning_service.pb.go +++ /dev/null @@ -1,145 +0,0 @@ -// Copyright 2024 Gravitational, Inc. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
-// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -// Code generated by protoc-gen-go. DO NOT EDIT. -// versions: -// protoc-gen-go v1.36.6 -// protoc (unknown) -// source: teleport/provisioning/v1/provisioning_service.proto - -package provisioningv1 - -import ( - protoreflect "google.golang.org/protobuf/reflect/protoreflect" - protoimpl "google.golang.org/protobuf/runtime/protoimpl" - emptypb "google.golang.org/protobuf/types/known/emptypb" - reflect "reflect" - sync "sync" - unsafe "unsafe" -) - -const ( - // Verify that this generated code is sufficiently up-to-date. - _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) - // Verify that runtime/protoimpl is sufficiently up-to-date. - _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) -) - -// DeleteDownstreamProvisioningStatesRequest is a request to delete all provisioning states for -// a given DownstreamId. -type DeleteDownstreamProvisioningStatesRequest struct { - state protoimpl.MessageState `protogen:"open.v1"` - // DownstreamId identifies the downstream service that this state applies to. 
- DownstreamId string `protobuf:"bytes,1,opt,name=downstream_id,json=downstreamId,proto3" json:"downstream_id,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *DeleteDownstreamProvisioningStatesRequest) Reset() { - *x = DeleteDownstreamProvisioningStatesRequest{} - mi := &file_teleport_provisioning_v1_provisioning_service_proto_msgTypes[0] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *DeleteDownstreamProvisioningStatesRequest) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*DeleteDownstreamProvisioningStatesRequest) ProtoMessage() {} - -func (x *DeleteDownstreamProvisioningStatesRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_provisioning_v1_provisioning_service_proto_msgTypes[0] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use DeleteDownstreamProvisioningStatesRequest.ProtoReflect.Descriptor instead. 
-func (*DeleteDownstreamProvisioningStatesRequest) Descriptor() ([]byte, []int) { - return file_teleport_provisioning_v1_provisioning_service_proto_rawDescGZIP(), []int{0} -} - -func (x *DeleteDownstreamProvisioningStatesRequest) GetDownstreamId() string { - if x != nil { - return x.DownstreamId - } - return "" -} - -var File_teleport_provisioning_v1_provisioning_service_proto protoreflect.FileDescriptor - -const file_teleport_provisioning_v1_provisioning_service_proto_rawDesc = "" + - "\n" + - "3teleport/provisioning/v1/provisioning_service.proto\x12\x18teleport.provisioning.v1\x1a\x1bgoogle/protobuf/empty.proto\"P\n" + - ")DeleteDownstreamProvisioningStatesRequest\x12#\n" + - "\rdownstream_id\x18\x01 \x01(\tR\fdownstreamId2\x99\x01\n" + - "\x13ProvisioningService\x12\x81\x01\n" + - "\"DeleteDownstreamProvisioningStates\x12C.teleport.provisioning.v1.DeleteDownstreamProvisioningStatesRequest\x1a\x16.google.protobuf.EmptyB\\ZZgithub.com/gravitational/teleport/api/gen/proto/go/teleport/provisioning/v1;provisioningv1b\x06proto3" - -var ( - file_teleport_provisioning_v1_provisioning_service_proto_rawDescOnce sync.Once - file_teleport_provisioning_v1_provisioning_service_proto_rawDescData []byte -) - -func file_teleport_provisioning_v1_provisioning_service_proto_rawDescGZIP() []byte { - file_teleport_provisioning_v1_provisioning_service_proto_rawDescOnce.Do(func() { - file_teleport_provisioning_v1_provisioning_service_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_teleport_provisioning_v1_provisioning_service_proto_rawDesc), len(file_teleport_provisioning_v1_provisioning_service_proto_rawDesc))) - }) - return file_teleport_provisioning_v1_provisioning_service_proto_rawDescData -} - -var file_teleport_provisioning_v1_provisioning_service_proto_msgTypes = make([]protoimpl.MessageInfo, 1) -var file_teleport_provisioning_v1_provisioning_service_proto_goTypes = []any{ - (*DeleteDownstreamProvisioningStatesRequest)(nil), // 0: 
teleport.provisioning.v1.DeleteDownstreamProvisioningStatesRequest - (*emptypb.Empty)(nil), // 1: google.protobuf.Empty -} -var file_teleport_provisioning_v1_provisioning_service_proto_depIdxs = []int32{ - 0, // 0: teleport.provisioning.v1.ProvisioningService.DeleteDownstreamProvisioningStates:input_type -> teleport.provisioning.v1.DeleteDownstreamProvisioningStatesRequest - 1, // 1: teleport.provisioning.v1.ProvisioningService.DeleteDownstreamProvisioningStates:output_type -> google.protobuf.Empty - 1, // [1:2] is the sub-list for method output_type - 0, // [0:1] is the sub-list for method input_type - 0, // [0:0] is the sub-list for extension type_name - 0, // [0:0] is the sub-list for extension extendee - 0, // [0:0] is the sub-list for field type_name -} - -func init() { file_teleport_provisioning_v1_provisioning_service_proto_init() } -func file_teleport_provisioning_v1_provisioning_service_proto_init() { - if File_teleport_provisioning_v1_provisioning_service_proto != nil { - return - } - type x struct{} - out := protoimpl.TypeBuilder{ - File: protoimpl.DescBuilder{ - GoPackagePath: reflect.TypeOf(x{}).PkgPath(), - RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_provisioning_v1_provisioning_service_proto_rawDesc), len(file_teleport_provisioning_v1_provisioning_service_proto_rawDesc)), - NumEnums: 0, - NumMessages: 1, - NumExtensions: 0, - NumServices: 1, - }, - GoTypes: file_teleport_provisioning_v1_provisioning_service_proto_goTypes, - DependencyIndexes: file_teleport_provisioning_v1_provisioning_service_proto_depIdxs, - MessageInfos: file_teleport_provisioning_v1_provisioning_service_proto_msgTypes, - }.Build() - File_teleport_provisioning_v1_provisioning_service_proto = out.File - file_teleport_provisioning_v1_provisioning_service_proto_goTypes = nil - file_teleport_provisioning_v1_provisioning_service_proto_depIdxs = nil -} diff --git a/api/gen/proto/go/teleport/provisioning/v1/provisioning_service_grpc.pb.go 
b/api/gen/proto/go/teleport/provisioning/v1/provisioning_service_grpc.pb.go deleted file mode 100644 index fdfe2a7bde4f4..0000000000000 --- a/api/gen/proto/go/teleport/provisioning/v1/provisioning_service_grpc.pb.go +++ /dev/null @@ -1,142 +0,0 @@ -// Copyright 2024 Gravitational, Inc. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -// Code generated by protoc-gen-go-grpc. DO NOT EDIT. -// versions: -// - protoc-gen-go-grpc v1.5.1 -// - protoc (unknown) -// source: teleport/provisioning/v1/provisioning_service.proto - -package provisioningv1 - -import ( - context "context" - grpc "google.golang.org/grpc" - codes "google.golang.org/grpc/codes" - status "google.golang.org/grpc/status" - emptypb "google.golang.org/protobuf/types/known/emptypb" -) - -// This is a compile-time assertion to ensure that this generated file -// is compatible with the grpc package it is being compiled against. -// Requires gRPC-Go v1.64.0 or later. -const _ = grpc.SupportPackageIsVersion9 - -const ( - ProvisioningService_DeleteDownstreamProvisioningStates_FullMethodName = "/teleport.provisioning.v1.ProvisioningService/DeleteDownstreamProvisioningStates" -) - -// ProvisioningServiceClient is the client API for ProvisioningService service. -// -// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream. -// -// ProvisioningService provides methods to manage Provisioning resources. 
-type ProvisioningServiceClient interface { - // DeleteDownstreamProvisioningStates deletes all Identity Center provisioning state for a given downstream. - DeleteDownstreamProvisioningStates(ctx context.Context, in *DeleteDownstreamProvisioningStatesRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) -} - -type provisioningServiceClient struct { - cc grpc.ClientConnInterface -} - -func NewProvisioningServiceClient(cc grpc.ClientConnInterface) ProvisioningServiceClient { - return &provisioningServiceClient{cc} -} - -func (c *provisioningServiceClient) DeleteDownstreamProvisioningStates(ctx context.Context, in *DeleteDownstreamProvisioningStatesRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) { - cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) - out := new(emptypb.Empty) - err := c.cc.Invoke(ctx, ProvisioningService_DeleteDownstreamProvisioningStates_FullMethodName, in, out, cOpts...) - if err != nil { - return nil, err - } - return out, nil -} - -// ProvisioningServiceServer is the server API for ProvisioningService service. -// All implementations must embed UnimplementedProvisioningServiceServer -// for forward compatibility. -// -// ProvisioningService provides methods to manage Provisioning resources. -type ProvisioningServiceServer interface { - // DeleteDownstreamProvisioningStates deletes all Identity Center provisioning state for a given downstream. - DeleteDownstreamProvisioningStates(context.Context, *DeleteDownstreamProvisioningStatesRequest) (*emptypb.Empty, error) - mustEmbedUnimplementedProvisioningServiceServer() -} - -// UnimplementedProvisioningServiceServer must be embedded to have -// forward compatible implementations. -// -// NOTE: this should be embedded by value instead of pointer to avoid a nil -// pointer dereference when methods are called. 
-type UnimplementedProvisioningServiceServer struct{} - -func (UnimplementedProvisioningServiceServer) DeleteDownstreamProvisioningStates(context.Context, *DeleteDownstreamProvisioningStatesRequest) (*emptypb.Empty, error) { - return nil, status.Errorf(codes.Unimplemented, "method DeleteDownstreamProvisioningStates not implemented") -} -func (UnimplementedProvisioningServiceServer) mustEmbedUnimplementedProvisioningServiceServer() {} -func (UnimplementedProvisioningServiceServer) testEmbeddedByValue() {} - -// UnsafeProvisioningServiceServer may be embedded to opt out of forward compatibility for this service. -// Use of this interface is not recommended, as added methods to ProvisioningServiceServer will -// result in compilation errors. -type UnsafeProvisioningServiceServer interface { - mustEmbedUnimplementedProvisioningServiceServer() -} - -func RegisterProvisioningServiceServer(s grpc.ServiceRegistrar, srv ProvisioningServiceServer) { - // If the following call pancis, it indicates UnimplementedProvisioningServiceServer was - // embedded by pointer and is nil. This will cause panics if an - // unimplemented method is ever invoked, so we test this at initialization - // time to prevent it from happening at runtime later due to I/O. 
- if t, ok := srv.(interface{ testEmbeddedByValue() }); ok { - t.testEmbeddedByValue() - } - s.RegisterService(&ProvisioningService_ServiceDesc, srv) -} - -func _ProvisioningService_DeleteDownstreamProvisioningStates_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { - in := new(DeleteDownstreamProvisioningStatesRequest) - if err := dec(in); err != nil { - return nil, err - } - if interceptor == nil { - return srv.(ProvisioningServiceServer).DeleteDownstreamProvisioningStates(ctx, in) - } - info := &grpc.UnaryServerInfo{ - Server: srv, - FullMethod: ProvisioningService_DeleteDownstreamProvisioningStates_FullMethodName, - } - handler := func(ctx context.Context, req interface{}) (interface{}, error) { - return srv.(ProvisioningServiceServer).DeleteDownstreamProvisioningStates(ctx, req.(*DeleteDownstreamProvisioningStatesRequest)) - } - return interceptor(ctx, in, info, handler) -} - -// ProvisioningService_ServiceDesc is the grpc.ServiceDesc for ProvisioningService service. 
-// It's only intended for direct use with grpc.RegisterService, -// and not to be introspected or modified (even as a copy) -var ProvisioningService_ServiceDesc = grpc.ServiceDesc{ - ServiceName: "teleport.provisioning.v1.ProvisioningService", - HandlerType: (*ProvisioningServiceServer)(nil), - Methods: []grpc.MethodDesc{ - { - MethodName: "DeleteDownstreamProvisioningStates", - Handler: _ProvisioningService_DeleteDownstreamProvisioningStates_Handler, - }, - }, - Streams: []grpc.StreamDesc{}, - Metadata: "teleport/provisioning/v1/provisioning_service.proto", -} diff --git a/api/gen/proto/go/teleport/recordingencryption/v1/recording_encryption.pb.go b/api/gen/proto/go/teleport/recordingencryption/v1/recording_encryption.pb.go new file mode 100644 index 0000000000000..02b63f6fd9556 --- /dev/null +++ b/api/gen/proto/go/teleport/recordingencryption/v1/recording_encryption.pb.go @@ -0,0 +1,598 @@ +// Copyright 2025 Gravitational, Inc +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Code generated by protoc-gen-go. DO NOT EDIT. 
+// versions: +// protoc-gen-go v1.36.8 +// protoc (unknown) +// source: teleport/recordingencryption/v1/recording_encryption.proto + +package recordingencryptionv1 + +import ( + v1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/header/v1" + types "github.com/gravitational/teleport/api/types" + protoreflect "google.golang.org/protobuf/reflect/protoreflect" + protoimpl "google.golang.org/protobuf/runtime/protoimpl" + reflect "reflect" + sync "sync" + unsafe "unsafe" +) + +const ( + // Verify that this generated code is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) + // Verify that runtime/protoimpl is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) +) + +// The possible states a KeyPair can be in. +type KeyPairState int32 + +const ( + // Unspecified value + KeyPairState_KEY_PAIR_STATE_UNSPECIFIED KeyPairState = 0 + // Represents an active key. + KeyPairState_KEY_PAIR_STATE_ACTIVE KeyPairState = 1 + // Represents a key in the process of being rotated. + KeyPairState_KEY_PAIR_STATE_ROTATING KeyPairState = 2 + // Represents a key being rotated in that is inaccessible to at least one + // auth server. + KeyPairState_KEY_PAIR_STATE_INACCESSIBLE KeyPairState = 3 +) + +// Enum value maps for KeyPairState. 
+var ( + KeyPairState_name = map[int32]string{ + 0: "KEY_PAIR_STATE_UNSPECIFIED", + 1: "KEY_PAIR_STATE_ACTIVE", + 2: "KEY_PAIR_STATE_ROTATING", + 3: "KEY_PAIR_STATE_INACCESSIBLE", + } + KeyPairState_value = map[string]int32{ + "KEY_PAIR_STATE_UNSPECIFIED": 0, + "KEY_PAIR_STATE_ACTIVE": 1, + "KEY_PAIR_STATE_ROTATING": 2, + "KEY_PAIR_STATE_INACCESSIBLE": 3, + } +) + +func (x KeyPairState) Enum() *KeyPairState { + p := new(KeyPairState) + *p = x + return p +} + +func (x KeyPairState) String() string { + return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x)) +} + +func (KeyPairState) Descriptor() protoreflect.EnumDescriptor { + return file_teleport_recordingencryption_v1_recording_encryption_proto_enumTypes[0].Descriptor() +} + +func (KeyPairState) Type() protoreflect.EnumType { + return &file_teleport_recordingencryption_v1_recording_encryption_proto_enumTypes[0] +} + +func (x KeyPairState) Number() protoreflect.EnumNumber { + return protoreflect.EnumNumber(x) +} + +// Deprecated: Use KeyPairState.Descriptor instead. +func (KeyPairState) EnumDescriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_proto_rawDescGZIP(), []int{0} +} + +// A key pair used with age to wrap and unwrap file keys for session recording encryption. +type KeyPair struct { + state protoimpl.MessageState `protogen:"open.v1"` + // A key pair used with age to wrap and unwrap file keys for session recording encryption. + KeyPair *types.EncryptionKeyPair `protobuf:"bytes,1,opt,name=key_pair,json=keyPair,proto3" json:"key_pair,omitempty"` + // The current state of the key pair. 
+ State KeyPairState `protobuf:"varint,2,opt,name=state,proto3,enum=teleport.recordingencryption.v1.KeyPairState" json:"state,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *KeyPair) Reset() { + *x = KeyPair{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_proto_msgTypes[0] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *KeyPair) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*KeyPair) ProtoMessage() {} + +func (x *KeyPair) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_proto_msgTypes[0] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use KeyPair.ProtoReflect.Descriptor instead. +func (*KeyPair) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_proto_rawDescGZIP(), []int{0} +} + +func (x *KeyPair) GetKeyPair() *types.EncryptionKeyPair { + if x != nil { + return x.KeyPair + } + return nil +} + +func (x *KeyPair) GetState() KeyPairState { + if x != nil { + return x.State + } + return KeyPairState_KEY_PAIR_STATE_UNSPECIFIED +} + +// RecordingEncryptionSpec contains the active key set for encrypted session recording. +type RecordingEncryptionSpec struct { + state protoimpl.MessageState `protogen:"open.v1"` + // A list of active key pairs used for session recording encryption. The unique set of + // active public keys are used as recipients during age encryption. This allows any + // active private key to be used during decryption which guards against recordings being + // inaccessible to auth servers waiting for key rotation. 
+ ActiveKeyPairs []*KeyPair `protobuf:"bytes,2,rep,name=active_key_pairs,json=activeKeyPairs,proto3" json:"active_key_pairs,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *RecordingEncryptionSpec) Reset() { + *x = RecordingEncryptionSpec{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_proto_msgTypes[1] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *RecordingEncryptionSpec) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*RecordingEncryptionSpec) ProtoMessage() {} + +func (x *RecordingEncryptionSpec) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_proto_msgTypes[1] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use RecordingEncryptionSpec.ProtoReflect.Descriptor instead. +func (*RecordingEncryptionSpec) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_proto_rawDescGZIP(), []int{1} +} + +func (x *RecordingEncryptionSpec) GetActiveKeyPairs() []*KeyPair { + if x != nil { + return x.ActiveKeyPairs + } + return nil +} + +// RecordingEncryptionStatus contains the status of the RecordingEncryption resource. 
+type RecordingEncryptionStatus struct { + state protoimpl.MessageState `protogen:"open.v1"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *RecordingEncryptionStatus) Reset() { + *x = RecordingEncryptionStatus{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_proto_msgTypes[2] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *RecordingEncryptionStatus) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*RecordingEncryptionStatus) ProtoMessage() {} + +func (x *RecordingEncryptionStatus) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_proto_msgTypes[2] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use RecordingEncryptionStatus.ProtoReflect.Descriptor instead. +func (*RecordingEncryptionStatus) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_proto_rawDescGZIP(), []int{2} +} + +// RecordingEncryption contains cluster state for encrypted session recordings. 
+type RecordingEncryption struct { + state protoimpl.MessageState `protogen:"open.v1"` + Kind string `protobuf:"bytes,1,opt,name=kind,proto3" json:"kind,omitempty"` + SubKind string `protobuf:"bytes,2,opt,name=sub_kind,json=subKind,proto3" json:"sub_kind,omitempty"` + Version string `protobuf:"bytes,3,opt,name=version,proto3" json:"version,omitempty"` + Metadata *v1.Metadata `protobuf:"bytes,4,opt,name=metadata,proto3" json:"metadata,omitempty"` + Spec *RecordingEncryptionSpec `protobuf:"bytes,5,opt,name=spec,proto3" json:"spec,omitempty"` + Status *RecordingEncryptionStatus `protobuf:"bytes,6,opt,name=status,proto3" json:"status,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *RecordingEncryption) Reset() { + *x = RecordingEncryption{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_proto_msgTypes[3] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *RecordingEncryption) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*RecordingEncryption) ProtoMessage() {} + +func (x *RecordingEncryption) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_proto_msgTypes[3] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use RecordingEncryption.ProtoReflect.Descriptor instead. 
+func (*RecordingEncryption) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_proto_rawDescGZIP(), []int{3} +} + +func (x *RecordingEncryption) GetKind() string { + if x != nil { + return x.Kind + } + return "" +} + +func (x *RecordingEncryption) GetSubKind() string { + if x != nil { + return x.SubKind + } + return "" +} + +func (x *RecordingEncryption) GetVersion() string { + if x != nil { + return x.Version + } + return "" +} + +func (x *RecordingEncryption) GetMetadata() *v1.Metadata { + if x != nil { + return x.Metadata + } + return nil +} + +func (x *RecordingEncryption) GetSpec() *RecordingEncryptionSpec { + if x != nil { + return x.Spec + } + return nil +} + +func (x *RecordingEncryption) GetStatus() *RecordingEncryptionStatus { + if x != nil { + return x.Status + } + return nil +} + +// A rotated key pair previously used with age to wrap and unwrap file keys for session recording +// encryption. +type RotatedKeySpec struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The rotated key pair previously used with age to wrap and unwrap file keys for session recording + // encryption. 
+ EncryptionKeyPair *types.EncryptionKeyPair `protobuf:"bytes,2,opt,name=encryption_key_pair,json=encryptionKeyPair,proto3" json:"encryption_key_pair,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *RotatedKeySpec) Reset() { + *x = RotatedKeySpec{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_proto_msgTypes[4] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *RotatedKeySpec) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*RotatedKeySpec) ProtoMessage() {} + +func (x *RotatedKeySpec) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_proto_msgTypes[4] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use RotatedKeySpec.ProtoReflect.Descriptor instead. +func (*RotatedKeySpec) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_proto_rawDescGZIP(), []int{4} +} + +func (x *RotatedKeySpec) GetEncryptionKeyPair() *types.EncryptionKeyPair { + if x != nil { + return x.EncryptionKeyPair + } + return nil +} + +// The empty status of a RotatedKey. 
+type RotatedKeyStatus struct { + state protoimpl.MessageState `protogen:"open.v1"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *RotatedKeyStatus) Reset() { + *x = RotatedKeyStatus{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_proto_msgTypes[5] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *RotatedKeyStatus) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*RotatedKeyStatus) ProtoMessage() {} + +func (x *RotatedKeyStatus) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_proto_msgTypes[5] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use RotatedKeyStatus.ProtoReflect.Descriptor instead. +func (*RotatedKeyStatus) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_proto_rawDescGZIP(), []int{5} +} + +// A previously rotated encryption key for session recordings kept for future replay. The metadata.name +// is expected to be the fingerprint of the public key contained in the spec, which is a hex encoded +// SHA256 hash of its PKIX form. 
+type RotatedKey struct { + state protoimpl.MessageState `protogen:"open.v1"` + Kind string `protobuf:"bytes,1,opt,name=kind,proto3" json:"kind,omitempty"` + SubKind string `protobuf:"bytes,2,opt,name=sub_kind,json=subKind,proto3" json:"sub_kind,omitempty"` + Version string `protobuf:"bytes,3,opt,name=version,proto3" json:"version,omitempty"` + Metadata *v1.Metadata `protobuf:"bytes,4,opt,name=metadata,proto3" json:"metadata,omitempty"` + Spec *RotatedKeySpec `protobuf:"bytes,5,opt,name=spec,proto3" json:"spec,omitempty"` + Status *RotatedKeyStatus `protobuf:"bytes,6,opt,name=status,proto3" json:"status,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *RotatedKey) Reset() { + *x = RotatedKey{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_proto_msgTypes[6] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *RotatedKey) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*RotatedKey) ProtoMessage() {} + +func (x *RotatedKey) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_proto_msgTypes[6] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use RotatedKey.ProtoReflect.Descriptor instead. 
+func (*RotatedKey) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_proto_rawDescGZIP(), []int{6} +} + +func (x *RotatedKey) GetKind() string { + if x != nil { + return x.Kind + } + return "" +} + +func (x *RotatedKey) GetSubKind() string { + if x != nil { + return x.SubKind + } + return "" +} + +func (x *RotatedKey) GetVersion() string { + if x != nil { + return x.Version + } + return "" +} + +func (x *RotatedKey) GetMetadata() *v1.Metadata { + if x != nil { + return x.Metadata + } + return nil +} + +func (x *RotatedKey) GetSpec() *RotatedKeySpec { + if x != nil { + return x.Spec + } + return nil +} + +func (x *RotatedKey) GetStatus() *RotatedKeyStatus { + if x != nil { + return x.Status + } + return nil +} + +var File_teleport_recordingencryption_v1_recording_encryption_proto protoreflect.FileDescriptor + +const file_teleport_recordingencryption_v1_recording_encryption_proto_rawDesc = "" + + "\n" + + ":teleport/recordingencryption/v1/recording_encryption.proto\x12\x1fteleport.recordingencryption.v1\x1a!teleport/header/v1/metadata.proto\x1a!teleport/legacy/types/types.proto\"\x83\x01\n" + + "\aKeyPair\x123\n" + + "\bkey_pair\x18\x01 \x01(\v2\x18.types.EncryptionKeyPairR\akeyPair\x12C\n" + + "\x05state\x18\x02 \x01(\x0e2-.teleport.recordingencryption.v1.KeyPairStateR\x05state\"\x80\x01\n" + + "\x17RecordingEncryptionSpec\x12R\n" + + "\x10active_key_pairs\x18\x02 \x03(\v2(.teleport.recordingencryption.v1.KeyPairR\x0eactiveKeyPairsJ\x04\b\x01\x10\x02R\vactive_keys\"\x1b\n" + + "\x19RecordingEncryptionStatus\"\xba\x02\n" + + "\x13RecordingEncryption\x12\x12\n" + + "\x04kind\x18\x01 \x01(\tR\x04kind\x12\x19\n" + + "\bsub_kind\x18\x02 \x01(\tR\asubKind\x12\x18\n" + + "\aversion\x18\x03 \x01(\tR\aversion\x128\n" + + "\bmetadata\x18\x04 \x01(\v2\x1c.teleport.header.v1.MetadataR\bmetadata\x12L\n" + + "\x04spec\x18\x05 \x01(\v28.teleport.recordingencryption.v1.RecordingEncryptionSpecR\x04spec\x12R\n" + + "\x06status\x18\x06 
\x01(\v2:.teleport.recordingencryption.v1.RecordingEncryptionStatusR\x06status\"Z\n" + + "\x0eRotatedKeySpec\x12H\n" + + "\x13encryption_key_pair\x18\x02 \x01(\v2\x18.types.EncryptionKeyPairR\x11encryptionKeyPair\"\x12\n" + + "\x10RotatedKeyStatus\"\x9f\x02\n" + + "\n" + + "RotatedKey\x12\x12\n" + + "\x04kind\x18\x01 \x01(\tR\x04kind\x12\x19\n" + + "\bsub_kind\x18\x02 \x01(\tR\asubKind\x12\x18\n" + + "\aversion\x18\x03 \x01(\tR\aversion\x128\n" + + "\bmetadata\x18\x04 \x01(\v2\x1c.teleport.header.v1.MetadataR\bmetadata\x12C\n" + + "\x04spec\x18\x05 \x01(\v2/.teleport.recordingencryption.v1.RotatedKeySpecR\x04spec\x12I\n" + + "\x06status\x18\x06 \x01(\v21.teleport.recordingencryption.v1.RotatedKeyStatusR\x06status*\x87\x01\n" + + "\fKeyPairState\x12\x1e\n" + + "\x1aKEY_PAIR_STATE_UNSPECIFIED\x10\x00\x12\x19\n" + + "\x15KEY_PAIR_STATE_ACTIVE\x10\x01\x12\x1b\n" + + "\x17KEY_PAIR_STATE_ROTATING\x10\x02\x12\x1f\n" + + "\x1bKEY_PAIR_STATE_INACCESSIBLE\x10\x03BjZhgithub.com/gravitational/teleport/api/gen/proto/go/teleport/recordingencryption/v1;recordingencryptionv1b\x06proto3" + +var ( + file_teleport_recordingencryption_v1_recording_encryption_proto_rawDescOnce sync.Once + file_teleport_recordingencryption_v1_recording_encryption_proto_rawDescData []byte +) + +func file_teleport_recordingencryption_v1_recording_encryption_proto_rawDescGZIP() []byte { + file_teleport_recordingencryption_v1_recording_encryption_proto_rawDescOnce.Do(func() { + file_teleport_recordingencryption_v1_recording_encryption_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_teleport_recordingencryption_v1_recording_encryption_proto_rawDesc), len(file_teleport_recordingencryption_v1_recording_encryption_proto_rawDesc))) + }) + return file_teleport_recordingencryption_v1_recording_encryption_proto_rawDescData +} + +var file_teleport_recordingencryption_v1_recording_encryption_proto_enumTypes = make([]protoimpl.EnumInfo, 1) +var 
file_teleport_recordingencryption_v1_recording_encryption_proto_msgTypes = make([]protoimpl.MessageInfo, 7) +var file_teleport_recordingencryption_v1_recording_encryption_proto_goTypes = []any{ + (KeyPairState)(0), // 0: teleport.recordingencryption.v1.KeyPairState + (*KeyPair)(nil), // 1: teleport.recordingencryption.v1.KeyPair + (*RecordingEncryptionSpec)(nil), // 2: teleport.recordingencryption.v1.RecordingEncryptionSpec + (*RecordingEncryptionStatus)(nil), // 3: teleport.recordingencryption.v1.RecordingEncryptionStatus + (*RecordingEncryption)(nil), // 4: teleport.recordingencryption.v1.RecordingEncryption + (*RotatedKeySpec)(nil), // 5: teleport.recordingencryption.v1.RotatedKeySpec + (*RotatedKeyStatus)(nil), // 6: teleport.recordingencryption.v1.RotatedKeyStatus + (*RotatedKey)(nil), // 7: teleport.recordingencryption.v1.RotatedKey + (*types.EncryptionKeyPair)(nil), // 8: types.EncryptionKeyPair + (*v1.Metadata)(nil), // 9: teleport.header.v1.Metadata +} +var file_teleport_recordingencryption_v1_recording_encryption_proto_depIdxs = []int32{ + 8, // 0: teleport.recordingencryption.v1.KeyPair.key_pair:type_name -> types.EncryptionKeyPair + 0, // 1: teleport.recordingencryption.v1.KeyPair.state:type_name -> teleport.recordingencryption.v1.KeyPairState + 1, // 2: teleport.recordingencryption.v1.RecordingEncryptionSpec.active_key_pairs:type_name -> teleport.recordingencryption.v1.KeyPair + 9, // 3: teleport.recordingencryption.v1.RecordingEncryption.metadata:type_name -> teleport.header.v1.Metadata + 2, // 4: teleport.recordingencryption.v1.RecordingEncryption.spec:type_name -> teleport.recordingencryption.v1.RecordingEncryptionSpec + 3, // 5: teleport.recordingencryption.v1.RecordingEncryption.status:type_name -> teleport.recordingencryption.v1.RecordingEncryptionStatus + 8, // 6: teleport.recordingencryption.v1.RotatedKeySpec.encryption_key_pair:type_name -> types.EncryptionKeyPair + 9, // 7: teleport.recordingencryption.v1.RotatedKey.metadata:type_name -> 
teleport.header.v1.Metadata + 5, // 8: teleport.recordingencryption.v1.RotatedKey.spec:type_name -> teleport.recordingencryption.v1.RotatedKeySpec + 6, // 9: teleport.recordingencryption.v1.RotatedKey.status:type_name -> teleport.recordingencryption.v1.RotatedKeyStatus + 10, // [10:10] is the sub-list for method output_type + 10, // [10:10] is the sub-list for method input_type + 10, // [10:10] is the sub-list for extension type_name + 10, // [10:10] is the sub-list for extension extendee + 0, // [0:10] is the sub-list for field type_name +} + +func init() { file_teleport_recordingencryption_v1_recording_encryption_proto_init() } +func file_teleport_recordingencryption_v1_recording_encryption_proto_init() { + if File_teleport_recordingencryption_v1_recording_encryption_proto != nil { + return + } + type x struct{} + out := protoimpl.TypeBuilder{ + File: protoimpl.DescBuilder{ + GoPackagePath: reflect.TypeOf(x{}).PkgPath(), + RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_recordingencryption_v1_recording_encryption_proto_rawDesc), len(file_teleport_recordingencryption_v1_recording_encryption_proto_rawDesc)), + NumEnums: 1, + NumMessages: 7, + NumExtensions: 0, + NumServices: 0, + }, + GoTypes: file_teleport_recordingencryption_v1_recording_encryption_proto_goTypes, + DependencyIndexes: file_teleport_recordingencryption_v1_recording_encryption_proto_depIdxs, + EnumInfos: file_teleport_recordingencryption_v1_recording_encryption_proto_enumTypes, + MessageInfos: file_teleport_recordingencryption_v1_recording_encryption_proto_msgTypes, + }.Build() + File_teleport_recordingencryption_v1_recording_encryption_proto = out.File + file_teleport_recordingencryption_v1_recording_encryption_proto_goTypes = nil + file_teleport_recordingencryption_v1_recording_encryption_proto_depIdxs = nil +} diff --git a/api/gen/proto/go/teleport/recordingencryption/v1/recording_encryption_service.pb.go 
b/api/gen/proto/go/teleport/recordingencryption/v1/recording_encryption_service.pb.go new file mode 100644 index 0000000000000..a86889f0002de --- /dev/null +++ b/api/gen/proto/go/teleport/recordingencryption/v1/recording_encryption_service.pb.go @@ -0,0 +1,990 @@ +// Copyright 2025 Gravitational, Inc +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Code generated by protoc-gen-go. DO NOT EDIT. +// versions: +// protoc-gen-go v1.36.8 +// protoc (unknown) +// source: teleport/recordingencryption/v1/recording_encryption_service.proto + +package recordingencryptionv1 + +import ( + protoreflect "google.golang.org/protobuf/reflect/protoreflect" + protoimpl "google.golang.org/protobuf/runtime/protoimpl" + timestamppb "google.golang.org/protobuf/types/known/timestamppb" + reflect "reflect" + sync "sync" + unsafe "unsafe" +) + +const ( + // Verify that this generated code is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) + // Verify that runtime/protoimpl is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) +) + +// The handle to an upload for an encrypted session. +type Upload struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The primary identifier for an Upload. + UploadId string `protobuf:"bytes,1,opt,name=upload_id,json=uploadId,proto3" json:"upload_id,omitempty"` + // The session ID an upload is tied to. 
+ SessionId string `protobuf:"bytes,2,opt,name=session_id,json=sessionId,proto3" json:"session_id,omitempty"` + // The time that an upload was created at. + InitiatedAt *timestamppb.Timestamp `protobuf:"bytes,3,opt,name=initiated_at,json=initiatedAt,proto3" json:"initiated_at,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *Upload) Reset() { + *x = Upload{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[0] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *Upload) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*Upload) ProtoMessage() {} + +func (x *Upload) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[0] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use Upload.ProtoReflect.Descriptor instead. +func (*Upload) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescGZIP(), []int{0} +} + +func (x *Upload) GetUploadId() string { + if x != nil { + return x.UploadId + } + return "" +} + +func (x *Upload) GetSessionId() string { + if x != nil { + return x.SessionId + } + return "" +} + +func (x *Upload) GetInitiatedAt() *timestamppb.Timestamp { + if x != nil { + return x.InitiatedAt + } + return nil +} + +// The request to start a multipart upload for a specific session recording. +type CreateUploadRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The session ID associated with the recording being uploaded. 
+ SessionId string `protobuf:"bytes,1,opt,name=session_id,json=sessionId,proto3" json:"session_id,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *CreateUploadRequest) Reset() { + *x = CreateUploadRequest{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[1] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *CreateUploadRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*CreateUploadRequest) ProtoMessage() {} + +func (x *CreateUploadRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[1] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use CreateUploadRequest.ProtoReflect.Descriptor instead. +func (*CreateUploadRequest) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescGZIP(), []int{1} +} + +func (x *CreateUploadRequest) GetSessionId() string { + if x != nil { + return x.SessionId + } + return "" +} + +// The resulting Upload message for a created Upload. +type CreateUploadResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The handle for the created Upload. 
+ Upload *Upload `protobuf:"bytes,1,opt,name=upload,proto3" json:"upload,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *CreateUploadResponse) Reset() { + *x = CreateUploadResponse{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[2] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *CreateUploadResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*CreateUploadResponse) ProtoMessage() {} + +func (x *CreateUploadResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[2] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use CreateUploadResponse.ProtoReflect.Descriptor instead. +func (*CreateUploadResponse) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescGZIP(), []int{2} +} + +func (x *CreateUploadResponse) GetUpload() *Upload { + if x != nil { + return x.Upload + } + return nil +} + +// The request to upload a single part in a multipart upload. +type UploadPartRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The handle to the in-progress upload that should be uploaded to. + Upload *Upload `protobuf:"bytes,1,opt,name=upload,proto3" json:"upload,omitempty"` + // The ordered index applied to the part. + PartNumber int64 `protobuf:"varint,2,opt,name=part_number,json=partNumber,proto3" json:"part_number,omitempty"` + // The encrypted part of session recording data being uploaded. + Part []byte `protobuf:"bytes,3,opt,name=part,proto3" json:"part,omitempty"` + // Whether this is the last upload part in the upload. 
+ IsLast bool `protobuf:"varint,4,opt,name=is_last,json=isLast,proto3" json:"is_last,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UploadPartRequest) Reset() { + *x = UploadPartRequest{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[3] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UploadPartRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UploadPartRequest) ProtoMessage() {} + +func (x *UploadPartRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[3] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UploadPartRequest.ProtoReflect.Descriptor instead. +func (*UploadPartRequest) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescGZIP(), []int{3} +} + +func (x *UploadPartRequest) GetUpload() *Upload { + if x != nil { + return x.Upload + } + return nil +} + +func (x *UploadPartRequest) GetPartNumber() int64 { + if x != nil { + return x.PartNumber + } + return 0 +} + +func (x *UploadPartRequest) GetPart() []byte { + if x != nil { + return x.Part + } + return nil +} + +func (x *UploadPartRequest) GetIsLast() bool { + if x != nil { + return x.IsLast + } + return false +} + +// The resulting metadata about an uploaded part. +type Part struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The ordered index applied to the part. + PartNumber int64 `protobuf:"varint,1,opt,name=part_number,json=partNumber,proto3" json:"part_number,omitempty"` + // The part e-tag value relevant to some storage backends. 
+ Etag string `protobuf:"bytes,2,opt,name=etag,proto3" json:"etag,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *Part) Reset() { + *x = Part{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[4] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *Part) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*Part) ProtoMessage() {} + +func (x *Part) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[4] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use Part.ProtoReflect.Descriptor instead. +func (*Part) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescGZIP(), []int{4} +} + +func (x *Part) GetPartNumber() int64 { + if x != nil { + return x.PartNumber + } + return 0 +} + +func (x *Part) GetEtag() string { + if x != nil { + return x.Etag + } + return "" +} + +// A successfully uploaded Part to be included in the final CompleteUpload request. +type UploadPartResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The resulting part metadata about an uploaded part. 
+ Part *Part `protobuf:"bytes,1,opt,name=part,proto3" json:"part,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UploadPartResponse) Reset() { + *x = UploadPartResponse{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[5] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UploadPartResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UploadPartResponse) ProtoMessage() {} + +func (x *UploadPartResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[5] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UploadPartResponse.ProtoReflect.Descriptor instead. +func (*UploadPartResponse) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescGZIP(), []int{5} +} + +func (x *UploadPartResponse) GetPart() *Part { + if x != nil { + return x.Part + } + return nil +} + +// The request to complete an upload. The included part numbers must match the parts successfully +// uploaded up until this point. +type CompleteUploadRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The handle to an upload to complete. + Upload *Upload `protobuf:"bytes,1,opt,name=upload,proto3" json:"upload,omitempty"` + // The parts expected to be successfully uploaded. 
+ Parts []*Part `protobuf:"bytes,2,rep,name=parts,proto3" json:"parts,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *CompleteUploadRequest) Reset() { + *x = CompleteUploadRequest{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[6] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *CompleteUploadRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*CompleteUploadRequest) ProtoMessage() {} + +func (x *CompleteUploadRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[6] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use CompleteUploadRequest.ProtoReflect.Descriptor instead. +func (*CompleteUploadRequest) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescGZIP(), []int{6} +} + +func (x *CompleteUploadRequest) GetUpload() *Upload { + if x != nil { + return x.Upload + } + return nil +} + +func (x *CompleteUploadRequest) GetParts() []*Part { + if x != nil { + return x.Parts + } + return nil +} + +// The response to a CompleteUpload request. 
+type CompleteUploadResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *CompleteUploadResponse) Reset() { + *x = CompleteUploadResponse{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[7] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *CompleteUploadResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*CompleteUploadResponse) ProtoMessage() {} + +func (x *CompleteUploadResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[7] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use CompleteUploadResponse.ProtoReflect.Descriptor instead. +func (*CompleteUploadResponse) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescGZIP(), []int{7} +} + +// The body of a RotateKey request. 
+type RotateKeyRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *RotateKeyRequest) Reset() { + *x = RotateKeyRequest{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[8] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *RotateKeyRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*RotateKeyRequest) ProtoMessage() {} + +func (x *RotateKeyRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[8] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use RotateKeyRequest.ProtoReflect.Descriptor instead. +func (*RotateKeyRequest) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescGZIP(), []int{8} +} + +// The return value of a RotateKey request. 
+type RotateKeyResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *RotateKeyResponse) Reset() { + *x = RotateKeyResponse{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[9] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *RotateKeyResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*RotateKeyResponse) ProtoMessage() {} + +func (x *RotateKeyResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[9] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use RotateKeyResponse.ProtoReflect.Descriptor instead. +func (*RotateKeyResponse) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescGZIP(), []int{9} +} + +// The body of a GetRotationState request. 
+type GetRotationStateRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetRotationStateRequest) Reset() { + *x = GetRotationStateRequest{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[10] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetRotationStateRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetRotationStateRequest) ProtoMessage() {} + +func (x *GetRotationStateRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[10] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetRotationStateRequest.ProtoReflect.Descriptor instead. +func (*GetRotationStateRequest) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescGZIP(), []int{10} +} + +func (x *GetRotationStateRequest) GetPageSize() int32 { + if x != nil { + return x.PageSize + } + return 0 +} + +func (x *GetRotationStateRequest) GetPageToken() string { + if x != nil { + return x.PageToken + } + return "" +} + +// A public key fingerprint coupled with its current state. +type FingerprintWithState struct { + state protoimpl.MessageState `protogen:"open.v1"` + // A fingerprint identifying the public key of a KeyPair. + Fingerprint string `protobuf:"bytes,1,opt,name=fingerprint,proto3" json:"fingerprint,omitempty"` + // The state associated with the identified KeyPair. 
+ State KeyPairState `protobuf:"varint,2,opt,name=state,proto3,enum=teleport.recordingencryption.v1.KeyPairState" json:"state,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *FingerprintWithState) Reset() { + *x = FingerprintWithState{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[11] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *FingerprintWithState) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*FingerprintWithState) ProtoMessage() {} + +func (x *FingerprintWithState) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[11] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use FingerprintWithState.ProtoReflect.Descriptor instead. +func (*FingerprintWithState) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescGZIP(), []int{11} +} + +func (x *FingerprintWithState) GetFingerprint() string { + if x != nil { + return x.Fingerprint + } + return "" +} + +func (x *FingerprintWithState) GetState() KeyPairState { + if x != nil { + return x.State + } + return KeyPairState_KEY_PAIR_STATE_UNSPECIFIED +} + +// The current state of all active encryption key pairs. +type GetRotationStateResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + NextPageToken string `protobuf:"bytes,1,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + // The state of all active encryption key pairs. 
+ KeyPairStates []*FingerprintWithState `protobuf:"bytes,2,rep,name=key_pair_states,json=keyPairStates,proto3" json:"key_pair_states,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetRotationStateResponse) Reset() { + *x = GetRotationStateResponse{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[12] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetRotationStateResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetRotationStateResponse) ProtoMessage() {} + +func (x *GetRotationStateResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[12] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetRotationStateResponse.ProtoReflect.Descriptor instead. +func (*GetRotationStateResponse) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescGZIP(), []int{12} +} + +func (x *GetRotationStateResponse) GetNextPageToken() string { + if x != nil { + return x.NextPageToken + } + return "" +} + +func (x *GetRotationStateResponse) GetKeyPairStates() []*FingerprintWithState { + if x != nil { + return x.KeyPairStates + } + return nil +} + +// The body of a CompleteRotation request. 
+type CompleteRotationRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *CompleteRotationRequest) Reset() { + *x = CompleteRotationRequest{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[13] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *CompleteRotationRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*CompleteRotationRequest) ProtoMessage() {} + +func (x *CompleteRotationRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[13] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use CompleteRotationRequest.ProtoReflect.Descriptor instead. +func (*CompleteRotationRequest) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescGZIP(), []int{13} +} + +// The return value of a CompleteRotation request. 
+type CompleteRotationResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *CompleteRotationResponse) Reset() { + *x = CompleteRotationResponse{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[14] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *CompleteRotationResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*CompleteRotationResponse) ProtoMessage() {} + +func (x *CompleteRotationResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[14] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use CompleteRotationResponse.ProtoReflect.Descriptor instead. +func (*CompleteRotationResponse) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescGZIP(), []int{14} +} + +// The body of a RollbackRotation request. 
+type RollbackRotationRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *RollbackRotationRequest) Reset() { + *x = RollbackRotationRequest{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[15] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *RollbackRotationRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*RollbackRotationRequest) ProtoMessage() {} + +func (x *RollbackRotationRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[15] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use RollbackRotationRequest.ProtoReflect.Descriptor instead. 
+func (*RollbackRotationRequest) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescGZIP(), []int{15} +} + +// The return value of a RollbackRotation request +type RollbackRotationResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *RollbackRotationResponse) Reset() { + *x = RollbackRotationResponse{} + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[16] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *RollbackRotationResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*RollbackRotationResponse) ProtoMessage() {} + +func (x *RollbackRotationResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes[16] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use RollbackRotationResponse.ProtoReflect.Descriptor instead. 
+func (*RollbackRotationResponse) Descriptor() ([]byte, []int) { + return file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescGZIP(), []int{16} +} + +var File_teleport_recordingencryption_v1_recording_encryption_service_proto protoreflect.FileDescriptor + +const file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDesc = "" + + "\n" + + "Bteleport/recordingencryption/v1/recording_encryption_service.proto\x12\x1fteleport.recordingencryption.v1\x1a\x1fgoogle/protobuf/timestamp.proto\x1a:teleport/recordingencryption/v1/recording_encryption.proto\"\x83\x01\n" + + "\x06Upload\x12\x1b\n" + + "\tupload_id\x18\x01 \x01(\tR\buploadId\x12\x1d\n" + + "\n" + + "session_id\x18\x02 \x01(\tR\tsessionId\x12=\n" + + "\finitiated_at\x18\x03 \x01(\v2\x1a.google.protobuf.TimestampR\vinitiatedAt\"4\n" + + "\x13CreateUploadRequest\x12\x1d\n" + + "\n" + + "session_id\x18\x01 \x01(\tR\tsessionId\"W\n" + + "\x14CreateUploadResponse\x12?\n" + + "\x06upload\x18\x01 \x01(\v2'.teleport.recordingencryption.v1.UploadR\x06upload\"\xa2\x01\n" + + "\x11UploadPartRequest\x12?\n" + + "\x06upload\x18\x01 \x01(\v2'.teleport.recordingencryption.v1.UploadR\x06upload\x12\x1f\n" + + "\vpart_number\x18\x02 \x01(\x03R\n" + + "partNumber\x12\x12\n" + + "\x04part\x18\x03 \x01(\fR\x04part\x12\x17\n" + + "\ais_last\x18\x04 \x01(\bR\x06isLast\";\n" + + "\x04Part\x12\x1f\n" + + "\vpart_number\x18\x01 \x01(\x03R\n" + + "partNumber\x12\x12\n" + + "\x04etag\x18\x02 \x01(\tR\x04etag\"O\n" + + "\x12UploadPartResponse\x129\n" + + "\x04part\x18\x01 \x01(\v2%.teleport.recordingencryption.v1.PartR\x04part\"\x95\x01\n" + + "\x15CompleteUploadRequest\x12?\n" + + "\x06upload\x18\x01 \x01(\v2'.teleport.recordingencryption.v1.UploadR\x06upload\x12;\n" + + "\x05parts\x18\x02 \x03(\v2%.teleport.recordingencryption.v1.PartR\x05parts\"\x18\n" + + "\x16CompleteUploadResponse\"\x12\n" + + "\x10RotateKeyRequest\"\x13\n" + + "\x11RotateKeyResponse\"U\n" + + 
"\x17GetRotationStateRequest\x12\x1b\n" + + "\tpage_size\x18\x01 \x01(\x05R\bpageSize\x12\x1d\n" + + "\n" + + "page_token\x18\x02 \x01(\tR\tpageToken\"}\n" + + "\x14FingerprintWithState\x12 \n" + + "\vfingerprint\x18\x01 \x01(\tR\vfingerprint\x12C\n" + + "\x05state\x18\x02 \x01(\x0e2-.teleport.recordingencryption.v1.KeyPairStateR\x05state\"\xa1\x01\n" + + "\x18GetRotationStateResponse\x12&\n" + + "\x0fnext_page_token\x18\x01 \x01(\tR\rnextPageToken\x12]\n" + + "\x0fkey_pair_states\x18\x02 \x03(\v25.teleport.recordingencryption.v1.FingerprintWithStateR\rkeyPairStates\"\x19\n" + + "\x17CompleteRotationRequest\"\x1a\n" + + "\x18CompleteRotationResponse\"\x19\n" + + "\x17RollbackRotationRequest\"\x1a\n" + + "\x18RollbackRotationResponse2\xa6\a\n" + + "\x1aRecordingEncryptionService\x12{\n" + + "\fCreateUpload\x124.teleport.recordingencryption.v1.CreateUploadRequest\x1a5.teleport.recordingencryption.v1.CreateUploadResponse\x12u\n" + + "\n" + + "UploadPart\x122.teleport.recordingencryption.v1.UploadPartRequest\x1a3.teleport.recordingencryption.v1.UploadPartResponse\x12\x81\x01\n" + + "\x0eCompleteUpload\x126.teleport.recordingencryption.v1.CompleteUploadRequest\x1a7.teleport.recordingencryption.v1.CompleteUploadResponse\x12r\n" + + "\tRotateKey\x121.teleport.recordingencryption.v1.RotateKeyRequest\x1a2.teleport.recordingencryption.v1.RotateKeyResponse\x12\x87\x01\n" + + "\x10GetRotationState\x128.teleport.recordingencryption.v1.GetRotationStateRequest\x1a9.teleport.recordingencryption.v1.GetRotationStateResponse\x12\x87\x01\n" + + "\x10CompleteRotation\x128.teleport.recordingencryption.v1.CompleteRotationRequest\x1a9.teleport.recordingencryption.v1.CompleteRotationResponse\x12\x87\x01\n" + + "\x10RollbackRotation\x128.teleport.recordingencryption.v1.RollbackRotationRequest\x1a9.teleport.recordingencryption.v1.RollbackRotationResponseBjZhgithub.com/gravitational/teleport/api/gen/proto/go/teleport/recordingencryption/v1;recordingencryptionv1b\x06proto3" + +var ( + 
file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescOnce sync.Once + file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescData []byte +) + +func file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescGZIP() []byte { + file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescOnce.Do(func() { + file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDesc), len(file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDesc))) + }) + return file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDescData +} + +var file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes = make([]protoimpl.MessageInfo, 17) +var file_teleport_recordingencryption_v1_recording_encryption_service_proto_goTypes = []any{ + (*Upload)(nil), // 0: teleport.recordingencryption.v1.Upload + (*CreateUploadRequest)(nil), // 1: teleport.recordingencryption.v1.CreateUploadRequest + (*CreateUploadResponse)(nil), // 2: teleport.recordingencryption.v1.CreateUploadResponse + (*UploadPartRequest)(nil), // 3: teleport.recordingencryption.v1.UploadPartRequest + (*Part)(nil), // 4: teleport.recordingencryption.v1.Part + (*UploadPartResponse)(nil), // 5: teleport.recordingencryption.v1.UploadPartResponse + (*CompleteUploadRequest)(nil), // 6: teleport.recordingencryption.v1.CompleteUploadRequest + (*CompleteUploadResponse)(nil), // 7: teleport.recordingencryption.v1.CompleteUploadResponse + (*RotateKeyRequest)(nil), // 8: teleport.recordingencryption.v1.RotateKeyRequest + (*RotateKeyResponse)(nil), // 9: teleport.recordingencryption.v1.RotateKeyResponse + (*GetRotationStateRequest)(nil), // 10: teleport.recordingencryption.v1.GetRotationStateRequest + (*FingerprintWithState)(nil), // 11: 
teleport.recordingencryption.v1.FingerprintWithState + (*GetRotationStateResponse)(nil), // 12: teleport.recordingencryption.v1.GetRotationStateResponse + (*CompleteRotationRequest)(nil), // 13: teleport.recordingencryption.v1.CompleteRotationRequest + (*CompleteRotationResponse)(nil), // 14: teleport.recordingencryption.v1.CompleteRotationResponse + (*RollbackRotationRequest)(nil), // 15: teleport.recordingencryption.v1.RollbackRotationRequest + (*RollbackRotationResponse)(nil), // 16: teleport.recordingencryption.v1.RollbackRotationResponse + (*timestamppb.Timestamp)(nil), // 17: google.protobuf.Timestamp + (KeyPairState)(0), // 18: teleport.recordingencryption.v1.KeyPairState +} +var file_teleport_recordingencryption_v1_recording_encryption_service_proto_depIdxs = []int32{ + 17, // 0: teleport.recordingencryption.v1.Upload.initiated_at:type_name -> google.protobuf.Timestamp + 0, // 1: teleport.recordingencryption.v1.CreateUploadResponse.upload:type_name -> teleport.recordingencryption.v1.Upload + 0, // 2: teleport.recordingencryption.v1.UploadPartRequest.upload:type_name -> teleport.recordingencryption.v1.Upload + 4, // 3: teleport.recordingencryption.v1.UploadPartResponse.part:type_name -> teleport.recordingencryption.v1.Part + 0, // 4: teleport.recordingencryption.v1.CompleteUploadRequest.upload:type_name -> teleport.recordingencryption.v1.Upload + 4, // 5: teleport.recordingencryption.v1.CompleteUploadRequest.parts:type_name -> teleport.recordingencryption.v1.Part + 18, // 6: teleport.recordingencryption.v1.FingerprintWithState.state:type_name -> teleport.recordingencryption.v1.KeyPairState + 11, // 7: teleport.recordingencryption.v1.GetRotationStateResponse.key_pair_states:type_name -> teleport.recordingencryption.v1.FingerprintWithState + 1, // 8: teleport.recordingencryption.v1.RecordingEncryptionService.CreateUpload:input_type -> teleport.recordingencryption.v1.CreateUploadRequest + 3, // 9: 
teleport.recordingencryption.v1.RecordingEncryptionService.UploadPart:input_type -> teleport.recordingencryption.v1.UploadPartRequest + 6, // 10: teleport.recordingencryption.v1.RecordingEncryptionService.CompleteUpload:input_type -> teleport.recordingencryption.v1.CompleteUploadRequest + 8, // 11: teleport.recordingencryption.v1.RecordingEncryptionService.RotateKey:input_type -> teleport.recordingencryption.v1.RotateKeyRequest + 10, // 12: teleport.recordingencryption.v1.RecordingEncryptionService.GetRotationState:input_type -> teleport.recordingencryption.v1.GetRotationStateRequest + 13, // 13: teleport.recordingencryption.v1.RecordingEncryptionService.CompleteRotation:input_type -> teleport.recordingencryption.v1.CompleteRotationRequest + 15, // 14: teleport.recordingencryption.v1.RecordingEncryptionService.RollbackRotation:input_type -> teleport.recordingencryption.v1.RollbackRotationRequest + 2, // 15: teleport.recordingencryption.v1.RecordingEncryptionService.CreateUpload:output_type -> teleport.recordingencryption.v1.CreateUploadResponse + 5, // 16: teleport.recordingencryption.v1.RecordingEncryptionService.UploadPart:output_type -> teleport.recordingencryption.v1.UploadPartResponse + 7, // 17: teleport.recordingencryption.v1.RecordingEncryptionService.CompleteUpload:output_type -> teleport.recordingencryption.v1.CompleteUploadResponse + 9, // 18: teleport.recordingencryption.v1.RecordingEncryptionService.RotateKey:output_type -> teleport.recordingencryption.v1.RotateKeyResponse + 12, // 19: teleport.recordingencryption.v1.RecordingEncryptionService.GetRotationState:output_type -> teleport.recordingencryption.v1.GetRotationStateResponse + 14, // 20: teleport.recordingencryption.v1.RecordingEncryptionService.CompleteRotation:output_type -> teleport.recordingencryption.v1.CompleteRotationResponse + 16, // 21: teleport.recordingencryption.v1.RecordingEncryptionService.RollbackRotation:output_type -> teleport.recordingencryption.v1.RollbackRotationResponse + 15, 
// [15:22] is the sub-list for method output_type + 8, // [8:15] is the sub-list for method input_type + 8, // [8:8] is the sub-list for extension type_name + 8, // [8:8] is the sub-list for extension extendee + 0, // [0:8] is the sub-list for field type_name +} + +func init() { file_teleport_recordingencryption_v1_recording_encryption_service_proto_init() } +func file_teleport_recordingencryption_v1_recording_encryption_service_proto_init() { + if File_teleport_recordingencryption_v1_recording_encryption_service_proto != nil { + return + } + file_teleport_recordingencryption_v1_recording_encryption_proto_init() + type x struct{} + out := protoimpl.TypeBuilder{ + File: protoimpl.DescBuilder{ + GoPackagePath: reflect.TypeOf(x{}).PkgPath(), + RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDesc), len(file_teleport_recordingencryption_v1_recording_encryption_service_proto_rawDesc)), + NumEnums: 0, + NumMessages: 17, + NumExtensions: 0, + NumServices: 1, + }, + GoTypes: file_teleport_recordingencryption_v1_recording_encryption_service_proto_goTypes, + DependencyIndexes: file_teleport_recordingencryption_v1_recording_encryption_service_proto_depIdxs, + MessageInfos: file_teleport_recordingencryption_v1_recording_encryption_service_proto_msgTypes, + }.Build() + File_teleport_recordingencryption_v1_recording_encryption_service_proto = out.File + file_teleport_recordingencryption_v1_recording_encryption_service_proto_goTypes = nil + file_teleport_recordingencryption_v1_recording_encryption_service_proto_depIdxs = nil +} diff --git a/api/gen/proto/go/teleport/recordingencryption/v1/recording_encryption_service_grpc.pb.go b/api/gen/proto/go/teleport/recordingencryption/v1/recording_encryption_service_grpc.pb.go new file mode 100644 index 0000000000000..2227d21eea726 --- /dev/null +++ b/api/gen/proto/go/teleport/recordingencryption/v1/recording_encryption_service_grpc.pb.go @@ -0,0 +1,384 @@ +// 
Copyright 2025 Gravitational, Inc +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Code generated by protoc-gen-go-grpc. DO NOT EDIT. +// versions: +// - protoc-gen-go-grpc v1.5.1 +// - protoc (unknown) +// source: teleport/recordingencryption/v1/recording_encryption_service.proto + +package recordingencryptionv1 + +import ( + context "context" + grpc "google.golang.org/grpc" + codes "google.golang.org/grpc/codes" + status "google.golang.org/grpc/status" +) + +// This is a compile-time assertion to ensure that this generated file +// is compatible with the grpc package it is being compiled against. +// Requires gRPC-Go v1.64.0 or later. 
+const _ = grpc.SupportPackageIsVersion9 + +const ( + RecordingEncryptionService_CreateUpload_FullMethodName = "/teleport.recordingencryption.v1.RecordingEncryptionService/CreateUpload" + RecordingEncryptionService_UploadPart_FullMethodName = "/teleport.recordingencryption.v1.RecordingEncryptionService/UploadPart" + RecordingEncryptionService_CompleteUpload_FullMethodName = "/teleport.recordingencryption.v1.RecordingEncryptionService/CompleteUpload" + RecordingEncryptionService_RotateKey_FullMethodName = "/teleport.recordingencryption.v1.RecordingEncryptionService/RotateKey" + RecordingEncryptionService_GetRotationState_FullMethodName = "/teleport.recordingencryption.v1.RecordingEncryptionService/GetRotationState" + RecordingEncryptionService_CompleteRotation_FullMethodName = "/teleport.recordingencryption.v1.RecordingEncryptionService/CompleteRotation" + RecordingEncryptionService_RollbackRotation_FullMethodName = "/teleport.recordingencryption.v1.RecordingEncryptionService/RollbackRotation" +) + +// RecordingEncryptionServiceClient is the client API for RecordingEncryptionService service. +// +// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream. +// +// RecordingEncryption provides methods to manage cluster encryption configuration resources. +type RecordingEncryptionServiceClient interface { + // CreateUpload begins a multipart upload for an encrypted recording. The + // returned upload ID should be used while uploading parts. + CreateUpload(ctx context.Context, in *CreateUploadRequest, opts ...grpc.CallOption) (*CreateUploadResponse, error) + // UploadPart uploads a part to the given upload ID. + UploadPart(ctx context.Context, in *UploadPartRequest, opts ...grpc.CallOption) (*UploadPartResponse, error) + // CompleteUploadRequest marks a multipart upload as complete. 
+ CompleteUpload(ctx context.Context, in *CompleteUploadRequest, opts ...grpc.CallOption) (*CompleteUploadResponse, error) + // RotateKey rotates the key pair used for encrypting session recording data. + RotateKey(ctx context.Context, in *RotateKeyRequest, opts ...grpc.CallOption) (*RotateKeyResponse, error) + // GetRotationState returns whether or not a rotation is in progress. + GetRotationState(ctx context.Context, in *GetRotationStateRequest, opts ...grpc.CallOption) (*GetRotationStateResponse, error) + // CompleteRotation moves rotated keys out of the active set. + CompleteRotation(ctx context.Context, in *CompleteRotationRequest, opts ...grpc.CallOption) (*CompleteRotationResponse, error) + // RollbackRotation removes active keys and reverts rotating keys back to being active. + RollbackRotation(ctx context.Context, in *RollbackRotationRequest, opts ...grpc.CallOption) (*RollbackRotationResponse, error) +} + +type recordingEncryptionServiceClient struct { + cc grpc.ClientConnInterface +} + +func NewRecordingEncryptionServiceClient(cc grpc.ClientConnInterface) RecordingEncryptionServiceClient { + return &recordingEncryptionServiceClient{cc} +} + +func (c *recordingEncryptionServiceClient) CreateUpload(ctx context.Context, in *CreateUploadRequest, opts ...grpc.CallOption) (*CreateUploadResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(CreateUploadResponse) + err := c.cc.Invoke(ctx, RecordingEncryptionService_CreateUpload_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *recordingEncryptionServiceClient) UploadPart(ctx context.Context, in *UploadPartRequest, opts ...grpc.CallOption) (*UploadPartResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(UploadPartResponse) + err := c.cc.Invoke(ctx, RecordingEncryptionService_UploadPart_FullMethodName, in, out, cOpts...) 
+ if err != nil { + return nil, err + } + return out, nil +} + +func (c *recordingEncryptionServiceClient) CompleteUpload(ctx context.Context, in *CompleteUploadRequest, opts ...grpc.CallOption) (*CompleteUploadResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(CompleteUploadResponse) + err := c.cc.Invoke(ctx, RecordingEncryptionService_CompleteUpload_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *recordingEncryptionServiceClient) RotateKey(ctx context.Context, in *RotateKeyRequest, opts ...grpc.CallOption) (*RotateKeyResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(RotateKeyResponse) + err := c.cc.Invoke(ctx, RecordingEncryptionService_RotateKey_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *recordingEncryptionServiceClient) GetRotationState(ctx context.Context, in *GetRotationStateRequest, opts ...grpc.CallOption) (*GetRotationStateResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(GetRotationStateResponse) + err := c.cc.Invoke(ctx, RecordingEncryptionService_GetRotationState_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *recordingEncryptionServiceClient) CompleteRotation(ctx context.Context, in *CompleteRotationRequest, opts ...grpc.CallOption) (*CompleteRotationResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(CompleteRotationResponse) + err := c.cc.Invoke(ctx, RecordingEncryptionService_CompleteRotation_FullMethodName, in, out, cOpts...) 
+ if err != nil { + return nil, err + } + return out, nil +} + +func (c *recordingEncryptionServiceClient) RollbackRotation(ctx context.Context, in *RollbackRotationRequest, opts ...grpc.CallOption) (*RollbackRotationResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(RollbackRotationResponse) + err := c.cc.Invoke(ctx, RecordingEncryptionService_RollbackRotation_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +// RecordingEncryptionServiceServer is the server API for RecordingEncryptionService service. +// All implementations must embed UnimplementedRecordingEncryptionServiceServer +// for forward compatibility. +// +// RecordingEncryption provides methods to manage cluster encryption configuration resources. +type RecordingEncryptionServiceServer interface { + // CreateUpload begins a multipart upload for an encrypted recording. The + // returned upload ID should be used while uploading parts. + CreateUpload(context.Context, *CreateUploadRequest) (*CreateUploadResponse, error) + // UploadPart uploads a part to the given upload ID. + UploadPart(context.Context, *UploadPartRequest) (*UploadPartResponse, error) + // CompleteUploadRequest marks a multipart upload as complete. + CompleteUpload(context.Context, *CompleteUploadRequest) (*CompleteUploadResponse, error) + // RotateKey rotates the key pair used for encrypting session recording data. + RotateKey(context.Context, *RotateKeyRequest) (*RotateKeyResponse, error) + // GetRotationState returns whether or not a rotation is in progress. + GetRotationState(context.Context, *GetRotationStateRequest) (*GetRotationStateResponse, error) + // CompleteRotation moves rotated keys out of the active set. + CompleteRotation(context.Context, *CompleteRotationRequest) (*CompleteRotationResponse, error) + // RollbackRotation removes active keys and reverts rotating keys back to being active. 
+ RollbackRotation(context.Context, *RollbackRotationRequest) (*RollbackRotationResponse, error) + mustEmbedUnimplementedRecordingEncryptionServiceServer() +} + +// UnimplementedRecordingEncryptionServiceServer must be embedded to have +// forward compatible implementations. +// +// NOTE: this should be embedded by value instead of pointer to avoid a nil +// pointer dereference when methods are called. +type UnimplementedRecordingEncryptionServiceServer struct{} + +func (UnimplementedRecordingEncryptionServiceServer) CreateUpload(context.Context, *CreateUploadRequest) (*CreateUploadResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method CreateUpload not implemented") +} +func (UnimplementedRecordingEncryptionServiceServer) UploadPart(context.Context, *UploadPartRequest) (*UploadPartResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method UploadPart not implemented") +} +func (UnimplementedRecordingEncryptionServiceServer) CompleteUpload(context.Context, *CompleteUploadRequest) (*CompleteUploadResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method CompleteUpload not implemented") +} +func (UnimplementedRecordingEncryptionServiceServer) RotateKey(context.Context, *RotateKeyRequest) (*RotateKeyResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method RotateKey not implemented") +} +func (UnimplementedRecordingEncryptionServiceServer) GetRotationState(context.Context, *GetRotationStateRequest) (*GetRotationStateResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method GetRotationState not implemented") +} +func (UnimplementedRecordingEncryptionServiceServer) CompleteRotation(context.Context, *CompleteRotationRequest) (*CompleteRotationResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method CompleteRotation not implemented") +} +func (UnimplementedRecordingEncryptionServiceServer) RollbackRotation(context.Context, *RollbackRotationRequest) 
(*RollbackRotationResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method RollbackRotation not implemented") +} +func (UnimplementedRecordingEncryptionServiceServer) mustEmbedUnimplementedRecordingEncryptionServiceServer() { +} +func (UnimplementedRecordingEncryptionServiceServer) testEmbeddedByValue() {} + +// UnsafeRecordingEncryptionServiceServer may be embedded to opt out of forward compatibility for this service. +// Use of this interface is not recommended, as added methods to RecordingEncryptionServiceServer will +// result in compilation errors. +type UnsafeRecordingEncryptionServiceServer interface { + mustEmbedUnimplementedRecordingEncryptionServiceServer() +} + +func RegisterRecordingEncryptionServiceServer(s grpc.ServiceRegistrar, srv RecordingEncryptionServiceServer) { + // If the following call panics, it indicates UnimplementedRecordingEncryptionServiceServer was + // embedded by pointer and is nil. This will cause panics if an + // unimplemented method is ever invoked, so we test this at initialization + // time to prevent it from happening at runtime later due to I/O. 
+ if t, ok := srv.(interface{ testEmbeddedByValue() }); ok { + t.testEmbeddedByValue() + } + s.RegisterService(&RecordingEncryptionService_ServiceDesc, srv) +} + +func _RecordingEncryptionService_CreateUpload_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(CreateUploadRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(RecordingEncryptionServiceServer).CreateUpload(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: RecordingEncryptionService_CreateUpload_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(RecordingEncryptionServiceServer).CreateUpload(ctx, req.(*CreateUploadRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _RecordingEncryptionService_UploadPart_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(UploadPartRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(RecordingEncryptionServiceServer).UploadPart(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: RecordingEncryptionService_UploadPart_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(RecordingEncryptionServiceServer).UploadPart(ctx, req.(*UploadPartRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _RecordingEncryptionService_CompleteUpload_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(CompleteUploadRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(RecordingEncryptionServiceServer).CompleteUpload(ctx, in) + } + info := 
&grpc.UnaryServerInfo{ + Server: srv, + FullMethod: RecordingEncryptionService_CompleteUpload_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(RecordingEncryptionServiceServer).CompleteUpload(ctx, req.(*CompleteUploadRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _RecordingEncryptionService_RotateKey_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(RotateKeyRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(RecordingEncryptionServiceServer).RotateKey(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: RecordingEncryptionService_RotateKey_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(RecordingEncryptionServiceServer).RotateKey(ctx, req.(*RotateKeyRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _RecordingEncryptionService_GetRotationState_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(GetRotationStateRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(RecordingEncryptionServiceServer).GetRotationState(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: RecordingEncryptionService_GetRotationState_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(RecordingEncryptionServiceServer).GetRotationState(ctx, req.(*GetRotationStateRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _RecordingEncryptionService_CompleteRotation_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := 
new(CompleteRotationRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(RecordingEncryptionServiceServer).CompleteRotation(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: RecordingEncryptionService_CompleteRotation_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(RecordingEncryptionServiceServer).CompleteRotation(ctx, req.(*CompleteRotationRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _RecordingEncryptionService_RollbackRotation_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(RollbackRotationRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(RecordingEncryptionServiceServer).RollbackRotation(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: RecordingEncryptionService_RollbackRotation_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(RecordingEncryptionServiceServer).RollbackRotation(ctx, req.(*RollbackRotationRequest)) + } + return interceptor(ctx, in, info, handler) +} + +// RecordingEncryptionService_ServiceDesc is the grpc.ServiceDesc for RecordingEncryptionService service. 
+// It's only intended for direct use with grpc.RegisterService, +// and not to be introspected or modified (even as a copy) +var RecordingEncryptionService_ServiceDesc = grpc.ServiceDesc{ + ServiceName: "teleport.recordingencryption.v1.RecordingEncryptionService", + HandlerType: (*RecordingEncryptionServiceServer)(nil), + Methods: []grpc.MethodDesc{ + { + MethodName: "CreateUpload", + Handler: _RecordingEncryptionService_CreateUpload_Handler, + }, + { + MethodName: "UploadPart", + Handler: _RecordingEncryptionService_UploadPart_Handler, + }, + { + MethodName: "CompleteUpload", + Handler: _RecordingEncryptionService_CompleteUpload_Handler, + }, + { + MethodName: "RotateKey", + Handler: _RecordingEncryptionService_RotateKey_Handler, + }, + { + MethodName: "GetRotationState", + Handler: _RecordingEncryptionService_GetRotationState_Handler, + }, + { + MethodName: "CompleteRotation", + Handler: _RecordingEncryptionService_CompleteRotation_Handler, + }, + { + MethodName: "RollbackRotation", + Handler: _RecordingEncryptionService_RollbackRotation_Handler, + }, + }, + Streams: []grpc.StreamDesc{}, + Metadata: "teleport/recordingencryption/v1/recording_encryption_service.proto", +} diff --git a/api/gen/proto/go/teleport/recordingmetadata/v1/recordingmetadata.pb.go b/api/gen/proto/go/teleport/recordingmetadata/v1/recordingmetadata.pb.go new file mode 100644 index 0000000000000..41a032264509a --- /dev/null +++ b/api/gen/proto/go/teleport/recordingmetadata/v1/recordingmetadata.pb.go @@ -0,0 +1,712 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. +// SessionRecordingEvent is an event that occurred during a session recording. + +// Code generated by protoc-gen-go. DO NOT EDIT. +// versions: +// protoc-gen-go v1.36.8 +// protoc (unknown) +// source: teleport/recordingmetadata/v1/recordingmetadata.proto + +package recordingmetadatav1 + +import ( + protoreflect "google.golang.org/protobuf/reflect/protoreflect" + protoimpl "google.golang.org/protobuf/runtime/protoimpl" + durationpb "google.golang.org/protobuf/types/known/durationpb" + timestamppb "google.golang.org/protobuf/types/known/timestamppb" + reflect "reflect" + sync "sync" + unsafe "unsafe" +) + +const ( + // Verify that this generated code is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) + // Verify that runtime/protoimpl is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) +) + +// SessionRecordingType is the type of session recording. +type SessionRecordingType int32 + +const ( + // SessionRecordingTypeUnspecified is the default value for session recording type. + SessionRecordingType_SESSION_RECORDING_TYPE_UNSPECIFIED SessionRecordingType = 0 + // SessionRecordingTypeSsh is an interactive SSH session recording. + SessionRecordingType_SESSION_RECORDING_TYPE_SSH SessionRecordingType = 1 + // SessionRecordingTypeKubernetes is an interactive Kubernetes session recording. + SessionRecordingType_SESSION_RECORDING_TYPE_KUBERNETES SessionRecordingType = 2 +) + +// Enum value maps for SessionRecordingType. 
+var ( + SessionRecordingType_name = map[int32]string{ + 0: "SESSION_RECORDING_TYPE_UNSPECIFIED", + 1: "SESSION_RECORDING_TYPE_SSH", + 2: "SESSION_RECORDING_TYPE_KUBERNETES", + } + SessionRecordingType_value = map[string]int32{ + "SESSION_RECORDING_TYPE_UNSPECIFIED": 0, + "SESSION_RECORDING_TYPE_SSH": 1, + "SESSION_RECORDING_TYPE_KUBERNETES": 2, + } +) + +func (x SessionRecordingType) Enum() *SessionRecordingType { + p := new(SessionRecordingType) + *p = x + return p +} + +func (x SessionRecordingType) String() string { + return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x)) +} + +func (SessionRecordingType) Descriptor() protoreflect.EnumDescriptor { + return file_teleport_recordingmetadata_v1_recordingmetadata_proto_enumTypes[0].Descriptor() +} + +func (SessionRecordingType) Type() protoreflect.EnumType { + return &file_teleport_recordingmetadata_v1_recordingmetadata_proto_enumTypes[0] +} + +func (x SessionRecordingType) Number() protoreflect.EnumNumber { + return protoreflect.EnumNumber(x) +} + +// Deprecated: Use SessionRecordingType.Descriptor instead. +func (SessionRecordingType) EnumDescriptor() ([]byte, []int) { + return file_teleport_recordingmetadata_v1_recordingmetadata_proto_rawDescGZIP(), []int{0} +} + +// SessionRecordingEvent represents an event that occurred during a session recording. +type SessionRecordingEvent struct { + state protoimpl.MessageState `protogen:"open.v1"` + // StartOffset is the start time of the event, relative to the start of the session. + StartOffset *durationpb.Duration `protobuf:"bytes,1,opt,name=start_offset,json=startOffset,proto3" json:"start_offset,omitempty"` + // EndOffset is the end time of the event, relative to the start of the session. 
+ EndOffset *durationpb.Duration `protobuf:"bytes,2,opt,name=end_offset,json=endOffset,proto3" json:"end_offset,omitempty"` + // Types that are valid to be assigned to Event: + // + // *SessionRecordingEvent_Inactivity + // *SessionRecordingEvent_Join + // *SessionRecordingEvent_Resize + Event isSessionRecordingEvent_Event `protobuf_oneof:"event"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *SessionRecordingEvent) Reset() { + *x = SessionRecordingEvent{} + mi := &file_teleport_recordingmetadata_v1_recordingmetadata_proto_msgTypes[0] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *SessionRecordingEvent) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*SessionRecordingEvent) ProtoMessage() {} + +func (x *SessionRecordingEvent) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingmetadata_v1_recordingmetadata_proto_msgTypes[0] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use SessionRecordingEvent.ProtoReflect.Descriptor instead. 
+func (*SessionRecordingEvent) Descriptor() ([]byte, []int) { + return file_teleport_recordingmetadata_v1_recordingmetadata_proto_rawDescGZIP(), []int{0} +} + +func (x *SessionRecordingEvent) GetStartOffset() *durationpb.Duration { + if x != nil { + return x.StartOffset + } + return nil +} + +func (x *SessionRecordingEvent) GetEndOffset() *durationpb.Duration { + if x != nil { + return x.EndOffset + } + return nil +} + +func (x *SessionRecordingEvent) GetEvent() isSessionRecordingEvent_Event { + if x != nil { + return x.Event + } + return nil +} + +func (x *SessionRecordingEvent) GetInactivity() *SessionRecordingInactivityEvent { + if x != nil { + if x, ok := x.Event.(*SessionRecordingEvent_Inactivity); ok { + return x.Inactivity + } + } + return nil +} + +func (x *SessionRecordingEvent) GetJoin() *SessionRecordingJoinEvent { + if x != nil { + if x, ok := x.Event.(*SessionRecordingEvent_Join); ok { + return x.Join + } + } + return nil +} + +func (x *SessionRecordingEvent) GetResize() *SessionRecordingResizeEvent { + if x != nil { + if x, ok := x.Event.(*SessionRecordingEvent_Resize); ok { + return x.Resize + } + } + return nil +} + +type isSessionRecordingEvent_Event interface { + isSessionRecordingEvent_Event() +} + +type SessionRecordingEvent_Inactivity struct { + // Inactivity is an event that indicates inactivity during the session. + Inactivity *SessionRecordingInactivityEvent `protobuf:"bytes,3,opt,name=inactivity,proto3,oneof"` +} + +type SessionRecordingEvent_Join struct { + // Join is an event that indicates a user joined the session. + Join *SessionRecordingJoinEvent `protobuf:"bytes,4,opt,name=join,proto3,oneof"` +} + +type SessionRecordingEvent_Resize struct { + // Resize is an event that indicates the terminal was resized. 
+ Resize *SessionRecordingResizeEvent `protobuf:"bytes,5,opt,name=resize,proto3,oneof"` +} + +func (*SessionRecordingEvent_Inactivity) isSessionRecordingEvent_Event() {} + +func (*SessionRecordingEvent_Join) isSessionRecordingEvent_Event() {} + +func (*SessionRecordingEvent_Resize) isSessionRecordingEvent_Event() {} + +// SessionRecordingInactivityEvent is an event that indicates inactivity during the session. +type SessionRecordingInactivityEvent struct { + state protoimpl.MessageState `protogen:"open.v1"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *SessionRecordingInactivityEvent) Reset() { + *x = SessionRecordingInactivityEvent{} + mi := &file_teleport_recordingmetadata_v1_recordingmetadata_proto_msgTypes[1] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *SessionRecordingInactivityEvent) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*SessionRecordingInactivityEvent) ProtoMessage() {} + +func (x *SessionRecordingInactivityEvent) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingmetadata_v1_recordingmetadata_proto_msgTypes[1] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use SessionRecordingInactivityEvent.ProtoReflect.Descriptor instead. +func (*SessionRecordingInactivityEvent) Descriptor() ([]byte, []int) { + return file_teleport_recordingmetadata_v1_recordingmetadata_proto_rawDescGZIP(), []int{1} +} + +// SessionRecordingJoinEvent is an event that indicates a user joined the session. +type SessionRecordingJoinEvent struct { + state protoimpl.MessageState `protogen:"open.v1"` + // User is the name of the user who joined the session. 
+ User string `protobuf:"bytes,1,opt,name=user,proto3" json:"user,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *SessionRecordingJoinEvent) Reset() { + *x = SessionRecordingJoinEvent{} + mi := &file_teleport_recordingmetadata_v1_recordingmetadata_proto_msgTypes[2] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *SessionRecordingJoinEvent) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*SessionRecordingJoinEvent) ProtoMessage() {} + +func (x *SessionRecordingJoinEvent) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingmetadata_v1_recordingmetadata_proto_msgTypes[2] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use SessionRecordingJoinEvent.ProtoReflect.Descriptor instead. +func (*SessionRecordingJoinEvent) Descriptor() ([]byte, []int) { + return file_teleport_recordingmetadata_v1_recordingmetadata_proto_rawDescGZIP(), []int{2} +} + +func (x *SessionRecordingJoinEvent) GetUser() string { + if x != nil { + return x.User + } + return "" +} + +// SessionRecordingResizeEvent is an event that indicates the terminal was resized. +type SessionRecordingResizeEvent struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Cols is the number of columns in the terminal. + Cols int32 `protobuf:"varint,1,opt,name=cols,proto3" json:"cols,omitempty"` + // Rows is the number of rows in the terminal. 
+ Rows int32 `protobuf:"varint,2,opt,name=rows,proto3" json:"rows,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *SessionRecordingResizeEvent) Reset() { + *x = SessionRecordingResizeEvent{} + mi := &file_teleport_recordingmetadata_v1_recordingmetadata_proto_msgTypes[3] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *SessionRecordingResizeEvent) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*SessionRecordingResizeEvent) ProtoMessage() {} + +func (x *SessionRecordingResizeEvent) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingmetadata_v1_recordingmetadata_proto_msgTypes[3] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use SessionRecordingResizeEvent.ProtoReflect.Descriptor instead. +func (*SessionRecordingResizeEvent) Descriptor() ([]byte, []int) { + return file_teleport_recordingmetadata_v1_recordingmetadata_proto_rawDescGZIP(), []int{3} +} + +func (x *SessionRecordingResizeEvent) GetCols() int32 { + if x != nil { + return x.Cols + } + return 0 +} + +func (x *SessionRecordingResizeEvent) GetRows() int32 { + if x != nil { + return x.Rows + } + return 0 +} + +// SessionRecordingMetadata contains metadata for a session recording. +type SessionRecordingMetadata struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Duration is the duration of the session recording. + Duration *durationpb.Duration `protobuf:"bytes,1,opt,name=duration,proto3" json:"duration,omitempty"` + // Events is the events that occurred during the session. + Events []*SessionRecordingEvent `protobuf:"bytes,2,rep,name=events,proto3" json:"events,omitempty"` + // StartCols is the number of columns in the terminal at the start of the session. 
+ StartCols int32 `protobuf:"varint,3,opt,name=start_cols,json=startCols,proto3" json:"start_cols,omitempty"` + // StartRows is the number of rows in the terminal at the start of the session. + StartRows int32 `protobuf:"varint,4,opt,name=start_rows,json=startRows,proto3" json:"start_rows,omitempty"` + // StartTime is the start time of the session recording. + StartTime *timestamppb.Timestamp `protobuf:"bytes,5,opt,name=start_time,json=startTime,proto3" json:"start_time,omitempty"` + // EndTime is the end time of the session recording. + EndTime *timestamppb.Timestamp `protobuf:"bytes,6,opt,name=end_time,json=endTime,proto3" json:"end_time,omitempty"` + // ClusterName is the name of the cluster where the session recording took place. + ClusterName string `protobuf:"bytes,7,opt,name=cluster_name,json=clusterName,proto3" json:"cluster_name,omitempty"` + // User is the user whose session is being recorded. + User string `protobuf:"bytes,8,opt,name=user,proto3" json:"user,omitempty"` + // ResourceName is the name of the resource that was connected to. + ResourceName string `protobuf:"bytes,9,opt,name=resource_name,json=resourceName,proto3" json:"resource_name,omitempty"` + // Type is the type of session recording. 
+ Type SessionRecordingType `protobuf:"varint,10,opt,name=type,proto3,enum=teleport.recordingmetadata.v1.SessionRecordingType" json:"type,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *SessionRecordingMetadata) Reset() { + *x = SessionRecordingMetadata{} + mi := &file_teleport_recordingmetadata_v1_recordingmetadata_proto_msgTypes[4] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *SessionRecordingMetadata) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*SessionRecordingMetadata) ProtoMessage() {} + +func (x *SessionRecordingMetadata) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingmetadata_v1_recordingmetadata_proto_msgTypes[4] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use SessionRecordingMetadata.ProtoReflect.Descriptor instead. 
+func (*SessionRecordingMetadata) Descriptor() ([]byte, []int) { + return file_teleport_recordingmetadata_v1_recordingmetadata_proto_rawDescGZIP(), []int{4} +} + +func (x *SessionRecordingMetadata) GetDuration() *durationpb.Duration { + if x != nil { + return x.Duration + } + return nil +} + +func (x *SessionRecordingMetadata) GetEvents() []*SessionRecordingEvent { + if x != nil { + return x.Events + } + return nil +} + +func (x *SessionRecordingMetadata) GetStartCols() int32 { + if x != nil { + return x.StartCols + } + return 0 +} + +func (x *SessionRecordingMetadata) GetStartRows() int32 { + if x != nil { + return x.StartRows + } + return 0 +} + +func (x *SessionRecordingMetadata) GetStartTime() *timestamppb.Timestamp { + if x != nil { + return x.StartTime + } + return nil +} + +func (x *SessionRecordingMetadata) GetEndTime() *timestamppb.Timestamp { + if x != nil { + return x.EndTime + } + return nil +} + +func (x *SessionRecordingMetadata) GetClusterName() string { + if x != nil { + return x.ClusterName + } + return "" +} + +func (x *SessionRecordingMetadata) GetUser() string { + if x != nil { + return x.User + } + return "" +} + +func (x *SessionRecordingMetadata) GetResourceName() string { + if x != nil { + return x.ResourceName + } + return "" +} + +func (x *SessionRecordingMetadata) GetType() SessionRecordingType { + if x != nil { + return x.Type + } + return SessionRecordingType_SESSION_RECORDING_TYPE_UNSPECIFIED +} + +// SessionRecordingThumbnail is a thumbnail of a session recording. +type SessionRecordingThumbnail struct { + state protoimpl.MessageState `protogen:"open.v1"` + // SVG is the SVG image of the thumbnail. + Svg []byte `protobuf:"bytes,1,opt,name=svg,proto3" json:"svg,omitempty"` + // Cols is the number of columns in the terminal. + Cols int32 `protobuf:"varint,2,opt,name=cols,proto3" json:"cols,omitempty"` + // Rows is the number of rows in the terminal. 
+ Rows int32 `protobuf:"varint,3,opt,name=rows,proto3" json:"rows,omitempty"` + // CursorX is the X coordinate of the cursor. + CursorX int32 `protobuf:"varint,4,opt,name=cursor_x,json=cursorX,proto3" json:"cursor_x,omitempty"` + // CursorY is the Y coordinate of the cursor. + CursorY int32 `protobuf:"varint,5,opt,name=cursor_y,json=cursorY,proto3" json:"cursor_y,omitempty"` + // StartOffset is the start time of the thumbnail, relative to the start of the session. + StartOffset *durationpb.Duration `protobuf:"bytes,6,opt,name=start_offset,json=startOffset,proto3" json:"start_offset,omitempty"` + // EndOffset is the end time of the thumbnail, relative to the start of the session. + EndOffset *durationpb.Duration `protobuf:"bytes,7,opt,name=end_offset,json=endOffset,proto3" json:"end_offset,omitempty"` + // CursorVisible indicates whether the cursor is visible. + CursorVisible bool `protobuf:"varint,8,opt,name=cursor_visible,json=cursorVisible,proto3" json:"cursor_visible,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *SessionRecordingThumbnail) Reset() { + *x = SessionRecordingThumbnail{} + mi := &file_teleport_recordingmetadata_v1_recordingmetadata_proto_msgTypes[5] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *SessionRecordingThumbnail) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*SessionRecordingThumbnail) ProtoMessage() {} + +func (x *SessionRecordingThumbnail) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingmetadata_v1_recordingmetadata_proto_msgTypes[5] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use SessionRecordingThumbnail.ProtoReflect.Descriptor instead. 
+func (*SessionRecordingThumbnail) Descriptor() ([]byte, []int) { + return file_teleport_recordingmetadata_v1_recordingmetadata_proto_rawDescGZIP(), []int{5} +} + +func (x *SessionRecordingThumbnail) GetSvg() []byte { + if x != nil { + return x.Svg + } + return nil +} + +func (x *SessionRecordingThumbnail) GetCols() int32 { + if x != nil { + return x.Cols + } + return 0 +} + +func (x *SessionRecordingThumbnail) GetRows() int32 { + if x != nil { + return x.Rows + } + return 0 +} + +func (x *SessionRecordingThumbnail) GetCursorX() int32 { + if x != nil { + return x.CursorX + } + return 0 +} + +func (x *SessionRecordingThumbnail) GetCursorY() int32 { + if x != nil { + return x.CursorY + } + return 0 +} + +func (x *SessionRecordingThumbnail) GetStartOffset() *durationpb.Duration { + if x != nil { + return x.StartOffset + } + return nil +} + +func (x *SessionRecordingThumbnail) GetEndOffset() *durationpb.Duration { + if x != nil { + return x.EndOffset + } + return nil +} + +func (x *SessionRecordingThumbnail) GetCursorVisible() bool { + if x != nil { + return x.CursorVisible + } + return false +} + +var File_teleport_recordingmetadata_v1_recordingmetadata_proto protoreflect.FileDescriptor + +const file_teleport_recordingmetadata_v1_recordingmetadata_proto_rawDesc = "" + + "\n" + + "5teleport/recordingmetadata/v1/recordingmetadata.proto\x12\x1dteleport.recordingmetadata.v1\x1a\x1egoogle/protobuf/duration.proto\x1a\x1fgoogle/protobuf/timestamp.proto\"\xa0\x03\n" + + "\x15SessionRecordingEvent\x12<\n" + + "\fstart_offset\x18\x01 \x01(\v2\x19.google.protobuf.DurationR\vstartOffset\x128\n" + + "\n" + + "end_offset\x18\x02 \x01(\v2\x19.google.protobuf.DurationR\tendOffset\x12`\n" + + "\n" + + "inactivity\x18\x03 \x01(\v2>.teleport.recordingmetadata.v1.SessionRecordingInactivityEventH\x00R\n" + + "inactivity\x12N\n" + + "\x04join\x18\x04 \x01(\v28.teleport.recordingmetadata.v1.SessionRecordingJoinEventH\x00R\x04join\x12T\n" + + "\x06resize\x18\x05 
\x01(\v2:.teleport.recordingmetadata.v1.SessionRecordingResizeEventH\x00R\x06resizeB\a\n" + + "\x05event\"!\n" + + "\x1fSessionRecordingInactivityEvent\"/\n" + + "\x19SessionRecordingJoinEvent\x12\x12\n" + + "\x04user\x18\x01 \x01(\tR\x04user\"E\n" + + "\x1bSessionRecordingResizeEvent\x12\x12\n" + + "\x04cols\x18\x01 \x01(\x05R\x04cols\x12\x12\n" + + "\x04rows\x18\x02 \x01(\x05R\x04rows\"\xf4\x03\n" + + "\x18SessionRecordingMetadata\x125\n" + + "\bduration\x18\x01 \x01(\v2\x19.google.protobuf.DurationR\bduration\x12L\n" + + "\x06events\x18\x02 \x03(\v24.teleport.recordingmetadata.v1.SessionRecordingEventR\x06events\x12\x1d\n" + + "\n" + + "start_cols\x18\x03 \x01(\x05R\tstartCols\x12\x1d\n" + + "\n" + + "start_rows\x18\x04 \x01(\x05R\tstartRows\x129\n" + + "\n" + + "start_time\x18\x05 \x01(\v2\x1a.google.protobuf.TimestampR\tstartTime\x125\n" + + "\bend_time\x18\x06 \x01(\v2\x1a.google.protobuf.TimestampR\aendTime\x12!\n" + + "\fcluster_name\x18\a \x01(\tR\vclusterName\x12\x12\n" + + "\x04user\x18\b \x01(\tR\x04user\x12#\n" + + "\rresource_name\x18\t \x01(\tR\fresourceName\x12G\n" + + "\x04type\x18\n" + + " \x01(\x0e23.teleport.recordingmetadata.v1.SessionRecordingTypeR\x04type\"\xaa\x02\n" + + "\x19SessionRecordingThumbnail\x12\x10\n" + + "\x03svg\x18\x01 \x01(\fR\x03svg\x12\x12\n" + + "\x04cols\x18\x02 \x01(\x05R\x04cols\x12\x12\n" + + "\x04rows\x18\x03 \x01(\x05R\x04rows\x12\x19\n" + + "\bcursor_x\x18\x04 \x01(\x05R\acursorX\x12\x19\n" + + "\bcursor_y\x18\x05 \x01(\x05R\acursorY\x12<\n" + + "\fstart_offset\x18\x06 \x01(\v2\x19.google.protobuf.DurationR\vstartOffset\x128\n" + + "\n" + + "end_offset\x18\a \x01(\v2\x19.google.protobuf.DurationR\tendOffset\x12%\n" + + "\x0ecursor_visible\x18\b \x01(\bR\rcursorVisible*\x85\x01\n" + + "\x14SessionRecordingType\x12&\n" + + "\"SESSION_RECORDING_TYPE_UNSPECIFIED\x10\x00\x12\x1e\n" + + "\x1aSESSION_RECORDING_TYPE_SSH\x10\x01\x12%\n" + + 
"!SESSION_RECORDING_TYPE_KUBERNETES\x10\x02BfZdgithub.com/gravitational/teleport/api/gen/proto/go/teleport/recordingmetadata/v1;recordingmetadatav1b\x06proto3" + +var ( + file_teleport_recordingmetadata_v1_recordingmetadata_proto_rawDescOnce sync.Once + file_teleport_recordingmetadata_v1_recordingmetadata_proto_rawDescData []byte +) + +func file_teleport_recordingmetadata_v1_recordingmetadata_proto_rawDescGZIP() []byte { + file_teleport_recordingmetadata_v1_recordingmetadata_proto_rawDescOnce.Do(func() { + file_teleport_recordingmetadata_v1_recordingmetadata_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_teleport_recordingmetadata_v1_recordingmetadata_proto_rawDesc), len(file_teleport_recordingmetadata_v1_recordingmetadata_proto_rawDesc))) + }) + return file_teleport_recordingmetadata_v1_recordingmetadata_proto_rawDescData +} + +var file_teleport_recordingmetadata_v1_recordingmetadata_proto_enumTypes = make([]protoimpl.EnumInfo, 1) +var file_teleport_recordingmetadata_v1_recordingmetadata_proto_msgTypes = make([]protoimpl.MessageInfo, 6) +var file_teleport_recordingmetadata_v1_recordingmetadata_proto_goTypes = []any{ + (SessionRecordingType)(0), // 0: teleport.recordingmetadata.v1.SessionRecordingType + (*SessionRecordingEvent)(nil), // 1: teleport.recordingmetadata.v1.SessionRecordingEvent + (*SessionRecordingInactivityEvent)(nil), // 2: teleport.recordingmetadata.v1.SessionRecordingInactivityEvent + (*SessionRecordingJoinEvent)(nil), // 3: teleport.recordingmetadata.v1.SessionRecordingJoinEvent + (*SessionRecordingResizeEvent)(nil), // 4: teleport.recordingmetadata.v1.SessionRecordingResizeEvent + (*SessionRecordingMetadata)(nil), // 5: teleport.recordingmetadata.v1.SessionRecordingMetadata + (*SessionRecordingThumbnail)(nil), // 6: teleport.recordingmetadata.v1.SessionRecordingThumbnail + (*durationpb.Duration)(nil), // 7: google.protobuf.Duration + (*timestamppb.Timestamp)(nil), // 8: google.protobuf.Timestamp +} +var 
file_teleport_recordingmetadata_v1_recordingmetadata_proto_depIdxs = []int32{ + 7, // 0: teleport.recordingmetadata.v1.SessionRecordingEvent.start_offset:type_name -> google.protobuf.Duration + 7, // 1: teleport.recordingmetadata.v1.SessionRecordingEvent.end_offset:type_name -> google.protobuf.Duration + 2, // 2: teleport.recordingmetadata.v1.SessionRecordingEvent.inactivity:type_name -> teleport.recordingmetadata.v1.SessionRecordingInactivityEvent + 3, // 3: teleport.recordingmetadata.v1.SessionRecordingEvent.join:type_name -> teleport.recordingmetadata.v1.SessionRecordingJoinEvent + 4, // 4: teleport.recordingmetadata.v1.SessionRecordingEvent.resize:type_name -> teleport.recordingmetadata.v1.SessionRecordingResizeEvent + 7, // 5: teleport.recordingmetadata.v1.SessionRecordingMetadata.duration:type_name -> google.protobuf.Duration + 1, // 6: teleport.recordingmetadata.v1.SessionRecordingMetadata.events:type_name -> teleport.recordingmetadata.v1.SessionRecordingEvent + 8, // 7: teleport.recordingmetadata.v1.SessionRecordingMetadata.start_time:type_name -> google.protobuf.Timestamp + 8, // 8: teleport.recordingmetadata.v1.SessionRecordingMetadata.end_time:type_name -> google.protobuf.Timestamp + 0, // 9: teleport.recordingmetadata.v1.SessionRecordingMetadata.type:type_name -> teleport.recordingmetadata.v1.SessionRecordingType + 7, // 10: teleport.recordingmetadata.v1.SessionRecordingThumbnail.start_offset:type_name -> google.protobuf.Duration + 7, // 11: teleport.recordingmetadata.v1.SessionRecordingThumbnail.end_offset:type_name -> google.protobuf.Duration + 12, // [12:12] is the sub-list for method output_type + 12, // [12:12] is the sub-list for method input_type + 12, // [12:12] is the sub-list for extension type_name + 12, // [12:12] is the sub-list for extension extendee + 0, // [0:12] is the sub-list for field type_name +} + +func init() { file_teleport_recordingmetadata_v1_recordingmetadata_proto_init() } +func 
file_teleport_recordingmetadata_v1_recordingmetadata_proto_init() { + if File_teleport_recordingmetadata_v1_recordingmetadata_proto != nil { + return + } + file_teleport_recordingmetadata_v1_recordingmetadata_proto_msgTypes[0].OneofWrappers = []any{ + (*SessionRecordingEvent_Inactivity)(nil), + (*SessionRecordingEvent_Join)(nil), + (*SessionRecordingEvent_Resize)(nil), + } + type x struct{} + out := protoimpl.TypeBuilder{ + File: protoimpl.DescBuilder{ + GoPackagePath: reflect.TypeOf(x{}).PkgPath(), + RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_recordingmetadata_v1_recordingmetadata_proto_rawDesc), len(file_teleport_recordingmetadata_v1_recordingmetadata_proto_rawDesc)), + NumEnums: 1, + NumMessages: 6, + NumExtensions: 0, + NumServices: 0, + }, + GoTypes: file_teleport_recordingmetadata_v1_recordingmetadata_proto_goTypes, + DependencyIndexes: file_teleport_recordingmetadata_v1_recordingmetadata_proto_depIdxs, + EnumInfos: file_teleport_recordingmetadata_v1_recordingmetadata_proto_enumTypes, + MessageInfos: file_teleport_recordingmetadata_v1_recordingmetadata_proto_msgTypes, + }.Build() + File_teleport_recordingmetadata_v1_recordingmetadata_proto = out.File + file_teleport_recordingmetadata_v1_recordingmetadata_proto_goTypes = nil + file_teleport_recordingmetadata_v1_recordingmetadata_proto_depIdxs = nil +} diff --git a/api/gen/proto/go/teleport/recordingmetadata/v1/recordingmetadata_service.pb.go b/api/gen/proto/go/teleport/recordingmetadata/v1/recordingmetadata_service.pb.go new file mode 100644 index 0000000000000..d4cc668e6ca15 --- /dev/null +++ b/api/gen/proto/go/teleport/recordingmetadata/v1/recordingmetadata_service.pb.go @@ -0,0 +1,346 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Code generated by protoc-gen-go. DO NOT EDIT. +// versions: +// protoc-gen-go v1.36.8 +// protoc (unknown) +// source: teleport/recordingmetadata/v1/recordingmetadata_service.proto + +package recordingmetadatav1 + +import ( + protoreflect "google.golang.org/protobuf/reflect/protoreflect" + protoimpl "google.golang.org/protobuf/runtime/protoimpl" + reflect "reflect" + sync "sync" + unsafe "unsafe" +) + +const ( + // Verify that this generated code is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) + // Verify that runtime/protoimpl is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) +) + +// GetMetadataResponseChunk is a chunked response for retrieving a session's metadata. +// It can contain either metadata or a frame from the session recording. 
+type GetMetadataResponseChunk struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Types that are valid to be assigned to Chunk: + // + // *GetMetadataResponseChunk_Metadata + // *GetMetadataResponseChunk_Frame + Chunk isGetMetadataResponseChunk_Chunk `protobuf_oneof:"chunk"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetMetadataResponseChunk) Reset() { + *x = GetMetadataResponseChunk{} + mi := &file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_msgTypes[0] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetMetadataResponseChunk) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetMetadataResponseChunk) ProtoMessage() {} + +func (x *GetMetadataResponseChunk) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_msgTypes[0] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetMetadataResponseChunk.ProtoReflect.Descriptor instead. 
+func (*GetMetadataResponseChunk) Descriptor() ([]byte, []int) { + return file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_rawDescGZIP(), []int{0} +} + +func (x *GetMetadataResponseChunk) GetChunk() isGetMetadataResponseChunk_Chunk { + if x != nil { + return x.Chunk + } + return nil +} + +func (x *GetMetadataResponseChunk) GetMetadata() *SessionRecordingMetadata { + if x != nil { + if x, ok := x.Chunk.(*GetMetadataResponseChunk_Metadata); ok { + return x.Metadata + } + } + return nil +} + +func (x *GetMetadataResponseChunk) GetFrame() *SessionRecordingThumbnail { + if x != nil { + if x, ok := x.Chunk.(*GetMetadataResponseChunk_Frame); ok { + return x.Frame + } + } + return nil +} + +type isGetMetadataResponseChunk_Chunk interface { + isGetMetadataResponseChunk_Chunk() +} + +type GetMetadataResponseChunk_Metadata struct { + // Metadata contains the metadata of the session recording. + Metadata *SessionRecordingMetadata `protobuf:"bytes,1,opt,name=metadata,proto3,oneof"` +} + +type GetMetadataResponseChunk_Frame struct { + // Frame contains a frame from the session recording. + Frame *SessionRecordingThumbnail `protobuf:"bytes,2,opt,name=frame,proto3,oneof"` +} + +func (*GetMetadataResponseChunk_Metadata) isGetMetadataResponseChunk_Chunk() {} + +func (*GetMetadataResponseChunk_Frame) isGetMetadataResponseChunk_Chunk() {} + +// GetThumbnailRequest is a request for a session's thumbnail. +type GetThumbnailRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // SessionId is the ID of the session whose thumbnail is being requested. 
+ SessionId string `protobuf:"bytes,1,opt,name=session_id,json=sessionId,proto3" json:"session_id,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetThumbnailRequest) Reset() { + *x = GetThumbnailRequest{} + mi := &file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_msgTypes[1] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetThumbnailRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetThumbnailRequest) ProtoMessage() {} + +func (x *GetThumbnailRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_msgTypes[1] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetThumbnailRequest.ProtoReflect.Descriptor instead. +func (*GetThumbnailRequest) Descriptor() ([]byte, []int) { + return file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_rawDescGZIP(), []int{1} +} + +func (x *GetThumbnailRequest) GetSessionId() string { + if x != nil { + return x.SessionId + } + return "" +} + +// GetThumbnailResponse is a response for retrieving a session's thumbnail. +type GetThumbnailResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Thumbnail is the thumbnail for the session. 
+ Thumbnail *SessionRecordingThumbnail `protobuf:"bytes,1,opt,name=thumbnail,proto3" json:"thumbnail,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetThumbnailResponse) Reset() { + *x = GetThumbnailResponse{} + mi := &file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_msgTypes[2] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetThumbnailResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetThumbnailResponse) ProtoMessage() {} + +func (x *GetThumbnailResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_msgTypes[2] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetThumbnailResponse.ProtoReflect.Descriptor instead. +func (*GetThumbnailResponse) Descriptor() ([]byte, []int) { + return file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_rawDescGZIP(), []int{2} +} + +func (x *GetThumbnailResponse) GetThumbnail() *SessionRecordingThumbnail { + if x != nil { + return x.Thumbnail + } + return nil +} + +// GetMetadataRequest is a request for retrieving a session's metadata. +type GetMetadataRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // SessionId is the ID of the session whose metadata is being requested. 
+ SessionId string `protobuf:"bytes,1,opt,name=session_id,json=sessionId,proto3" json:"session_id,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetMetadataRequest) Reset() { + *x = GetMetadataRequest{} + mi := &file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_msgTypes[3] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetMetadataRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetMetadataRequest) ProtoMessage() {} + +func (x *GetMetadataRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_msgTypes[3] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetMetadataRequest.ProtoReflect.Descriptor instead. +func (*GetMetadataRequest) Descriptor() ([]byte, []int) { + return file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_rawDescGZIP(), []int{3} +} + +func (x *GetMetadataRequest) GetSessionId() string { + if x != nil { + return x.SessionId + } + return "" +} + +var File_teleport_recordingmetadata_v1_recordingmetadata_service_proto protoreflect.FileDescriptor + +const file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_rawDesc = "" + + "\n" + + "=teleport/recordingmetadata/v1/recordingmetadata_service.proto\x12\x1dteleport.recordingmetadata.v1\x1a5teleport/recordingmetadata/v1/recordingmetadata.proto\"\xcc\x01\n" + + "\x18GetMetadataResponseChunk\x12U\n" + + "\bmetadata\x18\x01 \x01(\v27.teleport.recordingmetadata.v1.SessionRecordingMetadataH\x00R\bmetadata\x12P\n" + + "\x05frame\x18\x02 \x01(\v28.teleport.recordingmetadata.v1.SessionRecordingThumbnailH\x00R\x05frameB\a\n" + + "\x05chunk\"4\n" + + "\x13GetThumbnailRequest\x12\x1d\n" + + "\n" + + "session_id\x18\x01 
\x01(\tR\tsessionId\"n\n" + + "\x14GetThumbnailResponse\x12V\n" + + "\tthumbnail\x18\x01 \x01(\v28.teleport.recordingmetadata.v1.SessionRecordingThumbnailR\tthumbnail\"3\n" + + "\x12GetMetadataRequest\x12\x1d\n" + + "\n" + + "session_id\x18\x01 \x01(\tR\tsessionId2\x90\x02\n" + + "\x18RecordingMetadataService\x12w\n" + + "\fGetThumbnail\x122.teleport.recordingmetadata.v1.GetThumbnailRequest\x1a3.teleport.recordingmetadata.v1.GetThumbnailResponse\x12{\n" + + "\vGetMetadata\x121.teleport.recordingmetadata.v1.GetMetadataRequest\x1a7.teleport.recordingmetadata.v1.GetMetadataResponseChunk0\x01BfZdgithub.com/gravitational/teleport/api/gen/proto/go/teleport/recordingmetadata/v1;recordingmetadatav1b\x06proto3" + +var ( + file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_rawDescOnce sync.Once + file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_rawDescData []byte +) + +func file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_rawDescGZIP() []byte { + file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_rawDescOnce.Do(func() { + file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_rawDesc), len(file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_rawDesc))) + }) + return file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_rawDescData +} + +var file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_msgTypes = make([]protoimpl.MessageInfo, 4) +var file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_goTypes = []any{ + (*GetMetadataResponseChunk)(nil), // 0: teleport.recordingmetadata.v1.GetMetadataResponseChunk + (*GetThumbnailRequest)(nil), // 1: teleport.recordingmetadata.v1.GetThumbnailRequest + (*GetThumbnailResponse)(nil), // 2: teleport.recordingmetadata.v1.GetThumbnailResponse + (*GetMetadataRequest)(nil), 
// 3: teleport.recordingmetadata.v1.GetMetadataRequest + (*SessionRecordingMetadata)(nil), // 4: teleport.recordingmetadata.v1.SessionRecordingMetadata + (*SessionRecordingThumbnail)(nil), // 5: teleport.recordingmetadata.v1.SessionRecordingThumbnail +} +var file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_depIdxs = []int32{ + 4, // 0: teleport.recordingmetadata.v1.GetMetadataResponseChunk.metadata:type_name -> teleport.recordingmetadata.v1.SessionRecordingMetadata + 5, // 1: teleport.recordingmetadata.v1.GetMetadataResponseChunk.frame:type_name -> teleport.recordingmetadata.v1.SessionRecordingThumbnail + 5, // 2: teleport.recordingmetadata.v1.GetThumbnailResponse.thumbnail:type_name -> teleport.recordingmetadata.v1.SessionRecordingThumbnail + 1, // 3: teleport.recordingmetadata.v1.RecordingMetadataService.GetThumbnail:input_type -> teleport.recordingmetadata.v1.GetThumbnailRequest + 3, // 4: teleport.recordingmetadata.v1.RecordingMetadataService.GetMetadata:input_type -> teleport.recordingmetadata.v1.GetMetadataRequest + 2, // 5: teleport.recordingmetadata.v1.RecordingMetadataService.GetThumbnail:output_type -> teleport.recordingmetadata.v1.GetThumbnailResponse + 0, // 6: teleport.recordingmetadata.v1.RecordingMetadataService.GetMetadata:output_type -> teleport.recordingmetadata.v1.GetMetadataResponseChunk + 5, // [5:7] is the sub-list for method output_type + 3, // [3:5] is the sub-list for method input_type + 3, // [3:3] is the sub-list for extension type_name + 3, // [3:3] is the sub-list for extension extendee + 0, // [0:3] is the sub-list for field type_name +} + +func init() { file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_init() } +func file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_init() { + if File_teleport_recordingmetadata_v1_recordingmetadata_service_proto != nil { + return + } + file_teleport_recordingmetadata_v1_recordingmetadata_proto_init() + 
file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_msgTypes[0].OneofWrappers = []any{ + (*GetMetadataResponseChunk_Metadata)(nil), + (*GetMetadataResponseChunk_Frame)(nil), + } + type x struct{} + out := protoimpl.TypeBuilder{ + File: protoimpl.DescBuilder{ + GoPackagePath: reflect.TypeOf(x{}).PkgPath(), + RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_rawDesc), len(file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_rawDesc)), + NumEnums: 0, + NumMessages: 4, + NumExtensions: 0, + NumServices: 1, + }, + GoTypes: file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_goTypes, + DependencyIndexes: file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_depIdxs, + MessageInfos: file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_msgTypes, + }.Build() + File_teleport_recordingmetadata_v1_recordingmetadata_service_proto = out.File + file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_goTypes = nil + file_teleport_recordingmetadata_v1_recordingmetadata_service_proto_depIdxs = nil +} diff --git a/api/gen/proto/go/teleport/recordingmetadata/v1/recordingmetadata_service_grpc.pb.go b/api/gen/proto/go/teleport/recordingmetadata/v1/recordingmetadata_service_grpc.pb.go new file mode 100644 index 0000000000000..4eca5cc2b0119 --- /dev/null +++ b/api/gen/proto/go/teleport/recordingmetadata/v1/recordingmetadata_service_grpc.pb.go @@ -0,0 +1,186 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Code generated by protoc-gen-go-grpc. DO NOT EDIT. +// versions: +// - protoc-gen-go-grpc v1.5.1 +// - protoc (unknown) +// source: teleport/recordingmetadata/v1/recordingmetadata_service.proto + +package recordingmetadatav1 + +import ( + context "context" + grpc "google.golang.org/grpc" + codes "google.golang.org/grpc/codes" + status "google.golang.org/grpc/status" +) + +// This is a compile-time assertion to ensure that this generated file +// is compatible with the grpc package it is being compiled against. +// Requires gRPC-Go v1.64.0 or later. +const _ = grpc.SupportPackageIsVersion9 + +const ( + RecordingMetadataService_GetThumbnail_FullMethodName = "/teleport.recordingmetadata.v1.RecordingMetadataService/GetThumbnail" + RecordingMetadataService_GetMetadata_FullMethodName = "/teleport.recordingmetadata.v1.RecordingMetadataService/GetMetadata" +) + +// RecordingMetadataServiceClient is the client API for RecordingMetadataService service. +// +// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream. +// +// RecordingMetadataService provides methods to retrieve metadata and thumbnails for a session recording. +type RecordingMetadataServiceClient interface { + // GetThumbnail retrieves the thumbnail for a session recording. + GetThumbnail(ctx context.Context, in *GetThumbnailRequest, opts ...grpc.CallOption) (*GetThumbnailResponse, error) + // GetMetadata retrieves the metadata for a session recording. 
+ GetMetadata(ctx context.Context, in *GetMetadataRequest, opts ...grpc.CallOption) (grpc.ServerStreamingClient[GetMetadataResponseChunk], error) +} + +type recordingMetadataServiceClient struct { + cc grpc.ClientConnInterface +} + +func NewRecordingMetadataServiceClient(cc grpc.ClientConnInterface) RecordingMetadataServiceClient { + return &recordingMetadataServiceClient{cc} +} + +func (c *recordingMetadataServiceClient) GetThumbnail(ctx context.Context, in *GetThumbnailRequest, opts ...grpc.CallOption) (*GetThumbnailResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(GetThumbnailResponse) + err := c.cc.Invoke(ctx, RecordingMetadataService_GetThumbnail_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *recordingMetadataServiceClient) GetMetadata(ctx context.Context, in *GetMetadataRequest, opts ...grpc.CallOption) (grpc.ServerStreamingClient[GetMetadataResponseChunk], error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + stream, err := c.cc.NewStream(ctx, &RecordingMetadataService_ServiceDesc.Streams[0], RecordingMetadataService_GetMetadata_FullMethodName, cOpts...) + if err != nil { + return nil, err + } + x := &grpc.GenericClientStream[GetMetadataRequest, GetMetadataResponseChunk]{ClientStream: stream} + if err := x.ClientStream.SendMsg(in); err != nil { + return nil, err + } + if err := x.ClientStream.CloseSend(); err != nil { + return nil, err + } + return x, nil +} + +// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name. +type RecordingMetadataService_GetMetadataClient = grpc.ServerStreamingClient[GetMetadataResponseChunk] + +// RecordingMetadataServiceServer is the server API for RecordingMetadataService service. +// All implementations must embed UnimplementedRecordingMetadataServiceServer +// for forward compatibility. 
+// +// RecordingMetadataService provides methods to retrieve metadata and thumbnails for a session recording. +type RecordingMetadataServiceServer interface { + // GetThumbnail retrieves the thumbnail for a session recording. + GetThumbnail(context.Context, *GetThumbnailRequest) (*GetThumbnailResponse, error) + // GetMetadata retrieves the metadata for a session recording. + GetMetadata(*GetMetadataRequest, grpc.ServerStreamingServer[GetMetadataResponseChunk]) error + mustEmbedUnimplementedRecordingMetadataServiceServer() +} + +// UnimplementedRecordingMetadataServiceServer must be embedded to have +// forward compatible implementations. +// +// NOTE: this should be embedded by value instead of pointer to avoid a nil +// pointer dereference when methods are called. +type UnimplementedRecordingMetadataServiceServer struct{} + +func (UnimplementedRecordingMetadataServiceServer) GetThumbnail(context.Context, *GetThumbnailRequest) (*GetThumbnailResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method GetThumbnail not implemented") +} +func (UnimplementedRecordingMetadataServiceServer) GetMetadata(*GetMetadataRequest, grpc.ServerStreamingServer[GetMetadataResponseChunk]) error { + return status.Errorf(codes.Unimplemented, "method GetMetadata not implemented") +} +func (UnimplementedRecordingMetadataServiceServer) mustEmbedUnimplementedRecordingMetadataServiceServer() { +} +func (UnimplementedRecordingMetadataServiceServer) testEmbeddedByValue() {} + +// UnsafeRecordingMetadataServiceServer may be embedded to opt out of forward compatibility for this service. +// Use of this interface is not recommended, as added methods to RecordingMetadataServiceServer will +// result in compilation errors. 
+type UnsafeRecordingMetadataServiceServer interface { + mustEmbedUnimplementedRecordingMetadataServiceServer() +} + +func RegisterRecordingMetadataServiceServer(s grpc.ServiceRegistrar, srv RecordingMetadataServiceServer) { + // If the following call panics, it indicates UnimplementedRecordingMetadataServiceServer was + // embedded by pointer and is nil. This will cause panics if an + // unimplemented method is ever invoked, so we test this at initialization + // time to prevent it from happening at runtime later due to I/O. + if t, ok := srv.(interface{ testEmbeddedByValue() }); ok { + t.testEmbeddedByValue() + } + s.RegisterService(&RecordingMetadataService_ServiceDesc, srv) +} + +func _RecordingMetadataService_GetThumbnail_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(GetThumbnailRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(RecordingMetadataServiceServer).GetThumbnail(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: RecordingMetadataService_GetThumbnail_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(RecordingMetadataServiceServer).GetThumbnail(ctx, req.(*GetThumbnailRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _RecordingMetadataService_GetMetadata_Handler(srv interface{}, stream grpc.ServerStream) error { + m := new(GetMetadataRequest) + if err := stream.RecvMsg(m); err != nil { + return err + } + return srv.(RecordingMetadataServiceServer).GetMetadata(m, &grpc.GenericServerStream[GetMetadataRequest, GetMetadataResponseChunk]{ServerStream: stream}) +} + +// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name. 
+type RecordingMetadataService_GetMetadataServer = grpc.ServerStreamingServer[GetMetadataResponseChunk] + +// RecordingMetadataService_ServiceDesc is the grpc.ServiceDesc for RecordingMetadataService service. +// It's only intended for direct use with grpc.RegisterService, +// and not to be introspected or modified (even as a copy) +var RecordingMetadataService_ServiceDesc = grpc.ServiceDesc{ + ServiceName: "teleport.recordingmetadata.v1.RecordingMetadataService", + HandlerType: (*RecordingMetadataServiceServer)(nil), + Methods: []grpc.MethodDesc{ + { + MethodName: "GetThumbnail", + Handler: _RecordingMetadataService_GetThumbnail_Handler, + }, + }, + Streams: []grpc.StreamDesc{ + { + StreamName: "GetMetadata", + Handler: _RecordingMetadataService_GetMetadata_Handler, + ServerStreams: true, + }, + }, + Metadata: "teleport/recordingmetadata/v1/recordingmetadata_service.proto", +} diff --git a/api/gen/proto/go/teleport/resourceusage/v1/access_requests.pb.go b/api/gen/proto/go/teleport/resourceusage/v1/access_requests.pb.go index 774eb42c4f98e..9058f4c48707e 100644 --- a/api/gen/proto/go/teleport/resourceusage/v1/access_requests.pb.go +++ b/api/gen/proto/go/teleport/resourceusage/v1/access_requests.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/resourceusage/v1/access_requests.proto diff --git a/api/gen/proto/go/teleport/resourceusage/v1/account_usage_type.pb.go b/api/gen/proto/go/teleport/resourceusage/v1/account_usage_type.pb.go index 7320449eef464..5188f973f0a4e 100644 --- a/api/gen/proto/go/teleport/resourceusage/v1/account_usage_type.pb.go +++ b/api/gen/proto/go/teleport/resourceusage/v1/account_usage_type.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/resourceusage/v1/account_usage_type.proto diff --git a/api/gen/proto/go/teleport/resourceusage/v1/device_trust.pb.go b/api/gen/proto/go/teleport/resourceusage/v1/device_trust.pb.go index 3b2597a17a285..3dc67394263af 100644 --- a/api/gen/proto/go/teleport/resourceusage/v1/device_trust.pb.go +++ b/api/gen/proto/go/teleport/resourceusage/v1/device_trust.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/resourceusage/v1/device_trust.proto diff --git a/api/gen/proto/go/teleport/resourceusage/v1/resourceusage_service.pb.go b/api/gen/proto/go/teleport/resourceusage/v1/resourceusage_service.pb.go index b129ae503aabc..533bf40df6fae 100644 --- a/api/gen/proto/go/teleport/resourceusage/v1/resourceusage_service.pb.go +++ b/api/gen/proto/go/teleport/resourceusage/v1/resourceusage_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/resourceusage/v1/resourceusage_service.proto diff --git a/api/gen/proto/go/teleport/samlidp/v1/samlidp.pb.go b/api/gen/proto/go/teleport/samlidp/v1/samlidp.pb.go index 26048cfd87fbc..787928b3fffa6 100644 --- a/api/gen/proto/go/teleport/samlidp/v1/samlidp.pb.go +++ b/api/gen/proto/go/teleport/samlidp/v1/samlidp.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/samlidp/v1/samlidp.proto diff --git a/api/gen/proto/go/teleport/scim/v1/scim_service.pb.go b/api/gen/proto/go/teleport/scim/v1/scim_service.pb.go index 4c5a7baf3ee59..beed64c062fc5 100644 --- a/api/gen/proto/go/teleport/scim/v1/scim_service.pb.go +++ b/api/gen/proto/go/teleport/scim/v1/scim_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/scim/v1/scim_service.proto @@ -41,7 +41,7 @@ const ( // ListSCIMResourcesRequest represents a request to fetch multiple resources type ListSCIMResourcesRequest struct { state protoimpl.MessageState `protogen:"open.v1"` - // Target describes the set of requested by the client, vy integration and + // Target describes the set of resources requested by the client, by integration and // resource type. Target *RequestTarget `protobuf:"bytes,1,opt,name=target,proto3" json:"target,omitempty"` // Page is an optional request to retrieve a page of results. Returns all @@ -262,7 +262,7 @@ func (x *UpdateSCIMResourceRequest) GetResource() *Resource { return nil } -// DeleteSCIMResourceRequest describes a request to delete a SCIM-mamanged +// DeleteSCIMResourceRequest describes a request to delete a SCIM-managed // resource type DeleteSCIMResourceRequest struct { state protoimpl.MessageState `protogen:"open.v1"` @@ -671,6 +671,67 @@ func (x *Page) GetCount() uint64 { return 0 } +// PatchSCIMResourceRequest describes a SCIM PATCH operation as per RFC 7644 +// Section 3.5.2. The PATCH operation allows modifying a resource with a set of +// change operations, enabling partial updates without replacing the entire +// resource. 
+// +// See https://datatracker.ietf.org/doc/html/rfc7644#section-3.5.2 +type PatchSCIMResourceRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Target identifies the resource to patch + Target *RequestTarget `protobuf:"bytes,1,opt,name=target,proto3" json:"target,omitempty"` + // Payload is the SCIM PATCH Request payload containing the Operations array + // and schemas as defined in RFC 7644 Section 3.5.2. + Payload *structpb.Struct `protobuf:"bytes,2,opt,name=payload,proto3" json:"payload,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *PatchSCIMResourceRequest) Reset() { + *x = PatchSCIMResourceRequest{} + mi := &file_teleport_scim_v1_scim_service_proto_msgTypes[10] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *PatchSCIMResourceRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*PatchSCIMResourceRequest) ProtoMessage() {} + +func (x *PatchSCIMResourceRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scim_v1_scim_service_proto_msgTypes[10] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use PatchSCIMResourceRequest.ProtoReflect.Descriptor instead. 
+func (*PatchSCIMResourceRequest) Descriptor() ([]byte, []int) { + return file_teleport_scim_v1_scim_service_proto_rawDescGZIP(), []int{10} +} + +func (x *PatchSCIMResourceRequest) GetTarget() *RequestTarget { + if x != nil { + return x.Target + } + return nil +} + +func (x *PatchSCIMResourceRequest) GetPayload() *structpb.Struct { + if x != nil { + return x.Payload + } + return nil +} + var File_teleport_scim_v1_scim_service_proto protoreflect.FileDescriptor const file_teleport_scim_v1_scim_service_proto_rawDesc = "" + @@ -720,13 +781,17 @@ const file_teleport_scim_v1_scim_service_proto_rawDesc = "" + "\x04Page\x12\x1f\n" + "\vstart_index\x18\x01 \x01(\x04R\n" + "startIndex\x12\x14\n" + - "\x05count\x18\x02 \x01(\x04R\x05count2\xe0\x03\n" + + "\x05count\x18\x02 \x01(\x04R\x05count\"\x86\x01\n" + + "\x18PatchSCIMResourceRequest\x127\n" + + "\x06target\x18\x01 \x01(\v2\x1f.teleport.scim.v1.RequestTargetR\x06target\x121\n" + + "\apayload\x18\x02 \x01(\v2\x17.google.protobuf.StructR\apayload2\xbd\x04\n" + "\vSCIMService\x12_\n" + "\x11ListSCIMResources\x12*.teleport.scim.v1.ListSCIMResourcesRequest\x1a\x1e.teleport.scim.v1.ResourceList\x12W\n" + "\x0fGetSCIMResource\x12(.teleport.scim.v1.GetSCIMResourceRequest\x1a\x1a.teleport.scim.v1.Resource\x12]\n" + "\x12CreateSCIMResource\x12+.teleport.scim.v1.CreateSCIMResourceRequest\x1a\x1a.teleport.scim.v1.Resource\x12]\n" + "\x12UpdateSCIMResource\x12+.teleport.scim.v1.UpdateSCIMResourceRequest\x1a\x1a.teleport.scim.v1.Resource\x12Y\n" + - "\x12DeleteSCIMResource\x12+.teleport.scim.v1.DeleteSCIMResourceRequest\x1a\x16.google.protobuf.EmptyBLZJgithub.com/gravitational/teleport/api/gen/proto/go/teleport/scim/v1;scimv1b\x06proto3" + "\x12DeleteSCIMResource\x12+.teleport.scim.v1.DeleteSCIMResourceRequest\x1a\x16.google.protobuf.Empty\x12[\n" + + 
"\x11PatchSCIMResource\x12*.teleport.scim.v1.PatchSCIMResourceRequest\x1a\x1a.teleport.scim.v1.ResourceBLZJgithub.com/gravitational/teleport/api/gen/proto/go/teleport/scim/v1;scimv1b\x06proto3" var ( file_teleport_scim_v1_scim_service_proto_rawDescOnce sync.Once @@ -740,7 +805,7 @@ func file_teleport_scim_v1_scim_service_proto_rawDescGZIP() []byte { return file_teleport_scim_v1_scim_service_proto_rawDescData } -var file_teleport_scim_v1_scim_service_proto_msgTypes = make([]protoimpl.MessageInfo, 10) +var file_teleport_scim_v1_scim_service_proto_msgTypes = make([]protoimpl.MessageInfo, 11) var file_teleport_scim_v1_scim_service_proto_goTypes = []any{ (*ListSCIMResourcesRequest)(nil), // 0: teleport.scim.v1.ListSCIMResourcesRequest (*GetSCIMResourceRequest)(nil), // 1: teleport.scim.v1.GetSCIMResourceRequest @@ -752,9 +817,10 @@ var file_teleport_scim_v1_scim_service_proto_goTypes = []any{ (*ResourceList)(nil), // 7: teleport.scim.v1.ResourceList (*RequestTarget)(nil), // 8: teleport.scim.v1.RequestTarget (*Page)(nil), // 9: teleport.scim.v1.Page - (*structpb.Struct)(nil), // 10: google.protobuf.Struct - (*timestamppb.Timestamp)(nil), // 11: google.protobuf.Timestamp - (*emptypb.Empty)(nil), // 12: google.protobuf.Empty + (*PatchSCIMResourceRequest)(nil), // 10: teleport.scim.v1.PatchSCIMResourceRequest + (*structpb.Struct)(nil), // 11: google.protobuf.Struct + (*timestamppb.Timestamp)(nil), // 12: google.protobuf.Timestamp + (*emptypb.Empty)(nil), // 13: google.protobuf.Empty } var file_teleport_scim_v1_scim_service_proto_depIdxs = []int32{ 8, // 0: teleport.scim.v1.ListSCIMResourcesRequest.target:type_name -> teleport.scim.v1.RequestTarget @@ -766,25 +832,29 @@ var file_teleport_scim_v1_scim_service_proto_depIdxs = []int32{ 5, // 6: teleport.scim.v1.UpdateSCIMResourceRequest.resource:type_name -> teleport.scim.v1.Resource 8, // 7: teleport.scim.v1.DeleteSCIMResourceRequest.target:type_name -> teleport.scim.v1.RequestTarget 6, // 8: 
teleport.scim.v1.Resource.meta:type_name -> teleport.scim.v1.Meta - 10, // 9: teleport.scim.v1.Resource.attributes:type_name -> google.protobuf.Struct - 11, // 10: teleport.scim.v1.Meta.created:type_name -> google.protobuf.Timestamp - 11, // 11: teleport.scim.v1.Meta.modified:type_name -> google.protobuf.Timestamp + 11, // 9: teleport.scim.v1.Resource.attributes:type_name -> google.protobuf.Struct + 12, // 10: teleport.scim.v1.Meta.created:type_name -> google.protobuf.Timestamp + 12, // 11: teleport.scim.v1.Meta.modified:type_name -> google.protobuf.Timestamp 5, // 12: teleport.scim.v1.ResourceList.resources:type_name -> teleport.scim.v1.Resource - 0, // 13: teleport.scim.v1.SCIMService.ListSCIMResources:input_type -> teleport.scim.v1.ListSCIMResourcesRequest - 1, // 14: teleport.scim.v1.SCIMService.GetSCIMResource:input_type -> teleport.scim.v1.GetSCIMResourceRequest - 2, // 15: teleport.scim.v1.SCIMService.CreateSCIMResource:input_type -> teleport.scim.v1.CreateSCIMResourceRequest - 3, // 16: teleport.scim.v1.SCIMService.UpdateSCIMResource:input_type -> teleport.scim.v1.UpdateSCIMResourceRequest - 4, // 17: teleport.scim.v1.SCIMService.DeleteSCIMResource:input_type -> teleport.scim.v1.DeleteSCIMResourceRequest - 7, // 18: teleport.scim.v1.SCIMService.ListSCIMResources:output_type -> teleport.scim.v1.ResourceList - 5, // 19: teleport.scim.v1.SCIMService.GetSCIMResource:output_type -> teleport.scim.v1.Resource - 5, // 20: teleport.scim.v1.SCIMService.CreateSCIMResource:output_type -> teleport.scim.v1.Resource - 5, // 21: teleport.scim.v1.SCIMService.UpdateSCIMResource:output_type -> teleport.scim.v1.Resource - 12, // 22: teleport.scim.v1.SCIMService.DeleteSCIMResource:output_type -> google.protobuf.Empty - 18, // [18:23] is the sub-list for method output_type - 13, // [13:18] is the sub-list for method input_type - 13, // [13:13] is the sub-list for extension type_name - 13, // [13:13] is the sub-list for extension extendee - 0, // [0:13] is the sub-list for field 
type_name + 8, // 13: teleport.scim.v1.PatchSCIMResourceRequest.target:type_name -> teleport.scim.v1.RequestTarget + 11, // 14: teleport.scim.v1.PatchSCIMResourceRequest.payload:type_name -> google.protobuf.Struct + 0, // 15: teleport.scim.v1.SCIMService.ListSCIMResources:input_type -> teleport.scim.v1.ListSCIMResourcesRequest + 1, // 16: teleport.scim.v1.SCIMService.GetSCIMResource:input_type -> teleport.scim.v1.GetSCIMResourceRequest + 2, // 17: teleport.scim.v1.SCIMService.CreateSCIMResource:input_type -> teleport.scim.v1.CreateSCIMResourceRequest + 3, // 18: teleport.scim.v1.SCIMService.UpdateSCIMResource:input_type -> teleport.scim.v1.UpdateSCIMResourceRequest + 4, // 19: teleport.scim.v1.SCIMService.DeleteSCIMResource:input_type -> teleport.scim.v1.DeleteSCIMResourceRequest + 10, // 20: teleport.scim.v1.SCIMService.PatchSCIMResource:input_type -> teleport.scim.v1.PatchSCIMResourceRequest + 7, // 21: teleport.scim.v1.SCIMService.ListSCIMResources:output_type -> teleport.scim.v1.ResourceList + 5, // 22: teleport.scim.v1.SCIMService.GetSCIMResource:output_type -> teleport.scim.v1.Resource + 5, // 23: teleport.scim.v1.SCIMService.CreateSCIMResource:output_type -> teleport.scim.v1.Resource + 5, // 24: teleport.scim.v1.SCIMService.UpdateSCIMResource:output_type -> teleport.scim.v1.Resource + 13, // 25: teleport.scim.v1.SCIMService.DeleteSCIMResource:output_type -> google.protobuf.Empty + 5, // 26: teleport.scim.v1.SCIMService.PatchSCIMResource:output_type -> teleport.scim.v1.Resource + 21, // [21:27] is the sub-list for method output_type + 15, // [15:21] is the sub-list for method input_type + 15, // [15:15] is the sub-list for extension type_name + 15, // [15:15] is the sub-list for extension extendee + 0, // [0:15] is the sub-list for field type_name } func init() { file_teleport_scim_v1_scim_service_proto_init() } @@ -798,7 +868,7 @@ func file_teleport_scim_v1_scim_service_proto_init() { GoPackagePath: reflect.TypeOf(x{}).PkgPath(), RawDescriptor: 
unsafe.Slice(unsafe.StringData(file_teleport_scim_v1_scim_service_proto_rawDesc), len(file_teleport_scim_v1_scim_service_proto_rawDesc)), NumEnums: 0, - NumMessages: 10, + NumMessages: 11, NumExtensions: 0, NumServices: 1, }, diff --git a/api/gen/proto/go/teleport/scim/v1/scim_service_grpc.pb.go b/api/gen/proto/go/teleport/scim/v1/scim_service_grpc.pb.go index f851b4bfeab8b..87431b2dfb804 100644 --- a/api/gen/proto/go/teleport/scim/v1/scim_service_grpc.pb.go +++ b/api/gen/proto/go/teleport/scim/v1/scim_service_grpc.pb.go @@ -39,6 +39,7 @@ const ( SCIMService_CreateSCIMResource_FullMethodName = "/teleport.scim.v1.SCIMService/CreateSCIMResource" SCIMService_UpdateSCIMResource_FullMethodName = "/teleport.scim.v1.SCIMService/UpdateSCIMResource" SCIMService_DeleteSCIMResource_FullMethodName = "/teleport.scim.v1.SCIMService/DeleteSCIMResource" + SCIMService_PatchSCIMResource_FullMethodName = "/teleport.scim.v1.SCIMService/PatchSCIMResource" ) // SCIMServiceClient is the client API for SCIMService service. @@ -59,6 +60,11 @@ type SCIMServiceClient interface { UpdateSCIMResource(ctx context.Context, in *UpdateSCIMResourceRequest, opts ...grpc.CallOption) (*Resource, error) // DeleteSCIMResource deletes a SCIM-managed resource DeleteSCIMResource(ctx context.Context, in *DeleteSCIMResourceRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) + // PatchSCIMResource handles a request to patch a resource as per RFC 7644 + // Section 3.5.2. 
+ // + // See https://datatracker.ietf.org/doc/html/rfc7644#section-3.5.2 + PatchSCIMResource(ctx context.Context, in *PatchSCIMResourceRequest, opts ...grpc.CallOption) (*Resource, error) } type sCIMServiceClient struct { @@ -119,6 +125,16 @@ func (c *sCIMServiceClient) DeleteSCIMResource(ctx context.Context, in *DeleteSC return out, nil } +func (c *sCIMServiceClient) PatchSCIMResource(ctx context.Context, in *PatchSCIMResourceRequest, opts ...grpc.CallOption) (*Resource, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(Resource) + err := c.cc.Invoke(ctx, SCIMService_PatchSCIMResource_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + // SCIMServiceServer is the server API for SCIMService service. // All implementations must embed UnimplementedSCIMServiceServer // for forward compatibility. @@ -137,6 +153,11 @@ type SCIMServiceServer interface { UpdateSCIMResource(context.Context, *UpdateSCIMResourceRequest) (*Resource, error) // DeleteSCIMResource deletes a SCIM-managed resource DeleteSCIMResource(context.Context, *DeleteSCIMResourceRequest) (*emptypb.Empty, error) + // PatchSCIMResource handles a request to patch a resource as per RFC 7644 + // Section 3.5.2. 
+ // + // See https://datatracker.ietf.org/doc/html/rfc7644#section-3.5.2 + PatchSCIMResource(context.Context, *PatchSCIMResourceRequest) (*Resource, error) mustEmbedUnimplementedSCIMServiceServer() } @@ -162,6 +183,9 @@ func (UnimplementedSCIMServiceServer) UpdateSCIMResource(context.Context, *Updat func (UnimplementedSCIMServiceServer) DeleteSCIMResource(context.Context, *DeleteSCIMResourceRequest) (*emptypb.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method DeleteSCIMResource not implemented") } +func (UnimplementedSCIMServiceServer) PatchSCIMResource(context.Context, *PatchSCIMResourceRequest) (*Resource, error) { + return nil, status.Errorf(codes.Unimplemented, "method PatchSCIMResource not implemented") +} func (UnimplementedSCIMServiceServer) mustEmbedUnimplementedSCIMServiceServer() {} func (UnimplementedSCIMServiceServer) testEmbeddedByValue() {} @@ -273,6 +297,24 @@ func _SCIMService_DeleteSCIMResource_Handler(srv interface{}, ctx context.Contex return interceptor(ctx, in, info, handler) } +func _SCIMService_PatchSCIMResource_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(PatchSCIMResourceRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(SCIMServiceServer).PatchSCIMResource(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: SCIMService_PatchSCIMResource_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(SCIMServiceServer).PatchSCIMResource(ctx, req.(*PatchSCIMResourceRequest)) + } + return interceptor(ctx, in, info, handler) +} + // SCIMService_ServiceDesc is the grpc.ServiceDesc for SCIMService service. 
// It's only intended for direct use with grpc.RegisterService, // and not to be introspected or modified (even as a copy) @@ -300,6 +342,10 @@ var SCIMService_ServiceDesc = grpc.ServiceDesc{ MethodName: "DeleteSCIMResource", Handler: _SCIMService_DeleteSCIMResource_Handler, }, + { + MethodName: "PatchSCIMResource", + Handler: _SCIMService_PatchSCIMResource_Handler, + }, }, Streams: []grpc.StreamDesc{}, Metadata: "teleport/scim/v1/scim_service.proto", diff --git a/api/gen/proto/go/teleport/scopedrole/v1/assignment.pb.go b/api/gen/proto/go/teleport/scopedrole/v1/assignment.pb.go deleted file mode 100644 index 9b9574a2abb4c..0000000000000 --- a/api/gen/proto/go/teleport/scopedrole/v1/assignment.pb.go +++ /dev/null @@ -1,316 +0,0 @@ -// Copyright 2025 Gravitational, Inc -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -// Code generated by protoc-gen-go. DO NOT EDIT. -// versions: -// protoc-gen-go v1.36.6 -// protoc (unknown) -// source: teleport/scopedrole/v1/assignment.proto - -package scopedrole - -import ( - v1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/header/v1" - protoreflect "google.golang.org/protobuf/reflect/protoreflect" - protoimpl "google.golang.org/protobuf/runtime/protoimpl" - reflect "reflect" - sync "sync" - unsafe "unsafe" -) - -const ( - // Verify that this generated code is sufficiently up-to-date. - _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) - // Verify that runtime/protoimpl is sufficiently up-to-date. 
- _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) -) - -// ScopedRoleAssignment is a role assignment whose resource and permissions are scoped. A scoped role assignment -// assigns roles to users at scopes. One assignment may contain multiple roles at multiple scopes. Most assignments -// are stored at random IDs, but some assignments created by teleport may have special static names that are -// reserved for teleport's internal use (e.g. for managing the set of subassignments generated by a connector). -type ScopedRoleAssignment struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Kind is the resource kind. - Kind string `protobuf:"bytes,1,opt,name=kind,proto3" json:"kind,omitempty"` - // SubKind is the resource sub-kind. - SubKind string `protobuf:"bytes,2,opt,name=sub_kind,json=subKind,proto3" json:"sub_kind,omitempty"` - // Version is the resource version. - Version string `protobuf:"bytes,3,opt,name=version,proto3" json:"version,omitempty"` - // Metadata contains the resource metadata. - Metadata *v1.Metadata `protobuf:"bytes,4,opt,name=metadata,proto3" json:"metadata,omitempty"` - // Scope is the scope of the role assignment resource. - Scope string `protobuf:"bytes,5,opt,name=scope,proto3" json:"scope,omitempty"` - // Spec is the role assignment specification. 
- Spec *ScopedRoleAssignmentSpec `protobuf:"bytes,6,opt,name=spec,proto3" json:"spec,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *ScopedRoleAssignment) Reset() { - *x = ScopedRoleAssignment{} - mi := &file_teleport_scopedrole_v1_assignment_proto_msgTypes[0] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *ScopedRoleAssignment) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*ScopedRoleAssignment) ProtoMessage() {} - -func (x *ScopedRoleAssignment) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_assignment_proto_msgTypes[0] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use ScopedRoleAssignment.ProtoReflect.Descriptor instead. -func (*ScopedRoleAssignment) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_assignment_proto_rawDescGZIP(), []int{0} -} - -func (x *ScopedRoleAssignment) GetKind() string { - if x != nil { - return x.Kind - } - return "" -} - -func (x *ScopedRoleAssignment) GetSubKind() string { - if x != nil { - return x.SubKind - } - return "" -} - -func (x *ScopedRoleAssignment) GetVersion() string { - if x != nil { - return x.Version - } - return "" -} - -func (x *ScopedRoleAssignment) GetMetadata() *v1.Metadata { - if x != nil { - return x.Metadata - } - return nil -} - -func (x *ScopedRoleAssignment) GetScope() string { - if x != nil { - return x.Scope - } - return "" -} - -func (x *ScopedRoleAssignment) GetSpec() *ScopedRoleAssignmentSpec { - if x != nil { - return x.Spec - } - return nil -} - -// ScopedRoleAssignmentSpec is the specification of a scoped role. -type ScopedRoleAssignmentSpec struct { - state protoimpl.MessageState `protogen:"open.v1"` - // User is the user to whom all contained assignments apply. 
- User string `protobuf:"bytes,1,opt,name=user,proto3" json:"user,omitempty"` - // Assignments is a list of individual role @ scope assignments. - Assignments []*Assignment `protobuf:"bytes,2,rep,name=assignments,proto3" json:"assignments,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *ScopedRoleAssignmentSpec) Reset() { - *x = ScopedRoleAssignmentSpec{} - mi := &file_teleport_scopedrole_v1_assignment_proto_msgTypes[1] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *ScopedRoleAssignmentSpec) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*ScopedRoleAssignmentSpec) ProtoMessage() {} - -func (x *ScopedRoleAssignmentSpec) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_assignment_proto_msgTypes[1] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use ScopedRoleAssignmentSpec.ProtoReflect.Descriptor instead. -func (*ScopedRoleAssignmentSpec) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_assignment_proto_rawDescGZIP(), []int{1} -} - -func (x *ScopedRoleAssignmentSpec) GetUser() string { - if x != nil { - return x.User - } - return "" -} - -func (x *ScopedRoleAssignmentSpec) GetAssignments() []*Assignment { - if x != nil { - return x.Assignments - } - return nil -} - -// Assignment is a role/scope pair that defines an individual assignment. -type Assignment struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Roles is the name of the role that is assigned by this assignment. - Role string `protobuf:"bytes,1,opt,name=role,proto3" json:"role,omitempty"` - // Scope is the scope to which the role is assigned. This must be a member/child - // of the scope of the [ScopedRoleAssignment] in which this assignment is contained. 
- Scope string `protobuf:"bytes,2,opt,name=scope,proto3" json:"scope,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *Assignment) Reset() { - *x = Assignment{} - mi := &file_teleport_scopedrole_v1_assignment_proto_msgTypes[2] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *Assignment) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*Assignment) ProtoMessage() {} - -func (x *Assignment) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_assignment_proto_msgTypes[2] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use Assignment.ProtoReflect.Descriptor instead. -func (*Assignment) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_assignment_proto_rawDescGZIP(), []int{2} -} - -func (x *Assignment) GetRole() string { - if x != nil { - return x.Role - } - return "" -} - -func (x *Assignment) GetScope() string { - if x != nil { - return x.Scope - } - return "" -} - -var File_teleport_scopedrole_v1_assignment_proto protoreflect.FileDescriptor - -const file_teleport_scopedrole_v1_assignment_proto_rawDesc = "" + - "\n" + - "'teleport/scopedrole/v1/assignment.proto\x12\x16teleport.scopedrole.v1\x1a!teleport/header/v1/metadata.proto\"\xf5\x01\n" + - "\x14ScopedRoleAssignment\x12\x12\n" + - "\x04kind\x18\x01 \x01(\tR\x04kind\x12\x19\n" + - "\bsub_kind\x18\x02 \x01(\tR\asubKind\x12\x18\n" + - "\aversion\x18\x03 \x01(\tR\aversion\x128\n" + - "\bmetadata\x18\x04 \x01(\v2\x1c.teleport.header.v1.MetadataR\bmetadata\x12\x14\n" + - "\x05scope\x18\x05 \x01(\tR\x05scope\x12D\n" + - "\x04spec\x18\x06 \x01(\v20.teleport.scopedrole.v1.ScopedRoleAssignmentSpecR\x04spec\"t\n" + - "\x18ScopedRoleAssignmentSpec\x12\x12\n" + - "\x04user\x18\x01 \x01(\tR\x04user\x12D\n" + - 
"\vassignments\x18\x02 \x03(\v2\".teleport.scopedrole.v1.AssignmentR\vassignments\"6\n" + - "\n" + - "Assignment\x12\x12\n" + - "\x04role\x18\x01 \x01(\tR\x04role\x12\x14\n" + - "\x05scope\x18\x02 \x01(\tR\x05scopeBVZTgithub.com/gravitational/teleport/api/gen/proto/go/teleport/scopedrole/v1;scopedroleb\x06proto3" - -var ( - file_teleport_scopedrole_v1_assignment_proto_rawDescOnce sync.Once - file_teleport_scopedrole_v1_assignment_proto_rawDescData []byte -) - -func file_teleport_scopedrole_v1_assignment_proto_rawDescGZIP() []byte { - file_teleport_scopedrole_v1_assignment_proto_rawDescOnce.Do(func() { - file_teleport_scopedrole_v1_assignment_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_teleport_scopedrole_v1_assignment_proto_rawDesc), len(file_teleport_scopedrole_v1_assignment_proto_rawDesc))) - }) - return file_teleport_scopedrole_v1_assignment_proto_rawDescData -} - -var file_teleport_scopedrole_v1_assignment_proto_msgTypes = make([]protoimpl.MessageInfo, 3) -var file_teleport_scopedrole_v1_assignment_proto_goTypes = []any{ - (*ScopedRoleAssignment)(nil), // 0: teleport.scopedrole.v1.ScopedRoleAssignment - (*ScopedRoleAssignmentSpec)(nil), // 1: teleport.scopedrole.v1.ScopedRoleAssignmentSpec - (*Assignment)(nil), // 2: teleport.scopedrole.v1.Assignment - (*v1.Metadata)(nil), // 3: teleport.header.v1.Metadata -} -var file_teleport_scopedrole_v1_assignment_proto_depIdxs = []int32{ - 3, // 0: teleport.scopedrole.v1.ScopedRoleAssignment.metadata:type_name -> teleport.header.v1.Metadata - 1, // 1: teleport.scopedrole.v1.ScopedRoleAssignment.spec:type_name -> teleport.scopedrole.v1.ScopedRoleAssignmentSpec - 2, // 2: teleport.scopedrole.v1.ScopedRoleAssignmentSpec.assignments:type_name -> teleport.scopedrole.v1.Assignment - 3, // [3:3] is the sub-list for method output_type - 3, // [3:3] is the sub-list for method input_type - 3, // [3:3] is the sub-list for extension type_name - 3, // [3:3] is the sub-list for extension extendee - 
0, // [0:3] is the sub-list for field type_name -} - -func init() { file_teleport_scopedrole_v1_assignment_proto_init() } -func file_teleport_scopedrole_v1_assignment_proto_init() { - if File_teleport_scopedrole_v1_assignment_proto != nil { - return - } - type x struct{} - out := protoimpl.TypeBuilder{ - File: protoimpl.DescBuilder{ - GoPackagePath: reflect.TypeOf(x{}).PkgPath(), - RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_scopedrole_v1_assignment_proto_rawDesc), len(file_teleport_scopedrole_v1_assignment_proto_rawDesc)), - NumEnums: 0, - NumMessages: 3, - NumExtensions: 0, - NumServices: 0, - }, - GoTypes: file_teleport_scopedrole_v1_assignment_proto_goTypes, - DependencyIndexes: file_teleport_scopedrole_v1_assignment_proto_depIdxs, - MessageInfos: file_teleport_scopedrole_v1_assignment_proto_msgTypes, - }.Build() - File_teleport_scopedrole_v1_assignment_proto = out.File - file_teleport_scopedrole_v1_assignment_proto_goTypes = nil - file_teleport_scopedrole_v1_assignment_proto_depIdxs = nil -} diff --git a/api/gen/proto/go/teleport/scopedrole/v1/role.pb.go b/api/gen/proto/go/teleport/scopedrole/v1/role.pb.go deleted file mode 100644 index 297ab88dd6faa..0000000000000 --- a/api/gen/proto/go/teleport/scopedrole/v1/role.pb.go +++ /dev/null @@ -1,244 +0,0 @@ -// Copyright 2025 Gravitational, Inc -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -// Code generated by protoc-gen-go. DO NOT EDIT. 
-// versions: -// protoc-gen-go v1.36.6 -// protoc (unknown) -// source: teleport/scopedrole/v1/role.proto - -package scopedrole - -import ( - v1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/header/v1" - protoreflect "google.golang.org/protobuf/reflect/protoreflect" - protoimpl "google.golang.org/protobuf/runtime/protoimpl" - reflect "reflect" - sync "sync" - unsafe "unsafe" -) - -const ( - // Verify that this generated code is sufficiently up-to-date. - _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) - // Verify that runtime/protoimpl is sufficiently up-to-date. - _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) -) - -// ScopedRole is a role whose resource and permissions are scoped. Scoped roles implement a subset of role -// features tailored to the usecases of scoped access and scoped access administration. Scoped roles may be -// assigned to the same user multiple times at various scopes. Scoped roles do not contain deny rules. -type ScopedRole struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Kind is the resource kind. - Kind string `protobuf:"bytes,1,opt,name=kind,proto3" json:"kind,omitempty"` - // SubKind is the resource sub-kind. - SubKind string `protobuf:"bytes,2,opt,name=sub_kind,json=subKind,proto3" json:"sub_kind,omitempty"` - // Version is the resource version. - Version string `protobuf:"bytes,3,opt,name=version,proto3" json:"version,omitempty"` - // Metadata contains the resource metadata. - Metadata *v1.Metadata `protobuf:"bytes,4,opt,name=metadata,proto3" json:"metadata,omitempty"` - // Scope is the scope of the role resource. - Scope string `protobuf:"bytes,5,opt,name=scope,proto3" json:"scope,omitempty"` - // Spec is the role specification. 
- Spec *ScopedRoleSpec `protobuf:"bytes,6,opt,name=spec,proto3" json:"spec,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *ScopedRole) Reset() { - *x = ScopedRole{} - mi := &file_teleport_scopedrole_v1_role_proto_msgTypes[0] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *ScopedRole) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*ScopedRole) ProtoMessage() {} - -func (x *ScopedRole) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_role_proto_msgTypes[0] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use ScopedRole.ProtoReflect.Descriptor instead. -func (*ScopedRole) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_role_proto_rawDescGZIP(), []int{0} -} - -func (x *ScopedRole) GetKind() string { - if x != nil { - return x.Kind - } - return "" -} - -func (x *ScopedRole) GetSubKind() string { - if x != nil { - return x.SubKind - } - return "" -} - -func (x *ScopedRole) GetVersion() string { - if x != nil { - return x.Version - } - return "" -} - -func (x *ScopedRole) GetMetadata() *v1.Metadata { - if x != nil { - return x.Metadata - } - return nil -} - -func (x *ScopedRole) GetScope() string { - if x != nil { - return x.Scope - } - return "" -} - -func (x *ScopedRole) GetSpec() *ScopedRoleSpec { - if x != nil { - return x.Spec - } - return nil -} - -// ScopedRoleSpec is the specification of a scoped role. -type ScopedRoleSpec struct { - state protoimpl.MessageState `protogen:"open.v1"` - // AssignableScopes is a list of scopes to which this role can be assigned. 
- AssignableScopes []string `protobuf:"bytes,1,rep,name=assignable_scopes,json=assignableScopes,proto3" json:"assignable_scopes,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *ScopedRoleSpec) Reset() { - *x = ScopedRoleSpec{} - mi := &file_teleport_scopedrole_v1_role_proto_msgTypes[1] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *ScopedRoleSpec) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*ScopedRoleSpec) ProtoMessage() {} - -func (x *ScopedRoleSpec) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_role_proto_msgTypes[1] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use ScopedRoleSpec.ProtoReflect.Descriptor instead. -func (*ScopedRoleSpec) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_role_proto_rawDescGZIP(), []int{1} -} - -func (x *ScopedRoleSpec) GetAssignableScopes() []string { - if x != nil { - return x.AssignableScopes - } - return nil -} - -var File_teleport_scopedrole_v1_role_proto protoreflect.FileDescriptor - -const file_teleport_scopedrole_v1_role_proto_rawDesc = "" + - "\n" + - "!teleport/scopedrole/v1/role.proto\x12\x16teleport.scopedrole.v1\x1a!teleport/header/v1/metadata.proto\"\xe1\x01\n" + - "\n" + - "ScopedRole\x12\x12\n" + - "\x04kind\x18\x01 \x01(\tR\x04kind\x12\x19\n" + - "\bsub_kind\x18\x02 \x01(\tR\asubKind\x12\x18\n" + - "\aversion\x18\x03 \x01(\tR\aversion\x128\n" + - "\bmetadata\x18\x04 \x01(\v2\x1c.teleport.header.v1.MetadataR\bmetadata\x12\x14\n" + - "\x05scope\x18\x05 \x01(\tR\x05scope\x12:\n" + - "\x04spec\x18\x06 \x01(\v2&.teleport.scopedrole.v1.ScopedRoleSpecR\x04spec\"=\n" + - "\x0eScopedRoleSpec\x12+\n" + - "\x11assignable_scopes\x18\x01 
\x03(\tR\x10assignableScopesBVZTgithub.com/gravitational/teleport/api/gen/proto/go/teleport/scopedrole/v1;scopedroleb\x06proto3" - -var ( - file_teleport_scopedrole_v1_role_proto_rawDescOnce sync.Once - file_teleport_scopedrole_v1_role_proto_rawDescData []byte -) - -func file_teleport_scopedrole_v1_role_proto_rawDescGZIP() []byte { - file_teleport_scopedrole_v1_role_proto_rawDescOnce.Do(func() { - file_teleport_scopedrole_v1_role_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_teleport_scopedrole_v1_role_proto_rawDesc), len(file_teleport_scopedrole_v1_role_proto_rawDesc))) - }) - return file_teleport_scopedrole_v1_role_proto_rawDescData -} - -var file_teleport_scopedrole_v1_role_proto_msgTypes = make([]protoimpl.MessageInfo, 2) -var file_teleport_scopedrole_v1_role_proto_goTypes = []any{ - (*ScopedRole)(nil), // 0: teleport.scopedrole.v1.ScopedRole - (*ScopedRoleSpec)(nil), // 1: teleport.scopedrole.v1.ScopedRoleSpec - (*v1.Metadata)(nil), // 2: teleport.header.v1.Metadata -} -var file_teleport_scopedrole_v1_role_proto_depIdxs = []int32{ - 2, // 0: teleport.scopedrole.v1.ScopedRole.metadata:type_name -> teleport.header.v1.Metadata - 1, // 1: teleport.scopedrole.v1.ScopedRole.spec:type_name -> teleport.scopedrole.v1.ScopedRoleSpec - 2, // [2:2] is the sub-list for method output_type - 2, // [2:2] is the sub-list for method input_type - 2, // [2:2] is the sub-list for extension type_name - 2, // [2:2] is the sub-list for extension extendee - 0, // [0:2] is the sub-list for field type_name -} - -func init() { file_teleport_scopedrole_v1_role_proto_init() } -func file_teleport_scopedrole_v1_role_proto_init() { - if File_teleport_scopedrole_v1_role_proto != nil { - return - } - type x struct{} - out := protoimpl.TypeBuilder{ - File: protoimpl.DescBuilder{ - GoPackagePath: reflect.TypeOf(x{}).PkgPath(), - RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_scopedrole_v1_role_proto_rawDesc), 
len(file_teleport_scopedrole_v1_role_proto_rawDesc)), - NumEnums: 0, - NumMessages: 2, - NumExtensions: 0, - NumServices: 0, - }, - GoTypes: file_teleport_scopedrole_v1_role_proto_goTypes, - DependencyIndexes: file_teleport_scopedrole_v1_role_proto_depIdxs, - MessageInfos: file_teleport_scopedrole_v1_role_proto_msgTypes, - }.Build() - File_teleport_scopedrole_v1_role_proto = out.File - file_teleport_scopedrole_v1_role_proto_goTypes = nil - file_teleport_scopedrole_v1_role_proto_depIdxs = nil -} diff --git a/api/gen/proto/go/teleport/scopedrole/v1/service.pb.go b/api/gen/proto/go/teleport/scopedrole/v1/service.pb.go deleted file mode 100644 index 3645f2406e055..0000000000000 --- a/api/gen/proto/go/teleport/scopedrole/v1/service.pb.go +++ /dev/null @@ -1,1143 +0,0 @@ -// Copyright 2025 Gravitational, Inc -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -// Code generated by protoc-gen-go. DO NOT EDIT. -// versions: -// protoc-gen-go v1.36.6 -// protoc (unknown) -// source: teleport/scopedrole/v1/service.proto - -package scopedrole - -import ( - v1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/v1" - protoreflect "google.golang.org/protobuf/reflect/protoreflect" - protoimpl "google.golang.org/protobuf/runtime/protoimpl" - reflect "reflect" - sync "sync" - unsafe "unsafe" -) - -const ( - // Verify that this generated code is sufficiently up-to-date. 
- _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) - // Verify that runtime/protoimpl is sufficiently up-to-date. - _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) -) - -// GetScopedRoleRequest is the request to get a scoped role. -type GetScopedRoleRequest struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Name is the name of the scoped role. - Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *GetScopedRoleRequest) Reset() { - *x = GetScopedRoleRequest{} - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[0] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *GetScopedRoleRequest) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*GetScopedRoleRequest) ProtoMessage() {} - -func (x *GetScopedRoleRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[0] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use GetScopedRoleRequest.ProtoReflect.Descriptor instead. -func (*GetScopedRoleRequest) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_service_proto_rawDescGZIP(), []int{0} -} - -func (x *GetScopedRoleRequest) GetName() string { - if x != nil { - return x.Name - } - return "" -} - -// GetScopedRoleResponse is the response to get a scoped role. -type GetScopedRoleResponse struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Role is the scoped role. 
- Role *ScopedRole `protobuf:"bytes,1,opt,name=role,proto3" json:"role,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *GetScopedRoleResponse) Reset() { - *x = GetScopedRoleResponse{} - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[1] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *GetScopedRoleResponse) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*GetScopedRoleResponse) ProtoMessage() {} - -func (x *GetScopedRoleResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[1] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use GetScopedRoleResponse.ProtoReflect.Descriptor instead. -func (*GetScopedRoleResponse) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_service_proto_rawDescGZIP(), []int{1} -} - -func (x *GetScopedRoleResponse) GetRole() *ScopedRole { - if x != nil { - return x.Role - } - return nil -} - -// ListScopedRolesRequest is the request to list scoped roles. -type ListScopedRolesRequest struct { - state protoimpl.MessageState `protogen:"open.v1"` - // PageSize is the maximum number of results to return. - PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` - // PageToken is the pagination cursor used to start from where a previous request left off. - PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` - // ResourceScope filters roles by their resource scope if specified. - ResourceScope *v1.Filter `protobuf:"bytes,3,opt,name=resource_scope,json=resourceScope,proto3" json:"resource_scope,omitempty"` - // AssignableScope filters roles by their assignable scope if specified. 
- AssignableScope *v1.Filter `protobuf:"bytes,4,opt,name=assignable_scope,json=assignableScope,proto3" json:"assignable_scope,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *ListScopedRolesRequest) Reset() { - *x = ListScopedRolesRequest{} - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[2] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *ListScopedRolesRequest) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*ListScopedRolesRequest) ProtoMessage() {} - -func (x *ListScopedRolesRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[2] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use ListScopedRolesRequest.ProtoReflect.Descriptor instead. -func (*ListScopedRolesRequest) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_service_proto_rawDescGZIP(), []int{2} -} - -func (x *ListScopedRolesRequest) GetPageSize() int32 { - if x != nil { - return x.PageSize - } - return 0 -} - -func (x *ListScopedRolesRequest) GetPageToken() string { - if x != nil { - return x.PageToken - } - return "" -} - -func (x *ListScopedRolesRequest) GetResourceScope() *v1.Filter { - if x != nil { - return x.ResourceScope - } - return nil -} - -func (x *ListScopedRolesRequest) GetAssignableScope() *v1.Filter { - if x != nil { - return x.AssignableScope - } - return nil -} - -// ListScopedRolesResponse is the response to list scoped roles. -type ListScopedRolesResponse struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Roles is the list of scoped roles. - Roles []*ScopedRole `protobuf:"bytes,1,rep,name=roles,proto3" json:"roles,omitempty"` - // NextPageToken is a pagination cursor usable to fetch the next page of results. 
- NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *ListScopedRolesResponse) Reset() { - *x = ListScopedRolesResponse{} - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[3] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *ListScopedRolesResponse) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*ListScopedRolesResponse) ProtoMessage() {} - -func (x *ListScopedRolesResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[3] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use ListScopedRolesResponse.ProtoReflect.Descriptor instead. -func (*ListScopedRolesResponse) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_service_proto_rawDescGZIP(), []int{3} -} - -func (x *ListScopedRolesResponse) GetRoles() []*ScopedRole { - if x != nil { - return x.Roles - } - return nil -} - -func (x *ListScopedRolesResponse) GetNextPageToken() string { - if x != nil { - return x.NextPageToken - } - return "" -} - -// CreateScopedRoleRequest is the request to create a scoped role. -type CreateScopedRoleRequest struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Role is the scoped role to create. 
- Role *ScopedRole `protobuf:"bytes,1,opt,name=role,proto3" json:"role,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *CreateScopedRoleRequest) Reset() { - *x = CreateScopedRoleRequest{} - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[4] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *CreateScopedRoleRequest) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*CreateScopedRoleRequest) ProtoMessage() {} - -func (x *CreateScopedRoleRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[4] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use CreateScopedRoleRequest.ProtoReflect.Descriptor instead. -func (*CreateScopedRoleRequest) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_service_proto_rawDescGZIP(), []int{4} -} - -func (x *CreateScopedRoleRequest) GetRole() *ScopedRole { - if x != nil { - return x.Role - } - return nil -} - -// CreateScopedRoleResponse is the response to create a scoped role. -type CreateScopedRoleResponse struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Role is the scoped role that was created. 
- Role *ScopedRole `protobuf:"bytes,1,opt,name=role,proto3" json:"role,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *CreateScopedRoleResponse) Reset() { - *x = CreateScopedRoleResponse{} - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[5] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *CreateScopedRoleResponse) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*CreateScopedRoleResponse) ProtoMessage() {} - -func (x *CreateScopedRoleResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[5] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use CreateScopedRoleResponse.ProtoReflect.Descriptor instead. -func (*CreateScopedRoleResponse) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_service_proto_rawDescGZIP(), []int{5} -} - -func (x *CreateScopedRoleResponse) GetRole() *ScopedRole { - if x != nil { - return x.Role - } - return nil -} - -// UpdateScopedRoleRequest is the request to update a scoped role. -type UpdateScopedRoleRequest struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Role is the scoped role to update. 
- Role *ScopedRole `protobuf:"bytes,1,opt,name=role,proto3" json:"role,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *UpdateScopedRoleRequest) Reset() { - *x = UpdateScopedRoleRequest{} - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[6] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *UpdateScopedRoleRequest) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*UpdateScopedRoleRequest) ProtoMessage() {} - -func (x *UpdateScopedRoleRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[6] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use UpdateScopedRoleRequest.ProtoReflect.Descriptor instead. -func (*UpdateScopedRoleRequest) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_service_proto_rawDescGZIP(), []int{6} -} - -func (x *UpdateScopedRoleRequest) GetRole() *ScopedRole { - if x != nil { - return x.Role - } - return nil -} - -// UpdateScopedRoleResponse is the response to update a scoped role. -type UpdateScopedRoleResponse struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Role is the post-update scoped role. 
- Role *ScopedRole `protobuf:"bytes,1,opt,name=role,proto3" json:"role,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *UpdateScopedRoleResponse) Reset() { - *x = UpdateScopedRoleResponse{} - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[7] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *UpdateScopedRoleResponse) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*UpdateScopedRoleResponse) ProtoMessage() {} - -func (x *UpdateScopedRoleResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[7] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use UpdateScopedRoleResponse.ProtoReflect.Descriptor instead. -func (*UpdateScopedRoleResponse) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_service_proto_rawDescGZIP(), []int{7} -} - -func (x *UpdateScopedRoleResponse) GetRole() *ScopedRole { - if x != nil { - return x.Role - } - return nil -} - -// DeleteScopedRoleRequest is the request to delete a scoped role. -type DeleteScopedRoleRequest struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Name is the name of the scoped role to delete. - Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` - // Revision asserts the revision of the scoped role to delete (optional). 
- Revision string `protobuf:"bytes,2,opt,name=revision,proto3" json:"revision,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *DeleteScopedRoleRequest) Reset() { - *x = DeleteScopedRoleRequest{} - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[8] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *DeleteScopedRoleRequest) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*DeleteScopedRoleRequest) ProtoMessage() {} - -func (x *DeleteScopedRoleRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[8] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use DeleteScopedRoleRequest.ProtoReflect.Descriptor instead. -func (*DeleteScopedRoleRequest) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_service_proto_rawDescGZIP(), []int{8} -} - -func (x *DeleteScopedRoleRequest) GetName() string { - if x != nil { - return x.Name - } - return "" -} - -func (x *DeleteScopedRoleRequest) GetRevision() string { - if x != nil { - return x.Revision - } - return "" -} - -// DeleteScopedRoleResponse is the response to delete a scoped role. 
-type DeleteScopedRoleResponse struct { - state protoimpl.MessageState `protogen:"open.v1"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *DeleteScopedRoleResponse) Reset() { - *x = DeleteScopedRoleResponse{} - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[9] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *DeleteScopedRoleResponse) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*DeleteScopedRoleResponse) ProtoMessage() {} - -func (x *DeleteScopedRoleResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[9] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use DeleteScopedRoleResponse.ProtoReflect.Descriptor instead. -func (*DeleteScopedRoleResponse) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_service_proto_rawDescGZIP(), []int{9} -} - -// GetScopedRoleAssignmentRequest is the request to get a scoped role assignment. -type GetScopedRoleAssignmentRequest struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Name is the name of the scoped role assignment. 
- Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *GetScopedRoleAssignmentRequest) Reset() { - *x = GetScopedRoleAssignmentRequest{} - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[10] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *GetScopedRoleAssignmentRequest) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*GetScopedRoleAssignmentRequest) ProtoMessage() {} - -func (x *GetScopedRoleAssignmentRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[10] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use GetScopedRoleAssignmentRequest.ProtoReflect.Descriptor instead. -func (*GetScopedRoleAssignmentRequest) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_service_proto_rawDescGZIP(), []int{10} -} - -func (x *GetScopedRoleAssignmentRequest) GetName() string { - if x != nil { - return x.Name - } - return "" -} - -// GetScopedRoleAssignmentResponse is the response to get a scoped role assignment. -type GetScopedRoleAssignmentResponse struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Assignment is the scoped role assignment. 
- Assignment *ScopedRoleAssignment `protobuf:"bytes,1,opt,name=assignment,proto3" json:"assignment,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *GetScopedRoleAssignmentResponse) Reset() { - *x = GetScopedRoleAssignmentResponse{} - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[11] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *GetScopedRoleAssignmentResponse) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*GetScopedRoleAssignmentResponse) ProtoMessage() {} - -func (x *GetScopedRoleAssignmentResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[11] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use GetScopedRoleAssignmentResponse.ProtoReflect.Descriptor instead. -func (*GetScopedRoleAssignmentResponse) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_service_proto_rawDescGZIP(), []int{11} -} - -func (x *GetScopedRoleAssignmentResponse) GetAssignment() *ScopedRoleAssignment { - if x != nil { - return x.Assignment - } - return nil -} - -// ListScopedRoleAssignmentsRequest is the request to list scoped role assignments. -type ListScopedRoleAssignmentsRequest struct { - state protoimpl.MessageState `protogen:"open.v1"` - // PageSize is the maximum number of results to return. - PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` - // PageToken is the pagination cursor used to start from where a previous request left off. - PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` - // ResourceScope filters assignments by their resource scope if specified. 
- ResourceScope *v1.Filter `protobuf:"bytes,3,opt,name=resource_scope,json=resourceScope,proto3" json:"resource_scope,omitempty"` - // AssignedScope filters assignments by the scopes they assign to if specified (note: matches assignment - // resources with 1 or more maching scopes, not all scopes within the assignment will necessarily match). - AssignedScope *v1.Filter `protobuf:"bytes,4,opt,name=assigned_scope,json=assignedScope,proto3" json:"assigned_scope,omitempty"` - // User optionally limits the list to assignments for a specific user. - User string `protobuf:"bytes,5,opt,name=user,proto3" json:"user,omitempty"` - // Role optionally limits the list to assignments for a specific role. - Role string `protobuf:"bytes,6,opt,name=role,proto3" json:"role,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *ListScopedRoleAssignmentsRequest) Reset() { - *x = ListScopedRoleAssignmentsRequest{} - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[12] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *ListScopedRoleAssignmentsRequest) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*ListScopedRoleAssignmentsRequest) ProtoMessage() {} - -func (x *ListScopedRoleAssignmentsRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[12] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use ListScopedRoleAssignmentsRequest.ProtoReflect.Descriptor instead. 
-func (*ListScopedRoleAssignmentsRequest) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_service_proto_rawDescGZIP(), []int{12} -} - -func (x *ListScopedRoleAssignmentsRequest) GetPageSize() int32 { - if x != nil { - return x.PageSize - } - return 0 -} - -func (x *ListScopedRoleAssignmentsRequest) GetPageToken() string { - if x != nil { - return x.PageToken - } - return "" -} - -func (x *ListScopedRoleAssignmentsRequest) GetResourceScope() *v1.Filter { - if x != nil { - return x.ResourceScope - } - return nil -} - -func (x *ListScopedRoleAssignmentsRequest) GetAssignedScope() *v1.Filter { - if x != nil { - return x.AssignedScope - } - return nil -} - -func (x *ListScopedRoleAssignmentsRequest) GetUser() string { - if x != nil { - return x.User - } - return "" -} - -func (x *ListScopedRoleAssignmentsRequest) GetRole() string { - if x != nil { - return x.Role - } - return "" -} - -// ListScopedRoleAssignmentsResponse is the response to list scoped role assignments. -type ListScopedRoleAssignmentsResponse struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Assignments is the list of scoped role assignments. - Assignments []*ScopedRoleAssignment `protobuf:"bytes,1,rep,name=assignments,proto3" json:"assignments,omitempty"` - // NextPageToken is a pagination cursor usable to fetch the next page of results. 
- NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *ListScopedRoleAssignmentsResponse) Reset() { - *x = ListScopedRoleAssignmentsResponse{} - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[13] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *ListScopedRoleAssignmentsResponse) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*ListScopedRoleAssignmentsResponse) ProtoMessage() {} - -func (x *ListScopedRoleAssignmentsResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[13] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use ListScopedRoleAssignmentsResponse.ProtoReflect.Descriptor instead. -func (*ListScopedRoleAssignmentsResponse) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_service_proto_rawDescGZIP(), []int{13} -} - -func (x *ListScopedRoleAssignmentsResponse) GetAssignments() []*ScopedRoleAssignment { - if x != nil { - return x.Assignments - } - return nil -} - -func (x *ListScopedRoleAssignmentsResponse) GetNextPageToken() string { - if x != nil { - return x.NextPageToken - } - return "" -} - -// CreateScopedRoleAssignmentRequest is the request to create a scoped role assignment. -type CreateScopedRoleAssignmentRequest struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Assignment is the scoped role assignment to create. - Assignment *ScopedRoleAssignment `protobuf:"bytes,1,opt,name=assignment,proto3" json:"assignment,omitempty"` - // RoleRevisions asserts the revisions of the roles assigned by the assignments (optional). 
- RoleRevisions map[string]string `protobuf:"bytes,2,rep,name=role_revisions,json=roleRevisions,proto3" json:"role_revisions,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *CreateScopedRoleAssignmentRequest) Reset() { - *x = CreateScopedRoleAssignmentRequest{} - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[14] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *CreateScopedRoleAssignmentRequest) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*CreateScopedRoleAssignmentRequest) ProtoMessage() {} - -func (x *CreateScopedRoleAssignmentRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[14] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use CreateScopedRoleAssignmentRequest.ProtoReflect.Descriptor instead. -func (*CreateScopedRoleAssignmentRequest) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_service_proto_rawDescGZIP(), []int{14} -} - -func (x *CreateScopedRoleAssignmentRequest) GetAssignment() *ScopedRoleAssignment { - if x != nil { - return x.Assignment - } - return nil -} - -func (x *CreateScopedRoleAssignmentRequest) GetRoleRevisions() map[string]string { - if x != nil { - return x.RoleRevisions - } - return nil -} - -// CreateScopedRoleAssignmentResponse is the response to create a scoped role assignment. -type CreateScopedRoleAssignmentResponse struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Assignment is the scoped role assignment that was created. 
- Assignment *ScopedRoleAssignment `protobuf:"bytes,1,opt,name=assignment,proto3" json:"assignment,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *CreateScopedRoleAssignmentResponse) Reset() { - *x = CreateScopedRoleAssignmentResponse{} - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[15] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *CreateScopedRoleAssignmentResponse) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*CreateScopedRoleAssignmentResponse) ProtoMessage() {} - -func (x *CreateScopedRoleAssignmentResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[15] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use CreateScopedRoleAssignmentResponse.ProtoReflect.Descriptor instead. -func (*CreateScopedRoleAssignmentResponse) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_service_proto_rawDescGZIP(), []int{15} -} - -func (x *CreateScopedRoleAssignmentResponse) GetAssignment() *ScopedRoleAssignment { - if x != nil { - return x.Assignment - } - return nil -} - -// DeleteScopedRoleAssignmentRequest is the request to delete a scoped role assignment. -type DeleteScopedRoleAssignmentRequest struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Name is the name of the scoped role assignment to delete. - Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` - // Revision asserts the revision of the scoped role assignment to delete (optional). 
- Revision string `protobuf:"bytes,2,opt,name=revision,proto3" json:"revision,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *DeleteScopedRoleAssignmentRequest) Reset() { - *x = DeleteScopedRoleAssignmentRequest{} - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[16] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *DeleteScopedRoleAssignmentRequest) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*DeleteScopedRoleAssignmentRequest) ProtoMessage() {} - -func (x *DeleteScopedRoleAssignmentRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[16] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use DeleteScopedRoleAssignmentRequest.ProtoReflect.Descriptor instead. -func (*DeleteScopedRoleAssignmentRequest) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_service_proto_rawDescGZIP(), []int{16} -} - -func (x *DeleteScopedRoleAssignmentRequest) GetName() string { - if x != nil { - return x.Name - } - return "" -} - -func (x *DeleteScopedRoleAssignmentRequest) GetRevision() string { - if x != nil { - return x.Revision - } - return "" -} - -// DeleteScopedRoleAssignmentResponse is the response to delete a scoped role assignment. 
-type DeleteScopedRoleAssignmentResponse struct { - state protoimpl.MessageState `protogen:"open.v1"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *DeleteScopedRoleAssignmentResponse) Reset() { - *x = DeleteScopedRoleAssignmentResponse{} - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[17] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *DeleteScopedRoleAssignmentResponse) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*DeleteScopedRoleAssignmentResponse) ProtoMessage() {} - -func (x *DeleteScopedRoleAssignmentResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedrole_v1_service_proto_msgTypes[17] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use DeleteScopedRoleAssignmentResponse.ProtoReflect.Descriptor instead. 
-func (*DeleteScopedRoleAssignmentResponse) Descriptor() ([]byte, []int) { - return file_teleport_scopedrole_v1_service_proto_rawDescGZIP(), []int{17} -} - -var File_teleport_scopedrole_v1_service_proto protoreflect.FileDescriptor - -const file_teleport_scopedrole_v1_service_proto_rawDesc = "" + - "\n" + - "$teleport/scopedrole/v1/service.proto\x12\x16teleport.scopedrole.v1\x1a'teleport/scopedrole/v1/assignment.proto\x1a!teleport/scopedrole/v1/role.proto\x1a\x1fteleport/scopes/v1/scopes.proto\"*\n" + - "\x14GetScopedRoleRequest\x12\x12\n" + - "\x04name\x18\x01 \x01(\tR\x04name\"O\n" + - "\x15GetScopedRoleResponse\x126\n" + - "\x04role\x18\x01 \x01(\v2\".teleport.scopedrole.v1.ScopedRoleR\x04role\"\xde\x01\n" + - "\x16ListScopedRolesRequest\x12\x1b\n" + - "\tpage_size\x18\x01 \x01(\x05R\bpageSize\x12\x1d\n" + - "\n" + - "page_token\x18\x02 \x01(\tR\tpageToken\x12A\n" + - "\x0eresource_scope\x18\x03 \x01(\v2\x1a.teleport.scopes.v1.FilterR\rresourceScope\x12E\n" + - "\x10assignable_scope\x18\x04 \x01(\v2\x1a.teleport.scopes.v1.FilterR\x0fassignableScope\"{\n" + - "\x17ListScopedRolesResponse\x128\n" + - "\x05roles\x18\x01 \x03(\v2\".teleport.scopedrole.v1.ScopedRoleR\x05roles\x12&\n" + - "\x0fnext_page_token\x18\x02 \x01(\tR\rnextPageToken\"Q\n" + - "\x17CreateScopedRoleRequest\x126\n" + - "\x04role\x18\x01 \x01(\v2\".teleport.scopedrole.v1.ScopedRoleR\x04role\"R\n" + - "\x18CreateScopedRoleResponse\x126\n" + - "\x04role\x18\x01 \x01(\v2\".teleport.scopedrole.v1.ScopedRoleR\x04role\"Q\n" + - "\x17UpdateScopedRoleRequest\x126\n" + - "\x04role\x18\x01 \x01(\v2\".teleport.scopedrole.v1.ScopedRoleR\x04role\"R\n" + - "\x18UpdateScopedRoleResponse\x126\n" + - "\x04role\x18\x01 \x01(\v2\".teleport.scopedrole.v1.ScopedRoleR\x04role\"I\n" + - "\x17DeleteScopedRoleRequest\x12\x12\n" + - "\x04name\x18\x01 \x01(\tR\x04name\x12\x1a\n" + - "\brevision\x18\x02 \x01(\tR\brevision\"\x1a\n" + - "\x18DeleteScopedRoleResponse\"4\n" + - "\x1eGetScopedRoleAssignmentRequest\x12\x12\n" + - 
"\x04name\x18\x01 \x01(\tR\x04name\"o\n" + - "\x1fGetScopedRoleAssignmentResponse\x12L\n" + - "\n" + - "assignment\x18\x01 \x01(\v2,.teleport.scopedrole.v1.ScopedRoleAssignmentR\n" + - "assignment\"\x8c\x02\n" + - " ListScopedRoleAssignmentsRequest\x12\x1b\n" + - "\tpage_size\x18\x01 \x01(\x05R\bpageSize\x12\x1d\n" + - "\n" + - "page_token\x18\x02 \x01(\tR\tpageToken\x12A\n" + - "\x0eresource_scope\x18\x03 \x01(\v2\x1a.teleport.scopes.v1.FilterR\rresourceScope\x12A\n" + - "\x0eassigned_scope\x18\x04 \x01(\v2\x1a.teleport.scopes.v1.FilterR\rassignedScope\x12\x12\n" + - "\x04user\x18\x05 \x01(\tR\x04user\x12\x12\n" + - "\x04role\x18\x06 \x01(\tR\x04role\"\x9b\x01\n" + - "!ListScopedRoleAssignmentsResponse\x12N\n" + - "\vassignments\x18\x01 \x03(\v2,.teleport.scopedrole.v1.ScopedRoleAssignmentR\vassignments\x12&\n" + - "\x0fnext_page_token\x18\x02 \x01(\tR\rnextPageToken\"\xa8\x02\n" + - "!CreateScopedRoleAssignmentRequest\x12L\n" + - "\n" + - "assignment\x18\x01 \x01(\v2,.teleport.scopedrole.v1.ScopedRoleAssignmentR\n" + - "assignment\x12s\n" + - "\x0erole_revisions\x18\x02 \x03(\v2L.teleport.scopedrole.v1.CreateScopedRoleAssignmentRequest.RoleRevisionsEntryR\rroleRevisions\x1a@\n" + - "\x12RoleRevisionsEntry\x12\x10\n" + - "\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n" + - "\x05value\x18\x02 \x01(\tR\x05value:\x028\x01\"r\n" + - "\"CreateScopedRoleAssignmentResponse\x12L\n" + - "\n" + - "assignment\x18\x01 \x01(\v2,.teleport.scopedrole.v1.ScopedRoleAssignmentR\n" + - "assignment\"S\n" + - "!DeleteScopedRoleAssignmentRequest\x12\x12\n" + - "\x04name\x18\x01 \x01(\tR\x04name\x12\x1a\n" + - "\brevision\x18\x02 \x01(\tR\brevision\"$\n" + - "\"DeleteScopedRoleAssignmentResponse2\xa6\t\n" + - "\x11ScopedRoleService\x12l\n" + - "\rGetScopedRole\x12,.teleport.scopedrole.v1.GetScopedRoleRequest\x1a-.teleport.scopedrole.v1.GetScopedRoleResponse\x12r\n" + - 
"\x0fListScopedRoles\x12..teleport.scopedrole.v1.ListScopedRolesRequest\x1a/.teleport.scopedrole.v1.ListScopedRolesResponse\x12u\n" + - "\x10CreateScopedRole\x12/.teleport.scopedrole.v1.CreateScopedRoleRequest\x1a0.teleport.scopedrole.v1.CreateScopedRoleResponse\x12u\n" + - "\x10UpdateScopedRole\x12/.teleport.scopedrole.v1.UpdateScopedRoleRequest\x1a0.teleport.scopedrole.v1.UpdateScopedRoleResponse\x12u\n" + - "\x10DeleteScopedRole\x12/.teleport.scopedrole.v1.DeleteScopedRoleRequest\x1a0.teleport.scopedrole.v1.DeleteScopedRoleResponse\x12\x8a\x01\n" + - "\x17GetScopedRoleAssignment\x126.teleport.scopedrole.v1.GetScopedRoleAssignmentRequest\x1a7.teleport.scopedrole.v1.GetScopedRoleAssignmentResponse\x12\x90\x01\n" + - "\x19ListScopedRoleAssignments\x128.teleport.scopedrole.v1.ListScopedRoleAssignmentsRequest\x1a9.teleport.scopedrole.v1.ListScopedRoleAssignmentsResponse\x12\x93\x01\n" + - "\x1aCreateScopedRoleAssignment\x129.teleport.scopedrole.v1.CreateScopedRoleAssignmentRequest\x1a:.teleport.scopedrole.v1.CreateScopedRoleAssignmentResponse\x12\x93\x01\n" + - "\x1aDeleteScopedRoleAssignment\x129.teleport.scopedrole.v1.DeleteScopedRoleAssignmentRequest\x1a:.teleport.scopedrole.v1.DeleteScopedRoleAssignmentResponseBVZTgithub.com/gravitational/teleport/api/gen/proto/go/teleport/scopedrole/v1;scopedroleb\x06proto3" - -var ( - file_teleport_scopedrole_v1_service_proto_rawDescOnce sync.Once - file_teleport_scopedrole_v1_service_proto_rawDescData []byte -) - -func file_teleport_scopedrole_v1_service_proto_rawDescGZIP() []byte { - file_teleport_scopedrole_v1_service_proto_rawDescOnce.Do(func() { - file_teleport_scopedrole_v1_service_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_teleport_scopedrole_v1_service_proto_rawDesc), len(file_teleport_scopedrole_v1_service_proto_rawDesc))) - }) - return file_teleport_scopedrole_v1_service_proto_rawDescData -} - -var file_teleport_scopedrole_v1_service_proto_msgTypes = make([]protoimpl.MessageInfo, 
19) -var file_teleport_scopedrole_v1_service_proto_goTypes = []any{ - (*GetScopedRoleRequest)(nil), // 0: teleport.scopedrole.v1.GetScopedRoleRequest - (*GetScopedRoleResponse)(nil), // 1: teleport.scopedrole.v1.GetScopedRoleResponse - (*ListScopedRolesRequest)(nil), // 2: teleport.scopedrole.v1.ListScopedRolesRequest - (*ListScopedRolesResponse)(nil), // 3: teleport.scopedrole.v1.ListScopedRolesResponse - (*CreateScopedRoleRequest)(nil), // 4: teleport.scopedrole.v1.CreateScopedRoleRequest - (*CreateScopedRoleResponse)(nil), // 5: teleport.scopedrole.v1.CreateScopedRoleResponse - (*UpdateScopedRoleRequest)(nil), // 6: teleport.scopedrole.v1.UpdateScopedRoleRequest - (*UpdateScopedRoleResponse)(nil), // 7: teleport.scopedrole.v1.UpdateScopedRoleResponse - (*DeleteScopedRoleRequest)(nil), // 8: teleport.scopedrole.v1.DeleteScopedRoleRequest - (*DeleteScopedRoleResponse)(nil), // 9: teleport.scopedrole.v1.DeleteScopedRoleResponse - (*GetScopedRoleAssignmentRequest)(nil), // 10: teleport.scopedrole.v1.GetScopedRoleAssignmentRequest - (*GetScopedRoleAssignmentResponse)(nil), // 11: teleport.scopedrole.v1.GetScopedRoleAssignmentResponse - (*ListScopedRoleAssignmentsRequest)(nil), // 12: teleport.scopedrole.v1.ListScopedRoleAssignmentsRequest - (*ListScopedRoleAssignmentsResponse)(nil), // 13: teleport.scopedrole.v1.ListScopedRoleAssignmentsResponse - (*CreateScopedRoleAssignmentRequest)(nil), // 14: teleport.scopedrole.v1.CreateScopedRoleAssignmentRequest - (*CreateScopedRoleAssignmentResponse)(nil), // 15: teleport.scopedrole.v1.CreateScopedRoleAssignmentResponse - (*DeleteScopedRoleAssignmentRequest)(nil), // 16: teleport.scopedrole.v1.DeleteScopedRoleAssignmentRequest - (*DeleteScopedRoleAssignmentResponse)(nil), // 17: teleport.scopedrole.v1.DeleteScopedRoleAssignmentResponse - nil, // 18: teleport.scopedrole.v1.CreateScopedRoleAssignmentRequest.RoleRevisionsEntry - (*ScopedRole)(nil), // 19: teleport.scopedrole.v1.ScopedRole - (*v1.Filter)(nil), // 20: 
teleport.scopes.v1.Filter - (*ScopedRoleAssignment)(nil), // 21: teleport.scopedrole.v1.ScopedRoleAssignment -} -var file_teleport_scopedrole_v1_service_proto_depIdxs = []int32{ - 19, // 0: teleport.scopedrole.v1.GetScopedRoleResponse.role:type_name -> teleport.scopedrole.v1.ScopedRole - 20, // 1: teleport.scopedrole.v1.ListScopedRolesRequest.resource_scope:type_name -> teleport.scopes.v1.Filter - 20, // 2: teleport.scopedrole.v1.ListScopedRolesRequest.assignable_scope:type_name -> teleport.scopes.v1.Filter - 19, // 3: teleport.scopedrole.v1.ListScopedRolesResponse.roles:type_name -> teleport.scopedrole.v1.ScopedRole - 19, // 4: teleport.scopedrole.v1.CreateScopedRoleRequest.role:type_name -> teleport.scopedrole.v1.ScopedRole - 19, // 5: teleport.scopedrole.v1.CreateScopedRoleResponse.role:type_name -> teleport.scopedrole.v1.ScopedRole - 19, // 6: teleport.scopedrole.v1.UpdateScopedRoleRequest.role:type_name -> teleport.scopedrole.v1.ScopedRole - 19, // 7: teleport.scopedrole.v1.UpdateScopedRoleResponse.role:type_name -> teleport.scopedrole.v1.ScopedRole - 21, // 8: teleport.scopedrole.v1.GetScopedRoleAssignmentResponse.assignment:type_name -> teleport.scopedrole.v1.ScopedRoleAssignment - 20, // 9: teleport.scopedrole.v1.ListScopedRoleAssignmentsRequest.resource_scope:type_name -> teleport.scopes.v1.Filter - 20, // 10: teleport.scopedrole.v1.ListScopedRoleAssignmentsRequest.assigned_scope:type_name -> teleport.scopes.v1.Filter - 21, // 11: teleport.scopedrole.v1.ListScopedRoleAssignmentsResponse.assignments:type_name -> teleport.scopedrole.v1.ScopedRoleAssignment - 21, // 12: teleport.scopedrole.v1.CreateScopedRoleAssignmentRequest.assignment:type_name -> teleport.scopedrole.v1.ScopedRoleAssignment - 18, // 13: teleport.scopedrole.v1.CreateScopedRoleAssignmentRequest.role_revisions:type_name -> teleport.scopedrole.v1.CreateScopedRoleAssignmentRequest.RoleRevisionsEntry - 21, // 14: teleport.scopedrole.v1.CreateScopedRoleAssignmentResponse.assignment:type_name -> 
teleport.scopedrole.v1.ScopedRoleAssignment - 0, // 15: teleport.scopedrole.v1.ScopedRoleService.GetScopedRole:input_type -> teleport.scopedrole.v1.GetScopedRoleRequest - 2, // 16: teleport.scopedrole.v1.ScopedRoleService.ListScopedRoles:input_type -> teleport.scopedrole.v1.ListScopedRolesRequest - 4, // 17: teleport.scopedrole.v1.ScopedRoleService.CreateScopedRole:input_type -> teleport.scopedrole.v1.CreateScopedRoleRequest - 6, // 18: teleport.scopedrole.v1.ScopedRoleService.UpdateScopedRole:input_type -> teleport.scopedrole.v1.UpdateScopedRoleRequest - 8, // 19: teleport.scopedrole.v1.ScopedRoleService.DeleteScopedRole:input_type -> teleport.scopedrole.v1.DeleteScopedRoleRequest - 10, // 20: teleport.scopedrole.v1.ScopedRoleService.GetScopedRoleAssignment:input_type -> teleport.scopedrole.v1.GetScopedRoleAssignmentRequest - 12, // 21: teleport.scopedrole.v1.ScopedRoleService.ListScopedRoleAssignments:input_type -> teleport.scopedrole.v1.ListScopedRoleAssignmentsRequest - 14, // 22: teleport.scopedrole.v1.ScopedRoleService.CreateScopedRoleAssignment:input_type -> teleport.scopedrole.v1.CreateScopedRoleAssignmentRequest - 16, // 23: teleport.scopedrole.v1.ScopedRoleService.DeleteScopedRoleAssignment:input_type -> teleport.scopedrole.v1.DeleteScopedRoleAssignmentRequest - 1, // 24: teleport.scopedrole.v1.ScopedRoleService.GetScopedRole:output_type -> teleport.scopedrole.v1.GetScopedRoleResponse - 3, // 25: teleport.scopedrole.v1.ScopedRoleService.ListScopedRoles:output_type -> teleport.scopedrole.v1.ListScopedRolesResponse - 5, // 26: teleport.scopedrole.v1.ScopedRoleService.CreateScopedRole:output_type -> teleport.scopedrole.v1.CreateScopedRoleResponse - 7, // 27: teleport.scopedrole.v1.ScopedRoleService.UpdateScopedRole:output_type -> teleport.scopedrole.v1.UpdateScopedRoleResponse - 9, // 28: teleport.scopedrole.v1.ScopedRoleService.DeleteScopedRole:output_type -> teleport.scopedrole.v1.DeleteScopedRoleResponse - 11, // 29: 
teleport.scopedrole.v1.ScopedRoleService.GetScopedRoleAssignment:output_type -> teleport.scopedrole.v1.GetScopedRoleAssignmentResponse - 13, // 30: teleport.scopedrole.v1.ScopedRoleService.ListScopedRoleAssignments:output_type -> teleport.scopedrole.v1.ListScopedRoleAssignmentsResponse - 15, // 31: teleport.scopedrole.v1.ScopedRoleService.CreateScopedRoleAssignment:output_type -> teleport.scopedrole.v1.CreateScopedRoleAssignmentResponse - 17, // 32: teleport.scopedrole.v1.ScopedRoleService.DeleteScopedRoleAssignment:output_type -> teleport.scopedrole.v1.DeleteScopedRoleAssignmentResponse - 24, // [24:33] is the sub-list for method output_type - 15, // [15:24] is the sub-list for method input_type - 15, // [15:15] is the sub-list for extension type_name - 15, // [15:15] is the sub-list for extension extendee - 0, // [0:15] is the sub-list for field type_name -} - -func init() { file_teleport_scopedrole_v1_service_proto_init() } -func file_teleport_scopedrole_v1_service_proto_init() { - if File_teleport_scopedrole_v1_service_proto != nil { - return - } - file_teleport_scopedrole_v1_assignment_proto_init() - file_teleport_scopedrole_v1_role_proto_init() - type x struct{} - out := protoimpl.TypeBuilder{ - File: protoimpl.DescBuilder{ - GoPackagePath: reflect.TypeOf(x{}).PkgPath(), - RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_scopedrole_v1_service_proto_rawDesc), len(file_teleport_scopedrole_v1_service_proto_rawDesc)), - NumEnums: 0, - NumMessages: 19, - NumExtensions: 0, - NumServices: 1, - }, - GoTypes: file_teleport_scopedrole_v1_service_proto_goTypes, - DependencyIndexes: file_teleport_scopedrole_v1_service_proto_depIdxs, - MessageInfos: file_teleport_scopedrole_v1_service_proto_msgTypes, - }.Build() - File_teleport_scopedrole_v1_service_proto = out.File - file_teleport_scopedrole_v1_service_proto_goTypes = nil - file_teleport_scopedrole_v1_service_proto_depIdxs = nil -} diff --git a/api/gen/proto/go/teleport/scopedrole/v1/service_grpc.pb.go 
b/api/gen/proto/go/teleport/scopedrole/v1/service_grpc.pb.go deleted file mode 100644 index 40053f67c0b7f..0000000000000 --- a/api/gen/proto/go/teleport/scopedrole/v1/service_grpc.pb.go +++ /dev/null @@ -1,461 +0,0 @@ -// Copyright 2025 Gravitational, Inc -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -// Code generated by protoc-gen-go-grpc. DO NOT EDIT. -// versions: -// - protoc-gen-go-grpc v1.5.1 -// - protoc (unknown) -// source: teleport/scopedrole/v1/service.proto - -package scopedrole - -import ( - context "context" - grpc "google.golang.org/grpc" - codes "google.golang.org/grpc/codes" - status "google.golang.org/grpc/status" -) - -// This is a compile-time assertion to ensure that this generated file -// is compatible with the grpc package it is being compiled against. -// Requires gRPC-Go v1.64.0 or later. 
-const _ = grpc.SupportPackageIsVersion9 - -const ( - ScopedRoleService_GetScopedRole_FullMethodName = "/teleport.scopedrole.v1.ScopedRoleService/GetScopedRole" - ScopedRoleService_ListScopedRoles_FullMethodName = "/teleport.scopedrole.v1.ScopedRoleService/ListScopedRoles" - ScopedRoleService_CreateScopedRole_FullMethodName = "/teleport.scopedrole.v1.ScopedRoleService/CreateScopedRole" - ScopedRoleService_UpdateScopedRole_FullMethodName = "/teleport.scopedrole.v1.ScopedRoleService/UpdateScopedRole" - ScopedRoleService_DeleteScopedRole_FullMethodName = "/teleport.scopedrole.v1.ScopedRoleService/DeleteScopedRole" - ScopedRoleService_GetScopedRoleAssignment_FullMethodName = "/teleport.scopedrole.v1.ScopedRoleService/GetScopedRoleAssignment" - ScopedRoleService_ListScopedRoleAssignments_FullMethodName = "/teleport.scopedrole.v1.ScopedRoleService/ListScopedRoleAssignments" - ScopedRoleService_CreateScopedRoleAssignment_FullMethodName = "/teleport.scopedrole.v1.ScopedRoleService/CreateScopedRoleAssignment" - ScopedRoleService_DeleteScopedRoleAssignment_FullMethodName = "/teleport.scopedrole.v1.ScopedRoleService/DeleteScopedRoleAssignment" -) - -// ScopedRoleServiceClient is the client API for ScopedRoleService service. -// -// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream. -// -// ScopedRoleService provides an API for managing scoped role resources and their assignments. -type ScopedRoleServiceClient interface { - // GetScopedRole gets a scoped role by name. - GetScopedRole(ctx context.Context, in *GetScopedRoleRequest, opts ...grpc.CallOption) (*GetScopedRoleResponse, error) - // ListScopedRoles returns a paginated list of scoped roles. - ListScopedRoles(ctx context.Context, in *ListScopedRolesRequest, opts ...grpc.CallOption) (*ListScopedRolesResponse, error) - // CreateScopedRole creates a new scoped role. 
- CreateScopedRole(ctx context.Context, in *CreateScopedRoleRequest, opts ...grpc.CallOption) (*CreateScopedRoleResponse, error) - // UpdateScopedRole updates a scoped role. - UpdateScopedRole(ctx context.Context, in *UpdateScopedRoleRequest, opts ...grpc.CallOption) (*UpdateScopedRoleResponse, error) - // DeleteScopedRole deletes a scoped role. - DeleteScopedRole(ctx context.Context, in *DeleteScopedRoleRequest, opts ...grpc.CallOption) (*DeleteScopedRoleResponse, error) - // GetScopedRoleAssignment gets a scoped role assignment by name. - GetScopedRoleAssignment(ctx context.Context, in *GetScopedRoleAssignmentRequest, opts ...grpc.CallOption) (*GetScopedRoleAssignmentResponse, error) - // ListScopedRoleAssignments returns a paginated list of scoped role assignments. - ListScopedRoleAssignments(ctx context.Context, in *ListScopedRoleAssignmentsRequest, opts ...grpc.CallOption) (*ListScopedRoleAssignmentsResponse, error) - // CreateScopedRoleAssignment creates a new scoped role assignment. - CreateScopedRoleAssignment(ctx context.Context, in *CreateScopedRoleAssignmentRequest, opts ...grpc.CallOption) (*CreateScopedRoleAssignmentResponse, error) - // DeleteScopedRoleAssignment deletes a scoped role assignment. - DeleteScopedRoleAssignment(ctx context.Context, in *DeleteScopedRoleAssignmentRequest, opts ...grpc.CallOption) (*DeleteScopedRoleAssignmentResponse, error) -} - -type scopedRoleServiceClient struct { - cc grpc.ClientConnInterface -} - -func NewScopedRoleServiceClient(cc grpc.ClientConnInterface) ScopedRoleServiceClient { - return &scopedRoleServiceClient{cc} -} - -func (c *scopedRoleServiceClient) GetScopedRole(ctx context.Context, in *GetScopedRoleRequest, opts ...grpc.CallOption) (*GetScopedRoleResponse, error) { - cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) - out := new(GetScopedRoleResponse) - err := c.cc.Invoke(ctx, ScopedRoleService_GetScopedRole_FullMethodName, in, out, cOpts...) 
- if err != nil { - return nil, err - } - return out, nil -} - -func (c *scopedRoleServiceClient) ListScopedRoles(ctx context.Context, in *ListScopedRolesRequest, opts ...grpc.CallOption) (*ListScopedRolesResponse, error) { - cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) - out := new(ListScopedRolesResponse) - err := c.cc.Invoke(ctx, ScopedRoleService_ListScopedRoles_FullMethodName, in, out, cOpts...) - if err != nil { - return nil, err - } - return out, nil -} - -func (c *scopedRoleServiceClient) CreateScopedRole(ctx context.Context, in *CreateScopedRoleRequest, opts ...grpc.CallOption) (*CreateScopedRoleResponse, error) { - cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) - out := new(CreateScopedRoleResponse) - err := c.cc.Invoke(ctx, ScopedRoleService_CreateScopedRole_FullMethodName, in, out, cOpts...) - if err != nil { - return nil, err - } - return out, nil -} - -func (c *scopedRoleServiceClient) UpdateScopedRole(ctx context.Context, in *UpdateScopedRoleRequest, opts ...grpc.CallOption) (*UpdateScopedRoleResponse, error) { - cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) - out := new(UpdateScopedRoleResponse) - err := c.cc.Invoke(ctx, ScopedRoleService_UpdateScopedRole_FullMethodName, in, out, cOpts...) - if err != nil { - return nil, err - } - return out, nil -} - -func (c *scopedRoleServiceClient) DeleteScopedRole(ctx context.Context, in *DeleteScopedRoleRequest, opts ...grpc.CallOption) (*DeleteScopedRoleResponse, error) { - cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) - out := new(DeleteScopedRoleResponse) - err := c.cc.Invoke(ctx, ScopedRoleService_DeleteScopedRole_FullMethodName, in, out, cOpts...) 
- if err != nil { - return nil, err - } - return out, nil -} - -func (c *scopedRoleServiceClient) GetScopedRoleAssignment(ctx context.Context, in *GetScopedRoleAssignmentRequest, opts ...grpc.CallOption) (*GetScopedRoleAssignmentResponse, error) { - cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) - out := new(GetScopedRoleAssignmentResponse) - err := c.cc.Invoke(ctx, ScopedRoleService_GetScopedRoleAssignment_FullMethodName, in, out, cOpts...) - if err != nil { - return nil, err - } - return out, nil -} - -func (c *scopedRoleServiceClient) ListScopedRoleAssignments(ctx context.Context, in *ListScopedRoleAssignmentsRequest, opts ...grpc.CallOption) (*ListScopedRoleAssignmentsResponse, error) { - cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) - out := new(ListScopedRoleAssignmentsResponse) - err := c.cc.Invoke(ctx, ScopedRoleService_ListScopedRoleAssignments_FullMethodName, in, out, cOpts...) - if err != nil { - return nil, err - } - return out, nil -} - -func (c *scopedRoleServiceClient) CreateScopedRoleAssignment(ctx context.Context, in *CreateScopedRoleAssignmentRequest, opts ...grpc.CallOption) (*CreateScopedRoleAssignmentResponse, error) { - cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) - out := new(CreateScopedRoleAssignmentResponse) - err := c.cc.Invoke(ctx, ScopedRoleService_CreateScopedRoleAssignment_FullMethodName, in, out, cOpts...) - if err != nil { - return nil, err - } - return out, nil -} - -func (c *scopedRoleServiceClient) DeleteScopedRoleAssignment(ctx context.Context, in *DeleteScopedRoleAssignmentRequest, opts ...grpc.CallOption) (*DeleteScopedRoleAssignmentResponse, error) { - cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) - out := new(DeleteScopedRoleAssignmentResponse) - err := c.cc.Invoke(ctx, ScopedRoleService_DeleteScopedRoleAssignment_FullMethodName, in, out, cOpts...) 
- if err != nil { - return nil, err - } - return out, nil -} - -// ScopedRoleServiceServer is the server API for ScopedRoleService service. -// All implementations must embed UnimplementedScopedRoleServiceServer -// for forward compatibility. -// -// ScopedRoleService provides an API for managing scoped role resources and their assignments. -type ScopedRoleServiceServer interface { - // GetScopedRole gets a scoped role by name. - GetScopedRole(context.Context, *GetScopedRoleRequest) (*GetScopedRoleResponse, error) - // ListScopedRoles returns a paginated list of scoped roles. - ListScopedRoles(context.Context, *ListScopedRolesRequest) (*ListScopedRolesResponse, error) - // CreateScopedRole creates a new scoped role. - CreateScopedRole(context.Context, *CreateScopedRoleRequest) (*CreateScopedRoleResponse, error) - // UpdateScopedRole updates a scoped role. - UpdateScopedRole(context.Context, *UpdateScopedRoleRequest) (*UpdateScopedRoleResponse, error) - // DeleteScopedRole deletes a scoped role. - DeleteScopedRole(context.Context, *DeleteScopedRoleRequest) (*DeleteScopedRoleResponse, error) - // GetScopedRoleAssignment gets a scoped role assignment by name. - GetScopedRoleAssignment(context.Context, *GetScopedRoleAssignmentRequest) (*GetScopedRoleAssignmentResponse, error) - // ListScopedRoleAssignments returns a paginated list of scoped role assignments. - ListScopedRoleAssignments(context.Context, *ListScopedRoleAssignmentsRequest) (*ListScopedRoleAssignmentsResponse, error) - // CreateScopedRoleAssignment creates a new scoped role assignment. - CreateScopedRoleAssignment(context.Context, *CreateScopedRoleAssignmentRequest) (*CreateScopedRoleAssignmentResponse, error) - // DeleteScopedRoleAssignment deletes a scoped role assignment. 
- DeleteScopedRoleAssignment(context.Context, *DeleteScopedRoleAssignmentRequest) (*DeleteScopedRoleAssignmentResponse, error) - mustEmbedUnimplementedScopedRoleServiceServer() -} - -// UnimplementedScopedRoleServiceServer must be embedded to have -// forward compatible implementations. -// -// NOTE: this should be embedded by value instead of pointer to avoid a nil -// pointer dereference when methods are called. -type UnimplementedScopedRoleServiceServer struct{} - -func (UnimplementedScopedRoleServiceServer) GetScopedRole(context.Context, *GetScopedRoleRequest) (*GetScopedRoleResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method GetScopedRole not implemented") -} -func (UnimplementedScopedRoleServiceServer) ListScopedRoles(context.Context, *ListScopedRolesRequest) (*ListScopedRolesResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method ListScopedRoles not implemented") -} -func (UnimplementedScopedRoleServiceServer) CreateScopedRole(context.Context, *CreateScopedRoleRequest) (*CreateScopedRoleResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method CreateScopedRole not implemented") -} -func (UnimplementedScopedRoleServiceServer) UpdateScopedRole(context.Context, *UpdateScopedRoleRequest) (*UpdateScopedRoleResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method UpdateScopedRole not implemented") -} -func (UnimplementedScopedRoleServiceServer) DeleteScopedRole(context.Context, *DeleteScopedRoleRequest) (*DeleteScopedRoleResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method DeleteScopedRole not implemented") -} -func (UnimplementedScopedRoleServiceServer) GetScopedRoleAssignment(context.Context, *GetScopedRoleAssignmentRequest) (*GetScopedRoleAssignmentResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method GetScopedRoleAssignment not implemented") -} -func (UnimplementedScopedRoleServiceServer) ListScopedRoleAssignments(context.Context, 
*ListScopedRoleAssignmentsRequest) (*ListScopedRoleAssignmentsResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method ListScopedRoleAssignments not implemented") -} -func (UnimplementedScopedRoleServiceServer) CreateScopedRoleAssignment(context.Context, *CreateScopedRoleAssignmentRequest) (*CreateScopedRoleAssignmentResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method CreateScopedRoleAssignment not implemented") -} -func (UnimplementedScopedRoleServiceServer) DeleteScopedRoleAssignment(context.Context, *DeleteScopedRoleAssignmentRequest) (*DeleteScopedRoleAssignmentResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method DeleteScopedRoleAssignment not implemented") -} -func (UnimplementedScopedRoleServiceServer) mustEmbedUnimplementedScopedRoleServiceServer() {} -func (UnimplementedScopedRoleServiceServer) testEmbeddedByValue() {} - -// UnsafeScopedRoleServiceServer may be embedded to opt out of forward compatibility for this service. -// Use of this interface is not recommended, as added methods to ScopedRoleServiceServer will -// result in compilation errors. -type UnsafeScopedRoleServiceServer interface { - mustEmbedUnimplementedScopedRoleServiceServer() -} - -func RegisterScopedRoleServiceServer(s grpc.ServiceRegistrar, srv ScopedRoleServiceServer) { - // If the following call pancis, it indicates UnimplementedScopedRoleServiceServer was - // embedded by pointer and is nil. This will cause panics if an - // unimplemented method is ever invoked, so we test this at initialization - // time to prevent it from happening at runtime later due to I/O. 
- if t, ok := srv.(interface{ testEmbeddedByValue() }); ok { - t.testEmbeddedByValue() - } - s.RegisterService(&ScopedRoleService_ServiceDesc, srv) -} - -func _ScopedRoleService_GetScopedRole_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { - in := new(GetScopedRoleRequest) - if err := dec(in); err != nil { - return nil, err - } - if interceptor == nil { - return srv.(ScopedRoleServiceServer).GetScopedRole(ctx, in) - } - info := &grpc.UnaryServerInfo{ - Server: srv, - FullMethod: ScopedRoleService_GetScopedRole_FullMethodName, - } - handler := func(ctx context.Context, req interface{}) (interface{}, error) { - return srv.(ScopedRoleServiceServer).GetScopedRole(ctx, req.(*GetScopedRoleRequest)) - } - return interceptor(ctx, in, info, handler) -} - -func _ScopedRoleService_ListScopedRoles_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { - in := new(ListScopedRolesRequest) - if err := dec(in); err != nil { - return nil, err - } - if interceptor == nil { - return srv.(ScopedRoleServiceServer).ListScopedRoles(ctx, in) - } - info := &grpc.UnaryServerInfo{ - Server: srv, - FullMethod: ScopedRoleService_ListScopedRoles_FullMethodName, - } - handler := func(ctx context.Context, req interface{}) (interface{}, error) { - return srv.(ScopedRoleServiceServer).ListScopedRoles(ctx, req.(*ListScopedRolesRequest)) - } - return interceptor(ctx, in, info, handler) -} - -func _ScopedRoleService_CreateScopedRole_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { - in := new(CreateScopedRoleRequest) - if err := dec(in); err != nil { - return nil, err - } - if interceptor == nil { - return srv.(ScopedRoleServiceServer).CreateScopedRole(ctx, in) - } - info := &grpc.UnaryServerInfo{ - Server: srv, - FullMethod: 
ScopedRoleService_CreateScopedRole_FullMethodName, - } - handler := func(ctx context.Context, req interface{}) (interface{}, error) { - return srv.(ScopedRoleServiceServer).CreateScopedRole(ctx, req.(*CreateScopedRoleRequest)) - } - return interceptor(ctx, in, info, handler) -} - -func _ScopedRoleService_UpdateScopedRole_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { - in := new(UpdateScopedRoleRequest) - if err := dec(in); err != nil { - return nil, err - } - if interceptor == nil { - return srv.(ScopedRoleServiceServer).UpdateScopedRole(ctx, in) - } - info := &grpc.UnaryServerInfo{ - Server: srv, - FullMethod: ScopedRoleService_UpdateScopedRole_FullMethodName, - } - handler := func(ctx context.Context, req interface{}) (interface{}, error) { - return srv.(ScopedRoleServiceServer).UpdateScopedRole(ctx, req.(*UpdateScopedRoleRequest)) - } - return interceptor(ctx, in, info, handler) -} - -func _ScopedRoleService_DeleteScopedRole_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { - in := new(DeleteScopedRoleRequest) - if err := dec(in); err != nil { - return nil, err - } - if interceptor == nil { - return srv.(ScopedRoleServiceServer).DeleteScopedRole(ctx, in) - } - info := &grpc.UnaryServerInfo{ - Server: srv, - FullMethod: ScopedRoleService_DeleteScopedRole_FullMethodName, - } - handler := func(ctx context.Context, req interface{}) (interface{}, error) { - return srv.(ScopedRoleServiceServer).DeleteScopedRole(ctx, req.(*DeleteScopedRoleRequest)) - } - return interceptor(ctx, in, info, handler) -} - -func _ScopedRoleService_GetScopedRoleAssignment_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { - in := new(GetScopedRoleAssignmentRequest) - if err := dec(in); err != nil { - return nil, err - } - if 
interceptor == nil { - return srv.(ScopedRoleServiceServer).GetScopedRoleAssignment(ctx, in) - } - info := &grpc.UnaryServerInfo{ - Server: srv, - FullMethod: ScopedRoleService_GetScopedRoleAssignment_FullMethodName, - } - handler := func(ctx context.Context, req interface{}) (interface{}, error) { - return srv.(ScopedRoleServiceServer).GetScopedRoleAssignment(ctx, req.(*GetScopedRoleAssignmentRequest)) - } - return interceptor(ctx, in, info, handler) -} - -func _ScopedRoleService_ListScopedRoleAssignments_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { - in := new(ListScopedRoleAssignmentsRequest) - if err := dec(in); err != nil { - return nil, err - } - if interceptor == nil { - return srv.(ScopedRoleServiceServer).ListScopedRoleAssignments(ctx, in) - } - info := &grpc.UnaryServerInfo{ - Server: srv, - FullMethod: ScopedRoleService_ListScopedRoleAssignments_FullMethodName, - } - handler := func(ctx context.Context, req interface{}) (interface{}, error) { - return srv.(ScopedRoleServiceServer).ListScopedRoleAssignments(ctx, req.(*ListScopedRoleAssignmentsRequest)) - } - return interceptor(ctx, in, info, handler) -} - -func _ScopedRoleService_CreateScopedRoleAssignment_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { - in := new(CreateScopedRoleAssignmentRequest) - if err := dec(in); err != nil { - return nil, err - } - if interceptor == nil { - return srv.(ScopedRoleServiceServer).CreateScopedRoleAssignment(ctx, in) - } - info := &grpc.UnaryServerInfo{ - Server: srv, - FullMethod: ScopedRoleService_CreateScopedRoleAssignment_FullMethodName, - } - handler := func(ctx context.Context, req interface{}) (interface{}, error) { - return srv.(ScopedRoleServiceServer).CreateScopedRoleAssignment(ctx, req.(*CreateScopedRoleAssignmentRequest)) - } - return interceptor(ctx, in, info, handler) -} - 
-func _ScopedRoleService_DeleteScopedRoleAssignment_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { - in := new(DeleteScopedRoleAssignmentRequest) - if err := dec(in); err != nil { - return nil, err - } - if interceptor == nil { - return srv.(ScopedRoleServiceServer).DeleteScopedRoleAssignment(ctx, in) - } - info := &grpc.UnaryServerInfo{ - Server: srv, - FullMethod: ScopedRoleService_DeleteScopedRoleAssignment_FullMethodName, - } - handler := func(ctx context.Context, req interface{}) (interface{}, error) { - return srv.(ScopedRoleServiceServer).DeleteScopedRoleAssignment(ctx, req.(*DeleteScopedRoleAssignmentRequest)) - } - return interceptor(ctx, in, info, handler) -} - -// ScopedRoleService_ServiceDesc is the grpc.ServiceDesc for ScopedRoleService service. -// It's only intended for direct use with grpc.RegisterService, -// and not to be introspected or modified (even as a copy) -var ScopedRoleService_ServiceDesc = grpc.ServiceDesc{ - ServiceName: "teleport.scopedrole.v1.ScopedRoleService", - HandlerType: (*ScopedRoleServiceServer)(nil), - Methods: []grpc.MethodDesc{ - { - MethodName: "GetScopedRole", - Handler: _ScopedRoleService_GetScopedRole_Handler, - }, - { - MethodName: "ListScopedRoles", - Handler: _ScopedRoleService_ListScopedRoles_Handler, - }, - { - MethodName: "CreateScopedRole", - Handler: _ScopedRoleService_CreateScopedRole_Handler, - }, - { - MethodName: "UpdateScopedRole", - Handler: _ScopedRoleService_UpdateScopedRole_Handler, - }, - { - MethodName: "DeleteScopedRole", - Handler: _ScopedRoleService_DeleteScopedRole_Handler, - }, - { - MethodName: "GetScopedRoleAssignment", - Handler: _ScopedRoleService_GetScopedRoleAssignment_Handler, - }, - { - MethodName: "ListScopedRoleAssignments", - Handler: _ScopedRoleService_ListScopedRoleAssignments_Handler, - }, - { - MethodName: "CreateScopedRoleAssignment", - Handler: 
_ScopedRoleService_CreateScopedRoleAssignment_Handler, - }, - { - MethodName: "DeleteScopedRoleAssignment", - Handler: _ScopedRoleService_DeleteScopedRoleAssignment_Handler, - }, - }, - Streams: []grpc.StreamDesc{}, - Metadata: "teleport/scopedrole/v1/service.proto", -} diff --git a/api/gen/proto/go/teleport/scopedtoken/v1/service.pb.go b/api/gen/proto/go/teleport/scopedtoken/v1/service.pb.go deleted file mode 100644 index 2f36dd7ccb1a4..0000000000000 --- a/api/gen/proto/go/teleport/scopedtoken/v1/service.pb.go +++ /dev/null @@ -1,647 +0,0 @@ -// Copyright 2025 Gravitational, Inc -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -// Code generated by protoc-gen-go. DO NOT EDIT. -// versions: -// protoc-gen-go v1.36.6 -// protoc (unknown) -// source: teleport/scopedtoken/v1/service.proto - -package scopedtoken - -import ( - v1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/v1" - protoreflect "google.golang.org/protobuf/reflect/protoreflect" - protoimpl "google.golang.org/protobuf/runtime/protoimpl" - reflect "reflect" - sync "sync" - unsafe "unsafe" -) - -const ( - // Verify that this generated code is sufficiently up-to-date. - _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) - // Verify that runtime/protoimpl is sufficiently up-to-date. - _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) -) - -// GetScopedTokenRequest is the request to get a scoped token. 
-type GetScopedTokenRequest struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Name is the name of the scoped token. - Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *GetScopedTokenRequest) Reset() { - *x = GetScopedTokenRequest{} - mi := &file_teleport_scopedtoken_v1_service_proto_msgTypes[0] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *GetScopedTokenRequest) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*GetScopedTokenRequest) ProtoMessage() {} - -func (x *GetScopedTokenRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedtoken_v1_service_proto_msgTypes[0] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use GetScopedTokenRequest.ProtoReflect.Descriptor instead. -func (*GetScopedTokenRequest) Descriptor() ([]byte, []int) { - return file_teleport_scopedtoken_v1_service_proto_rawDescGZIP(), []int{0} -} - -func (x *GetScopedTokenRequest) GetName() string { - if x != nil { - return x.Name - } - return "" -} - -// GetScopedTokenResponse is the response to get a scoped token. -type GetScopedTokenResponse struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Token is the scoped token. 
- Token *ScopedToken `protobuf:"bytes,1,opt,name=token,proto3" json:"token,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *GetScopedTokenResponse) Reset() { - *x = GetScopedTokenResponse{} - mi := &file_teleport_scopedtoken_v1_service_proto_msgTypes[1] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *GetScopedTokenResponse) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*GetScopedTokenResponse) ProtoMessage() {} - -func (x *GetScopedTokenResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedtoken_v1_service_proto_msgTypes[1] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use GetScopedTokenResponse.ProtoReflect.Descriptor instead. -func (*GetScopedTokenResponse) Descriptor() ([]byte, []int) { - return file_teleport_scopedtoken_v1_service_proto_rawDescGZIP(), []int{1} -} - -func (x *GetScopedTokenResponse) GetToken() *ScopedToken { - if x != nil { - return x.Token - } - return nil -} - -// ListScopedTokensRequest is the request to list scoped tokens. -type ListScopedTokensRequest struct { - state protoimpl.MessageState `protogen:"open.v1"` - // ResourceScope filters tokens by their resource scope if specified. - ResourceScope *v1.Filter `protobuf:"bytes,1,opt,name=resource_scope,json=resourceScope,proto3" json:"resource_scope,omitempty"` - // AssignedScope filters tokens by their assigned scope if specified. - AssignedScope *v1.Filter `protobuf:"bytes,2,opt,name=assigned_scope,json=assignedScope,proto3" json:"assigned_scope,omitempty"` - // Cursor is the pagination cursor. - Cursor string `protobuf:"bytes,3,opt,name=cursor,proto3" json:"cursor,omitempty"` - // Limit is the maximum number of results to return. 
- Limit uint32 `protobuf:"varint,4,opt,name=limit,proto3" json:"limit,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *ListScopedTokensRequest) Reset() { - *x = ListScopedTokensRequest{} - mi := &file_teleport_scopedtoken_v1_service_proto_msgTypes[2] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *ListScopedTokensRequest) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*ListScopedTokensRequest) ProtoMessage() {} - -func (x *ListScopedTokensRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedtoken_v1_service_proto_msgTypes[2] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use ListScopedTokensRequest.ProtoReflect.Descriptor instead. -func (*ListScopedTokensRequest) Descriptor() ([]byte, []int) { - return file_teleport_scopedtoken_v1_service_proto_rawDescGZIP(), []int{2} -} - -func (x *ListScopedTokensRequest) GetResourceScope() *v1.Filter { - if x != nil { - return x.ResourceScope - } - return nil -} - -func (x *ListScopedTokensRequest) GetAssignedScope() *v1.Filter { - if x != nil { - return x.AssignedScope - } - return nil -} - -func (x *ListScopedTokensRequest) GetCursor() string { - if x != nil { - return x.Cursor - } - return "" -} - -func (x *ListScopedTokensRequest) GetLimit() uint32 { - if x != nil { - return x.Limit - } - return 0 -} - -// ListScopedTokensResponse is the response to list scoped tokens. -type ListScopedTokensResponse struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Tokens is the list of scoped tokens. - Tokens []*ScopedToken `protobuf:"bytes,1,rep,name=tokens,proto3" json:"tokens,omitempty"` - // Cursor is the pagination cursor. 
- Cursor string `protobuf:"bytes,2,opt,name=cursor,proto3" json:"cursor,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *ListScopedTokensResponse) Reset() { - *x = ListScopedTokensResponse{} - mi := &file_teleport_scopedtoken_v1_service_proto_msgTypes[3] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *ListScopedTokensResponse) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*ListScopedTokensResponse) ProtoMessage() {} - -func (x *ListScopedTokensResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedtoken_v1_service_proto_msgTypes[3] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use ListScopedTokensResponse.ProtoReflect.Descriptor instead. -func (*ListScopedTokensResponse) Descriptor() ([]byte, []int) { - return file_teleport_scopedtoken_v1_service_proto_rawDescGZIP(), []int{3} -} - -func (x *ListScopedTokensResponse) GetTokens() []*ScopedToken { - if x != nil { - return x.Tokens - } - return nil -} - -func (x *ListScopedTokensResponse) GetCursor() string { - if x != nil { - return x.Cursor - } - return "" -} - -// CreateScopedTokenRequest is the request to create a scoped token. -type CreateScopedTokenRequest struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Token is the scoped token to create. 
- Token *ScopedToken `protobuf:"bytes,1,opt,name=token,proto3" json:"token,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *CreateScopedTokenRequest) Reset() { - *x = CreateScopedTokenRequest{} - mi := &file_teleport_scopedtoken_v1_service_proto_msgTypes[4] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *CreateScopedTokenRequest) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*CreateScopedTokenRequest) ProtoMessage() {} - -func (x *CreateScopedTokenRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedtoken_v1_service_proto_msgTypes[4] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use CreateScopedTokenRequest.ProtoReflect.Descriptor instead. -func (*CreateScopedTokenRequest) Descriptor() ([]byte, []int) { - return file_teleport_scopedtoken_v1_service_proto_rawDescGZIP(), []int{4} -} - -func (x *CreateScopedTokenRequest) GetToken() *ScopedToken { - if x != nil { - return x.Token - } - return nil -} - -// CreateScopedTokenResponse is the response to create a scoped token. -type CreateScopedTokenResponse struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Token is the scoped token that was created. 
- Token *ScopedToken `protobuf:"bytes,1,opt,name=token,proto3" json:"token,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *CreateScopedTokenResponse) Reset() { - *x = CreateScopedTokenResponse{} - mi := &file_teleport_scopedtoken_v1_service_proto_msgTypes[5] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *CreateScopedTokenResponse) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*CreateScopedTokenResponse) ProtoMessage() {} - -func (x *CreateScopedTokenResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedtoken_v1_service_proto_msgTypes[5] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use CreateScopedTokenResponse.ProtoReflect.Descriptor instead. -func (*CreateScopedTokenResponse) Descriptor() ([]byte, []int) { - return file_teleport_scopedtoken_v1_service_proto_rawDescGZIP(), []int{5} -} - -func (x *CreateScopedTokenResponse) GetToken() *ScopedToken { - if x != nil { - return x.Token - } - return nil -} - -// UpdateScopedTokenRequest is the request to update a scoped token. -type UpdateScopedTokenRequest struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Token is the scoped token to update. 
- Token *ScopedToken `protobuf:"bytes,1,opt,name=token,proto3" json:"token,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *UpdateScopedTokenRequest) Reset() { - *x = UpdateScopedTokenRequest{} - mi := &file_teleport_scopedtoken_v1_service_proto_msgTypes[6] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *UpdateScopedTokenRequest) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*UpdateScopedTokenRequest) ProtoMessage() {} - -func (x *UpdateScopedTokenRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedtoken_v1_service_proto_msgTypes[6] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use UpdateScopedTokenRequest.ProtoReflect.Descriptor instead. -func (*UpdateScopedTokenRequest) Descriptor() ([]byte, []int) { - return file_teleport_scopedtoken_v1_service_proto_rawDescGZIP(), []int{6} -} - -func (x *UpdateScopedTokenRequest) GetToken() *ScopedToken { - if x != nil { - return x.Token - } - return nil -} - -// UpdateScopedTokenResponse is the response to update a scoped token. -type UpdateScopedTokenResponse struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Token is the post-update scoped token. 
- Token *ScopedToken `protobuf:"bytes,1,opt,name=token,proto3" json:"token,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *UpdateScopedTokenResponse) Reset() { - *x = UpdateScopedTokenResponse{} - mi := &file_teleport_scopedtoken_v1_service_proto_msgTypes[7] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *UpdateScopedTokenResponse) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*UpdateScopedTokenResponse) ProtoMessage() {} - -func (x *UpdateScopedTokenResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedtoken_v1_service_proto_msgTypes[7] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use UpdateScopedTokenResponse.ProtoReflect.Descriptor instead. -func (*UpdateScopedTokenResponse) Descriptor() ([]byte, []int) { - return file_teleport_scopedtoken_v1_service_proto_rawDescGZIP(), []int{7} -} - -func (x *UpdateScopedTokenResponse) GetToken() *ScopedToken { - if x != nil { - return x.Token - } - return nil -} - -// DeleteScopedTokenRequest is the request to delete a scoped token. -type DeleteScopedTokenRequest struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Name is the name of the scoped token to delete. - Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` - // Revision asserts the revision of the scoped token to delete (optional). 
- Revision string `protobuf:"bytes,2,opt,name=revision,proto3" json:"revision,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *DeleteScopedTokenRequest) Reset() { - *x = DeleteScopedTokenRequest{} - mi := &file_teleport_scopedtoken_v1_service_proto_msgTypes[8] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *DeleteScopedTokenRequest) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*DeleteScopedTokenRequest) ProtoMessage() {} - -func (x *DeleteScopedTokenRequest) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedtoken_v1_service_proto_msgTypes[8] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use DeleteScopedTokenRequest.ProtoReflect.Descriptor instead. -func (*DeleteScopedTokenRequest) Descriptor() ([]byte, []int) { - return file_teleport_scopedtoken_v1_service_proto_rawDescGZIP(), []int{8} -} - -func (x *DeleteScopedTokenRequest) GetName() string { - if x != nil { - return x.Name - } - return "" -} - -func (x *DeleteScopedTokenRequest) GetRevision() string { - if x != nil { - return x.Revision - } - return "" -} - -// DeleteScopedTokenResponse is the response to delete a scoped token. 
-type DeleteScopedTokenResponse struct { - state protoimpl.MessageState `protogen:"open.v1"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *DeleteScopedTokenResponse) Reset() { - *x = DeleteScopedTokenResponse{} - mi := &file_teleport_scopedtoken_v1_service_proto_msgTypes[9] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *DeleteScopedTokenResponse) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*DeleteScopedTokenResponse) ProtoMessage() {} - -func (x *DeleteScopedTokenResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedtoken_v1_service_proto_msgTypes[9] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use DeleteScopedTokenResponse.ProtoReflect.Descriptor instead. -func (*DeleteScopedTokenResponse) Descriptor() ([]byte, []int) { - return file_teleport_scopedtoken_v1_service_proto_rawDescGZIP(), []int{9} -} - -var File_teleport_scopedtoken_v1_service_proto protoreflect.FileDescriptor - -const file_teleport_scopedtoken_v1_service_proto_rawDesc = "" + - "\n" + - "%teleport/scopedtoken/v1/service.proto\x12\x17teleport.scopedtoken.v1\x1a#teleport/scopedtoken/v1/token.proto\x1a\x1fteleport/scopes/v1/scopes.proto\"+\n" + - "\x15GetScopedTokenRequest\x12\x12\n" + - "\x04name\x18\x01 \x01(\tR\x04name\"T\n" + - "\x16GetScopedTokenResponse\x12:\n" + - "\x05token\x18\x01 \x01(\v2$.teleport.scopedtoken.v1.ScopedTokenR\x05token\"\xcd\x01\n" + - "\x17ListScopedTokensRequest\x12A\n" + - "\x0eresource_scope\x18\x01 \x01(\v2\x1a.teleport.scopes.v1.FilterR\rresourceScope\x12A\n" + - "\x0eassigned_scope\x18\x02 \x01(\v2\x1a.teleport.scopes.v1.FilterR\rassignedScope\x12\x16\n" + - "\x06cursor\x18\x03 \x01(\tR\x06cursor\x12\x14\n" + - "\x05limit\x18\x04 \x01(\rR\x05limit\"p\n" + - 
"\x18ListScopedTokensResponse\x12<\n" + - "\x06tokens\x18\x01 \x03(\v2$.teleport.scopedtoken.v1.ScopedTokenR\x06tokens\x12\x16\n" + - "\x06cursor\x18\x02 \x01(\tR\x06cursor\"V\n" + - "\x18CreateScopedTokenRequest\x12:\n" + - "\x05token\x18\x01 \x01(\v2$.teleport.scopedtoken.v1.ScopedTokenR\x05token\"W\n" + - "\x19CreateScopedTokenResponse\x12:\n" + - "\x05token\x18\x01 \x01(\v2$.teleport.scopedtoken.v1.ScopedTokenR\x05token\"V\n" + - "\x18UpdateScopedTokenRequest\x12:\n" + - "\x05token\x18\x01 \x01(\v2$.teleport.scopedtoken.v1.ScopedTokenR\x05token\"W\n" + - "\x19UpdateScopedTokenResponse\x12:\n" + - "\x05token\x18\x01 \x01(\v2$.teleport.scopedtoken.v1.ScopedTokenR\x05token\"J\n" + - "\x18DeleteScopedTokenRequest\x12\x12\n" + - "\x04name\x18\x01 \x01(\tR\x04name\x12\x1a\n" + - "\brevision\x18\x02 \x01(\tR\brevision\"\x1b\n" + - "\x19DeleteScopedTokenResponse2\xf4\x04\n" + - "\x12ScopedTokenService\x12q\n" + - "\x0eGetScopedToken\x12..teleport.scopedtoken.v1.GetScopedTokenRequest\x1a/.teleport.scopedtoken.v1.GetScopedTokenResponse\x12w\n" + - "\x10ListScopedTokens\x120.teleport.scopedtoken.v1.ListScopedTokensRequest\x1a1.teleport.scopedtoken.v1.ListScopedTokensResponse\x12z\n" + - "\x11CreateScopedToken\x121.teleport.scopedtoken.v1.CreateScopedTokenRequest\x1a2.teleport.scopedtoken.v1.CreateScopedTokenResponse\x12z\n" + - "\x11UpdateScopedToken\x121.teleport.scopedtoken.v1.UpdateScopedTokenRequest\x1a2.teleport.scopedtoken.v1.UpdateScopedTokenResponse\x12z\n" + - "\x11DeleteScopedToken\x121.teleport.scopedtoken.v1.DeleteScopedTokenRequest\x1a2.teleport.scopedtoken.v1.DeleteScopedTokenResponseBXZVgithub.com/gravitational/teleport/api/gen/proto/go/teleport/scopedtoken/v1;scopedtokenb\x06proto3" - -var ( - file_teleport_scopedtoken_v1_service_proto_rawDescOnce sync.Once - file_teleport_scopedtoken_v1_service_proto_rawDescData []byte -) - -func file_teleport_scopedtoken_v1_service_proto_rawDescGZIP() []byte { - 
file_teleport_scopedtoken_v1_service_proto_rawDescOnce.Do(func() { - file_teleport_scopedtoken_v1_service_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_teleport_scopedtoken_v1_service_proto_rawDesc), len(file_teleport_scopedtoken_v1_service_proto_rawDesc))) - }) - return file_teleport_scopedtoken_v1_service_proto_rawDescData -} - -var file_teleport_scopedtoken_v1_service_proto_msgTypes = make([]protoimpl.MessageInfo, 10) -var file_teleport_scopedtoken_v1_service_proto_goTypes = []any{ - (*GetScopedTokenRequest)(nil), // 0: teleport.scopedtoken.v1.GetScopedTokenRequest - (*GetScopedTokenResponse)(nil), // 1: teleport.scopedtoken.v1.GetScopedTokenResponse - (*ListScopedTokensRequest)(nil), // 2: teleport.scopedtoken.v1.ListScopedTokensRequest - (*ListScopedTokensResponse)(nil), // 3: teleport.scopedtoken.v1.ListScopedTokensResponse - (*CreateScopedTokenRequest)(nil), // 4: teleport.scopedtoken.v1.CreateScopedTokenRequest - (*CreateScopedTokenResponse)(nil), // 5: teleport.scopedtoken.v1.CreateScopedTokenResponse - (*UpdateScopedTokenRequest)(nil), // 6: teleport.scopedtoken.v1.UpdateScopedTokenRequest - (*UpdateScopedTokenResponse)(nil), // 7: teleport.scopedtoken.v1.UpdateScopedTokenResponse - (*DeleteScopedTokenRequest)(nil), // 8: teleport.scopedtoken.v1.DeleteScopedTokenRequest - (*DeleteScopedTokenResponse)(nil), // 9: teleport.scopedtoken.v1.DeleteScopedTokenResponse - (*ScopedToken)(nil), // 10: teleport.scopedtoken.v1.ScopedToken - (*v1.Filter)(nil), // 11: teleport.scopes.v1.Filter -} -var file_teleport_scopedtoken_v1_service_proto_depIdxs = []int32{ - 10, // 0: teleport.scopedtoken.v1.GetScopedTokenResponse.token:type_name -> teleport.scopedtoken.v1.ScopedToken - 11, // 1: teleport.scopedtoken.v1.ListScopedTokensRequest.resource_scope:type_name -> teleport.scopes.v1.Filter - 11, // 2: teleport.scopedtoken.v1.ListScopedTokensRequest.assigned_scope:type_name -> teleport.scopes.v1.Filter - 10, // 3: 
teleport.scopedtoken.v1.ListScopedTokensResponse.tokens:type_name -> teleport.scopedtoken.v1.ScopedToken - 10, // 4: teleport.scopedtoken.v1.CreateScopedTokenRequest.token:type_name -> teleport.scopedtoken.v1.ScopedToken - 10, // 5: teleport.scopedtoken.v1.CreateScopedTokenResponse.token:type_name -> teleport.scopedtoken.v1.ScopedToken - 10, // 6: teleport.scopedtoken.v1.UpdateScopedTokenRequest.token:type_name -> teleport.scopedtoken.v1.ScopedToken - 10, // 7: teleport.scopedtoken.v1.UpdateScopedTokenResponse.token:type_name -> teleport.scopedtoken.v1.ScopedToken - 0, // 8: teleport.scopedtoken.v1.ScopedTokenService.GetScopedToken:input_type -> teleport.scopedtoken.v1.GetScopedTokenRequest - 2, // 9: teleport.scopedtoken.v1.ScopedTokenService.ListScopedTokens:input_type -> teleport.scopedtoken.v1.ListScopedTokensRequest - 4, // 10: teleport.scopedtoken.v1.ScopedTokenService.CreateScopedToken:input_type -> teleport.scopedtoken.v1.CreateScopedTokenRequest - 6, // 11: teleport.scopedtoken.v1.ScopedTokenService.UpdateScopedToken:input_type -> teleport.scopedtoken.v1.UpdateScopedTokenRequest - 8, // 12: teleport.scopedtoken.v1.ScopedTokenService.DeleteScopedToken:input_type -> teleport.scopedtoken.v1.DeleteScopedTokenRequest - 1, // 13: teleport.scopedtoken.v1.ScopedTokenService.GetScopedToken:output_type -> teleport.scopedtoken.v1.GetScopedTokenResponse - 3, // 14: teleport.scopedtoken.v1.ScopedTokenService.ListScopedTokens:output_type -> teleport.scopedtoken.v1.ListScopedTokensResponse - 5, // 15: teleport.scopedtoken.v1.ScopedTokenService.CreateScopedToken:output_type -> teleport.scopedtoken.v1.CreateScopedTokenResponse - 7, // 16: teleport.scopedtoken.v1.ScopedTokenService.UpdateScopedToken:output_type -> teleport.scopedtoken.v1.UpdateScopedTokenResponse - 9, // 17: teleport.scopedtoken.v1.ScopedTokenService.DeleteScopedToken:output_type -> teleport.scopedtoken.v1.DeleteScopedTokenResponse - 13, // [13:18] is the sub-list for method output_type - 8, // [8:13] is 
the sub-list for method input_type - 8, // [8:8] is the sub-list for extension type_name - 8, // [8:8] is the sub-list for extension extendee - 0, // [0:8] is the sub-list for field type_name -} - -func init() { file_teleport_scopedtoken_v1_service_proto_init() } -func file_teleport_scopedtoken_v1_service_proto_init() { - if File_teleport_scopedtoken_v1_service_proto != nil { - return - } - file_teleport_scopedtoken_v1_token_proto_init() - type x struct{} - out := protoimpl.TypeBuilder{ - File: protoimpl.DescBuilder{ - GoPackagePath: reflect.TypeOf(x{}).PkgPath(), - RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_scopedtoken_v1_service_proto_rawDesc), len(file_teleport_scopedtoken_v1_service_proto_rawDesc)), - NumEnums: 0, - NumMessages: 10, - NumExtensions: 0, - NumServices: 1, - }, - GoTypes: file_teleport_scopedtoken_v1_service_proto_goTypes, - DependencyIndexes: file_teleport_scopedtoken_v1_service_proto_depIdxs, - MessageInfos: file_teleport_scopedtoken_v1_service_proto_msgTypes, - }.Build() - File_teleport_scopedtoken_v1_service_proto = out.File - file_teleport_scopedtoken_v1_service_proto_goTypes = nil - file_teleport_scopedtoken_v1_service_proto_depIdxs = nil -} diff --git a/api/gen/proto/go/teleport/scopedtoken/v1/service_grpc.pb.go b/api/gen/proto/go/teleport/scopedtoken/v1/service_grpc.pb.go deleted file mode 100644 index 93d0abc2deccf..0000000000000 --- a/api/gen/proto/go/teleport/scopedtoken/v1/service_grpc.pb.go +++ /dev/null @@ -1,301 +0,0 @@ -// Copyright 2025 Gravitational, Inc -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-// See the License for the specific language governing permissions and -// limitations under the License. - -// Code generated by protoc-gen-go-grpc. DO NOT EDIT. -// versions: -// - protoc-gen-go-grpc v1.5.1 -// - protoc (unknown) -// source: teleport/scopedtoken/v1/service.proto - -package scopedtoken - -import ( - context "context" - grpc "google.golang.org/grpc" - codes "google.golang.org/grpc/codes" - status "google.golang.org/grpc/status" -) - -// This is a compile-time assertion to ensure that this generated file -// is compatible with the grpc package it is being compiled against. -// Requires gRPC-Go v1.64.0 or later. -const _ = grpc.SupportPackageIsVersion9 - -const ( - ScopedTokenService_GetScopedToken_FullMethodName = "/teleport.scopedtoken.v1.ScopedTokenService/GetScopedToken" - ScopedTokenService_ListScopedTokens_FullMethodName = "/teleport.scopedtoken.v1.ScopedTokenService/ListScopedTokens" - ScopedTokenService_CreateScopedToken_FullMethodName = "/teleport.scopedtoken.v1.ScopedTokenService/CreateScopedToken" - ScopedTokenService_UpdateScopedToken_FullMethodName = "/teleport.scopedtoken.v1.ScopedTokenService/UpdateScopedToken" - ScopedTokenService_DeleteScopedToken_FullMethodName = "/teleport.scopedtoken.v1.ScopedTokenService/DeleteScopedToken" -) - -// ScopedTokenServiceClient is the client API for ScopedTokenService service. -// -// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream. -// -// ScopedTokenService provides an API for managing scoped token resources and their assignments. -type ScopedTokenServiceClient interface { - // GetScopedToken gets a scoped token by name. - GetScopedToken(ctx context.Context, in *GetScopedTokenRequest, opts ...grpc.CallOption) (*GetScopedTokenResponse, error) - // ListScopedTokens returns a paginated list of scoped tokens. 
- ListScopedTokens(ctx context.Context, in *ListScopedTokensRequest, opts ...grpc.CallOption) (*ListScopedTokensResponse, error) - // CreateScopedToken creates a new scoped token. - CreateScopedToken(ctx context.Context, in *CreateScopedTokenRequest, opts ...grpc.CallOption) (*CreateScopedTokenResponse, error) - // UpdateScopedToken updates a scoped token. - UpdateScopedToken(ctx context.Context, in *UpdateScopedTokenRequest, opts ...grpc.CallOption) (*UpdateScopedTokenResponse, error) - // DeleteScopedToken deletes a scoped token. - DeleteScopedToken(ctx context.Context, in *DeleteScopedTokenRequest, opts ...grpc.CallOption) (*DeleteScopedTokenResponse, error) -} - -type scopedTokenServiceClient struct { - cc grpc.ClientConnInterface -} - -func NewScopedTokenServiceClient(cc grpc.ClientConnInterface) ScopedTokenServiceClient { - return &scopedTokenServiceClient{cc} -} - -func (c *scopedTokenServiceClient) GetScopedToken(ctx context.Context, in *GetScopedTokenRequest, opts ...grpc.CallOption) (*GetScopedTokenResponse, error) { - cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) - out := new(GetScopedTokenResponse) - err := c.cc.Invoke(ctx, ScopedTokenService_GetScopedToken_FullMethodName, in, out, cOpts...) - if err != nil { - return nil, err - } - return out, nil -} - -func (c *scopedTokenServiceClient) ListScopedTokens(ctx context.Context, in *ListScopedTokensRequest, opts ...grpc.CallOption) (*ListScopedTokensResponse, error) { - cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) - out := new(ListScopedTokensResponse) - err := c.cc.Invoke(ctx, ScopedTokenService_ListScopedTokens_FullMethodName, in, out, cOpts...) - if err != nil { - return nil, err - } - return out, nil -} - -func (c *scopedTokenServiceClient) CreateScopedToken(ctx context.Context, in *CreateScopedTokenRequest, opts ...grpc.CallOption) (*CreateScopedTokenResponse, error) { - cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) 
- out := new(CreateScopedTokenResponse) - err := c.cc.Invoke(ctx, ScopedTokenService_CreateScopedToken_FullMethodName, in, out, cOpts...) - if err != nil { - return nil, err - } - return out, nil -} - -func (c *scopedTokenServiceClient) UpdateScopedToken(ctx context.Context, in *UpdateScopedTokenRequest, opts ...grpc.CallOption) (*UpdateScopedTokenResponse, error) { - cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) - out := new(UpdateScopedTokenResponse) - err := c.cc.Invoke(ctx, ScopedTokenService_UpdateScopedToken_FullMethodName, in, out, cOpts...) - if err != nil { - return nil, err - } - return out, nil -} - -func (c *scopedTokenServiceClient) DeleteScopedToken(ctx context.Context, in *DeleteScopedTokenRequest, opts ...grpc.CallOption) (*DeleteScopedTokenResponse, error) { - cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) - out := new(DeleteScopedTokenResponse) - err := c.cc.Invoke(ctx, ScopedTokenService_DeleteScopedToken_FullMethodName, in, out, cOpts...) - if err != nil { - return nil, err - } - return out, nil -} - -// ScopedTokenServiceServer is the server API for ScopedTokenService service. -// All implementations must embed UnimplementedScopedTokenServiceServer -// for forward compatibility. -// -// ScopedTokenService provides an API for managing scoped token resources and their assignments. -type ScopedTokenServiceServer interface { - // GetScopedToken gets a scoped token by name. - GetScopedToken(context.Context, *GetScopedTokenRequest) (*GetScopedTokenResponse, error) - // ListScopedTokens returns a paginated list of scoped tokens. - ListScopedTokens(context.Context, *ListScopedTokensRequest) (*ListScopedTokensResponse, error) - // CreateScopedToken creates a new scoped token. - CreateScopedToken(context.Context, *CreateScopedTokenRequest) (*CreateScopedTokenResponse, error) - // UpdateScopedToken updates a scoped token. 
- UpdateScopedToken(context.Context, *UpdateScopedTokenRequest) (*UpdateScopedTokenResponse, error) - // DeleteScopedToken deletes a scoped token. - DeleteScopedToken(context.Context, *DeleteScopedTokenRequest) (*DeleteScopedTokenResponse, error) - mustEmbedUnimplementedScopedTokenServiceServer() -} - -// UnimplementedScopedTokenServiceServer must be embedded to have -// forward compatible implementations. -// -// NOTE: this should be embedded by value instead of pointer to avoid a nil -// pointer dereference when methods are called. -type UnimplementedScopedTokenServiceServer struct{} - -func (UnimplementedScopedTokenServiceServer) GetScopedToken(context.Context, *GetScopedTokenRequest) (*GetScopedTokenResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method GetScopedToken not implemented") -} -func (UnimplementedScopedTokenServiceServer) ListScopedTokens(context.Context, *ListScopedTokensRequest) (*ListScopedTokensResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method ListScopedTokens not implemented") -} -func (UnimplementedScopedTokenServiceServer) CreateScopedToken(context.Context, *CreateScopedTokenRequest) (*CreateScopedTokenResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method CreateScopedToken not implemented") -} -func (UnimplementedScopedTokenServiceServer) UpdateScopedToken(context.Context, *UpdateScopedTokenRequest) (*UpdateScopedTokenResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method UpdateScopedToken not implemented") -} -func (UnimplementedScopedTokenServiceServer) DeleteScopedToken(context.Context, *DeleteScopedTokenRequest) (*DeleteScopedTokenResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method DeleteScopedToken not implemented") -} -func (UnimplementedScopedTokenServiceServer) mustEmbedUnimplementedScopedTokenServiceServer() {} -func (UnimplementedScopedTokenServiceServer) testEmbeddedByValue() {} - -// UnsafeScopedTokenServiceServer may 
be embedded to opt out of forward compatibility for this service. -// Use of this interface is not recommended, as added methods to ScopedTokenServiceServer will -// result in compilation errors. -type UnsafeScopedTokenServiceServer interface { - mustEmbedUnimplementedScopedTokenServiceServer() -} - -func RegisterScopedTokenServiceServer(s grpc.ServiceRegistrar, srv ScopedTokenServiceServer) { - // If the following call panics, it indicates UnimplementedScopedTokenServiceServer was - // embedded by pointer and is nil. This will cause panics if an - // unimplemented method is ever invoked, so we test this at initialization - // time to prevent it from happening at runtime later due to I/O. - if t, ok := srv.(interface{ testEmbeddedByValue() }); ok { - t.testEmbeddedByValue() - } - s.RegisterService(&ScopedTokenService_ServiceDesc, srv) -} - -func _ScopedTokenService_GetScopedToken_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { - in := new(GetScopedTokenRequest) - if err := dec(in); err != nil { - return nil, err - } - if interceptor == nil { - return srv.(ScopedTokenServiceServer).GetScopedToken(ctx, in) - } - info := &grpc.UnaryServerInfo{ - Server: srv, - FullMethod: ScopedTokenService_GetScopedToken_FullMethodName, - } - handler := func(ctx context.Context, req interface{}) (interface{}, error) { - return srv.(ScopedTokenServiceServer).GetScopedToken(ctx, req.(*GetScopedTokenRequest)) - } - return interceptor(ctx, in, info, handler) -} - -func _ScopedTokenService_ListScopedTokens_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { - in := new(ListScopedTokensRequest) - if err := dec(in); err != nil { - return nil, err - } - if interceptor == nil { - return srv.(ScopedTokenServiceServer).ListScopedTokens(ctx, in) - } - info := &grpc.UnaryServerInfo{ - Server: srv, - FullMethod: 
ScopedTokenService_ListScopedTokens_FullMethodName, - } - handler := func(ctx context.Context, req interface{}) (interface{}, error) { - return srv.(ScopedTokenServiceServer).ListScopedTokens(ctx, req.(*ListScopedTokensRequest)) - } - return interceptor(ctx, in, info, handler) -} - -func _ScopedTokenService_CreateScopedToken_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { - in := new(CreateScopedTokenRequest) - if err := dec(in); err != nil { - return nil, err - } - if interceptor == nil { - return srv.(ScopedTokenServiceServer).CreateScopedToken(ctx, in) - } - info := &grpc.UnaryServerInfo{ - Server: srv, - FullMethod: ScopedTokenService_CreateScopedToken_FullMethodName, - } - handler := func(ctx context.Context, req interface{}) (interface{}, error) { - return srv.(ScopedTokenServiceServer).CreateScopedToken(ctx, req.(*CreateScopedTokenRequest)) - } - return interceptor(ctx, in, info, handler) -} - -func _ScopedTokenService_UpdateScopedToken_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { - in := new(UpdateScopedTokenRequest) - if err := dec(in); err != nil { - return nil, err - } - if interceptor == nil { - return srv.(ScopedTokenServiceServer).UpdateScopedToken(ctx, in) - } - info := &grpc.UnaryServerInfo{ - Server: srv, - FullMethod: ScopedTokenService_UpdateScopedToken_FullMethodName, - } - handler := func(ctx context.Context, req interface{}) (interface{}, error) { - return srv.(ScopedTokenServiceServer).UpdateScopedToken(ctx, req.(*UpdateScopedTokenRequest)) - } - return interceptor(ctx, in, info, handler) -} - -func _ScopedTokenService_DeleteScopedToken_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { - in := new(DeleteScopedTokenRequest) - if err := dec(in); err != nil { - return nil, err 
- } - if interceptor == nil { - return srv.(ScopedTokenServiceServer).DeleteScopedToken(ctx, in) - } - info := &grpc.UnaryServerInfo{ - Server: srv, - FullMethod: ScopedTokenService_DeleteScopedToken_FullMethodName, - } - handler := func(ctx context.Context, req interface{}) (interface{}, error) { - return srv.(ScopedTokenServiceServer).DeleteScopedToken(ctx, req.(*DeleteScopedTokenRequest)) - } - return interceptor(ctx, in, info, handler) -} - -// ScopedTokenService_ServiceDesc is the grpc.ServiceDesc for ScopedTokenService service. -// It's only intended for direct use with grpc.RegisterService, -// and not to be introspected or modified (even as a copy) -var ScopedTokenService_ServiceDesc = grpc.ServiceDesc{ - ServiceName: "teleport.scopedtoken.v1.ScopedTokenService", - HandlerType: (*ScopedTokenServiceServer)(nil), - Methods: []grpc.MethodDesc{ - { - MethodName: "GetScopedToken", - Handler: _ScopedTokenService_GetScopedToken_Handler, - }, - { - MethodName: "ListScopedTokens", - Handler: _ScopedTokenService_ListScopedTokens_Handler, - }, - { - MethodName: "CreateScopedToken", - Handler: _ScopedTokenService_CreateScopedToken_Handler, - }, - { - MethodName: "UpdateScopedToken", - Handler: _ScopedTokenService_UpdateScopedToken_Handler, - }, - { - MethodName: "DeleteScopedToken", - Handler: _ScopedTokenService_DeleteScopedToken_Handler, - }, - }, - Streams: []grpc.StreamDesc{}, - Metadata: "teleport/scopedtoken/v1/service.proto", -} diff --git a/api/gen/proto/go/teleport/scopedtoken/v1/token.pb.go b/api/gen/proto/go/teleport/scopedtoken/v1/token.pb.go deleted file mode 100644 index 4653888a0340d..0000000000000 --- a/api/gen/proto/go/teleport/scopedtoken/v1/token.pb.go +++ /dev/null @@ -1,244 +0,0 @@ -// Copyright 2025 Gravitational, Inc -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
-// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -// Code generated by protoc-gen-go. DO NOT EDIT. -// versions: -// protoc-gen-go v1.36.6 -// protoc (unknown) -// source: teleport/scopedtoken/v1/token.proto - -package scopedtoken - -import ( - v1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/header/v1" - protoreflect "google.golang.org/protobuf/reflect/protoreflect" - protoimpl "google.golang.org/protobuf/runtime/protoimpl" - reflect "reflect" - sync "sync" - unsafe "unsafe" -) - -const ( - // Verify that this generated code is sufficiently up-to-date. - _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) - // Verify that runtime/protoimpl is sufficiently up-to-date. - _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) -) - -// ScopedToken is a token whose resource and permissions are scoped. Scoped tokens are used for the provisioning -// of teleport agents locked to specific scopes. Scoped tokens implement a subset of the functionality of standard -// provisioning tokens, specifically tailored to the usecase of limited admins/users provisioning resources within -// sub-scopes over which they have been granted elevated privileges. -type ScopedToken struct { - state protoimpl.MessageState `protogen:"open.v1"` - // Kind is the resource kind. - Kind string `protobuf:"bytes,1,opt,name=kind,proto3" json:"kind,omitempty"` - // SubKind is the resource sub-kind. - SubKind string `protobuf:"bytes,2,opt,name=sub_kind,json=subKind,proto3" json:"sub_kind,omitempty"` - // Version is the resource version. 
- Version string `protobuf:"bytes,3,opt,name=version,proto3" json:"version,omitempty"` - // Metadata contains the resource metadata. - Metadata *v1.Metadata `protobuf:"bytes,4,opt,name=metadata,proto3" json:"metadata,omitempty"` - // Scope is the scope of the token resource. - Scope string `protobuf:"bytes,5,opt,name=scope,proto3" json:"scope,omitempty"` - // Spec is the token specification. - Spec *ScopedTokenSpec `protobuf:"bytes,6,opt,name=spec,proto3" json:"spec,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *ScopedToken) Reset() { - *x = ScopedToken{} - mi := &file_teleport_scopedtoken_v1_token_proto_msgTypes[0] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *ScopedToken) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*ScopedToken) ProtoMessage() {} - -func (x *ScopedToken) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedtoken_v1_token_proto_msgTypes[0] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use ScopedToken.ProtoReflect.Descriptor instead. 
-func (*ScopedToken) Descriptor() ([]byte, []int) { - return file_teleport_scopedtoken_v1_token_proto_rawDescGZIP(), []int{0} -} - -func (x *ScopedToken) GetKind() string { - if x != nil { - return x.Kind - } - return "" -} - -func (x *ScopedToken) GetSubKind() string { - if x != nil { - return x.SubKind - } - return "" -} - -func (x *ScopedToken) GetVersion() string { - if x != nil { - return x.Version - } - return "" -} - -func (x *ScopedToken) GetMetadata() *v1.Metadata { - if x != nil { - return x.Metadata - } - return nil -} - -func (x *ScopedToken) GetScope() string { - if x != nil { - return x.Scope - } - return "" -} - -func (x *ScopedToken) GetSpec() *ScopedTokenSpec { - if x != nil { - return x.Spec - } - return nil -} - -// ScopedTokenSpec is the specification of a scoped token. -type ScopedTokenSpec struct { - state protoimpl.MessageState `protogen:"open.v1"` - // AssignedScope is the scope to which this token is assigned. - AssignedScope string `protobuf:"bytes,1,opt,name=assigned_scope,json=assignedScope,proto3" json:"assigned_scope,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *ScopedTokenSpec) Reset() { - *x = ScopedTokenSpec{} - mi := &file_teleport_scopedtoken_v1_token_proto_msgTypes[1] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *ScopedTokenSpec) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*ScopedTokenSpec) ProtoMessage() {} - -func (x *ScopedTokenSpec) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopedtoken_v1_token_proto_msgTypes[1] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use ScopedTokenSpec.ProtoReflect.Descriptor instead. 
-func (*ScopedTokenSpec) Descriptor() ([]byte, []int) { - return file_teleport_scopedtoken_v1_token_proto_rawDescGZIP(), []int{1} -} - -func (x *ScopedTokenSpec) GetAssignedScope() string { - if x != nil { - return x.AssignedScope - } - return "" -} - -var File_teleport_scopedtoken_v1_token_proto protoreflect.FileDescriptor - -const file_teleport_scopedtoken_v1_token_proto_rawDesc = "" + - "\n" + - "#teleport/scopedtoken/v1/token.proto\x12\x17teleport.scopedtoken.v1\x1a!teleport/header/v1/metadata.proto\"\xe4\x01\n" + - "\vScopedToken\x12\x12\n" + - "\x04kind\x18\x01 \x01(\tR\x04kind\x12\x19\n" + - "\bsub_kind\x18\x02 \x01(\tR\asubKind\x12\x18\n" + - "\aversion\x18\x03 \x01(\tR\aversion\x128\n" + - "\bmetadata\x18\x04 \x01(\v2\x1c.teleport.header.v1.MetadataR\bmetadata\x12\x14\n" + - "\x05scope\x18\x05 \x01(\tR\x05scope\x12<\n" + - "\x04spec\x18\x06 \x01(\v2(.teleport.scopedtoken.v1.ScopedTokenSpecR\x04spec\"8\n" + - "\x0fScopedTokenSpec\x12%\n" + - "\x0eassigned_scope\x18\x01 \x01(\tR\rassignedScopeBXZVgithub.com/gravitational/teleport/api/gen/proto/go/teleport/scopedtoken/v1;scopedtokenb\x06proto3" - -var ( - file_teleport_scopedtoken_v1_token_proto_rawDescOnce sync.Once - file_teleport_scopedtoken_v1_token_proto_rawDescData []byte -) - -func file_teleport_scopedtoken_v1_token_proto_rawDescGZIP() []byte { - file_teleport_scopedtoken_v1_token_proto_rawDescOnce.Do(func() { - file_teleport_scopedtoken_v1_token_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_teleport_scopedtoken_v1_token_proto_rawDesc), len(file_teleport_scopedtoken_v1_token_proto_rawDesc))) - }) - return file_teleport_scopedtoken_v1_token_proto_rawDescData -} - -var file_teleport_scopedtoken_v1_token_proto_msgTypes = make([]protoimpl.MessageInfo, 2) -var file_teleport_scopedtoken_v1_token_proto_goTypes = []any{ - (*ScopedToken)(nil), // 0: teleport.scopedtoken.v1.ScopedToken - (*ScopedTokenSpec)(nil), // 1: teleport.scopedtoken.v1.ScopedTokenSpec - 
(*v1.Metadata)(nil), // 2: teleport.header.v1.Metadata -} -var file_teleport_scopedtoken_v1_token_proto_depIdxs = []int32{ - 2, // 0: teleport.scopedtoken.v1.ScopedToken.metadata:type_name -> teleport.header.v1.Metadata - 1, // 1: teleport.scopedtoken.v1.ScopedToken.spec:type_name -> teleport.scopedtoken.v1.ScopedTokenSpec - 2, // [2:2] is the sub-list for method output_type - 2, // [2:2] is the sub-list for method input_type - 2, // [2:2] is the sub-list for extension type_name - 2, // [2:2] is the sub-list for extension extendee - 0, // [0:2] is the sub-list for field type_name -} - -func init() { file_teleport_scopedtoken_v1_token_proto_init() } -func file_teleport_scopedtoken_v1_token_proto_init() { - if File_teleport_scopedtoken_v1_token_proto != nil { - return - } - type x struct{} - out := protoimpl.TypeBuilder{ - File: protoimpl.DescBuilder{ - GoPackagePath: reflect.TypeOf(x{}).PkgPath(), - RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_scopedtoken_v1_token_proto_rawDesc), len(file_teleport_scopedtoken_v1_token_proto_rawDesc)), - NumEnums: 0, - NumMessages: 2, - NumExtensions: 0, - NumServices: 0, - }, - GoTypes: file_teleport_scopedtoken_v1_token_proto_goTypes, - DependencyIndexes: file_teleport_scopedtoken_v1_token_proto_depIdxs, - MessageInfos: file_teleport_scopedtoken_v1_token_proto_msgTypes, - }.Build() - File_teleport_scopedtoken_v1_token_proto = out.File - file_teleport_scopedtoken_v1_token_proto_goTypes = nil - file_teleport_scopedtoken_v1_token_proto_depIdxs = nil -} diff --git a/api/gen/proto/go/teleport/scopes/access/v1/assignment.pb.go b/api/gen/proto/go/teleport/scopes/access/v1/assignment.pb.go new file mode 100644 index 0000000000000..93f955ac0c836 --- /dev/null +++ b/api/gen/proto/go/teleport/scopes/access/v1/assignment.pb.go @@ -0,0 +1,316 @@ +// Copyright 2025 Gravitational, Inc +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Code generated by protoc-gen-go. DO NOT EDIT. +// versions: +// protoc-gen-go v1.36.8 +// protoc (unknown) +// source: teleport/scopes/access/v1/assignment.proto + +package accessv1 + +import ( + v1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/header/v1" + protoreflect "google.golang.org/protobuf/reflect/protoreflect" + protoimpl "google.golang.org/protobuf/runtime/protoimpl" + reflect "reflect" + sync "sync" + unsafe "unsafe" +) + +const ( + // Verify that this generated code is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) + // Verify that runtime/protoimpl is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) +) + +// ScopedRoleAssignment is a role assignment whose resource and permissions are scoped. A scoped role assignment +// assigns roles to users at scopes. One assignment may contain multiple roles at multiple scopes. Most assignments +// are stored at random IDs, but some assignments created by teleport may have special static names that are +// reserved for teleport's internal use (e.g. for managing the set of subassignments generated by a connector). +type ScopedRoleAssignment struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Kind is the resource kind. + Kind string `protobuf:"bytes,1,opt,name=kind,proto3" json:"kind,omitempty"` + // SubKind is the resource sub-kind. + SubKind string `protobuf:"bytes,2,opt,name=sub_kind,json=subKind,proto3" json:"sub_kind,omitempty"` + // Version is the resource version. 
+ Version string `protobuf:"bytes,3,opt,name=version,proto3" json:"version,omitempty"` + // Metadata contains the resource metadata. + Metadata *v1.Metadata `protobuf:"bytes,4,opt,name=metadata,proto3" json:"metadata,omitempty"` + // Scope is the scope of the role assignment resource. + Scope string `protobuf:"bytes,5,opt,name=scope,proto3" json:"scope,omitempty"` + // Spec is the role assignment specification. + Spec *ScopedRoleAssignmentSpec `protobuf:"bytes,6,opt,name=spec,proto3" json:"spec,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ScopedRoleAssignment) Reset() { + *x = ScopedRoleAssignment{} + mi := &file_teleport_scopes_access_v1_assignment_proto_msgTypes[0] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ScopedRoleAssignment) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ScopedRoleAssignment) ProtoMessage() {} + +func (x *ScopedRoleAssignment) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_assignment_proto_msgTypes[0] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ScopedRoleAssignment.ProtoReflect.Descriptor instead. 
+func (*ScopedRoleAssignment) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_assignment_proto_rawDescGZIP(), []int{0} +} + +func (x *ScopedRoleAssignment) GetKind() string { + if x != nil { + return x.Kind + } + return "" +} + +func (x *ScopedRoleAssignment) GetSubKind() string { + if x != nil { + return x.SubKind + } + return "" +} + +func (x *ScopedRoleAssignment) GetVersion() string { + if x != nil { + return x.Version + } + return "" +} + +func (x *ScopedRoleAssignment) GetMetadata() *v1.Metadata { + if x != nil { + return x.Metadata + } + return nil +} + +func (x *ScopedRoleAssignment) GetScope() string { + if x != nil { + return x.Scope + } + return "" +} + +func (x *ScopedRoleAssignment) GetSpec() *ScopedRoleAssignmentSpec { + if x != nil { + return x.Spec + } + return nil +} + +// ScopedRoleAssignmentSpec is the specification of a scoped role. +type ScopedRoleAssignmentSpec struct { + state protoimpl.MessageState `protogen:"open.v1"` + // User is the user to whom all contained assignments apply. + User string `protobuf:"bytes,1,opt,name=user,proto3" json:"user,omitempty"` + // Assignments is a list of individual role @ scope assignments. 
+	Assignments []*Assignment `protobuf:"bytes,2,rep,name=assignments,proto3" json:"assignments,omitempty"`
+	unknownFields protoimpl.UnknownFields
+	sizeCache     protoimpl.SizeCache
+}
+
+func (x *ScopedRoleAssignmentSpec) Reset() {
+	*x = ScopedRoleAssignmentSpec{}
+	mi := &file_teleport_scopes_access_v1_assignment_proto_msgTypes[1]
+	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+	ms.StoreMessageInfo(mi)
+}
+
+func (x *ScopedRoleAssignmentSpec) String() string {
+	return protoimpl.X.MessageStringOf(x)
+}
+
+func (*ScopedRoleAssignmentSpec) ProtoMessage() {}
+
+func (x *ScopedRoleAssignmentSpec) ProtoReflect() protoreflect.Message {
+	mi := &file_teleport_scopes_access_v1_assignment_proto_msgTypes[1]
+	if x != nil {
+		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+		if ms.LoadMessageInfo() == nil {
+			ms.StoreMessageInfo(mi)
+		}
+		return ms
+	}
+	return mi.MessageOf(x)
+}
+
+// Deprecated: Use ScopedRoleAssignmentSpec.ProtoReflect.Descriptor instead.
+func (*ScopedRoleAssignmentSpec) Descriptor() ([]byte, []int) {
+	return file_teleport_scopes_access_v1_assignment_proto_rawDescGZIP(), []int{1}
+}
+
+func (x *ScopedRoleAssignmentSpec) GetUser() string {
+	if x != nil {
+		return x.User
+	}
+	return ""
+}
+
+func (x *ScopedRoleAssignmentSpec) GetAssignments() []*Assignment {
+	if x != nil {
+		return x.Assignments
+	}
+	return nil
+}
+
+// Assignment is a role/scope pair that defines an individual assignment.
+type Assignment struct {
+	state protoimpl.MessageState `protogen:"open.v1"`
+	// Role is the name of the role that is assigned by this assignment.
+	Role string `protobuf:"bytes,1,opt,name=role,proto3" json:"role,omitempty"`
+	// Scope is the scope to which the role is assigned. This must be a member/child
+	// of the scope of the [ScopedRoleAssignment] in which this assignment is contained.
+ Scope string `protobuf:"bytes,2,opt,name=scope,proto3" json:"scope,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *Assignment) Reset() { + *x = Assignment{} + mi := &file_teleport_scopes_access_v1_assignment_proto_msgTypes[2] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *Assignment) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*Assignment) ProtoMessage() {} + +func (x *Assignment) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_assignment_proto_msgTypes[2] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use Assignment.ProtoReflect.Descriptor instead. +func (*Assignment) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_assignment_proto_rawDescGZIP(), []int{2} +} + +func (x *Assignment) GetRole() string { + if x != nil { + return x.Role + } + return "" +} + +func (x *Assignment) GetScope() string { + if x != nil { + return x.Scope + } + return "" +} + +var File_teleport_scopes_access_v1_assignment_proto protoreflect.FileDescriptor + +const file_teleport_scopes_access_v1_assignment_proto_rawDesc = "" + + "\n" + + "*teleport/scopes/access/v1/assignment.proto\x12\x19teleport.scopes.access.v1\x1a!teleport/header/v1/metadata.proto\"\xf8\x01\n" + + "\x14ScopedRoleAssignment\x12\x12\n" + + "\x04kind\x18\x01 \x01(\tR\x04kind\x12\x19\n" + + "\bsub_kind\x18\x02 \x01(\tR\asubKind\x12\x18\n" + + "\aversion\x18\x03 \x01(\tR\aversion\x128\n" + + "\bmetadata\x18\x04 \x01(\v2\x1c.teleport.header.v1.MetadataR\bmetadata\x12\x14\n" + + "\x05scope\x18\x05 \x01(\tR\x05scope\x12G\n" + + "\x04spec\x18\x06 \x01(\v23.teleport.scopes.access.v1.ScopedRoleAssignmentSpecR\x04spec\"w\n" + + "\x18ScopedRoleAssignmentSpec\x12\x12\n" + + "\x04user\x18\x01 
\x01(\tR\x04user\x12G\n" + + "\vassignments\x18\x02 \x03(\v2%.teleport.scopes.access.v1.AssignmentR\vassignments\"6\n" + + "\n" + + "Assignment\x12\x12\n" + + "\x04role\x18\x01 \x01(\tR\x04role\x12\x14\n" + + "\x05scope\x18\x02 \x01(\tR\x05scopeBWZUgithub.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/access/v1;accessv1b\x06proto3" + +var ( + file_teleport_scopes_access_v1_assignment_proto_rawDescOnce sync.Once + file_teleport_scopes_access_v1_assignment_proto_rawDescData []byte +) + +func file_teleport_scopes_access_v1_assignment_proto_rawDescGZIP() []byte { + file_teleport_scopes_access_v1_assignment_proto_rawDescOnce.Do(func() { + file_teleport_scopes_access_v1_assignment_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_teleport_scopes_access_v1_assignment_proto_rawDesc), len(file_teleport_scopes_access_v1_assignment_proto_rawDesc))) + }) + return file_teleport_scopes_access_v1_assignment_proto_rawDescData +} + +var file_teleport_scopes_access_v1_assignment_proto_msgTypes = make([]protoimpl.MessageInfo, 3) +var file_teleport_scopes_access_v1_assignment_proto_goTypes = []any{ + (*ScopedRoleAssignment)(nil), // 0: teleport.scopes.access.v1.ScopedRoleAssignment + (*ScopedRoleAssignmentSpec)(nil), // 1: teleport.scopes.access.v1.ScopedRoleAssignmentSpec + (*Assignment)(nil), // 2: teleport.scopes.access.v1.Assignment + (*v1.Metadata)(nil), // 3: teleport.header.v1.Metadata +} +var file_teleport_scopes_access_v1_assignment_proto_depIdxs = []int32{ + 3, // 0: teleport.scopes.access.v1.ScopedRoleAssignment.metadata:type_name -> teleport.header.v1.Metadata + 1, // 1: teleport.scopes.access.v1.ScopedRoleAssignment.spec:type_name -> teleport.scopes.access.v1.ScopedRoleAssignmentSpec + 2, // 2: teleport.scopes.access.v1.ScopedRoleAssignmentSpec.assignments:type_name -> teleport.scopes.access.v1.Assignment + 3, // [3:3] is the sub-list for method output_type + 3, // [3:3] is the sub-list for method input_type + 3, // [3:3] is the 
sub-list for extension type_name + 3, // [3:3] is the sub-list for extension extendee + 0, // [0:3] is the sub-list for field type_name +} + +func init() { file_teleport_scopes_access_v1_assignment_proto_init() } +func file_teleport_scopes_access_v1_assignment_proto_init() { + if File_teleport_scopes_access_v1_assignment_proto != nil { + return + } + type x struct{} + out := protoimpl.TypeBuilder{ + File: protoimpl.DescBuilder{ + GoPackagePath: reflect.TypeOf(x{}).PkgPath(), + RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_scopes_access_v1_assignment_proto_rawDesc), len(file_teleport_scopes_access_v1_assignment_proto_rawDesc)), + NumEnums: 0, + NumMessages: 3, + NumExtensions: 0, + NumServices: 0, + }, + GoTypes: file_teleport_scopes_access_v1_assignment_proto_goTypes, + DependencyIndexes: file_teleport_scopes_access_v1_assignment_proto_depIdxs, + MessageInfos: file_teleport_scopes_access_v1_assignment_proto_msgTypes, + }.Build() + File_teleport_scopes_access_v1_assignment_proto = out.File + file_teleport_scopes_access_v1_assignment_proto_goTypes = nil + file_teleport_scopes_access_v1_assignment_proto_depIdxs = nil +} diff --git a/api/gen/proto/go/teleport/scopes/access/v1/role.pb.go b/api/gen/proto/go/teleport/scopes/access/v1/role.pb.go new file mode 100644 index 0000000000000..fa486ffae32c3 --- /dev/null +++ b/api/gen/proto/go/teleport/scopes/access/v1/role.pb.go @@ -0,0 +1,391 @@ +// Copyright 2025 Gravitational, Inc +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// versions:
+// protoc-gen-go v1.36.8
+// protoc (unknown)
+// source: teleport/scopes/access/v1/role.proto
+
+package accessv1
+
+import (
+	v1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/header/v1"
+	v11 "github.com/gravitational/teleport/api/gen/proto/go/teleport/label/v1"
+	protoreflect "google.golang.org/protobuf/reflect/protoreflect"
+	protoimpl "google.golang.org/protobuf/runtime/protoimpl"
+	reflect "reflect"
+	sync "sync"
+	unsafe "unsafe"
+)
+
+const (
+	// Verify that this generated code is sufficiently up-to-date.
+	_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
+	// Verify that runtime/protoimpl is sufficiently up-to-date.
+	_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
+)
+
+// ScopedRole is a role whose resource and permissions are scoped. Scoped roles implement a subset of role
+// features tailored to the use cases of scoped access and scoped access administration. Scoped roles may be
+// assigned to the same user multiple times at various scopes. Scoped roles do not contain deny rules.
+type ScopedRole struct {
+	state protoimpl.MessageState `protogen:"open.v1"`
+	// Kind is the resource kind.
+	Kind string `protobuf:"bytes,1,opt,name=kind,proto3" json:"kind,omitempty"`
+	// SubKind is the resource sub-kind.
+	SubKind string `protobuf:"bytes,2,opt,name=sub_kind,json=subKind,proto3" json:"sub_kind,omitempty"`
+	// Version is the resource version.
+	Version string `protobuf:"bytes,3,opt,name=version,proto3" json:"version,omitempty"`
+	// Metadata contains the resource metadata.
+	Metadata *v1.Metadata `protobuf:"bytes,4,opt,name=metadata,proto3" json:"metadata,omitempty"`
+	// Scope is the scope of the role resource.
+	Scope string `protobuf:"bytes,5,opt,name=scope,proto3" json:"scope,omitempty"`
+	// Spec is the role specification.
+ Spec *ScopedRoleSpec `protobuf:"bytes,6,opt,name=spec,proto3" json:"spec,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ScopedRole) Reset() { + *x = ScopedRole{} + mi := &file_teleport_scopes_access_v1_role_proto_msgTypes[0] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ScopedRole) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ScopedRole) ProtoMessage() {} + +func (x *ScopedRole) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_role_proto_msgTypes[0] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ScopedRole.ProtoReflect.Descriptor instead. +func (*ScopedRole) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_role_proto_rawDescGZIP(), []int{0} +} + +func (x *ScopedRole) GetKind() string { + if x != nil { + return x.Kind + } + return "" +} + +func (x *ScopedRole) GetSubKind() string { + if x != nil { + return x.SubKind + } + return "" +} + +func (x *ScopedRole) GetVersion() string { + if x != nil { + return x.Version + } + return "" +} + +func (x *ScopedRole) GetMetadata() *v1.Metadata { + if x != nil { + return x.Metadata + } + return nil +} + +func (x *ScopedRole) GetScope() string { + if x != nil { + return x.Scope + } + return "" +} + +func (x *ScopedRole) GetSpec() *ScopedRoleSpec { + if x != nil { + return x.Spec + } + return nil +} + +// ScopedRoleSpec is the specification of a scoped role. +type ScopedRoleSpec struct { + state protoimpl.MessageState `protogen:"open.v1"` + // AssignableScopes is a list of scopes to which this role can be assigned. 
+ AssignableScopes []string `protobuf:"bytes,1,rep,name=assignable_scopes,json=assignableScopes,proto3" json:"assignable_scopes,omitempty"` + // Allow specifies the permissions granted by this role. + Allow *ScopedRoleConditions `protobuf:"bytes,2,opt,name=allow,proto3" json:"allow,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ScopedRoleSpec) Reset() { + *x = ScopedRoleSpec{} + mi := &file_teleport_scopes_access_v1_role_proto_msgTypes[1] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ScopedRoleSpec) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ScopedRoleSpec) ProtoMessage() {} + +func (x *ScopedRoleSpec) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_role_proto_msgTypes[1] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ScopedRoleSpec.ProtoReflect.Descriptor instead. +func (*ScopedRoleSpec) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_role_proto_rawDescGZIP(), []int{1} +} + +func (x *ScopedRoleSpec) GetAssignableScopes() []string { + if x != nil { + return x.AssignableScopes + } + return nil +} + +func (x *ScopedRoleSpec) GetAllow() *ScopedRoleConditions { + if x != nil { + return x.Allow + } + return nil +} + +// ScopedRoleConditions describes a role's allow block. +type ScopedRoleConditions struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Rules describe basic resource:verb permissions. + Rules []*ScopedRule `protobuf:"bytes,1,rep,name=rules,proto3" json:"rules,omitempty"` + // Logins is the list of host logins this role allows. + Logins []string `protobuf:"bytes,2,rep,name=logins,proto3" json:"logins,omitempty"` + // NodeLabels is a map of node labels (used to dynamically grant access to + // nodes). 
+	NodeLabels []*v11.Label `protobuf:"bytes,3,rep,name=node_labels,json=nodeLabels,proto3" json:"node_labels,omitempty"`
+	unknownFields protoimpl.UnknownFields
+	sizeCache     protoimpl.SizeCache
+}
+
+func (x *ScopedRoleConditions) Reset() {
+	*x = ScopedRoleConditions{}
+	mi := &file_teleport_scopes_access_v1_role_proto_msgTypes[2]
+	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+	ms.StoreMessageInfo(mi)
+}
+
+func (x *ScopedRoleConditions) String() string {
+	return protoimpl.X.MessageStringOf(x)
+}
+
+func (*ScopedRoleConditions) ProtoMessage() {}
+
+func (x *ScopedRoleConditions) ProtoReflect() protoreflect.Message {
+	mi := &file_teleport_scopes_access_v1_role_proto_msgTypes[2]
+	if x != nil {
+		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+		if ms.LoadMessageInfo() == nil {
+			ms.StoreMessageInfo(mi)
+		}
+		return ms
+	}
+	return mi.MessageOf(x)
+}
+
+// Deprecated: Use ScopedRoleConditions.ProtoReflect.Descriptor instead.
+func (*ScopedRoleConditions) Descriptor() ([]byte, []int) {
+	return file_teleport_scopes_access_v1_role_proto_rawDescGZIP(), []int{2}
+}
+
+func (x *ScopedRoleConditions) GetRules() []*ScopedRule {
+	if x != nil {
+		return x.Rules
+	}
+	return nil
+}
+
+func (x *ScopedRoleConditions) GetLogins() []string {
+	if x != nil {
+		return x.Logins
+	}
+	return nil
+}
+
+func (x *ScopedRoleConditions) GetNodeLabels() []*v11.Label {
+	if x != nil {
+		return x.NodeLabels
+	}
+	return nil
+}
+
+// ScopedRule maps resources to verbs. This is the underlying type used to describe
+// permissions like 'node:read' or 'role:create'.
+type ScopedRule struct {
+	state protoimpl.MessageState `protogen:"open.v1"`
+	// Resources is a list of resource kinds (e.g. 'scoped_token') that the below verbs apply to.
+	Resources []string `protobuf:"bytes,1,rep,name=resources,proto3" json:"resources,omitempty"`
+	// Verbs is the list of action verbs (e.g. 'read') that apply to the above resources.
+ Verbs []string `protobuf:"bytes,2,rep,name=verbs,proto3" json:"verbs,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ScopedRule) Reset() { + *x = ScopedRule{} + mi := &file_teleport_scopes_access_v1_role_proto_msgTypes[3] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ScopedRule) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ScopedRule) ProtoMessage() {} + +func (x *ScopedRule) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_role_proto_msgTypes[3] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ScopedRule.ProtoReflect.Descriptor instead. +func (*ScopedRule) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_role_proto_rawDescGZIP(), []int{3} +} + +func (x *ScopedRule) GetResources() []string { + if x != nil { + return x.Resources + } + return nil +} + +func (x *ScopedRule) GetVerbs() []string { + if x != nil { + return x.Verbs + } + return nil +} + +var File_teleport_scopes_access_v1_role_proto protoreflect.FileDescriptor + +const file_teleport_scopes_access_v1_role_proto_rawDesc = "" + + "\n" + + "$teleport/scopes/access/v1/role.proto\x12\x19teleport.scopes.access.v1\x1a!teleport/header/v1/metadata.proto\x1a\x1dteleport/label/v1/label.proto\"\xe4\x01\n" + + "\n" + + "ScopedRole\x12\x12\n" + + "\x04kind\x18\x01 \x01(\tR\x04kind\x12\x19\n" + + "\bsub_kind\x18\x02 \x01(\tR\asubKind\x12\x18\n" + + "\aversion\x18\x03 \x01(\tR\aversion\x128\n" + + "\bmetadata\x18\x04 \x01(\v2\x1c.teleport.header.v1.MetadataR\bmetadata\x12\x14\n" + + "\x05scope\x18\x05 \x01(\tR\x05scope\x12=\n" + + "\x04spec\x18\x06 \x01(\v2).teleport.scopes.access.v1.ScopedRoleSpecR\x04spec\"\x84\x01\n" + + "\x0eScopedRoleSpec\x12+\n" + + "\x11assignable_scopes\x18\x01 
\x03(\tR\x10assignableScopes\x12E\n" + + "\x05allow\x18\x02 \x01(\v2/.teleport.scopes.access.v1.ScopedRoleConditionsR\x05allow\"\xa6\x01\n" + + "\x14ScopedRoleConditions\x12;\n" + + "\x05rules\x18\x01 \x03(\v2%.teleport.scopes.access.v1.ScopedRuleR\x05rules\x12\x16\n" + + "\x06logins\x18\x02 \x03(\tR\x06logins\x129\n" + + "\vnode_labels\x18\x03 \x03(\v2\x18.teleport.label.v1.LabelR\n" + + "nodeLabels\"@\n" + + "\n" + + "ScopedRule\x12\x1c\n" + + "\tresources\x18\x01 \x03(\tR\tresources\x12\x14\n" + + "\x05verbs\x18\x02 \x03(\tR\x05verbsBWZUgithub.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/access/v1;accessv1b\x06proto3" + +var ( + file_teleport_scopes_access_v1_role_proto_rawDescOnce sync.Once + file_teleport_scopes_access_v1_role_proto_rawDescData []byte +) + +func file_teleport_scopes_access_v1_role_proto_rawDescGZIP() []byte { + file_teleport_scopes_access_v1_role_proto_rawDescOnce.Do(func() { + file_teleport_scopes_access_v1_role_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_teleport_scopes_access_v1_role_proto_rawDesc), len(file_teleport_scopes_access_v1_role_proto_rawDesc))) + }) + return file_teleport_scopes_access_v1_role_proto_rawDescData +} + +var file_teleport_scopes_access_v1_role_proto_msgTypes = make([]protoimpl.MessageInfo, 4) +var file_teleport_scopes_access_v1_role_proto_goTypes = []any{ + (*ScopedRole)(nil), // 0: teleport.scopes.access.v1.ScopedRole + (*ScopedRoleSpec)(nil), // 1: teleport.scopes.access.v1.ScopedRoleSpec + (*ScopedRoleConditions)(nil), // 2: teleport.scopes.access.v1.ScopedRoleConditions + (*ScopedRule)(nil), // 3: teleport.scopes.access.v1.ScopedRule + (*v1.Metadata)(nil), // 4: teleport.header.v1.Metadata + (*v11.Label)(nil), // 5: teleport.label.v1.Label +} +var file_teleport_scopes_access_v1_role_proto_depIdxs = []int32{ + 4, // 0: teleport.scopes.access.v1.ScopedRole.metadata:type_name -> teleport.header.v1.Metadata + 1, // 1: 
teleport.scopes.access.v1.ScopedRole.spec:type_name -> teleport.scopes.access.v1.ScopedRoleSpec + 2, // 2: teleport.scopes.access.v1.ScopedRoleSpec.allow:type_name -> teleport.scopes.access.v1.ScopedRoleConditions + 3, // 3: teleport.scopes.access.v1.ScopedRoleConditions.rules:type_name -> teleport.scopes.access.v1.ScopedRule + 5, // 4: teleport.scopes.access.v1.ScopedRoleConditions.node_labels:type_name -> teleport.label.v1.Label + 5, // [5:5] is the sub-list for method output_type + 5, // [5:5] is the sub-list for method input_type + 5, // [5:5] is the sub-list for extension type_name + 5, // [5:5] is the sub-list for extension extendee + 0, // [0:5] is the sub-list for field type_name +} + +func init() { file_teleport_scopes_access_v1_role_proto_init() } +func file_teleport_scopes_access_v1_role_proto_init() { + if File_teleport_scopes_access_v1_role_proto != nil { + return + } + type x struct{} + out := protoimpl.TypeBuilder{ + File: protoimpl.DescBuilder{ + GoPackagePath: reflect.TypeOf(x{}).PkgPath(), + RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_scopes_access_v1_role_proto_rawDesc), len(file_teleport_scopes_access_v1_role_proto_rawDesc)), + NumEnums: 0, + NumMessages: 4, + NumExtensions: 0, + NumServices: 0, + }, + GoTypes: file_teleport_scopes_access_v1_role_proto_goTypes, + DependencyIndexes: file_teleport_scopes_access_v1_role_proto_depIdxs, + MessageInfos: file_teleport_scopes_access_v1_role_proto_msgTypes, + }.Build() + File_teleport_scopes_access_v1_role_proto = out.File + file_teleport_scopes_access_v1_role_proto_goTypes = nil + file_teleport_scopes_access_v1_role_proto_depIdxs = nil +} diff --git a/api/gen/proto/go/teleport/scopes/access/v1/service.pb.go b/api/gen/proto/go/teleport/scopes/access/v1/service.pb.go new file mode 100644 index 0000000000000..dd17b632b40e7 --- /dev/null +++ b/api/gen/proto/go/teleport/scopes/access/v1/service.pb.go @@ -0,0 +1,1156 @@ +// Copyright 2025 Gravitational, Inc +// +// Licensed under the Apache 
License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Code generated by protoc-gen-go. DO NOT EDIT. +// versions: +// protoc-gen-go v1.36.8 +// protoc (unknown) +// source: teleport/scopes/access/v1/service.proto + +package accessv1 + +import ( + v1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/v1" + protoreflect "google.golang.org/protobuf/reflect/protoreflect" + protoimpl "google.golang.org/protobuf/runtime/protoimpl" + reflect "reflect" + sync "sync" + unsafe "unsafe" +) + +const ( + // Verify that this generated code is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) + // Verify that runtime/protoimpl is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) +) + +// GetScopedRoleRequest is the request to get a scoped role. +type GetScopedRoleRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Name is the name of the scoped role. 
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetScopedRoleRequest) Reset() { + *x = GetScopedRoleRequest{} + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[0] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetScopedRoleRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetScopedRoleRequest) ProtoMessage() {} + +func (x *GetScopedRoleRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[0] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetScopedRoleRequest.ProtoReflect.Descriptor instead. +func (*GetScopedRoleRequest) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_service_proto_rawDescGZIP(), []int{0} +} + +func (x *GetScopedRoleRequest) GetName() string { + if x != nil { + return x.Name + } + return "" +} + +// GetScopedRoleResponse is the response to get a scoped role. +type GetScopedRoleResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Role is the scoped role. 
+ Role *ScopedRole `protobuf:"bytes,1,opt,name=role,proto3" json:"role,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetScopedRoleResponse) Reset() { + *x = GetScopedRoleResponse{} + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[1] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetScopedRoleResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetScopedRoleResponse) ProtoMessage() {} + +func (x *GetScopedRoleResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[1] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetScopedRoleResponse.ProtoReflect.Descriptor instead. +func (*GetScopedRoleResponse) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_service_proto_rawDescGZIP(), []int{1} +} + +func (x *GetScopedRoleResponse) GetRole() *ScopedRole { + if x != nil { + return x.Role + } + return nil +} + +// ListScopedRolesRequest is the request to list scoped roles. +type ListScopedRolesRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // PageSize is the maximum number of results to return. + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // PageToken is the pagination cursor used to start from where a previous request left off. + PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + // ResourceScope filters roles by their resource scope if specified. + ResourceScope *v1.Filter `protobuf:"bytes,3,opt,name=resource_scope,json=resourceScope,proto3" json:"resource_scope,omitempty"` + // AssignableScope filters roles by their assignable scope if specified. 
+ AssignableScope *v1.Filter `protobuf:"bytes,4,opt,name=assignable_scope,json=assignableScope,proto3" json:"assignable_scope,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListScopedRolesRequest) Reset() { + *x = ListScopedRolesRequest{} + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[2] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListScopedRolesRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListScopedRolesRequest) ProtoMessage() {} + +func (x *ListScopedRolesRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[2] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListScopedRolesRequest.ProtoReflect.Descriptor instead. +func (*ListScopedRolesRequest) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_service_proto_rawDescGZIP(), []int{2} +} + +func (x *ListScopedRolesRequest) GetPageSize() int32 { + if x != nil { + return x.PageSize + } + return 0 +} + +func (x *ListScopedRolesRequest) GetPageToken() string { + if x != nil { + return x.PageToken + } + return "" +} + +func (x *ListScopedRolesRequest) GetResourceScope() *v1.Filter { + if x != nil { + return x.ResourceScope + } + return nil +} + +func (x *ListScopedRolesRequest) GetAssignableScope() *v1.Filter { + if x != nil { + return x.AssignableScope + } + return nil +} + +// ListScopedRolesResponse is the response to list scoped roles. +type ListScopedRolesResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Roles is the list of scoped roles. + Roles []*ScopedRole `protobuf:"bytes,1,rep,name=roles,proto3" json:"roles,omitempty"` + // NextPageToken is a pagination cursor usable to fetch the next page of results. 
+ NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListScopedRolesResponse) Reset() { + *x = ListScopedRolesResponse{} + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[3] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListScopedRolesResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListScopedRolesResponse) ProtoMessage() {} + +func (x *ListScopedRolesResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[3] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListScopedRolesResponse.ProtoReflect.Descriptor instead. +func (*ListScopedRolesResponse) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_service_proto_rawDescGZIP(), []int{3} +} + +func (x *ListScopedRolesResponse) GetRoles() []*ScopedRole { + if x != nil { + return x.Roles + } + return nil +} + +func (x *ListScopedRolesResponse) GetNextPageToken() string { + if x != nil { + return x.NextPageToken + } + return "" +} + +// CreateScopedRoleRequest is the request to create a scoped role. +type CreateScopedRoleRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Role is the scoped role to create. 
+ Role *ScopedRole `protobuf:"bytes,1,opt,name=role,proto3" json:"role,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *CreateScopedRoleRequest) Reset() { + *x = CreateScopedRoleRequest{} + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[4] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *CreateScopedRoleRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*CreateScopedRoleRequest) ProtoMessage() {} + +func (x *CreateScopedRoleRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[4] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use CreateScopedRoleRequest.ProtoReflect.Descriptor instead. +func (*CreateScopedRoleRequest) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_service_proto_rawDescGZIP(), []int{4} +} + +func (x *CreateScopedRoleRequest) GetRole() *ScopedRole { + if x != nil { + return x.Role + } + return nil +} + +// CreateScopedRoleResponse is the response to create a scoped role. +type CreateScopedRoleResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Role is the scoped role that was created. 
+ Role *ScopedRole `protobuf:"bytes,1,opt,name=role,proto3" json:"role,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *CreateScopedRoleResponse) Reset() { + *x = CreateScopedRoleResponse{} + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[5] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *CreateScopedRoleResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*CreateScopedRoleResponse) ProtoMessage() {} + +func (x *CreateScopedRoleResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[5] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use CreateScopedRoleResponse.ProtoReflect.Descriptor instead. +func (*CreateScopedRoleResponse) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_service_proto_rawDescGZIP(), []int{5} +} + +func (x *CreateScopedRoleResponse) GetRole() *ScopedRole { + if x != nil { + return x.Role + } + return nil +} + +// UpdateScopedRoleRequest is the request to update a scoped role. +type UpdateScopedRoleRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Role is the scoped role to update. 
+ Role *ScopedRole `protobuf:"bytes,1,opt,name=role,proto3" json:"role,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UpdateScopedRoleRequest) Reset() { + *x = UpdateScopedRoleRequest{} + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[6] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UpdateScopedRoleRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UpdateScopedRoleRequest) ProtoMessage() {} + +func (x *UpdateScopedRoleRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[6] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UpdateScopedRoleRequest.ProtoReflect.Descriptor instead. +func (*UpdateScopedRoleRequest) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_service_proto_rawDescGZIP(), []int{6} +} + +func (x *UpdateScopedRoleRequest) GetRole() *ScopedRole { + if x != nil { + return x.Role + } + return nil +} + +// UpdateScopedRoleResponse is the response to update a scoped role. +type UpdateScopedRoleResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Role is the post-update scoped role. 
+ Role *ScopedRole `protobuf:"bytes,1,opt,name=role,proto3" json:"role,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UpdateScopedRoleResponse) Reset() { + *x = UpdateScopedRoleResponse{} + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[7] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UpdateScopedRoleResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UpdateScopedRoleResponse) ProtoMessage() {} + +func (x *UpdateScopedRoleResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[7] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UpdateScopedRoleResponse.ProtoReflect.Descriptor instead. +func (*UpdateScopedRoleResponse) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_service_proto_rawDescGZIP(), []int{7} +} + +func (x *UpdateScopedRoleResponse) GetRole() *ScopedRole { + if x != nil { + return x.Role + } + return nil +} + +// DeleteScopedRoleRequest is the request to delete a scoped role. +type DeleteScopedRoleRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Name is the name of the scoped role to delete. + Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` + // Revision asserts the revision of the scoped role to delete (optional). 
+ Revision string `protobuf:"bytes,2,opt,name=revision,proto3" json:"revision,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *DeleteScopedRoleRequest) Reset() { + *x = DeleteScopedRoleRequest{} + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[8] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *DeleteScopedRoleRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*DeleteScopedRoleRequest) ProtoMessage() {} + +func (x *DeleteScopedRoleRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[8] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use DeleteScopedRoleRequest.ProtoReflect.Descriptor instead. +func (*DeleteScopedRoleRequest) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_service_proto_rawDescGZIP(), []int{8} +} + +func (x *DeleteScopedRoleRequest) GetName() string { + if x != nil { + return x.Name + } + return "" +} + +func (x *DeleteScopedRoleRequest) GetRevision() string { + if x != nil { + return x.Revision + } + return "" +} + +// DeleteScopedRoleResponse is the response to delete a scoped role. 
+type DeleteScopedRoleResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *DeleteScopedRoleResponse) Reset() { + *x = DeleteScopedRoleResponse{} + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[9] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *DeleteScopedRoleResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*DeleteScopedRoleResponse) ProtoMessage() {} + +func (x *DeleteScopedRoleResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[9] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use DeleteScopedRoleResponse.ProtoReflect.Descriptor instead. +func (*DeleteScopedRoleResponse) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_service_proto_rawDescGZIP(), []int{9} +} + +// GetScopedRoleAssignmentRequest is the request to get a scoped role assignment. +type GetScopedRoleAssignmentRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Name is the name of the scoped role assignment. 
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetScopedRoleAssignmentRequest) Reset() { + *x = GetScopedRoleAssignmentRequest{} + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[10] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetScopedRoleAssignmentRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetScopedRoleAssignmentRequest) ProtoMessage() {} + +func (x *GetScopedRoleAssignmentRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[10] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetScopedRoleAssignmentRequest.ProtoReflect.Descriptor instead. +func (*GetScopedRoleAssignmentRequest) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_service_proto_rawDescGZIP(), []int{10} +} + +func (x *GetScopedRoleAssignmentRequest) GetName() string { + if x != nil { + return x.Name + } + return "" +} + +// GetScopedRoleAssignmentResponse is the response to get a scoped role assignment. +type GetScopedRoleAssignmentResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Assignment is the scoped role assignment. 
+ Assignment *ScopedRoleAssignment `protobuf:"bytes,1,opt,name=assignment,proto3" json:"assignment,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetScopedRoleAssignmentResponse) Reset() { + *x = GetScopedRoleAssignmentResponse{} + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[11] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetScopedRoleAssignmentResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetScopedRoleAssignmentResponse) ProtoMessage() {} + +func (x *GetScopedRoleAssignmentResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[11] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetScopedRoleAssignmentResponse.ProtoReflect.Descriptor instead. +func (*GetScopedRoleAssignmentResponse) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_service_proto_rawDescGZIP(), []int{11} +} + +func (x *GetScopedRoleAssignmentResponse) GetAssignment() *ScopedRoleAssignment { + if x != nil { + return x.Assignment + } + return nil +} + +// ListScopedRoleAssignmentsRequest is the request to list scoped role assignments. +type ListScopedRoleAssignmentsRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // PageSize is the maximum number of results to return. + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // PageToken is the pagination cursor used to start from where a previous request left off. + PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + // ResourceScope filters assignments by their resource scope if specified. 
+	ResourceScope *v1.Filter `protobuf:"bytes,3,opt,name=resource_scope,json=resourceScope,proto3" json:"resource_scope,omitempty"` +	// AssignedScope filters assignments by the scopes they assign to if specified (note: matches assignment +	// resources with 1 or more matching scopes, not all scopes within the assignment will necessarily match). +	AssignedScope *v1.Filter `protobuf:"bytes,4,opt,name=assigned_scope,json=assignedScope,proto3" json:"assigned_scope,omitempty"` +	// User optionally limits the list to assignments for a specific user. +	User string `protobuf:"bytes,5,opt,name=user,proto3" json:"user,omitempty"` +	// Role optionally limits the list to assignments for a specific role. +	Role string `protobuf:"bytes,6,opt,name=role,proto3" json:"role,omitempty"` +	// AllCallerAssignments, if set to true, overrides the default behavior in favor of returning all +	// assignments that apply to the caller, including those assigned in parent/orthogonal scopes. This flag +	// is specifically used to support users discovering where they have been granted privileges across scopes, +	// and is not intended for general use. 
+ AllCallerAssignments bool `protobuf:"varint,7,opt,name=all_caller_assignments,json=allCallerAssignments,proto3" json:"all_caller_assignments,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListScopedRoleAssignmentsRequest) Reset() { + *x = ListScopedRoleAssignmentsRequest{} + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[12] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListScopedRoleAssignmentsRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListScopedRoleAssignmentsRequest) ProtoMessage() {} + +func (x *ListScopedRoleAssignmentsRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[12] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListScopedRoleAssignmentsRequest.ProtoReflect.Descriptor instead. 
+func (*ListScopedRoleAssignmentsRequest) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_service_proto_rawDescGZIP(), []int{12} +} + +func (x *ListScopedRoleAssignmentsRequest) GetPageSize() int32 { + if x != nil { + return x.PageSize + } + return 0 +} + +func (x *ListScopedRoleAssignmentsRequest) GetPageToken() string { + if x != nil { + return x.PageToken + } + return "" +} + +func (x *ListScopedRoleAssignmentsRequest) GetResourceScope() *v1.Filter { + if x != nil { + return x.ResourceScope + } + return nil +} + +func (x *ListScopedRoleAssignmentsRequest) GetAssignedScope() *v1.Filter { + if x != nil { + return x.AssignedScope + } + return nil +} + +func (x *ListScopedRoleAssignmentsRequest) GetUser() string { + if x != nil { + return x.User + } + return "" +} + +func (x *ListScopedRoleAssignmentsRequest) GetRole() string { + if x != nil { + return x.Role + } + return "" +} + +func (x *ListScopedRoleAssignmentsRequest) GetAllCallerAssignments() bool { + if x != nil { + return x.AllCallerAssignments + } + return false +} + +// ListScopedRoleAssignmentsResponse is the response to list scoped role assignments. +type ListScopedRoleAssignmentsResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Assignments is the list of scoped role assignments. + Assignments []*ScopedRoleAssignment `protobuf:"bytes,1,rep,name=assignments,proto3" json:"assignments,omitempty"` + // NextPageToken is a pagination cursor usable to fetch the next page of results. 
+ NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListScopedRoleAssignmentsResponse) Reset() { + *x = ListScopedRoleAssignmentsResponse{} + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[13] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListScopedRoleAssignmentsResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListScopedRoleAssignmentsResponse) ProtoMessage() {} + +func (x *ListScopedRoleAssignmentsResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[13] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListScopedRoleAssignmentsResponse.ProtoReflect.Descriptor instead. +func (*ListScopedRoleAssignmentsResponse) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_service_proto_rawDescGZIP(), []int{13} +} + +func (x *ListScopedRoleAssignmentsResponse) GetAssignments() []*ScopedRoleAssignment { + if x != nil { + return x.Assignments + } + return nil +} + +func (x *ListScopedRoleAssignmentsResponse) GetNextPageToken() string { + if x != nil { + return x.NextPageToken + } + return "" +} + +// CreateScopedRoleAssignmentRequest is the request to create a scoped role assignment. +type CreateScopedRoleAssignmentRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Assignment is the scoped role assignment to create. + Assignment *ScopedRoleAssignment `protobuf:"bytes,1,opt,name=assignment,proto3" json:"assignment,omitempty"` + // RoleRevisions asserts the revisions of the roles assigned by the assignments (optional). 
+ RoleRevisions map[string]string `protobuf:"bytes,2,rep,name=role_revisions,json=roleRevisions,proto3" json:"role_revisions,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *CreateScopedRoleAssignmentRequest) Reset() { + *x = CreateScopedRoleAssignmentRequest{} + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[14] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *CreateScopedRoleAssignmentRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*CreateScopedRoleAssignmentRequest) ProtoMessage() {} + +func (x *CreateScopedRoleAssignmentRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[14] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use CreateScopedRoleAssignmentRequest.ProtoReflect.Descriptor instead. +func (*CreateScopedRoleAssignmentRequest) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_service_proto_rawDescGZIP(), []int{14} +} + +func (x *CreateScopedRoleAssignmentRequest) GetAssignment() *ScopedRoleAssignment { + if x != nil { + return x.Assignment + } + return nil +} + +func (x *CreateScopedRoleAssignmentRequest) GetRoleRevisions() map[string]string { + if x != nil { + return x.RoleRevisions + } + return nil +} + +// CreateScopedRoleAssignmentResponse is the response to create a scoped role assignment. +type CreateScopedRoleAssignmentResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Assignment is the scoped role assignment that was created. 
+ Assignment *ScopedRoleAssignment `protobuf:"bytes,1,opt,name=assignment,proto3" json:"assignment,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *CreateScopedRoleAssignmentResponse) Reset() { + *x = CreateScopedRoleAssignmentResponse{} + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[15] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *CreateScopedRoleAssignmentResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*CreateScopedRoleAssignmentResponse) ProtoMessage() {} + +func (x *CreateScopedRoleAssignmentResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[15] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use CreateScopedRoleAssignmentResponse.ProtoReflect.Descriptor instead. +func (*CreateScopedRoleAssignmentResponse) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_service_proto_rawDescGZIP(), []int{15} +} + +func (x *CreateScopedRoleAssignmentResponse) GetAssignment() *ScopedRoleAssignment { + if x != nil { + return x.Assignment + } + return nil +} + +// DeleteScopedRoleAssignmentRequest is the request to delete a scoped role assignment. +type DeleteScopedRoleAssignmentRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Name is the name of the scoped role assignment to delete. + Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` + // Revision asserts the revision of the scoped role assignment to delete (optional). 
+ Revision string `protobuf:"bytes,2,opt,name=revision,proto3" json:"revision,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *DeleteScopedRoleAssignmentRequest) Reset() { + *x = DeleteScopedRoleAssignmentRequest{} + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[16] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *DeleteScopedRoleAssignmentRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*DeleteScopedRoleAssignmentRequest) ProtoMessage() {} + +func (x *DeleteScopedRoleAssignmentRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[16] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use DeleteScopedRoleAssignmentRequest.ProtoReflect.Descriptor instead. +func (*DeleteScopedRoleAssignmentRequest) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_service_proto_rawDescGZIP(), []int{16} +} + +func (x *DeleteScopedRoleAssignmentRequest) GetName() string { + if x != nil { + return x.Name + } + return "" +} + +func (x *DeleteScopedRoleAssignmentRequest) GetRevision() string { + if x != nil { + return x.Revision + } + return "" +} + +// DeleteScopedRoleAssignmentResponse is the response to delete a scoped role assignment. 
+type DeleteScopedRoleAssignmentResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *DeleteScopedRoleAssignmentResponse) Reset() { + *x = DeleteScopedRoleAssignmentResponse{} + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[17] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *DeleteScopedRoleAssignmentResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*DeleteScopedRoleAssignmentResponse) ProtoMessage() {} + +func (x *DeleteScopedRoleAssignmentResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_access_v1_service_proto_msgTypes[17] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use DeleteScopedRoleAssignmentResponse.ProtoReflect.Descriptor instead. 
+func (*DeleteScopedRoleAssignmentResponse) Descriptor() ([]byte, []int) { + return file_teleport_scopes_access_v1_service_proto_rawDescGZIP(), []int{17} +} + +var File_teleport_scopes_access_v1_service_proto protoreflect.FileDescriptor + +const file_teleport_scopes_access_v1_service_proto_rawDesc = "" + + "\n" + + "'teleport/scopes/access/v1/service.proto\x12\x19teleport.scopes.access.v1\x1a*teleport/scopes/access/v1/assignment.proto\x1a$teleport/scopes/access/v1/role.proto\x1a\x1fteleport/scopes/v1/scopes.proto\"*\n" + + "\x14GetScopedRoleRequest\x12\x12\n" + + "\x04name\x18\x01 \x01(\tR\x04name\"R\n" + + "\x15GetScopedRoleResponse\x129\n" + + "\x04role\x18\x01 \x01(\v2%.teleport.scopes.access.v1.ScopedRoleR\x04role\"\xde\x01\n" + + "\x16ListScopedRolesRequest\x12\x1b\n" + + "\tpage_size\x18\x01 \x01(\x05R\bpageSize\x12\x1d\n" + + "\n" + + "page_token\x18\x02 \x01(\tR\tpageToken\x12A\n" + + "\x0eresource_scope\x18\x03 \x01(\v2\x1a.teleport.scopes.v1.FilterR\rresourceScope\x12E\n" + + "\x10assignable_scope\x18\x04 \x01(\v2\x1a.teleport.scopes.v1.FilterR\x0fassignableScope\"~\n" + + "\x17ListScopedRolesResponse\x12;\n" + + "\x05roles\x18\x01 \x03(\v2%.teleport.scopes.access.v1.ScopedRoleR\x05roles\x12&\n" + + "\x0fnext_page_token\x18\x02 \x01(\tR\rnextPageToken\"T\n" + + "\x17CreateScopedRoleRequest\x129\n" + + "\x04role\x18\x01 \x01(\v2%.teleport.scopes.access.v1.ScopedRoleR\x04role\"U\n" + + "\x18CreateScopedRoleResponse\x129\n" + + "\x04role\x18\x01 \x01(\v2%.teleport.scopes.access.v1.ScopedRoleR\x04role\"T\n" + + "\x17UpdateScopedRoleRequest\x129\n" + + "\x04role\x18\x01 \x01(\v2%.teleport.scopes.access.v1.ScopedRoleR\x04role\"U\n" + + "\x18UpdateScopedRoleResponse\x129\n" + + "\x04role\x18\x01 \x01(\v2%.teleport.scopes.access.v1.ScopedRoleR\x04role\"I\n" + + "\x17DeleteScopedRoleRequest\x12\x12\n" + + "\x04name\x18\x01 \x01(\tR\x04name\x12\x1a\n" + + "\brevision\x18\x02 \x01(\tR\brevision\"\x1a\n" + + "\x18DeleteScopedRoleResponse\"4\n" + + 
"\x1eGetScopedRoleAssignmentRequest\x12\x12\n" + + "\x04name\x18\x01 \x01(\tR\x04name\"r\n" + + "\x1fGetScopedRoleAssignmentResponse\x12O\n" + + "\n" + + "assignment\x18\x01 \x01(\v2/.teleport.scopes.access.v1.ScopedRoleAssignmentR\n" + + "assignment\"\xc2\x02\n" + + " ListScopedRoleAssignmentsRequest\x12\x1b\n" + + "\tpage_size\x18\x01 \x01(\x05R\bpageSize\x12\x1d\n" + + "\n" + + "page_token\x18\x02 \x01(\tR\tpageToken\x12A\n" + + "\x0eresource_scope\x18\x03 \x01(\v2\x1a.teleport.scopes.v1.FilterR\rresourceScope\x12A\n" + + "\x0eassigned_scope\x18\x04 \x01(\v2\x1a.teleport.scopes.v1.FilterR\rassignedScope\x12\x12\n" + + "\x04user\x18\x05 \x01(\tR\x04user\x12\x12\n" + + "\x04role\x18\x06 \x01(\tR\x04role\x124\n" + + "\x16all_caller_assignments\x18\a \x01(\bR\x14allCallerAssignments\"\x9e\x01\n" + + "!ListScopedRoleAssignmentsResponse\x12Q\n" + + "\vassignments\x18\x01 \x03(\v2/.teleport.scopes.access.v1.ScopedRoleAssignmentR\vassignments\x12&\n" + + "\x0fnext_page_token\x18\x02 \x01(\tR\rnextPageToken\"\xae\x02\n" + + "!CreateScopedRoleAssignmentRequest\x12O\n" + + "\n" + + "assignment\x18\x01 \x01(\v2/.teleport.scopes.access.v1.ScopedRoleAssignmentR\n" + + "assignment\x12v\n" + + "\x0erole_revisions\x18\x02 \x03(\v2O.teleport.scopes.access.v1.CreateScopedRoleAssignmentRequest.RoleRevisionsEntryR\rroleRevisions\x1a@\n" + + "\x12RoleRevisionsEntry\x12\x10\n" + + "\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n" + + "\x05value\x18\x02 \x01(\tR\x05value:\x028\x01\"u\n" + + "\"CreateScopedRoleAssignmentResponse\x12O\n" + + "\n" + + "assignment\x18\x01 \x01(\v2/.teleport.scopes.access.v1.ScopedRoleAssignmentR\n" + + "assignment\"S\n" + + "!DeleteScopedRoleAssignmentRequest\x12\x12\n" + + "\x04name\x18\x01 \x01(\tR\x04name\x12\x1a\n" + + "\brevision\x18\x02 \x01(\tR\brevision\"$\n" + + "\"DeleteScopedRoleAssignmentResponse2\xde\t\n" + + "\x13ScopedAccessService\x12r\n" + + 
"\rGetScopedRole\x12/.teleport.scopes.access.v1.GetScopedRoleRequest\x1a0.teleport.scopes.access.v1.GetScopedRoleResponse\x12x\n" + + "\x0fListScopedRoles\x121.teleport.scopes.access.v1.ListScopedRolesRequest\x1a2.teleport.scopes.access.v1.ListScopedRolesResponse\x12{\n" + + "\x10CreateScopedRole\x122.teleport.scopes.access.v1.CreateScopedRoleRequest\x1a3.teleport.scopes.access.v1.CreateScopedRoleResponse\x12{\n" + + "\x10UpdateScopedRole\x122.teleport.scopes.access.v1.UpdateScopedRoleRequest\x1a3.teleport.scopes.access.v1.UpdateScopedRoleResponse\x12{\n" + + "\x10DeleteScopedRole\x122.teleport.scopes.access.v1.DeleteScopedRoleRequest\x1a3.teleport.scopes.access.v1.DeleteScopedRoleResponse\x12\x90\x01\n" + + "\x17GetScopedRoleAssignment\x129.teleport.scopes.access.v1.GetScopedRoleAssignmentRequest\x1a:.teleport.scopes.access.v1.GetScopedRoleAssignmentResponse\x12\x96\x01\n" + + "\x19ListScopedRoleAssignments\x12;.teleport.scopes.access.v1.ListScopedRoleAssignmentsRequest\x1a<.teleport.scopes.access.v1.ListScopedRoleAssignmentsResponse\x12\x99\x01\n" + + "\x1aCreateScopedRoleAssignment\x12<.teleport.scopes.access.v1.CreateScopedRoleAssignmentRequest\x1a=.teleport.scopes.access.v1.CreateScopedRoleAssignmentResponse\x12\x99\x01\n" + + "\x1aDeleteScopedRoleAssignment\x12<.teleport.scopes.access.v1.DeleteScopedRoleAssignmentRequest\x1a=.teleport.scopes.access.v1.DeleteScopedRoleAssignmentResponseBWZUgithub.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/access/v1;accessv1b\x06proto3" + +var ( + file_teleport_scopes_access_v1_service_proto_rawDescOnce sync.Once + file_teleport_scopes_access_v1_service_proto_rawDescData []byte +) + +func file_teleport_scopes_access_v1_service_proto_rawDescGZIP() []byte { + file_teleport_scopes_access_v1_service_proto_rawDescOnce.Do(func() { + file_teleport_scopes_access_v1_service_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_teleport_scopes_access_v1_service_proto_rawDesc), 
len(file_teleport_scopes_access_v1_service_proto_rawDesc))) + }) + return file_teleport_scopes_access_v1_service_proto_rawDescData +} + +var file_teleport_scopes_access_v1_service_proto_msgTypes = make([]protoimpl.MessageInfo, 19) +var file_teleport_scopes_access_v1_service_proto_goTypes = []any{ + (*GetScopedRoleRequest)(nil), // 0: teleport.scopes.access.v1.GetScopedRoleRequest + (*GetScopedRoleResponse)(nil), // 1: teleport.scopes.access.v1.GetScopedRoleResponse + (*ListScopedRolesRequest)(nil), // 2: teleport.scopes.access.v1.ListScopedRolesRequest + (*ListScopedRolesResponse)(nil), // 3: teleport.scopes.access.v1.ListScopedRolesResponse + (*CreateScopedRoleRequest)(nil), // 4: teleport.scopes.access.v1.CreateScopedRoleRequest + (*CreateScopedRoleResponse)(nil), // 5: teleport.scopes.access.v1.CreateScopedRoleResponse + (*UpdateScopedRoleRequest)(nil), // 6: teleport.scopes.access.v1.UpdateScopedRoleRequest + (*UpdateScopedRoleResponse)(nil), // 7: teleport.scopes.access.v1.UpdateScopedRoleResponse + (*DeleteScopedRoleRequest)(nil), // 8: teleport.scopes.access.v1.DeleteScopedRoleRequest + (*DeleteScopedRoleResponse)(nil), // 9: teleport.scopes.access.v1.DeleteScopedRoleResponse + (*GetScopedRoleAssignmentRequest)(nil), // 10: teleport.scopes.access.v1.GetScopedRoleAssignmentRequest + (*GetScopedRoleAssignmentResponse)(nil), // 11: teleport.scopes.access.v1.GetScopedRoleAssignmentResponse + (*ListScopedRoleAssignmentsRequest)(nil), // 12: teleport.scopes.access.v1.ListScopedRoleAssignmentsRequest + (*ListScopedRoleAssignmentsResponse)(nil), // 13: teleport.scopes.access.v1.ListScopedRoleAssignmentsResponse + (*CreateScopedRoleAssignmentRequest)(nil), // 14: teleport.scopes.access.v1.CreateScopedRoleAssignmentRequest + (*CreateScopedRoleAssignmentResponse)(nil), // 15: teleport.scopes.access.v1.CreateScopedRoleAssignmentResponse + (*DeleteScopedRoleAssignmentRequest)(nil), // 16: teleport.scopes.access.v1.DeleteScopedRoleAssignmentRequest + 
(*DeleteScopedRoleAssignmentResponse)(nil), // 17: teleport.scopes.access.v1.DeleteScopedRoleAssignmentResponse + nil, // 18: teleport.scopes.access.v1.CreateScopedRoleAssignmentRequest.RoleRevisionsEntry + (*ScopedRole)(nil), // 19: teleport.scopes.access.v1.ScopedRole + (*v1.Filter)(nil), // 20: teleport.scopes.v1.Filter + (*ScopedRoleAssignment)(nil), // 21: teleport.scopes.access.v1.ScopedRoleAssignment +} +var file_teleport_scopes_access_v1_service_proto_depIdxs = []int32{ + 19, // 0: teleport.scopes.access.v1.GetScopedRoleResponse.role:type_name -> teleport.scopes.access.v1.ScopedRole + 20, // 1: teleport.scopes.access.v1.ListScopedRolesRequest.resource_scope:type_name -> teleport.scopes.v1.Filter + 20, // 2: teleport.scopes.access.v1.ListScopedRolesRequest.assignable_scope:type_name -> teleport.scopes.v1.Filter + 19, // 3: teleport.scopes.access.v1.ListScopedRolesResponse.roles:type_name -> teleport.scopes.access.v1.ScopedRole + 19, // 4: teleport.scopes.access.v1.CreateScopedRoleRequest.role:type_name -> teleport.scopes.access.v1.ScopedRole + 19, // 5: teleport.scopes.access.v1.CreateScopedRoleResponse.role:type_name -> teleport.scopes.access.v1.ScopedRole + 19, // 6: teleport.scopes.access.v1.UpdateScopedRoleRequest.role:type_name -> teleport.scopes.access.v1.ScopedRole + 19, // 7: teleport.scopes.access.v1.UpdateScopedRoleResponse.role:type_name -> teleport.scopes.access.v1.ScopedRole + 21, // 8: teleport.scopes.access.v1.GetScopedRoleAssignmentResponse.assignment:type_name -> teleport.scopes.access.v1.ScopedRoleAssignment + 20, // 9: teleport.scopes.access.v1.ListScopedRoleAssignmentsRequest.resource_scope:type_name -> teleport.scopes.v1.Filter + 20, // 10: teleport.scopes.access.v1.ListScopedRoleAssignmentsRequest.assigned_scope:type_name -> teleport.scopes.v1.Filter + 21, // 11: teleport.scopes.access.v1.ListScopedRoleAssignmentsResponse.assignments:type_name -> teleport.scopes.access.v1.ScopedRoleAssignment + 21, // 12: 
teleport.scopes.access.v1.CreateScopedRoleAssignmentRequest.assignment:type_name -> teleport.scopes.access.v1.ScopedRoleAssignment + 18, // 13: teleport.scopes.access.v1.CreateScopedRoleAssignmentRequest.role_revisions:type_name -> teleport.scopes.access.v1.CreateScopedRoleAssignmentRequest.RoleRevisionsEntry + 21, // 14: teleport.scopes.access.v1.CreateScopedRoleAssignmentResponse.assignment:type_name -> teleport.scopes.access.v1.ScopedRoleAssignment + 0, // 15: teleport.scopes.access.v1.ScopedAccessService.GetScopedRole:input_type -> teleport.scopes.access.v1.GetScopedRoleRequest + 2, // 16: teleport.scopes.access.v1.ScopedAccessService.ListScopedRoles:input_type -> teleport.scopes.access.v1.ListScopedRolesRequest + 4, // 17: teleport.scopes.access.v1.ScopedAccessService.CreateScopedRole:input_type -> teleport.scopes.access.v1.CreateScopedRoleRequest + 6, // 18: teleport.scopes.access.v1.ScopedAccessService.UpdateScopedRole:input_type -> teleport.scopes.access.v1.UpdateScopedRoleRequest + 8, // 19: teleport.scopes.access.v1.ScopedAccessService.DeleteScopedRole:input_type -> teleport.scopes.access.v1.DeleteScopedRoleRequest + 10, // 20: teleport.scopes.access.v1.ScopedAccessService.GetScopedRoleAssignment:input_type -> teleport.scopes.access.v1.GetScopedRoleAssignmentRequest + 12, // 21: teleport.scopes.access.v1.ScopedAccessService.ListScopedRoleAssignments:input_type -> teleport.scopes.access.v1.ListScopedRoleAssignmentsRequest + 14, // 22: teleport.scopes.access.v1.ScopedAccessService.CreateScopedRoleAssignment:input_type -> teleport.scopes.access.v1.CreateScopedRoleAssignmentRequest + 16, // 23: teleport.scopes.access.v1.ScopedAccessService.DeleteScopedRoleAssignment:input_type -> teleport.scopes.access.v1.DeleteScopedRoleAssignmentRequest + 1, // 24: teleport.scopes.access.v1.ScopedAccessService.GetScopedRole:output_type -> teleport.scopes.access.v1.GetScopedRoleResponse + 3, // 25: teleport.scopes.access.v1.ScopedAccessService.ListScopedRoles:output_type -> 
teleport.scopes.access.v1.ListScopedRolesResponse + 5, // 26: teleport.scopes.access.v1.ScopedAccessService.CreateScopedRole:output_type -> teleport.scopes.access.v1.CreateScopedRoleResponse + 7, // 27: teleport.scopes.access.v1.ScopedAccessService.UpdateScopedRole:output_type -> teleport.scopes.access.v1.UpdateScopedRoleResponse + 9, // 28: teleport.scopes.access.v1.ScopedAccessService.DeleteScopedRole:output_type -> teleport.scopes.access.v1.DeleteScopedRoleResponse + 11, // 29: teleport.scopes.access.v1.ScopedAccessService.GetScopedRoleAssignment:output_type -> teleport.scopes.access.v1.GetScopedRoleAssignmentResponse + 13, // 30: teleport.scopes.access.v1.ScopedAccessService.ListScopedRoleAssignments:output_type -> teleport.scopes.access.v1.ListScopedRoleAssignmentsResponse + 15, // 31: teleport.scopes.access.v1.ScopedAccessService.CreateScopedRoleAssignment:output_type -> teleport.scopes.access.v1.CreateScopedRoleAssignmentResponse + 17, // 32: teleport.scopes.access.v1.ScopedAccessService.DeleteScopedRoleAssignment:output_type -> teleport.scopes.access.v1.DeleteScopedRoleAssignmentResponse + 24, // [24:33] is the sub-list for method output_type + 15, // [15:24] is the sub-list for method input_type + 15, // [15:15] is the sub-list for extension type_name + 15, // [15:15] is the sub-list for extension extendee + 0, // [0:15] is the sub-list for field type_name +} + +func init() { file_teleport_scopes_access_v1_service_proto_init() } +func file_teleport_scopes_access_v1_service_proto_init() { + if File_teleport_scopes_access_v1_service_proto != nil { + return + } + file_teleport_scopes_access_v1_assignment_proto_init() + file_teleport_scopes_access_v1_role_proto_init() + type x struct{} + out := protoimpl.TypeBuilder{ + File: protoimpl.DescBuilder{ + GoPackagePath: reflect.TypeOf(x{}).PkgPath(), + RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_scopes_access_v1_service_proto_rawDesc), len(file_teleport_scopes_access_v1_service_proto_rawDesc)), + 
NumEnums: 0, + NumMessages: 19, + NumExtensions: 0, + NumServices: 1, + }, + GoTypes: file_teleport_scopes_access_v1_service_proto_goTypes, + DependencyIndexes: file_teleport_scopes_access_v1_service_proto_depIdxs, + MessageInfos: file_teleport_scopes_access_v1_service_proto_msgTypes, + }.Build() + File_teleport_scopes_access_v1_service_proto = out.File + file_teleport_scopes_access_v1_service_proto_goTypes = nil + file_teleport_scopes_access_v1_service_proto_depIdxs = nil +} diff --git a/api/gen/proto/go/teleport/scopes/access/v1/service_grpc.pb.go b/api/gen/proto/go/teleport/scopes/access/v1/service_grpc.pb.go new file mode 100644 index 0000000000000..b19f8b1e0839d --- /dev/null +++ b/api/gen/proto/go/teleport/scopes/access/v1/service_grpc.pb.go @@ -0,0 +1,465 @@ +// Copyright 2025 Gravitational, Inc +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Code generated by protoc-gen-go-grpc. DO NOT EDIT. +// versions: +// - protoc-gen-go-grpc v1.5.1 +// - protoc (unknown) +// source: teleport/scopes/access/v1/service.proto + +package accessv1 + +import ( + context "context" + grpc "google.golang.org/grpc" + codes "google.golang.org/grpc/codes" + status "google.golang.org/grpc/status" +) + +// This is a compile-time assertion to ensure that this generated file +// is compatible with the grpc package it is being compiled against. +// Requires gRPC-Go v1.64.0 or later. 
+const _ = grpc.SupportPackageIsVersion9 + +const ( + ScopedAccessService_GetScopedRole_FullMethodName = "/teleport.scopes.access.v1.ScopedAccessService/GetScopedRole" + ScopedAccessService_ListScopedRoles_FullMethodName = "/teleport.scopes.access.v1.ScopedAccessService/ListScopedRoles" + ScopedAccessService_CreateScopedRole_FullMethodName = "/teleport.scopes.access.v1.ScopedAccessService/CreateScopedRole" + ScopedAccessService_UpdateScopedRole_FullMethodName = "/teleport.scopes.access.v1.ScopedAccessService/UpdateScopedRole" + ScopedAccessService_DeleteScopedRole_FullMethodName = "/teleport.scopes.access.v1.ScopedAccessService/DeleteScopedRole" + ScopedAccessService_GetScopedRoleAssignment_FullMethodName = "/teleport.scopes.access.v1.ScopedAccessService/GetScopedRoleAssignment" + ScopedAccessService_ListScopedRoleAssignments_FullMethodName = "/teleport.scopes.access.v1.ScopedAccessService/ListScopedRoleAssignments" + ScopedAccessService_CreateScopedRoleAssignment_FullMethodName = "/teleport.scopes.access.v1.ScopedAccessService/CreateScopedRoleAssignment" + ScopedAccessService_DeleteScopedRoleAssignment_FullMethodName = "/teleport.scopes.access.v1.ScopedAccessService/DeleteScopedRoleAssignment" +) + +// ScopedAccessServiceClient is the client API for ScopedAccessService service. +// +// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream. +// +// ScopedAccessService provides an API for managing scoped access-control resources. +type ScopedAccessServiceClient interface { + // GetScopedRole gets a scoped role by name. + GetScopedRole(ctx context.Context, in *GetScopedRoleRequest, opts ...grpc.CallOption) (*GetScopedRoleResponse, error) + // ListScopedRoles returns a paginated list of scoped roles. + ListScopedRoles(ctx context.Context, in *ListScopedRolesRequest, opts ...grpc.CallOption) (*ListScopedRolesResponse, error) + // CreateScopedRole creates a new scoped role. 
+ CreateScopedRole(ctx context.Context, in *CreateScopedRoleRequest, opts ...grpc.CallOption) (*CreateScopedRoleResponse, error) + // UpdateScopedRole updates a scoped role. + UpdateScopedRole(ctx context.Context, in *UpdateScopedRoleRequest, opts ...grpc.CallOption) (*UpdateScopedRoleResponse, error) + // DeleteScopedRole deletes a scoped role. Note that scoped role deletion is always + // conditional. Scoped roles cannot be deleted if referenced by any existing assignment + // and deletes may fail due to concurrent modification. + DeleteScopedRole(ctx context.Context, in *DeleteScopedRoleRequest, opts ...grpc.CallOption) (*DeleteScopedRoleResponse, error) + // GetScopedRoleAssignment gets a scoped role assignment by name. + GetScopedRoleAssignment(ctx context.Context, in *GetScopedRoleAssignmentRequest, opts ...grpc.CallOption) (*GetScopedRoleAssignmentResponse, error) + // ListScopedRoleAssignments returns a paginated list of scoped role assignments. + ListScopedRoleAssignments(ctx context.Context, in *ListScopedRoleAssignmentsRequest, opts ...grpc.CallOption) (*ListScopedRoleAssignmentsResponse, error) + // CreateScopedRoleAssignment creates a new scoped role assignment. + CreateScopedRoleAssignment(ctx context.Context, in *CreateScopedRoleAssignmentRequest, opts ...grpc.CallOption) (*CreateScopedRoleAssignmentResponse, error) + // DeleteScopedRoleAssignment deletes a scoped role assignment. 
+ DeleteScopedRoleAssignment(ctx context.Context, in *DeleteScopedRoleAssignmentRequest, opts ...grpc.CallOption) (*DeleteScopedRoleAssignmentResponse, error) +} + +type scopedAccessServiceClient struct { + cc grpc.ClientConnInterface +} + +func NewScopedAccessServiceClient(cc grpc.ClientConnInterface) ScopedAccessServiceClient { + return &scopedAccessServiceClient{cc} +} + +func (c *scopedAccessServiceClient) GetScopedRole(ctx context.Context, in *GetScopedRoleRequest, opts ...grpc.CallOption) (*GetScopedRoleResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(GetScopedRoleResponse) + err := c.cc.Invoke(ctx, ScopedAccessService_GetScopedRole_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *scopedAccessServiceClient) ListScopedRoles(ctx context.Context, in *ListScopedRolesRequest, opts ...grpc.CallOption) (*ListScopedRolesResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListScopedRolesResponse) + err := c.cc.Invoke(ctx, ScopedAccessService_ListScopedRoles_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *scopedAccessServiceClient) CreateScopedRole(ctx context.Context, in *CreateScopedRoleRequest, opts ...grpc.CallOption) (*CreateScopedRoleResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(CreateScopedRoleResponse) + err := c.cc.Invoke(ctx, ScopedAccessService_CreateScopedRole_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *scopedAccessServiceClient) UpdateScopedRole(ctx context.Context, in *UpdateScopedRoleRequest, opts ...grpc.CallOption) (*UpdateScopedRoleResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) 
+ out := new(UpdateScopedRoleResponse) + err := c.cc.Invoke(ctx, ScopedAccessService_UpdateScopedRole_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *scopedAccessServiceClient) DeleteScopedRole(ctx context.Context, in *DeleteScopedRoleRequest, opts ...grpc.CallOption) (*DeleteScopedRoleResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(DeleteScopedRoleResponse) + err := c.cc.Invoke(ctx, ScopedAccessService_DeleteScopedRole_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *scopedAccessServiceClient) GetScopedRoleAssignment(ctx context.Context, in *GetScopedRoleAssignmentRequest, opts ...grpc.CallOption) (*GetScopedRoleAssignmentResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(GetScopedRoleAssignmentResponse) + err := c.cc.Invoke(ctx, ScopedAccessService_GetScopedRoleAssignment_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *scopedAccessServiceClient) ListScopedRoleAssignments(ctx context.Context, in *ListScopedRoleAssignmentsRequest, opts ...grpc.CallOption) (*ListScopedRoleAssignmentsResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListScopedRoleAssignmentsResponse) + err := c.cc.Invoke(ctx, ScopedAccessService_ListScopedRoleAssignments_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *scopedAccessServiceClient) CreateScopedRoleAssignment(ctx context.Context, in *CreateScopedRoleAssignmentRequest, opts ...grpc.CallOption) (*CreateScopedRoleAssignmentResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(CreateScopedRoleAssignmentResponse) + err := c.cc.Invoke(ctx, ScopedAccessService_CreateScopedRoleAssignment_FullMethodName, in, out, cOpts...) 
+ if err != nil { + return nil, err + } + return out, nil +} + +func (c *scopedAccessServiceClient) DeleteScopedRoleAssignment(ctx context.Context, in *DeleteScopedRoleAssignmentRequest, opts ...grpc.CallOption) (*DeleteScopedRoleAssignmentResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(DeleteScopedRoleAssignmentResponse) + err := c.cc.Invoke(ctx, ScopedAccessService_DeleteScopedRoleAssignment_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +// ScopedAccessServiceServer is the server API for ScopedAccessService service. +// All implementations must embed UnimplementedScopedAccessServiceServer +// for forward compatibility. +// +// ScopedAccessService provides an API for managing scoped access-control resources. +type ScopedAccessServiceServer interface { + // GetScopedRole gets a scoped role by name. + GetScopedRole(context.Context, *GetScopedRoleRequest) (*GetScopedRoleResponse, error) + // ListScopedRoles returns a paginated list of scoped roles. + ListScopedRoles(context.Context, *ListScopedRolesRequest) (*ListScopedRolesResponse, error) + // CreateScopedRole creates a new scoped role. + CreateScopedRole(context.Context, *CreateScopedRoleRequest) (*CreateScopedRoleResponse, error) + // UpdateScopedRole updates a scoped role. + UpdateScopedRole(context.Context, *UpdateScopedRoleRequest) (*UpdateScopedRoleResponse, error) + // DeleteScopedRole deletes a scoped role. Note that scoped role deletion is always + // conditional. Scoped roles cannot be deleted if referenced by any existing assignment + // and deletes may fail due to concurrent modification. + DeleteScopedRole(context.Context, *DeleteScopedRoleRequest) (*DeleteScopedRoleResponse, error) + // GetScopedRoleAssignment gets a scoped role assignment by name. 
+ GetScopedRoleAssignment(context.Context, *GetScopedRoleAssignmentRequest) (*GetScopedRoleAssignmentResponse, error) + // ListScopedRoleAssignments returns a paginated list of scoped role assignments. + ListScopedRoleAssignments(context.Context, *ListScopedRoleAssignmentsRequest) (*ListScopedRoleAssignmentsResponse, error) + // CreateScopedRoleAssignment creates a new scoped role assignment. + CreateScopedRoleAssignment(context.Context, *CreateScopedRoleAssignmentRequest) (*CreateScopedRoleAssignmentResponse, error) + // DeleteScopedRoleAssignment deletes a scoped role assignment. + DeleteScopedRoleAssignment(context.Context, *DeleteScopedRoleAssignmentRequest) (*DeleteScopedRoleAssignmentResponse, error) + mustEmbedUnimplementedScopedAccessServiceServer() +} + +// UnimplementedScopedAccessServiceServer must be embedded to have +// forward compatible implementations. +// +// NOTE: this should be embedded by value instead of pointer to avoid a nil +// pointer dereference when methods are called. 
+type UnimplementedScopedAccessServiceServer struct{} + +func (UnimplementedScopedAccessServiceServer) GetScopedRole(context.Context, *GetScopedRoleRequest) (*GetScopedRoleResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method GetScopedRole not implemented") +} +func (UnimplementedScopedAccessServiceServer) ListScopedRoles(context.Context, *ListScopedRolesRequest) (*ListScopedRolesResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListScopedRoles not implemented") +} +func (UnimplementedScopedAccessServiceServer) CreateScopedRole(context.Context, *CreateScopedRoleRequest) (*CreateScopedRoleResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method CreateScopedRole not implemented") +} +func (UnimplementedScopedAccessServiceServer) UpdateScopedRole(context.Context, *UpdateScopedRoleRequest) (*UpdateScopedRoleResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method UpdateScopedRole not implemented") +} +func (UnimplementedScopedAccessServiceServer) DeleteScopedRole(context.Context, *DeleteScopedRoleRequest) (*DeleteScopedRoleResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method DeleteScopedRole not implemented") +} +func (UnimplementedScopedAccessServiceServer) GetScopedRoleAssignment(context.Context, *GetScopedRoleAssignmentRequest) (*GetScopedRoleAssignmentResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method GetScopedRoleAssignment not implemented") +} +func (UnimplementedScopedAccessServiceServer) ListScopedRoleAssignments(context.Context, *ListScopedRoleAssignmentsRequest) (*ListScopedRoleAssignmentsResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListScopedRoleAssignments not implemented") +} +func (UnimplementedScopedAccessServiceServer) CreateScopedRoleAssignment(context.Context, *CreateScopedRoleAssignmentRequest) (*CreateScopedRoleAssignmentResponse, error) { + return nil, status.Errorf(codes.Unimplemented, 
"method CreateScopedRoleAssignment not implemented") +} +func (UnimplementedScopedAccessServiceServer) DeleteScopedRoleAssignment(context.Context, *DeleteScopedRoleAssignmentRequest) (*DeleteScopedRoleAssignmentResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method DeleteScopedRoleAssignment not implemented") +} +func (UnimplementedScopedAccessServiceServer) mustEmbedUnimplementedScopedAccessServiceServer() {} +func (UnimplementedScopedAccessServiceServer) testEmbeddedByValue() {} + +// UnsafeScopedAccessServiceServer may be embedded to opt out of forward compatibility for this service. +// Use of this interface is not recommended, as added methods to ScopedAccessServiceServer will +// result in compilation errors. +type UnsafeScopedAccessServiceServer interface { + mustEmbedUnimplementedScopedAccessServiceServer() +} + +func RegisterScopedAccessServiceServer(s grpc.ServiceRegistrar, srv ScopedAccessServiceServer) { + // If the following call panics, it indicates UnimplementedScopedAccessServiceServer was + // embedded by pointer and is nil. This will cause panics if an + // unimplemented method is ever invoked, so we test this at initialization + // time to prevent it from happening at runtime later due to I/O. 
+ if t, ok := srv.(interface{ testEmbeddedByValue() }); ok { + t.testEmbeddedByValue() + } + s.RegisterService(&ScopedAccessService_ServiceDesc, srv) +} + +func _ScopedAccessService_GetScopedRole_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(GetScopedRoleRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(ScopedAccessServiceServer).GetScopedRole(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: ScopedAccessService_GetScopedRole_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(ScopedAccessServiceServer).GetScopedRole(ctx, req.(*GetScopedRoleRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _ScopedAccessService_ListScopedRoles_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListScopedRolesRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(ScopedAccessServiceServer).ListScopedRoles(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: ScopedAccessService_ListScopedRoles_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(ScopedAccessServiceServer).ListScopedRoles(ctx, req.(*ListScopedRolesRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _ScopedAccessService_CreateScopedRole_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(CreateScopedRoleRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(ScopedAccessServiceServer).CreateScopedRole(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + 
FullMethod: ScopedAccessService_CreateScopedRole_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(ScopedAccessServiceServer).CreateScopedRole(ctx, req.(*CreateScopedRoleRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _ScopedAccessService_UpdateScopedRole_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(UpdateScopedRoleRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(ScopedAccessServiceServer).UpdateScopedRole(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: ScopedAccessService_UpdateScopedRole_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(ScopedAccessServiceServer).UpdateScopedRole(ctx, req.(*UpdateScopedRoleRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _ScopedAccessService_DeleteScopedRole_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(DeleteScopedRoleRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(ScopedAccessServiceServer).DeleteScopedRole(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: ScopedAccessService_DeleteScopedRole_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(ScopedAccessServiceServer).DeleteScopedRole(ctx, req.(*DeleteScopedRoleRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _ScopedAccessService_GetScopedRoleAssignment_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(GetScopedRoleAssignmentRequest) + if err := dec(in); err != 
nil { + return nil, err + } + if interceptor == nil { + return srv.(ScopedAccessServiceServer).GetScopedRoleAssignment(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: ScopedAccessService_GetScopedRoleAssignment_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(ScopedAccessServiceServer).GetScopedRoleAssignment(ctx, req.(*GetScopedRoleAssignmentRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _ScopedAccessService_ListScopedRoleAssignments_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListScopedRoleAssignmentsRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(ScopedAccessServiceServer).ListScopedRoleAssignments(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: ScopedAccessService_ListScopedRoleAssignments_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(ScopedAccessServiceServer).ListScopedRoleAssignments(ctx, req.(*ListScopedRoleAssignmentsRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _ScopedAccessService_CreateScopedRoleAssignment_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(CreateScopedRoleAssignmentRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(ScopedAccessServiceServer).CreateScopedRoleAssignment(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: ScopedAccessService_CreateScopedRoleAssignment_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(ScopedAccessServiceServer).CreateScopedRoleAssignment(ctx, req.(*CreateScopedRoleAssignmentRequest)) 
+ } + return interceptor(ctx, in, info, handler) +} + +func _ScopedAccessService_DeleteScopedRoleAssignment_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(DeleteScopedRoleAssignmentRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(ScopedAccessServiceServer).DeleteScopedRoleAssignment(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: ScopedAccessService_DeleteScopedRoleAssignment_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(ScopedAccessServiceServer).DeleteScopedRoleAssignment(ctx, req.(*DeleteScopedRoleAssignmentRequest)) + } + return interceptor(ctx, in, info, handler) +} + +// ScopedAccessService_ServiceDesc is the grpc.ServiceDesc for ScopedAccessService service. +// It's only intended for direct use with grpc.RegisterService, +// and not to be introspected or modified (even as a copy) +var ScopedAccessService_ServiceDesc = grpc.ServiceDesc{ + ServiceName: "teleport.scopes.access.v1.ScopedAccessService", + HandlerType: (*ScopedAccessServiceServer)(nil), + Methods: []grpc.MethodDesc{ + { + MethodName: "GetScopedRole", + Handler: _ScopedAccessService_GetScopedRole_Handler, + }, + { + MethodName: "ListScopedRoles", + Handler: _ScopedAccessService_ListScopedRoles_Handler, + }, + { + MethodName: "CreateScopedRole", + Handler: _ScopedAccessService_CreateScopedRole_Handler, + }, + { + MethodName: "UpdateScopedRole", + Handler: _ScopedAccessService_UpdateScopedRole_Handler, + }, + { + MethodName: "DeleteScopedRole", + Handler: _ScopedAccessService_DeleteScopedRole_Handler, + }, + { + MethodName: "GetScopedRoleAssignment", + Handler: _ScopedAccessService_GetScopedRoleAssignment_Handler, + }, + { + MethodName: "ListScopedRoleAssignments", + Handler: _ScopedAccessService_ListScopedRoleAssignments_Handler, + }, + { + 
MethodName: "CreateScopedRoleAssignment", + Handler: _ScopedAccessService_CreateScopedRoleAssignment_Handler, + }, + { + MethodName: "DeleteScopedRoleAssignment", + Handler: _ScopedAccessService_DeleteScopedRoleAssignment_Handler, + }, + }, + Streams: []grpc.StreamDesc{}, + Metadata: "teleport/scopes/access/v1/service.proto", +} diff --git a/api/gen/proto/go/teleport/scopes/joining/v1/service.pb.go b/api/gen/proto/go/teleport/scopes/joining/v1/service.pb.go new file mode 100644 index 0000000000000..5b7dd39dba977 --- /dev/null +++ b/api/gen/proto/go/teleport/scopes/joining/v1/service.pb.go @@ -0,0 +1,672 @@ +// Copyright 2025 Gravitational, Inc +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Code generated by protoc-gen-go. DO NOT EDIT. +// versions: +// protoc-gen-go v1.36.8 +// protoc (unknown) +// source: teleport/scopes/joining/v1/service.proto + +package joiningv1 + +import ( + v1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/v1" + protoreflect "google.golang.org/protobuf/reflect/protoreflect" + protoimpl "google.golang.org/protobuf/runtime/protoimpl" + reflect "reflect" + sync "sync" + unsafe "unsafe" +) + +const ( + // Verify that this generated code is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) + // Verify that runtime/protoimpl is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) +) + +// GetScopedTokenRequest is the request to get a scoped token. 
+type GetScopedTokenRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Name is the name of the scoped token. + Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetScopedTokenRequest) Reset() { + *x = GetScopedTokenRequest{} + mi := &file_teleport_scopes_joining_v1_service_proto_msgTypes[0] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetScopedTokenRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetScopedTokenRequest) ProtoMessage() {} + +func (x *GetScopedTokenRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_joining_v1_service_proto_msgTypes[0] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetScopedTokenRequest.ProtoReflect.Descriptor instead. +func (*GetScopedTokenRequest) Descriptor() ([]byte, []int) { + return file_teleport_scopes_joining_v1_service_proto_rawDescGZIP(), []int{0} +} + +func (x *GetScopedTokenRequest) GetName() string { + if x != nil { + return x.Name + } + return "" +} + +// GetScopedTokenResponse is the response to get a scoped token. +type GetScopedTokenResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Token is the scoped token. 
+ Token *ScopedToken `protobuf:"bytes,1,opt,name=token,proto3" json:"token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetScopedTokenResponse) Reset() { + *x = GetScopedTokenResponse{} + mi := &file_teleport_scopes_joining_v1_service_proto_msgTypes[1] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetScopedTokenResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetScopedTokenResponse) ProtoMessage() {} + +func (x *GetScopedTokenResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_joining_v1_service_proto_msgTypes[1] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetScopedTokenResponse.ProtoReflect.Descriptor instead. +func (*GetScopedTokenResponse) Descriptor() ([]byte, []int) { + return file_teleport_scopes_joining_v1_service_proto_rawDescGZIP(), []int{1} +} + +func (x *GetScopedTokenResponse) GetToken() *ScopedToken { + if x != nil { + return x.Token + } + return nil +} + +// ListScopedTokensRequest is the request to list scoped tokens. +type ListScopedTokensRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Filter tokens by their resource scope. + ResourceScope *v1.Filter `protobuf:"bytes,1,opt,name=resource_scope,json=resourceScope,proto3" json:"resource_scope,omitempty"` + // Filter tokens by their assigned scope. + AssignedScope *v1.Filter `protobuf:"bytes,2,opt,name=assigned_scope,json=assignedScope,proto3" json:"assigned_scope,omitempty"` + // The pagination cursor. + Cursor string `protobuf:"bytes,3,opt,name=cursor,proto3" json:"cursor,omitempty"` + // The maximum number of results to return. 
+ Limit uint32 `protobuf:"varint,4,opt,name=limit,proto3" json:"limit,omitempty"` + // Filter tokens that apply at least one of the provided roles. + Roles []string `protobuf:"bytes,5,rep,name=roles,proto3" json:"roles,omitempty"` + // Filter tokens that match all provided labels. + Labels map[string]string `protobuf:"bytes,6,rep,name=labels,proto3" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListScopedTokensRequest) Reset() { + *x = ListScopedTokensRequest{} + mi := &file_teleport_scopes_joining_v1_service_proto_msgTypes[2] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListScopedTokensRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListScopedTokensRequest) ProtoMessage() {} + +func (x *ListScopedTokensRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_joining_v1_service_proto_msgTypes[2] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListScopedTokensRequest.ProtoReflect.Descriptor instead. 
+func (*ListScopedTokensRequest) Descriptor() ([]byte, []int) { + return file_teleport_scopes_joining_v1_service_proto_rawDescGZIP(), []int{2} +} + +func (x *ListScopedTokensRequest) GetResourceScope() *v1.Filter { + if x != nil { + return x.ResourceScope + } + return nil +} + +func (x *ListScopedTokensRequest) GetAssignedScope() *v1.Filter { + if x != nil { + return x.AssignedScope + } + return nil +} + +func (x *ListScopedTokensRequest) GetCursor() string { + if x != nil { + return x.Cursor + } + return "" +} + +func (x *ListScopedTokensRequest) GetLimit() uint32 { + if x != nil { + return x.Limit + } + return 0 +} + +func (x *ListScopedTokensRequest) GetRoles() []string { + if x != nil { + return x.Roles + } + return nil +} + +func (x *ListScopedTokensRequest) GetLabels() map[string]string { + if x != nil { + return x.Labels + } + return nil +} + +// ListScopedTokensResponse is the response to list scoped tokens. +type ListScopedTokensResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Tokens is the list of scoped tokens. + Tokens []*ScopedToken `protobuf:"bytes,1,rep,name=tokens,proto3" json:"tokens,omitempty"` + // Cursor is the pagination cursor. 
+ Cursor string `protobuf:"bytes,2,opt,name=cursor,proto3" json:"cursor,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListScopedTokensResponse) Reset() { + *x = ListScopedTokensResponse{} + mi := &file_teleport_scopes_joining_v1_service_proto_msgTypes[3] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListScopedTokensResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListScopedTokensResponse) ProtoMessage() {} + +func (x *ListScopedTokensResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_joining_v1_service_proto_msgTypes[3] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListScopedTokensResponse.ProtoReflect.Descriptor instead. +func (*ListScopedTokensResponse) Descriptor() ([]byte, []int) { + return file_teleport_scopes_joining_v1_service_proto_rawDescGZIP(), []int{3} +} + +func (x *ListScopedTokensResponse) GetTokens() []*ScopedToken { + if x != nil { + return x.Tokens + } + return nil +} + +func (x *ListScopedTokensResponse) GetCursor() string { + if x != nil { + return x.Cursor + } + return "" +} + +// CreateScopedTokenRequest is the request to create a scoped token. +type CreateScopedTokenRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Token is the scoped token to create. 
+ Token *ScopedToken `protobuf:"bytes,1,opt,name=token,proto3" json:"token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *CreateScopedTokenRequest) Reset() { + *x = CreateScopedTokenRequest{} + mi := &file_teleport_scopes_joining_v1_service_proto_msgTypes[4] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *CreateScopedTokenRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*CreateScopedTokenRequest) ProtoMessage() {} + +func (x *CreateScopedTokenRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_joining_v1_service_proto_msgTypes[4] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use CreateScopedTokenRequest.ProtoReflect.Descriptor instead. +func (*CreateScopedTokenRequest) Descriptor() ([]byte, []int) { + return file_teleport_scopes_joining_v1_service_proto_rawDescGZIP(), []int{4} +} + +func (x *CreateScopedTokenRequest) GetToken() *ScopedToken { + if x != nil { + return x.Token + } + return nil +} + +// CreateScopedTokenResponse is the response to create a scoped token. +type CreateScopedTokenResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Token is the scoped token that was created. 
+ Token *ScopedToken `protobuf:"bytes,1,opt,name=token,proto3" json:"token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *CreateScopedTokenResponse) Reset() { + *x = CreateScopedTokenResponse{} + mi := &file_teleport_scopes_joining_v1_service_proto_msgTypes[5] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *CreateScopedTokenResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*CreateScopedTokenResponse) ProtoMessage() {} + +func (x *CreateScopedTokenResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_joining_v1_service_proto_msgTypes[5] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use CreateScopedTokenResponse.ProtoReflect.Descriptor instead. +func (*CreateScopedTokenResponse) Descriptor() ([]byte, []int) { + return file_teleport_scopes_joining_v1_service_proto_rawDescGZIP(), []int{5} +} + +func (x *CreateScopedTokenResponse) GetToken() *ScopedToken { + if x != nil { + return x.Token + } + return nil +} + +// UpdateScopedTokenRequest is the request to update a scoped token. +type UpdateScopedTokenRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Token is the scoped token to update. 
+ Token *ScopedToken `protobuf:"bytes,1,opt,name=token,proto3" json:"token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UpdateScopedTokenRequest) Reset() { + *x = UpdateScopedTokenRequest{} + mi := &file_teleport_scopes_joining_v1_service_proto_msgTypes[6] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UpdateScopedTokenRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UpdateScopedTokenRequest) ProtoMessage() {} + +func (x *UpdateScopedTokenRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_joining_v1_service_proto_msgTypes[6] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UpdateScopedTokenRequest.ProtoReflect.Descriptor instead. +func (*UpdateScopedTokenRequest) Descriptor() ([]byte, []int) { + return file_teleport_scopes_joining_v1_service_proto_rawDescGZIP(), []int{6} +} + +func (x *UpdateScopedTokenRequest) GetToken() *ScopedToken { + if x != nil { + return x.Token + } + return nil +} + +// UpdateScopedTokenResponse is the response to update a scoped token. +type UpdateScopedTokenResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Token is the post-update scoped token. 
+ Token *ScopedToken `protobuf:"bytes,1,opt,name=token,proto3" json:"token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UpdateScopedTokenResponse) Reset() { + *x = UpdateScopedTokenResponse{} + mi := &file_teleport_scopes_joining_v1_service_proto_msgTypes[7] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UpdateScopedTokenResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UpdateScopedTokenResponse) ProtoMessage() {} + +func (x *UpdateScopedTokenResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_joining_v1_service_proto_msgTypes[7] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UpdateScopedTokenResponse.ProtoReflect.Descriptor instead. +func (*UpdateScopedTokenResponse) Descriptor() ([]byte, []int) { + return file_teleport_scopes_joining_v1_service_proto_rawDescGZIP(), []int{7} +} + +func (x *UpdateScopedTokenResponse) GetToken() *ScopedToken { + if x != nil { + return x.Token + } + return nil +} + +// DeleteScopedTokenRequest is the request to delete a scoped token. +type DeleteScopedTokenRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Name is the name of the scoped token to delete. + Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` + // Revision asserts the revision of the scoped token to delete (optional). 
+ Revision string `protobuf:"bytes,2,opt,name=revision,proto3" json:"revision,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *DeleteScopedTokenRequest) Reset() { + *x = DeleteScopedTokenRequest{} + mi := &file_teleport_scopes_joining_v1_service_proto_msgTypes[8] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *DeleteScopedTokenRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*DeleteScopedTokenRequest) ProtoMessage() {} + +func (x *DeleteScopedTokenRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_joining_v1_service_proto_msgTypes[8] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use DeleteScopedTokenRequest.ProtoReflect.Descriptor instead. +func (*DeleteScopedTokenRequest) Descriptor() ([]byte, []int) { + return file_teleport_scopes_joining_v1_service_proto_rawDescGZIP(), []int{8} +} + +func (x *DeleteScopedTokenRequest) GetName() string { + if x != nil { + return x.Name + } + return "" +} + +func (x *DeleteScopedTokenRequest) GetRevision() string { + if x != nil { + return x.Revision + } + return "" +} + +// DeleteScopedTokenResponse is the response to delete a scoped token. 
+type DeleteScopedTokenResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *DeleteScopedTokenResponse) Reset() { + *x = DeleteScopedTokenResponse{} + mi := &file_teleport_scopes_joining_v1_service_proto_msgTypes[9] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *DeleteScopedTokenResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*DeleteScopedTokenResponse) ProtoMessage() {} + +func (x *DeleteScopedTokenResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_joining_v1_service_proto_msgTypes[9] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use DeleteScopedTokenResponse.ProtoReflect.Descriptor instead. +func (*DeleteScopedTokenResponse) Descriptor() ([]byte, []int) { + return file_teleport_scopes_joining_v1_service_proto_rawDescGZIP(), []int{9} +} + +var File_teleport_scopes_joining_v1_service_proto protoreflect.FileDescriptor + +const file_teleport_scopes_joining_v1_service_proto_rawDesc = "" + + "\n" + + "(teleport/scopes/joining/v1/service.proto\x12\x1ateleport.scopes.joining.v1\x1a&teleport/scopes/joining/v1/token.proto\x1a\x1fteleport/scopes/v1/scopes.proto\"+\n" + + "\x15GetScopedTokenRequest\x12\x12\n" + + "\x04name\x18\x01 \x01(\tR\x04name\"W\n" + + "\x16GetScopedTokenResponse\x12=\n" + + "\x05token\x18\x01 \x01(\v2'.teleport.scopes.joining.v1.ScopedTokenR\x05token\"\xf7\x02\n" + + "\x17ListScopedTokensRequest\x12A\n" + + "\x0eresource_scope\x18\x01 \x01(\v2\x1a.teleport.scopes.v1.FilterR\rresourceScope\x12A\n" + + "\x0eassigned_scope\x18\x02 \x01(\v2\x1a.teleport.scopes.v1.FilterR\rassignedScope\x12\x16\n" + + "\x06cursor\x18\x03 \x01(\tR\x06cursor\x12\x14\n" + + "\x05limit\x18\x04 
\x01(\rR\x05limit\x12\x14\n" + + "\x05roles\x18\x05 \x03(\tR\x05roles\x12W\n" + + "\x06labels\x18\x06 \x03(\v2?.teleport.scopes.joining.v1.ListScopedTokensRequest.LabelsEntryR\x06labels\x1a9\n" + + "\vLabelsEntry\x12\x10\n" + + "\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n" + + "\x05value\x18\x02 \x01(\tR\x05value:\x028\x01\"s\n" + + "\x18ListScopedTokensResponse\x12?\n" + + "\x06tokens\x18\x01 \x03(\v2'.teleport.scopes.joining.v1.ScopedTokenR\x06tokens\x12\x16\n" + + "\x06cursor\x18\x02 \x01(\tR\x06cursor\"Y\n" + + "\x18CreateScopedTokenRequest\x12=\n" + + "\x05token\x18\x01 \x01(\v2'.teleport.scopes.joining.v1.ScopedTokenR\x05token\"Z\n" + + "\x19CreateScopedTokenResponse\x12=\n" + + "\x05token\x18\x01 \x01(\v2'.teleport.scopes.joining.v1.ScopedTokenR\x05token\"Y\n" + + "\x18UpdateScopedTokenRequest\x12=\n" + + "\x05token\x18\x01 \x01(\v2'.teleport.scopes.joining.v1.ScopedTokenR\x05token\"Z\n" + + "\x19UpdateScopedTokenResponse\x12=\n" + + "\x05token\x18\x01 \x01(\v2'.teleport.scopes.joining.v1.ScopedTokenR\x05token\"J\n" + + "\x18DeleteScopedTokenRequest\x12\x12\n" + + "\x04name\x18\x01 \x01(\tR\x04name\x12\x1a\n" + + "\brevision\x18\x02 \x01(\tR\brevision\"\x1b\n" + + "\x19DeleteScopedTokenResponse2\x97\x05\n" + + "\x14ScopedJoiningService\x12w\n" + + "\x0eGetScopedToken\x121.teleport.scopes.joining.v1.GetScopedTokenRequest\x1a2.teleport.scopes.joining.v1.GetScopedTokenResponse\x12}\n" + + "\x10ListScopedTokens\x123.teleport.scopes.joining.v1.ListScopedTokensRequest\x1a4.teleport.scopes.joining.v1.ListScopedTokensResponse\x12\x80\x01\n" + + "\x11CreateScopedToken\x124.teleport.scopes.joining.v1.CreateScopedTokenRequest\x1a5.teleport.scopes.joining.v1.CreateScopedTokenResponse\x12\x80\x01\n" + + "\x11UpdateScopedToken\x124.teleport.scopes.joining.v1.UpdateScopedTokenRequest\x1a5.teleport.scopes.joining.v1.UpdateScopedTokenResponse\x12\x80\x01\n" + + 
"\x11DeleteScopedToken\x124.teleport.scopes.joining.v1.DeleteScopedTokenRequest\x1a5.teleport.scopes.joining.v1.DeleteScopedTokenResponseBYZWgithub.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/joining/v1;joiningv1b\x06proto3" + +var ( + file_teleport_scopes_joining_v1_service_proto_rawDescOnce sync.Once + file_teleport_scopes_joining_v1_service_proto_rawDescData []byte +) + +func file_teleport_scopes_joining_v1_service_proto_rawDescGZIP() []byte { + file_teleport_scopes_joining_v1_service_proto_rawDescOnce.Do(func() { + file_teleport_scopes_joining_v1_service_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_teleport_scopes_joining_v1_service_proto_rawDesc), len(file_teleport_scopes_joining_v1_service_proto_rawDesc))) + }) + return file_teleport_scopes_joining_v1_service_proto_rawDescData +} + +var file_teleport_scopes_joining_v1_service_proto_msgTypes = make([]protoimpl.MessageInfo, 11) +var file_teleport_scopes_joining_v1_service_proto_goTypes = []any{ + (*GetScopedTokenRequest)(nil), // 0: teleport.scopes.joining.v1.GetScopedTokenRequest + (*GetScopedTokenResponse)(nil), // 1: teleport.scopes.joining.v1.GetScopedTokenResponse + (*ListScopedTokensRequest)(nil), // 2: teleport.scopes.joining.v1.ListScopedTokensRequest + (*ListScopedTokensResponse)(nil), // 3: teleport.scopes.joining.v1.ListScopedTokensResponse + (*CreateScopedTokenRequest)(nil), // 4: teleport.scopes.joining.v1.CreateScopedTokenRequest + (*CreateScopedTokenResponse)(nil), // 5: teleport.scopes.joining.v1.CreateScopedTokenResponse + (*UpdateScopedTokenRequest)(nil), // 6: teleport.scopes.joining.v1.UpdateScopedTokenRequest + (*UpdateScopedTokenResponse)(nil), // 7: teleport.scopes.joining.v1.UpdateScopedTokenResponse + (*DeleteScopedTokenRequest)(nil), // 8: teleport.scopes.joining.v1.DeleteScopedTokenRequest + (*DeleteScopedTokenResponse)(nil), // 9: teleport.scopes.joining.v1.DeleteScopedTokenResponse + nil, // 10: 
teleport.scopes.joining.v1.ListScopedTokensRequest.LabelsEntry + (*ScopedToken)(nil), // 11: teleport.scopes.joining.v1.ScopedToken + (*v1.Filter)(nil), // 12: teleport.scopes.v1.Filter +} +var file_teleport_scopes_joining_v1_service_proto_depIdxs = []int32{ + 11, // 0: teleport.scopes.joining.v1.GetScopedTokenResponse.token:type_name -> teleport.scopes.joining.v1.ScopedToken + 12, // 1: teleport.scopes.joining.v1.ListScopedTokensRequest.resource_scope:type_name -> teleport.scopes.v1.Filter + 12, // 2: teleport.scopes.joining.v1.ListScopedTokensRequest.assigned_scope:type_name -> teleport.scopes.v1.Filter + 10, // 3: teleport.scopes.joining.v1.ListScopedTokensRequest.labels:type_name -> teleport.scopes.joining.v1.ListScopedTokensRequest.LabelsEntry + 11, // 4: teleport.scopes.joining.v1.ListScopedTokensResponse.tokens:type_name -> teleport.scopes.joining.v1.ScopedToken + 11, // 5: teleport.scopes.joining.v1.CreateScopedTokenRequest.token:type_name -> teleport.scopes.joining.v1.ScopedToken + 11, // 6: teleport.scopes.joining.v1.CreateScopedTokenResponse.token:type_name -> teleport.scopes.joining.v1.ScopedToken + 11, // 7: teleport.scopes.joining.v1.UpdateScopedTokenRequest.token:type_name -> teleport.scopes.joining.v1.ScopedToken + 11, // 8: teleport.scopes.joining.v1.UpdateScopedTokenResponse.token:type_name -> teleport.scopes.joining.v1.ScopedToken + 0, // 9: teleport.scopes.joining.v1.ScopedJoiningService.GetScopedToken:input_type -> teleport.scopes.joining.v1.GetScopedTokenRequest + 2, // 10: teleport.scopes.joining.v1.ScopedJoiningService.ListScopedTokens:input_type -> teleport.scopes.joining.v1.ListScopedTokensRequest + 4, // 11: teleport.scopes.joining.v1.ScopedJoiningService.CreateScopedToken:input_type -> teleport.scopes.joining.v1.CreateScopedTokenRequest + 6, // 12: teleport.scopes.joining.v1.ScopedJoiningService.UpdateScopedToken:input_type -> teleport.scopes.joining.v1.UpdateScopedTokenRequest + 8, // 13: 
teleport.scopes.joining.v1.ScopedJoiningService.DeleteScopedToken:input_type -> teleport.scopes.joining.v1.DeleteScopedTokenRequest + 1, // 14: teleport.scopes.joining.v1.ScopedJoiningService.GetScopedToken:output_type -> teleport.scopes.joining.v1.GetScopedTokenResponse + 3, // 15: teleport.scopes.joining.v1.ScopedJoiningService.ListScopedTokens:output_type -> teleport.scopes.joining.v1.ListScopedTokensResponse + 5, // 16: teleport.scopes.joining.v1.ScopedJoiningService.CreateScopedToken:output_type -> teleport.scopes.joining.v1.CreateScopedTokenResponse + 7, // 17: teleport.scopes.joining.v1.ScopedJoiningService.UpdateScopedToken:output_type -> teleport.scopes.joining.v1.UpdateScopedTokenResponse + 9, // 18: teleport.scopes.joining.v1.ScopedJoiningService.DeleteScopedToken:output_type -> teleport.scopes.joining.v1.DeleteScopedTokenResponse + 14, // [14:19] is the sub-list for method output_type + 9, // [9:14] is the sub-list for method input_type + 9, // [9:9] is the sub-list for extension type_name + 9, // [9:9] is the sub-list for extension extendee + 0, // [0:9] is the sub-list for field type_name +} + +func init() { file_teleport_scopes_joining_v1_service_proto_init() } +func file_teleport_scopes_joining_v1_service_proto_init() { + if File_teleport_scopes_joining_v1_service_proto != nil { + return + } + file_teleport_scopes_joining_v1_token_proto_init() + type x struct{} + out := protoimpl.TypeBuilder{ + File: protoimpl.DescBuilder{ + GoPackagePath: reflect.TypeOf(x{}).PkgPath(), + RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_scopes_joining_v1_service_proto_rawDesc), len(file_teleport_scopes_joining_v1_service_proto_rawDesc)), + NumEnums: 0, + NumMessages: 11, + NumExtensions: 0, + NumServices: 1, + }, + GoTypes: file_teleport_scopes_joining_v1_service_proto_goTypes, + DependencyIndexes: file_teleport_scopes_joining_v1_service_proto_depIdxs, + MessageInfos: file_teleport_scopes_joining_v1_service_proto_msgTypes, + }.Build() + 
File_teleport_scopes_joining_v1_service_proto = out.File + file_teleport_scopes_joining_v1_service_proto_goTypes = nil + file_teleport_scopes_joining_v1_service_proto_depIdxs = nil +} diff --git a/api/gen/proto/go/teleport/scopes/joining/v1/service_grpc.pb.go b/api/gen/proto/go/teleport/scopes/joining/v1/service_grpc.pb.go new file mode 100644 index 0000000000000..849955fe78df2 --- /dev/null +++ b/api/gen/proto/go/teleport/scopes/joining/v1/service_grpc.pb.go @@ -0,0 +1,301 @@ +// Copyright 2025 Gravitational, Inc +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Code generated by protoc-gen-go-grpc. DO NOT EDIT. +// versions: +// - protoc-gen-go-grpc v1.5.1 +// - protoc (unknown) +// source: teleport/scopes/joining/v1/service.proto + +package joiningv1 + +import ( + context "context" + grpc "google.golang.org/grpc" + codes "google.golang.org/grpc/codes" + status "google.golang.org/grpc/status" +) + +// This is a compile-time assertion to ensure that this generated file +// is compatible with the grpc package it is being compiled against. +// Requires gRPC-Go v1.64.0 or later. 
+const _ = grpc.SupportPackageIsVersion9 + +const ( + ScopedJoiningService_GetScopedToken_FullMethodName = "/teleport.scopes.joining.v1.ScopedJoiningService/GetScopedToken" + ScopedJoiningService_ListScopedTokens_FullMethodName = "/teleport.scopes.joining.v1.ScopedJoiningService/ListScopedTokens" + ScopedJoiningService_CreateScopedToken_FullMethodName = "/teleport.scopes.joining.v1.ScopedJoiningService/CreateScopedToken" + ScopedJoiningService_UpdateScopedToken_FullMethodName = "/teleport.scopes.joining.v1.ScopedJoiningService/UpdateScopedToken" + ScopedJoiningService_DeleteScopedToken_FullMethodName = "/teleport.scopes.joining.v1.ScopedJoiningService/DeleteScopedToken" +) + +// ScopedJoiningServiceClient is the client API for ScopedJoiningService service. +// +// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream. +// +// ScopedJoiningService provides an API for managing scoped cluster joining resources. +type ScopedJoiningServiceClient interface { + // GetScopedToken gets a scoped token by name. + GetScopedToken(ctx context.Context, in *GetScopedTokenRequest, opts ...grpc.CallOption) (*GetScopedTokenResponse, error) + // ListScopedTokens returns a paginated list of scoped tokens. + ListScopedTokens(ctx context.Context, in *ListScopedTokensRequest, opts ...grpc.CallOption) (*ListScopedTokensResponse, error) + // CreateScopedToken creates a new scoped token. + CreateScopedToken(ctx context.Context, in *CreateScopedTokenRequest, opts ...grpc.CallOption) (*CreateScopedTokenResponse, error) + // UpdateScopedToken updates a scoped token. + UpdateScopedToken(ctx context.Context, in *UpdateScopedTokenRequest, opts ...grpc.CallOption) (*UpdateScopedTokenResponse, error) + // DeleteScopedToken deletes a scoped token. 
+ DeleteScopedToken(ctx context.Context, in *DeleteScopedTokenRequest, opts ...grpc.CallOption) (*DeleteScopedTokenResponse, error) +} + +type scopedJoiningServiceClient struct { + cc grpc.ClientConnInterface +} + +func NewScopedJoiningServiceClient(cc grpc.ClientConnInterface) ScopedJoiningServiceClient { + return &scopedJoiningServiceClient{cc} +} + +func (c *scopedJoiningServiceClient) GetScopedToken(ctx context.Context, in *GetScopedTokenRequest, opts ...grpc.CallOption) (*GetScopedTokenResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(GetScopedTokenResponse) + err := c.cc.Invoke(ctx, ScopedJoiningService_GetScopedToken_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *scopedJoiningServiceClient) ListScopedTokens(ctx context.Context, in *ListScopedTokensRequest, opts ...grpc.CallOption) (*ListScopedTokensResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListScopedTokensResponse) + err := c.cc.Invoke(ctx, ScopedJoiningService_ListScopedTokens_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *scopedJoiningServiceClient) CreateScopedToken(ctx context.Context, in *CreateScopedTokenRequest, opts ...grpc.CallOption) (*CreateScopedTokenResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(CreateScopedTokenResponse) + err := c.cc.Invoke(ctx, ScopedJoiningService_CreateScopedToken_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *scopedJoiningServiceClient) UpdateScopedToken(ctx context.Context, in *UpdateScopedTokenRequest, opts ...grpc.CallOption) (*UpdateScopedTokenResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) 
+ out := new(UpdateScopedTokenResponse) + err := c.cc.Invoke(ctx, ScopedJoiningService_UpdateScopedToken_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *scopedJoiningServiceClient) DeleteScopedToken(ctx context.Context, in *DeleteScopedTokenRequest, opts ...grpc.CallOption) (*DeleteScopedTokenResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(DeleteScopedTokenResponse) + err := c.cc.Invoke(ctx, ScopedJoiningService_DeleteScopedToken_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +// ScopedJoiningServiceServer is the server API for ScopedJoiningService service. +// All implementations must embed UnimplementedScopedJoiningServiceServer +// for forward compatibility. +// +// ScopedJoiningService provides an API for managing scoped cluster joining resources. +type ScopedJoiningServiceServer interface { + // GetScopedToken gets a scoped token by name. + GetScopedToken(context.Context, *GetScopedTokenRequest) (*GetScopedTokenResponse, error) + // ListScopedTokens returns a paginated list of scoped tokens. + ListScopedTokens(context.Context, *ListScopedTokensRequest) (*ListScopedTokensResponse, error) + // CreateScopedToken creates a new scoped token. + CreateScopedToken(context.Context, *CreateScopedTokenRequest) (*CreateScopedTokenResponse, error) + // UpdateScopedToken updates a scoped token. + UpdateScopedToken(context.Context, *UpdateScopedTokenRequest) (*UpdateScopedTokenResponse, error) + // DeleteScopedToken deletes a scoped token. + DeleteScopedToken(context.Context, *DeleteScopedTokenRequest) (*DeleteScopedTokenResponse, error) + mustEmbedUnimplementedScopedJoiningServiceServer() +} + +// UnimplementedScopedJoiningServiceServer must be embedded to have +// forward compatible implementations. 
+// +// NOTE: this should be embedded by value instead of pointer to avoid a nil +// pointer dereference when methods are called. +type UnimplementedScopedJoiningServiceServer struct{} + +func (UnimplementedScopedJoiningServiceServer) GetScopedToken(context.Context, *GetScopedTokenRequest) (*GetScopedTokenResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method GetScopedToken not implemented") +} +func (UnimplementedScopedJoiningServiceServer) ListScopedTokens(context.Context, *ListScopedTokensRequest) (*ListScopedTokensResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListScopedTokens not implemented") +} +func (UnimplementedScopedJoiningServiceServer) CreateScopedToken(context.Context, *CreateScopedTokenRequest) (*CreateScopedTokenResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method CreateScopedToken not implemented") +} +func (UnimplementedScopedJoiningServiceServer) UpdateScopedToken(context.Context, *UpdateScopedTokenRequest) (*UpdateScopedTokenResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method UpdateScopedToken not implemented") +} +func (UnimplementedScopedJoiningServiceServer) DeleteScopedToken(context.Context, *DeleteScopedTokenRequest) (*DeleteScopedTokenResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method DeleteScopedToken not implemented") +} +func (UnimplementedScopedJoiningServiceServer) mustEmbedUnimplementedScopedJoiningServiceServer() {} +func (UnimplementedScopedJoiningServiceServer) testEmbeddedByValue() {} + +// UnsafeScopedJoiningServiceServer may be embedded to opt out of forward compatibility for this service. +// Use of this interface is not recommended, as added methods to ScopedJoiningServiceServer will +// result in compilation errors. 
+type UnsafeScopedJoiningServiceServer interface { + mustEmbedUnimplementedScopedJoiningServiceServer() +} + +func RegisterScopedJoiningServiceServer(s grpc.ServiceRegistrar, srv ScopedJoiningServiceServer) { + // If the following call panics, it indicates UnimplementedScopedJoiningServiceServer was + // embedded by pointer and is nil. This will cause panics if an + // unimplemented method is ever invoked, so we test this at initialization + // time to prevent it from happening at runtime later due to I/O. + if t, ok := srv.(interface{ testEmbeddedByValue() }); ok { + t.testEmbeddedByValue() + } + s.RegisterService(&ScopedJoiningService_ServiceDesc, srv) +} + +func _ScopedJoiningService_GetScopedToken_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(GetScopedTokenRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(ScopedJoiningServiceServer).GetScopedToken(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: ScopedJoiningService_GetScopedToken_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(ScopedJoiningServiceServer).GetScopedToken(ctx, req.(*GetScopedTokenRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _ScopedJoiningService_ListScopedTokens_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListScopedTokensRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(ScopedJoiningServiceServer).ListScopedTokens(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: ScopedJoiningService_ListScopedTokens_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return
srv.(ScopedJoiningServiceServer).ListScopedTokens(ctx, req.(*ListScopedTokensRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _ScopedJoiningService_CreateScopedToken_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(CreateScopedTokenRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(ScopedJoiningServiceServer).CreateScopedToken(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: ScopedJoiningService_CreateScopedToken_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(ScopedJoiningServiceServer).CreateScopedToken(ctx, req.(*CreateScopedTokenRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _ScopedJoiningService_UpdateScopedToken_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(UpdateScopedTokenRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(ScopedJoiningServiceServer).UpdateScopedToken(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: ScopedJoiningService_UpdateScopedToken_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(ScopedJoiningServiceServer).UpdateScopedToken(ctx, req.(*UpdateScopedTokenRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _ScopedJoiningService_DeleteScopedToken_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(DeleteScopedTokenRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(ScopedJoiningServiceServer).DeleteScopedToken(ctx, in) + } + info := 
&grpc.UnaryServerInfo{ + Server: srv, + FullMethod: ScopedJoiningService_DeleteScopedToken_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(ScopedJoiningServiceServer).DeleteScopedToken(ctx, req.(*DeleteScopedTokenRequest)) + } + return interceptor(ctx, in, info, handler) +} + +// ScopedJoiningService_ServiceDesc is the grpc.ServiceDesc for ScopedJoiningService service. +// It's only intended for direct use with grpc.RegisterService, +// and not to be introspected or modified (even as a copy) +var ScopedJoiningService_ServiceDesc = grpc.ServiceDesc{ + ServiceName: "teleport.scopes.joining.v1.ScopedJoiningService", + HandlerType: (*ScopedJoiningServiceServer)(nil), + Methods: []grpc.MethodDesc{ + { + MethodName: "GetScopedToken", + Handler: _ScopedJoiningService_GetScopedToken_Handler, + }, + { + MethodName: "ListScopedTokens", + Handler: _ScopedJoiningService_ListScopedTokens_Handler, + }, + { + MethodName: "CreateScopedToken", + Handler: _ScopedJoiningService_CreateScopedToken_Handler, + }, + { + MethodName: "UpdateScopedToken", + Handler: _ScopedJoiningService_UpdateScopedToken_Handler, + }, + { + MethodName: "DeleteScopedToken", + Handler: _ScopedJoiningService_DeleteScopedToken_Handler, + }, + }, + Streams: []grpc.StreamDesc{}, + Metadata: "teleport/scopes/joining/v1/service.proto", +} diff --git a/api/gen/proto/go/teleport/scopes/joining/v1/token.pb.go b/api/gen/proto/go/teleport/scopes/joining/v1/token.pb.go new file mode 100644 index 0000000000000..36b81a5d9d411 --- /dev/null +++ b/api/gen/proto/go/teleport/scopes/joining/v1/token.pb.go @@ -0,0 +1,268 @@ +// Copyright 2025 Gravitational, Inc +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Code generated by protoc-gen-go. DO NOT EDIT. +// versions: +// protoc-gen-go v1.36.8 +// protoc (unknown) +// source: teleport/scopes/joining/v1/token.proto + +package joiningv1 + +import ( + v1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/header/v1" + protoreflect "google.golang.org/protobuf/reflect/protoreflect" + protoimpl "google.golang.org/protobuf/runtime/protoimpl" + reflect "reflect" + sync "sync" + unsafe "unsafe" +) + +const ( + // Verify that this generated code is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) + // Verify that runtime/protoimpl is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) +) + +// ScopedToken is a token whose resource and permissions are scoped. Scoped tokens are used for the provisioning +// of teleport agents locked to specific scopes. Scoped tokens implement a subset of the functionality of standard +// provisioning tokens, specifically tailored to the use case of limited admins/users provisioning resources within +// sub-scopes over which they have been granted elevated privileges. +type ScopedToken struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Kind is the resource kind. + Kind string `protobuf:"bytes,1,opt,name=kind,proto3" json:"kind,omitempty"` + // SubKind is the resource sub-kind. + SubKind string `protobuf:"bytes,2,opt,name=sub_kind,json=subKind,proto3" json:"sub_kind,omitempty"` + // Version is the resource version.
+ Version string `protobuf:"bytes,3,opt,name=version,proto3" json:"version,omitempty"` + // Metadata contains the resource metadata. + Metadata *v1.Metadata `protobuf:"bytes,4,opt,name=metadata,proto3" json:"metadata,omitempty"` + // Scope is the scope of the token resource. + Scope string `protobuf:"bytes,5,opt,name=scope,proto3" json:"scope,omitempty"` + // Spec is the token specification. + Spec *ScopedTokenSpec `protobuf:"bytes,6,opt,name=spec,proto3" json:"spec,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ScopedToken) Reset() { + *x = ScopedToken{} + mi := &file_teleport_scopes_joining_v1_token_proto_msgTypes[0] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ScopedToken) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ScopedToken) ProtoMessage() {} + +func (x *ScopedToken) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_joining_v1_token_proto_msgTypes[0] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ScopedToken.ProtoReflect.Descriptor instead. 
+func (*ScopedToken) Descriptor() ([]byte, []int) { + return file_teleport_scopes_joining_v1_token_proto_rawDescGZIP(), []int{0} +} + +func (x *ScopedToken) GetKind() string { + if x != nil { + return x.Kind + } + return "" +} + +func (x *ScopedToken) GetSubKind() string { + if x != nil { + return x.SubKind + } + return "" +} + +func (x *ScopedToken) GetVersion() string { + if x != nil { + return x.Version + } + return "" +} + +func (x *ScopedToken) GetMetadata() *v1.Metadata { + if x != nil { + return x.Metadata + } + return nil +} + +func (x *ScopedToken) GetScope() string { + if x != nil { + return x.Scope + } + return "" +} + +func (x *ScopedToken) GetSpec() *ScopedTokenSpec { + if x != nil { + return x.Spec + } + return nil +} + +// ScopedTokenSpec is the specification of a scoped token. +type ScopedTokenSpec struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The scope to which this token is assigned. + AssignedScope string `protobuf:"bytes,1,opt,name=assigned_scope,json=assignedScope,proto3" json:"assigned_scope,omitempty"` + // The list of roles associated with the token. They will be converted + // to metadata in the SSH and X509 certificates issued to the user of the + // token. + Roles []string `protobuf:"bytes,2,rep,name=roles,proto3" json:"roles,omitempty"` + // The joining method required in order to use this token. + // Supported joining methods for scoped tokens only include 'token'. 
+ JoinMethod string `protobuf:"bytes,3,opt,name=join_method,json=joinMethod,proto3" json:"join_method,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ScopedTokenSpec) Reset() { + *x = ScopedTokenSpec{} + mi := &file_teleport_scopes_joining_v1_token_proto_msgTypes[1] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ScopedTokenSpec) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ScopedTokenSpec) ProtoMessage() {} + +func (x *ScopedTokenSpec) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_joining_v1_token_proto_msgTypes[1] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ScopedTokenSpec.ProtoReflect.Descriptor instead. +func (*ScopedTokenSpec) Descriptor() ([]byte, []int) { + return file_teleport_scopes_joining_v1_token_proto_rawDescGZIP(), []int{1} +} + +func (x *ScopedTokenSpec) GetAssignedScope() string { + if x != nil { + return x.AssignedScope + } + return "" +} + +func (x *ScopedTokenSpec) GetRoles() []string { + if x != nil { + return x.Roles + } + return nil +} + +func (x *ScopedTokenSpec) GetJoinMethod() string { + if x != nil { + return x.JoinMethod + } + return "" +} + +var File_teleport_scopes_joining_v1_token_proto protoreflect.FileDescriptor + +const file_teleport_scopes_joining_v1_token_proto_rawDesc = "" + + "\n" + + "&teleport/scopes/joining/v1/token.proto\x12\x1ateleport.scopes.joining.v1\x1a!teleport/header/v1/metadata.proto\"\xe7\x01\n" + + "\vScopedToken\x12\x12\n" + + "\x04kind\x18\x01 \x01(\tR\x04kind\x12\x19\n" + + "\bsub_kind\x18\x02 \x01(\tR\asubKind\x12\x18\n" + + "\aversion\x18\x03 \x01(\tR\aversion\x128\n" + + "\bmetadata\x18\x04 \x01(\v2\x1c.teleport.header.v1.MetadataR\bmetadata\x12\x14\n" + + "\x05scope\x18\x05 \x01(\tR\x05scope\x12?\n" + + 
"\x04spec\x18\x06 \x01(\v2+.teleport.scopes.joining.v1.ScopedTokenSpecR\x04spec\"o\n" + + "\x0fScopedTokenSpec\x12%\n" + + "\x0eassigned_scope\x18\x01 \x01(\tR\rassignedScope\x12\x14\n" + + "\x05roles\x18\x02 \x03(\tR\x05roles\x12\x1f\n" + + "\vjoin_method\x18\x03 \x01(\tR\n" + + "joinMethodBYZWgithub.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/joining/v1;joiningv1b\x06proto3" + +var ( + file_teleport_scopes_joining_v1_token_proto_rawDescOnce sync.Once + file_teleport_scopes_joining_v1_token_proto_rawDescData []byte +) + +func file_teleport_scopes_joining_v1_token_proto_rawDescGZIP() []byte { + file_teleport_scopes_joining_v1_token_proto_rawDescOnce.Do(func() { + file_teleport_scopes_joining_v1_token_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_teleport_scopes_joining_v1_token_proto_rawDesc), len(file_teleport_scopes_joining_v1_token_proto_rawDesc))) + }) + return file_teleport_scopes_joining_v1_token_proto_rawDescData +} + +var file_teleport_scopes_joining_v1_token_proto_msgTypes = make([]protoimpl.MessageInfo, 2) +var file_teleport_scopes_joining_v1_token_proto_goTypes = []any{ + (*ScopedToken)(nil), // 0: teleport.scopes.joining.v1.ScopedToken + (*ScopedTokenSpec)(nil), // 1: teleport.scopes.joining.v1.ScopedTokenSpec + (*v1.Metadata)(nil), // 2: teleport.header.v1.Metadata +} +var file_teleport_scopes_joining_v1_token_proto_depIdxs = []int32{ + 2, // 0: teleport.scopes.joining.v1.ScopedToken.metadata:type_name -> teleport.header.v1.Metadata + 1, // 1: teleport.scopes.joining.v1.ScopedToken.spec:type_name -> teleport.scopes.joining.v1.ScopedTokenSpec + 2, // [2:2] is the sub-list for method output_type + 2, // [2:2] is the sub-list for method input_type + 2, // [2:2] is the sub-list for extension type_name + 2, // [2:2] is the sub-list for extension extendee + 0, // [0:2] is the sub-list for field type_name +} + +func init() { file_teleport_scopes_joining_v1_token_proto_init() } +func 
file_teleport_scopes_joining_v1_token_proto_init() { + if File_teleport_scopes_joining_v1_token_proto != nil { + return + } + type x struct{} + out := protoimpl.TypeBuilder{ + File: protoimpl.DescBuilder{ + GoPackagePath: reflect.TypeOf(x{}).PkgPath(), + RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_scopes_joining_v1_token_proto_rawDesc), len(file_teleport_scopes_joining_v1_token_proto_rawDesc)), + NumEnums: 0, + NumMessages: 2, + NumExtensions: 0, + NumServices: 0, + }, + GoTypes: file_teleport_scopes_joining_v1_token_proto_goTypes, + DependencyIndexes: file_teleport_scopes_joining_v1_token_proto_depIdxs, + MessageInfos: file_teleport_scopes_joining_v1_token_proto_msgTypes, + }.Build() + File_teleport_scopes_joining_v1_token_proto = out.File + file_teleport_scopes_joining_v1_token_proto_goTypes = nil + file_teleport_scopes_joining_v1_token_proto_depIdxs = nil +} diff --git a/api/gen/proto/go/teleport/scopes/v1/scopes.pb.go b/api/gen/proto/go/teleport/scopes/v1/scopes.pb.go index e4f8eed01a2e7..3817734456fb4 100644 --- a/api/gen/proto/go/teleport/scopes/v1/scopes.pb.go +++ b/api/gen/proto/go/teleport/scopes/v1/scopes.pb.go @@ -14,11 +14,11 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/scopes/v1/scopes.proto -package scopes +package scopesv1 import ( protoreflect "google.golang.org/protobuf/reflect/protoreflect" @@ -96,6 +96,112 @@ func (Mode) EnumDescriptor() ([]byte, []int) { return file_teleport_scopes_v1_scopes_proto_rawDescGZIP(), []int{0} } +// Pin is a marker that identifies a certificate/identity as being "pinned" to a target scope, and encodes relevant +// information for access-control evaluation at that scope. +type Pin struct { + state protoimpl.MessageState `protogen:"open.v1"` + // scope is the target scope that this pin is associated with. This is the scope that the certificate/identity is + // pinned to. 
Any resources in parent/orthogonal scopes are not necessarily subject to the privileges/policies + // conveyed by this pin. + Scope string `protobuf:"bytes,1,opt,name=scope,proto3" json:"scope,omitempty"` + // assignments encodes the scoped role assignments relevant to access-control decisions about the pinned identity. This may + // include assignments to parents of the pinned scope as well as assignments to equivalent/child scopes. Effectively, this + // means all assignments that are not orthogonal to the pinned scope. + Assignments map[string]*PinnedAssignments `protobuf:"bytes,2,rep,name=assignments,proto3" json:"assignments,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *Pin) Reset() { + *x = Pin{} + mi := &file_teleport_scopes_v1_scopes_proto_msgTypes[0] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *Pin) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*Pin) ProtoMessage() {} + +func (x *Pin) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_v1_scopes_proto_msgTypes[0] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use Pin.ProtoReflect.Descriptor instead. +func (*Pin) Descriptor() ([]byte, []int) { + return file_teleport_scopes_v1_scopes_proto_rawDescGZIP(), []int{0} +} + +func (x *Pin) GetScope() string { + if x != nil { + return x.Scope + } + return "" +} + +func (x *Pin) GetAssignments() map[string]*PinnedAssignments { + if x != nil { + return x.Assignments + } + return nil +} + +// PinnedAssignments is a collection of scoped role assignments that are relevant to the pinned identity at the target scope. 
+type PinnedAssignments struct { + state protoimpl.MessageState `protogen:"open.v1"` + // roles is a list of scoped roles that are assigned to the pinned identity at the target scope. + Roles []string `protobuf:"bytes,1,rep,name=roles,proto3" json:"roles,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *PinnedAssignments) Reset() { + *x = PinnedAssignments{} + mi := &file_teleport_scopes_v1_scopes_proto_msgTypes[1] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *PinnedAssignments) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*PinnedAssignments) ProtoMessage() {} + +func (x *PinnedAssignments) ProtoReflect() protoreflect.Message { + mi := &file_teleport_scopes_v1_scopes_proto_msgTypes[1] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use PinnedAssignments.ProtoReflect.Descriptor instead. +func (*PinnedAssignments) Descriptor() ([]byte, []int) { + return file_teleport_scopes_v1_scopes_proto_rawDescGZIP(), []int{1} +} + +func (x *PinnedAssignments) GetRoles() []string { + if x != nil { + return x.Roles + } + return nil +} + // Filter is a query parameter that matches other scopes based on a specified scope and mode. Used for // filtering resources that are subject to or policies that apply to a given scope. 
type Filter struct { @@ -110,7 +216,7 @@ type Filter struct { func (x *Filter) Reset() { *x = Filter{} - mi := &file_teleport_scopes_v1_scopes_proto_msgTypes[0] + mi := &file_teleport_scopes_v1_scopes_proto_msgTypes[2] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -122,7 +228,7 @@ func (x *Filter) String() string { func (*Filter) ProtoMessage() {} func (x *Filter) ProtoReflect() protoreflect.Message { - mi := &file_teleport_scopes_v1_scopes_proto_msgTypes[0] + mi := &file_teleport_scopes_v1_scopes_proto_msgTypes[2] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -135,7 +241,7 @@ func (x *Filter) ProtoReflect() protoreflect.Message { // Deprecated: Use Filter.ProtoReflect.Descriptor instead. func (*Filter) Descriptor() ([]byte, []int) { - return file_teleport_scopes_v1_scopes_proto_rawDescGZIP(), []int{0} + return file_teleport_scopes_v1_scopes_proto_rawDescGZIP(), []int{2} } func (x *Filter) GetScope() string { @@ -156,14 +262,22 @@ var File_teleport_scopes_v1_scopes_proto protoreflect.FileDescriptor const file_teleport_scopes_v1_scopes_proto_rawDesc = "" + "\n" + - "\x1fteleport/scopes/v1/scopes.proto\x12\x12teleport.scopes.v1\"L\n" + + "\x1fteleport/scopes/v1/scopes.proto\x12\x12teleport.scopes.v1\"\xce\x01\n" + + "\x03Pin\x12\x14\n" + + "\x05scope\x18\x01 \x01(\tR\x05scope\x12J\n" + + "\vassignments\x18\x02 \x03(\v2(.teleport.scopes.v1.Pin.AssignmentsEntryR\vassignments\x1ae\n" + + "\x10AssignmentsEntry\x12\x10\n" + + "\x03key\x18\x01 \x01(\tR\x03key\x12;\n" + + "\x05value\x18\x02 \x01(\v2%.teleport.scopes.v1.PinnedAssignmentsR\x05value:\x028\x01\")\n" + + "\x11PinnedAssignments\x12\x14\n" + + "\x05roles\x18\x01 \x03(\tR\x05roles\"L\n" + "\x06Filter\x12\x14\n" + "\x05scope\x18\x01 \x01(\tR\x05scope\x12,\n" + "\x04mode\x18\x02 \x01(\x0e2\x18.teleport.scopes.v1.ModeR\x04mode*h\n" + "\x04Mode\x12\x14\n" + "\x10MODE_UNSPECIFIED\x10\x00\x12#\n" + 
"\x1fMODE_RESOURCES_SUBJECT_TO_SCOPE\x10\x01\x12%\n" + - "!MODE_POLICIES_APPLICABLE_TO_SCOPE\x10\x02BNZLgithub.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/v1;scopesb\x06proto3" + "!MODE_POLICIES_APPLICABLE_TO_SCOPE\x10\x02BPZNgithub.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/v1;scopesv1b\x06proto3" var ( file_teleport_scopes_v1_scopes_proto_rawDescOnce sync.Once @@ -178,18 +292,23 @@ func file_teleport_scopes_v1_scopes_proto_rawDescGZIP() []byte { } var file_teleport_scopes_v1_scopes_proto_enumTypes = make([]protoimpl.EnumInfo, 1) -var file_teleport_scopes_v1_scopes_proto_msgTypes = make([]protoimpl.MessageInfo, 1) +var file_teleport_scopes_v1_scopes_proto_msgTypes = make([]protoimpl.MessageInfo, 4) var file_teleport_scopes_v1_scopes_proto_goTypes = []any{ - (Mode)(0), // 0: teleport.scopes.v1.Mode - (*Filter)(nil), // 1: teleport.scopes.v1.Filter + (Mode)(0), // 0: teleport.scopes.v1.Mode + (*Pin)(nil), // 1: teleport.scopes.v1.Pin + (*PinnedAssignments)(nil), // 2: teleport.scopes.v1.PinnedAssignments + (*Filter)(nil), // 3: teleport.scopes.v1.Filter + nil, // 4: teleport.scopes.v1.Pin.AssignmentsEntry } var file_teleport_scopes_v1_scopes_proto_depIdxs = []int32{ - 0, // 0: teleport.scopes.v1.Filter.mode:type_name -> teleport.scopes.v1.Mode - 1, // [1:1] is the sub-list for method output_type - 1, // [1:1] is the sub-list for method input_type - 1, // [1:1] is the sub-list for extension type_name - 1, // [1:1] is the sub-list for extension extendee - 0, // [0:1] is the sub-list for field type_name + 4, // 0: teleport.scopes.v1.Pin.assignments:type_name -> teleport.scopes.v1.Pin.AssignmentsEntry + 0, // 1: teleport.scopes.v1.Filter.mode:type_name -> teleport.scopes.v1.Mode + 2, // 2: teleport.scopes.v1.Pin.AssignmentsEntry.value:type_name -> teleport.scopes.v1.PinnedAssignments + 3, // [3:3] is the sub-list for method output_type + 3, // [3:3] is the sub-list for method input_type + 3, // [3:3] is the sub-list for extension 
type_name + 3, // [3:3] is the sub-list for extension extendee + 0, // [0:3] is the sub-list for field type_name } func init() { file_teleport_scopes_v1_scopes_proto_init() } @@ -203,7 +322,7 @@ func file_teleport_scopes_v1_scopes_proto_init() { GoPackagePath: reflect.TypeOf(x{}).PkgPath(), RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_scopes_v1_scopes_proto_rawDesc), len(file_teleport_scopes_v1_scopes_proto_rawDesc)), NumEnums: 1, - NumMessages: 1, + NumMessages: 4, NumExtensions: 0, NumServices: 0, }, diff --git a/api/gen/proto/go/teleport/secreports/v1/secreports.pb.go b/api/gen/proto/go/teleport/secreports/v1/secreports.pb.go index bd16277f62a43..49eaef4aea78f 100644 --- a/api/gen/proto/go/teleport/secreports/v1/secreports.pb.go +++ b/api/gen/proto/go/teleport/secreports/v1/secreports.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/secreports/v1/secreports.proto diff --git a/api/gen/proto/go/teleport/secreports/v1/secreports_service.pb.go b/api/gen/proto/go/teleport/secreports/v1/secreports_service.pb.go index 71c805f8f2a25..c254839265d3f 100644 --- a/api/gen/proto/go/teleport/secreports/v1/secreports_service.pb.go +++ b/api/gen/proto/go/teleport/secreports/v1/secreports_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/secreports/v1/secreports_service.proto diff --git a/api/gen/proto/go/teleport/stableunixusers/v1/stableunixusers.pb.go b/api/gen/proto/go/teleport/stableunixusers/v1/stableunixusers.pb.go index 608b0e2beb212..857c19d93b120 100644 --- a/api/gen/proto/go/teleport/stableunixusers/v1/stableunixusers.pb.go +++ b/api/gen/proto/go/teleport/stableunixusers/v1/stableunixusers.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/stableunixusers/v1/stableunixusers.proto diff --git a/api/gen/proto/go/teleport/summarizer/v1/summarizer.pb.go b/api/gen/proto/go/teleport/summarizer/v1/summarizer.pb.go new file mode 100644 index 0000000000000..56686b7b0a6ad --- /dev/null +++ b/api/gen/proto/go/teleport/summarizer/v1/summarizer.pb.go @@ -0,0 +1,1724 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Code generated by protoc-gen-go. DO NOT EDIT. +// versions: +// protoc-gen-go v1.36.8 +// protoc (unknown) +// source: teleport/summarizer/v1/summarizer.proto + +package summarizerv1 + +import ( + v1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/header/v1" + protoreflect "google.golang.org/protobuf/reflect/protoreflect" + protoimpl "google.golang.org/protobuf/runtime/protoimpl" + durationpb "google.golang.org/protobuf/types/known/durationpb" + structpb "google.golang.org/protobuf/types/known/structpb" + timestamppb "google.golang.org/protobuf/types/known/timestamppb" + reflect "reflect" + sync "sync" + unsafe "unsafe" +) + +const ( + // Verify that this generated code is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) + // Verify that runtime/protoimpl is sufficiently up-to-date. 
+ _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) +) + +// SummaryState is the state of the summarization process. +type SummaryState int32 + +const ( + SummaryState_SUMMARY_STATE_UNSPECIFIED SummaryState = 0 + SummaryState_SUMMARY_STATE_PENDING SummaryState = 1 + SummaryState_SUMMARY_STATE_SUCCESS SummaryState = 2 + SummaryState_SUMMARY_STATE_ERROR SummaryState = 3 +) + +// Enum value maps for SummaryState. +var ( + SummaryState_name = map[int32]string{ + 0: "SUMMARY_STATE_UNSPECIFIED", + 1: "SUMMARY_STATE_PENDING", + 2: "SUMMARY_STATE_SUCCESS", + 3: "SUMMARY_STATE_ERROR", + } + SummaryState_value = map[string]int32{ + "SUMMARY_STATE_UNSPECIFIED": 0, + "SUMMARY_STATE_PENDING": 1, + "SUMMARY_STATE_SUCCESS": 2, + "SUMMARY_STATE_ERROR": 3, + } +) + +func (x SummaryState) Enum() *SummaryState { + p := new(SummaryState) + *p = x + return p +} + +func (x SummaryState) String() string { + return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x)) +} + +func (SummaryState) Descriptor() protoreflect.EnumDescriptor { + return file_teleport_summarizer_v1_summarizer_proto_enumTypes[0].Descriptor() +} + +func (SummaryState) Type() protoreflect.EnumType { + return &file_teleport_summarizer_v1_summarizer_proto_enumTypes[0] +} + +func (x SummaryState) Number() protoreflect.EnumNumber { + return protoreflect.EnumNumber(x) +} + +// Deprecated: Use SummaryState.Descriptor instead. +func (SummaryState) EnumDescriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_proto_rawDescGZIP(), []int{0} +} + +// CommandCategory represents the category of a command. +type CommandCategory int32 + +const ( + // CommandCategoryUnspecified is the default value and indicates that the command + // category is not specified. + CommandCategory_COMMAND_CATEGORY_UNSPECIFIED CommandCategory = 0 + // CommandCategoryFileOperation indicates that the command is related to file + // operations, such as copying, moving, or deleting files. 
+ CommandCategory_COMMAND_CATEGORY_FILE_OPERATION CommandCategory = 1 + // CommandCategoryNetwork indicates that the command is related to network + // operations, such as connecting to a remote host or transferring data. + CommandCategory_COMMAND_CATEGORY_NETWORK CommandCategory = 2 + // CommandCategoryProcess indicates that the command is related to process + // management, such as starting or stopping processes. + CommandCategory_COMMAND_CATEGORY_PROCESS CommandCategory = 3 + // CommandCategorySystemConfig indicates that the command is related to system + // configuration, such as changing system settings or installing software. + CommandCategory_COMMAND_CATEGORY_SYSTEM_CONFIG CommandCategory = 4 + // CommandCategoryDataAccess indicates that the command is related to data access, + // such as querying databases or reading data files. + CommandCategory_COMMAND_CATEGORY_DATA_ACCESS CommandCategory = 5 + // CommandCategoryAuthentication indicates that the command is related to + // authentication, such as logging in or changing user credentials. + CommandCategory_COMMAND_CATEGORY_AUTHENTICATION CommandCategory = 6 + // CommandCategoryOther indicates that the command does not fit into any of the + // other categories. + CommandCategory_COMMAND_CATEGORY_OTHER CommandCategory = 7 +) + +// Enum value maps for CommandCategory. 
+var ( + CommandCategory_name = map[int32]string{ + 0: "COMMAND_CATEGORY_UNSPECIFIED", + 1: "COMMAND_CATEGORY_FILE_OPERATION", + 2: "COMMAND_CATEGORY_NETWORK", + 3: "COMMAND_CATEGORY_PROCESS", + 4: "COMMAND_CATEGORY_SYSTEM_CONFIG", + 5: "COMMAND_CATEGORY_DATA_ACCESS", + 6: "COMMAND_CATEGORY_AUTHENTICATION", + 7: "COMMAND_CATEGORY_OTHER", + } + CommandCategory_value = map[string]int32{ + "COMMAND_CATEGORY_UNSPECIFIED": 0, + "COMMAND_CATEGORY_FILE_OPERATION": 1, + "COMMAND_CATEGORY_NETWORK": 2, + "COMMAND_CATEGORY_PROCESS": 3, + "COMMAND_CATEGORY_SYSTEM_CONFIG": 4, + "COMMAND_CATEGORY_DATA_ACCESS": 5, + "COMMAND_CATEGORY_AUTHENTICATION": 6, + "COMMAND_CATEGORY_OTHER": 7, + } +) + +func (x CommandCategory) Enum() *CommandCategory { + p := new(CommandCategory) + *p = x + return p +} + +func (x CommandCategory) String() string { + return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x)) +} + +func (CommandCategory) Descriptor() protoreflect.EnumDescriptor { + return file_teleport_summarizer_v1_summarizer_proto_enumTypes[1].Descriptor() +} + +func (CommandCategory) Type() protoreflect.EnumType { + return &file_teleport_summarizer_v1_summarizer_proto_enumTypes[1] +} + +func (x CommandCategory) Number() protoreflect.EnumNumber { + return protoreflect.EnumNumber(x) +} + +// Deprecated: Use CommandCategory.Descriptor instead. +func (CommandCategory) EnumDescriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_proto_rawDescGZIP(), []int{1} +} + +// RiskLevel represents the risk level associated with a command. +type RiskLevel int32 + +const ( + // RiskLevelUnspecified is the default value and indicates that the risk level + // is not specified. + RiskLevel_RISK_LEVEL_UNSPECIFIED RiskLevel = 0 + // RiskLevelLow indicates that the command has a low risk level. + RiskLevel_RISK_LEVEL_LOW RiskLevel = 1 + // RiskLevelMedium indicates that the command has a medium risk level. 
+ RiskLevel_RISK_LEVEL_MEDIUM RiskLevel = 2 + // RiskLevelHigh indicates that the command has a high risk level. + RiskLevel_RISK_LEVEL_HIGH RiskLevel = 3 + // RiskLevelCritical indicates that the command has a critical risk level. + RiskLevel_RISK_LEVEL_CRITICAL RiskLevel = 4 +) + +// Enum value maps for RiskLevel. +var ( + RiskLevel_name = map[int32]string{ + 0: "RISK_LEVEL_UNSPECIFIED", + 1: "RISK_LEVEL_LOW", + 2: "RISK_LEVEL_MEDIUM", + 3: "RISK_LEVEL_HIGH", + 4: "RISK_LEVEL_CRITICAL", + } + RiskLevel_value = map[string]int32{ + "RISK_LEVEL_UNSPECIFIED": 0, + "RISK_LEVEL_LOW": 1, + "RISK_LEVEL_MEDIUM": 2, + "RISK_LEVEL_HIGH": 3, + "RISK_LEVEL_CRITICAL": 4, + } +) + +func (x RiskLevel) Enum() *RiskLevel { + p := new(RiskLevel) + *p = x + return p +} + +func (x RiskLevel) String() string { + return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x)) +} + +func (RiskLevel) Descriptor() protoreflect.EnumDescriptor { + return file_teleport_summarizer_v1_summarizer_proto_enumTypes[2].Descriptor() +} + +func (RiskLevel) Type() protoreflect.EnumType { + return &file_teleport_summarizer_v1_summarizer_proto_enumTypes[2] +} + +func (x RiskLevel) Number() protoreflect.EnumNumber { + return protoreflect.EnumNumber(x) +} + +// Deprecated: Use RiskLevel.Descriptor instead. +func (RiskLevel) EnumDescriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_proto_rawDescGZIP(), []int{2} +} + +// ThreatCategory represents the category of a detected threat. +type ThreatCategory int32 + +const ( + // ThreatCategoryUnspecified is the default value and indicates that the threat + // category is not specified. + ThreatCategory_THREAT_CATEGORY_UNSPECIFIED ThreatCategory = 0 + // ThreatCategoryReconnaissance indicates that the threat is related to reconnaissance + // activities, such as scanning or probing the system. 
+ ThreatCategory_THREAT_CATEGORY_RECONNAISSANCE ThreatCategory = 1 + // ThreatCategoryExecution indicates that the threat is related to execution activities, + // such as running malicious code. + ThreatCategory_THREAT_CATEGORY_EXECUTION ThreatCategory = 2 + // ThreatCategoryPersistence indicates that the threat is related to persistence + // activities, such as installing backdoors. + ThreatCategory_THREAT_CATEGORY_PERSISTENCE ThreatCategory = 3 + // ThreatCategoryPrivilegeEscalation indicates that the threat is related to privilege + // escalation activities, such as exploiting vulnerabilities to gain higher privileges. + ThreatCategory_THREAT_CATEGORY_PRIVILEGE_ESCALATION ThreatCategory = 4 + // ThreatCategoryDefenseEvasion indicates that the threat is related to defense evasion + // activities, such as disabling security software. + ThreatCategory_THREAT_CATEGORY_DEFENSE_EVASION ThreatCategory = 5 + // ThreatCategoryCredentialAccess indicates that the threat is related to credential + // access activities, such as stealing passwords. + ThreatCategory_THREAT_CATEGORY_CREDENTIAL_ACCESS ThreatCategory = 6 + // ThreatCategoryDiscovery indicates that the threat is related to discovery activities, + // such as gathering information about the system. + ThreatCategory_THREAT_CATEGORY_DISCOVERY ThreatCategory = 7 + // ThreatCategoryLateralMovement indicates that the threat is related to lateral movement + // activities, such as moving laterally within the network. + ThreatCategory_THREAT_CATEGORY_LATERAL_MOVEMENT ThreatCategory = 8 + // ThreatCategoryCollection indicates that the threat is related to collection activities, + // such as collecting sensitive data. + ThreatCategory_THREAT_CATEGORY_COLLECTION ThreatCategory = 9 + // ThreatCategoryExfiltration indicates that the threat is related to exfiltration + // activities, such as stealing data from the system. 
+ ThreatCategory_THREAT_CATEGORY_EXFILTRATION ThreatCategory = 10 + // ThreatCategoryImpact indicates that the threat is related to impact activities, + // such as disrupting system operations. + ThreatCategory_THREAT_CATEGORY_IMPACT ThreatCategory = 11 + // ThreatCategoryNone indicates that there is no threat detected. + ThreatCategory_THREAT_CATEGORY_NONE ThreatCategory = 12 +) + +// Enum value maps for ThreatCategory. +var ( + ThreatCategory_name = map[int32]string{ + 0: "THREAT_CATEGORY_UNSPECIFIED", + 1: "THREAT_CATEGORY_RECONNAISSANCE", + 2: "THREAT_CATEGORY_EXECUTION", + 3: "THREAT_CATEGORY_PERSISTENCE", + 4: "THREAT_CATEGORY_PRIVILEGE_ESCALATION", + 5: "THREAT_CATEGORY_DEFENSE_EVASION", + 6: "THREAT_CATEGORY_CREDENTIAL_ACCESS", + 7: "THREAT_CATEGORY_DISCOVERY", + 8: "THREAT_CATEGORY_LATERAL_MOVEMENT", + 9: "THREAT_CATEGORY_COLLECTION", + 10: "THREAT_CATEGORY_EXFILTRATION", + 11: "THREAT_CATEGORY_IMPACT", + 12: "THREAT_CATEGORY_NONE", + } + ThreatCategory_value = map[string]int32{ + "THREAT_CATEGORY_UNSPECIFIED": 0, + "THREAT_CATEGORY_RECONNAISSANCE": 1, + "THREAT_CATEGORY_EXECUTION": 2, + "THREAT_CATEGORY_PERSISTENCE": 3, + "THREAT_CATEGORY_PRIVILEGE_ESCALATION": 4, + "THREAT_CATEGORY_DEFENSE_EVASION": 5, + "THREAT_CATEGORY_CREDENTIAL_ACCESS": 6, + "THREAT_CATEGORY_DISCOVERY": 7, + "THREAT_CATEGORY_LATERAL_MOVEMENT": 8, + "THREAT_CATEGORY_COLLECTION": 9, + "THREAT_CATEGORY_EXFILTRATION": 10, + "THREAT_CATEGORY_IMPACT": 11, + "THREAT_CATEGORY_NONE": 12, + } +) + +func (x ThreatCategory) Enum() *ThreatCategory { + p := new(ThreatCategory) + *p = x + return p +} + +func (x ThreatCategory) String() string { + return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x)) +} + +func (ThreatCategory) Descriptor() protoreflect.EnumDescriptor { + return file_teleport_summarizer_v1_summarizer_proto_enumTypes[3].Descriptor() +} + +func (ThreatCategory) Type() protoreflect.EnumType { + return &file_teleport_summarizer_v1_summarizer_proto_enumTypes[3] +} + 
+func (x ThreatCategory) Number() protoreflect.EnumNumber { + return protoreflect.EnumNumber(x) +} + +// Deprecated: Use ThreatCategory.Descriptor instead. +func (ThreatCategory) EnumDescriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_proto_rawDescGZIP(), []int{3} +} + +// InferenceModel resource specifies a session summarization inference model +// configuration. It tells Teleport how to use a specific provider and model to +// summarize sessions. +type InferenceModel struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Kind is the resource kind. Should always be set to "inference_model". + Kind string `protobuf:"bytes,1,opt,name=kind,proto3" json:"kind,omitempty"` + // SubKind is the resource sub-kind. Should be empty. + SubKind string `protobuf:"bytes,2,opt,name=sub_kind,json=subKind,proto3" json:"sub_kind,omitempty"` + // Version is the resource version. Should be set to "v1". + Version string `protobuf:"bytes,3,opt,name=version,proto3" json:"version,omitempty"` + Metadata *v1.Metadata `protobuf:"bytes,4,opt,name=metadata,proto3" json:"metadata,omitempty"` + Spec *InferenceModelSpec `protobuf:"bytes,5,opt,name=spec,proto3" json:"spec,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *InferenceModel) Reset() { + *x = InferenceModel{} + mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[0] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *InferenceModel) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*InferenceModel) ProtoMessage() {} + +func (x *InferenceModel) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[0] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use 
InferenceModel.ProtoReflect.Descriptor instead.
+func (*InferenceModel) Descriptor() ([]byte, []int) {
+	return file_teleport_summarizer_v1_summarizer_proto_rawDescGZIP(), []int{0}
+}
+
+func (x *InferenceModel) GetKind() string {
+	if x != nil {
+		return x.Kind
+	}
+	return ""
+}
+
+func (x *InferenceModel) GetSubKind() string {
+	if x != nil {
+		return x.SubKind
+	}
+	return ""
+}
+
+func (x *InferenceModel) GetVersion() string {
+	if x != nil {
+		return x.Version
+	}
+	return ""
+}
+
+func (x *InferenceModel) GetMetadata() *v1.Metadata {
+	if x != nil {
+		return x.Metadata
+	}
+	return nil
+}
+
+func (x *InferenceModel) GetSpec() *InferenceModelSpec {
+	if x != nil {
+		return x.Spec
+	}
+	return nil
+}
+
+// InferenceModelSpec specifies the inference provider and provider-specific
+// parameters.
+type InferenceModelSpec struct {
+	state protoimpl.MessageState `protogen:"open.v1"`
+	// Types that are valid to be assigned to Provider:
+	//
+	//	*InferenceModelSpec_Openai
+	//	*InferenceModelSpec_Bedrock
+	Provider isInferenceModelSpec_Provider `protobuf_oneof:"provider"`
+	// MaxSessionLengthBytes is the maximum session length that can be sent to
+	// the inference provider. Currently, it's determined by the size of the
+	// model's context window; future versions of Teleport will allow
+	// summarizing larger sessions by splitting them.
+	//
+	// Inference providers will reject requests that are larger than a given
+	// model's context window. Since context windows are usually sized in tokens,
+	// this value is an approximation. Assuming 2 bytes per input token should be
+	// safe.
+	//
+	// Currently, Teleport will outright reject sessions larger than this limit;
+	// future versions will split sessions into chunks, treating this size as a
+	// maximum.
+	//
+	// If unset or set to 0, defaults to 1MB.
+ MaxSessionLengthBytes int64 `protobuf:"varint,2,opt,name=max_session_length_bytes,json=maxSessionLengthBytes,proto3" json:"max_session_length_bytes,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *InferenceModelSpec) Reset() { + *x = InferenceModelSpec{} + mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[1] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *InferenceModelSpec) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*InferenceModelSpec) ProtoMessage() {} + +func (x *InferenceModelSpec) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[1] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use InferenceModelSpec.ProtoReflect.Descriptor instead. +func (*InferenceModelSpec) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_proto_rawDescGZIP(), []int{1} +} + +func (x *InferenceModelSpec) GetProvider() isInferenceModelSpec_Provider { + if x != nil { + return x.Provider + } + return nil +} + +func (x *InferenceModelSpec) GetOpenai() *OpenAIProvider { + if x != nil { + if x, ok := x.Provider.(*InferenceModelSpec_Openai); ok { + return x.Openai + } + } + return nil +} + +func (x *InferenceModelSpec) GetBedrock() *BedrockProvider { + if x != nil { + if x, ok := x.Provider.(*InferenceModelSpec_Bedrock); ok { + return x.Bedrock + } + } + return nil +} + +func (x *InferenceModelSpec) GetMaxSessionLengthBytes() int64 { + if x != nil { + return x.MaxSessionLengthBytes + } + return 0 +} + +type isInferenceModelSpec_Provider interface { + isInferenceModelSpec_Provider() +} + +type InferenceModelSpec_Openai struct { + // Openai indicates that this model uses OpenAI as the inference provider + // and specifies 
OpenAI-specific parameters.
+	Openai *OpenAIProvider `protobuf:"bytes,1,opt,name=openai,proto3,oneof"`
+}
+
+type InferenceModelSpec_Bedrock struct {
+	// Bedrock indicates that this model uses Amazon Bedrock as the inference
+	// provider and specifies Bedrock-specific parameters.
+	Bedrock *BedrockProvider `protobuf:"bytes,3,opt,name=bedrock,proto3,oneof"`
+}
+
+func (*InferenceModelSpec_Openai) isInferenceModelSpec_Provider() {}
+
+func (*InferenceModelSpec_Bedrock) isInferenceModelSpec_Provider() {}
+
+// OpenAIProvider specifies OpenAI-specific parameters. It can be used to
+// configure OpenAI or an OpenAI-compatible API, such as LiteLLM.
+type OpenAIProvider struct {
+	state protoimpl.MessageState `protogen:"open.v1"`
+	// OpenaiModelId specifies the model ID, as understood by the OpenAI API.
+	OpenaiModelId string `protobuf:"bytes,1,opt,name=openai_model_id,json=openaiModelId,proto3" json:"openai_model_id,omitempty"`
+	// Temperature controls the randomness of the model's output.
+	Temperature float64 `protobuf:"fixed64,2,opt,name=temperature,proto3" json:"temperature,omitempty"`
+	// ApiKeySecretRef is a reference to an InferenceSecret that contains the
+	// OpenAI API key.
+	ApiKeySecretRef string `protobuf:"bytes,3,opt,name=api_key_secret_ref,json=apiKeySecretRef,proto3" json:"api_key_secret_ref,omitempty"`
+	// BaseUrl is the OpenAI API base URL. Optional, defaults to the public
+	// OpenAI API URL. May be used to point to a custom OpenAI-compatible API,
+	// such as LiteLLM. In that case, the `api_key_secret_ref` must point to a
+	// secret that contains the API key for that custom API.
+ BaseUrl string `protobuf:"bytes,4,opt,name=base_url,json=baseUrl,proto3" json:"base_url,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *OpenAIProvider) Reset() { + *x = OpenAIProvider{} + mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[2] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *OpenAIProvider) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*OpenAIProvider) ProtoMessage() {} + +func (x *OpenAIProvider) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[2] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use OpenAIProvider.ProtoReflect.Descriptor instead. +func (*OpenAIProvider) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_proto_rawDescGZIP(), []int{2} +} + +func (x *OpenAIProvider) GetOpenaiModelId() string { + if x != nil { + return x.OpenaiModelId + } + return "" +} + +func (x *OpenAIProvider) GetTemperature() float64 { + if x != nil { + return x.Temperature + } + return 0 +} + +func (x *OpenAIProvider) GetApiKeySecretRef() string { + if x != nil { + return x.ApiKeySecretRef + } + return "" +} + +func (x *OpenAIProvider) GetBaseUrl() string { + if x != nil { + return x.BaseUrl + } + return "" +} + +// BedrockProvider specifies parameters specific to Amazon Bedrock. +type BedrockProvider struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Region is the AWS region which will be used for inference. + Region string `protobuf:"bytes,1,opt,name=region,proto3" json:"region,omitempty"` + // BedrockModelId specifies a model ID or an inference profile as understood + // by the Bedrock API. 
+	BedrockModelId string `protobuf:"bytes,2,opt,name=bedrock_model_id,json=bedrockModelId,proto3" json:"bedrock_model_id,omitempty"`
+	// Temperature controls the randomness of the model's output.
+	Temperature float32 `protobuf:"fixed32,3,opt,name=temperature,proto3" json:"temperature,omitempty"`
+	// Integration is the AWS OIDC Integration name. If unset, Teleport will use
+	// AWS credentials available on the auth server machine; otherwise, it will
+	// use the specified OIDC integration to assume an appropriate role.
+	Integration string `protobuf:"bytes,4,opt,name=integration,proto3" json:"integration,omitempty"`
+	unknownFields protoimpl.UnknownFields
+	sizeCache     protoimpl.SizeCache
+}
+
+func (x *BedrockProvider) Reset() {
+	*x = BedrockProvider{}
+	mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[3]
+	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+	ms.StoreMessageInfo(mi)
+}
+
+func (x *BedrockProvider) String() string {
+	return protoimpl.X.MessageStringOf(x)
+}
+
+func (*BedrockProvider) ProtoMessage() {}
+
+func (x *BedrockProvider) ProtoReflect() protoreflect.Message {
+	mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[3]
+	if x != nil {
+		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+		if ms.LoadMessageInfo() == nil {
+			ms.StoreMessageInfo(mi)
+		}
+		return ms
+	}
+	return mi.MessageOf(x)
+}
+
+// Deprecated: Use BedrockProvider.ProtoReflect.Descriptor instead.
+func (*BedrockProvider) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_proto_rawDescGZIP(), []int{3} +} + +func (x *BedrockProvider) GetRegion() string { + if x != nil { + return x.Region + } + return "" +} + +func (x *BedrockProvider) GetBedrockModelId() string { + if x != nil { + return x.BedrockModelId + } + return "" +} + +func (x *BedrockProvider) GetTemperature() float32 { + if x != nil { + return x.Temperature + } + return 0 +} + +func (x *BedrockProvider) GetIntegration() string { + if x != nil { + return x.Integration + } + return "" +} + +// InferenceSecret resource stores session summarization inference provider +// secrets, such as API keys. They need to be referenced by appropriate +// provider configuration inside `InferenceModelSpec`. +type InferenceSecret struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Kind is the resource kind. Should always be set to "inference_secret". + Kind string `protobuf:"bytes,1,opt,name=kind,proto3" json:"kind,omitempty"` + // SubKind is the resource sub-kind. Should be empty. + SubKind string `protobuf:"bytes,2,opt,name=sub_kind,json=subKind,proto3" json:"sub_kind,omitempty"` + // Version is the resource version. Should be set to "v1". + Version string `protobuf:"bytes,3,opt,name=version,proto3" json:"version,omitempty"` + Metadata *v1.Metadata `protobuf:"bytes,4,opt,name=metadata,proto3" json:"metadata,omitempty"` + // Spec contains the secret value. Once set, it can only be read by Teleport + // itself; it will not be returned in API responses. 
+ Spec *InferenceSecretSpec `protobuf:"bytes,5,opt,name=spec,proto3" json:"spec,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *InferenceSecret) Reset() { + *x = InferenceSecret{} + mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[4] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *InferenceSecret) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*InferenceSecret) ProtoMessage() {} + +func (x *InferenceSecret) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[4] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use InferenceSecret.ProtoReflect.Descriptor instead. +func (*InferenceSecret) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_proto_rawDescGZIP(), []int{4} +} + +func (x *InferenceSecret) GetKind() string { + if x != nil { + return x.Kind + } + return "" +} + +func (x *InferenceSecret) GetSubKind() string { + if x != nil { + return x.SubKind + } + return "" +} + +func (x *InferenceSecret) GetVersion() string { + if x != nil { + return x.Version + } + return "" +} + +func (x *InferenceSecret) GetMetadata() *v1.Metadata { + if x != nil { + return x.Metadata + } + return nil +} + +func (x *InferenceSecret) GetSpec() *InferenceSecretSpec { + if x != nil { + return x.Spec + } + return nil +} + +// InferenceSecretSpec defines the secret value for the inference model. +type InferenceSecretSpec struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Value is the secret value, such as an API key. 
+ Value string `protobuf:"bytes,1,opt,name=value,proto3" json:"value,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *InferenceSecretSpec) Reset() { + *x = InferenceSecretSpec{} + mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[5] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *InferenceSecretSpec) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*InferenceSecretSpec) ProtoMessage() {} + +func (x *InferenceSecretSpec) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[5] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use InferenceSecretSpec.ProtoReflect.Descriptor instead. +func (*InferenceSecretSpec) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_proto_rawDescGZIP(), []int{5} +} + +func (x *InferenceSecretSpec) GetValue() string { + if x != nil { + return x.Value + } + return "" +} + +// InferencePolicy resource maps sessions to summarization models. +type InferencePolicy struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Kind is the resource kind. Should always be set to "inference_policy". + Kind string `protobuf:"bytes,1,opt,name=kind,proto3" json:"kind,omitempty"` + // SubKind is the resource sub-kind. Should be empty. + SubKind string `protobuf:"bytes,2,opt,name=sub_kind,json=subKind,proto3" json:"sub_kind,omitempty"` + // Version is the resource version. Should be set to "v1". 
+ Version string `protobuf:"bytes,3,opt,name=version,proto3" json:"version,omitempty"` + Metadata *v1.Metadata `protobuf:"bytes,4,opt,name=metadata,proto3" json:"metadata,omitempty"` + Spec *InferencePolicySpec `protobuf:"bytes,5,opt,name=spec,proto3" json:"spec,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *InferencePolicy) Reset() { + *x = InferencePolicy{} + mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[6] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *InferencePolicy) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*InferencePolicy) ProtoMessage() {} + +func (x *InferencePolicy) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[6] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use InferencePolicy.ProtoReflect.Descriptor instead. +func (*InferencePolicy) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_proto_rawDescGZIP(), []int{6} +} + +func (x *InferencePolicy) GetKind() string { + if x != nil { + return x.Kind + } + return "" +} + +func (x *InferencePolicy) GetSubKind() string { + if x != nil { + return x.SubKind + } + return "" +} + +func (x *InferencePolicy) GetVersion() string { + if x != nil { + return x.Version + } + return "" +} + +func (x *InferencePolicy) GetMetadata() *v1.Metadata { + if x != nil { + return x.Metadata + } + return nil +} + +func (x *InferencePolicy) GetSpec() *InferencePolicySpec { + if x != nil { + return x.Spec + } + return nil +} + +// InferencePolicySpec maps sessions to summarization models using a filter. 
+type InferencePolicySpec struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Kinds are session kinds matched by this policy, e.g., "ssh", "k8s", "db" + Kinds []string `protobuf:"bytes,1,rep,name=kinds,proto3" json:"kinds,omitempty"` + // Model is the name of the `InferenceModel` resource to be used for + // summarization. + Model string `protobuf:"bytes,2,opt,name=model,proto3" json:"model,omitempty"` + // Filter is an optional filter expression using Teleport Predicate Language + // to select sessions for summarization. If it's empty, all sessions that + // match the list of kinds will be summarized using this model. + Filter string `protobuf:"bytes,3,opt,name=filter,proto3" json:"filter,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *InferencePolicySpec) Reset() { + *x = InferencePolicySpec{} + mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[7] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *InferencePolicySpec) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*InferencePolicySpec) ProtoMessage() {} + +func (x *InferencePolicySpec) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[7] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use InferencePolicySpec.ProtoReflect.Descriptor instead. 
+func (*InferencePolicySpec) Descriptor() ([]byte, []int) {
+	return file_teleport_summarizer_v1_summarizer_proto_rawDescGZIP(), []int{7}
+}
+
+func (x *InferencePolicySpec) GetKinds() []string {
+	if x != nil {
+		return x.Kinds
+	}
+	return nil
+}
+
+func (x *InferencePolicySpec) GetModel() string {
+	if x != nil {
+		return x.Model
+	}
+	return ""
+}
+
+func (x *InferencePolicySpec) GetFilter() string {
+	if x != nil {
+		return x.Filter
+	}
+	return ""
+}
+
+// Summary represents a summary of a session recording. This format is used to
+// store the summaries in the session storage and return them over gRPC.
+type Summary struct {
+	state protoimpl.MessageState `protogen:"open.v1"`
+	// SessionId is the ID of the session whose recording was summarized.
+	SessionId string `protobuf:"bytes,1,opt,name=session_id,json=sessionId,proto3" json:"session_id,omitempty"`
+	// State is the state of the summarization process.
+	State SummaryState `protobuf:"varint,2,opt,name=state,proto3,enum=teleport.summarizer.v1.SummaryState" json:"state,omitempty"`
+	// InferenceStartedAt is the time when the summarization process started.
+	InferenceStartedAt *timestamppb.Timestamp `protobuf:"bytes,3,opt,name=inference_started_at,json=inferenceStartedAt,proto3" json:"inference_started_at,omitempty"`
+	// InferenceFinishedAt is the time when the summarization process finished.
+	InferenceFinishedAt *timestamppb.Timestamp `protobuf:"bytes,4,opt,name=inference_finished_at,json=inferenceFinishedAt,proto3" json:"inference_finished_at,omitempty"`
+	// Content is the main text content of the summary, stored in Markdown
+	// format. Available if the state is SUMMARY_STATE_SUCCESS.
+	Content string `protobuf:"bytes,5,opt,name=content,proto3" json:"content,omitempty"`
+	// ModelName is the name of the `InferenceModel` resource that was used to
+	// generate this summary.
+ ModelName string `protobuf:"bytes,6,opt,name=model_name,json=modelName,proto3" json:"model_name,omitempty"` + // SessionEndEvent is the event that ended the summarized session. Session + // end events carry the most complete set of data that Teleport has about a + // given session. Used for checking access based on RBAC rule "where" + // filters. + // + // The event is stored in an unstructured form, as storing an instance of + // events.OneOf posed a number of technical challenges with JSON + // serialization as a subcomponent of this message. These challenges stem + // from the fact that audit events have gogoproto extensions. + SessionEndEvent *structpb.Struct `protobuf:"bytes,7,opt,name=session_end_event,json=sessionEndEvent,proto3" json:"session_end_event,omitempty"` + // ErrorMessage is an error message if the summarization failed. Available if + // the state is SUMMARY_STATE_ERROR. + ErrorMessage string `protobuf:"bytes,8,opt,name=error_message,json=errorMessage,proto3" json:"error_message,omitempty"` + // EnhancedSummary contains structured data extracted from the session. 
+ EnhancedSummary *EnhancedSummary `protobuf:"bytes,9,opt,name=enhanced_summary,json=enhancedSummary,proto3" json:"enhanced_summary,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *Summary) Reset() { + *x = Summary{} + mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[8] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *Summary) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*Summary) ProtoMessage() {} + +func (x *Summary) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[8] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use Summary.ProtoReflect.Descriptor instead. +func (*Summary) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_proto_rawDescGZIP(), []int{8} +} + +func (x *Summary) GetSessionId() string { + if x != nil { + return x.SessionId + } + return "" +} + +func (x *Summary) GetState() SummaryState { + if x != nil { + return x.State + } + return SummaryState_SUMMARY_STATE_UNSPECIFIED +} + +func (x *Summary) GetInferenceStartedAt() *timestamppb.Timestamp { + if x != nil { + return x.InferenceStartedAt + } + return nil +} + +func (x *Summary) GetInferenceFinishedAt() *timestamppb.Timestamp { + if x != nil { + return x.InferenceFinishedAt + } + return nil +} + +func (x *Summary) GetContent() string { + if x != nil { + return x.Content + } + return "" +} + +func (x *Summary) GetModelName() string { + if x != nil { + return x.ModelName + } + return "" +} + +func (x *Summary) GetSessionEndEvent() *structpb.Struct { + if x != nil { + return x.SessionEndEvent + } + return nil +} + +func (x *Summary) GetErrorMessage() string { + if x != nil { + return x.ErrorMessage + } + return "" +} + +func (x *Summary) 
GetEnhancedSummary() *EnhancedSummary { + if x != nil { + return x.EnhancedSummary + } + return nil +} + +// CommandAnalysis represents a summary of a single command executed during a +// session. +type CommandAnalysis struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Command is the command that was executed. + Command string `protobuf:"bytes,1,opt,name=command,proto3" json:"command,omitempty"` + // Category is the category of the command. + Category CommandCategory `protobuf:"varint,2,opt,name=category,proto3,enum=teleport.summarizer.v1.CommandCategory" json:"category,omitempty"` + // Success indicates whether the command executed successfully. + Success bool `protobuf:"varint,3,opt,name=success,proto3" json:"success,omitempty"` + // RiskLevel is the risk level associated with the command. + RiskLevel RiskLevel `protobuf:"varint,4,opt,name=risk_level,json=riskLevel,proto3,enum=teleport.summarizer.v1.RiskLevel" json:"risk_level,omitempty"` + // RiskScore is a numerical score representing the risk associated with the command. + RiskScore int32 `protobuf:"varint,5,opt,name=risk_score,json=riskScore,proto3" json:"risk_score,omitempty"` + // ThreatCategory is the category of any detected threat associated with the command. + ThreatCategory ThreatCategory `protobuf:"varint,6,opt,name=threat_category,json=threatCategory,proto3,enum=teleport.summarizer.v1.ThreatCategory" json:"threat_category,omitempty"` + // TimelineTitle is a brief title for the command's entry in the session timeline. + TimelineTitle string `protobuf:"bytes,7,opt,name=timeline_title,json=timelineTitle,proto3" json:"timeline_title,omitempty"` + // TimelineSubtitle is a more detailed subtitle for the command's entry in the session timeline. + TimelineSubtitle string `protobuf:"bytes,8,opt,name=timeline_subtitle,json=timelineSubtitle,proto3" json:"timeline_subtitle,omitempty"` + // ShortDescription is a concise description of the command and its purpose. 
+ ShortDescription string `protobuf:"bytes,9,opt,name=short_description,json=shortDescription,proto3" json:"short_description,omitempty"` + // DetailedDescription is an in-depth explanation of the command, its function, + // and any relevant context. + DetailedDescription string `protobuf:"bytes,10,opt,name=detailed_description,json=detailedDescription,proto3" json:"detailed_description,omitempty"` + // ErrorMessages contains any error messages produced during the execution of the command. + ErrorMessages []string `protobuf:"bytes,11,rep,name=error_messages,json=errorMessages,proto3" json:"error_messages,omitempty"` + // SuspiciousFlags contains any suspicious flags or indicators associated with the command. + SuspiciousFlags []string `protobuf:"bytes,12,rep,name=suspicious_flags,json=suspiciousFlags,proto3" json:"suspicious_flags,omitempty"` + // SensitiveItems contains any sensitive items accessed or modified by the command. + SensitiveItems []string `protobuf:"bytes,13,rep,name=sensitive_items,json=sensitiveItems,proto3" json:"sensitive_items,omitempty"` + // SuspiciousPatterns contains any suspicious patterns detected in relation to the command. + SuspiciousPatterns []string `protobuf:"bytes,14,rep,name=suspicious_patterns,json=suspiciousPatterns,proto3" json:"suspicious_patterns,omitempty"` + // IOCs contains any indicators of compromise associated with the command. + Iocs []string `protobuf:"bytes,15,rep,name=iocs,proto3" json:"iocs,omitempty"` + // MitreAttackIDs contains any MITRE ATT&CK IDs relevant to the command. + MitreAttackIds []string `protobuf:"bytes,16,rep,name=mitre_attack_ids,json=mitreAttackIds,proto3" json:"mitre_attack_ids,omitempty"` + // HasSensitiveData indicates whether the command involved access to or modification of sensitive data. 
+ HasSensitiveData bool `protobuf:"varint,17,opt,name=has_sensitive_data,json=hasSensitiveData,proto3" json:"has_sensitive_data,omitempty"` + // PrivilegeEscalation indicates whether the command involved privilege escalation. + PrivilegeEscalation bool `protobuf:"varint,18,opt,name=privilege_escalation,json=privilegeEscalation,proto3" json:"privilege_escalation,omitempty"` + // DataExfiltration indicates whether the command involved data exfiltration. + DataExfiltration bool `protobuf:"varint,19,opt,name=data_exfiltration,json=dataExfiltration,proto3" json:"data_exfiltration,omitempty"` + // Persistence indicates whether the command involved persistence mechanisms. + Persistence bool `protobuf:"varint,20,opt,name=persistence,proto3" json:"persistence,omitempty"` + // StartOffset is the start time of the command, relative to the start of the session. + StartOffset *durationpb.Duration `protobuf:"bytes,21,opt,name=start_offset,json=startOffset,proto3" json:"start_offset,omitempty"` + // EndOffset is the end time of the command, relative to the start of the session. 
+ EndOffset *durationpb.Duration `protobuf:"bytes,22,opt,name=end_offset,json=endOffset,proto3" json:"end_offset,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *CommandAnalysis) Reset() { + *x = CommandAnalysis{} + mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[9] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *CommandAnalysis) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*CommandAnalysis) ProtoMessage() {} + +func (x *CommandAnalysis) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[9] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use CommandAnalysis.ProtoReflect.Descriptor instead. +func (*CommandAnalysis) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_proto_rawDescGZIP(), []int{9} +} + +func (x *CommandAnalysis) GetCommand() string { + if x != nil { + return x.Command + } + return "" +} + +func (x *CommandAnalysis) GetCategory() CommandCategory { + if x != nil { + return x.Category + } + return CommandCategory_COMMAND_CATEGORY_UNSPECIFIED +} + +func (x *CommandAnalysis) GetSuccess() bool { + if x != nil { + return x.Success + } + return false +} + +func (x *CommandAnalysis) GetRiskLevel() RiskLevel { + if x != nil { + return x.RiskLevel + } + return RiskLevel_RISK_LEVEL_UNSPECIFIED +} + +func (x *CommandAnalysis) GetRiskScore() int32 { + if x != nil { + return x.RiskScore + } + return 0 +} + +func (x *CommandAnalysis) GetThreatCategory() ThreatCategory { + if x != nil { + return x.ThreatCategory + } + return ThreatCategory_THREAT_CATEGORY_UNSPECIFIED +} + +func (x *CommandAnalysis) GetTimelineTitle() string { + if x != nil { + return x.TimelineTitle + } + return "" +} + +func (x 
*CommandAnalysis) GetTimelineSubtitle() string { + if x != nil { + return x.TimelineSubtitle + } + return "" +} + +func (x *CommandAnalysis) GetShortDescription() string { + if x != nil { + return x.ShortDescription + } + return "" +} + +func (x *CommandAnalysis) GetDetailedDescription() string { + if x != nil { + return x.DetailedDescription + } + return "" +} + +func (x *CommandAnalysis) GetErrorMessages() []string { + if x != nil { + return x.ErrorMessages + } + return nil +} + +func (x *CommandAnalysis) GetSuspiciousFlags() []string { + if x != nil { + return x.SuspiciousFlags + } + return nil +} + +func (x *CommandAnalysis) GetSensitiveItems() []string { + if x != nil { + return x.SensitiveItems + } + return nil +} + +func (x *CommandAnalysis) GetSuspiciousPatterns() []string { + if x != nil { + return x.SuspiciousPatterns + } + return nil +} + +func (x *CommandAnalysis) GetIocs() []string { + if x != nil { + return x.Iocs + } + return nil +} + +func (x *CommandAnalysis) GetMitreAttackIds() []string { + if x != nil { + return x.MitreAttackIds + } + return nil +} + +func (x *CommandAnalysis) GetHasSensitiveData() bool { + if x != nil { + return x.HasSensitiveData + } + return false +} + +func (x *CommandAnalysis) GetPrivilegeEscalation() bool { + if x != nil { + return x.PrivilegeEscalation + } + return false +} + +func (x *CommandAnalysis) GetDataExfiltration() bool { + if x != nil { + return x.DataExfiltration + } + return false +} + +func (x *CommandAnalysis) GetPersistence() bool { + if x != nil { + return x.Persistence + } + return false +} + +func (x *CommandAnalysis) GetStartOffset() *durationpb.Duration { + if x != nil { + return x.StartOffset + } + return nil +} + +func (x *CommandAnalysis) GetEndOffset() *durationpb.Duration { + if x != nil { + return x.EndOffset + } + return nil +} + +// SecurityRecommendation represents a security recommendation related to a command +type SecurityRecommendation struct { + state protoimpl.MessageState 
`protogen:"open.v1"` + // Title is a brief title for the security recommendation. + Title string `protobuf:"bytes,1,opt,name=title,proto3" json:"title,omitempty"` + // Description is a detailed description of the security recommendation. + Description string `protobuf:"bytes,2,opt,name=description,proto3" json:"description,omitempty"` + // Severity indicates the severity level of the recommendation. + Severity RiskLevel `protobuf:"varint,3,opt,name=severity,proto3,enum=teleport.summarizer.v1.RiskLevel" json:"severity,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *SecurityRecommendation) Reset() { + *x = SecurityRecommendation{} + mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[10] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *SecurityRecommendation) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*SecurityRecommendation) ProtoMessage() {} + +func (x *SecurityRecommendation) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[10] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use SecurityRecommendation.ProtoReflect.Descriptor instead. 
+func (*SecurityRecommendation) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_proto_rawDescGZIP(), []int{10} +} + +func (x *SecurityRecommendation) GetTitle() string { + if x != nil { + return x.Title + } + return "" +} + +func (x *SecurityRecommendation) GetDescription() string { + if x != nil { + return x.Description + } + return "" +} + +func (x *SecurityRecommendation) GetSeverity() RiskLevel { + if x != nil { + return x.Severity + } + return RiskLevel_RISK_LEVEL_UNSPECIFIED +} + +// EnhancedSummary represents an enhanced summary of a session recording, +// with structured data extracted from the session. +type EnhancedSummary struct { + state protoimpl.MessageState `protogen:"open.v1"` + // ShortDescription is a concise description of the session. + ShortDescription string `protobuf:"bytes,1,opt,name=short_description,json=shortDescription,proto3" json:"short_description,omitempty"` + // DetailedDescription is an in-depth explanation of the session. + DetailedDescription string `protobuf:"bytes,2,opt,name=detailed_description,json=detailedDescription,proto3" json:"detailed_description,omitempty"` + // RiskLevel is the overall risk level associated with the session. + RiskLevel RiskLevel `protobuf:"varint,3,opt,name=risk_level,json=riskLevel,proto3,enum=teleport.summarizer.v1.RiskLevel" json:"risk_level,omitempty"` + // SuspiciousActivities is a list of suspicious activities detected during the session. + SuspiciousActivities []string `protobuf:"bytes,4,rep,name=suspicious_activities,json=suspiciousActivities,proto3" json:"suspicious_activities,omitempty"` + // CompromiseIndicators indicates whether any indicators of compromise were detected during the session. + CompromiseIndicators bool `protobuf:"varint,5,opt,name=compromise_indicators,json=compromiseIndicators,proto3" json:"compromise_indicators,omitempty"` + // NotableCommandIndexes is a list of indexes of commands that are considered notable. 
+ NotableCommandIndexes []int32 `protobuf:"varint,6,rep,packed,name=notable_command_indexes,json=notableCommandIndexes,proto3" json:"notable_command_indexes,omitempty"` + // Commands is a list of command summaries extracted from the session. + Commands []*CommandAnalysis `protobuf:"bytes,7,rep,name=commands,proto3" json:"commands,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *EnhancedSummary) Reset() { + *x = EnhancedSummary{} + mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[11] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *EnhancedSummary) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*EnhancedSummary) ProtoMessage() {} + +func (x *EnhancedSummary) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_proto_msgTypes[11] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use EnhancedSummary.ProtoReflect.Descriptor instead. 
+func (*EnhancedSummary) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_proto_rawDescGZIP(), []int{11} +} + +func (x *EnhancedSummary) GetShortDescription() string { + if x != nil { + return x.ShortDescription + } + return "" +} + +func (x *EnhancedSummary) GetDetailedDescription() string { + if x != nil { + return x.DetailedDescription + } + return "" +} + +func (x *EnhancedSummary) GetRiskLevel() RiskLevel { + if x != nil { + return x.RiskLevel + } + return RiskLevel_RISK_LEVEL_UNSPECIFIED +} + +func (x *EnhancedSummary) GetSuspiciousActivities() []string { + if x != nil { + return x.SuspiciousActivities + } + return nil +} + +func (x *EnhancedSummary) GetCompromiseIndicators() bool { + if x != nil { + return x.CompromiseIndicators + } + return false +} + +func (x *EnhancedSummary) GetNotableCommandIndexes() []int32 { + if x != nil { + return x.NotableCommandIndexes + } + return nil +} + +func (x *EnhancedSummary) GetCommands() []*CommandAnalysis { + if x != nil { + return x.Commands + } + return nil +} + +var File_teleport_summarizer_v1_summarizer_proto protoreflect.FileDescriptor + +const file_teleport_summarizer_v1_summarizer_proto_rawDesc = "" + + "\n" + + "'teleport/summarizer/v1/summarizer.proto\x12\x16teleport.summarizer.v1\x1a\x1egoogle/protobuf/duration.proto\x1a\x1cgoogle/protobuf/struct.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a!teleport/header/v1/metadata.proto\"\xd3\x01\n" + + "\x0eInferenceModel\x12\x12\n" + + "\x04kind\x18\x01 \x01(\tR\x04kind\x12\x19\n" + + "\bsub_kind\x18\x02 \x01(\tR\asubKind\x12\x18\n" + + "\aversion\x18\x03 \x01(\tR\aversion\x128\n" + + "\bmetadata\x18\x04 \x01(\v2\x1c.teleport.header.v1.MetadataR\bmetadata\x12>\n" + + "\x04spec\x18\x05 \x01(\v2*.teleport.summarizer.v1.InferenceModelSpecR\x04spec\"\xe0\x01\n" + + "\x12InferenceModelSpec\x12@\n" + + "\x06openai\x18\x01 \x01(\v2&.teleport.summarizer.v1.OpenAIProviderH\x00R\x06openai\x12C\n" + + "\abedrock\x18\x03 
\x01(\v2'.teleport.summarizer.v1.BedrockProviderH\x00R\abedrock\x127\n" + + "\x18max_session_length_bytes\x18\x02 \x01(\x03R\x15maxSessionLengthBytesB\n" + + "\n" + + "\bprovider\"\xa2\x01\n" + + "\x0eOpenAIProvider\x12&\n" + + "\x0fopenai_model_id\x18\x01 \x01(\tR\ropenaiModelId\x12 \n" + + "\vtemperature\x18\x02 \x01(\x01R\vtemperature\x12+\n" + + "\x12api_key_secret_ref\x18\x03 \x01(\tR\x0fapiKeySecretRef\x12\x19\n" + + "\bbase_url\x18\x04 \x01(\tR\abaseUrl\"\x97\x01\n" + + "\x0fBedrockProvider\x12\x16\n" + + "\x06region\x18\x01 \x01(\tR\x06region\x12(\n" + + "\x10bedrock_model_id\x18\x02 \x01(\tR\x0ebedrockModelId\x12 \n" + + "\vtemperature\x18\x03 \x01(\x02R\vtemperature\x12 \n" + + "\vintegration\x18\x04 \x01(\tR\vintegration\"\xd5\x01\n" + + "\x0fInferenceSecret\x12\x12\n" + + "\x04kind\x18\x01 \x01(\tR\x04kind\x12\x19\n" + + "\bsub_kind\x18\x02 \x01(\tR\asubKind\x12\x18\n" + + "\aversion\x18\x03 \x01(\tR\aversion\x128\n" + + "\bmetadata\x18\x04 \x01(\v2\x1c.teleport.header.v1.MetadataR\bmetadata\x12?\n" + + "\x04spec\x18\x05 \x01(\v2+.teleport.summarizer.v1.InferenceSecretSpecR\x04spec\"+\n" + + "\x13InferenceSecretSpec\x12\x14\n" + + "\x05value\x18\x01 \x01(\tR\x05value\"\xd5\x01\n" + + "\x0fInferencePolicy\x12\x12\n" + + "\x04kind\x18\x01 \x01(\tR\x04kind\x12\x19\n" + + "\bsub_kind\x18\x02 \x01(\tR\asubKind\x12\x18\n" + + "\aversion\x18\x03 \x01(\tR\aversion\x128\n" + + "\bmetadata\x18\x04 \x01(\v2\x1c.teleport.header.v1.MetadataR\bmetadata\x12?\n" + + "\x04spec\x18\x05 \x01(\v2+.teleport.summarizer.v1.InferencePolicySpecR\x04spec\"Y\n" + + "\x13InferencePolicySpec\x12\x14\n" + + "\x05kinds\x18\x01 \x03(\tR\x05kinds\x12\x14\n" + + "\x05model\x18\x02 \x01(\tR\x05model\x12\x16\n" + + "\x06filter\x18\x03 \x01(\tR\x06filter\"\xf9\x03\n" + + "\aSummary\x12\x1d\n" + + "\n" + + "session_id\x18\x01 \x01(\tR\tsessionId\x12:\n" + + "\x05state\x18\x02 \x01(\x0e2$.teleport.summarizer.v1.SummaryStateR\x05state\x12L\n" + + "\x14inference_started_at\x18\x03 
\x01(\v2\x1a.google.protobuf.TimestampR\x12inferenceStartedAt\x12N\n" + + "\x15inference_finished_at\x18\x04 \x01(\v2\x1a.google.protobuf.TimestampR\x13inferenceFinishedAt\x12\x18\n" + + "\acontent\x18\x05 \x01(\tR\acontent\x12\x1d\n" + + "\n" + + "model_name\x18\x06 \x01(\tR\tmodelName\x12C\n" + + "\x11session_end_event\x18\a \x01(\v2\x17.google.protobuf.StructR\x0fsessionEndEvent\x12#\n" + + "\rerror_message\x18\b \x01(\tR\ferrorMessage\x12R\n" + + "\x10enhanced_summary\x18\t \x01(\v2'.teleport.summarizer.v1.EnhancedSummaryR\x0fenhancedSummary\"\x82\b\n" + + "\x0fCommandAnalysis\x12\x18\n" + + "\acommand\x18\x01 \x01(\tR\acommand\x12C\n" + + "\bcategory\x18\x02 \x01(\x0e2'.teleport.summarizer.v1.CommandCategoryR\bcategory\x12\x18\n" + + "\asuccess\x18\x03 \x01(\bR\asuccess\x12@\n" + + "\n" + + "risk_level\x18\x04 \x01(\x0e2!.teleport.summarizer.v1.RiskLevelR\triskLevel\x12\x1d\n" + + "\n" + + "risk_score\x18\x05 \x01(\x05R\triskScore\x12O\n" + + "\x0fthreat_category\x18\x06 \x01(\x0e2&.teleport.summarizer.v1.ThreatCategoryR\x0ethreatCategory\x12%\n" + + "\x0etimeline_title\x18\a \x01(\tR\rtimelineTitle\x12+\n" + + "\x11timeline_subtitle\x18\b \x01(\tR\x10timelineSubtitle\x12+\n" + + "\x11short_description\x18\t \x01(\tR\x10shortDescription\x121\n" + + "\x14detailed_description\x18\n" + + " \x01(\tR\x13detailedDescription\x12%\n" + + "\x0eerror_messages\x18\v \x03(\tR\rerrorMessages\x12)\n" + + "\x10suspicious_flags\x18\f \x03(\tR\x0fsuspiciousFlags\x12'\n" + + "\x0fsensitive_items\x18\r \x03(\tR\x0esensitiveItems\x12/\n" + + "\x13suspicious_patterns\x18\x0e \x03(\tR\x12suspiciousPatterns\x12\x12\n" + + "\x04iocs\x18\x0f \x03(\tR\x04iocs\x12(\n" + + "\x10mitre_attack_ids\x18\x10 \x03(\tR\x0emitreAttackIds\x12,\n" + + "\x12has_sensitive_data\x18\x11 \x01(\bR\x10hasSensitiveData\x121\n" + + "\x14privilege_escalation\x18\x12 \x01(\bR\x13privilegeEscalation\x12+\n" + + "\x11data_exfiltration\x18\x13 \x01(\bR\x10dataExfiltration\x12 \n" + + "\vpersistence\x18\x14 
\x01(\bR\vpersistence\x12<\n" + + "\fstart_offset\x18\x15 \x01(\v2\x19.google.protobuf.DurationR\vstartOffset\x128\n" + + "\n" + + "end_offset\x18\x16 \x01(\v2\x19.google.protobuf.DurationR\tendOffset\"\x8f\x01\n" + + "\x16SecurityRecommendation\x12\x14\n" + + "\x05title\x18\x01 \x01(\tR\x05title\x12 \n" + + "\vdescription\x18\x02 \x01(\tR\vdescription\x12=\n" + + "\bseverity\x18\x03 \x01(\x0e2!.teleport.summarizer.v1.RiskLevelR\bseverity\"\x9a\x03\n" + + "\x0fEnhancedSummary\x12+\n" + + "\x11short_description\x18\x01 \x01(\tR\x10shortDescription\x121\n" + + "\x14detailed_description\x18\x02 \x01(\tR\x13detailedDescription\x12@\n" + + "\n" + + "risk_level\x18\x03 \x01(\x0e2!.teleport.summarizer.v1.RiskLevelR\triskLevel\x123\n" + + "\x15suspicious_activities\x18\x04 \x03(\tR\x14suspiciousActivities\x123\n" + + "\x15compromise_indicators\x18\x05 \x01(\bR\x14compromiseIndicators\x126\n" + + "\x17notable_command_indexes\x18\x06 \x03(\x05R\x15notableCommandIndexes\x12C\n" + + "\bcommands\x18\a \x03(\v2'.teleport.summarizer.v1.CommandAnalysisR\bcommands*|\n" + + "\fSummaryState\x12\x1d\n" + + "\x19SUMMARY_STATE_UNSPECIFIED\x10\x00\x12\x19\n" + + "\x15SUMMARY_STATE_PENDING\x10\x01\x12\x19\n" + + "\x15SUMMARY_STATE_SUCCESS\x10\x02\x12\x17\n" + + "\x13SUMMARY_STATE_ERROR\x10\x03*\x9b\x02\n" + + "\x0fCommandCategory\x12 \n" + + "\x1cCOMMAND_CATEGORY_UNSPECIFIED\x10\x00\x12#\n" + + "\x1fCOMMAND_CATEGORY_FILE_OPERATION\x10\x01\x12\x1c\n" + + "\x18COMMAND_CATEGORY_NETWORK\x10\x02\x12\x1c\n" + + "\x18COMMAND_CATEGORY_PROCESS\x10\x03\x12\"\n" + + "\x1eCOMMAND_CATEGORY_SYSTEM_CONFIG\x10\x04\x12 \n" + + "\x1cCOMMAND_CATEGORY_DATA_ACCESS\x10\x05\x12#\n" + + "\x1fCOMMAND_CATEGORY_AUTHENTICATION\x10\x06\x12\x1a\n" + + "\x16COMMAND_CATEGORY_OTHER\x10\a*\x80\x01\n" + + "\tRiskLevel\x12\x1a\n" + + "\x16RISK_LEVEL_UNSPECIFIED\x10\x00\x12\x12\n" + + "\x0eRISK_LEVEL_LOW\x10\x01\x12\x15\n" + + "\x11RISK_LEVEL_MEDIUM\x10\x02\x12\x13\n" + + "\x0fRISK_LEVEL_HIGH\x10\x03\x12\x17\n" + + 
"\x13RISK_LEVEL_CRITICAL\x10\x04*\xc8\x03\n" + + "\x0eThreatCategory\x12\x1f\n" + + "\x1bTHREAT_CATEGORY_UNSPECIFIED\x10\x00\x12\"\n" + + "\x1eTHREAT_CATEGORY_RECONNAISSANCE\x10\x01\x12\x1d\n" + + "\x19THREAT_CATEGORY_EXECUTION\x10\x02\x12\x1f\n" + + "\x1bTHREAT_CATEGORY_PERSISTENCE\x10\x03\x12(\n" + + "$THREAT_CATEGORY_PRIVILEGE_ESCALATION\x10\x04\x12#\n" + + "\x1fTHREAT_CATEGORY_DEFENSE_EVASION\x10\x05\x12%\n" + + "!THREAT_CATEGORY_CREDENTIAL_ACCESS\x10\x06\x12\x1d\n" + + "\x19THREAT_CATEGORY_DISCOVERY\x10\a\x12$\n" + + " THREAT_CATEGORY_LATERAL_MOVEMENT\x10\b\x12\x1e\n" + + "\x1aTHREAT_CATEGORY_COLLECTION\x10\t\x12 \n" + + "\x1cTHREAT_CATEGORY_EXFILTRATION\x10\n" + + "\x12\x1a\n" + + "\x16THREAT_CATEGORY_IMPACT\x10\v\x12\x18\n" + + "\x14THREAT_CATEGORY_NONE\x10\fBXZVgithub.com/gravitational/teleport/api/gen/proto/go/teleport/summarizer/v1;summarizerv1b\x06proto3" + +var ( + file_teleport_summarizer_v1_summarizer_proto_rawDescOnce sync.Once + file_teleport_summarizer_v1_summarizer_proto_rawDescData []byte +) + +func file_teleport_summarizer_v1_summarizer_proto_rawDescGZIP() []byte { + file_teleport_summarizer_v1_summarizer_proto_rawDescOnce.Do(func() { + file_teleport_summarizer_v1_summarizer_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_teleport_summarizer_v1_summarizer_proto_rawDesc), len(file_teleport_summarizer_v1_summarizer_proto_rawDesc))) + }) + return file_teleport_summarizer_v1_summarizer_proto_rawDescData +} + +var file_teleport_summarizer_v1_summarizer_proto_enumTypes = make([]protoimpl.EnumInfo, 4) +var file_teleport_summarizer_v1_summarizer_proto_msgTypes = make([]protoimpl.MessageInfo, 12) +var file_teleport_summarizer_v1_summarizer_proto_goTypes = []any{ + (SummaryState)(0), // 0: teleport.summarizer.v1.SummaryState + (CommandCategory)(0), // 1: teleport.summarizer.v1.CommandCategory + (RiskLevel)(0), // 2: teleport.summarizer.v1.RiskLevel + (ThreatCategory)(0), // 3: teleport.summarizer.v1.ThreatCategory + 
(*InferenceModel)(nil), // 4: teleport.summarizer.v1.InferenceModel + (*InferenceModelSpec)(nil), // 5: teleport.summarizer.v1.InferenceModelSpec + (*OpenAIProvider)(nil), // 6: teleport.summarizer.v1.OpenAIProvider + (*BedrockProvider)(nil), // 7: teleport.summarizer.v1.BedrockProvider + (*InferenceSecret)(nil), // 8: teleport.summarizer.v1.InferenceSecret + (*InferenceSecretSpec)(nil), // 9: teleport.summarizer.v1.InferenceSecretSpec + (*InferencePolicy)(nil), // 10: teleport.summarizer.v1.InferencePolicy + (*InferencePolicySpec)(nil), // 11: teleport.summarizer.v1.InferencePolicySpec + (*Summary)(nil), // 12: teleport.summarizer.v1.Summary + (*CommandAnalysis)(nil), // 13: teleport.summarizer.v1.CommandAnalysis + (*SecurityRecommendation)(nil), // 14: teleport.summarizer.v1.SecurityRecommendation + (*EnhancedSummary)(nil), // 15: teleport.summarizer.v1.EnhancedSummary + (*v1.Metadata)(nil), // 16: teleport.header.v1.Metadata + (*timestamppb.Timestamp)(nil), // 17: google.protobuf.Timestamp + (*structpb.Struct)(nil), // 18: google.protobuf.Struct + (*durationpb.Duration)(nil), // 19: google.protobuf.Duration +} +var file_teleport_summarizer_v1_summarizer_proto_depIdxs = []int32{ + 16, // 0: teleport.summarizer.v1.InferenceModel.metadata:type_name -> teleport.header.v1.Metadata + 5, // 1: teleport.summarizer.v1.InferenceModel.spec:type_name -> teleport.summarizer.v1.InferenceModelSpec + 6, // 2: teleport.summarizer.v1.InferenceModelSpec.openai:type_name -> teleport.summarizer.v1.OpenAIProvider + 7, // 3: teleport.summarizer.v1.InferenceModelSpec.bedrock:type_name -> teleport.summarizer.v1.BedrockProvider + 16, // 4: teleport.summarizer.v1.InferenceSecret.metadata:type_name -> teleport.header.v1.Metadata + 9, // 5: teleport.summarizer.v1.InferenceSecret.spec:type_name -> teleport.summarizer.v1.InferenceSecretSpec + 16, // 6: teleport.summarizer.v1.InferencePolicy.metadata:type_name -> teleport.header.v1.Metadata + 11, // 7: 
teleport.summarizer.v1.InferencePolicy.spec:type_name -> teleport.summarizer.v1.InferencePolicySpec + 0, // 8: teleport.summarizer.v1.Summary.state:type_name -> teleport.summarizer.v1.SummaryState + 17, // 9: teleport.summarizer.v1.Summary.inference_started_at:type_name -> google.protobuf.Timestamp + 17, // 10: teleport.summarizer.v1.Summary.inference_finished_at:type_name -> google.protobuf.Timestamp + 18, // 11: teleport.summarizer.v1.Summary.session_end_event:type_name -> google.protobuf.Struct + 15, // 12: teleport.summarizer.v1.Summary.enhanced_summary:type_name -> teleport.summarizer.v1.EnhancedSummary + 1, // 13: teleport.summarizer.v1.CommandAnalysis.category:type_name -> teleport.summarizer.v1.CommandCategory + 2, // 14: teleport.summarizer.v1.CommandAnalysis.risk_level:type_name -> teleport.summarizer.v1.RiskLevel + 3, // 15: teleport.summarizer.v1.CommandAnalysis.threat_category:type_name -> teleport.summarizer.v1.ThreatCategory + 19, // 16: teleport.summarizer.v1.CommandAnalysis.start_offset:type_name -> google.protobuf.Duration + 19, // 17: teleport.summarizer.v1.CommandAnalysis.end_offset:type_name -> google.protobuf.Duration + 2, // 18: teleport.summarizer.v1.SecurityRecommendation.severity:type_name -> teleport.summarizer.v1.RiskLevel + 2, // 19: teleport.summarizer.v1.EnhancedSummary.risk_level:type_name -> teleport.summarizer.v1.RiskLevel + 13, // 20: teleport.summarizer.v1.EnhancedSummary.commands:type_name -> teleport.summarizer.v1.CommandAnalysis + 21, // [21:21] is the sub-list for method output_type + 21, // [21:21] is the sub-list for method input_type + 21, // [21:21] is the sub-list for extension type_name + 21, // [21:21] is the sub-list for extension extendee + 0, // [0:21] is the sub-list for field type_name +} + +func init() { file_teleport_summarizer_v1_summarizer_proto_init() } +func file_teleport_summarizer_v1_summarizer_proto_init() { + if File_teleport_summarizer_v1_summarizer_proto != nil { + return + } + 
file_teleport_summarizer_v1_summarizer_proto_msgTypes[1].OneofWrappers = []any{ + (*InferenceModelSpec_Openai)(nil), + (*InferenceModelSpec_Bedrock)(nil), + } + type x struct{} + out := protoimpl.TypeBuilder{ + File: protoimpl.DescBuilder{ + GoPackagePath: reflect.TypeOf(x{}).PkgPath(), + RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_summarizer_v1_summarizer_proto_rawDesc), len(file_teleport_summarizer_v1_summarizer_proto_rawDesc)), + NumEnums: 4, + NumMessages: 12, + NumExtensions: 0, + NumServices: 0, + }, + GoTypes: file_teleport_summarizer_v1_summarizer_proto_goTypes, + DependencyIndexes: file_teleport_summarizer_v1_summarizer_proto_depIdxs, + EnumInfos: file_teleport_summarizer_v1_summarizer_proto_enumTypes, + MessageInfos: file_teleport_summarizer_v1_summarizer_proto_msgTypes, + }.Build() + File_teleport_summarizer_v1_summarizer_proto = out.File + file_teleport_summarizer_v1_summarizer_proto_goTypes = nil + file_teleport_summarizer_v1_summarizer_proto_depIdxs = nil +} diff --git a/api/gen/proto/go/teleport/summarizer/v1/summarizer_service.pb.go b/api/gen/proto/go/teleport/summarizer/v1/summarizer_service.pb.go new file mode 100644 index 0000000000000..c982978a0fd1d --- /dev/null +++ b/api/gen/proto/go/teleport/summarizer/v1/summarizer_service.pb.go @@ -0,0 +1,2185 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Code generated by protoc-gen-go. DO NOT EDIT. 
+// versions: +// protoc-gen-go v1.36.8 +// protoc (unknown) +// source: teleport/summarizer/v1/summarizer_service.proto + +package summarizerv1 + +import ( + protoreflect "google.golang.org/protobuf/reflect/protoreflect" + protoimpl "google.golang.org/protobuf/runtime/protoimpl" + reflect "reflect" + sync "sync" + unsafe "unsafe" +) + +const ( + // Verify that this generated code is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) + // Verify that runtime/protoimpl is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) +) + +// CreateInferenceModelRequest is a request for creating an InferenceModel. +type CreateInferenceModelRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // InferenceModel is the InferenceModel resource to create. + Model *InferenceModel `protobuf:"bytes,1,opt,name=model,proto3" json:"model,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *CreateInferenceModelRequest) Reset() { + *x = CreateInferenceModelRequest{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[0] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *CreateInferenceModelRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*CreateInferenceModelRequest) ProtoMessage() {} + +func (x *CreateInferenceModelRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[0] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use CreateInferenceModelRequest.ProtoReflect.Descriptor instead. 
+func (*CreateInferenceModelRequest) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{0} +} + +func (x *CreateInferenceModelRequest) GetModel() *InferenceModel { + if x != nil { + return x.Model + } + return nil +} + +// CreateInferenceModelResponse is a response to creating an InferenceModel. +type CreateInferenceModelResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Model is the InferenceModel resource that was created. + Model *InferenceModel `protobuf:"bytes,1,opt,name=model,proto3" json:"model,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *CreateInferenceModelResponse) Reset() { + *x = CreateInferenceModelResponse{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[1] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *CreateInferenceModelResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*CreateInferenceModelResponse) ProtoMessage() {} + +func (x *CreateInferenceModelResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[1] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use CreateInferenceModelResponse.ProtoReflect.Descriptor instead. +func (*CreateInferenceModelResponse) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{1} +} + +func (x *CreateInferenceModelResponse) GetModel() *InferenceModel { + if x != nil { + return x.Model + } + return nil +} + +// GetInferenceModelRequest is a request for retrieving an InferenceModel. 
+type GetInferenceModelRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Name is the resource name of the InferenceModel to retrieve. + Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetInferenceModelRequest) Reset() { + *x = GetInferenceModelRequest{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[2] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetInferenceModelRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetInferenceModelRequest) ProtoMessage() {} + +func (x *GetInferenceModelRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[2] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetInferenceModelRequest.ProtoReflect.Descriptor instead. +func (*GetInferenceModelRequest) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{2} +} + +func (x *GetInferenceModelRequest) GetName() string { + if x != nil { + return x.Name + } + return "" +} + +// GetInferenceModelResponse is a response to retrieving an InferenceModel. +type GetInferenceModelResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Model is the InferenceModel resource that was retrieved. 
+ Model *InferenceModel `protobuf:"bytes,1,opt,name=model,proto3" json:"model,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetInferenceModelResponse) Reset() { + *x = GetInferenceModelResponse{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[3] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetInferenceModelResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetInferenceModelResponse) ProtoMessage() {} + +func (x *GetInferenceModelResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[3] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetInferenceModelResponse.ProtoReflect.Descriptor instead. +func (*GetInferenceModelResponse) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{3} +} + +func (x *GetInferenceModelResponse) GetModel() *InferenceModel { + if x != nil { + return x.Model + } + return nil +} + +// UpdateInferenceModelRequest is a request for updating an InferenceModel. +type UpdateInferenceModelRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Model is the InferenceModel resource to update. 
+ Model *InferenceModel `protobuf:"bytes,1,opt,name=model,proto3" json:"model,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UpdateInferenceModelRequest) Reset() { + *x = UpdateInferenceModelRequest{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[4] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UpdateInferenceModelRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UpdateInferenceModelRequest) ProtoMessage() {} + +func (x *UpdateInferenceModelRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[4] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UpdateInferenceModelRequest.ProtoReflect.Descriptor instead. +func (*UpdateInferenceModelRequest) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{4} +} + +func (x *UpdateInferenceModelRequest) GetModel() *InferenceModel { + if x != nil { + return x.Model + } + return nil +} + +// UpdateInferenceModelResponse is a response to updating an InferenceModel. +type UpdateInferenceModelResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Model is the InferenceModel resource that was updated. 
+ Model *InferenceModel `protobuf:"bytes,1,opt,name=model,proto3" json:"model,omitempty"`
+ unknownFields protoimpl.UnknownFields
+ sizeCache protoimpl.SizeCache
+}
+
+func (x *UpdateInferenceModelResponse) Reset() {
+ *x = UpdateInferenceModelResponse{}
+ mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[5]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+}
+
+func (x *UpdateInferenceModelResponse) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*UpdateInferenceModelResponse) ProtoMessage() {}
+
+func (x *UpdateInferenceModelResponse) ProtoReflect() protoreflect.Message {
+ mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[5]
+ if x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use UpdateInferenceModelResponse.ProtoReflect.Descriptor instead.
+func (*UpdateInferenceModelResponse) Descriptor() ([]byte, []int) {
+ return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{5}
+}
+
+func (x *UpdateInferenceModelResponse) GetModel() *InferenceModel {
+ if x != nil {
+ return x.Model
+ }
+ return nil
+}
+
+// UpsertInferenceModelRequest is a request for creating or updating an
+// InferenceModel.
+type UpsertInferenceModelRequest struct {
+ state protoimpl.MessageState `protogen:"open.v1"`
+ // Model is the InferenceModel resource to create or update.
+ Model *InferenceModel `protobuf:"bytes,1,opt,name=model,proto3" json:"model,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UpsertInferenceModelRequest) Reset() { + *x = UpsertInferenceModelRequest{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[6] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UpsertInferenceModelRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UpsertInferenceModelRequest) ProtoMessage() {} + +func (x *UpsertInferenceModelRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[6] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UpsertInferenceModelRequest.ProtoReflect.Descriptor instead. +func (*UpsertInferenceModelRequest) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{6} +} + +func (x *UpsertInferenceModelRequest) GetModel() *InferenceModel { + if x != nil { + return x.Model + } + return nil +} + +// UpsertInferenceModelResponse is a response to creating or updating an +// InferenceModel. +type UpsertInferenceModelResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Model is the InferenceModel resource that was created or updated. 
+ Model *InferenceModel `protobuf:"bytes,1,opt,name=model,proto3" json:"model,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UpsertInferenceModelResponse) Reset() { + *x = UpsertInferenceModelResponse{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[7] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UpsertInferenceModelResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UpsertInferenceModelResponse) ProtoMessage() {} + +func (x *UpsertInferenceModelResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[7] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UpsertInferenceModelResponse.ProtoReflect.Descriptor instead. +func (*UpsertInferenceModelResponse) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{7} +} + +func (x *UpsertInferenceModelResponse) GetModel() *InferenceModel { + if x != nil { + return x.Model + } + return nil +} + +// DeleteInferenceModelRequest is a request for deleting an InferenceModel. +type DeleteInferenceModelRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Name is the resource name of the InferenceModel to delete. 
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *DeleteInferenceModelRequest) Reset() { + *x = DeleteInferenceModelRequest{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[8] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *DeleteInferenceModelRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*DeleteInferenceModelRequest) ProtoMessage() {} + +func (x *DeleteInferenceModelRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[8] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use DeleteInferenceModelRequest.ProtoReflect.Descriptor instead. +func (*DeleteInferenceModelRequest) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{8} +} + +func (x *DeleteInferenceModelRequest) GetName() string { + if x != nil { + return x.Name + } + return "" +} + +// DeleteInferenceModelResponse is a response to deleting an InferenceModel. 
+type DeleteInferenceModelResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *DeleteInferenceModelResponse) Reset() { + *x = DeleteInferenceModelResponse{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[9] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *DeleteInferenceModelResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*DeleteInferenceModelResponse) ProtoMessage() {} + +func (x *DeleteInferenceModelResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[9] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use DeleteInferenceModelResponse.ProtoReflect.Descriptor instead. +func (*DeleteInferenceModelResponse) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{9} +} + +// ListInferenceModelsRequest is a request for listing InferenceModels. +type ListInferenceModelsRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // PageSize is the maximum number of items to return. The server may use a + // different page size at its discretion. + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // PageToken is the next_page_token value returned from a previous List + // request, if any. 
+ PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListInferenceModelsRequest) Reset() { + *x = ListInferenceModelsRequest{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[10] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListInferenceModelsRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListInferenceModelsRequest) ProtoMessage() {} + +func (x *ListInferenceModelsRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[10] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListInferenceModelsRequest.ProtoReflect.Descriptor instead. +func (*ListInferenceModelsRequest) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{10} +} + +func (x *ListInferenceModelsRequest) GetPageSize() int32 { + if x != nil { + return x.PageSize + } + return 0 +} + +func (x *ListInferenceModelsRequest) GetPageToken() string { + if x != nil { + return x.PageToken + } + return "" +} + +// ListInferenceModelsResponse is the response for listing InferenceModels. +type ListInferenceModelsResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Models is the page of InferenceModels that matched the request. + Models []*InferenceModel `protobuf:"bytes,1,rep,name=models,proto3" json:"models,omitempty"` + // NextPageToken is the token to retrieve the next page of results, or empty + // if there are no more results in the list. 
+ NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"`
+ unknownFields protoimpl.UnknownFields
+ sizeCache protoimpl.SizeCache
+}
+
+func (x *ListInferenceModelsResponse) Reset() {
+ *x = ListInferenceModelsResponse{}
+ mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[11]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+}
+
+func (x *ListInferenceModelsResponse) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*ListInferenceModelsResponse) ProtoMessage() {}
+
+func (x *ListInferenceModelsResponse) ProtoReflect() protoreflect.Message {
+ mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[11]
+ if x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use ListInferenceModelsResponse.ProtoReflect.Descriptor instead.
+func (*ListInferenceModelsResponse) Descriptor() ([]byte, []int) {
+ return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{11}
+}
+
+func (x *ListInferenceModelsResponse) GetModels() []*InferenceModel {
+ if x != nil {
+ return x.Models
+ }
+ return nil
+}
+
+func (x *ListInferenceModelsResponse) GetNextPageToken() string {
+ if x != nil {
+ return x.NextPageToken
+ }
+ return ""
+}
+
+// CreateInferenceSecretRequest is a request for creating an InferenceSecret.
+type CreateInferenceSecretRequest struct {
+ state protoimpl.MessageState `protogen:"open.v1"`
+ // Secret is the InferenceSecret resource to create.
+ Secret *InferenceSecret `protobuf:"bytes,1,opt,name=secret,proto3" json:"secret,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *CreateInferenceSecretRequest) Reset() { + *x = CreateInferenceSecretRequest{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[12] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *CreateInferenceSecretRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*CreateInferenceSecretRequest) ProtoMessage() {} + +func (x *CreateInferenceSecretRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[12] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use CreateInferenceSecretRequest.ProtoReflect.Descriptor instead. +func (*CreateInferenceSecretRequest) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{12} +} + +func (x *CreateInferenceSecretRequest) GetSecret() *InferenceSecret { + if x != nil { + return x.Secret + } + return nil +} + +// CreateInferenceSecretResponse is a response to creating an InferenceSecret. +type CreateInferenceSecretResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Secret is the InferenceSecret resource that was created. 
+ Secret *InferenceSecret `protobuf:"bytes,1,opt,name=secret,proto3" json:"secret,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *CreateInferenceSecretResponse) Reset() { + *x = CreateInferenceSecretResponse{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[13] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *CreateInferenceSecretResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*CreateInferenceSecretResponse) ProtoMessage() {} + +func (x *CreateInferenceSecretResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[13] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use CreateInferenceSecretResponse.ProtoReflect.Descriptor instead. +func (*CreateInferenceSecretResponse) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{13} +} + +func (x *CreateInferenceSecretResponse) GetSecret() *InferenceSecret { + if x != nil { + return x.Secret + } + return nil +} + +// GetInferenceSecretRequest is a request for retrieving an InferenceSecret. +type GetInferenceSecretRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Name is the resource name of the InferenceSecret to retrieve. 
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetInferenceSecretRequest) Reset() { + *x = GetInferenceSecretRequest{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[14] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetInferenceSecretRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetInferenceSecretRequest) ProtoMessage() {} + +func (x *GetInferenceSecretRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[14] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetInferenceSecretRequest.ProtoReflect.Descriptor instead. +func (*GetInferenceSecretRequest) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{14} +} + +func (x *GetInferenceSecretRequest) GetName() string { + if x != nil { + return x.Name + } + return "" +} + +// GetInferenceSecretResponse is a response to retrieving an InferenceSecret. +type GetInferenceSecretResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Secret is the InferenceSecret resource that was retrieved. 
+ Secret *InferenceSecret `protobuf:"bytes,1,opt,name=secret,proto3" json:"secret,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetInferenceSecretResponse) Reset() { + *x = GetInferenceSecretResponse{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[15] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetInferenceSecretResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetInferenceSecretResponse) ProtoMessage() {} + +func (x *GetInferenceSecretResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[15] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetInferenceSecretResponse.ProtoReflect.Descriptor instead. +func (*GetInferenceSecretResponse) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{15} +} + +func (x *GetInferenceSecretResponse) GetSecret() *InferenceSecret { + if x != nil { + return x.Secret + } + return nil +} + +// UpdateInferenceSecretRequest is a request for updating an InferenceSecret. +type UpdateInferenceSecretRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Secret is the InferenceSecret resource to update. 
+ Secret *InferenceSecret `protobuf:"bytes,1,opt,name=secret,proto3" json:"secret,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UpdateInferenceSecretRequest) Reset() { + *x = UpdateInferenceSecretRequest{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[16] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UpdateInferenceSecretRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UpdateInferenceSecretRequest) ProtoMessage() {} + +func (x *UpdateInferenceSecretRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[16] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UpdateInferenceSecretRequest.ProtoReflect.Descriptor instead. +func (*UpdateInferenceSecretRequest) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{16} +} + +func (x *UpdateInferenceSecretRequest) GetSecret() *InferenceSecret { + if x != nil { + return x.Secret + } + return nil +} + +// UpdateInferenceSecretResponse is a response to updating an InferenceSecret. +type UpdateInferenceSecretResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Secret is the InferenceSecret resource that was updated. 
+ Secret *InferenceSecret `protobuf:"bytes,1,opt,name=secret,proto3" json:"secret,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UpdateInferenceSecretResponse) Reset() { + *x = UpdateInferenceSecretResponse{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[17] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UpdateInferenceSecretResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UpdateInferenceSecretResponse) ProtoMessage() {} + +func (x *UpdateInferenceSecretResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[17] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UpdateInferenceSecretResponse.ProtoReflect.Descriptor instead. +func (*UpdateInferenceSecretResponse) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{17} +} + +func (x *UpdateInferenceSecretResponse) GetSecret() *InferenceSecret { + if x != nil { + return x.Secret + } + return nil +} + +// UpsertInferenceSecretRequest is a request for creating or updating an +// InferenceSecret. +type UpsertInferenceSecretRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Secret is the InferenceSecret resource to create or update. 
+ Secret *InferenceSecret `protobuf:"bytes,1,opt,name=secret,proto3" json:"secret,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UpsertInferenceSecretRequest) Reset() { + *x = UpsertInferenceSecretRequest{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[18] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UpsertInferenceSecretRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UpsertInferenceSecretRequest) ProtoMessage() {} + +func (x *UpsertInferenceSecretRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[18] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UpsertInferenceSecretRequest.ProtoReflect.Descriptor instead. +func (*UpsertInferenceSecretRequest) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{18} +} + +func (x *UpsertInferenceSecretRequest) GetSecret() *InferenceSecret { + if x != nil { + return x.Secret + } + return nil +} + +// UpsertInferenceSecretResponse is a response to creating or updating an +// InferenceSecret. +type UpsertInferenceSecretResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Secret is the InferenceSecret resource that was created or updated. 
+ Secret *InferenceSecret `protobuf:"bytes,1,opt,name=secret,proto3" json:"secret,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UpsertInferenceSecretResponse) Reset() { + *x = UpsertInferenceSecretResponse{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[19] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UpsertInferenceSecretResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UpsertInferenceSecretResponse) ProtoMessage() {} + +func (x *UpsertInferenceSecretResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[19] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UpsertInferenceSecretResponse.ProtoReflect.Descriptor instead. +func (*UpsertInferenceSecretResponse) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{19} +} + +func (x *UpsertInferenceSecretResponse) GetSecret() *InferenceSecret { + if x != nil { + return x.Secret + } + return nil +} + +// DeleteInferenceSecretRequest is a request for deleting an InferenceSecret. +type DeleteInferenceSecretRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Name is the resource name of the InferenceSecret to delete. 
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *DeleteInferenceSecretRequest) Reset() { + *x = DeleteInferenceSecretRequest{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[20] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *DeleteInferenceSecretRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*DeleteInferenceSecretRequest) ProtoMessage() {} + +func (x *DeleteInferenceSecretRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[20] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use DeleteInferenceSecretRequest.ProtoReflect.Descriptor instead. +func (*DeleteInferenceSecretRequest) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{20} +} + +func (x *DeleteInferenceSecretRequest) GetName() string { + if x != nil { + return x.Name + } + return "" +} + +// DeleteInferenceSecretResponse is a response to deleting an InferenceSecret. 
+type DeleteInferenceSecretResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *DeleteInferenceSecretResponse) Reset() { + *x = DeleteInferenceSecretResponse{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[21] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *DeleteInferenceSecretResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*DeleteInferenceSecretResponse) ProtoMessage() {} + +func (x *DeleteInferenceSecretResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[21] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use DeleteInferenceSecretResponse.ProtoReflect.Descriptor instead. +func (*DeleteInferenceSecretResponse) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{21} +} + +// ListInferenceSecretsRequest is a request for listing InferenceSecrets. +type ListInferenceSecretsRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // PageSize is the maximum number of items to return. The server may use a + // different page size at its discretion. + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // PageToken is the next_page_token value returned from a previous List + // request, if any. 
+ PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListInferenceSecretsRequest) Reset() { + *x = ListInferenceSecretsRequest{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[22] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListInferenceSecretsRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListInferenceSecretsRequest) ProtoMessage() {} + +func (x *ListInferenceSecretsRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[22] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListInferenceSecretsRequest.ProtoReflect.Descriptor instead. +func (*ListInferenceSecretsRequest) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{22} +} + +func (x *ListInferenceSecretsRequest) GetPageSize() int32 { + if x != nil { + return x.PageSize + } + return 0 +} + +func (x *ListInferenceSecretsRequest) GetPageToken() string { + if x != nil { + return x.PageToken + } + return "" +} + +// ListInferenceSecretsResponse is the response for listing InferenceSecrets. +type ListInferenceSecretsResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Secrets is the page of InferenceSecrets that matched the + // request. + Secrets []*InferenceSecret `protobuf:"bytes,1,rep,name=secrets,proto3" json:"secrets,omitempty"` + // NextPageToken is the token to retrieve the next page of results, or empty + // if there are no more results in the list. 
+ NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListInferenceSecretsResponse) Reset() { + *x = ListInferenceSecretsResponse{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[23] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListInferenceSecretsResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListInferenceSecretsResponse) ProtoMessage() {} + +func (x *ListInferenceSecretsResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[23] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListInferenceSecretsResponse.ProtoReflect.Descriptor instead. +func (*ListInferenceSecretsResponse) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{23} +} + +func (x *ListInferenceSecretsResponse) GetSecrets() []*InferenceSecret { + if x != nil { + return x.Secrets + } + return nil +} + +func (x *ListInferenceSecretsResponse) GetNextPageToken() string { + if x != nil { + return x.NextPageToken + } + return "" +} + +// CreateInferencePolicyRequest is a request for creating an InferencePolicy. +type CreateInferencePolicyRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Policy is the InferencePolicy resource to create. 
+ Policy *InferencePolicy `protobuf:"bytes,1,opt,name=policy,proto3" json:"policy,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *CreateInferencePolicyRequest) Reset() { + *x = CreateInferencePolicyRequest{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[24] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *CreateInferencePolicyRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*CreateInferencePolicyRequest) ProtoMessage() {} + +func (x *CreateInferencePolicyRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[24] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use CreateInferencePolicyRequest.ProtoReflect.Descriptor instead. +func (*CreateInferencePolicyRequest) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{24} +} + +func (x *CreateInferencePolicyRequest) GetPolicy() *InferencePolicy { + if x != nil { + return x.Policy + } + return nil +} + +// CreateInferencePolicyResponse is a response to creating an InferencePolicy. +type CreateInferencePolicyResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Policy is the InferencePolicy resource that was created. 
+ Policy *InferencePolicy `protobuf:"bytes,1,opt,name=policy,proto3" json:"policy,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *CreateInferencePolicyResponse) Reset() { + *x = CreateInferencePolicyResponse{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[25] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *CreateInferencePolicyResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*CreateInferencePolicyResponse) ProtoMessage() {} + +func (x *CreateInferencePolicyResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[25] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use CreateInferencePolicyResponse.ProtoReflect.Descriptor instead. +func (*CreateInferencePolicyResponse) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{25} +} + +func (x *CreateInferencePolicyResponse) GetPolicy() *InferencePolicy { + if x != nil { + return x.Policy + } + return nil +} + +// GetInferencePolicyRequest is a request for retrieving an InferencePolicy. +type GetInferencePolicyRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Name is the resource name of the InferencePolicy to retrieve. 
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetInferencePolicyRequest) Reset() { + *x = GetInferencePolicyRequest{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[26] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetInferencePolicyRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetInferencePolicyRequest) ProtoMessage() {} + +func (x *GetInferencePolicyRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[26] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetInferencePolicyRequest.ProtoReflect.Descriptor instead. +func (*GetInferencePolicyRequest) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{26} +} + +func (x *GetInferencePolicyRequest) GetName() string { + if x != nil { + return x.Name + } + return "" +} + +// GetInferencePolicyResponse is a response to retrieving an InferencePolicy. +type GetInferencePolicyResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Policy is the InferencePolicy resource that was retrieved. 
+ Policy *InferencePolicy `protobuf:"bytes,1,opt,name=policy,proto3" json:"policy,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetInferencePolicyResponse) Reset() { + *x = GetInferencePolicyResponse{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[27] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetInferencePolicyResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetInferencePolicyResponse) ProtoMessage() {} + +func (x *GetInferencePolicyResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[27] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetInferencePolicyResponse.ProtoReflect.Descriptor instead. +func (*GetInferencePolicyResponse) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{27} +} + +func (x *GetInferencePolicyResponse) GetPolicy() *InferencePolicy { + if x != nil { + return x.Policy + } + return nil +} + +// UpdateInferencePolicyRequest is a request for updating an InferencePolicy. +type UpdateInferencePolicyRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Policy is the InferencePolicy resource to update. 
+ Policy *InferencePolicy `protobuf:"bytes,1,opt,name=policy,proto3" json:"policy,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UpdateInferencePolicyRequest) Reset() { + *x = UpdateInferencePolicyRequest{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[28] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UpdateInferencePolicyRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UpdateInferencePolicyRequest) ProtoMessage() {} + +func (x *UpdateInferencePolicyRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[28] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UpdateInferencePolicyRequest.ProtoReflect.Descriptor instead. +func (*UpdateInferencePolicyRequest) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{28} +} + +func (x *UpdateInferencePolicyRequest) GetPolicy() *InferencePolicy { + if x != nil { + return x.Policy + } + return nil +} + +// UpdateInferencePolicyResponse is a response to updating an InferencePolicy. +type UpdateInferencePolicyResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Policy is the InferencePolicy resource that was updated. 
+ Policy *InferencePolicy `protobuf:"bytes,1,opt,name=policy,proto3" json:"policy,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UpdateInferencePolicyResponse) Reset() { + *x = UpdateInferencePolicyResponse{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[29] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UpdateInferencePolicyResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UpdateInferencePolicyResponse) ProtoMessage() {} + +func (x *UpdateInferencePolicyResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[29] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UpdateInferencePolicyResponse.ProtoReflect.Descriptor instead. +func (*UpdateInferencePolicyResponse) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{29} +} + +func (x *UpdateInferencePolicyResponse) GetPolicy() *InferencePolicy { + if x != nil { + return x.Policy + } + return nil +} + +// UpsertInferencePolicyRequest is a request for creating or updating an +// InferencePolicy. +type UpsertInferencePolicyRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Policy is the InferencePolicy resource to create or update. 
+ Policy *InferencePolicy `protobuf:"bytes,1,opt,name=policy,proto3" json:"policy,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UpsertInferencePolicyRequest) Reset() { + *x = UpsertInferencePolicyRequest{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[30] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UpsertInferencePolicyRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UpsertInferencePolicyRequest) ProtoMessage() {} + +func (x *UpsertInferencePolicyRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[30] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UpsertInferencePolicyRequest.ProtoReflect.Descriptor instead. +func (*UpsertInferencePolicyRequest) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{30} +} + +func (x *UpsertInferencePolicyRequest) GetPolicy() *InferencePolicy { + if x != nil { + return x.Policy + } + return nil +} + +// UpsertInferencePolicyResponse is a response to creating or updating an +// InferencePolicy. +type UpsertInferencePolicyResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Policy is the InferencePolicy resource that was created or updated. 
+ Policy *InferencePolicy `protobuf:"bytes,1,opt,name=policy,proto3" json:"policy,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *UpsertInferencePolicyResponse) Reset() { + *x = UpsertInferencePolicyResponse{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[31] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *UpsertInferencePolicyResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*UpsertInferencePolicyResponse) ProtoMessage() {} + +func (x *UpsertInferencePolicyResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[31] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use UpsertInferencePolicyResponse.ProtoReflect.Descriptor instead. +func (*UpsertInferencePolicyResponse) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{31} +} + +func (x *UpsertInferencePolicyResponse) GetPolicy() *InferencePolicy { + if x != nil { + return x.Policy + } + return nil +} + +// DeleteInferencePolicyRequest is a request for deleting an InferencePolicy. +type DeleteInferencePolicyRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Name is the resource name of the InferencePolicy to delete. 
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *DeleteInferencePolicyRequest) Reset() { + *x = DeleteInferencePolicyRequest{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[32] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *DeleteInferencePolicyRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*DeleteInferencePolicyRequest) ProtoMessage() {} + +func (x *DeleteInferencePolicyRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[32] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use DeleteInferencePolicyRequest.ProtoReflect.Descriptor instead. +func (*DeleteInferencePolicyRequest) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{32} +} + +func (x *DeleteInferencePolicyRequest) GetName() string { + if x != nil { + return x.Name + } + return "" +} + +// DeleteInferencePolicyResponse is a response to deleting an InferencePolicy. 
+type DeleteInferencePolicyResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *DeleteInferencePolicyResponse) Reset() { + *x = DeleteInferencePolicyResponse{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[33] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *DeleteInferencePolicyResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*DeleteInferencePolicyResponse) ProtoMessage() {} + +func (x *DeleteInferencePolicyResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[33] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use DeleteInferencePolicyResponse.ProtoReflect.Descriptor instead. +func (*DeleteInferencePolicyResponse) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{33} +} + +// ListInferencePoliciesRequest is a request for listing InferencePolicies. +type ListInferencePoliciesRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // PageSize is the maximum number of items to return. The server may use a + // different page size at its discretion. + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // PageToken is the next_page_token value returned from a previous List + // request, if any. 
+ PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListInferencePoliciesRequest) Reset() { + *x = ListInferencePoliciesRequest{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[34] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListInferencePoliciesRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListInferencePoliciesRequest) ProtoMessage() {} + +func (x *ListInferencePoliciesRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[34] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListInferencePoliciesRequest.ProtoReflect.Descriptor instead. +func (*ListInferencePoliciesRequest) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{34} +} + +func (x *ListInferencePoliciesRequest) GetPageSize() int32 { + if x != nil { + return x.PageSize + } + return 0 +} + +func (x *ListInferencePoliciesRequest) GetPageToken() string { + if x != nil { + return x.PageToken + } + return "" +} + +// ListInferencePoliciesResponse is the response for listing InferencePolicies. +type ListInferencePoliciesResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Policies is the page of InferencePolicies that matched the + // request. + Policies []*InferencePolicy `protobuf:"bytes,1,rep,name=policies,proto3" json:"policies,omitempty"` + // NextPageToken is the token to retrieve the next page of results, or empty + // if there are no more results in the list. 
+ NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListInferencePoliciesResponse) Reset() { + *x = ListInferencePoliciesResponse{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[35] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListInferencePoliciesResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListInferencePoliciesResponse) ProtoMessage() {} + +func (x *ListInferencePoliciesResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[35] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListInferencePoliciesResponse.ProtoReflect.Descriptor instead. +func (*ListInferencePoliciesResponse) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{35} +} + +func (x *ListInferencePoliciesResponse) GetPolicies() []*InferencePolicy { + if x != nil { + return x.Policies + } + return nil +} + +func (x *ListInferencePoliciesResponse) GetNextPageToken() string { + if x != nil { + return x.NextPageToken + } + return "" +} + +// GetSummaryRequest is a request for retrieving a session recording summary. +type GetSummaryRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // SessionId is the ID of the session whose summary is being requested. 
+ SessionId string `protobuf:"bytes,1,opt,name=session_id,json=sessionId,proto3" json:"session_id,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetSummaryRequest) Reset() { + *x = GetSummaryRequest{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[36] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetSummaryRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetSummaryRequest) ProtoMessage() {} + +func (x *GetSummaryRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[36] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetSummaryRequest.ProtoReflect.Descriptor instead. +func (*GetSummaryRequest) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{36} +} + +func (x *GetSummaryRequest) GetSessionId() string { + if x != nil { + return x.SessionId + } + return "" +} + +// GetSummaryResponse is a response to retrieving a session recording summary. +type GetSummaryResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Summary is the session recording summary. 
+ Summary *Summary `protobuf:"bytes,1,opt,name=summary,proto3" json:"summary,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *GetSummaryResponse) Reset() { + *x = GetSummaryResponse{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[37] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *GetSummaryResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*GetSummaryResponse) ProtoMessage() {} + +func (x *GetSummaryResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[37] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use GetSummaryResponse.ProtoReflect.Descriptor instead. +func (*GetSummaryResponse) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{37} +} + +func (x *GetSummaryResponse) GetSummary() *Summary { + if x != nil { + return x.Summary + } + return nil +} + +// IsEnabledRequest is a request to tell if the summarizer is enabled. 
+type IsEnabledRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *IsEnabledRequest) Reset() { + *x = IsEnabledRequest{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[38] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *IsEnabledRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*IsEnabledRequest) ProtoMessage() {} + +func (x *IsEnabledRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[38] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use IsEnabledRequest.ProtoReflect.Descriptor instead. +func (*IsEnabledRequest) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{38} +} + +// IsEnabledResponse is a response that reports whether the summarizer is enabled. +type IsEnabledResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // Enabled reports whether the summarizer is enabled by the license and + // configured. (Note that this says nothing about the actual correctness of + // the configuration or the presence of recording summaries.) 
+ Enabled bool `protobuf:"varint,1,opt,name=enabled,proto3" json:"enabled,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *IsEnabledResponse) Reset() { + *x = IsEnabledResponse{} + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[39] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *IsEnabledResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*IsEnabledResponse) ProtoMessage() {} + +func (x *IsEnabledResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_summarizer_v1_summarizer_service_proto_msgTypes[39] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use IsEnabledResponse.ProtoReflect.Descriptor instead. +func (*IsEnabledResponse) Descriptor() ([]byte, []int) { + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP(), []int{39} +} + +func (x *IsEnabledResponse) GetEnabled() bool { + if x != nil { + return x.Enabled + } + return false +} + +var File_teleport_summarizer_v1_summarizer_service_proto protoreflect.FileDescriptor + +const file_teleport_summarizer_v1_summarizer_service_proto_rawDesc = "" + + "\n" + + "/teleport/summarizer/v1/summarizer_service.proto\x12\x16teleport.summarizer.v1\x1a'teleport/summarizer/v1/summarizer.proto\"[\n" + + "\x1bCreateInferenceModelRequest\x12<\n" + + "\x05model\x18\x01 \x01(\v2&.teleport.summarizer.v1.InferenceModelR\x05model\"\\\n" + + "\x1cCreateInferenceModelResponse\x12<\n" + + "\x05model\x18\x01 \x01(\v2&.teleport.summarizer.v1.InferenceModelR\x05model\".\n" + + "\x18GetInferenceModelRequest\x12\x12\n" + + "\x04name\x18\x01 \x01(\tR\x04name\"Y\n" + + "\x19GetInferenceModelResponse\x12<\n" + + "\x05model\x18\x01 \x01(\v2&.teleport.summarizer.v1.InferenceModelR\x05model\"[\n" + + 
"\x1bUpdateInferenceModelRequest\x12<\n" + + "\x05model\x18\x01 \x01(\v2&.teleport.summarizer.v1.InferenceModelR\x05model\"\\\n" + + "\x1cUpdateInferenceModelResponse\x12<\n" + + "\x05model\x18\x01 \x01(\v2&.teleport.summarizer.v1.InferenceModelR\x05model\"[\n" + + "\x1bUpsertInferenceModelRequest\x12<\n" + + "\x05model\x18\x01 \x01(\v2&.teleport.summarizer.v1.InferenceModelR\x05model\"\\\n" + + "\x1cUpsertInferenceModelResponse\x12<\n" + + "\x05model\x18\x01 \x01(\v2&.teleport.summarizer.v1.InferenceModelR\x05model\"1\n" + + "\x1bDeleteInferenceModelRequest\x12\x12\n" + + "\x04name\x18\x01 \x01(\tR\x04name\"\x1e\n" + + "\x1cDeleteInferenceModelResponse\"X\n" + + "\x1aListInferenceModelsRequest\x12\x1b\n" + + "\tpage_size\x18\x01 \x01(\x05R\bpageSize\x12\x1d\n" + + "\n" + + "page_token\x18\x02 \x01(\tR\tpageToken\"\x85\x01\n" + + "\x1bListInferenceModelsResponse\x12>\n" + + "\x06models\x18\x01 \x03(\v2&.teleport.summarizer.v1.InferenceModelR\x06models\x12&\n" + + "\x0fnext_page_token\x18\x02 \x01(\tR\rnextPageToken\"_\n" + + "\x1cCreateInferenceSecretRequest\x12?\n" + + "\x06secret\x18\x01 \x01(\v2'.teleport.summarizer.v1.InferenceSecretR\x06secret\"`\n" + + "\x1dCreateInferenceSecretResponse\x12?\n" + + "\x06secret\x18\x01 \x01(\v2'.teleport.summarizer.v1.InferenceSecretR\x06secret\"/\n" + + "\x19GetInferenceSecretRequest\x12\x12\n" + + "\x04name\x18\x01 \x01(\tR\x04name\"]\n" + + "\x1aGetInferenceSecretResponse\x12?\n" + + "\x06secret\x18\x01 \x01(\v2'.teleport.summarizer.v1.InferenceSecretR\x06secret\"_\n" + + "\x1cUpdateInferenceSecretRequest\x12?\n" + + "\x06secret\x18\x01 \x01(\v2'.teleport.summarizer.v1.InferenceSecretR\x06secret\"`\n" + + "\x1dUpdateInferenceSecretResponse\x12?\n" + + "\x06secret\x18\x01 \x01(\v2'.teleport.summarizer.v1.InferenceSecretR\x06secret\"_\n" + + "\x1cUpsertInferenceSecretRequest\x12?\n" + + "\x06secret\x18\x01 \x01(\v2'.teleport.summarizer.v1.InferenceSecretR\x06secret\"`\n" + + "\x1dUpsertInferenceSecretResponse\x12?\n" + + 
"\x06secret\x18\x01 \x01(\v2'.teleport.summarizer.v1.InferenceSecretR\x06secret\"2\n" + + "\x1cDeleteInferenceSecretRequest\x12\x12\n" + + "\x04name\x18\x01 \x01(\tR\x04name\"\x1f\n" + + "\x1dDeleteInferenceSecretResponse\"Y\n" + + "\x1bListInferenceSecretsRequest\x12\x1b\n" + + "\tpage_size\x18\x01 \x01(\x05R\bpageSize\x12\x1d\n" + + "\n" + + "page_token\x18\x02 \x01(\tR\tpageToken\"\x89\x01\n" + + "\x1cListInferenceSecretsResponse\x12A\n" + + "\asecrets\x18\x01 \x03(\v2'.teleport.summarizer.v1.InferenceSecretR\asecrets\x12&\n" + + "\x0fnext_page_token\x18\x02 \x01(\tR\rnextPageToken\"_\n" + + "\x1cCreateInferencePolicyRequest\x12?\n" + + "\x06policy\x18\x01 \x01(\v2'.teleport.summarizer.v1.InferencePolicyR\x06policy\"`\n" + + "\x1dCreateInferencePolicyResponse\x12?\n" + + "\x06policy\x18\x01 \x01(\v2'.teleport.summarizer.v1.InferencePolicyR\x06policy\"/\n" + + "\x19GetInferencePolicyRequest\x12\x12\n" + + "\x04name\x18\x01 \x01(\tR\x04name\"]\n" + + "\x1aGetInferencePolicyResponse\x12?\n" + + "\x06policy\x18\x01 \x01(\v2'.teleport.summarizer.v1.InferencePolicyR\x06policy\"_\n" + + "\x1cUpdateInferencePolicyRequest\x12?\n" + + "\x06policy\x18\x01 \x01(\v2'.teleport.summarizer.v1.InferencePolicyR\x06policy\"`\n" + + "\x1dUpdateInferencePolicyResponse\x12?\n" + + "\x06policy\x18\x01 \x01(\v2'.teleport.summarizer.v1.InferencePolicyR\x06policy\"_\n" + + "\x1cUpsertInferencePolicyRequest\x12?\n" + + "\x06policy\x18\x01 \x01(\v2'.teleport.summarizer.v1.InferencePolicyR\x06policy\"`\n" + + "\x1dUpsertInferencePolicyResponse\x12?\n" + + "\x06policy\x18\x01 \x01(\v2'.teleport.summarizer.v1.InferencePolicyR\x06policy\"2\n" + + "\x1cDeleteInferencePolicyRequest\x12\x12\n" + + "\x04name\x18\x01 \x01(\tR\x04name\"\x1f\n" + + "\x1dDeleteInferencePolicyResponse\"Z\n" + + "\x1cListInferencePoliciesRequest\x12\x1b\n" + + "\tpage_size\x18\x01 \x01(\x05R\bpageSize\x12\x1d\n" + + "\n" + + "page_token\x18\x02 \x01(\tR\tpageToken\"\x8c\x01\n" + + 
"\x1dListInferencePoliciesResponse\x12C\n" + + "\bpolicies\x18\x01 \x03(\v2'.teleport.summarizer.v1.InferencePolicyR\bpolicies\x12&\n" + + "\x0fnext_page_token\x18\x02 \x01(\tR\rnextPageToken\"2\n" + + "\x11GetSummaryRequest\x12\x1d\n" + + "\n" + + "session_id\x18\x01 \x01(\tR\tsessionId\"O\n" + + "\x12GetSummaryResponse\x129\n" + + "\asummary\x18\x01 \x01(\v2\x1f.teleport.summarizer.v1.SummaryR\asummary\"\x12\n" + + "\x10IsEnabledRequest\"-\n" + + "\x11IsEnabledResponse\x12\x18\n" + + "\aenabled\x18\x01 \x01(\bR\aenabled2\xa1\x14\n" + + "\x11SummarizerService\x12\x81\x01\n" + + "\x14CreateInferenceModel\x123.teleport.summarizer.v1.CreateInferenceModelRequest\x1a4.teleport.summarizer.v1.CreateInferenceModelResponse\x12x\n" + + "\x11GetInferenceModel\x120.teleport.summarizer.v1.GetInferenceModelRequest\x1a1.teleport.summarizer.v1.GetInferenceModelResponse\x12\x81\x01\n" + + "\x14UpdateInferenceModel\x123.teleport.summarizer.v1.UpdateInferenceModelRequest\x1a4.teleport.summarizer.v1.UpdateInferenceModelResponse\x12\x81\x01\n" + + "\x14UpsertInferenceModel\x123.teleport.summarizer.v1.UpsertInferenceModelRequest\x1a4.teleport.summarizer.v1.UpsertInferenceModelResponse\x12\x81\x01\n" + + "\x14DeleteInferenceModel\x123.teleport.summarizer.v1.DeleteInferenceModelRequest\x1a4.teleport.summarizer.v1.DeleteInferenceModelResponse\x12~\n" + + "\x13ListInferenceModels\x122.teleport.summarizer.v1.ListInferenceModelsRequest\x1a3.teleport.summarizer.v1.ListInferenceModelsResponse\x12\x84\x01\n" + + "\x15CreateInferenceSecret\x124.teleport.summarizer.v1.CreateInferenceSecretRequest\x1a5.teleport.summarizer.v1.CreateInferenceSecretResponse\x12{\n" + + "\x12GetInferenceSecret\x121.teleport.summarizer.v1.GetInferenceSecretRequest\x1a2.teleport.summarizer.v1.GetInferenceSecretResponse\x12\x84\x01\n" + + "\x15UpdateInferenceSecret\x124.teleport.summarizer.v1.UpdateInferenceSecretRequest\x1a5.teleport.summarizer.v1.UpdateInferenceSecretResponse\x12\x84\x01\n" + + 
"\x15UpsertInferenceSecret\x124.teleport.summarizer.v1.UpsertInferenceSecretRequest\x1a5.teleport.summarizer.v1.UpsertInferenceSecretResponse\x12\x84\x01\n" + + "\x15DeleteInferenceSecret\x124.teleport.summarizer.v1.DeleteInferenceSecretRequest\x1a5.teleport.summarizer.v1.DeleteInferenceSecretResponse\x12\x81\x01\n" + + "\x14ListInferenceSecrets\x123.teleport.summarizer.v1.ListInferenceSecretsRequest\x1a4.teleport.summarizer.v1.ListInferenceSecretsResponse\x12\x84\x01\n" + + "\x15CreateInferencePolicy\x124.teleport.summarizer.v1.CreateInferencePolicyRequest\x1a5.teleport.summarizer.v1.CreateInferencePolicyResponse\x12{\n" + + "\x12GetInferencePolicy\x121.teleport.summarizer.v1.GetInferencePolicyRequest\x1a2.teleport.summarizer.v1.GetInferencePolicyResponse\x12\x84\x01\n" + + "\x15UpdateInferencePolicy\x124.teleport.summarizer.v1.UpdateInferencePolicyRequest\x1a5.teleport.summarizer.v1.UpdateInferencePolicyResponse\x12\x84\x01\n" + + "\x15UpsertInferencePolicy\x124.teleport.summarizer.v1.UpsertInferencePolicyRequest\x1a5.teleport.summarizer.v1.UpsertInferencePolicyResponse\x12\x84\x01\n" + + "\x15DeleteInferencePolicy\x124.teleport.summarizer.v1.DeleteInferencePolicyRequest\x1a5.teleport.summarizer.v1.DeleteInferencePolicyResponse\x12\x84\x01\n" + + "\x15ListInferencePolicies\x124.teleport.summarizer.v1.ListInferencePoliciesRequest\x1a5.teleport.summarizer.v1.ListInferencePoliciesResponse\x12c\n" + + "\n" + + "GetSummary\x12).teleport.summarizer.v1.GetSummaryRequest\x1a*.teleport.summarizer.v1.GetSummaryResponse\x12`\n" + + "\tIsEnabled\x12(.teleport.summarizer.v1.IsEnabledRequest\x1a).teleport.summarizer.v1.IsEnabledResponseBXZVgithub.com/gravitational/teleport/api/gen/proto/go/teleport/summarizer/v1;summarizerv1b\x06proto3" + +var ( + file_teleport_summarizer_v1_summarizer_service_proto_rawDescOnce sync.Once + file_teleport_summarizer_v1_summarizer_service_proto_rawDescData []byte +) + +func file_teleport_summarizer_v1_summarizer_service_proto_rawDescGZIP() []byte 
{ + file_teleport_summarizer_v1_summarizer_service_proto_rawDescOnce.Do(func() { + file_teleport_summarizer_v1_summarizer_service_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_teleport_summarizer_v1_summarizer_service_proto_rawDesc), len(file_teleport_summarizer_v1_summarizer_service_proto_rawDesc))) + }) + return file_teleport_summarizer_v1_summarizer_service_proto_rawDescData +} + +var file_teleport_summarizer_v1_summarizer_service_proto_msgTypes = make([]protoimpl.MessageInfo, 40) +var file_teleport_summarizer_v1_summarizer_service_proto_goTypes = []any{ + (*CreateInferenceModelRequest)(nil), // 0: teleport.summarizer.v1.CreateInferenceModelRequest + (*CreateInferenceModelResponse)(nil), // 1: teleport.summarizer.v1.CreateInferenceModelResponse + (*GetInferenceModelRequest)(nil), // 2: teleport.summarizer.v1.GetInferenceModelRequest + (*GetInferenceModelResponse)(nil), // 3: teleport.summarizer.v1.GetInferenceModelResponse + (*UpdateInferenceModelRequest)(nil), // 4: teleport.summarizer.v1.UpdateInferenceModelRequest + (*UpdateInferenceModelResponse)(nil), // 5: teleport.summarizer.v1.UpdateInferenceModelResponse + (*UpsertInferenceModelRequest)(nil), // 6: teleport.summarizer.v1.UpsertInferenceModelRequest + (*UpsertInferenceModelResponse)(nil), // 7: teleport.summarizer.v1.UpsertInferenceModelResponse + (*DeleteInferenceModelRequest)(nil), // 8: teleport.summarizer.v1.DeleteInferenceModelRequest + (*DeleteInferenceModelResponse)(nil), // 9: teleport.summarizer.v1.DeleteInferenceModelResponse + (*ListInferenceModelsRequest)(nil), // 10: teleport.summarizer.v1.ListInferenceModelsRequest + (*ListInferenceModelsResponse)(nil), // 11: teleport.summarizer.v1.ListInferenceModelsResponse + (*CreateInferenceSecretRequest)(nil), // 12: teleport.summarizer.v1.CreateInferenceSecretRequest + (*CreateInferenceSecretResponse)(nil), // 13: teleport.summarizer.v1.CreateInferenceSecretResponse + (*GetInferenceSecretRequest)(nil), // 14: 
teleport.summarizer.v1.GetInferenceSecretRequest + (*GetInferenceSecretResponse)(nil), // 15: teleport.summarizer.v1.GetInferenceSecretResponse + (*UpdateInferenceSecretRequest)(nil), // 16: teleport.summarizer.v1.UpdateInferenceSecretRequest + (*UpdateInferenceSecretResponse)(nil), // 17: teleport.summarizer.v1.UpdateInferenceSecretResponse + (*UpsertInferenceSecretRequest)(nil), // 18: teleport.summarizer.v1.UpsertInferenceSecretRequest + (*UpsertInferenceSecretResponse)(nil), // 19: teleport.summarizer.v1.UpsertInferenceSecretResponse + (*DeleteInferenceSecretRequest)(nil), // 20: teleport.summarizer.v1.DeleteInferenceSecretRequest + (*DeleteInferenceSecretResponse)(nil), // 21: teleport.summarizer.v1.DeleteInferenceSecretResponse + (*ListInferenceSecretsRequest)(nil), // 22: teleport.summarizer.v1.ListInferenceSecretsRequest + (*ListInferenceSecretsResponse)(nil), // 23: teleport.summarizer.v1.ListInferenceSecretsResponse + (*CreateInferencePolicyRequest)(nil), // 24: teleport.summarizer.v1.CreateInferencePolicyRequest + (*CreateInferencePolicyResponse)(nil), // 25: teleport.summarizer.v1.CreateInferencePolicyResponse + (*GetInferencePolicyRequest)(nil), // 26: teleport.summarizer.v1.GetInferencePolicyRequest + (*GetInferencePolicyResponse)(nil), // 27: teleport.summarizer.v1.GetInferencePolicyResponse + (*UpdateInferencePolicyRequest)(nil), // 28: teleport.summarizer.v1.UpdateInferencePolicyRequest + (*UpdateInferencePolicyResponse)(nil), // 29: teleport.summarizer.v1.UpdateInferencePolicyResponse + (*UpsertInferencePolicyRequest)(nil), // 30: teleport.summarizer.v1.UpsertInferencePolicyRequest + (*UpsertInferencePolicyResponse)(nil), // 31: teleport.summarizer.v1.UpsertInferencePolicyResponse + (*DeleteInferencePolicyRequest)(nil), // 32: teleport.summarizer.v1.DeleteInferencePolicyRequest + (*DeleteInferencePolicyResponse)(nil), // 33: teleport.summarizer.v1.DeleteInferencePolicyResponse + (*ListInferencePoliciesRequest)(nil), // 34: 
teleport.summarizer.v1.ListInferencePoliciesRequest + (*ListInferencePoliciesResponse)(nil), // 35: teleport.summarizer.v1.ListInferencePoliciesResponse + (*GetSummaryRequest)(nil), // 36: teleport.summarizer.v1.GetSummaryRequest + (*GetSummaryResponse)(nil), // 37: teleport.summarizer.v1.GetSummaryResponse + (*IsEnabledRequest)(nil), // 38: teleport.summarizer.v1.IsEnabledRequest + (*IsEnabledResponse)(nil), // 39: teleport.summarizer.v1.IsEnabledResponse + (*InferenceModel)(nil), // 40: teleport.summarizer.v1.InferenceModel + (*InferenceSecret)(nil), // 41: teleport.summarizer.v1.InferenceSecret + (*InferencePolicy)(nil), // 42: teleport.summarizer.v1.InferencePolicy + (*Summary)(nil), // 43: teleport.summarizer.v1.Summary +} +var file_teleport_summarizer_v1_summarizer_service_proto_depIdxs = []int32{ + 40, // 0: teleport.summarizer.v1.CreateInferenceModelRequest.model:type_name -> teleport.summarizer.v1.InferenceModel + 40, // 1: teleport.summarizer.v1.CreateInferenceModelResponse.model:type_name -> teleport.summarizer.v1.InferenceModel + 40, // 2: teleport.summarizer.v1.GetInferenceModelResponse.model:type_name -> teleport.summarizer.v1.InferenceModel + 40, // 3: teleport.summarizer.v1.UpdateInferenceModelRequest.model:type_name -> teleport.summarizer.v1.InferenceModel + 40, // 4: teleport.summarizer.v1.UpdateInferenceModelResponse.model:type_name -> teleport.summarizer.v1.InferenceModel + 40, // 5: teleport.summarizer.v1.UpsertInferenceModelRequest.model:type_name -> teleport.summarizer.v1.InferenceModel + 40, // 6: teleport.summarizer.v1.UpsertInferenceModelResponse.model:type_name -> teleport.summarizer.v1.InferenceModel + 40, // 7: teleport.summarizer.v1.ListInferenceModelsResponse.models:type_name -> teleport.summarizer.v1.InferenceModel + 41, // 8: teleport.summarizer.v1.CreateInferenceSecretRequest.secret:type_name -> teleport.summarizer.v1.InferenceSecret + 41, // 9: teleport.summarizer.v1.CreateInferenceSecretResponse.secret:type_name -> 
teleport.summarizer.v1.InferenceSecret + 41, // 10: teleport.summarizer.v1.GetInferenceSecretResponse.secret:type_name -> teleport.summarizer.v1.InferenceSecret + 41, // 11: teleport.summarizer.v1.UpdateInferenceSecretRequest.secret:type_name -> teleport.summarizer.v1.InferenceSecret + 41, // 12: teleport.summarizer.v1.UpdateInferenceSecretResponse.secret:type_name -> teleport.summarizer.v1.InferenceSecret + 41, // 13: teleport.summarizer.v1.UpsertInferenceSecretRequest.secret:type_name -> teleport.summarizer.v1.InferenceSecret + 41, // 14: teleport.summarizer.v1.UpsertInferenceSecretResponse.secret:type_name -> teleport.summarizer.v1.InferenceSecret + 41, // 15: teleport.summarizer.v1.ListInferenceSecretsResponse.secrets:type_name -> teleport.summarizer.v1.InferenceSecret + 42, // 16: teleport.summarizer.v1.CreateInferencePolicyRequest.policy:type_name -> teleport.summarizer.v1.InferencePolicy + 42, // 17: teleport.summarizer.v1.CreateInferencePolicyResponse.policy:type_name -> teleport.summarizer.v1.InferencePolicy + 42, // 18: teleport.summarizer.v1.GetInferencePolicyResponse.policy:type_name -> teleport.summarizer.v1.InferencePolicy + 42, // 19: teleport.summarizer.v1.UpdateInferencePolicyRequest.policy:type_name -> teleport.summarizer.v1.InferencePolicy + 42, // 20: teleport.summarizer.v1.UpdateInferencePolicyResponse.policy:type_name -> teleport.summarizer.v1.InferencePolicy + 42, // 21: teleport.summarizer.v1.UpsertInferencePolicyRequest.policy:type_name -> teleport.summarizer.v1.InferencePolicy + 42, // 22: teleport.summarizer.v1.UpsertInferencePolicyResponse.policy:type_name -> teleport.summarizer.v1.InferencePolicy + 42, // 23: teleport.summarizer.v1.ListInferencePoliciesResponse.policies:type_name -> teleport.summarizer.v1.InferencePolicy + 43, // 24: teleport.summarizer.v1.GetSummaryResponse.summary:type_name -> teleport.summarizer.v1.Summary + 0, // 25: teleport.summarizer.v1.SummarizerService.CreateInferenceModel:input_type -> 
teleport.summarizer.v1.CreateInferenceModelRequest + 2, // 26: teleport.summarizer.v1.SummarizerService.GetInferenceModel:input_type -> teleport.summarizer.v1.GetInferenceModelRequest + 4, // 27: teleport.summarizer.v1.SummarizerService.UpdateInferenceModel:input_type -> teleport.summarizer.v1.UpdateInferenceModelRequest + 6, // 28: teleport.summarizer.v1.SummarizerService.UpsertInferenceModel:input_type -> teleport.summarizer.v1.UpsertInferenceModelRequest + 8, // 29: teleport.summarizer.v1.SummarizerService.DeleteInferenceModel:input_type -> teleport.summarizer.v1.DeleteInferenceModelRequest + 10, // 30: teleport.summarizer.v1.SummarizerService.ListInferenceModels:input_type -> teleport.summarizer.v1.ListInferenceModelsRequest + 12, // 31: teleport.summarizer.v1.SummarizerService.CreateInferenceSecret:input_type -> teleport.summarizer.v1.CreateInferenceSecretRequest + 14, // 32: teleport.summarizer.v1.SummarizerService.GetInferenceSecret:input_type -> teleport.summarizer.v1.GetInferenceSecretRequest + 16, // 33: teleport.summarizer.v1.SummarizerService.UpdateInferenceSecret:input_type -> teleport.summarizer.v1.UpdateInferenceSecretRequest + 18, // 34: teleport.summarizer.v1.SummarizerService.UpsertInferenceSecret:input_type -> teleport.summarizer.v1.UpsertInferenceSecretRequest + 20, // 35: teleport.summarizer.v1.SummarizerService.DeleteInferenceSecret:input_type -> teleport.summarizer.v1.DeleteInferenceSecretRequest + 22, // 36: teleport.summarizer.v1.SummarizerService.ListInferenceSecrets:input_type -> teleport.summarizer.v1.ListInferenceSecretsRequest + 24, // 37: teleport.summarizer.v1.SummarizerService.CreateInferencePolicy:input_type -> teleport.summarizer.v1.CreateInferencePolicyRequest + 26, // 38: teleport.summarizer.v1.SummarizerService.GetInferencePolicy:input_type -> teleport.summarizer.v1.GetInferencePolicyRequest + 28, // 39: teleport.summarizer.v1.SummarizerService.UpdateInferencePolicy:input_type -> 
teleport.summarizer.v1.UpdateInferencePolicyRequest + 30, // 40: teleport.summarizer.v1.SummarizerService.UpsertInferencePolicy:input_type -> teleport.summarizer.v1.UpsertInferencePolicyRequest + 32, // 41: teleport.summarizer.v1.SummarizerService.DeleteInferencePolicy:input_type -> teleport.summarizer.v1.DeleteInferencePolicyRequest + 34, // 42: teleport.summarizer.v1.SummarizerService.ListInferencePolicies:input_type -> teleport.summarizer.v1.ListInferencePoliciesRequest + 36, // 43: teleport.summarizer.v1.SummarizerService.GetSummary:input_type -> teleport.summarizer.v1.GetSummaryRequest + 38, // 44: teleport.summarizer.v1.SummarizerService.IsEnabled:input_type -> teleport.summarizer.v1.IsEnabledRequest + 1, // 45: teleport.summarizer.v1.SummarizerService.CreateInferenceModel:output_type -> teleport.summarizer.v1.CreateInferenceModelResponse + 3, // 46: teleport.summarizer.v1.SummarizerService.GetInferenceModel:output_type -> teleport.summarizer.v1.GetInferenceModelResponse + 5, // 47: teleport.summarizer.v1.SummarizerService.UpdateInferenceModel:output_type -> teleport.summarizer.v1.UpdateInferenceModelResponse + 7, // 48: teleport.summarizer.v1.SummarizerService.UpsertInferenceModel:output_type -> teleport.summarizer.v1.UpsertInferenceModelResponse + 9, // 49: teleport.summarizer.v1.SummarizerService.DeleteInferenceModel:output_type -> teleport.summarizer.v1.DeleteInferenceModelResponse + 11, // 50: teleport.summarizer.v1.SummarizerService.ListInferenceModels:output_type -> teleport.summarizer.v1.ListInferenceModelsResponse + 13, // 51: teleport.summarizer.v1.SummarizerService.CreateInferenceSecret:output_type -> teleport.summarizer.v1.CreateInferenceSecretResponse + 15, // 52: teleport.summarizer.v1.SummarizerService.GetInferenceSecret:output_type -> teleport.summarizer.v1.GetInferenceSecretResponse + 17, // 53: teleport.summarizer.v1.SummarizerService.UpdateInferenceSecret:output_type -> teleport.summarizer.v1.UpdateInferenceSecretResponse + 19, // 54: 
teleport.summarizer.v1.SummarizerService.UpsertInferenceSecret:output_type -> teleport.summarizer.v1.UpsertInferenceSecretResponse + 21, // 55: teleport.summarizer.v1.SummarizerService.DeleteInferenceSecret:output_type -> teleport.summarizer.v1.DeleteInferenceSecretResponse + 23, // 56: teleport.summarizer.v1.SummarizerService.ListInferenceSecrets:output_type -> teleport.summarizer.v1.ListInferenceSecretsResponse + 25, // 57: teleport.summarizer.v1.SummarizerService.CreateInferencePolicy:output_type -> teleport.summarizer.v1.CreateInferencePolicyResponse + 27, // 58: teleport.summarizer.v1.SummarizerService.GetInferencePolicy:output_type -> teleport.summarizer.v1.GetInferencePolicyResponse + 29, // 59: teleport.summarizer.v1.SummarizerService.UpdateInferencePolicy:output_type -> teleport.summarizer.v1.UpdateInferencePolicyResponse + 31, // 60: teleport.summarizer.v1.SummarizerService.UpsertInferencePolicy:output_type -> teleport.summarizer.v1.UpsertInferencePolicyResponse + 33, // 61: teleport.summarizer.v1.SummarizerService.DeleteInferencePolicy:output_type -> teleport.summarizer.v1.DeleteInferencePolicyResponse + 35, // 62: teleport.summarizer.v1.SummarizerService.ListInferencePolicies:output_type -> teleport.summarizer.v1.ListInferencePoliciesResponse + 37, // 63: teleport.summarizer.v1.SummarizerService.GetSummary:output_type -> teleport.summarizer.v1.GetSummaryResponse + 39, // 64: teleport.summarizer.v1.SummarizerService.IsEnabled:output_type -> teleport.summarizer.v1.IsEnabledResponse + 45, // [45:65] is the sub-list for method output_type + 25, // [25:45] is the sub-list for method input_type + 25, // [25:25] is the sub-list for extension type_name + 25, // [25:25] is the sub-list for extension extendee + 0, // [0:25] is the sub-list for field type_name +} + +func init() { file_teleport_summarizer_v1_summarizer_service_proto_init() } +func file_teleport_summarizer_v1_summarizer_service_proto_init() { + if File_teleport_summarizer_v1_summarizer_service_proto 
!= nil { + return + } + file_teleport_summarizer_v1_summarizer_proto_init() + type x struct{} + out := protoimpl.TypeBuilder{ + File: protoimpl.DescBuilder{ + GoPackagePath: reflect.TypeOf(x{}).PkgPath(), + RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_summarizer_v1_summarizer_service_proto_rawDesc), len(file_teleport_summarizer_v1_summarizer_service_proto_rawDesc)), + NumEnums: 0, + NumMessages: 40, + NumExtensions: 0, + NumServices: 1, + }, + GoTypes: file_teleport_summarizer_v1_summarizer_service_proto_goTypes, + DependencyIndexes: file_teleport_summarizer_v1_summarizer_service_proto_depIdxs, + MessageInfos: file_teleport_summarizer_v1_summarizer_service_proto_msgTypes, + }.Build() + File_teleport_summarizer_v1_summarizer_service_proto = out.File + file_teleport_summarizer_v1_summarizer_service_proto_goTypes = nil + file_teleport_summarizer_v1_summarizer_service_proto_depIdxs = nil +} diff --git a/api/gen/proto/go/teleport/summarizer/v1/summarizer_service_grpc.pb.go b/api/gen/proto/go/teleport/summarizer/v1/summarizer_service_grpc.pb.go new file mode 100644 index 0000000000000..9077f15e439ed --- /dev/null +++ b/api/gen/proto/go/teleport/summarizer/v1/summarizer_service_grpc.pb.go @@ -0,0 +1,917 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Code generated by protoc-gen-go-grpc. DO NOT EDIT. 
+// versions: +// - protoc-gen-go-grpc v1.5.1 +// - protoc (unknown) +// source: teleport/summarizer/v1/summarizer_service.proto + +package summarizerv1 + +import ( + context "context" + grpc "google.golang.org/grpc" + codes "google.golang.org/grpc/codes" + status "google.golang.org/grpc/status" +) + +// This is a compile-time assertion to ensure that this generated file +// is compatible with the grpc package it is being compiled against. +// Requires gRPC-Go v1.64.0 or later. +const _ = grpc.SupportPackageIsVersion9 + +const ( + SummarizerService_CreateInferenceModel_FullMethodName = "/teleport.summarizer.v1.SummarizerService/CreateInferenceModel" + SummarizerService_GetInferenceModel_FullMethodName = "/teleport.summarizer.v1.SummarizerService/GetInferenceModel" + SummarizerService_UpdateInferenceModel_FullMethodName = "/teleport.summarizer.v1.SummarizerService/UpdateInferenceModel" + SummarizerService_UpsertInferenceModel_FullMethodName = "/teleport.summarizer.v1.SummarizerService/UpsertInferenceModel" + SummarizerService_DeleteInferenceModel_FullMethodName = "/teleport.summarizer.v1.SummarizerService/DeleteInferenceModel" + SummarizerService_ListInferenceModels_FullMethodName = "/teleport.summarizer.v1.SummarizerService/ListInferenceModels" + SummarizerService_CreateInferenceSecret_FullMethodName = "/teleport.summarizer.v1.SummarizerService/CreateInferenceSecret" + SummarizerService_GetInferenceSecret_FullMethodName = "/teleport.summarizer.v1.SummarizerService/GetInferenceSecret" + SummarizerService_UpdateInferenceSecret_FullMethodName = "/teleport.summarizer.v1.SummarizerService/UpdateInferenceSecret" + SummarizerService_UpsertInferenceSecret_FullMethodName = "/teleport.summarizer.v1.SummarizerService/UpsertInferenceSecret" + SummarizerService_DeleteInferenceSecret_FullMethodName = "/teleport.summarizer.v1.SummarizerService/DeleteInferenceSecret" + SummarizerService_ListInferenceSecrets_FullMethodName = 
"/teleport.summarizer.v1.SummarizerService/ListInferenceSecrets" + SummarizerService_CreateInferencePolicy_FullMethodName = "/teleport.summarizer.v1.SummarizerService/CreateInferencePolicy" + SummarizerService_GetInferencePolicy_FullMethodName = "/teleport.summarizer.v1.SummarizerService/GetInferencePolicy" + SummarizerService_UpdateInferencePolicy_FullMethodName = "/teleport.summarizer.v1.SummarizerService/UpdateInferencePolicy" + SummarizerService_UpsertInferencePolicy_FullMethodName = "/teleport.summarizer.v1.SummarizerService/UpsertInferencePolicy" + SummarizerService_DeleteInferencePolicy_FullMethodName = "/teleport.summarizer.v1.SummarizerService/DeleteInferencePolicy" + SummarizerService_ListInferencePolicies_FullMethodName = "/teleport.summarizer.v1.SummarizerService/ListInferencePolicies" + SummarizerService_GetSummary_FullMethodName = "/teleport.summarizer.v1.SummarizerService/GetSummary" + SummarizerService_IsEnabled_FullMethodName = "/teleport.summarizer.v1.SummarizerService/IsEnabled" +) + +// SummarizerServiceClient is the client API for SummarizerService service. +// +// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream. +// +// SummarizerService is the service for managing summarization inference +// models, secrets, and policies. These objects configure the session +// recording summarizer. +type SummarizerServiceClient interface { + // CreateInferenceModel creates a new InferenceModel. + CreateInferenceModel(ctx context.Context, in *CreateInferenceModelRequest, opts ...grpc.CallOption) (*CreateInferenceModelResponse, error) + // GetInferenceModel retrieves an existing InferenceModel by name. + GetInferenceModel(ctx context.Context, in *GetInferenceModelRequest, opts ...grpc.CallOption) (*GetInferenceModelResponse, error) + // UpdateInferenceModel updates an existing InferenceModel. 
+ UpdateInferenceModel(ctx context.Context, in *UpdateInferenceModelRequest, opts ...grpc.CallOption) (*UpdateInferenceModelResponse, error) + // UpsertInferenceModel creates a new InferenceModel or updates an existing + // one. + UpsertInferenceModel(ctx context.Context, in *UpsertInferenceModelRequest, opts ...grpc.CallOption) (*UpsertInferenceModelResponse, error) + // DeleteInferenceModel deletes an existing InferenceModel by name. + DeleteInferenceModel(ctx context.Context, in *DeleteInferenceModelRequest, opts ...grpc.CallOption) (*DeleteInferenceModelResponse, error) + // ListInferenceModels lists all InferenceModels that match the request. + ListInferenceModels(ctx context.Context, in *ListInferenceModelsRequest, opts ...grpc.CallOption) (*ListInferenceModelsResponse, error) + // CreateInferenceSecret creates a new InferenceSecret. + CreateInferenceSecret(ctx context.Context, in *CreateInferenceSecretRequest, opts ...grpc.CallOption) (*CreateInferenceSecretResponse, error) + // GetInferenceSecret retrieves an existing InferenceSecret by name. + GetInferenceSecret(ctx context.Context, in *GetInferenceSecretRequest, opts ...grpc.CallOption) (*GetInferenceSecretResponse, error) + // UpdateInferenceSecret updates an existing InferenceSecret. + UpdateInferenceSecret(ctx context.Context, in *UpdateInferenceSecretRequest, opts ...grpc.CallOption) (*UpdateInferenceSecretResponse, error) + // UpsertInferenceSecret creates a new InferenceSecret or updates an existing + // one. + UpsertInferenceSecret(ctx context.Context, in *UpsertInferenceSecretRequest, opts ...grpc.CallOption) (*UpsertInferenceSecretResponse, error) + // DeleteInferenceSecret deletes an existing InferenceSecret by name. + DeleteInferenceSecret(ctx context.Context, in *DeleteInferenceSecretRequest, opts ...grpc.CallOption) (*DeleteInferenceSecretResponse, error) + // ListInferenceSecrets lists all InferenceSecrets that match the request. 
+ ListInferenceSecrets(ctx context.Context, in *ListInferenceSecretsRequest, opts ...grpc.CallOption) (*ListInferenceSecretsResponse, error) + // CreateInferencePolicy creates a new InferencePolicy. + CreateInferencePolicy(ctx context.Context, in *CreateInferencePolicyRequest, opts ...grpc.CallOption) (*CreateInferencePolicyResponse, error) + // GetInferencePolicy retrieves an existing InferencePolicy by name. + GetInferencePolicy(ctx context.Context, in *GetInferencePolicyRequest, opts ...grpc.CallOption) (*GetInferencePolicyResponse, error) + // UpdateInferencePolicy updates an existing InferencePolicy. + UpdateInferencePolicy(ctx context.Context, in *UpdateInferencePolicyRequest, opts ...grpc.CallOption) (*UpdateInferencePolicyResponse, error) + // UpsertInferencePolicy creates a new InferencePolicy or updates an existing + // one. + UpsertInferencePolicy(ctx context.Context, in *UpsertInferencePolicyRequest, opts ...grpc.CallOption) (*UpsertInferencePolicyResponse, error) + // DeleteInferencePolicy deletes an existing InferencePolicy by name. + DeleteInferencePolicy(ctx context.Context, in *DeleteInferencePolicyRequest, opts ...grpc.CallOption) (*DeleteInferencePolicyResponse, error) + // ListInferencePolicies lists all InferencePolicies that match the request. + ListInferencePolicies(ctx context.Context, in *ListInferencePoliciesRequest, opts ...grpc.CallOption) (*ListInferencePoliciesResponse, error) + // GetSummary retrieves the inference result for a session, which + // contains the session summary. + GetSummary(ctx context.Context, in *GetSummaryRequest, opts ...grpc.CallOption) (*GetSummaryResponse, error) + // IsEnabled tells if the summarizer is enabled by the license and + // configured. (Note that this doesn't tell anything about actual correctness + // of the configuration or presence of recording summaries.) 
+ IsEnabled(ctx context.Context, in *IsEnabledRequest, opts ...grpc.CallOption) (*IsEnabledResponse, error) +} + +type summarizerServiceClient struct { + cc grpc.ClientConnInterface +} + +func NewSummarizerServiceClient(cc grpc.ClientConnInterface) SummarizerServiceClient { + return &summarizerServiceClient{cc} +} + +func (c *summarizerServiceClient) CreateInferenceModel(ctx context.Context, in *CreateInferenceModelRequest, opts ...grpc.CallOption) (*CreateInferenceModelResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(CreateInferenceModelResponse) + err := c.cc.Invoke(ctx, SummarizerService_CreateInferenceModel_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *summarizerServiceClient) GetInferenceModel(ctx context.Context, in *GetInferenceModelRequest, opts ...grpc.CallOption) (*GetInferenceModelResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(GetInferenceModelResponse) + err := c.cc.Invoke(ctx, SummarizerService_GetInferenceModel_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *summarizerServiceClient) UpdateInferenceModel(ctx context.Context, in *UpdateInferenceModelRequest, opts ...grpc.CallOption) (*UpdateInferenceModelResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(UpdateInferenceModelResponse) + err := c.cc.Invoke(ctx, SummarizerService_UpdateInferenceModel_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *summarizerServiceClient) UpsertInferenceModel(ctx context.Context, in *UpsertInferenceModelRequest, opts ...grpc.CallOption) (*UpsertInferenceModelResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) 
+ out := new(UpsertInferenceModelResponse) + err := c.cc.Invoke(ctx, SummarizerService_UpsertInferenceModel_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *summarizerServiceClient) DeleteInferenceModel(ctx context.Context, in *DeleteInferenceModelRequest, opts ...grpc.CallOption) (*DeleteInferenceModelResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(DeleteInferenceModelResponse) + err := c.cc.Invoke(ctx, SummarizerService_DeleteInferenceModel_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *summarizerServiceClient) ListInferenceModels(ctx context.Context, in *ListInferenceModelsRequest, opts ...grpc.CallOption) (*ListInferenceModelsResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListInferenceModelsResponse) + err := c.cc.Invoke(ctx, SummarizerService_ListInferenceModels_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *summarizerServiceClient) CreateInferenceSecret(ctx context.Context, in *CreateInferenceSecretRequest, opts ...grpc.CallOption) (*CreateInferenceSecretResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(CreateInferenceSecretResponse) + err := c.cc.Invoke(ctx, SummarizerService_CreateInferenceSecret_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *summarizerServiceClient) GetInferenceSecret(ctx context.Context, in *GetInferenceSecretRequest, opts ...grpc.CallOption) (*GetInferenceSecretResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(GetInferenceSecretResponse) + err := c.cc.Invoke(ctx, SummarizerService_GetInferenceSecret_FullMethodName, in, out, cOpts...) 
+ if err != nil { + return nil, err + } + return out, nil +} + +func (c *summarizerServiceClient) UpdateInferenceSecret(ctx context.Context, in *UpdateInferenceSecretRequest, opts ...grpc.CallOption) (*UpdateInferenceSecretResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(UpdateInferenceSecretResponse) + err := c.cc.Invoke(ctx, SummarizerService_UpdateInferenceSecret_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *summarizerServiceClient) UpsertInferenceSecret(ctx context.Context, in *UpsertInferenceSecretRequest, opts ...grpc.CallOption) (*UpsertInferenceSecretResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(UpsertInferenceSecretResponse) + err := c.cc.Invoke(ctx, SummarizerService_UpsertInferenceSecret_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *summarizerServiceClient) DeleteInferenceSecret(ctx context.Context, in *DeleteInferenceSecretRequest, opts ...grpc.CallOption) (*DeleteInferenceSecretResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(DeleteInferenceSecretResponse) + err := c.cc.Invoke(ctx, SummarizerService_DeleteInferenceSecret_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *summarizerServiceClient) ListInferenceSecrets(ctx context.Context, in *ListInferenceSecretsRequest, opts ...grpc.CallOption) (*ListInferenceSecretsResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListInferenceSecretsResponse) + err := c.cc.Invoke(ctx, SummarizerService_ListInferenceSecrets_FullMethodName, in, out, cOpts...) 
+ if err != nil { + return nil, err + } + return out, nil +} + +func (c *summarizerServiceClient) CreateInferencePolicy(ctx context.Context, in *CreateInferencePolicyRequest, opts ...grpc.CallOption) (*CreateInferencePolicyResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(CreateInferencePolicyResponse) + err := c.cc.Invoke(ctx, SummarizerService_CreateInferencePolicy_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *summarizerServiceClient) GetInferencePolicy(ctx context.Context, in *GetInferencePolicyRequest, opts ...grpc.CallOption) (*GetInferencePolicyResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(GetInferencePolicyResponse) + err := c.cc.Invoke(ctx, SummarizerService_GetInferencePolicy_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *summarizerServiceClient) UpdateInferencePolicy(ctx context.Context, in *UpdateInferencePolicyRequest, opts ...grpc.CallOption) (*UpdateInferencePolicyResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(UpdateInferencePolicyResponse) + err := c.cc.Invoke(ctx, SummarizerService_UpdateInferencePolicy_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *summarizerServiceClient) UpsertInferencePolicy(ctx context.Context, in *UpsertInferencePolicyRequest, opts ...grpc.CallOption) (*UpsertInferencePolicyResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(UpsertInferencePolicyResponse) + err := c.cc.Invoke(ctx, SummarizerService_UpsertInferencePolicy_FullMethodName, in, out, cOpts...) 
+ if err != nil { + return nil, err + } + return out, nil +} + +func (c *summarizerServiceClient) DeleteInferencePolicy(ctx context.Context, in *DeleteInferencePolicyRequest, opts ...grpc.CallOption) (*DeleteInferencePolicyResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(DeleteInferencePolicyResponse) + err := c.cc.Invoke(ctx, SummarizerService_DeleteInferencePolicy_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *summarizerServiceClient) ListInferencePolicies(ctx context.Context, in *ListInferencePoliciesRequest, opts ...grpc.CallOption) (*ListInferencePoliciesResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListInferencePoliciesResponse) + err := c.cc.Invoke(ctx, SummarizerService_ListInferencePolicies_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *summarizerServiceClient) GetSummary(ctx context.Context, in *GetSummaryRequest, opts ...grpc.CallOption) (*GetSummaryResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(GetSummaryResponse) + err := c.cc.Invoke(ctx, SummarizerService_GetSummary_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *summarizerServiceClient) IsEnabled(ctx context.Context, in *IsEnabledRequest, opts ...grpc.CallOption) (*IsEnabledResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(IsEnabledResponse) + err := c.cc.Invoke(ctx, SummarizerService_IsEnabled_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + +// SummarizerServiceServer is the server API for SummarizerService service. +// All implementations must embed UnimplementedSummarizerServiceServer +// for forward compatibility. 
+// +// SummarizerService is the service for managing summarization inference +// models, secrets, and policies. These objects configure the session +// recording summarizer. +type SummarizerServiceServer interface { + // CreateInferenceModel creates a new InferenceModel. + CreateInferenceModel(context.Context, *CreateInferenceModelRequest) (*CreateInferenceModelResponse, error) + // GetInferenceModel retrieves an existing InferenceModel by name. + GetInferenceModel(context.Context, *GetInferenceModelRequest) (*GetInferenceModelResponse, error) + // UpdateInferenceModel updates an existing InferenceModel. + UpdateInferenceModel(context.Context, *UpdateInferenceModelRequest) (*UpdateInferenceModelResponse, error) + // UpsertInferenceModel creates a new InferenceModel or updates an existing + // one. + UpsertInferenceModel(context.Context, *UpsertInferenceModelRequest) (*UpsertInferenceModelResponse, error) + // DeleteInferenceModel deletes an existing InferenceModel by name. + DeleteInferenceModel(context.Context, *DeleteInferenceModelRequest) (*DeleteInferenceModelResponse, error) + // ListInferenceModels lists all InferenceModels that match the request. + ListInferenceModels(context.Context, *ListInferenceModelsRequest) (*ListInferenceModelsResponse, error) + // CreateInferenceSecret creates a new InferenceSecret. + CreateInferenceSecret(context.Context, *CreateInferenceSecretRequest) (*CreateInferenceSecretResponse, error) + // GetInferenceSecret retrieves an existing InferenceSecret by name. + GetInferenceSecret(context.Context, *GetInferenceSecretRequest) (*GetInferenceSecretResponse, error) + // UpdateInferenceSecret updates an existing InferenceSecret. + UpdateInferenceSecret(context.Context, *UpdateInferenceSecretRequest) (*UpdateInferenceSecretResponse, error) + // UpsertInferenceSecret creates a new InferenceSecret or updates an existing + // one.
+ UpsertInferenceSecret(context.Context, *UpsertInferenceSecretRequest) (*UpsertInferenceSecretResponse, error) + // DeleteInferenceSecret deletes an existing InferenceSecret by name. + DeleteInferenceSecret(context.Context, *DeleteInferenceSecretRequest) (*DeleteInferenceSecretResponse, error) + // ListInferenceSecrets lists all InferenceSecrets that match the request. + ListInferenceSecrets(context.Context, *ListInferenceSecretsRequest) (*ListInferenceSecretsResponse, error) + // CreateInferencePolicy creates a new InferencePolicy. + CreateInferencePolicy(context.Context, *CreateInferencePolicyRequest) (*CreateInferencePolicyResponse, error) + // GetInferencePolicy retrieves an existing InferencePolicy by name. + GetInferencePolicy(context.Context, *GetInferencePolicyRequest) (*GetInferencePolicyResponse, error) + // UpdateInferencePolicy updates an existing InferencePolicy. + UpdateInferencePolicy(context.Context, *UpdateInferencePolicyRequest) (*UpdateInferencePolicyResponse, error) + // UpsertInferencePolicy creates a new InferencePolicy or updates an existing + // one. + UpsertInferencePolicy(context.Context, *UpsertInferencePolicyRequest) (*UpsertInferencePolicyResponse, error) + // DeleteInferencePolicy deletes an existing InferencePolicy by name. + DeleteInferencePolicy(context.Context, *DeleteInferencePolicyRequest) (*DeleteInferencePolicyResponse, error) + // ListInferencePolicies lists all InferencePolicies that match the request. + ListInferencePolicies(context.Context, *ListInferencePoliciesRequest) (*ListInferencePoliciesResponse, error) + // GetSummary retrieves the inference result for a session, which + // contains the session summary. + GetSummary(context.Context, *GetSummaryRequest) (*GetSummaryResponse, error) + // IsEnabled reports whether the summarizer is enabled by the license and + // configured. (Note that this says nothing about the actual correctness + // of the configuration or the presence of recording summaries.)
+ IsEnabled(context.Context, *IsEnabledRequest) (*IsEnabledResponse, error) + mustEmbedUnimplementedSummarizerServiceServer() +} + +// UnimplementedSummarizerServiceServer must be embedded to have +// forward compatible implementations. +// +// NOTE: this should be embedded by value instead of pointer to avoid a nil +// pointer dereference when methods are called. +type UnimplementedSummarizerServiceServer struct{} + +func (UnimplementedSummarizerServiceServer) CreateInferenceModel(context.Context, *CreateInferenceModelRequest) (*CreateInferenceModelResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method CreateInferenceModel not implemented") +} +func (UnimplementedSummarizerServiceServer) GetInferenceModel(context.Context, *GetInferenceModelRequest) (*GetInferenceModelResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method GetInferenceModel not implemented") +} +func (UnimplementedSummarizerServiceServer) UpdateInferenceModel(context.Context, *UpdateInferenceModelRequest) (*UpdateInferenceModelResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method UpdateInferenceModel not implemented") +} +func (UnimplementedSummarizerServiceServer) UpsertInferenceModel(context.Context, *UpsertInferenceModelRequest) (*UpsertInferenceModelResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method UpsertInferenceModel not implemented") +} +func (UnimplementedSummarizerServiceServer) DeleteInferenceModel(context.Context, *DeleteInferenceModelRequest) (*DeleteInferenceModelResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method DeleteInferenceModel not implemented") +} +func (UnimplementedSummarizerServiceServer) ListInferenceModels(context.Context, *ListInferenceModelsRequest) (*ListInferenceModelsResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListInferenceModels not implemented") +} +func (UnimplementedSummarizerServiceServer) 
CreateInferenceSecret(context.Context, *CreateInferenceSecretRequest) (*CreateInferenceSecretResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method CreateInferenceSecret not implemented") +} +func (UnimplementedSummarizerServiceServer) GetInferenceSecret(context.Context, *GetInferenceSecretRequest) (*GetInferenceSecretResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method GetInferenceSecret not implemented") +} +func (UnimplementedSummarizerServiceServer) UpdateInferenceSecret(context.Context, *UpdateInferenceSecretRequest) (*UpdateInferenceSecretResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method UpdateInferenceSecret not implemented") +} +func (UnimplementedSummarizerServiceServer) UpsertInferenceSecret(context.Context, *UpsertInferenceSecretRequest) (*UpsertInferenceSecretResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method UpsertInferenceSecret not implemented") +} +func (UnimplementedSummarizerServiceServer) DeleteInferenceSecret(context.Context, *DeleteInferenceSecretRequest) (*DeleteInferenceSecretResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method DeleteInferenceSecret not implemented") +} +func (UnimplementedSummarizerServiceServer) ListInferenceSecrets(context.Context, *ListInferenceSecretsRequest) (*ListInferenceSecretsResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListInferenceSecrets not implemented") +} +func (UnimplementedSummarizerServiceServer) CreateInferencePolicy(context.Context, *CreateInferencePolicyRequest) (*CreateInferencePolicyResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method CreateInferencePolicy not implemented") +} +func (UnimplementedSummarizerServiceServer) GetInferencePolicy(context.Context, *GetInferencePolicyRequest) (*GetInferencePolicyResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method GetInferencePolicy not implemented") +} +func 
(UnimplementedSummarizerServiceServer) UpdateInferencePolicy(context.Context, *UpdateInferencePolicyRequest) (*UpdateInferencePolicyResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method UpdateInferencePolicy not implemented") +} +func (UnimplementedSummarizerServiceServer) UpsertInferencePolicy(context.Context, *UpsertInferencePolicyRequest) (*UpsertInferencePolicyResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method UpsertInferencePolicy not implemented") +} +func (UnimplementedSummarizerServiceServer) DeleteInferencePolicy(context.Context, *DeleteInferencePolicyRequest) (*DeleteInferencePolicyResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method DeleteInferencePolicy not implemented") +} +func (UnimplementedSummarizerServiceServer) ListInferencePolicies(context.Context, *ListInferencePoliciesRequest) (*ListInferencePoliciesResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListInferencePolicies not implemented") +} +func (UnimplementedSummarizerServiceServer) GetSummary(context.Context, *GetSummaryRequest) (*GetSummaryResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method GetSummary not implemented") +} +func (UnimplementedSummarizerServiceServer) IsEnabled(context.Context, *IsEnabledRequest) (*IsEnabledResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method IsEnabled not implemented") +} +func (UnimplementedSummarizerServiceServer) mustEmbedUnimplementedSummarizerServiceServer() {} +func (UnimplementedSummarizerServiceServer) testEmbeddedByValue() {} + +// UnsafeSummarizerServiceServer may be embedded to opt out of forward compatibility for this service. +// Use of this interface is not recommended, as added methods to SummarizerServiceServer will +// result in compilation errors. 
+type UnsafeSummarizerServiceServer interface { + mustEmbedUnimplementedSummarizerServiceServer() +} + +func RegisterSummarizerServiceServer(s grpc.ServiceRegistrar, srv SummarizerServiceServer) { + // If the following call panics, it indicates UnimplementedSummarizerServiceServer was + // embedded by pointer and is nil. This will cause panics if an + // unimplemented method is ever invoked, so we test this at initialization + // time to prevent it from happening at runtime later due to I/O. + if t, ok := srv.(interface{ testEmbeddedByValue() }); ok { + t.testEmbeddedByValue() + } + s.RegisterService(&SummarizerService_ServiceDesc, srv) +} + +func _SummarizerService_CreateInferenceModel_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(CreateInferenceModelRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(SummarizerServiceServer).CreateInferenceModel(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: SummarizerService_CreateInferenceModel_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(SummarizerServiceServer).CreateInferenceModel(ctx, req.(*CreateInferenceModelRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _SummarizerService_GetInferenceModel_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(GetInferenceModelRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(SummarizerServiceServer).GetInferenceModel(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: SummarizerService_GetInferenceModel_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return
srv.(SummarizerServiceServer).GetInferenceModel(ctx, req.(*GetInferenceModelRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _SummarizerService_UpdateInferenceModel_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(UpdateInferenceModelRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(SummarizerServiceServer).UpdateInferenceModel(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: SummarizerService_UpdateInferenceModel_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(SummarizerServiceServer).UpdateInferenceModel(ctx, req.(*UpdateInferenceModelRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _SummarizerService_UpsertInferenceModel_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(UpsertInferenceModelRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(SummarizerServiceServer).UpsertInferenceModel(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: SummarizerService_UpsertInferenceModel_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(SummarizerServiceServer).UpsertInferenceModel(ctx, req.(*UpsertInferenceModelRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _SummarizerService_DeleteInferenceModel_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(DeleteInferenceModelRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(SummarizerServiceServer).DeleteInferenceModel(ctx, in) + } + info 
:= &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: SummarizerService_DeleteInferenceModel_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(SummarizerServiceServer).DeleteInferenceModel(ctx, req.(*DeleteInferenceModelRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _SummarizerService_ListInferenceModels_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListInferenceModelsRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(SummarizerServiceServer).ListInferenceModels(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: SummarizerService_ListInferenceModels_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(SummarizerServiceServer).ListInferenceModels(ctx, req.(*ListInferenceModelsRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _SummarizerService_CreateInferenceSecret_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(CreateInferenceSecretRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(SummarizerServiceServer).CreateInferenceSecret(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: SummarizerService_CreateInferenceSecret_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(SummarizerServiceServer).CreateInferenceSecret(ctx, req.(*CreateInferenceSecretRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _SummarizerService_GetInferenceSecret_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, 
error) { + in := new(GetInferenceSecretRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(SummarizerServiceServer).GetInferenceSecret(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: SummarizerService_GetInferenceSecret_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(SummarizerServiceServer).GetInferenceSecret(ctx, req.(*GetInferenceSecretRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _SummarizerService_UpdateInferenceSecret_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(UpdateInferenceSecretRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(SummarizerServiceServer).UpdateInferenceSecret(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: SummarizerService_UpdateInferenceSecret_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(SummarizerServiceServer).UpdateInferenceSecret(ctx, req.(*UpdateInferenceSecretRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _SummarizerService_UpsertInferenceSecret_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(UpsertInferenceSecretRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(SummarizerServiceServer).UpsertInferenceSecret(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: SummarizerService_UpsertInferenceSecret_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(SummarizerServiceServer).UpsertInferenceSecret(ctx, req.(*UpsertInferenceSecretRequest)) + } + return 
interceptor(ctx, in, info, handler) +} + +func _SummarizerService_DeleteInferenceSecret_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(DeleteInferenceSecretRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(SummarizerServiceServer).DeleteInferenceSecret(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: SummarizerService_DeleteInferenceSecret_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(SummarizerServiceServer).DeleteInferenceSecret(ctx, req.(*DeleteInferenceSecretRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _SummarizerService_ListInferenceSecrets_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListInferenceSecretsRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(SummarizerServiceServer).ListInferenceSecrets(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: SummarizerService_ListInferenceSecrets_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(SummarizerServiceServer).ListInferenceSecrets(ctx, req.(*ListInferenceSecretsRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _SummarizerService_CreateInferencePolicy_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(CreateInferencePolicyRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(SummarizerServiceServer).CreateInferencePolicy(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: 
SummarizerService_CreateInferencePolicy_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(SummarizerServiceServer).CreateInferencePolicy(ctx, req.(*CreateInferencePolicyRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _SummarizerService_GetInferencePolicy_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(GetInferencePolicyRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(SummarizerServiceServer).GetInferencePolicy(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: SummarizerService_GetInferencePolicy_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(SummarizerServiceServer).GetInferencePolicy(ctx, req.(*GetInferencePolicyRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _SummarizerService_UpdateInferencePolicy_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(UpdateInferencePolicyRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(SummarizerServiceServer).UpdateInferencePolicy(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: SummarizerService_UpdateInferencePolicy_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(SummarizerServiceServer).UpdateInferencePolicy(ctx, req.(*UpdateInferencePolicyRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _SummarizerService_UpsertInferencePolicy_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(UpsertInferencePolicyRequest) + if err 
:= dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(SummarizerServiceServer).UpsertInferencePolicy(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: SummarizerService_UpsertInferencePolicy_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(SummarizerServiceServer).UpsertInferencePolicy(ctx, req.(*UpsertInferencePolicyRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _SummarizerService_DeleteInferencePolicy_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(DeleteInferencePolicyRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(SummarizerServiceServer).DeleteInferencePolicy(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: SummarizerService_DeleteInferencePolicy_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(SummarizerServiceServer).DeleteInferencePolicy(ctx, req.(*DeleteInferencePolicyRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _SummarizerService_ListInferencePolicies_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListInferencePoliciesRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(SummarizerServiceServer).ListInferencePolicies(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: SummarizerService_ListInferencePolicies_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(SummarizerServiceServer).ListInferencePolicies(ctx, req.(*ListInferencePoliciesRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func 
_SummarizerService_GetSummary_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(GetSummaryRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(SummarizerServiceServer).GetSummary(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: SummarizerService_GetSummary_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(SummarizerServiceServer).GetSummary(ctx, req.(*GetSummaryRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _SummarizerService_IsEnabled_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(IsEnabledRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(SummarizerServiceServer).IsEnabled(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: SummarizerService_IsEnabled_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(SummarizerServiceServer).IsEnabled(ctx, req.(*IsEnabledRequest)) + } + return interceptor(ctx, in, info, handler) +} + +// SummarizerService_ServiceDesc is the grpc.ServiceDesc for SummarizerService service. 
+// It's only intended for direct use with grpc.RegisterService, +// and not to be introspected or modified (even as a copy) +var SummarizerService_ServiceDesc = grpc.ServiceDesc{ + ServiceName: "teleport.summarizer.v1.SummarizerService", + HandlerType: (*SummarizerServiceServer)(nil), + Methods: []grpc.MethodDesc{ + { + MethodName: "CreateInferenceModel", + Handler: _SummarizerService_CreateInferenceModel_Handler, + }, + { + MethodName: "GetInferenceModel", + Handler: _SummarizerService_GetInferenceModel_Handler, + }, + { + MethodName: "UpdateInferenceModel", + Handler: _SummarizerService_UpdateInferenceModel_Handler, + }, + { + MethodName: "UpsertInferenceModel", + Handler: _SummarizerService_UpsertInferenceModel_Handler, + }, + { + MethodName: "DeleteInferenceModel", + Handler: _SummarizerService_DeleteInferenceModel_Handler, + }, + { + MethodName: "ListInferenceModels", + Handler: _SummarizerService_ListInferenceModels_Handler, + }, + { + MethodName: "CreateInferenceSecret", + Handler: _SummarizerService_CreateInferenceSecret_Handler, + }, + { + MethodName: "GetInferenceSecret", + Handler: _SummarizerService_GetInferenceSecret_Handler, + }, + { + MethodName: "UpdateInferenceSecret", + Handler: _SummarizerService_UpdateInferenceSecret_Handler, + }, + { + MethodName: "UpsertInferenceSecret", + Handler: _SummarizerService_UpsertInferenceSecret_Handler, + }, + { + MethodName: "DeleteInferenceSecret", + Handler: _SummarizerService_DeleteInferenceSecret_Handler, + }, + { + MethodName: "ListInferenceSecrets", + Handler: _SummarizerService_ListInferenceSecrets_Handler, + }, + { + MethodName: "CreateInferencePolicy", + Handler: _SummarizerService_CreateInferencePolicy_Handler, + }, + { + MethodName: "GetInferencePolicy", + Handler: _SummarizerService_GetInferencePolicy_Handler, + }, + { + MethodName: "UpdateInferencePolicy", + Handler: _SummarizerService_UpdateInferencePolicy_Handler, + }, + { + MethodName: "UpsertInferencePolicy", + Handler: 
_SummarizerService_UpsertInferencePolicy_Handler, + }, + { + MethodName: "DeleteInferencePolicy", + Handler: _SummarizerService_DeleteInferencePolicy_Handler, + }, + { + MethodName: "ListInferencePolicies", + Handler: _SummarizerService_ListInferencePolicies_Handler, + }, + { + MethodName: "GetSummary", + Handler: _SummarizerService_GetSummary_Handler, + }, + { + MethodName: "IsEnabled", + Handler: _SummarizerService_IsEnabled_Handler, + }, + }, + Streams: []grpc.StreamDesc{}, + Metadata: "teleport/summarizer/v1/summarizer_service.proto", +} diff --git a/api/gen/proto/go/teleport/trait/v1/trait.pb.go b/api/gen/proto/go/teleport/trait/v1/trait.pb.go index fcd84f5888aa4..e5ab47a14420b 100644 --- a/api/gen/proto/go/teleport/trait/v1/trait.pb.go +++ b/api/gen/proto/go/teleport/trait/v1/trait.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/trait/v1/trait.proto diff --git a/api/gen/proto/go/teleport/transport/v1/transport_service.pb.go b/api/gen/proto/go/teleport/transport/v1/transport_service.pb.go index a37fae786ad9d..d4975335ccbb1 100644 --- a/api/gen/proto/go/teleport/transport/v1/transport_service.pb.go +++ b/api/gen/proto/go/teleport/transport/v1/transport_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/transport/v1/transport_service.proto diff --git a/api/gen/proto/go/teleport/trust/v1/trust_service.pb.go b/api/gen/proto/go/teleport/trust/v1/trust_service.pb.go index 6589f97278328..90d2cd5e77d26 100644 --- a/api/gen/proto/go/teleport/trust/v1/trust_service.pb.go +++ b/api/gen/proto/go/teleport/trust/v1/trust_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/trust/v1/trust_service.proto @@ -865,6 +865,118 @@ func (x *GenerateHostCertResponse) GetSshCertificate() []byte { return nil } +// ListTrustedClustersRequest is the request for ListTrustedClusters. +type ListTrustedClustersRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The maximum number of items to return. + // The server may impose a different page size at its discretion. + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // The next_page_token value returned from a previous List request, if any. + PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListTrustedClustersRequest) Reset() { + *x = ListTrustedClustersRequest{} + mi := &file_teleport_trust_v1_trust_service_proto_msgTypes[15] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListTrustedClustersRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListTrustedClustersRequest) ProtoMessage() {} + +func (x *ListTrustedClustersRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_trust_v1_trust_service_proto_msgTypes[15] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListTrustedClustersRequest.ProtoReflect.Descriptor instead. 
+func (*ListTrustedClustersRequest) Descriptor() ([]byte, []int) { + return file_teleport_trust_v1_trust_service_proto_rawDescGZIP(), []int{15} +} + +func (x *ListTrustedClustersRequest) GetPageSize() int32 { + if x != nil { + return x.PageSize + } + return 0 +} + +func (x *ListTrustedClustersRequest) GetPageToken() string { + if x != nil { + return x.PageToken + } + return "" +} + +// ListTrustedClustersResponse is the response for ListTrustedClusters. +type ListTrustedClustersResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // TrustedClusters is a list of Trusted Cluster resources. + TrustedClusters []*types.TrustedClusterV2 `protobuf:"bytes,1,rep,name=trusted_clusters,json=trustedClusters,proto3" json:"trusted_clusters,omitempty"` + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. + NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListTrustedClustersResponse) Reset() { + *x = ListTrustedClustersResponse{} + mi := &file_teleport_trust_v1_trust_service_proto_msgTypes[16] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListTrustedClustersResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListTrustedClustersResponse) ProtoMessage() {} + +func (x *ListTrustedClustersResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_trust_v1_trust_service_proto_msgTypes[16] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListTrustedClustersResponse.ProtoReflect.Descriptor instead. 
+func (*ListTrustedClustersResponse) Descriptor() ([]byte, []int) { + return file_teleport_trust_v1_trust_service_proto_rawDescGZIP(), []int{16} +} + +func (x *ListTrustedClustersResponse) GetTrustedClusters() []*types.TrustedClusterV2 { + if x != nil { + return x.TrustedClusters + } + return nil +} + +func (x *ListTrustedClustersResponse) GetNextPageToken() string { + if x != nil { + return x.NextPageToken + } + return "" +} + var File_teleport_trust_v1_trust_service_proto protoreflect.FileDescriptor const file_teleport_trust_v1_trust_service_proto_rawDesc = "" + @@ -917,7 +1029,14 @@ const file_teleport_trust_v1_trust_service_proto_rawDesc = "" + "\x04role\x18\x06 \x01(\tR\x04role\x12+\n" + "\x03ttl\x18\a \x01(\v2\x19.google.protobuf.DurationR\x03ttl\"C\n" + "\x18GenerateHostCertResponse\x12'\n" + - "\x0fssh_certificate\x18\x01 \x01(\fR\x0esshCertificate2\xaa\b\n" + + "\x0fssh_certificate\x18\x01 \x01(\fR\x0esshCertificate\"X\n" + + "\x1aListTrustedClustersRequest\x12\x1b\n" + + "\tpage_size\x18\x01 \x01(\x05R\bpageSize\x12\x1d\n" + + "\n" + + "page_token\x18\x02 \x01(\tR\tpageToken\"\x89\x01\n" + + "\x1bListTrustedClustersResponse\x12B\n" + + "\x10trusted_clusters\x18\x01 \x03(\v2\x17.types.TrustedClusterV2R\x0ftrustedClusters\x12&\n" + + "\x0fnext_page_token\x18\x02 \x01(\tR\rnextPageToken2\xa0\t\n" + "\fTrustService\x12V\n" + "\x10GetCertAuthority\x12*.teleport.trust.v1.GetCertAuthorityRequest\x1a\x16.types.CertAuthorityV2\x12q\n" + "\x12GetCertAuthorities\x12,.teleport.trust.v1.GetCertAuthoritiesRequest\x1a-.teleport.trust.v1.GetCertAuthoritiesResponse\x12\\\n" + @@ -928,7 +1047,8 @@ const file_teleport_trust_v1_trust_service_proto_rawDesc = "" + "\x10GenerateHostCert\x12*.teleport.trust.v1.GenerateHostCertRequest\x1a+.teleport.trust.v1.GenerateHostCertResponse\x12_\n" + "\x14UpsertTrustedCluster\x12..teleport.trust.v1.UpsertTrustedClusterRequest\x1a\x17.types.TrustedClusterV2\x12_\n" + 
"\x14CreateTrustedCluster\x12..teleport.trust.v1.CreateTrustedClusterRequest\x1a\x17.types.TrustedClusterV2\x12_\n" + - "\x14UpdateTrustedCluster\x12..teleport.trust.v1.UpdateTrustedClusterRequest\x1a\x17.types.TrustedClusterV2BNZLgithub.com/gravitational/teleport/api/gen/proto/go/teleport/trust/v1;trustv1b\x06proto3" + "\x14UpdateTrustedCluster\x12..teleport.trust.v1.UpdateTrustedClusterRequest\x1a\x17.types.TrustedClusterV2\x12t\n" + + "\x13ListTrustedClusters\x12-.teleport.trust.v1.ListTrustedClustersRequest\x1a..teleport.trust.v1.ListTrustedClustersResponseBNZLgithub.com/gravitational/teleport/api/gen/proto/go/teleport/trust/v1;trustv1b\x06proto3" var ( file_teleport_trust_v1_trust_service_proto_rawDescOnce sync.Once @@ -942,7 +1062,7 @@ func file_teleport_trust_v1_trust_service_proto_rawDescGZIP() []byte { return file_teleport_trust_v1_trust_service_proto_rawDescData } -var file_teleport_trust_v1_trust_service_proto_msgTypes = make([]protoimpl.MessageInfo, 15) +var file_teleport_trust_v1_trust_service_proto_msgTypes = make([]protoimpl.MessageInfo, 17) var file_teleport_trust_v1_trust_service_proto_goTypes = []any{ (*UpsertTrustedClusterRequest)(nil), // 0: teleport.trust.v1.UpsertTrustedClusterRequest (*CreateTrustedClusterRequest)(nil), // 1: teleport.trust.v1.CreateTrustedClusterRequest @@ -959,50 +1079,55 @@ var file_teleport_trust_v1_trust_service_proto_goTypes = []any{ (*RotateExternalCertAuthorityResponse)(nil), // 12: teleport.trust.v1.RotateExternalCertAuthorityResponse (*GenerateHostCertRequest)(nil), // 13: teleport.trust.v1.GenerateHostCertRequest (*GenerateHostCertResponse)(nil), // 14: teleport.trust.v1.GenerateHostCertResponse - (*types.TrustedClusterV2)(nil), // 15: types.TrustedClusterV2 - (*types.CertAuthorityV2)(nil), // 16: types.CertAuthorityV2 - (*durationpb.Duration)(nil), // 17: google.protobuf.Duration - (*timestamppb.Timestamp)(nil), // 18: google.protobuf.Timestamp - (*emptypb.Empty)(nil), // 19: google.protobuf.Empty + 
(*ListTrustedClustersRequest)(nil), // 15: teleport.trust.v1.ListTrustedClustersRequest + (*ListTrustedClustersResponse)(nil), // 16: teleport.trust.v1.ListTrustedClustersResponse + (*types.TrustedClusterV2)(nil), // 17: types.TrustedClusterV2 + (*types.CertAuthorityV2)(nil), // 18: types.CertAuthorityV2 + (*durationpb.Duration)(nil), // 19: google.protobuf.Duration + (*timestamppb.Timestamp)(nil), // 20: google.protobuf.Timestamp + (*emptypb.Empty)(nil), // 21: google.protobuf.Empty } var file_teleport_trust_v1_trust_service_proto_depIdxs = []int32{ - 15, // 0: teleport.trust.v1.UpsertTrustedClusterRequest.trusted_cluster:type_name -> types.TrustedClusterV2 - 15, // 1: teleport.trust.v1.CreateTrustedClusterRequest.trusted_cluster:type_name -> types.TrustedClusterV2 - 15, // 2: teleport.trust.v1.UpdateTrustedClusterRequest.trusted_cluster:type_name -> types.TrustedClusterV2 - 16, // 3: teleport.trust.v1.GetCertAuthoritiesResponse.cert_authorities_v2:type_name -> types.CertAuthorityV2 - 16, // 4: teleport.trust.v1.UpsertCertAuthorityRequest.cert_authority:type_name -> types.CertAuthorityV2 - 17, // 5: teleport.trust.v1.RotateCertAuthorityRequest.grace_period:type_name -> google.protobuf.Duration + 17, // 0: teleport.trust.v1.UpsertTrustedClusterRequest.trusted_cluster:type_name -> types.TrustedClusterV2 + 17, // 1: teleport.trust.v1.CreateTrustedClusterRequest.trusted_cluster:type_name -> types.TrustedClusterV2 + 17, // 2: teleport.trust.v1.UpdateTrustedClusterRequest.trusted_cluster:type_name -> types.TrustedClusterV2 + 18, // 3: teleport.trust.v1.GetCertAuthoritiesResponse.cert_authorities_v2:type_name -> types.CertAuthorityV2 + 18, // 4: teleport.trust.v1.UpsertCertAuthorityRequest.cert_authority:type_name -> types.CertAuthorityV2 + 19, // 5: teleport.trust.v1.RotateCertAuthorityRequest.grace_period:type_name -> google.protobuf.Duration 9, // 6: teleport.trust.v1.RotateCertAuthorityRequest.schedule:type_name -> teleport.trust.v1.RotationSchedule - 18, // 7: 
teleport.trust.v1.RotationSchedule.update_clients:type_name -> google.protobuf.Timestamp - 18, // 8: teleport.trust.v1.RotationSchedule.update_servers:type_name -> google.protobuf.Timestamp - 18, // 9: teleport.trust.v1.RotationSchedule.standby:type_name -> google.protobuf.Timestamp - 16, // 10: teleport.trust.v1.RotateExternalCertAuthorityRequest.cert_authority:type_name -> types.CertAuthorityV2 - 17, // 11: teleport.trust.v1.GenerateHostCertRequest.ttl:type_name -> google.protobuf.Duration - 3, // 12: teleport.trust.v1.TrustService.GetCertAuthority:input_type -> teleport.trust.v1.GetCertAuthorityRequest - 4, // 13: teleport.trust.v1.TrustService.GetCertAuthorities:input_type -> teleport.trust.v1.GetCertAuthoritiesRequest - 6, // 14: teleport.trust.v1.TrustService.DeleteCertAuthority:input_type -> teleport.trust.v1.DeleteCertAuthorityRequest - 7, // 15: teleport.trust.v1.TrustService.UpsertCertAuthority:input_type -> teleport.trust.v1.UpsertCertAuthorityRequest - 8, // 16: teleport.trust.v1.TrustService.RotateCertAuthority:input_type -> teleport.trust.v1.RotateCertAuthorityRequest - 11, // 17: teleport.trust.v1.TrustService.RotateExternalCertAuthority:input_type -> teleport.trust.v1.RotateExternalCertAuthorityRequest - 13, // 18: teleport.trust.v1.TrustService.GenerateHostCert:input_type -> teleport.trust.v1.GenerateHostCertRequest - 0, // 19: teleport.trust.v1.TrustService.UpsertTrustedCluster:input_type -> teleport.trust.v1.UpsertTrustedClusterRequest - 1, // 20: teleport.trust.v1.TrustService.CreateTrustedCluster:input_type -> teleport.trust.v1.CreateTrustedClusterRequest - 2, // 21: teleport.trust.v1.TrustService.UpdateTrustedCluster:input_type -> teleport.trust.v1.UpdateTrustedClusterRequest - 16, // 22: teleport.trust.v1.TrustService.GetCertAuthority:output_type -> types.CertAuthorityV2 - 5, // 23: teleport.trust.v1.TrustService.GetCertAuthorities:output_type -> teleport.trust.v1.GetCertAuthoritiesResponse - 19, // 24: 
teleport.trust.v1.TrustService.DeleteCertAuthority:output_type -> google.protobuf.Empty - 16, // 25: teleport.trust.v1.TrustService.UpsertCertAuthority:output_type -> types.CertAuthorityV2 - 10, // 26: teleport.trust.v1.TrustService.RotateCertAuthority:output_type -> teleport.trust.v1.RotateCertAuthorityResponse - 12, // 27: teleport.trust.v1.TrustService.RotateExternalCertAuthority:output_type -> teleport.trust.v1.RotateExternalCertAuthorityResponse - 14, // 28: teleport.trust.v1.TrustService.GenerateHostCert:output_type -> teleport.trust.v1.GenerateHostCertResponse - 15, // 29: teleport.trust.v1.TrustService.UpsertTrustedCluster:output_type -> types.TrustedClusterV2 - 15, // 30: teleport.trust.v1.TrustService.CreateTrustedCluster:output_type -> types.TrustedClusterV2 - 15, // 31: teleport.trust.v1.TrustService.UpdateTrustedCluster:output_type -> types.TrustedClusterV2 - 22, // [22:32] is the sub-list for method output_type - 12, // [12:22] is the sub-list for method input_type - 12, // [12:12] is the sub-list for extension type_name - 12, // [12:12] is the sub-list for extension extendee - 0, // [0:12] is the sub-list for field type_name + 20, // 7: teleport.trust.v1.RotationSchedule.update_clients:type_name -> google.protobuf.Timestamp + 20, // 8: teleport.trust.v1.RotationSchedule.update_servers:type_name -> google.protobuf.Timestamp + 20, // 9: teleport.trust.v1.RotationSchedule.standby:type_name -> google.protobuf.Timestamp + 18, // 10: teleport.trust.v1.RotateExternalCertAuthorityRequest.cert_authority:type_name -> types.CertAuthorityV2 + 19, // 11: teleport.trust.v1.GenerateHostCertRequest.ttl:type_name -> google.protobuf.Duration + 17, // 12: teleport.trust.v1.ListTrustedClustersResponse.trusted_clusters:type_name -> types.TrustedClusterV2 + 3, // 13: teleport.trust.v1.TrustService.GetCertAuthority:input_type -> teleport.trust.v1.GetCertAuthorityRequest + 4, // 14: teleport.trust.v1.TrustService.GetCertAuthorities:input_type -> 
teleport.trust.v1.GetCertAuthoritiesRequest + 6, // 15: teleport.trust.v1.TrustService.DeleteCertAuthority:input_type -> teleport.trust.v1.DeleteCertAuthorityRequest + 7, // 16: teleport.trust.v1.TrustService.UpsertCertAuthority:input_type -> teleport.trust.v1.UpsertCertAuthorityRequest + 8, // 17: teleport.trust.v1.TrustService.RotateCertAuthority:input_type -> teleport.trust.v1.RotateCertAuthorityRequest + 11, // 18: teleport.trust.v1.TrustService.RotateExternalCertAuthority:input_type -> teleport.trust.v1.RotateExternalCertAuthorityRequest + 13, // 19: teleport.trust.v1.TrustService.GenerateHostCert:input_type -> teleport.trust.v1.GenerateHostCertRequest + 0, // 20: teleport.trust.v1.TrustService.UpsertTrustedCluster:input_type -> teleport.trust.v1.UpsertTrustedClusterRequest + 1, // 21: teleport.trust.v1.TrustService.CreateTrustedCluster:input_type -> teleport.trust.v1.CreateTrustedClusterRequest + 2, // 22: teleport.trust.v1.TrustService.UpdateTrustedCluster:input_type -> teleport.trust.v1.UpdateTrustedClusterRequest + 15, // 23: teleport.trust.v1.TrustService.ListTrustedClusters:input_type -> teleport.trust.v1.ListTrustedClustersRequest + 18, // 24: teleport.trust.v1.TrustService.GetCertAuthority:output_type -> types.CertAuthorityV2 + 5, // 25: teleport.trust.v1.TrustService.GetCertAuthorities:output_type -> teleport.trust.v1.GetCertAuthoritiesResponse + 21, // 26: teleport.trust.v1.TrustService.DeleteCertAuthority:output_type -> google.protobuf.Empty + 18, // 27: teleport.trust.v1.TrustService.UpsertCertAuthority:output_type -> types.CertAuthorityV2 + 10, // 28: teleport.trust.v1.TrustService.RotateCertAuthority:output_type -> teleport.trust.v1.RotateCertAuthorityResponse + 12, // 29: teleport.trust.v1.TrustService.RotateExternalCertAuthority:output_type -> teleport.trust.v1.RotateExternalCertAuthorityResponse + 14, // 30: teleport.trust.v1.TrustService.GenerateHostCert:output_type -> teleport.trust.v1.GenerateHostCertResponse + 17, // 31: 
teleport.trust.v1.TrustService.UpsertTrustedCluster:output_type -> types.TrustedClusterV2 + 17, // 32: teleport.trust.v1.TrustService.CreateTrustedCluster:output_type -> types.TrustedClusterV2 + 17, // 33: teleport.trust.v1.TrustService.UpdateTrustedCluster:output_type -> types.TrustedClusterV2 + 16, // 34: teleport.trust.v1.TrustService.ListTrustedClusters:output_type -> teleport.trust.v1.ListTrustedClustersResponse + 24, // [24:35] is the sub-list for method output_type + 13, // [13:24] is the sub-list for method input_type + 13, // [13:13] is the sub-list for extension type_name + 13, // [13:13] is the sub-list for extension extendee + 0, // [0:13] is the sub-list for field type_name } func init() { file_teleport_trust_v1_trust_service_proto_init() } @@ -1016,7 +1141,7 @@ func file_teleport_trust_v1_trust_service_proto_init() { GoPackagePath: reflect.TypeOf(x{}).PkgPath(), RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_trust_v1_trust_service_proto_rawDesc), len(file_teleport_trust_v1_trust_service_proto_rawDesc)), NumEnums: 0, - NumMessages: 15, + NumMessages: 17, NumExtensions: 0, NumServices: 1, }, diff --git a/api/gen/proto/go/teleport/trust/v1/trust_service_grpc.pb.go b/api/gen/proto/go/teleport/trust/v1/trust_service_grpc.pb.go index 4cdc57fe369e1..4fac904c3ee24 100644 --- a/api/gen/proto/go/teleport/trust/v1/trust_service_grpc.pb.go +++ b/api/gen/proto/go/teleport/trust/v1/trust_service_grpc.pb.go @@ -45,6 +45,7 @@ const ( TrustService_UpsertTrustedCluster_FullMethodName = "/teleport.trust.v1.TrustService/UpsertTrustedCluster" TrustService_CreateTrustedCluster_FullMethodName = "/teleport.trust.v1.TrustService/CreateTrustedCluster" TrustService_UpdateTrustedCluster_FullMethodName = "/teleport.trust.v1.TrustService/UpdateTrustedCluster" + TrustService_ListTrustedClusters_FullMethodName = "/teleport.trust.v1.TrustService/ListTrustedClusters" ) // TrustServiceClient is the client API for TrustService service. 
@@ -74,6 +75,8 @@ type TrustServiceClient interface { CreateTrustedCluster(ctx context.Context, in *CreateTrustedClusterRequest, opts ...grpc.CallOption) (*types.TrustedClusterV2, error) // UpdateTrustedCluster updates a Trusted Cluster in a backend. UpdateTrustedCluster(ctx context.Context, in *UpdateTrustedClusterRequest, opts ...grpc.CallOption) (*types.TrustedClusterV2, error) + // ListTrustedClusters returns a page of current Trusted Cluster resources. + ListTrustedClusters(ctx context.Context, in *ListTrustedClustersRequest, opts ...grpc.CallOption) (*ListTrustedClustersResponse, error) } type trustServiceClient struct { @@ -184,6 +187,16 @@ func (c *trustServiceClient) UpdateTrustedCluster(ctx context.Context, in *Updat return out, nil } +func (c *trustServiceClient) ListTrustedClusters(ctx context.Context, in *ListTrustedClustersRequest, opts ...grpc.CallOption) (*ListTrustedClustersResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListTrustedClustersResponse) + err := c.cc.Invoke(ctx, TrustService_ListTrustedClusters_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + // TrustServiceServer is the server API for TrustService service. // All implementations must embed UnimplementedTrustServiceServer // for forward compatibility. @@ -211,6 +224,8 @@ type TrustServiceServer interface { CreateTrustedCluster(context.Context, *CreateTrustedClusterRequest) (*types.TrustedClusterV2, error) // UpdateTrustedCluster updates a Trusted Cluster in a backend. UpdateTrustedCluster(context.Context, *UpdateTrustedClusterRequest) (*types.TrustedClusterV2, error) + // ListTrustedClusters returns a page of current Trusted Cluster resources. 
+ ListTrustedClusters(context.Context, *ListTrustedClustersRequest) (*ListTrustedClustersResponse, error) mustEmbedUnimplementedTrustServiceServer() } @@ -251,6 +266,9 @@ func (UnimplementedTrustServiceServer) CreateTrustedCluster(context.Context, *Cr func (UnimplementedTrustServiceServer) UpdateTrustedCluster(context.Context, *UpdateTrustedClusterRequest) (*types.TrustedClusterV2, error) { return nil, status.Errorf(codes.Unimplemented, "method UpdateTrustedCluster not implemented") } +func (UnimplementedTrustServiceServer) ListTrustedClusters(context.Context, *ListTrustedClustersRequest) (*ListTrustedClustersResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListTrustedClusters not implemented") +} func (UnimplementedTrustServiceServer) mustEmbedUnimplementedTrustServiceServer() {} func (UnimplementedTrustServiceServer) testEmbeddedByValue() {} @@ -452,6 +470,24 @@ func _TrustService_UpdateTrustedCluster_Handler(srv interface{}, ctx context.Con return interceptor(ctx, in, info, handler) } +func _TrustService_ListTrustedClusters_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListTrustedClustersRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(TrustServiceServer).ListTrustedClusters(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: TrustService_ListTrustedClusters_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(TrustServiceServer).ListTrustedClusters(ctx, req.(*ListTrustedClustersRequest)) + } + return interceptor(ctx, in, info, handler) +} + // TrustService_ServiceDesc is the grpc.ServiceDesc for TrustService service. 
// It's only intended for direct use with grpc.RegisterService, // and not to be introspected or modified (even as a copy) @@ -499,6 +535,10 @@ var TrustService_ServiceDesc = grpc.ServiceDesc{ MethodName: "UpdateTrustedCluster", Handler: _TrustService_UpdateTrustedCluster_Handler, }, + { + MethodName: "ListTrustedClusters", + Handler: _TrustService_ListTrustedClusters_Handler, + }, }, Streams: []grpc.StreamDesc{}, Metadata: "teleport/trust/v1/trust_service.proto", diff --git a/api/gen/proto/go/teleport/userloginstate/v1/userloginstate.pb.go b/api/gen/proto/go/teleport/userloginstate/v1/userloginstate.pb.go index c36150430b225..de57e3543dd0f 100644 --- a/api/gen/proto/go/teleport/userloginstate/v1/userloginstate.pb.go +++ b/api/gen/proto/go/teleport/userloginstate/v1/userloginstate.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/userloginstate/v1/userloginstate.proto @@ -95,22 +95,54 @@ func (x *UserLoginState) GetSpec() *Spec { // Spec is the specification for a user login state. type Spec struct { state protoimpl.MessageState `protogen:"open.v1"` - // roles are the user roles attached to the user. + // roles are the user roles attached to the user. Basically, [roles] = [original_roles] + + // [access_list_roles]. Roles []string `protobuf:"bytes,1,rep,name=roles,proto3" json:"roles,omitempty"` - // traits are the traits attached to the user. + // traits are the traits attached to the user. Basically, [traits] = [original_traits] + + // [access_list_traits]. Traits []*v11.Trait `protobuf:"bytes,2,rep,name=traits,proto3" json:"traits,omitempty"` // user_type is the type of user this state represents. UserType string `protobuf:"bytes,3,opt,name=user_type,json=userType,proto3" json:"user_type,omitempty"` - // original_roles are the user roles that are part of the user's static definition.
These roles are - not affected by access granted by access lists and are obtained prior to granting access list access. + // original_roles are the user roles that are part of the user's static definition. These roles + // are not affected by access granted by access lists and are obtained prior to granting access + // list access. Basically, [original_roles] = [roles] - [access_list_roles]. OriginalRoles []string `protobuf:"bytes,4,rep,name=original_roles,json=originalRoles,proto3" json:"original_roles,omitempty"` - // original_traits are the user traits that are part of the user's static definition. These traits are - // not affected by access granted by access lists and are obtained prior to granting access list access. + // original_traits are the user traits that are part of the user's static definition. These + // traits are not affected by access granted by access lists and are obtained prior to granting + // access list access. Basically, [original_traits] = [traits] - [access_list_traits]. OriginalTraits []*v11.Trait `protobuf:"bytes,5,rep,name=original_traits,json=originalTraits,proto3" json:"original_traits,omitempty"` // GitHubIdentity is the external identity attached to this user state. GitHubIdentity *ExternalIdentity `protobuf:"bytes,6,opt,name=git_hub_identity,json=gitHubIdentity,proto3" json:"git_hub_identity,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache + // saml_identities are the identities created from the SAML connectors used to log in by this + // user name. They are useful for RBAC calculation when there is a user with the same username + // from multiple SSO connectors, i.e. having multiple SSO identities. + // + // NOTE: There is no mechanism to clean those identities. If the user is deleted, the + // user_login_state and its saml_identities will not be deleted. Or even if the user still + // exists, but its SAML identity expires, it isn't cleared from the user_login_state.
This means + // the information stored here can be used only as long as there is a background sync running and + // making sure the user's info is up-to-date. E.g. Okta assignment creator is using this + // information, but it is running only when Okta user sync is active and periodically updates the + // user which in turn updates the user_login_state. + // + // NOTE2: This field isn't currently used. It's introduced so we can resolve the + // https://github.com/gravitational/teleport.e/issues/6723 issue in stages. + // The STAGE 1 is to introduce this field and give enough time to get existing Teleport + // installations to get updated and populate this field. + // The STAGE 2 is in the v19 release (or maybe even v20) to deploy the actual fix PR + // (https://github.com/gravitational/teleport.e/pull/7168) reading this field and calculating + // access to Okta resources. See more details in the description of the fix PR. + // + // TODO(kopiczko) v19: consider proceeding with the STAGE 2 described above. + SamlIdentities []*ExternalIdentity `protobuf:"bytes,7,rep,name=saml_identities,json=samlIdentities,proto3" json:"saml_identities,omitempty"` + // access_list_roles are roles granted to this user by the Access Lists + // membership/ownership. Basically, [access_list_roles] = [roles] - [original_roles]. + AccessListRoles []string `protobuf:"bytes,8,rep,name=access_list_roles,json=accessListRoles,proto3" json:"access_list_roles,omitempty"` + // access_list_traits are traits granted to this user by the Access Lists membership/ownership. + // Basically, [access_list_traits] = [traits] - [original_traits]. 
+ AccessListTraits []*v11.Trait `protobuf:"bytes,9,rep,name=access_list_traits,json=accessListTraits,proto3" json:"access_list_traits,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache } func (x *Spec) Reset() { @@ -185,14 +217,41 @@ func (x *Spec) GetGitHubIdentity() *ExternalIdentity { return nil } +func (x *Spec) GetSamlIdentities() []*ExternalIdentity { + if x != nil { + return x.SamlIdentities + } + return nil +} + +func (x *Spec) GetAccessListRoles() []string { + if x != nil { + return x.AccessListRoles + } + return nil +} + +func (x *Spec) GetAccessListTraits() []*v11.Trait { + if x != nil { + return x.AccessListTraits + } + return nil +} + // ExternalIdentity defines an external identity attached to this user state. type ExternalIdentity struct { state protoimpl.MessageState `protogen:"open.v1"` - // UserId is the unique identifier of the external identity such as GitHub user + // UserID is the unique identifier of the external identity such as GitHub user // ID. UserId string `protobuf:"bytes,1,opt,name=user_id,json=userId,proto3" json:"user_id,omitempty"` // Username is the username of the external identity. - Username string `protobuf:"bytes,2,opt,name=username,proto3" json:"username,omitempty"` + Username string `protobuf:"bytes,2,opt,name=username,proto3" json:"username,omitempty"` + // ConnectorID is the connector this identity was created with. It's empty for the local user. + ConnectorId string `protobuf:"bytes,3,opt,name=connector_id,json=connectorId,proto3" json:"connector_id,omitempty"` + // GrantedRoles specific for this identity. E.g.: from connector attributes mapping. + GrantedRoles []string `protobuf:"bytes,4,rep,name=granted_roles,json=grantedRoles,proto3" json:"granted_roles,omitempty"` + // GrantedTraits specific for this identity. E.g.: from connector roles attributes mapping. 
+ GrantedTraits []*v11.Trait `protobuf:"bytes,5,rep,name=granted_traits,json=grantedTraits,proto3" json:"granted_traits,omitempty"` unknownFields protoimpl.UnknownFields sizeCache protoimpl.SizeCache } @@ -241,6 +300,27 @@ func (x *ExternalIdentity) GetUsername() string { return "" } +func (x *ExternalIdentity) GetConnectorId() string { + if x != nil { + return x.ConnectorId + } + return "" +} + +func (x *ExternalIdentity) GetGrantedRoles() []string { + if x != nil { + return x.GrantedRoles + } + return nil +} + +func (x *ExternalIdentity) GetGrantedTraits() []*v11.Trait { + if x != nil { + return x.GrantedTraits + } + return nil +} + var File_teleport_userloginstate_v1_userloginstate_proto protoreflect.FileDescriptor const file_teleport_userloginstate_v1_userloginstate_proto_rawDesc = "" + @@ -248,17 +328,23 @@ const file_teleport_userloginstate_v1_userloginstate_proto_rawDesc = "" + "/teleport/userloginstate/v1/userloginstate.proto\x12\x1ateleport.userloginstate.v1\x1a'teleport/header/v1/resourceheader.proto\x1a\x1dteleport/trait/v1/trait.proto\"\x82\x01\n" + "\x0eUserLoginState\x12:\n" + "\x06header\x18\x01 \x01(\v2\".teleport.header.v1.ResourceHeaderR\x06header\x124\n" + - "\x04spec\x18\x02 \x01(\v2 .teleport.userloginstate.v1.SpecR\x04spec\"\xad\x02\n" + + "\x04spec\x18\x02 \x01(\v2 .teleport.userloginstate.v1.SpecR\x04spec\"\xf8\x03\n" + "\x04Spec\x12\x14\n" + "\x05roles\x18\x01 \x03(\tR\x05roles\x120\n" + "\x06traits\x18\x02 \x03(\v2\x18.teleport.trait.v1.TraitR\x06traits\x12\x1b\n" + "\tuser_type\x18\x03 \x01(\tR\buserType\x12%\n" + "\x0eoriginal_roles\x18\x04 \x03(\tR\roriginalRoles\x12A\n" + "\x0foriginal_traits\x18\x05 \x03(\v2\x18.teleport.trait.v1.TraitR\x0eoriginalTraits\x12V\n" + - "\x10git_hub_identity\x18\x06 \x01(\v2,.teleport.userloginstate.v1.ExternalIdentityR\x0egitHubIdentity\"G\n" + + "\x10git_hub_identity\x18\x06 \x01(\v2,.teleport.userloginstate.v1.ExternalIdentityR\x0egitHubIdentity\x12U\n" + + "\x0fsaml_identities\x18\a 
\x03(\v2,.teleport.userloginstate.v1.ExternalIdentityR\x0esamlIdentities\x12*\n" + + "\x11access_list_roles\x18\b \x03(\tR\x0faccessListRoles\x12F\n" + + "\x12access_list_traits\x18\t \x03(\v2\x18.teleport.trait.v1.TraitR\x10accessListTraits\"\xd0\x01\n" + "\x10ExternalIdentity\x12\x17\n" + "\auser_id\x18\x01 \x01(\tR\x06userId\x12\x1a\n" + - "\busername\x18\x02 \x01(\tR\busernameB`Z^github.com/gravitational/teleport/api/gen/proto/go/teleport/userloginstate/v1;userloginstatev1b\x06proto3" + "\busername\x18\x02 \x01(\tR\busername\x12!\n" + + "\fconnector_id\x18\x03 \x01(\tR\vconnectorId\x12#\n" + + "\rgranted_roles\x18\x04 \x03(\tR\fgrantedRoles\x12?\n" + + "\x0egranted_traits\x18\x05 \x03(\v2\x18.teleport.trait.v1.TraitR\rgrantedTraitsB`Z^github.com/gravitational/teleport/api/gen/proto/go/teleport/userloginstate/v1;userloginstatev1b\x06proto3" var ( file_teleport_userloginstate_v1_userloginstate_proto_rawDescOnce sync.Once @@ -286,11 +372,14 @@ var file_teleport_userloginstate_v1_userloginstate_proto_depIdxs = []int32{ 4, // 2: teleport.userloginstate.v1.Spec.traits:type_name -> teleport.trait.v1.Trait 4, // 3: teleport.userloginstate.v1.Spec.original_traits:type_name -> teleport.trait.v1.Trait 2, // 4: teleport.userloginstate.v1.Spec.git_hub_identity:type_name -> teleport.userloginstate.v1.ExternalIdentity - 5, // [5:5] is the sub-list for method output_type - 5, // [5:5] is the sub-list for method input_type - 5, // [5:5] is the sub-list for extension type_name - 5, // [5:5] is the sub-list for extension extendee - 0, // [0:5] is the sub-list for field type_name + 2, // 5: teleport.userloginstate.v1.Spec.saml_identities:type_name -> teleport.userloginstate.v1.ExternalIdentity + 4, // 6: teleport.userloginstate.v1.Spec.access_list_traits:type_name -> teleport.trait.v1.Trait + 4, // 7: teleport.userloginstate.v1.ExternalIdentity.granted_traits:type_name -> teleport.trait.v1.Trait + 8, // [8:8] is the sub-list for method output_type + 8, // [8:8] is the sub-list for 
method input_type + 8, // [8:8] is the sub-list for extension type_name + 8, // [8:8] is the sub-list for extension extendee + 0, // [0:8] is the sub-list for field type_name } func init() { file_teleport_userloginstate_v1_userloginstate_proto_init() } diff --git a/api/gen/proto/go/teleport/userloginstate/v1/userloginstate_service.pb.go b/api/gen/proto/go/teleport/userloginstate/v1/userloginstate_service.pb.go index fbe19a0f0d065..58fd0bcb10478 100644 --- a/api/gen/proto/go/teleport/userloginstate/v1/userloginstate_service.pb.go +++ b/api/gen/proto/go/teleport/userloginstate/v1/userloginstate_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/userloginstate/v1/userloginstate_service.proto @@ -294,6 +294,116 @@ func (*DeleteAllUserLoginStatesRequest) Descriptor() ([]byte, []int) { return file_teleport_userloginstate_v1_userloginstate_service_proto_rawDescGZIP(), []int{5} } +// ListUserLoginStatesRequest is the request for listing user login states with pagination. +type ListUserLoginStatesRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + // page_size is the maximum number of user login states to return. + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // page_token is the token for the next page of results. 
+ PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListUserLoginStatesRequest) Reset() { + *x = ListUserLoginStatesRequest{} + mi := &file_teleport_userloginstate_v1_userloginstate_service_proto_msgTypes[6] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListUserLoginStatesRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListUserLoginStatesRequest) ProtoMessage() {} + +func (x *ListUserLoginStatesRequest) ProtoReflect() protoreflect.Message { + mi := &file_teleport_userloginstate_v1_userloginstate_service_proto_msgTypes[6] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListUserLoginStatesRequest.ProtoReflect.Descriptor instead. +func (*ListUserLoginStatesRequest) Descriptor() ([]byte, []int) { + return file_teleport_userloginstate_v1_userloginstate_service_proto_rawDescGZIP(), []int{6} +} + +func (x *ListUserLoginStatesRequest) GetPageSize() int32 { + if x != nil { + return x.PageSize + } + return 0 +} + +func (x *ListUserLoginStatesRequest) GetPageToken() string { + if x != nil { + return x.PageToken + } + return "" +} + +// ListUserLoginStatesResponse is the response for listing user login states with pagination. +type ListUserLoginStatesResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + // user_login_states is the list of user login states. + UserLoginStates []*UserLoginState `protobuf:"bytes,1,rep,name=user_login_states,json=userLoginStates,proto3" json:"user_login_states,omitempty"` + // next_page_token is the token for the next page of results. 
+ NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken,proto3" json:"next_page_token,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListUserLoginStatesResponse) Reset() { + *x = ListUserLoginStatesResponse{} + mi := &file_teleport_userloginstate_v1_userloginstate_service_proto_msgTypes[7] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListUserLoginStatesResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListUserLoginStatesResponse) ProtoMessage() {} + +func (x *ListUserLoginStatesResponse) ProtoReflect() protoreflect.Message { + mi := &file_teleport_userloginstate_v1_userloginstate_service_proto_msgTypes[7] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListUserLoginStatesResponse.ProtoReflect.Descriptor instead. 
+func (*ListUserLoginStatesResponse) Descriptor() ([]byte, []int) { + return file_teleport_userloginstate_v1_userloginstate_service_proto_rawDescGZIP(), []int{7} +} + +func (x *ListUserLoginStatesResponse) GetUserLoginStates() []*UserLoginState { + if x != nil { + return x.UserLoginStates + } + return nil +} + +func (x *ListUserLoginStatesResponse) GetNextPageToken() string { + if x != nil { + return x.NextPageToken + } + return "" +} + var File_teleport_userloginstate_v1_userloginstate_service_proto protoreflect.FileDescriptor const file_teleport_userloginstate_v1_userloginstate_service_proto_rawDesc = "" + @@ -308,13 +418,21 @@ const file_teleport_userloginstate_v1_userloginstate_service_proto_rawDesc = "" "\x10user_login_state\x18\x01 \x01(\v2*.teleport.userloginstate.v1.UserLoginStateR\x0euserLoginState\"1\n" + "\x1bDeleteUserLoginStateRequest\x12\x12\n" + "\x04name\x18\x01 \x01(\tR\x04name\"!\n" + - "\x1fDeleteAllUserLoginStatesRequest2\xeb\x04\n" + + "\x1fDeleteAllUserLoginStatesRequest\"X\n" + + "\x1aListUserLoginStatesRequest\x12\x1b\n" + + "\tpage_size\x18\x01 \x01(\x05R\bpageSize\x12\x1d\n" + + "\n" + + "page_token\x18\x02 \x01(\tR\tpageToken\"\x9d\x01\n" + + "\x1bListUserLoginStatesResponse\x12V\n" + + "\x11user_login_states\x18\x01 \x03(\v2*.teleport.userloginstate.v1.UserLoginStateR\x0fuserLoginStates\x12&\n" + + "\x0fnext_page_token\x18\x02 \x01(\tR\rnextPageToken2\xf4\x05\n" + "\x15UserLoginStateService\x12\x83\x01\n" + "\x12GetUserLoginStates\x125.teleport.userloginstate.v1.GetUserLoginStatesRequest\x1a6.teleport.userloginstate.v1.GetUserLoginStatesResponse\x12u\n" + "\x11GetUserLoginState\x124.teleport.userloginstate.v1.GetUserLoginStateRequest\x1a*.teleport.userloginstate.v1.UserLoginState\x12{\n" + "\x14UpsertUserLoginState\x127.teleport.userloginstate.v1.UpsertUserLoginStateRequest\x1a*.teleport.userloginstate.v1.UserLoginState\x12g\n" + 
"\x14DeleteUserLoginState\x127.teleport.userloginstate.v1.DeleteUserLoginStateRequest\x1a\x16.google.protobuf.Empty\x12o\n" + - "\x18DeleteAllUserLoginStates\x12;.teleport.userloginstate.v1.DeleteAllUserLoginStatesRequest\x1a\x16.google.protobuf.EmptyB`Z^github.com/gravitational/teleport/api/gen/proto/go/teleport/userloginstate/v1;userloginstatev1b\x06proto3" + "\x18DeleteAllUserLoginStates\x12;.teleport.userloginstate.v1.DeleteAllUserLoginStatesRequest\x1a\x16.google.protobuf.Empty\x12\x86\x01\n" + + "\x13ListUserLoginStates\x126.teleport.userloginstate.v1.ListUserLoginStatesRequest\x1a7.teleport.userloginstate.v1.ListUserLoginStatesResponseB`Z^github.com/gravitational/teleport/api/gen/proto/go/teleport/userloginstate/v1;userloginstatev1b\x06proto3" var ( file_teleport_userloginstate_v1_userloginstate_service_proto_rawDescOnce sync.Once @@ -328,7 +446,7 @@ func file_teleport_userloginstate_v1_userloginstate_service_proto_rawDescGZIP() return file_teleport_userloginstate_v1_userloginstate_service_proto_rawDescData } -var file_teleport_userloginstate_v1_userloginstate_service_proto_msgTypes = make([]protoimpl.MessageInfo, 6) +var file_teleport_userloginstate_v1_userloginstate_service_proto_msgTypes = make([]protoimpl.MessageInfo, 8) var file_teleport_userloginstate_v1_userloginstate_service_proto_goTypes = []any{ (*GetUserLoginStatesRequest)(nil), // 0: teleport.userloginstate.v1.GetUserLoginStatesRequest (*GetUserLoginStatesResponse)(nil), // 1: teleport.userloginstate.v1.GetUserLoginStatesResponse @@ -336,27 +454,32 @@ var file_teleport_userloginstate_v1_userloginstate_service_proto_goTypes = []any (*UpsertUserLoginStateRequest)(nil), // 3: teleport.userloginstate.v1.UpsertUserLoginStateRequest (*DeleteUserLoginStateRequest)(nil), // 4: teleport.userloginstate.v1.DeleteUserLoginStateRequest (*DeleteAllUserLoginStatesRequest)(nil), // 5: teleport.userloginstate.v1.DeleteAllUserLoginStatesRequest - (*UserLoginState)(nil), // 6: 
teleport.userloginstate.v1.UserLoginState - (*emptypb.Empty)(nil), // 7: google.protobuf.Empty + (*ListUserLoginStatesRequest)(nil), // 6: teleport.userloginstate.v1.ListUserLoginStatesRequest + (*ListUserLoginStatesResponse)(nil), // 7: teleport.userloginstate.v1.ListUserLoginStatesResponse + (*UserLoginState)(nil), // 8: teleport.userloginstate.v1.UserLoginState + (*emptypb.Empty)(nil), // 9: google.protobuf.Empty } var file_teleport_userloginstate_v1_userloginstate_service_proto_depIdxs = []int32{ - 6, // 0: teleport.userloginstate.v1.GetUserLoginStatesResponse.user_login_states:type_name -> teleport.userloginstate.v1.UserLoginState - 6, // 1: teleport.userloginstate.v1.UpsertUserLoginStateRequest.user_login_state:type_name -> teleport.userloginstate.v1.UserLoginState - 0, // 2: teleport.userloginstate.v1.UserLoginStateService.GetUserLoginStates:input_type -> teleport.userloginstate.v1.GetUserLoginStatesRequest - 2, // 3: teleport.userloginstate.v1.UserLoginStateService.GetUserLoginState:input_type -> teleport.userloginstate.v1.GetUserLoginStateRequest - 3, // 4: teleport.userloginstate.v1.UserLoginStateService.UpsertUserLoginState:input_type -> teleport.userloginstate.v1.UpsertUserLoginStateRequest - 4, // 5: teleport.userloginstate.v1.UserLoginStateService.DeleteUserLoginState:input_type -> teleport.userloginstate.v1.DeleteUserLoginStateRequest - 5, // 6: teleport.userloginstate.v1.UserLoginStateService.DeleteAllUserLoginStates:input_type -> teleport.userloginstate.v1.DeleteAllUserLoginStatesRequest - 1, // 7: teleport.userloginstate.v1.UserLoginStateService.GetUserLoginStates:output_type -> teleport.userloginstate.v1.GetUserLoginStatesResponse - 6, // 8: teleport.userloginstate.v1.UserLoginStateService.GetUserLoginState:output_type -> teleport.userloginstate.v1.UserLoginState - 6, // 9: teleport.userloginstate.v1.UserLoginStateService.UpsertUserLoginState:output_type -> teleport.userloginstate.v1.UserLoginState - 7, // 10: 
teleport.userloginstate.v1.UserLoginStateService.DeleteUserLoginState:output_type -> google.protobuf.Empty - 7, // 11: teleport.userloginstate.v1.UserLoginStateService.DeleteAllUserLoginStates:output_type -> google.protobuf.Empty - 7, // [7:12] is the sub-list for method output_type - 2, // [2:7] is the sub-list for method input_type - 2, // [2:2] is the sub-list for extension type_name - 2, // [2:2] is the sub-list for extension extendee - 0, // [0:2] is the sub-list for field type_name + 8, // 0: teleport.userloginstate.v1.GetUserLoginStatesResponse.user_login_states:type_name -> teleport.userloginstate.v1.UserLoginState + 8, // 1: teleport.userloginstate.v1.UpsertUserLoginStateRequest.user_login_state:type_name -> teleport.userloginstate.v1.UserLoginState + 8, // 2: teleport.userloginstate.v1.ListUserLoginStatesResponse.user_login_states:type_name -> teleport.userloginstate.v1.UserLoginState + 0, // 3: teleport.userloginstate.v1.UserLoginStateService.GetUserLoginStates:input_type -> teleport.userloginstate.v1.GetUserLoginStatesRequest + 2, // 4: teleport.userloginstate.v1.UserLoginStateService.GetUserLoginState:input_type -> teleport.userloginstate.v1.GetUserLoginStateRequest + 3, // 5: teleport.userloginstate.v1.UserLoginStateService.UpsertUserLoginState:input_type -> teleport.userloginstate.v1.UpsertUserLoginStateRequest + 4, // 6: teleport.userloginstate.v1.UserLoginStateService.DeleteUserLoginState:input_type -> teleport.userloginstate.v1.DeleteUserLoginStateRequest + 5, // 7: teleport.userloginstate.v1.UserLoginStateService.DeleteAllUserLoginStates:input_type -> teleport.userloginstate.v1.DeleteAllUserLoginStatesRequest + 6, // 8: teleport.userloginstate.v1.UserLoginStateService.ListUserLoginStates:input_type -> teleport.userloginstate.v1.ListUserLoginStatesRequest + 1, // 9: teleport.userloginstate.v1.UserLoginStateService.GetUserLoginStates:output_type -> teleport.userloginstate.v1.GetUserLoginStatesResponse + 8, // 10: 
teleport.userloginstate.v1.UserLoginStateService.GetUserLoginState:output_type -> teleport.userloginstate.v1.UserLoginState + 8, // 11: teleport.userloginstate.v1.UserLoginStateService.UpsertUserLoginState:output_type -> teleport.userloginstate.v1.UserLoginState + 9, // 12: teleport.userloginstate.v1.UserLoginStateService.DeleteUserLoginState:output_type -> google.protobuf.Empty + 9, // 13: teleport.userloginstate.v1.UserLoginStateService.DeleteAllUserLoginStates:output_type -> google.protobuf.Empty + 7, // 14: teleport.userloginstate.v1.UserLoginStateService.ListUserLoginStates:output_type -> teleport.userloginstate.v1.ListUserLoginStatesResponse + 9, // [9:15] is the sub-list for method output_type + 3, // [3:9] is the sub-list for method input_type + 3, // [3:3] is the sub-list for extension type_name + 3, // [3:3] is the sub-list for extension extendee + 0, // [0:3] is the sub-list for field type_name } func init() { file_teleport_userloginstate_v1_userloginstate_service_proto_init() } @@ -371,7 +494,7 @@ func file_teleport_userloginstate_v1_userloginstate_service_proto_init() { GoPackagePath: reflect.TypeOf(x{}).PkgPath(), RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_userloginstate_v1_userloginstate_service_proto_rawDesc), len(file_teleport_userloginstate_v1_userloginstate_service_proto_rawDesc)), NumEnums: 0, - NumMessages: 6, + NumMessages: 8, NumExtensions: 0, NumServices: 1, }, diff --git a/api/gen/proto/go/teleport/userloginstate/v1/userloginstate_service_grpc.pb.go b/api/gen/proto/go/teleport/userloginstate/v1/userloginstate_service_grpc.pb.go index 8258f7d85f09b..7821a4bd3130c 100644 --- a/api/gen/proto/go/teleport/userloginstate/v1/userloginstate_service_grpc.pb.go +++ b/api/gen/proto/go/teleport/userloginstate/v1/userloginstate_service_grpc.pb.go @@ -39,6 +39,7 @@ const ( UserLoginStateService_UpsertUserLoginState_FullMethodName = "/teleport.userloginstate.v1.UserLoginStateService/UpsertUserLoginState" 
UserLoginStateService_DeleteUserLoginState_FullMethodName = "/teleport.userloginstate.v1.UserLoginStateService/DeleteUserLoginState" UserLoginStateService_DeleteAllUserLoginStates_FullMethodName = "/teleport.userloginstate.v1.UserLoginStateService/DeleteAllUserLoginStates" + UserLoginStateService_ListUserLoginStates_FullMethodName = "/teleport.userloginstate.v1.UserLoginStateService/ListUserLoginStates" ) // UserLoginStateServiceClient is the client API for UserLoginStateService service. @@ -48,6 +49,7 @@ const ( // UserLoginStateService provides CRUD methods for user login state resources. type UserLoginStateServiceClient interface { // GetUserLoginStates returns a list of all user login states. + // Deprecated: Use ListUserLoginStates instead. GetUserLoginStates(ctx context.Context, in *GetUserLoginStatesRequest, opts ...grpc.CallOption) (*GetUserLoginStatesResponse, error) // GetUserLoginState returns the specified user login state resource. GetUserLoginState(ctx context.Context, in *GetUserLoginStateRequest, opts ...grpc.CallOption) (*UserLoginState, error) @@ -57,6 +59,8 @@ type UserLoginStateServiceClient interface { DeleteUserLoginState(ctx context.Context, in *DeleteUserLoginStateRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) // DeleteAllUserLoginStates hard deletes all user login states. DeleteAllUserLoginStates(ctx context.Context, in *DeleteAllUserLoginStatesRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) + // ListUserLoginStates lists all user login states allowing for pagination. 
+ ListUserLoginStates(ctx context.Context, in *ListUserLoginStatesRequest, opts ...grpc.CallOption) (*ListUserLoginStatesResponse, error) } type userLoginStateServiceClient struct { @@ -117,6 +121,16 @@ func (c *userLoginStateServiceClient) DeleteAllUserLoginStates(ctx context.Conte return out, nil } +func (c *userLoginStateServiceClient) ListUserLoginStates(ctx context.Context, in *ListUserLoginStatesRequest, opts ...grpc.CallOption) (*ListUserLoginStatesResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListUserLoginStatesResponse) + err := c.cc.Invoke(ctx, UserLoginStateService_ListUserLoginStates_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + // UserLoginStateServiceServer is the server API for UserLoginStateService service. // All implementations must embed UnimplementedUserLoginStateServiceServer // for forward compatibility. @@ -124,6 +138,7 @@ func (c *userLoginStateServiceClient) DeleteAllUserLoginStates(ctx context.Conte // UserLoginStateService provides CRUD methods for user login state resources. type UserLoginStateServiceServer interface { // GetUserLoginStates returns a list of all user login states. + // Deprecated: Use ListUserLoginStates instead. GetUserLoginStates(context.Context, *GetUserLoginStatesRequest) (*GetUserLoginStatesResponse, error) // GetUserLoginState returns the specified user login state resource. GetUserLoginState(context.Context, *GetUserLoginStateRequest) (*UserLoginState, error) @@ -133,6 +148,8 @@ type UserLoginStateServiceServer interface { DeleteUserLoginState(context.Context, *DeleteUserLoginStateRequest) (*emptypb.Empty, error) // DeleteAllUserLoginStates hard deletes all user login states. DeleteAllUserLoginStates(context.Context, *DeleteAllUserLoginStatesRequest) (*emptypb.Empty, error) + // ListUserLoginStates lists all user login states allowing for pagination. 
+ ListUserLoginStates(context.Context, *ListUserLoginStatesRequest) (*ListUserLoginStatesResponse, error) mustEmbedUnimplementedUserLoginStateServiceServer() } @@ -158,6 +175,9 @@ func (UnimplementedUserLoginStateServiceServer) DeleteUserLoginState(context.Con func (UnimplementedUserLoginStateServiceServer) DeleteAllUserLoginStates(context.Context, *DeleteAllUserLoginStatesRequest) (*emptypb.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method DeleteAllUserLoginStates not implemented") } +func (UnimplementedUserLoginStateServiceServer) ListUserLoginStates(context.Context, *ListUserLoginStatesRequest) (*ListUserLoginStatesResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListUserLoginStates not implemented") +} func (UnimplementedUserLoginStateServiceServer) mustEmbedUnimplementedUserLoginStateServiceServer() {} func (UnimplementedUserLoginStateServiceServer) testEmbeddedByValue() {} @@ -269,6 +289,24 @@ func _UserLoginStateService_DeleteAllUserLoginStates_Handler(srv interface{}, ct return interceptor(ctx, in, info, handler) } +func _UserLoginStateService_ListUserLoginStates_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListUserLoginStatesRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(UserLoginStateServiceServer).ListUserLoginStates(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: UserLoginStateService_ListUserLoginStates_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(UserLoginStateServiceServer).ListUserLoginStates(ctx, req.(*ListUserLoginStatesRequest)) + } + return interceptor(ctx, in, info, handler) +} + // UserLoginStateService_ServiceDesc is the grpc.ServiceDesc for UserLoginStateService service. 
// It's only intended for direct use with grpc.RegisterService, // and not to be introspected or modified (even as a copy) @@ -296,6 +334,10 @@ var UserLoginStateService_ServiceDesc = grpc.ServiceDesc{ MethodName: "DeleteAllUserLoginStates", Handler: _UserLoginStateService_DeleteAllUserLoginStates_Handler, }, + { + MethodName: "ListUserLoginStates", + Handler: _UserLoginStateService_ListUserLoginStates_Handler, + }, }, Streams: []grpc.StreamDesc{}, Metadata: "teleport/userloginstate/v1/userloginstate_service.proto", diff --git a/api/gen/proto/go/teleport/userprovisioning/v2/statichostuser.pb.go b/api/gen/proto/go/teleport/userprovisioning/v2/statichostuser.pb.go index 92af7aeccb23b..e1b2cc8f159c6 100644 --- a/api/gen/proto/go/teleport/userprovisioning/v2/statichostuser.pb.go +++ b/api/gen/proto/go/teleport/userprovisioning/v2/statichostuser.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/userprovisioning/v2/statichostuser.proto diff --git a/api/gen/proto/go/teleport/userprovisioning/v2/statichostuser_service.pb.go b/api/gen/proto/go/teleport/userprovisioning/v2/statichostuser_service.pb.go index 9b0468c6077d4..31108fb5b7607 100644 --- a/api/gen/proto/go/teleport/userprovisioning/v2/statichostuser_service.pb.go +++ b/api/gen/proto/go/teleport/userprovisioning/v2/statichostuser_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/userprovisioning/v2/statichostuser_service.proto diff --git a/api/gen/proto/go/teleport/users/v1/users_service.pb.go b/api/gen/proto/go/teleport/users/v1/users_service.pb.go index daf57063ad614..499a634461624 100644 --- a/api/gen/proto/go/teleport/users/v1/users_service.pb.go +++ b/api/gen/proto/go/teleport/users/v1/users_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/users/v1/users_service.proto diff --git a/api/gen/proto/go/teleport/usertasks/v1/user_tasks.pb.go b/api/gen/proto/go/teleport/usertasks/v1/user_tasks.pb.go index c1f127ffa1a8a..7785888ea64c6 100644 --- a/api/gen/proto/go/teleport/usertasks/v1/user_tasks.pb.go +++ b/api/gen/proto/go/teleport/usertasks/v1/user_tasks.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/usertasks/v1/user_tasks.proto diff --git a/api/gen/proto/go/teleport/usertasks/v1/user_tasks_service.pb.go b/api/gen/proto/go/teleport/usertasks/v1/user_tasks_service.pb.go index 877021278caef..faa3096a05f05 100644 --- a/api/gen/proto/go/teleport/usertasks/v1/user_tasks_service.pb.go +++ b/api/gen/proto/go/teleport/usertasks/v1/user_tasks_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/usertasks/v1/user_tasks_service.proto diff --git a/api/gen/proto/go/teleport/vnet/v1/vnet_config.pb.go b/api/gen/proto/go/teleport/vnet/v1/vnet_config.pb.go index 13ba70217da19..52fbe32ff0a4e 100644 --- a/api/gen/proto/go/teleport/vnet/v1/vnet_config.pb.go +++ b/api/gen/proto/go/teleport/vnet/v1/vnet_config.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/vnet/v1/vnet_config.proto diff --git a/api/gen/proto/go/teleport/vnet/v1/vnet_config_service.pb.go b/api/gen/proto/go/teleport/vnet/v1/vnet_config_service.pb.go index 3e92203f78e82..e4f35c78ca2e4 100644 --- a/api/gen/proto/go/teleport/vnet/v1/vnet_config_service.pb.go +++ b/api/gen/proto/go/teleport/vnet/v1/vnet_config_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/vnet/v1/vnet_config_service.proto diff --git a/api/gen/proto/go/teleport/workloadidentity/v1/attrs.pb.go b/api/gen/proto/go/teleport/workloadidentity/v1/attrs.pb.go index 40fc920806d44..368d21cdcd5cf 100644 --- a/api/gen/proto/go/teleport/workloadidentity/v1/attrs.pb.go +++ b/api/gen/proto/go/teleport/workloadidentity/v1/attrs.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/workloadidentity/v1/attrs.proto diff --git a/api/gen/proto/go/teleport/workloadidentity/v1/issuance_service.pb.go b/api/gen/proto/go/teleport/workloadidentity/v1/issuance_service.pb.go index 14ffe10e8f6c2..c54e76b8ce608 100644 --- a/api/gen/proto/go/teleport/workloadidentity/v1/issuance_service.pb.go +++ b/api/gen/proto/go/teleport/workloadidentity/v1/issuance_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/workloadidentity/v1/issuance_service.proto diff --git a/api/gen/proto/go/teleport/workloadidentity/v1/join_attrs.pb.go b/api/gen/proto/go/teleport/workloadidentity/v1/join_attrs.pb.go index 1a9b1978765f5..1d52ebfbf83a2 100644 --- a/api/gen/proto/go/teleport/workloadidentity/v1/join_attrs.pb.go +++ b/api/gen/proto/go/teleport/workloadidentity/v1/join_attrs.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/workloadidentity/v1/join_attrs.proto @@ -66,7 +66,9 @@ type JoinAttrs struct { // Attributes that are specific to the Oracle (`oracle`) join method. Oracle *JoinAttrsOracle `protobuf:"bytes,13,opt,name=oracle,proto3" json:"oracle,omitempty"` // Attributes that are specific to the Azure Devops (`azure_devops`) join method. - AzureDevops *JoinAttrsAzureDevops `protobuf:"bytes,14,opt,name=azure_devops,json=azureDevops,proto3" json:"azure_devops,omitempty"` + AzureDevops *JoinAttrsAzureDevops `protobuf:"bytes,14,opt,name=azure_devops,json=azureDevops,proto3" json:"azure_devops,omitempty"` + // Attributes that are specific to the Env0 (`env0`) join method. 
+ Env0 *JoinAttrsEnv0 `protobuf:"bytes,15,opt,name=env0,proto3" json:"env0,omitempty"` unknownFields protoimpl.UnknownFields sizeCache protoimpl.SizeCache } @@ -199,6 +201,13 @@ func (x *JoinAttrs) GetAzureDevops() *JoinAttrsAzureDevops { return nil } +func (x *JoinAttrs) GetEnv0() *JoinAttrsEnv0 { + if x != nil { + return x.Env0 + } + return nil +} + // The collection of attributes that result from the join process but are not // specific to any particular join method. type JoinAttrsMeta struct { @@ -1717,11 +1726,176 @@ func (x *JoinAttrsAzureDevopsPipeline) GetRunId() string { return "" } +// Attributes that are specific to the Env0 (`env0`) join method. +type JoinAttrsEnv0 struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The `sub` claim of an Env0 OIDC token. + Sub string `protobuf:"bytes,1,opt,name=sub,proto3" json:"sub,omitempty"` + // The unique organization identifier, corresponding to `organizationId` in an + // Env0 OIDC token. + OrganizationId string `protobuf:"bytes,2,opt,name=organization_id,json=organizationId,proto3" json:"organization_id,omitempty"` + // The unique project identifier, corresponding to `projectId` in an Env0 OIDC + // token. + ProjectId string `protobuf:"bytes,3,opt,name=project_id,json=projectId,proto3" json:"project_id,omitempty"` + // The name of the project under which the job was run corresponding to + // `projectName` in an Env0 OIDC token. + ProjectName string `protobuf:"bytes,4,opt,name=project_name,json=projectName,proto3" json:"project_name,omitempty"` + // The unique identifier of the Env0 template, corresponding to `templateId` + // in an Env0 OIDC token. + TemplateId string `protobuf:"bytes,5,opt,name=template_id,json=templateId,proto3" json:"template_id,omitempty"` + // The name of the Env0 template, corresponding to `templateName` in an Env0 + // OIDC token. 
+ TemplateName string `protobuf:"bytes,6,opt,name=template_name,json=templateName,proto3" json:"template_name,omitempty"` + // The unique identifier of the Env0 environment, corresponding to + // `environmentId` in an Env0 OIDC token. + EnvironmentId string `protobuf:"bytes,7,opt,name=environment_id,json=environmentId,proto3" json:"environment_id,omitempty"` + // The name of the Env0 environment, corresponding to `environmentName` in an + // Env0 OIDC token. + EnvironmentName string `protobuf:"bytes,8,opt,name=environment_name,json=environmentName,proto3" json:"environment_name,omitempty"` + // The name of the Env0 workspace, corresponding to `workspaceName` in an Env0 + // OIDC token. + WorkspaceName string `protobuf:"bytes,9,opt,name=workspace_name,json=workspaceName,proto3" json:"workspace_name,omitempty"` + // A unique ID for this deployment, corresponding to `deploymentLogId` in an + // Env0 OIDC token. + DeploymentLogId string `protobuf:"bytes,10,opt,name=deployment_log_id,json=deploymentLogId,proto3" json:"deployment_log_id,omitempty"` + // The env0 deployment type, such as "deploy", "destroy", etc. Corresponds to + // `deploymentType` in an Env0 OIDC token. + DeploymentType string `protobuf:"bytes,11,opt,name=deployment_type,json=deploymentType,proto3" json:"deployment_type,omitempty"` + // The email of the person that triggered the deployment, corresponding to + // `deployerEmail` in an Env0 OIDC token. + DeployerEmail string `protobuf:"bytes,12,opt,name=deployer_email,json=deployerEmail,proto3" json:"deployer_email,omitempty"` + // A custom tag value corresponding to `env0Tag` when `ENV0_OIDC_TAG` is set. 
+ Env0Tag string `protobuf:"bytes,13,opt,name=env0_tag,json=env0Tag,proto3" json:"env0_tag,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *JoinAttrsEnv0) Reset() { + *x = JoinAttrsEnv0{} + mi := &file_teleport_workloadidentity_v1_join_attrs_proto_msgTypes[19] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *JoinAttrsEnv0) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*JoinAttrsEnv0) ProtoMessage() {} + +func (x *JoinAttrsEnv0) ProtoReflect() protoreflect.Message { + mi := &file_teleport_workloadidentity_v1_join_attrs_proto_msgTypes[19] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use JoinAttrsEnv0.ProtoReflect.Descriptor instead. +func (*JoinAttrsEnv0) Descriptor() ([]byte, []int) { + return file_teleport_workloadidentity_v1_join_attrs_proto_rawDescGZIP(), []int{19} +} + +func (x *JoinAttrsEnv0) GetSub() string { + if x != nil { + return x.Sub + } + return "" +} + +func (x *JoinAttrsEnv0) GetOrganizationId() string { + if x != nil { + return x.OrganizationId + } + return "" +} + +func (x *JoinAttrsEnv0) GetProjectId() string { + if x != nil { + return x.ProjectId + } + return "" +} + +func (x *JoinAttrsEnv0) GetProjectName() string { + if x != nil { + return x.ProjectName + } + return "" +} + +func (x *JoinAttrsEnv0) GetTemplateId() string { + if x != nil { + return x.TemplateId + } + return "" +} + +func (x *JoinAttrsEnv0) GetTemplateName() string { + if x != nil { + return x.TemplateName + } + return "" +} + +func (x *JoinAttrsEnv0) GetEnvironmentId() string { + if x != nil { + return x.EnvironmentId + } + return "" +} + +func (x *JoinAttrsEnv0) GetEnvironmentName() string { + if x != nil { + return x.EnvironmentName + } + return "" +} + +func (x *JoinAttrsEnv0) GetWorkspaceName() string { 
+ if x != nil { + return x.WorkspaceName + } + return "" +} + +func (x *JoinAttrsEnv0) GetDeploymentLogId() string { + if x != nil { + return x.DeploymentLogId + } + return "" +} + +func (x *JoinAttrsEnv0) GetDeploymentType() string { + if x != nil { + return x.DeploymentType + } + return "" +} + +func (x *JoinAttrsEnv0) GetDeployerEmail() string { + if x != nil { + return x.DeployerEmail + } + return "" +} + +func (x *JoinAttrsEnv0) GetEnv0Tag() string { + if x != nil { + return x.Env0Tag + } + return "" +} + var File_teleport_workloadidentity_v1_join_attrs_proto protoreflect.FileDescriptor const file_teleport_workloadidentity_v1_join_attrs_proto_rawDesc = "" + "\n" + - "-teleport/workloadidentity/v1/join_attrs.proto\x12\x1cteleport.workloadidentity.v1\"\x99\b\n" + + "-teleport/workloadidentity/v1/join_attrs.proto\x12\x1cteleport.workloadidentity.v1\"\xda\b\n" + "\tJoinAttrs\x12?\n" + "\x04meta\x18\x01 \x01(\v2+.teleport.workloadidentity.v1.JoinAttrsMetaR\x04meta\x12E\n" + "\x06gitlab\x18\x02 \x01(\v2-.teleport.workloadidentity.v1.JoinAttrsGitLabR\x06gitlab\x12E\n" + @@ -1739,7 +1913,8 @@ const file_teleport_workloadidentity_v1_join_attrs_proto_rawDesc = "" + "kubernetes\x18\f \x01(\v21.teleport.workloadidentity.v1.JoinAttrsKubernetesR\n" + "kubernetes\x12E\n" + "\x06oracle\x18\r \x01(\v2-.teleport.workloadidentity.v1.JoinAttrsOracleR\x06oracle\x12U\n" + - "\fazure_devops\x18\x0e \x01(\v22.teleport.workloadidentity.v1.JoinAttrsAzureDevopsR\vazureDevops\"X\n" + + "\fazure_devops\x18\x0e \x01(\v22.teleport.workloadidentity.v1.JoinAttrsAzureDevopsR\vazureDevops\x12?\n" + + "\x04env0\x18\x0f \x01(\v2+.teleport.workloadidentity.v1.JoinAttrsEnv0R\x04env0\"X\n" + "\rJoinAttrsMeta\x12&\n" + "\x0fjoin_token_name\x18\x01 \x01(\tR\rjoinTokenName\x12\x1f\n" + "\vjoin_method\x18\x02 \x01(\tR\n" + @@ -1862,7 +2037,24 @@ const file_teleport_workloadidentity_v1_join_attrs_proto_rawDesc = "" + "\x12repository_version\x18\t \x01(\tR\x11repositoryVersion\x12%\n" + 
"\x0erepository_ref\x18\n" + " \x01(\tR\rrepositoryRef\x12\x15\n" + - "\x06run_id\x18\v \x01(\tR\x05runIdBdZbgithub.com/gravitational/teleport/api/gen/proto/go/teleport/workloadidentity/v1;workloadidentityv1b\x06proto3" + "\x06run_id\x18\v \x01(\tR\x05runId\"\xe2\x03\n" + + "\rJoinAttrsEnv0\x12\x10\n" + + "\x03sub\x18\x01 \x01(\tR\x03sub\x12'\n" + + "\x0forganization_id\x18\x02 \x01(\tR\x0eorganizationId\x12\x1d\n" + + "\n" + + "project_id\x18\x03 \x01(\tR\tprojectId\x12!\n" + + "\fproject_name\x18\x04 \x01(\tR\vprojectName\x12\x1f\n" + + "\vtemplate_id\x18\x05 \x01(\tR\n" + + "templateId\x12#\n" + + "\rtemplate_name\x18\x06 \x01(\tR\ftemplateName\x12%\n" + + "\x0eenvironment_id\x18\a \x01(\tR\renvironmentId\x12)\n" + + "\x10environment_name\x18\b \x01(\tR\x0fenvironmentName\x12%\n" + + "\x0eworkspace_name\x18\t \x01(\tR\rworkspaceName\x12*\n" + + "\x11deployment_log_id\x18\n" + + " \x01(\tR\x0fdeploymentLogId\x12'\n" + + "\x0fdeployment_type\x18\v \x01(\tR\x0edeploymentType\x12%\n" + + "\x0edeployer_email\x18\f \x01(\tR\rdeployerEmail\x12\x19\n" + + "\benv0_tag\x18\r \x01(\tR\aenv0TagBdZbgithub.com/gravitational/teleport/api/gen/proto/go/teleport/workloadidentity/v1;workloadidentityv1b\x06proto3" var ( file_teleport_workloadidentity_v1_join_attrs_proto_rawDescOnce sync.Once @@ -1876,7 +2068,7 @@ func file_teleport_workloadidentity_v1_join_attrs_proto_rawDescGZIP() []byte { return file_teleport_workloadidentity_v1_join_attrs_proto_rawDescData } -var file_teleport_workloadidentity_v1_join_attrs_proto_msgTypes = make([]protoimpl.MessageInfo, 19) +var file_teleport_workloadidentity_v1_join_attrs_proto_msgTypes = make([]protoimpl.MessageInfo, 20) var file_teleport_workloadidentity_v1_join_attrs_proto_goTypes = []any{ (*JoinAttrs)(nil), // 0: teleport.workloadidentity.v1.JoinAttrs (*JoinAttrsMeta)(nil), // 1: teleport.workloadidentity.v1.JoinAttrsMeta @@ -1897,6 +2089,7 @@ var file_teleport_workloadidentity_v1_join_attrs_proto_goTypes = []any{ (*JoinAttrsOracle)(nil), 
// 16: teleport.workloadidentity.v1.JoinAttrsOracle (*JoinAttrsAzureDevops)(nil), // 17: teleport.workloadidentity.v1.JoinAttrsAzureDevops (*JoinAttrsAzureDevopsPipeline)(nil), // 18: teleport.workloadidentity.v1.JoinAttrsAzureDevopsPipeline + (*JoinAttrsEnv0)(nil), // 19: teleport.workloadidentity.v1.JoinAttrsEnv0 } var file_teleport_workloadidentity_v1_join_attrs_proto_depIdxs = []int32{ 1, // 0: teleport.workloadidentity.v1.JoinAttrs.meta:type_name -> teleport.workloadidentity.v1.JoinAttrsMeta @@ -1913,15 +2106,16 @@ var file_teleport_workloadidentity_v1_join_attrs_proto_depIdxs = []int32{ 15, // 11: teleport.workloadidentity.v1.JoinAttrs.kubernetes:type_name -> teleport.workloadidentity.v1.JoinAttrsKubernetes 16, // 12: teleport.workloadidentity.v1.JoinAttrs.oracle:type_name -> teleport.workloadidentity.v1.JoinAttrsOracle 17, // 13: teleport.workloadidentity.v1.JoinAttrs.azure_devops:type_name -> teleport.workloadidentity.v1.JoinAttrsAzureDevops - 11, // 14: teleport.workloadidentity.v1.JoinAttrsGCP.gce:type_name -> teleport.workloadidentity.v1.JoinAttrsGCPGCE - 14, // 15: teleport.workloadidentity.v1.JoinAttrsKubernetes.service_account:type_name -> teleport.workloadidentity.v1.JoinAttrsKubernetesServiceAccount - 13, // 16: teleport.workloadidentity.v1.JoinAttrsKubernetes.pod:type_name -> teleport.workloadidentity.v1.JoinAttrsKubernetesPod - 18, // 17: teleport.workloadidentity.v1.JoinAttrsAzureDevops.pipeline:type_name -> teleport.workloadidentity.v1.JoinAttrsAzureDevopsPipeline - 18, // [18:18] is the sub-list for method output_type - 18, // [18:18] is the sub-list for method input_type - 18, // [18:18] is the sub-list for extension type_name - 18, // [18:18] is the sub-list for extension extendee - 0, // [0:18] is the sub-list for field type_name + 19, // 14: teleport.workloadidentity.v1.JoinAttrs.env0:type_name -> teleport.workloadidentity.v1.JoinAttrsEnv0 + 11, // 15: teleport.workloadidentity.v1.JoinAttrsGCP.gce:type_name -> 
teleport.workloadidentity.v1.JoinAttrsGCPGCE + 14, // 16: teleport.workloadidentity.v1.JoinAttrsKubernetes.service_account:type_name -> teleport.workloadidentity.v1.JoinAttrsKubernetesServiceAccount + 13, // 17: teleport.workloadidentity.v1.JoinAttrsKubernetes.pod:type_name -> teleport.workloadidentity.v1.JoinAttrsKubernetesPod + 18, // 18: teleport.workloadidentity.v1.JoinAttrsAzureDevops.pipeline:type_name -> teleport.workloadidentity.v1.JoinAttrsAzureDevopsPipeline + 19, // [19:19] is the sub-list for method output_type + 19, // [19:19] is the sub-list for method input_type + 19, // [19:19] is the sub-list for extension type_name + 19, // [19:19] is the sub-list for extension extendee + 0, // [0:19] is the sub-list for field type_name } func init() { file_teleport_workloadidentity_v1_join_attrs_proto_init() } @@ -1935,7 +2129,7 @@ func file_teleport_workloadidentity_v1_join_attrs_proto_init() { GoPackagePath: reflect.TypeOf(x{}).PkgPath(), RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_workloadidentity_v1_join_attrs_proto_rawDesc), len(file_teleport_workloadidentity_v1_join_attrs_proto_rawDesc)), NumEnums: 0, - NumMessages: 19, + NumMessages: 20, NumExtensions: 0, NumServices: 0, }, diff --git a/api/gen/proto/go/teleport/workloadidentity/v1/resource.pb.go b/api/gen/proto/go/teleport/workloadidentity/v1/resource.pb.go index fbb9d23983ea8..4d1396aa98b87 100644 --- a/api/gen/proto/go/teleport/workloadidentity/v1/resource.pb.go +++ b/api/gen/proto/go/teleport/workloadidentity/v1/resource.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/workloadidentity/v1/resource.proto diff --git a/api/gen/proto/go/teleport/workloadidentity/v1/resource_service.pb.go b/api/gen/proto/go/teleport/workloadidentity/v1/resource_service.pb.go index 9d080a5b7e61c..334455226ba38 100644 --- a/api/gen/proto/go/teleport/workloadidentity/v1/resource_service.pb.go +++ b/api/gen/proto/go/teleport/workloadidentity/v1/resource_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/workloadidentity/v1/resource_service.proto @@ -322,6 +322,89 @@ func (x *ListWorkloadIdentitiesRequest) GetPageToken() string { return "" } +// The request for ListWorkloadIdentitiesV2. +type ListWorkloadIdentitiesV2Request struct { + state protoimpl.MessageState `protogen:"open.v1"` + // The maximum number of items to return. + // The server may impose a different page size at its discretion. + PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize,proto3" json:"page_size,omitempty"` + // The page_token value returned from a previous ListWorkloadIdentities request, if any. + PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken,proto3" json:"page_token,omitempty"` + // The sort field to use for the results. If empty, the default sort field is used. + SortField string `protobuf:"bytes,3,opt,name=sort_field,json=sortField,proto3" json:"sort_field,omitempty"` + // The sort order to use for the results. If empty, the default sort order is used. + SortDesc bool `protobuf:"varint,4,opt,name=sort_desc,json=sortDesc,proto3" json:"sort_desc,omitempty"` + // A search term used to filter the results. If non-empty, it's used to match against supported fields. 
+ FilterSearchTerm string `protobuf:"bytes,5,opt,name=filter_search_term,json=filterSearchTerm,proto3" json:"filter_search_term,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *ListWorkloadIdentitiesV2Request) Reset() { + *x = ListWorkloadIdentitiesV2Request{} + mi := &file_teleport_workloadidentity_v1_resource_service_proto_msgTypes[6] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *ListWorkloadIdentitiesV2Request) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*ListWorkloadIdentitiesV2Request) ProtoMessage() {} + +func (x *ListWorkloadIdentitiesV2Request) ProtoReflect() protoreflect.Message { + mi := &file_teleport_workloadidentity_v1_resource_service_proto_msgTypes[6] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use ListWorkloadIdentitiesV2Request.ProtoReflect.Descriptor instead. +func (*ListWorkloadIdentitiesV2Request) Descriptor() ([]byte, []int) { + return file_teleport_workloadidentity_v1_resource_service_proto_rawDescGZIP(), []int{6} +} + +func (x *ListWorkloadIdentitiesV2Request) GetPageSize() int32 { + if x != nil { + return x.PageSize + } + return 0 +} + +func (x *ListWorkloadIdentitiesV2Request) GetPageToken() string { + if x != nil { + return x.PageToken + } + return "" +} + +func (x *ListWorkloadIdentitiesV2Request) GetSortField() string { + if x != nil { + return x.SortField + } + return "" +} + +func (x *ListWorkloadIdentitiesV2Request) GetSortDesc() bool { + if x != nil { + return x.SortDesc + } + return false +} + +func (x *ListWorkloadIdentitiesV2Request) GetFilterSearchTerm() string { + if x != nil { + return x.FilterSearchTerm + } + return "" +} + // The response for ListWorkloadIdentities. 
type ListWorkloadIdentitiesResponse struct { state protoimpl.MessageState `protogen:"open.v1"` @@ -336,7 +419,7 @@ type ListWorkloadIdentitiesResponse struct { func (x *ListWorkloadIdentitiesResponse) Reset() { *x = ListWorkloadIdentitiesResponse{} - mi := &file_teleport_workloadidentity_v1_resource_service_proto_msgTypes[6] + mi := &file_teleport_workloadidentity_v1_resource_service_proto_msgTypes[7] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -348,7 +431,7 @@ func (x *ListWorkloadIdentitiesResponse) String() string { func (*ListWorkloadIdentitiesResponse) ProtoMessage() {} func (x *ListWorkloadIdentitiesResponse) ProtoReflect() protoreflect.Message { - mi := &file_teleport_workloadidentity_v1_resource_service_proto_msgTypes[6] + mi := &file_teleport_workloadidentity_v1_resource_service_proto_msgTypes[7] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -361,7 +444,7 @@ func (x *ListWorkloadIdentitiesResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use ListWorkloadIdentitiesResponse.ProtoReflect.Descriptor instead. 
func (*ListWorkloadIdentitiesResponse) Descriptor() ([]byte, []int) { - return file_teleport_workloadidentity_v1_resource_service_proto_rawDescGZIP(), []int{6} + return file_teleport_workloadidentity_v1_resource_service_proto_rawDescGZIP(), []int{7} } func (x *ListWorkloadIdentitiesResponse) GetWorkloadIdentities() []*WorkloadIdentity { @@ -396,17 +479,26 @@ const file_teleport_workloadidentity_v1_resource_service_proto_rawDesc = "" + "\x1dListWorkloadIdentitiesRequest\x12\x1b\n" + "\tpage_size\x18\x01 \x01(\x05R\bpageSize\x12\x1d\n" + "\n" + - "page_token\x18\x02 \x01(\tR\tpageToken\"\xa9\x01\n" + + "page_token\x18\x02 \x01(\tR\tpageToken\"\xc7\x01\n" + + "\x1fListWorkloadIdentitiesV2Request\x12\x1b\n" + + "\tpage_size\x18\x01 \x01(\x05R\bpageSize\x12\x1d\n" + + "\n" + + "page_token\x18\x02 \x01(\tR\tpageToken\x12\x1d\n" + + "\n" + + "sort_field\x18\x03 \x01(\tR\tsortField\x12\x1b\n" + + "\tsort_desc\x18\x04 \x01(\bR\bsortDesc\x12,\n" + + "\x12filter_search_term\x18\x05 \x01(\tR\x10filterSearchTerm\"\xa9\x01\n" + "\x1eListWorkloadIdentitiesResponse\x12_\n" + "\x13workload_identities\x18\x01 \x03(\v2..teleport.workloadidentity.v1.WorkloadIdentityR\x12workloadIdentities\x12&\n" + - "\x0fnext_page_token\x18\x02 \x01(\tR\rnextPageToken2\xbf\x06\n" + + "\x0fnext_page_token\x18\x02 \x01(\tR\rnextPageToken2\xd9\a\n" + "\x1fWorkloadIdentityResourceService\x12\x85\x01\n" + "\x16CreateWorkloadIdentity\x12;.teleport.workloadidentity.v1.CreateWorkloadIdentityRequest\x1a..teleport.workloadidentity.v1.WorkloadIdentity\x12\x85\x01\n" + "\x16UpdateWorkloadIdentity\x12;.teleport.workloadidentity.v1.UpdateWorkloadIdentityRequest\x1a..teleport.workloadidentity.v1.WorkloadIdentity\x12\x85\x01\n" + "\x16UpsertWorkloadIdentity\x12;.teleport.workloadidentity.v1.UpsertWorkloadIdentityRequest\x1a..teleport.workloadidentity.v1.WorkloadIdentity\x12\x7f\n" + 
"\x13GetWorkloadIdentity\x128.teleport.workloadidentity.v1.GetWorkloadIdentityRequest\x1a..teleport.workloadidentity.v1.WorkloadIdentity\x12m\n" + "\x16DeleteWorkloadIdentity\x12;.teleport.workloadidentity.v1.DeleteWorkloadIdentityRequest\x1a\x16.google.protobuf.Empty\x12\x93\x01\n" + - "\x16ListWorkloadIdentities\x12;.teleport.workloadidentity.v1.ListWorkloadIdentitiesRequest\x1a<.teleport.workloadidentity.v1.ListWorkloadIdentitiesResponseBdZbgithub.com/gravitational/teleport/api/gen/proto/go/teleport/workloadidentity/v1;workloadidentityv1b\x06proto3" + "\x16ListWorkloadIdentities\x12;.teleport.workloadidentity.v1.ListWorkloadIdentitiesRequest\x1a<.teleport.workloadidentity.v1.ListWorkloadIdentitiesResponse\x12\x97\x01\n" + + "\x18ListWorkloadIdentitiesV2\x12=.teleport.workloadidentity.v1.ListWorkloadIdentitiesV2Request\x1a<.teleport.workloadidentity.v1.ListWorkloadIdentitiesResponseBdZbgithub.com/gravitational/teleport/api/gen/proto/go/teleport/workloadidentity/v1;workloadidentityv1b\x06proto3" var ( file_teleport_workloadidentity_v1_resource_service_proto_rawDescOnce sync.Once @@ -420,37 +512,40 @@ func file_teleport_workloadidentity_v1_resource_service_proto_rawDescGZIP() []by return file_teleport_workloadidentity_v1_resource_service_proto_rawDescData } -var file_teleport_workloadidentity_v1_resource_service_proto_msgTypes = make([]protoimpl.MessageInfo, 7) +var file_teleport_workloadidentity_v1_resource_service_proto_msgTypes = make([]protoimpl.MessageInfo, 8) var file_teleport_workloadidentity_v1_resource_service_proto_goTypes = []any{ - (*CreateWorkloadIdentityRequest)(nil), // 0: teleport.workloadidentity.v1.CreateWorkloadIdentityRequest - (*UpdateWorkloadIdentityRequest)(nil), // 1: teleport.workloadidentity.v1.UpdateWorkloadIdentityRequest - (*UpsertWorkloadIdentityRequest)(nil), // 2: teleport.workloadidentity.v1.UpsertWorkloadIdentityRequest - (*GetWorkloadIdentityRequest)(nil), // 3: teleport.workloadidentity.v1.GetWorkloadIdentityRequest - 
(*DeleteWorkloadIdentityRequest)(nil), // 4: teleport.workloadidentity.v1.DeleteWorkloadIdentityRequest - (*ListWorkloadIdentitiesRequest)(nil), // 5: teleport.workloadidentity.v1.ListWorkloadIdentitiesRequest - (*ListWorkloadIdentitiesResponse)(nil), // 6: teleport.workloadidentity.v1.ListWorkloadIdentitiesResponse - (*WorkloadIdentity)(nil), // 7: teleport.workloadidentity.v1.WorkloadIdentity - (*emptypb.Empty)(nil), // 8: google.protobuf.Empty + (*CreateWorkloadIdentityRequest)(nil), // 0: teleport.workloadidentity.v1.CreateWorkloadIdentityRequest + (*UpdateWorkloadIdentityRequest)(nil), // 1: teleport.workloadidentity.v1.UpdateWorkloadIdentityRequest + (*UpsertWorkloadIdentityRequest)(nil), // 2: teleport.workloadidentity.v1.UpsertWorkloadIdentityRequest + (*GetWorkloadIdentityRequest)(nil), // 3: teleport.workloadidentity.v1.GetWorkloadIdentityRequest + (*DeleteWorkloadIdentityRequest)(nil), // 4: teleport.workloadidentity.v1.DeleteWorkloadIdentityRequest + (*ListWorkloadIdentitiesRequest)(nil), // 5: teleport.workloadidentity.v1.ListWorkloadIdentitiesRequest + (*ListWorkloadIdentitiesV2Request)(nil), // 6: teleport.workloadidentity.v1.ListWorkloadIdentitiesV2Request + (*ListWorkloadIdentitiesResponse)(nil), // 7: teleport.workloadidentity.v1.ListWorkloadIdentitiesResponse + (*WorkloadIdentity)(nil), // 8: teleport.workloadidentity.v1.WorkloadIdentity + (*emptypb.Empty)(nil), // 9: google.protobuf.Empty } var file_teleport_workloadidentity_v1_resource_service_proto_depIdxs = []int32{ - 7, // 0: teleport.workloadidentity.v1.CreateWorkloadIdentityRequest.workload_identity:type_name -> teleport.workloadidentity.v1.WorkloadIdentity - 7, // 1: teleport.workloadidentity.v1.UpdateWorkloadIdentityRequest.workload_identity:type_name -> teleport.workloadidentity.v1.WorkloadIdentity - 7, // 2: teleport.workloadidentity.v1.UpsertWorkloadIdentityRequest.workload_identity:type_name -> teleport.workloadidentity.v1.WorkloadIdentity - 7, // 3: 
teleport.workloadidentity.v1.ListWorkloadIdentitiesResponse.workload_identities:type_name -> teleport.workloadidentity.v1.WorkloadIdentity + 8, // 0: teleport.workloadidentity.v1.CreateWorkloadIdentityRequest.workload_identity:type_name -> teleport.workloadidentity.v1.WorkloadIdentity + 8, // 1: teleport.workloadidentity.v1.UpdateWorkloadIdentityRequest.workload_identity:type_name -> teleport.workloadidentity.v1.WorkloadIdentity + 8, // 2: teleport.workloadidentity.v1.UpsertWorkloadIdentityRequest.workload_identity:type_name -> teleport.workloadidentity.v1.WorkloadIdentity + 8, // 3: teleport.workloadidentity.v1.ListWorkloadIdentitiesResponse.workload_identities:type_name -> teleport.workloadidentity.v1.WorkloadIdentity 0, // 4: teleport.workloadidentity.v1.WorkloadIdentityResourceService.CreateWorkloadIdentity:input_type -> teleport.workloadidentity.v1.CreateWorkloadIdentityRequest 1, // 5: teleport.workloadidentity.v1.WorkloadIdentityResourceService.UpdateWorkloadIdentity:input_type -> teleport.workloadidentity.v1.UpdateWorkloadIdentityRequest 2, // 6: teleport.workloadidentity.v1.WorkloadIdentityResourceService.UpsertWorkloadIdentity:input_type -> teleport.workloadidentity.v1.UpsertWorkloadIdentityRequest 3, // 7: teleport.workloadidentity.v1.WorkloadIdentityResourceService.GetWorkloadIdentity:input_type -> teleport.workloadidentity.v1.GetWorkloadIdentityRequest 4, // 8: teleport.workloadidentity.v1.WorkloadIdentityResourceService.DeleteWorkloadIdentity:input_type -> teleport.workloadidentity.v1.DeleteWorkloadIdentityRequest 5, // 9: teleport.workloadidentity.v1.WorkloadIdentityResourceService.ListWorkloadIdentities:input_type -> teleport.workloadidentity.v1.ListWorkloadIdentitiesRequest - 7, // 10: teleport.workloadidentity.v1.WorkloadIdentityResourceService.CreateWorkloadIdentity:output_type -> teleport.workloadidentity.v1.WorkloadIdentity - 7, // 11: teleport.workloadidentity.v1.WorkloadIdentityResourceService.UpdateWorkloadIdentity:output_type -> 
teleport.workloadidentity.v1.WorkloadIdentity - 7, // 12: teleport.workloadidentity.v1.WorkloadIdentityResourceService.UpsertWorkloadIdentity:output_type -> teleport.workloadidentity.v1.WorkloadIdentity - 7, // 13: teleport.workloadidentity.v1.WorkloadIdentityResourceService.GetWorkloadIdentity:output_type -> teleport.workloadidentity.v1.WorkloadIdentity - 8, // 14: teleport.workloadidentity.v1.WorkloadIdentityResourceService.DeleteWorkloadIdentity:output_type -> google.protobuf.Empty - 6, // 15: teleport.workloadidentity.v1.WorkloadIdentityResourceService.ListWorkloadIdentities:output_type -> teleport.workloadidentity.v1.ListWorkloadIdentitiesResponse - 10, // [10:16] is the sub-list for method output_type - 4, // [4:10] is the sub-list for method input_type + 6, // 10: teleport.workloadidentity.v1.WorkloadIdentityResourceService.ListWorkloadIdentitiesV2:input_type -> teleport.workloadidentity.v1.ListWorkloadIdentitiesV2Request + 8, // 11: teleport.workloadidentity.v1.WorkloadIdentityResourceService.CreateWorkloadIdentity:output_type -> teleport.workloadidentity.v1.WorkloadIdentity + 8, // 12: teleport.workloadidentity.v1.WorkloadIdentityResourceService.UpdateWorkloadIdentity:output_type -> teleport.workloadidentity.v1.WorkloadIdentity + 8, // 13: teleport.workloadidentity.v1.WorkloadIdentityResourceService.UpsertWorkloadIdentity:output_type -> teleport.workloadidentity.v1.WorkloadIdentity + 8, // 14: teleport.workloadidentity.v1.WorkloadIdentityResourceService.GetWorkloadIdentity:output_type -> teleport.workloadidentity.v1.WorkloadIdentity + 9, // 15: teleport.workloadidentity.v1.WorkloadIdentityResourceService.DeleteWorkloadIdentity:output_type -> google.protobuf.Empty + 7, // 16: teleport.workloadidentity.v1.WorkloadIdentityResourceService.ListWorkloadIdentities:output_type -> teleport.workloadidentity.v1.ListWorkloadIdentitiesResponse + 7, // 17: teleport.workloadidentity.v1.WorkloadIdentityResourceService.ListWorkloadIdentitiesV2:output_type -> 
teleport.workloadidentity.v1.ListWorkloadIdentitiesResponse + 11, // [11:18] is the sub-list for method output_type + 4, // [4:11] is the sub-list for method input_type 4, // [4:4] is the sub-list for extension type_name 4, // [4:4] is the sub-list for extension extendee 0, // [0:4] is the sub-list for field type_name @@ -468,7 +563,7 @@ func file_teleport_workloadidentity_v1_resource_service_proto_init() { GoPackagePath: reflect.TypeOf(x{}).PkgPath(), RawDescriptor: unsafe.Slice(unsafe.StringData(file_teleport_workloadidentity_v1_resource_service_proto_rawDesc), len(file_teleport_workloadidentity_v1_resource_service_proto_rawDesc)), NumEnums: 0, - NumMessages: 7, + NumMessages: 8, NumExtensions: 0, NumServices: 1, }, diff --git a/api/gen/proto/go/teleport/workloadidentity/v1/resource_service_grpc.pb.go b/api/gen/proto/go/teleport/workloadidentity/v1/resource_service_grpc.pb.go index aea5810b7f29a..b5a5b450eb248 100644 --- a/api/gen/proto/go/teleport/workloadidentity/v1/resource_service_grpc.pb.go +++ b/api/gen/proto/go/teleport/workloadidentity/v1/resource_service_grpc.pb.go @@ -34,12 +34,13 @@ import ( const _ = grpc.SupportPackageIsVersion9 const ( - WorkloadIdentityResourceService_CreateWorkloadIdentity_FullMethodName = "/teleport.workloadidentity.v1.WorkloadIdentityResourceService/CreateWorkloadIdentity" - WorkloadIdentityResourceService_UpdateWorkloadIdentity_FullMethodName = "/teleport.workloadidentity.v1.WorkloadIdentityResourceService/UpdateWorkloadIdentity" - WorkloadIdentityResourceService_UpsertWorkloadIdentity_FullMethodName = "/teleport.workloadidentity.v1.WorkloadIdentityResourceService/UpsertWorkloadIdentity" - WorkloadIdentityResourceService_GetWorkloadIdentity_FullMethodName = "/teleport.workloadidentity.v1.WorkloadIdentityResourceService/GetWorkloadIdentity" - WorkloadIdentityResourceService_DeleteWorkloadIdentity_FullMethodName = "/teleport.workloadidentity.v1.WorkloadIdentityResourceService/DeleteWorkloadIdentity" - 
WorkloadIdentityResourceService_ListWorkloadIdentities_FullMethodName = "/teleport.workloadidentity.v1.WorkloadIdentityResourceService/ListWorkloadIdentities" + WorkloadIdentityResourceService_CreateWorkloadIdentity_FullMethodName = "/teleport.workloadidentity.v1.WorkloadIdentityResourceService/CreateWorkloadIdentity" + WorkloadIdentityResourceService_UpdateWorkloadIdentity_FullMethodName = "/teleport.workloadidentity.v1.WorkloadIdentityResourceService/UpdateWorkloadIdentity" + WorkloadIdentityResourceService_UpsertWorkloadIdentity_FullMethodName = "/teleport.workloadidentity.v1.WorkloadIdentityResourceService/UpsertWorkloadIdentity" + WorkloadIdentityResourceService_GetWorkloadIdentity_FullMethodName = "/teleport.workloadidentity.v1.WorkloadIdentityResourceService/GetWorkloadIdentity" + WorkloadIdentityResourceService_DeleteWorkloadIdentity_FullMethodName = "/teleport.workloadidentity.v1.WorkloadIdentityResourceService/DeleteWorkloadIdentity" + WorkloadIdentityResourceService_ListWorkloadIdentities_FullMethodName = "/teleport.workloadidentity.v1.WorkloadIdentityResourceService/ListWorkloadIdentities" + WorkloadIdentityResourceService_ListWorkloadIdentitiesV2_FullMethodName = "/teleport.workloadidentity.v1.WorkloadIdentityResourceService/ListWorkloadIdentitiesV2" ) // WorkloadIdentityResourceServiceClient is the client API for WorkloadIdentityResourceService service. @@ -69,6 +70,10 @@ type WorkloadIdentityResourceServiceClient interface { // ListWorkloadIdentities of all workload identities, pagination semantics are // applied. ListWorkloadIdentities(ctx context.Context, in *ListWorkloadIdentitiesRequest, opts ...grpc.CallOption) (*ListWorkloadIdentitiesResponse, error) + // ListWorkloadIdentitiesV2 lists all workload identities, with pagination + // semantics applied.
Sorting by name or SPIFFE ID is supported, and results + // can be filtered using a search term. + ListWorkloadIdentitiesV2(ctx context.Context, in *ListWorkloadIdentitiesV2Request, opts ...grpc.CallOption) (*ListWorkloadIdentitiesResponse, error) } type workloadIdentityResourceServiceClient struct { @@ -139,6 +144,16 @@ func (c *workloadIdentityResourceServiceClient) ListWorkloadIdentities(ctx conte return out, nil } +func (c *workloadIdentityResourceServiceClient) ListWorkloadIdentitiesV2(ctx context.Context, in *ListWorkloadIdentitiesV2Request, opts ...grpc.CallOption) (*ListWorkloadIdentitiesResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(ListWorkloadIdentitiesResponse) + err := c.cc.Invoke(ctx, WorkloadIdentityResourceService_ListWorkloadIdentitiesV2_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + // WorkloadIdentityResourceServiceServer is the server API for WorkloadIdentityResourceService service. // All implementations must embed UnimplementedWorkloadIdentityResourceServiceServer // for forward compatibility. @@ -166,6 +181,10 @@ type WorkloadIdentityResourceServiceServer interface { // ListWorkloadIdentities of all workload identities, pagination semantics are // applied. ListWorkloadIdentities(context.Context, *ListWorkloadIdentitiesRequest) (*ListWorkloadIdentitiesResponse, error) + // ListWorkloadIdentitiesV2 lists all workload identities, with pagination + // semantics applied.
Sorting by name or SPIFFE ID is supported, and results + // can be filtered using a search term. + ListWorkloadIdentitiesV2(context.Context, *ListWorkloadIdentitiesV2Request) (*ListWorkloadIdentitiesResponse, error) mustEmbedUnimplementedWorkloadIdentityResourceServiceServer() } @@ -194,6 +213,9 @@ func (UnimplementedWorkloadIdentityResourceServiceServer) DeleteWorkloadIdentity func (UnimplementedWorkloadIdentityResourceServiceServer) ListWorkloadIdentities(context.Context, *ListWorkloadIdentitiesRequest) (*ListWorkloadIdentitiesResponse, error) { return nil, status.Errorf(codes.Unimplemented, "method ListWorkloadIdentities not implemented") } +func (UnimplementedWorkloadIdentityResourceServiceServer) ListWorkloadIdentitiesV2(context.Context, *ListWorkloadIdentitiesV2Request) (*ListWorkloadIdentitiesResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method ListWorkloadIdentitiesV2 not implemented") +} func (UnimplementedWorkloadIdentityResourceServiceServer) mustEmbedUnimplementedWorkloadIdentityResourceServiceServer() { } func (UnimplementedWorkloadIdentityResourceServiceServer) testEmbeddedByValue() {} @@ -324,6 +346,24 @@ func _WorkloadIdentityResourceService_ListWorkloadIdentities_Handler(srv interfa return interceptor(ctx, in, info, handler) } +func _WorkloadIdentityResourceService_ListWorkloadIdentitiesV2_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(ListWorkloadIdentitiesV2Request) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(WorkloadIdentityResourceServiceServer).ListWorkloadIdentitiesV2(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: WorkloadIdentityResourceService_ListWorkloadIdentitiesV2_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return
srv.(WorkloadIdentityResourceServiceServer).ListWorkloadIdentitiesV2(ctx, req.(*ListWorkloadIdentitiesV2Request)) + } + return interceptor(ctx, in, info, handler) +} + // WorkloadIdentityResourceService_ServiceDesc is the grpc.ServiceDesc for WorkloadIdentityResourceService service. // It's only intended for direct use with grpc.RegisterService, // and not to be introspected or modified (even as a copy) @@ -355,6 +395,10 @@ var WorkloadIdentityResourceService_ServiceDesc = grpc.ServiceDesc{ MethodName: "ListWorkloadIdentities", Handler: _WorkloadIdentityResourceService_ListWorkloadIdentities_Handler, }, + { + MethodName: "ListWorkloadIdentitiesV2", + Handler: _WorkloadIdentityResourceService_ListWorkloadIdentitiesV2_Handler, + }, }, Streams: []grpc.StreamDesc{}, Metadata: "teleport/workloadidentity/v1/resource_service.proto", diff --git a/api/gen/proto/go/teleport/workloadidentity/v1/revocation_resource.pb.go b/api/gen/proto/go/teleport/workloadidentity/v1/revocation_resource.pb.go index d9735654b835e..261c3645e5e3e 100644 --- a/api/gen/proto/go/teleport/workloadidentity/v1/revocation_resource.pb.go +++ b/api/gen/proto/go/teleport/workloadidentity/v1/revocation_resource.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/workloadidentity/v1/revocation_resource.proto diff --git a/api/gen/proto/go/teleport/workloadidentity/v1/revocation_service.pb.go b/api/gen/proto/go/teleport/workloadidentity/v1/revocation_service.pb.go index 8c049d2cbb0c1..499f04aa8fc70 100644 --- a/api/gen/proto/go/teleport/workloadidentity/v1/revocation_service.pb.go +++ b/api/gen/proto/go/teleport/workloadidentity/v1/revocation_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/workloadidentity/v1/revocation_service.proto diff --git a/api/gen/proto/go/teleport/workloadidentity/v1/sigstore.pb.go b/api/gen/proto/go/teleport/workloadidentity/v1/sigstore.pb.go index 3f56a003df20b..625f626cd4e64 100644 --- a/api/gen/proto/go/teleport/workloadidentity/v1/sigstore.pb.go +++ b/api/gen/proto/go/teleport/workloadidentity/v1/sigstore.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/workloadidentity/v1/sigstore.proto diff --git a/api/gen/proto/go/teleport/workloadidentity/v1/sigstore_policy_resource.pb.go b/api/gen/proto/go/teleport/workloadidentity/v1/sigstore_policy_resource.pb.go index ce6ac39250656..11496aa972015 100644 --- a/api/gen/proto/go/teleport/workloadidentity/v1/sigstore_policy_resource.pb.go +++ b/api/gen/proto/go/teleport/workloadidentity/v1/sigstore_policy_resource.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/workloadidentity/v1/sigstore_policy_resource.proto diff --git a/api/gen/proto/go/teleport/workloadidentity/v1/sigstore_policy_service.pb.go b/api/gen/proto/go/teleport/workloadidentity/v1/sigstore_policy_service.pb.go index 261c369bf45fd..cdd1ab555e18f 100644 --- a/api/gen/proto/go/teleport/workloadidentity/v1/sigstore_policy_service.pb.go +++ b/api/gen/proto/go/teleport/workloadidentity/v1/sigstore_policy_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/workloadidentity/v1/sigstore_policy_service.proto diff --git a/api/gen/proto/go/teleport/workloadidentity/v1/x509_overrides.pb.go b/api/gen/proto/go/teleport/workloadidentity/v1/x509_overrides.pb.go index 8e8a984e13416..70bc70727aec2 100644 --- a/api/gen/proto/go/teleport/workloadidentity/v1/x509_overrides.pb.go +++ b/api/gen/proto/go/teleport/workloadidentity/v1/x509_overrides.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/workloadidentity/v1/x509_overrides.proto diff --git a/api/gen/proto/go/teleport/workloadidentity/v1/x509_overrides_service.pb.go b/api/gen/proto/go/teleport/workloadidentity/v1/x509_overrides_service.pb.go index 32538c7fe3730..c4800091a054d 100644 --- a/api/gen/proto/go/teleport/workloadidentity/v1/x509_overrides_service.pb.go +++ b/api/gen/proto/go/teleport/workloadidentity/v1/x509_overrides_service.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/workloadidentity/v1/x509_overrides_service.proto diff --git a/api/gen/proto/go/usageevents/v1/usageevents.pb.go b/api/gen/proto/go/usageevents/v1/usageevents.pb.go index ce2cd98cfa988..48d71b07f38e9 100644 --- a/api/gen/proto/go/usageevents/v1/usageevents.pb.go +++ b/api/gen/proto/go/usageevents/v1/usageevents.pb.go @@ -69,6 +69,9 @@ const ( DiscoverResource_DISCOVER_RESOURCE_DOC_WINDOWS_DESKTOP_NON_AD DiscoverResource = 39 DiscoverResource_DISCOVER_RESOURCE_KUBERNETES_EKS DiscoverResource = 40 DiscoverResource_DISCOVER_RESOURCE_APPLICATION_AWS_CONSOLE DiscoverResource = 41 + DiscoverResource_DISCOVER_RESOURCE_MCP_STDIO DiscoverResource = 42 + DiscoverResource_DISCOVER_RESOURCE_MCP_SSE DiscoverResource = 43 + DiscoverResource_DISCOVER_RESOURCE_MCP_STREAMABLE_HTTP DiscoverResource = 44 ) var DiscoverResource_name = map[int32]string{ @@ -114,6 +117,9 @@ var DiscoverResource_name = map[int32]string{ 39: "DISCOVER_RESOURCE_DOC_WINDOWS_DESKTOP_NON_AD", 40: "DISCOVER_RESOURCE_KUBERNETES_EKS", 41: "DISCOVER_RESOURCE_APPLICATION_AWS_CONSOLE", + 42: "DISCOVER_RESOURCE_MCP_STDIO", + 43: "DISCOVER_RESOURCE_MCP_SSE", + 44: "DISCOVER_RESOURCE_MCP_STREAMABLE_HTTP", } var DiscoverResource_value = map[string]int32{ @@ -159,6 +165,9 @@ var DiscoverResource_value = map[string]int32{ "DISCOVER_RESOURCE_DOC_WINDOWS_DESKTOP_NON_AD": 39, "DISCOVER_RESOURCE_KUBERNETES_EKS": 40, "DISCOVER_RESOURCE_APPLICATION_AWS_CONSOLE": 41, + "DISCOVER_RESOURCE_MCP_STDIO": 42, + "DISCOVER_RESOURCE_MCP_SSE": 43, + "DISCOVER_RESOURCE_MCP_STREAMABLE_HTTP": 44, } func (x DiscoverResource) String() string { @@ -288,34 +297,36 @@ func (CTA) EnumDescriptor() ([]byte, []int) { type IntegrationEnrollKind int32 const ( - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_UNSPECIFIED IntegrationEnrollKind = 0 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_SLACK IntegrationEnrollKind = 1 - 
IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_AWS_OIDC IntegrationEnrollKind = 2 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_PAGERDUTY IntegrationEnrollKind = 3 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_EMAIL IntegrationEnrollKind = 4 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_JIRA IntegrationEnrollKind = 5 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_DISCORD IntegrationEnrollKind = 6 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MATTERMOST IntegrationEnrollKind = 7 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MS_TEAMS IntegrationEnrollKind = 8 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_OPSGENIE IntegrationEnrollKind = 9 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_OKTA IntegrationEnrollKind = 10 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_JAMF IntegrationEnrollKind = 11 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID IntegrationEnrollKind = 12 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID_GITHUB_ACTIONS IntegrationEnrollKind = 13 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID_CIRCLECI IntegrationEnrollKind = 14 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID_GITLAB IntegrationEnrollKind = 15 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID_JENKINS IntegrationEnrollKind = 16 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID_ANSIBLE IntegrationEnrollKind = 17 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_SERVICENOW IntegrationEnrollKind = 18 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_ENTRA_ID IntegrationEnrollKind = 19 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_DATADOG_INCIDENT_MANAGEMENT IntegrationEnrollKind = 20 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID_AWS IntegrationEnrollKind = 21 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID_GCP IntegrationEnrollKind = 22 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID_AZURE IntegrationEnrollKind = 23 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID_SPACELIFT 
IntegrationEnrollKind = 24 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID_KUBERNETES IntegrationEnrollKind = 25 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_AWS_IDENTITY_CENTER IntegrationEnrollKind = 26 - IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_GITHUB_REPO_ACCESS IntegrationEnrollKind = 27 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_UNSPECIFIED IntegrationEnrollKind = 0 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_SLACK IntegrationEnrollKind = 1 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_AWS_OIDC IntegrationEnrollKind = 2 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_PAGERDUTY IntegrationEnrollKind = 3 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_EMAIL IntegrationEnrollKind = 4 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_JIRA IntegrationEnrollKind = 5 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_DISCORD IntegrationEnrollKind = 6 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MATTERMOST IntegrationEnrollKind = 7 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MS_TEAMS IntegrationEnrollKind = 8 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_OPSGENIE IntegrationEnrollKind = 9 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_OKTA IntegrationEnrollKind = 10 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_JAMF IntegrationEnrollKind = 11 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID IntegrationEnrollKind = 12 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID_GITHUB_ACTIONS IntegrationEnrollKind = 13 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID_CIRCLECI IntegrationEnrollKind = 14 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID_GITLAB IntegrationEnrollKind = 15 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID_JENKINS IntegrationEnrollKind = 16 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID_ANSIBLE IntegrationEnrollKind = 17 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_SERVICENOW IntegrationEnrollKind = 18 + 
IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_ENTRA_ID IntegrationEnrollKind = 19 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_DATADOG_INCIDENT_MANAGEMENT IntegrationEnrollKind = 20 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID_AWS IntegrationEnrollKind = 21 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID_GCP IntegrationEnrollKind = 22 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID_AZURE IntegrationEnrollKind = 23 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID_SPACELIFT IntegrationEnrollKind = 24 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID_KUBERNETES IntegrationEnrollKind = 25 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_AWS_IDENTITY_CENTER IntegrationEnrollKind = 26 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_GITHUB_REPO_ACCESS IntegrationEnrollKind = 27 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID_ARGOCD IntegrationEnrollKind = 28 + IntegrationEnrollKind_INTEGRATION_ENROLL_KIND_MACHINE_ID_GITHUB_ACTIONS_KUBERNETES IntegrationEnrollKind = 29 ) var IntegrationEnrollKind_name = map[int32]string{ @@ -347,37 +358,41 @@ var IntegrationEnrollKind_name = map[int32]string{ 25: "INTEGRATION_ENROLL_KIND_MACHINE_ID_KUBERNETES", 26: "INTEGRATION_ENROLL_KIND_AWS_IDENTITY_CENTER", 27: "INTEGRATION_ENROLL_KIND_GITHUB_REPO_ACCESS", + 28: "INTEGRATION_ENROLL_KIND_MACHINE_ID_ARGOCD", + 29: "INTEGRATION_ENROLL_KIND_MACHINE_ID_GITHUB_ACTIONS_KUBERNETES", } var IntegrationEnrollKind_value = map[string]int32{ - "INTEGRATION_ENROLL_KIND_UNSPECIFIED": 0, - "INTEGRATION_ENROLL_KIND_SLACK": 1, - "INTEGRATION_ENROLL_KIND_AWS_OIDC": 2, - "INTEGRATION_ENROLL_KIND_PAGERDUTY": 3, - "INTEGRATION_ENROLL_KIND_EMAIL": 4, - "INTEGRATION_ENROLL_KIND_JIRA": 5, - "INTEGRATION_ENROLL_KIND_DISCORD": 6, - "INTEGRATION_ENROLL_KIND_MATTERMOST": 7, - "INTEGRATION_ENROLL_KIND_MS_TEAMS": 8, - "INTEGRATION_ENROLL_KIND_OPSGENIE": 9, - "INTEGRATION_ENROLL_KIND_OKTA": 10, - "INTEGRATION_ENROLL_KIND_JAMF": 11, - 
"INTEGRATION_ENROLL_KIND_MACHINE_ID": 12, - "INTEGRATION_ENROLL_KIND_MACHINE_ID_GITHUB_ACTIONS": 13, - "INTEGRATION_ENROLL_KIND_MACHINE_ID_CIRCLECI": 14, - "INTEGRATION_ENROLL_KIND_MACHINE_ID_GITLAB": 15, - "INTEGRATION_ENROLL_KIND_MACHINE_ID_JENKINS": 16, - "INTEGRATION_ENROLL_KIND_MACHINE_ID_ANSIBLE": 17, - "INTEGRATION_ENROLL_KIND_SERVICENOW": 18, - "INTEGRATION_ENROLL_KIND_ENTRA_ID": 19, - "INTEGRATION_ENROLL_KIND_DATADOG_INCIDENT_MANAGEMENT": 20, - "INTEGRATION_ENROLL_KIND_MACHINE_ID_AWS": 21, - "INTEGRATION_ENROLL_KIND_MACHINE_ID_GCP": 22, - "INTEGRATION_ENROLL_KIND_MACHINE_ID_AZURE": 23, - "INTEGRATION_ENROLL_KIND_MACHINE_ID_SPACELIFT": 24, - "INTEGRATION_ENROLL_KIND_MACHINE_ID_KUBERNETES": 25, - "INTEGRATION_ENROLL_KIND_AWS_IDENTITY_CENTER": 26, - "INTEGRATION_ENROLL_KIND_GITHUB_REPO_ACCESS": 27, + "INTEGRATION_ENROLL_KIND_UNSPECIFIED": 0, + "INTEGRATION_ENROLL_KIND_SLACK": 1, + "INTEGRATION_ENROLL_KIND_AWS_OIDC": 2, + "INTEGRATION_ENROLL_KIND_PAGERDUTY": 3, + "INTEGRATION_ENROLL_KIND_EMAIL": 4, + "INTEGRATION_ENROLL_KIND_JIRA": 5, + "INTEGRATION_ENROLL_KIND_DISCORD": 6, + "INTEGRATION_ENROLL_KIND_MATTERMOST": 7, + "INTEGRATION_ENROLL_KIND_MS_TEAMS": 8, + "INTEGRATION_ENROLL_KIND_OPSGENIE": 9, + "INTEGRATION_ENROLL_KIND_OKTA": 10, + "INTEGRATION_ENROLL_KIND_JAMF": 11, + "INTEGRATION_ENROLL_KIND_MACHINE_ID": 12, + "INTEGRATION_ENROLL_KIND_MACHINE_ID_GITHUB_ACTIONS": 13, + "INTEGRATION_ENROLL_KIND_MACHINE_ID_CIRCLECI": 14, + "INTEGRATION_ENROLL_KIND_MACHINE_ID_GITLAB": 15, + "INTEGRATION_ENROLL_KIND_MACHINE_ID_JENKINS": 16, + "INTEGRATION_ENROLL_KIND_MACHINE_ID_ANSIBLE": 17, + "INTEGRATION_ENROLL_KIND_SERVICENOW": 18, + "INTEGRATION_ENROLL_KIND_ENTRA_ID": 19, + "INTEGRATION_ENROLL_KIND_DATADOG_INCIDENT_MANAGEMENT": 20, + "INTEGRATION_ENROLL_KIND_MACHINE_ID_AWS": 21, + "INTEGRATION_ENROLL_KIND_MACHINE_ID_GCP": 22, + "INTEGRATION_ENROLL_KIND_MACHINE_ID_AZURE": 23, + "INTEGRATION_ENROLL_KIND_MACHINE_ID_SPACELIFT": 24, + 
"INTEGRATION_ENROLL_KIND_MACHINE_ID_KUBERNETES": 25, + "INTEGRATION_ENROLL_KIND_AWS_IDENTITY_CENTER": 26, + "INTEGRATION_ENROLL_KIND_GITHUB_REPO_ACCESS": 27, + "INTEGRATION_ENROLL_KIND_MACHINE_ID_ARGOCD": 28, + "INTEGRATION_ENROLL_KIND_MACHINE_ID_GITHUB_ACTIONS_KUBERNETES": 29, } func (x IntegrationEnrollKind) String() string { @@ -404,18 +419,27 @@ const ( IntegrationEnrollStep_INTEGRATION_ENROLL_STEP_GITHUBRA_CREATE_GIT_SERVER IntegrationEnrollStep = 6 IntegrationEnrollStep_INTEGRATION_ENROLL_STEP_GITHUBRA_CONFIGURE_SSH_CERT IntegrationEnrollStep = 7 IntegrationEnrollStep_INTEGRATION_ENROLL_STEP_GITHUBRA_CREATE_ROLE IntegrationEnrollStep = 8 + // MWIGHAK8S denotes the MWI GitHub Actions + Kubernetes wizard. + IntegrationEnrollStep_INTEGRATION_ENROLL_STEP_MWIGHAK8S_CONNECT_GITHUB IntegrationEnrollStep = 9 + IntegrationEnrollStep_INTEGRATION_ENROLL_STEP_MWIGHAK8S_CONFIGURE_ACCESS IntegrationEnrollStep = 10 + IntegrationEnrollStep_INTEGRATION_ENROLL_STEP_MWIGHAK8S_SETUP_WORKFLOW IntegrationEnrollStep = 11 + IntegrationEnrollStep_INTEGRATION_ENROLL_STEP_MWIGHAK8S_WELCOME IntegrationEnrollStep = 12 ) var IntegrationEnrollStep_name = map[int32]string{ - 0: "INTEGRATION_ENROLL_STEP_UNSPECIFIED", - 1: "INTEGRATION_ENROLL_STEP_AWSIC_CONNECT_OIDC", - 2: "INTEGRATION_ENROLL_STEP_AWSIC_SET_ACCESSLIST_DEFAULT_OWNER", - 3: "INTEGRATION_ENROLL_STEP_AWSIC_UPLOAD_AWS_SAML_SP_METADATA", - 4: "INTEGRATION_ENROLL_STEP_AWSIC_TEST_SCIM_CONNECTION", - 5: "INTEGRATION_ENROLL_STEP_GITHUBRA_CREATE_INTEGRATION", - 6: "INTEGRATION_ENROLL_STEP_GITHUBRA_CREATE_GIT_SERVER", - 7: "INTEGRATION_ENROLL_STEP_GITHUBRA_CONFIGURE_SSH_CERT", - 8: "INTEGRATION_ENROLL_STEP_GITHUBRA_CREATE_ROLE", + 0: "INTEGRATION_ENROLL_STEP_UNSPECIFIED", + 1: "INTEGRATION_ENROLL_STEP_AWSIC_CONNECT_OIDC", + 2: "INTEGRATION_ENROLL_STEP_AWSIC_SET_ACCESSLIST_DEFAULT_OWNER", + 3: "INTEGRATION_ENROLL_STEP_AWSIC_UPLOAD_AWS_SAML_SP_METADATA", + 4: "INTEGRATION_ENROLL_STEP_AWSIC_TEST_SCIM_CONNECTION", + 5: 
"INTEGRATION_ENROLL_STEP_GITHUBRA_CREATE_INTEGRATION", + 6: "INTEGRATION_ENROLL_STEP_GITHUBRA_CREATE_GIT_SERVER", + 7: "INTEGRATION_ENROLL_STEP_GITHUBRA_CONFIGURE_SSH_CERT", + 8: "INTEGRATION_ENROLL_STEP_GITHUBRA_CREATE_ROLE", + 9: "INTEGRATION_ENROLL_STEP_MWIGHAK8S_CONNECT_GITHUB", + 10: "INTEGRATION_ENROLL_STEP_MWIGHAK8S_CONFIGURE_ACCESS", + 11: "INTEGRATION_ENROLL_STEP_MWIGHAK8S_SETUP_WORKFLOW", + 12: "INTEGRATION_ENROLL_STEP_MWIGHAK8S_WELCOME", } var IntegrationEnrollStep_value = map[string]int32{ @@ -428,6 +452,10 @@ var IntegrationEnrollStep_value = map[string]int32{ "INTEGRATION_ENROLL_STEP_GITHUBRA_CREATE_GIT_SERVER": 6, "INTEGRATION_ENROLL_STEP_GITHUBRA_CONFIGURE_SSH_CERT": 7, "INTEGRATION_ENROLL_STEP_GITHUBRA_CREATE_ROLE": 8, + "INTEGRATION_ENROLL_STEP_MWIGHAK8S_CONNECT_GITHUB": 9, + "INTEGRATION_ENROLL_STEP_MWIGHAK8S_CONFIGURE_ACCESS": 10, + "INTEGRATION_ENROLL_STEP_MWIGHAK8S_SETUP_WORKFLOW": 11, + "INTEGRATION_ENROLL_STEP_MWIGHAK8S_WELCOME": 12, } func (x IntegrationEnrollStep) String() string { @@ -480,6 +508,131 @@ func (IntegrationEnrollStatusCode) EnumDescriptor() ([]byte, []int) { return fileDescriptor_94cf2ca1c69fd564, []int{5} } +// IntegrationEnrollSection identifies a section the user opened or expanded in +// an integration setup wizard. +type IntegrationEnrollSection int32 + +const ( + IntegrationEnrollSection_INTEGRATION_ENROLL_SECTION_UNSPECIFIED IntegrationEnrollSection = 0 + // MWIGHAK8S denotes the MWI GitHub Actions + Kubernetes wizard. 
+ IntegrationEnrollSection_INTEGRATION_ENROLL_SECTION_MWIGHAK8S_GITHUB_ADVANCED_OPTIONS IntegrationEnrollSection = 1 + IntegrationEnrollSection_INTEGRATION_ENROLL_SECTION_MWIGHAK8S_KUBERNETES_LABEL_PICKER IntegrationEnrollSection = 2 + IntegrationEnrollSection_INTEGRATION_ENROLL_SECTION_MWIGHAK8S_KUBERNETES_ADVANCED_OPTIONS IntegrationEnrollSection = 3 + IntegrationEnrollSection_INTEGRATION_ENROLL_SECTION_MWIGHAK8S_KUBERNETES_RESOURCE_RULE_PICKER IntegrationEnrollSection = 4 +) + +var IntegrationEnrollSection_name = map[int32]string{ + 0: "INTEGRATION_ENROLL_SECTION_UNSPECIFIED", + 1: "INTEGRATION_ENROLL_SECTION_MWIGHAK8S_GITHUB_ADVANCED_OPTIONS", + 2: "INTEGRATION_ENROLL_SECTION_MWIGHAK8S_KUBERNETES_LABEL_PICKER", + 3: "INTEGRATION_ENROLL_SECTION_MWIGHAK8S_KUBERNETES_ADVANCED_OPTIONS", + 4: "INTEGRATION_ENROLL_SECTION_MWIGHAK8S_KUBERNETES_RESOURCE_RULE_PICKER", +} + +var IntegrationEnrollSection_value = map[string]int32{ + "INTEGRATION_ENROLL_SECTION_UNSPECIFIED": 0, + "INTEGRATION_ENROLL_SECTION_MWIGHAK8S_GITHUB_ADVANCED_OPTIONS": 1, + "INTEGRATION_ENROLL_SECTION_MWIGHAK8S_KUBERNETES_LABEL_PICKER": 2, + "INTEGRATION_ENROLL_SECTION_MWIGHAK8S_KUBERNETES_ADVANCED_OPTIONS": 3, + "INTEGRATION_ENROLL_SECTION_MWIGHAK8S_KUBERNETES_RESOURCE_RULE_PICKER": 4, +} + +func (x IntegrationEnrollSection) String() string { + return proto.EnumName(IntegrationEnrollSection_name, int32(x)) +} + +func (IntegrationEnrollSection) EnumDescriptor() ([]byte, []int) { + return fileDescriptor_94cf2ca1c69fd564, []int{6} +} + +// IntegrationEnrollField identifies a field the user completed in an integration +// setup wizard. +type IntegrationEnrollField int32 + +const ( + IntegrationEnrollField_INTEGRATION_ENROLL_FIELD_UNSPECIFIED IntegrationEnrollField = 0 + // MWIGHAK8S denotes the MWI GitHub Actions + Kubernetes wizard. 
+ IntegrationEnrollField_INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_REPOSITORY_URL IntegrationEnrollField = 1 + IntegrationEnrollField_INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_BRANCH IntegrationEnrollField = 2 + IntegrationEnrollField_INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_WORKFLOW IntegrationEnrollField = 3 + IntegrationEnrollField_INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_ENVIRONMENT IntegrationEnrollField = 4 + IntegrationEnrollField_INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_REF IntegrationEnrollField = 5 + IntegrationEnrollField_INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_ENTERPRISE_SLUG IntegrationEnrollField = 6 + IntegrationEnrollField_INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_ENTERPRISE_STATIC_JWKS IntegrationEnrollField = 7 + IntegrationEnrollField_INTEGRATION_ENROLL_FIELD_MWIGHAK8S_KUBERNETES_LABELS IntegrationEnrollField = 8 + IntegrationEnrollField_INTEGRATION_ENROLL_FIELD_MWIGHAK8S_KUBERNETES_GROUPS IntegrationEnrollField = 9 + IntegrationEnrollField_INTEGRATION_ENROLL_FIELD_MWIGHAK8S_KUBERNETES_USERS IntegrationEnrollField = 10 + IntegrationEnrollField_INTEGRATION_ENROLL_FIELD_MWIGHAK8S_KUBERNETES_RESOURCE_RULES IntegrationEnrollField = 11 +) + +var IntegrationEnrollField_name = map[int32]string{ + 0: "INTEGRATION_ENROLL_FIELD_UNSPECIFIED", + 1: "INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_REPOSITORY_URL", + 2: "INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_BRANCH", + 3: "INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_WORKFLOW", + 4: "INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_ENVIRONMENT", + 5: "INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_REF", + 6: "INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_ENTERPRISE_SLUG", + 7: "INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_ENTERPRISE_STATIC_JWKS", + 8: "INTEGRATION_ENROLL_FIELD_MWIGHAK8S_KUBERNETES_LABELS", + 9: "INTEGRATION_ENROLL_FIELD_MWIGHAK8S_KUBERNETES_GROUPS", + 10: "INTEGRATION_ENROLL_FIELD_MWIGHAK8S_KUBERNETES_USERS", + 11: "INTEGRATION_ENROLL_FIELD_MWIGHAK8S_KUBERNETES_RESOURCE_RULES", +} + +var 
IntegrationEnrollField_value = map[string]int32{ + "INTEGRATION_ENROLL_FIELD_UNSPECIFIED": 0, + "INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_REPOSITORY_URL": 1, + "INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_BRANCH": 2, + "INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_WORKFLOW": 3, + "INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_ENVIRONMENT": 4, + "INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_REF": 5, + "INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_ENTERPRISE_SLUG": 6, + "INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_ENTERPRISE_STATIC_JWKS": 7, + "INTEGRATION_ENROLL_FIELD_MWIGHAK8S_KUBERNETES_LABELS": 8, + "INTEGRATION_ENROLL_FIELD_MWIGHAK8S_KUBERNETES_GROUPS": 9, + "INTEGRATION_ENROLL_FIELD_MWIGHAK8S_KUBERNETES_USERS": 10, + "INTEGRATION_ENROLL_FIELD_MWIGHAK8S_KUBERNETES_RESOURCE_RULES": 11, +} + +func (x IntegrationEnrollField) String() string { + return proto.EnumName(IntegrationEnrollField_name, int32(x)) +} + +func (IntegrationEnrollField) EnumDescriptor() ([]byte, []int) { + return fileDescriptor_94cf2ca1c69fd564, []int{7} +} + +// IntegrationEnrollCodeType identifies the type of code that was copied in an +// integration setup wizard. 
+type IntegrationEnrollCodeType int32 + +const ( + IntegrationEnrollCodeType_INTEGRATION_ENROLL_CODE_TYPE_UNSPECIFIED IntegrationEnrollCodeType = 0 + IntegrationEnrollCodeType_INTEGRATION_ENROLL_CODE_TYPE_TERRAFORM IntegrationEnrollCodeType = 1 + IntegrationEnrollCodeType_INTEGRATION_ENROLL_CODE_TYPE_GITHUB_ACTIONS_YAML IntegrationEnrollCodeType = 2 +) + +var IntegrationEnrollCodeType_name = map[int32]string{ + 0: "INTEGRATION_ENROLL_CODE_TYPE_UNSPECIFIED", + 1: "INTEGRATION_ENROLL_CODE_TYPE_TERRAFORM", + 2: "INTEGRATION_ENROLL_CODE_TYPE_GITHUB_ACTIONS_YAML", +} + +var IntegrationEnrollCodeType_value = map[string]int32{ + "INTEGRATION_ENROLL_CODE_TYPE_UNSPECIFIED": 0, + "INTEGRATION_ENROLL_CODE_TYPE_TERRAFORM": 1, + "INTEGRATION_ENROLL_CODE_TYPE_GITHUB_ACTIONS_YAML": 2, +} + +func (x IntegrationEnrollCodeType) String() string { + return proto.EnumName(IntegrationEnrollCodeType_name, int32(x)) +} + +func (IntegrationEnrollCodeType) EnumDescriptor() ([]byte, []int) { + return fileDescriptor_94cf2ca1c69fd564, []int{8} +} + // Feature is name of Teleport feature type Feature int32 @@ -503,7 +656,7 @@ func (x Feature) String() string { } func (Feature) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_94cf2ca1c69fd564, []int{6} + return fileDescriptor_94cf2ca1c69fd564, []int{9} } // FeatureRecommendationStatus is feature recommendation status. @@ -534,7 +687,7 @@ func (x FeatureRecommendationStatus) String() string { } func (FeatureRecommendationStatus) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_94cf2ca1c69fd564, []int{7} + return fileDescriptor_94cf2ca1c69fd564, []int{10} } // DeployMethod describes the method used to deploy a service. @@ -4211,6 +4364,278 @@ func (m *UIIntegrationEnrollStepEvent) GetStatus() *IntegrationEnrollStepStatus return nil } +// UIIntegrationEnrollSectionOpenEvent is emitted when the user opens or expands +// a section (e.g. "Advanced Options") in an integration setup wizard. 
+type UIIntegrationEnrollSectionOpenEvent struct { + // Metadata of the event. + Metadata *IntegrationEnrollMetadata `protobuf:"bytes,1,opt,name=metadata,proto3" json:"metadata,omitempty"` + // Step where the section was opened. + Step IntegrationEnrollStep `protobuf:"varint,2,opt,name=step,proto3,enum=teleport.usageevents.v1.IntegrationEnrollStep" json:"step,omitempty"` + // Section that was opened. + Section IntegrationEnrollSection `protobuf:"varint,3,opt,name=section,proto3,enum=teleport.usageevents.v1.IntegrationEnrollSection" json:"section,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *UIIntegrationEnrollSectionOpenEvent) Reset() { *m = UIIntegrationEnrollSectionOpenEvent{} } +func (m *UIIntegrationEnrollSectionOpenEvent) String() string { return proto.CompactTextString(m) } +func (*UIIntegrationEnrollSectionOpenEvent) ProtoMessage() {} +func (*UIIntegrationEnrollSectionOpenEvent) Descriptor() ([]byte, []int) { + return fileDescriptor_94cf2ca1c69fd564, []int{59} +} +func (m *UIIntegrationEnrollSectionOpenEvent) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *UIIntegrationEnrollSectionOpenEvent) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_UIIntegrationEnrollSectionOpenEvent.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *UIIntegrationEnrollSectionOpenEvent) XXX_Merge(src proto.Message) { + xxx_messageInfo_UIIntegrationEnrollSectionOpenEvent.Merge(m, src) +} +func (m *UIIntegrationEnrollSectionOpenEvent) XXX_Size() int { + return m.Size() +} +func (m *UIIntegrationEnrollSectionOpenEvent) XXX_DiscardUnknown() { + xxx_messageInfo_UIIntegrationEnrollSectionOpenEvent.DiscardUnknown(m) +} + +var xxx_messageInfo_UIIntegrationEnrollSectionOpenEvent proto.InternalMessageInfo 
+ +func (m *UIIntegrationEnrollSectionOpenEvent) GetMetadata() *IntegrationEnrollMetadata { + if m != nil { + return m.Metadata + } + return nil +} + +func (m *UIIntegrationEnrollSectionOpenEvent) GetStep() IntegrationEnrollStep { + if m != nil { + return m.Step + } + return IntegrationEnrollStep_INTEGRATION_ENROLL_STEP_UNSPECIFIED +} + +func (m *UIIntegrationEnrollSectionOpenEvent) GetSection() IntegrationEnrollSection { + if m != nil { + return m.Section + } + return IntegrationEnrollSection_INTEGRATION_ENROLL_SECTION_UNSPECIFIED +} + +// UIIntegrationEnrollFieldCompleteEvent is emitted when the user completes a +// field in an integration setup wizard. +type UIIntegrationEnrollFieldCompleteEvent struct { + // Metadata of the event. + Metadata *IntegrationEnrollMetadata `protobuf:"bytes,1,opt,name=metadata,proto3" json:"metadata,omitempty"` + // Step where the field was completed. + Step IntegrationEnrollStep `protobuf:"varint,2,opt,name=step,proto3,enum=teleport.usageevents.v1.IntegrationEnrollStep" json:"step,omitempty"` + // Field that was completed. 
+ Field IntegrationEnrollField `protobuf:"varint,3,opt,name=field,proto3,enum=teleport.usageevents.v1.IntegrationEnrollField" json:"field,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *UIIntegrationEnrollFieldCompleteEvent) Reset() { *m = UIIntegrationEnrollFieldCompleteEvent{} } +func (m *UIIntegrationEnrollFieldCompleteEvent) String() string { return proto.CompactTextString(m) } +func (*UIIntegrationEnrollFieldCompleteEvent) ProtoMessage() {} +func (*UIIntegrationEnrollFieldCompleteEvent) Descriptor() ([]byte, []int) { + return fileDescriptor_94cf2ca1c69fd564, []int{60} +} +func (m *UIIntegrationEnrollFieldCompleteEvent) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *UIIntegrationEnrollFieldCompleteEvent) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_UIIntegrationEnrollFieldCompleteEvent.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *UIIntegrationEnrollFieldCompleteEvent) XXX_Merge(src proto.Message) { + xxx_messageInfo_UIIntegrationEnrollFieldCompleteEvent.Merge(m, src) +} +func (m *UIIntegrationEnrollFieldCompleteEvent) XXX_Size() int { + return m.Size() +} +func (m *UIIntegrationEnrollFieldCompleteEvent) XXX_DiscardUnknown() { + xxx_messageInfo_UIIntegrationEnrollFieldCompleteEvent.DiscardUnknown(m) +} + +var xxx_messageInfo_UIIntegrationEnrollFieldCompleteEvent proto.InternalMessageInfo + +func (m *UIIntegrationEnrollFieldCompleteEvent) GetMetadata() *IntegrationEnrollMetadata { + if m != nil { + return m.Metadata + } + return nil +} + +func (m *UIIntegrationEnrollFieldCompleteEvent) GetStep() IntegrationEnrollStep { + if m != nil { + return m.Step + } + return IntegrationEnrollStep_INTEGRATION_ENROLL_STEP_UNSPECIFIED +} + +func (m 
*UIIntegrationEnrollFieldCompleteEvent) GetField() IntegrationEnrollField { + if m != nil { + return m.Field + } + return IntegrationEnrollField_INTEGRATION_ENROLL_FIELD_UNSPECIFIED +} + +// UIIntegrationEnrollCodeCopyEvent is emitted when the user copies generated +// IaC (e.g. Terraform) code in an integration setup wizard. +type UIIntegrationEnrollCodeCopyEvent struct { + // Metadata of the event. + Metadata *IntegrationEnrollMetadata `protobuf:"bytes,1,opt,name=metadata,proto3" json:"metadata,omitempty"` + // Step where the code was copied. + Step IntegrationEnrollStep `protobuf:"varint,2,opt,name=step,proto3,enum=teleport.usageevents.v1.IntegrationEnrollStep" json:"step,omitempty"` + // Type of code that was copied. + Type IntegrationEnrollCodeType `protobuf:"varint,3,opt,name=type,proto3,enum=teleport.usageevents.v1.IntegrationEnrollCodeType" json:"type,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *UIIntegrationEnrollCodeCopyEvent) Reset() { *m = UIIntegrationEnrollCodeCopyEvent{} } +func (m *UIIntegrationEnrollCodeCopyEvent) String() string { return proto.CompactTextString(m) } +func (*UIIntegrationEnrollCodeCopyEvent) ProtoMessage() {} +func (*UIIntegrationEnrollCodeCopyEvent) Descriptor() ([]byte, []int) { + return fileDescriptor_94cf2ca1c69fd564, []int{61} +} +func (m *UIIntegrationEnrollCodeCopyEvent) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *UIIntegrationEnrollCodeCopyEvent) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_UIIntegrationEnrollCodeCopyEvent.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *UIIntegrationEnrollCodeCopyEvent) XXX_Merge(src proto.Message) { + xxx_messageInfo_UIIntegrationEnrollCodeCopyEvent.Merge(m, src) +} +func (m 
*UIIntegrationEnrollCodeCopyEvent) XXX_Size() int { + return m.Size() +} +func (m *UIIntegrationEnrollCodeCopyEvent) XXX_DiscardUnknown() { + xxx_messageInfo_UIIntegrationEnrollCodeCopyEvent.DiscardUnknown(m) +} + +var xxx_messageInfo_UIIntegrationEnrollCodeCopyEvent proto.InternalMessageInfo + +func (m *UIIntegrationEnrollCodeCopyEvent) GetMetadata() *IntegrationEnrollMetadata { + if m != nil { + return m.Metadata + } + return nil +} + +func (m *UIIntegrationEnrollCodeCopyEvent) GetStep() IntegrationEnrollStep { + if m != nil { + return m.Step + } + return IntegrationEnrollStep_INTEGRATION_ENROLL_STEP_UNSPECIFIED +} + +func (m *UIIntegrationEnrollCodeCopyEvent) GetType() IntegrationEnrollCodeType { + if m != nil { + return m.Type + } + return IntegrationEnrollCodeType_INTEGRATION_ENROLL_CODE_TYPE_UNSPECIFIED +} + +// UIIntegrationEnrollLinkClickEvent is emitted when the user clicks a link to +// documentation, etc. in an integration setup wizard. +type UIIntegrationEnrollLinkClickEvent struct { + // Metadata of the event. + Metadata *IntegrationEnrollMetadata `protobuf:"bytes,1,opt,name=metadata,proto3" json:"metadata,omitempty"` + // Step where the link was clicked. + Step IntegrationEnrollStep `protobuf:"varint,2,opt,name=step,proto3,enum=teleport.usageevents.v1.IntegrationEnrollStep" json:"step,omitempty"` + // Link that was clicked. 
+ Link string `protobuf:"bytes,3,opt,name=link,proto3" json:"link,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *UIIntegrationEnrollLinkClickEvent) Reset() { *m = UIIntegrationEnrollLinkClickEvent{} } +func (m *UIIntegrationEnrollLinkClickEvent) String() string { return proto.CompactTextString(m) } +func (*UIIntegrationEnrollLinkClickEvent) ProtoMessage() {} +func (*UIIntegrationEnrollLinkClickEvent) Descriptor() ([]byte, []int) { + return fileDescriptor_94cf2ca1c69fd564, []int{62} +} +func (m *UIIntegrationEnrollLinkClickEvent) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *UIIntegrationEnrollLinkClickEvent) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_UIIntegrationEnrollLinkClickEvent.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *UIIntegrationEnrollLinkClickEvent) XXX_Merge(src proto.Message) { + xxx_messageInfo_UIIntegrationEnrollLinkClickEvent.Merge(m, src) +} +func (m *UIIntegrationEnrollLinkClickEvent) XXX_Size() int { + return m.Size() +} +func (m *UIIntegrationEnrollLinkClickEvent) XXX_DiscardUnknown() { + xxx_messageInfo_UIIntegrationEnrollLinkClickEvent.DiscardUnknown(m) +} + +var xxx_messageInfo_UIIntegrationEnrollLinkClickEvent proto.InternalMessageInfo + +func (m *UIIntegrationEnrollLinkClickEvent) GetMetadata() *IntegrationEnrollMetadata { + if m != nil { + return m.Metadata + } + return nil +} + +func (m *UIIntegrationEnrollLinkClickEvent) GetStep() IntegrationEnrollStep { + if m != nil { + return m.Step + } + return IntegrationEnrollStep_INTEGRATION_ENROLL_STEP_UNSPECIFIED +} + +func (m *UIIntegrationEnrollLinkClickEvent) GetLink() string { + if m != nil { + return m.Link + } + return "" +} + // ResourceCreateEvent is emitted when a resource is created. 
type ResourceCreateEvent struct { // resource_type is the type of resource ("node", "node.openssh", "db", "k8s", "app"). @@ -4231,7 +4656,7 @@ func (m *ResourceCreateEvent) Reset() { *m = ResourceCreateEvent{} } func (m *ResourceCreateEvent) String() string { return proto.CompactTextString(m) } func (*ResourceCreateEvent) ProtoMessage() {} func (*ResourceCreateEvent) Descriptor() ([]byte, []int) { - return fileDescriptor_94cf2ca1c69fd564, []int{59} + return fileDescriptor_94cf2ca1c69fd564, []int{63} } func (m *ResourceCreateEvent) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4303,7 +4728,7 @@ func (m *DiscoveredDatabaseMetadata) Reset() { *m = DiscoveredDatabaseMe func (m *DiscoveredDatabaseMetadata) String() string { return proto.CompactTextString(m) } func (*DiscoveredDatabaseMetadata) ProtoMessage() {} func (*DiscoveredDatabaseMetadata) Descriptor() ([]byte, []int) { - return fileDescriptor_94cf2ca1c69fd564, []int{60} + return fileDescriptor_94cf2ca1c69fd564, []int{64} } func (m *DiscoveredDatabaseMetadata) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4373,7 +4798,7 @@ func (m *FeatureRecommendationEvent) Reset() { *m = FeatureRecommendatio func (m *FeatureRecommendationEvent) String() string { return proto.CompactTextString(m) } func (*FeatureRecommendationEvent) ProtoMessage() {} func (*FeatureRecommendationEvent) Descriptor() ([]byte, []int) { - return fileDescriptor_94cf2ca1c69fd564, []int{61} + return fileDescriptor_94cf2ca1c69fd564, []int{65} } func (m *FeatureRecommendationEvent) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4443,7 +4868,7 @@ func (m *TAGExecuteQueryEvent) Reset() { *m = TAGExecuteQueryEvent{} } func (m *TAGExecuteQueryEvent) String() string { return proto.CompactTextString(m) } func (*TAGExecuteQueryEvent) ProtoMessage() {} func (*TAGExecuteQueryEvent) Descriptor() ([]byte, []int) { - return fileDescriptor_94cf2ca1c69fd564, []int{62} + return fileDescriptor_94cf2ca1c69fd564, []int{66} } func (m 
*TAGExecuteQueryEvent) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4527,7 +4952,7 @@ func (m *AccessGraphAWSScanEvent) Reset() { *m = AccessGraphAWSScanEvent func (m *AccessGraphAWSScanEvent) String() string { return proto.CompactTextString(m) } func (*AccessGraphAWSScanEvent) ProtoMessage() {} func (*AccessGraphAWSScanEvent) Descriptor() ([]byte, []int) { - return fileDescriptor_94cf2ca1c69fd564, []int{63} + return fileDescriptor_94cf2ca1c69fd564, []int{67} } func (m *AccessGraphAWSScanEvent) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4648,7 +5073,7 @@ func (m *UIAccessGraphCrownJewelDiffViewEvent) Reset() { *m = UIAccessGr func (m *UIAccessGraphCrownJewelDiffViewEvent) String() string { return proto.CompactTextString(m) } func (*UIAccessGraphCrownJewelDiffViewEvent) ProtoMessage() {} func (*UIAccessGraphCrownJewelDiffViewEvent) Descriptor() ([]byte, []int) { - return fileDescriptor_94cf2ca1c69fd564, []int{64} + return fileDescriptor_94cf2ca1c69fd564, []int{68} } func (m *UIAccessGraphCrownJewelDiffViewEvent) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4706,7 +5131,7 @@ func (m *SecurityReportGetResultEvent) Reset() { *m = SecurityReportGetR func (m *SecurityReportGetResultEvent) String() string { return proto.CompactTextString(m) } func (*SecurityReportGetResultEvent) ProtoMessage() {} func (*SecurityReportGetResultEvent) Descriptor() ([]byte, []int) { - return fileDescriptor_94cf2ca1c69fd564, []int{65} + return fileDescriptor_94cf2ca1c69fd564, []int{69} } func (m *SecurityReportGetResultEvent) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4767,7 +5192,7 @@ func (m *DiscoveryFetchEvent) Reset() { *m = DiscoveryFetchEvent{} } func (m *DiscoveryFetchEvent) String() string { return proto.CompactTextString(m) } func (*DiscoveryFetchEvent) ProtoMessage() {} func (*DiscoveryFetchEvent) Descriptor() ([]byte, []int) { - return fileDescriptor_94cf2ca1c69fd564, []int{66} + return fileDescriptor_94cf2ca1c69fd564, 
[]int{70} } func (m *DiscoveryFetchEvent) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4846,7 +5271,7 @@ func (m *UserTaskStateEvent) Reset() { *m = UserTaskStateEvent{} } func (m *UserTaskStateEvent) String() string { return proto.CompactTextString(m) } func (*UserTaskStateEvent) ProtoMessage() {} func (*UserTaskStateEvent) Descriptor() ([]byte, []int) { - return fileDescriptor_94cf2ca1c69fd564, []int{67} + return fileDescriptor_94cf2ca1c69fd564, []int{71} } func (m *UserTaskStateEvent) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4968,6 +5393,10 @@ type UsageEventOneOf struct { // *UsageEventOneOf_UiAccessGraphCrownJewelDiffView // *UsageEventOneOf_UserTaskStateEvent // *UsageEventOneOf_UiIntegrationEnrollStepEvent + // *UsageEventOneOf_UiIntegrationEnrollSectionOpenEvent + // *UsageEventOneOf_UiIntegrationEnrollFieldCompleteEvent + // *UsageEventOneOf_UiIntegrationEnrollCodeCopyEvent + // *UsageEventOneOf_UiIntegrationEnrollLinkClickEvent Event isUsageEventOneOf_Event `protobuf_oneof:"event"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` @@ -4978,7 +5407,7 @@ func (m *UsageEventOneOf) Reset() { *m = UsageEventOneOf{} } func (m *UsageEventOneOf) String() string { return proto.CompactTextString(m) } func (*UsageEventOneOf) ProtoMessage() {} func (*UsageEventOneOf) Descriptor() ([]byte, []int) { - return fileDescriptor_94cf2ca1c69fd564, []int{68} + return fileDescriptor_94cf2ca1c69fd564, []int{72} } func (m *UsageEventOneOf) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5193,6 +5622,18 @@ type UsageEventOneOf_UserTaskStateEvent struct { type UsageEventOneOf_UiIntegrationEnrollStepEvent struct { UiIntegrationEnrollStepEvent *UIIntegrationEnrollStepEvent `protobuf:"bytes,61,opt,name=ui_integration_enroll_step_event,json=uiIntegrationEnrollStepEvent,proto3,oneof" json:"ui_integration_enroll_step_event,omitempty"` } +type UsageEventOneOf_UiIntegrationEnrollSectionOpenEvent struct { + 
UiIntegrationEnrollSectionOpenEvent *UIIntegrationEnrollSectionOpenEvent `protobuf:"bytes,62,opt,name=ui_integration_enroll_section_open_event,json=uiIntegrationEnrollSectionOpenEvent,proto3,oneof" json:"ui_integration_enroll_section_open_event,omitempty"` +} +type UsageEventOneOf_UiIntegrationEnrollFieldCompleteEvent struct { + UiIntegrationEnrollFieldCompleteEvent *UIIntegrationEnrollFieldCompleteEvent `protobuf:"bytes,63,opt,name=ui_integration_enroll_field_complete_event,json=uiIntegrationEnrollFieldCompleteEvent,proto3,oneof" json:"ui_integration_enroll_field_complete_event,omitempty"` +} +type UsageEventOneOf_UiIntegrationEnrollCodeCopyEvent struct { + UiIntegrationEnrollCodeCopyEvent *UIIntegrationEnrollCodeCopyEvent `protobuf:"bytes,64,opt,name=ui_integration_enroll_code_copy_event,json=uiIntegrationEnrollCodeCopyEvent,proto3,oneof" json:"ui_integration_enroll_code_copy_event,omitempty"` +} +type UsageEventOneOf_UiIntegrationEnrollLinkClickEvent struct { + UiIntegrationEnrollLinkClickEvent *UIIntegrationEnrollLinkClickEvent `protobuf:"bytes,65,opt,name=ui_integration_enroll_link_click_event,json=uiIntegrationEnrollLinkClickEvent,proto3,oneof" json:"ui_integration_enroll_link_click_event,omitempty"` +} func (*UsageEventOneOf_UiBannerClick) isUsageEventOneOf_Event() {} func (*UsageEventOneOf_UiOnboardCompleteGoToDashboardClick) isUsageEventOneOf_Event() {} @@ -5254,6 +5695,10 @@ func (*UsageEventOneOf_AccessGraphAwsScanEvent) isUsageEventOneOf_Event() func (*UsageEventOneOf_UiAccessGraphCrownJewelDiffView) isUsageEventOneOf_Event() {} func (*UsageEventOneOf_UserTaskStateEvent) isUsageEventOneOf_Event() {} func (*UsageEventOneOf_UiIntegrationEnrollStepEvent) isUsageEventOneOf_Event() {} +func (*UsageEventOneOf_UiIntegrationEnrollSectionOpenEvent) isUsageEventOneOf_Event() {} +func (*UsageEventOneOf_UiIntegrationEnrollFieldCompleteEvent) isUsageEventOneOf_Event() {} +func (*UsageEventOneOf_UiIntegrationEnrollCodeCopyEvent) isUsageEventOneOf_Event() {} +func 
(*UsageEventOneOf_UiIntegrationEnrollLinkClickEvent) isUsageEventOneOf_Event() {} func (m *UsageEventOneOf) GetEvent() isUsageEventOneOf_Event { if m != nil { @@ -5682,6 +6127,34 @@ func (m *UsageEventOneOf) GetUiIntegrationEnrollStepEvent() *UIIntegrationEnroll return nil } +func (m *UsageEventOneOf) GetUiIntegrationEnrollSectionOpenEvent() *UIIntegrationEnrollSectionOpenEvent { + if x, ok := m.GetEvent().(*UsageEventOneOf_UiIntegrationEnrollSectionOpenEvent); ok { + return x.UiIntegrationEnrollSectionOpenEvent + } + return nil +} + +func (m *UsageEventOneOf) GetUiIntegrationEnrollFieldCompleteEvent() *UIIntegrationEnrollFieldCompleteEvent { + if x, ok := m.GetEvent().(*UsageEventOneOf_UiIntegrationEnrollFieldCompleteEvent); ok { + return x.UiIntegrationEnrollFieldCompleteEvent + } + return nil +} + +func (m *UsageEventOneOf) GetUiIntegrationEnrollCodeCopyEvent() *UIIntegrationEnrollCodeCopyEvent { + if x, ok := m.GetEvent().(*UsageEventOneOf_UiIntegrationEnrollCodeCopyEvent); ok { + return x.UiIntegrationEnrollCodeCopyEvent + } + return nil +} + +func (m *UsageEventOneOf) GetUiIntegrationEnrollLinkClickEvent() *UIIntegrationEnrollLinkClickEvent { + if x, ok := m.GetEvent().(*UsageEventOneOf_UiIntegrationEnrollLinkClickEvent); ok { + return x.UiIntegrationEnrollLinkClickEvent + } + return nil +} + // XXX_OneofWrappers is for the internal use of the proto package. 
func (*UsageEventOneOf) XXX_OneofWrappers() []interface{} { return []interface{}{ @@ -5745,6 +6218,10 @@ func (*UsageEventOneOf) XXX_OneofWrappers() []interface{} { (*UsageEventOneOf_UiAccessGraphCrownJewelDiffView)(nil), (*UsageEventOneOf_UserTaskStateEvent)(nil), (*UsageEventOneOf_UiIntegrationEnrollStepEvent)(nil), + (*UsageEventOneOf_UiIntegrationEnrollSectionOpenEvent)(nil), + (*UsageEventOneOf_UiIntegrationEnrollFieldCompleteEvent)(nil), + (*UsageEventOneOf_UiIntegrationEnrollCodeCopyEvent)(nil), + (*UsageEventOneOf_UiIntegrationEnrollLinkClickEvent)(nil), } } @@ -5755,6 +6232,9 @@ func init() { proto.RegisterEnum("teleport.usageevents.v1.IntegrationEnrollKind", IntegrationEnrollKind_name, IntegrationEnrollKind_value) proto.RegisterEnum("teleport.usageevents.v1.IntegrationEnrollStep", IntegrationEnrollStep_name, IntegrationEnrollStep_value) proto.RegisterEnum("teleport.usageevents.v1.IntegrationEnrollStatusCode", IntegrationEnrollStatusCode_name, IntegrationEnrollStatusCode_value) + proto.RegisterEnum("teleport.usageevents.v1.IntegrationEnrollSection", IntegrationEnrollSection_name, IntegrationEnrollSection_value) + proto.RegisterEnum("teleport.usageevents.v1.IntegrationEnrollField", IntegrationEnrollField_name, IntegrationEnrollField_value) + proto.RegisterEnum("teleport.usageevents.v1.IntegrationEnrollCodeType", IntegrationEnrollCodeType_name, IntegrationEnrollCodeType_value) proto.RegisterEnum("teleport.usageevents.v1.Feature", Feature_name, Feature_value) proto.RegisterEnum("teleport.usageevents.v1.FeatureRecommendationStatus", FeatureRecommendationStatus_name, FeatureRecommendationStatus_value) proto.RegisterEnum("teleport.usageevents.v1.UIDiscoverDeployServiceEvent_DeployMethod", UIDiscoverDeployServiceEvent_DeployMethod_name, UIDiscoverDeployServiceEvent_DeployMethod_value) @@ -5819,6 +6299,10 @@ func init() { proto.RegisterType((*UIIntegrationEnrollCompleteEvent)(nil), "teleport.usageevents.v1.UIIntegrationEnrollCompleteEvent") 
proto.RegisterType((*IntegrationEnrollStepStatus)(nil), "teleport.usageevents.v1.IntegrationEnrollStepStatus") proto.RegisterType((*UIIntegrationEnrollStepEvent)(nil), "teleport.usageevents.v1.UIIntegrationEnrollStepEvent") + proto.RegisterType((*UIIntegrationEnrollSectionOpenEvent)(nil), "teleport.usageevents.v1.UIIntegrationEnrollSectionOpenEvent") + proto.RegisterType((*UIIntegrationEnrollFieldCompleteEvent)(nil), "teleport.usageevents.v1.UIIntegrationEnrollFieldCompleteEvent") + proto.RegisterType((*UIIntegrationEnrollCodeCopyEvent)(nil), "teleport.usageevents.v1.UIIntegrationEnrollCodeCopyEvent") + proto.RegisterType((*UIIntegrationEnrollLinkClickEvent)(nil), "teleport.usageevents.v1.UIIntegrationEnrollLinkClickEvent") proto.RegisterType((*ResourceCreateEvent)(nil), "teleport.usageevents.v1.ResourceCreateEvent") proto.RegisterType((*DiscoveredDatabaseMetadata)(nil), "teleport.usageevents.v1.DiscoveredDatabaseMetadata") proto.RegisterType((*FeatureRecommendationEvent)(nil), "teleport.usageevents.v1.FeatureRecommendationEvent") @@ -5836,376 +6320,413 @@ func init() { } var fileDescriptor_94cf2ca1c69fd564 = []byte{ - // 5894 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe4, 0x7d, 0x5b, 0x6c, 0x1b, 0xcb, - 0x79, 0x3f, 0x97, 0x92, 0x6c, 0x69, 0x6c, 0xcb, 0xeb, 0xb5, 0x2d, 0x53, 0x92, 0xe5, 0x0b, 0x7d, - 0xd7, 0x39, 0x47, 0x3e, 0xd6, 0x71, 0x72, 0x7c, 0x2e, 0xc9, 0xf9, 0xaf, 0x96, 0x2b, 0x71, 0x2d, - 0x92, 0xcb, 0x33, 0xbb, 0xb4, 0xa3, 0x13, 0x04, 0xf3, 0x5f, 0x71, 0x47, 0xd2, 0x56, 0x24, 0x97, - 0xd9, 0x5d, 0x4a, 0x47, 0x28, 0x82, 0xa4, 0x4d, 0x1b, 0x14, 0x4d, 0xd2, 0xb4, 0x40, 0x11, 0x14, - 0x08, 0xd0, 0x0b, 0x7a, 0x41, 0x1f, 0x0a, 0xf4, 0xa9, 0xc8, 0x53, 0x81, 0x16, 0x68, 0x81, 0x3e, - 0xf4, 0xa1, 0x40, 0x1f, 0xfb, 0x52, 0xe4, 0xa9, 0x6f, 0x79, 0x6a, 0x81, 0x36, 0x2d, 0x5a, 0xcc, - 0x65, 0x97, 0x4b, 0x72, 0xc9, 0x5d, 0x9f, 0xfa, 0x34, 0x80, 0xfd, 0xa6, 0x9d, 0xf9, 0x2e, 0xbf, - 0x99, 0xf9, 0xe6, 0x9b, 0x6f, 
0x66, 0xbe, 0xa1, 0xc0, 0x83, 0x00, 0xb7, 0x70, 0xd7, 0xf5, 0x82, - 0x87, 0x3d, 0xdf, 0xda, 0xc7, 0xf8, 0x08, 0x77, 0x02, 0xff, 0xe1, 0xd1, 0xa3, 0xf8, 0xe7, 0x5a, - 0xd7, 0x73, 0x03, 0x57, 0xba, 0x12, 0x92, 0xae, 0xc5, 0xeb, 0x8e, 0x1e, 0x2d, 0xdd, 0x8b, 0x64, - 0x58, 0xcd, 0x26, 0xf6, 0xfd, 0x96, 0xe3, 0x07, 0x44, 0x44, 0xff, 0x8b, 0x49, 0x28, 0xae, 0x02, - 0xa9, 0xa1, 0x6d, 0x58, 0x9d, 0x0e, 0xf6, 0x94, 0x96, 0xd3, 0x3c, 0x54, 0x89, 0x08, 0xe9, 0x12, - 0x98, 0xb1, 0x5a, 0xd8, 0x0b, 0x0a, 0xc2, 0x0d, 0xe1, 0xfe, 0x1c, 0x64, 0x1f, 0xc5, 0x4d, 0x70, - 0xbf, 0xa1, 0xe9, 0x9d, 0x5d, 0xd7, 0xf2, 0x6c, 0xc5, 0x6d, 0x77, 0x5b, 0x38, 0xc0, 0x5b, 0xae, - 0xe9, 0x96, 0x2c, 0xff, 0x80, 0x15, 0xf6, 0x25, 0x2c, 0x81, 0xd9, 0x9e, 0x8f, 0xbd, 0x8e, 0xd5, - 0xc6, 0x5c, 0x48, 0xf4, 0x5d, 0xbc, 0x03, 0x6e, 0x45, 0x72, 0x64, 0xdb, 0xde, 0x74, 0x3c, 0x3f, - 0x80, 0xd8, 0x77, 0x7b, 0x5e, 0x13, 0xf7, 0x45, 0x14, 0x57, 0x63, 0xea, 0x86, 0xc9, 0x2a, 0x56, - 0x10, 0x07, 0x5c, 0xfc, 0x08, 0xdc, 0x8c, 0x68, 0x0d, 0x1c, 0x28, 0x1e, 0xb6, 0x71, 0x27, 0x70, - 0xac, 0x96, 0xd1, 0xdb, 0x6d, 0x3b, 0x41, 0x3a, 0xa6, 0xb8, 0x80, 0x8f, 0x7b, 0xd8, 0x0f, 0x1c, - 0xb7, 0xd3, 0xb1, 0x1c, 0x0f, 0x67, 0x15, 0xf0, 0x0d, 0x70, 0x27, 0x12, 0x00, 0xf1, 0xbe, 0xe3, - 0x13, 0x80, 0x07, 0x56, 0xab, 0x85, 0x3b, 0xfb, 0x59, 0x85, 0x48, 0x8b, 0x60, 0xb6, 0xbd, 0x67, - 0xa1, 0xe0, 0xa4, 0x8b, 0x0b, 0x79, 0x5a, 0x77, 0xba, 0xbd, 0x67, 0x99, 0x27, 0x5d, 0x2c, 0xad, - 0x00, 0xd0, 0x72, 0xf7, 0x9d, 0x0e, 0xda, 0x6b, 0xb9, 0xc7, 0x85, 0x29, 0x5a, 0x39, 0x47, 0x4b, - 0x36, 0x5b, 0xee, 0x31, 0xc3, 0x0f, 0x71, 0xd3, 0x3d, 0xc2, 0xde, 0x89, 0xe2, 0xda, 0xd8, 0x57, - 0xdc, 0x4e, 0xe0, 0x74, 0x7a, 0x38, 0xe3, 0xa0, 0x7c, 0x00, 0x56, 0x46, 0x04, 0x74, 0x4f, 0x32, - 0x32, 0x7f, 0x08, 0xae, 0x0d, 0x31, 0xd7, 0x3d, 0xa7, 0x13, 0x64, 0xe4, 0x2e, 0x02, 0xb1, 0xe4, - 0xf8, 0x94, 0xb9, 0x8a, 0x03, 0xcb, 0xb6, 0x02, 0x4b, 0x9a, 0x07, 0x79, 0xc7, 0xe6, 0x94, 0x79, - 0xc7, 0x2e, 0x5a, 0xa0, 0x10, 0xd2, 0x84, 0x36, 0x10, 0xd1, 0xaa, 
0x60, 0xd6, 0xe3, 0x65, 0x94, - 0x63, 0x7e, 0xfd, 0xc1, 0xda, 0x98, 0x89, 0xb1, 0x36, 0x2c, 0x04, 0x46, 0xac, 0xc5, 0x43, 0x20, - 0x85, 0xb5, 0x46, 0x80, 0xbb, 0x46, 0x60, 0x05, 0x3d, 0x5f, 0xfa, 0x08, 0x9c, 0xf2, 0xe9, 0x5f, - 0x5c, 0xf4, 0xbd, 0x54, 0xd1, 0x8c, 0x11, 0x72, 0x36, 0x32, 0x97, 0xb0, 0xe7, 0xb9, 0x1e, 0x1f, - 0x50, 0xf6, 0x51, 0xfc, 0x23, 0x01, 0x2c, 0x34, 0xb4, 0x18, 0x8b, 0x17, 0x60, 0x9b, 0x75, 0x95, - 0x0a, 0x66, 0xdb, 0xbc, 0x69, 0x54, 0xe7, 0x99, 0x0c, 0xcd, 0x09, 0xfb, 0x02, 0x46, 0xac, 0x92, - 0x12, 0x01, 0xcf, 0x53, 0x21, 0x6f, 0x64, 0x00, 0x1e, 0xb6, 0x3a, 0x04, 0x5f, 0xfc, 0x2f, 0x01, - 0xdc, 0xe8, 0xc3, 0x0c, 0x3b, 0xcd, 0xc0, 0x2d, 0xdc, 0x24, 0x33, 0xe4, 0xa5, 0x02, 0xae, 0xc6, - 0x86, 0x91, 0x41, 0x7e, 0x94, 0x79, 0x18, 0xfb, 0xe2, 0x42, 0x11, 0xb1, 0xf6, 0x4f, 0x7d, 0xf6, - 0xf6, 0xff, 0x6a, 0x9e, 0x38, 0xa1, 0x90, 0x40, 0xeb, 0x04, 0x78, 0xdf, 0xb3, 0x48, 0xcb, 0xe5, - 0xe7, 0x86, 0xae, 0x95, 0x14, 0xc5, 0xed, 0x74, 0x70, 0x33, 0x78, 0xe5, 0xfb, 0xe1, 0xc7, 0xf9, - 0xb8, 0x1d, 0x94, 0xac, 0xc0, 0xda, 0xb5, 0x7c, 0x0c, 0x4b, 0x86, 0xda, 0xf1, 0xdc, 0x56, 0xeb, - 0x55, 0x6f, 0xbf, 0xf4, 0x04, 0x14, 0x7c, 0x6a, 0xf4, 0xd8, 0x46, 0xa1, 0x64, 0x1f, 0x35, 0xdd, - 0x5e, 0x27, 0x28, 0x4c, 0xdf, 0x10, 0xee, 0x4f, 0xc1, 0x85, 0xb0, 0x3e, 0x84, 0xe2, 0x2b, 0xa4, - 0xb6, 0xf8, 0xef, 0x02, 0xb8, 0xda, 0xef, 0xb9, 0xed, 0xde, 0x2e, 0x56, 0xb7, 0x5f, 0x93, 0x5e, - 0x2b, 0x3e, 0x05, 0x85, 0x86, 0xa6, 0x58, 0xad, 0x96, 0xe9, 0xca, 0xd4, 0x5f, 0xc4, 0x16, 0x84, - 0x35, 0x30, 0xd5, 0xe4, 0x2d, 0x9e, 0x5f, 0xbf, 0x3a, 0x56, 0xba, 0x62, 0xca, 0x90, 0x10, 0x16, - 0xbf, 0x3f, 0x13, 0xef, 0xc7, 0x12, 0xee, 0xb6, 0xdc, 0x13, 0x03, 0x7b, 0x47, 0x4e, 0x13, 0xbf, - 0xf2, 0xd6, 0xb7, 0x0f, 0xce, 0xd9, 0xb4, 0xc1, 0xa8, 0x8d, 0x83, 0x03, 0xd7, 0xa6, 0x26, 0x37, - 0xbf, 0xbe, 0x31, 0x56, 0xd6, 0xa4, 0x8e, 0x5a, 0x63, 0x45, 0x55, 0x2a, 0x09, 0x9e, 0xb5, 0x63, - 0x5f, 0x92, 0x05, 0xce, 0x70, 0x45, 0x34, 0x04, 0x99, 0xa1, 0x6a, 0xfe, 0xdf, 0xff, 0x46, 0x0d, - 0x89, 
0x5d, 0x20, 0xb0, 0xa3, 0xbf, 0x8b, 0x08, 0x9c, 0x8d, 0x03, 0x90, 0x56, 0xc0, 0x62, 0x49, - 0xad, 0x57, 0xf4, 0x1d, 0x54, 0x55, 0xcd, 0xb2, 0x5e, 0x42, 0x8d, 0x9a, 0x51, 0x57, 0x15, 0x6d, - 0x53, 0x53, 0x4b, 0x62, 0x4e, 0x5a, 0x00, 0xd2, 0x60, 0xb5, 0xdc, 0x30, 0x75, 0x51, 0x90, 0x0a, - 0xe0, 0xd2, 0x60, 0x79, 0x55, 0xae, 0x35, 0xe4, 0x8a, 0x98, 0x2f, 0x62, 0x00, 0xfa, 0xaa, 0xa5, - 0x65, 0x70, 0x85, 0xd3, 0x99, 0x3b, 0x75, 0x75, 0x48, 0xf8, 0x35, 0xb0, 0x14, 0xaf, 0xd4, 0x6a, - 0x86, 0x29, 0x57, 0x2a, 0xc8, 0x50, 0xa0, 0x56, 0x37, 0x45, 0x41, 0x5a, 0x02, 0x0b, 0xf1, 0x7a, - 0xb9, 0x2a, 0x7f, 0xa2, 0xd7, 0x90, 0xaa, 0x18, 0x62, 0xbe, 0xf8, 0xa3, 0x69, 0x70, 0xbb, 0xdf, - 0x7e, 0xc5, 0xc3, 0x56, 0x80, 0xc3, 0xaf, 0x13, 0xc5, 0xed, 0xec, 0x39, 0xfb, 0xaf, 0xbc, 0x5d, - 0xba, 0xe0, 0x5c, 0x93, 0xb6, 0x74, 0xd0, 0x2e, 0x9f, 0x66, 0x30, 0x98, 0xf1, 0x1d, 0xb6, 0xc6, - 0xfe, 0x0e, 0xed, 0xb3, 0x19, 0xfb, 0x2a, 0xfe, 0x89, 0x00, 0xce, 0xc6, 0xab, 0x89, 0xf5, 0x28, - 0x7a, 0x6d, 0x53, 0xdb, 0x4a, 0xb6, 0x9e, 0x91, 0x6a, 0xf9, 0xb9, 0x81, 0x54, 0x65, 0x1d, 0x19, - 0x46, 0x55, 0x14, 0xc8, 0xf8, 0x27, 0x57, 0xab, 0x9a, 0xa2, 0x8a, 0xf9, 0x64, 0x76, 0x58, 0x32, - 0xa8, 0x09, 0x4c, 0x49, 0x8b, 0xe0, 0x72, 0x02, 0xfb, 0xb6, 0x21, 0x4e, 0x17, 0xff, 0x53, 0x00, - 0xd7, 0x13, 0xd6, 0x4b, 0xbe, 0x2f, 0x78, 0xe5, 0x1d, 0xff, 0x2f, 0xe5, 0xe3, 0x93, 0x23, 0x6c, - 0x3e, 0x1b, 0xb9, 0x9e, 0x87, 0xab, 0x66, 0xc5, 0x78, 0xe5, 0xfb, 0xe0, 0x37, 0xf2, 0xe0, 0x51, - 0xdc, 0x41, 0xfa, 0x87, 0x81, 0xdb, 0x25, 0xcb, 0xe0, 0x11, 0x2e, 0x39, 0x1e, 0x6e, 0x06, 0xae, - 0x77, 0x62, 0xba, 0x6e, 0xcb, 0xd7, 0x3a, 0x7e, 0x60, 0xbd, 0x06, 0xd1, 0xc0, 0x77, 0xf3, 0x60, - 0x2d, 0xad, 0x43, 0x22, 0x13, 0x79, 0xe5, 0x7b, 0xe3, 0xcf, 0xf2, 0xe0, 0x6e, 0xbf, 0x37, 0xe4, - 0x5e, 0xe0, 0x86, 0x7f, 0xc7, 0x42, 0xc8, 0x57, 0x7e, 0x05, 0xb9, 0x07, 0xce, 0x27, 0x87, 0xd3, - 0xf3, 0xde, 0x60, 0x18, 0xfd, 0xad, 0x3c, 0xb8, 0xd5, 0xef, 0x2e, 0x55, 0x59, 0xa7, 0xb3, 0xa6, - 0xf3, 0x3a, 0xed, 0x45, 0xff, 0x4d, 0x00, 
0x8b, 0xc3, 0x11, 0x17, 0x59, 0xa8, 0x5e, 0xb3, 0x86, - 0xb3, 0xc8, 0xa1, 0xe6, 0xda, 0xaf, 0xbe, 0x8f, 0xf8, 0x99, 0x00, 0xae, 0x0d, 0x37, 0x5c, 0xee, - 0x76, 0x49, 0x98, 0xfd, 0x1a, 0x04, 0x11, 0xdf, 0xc9, 0x83, 0x07, 0x13, 0x82, 0x08, 0x4d, 0xae, - 0xd6, 0xdd, 0x96, 0xd3, 0x3c, 0x79, 0xe5, 0x3b, 0xe2, 0xbf, 0x05, 0x50, 0xec, 0x77, 0x44, 0xdd, - 0x73, 0x3a, 0x4d, 0xa7, 0x6b, 0xb5, 0xfc, 0xd7, 0x67, 0xb1, 0xfc, 0x0f, 0x01, 0xac, 0xf4, 0x7b, - 0xc0, 0xc4, 0x7e, 0xc0, 0x0f, 0xde, 0x5e, 0x07, 0xbf, 0xff, 0xaf, 0x02, 0x28, 0xc4, 0xbc, 0x00, - 0xbf, 0x78, 0xb1, 0x5f, 0xf9, 0x76, 0x2f, 0x13, 0xaf, 0xcf, 0xbd, 0x3d, 0x3e, 0x86, 0x6e, 0x2b, - 0x7e, 0x39, 0xf4, 0xf7, 0xd4, 0x22, 0x06, 0x6a, 0x0d, 0xeb, 0x28, 0x7e, 0xd9, 0x71, 0x0b, 0x9c, - 0x23, 0x11, 0x82, 0x6d, 0x79, 0x36, 0xea, 0xf9, 0x98, 0x5d, 0x26, 0xcc, 0xc2, 0xb3, 0x61, 0x61, - 0xc3, 0xc7, 0xb6, 0xb4, 0x0c, 0xe6, 0x4e, 0xac, 0x76, 0x8b, 0x11, 0xe4, 0x29, 0xc1, 0x2c, 0x29, - 0xa0, 0x95, 0x77, 0xc1, 0xf9, 0xb6, 0x6b, 0x63, 0x74, 0x7c, 0x80, 0x3b, 0xc8, 0xb7, 0x8e, 0xb0, - 0xcd, 0xef, 0x5d, 0xce, 0x91, 0xe2, 0xe7, 0x07, 0xb8, 0x43, 0x54, 0xda, 0x92, 0x0c, 0x56, 0xf6, - 0x1c, 0xdc, 0xb2, 0x7d, 0x74, 0xec, 0x04, 0x07, 0xa8, 0xe9, 0x76, 0x8e, 0xb0, 0xe7, 0x3b, 0x6e, - 0x07, 0xd1, 0xb3, 0x7e, 0xbf, 0x30, 0x7d, 0x63, 0xea, 0xfe, 0x1c, 0x5c, 0x62, 0x44, 0xcf, 0x9d, - 0xe0, 0x40, 0x89, 0x48, 0x54, 0x4a, 0x51, 0xbc, 0x49, 0xb6, 0x8b, 0x83, 0x6d, 0x25, 0xc1, 0x4d, - 0x2b, 0xd6, 0xe2, 0x37, 0xc1, 0xea, 0x10, 0xc9, 0x33, 0x07, 0x1f, 0x97, 0xdc, 0x66, 0xaf, 0x8d, - 0x3b, 0x81, 0x35, 0x78, 0xbc, 0x56, 0xfc, 0x0b, 0x01, 0x5c, 0x96, 0x7d, 0xdf, 0x21, 0x33, 0x85, - 0x1a, 0x4c, 0x34, 0x53, 0xee, 0x81, 0xf3, 0x1c, 0x21, 0xe5, 0x41, 0xd1, 0x35, 0xcb, 0x7c, 0xbc, - 0x58, 0xb3, 0xa5, 0x9b, 0xe0, 0x6c, 0xe0, 0x06, 0x56, 0x0b, 0x05, 0xee, 0x21, 0xee, 0xb0, 0x6b, - 0x84, 0x29, 0x78, 0x86, 0x96, 0x99, 0xb4, 0x88, 0xf4, 0x71, 0xd7, 0x73, 0xdb, 0xdd, 0x20, 0xa4, - 0x99, 0xa2, 0x34, 0x67, 0x59, 0x21, 0x27, 0x7a, 0x03, 0x5c, 0x68, 0x46, 0x18, 
0x42, 0x42, 0x16, - 0xe5, 0x89, 0xfd, 0x0a, 0x46, 0x5c, 0xfc, 0x47, 0x01, 0x5c, 0x62, 0xb8, 0xd5, 0x4f, 0x71, 0xb3, - 0xf7, 0x19, 0x60, 0xaf, 0x00, 0xd0, 0x21, 0xa3, 0xc6, 0xa2, 0x49, 0x06, 0x7a, 0x8e, 0x94, 0xd0, - 0x40, 0x72, 0xa4, 0x55, 0x53, 0x19, 0x5a, 0x35, 0x9d, 0xb5, 0x55, 0x33, 0x63, 0x5a, 0xf5, 0x04, - 0x2c, 0xb1, 0x46, 0xd5, 0xf0, 0xb1, 0x12, 0x83, 0x1b, 0xdd, 0x8d, 0x35, 0xad, 0x00, 0xef, 0xbb, - 0xde, 0x49, 0x78, 0x37, 0x16, 0x7e, 0x17, 0xff, 0x5c, 0x00, 0x17, 0x19, 0xab, 0x4c, 0xaf, 0x6e, - 0x21, 0xfe, 0x7a, 0x0f, 0xfb, 0xd4, 0xba, 0xc3, 0xd9, 0xc6, 0xce, 0xea, 0x18, 0xe3, 0xd9, 0xb0, - 0x90, 0x1e, 0x7e, 0xfd, 0x5c, 0x46, 0xf0, 0x47, 0x02, 0x38, 0x1b, 0x22, 0x26, 0xc5, 0xd2, 0x02, - 0x38, 0x65, 0xd1, 0xbf, 0x38, 0x46, 0xfe, 0xf5, 0xf3, 0x41, 0x77, 0x1b, 0x48, 0xac, 0x23, 0x2b, - 0x8e, 0x1f, 0x8c, 0xbd, 0x6d, 0xfc, 0x26, 0x28, 0xc4, 0xa9, 0xda, 0xbb, 0xb1, 0x9b, 0x49, 0x09, - 0x4c, 0xc7, 0x6e, 0x31, 0xe9, 0xdf, 0x92, 0x0e, 0xce, 0xb7, 0x29, 0x95, 0x7f, 0xe0, 0x74, 0xd1, - 0xa1, 0xd3, 0x61, 0xce, 0x64, 0x7e, 0xfd, 0x6e, 0xdf, 0xf1, 0xc5, 0xae, 0xde, 0x8f, 0x1e, 0xad, - 0x55, 0x23, 0xf2, 0x6d, 0xa7, 0x63, 0xc3, 0xf9, 0xf6, 0xc0, 0x77, 0xf1, 0xab, 0x40, 0xec, 0x03, - 0x60, 0x93, 0x5e, 0xda, 0x1a, 0x71, 0xf5, 0xe3, 0xdd, 0xea, 0x68, 0x1b, 0xfb, 0xce, 0x7e, 0x50, - 0x78, 0xa3, 0x6b, 0x7f, 0x7e, 0xc2, 0x4b, 0x98, 0x2c, 0x55, 0x2f, 0x4f, 0xf8, 0x5f, 0x0a, 0x60, - 0x61, 0x78, 0x60, 0x5e, 0x72, 0xef, 0x48, 0x9f, 0x84, 0x63, 0x89, 0x22, 0x79, 0x69, 0x2b, 0xe2, - 0x38, 0x5b, 0x09, 0x87, 0xb5, 0x3a, 0x09, 0xff, 0x4b, 0x1e, 0x80, 0xff, 0x73, 0xfc, 0x2f, 0x79, - 0x8c, 0x3f, 0x57, 0xfc, 0xbf, 0x97, 0x8f, 0xe3, 0xdf, 0xf2, 0xac, 0x4e, 0xe0, 0x9b, 0x6e, 0xc3, - 0xc7, 0x9e, 0xb4, 0x06, 0x2e, 0xd2, 0x15, 0x03, 0x79, 0x6e, 0x0b, 0xfb, 0x68, 0x9f, 0xd4, 0xf1, - 0xa0, 0x61, 0x06, 0x5e, 0xa0, 0x55, 0x64, 0xcd, 0xf5, 0xb7, 0x58, 0x05, 0x59, 0xf4, 0x19, 0xbd, - 0xd3, 0x39, 0xc0, 0x9e, 0x43, 0x2f, 0x06, 0x07, 0x38, 0xa7, 0x28, 0xe7, 0x12, 0x25, 0xd2, 0x42, - 0x9a, 0x01, 0x11, 
0x6f, 0x83, 0x4b, 0x4c, 0x44, 0xe0, 0x59, 0x4e, 0xd0, 0xe7, 0xcc, 0x53, 0x4e, - 0x89, 0xd6, 0x99, 0xb4, 0x2a, 0xe4, 0x50, 0xc0, 0xb5, 0x61, 0xa5, 0x43, 0xbc, 0xd3, 0x94, 0x77, - 0x79, 0x50, 0xeb, 0xa0, 0x90, 0x65, 0x30, 0xd7, 0xf3, 0xb1, 0x87, 0xa8, 0x17, 0x9b, 0xe9, 0xe7, - 0x62, 0xd4, 0xac, 0x36, 0x2e, 0xfe, 0x70, 0x2a, 0xde, 0x43, 0x10, 0x1f, 0x39, 0xf8, 0xf8, 0x65, - 0xcf, 0xb0, 0x27, 0x60, 0xd1, 0xb6, 0x4e, 0x7c, 0xd4, 0xb5, 0xfc, 0x00, 0x75, 0xf0, 0xa7, 0x01, - 0xb2, 0x7a, 0xb6, 0x13, 0x20, 0x32, 0x0f, 0x78, 0xe3, 0x2f, 0x13, 0x82, 0xba, 0x45, 0x16, 0xcc, - 0x4f, 0x03, 0x99, 0xd4, 0x96, 0x08, 0x84, 0x4d, 0x70, 0x3d, 0xe6, 0x67, 0x3d, 0xfc, 0xf5, 0x9e, - 0xe3, 0x61, 0x12, 0xfe, 0xf8, 0xa8, 0x79, 0x60, 0x75, 0xf6, 0x79, 0xb7, 0xcf, 0xc2, 0x95, 0x3e, - 0x19, 0x8c, 0x51, 0x29, 0x8c, 0x48, 0x7a, 0x02, 0x0a, 0x1e, 0x6d, 0x1a, 0xda, 0x23, 0x42, 0x70, - 0xa7, 0x79, 0x12, 0x09, 0x98, 0xa6, 0x02, 0x16, 0x58, 0xfd, 0x66, 0x58, 0x1d, 0x72, 0x7e, 0x08, - 0x96, 0x39, 0xa7, 0x6d, 0x9d, 0x20, 0x77, 0x0f, 0xb5, 0xdd, 0x0e, 0x89, 0xf9, 0x38, 0xf3, 0x0c, - 0x65, 0xbe, 0xc2, 0x48, 0x4a, 0xd6, 0x89, 0xbe, 0x57, 0x25, 0xf5, 0x21, 0xf7, 0x7b, 0x60, 0xb1, - 0xd3, 0xa3, 0xb6, 0xed, 0xee, 0x21, 0x0f, 0xb7, 0xdd, 0x23, 0x6c, 0x23, 0x0e, 0xb5, 0x70, 0x8a, - 0xb6, 0x7c, 0x81, 0x11, 0xe8, 0x7b, 0x90, 0x55, 0xf3, 0x85, 0xa2, 0xf8, 0xdb, 0xc2, 0xe8, 0xc0, - 0xbc, 0xec, 0xa9, 0xf7, 0x08, 0x5c, 0x66, 0xab, 0x14, 0x22, 0xcb, 0x14, 0xe2, 0x0d, 0x75, 0x6c, - 0x9e, 0xba, 0x22, 0x59, 0x43, 0xfa, 0x35, 0xbb, 0xf8, 0x3d, 0x01, 0x2c, 0xc6, 0xd2, 0x22, 0xd8, - 0xc5, 0xf6, 0xb8, 0x75, 0x55, 0xda, 0x00, 0xd3, 0xb1, 0xc5, 0x71, 0x6d, 0x2c, 0xca, 0x11, 0x89, - 0x74, 0x91, 0xa4, 0xbc, 0x83, 0xe6, 0x3b, 0x35, 0x64, 0xbe, 0x2e, 0xd9, 0x15, 0x8c, 0x70, 0xd3, - 0xf4, 0x1a, 0x16, 0x6b, 0xd5, 0x46, 0xfa, 0x6a, 0x3d, 0x3b, 0x8a, 0x84, 0x15, 0xc9, 0x03, 0x37, - 0x12, 0x14, 0x86, 0x9b, 0xb4, 0xcf, 0x47, 0xe7, 0x37, 0xc0, 0x72, 0x42, 0x13, 0xa3, 0x8c, 0xa5, - 0x32, 0x98, 0x6e, 0xba, 0x76, 0x98, 0x0a, 0xf5, 0x38, 
0xbb, 0x2a, 0xc6, 0xaf, 0xb8, 0x36, 0x86, - 0x54, 0xc2, 0x98, 0xd4, 0xa5, 0x6f, 0xe5, 0xc1, 0xd5, 0xc4, 0x4e, 0xc6, 0xdd, 0xcf, 0xa5, 0xbd, - 0xc4, 0x6a, 0xfc, 0x00, 0x77, 0x5f, 0xdc, 0x6a, 0x08, 0x24, 0x48, 0x79, 0xa5, 0xca, 0xd0, 0x8e, - 0xf4, 0xf1, 0x8b, 0x49, 0x19, 0xda, 0x9a, 0xfe, 0x93, 0x00, 0x2e, 0x46, 0x19, 0x8b, 0xd4, 0x3b, - 0x46, 0x7b, 0xce, 0xf4, 0xa8, 0x3c, 0x76, 0xe6, 0x8d, 0x5c, 0xcf, 0xd9, 0x77, 0x3a, 0xbc, 0x7f, - 0xa3, 0x33, 0x6f, 0x9d, 0x96, 0x4a, 0x77, 0xc0, 0x7c, 0xb3, 0xe5, 0xf6, 0x6c, 0xd4, 0xf5, 0xdc, - 0x23, 0xc7, 0xc6, 0x5e, 0xb8, 0xfd, 0xa4, 0xa5, 0x75, 0x5e, 0x28, 0xe9, 0x60, 0xd6, 0xe6, 0x87, - 0x63, 0xd4, 0x79, 0x9d, 0x59, 0x7f, 0x27, 0x75, 0xbb, 0x8d, 0xed, 0xf0, 0x3c, 0xad, 0xdf, 0xdf, - 0xa1, 0x90, 0xe2, 0x33, 0xb0, 0x34, 0x9e, 0x4e, 0xba, 0x02, 0x4e, 0xdb, 0xbb, 0xf1, 0xd6, 0x9d, - 0xb2, 0x77, 0x69, 0xbb, 0xae, 0x83, 0x33, 0xf6, 0x2e, 0xa2, 0x69, 0xa5, 0x4d, 0xb7, 0xc5, 0xdb, - 0x04, 0xec, 0xdd, 0x3a, 0x2f, 0x29, 0xfe, 0x54, 0x00, 0x4b, 0x9b, 0xd8, 0x0a, 0x7a, 0x1e, 0x86, - 0xb8, 0xe9, 0xb6, 0xdb, 0xb8, 0x63, 0xc7, 0xb6, 0x41, 0x03, 0x13, 0x5b, 0x18, 0x9c, 0xd8, 0xd2, - 0xfb, 0xe0, 0xf4, 0x1e, 0x63, 0xe5, 0x66, 0x70, 0x63, 0x6c, 0x1b, 0x43, 0x15, 0x21, 0x83, 0xf4, - 0x29, 0x58, 0xe1, 0x7f, 0x22, 0x6f, 0x40, 0x2f, 0x8a, 0x99, 0xc4, 0xa4, 0x99, 0x92, 0x08, 0x9a, - 0x9b, 0xc4, 0xf2, 0xde, 0xf8, 0xca, 0xe2, 0x31, 0xb8, 0x64, 0xca, 0x5b, 0x6c, 0x27, 0x8b, 0x3f, - 0xee, 0x61, 0x8f, 0x1f, 0x56, 0x5e, 0x07, 0x6c, 0x9b, 0x83, 0xc8, 0xbe, 0x94, 0x65, 0x16, 0x4e, - 0x41, 0x40, 0x8b, 0x6a, 0xa4, 0xa4, 0x4f, 0x80, 0xed, 0x7d, 0x1c, 0x6e, 0x8d, 0x18, 0x81, 0x4a, - 0x4a, 0xc8, 0x2e, 0xd7, 0xf1, 0x91, 0xdf, 0xa3, 0x1e, 0x99, 0x2f, 0x7a, 0x73, 0x8e, 0x6f, 0xb0, - 0x82, 0xe2, 0xbf, 0x4c, 0x81, 0x2b, 0xcc, 0xd5, 0x6f, 0x79, 0x56, 0xf7, 0x40, 0x7e, 0x6e, 0x18, - 0x4d, 0xab, 0x13, 0x66, 0xde, 0x5c, 0xe4, 0xb2, 0x9b, 0xeb, 0xc8, 0xe1, 0xd7, 0x28, 0x0c, 0xc4, - 0x34, 0xbc, 0xc0, 0x74, 0x34, 0xa3, 0xfb, 0x95, 0x18, 0x16, 0x32, 0x18, 0x0c, 0xcb, 0x34, 
0xc7, - 0x42, 0x22, 0x27, 0xbf, 0xbf, 0x91, 0xdb, 0xf7, 0xdc, 0x5e, 0x97, 0xa1, 0x99, 0xe6, 0x1b, 0xb9, - 0x2d, 0x5a, 0xd4, 0x97, 0x41, 0x63, 0x24, 0x6a, 0xa6, 0xa1, 0x0c, 0x1a, 0x12, 0x11, 0x5b, 0x67, - 0x04, 0x5d, 0xb7, 0xe5, 0x34, 0x1d, 0xcc, 0xf6, 0xd2, 0xd3, 0xf0, 0x1c, 0x2d, 0xad, 0xf3, 0x42, - 0xe9, 0x4d, 0x20, 0x71, 0xec, 0x87, 0x3e, 0x6a, 0xb6, 0x7a, 0x7e, 0x10, 0xae, 0x9c, 0xd3, 0x50, - 0x64, 0xd0, 0x0f, 0x7d, 0x85, 0x97, 0xf7, 0x5b, 0xea, 0xd9, 0x7e, 0xac, 0xa5, 0xa7, 0x63, 0x2d, - 0x85, 0xb6, 0xdf, 0x6f, 0xe9, 0x7d, 0xc0, 0x64, 0x20, 0xff, 0x1d, 0xb4, 0xdb, 0x6b, 0x1e, 0xe2, - 0xc0, 0x2f, 0xcc, 0x52, 0x62, 0x06, 0xce, 0x78, 0x67, 0x83, 0x95, 0x92, 0xd0, 0x8d, 0x53, 0x5a, - 0xed, 0x56, 0x34, 0x3f, 0xfd, 0xc2, 0x1c, 0xa5, 0x66, 0x18, 0x0d, 0xab, 0xdd, 0x0a, 0x27, 0x69, - 0x8c, 0xc3, 0x75, 0xec, 0x66, 0x8c, 0x03, 0xc4, 0x38, 0x74, 0xc7, 0x6e, 0xf6, 0x39, 0xa2, 0x2e, - 0xb1, 0x9a, 0x34, 0x9c, 0xf3, 0x0b, 0x67, 0x62, 0x5d, 0x22, 0xf3, 0xc2, 0xe2, 0x0f, 0x05, 0x70, - 0xbb, 0xa1, 0xc5, 0x06, 0x5b, 0xf1, 0xdc, 0xe3, 0xce, 0x53, 0x7c, 0x8c, 0x5b, 0x25, 0x67, 0x6f, - 0xef, 0x99, 0x83, 0x8f, 0xd9, 0xb8, 0x3f, 0x01, 0x05, 0x6b, 0x6f, 0x6f, 0x30, 0x87, 0x0d, 0xc5, - 0xd2, 0x66, 0xe7, 0xe0, 0x42, 0x58, 0x1f, 0x25, 0x7a, 0xb2, 0xe3, 0xbc, 0xc7, 0x60, 0x61, 0x94, - 0x33, 0x96, 0xa4, 0x7c, 0x69, 0x98, 0x8f, 0x66, 0xfa, 0x6c, 0x82, 0xab, 0x06, 0x6e, 0xf6, 0x3c, - 0x27, 0x38, 0x81, 0x74, 0x5a, 0x6d, 0xe1, 0x00, 0x62, 0xbf, 0xd7, 0xe2, 0x4b, 0x71, 0xd2, 0x46, - 0x5a, 0x02, 0xd3, 0x24, 0xf2, 0xe3, 0x51, 0x20, 0xfd, 0xbb, 0x68, 0x81, 0x8b, 0x51, 0x9e, 0xc8, - 0x26, 0x0e, 0x9a, 0x07, 0x8c, 0x7d, 0xd4, 0x3b, 0x0a, 0x49, 0xde, 0x71, 0xc4, 0x25, 0xe7, 0x47, - 0x5d, 0x72, 0xf1, 0x07, 0x02, 0x90, 0x88, 0x2d, 0x9b, 0x96, 0x7f, 0x48, 0xa6, 0x2e, 0x8e, 0x3c, - 0x52, 0x60, 0xf9, 0x87, 0x71, 0x67, 0x37, 0x4b, 0x0a, 0xc2, 0x84, 0x6c, 0xc7, 0xf7, 0x7b, 0x03, - 0x52, 0xe7, 0x68, 0x09, 0xad, 0xbe, 0x04, 0x66, 0x88, 0x77, 0x09, 0x43, 0x14, 0xf6, 0x41, 0x7c, - 0x7f, 0x64, 0x87, 0xb1, 0xfb, 
0xce, 0x19, 0x38, 0x1f, 0x15, 0xb3, 0xfb, 0xce, 0xbf, 0xfd, 0x00, - 0x9c, 0x6f, 0x10, 0x2f, 0x44, 0x91, 0xe8, 0x1d, 0xac, 0xef, 0x49, 0x0d, 0x70, 0xbe, 0xe7, 0xa0, - 0x5d, 0x9a, 0xac, 0x8f, 0x9a, 0x2d, 0xa7, 0x79, 0x98, 0x1a, 0xee, 0x8d, 0xe6, 0xf6, 0x97, 0x73, - 0xf0, 0x5c, 0xcf, 0x89, 0x95, 0x4a, 0x3f, 0x12, 0xc0, 0x83, 0x9e, 0x83, 0x5c, 0x96, 0xbb, 0x8e, - 0xf8, 0x91, 0x09, 0x46, 0xfb, 0x2e, 0x0a, 0x5c, 0x64, 0x87, 0xc9, 0xfd, 0x5c, 0x23, 0x5b, 0x3e, - 0xe5, 0x09, 0x1a, 0xb3, 0xbd, 0x10, 0x28, 0xe7, 0xe0, 0xad, 0x9e, 0x93, 0x4a, 0x2b, 0x7d, 0x57, - 0x00, 0xb7, 0x62, 0xe8, 0x2c, 0xdb, 0x46, 0x7b, 0x8e, 0x47, 0xa3, 0x53, 0x3e, 0xaa, 0x0c, 0x17, - 0x5b, 0xf9, 0x3e, 0x4c, 0xc7, 0x35, 0xfe, 0xc5, 0x41, 0x39, 0x07, 0xaf, 0x45, 0x90, 0x12, 0xc9, - 0x86, 0xfb, 0x2a, 0x01, 0x4d, 0xcb, 0x0a, 0xa2, 0xd1, 0x99, 0xc9, 0xda, 0x57, 0x29, 0xcf, 0x1b, - 0x06, 0xfa, 0x6a, 0x3c, 0xad, 0xf4, 0x2b, 0x02, 0xb8, 0x11, 0x43, 0xe7, 0xe3, 0x00, 0x35, 0xa3, - 0x97, 0x10, 0xc8, 0xa7, 0x8f, 0x10, 0xa8, 0xb3, 0x3c, 0xb3, 0xfe, 0x7e, 0x3a, 0xa8, 0x71, 0xef, - 0x28, 0xca, 0x39, 0x78, 0x35, 0x42, 0x93, 0x40, 0x24, 0xfd, 0xa6, 0x00, 0x6e, 0xc7, 0x60, 0x78, - 0x3c, 0xeb, 0x89, 0x6c, 0x92, 0xd8, 0x73, 0x88, 0x10, 0xca, 0x69, 0x0a, 0xe5, 0xcb, 0xe9, 0x50, - 0x26, 0x3d, 0xa8, 0x28, 0xe7, 0xe0, 0x8d, 0x08, 0xce, 0x18, 0xc2, 0xb0, 0x67, 0x3c, 0xfe, 0x44, - 0x01, 0x91, 0xf0, 0x96, 0x4c, 0x40, 0xf6, 0x44, 0x82, 0x0f, 0xd7, 0x6c, 0x6a, 0xcf, 0xa4, 0x3c, - 0xb0, 0x60, 0x3d, 0x33, 0x9e, 0x48, 0xfa, 0x14, 0x5c, 0x4d, 0x42, 0xd1, 0x3d, 0xe1, 0x08, 0xe6, - 0x28, 0x82, 0x2f, 0x66, 0x47, 0x10, 0x7f, 0xa1, 0x51, 0xce, 0xc1, 0xc2, 0x88, 0x76, 0x4e, 0x20, - 0xfd, 0x22, 0x58, 0x19, 0xd5, 0xdc, 0xf5, 0x9c, 0x4e, 0xc0, 0x55, 0x03, 0xaa, 0xfa, 0xdd, 0xac, - 0xaa, 0x87, 0xde, 0x77, 0x94, 0x73, 0x70, 0x71, 0x48, 0x77, 0x9f, 0x42, 0x6a, 0x81, 0xc5, 0x9e, - 0x83, 0x6c, 0xee, 0xc4, 0x49, 0xd4, 0xe5, 0x91, 0xa5, 0x84, 0x0a, 0xa7, 0x8b, 0xda, 0x99, 0xf5, - 0x87, 0x19, 0x72, 0x06, 0xe3, 0xaf, 0x24, 0xca, 0x39, 0xb8, 0xd0, 
0x73, 0x12, 0xdf, 0x4f, 0x7c, - 0x97, 0x99, 0x5f, 0xa4, 0xae, 0xbf, 0xd6, 0x85, 0xa9, 0x22, 0x5c, 0xf3, 0x59, 0xaa, 0xf9, 0xbd, - 0x0c, 0x9a, 0x93, 0x1f, 0x3e, 0x30, 0xcb, 0x4b, 0x79, 0x1c, 0xf1, 0x4d, 0x6a, 0x78, 0x11, 0x18, - 0x9e, 0x5e, 0xeb, 0xb3, 0x4c, 0x59, 0x0e, 0xe4, 0x1c, 0x05, 0xf2, 0x85, 0xcf, 0x94, 0x67, 0xcb, - 0x6c, 0x6e, 0x42, 0x5e, 0xf4, 0xaf, 0x31, 0x07, 0xda, 0x47, 0xc0, 0x03, 0xfa, 0xfe, 0xbc, 0x64, - 0x20, 0xe6, 0x29, 0x88, 0x27, 0x59, 0x40, 0x24, 0xa5, 0x33, 0x96, 0x73, 0xf0, 0x7a, 0x0c, 0x47, - 0x62, 0xc6, 0xe3, 0xef, 0x30, 0xef, 0x39, 0x0a, 0xa5, 0x19, 0xde, 0x64, 0xa3, 0x76, 0xd0, 0xf2, - 0x39, 0xa0, 0xf3, 0x14, 0xd0, 0x97, 0x5e, 0x00, 0xd0, 0x68, 0x82, 0x61, 0x39, 0x07, 0x6f, 0x8f, - 0xa2, 0xea, 0xd3, 0x05, 0x2d, 0x9e, 0x63, 0xf5, 0xd7, 0x02, 0x78, 0x32, 0x38, 0x4e, 0x34, 0x3d, - 0x0d, 0x59, 0x34, 0x3f, 0x0d, 0xd9, 0x61, 0x82, 0x1a, 0x0a, 0x5c, 0xb7, 0xc5, 0x83, 0xc9, 0x56, - 0x8b, 0x23, 0x15, 0x29, 0xd2, 0xa7, 0x99, 0xc6, 0x2f, 0x53, 0x1a, 0x60, 0x39, 0x07, 0x1f, 0xc5, - 0x07, 0x35, 0x5b, 0xee, 0xe0, 0x8f, 0x05, 0xf0, 0x38, 0x53, 0x1b, 0xfa, 0xdd, 0xcd, 0xf0, 0x5f, - 0xa0, 0xf8, 0xb7, 0x3e, 0x33, 0xfe, 0xc1, 0x44, 0x84, 0x72, 0x0e, 0xae, 0xa5, 0x81, 0x1f, 0x4a, - 0x5d, 0xf8, 0x5d, 0x01, 0xbc, 0x11, 0x47, 0x6e, 0xf5, 0x48, 0xe4, 0x11, 0xed, 0x41, 0x63, 0x4f, - 0x2e, 0x18, 0x60, 0x89, 0x02, 0xfe, 0x28, 0x03, 0xe0, 0x49, 0x89, 0x75, 0xe5, 0x1c, 0xbc, 0xdb, - 0x07, 0x3a, 0x31, 0x05, 0xef, 0x4f, 0x05, 0xf0, 0x30, 0xc5, 0x72, 0x1d, 0xab, 0xcd, 0x36, 0x2f, - 0x27, 0x1c, 0xe4, 0x45, 0x0a, 0x72, 0xe3, 0xb3, 0xd8, 0xef, 0x60, 0x6e, 0x4b, 0x39, 0x07, 0x1f, - 0x4c, 0x30, 0x62, 0xcd, 0x6a, 0xc7, 0x13, 0x61, 0x7e, 0x4b, 0x00, 0x77, 0xe3, 0x50, 0xbb, 0x51, - 0xbe, 0xc8, 0xc8, 0xb8, 0x5f, 0xa2, 0x08, 0x3f, 0xc8, 0x80, 0x70, 0x5c, 0xd2, 0x49, 0x39, 0x07, - 0x8b, 0x7d, 0x68, 0x63, 0x53, 0x53, 0x7e, 0x59, 0x00, 0x37, 0xe3, 0x98, 0x02, 0xec, 0x07, 0x04, - 0x4d, 0x67, 0xc0, 0x1f, 0x5f, 0x4e, 0x5d, 0xfd, 0x26, 0x64, 0x80, 0x94, 0x73, 0x70, 0xa5, 0x8f, - 0x24, 
0x29, 0x45, 0xc4, 0x03, 0xcb, 0x71, 0x0c, 0x61, 0x9c, 0x1b, 0xae, 0x43, 0x0b, 0x29, 0x97, - 0x0c, 0xe3, 0x52, 0x30, 0xd8, 0xb2, 0x3b, 0x26, 0x3d, 0xa3, 0x05, 0x0a, 0x3d, 0x87, 0x04, 0x61, - 0x56, 0x80, 0x51, 0x07, 0x1f, 0xd3, 0xfd, 0x2f, 0x5f, 0x71, 0xaf, 0xa4, 0x1c, 0x8d, 0x8d, 0x4d, - 0x7e, 0x28, 0xe7, 0xe0, 0xa5, 0x9e, 0x33, 0x5a, 0x29, 0x9d, 0xd0, 0x45, 0x7e, 0x58, 0x9b, 0x6f, - 0x1d, 0x85, 0x2a, 0x0b, 0xa9, 0x3d, 0x3c, 0x21, 0xa3, 0x82, 0x35, 0x34, 0x99, 0x40, 0xfa, 0x26, - 0xb8, 0x9e, 0xd4, 0x50, 0x9a, 0xc4, 0xc0, 0x95, 0x2f, 0xa6, 0x2e, 0x30, 0x13, 0x13, 0x20, 0xca, - 0x39, 0xb8, 0x34, 0xdc, 0xea, 0x3e, 0x89, 0xf4, 0x07, 0xcc, 0x85, 0x0c, 0x23, 0x60, 0x47, 0xf5, - 0xf1, 0x24, 0x09, 0x8e, 0x66, 0x89, 0xa2, 0x51, 0xb2, 0xa2, 0x99, 0x90, 0x6b, 0x51, 0xce, 0xc1, - 0x3b, 0x43, 0xc0, 0x92, 0xa9, 0xa5, 0x3f, 0x16, 0xc0, 0x5a, 0xdc, 0x04, 0x9d, 0xfe, 0x51, 0x23, - 0xb2, 0x8e, 0x7d, 0x76, 0x34, 0xc0, 0xa7, 0x05, 0xb7, 0xca, 0xe5, 0xd4, 0x2d, 0x44, 0xb6, 0xc7, - 0x89, 0xe5, 0x1c, 0xbc, 0xdf, 0xb7, 0xd2, 0x38, 0xed, 0xb1, 0xaf, 0x3b, 0x76, 0x73, 0xe0, 0x21, - 0xe3, 0xf7, 0x04, 0x70, 0x27, 0x39, 0x64, 0xb0, 0x7d, 0x84, 0xe9, 0xa1, 0x28, 0x87, 0x77, 0x35, - 0x73, 0x08, 0x95, 0xfc, 0x66, 0x70, 0x30, 0x84, 0x8a, 0x68, 0x6c, 0x3f, 0xfe, 0x42, 0x2e, 0x60, - 0x66, 0x4d, 0xd6, 0xdb, 0xc0, 0x45, 0x2c, 0x7b, 0x80, 0x8d, 0x22, 0x47, 0xb1, 0x92, 0x3a, 0x75, - 0x93, 0x1f, 0xa1, 0x71, 0x8b, 0x4e, 0x7e, 0xa0, 0xf6, 0x35, 0x70, 0xc1, 0xa2, 0x69, 0x0c, 0xa8, - 0x9f, 0x44, 0x50, 0xb8, 0x46, 0x35, 0x8d, 0x3f, 0x82, 0x4e, 0x4c, 0xb9, 0x29, 0xe7, 0xa0, 0x68, - 0x0d, 0x55, 0x84, 0x2e, 0x31, 0x6e, 0x02, 0xbc, 0x67, 0x69, 0x78, 0xcc, 0x5b, 0x76, 0x3d, 0x75, - 0xc2, 0x4e, 0xb8, 0xec, 0x60, 0x2e, 0x71, 0xd2, 0x6d, 0x08, 0x0f, 0x95, 0x13, 0x40, 0x44, 0xa7, - 0x00, 0x0c, 0xc7, 0x8d, 0xd4, 0x71, 0x9e, 0x7c, 0x07, 0xc2, 0xc6, 0x39, 0xe5, 0x9e, 0xe4, 0xdb, - 0x02, 0x75, 0x22, 0xe1, 0xbe, 0xf1, 0xeb, 0xf1, 0x67, 0xf8, 0xe1, 0x96, 0xf1, 0x66, 0xd6, 0xdd, - 0xeb, 0xb8, 0x47, 0xfc, 0x03, 0xbb, 0xd7, 
0x04, 0x22, 0xe9, 0x13, 0xc0, 0x07, 0x0b, 0xe1, 0x30, - 0x03, 0xa9, 0x50, 0xa4, 0x5a, 0xdf, 0x4a, 0x19, 0xf6, 0xc1, 0x8c, 0xa5, 0x72, 0x0e, 0x9e, 0xb7, - 0x06, 0xcb, 0xa5, 0x36, 0xb8, 0xc2, 0x65, 0x13, 0x07, 0x15, 0x4f, 0x5c, 0x2a, 0xdc, 0x4a, 0x39, - 0xb9, 0x1f, 0x9f, 0x3f, 0x54, 0xce, 0xc1, 0xcb, 0x56, 0x52, 0xad, 0xb4, 0x0b, 0x2e, 0xf7, 0x4f, - 0x49, 0x98, 0x63, 0x64, 0xc3, 0x79, 0x9b, 0x2a, 0x7b, 0x73, 0xac, 0xb2, 0x84, 0xbb, 0x8d, 0x72, - 0x0e, 0x5e, 0xf4, 0x12, 0xae, 0x3c, 0x8e, 0xc1, 0xd5, 0x31, 0x87, 0xeb, 0x4c, 0xd5, 0x9d, 0x94, - 0x76, 0x8d, 0xbf, 0x10, 0x20, 0x0e, 0x7f, 0x6f, 0xfc, 0x75, 0xc1, 0x2e, 0xe0, 0xad, 0x46, 0xfc, - 0xce, 0xd2, 0x63, 0xa9, 0x51, 0x85, 0xbb, 0x29, 0x8d, 0x4b, 0x48, 0xa7, 0x22, 0x8d, 0xb3, 0x12, - 0xb2, 0xac, 0x2a, 0xe0, 0x5c, 0xa4, 0x83, 0x8e, 0xd2, 0x3d, 0x2a, 0xfb, 0x4e, 0xaa, 0x6c, 0x42, - 0x5c, 0xce, 0xc1, 0xb3, 0x56, 0x3c, 0x11, 0x6a, 0x07, 0x48, 0xf1, 0xeb, 0x55, 0x36, 0x22, 0x85, - 0xfb, 0x29, 0x59, 0x9b, 0xc3, 0x79, 0x40, 0xd4, 0x9b, 0x0c, 0xe7, 0x06, 0x0d, 0x89, 0xee, 0xd1, - 0x9c, 0x92, 0xc2, 0x83, 0xcc, 0xa2, 0x59, 0x12, 0xca, 0xa0, 0x68, 0x9e, 0x98, 0x32, 0x24, 0xda, - 0xa6, 0x77, 0xce, 0x85, 0xd5, 0xcc, 0xa2, 0xd9, 0x25, 0xf5, 0xa0, 0x68, 0x7e, 0x71, 0xdd, 0x02, - 0x8b, 0x71, 0xd1, 0x3c, 0xed, 0x83, 0xf7, 0xcb, 0x1b, 0x29, 0xe7, 0x02, 0xc9, 0x79, 0x40, 0xe5, - 0x1c, 0x5c, 0xb0, 0x92, 0x33, 0x84, 0x92, 0xb5, 0xf1, 0xae, 0x7a, 0xf3, 0x05, 0xb5, 0x45, 0x1d, - 0x36, 0xa2, 0x8d, 0x77, 0x5b, 0xb2, 0x36, 0xde, 0x7b, 0x6f, 0xbd, 0xa0, 0xb6, 0xa8, 0x0f, 0x47, - 0xb4, 0xf1, 0x9e, 0x6c, 0x83, 0xa5, 0xb8, 0x36, 0x9a, 0x0d, 0xe2, 0x93, 0xd5, 0xb2, 0xe7, 0x63, - 0xaf, 0xb0, 0x96, 0x59, 0x5d, 0x3c, 0x25, 0x66, 0x50, 0xdd, 0x40, 0xb2, 0xcc, 0xaf, 0x0b, 0xa0, - 0x18, 0x0f, 0x10, 0xe2, 0x37, 0x49, 0xfd, 0x63, 0x96, 0xc2, 0xc3, 0xd4, 0x33, 0xd9, 0xd4, 0x07, - 0x3d, 0xec, 0x4c, 0x36, 0x22, 0x6b, 0x8e, 0x92, 0x49, 0x87, 0xe0, 0x4a, 0xc2, 0x09, 0x0b, 0x76, - 0x9a, 0xb8, 0xf0, 0x76, 0x6a, 0x88, 0x3d, 0xe6, 0x39, 0x0d, 0x0b, 0xb1, 0x87, 
0x2a, 0x9d, 0x26, - 0x1e, 0x56, 0x16, 0x86, 0x9b, 0xae, 0x8d, 0x0b, 0x8f, 0x32, 0x2b, 0x1b, 0x7a, 0xc2, 0x32, 0xa8, - 0xac, 0x5f, 0x29, 0x7d, 0x15, 0x5c, 0x08, 0xac, 0x7d, 0xbe, 0x0e, 0x61, 0xb2, 0x20, 0x7a, 0x27, - 0x85, 0xf5, 0x94, 0xb5, 0x28, 0xe9, 0xc2, 0x91, 0xac, 0x45, 0x81, 0xb5, 0x1f, 0x2f, 0x97, 0x02, - 0xb0, 0xe4, 0xf3, 0xeb, 0x19, 0xe4, 0x51, 0x49, 0x68, 0x1f, 0xd3, 0x83, 0xec, 0x5e, 0x2b, 0x28, - 0xbc, 0x93, 0x72, 0x24, 0x35, 0xe9, 0x66, 0xa7, 0x9c, 0x83, 0x57, 0xfc, 0xe4, 0xfa, 0xe1, 0x69, - 0xc1, 0x53, 0x4c, 0xf8, 0x94, 0x7f, 0x9c, 0xd9, 0x4e, 0xe3, 0x89, 0x49, 0x83, 0x76, 0x3a, 0x90, - 0xb2, 0x94, 0xac, 0x8d, 0x4f, 0xc2, 0x2f, 0xbc, 0xa0, 0xb6, 0xa4, 0x49, 0x38, 0x90, 0x87, 0xb3, - 0x0b, 0x2e, 0x87, 0x86, 0x71, 0x82, 0xf6, 0x70, 0xd0, 0x3c, 0xe0, 0x6b, 0xe0, 0x17, 0x53, 0x56, - 0xa4, 0x84, 0xeb, 0x2d, 0xb2, 0x22, 0xd9, 0x09, 0xb7, 0x5e, 0xdf, 0x1f, 0x3a, 0xcd, 0xe3, 0x06, - 0xd8, 0xd7, 0xcb, 0xf6, 0xf8, 0x85, 0x77, 0x33, 0x1f, 0x9e, 0x8d, 0x7f, 0x89, 0x3d, 0x78, 0xa4, - 0x97, 0x48, 0x27, 0x7d, 0x6b, 0x68, 0x67, 0x7f, 0xd8, 0xdb, 0xc5, 0xf4, 0x72, 0x76, 0x60, 0x9b, - 0xf0, 0x24, 0xf3, 0x01, 0xe7, 0xe8, 0x0f, 0x64, 0x0c, 0x1e, 0x70, 0xd2, 0xfa, 0xc3, 0x81, 0xed, - 0xc1, 0x77, 0x92, 0xbb, 0xc4, 0xea, 0x76, 0xe9, 0x31, 0x6b, 0x74, 0xc0, 0xf9, 0x5e, 0xea, 0x09, - 0xf7, 0xa4, 0x97, 0x56, 0x83, 0x8e, 0x28, 0xf1, 0x2d, 0x56, 0x17, 0x2c, 0x73, 0x6b, 0xdb, 0xf7, - 0xac, 0xee, 0x01, 0xdd, 0xd2, 0xf9, 0x4d, 0x2b, 0x8c, 0x84, 0xde, 0xa7, 0xfa, 0xdf, 0x4e, 0xb1, - 0xb7, 0x91, 0xfb, 0x7a, 0x32, 0x9b, 0xac, 0x58, 0xd5, 0xb1, 0xdf, 0xbf, 0xca, 0xff, 0x01, 0xdb, - 0xa8, 0x0d, 0x68, 0x6d, 0x7a, 0xee, 0x71, 0x07, 0xfd, 0x02, 0x3e, 0xc6, 0x2d, 0x64, 0x3b, 0x7b, - 0x7b, 0x74, 0x0b, 0x5c, 0xf8, 0x20, 0xd5, 0x1e, 0xd2, 0x6f, 0x90, 0x99, 0x3d, 0x4c, 0xa4, 0x93, - 0xfe, 0x3f, 0xb8, 0x4c, 0x93, 0x38, 0xe8, 0xbd, 0x29, 0xbd, 0xf3, 0xe4, 0xad, 0xff, 0x30, 0xed, - 0xa6, 0x72, 0xe4, 0xfa, 0xb5, 0x9c, 0x83, 0x52, 0x6f, 0xf4, 0x52, 0x96, 0x1d, 0xa8, 0x27, 0xee, - 0x9b, 0x70, 0x97, 
0x2b, 0xfb, 0x52, 0xaa, 0xbd, 0x8d, 0x4f, 0x5f, 0x62, 0xf6, 0x36, 0xbe, 0x7e, - 0xe3, 0x34, 0x98, 0xa1, 0x92, 0x9e, 0x4e, 0xcf, 0xe6, 0xc5, 0x29, 0x42, 0x1c, 0x6d, 0x59, 0x88, - 0x03, 0x0d, 0x6f, 0x37, 0xe8, 0x06, 0x75, 0xf5, 0xaf, 0xce, 0xf7, 0x7f, 0xdc, 0x2a, 0x8c, 0xaa, - 0xa5, 0x9b, 0x60, 0xa5, 0xa4, 0x19, 0x8a, 0xfe, 0x4c, 0x85, 0x08, 0xaa, 0x86, 0xde, 0x80, 0xca, - 0xf0, 0x4f, 0x53, 0x5c, 0x05, 0x85, 0x51, 0x12, 0x43, 0x85, 0xcf, 0x54, 0x28, 0x0a, 0xd2, 0x0d, - 0x70, 0x75, 0xb4, 0x76, 0xbb, 0xb1, 0xa1, 0xc2, 0x9a, 0x6a, 0xaa, 0x86, 0x98, 0x97, 0xde, 0x01, - 0x0f, 0x47, 0x29, 0x4a, 0xb2, 0x29, 0x6f, 0xc8, 0x86, 0x8a, 0xea, 0xba, 0x61, 0x6e, 0x41, 0xd5, - 0x40, 0x86, 0x5a, 0xd9, 0x44, 0x65, 0xdd, 0x30, 0xd5, 0x92, 0x38, 0x25, 0xbd, 0x0d, 0xde, 0x9c, - 0xc0, 0x54, 0xdd, 0x31, 0x3e, 0xae, 0x0c, 0x70, 0x4c, 0x4b, 0xeb, 0x60, 0x6d, 0x12, 0x87, 0x5e, - 0xdb, 0xd2, 0x4b, 0x1b, 0x03, 0x3c, 0x33, 0xd2, 0x1b, 0xe0, 0x5e, 0x16, 0x68, 0xb0, 0x64, 0x88, - 0xa7, 0xa4, 0xfb, 0xe0, 0x76, 0x2a, 0x24, 0x42, 0x79, 0x5a, 0xba, 0x0b, 0x8a, 0xa3, 0x94, 0x72, - 0xbd, 0x5e, 0xd1, 0x14, 0xd9, 0xd4, 0xf4, 0x1a, 0x2a, 0x9b, 0x66, 0x5d, 0x9c, 0x95, 0xee, 0x80, - 0x9b, 0x93, 0xe9, 0x4c, 0xa5, 0x2e, 0xce, 0x25, 0x93, 0x3d, 0xd7, 0x6a, 0x25, 0xfd, 0xb9, 0x81, - 0x4a, 0xaa, 0xb1, 0x6d, 0xea, 0x75, 0x11, 0x48, 0x6f, 0x82, 0xfb, 0x13, 0xf0, 0x19, 0x1f, 0x57, - 0xd8, 0x98, 0x51, 0x8c, 0x67, 0x52, 0x3a, 0xb8, 0xdf, 0x74, 0xb5, 0x64, 0x94, 0xb5, 0x4d, 0x53, - 0x3c, 0x2b, 0x3d, 0x06, 0x6f, 0x67, 0x92, 0x1f, 0xef, 0xe2, 0x73, 0x29, 0x7a, 0xa0, 0x5a, 0xd2, - 0x06, 0x87, 0x7e, 0x3e, 0xeb, 0xa0, 0x6c, 0x29, 0x75, 0xf1, 0x7c, 0xa6, 0x41, 0x21, 0x94, 0x62, - 0xe6, 0xee, 0x21, 0xd4, 0x17, 0xa4, 0x0f, 0xc0, 0xbb, 0x2f, 0xd2, 0x3d, 0x7c, 0x3e, 0x54, 0x54, - 0xc3, 0x10, 0x25, 0xe9, 0x2d, 0xf0, 0x20, 0x0b, 0xb3, 0xfc, 0x49, 0x03, 0xaa, 0xe2, 0x45, 0xe9, - 0x1e, 0xb8, 0x35, 0x81, 0xbc, 0xb4, 0x53, 0x93, 0xab, 0x7a, 0x69, 0x43, 0xbc, 0x94, 0x62, 0xe2, - 0x8a, 0x6c, 0x18, 0x72, 0xad, 0x04, 0x65, 0xb4, 0xad, 
0xee, 0x18, 0x75, 0x59, 0x51, 0x0d, 0xf1, - 0x72, 0xca, 0xa8, 0xf5, 0x79, 0xe2, 0x63, 0xb0, 0x20, 0x3d, 0x01, 0x8f, 0x27, 0x70, 0xa9, 0x15, - 0xd9, 0x30, 0x35, 0xc5, 0x50, 0x65, 0xa8, 0x94, 0x07, 0x38, 0xaf, 0x64, 0x1a, 0x6f, 0xce, 0x2f, - 0x2b, 0x65, 0x55, 0x2c, 0xa4, 0xf4, 0x16, 0xe3, 0xa8, 0xaa, 0x55, 0x1d, 0xee, 0x94, 0x36, 0xc4, - 0xc5, 0x4c, 0x0a, 0x68, 0xcf, 0x22, 0xa6, 0x60, 0x29, 0xa5, 0x31, 0x8c, 0x43, 0xa9, 0x34, 0x0c, - 0x73, 0xc8, 0x78, 0x97, 0xa5, 0x55, 0x70, 0x37, 0xd5, 0xba, 0xd8, 0x28, 0x5e, 0x95, 0xd6, 0xc0, - 0x6a, 0x26, 0xfb, 0x62, 0xf4, 0x2b, 0x29, 0x83, 0xd9, 0xa7, 0xaf, 0x6a, 0x0a, 0xd4, 0x0d, 0x7d, - 0xd3, 0x14, 0xaf, 0x49, 0x5f, 0x04, 0xeb, 0x93, 0x06, 0x53, 0x57, 0xb6, 0xa1, 0x2e, 0x2b, 0xe5, - 0x21, 0x3f, 0x77, 0x3d, 0xc5, 0xf6, 0x43, 0xdf, 0x28, 0x9b, 0x15, 0xd9, 0x10, 0x6f, 0xa4, 0xcc, - 0x29, 0xa3, 0xa6, 0x3f, 0xdf, 0xac, 0xc8, 0xdb, 0xaa, 0x78, 0x73, 0x8c, 0x5c, 0x5d, 0x89, 0xf5, - 0x6e, 0xc9, 0x40, 0x75, 0xa8, 0x7f, 0x65, 0x47, 0x2c, 0x8e, 0x31, 0xc5, 0x38, 0x75, 0x59, 0xdb, - 0x2a, 0x23, 0xf9, 0x99, 0xac, 0x55, 0xe4, 0x0d, 0xad, 0xa2, 0x99, 0x3b, 0xe2, 0x2d, 0xe9, 0x5d, - 0xf0, 0x4e, 0x0a, 0x17, 0x9d, 0x21, 0x9a, 0x82, 0xa0, 0xba, 0xa5, 0x19, 0x26, 0xa4, 0xae, 0x53, - 0xbc, 0x9d, 0xec, 0x85, 0x0d, 0xb9, 0x5a, 0x89, 0xbb, 0x58, 0xf1, 0x8e, 0x54, 0x04, 0xd7, 0x46, - 0xe9, 0x54, 0x65, 0x9d, 0xfd, 0x08, 0x53, 0x4d, 0x51, 0xc5, 0xbb, 0x63, 0x8c, 0x4e, 0x57, 0x86, - 0xdd, 0x30, 0xaa, 0xe9, 0x35, 0x24, 0x97, 0xc4, 0x7b, 0xd2, 0x6d, 0x70, 0x63, 0xd2, 0xba, 0x48, - 0x7f, 0x9c, 0xe7, 0x7e, 0xb2, 0xed, 0xc7, 0x57, 0x00, 0xf9, 0xb9, 0x81, 0x14, 0xbd, 0x66, 0xe8, - 0x15, 0x55, 0x7c, 0xb0, 0xfa, 0x87, 0x02, 0x98, 0x1f, 0xfc, 0x6d, 0x47, 0xe9, 0x3a, 0x58, 0x8e, - 0x24, 0x18, 0xa6, 0x6c, 0x36, 0x8c, 0xa1, 0xe5, 0x7b, 0x19, 0x5c, 0x19, 0x26, 0x30, 0x1a, 0x8a, - 0x42, 0x3c, 0x95, 0x90, 0x58, 0xb9, 0xad, 0xd5, 0xeb, 0x6a, 0x49, 0xcc, 0x4b, 0x8b, 0xe0, 0xf2, - 0x70, 0xa5, 0x0a, 0xa1, 0x0e, 0xc5, 0xa9, 0x24, 0x3e, 0x79, 0x43, 0x87, 0x74, 0x25, 0x5e, 
0xfd, - 0x69, 0x1e, 0x4c, 0x29, 0xa6, 0x2c, 0x5d, 0x04, 0xe7, 0x15, 0x53, 0x1e, 0xfd, 0x15, 0x2d, 0x52, - 0x28, 0x37, 0xcc, 0x32, 0x69, 0x58, 0x4d, 0x55, 0x4c, 0x9d, 0xc4, 0x11, 0x57, 0xc0, 0x45, 0x5a, - 0xae, 0x98, 0xda, 0x33, 0x12, 0x5e, 0x18, 0x86, 0xa6, 0xd7, 0x48, 0xf8, 0x10, 0x55, 0x10, 0xc8, - 0x08, 0xaa, 0x1f, 0x37, 0x54, 0xc3, 0x34, 0xc4, 0xa9, 0xb0, 0xa2, 0x0e, 0xd5, 0xaa, 0xd6, 0xa8, - 0x22, 0xa3, 0x51, 0xaf, 0xeb, 0xd0, 0x14, 0xa7, 0xc3, 0x0a, 0x13, 0x92, 0x29, 0x5d, 0x42, 0x25, - 0xf5, 0x99, 0x46, 0x7c, 0xe1, 0x4c, 0xa8, 0xbb, 0x51, 0xdf, 0x82, 0x72, 0x49, 0x45, 0x1b, 0x72, - 0xad, 0xa6, 0x42, 0xf1, 0x54, 0xc8, 0xb0, 0xa1, 0x55, 0x2a, 0x5a, 0x6d, 0x0b, 0x19, 0x8d, 0x6a, - 0x55, 0x86, 0x3b, 0xe2, 0xe9, 0xb0, 0x05, 0x5c, 0x77, 0x45, 0x33, 0x4c, 0x71, 0x96, 0xfe, 0xd6, - 0x52, 0xbf, 0xb0, 0xaa, 0xd7, 0x34, 0x53, 0x87, 0x5a, 0x6d, 0x4b, 0x9c, 0xa3, 0xbf, 0xe2, 0x64, - 0xca, 0x48, 0xfd, 0x8a, 0xa9, 0xc2, 0x9a, 0x5c, 0x41, 0x72, 0xa3, 0xa4, 0x99, 0xc8, 0x30, 0x75, - 0x28, 0x6f, 0xa9, 0x22, 0x08, 0x01, 0xe8, 0xdb, 0x04, 0x85, 0x41, 0xfa, 0x6e, 0xa7, 0xa6, 0x88, - 0x67, 0x24, 0x11, 0x9c, 0xa5, 0x7c, 0x35, 0x13, 0xca, 0x48, 0x2b, 0x89, 0x67, 0xa5, 0x0b, 0xe0, - 0x5c, 0x44, 0x69, 0x28, 0x5a, 0x55, 0x3c, 0x17, 0xea, 0xd5, 0x4a, 0x6a, 0xcd, 0xd4, 0xcc, 0x1d, - 0x64, 0xa8, 0x4a, 0x03, 0x92, 0x39, 0x32, 0xbf, 0xfa, 0x37, 0x73, 0xe0, 0x72, 0xe2, 0x43, 0x05, - 0xb2, 0xb6, 0x68, 0x35, 0x53, 0xdd, 0x62, 0xb3, 0x02, 0xa9, 0x35, 0xa8, 0x57, 0x2a, 0x68, 0x5b, - 0xab, 0x0d, 0xff, 0x3e, 0xd5, 0x4d, 0xb0, 0x32, 0x8e, 0xd0, 0xa8, 0xc8, 0xca, 0xb6, 0x28, 0x10, - 0x93, 0x1e, 0x47, 0x42, 0xcc, 0x54, 0xd7, 0x4a, 0x8a, 0x98, 0x27, 0xd1, 0xca, 0x38, 0xaa, 0xba, - 0xbc, 0xa5, 0xc2, 0x52, 0xc3, 0xdc, 0x11, 0xa7, 0x26, 0xe9, 0x53, 0xab, 0xb2, 0x56, 0x11, 0xa7, - 0x49, 0x68, 0x39, 0x8e, 0xe4, 0xa9, 0x06, 0x65, 0x71, 0x46, 0xba, 0x05, 0xae, 0x8f, 0xa3, 0xa0, - 0xe6, 0x09, 0x4b, 0xe2, 0x29, 0xe2, 0x07, 0xc6, 0x11, 0x55, 0x65, 0xd3, 0x54, 0x61, 0x55, 0x37, - 0x4c, 0xf1, 0xf4, 0xa4, 0xe6, 
0x55, 0x0d, 0x64, 0xaa, 0x72, 0xd5, 0x10, 0x67, 0x27, 0x51, 0xe9, - 0x75, 0x63, 0x4b, 0xad, 0x69, 0xaa, 0x38, 0x37, 0x09, 0x3a, 0x19, 0x52, 0x11, 0x4c, 0x6c, 0x9c, - 0x5c, 0xdd, 0x14, 0xcf, 0x4c, 0xc6, 0xad, 0x94, 0xb5, 0x9a, 0xca, 0x4c, 0xe5, 0x0b, 0xe0, 0x51, - 0x3a, 0x1d, 0xda, 0xd2, 0xcc, 0x72, 0x63, 0x83, 0xce, 0x2f, 0x32, 0xaf, 0xce, 0x49, 0x0f, 0xc1, - 0x1b, 0x19, 0xd8, 0x14, 0x0d, 0x2a, 0x15, 0x55, 0xd1, 0xc4, 0x79, 0xe2, 0xab, 0xb2, 0xe9, 0xa9, - 0xc8, 0x1b, 0xe2, 0x79, 0xb2, 0x1e, 0x66, 0x20, 0x7f, 0xaa, 0xd6, 0xb6, 0xb5, 0x9a, 0x21, 0x8a, - 0x19, 0xe9, 0xe5, 0x9a, 0xa1, 0x6d, 0x54, 0x54, 0xf1, 0xc2, 0xa4, 0xee, 0x21, 0x2b, 0xa7, 0xa6, - 0xa8, 0x35, 0xfd, 0xb9, 0x28, 0x4d, 0x1a, 0xb0, 0x68, 0xbe, 0x5d, 0x24, 0xab, 0xcc, 0x58, 0x4b, - 0x92, 0x4d, 0xb9, 0xa4, 0x6f, 0x21, 0xad, 0xa6, 0xd0, 0xb9, 0x87, 0xaa, 0x72, 0x4d, 0xde, 0x52, - 0xab, 0x6a, 0xcd, 0x14, 0x2f, 0x91, 0x10, 0x21, 0x0b, 0xec, 0xe7, 0x24, 0x16, 0xcb, 0x46, 0x4b, - 0x02, 0xd0, 0x05, 0xb2, 0xb4, 0x66, 0x91, 0x4b, 0x83, 0x09, 0x1a, 0x75, 0x65, 0xa0, 0xa6, 0x41, - 0x61, 0x85, 0x44, 0xf3, 0x05, 0xe9, 0x11, 0x78, 0x2b, 0x03, 0x47, 0x6c, 0x23, 0xb7, 0x38, 0xc9, - 0x62, 0xc8, 0xfc, 0x8f, 0x1c, 0x93, 0xa2, 0xd6, 0x4c, 0x15, 0x8a, 0x4b, 0x93, 0x86, 0x94, 0x9b, - 0x23, 0x54, 0xeb, 0x3a, 0xf7, 0xa4, 0xe2, 0xf2, 0xea, 0xb7, 0xa7, 0x13, 0xdc, 0x18, 0xd9, 0xed, - 0x8e, 0x71, 0x63, 0x86, 0xa9, 0xd6, 0x87, 0xdc, 0x58, 0xb2, 0x4a, 0x4a, 0x28, 0x3f, 0x37, 0x34, - 0x25, 0x5c, 0x73, 0x98, 0xb7, 0x12, 0xa4, 0x2f, 0x83, 0xf7, 0x27, 0xd3, 0x1b, 0xaa, 0xc9, 0x01, - 0x12, 0xf7, 0x8f, 0x4a, 0xea, 0xa6, 0xdc, 0xa8, 0x98, 0x48, 0x7f, 0x4e, 0x96, 0x8e, 0xbc, 0xf4, - 0x25, 0xf0, 0xde, 0x64, 0xfe, 0x46, 0xbd, 0xa2, 0xcb, 0xac, 0x83, 0x68, 0xec, 0x61, 0xd4, 0x51, - 0x55, 0x35, 0x65, 0x62, 0x54, 0xe2, 0x14, 0x09, 0xe8, 0x26, 0xb3, 0x9b, 0xaa, 0x61, 0xd2, 0x05, - 0x20, 0x04, 0x4e, 0x62, 0x96, 0xe9, 0x31, 0xe6, 0x4a, 0xf9, 0x58, 0xcf, 0x42, 0x19, 0x29, 0x50, - 0x95, 0x4d, 0x15, 0xc5, 0xe8, 0xc4, 0x99, 0x49, 0x0a, 0x87, 0x19, 
0xb7, 0xb4, 0x70, 0x5b, 0x23, - 0x9e, 0xca, 0xa6, 0x90, 0xfe, 0x02, 0x21, 0x89, 0xbc, 0x0d, 0xa3, 0x8c, 0x14, 0x15, 0x12, 0xaf, - 0x9a, 0x6c, 0x99, 0x89, 0x0a, 0x21, 0x09, 0x72, 0x66, 0x57, 0x7f, 0x26, 0x24, 0x3e, 0x2a, 0x0b, - 0x1f, 0x84, 0x8d, 0x1d, 0x62, 0x1a, 0x85, 0x28, 0x7a, 0x69, 0xf8, 0xfc, 0x22, 0x79, 0xd6, 0xc5, - 0xe9, 0xfb, 0xf1, 0x50, 0x06, 0xda, 0x28, 0x3c, 0xba, 0x0f, 0x6e, 0xa7, 0xd0, 0x86, 0xd1, 0x52, - 0xba, 0xd4, 0x7e, 0xf0, 0xf4, 0x11, 0x38, 0xcd, 0xef, 0x21, 0x49, 0x58, 0xb2, 0xa9, 0xca, 0x26, - 0xe9, 0xd0, 0x91, 0x90, 0x2e, 0xac, 0x18, 0x0e, 0x72, 0x84, 0xd5, 0xdf, 0x17, 0xc0, 0xf2, 0x84, - 0x57, 0x42, 0xc4, 0x8d, 0x87, 0xcc, 0x50, 0x55, 0xf4, 0x6a, 0x55, 0xad, 0x95, 0x18, 0xae, 0xc4, - 0xf0, 0x71, 0x15, 0xdc, 0x9d, 0x4c, 0x5e, 0xd3, 0x4d, 0x46, 0x2b, 0x10, 0x97, 0x3c, 0x99, 0xb6, - 0xa4, 0xd7, 0x54, 0x31, 0xbf, 0xf1, 0xb5, 0xbf, 0xfb, 0xc9, 0x35, 0xe1, 0x1f, 0x7e, 0x72, 0x4d, - 0xf8, 0xe7, 0x9f, 0x5c, 0x13, 0x3e, 0xd1, 0xf7, 0x9d, 0xe0, 0xa0, 0xb7, 0xbb, 0xd6, 0x74, 0xdb, - 0x0f, 0xf7, 0x3d, 0xeb, 0xc8, 0x61, 0x79, 0x2a, 0x56, 0xeb, 0x61, 0xff, 0x3f, 0x07, 0x74, 0x9d, - 0x87, 0xfb, 0xb8, 0xf3, 0x90, 0xbe, 0xe8, 0x7a, 0xb8, 0xef, 0x0e, 0xfd, 0x3b, 0x82, 0x0f, 0x62, - 0x9f, 0x47, 0x8f, 0x76, 0x4f, 0x51, 0xb2, 0x77, 0xfe, 0x27, 0x00, 0x00, 0xff, 0xff, 0xe9, 0x4e, - 0x4f, 0x5b, 0xbe, 0x60, 0x00, 0x00, + // 6496 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe4, 0x7d, 0x5b, 0x6c, 0x1c, 0xc9, + 0x75, 0xe8, 0xf4, 0x90, 0x94, 0xc8, 0x92, 0x44, 0xb5, 0x5a, 0x12, 0x35, 0x22, 0x45, 0x3d, 0x46, + 0x6f, 0xee, 0x2e, 0xb5, 0xd2, 0x6a, 0xbd, 0xda, 0x87, 0xbd, 0xdb, 0xec, 0x69, 0xce, 0xb4, 0x38, + 0x33, 0x3d, 0x5b, 0xdd, 0x23, 0x9a, 0x6b, 0x18, 0x75, 0x9b, 0xd3, 0x45, 0xb2, 0x2f, 0x87, 0xd3, + 0xe3, 0xee, 0x1e, 0x72, 0x89, 0x0b, 0xc3, 0xbe, 0xbe, 0xf7, 0x1a, 0x17, 0xd7, 0xf6, 0x75, 0x02, + 0x24, 0x46, 0x80, 0x0d, 0xf2, 0x40, 0x1e, 0x08, 0xe0, 0x00, 0xf9, 0x32, 0xfc, 0x95, 0x8f, 0xfc, + 0x05, 0x48, 0x3e, 
0x0c, 0x04, 0xc8, 0x47, 0xf2, 0x13, 0xf8, 0x2b, 0x7f, 0xfe, 0x4a, 0x80, 0xc4, + 0x79, 0xa1, 0x1e, 0x3d, 0xd3, 0x33, 0xd3, 0x3d, 0xdd, 0xda, 0x68, 0x63, 0x43, 0xfb, 0xc7, 0xa9, + 0x3a, 0xaf, 0x3a, 0x75, 0xea, 0x9c, 0x53, 0x55, 0xa7, 0x9a, 0xe0, 0x7e, 0x80, 0xdb, 0xb8, 0xeb, + 0x7a, 0xc1, 0x83, 0x9e, 0x6f, 0xed, 0x62, 0x7c, 0x88, 0x3b, 0x81, 0xff, 0xe0, 0xf0, 0x61, 0xf4, + 0xe7, 0x6a, 0xd7, 0x73, 0x03, 0x57, 0xba, 0x14, 0x82, 0xae, 0x46, 0xfb, 0x0e, 0x1f, 0x2e, 0xde, + 0xed, 0xd3, 0xb0, 0x5a, 0x2d, 0xec, 0xfb, 0x6d, 0xc7, 0x0f, 0x08, 0x89, 0xc1, 0x2f, 0x46, 0xa1, + 0xb8, 0x02, 0xa4, 0xa6, 0xb6, 0x66, 0x75, 0x3a, 0xd8, 0x53, 0xda, 0x4e, 0x6b, 0x5f, 0x25, 0x24, + 0xa4, 0x0b, 0x60, 0xc6, 0x6a, 0x63, 0x2f, 0x28, 0x08, 0xd7, 0x85, 0x7b, 0x73, 0x90, 0xfd, 0x28, + 0xae, 0x83, 0x7b, 0x4d, 0x4d, 0xef, 0x6c, 0xbb, 0x96, 0x67, 0x2b, 0xee, 0x41, 0xb7, 0x8d, 0x03, + 0x5c, 0x76, 0x4d, 0xb7, 0x64, 0xf9, 0x7b, 0xac, 0x71, 0x40, 0x61, 0x11, 0xcc, 0xf6, 0x7c, 0xec, + 0x75, 0xac, 0x03, 0xcc, 0x89, 0xf4, 0x7f, 0x17, 0x6f, 0x83, 0x9b, 0x7d, 0x3a, 0xb2, 0x6d, 0xaf, + 0x3b, 0x9e, 0x1f, 0x40, 0xec, 0xbb, 0x3d, 0xaf, 0x85, 0x07, 0x24, 0x8a, 0x2b, 0x11, 0x76, 0xa3, + 0x60, 0x55, 0x2b, 0x88, 0x0a, 0x5c, 0x7c, 0x1f, 0xdc, 0xe8, 0xc3, 0x1a, 0x38, 0x50, 0x3c, 0x6c, + 0xe3, 0x4e, 0xe0, 0x58, 0x6d, 0xa3, 0xb7, 0x7d, 0xe0, 0x04, 0xe9, 0x32, 0x45, 0x09, 0x7c, 0xd8, + 0xc3, 0x7e, 0xe0, 0xb8, 0x9d, 0x8e, 0xe5, 0x78, 0x38, 0x2b, 0x81, 0xaf, 0x83, 0xdb, 0x7d, 0x02, + 0x10, 0xef, 0x3a, 0x3e, 0x11, 0x70, 0xcf, 0x6a, 0xb7, 0x71, 0x67, 0x37, 0x2b, 0x11, 0xe9, 0x32, + 0x98, 0x3d, 0xd8, 0xb1, 0x50, 0x70, 0xdc, 0xc5, 0x85, 0x3c, 0xed, 0x3b, 0x79, 0xb0, 0x63, 0x99, + 0xc7, 0x5d, 0x2c, 0x2d, 0x03, 0xd0, 0x76, 0x77, 0x9d, 0x0e, 0xda, 0x69, 0xbb, 0x47, 0x85, 0x29, + 0xda, 0x39, 0x47, 0x5b, 0xd6, 0xdb, 0xee, 0x11, 0x93, 0x1f, 0xe2, 0x96, 0x7b, 0x88, 0xbd, 0x63, + 0xc5, 0xb5, 0xb1, 0xaf, 0xb8, 0x9d, 0xc0, 0xe9, 0xf4, 0x70, 0xc6, 0x49, 0x79, 0x17, 0x2c, 0x8f, + 0x11, 0xe8, 0x1e, 0x67, 0x44, 0x7e, 0x0f, 0x5c, 0x1d, 
0x41, 0x6e, 0x78, 0x4e, 0x27, 0xc8, 0x88, + 0x5d, 0x04, 0x62, 0xc9, 0xf1, 0x29, 0x72, 0x0d, 0x07, 0x96, 0x6d, 0x05, 0x96, 0x34, 0x0f, 0xf2, + 0x8e, 0xcd, 0x21, 0xf3, 0x8e, 0x5d, 0xb4, 0x40, 0x21, 0x84, 0x09, 0x6d, 0xa0, 0x0f, 0xab, 0x82, + 0x59, 0x8f, 0xb7, 0x51, 0x8c, 0xf9, 0x47, 0xf7, 0x57, 0x13, 0x16, 0xc6, 0xea, 0x28, 0x11, 0xd8, + 0x47, 0x2d, 0xee, 0x03, 0x29, 0xec, 0x35, 0x02, 0xdc, 0x35, 0x02, 0x2b, 0xe8, 0xf9, 0xd2, 0xfb, + 0xe0, 0x84, 0x4f, 0xff, 0xe2, 0xa4, 0xef, 0xa6, 0x92, 0x66, 0x88, 0x90, 0xa3, 0x91, 0xb5, 0x84, + 0x3d, 0xcf, 0xf5, 0xf8, 0x84, 0xb2, 0x1f, 0xc5, 0xdf, 0x17, 0xc0, 0x42, 0x53, 0x8b, 0xa0, 0x78, + 0x01, 0xb6, 0x99, 0xaa, 0x54, 0x30, 0x7b, 0xc0, 0x87, 0x46, 0x79, 0x9e, 0xca, 0x30, 0x9c, 0x50, + 0x17, 0xb0, 0x8f, 0x2a, 0x29, 0x7d, 0xc1, 0xf3, 0x94, 0xc8, 0x2b, 0x19, 0x04, 0x0f, 0x47, 0x1d, + 0x0a, 0x5f, 0xfc, 0x57, 0x01, 0x5c, 0x1f, 0x88, 0x19, 0x2a, 0xcd, 0xc0, 0x6d, 0xdc, 0x22, 0x2b, + 0xe4, 0x85, 0x0a, 0x5c, 0x8b, 0x4c, 0x23, 0x13, 0xf9, 0x61, 0xe6, 0x69, 0x1c, 0x90, 0x0b, 0x49, + 0x44, 0xc6, 0x3f, 0xf5, 0xe9, 0xc7, 0xff, 0x7f, 0xf2, 0xc4, 0x09, 0x85, 0x00, 0x5a, 0x27, 0xc0, + 0xbb, 0x9e, 0x45, 0x46, 0x2e, 0x6f, 0x1a, 0xba, 0x56, 0x52, 0x14, 0xb7, 0xd3, 0xc1, 0xad, 0xe0, + 0xa5, 0xd7, 0xc3, 0x8f, 0xf3, 0x51, 0x3b, 0x28, 0x59, 0x81, 0xb5, 0x6d, 0xf9, 0x18, 0x96, 0x0c, + 0xb5, 0xe3, 0xb9, 0xed, 0xf6, 0xcb, 0x3e, 0x7e, 0xe9, 0x09, 0x28, 0xf8, 0xd4, 0xe8, 0xb1, 0x8d, + 0x42, 0xca, 0x3e, 0x6a, 0xb9, 0xbd, 0x4e, 0x50, 0x98, 0xbe, 0x2e, 0xdc, 0x9b, 0x82, 0x0b, 0x61, + 0x7f, 0x28, 0x8a, 0xaf, 0x90, 0xde, 0xe2, 0x3f, 0x09, 0xe0, 0xca, 0x40, 0x73, 0x1b, 0xbd, 0x6d, + 0xac, 0x6e, 0x7c, 0x4e, 0xb4, 0x56, 0x7c, 0x0a, 0x0a, 0x4d, 0x4d, 0xb1, 0xda, 0x6d, 0xd3, 0x95, + 0xa9, 0xbf, 0x88, 0x04, 0x84, 0x55, 0x30, 0xd5, 0xe2, 0x23, 0x9e, 0x7f, 0x74, 0x25, 0x91, 0xba, + 0x62, 0xca, 0x90, 0x00, 0x16, 0xbf, 0x37, 0x13, 0xd5, 0x63, 0x09, 0x77, 0xdb, 0xee, 0xb1, 0x81, + 0xbd, 0x43, 0xa7, 0x85, 0x5f, 0x7a, 0xeb, 0xdb, 0x05, 0x67, 0x6c, 0x3a, 0x60, 0x74, 0x80, 
0x83, + 0x3d, 0xd7, 0xa6, 0x26, 0x37, 0xff, 0x68, 0x2d, 0x91, 0xd6, 0x24, 0x45, 0xad, 0xb2, 0xa6, 0x1a, + 0xa5, 0x04, 0x4f, 0xdb, 0x91, 0x5f, 0x92, 0x05, 0x4e, 0x71, 0x46, 0x34, 0x05, 0x99, 0xa1, 0x6c, + 0x3e, 0xf8, 0xcf, 0xb0, 0x21, 0xb9, 0x0b, 0x04, 0x76, 0xff, 0xef, 0x22, 0x02, 0xa7, 0xa3, 0x02, + 0x48, 0xcb, 0xe0, 0x72, 0x49, 0x6d, 0x54, 0xf5, 0x2d, 0x54, 0x53, 0xcd, 0x8a, 0x5e, 0x42, 0xcd, + 0xba, 0xd1, 0x50, 0x15, 0x6d, 0x5d, 0x53, 0x4b, 0x62, 0x4e, 0x5a, 0x00, 0xd2, 0x70, 0xb7, 0xdc, + 0x34, 0x75, 0x51, 0x90, 0x0a, 0xe0, 0xc2, 0x70, 0x7b, 0x4d, 0xae, 0x37, 0xe5, 0xaa, 0x98, 0x2f, + 0x62, 0x00, 0x06, 0xac, 0xa5, 0x25, 0x70, 0x89, 0xc3, 0x99, 0x5b, 0x0d, 0x75, 0x84, 0xf8, 0x55, + 0xb0, 0x18, 0xed, 0xd4, 0xea, 0x86, 0x29, 0x57, 0xab, 0xc8, 0x50, 0xa0, 0xd6, 0x30, 0x45, 0x41, + 0x5a, 0x04, 0x0b, 0xd1, 0x7e, 0xb9, 0x26, 0x7f, 0xa4, 0xd7, 0x91, 0xaa, 0x18, 0x62, 0xbe, 0xf8, + 0xc9, 0x34, 0xb8, 0x35, 0x18, 0xbf, 0xe2, 0x61, 0x2b, 0xc0, 0xe1, 0xaf, 0x63, 0xc5, 0xed, 0xec, + 0x38, 0xbb, 0x2f, 0xbd, 0x5d, 0xba, 0xe0, 0x4c, 0x8b, 0x8e, 0x74, 0xd8, 0x2e, 0x9f, 0x66, 0x30, + 0x98, 0x64, 0x85, 0xad, 0xb2, 0xbf, 0x43, 0xfb, 0x6c, 0x45, 0x7e, 0x15, 0xff, 0x50, 0x00, 0xa7, + 0xa3, 0xdd, 0xc4, 0x7a, 0x14, 0xbd, 0xbe, 0xae, 0x95, 0xe3, 0xad, 0x67, 0xac, 0x5b, 0xde, 0x34, + 0x90, 0xaa, 0x3c, 0x42, 0x86, 0x51, 0x13, 0x05, 0x32, 0xff, 0xf1, 0xdd, 0xaa, 0xa6, 0xa8, 0x62, + 0x3e, 0x1e, 0x1d, 0x96, 0x0c, 0x6a, 0x02, 0x53, 0xd2, 0x65, 0x70, 0x31, 0x06, 0x7d, 0xc3, 0x10, + 0xa7, 0x8b, 0xff, 0x22, 0x80, 0x6b, 0x31, 0xf1, 0x92, 0xef, 0x0b, 0x5e, 0x7a, 0xc7, 0xff, 0x3f, + 0xf3, 0xd1, 0xc5, 0x11, 0x0e, 0x9f, 0xcd, 0x5c, 0xcf, 0xc3, 0x35, 0xb3, 0x6a, 0xbc, 0xf4, 0x3a, + 0xf8, 0xff, 0x79, 0xf0, 0x30, 0xea, 0x20, 0xfd, 0xfd, 0xc0, 0xed, 0x92, 0x30, 0x78, 0x88, 0x4b, + 0x8e, 0x87, 0x5b, 0x81, 0xeb, 0x1d, 0x9b, 0xae, 0xdb, 0xf6, 0xb5, 0x8e, 0x1f, 0x58, 0x9f, 0x83, + 0x6c, 0xe0, 0x3b, 0x79, 0xb0, 0x9a, 0xa6, 0x90, 0xbe, 0x89, 0xbc, 0xf4, 0xda, 0xf8, 0xe3, 0x3c, + 0xb8, 0x33, 0xd0, 0x86, 0xdc, 
0x0b, 0xdc, 0xf0, 0xef, 0x48, 0x0a, 0xf9, 0xd2, 0x47, 0x90, 0xbb, + 0xe0, 0x6c, 0x7c, 0x3a, 0x3d, 0xef, 0x0d, 0xa7, 0xd1, 0xdf, 0xcc, 0x83, 0x9b, 0x03, 0x75, 0xa9, + 0xca, 0x23, 0xba, 0x6a, 0x3a, 0x9f, 0xa7, 0xbd, 0xe8, 0x3f, 0x0a, 0xe0, 0xf2, 0x68, 0xc6, 0x45, + 0x02, 0xd5, 0xe7, 0x6c, 0xe0, 0x2c, 0x73, 0xa8, 0xbb, 0xf6, 0xcb, 0xef, 0x23, 0x7e, 0x2e, 0x80, + 0xab, 0xa3, 0x03, 0x97, 0xbb, 0x5d, 0x92, 0x66, 0x7f, 0x0e, 0x92, 0x88, 0x6f, 0xe7, 0xc1, 0xfd, + 0x09, 0x49, 0x84, 0x26, 0xd7, 0x1a, 0x6e, 0xdb, 0x69, 0x1d, 0xbf, 0xf4, 0x8a, 0xf8, 0x77, 0x01, + 0x14, 0x07, 0x8a, 0x68, 0x78, 0x4e, 0xa7, 0xe5, 0x74, 0xad, 0xb6, 0xff, 0xf9, 0x09, 0x96, 0xff, + 0x2c, 0x80, 0xe5, 0x81, 0x06, 0x4c, 0xec, 0x07, 0xfc, 0xe0, 0xed, 0xf3, 0xe0, 0xf7, 0xff, 0x41, + 0x00, 0x85, 0x88, 0x17, 0xe0, 0x17, 0x2f, 0xf6, 0x4b, 0x3f, 0xee, 0x25, 0xe2, 0xf5, 0xb9, 0xb7, + 0xc7, 0x47, 0xd0, 0x6d, 0x47, 0x2f, 0x87, 0xfe, 0x92, 0x5a, 0xc4, 0x50, 0xaf, 0x61, 0x1d, 0x46, + 0x2f, 0x3b, 0x6e, 0x82, 0x33, 0x24, 0x43, 0xb0, 0x2d, 0xcf, 0x46, 0x3d, 0x1f, 0xb3, 0xcb, 0x84, + 0x59, 0x78, 0x3a, 0x6c, 0x6c, 0xfa, 0xd8, 0x96, 0x96, 0xc0, 0xdc, 0xb1, 0x75, 0xd0, 0x66, 0x00, + 0x79, 0x0a, 0x30, 0x4b, 0x1a, 0x68, 0xe7, 0x1d, 0x70, 0xf6, 0xc0, 0xb5, 0x31, 0x3a, 0xda, 0xc3, + 0x1d, 0xe4, 0x5b, 0x87, 0xd8, 0xe6, 0xf7, 0x2e, 0x67, 0x48, 0xf3, 0xe6, 0x1e, 0xee, 0x10, 0x96, + 0xb6, 0x24, 0x83, 0xe5, 0x1d, 0x07, 0xb7, 0x6d, 0x1f, 0x1d, 0x39, 0xc1, 0x1e, 0x6a, 0xb9, 0x9d, + 0x43, 0xec, 0xf9, 0x8e, 0xdb, 0x41, 0xf4, 0xac, 0xdf, 0x2f, 0x4c, 0x5f, 0x9f, 0xba, 0x37, 0x07, + 0x17, 0x19, 0xd0, 0xa6, 0x13, 0xec, 0x29, 0x7d, 0x10, 0x95, 0x42, 0x14, 0x6f, 0x90, 0xed, 0xe2, + 0xf0, 0x58, 0x49, 0x72, 0xd3, 0x8e, 0x8c, 0xf8, 0x55, 0xb0, 0x32, 0x02, 0xf2, 0xcc, 0xc1, 0x47, + 0x25, 0xb7, 0xd5, 0x3b, 0xc0, 0x9d, 0xc0, 0x1a, 0x3e, 0x5e, 0x2b, 0xfe, 0x48, 0x00, 0x17, 0x65, + 0xdf, 0x77, 0xc8, 0x4a, 0xa1, 0x06, 0xd3, 0x5f, 0x29, 0x77, 0xc1, 0x59, 0x2e, 0x21, 0xc5, 0x41, + 0xfd, 0x6b, 0x96, 0xf9, 0x68, 0xb3, 0x66, 0x4b, 0x37, 0xc0, 0xe9, 
0xc0, 0x0d, 0xac, 0x36, 0x0a, + 0xdc, 0x7d, 0xdc, 0x61, 0xd7, 0x08, 0x53, 0xf0, 0x14, 0x6d, 0x33, 0x69, 0x13, 0xd1, 0x71, 0xd7, + 0x73, 0x0f, 0xba, 0x41, 0x08, 0x33, 0x45, 0x61, 0x4e, 0xb3, 0x46, 0x0e, 0xf4, 0x0a, 0x38, 0xd7, + 0xea, 0xcb, 0x10, 0x02, 0xb2, 0x2c, 0x4f, 0x1c, 0x74, 0x30, 0xe0, 0xe2, 0x5f, 0x09, 0xe0, 0x02, + 0x93, 0x5b, 0xfd, 0x18, 0xb7, 0x7a, 0x9f, 0x42, 0xec, 0x65, 0x00, 0x3a, 0x64, 0xd6, 0x58, 0x36, + 0xc9, 0x84, 0x9e, 0x23, 0x2d, 0x34, 0x91, 0x1c, 0x1b, 0xd5, 0x54, 0x86, 0x51, 0x4d, 0x67, 0x1d, + 0xd5, 0x4c, 0xc2, 0xa8, 0x9e, 0x80, 0x45, 0x36, 0xa8, 0x3a, 0x3e, 0x52, 0x22, 0xe2, 0xf6, 0xef, + 0xc6, 0x5a, 0x56, 0x80, 0x77, 0x5d, 0xef, 0x38, 0xbc, 0x1b, 0x0b, 0x7f, 0x17, 0xff, 0x44, 0x00, + 0xe7, 0x19, 0xaa, 0x4c, 0xaf, 0x6e, 0x21, 0xfe, 0x5a, 0x0f, 0xfb, 0xd4, 0xba, 0xc3, 0xd5, 0xc6, + 0xce, 0xea, 0x18, 0xe2, 0xe9, 0xb0, 0x91, 0x1e, 0x7e, 0xfd, 0x42, 0x66, 0xf0, 0x13, 0x01, 0x9c, + 0x0e, 0x25, 0x26, 0xcd, 0xd2, 0x02, 0x38, 0x61, 0xd1, 0xbf, 0xb8, 0x8c, 0xfc, 0xd7, 0x2f, 0x46, + 0xba, 0x5b, 0x40, 0x62, 0x8a, 0xac, 0x3a, 0x7e, 0x90, 0x78, 0xdb, 0xf8, 0x0d, 0x50, 0x88, 0x42, + 0x1d, 0x6c, 0x47, 0x6e, 0x26, 0x25, 0x30, 0x1d, 0xb9, 0xc5, 0xa4, 0x7f, 0x4b, 0x3a, 0x38, 0x7b, + 0x40, 0xa1, 0xfc, 0x3d, 0xa7, 0x8b, 0xf6, 0x9d, 0x0e, 0x73, 0x26, 0xf3, 0x8f, 0xee, 0x0c, 0x1c, + 0x5f, 0xe4, 0xea, 0xfd, 0xf0, 0xe1, 0x6a, 0xad, 0x0f, 0xbe, 0xe1, 0x74, 0x6c, 0x38, 0x7f, 0x30, + 0xf4, 0xbb, 0xf8, 0x15, 0x20, 0x0e, 0x04, 0x60, 0x8b, 0x5e, 0x2a, 0x8f, 0xb9, 0xfa, 0x64, 0xb7, + 0x3a, 0x3e, 0xc6, 0x81, 0xb3, 0x1f, 0x26, 0xde, 0xec, 0xda, 0x9f, 0x1d, 0xf1, 0x12, 0x26, 0xa1, + 0xea, 0xc5, 0x11, 0xff, 0x53, 0x01, 0x2c, 0x8c, 0x4e, 0xcc, 0x0b, 0xd6, 0x8e, 0xf4, 0x51, 0x38, + 0x97, 0xa8, 0x4f, 0x2f, 0x2d, 0x22, 0x26, 0xd9, 0x4a, 0x38, 0xad, 0xb5, 0x49, 0xf2, 0xbf, 0xe0, + 0x09, 0xf8, 0x2f, 0x97, 0xff, 0x05, 0xcf, 0xf1, 0x67, 0x2a, 0xff, 0x6f, 0xe7, 0xa3, 0xf2, 0x97, + 0x3d, 0xab, 0x13, 0xf8, 0xa6, 0xdb, 0xf4, 0xb1, 0x27, 0xad, 0x82, 0xf3, 0x34, 0x62, 0x20, 0xcf, + 0x6d, 
0x63, 0x1f, 0xed, 0x92, 0x3e, 0x9e, 0x34, 0xcc, 0xc0, 0x73, 0xb4, 0x8b, 0xc4, 0x5c, 0xbf, + 0xcc, 0x3a, 0x48, 0xd0, 0x67, 0xf0, 0x4e, 0x67, 0x0f, 0x7b, 0x0e, 0xbd, 0x18, 0x1c, 0xc2, 0x9c, + 0xa2, 0x98, 0x8b, 0x14, 0x48, 0x0b, 0x61, 0x86, 0x48, 0xbc, 0x0e, 0x2e, 0x30, 0x12, 0x81, 0x67, + 0x39, 0xc1, 0x00, 0x33, 0x4f, 0x31, 0x25, 0xda, 0x67, 0xd2, 0xae, 0x10, 0x43, 0x01, 0x57, 0x47, + 0x99, 0x8e, 0xe0, 0x4e, 0x53, 0xdc, 0xa5, 0x61, 0xae, 0xc3, 0x44, 0x96, 0xc0, 0x5c, 0xcf, 0xc7, + 0x1e, 0xa2, 0x5e, 0x6c, 0x66, 0x50, 0x8b, 0x51, 0xb7, 0x0e, 0x70, 0xf1, 0x07, 0x53, 0x51, 0x0d, + 0x41, 0x7c, 0xe8, 0xe0, 0xa3, 0x17, 0xbd, 0xc2, 0x9e, 0x80, 0xcb, 0xb6, 0x75, 0xec, 0xa3, 0xae, + 0xe5, 0x07, 0xa8, 0x83, 0x3f, 0x0e, 0x90, 0xd5, 0xb3, 0x9d, 0x00, 0x91, 0x75, 0xc0, 0x07, 0x7f, + 0x91, 0x00, 0x34, 0x2c, 0x12, 0x30, 0x3f, 0x0e, 0x64, 0xd2, 0x5b, 0x22, 0x22, 0xac, 0x83, 0x6b, + 0x11, 0x3f, 0xeb, 0xe1, 0xaf, 0xf5, 0x1c, 0x0f, 0x93, 0xf4, 0xc7, 0x47, 0xad, 0x3d, 0xab, 0xb3, + 0xcb, 0xd5, 0x3e, 0x0b, 0x97, 0x07, 0x60, 0x30, 0x02, 0xa5, 0x30, 0x20, 0xe9, 0x09, 0x28, 0x78, + 0x74, 0x68, 0x68, 0x87, 0x10, 0xc1, 0x9d, 0xd6, 0x71, 0x9f, 0xc0, 0x34, 0x25, 0xb0, 0xc0, 0xfa, + 0xd7, 0xc3, 0xee, 0x10, 0xf3, 0x3d, 0xb0, 0xc4, 0x31, 0x6d, 0xeb, 0x18, 0xb9, 0x3b, 0xe8, 0xc0, + 0xed, 0x90, 0x9c, 0x8f, 0x23, 0xcf, 0x50, 0xe4, 0x4b, 0x0c, 0xa4, 0x64, 0x1d, 0xeb, 0x3b, 0x35, + 0xd2, 0x1f, 0x62, 0xbf, 0x0d, 0x2e, 0x77, 0x7a, 0xd4, 0xb6, 0xdd, 0x1d, 0xe4, 0xe1, 0x03, 0xf7, + 0x10, 0xdb, 0x88, 0x8b, 0x5a, 0x38, 0x41, 0x47, 0xbe, 0xc0, 0x00, 0xf4, 0x1d, 0xc8, 0xba, 0x79, + 0xa0, 0x28, 0xfe, 0x9a, 0x30, 0x3e, 0x31, 0x2f, 0x7a, 0xe9, 0x3d, 0x04, 0x17, 0x59, 0x94, 0x42, + 0x24, 0x4c, 0x21, 0x3e, 0x50, 0xc7, 0xe6, 0xa5, 0x2b, 0x92, 0x35, 0xc2, 0x5f, 0xb3, 0x8b, 0xdf, + 0x15, 0xc0, 0xe5, 0x48, 0x59, 0x04, 0xbb, 0xd8, 0x4e, 0x8a, 0xab, 0xd2, 0x1a, 0x98, 0x8e, 0x04, + 0xc7, 0xd5, 0x44, 0x29, 0xc7, 0x28, 0xd2, 0x20, 0x49, 0x71, 0x87, 0xcd, 0x77, 0x6a, 0xc4, 0x7c, + 0x5d, 0xb2, 0x2b, 0x18, 0xc3, 0xa6, 0xe5, 
0x35, 0x2c, 0xd7, 0xaa, 0x8f, 0xe9, 0xea, 0x51, 0x76, + 0x29, 0x62, 0x22, 0x92, 0x07, 0xae, 0xc7, 0x30, 0x0c, 0x37, 0x69, 0x9f, 0x0d, 0xcf, 0xaf, 0x83, + 0xa5, 0x98, 0x21, 0xf6, 0x2b, 0x96, 0x2a, 0x60, 0xba, 0xe5, 0xda, 0x61, 0x29, 0xd4, 0xe3, 0xec, + 0xac, 0x18, 0xbe, 0xe2, 0xda, 0x18, 0x52, 0x0a, 0x09, 0xa5, 0x4b, 0xdf, 0xcc, 0x83, 0x2b, 0xb1, + 0x4a, 0xc6, 0xdd, 0xcf, 0x64, 0xbc, 0xc4, 0x6a, 0xfc, 0x00, 0x77, 0x9f, 0xdf, 0x6a, 0x88, 0x48, + 0x90, 0xe2, 0x4a, 0xd5, 0x91, 0x1d, 0xe9, 0xe3, 0xe7, 0xa3, 0x32, 0x7e, 0x34, 0x75, 0x33, 0x4e, + 0x05, 0xec, 0x44, 0x42, 0xef, 0xe2, 0xce, 0x2f, 0xaf, 0x26, 0x36, 0xc0, 0x49, 0x9f, 0xc9, 0x49, + 0x55, 0x31, 0x3f, 0x21, 0xae, 0x26, 0x0d, 0x10, 0x86, 0x14, 0x8a, 0xdf, 0xca, 0x83, 0xdb, 0x31, + 0x8a, 0x58, 0x27, 0x3b, 0xdd, 0xcf, 0x74, 0x11, 0xbc, 0x10, 0x55, 0xa8, 0x60, 0x86, 0xee, 0xc9, + 0xb9, 0x22, 0x1e, 0x64, 0x27, 0x42, 0x07, 0x08, 0x19, 0x76, 0xf1, 0xdf, 0x84, 0x04, 0x27, 0x40, + 0xf6, 0x9c, 0xdd, 0xe3, 0x5f, 0xde, 0xf1, 0xaf, 0x83, 0x69, 0xba, 0x77, 0x64, 0xc3, 0x7f, 0x0e, + 0x79, 0xc8, 0xd0, 0x68, 0x65, 0x07, 0xc5, 0x2f, 0xfe, 0x85, 0x00, 0x6e, 0xc4, 0x28, 0xa0, 0xea, + 0x74, 0xf6, 0x23, 0x07, 0x32, 0xbf, 0x8c, 0x1a, 0x90, 0xc0, 0x74, 0xdb, 0xe9, 0xec, 0xf3, 0x38, + 0x42, 0xff, 0x2e, 0xfe, 0xad, 0x00, 0xce, 0xf7, 0xcb, 0x91, 0x69, 0xea, 0xd3, 0x3f, 0x50, 0x4a, + 0xdf, 0x72, 0x47, 0x2e, 0xb4, 0x90, 0xeb, 0x39, 0xbb, 0x4e, 0x87, 0x3b, 0xcf, 0xfe, 0x85, 0x96, + 0x4e, 0x5b, 0xa5, 0xdb, 0x60, 0xbe, 0xd5, 0x76, 0x7b, 0x36, 0xea, 0x7a, 0xee, 0xa1, 0x63, 0x63, + 0x2f, 0x3c, 0x5b, 0xa2, 0xad, 0x0d, 0xde, 0x28, 0xe9, 0x60, 0xd6, 0xe6, 0x27, 0xdf, 0x34, 0x33, + 0x39, 0xf5, 0xe8, 0x8d, 0xd4, 0xb3, 0x34, 0x6c, 0x87, 0x87, 0xe5, 0x03, 0xad, 0x85, 0x44, 0x8a, + 0xcf, 0xc0, 0x62, 0x32, 0x9c, 0x74, 0x09, 0x9c, 0xb4, 0xb7, 0xa3, 0xa3, 0x3b, 0x61, 0x6f, 0xd3, + 0x71, 0x5d, 0x03, 0xa7, 0xec, 0x6d, 0x44, 0x6b, 0xc6, 0x5b, 0x6e, 0x9b, 0x8f, 0x09, 0xd8, 0xdb, + 0x0d, 0xde, 0x52, 0xfc, 0x99, 0x00, 0x16, 0xd7, 0xb1, 0x15, 0xf4, 0x3c, 0x0c, 
0x71, 0xcb, 0x3d, + 0x38, 0xc0, 0x1d, 0x3b, 0x72, 0xc6, 0x31, 0x14, 0xb5, 0x85, 0xe1, 0xa8, 0x2d, 0xbd, 0x03, 0x4e, + 0xee, 0x30, 0x54, 0x3e, 0x99, 0xd7, 0x13, 0xc7, 0x18, 0xb2, 0x08, 0x11, 0xa4, 0x8f, 0xc1, 0x32, + 0xff, 0x13, 0x79, 0x43, 0x7c, 0x51, 0xc4, 0xdf, 0x4f, 0x0a, 0x83, 0xb1, 0x42, 0x73, 0x7f, 0xbf, + 0xb4, 0x93, 0xdc, 0x59, 0x3c, 0x02, 0x17, 0x4c, 0xb9, 0xcc, 0x8e, 0xa9, 0xf0, 0x87, 0x3d, 0xec, + 0xf1, 0x95, 0x7e, 0x0d, 0xb0, 0x33, 0x0c, 0xd4, 0x71, 0x6d, 0xcc, 0xca, 0x86, 0xa7, 0x20, 0xa0, + 0x4d, 0x75, 0xd2, 0x32, 0x00, 0xc0, 0xf6, 0x2e, 0x0e, 0xcf, 0x3d, 0x18, 0x80, 0x4a, 0x5a, 0xa4, + 0x65, 0x00, 0x1c, 0x1f, 0xf9, 0x3d, 0x9a, 0x6e, 0xf1, 0x8c, 0x76, 0xce, 0xf1, 0x0d, 0xd6, 0x50, + 0xfc, 0xfb, 0x29, 0x70, 0x89, 0xe5, 0x71, 0x65, 0xcf, 0xea, 0xee, 0xc9, 0x9b, 0x86, 0xd1, 0xb2, + 0x3a, 0x61, 0x59, 0xdd, 0x79, 0x4e, 0xbb, 0xf5, 0x08, 0x39, 0xfc, 0x8e, 0x94, 0x09, 0x31, 0x0d, + 0xcf, 0x31, 0x1e, 0xad, 0xfe, 0xe5, 0x69, 0x44, 0x16, 0x32, 0x19, 0x4c, 0x96, 0x69, 0x2e, 0x0b, + 0xd9, 0x16, 0xf9, 0x83, 0x53, 0x9a, 0x5d, 0xcf, 0xed, 0x75, 0x99, 0x34, 0xd3, 0xfc, 0x94, 0xa6, + 0x4c, 0x9b, 0x06, 0x34, 0xe8, 0x06, 0x88, 0x9a, 0x69, 0x48, 0x83, 0xee, 0x77, 0x88, 0xad, 0x33, + 0x80, 0xae, 0xdb, 0x76, 0x5a, 0x0e, 0x66, 0x07, 0x65, 0xd3, 0xf0, 0x0c, 0x6d, 0x6d, 0xf0, 0x46, + 0xe9, 0x55, 0x20, 0x71, 0xd9, 0xf7, 0x7d, 0xd4, 0x6a, 0xf7, 0xfc, 0x20, 0x4c, 0x8b, 0xa7, 0xa1, + 0xc8, 0x44, 0xdf, 0xf7, 0x15, 0xde, 0x3e, 0x18, 0xa9, 0x67, 0xfb, 0x91, 0x91, 0x9e, 0x8c, 0x8c, + 0x14, 0xda, 0xfe, 0x60, 0xa4, 0xf7, 0x00, 0xa3, 0x81, 0xfc, 0x37, 0xd0, 0x76, 0xaf, 0xb5, 0x8f, + 0x03, 0xbf, 0x30, 0x4b, 0x81, 0x99, 0x70, 0xc6, 0x1b, 0x6b, 0xac, 0x95, 0xec, 0xcb, 0x38, 0xa4, + 0x75, 0xd0, 0xee, 0xaf, 0x4f, 0xbf, 0x30, 0x47, 0xa1, 0x99, 0x8c, 0x86, 0x75, 0xd0, 0x0e, 0x17, + 0x69, 0x04, 0xc3, 0x75, 0xec, 0x56, 0x04, 0x03, 0x44, 0x30, 0x74, 0xc7, 0x6e, 0x0d, 0x30, 0xfa, + 0x2a, 0xb1, 0x5a, 0x74, 0xaf, 0xe6, 0x17, 0x4e, 0x45, 0x54, 0x22, 0xf3, 0xc6, 0xe2, 0x0f, 0x04, + 0x70, 0xab, 0xa9, 
0x45, 0x26, 0x5b, 0xf1, 0xdc, 0xa3, 0xce, 0x53, 0x7c, 0x84, 0xdb, 0x25, 0x67, + 0x67, 0xe7, 0x99, 0x83, 0x8f, 0xd8, 0xbc, 0x3f, 0x01, 0x05, 0x6b, 0x67, 0x67, 0xb8, 0x40, 0x15, + 0x45, 0x6a, 0xe2, 0xe7, 0xe0, 0x42, 0xd8, 0xdf, 0xaf, 0xe2, 0x66, 0x67, 0xf5, 0x8f, 0xc1, 0xc2, + 0x38, 0x66, 0xe4, 0x05, 0xc2, 0x85, 0x51, 0x3c, 0x5a, 0xc6, 0xb7, 0x0e, 0xae, 0x18, 0xb8, 0xd5, + 0xf3, 0x9c, 0xe0, 0x18, 0xd2, 0x65, 0x55, 0xc6, 0x01, 0xc4, 0x7e, 0xaf, 0xcd, 0xf3, 0xec, 0xb8, + 0x53, 0x32, 0x09, 0x4c, 0x93, 0x6d, 0x1d, 0xdf, 0xe2, 0xd1, 0xbf, 0x8b, 0x16, 0x38, 0xdf, 0x2f, + 0x02, 0x5b, 0xc7, 0x41, 0x6b, 0x8f, 0xa1, 0x8f, 0x7b, 0x47, 0x21, 0xce, 0x3b, 0x8e, 0xb9, 0xe4, + 0xfc, 0xb8, 0x4b, 0x2e, 0x7e, 0x5f, 0x00, 0x12, 0xb1, 0x65, 0xd3, 0xf2, 0xf7, 0xc9, 0xd2, 0xc5, + 0x7d, 0x8f, 0x14, 0x58, 0xfe, 0x7e, 0xd4, 0xd9, 0xcd, 0x92, 0x86, 0xf0, 0xb5, 0x85, 0xe3, 0xfb, + 0xbd, 0x21, 0xaa, 0x73, 0xb4, 0x85, 0x76, 0x5f, 0x00, 0x33, 0xc4, 0xbb, 0x84, 0xfb, 0x0f, 0xf6, + 0x83, 0xf8, 0xfe, 0xbe, 0x1d, 0x46, 0x8a, 0x19, 0x66, 0xe0, 0x7c, 0xbf, 0x99, 0x15, 0x33, 0xfc, + 0xe6, 0x07, 0xe0, 0x6c, 0x93, 0x78, 0x21, 0x2a, 0x89, 0xde, 0xc1, 0xfa, 0x8e, 0xd4, 0x04, 0x67, + 0x7b, 0x0e, 0xda, 0xa6, 0x2f, 0x71, 0x50, 0x8b, 0x44, 0xcd, 0xd4, 0xbd, 0xdc, 0xf8, 0xc3, 0x9d, + 0x4a, 0x0e, 0x9e, 0xe9, 0x39, 0x91, 0x56, 0xe9, 0x13, 0x01, 0xdc, 0xef, 0x39, 0xc8, 0x65, 0x0f, + 0x53, 0x10, 0x3f, 0x0f, 0xc5, 0x68, 0xd7, 0x45, 0x81, 0x8b, 0xec, 0xf0, 0xe5, 0x0e, 0xe7, 0xc8, + 0x72, 0x63, 0x79, 0x02, 0xc7, 0x6c, 0xcf, 0x7f, 0x2a, 0x39, 0x78, 0xb3, 0xe7, 0xa4, 0xc2, 0x4a, + 0xdf, 0x11, 0xc0, 0xcd, 0x88, 0x74, 0x96, 0x6d, 0xa3, 0x1d, 0xc7, 0xa3, 0x5b, 0x4f, 0x3e, 0xab, + 0x4c, 0x2e, 0x16, 0xf9, 0xde, 0x4b, 0x97, 0x2b, 0xf9, 0x39, 0x51, 0x25, 0x07, 0xaf, 0xf6, 0x45, + 0x8a, 0x05, 0x1b, 0xd5, 0x55, 0x8c, 0x34, 0x6d, 0x2b, 0xe8, 0xcf, 0xce, 0x4c, 0x56, 0x5d, 0xa5, + 0xbc, 0x5d, 0x1a, 0xd2, 0x55, 0x32, 0xac, 0xf4, 0xbf, 0x05, 0x70, 0x3d, 0x22, 0x9d, 0x8f, 0x03, + 0xd4, 0xea, 0x3f, 0x73, 0x42, 0x3e, 0x7d, 0x61, 0x44, 
0x9d, 0xe5, 0xa9, 0x47, 0xef, 0xa4, 0x0b, + 0x95, 0xf4, 0x48, 0xaa, 0x92, 0x83, 0x57, 0xfa, 0xd2, 0xc4, 0x00, 0x49, 0xbf, 0x22, 0x80, 0x5b, + 0x11, 0x31, 0x3c, 0x5e, 0xd2, 0x88, 0x5a, 0xe1, 0x5b, 0xa7, 0x50, 0x94, 0x93, 0x54, 0x94, 0x2f, + 0xa5, 0x8b, 0x32, 0xe9, 0xb5, 0x54, 0x25, 0x07, 0xaf, 0xf7, 0xc5, 0x49, 0x00, 0x0c, 0x35, 0xe3, + 0xf1, 0xf7, 0x47, 0x88, 0xec, 0x5d, 0xc9, 0x02, 0x64, 0xef, 0x9f, 0xf8, 0x74, 0xcd, 0xa6, 0x6a, + 0x26, 0xe5, 0xf5, 0x14, 0xd3, 0x4c, 0x32, 0x90, 0xf4, 0x31, 0xb8, 0x12, 0x27, 0x45, 0xf7, 0x98, + 0x4b, 0x30, 0x47, 0x25, 0xf8, 0x42, 0x76, 0x09, 0xa2, 0xcf, 0xaf, 0x2a, 0x39, 0x58, 0x18, 0xe3, + 0xce, 0x01, 0xa4, 0xff, 0x01, 0x96, 0xc7, 0x39, 0x77, 0x3d, 0xa7, 0x13, 0x70, 0xd6, 0x80, 0xb2, + 0x7e, 0x2b, 0x2b, 0xeb, 0x91, 0xc7, 0x5b, 0x95, 0x1c, 0xbc, 0x3c, 0xc2, 0x7b, 0x00, 0x21, 0xb5, + 0xc1, 0xe5, 0x9e, 0x83, 0x6c, 0xee, 0xc4, 0x49, 0xd6, 0xe5, 0x91, 0x50, 0x42, 0x89, 0xd3, 0xa0, + 0x76, 0x6a, 0xc2, 0xc6, 0x2a, 0xfe, 0x09, 0x54, 0x25, 0x07, 0x17, 0x7a, 0x4e, 0xec, 0xe3, 0xa8, + 0xef, 0x30, 0xf3, 0xeb, 0xb3, 0x1b, 0xc4, 0xba, 0xb0, 0x0e, 0x8c, 0x73, 0x3e, 0x4d, 0x39, 0xbf, + 0x9d, 0x81, 0x73, 0xfc, 0xab, 0x26, 0x66, 0x79, 0x29, 0x2f, 0x9f, 0xbe, 0x41, 0x0d, 0xaf, 0x2f, + 0x0c, 0xaf, 0x9d, 0xf7, 0x59, 0x19, 0x3c, 0x17, 0xe4, 0x0c, 0x15, 0xe4, 0xcd, 0x4f, 0x55, 0x44, + 0xcf, 0x6c, 0x6e, 0xc2, 0xa3, 0x87, 0xff, 0xcb, 0x1c, 0xe8, 0x40, 0x02, 0x9e, 0xd0, 0x0f, 0xd6, + 0x25, 0x13, 0x62, 0x9e, 0x0a, 0xf1, 0x24, 0x8b, 0x10, 0x71, 0xb5, 0xca, 0x95, 0x1c, 0xbc, 0x16, + 0x91, 0x23, 0xb6, 0x9c, 0xf9, 0x37, 0x98, 0xf7, 0x1c, 0x17, 0xa5, 0x15, 0x96, 0xa9, 0xa0, 0x83, + 0xa0, 0xed, 0x73, 0x81, 0xce, 0x52, 0x81, 0xbe, 0xf8, 0x1c, 0x02, 0x8d, 0x57, 0x0f, 0x57, 0x72, + 0xf0, 0xd6, 0xb8, 0x54, 0x03, 0xb8, 0xa0, 0xcd, 0x0b, 0x28, 0xff, 0x4c, 0x00, 0x4f, 0x86, 0xe7, + 0x89, 0xd6, 0x9e, 0x22, 0x8b, 0x16, 0x9f, 0x22, 0x3b, 0xac, 0x3e, 0x45, 0x81, 0xeb, 0xb6, 0x79, + 0x32, 0xd9, 0x6e, 0x73, 0x49, 0x45, 0x2a, 0xe9, 0xd3, 0x4c, 0xf3, 0x97, 0xa9, 0xc6, 0xb7, 
0x92, + 0x83, 0x0f, 0xa3, 0x93, 0x9a, 0xad, 0x30, 0xf8, 0xc7, 0x02, 0x78, 0x9c, 0x69, 0x0c, 0x03, 0x75, + 0x33, 0xf9, 0xcf, 0x51, 0xf9, 0xcb, 0x9f, 0x5a, 0xfe, 0xe1, 0x2a, 0xa3, 0x4a, 0x0e, 0xae, 0xa6, + 0x09, 0x3f, 0x52, 0x97, 0xf4, 0x5b, 0x02, 0x78, 0x25, 0x2a, 0xb9, 0xd5, 0x23, 0x99, 0x47, 0x7f, + 0x0f, 0x1a, 0x79, 0x4f, 0xc5, 0x04, 0x96, 0xa8, 0xc0, 0xef, 0x67, 0x10, 0x78, 0x52, 0xd5, 0x6c, + 0x25, 0x07, 0xef, 0x0c, 0x04, 0x9d, 0x58, 0x5f, 0xfb, 0x47, 0x02, 0x78, 0x90, 0x62, 0xb9, 0x8e, + 0x75, 0xc0, 0x36, 0x2f, 0xc7, 0x5c, 0xc8, 0xf3, 0x54, 0xc8, 0xb5, 0x4f, 0x63, 0xbf, 0xc3, 0x85, + 0x6b, 0x95, 0x1c, 0xbc, 0x3f, 0xc1, 0x88, 0x35, 0xeb, 0x20, 0x5a, 0xe5, 0xf6, 0xab, 0x02, 0xb8, + 0x13, 0x15, 0xb5, 0xdb, 0x2f, 0x06, 0x1b, 0x9b, 0xf7, 0x0b, 0x54, 0xc2, 0x77, 0x33, 0x48, 0x98, + 0x54, 0x51, 0x56, 0xc9, 0xc1, 0xe2, 0x40, 0xb4, 0xc4, 0xba, 0xb3, 0x6f, 0x09, 0xe0, 0x46, 0x54, + 0xa6, 0x00, 0xfb, 0x01, 0x91, 0xa6, 0x33, 0xe4, 0x8f, 0x2f, 0xa6, 0x46, 0xbf, 0x09, 0xe5, 0x5d, + 0x95, 0x1c, 0x5c, 0x1e, 0x48, 0x12, 0x57, 0xff, 0xe5, 0x81, 0xa5, 0xa8, 0x0c, 0x61, 0x9e, 0x1b, + 0xc6, 0xa1, 0x85, 0x94, 0x1b, 0xc4, 0xa4, 0xfa, 0x2a, 0x16, 0x76, 0x13, 0x6a, 0xaf, 0xda, 0xa0, + 0xd0, 0x73, 0x48, 0x12, 0x66, 0x05, 0x18, 0x75, 0xf0, 0x11, 0xdd, 0xff, 0xf2, 0x88, 0x7b, 0x29, + 0xe5, 0x80, 0x2b, 0xb1, 0xb2, 0xa9, 0x92, 0x83, 0x17, 0x7a, 0xce, 0x78, 0xa7, 0x74, 0x4c, 0x83, + 0xfc, 0x28, 0x37, 0xdf, 0x3a, 0x0c, 0x59, 0x16, 0x52, 0x35, 0x3c, 0xa1, 0x5c, 0x8a, 0x0d, 0x34, + 0x1e, 0x40, 0xfa, 0x06, 0xb8, 0x16, 0x37, 0x50, 0x5a, 0xa1, 0xc4, 0x99, 0x5f, 0x4e, 0x0d, 0x30, + 0x13, 0xab, 0x9b, 0x2a, 0x39, 0xb8, 0x38, 0x3a, 0xea, 0x01, 0x88, 0xf4, 0xbb, 0xcc, 0x85, 0x8c, + 0x4a, 0xc0, 0xee, 0xe1, 0xa2, 0x15, 0x50, 0x5c, 0x9a, 0x45, 0x2a, 0x8d, 0x92, 0x55, 0x9a, 0x09, + 0x85, 0x54, 0x95, 0x1c, 0xbc, 0x3d, 0x22, 0x58, 0x3c, 0xb4, 0xf4, 0x07, 0x02, 0x58, 0x8d, 0x9a, + 0xa0, 0x33, 0x38, 0x76, 0x44, 0xd6, 0x91, 0xcf, 0x8e, 0x06, 0xf8, 0xb2, 0xe0, 0x56, 0xb9, 0x94, + 0xba, 0x85, 0xc8, 0xf6, 0xf2, 
0xb8, 0x92, 0x83, 0xf7, 0x06, 0x56, 0x1a, 0x85, 0x3d, 0xf2, 0x75, + 0xc7, 0x6e, 0x0d, 0xbd, 0x52, 0xfe, 0xae, 0x00, 0x6e, 0xc7, 0xa7, 0x0c, 0xb6, 0x8f, 0x30, 0x3d, + 0x20, 0xe5, 0xe2, 0x5d, 0xc9, 0x9c, 0x42, 0xc5, 0x3f, 0x08, 0x1e, 0x4e, 0xa1, 0xfa, 0x30, 0xb6, + 0x1f, 0x7d, 0xfe, 0x1a, 0x30, 0xb3, 0x26, 0xf1, 0x36, 0x70, 0x11, 0x2b, 0x0d, 0x62, 0xb3, 0xc8, + 0xa5, 0x58, 0x4e, 0x5d, 0xba, 0xf1, 0x2f, 0x4c, 0xb9, 0x45, 0xc7, 0xbf, 0x3e, 0xfd, 0x2a, 0x38, + 0x67, 0xd1, 0x1a, 0x25, 0x34, 0xa8, 0x10, 0x2a, 0x5c, 0xa5, 0x9c, 0x92, 0x0f, 0x92, 0x63, 0xeb, + 0xe9, 0x2a, 0x39, 0x28, 0x5a, 0x23, 0x1d, 0xa1, 0x4b, 0x8c, 0x9a, 0x00, 0xd7, 0x2c, 0x4d, 0x8f, + 0xf9, 0xc8, 0xae, 0xa5, 0x2e, 0xd8, 0x09, 0x37, 0x99, 0xcc, 0x25, 0x4e, 0xba, 0xea, 0xe4, 0xa9, + 0x72, 0x8c, 0x10, 0xfd, 0x53, 0x00, 0x26, 0xc7, 0xf5, 0xd4, 0x79, 0x9e, 0x7c, 0xc1, 0xc9, 0xe6, + 0x39, 0xe5, 0x12, 0xf4, 0x7f, 0x09, 0xd4, 0x89, 0x84, 0xfb, 0xc6, 0xaf, 0x45, 0xbf, 0xb1, 0x11, + 0x6e, 0x19, 0x6f, 0x64, 0xdd, 0xbd, 0x26, 0x7d, 0xa1, 0x63, 0x68, 0xf7, 0x1a, 0x03, 0x24, 0x7d, + 0x04, 0xf8, 0x64, 0x21, 0x1c, 0x96, 0x17, 0x16, 0x8a, 0x94, 0xeb, 0x6b, 0x29, 0xd3, 0x3e, 0x5c, + 0x8e, 0x58, 0xc9, 0xc1, 0xb3, 0xd6, 0x70, 0xbb, 0x74, 0x00, 0x2e, 0x71, 0xda, 0xc4, 0x41, 0x45, + 0xab, 0x12, 0x0b, 0x37, 0x53, 0x4e, 0xee, 0x93, 0x8b, 0x03, 0x2b, 0x39, 0x78, 0xd1, 0x8a, 0xeb, + 0x95, 0xb6, 0xc1, 0xc5, 0xc1, 0x29, 0x09, 0x73, 0x8c, 0x6c, 0x3a, 0x6f, 0x51, 0x66, 0xaf, 0x26, + 0x32, 0x8b, 0xb9, 0xdb, 0xa8, 0xe4, 0xe0, 0x79, 0x2f, 0xe6, 0xca, 0xe3, 0x08, 0x5c, 0x49, 0x38, + 0x5c, 0x67, 0xac, 0x6e, 0xa7, 0x8c, 0x2b, 0xf9, 0x42, 0x80, 0x38, 0xfc, 0x9d, 0xe4, 0xeb, 0x82, + 0x6d, 0xc0, 0x47, 0x8d, 0x78, 0x41, 0x82, 0xc7, 0xea, 0x1e, 0x0b, 0x77, 0x52, 0x06, 0x17, 0x53, + 0x2b, 0x49, 0x06, 0x67, 0xc5, 0x94, 0x50, 0x56, 0xc1, 0x99, 0x3e, 0x0f, 0x3a, 0x4b, 0x77, 0x29, + 0xed, 0xdb, 0xa9, 0xb4, 0x09, 0x70, 0x25, 0x07, 0x4f, 0x5b, 0xd1, 0x2a, 0xc7, 0x2d, 0x20, 0x45, + 0x6b, 0x27, 0xd8, 0x8c, 0x14, 0xee, 0xa5, 0x94, 0x64, 0x8f, 0x16, 
0xf9, 0x51, 0x6f, 0x32, 0x5a, + 0xf8, 0x37, 0x42, 0xba, 0x47, 0x0b, 0xc6, 0x0a, 0xf7, 0x33, 0x93, 0x66, 0x15, 0x66, 0xc3, 0xa4, + 0x79, 0xd5, 0xd9, 0x08, 0x69, 0x9b, 0x16, 0x94, 0x14, 0x56, 0x32, 0x93, 0x66, 0x15, 0x28, 0xc3, + 0xa4, 0x79, 0x55, 0x4a, 0x1b, 0x5c, 0x8e, 0x92, 0xe6, 0x35, 0x5d, 0x5c, 0x2f, 0xaf, 0xa4, 0x9c, + 0x0b, 0xc4, 0x17, 0xf9, 0x55, 0x72, 0x70, 0xc1, 0x8a, 0x2f, 0xff, 0x8b, 0xe7, 0xc6, 0x55, 0xf5, + 0xea, 0x73, 0x72, 0xeb, 0x2b, 0x6c, 0x8c, 0x1b, 0x57, 0x5b, 0x3c, 0x37, 0xae, 0xbd, 0xd7, 0x9e, + 0x93, 0x5b, 0x5f, 0x87, 0x63, 0xdc, 0xb8, 0x26, 0x0f, 0xc0, 0x62, 0x94, 0x1b, 0x2d, 0xf5, 0xf2, + 0x49, 0xb4, 0xec, 0xf9, 0xd8, 0x2b, 0xac, 0x66, 0x66, 0x17, 0xad, 0x77, 0x1b, 0x66, 0x37, 0x54, + 0x09, 0xf7, 0xff, 0x04, 0x50, 0x8c, 0x26, 0x08, 0xd1, 0x9b, 0xa4, 0xc1, 0x31, 0x4b, 0xe1, 0x41, + 0xea, 0x99, 0x6c, 0xea, 0x6b, 0x3d, 0x76, 0x26, 0xdb, 0x07, 0x6b, 0x8d, 0x83, 0x49, 0xfb, 0xe0, + 0x52, 0xcc, 0x09, 0x0b, 0x76, 0x5a, 0xb8, 0xf0, 0x7a, 0x6a, 0x8a, 0x9d, 0xf0, 0x56, 0x8e, 0xa5, + 0xd8, 0x23, 0x9d, 0x4e, 0x0b, 0x8f, 0x32, 0x0b, 0xd3, 0x4d, 0xd7, 0xc6, 0x85, 0x87, 0x99, 0x99, + 0x8d, 0xbc, 0x4f, 0x1b, 0x66, 0x36, 0xe8, 0x94, 0xbe, 0x02, 0xce, 0x05, 0xd6, 0x2e, 0x8f, 0x43, + 0x98, 0x04, 0x44, 0xef, 0xb8, 0xf0, 0x28, 0x25, 0x16, 0xc5, 0x5d, 0x38, 0x92, 0x58, 0x14, 0x58, + 0xbb, 0xd1, 0x76, 0x29, 0x00, 0x8b, 0x3e, 0xbf, 0x9e, 0x41, 0x1e, 0xa5, 0x84, 0x76, 0x31, 0x3d, + 0xc8, 0xee, 0xb5, 0x83, 0xc2, 0x1b, 0x29, 0x47, 0x52, 0x93, 0x6e, 0x76, 0x2a, 0x39, 0x78, 0xc9, + 0x8f, 0xef, 0x1f, 0x5d, 0x16, 0xbc, 0x7e, 0x8c, 0x2f, 0xf9, 0xc7, 0x99, 0xed, 0x34, 0x5a, 0x75, + 0x38, 0x6c, 0xa7, 0x43, 0xf5, 0x88, 0xf1, 0xdc, 0xf8, 0x22, 0x7c, 0xf3, 0x39, 0xb9, 0xc5, 0x2d, + 0xc2, 0xa1, 0x22, 0xbb, 0x6d, 0x70, 0x31, 0x34, 0x8c, 0x63, 0xb4, 0x83, 0x83, 0xd6, 0x1e, 0x8f, + 0x81, 0x5f, 0x48, 0x89, 0x48, 0x31, 0xd7, 0x5b, 0x24, 0x22, 0xd9, 0x31, 0xb7, 0x5e, 0xdf, 0x1b, + 0x39, 0xcd, 0xe3, 0x06, 0x38, 0xe0, 0xcb, 0xf6, 0xf8, 0x85, 0xb7, 0x32, 0x1f, 0x9e, 0x25, 0x7f, + 0x66, 
0x61, 0xf8, 0x48, 0x2f, 0x16, 0x4e, 0xfa, 0xe6, 0xc8, 0xce, 0x7e, 0xbf, 0xb7, 0x8d, 0xe9, + 0xe5, 0xec, 0xd0, 0x36, 0xe1, 0x49, 0xe6, 0x03, 0xce, 0xf1, 0xaf, 0xdf, 0x0c, 0x1f, 0x70, 0xd2, + 0xfe, 0xfd, 0xa1, 0xed, 0xc1, 0xb7, 0xe3, 0x55, 0x62, 0x75, 0xbb, 0xf4, 0x98, 0xb5, 0x7f, 0xc0, + 0xf9, 0x76, 0xea, 0x09, 0xf7, 0xa4, 0x67, 0x94, 0xc3, 0x8e, 0x28, 0xf6, 0xa1, 0x65, 0x17, 0x2c, + 0x71, 0x6b, 0xdb, 0xf5, 0xac, 0xee, 0x1e, 0xdd, 0xd2, 0xf9, 0x2d, 0x2b, 0xcc, 0x84, 0xde, 0xa1, + 0xfc, 0x5f, 0x4f, 0xb1, 0xb7, 0xb1, 0xfb, 0x7a, 0xb2, 0x9a, 0xac, 0x48, 0xd7, 0x91, 0x3f, 0xb8, + 0xca, 0xff, 0x3e, 0xdb, 0xa8, 0x0d, 0x71, 0x6d, 0x79, 0xee, 0x51, 0x07, 0xfd, 0x77, 0x7c, 0x84, + 0xdb, 0xc8, 0x76, 0x76, 0x76, 0xe8, 0x16, 0xb8, 0xf0, 0x6e, 0xaa, 0x3d, 0xa4, 0xdf, 0x20, 0x33, + 0x7b, 0x98, 0x08, 0x27, 0xfd, 0x37, 0x70, 0x91, 0x16, 0x71, 0xd0, 0x7b, 0x53, 0x7a, 0xe7, 0xc9, + 0x47, 0xff, 0x5e, 0xda, 0x4d, 0xe5, 0xd8, 0xf5, 0x6b, 0x25, 0x07, 0xa5, 0xde, 0xf8, 0xa5, 0x2c, + 0x3b, 0x50, 0x8f, 0xdd, 0x37, 0xe1, 0x2e, 0x67, 0xf6, 0xc5, 0x54, 0x7b, 0x4b, 0xae, 0x4d, 0x64, + 0xf6, 0x36, 0xa1, 0x76, 0xf1, 0xd7, 0x05, 0x70, 0x2f, 0x41, 0x02, 0x7e, 0x9c, 0xe5, 0x76, 0x71, + 0x38, 0xe9, 0x5f, 0x4a, 0x0d, 0x81, 0xa9, 0x25, 0x82, 0xec, 0xf6, 0x2f, 0xbd, 0x92, 0xf0, 0x13, + 0x01, 0xac, 0xc4, 0xcb, 0x45, 0x6b, 0xd0, 0x46, 0xb7, 0x74, 0xef, 0xa7, 0x5e, 0xbe, 0x65, 0xa8, + 0xd9, 0x63, 0x67, 0x1f, 0x59, 0x8a, 0xfb, 0xf8, 0x99, 0x42, 0xec, 0x56, 0x93, 0x3e, 0xc5, 0xea, + 0x86, 0xe7, 0xa6, 0x1f, 0x7c, 0x9a, 0xbd, 0x66, 0xa4, 0x8e, 0x2e, 0x71, 0xaf, 0x19, 0xad, 0xb5, + 0xfb, 0x3e, 0x3b, 0x25, 0x8d, 0x11, 0xa7, 0xed, 0x74, 0xf6, 0x87, 0x4e, 0x17, 0xe4, 0xd4, 0x2d, + 0x67, 0x4a, 0x59, 0x5b, 0x25, 0x07, 0x6f, 0xc4, 0x08, 0x34, 0x0c, 0xb4, 0x76, 0x12, 0xcc, 0x50, + 0x9a, 0x4f, 0xa7, 0x67, 0xf3, 0xe2, 0x14, 0xb1, 0xc1, 0xfe, 0x4e, 0x98, 0xc4, 0xe5, 0xf0, 0xd2, + 0x8c, 0x4a, 0xb6, 0xf2, 0x43, 0x71, 0xf0, 0x41, 0xc4, 0x70, 0xb3, 0x26, 0xdd, 0x00, 0xcb, 0x25, + 0xcd, 0x50, 0xf4, 0x67, 0x2a, 0x44, 0x50, 
0x35, 0xf4, 0x26, 0x54, 0x46, 0x3f, 0x67, 0x74, 0x05, + 0x14, 0xc6, 0x41, 0x0c, 0x15, 0x3e, 0x53, 0xa1, 0x28, 0x48, 0xd7, 0xc1, 0x95, 0xf1, 0xde, 0x8d, + 0xe6, 0x9a, 0x0a, 0xeb, 0xaa, 0xa9, 0x1a, 0x62, 0x5e, 0x7a, 0x03, 0x3c, 0x18, 0x87, 0x28, 0xc9, + 0xa6, 0xbc, 0x26, 0x1b, 0x2a, 0x6a, 0xe8, 0x86, 0x59, 0x86, 0xaa, 0x81, 0x0c, 0xb5, 0xba, 0x8e, + 0x2a, 0xba, 0x61, 0xaa, 0x25, 0x71, 0x4a, 0x7a, 0x1d, 0xbc, 0x3a, 0x01, 0xa9, 0xb6, 0x65, 0x7c, + 0x58, 0x1d, 0xc2, 0x98, 0x96, 0x1e, 0x81, 0xd5, 0x49, 0x18, 0x7a, 0xbd, 0xac, 0x97, 0xd6, 0x86, + 0x70, 0x66, 0xa4, 0x57, 0xc0, 0xdd, 0x2c, 0xa2, 0xc1, 0x92, 0x21, 0x9e, 0x90, 0xee, 0x81, 0x5b, + 0xa9, 0x22, 0x11, 0xc8, 0x93, 0xd2, 0x1d, 0x50, 0x1c, 0x87, 0x94, 0x1b, 0x8d, 0xaa, 0xa6, 0xc8, + 0xa6, 0xa6, 0xd7, 0x51, 0xc5, 0x34, 0x1b, 0xe2, 0xac, 0x74, 0x1b, 0xdc, 0x98, 0x0c, 0x67, 0x2a, + 0x0d, 0x71, 0x2e, 0x1e, 0x6c, 0x53, 0xab, 0x97, 0xf4, 0x4d, 0x03, 0x95, 0x54, 0x63, 0xc3, 0xd4, + 0x1b, 0x22, 0x90, 0x5e, 0x05, 0xf7, 0x26, 0xc8, 0x67, 0x7c, 0x58, 0x65, 0x73, 0x46, 0x65, 0x3c, + 0x95, 0xa2, 0xe0, 0xc1, 0xd0, 0xd5, 0x92, 0x51, 0xd1, 0xd6, 0x4d, 0xf1, 0xb4, 0xf4, 0x18, 0xbc, + 0x9e, 0x89, 0x7e, 0x54, 0xc5, 0x67, 0x52, 0xf8, 0x40, 0xb5, 0xa4, 0x0d, 0x4f, 0xfd, 0x7c, 0xd6, + 0x49, 0x29, 0x2b, 0x0d, 0xf1, 0x6c, 0xa6, 0x49, 0x21, 0x90, 0x62, 0x66, 0xf5, 0x10, 0xe8, 0x73, + 0xd2, 0xbb, 0xe0, 0xad, 0xe7, 0x51, 0x0f, 0x5f, 0x0f, 0x55, 0xd5, 0x30, 0x44, 0x49, 0x7a, 0x0d, + 0xdc, 0xcf, 0x82, 0x2c, 0x7f, 0xd4, 0x84, 0xaa, 0x78, 0x5e, 0xba, 0x0b, 0x6e, 0x4e, 0x00, 0x2f, + 0x6d, 0xd5, 0xe5, 0x9a, 0x5e, 0x5a, 0x13, 0x2f, 0xa4, 0x98, 0xb8, 0x22, 0x1b, 0x86, 0x5c, 0x2f, + 0x41, 0x19, 0x6d, 0xa8, 0x5b, 0x46, 0x43, 0x56, 0x54, 0x43, 0xbc, 0x98, 0x32, 0x6b, 0x03, 0x9c, + 0xe8, 0x1c, 0x2c, 0x48, 0x4f, 0xc0, 0xe3, 0x09, 0x58, 0x6a, 0x55, 0x36, 0x4c, 0x4d, 0x31, 0x54, + 0x19, 0x2a, 0x95, 0x21, 0xcc, 0x4b, 0x99, 0xe6, 0x9b, 0xe3, 0xcb, 0x4a, 0x45, 0x15, 0x0b, 0x29, + 0xda, 0x62, 0x18, 0x35, 0xb5, 0xa6, 0xc3, 0xad, 0xd2, 0x9a, 0x78, 0x39, 0x13, 
0x03, 0xaa, 0x59, + 0xc4, 0x18, 0x2c, 0xa6, 0x0c, 0x86, 0x61, 0x28, 0xd5, 0xa6, 0x61, 0x8e, 0x18, 0xef, 0x92, 0xb4, + 0x02, 0xee, 0xa4, 0x5a, 0x17, 0x9b, 0xc5, 0x2b, 0xd2, 0x2a, 0x58, 0xc9, 0x64, 0x5f, 0x0c, 0x7e, + 0x39, 0x65, 0x32, 0x07, 0xf0, 0x35, 0x4d, 0x81, 0xba, 0xa1, 0xaf, 0x9b, 0xe2, 0x55, 0xe9, 0x0b, + 0xe0, 0xd1, 0xa4, 0xc9, 0xd4, 0x95, 0x0d, 0xa8, 0xcb, 0x4a, 0x65, 0xc4, 0xcf, 0x5d, 0x4b, 0xb1, + 0xfd, 0xd0, 0x37, 0xca, 0x66, 0x55, 0x36, 0xc4, 0xeb, 0x29, 0x6b, 0xca, 0xa8, 0xeb, 0x9b, 0xeb, + 0x55, 0x79, 0x43, 0x15, 0x6f, 0x24, 0xd0, 0xd5, 0x95, 0x88, 0x76, 0x4b, 0x06, 0x6a, 0x40, 0xfd, + 0xcb, 0x5b, 0x62, 0x31, 0xc1, 0x14, 0xa3, 0xd0, 0x15, 0xad, 0x5c, 0x41, 0xf2, 0x33, 0x59, 0xab, + 0xca, 0x6b, 0x5a, 0x55, 0x33, 0xb7, 0xc4, 0x9b, 0xd2, 0x5b, 0xe0, 0x8d, 0x14, 0x2c, 0xba, 0x42, + 0x34, 0x05, 0x41, 0xb5, 0xac, 0x19, 0x26, 0xa4, 0xae, 0x53, 0xbc, 0x15, 0xef, 0x85, 0x0d, 0xb9, + 0x56, 0x8d, 0xba, 0x58, 0xf1, 0xb6, 0x54, 0x04, 0x57, 0xc7, 0xe1, 0x54, 0xe5, 0x11, 0xfb, 0x70, + 0x5f, 0x5d, 0x51, 0xc5, 0x3b, 0x09, 0x46, 0xa7, 0x2b, 0xa3, 0x6e, 0x18, 0xd5, 0xf5, 0x3a, 0x92, + 0x4b, 0xe2, 0x5d, 0xe9, 0x16, 0xb8, 0x3e, 0x29, 0x2e, 0xd2, 0x0f, 0xba, 0xdd, 0x8b, 0xb7, 0xfd, + 0x68, 0x04, 0x90, 0x37, 0x0d, 0xa4, 0xe8, 0x75, 0x43, 0xaf, 0xaa, 0xe2, 0x7d, 0xe9, 0x1a, 0x58, + 0x1a, 0x07, 0xaf, 0x29, 0x0d, 0x64, 0x98, 0x25, 0x4d, 0x17, 0x57, 0xe8, 0x67, 0x0f, 0xe3, 0x01, + 0x0c, 0x55, 0x7c, 0x45, 0xba, 0x0f, 0x6e, 0x27, 0xe1, 0x43, 0x55, 0xae, 0xc9, 0x6b, 0x55, 0x95, + 0xc5, 0xa6, 0x57, 0x57, 0x7e, 0x4f, 0x00, 0xf3, 0xc3, 0x9f, 0x1e, 0x1e, 0xe2, 0x6e, 0x98, 0xb2, + 0xd9, 0x34, 0x46, 0x32, 0x85, 0x25, 0x70, 0x69, 0x14, 0xc0, 0x68, 0x2a, 0x0a, 0x71, 0x8a, 0x42, + 0x6c, 0xe7, 0x86, 0xd6, 0x68, 0xa8, 0x25, 0x31, 0x2f, 0x5d, 0x06, 0x17, 0x47, 0x3b, 0x55, 0x08, + 0x75, 0x28, 0x4e, 0xc5, 0xe1, 0xc9, 0x6b, 0x3a, 0xa4, 0x41, 0x7f, 0xe5, 0x67, 0x79, 0x30, 0xa5, + 0x98, 0xb2, 0x74, 0x1e, 0x9c, 0x55, 0x4c, 0x79, 0xfc, 0x23, 0x8f, 0xa4, 0x51, 0x6e, 0x9a, 0x15, + 0xa2, 0xc3, 0xba, 
0xaa, 0x98, 0x3a, 0x49, 0x59, 0x2e, 0x81, 0xf3, 0xb4, 0x5d, 0x31, 0xb5, 0x67, + 0x24, 0x93, 0x31, 0x0c, 0x4d, 0xaf, 0x93, 0x4c, 0xa5, 0xdf, 0x41, 0x44, 0x46, 0x50, 0xfd, 0xb0, + 0xa9, 0x1a, 0xa6, 0x21, 0x4e, 0x85, 0x1d, 0x0d, 0xa8, 0xd6, 0xb4, 0x66, 0x0d, 0x19, 0xcd, 0x46, + 0x43, 0x87, 0xa6, 0x38, 0x1d, 0x76, 0x98, 0x90, 0x78, 0x8f, 0x12, 0x2a, 0xa9, 0xcf, 0x34, 0xe2, + 0x76, 0x67, 0x42, 0xde, 0xcd, 0x46, 0x19, 0xca, 0x25, 0x15, 0xad, 0xc9, 0xf5, 0xba, 0x0a, 0xc5, + 0x13, 0x21, 0xc2, 0x9a, 0x56, 0xad, 0x6a, 0xf5, 0x32, 0x32, 0x9a, 0xb5, 0x9a, 0x0c, 0xb7, 0xc4, + 0x93, 0xe1, 0x08, 0x38, 0xef, 0xaa, 0x66, 0x98, 0xe2, 0x2c, 0xfd, 0x14, 0xe0, 0xa0, 0xb1, 0xa6, + 0xd7, 0x35, 0x53, 0x87, 0x5a, 0xbd, 0x2c, 0xce, 0xd1, 0x8f, 0x0c, 0x9a, 0x32, 0x52, 0xbf, 0x6c, + 0xaa, 0xb0, 0x2e, 0x57, 0x91, 0xdc, 0x2c, 0x69, 0x26, 0x32, 0x4c, 0x1d, 0xca, 0x65, 0x55, 0x04, + 0xa1, 0x00, 0xfa, 0x06, 0x91, 0xc2, 0x20, 0xba, 0xdb, 0xaa, 0x2b, 0xe2, 0x29, 0x49, 0x04, 0xa7, + 0x29, 0x5e, 0xdd, 0x84, 0x32, 0xd2, 0x4a, 0xe2, 0x69, 0xe9, 0x1c, 0x38, 0xd3, 0x87, 0x34, 0x14, + 0xad, 0x26, 0x9e, 0x09, 0xf9, 0x6a, 0x25, 0xb5, 0x6e, 0x6a, 0xe6, 0x16, 0x32, 0x54, 0xa5, 0x09, + 0xc9, 0x72, 0x9c, 0x5f, 0xf9, 0x11, 0x00, 0x17, 0x63, 0xdf, 0xd1, 0x91, 0x30, 0xa6, 0xd5, 0x4d, + 0xb5, 0xcc, 0x16, 0x20, 0x52, 0xeb, 0x50, 0xaf, 0x56, 0xd1, 0x86, 0x56, 0x1f, 0xfd, 0x7c, 0xe2, + 0x0d, 0xb0, 0x9c, 0x04, 0x68, 0x54, 0x65, 0x65, 0x43, 0x14, 0xc8, 0xea, 0x49, 0x02, 0x21, 0x2b, + 0x42, 0xd7, 0x4a, 0x8a, 0x98, 0x27, 0x89, 0x51, 0x12, 0x54, 0x43, 0x2e, 0xab, 0xb0, 0xd4, 0x34, + 0xb7, 0xc4, 0xa9, 0x49, 0xfc, 0xd4, 0x9a, 0xac, 0x55, 0xc5, 0x69, 0x92, 0xc5, 0x26, 0x81, 0x3c, + 0xd5, 0xa0, 0x2c, 0xce, 0x48, 0x37, 0xc1, 0xb5, 0x24, 0x08, 0x6a, 0x9e, 0xb0, 0x24, 0x9e, 0x20, + 0x2e, 0x27, 0x09, 0xa8, 0x26, 0x9b, 0xa6, 0x0a, 0x6b, 0xba, 0x61, 0x8a, 0x27, 0x27, 0x0d, 0xaf, + 0x66, 0x20, 0x53, 0x95, 0x6b, 0x86, 0x38, 0x3b, 0x09, 0x4a, 0x6f, 0x18, 0x65, 0xb5, 0xae, 0xa9, + 0xe2, 0xdc, 0x24, 0xd1, 0xc9, 0x94, 0x8a, 0x60, 0xe2, 
0xe0, 0xe4, 0xda, 0xba, 0x78, 0x6a, 0xb2, + 0xdc, 0x4a, 0x45, 0xab, 0xab, 0xcc, 0x54, 0xde, 0x04, 0x0f, 0xd3, 0xe1, 0x50, 0x59, 0x33, 0x2b, + 0xcd, 0x35, 0xba, 0xbe, 0xc8, 0xba, 0x3a, 0x23, 0x3d, 0x00, 0xaf, 0x64, 0x40, 0x53, 0x34, 0xa8, + 0x54, 0x55, 0x45, 0x13, 0xe7, 0x89, 0x5b, 0xcc, 0xc6, 0xa7, 0x2a, 0xaf, 0x89, 0x67, 0x49, 0xe8, + 0xcd, 0x00, 0xfe, 0x54, 0xad, 0x6f, 0x68, 0x75, 0x43, 0x14, 0x33, 0xc2, 0xcb, 0x75, 0x43, 0x5b, + 0xab, 0xaa, 0xe2, 0xb9, 0x49, 0xea, 0x21, 0x41, 0x5a, 0x53, 0xd4, 0xba, 0xbe, 0x29, 0x4a, 0x93, + 0x26, 0xac, 0xbf, 0xde, 0xce, 0x93, 0x80, 0x96, 0x68, 0x49, 0xb2, 0x29, 0x97, 0xf4, 0x32, 0xd2, + 0xea, 0x0a, 0x5d, 0x7b, 0xa8, 0x26, 0xd7, 0xe5, 0xb2, 0x5a, 0x53, 0xeb, 0xa6, 0x78, 0x81, 0x64, + 0x23, 0x59, 0xc4, 0xde, 0x24, 0x69, 0x5f, 0x36, 0x58, 0x92, 0xeb, 0x2e, 0x90, 0x28, 0x9e, 0x85, + 0x2e, 0xcd, 0x5b, 0x68, 0x82, 0x97, 0x01, 0x9a, 0xe6, 0x9f, 0x55, 0xb2, 0x71, 0x28, 0x48, 0x0f, + 0xc1, 0x6b, 0x19, 0x30, 0x22, 0x7b, 0xc6, 0xcb, 0x93, 0x2c, 0x86, 0xac, 0xff, 0xbe, 0x63, 0x52, + 0xd4, 0xba, 0xa9, 0x42, 0x71, 0x71, 0xd2, 0x94, 0x72, 0x73, 0x84, 0x6a, 0x43, 0xe7, 0x9e, 0x54, + 0x5c, 0xca, 0x68, 0x61, 0x32, 0x2c, 0xeb, 0x4a, 0x49, 0xbc, 0x22, 0x7d, 0x00, 0xde, 0x7b, 0x6e, + 0xc3, 0x8f, 0x8e, 0x68, 0x79, 0xe5, 0xaf, 0x67, 0x62, 0xfc, 0xa6, 0x11, 0xe0, 0x6e, 0x82, 0xdf, + 0x34, 0x4c, 0xb5, 0x31, 0xe2, 0x37, 0xe3, 0xc7, 0x48, 0x01, 0xe5, 0x4d, 0x43, 0x53, 0xc2, 0x20, + 0xc7, 0xdc, 0xa3, 0x20, 0x7d, 0x09, 0xbc, 0x33, 0x19, 0xde, 0x50, 0x4d, 0xae, 0x11, 0x12, 0x6f, + 0x50, 0x49, 0x5d, 0x97, 0x9b, 0x55, 0x13, 0xe9, 0x9b, 0x24, 0x56, 0xe5, 0xa5, 0x2f, 0x82, 0xb7, + 0x27, 0xe3, 0x37, 0x1b, 0x55, 0x5d, 0x66, 0x33, 0x42, 0xf3, 0x2a, 0xa3, 0x81, 0x6a, 0xaa, 0x29, + 0x13, 0x2b, 0x16, 0xa7, 0x48, 0xb2, 0x3a, 0x19, 0xdd, 0x54, 0x0d, 0x93, 0x46, 0x9c, 0x50, 0x70, + 0x92, 0x8f, 0x4d, 0x27, 0xac, 0x0f, 0x8a, 0xc7, 0x14, 0x0c, 0x65, 0xa4, 0x40, 0x55, 0x36, 0x55, + 0x14, 0x81, 0x13, 0x67, 0x26, 0x31, 0x1c, 0x45, 0x2c, 0x6b, 0xe1, 0x96, 0x4d, 0x3c, 0x91, 
0x8d, + 0x21, 0xfd, 0x22, 0x2f, 0xd9, 0x55, 0x18, 0x46, 0x05, 0x29, 0x2a, 0x24, 0x6e, 0x3c, 0x7e, 0x29, + 0xc4, 0x32, 0x84, 0x24, 0x81, 0x9b, 0x25, 0x29, 0x70, 0x12, 0x46, 0x6d, 0x53, 0x2b, 0x57, 0xe4, + 0x8d, 0x27, 0x46, 0x7f, 0x1a, 0x19, 0x0d, 0x71, 0x6e, 0xd2, 0xc0, 0x86, 0xb0, 0xb8, 0x84, 0xdc, + 0xc8, 0x41, 0x36, 0x6e, 0x86, 0x6a, 0x36, 0x1b, 0x68, 0x53, 0x87, 0x1b, 0xeb, 0x55, 0x7d, 0x53, + 0x3c, 0x95, 0xb0, 0x34, 0x46, 0xb0, 0x36, 0xd5, 0xaa, 0xa2, 0xd7, 0x54, 0xf1, 0xf4, 0xca, 0xcf, + 0x85, 0xd8, 0x77, 0xe3, 0xe1, 0x9b, 0xef, 0x44, 0xab, 0xa5, 0x99, 0x9c, 0xa2, 0x97, 0x46, 0x8f, + 0x9b, 0xe2, 0x3d, 0x57, 0x14, 0x7e, 0x90, 0x53, 0x66, 0x80, 0xed, 0xa7, 0x98, 0xf7, 0xc0, 0xad, + 0x14, 0xd8, 0x30, 0xe3, 0x4c, 0xa7, 0x3a, 0x48, 0x40, 0xff, 0x26, 0x0f, 0x0a, 0x49, 0xc7, 0xac, + 0x49, 0x84, 0x98, 0xa5, 0x8f, 0x0c, 0x3b, 0xde, 0xc3, 0x84, 0xb0, 0x03, 0xc5, 0x87, 0x3e, 0xa6, + 0xf4, 0x8c, 0x6c, 0x4d, 0x48, 0x26, 0xc0, 0xa2, 0xac, 0x90, 0x99, 0x42, 0x64, 0x13, 0x52, 0x95, + 0xd7, 0xd4, 0x2a, 0x6a, 0x68, 0xca, 0x06, 0x5d, 0xf0, 0x25, 0xf0, 0xc1, 0xf3, 0x52, 0x18, 0x93, + 0x63, 0x4a, 0xaa, 0x80, 0xd2, 0xf3, 0x52, 0xe9, 0xef, 0x45, 0x60, 0xb3, 0xaa, 0x86, 0xf2, 0x4c, + 0xaf, 0xfc, 0x64, 0x06, 0x2c, 0xc4, 0x9f, 0x13, 0x27, 0xcc, 0xe6, 0xba, 0xa6, 0x56, 0x47, 0xb3, + 0xcd, 0xf7, 0xc0, 0x93, 0x44, 0xc8, 0x31, 0xb5, 0x92, 0x20, 0x61, 0x90, 0x24, 0x7b, 0x0b, 0x35, + 0x61, 0x55, 0x14, 0x12, 0x96, 0x50, 0x02, 0xf6, 0x1a, 0x94, 0xeb, 0x4a, 0x45, 0xcc, 0x27, 0x2c, + 0xd8, 0x04, 0xac, 0xfe, 0xd2, 0x9b, 0x92, 0xde, 0x06, 0x6f, 0x66, 0xc7, 0x53, 0xeb, 0xcf, 0x34, + 0xa8, 0xd7, 0x69, 0x72, 0x30, 0x9d, 0x10, 0x64, 0x13, 0x87, 0xb9, 0x2e, 0xce, 0x24, 0xf8, 0xf7, + 0x44, 0x6e, 0xa6, 0x0a, 0x1b, 0x50, 0x33, 0x54, 0x64, 0x54, 0x9b, 0x65, 0xf1, 0x44, 0x82, 0xb5, + 0x64, 0x40, 0x37, 0x65, 0x53, 0x53, 0xd0, 0xd3, 0xcd, 0x0d, 0x43, 0x3c, 0x29, 0x3d, 0x01, 0x8f, + 0x33, 0x50, 0x19, 0xb5, 0x59, 0x92, 0x1e, 0x3f, 0x37, 0x66, 0x19, 0xea, 0xcd, 0x86, 0x21, 0xce, + 0x25, 0x38, 0xfc, 0x09, 0x98, 
0x64, 0x03, 0x45, 0x1c, 0x6a, 0xfc, 0x12, 0x9b, 0x80, 0x38, 0x64, + 0xd8, 0x86, 0x78, 0x6a, 0xe5, 0x87, 0x71, 0x1f, 0x36, 0x09, 0xdf, 0xbd, 0x27, 0x64, 0x62, 0xd4, + 0xe5, 0xc4, 0xfc, 0x9f, 0x81, 0x78, 0xf7, 0x32, 0x80, 0x36, 0x55, 0x08, 0xe5, 0x75, 0x1d, 0xd6, + 0x12, 0xed, 0x78, 0x00, 0x3b, 0x92, 0xba, 0x6c, 0xc9, 0xb5, 0xaa, 0x98, 0x5f, 0x79, 0x1f, 0x9c, + 0xe4, 0xb5, 0x56, 0x64, 0xe3, 0xba, 0xae, 0xca, 0x26, 0x89, 0x2f, 0x63, 0x9b, 0xfe, 0xb0, 0x63, + 0x74, 0x1b, 0x2c, 0xac, 0xfc, 0x8e, 0x00, 0x96, 0x26, 0xbc, 0x84, 0x26, 0xb1, 0x26, 0x44, 0x86, + 0xaa, 0xa2, 0xd7, 0x6a, 0x6a, 0xbd, 0xc4, 0x24, 0x8c, 0x3d, 0x60, 0x58, 0x01, 0x77, 0x26, 0x83, + 0xd7, 0x75, 0x93, 0xc1, 0x0a, 0x24, 0x69, 0x9f, 0x0c, 0x5b, 0xd2, 0xeb, 0xaa, 0x98, 0x5f, 0xfb, + 0xea, 0x9f, 0xff, 0xf4, 0xaa, 0xf0, 0x93, 0x9f, 0x5e, 0x15, 0xfe, 0xee, 0xa7, 0x57, 0x85, 0x8f, + 0xf4, 0x5d, 0x27, 0xd8, 0xeb, 0x6d, 0xaf, 0xb6, 0xdc, 0x83, 0x07, 0xbb, 0x9e, 0x75, 0xe8, 0xb0, + 0x5a, 0x5c, 0xab, 0xfd, 0x60, 0xf0, 0xaf, 0xcf, 0xba, 0xce, 0x83, 0x5d, 0xdc, 0x79, 0x40, 0x5f, + 0xad, 0x3f, 0xd8, 0x75, 0x47, 0xfe, 0x9f, 0xda, 0xbb, 0x91, 0x9f, 0x87, 0x0f, 0xb7, 0x4f, 0x50, + 0xb0, 0x37, 0xfe, 0x23, 0x00, 0x00, 0xff, 0xff, 0xc7, 0xc7, 0x14, 0xee, 0x7f, 0x6d, 0x00, 0x00, } func (m *UIBannerClickEvent) Marshal() (dAtA []byte, err error) { @@ -9078,7 +9599,7 @@ func (m *UIIntegrationEnrollStepEvent) MarshalToSizedBuffer(dAtA []byte) (int, e return len(dAtA) - i, nil } -func (m *ResourceCreateEvent) Marshal() (dAtA []byte, err error) { +func (m *UIIntegrationEnrollSectionOpenEvent) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -9088,12 +9609,12 @@ func (m *ResourceCreateEvent) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *ResourceCreateEvent) MarshalTo(dAtA []byte) (int, error) { +func (m *UIIntegrationEnrollSectionOpenEvent) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return 
m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *ResourceCreateEvent) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *UIIntegrationEnrollSectionOpenEvent) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -9102,9 +9623,19 @@ func (m *ResourceCreateEvent) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.Database != nil { + if m.Section != 0 { + i = encodeVarintUsageevents(dAtA, i, uint64(m.Section)) + i-- + dAtA[i] = 0x18 + } + if m.Step != 0 { + i = encodeVarintUsageevents(dAtA, i, uint64(m.Step)) + i-- + dAtA[i] = 0x10 + } + if m.Metadata != nil { { - size, err := m.Database.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -9112,7 +9643,195 @@ func (m *ResourceCreateEvent) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintUsageevents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x22 + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + +func (m *UIIntegrationEnrollFieldCompleteEvent) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *UIIntegrationEnrollFieldCompleteEvent) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *UIIntegrationEnrollFieldCompleteEvent) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.Field != 0 { + i = encodeVarintUsageevents(dAtA, i, uint64(m.Field)) + i-- + dAtA[i] = 0x18 + } + if m.Step != 0 { + i = encodeVarintUsageevents(dAtA, i, uint64(m.Step)) + i-- + dAtA[i] = 0x10 + } + if m.Metadata != nil { + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if 
err != nil { + return 0, err + } + i -= size + i = encodeVarintUsageevents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + +func (m *UIIntegrationEnrollCodeCopyEvent) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *UIIntegrationEnrollCodeCopyEvent) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *UIIntegrationEnrollCodeCopyEvent) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.Type != 0 { + i = encodeVarintUsageevents(dAtA, i, uint64(m.Type)) + i-- + dAtA[i] = 0x18 + } + if m.Step != 0 { + i = encodeVarintUsageevents(dAtA, i, uint64(m.Step)) + i-- + dAtA[i] = 0x10 + } + if m.Metadata != nil { + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintUsageevents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + +func (m *UIIntegrationEnrollLinkClickEvent) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *UIIntegrationEnrollLinkClickEvent) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *UIIntegrationEnrollLinkClickEvent) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.Link) > 0 { + i -= len(m.Link) + copy(dAtA[i:], m.Link) + i = encodeVarintUsageevents(dAtA, i, 
uint64(len(m.Link))) + i-- + dAtA[i] = 0x1a + } + if m.Step != 0 { + i = encodeVarintUsageevents(dAtA, i, uint64(m.Step)) + i-- + dAtA[i] = 0x10 + } + if m.Metadata != nil { + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintUsageevents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + +func (m *ResourceCreateEvent) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ResourceCreateEvent) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ResourceCreateEvent) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.Database != nil { + { + size, err := m.Database.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintUsageevents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 } if len(m.CloudProvider) > 0 { i -= len(m.CloudProvider) @@ -10914,6 +11633,98 @@ func (m *UsageEventOneOf_UiIntegrationEnrollStepEvent) MarshalToSizedBuffer(dAtA } return len(dAtA) - i, nil } +func (m *UsageEventOneOf_UiIntegrationEnrollSectionOpenEvent) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *UsageEventOneOf_UiIntegrationEnrollSectionOpenEvent) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.UiIntegrationEnrollSectionOpenEvent != nil { + { + size, err := m.UiIntegrationEnrollSectionOpenEvent.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintUsageevents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x3 + i-- + dAtA[i] = 
0xf2 + } + return len(dAtA) - i, nil +} +func (m *UsageEventOneOf_UiIntegrationEnrollFieldCompleteEvent) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *UsageEventOneOf_UiIntegrationEnrollFieldCompleteEvent) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.UiIntegrationEnrollFieldCompleteEvent != nil { + { + size, err := m.UiIntegrationEnrollFieldCompleteEvent.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintUsageevents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x3 + i-- + dAtA[i] = 0xfa + } + return len(dAtA) - i, nil +} +func (m *UsageEventOneOf_UiIntegrationEnrollCodeCopyEvent) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *UsageEventOneOf_UiIntegrationEnrollCodeCopyEvent) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.UiIntegrationEnrollCodeCopyEvent != nil { + { + size, err := m.UiIntegrationEnrollCodeCopyEvent.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintUsageevents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x4 + i-- + dAtA[i] = 0x82 + } + return len(dAtA) - i, nil +} +func (m *UsageEventOneOf_UiIntegrationEnrollLinkClickEvent) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *UsageEventOneOf_UiIntegrationEnrollLinkClickEvent) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.UiIntegrationEnrollLinkClickEvent != nil { + { + size, err := m.UiIntegrationEnrollLinkClickEvent.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintUsageevents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x4 + i-- + dAtA[i] = 0x8a + } + return len(dAtA) - i, nil +} func encodeVarintUsageevents(dAtA []byte, offset int, v uint64) int { offset -= 
sovUsageevents(v) base := offset @@ -12143,19 +12954,108 @@ func (m *UIIntegrationEnrollStepEvent) Size() (n int) { return n } -func (m *ResourceCreateEvent) Size() (n int) { +func (m *UIIntegrationEnrollSectionOpenEvent) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.ResourceType) - if l > 0 { + if m.Metadata != nil { + l = m.Metadata.Size() n += 1 + l + sovUsageevents(uint64(l)) } - l = len(m.ResourceOrigin) - if l > 0 { - n += 1 + l + sovUsageevents(uint64(l)) + if m.Step != 0 { + n += 1 + sovUsageevents(uint64(m.Step)) + } + if m.Section != 0 { + n += 1 + sovUsageevents(uint64(m.Section)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *UIIntegrationEnrollFieldCompleteEvent) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.Metadata != nil { + l = m.Metadata.Size() + n += 1 + l + sovUsageevents(uint64(l)) + } + if m.Step != 0 { + n += 1 + sovUsageevents(uint64(m.Step)) + } + if m.Field != 0 { + n += 1 + sovUsageevents(uint64(m.Field)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *UIIntegrationEnrollCodeCopyEvent) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.Metadata != nil { + l = m.Metadata.Size() + n += 1 + l + sovUsageevents(uint64(l)) + } + if m.Step != 0 { + n += 1 + sovUsageevents(uint64(m.Step)) + } + if m.Type != 0 { + n += 1 + sovUsageevents(uint64(m.Type)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *UIIntegrationEnrollLinkClickEvent) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.Metadata != nil { + l = m.Metadata.Size() + n += 1 + l + sovUsageevents(uint64(l)) + } + if m.Step != 0 { + n += 1 + sovUsageevents(uint64(m.Step)) + } + l = len(m.Link) + if l > 0 { + n += 1 + l + sovUsageevents(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m 
*ResourceCreateEvent) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.ResourceType) + if l > 0 { + n += 1 + l + sovUsageevents(uint64(l)) + } + l = len(m.ResourceOrigin) + if l > 0 { + n += 1 + l + sovUsageevents(uint64(l)) } l = len(m.CloudProvider) if l > 0 { @@ -13100,6 +14000,54 @@ func (m *UsageEventOneOf_UiIntegrationEnrollStepEvent) Size() (n int) { } return n } +func (m *UsageEventOneOf_UiIntegrationEnrollSectionOpenEvent) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.UiIntegrationEnrollSectionOpenEvent != nil { + l = m.UiIntegrationEnrollSectionOpenEvent.Size() + n += 2 + l + sovUsageevents(uint64(l)) + } + return n +} +func (m *UsageEventOneOf_UiIntegrationEnrollFieldCompleteEvent) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.UiIntegrationEnrollFieldCompleteEvent != nil { + l = m.UiIntegrationEnrollFieldCompleteEvent.Size() + n += 2 + l + sovUsageevents(uint64(l)) + } + return n +} +func (m *UsageEventOneOf_UiIntegrationEnrollCodeCopyEvent) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.UiIntegrationEnrollCodeCopyEvent != nil { + l = m.UiIntegrationEnrollCodeCopyEvent.Size() + n += 2 + l + sovUsageevents(uint64(l)) + } + return n +} +func (m *UsageEventOneOf_UiIntegrationEnrollLinkClickEvent) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.UiIntegrationEnrollLinkClickEvent != nil { + l = m.UiIntegrationEnrollLinkClickEvent.Size() + n += 2 + l + sovUsageevents(uint64(l)) + } + return n +} func sovUsageevents(x uint64) (n int) { return (math_bits.Len64(x|1) + 6) / 7 @@ -19947,7 +20895,463 @@ func (m *UIIntegrationEnrollStartEvent) Unmarshal(dAtA []byte) error { if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - iNdEx = postIndex + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipUsageevents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || 
(iNdEx+skippy) < 0 { + return ErrInvalidLengthUsageevents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *UIIntegrationEnrollCompleteEvent) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowUsageevents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: UIIntegrationEnrollCompleteEvent: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: UIIntegrationEnrollCompleteEvent: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowUsageevents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthUsageevents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthUsageevents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Metadata == nil { + m.Metadata = &IntegrationEnrollMetadata{} + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipUsageevents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthUsageevents + } + if 
(iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *IntegrationEnrollStepStatus) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowUsageevents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: IntegrationEnrollStepStatus: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: IntegrationEnrollStepStatus: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Code", wireType) + } + m.Code = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowUsageevents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Code |= IntegrationEnrollStatusCode(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Error", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowUsageevents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthUsageevents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthUsageevents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Error = 
string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipUsageevents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthUsageevents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *UIIntegrationEnrollStepEvent) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowUsageevents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: UIIntegrationEnrollStepEvent: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: UIIntegrationEnrollStepEvent: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowUsageevents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthUsageevents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthUsageevents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Metadata == nil { + m.Metadata = &IntegrationEnrollMetadata{} + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 0 { + 
return fmt.Errorf("proto: wrong wireType = %d for field Step", wireType) + } + m.Step = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowUsageevents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Step |= IntegrationEnrollStep(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowUsageevents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthUsageevents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthUsageevents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Status == nil { + m.Status = &IntegrationEnrollStepStatus{} + } + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipUsageevents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthUsageevents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *UIIntegrationEnrollSectionOpenEvent) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowUsageevents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: UIIntegrationEnrollSectionOpenEvent: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: UIIntegrationEnrollSectionOpenEvent: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowUsageevents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthUsageevents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthUsageevents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Metadata == nil { + m.Metadata = &IntegrationEnrollMetadata{} + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Step", wireType) + } + m.Step = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowUsageevents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Step |= IntegrationEnrollStep(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 3: + 
if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Section", wireType) + } + m.Section = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowUsageevents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Section |= IntegrationEnrollSection(b&0x7F) << shift + if b < 0x80 { + break + } + } default: iNdEx = preIndex skippy, err := skipUsageevents(dAtA[iNdEx:]) @@ -19970,7 +21374,7 @@ func (m *UIIntegrationEnrollStartEvent) Unmarshal(dAtA []byte) error { } return nil } -func (m *UIIntegrationEnrollCompleteEvent) Unmarshal(dAtA []byte) error { +func (m *UIIntegrationEnrollFieldCompleteEvent) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -19993,10 +21397,10 @@ func (m *UIIntegrationEnrollCompleteEvent) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UIIntegrationEnrollCompleteEvent: wiretype end group for non-group") + return fmt.Errorf("proto: UIIntegrationEnrollFieldCompleteEvent: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UIIntegrationEnrollCompleteEvent: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UIIntegrationEnrollFieldCompleteEvent: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -20035,6 +21439,44 @@ func (m *UIIntegrationEnrollCompleteEvent) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Step", wireType) + } + m.Step = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowUsageevents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Step |= IntegrationEnrollStep(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = 
%d for field Field", wireType) + } + m.Field = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowUsageevents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Field |= IntegrationEnrollField(b&0x7F) << shift + if b < 0x80 { + break + } + } default: iNdEx = preIndex skippy, err := skipUsageevents(dAtA[iNdEx:]) @@ -20057,7 +21499,7 @@ func (m *UIIntegrationEnrollCompleteEvent) Unmarshal(dAtA []byte) error { } return nil } -func (m *IntegrationEnrollStepStatus) Unmarshal(dAtA []byte) error { +func (m *UIIntegrationEnrollCodeCopyEvent) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -20080,17 +21522,17 @@ func (m *IntegrationEnrollStepStatus) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: IntegrationEnrollStepStatus: wiretype end group for non-group") + return fmt.Errorf("proto: UIIntegrationEnrollCodeCopyEvent: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: IntegrationEnrollStepStatus: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UIIntegrationEnrollCodeCopyEvent: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Code", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - m.Code = 0 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowUsageevents @@ -20100,16 +21542,33 @@ func (m *IntegrationEnrollStepStatus) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.Code |= IntegrationEnrollStatusCode(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } + if msglen < 0 { + return ErrInvalidLengthUsageevents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return 
ErrInvalidLengthUsageevents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Metadata == nil { + m.Metadata = &IntegrationEnrollMetadata{} + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Error", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Step", wireType) } - var stringLen uint64 + m.Step = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowUsageevents @@ -20119,24 +21578,30 @@ func (m *IntegrationEnrollStepStatus) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + m.Step |= IntegrationEnrollStep(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthUsageevents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthUsageevents + case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType) } - if postIndex > l { - return io.ErrUnexpectedEOF + m.Type = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowUsageevents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Type |= IntegrationEnrollCodeType(b&0x7F) << shift + if b < 0x80 { + break + } } - m.Error = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipUsageevents(dAtA[iNdEx:]) @@ -20159,7 +21624,7 @@ func (m *IntegrationEnrollStepStatus) Unmarshal(dAtA []byte) error { } return nil } -func (m *UIIntegrationEnrollStepEvent) Unmarshal(dAtA []byte) error { +func (m *UIIntegrationEnrollLinkClickEvent) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -20182,10 +21647,10 @@ func (m *UIIntegrationEnrollStepEvent) Unmarshal(dAtA []byte) error { fieldNum := 
int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UIIntegrationEnrollStepEvent: wiretype end group for non-group") + return fmt.Errorf("proto: UIIntegrationEnrollLinkClickEvent: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UIIntegrationEnrollStepEvent: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UIIntegrationEnrollLinkClickEvent: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -20245,9 +21710,9 @@ func (m *UIIntegrationEnrollStepEvent) Unmarshal(dAtA []byte) error { } case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Link", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowUsageevents @@ -20257,27 +21722,23 @@ func (m *UIIntegrationEnrollStepEvent) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthUsageevents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthUsageevents } if postIndex > l { return io.ErrUnexpectedEOF } - if m.Status == nil { - m.Status = &IntegrationEnrollStepStatus{} - } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Link = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -23716,6 +25177,146 @@ func (m *UsageEventOneOf) Unmarshal(dAtA []byte) error { } m.Event = &UsageEventOneOf_UiIntegrationEnrollStepEvent{v} iNdEx = postIndex + case 62: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UiIntegrationEnrollSectionOpenEvent", wireType) + } + var msglen int + for shift 
:= uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowUsageevents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthUsageevents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthUsageevents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + v := &UIIntegrationEnrollSectionOpenEvent{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &UsageEventOneOf_UiIntegrationEnrollSectionOpenEvent{v} + iNdEx = postIndex + case 63: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UiIntegrationEnrollFieldCompleteEvent", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowUsageevents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthUsageevents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthUsageevents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + v := &UIIntegrationEnrollFieldCompleteEvent{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &UsageEventOneOf_UiIntegrationEnrollFieldCompleteEvent{v} + iNdEx = postIndex + case 64: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UiIntegrationEnrollCodeCopyEvent", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowUsageevents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthUsageevents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return 
ErrInvalidLengthUsageevents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + v := &UIIntegrationEnrollCodeCopyEvent{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &UsageEventOneOf_UiIntegrationEnrollCodeCopyEvent{v} + iNdEx = postIndex + case 65: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UiIntegrationEnrollLinkClickEvent", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowUsageevents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthUsageevents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthUsageevents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + v := &UIIntegrationEnrollLinkClickEvent{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &UsageEventOneOf_UiIntegrationEnrollLinkClickEvent{v} + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipUsageevents(dAtA[iNdEx:]) diff --git a/api/gen/proto/go/userpreferences/v1/access_graph.pb.go b/api/gen/proto/go/userpreferences/v1/access_graph.pb.go index 73af932ecc25c..9f56b2051797e 100644 --- a/api/gen/proto/go/userpreferences/v1/access_graph.pb.go +++ b/api/gen/proto/go/userpreferences/v1/access_graph.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/userpreferences/v1/access_graph.proto diff --git a/api/gen/proto/go/userpreferences/v1/assist.pb.go b/api/gen/proto/go/userpreferences/v1/assist.pb.go index c91b484df82f7..a4e6e5bdb9dd5 100644 --- a/api/gen/proto/go/userpreferences/v1/assist.pb.go +++ b/api/gen/proto/go/userpreferences/v1/assist.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. 
DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/userpreferences/v1/assist.proto diff --git a/api/gen/proto/go/userpreferences/v1/cluster_preferences.pb.go b/api/gen/proto/go/userpreferences/v1/cluster_preferences.pb.go index 23d25b7be0723..8f0b69e2873b3 100644 --- a/api/gen/proto/go/userpreferences/v1/cluster_preferences.pb.go +++ b/api/gen/proto/go/userpreferences/v1/cluster_preferences.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/userpreferences/v1/cluster_preferences.proto diff --git a/api/gen/proto/go/userpreferences/v1/discover_resource_preferences.pb.go b/api/gen/proto/go/userpreferences/v1/discover_resource_preferences.pb.go index 69e65929f824e..03c1ba41385a3 100644 --- a/api/gen/proto/go/userpreferences/v1/discover_resource_preferences.pb.go +++ b/api/gen/proto/go/userpreferences/v1/discover_resource_preferences.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/userpreferences/v1/discover_resource_preferences.proto diff --git a/api/gen/proto/go/userpreferences/v1/onboard.pb.go b/api/gen/proto/go/userpreferences/v1/onboard.pb.go index acecb297e5897..a56fdc798c38d 100644 --- a/api/gen/proto/go/userpreferences/v1/onboard.pb.go +++ b/api/gen/proto/go/userpreferences/v1/onboard.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/userpreferences/v1/onboard.proto diff --git a/api/gen/proto/go/userpreferences/v1/sidenav_preferences.pb.go b/api/gen/proto/go/userpreferences/v1/sidenav_preferences.pb.go index c5dbfab377bd0..e99aa261be554 100644 --- a/api/gen/proto/go/userpreferences/v1/sidenav_preferences.pb.go +++ b/api/gen/proto/go/userpreferences/v1/sidenav_preferences.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/userpreferences/v1/sidenav_preferences.proto diff --git a/api/gen/proto/go/userpreferences/v1/theme.pb.go b/api/gen/proto/go/userpreferences/v1/theme.pb.go index edcf804184ba6..d43dfbf2c2618 100644 --- a/api/gen/proto/go/userpreferences/v1/theme.pb.go +++ b/api/gen/proto/go/userpreferences/v1/theme.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/userpreferences/v1/theme.proto diff --git a/api/gen/proto/go/userpreferences/v1/unified_resource_preferences.pb.go b/api/gen/proto/go/userpreferences/v1/unified_resource_preferences.pb.go index 9669a85a4aafc..b8f55f0c28024 100644 --- a/api/gen/proto/go/userpreferences/v1/unified_resource_preferences.pb.go +++ b/api/gen/proto/go/userpreferences/v1/unified_resource_preferences.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/userpreferences/v1/unified_resource_preferences.proto diff --git a/api/gen/proto/go/userpreferences/v1/userpreferences.pb.go b/api/gen/proto/go/userpreferences/v1/userpreferences.pb.go index fddd1612c4c9a..2fe3eaa2f4a24 100644 --- a/api/gen/proto/go/userpreferences/v1/userpreferences.pb.go +++ b/api/gen/proto/go/userpreferences/v1/userpreferences.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.6 +// protoc-gen-go v1.36.8 // protoc (unknown) // source: teleport/userpreferences/v1/userpreferences.proto diff --git a/api/go.mod b/api/go.mod index ead3a9dc3aaf7..f2e40d4deddd5 100644 --- a/api/go.mod +++ b/api/go.mod @@ -1,13 +1,11 @@ module github.com/gravitational/teleport/api -go 1.23.7 - -toolchain go1.24.1 +go 1.24.11 require ( github.com/charlievieth/strcase v0.0.5 github.com/coreos/go-semver v0.3.1 - github.com/go-piv/piv-go v1.11.0 + github.com/go-piv/piv-go/v2 v2.3.0 github.com/gobwas/ws v1.4.0 github.com/gogo/protobuf v1.3.2 github.com/google/go-cmp v0.7.0 @@ -25,9 +23,10 @@ require ( go.opentelemetry.io/otel/sdk v1.35.0 go.opentelemetry.io/otel/trace v1.35.0 go.opentelemetry.io/proto/otlp v1.6.0 - golang.org/x/crypto v0.37.0 - golang.org/x/net v0.39.0 - golang.org/x/term v0.31.0 + golang.org/x/crypto v0.45.0 + golang.org/x/net v0.47.0 + golang.org/x/sync v0.18.0 + golang.org/x/term v0.37.0 google.golang.org/genproto/googleapis/rpc v0.0.0-20250428153025-10db94c68c34 google.golang.org/grpc v1.72.1 google.golang.org/protobuf v1.36.6 @@ -48,8 +47,8 @@ require ( github.com/pmezard/go-difflib v1.0.0 // indirect github.com/russellhaering/goxmldsig v1.5.0 // indirect go.opentelemetry.io/auto/sdk v1.1.0 // indirect - golang.org/x/sys v0.32.0 // indirect - golang.org/x/text v0.24.0 // indirect + golang.org/x/sys v0.38.0 // indirect + golang.org/x/text v0.31.0 // indirect 
google.golang.org/genproto/googleapis/api v0.0.0-20250428153025-10db94c68c34 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect ) diff --git a/api/go.sum b/api/go.sum index 2da62fd598e21..ab48e490cf7c5 100644 --- a/api/go.sum +++ b/api/go.sum @@ -16,8 +16,8 @@ github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY= github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag= github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE= -github.com/go-piv/piv-go v1.11.0 h1:5vAaCdRTFSIW4PeqMbnsDlUZ7odMYWnHBDGdmtU/Zhg= -github.com/go-piv/piv-go v1.11.0/go.mod h1:NZ2zmjVkfFaL/CF8cVQ/pXdXtuj110zEKGdJM6fJZZM= +github.com/go-piv/piv-go/v2 v2.3.0 h1:kKkrYlgLQTMPA6BiSL25A7/x4CEh2YCG7rtb/aTkx+g= +github.com/go-piv/piv-go/v2 v2.3.0/go.mod h1:ShZi74nnrWNQEdWzRUd/3cSig3uNOcEZp+EWl0oewnI= github.com/gobwas/httphead v0.1.0 h1:exrUm0f4YX0L7EBwZHuCF4GDp8aJfVeBrlLQrs6NqWU= github.com/gobwas/httphead v0.1.0/go.mod h1:O/RXo79gxV8G+RqlR/otEwx4Q36zl9rqC5u12GKvMCM= github.com/gobwas/pool v0.2.1 h1:xfeeEhW7pwmX8nuLVlqbzVc7udMDrwetjEv+TZIz1og= @@ -87,31 +87,33 @@ go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= -golang.org/x/crypto v0.37.0 h1:kJNSjF/Xp7kU0iB2Z+9viTPMW4EqqsrywMXLJOOsXSE= -golang.org/x/crypto v0.37.0/go.mod h1:vg+k43peMZ0pUMhYmVAWysMK35e6ioLh3wB8ZCAfbVc= +golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q= +golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4= golang.org/x/mod v0.2.0/go.mod 
h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= -golang.org/x/net v0.39.0 h1:ZCu7HMWDxpXpaiKdhzIfaltL9Lp31x/3fCP11bc6/fY= -golang.org/x/net v0.39.0/go.mod h1:X7NRbYVEA+ewNkCNyJ513WmMdQ3BineSwVtN2zD/d+E= +golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY= +golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I= +golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.32.0 h1:s77OFDvIQeibCmezSnk/q6iAfkdiQaJi4VzroCFrN20= -golang.org/x/sys v0.32.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k= -golang.org/x/term v0.31.0 h1:erwDkOK1Msy6offm1mOgvspSkslFnIGsFnxOKoufg3o= -golang.org/x/term 
v0.31.0/go.mod h1:R4BeIy7D95HzImkxGkTW1UQTtP54tio2RyHz7PwK0aw= +golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc= +golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= +golang.org/x/term v0.37.0 h1:8EGAD0qCmHYZg6J17DvsMy9/wJ7/D/4pV/wfnld5lTU= +golang.org/x/term v0.37.0/go.mod h1:5pB4lxRNYYVZuTLmy8oR2BH8dflOR+IbTYFD8fi3254= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= -golang.org/x/text v0.24.0 h1:dd5Bzh4yt5KYA8f9CJHCP4FB4D51c2c6JvN37xJJkJ0= -golang.org/x/text v0.24.0/go.mod h1:L8rBsPeo2pSS+xqN0d5u2ikmjtmoJbDBT1b7nHvFCdU= +golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM= +golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= diff --git a/api/mfa/mfa.go b/api/mfa/mfa.go index 98e34b4651fae..368435fb169d1 100644 --- a/api/mfa/mfa.go +++ b/api/mfa/mfa.go @@ -21,7 +21,7 @@ import ( "context" "encoding/base64" - "github.com/gogo/protobuf/jsonpb" + "github.com/gogo/protobuf/jsonpb" //nolint:depguard // needed for backwards compatibility "github.com/gravitational/trace" "google.golang.org/grpc" "google.golang.org/grpc/credentials" diff --git a/api/observability/tracing/ssh/client.go b/api/observability/tracing/ssh/client.go index dd0a20c1d3217..28f0f8d014726 100644 --- a/api/observability/tracing/ssh/client.go +++ b/api/observability/tracing/ssh/client.go @@ -22,6 +22,7 @@ import ( "sync" "sync/atomic" + "github.com/google/uuid" "github.com/gravitational/trace" "go.opentelemetry.io/otel/attribute" semconv 
"go.opentelemetry.io/otel/semconv/v1.10.0" @@ -29,6 +30,7 @@ import ( "golang.org/x/crypto/ssh" "github.com/gravitational/teleport/api/observability/tracing" + "github.com/gravitational/teleport/api/types" ) // Client is a wrapper around ssh.Client that adds tracing support. @@ -36,6 +38,9 @@ type Client struct { *ssh.Client opts []tracing.Option capability tracingCapability + + requestHandlersMu sync.Mutex + requestHandlers map[string]RequestHandlerFn } type tracingCapability int @@ -46,6 +51,24 @@ const ( tracingSupported ) +// NewClientWithTimeout establishes a new client connection, honoring the earliest of the following: +// +// - The context's deadline or cancellation +// - The timeout specified in the config +// - A default timeout of 30 seconds if config doesn't specify a timeout +// +// - If config.Timeout > 0: the specified timeout is applied in addition to any context deadline. +// - If config.Timeout == 0: a default timeout of 30 seconds is used to prevent indefinite hangs. +// - If config.Timeout < 0: only the context's deadline or cancellation is respected if any. +func NewClientWithTimeout(ctx context.Context, conn net.Conn, addr string, config *ssh.ClientConfig, opts ...tracing.Option) (*Client, error) { + c, chans, reqs, err := NewClientConnWithTimeout(ctx, conn, addr, config, opts...) + if err != nil { + return nil, err + } + + return NewClient(c, chans, reqs, opts...), nil +} + // NewClient creates a new Client. // // The server being connected to is probed to determine if it supports @@ -56,9 +79,10 @@ const ( // of whether they should provide tracing context. 
 func NewClient(c ssh.Conn, chans <-chan ssh.NewChannel, reqs <-chan *ssh.Request, opts ...tracing.Option) *Client {
 	clt := &Client{
-		Client:     ssh.NewClient(c, chans, reqs),
-		opts:       opts,
-		capability: tracingUnsupported,
+		Client:          ssh.NewClient(c, chans, reqs),
+		opts:            opts,
+		capability:      tracingUnsupported,
+		requestHandlers: map[string]RequestHandlerFn{},
 	}
 
 	if bytes.HasPrefix(clt.ServerVersion(), []byte("SSH-2.0-Teleport")) {
@@ -89,7 +113,7 @@ func (c *Client) DialContext(ctx context.Context, n, addr string) (net.Conn, err
 	)
 	defer span.End()
 
-	// create the wrapper while the lock is held
+	// create a new wrapper to propagate tracing span context.
 	wrapper := &clientWrapper{
 		capability: c.capability,
 		Conn:       c.Client.Conn,
@@ -162,21 +186,65 @@ func (c *Client) OpenChannel(
 	}, reqs, err
 }
 
-// NewSession creates a new SSH session that is passed tracing context
-// so that spans may be correlated properly over the ssh connection.
-func (c *Client) NewSession(ctx context.Context) (*Session, error) {
-	return c.newSession(ctx, nil)
+// SessionParams are session parameters supported by Teleport to provide additional
+// session context or parameters to the server.
+type SessionParams struct {
+	// WebProxyAddr is the address of the proxy forwarding the SSH connection to the target server.
+	WebProxyAddr string
+	// Reason is a reason attached to started sessions meant to describe their intent.
+	Reason string
+	// Invited is a list of people invited to a session.
+	Invited []string
+	// DisplayParticipantRequirements is set if debug information about participants' requirements
+	// should be printed in moderated sessions.
+	DisplayParticipantRequirements bool
+	// JoinSessionID is the ID of a session to join.
+	JoinSessionID string
+	// JoinMode is the participant mode to join the session with.
+	// Required if JoinSessionID is set.
+	JoinMode types.SessionParticipantMode
+	// ModeratedSessionID is an optional parameter sent during SCP requests to specify which moderated session
+	// to check for valid FileTransferRequests.
+	ModeratedSessionID string
+}
+
+// ParseSessionParams unmarshals session parameters which have been [ssh.Marshal]ed by the client
+// and provided as extra data in the session channel request. If the provided data is empty, nil params
+// will be returned with a nil error.
+func ParseSessionParams(data []byte) (*SessionParams, error) {
+	if len(data) == 0 {
+		return nil, nil
+	}
+
+	var params SessionParams
+	if err := ssh.Unmarshal(data, &params); err != nil {
+		return nil, trace.Wrap(err)
+	}
+
+	if params.JoinSessionID != "" {
+		if _, err := uuid.Parse(params.JoinSessionID); err != nil {
+			return nil, trace.Wrap(err, "failed to parse join session ID: %v", params.JoinSessionID)
+		}
+
+		switch params.JoinMode {
+		case types.SessionModeratorMode, types.SessionObserverMode, types.SessionPeerMode:
+		default:
+			return nil, trace.BadParameter("Unrecognized session participant mode: %q", params.JoinMode)
+		}
+	}
+
+	return &params, nil
 }
 
-// NewSessionWithRequestCallback creates a new SSH session that is passed
-// tracing context so that spans may be correlated properly over the ssh
-// connection. The handling of channel requests from the underlying SSH
-// session can be controlled with chanReqCallback.
-func (c *Client) NewSessionWithRequestCallback(ctx context.Context, chanReqCallback ChannelRequestCallback) (*Session, error) {
-	return c.newSession(ctx, chanReqCallback)
+// NewSession creates a new SSH session. This session is passed a tracing context so that
+// spans may be correlated properly over the ssh connection.
+func (c *Client) NewSession(ctx context.Context) (*Session, error) { + return c.NewSessionWithParams(ctx, nil) } -func (c *Client) newSession(ctx context.Context, chanReqCallback ChannelRequestCallback) (*Session, error) { +// NewSessionWithParams creates a new SSH session with the given (optional) params. This session is +// passed a tracing context so that spans may be correlated properly over the ssh connection. +func (c *Client) NewSessionWithParams(ctx context.Context, sessionParams *SessionParams) (*Session, error) { tracer := tracing.NewConfig(c.opts).TracerProvider.Tracer(instrumentationName) ctx, span := tracer.Start( @@ -194,7 +262,7 @@ func (c *Client) newSession(ctx context.Context, chanReqCallback ChannelRequestC ) defer span.End() - // create the wrapper while the lock is still held + // create a new wrapper to propagate tracing span context. wrapper := &clientWrapper{ capability: c.capability, Conn: c.Client.Conn, @@ -203,9 +271,99 @@ func (c *Client) newSession(ctx context.Context, chanReqCallback ChannelRequestC contexts: make(map[string][]context.Context), } - // get a session from the wrapper - session, err := wrapper.NewSession(chanReqCallback) - return session, trace.Wrap(err) + // If we are connected to a Teleport server, send session params in the session request. + // If the server does not support session parameters in the extra data, it will be ignored. 
+ var sessionData []byte + if sessionParams != nil && c.capability == tracingSupported { + sessionData = ssh.Marshal(sessionParams) + } + + // open a session manually so we can take ownership of the + // requests chan + ch, reqs, err := wrapper.OpenChannel("session", sessionData) + if err != nil { + return nil, trace.Wrap(err) + } + + unhandledReqs := c.serveSessionRequests(ctx, reqs) + session, err := newCryptoSSHSession(ch, unhandledReqs) + if err != nil { + _ = ch.Close() + return nil, trace.Wrap(err) + } + + // wrap the session so all session requests on the channel + // can be traced + return &Session{ + Session: session, + wrapper: wrapper, + }, nil +} + +// RequestHandlerFn is an ssh request handler function. +type RequestHandlerFn func(ctx context.Context, req *ssh.Request) + +// HandleSessionRequest registers a handler for any incoming [ssh.Request] matching the +// provided type within a session. If the type is already being handled, an error is returned. +// All registered handlers are consumed by the next call to [Client.NewSession]. +func (c *Client) HandleSessionRequest(ctx context.Context, requestType string, handlerFn RequestHandlerFn) error { + c.requestHandlersMu.Lock() + defer c.requestHandlersMu.Unlock() + + if _, ok := c.requestHandlers[requestType]; ok { + return trace.AlreadyExists("ssh request type %q is already being handled for this session", requestType) + } + + c.requestHandlers[requestType] = handlerFn + return nil +} + +// serveSessionRequests from the remote side with registered handlers. +// +// This method consumes all registered handlers so that the next call to +// [Client.NewSession] will not reuse the same handlers. 
+func (c *Client) serveSessionRequests(ctx context.Context, in <-chan *ssh.Request) <-chan *ssh.Request { + c.requestHandlersMu.Lock() + requestHandlers := c.requestHandlers + c.requestHandlers = make(map[string]RequestHandlerFn) + c.requestHandlersMu.Unlock() + + // Capture requests not handled by registered request handlers and + // pass them to the crypto [ssh.Session]. + unhandledReqs := make(chan *ssh.Request, cap(in)) + + tracer := tracing.NewConfig(c.opts).TracerProvider.Tracer(instrumentationName) + go func() { + defer close(unhandledReqs) + for req := range in { + ctx, span := tracer.Start( + ctx, + fmt.Sprintf("ssh.HandleRequests/%s", req.Type), + oteltrace.WithSpanKind(oteltrace.SpanKindClient), + oteltrace.WithAttributes( + append( + peerAttr(c.Conn.RemoteAddr()), + semconv.RPCServiceKey.String("ssh.Client"), + semconv.RPCMethodKey.String("HandleRequests"), + semconv.RPCSystemKey.String("ssh"), + )..., + ), + ) + + handler, ok := requestHandlers[req.Type] + if ok { + handler(ctx, req) + } else { + // Pass on requests without a registered handler. These will be + // handled by the default x/crypto/ssh request handler. + unhandledReqs <- req + } + + span.End() + } + }() + + return unhandledReqs } // clientWrapper wraps the ssh.Conn for individual ssh.Client @@ -229,64 +387,6 @@ type clientWrapper struct { contexts map[string][]context.Context } -// ChannelRequestCallback allows the handling of channel requests -// to be customized. nil can be returned if you don't want -// golang/x/crypto/ssh to handle the request. -type ChannelRequestCallback func(req *ssh.Request) *ssh.Request - -// NewSession opens a new Session for this client. 
-func (c *clientWrapper) NewSession(callback ChannelRequestCallback) (*Session, error) { - // create a client that will defer to us when - // opening the "session" channel so that we - // can add an Envelope to the request - client := &ssh.Client{ - Conn: c, - } - - var session *ssh.Session - var err error - if callback != nil { - // open a session manually so we can take ownership of the - // requests chan - ch, originalReqs, openChannelErr := client.OpenChannel("session", nil) - if openChannelErr != nil { - return nil, trace.Wrap(openChannelErr) - } - - // pass the channel requests to the provided callback and - // forward them to another chan so golang.org/x/crypto/ssh - // can handle Session exiting correctly - reqs := make(chan *ssh.Request, cap(originalReqs)) - go func() { - defer close(reqs) - - for req := range originalReqs { - if req := callback(req); req != nil { - reqs <- req - } - } - }() - - session, err = newCryptoSSHSession(ch, reqs) - if err != nil { - _ = ch.Close() - return nil, trace.Wrap(err) - } - } else { - session, err = client.NewSession() - if err != nil { - return nil, trace.Wrap(err) - } - } - - // wrap the session so all session requests on the channel - // can be traced - return &Session{ - Session: session, - wrapper: c, - }, nil -} - // wrappedSSHConn allows an SSH session to be created while also allowing // callers to take ownership of the SSH channel requests chan. 
type wrappedSSHConn struct { diff --git a/api/observability/tracing/ssh/client_test.go b/api/observability/tracing/ssh/client_test.go index b59549f2181bc..c7e8b5d7b9c38 100644 --- a/api/observability/tracing/ssh/client_test.go +++ b/api/observability/tracing/ssh/client_test.go @@ -21,7 +21,7 @@ import ( "testing" "time" - "github.com/gravitational/trace" + "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" "golang.org/x/crypto/ssh" ) @@ -48,9 +48,8 @@ func TestIsTracingSupported(t *testing.T) { t.Run(tt.name, func(t *testing.T) { ctx, cancel := context.WithCancel(context.Background()) t.Cleanup(cancel) - errChan := make(chan error, 5) - srv := newServer(t, tt.expectedCapability, func(conn *ssh.ServerConn, channels <-chan ssh.NewChannel, requests <-chan *ssh.Request) { + srv := newServer(t, tt.srvVersion, func(conn *ssh.ServerConn, channels <-chan ssh.NewChannel, requests <-chan *ssh.Request) { go ssh.DiscardRequests(requests) for { @@ -64,29 +63,17 @@ func TestIsTracingSupported(t *testing.T) { } if err := ch.Reject(ssh.Prohibited, "no channels allowed"); err != nil { - errChan <- trace.Wrap(err, "rejecting channel") + assert.NoError(t, err, "rejecting channel") return } } } }) - if tt.srvVersion != "" { - srv.config.ServerVersion = tt.srvVersion - } - - go srv.Run(errChan) - conn, chans, reqs := srv.GetClient(t) client := NewClient(conn, chans, reqs) require.Equal(t, tt.expectedCapability, client.capability) - - select { - case err := <-errChan: - require.NoError(t, err) - default: - } }) } } @@ -104,14 +91,13 @@ func TestSetEnvs(t *testing.T) { t.Parallel() ctx, cancel := context.WithCancel(context.Background()) t.Cleanup(cancel) - errChan := make(chan error, 5) expected := map[string]string{"a": "1", "b": "2", "c": "3"} // used to collect individual envs requests envReqC := make(chan envReqParams, 3) - srv := newServer(t, tracingSupported, func(conn *ssh.ServerConn, channels <-chan ssh.NewChannel, requests <-chan *ssh.Request) { + srv := 
newServer(t, tracingSupportedVersion, func(conn *ssh.ServerConn, channels <-chan ssh.NewChannel, requests <-chan *ssh.Request) { for { select { case <-ctx.Done(): @@ -123,7 +109,7 @@ func TestSetEnvs(t *testing.T) { case ch.ChannelType() == "session": ch, reqs, err := ch.Accept() if err != nil { - errChan <- trace.Wrap(err, "failed to accept session channel") + assert.NoError(t, err, "failed to accept session channel") return } @@ -178,7 +164,7 @@ func TestSetEnvs(t *testing.T) { _ = req.Reply(true, nil) default: // out of order or unexpected message _ = req.Reply(false, []byte(fmt.Sprintf("unexpected ssh request %s on iteration %d", req.Type, i))) - errChan <- err + assert.NoError(t, err) return } } @@ -186,7 +172,7 @@ func TestSetEnvs(t *testing.T) { }() default: if err := ch.Reject(ssh.ConnectionFailed, fmt.Sprintf("unexpected channel %s", ch.ChannelType())); err != nil { - errChan <- err + assert.NoError(t, err) return } } @@ -194,8 +180,6 @@ func TestSetEnvs(t *testing.T) { } }) - go srv.Run(errChan) - // create a client and open a session conn, chans, reqs := srv.GetClient(t) client := NewClient(conn, chans, reqs) @@ -235,12 +219,6 @@ func TestSetEnvs(t *testing.T) { require.Equal(t, v, actual, "expected value %s for env %s, got %s", v, k, actual) } }) - - select { - case err := <-errChan: - require.NoError(t, err) - default: - } } type mockSSHChannel struct { @@ -268,3 +246,136 @@ func TestWrappedSSHConn(t *testing.T) { wrappedConn.OpenChannel("", nil) }) } + +// TestGlobalAndSessionRequests tests that the tracing client correctly handles global and session requests. +func TestGlobalAndSessionRequests(t *testing.T) { + ctx, cancel := context.WithCancel(context.Background()) + defer cancel() + + // pingRequest is an example request type. Whether sent by the server or client in + // a global or session context, the receiver should give an ok as the reply. 
+ pingRequest := "ping@goteleport.com" + + clientGlobalReply := make(chan bool, 1) + clientSessionReply := make(chan bool, 1) + + srv := newServer(t, tracingSupportedVersion, func(conn *ssh.ServerConn, channels <-chan ssh.NewChannel, requests <-chan *ssh.Request) { + // Send a ping request when the client connection is established. + ok, _, err := conn.SendRequest(pingRequest, true, nil) + assert.NoError(t, err, "server failed to send global ping request") + clientGlobalReply <- ok + + for { + select { + case <-ctx.Done(): + return + case req := <-requests: + switch req.Type { + case pingRequest: + err := req.Reply(true, nil) + assert.NoError(t, err, "server failed to reply to global ping request") + default: + err := req.Reply(false, nil) + assert.NoError(t, err, "server failed to reply to global %q request", req.Type) + } + case ch := <-channels: + switch { + case ch == nil: + return + case ch.ChannelType() == "session": + ch, reqs, err := ch.Accept() + if err != nil { + assert.NoError(t, err, "failed to accept session channel") + return + } + + go func() { + defer ch.Close() + for { + select { + case <-ctx.Done(): + return + case req := <-reqs: + switch req.Type { + case pingRequest: + err := req.Reply(true, nil) + assert.NoError(t, err, "server failed to reply to session ping request") + } + continue + } + } + }() + + // Send a ping request when the session is established. + ok, err := ch.SendRequest(pingRequest, true, nil) + assert.NoError(t, err, "server failed to send ping request") + clientSessionReply <- ok + default: + err := ch.Reject(ssh.ConnectionFailed, fmt.Sprintf("unexpected channel %s", ch.ChannelType())) + assert.NoError(t, err) + } + } + } + }) + + conn, chans, reqs := srv.GetClient(t) + client := NewClient(conn, chans, reqs) + + // The client should reply false to any global request from the server, as we + // don't currently support a mechanism for the client to register global handlers. 
+	select {
+	case reply := <-clientGlobalReply:
+		require.False(t, reply, "Expected the client to reply false to global ping request")
+	case <-time.After(10 * time.Second):
+		t.Fatalf("Failed to receive client global reply to ping request")
+	}
+
+	// The server should reply true to a global ping request.
+	ok, _, err := client.SendRequest(ctx, pingRequest, true, nil)
+	require.True(t, ok, "Expected the server to reply true to global ping request")
+	require.NoError(t, err)
+
+	// If the client isn't set up to handle session requests, it should reply false to them.
+	_, err = client.NewSession(ctx)
+	require.NoError(t, err)
+
+	select {
+	case reply := <-clientSessionReply:
+		require.False(t, reply, "Expected the client to reply false to session ping request")
+	case <-time.After(10 * time.Second):
+		t.Fatalf("Failed to receive client session reply to ping request")
+	}
+
+	// The client should reply true to a session ping request.
+	err = client.HandleSessionRequest(ctx, pingRequest, func(ctx context.Context, req *ssh.Request) {
+		err := req.Reply(true, nil)
+		assert.NoError(t, err)
+	})
+	require.NoError(t, err)
+	_, err = client.NewSession(ctx)
+	require.NoError(t, err)
+
+	select {
+	case reply := <-clientSessionReply:
+		require.True(t, reply, "Expected the client to reply true to session ping request")
+	case <-time.After(10 * time.Second):
+		t.Fatalf("Failed to receive client session reply to ping request")
+	}
+
+	// New sessions do not reuse previously registered handlers.
+	session, err := client.NewSession(ctx)
+	require.NoError(t, err)
+
+	select {
+	case reply := <-clientSessionReply:
+		require.False(t, reply, "Expected the client to reply false to session ping request")
+	case <-time.After(10 * time.Second):
+		t.Fatalf("Failed to receive client session reply to ping request")
+	}
+
+	// The server should reply true to a session ping request.
+ ok, err = session.SendRequest(ctx, pingRequest, true, nil) + require.NoError(t, err) + require.True(t, ok, "Expected the server to reply true to session ping request") +} diff --git a/api/observability/tracing/ssh/ssh.go b/api/observability/tracing/ssh/ssh.go index 9e1d0243b5c48..5e8ee4d7003c3 100644 --- a/api/observability/tracing/ssh/ssh.go +++ b/api/observability/tracing/ssh/ssh.go @@ -15,10 +15,10 @@ package ssh import ( + "cmp" "context" "encoding/json" "net" - "time" "github.com/gravitational/trace" "go.opentelemetry.io/otel/attribute" @@ -27,6 +27,7 @@ import ( oteltrace "go.opentelemetry.io/otel/trace" "golang.org/x/crypto/ssh" + "github.com/gravitational/teleport/api/defaults" "github.com/gravitational/teleport/api/observability/tracing" ) @@ -126,58 +127,106 @@ func Dial(ctx context.Context, network, addr string, config *ssh.ClientConfig, o if err != nil { return nil, err } - c, chans, reqs, err := NewClientConn(ctx, conn, addr, config, opts...) + c, err := NewClientWithTimeout(ctx, conn, addr, config, opts...) if err != nil { return nil, err } - return NewClient(c, chans, reqs), nil + return c, nil } -// NewClientConn creates a new SSH client connection that is passed tracing context so that spans may be correlated -// properly over the ssh connection. -func NewClientConn(ctx context.Context, conn net.Conn, addr string, config *ssh.ClientConfig, opts ...tracing.Option) (ssh.Conn, <-chan ssh.NewChannel, <-chan *ssh.Request, error) { +// NewClientConnWithTimeout creates a new SSH client connection that includes tracing context, +// allowing spans to be properly correlated across the SSH connection. +// +// The connection respects the earliest of the following: +// - The context's deadline or cancellation +// - The timeout specified in the config +// - A default timeout of 30 seconds if config doesn't specify a timeout +// +// Behavior based on config.Timeout: +// - If > 0: the timeout is applied in addition to any context deadline. 
+// - If == 0: a default timeout of 30 seconds is used to avoid hanging connections.
+// - If < 0: only the context's deadline or cancellation is used.
+func NewClientConnWithTimeout(ctx context.Context, conn net.Conn, addr string, config *ssh.ClientConfig, opts ...tracing.Option) (ssh.Conn, <-chan ssh.NewChannel, <-chan *ssh.Request, error) {
 	tracer := tracing.NewConfig(opts).TracerProvider.Tracer(instrumentationName)
 
 	ctx, span := tracer.Start( //nolint:staticcheck,ineffassign // keeping shadowed ctx to avoid accidental missing in the future
 		ctx,
-		"ssh/NewClientConn",
+		"ssh/NewClientConnWithTimeout",
 		oteltrace.WithSpanKind(oteltrace.SpanKindClient),
 		oteltrace.WithAttributes(
 			append(
 				peerAttr(conn.RemoteAddr()),
 				attribute.String("address", addr),
 				semconv.RPCServiceKey.String("ssh"),
-				semconv.RPCMethodKey.String("NewClientConn"),
+				semconv.RPCMethodKey.String("NewClientConnWithTimeout"),
 				semconv.RPCSystemKey.String("ssh"),
 			)...,
 		),
 	)
 	defer span.End()
 
-	c, chans, reqs, err := ssh.NewClientConn(conn, addr, config)
-	if err != nil {
-		return nil, nil, nil, trace.Wrap(err)
+	// ssh.ClientConfig.Timeout is not the total timeout for establishing the
+	// connection: it covers DNS resolution and the TCP connection, but not the
+	// SSH handshake.
+	// From the crypto/ssh docs:
+	// > Timeout is the maximum amount of time for the TCP connection to establish.
+	//
+	// Since we pass the connection here, the timeout will never be enforced by the
+	// ssh package. The removed NewClientConnWithDeadline tried to enforce it by
+	// setting a read deadline on the connection, but that might not be sufficient
+	// because some of our net.Conn implementations don't support read deadlines
+	// and will block forever.
+	// To be sure that we don't block forever, we set up a timer that closes
+	// the connection when the timeout is reached.
+	// If the context has a deadline, we use that instead and take the minimum
+	// of the two.
+	// If neither is set, we default to 30 seconds.
+
+	// We close the connection ourselves to avoid clients hanging forever
+	// if the server is not responding.
+
+	// If config.Timeout is negative, we don't set a timeout and restrict
+	// ourselves to the context deadline, if any.
+	if config.Timeout >= 0 {
+		newCtx, cancel := context.WithTimeout(
+			ctx,
+			cmp.Or(config.Timeout, defaults.DefaultIOTimeout),
+		)
+		defer cancel()
+		ctx = newCtx
 	}
-	return c, chans, reqs, nil
-}
+	stopFn := context.AfterFunc(ctx, func() {
+		_ = conn.Close()
+	})
+	defer stopFn()
 
-// NewClientConnWithDeadline establishes new client connection with specified deadline
-func NewClientConnWithDeadline(ctx context.Context, conn net.Conn, addr string, config *ssh.ClientConfig, opts ...tracing.Option) (*Client, error) {
-	if config.Timeout > 0 {
-		if err := conn.SetReadDeadline(time.Now().Add(config.Timeout)); err != nil {
-			return nil, trace.Wrap(err)
-		}
-	}
-	c, chans, reqs, err := NewClientConn(ctx, conn, addr, config, opts...)
+	c, chans, reqs, err := ssh.NewClientConn(conn, addr, config)
 	if err != nil {
-		return nil, err
+		// If the context was canceled or timed out, return an aggregated error instead
+		// of the original error returned from NewClientConn. The returned error would be something like
+		// "ssh: handshake failed: read tcp {ip}:{port} -> {ip}:{port} use of closed network connection"
+		// which doesn't indicate that the real error was a timeout or cancellation.
+		// If the context was not canceled and the function failed, the original error is returned as
+		// ctx.Err() would be nil.
+ return nil, nil, nil, trace.NewAggregate(ctx.Err(), err) } - if config.Timeout > 0 { - if err := conn.SetReadDeadline(time.Time{}); err != nil { - return nil, trace.Wrap(err) - } + + if !stopFn() { + // we failed to stop the AfterFunc so conn will be closed and + // c will soon become invalid no matter what we do, so we + // drain it and close it + _ = conn.Close() + go func() { + for newCh := range chans { + _ = newCh.Reject(0, "") + } + }() + go ssh.DiscardRequests(reqs) + _ = c.Close() + return nil, nil, nil, trace.Wrap(ctx.Err()) } - return NewClient(c, chans, reqs, opts...), nil + + return c, chans, reqs, nil } // peerAttr returns attributes about the peer address. diff --git a/api/observability/tracing/ssh/ssh_test.go b/api/observability/tracing/ssh/ssh_test.go index e7618a3be15e3..9e6e1fe0fdbf1 100644 --- a/api/observability/tracing/ssh/ssh_test.go +++ b/api/observability/tracing/ssh/ssh_test.go @@ -21,10 +21,14 @@ import ( "crypto/subtle" "encoding/json" "errors" + "io" "net" + "sync" "testing" + "time" "github.com/gravitational/trace" + "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" "go.opentelemetry.io/otel" "go.opentelemetry.io/otel/propagation" @@ -45,31 +49,6 @@ type server struct { hSigner ssh.Signer } -func (s *server) Run(errC chan error) { - for { - conn, err := s.listener.Accept() - if err != nil { - if !errors.Is(err, net.ErrClosed) { - errC <- err - } - return - } - - go func() { - sconn, chans, reqs, err := ssh.NewServerConn(conn, s.config) - if err != nil { - errC <- err - return - } - s.handler(sconn, chans, reqs) - }() - } -} - -func (s *server) Stop() error { - return s.listener.Close() -} - func generateSigner(t *testing.T) ssh.Signer { _, private, err := ed25519.GenerateKey(rand.Reader) require.NoError(t, err) @@ -91,18 +70,18 @@ func (s *server) GetClient(t *testing.T) (ssh.Conn, <-chan ssh.NewChannel, <-cha return sconn, nc, r } -func newServer(t *testing.T, tracingCap tracingCapability, handler 
func(*ssh.ServerConn, <-chan ssh.NewChannel, <-chan *ssh.Request)) *server {
+const (
+	tracingSupportedVersion   = "SSH-2.0-Teleport"
+	tracingUnsupportedVersion = "SSH-2.0"
+)
+
+func newServer(t *testing.T, version string, handler func(*ssh.ServerConn, <-chan ssh.NewChannel, <-chan *ssh.Request)) *server {
 	listener, err := net.Listen("tcp", "localhost:0")
 	require.NoError(t, err)
 
 	cSigner := generateSigner(t)
 	hSigner := generateSigner(t)
 
-	version := "SSH-2.0-Teleport"
-	if tracingCap != tracingSupported {
-		version = "SSH-2.0"
-	}
-
 	config := &ssh.ServerConfig{
 		NoClientAuth:  true,
 		ServerVersion: version,
@@ -117,7 +96,33 @@ func newServer(t *testing.T, tracingCap tracingCapability, handler func(*ssh.Ser
 		hSigner:  hSigner,
 	}
 
-	t.Cleanup(func() { require.NoError(t, srv.Stop()) })
+	errC := make(chan error, 1)
+	go func() {
+		defer close(errC)
+		for {
+			conn, err := srv.listener.Accept()
+			if err != nil {
+				if !errors.Is(err, net.ErrClosed) {
+					errC <- err
+				}
+				return
+			}
+
+			go func() {
+				sconn, chans, reqs, err := ssh.NewServerConn(conn, srv.config)
+				if err != nil {
+					errC <- err
+					return
+				}
+				srv.handler(sconn, chans, reqs)
+			}()
+		}
+	}()
+
+	t.Cleanup(func() {
+		require.NoError(t, srv.listener.Close())
+		require.NoError(t, <-errC)
+	})
 
 	return srv
 }
@@ -300,8 +305,12 @@ func TestClient(t *testing.T) {
 				ctx:     ctx,
 			}
 
-			srv := newServer(t, tt.tracingSupported, handler.handle)
-			go srv.Run(errChan)
+			version := tracingSupportedVersion
+			if tt.tracingSupported != tracingSupported {
+				version = tracingUnsupportedVersion
+			}
+
+			srv := newServer(t, version, handler.handle)
 
 			tp := sdktrace.NewTracerProvider()
 			conn, chans, reqs := srv.GetClient(t)
@@ -414,3 +423,73 @@ func TestWrapPayload(t *testing.T) {
 		})
 	}
 }
+
+func TestNewClientConnTimeout(t *testing.T) {
+	t.Parallel()
+
+	// This test ensures that NewClientConnWithTimeout respects the context
+	// timeout and does not hang indefinitely.
+ listener, err := net.Listen("tcp", "localhost:0") + require.NoError(t, err) + + var wg sync.WaitGroup + t.Cleanup(wg.Wait) + t.Cleanup(func() { listener.Close() }) + + wg.Add(1) + go func() { + defer wg.Done() + defer listener.Close() + + for { + conn, err := listener.Accept() + if err != nil { + assert.ErrorIs(t, err, net.ErrClosed) + return + } + require.NoError(t, err) + wg.Add(1) + go func() { + defer wg.Done() + defer conn.Close() + // Simulate a server that does not respond so the ssh.NewClientConn + // call on the client side hangs indefinitely. + _, _ = io.Copy(io.Discard, conn) + }() + } + }() + + t.Run("context timeout is respected", func(t *testing.T) { + t.Parallel() + + ctx, cancel := context.WithTimeout(context.Background(), 5*time.Millisecond) + t.Cleanup(cancel) + + conn, err := net.Dial("tcp", listener.Addr().String()) + require.NoError(t, err) + + _, _, _, err = NewClientConnWithTimeout(ctx, conn, listener.Addr().String(), &ssh.ClientConfig{ + Timeout: -1, + HostKeyCallback: ssh.InsecureIgnoreHostKey(), + }) + + require.Error(t, err) + require.ErrorIs(t, err, context.DeadlineExceeded, "expected context deadline exceeded error, got: %v", err) + }) + + t.Run("config timeout is respected", func(t *testing.T) { + t.Parallel() + + conn, err := net.Dial("tcp", listener.Addr().String()) + require.NoError(t, err) + + _, _, _, err = NewClientConnWithTimeout(context.Background(), conn, listener.Addr().String(), &ssh.ClientConfig{ + Timeout: 5 * time.Millisecond, + HostKeyCallback: ssh.InsecureIgnoreHostKey(), + }) + + require.Error(t, err) + require.ErrorIs(t, err, context.DeadlineExceeded, "expected context deadline exceeded error, got: %v", err) + }) + +} diff --git a/api/profile/profile.go b/api/profile/profile.go index ed9020e433b61..c01ccad841fa9 100644 --- a/api/profile/profile.go +++ b/api/profile/profile.go @@ -68,6 +68,14 @@ type Profile struct { // MongoProxyAddr is the host:port the Mongo proxy can be accessed at. 
MongoProxyAddr string `yaml:"mongo_proxy_addr,omitempty"` + // RelayAddr is the relay in use specified at login time, or "none" if use + // of a relay is explicitly disabled. + RelayAddr string `yaml:"relay_addr,omitempty"` + + // DefaultRelayAddr is the cluster-specified address of the relay in use, to + // be used if RelayAddr is unspecified. Set at login time. + DefaultRelayAddr string `yaml:"default_relay_addr,omitempty"` + // Username is the Teleport username for the client. Username string `yaml:"user,omitempty"` @@ -169,7 +177,7 @@ func (p *Profile) TLSConfig() (*tls.Config, error) { // Expiry returns the credential expiry. func (p *Profile) Expiry() (time.Time, bool) { - certPEMBlock, err := os.ReadFile(p.TLSCertPath()) + certPEMBlock, err := p.TLSCert() if err != nil { return time.Time{}, false } @@ -180,6 +188,12 @@ func (p *Profile) Expiry() (time.Time, bool) { return cert.NotAfter, true } +// TLSCert returns the profile's TLS certificate. +func (p *Profile) TLSCert() ([]byte, error) { + certPEMBlock, err := os.ReadFile(p.TLSCertPath()) + return certPEMBlock, trace.Wrap(err) +} + // RequireKubeLocalProxy returns true if this profile indicates a local proxy // is required for kube access. func (p *Profile) RequireKubeLocalProxy() bool { @@ -317,19 +331,26 @@ func FullProfilePath(dir string) string { // defaultProfilePath retrieves the default path of the TSH profile. func defaultProfilePath() string { - // start with UserHomeDir, which is the fastest option as it - // relies only on environment variables and does not perform - // a user lookup (which can be very slow on large AD environments) - home, err := os.UserHomeDir() - if err == nil && home != "" { - return filepath.Join(home, profileDir) + home, ok := UserHomeDir() + if !ok { + home = os.TempDir() } + return filepath.Join(home, profileDir) +} - home = os.TempDir() +// UserHomeDir returns the current user's home directory if it can be found. 
+func UserHomeDir() (string, bool) {
+	// Start with os.UserHomeDir, which is the fastest option as it relies only
+	// on environment variables and does not perform a user lookup (which can be
+	// very slow on large AD environments).
+	if home, err := os.UserHomeDir(); err == nil && home != "" {
+		return home, true
+	}
+	// Fall back to the user lookup.
 	if u, err := user.Current(); err == nil && u.HomeDir != "" {
-		home = u.HomeDir
+		return u.HomeDir, true
 	}
-	return filepath.Join(home, profileDir)
+	return "", false
 }
 
 // FromDir reads the user profile from a given directory. If dir is empty,
@@ -371,6 +392,10 @@ func profileFromFile(filePath string) (*Profile, error) {
 	// Older versions of tsh did not always store the cluster name in the
 	// profile. If no cluster name is found, fallback to the name of the profile
 	// for backward compatibility.
+	//
+	// TODO(gzdunek): A profile name is not the same thing as a site name, and they differ when the proxy hostname is different
+	// from the cluster name.
+	// Instead, tsh should be able to handle an empty site name, or this default should be changed.
 	if p.SiteName == "" {
 		p.SiteName = p.Name()
 	}
diff --git a/api/proto/teleport/accesslist/v1/accesslist.proto b/api/proto/teleport/accesslist/v1/accesslist.proto
index b12ee28b400f4..f57ac7c605e5b 100644
--- a/api/proto/teleport/accesslist/v1/accesslist.proto
+++ b/api/proto/teleport/accesslist/v1/accesslist.proto
@@ -71,6 +71,11 @@ message AccessListSpec {
 
   // owner_grants describes the access granted by owners to this Access List.
   AccessListGrants owner_grants = 11;
+
+  // type can be an empty string, which denotes a regular Access List, "scim", which represents
+  // an Access List created from a SCIM group, or "static" for Access Lists managed by IaC
+  // tools.
+  string type = 12;
 }
 
 // AccessListOwner is an owner of an Access List.
@@ -304,6 +309,14 @@ message CurrentUserAssignments { AccessListUserAssignmentType membership_type = 2; } +// UserAssignments describes the requested user's ownership and membership assignment types in the access list. +message UserAssignments { + // ownership_type represents the requested user's ownership type (explicit, inherited, or none) in the access list. + AccessListUserAssignmentType ownership_type = 1; + // membership_type represents the requested user's membership type (explicit, inherited, or none) in the access list. + AccessListUserAssignmentType membership_type = 2; +} + // AccessListStatus contains dynamic fields calculated during retrieval. message AccessListStatus { // member_count is the number of members in the Access List. @@ -316,4 +329,6 @@ message AccessListStatus { repeated string member_of = 4; // current_user_assignments describes the current user's ownership and membership status in the access list. CurrentUserAssignments current_user_assignments = 5; + // user_assignments describes the requested user's ownership and membership assignment types in the access list. + UserAssignments user_assignments = 6; } diff --git a/api/proto/teleport/accesslist/v1/accesslist_service.proto b/api/proto/teleport/accesslist/v1/accesslist_service.proto index 73fc691c92ab4..e1c46d166e117 100644 --- a/api/proto/teleport/accesslist/v1/accesslist_service.proto +++ b/api/proto/teleport/accesslist/v1/accesslist_service.proto @@ -28,7 +28,12 @@ service AccessListService { // GetAccessLists returns a list of all access lists. rpc GetAccessLists(GetAccessListsRequest) returns (GetAccessListsResponse); // ListAccessLists returns a paginated list of all access lists. - rpc ListAccessLists(ListAccessListsRequest) returns (ListAccessListsResponse); + // Deprecated: Use ListAccessListsV2 instead. 
+  rpc ListAccessLists(ListAccessListsRequest) returns (ListAccessListsResponse) {
+    option deprecated = true;
+  }
+  // ListAccessListsV2 returns a paginated, filtered, and sorted list of all access lists.
+  rpc ListAccessListsV2(ListAccessListsV2Request) returns (ListAccessListsV2Response);
   // GetAccessList returns the specified access list resource.
   rpc GetAccessList(GetAccessListRequest) returns (AccessList);
   // UpsertAccessList creates or updates an access list resource.
@@ -53,16 +58,28 @@ service AccessListService {
   rpc ListAllAccessListMembers(ListAllAccessListMembersRequest) returns (ListAllAccessListMembersResponse);
   // GetAccessListMember returns the specified access list member resource.
   rpc GetAccessListMember(GetAccessListMemberRequest) returns (Member);
+  // GetStaticAccessListMember returns the specified access_list_member resource. It returns an
+  // error if the target access_list is not of type static. This API exists for IaC tools, to
+  // prevent them from making changes to members of dynamic access lists.
+  rpc GetStaticAccessListMember(GetStaticAccessListMemberRequest) returns (GetStaticAccessListMemberResponse);
   // GetAccessListOwners returns a list of all owners in an Access List,
   // including those inherited from nested Access Lists.
   rpc GetAccessListOwners(GetAccessListOwnersRequest) returns (GetAccessListOwnersResponse);
   // UpsertAccessListMember creates or updates an access list member resource.
   rpc UpsertAccessListMember(UpsertAccessListMemberRequest) returns (Member);
+  // UpsertStaticAccessListMember creates or updates an access_list_member resource. It returns an
+  // error and does nothing if the target access_list is not of type static. This API exists for
+  // IaC tools, to prevent them from making changes to members of dynamic access lists.
+  rpc UpsertStaticAccessListMember(UpsertStaticAccessListMemberRequest) returns (UpsertStaticAccessListMemberResponse);
   // UpdateAccessListMember conditionally updates an access list member resource.
   rpc UpdateAccessListMember(UpdateAccessListMemberRequest) returns (Member);
   // DeleteAccessListMember hard deletes the specified access list member
   // resource.
   rpc DeleteAccessListMember(DeleteAccessListMemberRequest) returns (google.protobuf.Empty);
+  // DeleteStaticAccessListMember hard deletes the specified access_list_member. It returns an
+  // error and does nothing if the target access_list is not of type static. This API exists for
+  // IaC tools, to prevent them from making changes to members of dynamic access lists.
+  rpc DeleteStaticAccessListMember(DeleteStaticAccessListMemberRequest) returns (DeleteStaticAccessListMemberResponse);
   // DeleteAllAccessListMembers hard deletes all access list members for an
   // access list.
   rpc DeleteAllAccessListMembersForAccessList(DeleteAllAccessListMembersForAccessListRequest) returns (google.protobuf.Empty);
@@ -95,6 +112,10 @@ service AccessListService {
   // GetInheritedGrants returns the inherited grants for an access list.
   rpc GetInheritedGrants(GetInheritedGrantsRequest) returns (GetInheritedGrantsResponse);
+
+  // ListUserAccessLists returns a paginated list of all access lists where the
+  // user is an owner or member.
+  rpc ListUserAccessLists(ListUserAccessListsRequest) returns (ListUserAccessListsResponse);
 }

 // GetAccessListsRequest is the request for getting all access lists.
@@ -110,7 +131,6 @@ message GetAccessListsResponse {
 message ListAccessListsRequest {
   // page_size is the size of the page to request.
   int32 page_size = 1;
-  // next_token is the page token.
   string next_token = 2;
 }

@@ -123,6 +143,38 @@ message ListAccessListsResponse {
   string next_token = 2;
 }

+// ListAccessListsV2Request is the request for getting a filtered, sorted, and paginated list of access lists.
+message ListAccessListsV2Request {
+  // page_size is the size of the page to request.
+  int32 page_size = 1;
+  // page_token is the token to begin the next page with.
+  string page_token = 2;
+  // sort_by specifies the sort order for the results.
+  types.SortBy sort_by = 3;
+  // filter is a collection of fields to filter access lists.
+  AccessListsFilter filter = 4;
+}
+
+// AccessListsFilter is used to collect filter options for listing access lists.
+message AccessListsFilter {
+  // search is a search term to filter access lists by name.
+  string search = 1;
+  // owners indicates that returned access lists should be owned by one of the provided owners.
+  repeated string owners = 2;
+  // roles indicates that returned access lists should grant one of the provided roles.
+  repeated string roles = 3;
+  // origin is the origin of the resource.
+  string origin = 4;
+}
+
+// ListAccessListsV2Response is the response for getting paginated access lists.
+message ListAccessListsV2Response {
+  // access_lists is the list of access lists.
+  repeated AccessList access_lists = 1;
+  // next_page_token is the next page token.
+  string next_page_token = 2;
+}
+
 // GetInheritedGrantsRequest is the request for getting inherited grants.
 message GetInheritedGrantsRequest {
   // access_list_id is the ID of the access list to retrieve.
@@ -247,16 +299,30 @@ message UpsertAccessListWithMembersResponse {
   repeated Member members = 2;
 }

-// GetAccessListMemberRequest is the request for retrieving an access list
-// member.
+// GetAccessListMemberRequest is the request for retrieving an access_list_member.
 message GetAccessListMemberRequest {
   // access_list is the name of the access list that the member belongs to.
   string access_list = 1;
-  // member_name is the name of the user that belongs to the access list.
   string member_name = 2;
 }

+// GetStaticAccessListMemberRequest is the request for retrieving an access_list_member of a static
+// type access_list.
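The AccessListsFilter semantics above (substring search on the name, any-of owners, any-of roles, exact origin, empty fields ignored) can be sketched in Go; the types and field names here are illustrative stand-ins, not the generated API:

```go
package main

import (
	"fmt"
	"strings"
)

// accessList and listFilter mirror only the fields AccessListsFilter
// touches; the real AccessList resource is much larger.
type accessList struct {
	name, origin  string
	owners, roles []string
}

type listFilter struct {
	search, origin string
	owners, roles  []string
}

// anyOf reports whether any wanted value is present; an empty want list
// means "no constraint".
func anyOf(have, want []string) bool {
	if len(want) == 0 {
		return true
	}
	for _, w := range want {
		for _, h := range have {
			if h == w {
				return true
			}
		}
	}
	return false
}

// matches applies the filter: search on the name, any-of owners, any-of
// roles, exact origin. Empty filter fields are ignored.
func matches(al accessList, f listFilter) bool {
	if f.search != "" && !strings.Contains(al.name, f.search) {
		return false
	}
	if f.origin != "" && al.origin != f.origin {
		return false
	}
	return anyOf(al.owners, f.owners) && anyOf(al.roles, f.roles)
}

func main() {
	al := accessList{name: "prod-dbas", origin: "okta", owners: []string{"alice"}, roles: []string{"db-admin"}}
	fmt.Println(matches(al, listFilter{search: "prod", owners: []string{"alice", "bob"}})) // true
	fmt.Println(matches(al, listFilter{origin: "saml"}))                                   // false
}
```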
+message GetStaticAccessListMemberRequest {
+  // access_list is the name of the access_list that the member belongs to.
+  string access_list = 1;
+  // member_name is the name of the user that belongs to the access_list.
+  string member_name = 2;
+}
+
+// GetStaticAccessListMemberResponse is the response containing the access_list_member of the
+// target access_list of static type.
+message GetStaticAccessListMemberResponse {
+  // member of the target static access_list.
+  Member member = 1;
+}
+
 // GetAccessListOwnersRequest is the request for getting a list of all owners
 // in an Access List, including those inherited from nested Access Lists.
 message GetAccessListOwnersRequest {
@@ -282,6 +348,20 @@ message UpsertAccessListMemberRequest {
   Member member = 4;
 }

+// UpsertStaticAccessListMemberRequest is the request for upserting an access_list_member to an
+// access_list of type static.
+message UpsertStaticAccessListMemberRequest {
+  // member is the access_list_member to upsert.
+  Member member = 1;
+}
+
+// UpsertStaticAccessListMemberResponse is the response of upserting an access_list_member to an
+// access_list of type static.
+message UpsertStaticAccessListMemberResponse {
+  // member is the upserted access_list_member.
+  Member member = 1;
+}
+
 // UpdateAccessListMemberRequest is the request for updating an access list
 // member.
 message UpdateAccessListMemberRequest {
@@ -301,6 +381,19 @@ message DeleteAccessListMemberRequest {
   string member_name = 3;
 }

+// DeleteStaticAccessListMemberRequest is the request for deleting an access_list_member from an
+// access_list of type static.
+message DeleteStaticAccessListMemberRequest {
+  // access_list is the name of the access list.
+  string access_list = 1;
+  // member_name is the name of the user to delete.
+  string member_name = 2;
+}
+
+// DeleteStaticAccessListMemberResponse is the response of deleting an access_list_member from an
+// access_list of type static.
+message DeleteStaticAccessListMemberResponse {} + // DeleteAllAccessListMembersForAccessListRequest is the request for deleting // all members from an access list. message DeleteAllAccessListMembersForAccessListRequest { @@ -416,3 +509,25 @@ message GetSuggestedAccessListsResponse { // access_lists is the list of suggested lists. repeated AccessList access_lists = 1; } + +// ListUserAccessListsRequest is the request for getting access lists where the +// user is an owner or member. +message ListUserAccessListsRequest { + // username is the name of the user to get access lists for. + string username = 1; + // page_size is the size of the page to request. + int32 page_size = 2; + // page_token is the page token. + string page_token = 3; +} + +// ListUserAccessListsResponse is the response for getting access lists where the +// user is an owner or member. +message ListUserAccessListsResponse { + // access_lists are the access lists where the user is a member or owner. + repeated AccessList access_lists = 1; + // next_page_token is the next page token. + string next_page_token = 2; + // total_count is the total number of access lists in all pages. + int32 total_count = 3; +} diff --git a/api/proto/teleport/accessmonitoringrules/v1/access_monitoring_rules.proto b/api/proto/teleport/accessmonitoringrules/v1/access_monitoring_rules.proto index f6ed98d53d3f0..df9c7e1a0d423 100644 --- a/api/proto/teleport/accessmonitoringrules/v1/access_monitoring_rules.proto +++ b/api/proto/teleport/accessmonitoringrules/v1/access_monitoring_rules.proto @@ -55,7 +55,7 @@ message AccessMonitoringRuleSpec { reserved 5; reserved "automatic_approval"; - // automatic_review defines automatic review configurations for access requests. + // automatic_review defines automatic review configurations for Access Requests. // Both notification and automatic_review may be set within the same // access_monitoring_rule. 
If both fields are set, the rule will trigger
 // both notifications and automatic reviews for the same set of access events.
@@ -63,10 +63,16 @@ message AccessMonitoringRuleSpec {
   // set.
   AutomaticReview automatic_review = 6;

-  // desired_state defines the desired state of the subject. For access request
+  // desired_state defines the desired state of the subject. For Access Request
   // subjects, the desired_state may be set to `reviewed` to indicate that the
-  // access request should be automatically reviewed.
+  // Access Request should be automatically reviewed.
   string desired_state = 7;
+
+  // schedules specifies a map of schedules that can be used to configure the
+  // access monitoring rule conditions.
+  //
+  // Available in Teleport v18.2.8 or higher.
+  map<string, Schedule> schedules = 8;
 }

 // Notification contains configurations for plugin notification rules.
@@ -87,6 +93,36 @@ message AutomaticReview {
   string decision = 2;
 }

+// Schedule specifies a schedule that can be used to configure rule conditions.
+message Schedule {
+  // TimeSchedule specifies an in-line schedule.
+  TimeSchedule time = 1;
+}
+
+// TimeSchedule specifies an in-line schedule.
+message TimeSchedule {
+  // Shifts contains a set of shifts that make up the schedule.
+  repeated Shift shifts = 1;
+
+  // Timezone specifies the schedule timezone. This field is optional and defaults
+  // to "UTC". Accepted values use timezone locations as defined in the IANA
+  // Time Zone Database, such as "America/Los_Angeles", "Europe/Lisbon", or
+  // "Asia/Singapore".
+  //
+  // See https://data.iana.org/time-zones/tzdb/zone1970.tab for a list of supported values.
+  string timezone = 2;
+
+  // Shift contains the weekday, start time, and end time of a shift.
+  message Shift {
+    // Weekday specifies the day of the week, e.g., "Sunday", "Monday", "Tuesday".
+    string weekday = 1;
+    // Start specifies the start time in the format HH:MM, e.g., "12:30".
+    string start = 2;
+    // End specifies the end time in the format HH:MM, e.g., "12:30".
+    string end = 3;
+  }
+}
+
 // CreateAccessMonitoringRuleRequest is the request for CreateAccessMonitoringRule.
 message CreateAccessMonitoringRuleRequest {
   // access_monitoring_rule is the specification of the rule to be created.
diff --git a/api/proto/teleport/autoupdate/v1/autoupdate.proto b/api/proto/teleport/autoupdate/v1/autoupdate.proto
index 0a1a6a0cf2d62..13acd5afcc6f0 100644
--- a/api/proto/teleport/autoupdate/v1/autoupdate.proto
+++ b/api/proto/teleport/autoupdate/v1/autoupdate.proto
@@ -83,6 +83,10 @@ message AgentAutoUpdateGroup {
   // wait_hours after last group succeeds before this group can run. This can only be used when the strategy is "halt-on-failure".
   // This field must be positive.
   int32 wait_hours = 5;
+  // canary_count is the number of canary agents that will be updated before the whole group is updated.
+  // When set to 0, the group does not enter the canary phase. This number is capped at 5.
+  // This number must always be lower than the total number of agents in the group, otherwise the rollout will be stuck.
+  int32 canary_count = 6;
 }

 // AutoUpdateVersion is a resource singleton with version required for
@@ -113,9 +117,9 @@ message AutoUpdateVersionSpecTools {

 // AutoUpdateVersionSpecAgents is the spec for the autoupdate version.
 message AutoUpdateVersionSpecAgents {
-  // start_version is the version to update from.
+  // start_version is the version used for newly installed agents before their update window.
   string start_version = 1;
-  // target_version is the version to update to.
+  // target_version is the version that all agents will update to during their update window.
   string target_version = 2;
   // schedule to use for the rollout
   string schedule = 3;
@@ -139,9 +143,9 @@ message AutoUpdateAgentRollout {
 // This is built by merging the user-provided AutoUpdateConfigSpecAgents and the operator-provided
 // AutoUpdateVersionSpecAgents.
message AutoUpdateAgentRolloutSpec { - // start_version is the version to update from. + // start_version is the version used for newly installed agents before their update window. string start_version = 1; - // target_version is the version to update to. + // target_version is the version that all agents will update to during their update window. string target_version = 2; // schedule to use for the rollout. Supported values are "regular" and "immediate". // - "regular" follows the regular group schedule @@ -230,6 +234,12 @@ message AutoUpdateAgentRolloutStatusGroup { // to the done state if: // - the ratio present_count/initial_count is above 0.9 (no more than 10% of the nodes dropped during update) uint64 up_to_date_count = 12; + // canary_count represents how many canaries this group should have to leave the AUTO_UPDATE_AGENT_GROUP_STATE_CANARY + // state. + uint64 canary_count = 13; + // canaries is the list of canary agents that should be updated. + // This list is empty until we enter the AUTO_UPDATE_AGENT_GROUP_STATE_CANARY state. + repeated Canary canaries = 14; } // AutoUpdateAgentGroupState represents the agent group state. This state controls whether the agents from this group @@ -247,6 +257,9 @@ enum AutoUpdateAgentGroupState { // AUTO_UPDATE_AGENT_GROUP_STATE_ROLLEDBACK represents that the group has been rolled back. // New agents should run v1, existing agents should update to v1. AUTO_UPDATE_AGENT_GROUP_STATE_ROLLEDBACK = 4; + // AUTO_UPDATE_AGENT_GROUP_STATE_CANARY represents that the group is updating a few canary nodes, but that most nodes + // have not started updating yet. + AUTO_UPDATE_AGENT_GROUP_STATE_CANARY = 5; } // AutoUpdateAgentRolloutState represents the rollout state. This tells if Teleport started updating agents from the @@ -305,3 +318,60 @@ message AutoUpdateAgentReportSpecOmitted { int64 count = 1; string reason = 2; } + +// AutoUpdateBotInstanceReport is a report generated by an elected instance of the +// Teleport Auth service. 
The report tracks per group and per version how many
+// instances of tbot are running.
+message AutoUpdateBotInstanceReport {
+  // The kind of resource represented. This is always `autoupdate_bot_instance_report`.
+  string kind = 1;
+
+  // Differentiates variations of the same kind. All resources should contain
+  // one, even if it is never populated.
+  string sub_kind = 2;
+
+  // The version of the resource being represented.
+  string version = 3;
+
+  // Common metadata that all resources share.
+  teleport.header.v1.Metadata metadata = 4;
+
+  // Contents of the report.
+  AutoUpdateBotInstanceReportSpec spec = 5;
+}
+
+// AutoUpdateBotInstanceReportSpec holds the contents of an AutoUpdateBotInstanceReport.
+message AutoUpdateBotInstanceReportSpec {
+  // Timestamp is when the report was generated.
+  google.protobuf.Timestamp timestamp = 1;
+
+  // Bot counts aggregated by update group.
+  map<string, AutoUpdateBotInstanceReportSpecGroup> groups = 2;
+}
+
+// AutoUpdateBotInstanceReportSpecGroup holds an update group's bot counts.
+message AutoUpdateBotInstanceReportSpecGroup {
+  // Bot counts aggregated by version.
+  map<string, AutoUpdateBotInstanceReportSpecGroupVersion> versions = 1;
+}
+
+// AutoUpdateBotInstanceReportSpecGroupVersion holds a version's bot count.
+message AutoUpdateBotInstanceReportSpecGroupVersion {
+  // Count of bot instances running this version.
+  int32 count = 1;
+}
+
+// Canary describes a node that is acting as a canary and being updated before other nodes in its group.
+message Canary {
+  // updater_id is reported by the agent in its control stream Hello. This allows us to uniquely identify an updater so
+  // the proxy can modulate its answer when the request comes from this specific updater.
+  string updater_id = 1;
+  // host_id is the node Host ID, reported by the agent in its control stream Hello.
+  string host_id = 2;
+  // hostname is the server hostname reported by the agent in its control stream Hello.
+ // This is purely for debugging purposes: if the agent drops, we won't be able to query the inventory to know which + // agent it was. + string hostname = 3; + // success represents if the agent successfully connected back, running the target version. + bool success = 4; +} diff --git a/api/proto/teleport/autoupdate/v1/autoupdate_service.proto b/api/proto/teleport/autoupdate/v1/autoupdate_service.proto index 10146e5981f8e..e00c699ae8c9b 100644 --- a/api/proto/teleport/autoupdate/v1/autoupdate_service.proto +++ b/api/proto/teleport/autoupdate/v1/autoupdate_service.proto @@ -96,6 +96,12 @@ service AutoUpdateService { // DeleteAutoUpdateAgentReport removes the specified AutoUpdateAgentReport resource. rpc DeleteAutoUpdateAgentReport(DeleteAutoUpdateAgentReportRequest) returns (google.protobuf.Empty); + + // GetAutoUpdateBotInstanceReport returns the singleton AutoUpdateBotInstanceReport resource. + rpc GetAutoUpdateBotInstanceReport(GetAutoUpdateBotInstanceReportRequest) returns (AutoUpdateBotInstanceReport); + + // DeleteAutoUpdateBotInstanceReport removes the singleton AutoUpdateBotInstanceReport resource. + rpc DeleteAutoUpdateBotInstanceReport(DeleteAutoUpdateBotInstanceReportRequest) returns (google.protobuf.Empty); } // Request for GetAutoUpdateConfig. @@ -237,3 +243,9 @@ message DeleteAutoUpdateAgentReportRequest { // Name is the name of the AutoUpdateAgentReport to be deleted. string name = 1; } + +// Request for GetAutoUpdateBotInstanceReport. +message GetAutoUpdateBotInstanceReportRequest {} + +// Request for DeleteAutoUpdateBotInstanceReport. +message DeleteAutoUpdateBotInstanceReportRequest {} diff --git a/api/proto/teleport/componentfeatures/v1/component_features.proto b/api/proto/teleport/componentfeatures/v1/component_features.proto new file mode 100644 index 0000000000000..bbd3c51f6d0dc --- /dev/null +++ b/api/proto/teleport/componentfeatures/v1/component_features.proto @@ -0,0 +1,32 @@ +// Copyright 2025 Gravitational, Inc. 
+// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +syntax = "proto3"; + +package teleport.componentfeatures.v1; + +option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/componentfeatures/v1;componentfeaturesv1"; + +// ComponentFeatureID is an identifier for a specific feature supported by a Teleport component. +enum ComponentFeatureID { + COMPONENT_FEATURE_ID_UNSPECIFIED = 0; + // ResourceConstraintsV1 indicates support for Resource Constraints and ResourceAccessIDs, as defined in RFD 228. + COMPONENT_FEATURE_ID_RESOURCE_CONSTRAINTS_V1 = 1; +} + +// ComponentFeatures represents a set of features supported by a given Teleport component. +message ComponentFeatures { + // features is a list of supported feature identifiers. + repeated ComponentFeatureID features = 1; +} diff --git a/api/proto/teleport/decision/v1alpha1/ssh_access.proto b/api/proto/teleport/decision/v1alpha1/ssh_access.proto index 4e982cc15cf75..2b383397ef0b1 100644 --- a/api/proto/teleport/decision/v1alpha1/ssh_access.proto +++ b/api/proto/teleport/decision/v1alpha1/ssh_access.proto @@ -159,6 +159,14 @@ message LockTarget { // ServerID is the host id of the Teleport instance. string server_id = 8; + + // BotInstanceID is the bot instance ID if this is a bot identity. + string bot_instance_id = 9; + + // JoinToken is the name of the join token used when this identity originally + // joined. 
This only applies to bot identities, and cannot be used to target + // bots that joined via the `token` join method. + string join_token = 10; } // HostUserMode determines how host users should be created. diff --git a/api/proto/teleport/decision/v1alpha1/ssh_identity.proto b/api/proto/teleport/decision/v1alpha1/ssh_identity.proto index 0b50aac726ee1..add9a21342d18 100644 --- a/api/proto/teleport/decision/v1alpha1/ssh_identity.proto +++ b/api/proto/teleport/decision/v1alpha1/ssh_identity.proto @@ -18,6 +18,7 @@ package teleport.decision.v1alpha1; import "google/protobuf/timestamp.proto"; import "teleport/decision/v1alpha1/tls_identity.proto"; +import "teleport/scopes/v1/scopes.proto"; import "teleport/trait/v1/trait.proto"; option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/decision/v1alpha1;decisionpb"; @@ -155,6 +156,17 @@ message SSHIdentity { // GitHubUsername indicates the GitHub username identified by the GitHub // connector. string github_username = 33; + + // JoinToken is the name of the join token used for bot joining. It is unset + // for other identity types, or for bots using the `token` join method. + string join_token = 34; + + // ScopePin is an optional pin that ties the certificate to a specific scope and set of scoped roles. When + // set, the Roles field must not be set. + teleport.scopes.v1.Pin scope_pin = 35; + + // The scope associated with a host identity. + string agent_scope = 36; } // CertExtensionMode specifies the type of extension to use in the cert. 
This type diff --git a/api/proto/teleport/decision/v1alpha1/tls_identity.proto b/api/proto/teleport/decision/v1alpha1/tls_identity.proto index 2bf71360d174d..e75c944e57f90 100644 --- a/api/proto/teleport/decision/v1alpha1/tls_identity.proto +++ b/api/proto/teleport/decision/v1alpha1/tls_identity.proto @@ -17,6 +17,7 @@ syntax = "proto3"; package teleport.decision.v1alpha1; import "google/protobuf/timestamp.proto"; +import "teleport/scopes/v1/scopes.proto"; import "teleport/trait/v1/trait.proto"; option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/decision/v1alpha1;decisionpb"; @@ -149,6 +150,15 @@ message TLSIdentity { // UserType indicates if the User was created by an SSO Provider or locally. string user_type = 35; + + // JoinToken is the name of the join token used when a bot joins; it does not + // apply to other identity types, or to bots using the traditional `token` + // join method. + string join_token = 36; + + // ScopePin is an optional pin that ties the certificate to a specific scope and set of scoped roles. When + // set, the Roles field must not be set. + teleport.scopes.v1.Pin scope_pin = 37; } // RouteToApp holds routing information for applications. diff --git a/api/proto/teleport/healthcheckconfig/v1/health_check_config.proto b/api/proto/teleport/healthcheckconfig/v1/health_check_config.proto index f28125bf74cc0..79c537b48f2dd 100644 --- a/api/proto/teleport/healthcheckconfig/v1/health_check_config.proto +++ b/api/proto/teleport/healthcheckconfig/v1/health_check_config.proto @@ -64,4 +64,13 @@ message Matcher { // empty value is ignored. The match result is logically ANDed with DBLabels, // if both are non-empty. string db_labels_expression = 2; + // KubernetesLabels matches Kubernetes labels. An empty value is ignored. The match + // result is logically ANDed with KubernetesLabelsExpression, if both are non-empty. 
+ repeated teleport.label.v1.Label kubernetes_labels = 3; + // KubernetesLabelsExpression is a label predicate expression to match Kubernetes. An + // empty value is ignored. The match result is logically ANDed with KubernetesLabels, + // if both are non-empty. + string kubernetes_labels_expression = 4; + // Disabled disables matches for all labels and expressions. + bool disabled = 5; } diff --git a/api/proto/teleport/identitycenter/v1/identitycenter_service.proto b/api/proto/teleport/identitycenter/v1/identitycenter_service.proto deleted file mode 100644 index 5230c53040f60..0000000000000 --- a/api/proto/teleport/identitycenter/v1/identitycenter_service.proto +++ /dev/null @@ -1,49 +0,0 @@ -// Copyright 2024 Gravitational, Inc. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -syntax = "proto3"; - -package teleport.identitycenter.v1; - -import "google/protobuf/empty.proto"; - -option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/identitycenter/v1;identitycenterv1"; - -// IdentityCenterService provides methods to manage Identity Center -// resources. -service IdentityCenterService { - // DeleteAllIdentityCenterAccounts deletes all Identity Center accounts. - rpc DeleteAllIdentityCenterAccounts(DeleteAllIdentityCenterAccountsRequest) returns (google.protobuf.Empty); - - // DeleteAllAccountAssignments deletes all Identity Center Account assignments. 
- rpc DeleteAllAccountAssignments(DeleteAllAccountAssignmentsRequest) returns (google.protobuf.Empty); - - // DeleteAllPrincipalAssignments deletes all Identity Center principal assignments. - rpc DeleteAllPrincipalAssignments(DeleteAllPrincipalAssignmentsRequest) returns (google.protobuf.Empty); - - // DeleteAllPermissionSets deletes all Identity Center permission sets. - rpc DeleteAllPermissionSets(DeleteAllPermissionSetsRequest) returns (google.protobuf.Empty); -} - -// DeleteAllIdentityCenterAccountsRequest is a request to delete all Identity Center imported accounts. -message DeleteAllIdentityCenterAccountsRequest {} - -// DeleteAllAccountAssignmentsRequest is a request to delete all Identity Center account assignments. -message DeleteAllAccountAssignmentsRequest {} - -// DeleteAllPrincipalAssignmentsRequest is a request to delete all Identity Center principal assignments. -message DeleteAllPrincipalAssignmentsRequest {} - -// DeleteAllPermissionSetsRequest is a request to delete all Identity Center permission sets. -message DeleteAllPermissionSetsRequest {} diff --git a/api/proto/teleport/integration/v1/awsra_service.proto b/api/proto/teleport/integration/v1/awsra_service.proto new file mode 100644 index 0000000000000..e15effa44e5f4 --- /dev/null +++ b/api/proto/teleport/integration/v1/awsra_service.proto @@ -0,0 +1,128 @@ +// Copyright 2025 Gravitational, Inc +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+
+syntax = "proto3";
+
+package teleport.integration.v1;
+
+option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/integration/v1;integrationv1";
+
+// AWSRolesAnywhereService provides access to AWS APIs using the AWS Roles Anywhere Integration.
+service AWSRolesAnywhereService {
+  // AWSRolesAnywherePing does a health check for the integration.
+  // Returns the caller identity and the number of AWS Roles Anywhere Profiles that are active.
+  // It uses the following APIs:
+  // https://docs.aws.amazon.com/STS/latest/APIReference/API_GetCallerIdentity.html
+  // https://docs.aws.amazon.com/rolesanywhere/latest/APIReference/API_ListProfiles.html
+  rpc AWSRolesAnywherePing(AWSRolesAnywherePingRequest) returns (AWSRolesAnywherePingResponse);
+
+  // ListRolesAnywhereProfiles lists the AWS Roles Anywhere Profiles that are configured in the integration.
+  // It uses the following APIs:
+  // https://docs.aws.amazon.com/rolesanywhere/latest/APIReference/API_ListProfiles.html
+  // https://docs.aws.amazon.com/rolesanywhere/latest/APIReference/API_ListTagsForResource.html
+  //
+  // The number of profiles returned is always between 0 and page_size.
+  // If the number of elements is 0, then there are no more profiles to return and the next page token is empty.
+  rpc ListRolesAnywhereProfiles(ListRolesAnywhereProfilesRequest) returns (ListRolesAnywhereProfilesResponse);
+}
+
+// AWSRolesAnywherePingRequest is a request for performing a health check against the configured integration.
+message AWSRolesAnywherePingRequest {
+  oneof mode {
+    // Use an integration to perform the Ping operation.
+    string integration = 1;
+    // Use a Trust Anchor, Profile, and Role to perform the Ping operation.
+    // This is useful when the integration is not configured.
+    AWSRolesAnywherePingRequestWithoutIntegration custom = 2;
+  }
+}
+
+// Identifies the Trust Anchor, Profile, and Role to use for the Ping operation.
+message AWSRolesAnywherePingRequestWithoutIntegration { + // The AWS Roles Anywhere Trust Anchor ARN to be used when generating the token. + string trust_anchor_arn = 1; + + // The AWS Roles Anywhere Profile ARN to be used when generating the token. + string profile_arn = 3; + + // The AWS Role ARN to be used when generating the token. + string role_arn = 4; +} + +// AWSRolesAnywherePingResponse contains the response for the Ping operation. +message AWSRolesAnywherePingResponse { + // The AWS account ID number of the account that owns or contains the calling entity. + string account_id = 1; + // The AWS ARN associated with the calling entity. + string arn = 2; + // The unique identifier of the calling entity. + string user_id = 3; + // The number of AWS Roles Anywhere Profiles that are active and have at least one associated Role. + int32 profile_count = 4; +} + +// ListRolesAnywhereProfilesRequest is a request to list AWS Roles Anywhere Profiles. +message ListRolesAnywhereProfilesRequest { + // Integration is the AWS Roles Anywhere Integration name. + string integration = 1; + // page_size is the max size of the page to request. + // Depending on the filters, the actual number of profiles returned may be less than this value. + int32 page_size = 2; + // next_page_token is the page token. + string next_page_token = 3; + // ProfileNameFilters is a list of filters applied to the profile name. + // Only matching profiles will be returned. + // If empty, no filtering is applied. + // + // Filters can be globs, for example: + // + // profile* + // *name* + // + // Or regexes if they're prefixed and suffixed with ^ and $, for example: + // + // ^profile.*$ + // ^.*name.*$ + repeated string profile_name_filters = 4; +} + +// ListRolesAnywhereProfilesResponse contains the response for the ListRolesAnywhereProfiles operation. +message ListRolesAnywhereProfilesResponse { + // Profiles is a list of AWS Roles Anywhere Profiles. 
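The profile_name_filters convention described above (a filter wrapped in `^...$` is a regex, anything else is a glob, an empty list matches everything) can be sketched in Go. This uses `path.Match` for glob semantics, which is an assumption; the real implementation may use a different glob matcher:

```go
package main

import (
	"fmt"
	"path"
	"regexp"
	"strings"
)

// matchProfileName applies one filter: treat it as a regular expression if
// it is wrapped in ^...$, otherwise as a glob.
func matchProfileName(filterStr, name string) (bool, error) {
	if strings.HasPrefix(filterStr, "^") && strings.HasSuffix(filterStr, "$") {
		return regexp.MatchString(filterStr, name)
	}
	return path.Match(filterStr, name)
}

// matchAny applies the any-of rule: an empty filter list matches every name.
func matchAny(filters []string, name string) (bool, error) {
	if len(filters) == 0 {
		return true, nil
	}
	for _, f := range filters {
		ok, err := matchProfileName(f, name)
		if err != nil || ok {
			return ok, err
		}
	}
	return false, nil
}

func main() {
	ok, _ := matchAny([]string{"profile*"}, "profile-dev")
	fmt.Println(ok) // true
	ok, _ = matchAny([]string{"^.*prod.*$"}, "profile-dev")
	fmt.Println(ok) // false
}
```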
+  repeated RolesAnywhereProfile profiles = 1;
+
+  // NextPageToken is used to paginate the results.
+  string next_page_token = 2;
+}
+
+// RolesAnywhereProfile represents an AWS Roles Anywhere Profile.
+message RolesAnywhereProfile {
+  // The AWS Roles Anywhere Profile ARN.
+  string arn = 1;
+
+  // Whether the AWS Roles Anywhere Profile is enabled.
+  bool enabled = 2;
+
+  // The name of the AWS Roles Anywhere Profile.
+  string name = 3;
+
+  // Whether the profile accepts role session names.
+  bool accept_role_session_name = 4;
+
+  // The tags associated with the AWS Roles Anywhere Profile.
+  map<string, string> tags = 5;
+
+  // The roles accessible from this AWS Roles Anywhere Profile.
+  repeated string roles = 6;
+}
diff --git a/api/proto/teleport/join/v1/joinservice.proto b/api/proto/teleport/join/v1/joinservice.proto
new file mode 100644
index 0000000000000..2bc620a2ac6ac
--- /dev/null
+++ b/api/proto/teleport/join/v1/joinservice.proto
@@ -0,0 +1,554 @@
+// Copyright 2025 Gravitational, Inc.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+syntax = "proto3";
+
+package teleport.join.v1;
+
+import "google/protobuf/timestamp.proto";
+
+option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/join/v1;joinv1";
+
+// ClientInit is the first message sent from the client during the join process; it
+// holds parameters common to all join methods.
+message ClientInit {
+  // JoinMethod is the name of the join method that the client is configured to use.
+  // This parameter is optional; the client can leave it empty to allow the
+  // server to determine the join method based on the provision token named by
+  // TokenName. The selected join method will be sent to the client in the
+  // ServerInit message.
+  optional string join_method = 1;
+  // TokenName is the name of the join token.
+  // This is a secret if using the token join method, otherwise it is a
+  // non-secret name of a provision token resource.
+  string token_name = 2;
+  // SystemRole is the system role requested, e.g. Proxy, Node, Instance, Bot.
+  string system_role = 3;
+  // ForwardedByProxy will be set to true when the message is forwarded by the
+  // Proxy service. When this is set the Auth service must ignore any
+  // credentials authenticating the request, except for the purpose of
+  // accepting ProxySuppliedParams.
+  bool forwarded_by_proxy = 4;
+
+  // ProxySuppliedParams holds parameters set by the Proxy when nodes join
+  // via the proxy address. They must only be trusted if the incoming join
+  // request is authenticated as the Proxy.
+  message ProxySuppliedParams {
+    // RemoteAddr is the remote address of the host requesting a host certificate.
+    // It replaces 0.0.0.0 in the list of additional principals.
+    string remote_addr = 1;
+    // ClientVersion is the Teleport version of the client attempting to join.
+    string client_version = 2;
+  }
+  optional ProxySuppliedParams proxy_supplied_parameters = 5;
+}
+
+// PublicKeys holds public keys sent by the client as the requested subject
+// keys for issued certificates.
+message PublicKeys {
+  // PublicTlsKey is the public key requested for the subject of the x509 certificate.
+  // It must be encoded in PKIX, ASN.1 DER form.
+  bytes public_tls_key = 1;
+  // PublicSshKey is the public key requested for the subject of the SSH certificate.
+  // It must be encoded in SSH wire format.
+  bytes public_ssh_key = 2;
+}
+
+// HostParams holds parameters required for host joining.
+message HostParams { + // PublicKeys holds the host public keys. + PublicKeys public_keys = 1; + // HostName is the user-friendly node name for the host. This comes from + // teleport.nodename in the service configuration and defaults to the + // hostname. It is encoded as a valid principal in issued certificates. + string host_name = 2; + // AdditionalPrincipals is a list of additional principals requested. + repeated string additional_principals = 3; + // DnsNames is a list of DNS names requested for inclusion in the x509 certificate. + repeated string dns_names = 4; +} + +// BotParams holds parameters required for bot joining. +message BotParams { + // PublicKeys holds the bot public keys. + PublicKeys public_keys = 1; + // Expires is a desired time of the expiry of the returned certificates. + optional google.protobuf.Timestamp expires = 2; +} + +// ClientParams holds either host or bot join parameters. +message ClientParams { + oneof payload { + HostParams host_params = 1; + BotParams bot_params = 2; + } +} + +// TokenInit is sent by the client in response to the ServerInit message for +// the Token join method. +// +// The Token method join flow is: +// 1. client->server: ClientInit +// 2. server->client: ServerInit +// 3. client->server: TokenInit +// 4. server->client: Result +message TokenInit { + // ClientParams holds parameters for the specific type of client trying to join. + ClientParams client_params = 1; +} + +// OIDCInit holds the OIDC identity token used for all OIDC-based join methods. +// +// The join flow for all OIDC-based join methods is: +// 1. client->server: ClientInit +// 2. server->client: ServerInit +// 3. client->server: OIDCInit +// 4. server->client: Result +message OIDCInit { + // ClientParams holds parameters for the specific type of client trying to join. + ClientParams client_params = 1; + // IdToken is the OIDC identity token. 
+ bytes id_token = 2; +} + +// BoundKeypairInit is sent from the client in response to the ServerInit +// message for the bound keypair join method. +// The server is expected to respond with a BoundKeypairChallenge. +// +// The bound keypair method join flow is: +// 1. client->server: ClientInit +// 2. server->client: ServerInit +// 3. client->server: BoundKeypairInit +// 4. server->client: BoundKeypairChallenge +// 5. client->server: BoundKeypairChallengeSolution +// (optional additional steps if keypair rotation is required) +// server->client: BoundKeypairRotationRequest +// client->server: BoundKeypairRotationResponse +// server->client: BoundKeypairChallenge +// client->server: BoundKeypairChallengeSolution +// 6. server->client: Result containing BoundKeypairResult +message BoundKeypairInit { + // ClientParams holds parameters for the specific type of client trying to join. + ClientParams client_params = 1; + // If set, attempts to bind a new keypair using an initial join secret. + // Any value set here will be ignored if a keypair is already bound. + string initial_join_secret = 2; + // A document signed by Auth containing join state parameters from the + // previous join attempt. Not required on initial join; required on all + // subsequent joins. + bytes previous_join_state = 3; +} + +// BoundKeypairChallenge is a challenge issued by the server that joining +// clients are expected to complete. +// The client is expected to respond with a BoundKeypairChallengeSolution. +message BoundKeypairChallenge { + // The desired public key corresponding to the private key that should be used + // to sign this challenge, in SSH authorized keys format. + bytes public_key = 1; + // A challenge to sign with the requested public key. During keypair rotation, + // a second challenge will be provided to verify the new keypair before certs + // are returned. 
+  string challenge = 2;
+}
+
+// BoundKeypairChallengeSolution is sent from the client in response to the
+// BoundKeypairChallenge.
+// The server is expected to respond with either a Result or a
+// BoundKeypairRotationRequest.
+message BoundKeypairChallengeSolution {
+  // A solution to a challenge from the server. This is generated by signing
+  // the challenge as a JWT using the keypair associated with the requested
+  // public key.
+  bytes solution = 1;
+}
+
+// BoundKeypairRotationRequest is sent by the server in response to a
+// BoundKeypairChallenge when a keypair rotation is required. It acts like an
+// additional challenge; the client is expected to respond with a
+// BoundKeypairRotationResponse.
+message BoundKeypairRotationRequest {
+  // The signature algorithm suite in use by the cluster.
+  string signature_algorithm_suite = 1;
+}
+
+// BoundKeypairRotationResponse is sent by the client in response to a
+// BoundKeypairRotationRequest from the server.
+// The server is expected to respond with an additional BoundKeypairChallenge
+// for the new key.
+message BoundKeypairRotationResponse {
+  // The public key to be registered with auth. Clients should expect a
+  // subsequent challenge against this public key to be sent. This is encoded in
+  // SSH authorized keys format.
+  bytes public_key = 1;
+}
+
+// BoundKeypairResult holds additional result parameters relevant to the bound
+// keypair join method.
+message BoundKeypairResult {
+  // A signed join state document to be provided on the next join attempt.
+  bytes join_state = 2;
+  // The public key registered with Auth at the end of the joining ceremony.
+  // After a successful keypair rotation, this should reflect the newly
+  // registered public key. This is encoded in SSH authorized keys format.
+  bytes public_key = 3;
+}
+
+// IAMInit is sent from the client in response to the ServerInit message for
+// the IAM join method.
+//
+// The IAM method join flow is:
+// 1. client->server: ClientInit
+// 2. server->client: ServerInit
+// 3. client->server: IAMInit
+// 4. server->client: IAMChallenge
+// 5. client->server: IAMChallengeSolution
+// 6. server->client: Result
+message IAMInit {
+  // ClientParams holds parameters for the specific type of client trying to join.
+  ClientParams client_params = 1;
+}
+
+// IAMChallenge is sent from the server in response to the IAMInit message from the client.
+// The client is expected to respond with an IAMChallengeSolution.
+message IAMChallenge {
+  // Challenge is a crypto-random string that should be included by the
+  // client in the IAMChallengeSolution message.
+  string challenge = 1;
+}
+
+// IAMChallengeSolution must be sent from the client in response to the
+// IAMChallenge message.
+message IAMChallengeSolution {
+  // STSIdentityRequest is a signed sts:GetCallerIdentity API request used
+  // to prove the AWS identity of a joining node. It must include the
+  // challenge string as a signed header.
+  bytes sts_identity_request = 1;
+}
+
+// EC2Init is sent from the client in response to the ServerInit message for
+// the EC2 join method.
+//
+// The EC2 method join flow is:
+// 1. client->server: ClientInit
+// 2. server->client: ServerInit
+// 3. client->server: EC2Init
+// 4. server->client: Result
+message EC2Init {
+  // ClientParams holds parameters for the specific type of client trying to join.
+  ClientParams client_params = 1;
+  // Document is a signed EC2 Instance Identity Document used to prove the
+  // identity of a joining EC2 instance.
+  bytes document = 2;
+}
+
+// OracleInit is sent from the client in response to the ServerInit message for
+// the Oracle join method.
+//
+// The Oracle method join flow is:
+// 1. client->server: ClientInit
+// 2. client<-server: ServerInit
+// 3. client->server: OracleInit
+// 4. client<-server: OracleChallenge
+// 5. client->server: OracleChallengeSolution
+// 6. client<-server: Result
+message OracleInit {
+  // ClientParams holds parameters for the specific type of client trying to join.
+  ClientParams client_params = 1;
+}
+
+// OracleChallenge is sent from the server in response to the OracleInit message from the client.
+// The client is expected to respond with an OracleChallengeSolution.
+message OracleChallenge {
+  // Challenge is a crypto-random string that should be included by the
+  // client in the OracleChallengeSolution message.
+  string challenge = 1;
+}
+
+// OracleChallengeSolution must be sent from the client in response to the
+// OracleChallenge message.
+message OracleChallengeSolution {
+  // Cert is the OCI instance identity certificate, an X509 certificate in PEM format.
+  bytes cert = 1;
+  // Intermediate encodes the intermediate CAs that issued the instance
+  // identity certificate, in PEM format.
+  bytes intermediate = 2;
+  // Signature is a signature over the challenge, signed by the private key
+  // matching the instance identity certificate.
+  bytes signature = 3;
+  // SignedRootCaReq is a signed request to the Oracle API for retrieving the
+  // root CAs that issued the instance identity certificate.
+  bytes signed_root_ca_req = 4;
+}
+
+// TPMInit is the message sent from the client in response to the ServerInit
+// message for the TPM join flow.
+// The server is expected to respond with a TPMEncryptedCredential message.
+//
+// The TPM method join flow is:
+// 1. client->server: ClientInit
+// 2. client<-server: ServerInit
+// 3. client->server: TPMInit
+// 4. client<-server: TPMEncryptedCredential
+// 5. client->server: TPMSolution
+// 6. client<-server: Result
+message TPMInit {
+  // ClientParams holds parameters for the specific type of client trying to join.
+  ClientParams client_params = 1;
+  // The encoded TPMT_PUBLIC structure containing the attestation public key
+  // and signing parameters.
+ bytes public = 2; + // The properties of the attestation key, encoded as a TPMS_CREATION_DATA + // structure. + bytes create_data = 3; + // An assertion as to the details of the key, encoded as a TPMS_ATTEST + // structure. + bytes create_attestation = 4; + // A signature of create_attestation, encoded as a TPMT_SIGNATURE structure. + bytes create_signature = 5; + oneof ek { + // The device's endorsement certificate in X509, ASN.1 DER form. This + // certificate contains the public key of the endorsement key. This is + // preferred to ek_key. + bytes ek_cert = 6; + // The device's public endorsement key in PKIX, ASN.1 DER form. This is + // used when a TPM does not contain any endorsement certificates. + bytes ek_key = 7; + } +} + +// TPMEncryptedCredential is the message sent from the server in response to the +// TPMInit message. +// The client is expected to respond with a TPMSolution message. +message TPMEncryptedCredential { + // The `credential_blob` parameter to be used with the `ActivateCredential` + // command. This is used with the decrypted value of `secret` in a + // cryptographic process to decrypt the solution. + bytes credential_blob = 1; + // The `secret` parameter to be used with `ActivateCredential`. This is a + // seed which can be decrypted with the EK. The decrypted seed is then used + // when decrypting `credential_blob`. + bytes secret = 2; +} + +// TPMSolution is the message sent from the client in response to the +// TPMEncryptedCredential message. The server is expected to respond with a +// Result message. +message TPMSolution { + // The client's solution to TPMEncryptedCredential using ActivateCredential. + bytes solution = 1; +} + +// AzureInit is sent from the client in response to the ServerInit message for +// the Azure join method. +// +// The Azure method join flow is: +// 1. client->server: ClientInit +// 2. client<-server: ServerInit +// 3. client->server: AzureInit +// 4. client<-server: AzureChallenge +// 5. 
client->server: AzureChallengeSolution
+// 6. client<-server: Result
+message AzureInit {
+  // ClientParams holds parameters for the specific type of client trying to join.
+  ClientParams client_params = 1;
+}
+
+// AzureChallenge is sent from the server in response to the AzureInit message from the client.
+// The client is expected to respond with an AzureChallengeSolution.
+message AzureChallenge {
+  // Challenge is a crypto-random string that should be included by the
+  // client in the challenge response message.
+  string challenge = 1;
+}
+
+// AzureChallengeSolution must be sent from the client in response to the
+// AzureChallenge message.
+message AzureChallengeSolution {
+  // AttestedData is a signed JSON document from an Azure VM's attested data
+  // metadata endpoint used to prove the identity of a joining node. It must
+  // include the challenge string as the nonce.
+  bytes attested_data = 1;
+  // Intermediate encodes the intermediate CAs that issued the leaf certificate
+  // used to sign the attested data document, in x509 DER format.
+  bytes intermediate = 2;
+  // AccessToken is a JWT signed by Azure, used to prove the identity of a
+  // joining node.
+  string access_token = 3;
+}
+
+// ChallengeSolution holds a solution to a challenge issued by the server.
+message ChallengeSolution {
+  oneof payload {
+    BoundKeypairChallengeSolution bound_keypair_challenge_solution = 1;
+    BoundKeypairRotationResponse bound_keypair_rotation_response = 2;
+    IAMChallengeSolution iam_challenge_solution = 3;
+    OracleChallengeSolution oracle_challenge_solution = 4;
+    TPMSolution tpm_solution = 5;
+    AzureChallengeSolution azure_challenge_solution = 6;
+  }
+}
+
+// GivingUp should be sent by clients that fail to complete the join flow so
+// that the Auth service can log an informative error message.
+message GivingUp {
+  // Reason is the reason the client is giving up.
+  enum Reason {
+    // REASON_UNSPECIFIED is an unspecified reason.
+ REASON_UNSPECIFIED = 0; + // REASON_UNSUPPORTED_JOIN_METHOD means the client does not support the + // join method sent by the server. + REASON_UNSUPPORTED_JOIN_METHOD = 1; + // REASON_UNSUPPORTED_MESSAGE_TYPE means the client can not handle a + // message type sent by the server. + REASON_UNSUPPORTED_MESSAGE_TYPE = 2; + // REASON_CHALLENGE_SOLUTION_FAILED means the client failed to solve a + // challenge sent by the server. + REASON_CHALLENGE_SOLUTION_FAILED = 3; + } + // Reason is the reason the client is giving up. + Reason reason = 1; + // Msg is an error message related to the failure. + string msg = 2; +} + +// JoinRequest is the message type sent from the joining client to the server. +message JoinRequest { + oneof payload { + ClientInit client_init = 1; + TokenInit token_init = 2; + BoundKeypairInit bound_keypair_init = 3; + ChallengeSolution solution = 4; + IAMInit iam_init = 5; + GivingUp giving_up = 6; + EC2Init ec2_init = 7; + OIDCInit oidc_init = 8; + OracleInit oracle_init = 9; + TPMInit tpm_init = 10; + AzureInit azure_init = 11; + } +} + +// ServerInit is the first message sent from the server in response to the +// ClientInit message. +message ServerInit { + // JoinMethod is the name of the selected join method. + string join_method = 1; + // SignatureAlgorithmSuite is the name of the signature algorithm suite + // currently configured for the cluster. + string signature_algorithm_suite = 2; +} + +// Challenge is a challenge message sent from the server that the client must solve. 
+message Challenge {
+  oneof payload {
+    BoundKeypairChallenge bound_keypair_challenge = 1;
+    BoundKeypairRotationRequest bound_keypair_rotation_request = 2;
+    IAMChallenge iam_challenge = 3;
+    OracleChallenge oracle_challenge = 4;
+    TPMEncryptedCredential tpm_encrypted_credential = 5;
+    AzureChallenge azure_challenge = 6;
+  }
+}
+
+// Result is the final message sent from the cluster back to the client; it
+// contains the result of the joining process including the assigned host ID
+// and issued certificates.
+message Result {
+  oneof payload {
+    HostResult host_result = 1;
+    BotResult bot_result = 2;
+  }
+}
+
+// Certificates holds issued certificates and cluster CAs.
+message Certificates {
+  // TlsCert is an X.509 certificate encoded in ASN.1 DER form.
+  bytes tls_cert = 1;
+  // TlsCaCerts is a list of TLS certificate authorities that the client should trust.
+  // Each certificate is encoded in ASN.1 DER form.
+  repeated bytes tls_ca_certs = 2;
+  // SshCert is an SSH certificate encoded in SSH wire format.
+  bytes ssh_cert = 3;
+  // SshCaKeys is a list of SSH certificate authority public keys that the client should trust.
+  // Each CA key is encoded in SSH wire format.
+  repeated bytes ssh_ca_keys = 4;
+}
+
+// HostResult holds results for host joining.
+message HostResult {
+  // Certificates holds issued certificates and cluster CAs.
+  Certificates certificates = 1;
+  // HostId is the unique ID assigned to the host.
+  string host_id = 2;
+}
+
+// BotResult holds results for bot joining.
+message BotResult {
+  // Certificates holds issued certificates and cluster CAs.
+  Certificates certificates = 1;
+  // BoundKeypairResult holds extra result parameters relevant to the bound keypair join method.
+  optional BoundKeypairResult bound_keypair_result = 2;
+}
+
+// JoinResponse is the message type sent from the server to the joining client.
+message JoinResponse {
+  oneof payload {
+    // Init is the initial message sent from the server in response to the
+    // ClientInit message. It specifies the join method used by the provision token.
+    ServerInit init = 1;
+    // Challenge is a challenge issued by the server that the client must solve
+    // in order to complete the join flow. The challenge type depends on the join method.
+    // Each method may issue zero or more challenges that the client must solve.
+    Challenge challenge = 2;
+    // Result is the result of the join flow; it is the final message sent from
+    // the cluster when the join flow is successful.
+    // For the token join method, it is sent immediately in response to the ClientInit request.
+    Result result = 3;
+  }
+}
+
+// JoinService provides methods which allow Teleport nodes, proxies, and other
+// services to "join" the Teleport cluster by completing a supported join flow
+// in order to receive signed certificates issued by the cluster.
+//
+// It may be used in multiple cases:
+// * Teleport agents joining the cluster on their first start to receive their
+//   initial certificates. These requests do not use mTLS and the client
+//   authenticates itself using only the join flow and is assigned a new host
+//   ID.
+// * Teleport agents that need certificates authenticated for an additional
+//   system role allowed by a new provision token. These requests must be
+//   authenticated with mTLS using their existing certificates so that the
+//   existing host ID can be maintained.
+// * MachineID bots fetching their initial certificates.
+// * MachineID bots refreshing their certificates.
+//
+// It is implemented on both the Auth and Proxy servers to serve the needs of
+// * clients connecting to the proxy address for their initial join when they are
+//   unauthenticated and unable to directly dial the auth service.
+// * clients connecting to the auth address for their initial join.
+// * clients refreshing existing certificates that are able to make an
+//   authenticated dial to the auth service via proxy TLS routing.
+service JoinService {
+  // Join is a bidirectional streaming RPC that implements all join methods.
+  // The client does not need to know the join method ahead of time; all it
+  // needs is the token name.
+  //
+  // The client must send a ClientInit message on the JoinRequest stream to
+  // initiate the join flow.
+  //
+  // The server will reply with a ServerInit message, and subsequent messages
+  // on the stream will depend on the join method.
+  rpc Join(stream JoinRequest) returns (stream JoinResponse);
+}
diff --git a/api/proto/teleport/legacy/client/proto/authservice.proto b/api/proto/teleport/legacy/client/proto/authservice.proto
index 3b060ae25c50c..9f45e8b2be07e 100644
--- a/api/proto/teleport/legacy/client/proto/authservice.proto
+++ b/api/proto/teleport/legacy/client/proto/authservice.proto
@@ -23,6 +23,7 @@ import "teleport/attestation/v1/attestation.proto";
 import "teleport/legacy/client/proto/certs.proto";
 import "teleport/legacy/client/proto/event.proto";
 import "teleport/legacy/client/proto/inventory.proto";
+import "teleport/legacy/client/proto/requestable_roles.proto";
 import "teleport/legacy/types/events/events.proto";
 import "teleport/legacy/types/types.proto";
 import "teleport/legacy/types/webauthn/webauthn.proto";
@@ -242,6 +243,10 @@ message UserCertsRequest {
     // tunnel. Requests from this requester allows reuse of the MFA session
     // response but TTL is limited to single use TTL.
     TSH_DB_EXEC = 5;
+    // TSH_APP_AWS_CREDENTIALPROCESS is set when tsh provides access to an AWS App which uses client-side credentials.
+    // When using per-session MFA, this ensures the TTL of the certificate (and thus the AWS session) is the same as the Teleport identity session.
+    // AWS credentials should not be written to disk when this requester is used, but may be exported as env variables through stdout.
+ TSH_APP_AWS_CREDENTIALPROCESS = 6; } // RequesterName identifies who sent the request. Requester RequesterName = 17 [(gogoproto.jsontag) = "requester_name"]; @@ -371,6 +376,24 @@ message ChangePasswordRequest { webauthn.CredentialAssertionResponse Webauthn = 5 [(gogoproto.jsontag) = "webauthn"]; } +message ListSemaphoresRequest { + // The maximum number of items to return. + // The server may impose a different page size at its discretion. + int32 page_size = 1; + // The next_page_token value returned from a previous List request, if any. + string page_token = 2; + // filter encodes semaphore filtering params. + types.SemaphoreFilter filter = 3; +} + +message ListSemaphoresResponse { + // a list of semaphores. + repeated types.SemaphoreV3 semaphores = 1; + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. + string next_page_token = 2; +} + // PluginDataSeq is a sequence of plugin data. message PluginDataSeq { repeated types.PluginDataV3 PluginData = 1 [(gogoproto.jsontag) = "plugin_data"]; @@ -432,6 +455,24 @@ message CreateResetPasswordTokenRequest { ]; } +// ListResetPasswordTokenRequest is a request for a page of user tokens. +message ListResetPasswordTokenRequest { + // PageSize is the maximum amount of resources to retrieve. + int32 page_size = 1; + // StartKey is used to start listing resources from a specific spot. It + // should be set to the previous NextKey value if using pagination, or + // left empty. + string page_token = 2; +} + +// ListResetPasswordTokenResponse contains a page of user tokens. +message ListResetPasswordTokenResponse { + // UserTokens is a list of user tokens. + repeated types.UserTokenV3 user_tokens = 1; + // NextKey is the key for the next page of user tokens. + string next_page_token = 2; +} + // RenewableCertsRequest is a request to generate a first set of renewable // certificates from a bot join token. 
 message RenewableCertsRequest {
@@ -597,6 +638,8 @@ message Features {
   reserved "CloudAnonymizationKey";
   // AccessGraphDemoMode enables the ability to opt-in to a demo mode of Access Graph with limited features.
   bool AccessGraphDemoMode = 38 [(gogoproto.jsontag) = "access_graph_demo_mode,omitempty"];
+  // ClientIPRestrictions allows Cloud users to set up a client IP allowlist.
+  bool ClientIPRestrictions = 39 [(gogoproto.jsontag) = "client_ip_restrictions,omitempty"];
 }

 // EntitlementInfo is the state and limits of a particular entitlement
@@ -976,6 +1019,24 @@ message GetWebTokensResponse {
   repeated types.WebTokenV3 Tokens = 1 [(gogoproto.jsontag) = "tokens"];
 }

+// ListWebTokensRequest is a request for a page of web tokens.
+message ListWebTokensRequest {
+  // The maximum number of items to return.
+  // The server may impose a different page size at its discretion.
+  int32 page_size = 1;
+  // The next_page_token value returned from a previous List request, if any.
+  string page_token = 2;
+}
+
+// ListWebTokensResponse contains a page of web tokens.
+message ListWebTokensResponse {
+  // Tokens is a list of web tokens.
+  repeated types.WebTokenV3 tokens = 1;
+  // Token to retrieve the next page of results, or empty if there are no
+  // more results in the list.
+  string next_page_token = 2;
+}
+
 // UpsertKubernetesServerRequest are the parameters used to add or update a
 // kubernetes server.
 message UpsertKubernetesServerRequest {
@@ -1089,7 +1150,14 @@ message DatabaseCertRequest {
   Extensions CertificateExtensions = 6 [(gogoproto.jsontag) = "certificate_extensions"];

   // CRLEndpoint is a certificate revocation list distribution point. Required for Windows smartcard certs.
-  string CRLEndpoint = 7 [(gogoproto.jsontag) = "crl_endpoint"];
+  // DEPRECATED: use CRLDomain instead.
+  string CRLEndpoint = 7 [
+    (gogoproto.jsontag) = "crl_endpoint",
+    deprecated = true
+  ];
+
+  // CRLDomain is the Active Directory domain where CRLs are published.
+  string CRLDomain = 8 [(gogoproto.jsontag) = "crl_domain"];
 }

 // DatabaseCertResponse contains the signed certificate.
@@ -1526,6 +1594,8 @@ message GetEventsRequest {
   // Order specifies an ascending or descending order of events.
   // A value of 0 means a descending order and a value of 1 means an ascending order.
   Order Order = 7;
+  // Search is an optional search term to filter events by (case-insensitive substring match).
+  string Search = 8;
 }

 message GetSessionEventsRequest {
@@ -1572,6 +1642,26 @@ message GetLocksResponse {
   repeated types.LockV2 Locks = 1;
 }

+// ListLocksRequest is a request for a page of locks.
+message ListLocksRequest {
+  // The maximum number of items to return.
+  // The server may impose a different page size at its discretion.
+  int32 page_size = 1;
+  // The next_page_token value returned from a previous List request, if any.
+  string page_token = 2;
+  // Filter specifies lock specific filters.
+  types.LockFilter filter = 3;
+}
+
+// ListLocksResponse contains a page of locks.
+message ListLocksResponse {
+  // Locks is a list of locks.
+  repeated types.LockV2 locks = 1;
+  // Token to retrieve the next page of results, or empty if there are no
+  // more results in the list.
+  string next_page_token = 2;
+}
+
 message GetLockRequest {
   // Name is the name of the lock to get.
   string Name = 1;
@@ -1589,6 +1679,40 @@ message ReplaceRemoteLocksRequest {
   repeated types.LockV2 Locks = 2;
 }

+// ListAppsRequest is a request for a page of registered applications.
+message ListAppsRequest {
+  // Limit is the maximum amount of resources to retrieve.
+  int32 limit = 1;
+  // StartKey is used to start listing resources from a specific spot. It
+  // should be set to the previous NextKey value if using pagination, or
+  // left empty.
+  string start_key = 2;
+}
+
+// ListAppsResponse contains a page of registered applications.
+message ListAppsResponse {
+  // Applications is a list of applications.
+  repeated types.AppV3 applications = 1;
+  // NextKey is the key for the next page of applications.
+  string next_key = 2;
+}
+
+message ListDatabasesRequest {
+  // The maximum number of items to return.
+  // The server may impose a different page size at its discretion.
+  int32 page_size = 1;
+  // The next_page_token value returned from a previous List request, if any.
+  string page_token = 2;
+}
+
+message ListDatabasesResponse {
+  // a list of databases.
+  repeated types.DatabaseV3 databases = 1;
+  // Token to retrieve the next page of results, or empty if there are no
+  // more results in the list.
+  string next_page_token = 2;
+}
+
 // GetWindowsDesktopServicesResponse contains all registered Windows desktop services.
 message GetWindowsDesktopServicesResponse {
   // Services is a list of Windows desktop services.
@@ -1613,6 +1737,35 @@ message DeleteWindowsDesktopServiceRequest {
   string Name = 1 [(gogoproto.jsontag) = "name"];
 }

+// ListWindowsDesktopsRequest is a request for a page of registered Windows desktop hosts.
+message ListWindowsDesktopsRequest {
+  // Limit is the maximum amount of resources to retrieve.
+  int32 limit = 1;
+  // StartKey is used to start listing resources from a specific spot. It
+  // should be set to the previous NextKey value if using pagination, or
+  // left empty.
+  string start_key = 2;
+  // Labels is a label-based matcher if non-empty.
+  map<string, string> labels = 3;
+  // PredicateExpression defines boolean conditions that will be matched against the resource.
+  string predicate_expression = 4;
+  // SearchKeywords is a list of search keywords to match against resource field values.
+  repeated string search_keywords = 5;
+  // WindowsDesktopFilter specifies Windows desktop specific filters.
+  types.WindowsDesktopFilter windows_desktop_filter = 6 [
+    (gogoproto.nullable) = false,
+    (gogoproto.jsontag) = "windows_desktop_filter,omitempty"
+  ];
+}
+
+// ListWindowsDesktopsResponse contains a page of registered Windows desktop hosts.
+message ListWindowsDesktopsResponse { + // Desktops is a list of Windows desktop hosts. + repeated types.WindowsDesktopV3 desktops = 1; + // NextKey is the key for the next page of desktops. + string next_key = 2; +} + // GetWindowsDesktopsResponse contains all registered Windows desktop hosts. message GetWindowsDesktopsResponse { // Servers is a list of Windows desktop hosts. @@ -1635,9 +1788,12 @@ message WindowsDesktopCertRequest { // CSR is the request to sign in PEM format. bytes CSR = 1; // CRLEndpoint is the address of the CRL for this certificate. - string CRLEndpoint = 2; + // DEPRECATED: use CRLDomain instead. + string CRLEndpoint = 2 [deprecated = true]; // TTL is the certificate validity period. int64 TTL = 3 [(gogoproto.casttype) = "Duration"]; + // CRLDomain is the Active Directory domain where CRLs are published. + string CRLDomain = 4; } // WindowsDesktopCertResponse contains the signed Windows RDP certificate. @@ -2006,6 +2162,22 @@ message IdentityCenterAccountAssignment { IdentityCenterPermissionSet PermissionSet = 7; } +message ListInstallersRequest { + // The maximum number of items to return. + // The server may impose a different page size at its discretion. + int32 page_size = 1; + // The next_page_token value returned from a previous List request, if any. + string page_token = 2; +} + +message ListInstallersResponse { + // Installers is a list of installer resources. + repeated types.InstallerV1 installers = 1; + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. + string next_page_token = 2; +} + // PaginatedResource represents one of the supported resources. message PaginatedResource { // Resource is the resource itself. @@ -2331,6 +2503,24 @@ message GetGithubAuthRequestRequest { string StateToken = 1 [(gogoproto.jsontag) = "state_token"]; } +message ListOIDCConnectorsRequest { + // The maximum number of items to return. 
+ // The server may impose a different page size at its discretion. + int32 page_size = 1; + // The next_page_token value returned from a previous List request, if any. + string page_token = 2; + // WithSecrets specifies whether to load associated secrets. + bool with_secrets = 3; +} + +message ListOIDCConnectorsResponse { + // Connectors is a list of OIDC connectors. + repeated types.OIDCConnectorV3 connectors = 1; + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. + string next_page_token = 2; +} + // CreateOIDCConnectorRequest is a request for CreateOIDCConnector. message CreateOIDCConnectorRequest { // Connector to be created. @@ -2367,6 +2557,45 @@ message UpsertSAMLConnectorRequest { types.SAMLConnectorV2 Connector = 1 [(gogoproto.jsontag) = "connector"]; } +message ListSAMLConnectorsRequest { + // The maximum number of items to return. + // The server may impose a different page size at its discretion. + int32 page_size = 1; + // The next_page_token value returned from a previous List request, if any. + string page_token = 2; + // WithSecrets specifies whether to load associated secrets. + bool with_secrets = 3; + // NoFollowURLs specifies whether to skip following URLs when + // validating SAML connector resources. + bool no_follow_urls = 4; +} + +message ListSAMLConnectorsResponse { + // Connectors is a list of SAML connectors. + repeated types.SAMLConnectorV2 connectors = 1; + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. + string next_page_token = 2; +} + +message ListGithubConnectorsRequest { + // The maximum number of items to return. + // The server may impose a different page size at its discretion. + int32 page_size = 1; + // The next_page_token value returned from a previous List request, if any. + string page_token = 2; + // WithSecrets specifies whether to load associated secrets. 
+ bool with_secrets = 3; +} + +message ListGithubConnectorsResponse { + // Connectors is a list of Github connectors. + repeated types.GithubConnectorV3 connectors = 1; + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. + string next_page_token = 2; +} + // CreateGithubConnectorRequest is a request for CreateGithubConnector. message CreateGithubConnectorRequest { // Connector to be created. @@ -2639,6 +2868,67 @@ message AccessRequestAllowedPromotionResponse { types.AccessRequestAllowedPromotions allowedPromotions = 1; } +// ListProvisionTokensRequest is used to retrieve a paginated list of provision tokens. +message ListProvisionTokensRequest { + // Limit is the maximum amount of items per page. + int32 Limit = 1; + + // StartKey is used to resume a query in order to enable pagination. + // If the previous response had NextKey set then this should be + // set to its value. Otherwise leave empty. + string StartKey = 2; + + // FilterRoles allows filtering for tokens with the provided roles. Tokens + // with ANY of the provided roles are returned. + repeated string FilterRoles = 3; + + // FilterBotName allows filtering for tokens associated with the + // named bot. In addition, only items with a role of Bot are returned. + string FilterBotName = 4; +} + +// ListProvisionTokensResponse is used to retrieve a paginated list of provision tokens. +message ListProvisionTokensResponse { + // Tokens is the list of tokens. + repeated types.ProvisionTokenV2 Tokens = 1; + + // NextKey is used to resume a query in order to enable pagination. + // Leave empty to start at the beginning. + string NextKey = 2; +} + +message ListKubernetesClustersRequest { + // The maximum number of items to return. + // The server may impose a different page size at its discretion. + int32 page_size = 1; + // The next_page_token value returned from a previous List request, if any. 
+ string page_token = 2; +} + +message ListKubernetesClustersResponse { + // KubernetesClusters is a list of kubernetes cluster resources. + repeated types.KubernetesClusterV3 kubernetes_clusters = 1; + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. + string next_page_token = 2; +} + +message ListSnowflakeSessionsRequest { + // The maximum number of items to return. + // The server may impose a different page size at its discretion. + int32 page_size = 1; + // The next_page_token value returned from a previous List request, if any. + string page_token = 2; +} + +message ListSnowflakeSessionsResponse { + // Sessions is a list of Snowflake web sessions. + repeated types.WebSessionV2 sessions = 1; + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. + string next_page_token = 2; +} + // AuthService is authentication/authorization service implementation service AuthService { // InventoryControlStream is the per-instance stream used to advertise teleport instance @@ -2739,6 +3029,8 @@ service AuthService { rpc SubmitAccessReview(types.AccessReviewSubmission) returns (types.AccessRequestV3); // GetAccessCapabilities requests the access capabilities of a user. rpc GetAccessCapabilities(types.AccessCapabilitiesRequest) returns (types.AccessCapabilities); + // GetRemoteAccessCapabilities requests the access capabilities for a user from a remote cluster + rpc GetRemoteAccessCapabilities(types.RemoteAccessCapabilitiesRequest) returns (types.RemoteAccessCapabilities); // GetAccessRequestAllowedPromotions returns a list of allowed promotions from an access request to an access list. rpc GetAccessRequestAllowedPromotions(AccessRequestAllowedPromotionRequest) returns (AccessRequestAllowedPromotionResponse); @@ -2760,6 +3052,9 @@ service AuthService { // Only local users may be reset. 
rpc CreateResetPasswordToken(CreateResetPasswordTokenRequest) returns (types.UserTokenV3); + // ListResetPasswordTokens returns a page of user tokens. + rpc ListResetPasswordTokens(ListResetPasswordTokenRequest) returns (ListResetPasswordTokenResponse); + // GetUser gets a user resource by name. // // Deprecated: Use [teleport.users.v1.UsersService] instead. @@ -2812,6 +3107,8 @@ service AuthService { rpc CancelSemaphoreLease(types.SemaphoreLease) returns (google.protobuf.Empty); // GetSemaphores returns a list of all semaphores matching the supplied filter. rpc GetSemaphores(types.SemaphoreFilter) returns (Semaphores); + // ListSemaphores returns a page of all semaphores matching the supplied filter. + rpc ListSemaphores(ListSemaphoresRequest) returns (ListSemaphoresResponse); // DeleteSemaphore deletes a semaphore matching the supplied filter. rpc DeleteSemaphore(types.SemaphoreFilter) returns (google.protobuf.Empty); @@ -2851,6 +3148,9 @@ service AuthService { rpc GetSnowflakeSession(GetSnowflakeSessionRequest) returns (GetSnowflakeSessionResponse); // GetSnowflakeSessions gets all Snowflake web sessions. rpc GetSnowflakeSessions(google.protobuf.Empty) returns (GetSnowflakeSessionsResponse); + // ListSnowflakeSessions returns a page of Snowflake web sessions. + rpc ListSnowflakeSessions(ListSnowflakeSessionsRequest) returns (ListSnowflakeSessionsResponse); + // DeleteSnowflakeSession removes a Snowflake web session. rpc DeleteSnowflakeSession(DeleteSnowflakeSessionRequest) returns (google.protobuf.Empty); // DeleteAllSnowflakeSessions removes all Snowflake web sessions. @@ -2898,6 +3198,8 @@ service AuthService { rpc GetWebToken(types.GetWebTokenRequest) returns (GetWebTokenResponse); // GetWebTokens gets all web tokens. rpc GetWebTokens(google.protobuf.Empty) returns (GetWebTokensResponse); + // ListWebTokens returns a page of web tokens. + rpc ListWebTokens(ListWebTokensRequest) returns (ListWebTokensResponse); // DeleteWebToken deletes a web token. 
rpc DeleteWebToken(types.DeleteWebTokenRequest) returns (google.protobuf.Empty); // DeleteAllWebTokens deletes all web tokens. @@ -2942,6 +3244,8 @@ service AuthService { rpc GetRole(GetRoleRequest) returns (types.RoleV6); // ListRoles is a paginated role getter. rpc ListRoles(ListRolesRequest) returns (ListRolesResponse); + // ListRequestableRoles is a paginated requestable role getter. + rpc ListRequestableRoles(ListRequestableRolesRequest) returns (ListRequestableRolesResponse); // CreateRole creates a new role. rpc CreateRole(CreateRoleRequest) returns (types.RoleV6); // UpdateRole updates an existing role. @@ -3017,6 +3321,8 @@ service AuthService { rpc GetOIDCConnector(types.ResourceWithSecretsRequest) returns (types.OIDCConnectorV3); // GetOIDCConnectors gets all current OIDC connector resources. rpc GetOIDCConnectors(types.ResourcesWithSecretsRequest) returns (types.OIDCConnectorV3List); + // ListOIDCConnectors returns a page of current OIDC connector resources. + rpc ListOIDCConnectors(ListOIDCConnectorsRequest) returns (ListOIDCConnectorsResponse); // UpsertOIDCConnector creates a new OIDC connector in the backend. rpc CreateOIDCConnector(CreateOIDCConnectorRequest) returns (types.OIDCConnectorV3); // UpsertOIDCConnector updates an existing OIDC connector in the backend. @@ -3040,6 +3346,8 @@ service AuthService { rpc GetSAMLConnector(types.ResourceWithSecretsRequest) returns (types.SAMLConnectorV2); // GetSAMLConnectors gets all current SAML connector resources. rpc GetSAMLConnectors(types.ResourcesWithSecretsRequest) returns (types.SAMLConnectorV2List); + // ListSAMLConnectors returns a page of current SAML connector resources. + rpc ListSAMLConnectors(ListSAMLConnectorsRequest) returns (ListSAMLConnectorsResponse); // CreateSAMLConnector creates a new SAML connector in the backend. rpc CreateSAMLConnector(CreateSAMLConnectorRequest) returns (types.SAMLConnectorV2); // UpdateSAMLConnector updates an existing SAML connector in the backend. 
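The List RPCs added in these hunks all share one wire shape: a `page_size`/`page_token` request and a `next_page_token` response that is empty on the last page. A minimal Go sketch of draining such an endpoint; the `listPage` function type and the fake backend are illustrative stand-ins, not the generated Teleport client:

```go
package main

import "fmt"

// listPage mimics one List RPC call of the shape above: it takes a
// page size and token, and returns a page of items plus the token
// for the next page (empty when there are no more results).
type listPage func(pageSize int32, pageToken string) (items []string, nextPageToken string)

// drain follows next_page_token until the server returns an empty
// token, accumulating every item across pages.
func drain(list listPage, pageSize int32) []string {
	var all []string
	token := ""
	for {
		items, next := list(pageSize, token)
		all = append(all, items...)
		if next == "" {
			return all
		}
		token = next
	}
}

func main() {
	// Fake backend with five connectors served two at a time.
	data := []string{"okta", "auth0", "google", "github", "azure"}
	list := func(pageSize int32, pageToken string) ([]string, string) {
		start := 0
		if pageToken != "" {
			fmt.Sscanf(pageToken, "%d", &start)
		}
		end := start + int(pageSize)
		if end >= len(data) {
			return data[start:], "" // last page: empty token
		}
		return data[start:end], fmt.Sprintf("%d", end)
	}
	fmt.Println(drain(list, 2)) // walks all three pages in order
}
```

Note the response comment's contract ("empty if there are no more results") is what terminates the loop; a client that instead stops on a short page would miss data if the server imposes a smaller page size at its discretion, which the request comments explicitly permit.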
@@ -3063,6 +3371,8 @@ service AuthService { rpc GetGithubConnector(types.ResourceWithSecretsRequest) returns (types.GithubConnectorV3); // GetGithubConnectors gets all current Github connector resources. rpc GetGithubConnectors(types.ResourcesWithSecretsRequest) returns (types.GithubConnectorV3List); + // ListGithubConnectors returns a page of current Github connector resources. + rpc ListGithubConnectors(ListGithubConnectorsRequest) returns (ListGithubConnectorsResponse); // CreateGithubConnector creates a new Github connector in the backend. rpc CreateGithubConnector(CreateGithubConnectorRequest) returns (types.GithubConnectorV3); // UpdateGithubConnector updates an existing Github connector in the backend. @@ -3112,7 +3422,15 @@ service AuthService { // GetToken retrieves a token described by the given request. rpc GetToken(types.ResourceRequest) returns (types.ProvisionTokenV2); // GetToken retrieves all tokens. - rpc GetTokens(google.protobuf.Empty) returns (types.ProvisionTokenV2List); + // Deprecated: Use [ListProvisionTokens], [GetStaticTokens], and [ListResetPasswordTokens] instead. + // TODO(hugoShaka): DELETE IN 21.0.0 + rpc GetTokens(google.protobuf.Empty) returns (types.ProvisionTokenV2List) { + option deprecated = true; + } + // GetStaticTokens retrieves all static tokens. + rpc GetStaticTokens(google.protobuf.Empty) returns (types.StaticTokensV2); + // ListProvisionTokens retrieves a paginated list of filtered provision tokens. + rpc ListProvisionTokens(ListProvisionTokensRequest) returns (ListProvisionTokensResponse); // CreateTokenV2 creates a token in a backend. rpc CreateTokenV2(CreateTokenV2Request) returns (google.protobuf.Empty); // UpsertTokenV2 upserts a token in a backend. @@ -3187,6 +3505,8 @@ service AuthService { rpc GetLock(GetLockRequest) returns (types.LockV2); // GetLocks gets all/in-force locks that match at least one of the targets when specified. 
rpc GetLocks(GetLocksRequest) returns (GetLocksResponse); + // ListLocks returns a page of locks matching a filter. + rpc ListLocks(ListLocksRequest) returns (ListLocksResponse); // UpsertLock upserts a lock. rpc UpsertLock(types.LockV2) returns (google.protobuf.Empty); // DeleteLock deletes a lock. @@ -3206,6 +3526,8 @@ service AuthService { // GetApps returns all registered applications. rpc GetApps(google.protobuf.Empty) returns (types.AppV3List); + // ListApps returns a page of registered applications. + rpc ListApps(ListAppsRequest) returns (ListAppsResponse); // GetApp returns an application by name. rpc GetApp(types.ResourceRequest) returns (types.AppV3); // CreateApp creates a new application resource. @@ -3219,6 +3541,8 @@ service AuthService { // GetDatabases returns all registered databases. rpc GetDatabases(google.protobuf.Empty) returns (types.DatabaseV3List); + // ListDatabases returns a page of registered databases. + rpc ListDatabases(ListDatabasesRequest) returns (ListDatabasesResponse); // GetDatabase returns a database by name. rpc GetDatabase(types.ResourceRequest) returns (types.DatabaseV3); // CreateDatabase creates a new database resource. @@ -3232,6 +3556,8 @@ service AuthService { // GetKubernetesClusters returns all registered kubernetes clusters. rpc GetKubernetesClusters(google.protobuf.Empty) returns (types.KubernetesClusterV3List); + // ListKubernetesClusters returns a page of registered kubernetes clusters. + rpc ListKubernetesClusters(ListKubernetesClustersRequest) returns (ListKubernetesClustersResponse); // GetKubernetesCluster returns a kubernetes cluster by name. rpc GetKubernetesCluster(types.ResourceRequest) returns (types.KubernetesClusterV3); // CreateKubernetesCluster creates a new kubernetes cluster resource. @@ -3256,6 +3582,8 @@ service AuthService { // GetWindowsDesktops returns all registered Windows desktop hosts matching the supplied filter. 
rpc GetWindowsDesktops(types.WindowsDesktopFilter) returns (GetWindowsDesktopsResponse); + // ListWindowsDesktops returns a page of registered Windows desktop hosts. + rpc ListWindowsDesktops(ListWindowsDesktopsRequest) returns (ListWindowsDesktopsResponse); // CreateWindowsDesktop registers a new Windows desktop host. rpc CreateWindowsDesktop(types.WindowsDesktopV3) returns (google.protobuf.Empty); // UpdateWindowsDesktop updates an existing Windows desktop host. @@ -3359,6 +3687,8 @@ service AuthService { rpc GetInstaller(types.ResourceRequest) returns (types.InstallerV1); // GetInstallers retrieves all of installer script resources. rpc GetInstallers(google.protobuf.Empty) returns (types.InstallerV1List); + // ListInstallers returns a page of installer script resources. + rpc ListInstallers(ListInstallersRequest) returns (ListInstallersResponse); // SetInstaller sets the installer script resource rpc SetInstaller(types.InstallerV1) returns (google.protobuf.Empty); diff --git a/api/proto/teleport/legacy/client/proto/event.proto b/api/proto/teleport/legacy/client/proto/event.proto index fb74c89cbcdda..02a7ca21e6b90 100644 --- a/api/proto/teleport/legacy/client/proto/event.proto +++ b/api/proto/teleport/legacy/client/proto/event.proto @@ -30,7 +30,11 @@ import "teleport/legacy/types/types.proto"; import "teleport/machineid/v1/bot_instance.proto"; import "teleport/machineid/v1/federation.proto"; import "teleport/notifications/v1/notifications.proto"; +import "teleport/presence/v1/relay_server.proto"; import "teleport/provisioning/v1/provisioning.proto"; +import "teleport/recordingencryption/v1/recording_encryption.proto"; +import "teleport/scopes/access/v1/assignment.proto"; +import "teleport/scopes/access/v1/role.proto"; import "teleport/secreports/v1/secreports.proto"; import "teleport/userloginstate/v1/userloginstate.proto"; import "teleport/userprovisioning/v2/statichostuser.proto"; @@ -219,5 +223,16 @@ message Event { 
teleport.healthcheckconfig.v1.HealthCheckConfig HealthCheckConfig = 78; // AutoUpdateAgentReport is a resource for counting agents connected to an auth instance. teleport.autoupdate.v1.AutoUpdateAgentReport AutoUpdateAgentReport = 79; + // ScopedRole is a role that describes scoped privileges. + teleport.scopes.access.v1.ScopedRole ScopedRole = 80; + // ScopedRoleAssignment is an assignment of one or more scoped roles to a user. + teleport.scopes.access.v1.ScopedRoleAssignment ScopedRoleAssignment = 81; + teleport.presence.v1.RelayServer relay_server = 82; + // RecordingEncryption is a resource for controlling session recording encryption. + teleport.recordingencryption.v1.RecordingEncryption RecordingEncryption = 83; + // PluginV1 is a resource for Teleport plugins. + types.PluginV1 plugin = 84; + // AutoUpdateBotInstanceReport is a resource for counting connected bot instances. + teleport.autoupdate.v1.AutoUpdateBotInstanceReport AutoUpdateBotInstanceReport = 85; } } diff --git a/api/proto/teleport/legacy/client/proto/inventory.proto b/api/proto/teleport/legacy/client/proto/inventory.proto index 74168690b2daf..f27dd9464da3a 100644 --- a/api/proto/teleport/legacy/client/proto/inventory.proto +++ b/api/proto/teleport/legacy/client/proto/inventory.proto @@ -20,6 +20,7 @@ package proto; import "google/protobuf/timestamp.proto"; import "teleport/legacy/types/types.proto"; +import "teleport/presence/v1/relay_server.proto"; option go_package = "github.com/gravitational/teleport/api/client/proto"; @@ -38,6 +39,9 @@ message UpstreamInventoryOneOf { UpstreamInventoryAgentMetadata AgentMetadata = 4; // UpstreamInventoryGoodbye advertises that the instance is terminating. UpstreamInventoryGoodbye Goodbye = 5; + // UpstreamInventoryStopHeartbeat informs the upstream service that a + // heartbeat is stopping. 
+ UpstreamInventoryStopHeartbeat stop_heartbeat = 6; } } @@ -95,6 +99,9 @@ message UpstreamInventoryHello { string ExternalUpgraderVersion = 6; // UpdaterInfo is used by Teleport to send information about how the Teleport updater is doing. types.UpdaterV2Info UpdaterInfo = 8; + // The advertised scope of the instance. An instance's scope cannot change once assigned, so future + // heartbeats must include a scope value matching the one declared in the hello message. + string scope = 9; } // UpstreamInventoryAgentMetadata is the message sent up the inventory control stream containing @@ -166,6 +173,10 @@ message DownstreamInventoryHello { bool KubernetesHeartbeats = 17; // KubernetesCleanup indicates the ICS supports deleting kubernetes clusters when UpstreamInventoryGoodbye.DeleteResources is set. bool KubernetesCleanup = 18; + // Indicates that the ICS supports heartbeating relay_server entries as well as deleting them on disconnect if UpstreamInventoryGoodbye.DeleteResources is set. + bool relay_server_heartbeats_cleanup = 19; + // DatabaseHeartbeatGracefulStop indicates the ICS supports stopping an individual database heartbeat. + bool database_heartbeat_graceful_stop = 20; } // SupportedCapabilities advertises the supported features of the auth server. @@ -217,6 +228,8 @@ message InventoryHeartbeat { types.DatabaseServerV3 DatabaseServer = 3; // KubeServer is a complete kube server spec to be heartbeated. types.KubernetesServerV3 KubernetesServer = 4; + // A relay_server to be heartbeated. + teleport.presence.v1.RelayServer relay_server = 5; } // UpstreamInventoryGoodbye informs the upstream service that instance @@ -255,3 +268,20 @@ message InventoryStatusSummary { // ServiceCounts aggregates the number of services. map ServiceCounts = 5; } + +// UpstreamInventoryStopHeartbeat informs the upstream service that the +// heartbeat is stopping. +message UpstreamInventoryStopHeartbeat { + // Kind is the kind of heartbeat to stop. 
+ StopHeartbeatKind kind = 1; + // Name is the name of the heartbeat to stop. + string name = 2; +} + +// StopHeartbeatKind is the type of heartbeat to stop. +enum StopHeartbeatKind { + STOP_HEARTBEAT_KIND_UNSPECIFIED = 0; + + // STOP_HEARTBEAT_KIND_DATABASE_SERVER means stop a database server heartbeat. + STOP_HEARTBEAT_KIND_DATABASE_SERVER = 1; +} diff --git a/api/proto/teleport/legacy/client/proto/joinservice.proto b/api/proto/teleport/legacy/client/proto/joinservice.proto index 145f67afdf0b5..2d7e2ac2f59d1 100644 --- a/api/proto/teleport/legacy/client/proto/joinservice.proto +++ b/api/proto/teleport/legacy/client/proto/joinservice.proto @@ -276,7 +276,8 @@ message RegisterUsingBoundKeypairCertificates { // RegisterUsingBoundKeypairRotationRequest is the response sent by the server // when a keypair rotation is required. message RegisterUsingBoundKeypairRotationRequest { - // This is a marker, no contents at this time. + // The signature algorithm suite in use by the cluster. + types.SignatureAlgorithmSuite signature_algorithm_suite = 1; } // RegisterUsingBoundKeypairMethodResponse is a response sent by the server diff --git a/api/proto/teleport/legacy/client/proto/requestable_roles.proto b/api/proto/teleport/legacy/client/proto/requestable_roles.proto new file mode 100644 index 0000000000000..f43c4bf8987f2 --- /dev/null +++ b/api/proto/teleport/legacy/client/proto/requestable_roles.proto @@ -0,0 +1,52 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+// See the License for the specific language governing permissions and +// limitations under the License. + +syntax = "proto3"; + +// buf:lint:ignore PACKAGE_DIRECTORY_MATCH +// buf:lint:ignore PACKAGE_VERSION_SUFFIX +package proto; + +option go_package = "github.com/gravitational/teleport/api/client/proto"; + +// ListRequestableRolesRequest is the request message for AuthService.ListRequestableRoles +message ListRequestableRolesRequest { + // page_size is the maximum number of requestable roles to return per page. + int32 page_size = 1; + + // page_token is the next_page_token value returned from a previous List request, if any. + string page_token = 2; + + // Filter is the type for the filtering options for the returned roles. + message Filter { + // a list of search keywords to match against resource field values, if specified + repeated string search_keywords = 1; + } + // filter is the filtering options for the returned roles. + Filter filter = 3; +} + +// ListRequestableRolesResponse is the response message for AuthService.ListRequestableRoles +message ListRequestableRolesResponse { + // RequestableRole is the type of a requestable role, containing only its name and description. + message RequestableRole { + string name = 1; + string description = 2; + } + // roles is the requestable roles. + repeated RequestableRole roles = 1; + + // next_page_token is the next page token. If there are no more results, it will be empty. + string next_page_token = 2; +} diff --git a/api/proto/teleport/legacy/types/events/events.proto b/api/proto/teleport/legacy/types/events/events.proto index 3e2c952973dbb..84d5dce9c0c05 100644 --- a/api/proto/teleport/legacy/types/events/events.proto +++ b/api/proto/teleport/legacy/types/events/events.proto @@ -77,6 +77,9 @@ enum UserKind { // Indicates the user associated with this event is a Machine ID bot user. USER_KIND_BOT = 2; + + // Indicates that the user associated with this event is a system component e.g. Okta service. 
+ USER_KIND_SYSTEM = 3; } // UserOrigin is the origin of a user account. @@ -140,6 +143,22 @@ message UserMetadata { // UserOrigin specifies the origin of this user account. UserOrigin UserOrigin = 13 [(gogoproto.jsontag) = "user_origin,omitempty"]; + + // UserRoles specifies the roles used by the user to perform the action. + repeated string UserRoles = 14 [(gogoproto.jsontag) = "user_roles,omitempty"]; + + // UserTraits holds claim data used to populate a role at runtime + // for this session. + wrappers.LabelValues UserTraits = 15 [ + (gogoproto.nullable) = false, + (gogoproto.jsontag) = "user_traits,omitempty", + (gogoproto.customtype) = "github.com/gravitational/teleport/api/types/wrappers.Traits" + ]; + + // UserClusterName represents the Teleport Cluster name the user belongs to. + // For leaf clusters, this field represents the root cluster name. + // For root clusters, this field holds the cluster name. + string UserClusterName = 16 [(gogoproto.jsontag) = "user_cluster_name,omitempty"]; } // Server is a server metadata @@ -553,6 +572,8 @@ message DesktopClipboardReceive { string DesktopAddr = 5 [(gogoproto.jsontag) = "desktop_addr"]; // Length is the number of bytes of data received from the remote clipboard. int32 Length = 6 [(gogoproto.jsontag) = "length"]; + // DesktopName is the name of the desktop resource. + string DesktopName = 7 [(gogoproto.jsontag) = "desktop_name"]; } // DesktopClipboardSend is emitted when clipboard data is @@ -586,6 +607,8 @@ message DesktopClipboardSend { string DesktopAddr = 5 [(gogoproto.jsontag) = "desktop_addr"]; // Length is the number of bytes of data sent. int32 Length = 6 [(gogoproto.jsontag) = "length"]; + // DesktopName is the name of the desktop resource. 
+ string DesktopName = 7 [(gogoproto.jsontag) = "desktop_name"]; } // DesktopSharedDirectoryStart is emitted when Teleport @@ -627,6 +650,8 @@ message DesktopSharedDirectoryStart { string DirectoryName = 7 [(gogoproto.jsontag) = "directory_name"]; // DirectoryID is the ID of the directory being shared (unique to the Windows Desktop Session). uint32 DirectoryID = 8 [(gogoproto.jsontag) = "directory_id"]; + // DesktopName is the name of the desktop resource. + string DesktopName = 9 [(gogoproto.jsontag) = "desktop_name"]; } // DesktopSharedDirectoryRead is emitted when Teleport @@ -675,6 +700,8 @@ message DesktopSharedDirectoryRead { uint32 Length = 10 [(gogoproto.jsontag) = "length"]; // Offset is the offset the bytes were read from. uint64 Offset = 11 [(gogoproto.jsontag) = "offset"]; + // DesktopName is the name of the desktop resource. + string DesktopName = 12 [(gogoproto.jsontag) = "desktop_name"]; } // DesktopSharedDirectoryWrite is emitted when Teleport @@ -723,6 +750,8 @@ message DesktopSharedDirectoryWrite { uint32 Length = 10 [(gogoproto.jsontag) = "length"]; // Offset is the offset the bytes were written to. uint64 Offset = 11 [(gogoproto.jsontag) = "offset"]; + // DesktopName is the name of the desktop resource. + string DesktopName = 12 [(gogoproto.jsontag) = "desktop_name"]; } // SessionReject event happens when a user hits a session control restriction. @@ -1802,6 +1831,7 @@ message Exec { } // SCP is emitted when data transfer has occurred between server and client +// Deprecated: use SFTP message SCP { // Metadata is a common event metadata Metadata Metadata = 1 [ @@ -2128,6 +2158,13 @@ message AuthAttempt { (gogoproto.embed) = true, (gogoproto.jsontag) = "" ]; + + // ServerMetadata holds information about the target host server + ServerMetadata Server = 5 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; } // UserTokenCreate is emitted when a user token is created. 
@@ -3567,6 +3604,18 @@ message PostgresFunctionCall { repeated string FunctionArgs = 6 [(gogoproto.jsontag) = "function_args,omitempty"]; } +// WindowsCertificateMetadata contains metadata about certificates +// issued for Windows environments. +message WindowsCertificateMetadata { + string Subject = 1 [(gogoproto.jsontag) = "subject"]; + string SerialNumber = 2 [(gogoproto.jsontag) = "serial_number"]; + string UPN = 3 [(gogoproto.jsontag) = "upn"]; + repeated string CRLDistributionPoints = 4 [(gogoproto.jsontag) = "crl_distribution_points"]; + int32 KeyUsage = 5 [(gogoproto.jsontag) = "key_usage"]; + repeated int32 ExtendedKeyUsage = 6 [(gogoproto.jsontag) = "extended_key_usage"]; + repeated string EnhancedKeyUsage = 7 [(gogoproto.jsontag) = "enhanced_key_usage"]; +} + // WindowsDesktopSessionStart is emitted when a user connects to a desktop. message WindowsDesktopSessionStart { // Metadata is common event metadata. @@ -3617,6 +3666,8 @@ message WindowsDesktopSessionStart { // NLA indicates whether Teleport performed Network Level Authentication (NLA) // when initiating this session. bool NLA = 13 [(gogoproto.jsontag) = "nla"]; + + WindowsCertificateMetadata CertMetadata = 14 [(gogoproto.jsontag) = "certificate"]; } // DatabaseSessionEnd is emitted when a user ends the database session. @@ -3659,6 +3710,16 @@ message DatabaseSessionEnd { (gogoproto.nullable) = false, (gogoproto.jsontag) = "session_stop,omitempty" ]; + + // ConnectionMetadata holds information about the connection. + ConnectionMetadata Connection = 7 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // Participants is the list of participants in a session. + repeated string Participants = 8 [(gogoproto.jsontag) = "participants,omitempty"]; } // MFADeviceMetadata is a common MFA device metadata. 
@@ -3933,6 +3994,12 @@ message WindowsDesktopSessionEnd { bool Recorded = 12 [(gogoproto.jsontag) = "recorded"]; // Participants is a list of participants in the session. repeated string Participants = 13 [(gogoproto.jsontag) = "participants"]; + // Connection holds information about the connection. + ConnectionMetadata Connection = 14 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; } // CertificateCreate is emitted when a certificate is issued. @@ -3956,6 +4023,19 @@ message CertificateCreate { (gogoproto.embed) = true, (gogoproto.jsontag) = "" ]; + + // CertificateAuthority holds information about the issuer of the certificate. + CertificateAuthority CertificateAuthority = 5 [(gogoproto.jsontag) = "certificate_authority"]; +} + +// CertificateAuthority holds information about the issuer of the certificate. +message CertificateAuthority { + // Type of the cert authority + string Type = 1 [(gogoproto.jsontag) = "type,omitempty"]; + // Domain of the cert authority + string Domain = 2 [(gogoproto.jsontag) = "domain,omitempty"]; + // SubjectKeyID is BASE32-encoded subject key ID from authority certificate + string SubjectKeyID = 3 [(gogoproto.jsontag) = "subject_key_id,omitempty"]; } // RenewableCertificateGenerationMismatch is emitted when a renewable @@ -4806,6 +4886,21 @@ message OneOf { events.AutoUpdateAgentRolloutTrigger AutoUpdateAgentRolloutTrigger = 213; events.AutoUpdateAgentRolloutForceDone AutoUpdateAgentRolloutForceDone = 214; events.AutoUpdateAgentRolloutRollback AutoUpdateAgentRolloutRollback = 215; + events.MCPSessionStart MCPSessionStart = 216; + events.MCPSessionEnd MCPSessionEnd = 217; + events.MCPSessionRequest MCPSessionRequest = 218; + events.MCPSessionNotification MCPSessionNotification = 219; + events.BoundKeypairRecovery BoundKeypairRecovery = 220; + events.BoundKeypairRotation BoundKeypairRotation = 221; + events.BoundKeypairJoinStateVerificationFailed BoundKeypairJoinStateVerificationFailed = 
222; + events.MCPSessionListenSSEStream MCPSessionListenSSEStream = 223; + events.MCPSessionInvalidHTTPRequest MCPSessionInvalidHTTPRequest = 224; + events.SCIMListingEvent SCIMListingEvent = 225; + events.SCIMResourceEvent SCIMResourceEvent = 226; + events.ClientIPRestrictionsUpdate ClientIPRestrictionsUpdate = 227; + events.VnetConfigCreate VnetConfigCreate = 232; + events.VnetConfigUpdate VnetConfigUpdate = 233; + events.VnetConfigDelete VnetConfigDelete = 234; } } @@ -4939,6 +5034,31 @@ message Identity { // BotInstanceID indicates the name of the Machine ID bot instance this // identity was issued to, if any. string BotInstanceID = 29 [(gogoproto.jsontag) = "bot_instance_id,omitempty"]; + // JoinToken is the name of the join token used when a Machine ID bot joined, + // if any. It is not set for bots using the `token` join method. + string JoinToken = 30 [(gogoproto.jsontag) = "join_token,omitempty"]; + // ScopePin pins a certificate to a specific scope and set of scoped roles. + ScopePin ScopePin = 31 [(gogoproto.jsontag) = "scope_pin,omitempty"]; +} + +// ScopePin is a marker that identifies a certificate/identity as being "pinned" to a target scope, and encodes relevant +// information for access-control evaluation at that scope. +message ScopePin { + // Scope is the target scope that this pin is associated with. This is the scope that the certificate/identity is + // pinned to. Any resources in parent/orthogonal scopes are not necessarily subject to the privileges/policies + // conveyed by this pin. + string Scope = 1 [(gogoproto.jsontag) = "scope"]; + + // Assignments encodes the scoped role assignments relevant to access-control decisions about the pinned identity. This may + // include assignments to parents of the pinned scope as well as assignments to equivalent/child scopes. Effectively, this + // means all assignments that are not orthogonal to the pinned scope. 
+ map<string, ScopePinnedAssignments> Assignments = 2 [(gogoproto.jsontag) = "assignments"]; +} + +// ScopePinnedAssignments is a collection of scoped role assignments that are relevant to the pinned identity at the target scope. +message ScopePinnedAssignments { + // Roles is a list of scoped roles that are assigned to the pinned identity at the target scope. + repeated string Roles = 1 [(gogoproto.jsontag) = "roles"]; } // RouteToApp contains parameters for application access certificate requests. @@ -6798,6 +6918,13 @@ message OktaUserSync { int32 num_users_total = 8 [(gogoproto.jsontag) = "num_users_total"]; } +// LabelSelector is a label selector that can be specified by the client to +// filter the resources that are returned by the server. +message LabelSelector { + string Key = 1 [(gogoproto.jsontag) = "key"]; + repeated string Values = 2 [(gogoproto.jsontag) = "values"]; +} + +// SPIFFESVIDIssued is an event recorded when a SPIFFE SVID is issued. message SPIFFESVIDIssued { // Metadata is a common event metadata @@ -6851,6 +6978,10 @@ message SPIFFESVIDIssued { (gogoproto.jsontag) = "attributes,omitempty", (gogoproto.casttype) = "Struct" ]; + // The name selector specified by the client when requesting the SVID. + string NameSelector = 15 [(gogoproto.jsontag) = "name_selector,omitempty"]; + // The label selectors specified by the client when requesting the SVID. + repeated LabelSelector LabelSelectors = 16 [(gogoproto.jsontag) = "label_selectors,omitempty"]; } // AuthPreferenceUpdate is emitted when the auth preference is updated. @@ -8515,3 +8646,656 @@ message SigstorePolicyDelete { (gogoproto.jsontag) = "" ]; } + +// MCPSessionStart is emitted when a user starts an MCP session. 
+message MCPSessionStart { + // Metadata is a common event metadata + Metadata Metadata = 1 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // User is a common user event metadata + UserMetadata User = 2 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // SessionMetadata is a common event session metadata + SessionMetadata Session = 3 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // ServerMetadata is a common server metadata + ServerMetadata Server = 4 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // ConnectionMetadata holds information about the connection + ConnectionMetadata Connection = 5 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // App is a common application resource metadata. + AppMetadata App = 6 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // McpSessionId is the session ID tracked by remote MCP servers. + string mcp_session_id = 7; + // ClientInfo stores reported client agent information, e.g. "claude-ai/0.1.0". + string client_info = 8; + // ServerInfo stores reported MCP server information, e.g. "teleport-mcp-demo/18.3.0". + string server_info = 9; + // ProtocolVersion is the MCP protocol version like "2025-06-18". + string protocol_version = 10; + // IngressAuthType indicates how the MCP session is authorized between MCP client and Teleport. + string ingress_auth_type = 11; + // EgressAuthType indicates how the MCP session is authorized between Teleport and remote MCP server. + string egress_auth_type = 12; +} + +// MCPSessionEnd is emitted when an MCP session ends. 
+message MCPSessionEnd { + // Metadata is a common event metadata + Metadata Metadata = 1 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // User is a common user event metadata + UserMetadata User = 2 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // SessionMetadata is a common event session metadata + SessionMetadata Session = 3 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // ServerMetadata is a common server metadata + ServerMetadata Server = 4 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // ConnectionMetadata holds information about the connection + ConnectionMetadata Connection = 5 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // App is a common application resource metadata. + AppMetadata App = 6 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // Status contains common command or operation status fields. + // + // The protocol spec states that the MCP server may refuse client requests to + // terminate the session. + // https://modelcontextprotocol.io/specification/2025-06-18/basic/transports#session-management + Status Status = 7 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // Headers are the HTTP request headers. + wrappers.LabelValues headers = 8 [ + (gogoproto.nullable) = false, + (gogoproto.jsontag) = "headers,omitempty", + (gogoproto.customtype) = "github.com/gravitational/teleport/api/types/wrappers.Traits" + ]; +} + +// MCPJSONRPCMessage includes details of an MCP request or notification. +// https://modelcontextprotocol.io/docs/concepts/transports#requests +message MCPJSONRPCMessage { + // JSONRPC specifies the version of the protocol. 
+ string JSONRPC = 1 [(gogoproto.jsontag) = "jsonrpc"]; + // ID is the ID of a request. Notifications have no IDs. + string ID = 2 [(gogoproto.jsontag) = "id,omitempty"]; + // Method is the method of this message. + string method = 3 [(gogoproto.jsontag) = "method"]; + // Params holds the optional parameters. + google.protobuf.Struct params = 5 [ + (gogoproto.jsontag) = "params,omitempty", + (gogoproto.casttype) = "Struct" + ]; +} + +// MCPSessionRequest is emitted when a request is sent by the client during an MCP session. +message MCPSessionRequest { + // Metadata is a common event metadata + Metadata metadata = 1 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // User is a common user event metadata + UserMetadata user = 2 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // SessionMetadata is a common event session metadata + SessionMetadata session = 3 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // App is a common application resource metadata. + AppMetadata App = 4 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // Status contains information about whether the request was successful. + Status status = 5 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // Message contains details of the message. + MCPJSONRPCMessage message = 6 [ + (gogoproto.nullable) = false, + (gogoproto.jsontag) = "message,omitempty" + ]; + // Headers are the HTTP request headers. + wrappers.LabelValues headers = 7 [ + (gogoproto.nullable) = false, + (gogoproto.jsontag) = "headers,omitempty", + (gogoproto.customtype) = "github.com/gravitational/teleport/api/types/wrappers.Traits" + ]; +} + +// MCPSessionNotification is emitted when a notification is sent by the client +// during an MCP session. 
+message MCPSessionNotification { + // Metadata is a common event metadata + Metadata metadata = 1 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // User is a common user event metadata + UserMetadata user = 2 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // SessionMetadata is a common event session metadata + SessionMetadata session = 3 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // App is a common application resource metadata. + AppMetadata App = 4 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // Message contains details of the message. + MCPJSONRPCMessage message = 5 [ + (gogoproto.nullable) = false, + (gogoproto.jsontag) = "message,omitempty" + ]; + // Status contains information about whether the request was successful. + Status status = 6 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // Headers are the HTTP request headers. + wrappers.LabelValues headers = 7 [ + (gogoproto.nullable) = false, + (gogoproto.jsontag) = "headers,omitempty", + (gogoproto.customtype) = "github.com/gravitational/teleport/api/types/wrappers.Traits" + ]; +} + +// MCPSessionListenSSEStream is emitted when the client sends a GET request to a +// streamable HTTP MCP server to listen for server notifications via an SSE +// stream. 
+message MCPSessionListenSSEStream { + // Metadata is a common event metadata + Metadata metadata = 1 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // User is a common user event metadata + UserMetadata user = 2 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // SessionMetadata is a common event session metadata + SessionMetadata session = 3 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // App is a common application resource metadata. + AppMetadata app = 4 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // Status contains information about whether the request was successful. + Status status = 5 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // Headers are the HTTP request headers. + wrappers.LabelValues headers = 6 [ + (gogoproto.nullable) = false, + (gogoproto.jsontag) = "headers,omitempty", + (gogoproto.customtype) = "github.com/gravitational/teleport/api/types/wrappers.Traits" + ]; +} + +// MCPSessionInvalidHTTPRequest is a blanket event for all requests that we do +// not understand (usually outside the MCP spec). +message MCPSessionInvalidHTTPRequest { + // Metadata is a common event metadata + Metadata metadata = 1 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // User is a common user event metadata + UserMetadata user = 2 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // SessionMetadata is a common event session metadata + SessionMetadata session = 3 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // App is a common application resource metadata. 
+ AppMetadata app = 4 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // Path is the relative path in the URL. + string path = 5 [(gogoproto.jsontag) = "path"]; + // Method is the request HTTP method, like GET/POST/DELETE/etc. + string method = 6 [(gogoproto.jsontag) = "method"]; + // RawQuery holds the encoded query values. + string raw_query = 7 [(gogoproto.jsontag) = "raw_query"]; + // Body is the request HTTP body. + bytes body = 8 [(gogoproto.jsontag) = "body"]; + // Headers are the HTTP request headers. + wrappers.LabelValues headers = 9 [ + (gogoproto.nullable) = false, + (gogoproto.jsontag) = "headers,omitempty", + (gogoproto.customtype) = "github.com/gravitational/teleport/api/types/wrappers.Traits" + ]; +} + +// BoundKeypairRecovery is emitted when a client performs self-recovery using +// a bound_keypair joining token. This event is also emitted upon first join. +message BoundKeypairRecovery { + // Metadata is a common event metadata. + Metadata Metadata = 1 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // Status contains common command or operation status fields. + Status Status = 2 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // ConnectionMetadata holds information about the connection + ConnectionMetadata Connection = 3 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // TokenName is the name of the provision token used to join. + string TokenName = 4 [(gogoproto.jsontag) = "token_name"]; + // BotName is the name of the bot attempting to join, if any. + string BotName = 5 [(gogoproto.jsontag) = "bot_name,omitempty"]; + // PublicKey is the public key at the completion of the joining process, in + // SSH authorized_keys format. If a keypair rotation occurred, this is the + // keypair trusted at the end of the join process. 
+ string PublicKey = 6 [(gogoproto.jsontag) = "public_key,omitempty"]; + // RecoveryCount is the recovery counter value at the time of this recovery. + uint32 RecoveryCount = 7 [(gogoproto.jsontag) = "recovery_count"]; + // RecoveryMode is the bound keypair token's configured recovery mode. + string RecoveryMode = 8 [(gogoproto.jsontag) = "recovery_mode"]; +} + +// BoundKeypairRotation is emitted when a keypair rotation takes place. +message BoundKeypairRotation { + // Metadata is a common event metadata. + Metadata Metadata = 1 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // Status contains common command or operation status fields. + Status Status = 2 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // ConnectionMetadata holds information about the connection + ConnectionMetadata Connection = 3 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // TokenName is the name of the provision token used to join. + string TokenName = 4 [(gogoproto.jsontag) = "token_name"]; + // BotName is the name of the bot attempting to join, if any. + string BotName = 5 [(gogoproto.jsontag) = "bot_name,omitempty"]; + // PreviousPublicKey is the previous public key in SSH authorized_keys format. + // On first join using a registration secret, this value will be empty. + string PreviousPublicKey = 6 [(gogoproto.jsontag) = "previous_public_key,omitempty"]; + // NewPublicKey is the new public key after rotation. If rotation fails, this + // value will be empty. + string NewPublicKey = 7 [(gogoproto.jsontag) = "new_public_key,omitempty"]; +} + +// BoundKeypairJoinStateVerificationFailed is emitted when join state +// verification fails, potentially indicating a compromised keypair. +message BoundKeypairJoinStateVerificationFailed { + // Metadata is a common event metadata. 
+ Metadata Metadata = 1 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // Status contains information about the failure. + Status Status = 2 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // ConnectionMetadata holds information about the connection + ConnectionMetadata Connection = 3 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + // TokenName is the name of the provision token used to join. + string TokenName = 4 [(gogoproto.jsontag) = "token_name"]; + // BotName is the name of the bot attempting to join, if any. + string BotName = 5 [(gogoproto.jsontag) = "bot_name,omitempty"]; +} + +// SCIMRequest describes the SCIM http request that triggered an event +message SCIMRequest { + // ID is a Teleport-generated arbitrary unique ID for the SCIM request that + // triggered the enclosing event. It will be included in all structured log + // messages involved in the handling of this request. + string ID = 1 [(gogoproto.jsontag) = "id,omitempty"]; + + // SourceAddress is the source IP address for the SCIM HTTP request that + // triggered the enclosing event. + string SourceAddress = 2 [(gogoproto.jsontag) = "source_address,omitempty"]; + + // UserAgent is the user agent string from the HTTP request that triggered + // the enclosing event + string UserAgent = 3 [(gogoproto.jsontag) = "user_agent,omitempty"]; + + // Method is the HTTP method used by the SCIM HTTP request that triggered the + // enclosing event (e.g. GET, PUT, etc) + string Method = 4 [(gogoproto.jsontag) = "method,omitempty"]; + + // Path is the fully-qualified path of the SCIM request that triggered the + // enclosing event. + string Path = 5 [(gogoproto.jsontag) = "path,omitempty"]; + + // Body holds a representation of the arbitrary JSON body of the initial + // request. 
May be empty for list operations, hold a SCIM resource + // definition for creation and update operations, or a SCIM PATCH description + // for patch update requests. + google.protobuf.Struct Body = 6 [ + (gogoproto.jsontag) = "body,omitempty", + (gogoproto.casttype) = "Struct" + ]; +} + +message SCIMCommonData { + // Request holds metadata about the original SCIM request that triggered this + // event. + SCIMRequest Request = 3 [(gogoproto.jsontag) = "request,omitempty"]; + + // Integration is the name of the integration/access plugin that the SCIM + // service was operating on for this event. + string Integration = 4 [(gogoproto.jsontag) = "integration,omitempty"]; + + // ResourceType is the SCIM resource type of the Teleport resource involved in + // this event. Valid values include, but are not limited to, "user" and + // "group". + string ResourceType = 5 [(gogoproto.jsontag) = "resource_type,omitempty"]; +} + +// SCIMListingEvent records an attempt to list SCIM resources +message SCIMListingEvent { + // Metadata is a common event metadata. + Metadata Metadata = 1 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // Status contains common command or operation status fields. + Status Status = 2 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // Common holds values common to SCIM listing and resource events + SCIMCommonData Common = 3 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // Filter is the listing filter function supplied by the client, if any. + string Filter = 4 [(gogoproto.jsontag) = "filter,omitempty"]; + + // Count is the requested page size from the client. A zero value indicates that + // the client did not request a specific page size. + int32 Count = 5 [(gogoproto.jsontag) = "count,omitempty"]; + + // Start is the 1-based start index requested by the client, if any. 
A zero + // value indicates that the client did not request a specific starting index + // for the page. + int32 StartIndex = 6 [(gogoproto.jsontag) = "start_index,omitempty"]; + + // ResourceCount is the number of resources returned in response to this + // request. + uint32 ResourceCount = 7 [(gogoproto.jsontag) = "resource_count,omitempty"]; +} + +// SCIMResourceEvent records an attmpted operation on a specific SCIM resource +message SCIMResourceEvent { + // Metadata is a common event metadata. + Metadata Metadata = 1 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // Status contains common command or operation status fields. + Status Status = 2 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // Common holds values common to SCIM listing and resource events + SCIMCommonData Common = 3 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // TeleportID is the name of the Teleport resource involved in this event. + string TeleportID = 4 [(gogoproto.jsontag) = "teleport_id,omitempty"]; + + // ExternalID is the ID used by the external SCIM client to to refer to the + // resource affected by this event. + string ExternalID = 5 [(gogoproto.jsontag) = "external_id,omitempty"]; + + // Display is a human-readable name or identifier for the Teleport resource + // affected by the event. + string Display = 6 [(gogoproto.jsontag) = "display,omitempty"]; +} + +// ClientIPRestrictionsUpdate records a Client IP Restrictions update. +message ClientIPRestrictionsUpdate { + // Metadata is a common event metadata. + Metadata Metadata = 1 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // UserMetadata is a common user event metadata. 
+ UserMetadata User = 2 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // ConnectionMetadata holds information about the connection. + ConnectionMetadata Connection = 3 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // ResourceMetadata is a common resource event metadata. + ResourceMetadata Resource = 4 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // Status indicates whether the operation was successful. + Status Status = 5 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // ClientIPRestrictions is the new Client IP Restrictions allowlist. + repeated string ClientIPRestrictions = 6 [(gogoproto.jsontag) = "client_ip_restrictions"]; +} + +// VnetConfigCreate is emitted when a VnetConfig is created. +message VnetConfigCreate { + // Metadata is a common event metadata. + Metadata metadata = 1 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // Status indicates whether the creation was successful. + Status status = 2 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // User is a common user event metadata. + UserMetadata user = 3 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // Connection holds information about the connection. + ConnectionMetadata connection = 4 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; +} + +// VnetConfigUpdate is emitted when a VnetConfig is updated. +message VnetConfigUpdate { + // Metadata is a common event metadata. + Metadata metadata = 1 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // Status indicates whether the update was successful. 
+ Status status = 2 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // User is a common user event metadata. + UserMetadata user = 3 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // Connection holds information about the connection. + ConnectionMetadata connection = 4 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; +} + +// VnetConfigDelete is emitted when a VnetConfig is deleted. +message VnetConfigDelete { + // Metadata is a common event metadata. + Metadata metadata = 1 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // Status indicates whether the deletion was successful. + Status status = 2 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // User is a common user event metadata. + UserMetadata user = 3 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; + + // Connection holds information about the connection. 
+ ConnectionMetadata connection = 4 [ + (gogoproto.nullable) = false, + (gogoproto.embed) = true, + (gogoproto.jsontag) = "" + ]; +} diff --git a/api/proto/teleport/legacy/types/types.proto b/api/proto/teleport/legacy/types/types.proto index 7d9653d9f07ef..869cdce35ad3f 100644 --- a/api/proto/teleport/legacy/types/types.proto +++ b/api/proto/teleport/legacy/types/types.proto @@ -21,6 +21,7 @@ import "google/protobuf/duration.proto"; import "google/protobuf/timestamp.proto"; import "google/protobuf/wrappers.proto"; import "teleport/attestation/v1/attestation.proto"; +import "teleport/componentfeatures/v1/component_features.proto"; import "teleport/legacy/types/trusted_device_requirement.proto"; import "teleport/legacy/types/wrappers/wrappers.proto"; @@ -208,6 +209,8 @@ message DatabaseServerV3 { (gogoproto.nullable) = false, (gogoproto.jsontag) = "status" ]; + // The advertised scope of the server, which cannot change once assigned. + string scope = 7; } // DatabaseServerSpecV3 is the database server spec. @@ -242,6 +245,11 @@ message DatabaseServerSpecV3 { DatabaseV3 Database = 12 [(gogoproto.jsontag) = "database,omitempty"]; // ProxyIDs is a list of proxy IDs this server is expected to be connected to. repeated string ProxyIDs = 13 [(gogoproto.jsontag) = "proxy_ids,omitempty"]; + + // the name of the Relay group that the server is connected to + string relay_group = 14; + // the list of Relay host IDs that the server is connected to + repeated string relay_ids = 15; } // DatabaseServerStatusV3 is the database server status. @@ -362,11 +370,17 @@ message DatabaseAdminUser { string DefaultDatabase = 2 [(gogoproto.jsontag) = "default_database"]; } -// OracleOptions contains information about privileged database user used -// for database audit. +// OracleOptions contains Oracle-specific configuration options. message OracleOptions { - // AuditUser is the Oracle database user privilege to access internal Oracle audit trail. 
- string AuditUser = 1 [(gogoproto.jsontag) = "audit_user"]; + // AuditUser is the name of the Oracle database user that should be used to access + // the internal audit trail. + string AuditUser = 1 [(gogoproto.jsontag) = "audit_user,omitempty"]; + // RetryCount is the maximum number of times to retry connecting to a + // host upon failure. If not specified, it defaults to 2, for a total of 3 connection attempts. + int32 RetryCount = 2 [(gogoproto.jsontag) = "retry_count,omitempty"]; + // ShuffleHostnames, when true, randomizes the order of hosts to connect to from + // the provided list. + bool ShuffleHostnames = 3 [(gogoproto.jsontag) = "shuffle_hostnames,omitempty"]; } // DatabaseStatusV3 contains runtime information about the database. @@ -437,7 +451,7 @@ message AWS { ]; // AccountID is the AWS account ID this database belongs to. string AccountID = 4 [(gogoproto.jsontag) = "account_id,omitempty"]; - // ElastiCache contains AWS ElastiCache Redis specific metadata. + // ElastiCache contains Amazon ElastiCache Redis-specific metadata. ElastiCache ElastiCache = 5 [ (gogoproto.nullable) = false, (gogoproto.jsontag) = "elasticache,omitempty" @@ -457,7 +471,7 @@ message AWS { (gogoproto.nullable) = false, (gogoproto.jsontag) = "rdsproxy,omitempty" ]; - // RedshiftServerless contains AWS Redshift Serverless specific metadata. + // RedshiftServerless contains Amazon Redshift Serverless-specific metadata. RedshiftServerless RedshiftServerless = 9 [ (gogoproto.nullable) = false, (gogoproto.jsontag) = "redshift_serverless,omitempty" @@ -483,11 +497,17 @@ message AWS { // SessionTags is a list of AWS STS session tags. map<string, string> SessionTags = 15 [(gogoproto.jsontag) = "session_tags,omitempty"]; - // DocumentDB contains AWS DocumentDB specific metadata. + // DocumentDB contains Amazon DocumentDB-specific metadata. 
DocumentDB DocumentDB = 16 [ (gogoproto.nullable) = false, (gogoproto.jsontag) = "docdb,omitempty" ]; + + // ElastiCacheServerless contains Amazon ElastiCache Serverless metadata. + ElastiCacheServerless ElastiCacheServerless = 17 [ + (gogoproto.nullable) = false, + (gogoproto.jsontag) = "elasticache_serverless,omitempty" + ]; } // SecretStore contains secret store configurations. @@ -532,7 +552,7 @@ message RDSProxy { string ResourceID = 3 [(gogoproto.jsontag) = "resource_id,omitempty"]; } -// ElastiCache contains AWS ElastiCache Redis specific metadata. +// ElastiCache contains Amazon ElastiCache Redis-specific metadata. message ElastiCache { // ReplicationGroupID is the Redis replication group ID. string ReplicationGroupID = 1 [(gogoproto.jsontag) = "replication_group_id,omitempty"]; @@ -544,6 +564,12 @@ message ElastiCache { string EndpointType = 4 [(gogoproto.jsontag) = "endpoint_type,omitempty"]; } +// ElastiCacheServerless contains Amazon ElastiCache Serverless metadata. +message ElastiCacheServerless { + // CacheName is an ElastiCache Serverless cache name. + string CacheName = 1 [(gogoproto.jsontag) = "cache_name,omitempty"]; +} + // MemoryDB contains AWS MemoryDB specific metadata. message MemoryDB { // ClusterName is the name of the MemoryDB cluster. @@ -556,7 +582,7 @@ message MemoryDB { string EndpointType = 4 [(gogoproto.jsontag) = "endpoint_type,omitempty"]; } -// RedshiftServerless contains AWS Redshift Serverless specific metadata. +// RedshiftServerless contains Amazon Redshift Serverless-specific metadata. message RedshiftServerless { // WorkgroupName is the workgroup name. string WorkgroupName = 1 [(gogoproto.jsontag) = "workgroup_name,omitempty"]; @@ -576,7 +602,7 @@ message OpenSearch { string EndpointType = 3 [(gogoproto.jsontag) = "endpoint_type,omitempty"]; } -// DocumentDB contains AWS DocumentDB specific metadata. +// DocumentDB contains Amazon DocumentDB-specific metadata. message DocumentDB { // ClusterID is the cluster identifier. 
string ClusterID = 1 [(gogoproto.jsontag) = "cluster_id,omitempty"]; @@ -586,12 +612,26 @@ message DocumentDB { string EndpointType = 3 [(gogoproto.jsontag) = "endpoint_type,omitempty"]; } -// GCPCloudSQL contains parameters specific to GCP Cloud SQL databases. +// GCPCloudSQL contains parameters specific to GCP databases. +// The name "GCPCloudSQL" is a legacy from a time when only GCP Cloud SQL was supported. message GCPCloudSQL { // ProjectID is the GCP project ID the Cloud SQL instance resides in. string ProjectID = 1 [(gogoproto.jsontag) = "project_id,omitempty"]; // InstanceID is the Cloud SQL instance ID. string InstanceID = 2 [(gogoproto.jsontag) = "instance_id,omitempty"]; + // AlloyDB contains AlloyDB-specific configuration elements. + AlloyDB AlloyDB = 3 [ + (gogoproto.nullable) = false, + (gogoproto.jsontag) = "alloydb,omitempty" + ]; +} + +// AlloyDB contains AlloyDB-specific configuration elements. +message AlloyDB { + // EndpointType is the database endpoint type to use. Should be one of: "private", "public", "psc". + string EndpointType = 1 [(gogoproto.jsontag) = "endpoint_type,omitempty"]; + // EndpointOverride is an override of the endpoint address to use. + string EndpointOverride = 2 [(gogoproto.jsontag) = "endpoint_override,omitempty"]; } // Azure contains Azure specific database metadata. @@ -886,6 +926,8 @@ message ServerV2 { (gogoproto.nullable) = false, (gogoproto.jsontag) = "spec" ]; + // The advertised scope of the server, which cannot change once assigned. + string scope = 6; } // ServerSpecV2 is a specification for V2 Server @@ -926,6 +968,14 @@ message ServerSpecV2 { // GitHub organization. 
GitHubServerMetadata git_hub = 15 [(gogoproto.jsontag) = "github,omitempty"]; + // the name of the Relay group that the server is connected to + string relay_group = 16; + // the list of Relay host IDs that the server is connected to + repeated string relay_ids = 17; + + // component_features represents features supported by this server + teleport.componentfeatures.v1.ComponentFeatures component_features = 18; + reserved 8; reserved 10; reserved "KubernetesClusters"; @@ -988,6 +1038,8 @@ message AppServerV3 { (gogoproto.nullable) = false, (gogoproto.jsontag) = "spec" ]; + // The advertised scope of the server, which cannot change once assigned. + string scope = 6; } // AppServerSpecV3 is the app access server spec. @@ -1007,6 +1059,14 @@ message AppServerSpecV3 { AppV3 App = 5 [(gogoproto.jsontag) = "app"]; // ProxyIDs is a list of proxy IDs this server is expected to be connected to. repeated string ProxyIDs = 6 [(gogoproto.jsontag) = "proxy_ids,omitempty"]; + + // the name of the Relay group that the server is connected to + string relay_group = 7; + // the list of Relay host IDs that the server is connected to + repeated string relay_ids = 8; + + // component_features contains features supported by this app server. + teleport.componentfeatures.v1.ComponentFeatures component_features = 9; } // AppV3List represents a list of app resources. @@ -1122,6 +1182,19 @@ message AppSpecV3 { // want the app to be accessible from any of them. If `public_addr` is explicitly set in the app spec, // setting this value to true will overwrite that public address in the web UI. bool UseAnyProxyPublicAddr = 14 [(gogoproto.jsontag) = "use_any_proxy_public_addr,omitempty"]; + // MCP contains MCP server-related configurations. + MCP MCP = 15 [(gogoproto.jsontag) = "mcp,omitempty"]; +} + +// MCP contains MCP server-related configurations. +message MCP { + // Command to launch stdio-based MCP servers. + string command = 1; + // Args to execute with the command. 
+ repeated string args = 2; + // RunAsHostUser is the host user account under which the command will be + // executed. Required for stdio-based MCP servers. + string run_as_host_user = 3; } // Rewrite is a list of rewriting rules to apply to requests and responses. @@ -1208,6 +1281,8 @@ enum PrivateKeyType { // SSHKeyPair is an SSH CA key pair. message SSHKeyPair { + option (gogoproto.equal) = true; + // PublicKey is the SSH public key. bytes PublicKey = 1 [(gogoproto.jsontag) = "public_key,omitempty"]; // PrivateKey is the SSH private key. @@ -1224,6 +1299,8 @@ message TLSKeyPair { bytes Key = 2 [(gogoproto.jsontag) = "key,omitempty"]; // KeyType is the type of the Key. PrivateKeyType KeyType = 3 [(gogoproto.jsontag) = "key_type,omitempty"]; + // CRL is an empty DER-encoded revocation list. + bytes CRL = 4 [(gogoproto.jsontag) = "crl"]; } // JWTKeyPair is a PEM encoded keypair used for signing JWT tokens. @@ -1236,6 +1313,25 @@ message JWTKeyPair { PrivateKeyType PrivateKeyType = 3 [(gogoproto.jsontag) = "private_key_type,omitempty"]; } +// EncryptionKeyPair is a PKIX ASN.1 DER encoded keypair used for encrypting and decrypting data. +message EncryptionKeyPair { + // PublicKey is a PKIX ASN.1 DER encoded public key. + bytes public_key = 1 [(gogoproto.jsontag) = "public_key,omitempty"]; + // PrivateKey is a PKCS#8 ASN.1 DER encoded private key. + bytes private_key = 2 [(gogoproto.jsontag) = "private_key,omitempty"]; + // PrivateKeyType is the type of the PrivateKey. + PrivateKeyType private_key_type = 3 [(gogoproto.jsontag) = "private_key_type,omitempty"]; + // Hash function used during OAEP encryption/decryption. It maps directly to the possible + // values of [crypto.Hash] in the go crypto package. + uint32 hash = 4 [(gogoproto.jsontag) = "hash,omitempty"]; +} + +// A public key to be used as a recipient during age encryption of session recordings. +message AgeEncryptionKey { + // A PKIX ASN.1 DER encoded public key used for key wrapping during age encryption. 
Expected to be RSA 4096. + bytes public_key = 1 [(gogoproto.jsontag) = "public_key"]; +} + // CertAuthorityV2 is version 2 resource spec for Cert Authority message CertAuthorityV2 { option (gogoproto.goproto_stringer) = false; @@ -1472,6 +1568,8 @@ message ProvisionTokenSpecV2 { ProvisionTokenSpecV2BoundKeypair BoundKeypair = 19 [(gogoproto.jsontag) = "bound_keypair,omitempty"]; // AzureDevops allows the configuration of options specific to the "azure_devops" join method. ProvisionTokenSpecV2AzureDevops AzureDevops = 20 [(gogoproto.jsontag) = "azure_devops,omitempty"]; + // Env0 allows the configuration of options specific to the "env0" join method. + ProvisionTokenSpecV2Env0 Env0 = 21 [(gogoproto.jsontag) = "env0,omitempty"]; } // ProvisionTokenSpecV2AzureDevops contains the Azure Devops-specific @@ -1726,9 +1824,17 @@ message ProvisionTokenSpecV2Spacelift { message Rule { // SpaceID is the ID of the space in which the run that owns the token was // executed. + // + // This field supports "glob-style" matching when enable_glob_matching is true: + // - Use '*' to match zero or more characters. + // - Use '?' to match any single character. string SpaceID = 1 [(gogoproto.jsontag) = "space_id,omitempty"]; // CallerID is the ID of the caller, ie. the stack or module that generated // the run. + // + // This field supports "glob-style" matching when enable_glob_matching is true: + // - Use '*' to match zero or more characters. + // - Use '?' to match any single character. string CallerID = 2 [(gogoproto.jsontag) = "caller_id,omitempty"]; // CallerType is the type of the caller, ie. the entity that owns the run - // either `stack` or `module`. @@ -1743,6 +1849,9 @@ message ProvisionTokenSpecV2Spacelift { // Hostname is the hostname of the Spacelift tenant that tokens // will originate from. 
E.g `example.app.spacelift.io` string Hostname = 2 [(gogoproto.jsontag) = "hostname,omitempty"]; + // EnableGlobMatching enables glob-style matching for the space_id and + // caller_id fields in the rules. + bool EnableGlobMatching = 3 [(gogoproto.jsontag) = "enable_glob_matching,omitempty"]; } // ProvisionTokenSpecV2Kubernetes contains the Kubernetes-specific part of the @@ -1754,6 +1863,18 @@ message ProvisionTokenSpecV2Kubernetes { // This can be fetched from /openid/v1/jwks on the Kubernetes API Server. string JWKS = 1 [(gogoproto.jsontag) = "jwks,omitempty"]; } + message OIDCConfig { + // Issuer is the URI of the OIDC issuer. It must have an accessible and + // OIDC-compliant `/.well-known/openid-configuration` endpoint. This should + // be a valid URL and must exactly match the `issuer` field in a service + // account JWT. For example: + // https://oidc.eks.us-west-2.amazonaws.com/id/12345... + string Issuer = 1 [(gogoproto.jsontag) = "issuer,omitempty"]; + + // InsecureAllowHTTPIssuer is a flag that, if set, disables the requirement + // that the issuer must use HTTPS. + bool InsecureAllowHTTPIssuer = 2 [(gogoproto.jsontag) = "insecure_allow_http_issuer"]; + } // Rule is a set of properties the Kubernetes-issued token might have to be // allowed to use this ProvisionToken message Rule { @@ -1768,6 +1889,7 @@ message ProvisionTokenSpecV2Kubernetes { // Service Account token. Support values: // - `in_cluster` // - `static_jwks` + // - `oidc` // If unset, this defaults to `in_cluster`. string Type = 2 [ (gogoproto.jsontag) = "type,omitempty", @@ -1775,6 +1897,8 @@ message ProvisionTokenSpecV2Kubernetes { ]; // StaticJWKS is the configuration specific to the `static_jwks` type. StaticJWKSConfig StaticJWKS = 3 [(gogoproto.jsontag) = "static_jwks,omitempty"]; + // OIDCConfig configures the `oidc` type.
+ OIDCConfig OIDC = 4 [(gogoproto.jsontag) = "oidc,omitempty"]; } // ProvisionTokenSpecV2Azure contains the Azure-specific part of the @@ -1929,12 +2053,61 @@ message ProvisionTokenSpecV2Oracle { // full region names ("us-phoenix-1") and abbreviations ("phx") are allowed. // If empty, any region is allowed. repeated string Regions = 3 [(gogoproto.jsontag) = "regions,omitempty"]; + // Instances is a list of the OCIDs of specific instances that are allowed + // to join. If empty, any instance matching the other fields in the rule is allowed. + // Limited to 100 instance OCIDs per rule. + repeated string Instances = 4 [(gogoproto.jsontag) = "instances,omitempty"]; } // Allow is a list of Rules, nodes using this token must match one // allow rule to use this token. repeated Rule Allow = 1 [(gogoproto.jsontag) = "allow,omitempty"]; } +// ProvisionTokenSpecV2Env0 contains env0-specific parts of the +// ProvisionTokenSpecV2. +message ProvisionTokenSpecV2Env0 { + // Rule is a set of properties the env0 environment might have to be allowed + // to use this provision token. + message Rule { + // OrganizationID is the unique organization identifier, corresponding to + // `organizationId` in an Env0 OIDC token. + string OrganizationID = 1 [(gogoproto.jsontag) = "organization_id,omitempty"]; + // ProjectID is a unique project identifier, corresponding to `projectId` in + // an Env0 OIDC token. + string ProjectID = 2 [(gogoproto.jsontag) = "project_id,omitempty"]; + // ProjectName is the name of the project under which the job was run + // corresponding to `projectName` in an Env0 OIDC token. + string ProjectName = 3 [(gogoproto.jsontag) = "project_name,omitempty"]; + // TemplateID is the unique identifier of the Env0 template, corresponding + // to `templateId` in an Env0 OIDC token. + string TemplateID = 4 [(gogoproto.jsontag) = "template_id,omitempty"]; + // TemplateName is the name of the Env0 template, corresponding to + // `templateName` in an Env0 OIDC token. 
+ string TemplateName = 5 [(gogoproto.jsontag) = "template_name,omitempty"]; + // EnvironmentID is the unique identifier of the Env0 environment, + // corresponding to `environmentId` in an Env0 OIDC token. + string EnvironmentID = 6 [(gogoproto.jsontag) = "environment_id,omitempty"]; + // EnvironmentName is the name of the Env0 environment, corresponding to + // `environmentName` in an Env0 OIDC token. + string EnvironmentName = 7 [(gogoproto.jsontag) = "environment_name,omitempty"]; + // WorkspaceName is the name of the Env0 workspace, corresponding to + // `workspaceName` in an Env0 OIDC token. + string WorkspaceName = 8 [(gogoproto.jsontag) = "workspace_name,omitempty"]; + // DeploymentType is the env0 deployment type, such as "deploy", "destroy", + // etc. Corresponds to `deploymentType` in an Env0 OIDC token. + string DeploymentType = 9 [(gogoproto.jsontag) = "deployment_type,omitempty"]; + // DeployerEmail is the email of the person that triggered the deployment, + // corresponding to `deployerEmail` in an Env0 OIDC token. + string DeployerEmail = 10 [(gogoproto.jsontag) = "deployer_email,omitempty"]; + // Env0Tag is a custom tag value corresponding to `env0Tag` when + // `ENV0_OIDC_TAG` is set. + string Env0Tag = 11 [(gogoproto.jsontag) = "env0_tag,omitempty"]; + } + // Allow is a list of Rules, jobs using this token must match at least one + // allow rule to use this token. + repeated Rule Allow = 1 [(gogoproto.jsontag) = "allow,omitempty"]; +} + // ProvisionTokenSpecV2BoundKeypair contains configuration for bound_keypair // type join tokens. message ProvisionTokenSpecV2BoundKeypair { @@ -1953,12 +2126,12 @@ message ProvisionTokenSpecV2BoundKeypair { // public key on first join, which may be used instead of preregistering a // public key with `initial_public_key`. If `initial_public_key` is set, // this value is ignored. Otherwise, if set, this value will be used to - // populate `.status.bound_keypair.intitial_join_secret`. 
If unset and no + // populate `.status.bound_keypair.registration_secret`. If unset and no // `initial_public_key` is provided, a random secure value will be generated // server-side to populate the status field. string RegistrationSecret = 2 [(gogoproto.jsontag) = "registration_secret,omitempty"]; - // MustRegisterBefore is an optional time before which registeration via + // MustRegisterBefore is an optional time before which registration via // initial join secret must be performed. Attempts to register using an // initial join secret after this timestamp will not be allowed. This may be // modified after creation if necessary to allow the initial registration to @@ -2032,7 +2205,7 @@ message ProvisionTokenStatusV2BoundKeypair { // RegistrationSecret contains a secret value that may be used for public key // registration during the initial join process if no public key is // preregistered. If `.spec.bound_keypair.onboarding.initial_public_key` - // is set, †his field will remain empty. Otherwise, if + // is set, this field will remain empty. Otherwise, if // `.spec.bound_keypair.onboarding.registration_secret` is set, that value // will be copied here. If that field is unset, a value will be randomly // generated. @@ -2059,9 +2232,9 @@ message ProvisionTokenStatusV2BoundKeypair { uint32 RecoveryCount = 4 [(gogoproto.jsontag) = "recovery_count"]; // LastRecoveredAt contains a timestamp of the last successful recovery - // attempt. Note that normal renewals do not count as a recovery attempt, - // however onboarding does, either with a preregistered key or registration - // secret. This corresponds with the last time `bound_bot_instance_id` was + // attempt. Note that normal renewals with valid client certificates do not + // count as a recovery attempt, however the initial join during onboarding + // does. This corresponds with the last time `bound_bot_instance_id` was // updated. 
google.protobuf.Timestamp LastRecoveredAt = 5 [ (gogoproto.stdtime) = true, @@ -2264,9 +2437,9 @@ message ClusterNetworkingConfigSpecV2 { // missed before the server disconnects the connection to the client. int64 KeepAliveCountMax = 3 [(gogoproto.jsontag) = "keep_alive_count_max"]; - // SessionControlTimeout is the session control lease expiry and defines - // the upper limit of how long a node may be out of contact with the auth - // server before it begins terminating controlled sessions. + // SessionControlTimeout is the session control lease expiry and defines the + // upper limit of how long a node may be out of contact with the Auth Service + // before it begins terminating controlled sessions. int64 SessionControlTimeout = 4 [ (gogoproto.jsontag) = "session_control_timeout", (gogoproto.casttype) = "Duration" @@ -2370,6 +2543,41 @@ message SessionRecordingConfigV2 { (gogoproto.nullable) = false, (gogoproto.jsontag) = "spec" ]; + // Status is the SessionRecordingConfig status containing active encryption keys. + SessionRecordingConfigStatus Status = 6 [ + (gogoproto.nullable) = true, + (gogoproto.jsontag) = "status,omitempty" + ]; +} + +// KeyLabel combines a label that can be used to identify one or more keys with a keystore type that +// determines where the keys can be found. +message KeyLabel { + // Type represents which keystore should be searched when looking up keys by label. + string type = 1 [(gogoproto.jsontag) = "type"]; + // Label is a value that can be used with the related keystore in order to find relevant keys. + string label = 2 [(gogoproto.jsontag) = "label"]; +} + +// ManualKeyManagementConfig defines whether or not recording encryption keys should be managed externally +// and how to query those keys. +message ManualKeyManagementConfig { + // Enabled controls whether or not recording encryption keys should be managed externally.
+ bool enabled = 1 [(gogoproto.jsontag) = "enabled,omitempty"]; + // ActiveKeys describe which keys should be queried for active recording encryption and replay. + repeated KeyLabel active_keys = 2 [(gogoproto.jsontag) = "active_keys,omitempty"]; + // RotatedKeys describe which keys should be queried for historical replay. + repeated KeyLabel rotated_keys = 3 [(gogoproto.jsontag) = "rotated_keys,omitempty"]; +} + +// SessionRecordingEncryptionConfig configures if and how session recordings +// should be encrypted. +message SessionRecordingEncryptionConfig { + // Enabled controls whether or not session recordings should be encrypted. + bool enabled = 1 [(gogoproto.jsontag) = "enabled,omitempty"]; + // ManualKeyManagement defines whether or not recording encryption keys should be managed externally + // and how to query those keys. + ManualKeyManagementConfig manual_key_management = 2 [(gogoproto.jsontag) = "manual_key_management,omitempty"]; } // SessionRecordingConfigSpecV2 is the actual data we care about @@ -2385,6 +2593,20 @@ message SessionRecordingConfigSpecV2 { (gogoproto.jsontag) = "proxy_checks_host_keys", (gogoproto.customtype) = "BoolOption" ]; + + // Encryption configures if and how session recordings should be encrypted. + SessionRecordingEncryptionConfig encryption = 3 [ + (gogoproto.nullable) = true, + (gogoproto.jsontag) = "encryption,omitempty" + ]; +} + +// SessionRecordingConfigStatus contains the currently active age encryption keys used +// for encrypted session recording. +message SessionRecordingConfigStatus { + // EncryptionKeys contain the currently active age encryption keys used for + // encrypted session recording. + repeated AgeEncryptionKey encryption_keys = 1 [(gogoproto.jsontag) = "encryption_keys"]; } // AuthPreferenceV2 implements the AuthPreference interface. @@ -2617,6 +2839,8 @@ message DeviceTrust { // endpoints. // - "required": enforces the presence of device extensions for sensitive // endpoints. 
+ // - "required-for-humans": enforces the presence of device extensions for + // sensitive endpoints, for human users only (bots are exempt). // // Mode is always "off" for OSS. // Defaults to "optional" for Enterprise. @@ -3064,6 +3288,23 @@ message AccessRequestSpecV3 { // DryRunEnrichment contains the extra info added in response to a dry run request. AccessRequestDryRunEnrichment DryRunEnrichment = 23 [(gogoproto.jsontag) = "dry_run_enrichment,omitempty"]; + + // RequestKind indicates the kind (short/long-term) of request. + AccessRequestKind RequestKind = 24 [(gogoproto.jsontag) = "request_kind,omitempty"]; + + // LongTermResourceGrouping contains information about how resources can be grouped + // based on Access List promotions for long-term Access Requests. + LongTermResourceGrouping LongTermGrouping = 25 [(gogoproto.jsontag) = "long_term_grouping,omitempty"]; +} + +// AccessRequestKind represents the kind of Access Request being made (short/long-term). +enum AccessRequestKind { + // UNDEFINED is the default value, and represents an undefined request kind. + UNDEFINED = 0; + // SHORT_TERM represents a short-term request, either role-based or resource-based. + SHORT_TERM = 1; + // LONG_TERM represents a long-term resource-based request. + LONG_TERM = 2; } enum AccessRequestScope { @@ -3146,6 +3387,34 @@ message AccessCapabilitiesRequest { bool FilterRequestableRolesByResource = 6 [(gogoproto.jsontag) = "filter_requestable_roles_by_resource,omitempty"]; } +// RemoteAccessCapabilities is a summary of the capabilities that a remote cluster +// user is granted in the target cluster. +// buf:lint:ignore PAGINATION_REQUIRED +message RemoteAccessCapabilities { + // ApplicableRolesForResources is a list of the remote-cluster roles applicable + // for access to a given set of resources.
This will always be a subset of the + // SearchAsRoles supplied in the [RemoteAccessCapabilitiesRequest]. + repeated string ApplicableRolesForResources = 1 [(gogoproto.jsontag) = "applicable_roles,omitempty"]; +} + +// RemoteAccessCapabilitiesRequest encodes parameters for the GetRemoteAccessCapabilities method. +// buf:lint:ignore PAGINATION_REQUIRED +message RemoteAccessCapabilitiesRequest { + // User is the name of the target user on their home cluster. + string User = 1 [(gogoproto.jsontag) = "user,omitempty"]; + + // SearchAsRoles holds the roles the target user may use when searching for + // resources on the user's home cluster. + repeated string SearchAsRoles = 2 [(gogoproto.jsontag) = "remote_search_as_roles,omitempty"]; + + // ResourceIDs is the list of IDs of the resources for which we would like to view + // the necessary roles. + repeated ResourceID ResourceIDs = 3 [ + (gogoproto.jsontag) = "resource_ids,omitempty", + (gogoproto.nullable) = false + ]; +} + // RequestKubernetesResource is the Kubernetes resource identifier used // in access request settings. // Modeled after existing message KubernetesResource. @@ -3258,7 +3527,7 @@ message RoleV6 { // SubKind is an optional resource sub kind, used in some resources string SubKind = 2 [(gogoproto.jsontag) = "sub_kind,omitempty"]; // Version is the resource version. It must be specified. - // Supported values are: `v3`, `v4`, `v5`, `v6`, `v7`. + // Supported values are: `v3`, `v4`, `v5`, `v6`, `v7`, `v8`. string Version = 3 [(gogoproto.jsontag) = "version"]; // Metadata is resource metadata Metadata Metadata = 4 [ @@ -3789,6 +4058,9 @@ message RoleConditions { // WorkloadIdentityLabelsExpression is a predicate expression used to // allow/deny access to issuing a WorkloadIdentity. string WorkloadIdentityLabelsExpression = 45 [(gogoproto.jsontag) = "workload_identity_labels_expression,omitempty"]; + + // MCPPermissions defines MCP server-related permissions.
+ MCPPermissions MCP = 46 [(gogoproto.jsontag) = "mcp,omitempty"]; } // IdentityCenterAccountAssignment captures an AWS Identity Center account @@ -3803,6 +4075,15 @@ message GitHubPermission { repeated string organizations = 1 [(gogoproto.jsontag) = "orgs,omitempty"]; } +// MCPPermissions defines MCP server-related permissions. +message MCPPermissions { + // Tools defines the list of tools allowed or denied for this role. Each entry + // can be a literal string, a glob pattern (e.g. "prefix_*"), or a regular + // expression (must start with '^' and end with '$'). If the list is empty, no + // tools are allowed. + repeated string tools = 1; +} + // SPIFFERoleCondition sets out which SPIFFE identities this role is allowed or // denied to generate. The Path matcher is required, and is evaluated first. If, // the Path does not match then the other matcher fields are not evaluated. @@ -3986,6 +4267,9 @@ message AccessRequestConditionsReason { (gogoproto.jsontag) = "mode,omitempty", (gogoproto.casttype) = "RequestReasonMode" ]; + // Prompt is a custom message shown to the user for the requested roles or resources searchable + // as other roles. It is only applied to the requested roles and resources that specify the prompt. + string Prompt = 2 [(gogoproto.jsontag) = "prompt,omitempty"]; } // AccessReviewConditions is a matcher for allow/deny restrictions on @@ -4024,6 +4308,31 @@ message AccessRequestAllowedPromotions { repeated AccessRequestAllowedPromotion promotions = 1; } +// ResourceIDList represents a list of ResourceID objects. +message ResourceIDList { + repeated ResourceID resource_ids = 1 [(gogoproto.nullable) = false]; +} + +// LongTermResourceGrouping contains information about how resources can be grouped +// based on Access List promotions for long-term Access Requests. +message LongTermResourceGrouping { + // AccessListToResources maps applicable Access List names to the resources they can grant, + // including the optimal grouping.
+ map<string, ResourceIDList> AccessListToResources = 1 [ + (gogoproto.nullable) = false, + (gogoproto.jsontag) = "grouped_by_access_list" + ]; + // RecommendedAccessList is the name of the Access List that would provide + // access to the most resources. If multiple Access Lists provide the same + // number of resources, the first one found will be used. + string RecommendedAccessList = 2 [(gogoproto.jsontag) = "recommended_access_list"]; + // ValidationMessage is a user-friendly message explaining any grouping error, if CanProceed is false. + string ValidationMessage = 3 [(gogoproto.jsontag) = "validation_message,omitempty"]; + // CanProceed represents the validity of the long-term grouping. If all requested + // resources cannot be grouped together, this will be false. + bool CanProceed = 4 [(gogoproto.jsontag) = "can_proceed"]; +} + // ClaimMapping maps a claim to teleport roles. message ClaimMapping { // Claim is a claim name. @@ -4079,6 +4388,8 @@ message BoolValue { message UserFilter { // SearchKeywords is a list of search keywords to match against resource field values. repeated string SearchKeywords = 1 [(gogoproto.jsontag) = "search_keywords,omitempty"]; + // SkipSystemUsers filters out teleport system users from the results. + bool SkipSystemUsers = 2 [(gogoproto.jsontag) = "skip_system_users,omitempty"]; } // UserV2 is version 2 resource spec of the user @@ -4923,6 +5234,7 @@ message KubernetesClusterV3List { message KubernetesServerV3 { option (gogoproto.goproto_stringer) = false; option (gogoproto.stringer) = false; + // Kind is the Kubernetes server resource kind. Always "kube_server". string Kind = 1 [(gogoproto.jsontag) = "kind"]; // SubKind is an optional resource subkind. @@ -4939,6 +5251,10 @@ message KubernetesServerV3 { (gogoproto.nullable) = false, (gogoproto.jsontag) = "spec" ]; + // Status is the Kubernetes server status. + KubernetesServerStatusV3 status = 6; + // The advertised scope of the server which cannot change once assigned.
+ string scope = 7; } // KubernetesServerSpecV3 is the Kubernetes server spec. @@ -4958,6 +5274,18 @@ message KubernetesServerSpecV3 { KubernetesClusterV3 Cluster = 5 [(gogoproto.jsontag) = "cluster"]; // ProxyIDs is a list of proxy IDs this server is expected to be connected to. repeated string ProxyIDs = 6 [(gogoproto.jsontag) = "proxy_ids,omitempty"]; + + // the name of the Relay group that the server is connected to + string relay_group = 7; + // the list of Relay host IDs that the server is connected to + repeated string relay_ids = 8; +} + +// KubernetesServerStatusV3 is the Kubernetes server status. +message KubernetesServerStatusV3 { + // TargetHealth is the health status of the connection between the Teleport agent + // and the Kubernetes cluster. + TargetHealth target_health = 1; } // WebTokenV3 describes a web token. Web tokens are used as a transport to relay bearer tokens @@ -5161,6 +5489,40 @@ message OIDCConnectorSpecV3 { OIDCConnectorMFASettings MFASettings = 19 [(gogoproto.jsontag) = "mfa,omitempty"]; // PKCEMode represents the configuration state for PKCE (Proof Key for Code Exchange). It can be "enabled" or "disabled" string PKCEMode = 20 [(gogoproto.jsontag) = "pkce_mode,omitempty"]; + // UserMatchers is a set of glob patterns to narrow down which username(s) this auth connector should + // match for identifier-first login. + repeated string UserMatchers = 21 [(gogoproto.jsontag) = "user_matchers,omitempty"]; + // RequestObjectMode determines how JWT-Secured Authorization Requests will be used for authorization + // requests. JARs, or request objects, can provide integrity protection, source authentication, and confidentiality + // for authorization request parameters. + string RequestObjectMode = 22 [(gogoproto.jsontag) = "request_object_mode,omitempty"]; + // EntraIDGroupsProvider configures an out-of-band user groups provider.
+ // It works by following the groups claim source, which is sent for the "groups" + // claim when the user's group membership exceeds the 200-item claim limit. + EntraIDGroupsProvider entra_id_groups_provider = 23; +} + +// EntraIDGroupsProvider configures an out-of-band user groups provider. +// It works by following the groups claim source, which is sent for the +// "groups" claim when the user's group membership exceeds the 200-item claim limit. +message EntraIDGroupsProvider { + // Disabled specifies that the groups provider should be disabled + // even when Entra ID responds with a groups claim source. + // Users may choose to disable it if they are using + // integrations such as SCIM or a similar groups importer, as + // connector-based role mapping may not be needed in such a scenario. + bool disabled = 1; + // GroupType is a user group type filter. Defaults to "security-groups". + // Value can be "security-groups", "directory-roles", "all-groups". + string group_type = 2; + // GraphEndpoint is a Microsoft Graph API endpoint. + // The groups claim source endpoint provided by Entra ID points to the + // now-retired Azure AD Graph endpoint ("https://graph.windows.net"). + // To convert it to the newer Microsoft Graph API endpoint, + // Teleport defaults to the Microsoft Graph global service endpoint ("https://graph.microsoft.com"). + // Update GraphEndpoint to point to a different Microsoft Graph national + // cloud deployment endpoint. + string graph_endpoint = 3; } // MaxAge allows the max_age parameter to be nullable to preserve backwards @@ -5200,6 +5562,12 @@ message OIDCConnectorMFASettings { // 0 to always force re-authentication for MFA checks. This should only be set to a non-zero // value if the IdP is setup to perform MFA checks on top of active user sessions. int64 max_age = 6 [(gogoproto.casttype) = "Duration"]; + // RequestObjectMode determines how JWT-Secured Authorization Requests will be used for authorization + // requests.
JARs, or request objects, can provide integrity protection, source authentication, and confidentiality + // for authorization request parameters. If omitted, MFA flows will default to the `RequestObjectMode` behavior + // specified in the base OIDC connector. Set this property to 'none' to explicitly disable request objects for + // the MFA client. + string RequestObjectMode = 7 [(gogoproto.jsontag) = "request_object_mode,omitempty"]; } // OIDCAuthRequest is a request to authenticate with OIDC @@ -5281,6 +5649,9 @@ message OIDCAuthRequest { teleport.attestation.v1.AttestationStatement tls_attestation_statement = 23 [(gogoproto.jsontag) = "tls_attestation_statement,omitempty"]; // pkce_verifier is used to verified a generated code challenge. string pkce_verifier = 24 [(gogoproto.jsontag) = "pkce_verifier"]; + // LoginHint is an optional username/email provided by the client that will be passed + // to the IdP via the 'login_hint' query parameter. + string login_hint = 25 [(gogoproto.jsontag) = "login_hint,omitempty"]; } // SAMLConnectorV2 represents a SAML connector. @@ -5374,6 +5745,13 @@ message SAMLConnectorSpecV2 { // binding as a default. Setting up PreferredRequestBinding value lets us preserve existing // auth connector behavior and only use http-post binding if it is explicitly configured. string PreferredRequestBinding = 19 [(gogoproto.jsontag) = "preferred_request_binding,omitempty"]; + // UserMatchers is a set of glob patterns to narrow down which username(s) this auth connector should + // match for identifier-first login. + repeated string UserMatchers = 20 [(gogoproto.jsontag) = "user_matchers,omitempty"]; + // IncludeSubject is a flag that indicates whether the Subject element is included in the SAML + // authentication request. Defaults to false. + // Note: Some IdPs will reject requests that contain a Subject. + bool IncludeSubject = 21 [(gogoproto.jsontag) = "include_subject,omitempty"]; } // SAMLConnectorMFASettings contains SAML MFA settings. 
@@ -5487,6 +5865,9 @@ message SAMLAuthRequest { bytes PostForm = 23 [(gogoproto.jsontag) = "post_form,omitempty"]; // ClientVersion is the version of tsh or Proxy that is sending the SAMLAuthRequest request. string ClientVersion = 24 [(gogoproto.jsontag) = "client_version,omitempty"]; + // SubjectIdentifier is an optional username/email provided by the client that will be + // passed to the IdP to prepopulate its login form. + string SubjectIdentifier = 25 [(gogoproto.jsontag) = "subject_identifier,omitempty"]; } // AttributeMapping maps a SAML attribute statement to teleport roles. @@ -5566,6 +5947,9 @@ message GithubConnectorSpecV3 { // ClientRedirectSettings defines which client redirect URLs are allowed for // non-browser SSO logins other than the standard localhost ones. SSOClientRedirectSettings ClientRedirectSettings = 9 [(gogoproto.jsontag) = "client_redirect_settings,omitempty"]; + // UserMatchers is a set of glob patterns to narrow down which username(s) this auth connector should + // match for identifier-first login. + repeated string UserMatchers = 10 [(gogoproto.jsontag) = "user_matchers,omitempty"]; } // GithubAuthRequest is the request to start Github OAuth2 flow. @@ -5967,6 +6351,24 @@ message LockTarget { // ServerID is the host id of the Teleport instance. string ServerID = 9 [(gogoproto.jsontag) = "server_id,omitempty"]; + + // BotInstanceID is the bot instance ID if this is a bot identity and is + // ignored otherwise. + string BotInstanceID = 10 [(gogoproto.jsontag) = "bot_instance_id,omitempty"]; + + // JoinToken is the name of the join token used when this identity originally + // joined. This is only valid for bot identities, and cannot be used to target + // `token`-joined bots. + string JoinToken = 11 [(gogoproto.jsontag) = "join_token,omitempty"]; +} + +// LockFilter encodes optional filters to apply when listing Lock resources. +message LockFilter { + // Targets is a list of targets.
Every returned lock must match at least + // one of the targets. + repeated LockTarget targets = 1; + // InForceOnly specifies whether to return active locks only. + bool in_force_only = 2; } // AddressCondition represents a set of addresses. Presently the addresses are specified @@ -6038,6 +6440,11 @@ message WindowsDesktopServiceSpecV3 { string Hostname = 3 [(gogoproto.jsontag) = "hostname"]; // ProxyIDs is a list of proxy IDs this server is expected to be connected to. repeated string ProxyIDs = 4 [(gogoproto.jsontag) = "proxy_ids,omitempty"]; + + // the name of the Relay group that the server is connected to + string relay_group = 5; + // the list of Relay host IDs that the server is connected to + repeated string relay_ids = 6; } // WindowsDesktopFilter are filters to apply when searching for windows desktops. @@ -6381,6 +6788,9 @@ message Participant { (gogoproto.nullable) = false, (gogoproto.jsontag) = "last_active,omitempty" ]; + + // Cluster is the cluster name the user is authenticated against. + string Cluster = 5 [(gogoproto.jsontag) = "cluster,omitempty"]; } // UIConfigV1 represents the configuration for the web UI served by the proxy service @@ -6806,6 +7216,8 @@ message PluginSpecV1 { PluginNetIQSettings net_iq = 19; // Settings for the GitHub plugin. PluginGithubSettings github = 20; + // Settings for the Intune plugin. + PluginIntuneSettings intune = 21; // Note: If you add a new settings type, please also update the custom JSON for oneof filed // in the `tctl get/edit plugin` plugin command. You can find it in: @@ -6944,6 +7356,29 @@ message PluginJamfSettings { JamfSpecV1 jamf_spec = 1; } +// Defines settings for the Intune plugin. +message PluginIntuneSettings { + option (gogoproto.equal) = true; + + // Tenant is the primary domain name (e.g. contoso.onmicrosoft.com) or the tenant ID (e.g. + // 38d49456-54d4-455d-a8d6-c383c71e0a6d) of an organization within Microsoft Entra ID.
+  //
+  // https://learn.microsoft.com/en-us/partner-center/account-settings/find-ids-and-domain-names#find-the-microsoft-entra-tenant-id-and-primary-domain-name
+  string tenant = 1;
+
+  // login_endpoint points to one of the national deployments of Microsoft Entra ID.
+  // Optional, defaults to "https://login.microsoftonline.com".
+  //
+  // https://learn.microsoft.com/en-us/graph/deployments
+  string login_endpoint = 2;
+
+  // graph_endpoint points to one of the national deployments of Microsoft Graph.
+  // Optional, defaults to "https://graph.microsoft.com".
+  //
+  // https://learn.microsoft.com/en-us/graph/deployments
+  string graph_endpoint = 3;
+}
+
 // Defines settings for the Okta plugin.
 message PluginOktaSettings {
   option (gogoproto.equal) = true;
@@ -7052,6 +7487,11 @@ message PluginOktaSyncSettings {
   // synchronized users. This is allows for a more advanced RBAC setup where not all
   // Okta-originated users are allowed request all Okta-originated resources.
   bool disable_assign_default_roles = 13;
+
+  // TimeBetweenImports controls the time between Okta syncs, i.e. importing Okta users, apps, and
+  // groups to Teleport. This doesn't affect how quickly Teleport changes are propagated to Okta if
+  // bidirectional sync is enabled. The default value is 30m.
+  string time_between_imports = 14;
 }

 // Defines a set of discord channel IDs
@@ -7104,6 +7544,32 @@ message PluginEntraIDSyncSettings {
   // This field is populated on a best-effort basis for legacy plugins but mandatory for plugins created after its introduction.
   // For existing plugins, it is filled in using the entity descriptor url when utilized.
   string entra_app_id = 5;
+
+  // GroupFilters configures which groups should be included or excluded.
+  repeated PluginSyncFilter group_filters = 6;
+}
+
+// PluginSyncFilter can specify inclusion or exclusion of a resource.
+message PluginSyncFilter {
+  option (gogoproto.equal) = true;
+
+  // Include describes that the resource should be explicitly included.
+  oneof include {
+    // Id includes resource matched with an ID.
+    string id = 1 [(gogoproto.jsontag) = "id,omitempty"];
+
+    // NameRegex includes resource matched with a regexp.
+    string name_regex = 2 [(gogoproto.jsontag) = "name_regex,omitempty"];
+  }
+
+  // Exclude specifies which resources should be explicitly excluded.
+  oneof exclude {
+    // ExcludeId excludes resource matched with an ID.
+    string exclude_id = 3 [(gogoproto.jsontag) = "id,omitempty"];
+
+    // ExcludeNameRegex excludes resource matched with a regexp.
+    string exclude_name_regex = 4 [(gogoproto.jsontag) = "name_regex,omitempty"];
+  }
 }

 // EntraIDCredentialsSource defines the credentials source for Entra ID.
@@ -7148,11 +7614,28 @@ message PluginSCIMSettings {
   // SamlConnectorName is the name of the SAML Connector that users provisioned
   // by this SCIM plugin will use to log in to Teleport.
-  string saml_connector_name = 1;
+  // DEPRECATED: Use ConnectorInfo instead.
+  // This is an old field added when the Okta SCIM plugin was created,
+  // when usage was limited to SAML connectors only.
+  string saml_connector_name = 1 [deprecated = true];

   // DefaultRole is the default role assigned to users provisioned by this
   // plugin.
-  string default_role = 2;
+  string default_role = 2 [deprecated = true];
+
+  message ConnectorInfo {
+    option (gogoproto.equal) = true;
+    // Name is the name of the connector.
+    string name = 1 [(gogoproto.jsontag) = "name"];
+    // Type is the type of the connector: types.KindSAML, types.KindOIDC, etc.
+    // Note: The name of the connector is not unique across types.
+    string type = 2 [(gogoproto.jsontag) = "type"];
+  }
+
+  // ConnectorInfo contains information about the user's origin as provided
+  // by the SCIM plugin. It enables matching a SAML/OIDC external user
+  // with a SCIM-persisted user, allowing the ephemeral user entry to be updated to a SCIM user.
+  ConnectorInfo connector_info = 3 [(gogoproto.jsontag) = "connector_info"];
 }

 // PluginDatadogAccessSettings defines the settings for a Datadog Incident Management plugin
@@ -7244,6 +7727,19 @@ message PluginAWSICSettings {
   // Credentials represents the AWS credentials used by the Identity Center
   // integration
   AWSICCredentials credentials = 11 [(gogoproto.jsontag) = "credentials,omitempty"];
+
+  // RolesSyncMode indicates how the Identity Center integration will create and
+  // manage roles representing potential Identity Center Account Assignments.
+  //
+  // Possible values are ALL or NONE:
+  //   ALL: indicates that the AWS Identity Center integration should
+  //        create and maintain roles for all possible Account Assignments.
+  //   NONE: indicates that the AWS Identity Center integration should
+  //         not create any roles representing potential Account
+  //         Assignments.
+  // For backwards compatibility, an empty value is treated as equivalent
+  // to ALL.
+  string roles_sync_mode = 12;
 }

 // AWSICCredentials holds the credentials for authenticating with AWS
@@ -7427,6 +7923,7 @@ message PluginBootstrapCredentialsV1 {

 // PluginIdSecretCredential can be OAuth2-like client_id and client_secret or username and password.
 message PluginIdSecretCredential {
+  option (gogoproto.equal) = true;
   string id = 1;
   string secret = 2;
 }
@@ -7556,6 +8053,10 @@ message PluginOktaStatusV1 {
   // AccessListSyncDetails are status details relating to synchronizing access
   // lists from Okta.
   PluginOktaStatusDetailsAccessListsSync access_lists_sync_details = 5;
+
+  // SystemLogExportDetails are the status details related to the System Logs
+  // exporter.
+  PluginOktaStatusSystemLogExporter system_log_export_details = 6;
 }

 // PluginOktaStatusDetailsSSO are details related to the
@@ -7691,9 +8192,38 @@ message PluginOktaStatusDetailsAccessListsSync {
   string error = 9;
 }

+// PluginOktaStatusSystemLogExporter contains details related to the
+// current status of the Okta integration with regard to system logs sync.
+message PluginOktaStatusSystemLogExporter {
+  // Enabled is whether Okta System Log exporter is enabled.
+  bool enabled = 1;
+
+  // StatusCode indicates the current state of the service.
+  OktaPluginSyncStatusCode status_code = 2;
+
+  // LastSuccessful is the date of the last successful run.
+  google.protobuf.Timestamp last_successful = 3 [
+    (gogoproto.stdtime) = true,
+    (gogoproto.nullable) = true,
+    (gogoproto.jsontag) = "last_successful"
+  ];
+
+  // LastFailed is the date of the last failed run.
+  google.protobuf.Timestamp last_failed = 4 [
+    (gogoproto.stdtime) = true,
+    (gogoproto.nullable) = true,
+    (gogoproto.jsontag) = "last_failed"
+  ];
+
+  // Error contains a textual description of the reason the last synchronization
+  // failed. Only valid when StatusCode is OKTA_PLUGIN_SYNC_STATUS_CODE_ERROR.
+  string error = 9;
+}
+
 // PluginCredentialsV1 represents "live" credentials
 // that are used by the plugin to authenticate to the 3rd party API.
 message PluginCredentialsV1 {
+  option (gogoproto.equal) = true;
   oneof credentials {
     PluginOAuth2AccessTokenCredentials oauth2_access_token = 1;
     PluginBearerTokenCredentials bearer_token = 2;
@@ -7703,6 +8233,8 @@
 }

 message PluginOAuth2AccessTokenCredentials {
+  option (gogoproto.equal) = true;
+
   string access_token = 1;
   string refresh_token = 2;
   google.protobuf.Timestamp expires = 3 [
@@ -7712,6 +8244,8 @@
 }

 message PluginBearerTokenCredentials {
+  option (gogoproto.equal) = true;
+
   // Token is the literal bearer token to be submitted to the 3rd-party API provider.
   string token = 1;
@@ -7750,6 +8284,7 @@ message PluginStaticCredentialsV1 {

 // PluginStaticCredentialsSpecV1 is the specification for the static credentials object.
 message PluginStaticCredentialsSpecV1 {
+  option (gogoproto.equal) = true;
   oneof credentials {
     string APIToken = 1;
     PluginStaticCredentialsBasicAuth BasicAuth = 2;
@@ -7761,6 +8296,8 @@

 // PluginStaticCredentialsBasicAuth represents username and password credentials for a plugin.
 message PluginStaticCredentialsBasicAuth {
+  option (gogoproto.equal) = true;
+
   // Username is the username to use for basic auth.
   string Username = 1 [(gogoproto.jsontag) = "username"];
@@ -7770,6 +8307,8 @@

 // PluginStaticCredentialsOAuthClientSecret represents an oauth client id and secret.
 message PluginStaticCredentialsOAuthClientSecret {
+  option (gogoproto.equal) = true;
+
   // ClientId is the client ID to use for OAuth client secret.
   string ClientId = 1 [(gogoproto.jsontag) = "client_id"];
@@ -7780,6 +8319,8 @@

 // PluginStaticCredentialsSSHCertAuthorities contains the active SSH CAs used
 // for the integration or plugin.
 message PluginStaticCredentialsSSHCertAuthorities {
+  option (gogoproto.equal) = true;
+
   // CertAuthorities contains the active SSH CAs used for the integration or
   // plugin.
   repeated SSHKeyPair cert_authorities = 1;
@@ -8123,6 +8664,12 @@ message IntegrationV1 {
     (gogoproto.nullable) = false,
     (gogoproto.jsontag) = "spec"
   ];
+
+  // Status contains the Integration status.
+  IntegrationStatusV1 Status = 3 [
+    (gogoproto.nullable) = false,
+    (gogoproto.jsontag) = "status"
+  ];
 }

 // IntegrationSpecV1 contains properties of all the supported integrations.
@@ -8142,6 +8689,12 @@ message IntegrationSpecV1 {
   PluginCredentialsV1 credentials = 4;
 }

+// IntegrationStatusV1 contains the status of the integration.
+message IntegrationStatusV1 {
+  // AWSRolesAnywhere contains the specific status fields related to the AWS Roles Anywhere Integration subkind.
+  AWSRAIntegrationStatusV1 AWSRolesAnywhere = 1 [(gogoproto.jsontag) = "aws_ra,omitempty"];
+}
+
 // AWSOIDCIntegrationSpecV1 contains the spec properties for the AWS OIDC SubKind Integration.
 message AWSOIDCIntegrationSpecV1 {
   // RoleARN contains the Role ARN used to set up the Integration.
@@ -8152,7 +8705,7 @@ message AWSOIDCIntegrationSpecV1 {
   // This bucket/prefix/* files must be publicly accessible and contain the following:
   // > .well-known/openid-configuration
   // > .well-known/jwks
-  // Format: s3:///
+  // Format: `s3:///`
   // Optional. The proxy's endpoint is used if it is not specified.
   //
   // DEPRECATED: Thumbprint validation requires the issuer to update the IdP in AWS everytime the issuer changes the certificate.
     deprecated = true
   ];
@@ -8164,9 +8717,9 @@
-  // Audience is used to record a name of a plugin or a discover service in Teleport
-  // that depends on this integration.
-  // Audience value can be empty or configured with supported preset audience type.
+  // Audience is used to record a name of a plugin or a discover service in
+  // Teleport that depends on this integration.
+  // Audience value can either be empty or "aws-identity-center".
   // Preset audience may impose specific behavior on the integration CRUD API,
   // such as preventing integration from update or deletion. Empty audience value
   // should be treated as a default and backward-compatible behavior of the integration.
@@ -8211,6 +8764,55 @@ message AWSRolesAnywhereProfileSyncConfig {
   bool ProfileAcceptsRoleSessionName = 3 [(gogoproto.jsontag) = "profile_accepts_role_session_name"];
   // RoleARN is the ARN of the IAM Role to assume when accessing the AWS APIs.
   string RoleARN = 4 [(gogoproto.jsontag) = "role_arn"];
+  // ProfileNameFilters is a list of filters applied to the profile name.
+  // Only matching profiles will be synchronized as application servers.
+  // If empty, no filtering is applied.
+  //
+  // Filters can be globs, for example:
+  //
+  //   profile*
+  //   *name*
+  //
+  // Or regexes if they're prefixed and suffixed with ^ and $, for example:
+  //
+  //   ^profile.*$
+  //   ^.*name.*$
+  repeated string ProfileNameFilters = 5 [(gogoproto.jsontag) = "profile_name_filters"];
+}
+
+// AWSRAIntegrationStatusV1 contains the status properties for the AWS IAM Roles Anywhere SubKind Integration.
+message AWSRAIntegrationStatusV1 {
+  // LastProfileSync is the summary of the last profile sync iteration.
+  AWSRolesAnywhereProfileSyncIterationSummary LastProfileSync = 1 [
+    (gogoproto.nullable) = true,
+    (gogoproto.jsontag) = "last_profile_sync"
+  ];
+}
+
+// AWSRolesAnywhereProfileSyncIterationSummary contains the summary of a single profile sync iteration.
+message AWSRolesAnywhereProfileSyncIterationSummary {
+  // StartTime is the time when the sync iteration started.
+  google.protobuf.Timestamp StartTime = 1 [
+    (gogoproto.stdtime) = true,
+    (gogoproto.nullable) = false,
+    (gogoproto.jsontag) = "start_time"
+  ];
+
+  // EndTime is the time when the sync iteration ended.
+  google.protobuf.Timestamp EndTime = 2 [
+    (gogoproto.stdtime) = true,
+    (gogoproto.nullable) = false,
+    (gogoproto.jsontag) = "end_time"
+  ];
+
+  // Status is the result of the sync iteration: SUCCESS or ERROR.
+  string Status = 3 [(gogoproto.jsontag) = "status"];
+
+  // ErrorMessage holds the error message when status is ERROR.
+  string ErrorMessage = 4 [(gogoproto.jsontag) = "error_message"];
+
+  // SyncedProfiles is the number of profiles synchronized as application servers.
+  int32 SyncedProfiles = 5 [(gogoproto.jsontag) = "synced_profiles"];
 }

 // HeadlessAuthentication holds data for an ongoing headless authentication attempt.
@@ -8454,10 +9056,36 @@ message AWSMatcher {
   // KubeAppDiscovery controls whether Kubernetes App Discovery will be enabled for agents running on
   // discovered clusters, currently only affects AWS EKS discovery in integration mode.
   bool KubeAppDiscovery = 8 [(gogoproto.jsontag) = "kube_app_discovery,omitempty"];
-  // SetupAccessForARN is the role that the discovery service should create EKS Access Entries for.
+  // SetupAccessForARN is the role that the Discovery Service should create EKS Access Entries for.
   // This value should match the IAM identity that Teleport Kubernetes Service uses.
-  // If this value is empty, the discovery service will attempt to set up access for its own identity (self).
+  // If this value is empty, the Discovery Service will attempt to set up access for its own identity (self).
   string SetupAccessForARN = 9 [(gogoproto.jsontag) = "setup_access_for_arn,omitempty"];
+  // Organization is an AWS Organization matcher for discovering resources across multiple accounts under an Organization.
+  AWSOrganizationMatcher Organization = 10 [(gogoproto.jsontag) = "organization,omitempty"];
+}
+
+// AWSOrganizationMatcher specifies an Organization and rules for discovering accounts under that organization.
+message AWSOrganizationMatcher {
+  // OrganizationID is the AWS Organization ID to match against.
+  // Required.
+  string OrganizationID = 1 [(gogoproto.jsontag) = "organization_id,omitempty"];
+
+  // OrganizationalUnits contains rules for matching AWS accounts based on their Organizational Units.
+  AWSOrganizationUnitsMatcher OrganizationalUnits = 2 [(gogoproto.jsontag) = "organizational_units,omitempty"];
+}
+
+// AWSOrganizationUnitsMatcher contains rules for matching accounts under an Organization.
+// Accounts that belong to an excluded Organizational Unit, and its children, will be excluded even if they were included.
+message AWSOrganizationUnitsMatcher {
+  // Include is a list of AWS Organizational Unit IDs to match.
+  // Only exact matches or wildcard (*) are supported.
+  // If empty, all Organizational Units are included by default.
+  repeated string Include = 1 [(gogoproto.jsontag) = "include,omitempty"];
+
+  // Exclude is a list of AWS Organizational Unit IDs to exclude.
+  // Only exact matches or wildcard (*) are supported.
+  // If empty, no Organizational Units are excluded by default.
+  repeated string Exclude = 2 [(gogoproto.jsontag) = "exclude,omitempty"];
 }

 // AssumeRole provides a role ARN and ExternalID to assume an AWS role
@@ -8496,6 +9124,34 @@ message InstallerParams {
   // 1: uses script mode
   // 2: uses eice mode
   InstallParamEnrollMode EnrollMode = 8 [(gogoproto.jsontag) = "enroll_mode,omitempty"];
+  // Suffix indicates the installation suffix for the Teleport installation.
+  // Set this value if you want multiple installations of Teleport.
+  // See the --install-suffix flag in the teleport-update program.
+  // Note: only supported for Amazon EC2.
+  // Suffix name can only contain alphanumeric characters and hyphens.
+  string Suffix = 9 [(gogoproto.jsontag) = "suffix,omitempty"];
+  // UpdateGroup indicates the update group for the Teleport installation.
+  // This value is used to group installations in order to update them in batches.
+  // See the --group flag in the teleport-update program.
+  // Note: only supported for Amazon EC2.
+  // Group name can only contain alphanumeric characters and hyphens.
+  string UpdateGroup = 10 [(gogoproto.jsontag) = "update_group,omitempty"];
+  // HTTPProxySettings defines HTTP proxy settings for making HTTP requests.
+  // When set, this will set the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables before running the installation.
+  HTTPProxySettings HTTPProxySettings = 11 [(gogoproto.jsontag) = "http_proxy_settings,omitempty"];
+}
+
+// HTTPProxySettings defines HTTP proxy settings for making HTTP and HTTPS requests.
+message HTTPProxySettings {
+  // HTTPProxy is the URL for the HTTP proxy to use when making requests.
+  // When applied, this will set the HTTP_PROXY environment variable.
+  string HTTPProxy = 1 [(gogoproto.jsontag) = "http_proxy,omitempty"];
+  // HTTPSProxy is the URL for the HTTPS Proxy to use when making requests.
+  // When applied, this will set the HTTPS_PROXY environment variable.
+  string HTTPSProxy = 2 [(gogoproto.jsontag) = "https_proxy,omitempty"];
+  // NoProxy is a comma-separated list of URLs that will be excluded from proxying.
+  // When applied, this will set the NO_PROXY environment variable.
+  string NoProxy = 3 [(gogoproto.jsontag) = "no_proxy,omitempty"];
 }

 // InstallParamEnrollMode is the mode used to enroll the node into the cluster.
@@ -8543,6 +9199,9 @@ message AzureMatcher {
   // Params sets the join method when installing on
   // discovered Azure nodes.
   InstallerParams Params = 6 [(gogoproto.jsontag) = "install_params,omitempty"];
+  // Integration is the integration name used to generate credentials to interact with Azure APIs.
+  // Environment credentials will not be used when this value is set.
+  string Integration = 7 [(gogoproto.jsontag) = "integration,omitempty"];
 }

 // GCPMatcher matches GCP resources.
@@ -8619,6 +9278,17 @@ message AccessGraphAWSSyncCloudTrailLogs {
   string SQSQueue = 2 [(gogoproto.jsontag) = "sqs_queue,omitempty"];
 }

+// AccessGraphAWSSyncEKSAuditLogs defines the settings for ingesting Kubernetes apiserver
+// audit logs from EKS clusters.
+message AccessGraphAWSSyncEKSAuditLogs {
+  // The tags of EKS clusters for which apiserver audit logs should be fetched.
+  wrappers.LabelValues Tags = 1 [
+    (gogoproto.nullable) = false,
+    (gogoproto.jsontag) = "tags,omitempty",
+    (gogoproto.customtype) = "Labels"
+  ];
+}
+
 // AccessGraphAWSSync is a configuration for AWS Access Graph service poll service.
 message AccessGraphAWSSync {
   // Regions are AWS regions to import resources from.
@@ -8629,6 +9299,7 @@ message AccessGraphAWSSync {
   string Integration = 4 [(gogoproto.jsontag) = "integration,omitempty"];
   // Configuration settings for collecting AWS CloudTrail logs via an SQS queue.
   AccessGraphAWSSyncCloudTrailLogs cloud_trail_logs = 5 [(gogoproto.jsontag) = "cloud_trail_logs,omitempty"];
+  AccessGraphAWSSyncEKSAuditLogs eks_audit_logs = 6 [(gogoproto.jsontag) = "eks_audit_logs,omitempty"];
 }

 // AccessGraphAzureSync is a configuration for Azure Access Graph service poll service.
diff --git a/api/proto/teleport/machineid/v1/bot_instance.proto b/api/proto/teleport/machineid/v1/bot_instance.proto
index 68ce4835b2139..d31b9b4947262 100644
--- a/api/proto/teleport/machineid/v1/bot_instance.proto
+++ b/api/proto/teleport/machineid/v1/bot_instance.proto
@@ -20,6 +20,7 @@
 import "google/protobuf/duration.proto";
 import "google/protobuf/struct.proto";
 import "google/protobuf/timestamp.proto";
 import "teleport/header/v1/metadata.proto";
+import "teleport/legacy/types/types.proto";
 import "teleport/workloadidentity/v1/join_attrs.proto";

 option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/machineid/v1;machineidv1";
@@ -83,11 +84,42 @@ message BotInstanceStatusHeartbeat {
   string architecture = 8;
   // The OS of the host that `tbot` is running on, determined by runtime.GOOS.
   string os = 9;
+  // Identifies the external updater process.
+  string external_updater = 10;
+  // Identifies the external updater version. Empty if no updater is configured.
+  string external_updater_version = 11;
+  // Information provided by the external updater, including the update group
+  // and updater status.
+  types.UpdaterV2Info updater_info = 12;
+
+  // Identifies whether the bot is running in the tbot binary or embedded in
+  // another component.
+  BotKind kind = 13;
+
   // In future iterations, additional information can be submitted here.
   // For example, the configuration of `tbot` or the health of individual
   // outputs.
 }

+// BotKind identifies whether the bot is the tbot binary or embedded in another
+// component.
+enum BotKind {
+  // The enum zero-value; it means no kind was included.
+  BOT_KIND_UNSPECIFIED = 0;
+
+  // Means the bot is running the tbot binary.
+  BOT_KIND_TBOT = 1;
+
+  // Means the bot is running inside one of our Terraform providers.
+  BOT_KIND_TERRAFORM_PROVIDER = 2;
+
+  // Means the bot is running inside the Teleport Kubernetes operator.
+  BOT_KIND_KUBERNETES_OPERATOR = 3;
+
+  // Means the bot is running inside tctl (e.g. `tctl terraform env`).
+  BOT_KIND_TCTL = 4;
+}
+
 // BotInstanceStatusAuthentication contains information about a join or renewal.
 // Ths information is entirely sourced by the Auth Server and can be trusted.
@@ -135,4 +167,47 @@ message BotInstanceStatus {
   BotInstanceStatusHeartbeat initial_heartbeat = 3;
   // The N most recent heartbeats for this bot instance.
   repeated BotInstanceStatusHeartbeat latest_heartbeats = 4;
+  // The health of the services/outputs `tbot` is running.
+  repeated BotInstanceServiceHealth service_health = 5;
+}
+
+// BotInstanceServiceHealth is a snapshot of a `tbot` service's health.
+message BotInstanceServiceHealth {
+  // Service identifies the service.
+  BotInstanceServiceIdentifier service = 1;
+
+  // Status describes the service's healthiness.
+  BotInstanceHealthStatus status = 2;
+
+  // Reason is a human-readable explanation for the service's status. It might
+  // include an error message.
+  optional string reason = 3;
+
+  // UpdatedAt is the time at which the service's health last changed.
+  google.protobuf.Timestamp updated_at = 4;
+}
+
+// BotInstanceServiceIdentifier uniquely identifies a `tbot` service.
+message BotInstanceServiceIdentifier {
+  // Type of service (e.g. database-tunnel, ssh-multiplexer).
+  string type = 1;
+
+  // Name of the service, either given by the user or auto-generated.
+  string name = 2;
+}
+
+// BotInstanceHealthStatus describes the healthiness of a `tbot` service.
+enum BotInstanceHealthStatus {
+  // The enum zero-value; it means no status was included.
+  BOT_INSTANCE_HEALTH_STATUS_UNSPECIFIED = 0;
+
+  // Means the service is still "starting up" and hasn't reported its status.
+  BOT_INSTANCE_HEALTH_STATUS_INITIALIZING = 1;
+
+  // Means the service is healthy and ready to serve traffic, or it has
+  // recently succeeded in generating an output.
+  BOT_INSTANCE_HEALTH_STATUS_HEALTHY = 2;
+
+  // Means the service is failing to serve traffic or generate output.
+  BOT_INSTANCE_HEALTH_STATUS_UNHEALTHY = 3;
 }
diff --git a/api/proto/teleport/machineid/v1/bot_instance_service.proto b/api/proto/teleport/machineid/v1/bot_instance_service.proto
index 8f1d2fb2fefb7..bc3469f895ab6 100644
--- a/api/proto/teleport/machineid/v1/bot_instance_service.proto
+++ b/api/proto/teleport/machineid/v1/bot_instance_service.proto
@@ -17,6 +17,7 @@
 syntax = "proto3";

 package teleport.machineid.v1;

 import "google/protobuf/empty.proto";
+import "teleport/legacy/types/types.proto";
 import "teleport/machineid/v1/bot_instance.proto";

 option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/machineid/v1;machineidv1";
@@ -45,6 +46,41 @@ message ListBotInstancesRequest {
   string page_token = 3;
   // A search term used to filter the results. If non-empty, it's used to match against supported fields.
   string filter_search_term = 4;
+  // The sort config to use for the results. If empty, the default sort field and order are used.
+  types.SortBy sort = 5;
+}
+
+// Request for ListBotInstancesV2.
+//
+// Follows the pagination semantics of
+// https://cloud.google.com/apis/design/standard_methods#list
+message ListBotInstancesV2Request {
+  // The maximum number of items to return.
+  // The server may impose a different page size at its discretion.
+  int32 page_size = 1;
+  // The page_token value returned from a previous ListBotInstancesV2 request,
+  // if any.
+  string page_token = 2;
+  // The sort field to use for the results. If empty, the default sort field is
+  // used.
+  string sort_field = 3;
+  // The sort order to use for the results. If empty, the default sort order is
+  // used.
+  bool sort_desc = 4;
+  // Fields used to filter the results.
+  Filters filter = 5;
+
+  // Filters contains fields to be used to filter the results.
+  message Filters {
+    // The name of the Bot to list BotInstances for. If non-empty, only
+    // BotInstances for that bot will be listed.
+    string bot_name = 1;
+    // A search term used to filter the results. If non-empty, it's used to
+    // match against supported fields.
+    string search_term = 2;
+    // A Teleport predicate language query used to filter the results.
+    string query = 3;
+  }
 }

 // Response for ListBotInstances.
@@ -68,6 +104,9 @@ message DeleteBotInstanceRequest {
 message SubmitHeartbeatRequest {
   // The heartbeat data to submit.
   BotInstanceStatusHeartbeat heartbeat = 1;
+
+  // The health of the services/outputs `tbot` is running.
+  repeated BotInstanceServiceHealth service_health = 2;
 }

 // The response for SubmitHeartbeat.
@@ -80,7 +119,12 @@ service BotInstanceService {
   // GetBotInstance returns the specified BotInstance resource.
   rpc GetBotInstance(GetBotInstanceRequest) returns (BotInstance);
   // ListBotInstances returns a page of BotInstance resources.
-  rpc ListBotInstances(ListBotInstancesRequest) returns (ListBotInstancesResponse);
+  // Deprecated: Use ListBotInstancesV2 instead.
+  rpc ListBotInstances(ListBotInstancesRequest) returns (ListBotInstancesResponse) {
+    option deprecated = true;
+  }
+  // ListBotInstancesV2 returns a page of BotInstance resources.
+  rpc ListBotInstancesV2(ListBotInstancesV2Request) returns (ListBotInstancesResponse);
   // DeleteBotInstance hard deletes the specified BotInstance resource.
   rpc DeleteBotInstance(DeleteBotInstanceRequest) returns (google.protobuf.Empty);
   // SubmitHeartbeat submits a heartbeat for a BotInstance.
diff --git a/api/proto/teleport/okta/v1/okta_service.proto b/api/proto/teleport/okta/v1/okta_service.proto
index 4e6c562dec2ec..541e94939a5ae 100644
--- a/api/proto/teleport/okta/v1/okta_service.proto
+++ b/api/proto/teleport/okta/v1/okta_service.proto
@@ -138,6 +143,11 @@ message CreateIntegrationRequest {
   bool enable_system_log_export = 11;
   // Whether to assign the builtin okta-requester role to all Okta synced users.
   bool disable_assign_default_roles = 12;
+  // TimeBetweenImports controls the time between Okta syncs, i.e. importing Okta users, apps, and
+  // groups to Teleport. This doesn't affect how quickly Teleport changes are propagated to Okta if
+  // bidirectional sync is enabled. It will be rounded down to the nearest second. The default value
+  // is 1800 (30 minutes).
+  google.protobuf.Duration time_between_imports = 13;
 }

 // UpdateIntegrationRequest is the request message for updating an existing Okta integration.
@@ -166,6 +171,11 @@
   bool enable_system_log_export = 11;
   // Whether to assign the builtin okta-requester role to all Okta synced users.
   bool disable_assign_default_roles = 12;
+  // TimeBetweenImports controls the time between Okta syncs, i.e. importing Okta users, apps, and
+  // groups to Teleport. This doesn't affect how quickly Teleport changes are propagated to Okta if
+  // bidirectional sync is enabled. It will be rounded down to the nearest second. The default
+  // value is 1800 (30 minutes).
+  google.protobuf.Duration time_between_imports = 13;
 }

 // AccessListSettings contains the settings for access list synchronization.
diff --git a/api/proto/teleport/plugins/v1/plugin_service.proto b/api/proto/teleport/plugins/v1/plugin_service.proto
index 95599618cd811..fd5677cdfa6d7 100644
--- a/api/proto/teleport/plugins/v1/plugin_service.proto
+++ b/api/proto/teleport/plugins/v1/plugin_service.proto
@@ -143,6 +143,37 @@ message SearchPluginStaticCredentialsRequest {
   map labels = 1;
 }

+// CredentialQuery is a set of values to match when searching for credentials.
+message CredentialQuery {
+  map labels = 1;
+}
+
+// UpdatePluginStaticCredentialsRequest holds information for updating a plugin
+// static credential. The service will attempt to find the credential to update
+// based on the supplied plugin name and labels.
+message UpdatePluginStaticCredentialsRequest {
+  oneof target {
+    // Name is the name of the plugin static credentials resource that we're
+    // targeting.
+    string name = 1;
+
+    // Query is the search query that a credential must match in order to be
+    // updated. The update will only proceed if exactly one credential matches
+    // the search query.
+    CredentialQuery query = 2;
+  }
+
+  // Credential is the payload containing the updated credential. Only the spec
+  // is allowed to be updated via this interface.
+  types.PluginStaticCredentialsSpecV1 credential = 3;
+}
+
+// UpdatePluginStaticCredentialsResponse holds the updated credential returned
+// from UpdatePluginStaticCredentials.
+message UpdatePluginStaticCredentialsResponse {
+  types.PluginStaticCredentialsV1 credential = 1;
+}
+
 // SearchPluginStaticCredentialsResponse is the response type for
 // SearchPluginStaticCredentials
 message SearchPluginStaticCredentialsResponse {
@@ -210,10 +241,41 @@ service PluginService {
   // RoleProxy.
   rpc SearchPluginStaticCredentials(SearchPluginStaticCredentialsRequest) returns (SearchPluginStaticCredentialsResponse);

+  // UpdatePluginStaticCredentials updates an existing plugin static credentials resource.
+  rpc UpdatePluginStaticCredentials(UpdatePluginStaticCredentialsRequest) returns (UpdatePluginStaticCredentialsResponse);
+
   // NeedsCleanup will indicate whether a plugin of the given type needs cleanup
   // before it can be created.
   rpc NeedsCleanup(NeedsCleanupRequest) returns (NeedsCleanupResponse);

   // Cleanup will clean up the resources for the given plugin type.
   rpc Cleanup(CleanupRequest) returns (google.protobuf.Empty);
+
+  // CreatePluginOauthToken issues a short-lived OAuth access token for the specified plugin.
+  //
+  // This endpoint supports the OAuth 2.0 "client_credentials" grant type, where the plugin
+  // authenticates using its client ID and client secret.
+  rpc CreatePluginOauthToken(CreatePluginOauthTokenRequest) returns (CreatePluginOauthTokenResponse);
+}
+
+// CreatePluginOauthTokenRequest is the request type for creating an OAuth token for a plugin.
+message CreatePluginOauthTokenRequest {
+  // plugin_name is the name of the plugin for which the OAuth token is requested.
+  string plugin_name = 1;
+  // client_id is the OAuth client identifier issued to the plugin.
+  string client_id = 2;
+  // client_secret is the secret associated with the client_id.
+  string client_secret = 3;
+  // grant_type is the OAuth 2.0 grant type being used. Currently, only "client_credentials" is supported.
+  string grant_type = 4;
+}
+
+// CreatePluginOauthTokenResponse is the response type for a successful OAuth token creation.
+message CreatePluginOauthTokenResponse {
+  // access_token is the generated token issued to the plugin.
+  string access_token = 1;
+  // token_type describes the type of the token issued.
+  string token_type = 2;
+  // expires_in is the number of seconds until the token expires.
+ int64 expires_in = 3; } diff --git a/api/proto/teleport/presence/v1/relay_server.proto b/api/proto/teleport/presence/v1/relay_server.proto new file mode 100644 index 0000000000000..1354ecaee06b4 --- /dev/null +++ b/api/proto/teleport/presence/v1/relay_server.proto @@ -0,0 +1,64 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +syntax = "proto3"; + +package teleport.presence.v1; + +import "teleport/header/v1/metadata.proto"; + +option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/presence/v1;presencev1"; + +// A heartbeat for a relay service; this message serves as both the type used in +// the v1 service and as the canonical v1 storage format (in protojson). +message RelayServer { + // fixed string, "relay_server". + string kind = 1; + // fixed string, "". + string sub_kind = 2; + // fixed string, "v1". 
+ string version = 3; + + teleport.header.v1.Metadata metadata = 4; + + // resource spec + message Spec { + // host IDs of Proxy Service instances that this server is available on + // through a reverse tunnel + repeated string proxy_ids = 1; + + // configured hostname (or nodename) of the machine, for troubleshooting and + // debugging + string hostname = 2; + + // the name of the Relay group this server belongs to + string relay_group = 3; + + // address and port that this server is reachable at by other Relay Service + // instances of the same group + string peer_addr = 4; + + // random string chosen for the duration of the process, for troubleshooting + // and debugging + string nonce = 5; + + // set after the Teleport instance has received a termination signal (but + // hasn't necessarily begun shutting down) + bool terminating = 6; + } + Spec spec = 5; + // The advertised scope of the server. A server's scope cannot change once assigned, so + // heartbeats must include a scope value matching the one declared in the hello message. + string scope = 6; +} diff --git a/api/proto/teleport/presence/v1/service.proto b/api/proto/teleport/presence/v1/service.proto index 325d5b68ee093..68e07a2fb49b6 100644 --- a/api/proto/teleport/presence/v1/service.proto +++ b/api/proto/teleport/presence/v1/service.proto @@ -19,6 +19,7 @@ package teleport.presence.v1; import "google/protobuf/empty.proto"; import "google/protobuf/field_mask.proto"; import "teleport/legacy/types/types.proto"; +import "teleport/presence/v1/relay_server.proto"; option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/presence/v1;presencev1"; @@ -39,6 +40,18 @@ service PresenceService { rpc UpsertReverseTunnel(UpsertReverseTunnelRequest) returns (types.ReverseTunnelV2); // DeleteReverseTunnel removes an existing ReverseTunnel by name. rpc DeleteReverseTunnel(DeleteReverseTunnelRequest) returns (google.protobuf.Empty); + + // GetRelayServer returns a single relay_server by name.
+ rpc GetRelayServer(GetRelayServerRequest) returns (GetRelayServerResponse); + // ListRelayServers returns a page of relay_server resources. + rpc ListRelayServers(ListRelayServersRequest) returns (ListRelayServersResponse); + // DeleteRelayServer deletes a relay_server resource by name. + rpc DeleteRelayServer(DeleteRelayServerRequest) returns (DeleteRelayServerResponse); + + // ListAuthServers returns a page of Auth servers. + rpc ListAuthServers(ListAuthServersRequest) returns (ListAuthServersResponse); + // ListProxyServers returns a page of Proxy servers. + rpc ListProxyServers(ListProxyServersRequest) returns (ListProxyServersResponse); } // Request for GetRemoteCluster @@ -115,3 +128,77 @@ message DeleteReverseTunnelRequest { // Name is the name of the ReverseTunnel to delete. string name = 1; } + +// Request message for the PresenceService.GetRelayServer rpc. +message GetRelayServerRequest { + string name = 1; +} + +// Response message for the PresenceService.GetRelayServer rpc. +message GetRelayServerResponse { + RelayServer relay_server = 1; +} + +// Request message for the PresenceService.ListRelayServers rpc. +message ListRelayServersRequest { + // The maximum number of items to return. The service may return fewer than + // this value. If unspecified, the service will use a sensible default. + int64 page_size = 1; + + // A pagination token returned from a previous request. If empty, the request + // will return the first page. + string page_token = 2; +} + +// Response message for the PresenceService.ListRelayServers rpc. +message ListRelayServersResponse { + repeated RelayServer relays = 1; + + // A token that can be sent as the page_token to retrieve the next page. If + // this field is empty, there are no more pages. + string next_page_token = 2; +} + +// Request message for the PresenceService.DeleteRelayServer rpc. +message DeleteRelayServerRequest { + string name = 1; +} + +// Response message for the PresenceService.DeleteRelayServer rpc. 
+message DeleteRelayServerResponse {} + +// Request message for the PresenceService.ListAuthServers rpc. +message ListAuthServersRequest { + // The maximum number of items to return. + // The server may impose a different page size at its discretion. + int32 page_size = 1; + // The next_page_token value returned from a previous List request, if any. + string page_token = 2; +} + +// Response message for the PresenceService.ListAuthServers rpc. +message ListAuthServersResponse { + // A list of auth server resources. + repeated types.ServerV2 servers = 1; + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. + string next_page_token = 2; +} + +// Request message for the PresenceService.ListProxyServers rpc. +message ListProxyServersRequest { + // The maximum number of items to return. + // The server may impose a different page size at its discretion. + int32 page_size = 1; + // The next_page_token value returned from a previous List request, if any. + string page_token = 2; +} + +// Response message for the PresenceService.ListProxyServers rpc. +message ListProxyServersResponse { + // A list of proxy server resources. + repeated types.ServerV2 servers = 1; + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. + string next_page_token = 2; +} diff --git a/api/proto/teleport/provisioning/v1/provisioning_service.proto b/api/proto/teleport/provisioning/v1/provisioning_service.proto deleted file mode 100644 index a477786bd0698..0000000000000 --- a/api/proto/teleport/provisioning/v1/provisioning_service.proto +++ /dev/null @@ -1,34 +0,0 @@ -// Copyright 2024 Gravitational, Inc. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
-// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -syntax = "proto3"; - -package teleport.provisioning.v1; - -import "google/protobuf/empty.proto"; - -option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/provisioning/v1;provisioningv1"; - -// ProvisioningService provides methods to manage Provisioning resources. -service ProvisioningService { - // DeleteDownstreamProvisioningStates deletes all Identity Center provisioning state for a given downstream. - rpc DeleteDownstreamProvisioningStates(DeleteDownstreamProvisioningStatesRequest) returns (google.protobuf.Empty); -} - -// DeleteDownstreamProvisioningStatesRequest is a request to delete all provisioning states for -// a given DownstreamId. -message DeleteDownstreamProvisioningStatesRequest { - // DownstreamId identifies the downstream service that this state applies to. - string downstream_id = 1; -} diff --git a/api/proto/teleport/recordingencryption/v1/recording_encryption.proto b/api/proto/teleport/recordingencryption/v1/recording_encryption.proto new file mode 100644 index 0000000000000..e8addac57d980 --- /dev/null +++ b/api/proto/teleport/recordingencryption/v1/recording_encryption.proto @@ -0,0 +1,91 @@ +// Copyright 2025 Gravitational, Inc +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +syntax = "proto3"; + +package teleport.recordingencryption.v1; + +import "teleport/header/v1/metadata.proto"; +import "teleport/legacy/types/types.proto"; + +option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/recordingencryption/v1;recordingencryptionv1"; + +// The possible states a KeyPair can be in. +enum KeyPairState { + // Unspecified value + KEY_PAIR_STATE_UNSPECIFIED = 0; + // Represents an active key. + KEY_PAIR_STATE_ACTIVE = 1; + // Represents a key in the process of being rotated. + KEY_PAIR_STATE_ROTATING = 2; + // Represents a key being rotated in that is inaccessible to at least one + // auth server. + KEY_PAIR_STATE_INACCESSIBLE = 3; +} + +// A key pair used with age to wrap and unwrap file keys for session recording encryption. +message KeyPair { + // A key pair used with age to wrap and unwrap file keys for session recording encryption. + types.EncryptionKeyPair key_pair = 1; + // The current state of the key pair. + KeyPairState state = 2; +} + +// RecordingEncryptionSpec contains the active key set for encrypted session recording. +message RecordingEncryptionSpec { + reserved 1; // active_keys + reserved "active_keys"; + + // A list of active key pairs used for session recording encryption. The unique set of + // active public keys are used as recipients during age encryption. This allows any + // active private key to be used during decryption which guards against recordings being + // inaccessible to auth servers waiting for key rotation. 
+ repeated KeyPair active_key_pairs = 2; +} + +// RecordingEncryptionStatus contains the status of the RecordingEncryption resource. +message RecordingEncryptionStatus {} + +// RecordingEncryption contains cluster state for encrypted session recordings. +message RecordingEncryption { + string kind = 1; + string sub_kind = 2; + string version = 3; + teleport.header.v1.Metadata metadata = 4; + RecordingEncryptionSpec spec = 5; + RecordingEncryptionStatus status = 6; +} + +// A rotated key pair previously used with age to wrap and unwrap file keys for session recording +// encryption. +message RotatedKeySpec { + // The rotated key pair previously used with age to wrap and unwrap file keys for session recording + // encryption. + types.EncryptionKeyPair encryption_key_pair = 2; +} + +// The empty status of a RotatedKey. +message RotatedKeyStatus {} + +// A previously rotated encryption key for session recordings kept for future replay. The metadata.name +// is expected to be the fingerprint of the public key contained in the spec, which is a hex encoded +// SHA256 hash of its PKIX form. +message RotatedKey { + string kind = 1; + string sub_kind = 2; + string version = 3; + teleport.header.v1.Metadata metadata = 4; + RotatedKeySpec spec = 5; + RotatedKeyStatus status = 6; +} diff --git a/api/proto/teleport/recordingencryption/v1/recording_encryption_service.proto b/api/proto/teleport/recordingencryption/v1/recording_encryption_service.proto new file mode 100644 index 0000000000000..a4590271265f3 --- /dev/null +++ b/api/proto/teleport/recordingencryption/v1/recording_encryption_service.proto @@ -0,0 +1,141 @@ +// Copyright 2025 Gravitational, Inc +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +syntax = "proto3"; + +package teleport.recordingencryption.v1; + +import "google/protobuf/timestamp.proto"; +import "teleport/recordingencryption/v1/recording_encryption.proto"; + +option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/recordingencryption/v1;recordingencryptionv1"; + +// RecordingEncryptionService provides methods to manage cluster encryption configuration resources. +service RecordingEncryptionService { + // CreateUpload begins a multipart upload for an encrypted recording. The + // returned upload ID should be used while uploading parts. + rpc CreateUpload(CreateUploadRequest) returns (CreateUploadResponse); + // UploadPart uploads a part to the given upload ID. + rpc UploadPart(UploadPartRequest) returns (UploadPartResponse); + // CompleteUpload marks a multipart upload as complete. + rpc CompleteUpload(CompleteUploadRequest) returns (CompleteUploadResponse); + + // RotateKey rotates the key pair used for encrypting session recording data. + rpc RotateKey(RotateKeyRequest) returns (RotateKeyResponse); + // GetRotationState returns whether a rotation is in progress. + rpc GetRotationState(GetRotationStateRequest) returns (GetRotationStateResponse); + // CompleteRotation moves rotated keys out of the active set. + rpc CompleteRotation(CompleteRotationRequest) returns (CompleteRotationResponse); + // RollbackRotation removes active keys and reverts rotating keys back to being active.
+ rpc RollbackRotation(RollbackRotationRequest) returns (RollbackRotationResponse); +} + +// The handle to an upload for an encrypted session. +message Upload { + // The primary identifier for an Upload. + string upload_id = 1; + // The session ID an upload is tied to. + string session_id = 2; + // The time that an upload was created at. + google.protobuf.Timestamp initiated_at = 3; +} + +// The request to start a multipart upload for a specific session recording. +message CreateUploadRequest { + // The session ID associated with the recording being uploaded. + string session_id = 1; +} + +// The resulting Upload message for a created Upload. +message CreateUploadResponse { + // The handle for the created Upload. + Upload upload = 1; +} + +// The request to upload a single part in a multipart upload. +message UploadPartRequest { + // The handle to the in-progress upload that should be uploaded to. + Upload upload = 1; + // The ordered index applied to the part. + int64 part_number = 2; + // The encrypted part of session recording data being uploaded. + bytes part = 3; + // Whether this is the last upload part in the upload. + bool is_last = 4; +} + +// The resulting metadata about an uploaded part. +message Part { + // The ordered index applied to the part. + int64 part_number = 1; + // The part e-tag value relevant to some storage backends. + string etag = 2; +} + +// A successfully uploaded Part to be included in the final CompleteUpload request. +message UploadPartResponse { + // The resulting part metadata about an uploaded part. + Part part = 1; +} + +// The request to complete an upload. The included part numbers must match the parts successfully +// uploaded up until this point. +message CompleteUploadRequest { + // The handle to an upload to complete. + Upload upload = 1; + // The parts expected to be successfully uploaded. + repeated Part parts = 2; +} + +// The return value of a CompleteUpload request.
+message CompleteUploadResponse {} + +// The body of a RotateKey request. +message RotateKeyRequest {} + +// The return value of a RotateKey request. +message RotateKeyResponse {} + +// The body of a GetRotationState request. +message GetRotationStateRequest { + int32 page_size = 1; + string page_token = 2; +} + +// A public key fingerprint coupled with its current state. +message FingerprintWithState { + // A fingerprint identifying the public key of a KeyPair. + string fingerprint = 1; + // The state associated with the identified KeyPair. + v1.KeyPairState state = 2; +} + +// The current state of all active encryption key pairs. +message GetRotationStateResponse { + string next_page_token = 1; + // The state of all active encryption key pairs. + repeated FingerprintWithState key_pair_states = 2; +} + +// The body of a CompleteRotation request. +message CompleteRotationRequest {} + +// The return value of a CompleteRotation request. +message CompleteRotationResponse {} + +// The body of a RollbackRotation request. +message RollbackRotationRequest {} + +// The return value of a RollbackRotation request +message RollbackRotationResponse {} diff --git a/api/proto/teleport/recordingmetadata/v1/recordingmetadata.proto b/api/proto/teleport/recordingmetadata/v1/recordingmetadata.proto new file mode 100644 index 0000000000000..1a0f2f0638b0c --- /dev/null +++ b/api/proto/teleport/recordingmetadata/v1/recordingmetadata.proto @@ -0,0 +1,111 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
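The CreateUpload / UploadPart / CompleteUpload flow above expects the client to number parts in order, flag the final one with `is_last`, and echo the accumulated `Part` list back in CompleteUploadRequest. A sketch of the client-side chunking step, with illustrative names and an arbitrary chunk size (real limits depend on the storage backend):

```go
package main

import "fmt"

// part mirrors the UploadPartRequest/Part fields the client must track:
// an ordered part_number, the payload slice, and the is_last flag.
type part struct {
	Number int64
	Data   []byte
	IsLast bool
}

// splitParts slices an encrypted recording blob into fixed-size upload
// parts, numbering them from 1 and marking the final part.
func splitParts(blob []byte, chunk int) []part {
	var parts []part
	for i, n := 0, int64(1); i < len(blob); n++ {
		end := i + chunk
		if end > len(blob) {
			end = len(blob)
		}
		parts = append(parts, part{Number: n, Data: blob[i:end], IsLast: end == len(blob)})
		i = end
	}
	return parts
}

func main() {
	// 10 bytes split into 4-byte parts -> sizes 4, 4, 2; only the last is flagged.
	for _, p := range splitParts(make([]byte, 10), 4) {
		fmt.Println(p.Number, len(p.Data), p.IsLast)
	}
}
```

After each UploadPartResponse, the client would append the returned `Part` (with its backend etag) to a slice and submit that slice in CompleteUploadRequest.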
+// See the License for the specific language governing permissions and +// limitations under the License. + +syntax = "proto3"; + +package teleport.recordingmetadata.v1; + +import "google/protobuf/duration.proto"; +import "google/protobuf/timestamp.proto"; + +option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/recordingmetadata/v1;recordingmetadatav1"; + +// SessionRecordingEvent represents an event that occurred during a session recording. +message SessionRecordingEvent { + // StartOffset is the start time of the event, relative to the start of the session. + google.protobuf.Duration start_offset = 1; + // EndOffset is the end time of the event, relative to the start of the session. + google.protobuf.Duration end_offset = 2; + + oneof event { + // Inactivity is an event that indicates inactivity during the session. + SessionRecordingInactivityEvent inactivity = 3; + // Join is an event that indicates a user joined the session. + SessionRecordingJoinEvent join = 4; + // Resize is an event that indicates the terminal was resized. + SessionRecordingResizeEvent resize = 5; + } +} + +// SessionRecordingInactivityEvent is an event that indicates inactivity during the session. +message SessionRecordingInactivityEvent {} + +// SessionRecordingJoinEvent is an event that indicates a user joined the session. +message SessionRecordingJoinEvent { + // User is the name of the user who joined the session. + string user = 1; +} + +// SessionRecordingResizeEvent is an event that indicates the terminal was resized. +message SessionRecordingResizeEvent { + // Cols is the number of columns in the terminal. + int32 cols = 1; + // Rows is the number of rows in the terminal. + int32 rows = 2; +} + +// SessionRecordingType is the type of session recording. +enum SessionRecordingType { + // SessionRecordingTypeUnspecified is the default value for session recording type.
+ SESSION_RECORDING_TYPE_UNSPECIFIED = 0; + // SessionRecordingTypeSsh is an interactive SSH session recording. + SESSION_RECORDING_TYPE_SSH = 1; + // SessionRecordingTypeKubernetes is an interactive Kubernetes session recording. + SESSION_RECORDING_TYPE_KUBERNETES = 2; +} + +// SessionRecordingMetadata contains metadata for a session recording. +message SessionRecordingMetadata { + // Duration is the duration of the session recording. + google.protobuf.Duration duration = 1; + // Events is the events that occurred during the session. + repeated SessionRecordingEvent events = 2; + // StartCols is the number of columns in the terminal at the start of the session. + int32 start_cols = 3; + // StartRows is the number of rows in the terminal at the start of the session. + int32 start_rows = 4; + // StartTime is the start time of the session recording. + google.protobuf.Timestamp start_time = 5; + // EndTime is the end time of the session recording. + google.protobuf.Timestamp end_time = 6; + // ClusterName is the name of the cluster where the session recording took place. + string cluster_name = 7; + // User is the user whose session is being recorded. + string user = 8; + // ResourceName is the name of the resource that was connected to. + string resource_name = 9; + // Type is the type of session recording. + SessionRecordingType type = 10; +} + +// SessionRecordingThumbnail is a thumbnail of a session recording. +message SessionRecordingThumbnail { + // SVG is the SVG image of the thumbnail. + bytes svg = 1; + // Cols is the number of columns in the terminal. + int32 cols = 2; + // Rows is the number of rows in the terminal. + int32 rows = 3; + // CursorX is the X coordinate of the cursor. + int32 cursor_x = 4; + // CursorY is the Y coordinate of the cursor. + int32 cursor_y = 5; + // StartOffset is the start time of the thumbnail, relative to the start of the session. 
+ google.protobuf.Duration start_offset = 6; + // EndOffset is the end time of the thumbnail, relative to the start of the session. + google.protobuf.Duration end_offset = 7; + // CursorVisible indicates whether the cursor is visible. + bool cursor_visible = 8; +} diff --git a/api/proto/teleport/recordingmetadata/v1/recordingmetadata_service.proto b/api/proto/teleport/recordingmetadata/v1/recordingmetadata_service.proto new file mode 100644 index 0000000000000..c0271cd62989f --- /dev/null +++ b/api/proto/teleport/recordingmetadata/v1/recordingmetadata_service.proto @@ -0,0 +1,58 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +syntax = "proto3"; + +package teleport.recordingmetadata.v1; + +import "teleport/recordingmetadata/v1/recordingmetadata.proto"; + +option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/recordingmetadata/v1;recordingmetadatav1"; + +// RecordingMetadataService provides methods to retrieve metadata and thumbnails for a session recording. +service RecordingMetadataService { + // GetThumbnail retrieves the thumbnail for a session recording. + rpc GetThumbnail(GetThumbnailRequest) returns (GetThumbnailResponse); + // GetMetadata retrieves the metadata for a session recording. + rpc GetMetadata(GetMetadataRequest) returns (stream GetMetadataResponseChunk); +} + +// GetMetadataResponseChunk is a chunked response for retrieving a session's metadata. 
+// It can contain either metadata or a frame from the session recording. +message GetMetadataResponseChunk { + oneof chunk { + // Metadata contains the metadata of the session recording. + teleport.recordingmetadata.v1.SessionRecordingMetadata metadata = 1; + // Frame contains a frame from the session recording. + teleport.recordingmetadata.v1.SessionRecordingThumbnail frame = 2; + } +} + +// GetThumbnailRequest is a request for a session's thumbnail. +message GetThumbnailRequest { + // SessionId is the ID of the session whose thumbnail is being requested. + string session_id = 1; +} + +// GetThumbnailResponse is a response for retrieving a session's thumbnail. +message GetThumbnailResponse { + // Thumbnail is the thumbnail for the session. + teleport.recordingmetadata.v1.SessionRecordingThumbnail thumbnail = 1; +} + +// GetMetadataRequest is a request for retrieving a session's metadata. +message GetMetadataRequest { + // SessionId is the ID of the session whose metadata is being requested. + string session_id = 1; +} diff --git a/api/proto/teleport/scim/v1/scim_service.proto b/api/proto/teleport/scim/v1/scim_service.proto index dbdd26beeb6ca..336b2e54f97b2 100644 --- a/api/proto/teleport/scim/v1/scim_service.proto +++ b/api/proto/teleport/scim/v1/scim_service.proto @@ -40,11 +40,17 @@ service SCIMService { // DeleteSCIMResource deletes a SCIM-managed resource rpc DeleteSCIMResource(DeleteSCIMResourceRequest) returns (google.protobuf.Empty); + + // PatchSCIMResource handles a request to patch a resource as per RFC 7644 + // Section 3.5.2. + // + // See https://datatracker.ietf.org/doc/html/rfc7644#section-3.5.2 + rpc PatchSCIMResource(PatchSCIMResourceRequest) returns (Resource); +} + // ListSCIMResourcesRequest represents a request to fetch multiple resources message ListSCIMResourcesRequest { - // Target describes the set of requested by the client, vy integration and + // Target describes the set of resources requested by the client, by integration and // resource type.
RequestTarget target = 1; @@ -82,7 +88,7 @@ message UpdateSCIMResourceRequest { Resource resource = 2; } -// DeleteSCIMResourceRequest describes a request to delete a SCIM-mamanged +// DeleteSCIMResourceRequest describes a request to delete a SCIM-managed // resource message DeleteSCIMResourceRequest { // Target is the owner, type and ID if the resource targeted by the request. @@ -148,3 +154,18 @@ message Page { uint64 start_index = 1; uint64 count = 2; } + +// PatchSCIMResourceRequest describes a SCIM PATCH operation as per RFC 7644 +// Section 3.5.2. The PATCH operation allows modifying a resource with a set of +// change operations, enabling partial updates without replacing the entire +// resource. +// +// See https://datatracker.ietf.org/doc/html/rfc7644#section-3.5.2 +message PatchSCIMResourceRequest { + // Target identifies the resource to patch + RequestTarget target = 1; + + // Payload is the SCIM PATCH Request payload containing the Operations array + // and schemas as defined in RFC 7644 Section 3.5.2. + google.protobuf.Struct payload = 2; +} diff --git a/api/proto/teleport/scopedrole/v1/role.proto b/api/proto/teleport/scopedrole/v1/role.proto deleted file mode 100644 index f133493aa9cba..0000000000000 --- a/api/proto/teleport/scopedrole/v1/role.proto +++ /dev/null @@ -1,52 +0,0 @@ -// Copyright 2025 Gravitational, Inc -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License.
- -syntax = "proto3"; - -package teleport.scopedrole.v1; - -import "teleport/header/v1/metadata.proto"; - -option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopedrole/v1;scopedrole"; - -// ScopedRole is a role whose resource and permissions are scoped. Scoped roles implement a subset of role -// features tailored to the usecases of scoped access and scoped access administration. Scoped roles may be -// assigned to the same user multiple times at various scopes. Scoped roles do not contain deny rules. -message ScopedRole { - // Kind is the resource kind. - string kind = 1; - - // SubKind is the resource sub-kind. - string sub_kind = 2; - - // Version is the resource version. - string version = 3; - - // Metadata contains the resource metadata. - teleport.header.v1.Metadata metadata = 4; - - // Scope is the scope of the role resource. - string scope = 5; - - // Spec is the role specification. - ScopedRoleSpec spec = 6; -} - -// ScopedRoleSpec is the specification of a scoped role. -message ScopedRoleSpec { - // AssignableScopes is a list of scopes to which this role can be assigned. - repeated string assignable_scopes = 1; - - // TODO(fspmarshall): port relevant role features to scoped roles. -} diff --git a/api/proto/teleport/scopedrole/v1/service.proto b/api/proto/teleport/scopedrole/v1/service.proto deleted file mode 100644 index 3b0ace0a69f83..0000000000000 --- a/api/proto/teleport/scopedrole/v1/service.proto +++ /dev/null @@ -1,195 +0,0 @@ -// Copyright 2025 Gravitational, Inc -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-// See the License for the specific language governing permissions and -// limitations under the License. - -syntax = "proto3"; - -package teleport.scopedrole.v1; - -import "teleport/scopedrole/v1/assignment.proto"; -import "teleport/scopedrole/v1/role.proto"; -import "teleport/scopes/v1/scopes.proto"; - -option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopedrole/v1;scopedrole"; - -// ScopedRoleService provides an API for managing scoped role resources and their assignments. -service ScopedRoleService { - // GetScopedRole gets a scoped role by name. - rpc GetScopedRole(GetScopedRoleRequest) returns (GetScopedRoleResponse); - - // ListScopedRoles returns a paginated list of scoped roles. - rpc ListScopedRoles(ListScopedRolesRequest) returns (ListScopedRolesResponse); - - // CreateScopedRole creates a new scoped role. - rpc CreateScopedRole(CreateScopedRoleRequest) returns (CreateScopedRoleResponse); - - // UpdateScopedRole updates a scoped role. - rpc UpdateScopedRole(UpdateScopedRoleRequest) returns (UpdateScopedRoleResponse); - - // DeleteScopedRole deletes a scoped role. - rpc DeleteScopedRole(DeleteScopedRoleRequest) returns (DeleteScopedRoleResponse); - - // GetScopedRoleAssignment gets a scoped role assignment by name. - rpc GetScopedRoleAssignment(GetScopedRoleAssignmentRequest) returns (GetScopedRoleAssignmentResponse); - - // ListScopedRoleAssignments returns a paginated list of scoped role assignments. - rpc ListScopedRoleAssignments(ListScopedRoleAssignmentsRequest) returns (ListScopedRoleAssignmentsResponse); - - // CreateScopedRoleAssignment creates a new scoped role assignment. - rpc CreateScopedRoleAssignment(CreateScopedRoleAssignmentRequest) returns (CreateScopedRoleAssignmentResponse); - - // DeleteScopedRoleAssignment deletes a scoped role assignment. 
- rpc DeleteScopedRoleAssignment(DeleteScopedRoleAssignmentRequest) returns (DeleteScopedRoleAssignmentResponse); -} - -// GetScopedRoleRequest is the request to get a scoped role. -message GetScopedRoleRequest { - // Name is the name of the scoped role. - string name = 1; -} - -// GetScopedRoleResponse is the response to get a scoped role. -message GetScopedRoleResponse { - // Role is the scoped role. - ScopedRole role = 1; -} - -// ListScopedRolesRequest is the request to list scoped roles. -message ListScopedRolesRequest { - // PageSize is the maximum number of results to return. - int32 page_size = 1; - - // PageToken is the pagination cursor used to start from where a previous request left off. - string page_token = 2; - - // ResourceScope filters roles by their resource scope if specified. - teleport.scopes.v1.Filter resource_scope = 3; - - // AssignableScope filters roles by their assignable scope if specified. - teleport.scopes.v1.Filter assignable_scope = 4; -} - -// ListScopedRolesResponse is the response to list scoped roles. -message ListScopedRolesResponse { - // Roles is the list of scoped roles. - repeated ScopedRole roles = 1; - - // NextPageToken is a pagination cursor usable to fetch the next page of results. - string next_page_token = 2; -} - -// CreateScopedRoleRequest is the request to create a scoped role. -message CreateScopedRoleRequest { - // Role is the scoped role to create. - ScopedRole role = 1; -} - -// CreateScopedRoleResponse is the response to create a scoped role. -message CreateScopedRoleResponse { - // Role is the scoped role that was created. - ScopedRole role = 1; -} - -// UpdateScopedRoleRequest is the request to update a scoped role. -message UpdateScopedRoleRequest { - // Role is the scoped role to update. - ScopedRole role = 1; -} - -// UpdateScopedRoleResponse is the response to update a scoped role. -message UpdateScopedRoleResponse { - // Role is the post-update scoped role. 
- ScopedRole role = 1; -} - -// DeleteScopedRoleRequest is the request to delete a scoped role. -message DeleteScopedRoleRequest { - // Name is the name of the scoped role to delete. - string name = 1; - - // Revision asserts the revision of the scoped role to delete (optional). - string revision = 2; -} - -// DeleteScopedRoleResponse is the response to delete a scoped role. -message DeleteScopedRoleResponse {} - -// GetScopedRoleAssignmentRequest is the request to get a scoped role assignment. -message GetScopedRoleAssignmentRequest { - // Name is the name of the scoped role assignment. - string name = 1; -} - -// GetScopedRoleAssignmentResponse is the response to get a scoped role assignment. -message GetScopedRoleAssignmentResponse { - // Assignment is the scoped role assignment. - ScopedRoleAssignment assignment = 1; -} - -// ListScopedRoleAssignmentsRequest is the request to list scoped role assignments. -message ListScopedRoleAssignmentsRequest { - // PageSize is the maximum number of results to return. - int32 page_size = 1; - - // PageToken is the pagination cursor used to start from where a previous request left off. - string page_token = 2; - - // ResourceScope filters assignments by their resource scope if specified. - teleport.scopes.v1.Filter resource_scope = 3; - - // AssignedScope filters assignments by the scopes they assign to if specified (note: matches assignment - // resources with 1 or more maching scopes, not all scopes within the assignment will necessarily match). - teleport.scopes.v1.Filter assigned_scope = 4; - - // User optionally limits the list to assignments for a specific user. - string user = 5; - - // Role optionally limits the list to assignments for a specific role. - string role = 6; -} - -// ListScopedRoleAssignmentsResponse is the response to list scoped role assignments. -message ListScopedRoleAssignmentsResponse { - // Assignments is the list of scoped role assignments. 
- repeated ScopedRoleAssignment assignments = 1; - - // NextPageToken is a pagination cursor usable to fetch the next page of results. - string next_page_token = 2; -} - -// CreateScopedRoleAssignmentRequest is the request to create a scoped role assignment. -message CreateScopedRoleAssignmentRequest { - // Assignment is the scoped role assignment to create. - ScopedRoleAssignment assignment = 1; - - // RoleRevisions asserts the revisions of the roles assigned by the assignments (optional). - map<string, string> role_revisions = 2; -} - -// CreateScopedRoleAssignmentResponse is the response to create a scoped role assignment. -message CreateScopedRoleAssignmentResponse { - // Assignment is the scoped role assignment that was created. - ScopedRoleAssignment assignment = 1; -} - -// DeleteScopedRoleAssignmentRequest is the request to delete a scoped role assignment. -message DeleteScopedRoleAssignmentRequest { - // Name is the name of the scoped role assignment to delete. - string name = 1; - - // Revision asserts the revision of the scoped role assignment to delete (optional). - string revision = 2; -} - -// DeleteScopedRoleAssignmentResponse is the response to delete a scoped role assignment. -message DeleteScopedRoleAssignmentResponse {} diff --git a/api/proto/teleport/scopedtoken/v1/service.proto b/api/proto/teleport/scopedtoken/v1/service.proto deleted file mode 100644 index b410e5d4a19c1..0000000000000 --- a/api/proto/teleport/scopedtoken/v1/service.proto +++ /dev/null @@ -1,112 +0,0 @@ -// Copyright 2025 Gravitational, Inc -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and -// limitations under the License. - -syntax = "proto3"; - -package teleport.scopedtoken.v1; - -import "teleport/scopedtoken/v1/token.proto"; -import "teleport/scopes/v1/scopes.proto"; - -option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopedtoken/v1;scopedtoken"; - -// ScopedTokenService provides an API for managing scoped token resources and their assignments. -service ScopedTokenService { - // GetScopedToken gets a scoped token by name. - rpc GetScopedToken(GetScopedTokenRequest) returns (GetScopedTokenResponse); - - // ListScopedTokens returns a paginated list of scoped tokens. - rpc ListScopedTokens(ListScopedTokensRequest) returns (ListScopedTokensResponse); - - // CreateScopedToken creates a new scoped token. - rpc CreateScopedToken(CreateScopedTokenRequest) returns (CreateScopedTokenResponse); - - // UpdateScopedToken updates a scoped token. - rpc UpdateScopedToken(UpdateScopedTokenRequest) returns (UpdateScopedTokenResponse); - - // DeleteScopedToken deletes a scoped token. - rpc DeleteScopedToken(DeleteScopedTokenRequest) returns (DeleteScopedTokenResponse); -} - -// GetScopedTokenRequest is the request to get a scoped token. -message GetScopedTokenRequest { - // Name is the name of the scoped token. - string name = 1; -} - -// GetScopedTokenResponse is the response to get a scoped token. -message GetScopedTokenResponse { - // Token is the scoped token. - ScopedToken token = 1; -} - -// ListScopedTokensRequest is the request to list scoped tokens. -message ListScopedTokensRequest { - // ResourceScope filters tokens by their resource scope if specified. - teleport.scopes.v1.Filter resource_scope = 1; - - // AssignedScope filters tokens by their assigned scope if specified. - teleport.scopes.v1.Filter assigned_scope = 2; - - // Cursor is the pagination cursor. - string cursor = 3; - - // Limit is the maximum number of results to return. 
- uint32 limit = 4; -} - -// ListScopedTokensResponse is the response to list scoped tokens. -message ListScopedTokensResponse { - // Tokens is the list of scoped tokens. - repeated ScopedToken tokens = 1; - - // Cursor is the pagination cursor. - string cursor = 2; -} - -// CreateScopedTokenRequest is the request to create a scoped token. -message CreateScopedTokenRequest { - // Token is the scoped token to create. - ScopedToken token = 1; -} - -// CreateScopedTokenResponse is the response to create a scoped token. -message CreateScopedTokenResponse { - // Token is the scoped token that was created. - ScopedToken token = 1; -} - -// UpdateScopedTokenRequest is the request to update a scoped token. -message UpdateScopedTokenRequest { - // Token is the scoped token to update. - ScopedToken token = 1; -} - -// UpdateScopedTokenResponse is the response to update a scoped token. -message UpdateScopedTokenResponse { - // Token is the post-update scoped token. - ScopedToken token = 1; -} - -// DeleteScopedTokenRequest is the request to delete a scoped token. -message DeleteScopedTokenRequest { - // Name is the name of the scoped token to delete. - string name = 1; - - // Revision asserts the revision of the scoped token to delete (optional). - string revision = 2; -} - -// DeleteScopedTokenResponse is the response to delete a scoped token. 
-message DeleteScopedTokenResponse {} diff --git a/api/proto/teleport/scopedrole/v1/assignment.proto b/api/proto/teleport/scopes/access/v1/assignment.proto similarity index 96% rename from api/proto/teleport/scopedrole/v1/assignment.proto rename to api/proto/teleport/scopes/access/v1/assignment.proto index adc3d0f355134..983efcf25685f 100644 --- a/api/proto/teleport/scopedrole/v1/assignment.proto +++ b/api/proto/teleport/scopes/access/v1/assignment.proto @@ -14,11 +14,11 @@ syntax = "proto3"; -package teleport.scopedrole.v1; +package teleport.scopes.access.v1; import "teleport/header/v1/metadata.proto"; -option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopedrole/v1;scopedrole"; +option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/access/v1;accessv1"; // ScopedRoleAssignment is a role assignment whose resource and permissions are scoped. A scoped role assignment // assigns roles to users at scopes. One assignment may contain multiple roles at multiple scopes. Most assignments diff --git a/api/proto/teleport/scopes/access/v1/role.proto b/api/proto/teleport/scopes/access/v1/role.proto new file mode 100644 index 0000000000000..442170f31a083 --- /dev/null +++ b/api/proto/teleport/scopes/access/v1/role.proto @@ -0,0 +1,76 @@ +// Copyright 2025 Gravitational, Inc +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +syntax = "proto3"; + +package teleport.scopes.access.v1; + +import "teleport/header/v1/metadata.proto"; +import "teleport/label/v1/label.proto"; + +option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/access/v1;accessv1"; + +// ScopedRole is a role whose resource and permissions are scoped. Scoped roles implement a subset of role +// features tailored to the use cases of scoped access and scoped access administration. Scoped roles may be +// assigned to the same user multiple times at various scopes. Scoped roles do not contain deny rules. +message ScopedRole { + // Kind is the resource kind. + string kind = 1; + + // SubKind is the resource sub-kind. + string sub_kind = 2; + + // Version is the resource version. + string version = 3; + + // Metadata contains the resource metadata. + teleport.header.v1.Metadata metadata = 4; + + // Scope is the scope of the role resource. + string scope = 5; + + // Spec is the role specification. + ScopedRoleSpec spec = 6; +} + +// ScopedRoleSpec is the specification of a scoped role. +message ScopedRoleSpec { + // AssignableScopes is a list of scopes to which this role can be assigned. + repeated string assignable_scopes = 1; + + // Allow specifies the permissions granted by this role. + ScopedRoleConditions allow = 2; +} + +// ScopedRoleConditions describes a role's allow block. +message ScopedRoleConditions { + // Rules describe basic resource:verb permissions. + repeated ScopedRule rules = 1; + + // Logins is the list of host logins this role allows. + repeated string logins = 2; + + // NodeLabels is a map of node labels (used to dynamically grant access to + // nodes). + repeated teleport.label.v1.Label node_labels = 3; +} + +// ScopedRule maps resources to verbs. This is the underlying type used to describe +// permissions like 'node:read' or 'role:create'. +message ScopedRule { + // Resources is a list of resource kinds (e.g. 'scoped_token') that the below verbs apply to.
+ repeated string resources = 1; + // Verbs is the list of action verbs (e.g. 'read') that apply to the above resources. + repeated string verbs = 2; +} diff --git a/api/proto/teleport/scopes/access/v1/service.proto b/api/proto/teleport/scopes/access/v1/service.proto new file mode 100644 index 0000000000000..bf3200e5270ed --- /dev/null +++ b/api/proto/teleport/scopes/access/v1/service.proto @@ -0,0 +1,203 @@ +// Copyright 2025 Gravitational, Inc +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +syntax = "proto3"; + +package teleport.scopes.access.v1; + +import "teleport/scopes/access/v1/assignment.proto"; +import "teleport/scopes/access/v1/role.proto"; +import "teleport/scopes/v1/scopes.proto"; + +option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/access/v1;accessv1"; + +// ScopedAccessService provides an API for managing scoped access-control resources. +service ScopedAccessService { + // GetScopedRole gets a scoped role by name. + rpc GetScopedRole(GetScopedRoleRequest) returns (GetScopedRoleResponse); + + // ListScopedRoles returns a paginated list of scoped roles. + rpc ListScopedRoles(ListScopedRolesRequest) returns (ListScopedRolesResponse); + + // CreateScopedRole creates a new scoped role. + rpc CreateScopedRole(CreateScopedRoleRequest) returns (CreateScopedRoleResponse); + + // UpdateScopedRole updates a scoped role. 
+ rpc UpdateScopedRole(UpdateScopedRoleRequest) returns (UpdateScopedRoleResponse); + + // DeleteScopedRole deletes a scoped role. Note that scoped role deletion is always + // conditional. Scoped roles cannot be deleted if referenced by any existing assignment + // and deletes may fail due to concurrent modification. + rpc DeleteScopedRole(DeleteScopedRoleRequest) returns (DeleteScopedRoleResponse); + + // GetScopedRoleAssignment gets a scoped role assignment by name. + rpc GetScopedRoleAssignment(GetScopedRoleAssignmentRequest) returns (GetScopedRoleAssignmentResponse); + + // ListScopedRoleAssignments returns a paginated list of scoped role assignments. + rpc ListScopedRoleAssignments(ListScopedRoleAssignmentsRequest) returns (ListScopedRoleAssignmentsResponse); + + // CreateScopedRoleAssignment creates a new scoped role assignment. + rpc CreateScopedRoleAssignment(CreateScopedRoleAssignmentRequest) returns (CreateScopedRoleAssignmentResponse); + + // DeleteScopedRoleAssignment deletes a scoped role assignment. + rpc DeleteScopedRoleAssignment(DeleteScopedRoleAssignmentRequest) returns (DeleteScopedRoleAssignmentResponse); +} + +// GetScopedRoleRequest is the request to get a scoped role. +message GetScopedRoleRequest { + // Name is the name of the scoped role. + string name = 1; +} + +// GetScopedRoleResponse is the response to get a scoped role. +message GetScopedRoleResponse { + // Role is the scoped role. + ScopedRole role = 1; +} + +// ListScopedRolesRequest is the request to list scoped roles. +message ListScopedRolesRequest { + // PageSize is the maximum number of results to return. + int32 page_size = 1; + + // PageToken is the pagination cursor used to start from where a previous request left off. + string page_token = 2; + + // ResourceScope filters roles by their resource scope if specified. + teleport.scopes.v1.Filter resource_scope = 3; + + // AssignableScope filters roles by their assignable scope if specified. 
+ teleport.scopes.v1.Filter assignable_scope = 4; +} + +// ListScopedRolesResponse is the response to list scoped roles. +message ListScopedRolesResponse { + // Roles is the list of scoped roles. + repeated ScopedRole roles = 1; + + // NextPageToken is a pagination cursor usable to fetch the next page of results. + string next_page_token = 2; +} + +// CreateScopedRoleRequest is the request to create a scoped role. +message CreateScopedRoleRequest { + // Role is the scoped role to create. + ScopedRole role = 1; +} + +// CreateScopedRoleResponse is the response to create a scoped role. +message CreateScopedRoleResponse { + // Role is the scoped role that was created. + ScopedRole role = 1; +} + +// UpdateScopedRoleRequest is the request to update a scoped role. +message UpdateScopedRoleRequest { + // Role is the scoped role to update. + ScopedRole role = 1; +} + +// UpdateScopedRoleResponse is the response to update a scoped role. +message UpdateScopedRoleResponse { + // Role is the post-update scoped role. + ScopedRole role = 1; +} + +// DeleteScopedRoleRequest is the request to delete a scoped role. +message DeleteScopedRoleRequest { + // Name is the name of the scoped role to delete. + string name = 1; + + // Revision asserts the revision of the scoped role to delete (optional). + string revision = 2; +} + +// DeleteScopedRoleResponse is the response to delete a scoped role. +message DeleteScopedRoleResponse {} + +// GetScopedRoleAssignmentRequest is the request to get a scoped role assignment. +message GetScopedRoleAssignmentRequest { + // Name is the name of the scoped role assignment. + string name = 1; +} + +// GetScopedRoleAssignmentResponse is the response to get a scoped role assignment. +message GetScopedRoleAssignmentResponse { + // Assignment is the scoped role assignment. + ScopedRoleAssignment assignment = 1; +} + +// ListScopedRoleAssignmentsRequest is the request to list scoped role assignments. 
+message ListScopedRoleAssignmentsRequest { + // PageSize is the maximum number of results to return. + int32 page_size = 1; + + // PageToken is the pagination cursor used to start from where a previous request left off. + string page_token = 2; + + // ResourceScope filters assignments by their resource scope if specified. + teleport.scopes.v1.Filter resource_scope = 3; + + // AssignedScope filters assignments by the scopes they assign to if specified (note: matches assignment + // resources with 1 or more matching scopes, not all scopes within the assignment will necessarily match). + teleport.scopes.v1.Filter assigned_scope = 4; + + // User optionally limits the list to assignments for a specific user. + string user = 5; + + // Role optionally limits the list to assignments for a specific role. + string role = 6; + + // AllCallerAssignments, if set to true, overrides the default behavior in favor of returning all + // assignments that apply to the caller, including those assigned in parent/orthogonal scopes. This flag + // is specifically used to support users discovering where they have been granted privileges across scopes, + // and is not intended for general use. + bool all_caller_assignments = 7; +} + +// ListScopedRoleAssignmentsResponse is the response to list scoped role assignments. +message ListScopedRoleAssignmentsResponse { + // Assignments is the list of scoped role assignments. + repeated ScopedRoleAssignment assignments = 1; + + // NextPageToken is a pagination cursor usable to fetch the next page of results. + string next_page_token = 2; +} + +// CreateScopedRoleAssignmentRequest is the request to create a scoped role assignment. +message CreateScopedRoleAssignmentRequest { + // Assignment is the scoped role assignment to create. + ScopedRoleAssignment assignment = 1; + + // RoleRevisions asserts the revisions of the roles assigned by the assignments (optional).
+ map<string, string> role_revisions = 2; +} + +// CreateScopedRoleAssignmentResponse is the response to create a scoped role assignment. +message CreateScopedRoleAssignmentResponse { + // Assignment is the scoped role assignment that was created. + ScopedRoleAssignment assignment = 1; +} + +// DeleteScopedRoleAssignmentRequest is the request to delete a scoped role assignment. +message DeleteScopedRoleAssignmentRequest { + // Name is the name of the scoped role assignment to delete. + string name = 1; + + // Revision asserts the revision of the scoped role assignment to delete (optional). + string revision = 2; +} + +// DeleteScopedRoleAssignmentResponse is the response to delete a scoped role assignment. +message DeleteScopedRoleAssignmentResponse {} diff --git a/api/proto/teleport/scopes/joining/v1/service.proto b/api/proto/teleport/scopes/joining/v1/service.proto new file mode 100644 index 0000000000000..8d1311f24aee1 --- /dev/null +++ b/api/proto/teleport/scopes/joining/v1/service.proto @@ -0,0 +1,118 @@ +// Copyright 2025 Gravitational, Inc +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +syntax = "proto3"; + +package teleport.scopes.joining.v1; + +import "teleport/scopes/joining/v1/token.proto"; +import "teleport/scopes/v1/scopes.proto"; + +option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/joining/v1;joiningv1"; + +// ScopedJoiningService provides an API for managing scoped cluster joining resources.
+service ScopedJoiningService { + // GetScopedToken gets a scoped token by name. + rpc GetScopedToken(GetScopedTokenRequest) returns (GetScopedTokenResponse); + + // ListScopedTokens returns a paginated list of scoped tokens. + rpc ListScopedTokens(ListScopedTokensRequest) returns (ListScopedTokensResponse); + + // CreateScopedToken creates a new scoped token. + rpc CreateScopedToken(CreateScopedTokenRequest) returns (CreateScopedTokenResponse); + + // UpdateScopedToken updates a scoped token. + rpc UpdateScopedToken(UpdateScopedTokenRequest) returns (UpdateScopedTokenResponse); + + // DeleteScopedToken deletes a scoped token. + rpc DeleteScopedToken(DeleteScopedTokenRequest) returns (DeleteScopedTokenResponse); +} + +// GetScopedTokenRequest is the request to get a scoped token. +message GetScopedTokenRequest { + // Name is the name of the scoped token. + string name = 1; +} + +// GetScopedTokenResponse is the response to get a scoped token. +message GetScopedTokenResponse { + // Token is the scoped token. + ScopedToken token = 1; +} + +// ListScopedTokensRequest is the request to list scoped tokens. +message ListScopedTokensRequest { + // Filter tokens by their resource scope. + teleport.scopes.v1.Filter resource_scope = 1; + + // Filter tokens by their assigned scope. + teleport.scopes.v1.Filter assigned_scope = 2; + + // The pagination cursor. + string cursor = 3; + + // The maximum number of results to return. + uint32 limit = 4; + + // Filter tokens that include at least one of the provided roles. + repeated string roles = 5; + + // Filter tokens that match all provided labels. + map<string, string> labels = 6; +} + +// ListScopedTokensResponse is the response to list scoped tokens. +message ListScopedTokensResponse { + // Tokens is the list of scoped tokens. + repeated ScopedToken tokens = 1; + + // Cursor is the pagination cursor. + string cursor = 2; +} + +// CreateScopedTokenRequest is the request to create a scoped token.
+message CreateScopedTokenRequest { + // Token is the scoped token to create. + ScopedToken token = 1; +} + +// CreateScopedTokenResponse is the response to create a scoped token. +message CreateScopedTokenResponse { + // Token is the scoped token that was created. + ScopedToken token = 1; +} + +// UpdateScopedTokenRequest is the request to update a scoped token. +message UpdateScopedTokenRequest { + // Token is the scoped token to update. + ScopedToken token = 1; +} + +// UpdateScopedTokenResponse is the response to update a scoped token. +message UpdateScopedTokenResponse { + // Token is the post-update scoped token. + ScopedToken token = 1; +} + +// DeleteScopedTokenRequest is the request to delete a scoped token. +message DeleteScopedTokenRequest { + // Name is the name of the scoped token to delete. + string name = 1; + + // Revision asserts the revision of the scoped token to delete (optional). + string revision = 2; +} + +// DeleteScopedTokenResponse is the response to delete a scoped token. +message DeleteScopedTokenResponse {} diff --git a/api/proto/teleport/scopedtoken/v1/token.proto b/api/proto/teleport/scopes/joining/v1/token.proto similarity index 77% rename from api/proto/teleport/scopedtoken/v1/token.proto rename to api/proto/teleport/scopes/joining/v1/token.proto index 2094ef2e5b825..6b11a02baee66 100644 --- a/api/proto/teleport/scopedtoken/v1/token.proto +++ b/api/proto/teleport/scopes/joining/v1/token.proto @@ -14,11 +14,11 @@ syntax = "proto3"; -package teleport.scopedtoken.v1; +package teleport.scopes.joining.v1; import "teleport/header/v1/metadata.proto"; -option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopedtoken/v1;scopedtoken"; +option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/joining/v1;joiningv1"; // ScopedToken is a token whose resource and permissions are scoped. Scoped tokens are used for the provisioning // of teleport agents locked to specific scopes. 
Scoped tokens implement a subset of the functionality of standard @@ -46,8 +46,15 @@ message ScopedToken { // ScopedTokenSpec is the specification of a scoped token. message ScopedTokenSpec { - // AssignedScope is the scope to which this token is assigned. + // The scope to which this token is assigned. string assigned_scope = 1; - // TODO(fspmarshall): port relevant token features to scoped tokens. + // The list of roles associated with the token. They will be converted + // to metadata in the SSH and X509 certificates issued to the user of the + // token. + repeated string roles = 2; + + // The join method required in order to use this token. + // Currently, the only join method supported for scoped tokens is 'token'. + string join_method = 3; } diff --git a/api/proto/teleport/scopes/v1/scopes.proto b/api/proto/teleport/scopes/v1/scopes.proto index 657724d94b8c3..3b67e51a33985 100644 --- a/api/proto/teleport/scopes/v1/scopes.proto +++ b/api/proto/teleport/scopes/v1/scopes.proto @@ -16,7 +16,27 @@ syntax = "proto3"; package teleport.scopes.v1; -option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/v1;scopes"; +option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/scopes/v1;scopesv1"; + +// Pin is a marker that identifies a certificate/identity as being "pinned" to a target scope, and encodes relevant +// information for access-control evaluation at that scope. +message Pin { + // scope is the target scope that this pin is associated with. This is the scope that the certificate/identity is + // pinned to. Any resources in parent/orthogonal scopes are not necessarily subject to the privileges/policies + // conveyed by this pin. + string scope = 1; + + // assignments encodes the scoped role assignments relevant to access-control decisions about the pinned identity. This may + // include assignments to parents of the pinned scope as well as assignments to equivalent/child scopes.
Effectively, this + // means all assignments that are not orthogonal to the pinned scope. + map<string, PinnedAssignments> assignments = 2; +} + +// PinnedAssignments is a collection of scoped role assignments that are relevant to the pinned identity at the target scope. +message PinnedAssignments { + // roles is a list of scoped roles that are assigned to the pinned identity at the target scope. + repeated string roles = 1; +} // Mode determines the mode of scoping when a query specifies a scope. When a query specifies a scope, // one of two questions is typically trying to be answered. Either, what resources are "in" and/or "subject to" diff --git a/api/proto/teleport/summarizer/v1/summarizer.proto b/api/proto/teleport/summarizer/v1/summarizer.proto new file mode 100644 index 0000000000000..c246d0e7bf267 --- /dev/null +++ b/api/proto/teleport/summarizer/v1/summarizer.proto @@ -0,0 +1,354 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +syntax = "proto3"; + +package teleport.summarizer.v1; + +import "google/protobuf/duration.proto"; +import "google/protobuf/struct.proto"; +import "google/protobuf/timestamp.proto"; +import "teleport/header/v1/metadata.proto"; + +option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/summarizer/v1;summarizerv1"; + +// InferenceModel resource specifies a session summarization inference model +// configuration.
It tells Teleport how to use a specific provider and model to +// summarize sessions. +message InferenceModel { + // Kind is the resource kind. Should always be set to "inference_model". + string kind = 1; + // SubKind is the resource sub-kind. Should be empty. + string sub_kind = 2; + // Version is the resource version. Should be set to "v1". + string version = 3; + teleport.header.v1.Metadata metadata = 4; + InferenceModelSpec spec = 5; +} + +// InferenceModelSpec specifies the inference provider and provider-specific +// parameters. +message InferenceModelSpec { + oneof provider { + // Openai indicates that this model uses OpenAI as the inference provider + // and specifies OpenAI-specific parameters. + OpenAIProvider openai = 1; + // Bedrock indicates that this model uses Amazon Bedrock as the inference + // provider and specifies Bedrock-specific parameters. + BedrockProvider bedrock = 3; + } + // MaxSessionLengthBytes is the maximum session length that can be sent to + // the inference provider. Currently, it's determined by the size of the model's + // context window; future versions of Teleport will allow summarizing larger + // sessions by splitting them. + // + // Inference providers will reject requests that are larger than the given + // model's context window. Since context windows are usually sized in tokens, + // this value is an approximation. Assuming 2 bytes per input token should be + // safe. + // + // Currently, Teleport will outright reject sessions larger than this limit; + // future versions will split sessions into chunks, treating this size as a + // maximum. + // + // If unset or set to 0, defaults to 1MB. + int64 max_session_length_bytes = 2; +} + +// OpenAIProvider specifies OpenAI-specific parameters. It can be used to +// configure OpenAI or an OpenAI-compatible API, such as LiteLLM. +message OpenAIProvider { + // OpenaiModelId specifies the model ID, as understood by the OpenAI API.
+ string openai_model_id = 1; + // Temperature controls the randomness of the model's output. + double temperature = 2; + // ApiKeySecretRef is a reference to an InferenceSecret that contains the + // OpenAI API key. + string api_key_secret_ref = 3; + // BaseUrl is the OpenAI API base URL. Optional, defaults to the public + // OpenAI API URL. May be used to point to a custom OpenAI-compatible API, + // such as LiteLLM. In that case, the `api_key_secret_ref` must point to a + // secret that contains the API key for that custom API. + string base_url = 4; +} + +// BedrockProvider specifies parameters specific to Amazon Bedrock. +message BedrockProvider { + // Region is the AWS region that will be used for inference. + string region = 1; + // BedrockModelId specifies a model ID or an inference profile as understood + // by the Bedrock API. + string bedrock_model_id = 2; + // Temperature controls the randomness of the model's output. + float temperature = 3; + // Integration is the AWS OIDC Integration name. If unset, Teleport will use + // AWS credentials available on the auth server machine; otherwise, it will + // use the specified OIDC integration to assume the appropriate role. + string integration = 4; +} + +// InferenceSecret resource stores session summarization inference provider +// secrets, such as API keys. It must be referenced by the appropriate +// provider configuration inside `InferenceModelSpec`. +message InferenceSecret { + // Kind is the resource kind. Should always be set to "inference_secret". + string kind = 1; + // SubKind is the resource sub-kind. Should be empty. + string sub_kind = 2; + // Version is the resource version. Should be set to "v1". + string version = 3; + teleport.header.v1.Metadata metadata = 4; + // Spec contains the secret value. Once set, it can only be read by Teleport + // itself; it will not be returned in API responses.
+ InferenceSecretSpec spec = 5; +} + +// InferenceSecretSpec defines the secret value for the inference model. +message InferenceSecretSpec { + // Value is the secret value, such as an API key. + string value = 1; +} + +// InferencePolicy resource maps sessions to summarization models. +message InferencePolicy { + // Kind is the resource kind. Should always be set to "inference_policy". + string kind = 1; + // SubKind is the resource sub-kind. Should be empty. + string sub_kind = 2; + // Version is the resource version. Should be set to "v1". + string version = 3; + teleport.header.v1.Metadata metadata = 4; + InferencePolicySpec spec = 5; +} + +// InferencePolicySpec maps sessions to summarization models using a filter. +message InferencePolicySpec { + // Kinds are session kinds matched by this policy, e.g., "ssh", "k8s", or "db". + repeated string kinds = 1; + // Model is the name of the `InferenceModel` resource to be used for + // summarization. + string model = 2; + // Filter is an optional filter expression using Teleport Predicate Language + // to select sessions for summarization. If it's empty, all sessions that + // match the list of kinds will be summarized using this model. + string filter = 3; +} + +// SummaryState is the state of the summarization process. +enum SummaryState { + SUMMARY_STATE_UNSPECIFIED = 0; + SUMMARY_STATE_PENDING = 1; + SUMMARY_STATE_SUCCESS = 2; + SUMMARY_STATE_ERROR = 3; +} + +// Summary represents a summary of a session recording. This format is used to +// store the summaries in the session storage and return them over gRPC. +message Summary { + // SessionId is the ID of the session whose recording was summarized. + string session_id = 1; + // State is the state of the summarization process. + SummaryState state = 2; + // InferenceStartedAt is the time when the summarization process started. + google.protobuf.Timestamp inference_started_at = 3; + // InferenceFinishedAt is the time when the summarization process finished.
+ google.protobuf.Timestamp inference_finished_at = 4; + // Content is the main text content of the summary, stored in Markdown + // format. Available if the state is SUMMARY_STATE_SUCCESS. + string content = 5; + // ModelName is the name of the `InferenceModel` resource that was used to + // generate this summary. + string model_name = 6; + // SessionEndEvent is the event that ended the summarized session. Session + // end events carry the most complete set of data that Teleport has about a + // given session. Used for checking access based on RBAC rule "where" + // filters. + // + // The event is stored in an unstructured form, as storing an instance of + // events.OneOf posed a number of technical challenges with JSON + // serialization as a subcomponent of this message. These challenges stem + // from the fact that audit events have gogoproto extensions. + google.protobuf.Struct session_end_event = 7; + // ErrorMessage is an error message if the summarization failed. Available if + // the state is SUMMARY_STATE_ERROR. + string error_message = 8; + // EnhancedSummary contains structured data extracted from the session. + EnhancedSummary enhanced_summary = 9; +} + +// The following enums and messages should be kept in sync with `e/lib/auth/summarizer/schema/command.go` + +// CommandCategory represents the category of a command. +enum CommandCategory { + // CommandCategoryUnspecified is the default value and indicates that the command + // category is not specified. + COMMAND_CATEGORY_UNSPECIFIED = 0; + // CommandCategoryFileOperation indicates that the command is related to file + // operations, such as copying, moving, or deleting files. + COMMAND_CATEGORY_FILE_OPERATION = 1; + // CommandCategoryNetwork indicates that the command is related to network + // operations, such as connecting to a remote host or transferring data. 
+ COMMAND_CATEGORY_NETWORK = 2; + // CommandCategoryProcess indicates that the command is related to process + // management, such as starting or stopping processes. + COMMAND_CATEGORY_PROCESS = 3; + // CommandCategorySystemConfig indicates that the command is related to system + // configuration, such as changing system settings or installing software. + COMMAND_CATEGORY_SYSTEM_CONFIG = 4; + // CommandCategoryDataAccess indicates that the command is related to data access, + // such as querying databases or reading data files. + COMMAND_CATEGORY_DATA_ACCESS = 5; + // CommandCategoryAuthentication indicates that the command is related to + // authentication, such as logging in or changing user credentials. + COMMAND_CATEGORY_AUTHENTICATION = 6; + // CommandCategoryOther indicates that the command does not fit into any of the + // other categories. + COMMAND_CATEGORY_OTHER = 7; +} + +// RiskLevel represents the risk level associated with a command. +enum RiskLevel { + // RiskLevelUnspecified is the default value and indicates that the risk level + // is not specified. + RISK_LEVEL_UNSPECIFIED = 0; + // RiskLevelLow indicates that the command has a low risk level. + RISK_LEVEL_LOW = 1; + // RiskLevelMedium indicates that the command has a medium risk level. + RISK_LEVEL_MEDIUM = 2; + // RiskLevelHigh indicates that the command has a high risk level. + RISK_LEVEL_HIGH = 3; + // RiskLevelCritical indicates that the command has a critical risk level. + RISK_LEVEL_CRITICAL = 4; +} + +// ThreatCategory represents the category of a detected threat. +enum ThreatCategory { + // ThreatCategoryUnspecified is the default value and indicates that the threat + // category is not specified. + THREAT_CATEGORY_UNSPECIFIED = 0; + // ThreatCategoryReconnaissance indicates that the threat is related to reconnaissance + // activities, such as scanning or probing the system. 
+ THREAT_CATEGORY_RECONNAISSANCE = 1; + // ThreatCategoryExecution indicates that the threat is related to execution activities, + // such as running malicious code. + THREAT_CATEGORY_EXECUTION = 2; + // ThreatCategoryPersistence indicates that the threat is related to persistence + // activities, such as installing backdoors. + THREAT_CATEGORY_PERSISTENCE = 3; + // ThreatCategoryPrivilegeEscalation indicates that the threat is related to privilege + // escalation activities, such as exploiting vulnerabilities to gain higher privileges. + THREAT_CATEGORY_PRIVILEGE_ESCALATION = 4; + // ThreatCategoryDefenseEvasion indicates that the threat is related to defense evasion + // activities, such as disabling security software. + THREAT_CATEGORY_DEFENSE_EVASION = 5; + // ThreatCategoryCredentialAccess indicates that the threat is related to credential + // access activities, such as stealing passwords. + THREAT_CATEGORY_CREDENTIAL_ACCESS = 6; + // ThreatCategoryDiscovery indicates that the threat is related to discovery activities, + // such as gathering information about the system. + THREAT_CATEGORY_DISCOVERY = 7; + // ThreatCategoryLateralMovement indicates that the threat is related to lateral movement + // activities, such as moving laterally within the network. + THREAT_CATEGORY_LATERAL_MOVEMENT = 8; + // ThreatCategoryCollection indicates that the threat is related to collection activities, + // such as collecting sensitive data. + THREAT_CATEGORY_COLLECTION = 9; + // ThreatCategoryExfiltration indicates that the threat is related to exfiltration + // activities, such as stealing data from the system. + THREAT_CATEGORY_EXFILTRATION = 10; + // ThreatCategoryImpact indicates that the threat is related to impact activities, + // such as disrupting system operations. + THREAT_CATEGORY_IMPACT = 11; + // ThreatCategoryNone indicates that there is no threat detected. 
+ THREAT_CATEGORY_NONE = 12; +} + +// CommandAnalysis represents a summary of a single command executed during a +// session. +message CommandAnalysis { + // Command is the command that was executed. + string command = 1; + // Category is the category of the command. + CommandCategory category = 2; + // Success indicates whether the command executed successfully. + bool success = 3; + // RiskLevel is the risk level associated with the command. + RiskLevel risk_level = 4; + // RiskScore is a numerical score representing the risk associated with the command. + int32 risk_score = 5; + // ThreatCategory is the category of any detected threat associated with the command. + ThreatCategory threat_category = 6; + // TimelineTitle is a brief title for the command's entry in the session timeline. + string timeline_title = 7; + // TimelineSubtitle is a more detailed subtitle for the command's entry in the session timeline. + string timeline_subtitle = 8; + // ShortDescription is a concise description of the command and its purpose. + string short_description = 9; + // DetailedDescription is an in-depth explanation of the command, its function, + // and any relevant context. + string detailed_description = 10; + // ErrorMessages contains any error messages produced during the execution of the command. + repeated string error_messages = 11; + // SuspiciousFlags contains any suspicious flags or indicators associated with the command. + repeated string suspicious_flags = 12; + // SensitiveItems contains any sensitive items accessed or modified by the command. + repeated string sensitive_items = 13; + // SuspiciousPatterns contains any suspicious patterns detected in relation to the command. + repeated string suspicious_patterns = 14; + // IOCs contains any indicators of compromise associated with the command. + repeated string iocs = 15; + // MitreAttackIDs contains any MITRE ATT&CK IDs relevant to the command. 
+ repeated string mitre_attack_ids = 16; + // HasSensitiveData indicates whether the command involved access to or modification of sensitive data. + bool has_sensitive_data = 17; + // PrivilegeEscalation indicates whether the command involved privilege escalation. + bool privilege_escalation = 18; + // DataExfiltration indicates whether the command involved data exfiltration. + bool data_exfiltration = 19; + // Persistence indicates whether the command involved persistence mechanisms. + bool persistence = 20; + // StartOffset is the start time of the command, relative to the start of the session. + google.protobuf.Duration start_offset = 21; + // EndOffset is the end time of the command, relative to the start of the session. + google.protobuf.Duration end_offset = 22; +} + +// SecurityRecommendation represents a security recommendation related to a command +message SecurityRecommendation { + // Title is a brief title for the security recommendation. + string title = 1; + // Description is a detailed description of the security recommendation. + string description = 2; + // Severity indicates the severity level of the recommendation. + RiskLevel severity = 3; +} + +// EnhancedSummary represents an enhanced summary of a session recording, +// with structured data extracted from the session. +message EnhancedSummary { + // ShortDescription is a concise description of the session. + string short_description = 1; + // DetailedDescription is an in-depth explanation of the session. + string detailed_description = 2; + // RiskLevel is the overall risk level associated with the session. + RiskLevel risk_level = 3; + // SuspiciousActivities is a list of suspicious activities detected during the session. + repeated string suspicious_activities = 4; + // CompromiseIndicators indicates whether any indicators of compromise were detected during the session. + bool compromise_indicators = 5; + // NotableCommandIndexes is a list of indexes of commands that are considered notable. 
+ repeated int32 notable_command_indexes = 6; + // Commands is a list of command summaries extracted from the session. + repeated CommandAnalysis commands = 7; +} diff --git a/api/proto/teleport/summarizer/v1/summarizer_service.proto b/api/proto/teleport/summarizer/v1/summarizer_service.proto new file mode 100644 index 0000000000000..3085829aa122c --- /dev/null +++ b/api/proto/teleport/summarizer/v1/summarizer_service.proto @@ -0,0 +1,343 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +syntax = "proto3"; + +package teleport.summarizer.v1; + +import "teleport/summarizer/v1/summarizer.proto"; + +option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport/summarizer/v1;summarizerv1"; + +// SummarizerService is the service for managing summarization inference +// models, secrets, and policies. These objects configure the session +// recording summarizer. +service SummarizerService { + // CreateInferenceModel creates a new InferenceModel. + rpc CreateInferenceModel(CreateInferenceModelRequest) returns (CreateInferenceModelResponse); + // GetInferenceModel retrieves an existing InferenceModel by name. + rpc GetInferenceModel(GetInferenceModelRequest) returns (GetInferenceModelResponse); + // UpdateInferenceModel updates an existing InferenceModel.
+ rpc UpdateInferenceModel(UpdateInferenceModelRequest) returns (UpdateInferenceModelResponse); + // UpsertInferenceModel creates a new InferenceModel or updates an existing + // one. + rpc UpsertInferenceModel(UpsertInferenceModelRequest) returns (UpsertInferenceModelResponse); + // DeleteInferenceModel deletes an existing InferenceModel by name. + rpc DeleteInferenceModel(DeleteInferenceModelRequest) returns (DeleteInferenceModelResponse); + // ListInferenceModels lists all InferenceModels that match the request. + rpc ListInferenceModels(ListInferenceModelsRequest) returns (ListInferenceModelsResponse); + + // CreateInferenceSecret creates a new InferenceSecret. + rpc CreateInferenceSecret(CreateInferenceSecretRequest) returns (CreateInferenceSecretResponse); + // GetInferenceSecret retrieves an existing InferenceSecret by name. + rpc GetInferenceSecret(GetInferenceSecretRequest) returns (GetInferenceSecretResponse); + // UpdateInferenceSecret updates an existing InferenceSecret. + rpc UpdateInferenceSecret(UpdateInferenceSecretRequest) returns (UpdateInferenceSecretResponse); + // UpsertInferenceSecret creates a new InferenceSecret or updates an existing + // one. + rpc UpsertInferenceSecret(UpsertInferenceSecretRequest) returns (UpsertInferenceSecretResponse); + // DeleteInferenceSecret deletes an existing InferenceSecret by name. + rpc DeleteInferenceSecret(DeleteInferenceSecretRequest) returns (DeleteInferenceSecretResponse); + // ListInferenceSecrets lists all InferenceSecrets that match the request. + rpc ListInferenceSecrets(ListInferenceSecretsRequest) returns (ListInferenceSecretsResponse); + + // CreateInferencePolicy creates a new InferencePolicy. + rpc CreateInferencePolicy(CreateInferencePolicyRequest) returns (CreateInferencePolicyResponse); + // GetInferencePolicy retrieves an existing InferencePolicy by name. 
+ rpc GetInferencePolicy(GetInferencePolicyRequest) returns (GetInferencePolicyResponse); + // UpdateInferencePolicy updates an existing InferencePolicy. + rpc UpdateInferencePolicy(UpdateInferencePolicyRequest) returns (UpdateInferencePolicyResponse); + // UpsertInferencePolicy creates a new InferencePolicy or updates an existing + // one. + rpc UpsertInferencePolicy(UpsertInferencePolicyRequest) returns (UpsertInferencePolicyResponse); + // DeleteInferencePolicy deletes an existing InferencePolicy by name. + rpc DeleteInferencePolicy(DeleteInferencePolicyRequest) returns (DeleteInferencePolicyResponse); + // ListInferencePolicies lists all InferencePolicies that match the request. + rpc ListInferencePolicies(ListInferencePoliciesRequest) returns (ListInferencePoliciesResponse); + + // GetSummary retrieves the inference result for a session, which + // contains the session summary. + rpc GetSummary(GetSummaryRequest) returns (GetSummaryResponse); + + // IsEnabled reports whether the summarizer is enabled by the license and + // configured. (Note that this says nothing about the actual correctness + // of the configuration or the presence of recording summaries.) + rpc IsEnabled(IsEnabledRequest) returns (IsEnabledResponse); +} + +// Summarization inference models + +// CreateInferenceModelRequest is a request for creating an InferenceModel. +message CreateInferenceModelRequest { + // Model is the InferenceModel resource to create. + InferenceModel model = 1; +} + +// CreateInferenceModelResponse is a response to creating an InferenceModel. +message CreateInferenceModelResponse { + // Model is the InferenceModel resource that was created. + InferenceModel model = 1; +} + +// GetInferenceModelRequest is a request for retrieving an InferenceModel. +message GetInferenceModelRequest { + // Name is the resource name of the InferenceModel to retrieve. + string name = 1; +} + +// GetInferenceModelResponse is a response to retrieving an InferenceModel.
+message GetInferenceModelResponse { + // Model is the InferenceModel resource that was retrieved. + InferenceModel model = 1; +} + +// UpdateInferenceModelRequest is a request for updating an InferenceModel. +message UpdateInferenceModelRequest { + // Model is the InferenceModel resource to update. + InferenceModel model = 1; +} + +// UpdateInferenceModelResponse is a response to updating an InferenceModel. +message UpdateInferenceModelResponse { + // Model is the InferenceModel resource that was updated. + InferenceModel model = 1; +} + +// UpsertInferenceModelRequest is a request for creating or updating an +// InferenceModel. +message UpsertInferenceModelRequest { + // Model is the InferenceModel resource to create or update. + InferenceModel model = 1; +} + +// UpsertInferenceModelResponse is a response to creating or updating an +// InferenceModel. +message UpsertInferenceModelResponse { + // Model is the InferenceModel resource that was created or updated. + InferenceModel model = 1; +} + +// DeleteInferenceModelRequest is a request for deleting an InferenceModel. +message DeleteInferenceModelRequest { + // Name is the resource name of the InferenceModel to delete. + string name = 1; +} + +// DeleteInferenceModelResponse is a response to deleting an InferenceModel. +message DeleteInferenceModelResponse {} + +// ListInferenceModelsRequest is a request for listing InferenceModels. +message ListInferenceModelsRequest { + // PageSize is the maximum number of items to return. The server may use a + // different page size at its discretion. + int32 page_size = 1; + // PageToken is the next_page_token value returned from a previous List + // request, if any. + string page_token = 2; +} + +// ListInferenceModelsResponse is the response for listing InferenceModels. +message ListInferenceModelsResponse { + // Models is the page of InferenceModels that matched the request.
+ repeated InferenceModel models = 1; + // NextPageToken is the token to retrieve the next page of results, or empty + // if there are no more results in the list. + string next_page_token = 2; +} + +// Summarization inference secrets + +// CreateInferenceSecretRequest is a request for creating an InferenceSecret. +message CreateInferenceSecretRequest { + // Secret is the InferenceSecret resource to create. + InferenceSecret secret = 1; +} + +// CreateInferenceSecretResponse is a response to creating an InferenceSecret. +message CreateInferenceSecretResponse { + // Secret is the InferenceSecret resource that was created. + InferenceSecret secret = 1; +} + +// GetInferenceSecretRequest is a request for retrieving an InferenceSecret. +message GetInferenceSecretRequest { + // Name is the resource name of the InferenceSecret to retrieve. + string name = 1; +} + +// GetInferenceSecretResponse is a response to retrieving an InferenceSecret. +message GetInferenceSecretResponse { + // Secret is the InferenceSecret resource that was retrieved. + InferenceSecret secret = 1; +} + +// UpdateInferenceSecretRequest is a request for updating an InferenceSecret. +message UpdateInferenceSecretRequest { + // Secret is the InferenceSecret resource to update. + InferenceSecret secret = 1; +} + +// UpdateInferenceSecretResponse is a response to updating an InferenceSecret. +message UpdateInferenceSecretResponse { + // Secret is the InferenceSecret resource that was updated. + InferenceSecret secret = 1; +} + +// UpsertInferenceSecretRequest is a request for creating or updating an +// InferenceSecret. +message UpsertInferenceSecretRequest { + // Secret is the InferenceSecret resource to create or update. + InferenceSecret secret = 1; +} + +// UpsertInferenceSecretResponse is a response to creating or updating an +// InferenceSecret. +message UpsertInferenceSecretResponse { + // Secret is the InferenceSecret resource that was created or updated.
+ InferenceSecret secret = 1; +} + +// DeleteInferenceSecretRequest is a request for deleting an InferenceSecret. +message DeleteInferenceSecretRequest { + // Name is the resource name of the InferenceSecret to delete. + string name = 1; +} + +// DeleteInferenceSecretResponse is a response to deleting an InferenceSecret. +message DeleteInferenceSecretResponse {} + +// ListInferenceSecretsRequest is a request for listing InferenceSecrets. +message ListInferenceSecretsRequest { + // PageSize is the maximum number of items to return. The server may use a + // different page size at its discretion. + int32 page_size = 1; + // PageToken is the next_page_token value returned from a previous List + // request, if any. + string page_token = 2; +} + +// ListInferenceSecretsResponse is the response for listing InferenceSecrets. +message ListInferenceSecretsResponse { + // Secrets is the page of InferenceSecrets that matched the + // request. + repeated InferenceSecret secrets = 1; + // NextPageToken is the token to retrieve the next page of results, or empty + // if there are no more results in the list. + string next_page_token = 2; +} + +// Summarization inference policies + +// CreateInferencePolicyRequest is a request for creating an InferencePolicy. +message CreateInferencePolicyRequest { + // Policy is the InferencePolicy resource to create. + InferencePolicy policy = 1; +} + +// CreateInferencePolicyResponse is a response to creating an InferencePolicy. +message CreateInferencePolicyResponse { + // Policy is the InferencePolicy resource that was created. + InferencePolicy policy = 1; +} + +// GetInferencePolicyRequest is a request for retrieving an InferencePolicy. +message GetInferencePolicyRequest { + // Name is the resource name of the InferencePolicy to retrieve. + string name = 1; +} + +// GetInferencePolicyResponse is a response to retrieving an InferencePolicy.
+message GetInferencePolicyResponse { + // Policy is the InferencePolicy resource that was retrieved. + InferencePolicy policy = 1; +} + +// UpdateInferencePolicyRequest is a request for updating an InferencePolicy. +message UpdateInferencePolicyRequest { + // Policy is the InferencePolicy resource to update. + InferencePolicy policy = 1; +} + +// UpdateInferencePolicyResponse is a response to updating an InferencePolicy. +message UpdateInferencePolicyResponse { + // Policy is the InferencePolicy resource that was updated. + InferencePolicy policy = 1; +} + +// UpsertInferencePolicyRequest is a request for creating or updating an +// InferencePolicy. +message UpsertInferencePolicyRequest { + // Policy is the InferencePolicy resource to create or update. + InferencePolicy policy = 1; +} + +// UpsertInferencePolicyResponse is a response to creating or updating an +// InferencePolicy. +message UpsertInferencePolicyResponse { + // Policy is the InferencePolicy resource that was created or updated. + InferencePolicy policy = 1; +} + +// DeleteInferencePolicyRequest is a request for deleting an InferencePolicy. +message DeleteInferencePolicyRequest { + // Name is the resource name of the InferencePolicy to delete. + string name = 1; +} + +// DeleteInferencePolicyResponse is a response to deleting an InferencePolicy. +message DeleteInferencePolicyResponse {} + +// ListInferencePoliciesRequest is a request for listing InferencePolicies. +message ListInferencePoliciesRequest { + // PageSize is the maximum number of items to return. The server may use a + // different page size at its discretion. + int32 page_size = 1; + // PageToken is the next_page_token value returned from a previous List + // request, if any. + string page_token = 2; +} + +// ListInferencePoliciesResponse is the response for listing InferencePolicies. +message ListInferencePoliciesResponse { + // Policies is the page of InferencePolicies that matched the + // request. 
+ repeated InferencePolicy policies = 1; + // NextPageToken is the token to retrieve the next page of results, or empty + // if there are no more results in the list. + string next_page_token = 2; +} + +// GetSummaryRequest is a request for retrieving a session recording summary. +message GetSummaryRequest { + // SessionId is the ID of the session whose summary is being requested. + string session_id = 1; +} + +// GetSummaryResponse is a response to retrieving a session recording summary. +message GetSummaryResponse { + // Summary is the session recording summary. + Summary summary = 1; +} + +// IsEnabledRequest is a request to check whether the summarizer is enabled. +message IsEnabledRequest {} + +// IsEnabledResponse is a response that reports whether the summarizer is enabled. +message IsEnabledResponse { + // Enabled reports whether the summarizer is enabled by the license and + // configured. (Note that this says nothing about the actual correctness of + // the configuration or the presence of recording summaries.) + bool enabled = 1; +} diff --git a/api/proto/teleport/trust/v1/trust_service.proto b/api/proto/teleport/trust/v1/trust_service.proto index 7d8748f2375f7..7e4300e3aaade 100644 --- a/api/proto/teleport/trust/v1/trust_service.proto +++ b/api/proto/teleport/trust/v1/trust_service.proto @@ -47,6 +47,8 @@ service TrustService { rpc CreateTrustedCluster(CreateTrustedClusterRequest) returns (types.TrustedClusterV2); // UpdateTrustedCluster updates a Trusted Cluster in a backend. rpc UpdateTrustedCluster(UpdateTrustedClusterRequest) returns (types.TrustedClusterV2); + // ListTrustedClusters returns a page of current Trusted Cluster resources. + rpc ListTrustedClusters(ListTrustedClustersRequest) returns (ListTrustedClustersResponse); } // Request for UpsertTrustedCluster. @@ -174,3 +176,21 @@ message GenerateHostCertResponse { // ssh_certificate is the encoded bytes of the SSH certificate generated by the RPC.
bytes ssh_certificate = 1; } + +// ListTrustedClustersRequest is the request for ListTrustedClusters. +message ListTrustedClustersRequest { + // The maximum number of items to return. + // The server may impose a different page size at its discretion. + int32 page_size = 1; + // The next_page_token value returned from a previous List request, if any. + string page_token = 2; +} + +// ListTrustedClustersResponse is the response for ListTrustedClusters. +message ListTrustedClustersResponse { + // TrustedClusters is a list of Trusted Cluster resources. + repeated types.TrustedClusterV2 trusted_clusters = 1; + // Token to retrieve the next page of results, or empty if there are no + // more results in the list. + string next_page_token = 2; +} diff --git a/api/proto/teleport/usageevents/v1/usageevents.proto b/api/proto/teleport/usageevents/v1/usageevents.proto index dde7cbea132c1..592b37578e163 100644 --- a/api/proto/teleport/usageevents/v1/usageevents.proto +++ b/api/proto/teleport/usageevents/v1/usageevents.proto @@ -147,6 +147,10 @@ enum DiscoverResource { DISCOVER_RESOURCE_KUBERNETES_EKS = 40; DISCOVER_RESOURCE_APPLICATION_AWS_CONSOLE = 41; + + DISCOVER_RESOURCE_MCP_STDIO = 42; + DISCOVER_RESOURCE_MCP_SSE = 43; + DISCOVER_RESOURCE_MCP_STREAMABLE_HTTP = 44; } // DiscoverResourceMetadata contains common metadata identifying resource type being added. 
@@ -613,6 +617,8 @@ enum IntegrationEnrollKind { INTEGRATION_ENROLL_KIND_MACHINE_ID_KUBERNETES = 25; INTEGRATION_ENROLL_KIND_AWS_IDENTITY_CENTER = 26; INTEGRATION_ENROLL_KIND_GITHUB_REPO_ACCESS = 27; + INTEGRATION_ENROLL_KIND_MACHINE_ID_ARGOCD = 28; + INTEGRATION_ENROLL_KIND_MACHINE_ID_GITHUB_ACTIONS_KUBERNETES = 29; } // IntegrationEnrollMetadata contains common metadata @@ -655,6 +661,12 @@ enum IntegrationEnrollStep { INTEGRATION_ENROLL_STEP_GITHUBRA_CREATE_GIT_SERVER = 6; INTEGRATION_ENROLL_STEP_GITHUBRA_CONFIGURE_SSH_CERT = 7; INTEGRATION_ENROLL_STEP_GITHUBRA_CREATE_ROLE = 8; + + // MWIGHAK8S denotes the MWI GitHub Actions + Kubernetes wizard. + INTEGRATION_ENROLL_STEP_MWIGHAK8S_CONNECT_GITHUB = 9; + INTEGRATION_ENROLL_STEP_MWIGHAK8S_CONFIGURE_ACCESS = 10; + INTEGRATION_ENROLL_STEP_MWIGHAK8S_SETUP_WORKFLOW = 11; + INTEGRATION_ENROLL_STEP_MWIGHAK8S_WELCOME = 12; } // IntegrationEnrollStatusCode defines status code for an integration enroll step. @@ -695,6 +707,89 @@ message UIIntegrationEnrollStepEvent { IntegrationEnrollStepStatus status = 3; } +// UIIntegrationEnrollSectionOpenEvent is emitted when the user opens or expands +// a section (e.g. "Advanced Options") in an integration setup wizard. +message UIIntegrationEnrollSectionOpenEvent { + // Metadata of the event. + IntegrationEnrollMetadata metadata = 1; + // Step where the section was opened. + IntegrationEnrollStep step = 2; + // Section that was opened. + IntegrationEnrollSection section = 3; +} + +// IntegrationEnrollSection identifies a section the user opened or expanded in +// an integration setup wizard. +enum IntegrationEnrollSection { + INTEGRATION_ENROLL_SECTION_UNSPECIFIED = 0; + + // MWIGHAK8S denotes the MWI GitHub Actions + Kubernetes wizard. 
+ INTEGRATION_ENROLL_SECTION_MWIGHAK8S_GITHUB_ADVANCED_OPTIONS = 1; + INTEGRATION_ENROLL_SECTION_MWIGHAK8S_KUBERNETES_LABEL_PICKER = 2; + INTEGRATION_ENROLL_SECTION_MWIGHAK8S_KUBERNETES_ADVANCED_OPTIONS = 3; + INTEGRATION_ENROLL_SECTION_MWIGHAK8S_KUBERNETES_RESOURCE_RULE_PICKER = 4; +} + +// UIIntegrationEnrollFieldCompleteEvent is emitted when the user completes a +// field in an integration setup wizard. +message UIIntegrationEnrollFieldCompleteEvent { + // Metadata of the event. + IntegrationEnrollMetadata metadata = 1; + // Step where the field was completed. + IntegrationEnrollStep step = 2; + // Field that was completed. + IntegrationEnrollField field = 3; +} + +// IntegrationEnrollField identifies a field the user completed in an integration +// setup wizard. +enum IntegrationEnrollField { + INTEGRATION_ENROLL_FIELD_UNSPECIFIED = 0; + + // MWIGHAK8S denotes the MWI GitHub Actions + Kubernetes wizard. + INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_REPOSITORY_URL = 1; + INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_BRANCH = 2; + INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_WORKFLOW = 3; + INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_ENVIRONMENT = 4; + INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_REF = 5; + INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_ENTERPRISE_SLUG = 6; + INTEGRATION_ENROLL_FIELD_MWIGHAK8S_GITHUB_ENTERPRISE_STATIC_JWKS = 7; + INTEGRATION_ENROLL_FIELD_MWIGHAK8S_KUBERNETES_LABELS = 8; + INTEGRATION_ENROLL_FIELD_MWIGHAK8S_KUBERNETES_GROUPS = 9; + INTEGRATION_ENROLL_FIELD_MWIGHAK8S_KUBERNETES_USERS = 10; + INTEGRATION_ENROLL_FIELD_MWIGHAK8S_KUBERNETES_RESOURCE_RULES = 11; +} + +// UIIntegrationEnrollCodeCopyEvent is emitted when the user copies generated +// IaC (e.g. Terraform) code in an integration setup wizard. +message UIIntegrationEnrollCodeCopyEvent { + // Metadata of the event. + IntegrationEnrollMetadata metadata = 1; + // Step where the code was copied. + IntegrationEnrollStep step = 2; + // Type of code that was copied. 
+ IntegrationEnrollCodeType type = 3; +} + +// IntegrationEnrollCodeType identifies the type of code that was copied in an +// integration setup wizard. +enum IntegrationEnrollCodeType { + INTEGRATION_ENROLL_CODE_TYPE_UNSPECIFIED = 0; + INTEGRATION_ENROLL_CODE_TYPE_TERRAFORM = 1; + INTEGRATION_ENROLL_CODE_TYPE_GITHUB_ACTIONS_YAML = 2; +} + +// UIIntegrationEnrollLinkClickEvent is emitted when the user clicks a link to +// documentation, etc. in an integration setup wizard. +message UIIntegrationEnrollLinkClickEvent { + // Metadata of the event. + IntegrationEnrollMetadata metadata = 1; + // Step where the link was clicked. + IntegrationEnrollStep step = 2; + // Link that was clicked. + string link = 3; +} + // ResourceCreateEvent is emitted when a resource is created. message ResourceCreateEvent { // resource_type is the type of resource ("node", "node.openssh", "db", "k8s", "app"). @@ -915,6 +1010,10 @@ message UsageEventOneOf { UIAccessGraphCrownJewelDiffViewEvent ui_access_graph_crown_jewel_diff_view = 59; UserTaskStateEvent user_task_state_event = 60; UIIntegrationEnrollStepEvent ui_integration_enroll_step_event = 61; + UIIntegrationEnrollSectionOpenEvent ui_integration_enroll_section_open_event = 62; + UIIntegrationEnrollFieldCompleteEvent ui_integration_enroll_field_complete_event = 63; + UIIntegrationEnrollCodeCopyEvent ui_integration_enroll_code_copy_event = 64; + UIIntegrationEnrollLinkClickEvent ui_integration_enroll_link_click_event = 65; } reserved 2; //UIOnboardGetStartedClickEvent reserved "ui_onboard_get_started_click"; diff --git a/api/proto/teleport/userloginstate/v1/userloginstate.proto b/api/proto/teleport/userloginstate/v1/userloginstate.proto index 85e8401161b80..7df86844bf8c8 100644 --- a/api/proto/teleport/userloginstate/v1/userloginstate.proto +++ b/api/proto/teleport/userloginstate/v1/userloginstate.proto @@ -32,32 +32,73 @@ message UserLoginState { // Spec is the specification for a user login state. 
message Spec { - // roles are the user roles attached to the user. + // roles are the user roles attached to the user. Basically, [roles] = [original_roles] + + // [access_list_roles]. repeated string roles = 1; - // traits are the traits attached to the user. + // traits are the traits attached to the user. Basically, [traits] = [original_traits] + + // [access_list_traits]. repeated teleport.trait.v1.Trait traits = 2; // user_type is the type of user this state represents. string user_type = 3; - // original_roles are the user roles that are part of the user's static definition. These roles are - // not affected by access granted by access lists and are obtained prior to granting access list access. + // original_roles are the user roles that are part of the user's static definition. These roles + // are not affected by access granted by access lists and are obtained prior to granting access + // list access. Basically, [original_roles] = [roles] - [access_list_roles]. repeated string original_roles = 4; - // original_traits are the user traits that are part of the user's static definition. These traits are - // not affected by access granted by access lists and are obtained prior to granting access list access. + // original_traits are the user traits that are part of the user's static definition. These + // traits are not affected by access granted by access lists and are obtained prior to granting + // access list access. Basically, [original_traits] = [traits] - [access_list_traits]. repeated teleport.trait.v1.Trait original_traits = 5; // GitHubIdentity is the external identity attached to this user state. ExternalIdentity git_hub_identity = 6; + + // saml_identities are the identities created from the SAML connectors used to log in by this + // username. They are useful for RBAC calculation when there is a user with the same username from + // multiple SSO connectors, i.e. having multiple SSO identities.
+ // + // NOTE: There is no mechanism to clean up those identities. If the user is deleted, the + // user_login_state and its saml_identities will not be deleted. Even if the user still + // exists but its SAML identity expires, it isn't cleared from the user_login_state. This means + // the information stored here can be used only as long as there is a background sync running and + // making sure the user's info is up-to-date. E.g. the Okta assignment creator uses this + // information, but it runs only when Okta user sync is active and periodically updates the + // user, which in turn updates the user_login_state. + // + // NOTE2: This field isn't currently used. It's introduced so we can resolve the + // https://github.com/gravitational/teleport.e/issues/6723 issue in stages. + // STAGE 1 is to introduce this field and give existing Teleport + // installations enough time to update and populate this field. + // STAGE 2, in the v19 release (or maybe even v20), is to deploy the actual fix PR + // (https://github.com/gravitational/teleport.e/pull/7168), which reads this field and calculates + // access to Okta resources. See more details in the description of the fix PR. + // + // TODO(kopiczko) v19: consider proceeding with STAGE 2 described above. + repeated ExternalIdentity saml_identities = 7; + + // access_list_roles are roles granted to this user by Access List + // membership/ownership. Basically, [access_list_roles] = [roles] - [original_roles]. + repeated string access_list_roles = 8; + + // access_list_traits are traits granted to this user by Access List membership/ownership. + // Basically, [access_list_traits] = [traits] - [original_traits]. + repeated teleport.trait.v1.Trait access_list_traits = 9; + } // ExternalIdentity defines an external identity attached to this user state.
message ExternalIdentity { - // UserId is the unique identifier of the external identity such as GitHub user + // UserID is the unique identifier of the external identity such as GitHub user // ID. string user_id = 1; // Username is the username of the external identity. string username = 2; + // ConnectorID is the connector this identity was created with. It's empty for local users. + string connector_id = 3; + // GrantedRoles are the roles specific to this identity, e.g. from the connector's attribute mapping. + repeated string granted_roles = 4; + // GrantedTraits are the traits specific to this identity, e.g. from the connector's attribute mapping. + repeated teleport.trait.v1.Trait granted_traits = 5; } diff --git a/api/proto/teleport/userloginstate/v1/userloginstate_service.proto b/api/proto/teleport/userloginstate/v1/userloginstate_service.proto index 2c2d203d4efa5..c1b0c09d8016f 100644 --- a/api/proto/teleport/userloginstate/v1/userloginstate_service.proto +++ b/api/proto/teleport/userloginstate/v1/userloginstate_service.proto @@ -24,6 +24,7 @@ option go_package = "github.com/gravitational/teleport/api/gen/proto/go/teleport // UserLoginStateService provides CRUD methods for user login state resources. service UserLoginStateService { // GetUserLoginStates returns a list of all user login states. + // Deprecated: Use ListUserLoginStates instead. rpc GetUserLoginStates(GetUserLoginStatesRequest) returns (GetUserLoginStatesResponse); // GetUserLoginState returns the specified user login state resource. rpc GetUserLoginState(GetUserLoginStateRequest) returns (UserLoginState); @@ -33,6 +34,8 @@ service UserLoginStateService { rpc DeleteUserLoginState(DeleteUserLoginStateRequest) returns (google.protobuf.Empty); // DeleteAllUserLoginStates hard deletes all user login states. rpc DeleteAllUserLoginStates(DeleteAllUserLoginStatesRequest) returns (google.protobuf.Empty); + // ListUserLoginStates lists all user login states, with pagination support.
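The new list RPC follows the usual page-token contract: the caller passes back `next_page_token` until the server returns an empty token. A minimal client-side paging loop, sketched with simplified stand-in types rather than the generated gRPC client (names here are illustrative only):

```go
package main

import (
	"fmt"
	"strconv"
)

// listRequest/listResponse are simplified stand-ins for the generated
// ListUserLoginStatesRequest/Response messages.
type listRequest struct {
	PageSize  int32
	PageToken string
}

type listResponse struct {
	Names         []string
	NextPageToken string
}

// fakePage serves one page from a fixed backing slice; an empty
// NextPageToken signals the final page.
func fakePage(all []string, req listRequest) listResponse {
	start := 0
	if req.PageToken != "" {
		start, _ = strconv.Atoi(req.PageToken)
	}
	end := start + int(req.PageSize)
	if end >= len(all) {
		return listResponse{Names: all[start:]}
	}
	return listResponse{Names: all[start:end], NextPageToken: strconv.Itoa(end)}
}

// collectAll drains every page — the same loop a caller of a
// paginated List RPC would write.
func collectAll(all []string, pageSize int32) []string {
	var out []string
	token := ""
	for {
		resp := fakePage(all, listRequest{PageSize: pageSize, PageToken: token})
		out = append(out, resp.Names...)
		if resp.NextPageToken == "" {
			return out
		}
		token = resp.NextPageToken
	}
}

func main() {
	states := []string{"alice", "bob", "carol", "dave", "erin"}
	fmt.Println(len(collectAll(states, 2))) // 5
}
```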
+ rpc ListUserLoginStates(ListUserLoginStatesRequest) returns (ListUserLoginStatesResponse); } // GetUserLoginStatesRequest is the request for getting all user login states. @@ -64,3 +67,19 @@ message DeleteUserLoginStateRequest { // DeleteAllUserLoginStatesRequest is the request for deleting all user login states. message DeleteAllUserLoginStatesRequest {} + +// ListUserLoginStatesRequest is the request for listing user login states with pagination. +message ListUserLoginStatesRequest { + // page_size is the maximum number of user login states to return. + int32 page_size = 1; + // page_token is the token for the next page of results. + string page_token = 2; +} + +// ListUserLoginStatesResponse is the response for listing user login states with pagination. +message ListUserLoginStatesResponse { + // user_login_states is the list of user login states. + repeated UserLoginState user_login_states = 1; + // next_page_token is the token for the next page of results. + string next_page_token = 2; +} diff --git a/api/proto/teleport/workloadidentity/v1/join_attrs.proto b/api/proto/teleport/workloadidentity/v1/join_attrs.proto index ee78bed71402b..b726f4086cd43 100644 --- a/api/proto/teleport/workloadidentity/v1/join_attrs.proto +++ b/api/proto/teleport/workloadidentity/v1/join_attrs.proto @@ -49,6 +49,8 @@ message JoinAttrs { JoinAttrsOracle oracle = 13; // Attributes that are specific to the Azure Devops (`azure_devops`) join method. JoinAttrsAzureDevops azure_devops = 14; + // Attributes that are specific to the Env0 (`env0`) join method. + JoinAttrsEnv0 env0 = 15; } // The collection of attributes that result from the join process but are not @@ -358,3 +360,44 @@ message JoinAttrsAzureDevopsPipeline { // The ID of the run that is being executed. string run_id = 11; } + +// Attributes that are specific to the Env0 (`env0`) join method. +message JoinAttrsEnv0 { + // The `sub` claim of an Env0 OIDC token. 
+ string sub = 1; + // The unique organization identifier, corresponding to `organizationId` in an + // Env0 OIDC token. + string organization_id = 2; + // The unique project identifier, corresponding to `projectId` in an Env0 OIDC + // token. + string project_id = 3; + // The name of the project under which the job was run, corresponding to + // `projectName` in an Env0 OIDC token. + string project_name = 4; + // The unique identifier of the Env0 template, corresponding to `templateId` + // in an Env0 OIDC token. + string template_id = 5; + // The name of the Env0 template, corresponding to `templateName` in an Env0 + // OIDC token. + string template_name = 6; + // The unique identifier of the Env0 environment, corresponding to + // `environmentId` in an Env0 OIDC token. + string environment_id = 7; + // The name of the Env0 environment, corresponding to `environmentName` in an + // Env0 OIDC token. + string environment_name = 8; + // The name of the Env0 workspace, corresponding to `workspaceName` in an Env0 + // OIDC token. + string workspace_name = 9; + // A unique ID for this deployment, corresponding to `deploymentLogId` in an + // Env0 OIDC token. + string deployment_log_id = 10; + // The Env0 deployment type, such as "deploy", "destroy", etc. Corresponds to + // `deploymentType` in an Env0 OIDC token. + string deployment_type = 11; + // The email of the person who triggered the deployment, corresponding to + // `deployerEmail` in an Env0 OIDC token. + string deployer_email = 12; + // A custom tag value, corresponding to `env0Tag` when `ENV0_OIDC_TAG` is set.
+ string env0_tag = 13; +} diff --git a/api/proto/teleport/workloadidentity/v1/resource_service.proto b/api/proto/teleport/workloadidentity/v1/resource_service.proto index 71d41996ec745..9c72ecadc4577 100644 --- a/api/proto/teleport/workloadidentity/v1/resource_service.proto +++ b/api/proto/teleport/workloadidentity/v1/resource_service.proto @@ -44,6 +44,10 @@ service WorkloadIdentityResourceService { // ListWorkloadIdentities of all workload identities, pagination semantics are // applied. rpc ListWorkloadIdentities(ListWorkloadIdentitiesRequest) returns (ListWorkloadIdentitiesResponse); + // ListWorkloadIdentitiesV2 lists all workload identities; pagination semantics are + // applied. Sorting by name or SPIFFE ID is supported, and results can be + // filtered using a search term. + rpc ListWorkloadIdentitiesV2(ListWorkloadIdentitiesV2Request) returns (ListWorkloadIdentitiesResponse); } // The request for CreateWorkloadIdentity. @@ -85,6 +89,21 @@ message ListWorkloadIdentitiesRequest { string page_token = 2; } +// The request for ListWorkloadIdentitiesV2. +message ListWorkloadIdentitiesV2Request { + // The maximum number of items to return. + // The server may impose a different page size at its discretion. + int32 page_size = 1; + // The page_token value returned from a previous ListWorkloadIdentitiesV2 request, if any. + string page_token = 2; + // The sort field to use for the results. If empty, the default sort field is used. + string sort_field = 3; + // Whether to sort the results in descending order. If false, ascending order is used. + bool sort_desc = 4; + // A search term used to filter the results. If non-empty, it's used to match against supported fields. + string filter_search_term = 5; +} + // The response for ListWorkloadIdentities. message ListWorkloadIdentitiesResponse { // The page of workload identities that matched the request.
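ListWorkloadIdentitiesV2 layers a search filter and name sorting on top of the plain listing. A toy sketch of those semantics in Go (illustrative only; the real server's field matching and sort rules may differ):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

type identity struct {
	Name     string
	SpiffeID string
}

// listV2 keeps identities whose name or SPIFFE ID contains the search
// term, then sorts by name, mimicking the documented V2 semantics.
func listV2(all []identity, searchTerm string, sortDesc bool) []identity {
	var out []identity
	for _, id := range all {
		if searchTerm == "" ||
			strings.Contains(id.Name, searchTerm) ||
			strings.Contains(id.SpiffeID, searchTerm) {
			out = append(out, id)
		}
	}
	sort.Slice(out, func(i, j int) bool {
		if sortDesc {
			return out[i].Name > out[j].Name
		}
		return out[i].Name < out[j].Name
	})
	return out
}

func main() {
	ids := []identity{
		{Name: "web", SpiffeID: "spiffe://cluster/web"},
		{Name: "db", SpiffeID: "spiffe://cluster/db"},
		{Name: "worker", SpiffeID: "spiffe://cluster/worker"},
	}
	for _, id := range listV2(ids, "w", true) {
		fmt.Println(id.Name)
	}
}
```

With the search term "w" and descending sort, only "worker" and "web" survive the filter, in that order.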
diff --git a/api/types/access_request.go b/api/types/access_request.go index 590c67b8bca20..2d114ec045cb6 100644 --- a/api/types/access_request.go +++ b/api/types/access_request.go @@ -135,8 +135,16 @@ type AccessRequest interface { GetDryRunEnrichment() *AccessRequestDryRunEnrichment // SetDryRunEnrichment sets the dry run enrichment data. SetDryRunEnrichment(*AccessRequestDryRunEnrichment) + // GetRequestKind gets the kind of request. + GetRequestKind() AccessRequestKind + // SetRequestKind sets the kind (short/long-term) of request. + SetRequestKind(AccessRequestKind) // Copy returns a copy of the access request resource. Copy() AccessRequest + // GetLongTermResourceGrouping gets the long-term resource grouping, if present. + GetLongTermResourceGrouping() *LongTermResourceGrouping + // SetLongTermResourceGrouping sets the long-term resource grouping. + SetLongTermResourceGrouping(*LongTermResourceGrouping) } // NewAccessRequest assembles an AccessRequest resource. @@ -528,6 +536,36 @@ func (r *AccessRequestV3) SetDryRunEnrichment(enrichment *AccessRequestDryRunEnr r.Spec.DryRunEnrichment = enrichment } +// GetRequestKind gets the kind of request. +func (r *AccessRequestV3) GetRequestKind() AccessRequestKind { + return r.Spec.RequestKind +} + +// SetRequestKind sets the kind (short/long-term) of request. +func (r *AccessRequestV3) SetRequestKind(kind AccessRequestKind) { + r.Spec.RequestKind = kind +} + +// GetLongTermResourceGrouping gets the long-term resource grouping, if present. +func (r *AccessRequestV3) GetLongTermResourceGrouping() *LongTermResourceGrouping { + return r.Spec.LongTermGrouping +} + +// SetLongTermResourceGrouping sets the long-term resource grouping suggestion. +func (r *AccessRequestV3) SetLongTermResourceGrouping(grouping *LongTermResourceGrouping) { + r.Spec.LongTermGrouping = grouping +} + +// IsLongTerm checks if the request kind is long-term. 
+func (a AccessRequestKind) IsLongTerm() bool { + return a == AccessRequestKind_LONG_TERM +} + +// IsShortTerm checks if the request kind is explicitly short-term, or is undefined. +func (a AccessRequestKind) IsShortTerm() bool { + return a != AccessRequestKind_LONG_TERM +} + // Copy returns a copy of the access request resource. func (r *AccessRequestV3) Copy() AccessRequest { return utils.CloneProtoMsg(r) diff --git a/api/types/accesslist/accesslist.go b/api/types/accesslist/accesslist.go index 0d45014d9cf0c..9784c72cd5f5b 100644 --- a/api/types/accesslist/accesslist.go +++ b/api/types/accesslist/accesslist.go @@ -18,6 +18,7 @@ package accesslist import ( "encoding/json" + "slices" "strings" "time" @@ -40,8 +41,6 @@ const ( ThreeMonths ReviewFrequency = 3 SixMonths ReviewFrequency = 6 OneYear ReviewFrequency = 12 - - twoWeeks = 24 * time.Hour * 14 ) func (r ReviewFrequency) String() string { @@ -143,6 +142,11 @@ type AccessList struct { // Spec is the specification for an access list. type Spec struct { + // Type can be an empty string, which denotes a regular Access List; "scim", which represents + // an Access List created from a SCIM group; or "static", for Access Lists managed by IaC + // tools. + Type Type `json:"type" yaml:"type"` + // Title is a plaintext short description of the access list. Title string `json:"title" yaml:"title"` @@ -172,11 +176,52 @@ type Spec struct { OwnerGrants Grants `json:"owner_grants" yaml:"owner_grants"` } +type Type string + +const ( + // TODO(kopiczko) v21: Remove DeprecatedDynamic. The only version setting this type is 17.5.4. + + // DeprecatedDynamic is deprecated and should not be used. Use [Default] instead. It has + // the same semantic meaning. + DeprecatedDynamic Type = "dynamic" + // Default Access Lists are the default type, intended to be managed via the web UI. They + // require periodic audit reviews. + Default Type = "" + // Static Access Lists are intended to be managed with IaC tools like Terraform.
Audit + // reviews are not supported for them, and ownership is optional. + Static Type = "static" + // SCIM Access Lists are created with the SCIM integration. Audit reviews are not supported + // for them, and ownership is optional. + SCIM Type = "scim" +) + +// AllTypes is a slice of all currently supported access list types. +var AllTypes = []Type{DeprecatedDynamic, Default, Static, SCIM} + +// IsReviewable returns true if the AccessList type supports audit reviews in the web UI. +func (t Type) IsReviewable() bool { + switch t { + case DeprecatedDynamic, Default: + return true + default: + return false + } +} + +// Equals checks if the Type is equal to another. +func (t Type) Equals(other Type) bool { + return t == other +} + // Owner is an owner of an access list. type Owner struct { // Name is the username of the owner. Name string `json:"name" yaml:"name"` + // Title is the title of an owner if it is of type MEMBERSHIP_KIND_LIST. + // This is only populated by the proxy when fetching an access list and its members for the web UI. + Title string `json:"title" yaml:"title"` + // Description is the plaintext description of the owner and why they are an owner. Description string `json:"description" yaml:"description"` @@ -188,6 +233,22 @@ type Owner struct { MembershipKind string `json:"membership_kind" yaml:"membership_kind"` } +// IsMembershipKindUser returns true if the owner is of kind user. +// All types except "MEMBERSHIP_KIND_LIST" are treated as "MEMBERSHIP_KIND_USER". +func (o *Owner) IsMembershipKindUser() bool { + return isMembershipKindUser(o.MembershipKind) +} + +func isMembershipKindUser(membershipKind string) bool { + switch membershipKind { + case MembershipKindUnspecified, MembershipKindUser, "": + return true + default: + // In case MembershipKind is ever extended. + return false + } +} + // Audit describes the audit configuration for an access list.
type Audit struct { // NextAuditDate is the date that the next audit should be performed. @@ -231,6 +292,17 @@ func (r *Requires) IsEmpty() bool { return len(r.Roles) == 0 && len(r.Traits) == 0 } +// Clone returns a deep copy of the [Requires] +func (r *Requires) Clone() Requires { + if r == nil { + return Requires{} + } + return Requires{ + Roles: slices.Clone(r.Roles), + Traits: r.Traits.Clone(), + } +} + // Grants describes what access is granted by membership to the access list. type Grants struct { // Roles are the roles that are granted to users who are members of the access list. @@ -254,6 +326,8 @@ type Status struct { // CurrentUserAssignments describes the current user's ownership and membership in the access list. CurrentUserAssignments *CurrentUserAssignments `json:"-" yaml:"-"` + // UserAssignments describes the requested user's ownership and membership assignment types in the access list. + UserAssignments *UserAssignments `json:"-" yaml:"-"` } // CurrentUserAssignments describes the current user's ownership and membership status in the access list. @@ -274,6 +348,24 @@ func (c *CurrentUserAssignments) IsOwner() bool { return c.OwnershipType != accesslistv1.AccessListUserAssignmentType_ACCESS_LIST_USER_ASSIGNMENT_TYPE_UNSPECIFIED } +// UserAssignments describes the requested user's ownership and membership assignment types in the access list. +type UserAssignments struct { + // OwnershipType represents the requested user's ownership type (explicit, inherited, or none) in the access list. + OwnershipType accesslistv1.AccessListUserAssignmentType `json:"ownership_type" yaml:"ownership_type"` + // MembershipType represents the requested user's membership type (explicit, inherited, or none) in the access list. + MembershipType accesslistv1.AccessListUserAssignmentType `json:"membership_type" yaml:"membership_type"` +} + +// IsMember returns true if the MembershipType is either explicit or inherited. 
+func (u *UserAssignments) IsMember() bool { + return u.MembershipType != accesslistv1.AccessListUserAssignmentType_ACCESS_LIST_USER_ASSIGNMENT_TYPE_UNSPECIFIED +} + +// IsOwner returns true if the OwnershipType is either explicit or inherited. +func (u *UserAssignments) IsOwner() bool { + return u.OwnershipType != accesslistv1.AccessListUserAssignmentType_ACCESS_LIST_USER_ASSIGNMENT_TYPE_UNSPECIFIED +} + // NewAccessList will create a new access list. func NewAccessList(metadata header.Metadata, spec Spec) (*AccessList, error) { accessList := &AccessList{ @@ -288,7 +380,8 @@ func NewAccessList(metadata header.Metadata, spec Spec) (*AccessList, error) { return accessList, nil } -// CheckAndSetDefaults validates fields and populates empty fields with default values. +// CheckAndSetDefaults performs very basic validation and populates empty fields with default +// values. The main validation is performed before writing to the storage. func (a *AccessList) CheckAndSetDefaults() error { a.SetKind(types.KindAccessList) a.SetVersion(types.V1) @@ -297,40 +390,31 @@ func (a *AccessList) CheckAndSetDefaults() error { return trace.Wrap(err) } - if a.Spec.Title == "" { - return trace.BadParameter("access list title required") - } - - if len(a.Spec.Owners) == 0 { - return trace.BadParameter("owners are missing") - } - - if a.Spec.Audit.Recurrence.Frequency == 0 { - a.Spec.Audit.Recurrence.Frequency = SixMonths + // Restore the type if the cluster was ever running in version 17.5.4.
+ if a.Spec.Type == DeprecatedDynamic { + a.Spec.Type = Default } - switch a.Spec.Audit.Recurrence.Frequency { - case OneMonth, ThreeMonths, SixMonths, OneYear: - default: - return trace.BadParameter("recurrence frequency is an invalid value") - } - - if a.Spec.Audit.Recurrence.DayOfMonth == 0 { - a.Spec.Audit.Recurrence.DayOfMonth = FirstDayOfMonth - } - - switch a.Spec.Audit.Recurrence.DayOfMonth { - case FirstDayOfMonth, FifteenthDayOfMonth, LastDayOfMonth: - default: - return trace.BadParameter("recurrence day of month is an invalid value") - } - - if a.Spec.Audit.NextAuditDate.IsZero() { - a.setInitialAuditDate(clockwork.NewRealClock()) + if a.Spec.Title == "" { + return trace.BadParameter("access list title required") } - if a.Spec.Audit.Notifications.Start == 0 { - a.Spec.Audit.Notifications.Start = twoWeeks + if a.IsReviewable() { + if a.Spec.Audit.Recurrence.Frequency == 0 { + a.Spec.Audit.Recurrence.Frequency = SixMonths + } + if a.Spec.Audit.Recurrence.DayOfMonth == 0 { + a.Spec.Audit.Recurrence.DayOfMonth = FirstDayOfMonth + } + if a.Spec.Audit.NextAuditDate.IsZero() { + if err := a.setInitialAuditDate(clockwork.NewRealClock()); err != nil { + return trace.Wrap(err, "setting initial audit date") + } + } + if a.Spec.Audit.Notifications.Start == 0 { + twoWeeks := 24 * time.Hour * 14 + a.Spec.Audit.Notifications.Start = twoWeeks + } } // Deduplicate owners. The backend will currently prevent this, but it's possible that access lists @@ -408,9 +492,12 @@ func (a *AccessList) MatchSearch(values []string) bool { // Clone returns a copy of the list. 
func (a *AccessList) Clone() *AccessList { - var copy *AccessList - utils.StrictObjectToStruct(a, ©) - return copy + if a == nil { + return nil + } + out := &AccessList{} + deriveDeepCopyAccessList(out, a) + return out } func (a *Audit) UnmarshalJSON(data []byte) error { @@ -510,8 +597,17 @@ func (n Notifications) MarshalJSON() ([]byte, error) { }) } +// IsReviewable returns true if the AccessList type supports the audit reviews in the web UI. +func (a *AccessList) IsReviewable() bool { + return a.Spec.Type.IsReviewable() +} + // SelectNextReviewDate will select the next review date for the access list. -func (a *AccessList) SelectNextReviewDate() time.Time { +func (a *AccessList) SelectNextReviewDate() (time.Time, error) { + if !a.IsReviewable() { + return time.Time{}, trace.BadParameter("access_list %q is not reviewable", a.GetName()) + } + numMonths := int(a.Spec.Audit.Recurrence.Frequency) dayOfMonth := int(a.Spec.Audit.Recurrence.DayOfMonth) @@ -526,15 +622,94 @@ func (a *AccessList) SelectNextReviewDate() time.Time { nextDate := time.Date(currentReviewDate.Year(), currentReviewDate.Month()+time.Month(numMonths), dayOfMonth, 0, 0, 0, 0, time.UTC) - return nextDate + return nextDate, nil } // setInitialAuditDate sets the NextAuditDate for a newly created AccessList. // The function is extracted from CheckAndSetDefaults for the sake of testing // (we need to pass a fake clock). -func (a *AccessList) setInitialAuditDate(clock clockwork.Clock) { +func (a *AccessList) setInitialAuditDate(clock clockwork.Clock) (err error) { // We act as if the AccessList just got reviewed (we just created it, so // we're pretty sure of what it does) and pick the next review date. a.Spec.Audit.NextAuditDate = clock.Now() - a.Spec.Audit.NextAuditDate = a.SelectNextReviewDate() + a.Spec.Audit.NextAuditDate, err = a.SelectNextReviewDate() + return trace.Wrap(err) +} + +// EqualAccessListsOption is a functional option for configuring +// the behavior of EqualAccessLists. 
+type EqualAccessListsOption func(*equalAccessListsConfig) + +type equalAccessListsConfig struct { + skipClone bool + resetFieldsFn func(*AccessList) +} + +// WithSkipClone configures EqualAccessLists to skip cloning +// and directly mutate the input access lists. Use this option only when you're +// certain the input access lists can be safely modified (e.g., they're already +// clones or will be discarded after comparison). +func WithSkipClone() EqualAccessListsOption { + return func(c *equalAccessListsConfig) { + c.skipClone = true + } +} + +// WithIgnoreEphemeralFields configures EqualAccessLists to reset ephemeral +// fields before comparison. Ephemeral fields are those managed by reconcilers +// or the backend and should typically be ignored when comparing access lists. +// +// The following fields are reset: +// - Metadata.Revision: Managed by the backend +// - Status: Contains dynamically calculated fields (member counts, assignments, etc.) +// - Owner.IneligibleStatus: Managed by the IneligibleStatusReconciler +// +// Note: This option causes the input access lists to be cloned (unless WithSkipClone +// is also used) to avoid modifying the originals. +func WithIgnoreEphemeralFields() EqualAccessListsOption { + return func(c *equalAccessListsConfig) { + c.resetFieldsFn = resetEphemeralFieldsAccessList + } +} + +// EqualAccessLists compares two access lists for semantic equality. +// +// By default, this function performs a standard equality check. Use WithIgnoreEphemeralFields() +// to ignore ephemeral fields that are managed by reconcilers or the backend. This function +// mimics the behavior of services.CompareResources for AccessList types when used with +// WithIgnoreEphemeralFields(). +// +// By default, this function clones the input access lists before comparison to avoid +// modifying the originals. 
Use WithSkipClone() to skip cloning if the inputs can be +// safely modified (e.g., when inputs are already clones or will be discarded after comparison). +func EqualAccessLists(a, b *AccessList, opts ...EqualAccessListsOption) bool { + cfg := equalAccessListsConfig{} + for _, opt := range opts { + opt(&cfg) + } + + if !cfg.skipClone { + a = a.Clone() + b = b.Clone() + } + + if cfg.resetFieldsFn != nil { + cfg.resetFieldsFn(a) + cfg.resetFieldsFn(b) + } + + return deriveTeleportEqualAccessList(a, b) +} + +// resetEphemeralFieldsAccessList clears ephemeral fields that should be +// ignored when comparing access lists. +func resetEphemeralFieldsAccessList(a *AccessList) { + if a == nil { + return + } + a.Metadata.Revision = "" + a.Status = Status{} + for i := range a.Spec.Owners { + a.Spec.Owners[i].IneligibleStatus = "" + } } diff --git a/api/types/accesslist/accesslist_test.go b/api/types/accesslist/accesslist_test.go index 765f398c77c89..73491f993293d 100644 --- a/api/types/accesslist/accesslist_test.go +++ b/api/types/accesslist/accesslist_test.go @@ -18,6 +18,7 @@ package accesslist import ( "encoding/json" + "fmt" "testing" "time" @@ -182,6 +183,8 @@ func TestAuditMarshaling(t *testing.T) { } func TestAuditUnmarshaling(t *testing.T) { + const twoWeeks = 14 * 24 * time.Hour + tests := []struct { name string input map[string]interface{} @@ -243,86 +246,95 @@ func TestAuditUnmarshaling(t *testing.T) { } } -func TestAccessListDefaults(t *testing.T) { - newValidAccessList := func() *AccessList { - return &AccessList{ - ResourceHeader: header.ResourceHeader{ - Metadata: header.Metadata{ - Name: "test", - }, - }, - Spec: Spec{ - Title: "test access list", - Owners: []Owner{{Name: "Daphne"}}, - Grants: Grants{Roles: []string{"requester"}}, - Audit: Audit{ - NextAuditDate: time.Date(2000, time.September, 12, 1, 2, 3, 4, time.UTC), - }, - }, - } - } - - t.Run("owners are required", func(t *testing.T) { - uut := newValidAccessList() - uut.Spec.Owners = []Owner{} - - err 
:= uut.CheckAndSetDefaults() - require.Error(t, err) - require.Contains(t, err.Error(), "owners") - }) -} - func TestSelectNextReviewDate(t *testing.T) { t.Parallel() tests := []struct { name string + accessListTypes []Type frequency ReviewFrequency dayOfMonth ReviewDayOfMonth currentReviewDate time.Time expected time.Time + expectedErr bool }{ { name: "one month, first day", + accessListTypes: []Type{Default, DeprecatedDynamic}, frequency: OneMonth, dayOfMonth: FirstDayOfMonth, currentReviewDate: time.Date(2023, 1, 1, 0, 0, 0, 0, time.UTC), expected: time.Date(2023, 2, 1, 0, 0, 0, 0, time.UTC), + expectedErr: false, }, { name: "one month, fifteenth day", + accessListTypes: []Type{Default, DeprecatedDynamic}, frequency: OneMonth, dayOfMonth: FifteenthDayOfMonth, currentReviewDate: time.Date(2023, 1, 1, 0, 0, 0, 0, time.UTC), expected: time.Date(2023, 2, 15, 0, 0, 0, 0, time.UTC), + expectedErr: false, }, { name: "one month, last day", + accessListTypes: []Type{Default, DeprecatedDynamic}, frequency: OneMonth, dayOfMonth: LastDayOfMonth, currentReviewDate: time.Date(2023, 1, 1, 0, 0, 0, 0, time.UTC), expected: time.Date(2023, 2, 28, 0, 0, 0, 0, time.UTC), + expectedErr: false, }, { name: "six months, last day", + accessListTypes: []Type{Default, DeprecatedDynamic}, frequency: SixMonths, dayOfMonth: LastDayOfMonth, currentReviewDate: time.Date(2023, 1, 1, 0, 0, 0, 0, time.UTC), expected: time.Date(2023, 7, 31, 0, 0, 0, 0, time.UTC), + expectedErr: false, + }, + { + name: "six months, last day", + accessListTypes: []Type{Static, SCIM, "__test_unknown__"}, + frequency: SixMonths, + dayOfMonth: LastDayOfMonth, + currentReviewDate: time.Time{}, + expected: time.Time{}, + expectedErr: true, + }, + { + name: "six months, last day", + accessListTypes: []Type{Static, SCIM, "__test_unknown__"}, + frequency: SixMonths, + dayOfMonth: LastDayOfMonth, + currentReviewDate: time.Date(2023, 1, 1, 0, 0, 0, 0, time.UTC), + expected: time.Time{}, + expectedErr: true, }, } for _, test 
:= range tests { test := test t.Run(test.name, func(t *testing.T) { - t.Parallel() - accessList := AccessList{} - accessList.Spec.Audit.NextAuditDate = test.currentReviewDate - accessList.Spec.Audit.Recurrence = Recurrence{ - Frequency: test.frequency, - DayOfMonth: test.dayOfMonth, + for _, typ := range test.accessListTypes { + t.Run(fmt.Sprintf("type=%q", typ), func(t *testing.T) { + accessList := AccessList{} + accessList.Spec.Type = typ + accessList.Spec.Audit.NextAuditDate = test.currentReviewDate + accessList.Spec.Audit.Recurrence = Recurrence{ + Frequency: test.frequency, + DayOfMonth: test.dayOfMonth, + } + nextReviewDate, err := accessList.SelectNextReviewDate() + if test.expectedErr { + require.Error(t, err) + } else { + require.NoError(t, err) + require.Equal(t, test.expected, nextReviewDate) + } + }) } - require.Equal(t, test.expected, accessList.SelectNextReviewDate()) }) } } @@ -381,3 +393,224 @@ func TestAccessList_setInitialReviewDate(t *testing.T) { }) } } + +func TestEqualIgnoreEphemeralFields(t *testing.T) { + t.Parallel() + + baseTime := time.Now() + createAccessList := func(name string) *AccessList { + al, err := NewAccessList( + header.Metadata{ + Name: name, + Revision: "rev1", + }, + Spec{ + Title: "Test Access List", + Description: "Test description", + Owners: []Owner{ + { + Name: "owner1", + Description: "First owner", + IneligibleStatus: "ineligible-reason", + MembershipKind: MembershipKindUser, + }, + { + Name: "owner2", + Description: "Second owner", + IneligibleStatus: "another-reason", + MembershipKind: MembershipKindUser, + }, + }, + Audit: Audit{ + NextAuditDate: baseTime, + Recurrence: Recurrence{ + Frequency: SixMonths, + DayOfMonth: FirstDayOfMonth, + }, + Notifications: Notifications{ + Start: 14 * 24 * time.Hour, + }, + }, + MembershipRequires: Requires{ + Roles: []string{"role1"}, + Traits: map[string][]string{"trait1": {"value1"}}, + }, + OwnershipRequires: Requires{ + Roles: []string{"owner-role"}, + Traits: 
map[string][]string{"owner-trait": {"owner-value"}}, + }, + Grants: Grants{ + Roles: []string{"granted-role"}, + Traits: map[string][]string{"granted-trait": {"granted-value"}}, + }, + OwnerGrants: Grants{ + Roles: []string{"owner-granted-role"}, + Traits: map[string][]string{"owner-granted-trait": {"owner-granted-value"}}, + }, + }, + ) + require.NoError(t, err) + + al.Status = Status{ + OwnerOf: []string{"list1", "list2"}, + MemberOf: []string{"list3", "list4"}, + } + + return al + } + + t.Run("identical access lists are equal", func(t *testing.T) { + al1 := createAccessList("test1") + al2 := createAccessList("test1") + + result := EqualAccessLists(al1, al2, WithIgnoreEphemeralFields()) + require.True(t, result) + }) + + t.Run("default behavior does not mutate originals", func(t *testing.T) { + al1 := createAccessList("test1") + al2 := createAccessList("test1") + + originalRevision := al1.Metadata.Revision + originalStatus := al1.Status + originalIneligibleStatus := al1.Spec.Owners[0].IneligibleStatus + + EqualAccessLists(al1, al2, WithIgnoreEphemeralFields()) + + // Verify original values are preserved + require.Equal(t, originalRevision, al1.Metadata.Revision) + require.Equal(t, originalStatus, al1.Status) + require.Equal(t, originalIneligibleStatus, al1.Spec.Owners[0].IneligibleStatus) + }) + + t.Run("WithSkipClone mutates originals", func(t *testing.T) { + al1 := createAccessList("test1") + al2 := createAccessList("test1") + + originalRevision := al1.Metadata.Revision + originalIneligibleStatus := al1.Spec.Owners[0].IneligibleStatus + + EqualAccessLists(al1, al2, WithSkipClone(), WithIgnoreEphemeralFields()) + + // Verify values were mutated + require.Empty(t, al1.Metadata.Revision) + require.NotEqual(t, originalRevision, al1.Metadata.Revision) + require.Empty(t, al1.Spec.Owners[0].IneligibleStatus) + require.NotEqual(t, originalIneligibleStatus, al1.Spec.Owners[0].IneligibleStatus) + }) + + t.Run("different revisions are ignored", func(t *testing.T) { + al1 
:= createAccessList("test1") + al2 := createAccessList("test1") + al2.Metadata.Revision = "different-revision" + + result := EqualAccessLists(al1, al2, WithIgnoreEphemeralFields()) + require.True(t, result) + }) + + t.Run("different status is ignored", func(t *testing.T) { + al1 := createAccessList("test1") + al2 := createAccessList("test1") + al2.Status.OwnerOf = []string{"different", "lists"} + al2.Status.MemberOf = []string{"other", "lists"} + + result := EqualAccessLists(al1, al2, WithIgnoreEphemeralFields()) + require.True(t, result) + }) + + t.Run("different owner ineligible status is ignored", func(t *testing.T) { + al1 := createAccessList("test1") + al2 := createAccessList("test1") + al2.Spec.Owners[0].IneligibleStatus = "completely-different-reason" + al2.Spec.Owners[1].IneligibleStatus = "" + + result := EqualAccessLists(al1, al2, WithIgnoreEphemeralFields()) + require.True(t, result) + }) + + t.Run("all ignored fields different at once", func(t *testing.T) { + al1 := createAccessList("test1") + al2 := createAccessList("test1") + + // Change all ignored fields + al2.Metadata.Revision = "rev999" + al2.Status.OwnerOf = []string{"completely", "different", "lists"} + al2.Spec.Owners[0].IneligibleStatus = "new-reason-1" + al2.Spec.Owners[1].IneligibleStatus = "new-reason-2" + + result := EqualAccessLists(al1, al2, WithIgnoreEphemeralFields()) + require.True(t, result) + }) + + t.Run("different name causes inequality", func(t *testing.T) { + al1 := createAccessList("test1") + al2 := createAccessList("test2") + + result := EqualAccessLists(al1, al2, WithIgnoreEphemeralFields()) + require.False(t, result) + }) + + t.Run("different title causes inequality", func(t *testing.T) { + al1 := createAccessList("test1") + al2 := createAccessList("test1") + al2.Spec.Title = "Different Title" + + require.False(t, EqualAccessLists(al1, al2, WithIgnoreEphemeralFields())) + }) + + t.Run("different description causes inequality", func(t *testing.T) { + al1 := 
createAccessList("test1") + al2 := createAccessList("test1") + al2.Spec.Description = "Different description" + + require.False(t, EqualAccessLists(al1, al2, WithIgnoreEphemeralFields())) + }) + + t.Run("different owner name causes inequality", func(t *testing.T) { + al1 := createAccessList("test1") + al2 := createAccessList("test1") + al2.Spec.Owners[0].Name = "different-owner" + + require.False(t, EqualAccessLists(al1, al2, WithIgnoreEphemeralFields())) + }) + + t.Run("different owner description causes inequality", func(t *testing.T) { + al1 := createAccessList("test1") + al2 := createAccessList("test1") + al2.Spec.Owners[0].Description = "Different owner description" + + require.False(t, EqualAccessLists(al1, al2, WithIgnoreEphemeralFields())) + }) + + t.Run("different audit settings cause inequality", func(t *testing.T) { + al1 := createAccessList("test1") + al2 := createAccessList("test1") + al2.Spec.Audit.Recurrence.Frequency = OneMonth + + require.False(t, EqualAccessLists(al1, al2, WithIgnoreEphemeralFields())) + }) + + t.Run("different grants cause inequality", func(t *testing.T) { + al1 := createAccessList("test1") + al2 := createAccessList("test1") + al2.Spec.Grants.Roles = []string{"different-role"} + + require.False(t, EqualAccessLists(al1, al2, WithIgnoreEphemeralFields())) + }) + + t.Run("different membership requires cause inequality", func(t *testing.T) { + al1 := createAccessList("test1") + al2 := createAccessList("test1") + al2.Spec.MembershipRequires.Roles = []string{"different-role"} + + require.False(t, EqualAccessLists(al1, al2, WithIgnoreEphemeralFields())) + }) + + t.Run("one nil access list", func(t *testing.T) { + al1 := createAccessList("test1") + var al2 *AccessList + + require.False(t, EqualAccessLists(al1, al2, WithIgnoreEphemeralFields())) + require.False(t, EqualAccessLists(al2, al1, WithIgnoreEphemeralFields())) + }) +} diff --git a/api/types/accesslist/convert/v1/accesslist.go b/api/types/accesslist/convert/v1/accesslist.go index 
057e0e8dd15d6..e94862b641ab2 100644 --- a/api/types/accesslist/convert/v1/accesslist.go +++ b/api/types/accesslist/convert/v1/accesslist.go @@ -33,86 +33,26 @@ type AccessListOption func(*accesslist.AccessList) // FromProto converts a v1 access list into an internal access list object. func FromProto(msg *accesslistv1.AccessList, opts ...AccessListOption) (*accesslist.AccessList, error) { - if msg == nil { - return nil, trace.BadParameter("access list message is nil") - } - - if msg.Spec == nil { + spec := msg.GetSpec() + if spec == nil { return nil, trace.BadParameter("spec is missing") } - if msg.Spec.Audit == nil { - return nil, trace.BadParameter("audit is missing") - } - if msg.Spec.MembershipRequires == nil { - return nil, trace.BadParameter("membershipRequires is missing") - } - if msg.Spec.OwnershipRequires == nil { - return nil, trace.BadParameter("ownershipRequires is missing") - } - if msg.Spec.Grants == nil { - return nil, trace.BadParameter("grants is missing") - } - - var recurrence accesslist.Recurrence - if msg.Spec.Audit.Recurrence != nil { - recurrence.Frequency = accesslist.ReviewFrequency(msg.Spec.Audit.Recurrence.Frequency) - recurrence.DayOfMonth = accesslist.ReviewDayOfMonth(msg.Spec.Audit.Recurrence.DayOfMonth) - } - - var notifications accesslist.Notifications - if msg.Spec.Audit.Notifications != nil { - if msg.Spec.Audit.Notifications.Start != nil { - notifications.Start = msg.Spec.Audit.Notifications.Start.AsDuration() - } - } - - owners := make([]accesslist.Owner, len(msg.Spec.Owners)) - for i, owner := range msg.Spec.Owners { - owners[i] = FromOwnerProto(owner) - // Set IneligibleStatus to empty as default. - // Must provide as options to set it with the provided value. 
- owners[i].IneligibleStatus = "" - } - var ownerGrants accesslist.Grants - if msg.Spec.OwnerGrants != nil { - ownerGrants.Roles = msg.Spec.OwnerGrants.Roles - if msg.Spec.OwnerGrants.Traits != nil { - ownerGrants.Traits = traitv1.FromProto(msg.Spec.OwnerGrants.Traits) - } - } + metadata := headerv1.FromMetadataProto(msg.GetHeader().GetMetadata()) - // We map the zero protobuf time (nil) to the zero go time. - // NewAccessList will handle this properly and set a time in the future - // based on the recurrence rules. - var nextAuditDate time.Time - if msg.Spec.Audit.NextAuditDate != nil { - nextAuditDate = msg.Spec.Audit.NextAuditDate.AsTime() + accessListSpec := accesslist.Spec{ + Type: accesslist.Type(msg.GetSpec().GetType()), + Title: spec.GetTitle(), + Description: spec.GetDescription(), + Owners: convertOwnersFromProto(spec.GetOwners()), + Audit: convertAuditFromProto(spec.GetAudit()), + MembershipRequires: convertRequiresFromProto(spec.GetMembershipRequires()), + OwnershipRequires: convertRequiresFromProto(spec.GetOwnershipRequires()), + Grants: convertGrantsFromProto(spec.GetGrants()), + OwnerGrants: convertGrantsFromProto(spec.GetOwnerGrants()), } - accessList, err := accesslist.NewAccessList(headerv1.FromMetadataProto(msg.Header.Metadata), accesslist.Spec{ - Title: msg.Spec.Title, - Description: msg.Spec.Description, - Owners: owners, - Audit: accesslist.Audit{ - NextAuditDate: nextAuditDate, - Recurrence: recurrence, - Notifications: notifications, - }, - MembershipRequires: accesslist.Requires{ - Roles: msg.Spec.MembershipRequires.Roles, - Traits: traitv1.FromProto(msg.Spec.MembershipRequires.Traits), - }, - OwnershipRequires: accesslist.Requires{ - Roles: msg.Spec.OwnershipRequires.Roles, - Traits: traitv1.FromProto(msg.Spec.OwnershipRequires.Traits), - }, - Grants: accesslist.Grants{ - Roles: msg.Spec.Grants.Roles, - Traits: traitv1.FromProto(msg.Spec.Grants.Traits), - }, - OwnerGrants: ownerGrants, - }) + accessList, err := 
accesslist.NewAccessList(metadata, accessListSpec) if err != nil { return nil, trace.Wrap(err) } @@ -121,7 +61,6 @@ func FromProto(msg *accesslistv1.AccessList, opts ...AccessListOption) (*accessl if status := fromStatusProto(msg); status != nil { accessList.Status = *status } - for _, opt := range opts { opt(accessList) } @@ -131,66 +70,83 @@ func FromProto(msg *accesslistv1.AccessList, opts ...AccessListOption) (*accessl // ToProto converts an internal access list into a v1 access list object. func ToProto(accessList *accesslist.AccessList) *accesslistv1.AccessList { - owners := make([]*accesslistv1.AccessListOwner, len(accessList.Spec.Owners)) - for i, owner := range accessList.Spec.Owners { - owners[i] = ToOwnerProto(owner) + if accessList == nil { + return nil } - var ownerGrants *accesslistv1.AccessListGrants - if len(accessList.Spec.OwnerGrants.Roles) > 0 { - ownerGrants = &accesslistv1.AccessListGrants{ - Roles: accessList.Spec.OwnerGrants.Roles, - } + return &accesslistv1.AccessList{ + Header: headerv1.ToResourceHeaderProto(accessList.ResourceHeader), + Spec: &accesslistv1.AccessListSpec{ + Type: string(accessList.Spec.Type), + Title: accessList.Spec.Title, + Description: accessList.Spec.Description, + Owners: convertOwnersToProto(accessList.Spec.Owners), + Audit: convertAuditToProto(accessList.Spec.Audit), + MembershipRequires: convertRequiresToProto(accessList.Spec.MembershipRequires), + OwnershipRequires: convertRequiresToProto(accessList.Spec.OwnershipRequires), + Grants: convertGrantsToProto(accessList.Spec.Grants), + OwnerGrants: convertGrantsToProto(accessList.Spec.OwnerGrants), + }, + Status: convertStatusToProto(&accessList.Status), } +} - if len(accessList.Spec.OwnerGrants.Traits) > 0 { - if ownerGrants == nil { - ownerGrants = &accesslistv1.AccessListGrants{} - } +func convertAuditFromProto(audit *accesslistv1.AccessListAudit) accesslist.Audit { + if audit == nil { + return accesslist.Audit{} + } + return accesslist.Audit{ + NextAuditDate: 
convertTimeFromProto(audit.GetNextAuditDate()), + Recurrence: accesslist.Recurrence{ + Frequency: accesslist.ReviewFrequency(audit.GetRecurrence().GetFrequency()), + DayOfMonth: accesslist.ReviewDayOfMonth(audit.GetRecurrence().GetDayOfMonth()), + }, + Notifications: convertNotificationsFromProto(audit.GetNotifications()), + } +} - ownerGrants.Traits = traitv1.ToProto(accessList.Spec.OwnerGrants.Traits) +func convertTimeFromProto(t *timestamppb.Timestamp) time.Time { + if t == nil { + return time.Time{} } + return t.AsTime() +} - // We map the zero go time to the zero protobuf time (nil). - var nextAuditDate *timestamppb.Timestamp - if !accessList.Spec.Audit.NextAuditDate.IsZero() { - nextAuditDate = timestamppb.New(accessList.Spec.Audit.NextAuditDate) +func convertNotificationsFromProto(notifications *accesslistv1.Notifications) accesslist.Notifications { + if notifications.GetStart() == nil { + return accesslist.Notifications{} + } + return accesslist.Notifications{ + Start: notifications.GetStart().AsDuration(), } +} - proto := &accesslistv1.AccessList{ - Header: headerv1.ToResourceHeaderProto(accessList.ResourceHeader), - Spec: &accesslistv1.AccessListSpec{ - Title: accessList.Spec.Title, - Description: accessList.Spec.Description, - Owners: owners, - Audit: &accesslistv1.AccessListAudit{ - NextAuditDate: nextAuditDate, - Recurrence: &accesslistv1.Recurrence{ - Frequency: accesslistv1.ReviewFrequency(accessList.Spec.Audit.Recurrence.Frequency), - DayOfMonth: accesslistv1.ReviewDayOfMonth(accessList.Spec.Audit.Recurrence.DayOfMonth), - }, - Notifications: &accesslistv1.Notifications{ - Start: durationpb.New(accessList.Spec.Audit.Notifications.Start), - }, - }, - MembershipRequires: &accesslistv1.AccessListRequires{ - Roles: accessList.Spec.MembershipRequires.Roles, - Traits: traitv1.ToProto(accessList.Spec.MembershipRequires.Traits), - }, - OwnershipRequires: &accesslistv1.AccessListRequires{ - Roles: accessList.Spec.OwnershipRequires.Roles, - Traits: 
traitv1.ToProto(accessList.Spec.OwnershipRequires.Traits), - }, - Grants: &accesslistv1.AccessListGrants{ - Roles: accessList.Spec.Grants.Roles, - Traits: traitv1.ToProto(accessList.Spec.Grants.Traits), - }, - OwnerGrants: ownerGrants, - }, - Status: toStatusProto(&accessList.Status), +func convertRequiresFromProto(requires *accesslistv1.AccessListRequires) accesslist.Requires { + if requires == nil { + return accesslist.Requires{} + } + return accesslist.Requires{ + Roles: requires.GetRoles(), + Traits: traitv1.FromProto(requires.GetTraits()), + } +} +func convertOwnersFromProto(protoOwners []*accesslistv1.AccessListOwner) []accesslist.Owner { + owners := make([]accesslist.Owner, len(protoOwners)) + for i, owner := range protoOwners { + owners[i] = FromOwnerProto(owner) + owners[i].IneligibleStatus = "" // default, overridden via option if needed } + return owners +} - return proto +func convertGrantsFromProto(protoGrants *accesslistv1.AccessListGrants) accesslist.Grants { + if protoGrants == nil { + return accesslist.Grants{} + } + return accesslist.Grants{ + Roles: protoGrants.GetRoles(), + Traits: traitv1.FromProto(protoGrants.GetTraits()), + } } // ToOwnerProto converts an internal access list owner into a v1 access list owner object. @@ -219,16 +175,19 @@ func ToOwnerProto(owner accesslist.Owner) *accesslistv1.AccessListOwner { // FromOwnerProto converts a v1 access list owner into an internal access list owner object. 
func FromOwnerProto(protoOwner *accesslistv1.AccessListOwner) accesslist.Owner { + if protoOwner == nil { + return accesslist.Owner{} + } ineligibleStatus := "" - if protoOwner.IneligibleStatus != accesslistv1.IneligibleStatus_INELIGIBLE_STATUS_UNSPECIFIED { - ineligibleStatus = protoOwner.IneligibleStatus.String() + if protoOwner.GetIneligibleStatus() != accesslistv1.IneligibleStatus_INELIGIBLE_STATUS_UNSPECIFIED { + ineligibleStatus = protoOwner.GetIneligibleStatus().String() } return accesslist.Owner{ - Name: protoOwner.Name, - Description: protoOwner.Description, + Name: protoOwner.GetName(), + Description: protoOwner.GetDescription(), IneligibleStatus: ineligibleStatus, - MembershipKind: protoOwner.MembershipKind.String(), + MembershipKind: protoOwner.GetMembershipKind().String(), } } @@ -249,60 +208,43 @@ func WithOwnersIneligibleStatusField(protoOwners []*accesslistv1.AccessListOwner } } -func toStatusProto(status *accesslist.Status) *accesslistv1.AccessListStatus { +func convertStatusToProto(status *accesslist.Status) *accesslistv1.AccessListStatus { if status == nil { return nil } - protoStatus := &accesslistv1.AccessListStatus{} - - if status.MemberCount != nil { - protoStatus.MemberCount = new(uint32) - *protoStatus.MemberCount = *status.MemberCount - } - if status.MemberListCount != nil { - protoStatus.MemberListCount = new(uint32) - *protoStatus.MemberListCount = *status.MemberListCount - } - - if status.OwnerOf != nil { - protoStatus.OwnerOf = status.OwnerOf + return &accesslistv1.AccessListStatus{ + MemberCount: copyPointer(status.MemberCount), + MemberListCount: copyPointer(status.MemberListCount), + OwnerOf: status.OwnerOf, + MemberOf: status.MemberOf, + CurrentUserAssignments: toCurrentUserAssignmentsProto(status.CurrentUserAssignments), + UserAssignments: toUserAssignmentsProto(status.UserAssignments), } - if status.MemberOf != nil { - protoStatus.MemberOf = status.MemberOf - } - - protoStatus.CurrentUserAssignments = 
toCurrentUserAssignmentsProto(status.CurrentUserAssignments) - - return protoStatus } func fromStatusProto(msg *accesslistv1.AccessList) *accesslist.Status { - if msg.Status == nil { + protoStatus := msg.GetStatus() + if protoStatus == nil { return nil } - status := &accesslist.Status{} - - if msg.Status.MemberCount != nil { - status.MemberCount = new(uint32) - *status.MemberCount = *msg.Status.MemberCount - } - if msg.Status.MemberListCount != nil { - status.MemberListCount = new(uint32) - *status.MemberListCount = *msg.Status.MemberListCount + return &accesslist.Status{ + MemberCount: copyPointer(protoStatus.MemberCount), + MemberListCount: copyPointer(protoStatus.MemberListCount), + OwnerOf: protoStatus.GetOwnerOf(), + MemberOf: protoStatus.GetMemberOf(), + CurrentUserAssignments: fromCurrentUserAssignmentsProto(protoStatus.GetCurrentUserAssignments()), + UserAssignments: fromUserAssignmentsProto(protoStatus.GetUserAssignments()), } +} - if msg.Status.OwnerOf != nil { - status.OwnerOf = msg.Status.OwnerOf - } - if msg.Status.MemberOf != nil { - status.MemberOf = msg.Status.MemberOf +func copyPointer[T any](val *T) *T { + if val == nil { + return nil } - - status.CurrentUserAssignments = fromCurrentUserAssignmentsProto(msg.Status.CurrentUserAssignments) - - return status + out := *val + return &out } func toCurrentUserAssignmentsProto(assignments *accesslist.CurrentUserAssignments) *accesslistv1.CurrentUserAssignments { @@ -315,12 +257,75 @@ func toCurrentUserAssignmentsProto(assignments *accesslist.CurrentUserAssignment } } -func fromCurrentUserAssignmentsProto(assignments *accesslistv1.CurrentUserAssignments) *accesslist.CurrentUserAssignments { +func toUserAssignmentsProto(assignments *accesslist.UserAssignments) *accesslistv1.UserAssignments { if assignments == nil { return nil } - return &accesslist.CurrentUserAssignments{ + return &accesslistv1.UserAssignments{ OwnershipType: assignments.OwnershipType, MembershipType: assignments.MembershipType, } } + +func 
fromCurrentUserAssignmentsProto(assignments *accesslistv1.CurrentUserAssignments) *accesslist.CurrentUserAssignments { + if assignments == nil { + return nil + } + return &accesslist.CurrentUserAssignments{ + OwnershipType: assignments.GetOwnershipType(), + MembershipType: assignments.GetMembershipType(), + } +} + +func fromUserAssignmentsProto(assignments *accesslistv1.UserAssignments) *accesslist.UserAssignments { + if assignments == nil { + return nil + } + return &accesslist.UserAssignments{ + OwnershipType: assignments.GetOwnershipType(), + MembershipType: assignments.GetMembershipType(), + } +} +func convertOwnersToProto(owners []accesslist.Owner) []*accesslistv1.AccessListOwner { + protoOwners := make([]*accesslistv1.AccessListOwner, len(owners)) + for i, owner := range owners { + protoOwners[i] = ToOwnerProto(owner) + } + return protoOwners +} + +func convertGrantsToProto(grants accesslist.Grants) *accesslistv1.AccessListGrants { + return &accesslistv1.AccessListGrants{ + Roles: grants.Roles, + Traits: traitv1.ToProto(grants.Traits), + } +} + +func convertAuditToProto(audit accesslist.Audit) *accesslistv1.AccessListAudit { + if audit == (accesslist.Audit{}) { + return nil + } + + var nextAuditDate *timestamppb.Timestamp + if !audit.NextAuditDate.IsZero() { + nextAuditDate = timestamppb.New(audit.NextAuditDate) + } + + return &accesslistv1.AccessListAudit{ + NextAuditDate: nextAuditDate, + Recurrence: &accesslistv1.Recurrence{ + Frequency: accesslistv1.ReviewFrequency(audit.Recurrence.Frequency), + DayOfMonth: accesslistv1.ReviewDayOfMonth(audit.Recurrence.DayOfMonth), + }, + Notifications: &accesslistv1.Notifications{ + Start: durationpb.New(audit.Notifications.Start), + }, + } +} + +func convertRequiresToProto(requires accesslist.Requires) *accesslistv1.AccessListRequires { + return &accesslistv1.AccessListRequires{ + Roles: requires.Roles, + Traits: traitv1.ToProto(requires.Traits), + } +} diff --git a/api/types/accesslist/convert/v1/accesslist_test.go 
b/api/types/accesslist/convert/v1/accesslist_test.go index 9af8b21f1c9e8..cb0b124d8bd4b 100644 --- a/api/types/accesslist/convert/v1/accesslist_test.go +++ b/api/types/accesslist/convert/v1/accesslist_test.go @@ -17,18 +17,25 @@ limitations under the License. package v1 import ( + stdcmp "cmp" "testing" "time" "github.com/google/go-cmp/cmp" "github.com/stretchr/testify/require" + "google.golang.org/protobuf/types/known/durationpb" + "google.golang.org/protobuf/types/known/timestamppb" accesslistv1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/accesslist/v1" + v1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/header/v1" + "github.com/gravitational/teleport/api/types" "github.com/gravitational/teleport/api/types/accesslist" "github.com/gravitational/teleport/api/types/header" ) func TestWithOwnersIneligibleStatusField(t *testing.T) { + t.Parallel() + proto := []*accesslistv1.AccessListOwner{ { Name: "expired", @@ -85,17 +92,70 @@ func TestWithOwnersIneligibleStatusField(t *testing.T) { } func TestRoundtrip(t *testing.T) { - accessList := newAccessList(t, "access-list") - accessList.ResourceHeader.SetSubKind("access-list-subkind") + t.Parallel() - converted, err := FromProto(ToProto(accessList)) - require.NoError(t, err) + type testCase struct { + name string + modificationFn func(*accesslist.AccessList) + } + + for _, tc := range []testCase{ + { + name: "no-modifications", + modificationFn: func(accessList *accesslist.AccessList) {}, + }, + { + name: "with-subkind", + modificationFn: func(accessList *accesslist.AccessList) { + accessList.ResourceHeader.SetSubKind("access-list-subkind") + }, + }, + { + name: "deprecated-dynamic-type", + modificationFn: func(accessList *accesslist.AccessList) { + accessList.Spec.Type = accesslist.DeprecatedDynamic + }, + }, + { + name: "default-type", + modificationFn: func(accessList *accesslist.AccessList) { + accessList.Spec.Type = accesslist.Default + }, + }, + { + name: "implicit-dynamic-type", + 
modificationFn: func(accessList *accesslist.AccessList) { + accessList.Spec.Type = "" + }, + }, + { + name: "static-type", + modificationFn: func(accessList *accesslist.AccessList) { + accessList.Spec.Type = accesslist.Static + accessList.Spec.Audit = accesslist.Audit{} + }, + }, + } { + t.Run(tc.name, func(t *testing.T) { + accessList := newAccessList(t, "access-list") + tc.modificationFn(accessList) + + converted, err := FromProto(ToProto(accessList)) + require.NoError(t, err) - require.Empty(t, cmp.Diff(accessList, converted)) + if accessList.Spec.Type == accesslist.DeprecatedDynamic { + accessList.Spec.Type = accesslist.Default + } + + require.Empty(t, cmp.Diff(accessList, converted)) + }) + } } // Make sure that we don't panic if any of the message fields are missing. func TestFromProtoNils(t *testing.T) { + t.Parallel() + t.Run("spec", func(t *testing.T) { accessList := ToProto(newAccessList(t, "access-list")) accessList.Spec = nil @@ -109,7 +169,7 @@ func TestFromProtoNils(t *testing.T) { accessList.Spec.Owners = nil _, err := FromProto(accessList) - require.Error(t, err) + require.NoError(t, err) }) t.Run("audit", func(t *testing.T) { @@ -117,7 +177,7 @@ func TestFromProtoNils(t *testing.T) { accessList.Spec.Audit = nil _, err := FromProto(accessList) - require.Error(t, err) + require.NoError(t, err) }) t.Run("recurrence", func(t *testing.T) { @@ -136,30 +196,6 @@ func TestFromProtoNils(t *testing.T) { require.NoError(t, err) }) - t.Run("membership-requires", func(t *testing.T) { - accessList := ToProto(newAccessList(t, "access-list")) - accessList.Spec.MembershipRequires = nil - - _, err := FromProto(accessList) - require.Error(t, err) - }) - - t.Run("ownership-requires", func(t *testing.T) { - accessList := ToProto(newAccessList(t, "access-list")) - accessList.Spec.OwnershipRequires = nil - - _, err := FromProto(accessList) - require.Error(t, err) - }) - - t.Run("grants", func(t *testing.T) { - accessList := ToProto(newAccessList(t, "access-list")) - 
accessList.Spec.Grants = nil - - _, err := FromProto(accessList) - require.Error(t, err) - }) - t.Run("owner_grants", func(t *testing.T) { msg := ToProto(newAccessList(t, "access-list")) msg.Spec.OwnerGrants = nil @@ -183,6 +219,20 @@ func TestFromProtoNils(t *testing.T) { _, err := FromProto(msg) require.NoError(t, err) }) + + t.Run("requires is not set to nil", func(t *testing.T) { + acl := newAccessList(t, "access-list") + acl.Spec.MembershipRequires = accesslist.Requires{} + acl.Spec.OwnershipRequires = accesslist.Requires{} + msg := ToProto(acl) + // Ensure Requires fields are not set to nil for backward compatibility. + // Older implementations of FromProto (e.g., in Teleport v16) return an error when these fields are nil: + // https://github.com/gravitational/teleport/blob/branch/v16/api/types/accesslist/convert/v1/accesslist.go#L46 + // Since FromProto is invoked client-side (e.g., by the Terraform provider), + // setting Requires to nil could introduce breaking changes for existing clients.
+ require.NotNil(t, msg.Spec.MembershipRequires) + require.NotNil(t, msg.Spec.OwnershipRequires) + }) } func newAccessList(t *testing.T, name string) *accesslist.AccessList { @@ -271,3 +321,186 @@ func TestNextAuditDateZeroTime(t *testing.T) { require.Nil(t, convertedTwice.Spec.Audit.NextAuditDate) } + +func TestConvAccessList(t *testing.T) { + t.Parallel() + + newAccessList := func(modifyFn func(*accesslistv1.AccessList)) *accesslistv1.AccessList { + al := &accesslistv1.AccessList{ + Header: &v1.ResourceHeader{ + Version: "v1", + Kind: types.KindAccessList, + Metadata: &v1.Metadata{ + Name: "access-list", + }, + }, + Spec: &accesslistv1.AccessListSpec{ + Title: "test access list", + Description: "test description", + OwnershipRequires: &accesslistv1.AccessListRequires{}, + MembershipRequires: &accesslistv1.AccessListRequires{}, + Owners: []*accesslistv1.AccessListOwner{ + { + Name: "test-user1", + }, + }, + Audit: &accesslistv1.AccessListAudit{ + Recurrence: &accesslistv1.Recurrence{ + Frequency: 1, + DayOfMonth: 1, + }, + NextAuditDate: &timestamppb.Timestamp{ + Seconds: 6, + Nanos: 1, + }, + Notifications: &accesslistv1.Notifications{ + Start: &durationpb.Duration{ + Seconds: 1209600, + }, + }, + }, + Grants: &accesslistv1.AccessListGrants{ + Roles: []string{"role1"}, + }, + }, + Status: &accesslistv1.AccessListStatus{}, + } + if modifyFn != nil { + modifyFn(al) + } + return al + } + + tests := []struct { + name string + input *accesslistv1.AccessList + }{ + { + name: "basic conversion", + input: newAccessList(nil), + }, + { + name: "nil grants", + input: newAccessList(func(al *accesslistv1.AccessList) { + al.Spec.Grants = nil + }), + }, + { + name: "SCIM, Static access list allows for empty owners", + input: newAccessList(func(al *accesslistv1.AccessList) { + al.Spec.Type = string(accesslist.SCIM) + al.Spec.Owners = []*accesslistv1.AccessListOwner{} + }), + }, + { + name: "audit with only Recurrence.DayOfMonth set", + input: newAccessList(func(al
*accesslistv1.AccessList) { + al.Spec.Type = string(accesslist.SCIM) + al.Spec.Audit = &accesslistv1.AccessListAudit{ + Recurrence: &accesslistv1.Recurrence{ + DayOfMonth: accesslistv1.ReviewDayOfMonth_REVIEW_DAY_OF_MONTH_LAST, + }, + Notifications: &accesslistv1.Notifications{ + Start: &durationpb.Duration{ + Seconds: 12345, + }, + }, + } + }), + }, + { + name: "audit with only Recurrence.Frequency and Notifications.Start set", + input: newAccessList(func(al *accesslistv1.AccessList) { + al.Spec.Type = string(accesslist.SCIM) + al.Spec.Audit = &accesslistv1.AccessListAudit{ + Recurrence: &accesslistv1.Recurrence{ + Frequency: accesslistv1.ReviewFrequency_REVIEW_FREQUENCY_ONE_YEAR, + }, + Notifications: &accesslistv1.Notifications{ + Start: &durationpb.Duration{}, + }, + } + }), + }, + { + name: "scim-type", + input: newAccessList(func(al *accesslistv1.AccessList) { + al.Spec.Type = string(accesslist.SCIM) + }), + }, + { + name: "static-type", + input: newAccessList(func(al *accesslistv1.AccessList) { + al.Spec.Type = string(accesslist.Static) + }), + }, + { + name: "scim-type and zero audit", + input: newAccessList(func(al *accesslistv1.AccessList) { + al.Spec.Type = string(accesslist.SCIM) + al.Spec.Audit = &accesslistv1.AccessListAudit{ + NextAuditDate: &timestamppb.Timestamp{}, + Recurrence: &accesslistv1.Recurrence{ + Frequency: 0, + DayOfMonth: 0, + }, + Notifications: &accesslistv1.Notifications{ + Start: &durationpb.Duration{}, + }, + } + }), + }, + { + name: "static-type and partial audit", + input: newAccessList(func(al *accesslistv1.AccessList) { + al.Spec.Type = string(accesslist.Static) + al.Spec.Audit = &accesslistv1.AccessListAudit{ + NextAuditDate: &timestamppb.Timestamp{}, + Recurrence: &accesslistv1.Recurrence{ + Frequency: 0, + DayOfMonth: 4, + }, + Notifications: &accesslistv1.Notifications{ + Start: &durationpb.Duration{}, + }, + } + }), + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + acl, err := FromProto(tt.input) + 
require.NoError(t, err) + + got := ToProto(acl) + + // See [Test_convertGrantsToProto_never_nil] for why that is. + tt.input.Spec.Grants = stdcmp.Or(tt.input.Spec.Grants, &accesslistv1.AccessListGrants{}) + tt.input.Spec.OwnerGrants = stdcmp.Or(tt.input.Spec.OwnerGrants, &accesslistv1.AccessListGrants{}) + + require.Equal(t, tt.input, got) + }) + } +} + +func Test_convertGrantsToProto_never_nil(t *testing.T) { + // We can't convert empty (owner) grants to nil because, when grants are + // not specified in Terraform, that causes it to error with: + // + // teleport_access_list.test_direct: Creating... + // ╷ + // │ Error: Provider produced inconsistent result after apply + // │ + // │ When applying changes to teleport_access_list.test_direct, provider "provider[\"terraform.releases.teleport.dev/gravitational/teleport\"]" produced + // │ an unexpected new value: .spec.grants: was cty.ObjectVal(map[string]cty.Value{"roles":cty.ListValEmpty(cty.String), + // │ "traits":cty.ListValEmpty(cty.Object(map[string]cty.Type{"key":cty.String, "values":cty.List(cty.String)}))}), but now null. + // │ + // │ This is a bug in the provider, which should be reported in the provider's own issue tracker.
+ // ╵ + // + // See https://github.com/gravitational/teleport/issues/58948 + emptyGrants := accesslist.Grants{} + require.NotNil(t, convertGrantsToProto(emptyGrants)) +} diff --git a/api/types/accesslist/convert/v1/member.go b/api/types/accesslist/convert/v1/member.go index 8fe6e596db781..78cb36143dfeb 100644 --- a/api/types/accesslist/convert/v1/member.go +++ b/api/types/accesslist/convert/v1/member.go @@ -18,11 +18,11 @@ package v1 import ( "github.com/gravitational/trace" - "google.golang.org/protobuf/types/known/timestamppb" accesslistv1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/accesslist/v1" "github.com/gravitational/teleport/api/types/accesslist" headerv1 "github.com/gravitational/teleport/api/types/header/convert/v1" + "github.com/gravitational/teleport/api/utils" ) type MemberOption func(*accesslist.AccessListMember) @@ -37,13 +37,13 @@ func FromMemberProto(msg *accesslistv1.Member, opts ...MemberOption) (*accesslis return nil, trace.BadParameter("spec is missing") } - member, err := accesslist.NewAccessListMember(headerv1.FromMetadataProto(msg.Header.Metadata), accesslist.AccessListMemberSpec{ - AccessList: msg.Spec.AccessList, - Name: msg.Spec.Name, - Joined: msg.Spec.Joined.AsTime(), - Expires: msg.Spec.Expires.AsTime(), - Reason: msg.Spec.Reason, - AddedBy: msg.Spec.AddedBy, + member, err := accesslist.NewAccessListMember(headerv1.FromMetadataProto(msg.GetHeader().GetMetadata()), accesslist.AccessListMemberSpec{ + AccessList: msg.GetSpec().GetAccessList(), + Name: msg.GetSpec().GetName(), + Joined: utils.TimeFromProto(msg.GetSpec().GetJoined()), + Expires: utils.TimeFromProto(msg.GetSpec().GetExpires()), + Reason: msg.GetSpec().GetReason(), + AddedBy: msg.GetSpec().GetAddedBy(), // Set it to empty as default. // Must provide as options to set it with the provided value. 
IneligibleStatus: "", @@ -90,8 +90,8 @@ func ToMemberProto(member *accesslist.AccessListMember) *accesslistv1.Member { Spec: &accesslistv1.MemberSpec{ AccessList: member.Spec.AccessList, Name: member.Spec.Name, - Joined: timestamppb.New(member.Spec.Joined), - Expires: timestamppb.New(member.Spec.Expires), + Joined: utils.TimeIntoProto(member.Spec.Joined), + Expires: utils.TimeIntoProto(member.Spec.Expires), Reason: member.Spec.Reason, AddedBy: member.Spec.AddedBy, IneligibleStatus: ineligibleStatus, diff --git a/api/types/accesslist/convert/v1/member_test.go b/api/types/accesslist/convert/v1/member_test.go index de44fb67be7c0..1b0e171ec8747 100644 --- a/api/types/accesslist/convert/v1/member_test.go +++ b/api/types/accesslist/convert/v1/member_test.go @@ -71,12 +71,12 @@ func TestMemberFromProtoNils(t *testing.T) { { name: "accesslist", mutate: func(m *accesslistv1.Member) { m.Spec.AccessList = "" }, - checkErr: require.Error, + checkErr: require.NoError, }, { name: "joined", mutate: func(m *accesslistv1.Member) { m.Spec.Joined = nil }, - checkErr: require.Error, + checkErr: require.NoError, }, { name: "expires", @@ -91,7 +91,7 @@ func TestMemberFromProtoNils(t *testing.T) { { name: "added by", mutate: func(m *accesslistv1.Member) { m.Spec.AddedBy = "" }, - checkErr: require.Error, + checkErr: require.NoError, }, } @@ -109,6 +109,43 @@ func TestMemberFromProtoNils(t *testing.T) { } } +func TestMemberTimeConversion(t *testing.T) { + t.Run("when zero converts to proto nil", func(t *testing.T) { + member := newAccessListMember(t, "test_member") + member.Spec.Expires = time.Time{} + member.Spec.Joined = time.Time{} + + proto := ToMemberProto(member) + require.Nil(t, proto.Spec.Expires) + require.Nil(t, proto.Spec.Joined) + }) + t.Run("when non-zero converts to proto time", func(t *testing.T) { + member := newAccessListMember(t, "test_member") + expires, err := time.Parse(time.RFC3339, "2025-10-09T15:00:00Z") + require.NoError(t, err) + joined, err := 
time.Parse(time.RFC3339, "2025-08-07T15:00:00Z") + require.NoError(t, err) + member.Spec.Expires = expires + member.Spec.Joined = joined + + proto := ToMemberProto(member) + require.NotNil(t, proto.Spec.Expires) + require.NotNil(t, proto.Spec.Joined) + require.True(t, proto.Spec.Expires.AsTime().Equal(expires)) + require.True(t, proto.Spec.Joined.AsTime().Equal(joined)) + }) + t.Run("proto nil converts to zero time", func(t *testing.T) { + proto := ToMemberProto(newAccessListMember(t, "test_member")) + proto.Spec.Expires = nil + proto.Spec.Joined = nil + + member, err := FromMemberProto(proto) + require.NoError(t, err) + require.True(t, member.Spec.Expires.IsZero()) + require.True(t, member.Spec.Joined.IsZero()) + }) +} + func newAccessListMember(t *testing.T, name string) *accesslist.AccessListMember { t.Helper() diff --git a/api/types/accesslist/convert/v1/review.go b/api/types/accesslist/convert/v1/review.go index 5854de458fbea..e5db9149669b3 100644 --- a/api/types/accesslist/convert/v1/review.go +++ b/api/types/accesslist/convert/v1/review.go @@ -34,35 +34,35 @@ func FromReviewProto(msg *accesslistv1.Review) (*accesslist.Review, error) { return nil, trace.BadParameter("access list review message is nil") } - if msg.Spec == nil { + if msg.GetSpec() == nil { return nil, trace.BadParameter("spec is missing") } // Manually check for the presence of the time so that we can be sure that the review date is // zero if the proto message's review date is nil. 
var reviewDate time.Time - if msg.Spec.ReviewDate != nil { - reviewDate = msg.Spec.ReviewDate.AsTime() + if msg.GetSpec().GetReviewDate() != nil { + reviewDate = msg.GetSpec().GetReviewDate().AsTime() } var reviewChanges accesslist.ReviewChanges - if msg.Spec.Changes != nil { - if msg.Spec.Changes.MembershipRequirementsChanged != nil { + if msg.GetSpec().GetChanges() != nil { + if msg.GetSpec().GetChanges().GetMembershipRequirementsChanged() != nil { reviewChanges.MembershipRequirementsChanged = &accesslist.Requires{ - Roles: msg.Spec.Changes.MembershipRequirementsChanged.Roles, - Traits: traitv1.FromProto(msg.Spec.Changes.MembershipRequirementsChanged.Traits), + Roles: msg.GetSpec().GetChanges().GetMembershipRequirementsChanged().GetRoles(), + Traits: traitv1.FromProto(msg.GetSpec().GetChanges().GetMembershipRequirementsChanged().GetTraits()), } } - reviewChanges.RemovedMembers = msg.Spec.Changes.RemovedMembers - reviewChanges.ReviewFrequencyChanged = accesslist.ReviewFrequency(msg.Spec.Changes.ReviewFrequencyChanged) - reviewChanges.ReviewDayOfMonthChanged = accesslist.ReviewDayOfMonth(msg.Spec.Changes.ReviewDayOfMonthChanged) + reviewChanges.RemovedMembers = msg.GetSpec().GetChanges().GetRemovedMembers() + reviewChanges.ReviewFrequencyChanged = accesslist.ReviewFrequency(msg.GetSpec().GetChanges().GetReviewFrequencyChanged()) + reviewChanges.ReviewDayOfMonthChanged = accesslist.ReviewDayOfMonth(msg.GetSpec().GetChanges().GetReviewDayOfMonthChanged()) } - member, err := accesslist.NewReview(headerv1.FromMetadataProto(msg.Header.Metadata), accesslist.ReviewSpec{ - AccessList: msg.Spec.AccessList, - Reviewers: msg.Spec.Reviewers, + member, err := accesslist.NewReview(headerv1.FromMetadataProto(msg.GetHeader().GetMetadata()), accesslist.ReviewSpec{ + AccessList: msg.GetSpec().GetAccessList(), + Reviewers: msg.GetSpec().GetReviewers(), ReviewDate: reviewDate, - Notes: msg.Spec.Notes, + Notes: msg.GetSpec().GetNotes(), Changes: reviewChanges, }) if err != nil { diff 
--git a/api/types/accesslist/derived.gen.go b/api/types/accesslist/derived.gen.go new file mode 100644 index 0000000000000..6e0377120c681 --- /dev/null +++ b/api/types/accesslist/derived.gen.go @@ -0,0 +1,530 @@ +// Code generated by goderive DO NOT EDIT. + +package accesslist + +import ( + header "github.com/gravitational/teleport/api/types/header" + "time" +) + +// deriveTeleportEqualAccessList returns whether this and that are equal. +func deriveTeleportEqualAccessList(this, that *AccessList) bool { + return (this == nil && that == nil) || + this != nil && that != nil && + deriveTeleportEqual(&this.ResourceHeader, &that.ResourceHeader) && + deriveTeleportEqual_(&this.Spec, &that.Spec) +} + +// deriveDeepCopyAccessList recursively copies the contents of src into dst. +func deriveDeepCopyAccessList(dst, src *AccessList) { + func() { + field := new(header.ResourceHeader) + deriveDeepCopy(field, &src.ResourceHeader) + dst.ResourceHeader = *field + }() + func() { + field := new(Spec) + deriveDeepCopy_(field, &src.Spec) + dst.Spec = *field + }() + func() { + field := new(Status) + deriveDeepCopy_1(field, &src.Status) + dst.Status = *field + }() +} + +// deriveDeepCopyAccessListMember recursively copies the contents of src into dst. +func deriveDeepCopyAccessListMember(dst, src *AccessListMember) { + func() { + field := new(header.ResourceHeader) + deriveDeepCopy(field, &src.ResourceHeader) + dst.ResourceHeader = *field + }() + func() { + field := new(AccessListMemberSpec) + deriveDeepCopy_2(field, &src.Spec) + dst.Spec = *field + }() +} + +// deriveDeepCopyReview recursively copies the contents of src into dst. +func deriveDeepCopyReview(dst, src *Review) { + func() { + field := new(header.ResourceHeader) + deriveDeepCopy(field, &src.ResourceHeader) + dst.ResourceHeader = *field + }() + func() { + field := new(ReviewSpec) + deriveDeepCopy_3(field, &src.Spec) + dst.Spec = *field + }() +} + +// deriveTeleportEqual returns whether this and that are equal. 
+func deriveTeleportEqual(this, that *header.ResourceHeader) bool { + return (this == nil && that == nil) || + this != nil && that != nil && + this.Kind == that.Kind && + this.SubKind == that.SubKind && + this.Version == that.Version && + deriveTeleportEqual_1(&this.Metadata, &that.Metadata) +} + +// deriveTeleportEqual_ returns whether this and that are equal. +func deriveTeleportEqual_(this, that *Spec) bool { + return (this == nil && that == nil) || + this != nil && that != nil && + this.Type == that.Type && + this.Title == that.Title && + this.Description == that.Description && + deriveTeleportEqual_2(this.Owners, that.Owners) && + deriveTeleportEqual_3(&this.Audit, &that.Audit) && + deriveTeleportEqual_4(&this.MembershipRequires, &that.MembershipRequires) && + deriveTeleportEqual_4(&this.OwnershipRequires, &that.OwnershipRequires) && + deriveTeleportEqual_5(&this.Grants, &that.Grants) && + deriveTeleportEqual_5(&this.OwnerGrants, &that.OwnerGrants) +} + +// deriveDeepCopy recursively copies the contents of src into dst. +func deriveDeepCopy(dst, src *header.ResourceHeader) { + dst.Kind = src.Kind + dst.SubKind = src.SubKind + dst.Version = src.Version + func() { + field := new(header.Metadata) + deriveDeepCopy_4(field, &src.Metadata) + dst.Metadata = *field + }() +} + +// deriveDeepCopy_ recursively copies the contents of src into dst. 
+func deriveDeepCopy_(dst, src *Spec) { + dst.Type = src.Type + dst.Title = src.Title + dst.Description = src.Description + if src.Owners == nil { + dst.Owners = nil + } else { + if dst.Owners != nil { + if len(src.Owners) > len(dst.Owners) { + if cap(dst.Owners) >= len(src.Owners) { + dst.Owners = (dst.Owners)[:len(src.Owners)] + } else { + dst.Owners = make([]Owner, len(src.Owners)) + } + } else if len(src.Owners) < len(dst.Owners) { + dst.Owners = (dst.Owners)[:len(src.Owners)] + } + } else { + dst.Owners = make([]Owner, len(src.Owners)) + } + copy(dst.Owners, src.Owners) + } + func() { + field := new(Audit) + deriveDeepCopy_5(field, &src.Audit) + dst.Audit = *field + }() + func() { + field := new(Requires) + deriveDeepCopy_6(field, &src.MembershipRequires) + dst.MembershipRequires = *field + }() + func() { + field := new(Requires) + deriveDeepCopy_6(field, &src.OwnershipRequires) + dst.OwnershipRequires = *field + }() + func() { + field := new(Grants) + deriveDeepCopy_7(field, &src.Grants) + dst.Grants = *field + }() + func() { + field := new(Grants) + deriveDeepCopy_7(field, &src.OwnerGrants) + dst.OwnerGrants = *field + }() +} + +// deriveDeepCopy_1 recursively copies the contents of src into dst. 
+func deriveDeepCopy_1(dst, src *Status) { + if src.MemberCount == nil { + dst.MemberCount = nil + } else { + dst.MemberCount = new(uint32) + *dst.MemberCount = *src.MemberCount + } + if src.MemberListCount == nil { + dst.MemberListCount = nil + } else { + dst.MemberListCount = new(uint32) + *dst.MemberListCount = *src.MemberListCount + } + if src.OwnerOf == nil { + dst.OwnerOf = nil + } else { + if dst.OwnerOf != nil { + if len(src.OwnerOf) > len(dst.OwnerOf) { + if cap(dst.OwnerOf) >= len(src.OwnerOf) { + dst.OwnerOf = (dst.OwnerOf)[:len(src.OwnerOf)] + } else { + dst.OwnerOf = make([]string, len(src.OwnerOf)) + } + } else if len(src.OwnerOf) < len(dst.OwnerOf) { + dst.OwnerOf = (dst.OwnerOf)[:len(src.OwnerOf)] + } + } else { + dst.OwnerOf = make([]string, len(src.OwnerOf)) + } + copy(dst.OwnerOf, src.OwnerOf) + } + if src.MemberOf == nil { + dst.MemberOf = nil + } else { + if dst.MemberOf != nil { + if len(src.MemberOf) > len(dst.MemberOf) { + if cap(dst.MemberOf) >= len(src.MemberOf) { + dst.MemberOf = (dst.MemberOf)[:len(src.MemberOf)] + } else { + dst.MemberOf = make([]string, len(src.MemberOf)) + } + } else if len(src.MemberOf) < len(dst.MemberOf) { + dst.MemberOf = (dst.MemberOf)[:len(src.MemberOf)] + } + } else { + dst.MemberOf = make([]string, len(src.MemberOf)) + } + copy(dst.MemberOf, src.MemberOf) + } + if src.CurrentUserAssignments == nil { + dst.CurrentUserAssignments = nil + } else { + dst.CurrentUserAssignments = new(CurrentUserAssignments) + *dst.CurrentUserAssignments = *src.CurrentUserAssignments + } + if src.UserAssignments == nil { + dst.UserAssignments = nil + } else { + dst.UserAssignments = new(UserAssignments) + *dst.UserAssignments = *src.UserAssignments + } +} + +// deriveDeepCopy_2 recursively copies the contents of src into dst. 
+func deriveDeepCopy_2(dst, src *AccessListMemberSpec) { + dst.AccessList = src.AccessList + dst.Name = src.Name + dst.Title = src.Title + func() { + field := new(time.Time) + deriveDeepCopy_8(field, &src.Joined) + dst.Joined = *field + }() + func() { + field := new(time.Time) + deriveDeepCopy_8(field, &src.Expires) + dst.Expires = *field + }() + dst.Reason = src.Reason + dst.AddedBy = src.AddedBy + dst.IneligibleStatus = src.IneligibleStatus + dst.MembershipKind = src.MembershipKind +} + +// deriveDeepCopy_3 recursively copies the contents of src into dst. +func deriveDeepCopy_3(dst, src *ReviewSpec) { + dst.AccessList = src.AccessList + if src.Reviewers == nil { + dst.Reviewers = nil + } else { + if dst.Reviewers != nil { + if len(src.Reviewers) > len(dst.Reviewers) { + if cap(dst.Reviewers) >= len(src.Reviewers) { + dst.Reviewers = (dst.Reviewers)[:len(src.Reviewers)] + } else { + dst.Reviewers = make([]string, len(src.Reviewers)) + } + } else if len(src.Reviewers) < len(dst.Reviewers) { + dst.Reviewers = (dst.Reviewers)[:len(src.Reviewers)] + } + } else { + dst.Reviewers = make([]string, len(src.Reviewers)) + } + copy(dst.Reviewers, src.Reviewers) + } + func() { + field := new(time.Time) + deriveDeepCopy_8(field, &src.ReviewDate) + dst.ReviewDate = *field + }() + dst.Notes = src.Notes + func() { + field := new(ReviewChanges) + deriveDeepCopy_9(field, &src.Changes) + dst.Changes = *field + }() +} + +// deriveTeleportEqual_1 returns whether this and that are equal. +func deriveTeleportEqual_1(this, that *header.Metadata) bool { + return (this == nil && that == nil) || + this != nil && that != nil && + this.Name == that.Name && + this.Description == that.Description && + deriveTeleportEqual_6(this.Labels, that.Labels) && + this.Expires.Equal(that.Expires) +} + +// deriveTeleportEqual_2 returns whether this and that are equal. 
+func deriveTeleportEqual_2(this, that []Owner) bool { + if this == nil || that == nil { + return this == nil && that == nil + } + if len(this) != len(that) { + return false + } + for i := 0; i < len(this); i++ { + if !(this[i] == that[i]) { + return false + } + } + return true +} + +// deriveTeleportEqual_3 returns whether this and that are equal. +func deriveTeleportEqual_3(this, that *Audit) bool { + return (this == nil && that == nil) || + this != nil && that != nil && + this.NextAuditDate.Equal(that.NextAuditDate) && + this.Recurrence == that.Recurrence && + this.Notifications == that.Notifications +} + +// deriveTeleportEqual_4 returns whether this and that are equal. +func deriveTeleportEqual_4(this, that *Requires) bool { + return (this == nil && that == nil) || + this != nil && that != nil && + deriveTeleportEqual_7(this.Roles, that.Roles) && + deriveTeleportEqual_8(this.Traits, that.Traits) +} + +// deriveTeleportEqual_5 returns whether this and that are equal. +func deriveTeleportEqual_5(this, that *Grants) bool { + return (this == nil && that == nil) || + this != nil && that != nil && + deriveTeleportEqual_7(this.Roles, that.Roles) && + deriveTeleportEqual_8(this.Traits, that.Traits) +} + +// deriveDeepCopy_4 recursively copies the contents of src into dst. +func deriveDeepCopy_4(dst, src *header.Metadata) { + dst.Name = src.Name + dst.Description = src.Description + if src.Labels != nil { + dst.Labels = make(map[string]string, len(src.Labels)) + deriveDeepCopy_10(dst.Labels, src.Labels) + } else { + dst.Labels = nil + } + func() { + field := new(time.Time) + deriveDeepCopy_8(field, &src.Expires) + dst.Expires = *field + }() + dst.Revision = src.Revision +} + +// deriveDeepCopy_5 recursively copies the contents of src into dst. 
+func deriveDeepCopy_5(dst, src *Audit) { + func() { + field := new(time.Time) + deriveDeepCopy_8(field, &src.NextAuditDate) + dst.NextAuditDate = *field + }() + dst.Recurrence = src.Recurrence + dst.Notifications = src.Notifications +} + +// deriveDeepCopy_6 recursively copies the contents of src into dst. +func deriveDeepCopy_6(dst, src *Requires) { + if src.Roles == nil { + dst.Roles = nil + } else { + if dst.Roles != nil { + if len(src.Roles) > len(dst.Roles) { + if cap(dst.Roles) >= len(src.Roles) { + dst.Roles = (dst.Roles)[:len(src.Roles)] + } else { + dst.Roles = make([]string, len(src.Roles)) + } + } else if len(src.Roles) < len(dst.Roles) { + dst.Roles = (dst.Roles)[:len(src.Roles)] + } + } else { + dst.Roles = make([]string, len(src.Roles)) + } + copy(dst.Roles, src.Roles) + } + if src.Traits != nil { + dst.Traits = make(map[string][]string, len(src.Traits)) + deriveDeepCopy_11(dst.Traits, src.Traits) + } else { + dst.Traits = nil + } +} + +// deriveDeepCopy_7 recursively copies the contents of src into dst. +func deriveDeepCopy_7(dst, src *Grants) { + if src.Roles == nil { + dst.Roles = nil + } else { + if dst.Roles != nil { + if len(src.Roles) > len(dst.Roles) { + if cap(dst.Roles) >= len(src.Roles) { + dst.Roles = (dst.Roles)[:len(src.Roles)] + } else { + dst.Roles = make([]string, len(src.Roles)) + } + } else if len(src.Roles) < len(dst.Roles) { + dst.Roles = (dst.Roles)[:len(src.Roles)] + } + } else { + dst.Roles = make([]string, len(src.Roles)) + } + copy(dst.Roles, src.Roles) + } + if src.Traits != nil { + dst.Traits = make(map[string][]string, len(src.Traits)) + deriveDeepCopy_11(dst.Traits, src.Traits) + } else { + dst.Traits = nil + } +} + +// deriveDeepCopy_8 recursively copies the contents of src into dst. +func deriveDeepCopy_8(dst, src *time.Time) { + *dst = *src +} + +// deriveDeepCopy_9 recursively copies the contents of src into dst. 
+func deriveDeepCopy_9(dst, src *ReviewChanges) { + if src.MembershipRequirementsChanged == nil { + dst.MembershipRequirementsChanged = nil + } else { + dst.MembershipRequirementsChanged = new(Requires) + deriveDeepCopy_6(dst.MembershipRequirementsChanged, src.MembershipRequirementsChanged) + } + if src.RemovedMembers == nil { + dst.RemovedMembers = nil + } else { + if dst.RemovedMembers != nil { + if len(src.RemovedMembers) > len(dst.RemovedMembers) { + if cap(dst.RemovedMembers) >= len(src.RemovedMembers) { + dst.RemovedMembers = (dst.RemovedMembers)[:len(src.RemovedMembers)] + } else { + dst.RemovedMembers = make([]string, len(src.RemovedMembers)) + } + } else if len(src.RemovedMembers) < len(dst.RemovedMembers) { + dst.RemovedMembers = (dst.RemovedMembers)[:len(src.RemovedMembers)] + } + } else { + dst.RemovedMembers = make([]string, len(src.RemovedMembers)) + } + copy(dst.RemovedMembers, src.RemovedMembers) + } + dst.ReviewFrequencyChanged = src.ReviewFrequencyChanged + dst.ReviewDayOfMonthChanged = src.ReviewDayOfMonthChanged +} + +// deriveTeleportEqual_6 returns whether this and that are equal. +func deriveTeleportEqual_6(this, that map[string]string) bool { + if this == nil || that == nil { + return this == nil && that == nil + } + if len(this) != len(that) { + return false + } + for k, v := range this { + thatv, ok := that[k] + if !ok { + return false + } + if !(v == thatv) { + return false + } + } + return true +} + +// deriveTeleportEqual_7 returns whether this and that are equal. +func deriveTeleportEqual_7(this, that []string) bool { + if this == nil || that == nil { + return this == nil && that == nil + } + if len(this) != len(that) { + return false + } + for i := 0; i < len(this); i++ { + if !(this[i] == that[i]) { + return false + } + } + return true +} + +// deriveTeleportEqual_8 returns whether this and that are equal. 
+func deriveTeleportEqual_8(this, that map[string][]string) bool { + if this == nil || that == nil { + return this == nil && that == nil + } + if len(this) != len(that) { + return false + } + for k, v := range this { + thatv, ok := that[k] + if !ok { + return false + } + if !(deriveTeleportEqual_7(v, thatv)) { + return false + } + } + return true +} + +// deriveDeepCopy_10 recursively copies the contents of src into dst. +func deriveDeepCopy_10(dst, src map[string]string) { + for src_key, src_value := range src { + dst[src_key] = src_value + } +} + +// deriveDeepCopy_11 recursively copies the contents of src into dst. +func deriveDeepCopy_11(dst, src map[string][]string) { + for src_key, src_value := range src { + if src_value == nil { + dst[src_key] = nil + } else { + if dst[src_key] != nil { + if len(src_value) > len(dst[src_key]) { + if cap(dst[src_key]) >= len(src_value) { + dst[src_key] = (dst[src_key])[:len(src_value)] + } else { + dst[src_key] = make([]string, len(src_value)) + } + } else if len(src_value) < len(dst[src_key]) { + dst[src_key] = (dst[src_key])[:len(src_value)] + } + } else { + dst[src_key] = make([]string, len(src_value)) + } + copy(dst[src_key], src_value) + } + } +} diff --git a/api/types/accesslist/member.go b/api/types/accesslist/member.go index fc3633e087ac1..376b1eca19068 100644 --- a/api/types/accesslist/member.go +++ b/api/types/accesslist/member.go @@ -47,6 +47,12 @@ type AccessListMemberSpec struct { // Name is the name of the member of the access list. Name string `json:"name" yaml:"name"` + // TODO (avatus): eventually populate this in the backend/cache. + + // Title is the title of an AccessListMember if it is of type MEMBERSHIP_KIND_LIST. + // This is only populated by the proxy when fetching an access list and its members for the web UI + Title string `json:"title" yaml:"title"` + + // Joined is when the user joined the access list.
Joined time.Time `json:"joined" yaml:"joined"` @@ -67,7 +73,7 @@ type AccessListMemberSpec struct { MembershipKind string `json:"membership_kind" yaml:"membership_kind"` } -// NewAccessListMember will create a new access listm member. +// NewAccessListMember will create a new AccessListMember. func NewAccessListMember(metadata header.Metadata, spec AccessListMemberSpec) (*AccessListMember, error) { member := &AccessListMember{ ResourceHeader: header.ResourceHeaderFromMetadata(metadata), @@ -81,35 +87,16 @@ func NewAccessListMember(metadata header.Metadata, spec AccessListMemberSpec) (* return member, nil } -// CheckAndSetDefaults validates fields and populates empty fields with default values. +// CheckAndSetDefaults defaults empty fields and performs metadata validation. func (a *AccessListMember) CheckAndSetDefaults() error { a.SetKind(types.KindAccessListMember) a.SetVersion(types.V1) - if err := a.ResourceHeader.CheckAndSetDefaults(); err != nil { return trace.Wrap(err) } - if a.Spec.MembershipKind == "" { a.Spec.MembershipKind = MembershipKindUser } - - if a.Spec.AccessList == "" { - return trace.BadParameter("access list is missing") - } - - if a.Spec.Name == "" { - return trace.BadParameter("member name is missing") - } - - if a.Spec.Joined.IsZero() || a.Spec.Joined.Unix() == 0 { - return trace.BadParameter("member %s: joined field empty or missing", a.Spec.Name) - } - - if a.Spec.AddedBy == "" { - return trace.BadParameter("member %s: added_by field is empty", a.Spec.Name) - } - return nil } @@ -138,7 +125,21 @@ func (a *AccessListMember) MatchSearch(values []string) bool { // Clone returns a copy of the member. func (a *AccessListMember) Clone() *AccessListMember { - var copy *AccessListMember - utils.StrictObjectToStruct(a, &copy) - return copy + if a == nil { + return nil + } + out := &AccessListMember{} + deriveDeepCopyAccessListMember(out, a) + return out +} + +// IsExpired checks if the access list member is expired based on the current time.
+func (a *AccessListMember) IsExpired(t time.Time) bool { + return !a.Spec.Expires.IsZero() && !t.Before(a.Spec.Expires) +} + +// IsUser returns true if the membership kind is User. +// All types except "MEMBERSHIP_KIND_LIST" are treated as "MEMBERSHIP_KIND_USER". +func (a *AccessListMember) IsUser() bool { + return isMembershipKindUser(a.Spec.MembershipKind) } diff --git a/api/types/accesslist/member_test.go b/api/types/accesslist/member_test.go index e9bc6d8c6dc1b..f5b36f8468e8c 100644 --- a/api/types/accesslist/member_test.go +++ b/api/types/accesslist/member_test.go @@ -50,19 +50,12 @@ func TestAccessListMemberDefaults(t *testing.T) { } } - t.Run("join date required for member", func(t *testing.T) { + t.Run("membership kind defaults to user", func(t *testing.T) { uut := newValidAccessListMember() - uut.Spec.Joined = time.Time{} + uut.Spec.MembershipKind = "" err := uut.CheckAndSetDefaults() - require.Error(t, err) - }) - - t.Run("added-by required", func(t *testing.T) { - uut := newValidAccessListMember() - uut.Spec.AddedBy = "" - - err := uut.CheckAndSetDefaults() - require.Error(t, err) + require.NoError(t, err) + require.Equal(t, MembershipKindUser, uut.Spec.MembershipKind) }) } diff --git a/api/types/accesslist/review.go b/api/types/accesslist/review.go index 776d06bdc1638..650107a8ee1a5 100644 --- a/api/types/accesslist/review.go +++ b/api/types/accesslist/review.go @@ -1,18 +1,20 @@ /* -Copyright 2023 Gravitational, Inc. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License.
-*/ + * Teleport + * Copyright (C) 2025 Gravitational, Inc. + * + * This program is free software: you can redistribute it and/or modify + * it under the terms of the GNU Affero General Public License as published by + * the Free Software Foundation, either version 3 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU Affero General Public License for more details. + * + * You should have received a copy of the GNU Affero General Public License + * along with this program. If not, see <https://www.gnu.org/licenses/>. + */ package accesslist @@ -25,7 +27,6 @@ import ( "github.com/gravitational/teleport/api/types" "github.com/gravitational/teleport/api/types/header" "github.com/gravitational/teleport/api/types/header/convert/legacy" - "github.com/gravitational/teleport/api/utils" ) // Review is an access list review resource. @@ -117,9 +118,12 @@ func (r *Review) GetMetadata() types.Metadata { // Clone returns a copy of the review. func (a *Review) Clone() *Review { - var copy *Review - utils.StrictObjectToStruct(a, &copy) - return copy + if a == nil { + return nil + } + out := &Review{} + deriveDeepCopyReview(out, a) + return out } func (r *ReviewSpec) UnmarshalJSON(data []byte) error { diff --git a/api/types/app.go b/api/types/app.go index 790c43c4a4691..477c718b7fdda 100644 --- a/api/types/app.go +++ b/api/types/app.go @@ -18,6 +18,7 @@ package types import ( "fmt" + "iter" "net/url" "slices" "strconv" @@ -58,6 +59,8 @@ type Application interface { SetURI(string) // GetPublicAddr returns the app public address. GetPublicAddr() string + // SetPublicAddr sets the app public address. + SetPublicAddr(s string) // GetInsecureSkipVerify returns the app insecure setting. GetInsecureSkipVerify() bool // GetRewrite returns the app rewrite configuration.
@@ -70,6 +73,8 @@ type Application interface { IsGCP() bool // IsTCP returns true if this app represents a TCP endpoint. IsTCP() bool + // IsMCP returns true if this app represents an MCP server. + IsMCP() bool // GetProtocol returns the application protocol. GetProtocol() string // GetAWSAccountID returns value of label containing AWS account ID on this app. @@ -101,6 +106,10 @@ type Application interface { SetTCPPorts([]*PortRange) // GetIdentityCenter fetches identity center info for the app, if any. GetIdentityCenter() *AppIdentityCenter + // GetMCP fetches MCP specific configuration. + GetMCP() *MCP + // IsEqual determines if two application resources are equivalent to one another. + IsEqual(Application) bool } // NewAppV3 creates a new app resource. @@ -244,6 +253,11 @@ func (a *AppV3) GetPublicAddr() string { return a.Spec.PublicAddr } +// SetPublicAddr sets the app public address. +func (a *AppV3) SetPublicAddr(addr string) { + a.Spec.PublicAddr = addr +} + // GetInsecureSkipVerify returns the app insecure setting. func (a *AppV3) GetInsecureSkipVerify() bool { return a.Spec.InsecureSkipVerify @@ -286,15 +300,28 @@ func (a *AppV3) IsTCP() bool { return IsAppTCP(a.Spec.URI) } +// IsMCP returns true if this app represents an MCP server. +func (a *AppV3) IsMCP() bool { + return IsAppMCP(a.Spec.URI) +} + func IsAppTCP(uri string) bool { return strings.HasPrefix(uri, "tcp://") } +// IsAppMCP returns true if the provided uri is an MCP app. +func IsAppMCP(uri string) bool { + return GetMCPServerTransportType(uri) != "" +} + // GetProtocol returns the application protocol.
func (a *AppV3) GetProtocol() string { if a.IsTCP() { return "TCP" } + if a.IsMCP() { + return "MCP" + } return "HTTP" } @@ -400,10 +427,14 @@ func (a *AppV3) CheckAndSetDefaults() error { return trace.BadParameter("app %q invalid label key: %q", a.GetName(), key) } } + if a.Spec.URI == "" { - if a.Spec.Cloud != "" { + switch { + case a.Spec.Cloud != "": a.Spec.URI = fmt.Sprintf("cloud://%v", a.Spec.Cloud) - } else { + case a.Spec.MCP != nil && a.Spec.MCP.Command != "": + a.Spec.URI = SchemeMCPStdio + "://" + default: return trace.BadParameter("app %q URI is empty", a.GetName()) } } @@ -453,6 +484,20 @@ func (a *AppV3) CheckAndSetDefaults() error { } } + if a.IsMCP() { + a.SetSubKind(SubKindMCP) + if err := a.checkMCP(); err != nil { + return trace.Wrap(err) + } + } + + // Set an "app-sub-kind" label that can be used for RBAC. + if a.SubKind != "" { + if a.Metadata.Labels == nil { + a.Metadata.Labels = make(map[string]string) + } + a.Metadata.Labels[AppSubKindLabel] = a.SubKind + } return nil } @@ -486,6 +531,35 @@ func (a *AppV3) checkTCPPorts() error { return nil } +func (a *AppV3) checkMCP() error { + switch GetMCPServerTransportType(a.Spec.URI) { + case MCPTransportStdio: + return trace.Wrap(a.checkMCPStdio()) + case MCPTransportSSE, MCPTransportHTTP: + _, err := url.Parse(a.Spec.URI) + return trace.Wrap(err) + default: + return trace.BadParameter("unsupported MCP server %q with URI %q", a.GetName(), a.Spec.URI) + } +} + +func (a *AppV3) checkMCPStdio() error { + // Skip validation for internal demo resource. + if resourceType, _ := a.GetLabel(TeleportInternalResourceType); resourceType == DemoResource { + return nil + } + if a.Spec.MCP == nil { + return trace.BadParameter("MCP server %q is missing 'mcp' spec", a.GetName()) + } + if a.Spec.MCP.Command == "" { + return trace.BadParameter("MCP server %q is missing 'command' which specifies the executable to launch the MCP server.
Arguments should be specified through the 'args' field", a.GetName()) + } + if a.Spec.MCP.RunAsHostUser == "" { + return trace.BadParameter("MCP server %q is missing 'run_as_host_user' which specifies a valid host user to execute the command", a.GetName()) + } + return nil +} + // GetIdentityCenter returns the Identity Center information for the app, if any. // May be nil. func (a *AppV3) GetIdentityCenter() *AppIdentityCenter { @@ -511,20 +585,34 @@ func (a *AppV3) IsEqual(i Application) bool { return false } +// GetMCP returns MCP specific configuration. +func (a *AppV3) GetMCP() *MCP { + return a.Spec.MCP +} + // DeduplicateApps deduplicates apps by combination of app name and public address. // Apps can have the same name but also could have different addresses. -func DeduplicateApps(apps []Application) (result []Application) { +func DeduplicateApps(apps []Application) []Application { + return slices.Collect(DeduplicatedApps(slices.Values(apps))) +} + +// DeduplicatedApps iterates deduplicated apps by combination of app name and +// public address. This is the iter.Seq version of DeduplicateApps. +func DeduplicatedApps(apps iter.Seq[Application]) iter.Seq[Application] { type key struct{ name, addr string } seen := make(map[key]struct{}) - for _, app := range apps { - key := key{app.GetName(), app.GetPublicAddr()} - if _, ok := seen[key]; ok { - continue + return func(yield func(Application) bool) { + for app := range apps { + key := key{app.GetName(), app.GetPublicAddr()} + if _, ok := seen[key]; ok { + continue + } + seen[key] = struct{}{} + if !yield(app) { + return + } } - seen[key] = struct{}{} - result = append(result, app) } - return result } // Apps is a list of app resources. @@ -596,3 +684,24 @@ func (p *PortRange) String() string { return fmt.Sprintf("%d-%d", p.Port, p.EndPort) } } + +// GetMCPServerTransportType returns the transport of the MCP server based on +// the URI. 
If no MCP transport type can be determined from the URI, an empty +// string is returned. +func GetMCPServerTransportType(uri string) string { + parsed, err := url.Parse(uri) + if err != nil { + return "" + } + + switch parsed.Scheme { + case SchemeMCPStdio: + return MCPTransportStdio + case SchemeMCPSSEHTTP, SchemeMCPSSEHTTPS: + return MCPTransportSSE + case SchemeMCPHTTP, SchemeMCPHTTPS: + return MCPTransportHTTP + default: + return "" + } +} diff --git a/api/types/app_test.go b/api/types/app_test.go index b4b57c0023e4c..12d3110fb00ac 100644 --- a/api/types/app_test.go +++ b/api/types/app_test.go @@ -18,6 +18,7 @@ package types import ( "fmt" + "slices" "strconv" "testing" @@ -603,6 +604,136 @@ func TestNewAppV3(t *testing.T) { want: nil, wantErr: require.Error, }, + { + name: "mcp with command", + meta: Metadata{ + Name: "mcp-everything", + }, + spec: AppSpecV3{ + MCP: &MCP{ + Command: "docker", + Args: []string{"run", "-i", "--rm", "mcp/everything"}, + RunAsHostUser: "docker", + }, + }, + want: &AppV3{ + Kind: "app", + SubKind: "mcp", + Version: "v3", + Metadata: Metadata{ + Name: "mcp-everything", + Namespace: "default", + Labels: map[string]string{AppSubKindLabel: "mcp"}, + }, + Spec: AppSpecV3{ + URI: "mcp+stdio://", + MCP: &MCP{ + Command: "docker", + Args: []string{"run", "-i", "--rm", "mcp/everything"}, + RunAsHostUser: "docker", + }, + }, + }, + wantErr: require.NoError, + }, + { + name: "mcp missing spec", + meta: Metadata{ + Name: "mcp-missing-spec", + }, + spec: AppSpecV3{ + URI: "mcp+stdio://", + }, + wantErr: require.Error, + }, + { + name: "mcp missing run_as_host_user", + meta: Metadata{ + Name: "mcp-missing-run-as", + }, + spec: AppSpecV3{ + MCP: &MCP{ + Command: "docker", + Args: []string{"run", "-i", "--rm", "mcp/everything"}, + }, + }, + wantErr: require.Error, + }, + { + name: "mcp demo", + meta: Metadata{ + Name: "teleport-mcp-demo", + Labels: map[string]string{ + TeleportInternalResourceType: DemoResource, + }, + }, + spec: AppSpecV3{ + 
URI: "mcp+stdio://teleport-mcp-demo", + }, + want: &AppV3{ + Kind: "app", + SubKind: "mcp", + Version: "v3", + Metadata: Metadata{ + Name: "teleport-mcp-demo", + Namespace: "default", + Labels: map[string]string{ + TeleportInternalResourceType: DemoResource, + AppSubKindLabel: "mcp", + }, + }, + Spec: AppSpecV3{ + URI: "mcp+stdio://teleport-mcp-demo", + }, + }, + wantErr: require.NoError, + }, + { + name: "mcp with SSE transport", + meta: Metadata{ + Name: "mcp-everything", + }, + spec: AppSpecV3{ + URI: "mcp+sse+http://localhost:12345/sse", + }, + want: &AppV3{ + Kind: "app", + SubKind: "mcp", + Version: "v3", + Metadata: Metadata{ + Name: "mcp-everything", + Namespace: "default", + Labels: map[string]string{AppSubKindLabel: "mcp"}, + }, + Spec: AppSpecV3{ + URI: "mcp+sse+http://localhost:12345/sse", + }, + }, + wantErr: require.NoError, + }, + { + name: "mcp with streamable HTTP transport", + meta: Metadata{ + Name: "mcp-everything", + }, + spec: AppSpecV3{ + URI: "mcp+http://localhost:12345/mcp", + }, + want: &AppV3{ + Kind: "app", + SubKind: "mcp", + Version: "v3", + Metadata: Metadata{ + Name: "mcp-everything", + Namespace: "default", + Labels: map[string]string{AppSubKindLabel: "mcp"}, + }, + Spec: AppSpecV3{ + URI: "mcp+http://localhost:12345/mcp", + }, + }, + wantErr: require.NoError, + }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { @@ -658,3 +789,53 @@ func hasErrAndContains(msg string) require.ErrorAssertionFunc { require.ErrorContains(t, err, msg, msgAndArgs...) 
} } + +func TestGetMCPServerTransportType(t *testing.T) { + tests := []struct { + name string + uri string + want string + }{ + { + name: "stdio", + uri: "mcp+stdio://", + want: MCPTransportStdio, + }, + { + name: "unknown", + uri: "http://localhost", + want: "", + }, + { + name: "SSE HTTP", + uri: "mcp+sse+http://127.0.0.1:12345", + want: MCPTransportSSE, + }, + { + name: "SSE HTTPS", + uri: "mcp+sse+httpS://some-domain:443", + want: MCPTransportSSE, + }, + } + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + require.Equal(t, tt.want, GetMCPServerTransportType(tt.uri)) + }) + } +} + +func TestDeduplicateApps(t *testing.T) { + var apps []Application + for _, name := range []string{"a", "b", "c", "b", "a", "d"} { + app_, err := NewAppV3(Metadata{ + Name: name, + }, AppSpecV3{ + URI: "localhost:3080", + }) + require.NoError(t, err) + apps = append(apps, app_) + } + + deduped := DeduplicateApps(apps) + require.Equal(t, []string{"a", "b", "c", "d"}, slices.Collect(ResourceNames(deduped))) +} diff --git a/api/types/appserver.go b/api/types/appserver.go index f2094e25391af..822dfb8d31fb9 100644 --- a/api/types/appserver.go +++ b/api/types/appserver.go @@ -18,6 +18,8 @@ package types import ( "fmt" + "iter" + "slices" "sort" "time" @@ -25,7 +27,9 @@ import ( "github.com/gravitational/teleport/api" "github.com/gravitational/teleport/api/constants" + componentfeaturesv1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/componentfeatures/v1" "github.com/gravitational/teleport/api/utils" + "github.com/gravitational/teleport/api/utils/iterutils" ) // AppServer represents a single proxied web app. @@ -59,6 +63,18 @@ type AppServer interface { GetTunnelType() TunnelType // ProxiedService provides common methods for a proxied service. ProxiedService + // GetRelayGroup returns the name of the Relay group that the app server is + // connected to. 
+ GetRelayGroup() string + // GetRelayIDs returns the list of Relay host IDs that the app server is + // connected to. + GetRelayIDs() []string + // GetScope returns the scope this server belongs to. + GetScope() string + // GetComponentFeatures returns the ComponentFeatures supported by this AppServer. + GetComponentFeatures() *componentfeaturesv1.ComponentFeatures + // SetComponentFeatures sets the ComponentFeatures supported by this AppServer. + SetComponentFeatures(*componentfeaturesv1.ComponentFeatures) } // NewAppServerV3 creates a new app server instance. @@ -103,6 +119,16 @@ func NewAppServerForAWSOIDCIntegration(integrationName, hostID, publicAddr strin }) } +// GetComponentFeatures returns the ComponentFeatures supported by this AppServer. +func (s *AppServerV3) GetComponentFeatures() *componentfeaturesv1.ComponentFeatures { + return s.Spec.ComponentFeatures +} + +// SetComponentFeatures sets the ComponentFeatures supported by this AppServer. +func (s *AppServerV3) SetComponentFeatures(cf *componentfeaturesv1.ComponentFeatures) { + s.Spec.ComponentFeatures = cf +} + // GetVersion returns the database server resource version. func (s *AppServerV3) GetVersion() string { return s.Version @@ -269,6 +295,22 @@ func (s *AppServerV3) SetProxyIDs(proxyIDs []string) { s.Spec.ProxyIDs = proxyIDs } +// GetRelayGroup implements [AppServer]. +func (s *AppServerV3) GetRelayGroup() string { + if s == nil { + return "" + } + return s.Spec.RelayGroup +} + +// GetRelayIDs implements [AppServer]. +func (s *AppServerV3) GetRelayIDs() []string { + if s == nil { + return nil + } + return s.Spec.RelayIds +} + // GetLabel retrieves the label with the provided key. If not found // value will be empty and ok will be false. func (s *AppServerV3) GetLabel(key string) (value string, ok bool) { @@ -328,6 +370,11 @@ func (s *AppServerV3) MatchSearch(values []string) bool { return MatchSearch(nil, values, nil) } +// GetScope returns the scope this server belongs to. 
+func (s *AppServerV3) GetScope() string { + return s.Scope +} + // AppServers represents a list of app servers. type AppServers []AppServer @@ -409,3 +456,10 @@ func (s AppServers) GetFieldVals(field string) ([]string, error) { return vals, nil } + +// Applications iterates over the applications that the AppServers proxy. +func (s AppServers) Applications() iter.Seq[Application] { + return iterutils.Map(func(appServer AppServer) Application { + return appServer.GetApp() + }, slices.Values(s)) +} diff --git a/api/types/authentication.go b/api/types/authentication.go index 1f83b1386f1cb..d02a3519288a4 100644 --- a/api/types/authentication.go +++ b/api/types/authentication.go @@ -837,7 +837,8 @@ func (c *AuthPreferenceV2) CheckAndSetDefaults() error { case "": // OK, "default" mode. Varies depending on OSS or Enterprise. case constants.DeviceTrustModeOff, constants.DeviceTrustModeOptional, - constants.DeviceTrustModeRequired: // OK. + constants.DeviceTrustModeRequired, + constants.DeviceTrustModeRequiredForHumans: // OK. 
default: return trace.BadParameter("device trust mode %q not supported", dt.Mode) } diff --git a/api/types/authentication_authpreference_test.go b/api/types/authentication_authpreference_test.go index b6efaae1946b1..1f674985fa7b1 100644 --- a/api/types/authentication_authpreference_test.go +++ b/api/types/authentication_authpreference_test.go @@ -710,6 +710,16 @@ func TestAuthPreferenceV2_CheckAndSetDefaults_deviceTrust(t *testing.T) { }, }, }, + { + name: "Mode=required-for-humans", + authPref: &types.AuthPreferenceV2{ + Spec: types.AuthPreferenceSpecV2{ + DeviceTrust: &types.DeviceTrust{ + Mode: constants.DeviceTrustModeRequiredForHumans, + }, + }, + }, + }, { name: "Mode invalid", authPref: &types.AuthPreferenceV2{ diff --git a/api/types/authority.go b/api/types/authority.go index 9b556b42ddd97..6d056a5fdc1b0 100644 --- a/api/types/authority.go +++ b/api/types/authority.go @@ -556,6 +556,7 @@ func (k *TLSKeyPair) Clone() *TLSKeyPair { KeyType: k.KeyType, Key: slices.Clone(k.Key), Cert: slices.Clone(k.Cert), + CRL: slices.Clone(k.CRL), } } diff --git a/api/types/autoupdate/config.go b/api/types/autoupdate/config.go index ad79765895c0d..37a949606e470 100644 --- a/api/types/autoupdate/config.go +++ b/api/types/autoupdate/config.go @@ -104,6 +104,12 @@ func checkAgentSchedules(c *autoupdate.AutoUpdateConfig) error { if group.StartHour > 23 || group.StartHour < 0 { return trace.BadParameter("spec.agents.schedules.regular[%d].start_hour must be between 0 and 23", i) } + if group.CanaryCount < 0 || group.CanaryCount > MaxCanaryCount { + return trace.BadParameter("spec.agents.schedule.regular[%d].canary_count must be between 0 and %d", i, MaxCanaryCount) + } + if c.Spec.Agents.Strategy == AgentsStrategyTimeBased && group.CanaryCount != 0 { + return trace.BadParameter("spec.agents.schedules.regular[%d].canary_count is not zero but the strategy %q doesn't support canaries", i, AgentsStrategyTimeBased) + } if c.Spec.Agents.Strategy == AgentsStrategyTimeBased && 
group.WaitHours != 0 { return trace.BadParameter("spec.agents.schedules.regular[%d].wait_hours must be zero when strategy is %s", i, AgentsStrategyTimeBased) } diff --git a/api/types/autoupdate/config_test.go b/api/types/autoupdate/config_test.go index 0981dd7e681c1..6a022f11500f9 100644 --- a/api/types/autoupdate/config_test.go +++ b/api/types/autoupdate/config_test.go @@ -465,6 +465,28 @@ func TestValidateAutoUpdateConfig(t *testing.T) { }, assertErr: require.Error, }, + { + name: "group with too many canaries", + config: &autoupdate.AutoUpdateConfig{ + Kind: types.KindAutoUpdateConfig, + Version: types.V1, + Metadata: &headerv1.Metadata{ + Name: types.MetaNameAutoUpdateConfig, + }, + Spec: &autoupdate.AutoUpdateConfigSpec{ + Agents: &autoupdate.AutoUpdateConfigSpecAgents{ + Mode: AgentsUpdateModeEnabled, + Strategy: AgentsStrategyHaltOnError, + Schedules: &autoupdate.AgentAutoUpdateSchedules{ + Regular: []*autoupdate.AgentAutoUpdateGroup{ + {Name: "g1", Days: []string{"*"}, WaitHours: 0, CanaryCount: 123}, + }, + }, + }, + }, + }, + assertErr: require.Error, + }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { diff --git a/api/types/autoupdate/constants.go b/api/types/autoupdate/constants.go index deed5168fb21f..0c38c7b190f42 100644 --- a/api/types/autoupdate/constants.go +++ b/api/types/autoupdate/constants.go @@ -43,4 +43,9 @@ const ( // maintenance window. There is no dependency between groups. Agents won't be instructed to update // if the window is over. AgentsStrategyTimeBased = "time-based" + + // MaxCanaryCount is the maximum number of canaries allowed for a single group. + // This value is arbitrarily low to keep XXL rollouts from growing over the max backend + // item size. 
+ MaxCanaryCount = 5 ) diff --git a/api/types/autoupdate/report.go b/api/types/autoupdate/report.go index 0a2bb9d1642c3..07b2f48a00b55 100644 --- a/api/types/autoupdate/report.go +++ b/api/types/autoupdate/report.go @@ -28,28 +28,28 @@ import ( ) const ( - autoUpdateAgentReportTTL = time.Hour - maxGroups = 50 - maxVersions = 20 + autoUpdateReportTTL = time.Hour + maxGroups = 50 + maxVersions = 20 ) -// NewAutoUpdateAgentReport creates a new auto update version resource. +// NewAutoUpdateAgentReport creates a new auto update agent report resource. func NewAutoUpdateAgentReport(spec *autoupdate.AutoUpdateAgentReportSpec, authName string) (*autoupdate.AutoUpdateAgentReport, error) { - rollout := &autoupdate.AutoUpdateAgentReport{ + report := &autoupdate.AutoUpdateAgentReport{ Kind: types.KindAutoUpdateAgentReport, Version: types.V1, Metadata: &headerv1.Metadata{ Name: authName, // Validate will fail later if timestamp is zero - Expires: timestamppb.New(spec.GetTimestamp().AsTime().Add(autoUpdateAgentReportTTL)), + Expires: timestamppb.New(spec.GetTimestamp().AsTime().Add(autoUpdateReportTTL)), }, Spec: spec, } - if err := ValidateAutoUpdateAgentReport(rollout); err != nil { + if err := ValidateAutoUpdateAgentReport(report); err != nil { return nil, trace.Wrap(err) } - return rollout, nil + return report, nil } // ValidateAutoUpdateAgentReport checks that required parameters are set @@ -78,3 +78,49 @@ func ValidateAutoUpdateAgentReport(v *autoupdate.AutoUpdateAgentReport) error { return nil } + +// NewAutoUpdateBotInstanceReport creates a new auto update bot instance report resource. 
+func NewAutoUpdateBotInstanceReport(spec *autoupdate.AutoUpdateBotInstanceReportSpec) (*autoupdate.AutoUpdateBotInstanceReport, error) { + report := &autoupdate.AutoUpdateBotInstanceReport{ + Kind: types.KindAutoUpdateBotInstanceReport, + Version: types.V1, + Metadata: &headerv1.Metadata{ + Name: types.MetaNameAutoUpdateBotInstanceReport, + // Validate will fail later if timestamp is zero + Expires: timestamppb.New(spec.GetTimestamp().AsTime().Add(autoUpdateReportTTL)), + }, + Spec: spec, + } + if err := ValidateAutoUpdateBotInstanceReport(report); err != nil { + return nil, trace.Wrap(err) + } + + return report, nil +} + +// ValidateAutoUpdateBotInstanceReport checks that the given bot instance report +// is well-formed and doesn't exceed limits. +func ValidateAutoUpdateBotInstanceReport(v *autoupdate.AutoUpdateBotInstanceReport) error { + if v.GetMetadata().GetName() == "" { + return trace.BadParameter("Metadata.Name is empty") + } + if v.Spec == nil { + return trace.BadParameter("Spec is nil") + } + + if ts := v.GetSpec().GetTimestamp(); ts.GetSeconds() == 0 && ts.GetNanos() == 0 { + return trace.BadParameter("Spec.Timestamp is empty or zero") + } + + if numGroups := len(v.GetSpec().GetGroups()); numGroups > maxGroups { + return trace.BadParameter("Spec.Groups is too large (%d while the max is %d)", numGroups, maxGroups) + } + + for groupName, group := range v.GetSpec().GetGroups() { + if numVersions := len(group.GetVersions()); numVersions > maxVersions { + return trace.BadParameter("group %q has too many versions (%d while the max is %d)", groupName, numVersions, maxVersions) + } + } + + return nil +} diff --git a/api/types/autoupdate/report_test.go b/api/types/autoupdate/report_test.go index b93d9719c20c7..e46327aae09c5 100644 --- a/api/types/autoupdate/report_test.go +++ b/api/types/autoupdate/report_test.go @@ -19,6 +19,7 @@ package autoupdate import ( + "strconv" "testing" "github.com/google/go-cmp/cmp" @@ -33,7 +34,7 @@ import ( func 
TestNewAutoUpdateAgentReport(t *testing.T) { now := timestamppb.Now() - expires := timestamppb.New(now.AsTime().Add(autoUpdateAgentReportTTL)) + expires := timestamppb.New(now.AsTime().Add(autoUpdateReportTTL)) tests := []struct { name string spec *autoupdate.AutoUpdateAgentReportSpec @@ -88,3 +89,85 @@ func TestNewAutoUpdateAgentReport(t *testing.T) { }) } } + +func TestNewAutoUpdateBotInstanceReport(t *testing.T) { + now := timestamppb.Now() + expires := timestamppb.New(now.AsTime().Add(autoUpdateReportTTL)) + + generateGroups := func(t *testing.T, numGroups, numVersions int) map[string]*autoupdate.AutoUpdateBotInstanceReportSpecGroup { + t.Helper() + + groups := make(map[string]*autoupdate.AutoUpdateBotInstanceReportSpecGroup, numGroups) + for groupIdx := range numGroups { + group := &autoupdate.AutoUpdateBotInstanceReportSpecGroup{ + Versions: make(map[string]*autoupdate.AutoUpdateBotInstanceReportSpecGroupVersion, numVersions), + } + for versionIdx := range numVersions { + group.Versions[strconv.Itoa(versionIdx)] = &autoupdate.AutoUpdateBotInstanceReportSpecGroupVersion{Count: 1} + } + groups[strconv.Itoa(groupIdx)] = group + } + return groups + } + + tests := []struct { + name string + spec *autoupdate.AutoUpdateBotInstanceReportSpec + + want *autoupdate.AutoUpdateBotInstanceReport + wantErr require.ErrorAssertionFunc + }{ + { + name: "nil spec", + wantErr: require.Error, + }, + { + name: "no timestamp", + spec: &autoupdate.AutoUpdateBotInstanceReportSpec{}, + wantErr: require.Error, + }, + { + name: "too many groups", + spec: &autoupdate.AutoUpdateBotInstanceReportSpec{ + Timestamp: now, + Groups: generateGroups(t, 51, 0), + }, + wantErr: require.Error, + }, + { + name: "too many versions", + spec: &autoupdate.AutoUpdateBotInstanceReportSpec{ + Timestamp: now, + Groups: generateGroups(t, 1, 21), + }, + wantErr: require.Error, + }, + { + name: "ok", + spec: &autoupdate.AutoUpdateBotInstanceReportSpec{ + Timestamp: now, + Groups: generateGroups(t, 1, 1), + }, 
+ want: &autoupdate.AutoUpdateBotInstanceReport{ + Kind: types.KindAutoUpdateBotInstanceReport, + Version: types.V1, + Metadata: &headerv1.Metadata{ + Name: types.MetaNameAutoUpdateBotInstanceReport, + Expires: expires, + }, + Spec: &autoupdate.AutoUpdateBotInstanceReportSpec{ + Timestamp: now, + Groups: generateGroups(t, 1, 1), + }, + }, + wantErr: require.NoError, + }, + } + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result, err := NewAutoUpdateBotInstanceReport(tt.spec) + tt.wantErr(t, err) + require.Empty(t, cmp.Diff(tt.want, result, protocmp.Transform())) + }) + } +} diff --git a/api/types/autoupdate/rollout.go b/api/types/autoupdate/rollout.go index 111c9a65e0095..ff7d9294d080d 100644 --- a/api/types/autoupdate/rollout.go +++ b/api/types/autoupdate/rollout.go @@ -93,6 +93,12 @@ func ValidateAutoUpdateAgentRollout(v *autoupdate.AutoUpdateAgentRollout) error if v.Spec.Strategy == AgentsStrategyHaltOnError && i == 0 && group.ConfigWaitHours != 0 { return trace.BadParameter("status.schedules.groups[0].config_wait_hours must be zero as it's the first group") } + if group.CanaryCount > MaxCanaryCount { + return trace.BadParameter("status.schedules.groups[%d].canary_count must not exceed %d", i, MaxCanaryCount) + } + if len(group.Canaries) > MaxCanaryCount { + return trace.BadParameter("status.schedules.groups[%d].canaries must not contain more than %d elements", i, MaxCanaryCount) + } if conflictingGroup, ok := seenGroups[group.Name]; ok { return trace.BadParameter("spec.agents.schedules.regular contains groups with the same name %q at indices %d and %d", group.Name, conflictingGroup, i) } diff --git a/api/types/autoupdate/rollout_test.go b/api/types/autoupdate/rollout_test.go index d95ba9ef890fd..8a8e054f4ef1c 100644 --- a/api/types/autoupdate/rollout_test.go +++ b/api/types/autoupdate/rollout_test.go @@ -349,6 +349,23 @@ func TestValidateAutoUpdateAgentRollout(t *testing.T) { }, assertErr: require.Error, }, + { + name: "group with too 
high canary count", + rollout: &autoupdate.AutoUpdateAgentRollout{ + Kind: types.KindAutoUpdateAgentRollout, + Version: types.V1, + Metadata: &headerv1.Metadata{ + Name: types.MetaNameAutoUpdateAgentRollout, + }, + Spec: &haltOnErrorRolloutSpec, + Status: &autoupdate.AutoUpdateAgentRolloutStatus{ + Groups: []*autoupdate.AutoUpdateAgentRolloutStatusGroup{ + {Name: "g1", ConfigDays: []string{"*"}, CanaryCount: 15}, + }, + }, + }, + assertErr: require.Error, + }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { diff --git a/api/types/common/constants.go b/api/types/common/constants.go index cda6ee39a803f..718bc21f1bdbc 100644 --- a/api/types/common/constants.go +++ b/api/types/common/constants.go @@ -92,4 +92,5 @@ var OriginValues = []string{ OriginDiscoveryKubernetes, OriginEntraID, OriginAWSIdentityCenter, + OriginIntegrationAWSRolesAnywhere, } diff --git a/api/types/constants.go b/api/types/constants.go index a2daa0b5ff5e0..64da596b1e91a 100644 --- a/api/types/constants.go +++ b/api/types/constants.go @@ -180,6 +180,14 @@ const ( // KindApp is a web app resource. KindApp = "app" + // SubKindMCP represents an MCP server as a subkind of app. + SubKindMCP = KindMCP + + // KindMCP is an MCP server resource. + // Currently, MCP servers are accessed through apps. + // In the future, they may become a standalone resource kind. + KindMCP = "mcp" + // KindDatabaseServer is a database proxy server resource. KindDatabaseServer = "db_server" @@ -201,6 +209,8 @@ const ( KindCrownJewel = "crown_jewel" // KindKubernetesCluster is a Kubernetes cluster. KindKubernetesCluster = "kube_cluster" + // KindKubernetesResource is a Kubernetes resource within a cluster. + KindKubernetesResource = "kube_resource" // KindKubePod is a Kubernetes Pod resource type. 
KindKubePod = "pod" @@ -276,6 +286,9 @@ const ( // KindToken is a provisioning token resource KindToken = "token" + // KindScopedToken is a scoped provisioning token resource + KindScopedToken = "scoped_token" + // KindCertAuthority is a certificate authority resource KindCertAuthority = "cert_authority" @@ -303,10 +316,20 @@ const ( // KindSessionRecordingConfig is the resource for session recording configuration. KindSessionRecordingConfig = "session_recording_config" + // KindRecordingEncryption is the collection of active session recording encryption keys. + KindRecordingEncryption = "recording_encryption" + + // KindRotatedKey is a previously rotated session recording encryption key kept for future replay. + KindRotatedKey = "rotated_key" + // MetaNameSessionRecordingConfig is the exact name of the singleton resource for // session recording configuration. MetaNameSessionRecordingConfig = "session-recording-config" + // MetaNameRecordingEncryption is the exact name of the singleton resource for + // session recording encryption. + MetaNameRecordingEncryption = "recording-encryption" + // KindExternalAuditStorage the resource kind for External Audit Storage // configuration. KindExternalAuditStorage = "external_audit_storage" @@ -335,6 +358,12 @@ const ( // KindAutoUpdateAgentReport is the resource that tracks connected agents. KindAutoUpdateAgentReport = "autoupdate_agent_report" + // KindAutoUpdateBotInstanceReport is the resource that tracks connected bots. + KindAutoUpdateBotInstanceReport = "autoupdate_bot_instance_report" + + // MetaNameAutoUpdateBotInstanceReport is the name of the singleton auto update bot report. + MetaNameAutoUpdateBotInstanceReport = "autoupdate-bot-instance-report" + // MetaNameAutoUpdateConfig is the name of a configuration resource for autoupdate config. MetaNameAutoUpdateConfig = "autoupdate-config" @@ -636,6 +665,15 @@ const ( // stable UNIX users. 
KindStableUNIXUser = "stable_unix_user" + // KindInferenceModel is the kind of teleport.summarizer.v1.InferenceModel. + KindInferenceModel = "inference_model" + + // KindInferenceSecret is the kind of teleport.summarizer.v1.InferenceSecret. + KindInferenceSecret = "inference_secret" + + // KindInferencePolicy is the kind of teleport.summarizer.v1.InferencePolicy. + KindInferencePolicy = "inference_policy" + // MetaNameAccessGraphSettings is the exact name of the singleton resource holding // access graph settings. MetaNameAccessGraphSettings = "access-graph-settings" @@ -643,6 +681,12 @@ const ( // MetaNameVnetConfig is the exact name of the singleton resource holding VNet config. MetaNameVnetConfig = "vnet-config" + // KindRelayServer is the resource kind for a Relay service heartbeat. + KindRelayServer = "relay_server" + + // KindClientIPRestriction is the resource kind for Client IP Restriction allowlist. + KindClientIPRestriction = "client_ip_restriction" + // V8 is the eighth version of resources. V8 = "v8" @@ -861,6 +905,15 @@ const ( // KubernetesClusterLabel indicates name of the kubernetes cluster for auto-discovered services inside kubernetes. KubernetesClusterLabel = TeleportNamespace + "/kubernetes-cluster" + // AWSRolesAnywhereProfileNameOverrideLabel indicates the name of the AWS IAM Roles Anywhere Profile's tag key + // that Teleport will use to override the name of the discovered profile. + // Ensure this name is unique and valid DNS label. + AWSRolesAnywhereProfileNameOverrideLabel = "TeleportApplicationName" + + // AWSRolesAnywhereProfileARNLabel is the label key to store the Profile ARN when creating an Application + // resource from an AWS IAM Roles Anywhere Profile. + AWSRolesAnywhereProfileARNLabel = TeleportNamespace + "/aws-roles-anywhere-profile-arn" + // DiscoveryTypeLabel specifies type of discovered service that should be created from Kubernetes service. 
// Also added by discovery service to indicate the type of discovered // resource, e.g. "rds" for RDS databases, "eks" for EKS kube clusters, etc. @@ -881,6 +934,8 @@ const ( DiscoveryAppIgnore = TeleportNamespace + "/ignore" // DiscoveryPublicAddr specifies the public address for a discovered app created from a Kubernetes service. DiscoveryPublicAddr = TeleportNamespace + "/public-addr" + // DiscoveryDescription specifies the description for a discovered app created from a Kubernetes service. + DiscoveryDescription = TeleportNamespace + "/description" // ReqAnnotationApproveSchedulesLabel is the request annotation key at which schedules are stored for access plugins. ReqAnnotationApproveSchedulesLabel = "/schedules" @@ -896,6 +951,27 @@ const ( // CloudGCP identifies that a resource was discovered in GCP. CloudGCP = "GCP" + // SchemeMCPStdio is a URI scheme for MCP servers using stdio transport. + SchemeMCPStdio = "mcp+stdio" + // MCPTransportStdio indicates the MCP server uses stdio transport. + MCPTransportStdio = "stdio" + // SchemeMCPSSEHTTP is a URI scheme for MCP servers using HTTP with SSE + // transport. + SchemeMCPSSEHTTP = "mcp+sse+http" + // SchemeMCPSSEHTTPS is a URI scheme for MCP servers using HTTPS with SSE + // transport. + SchemeMCPSSEHTTPS = "mcp+sse+https" + // MCPTransportSSE indicates the MCP server uses SSE transport. + MCPTransportSSE = "SSE" + // SchemeMCPHTTP is a URI scheme for MCP servers using HTTP with streamable + // HTTP transport. + SchemeMCPHTTP = "mcp+http" + // SchemeMCPHTTPS is a URI scheme for MCP servers using HTTPS with + // streamable HTTP transport. + SchemeMCPHTTPS = "mcp+https" + // MCPTransportHTTP indicates the MCP server uses streamable HTTP transport. + MCPTransportHTTP = "Streamable HTTP" + // DiscoveredResourceNode identifies a discovered SSH node. DiscoveredResourceNode = "node" // DiscoveredResourceDatabase identifies a discovered database. 
@@ -911,6 +987,8 @@ const ( // TeleportAzureMSIEndpoint is a special URL intercepted by TSH local proxy, serving Azure credentials. TeleportAzureMSIEndpoint = "azure-msi." + TeleportNamespace + // TeleportAzureIdentityEndpoint is a special URL intercepted by TSH local proxy, serving Azure credentials. + TeleportAzureIdentityEndpoint = "azure-identity." + TeleportNamespace // ConnectMyComputerNodeOwnerLabel is a label used to control access to the node managed by // Teleport Connect as part of Connect My Computer. See [teleterm.connectmycomputer.RoleSetup]. @@ -1119,10 +1197,15 @@ const ( // should not change these resources. SystemResource = "system" - // PresetResource are resources resources will be created if they don't exist. Updates may be applied + // PresetResource are resources that will be created if they don't exist. Updates may be applied // to them, but user changes to these resources will be preserved. PresetResource = "preset" + // DemoResource are resources that demonstrate specific Teleport features. + // These resources are typically managed internally by Teleport and enabled + // via flags. Users should not change these resources. + DemoResource = "demo" + // ProxyGroupIDLabel is the internal-use label for proxy heartbeats that's // used by reverse tunnel agents to keep track of multiple independent sets // of proxies in proxy peering mode. @@ -1174,6 +1257,9 @@ const ( // GitHubOrgLabel is the label for GitHub organization. GitHubOrgLabel = TeleportInternalLabelPrefix + "github-org" + + // AppSubKindLabel is the label that has the same value as "app.sub_kind". 
+ AppSubKindLabel = TeleportInternalLabelPrefix + "app-sub-kind" ) const ( @@ -1352,10 +1438,22 @@ const ( var RequestableResourceKinds = []string{ KindNode, KindKubernetesCluster, + KindKubernetesResource, KindDatabase, KindApp, KindWindowsDesktop, KindUserGroup, + KindSAMLIdPServiceProvider, + KindIdentityCenterAccount, + KindIdentityCenterAccountAssignment, + KindGitServer, +} + +// LegacyRequestableKubeResourceKinds lists all legacy Teleport resource kinds users can request access to. +// Those are the requestable Kubernetes resource kinds that were supported before the introduction of +// custom resource support. We need to keep them to maintain support with older Teleport versions. +// TODO(@creack): DELETE IN v20.0.0. +var LegacyRequestableKubeResourceKinds = []string{ KindKubePod, KindKubeSecret, KindKubeConfigmap, @@ -1377,12 +1475,18 @@ var RequestableResourceKinds = []string{ KindKubeJob, KindKubeCertificateSigningRequest, KindKubeIngress, - KindSAMLIdPServiceProvider, - KindIdentityCenterAccount, - KindIdentityCenterAccountAssignment, - KindGitServer, } +// Prefix constants to identify kubernetes resources in access requests. +const ( + // AccessRequestPrefixKindKube denotes that the resource is a kubernetes one. Used for access requests. + AccessRequestPrefixKindKube = "kube:" + // AccessRequestPrefixKindKubeClusterWide denotes that the kube resource is cluster-wide. + AccessRequestPrefixKindKubeClusterWide = AccessRequestPrefixKindKube + "cw:" + // AccessRequestPrefixKindKubeNamespaced denotes that the kube resource is namespaced. + AccessRequestPrefixKindKubeNamespaced = AccessRequestPrefixKindKube + "ns:" +) + // The list below needs to be kept in sync with `kubernetesResourceKindOptions` // in `web/packages/teleport/src/Roles/RoleEditor/standardmodel.ts`. 
(Keeping // this comment separate to prevent it from being included in the official @@ -1415,6 +1519,15 @@ var KubernetesResourcesKinds = []string{ KindKubeIngress, } +// KubernetesResourceSelfSubjectAccessReview is a Kubernetes resource that +// represents a self-subject access review. It gets injected into the allow section of roles. +var KubernetesResourceSelfSubjectAccessReview = KubernetesResource{ + Kind: "selfsubjectaccessreviews", + Name: Wildcard, + Verbs: []string{"create"}, + APIGroup: "authorization.k8s.io", +} + // KubernetesResourcesV7KindGroups maps the legacy Teleport kube kinds // to their kubernetes group. // Used for validation in role >=v8 to check whether an older value has @@ -1527,49 +1640,48 @@ var KubernetesClusterWideResourceKinds = []string{ KindKubeCertificateSigningRequest, } -type groupKind = struct{ apiGroup, kind string } - // KubernetesNamespacedResourceKinds is the list of known Kubernetes resource kinds // that are namespaced. +// // Generated from `kubectl api-resources --namespaced=true -o name --sort-by=name` (kind k8s v1.32.2). -// (added .core to core resources.) // The format is ".". // // Only used in role >=v8 to attempt to validate the api_group field. // If we have a match, we know we need a namespaced value, if we don't // have a match, we don't know we don't. Best effort basis. 
-var kubernetesNamespacedResourceKinds = map[groupKind]struct{}{ - {"", "bindings"}: {}, - {"", "configmaps"}: {}, - {"apps", "controllerrevisions"}: {}, - {"batch", "cronjobs"}: {}, - {"storage.k8s.io", "csistoragecapacities"}: {}, - {"apps", "daemonsets"}: {}, - {"apps", "deployments"}: {}, - {"", "endpoints"}: {}, - {"discovery.k8s.io", "endpointslices"}: {}, - {"events.k8s.io", "events"}: {}, - {"", "events"}: {}, - {"autoscaling", "horizontalpodautoscalers"}: {}, - {"networking.k8s.io", "ingresses"}: {}, - {"batch", "jobs"}: {}, - {"coordination.k8s.io", "leases"}: {}, - {"", "limitranges"}: {}, - {"authorization.k8s.io", "localsubjectaccessreviews"}: {}, - {"networking.k8s.io", "networkpolicies"}: {}, - {"", "persistentvolumeclaims"}: {}, - {"policy", "poddisruptionbudgets"}: {}, - {"", "pods"}: {}, - {"", "podtemplates"}: {}, - {"apps", "replicasets"}: {}, - {"", "replicationcontrollers"}: {}, - {"", "resourcequotas"}: {}, - {"rbac.authorization.k8s.io", "rolebindings"}: {}, - {"rbac.authorization.k8s.io", "roles"}: {}, - {"", "secrets"}: {}, - {"", "serviceaccounts"}: {}, - {"", "services"}: {}, - {"apps", "statefulsets"}: {}, +// +// Key: resource kind, value: api group. 
+var kubernetesNamespacedResourceKinds = map[string]string{ + "bindings": "", + "configmaps": "", + "controllerrevisions": "apps", + "cronjobs": "batch", + "csistoragecapacities": "storage.k8s.io", + "daemonsets": "apps", + "deployments": "apps", + "endpoints": "", + "endpointslices": "discovery.k8s.io", + "events": "events.k8s.io", + "horizontalpodautoscalers": "autoscaling", + "ingresses": "networking.k8s.io", + "jobs": "batch", + "leases": "coordination.k8s.io", + "limitranges": "", + "localsubjectaccessreviews": "authorization.k8s.io", + "networkpolicies": "networking.k8s.io", + "persistentvolumeclaims": "", + "poddisruptionbudgets": "policy", + "pods": "", + "podtemplates": "", + "replicasets": "apps", + "replicationcontrollers": "", + "resourcequotas": "", + "rolebindings": "rbac.authorization.k8s.io", + "roles": "rbac.authorization.k8s.io", + "secrets": "", + "serviceaccounts": "", + "services": "", + "statefulsets": "apps", } // List of "" (core / legacy) resources. diff --git a/api/types/database.go b/api/types/database.go index 686bbfdbbed87..291e162f0094f 100644 --- a/api/types/database.go +++ b/api/types/database.go @@ -88,8 +88,12 @@ type Database interface { SetAWSExternalID(id string) // SetAWSAssumeRole sets the database AWS assume role arn in the Spec.AWS field. SetAWSAssumeRole(roleARN string) + // IsGCPHosted returns true if the database is hosted by GCP. + IsGCPHosted() bool // GetGCP returns GCP information for Cloud SQL databases. GetGCP() GCPCloudSQL + // GetGCPProjectID returns Project ID for GCP databases. + GetGCPProjectID() (string, error) // GetAzure returns Azure database server metadata. GetAzure() Azure // SetStatusAzure sets the database Azure metadata in the status field. @@ -118,6 +122,8 @@ type Database interface { IsAzure() bool // IsElastiCache returns true if this is an AWS ElastiCache database. IsElastiCache() bool + // IsElastiCacheServerless returns true if this is an AWS ElastiCache Serverless database. 
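With the table reshaped to a kind-to-api-group map, the best-effort validation the surrounding comment describes amounts to a lookup plus a group comparison. A sketch under that reading (the function name is hypothetical); note that an unknown kind proves nothing, since the table only covers built-in kinds:

```go
package main

import "fmt"

// Subset of the kind -> api group table from this hunk.
var kubernetesNamespacedResourceKinds = map[string]string{
	"pods":        "",
	"deployments": "apps",
	"roles":       "rbac.authorization.k8s.io",
}

// isKnownNamespaced reports whether (kind, apiGroup) is a known
// namespaced pair. A false result is not conclusive: custom
// resources are absent from the table, so they stay unvalidated.
func isKnownNamespaced(kind, apiGroup string) bool {
	group, ok := kubernetesNamespacedResourceKinds[kind]
	return ok && group == apiGroup
}

func main() {
	fmt.Println(isKnownNamespaced("deployments", "apps"))  // true
	fmt.Println(isKnownNamespaced("deployments", ""))      // false: wrong group
	fmt.Println(isKnownNamespaced("mycrd", "example.com")) // false: unknown kind, not conclusive
}
```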
+ IsElastiCacheServerless() bool // IsMemoryDB returns true if this is an AWS MemoryDB database. IsMemoryDB() bool // IsAWSHosted returns true if database is hosted by AWS. @@ -148,6 +154,8 @@ type Database interface { // IsAutoUsersEnabled returns true if the database has auto user // provisioning enabled. IsAutoUsersEnabled() bool + // IsEqual determines if two database resources are equivalent to one another. + IsEqual(Database) bool } // NewDatabaseV3 creates a new database resource. @@ -429,6 +437,11 @@ func (g GCPCloudSQL) IsEmpty() bool { return deriveTeleportEqualGCPCloudSQL(&g, &GCPCloudSQL{}) } +// IsEmpty returns true if AlloyDB options are empty. +func (a AlloyDB) IsEmpty() bool { + return deriveTeleportEqualAlloyDB(&a, &AlloyDB{}) +} + // GetGCP returns GCP information for Cloud SQL databases. func (d *DatabaseV3) GetGCP() GCPCloudSQL { return d.Spec.GCP @@ -477,6 +490,11 @@ func (d *DatabaseV3) IsCloudSQL() bool { return d.GetType() == DatabaseTypeCloudSQL } +// IsAlloyDB returns true if this database is a GCP-hosted AlloyDB instance. +func (d *DatabaseV3) IsAlloyDB() bool { + return d.GetType() == DatabaseTypeAlloyDB +} + // IsAzure returns true if this is Azure hosted database. func (d *DatabaseV3) IsAzure() bool { return d.GetType() == DatabaseTypeAzure @@ -487,6 +505,11 @@ func (d *DatabaseV3) IsElastiCache() bool { return d.GetType() == DatabaseTypeElastiCache } +// IsElastiCacheServerless returns true if this is an AWS ElastiCache database. +func (d *DatabaseV3) IsElastiCacheServerless() bool { + return d.GetType() == DatabaseTypeElastiCacheServerless +} + // IsMemoryDB returns true if this is an AWS MemoryDB database. func (d *DatabaseV3) IsMemoryDB() bool { return d.GetType() == DatabaseTypeMemoryDB @@ -545,16 +568,50 @@ func (d *DatabaseV3) IsGCPHosted() bool { return ok } +// GetGCPProjectID returns Project ID for GCP databases. 
+func (d *DatabaseV3) GetGCPProjectID() (string, error) { + dbType, isGCP := d.getGCPType() + + if !isGCP { + return "", trace.NotFound("%v is not a GCP database; db type: %v", d.GetName(), dbType) + } + + switch dbType { + case DatabaseTypeAlloyDB: + info, err := gcputils.ParseAlloyDBConnectionURI(d.GetURI()) + if err != nil { + return "", trace.Wrap(err) + } + return info.ProjectID, nil + default: + return d.GetGCP().ProjectID, nil + } +} + // getAWSType returns the gcp hosted database type. func (d *DatabaseV3) getGCPType() (string, bool) { if d.Spec.Protocol == DatabaseTypeSpanner { return DatabaseTypeSpanner, true } + + if gcputils.IsAlloyDBConnectionURI(d.Spec.URI) { + return DatabaseTypeAlloyDB, true + } + gcp := d.GetGCP() - if !gcp.IsEmpty() { - return DatabaseTypeCloudSQL, true + if gcp.IsEmpty() { + return "", false } - return "", false + + // This check catches the case when the URI is not prefixed with `alloydb://`, and yet spec.gcp.alloydb is not empty. + // Most likely this is due to a typo in the URI or misconfiguration (copy-pasting the URI without adding the prefix). + // + // Marking this as an AlloyDB instance prevents CloudSQL-specific logic from firing, + // and also makes it eligible for AlloyDB-specific validation, which will catch the URI problem. + if !gcp.AlloyDB.IsEmpty() { + return DatabaseTypeAlloyDB, true + } + return DatabaseTypeCloudSQL, true } // getAWSType returns the database type.
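`GetGCPProjectID` relies on `gcputils.ParseAlloyDBConnectionURI` to pull the project ID out of a URI shaped like `alloydb://projects/<p>/locations/<l>/clusters/<c>/instances/<i>`. A self-contained sketch of that decomposition, consistent with the error messages asserted by the tests later in this diff (the real parser lives in the `gcputils` package, so this is illustrative only):

```go
package main

import (
	"fmt"
	"strings"
)

// alloyDBInfo mirrors the fields this diff consumes from the parsed URI.
type alloyDBInfo struct {
	ProjectID, Location, ClusterID, InstanceID string
}

// parseAlloyDBURI decomposes
// alloydb://projects/<p>/locations/<l>/clusters/<c>/instances/<i>
// into its four identifiers (sketch of the gcputils behavior).
func parseAlloyDBURI(uri string) (alloyDBInfo, error) {
	const scheme = "alloydb://"
	if !strings.HasPrefix(uri, scheme) {
		return alloyDBInfo{}, fmt.Errorf("invalid connection URI %q: should start with %s", uri, scheme)
	}
	parts := strings.Split(strings.TrimPrefix(uri, scheme), "/")
	if len(parts) != 8 || parts[0] != "projects" || parts[2] != "locations" ||
		parts[4] != "clusters" || parts[6] != "instances" {
		return alloyDBInfo{}, fmt.Errorf("invalid connection URI %q: wrong number of parts", uri)
	}
	return alloyDBInfo{ProjectID: parts[1], Location: parts[3], ClusterID: parts[5], InstanceID: parts[7]}, nil
}

func main() {
	info, err := parseAlloyDBURI("alloydb://projects/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/my-instance")
	fmt.Println(info.ProjectID, info.InstanceID, err) // my-project-123456 my-instance <nil>
}
```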
@@ -583,6 +640,9 @@ func (d *DatabaseV3) getAWSType() (string, bool) { if aws.ElastiCache.ReplicationGroupID != "" { return DatabaseTypeElastiCache, true } + if aws.ElastiCacheServerless.CacheName != "" { + return DatabaseTypeElastiCacheServerless, true + } if aws.MemoryDB.ClusterName != "" { return DatabaseTypeMemoryDB, true } @@ -728,6 +788,10 @@ func (d *DatabaseV3) CheckAndSetDefaults() error { return trace.BadParameter("GCP Spanner database %q missing GCP instance ID", d.GetName()) } + case d.IsAlloyDB(): + if err := d.handleAlloyDBConfig(); err != nil { + return trace.Wrap(err) + } case d.IsDynamoDB(): if err := d.handleDynamoDBConfig(); err != nil { return trace.Wrap(err) @@ -804,6 +868,21 @@ func (d *DatabaseV3) CheckAndSetDefaults() error { } d.Spec.AWS.ElastiCache.TransitEncryptionEnabled = endpointInfo.TransitEncryptionEnabled d.Spec.AWS.ElastiCache.EndpointType = endpointInfo.EndpointType + case awsutils.IsElastiCacheServerlessEndpoint(d.Spec.URI): + info, err := awsutils.ParseElastiCacheServerlessEndpoint(d.Spec.URI) + if err != nil { + slog.WarnContext(context.Background(), "Failed to parse ElastiCache Serverless endpoint", + "uri", d.Spec.URI, + "error", err, + ) + break + } + if d.Spec.AWS.ElastiCacheServerless.CacheName == "" { + d.Spec.AWS.ElastiCacheServerless.CacheName = info.ID + } + if d.Spec.AWS.Region == "" { + d.Spec.AWS.Region = info.Region + } case awsutils.IsMemoryDBEndpoint(d.Spec.URI): endpointInfo, err := awsutils.ParseMemoryDBEndpoint(d.Spec.URI) if err != nil { @@ -965,6 +1044,36 @@ func (d *DatabaseV3) IsEqual(i Database) bool { return false } +// handleAlloyDBConfig validates AlloyDB configuration. +func (d *DatabaseV3) handleAlloyDBConfig() error { + // default to private endpoint type, but only if override isn't set. 
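The `CheckAndSetDefaults` branch above fills in the cache name and region from an ElastiCache Serverless endpoint. Going by the test fixture later in this diff (`example-abc123.serverless.cac1.cache.amazonaws.com` yields name `example` and region `ca-central-1`), the host carries a suffixed cache name and a short region code. A sketch under those assumptions; the short-code table and helper name are illustrative, and the full mapping lives in `awsutils`:

```go
package main

import (
	"fmt"
	"strings"
)

// A few AWS region short codes as they appear in ElastiCache hostnames.
// Only cac1 -> ca-central-1 is confirmed by this diff's tests; the rest
// are assumptions, and awsutils covers the full set.
var regionShortCodes = map[string]string{
	"use1": "us-east-1",
	"usw2": "us-west-2",
	"cac1": "ca-central-1",
}

// parseServerlessEndpoint expects a host shaped like
// <name>-<suffix>.serverless.<shortRegion>.cache.amazonaws.com
// and extracts the cache name and region (illustrative sketch).
func parseServerlessEndpoint(hostport string) (name, region string, err error) {
	host := strings.TrimSuffix(hostport, ":6379")
	labels := strings.Split(host, ".")
	if len(labels) != 6 || labels[1] != "serverless" || strings.Join(labels[3:], ".") != "cache.amazonaws.com" {
		return "", "", fmt.Errorf("not an ElastiCache Serverless endpoint: %q", hostport)
	}
	region, ok := regionShortCodes[labels[2]]
	if !ok {
		return "", "", fmt.Errorf("unknown region code %q", labels[2])
	}
	// The first label carries the cache name plus a generated suffix.
	if i := strings.LastIndex(labels[0], "-"); i > 0 {
		name = labels[0][:i]
	} else {
		name = labels[0]
	}
	return name, region, nil
}

func main() {
	name, region, err := parseServerlessEndpoint("example-abc123.serverless.cac1.cache.amazonaws.com:6379")
	fmt.Println(name, region, err) // example ca-central-1 <nil>
}
```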
+ if d.Spec.GCP.AlloyDB.EndpointType == "" && d.Spec.GCP.AlloyDB.EndpointOverride == "" { + d.Spec.GCP.AlloyDB.EndpointType = string(gcputils.AlloyDBEndpointTypePrivate) + } + + err := gcputils.ValidateAlloyDBEndpointType(d.Spec.GCP.AlloyDB.EndpointType) + if err != nil { + return trace.Wrap(err) + } + + info, err := gcputils.ParseAlloyDBConnectionURI(d.Spec.URI) + if err != nil { + return trace.Wrap(err, "failed to parse AlloyDB connection URI") + } + + // ensure the GCP fields are empty: we want to avoid redundant information in the database spec. + if d.Spec.GCP.InstanceID != "" { + return trace.BadParameter("database %q the gcp.instance_id field should be empty but is %q instead; the GCP instance ID configured through URI %q will be automatically used instead", + d.GetName(), d.Spec.GCP.InstanceID, info.InstanceID) + } + if d.Spec.GCP.ProjectID != "" { + return trace.BadParameter("database %q the gcp.project_id field should be empty but is %q instead; the GCP project ID configured through URI %q will be automatically used instead", + d.GetName(), d.Spec.GCP.ProjectID, info.ProjectID) + } + + return nil +} + // handleDynamoDBConfig handles DynamoDB configuration checking. func (d *DatabaseV3) handleDynamoDBConfig() error { if d.Spec.AWS.AccountID == "" { @@ -1109,6 +1218,9 @@ func (d *DatabaseV3) GetEndpointType() string { switch d.GetType() { case DatabaseTypeElastiCache: return d.GetAWS().ElastiCache.EndpointType + case DatabaseTypeElastiCacheServerless: + // ElastiCache Serverless endpoints are always cluster mode. + return awsutils.ElastiCacheConfigurationEndpoint case DatabaseTypeMemoryDB: return d.GetAWS().MemoryDB.EndpointType case DatabaseTypeOpenSearch: @@ -1167,12 +1279,16 @@ const ( DatabaseTypeRedshiftServerless = "redshift-serverless" // DatabaseTypeCloudSQL is GCP-hosted Cloud SQL database. DatabaseTypeCloudSQL = "gcp" + // DatabaseTypeAlloyDB is GCP-hosted AlloyDB database. 
+ DatabaseTypeAlloyDB = "alloydb" // DatabaseTypeSpanner is a GCP Spanner instance. DatabaseTypeSpanner = "spanner" // DatabaseTypeAzure is Azure-hosted database. DatabaseTypeAzure = "azure" // DatabaseTypeElastiCache is AWS-hosted ElastiCache database. DatabaseTypeElastiCache = "elasticache" + // DatabaseTypeElastiCacheServerless is AWS-hosted ElastiCache serverless database. + DatabaseTypeElastiCacheServerless = "elasticache-serverless" // DatabaseTypeMemoryDB is AWS-hosted MemoryDB database. DatabaseTypeMemoryDB = "memorydb" // DatabaseTypeAWSKeyspaces is AWS-hosted Keyspaces database (Cassandra). diff --git a/api/types/database_test.go b/api/types/database_test.go index 29437b970722a..6c15fa66ade3c 100644 --- a/api/types/database_test.go +++ b/api/types/database_test.go @@ -247,6 +247,53 @@ func TestDatabaseElastiCacheEndpoint(t *testing.T) { }) } +func TestDatabaseElastiCacheServerlessEndpoint(t *testing.T) { + t.Run("valid URI", func(t *testing.T) { + database, err := NewDatabaseV3(Metadata{ + Name: "example", + }, DatabaseSpecV3{ + Protocol: "redis", + URI: "example-abc123.serverless.cac1.cache.amazonaws.com:6379", + }) + + require.NoError(t, err) + require.Equal(t, AWS{ + Region: "ca-central-1", + ElastiCacheServerless: ElastiCacheServerless{ + CacheName: "example", + }, + }, database.GetAWS()) + require.True(t, database.IsElastiCacheServerless()) + require.True(t, database.IsAWSHosted()) + require.True(t, database.IsCloudHosted()) + }) + + t.Run("invalid URI", func(t *testing.T) { + database, err := NewDatabaseV3(Metadata{ + Name: "example", + }, DatabaseSpecV3{ + Protocol: "redis", + URI: "some.serverless.cache.amazonaws.com:6379", + AWS: AWS{ + Region: "us-east-5", + ElastiCacheServerless: ElastiCacheServerless{ + CacheName: "example", + }, + }, + }) + + // A warning is logged, no error is returned, and AWS metadata is not + // updated. 
+ require.NoError(t, err) + require.Equal(t, AWS{ + Region: "us-east-5", + ElastiCacheServerless: ElastiCacheServerless{ + CacheName: "example", + }, + }, database.GetAWS()) + }) +} + func TestDatabaseMemoryDBEndpoint(t *testing.T) { t.Run("valid URI", func(t *testing.T) { database, err := NewDatabaseV3(Metadata{ @@ -1121,6 +1168,231 @@ func TestDatabaseGCPCloudSQL(t *testing.T) { } } +func TestDatabaseAlloyDB(t *testing.T) { + t.Parallel() + + for _, test := range []struct { + inputName string + inputSpec DatabaseSpecV3 + wantSpec DatabaseSpecV3 + wantErr string + }{ + { + inputName: "URI-only configuration", + inputSpec: DatabaseSpecV3{ + Protocol: DatabaseProtocolPostgreSQL, + URI: "alloydb://projects/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/my-instance", + }, + wantSpec: DatabaseSpecV3{ + Protocol: DatabaseProtocolPostgreSQL, + URI: "alloydb://projects/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/my-instance", + GCP: GCPCloudSQL{ + AlloyDB: AlloyDB{ + EndpointType: "private", + }, + }, + }, + }, + { + inputName: "custom endpoint type", + inputSpec: DatabaseSpecV3{ + Protocol: DatabaseProtocolPostgreSQL, + URI: "alloydb://projects/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/my-instance", + GCP: GCPCloudSQL{ + AlloyDB: AlloyDB{ + EndpointType: "public", + }, + }, + }, + wantSpec: DatabaseSpecV3{ + Protocol: DatabaseProtocolPostgreSQL, + URI: "alloydb://projects/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/my-instance", + GCP: GCPCloudSQL{ + AlloyDB: AlloyDB{ + EndpointType: "public", + }, + }, + }, + }, + { + inputName: "invalid endpoint type", + inputSpec: DatabaseSpecV3{ + Protocol: DatabaseProtocolPostgreSQL, + URI: "alloydb://projects/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/my-instance", + GCP: GCPCloudSQL{ + AlloyDB: AlloyDB{ + EndpointType: "does-not-exist", + }, + }, + }, + wantErr: "invalid alloy db endpoint type: 
does-not-exist, expected one of [public private psc]", + }, + { + inputName: "endpoint override to IP address", + inputSpec: DatabaseSpecV3{ + Protocol: DatabaseProtocolPostgreSQL, + URI: "alloydb://projects/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/my-instance", + GCP: GCPCloudSQL{ + AlloyDB: AlloyDB{ + EndpointType: "", + EndpointOverride: "11.22.33.44", + }, + }, + }, + wantSpec: DatabaseSpecV3{ + Protocol: DatabaseProtocolPostgreSQL, + URI: "alloydb://projects/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/my-instance", + GCP: GCPCloudSQL{ + AlloyDB: AlloyDB{ + EndpointType: "", + EndpointOverride: "11.22.33.44", + }, + }, + }, + }, + { + inputName: "endpoint override, GCP fields set and matching", + inputSpec: DatabaseSpecV3{ + Protocol: DatabaseProtocolPostgreSQL, + URI: "alloydb://projects/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/my-instance", + GCP: GCPCloudSQL{ + AlloyDB: AlloyDB{ + EndpointType: "private", + EndpointOverride: "11.22.33.44", + }, + }, + }, + wantSpec: DatabaseSpecV3{ + Protocol: DatabaseProtocolPostgreSQL, + URI: "alloydb://projects/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/my-instance", + GCP: GCPCloudSQL{ + AlloyDB: AlloyDB{ + EndpointType: "private", + EndpointOverride: "11.22.33.44", + }, + }, + }, + }, + { + inputName: "unwanted gcp.project_id", + inputSpec: DatabaseSpecV3{ + Protocol: DatabaseProtocolPostgreSQL, + URI: "alloydb://projects/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/my-instance", + GCP: GCPCloudSQL{ + ProjectID: "my-project-123456", + }, + }, + wantErr: `database "mydb" the gcp.project_id field should be empty but is "my-project-123456" instead; the GCP project ID configured through URI "my-project-123456" will be automatically used instead`, + }, + { + inputName: "unwanted gcp.instance_id", + inputSpec: DatabaseSpecV3{ + Protocol: DatabaseProtocolPostgreSQL, + URI: 
"alloydb://projects/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/my-instance", + GCP: GCPCloudSQL{ + InstanceID: "my-instance", + }, + }, + wantErr: `database "mydb" the gcp.instance_id field should be empty but is "my-instance" instead; the GCP instance ID configured through URI "my-instance" will be automatically used instead`, + }, + { + inputName: "wrong URI scheme", + inputSpec: DatabaseSpecV3{ + Protocol: DatabaseProtocolPostgreSQL, + URI: "dummy://foo", + GCP: GCPCloudSQL{ + AlloyDB: AlloyDB{ + EndpointType: "private", + EndpointOverride: "11.22.33.44", + }, + }, + }, + wantErr: `invalid connection URI "dummy://foo": should start with alloydb://`, + }, + { + // just a single URI test for completeness; full coverage through ParseAlloyDBConnectionURI. + inputName: "incomplete URI", + inputSpec: DatabaseSpecV3{ + Protocol: DatabaseProtocolPostgreSQL, + URI: "alloydb://projects/my-project-123456/locations/", + GCP: GCPCloudSQL{ + ProjectID: "my-project-123456", + InstanceID: "my-instance", + AlloyDB: AlloyDB{ + EndpointType: "private", + EndpointOverride: "11.22.33.44", + }, + }, + }, + wantErr: `invalid connection URI "alloydb://projects/my-project-123456/locations/": wrong number of parts`, + }, + } { + t.Run(test.inputName, func(t *testing.T) { + db, err := NewDatabaseV3(Metadata{Name: "mydb"}, test.inputSpec) + if test.wantErr != "" { + require.ErrorContains(t, err, test.wantErr) + } else { + require.NoError(t, err) + require.Equal(t, test.wantSpec, db.Spec) + require.True(t, db.IsAlloyDB()) + } + }) + } +} + +func TestGetGCPProjectID(t *testing.T) { + t.Parallel() + + for _, test := range []struct { + name string + spec DatabaseSpecV3 + wantResult string + wantErr string + }{ + { + name: "alloydb", + spec: DatabaseSpecV3{ + Protocol: DatabaseProtocolPostgreSQL, + URI: "alloydb://projects/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/my-instance", + }, + wantResult: "my-project-123456", + }, + { + name: 
"cloudsql", + spec: DatabaseSpecV3{ + Protocol: DatabaseProtocolPostgreSQL, + URI: "localhost:5432", + GCP: GCPCloudSQL{ + ProjectID: "my-project-123456", + InstanceID: "instance-123", + }, + }, + wantResult: "my-project-123456", + }, + { + name: "nongcp", + spec: DatabaseSpecV3{ + Protocol: DatabaseProtocolPostgreSQL, + URI: "localhost:12345", + }, + wantErr: "mydb is not a GCP database", + }, + } { + t.Run(test.name, func(t *testing.T) { + db, err := NewDatabaseV3(Metadata{Name: "mydb"}, test.spec) + require.NoError(t, err) + + result, err := db.GetGCPProjectID() + if test.wantErr != "" { + require.ErrorContains(t, err, test.wantErr) + } else { + require.NoError(t, err) + require.Equal(t, test.wantResult, result) + } + }) + } +} + func TestGetAdminUser(t *testing.T) { t.Parallel() diff --git a/api/types/databaseserver.go b/api/types/databaseserver.go index ad65057448e58..9f91553018d6e 100644 --- a/api/types/databaseserver.go +++ b/api/types/databaseserver.go @@ -57,10 +57,22 @@ type DatabaseServer interface { SetDatabase(Database) error // ProxiedService provides common methods for a proxied service. ProxiedService + // GetRelayGroup returns the name of the Relay group that the database + // server is connected to. + GetRelayGroup() string + // GetRelayIDs returns the list of Relay host IDs that the database server + // is connected to. + GetRelayIDs() []string // GetTargetHealth returns the database server's target health. GetTargetHealth() TargetHealth - // SetTargetHealth sets the database server's target health status. + // SetTargetHealth sets the database server's target health. SetTargetHealth(h TargetHealth) + // GetTargetHealthStatus returns target health status + GetTargetHealthStatus() TargetHealthStatus + // SetTargetHealthStatus sets target health status + SetTargetHealthStatus(status TargetHealthStatus) + // GetScope returns the scope this server belongs to. + GetScope() string } // NewDatabaseServerV3 creates a new database server instance. 
@@ -188,6 +200,22 @@ func (s *DatabaseServerV3) SetProxyIDs(proxyIDs []string) { s.Spec.ProxyIDs = proxyIDs } +// GetRelayGroup implements [DatabaseServer]. +func (s *DatabaseServerV3) GetRelayGroup() string { + if s == nil { + return "" + } + return s.Spec.RelayGroup +} + +// GetRelayIDs implements [DatabaseServer]. +func (s *DatabaseServerV3) GetRelayIDs() []string { + if s == nil { + return nil + } + return s.Spec.RelayIds +} + // String returns the server string representation. func (s *DatabaseServerV3) String() string { return fmt.Sprintf("DatabaseServer(Name=%v, Version=%v, Hostname=%v, HostID=%v, Database=%v)", @@ -278,6 +306,11 @@ func (s *DatabaseServerV3) Copy() DatabaseServer { return utils.CloneProtoMsg(s) } +// GetScope returns the scope this server belongs to. +func (s *DatabaseServerV3) GetScope() string { + return s.Scope +} + // CloneResource returns a copy of this database server object. func (s *DatabaseServerV3) CloneResource() ResourceWithLabels { return s.Copy() @@ -302,6 +335,22 @@ func (s *DatabaseServerV3) SetTargetHealth(h TargetHealth) { s.Status.TargetHealth = &h } +// GetTargetHealthStatus returns target health status +func (s *DatabaseServerV3) GetTargetHealthStatus() TargetHealthStatus { + if s.Status.TargetHealth == nil { + return "" + } + return TargetHealthStatus(s.Status.TargetHealth.Status) +} + +// SetTargetHealthStatus sets target health status +func (s *DatabaseServerV3) SetTargetHealthStatus(status TargetHealthStatus) { + if s.Status.TargetHealth == nil { + s.Status.TargetHealth = &TargetHealth{} + } + s.Status.TargetHealth.Status = string(status) +} + // DatabaseServers represents a list of database servers. 
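`SetTargetHealthStatus` above lazily allocates `Status.TargetHealth` so callers never nil-check before writing, and `GetTargetHealthStatus` tolerates the pointer being unset. A stripped-down sketch of that accessor pair, with the types simplified for illustration:

```go
package main

import "fmt"

// targetHealth stands in for the TargetHealth status message.
type targetHealth struct{ Status string }

// server stands in for DatabaseServerV3's status field.
type server struct{ health *targetHealth }

// setStatus lazily allocates the status struct before writing,
// mirroring SetTargetHealthStatus.
func (s *server) setStatus(status string) {
	if s.health == nil {
		s.health = &targetHealth{}
	}
	s.health.Status = status
}

// getStatus tolerates a nil status struct and returns the zero
// value, mirroring GetTargetHealthStatus.
func (s *server) getStatus() string {
	if s.health == nil {
		return ""
	}
	return s.health.Status
}

func main() {
	var s server
	fmt.Println(s.getStatus() == "") // true: nil-safe read
	s.setStatus("healthy")
	fmt.Println(s.getStatus()) // healthy
}
```

This pattern keeps the zero value of the containing struct usable, which matters for protobuf-generated types whose optional message fields are pointers.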
type DatabaseServers []DatabaseServer diff --git a/api/types/derived.gen.go b/api/types/derived.gen.go index 932217f61bb8d..76bfffb2300d4 100644 --- a/api/types/derived.gen.go +++ b/api/types/derived.gen.go @@ -45,7 +45,8 @@ func deriveTeleportEqualAWS(this, that *AWS) bool { deriveTeleportEqual_7(&this.OpenSearch, &that.OpenSearch) && this.IAMPolicyStatus == that.IAMPolicyStatus && deriveTeleportEqual_8(this.SessionTags, that.SessionTags) && - deriveTeleportEqual_9(&this.DocumentDB, &that.DocumentDB) + deriveTeleportEqual_9(&this.DocumentDB, &that.DocumentDB) && + deriveTeleportEqual_10(&this.ElastiCacheServerless, &that.ElastiCacheServerless) } // deriveTeleportEqualGCPCloudSQL returns whether this and that are equal. @@ -53,7 +54,16 @@ func deriveTeleportEqualGCPCloudSQL(this, that *GCPCloudSQL) bool { return (this == nil && that == nil) || this != nil && that != nil && this.ProjectID == that.ProjectID && - this.InstanceID == that.InstanceID + this.InstanceID == that.InstanceID && + deriveTeleportEqualAlloyDB(&this.AlloyDB, &that.AlloyDB) +} + +// deriveTeleportEqualAlloyDB returns whether this and that are equal. +func deriveTeleportEqualAlloyDB(this, that *AlloyDB) bool { + return (this == nil && that == nil) || + this != nil && that != nil && + this.EndpointType == that.EndpointType && + this.EndpointOverride == that.EndpointOverride } // deriveTeleportEqualAzure returns whether this and that are equal. 
@@ -62,7 +72,7 @@ func deriveTeleportEqualAzure(this, that *Azure) bool { this != nil && that != nil && this.Name == that.Name && this.ResourceID == that.ResourceID && - deriveTeleportEqual_10(&this.Redis, &that.Redis) && + deriveTeleportEqual_11(&this.Redis, &that.Redis) && this.IsFlexiServer == that.IsFlexiServer } @@ -74,7 +84,7 @@ func deriveTeleportEqualDatabaseV3(this, that *DatabaseV3) bool { this.SubKind == that.SubKind && this.Version == that.Version && deriveTeleportEqualMetadata(&this.Metadata, &that.Metadata) && - deriveTeleportEqual_11(&this.Spec, &that.Spec) + deriveTeleportEqual_12(&this.Spec, &that.Spec) } // deriveTeleportEqualDynamicWindowsDesktopV1 returns whether this and that are equal. @@ -82,7 +92,7 @@ func deriveTeleportEqualDynamicWindowsDesktopV1(this, that *DynamicWindowsDeskto return (this == nil && that == nil) || this != nil && that != nil && deriveTeleportEqualResourceHeader(&this.ResourceHeader, &that.ResourceHeader) && - deriveTeleportEqual_12(&this.Spec, &that.Spec) + deriveTeleportEqual_13(&this.Spec, &that.Spec) } // deriveTeleportEqualWindowsDesktopV3 returns whether this and that are equal. @@ -90,7 +100,7 @@ func deriveTeleportEqualWindowsDesktopV3(this, that *WindowsDesktopV3) bool { return (this == nil && that == nil) || this != nil && that != nil && deriveTeleportEqualResourceHeader(&this.ResourceHeader, &that.ResourceHeader) && - deriveTeleportEqual_13(&this.Spec, &that.Spec) + deriveTeleportEqual_14(&this.Spec, &that.Spec) } // deriveTeleportEqualKubeAzure returns whether this and that are equal. @@ -129,7 +139,7 @@ func deriveTeleportEqualKubernetesClusterV3(this, that *KubernetesClusterV3) boo this.SubKind == that.SubKind && this.Version == that.Version && deriveTeleportEqualMetadata(&this.Metadata, &that.Metadata) && - deriveTeleportEqual_14(&this.Spec, &that.Spec) + deriveTeleportEqual_15(&this.Spec, &that.Spec) } // deriveTeleportEqualKubernetesServerV3 returns whether this and that are equal. 
@@ -140,7 +150,8 @@ func deriveTeleportEqualKubernetesServerV3(this, that *KubernetesServerV3) bool this.SubKind == that.SubKind && this.Version == that.Version && deriveTeleportEqualMetadata(&this.Metadata, &that.Metadata) && - deriveTeleportEqual_15(&this.Spec, &that.Spec) + deriveTeleportEqual_16(&this.Spec, &that.Spec) && + this.Scope == that.Scope } // deriveTeleportEqualOktaAssignmentV1 returns whether this and that are equal. @@ -148,7 +159,7 @@ func deriveTeleportEqualOktaAssignmentV1(this, that *OktaAssignmentV1) bool { return (this == nil && that == nil) || this != nil && that != nil && deriveTeleportEqualResourceHeader(&this.ResourceHeader, &that.ResourceHeader) && - deriveTeleportEqual_16(&this.Spec, &that.Spec) + deriveTeleportEqual_17(&this.Spec, &that.Spec) } // deriveTeleportEqualResourceHeader returns whether this and that are equal. @@ -177,7 +188,7 @@ func deriveTeleportEqualUserGroupV1(this, that *UserGroupV1) bool { return (this == nil && that == nil) || this != nil && that != nil && deriveTeleportEqualResourceHeader(&this.ResourceHeader, &that.ResourceHeader) && - deriveTeleportEqual_17(&this.Spec, &that.Spec) + deriveTeleportEqual_18(&this.Spec, &that.Spec) } // deriveTeleportEqual returns whether this and that are equal. 
@@ -186,18 +197,19 @@ func deriveTeleportEqual(this, that *AppSpecV3) bool { this != nil && that != nil && this.URI == that.URI && this.PublicAddr == that.PublicAddr && - deriveTeleportEqual_18(this.DynamicLabels, that.DynamicLabels) && + deriveTeleportEqual_19(this.DynamicLabels, that.DynamicLabels) && this.InsecureSkipVerify == that.InsecureSkipVerify && - deriveTeleportEqual_19(this.Rewrite, that.Rewrite) && - deriveTeleportEqual_20(this.AWS, that.AWS) && + deriveTeleportEqual_20(this.Rewrite, that.Rewrite) && + deriveTeleportEqual_21(this.AWS, that.AWS) && this.Cloud == that.Cloud && - deriveTeleportEqual_21(this.UserGroups, that.UserGroups) && + deriveTeleportEqual_22(this.UserGroups, that.UserGroups) && this.Integration == that.Integration && - deriveTeleportEqual_21(this.RequiredAppNames, that.RequiredAppNames) && - deriveTeleportEqual_22(this.CORS, that.CORS) && - deriveTeleportEqual_23(this.IdentityCenter, that.IdentityCenter) && - deriveTeleportEqual_24(this.TCPPorts, that.TCPPorts) && - this.UseAnyProxyPublicAddr == that.UseAnyProxyPublicAddr + deriveTeleportEqual_22(this.RequiredAppNames, that.RequiredAppNames) && + deriveTeleportEqual_23(this.CORS, that.CORS) && + deriveTeleportEqual_24(this.IdentityCenter, that.IdentityCenter) && + deriveTeleportEqual_25(this.TCPPorts, that.TCPPorts) && + this.UseAnyProxyPublicAddr == that.UseAnyProxyPublicAddr && + deriveTeleportEqual_26(this.MCP, that.MCP) } // deriveTeleportEqual_ returns whether this and that are equal. 
@@ -215,9 +227,9 @@ func deriveTeleportEqual_1(this, that *RDS) bool { this.ClusterID == that.ClusterID && this.ResourceID == that.ResourceID && this.IAMAuth == that.IAMAuth && - deriveTeleportEqual_21(this.Subnets, that.Subnets) && + deriveTeleportEqual_22(this.Subnets, that.Subnets) && this.VPCID == that.VPCID && - deriveTeleportEqual_21(this.SecurityGroups, that.SecurityGroups) + deriveTeleportEqual_22(this.SecurityGroups, that.SecurityGroups) } // deriveTeleportEqual_2 returns whether this and that are equal. @@ -225,7 +237,7 @@ func deriveTeleportEqual_2(this, that *ElastiCache) bool { return (this == nil && that == nil) || this != nil && that != nil && this.ReplicationGroupID == that.ReplicationGroupID && - deriveTeleportEqual_21(this.UserGroupIDs, that.UserGroupIDs) && + deriveTeleportEqual_22(this.UserGroupIDs, that.UserGroupIDs) && this.TransitEncryptionEnabled == that.TransitEncryptionEnabled && this.EndpointType == that.EndpointType } @@ -305,96 +317,105 @@ func deriveTeleportEqual_9(this, that *DocumentDB) bool { } // deriveTeleportEqual_10 returns whether this and that are equal. -func deriveTeleportEqual_10(this, that *AzureRedis) bool { +func deriveTeleportEqual_10(this, that *ElastiCacheServerless) bool { return (this == nil && that == nil) || this != nil && that != nil && - this.ClusteringPolicy == that.ClusteringPolicy + this.CacheName == that.CacheName } // deriveTeleportEqual_11 returns whether this and that are equal. -func deriveTeleportEqual_11(this, that *DatabaseSpecV3) bool { +func deriveTeleportEqual_11(this, that *AzureRedis) bool { + return (this == nil && that == nil) || + this != nil && that != nil && + this.ClusteringPolicy == that.ClusteringPolicy +} + +// deriveTeleportEqual_12 returns whether this and that are equal. 
+func deriveTeleportEqual_12(this, that *DatabaseSpecV3) bool { return (this == nil && that == nil) || this != nil && that != nil && this.Protocol == that.Protocol && this.URI == that.URI && this.CACert == that.CACert && - deriveTeleportEqual_18(this.DynamicLabels, that.DynamicLabels) && + deriveTeleportEqual_19(this.DynamicLabels, that.DynamicLabels) && deriveTeleportEqualAWS(&this.AWS, &that.AWS) && deriveTeleportEqualGCPCloudSQL(&this.GCP, &that.GCP) && deriveTeleportEqualAzure(&this.Azure, &that.Azure) && - deriveTeleportEqual_25(&this.TLS, &that.TLS) && - deriveTeleportEqual_26(&this.AD, &that.AD) && - deriveTeleportEqual_27(&this.MySQL, &that.MySQL) && - deriveTeleportEqual_28(this.AdminUser, that.AdminUser) && - deriveTeleportEqual_29(&this.MongoAtlas, &that.MongoAtlas) && - deriveTeleportEqual_30(&this.Oracle, &that.Oracle) + deriveTeleportEqual_27(&this.TLS, &that.TLS) && + deriveTeleportEqual_28(&this.AD, &that.AD) && + deriveTeleportEqual_29(&this.MySQL, &that.MySQL) && + deriveTeleportEqual_30(this.AdminUser, that.AdminUser) && + deriveTeleportEqual_31(&this.MongoAtlas, &that.MongoAtlas) && + deriveTeleportEqual_32(&this.Oracle, &that.Oracle) } -// deriveTeleportEqual_12 returns whether this and that are equal. -func deriveTeleportEqual_12(this, that *DynamicWindowsDesktopSpecV1) bool { +// deriveTeleportEqual_13 returns whether this and that are equal. +func deriveTeleportEqual_13(this, that *DynamicWindowsDesktopSpecV1) bool { return (this == nil && that == nil) || this != nil && that != nil && this.Addr == that.Addr && this.Domain == that.Domain && this.NonAD == that.NonAD && - deriveTeleportEqual_31(this.ScreenSize, that.ScreenSize) + deriveTeleportEqual_33(this.ScreenSize, that.ScreenSize) } -// deriveTeleportEqual_13 returns whether this and that are equal. -func deriveTeleportEqual_13(this, that *WindowsDesktopSpecV3) bool { +// deriveTeleportEqual_14 returns whether this and that are equal. 
+func deriveTeleportEqual_14(this, that *WindowsDesktopSpecV3) bool { return (this == nil && that == nil) || this != nil && that != nil && this.Addr == that.Addr && this.Domain == that.Domain && this.HostID == that.HostID && this.NonAD == that.NonAD && - deriveTeleportEqual_31(this.ScreenSize, that.ScreenSize) + deriveTeleportEqual_33(this.ScreenSize, that.ScreenSize) } -// deriveTeleportEqual_14 returns whether this and that are equal. -func deriveTeleportEqual_14(this, that *KubernetesClusterSpecV3) bool { +// deriveTeleportEqual_15 returns whether this and that are equal. +func deriveTeleportEqual_15(this, that *KubernetesClusterSpecV3) bool { return (this == nil && that == nil) || this != nil && that != nil && - deriveTeleportEqual_18(this.DynamicLabels, that.DynamicLabels) && + deriveTeleportEqual_19(this.DynamicLabels, that.DynamicLabels) && bytes.Equal(this.Kubeconfig, that.Kubeconfig) && deriveTeleportEqualKubeAzure(&this.Azure, &that.Azure) && deriveTeleportEqualKubeAWS(&this.AWS, &that.AWS) && deriveTeleportEqualKubeGCP(&this.GCP, &that.GCP) } -// deriveTeleportEqual_15 returns whether this and that are equal. -func deriveTeleportEqual_15(this, that *KubernetesServerSpecV3) bool { +// deriveTeleportEqual_16 returns whether this and that are equal. +func deriveTeleportEqual_16(this, that *KubernetesServerSpecV3) bool { return (this == nil && that == nil) || this != nil && that != nil && this.Version == that.Version && this.Hostname == that.Hostname && this.HostID == that.HostID && - deriveTeleportEqual_32(&this.Rotation, &that.Rotation) && + deriveTeleportEqual_34(&this.Rotation, &that.Rotation) && deriveTeleportEqualKubernetesClusterV3(this.Cluster, that.Cluster) && - deriveTeleportEqual_21(this.ProxyIDs, that.ProxyIDs) + deriveTeleportEqual_22(this.ProxyIDs, that.ProxyIDs) && + this.RelayGroup == that.RelayGroup && + deriveTeleportEqual_22(this.RelayIds, that.RelayIds) } -// deriveTeleportEqual_16 returns whether this and that are equal. 
-func deriveTeleportEqual_16(this, that *OktaAssignmentSpecV1) bool { +// deriveTeleportEqual_17 returns whether this and that are equal. +func deriveTeleportEqual_17(this, that *OktaAssignmentSpecV1) bool { return (this == nil && that == nil) || this != nil && that != nil && this.User == that.User && - deriveTeleportEqual_33(this.Targets, that.Targets) && + deriveTeleportEqual_35(this.Targets, that.Targets) && this.CleanupTime.Equal(that.CleanupTime) && this.Status == that.Status && this.LastTransition.Equal(that.LastTransition) && this.Finalized == that.Finalized } -// deriveTeleportEqual_17 returns whether this and that are equal. -func deriveTeleportEqual_17(this, that *UserGroupSpecV1) bool { +// deriveTeleportEqual_18 returns whether this and that are equal. +func deriveTeleportEqual_18(this, that *UserGroupSpecV1) bool { return (this == nil && that == nil) || this != nil && that != nil && - deriveTeleportEqual_21(this.Applications, that.Applications) + deriveTeleportEqual_22(this.Applications, that.Applications) } -// deriveTeleportEqual_18 returns whether this and that are equal. -func deriveTeleportEqual_18(this, that map[string]CommandLabelV2) bool { +// deriveTeleportEqual_19 returns whether this and that are equal. +func deriveTeleportEqual_19(this, that map[string]CommandLabelV2) bool { if this == nil || that == nil { return this == nil && that == nil } @@ -406,32 +427,32 @@ func deriveTeleportEqual_18(this, that map[string]CommandLabelV2) bool { if !ok { return false } - if !(deriveTeleportEqual_34(&v, &thatv)) { + if !(deriveTeleportEqual_36(&v, &thatv)) { return false } } return true } -// deriveTeleportEqual_19 returns whether this and that are equal. -func deriveTeleportEqual_19(this, that *Rewrite) bool { +// deriveTeleportEqual_20 returns whether this and that are equal. 
+func deriveTeleportEqual_20(this, that *Rewrite) bool { return (this == nil && that == nil) || this != nil && that != nil && - deriveTeleportEqual_21(this.Redirect, that.Redirect) && - deriveTeleportEqual_35(this.Headers, that.Headers) && + deriveTeleportEqual_22(this.Redirect, that.Redirect) && + deriveTeleportEqual_37(this.Headers, that.Headers) && this.JWTClaims == that.JWTClaims } -// deriveTeleportEqual_20 returns whether this and that are equal. -func deriveTeleportEqual_20(this, that *AppAWS) bool { +// deriveTeleportEqual_21 returns whether this and that are equal. +func deriveTeleportEqual_21(this, that *AppAWS) bool { return (this == nil && that == nil) || this != nil && that != nil && this.ExternalID == that.ExternalID && - deriveTeleportEqual_36(this.RolesAnywhereProfile, that.RolesAnywhereProfile) + deriveTeleportEqual_38(this.RolesAnywhereProfile, that.RolesAnywhereProfile) } -// deriveTeleportEqual_21 returns whether this and that are equal. -func deriveTeleportEqual_21(this, that []string) bool { +// deriveTeleportEqual_22 returns whether this and that are equal. +func deriveTeleportEqual_22(this, that []string) bool { if this == nil || that == nil { return this == nil && that == nil } @@ -446,28 +467,28 @@ func deriveTeleportEqual_21(this, that []string) bool { return true } -// deriveTeleportEqual_22 returns whether this and that are equal. -func deriveTeleportEqual_22(this, that *CORSPolicy) bool { +// deriveTeleportEqual_23 returns whether this and that are equal. 
+func deriveTeleportEqual_23(this, that *CORSPolicy) bool { return (this == nil && that == nil) || this != nil && that != nil && - deriveTeleportEqual_21(this.AllowedOrigins, that.AllowedOrigins) && - deriveTeleportEqual_21(this.AllowedMethods, that.AllowedMethods) && - deriveTeleportEqual_21(this.AllowedHeaders, that.AllowedHeaders) && + deriveTeleportEqual_22(this.AllowedOrigins, that.AllowedOrigins) && + deriveTeleportEqual_22(this.AllowedMethods, that.AllowedMethods) && + deriveTeleportEqual_22(this.AllowedHeaders, that.AllowedHeaders) && this.AllowCredentials == that.AllowCredentials && this.MaxAge == that.MaxAge && - deriveTeleportEqual_21(this.ExposedHeaders, that.ExposedHeaders) + deriveTeleportEqual_22(this.ExposedHeaders, that.ExposedHeaders) } -// deriveTeleportEqual_23 returns whether this and that are equal. -func deriveTeleportEqual_23(this, that *AppIdentityCenter) bool { +// deriveTeleportEqual_24 returns whether this and that are equal. +func deriveTeleportEqual_24(this, that *AppIdentityCenter) bool { return (this == nil && that == nil) || this != nil && that != nil && this.AccountID == that.AccountID && - deriveTeleportEqual_37(this.PermissionSets, that.PermissionSets) + deriveTeleportEqual_39(this.PermissionSets, that.PermissionSets) } -// deriveTeleportEqual_24 returns whether this and that are equal. -func deriveTeleportEqual_24(this, that []*PortRange) bool { +// deriveTeleportEqual_25 returns whether this and that are equal. +func deriveTeleportEqual_25(this, that []*PortRange) bool { if this == nil || that == nil { return this == nil && that == nil } @@ -475,15 +496,24 @@ func deriveTeleportEqual_24(this, that []*PortRange) bool { return false } for i := 0; i < len(this); i++ { - if !(deriveTeleportEqual_38(this[i], that[i])) { + if !(deriveTeleportEqual_40(this[i], that[i])) { return false } } return true } -// deriveTeleportEqual_25 returns whether this and that are equal. 
-func deriveTeleportEqual_25(this, that *DatabaseTLS) bool { +// deriveTeleportEqual_26 returns whether this and that are equal. +func deriveTeleportEqual_26(this, that *MCP) bool { + return (this == nil && that == nil) || + this != nil && that != nil && + this.Command == that.Command && + deriveTeleportEqual_22(this.Args, that.Args) && + this.RunAsHostUser == that.RunAsHostUser +} + +// deriveTeleportEqual_27 returns whether this and that are equal. +func deriveTeleportEqual_27(this, that *DatabaseTLS) bool { return (this == nil && that == nil) || this != nil && that != nil && this.Mode == that.Mode && @@ -492,8 +522,8 @@ func deriveTeleportEqual_25(this, that *DatabaseTLS) bool { this.TrustSystemCertPool == that.TrustSystemCertPool } -// deriveTeleportEqual_26 returns whether this and that are equal. -func deriveTeleportEqual_26(this, that *AD) bool { +// deriveTeleportEqual_28 returns whether this and that are equal. +func deriveTeleportEqual_28(this, that *AD) bool { return (this == nil && that == nil) || this != nil && that != nil && this.KeytabFile == that.KeytabFile && @@ -506,45 +536,47 @@ func deriveTeleportEqual_26(this, that *AD) bool { this.LDAPServiceAccountSID == that.LDAPServiceAccountSID } -// deriveTeleportEqual_27 returns whether this and that are equal. -func deriveTeleportEqual_27(this, that *MySQLOptions) bool { +// deriveTeleportEqual_29 returns whether this and that are equal. +func deriveTeleportEqual_29(this, that *MySQLOptions) bool { return (this == nil && that == nil) || this != nil && that != nil && this.ServerVersion == that.ServerVersion } -// deriveTeleportEqual_28 returns whether this and that are equal. -func deriveTeleportEqual_28(this, that *DatabaseAdminUser) bool { +// deriveTeleportEqual_30 returns whether this and that are equal. 
+func deriveTeleportEqual_30(this, that *DatabaseAdminUser) bool { return (this == nil && that == nil) || this != nil && that != nil && this.Name == that.Name && this.DefaultDatabase == that.DefaultDatabase } -// deriveTeleportEqual_29 returns whether this and that are equal. -func deriveTeleportEqual_29(this, that *MongoAtlas) bool { +// deriveTeleportEqual_31 returns whether this and that are equal. +func deriveTeleportEqual_31(this, that *MongoAtlas) bool { return (this == nil && that == nil) || this != nil && that != nil && this.Name == that.Name } -// deriveTeleportEqual_30 returns whether this and that are equal. -func deriveTeleportEqual_30(this, that *OracleOptions) bool { +// deriveTeleportEqual_32 returns whether this and that are equal. +func deriveTeleportEqual_32(this, that *OracleOptions) bool { return (this == nil && that == nil) || this != nil && that != nil && - this.AuditUser == that.AuditUser + this.AuditUser == that.AuditUser && + this.RetryCount == that.RetryCount && + this.ShuffleHostnames == that.ShuffleHostnames } -// deriveTeleportEqual_31 returns whether this and that are equal. -func deriveTeleportEqual_31(this, that *Resolution) bool { +// deriveTeleportEqual_33 returns whether this and that are equal. +func deriveTeleportEqual_33(this, that *Resolution) bool { return (this == nil && that == nil) || this != nil && that != nil && this.Width == that.Width && this.Height == that.Height } -// deriveTeleportEqual_32 returns whether this and that are equal. -func deriveTeleportEqual_32(this, that *Rotation) bool { +// deriveTeleportEqual_34 returns whether this and that are equal. 
+func deriveTeleportEqual_34(this, that *Rotation) bool { return (this == nil && that == nil) || this != nil && that != nil && this.State == that.State && @@ -554,11 +586,11 @@ func deriveTeleportEqual_32(this, that *Rotation) bool { this.Started.Equal(that.Started) && this.GracePeriod == that.GracePeriod && this.LastRotated.Equal(that.LastRotated) && - deriveTeleportEqual_39(&this.Schedule, &that.Schedule) + deriveTeleportEqual_41(&this.Schedule, &that.Schedule) } -// deriveTeleportEqual_33 returns whether this and that are equal. -func deriveTeleportEqual_33(this, that []*OktaAssignmentTargetV1) bool { +// deriveTeleportEqual_35 returns whether this and that are equal. +func deriveTeleportEqual_35(this, that []*OktaAssignmentTargetV1) bool { if this == nil || that == nil { return this == nil && that == nil } @@ -566,24 +598,24 @@ func deriveTeleportEqual_33(this, that []*OktaAssignmentTargetV1) bool { return false } for i := 0; i < len(this); i++ { - if !(deriveTeleportEqual_40(this[i], that[i])) { + if !(deriveTeleportEqual_42(this[i], that[i])) { return false } } return true } -// deriveTeleportEqual_34 returns whether this and that are equal. -func deriveTeleportEqual_34(this, that *CommandLabelV2) bool { +// deriveTeleportEqual_36 returns whether this and that are equal. +func deriveTeleportEqual_36(this, that *CommandLabelV2) bool { return (this == nil && that == nil) || this != nil && that != nil && this.Period == that.Period && - deriveTeleportEqual_21(this.Command, that.Command) && + deriveTeleportEqual_22(this.Command, that.Command) && this.Result == that.Result } -// deriveTeleportEqual_35 returns whether this and that are equal. -func deriveTeleportEqual_35(this, that []*Header) bool { +// deriveTeleportEqual_37 returns whether this and that are equal. 
+func deriveTeleportEqual_37(this, that []*Header) bool { if this == nil || that == nil { return this == nil && that == nil } @@ -591,23 +623,23 @@ func deriveTeleportEqual_35(this, that []*Header) bool { return false } for i := 0; i < len(this); i++ { - if !(deriveTeleportEqual_41(this[i], that[i])) { + if !(deriveTeleportEqual_43(this[i], that[i])) { return false } } return true } -// deriveTeleportEqual_36 returns whether this and that are equal. -func deriveTeleportEqual_36(this, that *AppAWSRolesAnywhereProfile) bool { +// deriveTeleportEqual_38 returns whether this and that are equal. +func deriveTeleportEqual_38(this, that *AppAWSRolesAnywhereProfile) bool { return (this == nil && that == nil) || this != nil && that != nil && this.ProfileARN == that.ProfileARN && this.AcceptRoleSessionName == that.AcceptRoleSessionName } -// deriveTeleportEqual_37 returns whether this and that are equal. -func deriveTeleportEqual_37(this, that []*IdentityCenterPermissionSet) bool { +// deriveTeleportEqual_39 returns whether this and that are equal. +func deriveTeleportEqual_39(this, that []*IdentityCenterPermissionSet) bool { if this == nil || that == nil { return this == nil && that == nil } @@ -615,23 +647,23 @@ func deriveTeleportEqual_37(this, that []*IdentityCenterPermissionSet) bool { return false } for i := 0; i < len(this); i++ { - if !(deriveTeleportEqual_42(this[i], that[i])) { + if !(deriveTeleportEqual_44(this[i], that[i])) { return false } } return true } -// deriveTeleportEqual_38 returns whether this and that are equal. -func deriveTeleportEqual_38(this, that *PortRange) bool { +// deriveTeleportEqual_40 returns whether this and that are equal. +func deriveTeleportEqual_40(this, that *PortRange) bool { return (this == nil && that == nil) || this != nil && that != nil && this.Port == that.Port && this.EndPort == that.EndPort } -// deriveTeleportEqual_39 returns whether this and that are equal. 
-func deriveTeleportEqual_39(this, that *RotationSchedule) bool { +// deriveTeleportEqual_41 returns whether this and that are equal. +func deriveTeleportEqual_41(this, that *RotationSchedule) bool { return (this == nil && that == nil) || this != nil && that != nil && this.UpdateClients.Equal(that.UpdateClients) && @@ -639,24 +671,24 @@ func deriveTeleportEqual_39(this, that *RotationSchedule) bool { this.Standby.Equal(that.Standby) } -// deriveTeleportEqual_40 returns whether this and that are equal. -func deriveTeleportEqual_40(this, that *OktaAssignmentTargetV1) bool { +// deriveTeleportEqual_42 returns whether this and that are equal. +func deriveTeleportEqual_42(this, that *OktaAssignmentTargetV1) bool { return (this == nil && that == nil) || this != nil && that != nil && this.Type == that.Type && this.Id == that.Id } -// deriveTeleportEqual_41 returns whether this and that are equal. -func deriveTeleportEqual_41(this, that *Header) bool { +// deriveTeleportEqual_43 returns whether this and that are equal. +func deriveTeleportEqual_43(this, that *Header) bool { return (this == nil && that == nil) || this != nil && that != nil && this.Name == that.Name && this.Value == that.Value } -// deriveTeleportEqual_42 returns whether this and that are equal. -func deriveTeleportEqual_42(this, that *IdentityCenterPermissionSet) bool { +// deriveTeleportEqual_44 returns whether this and that are equal. +func deriveTeleportEqual_44(this, that *IdentityCenterPermissionSet) bool { return (this == nil && that == nil) || this != nil && that != nil && this.ARN == that.ARN && diff --git a/api/types/desktop.go b/api/types/desktop.go index ae31c3e976221..f06d0d16168f8 100644 --- a/api/types/desktop.go +++ b/api/types/desktop.go @@ -45,6 +45,12 @@ type WindowsDesktopService interface { GetHostname() string // ProxiedService provides common methods for a proxied service. ProxiedService + // GetRelayGroup returns the name of the Relay group that this service is + // connected to. 
+ GetRelayGroup() string + // GetRelayIDs returns the list of Relay host IDs that this service is + // connected to. + GetRelayIDs() []string // Clone creates a copy of the service. Clone() WindowsDesktopService } @@ -117,6 +123,22 @@ func (s *WindowsDesktopServiceV3) SetProxyIDs(proxyIDs []string) { s.Spec.ProxyIDs = proxyIDs } +// GetRelayGroup implements [WindowsDesktopService]. +func (s *WindowsDesktopServiceV3) GetRelayGroup() string { + if s == nil { + return "" + } + return s.Spec.RelayGroup +} + +// GetRelayIDs implements [WindowsDesktopService]. +func (s *WindowsDesktopServiceV3) GetRelayIDs() []string { + if s == nil { + return nil + } + return s.Spec.RelayIds +} + // GetHostname returns the windows hostname of this service. func (s *WindowsDesktopServiceV3) GetHostname() string { return s.Spec.Hostname diff --git a/api/types/discoveryconfig/convert/v1/discoveryconfig.go b/api/types/discoveryconfig/convert/v1/discoveryconfig.go index 4c5083a975f0d..ff5aed11b171d 100644 --- a/api/types/discoveryconfig/convert/v1/discoveryconfig.go +++ b/api/types/discoveryconfig/convert/v1/discoveryconfig.go @@ -34,48 +34,48 @@ func FromProto(msg *discoveryconfigv1.DiscoveryConfig) (*discoveryconfig.Discove return nil, trace.BadParameter("discovery config message is nil") } - if msg.Spec == nil { + if msg.GetSpec() == nil { return nil, trace.BadParameter("spec is missing") } - if msg.Spec.DiscoveryGroup == "" { + if msg.GetSpec().GetDiscoveryGroup() == "" { return nil, trace.BadParameter("discovery group is missing") } - awsMatchers := make([]types.AWSMatcher, 0, len(msg.Spec.Aws)) - for _, m := range msg.Spec.Aws { + awsMatchers := make([]types.AWSMatcher, 0, len(msg.GetSpec().GetAws())) + for _, m := range msg.GetSpec().GetAws() { awsMatchers = append(awsMatchers, *m) } - azureMatchers := make([]types.AzureMatcher, 0, len(msg.Spec.Azure)) - for _, m := range msg.Spec.Azure { + azureMatchers := make([]types.AzureMatcher, 0, len(msg.GetSpec().GetAzure())) + for _, m := 
range msg.GetSpec().GetAzure() { azureMatchers = append(azureMatchers, *m) } - gcpMatchers := make([]types.GCPMatcher, 0, len(msg.Spec.Gcp)) - for _, m := range msg.Spec.Gcp { + gcpMatchers := make([]types.GCPMatcher, 0, len(msg.GetSpec().GetGcp())) + for _, m := range msg.GetSpec().GetGcp() { gcpMatchers = append(gcpMatchers, *m) } - kubeMatchers := make([]types.KubernetesMatcher, 0, len(msg.Spec.Kube)) - for _, m := range msg.Spec.Kube { + kubeMatchers := make([]types.KubernetesMatcher, 0, len(msg.GetSpec().GetKube())) + for _, m := range msg.GetSpec().GetKube() { kubeMatchers = append(kubeMatchers, *m) } discoveryConfig, err := discoveryconfig.NewDiscoveryConfig( - headerv1.FromMetadataProto(msg.Header.Metadata), + headerv1.FromMetadataProto(msg.GetHeader().GetMetadata()), discoveryconfig.Spec{ - DiscoveryGroup: msg.Spec.DiscoveryGroup, + DiscoveryGroup: msg.GetSpec().GetDiscoveryGroup(), AWS: awsMatchers, Azure: azureMatchers, GCP: gcpMatchers, Kube: kubeMatchers, - AccessGraph: msg.Spec.AccessGraph, + AccessGraph: msg.GetSpec().GetAccessGraph(), }, ) if err != nil { return nil, trace.Wrap(err) } - discoveryConfig.Status = StatusFromProto(msg.Status) + discoveryConfig.Status = StatusFromProto(msg.GetStatus()) return discoveryConfig, nil } diff --git a/api/types/discoveryconfig/derived.gen.go b/api/types/discoveryconfig/derived.gen.go index 3ba43afc89bb7..26419729f5262 100644 --- a/api/types/discoveryconfig/derived.gen.go +++ b/api/types/discoveryconfig/derived.gen.go @@ -153,7 +153,8 @@ func deriveTeleportEqual_8(this, that *types.AWSMatcher) bool { deriveTeleportEqual_18(this.SSM, that.SSM) && this.Integration == that.Integration && this.KubeAppDiscovery == that.KubeAppDiscovery && - this.SetupAccessForARN == that.SetupAccessForARN + this.SetupAccessForARN == that.SetupAccessForARN && + deriveTeleportEqual_19(this.Organization, that.Organization) } // deriveTeleportEqual_9 returns whether this and that are equal. 
@@ -165,7 +166,8 @@ func deriveTeleportEqual_9(this, that *types.AzureMatcher) bool { deriveTeleportEqual_14(this.Types, that.Types) && deriveTeleportEqual_14(this.Regions, that.Regions) && deriveTeleportEqual_16(this.ResourceTags, that.ResourceTags) && - deriveTeleportEqual_17(this.Params, that.Params) + deriveTeleportEqual_17(this.Params, that.Params) && + this.Integration == that.Integration } // deriveTeleportEqual_10 returns whether this and that are equal. @@ -199,7 +201,7 @@ func deriveTeleportEqual_12(this, that []*types.AccessGraphAWSSync) bool { return false } for i := 0; i < len(this); i++ { - if !(deriveTeleportEqual_19(this[i], that[i])) { + if !(deriveTeleportEqual_20(this[i], that[i])) { return false } } @@ -215,7 +217,7 @@ func deriveTeleportEqual_13(this, that []*types.AccessGraphAzureSync) bool { return false } for i := 0; i < len(this); i++ { - if !(deriveTeleportEqual_20(this[i], that[i])) { + if !(deriveTeleportEqual_21(this[i], that[i])) { return false } } @@ -276,8 +278,11 @@ func deriveTeleportEqual_17(this, that *types.InstallerParams) bool { this.InstallTeleport == that.InstallTeleport && this.SSHDConfig == that.SSHDConfig && this.PublicProxyAddr == that.PublicProxyAddr && - deriveTeleportEqual_21(this.Azure, that.Azure) && - this.EnrollMode == that.EnrollMode + deriveTeleportEqual_22(this.Azure, that.Azure) && + this.EnrollMode == that.EnrollMode && + this.Suffix == that.Suffix && + this.UpdateGroup == that.UpdateGroup && + deriveTeleportEqual_23(this.HTTPProxySettings, that.HTTPProxySettings) } // deriveTeleportEqual_18 returns whether this and that are equal. @@ -288,34 +293,67 @@ func deriveTeleportEqual_18(this, that *types.AWSSSM) bool { } // deriveTeleportEqual_19 returns whether this and that are equal. 
-func deriveTeleportEqual_19(this, that *types.AccessGraphAWSSync) bool { +func deriveTeleportEqual_19(this, that *types.AWSOrganizationMatcher) bool { + return (this == nil && that == nil) || + this != nil && that != nil && + this.OrganizationID == that.OrganizationID && + deriveTeleportEqual_24(this.OrganizationalUnits, that.OrganizationalUnits) +} + +// deriveTeleportEqual_20 returns whether this and that are equal. +func deriveTeleportEqual_20(this, that *types.AccessGraphAWSSync) bool { return (this == nil && that == nil) || this != nil && that != nil && deriveTeleportEqual_14(this.Regions, that.Regions) && deriveTeleportEqual_15(this.AssumeRole, that.AssumeRole) && this.Integration == that.Integration && - deriveTeleportEqual_22(this.CloudTrailLogs, that.CloudTrailLogs) + deriveTeleportEqual_25(this.CloudTrailLogs, that.CloudTrailLogs) && + deriveTeleportEqual_26(this.EksAuditLogs, that.EksAuditLogs) } -// deriveTeleportEqual_20 returns whether this and that are equal. -func deriveTeleportEqual_20(this, that *types.AccessGraphAzureSync) bool { +// deriveTeleportEqual_21 returns whether this and that are equal. +func deriveTeleportEqual_21(this, that *types.AccessGraphAzureSync) bool { return (this == nil && that == nil) || this != nil && that != nil && this.SubscriptionID == that.SubscriptionID && this.Integration == that.Integration } -// deriveTeleportEqual_21 returns whether this and that are equal. -func deriveTeleportEqual_21(this, that *types.AzureInstallerParams) bool { +// deriveTeleportEqual_22 returns whether this and that are equal. +func deriveTeleportEqual_22(this, that *types.AzureInstallerParams) bool { return (this == nil && that == nil) || this != nil && that != nil && this.ClientID == that.ClientID } -// deriveTeleportEqual_22 returns whether this and that are equal. -func deriveTeleportEqual_22(this, that *types.AccessGraphAWSSyncCloudTrailLogs) bool { +// deriveTeleportEqual_23 returns whether this and that are equal. 
+func deriveTeleportEqual_23(this, that *types.HTTPProxySettings) bool { + return (this == nil && that == nil) || + this != nil && that != nil && + this.HTTPProxy == that.HTTPProxy && + this.HTTPSProxy == that.HTTPSProxy && + this.NoProxy == that.NoProxy +} + +// deriveTeleportEqual_24 returns whether this and that are equal. +func deriveTeleportEqual_24(this, that *types.AWSOrganizationUnitsMatcher) bool { + return (this == nil && that == nil) || + this != nil && that != nil && + deriveTeleportEqual_14(this.Include, that.Include) && + deriveTeleportEqual_14(this.Exclude, that.Exclude) +} + +// deriveTeleportEqual_25 returns whether this and that are equal. +func deriveTeleportEqual_25(this, that *types.AccessGraphAWSSyncCloudTrailLogs) bool { return (this == nil && that == nil) || this != nil && that != nil && this.Region == that.Region && this.SQSQueue == that.SQSQueue } + +// deriveTeleportEqual_26 returns whether this and that are equal. +func deriveTeleportEqual_26(this, that *types.AccessGraphAWSSyncEKSAuditLogs) bool { + return (this == nil && that == nil) || + this != nil && that != nil && + deriveTeleportEqual_16(this.Tags, that.Tags) +} diff --git a/api/types/discoveryconfig/discoveryconfig.go b/api/types/discoveryconfig/discoveryconfig.go index 64d42367e773f..a2cd66eff97b0 100644 --- a/api/types/discoveryconfig/discoveryconfig.go +++ b/api/types/discoveryconfig/discoveryconfig.go @@ -201,6 +201,15 @@ func (a *DiscoveryConfig) MatchSearch(values []string) bool { return types.MatchSearch(fieldVals, values, nil) } +// IsMatchersEmpty returns true if all matchers are empty. +func (a *DiscoveryConfig) IsMatchersEmpty() bool { + return len(a.Spec.AWS) == 0 && + len(a.Spec.Azure) == 0 && + len(a.Spec.GCP) == 0 && + len(a.Spec.Kube) == 0 && + (a.Spec.AccessGraph == nil || len(a.Spec.AccessGraph.AWS) == 0) +} + // CloneResource returns a copy of the resource as types.ResourceWithLabels. 
func (a *DiscoveryConfig) CloneResource() types.ResourceWithLabels { var copy *DiscoveryConfig diff --git a/api/types/discoveryconfig/discoveryconfig_test.go b/api/types/discoveryconfig/discoveryconfig_test.go index 39c2cca20643b..b8439dd0aa015 100644 --- a/api/types/discoveryconfig/discoveryconfig_test.go +++ b/api/types/discoveryconfig/discoveryconfig_test.go @@ -451,3 +451,110 @@ func TestNewDiscoveryConfig(t *testing.T) { }) } } + +func TestDiscoveryConfig_IsMatchersEmpty(t *testing.T) { + for _, tt := range []struct { + name string + config *DiscoveryConfig + expected bool + }{ + { + name: "empty config", + config: &DiscoveryConfig{ + Spec: Spec{}, + }, + expected: true, + }, + { + name: "has AWS matchers", + config: &DiscoveryConfig{ + Spec: Spec{ + AWS: []types.AWSMatcher{{ + Types: []string{"ec2"}, + Regions: []string{"us-west-2"}, + }}, + }, + }, + expected: false, + }, + { + name: "has Azure matchers", + config: &DiscoveryConfig{ + Spec: Spec{ + Azure: []types.AzureMatcher{{ + Types: []string{"vm"}, + Regions: []string{"europe-west-2"}, + }}, + }, + }, + expected: false, + }, + { + name: "has GCP matchers", + config: &DiscoveryConfig{ + Spec: Spec{ + GCP: []types.GCPMatcher{{ + Types: []string{"gce"}, + ProjectIDs: []string{"p1"}, + }}, + }, + }, + expected: false, + }, + { + name: "has Kube matchers", + config: &DiscoveryConfig{ + Spec: Spec{ + Kube: []types.KubernetesMatcher{{ + Types: []string{"app"}, + }}, + }, + }, + expected: false, + }, + { + name: "has AccessGraph with AWS", + config: &DiscoveryConfig{ + Spec: Spec{ + AccessGraph: &types.AccessGraphSync{ + AWS: []*types.AccessGraphAWSSync{{ + Integration: "integration1", + Regions: []string{"us-west-2"}, + }}, + }, + }, + }, + expected: false, + }, + { + name: "has AccessGraph but no AWS", + config: &DiscoveryConfig{ + Spec: Spec{ + AccessGraph: &types.AccessGraphSync{}, + }, + }, + expected: true, + }, + { + name: "has multiple matcher types", + config: &DiscoveryConfig{ + Spec: Spec{ + AWS: 
[]types.AWSMatcher{{
+					Types:   []string{"ec2"},
+					Regions: []string{"us-west-2"},
+				}},
+				Azure: []types.AzureMatcher{{
+					Types:   []string{"vm"},
+					Regions: []string{"europe-west-2"},
+				}},
+			},
+		},
+		expected: false,
+	},
+	} {
+		t.Run(tt.name, func(t *testing.T) {
+			got := tt.config.IsMatchersEmpty()
+			require.Equal(t, tt.expected, got)
+		})
+	}
+}
diff --git a/api/types/events/api.go b/api/types/events/api.go
index 96456005dba94..54307bd95fa4d 100644
--- a/api/types/events/api.go
+++ b/api/types/events/api.go
@@ -115,3 +115,15 @@ type Stream interface {
 	// the stream completed and closes the stream instance
 	Close(ctx context.Context) error
 }
+
+const (
+	// EventProtocolSSH specifies SSH as a type of captured protocol
+	EventProtocolSSH = "ssh"
+	// EventProtocolKube specifies kubernetes as a type of captured protocol
+	EventProtocolKube = "kube"
+	// EventProtocolTDP specifies Teleport Desktop Protocol (TDP)
+	// as a type of captured protocol
+	EventProtocolTDP = "tdp"
+	// EventProtocolDB specifies database as a type of captured protocol
+	EventProtocolDB = "db"
+)
diff --git a/api/types/events/events.go b/api/types/events/events.go
index d988a5ebcbfd4..2e7c8ae229ff8 100644
--- a/api/types/events/events.go
+++ b/api/types/events/events.go
@@ -80,15 +80,16 @@ func computeEventID(evt AuditEvent, payload []byte) string {
 }
 
 // trimStr trims a string to a given length.
-func trimStr(s string, n int) string {
+func trimStr[T string | []byte](s T, n int) T {
 	// Starting at 2 to leave room for quotes at the begging and end.
 	charCount := 2
-	for i, r := range s {
+	for i := range len(s) {
 		// Make sure we always have room to add an escape character if necessary.
if charCount+1 > n { return s[:i] } - if r == rune('"') || r == '\\' { + r := rune(s[i]) + if r == '"' || r == '\\' { charCount++ } charCount++ @@ -113,6 +114,14 @@ func adjustedMaxSize(e AuditEvent, maxSize int) int { return maxSize - sizeBallast } +// nonEmptyStr returns 1 if the input string is not empty +func nonEmptyStr[T string | []byte](s T) int { + if len(s) == 0 { + return 0 + } + return 1 +} + // nonEmptyStrs returns the number of non-empty strings. func nonEmptyStrs(s ...string) int { nonEmptyStrs := 0 @@ -153,10 +162,10 @@ func (m *Status) nonEmptyStrs() int { return nonEmptyStrs(m.Error, m.UserMessage) } -func (m *Status) trimToMaxSize(maxSize int) Status { +func (m *Status) trimToMaxFieldSize(maxFieldSize int) Status { var out Status - out.Error = trimStr(m.Error, maxSize) - out.UserMessage = trimStr(m.UserMessage, maxSize) + out.Error = trimStr(m.Error, maxFieldSize) + out.UserMessage = trimStr(m.UserMessage, maxFieldSize) return out } @@ -183,7 +192,7 @@ func (m *Struct) nonEmptyStrs() int { return toTrim } -func (m *Struct) trimToMaxSize(maxSize int) *Struct { +func (m *Struct) trimToMaxFieldSize(maxFieldSize int) *Struct { if len(m.Fields) == 0 { return m } @@ -194,11 +203,11 @@ func (m *Struct) trimToMaxSize(maxSize int) *Struct { }, } for k, v := range m.Fields { - trimmedKey := trimStr(k, maxSize) + trimmedKey := trimStr(k, maxFieldSize) if v != nil { if strVal := v.GetStringValue(); strVal != "" { - trimmedVal := trimStr(strVal, maxSize) + trimmedVal := trimStr(strVal, maxFieldSize) out.Fields[trimmedKey] = &types.Value{ Kind: &types.Value_StringValue{ StringValue: trimmedVal, @@ -207,7 +216,7 @@ func (m *Struct) trimToMaxSize(maxSize int) *Struct { } else if l := v.GetListValue(); l != nil { for i, lv := range l.Values { if strVal := lv.GetStringValue(); strVal != "" { - trimmedVal := trimStr(strVal, maxSize) + trimmedVal := trimStr(strVal, maxFieldSize) l.Values[i] = &types.Value{ Kind: &types.Value_StringValue{ StringValue: trimmedVal, @@ 
-377,26 +386,12 @@ func (m *ClientDisconnect) TrimToMaxSize(maxSize int) AuditEvent { // per-field max size where only user input message fields DatabaseQuery and DatabaseQueryParameters are taken into // account. func (m *DatabaseSessionQuery) TrimToMaxSize(maxSize int) AuditEvent { - size := m.Size() - if size <= maxSize { - return m - } - - out := utils.CloneProtoMsg(m) - out.DatabaseQuery = "" - out.DatabaseQueryParameters = nil - - maxSize = adjustedMaxSize(out, maxSize) - - // Check how many custom fields are set. - customFieldsCount := nonEmptyStrs(m.DatabaseQuery) + nonEmptyStrsInSlice(m.DatabaseQueryParameters) - - maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - - out.DatabaseQuery = trimStr(m.DatabaseQuery, maxFieldsSize) - out.DatabaseQueryParameters = trimStrSlice(m.DatabaseQueryParameters, maxFieldsSize) - - return out + return trimEventToMaxSize(m, maxSize, func(m *DatabaseSessionQuery, out *DatabaseSessionQuery) fieldTrimmer { + return fieldTrimmers{ + newStrTrimmer(m.DatabaseQuery, &out.DatabaseQuery), + newStrSliceTrimmer(m.DatabaseQueryParameters, &out.DatabaseQueryParameters), + } + }) } // TrimToMaxSize trims the Exec event to the given maximum size. @@ -427,21 +422,9 @@ func (m *Exec) TrimToMaxSize(maxSize int) AuditEvent { // underlying storage and thus cause the events to be omitted entirely. See // teleport-private#172. 
func (m *UserLogin) TrimToMaxSize(maxSize int) AuditEvent { - size := m.Size() - if size <= maxSize { - return m - } - - out := utils.CloneProtoMsg(m) - out.Status = Status{} - - maxSize = adjustedMaxSize(out, maxSize) - customFieldsCount := m.Status.nonEmptyStrs() - maxFieldSize := maxSizePerField(maxSize, customFieldsCount) - - out.Status = m.Status.trimToMaxSize(maxFieldSize) - - return out + return trimEventToMaxSize(m, maxSize, func(m, out *UserLogin) fieldTrimmer { + return newGenericTrimmer(&m.Status, &out.Status) + }) } func (m *UserDelete) TrimToMaxSize(maxSize int) AuditEvent { @@ -510,7 +493,7 @@ func (m *AccessRequestCreate) TrimToMaxSize(maxSize int) AuditEvent { out.Roles = trimStrSlice(m.Roles, maxFieldsSize) out.Reason = trimStr(m.Reason, maxFieldsSize) - out.Annotations = m.Annotations.trimToMaxSize(maxFieldsSize) + out.Annotations = m.Annotations.trimToMaxFieldSize(maxFieldsSize) return out } @@ -774,6 +757,10 @@ func (m *AppSessionChunk) TrimToMaxSize(maxSize int) AuditEvent { return m } +func (m *AppSessionChunk) GetSessionChunkID() string { + return m.SessionChunkID +} + func (m *AppSessionRequest) TrimToMaxSize(maxSize int) AuditEvent { size := m.Size() if size <= maxSize { @@ -815,7 +802,7 @@ func (m *AppSessionDynamoDBRequest) TrimToMaxSize(maxSize int) AuditEvent { out.Path = trimStr(m.Path, maxFieldsSize) out.RawQuery = trimStr(m.RawQuery, maxFieldsSize) out.Target = trimStr(m.Target, maxFieldsSize) - out.Body = m.Body.trimToMaxSize(maxFieldsSize) + out.Body = m.Body.trimToMaxFieldSize(maxFieldsSize) return out } @@ -858,7 +845,7 @@ func (m *DatabaseSessionStart) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) return out } @@ -909,7 +896,7 @@ func (m *DatabaseUserCreate) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := 
m.Status.nonEmptyStrs() + nonEmptyStrs(m.Username) + nonEmptyStrsInSlice(m.Roles) maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) out.Username = trimStr(m.Username, maxFieldsSize) out.Roles = trimStrSlice(m.Roles, maxFieldsSize) @@ -931,7 +918,7 @@ func (m *DatabaseUserDeactivate) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() + nonEmptyStrs(m.Username) maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) out.Username = trimStr(m.Username, maxFieldsSize) return out @@ -1305,7 +1292,7 @@ func (m *DynamoDBRequest) TrimToMaxSize(maxSize int) AuditEvent { out.Path = trimStr(m.Path, maxFieldsSize) out.RawQuery = trimStr(m.RawQuery, maxFieldsSize) - out.Body = m.Body.trimToMaxSize(maxFieldsSize) + out.Body = m.Body.trimToMaxFieldSize(maxFieldsSize) out.Target = trimStr(m.Target, maxFieldsSize) return out @@ -1358,7 +1345,7 @@ func (m *DeviceEvent) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - status := m.Status.trimToMaxSize(maxFieldsSize) + status := m.Status.trimToMaxFieldSize(maxFieldsSize) out.Status = &status return out @@ -1378,7 +1365,7 @@ func (m *DeviceEvent2) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) return out } @@ -1409,7 +1396,7 @@ func (m *RecoveryCodeUsed) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = 
m.Status.trimToMaxFieldSize(maxFieldsSize) return out } @@ -1434,7 +1421,7 @@ func (m *WindowsDesktopSessionStart) TrimToMaxSize(maxSize int) AuditEvent { nonEmptyStrsInMap(m.DesktopLabels) maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) out.Domain = trimStr(m.Domain, maxFieldsSize) out.WindowsUser = trimStr(m.WindowsUser, maxFieldsSize) out.DesktopLabels = trimMap(m.DesktopLabels, maxFieldsSize) @@ -1596,6 +1583,7 @@ func (m *DesktopSharedDirectoryStart) TrimToMaxSize(maxSize int) AuditEvent { out := utils.CloneProtoMsg(m) out.DirectoryName = "" + out.DesktopName = "" maxSize = adjustedMaxSize(out, maxSize) @@ -1603,6 +1591,7 @@ func (m *DesktopSharedDirectoryStart) TrimToMaxSize(maxSize int) AuditEvent { maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) out.DirectoryName = trimStr(m.DirectoryName, maxFieldsSize) + out.DesktopName = trimStr(m.DesktopName, maxFieldsSize) return out } @@ -1663,7 +1652,7 @@ func (m *BotJoin) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Attributes.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Attributes = m.Attributes.trimToMaxSize(maxFieldsSize) + out.Attributes = m.Attributes.trimToMaxFieldSize(maxFieldsSize) return out } @@ -1682,7 +1671,7 @@ func (m *InstanceJoin) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) return out } @@ -1721,7 +1710,7 @@ func (m *SAMLIdPAuthAttempt) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) return out } 
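Across these hunks the trim recipe is always the same: blank the user-controlled fields, re-measure the event, then hand each non-empty field an equal share of the leftover byte budget via `maxSizePerField`. A minimal, self-contained sketch of those helpers (names follow the diff, but the bodies — including byte-level rather than rune-aware truncation — are assumptions):

```go
package main

import "fmt"

// trimStr truncates s to at most max bytes (sketch; the real helper is
// presumably careful about UTF-8 boundaries).
func trimStr(s string, max int) string {
	if len(s) <= max {
		return s
	}
	return s[:max]
}

// nonEmptyStrs counts the non-empty strings among its arguments, i.e. how
// many fields actually need a slice of the budget.
func nonEmptyStrs(ss ...string) int {
	n := 0
	for _, s := range ss {
		if s != "" {
			n++
		}
	}
	return n
}

// maxSizePerField splits the remaining byte budget evenly across the
// fields that still carry data.
func maxSizePerField(maxSize, customFieldsCount int) int {
	if customFieldsCount == 0 {
		return maxSize
	}
	return maxSize / customFieldsCount
}

func main() {
	domain, user := "very-long-domain-name.example.com", "alice"
	// Two non-empty fields share a 16-byte budget: 8 bytes each.
	budget := maxSizePerField(16, nonEmptyStrs(domain, user))
	fmt.Println(trimStr(domain, budget)) // prints "very-lon"
	fmt.Println(trimStr(user, budget))   // prints "alice" (already under budget)
}
```

The even split is deliberately crude: a short field wastes its unused share rather than donating it to longer siblings, which keeps the logic O(fields) and predictable.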
@@ -1760,7 +1749,7 @@ func (m *OktaSyncFailure) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) return out } @@ -1779,7 +1768,7 @@ func (m *OktaAssignmentResult) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) return out } @@ -1798,7 +1787,7 @@ func (m *OktaUserSync) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) return out } @@ -1817,7 +1806,7 @@ func (m *OktaAccessListSync) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) return out } @@ -1840,7 +1829,7 @@ func (m *AccessListCreate) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) return out } @@ -1859,7 +1848,7 @@ func (m *AccessListUpdate) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) return out } @@ -1878,7 +1867,7 @@ func (m *AccessListDelete) TrimToMaxSize(maxSize int) 
AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) return out } @@ -1897,7 +1886,7 @@ func (m *AccessListReview) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) return out } @@ -1916,7 +1905,7 @@ func (m *AccessListMemberCreate) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) return out } @@ -1935,7 +1924,7 @@ func (m *AccessListMemberUpdate) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) return out } @@ -1954,7 +1943,7 @@ func (m *AccessListMemberDelete) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) return out } @@ -1973,7 +1962,7 @@ func (m *AccessListMemberDeleteAllForAccessList) TrimToMaxSize(maxSize int) Audi customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) return out } @@ -1992,7 +1981,7 @@ func (m *UserLoginAccessListInvalid) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := 
m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) return out } @@ -2013,7 +2002,7 @@ func (m *AuditQueryRun) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() + nonEmptyStrs(m.Name, m.Query) maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) out.Name = trimStr(m.Name, maxFieldsSize) out.Query = trimStr(m.Query, maxFieldsSize) @@ -2034,7 +2023,7 @@ func (m *SecurityReportRun) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) return out } @@ -2100,7 +2089,7 @@ func (m *ClusterNetworkingConfigUpdate) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) return out } @@ -2119,7 +2108,7 @@ func (m *SessionRecordingConfigUpdate) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) return out } @@ -2138,7 +2127,7 @@ func (m *AccessGraphSettingsUpdate) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) return out } @@ -2159,9 +2148,9 @@ func (m *SpannerRPC) 
TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.Status.nonEmptyStrs() + nonEmptyStrs(m.Procedure) + m.Args.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) out.Procedure = trimStr(m.Procedure, maxFieldsSize) - out.Args = m.Args.trimToMaxSize(maxFieldsSize) + out.Args = m.Args.trimToMaxFieldSize(maxFieldsSize) return out } @@ -2422,7 +2411,7 @@ func (m *WorkloadIdentityCreate) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.WorkloadIdentityData.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.WorkloadIdentityData = m.WorkloadIdentityData.trimToMaxSize(maxFieldsSize) + out.WorkloadIdentityData = m.WorkloadIdentityData.trimToMaxFieldSize(maxFieldsSize) return out } @@ -2441,7 +2430,7 @@ func (m *WorkloadIdentityUpdate) TrimToMaxSize(maxSize int) AuditEvent { customFieldsCount := m.WorkloadIdentityData.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.WorkloadIdentityData = m.WorkloadIdentityData.trimToMaxSize(maxFieldsSize) + out.WorkloadIdentityData = m.WorkloadIdentityData.trimToMaxFieldSize(maxFieldsSize) return out } @@ -2511,7 +2500,7 @@ func (m *AWSICResourceSync) TrimToMaxSize(maxSize int) AuditEvent { maxSize = adjustedMaxSize(out, maxSize) customFieldsCount := m.Status.nonEmptyStrs() maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) - out.Status = m.Status.trimToMaxSize(maxFieldsSize) + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) return out } @@ -2548,3 +2537,215 @@ func (m *SigstorePolicyUpdate) TrimToMaxSize(int) AuditEvent { func (m *SigstorePolicyDelete) TrimToMaxSize(int) AuditEvent { return m } + +func (m *MCPSessionStart) TrimToMaxSize(maxSize int) AuditEvent { + return m +} + +func (m *MCPSessionEnd) TrimToMaxSize(maxSize int) AuditEvent { + return trimEventToMaxSize(m, maxSize, func(m, out 
*MCPSessionEnd) fieldTrimmer { + return fieldTrimmers{ + newGenericTrimmer(&m.Status, &out.Status), + newTraitsTrimmer(m.Headers, &out.Headers), + } + }) +} + +func (m *MCPJSONRPCMessage) nonEmptyStrs() int { + if m == nil { + return 0 + } + return nonEmptyStrs(m.JSONRPC, m.ID, m.Method) + m.Params.nonEmptyStrs() +} + +func (m *MCPJSONRPCMessage) trimToMaxFieldSize(maxFieldsSize int) MCPJSONRPCMessage { + if m == nil { + return MCPJSONRPCMessage{} + } + + out := MCPJSONRPCMessage{} + out.JSONRPC = trimStr(m.JSONRPC, maxFieldsSize) + out.ID = trimStr(m.ID, maxFieldsSize) + out.Method = trimStr(m.Method, maxFieldsSize) + out.Params = m.Params.trimToMaxFieldSize(maxFieldsSize) + return out +} + +func (m *MCPSessionRequest) TrimToMaxSize(maxSize int) AuditEvent { + return trimEventToMaxSize(m, maxSize, func(m, out *MCPSessionRequest) fieldTrimmer { + return fieldTrimmers{ + newGenericTrimmer(&m.Message, &out.Message), + newGenericTrimmer(&m.Status, &out.Status), + newTraitsTrimmer(m.Headers, &out.Headers), + } + }) +} + +func (m *MCPSessionNotification) TrimToMaxSize(maxSize int) AuditEvent { + return trimEventToMaxSize(m, maxSize, func(m, out *MCPSessionNotification) fieldTrimmer { + return fieldTrimmers{ + newGenericTrimmer(&m.Message, &out.Message), + newGenericTrimmer(&m.Status, &out.Status), + newTraitsTrimmer(m.Headers, &out.Headers), + } + }) +} + +func (m *MCPSessionListenSSEStream) TrimToMaxSize(maxSize int) AuditEvent { + return trimEventToMaxSize(m, maxSize, func(m, out *MCPSessionListenSSEStream) fieldTrimmer { + return fieldTrimmers{ + newGenericTrimmer(&m.Status, &out.Status), + newTraitsTrimmer(m.Headers, &out.Headers), + } + }) +} + +func (m *MCPSessionInvalidHTTPRequest) TrimToMaxSize(maxSize int) AuditEvent { + return trimEventToMaxSize(m, maxSize, func(m, out *MCPSessionInvalidHTTPRequest) fieldTrimmer { + return fieldTrimmers{ + newStrTrimmer(m.Path, &out.Path), + newStrTrimmer(m.RawQuery, &out.RawQuery), + newTraitsTrimmer(m.Headers, &out.Headers), 
+ newBytesTrimmer(m.Body, &out.Body), + } + }) +} + +func (m *BoundKeypairRecovery) TrimToMaxSize(maxSize int) AuditEvent { + size := m.Size() + if size <= maxSize { + return m + } + out := utils.CloneProtoMsg(m) + out.Status = Status{} + out.TokenName = "" + out.BotName = "" + out.PublicKey = "" + + maxSize = adjustedMaxSize(out, maxSize) + customFieldsCount := m.Status.nonEmptyStrs() + nonEmptyStrs(m.TokenName, m.BotName, m.PublicKey) + maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) + + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) + out.TokenName = trimStr(m.TokenName, maxFieldsSize) + out.BotName = trimStr(m.BotName, maxFieldsSize) + out.PublicKey = trimStr(m.PublicKey, maxFieldsSize) + return out +} + +func (m *BoundKeypairRotation) TrimToMaxSize(maxSize int) AuditEvent { + size := m.Size() + if size <= maxSize { + return m + } + out := utils.CloneProtoMsg(m) + out.Status = Status{} + out.TokenName = "" + out.BotName = "" + out.PreviousPublicKey = "" + out.NewPublicKey = "" + + maxSize = adjustedMaxSize(out, maxSize) + customFieldsCount := m.Status.nonEmptyStrs() + nonEmptyStrs(m.TokenName, m.BotName, m.PreviousPublicKey, m.NewPublicKey) + maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) + + out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) + out.TokenName = trimStr(m.TokenName, maxFieldsSize) + out.BotName = trimStr(m.BotName, maxFieldsSize) + out.PreviousPublicKey = trimStr(m.PreviousPublicKey, maxFieldsSize) + out.NewPublicKey = trimStr(m.NewPublicKey, maxFieldsSize) + return out +} + +func (m *BoundKeypairJoinStateVerificationFailed) TrimToMaxSize(maxSize int) AuditEvent { + size := m.Size() + if size <= maxSize { + return m + } + out := utils.CloneProtoMsg(m) + out.Status = Status{} + out.TokenName = "" + out.BotName = "" + + maxSize = adjustedMaxSize(out, maxSize) + customFieldsCount := m.Status.nonEmptyStrs() + nonEmptyStrs(m.TokenName, m.BotName) + maxFieldsSize := maxSizePerField(maxSize, customFieldsCount) + 
+ out.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) + out.TokenName = trimStr(m.TokenName, maxFieldsSize) + out.BotName = trimStr(m.BotName, maxFieldsSize) + return out +} + +func (m *SCIMResourceEvent) TrimToMaxSize(maxSize int) AuditEvent { + if m.Size() <= maxSize { + return m + } + trimmed := utils.CloneProtoMsg(m) + if trimmed.Request != nil { + trimmed.Request.Body = nil + } + if trimmed.Size() <= maxSize { + return trimmed + } + + maxSize = adjustedMaxSize(trimmed, maxSize) + trimmableFieldCount := trimmed.Status.nonEmptyStrs() + nonEmptyStrs( + trimmed.Integration, + trimmed.ResourceType, + trimmed.TeleportID, + trimmed.ExternalID, + trimmed.Display) + maxFieldsSize := maxSizePerField(maxSize, trimmableFieldCount) + + trimmed.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) + trimmed.Integration = trimStr(trimmed.Integration, maxFieldsSize) + trimmed.ResourceType = trimStr(trimmed.ResourceType, maxFieldsSize) + trimmed.TeleportID = trimStr(trimmed.TeleportID, maxFieldsSize) + trimmed.ExternalID = trimStr(trimmed.ExternalID, maxFieldsSize) + trimmed.Display = trimStr(trimmed.Display, maxFieldsSize) + + return trimmed +} + +func (m *SCIMListingEvent) TrimToMaxSize(maxSize int) AuditEvent { + if m.Size() <= maxSize { + return m + } + trimmed := utils.CloneProtoMsg(m) + + if trimmed.Request != nil { + trimmed.Request.Body = nil + } + if trimmed.Size() <= maxSize { + return trimmed + } + + maxSize = adjustedMaxSize(trimmed, maxSize) + trimmableFieldCount := m.Status.nonEmptyStrs() + nonEmptyStrs( + trimmed.Integration, + trimmed.ResourceType, + m.Filter) + maxFieldsSize := maxSizePerField(maxSize, trimmableFieldCount) + trimmed.Status = m.Status.trimToMaxFieldSize(maxFieldsSize) + trimmed.Integration = trimStr(trimmed.Integration, maxFieldsSize) + trimmed.ResourceType = trimStr(trimmed.ResourceType, maxFieldsSize) + trimmed.Filter = trimStr(trimmed.Filter, maxFieldsSize) + + return trimmed +} + +func (m *ClientIPRestrictionsUpdate) TrimToMaxSize(int) AuditEvent { + return m +} + +func (m *VnetConfigCreate)
TrimToMaxSize(int) AuditEvent { + return m +} + +func (m *VnetConfigUpdate) TrimToMaxSize(int) AuditEvent { + return m +} + +func (m *VnetConfigDelete) TrimToMaxSize(int) AuditEvent { + return m +} diff --git a/api/types/events/events.pb.go b/api/types/events/events.pb.go index b5bc65bfa5626..48ecc672d0e6d 100644 --- a/api/types/events/events.pb.go +++ b/api/types/events/events.pb.go @@ -45,18 +45,22 @@ const ( UserKind_USER_KIND_HUMAN UserKind = 1 // Indicates the user associated with this event is a Machine ID bot user. UserKind_USER_KIND_BOT UserKind = 2 + // Indicates that the user associated with this event is a system component, e.g. the Okta service. + UserKind_USER_KIND_SYSTEM UserKind = 3 ) var UserKind_name = map[int32]string{ 0: "USER_KIND_UNSPECIFIED", 1: "USER_KIND_HUMAN", 2: "USER_KIND_BOT", + 3: "USER_KIND_SYSTEM", } var UserKind_value = map[string]int32{ "USER_KIND_UNSPECIFIED": 0, "USER_KIND_HUMAN": 1, "USER_KIND_BOT": 2, + "USER_KIND_SYSTEM": 3, } func (x UserKind) String() string { @@ -581,10 +585,19 @@ type UserMetadata struct { // with one. BotInstanceID string `protobuf:"bytes,12,opt,name=BotInstanceID,proto3" json:"bot_instance_id,omitempty"` // UserOrigin specifies the origin of this user account. - UserOrigin UserOrigin `protobuf:"varint,13,opt,name=UserOrigin,proto3,enum=events.UserOrigin" json:"user_origin,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + UserOrigin UserOrigin `protobuf:"varint,13,opt,name=UserOrigin,proto3,enum=events.UserOrigin" json:"user_origin,omitempty"` + // UserRoles specifies the roles used by the user to perform the action. + UserRoles []string `protobuf:"bytes,14,rep,name=UserRoles,proto3" json:"user_roles,omitempty"` + // UserTraits holds claim data used to populate a role at runtime + // for this session.
+ UserTraits github_com_gravitational_teleport_api_types_wrappers.Traits `protobuf:"bytes,15,opt,name=UserTraits,proto3,customtype=github.com/gravitational/teleport/api/types/wrappers.Traits" json:"user_traits,omitempty"` + // UserClusterName represents the Teleport Cluster name the user belongs to. + // For leaf clusters, this field represents the root cluster name. + // For root clusters, this field holds the cluster name. + UserClusterName string `protobuf:"bytes,16,opt,name=UserClusterName,proto3" json:"user_cluster_name,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *UserMetadata) Reset() { *m = UserMetadata{} } @@ -1533,7 +1546,9 @@ type DesktopClipboardReceive struct { // DesktopAddr is the address of the desktop being accessed. DesktopAddr string `protobuf:"bytes,5,opt,name=DesktopAddr,proto3" json:"desktop_addr"` // Length is the number of bytes of data received from the remote clipboard. - Length int32 `protobuf:"varint,6,opt,name=Length,proto3" json:"length"` + Length int32 `protobuf:"varint,6,opt,name=Length,proto3" json:"length"` + // DesktopName is the name of the desktop resource. + DesktopName string `protobuf:"bytes,7,opt,name=DesktopName,proto3" json:"desktop_name"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -1586,7 +1601,9 @@ type DesktopClipboardSend struct { // DesktopAddr is the address of the desktop being accessed. DesktopAddr string `protobuf:"bytes,5,opt,name=DesktopAddr,proto3" json:"desktop_addr"` // Length is the number of bytes of data sent. - Length int32 `protobuf:"varint,6,opt,name=Length,proto3" json:"length"` + Length int32 `protobuf:"varint,6,opt,name=Length,proto3" json:"length"` + // DesktopName is the name of the desktop resource. 
+ DesktopName string `protobuf:"bytes,7,opt,name=DesktopName,proto3" json:"desktop_name"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -1643,7 +1660,9 @@ type DesktopSharedDirectoryStart struct { // DirectoryName is the name of the directory being shared. DirectoryName string `protobuf:"bytes,7,opt,name=DirectoryName,proto3" json:"directory_name"` // DirectoryID is the ID of the directory being shared (unique to the Windows Desktop Session). - DirectoryID uint32 `protobuf:"varint,8,opt,name=DirectoryID,proto3" json:"directory_id"` + DirectoryID uint32 `protobuf:"varint,8,opt,name=DirectoryID,proto3" json:"directory_id"` + // DesktopName is the name of the desktop resource. + DesktopName string `protobuf:"bytes,9,opt,name=DesktopName,proto3" json:"desktop_name"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -1707,7 +1726,9 @@ type DesktopSharedDirectoryRead struct { // Length is the number of bytes read. Length uint32 `protobuf:"varint,10,opt,name=Length,proto3" json:"length"` // Offset is the offset the bytes were read from. - Offset uint64 `protobuf:"varint,11,opt,name=Offset,proto3" json:"offset"` + Offset uint64 `protobuf:"varint,11,opt,name=Offset,proto3" json:"offset"` + // DesktopName is the name of the desktop resource. + DesktopName string `protobuf:"bytes,12,opt,name=DesktopName,proto3" json:"desktop_name"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -1771,7 +1792,9 @@ type DesktopSharedDirectoryWrite struct { // Length is the number of bytes written. Length uint32 `protobuf:"varint,10,opt,name=Length,proto3" json:"length"` // Offset is the offset the bytes were written to. 
- Offset uint64 `protobuf:"varint,11,opt,name=Offset,proto3" json:"offset"` + Offset uint64 `protobuf:"varint,11,opt,name=Offset,proto3" json:"offset"` + // DesktopName is the name of the desktop resource. + DesktopName string `protobuf:"bytes,12,opt,name=DesktopName,proto3" json:"desktop_name"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -3309,6 +3332,7 @@ func (m *Exec) XXX_DiscardUnknown() { var xxx_messageInfo_Exec proto.InternalMessageInfo // SCP is emitted when data transfer has occurred between server and client +// Deprecated: use SFTP type SCP struct { // Metadata is a common event metadata Metadata `protobuf:"bytes,1,opt,name=Metadata,proto3,embedded=Metadata" json:""` @@ -3691,7 +3715,9 @@ type AuthAttempt struct { // ConnectionMetadata holds information about the connection ConnectionMetadata `protobuf:"bytes,3,opt,name=Connection,proto3,embedded=Connection" json:""` // Status contains common command or operation status fields - Status `protobuf:"bytes,4,opt,name=Status,proto3,embedded=Status" json:""` + Status `protobuf:"bytes,4,opt,name=Status,proto3,embedded=Status" json:""` + // ServerMetadata holds information about the target host server + ServerMetadata `protobuf:"bytes,5,opt,name=Server,proto3,embedded=Server" json:""` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -6046,6 +6072,54 @@ func (m *PostgresFunctionCall) XXX_DiscardUnknown() { var xxx_messageInfo_PostgresFunctionCall proto.InternalMessageInfo +// WindowsCertificateMetadata contains metadata about certificates +// issued for Windows environments. 
+type WindowsCertificateMetadata struct { + Subject string `protobuf:"bytes,1,opt,name=Subject,proto3" json:"subject"` + SerialNumber string `protobuf:"bytes,2,opt,name=SerialNumber,proto3" json:"serial_number"` + UPN string `protobuf:"bytes,3,opt,name=UPN,proto3" json:"upn"` + CRLDistributionPoints []string `protobuf:"bytes,4,rep,name=CRLDistributionPoints,proto3" json:"crl_distribution_points"` + KeyUsage int32 `protobuf:"varint,5,opt,name=KeyUsage,proto3" json:"key_usage"` + ExtendedKeyUsage []int32 `protobuf:"varint,6,rep,packed,name=ExtendedKeyUsage,proto3" json:"extended_key_usage"` + EnhancedKeyUsage []string `protobuf:"bytes,7,rep,name=EnhancedKeyUsage,proto3" json:"enhanced_key_usage"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *WindowsCertificateMetadata) Reset() { *m = WindowsCertificateMetadata{} } +func (m *WindowsCertificateMetadata) String() string { return proto.CompactTextString(m) } +func (*WindowsCertificateMetadata) ProtoMessage() {} +func (*WindowsCertificateMetadata) Descriptor() ([]byte, []int) { + return fileDescriptor_007ba1c3d6266d56, []int{108} +} +func (m *WindowsCertificateMetadata) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *WindowsCertificateMetadata) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_WindowsCertificateMetadata.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *WindowsCertificateMetadata) XXX_Merge(src proto.Message) { + xxx_messageInfo_WindowsCertificateMetadata.Merge(m, src) +} +func (m *WindowsCertificateMetadata) XXX_Size() int { + return m.Size() +} +func (m *WindowsCertificateMetadata) XXX_DiscardUnknown() { + xxx_messageInfo_WindowsCertificateMetadata.DiscardUnknown(m) +} + +var xxx_messageInfo_WindowsCertificateMetadata 
proto.InternalMessageInfo + // WindowsDesktopSessionStart is emitted when a user connects to a desktop. type WindowsDesktopSessionStart struct { // Metadata is common event metadata. @@ -6075,17 +6149,18 @@ type WindowsDesktopSessionStart struct { AllowUserCreation bool `protobuf:"varint,12,opt,name=AllowUserCreation,proto3" json:"allow_user_creation"` // NLA indicates whether Teleport performed Network Level Authentication (NLA) // when initiating this session. - NLA bool `protobuf:"varint,13,opt,name=NLA,proto3" json:"nla"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + NLA bool `protobuf:"varint,13,opt,name=NLA,proto3" json:"nla"` + CertMetadata *WindowsCertificateMetadata `protobuf:"bytes,14,opt,name=CertMetadata,proto3" json:"certificate"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *WindowsDesktopSessionStart) Reset() { *m = WindowsDesktopSessionStart{} } func (m *WindowsDesktopSessionStart) String() string { return proto.CompactTextString(m) } func (*WindowsDesktopSessionStart) ProtoMessage() {} func (*WindowsDesktopSessionStart) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{108} + return fileDescriptor_007ba1c3d6266d56, []int{109} } func (m *WindowsDesktopSessionStart) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6127,17 +6202,21 @@ type DatabaseSessionEnd struct { // StartTime is the timestamp at which the session began. StartTime time.Time `protobuf:"bytes,5,opt,name=StartTime,proto3,stdtime" json:"session_start,omitempty"` // EndTime is the timestamp at which the session ended. 
-	EndTime time.Time `protobuf:"bytes,6,opt,name=EndTime,proto3,stdtime" json:"session_stop,omitempty"`
-	XXX_NoUnkeyedLiteral struct{} `json:"-"`
-	XXX_unrecognized []byte `json:"-"`
-	XXX_sizecache int32 `json:"-"`
+	EndTime time.Time `protobuf:"bytes,6,opt,name=EndTime,proto3,stdtime" json:"session_stop,omitempty"`
+	// ConnectionMetadata holds information about the connection.
+	ConnectionMetadata `protobuf:"bytes,7,opt,name=Connection,proto3,embedded=Connection" json:""`
+	// Participants is the list of participants in a session.
+	Participants []string `protobuf:"bytes,8,rep,name=Participants,proto3" json:"participants,omitempty"`
+	XXX_NoUnkeyedLiteral struct{} `json:"-"`
+	XXX_unrecognized []byte `json:"-"`
+	XXX_sizecache int32 `json:"-"`
 }
 
 func (m *DatabaseSessionEnd) Reset() { *m = DatabaseSessionEnd{} }
 func (m *DatabaseSessionEnd) String() string { return proto.CompactTextString(m) }
 func (*DatabaseSessionEnd) ProtoMessage() {}
 func (*DatabaseSessionEnd) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{109}
+	return fileDescriptor_007ba1c3d6266d56, []int{110}
 }
 func (m *DatabaseSessionEnd) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -6183,7 +6262,7 @@ func (m *MFADeviceMetadata) Reset() { *m = MFADeviceMetadata{} }
 func (m *MFADeviceMetadata) String() string { return proto.CompactTextString(m) }
 func (*MFADeviceMetadata) ProtoMessage() {}
 func (*MFADeviceMetadata) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{110}
+	return fileDescriptor_007ba1c3d6266d56, []int{111}
 }
 func (m *MFADeviceMetadata) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -6231,7 +6310,7 @@ func (m *MFADeviceAdd) Reset() { *m = MFADeviceAdd{} }
 func (m *MFADeviceAdd) String() string { return proto.CompactTextString(m) }
 func (*MFADeviceAdd) ProtoMessage() {}
 func (*MFADeviceAdd) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{111}
+	return fileDescriptor_007ba1c3d6266d56, []int{112}
 }
 func (m *MFADeviceAdd) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -6279,7 +6358,7 @@ func (m *MFADeviceDelete) Reset() { *m = MFADeviceDelete{} }
 func (m *MFADeviceDelete) String() string { return proto.CompactTextString(m) }
 func (*MFADeviceDelete) ProtoMessage() {}
 func (*MFADeviceDelete) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{112}
+	return fileDescriptor_007ba1c3d6266d56, []int{113}
 }
 func (m *MFADeviceDelete) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -6323,7 +6402,7 @@ func (m *BillingInformationUpdate) Reset() { *m = BillingInformationUpda
 func (m *BillingInformationUpdate) String() string { return proto.CompactTextString(m) }
 func (*BillingInformationUpdate) ProtoMessage() {}
 func (*BillingInformationUpdate) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{113}
+	return fileDescriptor_007ba1c3d6266d56, []int{114}
 }
 func (m *BillingInformationUpdate) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -6367,7 +6446,7 @@ func (m *BillingCardCreate) Reset() { *m = BillingCardCreate{} }
 func (m *BillingCardCreate) String() string { return proto.CompactTextString(m) }
 func (*BillingCardCreate) ProtoMessage() {}
 func (*BillingCardCreate) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{114}
+	return fileDescriptor_007ba1c3d6266d56, []int{115}
 }
 func (m *BillingCardCreate) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -6411,7 +6490,7 @@ func (m *BillingCardDelete) Reset() { *m = BillingCardDelete{} }
 func (m *BillingCardDelete) String() string { return proto.CompactTextString(m) }
 func (*BillingCardDelete) ProtoMessage() {}
 func (*BillingCardDelete) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{115}
+	return fileDescriptor_007ba1c3d6266d56, []int{116}
 }
 func (m *BillingCardDelete) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -6465,7 +6544,7 @@ func (m *LockCreate) Reset() { *m = LockCreate{} }
 func (m *LockCreate) String() string { return proto.CompactTextString(m) }
 func (*LockCreate) ProtoMessage() {}
 func (*LockCreate) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{116}
+	return fileDescriptor_007ba1c3d6266d56, []int{117}
 }
 func (m *LockCreate) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -6513,7 +6592,7 @@ func (m *LockDelete) Reset() { *m = LockDelete{} }
 func (m *LockDelete) String() string { return proto.CompactTextString(m) }
 func (*LockDelete) ProtoMessage() {}
 func (*LockDelete) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{117}
+	return fileDescriptor_007ba1c3d6266d56, []int{118}
 }
 func (m *LockDelete) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -6557,7 +6636,7 @@ func (m *RecoveryCodeGenerate) Reset() { *m = RecoveryCodeGenerate{} }
 func (m *RecoveryCodeGenerate) String() string { return proto.CompactTextString(m) }
 func (*RecoveryCodeGenerate) ProtoMessage() {}
 func (*RecoveryCodeGenerate) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{118}
+	return fileDescriptor_007ba1c3d6266d56, []int{119}
 }
 func (m *RecoveryCodeGenerate) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -6604,7 +6683,7 @@ func (m *RecoveryCodeUsed) Reset() { *m = RecoveryCodeUsed{} }
 func (m *RecoveryCodeUsed) String() string { return proto.CompactTextString(m) }
 func (*RecoveryCodeUsed) ProtoMessage() {}
 func (*RecoveryCodeUsed) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{119}
+	return fileDescriptor_007ba1c3d6266d56, []int{120}
 }
 func (m *RecoveryCodeUsed) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -6660,7 +6739,9 @@ type WindowsDesktopSessionEnd struct {
 	// Recorded is true if the session was recorded, false otherwise.
 	Recorded bool `protobuf:"varint,12,opt,name=Recorded,proto3" json:"recorded"`
 	// Participants is a list of participants in the session.
-	Participants []string `protobuf:"bytes,13,rep,name=Participants,proto3" json:"participants"`
+	Participants []string `protobuf:"bytes,13,rep,name=Participants,proto3" json:"participants"`
+	// Connection holds information about the connection.
+	ConnectionMetadata `protobuf:"bytes,14,opt,name=Connection,proto3,embedded=Connection" json:""`
 	XXX_NoUnkeyedLiteral struct{} `json:"-"`
 	XXX_unrecognized []byte `json:"-"`
 	XXX_sizecache int32 `json:"-"`
@@ -6670,7 +6751,7 @@ func (m *WindowsDesktopSessionEnd) Reset() { *m = WindowsDesktopSessionE
 func (m *WindowsDesktopSessionEnd) String() string { return proto.CompactTextString(m) }
 func (*WindowsDesktopSessionEnd) ProtoMessage() {}
 func (*WindowsDesktopSessionEnd) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{120}
+	return fileDescriptor_007ba1c3d6266d56, []int{121}
 }
 func (m *WindowsDesktopSessionEnd) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -6708,17 +6789,19 @@ type CertificateCreate struct {
 	// Identity is the identity associated with the certificate, as interpreted by Teleport.
 	Identity *Identity `protobuf:"bytes,3,opt,name=Identity,proto3" json:"identity"`
 	// Client is the common client event metadata
-	ClientMetadata `protobuf:"bytes,4,opt,name=Client,proto3,embedded=Client" json:""`
-	XXX_NoUnkeyedLiteral struct{} `json:"-"`
-	XXX_unrecognized []byte `json:"-"`
-	XXX_sizecache int32 `json:"-"`
+	ClientMetadata `protobuf:"bytes,4,opt,name=Client,proto3,embedded=Client" json:""`
+	// CertificateAuthority holds information about the issuer of the certificate.
+	CertificateAuthority *CertificateAuthority `protobuf:"bytes,5,opt,name=CertificateAuthority,proto3" json:"certificate_authority"`
+	XXX_NoUnkeyedLiteral struct{} `json:"-"`
+	XXX_unrecognized []byte `json:"-"`
+	XXX_sizecache int32 `json:"-"`
 }
 
 func (m *CertificateCreate) Reset() { *m = CertificateCreate{} }
 func (m *CertificateCreate) String() string { return proto.CompactTextString(m) }
 func (*CertificateCreate) ProtoMessage() {}
 func (*CertificateCreate) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{121}
+	return fileDescriptor_007ba1c3d6266d56, []int{122}
 }
 func (m *CertificateCreate) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -6747,6 +6830,52 @@ func (m *CertificateCreate) XXX_DiscardUnknown() {
 
 var xxx_messageInfo_CertificateCreate proto.InternalMessageInfo
 
+// CertificateAuthority holds information about the issuer of the certificate.
+type CertificateAuthority struct {
+	// Type of the cert authority
+	Type string `protobuf:"bytes,1,opt,name=Type,proto3" json:"type,omitempty"`
+	// Domain of the cert authority
+	Domain string `protobuf:"bytes,2,opt,name=Domain,proto3" json:"domain,omitempty"`
+	// SubjectKeyID is BASE32-encoded subject key ID from authority certificate
+	SubjectKeyID string `protobuf:"bytes,3,opt,name=SubjectKeyID,proto3" json:"subject_key_id,omitempty"`
+	XXX_NoUnkeyedLiteral struct{} `json:"-"`
+	XXX_unrecognized []byte `json:"-"`
+	XXX_sizecache int32 `json:"-"`
+}
+
+func (m *CertificateAuthority) Reset() { *m = CertificateAuthority{} }
+func (m *CertificateAuthority) String() string { return proto.CompactTextString(m) }
+func (*CertificateAuthority) ProtoMessage() {}
+func (*CertificateAuthority) Descriptor() ([]byte, []int) {
+	return fileDescriptor_007ba1c3d6266d56, []int{123}
+}
+func (m *CertificateAuthority) XXX_Unmarshal(b []byte) error {
+	return m.Unmarshal(b)
+}
+func (m *CertificateAuthority) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+	if deterministic {
+		return xxx_messageInfo_CertificateAuthority.Marshal(b, m, deterministic)
+	} else {
+		b = b[:cap(b)]
+		n, err := m.MarshalToSizedBuffer(b)
+		if err != nil {
+			return nil, err
+		}
+		return b[:n], nil
+	}
+}
+func (m *CertificateAuthority) XXX_Merge(src proto.Message) {
+	xxx_messageInfo_CertificateAuthority.Merge(m, src)
+}
+func (m *CertificateAuthority) XXX_Size() int {
+	return m.Size()
+}
+func (m *CertificateAuthority) XXX_DiscardUnknown() {
+	xxx_messageInfo_CertificateAuthority.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_CertificateAuthority proto.InternalMessageInfo
+
 // RenewableCertificateGenerationMismatch is emitted when a renewable
 // certificate's generation counter fails to validate, possibly indicating a
 // stolen certificate and an invalid renewal attempt.
@@ -6766,7 +6895,7 @@ func (m *RenewableCertificateGenerationMismatch) Reset() {
 func (m *RenewableCertificateGenerationMismatch) String() string { return proto.CompactTextString(m) }
 func (*RenewableCertificateGenerationMismatch) ProtoMessage() {}
 func (*RenewableCertificateGenerationMismatch) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{122}
+	return fileDescriptor_007ba1c3d6266d56, []int{124}
 }
 func (m *RenewableCertificateGenerationMismatch) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -6824,7 +6953,7 @@ func (m *BotJoin) Reset() { *m = BotJoin{} }
 func (m *BotJoin) String() string { return proto.CompactTextString(m) }
 func (*BotJoin) ProtoMessage() {}
 func (*BotJoin) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{123}
+	return fileDescriptor_007ba1c3d6266d56, []int{125}
 }
 func (m *BotJoin) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -6886,7 +7015,7 @@ func (m *InstanceJoin) Reset() { *m = InstanceJoin{} }
 func (m *InstanceJoin) String() string { return proto.CompactTextString(m) }
 func (*InstanceJoin) ProtoMessage() {}
 func (*InstanceJoin) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{124}
+	return fileDescriptor_007ba1c3d6266d56, []int{126}
 }
 func (m *InstanceJoin) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -6934,7 +7063,7 @@ func (m *Unknown) Reset() { *m = Unknown{} }
 func (m *Unknown) String() string { return proto.CompactTextString(m) }
 func (*Unknown) ProtoMessage() {}
 func (*Unknown) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{125}
+	return fileDescriptor_007ba1c3d6266d56, []int{127}
 }
 func (m *Unknown) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -6991,7 +7120,7 @@ func (m *DeviceMetadata) Reset() { *m = DeviceMetadata{} }
 func (m *DeviceMetadata) String() string { return proto.CompactTextString(m) }
 func (*DeviceMetadata) ProtoMessage() {}
 func (*DeviceMetadata) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{126}
+	return fileDescriptor_007ba1c3d6266d56, []int{128}
 }
 func (m *DeviceMetadata) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -7042,7 +7171,7 @@ func (m *DeviceEvent) Reset() { *m = DeviceEvent{} }
 func (m *DeviceEvent) String() string { return proto.CompactTextString(m) }
 func (*DeviceEvent) ProtoMessage() {}
 func (*DeviceEvent) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{127}
+	return fileDescriptor_007ba1c3d6266d56, []int{129}
 }
 func (m *DeviceEvent) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -7094,7 +7223,7 @@ func (m *DeviceEvent2) Reset() { *m = DeviceEvent2{} }
 func (m *DeviceEvent2) String() string { return proto.CompactTextString(m) }
 func (*DeviceEvent2) ProtoMessage() {}
 func (*DeviceEvent2) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{128}
+	return fileDescriptor_007ba1c3d6266d56, []int{130}
 }
 func (m *DeviceEvent2) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -7142,7 +7271,7 @@ func (m *DiscoveryConfigCreate) Reset() { *m = DiscoveryConfigCreate{} }
 func (m *DiscoveryConfigCreate) String() string { return proto.CompactTextString(m) }
 func (*DiscoveryConfigCreate) ProtoMessage() {}
 func (*DiscoveryConfigCreate) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{129}
+	return fileDescriptor_007ba1c3d6266d56, []int{131}
 }
 func (m *DiscoveryConfigCreate) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -7190,7 +7319,7 @@ func (m *DiscoveryConfigUpdate) Reset() { *m = DiscoveryConfigUpdate{} }
 func (m *DiscoveryConfigUpdate) String() string { return proto.CompactTextString(m) }
 func (*DiscoveryConfigUpdate) ProtoMessage() {}
 func (*DiscoveryConfigUpdate) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{130}
+	return fileDescriptor_007ba1c3d6266d56, []int{132}
 }
 func (m *DiscoveryConfigUpdate) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -7238,7 +7367,7 @@ func (m *DiscoveryConfigDelete) Reset() { *m = DiscoveryConfigDelete{} }
 func (m *DiscoveryConfigDelete) String() string { return proto.CompactTextString(m) }
 func (*DiscoveryConfigDelete) ProtoMessage() {}
 func (*DiscoveryConfigDelete) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{131}
+	return fileDescriptor_007ba1c3d6266d56, []int{133}
 }
 func (m *DiscoveryConfigDelete) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -7284,7 +7413,7 @@ func (m *DiscoveryConfigDeleteAll) Reset() { *m = DiscoveryConfigDeleteA
 func (m *DiscoveryConfigDeleteAll) String() string { return proto.CompactTextString(m) }
 func (*DiscoveryConfigDeleteAll) ProtoMessage() {}
 func (*DiscoveryConfigDeleteAll) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{132}
+	return fileDescriptor_007ba1c3d6266d56, []int{134}
 }
 func (m *DiscoveryConfigDeleteAll) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -7333,7 +7462,7 @@ func (m *IntegrationCreate) Reset() { *m = IntegrationCreate{} }
 func (m *IntegrationCreate) String() string { return proto.CompactTextString(m) }
 func (*IntegrationCreate) ProtoMessage() {}
 func (*IntegrationCreate) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{133}
+	return fileDescriptor_007ba1c3d6266d56, []int{135}
 }
 func (m *IntegrationCreate) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -7382,7 +7511,7 @@ func (m *IntegrationUpdate) Reset() { *m = IntegrationUpdate{} }
 func (m *IntegrationUpdate) String() string { return proto.CompactTextString(m) }
 func (*IntegrationUpdate) ProtoMessage() {}
 func (*IntegrationUpdate) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{134}
+	return fileDescriptor_007ba1c3d6266d56, []int{136}
 }
 func (m *IntegrationUpdate) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -7431,7 +7560,7 @@ func (m *IntegrationDelete) Reset() { *m = IntegrationDelete{} }
 func (m *IntegrationDelete) String() string { return proto.CompactTextString(m) }
 func (*IntegrationDelete) ProtoMessage() {}
 func (*IntegrationDelete) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{135}
+	return fileDescriptor_007ba1c3d6266d56, []int{137}
 }
 func (m *IntegrationDelete) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -7481,7 +7610,7 @@ func (m *IntegrationMetadata) Reset() { *m = IntegrationMetadata{} }
 func (m *IntegrationMetadata) String() string { return proto.CompactTextString(m) }
 func (*IntegrationMetadata) ProtoMessage() {}
 func (*IntegrationMetadata) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{136}
+	return fileDescriptor_007ba1c3d6266d56, []int{138}
 }
 func (m *IntegrationMetadata) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -7526,7 +7655,7 @@ func (m *AWSOIDCIntegrationMetadata) Reset() { *m = AWSOIDCIntegrationMe
 func (m *AWSOIDCIntegrationMetadata) String() string { return proto.CompactTextString(m) }
 func (*AWSOIDCIntegrationMetadata) ProtoMessage() {}
 func (*AWSOIDCIntegrationMetadata) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{137}
+	return fileDescriptor_007ba1c3d6266d56, []int{139}
 }
 func (m *AWSOIDCIntegrationMetadata) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -7570,7 +7699,7 @@ func (m *AzureOIDCIntegrationMetadata) Reset() { *m = AzureOIDCIntegrati
 func (m *AzureOIDCIntegrationMetadata) String() string { return proto.CompactTextString(m) }
 func (*AzureOIDCIntegrationMetadata) ProtoMessage() {}
 func (*AzureOIDCIntegrationMetadata) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{138}
+	return fileDescriptor_007ba1c3d6266d56, []int{140}
 }
 func (m *AzureOIDCIntegrationMetadata) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -7612,7 +7741,7 @@ func (m *GitHubIntegrationMetadata) Reset() { *m = GitHubIntegrationMeta
 func (m *GitHubIntegrationMetadata) String() string { return proto.CompactTextString(m) }
 func (*GitHubIntegrationMetadata) ProtoMessage() {}
 func (*GitHubIntegrationMetadata) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{139}
+	return fileDescriptor_007ba1c3d6266d56, []int{141}
 }
 func (m *GitHubIntegrationMetadata) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -7654,7 +7783,7 @@ func (m *AWSRAIntegrationMetadata) Reset() { *m = AWSRAIntegrationMetada
 func (m *AWSRAIntegrationMetadata) String() string { return proto.CompactTextString(m) }
 func (*AWSRAIntegrationMetadata) ProtoMessage() {}
 func (*AWSRAIntegrationMetadata) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{140}
+	return fileDescriptor_007ba1c3d6266d56, []int{142}
 }
 func (m *AWSRAIntegrationMetadata) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -7703,7 +7832,7 @@ func (m *PluginCreate) Reset() { *m = PluginCreate{} }
 func (m *PluginCreate) String() string { return proto.CompactTextString(m) }
 func (*PluginCreate) ProtoMessage() {}
 func (*PluginCreate) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{141}
+	return fileDescriptor_007ba1c3d6266d56, []int{143}
 }
 func (m *PluginCreate) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -7752,7 +7881,7 @@ func (m *PluginUpdate) Reset() { *m = PluginUpdate{} }
 func (m *PluginUpdate) String() string { return proto.CompactTextString(m) }
 func (*PluginUpdate) ProtoMessage() {}
 func (*PluginUpdate) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{142}
+	return fileDescriptor_007ba1c3d6266d56, []int{144}
 }
 func (m *PluginUpdate) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -7801,7 +7930,7 @@ func (m *PluginDelete) Reset() { *m = PluginDelete{} }
 func (m *PluginDelete) String() string { return proto.CompactTextString(m) }
 func (*PluginDelete) ProtoMessage() {}
 func (*PluginDelete) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{143}
+	return fileDescriptor_007ba1c3d6266d56, []int{145}
 }
 func (m *PluginDelete) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -7850,7 +7979,7 @@ func (m *PluginMetadata) Reset() { *m = PluginMetadata{} }
 func (m *PluginMetadata) String() string { return proto.CompactTextString(m) }
 func (*PluginMetadata) ProtoMessage() {}
 func (*PluginMetadata) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{144}
+	return fileDescriptor_007ba1c3d6266d56, []int{146}
 }
 func (m *PluginMetadata) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -8093,6 +8222,21 @@ type OneOf struct {
 	//	*OneOf_AutoUpdateAgentRolloutTrigger
 	//	*OneOf_AutoUpdateAgentRolloutForceDone
 	//	*OneOf_AutoUpdateAgentRolloutRollback
+	//	*OneOf_MCPSessionStart
+	//	*OneOf_MCPSessionEnd
+	//	*OneOf_MCPSessionRequest
+	//	*OneOf_MCPSessionNotification
+	//	*OneOf_BoundKeypairRecovery
+	//	*OneOf_BoundKeypairRotation
+	//	*OneOf_BoundKeypairJoinStateVerificationFailed
+	//	*OneOf_MCPSessionListenSSEStream
+	//	*OneOf_MCPSessionInvalidHTTPRequest
+	//	*OneOf_SCIMListingEvent
+	//	*OneOf_SCIMResourceEvent
+	//	*OneOf_ClientIPRestrictionsUpdate
+	//	*OneOf_VnetConfigCreate
+	//	*OneOf_VnetConfigUpdate
+	//	*OneOf_VnetConfigDelete
 	Event isOneOf_Event `protobuf_oneof:"Event"`
 	XXX_NoUnkeyedLiteral struct{} `json:"-"`
 	XXX_unrecognized []byte `json:"-"`
@@ -8103,7 +8247,7 @@ func (m *OneOf) Reset() { *m = OneOf{} }
 func (m *OneOf) String() string { return proto.CompactTextString(m) }
 func (*OneOf) ProtoMessage() {}
 func (*OneOf) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{145}
+	return fileDescriptor_007ba1c3d6266d56, []int{147}
 }
 func (m *OneOf) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -8771,6 +8915,51 @@ type OneOf_AutoUpdateAgentRolloutForceDone struct {
 type OneOf_AutoUpdateAgentRolloutRollback struct {
 	AutoUpdateAgentRolloutRollback *AutoUpdateAgentRolloutRollback `protobuf:"bytes,215,opt,name=AutoUpdateAgentRolloutRollback,proto3,oneof" json:"AutoUpdateAgentRolloutRollback,omitempty"`
 }
+type OneOf_MCPSessionStart struct {
+	MCPSessionStart *MCPSessionStart `protobuf:"bytes,216,opt,name=MCPSessionStart,proto3,oneof" json:"MCPSessionStart,omitempty"`
+}
+type OneOf_MCPSessionEnd struct {
+	MCPSessionEnd *MCPSessionEnd `protobuf:"bytes,217,opt,name=MCPSessionEnd,proto3,oneof" json:"MCPSessionEnd,omitempty"`
+}
+type OneOf_MCPSessionRequest struct {
+	MCPSessionRequest *MCPSessionRequest `protobuf:"bytes,218,opt,name=MCPSessionRequest,proto3,oneof" json:"MCPSessionRequest,omitempty"`
+}
+type OneOf_MCPSessionNotification struct {
+	MCPSessionNotification *MCPSessionNotification `protobuf:"bytes,219,opt,name=MCPSessionNotification,proto3,oneof" json:"MCPSessionNotification,omitempty"`
+}
+type OneOf_BoundKeypairRecovery struct {
+	BoundKeypairRecovery *BoundKeypairRecovery `protobuf:"bytes,220,opt,name=BoundKeypairRecovery,proto3,oneof" json:"BoundKeypairRecovery,omitempty"`
+}
+type OneOf_BoundKeypairRotation struct {
+	BoundKeypairRotation *BoundKeypairRotation `protobuf:"bytes,221,opt,name=BoundKeypairRotation,proto3,oneof" json:"BoundKeypairRotation,omitempty"`
+}
+type OneOf_BoundKeypairJoinStateVerificationFailed struct {
+	BoundKeypairJoinStateVerificationFailed *BoundKeypairJoinStateVerificationFailed `protobuf:"bytes,222,opt,name=BoundKeypairJoinStateVerificationFailed,proto3,oneof" json:"BoundKeypairJoinStateVerificationFailed,omitempty"`
+}
+type OneOf_MCPSessionListenSSEStream struct {
+	MCPSessionListenSSEStream *MCPSessionListenSSEStream `protobuf:"bytes,223,opt,name=MCPSessionListenSSEStream,proto3,oneof" json:"MCPSessionListenSSEStream,omitempty"`
+}
+type OneOf_MCPSessionInvalidHTTPRequest struct {
+	MCPSessionInvalidHTTPRequest *MCPSessionInvalidHTTPRequest `protobuf:"bytes,224,opt,name=MCPSessionInvalidHTTPRequest,proto3,oneof" json:"MCPSessionInvalidHTTPRequest,omitempty"`
+}
+type OneOf_SCIMListingEvent struct {
+	SCIMListingEvent *SCIMListingEvent `protobuf:"bytes,225,opt,name=SCIMListingEvent,proto3,oneof" json:"SCIMListingEvent,omitempty"`
+}
+type OneOf_SCIMResourceEvent struct {
+	SCIMResourceEvent *SCIMResourceEvent `protobuf:"bytes,226,opt,name=SCIMResourceEvent,proto3,oneof" json:"SCIMResourceEvent,omitempty"`
+}
+type OneOf_ClientIPRestrictionsUpdate struct {
+	ClientIPRestrictionsUpdate *ClientIPRestrictionsUpdate `protobuf:"bytes,227,opt,name=ClientIPRestrictionsUpdate,proto3,oneof" json:"ClientIPRestrictionsUpdate,omitempty"`
+}
+type OneOf_VnetConfigCreate struct {
+	VnetConfigCreate *VnetConfigCreate `protobuf:"bytes,232,opt,name=VnetConfigCreate,proto3,oneof" json:"VnetConfigCreate,omitempty"`
+}
+type OneOf_VnetConfigUpdate struct {
+	VnetConfigUpdate *VnetConfigUpdate `protobuf:"bytes,233,opt,name=VnetConfigUpdate,proto3,oneof" json:"VnetConfigUpdate,omitempty"`
+}
+type OneOf_VnetConfigDelete struct {
+	VnetConfigDelete *VnetConfigDelete `protobuf:"bytes,234,opt,name=VnetConfigDelete,proto3,oneof" json:"VnetConfigDelete,omitempty"`
+}
 
 func (*OneOf_UserLogin) isOneOf_Event() {}
 func (*OneOf_UserCreate) isOneOf_Event() {}
@@ -8983,6 +9172,21 @@ func (*OneOf_SigstorePolicyDelete) isOneOf_Event() {}
 func (*OneOf_AutoUpdateAgentRolloutTrigger) isOneOf_Event() {}
 func (*OneOf_AutoUpdateAgentRolloutForceDone) isOneOf_Event() {}
 func (*OneOf_AutoUpdateAgentRolloutRollback) isOneOf_Event() {}
+func (*OneOf_MCPSessionStart) isOneOf_Event() {}
+func (*OneOf_MCPSessionEnd) isOneOf_Event() {}
+func (*OneOf_MCPSessionRequest) isOneOf_Event() {}
+func (*OneOf_MCPSessionNotification) isOneOf_Event() {}
+func (*OneOf_BoundKeypairRecovery) isOneOf_Event() {}
+func (*OneOf_BoundKeypairRotation) isOneOf_Event() {}
+func (*OneOf_BoundKeypairJoinStateVerificationFailed) isOneOf_Event() {}
+func (*OneOf_MCPSessionListenSSEStream) isOneOf_Event() {}
+func (*OneOf_MCPSessionInvalidHTTPRequest) isOneOf_Event() {}
+func (*OneOf_SCIMListingEvent) isOneOf_Event() {}
+func (*OneOf_SCIMResourceEvent) isOneOf_Event() {}
+func (*OneOf_ClientIPRestrictionsUpdate) isOneOf_Event() {}
+func (*OneOf_VnetConfigCreate) isOneOf_Event() {}
+func (*OneOf_VnetConfigUpdate) isOneOf_Event() {}
+func (*OneOf_VnetConfigDelete) isOneOf_Event() {}
 
 func (m *OneOf) GetEvent() isOneOf_Event {
 	if m != nil {
@@ -10468,6 +10672,111 @@ func (m *OneOf) GetAutoUpdateAgentRolloutRollback() *AutoUpdateAgentRolloutRollb
 	return nil
 }
 
+func (m *OneOf) GetMCPSessionStart() *MCPSessionStart {
+	if x, ok := m.GetEvent().(*OneOf_MCPSessionStart); ok {
+		return x.MCPSessionStart
+	}
+	return nil
+}
+
+func (m *OneOf) GetMCPSessionEnd() *MCPSessionEnd {
+	if x, ok := m.GetEvent().(*OneOf_MCPSessionEnd); ok {
+		return x.MCPSessionEnd
+	}
+	return nil
+}
+
+func (m *OneOf) GetMCPSessionRequest() *MCPSessionRequest {
+	if x, ok := m.GetEvent().(*OneOf_MCPSessionRequest); ok {
+		return x.MCPSessionRequest
+	}
+	return nil
+}
+
+func (m *OneOf) GetMCPSessionNotification() *MCPSessionNotification {
+	if x, ok := m.GetEvent().(*OneOf_MCPSessionNotification); ok {
+		return x.MCPSessionNotification
+	}
+	return nil
+}
+
+func (m *OneOf) GetBoundKeypairRecovery() *BoundKeypairRecovery {
+	if x, ok := m.GetEvent().(*OneOf_BoundKeypairRecovery); ok {
+		return x.BoundKeypairRecovery
+	}
+	return nil
+}
+
+func (m *OneOf) GetBoundKeypairRotation() *BoundKeypairRotation {
+	if x, ok := m.GetEvent().(*OneOf_BoundKeypairRotation); ok {
+		return x.BoundKeypairRotation
+	}
+	return nil
+}
+
+func (m *OneOf) GetBoundKeypairJoinStateVerificationFailed() *BoundKeypairJoinStateVerificationFailed {
+	if x, ok := m.GetEvent().(*OneOf_BoundKeypairJoinStateVerificationFailed); ok {
+		return x.BoundKeypairJoinStateVerificationFailed
+	}
+	return nil
+}
+
+func (m *OneOf) GetMCPSessionListenSSEStream() *MCPSessionListenSSEStream {
+	if x, ok := m.GetEvent().(*OneOf_MCPSessionListenSSEStream); ok {
+		return x.MCPSessionListenSSEStream
+	}
+	return nil
+}
+
+func (m *OneOf) GetMCPSessionInvalidHTTPRequest() *MCPSessionInvalidHTTPRequest {
+	if x, ok := m.GetEvent().(*OneOf_MCPSessionInvalidHTTPRequest); ok {
+		return x.MCPSessionInvalidHTTPRequest
+	}
+	return nil
+}
+
+func (m *OneOf) GetSCIMListingEvent() *SCIMListingEvent {
+	if x, ok := m.GetEvent().(*OneOf_SCIMListingEvent); ok {
+		return x.SCIMListingEvent
+	}
+	return nil
+}
+
+func (m *OneOf) GetSCIMResourceEvent() *SCIMResourceEvent {
+	if x, ok := m.GetEvent().(*OneOf_SCIMResourceEvent); ok {
+		return x.SCIMResourceEvent
+	}
+	return nil
+}
+
+func (m *OneOf) GetClientIPRestrictionsUpdate() *ClientIPRestrictionsUpdate {
+	if x, ok := m.GetEvent().(*OneOf_ClientIPRestrictionsUpdate); ok {
+		return x.ClientIPRestrictionsUpdate
+	}
+	return nil
+}
+
+func (m *OneOf) GetVnetConfigCreate() *VnetConfigCreate {
+	if x, ok := m.GetEvent().(*OneOf_VnetConfigCreate); ok {
+		return x.VnetConfigCreate
+	}
+	return nil
+}
+
+func (m *OneOf) GetVnetConfigUpdate() *VnetConfigUpdate {
+	if x, ok := m.GetEvent().(*OneOf_VnetConfigUpdate); ok {
+		return x.VnetConfigUpdate
+	}
+	return nil
+}
+
+func (m *OneOf) GetVnetConfigDelete() *VnetConfigDelete {
+	if x, ok := m.GetEvent().(*OneOf_VnetConfigDelete); ok {
+		return x.VnetConfigDelete
+	}
+	return nil
+}
+
 // XXX_OneofWrappers is for the internal use of the proto package.
 func (*OneOf) XXX_OneofWrappers() []interface{} {
 	return []interface{}{
@@ -10682,6 +10991,21 @@ func (*OneOf) XXX_OneofWrappers() []interface{} {
 		(*OneOf_AutoUpdateAgentRolloutTrigger)(nil),
 		(*OneOf_AutoUpdateAgentRolloutForceDone)(nil),
 		(*OneOf_AutoUpdateAgentRolloutRollback)(nil),
+		(*OneOf_MCPSessionStart)(nil),
+		(*OneOf_MCPSessionEnd)(nil),
+		(*OneOf_MCPSessionRequest)(nil),
+		(*OneOf_MCPSessionNotification)(nil),
+		(*OneOf_BoundKeypairRecovery)(nil),
+		(*OneOf_BoundKeypairRotation)(nil),
+		(*OneOf_BoundKeypairJoinStateVerificationFailed)(nil),
+		(*OneOf_MCPSessionListenSSEStream)(nil),
+		(*OneOf_MCPSessionInvalidHTTPRequest)(nil),
+		(*OneOf_SCIMListingEvent)(nil),
+		(*OneOf_SCIMResourceEvent)(nil),
+		(*OneOf_ClientIPRestrictionsUpdate)(nil),
+		(*OneOf_VnetConfigCreate)(nil),
+		(*OneOf_VnetConfigUpdate)(nil),
+		(*OneOf_VnetConfigDelete)(nil),
 	}
 }
@@ -10702,7 +11026,7 @@ func (m *StreamStatus) Reset() { *m = StreamStatus{} }
 func (m *StreamStatus) String() string { return proto.CompactTextString(m) }
 func (*StreamStatus) ProtoMessage() {}
 func (*StreamStatus) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{146}
+	return fileDescriptor_007ba1c3d6266d56, []int{148}
 }
 func (m *StreamStatus) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -10748,7 +11072,7 @@ func (m *SessionUpload) Reset() { *m = SessionUpload{} }
 func (m *SessionUpload) String() string { return proto.CompactTextString(m) }
 func (*SessionUpload) ProtoMessage() {}
 func (*SessionUpload) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{147}
+	return fileDescriptor_007ba1c3d6266d56, []int{149}
 }
 func (m *SessionUpload) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -10857,17 +11181,22 @@ type Identity struct {
 	DeviceExtensions *DeviceExtensions `protobuf:"bytes,28,opt,name=DeviceExtensions,proto3" json:"device_extensions,omitempty"`
 	// BotInstanceID indicates the name of the Machine ID bot instance this
 	// identity was issued to, if any.
-	BotInstanceID string `protobuf:"bytes,29,opt,name=BotInstanceID,proto3" json:"bot_instance_id,omitempty"`
-	XXX_NoUnkeyedLiteral struct{} `json:"-"`
-	XXX_unrecognized []byte `json:"-"`
-	XXX_sizecache int32 `json:"-"`
+	BotInstanceID string `protobuf:"bytes,29,opt,name=BotInstanceID,proto3" json:"bot_instance_id,omitempty"`
+	// JoinToken is the name of the join token used when a Machine ID bot joined,
+	// if any. It is not set for bots using the `token` join method.
+	JoinToken string `protobuf:"bytes,30,opt,name=JoinToken,proto3" json:"join_token,omitempty"`
+	// ScopePin pins a certificate to a specific scope and set of scoped roles.
+	ScopePin *ScopePin `protobuf:"bytes,31,opt,name=ScopePin,proto3" json:"scope_pin,omitempty"`
+	XXX_NoUnkeyedLiteral struct{} `json:"-"`
+	XXX_unrecognized []byte `json:"-"`
+	XXX_sizecache int32 `json:"-"`
 }
 
 func (m *Identity) Reset() { *m = Identity{} }
 func (m *Identity) String() string { return proto.CompactTextString(m) }
 func (*Identity) ProtoMessage() {}
 func (*Identity) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{148}
+	return fileDescriptor_007ba1c3d6266d56, []int{150}
 }
 func (m *Identity) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -10896,6 +11225,97 @@ func (m *Identity) XXX_DiscardUnknown() {
 
 var xxx_messageInfo_Identity proto.InternalMessageInfo
 
+// ScopePin is a marker that identifies a certificate/identity as being "pinned" to a target scope, and encodes relevant
+// information for access-control evaluation at that scope.
+type ScopePin struct {
+	// Scope is the target scope that this pin is associated with. This is the scope that the certificate/identity is
+	// pinned to. Any resources in parent/orthogonal scopes are not necessarily subject to the privileges/policies
+	// conveyed by this pin.
+	Scope string `protobuf:"bytes,1,opt,name=Scope,proto3" json:"scope"`
+	// Assignments encodes the scoped role assignments relevant to access-control decisions about the pinned identity. This may
+	// include assignments to parents of the pinned scope as well as assignments to equivalent/child scopes. Effectively, this
+	// means all assignments that are not orthogonal to the pinned scope.
+	Assignments map[string]*ScopePinnedAssignments `protobuf:"bytes,2,rep,name=Assignments,proto3" json:"assignments" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
+	XXX_NoUnkeyedLiteral struct{} `json:"-"`
+	XXX_unrecognized []byte `json:"-"`
+	XXX_sizecache int32 `json:"-"`
+}
+
+func (m *ScopePin) Reset() { *m = ScopePin{} }
+func (m *ScopePin) String() string { return proto.CompactTextString(m) }
+func (*ScopePin) ProtoMessage() {}
+func (*ScopePin) Descriptor() ([]byte, []int) {
+	return fileDescriptor_007ba1c3d6266d56, []int{151}
+}
+func (m *ScopePin) XXX_Unmarshal(b []byte) error {
+	return m.Unmarshal(b)
+}
+func (m *ScopePin) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+	if deterministic {
+		return xxx_messageInfo_ScopePin.Marshal(b, m, deterministic)
+	} else {
+		b = b[:cap(b)]
+		n, err := m.MarshalToSizedBuffer(b)
+		if err != nil {
+			return nil, err
+		}
+		return b[:n], nil
+	}
+}
+func (m *ScopePin) XXX_Merge(src proto.Message) {
+	xxx_messageInfo_ScopePin.Merge(m, src)
+}
+func (m *ScopePin) XXX_Size() int {
+	return m.Size()
+}
+func (m *ScopePin) XXX_DiscardUnknown() {
+	xxx_messageInfo_ScopePin.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_ScopePin proto.InternalMessageInfo
+
+// ScopePinnedAssignments is a collection of scoped role assignments that are relevant to the pinned identity at the target scope.
+type ScopePinnedAssignments struct {
+	// Roles is a list of scoped roles that are assigned to the pinned identity at the target scope.
+	Roles []string `protobuf:"bytes,1,rep,name=Roles,proto3" json:"roles"`
+	XXX_NoUnkeyedLiteral struct{} `json:"-"`
+	XXX_unrecognized []byte `json:"-"`
+	XXX_sizecache int32 `json:"-"`
+}
+
+func (m *ScopePinnedAssignments) Reset() { *m = ScopePinnedAssignments{} }
+func (m *ScopePinnedAssignments) String() string { return proto.CompactTextString(m) }
+func (*ScopePinnedAssignments) ProtoMessage() {}
+func (*ScopePinnedAssignments) Descriptor() ([]byte, []int) {
+	return fileDescriptor_007ba1c3d6266d56, []int{152}
+}
+func (m *ScopePinnedAssignments) XXX_Unmarshal(b []byte) error {
+	return m.Unmarshal(b)
+}
+func (m *ScopePinnedAssignments) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+	if deterministic {
+		return xxx_messageInfo_ScopePinnedAssignments.Marshal(b, m, deterministic)
+	} else {
+		b = b[:cap(b)]
+		n, err := m.MarshalToSizedBuffer(b)
+		if err != nil {
+			return nil, err
+		}
+		return b[:n], nil
+	}
+}
+func (m *ScopePinnedAssignments) XXX_Merge(src proto.Message) {
+	xxx_messageInfo_ScopePinnedAssignments.Merge(m, src)
+}
+func (m *ScopePinnedAssignments) XXX_Size() int {
+	return m.Size()
+}
+func (m *ScopePinnedAssignments) XXX_DiscardUnknown() {
+	xxx_messageInfo_ScopePinnedAssignments.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_ScopePinnedAssignments proto.InternalMessageInfo
+
 // RouteToApp contains parameters for application access certificate requests.
 type RouteToApp struct {
 	// Name is the application name certificate is being requested for.
@@ -10926,7 +11346,7 @@ func (m *RouteToApp) Reset() { *m = RouteToApp{} }
 func (m *RouteToApp) String() string { return proto.CompactTextString(m) }
 func (*RouteToApp) ProtoMessage() {}
 func (*RouteToApp) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{149}
+	return fileDescriptor_007ba1c3d6266d56, []int{153}
 }
 func (m *RouteToApp) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -10976,7 +11396,7 @@ func (m *RouteToDatabase) Reset() { *m = RouteToDatabase{} }
 func (m *RouteToDatabase) String() string { return proto.CompactTextString(m) }
 func (*RouteToDatabase) ProtoMessage() {}
 func (*RouteToDatabase) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{150}
+	return fileDescriptor_007ba1c3d6266d56, []int{154}
 }
 func (m *RouteToDatabase) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -11026,7 +11446,7 @@ func (m *DeviceExtensions) Reset() { *m = DeviceExtensions{} }
 func (m *DeviceExtensions) String() string { return proto.CompactTextString(m) }
 func (*DeviceExtensions) ProtoMessage() {}
 func (*DeviceExtensions) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{151}
+	return fileDescriptor_007ba1c3d6266d56, []int{155}
 }
 func (m *DeviceExtensions) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -11083,7 +11503,7 @@ func (m *AccessRequestResourceSearch) Reset() { *m = AccessRequestResour
 func (m *AccessRequestResourceSearch) String() string { return proto.CompactTextString(m) }
 func (*AccessRequestResourceSearch) ProtoMessage() {}
 func (*AccessRequestResourceSearch) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{152}
+	return fileDescriptor_007ba1c3d6266d56, []int{156}
 }
 func (m *AccessRequestResourceSearch) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -11134,7 +11554,7 @@ func (m *MySQLStatementPrepare) Reset() { *m = MySQLStatementPrepare{} }
 func (m *MySQLStatementPrepare) String() string { return proto.CompactTextString(m) }
 func (*MySQLStatementPrepare) ProtoMessage() {}
 func (*MySQLStatementPrepare) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{153}
+	return fileDescriptor_007ba1c3d6266d56, []int{157}
 }
 func (m *MySQLStatementPrepare) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -11187,7 +11607,7 @@ func (m *MySQLStatementExecute) Reset() { *m = MySQLStatementExecute{} }
 func (m *MySQLStatementExecute) String() string { return proto.CompactTextString(m) }
 func (*MySQLStatementExecute) ProtoMessage() {}
 func (*MySQLStatementExecute) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{154}
+	return fileDescriptor_007ba1c3d6266d56, []int{158}
 }
 func (m *MySQLStatementExecute) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -11242,7 +11662,7 @@ func (m *MySQLStatementSendLongData) Reset() { *m = MySQLStatementSendLo
 func (m *MySQLStatementSendLongData) String() string { return proto.CompactTextString(m) }
 func (*MySQLStatementSendLongData) ProtoMessage() {}
 func (*MySQLStatementSendLongData) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{155}
+	return fileDescriptor_007ba1c3d6266d56, []int{159}
 }
 func (m *MySQLStatementSendLongData) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -11293,7 +11713,7 @@ func (m *MySQLStatementClose) Reset() { *m = MySQLStatementClose{} }
 func (m *MySQLStatementClose) String() string { return proto.CompactTextString(m) }
 func (*MySQLStatementClose) ProtoMessage() {}
 func (*MySQLStatementClose) Descriptor() ([]byte, []int) {
-	return fileDescriptor_007ba1c3d6266d56, []int{156}
+	return fileDescriptor_007ba1c3d6266d56, []int{160}
 }
 func (m *MySQLStatementClose) XXX_Unmarshal(b []byte) error {
 	return m.Unmarshal(b)
@@ -11344,7 +11764,7 @@ func (m *MySQLStatementReset) Reset() { *m = MySQLStatementReset{} }
 func (m *MySQLStatementReset) String() string { return proto.CompactTextString(m) }
 func
(*MySQLStatementReset) ProtoMessage() {} func (*MySQLStatementReset) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{157} + return fileDescriptor_007ba1c3d6266d56, []int{161} } func (m *MySQLStatementReset) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11397,7 +11817,7 @@ func (m *MySQLStatementFetch) Reset() { *m = MySQLStatementFetch{} } func (m *MySQLStatementFetch) String() string { return proto.CompactTextString(m) } func (*MySQLStatementFetch) ProtoMessage() {} func (*MySQLStatementFetch) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{158} + return fileDescriptor_007ba1c3d6266d56, []int{162} } func (m *MySQLStatementFetch) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11450,7 +11870,7 @@ func (m *MySQLStatementBulkExecute) Reset() { *m = MySQLStatementBulkExe func (m *MySQLStatementBulkExecute) String() string { return proto.CompactTextString(m) } func (*MySQLStatementBulkExecute) ProtoMessage() {} func (*MySQLStatementBulkExecute) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{159} + return fileDescriptor_007ba1c3d6266d56, []int{163} } func (m *MySQLStatementBulkExecute) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11501,7 +11921,7 @@ func (m *MySQLInitDB) Reset() { *m = MySQLInitDB{} } func (m *MySQLInitDB) String() string { return proto.CompactTextString(m) } func (*MySQLInitDB) ProtoMessage() {} func (*MySQLInitDB) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{160} + return fileDescriptor_007ba1c3d6266d56, []int{164} } func (m *MySQLInitDB) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11551,7 +11971,7 @@ func (m *MySQLCreateDB) Reset() { *m = MySQLCreateDB{} } func (m *MySQLCreateDB) String() string { return proto.CompactTextString(m) } func (*MySQLCreateDB) ProtoMessage() {} func (*MySQLCreateDB) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{161} 
+ return fileDescriptor_007ba1c3d6266d56, []int{165} } func (m *MySQLCreateDB) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11601,7 +12021,7 @@ func (m *MySQLDropDB) Reset() { *m = MySQLDropDB{} } func (m *MySQLDropDB) String() string { return proto.CompactTextString(m) } func (*MySQLDropDB) ProtoMessage() {} func (*MySQLDropDB) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{162} + return fileDescriptor_007ba1c3d6266d56, []int{166} } func (m *MySQLDropDB) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11649,7 +12069,7 @@ func (m *MySQLShutDown) Reset() { *m = MySQLShutDown{} } func (m *MySQLShutDown) String() string { return proto.CompactTextString(m) } func (*MySQLShutDown) ProtoMessage() {} func (*MySQLShutDown) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{163} + return fileDescriptor_007ba1c3d6266d56, []int{167} } func (m *MySQLShutDown) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11700,7 +12120,7 @@ func (m *MySQLProcessKill) Reset() { *m = MySQLProcessKill{} } func (m *MySQLProcessKill) String() string { return proto.CompactTextString(m) } func (*MySQLProcessKill) ProtoMessage() {} func (*MySQLProcessKill) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{164} + return fileDescriptor_007ba1c3d6266d56, []int{168} } func (m *MySQLProcessKill) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11749,7 +12169,7 @@ func (m *MySQLDebug) Reset() { *m = MySQLDebug{} } func (m *MySQLDebug) String() string { return proto.CompactTextString(m) } func (*MySQLDebug) ProtoMessage() {} func (*MySQLDebug) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{165} + return fileDescriptor_007ba1c3d6266d56, []int{169} } func (m *MySQLDebug) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11799,7 +12219,7 @@ func (m *MySQLRefresh) Reset() { *m = MySQLRefresh{} } func (m *MySQLRefresh) String() string { return 
proto.CompactTextString(m) } func (*MySQLRefresh) ProtoMessage() {} func (*MySQLRefresh) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{166} + return fileDescriptor_007ba1c3d6266d56, []int{170} } func (m *MySQLRefresh) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11851,7 +12271,7 @@ func (m *SQLServerRPCRequest) Reset() { *m = SQLServerRPCRequest{} } func (m *SQLServerRPCRequest) String() string { return proto.CompactTextString(m) } func (*SQLServerRPCRequest) ProtoMessage() {} func (*SQLServerRPCRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{167} + return fileDescriptor_007ba1c3d6266d56, []int{171} } func (m *SQLServerRPCRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11901,7 +12321,7 @@ func (m *DatabaseSessionMalformedPacket) Reset() { *m = DatabaseSessionM func (m *DatabaseSessionMalformedPacket) String() string { return proto.CompactTextString(m) } func (*DatabaseSessionMalformedPacket) ProtoMessage() {} func (*DatabaseSessionMalformedPacket) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{168} + return fileDescriptor_007ba1c3d6266d56, []int{172} } func (m *DatabaseSessionMalformedPacket) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11968,7 +12388,7 @@ func (m *ElasticsearchRequest) Reset() { *m = ElasticsearchRequest{} } func (m *ElasticsearchRequest) String() string { return proto.CompactTextString(m) } func (*ElasticsearchRequest) ProtoMessage() {} func (*ElasticsearchRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{169} + return fileDescriptor_007ba1c3d6266d56, []int{173} } func (m *ElasticsearchRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12034,7 +12454,7 @@ func (m *OpenSearchRequest) Reset() { *m = OpenSearchRequest{} } func (m *OpenSearchRequest) String() string { return proto.CompactTextString(m) } func (*OpenSearchRequest) ProtoMessage() {} 
func (*OpenSearchRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{170} + return fileDescriptor_007ba1c3d6266d56, []int{174} } func (m *OpenSearchRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12094,7 +12514,7 @@ func (m *DynamoDBRequest) Reset() { *m = DynamoDBRequest{} } func (m *DynamoDBRequest) String() string { return proto.CompactTextString(m) } func (*DynamoDBRequest) ProtoMessage() {} func (*DynamoDBRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{171} + return fileDescriptor_007ba1c3d6266d56, []int{175} } func (m *DynamoDBRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12160,7 +12580,7 @@ func (m *AppSessionDynamoDBRequest) Reset() { *m = AppSessionDynamoDBReq func (m *AppSessionDynamoDBRequest) String() string { return proto.CompactTextString(m) } func (*AppSessionDynamoDBRequest) ProtoMessage() {} func (*AppSessionDynamoDBRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{172} + return fileDescriptor_007ba1c3d6266d56, []int{176} } func (m *AppSessionDynamoDBRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12202,7 +12622,7 @@ func (m *UpgradeWindowStartMetadata) Reset() { *m = UpgradeWindowStartMe func (m *UpgradeWindowStartMetadata) String() string { return proto.CompactTextString(m) } func (*UpgradeWindowStartMetadata) ProtoMessage() {} func (*UpgradeWindowStartMetadata) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{173} + return fileDescriptor_007ba1c3d6266d56, []int{177} } func (m *UpgradeWindowStartMetadata) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12250,7 +12670,7 @@ func (m *UpgradeWindowStartUpdate) Reset() { *m = UpgradeWindowStartUpda func (m *UpgradeWindowStartUpdate) String() string { return proto.CompactTextString(m) } func (*UpgradeWindowStartUpdate) ProtoMessage() {} func (*UpgradeWindowStartUpdate) Descriptor() ([]byte, 
[]int) { - return fileDescriptor_007ba1c3d6266d56, []int{174} + return fileDescriptor_007ba1c3d6266d56, []int{178} } func (m *UpgradeWindowStartUpdate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12301,7 +12721,7 @@ func (m *SessionRecordingAccess) Reset() { *m = SessionRecordingAccess{} func (m *SessionRecordingAccess) String() string { return proto.CompactTextString(m) } func (*SessionRecordingAccess) ProtoMessage() {} func (*SessionRecordingAccess) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{175} + return fileDescriptor_007ba1c3d6266d56, []int{179} } func (m *SessionRecordingAccess) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12343,7 +12763,7 @@ func (m *KubeClusterMetadata) Reset() { *m = KubeClusterMetadata{} } func (m *KubeClusterMetadata) String() string { return proto.CompactTextString(m) } func (*KubeClusterMetadata) ProtoMessage() {} func (*KubeClusterMetadata) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{176} + return fileDescriptor_007ba1c3d6266d56, []int{180} } func (m *KubeClusterMetadata) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12391,7 +12811,7 @@ func (m *KubernetesClusterCreate) Reset() { *m = KubernetesClusterCreate func (m *KubernetesClusterCreate) String() string { return proto.CompactTextString(m) } func (*KubernetesClusterCreate) ProtoMessage() {} func (*KubernetesClusterCreate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{177} + return fileDescriptor_007ba1c3d6266d56, []int{181} } func (m *KubernetesClusterCreate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12439,7 +12859,7 @@ func (m *KubernetesClusterUpdate) Reset() { *m = KubernetesClusterUpdate func (m *KubernetesClusterUpdate) String() string { return proto.CompactTextString(m) } func (*KubernetesClusterUpdate) ProtoMessage() {} func (*KubernetesClusterUpdate) Descriptor() ([]byte, []int) { - return 
fileDescriptor_007ba1c3d6266d56, []int{178} + return fileDescriptor_007ba1c3d6266d56, []int{182} } func (m *KubernetesClusterUpdate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12485,7 +12905,7 @@ func (m *KubernetesClusterDelete) Reset() { *m = KubernetesClusterDelete func (m *KubernetesClusterDelete) String() string { return proto.CompactTextString(m) } func (*KubernetesClusterDelete) ProtoMessage() {} func (*KubernetesClusterDelete) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{179} + return fileDescriptor_007ba1c3d6266d56, []int{183} } func (m *KubernetesClusterDelete) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12548,7 +12968,7 @@ func (m *SSMRun) Reset() { *m = SSMRun{} } func (m *SSMRun) String() string { return proto.CompactTextString(m) } func (*SSMRun) ProtoMessage() {} func (*SSMRun) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{180} + return fileDescriptor_007ba1c3d6266d56, []int{184} } func (m *SSMRun) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12601,7 +13021,7 @@ func (m *CassandraPrepare) Reset() { *m = CassandraPrepare{} } func (m *CassandraPrepare) String() string { return proto.CompactTextString(m) } func (*CassandraPrepare) ProtoMessage() {} func (*CassandraPrepare) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{181} + return fileDescriptor_007ba1c3d6266d56, []int{185} } func (m *CassandraPrepare) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12651,7 +13071,7 @@ func (m *CassandraExecute) Reset() { *m = CassandraExecute{} } func (m *CassandraExecute) String() string { return proto.CompactTextString(m) } func (*CassandraExecute) ProtoMessage() {} func (*CassandraExecute) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{182} + return fileDescriptor_007ba1c3d6266d56, []int{186} } func (m *CassandraExecute) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ 
-12707,7 +13127,7 @@ func (m *CassandraBatch) Reset() { *m = CassandraBatch{} } func (m *CassandraBatch) String() string { return proto.CompactTextString(m) } func (*CassandraBatch) ProtoMessage() {} func (*CassandraBatch) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{183} + return fileDescriptor_007ba1c3d6266d56, []int{187} } func (m *CassandraBatch) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12753,7 +13173,7 @@ func (m *CassandraBatch_BatchChild) Reset() { *m = CassandraBatch_BatchC func (m *CassandraBatch_BatchChild) String() string { return proto.CompactTextString(m) } func (*CassandraBatch_BatchChild) ProtoMessage() {} func (*CassandraBatch_BatchChild) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{183, 0} + return fileDescriptor_007ba1c3d6266d56, []int{187, 0} } func (m *CassandraBatch_BatchChild) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12797,7 +13217,7 @@ func (m *CassandraBatch_BatchChild_Value) Reset() { *m = CassandraBatch_ func (m *CassandraBatch_BatchChild_Value) String() string { return proto.CompactTextString(m) } func (*CassandraBatch_BatchChild_Value) ProtoMessage() {} func (*CassandraBatch_BatchChild_Value) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{183, 0, 0} + return fileDescriptor_007ba1c3d6266d56, []int{187, 0, 0} } func (m *CassandraBatch_BatchChild_Value) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12848,7 +13268,7 @@ func (m *CassandraRegister) Reset() { *m = CassandraRegister{} } func (m *CassandraRegister) String() string { return proto.CompactTextString(m) } func (*CassandraRegister) ProtoMessage() {} func (*CassandraRegister) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{184} + return fileDescriptor_007ba1c3d6266d56, []int{188} } func (m *CassandraRegister) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12894,7 +13314,7 @@ func (m *LoginRuleCreate) 
Reset() { *m = LoginRuleCreate{} } func (m *LoginRuleCreate) String() string { return proto.CompactTextString(m) } func (*LoginRuleCreate) ProtoMessage() {} func (*LoginRuleCreate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{185} + return fileDescriptor_007ba1c3d6266d56, []int{189} } func (m *LoginRuleCreate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12940,7 +13360,7 @@ func (m *LoginRuleDelete) Reset() { *m = LoginRuleDelete{} } func (m *LoginRuleDelete) String() string { return proto.CompactTextString(m) } func (*LoginRuleDelete) ProtoMessage() {} func (*LoginRuleDelete) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{186} + return fileDescriptor_007ba1c3d6266d56, []int{190} } func (m *LoginRuleDelete) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12990,7 +13410,7 @@ func (m *SAMLIdPAuthAttempt) Reset() { *m = SAMLIdPAuthAttempt{} } func (m *SAMLIdPAuthAttempt) String() string { return proto.CompactTextString(m) } func (*SAMLIdPAuthAttempt) ProtoMessage() {} func (*SAMLIdPAuthAttempt) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{187} + return fileDescriptor_007ba1c3d6266d56, []int{191} } func (m *SAMLIdPAuthAttempt) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13036,7 +13456,7 @@ func (m *SAMLIdPServiceProviderCreate) Reset() { *m = SAMLIdPServiceProv func (m *SAMLIdPServiceProviderCreate) String() string { return proto.CompactTextString(m) } func (*SAMLIdPServiceProviderCreate) ProtoMessage() {} func (*SAMLIdPServiceProviderCreate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{188} + return fileDescriptor_007ba1c3d6266d56, []int{192} } func (m *SAMLIdPServiceProviderCreate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13082,7 +13502,7 @@ func (m *SAMLIdPServiceProviderUpdate) Reset() { *m = SAMLIdPServiceProv func (m *SAMLIdPServiceProviderUpdate) String() string { return 
proto.CompactTextString(m) } func (*SAMLIdPServiceProviderUpdate) ProtoMessage() {} func (*SAMLIdPServiceProviderUpdate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{189} + return fileDescriptor_007ba1c3d6266d56, []int{193} } func (m *SAMLIdPServiceProviderUpdate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13128,7 +13548,7 @@ func (m *SAMLIdPServiceProviderDelete) Reset() { *m = SAMLIdPServiceProv func (m *SAMLIdPServiceProviderDelete) String() string { return proto.CompactTextString(m) } func (*SAMLIdPServiceProviderDelete) ProtoMessage() {} func (*SAMLIdPServiceProviderDelete) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{190} + return fileDescriptor_007ba1c3d6266d56, []int{194} } func (m *SAMLIdPServiceProviderDelete) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13172,7 +13592,7 @@ func (m *SAMLIdPServiceProviderDeleteAll) Reset() { *m = SAMLIdPServiceP func (m *SAMLIdPServiceProviderDeleteAll) String() string { return proto.CompactTextString(m) } func (*SAMLIdPServiceProviderDeleteAll) ProtoMessage() {} func (*SAMLIdPServiceProviderDeleteAll) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{191} + return fileDescriptor_007ba1c3d6266d56, []int{195} } func (m *SAMLIdPServiceProviderDeleteAll) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13218,7 +13638,7 @@ func (m *OktaResourcesUpdate) Reset() { *m = OktaResourcesUpdate{} } func (m *OktaResourcesUpdate) String() string { return proto.CompactTextString(m) } func (*OktaResourcesUpdate) ProtoMessage() {} func (*OktaResourcesUpdate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{192} + return fileDescriptor_007ba1c3d6266d56, []int{196} } func (m *OktaResourcesUpdate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13264,7 +13684,7 @@ func (m *OktaSyncFailure) Reset() { *m = OktaSyncFailure{} } func (m *OktaSyncFailure) String() string { 
return proto.CompactTextString(m) } func (*OktaSyncFailure) ProtoMessage() {} func (*OktaSyncFailure) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{193} + return fileDescriptor_007ba1c3d6266d56, []int{197} } func (m *OktaSyncFailure) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13314,7 +13734,7 @@ func (m *OktaAssignmentResult) Reset() { *m = OktaAssignmentResult{} } func (m *OktaAssignmentResult) String() string { return proto.CompactTextString(m) } func (*OktaAssignmentResult) ProtoMessage() {} func (*OktaAssignmentResult) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{194} + return fileDescriptor_007ba1c3d6266d56, []int{198} } func (m *OktaAssignmentResult) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13362,7 +13782,7 @@ func (m *AccessListCreate) Reset() { *m = AccessListCreate{} } func (m *AccessListCreate) String() string { return proto.CompactTextString(m) } func (*AccessListCreate) ProtoMessage() {} func (*AccessListCreate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{195} + return fileDescriptor_007ba1c3d6266d56, []int{199} } func (m *AccessListCreate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13410,7 +13830,7 @@ func (m *AccessListUpdate) Reset() { *m = AccessListUpdate{} } func (m *AccessListUpdate) String() string { return proto.CompactTextString(m) } func (*AccessListUpdate) ProtoMessage() {} func (*AccessListUpdate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{196} + return fileDescriptor_007ba1c3d6266d56, []int{200} } func (m *AccessListUpdate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13458,7 +13878,7 @@ func (m *AccessListDelete) Reset() { *m = AccessListDelete{} } func (m *AccessListDelete) String() string { return proto.CompactTextString(m) } func (*AccessListDelete) ProtoMessage() {} func (*AccessListDelete) Descriptor() ([]byte, []int) { - return 
fileDescriptor_007ba1c3d6266d56, []int{197} + return fileDescriptor_007ba1c3d6266d56, []int{201} } func (m *AccessListDelete) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13506,7 +13926,7 @@ func (m *AccessListMemberCreate) Reset() { *m = AccessListMemberCreate{} func (m *AccessListMemberCreate) String() string { return proto.CompactTextString(m) } func (*AccessListMemberCreate) ProtoMessage() {} func (*AccessListMemberCreate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{198} + return fileDescriptor_007ba1c3d6266d56, []int{202} } func (m *AccessListMemberCreate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13554,7 +13974,7 @@ func (m *AccessListMemberUpdate) Reset() { *m = AccessListMemberUpdate{} func (m *AccessListMemberUpdate) String() string { return proto.CompactTextString(m) } func (*AccessListMemberUpdate) ProtoMessage() {} func (*AccessListMemberUpdate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{199} + return fileDescriptor_007ba1c3d6266d56, []int{203} } func (m *AccessListMemberUpdate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13602,7 +14022,7 @@ func (m *AccessListMemberDelete) Reset() { *m = AccessListMemberDelete{} func (m *AccessListMemberDelete) String() string { return proto.CompactTextString(m) } func (*AccessListMemberDelete) ProtoMessage() {} func (*AccessListMemberDelete) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{200} + return fileDescriptor_007ba1c3d6266d56, []int{204} } func (m *AccessListMemberDelete) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13652,7 +14072,7 @@ func (m *AccessListMemberDeleteAllForAccessList) Reset() { func (m *AccessListMemberDeleteAllForAccessList) String() string { return proto.CompactTextString(m) } func (*AccessListMemberDeleteAllForAccessList) ProtoMessage() {} func (*AccessListMemberDeleteAllForAccessList) Descriptor() ([]byte, []int) { - return 
fileDescriptor_007ba1c3d6266d56, []int{201} + return fileDescriptor_007ba1c3d6266d56, []int{205} } func (m *AccessListMemberDeleteAllForAccessList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13700,7 +14120,7 @@ func (m *AccessListReview) Reset() { *m = AccessListReview{} } func (m *AccessListReview) String() string { return proto.CompactTextString(m) } func (*AccessListReview) ProtoMessage() {} func (*AccessListReview) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{202} + return fileDescriptor_007ba1c3d6266d56, []int{206} } func (m *AccessListReview) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13748,7 +14168,7 @@ func (m *AuditQueryRun) Reset() { *m = AuditQueryRun{} } func (m *AuditQueryRun) String() string { return proto.CompactTextString(m) } func (*AuditQueryRun) ProtoMessage() {} func (*AuditQueryRun) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{203} + return fileDescriptor_007ba1c3d6266d56, []int{207} } func (m *AuditQueryRun) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13798,7 +14218,7 @@ func (m *AuditQueryDetails) Reset() { *m = AuditQueryDetails{} } func (m *AuditQueryDetails) String() string { return proto.CompactTextString(m) } func (*AuditQueryDetails) ProtoMessage() {} func (*AuditQueryDetails) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{204} + return fileDescriptor_007ba1c3d6266d56, []int{208} } func (m *AuditQueryDetails) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13854,7 +14274,7 @@ func (m *SecurityReportRun) Reset() { *m = SecurityReportRun{} } func (m *SecurityReportRun) String() string { return proto.CompactTextString(m) } func (*SecurityReportRun) ProtoMessage() {} func (*SecurityReportRun) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{205} + return fileDescriptor_007ba1c3d6266d56, []int{209} } func (m *SecurityReportRun) XXX_Unmarshal(b []byte) error { 
return m.Unmarshal(b) @@ -13902,7 +14322,7 @@ func (m *ExternalAuditStorageEnable) Reset() { *m = ExternalAuditStorage func (m *ExternalAuditStorageEnable) String() string { return proto.CompactTextString(m) } func (*ExternalAuditStorageEnable) ProtoMessage() {} func (*ExternalAuditStorageEnable) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{206} + return fileDescriptor_007ba1c3d6266d56, []int{210} } func (m *ExternalAuditStorageEnable) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13950,7 +14370,7 @@ func (m *ExternalAuditStorageDisable) Reset() { *m = ExternalAuditStorag func (m *ExternalAuditStorageDisable) String() string { return proto.CompactTextString(m) } func (*ExternalAuditStorageDisable) ProtoMessage() {} func (*ExternalAuditStorageDisable) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{207} + return fileDescriptor_007ba1c3d6266d56, []int{211} } func (m *ExternalAuditStorageDisable) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14011,7 +14431,7 @@ func (m *ExternalAuditStorageDetails) Reset() { *m = ExternalAuditStorag func (m *ExternalAuditStorageDetails) String() string { return proto.CompactTextString(m) } func (*ExternalAuditStorageDetails) ProtoMessage() {} func (*ExternalAuditStorageDetails) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{208} + return fileDescriptor_007ba1c3d6266d56, []int{212} } func (m *ExternalAuditStorageDetails) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14069,7 +14489,7 @@ func (m *OktaAccessListSync) Reset() { *m = OktaAccessListSync{} } func (m *OktaAccessListSync) String() string { return proto.CompactTextString(m) } func (*OktaAccessListSync) ProtoMessage() {} func (*OktaAccessListSync) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{209} + return fileDescriptor_007ba1c3d6266d56, []int{213} } func (m *OktaAccessListSync) XXX_Unmarshal(b []byte) error { 
return m.Unmarshal(b) @@ -14133,7 +14553,7 @@ func (m *OktaUserSync) Reset() { *m = OktaUserSync{} } func (m *OktaUserSync) String() string { return proto.CompactTextString(m) } func (*OktaUserSync) ProtoMessage() {} func (*OktaUserSync) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{210} + return fileDescriptor_007ba1c3d6266d56, []int{214} } func (m *OktaUserSync) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14162,6 +14582,49 @@ func (m *OktaUserSync) XXX_DiscardUnknown() { var xxx_messageInfo_OktaUserSync proto.InternalMessageInfo +// LabelSelector is a label selector that can be specified by the client to +// filter the resources that are returned by the server. +type LabelSelector struct { + Key string `protobuf:"bytes,1,opt,name=Key,proto3" json:"key"` + Values []string `protobuf:"bytes,2,rep,name=Values,proto3" json:"values"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *LabelSelector) Reset() { *m = LabelSelector{} } +func (m *LabelSelector) String() string { return proto.CompactTextString(m) } +func (*LabelSelector) ProtoMessage() {} +func (*LabelSelector) Descriptor() ([]byte, []int) { + return fileDescriptor_007ba1c3d6266d56, []int{215} +} +func (m *LabelSelector) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *LabelSelector) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_LabelSelector.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *LabelSelector) XXX_Merge(src proto.Message) { + xxx_messageInfo_LabelSelector.Merge(m, src) +} +func (m *LabelSelector) XXX_Size() int { + return m.Size() +} +func (m *LabelSelector) XXX_DiscardUnknown() { + xxx_messageInfo_LabelSelector.DiscardUnknown(m) +} + +var xxx_messageInfo_LabelSelector 
proto.InternalMessageInfo + // SPIFFESVIDIssued is an event recorded when a SPIFFE SVID is issued. type SPIFFESVIDIssued struct { // Metadata is a common event metadata @@ -14196,17 +14659,21 @@ type SPIFFESVIDIssued struct { WorkloadIdentityRevision string `protobuf:"bytes,13,opt,name=WorkloadIdentityRevision,proto3" json:"workload_identity_revision,omitempty"` // Attributes is the collection of data that was used to make the decision on // whether to issue the workload identity credential & to perform templating. - Attributes *Struct `protobuf:"bytes,14,opt,name=Attributes,proto3,casttype=Struct" json:"attributes,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Attributes *Struct `protobuf:"bytes,14,opt,name=Attributes,proto3,casttype=Struct" json:"attributes,omitempty"` + // The name selector specified by the client when requesting the SVID. + NameSelector string `protobuf:"bytes,15,opt,name=NameSelector,proto3" json:"name_selector,omitempty"` + // The label selectors specified by the client when requesting the SVID. 
+ LabelSelectors []*LabelSelector `protobuf:"bytes,16,rep,name=LabelSelectors,proto3" json:"label_selectors,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *SPIFFESVIDIssued) Reset() { *m = SPIFFESVIDIssued{} } func (m *SPIFFESVIDIssued) String() string { return proto.CompactTextString(m) } func (*SPIFFESVIDIssued) ProtoMessage() {} func (*SPIFFESVIDIssued) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{211} + return fileDescriptor_007ba1c3d6266d56, []int{216} } func (m *SPIFFESVIDIssued) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14257,7 +14724,7 @@ func (m *AuthPreferenceUpdate) Reset() { *m = AuthPreferenceUpdate{} } func (m *AuthPreferenceUpdate) String() string { return proto.CompactTextString(m) } func (*AuthPreferenceUpdate) ProtoMessage() {} func (*AuthPreferenceUpdate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{212} + return fileDescriptor_007ba1c3d6266d56, []int{217} } func (m *AuthPreferenceUpdate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14305,7 +14772,7 @@ func (m *ClusterNetworkingConfigUpdate) Reset() { *m = ClusterNetworking func (m *ClusterNetworkingConfigUpdate) String() string { return proto.CompactTextString(m) } func (*ClusterNetworkingConfigUpdate) ProtoMessage() {} func (*ClusterNetworkingConfigUpdate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{213} + return fileDescriptor_007ba1c3d6266d56, []int{218} } func (m *ClusterNetworkingConfigUpdate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14353,7 +14820,7 @@ func (m *SessionRecordingConfigUpdate) Reset() { *m = SessionRecordingCo func (m *SessionRecordingConfigUpdate) String() string { return proto.CompactTextString(m) } func (*SessionRecordingConfigUpdate) ProtoMessage() {} func (*SessionRecordingConfigUpdate) Descriptor() ([]byte, []int) { - return 
fileDescriptor_007ba1c3d6266d56, []int{214} + return fileDescriptor_007ba1c3d6266d56, []int{219} } func (m *SessionRecordingConfigUpdate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14403,7 +14870,7 @@ func (m *AccessPathChanged) Reset() { *m = AccessPathChanged{} } func (m *AccessPathChanged) String() string { return proto.CompactTextString(m) } func (*AccessPathChanged) ProtoMessage() {} func (*AccessPathChanged) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{215} + return fileDescriptor_007ba1c3d6266d56, []int{220} } func (m *AccessPathChanged) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14457,7 +14924,7 @@ func (m *SpannerRPC) Reset() { *m = SpannerRPC{} } func (m *SpannerRPC) String() string { return proto.CompactTextString(m) } func (*SpannerRPC) ProtoMessage() {} func (*SpannerRPC) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{216} + return fileDescriptor_007ba1c3d6266d56, []int{221} } func (m *SpannerRPC) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14505,7 +14972,7 @@ func (m *AccessGraphSettingsUpdate) Reset() { *m = AccessGraphSettingsUp func (m *AccessGraphSettingsUpdate) String() string { return proto.CompactTextString(m) } func (*AccessGraphSettingsUpdate) ProtoMessage() {} func (*AccessGraphSettingsUpdate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{217} + return fileDescriptor_007ba1c3d6266d56, []int{222} } func (m *AccessGraphSettingsUpdate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14553,7 +15020,7 @@ func (m *SPIFFEFederationCreate) Reset() { *m = SPIFFEFederationCreate{} func (m *SPIFFEFederationCreate) String() string { return proto.CompactTextString(m) } func (*SPIFFEFederationCreate) ProtoMessage() {} func (*SPIFFEFederationCreate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{218} + return fileDescriptor_007ba1c3d6266d56, []int{223} } func (m 
*SPIFFEFederationCreate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14601,7 +15068,7 @@ func (m *SPIFFEFederationDelete) Reset() { *m = SPIFFEFederationDelete{} func (m *SPIFFEFederationDelete) String() string { return proto.CompactTextString(m) } func (*SPIFFEFederationDelete) ProtoMessage() {} func (*SPIFFEFederationDelete) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{219} + return fileDescriptor_007ba1c3d6266d56, []int{224} } func (m *SPIFFEFederationDelete) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14651,7 +15118,7 @@ func (m *AutoUpdateConfigCreate) Reset() { *m = AutoUpdateConfigCreate{} func (m *AutoUpdateConfigCreate) String() string { return proto.CompactTextString(m) } func (*AutoUpdateConfigCreate) ProtoMessage() {} func (*AutoUpdateConfigCreate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{220} + return fileDescriptor_007ba1c3d6266d56, []int{225} } func (m *AutoUpdateConfigCreate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14701,7 +15168,7 @@ func (m *AutoUpdateConfigUpdate) Reset() { *m = AutoUpdateConfigUpdate{} func (m *AutoUpdateConfigUpdate) String() string { return proto.CompactTextString(m) } func (*AutoUpdateConfigUpdate) ProtoMessage() {} func (*AutoUpdateConfigUpdate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{221} + return fileDescriptor_007ba1c3d6266d56, []int{226} } func (m *AutoUpdateConfigUpdate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14751,7 +15218,7 @@ func (m *AutoUpdateConfigDelete) Reset() { *m = AutoUpdateConfigDelete{} func (m *AutoUpdateConfigDelete) String() string { return proto.CompactTextString(m) } func (*AutoUpdateConfigDelete) ProtoMessage() {} func (*AutoUpdateConfigDelete) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{222} + return fileDescriptor_007ba1c3d6266d56, []int{227} } func (m *AutoUpdateConfigDelete) 
XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14801,7 +15268,7 @@ func (m *AutoUpdateVersionCreate) Reset() { *m = AutoUpdateVersionCreate func (m *AutoUpdateVersionCreate) String() string { return proto.CompactTextString(m) } func (*AutoUpdateVersionCreate) ProtoMessage() {} func (*AutoUpdateVersionCreate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{223} + return fileDescriptor_007ba1c3d6266d56, []int{228} } func (m *AutoUpdateVersionCreate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14851,7 +15318,7 @@ func (m *AutoUpdateVersionUpdate) Reset() { *m = AutoUpdateVersionUpdate func (m *AutoUpdateVersionUpdate) String() string { return proto.CompactTextString(m) } func (*AutoUpdateVersionUpdate) ProtoMessage() {} func (*AutoUpdateVersionUpdate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{224} + return fileDescriptor_007ba1c3d6266d56, []int{229} } func (m *AutoUpdateVersionUpdate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14901,7 +15368,7 @@ func (m *AutoUpdateVersionDelete) Reset() { *m = AutoUpdateVersionDelete func (m *AutoUpdateVersionDelete) String() string { return proto.CompactTextString(m) } func (*AutoUpdateVersionDelete) ProtoMessage() {} func (*AutoUpdateVersionDelete) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{225} + return fileDescriptor_007ba1c3d6266d56, []int{230} } func (m *AutoUpdateVersionDelete) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14951,7 +15418,7 @@ func (m *AutoUpdateAgentRolloutTrigger) Reset() { *m = AutoUpdateAgentRo func (m *AutoUpdateAgentRolloutTrigger) String() string { return proto.CompactTextString(m) } func (*AutoUpdateAgentRolloutTrigger) ProtoMessage() {} func (*AutoUpdateAgentRolloutTrigger) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{226} + return fileDescriptor_007ba1c3d6266d56, []int{231} } func (m *AutoUpdateAgentRolloutTrigger) 
XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15001,7 +15468,7 @@ func (m *AutoUpdateAgentRolloutForceDone) Reset() { *m = AutoUpdateAgent func (m *AutoUpdateAgentRolloutForceDone) String() string { return proto.CompactTextString(m) } func (*AutoUpdateAgentRolloutForceDone) ProtoMessage() {} func (*AutoUpdateAgentRolloutForceDone) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{227} + return fileDescriptor_007ba1c3d6266d56, []int{232} } func (m *AutoUpdateAgentRolloutForceDone) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15051,7 +15518,7 @@ func (m *AutoUpdateAgentRolloutRollback) Reset() { *m = AutoUpdateAgentR func (m *AutoUpdateAgentRolloutRollback) String() string { return proto.CompactTextString(m) } func (*AutoUpdateAgentRolloutRollback) ProtoMessage() {} func (*AutoUpdateAgentRolloutRollback) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{228} + return fileDescriptor_007ba1c3d6266d56, []int{233} } func (m *AutoUpdateAgentRolloutRollback) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15101,7 +15568,7 @@ func (m *StaticHostUserCreate) Reset() { *m = StaticHostUserCreate{} } func (m *StaticHostUserCreate) String() string { return proto.CompactTextString(m) } func (*StaticHostUserCreate) ProtoMessage() {} func (*StaticHostUserCreate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{229} + return fileDescriptor_007ba1c3d6266d56, []int{234} } func (m *StaticHostUserCreate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15151,7 +15618,7 @@ func (m *StaticHostUserUpdate) Reset() { *m = StaticHostUserUpdate{} } func (m *StaticHostUserUpdate) String() string { return proto.CompactTextString(m) } func (*StaticHostUserUpdate) ProtoMessage() {} func (*StaticHostUserUpdate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{230} + return fileDescriptor_007ba1c3d6266d56, []int{235} } func (m 
*StaticHostUserUpdate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15201,7 +15668,7 @@ func (m *StaticHostUserDelete) Reset() { *m = StaticHostUserDelete{} } func (m *StaticHostUserDelete) String() string { return proto.CompactTextString(m) } func (*StaticHostUserDelete) ProtoMessage() {} func (*StaticHostUserDelete) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{231} + return fileDescriptor_007ba1c3d6266d56, []int{236} } func (m *StaticHostUserDelete) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15253,7 +15720,7 @@ func (m *CrownJewelCreate) Reset() { *m = CrownJewelCreate{} } func (m *CrownJewelCreate) String() string { return proto.CompactTextString(m) } func (*CrownJewelCreate) ProtoMessage() {} func (*CrownJewelCreate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{232} + return fileDescriptor_007ba1c3d6266d56, []int{237} } func (m *CrownJewelCreate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15307,7 +15774,7 @@ func (m *CrownJewelUpdate) Reset() { *m = CrownJewelUpdate{} } func (m *CrownJewelUpdate) String() string { return proto.CompactTextString(m) } func (*CrownJewelUpdate) ProtoMessage() {} func (*CrownJewelUpdate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{233} + return fileDescriptor_007ba1c3d6266d56, []int{238} } func (m *CrownJewelUpdate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15357,7 +15824,7 @@ func (m *CrownJewelDelete) Reset() { *m = CrownJewelDelete{} } func (m *CrownJewelDelete) String() string { return proto.CompactTextString(m) } func (*CrownJewelDelete) ProtoMessage() {} func (*CrownJewelDelete) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{234} + return fileDescriptor_007ba1c3d6266d56, []int{239} } func (m *CrownJewelDelete) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15409,7 +15876,7 @@ func (m *UserTaskCreate) Reset() { *m = 
UserTaskCreate{} } func (m *UserTaskCreate) String() string { return proto.CompactTextString(m) } func (*UserTaskCreate) ProtoMessage() {} func (*UserTaskCreate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{235} + return fileDescriptor_007ba1c3d6266d56, []int{240} } func (m *UserTaskCreate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15465,7 +15932,7 @@ func (m *UserTaskUpdate) Reset() { *m = UserTaskUpdate{} } func (m *UserTaskUpdate) String() string { return proto.CompactTextString(m) } func (*UserTaskUpdate) ProtoMessage() {} func (*UserTaskUpdate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{236} + return fileDescriptor_007ba1c3d6266d56, []int{241} } func (m *UserTaskUpdate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15511,7 +15978,7 @@ func (m *UserTaskMetadata) Reset() { *m = UserTaskMetadata{} } func (m *UserTaskMetadata) String() string { return proto.CompactTextString(m) } func (*UserTaskMetadata) ProtoMessage() {} func (*UserTaskMetadata) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{237} + return fileDescriptor_007ba1c3d6266d56, []int{242} } func (m *UserTaskMetadata) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15561,7 +16028,7 @@ func (m *UserTaskDelete) Reset() { *m = UserTaskDelete{} } func (m *UserTaskDelete) String() string { return proto.CompactTextString(m) } func (*UserTaskDelete) ProtoMessage() {} func (*UserTaskDelete) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{238} + return fileDescriptor_007ba1c3d6266d56, []int{243} } func (m *UserTaskDelete) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15615,7 +16082,7 @@ func (m *ContactCreate) Reset() { *m = ContactCreate{} } func (m *ContactCreate) String() string { return proto.CompactTextString(m) } func (*ContactCreate) ProtoMessage() {} func (*ContactCreate) Descriptor() ([]byte, []int) { - return 
fileDescriptor_007ba1c3d6266d56, []int{239} + return fileDescriptor_007ba1c3d6266d56, []int{244} } func (m *ContactCreate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15669,7 +16136,7 @@ func (m *ContactDelete) Reset() { *m = ContactDelete{} } func (m *ContactDelete) String() string { return proto.CompactTextString(m) } func (*ContactDelete) ProtoMessage() {} func (*ContactDelete) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{240} + return fileDescriptor_007ba1c3d6266d56, []int{245} } func (m *ContactDelete) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15719,7 +16186,7 @@ func (m *WorkloadIdentityCreate) Reset() { *m = WorkloadIdentityCreate{} func (m *WorkloadIdentityCreate) String() string { return proto.CompactTextString(m) } func (*WorkloadIdentityCreate) ProtoMessage() {} func (*WorkloadIdentityCreate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{241} + return fileDescriptor_007ba1c3d6266d56, []int{246} } func (m *WorkloadIdentityCreate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15769,7 +16236,7 @@ func (m *WorkloadIdentityUpdate) Reset() { *m = WorkloadIdentityUpdate{} func (m *WorkloadIdentityUpdate) String() string { return proto.CompactTextString(m) } func (*WorkloadIdentityUpdate) ProtoMessage() {} func (*WorkloadIdentityUpdate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{242} + return fileDescriptor_007ba1c3d6266d56, []int{247} } func (m *WorkloadIdentityUpdate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15817,7 +16284,7 @@ func (m *WorkloadIdentityDelete) Reset() { *m = WorkloadIdentityDelete{} func (m *WorkloadIdentityDelete) String() string { return proto.CompactTextString(m) } func (*WorkloadIdentityDelete) ProtoMessage() {} func (*WorkloadIdentityDelete) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{243} + return fileDescriptor_007ba1c3d6266d56, []int{248} } 
func (m *WorkloadIdentityDelete) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15868,7 +16335,7 @@ func (m *WorkloadIdentityX509RevocationCreate) Reset() { *m = WorkloadId func (m *WorkloadIdentityX509RevocationCreate) String() string { return proto.CompactTextString(m) } func (*WorkloadIdentityX509RevocationCreate) ProtoMessage() {} func (*WorkloadIdentityX509RevocationCreate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{244} + return fileDescriptor_007ba1c3d6266d56, []int{249} } func (m *WorkloadIdentityX509RevocationCreate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15919,7 +16386,7 @@ func (m *WorkloadIdentityX509RevocationUpdate) Reset() { *m = WorkloadId func (m *WorkloadIdentityX509RevocationUpdate) String() string { return proto.CompactTextString(m) } func (*WorkloadIdentityX509RevocationUpdate) ProtoMessage() {} func (*WorkloadIdentityX509RevocationUpdate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{245} + return fileDescriptor_007ba1c3d6266d56, []int{250} } func (m *WorkloadIdentityX509RevocationUpdate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15968,7 +16435,7 @@ func (m *WorkloadIdentityX509RevocationDelete) Reset() { *m = WorkloadId func (m *WorkloadIdentityX509RevocationDelete) String() string { return proto.CompactTextString(m) } func (*WorkloadIdentityX509RevocationDelete) ProtoMessage() {} func (*WorkloadIdentityX509RevocationDelete) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{246} + return fileDescriptor_007ba1c3d6266d56, []int{251} } func (m *WorkloadIdentityX509RevocationDelete) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16027,7 +16494,7 @@ func (m *GitCommand) Reset() { *m = GitCommand{} } func (m *GitCommand) String() string { return proto.CompactTextString(m) } func (*GitCommand) ProtoMessage() {} func (*GitCommand) Descriptor() ([]byte, []int) { - return 
fileDescriptor_007ba1c3d6266d56, []int{247} + return fileDescriptor_007ba1c3d6266d56, []int{252} } func (m *GitCommand) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16075,7 +16542,7 @@ func (m *GitCommandAction) Reset() { *m = GitCommandAction{} } func (m *GitCommandAction) String() string { return proto.CompactTextString(m) } func (*GitCommandAction) ProtoMessage() {} func (*GitCommandAction) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{248} + return fileDescriptor_007ba1c3d6266d56, []int{253} } func (m *GitCommandAction) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16121,7 +16588,7 @@ func (m *AccessListInvalidMetadata) Reset() { *m = AccessListInvalidMeta func (m *AccessListInvalidMetadata) String() string { return proto.CompactTextString(m) } func (*AccessListInvalidMetadata) ProtoMessage() {} func (*AccessListInvalidMetadata) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{249} + return fileDescriptor_007ba1c3d6266d56, []int{254} } func (m *AccessListInvalidMetadata) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16169,7 +16636,7 @@ func (m *UserLoginAccessListInvalid) Reset() { *m = UserLoginAccessListI func (m *UserLoginAccessListInvalid) String() string { return proto.CompactTextString(m) } func (*UserLoginAccessListInvalid) ProtoMessage() {} func (*UserLoginAccessListInvalid) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{250} + return fileDescriptor_007ba1c3d6266d56, []int{255} } func (m *UserLoginAccessListInvalid) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16215,7 +16682,7 @@ func (m *StableUNIXUserCreate) Reset() { *m = StableUNIXUserCreate{} } func (m *StableUNIXUserCreate) String() string { return proto.CompactTextString(m) } func (*StableUNIXUserCreate) ProtoMessage() {} func (*StableUNIXUserCreate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{251} + return 
fileDescriptor_007ba1c3d6266d56, []int{256} } func (m *StableUNIXUserCreate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16256,7 +16723,7 @@ func (m *StableUNIXUser) Reset() { *m = StableUNIXUser{} } func (m *StableUNIXUser) String() string { return proto.CompactTextString(m) } func (*StableUNIXUser) ProtoMessage() {} func (*StableUNIXUser) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{252} + return fileDescriptor_007ba1c3d6266d56, []int{257} } func (m *StableUNIXUser) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16310,7 +16777,7 @@ func (m *AWSICResourceSync) Reset() { *m = AWSICResourceSync{} } func (m *AWSICResourceSync) String() string { return proto.CompactTextString(m) } func (*AWSICResourceSync) ProtoMessage() {} func (*AWSICResourceSync) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{253} + return fileDescriptor_007ba1c3d6266d56, []int{258} } func (m *AWSICResourceSync) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16358,7 +16825,7 @@ func (m *HealthCheckConfigCreate) Reset() { *m = HealthCheckConfigCreate func (m *HealthCheckConfigCreate) String() string { return proto.CompactTextString(m) } func (*HealthCheckConfigCreate) ProtoMessage() {} func (*HealthCheckConfigCreate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{254} + return fileDescriptor_007ba1c3d6266d56, []int{259} } func (m *HealthCheckConfigCreate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16406,7 +16873,7 @@ func (m *HealthCheckConfigUpdate) Reset() { *m = HealthCheckConfigUpdate func (m *HealthCheckConfigUpdate) String() string { return proto.CompactTextString(m) } func (*HealthCheckConfigUpdate) ProtoMessage() {} func (*HealthCheckConfigUpdate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{255} + return fileDescriptor_007ba1c3d6266d56, []int{260} } func (m *HealthCheckConfigUpdate) XXX_Unmarshal(b []byte) 
error { return m.Unmarshal(b) @@ -16454,7 +16921,7 @@ func (m *HealthCheckConfigDelete) Reset() { *m = HealthCheckConfigDelete func (m *HealthCheckConfigDelete) String() string { return proto.CompactTextString(m) } func (*HealthCheckConfigDelete) ProtoMessage() {} func (*HealthCheckConfigDelete) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{256} + return fileDescriptor_007ba1c3d6266d56, []int{261} } func (m *HealthCheckConfigDelete) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16499,7 +16966,7 @@ func (m *WorkloadIdentityX509IssuerOverrideCreate) Reset() { func (m *WorkloadIdentityX509IssuerOverrideCreate) String() string { return proto.CompactTextString(m) } func (*WorkloadIdentityX509IssuerOverrideCreate) ProtoMessage() {} func (*WorkloadIdentityX509IssuerOverrideCreate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{257} + return fileDescriptor_007ba1c3d6266d56, []int{262} } func (m *WorkloadIdentityX509IssuerOverrideCreate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16544,7 +17011,7 @@ func (m *WorkloadIdentityX509IssuerOverrideDelete) Reset() { func (m *WorkloadIdentityX509IssuerOverrideDelete) String() string { return proto.CompactTextString(m) } func (*WorkloadIdentityX509IssuerOverrideDelete) ProtoMessage() {} func (*WorkloadIdentityX509IssuerOverrideDelete) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{258} + return fileDescriptor_007ba1c3d6266d56, []int{263} } func (m *WorkloadIdentityX509IssuerOverrideDelete) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16592,7 +17059,7 @@ func (m *SigstorePolicyCreate) Reset() { *m = SigstorePolicyCreate{} } func (m *SigstorePolicyCreate) String() string { return proto.CompactTextString(m) } func (*SigstorePolicyCreate) ProtoMessage() {} func (*SigstorePolicyCreate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{259} + return 
fileDescriptor_007ba1c3d6266d56, []int{264} } func (m *SigstorePolicyCreate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16640,7 +17107,7 @@ func (m *SigstorePolicyUpdate) Reset() { *m = SigstorePolicyUpdate{} } func (m *SigstorePolicyUpdate) String() string { return proto.CompactTextString(m) } func (*SigstorePolicyUpdate) ProtoMessage() {} func (*SigstorePolicyUpdate) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{260} + return fileDescriptor_007ba1c3d6266d56, []int{265} } func (m *SigstorePolicyUpdate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16688,7 +17155,7 @@ func (m *SigstorePolicyDelete) Reset() { *m = SigstorePolicyDelete{} } func (m *SigstorePolicyDelete) String() string { return proto.CompactTextString(m) } func (*SigstorePolicyDelete) ProtoMessage() {} func (*SigstorePolicyDelete) Descriptor() ([]byte, []int) { - return fileDescriptor_007ba1c3d6266d56, []int{261} + return fileDescriptor_007ba1c3d6266d56, []int{266} } func (m *SigstorePolicyDelete) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16717,2212 +17184,2619 @@ func (m *SigstorePolicyDelete) XXX_DiscardUnknown() { var xxx_messageInfo_SigstorePolicyDelete proto.InternalMessageInfo -func init() { - proto.RegisterEnum("events.UserKind", UserKind_name, UserKind_value) - proto.RegisterEnum("events.UserOrigin", UserOrigin_name, UserOrigin_value) - proto.RegisterEnum("events.EventAction", EventAction_name, EventAction_value) - proto.RegisterEnum("events.SFTPAction", SFTPAction_name, SFTPAction_value) - proto.RegisterEnum("events.OSType", OSType_name, OSType_value) - proto.RegisterEnum("events.DeviceOrigin", DeviceOrigin_name, DeviceOrigin_value) - proto.RegisterEnum("events.ElasticsearchCategory", ElasticsearchCategory_name, ElasticsearchCategory_value) - proto.RegisterEnum("events.OpenSearchCategory", OpenSearchCategory_name, OpenSearchCategory_value) - proto.RegisterEnum("events.AdminActionsMFAStatus", 
AdminActionsMFAStatus_name, AdminActionsMFAStatus_value) - proto.RegisterEnum("events.ContactType", ContactType_name, ContactType_value) - proto.RegisterEnum("events.SessionNetwork_NetworkOperation", SessionNetwork_NetworkOperation_name, SessionNetwork_NetworkOperation_value) - proto.RegisterType((*Metadata)(nil), "events.Metadata") - proto.RegisterType((*SessionMetadata)(nil), "events.SessionMetadata") - proto.RegisterType((*UserMetadata)(nil), "events.UserMetadata") - proto.RegisterType((*ServerMetadata)(nil), "events.ServerMetadata") - proto.RegisterMapType((map[string]string)(nil), "events.ServerMetadata.ServerLabelsEntry") - proto.RegisterType((*ConnectionMetadata)(nil), "events.ConnectionMetadata") - proto.RegisterType((*ClientMetadata)(nil), "events.ClientMetadata") - proto.RegisterType((*KubernetesClusterMetadata)(nil), "events.KubernetesClusterMetadata") - proto.RegisterMapType((map[string]string)(nil), "events.KubernetesClusterMetadata.KubernetesLabelsEntry") - proto.RegisterType((*KubernetesPodMetadata)(nil), "events.KubernetesPodMetadata") - proto.RegisterType((*SAMLIdPServiceProviderMetadata)(nil), "events.SAMLIdPServiceProviderMetadata") - proto.RegisterMapType((map[string]string)(nil), "events.SAMLIdPServiceProviderMetadata.AttributeMappingEntry") - proto.RegisterType((*OktaResourcesUpdatedMetadata)(nil), "events.OktaResourcesUpdatedMetadata") - proto.RegisterType((*OktaResource)(nil), "events.OktaResource") - proto.RegisterType((*OktaAssignmentMetadata)(nil), "events.OktaAssignmentMetadata") - proto.RegisterType((*AccessListMemberMetadata)(nil), "events.AccessListMemberMetadata") - proto.RegisterType((*AccessListMember)(nil), "events.AccessListMember") - proto.RegisterType((*AccessListReviewMembershipRequirementsChanged)(nil), "events.AccessListReviewMembershipRequirementsChanged") - proto.RegisterMapType((map[string]string)(nil), "events.AccessListReviewMembershipRequirementsChanged.TraitsEntry") - 
proto.RegisterType((*AccessListReviewMetadata)(nil), "events.AccessListReviewMetadata") - proto.RegisterType((*LockMetadata)(nil), "events.LockMetadata") - proto.RegisterType((*SessionStart)(nil), "events.SessionStart") - proto.RegisterType((*SessionJoin)(nil), "events.SessionJoin") - proto.RegisterType((*SessionPrint)(nil), "events.SessionPrint") - proto.RegisterType((*DesktopRecording)(nil), "events.DesktopRecording") - proto.RegisterType((*DesktopClipboardReceive)(nil), "events.DesktopClipboardReceive") - proto.RegisterType((*DesktopClipboardSend)(nil), "events.DesktopClipboardSend") - proto.RegisterType((*DesktopSharedDirectoryStart)(nil), "events.DesktopSharedDirectoryStart") - proto.RegisterType((*DesktopSharedDirectoryRead)(nil), "events.DesktopSharedDirectoryRead") - proto.RegisterType((*DesktopSharedDirectoryWrite)(nil), "events.DesktopSharedDirectoryWrite") - proto.RegisterType((*SessionReject)(nil), "events.SessionReject") - proto.RegisterType((*SessionConnect)(nil), "events.SessionConnect") - proto.RegisterType((*FileTransferRequestEvent)(nil), "events.FileTransferRequestEvent") - proto.RegisterType((*Resize)(nil), "events.Resize") - proto.RegisterType((*SessionEnd)(nil), "events.SessionEnd") - proto.RegisterType((*BPFMetadata)(nil), "events.BPFMetadata") - proto.RegisterType((*Status)(nil), "events.Status") - proto.RegisterType((*SessionCommand)(nil), "events.SessionCommand") - proto.RegisterType((*SessionDisk)(nil), "events.SessionDisk") - proto.RegisterType((*SessionNetwork)(nil), "events.SessionNetwork") - proto.RegisterType((*SessionData)(nil), "events.SessionData") - proto.RegisterType((*SessionLeave)(nil), "events.SessionLeave") - proto.RegisterType((*UserLogin)(nil), "events.UserLogin") - proto.RegisterType((*CreateMFAAuthChallenge)(nil), "events.CreateMFAAuthChallenge") - proto.RegisterType((*ValidateMFAAuthResponse)(nil), "events.ValidateMFAAuthResponse") - proto.RegisterType((*ResourceMetadata)(nil), "events.ResourceMetadata") - 
-	proto.RegisterType((*UserCreate)(nil), "events.UserCreate")
-	proto.RegisterType((*UserUpdate)(nil), "events.UserUpdate")
-	proto.RegisterType((*UserDelete)(nil), "events.UserDelete")
-	proto.RegisterType((*UserPasswordChange)(nil), "events.UserPasswordChange")
-	proto.RegisterType((*AccessRequestCreate)(nil), "events.AccessRequestCreate")
-	proto.RegisterType((*AccessRequestExpire)(nil), "events.AccessRequestExpire")
-	proto.RegisterType((*ResourceID)(nil), "events.ResourceID")
-	proto.RegisterType((*AccessRequestDelete)(nil), "events.AccessRequestDelete")
-	proto.RegisterType((*PortForward)(nil), "events.PortForward")
-	proto.RegisterType((*X11Forward)(nil), "events.X11Forward")
-	proto.RegisterType((*CommandMetadata)(nil), "events.CommandMetadata")
-	proto.RegisterType((*Exec)(nil), "events.Exec")
-	proto.RegisterType((*SCP)(nil), "events.SCP")
-	proto.RegisterType((*SFTPAttributes)(nil), "events.SFTPAttributes")
-	proto.RegisterType((*SFTP)(nil), "events.SFTP")
-	proto.RegisterType((*SFTPSummary)(nil), "events.SFTPSummary")
-	proto.RegisterType((*FileTransferStat)(nil), "events.FileTransferStat")
-	proto.RegisterType((*Subsystem)(nil), "events.Subsystem")
-	proto.RegisterType((*ClientDisconnect)(nil), "events.ClientDisconnect")
-	proto.RegisterType((*AuthAttempt)(nil), "events.AuthAttempt")
-	proto.RegisterType((*UserTokenCreate)(nil), "events.UserTokenCreate")
-	proto.RegisterType((*RoleCreate)(nil), "events.RoleCreate")
-	proto.RegisterType((*RoleUpdate)(nil), "events.RoleUpdate")
-	proto.RegisterType((*RoleDelete)(nil), "events.RoleDelete")
-	proto.RegisterType((*BotCreate)(nil), "events.BotCreate")
-	proto.RegisterType((*BotUpdate)(nil), "events.BotUpdate")
-	proto.RegisterType((*BotDelete)(nil), "events.BotDelete")
-	proto.RegisterType((*TrustedClusterCreate)(nil), "events.TrustedClusterCreate")
-	proto.RegisterType((*TrustedClusterDelete)(nil), "events.TrustedClusterDelete")
-	proto.RegisterType((*ProvisionTokenCreate)(nil), "events.ProvisionTokenCreate")
-	proto.RegisterType((*TrustedClusterTokenCreate)(nil), "events.TrustedClusterTokenCreate")
-	proto.RegisterType((*GithubConnectorCreate)(nil), "events.GithubConnectorCreate")
-	proto.RegisterType((*GithubConnectorUpdate)(nil), "events.GithubConnectorUpdate")
-	proto.RegisterType((*GithubConnectorDelete)(nil), "events.GithubConnectorDelete")
-	proto.RegisterType((*OIDCConnectorCreate)(nil), "events.OIDCConnectorCreate")
-	proto.RegisterType((*OIDCConnectorUpdate)(nil), "events.OIDCConnectorUpdate")
-	proto.RegisterType((*OIDCConnectorDelete)(nil), "events.OIDCConnectorDelete")
-	proto.RegisterType((*SAMLConnectorCreate)(nil), "events.SAMLConnectorCreate")
-	proto.RegisterType((*SAMLConnectorUpdate)(nil), "events.SAMLConnectorUpdate")
-	proto.RegisterType((*SAMLConnectorDelete)(nil), "events.SAMLConnectorDelete")
-	proto.RegisterType((*KubeRequest)(nil), "events.KubeRequest")
-	proto.RegisterType((*AppMetadata)(nil), "events.AppMetadata")
-	proto.RegisterMapType((map[string]string)(nil), "events.AppMetadata.AppLabelsEntry")
-	proto.RegisterType((*AppCreate)(nil), "events.AppCreate")
-	proto.RegisterType((*AppUpdate)(nil), "events.AppUpdate")
-	proto.RegisterType((*AppDelete)(nil), "events.AppDelete")
-	proto.RegisterType((*AppSessionStart)(nil), "events.AppSessionStart")
-	proto.RegisterType((*AppSessionEnd)(nil), "events.AppSessionEnd")
-	proto.RegisterType((*AppSessionChunk)(nil), "events.AppSessionChunk")
-	proto.RegisterType((*AppSessionRequest)(nil), "events.AppSessionRequest")
-	proto.RegisterType((*AWSRequestMetadata)(nil), "events.AWSRequestMetadata")
-	proto.RegisterType((*DatabaseMetadata)(nil), "events.DatabaseMetadata")
-	proto.RegisterMapType((map[string]string)(nil), "events.DatabaseMetadata.DatabaseLabelsEntry")
-	proto.RegisterType((*DatabaseCreate)(nil), "events.DatabaseCreate")
-	proto.RegisterType((*DatabaseUpdate)(nil), "events.DatabaseUpdate")
-	proto.RegisterType((*DatabaseDelete)(nil), "events.DatabaseDelete")
-	proto.RegisterType((*DatabaseSessionStart)(nil), "events.DatabaseSessionStart")
-	proto.RegisterType((*DatabaseSessionQuery)(nil), "events.DatabaseSessionQuery")
-	proto.RegisterType((*DatabaseSessionCommandResult)(nil), "events.DatabaseSessionCommandResult")
-	proto.RegisterType((*DatabasePermissionUpdate)(nil), "events.DatabasePermissionUpdate")
-	proto.RegisterMapType((map[string]int32)(nil), "events.DatabasePermissionUpdate.AffectedObjectCountsEntry")
-	proto.RegisterType((*DatabasePermissionEntry)(nil), "events.DatabasePermissionEntry")
-	proto.RegisterMapType((map[string]int32)(nil), "events.DatabasePermissionEntry.CountsEntry")
-	proto.RegisterType((*DatabaseUserCreate)(nil), "events.DatabaseUserCreate")
-	proto.RegisterType((*DatabaseUserDeactivate)(nil), "events.DatabaseUserDeactivate")
-	proto.RegisterType((*PostgresParse)(nil), "events.PostgresParse")
-	proto.RegisterType((*PostgresBind)(nil), "events.PostgresBind")
-	proto.RegisterType((*PostgresExecute)(nil), "events.PostgresExecute")
-	proto.RegisterType((*PostgresClose)(nil), "events.PostgresClose")
-	proto.RegisterType((*PostgresFunctionCall)(nil), "events.PostgresFunctionCall")
-	proto.RegisterType((*WindowsDesktopSessionStart)(nil), "events.WindowsDesktopSessionStart")
-	proto.RegisterMapType((map[string]string)(nil), "events.WindowsDesktopSessionStart.DesktopLabelsEntry")
-	proto.RegisterType((*DatabaseSessionEnd)(nil), "events.DatabaseSessionEnd")
-	proto.RegisterType((*MFADeviceMetadata)(nil), "events.MFADeviceMetadata")
-	proto.RegisterType((*MFADeviceAdd)(nil), "events.MFADeviceAdd")
-	proto.RegisterType((*MFADeviceDelete)(nil), "events.MFADeviceDelete")
-	proto.RegisterType((*BillingInformationUpdate)(nil), "events.BillingInformationUpdate")
-	proto.RegisterType((*BillingCardCreate)(nil), "events.BillingCardCreate")
-	proto.RegisterType((*BillingCardDelete)(nil), "events.BillingCardDelete")
-	proto.RegisterType((*LockCreate)(nil), "events.LockCreate")
-	proto.RegisterType((*LockDelete)(nil), "events.LockDelete")
-	proto.RegisterType((*RecoveryCodeGenerate)(nil), "events.RecoveryCodeGenerate")
-	proto.RegisterType((*RecoveryCodeUsed)(nil), "events.RecoveryCodeUsed")
-	proto.RegisterType((*WindowsDesktopSessionEnd)(nil), "events.WindowsDesktopSessionEnd")
-	proto.RegisterMapType((map[string]string)(nil), "events.WindowsDesktopSessionEnd.DesktopLabelsEntry")
-	proto.RegisterType((*CertificateCreate)(nil), "events.CertificateCreate")
-	proto.RegisterType((*RenewableCertificateGenerationMismatch)(nil), "events.RenewableCertificateGenerationMismatch")
-	proto.RegisterType((*BotJoin)(nil), "events.BotJoin")
-	proto.RegisterType((*InstanceJoin)(nil), "events.InstanceJoin")
-	proto.RegisterType((*Unknown)(nil), "events.Unknown")
-	proto.RegisterType((*DeviceMetadata)(nil), "events.DeviceMetadata")
-	proto.RegisterType((*DeviceEvent)(nil), "events.DeviceEvent")
-	proto.RegisterType((*DeviceEvent2)(nil), "events.DeviceEvent2")
-	proto.RegisterType((*DiscoveryConfigCreate)(nil), "events.DiscoveryConfigCreate")
-	proto.RegisterType((*DiscoveryConfigUpdate)(nil), "events.DiscoveryConfigUpdate")
-	proto.RegisterType((*DiscoveryConfigDelete)(nil), "events.DiscoveryConfigDelete")
-	proto.RegisterType((*DiscoveryConfigDeleteAll)(nil), "events.DiscoveryConfigDeleteAll")
-	proto.RegisterType((*IntegrationCreate)(nil), "events.IntegrationCreate")
-	proto.RegisterType((*IntegrationUpdate)(nil), "events.IntegrationUpdate")
-	proto.RegisterType((*IntegrationDelete)(nil), "events.IntegrationDelete")
-	proto.RegisterType((*IntegrationMetadata)(nil), "events.IntegrationMetadata")
-	proto.RegisterType((*AWSOIDCIntegrationMetadata)(nil), "events.AWSOIDCIntegrationMetadata")
-	proto.RegisterType((*AzureOIDCIntegrationMetadata)(nil), "events.AzureOIDCIntegrationMetadata")
-	proto.RegisterType((*GitHubIntegrationMetadata)(nil), "events.GitHubIntegrationMetadata")
-	proto.RegisterType((*AWSRAIntegrationMetadata)(nil), "events.AWSRAIntegrationMetadata")
-	proto.RegisterType((*PluginCreate)(nil), "events.PluginCreate")
-	proto.RegisterType((*PluginUpdate)(nil), "events.PluginUpdate")
-	proto.RegisterType((*PluginDelete)(nil), "events.PluginDelete")
-	proto.RegisterType((*PluginMetadata)(nil), "events.PluginMetadata")
-	proto.RegisterType((*OneOf)(nil), "events.OneOf")
-	proto.RegisterType((*StreamStatus)(nil), "events.StreamStatus")
-	proto.RegisterType((*SessionUpload)(nil), "events.SessionUpload")
-	proto.RegisterType((*Identity)(nil), "events.Identity")
-	proto.RegisterType((*RouteToApp)(nil), "events.RouteToApp")
-	proto.RegisterType((*RouteToDatabase)(nil), "events.RouteToDatabase")
-	proto.RegisterType((*DeviceExtensions)(nil), "events.DeviceExtensions")
-	proto.RegisterType((*AccessRequestResourceSearch)(nil), "events.AccessRequestResourceSearch")
-	proto.RegisterMapType((map[string]string)(nil), "events.AccessRequestResourceSearch.LabelsEntry")
-	proto.RegisterType((*MySQLStatementPrepare)(nil), "events.MySQLStatementPrepare")
-	proto.RegisterType((*MySQLStatementExecute)(nil), "events.MySQLStatementExecute")
-	proto.RegisterType((*MySQLStatementSendLongData)(nil), "events.MySQLStatementSendLongData")
-	proto.RegisterType((*MySQLStatementClose)(nil), "events.MySQLStatementClose")
-	proto.RegisterType((*MySQLStatementReset)(nil), "events.MySQLStatementReset")
-	proto.RegisterType((*MySQLStatementFetch)(nil), "events.MySQLStatementFetch")
-	proto.RegisterType((*MySQLStatementBulkExecute)(nil), "events.MySQLStatementBulkExecute")
-	proto.RegisterType((*MySQLInitDB)(nil), "events.MySQLInitDB")
-	proto.RegisterType((*MySQLCreateDB)(nil), "events.MySQLCreateDB")
-	proto.RegisterType((*MySQLDropDB)(nil), "events.MySQLDropDB")
-	proto.RegisterType((*MySQLShutDown)(nil), "events.MySQLShutDown")
-	proto.RegisterType((*MySQLProcessKill)(nil), "events.MySQLProcessKill")
-	proto.RegisterType((*MySQLDebug)(nil), "events.MySQLDebug")
-	proto.RegisterType((*MySQLRefresh)(nil), "events.MySQLRefresh")
-	proto.RegisterType((*SQLServerRPCRequest)(nil), "events.SQLServerRPCRequest")
-	proto.RegisterType((*DatabaseSessionMalformedPacket)(nil), "events.DatabaseSessionMalformedPacket")
-	proto.RegisterType((*ElasticsearchRequest)(nil), "events.ElasticsearchRequest")
-	proto.RegisterType((*OpenSearchRequest)(nil), "events.OpenSearchRequest")
-	proto.RegisterType((*DynamoDBRequest)(nil), "events.DynamoDBRequest")
-	proto.RegisterType((*AppSessionDynamoDBRequest)(nil), "events.AppSessionDynamoDBRequest")
-	proto.RegisterType((*UpgradeWindowStartMetadata)(nil), "events.UpgradeWindowStartMetadata")
-	proto.RegisterType((*UpgradeWindowStartUpdate)(nil), "events.UpgradeWindowStartUpdate")
-	proto.RegisterType((*SessionRecordingAccess)(nil), "events.SessionRecordingAccess")
-	proto.RegisterType((*KubeClusterMetadata)(nil), "events.KubeClusterMetadata")
-	proto.RegisterMapType((map[string]string)(nil), "events.KubeClusterMetadata.KubeLabelsEntry")
-	proto.RegisterType((*KubernetesClusterCreate)(nil), "events.KubernetesClusterCreate")
-	proto.RegisterType((*KubernetesClusterUpdate)(nil), "events.KubernetesClusterUpdate")
-	proto.RegisterType((*KubernetesClusterDelete)(nil), "events.KubernetesClusterDelete")
-	proto.RegisterType((*SSMRun)(nil), "events.SSMRun")
-	proto.RegisterType((*CassandraPrepare)(nil), "events.CassandraPrepare")
-	proto.RegisterType((*CassandraExecute)(nil), "events.CassandraExecute")
-	proto.RegisterType((*CassandraBatch)(nil), "events.CassandraBatch")
-	proto.RegisterType((*CassandraBatch_BatchChild)(nil), "events.CassandraBatch.BatchChild")
-	proto.RegisterType((*CassandraBatch_BatchChild_Value)(nil), "events.CassandraBatch.BatchChild.Value")
-	proto.RegisterType((*CassandraRegister)(nil), "events.CassandraRegister")
-	proto.RegisterType((*LoginRuleCreate)(nil), "events.LoginRuleCreate")
-	proto.RegisterType((*LoginRuleDelete)(nil), "events.LoginRuleDelete")
-	proto.RegisterType((*SAMLIdPAuthAttempt)(nil), "events.SAMLIdPAuthAttempt")
-	proto.RegisterType((*SAMLIdPServiceProviderCreate)(nil), "events.SAMLIdPServiceProviderCreate")
-	proto.RegisterType((*SAMLIdPServiceProviderUpdate)(nil), "events.SAMLIdPServiceProviderUpdate")
-	proto.RegisterType((*SAMLIdPServiceProviderDelete)(nil), "events.SAMLIdPServiceProviderDelete")
-	proto.RegisterType((*SAMLIdPServiceProviderDeleteAll)(nil), "events.SAMLIdPServiceProviderDeleteAll")
-	proto.RegisterType((*OktaResourcesUpdate)(nil), "events.OktaResourcesUpdate")
-	proto.RegisterType((*OktaSyncFailure)(nil), "events.OktaSyncFailure")
-	proto.RegisterType((*OktaAssignmentResult)(nil), "events.OktaAssignmentResult")
-	proto.RegisterType((*AccessListCreate)(nil), "events.AccessListCreate")
-	proto.RegisterType((*AccessListUpdate)(nil), "events.AccessListUpdate")
-	proto.RegisterType((*AccessListDelete)(nil), "events.AccessListDelete")
-	proto.RegisterType((*AccessListMemberCreate)(nil), "events.AccessListMemberCreate")
-	proto.RegisterType((*AccessListMemberUpdate)(nil), "events.AccessListMemberUpdate")
-	proto.RegisterType((*AccessListMemberDelete)(nil), "events.AccessListMemberDelete")
-	proto.RegisterType((*AccessListMemberDeleteAllForAccessList)(nil), "events.AccessListMemberDeleteAllForAccessList")
-	proto.RegisterType((*AccessListReview)(nil), "events.AccessListReview")
-	proto.RegisterType((*AuditQueryRun)(nil), "events.AuditQueryRun")
-	proto.RegisterType((*AuditQueryDetails)(nil), "events.AuditQueryDetails")
-	proto.RegisterType((*SecurityReportRun)(nil), "events.SecurityReportRun")
-	proto.RegisterType((*ExternalAuditStorageEnable)(nil), "events.ExternalAuditStorageEnable")
-	proto.RegisterType((*ExternalAuditStorageDisable)(nil), "events.ExternalAuditStorageDisable")
-	proto.RegisterType((*ExternalAuditStorageDetails)(nil), "events.ExternalAuditStorageDetails")
-	proto.RegisterType((*OktaAccessListSync)(nil), "events.OktaAccessListSync")
-	proto.RegisterType((*OktaUserSync)(nil), "events.OktaUserSync")
-	proto.RegisterType((*SPIFFESVIDIssued)(nil), "events.SPIFFESVIDIssued")
-	proto.RegisterType((*AuthPreferenceUpdate)(nil), "events.AuthPreferenceUpdate")
-	proto.RegisterType((*ClusterNetworkingConfigUpdate)(nil), "events.ClusterNetworkingConfigUpdate")
-	proto.RegisterType((*SessionRecordingConfigUpdate)(nil), "events.SessionRecordingConfigUpdate")
-	proto.RegisterType((*AccessPathChanged)(nil), "events.AccessPathChanged")
-	proto.RegisterType((*SpannerRPC)(nil), "events.SpannerRPC")
-	proto.RegisterType((*AccessGraphSettingsUpdate)(nil), "events.AccessGraphSettingsUpdate")
-	proto.RegisterType((*SPIFFEFederationCreate)(nil), "events.SPIFFEFederationCreate")
-	proto.RegisterType((*SPIFFEFederationDelete)(nil), "events.SPIFFEFederationDelete")
-	proto.RegisterType((*AutoUpdateConfigCreate)(nil), "events.AutoUpdateConfigCreate")
-	proto.RegisterType((*AutoUpdateConfigUpdate)(nil), "events.AutoUpdateConfigUpdate")
-	proto.RegisterType((*AutoUpdateConfigDelete)(nil), "events.AutoUpdateConfigDelete")
-	proto.RegisterType((*AutoUpdateVersionCreate)(nil), "events.AutoUpdateVersionCreate")
-	proto.RegisterType((*AutoUpdateVersionUpdate)(nil), "events.AutoUpdateVersionUpdate")
-	proto.RegisterType((*AutoUpdateVersionDelete)(nil), "events.AutoUpdateVersionDelete")
-	proto.RegisterType((*AutoUpdateAgentRolloutTrigger)(nil), "events.AutoUpdateAgentRolloutTrigger")
-	proto.RegisterType((*AutoUpdateAgentRolloutForceDone)(nil), "events.AutoUpdateAgentRolloutForceDone")
-	proto.RegisterType((*AutoUpdateAgentRolloutRollback)(nil), "events.AutoUpdateAgentRolloutRollback")
-	proto.RegisterType((*StaticHostUserCreate)(nil), "events.StaticHostUserCreate")
-	proto.RegisterType((*StaticHostUserUpdate)(nil), "events.StaticHostUserUpdate")
-	proto.RegisterType((*StaticHostUserDelete)(nil), "events.StaticHostUserDelete")
-	proto.RegisterType((*CrownJewelCreate)(nil), "events.CrownJewelCreate")
-	proto.RegisterType((*CrownJewelUpdate)(nil), "events.CrownJewelUpdate")
-	proto.RegisterType((*CrownJewelDelete)(nil), "events.CrownJewelDelete")
-	proto.RegisterType((*UserTaskCreate)(nil), "events.UserTaskCreate")
-	proto.RegisterType((*UserTaskUpdate)(nil), "events.UserTaskUpdate")
-	proto.RegisterType((*UserTaskMetadata)(nil), "events.UserTaskMetadata")
-	proto.RegisterType((*UserTaskDelete)(nil), "events.UserTaskDelete")
-	proto.RegisterType((*ContactCreate)(nil), "events.ContactCreate")
-	proto.RegisterType((*ContactDelete)(nil), "events.ContactDelete")
-	proto.RegisterType((*WorkloadIdentityCreate)(nil), "events.WorkloadIdentityCreate")
-	proto.RegisterType((*WorkloadIdentityUpdate)(nil), "events.WorkloadIdentityUpdate")
-	proto.RegisterType((*WorkloadIdentityDelete)(nil), "events.WorkloadIdentityDelete")
-	proto.RegisterType((*WorkloadIdentityX509RevocationCreate)(nil), "events.WorkloadIdentityX509RevocationCreate")
-	proto.RegisterType((*WorkloadIdentityX509RevocationUpdate)(nil), "events.WorkloadIdentityX509RevocationUpdate")
-	proto.RegisterType((*WorkloadIdentityX509RevocationDelete)(nil), "events.WorkloadIdentityX509RevocationDelete")
-	proto.RegisterType((*GitCommand)(nil), "events.GitCommand")
-	proto.RegisterType((*GitCommandAction)(nil), "events.GitCommandAction")
-	proto.RegisterType((*AccessListInvalidMetadata)(nil), "events.AccessListInvalidMetadata")
-	proto.RegisterType((*UserLoginAccessListInvalid)(nil), "events.UserLoginAccessListInvalid")
-	proto.RegisterType((*StableUNIXUserCreate)(nil), "events.StableUNIXUserCreate")
-	proto.RegisterType((*StableUNIXUser)(nil), "events.StableUNIXUser")
-	proto.RegisterType((*AWSICResourceSync)(nil), "events.AWSICResourceSync")
-	proto.RegisterType((*HealthCheckConfigCreate)(nil), "events.HealthCheckConfigCreate")
-	proto.RegisterType((*HealthCheckConfigUpdate)(nil), "events.HealthCheckConfigUpdate")
-	proto.RegisterType((*HealthCheckConfigDelete)(nil), "events.HealthCheckConfigDelete")
-	proto.RegisterType((*WorkloadIdentityX509IssuerOverrideCreate)(nil), "events.WorkloadIdentityX509IssuerOverrideCreate")
-	proto.RegisterType((*WorkloadIdentityX509IssuerOverrideDelete)(nil), "events.WorkloadIdentityX509IssuerOverrideDelete")
-	proto.RegisterType((*SigstorePolicyCreate)(nil), "events.SigstorePolicyCreate")
-	proto.RegisterType((*SigstorePolicyUpdate)(nil), "events.SigstorePolicyUpdate")
-	proto.RegisterType((*SigstorePolicyDelete)(nil), "events.SigstorePolicyDelete")
+// MCPSessionStart is emitted when a user starts an MCP session.
+type MCPSessionStart struct {
+	// Metadata is a common event metadata
+	Metadata `protobuf:"bytes,1,opt,name=Metadata,proto3,embedded=Metadata" json:""`
+	// User is a common user event metadata
+	UserMetadata `protobuf:"bytes,2,opt,name=User,proto3,embedded=User" json:""`
+	// SessionMetadata is a common event session metadata
+	SessionMetadata `protobuf:"bytes,3,opt,name=Session,proto3,embedded=Session" json:""`
+	// ServerMetadata is a common server metadata
+	ServerMetadata `protobuf:"bytes,4,opt,name=Server,proto3,embedded=Server" json:""`
+	// ConnectionMetadata holds information about the connection
+	ConnectionMetadata `protobuf:"bytes,5,opt,name=Connection,proto3,embedded=Connection" json:""`
+	// App is a common application resource metadata.
+	AppMetadata `protobuf:"bytes,6,opt,name=App,proto3,embedded=App" json:""`
+	// McpSessionId is the session ID tracked by remote MCP servers.
+	McpSessionId string `protobuf:"bytes,7,opt,name=mcp_session_id,json=mcpSessionId,proto3" json:"mcp_session_id,omitempty"`
+	// ClientInfo stores reported client agent information, e.g. "claude-ai/0.1.0".
+	ClientInfo string `protobuf:"bytes,8,opt,name=client_info,json=clientInfo,proto3" json:"client_info,omitempty"`
+	// ServerInfo stores reported MCP server information, e.g. "teleport-mcp-demo/18.3.0".
+	ServerInfo string `protobuf:"bytes,9,opt,name=server_info,json=serverInfo,proto3" json:"server_info,omitempty"`
+	// ProtocolVersion is the MCP protocol version like "2025-06-18".
+	ProtocolVersion string `protobuf:"bytes,10,opt,name=protocol_version,json=protocolVersion,proto3" json:"protocol_version,omitempty"`
+	// IngressAuthType indicates how the MCP session is authorized between MCP client and Teleport.
+	IngressAuthType string `protobuf:"bytes,11,opt,name=ingress_auth_type,json=ingressAuthType,proto3" json:"ingress_auth_type,omitempty"`
+	// EgressAuthType indicates how the MCP session is authorized between Teleport and remote MCP server.
+	EgressAuthType string `protobuf:"bytes,12,opt,name=egress_auth_type,json=egressAuthType,proto3" json:"egress_auth_type,omitempty"`
+	XXX_NoUnkeyedLiteral struct{} `json:"-"`
+	XXX_unrecognized []byte `json:"-"`
+	XXX_sizecache int32 `json:"-"`
 }

-func init() {
-	proto.RegisterFile("teleport/legacy/types/events/events.proto", fileDescriptor_007ba1c3d6266d56)
+func (m *MCPSessionStart) Reset() { *m = MCPSessionStart{} }
+func (m *MCPSessionStart) String() string { return proto.CompactTextString(m) }
+func (*MCPSessionStart) ProtoMessage() {}
+func (*MCPSessionStart) Descriptor() ([]byte, []int) {
+	return fileDescriptor_007ba1c3d6266d56, []int{267}
+}
+func (m *MCPSessionStart) XXX_Unmarshal(b []byte) error {
+	return m.Unmarshal(b)
+}
+func (m *MCPSessionStart) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+	if deterministic {
+		return xxx_messageInfo_MCPSessionStart.Marshal(b, m, deterministic)
+	} else {
+		b = b[:cap(b)]
+		n, err := m.MarshalToSizedBuffer(b)
+		if err != nil {
+			return nil, err
+		}
+		return b[:n], nil
+	}
+}
+func (m *MCPSessionStart) XXX_Merge(src proto.Message) {
+	xxx_messageInfo_MCPSessionStart.Merge(m, src)
+}
+func (m *MCPSessionStart) XXX_Size() int {
+	return m.Size()
+}
+func (m *MCPSessionStart) XXX_DiscardUnknown() {
+	xxx_messageInfo_MCPSessionStart.DiscardUnknown(m)
 }

-var fileDescriptor_007ba1c3d6266d56 = []byte{
-	// 18789 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0xfd, 0x7b, 0x78, 0x25,
0xc9, - 0x75, 0x18, 0x86, 0xe3, 0x3e, 0x70, 0x01, 0x1c, 0x3c, 0xe6, 0x4e, 0xcd, 0xab, 0x67, 0x76, 0x76, - 0xef, 0x6e, 0xef, 0x72, 0x38, 0xb3, 0xdc, 0x9d, 0xd9, 0x9d, 0x7d, 0x90, 0xfb, 0x20, 0x97, 0x17, - 0xb8, 0xc0, 0xe0, 0xce, 0xe0, 0xc5, 0xbe, 0x98, 0x99, 0x5d, 0x52, 0xe4, 0x55, 0xe3, 0x76, 0x0d, - 0xd0, 0x8b, 0x8b, 0xee, 0xcb, 0xee, 0xbe, 0x83, 0xc1, 0xfa, 0xf7, 0x4b, 0x44, 0x59, 0x96, 0x44, - 0x9b, 0xa2, 0x18, 0x2a, 0x12, 0x65, 0xc9, 0x4e, 0x28, 0x3b, 0x4e, 0x64, 0x89, 0x16, 0x23, 0x87, - 0xd1, 0x9b, 0xb6, 0x64, 0xf9, 0x41, 0x89, 0x92, 0x22, 0xc9, 0x96, 0xe2, 0x2f, 0x71, 0x20, 0x47, - 0x89, 0xff, 0xc1, 0x97, 0xe4, 0xd3, 0x97, 0xe8, 0x8b, 0x19, 0x27, 0xce, 0x97, 0xaf, 0x4e, 0x55, - 0x77, 0x57, 0xbf, 0x2e, 0x9e, 0x2b, 0x2c, 0x76, 0xf0, 0xcf, 0x0c, 0xee, 0x39, 0xa7, 0x4e, 0x55, - 0x9f, 0x3a, 0x55, 0x75, 0xaa, 0xea, 0xd4, 0x39, 0x70, 0xc5, 0xa3, 0x6d, 0xda, 0xb1, 0x1d, 0xef, - 0x5a, 0x9b, 0x2e, 0xeb, 0xad, 0x8d, 0x6b, 0xde, 0x46, 0x87, 0xba, 0xd7, 0xe8, 0x7d, 0x6a, 0x79, - 0xfe, 0x7f, 0x57, 0x3b, 0x8e, 0xed, 0xd9, 0xa4, 0xc4, 0x7f, 0x5d, 0x38, 0xbd, 0x6c, 0x2f, 0xdb, - 0x08, 0xba, 0xc6, 0xfe, 0xe2, 0xd8, 0x0b, 0x17, 0x97, 0x6d, 0x7b, 0xb9, 0x4d, 0xaf, 0xe1, 0xaf, - 0xa5, 0xee, 0xbd, 0x6b, 0xae, 0xe7, 0x74, 0x5b, 0x9e, 0xc0, 0x56, 0xe2, 0x58, 0xcf, 0x5c, 0xa3, - 0xae, 0xa7, 0xaf, 0x75, 0x04, 0xc1, 0x63, 0x71, 0x82, 0x75, 0x47, 0xef, 0x74, 0xa8, 0x23, 0x2a, - 0xbf, 0xf0, 0xc1, 0xa0, 0x9d, 0x7a, 0xab, 0x45, 0x5d, 0xb7, 0x6d, 0xba, 0xde, 0xb5, 0xfb, 0xcf, - 0x4b, 0xbf, 0x04, 0xe1, 0x13, 0xe9, 0x1f, 0x84, 0xff, 0x0a, 0x92, 0x67, 0xd3, 0x49, 0xfc, 0x1a, - 0x63, 0x55, 0xab, 0x5f, 0xce, 0xc3, 0xe0, 0x2c, 0xf5, 0x74, 0x43, 0xf7, 0x74, 0x72, 0x11, 0xfa, - 0xeb, 0x96, 0x41, 0x1f, 0x28, 0xb9, 0xc7, 0x73, 0x97, 0x0b, 0xe3, 0xa5, 0xad, 0xcd, 0x4a, 0x9e, - 0x9a, 0x1a, 0x07, 0x92, 0x47, 0xa1, 0xb8, 0xb8, 0xd1, 0xa1, 0x4a, 0xfe, 0xf1, 0xdc, 0xe5, 0xa1, - 0xf1, 0xa1, 0xad, 0xcd, 0x4a, 0x3f, 0x0a, 0x4d, 0x43, 0x30, 0x79, 0x02, 0xf2, 0xf5, 0x9a, 0x52, - 0x40, 0xe4, 0xc9, 0xad, 0xcd, 
0xca, 0x68, 0xd7, 0x34, 0x9e, 0xb1, 0xd7, 0x4c, 0x8f, 0xae, 0x75, - 0xbc, 0x0d, 0x2d, 0x5f, 0xaf, 0x91, 0x4b, 0x50, 0x9c, 0xb0, 0x0d, 0xaa, 0x14, 0x91, 0x88, 0x6c, - 0x6d, 0x56, 0xc6, 0x5a, 0xb6, 0x41, 0x25, 0x2a, 0xc4, 0x93, 0x8f, 0x43, 0x71, 0xd1, 0x5c, 0xa3, - 0x4a, 0xff, 0xe3, 0xb9, 0xcb, 0xc3, 0xd7, 0x2f, 0x5c, 0xe5, 0xe2, 0xbb, 0xea, 0x8b, 0xef, 0xea, - 0xa2, 0x2f, 0xdf, 0xf1, 0xf2, 0xb7, 0x36, 0x2b, 0x7d, 0x5b, 0x9b, 0x95, 0x22, 0x13, 0xf9, 0x97, - 0xfe, 0xa4, 0x92, 0xd3, 0xb0, 0x24, 0x79, 0x1d, 0x86, 0x27, 0xda, 0x5d, 0xd7, 0xa3, 0xce, 0x9c, - 0xbe, 0x46, 0x95, 0x12, 0x56, 0x78, 0x61, 0x6b, 0xb3, 0x72, 0xb6, 0xc5, 0xc1, 0x4d, 0x4b, 0x5f, - 0x93, 0x2b, 0x96, 0xc9, 0xd5, 0x5f, 0xca, 0xc1, 0x89, 0x06, 0x75, 0x5d, 0xd3, 0xb6, 0x02, 0xd9, - 0x7c, 0x00, 0x86, 0x04, 0xa8, 0x5e, 0x43, 0xf9, 0x0c, 0x8d, 0x0f, 0x6c, 0x6d, 0x56, 0x0a, 0xae, - 0x69, 0x68, 0x21, 0x86, 0x3c, 0x07, 0x03, 0x77, 0x4d, 0x6f, 0x65, 0x76, 0xaa, 0x2a, 0xe4, 0x74, - 0x76, 0x6b, 0xb3, 0x42, 0xd6, 0x4d, 0x6f, 0xa5, 0xb9, 0x76, 0x4f, 0x97, 0x2a, 0xf4, 0xc9, 0xc8, - 0x0c, 0x94, 0x17, 0x1c, 0xf3, 0xbe, 0xee, 0xd1, 0x5b, 0x74, 0x63, 0xc1, 0x6e, 0x9b, 0xad, 0x0d, - 0x21, 0xc5, 0xc7, 0xb7, 0x36, 0x2b, 0x17, 0x3b, 0x1c, 0xd7, 0x5c, 0xa5, 0x1b, 0xcd, 0x0e, 0x62, - 0x25, 0x26, 0x89, 0x92, 0xea, 0xe7, 0x07, 0x60, 0xe4, 0xb6, 0x4b, 0x9d, 0xa0, 0xdd, 0x97, 0xa0, - 0xc8, 0x7e, 0x8b, 0x26, 0xa3, 0xcc, 0xbb, 0x2e, 0x75, 0x64, 0x99, 0x33, 0x3c, 0xb9, 0x02, 0xfd, - 0x33, 0xf6, 0xb2, 0x69, 0x89, 0x66, 0x9f, 0xda, 0xda, 0xac, 0x9c, 0x68, 0x33, 0x80, 0x44, 0xc9, - 0x29, 0xc8, 0xc7, 0x60, 0xa4, 0xbe, 0xc6, 0x74, 0xc8, 0xb6, 0x74, 0xcf, 0x76, 0x44, 0x6b, 0x51, - 0xba, 0xa6, 0x04, 0x97, 0x0a, 0x46, 0xe8, 0xc9, 0xab, 0x00, 0xd5, 0xbb, 0x0d, 0xcd, 0x6e, 0xd3, - 0xaa, 0x36, 0x27, 0x94, 0x01, 0x4b, 0xeb, 0xeb, 0x6e, 0xd3, 0xb1, 0xdb, 0xb4, 0xa9, 0x3b, 0x72, - 0xb5, 0x12, 0x35, 0x99, 0x84, 0xb1, 0x2a, 0x8e, 0x0a, 0x8d, 0x7e, 0xb6, 0x4b, 0x5d, 0xcf, 0x55, - 0xfa, 0x1f, 0x2f, 0x5c, 0x1e, 0x1a, 0x7f, 0x74, 0x6b, 0xb3, 0x72, 
0x9e, 0x8f, 0x97, 0xa6, 0x23, - 0x50, 0x12, 0x8b, 0x58, 0x21, 0x32, 0x0e, 0xa3, 0xd5, 0x77, 0xba, 0x0e, 0xad, 0x1b, 0xd4, 0xf2, - 0x4c, 0x6f, 0x43, 0x68, 0xc8, 0xc5, 0xad, 0xcd, 0x8a, 0xa2, 0x33, 0x44, 0xd3, 0x14, 0x18, 0x89, - 0x49, 0xb4, 0x08, 0x99, 0x87, 0x93, 0x37, 0x26, 0x16, 0x1a, 0xd4, 0xb9, 0x6f, 0xb6, 0x68, 0xb5, - 0xd5, 0xb2, 0xbb, 0x96, 0xa7, 0x0c, 0x20, 0x9f, 0x27, 0xb6, 0x36, 0x2b, 0x8f, 0x2e, 0xb7, 0x3a, - 0x4d, 0x97, 0x63, 0x9b, 0x3a, 0x47, 0x4b, 0xcc, 0x92, 0x65, 0xc9, 0x27, 0x61, 0x74, 0xd1, 0x61, - 0x5a, 0x68, 0xd4, 0x28, 0x83, 0x2b, 0x83, 0xa8, 0xff, 0x67, 0xaf, 0x8a, 0x99, 0x8a, 0x43, 0xfd, - 0x9e, 0xe5, 0x8d, 0xf5, 0x78, 0x81, 0xa6, 0x81, 0x38, 0xb9, 0xb1, 0x11, 0x56, 0x84, 0x82, 0xc2, - 0x3e, 0xde, 0x74, 0xa8, 0x91, 0xd0, 0xb6, 0x21, 0x6c, 0xf3, 0x95, 0xad, 0xcd, 0xca, 0x07, 0x1c, - 0x41, 0xd3, 0xec, 0xa9, 0x76, 0x99, 0xac, 0xc8, 0x24, 0x0c, 0x32, 0x6d, 0xba, 0x65, 0x5a, 0x86, - 0x02, 0x8f, 0xe7, 0x2e, 0x8f, 0x5d, 0x2f, 0xfb, 0xad, 0xf7, 0xe1, 0xe3, 0xe7, 0xb6, 0x36, 0x2b, - 0xa7, 0x98, 0x0e, 0x36, 0x57, 0x4d, 0x4b, 0x9e, 0x22, 0x82, 0xa2, 0x6c, 0x14, 0x8d, 0xdb, 0x1e, - 0x0e, 0xdd, 0xe1, 0x70, 0x14, 0x2d, 0xd9, 0x5e, 0x7c, 0xd8, 0xfa, 0x64, 0x64, 0x02, 0x46, 0xc7, - 0x6d, 0xaf, 0x6e, 0xb9, 0x9e, 0x6e, 0xb5, 0x68, 0xbd, 0xa6, 0x8c, 0x60, 0x39, 0x54, 0x0b, 0x56, - 0xce, 0x14, 0x98, 0x66, 0x64, 0x52, 0x8a, 0x96, 0x21, 0xb3, 0x00, 0xac, 0x09, 0xf3, 0x8e, 0xc9, - 0x06, 0xc2, 0x28, 0xb6, 0x9f, 0xc8, 0xed, 0xe7, 0x98, 0xf1, 0xf3, 0x5b, 0x9b, 0x95, 0x33, 0xf8, - 0x05, 0x36, 0x02, 0x64, 0x5d, 0x0d, 0xc9, 0xd4, 0x7f, 0x5b, 0x84, 0x31, 0xd6, 0xc5, 0xd2, 0x68, - 0xac, 0xb2, 0x89, 0x85, 0x41, 0x58, 0xa3, 0xdd, 0x8e, 0xde, 0xa2, 0x62, 0x60, 0xa2, 0x50, 0x2c, - 0x1f, 0x28, 0x31, 0x8c, 0xd3, 0x93, 0x2b, 0x30, 0xc8, 0x41, 0xf5, 0x9a, 0x18, 0xab, 0xa3, 0x5b, - 0x9b, 0x95, 0x21, 0x17, 0x61, 0x4d, 0xd3, 0xd0, 0x02, 0x34, 0x1b, 0x2c, 0xfc, 0xef, 0x69, 0xdb, - 0xf5, 0x18, 0x73, 0x31, 0x54, 0x51, 0x2a, 0xa2, 0xc0, 0x8a, 0x40, 0xc9, 0x83, 0x25, 0x5a, 0x88, - 0xbc, 
0x02, 0xc0, 0x21, 0x55, 0xc3, 0x70, 0xc4, 0x78, 0x45, 0x11, 0x08, 0x16, 0xba, 0x61, 0xc8, - 0x83, 0x5d, 0x22, 0x26, 0x6b, 0x30, 0xc2, 0x7f, 0xcd, 0xe8, 0x4b, 0xb4, 0xcd, 0x07, 0xeb, 0xf0, - 0xf5, 0xcb, 0xbe, 0x4c, 0xa3, 0xd2, 0xb9, 0x2a, 0x93, 0x4e, 0x5a, 0x9e, 0xb3, 0x31, 0x5e, 0x11, - 0xf3, 0xfb, 0x39, 0x51, 0x55, 0x1b, 0x71, 0xf2, 0xcc, 0x22, 0x97, 0x61, 0xd3, 0xfe, 0x94, 0xed, - 0xac, 0xeb, 0x8e, 0x41, 0x8d, 0xf1, 0x0d, 0x79, 0xda, 0xbf, 0xe7, 0x83, 0x9b, 0x4b, 0xb2, 0x26, - 0xcb, 0xe4, 0x4c, 0x87, 0x38, 0xb7, 0x46, 0x77, 0x09, 0x35, 0x78, 0x20, 0x21, 0x2d, 0xb7, 0xbb, - 0x14, 0xd7, 0xda, 0x68, 0x19, 0x36, 0xb3, 0x70, 0xc0, 0x1d, 0xea, 0xb0, 0x35, 0x01, 0x07, 0xb1, - 0x98, 0x59, 0x04, 0x93, 0xfb, 0x1c, 0x93, 0xe4, 0x21, 0x8a, 0x5c, 0x78, 0x03, 0x4e, 0x26, 0x44, - 0x41, 0xca, 0x50, 0x58, 0xa5, 0x1b, 0x5c, 0x5d, 0x34, 0xf6, 0x27, 0x39, 0x0d, 0xfd, 0xf7, 0xf5, - 0x76, 0x57, 0xac, 0xc8, 0x1a, 0xff, 0xf1, 0x6a, 0xfe, 0x23, 0x39, 0xb6, 0x80, 0x91, 0x09, 0xdb, - 0xb2, 0x68, 0xcb, 0x93, 0xd7, 0xb0, 0x97, 0x61, 0x68, 0xc6, 0x6e, 0xe9, 0x6d, 0xec, 0x47, 0xae, - 0x77, 0xca, 0xd6, 0x66, 0xe5, 0x34, 0xeb, 0xc0, 0xab, 0x6d, 0x86, 0x91, 0xda, 0x14, 0x92, 0x32, - 0x05, 0xd0, 0xe8, 0x9a, 0xed, 0x51, 0x2c, 0x98, 0x0f, 0x15, 0x00, 0x0b, 0x3a, 0x88, 0x92, 0x15, - 0x20, 0x24, 0x26, 0xd7, 0x60, 0x70, 0x81, 0x2d, 0xdb, 0x2d, 0xbb, 0x2d, 0x94, 0x0f, 0x57, 0x16, - 0x5c, 0xca, 0xe5, 0xa1, 0xef, 0x13, 0xa9, 0xd3, 0x30, 0x36, 0xd1, 0x36, 0xa9, 0xe5, 0xc9, 0xad, - 0x66, 0x83, 0xaa, 0xba, 0x4c, 0x2d, 0x4f, 0x6e, 0x35, 0x0e, 0x40, 0x9d, 0x41, 0xe5, 0x56, 0x07, - 0xa4, 0xea, 0xef, 0x15, 0xe0, 0xfc, 0xad, 0xee, 0x12, 0x75, 0x2c, 0xea, 0x51, 0x57, 0xac, 0xef, - 0x01, 0xd7, 0x39, 0x38, 0x99, 0x40, 0x0a, 0xee, 0xb8, 0xee, 0xae, 0x06, 0xc8, 0xa6, 0x30, 0x19, - 0xe4, 0xc9, 0x3b, 0x51, 0x94, 0x4c, 0xc3, 0x89, 0x10, 0xc8, 0x1a, 0xe1, 0x2a, 0x79, 0x5c, 0x99, - 0x1e, 0xdb, 0xda, 0xac, 0x5c, 0x90, 0xb8, 0xb1, 0x66, 0xcb, 0x1a, 0x1c, 0x2f, 0x46, 0x6e, 0x41, - 0x39, 0x04, 0xdd, 0x70, 0xec, 0x6e, 0xc7, 
0x55, 0x0a, 0xc8, 0xaa, 0xb2, 0xb5, 0x59, 0x79, 0x44, - 0x62, 0xb5, 0x8c, 0x48, 0xd9, 0x1e, 0x88, 0x17, 0x24, 0xdf, 0x97, 0x93, 0xb9, 0x89, 0x51, 0x58, - 0xc4, 0x51, 0xf8, 0x61, 0x7f, 0x14, 0x66, 0x0a, 0xe9, 0x6a, 0xbc, 0xa4, 0x18, 0x94, 0xb1, 0x66, - 0x24, 0x06, 0x65, 0xa2, 0xc6, 0x0b, 0x13, 0x70, 0x26, 0x95, 0xd7, 0xae, 0xb4, 0xfa, 0xdf, 0x14, - 0x64, 0x2e, 0x0b, 0xb6, 0x11, 0x74, 0xe6, 0xbc, 0xdc, 0x99, 0x0b, 0xb6, 0x81, 0x2b, 0x47, 0x2e, - 0x5c, 0x8a, 0xa5, 0xc6, 0x76, 0x6c, 0x23, 0xbe, 0x88, 0x24, 0xcb, 0x92, 0xcf, 0xc0, 0xd9, 0x04, - 0x90, 0x4f, 0xd7, 0x5c, 0xfb, 0x2f, 0x6d, 0x6d, 0x56, 0xd4, 0x14, 0xae, 0xf1, 0xd9, 0x3b, 0x83, - 0x0b, 0xd1, 0xe1, 0x9c, 0x24, 0x75, 0xdb, 0xf2, 0x74, 0xd3, 0x12, 0xb6, 0x2a, 0x1f, 0x25, 0x1f, - 0xdc, 0xda, 0xac, 0x3c, 0x29, 0xeb, 0xa0, 0x4f, 0x13, 0x6f, 0x7c, 0x16, 0x1f, 0x62, 0x80, 0x92, - 0x82, 0xaa, 0xaf, 0xe9, 0xcb, 0xbe, 0x01, 0x7e, 0x79, 0x6b, 0xb3, 0xf2, 0x54, 0x6a, 0x1d, 0x26, - 0xa3, 0x92, 0x17, 0xfc, 0x2c, 0x4e, 0x44, 0x03, 0x12, 0xe2, 0xe6, 0x6c, 0x83, 0xe2, 0x37, 0xf4, - 0x23, 0x7f, 0x75, 0x6b, 0xb3, 0xf2, 0x98, 0xc4, 0xdf, 0xb2, 0x0d, 0x1a, 0x6f, 0x7e, 0x4a, 0x69, - 0xf5, 0x97, 0x0a, 0xf0, 0x58, 0xa3, 0x3a, 0x3b, 0x53, 0x37, 0x7c, 0x0b, 0x69, 0xc1, 0xb1, 0xef, - 0x9b, 0x86, 0x34, 0x7a, 0x97, 0xe0, 0x5c, 0x0c, 0x35, 0x89, 0x46, 0x59, 0x60, 0x9b, 0xe3, 0xb7, - 0xf9, 0xd6, 0x57, 0x47, 0xd0, 0x34, 0xb9, 0xe5, 0x16, 0xb5, 0x01, 0xb2, 0x18, 0xb1, 0x3e, 0x8a, - 0xa1, 0x1a, 0x2b, 0xb6, 0xe3, 0xb5, 0xba, 0x9e, 0x50, 0x02, 0xec, 0xa3, 0x44, 0x1d, 0xae, 0x20, - 0xea, 0x51, 0x85, 0xcf, 0x87, 0x7c, 0x3e, 0x07, 0xe5, 0xaa, 0xe7, 0x39, 0xe6, 0x52, 0xd7, 0xa3, - 0xb3, 0x7a, 0xa7, 0x63, 0x5a, 0xcb, 0x38, 0xd6, 0x87, 0xaf, 0xbf, 0x1e, 0xac, 0x91, 0x3d, 0x25, - 0x71, 0x35, 0x5e, 0x5c, 0x1a, 0xa2, 0xba, 0x8f, 0x6a, 0xae, 0x71, 0x9c, 0x3c, 0x44, 0xe3, 0xe5, - 0xd8, 0x10, 0x4d, 0xe5, 0xb5, 0xab, 0x21, 0xfa, 0xe5, 0x02, 0x5c, 0x9c, 0x5f, 0xf5, 0x74, 0x8d, - 0xba, 0x76, 0xd7, 0x69, 0x51, 0xf7, 0x76, 0xc7, 0xd0, 0x3d, 0x1a, 0x8e, 0xd4, 
0x0a, 0xf4, 0x57, - 0x0d, 0x83, 0x1a, 0xc8, 0xae, 0x9f, 0xef, 0x22, 0x75, 0x06, 0xd0, 0x38, 0x9c, 0x7c, 0x00, 0x06, - 0x44, 0x19, 0xe4, 0xde, 0x3f, 0x3e, 0xbc, 0xb5, 0x59, 0x19, 0xe8, 0x72, 0x90, 0xe6, 0xe3, 0x18, - 0x59, 0x8d, 0xb6, 0x29, 0x23, 0x2b, 0x84, 0x64, 0x06, 0x07, 0x69, 0x3e, 0x8e, 0x7c, 0x02, 0xc6, - 0x90, 0x6d, 0xd0, 0x1e, 0x31, 0xf7, 0x9d, 0xf6, 0xa5, 0x2b, 0x37, 0x96, 0x2f, 0x4d, 0xd8, 0x9a, - 0xa6, 0xe3, 0x17, 0xd0, 0x62, 0x0c, 0xc8, 0x5d, 0x28, 0x8b, 0x46, 0x84, 0x4c, 0xfb, 0x7b, 0x30, - 0x3d, 0xb3, 0xb5, 0x59, 0x39, 0x29, 0xda, 0x2f, 0xb1, 0x4d, 0x30, 0x61, 0x8c, 0x45, 0xb3, 0x43, - 0xc6, 0xa5, 0xed, 0x18, 0x8b, 0x2f, 0x96, 0x19, 0xc7, 0x99, 0xa8, 0x6f, 0xc1, 0x88, 0x5c, 0x90, - 0x9c, 0xc5, 0x9d, 0x3a, 0x1f, 0x27, 0xb8, 0xc7, 0x37, 0x0d, 0xdc, 0x9e, 0x3f, 0x0f, 0xc3, 0x35, - 0xea, 0xb6, 0x1c, 0xb3, 0xc3, 0xac, 0x06, 0xa1, 0xe4, 0x27, 0xb6, 0x36, 0x2b, 0xc3, 0x46, 0x08, - 0xd6, 0x64, 0x1a, 0xf5, 0xff, 0xcc, 0xc1, 0x59, 0xc6, 0xbb, 0xea, 0xba, 0xe6, 0xb2, 0xb5, 0x26, - 0x2f, 0xdb, 0xcf, 0x40, 0xa9, 0x81, 0xf5, 0x89, 0x9a, 0x4e, 0x6f, 0x6d, 0x56, 0xca, 0xbc, 0x05, - 0x92, 0x1e, 0x0a, 0x9a, 0x60, 0x9b, 0x9a, 0xdf, 0x66, 0x9b, 0xca, 0x4c, 0x5a, 0x4f, 0x77, 0x3c, - 0xd3, 0x5a, 0x6e, 0x78, 0xba, 0xd7, 0x75, 0x23, 0x26, 0xad, 0xc0, 0x34, 0x5d, 0x44, 0x45, 0x4c, - 0xda, 0x48, 0x21, 0xf2, 0x06, 0x8c, 0x4c, 0x5a, 0x46, 0xc8, 0x84, 0x4f, 0x88, 0x8f, 0x30, 0x4b, - 0x93, 0x22, 0x3c, 0xc9, 0x22, 0x52, 0x40, 0xfd, 0x4e, 0x0e, 0x14, 0xbe, 0xa7, 0x9c, 0x31, 0x5d, - 0x6f, 0x96, 0xae, 0x2d, 0x49, 0xb3, 0xd3, 0x94, 0xbf, 0x49, 0x65, 0x38, 0x69, 0x2d, 0x42, 0x53, - 0x40, 0x6c, 0x52, 0xdb, 0xa6, 0x9b, 0xd8, 0xcd, 0xc4, 0x4a, 0x91, 0x3a, 0x0c, 0x70, 0xce, 0xdc, - 0x96, 0x18, 0xbe, 0xae, 0xf8, 0x8a, 0x10, 0xaf, 0x9a, 0x2b, 0xc3, 0x1a, 0x27, 0x96, 0xf7, 0x47, - 0xa2, 0x3c, 0xa9, 0xc3, 0x89, 0xb0, 0xcc, 0xa2, 0xe9, 0xb5, 0xfd, 0x45, 0x80, 0xcf, 0x14, 0x52, - 0x9b, 0x3c, 0x86, 0x94, 0xed, 0x93, 0x58, 0x39, 0xf5, 0xab, 0x05, 0x28, 0xc7, 0xeb, 0x27, 0x77, - 0x61, 0xf0, 0xa6, 
0x6d, 0x5a, 0xd4, 0x98, 0xb7, 0xf0, 0x63, 0x7b, 0x1f, 0xdb, 0xf8, 0x66, 0xfd, - 0xa9, 0xb7, 0xb1, 0x4c, 0x53, 0x36, 0x86, 0xf1, 0x14, 0x27, 0x60, 0x46, 0x3e, 0x09, 0x43, 0xcc, - 0x9c, 0xbc, 0x8f, 0x9c, 0xf3, 0xdb, 0x72, 0x7e, 0x5c, 0x70, 0x3e, 0xed, 0xf0, 0x42, 0x49, 0xd6, - 0x21, 0x3b, 0xa6, 0xa2, 0x1a, 0xd5, 0x5d, 0xdb, 0x12, 0x4a, 0x84, 0x2a, 0xea, 0x20, 0x44, 0x56, - 0x51, 0x4e, 0xc3, 0xac, 0x60, 0xfe, 0xb1, 0xd8, 0xa3, 0xd2, 0x36, 0x88, 0x8b, 0x3d, 0xde, 0x99, - 0x12, 0x31, 0xb1, 0xe0, 0x84, 0xe8, 0x9b, 0x15, 0xb3, 0x83, 0x1b, 0x08, 0x5c, 0x22, 0xc7, 0xae, - 0x5f, 0xba, 0xea, 0x1f, 0xd7, 0x5d, 0x95, 0x0e, 0xfb, 0xee, 0x3f, 0x7f, 0x75, 0x36, 0x20, 0xc7, - 0x3d, 0x33, 0xaa, 0x77, 0x8c, 0x85, 0xac, 0x38, 0x6b, 0x11, 0x72, 0xf5, 0xfb, 0xf3, 0xf0, 0x6c, - 0xd8, 0x45, 0x1a, 0xbd, 0x6f, 0xd2, 0xf5, 0x90, 0xa3, 0xd8, 0xbd, 0xb3, 0xd1, 0xea, 0x4e, 0xac, - 0xe8, 0xd6, 0x32, 0x35, 0xc8, 0x15, 0xe8, 0xd7, 0xec, 0x36, 0x75, 0x95, 0x1c, 0x5a, 0x9a, 0x38, - 0x13, 0x3a, 0x0c, 0x20, 0x1f, 0xff, 0x20, 0x05, 0xb1, 0xa1, 0xb4, 0xe8, 0xe8, 0xa6, 0xe7, 0x2b, - 0x65, 0x35, 0xa9, 0x94, 0x3b, 0xa8, 0xf1, 0x2a, 0xe7, 0xc1, 0x97, 0x2b, 0x14, 0xbc, 0x87, 0x00, - 0x59, 0xf0, 0x9c, 0xe4, 0xc2, 0x2b, 0x30, 0x2c, 0x11, 0xef, 0x6a, 0x3d, 0xfa, 0xbe, 0x7e, 0x79, - 0x98, 0xfa, 0xcd, 0x12, 0xc3, 0xf4, 0x1a, 0x1b, 0x5e, 0xae, 0xcb, 0x0c, 0x22, 0x3e, 0x3e, 0xc5, - 0x20, 0x42, 0x50, 0x74, 0x10, 0x21, 0x88, 0xbc, 0x00, 0x83, 0x9c, 0x45, 0xb0, 0xf5, 0xc6, 0x6d, - 0xbb, 0x83, 0xb0, 0xa8, 0x55, 0x11, 0x10, 0x92, 0x9f, 0xc9, 0xc1, 0xa3, 0x3d, 0x25, 0x81, 0xca, - 0x37, 0x7c, 0xfd, 0xa5, 0x3d, 0x89, 0x71, 0xfc, 0xd9, 0xad, 0xcd, 0xca, 0x15, 0x49, 0x33, 0x1c, - 0x89, 0xa6, 0xd9, 0xe2, 0x44, 0x52, 0xbb, 0x7a, 0x37, 0x85, 0xd9, 0xbd, 0xbc, 0xd2, 0x29, 0x3c, - 0x44, 0xb3, 0x5a, 0x1b, 0x7e, 0x23, 0x8b, 0xa1, 0xdd, 0x2b, 0xbe, 0xf7, 0x9e, 0x4f, 0x92, 0x52, - 0x4d, 0x06, 0x17, 0xd2, 0x82, 0x73, 0x1c, 0x53, 0xd3, 0x37, 0xe6, 0xef, 0xcd, 0xda, 0x96, 0xb7, - 0xe2, 0x57, 0xd0, 0x2f, 0x9f, 0x42, 0x61, 0x05, 0x86, 
0xbe, 0xd1, 0xb4, 0xef, 0x35, 0xd7, 0x18, - 0x55, 0x4a, 0x1d, 0x59, 0x9c, 0xd8, 0x1a, 0x21, 0xc6, 0xb8, 0x3f, 0x7b, 0x96, 0xc2, 0x33, 0x42, - 0x7f, 0x5e, 0x48, 0xce, 0x95, 0xb1, 0x42, 0x69, 0x53, 0xe6, 0xc0, 0x1e, 0xa7, 0xcc, 0x3a, 0x8c, - 0xcc, 0xd8, 0xad, 0xd5, 0x40, 0xf3, 0x5e, 0x81, 0xd2, 0xa2, 0xee, 0x2c, 0x53, 0x0f, 0xc5, 0x3a, - 0x7c, 0xfd, 0xe4, 0x55, 0x7e, 0x84, 0xcf, 0x88, 0x38, 0x62, 0x7c, 0x4c, 0x4c, 0x64, 0x25, 0x0f, - 0x7f, 0x6b, 0xa2, 0x80, 0xfa, 0x0f, 0x4a, 0x30, 0x22, 0x8e, 0x9b, 0x71, 0x4d, 0x23, 0xaf, 0x86, - 0x07, 0xf8, 0x62, 0xe6, 0x0d, 0x8e, 0xdc, 0x82, 0xa3, 0xc2, 0x11, 0xc6, 0xec, 0xf7, 0x37, 0x2b, - 0xb9, 0xad, 0xcd, 0x4a, 0x9f, 0x36, 0x28, 0x6d, 0xad, 0xc3, 0x55, 0x57, 0x32, 0x33, 0xe4, 0x03, - 0xe4, 0x58, 0x59, 0xbe, 0x0a, 0xbf, 0x01, 0x03, 0xa2, 0x0d, 0x42, 0x79, 0xcf, 0x85, 0x27, 0x3a, - 0x91, 0x63, 0xf3, 0x58, 0x69, 0xbf, 0x14, 0x79, 0x1d, 0x4a, 0xfc, 0x84, 0x43, 0x08, 0xe0, 0x6c, - 0xfa, 0x89, 0x50, 0xac, 0xb8, 0x28, 0x43, 0xa6, 0x01, 0xc2, 0xd3, 0x8d, 0xe0, 0x96, 0x40, 0x70, - 0x48, 0x9e, 0x7b, 0xc4, 0xb8, 0x48, 0x65, 0xc9, 0xcb, 0x30, 0xb2, 0x48, 0x9d, 0x35, 0xd3, 0xd2, - 0xdb, 0x0d, 0xf3, 0x1d, 0xff, 0xa2, 0x00, 0xcd, 0x0f, 0xd7, 0x7c, 0x47, 0xee, 0xd3, 0x08, 0x1d, - 0xf9, 0x74, 0xda, 0xe9, 0xc1, 0x00, 0x36, 0xe4, 0x89, 0x6d, 0xb7, 0xd5, 0xb1, 0xf6, 0xa4, 0x1c, - 0x26, 0x7c, 0x02, 0x46, 0x23, 0x1b, 0x47, 0x71, 0x12, 0xfc, 0x68, 0x92, 0xb5, 0xb4, 0x0b, 0x8e, - 0xb1, 0x8d, 0x72, 0x60, 0x83, 0xa2, 0x6e, 0x99, 0x9e, 0xa9, 0xb7, 0x27, 0xec, 0xb5, 0x35, 0xdd, - 0x32, 0x94, 0xa1, 0x70, 0x50, 0x98, 0x1c, 0xd3, 0x6c, 0x71, 0x94, 0x3c, 0x28, 0xa2, 0x85, 0xc8, - 0x2d, 0x28, 0x8b, 0x3e, 0xd4, 0x68, 0xcb, 0x76, 0x98, 0x45, 0x84, 0x07, 0xbd, 0x62, 0x54, 0xb8, - 0x1c, 0xd7, 0x74, 0x7c, 0xa4, 0xbc, 0xe5, 0x88, 0x17, 0x64, 0x13, 0x70, 0xdd, 0xba, 0x6f, 0x32, - 0x23, 0x7e, 0x04, 0x1b, 0x83, 0x13, 0xb0, 0xc9, 0x41, 0xf2, 0x04, 0x2c, 0xa8, 0xa4, 0x05, 0x7b, - 0x74, 0xfb, 0x05, 0xfb, 0x66, 0x71, 0x70, 0xb8, 0x3c, 0x12, 0x3f, 0xfa, 0x57, 0xff, 0x6e, 
0x01, - 0x86, 0x45, 0x4b, 0x98, 0x91, 0x71, 0x3c, 0x7e, 0xf6, 0x33, 0x7e, 0x52, 0xc7, 0x41, 0xe9, 0xa0, - 0xc6, 0x81, 0xfa, 0x85, 0x7c, 0x30, 0xd9, 0x2d, 0x38, 0xa6, 0xb5, 0xbf, 0xc9, 0xee, 0x12, 0xc0, - 0xc4, 0x4a, 0xd7, 0x5a, 0xe5, 0x57, 0x9c, 0xf9, 0xf0, 0x8a, 0xb3, 0x65, 0x6a, 0x12, 0x86, 0x3c, - 0x0a, 0xc5, 0x1a, 0xe3, 0xcf, 0x7a, 0x66, 0x64, 0x7c, 0xe8, 0x5b, 0x9c, 0x53, 0xee, 0x59, 0x0d, - 0xc1, 0x6c, 0x07, 0x3b, 0xbe, 0xe1, 0x51, 0xbe, 0x67, 0x28, 0xf0, 0x1d, 0xec, 0x12, 0x03, 0x68, - 0x1c, 0x4e, 0x5e, 0x84, 0x93, 0x35, 0xda, 0xd6, 0x37, 0x66, 0xcd, 0x76, 0xdb, 0x74, 0x69, 0xcb, - 0xb6, 0x0c, 0x17, 0x85, 0x2c, 0xaa, 0x5b, 0x73, 0xb5, 0x24, 0x01, 0x51, 0xa1, 0x34, 0x7f, 0xef, - 0x9e, 0x4b, 0x3d, 0x14, 0x5f, 0x61, 0x1c, 0xd8, 0xdc, 0x6f, 0x23, 0x44, 0x13, 0x18, 0xf5, 0xeb, - 0x39, 0xb6, 0x45, 0x74, 0x57, 0x3d, 0xbb, 0x13, 0x0e, 0xa2, 0xfd, 0x88, 0xe4, 0x4a, 0x68, 0x01, - 0xe5, 0xf1, 0x6b, 0x4f, 0x88, 0xaf, 0x1d, 0x10, 0x56, 0x50, 0x68, 0xfb, 0xa4, 0x7e, 0x55, 0x61, - 0x9b, 0xaf, 0x52, 0xff, 0x2c, 0x0f, 0xe7, 0x44, 0x8b, 0x27, 0xda, 0x66, 0x67, 0xc9, 0xd6, 0x1d, - 0x43, 0xa3, 0x2d, 0x6a, 0xde, 0xa7, 0x47, 0x73, 0xe0, 0x45, 0x87, 0x4e, 0x71, 0x1f, 0x43, 0xe7, - 0x3a, 0xee, 0xb6, 0x99, 0x64, 0xf0, 0x54, 0x9d, 0x9b, 0x3f, 0xe5, 0xad, 0xcd, 0xca, 0x88, 0xc1, - 0xc1, 0x78, 0xaf, 0xa2, 0xc9, 0x44, 0x4c, 0x49, 0x66, 0xa8, 0xb5, 0xec, 0xad, 0xa0, 0x92, 0xf4, - 0x73, 0x25, 0x69, 0x23, 0x44, 0x13, 0x18, 0xf5, 0x7f, 0xcd, 0xc3, 0xe9, 0xb8, 0xc8, 0x1b, 0xd4, - 0x32, 0x8e, 0xe5, 0xfd, 0xee, 0xc8, 0xfb, 0xcf, 0x0b, 0xf0, 0x88, 0x28, 0xd3, 0x58, 0xd1, 0x1d, - 0x6a, 0xd4, 0x4c, 0x87, 0xb6, 0x3c, 0xdb, 0xd9, 0x38, 0xc2, 0xf6, 0xd9, 0xc1, 0x89, 0xfd, 0x45, - 0x28, 0x89, 0x33, 0x16, 0xbe, 0xce, 0x8c, 0x05, 0x2d, 0x41, 0x68, 0x62, 0x85, 0xe2, 0xe7, 0x33, - 0xb1, 0xce, 0x2a, 0xed, 0xa4, 0xb3, 0x3e, 0x02, 0xa3, 0x81, 0xe8, 0x71, 0x8b, 0x3e, 0x10, 0x1a, - 0x73, 0x86, 0x8f, 0xc0, 0x5d, 0xba, 0x16, 0x25, 0xc4, 0xda, 0x7c, 0x40, 0xbd, 0x86, 0xc6, 0xd6, - 0xa8, 0xa8, 0x2d, 0x28, 0x67, 
0x1a, 0x9a, 0x4c, 0xa4, 0x6e, 0x16, 0xe1, 0x42, 0x7a, 0xb7, 0x6b, - 0x54, 0x37, 0x8e, 0x7b, 0xfd, 0x7d, 0xd9, 0xeb, 0xe4, 0x09, 0x28, 0x2e, 0xe8, 0xde, 0x8a, 0x70, - 0x99, 0xc0, 0x8b, 0xf7, 0x7b, 0x66, 0x9b, 0x36, 0x3b, 0xba, 0xb7, 0xa2, 0x21, 0x4a, 0x9a, 0x33, - 0x00, 0x39, 0xa6, 0xcc, 0x19, 0xd2, 0x62, 0x3f, 0xfc, 0x78, 0xee, 0x72, 0x31, 0x75, 0xb1, 0xff, - 0x93, 0x62, 0xd6, 0xbc, 0x72, 0xd7, 0x31, 0x3d, 0x7a, 0xac, 0x61, 0xc7, 0x1a, 0xb6, 0x4f, 0x0d, - 0xfb, 0xc3, 0x3c, 0x8c, 0x06, 0x7b, 0xb2, 0xb7, 0x69, 0xeb, 0x70, 0xd6, 0xaa, 0x70, 0x2b, 0x53, - 0xd8, 0xf7, 0x56, 0x66, 0x3f, 0x0a, 0xa5, 0x06, 0x7b, 0x4b, 0x6e, 0x1a, 0xa0, 0xc4, 0xf8, 0xde, - 0x32, 0x38, 0x02, 0x7e, 0x02, 0x06, 0x66, 0xf5, 0x07, 0xe6, 0x5a, 0x77, 0x4d, 0x58, 0xe9, 0xe8, - 0x02, 0xb8, 0xa6, 0x3f, 0xd0, 0x7c, 0xb8, 0xfa, 0x2f, 0x72, 0x30, 0x26, 0x84, 0x2a, 0x98, 0xef, - 0x4b, 0xaa, 0xa1, 0x74, 0xf2, 0xfb, 0x96, 0x4e, 0x61, 0xef, 0xd2, 0x51, 0xff, 0x46, 0x01, 0x94, - 0x29, 0xb3, 0x4d, 0x17, 0x1d, 0xdd, 0x72, 0xef, 0x51, 0x47, 0x6c, 0xa7, 0x27, 0x19, 0xab, 0x7d, - 0x7d, 0xa0, 0x34, 0xa5, 0xe4, 0xf7, 0x34, 0xa5, 0x7c, 0x08, 0x86, 0x44, 0x63, 0x02, 0xf7, 0x53, - 0x1c, 0x35, 0x8e, 0x0f, 0xd4, 0x42, 0x3c, 0x23, 0xae, 0x76, 0x3a, 0x8e, 0x7d, 0x9f, 0x3a, 0xfc, - 0x2a, 0x50, 0x10, 0xeb, 0x3e, 0x50, 0x0b, 0xf1, 0x12, 0x67, 0xea, 0xdb, 0x8b, 0x32, 0x67, 0xea, - 0x68, 0x21, 0x9e, 0x5c, 0x86, 0xc1, 0x19, 0xbb, 0xa5, 0xa3, 0xa0, 0xf9, 0xb4, 0x32, 0xb2, 0xb5, - 0x59, 0x19, 0x6c, 0x0b, 0x98, 0x16, 0x60, 0x19, 0x65, 0xcd, 0x5e, 0xb7, 0xda, 0xb6, 0xce, 0x3d, - 0x8c, 0x06, 0x39, 0xa5, 0x21, 0x60, 0x5a, 0x80, 0x65, 0x94, 0x4c, 0xe6, 0xe8, 0xb9, 0x35, 0x18, - 0xf2, 0xbc, 0x27, 0x60, 0x5a, 0x80, 0x55, 0xbf, 0x5e, 0x64, 0xda, 0xeb, 0x9a, 0xef, 0x3c, 0xf4, - 0xeb, 0x42, 0x38, 0x60, 0xfa, 0xf7, 0x30, 0x60, 0x1e, 0x9a, 0xf3, 0x40, 0xf5, 0xdf, 0x0e, 0x00, - 0x08, 0xe9, 0x4f, 0x1e, 0x6f, 0x0e, 0xf7, 0xa7, 0x35, 0x35, 0x38, 0x39, 0x69, 0xad, 0xe8, 0x56, - 0x8b, 0x1a, 0xe1, 0xa9, 0x68, 0x09, 0x87, 0x36, 0x3a, 0xae, 0x52, 
0x81, 0x0c, 0x8f, 0x45, 0xb5, - 0x64, 0x01, 0xf2, 0x3c, 0x0c, 0xd7, 0x2d, 0x8f, 0x3a, 0x7a, 0xcb, 0x33, 0xef, 0x53, 0x31, 0x35, - 0xe0, 0xf5, 0xbb, 0x19, 0x82, 0x35, 0x99, 0x86, 0xbc, 0x08, 0x23, 0x0b, 0xba, 0xe3, 0x99, 0x2d, - 0xb3, 0xa3, 0x5b, 0x9e, 0xab, 0x0c, 0xe2, 0x8c, 0x86, 0x16, 0x46, 0x47, 0x82, 0x6b, 0x11, 0x2a, - 0xf2, 0x69, 0x18, 0xc2, 0xad, 0x29, 0xfa, 0xd8, 0x0f, 0x6d, 0x7b, 0xa5, 0xfa, 0x64, 0xe8, 0x83, - 0xc9, 0x0f, 0x77, 0xf1, 0x9a, 0x3d, 0x7e, 0xab, 0x1a, 0x70, 0x24, 0x6f, 0xc2, 0xc0, 0xa4, 0x65, - 0x20, 0x73, 0xd8, 0x96, 0xb9, 0x2a, 0x98, 0x9f, 0x0d, 0x99, 0xdb, 0x9d, 0x18, 0x6f, 0x9f, 0x5d, - 0xfa, 0x28, 0x1b, 0x7e, 0xf7, 0x46, 0xd9, 0xc8, 0xbb, 0x70, 0xea, 0x3e, 0x7a, 0x50, 0xa7, 0xee, - 0x63, 0x7b, 0x3c, 0x75, 0x57, 0xdf, 0x81, 0xe1, 0xf1, 0x85, 0xa9, 0x60, 0xf4, 0x9e, 0x87, 0xc2, - 0x82, 0x70, 0x07, 0x29, 0x72, 0x7b, 0xa6, 0x63, 0x1a, 0x1a, 0x83, 0x91, 0x2b, 0x30, 0x38, 0x81, - 0x3e, 0x86, 0xe2, 0xbe, 0xb3, 0xc8, 0xd7, 0xbf, 0x16, 0xc2, 0xd0, 0xd5, 0xd8, 0x47, 0x93, 0x0f, - 0xc0, 0xc0, 0x82, 0x63, 0x2f, 0x3b, 0xfa, 0x9a, 0x58, 0x83, 0xd1, 0x1f, 0xa7, 0xc3, 0x41, 0x9a, - 0x8f, 0x53, 0x7f, 0x24, 0xe7, 0x9b, 0xed, 0xac, 0x44, 0xa3, 0x8b, 0x47, 0xf3, 0x58, 0xf7, 0x20, - 0x2f, 0xe1, 0x72, 0x90, 0xe6, 0xe3, 0xc8, 0x15, 0xe8, 0x9f, 0x74, 0x1c, 0xdb, 0x91, 0xdf, 0x25, - 0x50, 0x06, 0x90, 0x2f, 0xa6, 0x91, 0x82, 0x7c, 0x18, 0x86, 0xf9, 0x9c, 0xc3, 0x4f, 0x34, 0x0b, - 0xbd, 0xee, 0x74, 0x65, 0x4a, 0xf5, 0x37, 0x0b, 0x92, 0xcd, 0xc6, 0x25, 0xfe, 0x10, 0xde, 0x0a, - 0xbc, 0x00, 0x85, 0xf1, 0x85, 0x29, 0x31, 0x01, 0x9e, 0xf2, 0x8b, 0x4a, 0xaa, 0x12, 0x2b, 0xc7, - 0xa8, 0xc9, 0x45, 0x28, 0x2e, 0x30, 0xf5, 0x29, 0xa1, 0x7a, 0x0c, 0x6e, 0x6d, 0x56, 0x8a, 0x1d, - 0xa6, 0x3f, 0x08, 0x45, 0x2c, 0xdb, 0xcc, 0xf0, 0x1d, 0x13, 0xc7, 0x86, 0xfb, 0x98, 0x8b, 0x50, - 0xac, 0x3a, 0xcb, 0xf7, 0xc5, 0xac, 0x85, 0x58, 0xdd, 0x59, 0xbe, 0xaf, 0x21, 0x94, 0x5c, 0x03, - 0xd0, 0xa8, 0xd7, 0x75, 0x2c, 0x7c, 0x32, 0x34, 0x84, 0xe7, 0x6f, 0x38, 0x1b, 0x3a, 0x08, 0x6d, - 0xb6, 
0x6c, 0x83, 0x6a, 0x12, 0x89, 0xfa, 0x77, 0xc2, 0x8b, 0x9d, 0x9a, 0xe9, 0xae, 0x1e, 0x77, - 0xe1, 0x2e, 0xba, 0x50, 0x17, 0x47, 0x9c, 0xc9, 0x4e, 0xaa, 0x40, 0xff, 0x54, 0x5b, 0x5f, 0x76, - 0xb1, 0x0f, 0x85, 0xc3, 0xde, 0x3d, 0x06, 0xd0, 0x38, 0x3c, 0xd6, 0x4f, 0x83, 0xdb, 0xf7, 0xd3, - 0x57, 0xfa, 0x83, 0xd1, 0x36, 0x47, 0xbd, 0x75, 0xdb, 0x39, 0xee, 0xaa, 0x9d, 0x76, 0xd5, 0x25, - 0x18, 0x68, 0x38, 0x2d, 0xe9, 0xe8, 0x02, 0xf7, 0x03, 0xae, 0xd3, 0xe2, 0xc7, 0x16, 0x3e, 0x92, - 0xd1, 0xd5, 0x5c, 0x0f, 0xe9, 0x06, 0x42, 0x3a, 0xc3, 0xf5, 0x04, 0x9d, 0x40, 0x0a, 0xba, 0x05, - 0xdb, 0xf1, 0x44, 0xc7, 0x05, 0x74, 0x1d, 0xdb, 0xf1, 0x34, 0x1f, 0x49, 0x3e, 0x04, 0xb0, 0x38, - 0xb1, 0xe0, 0xbf, 0x68, 0x18, 0x0a, 0x1d, 0x2e, 0xc5, 0x53, 0x06, 0x4d, 0x42, 0x93, 0x45, 0x18, - 0x9a, 0xef, 0x50, 0x87, 0x6f, 0x85, 0xf8, 0x23, 0xa0, 0x0f, 0xc6, 0x44, 0x2b, 0xfa, 0xfd, 0xaa, - 0xf8, 0x3f, 0x20, 0xe7, 0xeb, 0x8b, 0xed, 0xff, 0xd4, 0x42, 0x46, 0xe4, 0xc3, 0x50, 0xaa, 0x72, - 0x3b, 0x6f, 0x18, 0x59, 0x06, 0x22, 0xc3, 0x2d, 0x28, 0x47, 0xf1, 0x3d, 0xbb, 0x8e, 0x7f, 0x6b, - 0x82, 0x5c, 0xbd, 0x02, 0xe5, 0x78, 0x35, 0x64, 0x18, 0x06, 0x26, 0xe6, 0xe7, 0xe6, 0x26, 0x27, - 0x16, 0xcb, 0x7d, 0x64, 0x10, 0x8a, 0x8d, 0xc9, 0xb9, 0x5a, 0x39, 0xa7, 0x7e, 0x4d, 0x9a, 0x41, - 0x98, 0x6a, 0x1d, 0x5f, 0x0d, 0xef, 0xeb, 0xbe, 0xa5, 0x8c, 0xf7, 0xa1, 0x78, 0x62, 0xb0, 0x66, - 0x7a, 0x1e, 0x35, 0xc4, 0x2a, 0x81, 0xf7, 0x85, 0xde, 0x03, 0x2d, 0x81, 0x27, 0xcf, 0xc0, 0x28, - 0xc2, 0xc4, 0x15, 0x21, 0xdf, 0x1f, 0x8b, 0x02, 0xce, 0x03, 0x2d, 0x8a, 0x54, 0xbf, 0x1d, 0xde, - 0x0e, 0xcf, 0x50, 0xfd, 0xa8, 0xde, 0x28, 0xbe, 0x47, 0xfa, 0x4b, 0xfd, 0x95, 0x7e, 0xfe, 0xce, - 0x86, 0xbf, 0xf1, 0x3c, 0x0c, 0x51, 0x86, 0x47, 0xba, 0x85, 0x5d, 0x1c, 0xe9, 0x3e, 0x03, 0xa5, - 0x59, 0xea, 0xad, 0xd8, 0xbe, 0x8b, 0x1a, 0xfa, 0x84, 0xac, 0x21, 0x44, 0xf6, 0x09, 0xe1, 0x34, - 0x64, 0x15, 0x88, 0xff, 0x80, 0x33, 0xf0, 0x76, 0xf7, 0x8f, 0x90, 0xcf, 0x25, 0xf6, 0x29, 0x0d, - 0x7c, 0xe6, 0x8d, 0x0f, 0x19, 0x4e, 0x07, 
0xde, 0xf4, 0x92, 0xcf, 0xd8, 0xbf, 0xdb, 0xac, 0x94, - 0x38, 0x8d, 0x96, 0xc2, 0x96, 0x7c, 0x02, 0x86, 0x66, 0xa7, 0xaa, 0xe2, 0x31, 0x27, 0xf7, 0x8a, - 0x38, 0x1f, 0x48, 0xd1, 0x47, 0x04, 0x22, 0xc1, 0x47, 0x4d, 0x6b, 0xf7, 0xf4, 0xe4, 0x5b, 0xce, - 0x90, 0x0b, 0xd3, 0x16, 0xfe, 0x3c, 0x4a, 0x9c, 0x2e, 0x04, 0xda, 0x12, 0x7d, 0x34, 0x15, 0x97, - 0x15, 0xc7, 0xc6, 0xb4, 0x65, 0x70, 0x1f, 0xa3, 0x7b, 0x1e, 0x4e, 0x56, 0x3b, 0x9d, 0xb6, 0x49, - 0x0d, 0xd4, 0x17, 0xad, 0xdb, 0xa6, 0xae, 0xf0, 0x28, 0xc2, 0x17, 0x37, 0x3a, 0x47, 0x36, 0xf1, - 0x09, 0x71, 0xd3, 0xe9, 0x46, 0x3d, 0x49, 0x93, 0x65, 0xf1, 0xc5, 0x36, 0x67, 0x6f, 0x3b, 0xf5, - 0x9a, 0xf0, 0x29, 0xe2, 0x2f, 0xb6, 0x7d, 0x70, 0xd4, 0xc3, 0x52, 0x26, 0x57, 0x7f, 0x2c, 0x0f, - 0x67, 0x27, 0x1c, 0xaa, 0x7b, 0x74, 0x76, 0xaa, 0x5a, 0xed, 0xa2, 0x2f, 0x60, 0xbb, 0x4d, 0xad, - 0xe5, 0xc3, 0x99, 0x14, 0x5e, 0x83, 0xb1, 0xa0, 0x01, 0x8d, 0x96, 0xdd, 0xa1, 0xf2, 0xdb, 0xb7, - 0x96, 0x8f, 0x69, 0xba, 0x0c, 0xa5, 0xc5, 0x48, 0xc9, 0x2d, 0x38, 0x15, 0x40, 0xaa, 0xed, 0xb6, - 0xbd, 0xae, 0xd1, 0xae, 0xcb, 0x1d, 0x8e, 0x07, 0xb9, 0xc3, 0x71, 0xc8, 0x41, 0x67, 0xf8, 0xa6, - 0xc3, 0x08, 0xb4, 0xb4, 0x52, 0xea, 0x57, 0x0b, 0x70, 0xee, 0x8e, 0xde, 0x36, 0x8d, 0x50, 0x34, - 0x1a, 0x75, 0x3b, 0xb6, 0xe5, 0xd2, 0x23, 0x34, 0xc6, 0x23, 0x03, 0xa9, 0x78, 0x20, 0x03, 0x29, - 0xd9, 0x45, 0xfd, 0xfb, 0xee, 0xa2, 0xd2, 0x9e, 0xba, 0xe8, 0x7f, 0xc9, 0x41, 0xd9, 0x7f, 0x9b, - 0x21, 0x3f, 0xdb, 0x97, 0x1e, 0x0e, 0xe0, 0x01, 0x64, 0xcc, 0xbf, 0x1c, 0xf1, 0xa4, 0x01, 0x03, - 0x93, 0x0f, 0x3a, 0xa6, 0x43, 0xdd, 0x1d, 0x38, 0xc7, 0x3f, 0x2a, 0x0e, 0x5b, 0x4e, 0x52, 0x5e, - 0x24, 0x71, 0xce, 0xc2, 0xc1, 0xf8, 0xe2, 0x92, 0xbf, 0x4e, 0x19, 0xf7, 0x63, 0x11, 0xf0, 0x17, - 0x97, 0xe2, 0x15, 0x4b, 0xe4, 0x09, 0x6d, 0x48, 0x4a, 0x9e, 0x84, 0xc2, 0xe2, 0xe2, 0x8c, 0x98, - 0x87, 0x31, 0x06, 0x84, 0xe7, 0xc9, 0x4f, 0x4a, 0x19, 0x56, 0xfd, 0x57, 0x79, 0xfe, 0xca, 0x9a, - 0x0f, 0xd7, 0x43, 0x51, 0xc2, 0x71, 0x18, 0xf4, 0x05, 0x2e, 0xd4, 0x30, 0x78, 
0x58, 0x11, 0xef, - 0x88, 0x78, 0xdd, 0xc1, 0x23, 0x9a, 0x8a, 0xef, 0x30, 0xcf, 0x6f, 0x11, 0x70, 0x5f, 0x84, 0x0e, - 0xf3, 0xbe, 0x9b, 0xfc, 0x87, 0x60, 0x28, 0x98, 0xa1, 0xe4, 0xdb, 0x83, 0x60, 0x3a, 0xd3, 0x42, - 0x7c, 0x6c, 0x62, 0x2e, 0xed, 0x63, 0x19, 0xf7, 0xc5, 0xcb, 0x7b, 0xe5, 0x58, 0xbc, 0x07, 0x2c, - 0xde, 0x2f, 0x0a, 0xf1, 0xf2, 0x47, 0x56, 0x47, 0x56, 0xbc, 0x07, 0x76, 0x72, 0xae, 0xfe, 0x61, - 0x0e, 0x08, 0x6b, 0xd6, 0x82, 0xee, 0xba, 0xeb, 0xb6, 0x63, 0x70, 0x27, 0xfc, 0x43, 0x11, 0xcc, - 0xc1, 0xdd, 0x76, 0xfe, 0xc0, 0x10, 0x9c, 0x8a, 0xb8, 0x0d, 0x1f, 0xf1, 0xc9, 0xea, 0x4a, 0x74, - 0x34, 0xf5, 0x7a, 0xdd, 0xf3, 0x94, 0x7c, 0x9d, 0xda, 0x1f, 0x79, 0x23, 0x28, 0xdd, 0xa3, 0x3e, - 0x0b, 0x23, 0xe2, 0x07, 0x5b, 0xa1, 0xfd, 0x7b, 0x32, 0x1c, 0xa5, 0x2e, 0x03, 0x68, 0x11, 0x34, - 0x79, 0x09, 0x86, 0xd8, 0x80, 0x59, 0xc6, 0x70, 0x31, 0x03, 0xe1, 0xcb, 0x19, 0xc3, 0x07, 0xca, - 0xeb, 0x49, 0x40, 0x29, 0xb9, 0x7b, 0x0f, 0xee, 0xe0, 0x7d, 0xd6, 0x67, 0x60, 0xb8, 0x6a, 0x59, - 0xb6, 0x87, 0x5b, 0x7c, 0x57, 0x5c, 0x6c, 0x64, 0xda, 0xf4, 0x4f, 0x62, 0xfc, 0x82, 0x90, 0x3e, - 0xd5, 0xa8, 0x97, 0x19, 0x92, 0xeb, 0xfe, 0xeb, 0x1f, 0xea, 0x08, 0xf3, 0x14, 0x2f, 0x77, 0x1c, - 0x01, 0x4b, 0x3e, 0xfe, 0xc1, 0xce, 0x1b, 0x5d, 0x70, 0xec, 0x8e, 0xed, 0x52, 0x83, 0x0b, 0x6a, - 0x38, 0x8c, 0x06, 0xd1, 0x11, 0x08, 0x7c, 0x6a, 0x18, 0x09, 0xdd, 0x12, 0x29, 0x42, 0xee, 0xc1, - 0x69, 0xff, 0x9a, 0x39, 0x78, 0xd4, 0x59, 0xaf, 0xb9, 0xe8, 0x32, 0x3f, 0x1c, 0xc6, 0x27, 0x09, - 0x51, 0xe3, 0x8f, 0xf9, 0x97, 0x2a, 0xfe, 0xab, 0xd0, 0xa6, 0x69, 0xc8, 0x5d, 0x9d, 0xca, 0x8f, - 0x7c, 0x37, 0x0c, 0xcf, 0xea, 0x0f, 0x6a, 0x5d, 0x71, 0x72, 0x33, 0xba, 0xf3, 0xbb, 0x9b, 0x35, - 0xfd, 0x41, 0xd3, 0x10, 0xe5, 0x62, 0x36, 0x85, 0xcc, 0x92, 0x34, 0xe1, 0xec, 0x82, 0x63, 0xaf, - 0xd9, 0x1e, 0x35, 0x62, 0xef, 0x23, 0x4f, 0x84, 0x0f, 0xaa, 0x3b, 0x82, 0xa2, 0xd9, 0xe3, 0xa1, - 0x64, 0x06, 0x1b, 0xb2, 0x06, 0x27, 0xaa, 0xae, 0xdb, 0x5d, 0xa3, 0xe1, 0xfd, 0x56, 0x79, 0xdb, - 0xcf, 0xf8, 0xa0, 
0xf0, 0x79, 0x7e, 0x44, 0xc7, 0xa2, 0xfc, 0x7a, 0xab, 0xe9, 0x99, 0x72, 0x8d, - 0xf8, 0x2d, 0x71, 0xde, 0xac, 0x77, 0x7d, 0x01, 0xe2, 0xd3, 0x7e, 0xe5, 0x24, 0x0e, 0x2f, 0xec, - 0xdd, 0x40, 0xf4, 0x18, 0x16, 0x40, 0xee, 0xdd, 0x48, 0x91, 0x9b, 0xc5, 0xc1, 0xb1, 0xf2, 0x09, - 0xed, 0x5c, 0xf2, 0x83, 0xf8, 0xcb, 0xa1, 0xbf, 0x99, 0x8f, 0xcd, 0x44, 0xdc, 0x46, 0xdb, 0xd7, - 0x4c, 0x24, 0xcf, 0x28, 0xf9, 0x3d, 0xce, 0x28, 0x4f, 0x25, 0xbd, 0x2e, 0x52, 0xa6, 0x89, 0xef, - 0x86, 0x31, 0xbf, 0x04, 0xb6, 0x7b, 0x23, 0x58, 0x6a, 0xb2, 0xbb, 0xe3, 0xa2, 0xe8, 0x8e, 0x32, - 0x1a, 0xa9, 0x1b, 0xb1, 0x3e, 0x88, 0xf1, 0x53, 0xbf, 0x99, 0x03, 0x08, 0x95, 0x98, 0x3c, 0x1b, - 0x8d, 0xfb, 0x95, 0x0b, 0xaf, 0xa2, 0x44, 0x10, 0x8f, 0x48, 0xa0, 0x2f, 0x72, 0x11, 0x8a, 0x18, - 0xe8, 0x25, 0x1f, 0x1e, 0x7d, 0xaf, 0x9a, 0x96, 0xa1, 0x21, 0x94, 0x61, 0xa5, 0x88, 0x0c, 0x88, - 0x45, 0xb7, 0x0b, 0x6e, 0x79, 0xd7, 0xe0, 0x44, 0xa3, 0xbb, 0x24, 0x77, 0xa6, 0x1c, 0xca, 0xca, - 0xed, 0x2e, 0x05, 0x6f, 0xb2, 0x23, 0xd1, 0x7c, 0xa2, 0x45, 0xd4, 0xaf, 0xe7, 0x62, 0xfd, 0x7b, - 0x88, 0x86, 0xc5, 0x8e, 0xfa, 0x54, 0xfd, 0x83, 0x02, 0x0c, 0x2f, 0xd8, 0x8e, 0x27, 0x22, 0xe7, - 0x1c, 0xed, 0x95, 0x5e, 0xda, 0x8f, 0x16, 0x77, 0xb1, 0x1f, 0xbd, 0x08, 0x45, 0xc9, 0x89, 0x9c, - 0xdf, 0x5c, 0x19, 0x86, 0xa3, 0x21, 0xf4, 0x5d, 0x7e, 0x14, 0x93, 0xbc, 0xa6, 0x1e, 0xd8, 0xb7, - 0x33, 0xc8, 0xf7, 0xe4, 0x01, 0xde, 0x7c, 0xfe, 0xf9, 0x87, 0xb8, 0x4b, 0xd5, 0x9f, 0xcc, 0xc1, - 0x09, 0x71, 0xf9, 0x2b, 0xc5, 0xfc, 0x1b, 0xf0, 0xaf, 0xed, 0xe5, 0x99, 0x84, 0x83, 0x34, 0x1f, - 0xc7, 0x0c, 0x83, 0xc9, 0x07, 0xa6, 0x87, 0xf7, 0x5f, 0x52, 0xd0, 0x3f, 0x2a, 0x60, 0xb2, 0x61, - 0xe0, 0xd3, 0x91, 0x67, 0xfd, 0x6b, 0xed, 0x42, 0x68, 0x0d, 0xb1, 0x02, 0x93, 0xa9, 0x57, 0xdb, - 0xea, 0x2f, 0x14, 0xa1, 0x38, 0xf9, 0x80, 0xb6, 0x8e, 0x78, 0xd7, 0x48, 0x87, 0xe5, 0xc5, 0x7d, - 0x1e, 0x96, 0xef, 0xc5, 0x4f, 0xe7, 0x8d, 0xb0, 0x3f, 0x4b, 0xd1, 0xea, 0x63, 0x3d, 0x1f, 0xaf, - 0xde, 0xef, 0xe9, 0xa3, 0xe7, 0xe6, 0xf5, 0x4f, 0x0a, 
0x50, 0x68, 0x4c, 0x2c, 0x1c, 0xeb, 0xcd, - 0xa1, 0xea, 0x4d, 0x6f, 0x3f, 0x08, 0x35, 0xb8, 0xda, 0x1c, 0x0c, 0x3d, 0x8f, 0x63, 0xb7, 0x98, - 0x7f, 0x5e, 0x80, 0xb1, 0xc6, 0xd4, 0xe2, 0x82, 0x74, 0xbb, 0x70, 0x8b, 0x7b, 0x87, 0xa2, 0x9f, - 0x22, 0xef, 0xd2, 0x8b, 0x09, 0xb3, 0xea, 0x76, 0xdd, 0xf2, 0x5e, 0x7e, 0xf1, 0x8e, 0xde, 0xee, - 0x52, 0x3c, 0x90, 0xe3, 0xbe, 0xe4, 0xae, 0xf9, 0x0e, 0xfd, 0x2a, 0x86, 0xd9, 0xf0, 0x19, 0x90, - 0xd7, 0xa0, 0x70, 0x5b, 0x78, 0xf9, 0x64, 0xf1, 0x79, 0xe1, 0x3a, 0xe7, 0xc3, 0x26, 0xc1, 0x42, - 0xd7, 0x34, 0x90, 0x03, 0x2b, 0xc5, 0x0a, 0xdf, 0x10, 0x26, 0xc3, 0x8e, 0x0a, 0x2f, 0xfb, 0x85, - 0x6f, 0xd4, 0x6b, 0xa4, 0x01, 0xc3, 0x0b, 0xd4, 0x59, 0x33, 0xb1, 0xa3, 0xfc, 0x39, 0xbb, 0x37, - 0x13, 0xb6, 0x7f, 0x1d, 0xee, 0x84, 0x85, 0x90, 0x99, 0xcc, 0x85, 0xbc, 0x05, 0xc0, 0xad, 0xaa, - 0x1d, 0xc6, 0x91, 0x7d, 0x14, 0x77, 0x83, 0x7c, 0xc3, 0x91, 0x62, 0xf9, 0x4b, 0xcc, 0xc8, 0x2a, - 0x94, 0x67, 0x6d, 0xc3, 0xbc, 0x67, 0x72, 0x77, 0x5e, 0xac, 0xa0, 0xb4, 0xbd, 0x13, 0x1d, 0xdb, - 0x60, 0xac, 0x49, 0xe5, 0xd2, 0xaa, 0x49, 0x30, 0x56, 0xff, 0x61, 0x3f, 0x14, 0x59, 0xb7, 0x1f, - 0x8f, 0xdf, 0xfd, 0x8c, 0xdf, 0x2a, 0x94, 0xef, 0xda, 0xce, 0xaa, 0x69, 0x2d, 0x07, 0x2f, 0x2d, - 0xc4, 0x89, 0x05, 0x7a, 0x87, 0xad, 0x73, 0x5c, 0x33, 0x78, 0x94, 0xa1, 0x25, 0xc8, 0xb7, 0x19, - 0xc1, 0xaf, 0x00, 0xf0, 0xf0, 0x0c, 0x48, 0x33, 0x18, 0x86, 0x86, 0xe1, 0xc1, 0x1b, 0xf0, 0xf1, - 0x86, 0x1c, 0x1a, 0x26, 0x24, 0x26, 0x57, 0x7c, 0xff, 0x9a, 0x21, 0x7c, 0xcb, 0x81, 0x47, 0x33, - 0xe8, 0x5f, 0x23, 0x1b, 0x01, 0xdc, 0xd3, 0x66, 0x01, 0x40, 0xba, 0xb3, 0x84, 0x98, 0x20, 0x22, - 0x93, 0x83, 0x88, 0xeb, 0x98, 0x72, 0x65, 0xa9, 0x49, 0x3c, 0xc8, 0xcb, 0x31, 0xa7, 0x0a, 0x12, - 0xe1, 0x96, 0xe9, 0x53, 0x11, 0x3a, 0xe5, 0x8d, 0x6c, 0xe7, 0x94, 0xa7, 0xfe, 0x6c, 0x01, 0x86, - 0x19, 0xb7, 0x46, 0x77, 0x6d, 0x4d, 0x77, 0x36, 0x8e, 0x15, 0x79, 0x3f, 0x8a, 0xdc, 0x84, 0x93, - 0xf2, 0x23, 0x0c, 0x66, 0xba, 0xfa, 0x31, 0xc2, 0x82, 0x2d, 0x7c, 0x9c, 0x80, 0xdb, 0x96, 
0x38, - 0xef, 0x7b, 0x02, 0x8c, 0x27, 0x4e, 0xae, 0x96, 0xe4, 0xa5, 0xfe, 0x68, 0x0e, 0xca, 0x71, 0x68, - 0xa0, 0xfb, 0xb9, 0x54, 0xdd, 0x7f, 0x06, 0x86, 0x84, 0x5b, 0x86, 0x6e, 0x08, 0x2f, 0xd1, 0xb1, - 0xad, 0xcd, 0x0a, 0xe0, 0x9b, 0xf8, 0xa6, 0x43, 0x75, 0x43, 0x0b, 0x09, 0xc8, 0x4b, 0x30, 0x82, - 0x3f, 0xee, 0x3a, 0xa6, 0xe7, 0x51, 0xde, 0x19, 0x45, 0x7e, 0x57, 0xc4, 0x0b, 0xac, 0x73, 0x84, - 0x16, 0x21, 0x53, 0x7f, 0x3b, 0x0f, 0x43, 0x8d, 0xee, 0x92, 0xbb, 0xe1, 0x7a, 0x74, 0xed, 0x88, - 0xeb, 0x90, 0x7f, 0xac, 0x50, 0x4c, 0x3d, 0x56, 0x78, 0xd2, 0x1f, 0x5a, 0xd2, 0x9d, 0x46, 0xb0, - 0x31, 0xf0, 0x3d, 0x5d, 0x43, 0x2d, 0x2a, 0xed, 0x5e, 0x8b, 0xd4, 0xbf, 0x9f, 0x87, 0x32, 0x77, - 0x08, 0xa8, 0x99, 0x6e, 0xeb, 0x00, 0x1e, 0x29, 0x1d, 0xbe, 0x4c, 0xf7, 0xe7, 0x44, 0xb3, 0x83, - 0xa7, 0x5f, 0xea, 0xe7, 0xf2, 0x30, 0x5c, 0xed, 0x7a, 0x2b, 0x55, 0x0f, 0xe7, 0xb7, 0x87, 0x72, - 0x8f, 0xfc, 0x5b, 0x39, 0x38, 0xc1, 0x1a, 0xb2, 0x68, 0xaf, 0x52, 0xeb, 0x00, 0xae, 0x44, 0x0e, - 0xe2, 0x20, 0xd2, 0x97, 0x65, 0x61, 0x77, 0xb2, 0xc4, 0x8b, 0x3c, 0xcd, 0x6e, 0xd3, 0xa3, 0xfd, - 0x19, 0x07, 0x78, 0x91, 0xe7, 0x0b, 0xe4, 0x00, 0x2e, 0x8e, 0xdf, 0x5f, 0x02, 0x39, 0x80, 0x13, - 0xd9, 0xf7, 0x87, 0x40, 0x7e, 0x33, 0x07, 0x43, 0xe3, 0xb6, 0x77, 0xc4, 0x07, 0xbe, 0xf8, 0x8a, - 0xa3, 0xad, 0xe6, 0xfe, 0x57, 0x1c, 0x6d, 0xdd, 0x54, 0x7f, 0x3c, 0x0f, 0xa7, 0x45, 0x9e, 0x0a, - 0x71, 0x06, 0x76, 0x3c, 0x1d, 0x8b, 0xc1, 0x96, 0x14, 0xcd, 0xf1, 0x3c, 0x24, 0x44, 0xf3, 0xd3, - 0x05, 0x38, 0x8d, 0x71, 0xb0, 0xd9, 0x8e, 0xea, 0x7d, 0x60, 0x8b, 0x90, 0x56, 0xd4, 0x3d, 0x63, - 0x36, 0xc5, 0x3d, 0xe3, 0xdf, 0x6d, 0x56, 0x5e, 0x5e, 0x36, 0xbd, 0x95, 0xee, 0xd2, 0xd5, 0x96, - 0xbd, 0x76, 0x6d, 0xd9, 0xd1, 0xef, 0x9b, 0xdc, 0x31, 0x41, 0x6f, 0x5f, 0x0b, 0xd3, 0x47, 0x75, - 0x4c, 0x91, 0x0c, 0xaa, 0x81, 0x3b, 0x25, 0xc6, 0xd5, 0x77, 0xec, 0x70, 0x01, 0x6e, 0xda, 0xa6, - 0x25, 0x7c, 0xa5, 0xb9, 0xa1, 0xdb, 0xd8, 0xda, 0xac, 0x9c, 0x79, 0xdb, 0x36, 0xad, 0x66, 0xdc, - 0x61, 0x7a, 0xb7, 0xf5, 0x85, 
0xac, 0x35, 0xa9, 0x1a, 0xf5, 0x9f, 0xe7, 0xe0, 0x7c, 0x54, 0x8b, - 0xdf, 0x0f, 0xb6, 0xe3, 0x5f, 0xcf, 0xc3, 0x99, 0x1b, 0x28, 0x9c, 0xc0, 0xc5, 0xec, 0x78, 0xde, - 0x12, 0x83, 0x33, 0x45, 0x36, 0xc7, 0x16, 0x65, 0xb6, 0x6c, 0x8e, 0x27, 0x75, 0x21, 0x9b, 0xdf, - 0xcd, 0xc1, 0xa9, 0xf9, 0x7a, 0x6d, 0xe2, 0x7d, 0x32, 0xa2, 0x92, 0xdf, 0x73, 0xc4, 0x0d, 0xce, - 0xc4, 0xf7, 0x1c, 0x71, 0xd3, 0xf3, 0xcb, 0x79, 0x38, 0xd5, 0xa8, 0xce, 0xce, 0xbc, 0x5f, 0x66, - 0xf0, 0x09, 0xd9, 0x1f, 0xda, 0x3f, 0x04, 0x13, 0xb6, 0x80, 0xfc, 0x99, 0x77, 0xae, 0x67, 0xfb, - 0x49, 0x27, 0x85, 0x72, 0xc4, 0xa7, 0xee, 0x03, 0x11, 0x0a, 0xd3, 0xfc, 0x08, 0xf5, 0x11, 0xd7, - 0xfc, 0x7f, 0x5c, 0x82, 0xe1, 0x5b, 0xdd, 0x25, 0x2a, 0x5c, 0xba, 0x1e, 0xea, 0x93, 0xdf, 0xeb, - 0x30, 0x2c, 0xc4, 0x80, 0x37, 0x1c, 0x52, 0x50, 0x50, 0x11, 0xe4, 0x89, 0xc7, 0x5d, 0x93, 0x89, - 0xc8, 0x45, 0x28, 0xde, 0xa1, 0xce, 0x92, 0xfc, 0x5e, 0xfe, 0x3e, 0x75, 0x96, 0x34, 0x84, 0x92, - 0x99, 0xf0, 0x31, 0x4f, 0x75, 0xa1, 0x8e, 0x59, 0xb8, 0xc4, 0xa5, 0x21, 0xa6, 0x15, 0x0b, 0xdc, - 0x42, 0xf5, 0x8e, 0xc9, 0xf3, 0x77, 0xc9, 0xb1, 0x3a, 0xe2, 0x25, 0xc9, 0x1c, 0x9c, 0x8c, 0xb8, - 0x8b, 0x62, 0x0a, 0xaa, 0xc1, 0x14, 0x76, 0x69, 0xc9, 0xa7, 0x92, 0x45, 0xc9, 0x1b, 0x30, 0xe2, - 0x03, 0xd1, 0xf1, 0x71, 0x28, 0xcc, 0x7b, 0x12, 0xb0, 0x8a, 0xe5, 0x96, 0x88, 0x14, 0x90, 0x19, - 0xe0, 0x25, 0x06, 0xa4, 0x30, 0x88, 0x39, 0xeb, 0x46, 0x0a, 0x90, 0x97, 0x90, 0x01, 0x3e, 0x40, - 0x43, 0x87, 0xa9, 0x61, 0x7c, 0x4c, 0x8e, 0x17, 0x40, 0x8e, 0x80, 0xf3, 0x90, 0x01, 0x11, 0x32, - 0x32, 0x0f, 0x10, 0x3a, 0xb6, 0x88, 0xc0, 0x2c, 0xbb, 0x76, 0xb9, 0x91, 0x58, 0xc8, 0x37, 0x79, - 0xa3, 0x7b, 0xb9, 0xc9, 0x53, 0x7f, 0xac, 0x00, 0xc3, 0xd5, 0x4e, 0x27, 0x18, 0x0a, 0xcf, 0x42, - 0xa9, 0xda, 0xe9, 0xdc, 0xd6, 0xea, 0x72, 0x32, 0x09, 0xbd, 0xd3, 0x69, 0x76, 0x1d, 0x53, 0xf6, - 0x56, 0xe7, 0x44, 0x64, 0x02, 0x46, 0xab, 0x9d, 0xce, 0x42, 0x77, 0xa9, 0x6d, 0xb6, 0xa4, 0xb4, - 0x7a, 0x3c, 0x8f, 0x69, 0xa7, 0xd3, 0xec, 0x20, 0x26, 0x9e, 0x5b, 
0x31, 0x5a, 0x86, 0x7c, 0x06, - 0xc3, 0x99, 0x89, 0xac, 0x6e, 0x3c, 0x6f, 0x94, 0x1a, 0xa4, 0x91, 0x08, 0xdb, 0x76, 0x35, 0x20, - 0xe2, 0xe9, 0x36, 0x2e, 0xfa, 0x49, 0x52, 0x58, 0x45, 0x89, 0xec, 0x6d, 0x21, 0x4b, 0xf2, 0x1c, - 0x0c, 0x54, 0x3b, 0x1d, 0xe9, 0xb6, 0x0a, 0x1d, 0xdb, 0x58, 0xa9, 0x78, 0x1e, 0x4e, 0x41, 0x26, - 0x3e, 0x4b, 0xdc, 0x6f, 0xdb, 0x8e, 0x87, 0x43, 0x6a, 0x34, 0xfc, 0x2c, 0xff, 0x42, 0xdc, 0x96, - 0x23, 0x08, 0x69, 0xd1, 0x32, 0x17, 0x5e, 0x87, 0xb1, 0x68, 0x8b, 0x77, 0x95, 0xf3, 0xe3, 0x3b, - 0x39, 0x94, 0xca, 0x11, 0x7f, 0xb2, 0xf1, 0x02, 0x14, 0xaa, 0x9d, 0x8e, 0x98, 0xd4, 0x4e, 0xa5, - 0x74, 0x6a, 0x3c, 0x3e, 0x44, 0xb5, 0xd3, 0xf1, 0x3f, 0xfd, 0x88, 0xbf, 0xfd, 0xda, 0xd3, 0xa7, - 0xff, 0x26, 0xff, 0xf4, 0xa3, 0xfd, 0x2e, 0x4b, 0xfd, 0x85, 0x02, 0x9c, 0xa8, 0x76, 0x3a, 0xc7, - 0x09, 0x3e, 0x0e, 0x2a, 0x0a, 0xc5, 0xf3, 0x00, 0xd2, 0x1c, 0x3b, 0x10, 0xbc, 0x4c, 0x1d, 0x96, - 0xe6, 0x57, 0x25, 0xa7, 0x49, 0x44, 0xbe, 0xfa, 0x0d, 0xee, 0x4a, 0xfd, 0x3e, 0x57, 0xc0, 0x89, - 0xef, 0xa8, 0x47, 0xd4, 0x7b, 0xaf, 0x74, 0x9b, 0xe8, 0x83, 0xd2, 0xae, 0xfa, 0xe0, 0x37, 0x22, - 0x83, 0x07, 0x33, 0x3a, 0x1c, 0xf7, 0x42, 0xff, 0xbe, 0x6c, 0xeb, 0x31, 0x59, 0x98, 0x22, 0xcc, - 0x97, 0x9f, 0xca, 0x4f, 0x04, 0x9d, 0x6b, 0x31, 0x54, 0xd3, 0x34, 0xb4, 0x18, 0xad, 0xdf, 0x87, - 0x03, 0xbb, 0xea, 0xc3, 0xcd, 0x3c, 0x06, 0x96, 0x08, 0x82, 0xd6, 0xed, 0x7f, 0x8b, 0x72, 0x0d, - 0x80, 0xbb, 0x2f, 0x04, 0xfe, 0xf9, 0xa3, 0x3c, 0x3e, 0x15, 0xcf, 0xf0, 0x27, 0xe2, 0x53, 0x85, - 0x24, 0x81, 0xbb, 0x53, 0x21, 0xd5, 0xdd, 0xe9, 0x0a, 0x0c, 0x6a, 0xfa, 0xfa, 0x27, 0xba, 0x54, - 0x3c, 0x66, 0xf2, 0x63, 0xc2, 0xea, 0xeb, 0xcd, 0xcf, 0x32, 0xa0, 0x16, 0xa0, 0x89, 0x1a, 0x44, - 0x26, 0x91, 0xdc, 0x4a, 0xf8, 0x41, 0x7b, 0x10, 0x8f, 0x64, 0x2f, 0x8a, 0x4e, 0x5e, 0x85, 0x42, - 0xf5, 0x6e, 0x43, 0x48, 0x36, 0xe8, 0xda, 0xea, 0xdd, 0x86, 0x90, 0x57, 0x66, 0xd9, 0xbb, 0x0d, - 0xf5, 0x73, 0x79, 0x20, 0x49, 0x4a, 0xf2, 0x32, 0x0c, 0x21, 0x74, 0x99, 0xe9, 0x8c, 0x9c, 0x1a, - 0x7a, 
0xdd, 0x6d, 0x3a, 0x08, 0x8d, 0x58, 0x88, 0x3e, 0x29, 0x79, 0x05, 0x73, 0xf9, 0x8b, 0xe4, - 0xa4, 0x91, 0xd4, 0xd0, 0xeb, 0xae, 0x9f, 0xfd, 0x3e, 0x96, 0xca, 0x5f, 0x10, 0xa3, 0x71, 0x79, - 0xb7, 0x31, 0x6d, 0xbb, 0x9e, 0x10, 0x35, 0x37, 0x2e, 0xd7, 0x5d, 0xcc, 0x49, 0x1e, 0x31, 0x2e, - 0x39, 0x19, 0xe6, 0x55, 0xbc, 0xdb, 0xe0, 0xaf, 0xf0, 0x0c, 0xcd, 0x0e, 0x72, 0x18, 0xf2, 0xbc, - 0x8a, 0xeb, 0x6e, 0x93, 0xbf, 0xe0, 0x33, 0x9a, 0x8e, 0xdd, 0x8e, 0xe6, 0x55, 0x8c, 0x94, 0x52, - 0x7f, 0x68, 0x10, 0xca, 0x35, 0xdd, 0xd3, 0x97, 0x74, 0x97, 0x4a, 0x5b, 0xf2, 0x13, 0x3e, 0xcc, - 0xff, 0x1c, 0x49, 0x0e, 0xc6, 0x52, 0xca, 0xd7, 0xc4, 0x0b, 0x90, 0xd7, 0x42, 0xbe, 0x41, 0xd6, - 0x6b, 0x39, 0x8d, 0xe6, 0x52, 0xb3, 0x23, 0xc0, 0x5a, 0x82, 0x90, 0x3c, 0x03, 0xc3, 0x3e, 0x8c, - 0xed, 0x22, 0x0a, 0xa1, 0xce, 0x18, 0x4b, 0x6c, 0x13, 0xa1, 0xc9, 0x68, 0xf2, 0x0a, 0x8c, 0xf8, - 0x3f, 0x25, 0xfb, 0x9c, 0xe7, 0x04, 0x5d, 0x4a, 0x6c, 0xc1, 0x64, 0x52, 0xb9, 0x28, 0xce, 0x6f, - 0xfd, 0x91, 0xa2, 0xb1, 0xb4, 0x9b, 0x11, 0x52, 0xf2, 0x59, 0x18, 0xf3, 0x7f, 0x8b, 0x5d, 0x07, - 0xf7, 0x3e, 0x7c, 0xc6, 0x57, 0xc2, 0xb8, 0x58, 0xaf, 0x46, 0xc9, 0xf9, 0xfe, 0xe3, 0x11, 0x3f, - 0xfd, 0xa3, 0xb1, 0x94, 0xdc, 0x7e, 0xc4, 0x2a, 0x20, 0x75, 0x38, 0xe9, 0x43, 0x42, 0x0d, 0x1d, - 0x08, 0xb7, 0x9d, 0xc6, 0x52, 0x33, 0x55, 0x49, 0x93, 0xa5, 0x48, 0x1b, 0x2e, 0x46, 0x80, 0x86, - 0xbb, 0x62, 0xde, 0xf3, 0xc4, 0x9e, 0x51, 0x04, 0x68, 0x17, 0xa9, 0x83, 0x03, 0xae, 0x9c, 0xc6, - 0xcf, 0x01, 0x1e, 0x0d, 0x41, 0xd3, 0x93, 0x1b, 0x69, 0xc0, 0x69, 0x1f, 0x7f, 0x63, 0x62, 0x61, - 0xc1, 0xb1, 0xdf, 0xa6, 0x2d, 0xaf, 0x5e, 0x13, 0x7b, 0x6e, 0x0c, 0xdc, 0x69, 0x2c, 0x35, 0x97, - 0x5b, 0x1d, 0xa6, 0x14, 0x0c, 0x17, 0x65, 0x9e, 0x5a, 0x98, 0xdc, 0x81, 0x33, 0x12, 0xbc, 0x6e, - 0xb9, 0x9e, 0x6e, 0xb5, 0x68, 0x10, 0x30, 0x07, 0x0f, 0x05, 0x04, 0x57, 0x53, 0x20, 0xa3, 0x6c, - 0xd3, 0x8b, 0x93, 0xd7, 0x61, 0xd4, 0x47, 0xf0, 0xab, 0xc8, 0x61, 0xbc, 0x8a, 0xc4, 0x21, 0x69, - 0x2c, 0x35, 0xe3, 0x8f, 0xc5, 0xa3, 0xc4, 
0xb2, 0x46, 0x2d, 0x6e, 0x74, 0xa8, 0x70, 0x0b, 0xf6, - 0x35, 0xca, 0xdb, 0xe8, 0xa4, 0x2a, 0x23, 0x23, 0x25, 0x6f, 0x84, 0x1a, 0x35, 0xef, 0x98, 0xcb, - 0xa6, 0x9f, 0xda, 0xeb, 0x9c, 0xd0, 0x0f, 0x1b, 0x81, 0x69, 0xfa, 0xc1, 0xc9, 0x2f, 0x54, 0xe1, - 0x54, 0x8a, 0x8e, 0xed, 0x6a, 0xc7, 0xf8, 0x85, 0x7c, 0xd8, 0x88, 0x23, 0xbe, 0x6d, 0x1c, 0x87, - 0x41, 0xff, 0x4b, 0x84, 0xf1, 0xa0, 0x64, 0x0d, 0xcd, 0x38, 0x0f, 0x1f, 0x1f, 0x11, 0xc7, 0x11, - 0xdf, 0x4a, 0x1e, 0x84, 0x38, 0xbe, 0x95, 0x0b, 0xc5, 0x71, 0xc4, 0xb7, 0x97, 0xbf, 0x55, 0x0c, - 0xe7, 0xa4, 0xe3, 0x3d, 0xe6, 0x41, 0x99, 0xc9, 0xa1, 0x33, 0x6d, 0x69, 0x17, 0x6f, 0x88, 0x65, - 0xd5, 0x1c, 0xd8, 0x9b, 0x6a, 0x92, 0xd7, 0x61, 0x78, 0xc1, 0x76, 0xbd, 0x65, 0x87, 0xba, 0x0b, - 0x41, 0x82, 0x11, 0x7c, 0x7f, 0xde, 0x11, 0xe0, 0x66, 0x27, 0x1a, 0x34, 0x4d, 0x22, 0x97, 0x62, - 0xc9, 0x0d, 0xed, 0x3e, 0x96, 0x9c, 0xfa, 0xc7, 0x85, 0x84, 0x2e, 0x71, 0xb3, 0xf7, 0x48, 0xea, - 0xd2, 0x01, 0x4c, 0x14, 0xe4, 0x7a, 0xb8, 0x86, 0xf2, 0xfd, 0x41, 0xbf, 0x14, 0x7b, 0x75, 0x49, - 0x6c, 0x0f, 0xa2, 0x24, 0xe4, 0x53, 0x70, 0x2e, 0x02, 0x58, 0xd0, 0x1d, 0x7d, 0x8d, 0x7a, 0x61, - 0xd2, 0x5a, 0x8c, 0xa6, 0xe7, 0x97, 0x6e, 0x76, 0x02, 0xb4, 0x9c, 0x08, 0x37, 0x83, 0x83, 0xa4, - 0x98, 0x03, 0xbb, 0xf0, 0xf2, 0xfe, 0x4a, 0x21, 0x34, 0x93, 0xa2, 0x51, 0xb1, 0x35, 0xea, 0x76, - 0xdb, 0xde, 0xc3, 0xdb, 0xc1, 0x7b, 0xcb, 0x39, 0x34, 0x0d, 0x27, 0xaa, 0xf7, 0xee, 0xd1, 0x96, - 0xe7, 0x07, 0xfb, 0x77, 0x45, 0x1c, 0x54, 0xbe, 0x6d, 0x11, 0x28, 0x11, 0xbc, 0xdd, 0x8d, 0xa4, - 0x11, 0x8e, 0x16, 0x53, 0xff, 0x65, 0x11, 0x94, 0x60, 0xdb, 0x10, 0xbc, 0x76, 0x3c, 0xc4, 0x25, - 0xfa, 0x3d, 0xd1, 0x2b, 0x26, 0x9c, 0x0c, 0x85, 0x21, 0x9e, 0x99, 0x29, 0xfd, 0xb8, 0x2d, 0xa9, - 0xc4, 0x99, 0x85, 0x84, 0x7c, 0x27, 0x72, 0x41, 0xec, 0x44, 0x48, 0xf8, 0x9a, 0xb4, 0xe9, 0x72, - 0x16, 0x5a, 0x92, 0x2b, 0xf9, 0x62, 0x0e, 0x4e, 0xfb, 0x9d, 0x32, 0xbf, 0xc4, 0x4c, 0xf2, 0x09, - 0xbb, 0x6b, 0x05, 0x6f, 0xb0, 0x5e, 0xcd, 0xae, 0x8e, 0x77, 0xd2, 0xd5, 0xb4, 
0xc2, 0xbc, 0x25, - 0x41, 0xcc, 0x9e, 0x40, 0x21, 0x6c, 0xa4, 0x69, 0xb6, 0x90, 0x48, 0x4b, 0xad, 0xf7, 0xc2, 0x0d, - 0x38, 0x9f, 0xc9, 0x72, 0x3b, 0x13, 0xb8, 0x5f, 0x36, 0x81, 0xff, 0x28, 0x17, 0x4e, 0x44, 0x31, - 0x21, 0x91, 0xab, 0x00, 0x21, 0x48, 0x6c, 0x8a, 0xf1, 0x89, 0x57, 0x28, 0x34, 0x4d, 0xa2, 0x20, - 0xf3, 0x50, 0x12, 0x62, 0xe1, 0x09, 0xe2, 0x3f, 0xb4, 0x4d, 0x2f, 0x5c, 0x95, 0xe5, 0x80, 0x1b, - 0x5e, 0xf1, 0xcd, 0x82, 0xcd, 0x85, 0x57, 0x60, 0x78, 0xaf, 0xdf, 0xf5, 0xc5, 0x02, 0x10, 0x79, - 0x07, 0x7b, 0x88, 0xe6, 0xfd, 0x11, 0x9e, 0xc2, 0x2e, 0xc3, 0x20, 0xfb, 0x04, 0x4c, 0x44, 0x24, - 0x05, 0x1e, 0xef, 0x0a, 0x98, 0x16, 0x60, 0xc3, 0xb8, 0x7d, 0x03, 0xe9, 0x71, 0xfb, 0xd4, 0x1f, - 0x2d, 0xc0, 0x59, 0xb9, 0x43, 0x6a, 0x14, 0x73, 0x99, 0x1c, 0x77, 0xca, 0xbb, 0xd8, 0x29, 0x2a, - 0x94, 0xf8, 0xc6, 0x45, 0x24, 0x95, 0xe1, 0x87, 0x4a, 0x08, 0xd1, 0x04, 0x46, 0xfd, 0x9f, 0xf2, - 0x30, 0x1a, 0x18, 0x87, 0xba, 0xe3, 0x3e, 0xc4, 0xdd, 0xf1, 0x11, 0x18, 0xc5, 0xc8, 0x6b, 0x6b, - 0xd4, 0xe2, 0xd1, 0xc9, 0xfa, 0xa5, 0x2c, 0x50, 0x3e, 0x42, 0x24, 0xfc, 0x8b, 0x10, 0x32, 0xed, - 0xe7, 0x96, 0x9f, 0x14, 0x0f, 0x8f, 0x9b, 0x7d, 0x1c, 0xae, 0xfe, 0xad, 0x02, 0x8c, 0xf8, 0x52, - 0x1e, 0x37, 0x8f, 0xea, 0x2d, 0xd1, 0xe1, 0x0a, 0xf9, 0x1a, 0xc0, 0x82, 0xed, 0x78, 0x7a, 0x7b, - 0x2e, 0xd4, 0x7c, 0x3c, 0x5e, 0xed, 0x20, 0x94, 0x97, 0x91, 0x48, 0x70, 0xfd, 0x0a, 0xcd, 0x6a, - 0x3e, 0x31, 0xf1, 0xf5, 0x2b, 0x80, 0x6a, 0x12, 0x85, 0xfa, 0xab, 0x79, 0x38, 0xe1, 0x77, 0xd2, - 0xe4, 0x03, 0xda, 0xea, 0x3e, 0xcc, 0x73, 0x53, 0x54, 0xda, 0xfd, 0xdb, 0x4a, 0x5b, 0xfd, 0x3f, - 0xa4, 0x89, 0x64, 0xa2, 0x6d, 0x1f, 0x4f, 0x24, 0x7f, 0x11, 0x3a, 0xae, 0x7e, 0x5f, 0x01, 0x4e, - 0xfb, 0x52, 0x9f, 0xea, 0x5a, 0x78, 0x30, 0x31, 0xa1, 0xb7, 0xdb, 0x0f, 0xf3, 0x6e, 0x7c, 0xd8, - 0x17, 0xc4, 0xbc, 0x08, 0x65, 0x2a, 0x92, 0xaf, 0xde, 0x13, 0xe0, 0xa6, 0x6d, 0x1a, 0x9a, 0x4c, - 0x44, 0xde, 0x80, 0x11, 0xff, 0x67, 0xd5, 0x59, 0xf6, 0xb7, 0xe0, 0x78, 0xcd, 0x10, 0x14, 0xd2, - 0x9d, 0x48, 0x6c, 
0x8e, 0x48, 0x01, 0xf5, 0x73, 0x03, 0x70, 0xe1, 0xae, 0x69, 0x19, 0xf6, 0xba, - 0xeb, 0xe7, 0xee, 0x3d, 0xf2, 0xc7, 0x6c, 0x87, 0x9d, 0xb3, 0xf7, 0x13, 0x70, 0x26, 0x2e, 0x52, - 0x27, 0xc8, 0xa8, 0x20, 0x7a, 0x67, 0x9d, 0x13, 0x34, 0xfd, 0x2c, 0xbe, 0xe2, 0xae, 0x4e, 0x4b, - 0x2f, 0x19, 0x4f, 0x03, 0x3c, 0xb0, 0x93, 0x34, 0xc0, 0x4f, 0x43, 0xa9, 0x66, 0xaf, 0xe9, 0xa6, - 0x1f, 0xa5, 0x09, 0x47, 0x71, 0x50, 0x2f, 0x62, 0x34, 0x41, 0xc1, 0xf8, 0x8b, 0x8a, 0xb1, 0xcb, - 0x86, 0x42, 0xfe, 0x7e, 0x01, 0x66, 0xa5, 0x69, 0x32, 0x11, 0xb1, 0x61, 0x54, 0x54, 0x27, 0x6e, - 0xd6, 0x00, 0x37, 0x4f, 0x2f, 0xf9, 0x32, 0xca, 0x56, 0xab, 0xab, 0x91, 0x72, 0x7c, 0x1b, 0xc5, - 0xb3, 0x13, 0x8b, 0x8f, 0xe1, 0x77, 0x6c, 0x5a, 0x94, 0xbf, 0x24, 0x04, 0x9c, 0x64, 0x86, 0x93, - 0x42, 0xc0, 0x59, 0x46, 0x26, 0x22, 0x93, 0x70, 0x12, 0x23, 0xd7, 0x07, 0x5b, 0x29, 0xa6, 0x12, - 0x23, 0x68, 0x54, 0xe2, 0x85, 0x0d, 0x0f, 0x76, 0xcf, 0x3e, 0xae, 0xd9, 0x12, 0x68, 0x2d, 0x59, - 0x82, 0x9c, 0x87, 0xc2, 0xdc, 0x4c, 0x15, 0x6f, 0x7a, 0x06, 0x79, 0xce, 0x39, 0xab, 0xad, 0x6b, - 0x0c, 0x76, 0xe1, 0xe3, 0x40, 0x92, 0x9f, 0xb3, 0xab, 0xdb, 0x9c, 0x7f, 0x2a, 0x6d, 0xf9, 0x8e, - 0xba, 0x3f, 0xce, 0x41, 0x4c, 0x84, 0x91, 0x74, 0x8f, 0xfd, 0xef, 0x66, 0xba, 0xc7, 0xd2, 0x81, - 0xa6, 0x7b, 0x54, 0x7f, 0x2e, 0x07, 0x27, 0x13, 0xd9, 0x1d, 0xc8, 0x0b, 0x00, 0x1c, 0x22, 0x45, - 0x78, 0xc5, 0x00, 0x44, 0x61, 0xc6, 0x07, 0xb1, 0x3c, 0x86, 0x64, 0xe4, 0x1a, 0x0c, 0xf2, 0x5f, - 0x22, 0xc6, 0x59, 0xb2, 0x48, 0xb7, 0x6b, 0x1a, 0x5a, 0x40, 0x14, 0xd6, 0x82, 0xf7, 0x99, 0x85, - 0xd4, 0x22, 0xde, 0x46, 0x27, 0xa8, 0x85, 0x91, 0xa9, 0x3f, 0x94, 0x87, 0x91, 0xa0, 0xc1, 0x55, - 0xe3, 0xb0, 0x74, 0xae, 0x24, 0x12, 0x65, 0x14, 0xb6, 0x4b, 0x94, 0x11, 0x9b, 0x6f, 0x45, 0x66, - 0x8c, 0x83, 0x7b, 0xd3, 0xf5, 0xa5, 0x3c, 0x9c, 0x08, 0x6a, 0x3d, 0xc4, 0xab, 0xb3, 0xf7, 0x90, - 0x48, 0xbe, 0x98, 0x03, 0x65, 0xdc, 0x6c, 0xb7, 0x4d, 0x6b, 0xb9, 0x6e, 0xdd, 0xb3, 0x9d, 0x35, - 0x9c, 0x10, 0x0f, 0xef, 0x08, 0x57, 0xfd, 0x81, 0x1c, 
0x9c, 0x14, 0x0d, 0x9a, 0xd0, 0x1d, 0xe3, - 0xf0, 0xce, 0xc7, 0xe2, 0x2d, 0x39, 0x3c, 0x7d, 0x51, 0x7f, 0x39, 0x0f, 0x30, 0x63, 0xb7, 0x56, - 0x8f, 0xf8, 0x93, 0xb0, 0xd7, 0xa0, 0xc4, 0x9d, 0xea, 0x85, 0xc6, 0x9e, 0x14, 0x4f, 0x9f, 0xd8, - 0xa7, 0x71, 0xc4, 0x78, 0x59, 0xcc, 0xc7, 0x25, 0xee, 0x97, 0xaf, 0xe4, 0x34, 0x51, 0x84, 0x55, - 0xca, 0xe8, 0xc4, 0x82, 0x11, 0x54, 0xca, 0x60, 0xd1, 0x4a, 0xb7, 0x36, 0x2b, 0xc5, 0xb6, 0xdd, - 0x5a, 0xd5, 0x90, 0x5e, 0xfd, 0x7f, 0x72, 0x5c, 0x76, 0x47, 0xfc, 0x61, 0xab, 0xff, 0xf9, 0xc5, - 0x5d, 0x7e, 0xfe, 0x5f, 0xcd, 0xc1, 0x69, 0x8d, 0xb6, 0xec, 0xfb, 0xd4, 0xd9, 0x98, 0xb0, 0x0d, - 0x7a, 0x83, 0x5a, 0xd4, 0x39, 0xac, 0x11, 0xf5, 0x6b, 0x98, 0x59, 0x28, 0x6c, 0xcc, 0x6d, 0x97, - 0x1a, 0x47, 0x27, 0xeb, 0x93, 0xfa, 0x8d, 0x01, 0x50, 0x52, 0xad, 0xde, 0x23, 0x6b, 0xce, 0x65, - 0x6e, 0x65, 0x8a, 0x07, 0xb5, 0x95, 0xe9, 0xdf, 0xdd, 0x56, 0xa6, 0xb4, 0xdb, 0xad, 0xcc, 0xc0, - 0x4e, 0xb6, 0x32, 0x6b, 0xf1, 0xad, 0xcc, 0x20, 0x6e, 0x65, 0x5e, 0xe8, 0xb9, 0x95, 0x99, 0xb4, - 0x8c, 0x3d, 0x6e, 0x64, 0x8e, 0x6c, 0x3e, 0xf3, 0xbd, 0xec, 0xc0, 0x2e, 0xb3, 0x49, 0xb1, 0x65, - 0x3b, 0x06, 0x35, 0xc4, 0xc6, 0x0b, 0x4f, 0xfd, 0x1d, 0x01, 0xd3, 0x02, 0x6c, 0x22, 0x39, 0xfc, - 0xe8, 0x4e, 0x92, 0xc3, 0x1f, 0xc0, 0xfe, 0xeb, 0x0b, 0x79, 0x38, 0x39, 0x41, 0x1d, 0x8f, 0x47, - 0xb2, 0x3d, 0x08, 0x87, 0xba, 0x2a, 0x9c, 0x90, 0x18, 0xa2, 0x45, 0x9e, 0x0f, 0x9d, 0x04, 0x5b, - 0xd4, 0xf1, 0xe2, 0x3e, 0x86, 0x71, 0x7a, 0x56, 0xbd, 0x9f, 0xa0, 0x51, 0x8c, 0xdd, 0xa0, 0x7a, - 0x1f, 0xce, 0x05, 0x69, 0x8a, 0x5f, 0x5a, 0x40, 0x2f, 0xf9, 0xc9, 0x14, 0xf7, 0xe0, 0x27, 0xf3, - 0xb5, 0x1c, 0x5c, 0xd2, 0xa8, 0x45, 0xd7, 0xf5, 0xa5, 0x36, 0x95, 0x9a, 0x25, 0x56, 0x06, 0x36, - 0x6b, 0x98, 0xee, 0x9a, 0xee, 0xb5, 0x56, 0xf6, 0x25, 0xa3, 0x29, 0x18, 0x91, 0xe7, 0xaf, 0x5d, - 0xcc, 0x6d, 0x91, 0x72, 0xea, 0x37, 0x8a, 0x30, 0x30, 0x6e, 0x7b, 0x37, 0xed, 0x7d, 0x26, 0x01, - 0x0d, 0xa7, 0xfc, 0xfc, 0x2e, 0xce, 0x7a, 0x9e, 0xc3, 0xca, 0xa5, 0xac, 0x1b, 0xe8, 0x80, 
0xba, - 0x64, 0x27, 0x32, 0xc0, 0xf8, 0x64, 0xbb, 0x4c, 0xff, 0xf9, 0x32, 0x0c, 0x61, 0x00, 0x1a, 0xe9, - 0x34, 0x16, 0xdd, 0xbb, 0x3d, 0x06, 0x8c, 0xd7, 0x11, 0x92, 0x92, 0x4f, 0x45, 0x42, 0xef, 0x96, - 0xf6, 0x9f, 0x2e, 0x54, 0x8e, 0xc2, 0xfb, 0x02, 0xbf, 0xc8, 0xc3, 0x36, 0x49, 0xc9, 0x91, 0xf0, - 0x14, 0x25, 0xd6, 0xa4, 0x80, 0xf0, 0x00, 0x53, 0x79, 0x4e, 0xc0, 0xe8, 0xb8, 0xed, 0x49, 0xae, - 0xc4, 0x43, 0xe1, 0x4b, 0x54, 0x26, 0xf9, 0x74, 0x3f, 0xe2, 0x68, 0x19, 0xf5, 0xcf, 0x8b, 0x30, - 0xe2, 0xff, 0x3c, 0x24, 0xdd, 0x79, 0x16, 0x4a, 0xd3, 0xb6, 0x94, 0xbb, 0x04, 0xdd, 0x8f, 0x57, - 0x6c, 0x37, 0xe6, 0x57, 0x2d, 0x88, 0x98, 0xd4, 0xe7, 0x6c, 0x43, 0x76, 0x9e, 0x47, 0xa9, 0x5b, - 0xb6, 0x91, 0x78, 0xc1, 0x1c, 0x10, 0x92, 0x4b, 0x50, 0xc4, 0x77, 0x07, 0xd2, 0x41, 0x7e, 0xec, - 0xad, 0x01, 0xe2, 0x25, 0xad, 0x2c, 0xed, 0x56, 0x2b, 0x07, 0xf6, 0xaa, 0x95, 0x83, 0x07, 0xab, - 0x95, 0x6f, 0xc1, 0x08, 0xd6, 0xe4, 0xa7, 0x97, 0xdc, 0x7e, 0x61, 0x3d, 0x2f, 0xd6, 0xbe, 0x51, - 0xde, 0x6e, 0x91, 0x64, 0x12, 0x97, 0xbc, 0x08, 0xab, 0x98, 0xee, 0xc2, 0x3e, 0xb6, 0xd3, 0x7f, - 0x9c, 0x83, 0x81, 0xdb, 0xd6, 0xaa, 0x65, 0xaf, 0xef, 0x4f, 0xe3, 0x5e, 0x80, 0x61, 0xc1, 0x46, - 0x5a, 0x5d, 0xf0, 0x51, 0x7a, 0x97, 0x83, 0x9b, 0xc8, 0x49, 0x93, 0xa9, 0xc8, 0xeb, 0x41, 0x21, - 0x7c, 0x5a, 0x54, 0x08, 0xb3, 0xff, 0xf8, 0x85, 0x5a, 0xd1, 0xf4, 0x1f, 0x32, 0x39, 0xb9, 0x08, - 0xc5, 0x1a, 0x6b, 0xaa, 0x14, 0x06, 0x98, 0x35, 0x45, 0x43, 0xa8, 0xfa, 0x85, 0x22, 0x8c, 0xc5, - 0x0e, 0xbe, 0x9e, 0x86, 0x21, 0x71, 0xf0, 0x64, 0xfa, 0xf9, 0x48, 0xf0, 0xe9, 0x51, 0x00, 0xd4, - 0x06, 0xf9, 0x9f, 0x75, 0x83, 0x7c, 0x0c, 0x06, 0x6c, 0x17, 0x17, 0x45, 0xfc, 0x96, 0xb1, 0x70, - 0x08, 0xcd, 0x37, 0x58, 0xdb, 0xf9, 0xe0, 0x10, 0x24, 0xb2, 0x46, 0xda, 0x2e, 0x7e, 0xda, 0x8b, - 0x30, 0xa4, 0xbb, 0x2e, 0xf5, 0x9a, 0x9e, 0xbe, 0x2c, 0xa7, 0x28, 0x09, 0x80, 0xf2, 0xe8, 0x40, - 0xe0, 0xa2, 0xbe, 0x4c, 0x3e, 0x0e, 0xa3, 0x2d, 0x87, 0xe2, 0xb2, 0xa9, 0xb7, 0x59, 0x2b, 0x25, - 0xb3, 0x36, 0x82, 0x90, 0xef, 
0x4f, 0x42, 0x44, 0xdd, 0x20, 0x77, 0x60, 0x54, 0x7c, 0x0e, 0xf7, - 0xfb, 0xc7, 0x81, 0x36, 0x16, 0x2e, 0x63, 0x5c, 0x24, 0xdc, 0xf3, 0x5f, 0x3c, 0xff, 0x90, 0xc9, - 0x65, 0xbe, 0x86, 0x44, 0x4a, 0xe6, 0x81, 0xac, 0xd3, 0xa5, 0xa6, 0xde, 0xf5, 0x56, 0x58, 0x5d, - 0x3c, 0xc2, 0xbe, 0xc8, 0xd7, 0x8a, 0x6f, 0x26, 0x92, 0x58, 0xf9, 0x29, 0xc9, 0x3a, 0x5d, 0xaa, - 0x46, 0x90, 0xe4, 0x2e, 0x9c, 0x49, 0x16, 0x61, 0x9f, 0xcc, 0x2f, 0x07, 0x9e, 0xdc, 0xda, 0xac, - 0x54, 0x52, 0x09, 0x24, 0xb6, 0xa7, 0x12, 0x6c, 0xeb, 0xc6, 0xcd, 0xe2, 0xe0, 0x40, 0x79, 0x50, - 0x1b, 0x63, 0x65, 0x7d, 0x13, 0xd2, 0x34, 0xd4, 0x6f, 0xe7, 0x98, 0xa9, 0xc8, 0x3e, 0x08, 0xd3, - 0xdd, 0x33, 0x5d, 0x5f, 0xdb, 0xa5, 0xae, 0xaf, 0x85, 0xa9, 0x65, 0x4b, 0x6e, 0x8f, 0xd9, 0x55, - 0x13, 0x58, 0x72, 0x15, 0x4a, 0x86, 0x7c, 0x6a, 0x76, 0x36, 0xda, 0x09, 0x7e, 0x3d, 0x9a, 0xa0, - 0x22, 0x97, 0xa1, 0xc8, 0x96, 0xac, 0xf8, 0x96, 0x59, 0xb6, 0x2e, 0x34, 0xa4, 0x50, 0xbf, 0x27, - 0x0f, 0x23, 0xd2, 0xd7, 0x5c, 0xdf, 0xd7, 0xe7, 0xbc, 0xba, 0xb3, 0x66, 0xfa, 0x4e, 0x2f, 0xb8, - 0x97, 0xf2, 0x9b, 0xfc, 0x62, 0x20, 0x8a, 0x1d, 0x5d, 0x48, 0x09, 0xc1, 0xbc, 0x2c, 0x3e, 0xb4, - 0xb4, 0xf3, 0xed, 0x23, 0xa3, 0xbf, 0x59, 0x1c, 0xcc, 0x97, 0x0b, 0x37, 0x8b, 0x83, 0xc5, 0x72, - 0x3f, 0x86, 0x02, 0xc3, 0xe8, 0xdb, 0x7c, 0x6f, 0x6e, 0xdd, 0x33, 0x97, 0x8f, 0xf8, 0xcb, 0x93, - 0x83, 0x0d, 0x93, 0x16, 0x93, 0xcd, 0x11, 0x7f, 0x86, 0xf2, 0xae, 0xca, 0xe6, 0x38, 0x15, 0xad, - 0x90, 0xcd, 0xbf, 0xcc, 0x81, 0x92, 0x2a, 0x9b, 0xea, 0x21, 0xf9, 0x41, 0x1c, 0x5c, 0x42, 0xda, - 0x3f, 0xcd, 0xc3, 0xc9, 0xba, 0xe5, 0xd1, 0x65, 0xbe, 0x63, 0x3c, 0xe2, 0x53, 0xc5, 0x2d, 0x18, - 0x96, 0x3e, 0x46, 0xf4, 0xf9, 0x23, 0xc1, 0x7e, 0x3c, 0x44, 0x65, 0x70, 0x92, 0x4b, 0x1f, 0xdc, - 0x3b, 0x9e, 0xb8, 0x90, 0x8f, 0xf8, 0x9c, 0x73, 0x34, 0x84, 0x7c, 0xc4, 0x27, 0xaf, 0xf7, 0xa8, - 0x90, 0xbf, 0x54, 0x80, 0x53, 0x29, 0x95, 0x93, 0x4b, 0x30, 0xd0, 0xe8, 0x2e, 0x61, 0xe4, 0xaf, - 0x5c, 0xe8, 0x31, 0xec, 0x76, 0x97, 0x30, 0xe8, 0x97, 0xe6, 0x23, 
0xc9, 0x22, 0x3e, 0xcd, 0x9f, - 0xaf, 0xd7, 0x26, 0x84, 0x54, 0x55, 0x29, 0xc8, 0x00, 0x03, 0xa7, 0x7d, 0x59, 0xf0, 0x7c, 0xdf, - 0x36, 0x8d, 0x56, 0xec, 0xf9, 0x3e, 0x2b, 0x43, 0xbe, 0x0b, 0x86, 0xaa, 0xef, 0x74, 0x1d, 0x8a, - 0x7c, 0xb9, 0xc4, 0x9f, 0x0a, 0xf8, 0xfa, 0x88, 0x34, 0xce, 0x3c, 0x12, 0x01, 0xa3, 0x88, 0xf3, - 0x0e, 0x19, 0x92, 0x79, 0x28, 0xdd, 0x30, 0xbd, 0xe9, 0xee, 0x92, 0xe8, 0x85, 0x20, 0x3a, 0x18, - 0x87, 0xa6, 0xf1, 0xc5, 0x5d, 0x39, 0x8f, 0x72, 0x2c, 0xef, 0x81, 0x78, 0x01, 0x32, 0x03, 0xfd, - 0xd5, 0xbb, 0x0d, 0xad, 0x2a, 0x7a, 0xe2, 0x71, 0x39, 0xce, 0x42, 0x35, 0x93, 0x1d, 0xbe, 0x1a, - 0xd7, 0xe5, 0x3c, 0x48, 0x48, 0xaf, 0xfe, 0x50, 0x0e, 0x2e, 0x64, 0x0b, 0x8f, 0x3c, 0x07, 0x03, - 0x9a, 0xdd, 0xa6, 0x55, 0x6d, 0x4e, 0xf4, 0x0c, 0xcf, 0x2d, 0x6d, 0xb7, 0x69, 0x53, 0x77, 0xe4, - 0xbd, 0x88, 0x4f, 0x46, 0x3e, 0x0a, 0xc3, 0x75, 0xd7, 0xed, 0x52, 0xa7, 0xf1, 0xc2, 0x6d, 0xad, - 0x2e, 0xb6, 0xac, 0xb8, 0x25, 0x32, 0x11, 0xdc, 0x74, 0x5f, 0x88, 0x85, 0x1e, 0x93, 0xe9, 0xd5, - 0x1f, 0xcc, 0xc1, 0xc5, 0x5e, 0x42, 0x27, 0x2f, 0xc0, 0xe0, 0x22, 0xb5, 0x74, 0xcb, 0xab, 0xd7, - 0x44, 0x93, 0x70, 0x07, 0xe8, 0x21, 0x2c, 0xba, 0x91, 0x09, 0x08, 0x59, 0x21, 0x7e, 0xec, 0x19, - 0xf8, 0x59, 0xf0, 0x23, 0x5a, 0x84, 0xc5, 0x0a, 0xf9, 0x84, 0xea, 0xa7, 0xe0, 0x7c, 0x66, 0x1f, - 0x91, 0x8f, 0xc1, 0xc8, 0xbc, 0xb3, 0xac, 0x5b, 0xe6, 0x3b, 0x7c, 0x88, 0xe5, 0xc2, 0x5d, 0xb6, - 0x2d, 0xc1, 0xe5, 0x9d, 0x9f, 0x4c, 0xaf, 0x2e, 0x81, 0x92, 0xd5, 0x61, 0x64, 0x0a, 0xc6, 0x30, - 0x38, 0x75, 0xd5, 0x6a, 0xad, 0xd8, 0x4e, 0x28, 0x7b, 0x7c, 0x98, 0xe5, 0x31, 0x4c, 0x53, 0x47, - 0x54, 0xac, 0x0f, 0x62, 0xa5, 0xd4, 0xdf, 0xcb, 0xc3, 0xc8, 0x42, 0xbb, 0xbb, 0x6c, 0x4a, 0x0b, - 0xf3, 0x9e, 0xf7, 0x33, 0xfe, 0xee, 0x22, 0xbf, 0xbb, 0xdd, 0x05, 0x9b, 0xce, 0x9c, 0x3d, 0x4e, - 0x67, 0x7e, 0x39, 0xf2, 0x3a, 0x94, 0x3a, 0xf8, 0x1d, 0xf1, 0x93, 0x6e, 0xfe, 0x75, 0x59, 0x27, - 0xdd, 0xbc, 0x0c, 0x9b, 0xbf, 0x5a, 0xfb, 0x98, 0xbf, 0xc2, 0xb2, 0x92, 0x40, 0xc3, 0x45, 0xf8, - 0x58, 
0xa0, 0x07, 0x22, 0xd0, 0x70, 0xc1, 0x3d, 0x16, 0xe8, 0x3e, 0x04, 0xfa, 0x8d, 0x3c, 0x8c, - 0x45, 0xab, 0x24, 0xcf, 0xc1, 0x30, 0xaf, 0x86, 0x9f, 0xbb, 0xe5, 0x24, 0xa7, 0xed, 0x10, 0xac, - 0x01, 0xff, 0x21, 0x0e, 0x10, 0x4f, 0xac, 0xe8, 0x6e, 0x33, 0x3c, 0x01, 0xe3, 0xf7, 0xe3, 0x83, - 0xdc, 0xd3, 0x2c, 0x86, 0xd2, 0xc6, 0x56, 0x74, 0x77, 0x22, 0xfc, 0x4d, 0x26, 0x81, 0x38, 0xb4, - 0xeb, 0xd2, 0x28, 0x83, 0x22, 0x32, 0xe0, 0xab, 0x47, 0x02, 0xab, 0x9d, 0xe4, 0x30, 0x99, 0xcd, - 0xa7, 0x83, 0x66, 0xa3, 0x32, 0xf4, 0xf7, 0x3e, 0x45, 0x7e, 0x72, 0x6b, 0xb3, 0x72, 0x46, 0xa2, - 0x4f, 0x3f, 0x46, 0xe6, 0x04, 0x35, 0xdd, 0xd3, 0xf9, 0xa1, 0x87, 0xdf, 0x01, 0xea, 0xef, 0x7c, - 0x6f, 0x0e, 0xfa, 0xe7, 0x2d, 0x3a, 0x7f, 0x8f, 0x3c, 0x0f, 0x43, 0x4c, 0x63, 0x66, 0x6c, 0xd6, - 0x99, 0x39, 0xe1, 0xa0, 0x22, 0xa9, 0x12, 0x22, 0xa6, 0xfb, 0xb4, 0x90, 0x8a, 0xbc, 0x08, 0x10, - 0xbe, 0xe1, 0x13, 0xea, 0x47, 0xe4, 0x32, 0x1c, 0x33, 0xdd, 0xa7, 0x49, 0x74, 0x7e, 0x29, 0xf1, - 0x02, 0xaa, 0x90, 0x2c, 0xc5, 0x31, 0x7e, 0x29, 0x31, 0x40, 0x66, 0x80, 0xb0, 0x5f, 0x0b, 0xba, - 0xeb, 0xae, 0xdb, 0x8e, 0x31, 0xb1, 0xa2, 0x5b, 0xcb, 0x34, 0xbe, 0x3d, 0x4d, 0x52, 0x4c, 0xf7, - 0x69, 0x29, 0xe5, 0xc8, 0xab, 0x30, 0x22, 0x7b, 0xec, 0xc6, 0xbd, 0x6a, 0x64, 0xdc, 0x74, 0x9f, - 0x16, 0xa1, 0x25, 0x1f, 0x86, 0x61, 0xf1, 0xfb, 0xa6, 0x2d, 0xae, 0xec, 0xa5, 0x50, 0x51, 0x12, - 0x6a, 0xba, 0x4f, 0x93, 0x29, 0xa5, 0x4a, 0x17, 0x1c, 0xd3, 0xf2, 0xc4, 0x23, 0xf0, 0x78, 0xa5, - 0x88, 0x93, 0x2a, 0xc5, 0xdf, 0xe4, 0xa3, 0x30, 0x1a, 0xc4, 0xe0, 0x7a, 0x9b, 0xb6, 0x3c, 0x71, - 0xbb, 0x70, 0x26, 0x56, 0x98, 0x23, 0xa7, 0xfb, 0xb4, 0x28, 0x35, 0xb9, 0x0c, 0x25, 0x8d, 0xba, - 0xe6, 0x3b, 0xfe, 0x7d, 0xfc, 0x98, 0x34, 0xd0, 0xcd, 0x77, 0x98, 0x94, 0x04, 0x9e, 0xf5, 0x4e, - 0xe8, 0x00, 0x20, 0xee, 0x02, 0x48, 0xac, 0x96, 0x49, 0xcb, 0x60, 0xbd, 0x23, 0x79, 0x7f, 0x7c, - 0x3c, 0x8c, 0x4c, 0x26, 0x12, 0xf3, 0x0e, 0xc7, 0x43, 0x40, 0xc8, 0xd8, 0xe9, 0x3e, 0x2d, 0x46, - 0x2f, 0x49, 0xb5, 0x66, 0xba, 0xab, 0x22, 
0xa2, 0x6c, 0x5c, 0xaa, 0x0c, 0x25, 0x49, 0x95, 0xfd, - 0x94, 0xaa, 0x9e, 0xa3, 0xde, 0xba, 0xed, 0xac, 0x8a, 0xf8, 0xb1, 0xf1, 0xaa, 0x05, 0x56, 0xaa, - 0x5a, 0x40, 0xe4, 0xaa, 0xd9, 0x88, 0x1b, 0x4b, 0xaf, 0x5a, 0xf7, 0x74, 0xb9, 0x6a, 0x7e, 0xd4, - 0xe9, 0x77, 0xd2, 0x0c, 0xd5, 0xef, 0x53, 0xe5, 0x44, 0x6a, 0x87, 0x22, 0x4e, 0xea, 0x50, 0xfc, - 0xcd, 0x2a, 0x95, 0xb2, 0xf6, 0x2b, 0xe5, 0x68, 0xa5, 0x12, 0x8a, 0x55, 0x2a, 0xe7, 0xf7, 0x7f, - 0x51, 0x4e, 0x0d, 0xaf, 0x9c, 0x8c, 0x76, 0x50, 0x88, 0x61, 0x1d, 0x24, 0xa5, 0x90, 0xaf, 0x60, - 0xda, 0x69, 0x85, 0x20, 0xf9, 0x70, 0xd0, 0xc2, 0x89, 0x85, 0xe9, 0x3e, 0x0d, 0x13, 0x52, 0xab, - 0x3c, 0xa1, 0xb9, 0x72, 0x0a, 0x29, 0x46, 0x7c, 0x0a, 0x06, 0x9b, 0xee, 0xd3, 0x78, 0xb2, 0xf3, - 0xe7, 0xa5, 0xa4, 0x8f, 0xca, 0xe9, 0xe8, 0x14, 0x11, 0x20, 0xd8, 0x14, 0x11, 0xa6, 0x86, 0x9c, - 0x4a, 0xa6, 0x36, 0x54, 0xce, 0x44, 0xd7, 0x9a, 0x38, 0x7e, 0xba, 0x4f, 0x4b, 0xa6, 0x43, 0xfc, - 0x70, 0x24, 0xdb, 0x9f, 0x72, 0x36, 0x16, 0x9f, 0x2d, 0x44, 0x31, 0x71, 0xc9, 0x79, 0x01, 0xe7, - 0xe1, 0x14, 0x4f, 0x16, 0x2c, 0x22, 0xac, 0x89, 0xc9, 0xea, 0x5c, 0x74, 0x67, 0x98, 0x42, 0x32, - 0xdd, 0xa7, 0xa5, 0x95, 0x24, 0x13, 0x89, 0x9c, 0x7b, 0x8a, 0x12, 0x75, 0x3e, 0x8a, 0xa1, 0xa7, - 0xfb, 0xb4, 0x44, 0x96, 0xbe, 0x17, 0xe5, 0x64, 0x77, 0xca, 0xf9, 0x68, 0x27, 0x86, 0x18, 0xd6, - 0x89, 0x52, 0x52, 0xbc, 0x17, 0xe5, 0x04, 0x68, 0xca, 0x85, 0x64, 0xa9, 0x70, 0xe6, 0x94, 0x12, - 0xa5, 0x69, 0xe9, 0x39, 0x9d, 0x94, 0x47, 0x44, 0x66, 0x67, 0x51, 0x3e, 0x8d, 0x66, 0xba, 0x4f, - 0x4b, 0xcf, 0x07, 0xa5, 0xa5, 0x27, 0x43, 0x52, 0x2e, 0xf6, 0xe2, 0x19, 0xb4, 0x2e, 0x3d, 0x91, - 0x92, 0xde, 0x23, 0x35, 0x8d, 0xf2, 0x68, 0x74, 0x0f, 0x99, 0x49, 0x38, 0xdd, 0xa7, 0xf5, 0x48, - 0x70, 0x73, 0x3b, 0x23, 0x4f, 0x8c, 0xf2, 0x58, 0x34, 0xb1, 0x7b, 0x2a, 0xd1, 0x74, 0x9f, 0x96, - 0x91, 0x65, 0xe6, 0x76, 0x46, 0x1a, 0x11, 0xa5, 0xd2, 0x93, 0x6d, 0x20, 0x8f, 0x8c, 0x24, 0x24, - 0xf3, 0xa9, 0x19, 0x38, 0x94, 0xc7, 0xa3, 0xaa, 0x9b, 0x42, 0xc2, 0x54, 0x37, 
0x2d, 0x77, 0xc7, - 0x7c, 0x6a, 0xca, 0x08, 0xe5, 0x89, 0x1e, 0x0c, 0x83, 0x36, 0xa6, 0x26, 0x9b, 0x98, 0x4f, 0xcd, - 0xd9, 0xa0, 0xa8, 0x51, 0x86, 0x29, 0x24, 0x8c, 0x61, 0x5a, 0xb6, 0x87, 0xf9, 0xd4, 0xd0, 0xfe, - 0xca, 0x93, 0x3d, 0x18, 0x86, 0x2d, 0x4c, 0x4b, 0x0a, 0xf0, 0xe1, 0x48, 0x6c, 0x7d, 0xe5, 0xa9, - 0xe8, 0xbc, 0x21, 0xa1, 0xd8, 0xbc, 0x21, 0x47, 0xe1, 0x9f, 0x48, 0x04, 0xfe, 0x55, 0x3e, 0x10, - 0x1d, 0xe6, 0x31, 0x34, 0x1b, 0xe6, 0xf1, 0x50, 0xc1, 0x13, 0x89, 0x00, 0xa8, 0xca, 0xa5, 0x2c, - 0x26, 0x88, 0x8e, 0x32, 0xe1, 0x21, 0x53, 0xeb, 0x29, 0x11, 0x38, 0x95, 0x0f, 0x46, 0x1d, 0xe7, - 0x13, 0x04, 0xd3, 0x7d, 0x5a, 0x4a, 0xdc, 0x4e, 0x2d, 0x3d, 0xdc, 0x94, 0x72, 0x39, 0x3a, 0x6c, - 0xd3, 0x68, 0xd8, 0xb0, 0x4d, 0x0d, 0x55, 0x35, 0x93, 0xf6, 0xba, 0x47, 0xb9, 0x12, 0x35, 0xcc, - 0x92, 0x14, 0xcc, 0x30, 0x4b, 0x79, 0x15, 0xa4, 0xa5, 0x07, 0x31, 0x52, 0x9e, 0xee, 0xd9, 0x42, - 0xa4, 0x49, 0x69, 0x21, 0x8f, 0xe9, 0x13, 0xda, 0x4e, 0xb7, 0x3b, 0x6d, 0x5b, 0x37, 0x94, 0x0f, - 0xa5, 0xda, 0x4e, 0x1c, 0x29, 0xd9, 0x4e, 0x1c, 0xc0, 0x56, 0x79, 0xf9, 0x11, 0x89, 0xf2, 0x4c, - 0x74, 0x95, 0x97, 0x71, 0x6c, 0x95, 0x8f, 0x3c, 0x38, 0x99, 0x48, 0x3c, 0xb8, 0x50, 0x9e, 0x8d, - 0x2a, 0x40, 0x0c, 0xcd, 0x14, 0x20, 0xfe, 0x44, 0xe3, 0x33, 0xd9, 0x4f, 0x14, 0x94, 0xab, 0xd1, - 0xb3, 0xb0, 0x2c, 0xba, 0xe9, 0x3e, 0x2d, 0xfb, 0x99, 0x43, 0x3d, 0xe5, 0xc5, 0x81, 0x72, 0x2d, - 0xaa, 0x60, 0x09, 0x02, 0xa6, 0x60, 0xc9, 0x77, 0x0a, 0xf5, 0x94, 0x27, 0x03, 0xca, 0x73, 0x99, - 0xac, 0x82, 0x6f, 0x4e, 0x79, 0x68, 0xf0, 0xa2, 0xec, 0xf3, 0xaf, 0x3c, 0x1f, 0x5d, 0xec, 0x42, - 0x0c, 0x5b, 0xec, 0xa4, 0xb7, 0x01, 0x2f, 0xca, 0xde, 0xee, 0xca, 0xf5, 0x64, 0xa9, 0x70, 0x89, - 0x94, 0xbc, 0xe2, 0xb5, 0x74, 0x27, 0x71, 0xe5, 0x85, 0xa8, 0xd6, 0xa5, 0xd1, 0x30, 0xad, 0x4b, - 0x75, 0x30, 0x9f, 0x4a, 0xfa, 0x7a, 0x2b, 0x2f, 0xc6, 0x77, 0xd9, 0x51, 0x3c, 0xb3, 0x7c, 0x12, - 0xfe, 0xe1, 0x1f, 0x8f, 0xc7, 0x42, 0x54, 0x5e, 0x8a, 0xdd, 0xab, 0x47, 0xb0, 0xcc, 0xbe, 0x8d, - 0xc5, 0x4e, 0xfc, 
0x78, 0x3c, 0x7c, 0xa0, 0xf2, 0x72, 0x3a, 0x87, 0x40, 0x57, 0xe2, 0xe1, 0x06, - 0x3f, 0x1e, 0x8f, 0xb8, 0xa7, 0x7c, 0x38, 0x9d, 0x43, 0x20, 0xdd, 0x78, 0x84, 0xbe, 0xe7, 0xa5, - 0x1c, 0x00, 0xca, 0x47, 0xa2, 0xa6, 0x63, 0x80, 0x60, 0xa6, 0x63, 0x98, 0x29, 0xe0, 0x79, 0x29, - 0x76, 0xbe, 0xf2, 0x4a, 0xa2, 0x48, 0xd0, 0x58, 0x29, 0xc2, 0xfe, 0xf3, 0x52, 0xcc, 0x79, 0xe5, - 0xd5, 0x44, 0x91, 0xa0, 0x75, 0x52, 0x64, 0x7a, 0xa3, 0xd7, 0x03, 0x61, 0xe5, 0xb5, 0xe8, 0x69, - 0x7b, 0x36, 0xe5, 0x74, 0x9f, 0xd6, 0xeb, 0xa1, 0xf1, 0x67, 0xb2, 0x3d, 0xe7, 0x95, 0xd7, 0xa3, - 0x43, 0x38, 0x8b, 0x8e, 0x0d, 0xe1, 0x4c, 0xef, 0xfb, 0x8f, 0xc6, 0x82, 0x85, 0x28, 0x1f, 0x8d, - 0x4e, 0x71, 0x11, 0x24, 0x9b, 0xe2, 0xe2, 0xa1, 0x45, 0x22, 0x51, 0x30, 0x94, 0x8f, 0x45, 0xa7, - 0x38, 0x19, 0xc7, 0xa6, 0xb8, 0x48, 0xc4, 0x8c, 0x89, 0x44, 0x70, 0x06, 0xe5, 0x8d, 0xe8, 0x14, - 0x17, 0x43, 0xb3, 0x29, 0x2e, 0x1e, 0xce, 0xe1, 0xa3, 0xb1, 0x18, 0x05, 0xca, 0xc7, 0xd3, 0xdb, - 0x8f, 0x48, 0xb9, 0xfd, 0x3c, 0xa2, 0x81, 0x96, 0xfe, 0xd8, 0x5e, 0xa9, 0x46, 0xc7, 0x6f, 0x1a, - 0x0d, 0x1b, 0xbf, 0xa9, 0x0f, 0xf5, 0xe3, 0x1b, 0x07, 0xa1, 0x55, 0xe3, 0x3d, 0x36, 0x0e, 0xa1, - 0x29, 0x92, 0x02, 0x8e, 0xec, 0x91, 0xf9, 0x46, 0x68, 0x22, 0x63, 0x8f, 0xec, 0x6f, 0x83, 0x62, - 0xf4, 0x6c, 0x76, 0x4d, 0x38, 0x72, 0x2b, 0xb5, 0xe8, 0xec, 0x9a, 0x20, 0x60, 0xb3, 0x6b, 0xd2, - 0xfd, 0x7b, 0x0a, 0xca, 0x42, 0x8b, 0xb8, 0x7f, 0xba, 0x69, 0x2d, 0x2b, 0x93, 0xb1, 0x07, 0xad, - 0x31, 0x3c, 0x9b, 0x9d, 0xe2, 0x30, 0x5c, 0xaf, 0x39, 0x6c, 0xa2, 0x6d, 0x76, 0x96, 0x6c, 0xdd, - 0x31, 0x1a, 0xd4, 0x32, 0x94, 0xa9, 0xd8, 0x7a, 0x9d, 0x42, 0x83, 0xeb, 0x75, 0x0a, 0x1c, 0x63, - 0xf0, 0xc5, 0xe0, 0x1a, 0x6d, 0x51, 0xf3, 0x3e, 0x55, 0x6e, 0x20, 0xdb, 0x4a, 0x16, 0x5b, 0x41, - 0x36, 0xdd, 0xa7, 0x65, 0x71, 0x60, 0xb6, 0xfa, 0xec, 0x46, 0xe3, 0x13, 0x33, 0x41, 0x7c, 0x87, - 0x05, 0x87, 0x76, 0x74, 0x87, 0x2a, 0xd3, 0x51, 0x5b, 0x3d, 0x95, 0x88, 0xd9, 0xea, 0xa9, 0x88, - 0x24, 0x5b, 0x7f, 0x2c, 0xd4, 0x7b, 0xb1, 0x0d, 0x47, 
0x44, 0x7a, 0x69, 0x36, 0x3b, 0x45, 0x11, - 0x4c, 0x40, 0x33, 0xb6, 0xb5, 0x8c, 0x27, 0x15, 0x37, 0xa3, 0xb3, 0x53, 0x36, 0x25, 0x9b, 0x9d, - 0xb2, 0xb1, 0x4c, 0xd5, 0xa3, 0x58, 0x3e, 0x06, 0x6f, 0x45, 0x55, 0x3d, 0x85, 0x84, 0xa9, 0x7a, - 0x0a, 0x38, 0xc9, 0x50, 0xa3, 0x2e, 0xf5, 0x94, 0x99, 0x5e, 0x0c, 0x91, 0x24, 0xc9, 0x10, 0xc1, - 0x49, 0x86, 0x53, 0xd4, 0x6b, 0xad, 0x28, 0xb3, 0xbd, 0x18, 0x22, 0x49, 0x92, 0x21, 0x82, 0xd9, - 0x66, 0x33, 0x0a, 0x1e, 0xef, 0xb6, 0x57, 0xfd, 0x3e, 0x9b, 0x8b, 0x6e, 0x36, 0x33, 0x09, 0xd9, - 0x66, 0x33, 0x13, 0x49, 0x7e, 0x70, 0xc7, 0x0f, 0x0d, 0x94, 0x79, 0xac, 0xf0, 0x6a, 0x68, 0x17, - 0xec, 0xa4, 0xd4, 0x74, 0x9f, 0xb6, 0xd3, 0x87, 0x0c, 0x1f, 0x0a, 0xbc, 0x72, 0x95, 0x05, 0xac, - 0xea, 0x44, 0x70, 0x56, 0xc1, 0xc1, 0xd3, 0x7d, 0x5a, 0xe0, 0xb7, 0xfb, 0x61, 0x18, 0xc6, 0x8f, - 0xaa, 0x5b, 0xa6, 0x57, 0x1b, 0x57, 0x3e, 0x11, 0xdd, 0x32, 0x49, 0x28, 0xb6, 0x65, 0x92, 0x7e, - 0xb2, 0x49, 0x1c, 0x7f, 0xf2, 0x29, 0xa6, 0x36, 0xae, 0x68, 0xd1, 0x49, 0x3c, 0x82, 0x64, 0x93, - 0x78, 0x04, 0x10, 0xd4, 0x5b, 0x73, 0xec, 0x4e, 0x6d, 0x5c, 0x69, 0xa4, 0xd4, 0xcb, 0x51, 0x41, - 0xbd, 0xfc, 0x67, 0x50, 0x6f, 0x63, 0xa5, 0xeb, 0xd5, 0xd8, 0x37, 0x2e, 0xa6, 0xd4, 0xeb, 0x23, - 0x83, 0x7a, 0x7d, 0x00, 0x9b, 0x0a, 0x11, 0xb0, 0xe0, 0xd8, 0x6c, 0xd2, 0xbe, 0x65, 0xb6, 0xdb, - 0xca, 0xed, 0xe8, 0x54, 0x18, 0xc7, 0xb3, 0xa9, 0x30, 0x0e, 0x63, 0xa6, 0x27, 0x6f, 0x15, 0x5d, - 0xea, 0x2e, 0x2b, 0x77, 0xa2, 0xa6, 0x67, 0x88, 0x61, 0xa6, 0x67, 0xf8, 0x0b, 0x77, 0x17, 0xec, - 0x97, 0x46, 0xef, 0x39, 0xd4, 0x5d, 0x51, 0xee, 0xc6, 0x76, 0x17, 0x12, 0x0e, 0x77, 0x17, 0xd2, - 0x6f, 0xb2, 0x0c, 0x8f, 0x44, 0x16, 0x1a, 0xff, 0xd6, 0xa6, 0x41, 0x75, 0xa7, 0xb5, 0xa2, 0xbc, - 0x89, 0xac, 0x9e, 0x4c, 0x5d, 0xaa, 0xa2, 0xa4, 0xd3, 0x7d, 0x5a, 0x2f, 0x4e, 0xb8, 0x2d, 0xff, - 0xc4, 0x0c, 0x0f, 0xd4, 0xab, 0x2d, 0x4c, 0xf8, 0x9b, 0xd0, 0xb7, 0x62, 0xdb, 0xf2, 0x24, 0x09, - 0x6e, 0xcb, 0x93, 0x60, 0xd2, 0x81, 0xc7, 0x62, 0x5b, 0xb5, 0x59, 0xbd, 0xcd, 0xf6, 0x25, 
0xd4, - 0x58, 0xd0, 0x5b, 0xab, 0xd4, 0x53, 0x3e, 0x89, 0xbc, 0x2f, 0x65, 0x6c, 0xf8, 0x62, 0xd4, 0xd3, - 0x7d, 0xda, 0x36, 0xfc, 0x88, 0x0a, 0xc5, 0xc6, 0xd4, 0xe2, 0x82, 0xf2, 0xa9, 0xe8, 0xf9, 0x26, - 0x83, 0x4d, 0xf7, 0x69, 0x88, 0x63, 0x56, 0xda, 0xed, 0xce, 0xb2, 0xa3, 0x1b, 0x94, 0x1b, 0x5a, - 0x68, 0xbb, 0x09, 0x03, 0xf4, 0xbb, 0xa2, 0x56, 0x5a, 0x16, 0x1d, 0xb3, 0xd2, 0xb2, 0x70, 0x4c, - 0x51, 0x23, 0x39, 0x69, 0x94, 0x4f, 0x47, 0x15, 0x35, 0x82, 0x64, 0x8a, 0x1a, 0xcd, 0x60, 0xf3, - 0x26, 0x9c, 0x0d, 0xf6, 0xf3, 0x62, 0xfd, 0xe5, 0x9d, 0xa6, 0x7c, 0x06, 0xf9, 0x3c, 0x96, 0xb8, - 0x0c, 0x88, 0x50, 0x4d, 0xf7, 0x69, 0x19, 0xe5, 0xd9, 0x8a, 0x9b, 0xc8, 0xd9, 0x26, 0xcc, 0x8b, - 0xef, 0x8e, 0xae, 0xb8, 0x19, 0x64, 0x6c, 0xc5, 0xcd, 0x40, 0xa5, 0x32, 0x17, 0x42, 0xd5, 0xb7, - 0x61, 0x1e, 0xc8, 0x34, 0x8b, 0x43, 0x2a, 0x73, 0x61, 0xa9, 0x2d, 0x6d, 0xc3, 0x3c, 0xb0, 0xd6, - 0xb2, 0x38, 0x90, 0xcb, 0x50, 0x6a, 0x34, 0x66, 0xb5, 0xae, 0xa5, 0xb4, 0x62, 0xee, 0xc8, 0x08, - 0x9d, 0xee, 0xd3, 0x04, 0x9e, 0x99, 0x41, 0x93, 0x6d, 0xdd, 0xf5, 0xcc, 0x96, 0x8b, 0x23, 0xc6, - 0x1f, 0x21, 0x46, 0xd4, 0x0c, 0x4a, 0xa3, 0x61, 0x66, 0x50, 0x1a, 0x9c, 0xd9, 0x8b, 0x13, 0xba, - 0xeb, 0xea, 0x96, 0xe1, 0xe8, 0xe3, 0xb8, 0x4c, 0xd0, 0xd8, 0x73, 0xb7, 0x08, 0x96, 0xd9, 0x8b, - 0x51, 0x08, 0x1e, 0xbe, 0xfb, 0x10, 0xdf, 0xcc, 0xb9, 0x17, 0x3b, 0x7c, 0x8f, 0xe1, 0xf1, 0xf0, - 0x3d, 0x06, 0x43, 0xbb, 0xd3, 0x87, 0x69, 0x74, 0xd9, 0x64, 0x22, 0x52, 0x96, 0x63, 0x76, 0x67, - 0x9c, 0x00, 0xed, 0xce, 0x38, 0x30, 0xd2, 0x24, 0x7f, 0xb9, 0x5d, 0xc9, 0x68, 0x52, 0xb8, 0xca, - 0x26, 0xca, 0xb0, 0xf5, 0x3b, 0x1c, 0x1c, 0xb5, 0x0d, 0x4b, 0x5f, 0xb3, 0x6b, 0xe3, 0xbe, 0xd4, - 0xcd, 0xe8, 0xfa, 0x9d, 0x49, 0xc8, 0xd6, 0xef, 0x4c, 0x24, 0x9b, 0x5d, 0xfd, 0x8d, 0xd6, 0x8a, - 0xee, 0x50, 0xa3, 0x66, 0x3a, 0x78, 0xb2, 0xb8, 0xc1, 0xb7, 0x86, 0x6f, 0x47, 0x67, 0xd7, 0x1e, - 0xa4, 0x6c, 0x76, 0xed, 0x81, 0x66, 0x46, 0x5e, 0x3a, 0x5a, 0xa3, 0xba, 0xa1, 0xac, 0x46, 0x8d, - 0xbc, 0x6c, 0x4a, 0x66, 0xe4, 
0x65, 0x63, 0xb3, 0x3f, 0xe7, 0xae, 0x63, 0x7a, 0x54, 0x69, 0xef, - 0xe4, 0x73, 0x90, 0x34, 0xfb, 0x73, 0x10, 0xcd, 0x36, 0x84, 0xf1, 0x0e, 0x59, 0x8b, 0x6e, 0x08, - 0x93, 0xdd, 0x10, 0x2f, 0xc1, 0x2c, 0x16, 0xf1, 0xea, 0x51, 0xb1, 0xa2, 0x16, 0x8b, 0x00, 0x33, - 0x8b, 0x25, 0x7c, 0x17, 0x19, 0x79, 0xeb, 0xa6, 0xd8, 0xd1, 0x35, 0x54, 0xc6, 0xb1, 0x35, 0x34, - 0xf2, 0x2e, 0xee, 0xc3, 0x91, 0x87, 0x1c, 0x4a, 0x27, 0x6a, 0x75, 0x48, 0x28, 0x66, 0x75, 0xc8, - 0x4f, 0x3e, 0x26, 0xe0, 0x04, 0xde, 0x82, 0x6b, 0xdd, 0xe0, 0x1e, 0xe7, 0xb3, 0xd1, 0xcf, 0x8c, - 0xa1, 0xd9, 0x67, 0xc6, 0x40, 0x11, 0x26, 0x62, 0xda, 0x72, 0x32, 0x98, 0x84, 0xe7, 0x83, 0x31, - 0x10, 0x99, 0x01, 0xd2, 0xa8, 0xce, 0xce, 0xd4, 0x8d, 0x05, 0xf9, 0x8a, 0xcc, 0x8d, 0x9e, 0xc0, - 0x26, 0x29, 0xa6, 0xfb, 0xb4, 0x94, 0x72, 0xe4, 0x6d, 0xb8, 0x28, 0xa0, 0xe2, 0x49, 0xfb, 0x82, - 0x63, 0xdf, 0x37, 0x8d, 0x60, 0x41, 0xf0, 0xa2, 0x8e, 0x82, 0xbd, 0x68, 0xa7, 0xfb, 0xb4, 0x9e, - 0xbc, 0xb2, 0xeb, 0x12, 0xeb, 0x43, 0x77, 0x27, 0x75, 0x05, 0x8b, 0x44, 0x4f, 0x5e, 0xd9, 0x75, - 0x09, 0xb9, 0xdf, 0xdf, 0x49, 0x5d, 0x41, 0x27, 0xf4, 0xe4, 0x45, 0x5c, 0xa8, 0xf4, 0xc2, 0x57, - 0xdb, 0x6d, 0x65, 0x1d, 0xab, 0xfb, 0xe0, 0x4e, 0xaa, 0xab, 0xa2, 0xc1, 0xb9, 0x1d, 0x47, 0x36, - 0x4b, 0xcf, 0x77, 0xa8, 0xd5, 0x88, 0x2c, 0x40, 0x0f, 0xa2, 0xb3, 0x74, 0x82, 0x80, 0xcd, 0xd2, - 0x09, 0x20, 0x1b, 0x50, 0xf2, 0x7b, 0x20, 0x65, 0x23, 0x3a, 0xa0, 0x64, 0x1c, 0x1b, 0x50, 0x91, - 0xb7, 0x43, 0xf3, 0x70, 0x6a, 0x7e, 0xd5, 0xd3, 0x7d, 0x0b, 0xd2, 0x15, 0x5d, 0xf9, 0x4e, 0xec, - 0x92, 0x29, 0x49, 0x82, 0x97, 0x4c, 0x49, 0x30, 0x1b, 0x23, 0x0c, 0xdc, 0xd8, 0xb0, 0x5a, 0x53, - 0xba, 0xd9, 0xee, 0x3a, 0x54, 0xf9, 0x4b, 0xd1, 0x31, 0x12, 0x43, 0xb3, 0x31, 0x12, 0x03, 0xb1, - 0x05, 0x9a, 0x81, 0xaa, 0xae, 0x6b, 0x2e, 0x5b, 0x62, 0x5f, 0xd9, 0x6d, 0x7b, 0xca, 0xff, 0x2f, - 0xba, 0x40, 0xa7, 0xd1, 0xb0, 0x05, 0x3a, 0x0d, 0x8e, 0xa7, 0x4e, 0xac, 0x17, 0xd8, 0xe2, 0x21, - 0xdf, 0x55, 0xfe, 0xff, 0x63, 0xa7, 0x4e, 0x29, 0x34, 0x78, 0xea, 
0x94, 0x02, 0x67, 0xeb, 0x23, - 0xb7, 0xc9, 0x66, 0xcc, 0xe0, 0xae, 0xfa, 0x3f, 0x88, 0xae, 0x8f, 0x71, 0x3c, 0x5b, 0x1f, 0xe3, - 0xb0, 0x28, 0x1f, 0xd1, 0x05, 0xff, 0x61, 0x16, 0x9f, 0x40, 0xfe, 0x89, 0x32, 0xe4, 0x86, 0xcc, - 0x47, 0x8c, 0x94, 0xef, 0xc9, 0x65, 0x31, 0x0a, 0x86, 0x47, 0xa2, 0x50, 0x94, 0x91, 0x46, 0xef, - 0x9b, 0x74, 0x5d, 0xf9, 0x5c, 0x26, 0x23, 0x4e, 0x10, 0x65, 0xc4, 0x61, 0xe4, 0x2d, 0x38, 0x1b, - 0xc2, 0x66, 0xe9, 0xda, 0x52, 0x30, 0x33, 0x7d, 0x6f, 0x2e, 0x6a, 0x06, 0xa7, 0x93, 0x31, 0x33, - 0x38, 0x1d, 0x93, 0xc6, 0x5a, 0x88, 0xee, 0x2f, 0x6f, 0xc3, 0x3a, 0x90, 0x60, 0x06, 0x83, 0x34, - 0xd6, 0x42, 0x9a, 0xdf, 0xb7, 0x0d, 0xeb, 0x40, 0xa6, 0x19, 0x0c, 0xc8, 0xe7, 0x73, 0x70, 0x29, - 0x1d, 0x55, 0x6d, 0xb7, 0xa7, 0x6c, 0x27, 0xc4, 0x29, 0x7f, 0x25, 0x17, 0x3d, 0x68, 0xd8, 0x59, - 0xb1, 0xe9, 0x3e, 0x6d, 0x87, 0x15, 0x90, 0x8f, 0xc1, 0x68, 0xb5, 0x6b, 0x98, 0x1e, 0x5e, 0xbc, - 0x31, 0xc3, 0xf9, 0xfb, 0x73, 0xb1, 0x2d, 0x8e, 0x8c, 0xc5, 0x2d, 0x8e, 0x0c, 0x20, 0x37, 0xe1, - 0x64, 0x83, 0xb6, 0xba, 0x8e, 0xe9, 0x6d, 0x68, 0xb4, 0x63, 0x3b, 0x1e, 0xe3, 0xf1, 0x03, 0xb9, - 0xe8, 0x24, 0x96, 0xa0, 0x60, 0x93, 0x58, 0x02, 0x48, 0xee, 0x24, 0x6e, 0xe5, 0x45, 0x67, 0xfe, - 0x60, 0xae, 0xe7, 0xb5, 0x7c, 0xd0, 0x97, 0xe9, 0xc5, 0xc9, 0x42, 0xec, 0x16, 0x5d, 0x70, 0xfd, - 0x7c, 0xae, 0xc7, 0x35, 0xba, 0x34, 0xc3, 0x25, 0xc1, 0x8c, 0x63, 0x4a, 0x96, 0x7f, 0xe5, 0xaf, - 0xe6, 0x7a, 0x5c, 0x7b, 0x87, 0x1c, 0x53, 0xc0, 0xe4, 0x25, 0xee, 0x29, 0x22, 0x18, 0xfd, 0xb5, - 0x5c, 0xd2, 0x55, 0x24, 0x28, 0x2f, 0x11, 0xb2, 0x62, 0xb7, 0xdd, 0x40, 0xe9, 0xbf, 0x90, 0x4b, - 0xfa, 0xe6, 0x85, 0xc5, 0xc2, 0x5f, 0x84, 0xc2, 0x85, 0xc9, 0x07, 0x1e, 0x75, 0x2c, 0xbd, 0x8d, - 0xdd, 0xd9, 0xf0, 0x6c, 0x47, 0x5f, 0xa6, 0x93, 0x96, 0xbe, 0xd4, 0xa6, 0xca, 0x0f, 0xe5, 0xa2, - 0x16, 0x6c, 0x36, 0x29, 0xb3, 0x60, 0xb3, 0xb1, 0x64, 0x05, 0x1e, 0x49, 0xc3, 0xd6, 0x4c, 0x17, - 0xeb, 0xf9, 0x62, 0x2e, 0x6a, 0xc2, 0xf6, 0xa0, 0x65, 0x26, 0x6c, 0x0f, 0x34, 0xb9, 0x0e, 0x43, - 0xe3, 
0xb6, 0x3f, 0xfd, 0xfe, 0x70, 0xcc, 0x19, 0x32, 0xc0, 0x4c, 0xf7, 0x69, 0x21, 0x99, 0x28, - 0x23, 0x06, 0xf5, 0x97, 0x92, 0x65, 0xc2, 0xcb, 0xa7, 0xe0, 0x87, 0x28, 0x23, 0xc4, 0xfd, 0x1f, - 0x25, 0xcb, 0x84, 0x77, 0x5c, 0xc1, 0x0f, 0x36, 0x93, 0xf0, 0x1a, 0x67, 0xa7, 0xaa, 0xcc, 0x6e, - 0x9b, 0x58, 0xd1, 0xdb, 0x6d, 0x6a, 0x2d, 0x53, 0xe5, 0xcb, 0xb1, 0x99, 0x24, 0x9d, 0x8c, 0xcd, - 0x24, 0xe9, 0x18, 0xf2, 0x5d, 0x70, 0xee, 0x8e, 0xde, 0x36, 0x8d, 0x10, 0xe7, 0xe7, 0x7c, 0x57, - 0x7e, 0x24, 0x17, 0xdd, 0x4d, 0x67, 0xd0, 0xb1, 0xdd, 0x74, 0x06, 0x8a, 0xcc, 0x02, 0xc1, 0x65, - 0x34, 0x98, 0x2d, 0xd8, 0xfa, 0xac, 0xfc, 0xc7, 0xb9, 0xa8, 0x9d, 0x9a, 0x24, 0x61, 0x76, 0x6a, - 0x12, 0x4a, 0x9a, 0xd9, 0xb9, 0x57, 0x94, 0x1f, 0xcd, 0x45, 0x4f, 0x6b, 0xb2, 0x08, 0xa7, 0xfb, - 0xb4, 0xec, 0x04, 0x2e, 0x37, 0xa0, 0xdc, 0x58, 0xa8, 0x4f, 0x4d, 0x4d, 0x36, 0xee, 0xd4, 0x6b, - 0xf8, 0x54, 0xc3, 0x50, 0x7e, 0x2c, 0xb6, 0x62, 0xc5, 0x09, 0xd8, 0x8a, 0x15, 0x87, 0x91, 0xd7, - 0x60, 0x84, 0xb5, 0x9f, 0x0d, 0x18, 0xfc, 0xe4, 0xaf, 0xe4, 0xa2, 0xe6, 0x94, 0x8c, 0x64, 0xe6, - 0x94, 0xfc, 0x9b, 0x34, 0xe0, 0x34, 0x93, 0xe2, 0x82, 0x43, 0xef, 0x51, 0x87, 0x5a, 0x2d, 0x7f, - 0x4c, 0xff, 0x78, 0x2e, 0x6a, 0x65, 0xa4, 0x11, 0x31, 0x2b, 0x23, 0x0d, 0x4e, 0x56, 0xe1, 0x62, - 0xfc, 0x24, 0x48, 0x7e, 0xd7, 0xab, 0xfc, 0xf5, 0x5c, 0xcc, 0x18, 0xee, 0x41, 0x8c, 0xc6, 0x70, - 0x0f, 0x3c, 0xb1, 0xe0, 0x51, 0x71, 0xac, 0x22, 0x1c, 0x2e, 0xe3, 0xb5, 0xfd, 0x04, 0xaf, 0xed, - 0x03, 0xa1, 0x43, 0x60, 0x0f, 0xea, 0xe9, 0x3e, 0xad, 0x37, 0x3b, 0xa6, 0x67, 0xc9, 0x0c, 0x23, - 0xca, 0x4f, 0xe6, 0xd2, 0x3d, 0x52, 0x22, 0x6e, 0xca, 0x69, 0xa9, 0x49, 0xde, 0xca, 0xca, 0x8f, - 0xa1, 0xfc, 0x8d, 0xd8, 0x78, 0x4b, 0x27, 0x63, 0xe3, 0x2d, 0x23, 0xc1, 0xc6, 0x4d, 0x38, 0xc9, - 0x95, 0x7a, 0x41, 0xc7, 0x61, 0x68, 0x2d, 0x53, 0x43, 0xf9, 0x9b, 0xb1, 0xd5, 0x2e, 0x41, 0x81, - 0xae, 0x3d, 0x71, 0x20, 0x9b, 0xba, 0x1b, 0x1d, 0xdd, 0xb2, 0xf0, 0x98, 0x55, 0xf9, 0x4f, 0x62, - 0x53, 0x77, 0x88, 0x42, 0xc7, 0xdd, 0xe0, 
0x17, 0xd3, 0x84, 0x5e, 0xb9, 0xa5, 0x94, 0xff, 0x34, - 0xa6, 0x09, 0xbd, 0x88, 0x99, 0x26, 0xf4, 0x4c, 0x54, 0x75, 0x27, 0xe3, 0x8d, 0xbd, 0xf2, 0xd5, - 0xd8, 0x8a, 0x9c, 0x4a, 0xc5, 0x56, 0xe4, 0xf4, 0x27, 0xfa, 0x77, 0x32, 0xde, 0xa7, 0x2b, 0x3f, - 0xd5, 0x9b, 0x6f, 0xb8, 0xd2, 0xa7, 0x3f, 0x6f, 0xbf, 0x93, 0xf1, 0xb6, 0x5b, 0xf9, 0x5b, 0xbd, - 0xf9, 0x86, 0x8e, 0x7d, 0xe9, 0x4f, 0xc3, 0x9b, 0xd9, 0xef, 0xa2, 0x95, 0xbf, 0x1d, 0x9f, 0xba, - 0x32, 0x08, 0x71, 0xea, 0xca, 0x7a, 0x5c, 0xbd, 0x04, 0xe7, 0xb9, 0x86, 0xdc, 0x70, 0xf4, 0xce, - 0x4a, 0x83, 0x7a, 0x9e, 0x69, 0x2d, 0xfb, 0x3b, 0xb1, 0xff, 0x2c, 0x17, 0x3b, 0x1e, 0xcb, 0xa2, - 0xc4, 0xe3, 0xb1, 0x2c, 0x24, 0x53, 0xde, 0xc4, 0x0b, 0x68, 0xe5, 0xef, 0xc4, 0x94, 0x37, 0x41, - 0xc1, 0x94, 0x37, 0xf9, 0x70, 0xfa, 0x66, 0xca, 0x43, 0x5f, 0xe5, 0x3f, 0xcf, 0xe6, 0x15, 0xb4, - 0x2f, 0xe5, 0x7d, 0xf0, 0xcd, 0x94, 0xf7, 0xac, 0xca, 0x7f, 0x91, 0xcd, 0x2b, 0xf4, 0x41, 0x4a, - 0x3e, 0x83, 0x7d, 0x0b, 0xce, 0xf2, 0xd9, 0x7c, 0x8a, 0x1a, 0x34, 0xf2, 0xa1, 0x3f, 0x1d, 0x1b, - 0xfb, 0xe9, 0x64, 0x78, 0xe4, 0x9e, 0x8a, 0x49, 0x63, 0x2d, 0xda, 0xfa, 0x77, 0xb7, 0x61, 0x1d, - 0x6e, 0x08, 0xd2, 0x31, 0x6c, 0xbd, 0x91, 0x5f, 0xbf, 0x29, 0x3f, 0x13, 0x5b, 0x6f, 0x64, 0x24, - 0xba, 0x73, 0xc8, 0x4f, 0xe5, 0x5e, 0x8b, 0xbe, 0xf4, 0x52, 0x7e, 0x36, 0xb5, 0x70, 0xd0, 0x01, - 0xd1, 0x67, 0x61, 0xaf, 0x45, 0x5f, 0x35, 0x29, 0x5f, 0x4b, 0x2d, 0x1c, 0x7c, 0x40, 0xf4, 0x09, - 0x14, 0xdb, 0x22, 0x75, 0x3d, 0x9b, 0xb3, 0x8a, 0x4c, 0x0f, 0x7f, 0x2f, 0xbe, 0x45, 0x4a, 0x25, - 0xc3, 0x2d, 0x52, 0x2a, 0x26, 0x8d, 0xb5, 0xf8, 0xbc, 0x9f, 0xdb, 0x86, 0xb5, 0xb4, 0xb1, 0x4b, - 0xc5, 0xa4, 0xb1, 0x16, 0x1f, 0xff, 0xf5, 0x6d, 0x58, 0x4b, 0x1b, 0xbb, 0x54, 0x0c, 0x33, 0xc7, - 0x42, 0xcc, 0x1d, 0xea, 0xb8, 0xa1, 0xfa, 0xfd, 0x97, 0x31, 0x73, 0x2c, 0x83, 0x8e, 0x99, 0x63, - 0x19, 0xa8, 0x54, 0xee, 0x42, 0x28, 0x3f, 0xbf, 0x1d, 0xf7, 0xf0, 0x5e, 0x26, 0x03, 0x95, 0xca, - 0x5d, 0xc8, 0xe5, 0xef, 0x6f, 0xc7, 0x3d, 0xbc, 0x98, 0xc9, 0x40, 0x31, 0xa3, 
0xa8, 0xe1, 0xe9, - 0x9e, 0xd9, 0x9a, 0xb6, 0x5d, 0x4f, 0x5a, 0xe4, 0xff, 0xab, 0x98, 0x51, 0x94, 0x46, 0xc4, 0x8c, - 0xa2, 0x34, 0x78, 0x92, 0xa9, 0x90, 0xc6, 0x37, 0x7a, 0x32, 0x0d, 0x2d, 0xad, 0x34, 0x78, 0x92, - 0xa9, 0x10, 0xc2, 0x7f, 0xdd, 0x93, 0x69, 0xe8, 0x29, 0x9f, 0x06, 0x67, 0x96, 0xe9, 0x84, 0x63, - 0xaf, 0x5b, 0x37, 0xe9, 0x3a, 0x6d, 0x8b, 0x4f, 0xff, 0x85, 0x98, 0x65, 0x1a, 0x27, 0xc0, 0x5b, - 0x94, 0x18, 0x2c, 0xca, 0x48, 0x7c, 0xee, 0x2f, 0x66, 0x32, 0x0a, 0x8f, 0x89, 0xe2, 0xb0, 0x28, - 0x23, 0xf1, 0x89, 0xbf, 0x94, 0xc9, 0x28, 0x3c, 0x26, 0x8a, 0xc3, 0x48, 0x15, 0xc6, 0xf0, 0xad, - 0x84, 0xee, 0xfa, 0x9e, 0x9f, 0xbf, 0x96, 0x8b, 0xde, 0x7a, 0x45, 0xd1, 0xd3, 0x7d, 0x5a, 0xac, - 0x80, 0xcc, 0x42, 0x7c, 0xd2, 0x37, 0x33, 0x58, 0x84, 0xfe, 0x8e, 0x51, 0x88, 0xcc, 0x42, 0x7c, - 0xcc, 0x3f, 0xc8, 0x60, 0x11, 0x3a, 0x3c, 0x46, 0x21, 0xe4, 0x23, 0x30, 0xdc, 0x98, 0x5a, 0x5c, - 0xf0, 0xf3, 0x1f, 0xfe, 0xc3, 0x5c, 0xec, 0x55, 0x51, 0x88, 0xc3, 0x57, 0x45, 0xe1, 0x4f, 0xf2, - 0x31, 0x18, 0x9d, 0xb0, 0x2d, 0x4f, 0x6f, 0xf9, 0x1b, 0xd0, 0x5f, 0x8f, 0x9d, 0xa1, 0x44, 0xb0, - 0xd3, 0x7d, 0x5a, 0x94, 0x5c, 0x2a, 0x2f, 0xda, 0xfe, 0x1b, 0xe9, 0xe5, 0x83, 0xa6, 0x47, 0xc9, - 0xd9, 0x8c, 0x76, 0xd7, 0x76, 0x56, 0xdb, 0xb6, 0x6e, 0xf8, 0x21, 0x37, 0x45, 0x43, 0xfe, 0x51, - 0x6c, 0x46, 0x4b, 0x27, 0x63, 0x33, 0x5a, 0x3a, 0x26, 0x8d, 0xb5, 0xe8, 0xa2, 0xdf, 0xdc, 0x86, - 0x75, 0x38, 0x0f, 0xa7, 0x63, 0xd2, 0x58, 0x8b, 0xcf, 0xff, 0xc7, 0xdb, 0xb0, 0x0e, 0xe7, 0xe1, - 0x74, 0x0c, 0x33, 0xad, 0x6f, 0x98, 0x9e, 0xff, 0xb0, 0xed, 0x9f, 0xc4, 0x4c, 0xeb, 0x10, 0xc5, - 0x4c, 0xeb, 0xf0, 0x17, 0xa1, 0x70, 0x21, 0x78, 0x2a, 0x19, 0xee, 0x5d, 0xeb, 0xd6, 0x7d, 0xb6, - 0x3f, 0x56, 0xfe, 0x69, 0xec, 0x54, 0x24, 0x9b, 0x74, 0xba, 0x4f, 0xeb, 0xc1, 0x88, 0x2c, 0xc4, - 0xfc, 0x14, 0x79, 0x50, 0x3f, 0xe5, 0x9f, 0xe5, 0x7a, 0x38, 0x2a, 0x72, 0x9a, 0x84, 0xa3, 0x22, - 0x07, 0x8b, 0x39, 0x6b, 0xa9, 0x4d, 0x6f, 0xcf, 0xd5, 0xdf, 0x94, 0x66, 0xd7, 0x6f, 0x25, 0xe7, - 0xac, 0x04, 0x91, 
0x98, 0xb3, 0x12, 0x70, 0xf2, 0x97, 0x73, 0xf0, 0x54, 0x5c, 0xbe, 0x6f, 0xbe, - 0xf4, 0xdc, 0x2b, 0x1a, 0xbd, 0x6f, 0xb7, 0x64, 0xcb, 0xea, 0xb7, 0x78, 0x2d, 0xcf, 0x64, 0x75, - 0x57, 0x5a, 0xa1, 0xe9, 0x3e, 0x6d, 0x47, 0xcc, 0x77, 0xd0, 0x0a, 0xa1, 0x34, 0xbf, 0xbd, 0xab, - 0x56, 0x04, 0x2a, 0xb4, 0x23, 0xe6, 0x3b, 0x68, 0x85, 0x18, 0x15, 0xdf, 0xde, 0x55, 0x2b, 0x82, - 0x31, 0xb2, 0x23, 0xe6, 0xb8, 0xfb, 0xbc, 0xdb, 0xa8, 0x4f, 0x04, 0xbe, 0x3e, 0x1b, 0x56, 0x4b, - 0xf9, 0x9d, 0xf8, 0xee, 0x33, 0x4e, 0x81, 0xbb, 0xcf, 0x38, 0x90, 0x2d, 0xf7, 0xd3, 0x54, 0x6f, - 0xb3, 0xed, 0x28, 0x6d, 0xad, 0x46, 0x8c, 0xb7, 0xdf, 0x8d, 0x2d, 0xf7, 0x19, 0x74, 0x6c, 0xb9, - 0xcf, 0x40, 0xa5, 0x72, 0x17, 0x12, 0xfa, 0xbd, 0xed, 0xb8, 0x87, 0xa6, 0x4a, 0x06, 0x2a, 0x95, - 0xbb, 0xd0, 0x82, 0xff, 0x66, 0x3b, 0xee, 0xa1, 0xa9, 0x92, 0x81, 0x22, 0x3f, 0x9c, 0x83, 0xcb, - 0x69, 0xdd, 0xc1, 0x63, 0x7f, 0xcc, 0xdf, 0xa7, 0x8e, 0x63, 0x1a, 0xfe, 0x05, 0xf2, 0xef, 0xf3, - 0xfa, 0x9e, 0xeb, 0xd5, 0xdf, 0x69, 0x05, 0xa7, 0xfb, 0xb4, 0x1d, 0x57, 0xb2, 0xc3, 0x16, 0x09, - 0x09, 0xfc, 0xc1, 0xae, 0x5b, 0x14, 0x88, 0x64, 0xc7, 0x95, 0xe0, 0x84, 0x63, 0x2e, 0xbb, 0x9e, - 0xed, 0xd0, 0x05, 0xbb, 0x6d, 0xb6, 0xfc, 0xf5, 0xe6, 0x0f, 0xe3, 0x13, 0x4e, 0x0a, 0x11, 0x4e, - 0x38, 0x29, 0xf0, 0x24, 0x53, 0xa1, 0x31, 0xff, 0xbc, 0x27, 0x53, 0xc9, 0x9c, 0x4b, 0x81, 0x27, - 0x99, 0x0a, 0x31, 0xfd, 0x8b, 0x9e, 0x4c, 0x25, 0x73, 0x2e, 0x05, 0x4e, 0x2c, 0x78, 0x34, 0x34, - 0x74, 0xab, 0xcb, 0xd4, 0xf2, 0x34, 0xbb, 0xdd, 0xb6, 0xbb, 0xde, 0xa2, 0x63, 0x2e, 0x2f, 0x53, - 0x47, 0xf9, 0xa3, 0xd8, 0x01, 0x59, 0x4f, 0xea, 0xe9, 0x3e, 0xad, 0x37, 0x3b, 0xe2, 0x41, 0x25, - 0x9d, 0x60, 0xca, 0x76, 0x5a, 0xb4, 0x66, 0x5b, 0x54, 0xf9, 0xe3, 0x5c, 0xf4, 0x7a, 0x7a, 0x1b, - 0xfa, 0xe9, 0x3e, 0x6d, 0x3b, 0x96, 0xe4, 0xb3, 0xf0, 0x58, 0x3a, 0x09, 0xfb, 0x6f, 0x49, 0x6f, - 0xad, 0x2a, 0xff, 0x6d, 0x2e, 0xea, 0xf3, 0xd7, 0x9b, 0x7c, 0xba, 0x4f, 0xdb, 0x86, 0xe1, 0xf8, - 0x00, 0xf4, 0xe3, 0xa5, 0xf4, 0xcd, 0xd2, 0xe0, 0x2f, 
0xe7, 0xca, 0xbf, 0x92, 0xbb, 0x59, 0x1a, - 0xfc, 0x95, 0x5c, 0xf9, 0x57, 0xd9, 0xff, 0xbf, 0x9a, 0x2b, 0xff, 0x5a, 0x4e, 0x3b, 0x1f, 0x63, - 0xb0, 0xd0, 0xd6, 0xc5, 0x4a, 0x91, 0x8a, 0xe2, 0x3f, 0x53, 0x51, 0x22, 0x6b, 0xeb, 0x57, 0x73, - 0x30, 0xd2, 0xf0, 0x1c, 0xaa, 0xaf, 0x89, 0x20, 0xc8, 0x17, 0x60, 0x90, 0x3f, 0x23, 0xf3, 0xa3, - 0xf6, 0x68, 0xc1, 0x6f, 0x72, 0x09, 0xc6, 0x66, 0x74, 0xd7, 0xc3, 0x26, 0xd6, 0x2d, 0x83, 0x3e, - 0xc0, 0x10, 0x0a, 0x05, 0x2d, 0x06, 0x25, 0x33, 0x9c, 0x8e, 0x97, 0xc3, 0xb8, 0xf7, 0x85, 0x6d, - 0x63, 0xff, 0x0e, 0x7e, 0x6b, 0xb3, 0xd2, 0x87, 0xa1, 0x7e, 0x63, 0x65, 0xd5, 0x6f, 0xe7, 0x20, - 0xf1, 0xc0, 0x6d, 0xef, 0xc1, 0xbe, 0xe6, 0xe1, 0x44, 0x2c, 0xd7, 0x82, 0x88, 0x03, 0xb1, 0xc3, - 0x54, 0x0c, 0xf1, 0xd2, 0xe4, 0x83, 0x41, 0xfc, 0x81, 0xdb, 0xda, 0x8c, 0x88, 0xeb, 0x8c, 0x19, - 0xc9, 0xba, 0x4e, 0x5b, 0x93, 0x50, 0x22, 0x6e, 0xe7, 0x77, 0xca, 0x61, 0x20, 0x79, 0x72, 0x49, - 0x44, 0x1e, 0xcb, 0x85, 0xd1, 0xa0, 0xbb, 0x2e, 0x75, 0xe4, 0x68, 0xd0, 0x18, 0x69, 0xec, 0x63, - 0x30, 0x52, 0x5f, 0xeb, 0x50, 0xc7, 0xb5, 0x2d, 0xdd, 0xb3, 0x1d, 0x11, 0x19, 0x09, 0x63, 0x18, - 0x99, 0x12, 0x5c, 0x8e, 0x61, 0x24, 0xd3, 0x93, 0x2b, 0x7e, 0x52, 0xe5, 0x02, 0x86, 0xf0, 0xc7, - 0xe8, 0x20, 0x98, 0x54, 0x59, 0x0e, 0x33, 0x85, 0x14, 0x8c, 0xf4, 0xb6, 0xab, 0x63, 0xa4, 0x8a, - 0x80, 0xb4, 0xcb, 0x00, 0x32, 0x29, 0x52, 0x90, 0x67, 0xa0, 0x84, 0x26, 0x9e, 0x8b, 0xc9, 0xd2, - 0x45, 0x8c, 0xea, 0x36, 0x42, 0xe4, 0x68, 0x58, 0x9c, 0x86, 0xdc, 0x82, 0x72, 0xe8, 0xb6, 0x78, - 0xc3, 0xb1, 0xbb, 0x1d, 0x3f, 0x3d, 0x62, 0x65, 0x6b, 0xb3, 0xf2, 0xc8, 0x6a, 0x80, 0x6b, 0x2e, - 0x23, 0x52, 0x62, 0x91, 0x28, 0x48, 0xa6, 0xe1, 0x44, 0x08, 0x63, 0x22, 0xf2, 0xd3, 0xb2, 0x62, - 0xe4, 0x25, 0x89, 0x17, 0x13, 0x67, 0x24, 0x25, 0x7e, 0xac, 0x18, 0xa9, 0xc3, 0x80, 0x1f, 0xa0, - 0x7a, 0x70, 0x5b, 0x25, 0x3d, 0x25, 0x02, 0x54, 0x0f, 0xc8, 0xa1, 0xa9, 0xfd, 0xf2, 0x64, 0x0a, - 0xc6, 0x34, 0xbb, 0xeb, 0xd1, 0x45, 0x5b, 0x9c, 0xf7, 0x8b, 0x40, 0xe8, 0xd8, 0x26, 0x87, 
0x61, - 0x9a, 0x9e, 0xdd, 0x6c, 0x71, 0x9c, 0x1c, 0x0d, 0x2a, 0x5a, 0x8a, 0xcc, 0xc1, 0xc9, 0x84, 0x83, - 0x27, 0x06, 0xb6, 0x18, 0xe2, 0xa1, 0x86, 0xa5, 0xcf, 0x4b, 0x32, 0x4b, 0x16, 0x25, 0xdf, 0x9f, - 0x83, 0xd2, 0xa2, 0xa3, 0x9b, 0x9e, 0x2b, 0x82, 0x5c, 0x9c, 0xb9, 0xba, 0xee, 0xe8, 0x1d, 0xa6, - 0x1f, 0x57, 0x31, 0x47, 0xc3, 0x1d, 0xbd, 0xdd, 0xa5, 0xee, 0xf8, 0x5d, 0xf6, 0x75, 0xff, 0xdd, - 0x66, 0xe5, 0x35, 0x1e, 0xd1, 0xec, 0x6a, 0xcb, 0x5e, 0xbb, 0xb6, 0xec, 0xe8, 0xf7, 0x4d, 0x0f, - 0xad, 0x30, 0xbd, 0x7d, 0xcd, 0xa3, 0x6d, 0xbc, 0xad, 0xbe, 0xa6, 0x77, 0xcc, 0x6b, 0x98, 0x0b, - 0xe8, 0x5a, 0xc0, 0x89, 0xd7, 0xc0, 0x54, 0xc0, 0xc3, 0xbf, 0x64, 0x15, 0xe0, 0x38, 0x32, 0x07, - 0x20, 0x3e, 0xb5, 0xda, 0xe9, 0x88, 0x88, 0x19, 0xd2, 0x1d, 0xaf, 0x8f, 0xe1, 0x8a, 0x1d, 0x08, - 0x4c, 0xef, 0x48, 0xf9, 0x2f, 0x34, 0x89, 0x03, 0xd3, 0x82, 0x45, 0xd1, 0x22, 0x5f, 0x4c, 0xa3, - 0x52, 0xfc, 0x2d, 0x81, 0x4a, 0x11, 0x52, 0xbc, 0x18, 0x59, 0x82, 0x13, 0x82, 0x6f, 0x90, 0x2d, - 0x6f, 0x2c, 0x3a, 0x2b, 0xc4, 0xd0, 0x5c, 0x69, 0x83, 0x36, 0x1a, 0x02, 0x2c, 0xd7, 0x11, 0x2b, - 0x41, 0xc6, 0x61, 0xd4, 0xff, 0x7b, 0x4e, 0x5f, 0xa3, 0xae, 0x72, 0x02, 0x35, 0xf6, 0xe2, 0xd6, - 0x66, 0x45, 0xf1, 0xcb, 0x63, 0xac, 0x76, 0x59, 0x74, 0xd1, 0x22, 0x32, 0x0f, 0xae, 0xf5, 0xe5, - 0x14, 0x1e, 0x71, 0x9d, 0x8f, 0x16, 0x21, 0x13, 0x30, 0x1a, 0x3c, 0xd8, 0xbd, 0x7d, 0xbb, 0x5e, - 0xc3, 0x90, 0x1c, 0x22, 0x5c, 0x7f, 0x2c, 0x9f, 0x9d, 0xcc, 0x24, 0x52, 0x46, 0x8a, 0xd3, 0xc6, - 0x63, 0x74, 0xc4, 0xe2, 0xb4, 0x75, 0x52, 0xe2, 0xb4, 0x2d, 0x90, 0x8f, 0xc2, 0x70, 0xf5, 0x6e, - 0x43, 0xc4, 0x9f, 0x73, 0x95, 0x53, 0x61, 0x72, 0x54, 0x0c, 0x7a, 0x27, 0x62, 0xd5, 0xc9, 0x4d, - 0x97, 0xe9, 0xc9, 0x24, 0x8c, 0x45, 0x76, 0x7f, 0xae, 0x72, 0x1a, 0x39, 0x60, 0xcb, 0x75, 0xc4, - 0x34, 0x1d, 0x81, 0x92, 0x87, 0x57, 0xb4, 0x10, 0xd3, 0x9a, 0x9a, 0xe9, 0x62, 0xa2, 0x49, 0x8d, - 0x62, 0xa8, 0x3b, 0x0c, 0xf0, 0x31, 0xc8, 0xb5, 0xc6, 0x10, 0xa8, 0xa6, 0xc3, 0x71, 0x72, 0x8f, - 0xc6, 0x8a, 0x91, 0xb7, 0x81, 
0x60, 0x6a, 0x4a, 0x6a, 0xf8, 0x7b, 0x8b, 0x7a, 0xcd, 0x55, 0xce, - 0x62, 0xae, 0x1a, 0x12, 0x8f, 0x4c, 0x55, 0xaf, 0x8d, 0x5f, 0x12, 0xd3, 0xc7, 0x63, 0x3a, 0x2f, - 0xd5, 0xf4, 0xa3, 0x52, 0x35, 0x4d, 0x43, 0x6e, 0x71, 0x0a, 0x57, 0xb2, 0x0e, 0xe7, 0x16, 0x1c, - 0x7a, 0xdf, 0xb4, 0xbb, 0xae, 0xbf, 0x7c, 0xf8, 0xf3, 0xd6, 0xb9, 0x6d, 0xe7, 0xad, 0x27, 0x44, - 0xc5, 0x67, 0x3a, 0x0e, 0xbd, 0xdf, 0xf4, 0x33, 0x94, 0x44, 0x02, 0xec, 0x67, 0x71, 0x67, 0xe2, - 0xc2, 0x30, 0x7f, 0x02, 0x6e, 0x52, 0x57, 0x51, 0xc2, 0xa9, 0x96, 0x07, 0x55, 0x34, 0x03, 0x9c, - 0x2c, 0xae, 0x58, 0x31, 0xa2, 0x01, 0xb9, 0x31, 0xe1, 0xbb, 0x03, 0x56, 0x5b, 0x2d, 0xbb, 0x6b, - 0x79, 0xae, 0x72, 0x1e, 0x99, 0xa9, 0x4c, 0x2c, 0xcb, 0xad, 0x20, 0x5b, 0x51, 0x53, 0x17, 0x78, - 0x59, 0x2c, 0xc9, 0xd2, 0x64, 0x06, 0xca, 0x0b, 0x0e, 0x5e, 0x4e, 0xde, 0xa2, 0x1b, 0xdc, 0x48, - 0xc5, 0x38, 0x23, 0x62, 0xaa, 0xec, 0x70, 0x5c, 0x73, 0x95, 0x6e, 0x34, 0x3b, 0x88, 0x95, 0x97, - 0x95, 0x78, 0x49, 0x39, 0x7b, 0xc8, 0x23, 0x3b, 0xcb, 0x1e, 0x42, 0xa1, 0x2c, 0x9c, 0x09, 0x1f, - 0x78, 0xd4, 0x62, 0x4b, 0xbd, 0x2b, 0x62, 0x8a, 0x28, 0x31, 0xe7, 0xc3, 0x00, 0xcf, 0xa7, 0x0e, - 0x31, 0xca, 0x68, 0x00, 0x96, 0x1b, 0x16, 0x2f, 0x92, 0x4c, 0xb1, 0xf1, 0xe8, 0x1e, 0x52, 0x6c, - 0xfc, 0x6f, 0x05, 0x79, 0xfe, 0x25, 0x17, 0xa1, 0x28, 0x65, 0xc0, 0xc4, 0xfc, 0x01, 0x98, 0x2d, - 0xa8, 0x28, 0xd2, 0xa2, 0x0c, 0x09, 0xdb, 0x25, 0x88, 0xc4, 0x88, 0x29, 0xcf, 0xc3, 0x98, 0xf2, - 0x5a, 0x48, 0x80, 0xe9, 0xa6, 0xbb, 0x4b, 0x6d, 0xb3, 0x85, 0x39, 0xa4, 0x0a, 0x52, 0xe4, 0x32, - 0x84, 0xf2, 0x14, 0x52, 0x12, 0x09, 0xb9, 0x0e, 0xc3, 0xfe, 0xa5, 0x78, 0x98, 0x3f, 0x03, 0x53, - 0x0b, 0x89, 0xd9, 0x5a, 0x64, 0x2e, 0x92, 0x88, 0xc8, 0xab, 0x00, 0xe1, 0x74, 0x20, 0x2c, 0x2d, - 0x5c, 0x2a, 0xe4, 0xd9, 0x43, 0x5e, 0x2a, 0x42, 0x6a, 0x36, 0x71, 0xca, 0xea, 0xe8, 0x27, 0xd8, - 0xc7, 0x89, 0x33, 0xa2, 0xc3, 0xb2, 0x82, 0x44, 0x8b, 0x90, 0x79, 0x38, 0x99, 0xd0, 0x40, 0x91, - 0x6d, 0xe3, 0x89, 0xad, 0xcd, 0xca, 0xa3, 0x29, 0xea, 0x2b, 0x2f, 
0xcc, 0x89, 0xb2, 0xe4, 0x49, - 0x28, 0xdc, 0xd6, 0xea, 0x22, 0xe2, 0x3f, 0x4f, 0x16, 0x11, 0x89, 0xb7, 0xc9, 0xb0, 0xe4, 0x15, - 0x00, 0x9e, 0x51, 0x6f, 0xc1, 0x76, 0x3c, 0xb4, 0x28, 0x46, 0xc7, 0xcf, 0xb3, 0xb1, 0xcc, 0x33, - 0xee, 0x35, 0xd9, 0x32, 0x26, 0x7f, 0x74, 0x48, 0xac, 0x7e, 0x6f, 0x3e, 0xb1, 0xac, 0x31, 0xc1, - 0x8b, 0x56, 0x48, 0x9d, 0x8f, 0x82, 0xf7, 0x9b, 0xce, 0x05, 0x2f, 0x11, 0x91, 0xcb, 0x30, 0xb8, - 0xc0, 0x26, 0x95, 0x96, 0xdd, 0x16, 0xaa, 0x80, 0x61, 0x5f, 0x3b, 0x02, 0xa6, 0x05, 0x58, 0x72, - 0x9d, 0x67, 0xa2, 0xb1, 0x62, 0xf9, 0x77, 0xba, 0x02, 0x16, 0x4f, 0x44, 0xc3, 0x60, 0xac, 0x4c, - 0x24, 0x45, 0xad, 0x28, 0x93, 0xb2, 0xa4, 0x86, 0x29, 0x69, 0x03, 0x83, 0xb6, 0x7f, 0x3b, 0x83, - 0x56, 0xfd, 0xf5, 0x5c, 0x72, 0x88, 0x92, 0x17, 0x93, 0xa9, 0x30, 0x70, 0xfd, 0x0a, 0x80, 0x72, - 0xad, 0x41, 0x52, 0x8c, 0x48, 0x52, 0x8b, 0xfc, 0x9e, 0x93, 0x5a, 0x14, 0x76, 0x99, 0xd4, 0x42, - 0xfd, 0xbf, 0x8b, 0x3d, 0xdf, 0xcd, 0x1d, 0x4a, 0xf0, 0xe3, 0x57, 0xd8, 0xa6, 0x8c, 0xd5, 0x5e, - 0x75, 0x13, 0x5b, 0x0b, 0xfe, 0x2c, 0xa8, 0xa9, 0xf3, 0x51, 0xe9, 0x6a, 0x51, 0x4a, 0xf2, 0x06, - 0x8c, 0xf8, 0x1f, 0x80, 0xc9, 0x52, 0xa4, 0x24, 0x1f, 0xc1, 0x82, 0x18, 0x4b, 0x2b, 0x12, 0x29, - 0x40, 0x5e, 0x82, 0x21, 0x34, 0x87, 0x3a, 0x7a, 0xcb, 0xcf, 0xa4, 0xc3, 0x53, 0xef, 0xf8, 0x40, - 0x39, 0xc0, 0x6f, 0x40, 0x49, 0x3e, 0x0d, 0x25, 0x91, 0x4e, 0xae, 0x84, 0x4b, 0xf4, 0xb5, 0x1d, - 0x3c, 0x34, 0xbc, 0x2a, 0xa7, 0x92, 0xe3, 0x1b, 0x1c, 0x04, 0x44, 0x36, 0x38, 0x3c, 0x8b, 0xdc, - 0x22, 0x9c, 0x5a, 0x70, 0xa8, 0x81, 0x4f, 0x5a, 0x27, 0x1f, 0x74, 0x1c, 0x91, 0xe8, 0x8f, 0x4f, - 0x10, 0xb8, 0xbe, 0x75, 0x7c, 0x34, 0x5b, 0x79, 0x05, 0x5e, 0x4e, 0xe7, 0x91, 0x52, 0x9c, 0x19, - 0x3d, 0xbc, 0x25, 0xb7, 0xe8, 0xc6, 0xba, 0xed, 0x18, 0x3c, 0x17, 0x9e, 0x98, 0xfa, 0x85, 0xa0, - 0x57, 0x05, 0x4a, 0x36, 0x7a, 0xa2, 0x85, 0x2e, 0xbc, 0x02, 0xc3, 0x7b, 0x4d, 0xc7, 0xf6, 0xf3, - 0xf9, 0x8c, 0x17, 0xe8, 0x0f, 0x6f, 0x46, 0xec, 0x0a, 0xf4, 0xf3, 0x28, 0x3f, 0x5c, 0xf9, 0x86, - 0xb6, 
0x36, 0x2b, 0xfd, 0x9f, 0x45, 0x97, 0x60, 0x0e, 0x57, 0xff, 0x3c, 0x9f, 0xf1, 0xbc, 0xfe, - 0xe1, 0x95, 0x19, 0x5b, 0x78, 0x7c, 0x61, 0xd4, 0x6b, 0x28, 0xb9, 0x51, 0xb1, 0xf0, 0xf8, 0x60, - 0x66, 0x54, 0xc8, 0x44, 0xe4, 0x2a, 0xc0, 0x82, 0xee, 0xe8, 0x6b, 0xd4, 0x63, 0x7b, 0x1d, 0x7e, - 0x5a, 0x80, 0x56, 0x48, 0x27, 0x80, 0x6a, 0x12, 0x85, 0xfa, 0xd3, 0x85, 0x5e, 0xe1, 0x07, 0x8e, - 0x65, 0xbf, 0x1b, 0xd9, 0x5f, 0x87, 0xe1, 0x40, 0xb2, 0xf5, 0x1a, 0xda, 0x4b, 0xa3, 0x41, 0xf2, - 0x47, 0x0e, 0xc6, 0x32, 0x12, 0x11, 0xb9, 0xc2, 0xdb, 0xda, 0x30, 0xdf, 0xe1, 0x69, 0xc8, 0x46, - 0x45, 0x82, 0x29, 0xdd, 0xd3, 0x9b, 0xae, 0xf9, 0x0e, 0xd5, 0x02, 0xb4, 0xfa, 0x8f, 0xf2, 0xa9, - 0x31, 0x1c, 0x8e, 0xfb, 0x68, 0x17, 0x7d, 0x94, 0x22, 0x44, 0x1e, 0x7d, 0xe2, 0x58, 0x88, 0xbb, - 0x10, 0xe2, 0x9f, 0xe5, 0x53, 0x63, 0x75, 0x1c, 0x0b, 0x71, 0x37, 0xb3, 0xc5, 0x33, 0x30, 0xa4, - 0xd9, 0xeb, 0xee, 0x04, 0xee, 0x89, 0xf8, 0x5c, 0x81, 0x13, 0xb5, 0x63, 0xaf, 0xbb, 0x4d, 0xdc, - 0xed, 0x68, 0x21, 0x81, 0xfa, 0x9d, 0x7c, 0x8f, 0x68, 0x26, 0xc7, 0x82, 0x7f, 0x37, 0x97, 0xc8, - 0x5f, 0xcc, 0x47, 0xa2, 0xa5, 0x3c, 0xbc, 0xc2, 0xbe, 0x06, 0xd0, 0x68, 0xad, 0xd0, 0x35, 0x5d, - 0x4a, 0xe5, 0x8a, 0x47, 0x16, 0x2e, 0x42, 0xf9, 0x36, 0x58, 0x22, 0x51, 0x7f, 0x39, 0x1f, 0x0b, - 0x17, 0x73, 0x2c, 0xbb, 0x1d, 0xcb, 0x2e, 0xd0, 0x3a, 0x11, 0x01, 0xe7, 0x58, 0x72, 0x3b, 0x95, - 0xdc, 0x0f, 0xe6, 0x63, 0xc1, 0x82, 0x1e, 0x5a, 0xd9, 0xb1, 0x01, 0x98, 0x0c, 0x62, 0xf4, 0xd0, - 0x6a, 0xd2, 0x33, 0x30, 0x24, 0xe4, 0x10, 0x2c, 0x15, 0x7c, 0xde, 0xe7, 0x40, 0x3c, 0xa0, 0x0d, - 0x08, 0xd4, 0xbf, 0x92, 0x87, 0x68, 0x10, 0xa7, 0x87, 0x54, 0x87, 0x7e, 0x31, 0x1f, 0x0d, 0x5f, - 0xf5, 0xf0, 0xea, 0xcf, 0x55, 0x80, 0x46, 0x77, 0xa9, 0x25, 0x9c, 0x44, 0xfb, 0xa5, 0x13, 0xfe, - 0x00, 0xaa, 0x49, 0x14, 0xea, 0xbf, 0xcf, 0xa7, 0xc6, 0xd4, 0x7a, 0x78, 0x05, 0xf8, 0x02, 0x9e, - 0x8a, 0xb7, 0xac, 0x70, 0x22, 0xc7, 0x43, 0x48, 0x36, 0xfe, 0x12, 0xf9, 0xbf, 0x7d, 0x42, 0xf2, - 0x91, 0x14, 0x73, 0x0d, 0xb3, 0x93, 0x85, 
0xe6, 0x9a, 0x7c, 0x98, 0x2f, 0x19, 0x6e, 0xbf, 0x9d, - 0xdf, 0x2e, 0x04, 0xd9, 0xc3, 0xbc, 0xaa, 0x0e, 0x2c, 0xe8, 0x1b, 0x18, 0x2a, 0x9b, 0xf5, 0xc4, - 0x08, 0xcf, 0x4e, 0xdd, 0xe1, 0x20, 0xf9, 0xda, 0x4e, 0x50, 0xa9, 0xff, 0xa6, 0x3f, 0x3d, 0xfe, - 0xd5, 0xc3, 0x2b, 0xc2, 0x8b, 0x50, 0x5c, 0xd0, 0xbd, 0x15, 0xa1, 0xc9, 0x78, 0x1b, 0xd8, 0xd1, - 0xbd, 0x15, 0x0d, 0xa1, 0xe4, 0x0a, 0x0c, 0x6a, 0xfa, 0x3a, 0x3f, 0xf3, 0x2c, 0x85, 0x99, 0xc3, - 0x1d, 0x7d, 0xbd, 0xc9, 0xcf, 0x3d, 0x03, 0x34, 0x51, 0x83, 0xcc, 0xf5, 0xfc, 0xe4, 0x1b, 0xd3, - 0x26, 0xf3, 0xcc, 0xf5, 0x41, 0xbe, 0xfa, 0x8b, 0x50, 0x1c, 0xb7, 0x8d, 0x0d, 0xbc, 0xf9, 0x1a, - 0xe1, 0x95, 0x2d, 0xd9, 0xc6, 0x86, 0x86, 0x50, 0xf2, 0xf9, 0x1c, 0x0c, 0x4c, 0x53, 0xdd, 0x60, - 0x23, 0x64, 0xa8, 0x97, 0xc3, 0xca, 0x9b, 0x07, 0xe3, 0xb0, 0x72, 0x72, 0x85, 0x57, 0x26, 0x2b, - 0x8a, 0xa8, 0x9f, 0xdc, 0x80, 0xc1, 0x09, 0xdd, 0xa3, 0xcb, 0xb6, 0xb3, 0x81, 0x2e, 0x38, 0x63, - 0xe1, 0x1b, 0xca, 0x88, 0xfe, 0xf8, 0x44, 0xfc, 0x66, 0xac, 0x25, 0x7e, 0x69, 0x41, 0x61, 0x26, - 0x16, 0x7e, 0x33, 0x87, 0x3e, 0x38, 0x42, 0x2c, 0xfc, 0x0a, 0x4f, 0x13, 0x98, 0xf0, 0x58, 0x79, - 0x24, 0xfd, 0x58, 0x19, 0xad, 0x47, 0x74, 0xd3, 0xc3, 0x7c, 0xf1, 0xa3, 0xb8, 0xe8, 0x73, 0xeb, - 0x11, 0xa1, 0x98, 0x2e, 0x5e, 0x93, 0x48, 0xd4, 0x3f, 0xe9, 0x87, 0xd4, 0x68, 0x39, 0xc7, 0x4a, - 0x7e, 0xac, 0xe4, 0xa1, 0x92, 0xd7, 0x12, 0x4a, 0x7e, 0x21, 0x19, 0x7f, 0xe9, 0x3d, 0xaa, 0xe1, - 0x5f, 0x29, 0x26, 0xa2, 0xb7, 0x3d, 0xdc, 0xbb, 0xcb, 0x50, 0x7a, 0xfd, 0xdb, 0x4a, 0x2f, 0x18, - 0x10, 0xa5, 0x6d, 0x07, 0xc4, 0xc0, 0x4e, 0x07, 0xc4, 0x60, 0xe6, 0x80, 0x08, 0x15, 0x64, 0x28, - 0x53, 0x41, 0xea, 0x62, 0xd0, 0x40, 0xef, 0x2c, 0x72, 0x17, 0xb7, 0x36, 0x2b, 0x63, 0x6c, 0x34, - 0xa5, 0xa6, 0x8f, 0x43, 0x16, 0xea, 0xb7, 0x8b, 0x3d, 0x42, 0x2e, 0x1e, 0x8a, 0x8e, 0xbc, 0x00, - 0x85, 0x6a, 0xa7, 0x23, 0xf4, 0xe3, 0x94, 0x14, 0xed, 0x31, 0xa3, 0x14, 0xa3, 0x26, 0xaf, 0x42, - 0xa1, 0x7a, 0xb7, 0x11, 0x4f, 0x1c, 0x57, 0xbd, 0xdb, 0x10, 0x5f, 0x92, 0x59, 
0xf6, 0x6e, 0x83, - 0xbc, 0x1e, 0x46, 0x70, 0x5f, 0xe9, 0x5a, 0xab, 0x62, 0xa3, 0x28, 0x3c, 0x75, 0x7d, 0x4f, 0x9e, - 0x16, 0x43, 0xb1, 0xed, 0x62, 0x8c, 0x36, 0xa6, 0x4d, 0xa5, 0x9d, 0x6b, 0xd3, 0xc0, 0xb6, 0xda, - 0x34, 0xb8, 0x53, 0x6d, 0x1a, 0xda, 0x81, 0x36, 0xc1, 0xb6, 0xda, 0x34, 0xbc, 0x7f, 0x6d, 0xea, - 0xc0, 0x85, 0x64, 0x98, 0xdc, 0x40, 0x23, 0x34, 0x20, 0x49, 0xac, 0x70, 0x2c, 0xc1, 0xab, 0xff, - 0x2e, 0xc7, 0x36, 0xd7, 0x11, 0xdd, 0x74, 0x19, 0x5e, 0x76, 0x6d, 0x4b, 0x96, 0x56, 0x7f, 0x3e, - 0x9f, 0x1d, 0xdd, 0xf7, 0x68, 0x4e, 0x71, 0xdf, 0x9d, 0x2a, 0xa5, 0x62, 0xec, 0x5d, 0x61, 0xa6, - 0x94, 0x63, 0x6c, 0xd3, 0x64, 0xf6, 0xf5, 0x7c, 0x56, 0xc8, 0xe1, 0x7d, 0x49, 0xec, 0x03, 0x49, - 0x67, 0x38, 0x74, 0xf1, 0x77, 0xa3, 0x5e, 0x70, 0x53, 0x30, 0x22, 0x0b, 0x51, 0x48, 0x69, 0x27, - 0x02, 0x8e, 0x94, 0x23, 0xaf, 0x07, 0xf9, 0xfd, 0x24, 0xff, 0x18, 0xf4, 0x74, 0xf3, 0xc7, 0x6c, - 0xcc, 0x3d, 0x46, 0x26, 0x27, 0xcf, 0x40, 0x69, 0x0a, 0x13, 0xe6, 0xc8, 0x83, 0x9d, 0xa7, 0xd0, - 0x91, 0xbd, 0x56, 0x38, 0x8d, 0xfa, 0xeb, 0x39, 0x38, 0x75, 0xab, 0xbb, 0x44, 0x85, 0xa3, 0x5d, - 0xd0, 0x86, 0xb7, 0x01, 0x18, 0x58, 0x38, 0xcc, 0xe4, 0xd0, 0x61, 0xe6, 0x43, 0x72, 0x68, 0xe2, - 0x58, 0x81, 0xab, 0x21, 0x35, 0x77, 0x96, 0x79, 0xd4, 0xf7, 0x39, 0x5d, 0xed, 0x2e, 0xd1, 0x66, - 0xc2, 0x6b, 0x46, 0xe2, 0x7e, 0xe1, 0xa3, 0xdc, 0x9b, 0x7f, 0xaf, 0x0e, 0x2a, 0x3f, 0x97, 0xcf, - 0x8c, 0x06, 0x7d, 0x64, 0xf3, 0xc2, 0x7f, 0x2a, 0xb5, 0x57, 0xe2, 0xf9, 0xe1, 0x53, 0x48, 0x62, - 0x1c, 0xd3, 0xb8, 0xa4, 0x0b, 0xec, 0x10, 0x27, 0x96, 0xf7, 0xbc, 0xc0, 0xfe, 0x20, 0x97, 0x19, - 0xb5, 0xfb, 0xa8, 0x0a, 0x4c, 0xfd, 0x9f, 0x0b, 0x7e, 0xb0, 0xf0, 0x7d, 0x7d, 0xc2, 0x33, 0x30, - 0x24, 0xde, 0x8f, 0x47, 0xfd, 0x84, 0xc5, 0xb1, 0x21, 0x1e, 0x43, 0x07, 0x04, 0xcc, 0xa4, 0x90, - 0x9c, 0x98, 0x25, 0x3f, 0x61, 0xc9, 0x81, 0x59, 0x93, 0x48, 0x98, 0xd1, 0x30, 0xf9, 0xc0, 0xf4, - 0xd0, 0x02, 0x61, 0x7d, 0x59, 0xe0, 0x46, 0x03, 0x7d, 0x60, 0x7a, 0xdc, 0xfe, 0x08, 0xd0, 0xcc, - 0x20, 0xe0, 0xb6, 
0x88, 0x98, 0xf7, 0xd0, 0x20, 0xe0, 0xa6, 0x8a, 0x26, 0x30, 0xac, 0xb5, 0xc2, - 0xf9, 0x56, 0xb8, 0xb4, 0x88, 0xd6, 0x0a, 0x77, 0x5d, 0x6c, 0x6d, 0x40, 0xc0, 0x38, 0x6a, 0x74, - 0x39, 0x74, 0xe2, 0x43, 0x8e, 0x0e, 0x42, 0x34, 0x81, 0x21, 0xd7, 0x61, 0xac, 0xe1, 0xe9, 0x96, - 0xa1, 0x3b, 0xc6, 0x7c, 0xd7, 0xeb, 0x74, 0x3d, 0xd9, 0x00, 0x76, 0x3d, 0xc3, 0xee, 0x7a, 0x5a, - 0x8c, 0x82, 0x3c, 0x07, 0xa3, 0x3e, 0x64, 0xd2, 0x71, 0x6c, 0x47, 0xb6, 0x72, 0x5c, 0xcf, 0xa0, - 0x8e, 0xa3, 0x45, 0x09, 0xc8, 0x47, 0x60, 0xb4, 0x6e, 0x05, 0xcf, 0xa1, 0xb5, 0x19, 0x61, 0xf3, - 0xe0, 0x8b, 0x31, 0x33, 0x40, 0x34, 0xbb, 0x4e, 0x5b, 0x8b, 0x12, 0xaa, 0x5b, 0xf9, 0x64, 0x4c, - 0xf5, 0x87, 0x77, 0x83, 0x74, 0x25, 0xea, 0xb8, 0x87, 0xde, 0xaa, 0x68, 0x7c, 0xca, 0x7e, 0xc3, - 0xdc, 0x06, 0xbd, 0x0e, 0x83, 0xb7, 0xe8, 0x06, 0xf7, 0x31, 0x2d, 0x85, 0x6e, 0xc9, 0xab, 0x02, - 0x26, 0x9f, 0xee, 0xfa, 0x74, 0xea, 0x37, 0xf3, 0xc9, 0x68, 0xf1, 0x0f, 0xaf, 0xb0, 0x9f, 0x83, - 0x01, 0x14, 0x65, 0xdd, 0xbf, 0x5e, 0x40, 0x01, 0xa2, 0xb8, 0xa3, 0xde, 0xce, 0x3e, 0x99, 0xfa, - 0x53, 0xa5, 0x78, 0x0a, 0x81, 0x87, 0x57, 0x7a, 0xaf, 0xc1, 0xf0, 0x84, 0x6d, 0xb9, 0xa6, 0xeb, - 0x51, 0xab, 0xe5, 0x2b, 0x2c, 0x3a, 0xfe, 0xb7, 0x42, 0xb0, 0x6c, 0x03, 0x4a, 0xd4, 0x7b, 0x51, - 0x5e, 0xf2, 0x32, 0x0c, 0xa1, 0xc8, 0xd1, 0xe6, 0xe4, 0x13, 0x1e, 0xde, 0x4c, 0x2c, 0x31, 0x60, - 0xdc, 0xe2, 0x0c, 0x49, 0xc9, 0x6d, 0x18, 0x9c, 0x58, 0x31, 0xdb, 0x86, 0x43, 0x2d, 0xf4, 0x4d, - 0x96, 0x22, 0xb5, 0x45, 0xfb, 0xf2, 0x2a, 0xfe, 0x8b, 0xb4, 0xbc, 0x39, 0x2d, 0x51, 0x2c, 0xf2, - 0x58, 0x4c, 0xc0, 0x2e, 0xfc, 0x68, 0x1e, 0x20, 0x2c, 0x40, 0x1e, 0x87, 0xbc, 0xff, 0x22, 0x99, - 0xbb, 0xc4, 0x44, 0x34, 0x28, 0x8f, 0x4b, 0x85, 0x18, 0xdb, 0xf9, 0x6d, 0xc7, 0xf6, 0x6d, 0x28, - 0xf1, 0xd3, 0x35, 0xf4, 0x5a, 0x97, 0x9e, 0x8d, 0x67, 0x36, 0xf8, 0x2a, 0xd2, 0x73, 0x5b, 0x1a, - 0x2d, 0xcf, 0x88, 0x07, 0x38, 0x67, 0x76, 0xa1, 0x05, 0xfd, 0xf8, 0x17, 0xb9, 0x04, 0xc5, 0x45, - 0x3f, 0x85, 0xff, 0x28, 0x9f, 0xa5, 0x63, 0xf2, 0x43, 
0x3c, 0xeb, 0xa6, 0x09, 0xdb, 0xf2, 0x58, - 0xd5, 0xd8, 0xea, 0x11, 0x21, 0x17, 0x01, 0x8b, 0xc8, 0x45, 0xc0, 0xd4, 0x7f, 0x96, 0x4f, 0x49, - 0x6e, 0xf1, 0xf0, 0x0e, 0x93, 0x57, 0x00, 0xf0, 0xe5, 0x39, 0x93, 0xa7, 0xff, 0x1c, 0x04, 0x47, - 0x09, 0x32, 0x42, 0xb5, 0x8d, 0x6c, 0x3b, 0x42, 0x62, 0xf5, 0xb7, 0x72, 0x89, 0x8c, 0x08, 0xfb, - 0x92, 0xa3, 0x6c, 0x95, 0xe5, 0xf7, 0x68, 0xc6, 0xfa, 0x7d, 0x51, 0xd8, 0x5d, 0x5f, 0x44, 0xbf, - 0xe5, 0x00, 0x2c, 0xd3, 0xc3, 0xfc, 0x96, 0x3f, 0xc9, 0xa7, 0xe5, 0x87, 0x38, 0x9a, 0x2a, 0xfe, - 0x62, 0x60, 0x94, 0x16, 0x63, 0x19, 0x79, 0x10, 0x1a, 0x2b, 0xe6, 0x9b, 0xa9, 0x9f, 0x81, 0x13, - 0xb1, 0xac, 0x09, 0x38, 0xff, 0x4b, 0xa1, 0x26, 0xd2, 0x73, 0x2b, 0x64, 0xc7, 0x2c, 0x88, 0x90, - 0xa9, 0xff, 0x6f, 0xae, 0x77, 0xce, 0x8c, 0x43, 0x57, 0x9d, 0x14, 0x01, 0x14, 0xfe, 0x62, 0x04, - 0x70, 0x00, 0xdb, 0xe0, 0xa3, 0x2d, 0x80, 0xf7, 0xc8, 0xe4, 0xf1, 0x6e, 0x0b, 0xe0, 0xa7, 0x72, - 0xdb, 0xa6, 0x3c, 0x39, 0x6c, 0x19, 0xa8, 0xff, 0x43, 0x2e, 0x35, 0x35, 0xc9, 0xbe, 0xda, 0xf5, - 0x3a, 0x94, 0xb8, 0x0b, 0x8f, 0x68, 0x95, 0x94, 0xcc, 0x95, 0x41, 0x33, 0xca, 0x8b, 0x32, 0x64, - 0x06, 0x06, 0x78, 0x1b, 0x0c, 0xd1, 0x1b, 0x4f, 0xf5, 0xc8, 0x8f, 0x62, 0x64, 0x4d, 0x8e, 0x02, - 0xad, 0xfe, 0x46, 0x2e, 0x91, 0x29, 0xe5, 0x10, 0xbf, 0x2d, 0x9c, 0xaa, 0x0b, 0x3b, 0x9f, 0xaa, - 0xd5, 0x7f, 0x9d, 0x4f, 0x4f, 0xd4, 0x72, 0x88, 0x1f, 0x72, 0x10, 0xc7, 0x69, 0x7b, 0x5b, 0xb7, - 0x16, 0x61, 0x2c, 0x2a, 0x0b, 0xb1, 0x6c, 0x3d, 0x96, 0x9e, 0xae, 0x26, 0xa3, 0x15, 0x31, 0x1e, - 0xea, 0x8f, 0xe4, 0x93, 0x39, 0x66, 0x0e, 0x7d, 0x7e, 0xda, 0x93, 0xb6, 0x90, 0x3a, 0x9c, 0x08, - 0xbf, 0x64, 0xd1, 0xf4, 0xda, 0xfe, 0xe9, 0x3e, 0xc6, 0x04, 0x10, 0x31, 0x2c, 0xda, 0xa6, 0xeb, - 0x35, 0x3d, 0x86, 0x8c, 0x44, 0x53, 0x88, 0x96, 0x8b, 0x49, 0xe5, 0x3d, 0xb2, 0x6c, 0xbd, 0xc7, - 0xa4, 0xf2, 0x1e, 0x59, 0xcb, 0x0e, 0x5d, 0x2a, 0x3f, 0x93, 0xcf, 0xca, 0x41, 0x74, 0xe8, 0xb2, - 0xf9, 0xa4, 0xdc, 0x5f, 0xbc, 0x65, 0x42, 0x4a, 0x8f, 0x67, 0x25, 0xfd, 0xc9, 0xe0, 0x99, 
0xe0, - 0xb3, 0xb7, 0x49, 0x2c, 0x55, 0x58, 0xef, 0x91, 0xe1, 0x75, 0x34, 0x84, 0xf5, 0x1e, 0x19, 0x75, - 0xef, 0x3d, 0x61, 0xfd, 0x4a, 0x7e, 0xa7, 0x89, 0xaf, 0x8e, 0x85, 0x97, 0x10, 0xde, 0x97, 0xf2, - 0xc9, 0x84, 0x6c, 0x87, 0x2e, 0xa6, 0x29, 0x28, 0x89, 0xd4, 0x70, 0x99, 0xc2, 0xe1, 0xf8, 0x2c, - 0x93, 0x4d, 0x7c, 0xc7, 0x8b, 0x20, 0x6e, 0xaa, 0x76, 0x26, 0x12, 0x4e, 0xab, 0x7e, 0x27, 0x17, - 0xcb, 0x5e, 0x76, 0x28, 0x67, 0x24, 0x7b, 0x5b, 0xdd, 0x3e, 0xea, 0x9f, 0xd6, 0x16, 0x63, 0xf1, - 0x7b, 0x83, 0xef, 0xa9, 0x51, 0x4f, 0x37, 0xdb, 0xf1, 0xf2, 0x22, 0xc0, 0xc2, 0x37, 0xf3, 0x70, - 0x32, 0x41, 0x4a, 0x2e, 0x45, 0x42, 0x1a, 0xe1, 0xb9, 0x6b, 0xcc, 0x13, 0x9f, 0x07, 0x37, 0xda, - 0xc5, 0x51, 0xf1, 0x25, 0x28, 0xd6, 0xf4, 0x0d, 0xfe, 0x6d, 0xfd, 0x9c, 0xa5, 0xa1, 0x6f, 0xc8, - 0x47, 0x8a, 0x88, 0x27, 0x4b, 0x70, 0x86, 0x5f, 0xf8, 0x98, 0xb6, 0xb5, 0x68, 0xae, 0xd1, 0xba, - 0x35, 0x6b, 0xb6, 0xdb, 0xa6, 0x2b, 0x6e, 0x2d, 0x9f, 0xd9, 0xda, 0xac, 0x5c, 0xf6, 0x6c, 0x4f, - 0x6f, 0x37, 0xa9, 0x4f, 0xd6, 0xf4, 0xcc, 0x35, 0xda, 0x34, 0xad, 0xe6, 0x1a, 0x52, 0x4a, 0x2c, - 0xd3, 0x59, 0x91, 0x3a, 0x4f, 0x14, 0xd4, 0x68, 0xe9, 0x96, 0x45, 0x8d, 0xba, 0x35, 0xbe, 0xe1, - 0x51, 0x7e, 0xdb, 0x59, 0xe0, 0x67, 0x9e, 0xfc, 0xa1, 0x3d, 0x47, 0x33, 0xc6, 0x4b, 0x8c, 0x40, - 0x4b, 0x29, 0xa4, 0xfe, 0x6a, 0x31, 0x25, 0x71, 0xdd, 0x11, 0x52, 0x1f, 0xbf, 0xa7, 0x8b, 0xdb, - 0xf4, 0xf4, 0x35, 0x18, 0x10, 0x99, 0x18, 0xc4, 0x0d, 0x0a, 0xbe, 0x0c, 0xb8, 0xcf, 0x41, 0xf2, - 0x15, 0x94, 0xa0, 0x22, 0x6d, 0xb8, 0xb0, 0xc8, 0xba, 0x29, 0xbd, 0x33, 0x4b, 0x7b, 0xe8, 0xcc, - 0x1e, 0xfc, 0xc8, 0x5b, 0x70, 0x0e, 0xb1, 0x29, 0xdd, 0x3a, 0x80, 0x55, 0xa1, 0xad, 0xc7, 0xab, - 0x4a, 0xef, 0xdc, 0xac, 0xf2, 0xe4, 0x93, 0x30, 0x12, 0x0c, 0x10, 0x93, 0xba, 0xe2, 0x6a, 0xa6, - 0xc7, 0x38, 0xe3, 0x81, 0xf8, 0x18, 0x18, 0xfd, 0xf1, 0xa2, 0xc1, 0xdc, 0x22, 0xbc, 0xd4, 0xff, - 0x3e, 0xd7, 0x2b, 0x81, 0xde, 0xa1, 0xcf, 0xca, 0x1f, 0x85, 0x01, 0x83, 0x7f, 0x94, 0xd0, 0xa9, - 0xde, 0x29, 0xf6, 0x38, 0xa9, 
0xe6, 0x97, 0x51, 0xff, 0x55, 0xae, 0x67, 0xde, 0xbe, 0xa3, 0xfe, - 0x79, 0x5f, 0x2a, 0x64, 0x7c, 0x9e, 0x98, 0x44, 0xaf, 0x40, 0xd9, 0x0c, 0x13, 0x0b, 0x35, 0xc3, - 0x58, 0x5e, 0xda, 0x09, 0x09, 0x8e, 0xa3, 0xeb, 0x45, 0x08, 0x3c, 0xd2, 0x1c, 0xdf, 0xdd, 0xce, - 0x6d, 0x76, 0x1d, 0x93, 0x8f, 0x4b, 0xed, 0xb4, 0x1b, 0xf3, 0xc5, 0x73, 0x6f, 0x3b, 0x26, 0xab, - 0x40, 0xf7, 0x56, 0xa8, 0xa5, 0x37, 0xd7, 0x6d, 0x67, 0x15, 0xa3, 0xbd, 0xf2, 0xc1, 0xa9, 0x9d, - 0xe0, 0xf0, 0xbb, 0x3e, 0x98, 0x3c, 0x09, 0xa3, 0xcb, 0xed, 0x2e, 0x0d, 0xe2, 0x6b, 0xf2, 0xcb, - 0x4c, 0x6d, 0x84, 0x01, 0x83, 0x2b, 0xa0, 0x47, 0x01, 0x90, 0x08, 0xa3, 0xf8, 0xf3, 0x9b, 0x4b, - 0x6d, 0x88, 0x41, 0x16, 0x45, 0x77, 0x5d, 0xe0, 0x5a, 0xcd, 0x85, 0xd4, 0x6c, 0xdb, 0xd6, 0x72, - 0xd3, 0xa3, 0xce, 0x1a, 0x36, 0x14, 0xbd, 0x35, 0xb4, 0xb3, 0x48, 0x81, 0x77, 0x43, 0xee, 0x8c, - 0x6d, 0x2d, 0x2f, 0x52, 0x67, 0x8d, 0x35, 0xf5, 0x19, 0x20, 0xa2, 0xa9, 0x0e, 0x9e, 0xea, 0xf0, - 0x8f, 0x43, 0x77, 0x0d, 0x4d, 0x7c, 0x04, 0x3f, 0xee, 0xc1, 0x0f, 0xab, 0xc0, 0x30, 0x0f, 0x32, - 0xc8, 0x85, 0x86, 0x3e, 0x1a, 0x1a, 0x70, 0x10, 0xca, 0xeb, 0x2c, 0x08, 0xf7, 0x11, 0xee, 0x22, - 0xaf, 0x89, 0x5f, 0xea, 0x17, 0x0a, 0x69, 0xa9, 0x06, 0xf7, 0xa5, 0x68, 0xe1, 0xb4, 0x9a, 0xdf, - 0xd5, 0xb4, 0x7a, 0xc2, 0xea, 0xae, 0x35, 0xf5, 0x4e, 0xa7, 0x79, 0xcf, 0x6c, 0xe3, 0x1b, 0x35, - 0x5c, 0xf8, 0xb4, 0x51, 0xab, 0xbb, 0x56, 0xed, 0x74, 0xa6, 0x38, 0x90, 0x3c, 0x0d, 0x27, 0x19, - 0x1d, 0x76, 0x52, 0x40, 0x59, 0x44, 0x4a, 0xc6, 0x00, 0xa3, 0xf4, 0xfa, 0xb4, 0xe7, 0x61, 0x50, - 0xf0, 0xe4, 0x6b, 0x55, 0xbf, 0x36, 0xc0, 0x99, 0xb9, 0xac, 0xe7, 0x02, 0x36, 0x7c, 0x72, 0xed, - 0xd7, 0x86, 0xfc, 0xf2, 0x18, 0x8b, 0xda, 0xea, 0xae, 0xf1, 0xf0, 0x62, 0x03, 0x88, 0x0c, 0x7e, - 0x93, 0x4b, 0x30, 0xc6, 0xb8, 0x04, 0x02, 0xe3, 0xe1, 0x7b, 0xfb, 0xb5, 0x18, 0x94, 0x5c, 0x87, - 0xd3, 0x11, 0x08, 0xb7, 0x41, 0xf9, 0x9b, 0x8b, 0x7e, 0x2d, 0x15, 0xa7, 0xfe, 0x72, 0x21, 0x9a, - 0x00, 0xf1, 0x10, 0x3a, 0xe2, 0x1c, 0x0c, 0xd8, 0xce, 0x72, 0xb3, 
0xeb, 0xb4, 0xc5, 0xd8, 0x2b, - 0xd9, 0xce, 0xf2, 0x6d, 0xa7, 0x4d, 0xce, 0x40, 0x89, 0xf5, 0x8e, 0x69, 0x88, 0x21, 0xd6, 0xaf, - 0x77, 0x3a, 0x75, 0x83, 0x54, 0x79, 0x87, 0x60, 0xe8, 0xd7, 0x66, 0x0b, 0xb7, 0xf6, 0xdc, 0xeb, - 0xa2, 0x9f, 0xaf, 0x78, 0x09, 0x24, 0xf6, 0x13, 0x06, 0x84, 0xe5, 0x07, 0x01, 0x31, 0x16, 0x06, - 0x6e, 0x4b, 0x0c, 0xde, 0x27, 0x71, 0x16, 0x02, 0x19, 0xb2, 0xe0, 0x9b, 0x18, 0x83, 0xd4, 0x80, - 0x84, 0x54, 0x6b, 0xb6, 0x61, 0xde, 0x33, 0x29, 0x7f, 0x22, 0xd3, 0xcf, 0x6f, 0xb6, 0x93, 0x58, - 0xad, 0xec, 0x33, 0x99, 0x15, 0x10, 0xf2, 0x1a, 0x57, 0x42, 0x4e, 0x87, 0x6b, 0x1f, 0xef, 0x5b, - 0x6e, 0xa7, 0xc5, 0x50, 0xa8, 0x99, 0x58, 0x1e, 0x17, 0x42, 0xf5, 0x1b, 0xa5, 0x64, 0x16, 0xcc, - 0x43, 0xb1, 0x6b, 0xa6, 0x01, 0x44, 0x92, 0xdb, 0xf0, 0xf6, 0xf0, 0x82, 0x94, 0xd1, 0x46, 0x60, - 0x32, 0x78, 0x48, 0x65, 0xc9, 0x15, 0x18, 0xe4, 0x5f, 0x54, 0xaf, 0x09, 0x7b, 0x07, 0x7d, 0xe0, - 0xdc, 0x8e, 0x79, 0xef, 0x1e, 0x3a, 0xcc, 0x05, 0x68, 0x72, 0x09, 0x06, 0x6a, 0x73, 0x8d, 0x46, - 0x75, 0xce, 0xbf, 0x0a, 0xc7, 0xc7, 0x3a, 0x86, 0xe5, 0x36, 0x5d, 0xdd, 0x72, 0x35, 0x1f, 0x49, - 0x9e, 0x84, 0x52, 0x7d, 0x01, 0xc9, 0xf8, 0x13, 0xd4, 0xe1, 0xad, 0xcd, 0xca, 0x80, 0xd9, 0xe1, - 0x54, 0x02, 0x85, 0xf5, 0xde, 0xa9, 0xd7, 0x24, 0x7f, 0x10, 0x5e, 0xef, 0x7d, 0xd3, 0xc0, 0x7b, - 0x75, 0x2d, 0x40, 0x93, 0x97, 0x60, 0xa4, 0x41, 0x1d, 0x53, 0x6f, 0xcf, 0x75, 0x71, 0xab, 0x28, - 0x85, 0xb4, 0x74, 0x11, 0xde, 0xb4, 0x10, 0xa1, 0x45, 0xc8, 0xc8, 0x45, 0x28, 0x4e, 0x9b, 0x96, - 0xff, 0x1e, 0x04, 0x1f, 0x0c, 0xac, 0x98, 0x96, 0xa7, 0x21, 0x94, 0x3c, 0x09, 0x85, 0x9b, 0x8b, - 0x75, 0xe1, 0xea, 0x86, 0xbc, 0xde, 0xf6, 0x22, 0xe1, 0x31, 0x6f, 0x2e, 0xd6, 0xc9, 0x4b, 0x30, - 0xc4, 0x16, 0x31, 0x6a, 0xb5, 0xa8, 0xab, 0x0c, 0xe3, 0xc7, 0xf0, 0x98, 0x8c, 0x3e, 0x50, 0x76, - 0x5a, 0x09, 0x28, 0xc9, 0x2d, 0x28, 0xc7, 0xb3, 0x3d, 0x88, 0x37, 0x49, 0x68, 0x71, 0xad, 0x0b, - 0x5c, 0x5a, 0x54, 0xd0, 0x44, 0x41, 0x62, 0x80, 0x12, 0x87, 0xb1, 0x7d, 0x1d, 0x5a, 0x9d, 0x3c, - 0x20, 
0xf5, 0xe5, 0xad, 0xcd, 0xca, 0x53, 0x09, 0xa6, 0x4d, 0x47, 0x50, 0x49, 0xdc, 0x33, 0x39, - 0x91, 0x4f, 0x01, 0x54, 0x3d, 0xcf, 0x31, 0x97, 0xba, 0xcc, 0x3c, 0x1c, 0xeb, 0xfd, 0xa4, 0x41, - 0xdd, 0xda, 0xac, 0x9c, 0xd6, 0x03, 0xf2, 0xd4, 0x87, 0x0d, 0x12, 0x3b, 0xf5, 0x7f, 0xcf, 0xa7, - 0xa7, 0x6d, 0x3d, 0x84, 0xa9, 0x6f, 0x8f, 0x6e, 0x03, 0xb1, 0x01, 0x57, 0xdc, 0xc7, 0x80, 0xbb, - 0x07, 0x27, 0xaa, 0xc6, 0x9a, 0x69, 0x55, 0xf1, 0xa7, 0x3b, 0x3b, 0x55, 0xc5, 0xa9, 0x54, 0x7a, - 0xfb, 0x19, 0x43, 0x8b, 0xef, 0xe1, 0x81, 0xa8, 0x19, 0xaa, 0xa9, 0x73, 0x5c, 0x73, 0xed, 0x9e, - 0xde, 0x6c, 0xf1, 0x8c, 0xa7, 0x5a, 0x9c, 0xa9, 0xfa, 0x23, 0xf9, 0x6d, 0x32, 0xcd, 0x3e, 0x8c, - 0xd2, 0x57, 0xbf, 0x9c, 0xef, 0x9d, 0xec, 0xf7, 0xa1, 0x14, 0xca, 0x9f, 0xe5, 0x53, 0x52, 0xef, - 0xee, 0x4b, 0x12, 0x57, 0x60, 0x90, 0xb3, 0x09, 0xfc, 0xb6, 0x71, 0x76, 0xe7, 0xca, 0x8a, 0xab, - 0x8a, 0x8f, 0x26, 0x73, 0x70, 0xba, 0x7a, 0xef, 0x1e, 0x6d, 0x79, 0x61, 0x48, 0xf2, 0xb9, 0x30, - 0xc2, 0x2f, 0x0f, 0xc1, 0x2c, 0xf0, 0x61, 0x48, 0x73, 0x8c, 0x64, 0x93, 0x5a, 0x8e, 0x2c, 0xc2, - 0xd9, 0x38, 0xbc, 0xc1, 0xb7, 0x44, 0x45, 0x29, 0x2a, 0x73, 0x82, 0x23, 0xff, 0x4f, 0xcb, 0x28, - 0x9b, 0xd6, 0x4a, 0x5c, 0xba, 0xfa, 0x7b, 0xb5, 0x12, 0xd7, 0xb1, 0xd4, 0x72, 0xea, 0x37, 0x0b, - 0x72, 0x86, 0xe2, 0x87, 0xd7, 0xc3, 0xee, 0xc5, 0x88, 0x5f, 0xfd, 0x4e, 0x87, 0xcc, 0x4b, 0x22, - 0x3c, 0x8d, 0xd1, 0x75, 0x7c, 0x17, 0xd4, 0x20, 0x3c, 0x06, 0x02, 0xe5, 0x75, 0x39, 0xa0, 0x24, - 0x75, 0x28, 0x56, 0x9d, 0x65, 0x6e, 0xee, 0x6f, 0xf7, 0x62, 0x4f, 0x77, 0x96, 0xd3, 0x17, 0x36, - 0x64, 0xa1, 0xfe, 0x70, 0xbe, 0x47, 0x52, 0xe1, 0x87, 0x72, 0x12, 0xf9, 0x89, 0x7c, 0x56, 0x7a, - 0xe0, 0xa3, 0xea, 0x2b, 0xf8, 0x2e, 0x0b, 0xe7, 0x68, 0x3b, 0x52, 0x1e, 0xa0, 0x70, 0x7e, 0x3f, - 0x9f, 0x95, 0xeb, 0xf8, 0x58, 0x38, 0x7b, 0x9b, 0x20, 0x53, 0x45, 0xfa, 0x10, 0xdb, 0xdc, 0xb2, - 0x2a, 0xf4, 0xef, 0xd1, 0x5f, 0x2e, 0x4d, 0xa4, 0xc7, 0x43, 0x78, 0x5f, 0x5a, 0xfa, 0x07, 0xf9, - 0xcc, 0x9c, 0xde, 0xc7, 0x32, 0x3d, 0x48, 
0x99, 0x1e, 0x0f, 0xfd, 0x7d, 0x0d, 0xfd, 0x54, 0x99, - 0x1e, 0x8f, 0xfd, 0x7d, 0xe9, 0xe9, 0xcf, 0xe6, 0xb7, 0xc9, 0xf3, 0x79, 0xc4, 0xcf, 0x55, 0xcf, - 0x42, 0x49, 0xdc, 0x3c, 0x60, 0xae, 0x43, 0x4d, 0xfc, 0xda, 0xa3, 0xb4, 0xfe, 0x5e, 0x7e, 0xdb, - 0x2c, 0xa5, 0xc7, 0xf2, 0x92, 0xe4, 0xf5, 0xb5, 0xfc, 0x76, 0xf9, 0x55, 0x8f, 0xc5, 0x25, 0x89, - 0xeb, 0xf7, 0xf2, 0x98, 0xe4, 0xdc, 0x33, 0x5b, 0xd3, 0xb6, 0xeb, 0x49, 0x79, 0xca, 0xff, 0xe2, - 0x57, 0x8c, 0x83, 0xf0, 0x2f, 0xf7, 0xbb, 0xa7, 0xb8, 0xaf, 0xee, 0xe9, 0xdf, 0xc7, 0x96, 0x26, - 0x29, 0xd0, 0x43, 0x5b, 0x82, 0xdf, 0xaf, 0x02, 0x3d, 0x80, 0xf5, 0xf7, 0x61, 0x16, 0xe8, 0x5f, - 0x2b, 0x40, 0x79, 0xc2, 0xb1, 0xd7, 0xad, 0x9b, 0x74, 0x9d, 0xb6, 0x0f, 0x6d, 0xb8, 0xbf, 0x27, - 0x0c, 0x44, 0x67, 0x8f, 0x06, 0xa2, 0x5f, 0x8e, 0xbc, 0x01, 0x27, 0x42, 0x59, 0xca, 0x31, 0x1e, - 0xf1, 0x6e, 0xbb, 0xc5, 0x50, 0xcd, 0xb7, 0x19, 0x4e, 0x04, 0x23, 0x8b, 0x53, 0xab, 0xdf, 0x89, - 0xf4, 0xc6, 0xc3, 0x6d, 0xae, 0xef, 0xbb, 0x37, 0x6e, 0xc3, 0xd9, 0x89, 0xae, 0xe3, 0x50, 0xcb, - 0x4b, 0xef, 0x14, 0xbc, 0x49, 0x6b, 0x71, 0x8a, 0x66, 0xb2, 0x73, 0x32, 0x0a, 0x33, 0xb6, 0xe2, - 0x6d, 0x59, 0x9c, 0xed, 0x40, 0xc8, 0xb6, 0xcb, 0x29, 0xd2, 0xd8, 0xa6, 0x17, 0x56, 0x7f, 0x3b, - 0x2f, 0x77, 0xfd, 0xa1, 0xcd, 0x6a, 0xef, 0x8b, 0xae, 0x57, 0xbf, 0x50, 0x80, 0x31, 0xd6, 0xac, - 0x45, 0xdd, 0x5d, 0x3d, 0x36, 0x61, 0xf6, 0xb3, 0x40, 0xb0, 0xaf, 0xf0, 0x25, 0x89, 0xe3, 0x46, - 0xfa, 0x0a, 0x1f, 0x9e, 0xf5, 0x15, 0x3e, 0x5e, 0xfd, 0xb9, 0x62, 0xd8, 0x1d, 0xc7, 0x06, 0xd0, - 0x61, 0x77, 0x07, 0x99, 0x87, 0xd3, 0x62, 0x6e, 0xf3, 0x41, 0x98, 0xec, 0x47, 0xcc, 0x5f, 0x3c, - 0x67, 0xa8, 0x98, 0x16, 0xbb, 0x2e, 0x75, 0x9a, 0x9e, 0xee, 0xae, 0x36, 0x31, 0x3b, 0x90, 0x96, - 0x5a, 0x90, 0x31, 0x14, 0xb3, 0x5a, 0x94, 0xe1, 0x60, 0xc8, 0xd0, 0x9f, 0x10, 0x13, 0x0c, 0xd3, - 0x0a, 0xaa, 0xbf, 0x98, 0x83, 0x72, 0xfc, 0x73, 0xc8, 0x55, 0x18, 0x64, 0xbf, 0x83, 0xa0, 0x27, - 0xc2, 0x25, 0x3b, 0xe4, 0xc8, 0xfd, 0x85, 0x7c, 0x1a, 0xf2, 0x32, 0x0c, 0xa1, 
0x6b, 0x16, 0x16, - 0xc8, 0x87, 0xb1, 0x66, 0xc2, 0x02, 0x98, 0x61, 0x9b, 0x17, 0x0b, 0x49, 0xc9, 0x6b, 0x30, 0x5c, - 0x0f, 0x7d, 0x50, 0xc5, 0x05, 0x34, 0xba, 0xbe, 0x4b, 0x25, 0x43, 0x02, 0x4d, 0xa6, 0x56, 0xbf, - 0x95, 0x0f, 0x55, 0xfd, 0xd8, 0x34, 0xdd, 0x97, 0x69, 0xfa, 0xf3, 0x05, 0x18, 0x9d, 0xb0, 0x2d, - 0x4f, 0x6f, 0x79, 0xc7, 0x87, 0xc1, 0xfb, 0x39, 0x64, 0x23, 0x15, 0xe8, 0x9f, 0x5c, 0xd3, 0xcd, - 0xb6, 0x30, 0x7c, 0x30, 0x22, 0x36, 0x65, 0x00, 0x8d, 0xc3, 0xc9, 0x0d, 0x8c, 0x03, 0xc5, 0x24, - 0x1d, 0x38, 0xe2, 0x8d, 0x85, 0xc1, 0x83, 0x25, 0x94, 0x48, 0x9f, 0xcd, 0x01, 0x7c, 0xe4, 0xc8, - 0x25, 0xe5, 0x3e, 0x3b, 0x3e, 0x18, 0x3d, 0x22, 0x7d, 0xf6, 0x95, 0x02, 0x9c, 0x8d, 0x3b, 0x04, - 0x1e, 0x0f, 0x38, 0xd1, 0x79, 0x7f, 0x09, 0x4e, 0xc7, 0x65, 0x53, 0x63, 0xd2, 0xe8, 0xef, 0xed, - 0x3b, 0x72, 0x75, 0x6b, 0xb3, 0xf2, 0x78, 0xd2, 0x17, 0x93, 0x55, 0x96, 0xea, 0x4d, 0x92, 0x5a, - 0x49, 0x6a, 0xcf, 0xbc, 0x47, 0x5e, 0x09, 0x3f, 0xe4, 0x3d, 0xf3, 0x13, 0xf9, 0x64, 0xcf, 0x1c, - 0x4f, 0x78, 0x62, 0xe1, 0xfe, 0x9d, 0x3c, 0x3c, 0x15, 0x17, 0xce, 0x9b, 0x2f, 0x3d, 0xf7, 0x8a, - 0x46, 0xfd, 0xa0, 0xa1, 0xc7, 0xd3, 0x8b, 0x50, 0x62, 0x8c, 0xfe, 0xaa, 0xbb, 0xc1, 0xcb, 0x41, - 0x11, 0xfd, 0x95, 0x41, 0x34, 0x81, 0xd9, 0x81, 0x38, 0x8f, 0xe7, 0x84, 0x5d, 0x88, 0xf3, 0xa7, - 0xb7, 0x15, 0xe7, 0xf1, 0x40, 0xf6, 0xdd, 0xd5, 0x8a, 0x00, 0x37, 0x4c, 0x4f, 0x84, 0x56, 0x3e, - 0xe2, 0x57, 0x65, 0x92, 0x9f, 0x6b, 0x71, 0x4f, 0x7e, 0xae, 0x61, 0xc0, 0xa4, 0xfe, 0x3d, 0x04, - 0x4c, 0x7a, 0x03, 0x06, 0x84, 0x1c, 0xc5, 0xbe, 0xfd, 0x5c, 0xf8, 0x15, 0x08, 0xce, 0xaa, 0xde, - 0x97, 0xfe, 0x07, 0x60, 0xc0, 0xe5, 0x41, 0xc4, 0xc4, 0xbe, 0x1a, 0xdf, 0xd3, 0x08, 0x90, 0xe6, - 0xff, 0x41, 0x2e, 0x02, 0xe6, 0xc3, 0x90, 0x9f, 0xbb, 0xf0, 0xfc, 0x18, 0xec, 0x5f, 0x52, 0x87, - 0x01, 0xf1, 0x6a, 0x40, 0x01, 0x7c, 0xab, 0x1b, 0xa8, 0x65, 0xd8, 0xcf, 0xfc, 0xf1, 0x00, 0x3f, - 0xb3, 0x16, 0xc4, 0xf2, 0x23, 0x66, 0x01, 0x52, 0x7f, 0x23, 0x07, 0xe5, 0x78, 0x21, 0xf2, 0x0c, - 0x94, 0xf8, 0x5f, 
0x62, 0x87, 0x8e, 0xb1, 0x4c, 0x79, 0x09, 0x39, 0x96, 0xa9, 0xa0, 0x7e, 0x09, - 0x86, 0x34, 0xff, 0x29, 0x88, 0xd8, 0xa1, 0xa3, 0xff, 0x6e, 0xf0, 0x3e, 0x44, 0xf6, 0xdf, 0x0d, - 0x28, 0xc9, 0x93, 0x50, 0x98, 0x6f, 0x1b, 0x62, 0x63, 0x8e, 0x6f, 0x76, 0xec, 0xb6, 0x1c, 0xa8, - 0x95, 0x61, 0x19, 0xd1, 0x1c, 0x5d, 0x17, 0xce, 0xde, 0x48, 0x64, 0xd1, 0x75, 0x99, 0x68, 0x8e, - 0xae, 0xab, 0xbf, 0x9b, 0xf3, 0xdd, 0x77, 0x67, 0x4c, 0xd7, 0xab, 0x5b, 0xf7, 0xf5, 0xb6, 0x19, - 0x74, 0x04, 0xb9, 0x01, 0x63, 0x21, 0x52, 0x7a, 0xf3, 0x9f, 0x88, 0x8d, 0x83, 0xaf, 0xc2, 0x1f, - 0x0f, 0x79, 0xc7, 0x8a, 0x91, 0x4b, 0x92, 0xf2, 0x4b, 0xa7, 0x16, 0xf2, 0x43, 0x72, 0xe1, 0x8a, - 0x3d, 0x32, 0x6b, 0xba, 0xae, 0x69, 0x2d, 0xf3, 0xf7, 0x88, 0x05, 0x7c, 0x6a, 0x84, 0xe7, 0x27, - 0x6b, 0x1c, 0xde, 0x74, 0x18, 0x42, 0x7e, 0x33, 0x2d, 0x17, 0x50, 0xff, 0x7d, 0x0e, 0x2e, 0x30, - 0x4e, 0x18, 0xa5, 0x33, 0xf1, 0x61, 0xfb, 0x1a, 0xc0, 0x6b, 0x3d, 0x24, 0x25, 0x46, 0xf5, 0x13, - 0xc9, 0xc0, 0x14, 0x31, 0xc2, 0x18, 0xf7, 0x1e, 0xb2, 0xdf, 0x5b, 0xa0, 0xb4, 0xdf, 0xc9, 0xe1, - 0xf5, 0xe0, 0x52, 0x9b, 0xde, 0x9e, 0xab, 0xbf, 0x79, 0x40, 0x17, 0xd8, 0x7b, 0x9d, 0xba, 0x3e, - 0x0e, 0x65, 0x17, 0xdb, 0xd2, 0xec, 0x5a, 0xe6, 0x03, 0x3c, 0xf9, 0x12, 0x1f, 0x73, 0x56, 0xfa, - 0x18, 0xa9, 0xad, 0xda, 0x18, 0xa7, 0xbf, 0x6d, 0x99, 0x0f, 0x30, 0x48, 0xe9, 0xc7, 0x30, 0xee, - 0xbb, 0x44, 0x41, 0x2e, 0xc0, 0x20, 0xe3, 0x63, 0x05, 0xca, 0xa8, 0x05, 0xbf, 0x49, 0x19, 0x0a, - 0x5d, 0xd3, 0xc0, 0x66, 0xf6, 0x6b, 0xec, 0x4f, 0xf5, 0x8f, 0x0a, 0x70, 0xb2, 0x7a, 0xb7, 0x51, - 0x9f, 0x08, 0x5e, 0x31, 0x88, 0x87, 0xa6, 0x6b, 0xbb, 0x94, 0x85, 0x4f, 0x4f, 0x26, 0x60, 0x8c, - 0x07, 0x0a, 0x10, 0xc1, 0xec, 0xf9, 0xb9, 0x54, 0x3f, 0x7f, 0x4d, 0x11, 0xc5, 0x48, 0x4a, 0x3a, - 0x8a, 0x18, 0x11, 0xf3, 0xde, 0x25, 0x2d, 0x38, 0x1f, 0x21, 0x6d, 0xea, 0x41, 0x1c, 0x36, 0x3f, - 0x06, 0xc6, 0x07, 0xb7, 0x36, 0x2b, 0x4f, 0x66, 0x12, 0x49, 0xac, 0xcf, 0xc9, 0xac, 0xc3, 0x78, - 0x6e, 0x2e, 0xb9, 0x05, 0x27, 0x79, 0x79, 0x3c, 0xb4, 
0x0b, 0x9c, 0x24, 0x18, 0x73, 0x29, 0xde, - 0x81, 0x84, 0x94, 0x63, 0x5b, 0x21, 0x92, 0x09, 0x5c, 0x3c, 0x12, 0xbe, 0x0b, 0x67, 0x38, 0x7d, - 0x87, 0x3a, 0x38, 0x12, 0x6d, 0xab, 0xe9, 0x52, 0x4f, 0xbc, 0x35, 0x1e, 0x7f, 0x72, 0x6b, 0xb3, - 0x52, 0x49, 0x25, 0x90, 0x98, 0x9e, 0x42, 0x82, 0x85, 0x00, 0xdf, 0xa0, 0x9e, 0xec, 0xa7, 0x51, - 0xda, 0x85, 0x9a, 0xff, 0x64, 0x1e, 0xce, 0x4d, 0x53, 0xbd, 0xed, 0xad, 0x4c, 0xac, 0xd0, 0xd6, - 0xea, 0xb1, 0xab, 0x74, 0xc4, 0x6a, 0x49, 0x95, 0xce, 0xb1, 0x89, 0xdc, 0x4b, 0x3a, 0xc7, 0x16, - 0xaf, 0x90, 0xce, 0xd7, 0xf2, 0x70, 0x39, 0x6d, 0x73, 0x80, 0xb7, 0x03, 0xce, 0xfc, 0x7d, 0xea, - 0x38, 0xa6, 0x41, 0x0f, 0x71, 0x51, 0x39, 0x38, 0x7b, 0x58, 0xee, 0xb0, 0xe2, 0x1e, 0x3d, 0x62, - 0x77, 0x26, 0xae, 0x43, 0x4c, 0x62, 0xf3, 0xde, 0x12, 0xd7, 0x8f, 0xe7, 0xe1, 0x74, 0xc3, 0x5c, - 0x76, 0x3d, 0xdb, 0xa1, 0x0b, 0x18, 0xb1, 0xe3, 0x58, 0x93, 0x32, 0x45, 0x73, 0x88, 0xb9, 0xa2, - 0xde, 0xeb, 0xa2, 0x39, 0x1e, 0x50, 0x1c, 0xff, 0xf4, 0x2c, 0xbf, 0x0d, 0xbf, 0x65, 0x5a, 0x06, - 0x39, 0x0f, 0x67, 0x6e, 0x37, 0x26, 0xb5, 0xe6, 0xad, 0xfa, 0x5c, 0xad, 0x79, 0x7b, 0xae, 0xb1, - 0x30, 0x39, 0x51, 0x9f, 0xaa, 0x4f, 0xd6, 0xca, 0x7d, 0xe4, 0x14, 0x9c, 0x08, 0x51, 0xd3, 0xb7, - 0x67, 0xab, 0x73, 0xe5, 0x1c, 0x39, 0x09, 0xa3, 0x21, 0x70, 0x7c, 0x7e, 0xb1, 0x9c, 0x7f, 0xfa, - 0x27, 0x72, 0x00, 0x8c, 0xdf, 0xbc, 0x63, 0x2e, 0x9b, 0x16, 0x79, 0x04, 0xce, 0x21, 0xc5, 0xbc, - 0x56, 0xbf, 0x51, 0x9f, 0x8b, 0xf1, 0x3c, 0x03, 0x27, 0x65, 0xe4, 0xcc, 0xfc, 0x44, 0x75, 0xa6, - 0x9c, 0x0b, 0xaa, 0x12, 0xe0, 0x46, 0x63, 0xbe, 0x9c, 0x27, 0xa7, 0xa1, 0x2c, 0x03, 0xe7, 0x6f, - 0x2d, 0x56, 0xcb, 0x85, 0x38, 0xb4, 0x31, 0x51, 0x9f, 0x2d, 0x17, 0xc9, 0x39, 0x38, 0x25, 0x43, - 0x27, 0xe7, 0x16, 0xb5, 0x6a, 0xbd, 0x56, 0xee, 0x7f, 0xfa, 0x83, 0x30, 0x8c, 0xb1, 0x83, 0xc4, - 0xde, 0x79, 0x04, 0x06, 0xe7, 0xc7, 0x1b, 0x93, 0xda, 0x1d, 0x6c, 0x0d, 0x40, 0xa9, 0x36, 0x39, - 0xc7, 0x5a, 0x96, 0x7b, 0xfa, 0xff, 0xca, 0x01, 0x34, 0xa6, 0x16, 0x17, 0x04, 0xe1, 0x30, 
0x0c, - 0xd4, 0xe7, 0xee, 0x54, 0x67, 0xea, 0x8c, 0x6e, 0x10, 0x8a, 0xf3, 0x0b, 0x93, 0xec, 0xf3, 0x87, - 0xa0, 0x7f, 0x62, 0x66, 0xbe, 0x31, 0x59, 0xce, 0x33, 0xa0, 0x36, 0x59, 0xad, 0x95, 0x0b, 0x0c, - 0x78, 0x57, 0xab, 0x2f, 0x4e, 0x96, 0x8b, 0xec, 0xcf, 0x99, 0xc6, 0x62, 0x75, 0xb1, 0xdc, 0xcf, - 0xfe, 0x9c, 0xc2, 0x3f, 0x4b, 0x8c, 0x59, 0x63, 0x72, 0x11, 0x7f, 0x0c, 0xb0, 0x26, 0x4c, 0xf9, - 0xbf, 0x06, 0x19, 0x8a, 0xb1, 0xae, 0xd5, 0xb5, 0xf2, 0x10, 0xfb, 0xc1, 0x58, 0xb2, 0x1f, 0xc0, - 0x1a, 0xa7, 0x4d, 0xce, 0xce, 0xdf, 0x99, 0x2c, 0x0f, 0x33, 0x5e, 0xb3, 0xb7, 0x18, 0x78, 0x84, - 0xfd, 0xa9, 0xcd, 0xb2, 0x3f, 0x47, 0x19, 0x27, 0x6d, 0xb2, 0x3a, 0xb3, 0x50, 0x5d, 0x9c, 0x2e, - 0x8f, 0xb1, 0xf6, 0x20, 0xcf, 0x13, 0xbc, 0xe4, 0x5c, 0x75, 0x76, 0xb2, 0x5c, 0x16, 0x34, 0xb5, - 0x99, 0xfa, 0xdc, 0xad, 0xf2, 0x49, 0x6c, 0xc8, 0x5b, 0xb3, 0xf8, 0x83, 0xb0, 0x02, 0xf8, 0xd7, - 0xa9, 0xa7, 0xbf, 0x0b, 0x4a, 0xf3, 0x0d, 0xbc, 0xc5, 0x3f, 0x07, 0xa7, 0xe6, 0x1b, 0xcd, 0xc5, - 0xb7, 0x16, 0x26, 0x63, 0x1d, 0x77, 0x12, 0x46, 0x7d, 0xc4, 0x4c, 0x7d, 0xee, 0xf6, 0x9b, 0x5c, - 0x15, 0x7c, 0xd0, 0x6c, 0x75, 0x62, 0xbe, 0x51, 0xce, 0xb3, 0x7e, 0xf4, 0x41, 0x77, 0xeb, 0x73, - 0xb5, 0xf9, 0xbb, 0x8d, 0x72, 0xe1, 0xe9, 0xfb, 0x30, 0x52, 0xa3, 0xf7, 0xcd, 0x16, 0x15, 0x0a, - 0xf2, 0x28, 0x9c, 0xaf, 0x4d, 0xde, 0xa9, 0x4f, 0x4c, 0x66, 0xaa, 0x48, 0x14, 0x5d, 0x5d, 0xa8, - 0x97, 0x73, 0xe4, 0x2c, 0x90, 0x28, 0xf8, 0x66, 0x75, 0x76, 0xaa, 0x9c, 0x27, 0x0a, 0x9c, 0x8e, - 0xc2, 0xeb, 0x73, 0x8b, 0xb7, 0xe7, 0x26, 0xcb, 0x85, 0xa7, 0xff, 0x76, 0x0e, 0xce, 0xa4, 0x66, - 0x02, 0x27, 0x2a, 0x3c, 0x36, 0x39, 0x53, 0x6d, 0x2c, 0xd6, 0x27, 0x1a, 0x93, 0x55, 0x6d, 0x62, - 0xba, 0x39, 0x51, 0x5d, 0x9c, 0xbc, 0x31, 0xaf, 0xbd, 0xd5, 0xbc, 0x31, 0x39, 0x37, 0xa9, 0x55, - 0x67, 0xca, 0x7d, 0xe4, 0x49, 0xa8, 0x64, 0xd0, 0x34, 0x26, 0x27, 0x6e, 0x6b, 0xf5, 0xc5, 0xb7, - 0xca, 0x39, 0xf2, 0x04, 0x3c, 0x9a, 0x49, 0xc4, 0x7e, 0x97, 0xf3, 0xe4, 0x31, 0xb8, 0x90, 0x45, - 0xf2, 0x89, 0x99, 0x72, 0xe1, 
0xe9, 0x1f, 0xcf, 0x01, 0x49, 0xa6, 0x72, 0x26, 0x8f, 0xc3, 0x45, - 0xa6, 0x17, 0xcd, 0xec, 0x06, 0x3e, 0x01, 0x8f, 0xa6, 0x52, 0x48, 0xcd, 0xab, 0xc0, 0x23, 0x19, - 0x24, 0xa2, 0x71, 0x17, 0x41, 0x49, 0x27, 0xc0, 0xa6, 0xfd, 0x42, 0x0e, 0xce, 0xa4, 0x86, 0xd3, - 0x20, 0x97, 0xe1, 0xa9, 0x6a, 0x6d, 0x96, 0xf5, 0xcd, 0xc4, 0x62, 0x7d, 0x7e, 0xae, 0xd1, 0x9c, - 0x9d, 0xaa, 0x36, 0x99, 0xf6, 0xdd, 0x6e, 0xc4, 0x7a, 0xf3, 0x12, 0xa8, 0x3d, 0x28, 0x27, 0xa6, - 0xab, 0x73, 0x37, 0xd8, 0xf0, 0x23, 0x4f, 0xc1, 0xe3, 0x99, 0x74, 0x93, 0x73, 0xd5, 0xf1, 0x99, - 0xc9, 0x5a, 0x39, 0x4f, 0x3e, 0x00, 0x4f, 0x64, 0x52, 0xd5, 0xea, 0x0d, 0x4e, 0x56, 0x78, 0x5a, - 0x8f, 0x5c, 0xf2, 0xb2, 0xaf, 0x9c, 0x98, 0x9f, 0x5b, 0xac, 0x4e, 0x2c, 0xa6, 0x69, 0xf6, 0x79, - 0x38, 0x13, 0xc1, 0x8e, 0xdf, 0x6e, 0xd4, 0xe7, 0x26, 0x1b, 0x8d, 0x72, 0x2e, 0x81, 0x0a, 0x44, - 0x9b, 0x1f, 0xaf, 0x7d, 0xeb, 0x7f, 0x7c, 0xac, 0xef, 0x5b, 0x7f, 0xfa, 0x58, 0xee, 0xf7, 0xff, - 0xf4, 0xb1, 0xdc, 0xbf, 0xfe, 0xd3, 0xc7, 0x72, 0x9f, 0xbc, 0xbe, 0x9b, 0x2c, 0xe0, 0x7c, 0xca, - 0x5e, 0x2a, 0xe1, 0x3d, 0xdb, 0x0b, 0xff, 0x5f, 0x00, 0x00, 0x00, 0xff, 0xff, 0xb3, 0xfc, 0xdb, - 0x68, 0xce, 0xb2, 0x01, 0x00, +var xxx_messageInfo_MCPSessionStart proto.InternalMessageInfo + +// MCPSessionEnd is emitted when an MCP session ends. 
+type MCPSessionEnd struct { + // Metadata is a common event metadata + Metadata `protobuf:"bytes,1,opt,name=Metadata,proto3,embedded=Metadata" json:""` + // User is a common user event metadata + UserMetadata `protobuf:"bytes,2,opt,name=User,proto3,embedded=User" json:""` + // SessionMetadata is a common event session metadata + SessionMetadata `protobuf:"bytes,3,opt,name=Session,proto3,embedded=Session" json:""` + // ServerMetadata is a common server metadata + ServerMetadata `protobuf:"bytes,4,opt,name=Server,proto3,embedded=Server" json:""` + // ConnectionMetadata holds information about the connection + ConnectionMetadata `protobuf:"bytes,5,opt,name=Connection,proto3,embedded=Connection" json:""` + // App is a common application resource metadata. + AppMetadata `protobuf:"bytes,6,opt,name=App,proto3,embedded=App" json:""` + // Status contains common command or operation status fields. + // + // The protocol spec states that the MCP server may refuse client requests to + // terminate the session. + // https://modelcontextprotocol.io/specification/2025-06-18/basic/transports#session-management + Status `protobuf:"bytes,7,opt,name=Status,proto3,embedded=Status" json:""` + // Headers are the HTTP request headers.
+	Headers github_com_gravitational_teleport_api_types_wrappers.Traits `protobuf:"bytes,8,opt,name=headers,proto3,customtype=github.com/gravitational/teleport/api/types/wrappers.Traits" json:"headers,omitempty"` +	XXX_NoUnkeyedLiteral struct{} `json:"-"` +	XXX_unrecognized     []byte   `json:"-"` +	XXX_sizecache        int32    `json:"-"` } -func (m *Metadata) Marshal() (dAtA []byte, err error) { -	size := m.Size() -	dAtA = make([]byte, size) -	n, err := m.MarshalToSizedBuffer(dAtA[:size]) -	if err != nil { -		return nil, err +func (m *MCPSessionEnd) Reset()         { *m = MCPSessionEnd{} } +func (m *MCPSessionEnd) String() string { return proto.CompactTextString(m) } +func (*MCPSessionEnd) ProtoMessage()    {} +func (*MCPSessionEnd) Descriptor() ([]byte, []int) { +	return fileDescriptor_007ba1c3d6266d56, []int{268} +} +func (m *MCPSessionEnd) XXX_Unmarshal(b []byte) error { +	return m.Unmarshal(b) +} +func (m *MCPSessionEnd) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { +	if deterministic { +		return xxx_messageInfo_MCPSessionEnd.Marshal(b, m, deterministic) +	} else { +		b = b[:cap(b)] +		n, err := m.MarshalToSizedBuffer(b) +		if err != nil { +			return nil, err +		} +		return b[:n], nil 	} -	return dAtA[:n], nil +} +func (m *MCPSessionEnd) XXX_Merge(src proto.Message) { +	xxx_messageInfo_MCPSessionEnd.Merge(m, src) +} +func (m *MCPSessionEnd) XXX_Size() int { +	return m.Size() +} +func (m *MCPSessionEnd) XXX_DiscardUnknown() { +	xxx_messageInfo_MCPSessionEnd.DiscardUnknown(m) } -func (m *Metadata) MarshalTo(dAtA []byte) (int, error) { -	size := m.Size() -	return m.MarshalToSizedBuffer(dAtA[:size]) +var xxx_messageInfo_MCPSessionEnd proto.InternalMessageInfo + +// MCPJSONRPCMessage includes details of an MCP request or notification. +// https://modelcontextprotocol.io/docs/concepts/transports#requests +type MCPJSONRPCMessage struct { +	// JSONRPC specifies the version of the protocol.
+ JSONRPC string `protobuf:"bytes,1,opt,name=JSONRPC,proto3" json:"jsonrpc"` + // ID is the ID of a request. Notifications have no IDs. + ID string `protobuf:"bytes,2,opt,name=ID,proto3" json:"id,omitempty"` + // Method is the method of this message. + Method string `protobuf:"bytes,3,opt,name=method,proto3" json:"method"` + // Params is the optional parameters. + Params *Struct `protobuf:"bytes,5,opt,name=params,proto3,casttype=Struct" json:"params,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } -func (m *Metadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if len(m.ClusterName) > 0 { - i -= len(m.ClusterName) - copy(dAtA[i:], m.ClusterName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ClusterName))) - i-- - dAtA[i] = 0x32 - } - n1, err1 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Time, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Time):]) - if err1 != nil { - return 0, err1 - } - i -= n1 - i = encodeVarintEvents(dAtA, i, uint64(n1)) - i-- - dAtA[i] = 0x2a - if len(m.Code) > 0 { - i -= len(m.Code) - copy(dAtA[i:], m.Code) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Code))) - i-- - dAtA[i] = 0x22 - } - if len(m.ID) > 0 { - i -= len(m.ID) - copy(dAtA[i:], m.ID) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ID))) - i-- - dAtA[i] = 0x1a - } - if len(m.Type) > 0 { - i -= len(m.Type) - copy(dAtA[i:], m.Type) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Type))) - i-- - dAtA[i] = 0x12 - } - if m.Index != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.Index)) - i-- - dAtA[i] = 0x8 +func (m *MCPJSONRPCMessage) Reset() { *m = MCPJSONRPCMessage{} } +func (m *MCPJSONRPCMessage) String() string { return proto.CompactTextString(m) } +func (*MCPJSONRPCMessage) ProtoMessage() {} +func (*MCPJSONRPCMessage) Descriptor() 
([]byte, []int) { +	return fileDescriptor_007ba1c3d6266d56, []int{269} +} +func (m *MCPJSONRPCMessage) XXX_Unmarshal(b []byte) error { +	return m.Unmarshal(b) +} +func (m *MCPJSONRPCMessage) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { +	if deterministic { +		return xxx_messageInfo_MCPJSONRPCMessage.Marshal(b, m, deterministic) +	} else { +		b = b[:cap(b)] +		n, err := m.MarshalToSizedBuffer(b) +		if err != nil { +			return nil, err +		} +		return b[:n], nil 	} -	return len(dAtA) - i, nil +} +func (m *MCPJSONRPCMessage) XXX_Merge(src proto.Message) { +	xxx_messageInfo_MCPJSONRPCMessage.Merge(m, src) +} +func (m *MCPJSONRPCMessage) XXX_Size() int { +	return m.Size() +} +func (m *MCPJSONRPCMessage) XXX_DiscardUnknown() { +	xxx_messageInfo_MCPJSONRPCMessage.DiscardUnknown(m) } -func (m *SessionMetadata) Marshal() (dAtA []byte, err error) { -	size := m.Size() -	dAtA = make([]byte, size) -	n, err := m.MarshalToSizedBuffer(dAtA[:size]) -	if err != nil { -		return nil, err +var xxx_messageInfo_MCPJSONRPCMessage proto.InternalMessageInfo + +// MCPSessionRequest is emitted when a request is sent by the client during an MCP session. +type MCPSessionRequest struct { +	// Metadata is a common event metadata +	Metadata `protobuf:"bytes,1,opt,name=metadata,proto3,embedded=metadata" json:""` +	// User is a common user event metadata +	UserMetadata `protobuf:"bytes,2,opt,name=user,proto3,embedded=user" json:""` +	// SessionMetadata is a common event session metadata +	SessionMetadata `protobuf:"bytes,3,opt,name=session,proto3,embedded=session" json:""` +	// App is a common application resource metadata. +	AppMetadata `protobuf:"bytes,4,opt,name=App,proto3,embedded=App" json:""` +	// Status contains information whether the request is successful or not. +	Status `protobuf:"bytes,5,opt,name=status,proto3,embedded=status" json:""` +	// Message contains details of the message.
+	Message MCPJSONRPCMessage `protobuf:"bytes,6,opt,name=message,proto3" json:"message,omitempty"` +	// Headers are the HTTP request headers. +	Headers github_com_gravitational_teleport_api_types_wrappers.Traits `protobuf:"bytes,7,opt,name=headers,proto3,customtype=github.com/gravitational/teleport/api/types/wrappers.Traits" json:"headers,omitempty"` +	XXX_NoUnkeyedLiteral struct{} `json:"-"` +	XXX_unrecognized     []byte   `json:"-"` +	XXX_sizecache        int32    `json:"-"` +} + +func (m *MCPSessionRequest) Reset()         { *m = MCPSessionRequest{} } +func (m *MCPSessionRequest) String() string { return proto.CompactTextString(m) } +func (*MCPSessionRequest) ProtoMessage()    {} +func (*MCPSessionRequest) Descriptor() ([]byte, []int) { +	return fileDescriptor_007ba1c3d6266d56, []int{270} +} +func (m *MCPSessionRequest) XXX_Unmarshal(b []byte) error { +	return m.Unmarshal(b) +} +func (m *MCPSessionRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { +	if deterministic { +		return xxx_messageInfo_MCPSessionRequest.Marshal(b, m, deterministic) +	} else { +		b = b[:cap(b)] +		n, err := m.MarshalToSizedBuffer(b) +		if err != nil { +			return nil, err +		} +		return b[:n], nil 	} -	return dAtA[:n], nil +} +func (m *MCPSessionRequest) XXX_Merge(src proto.Message) { +	xxx_messageInfo_MCPSessionRequest.Merge(m, src) +} +func (m *MCPSessionRequest) XXX_Size() int { +	return m.Size() +} +func (m *MCPSessionRequest) XXX_DiscardUnknown() { +	xxx_messageInfo_MCPSessionRequest.DiscardUnknown(m) } -func (m *SessionMetadata) MarshalTo(dAtA []byte) (int, error) { -	size := m.Size() -	return m.MarshalToSizedBuffer(dAtA[:size]) +var xxx_messageInfo_MCPSessionRequest proto.InternalMessageInfo + +// MCPSessionNotification is emitted when a notification is sent by the client +// during an MCP session.
+type MCPSessionNotification struct { + // Metadata is a common event metadata + Metadata `protobuf:"bytes,1,opt,name=metadata,proto3,embedded=metadata" json:""` + // User is a common user event metadata + UserMetadata `protobuf:"bytes,2,opt,name=user,proto3,embedded=user" json:""` + // SessionMetadata is a common event session metadata + SessionMetadata `protobuf:"bytes,3,opt,name=session,proto3,embedded=session" json:""` + // App is a common application resource metadata. + AppMetadata `protobuf:"bytes,4,opt,name=App,proto3,embedded=App" json:""` + // Message contains details of the message. + Message MCPJSONRPCMessage `protobuf:"bytes,5,opt,name=message,proto3" json:"message,omitempty"` + // Status contains information whether the request is successful or not. + Status `protobuf:"bytes,6,opt,name=status,proto3,embedded=status" json:""` + // Headers are the HTTP request headers. + Headers github_com_gravitational_teleport_api_types_wrappers.Traits `protobuf:"bytes,7,opt,name=headers,proto3,customtype=github.com/gravitational/teleport/api/types/wrappers.Traits" json:"headers,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } -func (m *SessionMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if len(m.PrivateKeyPolicy) > 0 { - i -= len(m.PrivateKeyPolicy) - copy(dAtA[i:], m.PrivateKeyPolicy) - i = encodeVarintEvents(dAtA, i, uint64(len(m.PrivateKeyPolicy))) - i-- - dAtA[i] = 0x1a - } - if len(m.WithMFA) > 0 { - i -= len(m.WithMFA) - copy(dAtA[i:], m.WithMFA) - i = encodeVarintEvents(dAtA, i, uint64(len(m.WithMFA))) - i-- - dAtA[i] = 0x12 - } - if len(m.SessionID) > 0 { - i -= len(m.SessionID) - copy(dAtA[i:], m.SessionID) - i = encodeVarintEvents(dAtA, i, uint64(len(m.SessionID))) - i-- - dAtA[i] = 0xa +func (m 
*MCPSessionNotification) Reset()         { *m = MCPSessionNotification{} } +func (m *MCPSessionNotification) String() string { return proto.CompactTextString(m) } +func (*MCPSessionNotification) ProtoMessage()    {} +func (*MCPSessionNotification) Descriptor() ([]byte, []int) { +	return fileDescriptor_007ba1c3d6266d56, []int{271} +} +func (m *MCPSessionNotification) XXX_Unmarshal(b []byte) error { +	return m.Unmarshal(b) +} +func (m *MCPSessionNotification) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { +	if deterministic { +		return xxx_messageInfo_MCPSessionNotification.Marshal(b, m, deterministic) +	} else { +		b = b[:cap(b)] +		n, err := m.MarshalToSizedBuffer(b) +		if err != nil { +			return nil, err +		} +		return b[:n], nil 	} -	return len(dAtA) - i, nil +} +func (m *MCPSessionNotification) XXX_Merge(src proto.Message) { +	xxx_messageInfo_MCPSessionNotification.Merge(m, src) +} +func (m *MCPSessionNotification) XXX_Size() int { +	return m.Size() +} +func (m *MCPSessionNotification) XXX_DiscardUnknown() { +	xxx_messageInfo_MCPSessionNotification.DiscardUnknown(m) } -func (m *UserMetadata) Marshal() (dAtA []byte, err error) { -	size := m.Size() -	dAtA = make([]byte, size) -	n, err := m.MarshalToSizedBuffer(dAtA[:size]) -	if err != nil { -		return nil, err +var xxx_messageInfo_MCPSessionNotification proto.InternalMessageInfo + +// MCPSessionListenSSEStream is emitted when the client sends a GET request to +// a streamable HTTP MCP server to listen for server notifications via an SSE +// stream. +type MCPSessionListenSSEStream struct { +	// Metadata is a common event metadata +	Metadata `protobuf:"bytes,1,opt,name=metadata,proto3,embedded=metadata" json:""` +	// User is a common user event metadata +	UserMetadata `protobuf:"bytes,2,opt,name=user,proto3,embedded=user" json:""` +	// SessionMetadata is a common event session metadata +	SessionMetadata `protobuf:"bytes,3,opt,name=session,proto3,embedded=session" json:""` +	// App is a common application resource metadata.
+ AppMetadata `protobuf:"bytes,4,opt,name=app,proto3,embedded=app" json:""` + // Status contains information whether the request is successful or not. + Status `protobuf:"bytes,5,opt,name=status,proto3,embedded=status" json:""` + // Headers are the HTTP request headers. + Headers github_com_gravitational_teleport_api_types_wrappers.Traits `protobuf:"bytes,6,opt,name=headers,proto3,customtype=github.com/gravitational/teleport/api/types/wrappers.Traits" json:"headers,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *MCPSessionListenSSEStream) Reset() { *m = MCPSessionListenSSEStream{} } +func (m *MCPSessionListenSSEStream) String() string { return proto.CompactTextString(m) } +func (*MCPSessionListenSSEStream) ProtoMessage() {} +func (*MCPSessionListenSSEStream) Descriptor() ([]byte, []int) { + return fileDescriptor_007ba1c3d6266d56, []int{272} +} +func (m *MCPSessionListenSSEStream) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *MCPSessionListenSSEStream) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MCPSessionListenSSEStream.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil } - return dAtA[:n], nil +} +func (m *MCPSessionListenSSEStream) XXX_Merge(src proto.Message) { + xxx_messageInfo_MCPSessionListenSSEStream.Merge(m, src) +} +func (m *MCPSessionListenSSEStream) XXX_Size() int { + return m.Size() +} +func (m *MCPSessionListenSSEStream) XXX_DiscardUnknown() { + xxx_messageInfo_MCPSessionListenSSEStream.DiscardUnknown(m) } -func (m *UserMetadata) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) +var xxx_messageInfo_MCPSessionListenSSEStream proto.InternalMessageInfo + +// MCPSessionInvalidHTTPRequest is a blanket event for all requests that 
we do +// not understand (usually outside the MCP spec). +type MCPSessionInvalidHTTPRequest struct { +	// Metadata is a common event metadata +	Metadata `protobuf:"bytes,1,opt,name=metadata,proto3,embedded=metadata" json:""` +	// User is a common user event metadata +	UserMetadata `protobuf:"bytes,2,opt,name=user,proto3,embedded=user" json:""` +	// SessionMetadata is a common event session metadata +	SessionMetadata `protobuf:"bytes,3,opt,name=session,proto3,embedded=session" json:""` +	// App is a common application resource metadata. +	AppMetadata `protobuf:"bytes,4,opt,name=app,proto3,embedded=app" json:""` +	// Path is the relative path in the URL. +	Path string `protobuf:"bytes,5,opt,name=path,proto3" json:"path"` +	// Method is the request HTTP method, like GET/POST/DELETE/etc. +	Method string `protobuf:"bytes,6,opt,name=method,proto3" json:"method"` +	// RawQuery holds the encoded query values. +	RawQuery string `protobuf:"bytes,7,opt,name=raw_query,json=rawQuery,proto3" json:"raw_query"` +	// Body is the request HTTP body. +	Body []byte `protobuf:"bytes,8,opt,name=body,proto3" json:"body"` +	// Headers are the HTTP request headers.
+ Headers github_com_gravitational_teleport_api_types_wrappers.Traits `protobuf:"bytes,9,opt,name=headers,proto3,customtype=github.com/gravitational/teleport/api/types/wrappers.Traits" json:"headers,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } -func (m *UserMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if m.UserOrigin != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.UserOrigin)) - i-- - dAtA[i] = 0x68 - } - if len(m.BotInstanceID) > 0 { - i -= len(m.BotInstanceID) - copy(dAtA[i:], m.BotInstanceID) - i = encodeVarintEvents(dAtA, i, uint64(len(m.BotInstanceID))) - i-- - dAtA[i] = 0x62 - } - if len(m.BotName) > 0 { - i -= len(m.BotName) - copy(dAtA[i:], m.BotName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.BotName))) - i-- - dAtA[i] = 0x5a - } - if m.UserKind != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.UserKind)) - i-- - dAtA[i] = 0x50 - } - if len(m.RequiredPrivateKeyPolicy) > 0 { - i -= len(m.RequiredPrivateKeyPolicy) - copy(dAtA[i:], m.RequiredPrivateKeyPolicy) - i = encodeVarintEvents(dAtA, i, uint64(len(m.RequiredPrivateKeyPolicy))) - i-- - dAtA[i] = 0x4a - } - if m.TrustedDevice != nil { - { - size, err := m.TrustedDevice.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) +func (m *MCPSessionInvalidHTTPRequest) Reset() { *m = MCPSessionInvalidHTTPRequest{} } +func (m *MCPSessionInvalidHTTPRequest) String() string { return proto.CompactTextString(m) } +func (*MCPSessionInvalidHTTPRequest) ProtoMessage() {} +func (*MCPSessionInvalidHTTPRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_007ba1c3d6266d56, []int{273} +} +func (m *MCPSessionInvalidHTTPRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func 
(m *MCPSessionInvalidHTTPRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MCPSessionInvalidHTTPRequest.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err } - i-- - dAtA[i] = 0x42 - } - if len(m.GCPServiceAccount) > 0 { - i -= len(m.GCPServiceAccount) - copy(dAtA[i:], m.GCPServiceAccount) - i = encodeVarintEvents(dAtA, i, uint64(len(m.GCPServiceAccount))) - i-- - dAtA[i] = 0x3a - } - if len(m.AzureIdentity) > 0 { - i -= len(m.AzureIdentity) - copy(dAtA[i:], m.AzureIdentity) - i = encodeVarintEvents(dAtA, i, uint64(len(m.AzureIdentity))) - i-- - dAtA[i] = 0x32 + return b[:n], nil } - if len(m.AccessRequests) > 0 { - for iNdEx := len(m.AccessRequests) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.AccessRequests[iNdEx]) - copy(dAtA[i:], m.AccessRequests[iNdEx]) - i = encodeVarintEvents(dAtA, i, uint64(len(m.AccessRequests[iNdEx]))) - i-- - dAtA[i] = 0x2a +} +func (m *MCPSessionInvalidHTTPRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_MCPSessionInvalidHTTPRequest.Merge(m, src) +} +func (m *MCPSessionInvalidHTTPRequest) XXX_Size() int { + return m.Size() +} +func (m *MCPSessionInvalidHTTPRequest) XXX_DiscardUnknown() { + xxx_messageInfo_MCPSessionInvalidHTTPRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_MCPSessionInvalidHTTPRequest proto.InternalMessageInfo + +// BoundKeypairRecovery is emitted when a client performs a self recovery using +// a bound_keypair joining token. This event is also emitted upon first join. +type BoundKeypairRecovery struct { + // Metadata is a common event metadata. + Metadata `protobuf:"bytes,1,opt,name=Metadata,proto3,embedded=Metadata" json:""` + // Status contains common command or operation status fields. 
+ Status `protobuf:"bytes,2,opt,name=Status,proto3,embedded=Status" json:""` + // ConnectionMetadata holds information about the connection + ConnectionMetadata `protobuf:"bytes,3,opt,name=Connection,proto3,embedded=Connection" json:""` + // TokenName is the name of the provision token used to join. + TokenName string `protobuf:"bytes,4,opt,name=TokenName,proto3" json:"token_name"` + // BotName is the name of the bot attempting to join, if any. + BotName string `protobuf:"bytes,5,opt,name=BotName,proto3" json:"bot_name,omitempty"` + // PublicKey is the public key at the completion of the joining process, in + // SSH authorized_keys format. If a keypair rotation occurred, this is the + // keypair trusted at the end of the join process. + PublicKey string `protobuf:"bytes,6,opt,name=PublicKey,proto3" json:"public_key,omitempty"` + // RecoveryCount is the recovery counter value at the time of this recovery. + RecoveryCount uint32 `protobuf:"varint,7,opt,name=RecoveryCount,proto3" json:"recovery_count"` + // RecoveryMode is the bound keypair token's configured recovery mode. 
+ RecoveryMode string `protobuf:"bytes,8,opt,name=RecoveryMode,proto3" json:"recovery_mode"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *BoundKeypairRecovery) Reset() { *m = BoundKeypairRecovery{} } +func (m *BoundKeypairRecovery) String() string { return proto.CompactTextString(m) } +func (*BoundKeypairRecovery) ProtoMessage() {} +func (*BoundKeypairRecovery) Descriptor() ([]byte, []int) { + return fileDescriptor_007ba1c3d6266d56, []int{274} +} +func (m *BoundKeypairRecovery) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *BoundKeypairRecovery) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_BoundKeypairRecovery.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err } + return b[:n], nil } - if len(m.AWSRoleARN) > 0 { - i -= len(m.AWSRoleARN) - copy(dAtA[i:], m.AWSRoleARN) - i = encodeVarintEvents(dAtA, i, uint64(len(m.AWSRoleARN))) - i-- - dAtA[i] = 0x22 - } - if len(m.Impersonator) > 0 { - i -= len(m.Impersonator) - copy(dAtA[i:], m.Impersonator) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Impersonator))) - i-- - dAtA[i] = 0x1a - } - if len(m.Login) > 0 { - i -= len(m.Login) - copy(dAtA[i:], m.Login) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Login))) - i-- - dAtA[i] = 0x12 - } - if len(m.User) > 0 { - i -= len(m.User) - copy(dAtA[i:], m.User) - i = encodeVarintEvents(dAtA, i, uint64(len(m.User))) - i-- - dAtA[i] = 0xa - } - return len(dAtA) - i, nil +} +func (m *BoundKeypairRecovery) XXX_Merge(src proto.Message) { + xxx_messageInfo_BoundKeypairRecovery.Merge(m, src) +} +func (m *BoundKeypairRecovery) XXX_Size() int { + return m.Size() +} +func (m *BoundKeypairRecovery) XXX_DiscardUnknown() { + xxx_messageInfo_BoundKeypairRecovery.DiscardUnknown(m) } -func (m *ServerMetadata) Marshal() (dAtA []byte, err error) { - size 
:= m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err +var xxx_messageInfo_BoundKeypairRecovery proto.InternalMessageInfo + +// BoundKeypairRotation is emitted when a keypair rotation takes place. +type BoundKeypairRotation struct { + // Metadata is a common event metadata. + Metadata `protobuf:"bytes,1,opt,name=Metadata,proto3,embedded=Metadata" json:""` + // Status contains common command or operation status fields. + Status `protobuf:"bytes,2,opt,name=Status,proto3,embedded=Status" json:""` + // ConnectionMetadata holds information about the connection + ConnectionMetadata `protobuf:"bytes,3,opt,name=Connection,proto3,embedded=Connection" json:""` + // TokenName is the name of the provision token used to join. + TokenName string `protobuf:"bytes,4,opt,name=TokenName,proto3" json:"token_name"` + // BotName is the name of the bot attempting to join, if any. + BotName string `protobuf:"bytes,5,opt,name=BotName,proto3" json:"bot_name,omitempty"` + // PreviousPublicKey is the previous public key in SSH authorized_keys format. + // On first join using a registration secret, this value will be empty. + PreviousPublicKey string `protobuf:"bytes,6,opt,name=PreviousPublicKey,proto3" json:"previous_public_key,omitempty"` + // NewPublicKey is the new public key after rotation. If rotation fails, this + // value will be empty. 
+ NewPublicKey string `protobuf:"bytes,7,opt,name=NewPublicKey,proto3" json:"new_public_key,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *BoundKeypairRotation) Reset() { *m = BoundKeypairRotation{} } +func (m *BoundKeypairRotation) String() string { return proto.CompactTextString(m) } +func (*BoundKeypairRotation) ProtoMessage() {} +func (*BoundKeypairRotation) Descriptor() ([]byte, []int) { + return fileDescriptor_007ba1c3d6266d56, []int{275} +} +func (m *BoundKeypairRotation) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *BoundKeypairRotation) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_BoundKeypairRotation.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil } - return dAtA[:n], nil +} +func (m *BoundKeypairRotation) XXX_Merge(src proto.Message) { + xxx_messageInfo_BoundKeypairRotation.Merge(m, src) +} +func (m *BoundKeypairRotation) XXX_Size() int { + return m.Size() +} +func (m *BoundKeypairRotation) XXX_DiscardUnknown() { + xxx_messageInfo_BoundKeypairRotation.DiscardUnknown(m) } -func (m *ServerMetadata) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) +var xxx_messageInfo_BoundKeypairRotation proto.InternalMessageInfo + +// BoundKeypairJoinStateVerificationFailed is emitted when join state +// verification fails, potentially indicating a compromised keypair. +type BoundKeypairJoinStateVerificationFailed struct { + // Metadata is a common event metadata. + Metadata `protobuf:"bytes,1,opt,name=Metadata,proto3,embedded=Metadata" json:""` + // Status contains information about the failure. 
+ Status `protobuf:"bytes,2,opt,name=Status,proto3,embedded=Status" json:""` + // ConnectionMetadata holds information about the connection + ConnectionMetadata `protobuf:"bytes,3,opt,name=Connection,proto3,embedded=Connection" json:""` + // TokenName is the name of the provision token used to join. + TokenName string `protobuf:"bytes,4,opt,name=TokenName,proto3" json:"token_name"` + // BotName is the name of the bot attempting to join, if any. + BotName string `protobuf:"bytes,5,opt,name=BotName,proto3" json:"bot_name,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } -func (m *ServerMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if len(m.ServerVersion) > 0 { - i -= len(m.ServerVersion) - copy(dAtA[i:], m.ServerVersion) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ServerVersion))) - i-- - dAtA[i] = 0x42 - } - if len(m.ServerSubKind) > 0 { - i -= len(m.ServerSubKind) - copy(dAtA[i:], m.ServerSubKind) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ServerSubKind))) - i-- - dAtA[i] = 0x3a - } - if len(m.ForwardedBy) > 0 { - i -= len(m.ForwardedBy) - copy(dAtA[i:], m.ForwardedBy) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ForwardedBy))) - i-- - dAtA[i] = 0x32 - } - if len(m.ServerLabels) > 0 { - for k := range m.ServerLabels { - v := m.ServerLabels[k] - baseI := i - i -= len(v) - copy(dAtA[i:], v) - i = encodeVarintEvents(dAtA, i, uint64(len(v))) - i-- - dAtA[i] = 0x12 - i -= len(k) - copy(dAtA[i:], k) - i = encodeVarintEvents(dAtA, i, uint64(len(k))) - i-- - dAtA[i] = 0xa - i = encodeVarintEvents(dAtA, i, uint64(baseI-i)) - i-- - dAtA[i] = 0x2a +func (m *BoundKeypairJoinStateVerificationFailed) Reset() { + *m = BoundKeypairJoinStateVerificationFailed{} +} +func (m *BoundKeypairJoinStateVerificationFailed) String() string { 
return proto.CompactTextString(m) } +func (*BoundKeypairJoinStateVerificationFailed) ProtoMessage() {} +func (*BoundKeypairJoinStateVerificationFailed) Descriptor() ([]byte, []int) { + return fileDescriptor_007ba1c3d6266d56, []int{276} +} +func (m *BoundKeypairJoinStateVerificationFailed) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *BoundKeypairJoinStateVerificationFailed) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_BoundKeypairJoinStateVerificationFailed.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err } + return b[:n], nil } - if len(m.ServerAddr) > 0 { - i -= len(m.ServerAddr) - copy(dAtA[i:], m.ServerAddr) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ServerAddr))) - i-- - dAtA[i] = 0x22 - } - if len(m.ServerHostname) > 0 { - i -= len(m.ServerHostname) - copy(dAtA[i:], m.ServerHostname) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ServerHostname))) - i-- - dAtA[i] = 0x1a - } - if len(m.ServerID) > 0 { - i -= len(m.ServerID) - copy(dAtA[i:], m.ServerID) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ServerID))) - i-- - dAtA[i] = 0x12 - } - if len(m.ServerNamespace) > 0 { - i -= len(m.ServerNamespace) - copy(dAtA[i:], m.ServerNamespace) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ServerNamespace))) - i-- - dAtA[i] = 0xa - } - return len(dAtA) - i, nil } - -func (m *ConnectionMetadata) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err - } - return dAtA[:n], nil +func (m *BoundKeypairJoinStateVerificationFailed) XXX_Merge(src proto.Message) { + xxx_messageInfo_BoundKeypairJoinStateVerificationFailed.Merge(m, src) } - -func (m *ConnectionMetadata) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) +func (m 
*BoundKeypairJoinStateVerificationFailed) XXX_Size() int { + return m.Size() +} +func (m *BoundKeypairJoinStateVerificationFailed) XXX_DiscardUnknown() { + xxx_messageInfo_BoundKeypairJoinStateVerificationFailed.DiscardUnknown(m) } -func (m *ConnectionMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if len(m.Protocol) > 0 { - i -= len(m.Protocol) - copy(dAtA[i:], m.Protocol) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Protocol))) - i-- - dAtA[i] = 0x1a - } - if len(m.RemoteAddr) > 0 { - i -= len(m.RemoteAddr) - copy(dAtA[i:], m.RemoteAddr) - i = encodeVarintEvents(dAtA, i, uint64(len(m.RemoteAddr))) - i-- - dAtA[i] = 0x12 - } - if len(m.LocalAddr) > 0 { - i -= len(m.LocalAddr) - copy(dAtA[i:], m.LocalAddr) - i = encodeVarintEvents(dAtA, i, uint64(len(m.LocalAddr))) - i-- - dAtA[i] = 0xa - } - return len(dAtA) - i, nil +var xxx_messageInfo_BoundKeypairJoinStateVerificationFailed proto.InternalMessageInfo + +// SCIMRequest describes the SCIM http request that triggered an event +type SCIMRequest struct { + // ID is a Teleport-generated arbitrary unique ID for the SCIM request that + // triggered the enclosing event. It will be included in all structured log + // messages involved in the handling of this request. + ID string `protobuf:"bytes,1,opt,name=ID,proto3" json:"id,omitempty"` + // SourceAddress is the source IP address for the SCIM HTTP request that + // triggered the enclosing event. + SourceAddress string `protobuf:"bytes,2,opt,name=SourceAddress,proto3" json:"source_address,omitempty"` + // UserAgent is the user agent string from the HTTP request that triggered + // the enclosing event + UserAgent string `protobuf:"bytes,3,opt,name=UserAgent,proto3" json:"user_agent,omitempty"` + // Method is the HTTP method used by the SCIM HTTP request that triggered the + // enclosing event (e.g. 
GET, PUT, etc) + Method string `protobuf:"bytes,4,opt,name=Method,proto3" json:"method,omitempty"` + // Path is the fully-qualified path of the SCIM request that triggered the + // enclosing event. + Path string `protobuf:"bytes,5,opt,name=Path,proto3" json:"path,omitempty"` + // Body holds a representation of the arbitrary JSON body of the initial + // request. May be empty for list operations, hold a SCIM resource + // definition for creation and update operations, or a SCIM PATCH description + // for patch update requests. + Body *Struct `protobuf:"bytes,6,opt,name=Body,proto3,casttype=Struct" json:"body,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } -func (m *ClientMetadata) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err +func (m *SCIMRequest) Reset() { *m = SCIMRequest{} } +func (m *SCIMRequest) String() string { return proto.CompactTextString(m) } +func (*SCIMRequest) ProtoMessage() {} +func (*SCIMRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_007ba1c3d6266d56, []int{277} +} +func (m *SCIMRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *SCIMRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_SCIMRequest.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil } - return dAtA[:n], nil } - -func (m *ClientMetadata) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) +func (m *SCIMRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_SCIMRequest.Merge(m, src) +} +func (m *SCIMRequest) XXX_Size() int { + return m.Size() +} +func (m *SCIMRequest) XXX_DiscardUnknown() { + 
xxx_messageInfo_SCIMRequest.DiscardUnknown(m) } -func (m *ClientMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if len(m.UserAgent) > 0 { - i -= len(m.UserAgent) - copy(dAtA[i:], m.UserAgent) - i = encodeVarintEvents(dAtA, i, uint64(len(m.UserAgent))) - i-- - dAtA[i] = 0xa - } - return len(dAtA) - i, nil +var xxx_messageInfo_SCIMRequest proto.InternalMessageInfo + +type SCIMCommonData struct { + // Request holds metadata about the original SCIM request that triggered this + // event. + Request *SCIMRequest `protobuf:"bytes,3,opt,name=Request,proto3" json:"request,omitempty"` + // Integration is the name of the integration/access plugin that the SCIM + // service was operating on for this event. + Integration string `protobuf:"bytes,4,opt,name=Integration,proto3" json:"integration,omitempty"` + // ResourceType is the SCIM resource type of the Teleport resource involved in + // this event. Valid values include, but are not limited to, "user" and + // "group". 
+ ResourceType string `protobuf:"bytes,5,opt,name=ResourceType,proto3" json:"resource_type,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } -func (m *KubernetesClusterMetadata) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err +func (m *SCIMCommonData) Reset() { *m = SCIMCommonData{} } +func (m *SCIMCommonData) String() string { return proto.CompactTextString(m) } +func (*SCIMCommonData) ProtoMessage() {} +func (*SCIMCommonData) Descriptor() ([]byte, []int) { + return fileDescriptor_007ba1c3d6266d56, []int{278} +} +func (m *SCIMCommonData) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *SCIMCommonData) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_SCIMCommonData.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil } - return dAtA[:n], nil +} +func (m *SCIMCommonData) XXX_Merge(src proto.Message) { + xxx_messageInfo_SCIMCommonData.Merge(m, src) +} +func (m *SCIMCommonData) XXX_Size() int { + return m.Size() +} +func (m *SCIMCommonData) XXX_DiscardUnknown() { + xxx_messageInfo_SCIMCommonData.DiscardUnknown(m) } -func (m *KubernetesClusterMetadata) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) +var xxx_messageInfo_SCIMCommonData proto.InternalMessageInfo + +// SCIMListingEvent records an attempt to list SCIM resources +type SCIMListingEvent struct { + // Metadata is a common event metadata. + Metadata `protobuf:"bytes,1,opt,name=Metadata,proto3,embedded=Metadata" json:""` + // Status contains common command or operation status fields. 
+ Status `protobuf:"bytes,2,opt,name=Status,proto3,embedded=Status" json:""` + // Common holds values common to SCIM listing and resource events + SCIMCommonData `protobuf:"bytes,3,opt,name=Common,proto3,embedded=Common" json:""` + // Filter is the listing filter function supplied by the client, if any. + Filter string `protobuf:"bytes,4,opt,name=Filter,proto3" json:"filter,omitempty"` + // Count is the requested page size from the client. A zero value indicates that + // the client did not request a specific page size. + Count int32 `protobuf:"varint,5,opt,name=Count,proto3" json:"count,omitempty"` + // Start is the 1-based start index requested by the client, if any. A zero + // value indicates that the client did not request a specific starting index + // for the page. + StartIndex int32 `protobuf:"varint,6,opt,name=StartIndex,proto3" json:"start_index,omitempty"` + // ResourceCount is the number of resources returned in response to this + // request. + ResourceCount uint32 `protobuf:"varint,7,opt,name=ResourceCount,proto3" json:"resource_count,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } -func (m *KubernetesClusterMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if len(m.KubernetesLabels) > 0 { - for k := range m.KubernetesLabels { - v := m.KubernetesLabels[k] - baseI := i - i -= len(v) - copy(dAtA[i:], v) - i = encodeVarintEvents(dAtA, i, uint64(len(v))) - i-- - dAtA[i] = 0x12 - i -= len(k) - copy(dAtA[i:], k) - i = encodeVarintEvents(dAtA, i, uint64(len(k))) - i-- - dAtA[i] = 0xa - i = encodeVarintEvents(dAtA, i, uint64(baseI-i)) - i-- - dAtA[i] = 0x22 - } - } - if len(m.KubernetesGroups) > 0 { - for iNdEx := len(m.KubernetesGroups) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.KubernetesGroups[iNdEx]) - copy(dAtA[i:], 
m.KubernetesGroups[iNdEx]) - i = encodeVarintEvents(dAtA, i, uint64(len(m.KubernetesGroups[iNdEx]))) - i-- - dAtA[i] = 0x1a - } - } - if len(m.KubernetesUsers) > 0 { - for iNdEx := len(m.KubernetesUsers) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.KubernetesUsers[iNdEx]) - copy(dAtA[i:], m.KubernetesUsers[iNdEx]) - i = encodeVarintEvents(dAtA, i, uint64(len(m.KubernetesUsers[iNdEx]))) - i-- - dAtA[i] = 0x12 +func (m *SCIMListingEvent) Reset() { *m = SCIMListingEvent{} } +func (m *SCIMListingEvent) String() string { return proto.CompactTextString(m) } +func (*SCIMListingEvent) ProtoMessage() {} +func (*SCIMListingEvent) Descriptor() ([]byte, []int) { + return fileDescriptor_007ba1c3d6266d56, []int{279} +} +func (m *SCIMListingEvent) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *SCIMListingEvent) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_SCIMListingEvent.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err } + return b[:n], nil } - if len(m.KubernetesCluster) > 0 { - i -= len(m.KubernetesCluster) - copy(dAtA[i:], m.KubernetesCluster) - i = encodeVarintEvents(dAtA, i, uint64(len(m.KubernetesCluster))) - i-- - dAtA[i] = 0xa - } - return len(dAtA) - i, nil } - -func (m *KubernetesPodMetadata) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err - } - return dAtA[:n], nil +func (m *SCIMListingEvent) XXX_Merge(src proto.Message) { + xxx_messageInfo_SCIMListingEvent.Merge(m, src) } - -func (m *KubernetesPodMetadata) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) +func (m *SCIMListingEvent) XXX_Size() int { + return m.Size() +} +func (m *SCIMListingEvent) XXX_DiscardUnknown() { + xxx_messageInfo_SCIMListingEvent.DiscardUnknown(m) } -func (m 
*KubernetesPodMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if len(m.KubernetesNodeName) > 0 { - i -= len(m.KubernetesNodeName) - copy(dAtA[i:], m.KubernetesNodeName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.KubernetesNodeName))) - i-- - dAtA[i] = 0x2a - } - if len(m.KubernetesContainerImage) > 0 { - i -= len(m.KubernetesContainerImage) - copy(dAtA[i:], m.KubernetesContainerImage) - i = encodeVarintEvents(dAtA, i, uint64(len(m.KubernetesContainerImage))) - i-- - dAtA[i] = 0x22 - } - if len(m.KubernetesContainerName) > 0 { - i -= len(m.KubernetesContainerName) - copy(dAtA[i:], m.KubernetesContainerName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.KubernetesContainerName))) - i-- - dAtA[i] = 0x1a - } - if len(m.KubernetesPodNamespace) > 0 { - i -= len(m.KubernetesPodNamespace) - copy(dAtA[i:], m.KubernetesPodNamespace) - i = encodeVarintEvents(dAtA, i, uint64(len(m.KubernetesPodNamespace))) - i-- - dAtA[i] = 0x12 - } - if len(m.KubernetesPodName) > 0 { - i -= len(m.KubernetesPodName) - copy(dAtA[i:], m.KubernetesPodName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.KubernetesPodName))) - i-- - dAtA[i] = 0xa - } - return len(dAtA) - i, nil +var xxx_messageInfo_SCIMListingEvent proto.InternalMessageInfo + +// SCIMResourceEvent records an attempted operation on a specific SCIM resource +type SCIMResourceEvent struct { + // Metadata is a common event metadata. + Metadata `protobuf:"bytes,1,opt,name=Metadata,proto3,embedded=Metadata" json:""` + // Status contains common command or operation status fields. 
+ Status `protobuf:"bytes,2,opt,name=Status,proto3,embedded=Status" json:""` + // Common holds values common to SCIM listing and resource events + SCIMCommonData `protobuf:"bytes,3,opt,name=Common,proto3,embedded=Common" json:""` + // TeleportID is the name of the Teleport resource involved in this event. + TeleportID string `protobuf:"bytes,4,opt,name=TeleportID,proto3" json:"teleport_id,omitempty"` + // ExternalID is the ID used by the external SCIM client to refer to the + resource affected by this event. + ExternalID string `protobuf:"bytes,5,opt,name=ExternalID,proto3" json:"external_id,omitempty"` + // Display is a human-readable name or identifier for the Teleport resource + // affected by the event. + Display string `protobuf:"bytes,6,opt,name=Display,proto3" json:"display,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } -func (m *SAMLIdPServiceProviderMetadata) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err +func (m *SCIMResourceEvent) Reset() { *m = SCIMResourceEvent{} } +func (m *SCIMResourceEvent) String() string { return proto.CompactTextString(m) } +func (*SCIMResourceEvent) ProtoMessage() {} +func (*SCIMResourceEvent) Descriptor() ([]byte, []int) { + return fileDescriptor_007ba1c3d6266d56, []int{280} +} +func (m *SCIMResourceEvent) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *SCIMResourceEvent) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_SCIMResourceEvent.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil } - return dAtA[:n], nil +} +func (m *SCIMResourceEvent) XXX_Merge(src proto.Message) { + xxx_messageInfo_SCIMResourceEvent.Merge(m, src) +} +func (m 
*SCIMResourceEvent) XXX_Size() int { + return m.Size() +} +func (m *SCIMResourceEvent) XXX_DiscardUnknown() { + xxx_messageInfo_SCIMResourceEvent.DiscardUnknown(m) } -func (m *SAMLIdPServiceProviderMetadata) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) +var xxx_messageInfo_SCIMResourceEvent proto.InternalMessageInfo + +// ClientIPRestrictionsUpdate records a Client IP Restrictions update. +type ClientIPRestrictionsUpdate struct { + // Metadata is a common event metadata. + Metadata `protobuf:"bytes,1,opt,name=Metadata,proto3,embedded=Metadata" json:""` + // UserMetadata is a common user event metadata. + UserMetadata `protobuf:"bytes,2,opt,name=User,proto3,embedded=User" json:""` + // ConnectionMetadata holds information about the connection. + ConnectionMetadata `protobuf:"bytes,3,opt,name=Connection,proto3,embedded=Connection" json:""` + // ResourceMetadata is a common resource event metadata. + ResourceMetadata `protobuf:"bytes,4,opt,name=Resource,proto3,embedded=Resource" json:""` + // Status indicates whether the operation was successful. + Status `protobuf:"bytes,5,opt,name=Status,proto3,embedded=Status" json:""` + // ClientIPRestrictions is the new Client IP Restrictions allowlist. 
+ ClientIPRestrictions []string `protobuf:"bytes,6,rep,name=ClientIPRestrictions,proto3" json:"client_ip_restrictions"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } -func (m *SAMLIdPServiceProviderMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if len(m.AttributeMapping) > 0 { - for k := range m.AttributeMapping { - v := m.AttributeMapping[k] - baseI := i - i -= len(v) - copy(dAtA[i:], v) - i = encodeVarintEvents(dAtA, i, uint64(len(v))) - i-- - dAtA[i] = 0x12 - i -= len(k) - copy(dAtA[i:], k) - i = encodeVarintEvents(dAtA, i, uint64(len(k))) - i-- - dAtA[i] = 0xa - i = encodeVarintEvents(dAtA, i, uint64(baseI-i)) - i-- - dAtA[i] = 0x1a +func (m *ClientIPRestrictionsUpdate) Reset() { *m = ClientIPRestrictionsUpdate{} } +func (m *ClientIPRestrictionsUpdate) String() string { return proto.CompactTextString(m) } +func (*ClientIPRestrictionsUpdate) ProtoMessage() {} +func (*ClientIPRestrictionsUpdate) Descriptor() ([]byte, []int) { + return fileDescriptor_007ba1c3d6266d56, []int{281} +} +func (m *ClientIPRestrictionsUpdate) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ClientIPRestrictionsUpdate) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ClientIPRestrictionsUpdate.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err } + return b[:n], nil } - if len(m.ServiceProviderShortcut) > 0 { - i -= len(m.ServiceProviderShortcut) - copy(dAtA[i:], m.ServiceProviderShortcut) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ServiceProviderShortcut))) - i-- - dAtA[i] = 0x12 - } - if len(m.ServiceProviderEntityID) > 0 { - i -= len(m.ServiceProviderEntityID) - copy(dAtA[i:], 
m.ServiceProviderEntityID) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ServiceProviderEntityID))) - i-- - dAtA[i] = 0xa - } - return len(dAtA) - i, nil +} +func (m *ClientIPRestrictionsUpdate) XXX_Merge(src proto.Message) { + xxx_messageInfo_ClientIPRestrictionsUpdate.Merge(m, src) +} +func (m *ClientIPRestrictionsUpdate) XXX_Size() int { + return m.Size() +} +func (m *ClientIPRestrictionsUpdate) XXX_DiscardUnknown() { + xxx_messageInfo_ClientIPRestrictionsUpdate.DiscardUnknown(m) } -func (m *OktaResourcesUpdatedMetadata) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err +var xxx_messageInfo_ClientIPRestrictionsUpdate proto.InternalMessageInfo + +// VnetConfigCreate is emitted when a VnetConfig is created. +type VnetConfigCreate struct { + // Metadata is a common event metadata. + Metadata `protobuf:"bytes,1,opt,name=metadata,proto3,embedded=metadata" json:""` + // Status indicates whether the creation was successful. + Status `protobuf:"bytes,2,opt,name=status,proto3,embedded=status" json:""` + // User is a common user event metadata. + UserMetadata `protobuf:"bytes,3,opt,name=user,proto3,embedded=user" json:""` + // Connection holds information about the connection. 
+ ConnectionMetadata `protobuf:"bytes,4,opt,name=connection,proto3,embedded=connection" json:""` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *VnetConfigCreate) Reset() { *m = VnetConfigCreate{} } +func (m *VnetConfigCreate) String() string { return proto.CompactTextString(m) } +func (*VnetConfigCreate) ProtoMessage() {} +func (*VnetConfigCreate) Descriptor() ([]byte, []int) { + return fileDescriptor_007ba1c3d6266d56, []int{282} +} +func (m *VnetConfigCreate) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *VnetConfigCreate) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_VnetConfigCreate.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil } - return dAtA[:n], nil +} +func (m *VnetConfigCreate) XXX_Merge(src proto.Message) { + xxx_messageInfo_VnetConfigCreate.Merge(m, src) +} +func (m *VnetConfigCreate) XXX_Size() int { + return m.Size() +} +func (m *VnetConfigCreate) XXX_DiscardUnknown() { + xxx_messageInfo_VnetConfigCreate.DiscardUnknown(m) } -func (m *OktaResourcesUpdatedMetadata) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) +var xxx_messageInfo_VnetConfigCreate proto.InternalMessageInfo + +// VnetConfigUpdate is emitted when a VnetConfig is updated. +type VnetConfigUpdate struct { + // Metadata is a common event metadata. + Metadata `protobuf:"bytes,1,opt,name=metadata,proto3,embedded=metadata" json:""` + // Status indicates whether the update was successful. + Status `protobuf:"bytes,2,opt,name=status,proto3,embedded=status" json:""` + // User is a common user event metadata. + UserMetadata `protobuf:"bytes,3,opt,name=user,proto3,embedded=user" json:""` + // Connection holds information about the connection. 
+ ConnectionMetadata `protobuf:"bytes,4,opt,name=connection,proto3,embedded=connection" json:""` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } -func (m *OktaResourcesUpdatedMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if len(m.DeletedResources) > 0 { - for iNdEx := len(m.DeletedResources) - 1; iNdEx >= 0; iNdEx-- { - { - size, err := m.DeletedResources[iNdEx].MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x32 +func (m *VnetConfigUpdate) Reset() { *m = VnetConfigUpdate{} } +func (m *VnetConfigUpdate) String() string { return proto.CompactTextString(m) } +func (*VnetConfigUpdate) ProtoMessage() {} +func (*VnetConfigUpdate) Descriptor() ([]byte, []int) { + return fileDescriptor_007ba1c3d6266d56, []int{283} +} +func (m *VnetConfigUpdate) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *VnetConfigUpdate) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_VnetConfigUpdate.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err } + return b[:n], nil } - if len(m.UpdatedResources) > 0 { - for iNdEx := len(m.UpdatedResources) - 1; iNdEx >= 0; iNdEx-- { - { - size, err := m.UpdatedResources[iNdEx].MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x2a - } - } - if len(m.AddedResources) > 0 { - for iNdEx := len(m.AddedResources) - 1; iNdEx >= 0; iNdEx-- { - { - size, err := m.AddedResources[iNdEx].MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = 
encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x22 +} +func (m *VnetConfigUpdate) XXX_Merge(src proto.Message) { + xxx_messageInfo_VnetConfigUpdate.Merge(m, src) +} +func (m *VnetConfigUpdate) XXX_Size() int { + return m.Size() +} +func (m *VnetConfigUpdate) XXX_DiscardUnknown() { + xxx_messageInfo_VnetConfigUpdate.DiscardUnknown(m) +} + +var xxx_messageInfo_VnetConfigUpdate proto.InternalMessageInfo + +// VnetConfigDelete is emitted when a VnetConfig is deleted. +type VnetConfigDelete struct { + // Metadata is a common event metadata. + Metadata `protobuf:"bytes,1,opt,name=metadata,proto3,embedded=metadata" json:""` + // Status indicates whether the deletion was successful. + Status `protobuf:"bytes,2,opt,name=status,proto3,embedded=status" json:""` + // User is a common user event metadata. + UserMetadata `protobuf:"bytes,3,opt,name=user,proto3,embedded=user" json:""` + // Connection holds information about the connection. + ConnectionMetadata `protobuf:"bytes,4,opt,name=connection,proto3,embedded=connection" json:""` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *VnetConfigDelete) Reset() { *m = VnetConfigDelete{} } +func (m *VnetConfigDelete) String() string { return proto.CompactTextString(m) } +func (*VnetConfigDelete) ProtoMessage() {} +func (*VnetConfigDelete) Descriptor() ([]byte, []int) { + return fileDescriptor_007ba1c3d6266d56, []int{284} +} +func (m *VnetConfigDelete) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *VnetConfigDelete) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_VnetConfigDelete.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err } + return b[:n], nil } - if m.Deleted != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.Deleted)) - i-- - dAtA[i] = 0x18 - } - if m.Updated != 0 { - i = 
encodeVarintEvents(dAtA, i, uint64(m.Updated)) - i-- - dAtA[i] = 0x10 - } - if m.Added != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.Added)) - i-- - dAtA[i] = 0x8 - } - return len(dAtA) - i, nil +} +func (m *VnetConfigDelete) XXX_Merge(src proto.Message) { + xxx_messageInfo_VnetConfigDelete.Merge(m, src) +} +func (m *VnetConfigDelete) XXX_Size() int { + return m.Size() +} +func (m *VnetConfigDelete) XXX_DiscardUnknown() { + xxx_messageInfo_VnetConfigDelete.DiscardUnknown(m) } -func (m *OktaResource) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err - } - return dAtA[:n], nil +var xxx_messageInfo_VnetConfigDelete proto.InternalMessageInfo + +func init() { + proto.RegisterEnum("events.UserKind", UserKind_name, UserKind_value) + proto.RegisterEnum("events.UserOrigin", UserOrigin_name, UserOrigin_value) + proto.RegisterEnum("events.EventAction", EventAction_name, EventAction_value) + proto.RegisterEnum("events.SFTPAction", SFTPAction_name, SFTPAction_value) + proto.RegisterEnum("events.OSType", OSType_name, OSType_value) + proto.RegisterEnum("events.DeviceOrigin", DeviceOrigin_name, DeviceOrigin_value) + proto.RegisterEnum("events.ElasticsearchCategory", ElasticsearchCategory_name, ElasticsearchCategory_value) + proto.RegisterEnum("events.OpenSearchCategory", OpenSearchCategory_name, OpenSearchCategory_value) + proto.RegisterEnum("events.AdminActionsMFAStatus", AdminActionsMFAStatus_name, AdminActionsMFAStatus_value) + proto.RegisterEnum("events.ContactType", ContactType_name, ContactType_value) + proto.RegisterEnum("events.SessionNetwork_NetworkOperation", SessionNetwork_NetworkOperation_name, SessionNetwork_NetworkOperation_value) + proto.RegisterType((*Metadata)(nil), "events.Metadata") + proto.RegisterType((*SessionMetadata)(nil), "events.SessionMetadata") + proto.RegisterType((*UserMetadata)(nil), "events.UserMetadata") + 
proto.RegisterType((*ServerMetadata)(nil), "events.ServerMetadata") + proto.RegisterMapType((map[string]string)(nil), "events.ServerMetadata.ServerLabelsEntry") + proto.RegisterType((*ConnectionMetadata)(nil), "events.ConnectionMetadata") + proto.RegisterType((*ClientMetadata)(nil), "events.ClientMetadata") + proto.RegisterType((*KubernetesClusterMetadata)(nil), "events.KubernetesClusterMetadata") + proto.RegisterMapType((map[string]string)(nil), "events.KubernetesClusterMetadata.KubernetesLabelsEntry") + proto.RegisterType((*KubernetesPodMetadata)(nil), "events.KubernetesPodMetadata") + proto.RegisterType((*SAMLIdPServiceProviderMetadata)(nil), "events.SAMLIdPServiceProviderMetadata") + proto.RegisterMapType((map[string]string)(nil), "events.SAMLIdPServiceProviderMetadata.AttributeMappingEntry") + proto.RegisterType((*OktaResourcesUpdatedMetadata)(nil), "events.OktaResourcesUpdatedMetadata") + proto.RegisterType((*OktaResource)(nil), "events.OktaResource") + proto.RegisterType((*OktaAssignmentMetadata)(nil), "events.OktaAssignmentMetadata") + proto.RegisterType((*AccessListMemberMetadata)(nil), "events.AccessListMemberMetadata") + proto.RegisterType((*AccessListMember)(nil), "events.AccessListMember") + proto.RegisterType((*AccessListReviewMembershipRequirementsChanged)(nil), "events.AccessListReviewMembershipRequirementsChanged") + proto.RegisterMapType((map[string]string)(nil), "events.AccessListReviewMembershipRequirementsChanged.TraitsEntry") + proto.RegisterType((*AccessListReviewMetadata)(nil), "events.AccessListReviewMetadata") + proto.RegisterType((*LockMetadata)(nil), "events.LockMetadata") + proto.RegisterType((*SessionStart)(nil), "events.SessionStart") + proto.RegisterType((*SessionJoin)(nil), "events.SessionJoin") + proto.RegisterType((*SessionPrint)(nil), "events.SessionPrint") + proto.RegisterType((*DesktopRecording)(nil), "events.DesktopRecording") + proto.RegisterType((*DesktopClipboardReceive)(nil), "events.DesktopClipboardReceive") + 
proto.RegisterType((*DesktopClipboardSend)(nil), "events.DesktopClipboardSend") + proto.RegisterType((*DesktopSharedDirectoryStart)(nil), "events.DesktopSharedDirectoryStart") + proto.RegisterType((*DesktopSharedDirectoryRead)(nil), "events.DesktopSharedDirectoryRead") + proto.RegisterType((*DesktopSharedDirectoryWrite)(nil), "events.DesktopSharedDirectoryWrite") + proto.RegisterType((*SessionReject)(nil), "events.SessionReject") + proto.RegisterType((*SessionConnect)(nil), "events.SessionConnect") + proto.RegisterType((*FileTransferRequestEvent)(nil), "events.FileTransferRequestEvent") + proto.RegisterType((*Resize)(nil), "events.Resize") + proto.RegisterType((*SessionEnd)(nil), "events.SessionEnd") + proto.RegisterType((*BPFMetadata)(nil), "events.BPFMetadata") + proto.RegisterType((*Status)(nil), "events.Status") + proto.RegisterType((*SessionCommand)(nil), "events.SessionCommand") + proto.RegisterType((*SessionDisk)(nil), "events.SessionDisk") + proto.RegisterType((*SessionNetwork)(nil), "events.SessionNetwork") + proto.RegisterType((*SessionData)(nil), "events.SessionData") + proto.RegisterType((*SessionLeave)(nil), "events.SessionLeave") + proto.RegisterType((*UserLogin)(nil), "events.UserLogin") + proto.RegisterType((*CreateMFAAuthChallenge)(nil), "events.CreateMFAAuthChallenge") + proto.RegisterType((*ValidateMFAAuthResponse)(nil), "events.ValidateMFAAuthResponse") + proto.RegisterType((*ResourceMetadata)(nil), "events.ResourceMetadata") + proto.RegisterType((*UserCreate)(nil), "events.UserCreate") + proto.RegisterType((*UserUpdate)(nil), "events.UserUpdate") + proto.RegisterType((*UserDelete)(nil), "events.UserDelete") + proto.RegisterType((*UserPasswordChange)(nil), "events.UserPasswordChange") + proto.RegisterType((*AccessRequestCreate)(nil), "events.AccessRequestCreate") + proto.RegisterType((*AccessRequestExpire)(nil), "events.AccessRequestExpire") + proto.RegisterType((*ResourceID)(nil), "events.ResourceID") + 
proto.RegisterType((*AccessRequestDelete)(nil), "events.AccessRequestDelete") + proto.RegisterType((*PortForward)(nil), "events.PortForward") + proto.RegisterType((*X11Forward)(nil), "events.X11Forward") + proto.RegisterType((*CommandMetadata)(nil), "events.CommandMetadata") + proto.RegisterType((*Exec)(nil), "events.Exec") + proto.RegisterType((*SCP)(nil), "events.SCP") + proto.RegisterType((*SFTPAttributes)(nil), "events.SFTPAttributes") + proto.RegisterType((*SFTP)(nil), "events.SFTP") + proto.RegisterType((*SFTPSummary)(nil), "events.SFTPSummary") + proto.RegisterType((*FileTransferStat)(nil), "events.FileTransferStat") + proto.RegisterType((*Subsystem)(nil), "events.Subsystem") + proto.RegisterType((*ClientDisconnect)(nil), "events.ClientDisconnect") + proto.RegisterType((*AuthAttempt)(nil), "events.AuthAttempt") + proto.RegisterType((*UserTokenCreate)(nil), "events.UserTokenCreate") + proto.RegisterType((*RoleCreate)(nil), "events.RoleCreate") + proto.RegisterType((*RoleUpdate)(nil), "events.RoleUpdate") + proto.RegisterType((*RoleDelete)(nil), "events.RoleDelete") + proto.RegisterType((*BotCreate)(nil), "events.BotCreate") + proto.RegisterType((*BotUpdate)(nil), "events.BotUpdate") + proto.RegisterType((*BotDelete)(nil), "events.BotDelete") + proto.RegisterType((*TrustedClusterCreate)(nil), "events.TrustedClusterCreate") + proto.RegisterType((*TrustedClusterDelete)(nil), "events.TrustedClusterDelete") + proto.RegisterType((*ProvisionTokenCreate)(nil), "events.ProvisionTokenCreate") + proto.RegisterType((*TrustedClusterTokenCreate)(nil), "events.TrustedClusterTokenCreate") + proto.RegisterType((*GithubConnectorCreate)(nil), "events.GithubConnectorCreate") + proto.RegisterType((*GithubConnectorUpdate)(nil), "events.GithubConnectorUpdate") + proto.RegisterType((*GithubConnectorDelete)(nil), "events.GithubConnectorDelete") + proto.RegisterType((*OIDCConnectorCreate)(nil), "events.OIDCConnectorCreate") + proto.RegisterType((*OIDCConnectorUpdate)(nil), 
"events.OIDCConnectorUpdate") + proto.RegisterType((*OIDCConnectorDelete)(nil), "events.OIDCConnectorDelete") + proto.RegisterType((*SAMLConnectorCreate)(nil), "events.SAMLConnectorCreate") + proto.RegisterType((*SAMLConnectorUpdate)(nil), "events.SAMLConnectorUpdate") + proto.RegisterType((*SAMLConnectorDelete)(nil), "events.SAMLConnectorDelete") + proto.RegisterType((*KubeRequest)(nil), "events.KubeRequest") + proto.RegisterType((*AppMetadata)(nil), "events.AppMetadata") + proto.RegisterMapType((map[string]string)(nil), "events.AppMetadata.AppLabelsEntry") + proto.RegisterType((*AppCreate)(nil), "events.AppCreate") + proto.RegisterType((*AppUpdate)(nil), "events.AppUpdate") + proto.RegisterType((*AppDelete)(nil), "events.AppDelete") + proto.RegisterType((*AppSessionStart)(nil), "events.AppSessionStart") + proto.RegisterType((*AppSessionEnd)(nil), "events.AppSessionEnd") + proto.RegisterType((*AppSessionChunk)(nil), "events.AppSessionChunk") + proto.RegisterType((*AppSessionRequest)(nil), "events.AppSessionRequest") + proto.RegisterType((*AWSRequestMetadata)(nil), "events.AWSRequestMetadata") + proto.RegisterType((*DatabaseMetadata)(nil), "events.DatabaseMetadata") + proto.RegisterMapType((map[string]string)(nil), "events.DatabaseMetadata.DatabaseLabelsEntry") + proto.RegisterType((*DatabaseCreate)(nil), "events.DatabaseCreate") + proto.RegisterType((*DatabaseUpdate)(nil), "events.DatabaseUpdate") + proto.RegisterType((*DatabaseDelete)(nil), "events.DatabaseDelete") + proto.RegisterType((*DatabaseSessionStart)(nil), "events.DatabaseSessionStart") + proto.RegisterType((*DatabaseSessionQuery)(nil), "events.DatabaseSessionQuery") + proto.RegisterType((*DatabaseSessionCommandResult)(nil), "events.DatabaseSessionCommandResult") + proto.RegisterType((*DatabasePermissionUpdate)(nil), "events.DatabasePermissionUpdate") + proto.RegisterMapType((map[string]int32)(nil), "events.DatabasePermissionUpdate.AffectedObjectCountsEntry") + 
proto.RegisterType((*DatabasePermissionEntry)(nil), "events.DatabasePermissionEntry") + proto.RegisterMapType((map[string]int32)(nil), "events.DatabasePermissionEntry.CountsEntry") + proto.RegisterType((*DatabaseUserCreate)(nil), "events.DatabaseUserCreate") + proto.RegisterType((*DatabaseUserDeactivate)(nil), "events.DatabaseUserDeactivate") + proto.RegisterType((*PostgresParse)(nil), "events.PostgresParse") + proto.RegisterType((*PostgresBind)(nil), "events.PostgresBind") + proto.RegisterType((*PostgresExecute)(nil), "events.PostgresExecute") + proto.RegisterType((*PostgresClose)(nil), "events.PostgresClose") + proto.RegisterType((*PostgresFunctionCall)(nil), "events.PostgresFunctionCall") + proto.RegisterType((*WindowsCertificateMetadata)(nil), "events.WindowsCertificateMetadata") + proto.RegisterType((*WindowsDesktopSessionStart)(nil), "events.WindowsDesktopSessionStart") + proto.RegisterMapType((map[string]string)(nil), "events.WindowsDesktopSessionStart.DesktopLabelsEntry") + proto.RegisterType((*DatabaseSessionEnd)(nil), "events.DatabaseSessionEnd") + proto.RegisterType((*MFADeviceMetadata)(nil), "events.MFADeviceMetadata") + proto.RegisterType((*MFADeviceAdd)(nil), "events.MFADeviceAdd") + proto.RegisterType((*MFADeviceDelete)(nil), "events.MFADeviceDelete") + proto.RegisterType((*BillingInformationUpdate)(nil), "events.BillingInformationUpdate") + proto.RegisterType((*BillingCardCreate)(nil), "events.BillingCardCreate") + proto.RegisterType((*BillingCardDelete)(nil), "events.BillingCardDelete") + proto.RegisterType((*LockCreate)(nil), "events.LockCreate") + proto.RegisterType((*LockDelete)(nil), "events.LockDelete") + proto.RegisterType((*RecoveryCodeGenerate)(nil), "events.RecoveryCodeGenerate") + proto.RegisterType((*RecoveryCodeUsed)(nil), "events.RecoveryCodeUsed") + proto.RegisterType((*WindowsDesktopSessionEnd)(nil), "events.WindowsDesktopSessionEnd") + proto.RegisterMapType((map[string]string)(nil), 
"events.WindowsDesktopSessionEnd.DesktopLabelsEntry") + proto.RegisterType((*CertificateCreate)(nil), "events.CertificateCreate") + proto.RegisterType((*CertificateAuthority)(nil), "events.CertificateAuthority") + proto.RegisterType((*RenewableCertificateGenerationMismatch)(nil), "events.RenewableCertificateGenerationMismatch") + proto.RegisterType((*BotJoin)(nil), "events.BotJoin") + proto.RegisterType((*InstanceJoin)(nil), "events.InstanceJoin") + proto.RegisterType((*Unknown)(nil), "events.Unknown") + proto.RegisterType((*DeviceMetadata)(nil), "events.DeviceMetadata") + proto.RegisterType((*DeviceEvent)(nil), "events.DeviceEvent") + proto.RegisterType((*DeviceEvent2)(nil), "events.DeviceEvent2") + proto.RegisterType((*DiscoveryConfigCreate)(nil), "events.DiscoveryConfigCreate") + proto.RegisterType((*DiscoveryConfigUpdate)(nil), "events.DiscoveryConfigUpdate") + proto.RegisterType((*DiscoveryConfigDelete)(nil), "events.DiscoveryConfigDelete") + proto.RegisterType((*DiscoveryConfigDeleteAll)(nil), "events.DiscoveryConfigDeleteAll") + proto.RegisterType((*IntegrationCreate)(nil), "events.IntegrationCreate") + proto.RegisterType((*IntegrationUpdate)(nil), "events.IntegrationUpdate") + proto.RegisterType((*IntegrationDelete)(nil), "events.IntegrationDelete") + proto.RegisterType((*IntegrationMetadata)(nil), "events.IntegrationMetadata") + proto.RegisterType((*AWSOIDCIntegrationMetadata)(nil), "events.AWSOIDCIntegrationMetadata") + proto.RegisterType((*AzureOIDCIntegrationMetadata)(nil), "events.AzureOIDCIntegrationMetadata") + proto.RegisterType((*GitHubIntegrationMetadata)(nil), "events.GitHubIntegrationMetadata") + proto.RegisterType((*AWSRAIntegrationMetadata)(nil), "events.AWSRAIntegrationMetadata") + proto.RegisterType((*PluginCreate)(nil), "events.PluginCreate") + proto.RegisterType((*PluginUpdate)(nil), "events.PluginUpdate") + proto.RegisterType((*PluginDelete)(nil), "events.PluginDelete") + proto.RegisterType((*PluginMetadata)(nil), "events.PluginMetadata") 
+ proto.RegisterType((*OneOf)(nil), "events.OneOf") + proto.RegisterType((*StreamStatus)(nil), "events.StreamStatus") + proto.RegisterType((*SessionUpload)(nil), "events.SessionUpload") + proto.RegisterType((*Identity)(nil), "events.Identity") + proto.RegisterType((*ScopePin)(nil), "events.ScopePin") + proto.RegisterMapType((map[string]*ScopePinnedAssignments)(nil), "events.ScopePin.AssignmentsEntry") + proto.RegisterType((*ScopePinnedAssignments)(nil), "events.ScopePinnedAssignments") + proto.RegisterType((*RouteToApp)(nil), "events.RouteToApp") + proto.RegisterType((*RouteToDatabase)(nil), "events.RouteToDatabase") + proto.RegisterType((*DeviceExtensions)(nil), "events.DeviceExtensions") + proto.RegisterType((*AccessRequestResourceSearch)(nil), "events.AccessRequestResourceSearch") + proto.RegisterMapType((map[string]string)(nil), "events.AccessRequestResourceSearch.LabelsEntry") + proto.RegisterType((*MySQLStatementPrepare)(nil), "events.MySQLStatementPrepare") + proto.RegisterType((*MySQLStatementExecute)(nil), "events.MySQLStatementExecute") + proto.RegisterType((*MySQLStatementSendLongData)(nil), "events.MySQLStatementSendLongData") + proto.RegisterType((*MySQLStatementClose)(nil), "events.MySQLStatementClose") + proto.RegisterType((*MySQLStatementReset)(nil), "events.MySQLStatementReset") + proto.RegisterType((*MySQLStatementFetch)(nil), "events.MySQLStatementFetch") + proto.RegisterType((*MySQLStatementBulkExecute)(nil), "events.MySQLStatementBulkExecute") + proto.RegisterType((*MySQLInitDB)(nil), "events.MySQLInitDB") + proto.RegisterType((*MySQLCreateDB)(nil), "events.MySQLCreateDB") + proto.RegisterType((*MySQLDropDB)(nil), "events.MySQLDropDB") + proto.RegisterType((*MySQLShutDown)(nil), "events.MySQLShutDown") + proto.RegisterType((*MySQLProcessKill)(nil), "events.MySQLProcessKill") + proto.RegisterType((*MySQLDebug)(nil), "events.MySQLDebug") + proto.RegisterType((*MySQLRefresh)(nil), "events.MySQLRefresh") + 
proto.RegisterType((*SQLServerRPCRequest)(nil), "events.SQLServerRPCRequest") + proto.RegisterType((*DatabaseSessionMalformedPacket)(nil), "events.DatabaseSessionMalformedPacket") + proto.RegisterType((*ElasticsearchRequest)(nil), "events.ElasticsearchRequest") + proto.RegisterType((*OpenSearchRequest)(nil), "events.OpenSearchRequest") + proto.RegisterType((*DynamoDBRequest)(nil), "events.DynamoDBRequest") + proto.RegisterType((*AppSessionDynamoDBRequest)(nil), "events.AppSessionDynamoDBRequest") + proto.RegisterType((*UpgradeWindowStartMetadata)(nil), "events.UpgradeWindowStartMetadata") + proto.RegisterType((*UpgradeWindowStartUpdate)(nil), "events.UpgradeWindowStartUpdate") + proto.RegisterType((*SessionRecordingAccess)(nil), "events.SessionRecordingAccess") + proto.RegisterType((*KubeClusterMetadata)(nil), "events.KubeClusterMetadata") + proto.RegisterMapType((map[string]string)(nil), "events.KubeClusterMetadata.KubeLabelsEntry") + proto.RegisterType((*KubernetesClusterCreate)(nil), "events.KubernetesClusterCreate") + proto.RegisterType((*KubernetesClusterUpdate)(nil), "events.KubernetesClusterUpdate") + proto.RegisterType((*KubernetesClusterDelete)(nil), "events.KubernetesClusterDelete") + proto.RegisterType((*SSMRun)(nil), "events.SSMRun") + proto.RegisterType((*CassandraPrepare)(nil), "events.CassandraPrepare") + proto.RegisterType((*CassandraExecute)(nil), "events.CassandraExecute") + proto.RegisterType((*CassandraBatch)(nil), "events.CassandraBatch") + proto.RegisterType((*CassandraBatch_BatchChild)(nil), "events.CassandraBatch.BatchChild") + proto.RegisterType((*CassandraBatch_BatchChild_Value)(nil), "events.CassandraBatch.BatchChild.Value") + proto.RegisterType((*CassandraRegister)(nil), "events.CassandraRegister") + proto.RegisterType((*LoginRuleCreate)(nil), "events.LoginRuleCreate") + proto.RegisterType((*LoginRuleDelete)(nil), "events.LoginRuleDelete") + proto.RegisterType((*SAMLIdPAuthAttempt)(nil), "events.SAMLIdPAuthAttempt") + 
proto.RegisterType((*SAMLIdPServiceProviderCreate)(nil), "events.SAMLIdPServiceProviderCreate") + proto.RegisterType((*SAMLIdPServiceProviderUpdate)(nil), "events.SAMLIdPServiceProviderUpdate") + proto.RegisterType((*SAMLIdPServiceProviderDelete)(nil), "events.SAMLIdPServiceProviderDelete") + proto.RegisterType((*SAMLIdPServiceProviderDeleteAll)(nil), "events.SAMLIdPServiceProviderDeleteAll") + proto.RegisterType((*OktaResourcesUpdate)(nil), "events.OktaResourcesUpdate") + proto.RegisterType((*OktaSyncFailure)(nil), "events.OktaSyncFailure") + proto.RegisterType((*OktaAssignmentResult)(nil), "events.OktaAssignmentResult") + proto.RegisterType((*AccessListCreate)(nil), "events.AccessListCreate") + proto.RegisterType((*AccessListUpdate)(nil), "events.AccessListUpdate") + proto.RegisterType((*AccessListDelete)(nil), "events.AccessListDelete") + proto.RegisterType((*AccessListMemberCreate)(nil), "events.AccessListMemberCreate") + proto.RegisterType((*AccessListMemberUpdate)(nil), "events.AccessListMemberUpdate") + proto.RegisterType((*AccessListMemberDelete)(nil), "events.AccessListMemberDelete") + proto.RegisterType((*AccessListMemberDeleteAllForAccessList)(nil), "events.AccessListMemberDeleteAllForAccessList") + proto.RegisterType((*AccessListReview)(nil), "events.AccessListReview") + proto.RegisterType((*AuditQueryRun)(nil), "events.AuditQueryRun") + proto.RegisterType((*AuditQueryDetails)(nil), "events.AuditQueryDetails") + proto.RegisterType((*SecurityReportRun)(nil), "events.SecurityReportRun") + proto.RegisterType((*ExternalAuditStorageEnable)(nil), "events.ExternalAuditStorageEnable") + proto.RegisterType((*ExternalAuditStorageDisable)(nil), "events.ExternalAuditStorageDisable") + proto.RegisterType((*ExternalAuditStorageDetails)(nil), "events.ExternalAuditStorageDetails") + proto.RegisterType((*OktaAccessListSync)(nil), "events.OktaAccessListSync") + proto.RegisterType((*OktaUserSync)(nil), "events.OktaUserSync") + proto.RegisterType((*LabelSelector)(nil), 
"events.LabelSelector") + proto.RegisterType((*SPIFFESVIDIssued)(nil), "events.SPIFFESVIDIssued") + proto.RegisterType((*AuthPreferenceUpdate)(nil), "events.AuthPreferenceUpdate") + proto.RegisterType((*ClusterNetworkingConfigUpdate)(nil), "events.ClusterNetworkingConfigUpdate") + proto.RegisterType((*SessionRecordingConfigUpdate)(nil), "events.SessionRecordingConfigUpdate") + proto.RegisterType((*AccessPathChanged)(nil), "events.AccessPathChanged") + proto.RegisterType((*SpannerRPC)(nil), "events.SpannerRPC") + proto.RegisterType((*AccessGraphSettingsUpdate)(nil), "events.AccessGraphSettingsUpdate") + proto.RegisterType((*SPIFFEFederationCreate)(nil), "events.SPIFFEFederationCreate") + proto.RegisterType((*SPIFFEFederationDelete)(nil), "events.SPIFFEFederationDelete") + proto.RegisterType((*AutoUpdateConfigCreate)(nil), "events.AutoUpdateConfigCreate") + proto.RegisterType((*AutoUpdateConfigUpdate)(nil), "events.AutoUpdateConfigUpdate") + proto.RegisterType((*AutoUpdateConfigDelete)(nil), "events.AutoUpdateConfigDelete") + proto.RegisterType((*AutoUpdateVersionCreate)(nil), "events.AutoUpdateVersionCreate") + proto.RegisterType((*AutoUpdateVersionUpdate)(nil), "events.AutoUpdateVersionUpdate") + proto.RegisterType((*AutoUpdateVersionDelete)(nil), "events.AutoUpdateVersionDelete") + proto.RegisterType((*AutoUpdateAgentRolloutTrigger)(nil), "events.AutoUpdateAgentRolloutTrigger") + proto.RegisterType((*AutoUpdateAgentRolloutForceDone)(nil), "events.AutoUpdateAgentRolloutForceDone") + proto.RegisterType((*AutoUpdateAgentRolloutRollback)(nil), "events.AutoUpdateAgentRolloutRollback") + proto.RegisterType((*StaticHostUserCreate)(nil), "events.StaticHostUserCreate") + proto.RegisterType((*StaticHostUserUpdate)(nil), "events.StaticHostUserUpdate") + proto.RegisterType((*StaticHostUserDelete)(nil), "events.StaticHostUserDelete") + proto.RegisterType((*CrownJewelCreate)(nil), "events.CrownJewelCreate") + proto.RegisterType((*CrownJewelUpdate)(nil), 
"events.CrownJewelUpdate") + proto.RegisterType((*CrownJewelDelete)(nil), "events.CrownJewelDelete") + proto.RegisterType((*UserTaskCreate)(nil), "events.UserTaskCreate") + proto.RegisterType((*UserTaskUpdate)(nil), "events.UserTaskUpdate") + proto.RegisterType((*UserTaskMetadata)(nil), "events.UserTaskMetadata") + proto.RegisterType((*UserTaskDelete)(nil), "events.UserTaskDelete") + proto.RegisterType((*ContactCreate)(nil), "events.ContactCreate") + proto.RegisterType((*ContactDelete)(nil), "events.ContactDelete") + proto.RegisterType((*WorkloadIdentityCreate)(nil), "events.WorkloadIdentityCreate") + proto.RegisterType((*WorkloadIdentityUpdate)(nil), "events.WorkloadIdentityUpdate") + proto.RegisterType((*WorkloadIdentityDelete)(nil), "events.WorkloadIdentityDelete") + proto.RegisterType((*WorkloadIdentityX509RevocationCreate)(nil), "events.WorkloadIdentityX509RevocationCreate") + proto.RegisterType((*WorkloadIdentityX509RevocationUpdate)(nil), "events.WorkloadIdentityX509RevocationUpdate") + proto.RegisterType((*WorkloadIdentityX509RevocationDelete)(nil), "events.WorkloadIdentityX509RevocationDelete") + proto.RegisterType((*GitCommand)(nil), "events.GitCommand") + proto.RegisterType((*GitCommandAction)(nil), "events.GitCommandAction") + proto.RegisterType((*AccessListInvalidMetadata)(nil), "events.AccessListInvalidMetadata") + proto.RegisterType((*UserLoginAccessListInvalid)(nil), "events.UserLoginAccessListInvalid") + proto.RegisterType((*StableUNIXUserCreate)(nil), "events.StableUNIXUserCreate") + proto.RegisterType((*StableUNIXUser)(nil), "events.StableUNIXUser") + proto.RegisterType((*AWSICResourceSync)(nil), "events.AWSICResourceSync") + proto.RegisterType((*HealthCheckConfigCreate)(nil), "events.HealthCheckConfigCreate") + proto.RegisterType((*HealthCheckConfigUpdate)(nil), "events.HealthCheckConfigUpdate") + proto.RegisterType((*HealthCheckConfigDelete)(nil), "events.HealthCheckConfigDelete") + 
proto.RegisterType((*WorkloadIdentityX509IssuerOverrideCreate)(nil), "events.WorkloadIdentityX509IssuerOverrideCreate") + proto.RegisterType((*WorkloadIdentityX509IssuerOverrideDelete)(nil), "events.WorkloadIdentityX509IssuerOverrideDelete") + proto.RegisterType((*SigstorePolicyCreate)(nil), "events.SigstorePolicyCreate") + proto.RegisterType((*SigstorePolicyUpdate)(nil), "events.SigstorePolicyUpdate") + proto.RegisterType((*SigstorePolicyDelete)(nil), "events.SigstorePolicyDelete") + proto.RegisterType((*MCPSessionStart)(nil), "events.MCPSessionStart") + proto.RegisterType((*MCPSessionEnd)(nil), "events.MCPSessionEnd") + proto.RegisterType((*MCPJSONRPCMessage)(nil), "events.MCPJSONRPCMessage") + proto.RegisterType((*MCPSessionRequest)(nil), "events.MCPSessionRequest") + proto.RegisterType((*MCPSessionNotification)(nil), "events.MCPSessionNotification") + proto.RegisterType((*MCPSessionListenSSEStream)(nil), "events.MCPSessionListenSSEStream") + proto.RegisterType((*MCPSessionInvalidHTTPRequest)(nil), "events.MCPSessionInvalidHTTPRequest") + proto.RegisterType((*BoundKeypairRecovery)(nil), "events.BoundKeypairRecovery") + proto.RegisterType((*BoundKeypairRotation)(nil), "events.BoundKeypairRotation") + proto.RegisterType((*BoundKeypairJoinStateVerificationFailed)(nil), "events.BoundKeypairJoinStateVerificationFailed") + proto.RegisterType((*SCIMRequest)(nil), "events.SCIMRequest") + proto.RegisterType((*SCIMCommonData)(nil), "events.SCIMCommonData") + proto.RegisterType((*SCIMListingEvent)(nil), "events.SCIMListingEvent") + proto.RegisterType((*SCIMResourceEvent)(nil), "events.SCIMResourceEvent") + proto.RegisterType((*ClientIPRestrictionsUpdate)(nil), "events.ClientIPRestrictionsUpdate") + proto.RegisterType((*VnetConfigCreate)(nil), "events.VnetConfigCreate") + proto.RegisterType((*VnetConfigUpdate)(nil), "events.VnetConfigUpdate") + proto.RegisterType((*VnetConfigDelete)(nil), "events.VnetConfigDelete") } -func (m *OktaResource) MarshalTo(dAtA []byte) (int, 
error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) +func init() { + proto.RegisterFile("teleport/legacy/types/events/events.proto", fileDescriptor_007ba1c3d6266d56) } -func (m *OktaResource) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if len(m.Description) > 0 { - i -= len(m.Description) - copy(dAtA[i:], m.Description) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Description))) - i-- - dAtA[i] = 0x12 - } - if len(m.ID) > 0 { - i -= len(m.ID) - copy(dAtA[i:], m.ID) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ID))) - i-- - dAtA[i] = 0xa - } - return len(dAtA) - i, nil +var fileDescriptor_007ba1c3d6266d56 = []byte{ + // 20935 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0xbd, 0x69, 0x78, 0x24, 0x49, + 0x76, 0x18, 0x86, 0x3a, 0x70, 0x3d, 0x5c, 0x85, 0xe8, 0x2b, 0xbb, 0xa7, 0x67, 0x6a, 0x26, 0x67, + 0xb6, 0xa7, 0x7b, 0x76, 0xb6, 0x7b, 0xa6, 0xe7, 0xd8, 0x9d, 0x9d, 0xd9, 0x9d, 0x29, 0x14, 0x80, + 0x46, 0x75, 0xe3, 0xa8, 0xcd, 0x42, 0x77, 0xcf, 0xec, 0x2e, 0xb7, 0x98, 0xa8, 0x8c, 0x06, 0x72, + 0x50, 0x95, 0x59, 0x9b, 0x99, 0xd5, 0x68, 0x8c, 0x7c, 0x90, 0x12, 0xc5, 0x43, 0x5a, 0x92, 0xeb, + 0x5d, 0xf1, 0x10, 0x29, 0xd9, 0x4b, 0x8a, 0xb2, 0x29, 0x92, 0x22, 0x4d, 0x99, 0xe6, 0x4d, 0x91, + 0x14, 0x45, 0x73, 0x79, 0x9a, 0xb7, 0x69, 0x92, 0x02, 0x0f, 0x59, 0xdf, 0x67, 0xe3, 0xd3, 0xe7, + 0x4f, 0x9f, 0xcd, 0xcf, 0xa4, 0x64, 0x4b, 0x9f, 0xbf, 0x78, 0x11, 0x99, 0x19, 0x79, 0x54, 0xe1, + 0x5c, 0xa2, 0xb1, 0x8d, 0x3f, 0xdd, 0xa8, 0xf7, 0x5e, 0xbc, 0x88, 0x7c, 0xf1, 0x22, 0xe2, 0x45, + 0xc4, 0x8b, 0xf7, 0xe0, 0x8a, 0x47, 0x9b, 0xb4, 0x6d, 0x3b, 0xde, 0xb5, 0x26, 0x5d, 0xd5, 0x1b, + 0x9b, 0xd7, 0xbc, 0xcd, 0x36, 0x75, 0xaf, 0xd1, 0xfb, 0xd4, 0xf2, 0xfc, 0xff, 0xae, 0xb6, 0x1d, + 0xdb, 0xb3, 0xc9, 0x00, 0xff, 0x75, 0xe1, 0xf4, 0xaa, 0xbd, 0x6a, 
0x23, 0xe8, 0x1a, 0xfb, 0x8b, + 0x63, 0x2f, 0x5c, 0x5c, 0xb5, 0xed, 0xd5, 0x26, 0xbd, 0x86, 0xbf, 0x56, 0x3a, 0xf7, 0xae, 0xb9, + 0x9e, 0xd3, 0x69, 0x78, 0x02, 0x5b, 0x8c, 0x63, 0x3d, 0xb3, 0x45, 0x5d, 0x4f, 0x6f, 0xb5, 0x05, + 0xc1, 0x13, 0x71, 0x82, 0x0d, 0x47, 0x6f, 0xb7, 0xa9, 0x23, 0x2a, 0xbf, 0xf0, 0x6c, 0xd0, 0x4e, + 0xbd, 0xd1, 0xa0, 0xae, 0xdb, 0x34, 0x5d, 0xef, 0xda, 0xfd, 0x17, 0xa5, 0x5f, 0x82, 0xf0, 0xa9, + 0xf4, 0x0f, 0xc2, 0x7f, 0x05, 0xc9, 0x07, 0xd2, 0x49, 0xfc, 0x1a, 0x63, 0x55, 0xab, 0x9f, 0xcb, + 0xc2, 0xd0, 0x02, 0xf5, 0x74, 0x43, 0xf7, 0x74, 0x72, 0x11, 0xfa, 0x2b, 0x96, 0x41, 0x1f, 0x28, + 0x99, 0x27, 0x33, 0x97, 0x73, 0x53, 0x03, 0xdb, 0x5b, 0xc5, 0x2c, 0x35, 0x35, 0x0e, 0x24, 0x8f, + 0x43, 0x7e, 0x79, 0xb3, 0x4d, 0x95, 0xec, 0x93, 0x99, 0xcb, 0xc3, 0x53, 0xc3, 0xdb, 0x5b, 0xc5, + 0x7e, 0x14, 0x9a, 0x86, 0x60, 0xf2, 0x14, 0x64, 0x2b, 0xd3, 0x4a, 0x0e, 0x91, 0x93, 0xdb, 0x5b, + 0xc5, 0xb1, 0x8e, 0x69, 0x3c, 0x6f, 0xb7, 0x4c, 0x8f, 0xb6, 0xda, 0xde, 0xa6, 0x96, 0xad, 0x4c, + 0x93, 0x4b, 0x90, 0x2f, 0xdb, 0x06, 0x55, 0xf2, 0x48, 0x44, 0xb6, 0xb7, 0x8a, 0xe3, 0x0d, 0xdb, + 0xa0, 0x12, 0x15, 0xe2, 0xc9, 0x5b, 0x90, 0x5f, 0x36, 0x5b, 0x54, 0xe9, 0x7f, 0x32, 0x73, 0x79, + 0xe4, 0xfa, 0x85, 0xab, 0x5c, 0x7c, 0x57, 0x7d, 0xf1, 0x5d, 0x5d, 0xf6, 0xe5, 0x3b, 0x55, 0xf8, + 0xe2, 0x56, 0xb1, 0x6f, 0x7b, 0xab, 0x98, 0x67, 0x22, 0xff, 0xec, 0x9f, 0x14, 0x33, 0x1a, 0x96, + 0x24, 0x6f, 0xc0, 0x48, 0xb9, 0xd9, 0x71, 0x3d, 0xea, 0x2c, 0xea, 0x2d, 0xaa, 0x0c, 0x60, 0x85, + 0x17, 0xb6, 0xb7, 0x8a, 0x67, 0x1b, 0x1c, 0x5c, 0xb7, 0xf4, 0x96, 0x5c, 0xb1, 0x4c, 0xae, 0xfe, + 0x78, 0x06, 0x26, 0x6a, 0xd4, 0x75, 0x4d, 0xdb, 0x0a, 0x64, 0xf3, 0x3e, 0x18, 0x16, 0xa0, 0xca, + 0x34, 0xca, 0x67, 0x78, 0x6a, 0x70, 0x7b, 0xab, 0x98, 0x73, 0x4d, 0x43, 0x0b, 0x31, 0xe4, 0x05, + 0x18, 0xbc, 0x6b, 0x7a, 0x6b, 0x0b, 0xb3, 0x25, 0x21, 0xa7, 0xb3, 0xdb, 0x5b, 0x45, 0xb2, 0x61, + 0x7a, 0x6b, 0xf5, 0xd6, 0x3d, 0x5d, 0xaa, 0xd0, 0x27, 0x23, 0xf3, 0x50, 0xa8, 0x3a, 0xe6, 0x7d, + 0xdd, 
0xa3, 0xb7, 0xe8, 0x66, 0xd5, 0x6e, 0x9a, 0x8d, 0x4d, 0x21, 0xc5, 0x27, 0xb7, 0xb7, 0x8a, + 0x17, 0xdb, 0x1c, 0x57, 0x5f, 0xa7, 0x9b, 0xf5, 0x36, 0x62, 0x25, 0x26, 0x89, 0x92, 0xea, 0x77, + 0x0f, 0xc3, 0xe8, 0x6d, 0x97, 0x3a, 0x41, 0xbb, 0x2f, 0x41, 0x9e, 0xfd, 0x16, 0x4d, 0x46, 0x99, + 0x77, 0x5c, 0xea, 0xc8, 0x32, 0x67, 0x78, 0x72, 0x05, 0xfa, 0xe7, 0xed, 0x55, 0xd3, 0x12, 0xcd, + 0x3e, 0xb5, 0xbd, 0x55, 0x9c, 0x68, 0x32, 0x80, 0x44, 0xc9, 0x29, 0xc8, 0x47, 0x61, 0xb4, 0xd2, + 0x62, 0x3a, 0x64, 0x5b, 0xba, 0x67, 0x3b, 0xa2, 0xb5, 0x28, 0x5d, 0x53, 0x82, 0x4b, 0x05, 0x23, + 0xf4, 0xe4, 0xc3, 0x00, 0xa5, 0xbb, 0x35, 0xcd, 0x6e, 0xd2, 0x92, 0xb6, 0x28, 0x94, 0x01, 0x4b, + 0xeb, 0x1b, 0x6e, 0xdd, 0xb1, 0x9b, 0xb4, 0xae, 0x3b, 0x72, 0xb5, 0x12, 0x35, 0x99, 0x81, 0xf1, + 0x12, 0x8e, 0x0a, 0x8d, 0x7e, 0xba, 0x43, 0x5d, 0xcf, 0x55, 0xfa, 0x9f, 0xcc, 0x5d, 0x1e, 0x9e, + 0x7a, 0x7c, 0x7b, 0xab, 0x78, 0x9e, 0x8f, 0x97, 0xba, 0x23, 0x50, 0x12, 0x8b, 0x58, 0x21, 0x32, + 0x05, 0x63, 0xa5, 0xf7, 0x3a, 0x0e, 0xad, 0x18, 0xd4, 0xf2, 0x4c, 0x6f, 0x53, 0x68, 0xc8, 0xc5, + 0xed, 0xad, 0xa2, 0xa2, 0x33, 0x44, 0xdd, 0x14, 0x18, 0x89, 0x49, 0xb4, 0x08, 0x59, 0x82, 0xc9, + 0x1b, 0xe5, 0x6a, 0x8d, 0x3a, 0xf7, 0xcd, 0x06, 0x2d, 0x35, 0x1a, 0x76, 0xc7, 0xf2, 0x94, 0x41, + 0xe4, 0xf3, 0xd4, 0xf6, 0x56, 0xf1, 0xf1, 0xd5, 0x46, 0xbb, 0xee, 0x72, 0x6c, 0x5d, 0xe7, 0x68, + 0x89, 0x59, 0xb2, 0x2c, 0xf9, 0x38, 0x8c, 0x2d, 0x3b, 0x4c, 0x0b, 0x8d, 0x69, 0xca, 0xe0, 0xca, + 0x10, 0xea, 0xff, 0xd9, 0xab, 0x62, 0xa6, 0xe2, 0x50, 0xbf, 0x67, 0x79, 0x63, 0x3d, 0x5e, 0xa0, + 0x6e, 0x20, 0x4e, 0x6e, 0x6c, 0x84, 0x15, 0xa1, 0xa0, 0xb0, 0x8f, 0x37, 0x1d, 0x6a, 0x24, 0xb4, + 0x6d, 0x18, 0xdb, 0x7c, 0x65, 0x7b, 0xab, 0xf8, 0x3e, 0x47, 0xd0, 0xd4, 0x7b, 0xaa, 0x5d, 0x57, + 0x56, 0x64, 0x06, 0x86, 0x98, 0x36, 0xdd, 0x32, 0x2d, 0x43, 0x81, 0x27, 0x33, 0x97, 0xc7, 0xaf, + 0x17, 0xfc, 0xd6, 0xfb, 0xf0, 0xa9, 0x73, 0xdb, 0x5b, 0xc5, 0x53, 0x4c, 0x07, 0xeb, 0xeb, 0xa6, + 0x25, 0x4f, 0x11, 0x41, 0x51, 0x36, 0x8a, 
0xa6, 0x6c, 0x0f, 0x87, 0xee, 0x48, 0x38, 0x8a, 0x56, + 0x6c, 0x2f, 0x3e, 0x6c, 0x7d, 0x32, 0x52, 0x86, 0xb1, 0x29, 0xdb, 0xab, 0x58, 0xae, 0xa7, 0x5b, + 0x0d, 0x5a, 0x99, 0x56, 0x46, 0xb1, 0x1c, 0xaa, 0x05, 0x2b, 0x67, 0x0a, 0x4c, 0x3d, 0x32, 0x29, + 0x45, 0xcb, 0x90, 0x05, 0x00, 0xd6, 0x84, 0x25, 0xc7, 0x64, 0x03, 0x61, 0x0c, 0xdb, 0x4f, 0xe4, + 0xf6, 0x73, 0xcc, 0xd4, 0xf9, 0xed, 0xad, 0xe2, 0x19, 0xfc, 0x02, 0x1b, 0x01, 0xb2, 0xae, 0x86, + 0x64, 0xe4, 0x55, 0x18, 0x66, 0xbf, 0x98, 0xea, 0xba, 0xca, 0x38, 0xaa, 0xa9, 0xb2, 0xbd, 0x55, + 0x3c, 0x8d, 0x25, 0x99, 0x9e, 0xcb, 0x1a, 0x1a, 0x92, 0x92, 0xcf, 0x67, 0x78, 0x3b, 0x96, 0x1d, + 0xdd, 0xf4, 0x5c, 0x65, 0x02, 0xb5, 0xe0, 0xcc, 0xd5, 0x60, 0xe6, 0x9e, 0xd7, 0x57, 0x68, 0xf3, + 0x8e, 0xde, 0xec, 0x50, 0x77, 0xea, 0x93, 0x6c, 0x02, 0xfc, 0xc3, 0xad, 0xe2, 0xeb, 0xab, 0xa6, + 0xb7, 0xd6, 0x59, 0xb9, 0xda, 0xb0, 0x5b, 0xd7, 0x56, 0x1d, 0xfd, 0xbe, 0xe9, 0xe9, 0x9e, 0x69, + 0x5b, 0x7a, 0xf3, 0x5a, 0xb8, 0xb6, 0xb4, 0xcd, 0xd8, 0x62, 0x70, 0x95, 0xd7, 0x10, 0x7c, 0x8d, + 0x87, 0x3f, 0xe3, 0x5f, 0xc3, 0x89, 0x48, 0x05, 0x26, 0xd8, 0x2f, 0x79, 0x5a, 0x2d, 0xa0, 0x8c, + 0x8b, 0xdb, 0x5b, 0xc5, 0xc7, 0xb0, 0x7c, 0x97, 0xb9, 0x35, 0x5e, 0x4e, 0xfd, 0xcb, 0x3c, 0x8c, + 0x33, 0xdd, 0x97, 0xa6, 0xa9, 0x12, 0x9b, 0x71, 0x19, 0x84, 0x11, 0xb8, 0x6d, 0xbd, 0x41, 0xc5, + 0x8c, 0x85, 0xda, 0x62, 0xf9, 0x40, 0x99, 0x6b, 0x8c, 0x9e, 0x5c, 0x81, 0x21, 0x0e, 0xaa, 0x4c, + 0x8b, 0x49, 0x6c, 0x6c, 0x7b, 0xab, 0x38, 0xec, 0x22, 0xac, 0x6e, 0x1a, 0x5a, 0x80, 0x66, 0xb3, + 0x08, 0xff, 0x7b, 0xce, 0x76, 0x3d, 0xc6, 0x5c, 0xcc, 0x61, 0xa8, 0x2e, 0xa2, 0xc0, 0x9a, 0x40, + 0xc9, 0xb3, 0x48, 0xb4, 0x10, 0x79, 0x0d, 0x80, 0x43, 0x4a, 0x86, 0xe1, 0x88, 0x89, 0x0c, 0x75, + 0x43, 0xb0, 0xd0, 0x0d, 0x43, 0x9e, 0x05, 0x25, 0x62, 0xd2, 0x82, 0x51, 0xfe, 0x0b, 0x3b, 0x93, + 0xcf, 0x62, 0x23, 0xd7, 0x2f, 0xfb, 0xca, 0x16, 0x95, 0xce, 0x55, 0x99, 0x74, 0xc6, 0xf2, 0x9c, + 0xcd, 0xa9, 0xa2, 0x58, 0xf8, 0xce, 0x89, 0xaa, 0x9a, 0x88, 0x93, 0xa7, 0x5c, 
0xb9, 0x0c, 0x5b, + 0x0f, 0x67, 0x6d, 0x67, 0x43, 0x77, 0x0c, 0x6a, 0x4c, 0x6d, 0xca, 0xeb, 0xe1, 0x3d, 0x1f, 0x5c, + 0x5f, 0x91, 0x87, 0xb8, 0x4c, 0xce, 0x06, 0x17, 0xe7, 0x56, 0xeb, 0xac, 0xe0, 0xd0, 0x1e, 0x4c, + 0x48, 0xcb, 0xed, 0xac, 0xc4, 0x87, 0x73, 0xb4, 0x0c, 0x9b, 0x72, 0x39, 0xe0, 0x0e, 0x75, 0xd8, + 0x62, 0x89, 0xb3, 0x9b, 0x98, 0x72, 0x05, 0x93, 0xfb, 0x1c, 0x93, 0xe4, 0x21, 0x8a, 0x5c, 0x78, + 0x13, 0x26, 0x13, 0xa2, 0x20, 0x05, 0xc8, 0xad, 0xd3, 0x4d, 0xae, 0x2e, 0x1a, 0xfb, 0x93, 0x9c, + 0x86, 0xfe, 0xfb, 0x6c, 0x78, 0x70, 0x35, 0xd0, 0xf8, 0x8f, 0x0f, 0x67, 0x3f, 0x94, 0x61, 0x2b, + 0x3b, 0x29, 0xdb, 0x96, 0x45, 0x1b, 0x9e, 0xbc, 0xb8, 0xbf, 0x0a, 0xc3, 0xf3, 0x76, 0x43, 0x6f, + 0x62, 0x3f, 0x72, 0xbd, 0xc3, 0x91, 0xca, 0x3a, 0xf0, 0x6a, 0x93, 0x61, 0xe4, 0x91, 0x1a, 0x90, + 0x32, 0x05, 0xd0, 0x68, 0xcb, 0xf6, 0x28, 0x16, 0xcc, 0x86, 0x0a, 0x80, 0x05, 0x1d, 0x44, 0xc9, + 0x0a, 0x10, 0x12, 0x93, 0x6b, 0x30, 0x54, 0x65, 0xf6, 0x4c, 0xc3, 0x6e, 0x0a, 0xe5, 0xc3, 0x25, + 0x17, 0x6d, 0x1c, 0x79, 0x4e, 0xf4, 0x89, 0xd4, 0x39, 0x18, 0x2f, 0x37, 0x4d, 0x6a, 0x79, 0x72, + 0xab, 0xd9, 0xc8, 0x2a, 0xad, 0x52, 0xcb, 0x93, 0x5b, 0x8d, 0x63, 0x51, 0x67, 0xd0, 0xf8, 0xfc, + 0x82, 0xa4, 0xea, 0x6f, 0xe4, 0xe0, 0xfc, 0xad, 0xce, 0x0a, 0x75, 0x2c, 0xea, 0x51, 0x57, 0x0c, + 0xcc, 0x80, 0xeb, 0x22, 0x4c, 0x26, 0x90, 0x82, 0x3b, 0x1a, 0x24, 0xeb, 0x01, 0xd2, 0x1f, 0xef, + 0xf2, 0xaa, 0x96, 0x28, 0x4a, 0xe6, 0x60, 0x22, 0x04, 0xb2, 0x46, 0xb8, 0x4a, 0x16, 0xe7, 0xc2, + 0x27, 0xb6, 0xb7, 0x8a, 0x17, 0x24, 0x6e, 0xac, 0xd9, 0xb2, 0x06, 0xc7, 0x8b, 0x91, 0x5b, 0x50, + 0x08, 0x41, 0x37, 0x1c, 0xbb, 0xd3, 0x76, 0x95, 0x1c, 0xb2, 0xc2, 0x29, 0x48, 0x62, 0xb5, 0x8a, + 0x48, 0xd9, 0x50, 0x8a, 0x17, 0x24, 0x5f, 0x93, 0x91, 0xb9, 0x89, 0x51, 0x98, 0xc7, 0x51, 0xf8, + 0x41, 0x7f, 0x14, 0x76, 0x15, 0xd2, 0xd5, 0x78, 0x49, 0x31, 0x28, 0x63, 0xcd, 0x48, 0x0c, 0xca, + 0x44, 0x8d, 0x17, 0xca, 0x70, 0x26, 0x95, 0xd7, 0x9e, 0xb4, 0xfa, 0xdf, 0xe4, 0x64, 0x2e, 0x55, + 0xdb, 0x08, 0x3a, 
0x73, 0x49, 0xee, 0xcc, 0xaa, 0x6d, 0xe0, 0xb4, 0x9d, 0x09, 0x6d, 0x14, 0xa9, + 0xb1, 0x6d, 0xdb, 0x88, 0x4f, 0xdc, 0xc9, 0xb2, 0xe4, 0x53, 0x70, 0x36, 0x01, 0xe4, 0xd3, 0x35, + 0xd7, 0xfe, 0x4b, 0xdb, 0x5b, 0x45, 0x35, 0x85, 0x6b, 0x7c, 0xf6, 0xee, 0xc2, 0x85, 0xe8, 0x70, + 0x4e, 0x92, 0xba, 0x6d, 0x79, 0xba, 0x69, 0x89, 0xd5, 0x86, 0x8f, 0x92, 0x67, 0xb7, 0xb7, 0x8a, + 0x4f, 0xcb, 0x3a, 0xe8, 0xd3, 0xc4, 0x1b, 0xdf, 0x8d, 0x0f, 0x31, 0x40, 0x49, 0x41, 0x55, 0x5a, + 0xfa, 0xaa, 0xbf, 0x33, 0xb9, 0xbc, 0xbd, 0x55, 0x7c, 0x26, 0xb5, 0x0e, 0x93, 0x51, 0xc9, 0x96, + 0x50, 0x37, 0x4e, 0x44, 0x03, 0x12, 0xe2, 0x16, 0x6d, 0x83, 0xe2, 0x37, 0xf4, 0x23, 0x7f, 0x75, + 0x7b, 0xab, 0xf8, 0x84, 0xc4, 0xdf, 0xb2, 0x0d, 0x1a, 0x6f, 0x7e, 0x4a, 0x69, 0xf5, 0xc7, 0x73, + 0xf0, 0x44, 0xad, 0xb4, 0x30, 0x5f, 0x31, 0x7c, 0xd3, 0xb1, 0xea, 0xd8, 0xf7, 0x4d, 0x43, 0x1a, + 0xbd, 0x2b, 0x70, 0x2e, 0x86, 0x9a, 0x41, 0x6b, 0x35, 0xd8, 0xb4, 0xe0, 0xb7, 0xf9, 0x66, 0x69, + 0x5b, 0xd0, 0xd4, 0xb9, 0x49, 0x1b, 0x35, 0x8e, 0xba, 0x31, 0x62, 0x7d, 0x14, 0x43, 0xd5, 0xd6, + 0x6c, 0xc7, 0x6b, 0x74, 0x3c, 0xa1, 0x04, 0xd8, 0x47, 0x89, 0x3a, 0x5c, 0x41, 0xd4, 0xa3, 0x0a, + 0x9f, 0x0f, 0xf9, 0x86, 0x0c, 0x14, 0x4a, 0x9e, 0xe7, 0x98, 0x2b, 0x1d, 0x8f, 0x2e, 0xe8, 0xed, + 0xb6, 0x69, 0xad, 0xe2, 0x58, 0x1f, 0xb9, 0xfe, 0x46, 0xb0, 0x46, 0xf6, 0x94, 0xc4, 0xd5, 0x78, + 0x71, 0x69, 0x88, 0xea, 0x3e, 0xaa, 0xde, 0xe2, 0x38, 0x79, 0x88, 0xc6, 0xcb, 0xb1, 0x21, 0x9a, + 0xca, 0x6b, 0x4f, 0x43, 0xf4, 0x73, 0x39, 0xb8, 0xb8, 0xb4, 0xee, 0xe9, 0x1a, 0x75, 0xed, 0x8e, + 0xd3, 0xa0, 0xee, 0xed, 0xb6, 0xa1, 0x7b, 0x34, 0x1c, 0xa9, 0x45, 0xe8, 0x2f, 0x19, 0x06, 0x35, + 0x90, 0x5d, 0x3f, 0xdf, 0x5e, 0xeb, 0x0c, 0xa0, 0x71, 0x38, 0x79, 0x1f, 0x0c, 0x8a, 0x32, 0xc8, + 0xbd, 0x7f, 0x6a, 0x64, 0x7b, 0xab, 0x38, 0xd8, 0xe1, 0x20, 0xcd, 0xc7, 0x31, 0xb2, 0x69, 0xda, + 0xa4, 0x8c, 0x2c, 0x17, 0x92, 0x19, 0x1c, 0xa4, 0xf9, 0x38, 0xf2, 0x31, 0x18, 0x47, 0xb6, 0x41, + 0x7b, 0xc4, 0xdc, 0x77, 0xda, 0x97, 0xae, 0xdc, 0x58, 
0xbe, 0x34, 0x61, 0x6b, 0xea, 0x8e, 0x5f, + 0x40, 0x8b, 0x31, 0x20, 0x77, 0xa1, 0x20, 0x1a, 0x11, 0x32, 0xed, 0xef, 0xc1, 0xf4, 0xcc, 0xf6, + 0x56, 0x71, 0x52, 0xb4, 0x5f, 0x62, 0x9b, 0x60, 0xc2, 0x18, 0x8b, 0x66, 0x87, 0x8c, 0x07, 0x76, + 0x62, 0x2c, 0xbe, 0x58, 0x66, 0x1c, 0x67, 0xa2, 0xbe, 0x03, 0xa3, 0x72, 0x41, 0x72, 0x16, 0x8f, + 0x30, 0xf8, 0x38, 0xc1, 0xc3, 0x0f, 0xd3, 0xc0, 0x73, 0x8b, 0x17, 0x61, 0x64, 0x9a, 0xba, 0x0d, + 0xc7, 0x6c, 0x33, 0xab, 0x41, 0x28, 0xf9, 0xc4, 0xf6, 0x56, 0x71, 0xc4, 0x08, 0xc1, 0x9a, 0x4c, + 0xa3, 0xfe, 0x3f, 0x19, 0x38, 0xcb, 0x78, 0x97, 0x5c, 0xd7, 0x5c, 0xb5, 0x5a, 0xf2, 0xb2, 0xfd, + 0x3c, 0x0c, 0xd4, 0xb0, 0x3e, 0x51, 0xd3, 0xe9, 0xed, 0xad, 0x62, 0x81, 0xb7, 0x40, 0xd2, 0x43, + 0x41, 0x13, 0xec, 0xdf, 0xb3, 0x3b, 0xec, 0xdf, 0x99, 0x49, 0xeb, 0xe9, 0x8e, 0x67, 0x5a, 0xab, + 0x35, 0x4f, 0xf7, 0x3a, 0x6e, 0xc4, 0xa4, 0x15, 0x98, 0xba, 0x8b, 0xa8, 0x88, 0x49, 0x1b, 0x29, + 0x44, 0xde, 0x84, 0xd1, 0x19, 0xcb, 0x08, 0x99, 0xf0, 0x09, 0xf1, 0x31, 0x66, 0x69, 0x52, 0x84, + 0x27, 0x59, 0x44, 0x0a, 0xa8, 0x7f, 0x95, 0x01, 0x85, 0x6f, 0xb6, 0xe7, 0x4d, 0xd7, 0x5b, 0xa0, + 0xad, 0x15, 0x69, 0x76, 0x9a, 0xf5, 0x77, 0xef, 0x0c, 0x27, 0xad, 0x45, 0x68, 0x0a, 0x88, 0xdd, + 0x7b, 0xd3, 0x74, 0x13, 0xdb, 0xbc, 0x58, 0x29, 0x52, 0x81, 0x41, 0xce, 0x99, 0xdb, 0x12, 0x23, + 0xd7, 0x15, 0x5f, 0x11, 0xe2, 0x55, 0x73, 0x65, 0x68, 0x71, 0x62, 0x79, 0xe3, 0x28, 0xca, 0xb3, + 0x6d, 0x4d, 0x58, 0x66, 0xd9, 0xf4, 0x9a, 0xfe, 0x22, 0xc0, 0x67, 0x0a, 0xa9, 0x4d, 0x1e, 0x43, + 0xca, 0xf6, 0x49, 0xac, 0x9c, 0xfa, 0x85, 0x1c, 0x14, 0xe2, 0xf5, 0x93, 0xbb, 0x30, 0x74, 0xd3, + 0x36, 0x2d, 0x6a, 0x2c, 0x59, 0xf8, 0xb1, 0xbd, 0xcf, 0xb3, 0x7c, 0xb3, 0xfe, 0xd4, 0xbb, 0x58, + 0xa6, 0x2e, 0x1b, 0xc3, 0x78, 0xbc, 0x15, 0x30, 0x23, 0x1f, 0x87, 0x61, 0x66, 0x4e, 0xde, 0x47, + 0xce, 0xd9, 0x1d, 0x39, 0x3f, 0x29, 0x38, 0x9f, 0x76, 0x78, 0xa1, 0x24, 0xeb, 0x90, 0x1d, 0x53, + 0x51, 0x8d, 0xea, 0xae, 0x6d, 0x09, 0x25, 0x42, 0x15, 0x75, 0x10, 0x22, 0xab, 0x28, 0xa7, 
0x61, + 0x56, 0x30, 0xff, 0x58, 0xec, 0x51, 0x69, 0x1b, 0xc4, 0xc5, 0x1e, 0xef, 0x4c, 0x89, 0x98, 0x58, + 0x30, 0x21, 0xfa, 0x66, 0xcd, 0x6c, 0xe3, 0x06, 0x02, 0x97, 0xc8, 0xf1, 0xeb, 0x97, 0xae, 0xfa, + 0xfb, 0xd6, 0xab, 0xd2, 0x29, 0xe8, 0xfd, 0x17, 0xaf, 0x2e, 0x04, 0xe4, 0x78, 0x98, 0x80, 0xea, + 0x1d, 0x63, 0x21, 0x2b, 0x4e, 0x2b, 0x42, 0xae, 0x7e, 0x6d, 0x16, 0x3e, 0x10, 0x76, 0x91, 0x46, + 0xef, 0x9b, 0x74, 0x23, 0xe4, 0x28, 0x8e, 0x35, 0xd8, 0x68, 0x75, 0xcb, 0x6b, 0xba, 0xb5, 0x4a, + 0x0d, 0x72, 0x05, 0xfa, 0xf9, 0x06, 0x3e, 0x83, 0x96, 0x26, 0xce, 0x84, 0xf1, 0xbd, 0x3b, 0xa7, + 0x20, 0x36, 0x0c, 0x88, 0x2d, 0x3b, 0x57, 0xca, 0x52, 0x52, 0x29, 0x77, 0x51, 0xa3, 0xd8, 0x94, + 0xf3, 0xe5, 0x0a, 0x05, 0x9f, 0xd8, 0x96, 0x8b, 0x6a, 0x2e, 0xbc, 0x06, 0x23, 0x12, 0xf1, 0x9e, + 0xd6, 0xa3, 0xaf, 0xe9, 0x97, 0x87, 0xa9, 0xdf, 0x2c, 0x31, 0x4c, 0xaf, 0xb1, 0xe1, 0xe5, 0xba, + 0xcc, 0x20, 0xe2, 0xe3, 0x53, 0x0c, 0x22, 0x04, 0x45, 0x07, 0x11, 0x82, 0xc8, 0x4b, 0x30, 0xc4, + 0x59, 0x04, 0x5b, 0x6f, 0xdc, 0xb6, 0x3b, 0x08, 0x8b, 0x5a, 0x15, 0x01, 0x21, 0xf9, 0xbe, 0x0c, + 0x3c, 0xde, 0x53, 0x12, 0xa8, 0x7c, 0x23, 0xd7, 0x5f, 0xd9, 0x97, 0x18, 0xa7, 0x3e, 0xb0, 0xbd, + 0x55, 0xbc, 0x22, 0x69, 0x86, 0x23, 0xd1, 0xd4, 0x1b, 0x9c, 0x48, 0x6a, 0x57, 0xef, 0xa6, 0x30, + 0xbb, 0x97, 0x57, 0x3a, 0x8b, 0xa7, 0x8b, 0x56, 0x63, 0xd3, 0x6f, 0x64, 0x3e, 0xb4, 0x7b, 0xc5, + 0xf7, 0xde, 0xf3, 0x49, 0x52, 0xaa, 0xe9, 0xc2, 0x85, 0x34, 0xe0, 0x1c, 0xc7, 0x4c, 0xeb, 0x9b, + 0x4b, 0xf7, 0x16, 0x6c, 0xcb, 0x5b, 0xf3, 0x2b, 0xe8, 0x97, 0x8f, 0xe7, 0xb0, 0x02, 0x43, 0xdf, + 0xac, 0xdb, 0xf7, 0xea, 0x2d, 0x46, 0x95, 0x52, 0x47, 0x37, 0x4e, 0x6c, 0x8d, 0x10, 0x63, 0xdc, + 0x9f, 0x3d, 0x07, 0xc2, 0xc3, 0x53, 0x7f, 0x5e, 0x48, 0xce, 0x95, 0xb1, 0x42, 0x69, 0x53, 0xe6, + 0xe0, 0x3e, 0xa7, 0xcc, 0x0a, 0x8c, 0xce, 0xdb, 0x8d, 0xf5, 0x40, 0xf3, 0x5e, 0x83, 0x81, 0x65, + 0xdd, 0x59, 0xa5, 0x1e, 0x8a, 0x75, 0xe4, 0xfa, 0xe4, 0x55, 0x7e, 0xb7, 0xc1, 0x88, 0x38, 0x62, + 0x6a, 0x5c, 0x4c, 0x64, 0x03, 
0x1e, 0xfe, 0xd6, 0x44, 0x01, 0xf5, 0x9f, 0x0f, 0xc0, 0xa8, 0x38, + 0x87, 0xc7, 0x35, 0x8d, 0x7c, 0x38, 0xbc, 0xd9, 0x10, 0x33, 0x6f, 0x70, 0x16, 0x19, 0x9c, 0xa1, + 0x8e, 0x32, 0x66, 0xbf, 0xb9, 0x55, 0xcc, 0x6c, 0x6f, 0x15, 0xfb, 0xb4, 0x21, 0x69, 0x6b, 0x1d, + 0xae, 0xba, 0x92, 0x99, 0x21, 0x9f, 0xac, 0xc7, 0xca, 0xf2, 0x55, 0xf8, 0x4d, 0x18, 0x14, 0x6d, + 0x10, 0xca, 0x7b, 0x2e, 0x3c, 0xd1, 0x89, 0xdc, 0x27, 0xc4, 0x4a, 0xfb, 0xa5, 0xc8, 0x1b, 0x30, + 0xc0, 0x4f, 0x38, 0x84, 0x00, 0xce, 0xa6, 0x9f, 0x08, 0xc5, 0x8a, 0x8b, 0x32, 0x64, 0x0e, 0x20, + 0x3c, 0xdd, 0x08, 0xae, 0x4f, 0x04, 0x87, 0xe4, 0xb9, 0x47, 0x8c, 0x8b, 0x54, 0x96, 0xbc, 0x0a, + 0xa3, 0xcb, 0xd4, 0x69, 0x99, 0x96, 0xde, 0xac, 0x99, 0xef, 0xf9, 0x37, 0x28, 0x68, 0x7e, 0xb8, + 0xe6, 0x7b, 0x72, 0x9f, 0x46, 0xe8, 0xc8, 0x57, 0xa4, 0x9d, 0x1e, 0x0c, 0x62, 0x43, 0x9e, 0xda, + 0x71, 0x5b, 0x1d, 0x6b, 0x4f, 0xca, 0x61, 0xc2, 0xc7, 0x60, 0x2c, 0xb2, 0x71, 0x14, 0x47, 0xe4, + 0x8f, 0x27, 0x59, 0x4b, 0xbb, 0xe0, 0x18, 0xdb, 0x28, 0x07, 0x36, 0x28, 0x2a, 0x96, 0xe9, 0x99, + 0x7a, 0xb3, 0x6c, 0xb7, 0x5a, 0xba, 0x65, 0x28, 0xc3, 0xe1, 0xa0, 0x30, 0x39, 0xa6, 0xde, 0xe0, + 0x28, 0x79, 0x50, 0x44, 0x0b, 0x91, 0x5b, 0x50, 0x10, 0x7d, 0xa8, 0xd1, 0x86, 0xed, 0x30, 0x8b, + 0x08, 0x4f, 0xc0, 0xc5, 0xa8, 0x70, 0x39, 0xae, 0xee, 0xf8, 0x48, 0x79, 0xcb, 0x11, 0x2f, 0xc8, + 0x26, 0xe0, 0x8a, 0x75, 0xdf, 0x64, 0x46, 0xfc, 0x28, 0x36, 0x06, 0x27, 0x60, 0x93, 0x83, 0xe4, + 0x09, 0x58, 0x50, 0x49, 0x0b, 0xf6, 0xd8, 0xce, 0x0b, 0xf6, 0xcd, 0xfc, 0xd0, 0x48, 0x61, 0x34, + 0x7e, 0x27, 0xa2, 0xfe, 0x93, 0x1c, 0x8c, 0x88, 0x96, 0x30, 0x23, 0xe3, 0x64, 0xfc, 0x1c, 0x64, + 0xfc, 0xa4, 0x8e, 0x83, 0x81, 0xc3, 0x1a, 0x07, 0xea, 0x67, 0xb2, 0xc1, 0x64, 0x57, 0x75, 0x4c, + 0xeb, 0x60, 0x93, 0xdd, 0x25, 0x80, 0xf2, 0x5a, 0xc7, 0x5a, 0xe7, 0x77, 0xbf, 0xd9, 0xf0, 0xee, + 0xb7, 0x61, 0x6a, 0x12, 0x86, 0x3c, 0x0e, 0xf9, 0x69, 0xc6, 0x9f, 0xf5, 0xcc, 0xe8, 0xd4, 0xf0, + 0x17, 0x39, 0xa7, 0xcc, 0x07, 0x34, 0x04, 0xb3, 0x1d, 0xec, 0xd4, 
0xa6, 0x47, 0xf9, 0x9e, 0x21, + 0xc7, 0x77, 0xb0, 0x2b, 0x0c, 0xa0, 0x71, 0x38, 0x79, 0x19, 0x26, 0xa7, 0x69, 0x53, 0xdf, 0x5c, + 0x30, 0x9b, 0x4d, 0xd3, 0xa5, 0x0d, 0xdb, 0x32, 0x5c, 0x14, 0xb2, 0xa8, 0xae, 0xe5, 0x6a, 0x49, + 0x02, 0xa2, 0xc2, 0xc0, 0xd2, 0xbd, 0x7b, 0x2e, 0xf5, 0x50, 0x7c, 0xb9, 0x29, 0x60, 0x73, 0xbf, + 0x8d, 0x10, 0x4d, 0x60, 0xd4, 0x1f, 0xca, 0xb0, 0x2d, 0xa2, 0xbb, 0xee, 0xd9, 0xed, 0x70, 0x10, + 0x1d, 0x44, 0x24, 0x57, 0x42, 0x0b, 0x28, 0x8b, 0x5f, 0x3b, 0x21, 0xbe, 0x76, 0x50, 0x58, 0x41, + 0xa1, 0xed, 0x93, 0xfa, 0x55, 0xb9, 0x1d, 0xbe, 0x4a, 0xfd, 0x9e, 0x1c, 0x9c, 0x13, 0x2d, 0x2e, + 0x37, 0xcd, 0xf6, 0x8a, 0xad, 0x3b, 0x86, 0x46, 0x1b, 0xd4, 0xbc, 0x4f, 0x8f, 0xe7, 0xc0, 0x8b, + 0x0e, 0x9d, 0xfc, 0x01, 0x86, 0xce, 0x75, 0xdc, 0x6d, 0x33, 0xc9, 0xe0, 0xa9, 0x3a, 0x37, 0x7f, + 0x0a, 0xdb, 0x5b, 0xc5, 0x51, 0x83, 0x83, 0xf1, 0x5e, 0x45, 0x93, 0x89, 0x98, 0x92, 0xcc, 0x53, + 0x6b, 0xd5, 0x5b, 0x43, 0x25, 0xe9, 0xe7, 0x4a, 0xd2, 0x44, 0x88, 0x26, 0x30, 0x12, 0x5f, 0xdc, + 0xa7, 0x0c, 0x26, 0xf9, 0xb2, 0x8d, 0x8a, 0x26, 0x13, 0xa9, 0xdf, 0x95, 0x83, 0xd3, 0xf1, 0x6e, + 0xaa, 0x51, 0xcb, 0x38, 0xe9, 0xa3, 0x87, 0xa7, 0x8f, 0xbe, 0x37, 0x0f, 0x8f, 0x89, 0xdf, 0xb5, + 0x35, 0xdd, 0xa1, 0xc6, 0xb4, 0xe9, 0xd0, 0x86, 0x67, 0x3b, 0x9b, 0xc7, 0xd8, 0x0e, 0x3c, 0xbc, + 0xae, 0x7a, 0x19, 0x06, 0xc4, 0x59, 0x0e, 0x5f, 0xcf, 0xc6, 0x83, 0x96, 0x20, 0x34, 0xb1, 0x12, + 0xf2, 0x73, 0xa0, 0x58, 0x07, 0x0f, 0xec, 0xa6, 0x83, 0x3f, 0x04, 0x63, 0x81, 0xe8, 0xa5, 0xee, + 0x43, 0xa3, 0xd1, 0xf0, 0x11, 0xbc, 0x03, 0xa3, 0x84, 0x58, 0x9b, 0x0f, 0xa8, 0x4c, 0xa3, 0x51, + 0x37, 0x26, 0x6a, 0x0b, 0xca, 0x99, 0x86, 0x26, 0x13, 0xc5, 0x55, 0x65, 0x78, 0x37, 0xaa, 0xf2, + 0xb9, 0x7e, 0xb8, 0x90, 0xae, 0x2a, 0x1a, 0xd5, 0x8d, 0x13, 0x4d, 0xf9, 0xf2, 0xd4, 0x94, 0xa7, + 0x20, 0x5f, 0xd5, 0xbd, 0x35, 0xa1, 0x22, 0xe8, 0x14, 0x70, 0xcf, 0x6c, 0xd2, 0x7a, 0x5b, 0xf7, + 0xd6, 0x34, 0x44, 0x49, 0x73, 0x13, 0x20, 0xc7, 0xb4, 0xb9, 0x29, 0x34, 0x44, 0x46, 0x9e, 0xcc, + 0x5c, 
0xce, 0xa7, 0x19, 0x22, 0x71, 0xa5, 0x1c, 0xdd, 0x8d, 0x52, 0x7e, 0xbe, 0xbf, 0xdb, 0xfc, + 0x75, 0xd7, 0x31, 0x3d, 0x7a, 0xa2, 0x95, 0x27, 0x5a, 0x79, 0x04, 0x5a, 0xf9, 0xdb, 0x59, 0x18, + 0x0b, 0xf6, 0xa5, 0xef, 0xd2, 0xc6, 0xd1, 0xac, 0xa3, 0xe1, 0x76, 0x2e, 0x77, 0xe0, 0xed, 0xdc, + 0x41, 0x94, 0x50, 0x0d, 0xf6, 0xd7, 0xdc, 0xd4, 0x41, 0x29, 0xf3, 0xfd, 0x75, 0x70, 0x0c, 0xfe, + 0x14, 0x0c, 0x2e, 0xe8, 0x0f, 0xcc, 0x56, 0xa7, 0x25, 0x76, 0x2a, 0xe8, 0x1f, 0xda, 0xd2, 0x1f, + 0x68, 0x3e, 0x5c, 0xfd, 0xdd, 0x0c, 0x8c, 0x0b, 0xa1, 0x0a, 0xe6, 0x07, 0x92, 0x6a, 0x28, 0x9d, + 0xec, 0x81, 0xa5, 0x93, 0xdb, 0xbf, 0x74, 0xd4, 0x7f, 0x90, 0x03, 0x65, 0xd6, 0x6c, 0xd2, 0x65, + 0x47, 0xb7, 0xdc, 0x7b, 0xd4, 0x11, 0x47, 0x0a, 0x33, 0x8c, 0xd5, 0x81, 0x3e, 0x50, 0x9a, 0x86, + 0xb2, 0xfb, 0x9a, 0x86, 0xde, 0x0f, 0xc3, 0xa2, 0x31, 0x81, 0x6f, 0x32, 0x8e, 0x34, 0xc7, 0x07, + 0x6a, 0x21, 0x9e, 0x11, 0x97, 0xda, 0x6d, 0xc7, 0xbe, 0x4f, 0x1d, 0x7e, 0x1d, 0x2a, 0x88, 0x75, + 0x1f, 0xa8, 0x85, 0x78, 0x89, 0x33, 0xf5, 0xed, 0x5f, 0x99, 0x33, 0x75, 0xb4, 0x10, 0x4f, 0x2e, + 0xc3, 0xd0, 0xbc, 0xdd, 0x40, 0x8f, 0x3c, 0x31, 0x15, 0x8d, 0x6e, 0x6f, 0x15, 0x87, 0x9a, 0x02, + 0xa6, 0x05, 0x58, 0x46, 0x39, 0x6d, 0x6f, 0x58, 0x4d, 0x5b, 0xe7, 0x5e, 0x56, 0x43, 0x9c, 0xd2, + 0x10, 0x30, 0x2d, 0xc0, 0x32, 0x4a, 0x26, 0x73, 0xf4, 0x5e, 0x1b, 0x0a, 0x79, 0xde, 0x13, 0x30, + 0x2d, 0xc0, 0xaa, 0x3f, 0x94, 0x67, 0xda, 0xeb, 0x9a, 0xef, 0x3d, 0xf2, 0x6b, 0x49, 0x38, 0x60, + 0xfa, 0xf7, 0x31, 0x60, 0x1e, 0x99, 0x33, 0x51, 0xf5, 0x2f, 0x07, 0x01, 0x84, 0xf4, 0x67, 0x4e, + 0x36, 0xbb, 0x07, 0xd3, 0x9a, 0x69, 0x98, 0x9c, 0xb1, 0xd6, 0x74, 0xab, 0x41, 0x8d, 0xf0, 0x64, + 0x78, 0x00, 0x87, 0x36, 0x7a, 0x35, 0x53, 0x81, 0x0c, 0x8f, 0x86, 0xb5, 0x64, 0x01, 0xf2, 0x22, + 0x8c, 0x54, 0x2c, 0x8f, 0x3a, 0x7a, 0xc3, 0x33, 0xef, 0x53, 0x31, 0x35, 0xa0, 0x0b, 0x82, 0x19, + 0x82, 0x35, 0x99, 0x86, 0xbc, 0x0c, 0xa3, 0x55, 0xdd, 0xf1, 0xcc, 0x86, 0xd9, 0xd6, 0x2d, 0xcf, + 0x55, 0x86, 0x70, 0x46, 0xc3, 0x65, 0xbf, 
0x2d, 0xc1, 0xb5, 0x08, 0x15, 0xf9, 0x0a, 0x18, 0xc6, + 0x6d, 0x33, 0x3e, 0xc0, 0x18, 0xde, 0xf1, 0x5a, 0xf9, 0xe9, 0xd0, 0x0f, 0x95, 0x1f, 0x70, 0xa3, + 0xab, 0x41, 0xfc, 0x66, 0x39, 0xe0, 0x48, 0xde, 0x86, 0xc1, 0x19, 0xcb, 0x40, 0xe6, 0xb0, 0x23, + 0x73, 0x55, 0x30, 0x3f, 0x1b, 0x32, 0xb7, 0xdb, 0x31, 0xde, 0x3e, 0xbb, 0xf4, 0x51, 0x36, 0xf2, + 0xa5, 0x1b, 0x65, 0xa3, 0x5f, 0x82, 0x9b, 0x87, 0xb1, 0xc3, 0xba, 0x79, 0x18, 0xdf, 0xe7, 0xcd, + 0x83, 0xfa, 0x1e, 0x8c, 0x4c, 0x55, 0x67, 0x83, 0xd1, 0x7b, 0x1e, 0x72, 0x55, 0xe1, 0x12, 0x93, + 0xe7, 0xf6, 0x4c, 0xdb, 0x34, 0x34, 0x06, 0x23, 0x57, 0x60, 0xa8, 0x8c, 0x7e, 0x96, 0xe2, 0xce, + 0x37, 0xcf, 0xd7, 0xbf, 0x06, 0xc2, 0xd0, 0xdd, 0xda, 0x47, 0x93, 0xf7, 0xc1, 0x60, 0xd5, 0xb1, + 0x57, 0x1d, 0xbd, 0x25, 0xd6, 0x60, 0xf4, 0x49, 0x6a, 0x73, 0x90, 0xe6, 0xe3, 0xd4, 0xcf, 0x67, + 0x7c, 0x53, 0x9f, 0x95, 0xa8, 0x75, 0xf0, 0x7a, 0x02, 0xeb, 0x1e, 0xe2, 0x25, 0x5c, 0x0e, 0xd2, + 0x7c, 0x1c, 0xb9, 0x02, 0xfd, 0x33, 0x8e, 0x63, 0x3b, 0xf2, 0xa3, 0x15, 0xca, 0x00, 0xf2, 0xe5, + 0x3c, 0x52, 0x90, 0x0f, 0xc2, 0x08, 0x9f, 0x73, 0xf8, 0xa9, 0x6e, 0xae, 0xd7, 0xbd, 0xb6, 0x4c, + 0xa9, 0xfe, 0x42, 0x4e, 0xb2, 0xd9, 0xb8, 0xc4, 0x1f, 0xc1, 0x9b, 0x91, 0x97, 0x20, 0x37, 0x55, + 0x9d, 0x15, 0x13, 0xe0, 0x29, 0xbf, 0xa8, 0xa4, 0x2a, 0xb1, 0x72, 0x8c, 0x9a, 0x5c, 0x84, 0x7c, + 0x95, 0xa9, 0xcf, 0x00, 0xaa, 0xc7, 0xd0, 0xf6, 0x56, 0x31, 0xdf, 0x66, 0xfa, 0x83, 0x50, 0xc4, + 0xb2, 0x0d, 0x10, 0xdf, 0x65, 0x71, 0x6c, 0xb8, 0xf7, 0xb9, 0x08, 0xf9, 0x92, 0xb3, 0x7a, 0x5f, + 0xcc, 0x5a, 0x88, 0xd5, 0x9d, 0xd5, 0xfb, 0x1a, 0x42, 0xc9, 0x35, 0x00, 0x8d, 0x7a, 0x1d, 0xc7, + 0xc2, 0xf7, 0x64, 0xc3, 0x78, 0x9e, 0x88, 0xb3, 0xa1, 0x83, 0xd0, 0x7a, 0xc3, 0x36, 0xa8, 0x26, + 0x91, 0xa8, 0xff, 0x38, 0xbc, 0xdc, 0x9a, 0x36, 0xdd, 0xf5, 0x93, 0x2e, 0xdc, 0x43, 0x17, 0xea, + 0xe2, 0xc8, 0x36, 0xd9, 0x49, 0x45, 0xe8, 0x9f, 0x6d, 0xea, 0xab, 0x2e, 0xf6, 0xa1, 0x70, 0x5a, + 0xbc, 0xc7, 0x00, 0x1a, 0x87, 0xc7, 0xfa, 0x69, 0x68, 0xe7, 0x7e, 0xfa, 0xb6, 
0xfe, 0x60, 0xb4, + 0x2d, 0x52, 0x6f, 0xc3, 0x76, 0x4e, 0xba, 0x6a, 0xb7, 0x5d, 0x75, 0x09, 0x06, 0x6b, 0x4e, 0x43, + 0x3a, 0xee, 0xc0, 0xfd, 0x80, 0xeb, 0x34, 0xf8, 0x51, 0x87, 0x8f, 0x64, 0x74, 0xd3, 0xae, 0x87, + 0x74, 0x83, 0x21, 0x9d, 0xe1, 0x7a, 0x82, 0x4e, 0x20, 0x05, 0x5d, 0xd5, 0x76, 0x3c, 0xd1, 0x71, + 0x01, 0x5d, 0xdb, 0x76, 0x3c, 0xcd, 0x47, 0x92, 0xf7, 0x03, 0x2c, 0x97, 0xab, 0xfe, 0xab, 0x8e, + 0xe1, 0xd0, 0xe9, 0x54, 0x3c, 0xe7, 0xd0, 0x24, 0x34, 0x59, 0x86, 0xe1, 0xa5, 0x36, 0x75, 0xf8, + 0x56, 0x88, 0xbf, 0x10, 0x7b, 0x36, 0x26, 0x5a, 0xd1, 0xef, 0x57, 0xc5, 0xff, 0x01, 0x39, 0x5f, + 0x5f, 0x6c, 0xff, 0xa7, 0x16, 0x32, 0x22, 0x1f, 0x84, 0x81, 0x12, 0xb7, 0xf3, 0x46, 0x90, 0x65, + 0x20, 0x32, 0xdc, 0x82, 0x72, 0x14, 0xdf, 0xb3, 0xeb, 0xf8, 0xb7, 0x26, 0xc8, 0xd5, 0x2b, 0x50, + 0x88, 0x57, 0x43, 0x46, 0x60, 0xb0, 0xbc, 0xb4, 0xb8, 0x38, 0x53, 0x5e, 0x2e, 0xf4, 0x91, 0x21, + 0xc8, 0xd7, 0x66, 0x16, 0xa7, 0x0b, 0x19, 0xf5, 0x07, 0xa4, 0x19, 0x84, 0xa9, 0xd6, 0xc9, 0xf5, + 0xf8, 0x81, 0xee, 0x8f, 0x0a, 0x78, 0x27, 0x8c, 0x27, 0x06, 0x2d, 0xd3, 0xf3, 0xa8, 0x21, 0x56, + 0x09, 0xbc, 0x33, 0xf5, 0x1e, 0x68, 0x09, 0x3c, 0x79, 0x1e, 0xc6, 0x10, 0x26, 0xae, 0x49, 0xf9, + 0xfe, 0x58, 0x14, 0x70, 0x1e, 0x68, 0x51, 0xa4, 0xfa, 0xab, 0xe1, 0x0d, 0xf9, 0x3c, 0xd5, 0x8f, + 0xeb, 0xad, 0xea, 0x43, 0xd2, 0x5f, 0xea, 0x4f, 0xf6, 0xf3, 0xb7, 0x46, 0xfc, 0x01, 0xf0, 0x51, + 0x88, 0x32, 0x3c, 0x06, 0xce, 0xed, 0xe1, 0x18, 0xf8, 0x79, 0x18, 0x58, 0xa0, 0xde, 0x9a, 0xed, + 0xbb, 0xe9, 0xa1, 0x5f, 0x4c, 0x0b, 0x21, 0xb2, 0x5f, 0x0c, 0xa7, 0x21, 0xeb, 0x40, 0xfc, 0xd7, + 0xbd, 0x81, 0xc7, 0xbf, 0x7f, 0xec, 0x7c, 0x2e, 0xb1, 0x4f, 0xa9, 0x61, 0x0c, 0x00, 0x7c, 0xcc, + 0x71, 0x3a, 0x78, 0x51, 0x20, 0xf9, 0xcd, 0xfd, 0x87, 0xad, 0xe2, 0x00, 0xa7, 0xd1, 0x52, 0xd8, + 0x92, 0x8f, 0xc1, 0xf0, 0xc2, 0x6c, 0x49, 0xbc, 0xf4, 0xe5, 0x9e, 0x21, 0xe7, 0x03, 0x29, 0xfa, + 0x88, 0x40, 0x24, 0xf8, 0xb0, 0xab, 0x75, 0x4f, 0x4f, 0x3e, 0xf4, 0x0d, 0xb9, 0x30, 0x6d, 0xe1, + 0x4f, 0xc4, 0xc4, 
0xe9, 0x42, 0xa0, 0x2d, 0xd1, 0x87, 0x63, 0x71, 0x59, 0x71, 0x6c, 0x4c, 0x5b, + 0x86, 0x0e, 0x30, 0xba, 0x97, 0x60, 0xb2, 0xd4, 0x6e, 0x37, 0x4d, 0x6a, 0xa0, 0xbe, 0x68, 0x9d, + 0x26, 0x75, 0x85, 0x57, 0x15, 0xbe, 0x3a, 0xd2, 0x39, 0xb2, 0x8e, 0xef, 0xcb, 0xeb, 0x4e, 0x27, + 0xea, 0x4d, 0x9b, 0x2c, 0x8b, 0xcf, 0xf9, 0x39, 0x7b, 0xdb, 0xa9, 0x4c, 0x0b, 0xbf, 0x2a, 0xfe, + 0x9c, 0xdf, 0x07, 0x47, 0xbd, 0x4c, 0x65, 0x72, 0xf5, 0x5b, 0xb3, 0x70, 0xb6, 0xec, 0x50, 0xdd, + 0xa3, 0x0b, 0xb3, 0xa5, 0x52, 0x07, 0xfd, 0x21, 0x9b, 0x4d, 0x6a, 0xad, 0x1e, 0xcd, 0xa4, 0xf0, + 0x3a, 0x8c, 0x07, 0x0d, 0xa8, 0x35, 0xec, 0x36, 0x95, 0xdf, 0xff, 0x35, 0x7c, 0x4c, 0xdd, 0x65, + 0x28, 0x2d, 0x46, 0x4a, 0x6e, 0xc1, 0xa9, 0x00, 0x52, 0x6a, 0x36, 0xed, 0x0d, 0x8d, 0x76, 0x5c, + 0xee, 0x74, 0x3d, 0xc4, 0x9d, 0xae, 0x43, 0x0e, 0x3a, 0xc3, 0xd7, 0x1d, 0x46, 0xa0, 0xa5, 0x95, + 0x52, 0xbf, 0x90, 0x83, 0x73, 0x77, 0xf4, 0xa6, 0x69, 0x84, 0xa2, 0xd1, 0xa8, 0xdb, 0xb6, 0x2d, + 0x97, 0x1e, 0xa3, 0x31, 0x1e, 0x19, 0x48, 0xf9, 0x43, 0x19, 0x48, 0xc9, 0x2e, 0xea, 0x3f, 0x70, + 0x17, 0x0d, 0xec, 0xab, 0x8b, 0xfe, 0x6d, 0x06, 0x0a, 0xfe, 0xfb, 0x14, 0x39, 0xa6, 0x83, 0xf4, + 0x78, 0x02, 0x0f, 0x20, 0x63, 0x3e, 0xf6, 0x88, 0x27, 0x35, 0x18, 0x9c, 0x79, 0xd0, 0x36, 0x1d, + 0xea, 0xee, 0xe2, 0x81, 0xc0, 0xe3, 0xe2, 0xb0, 0x65, 0x92, 0xf2, 0x22, 0x89, 0x73, 0x16, 0x0e, + 0xc6, 0x57, 0xa7, 0xfc, 0x85, 0xce, 0x94, 0x1f, 0xa8, 0x82, 0xbf, 0x3a, 0x15, 0x2f, 0x79, 0x22, + 0xcf, 0x88, 0x43, 0x52, 0xf2, 0x34, 0xe4, 0x96, 0x97, 0xe7, 0xc5, 0x3c, 0x8c, 0x01, 0x42, 0x3c, + 0x4f, 0x7e, 0x56, 0xcb, 0xb0, 0xea, 0x1f, 0x67, 0xf9, 0xd3, 0x77, 0x3e, 0x5c, 0x8f, 0x44, 0x09, + 0xa7, 0x60, 0xc8, 0x17, 0xb8, 0x50, 0xc3, 0xe0, 0x71, 0x49, 0xbc, 0x23, 0xe2, 0x75, 0x07, 0x0f, + 0x89, 0x8a, 0xfe, 0xa3, 0x01, 0x7e, 0x8b, 0x80, 0xfb, 0x22, 0x7c, 0x34, 0xe0, 0x3f, 0x15, 0x78, + 0x3f, 0x0c, 0x07, 0x33, 0x94, 0x7c, 0x7b, 0x10, 0x4c, 0x67, 0x5a, 0x88, 0x8f, 0x4d, 0xcc, 0x03, + 0x07, 0x58, 0xc6, 0x7d, 0xf1, 0xf2, 0x5e, 0x39, 0x11, 
0xef, 0x21, 0x8b, 0xf7, 0x9b, 0x84, 0x78, + 0xf9, 0x43, 0xb3, 0x63, 0x2b, 0xde, 0x43, 0x3b, 0x39, 0x57, 0x7f, 0x3b, 0x03, 0x84, 0x35, 0xab, + 0xaa, 0xbb, 0xee, 0x86, 0xed, 0x18, 0xfc, 0x21, 0xc2, 0x91, 0x08, 0xe6, 0xf0, 0x6e, 0x3b, 0xbf, + 0x6e, 0x18, 0x4e, 0x45, 0x5c, 0xa7, 0x8f, 0xf9, 0x64, 0x75, 0x25, 0x3a, 0x9a, 0x7a, 0xbd, 0x70, + 0x7a, 0x46, 0xbe, 0x4e, 0xed, 0x8f, 0xbc, 0x93, 0x94, 0xee, 0x51, 0x3f, 0x00, 0xa3, 0xe2, 0x07, + 0x5b, 0xa1, 0xfd, 0x7b, 0x32, 0x1c, 0xa5, 0x2e, 0x03, 0x68, 0x11, 0x34, 0x79, 0x05, 0x86, 0xd9, + 0x80, 0x59, 0xc5, 0x58, 0x42, 0x83, 0xe1, 0xeb, 0x21, 0xc3, 0x07, 0xca, 0xeb, 0x49, 0x40, 0x29, + 0xb9, 0xbc, 0x0f, 0xed, 0xe2, 0x8d, 0xda, 0xa7, 0x60, 0xa4, 0x64, 0x59, 0x36, 0x8f, 0x8a, 0xe2, + 0x8a, 0x8b, 0x8d, 0xae, 0x36, 0xfd, 0xd3, 0x18, 0xc3, 0x21, 0xa4, 0x4f, 0x35, 0xea, 0x65, 0x86, + 0xe4, 0xba, 0xff, 0x02, 0x8a, 0x3a, 0xc2, 0x3c, 0xc5, 0xcb, 0x1d, 0x47, 0xc0, 0x92, 0x0f, 0xa0, + 0xb0, 0xf3, 0xc6, 0xaa, 0x8e, 0xdd, 0xb6, 0x5d, 0x6a, 0x70, 0x41, 0x8d, 0x84, 0x11, 0x31, 0xda, + 0x02, 0x81, 0xcf, 0x2d, 0x23, 0x71, 0x7d, 0x22, 0x45, 0xc8, 0x3d, 0x38, 0xed, 0x5f, 0x33, 0x07, + 0x0f, 0x5b, 0x2b, 0xd3, 0x2e, 0x3e, 0x1b, 0x18, 0x09, 0x83, 0xd7, 0x84, 0xa8, 0xa9, 0x27, 0xfc, + 0x4b, 0x15, 0xff, 0x65, 0x6c, 0xdd, 0x34, 0xe4, 0xae, 0x4e, 0xe5, 0x47, 0xbe, 0x12, 0x46, 0x16, + 0xf4, 0x07, 0xd3, 0x1d, 0x71, 0x72, 0x33, 0xb6, 0xfb, 0xbb, 0x9b, 0x96, 0xfe, 0xa0, 0x6e, 0x88, + 0x72, 0x31, 0x9b, 0x42, 0x66, 0x49, 0xea, 0x70, 0xb6, 0xea, 0xd8, 0x2d, 0xdb, 0xa3, 0x46, 0xec, + 0x8d, 0xe8, 0x44, 0xf8, 0xa8, 0xbc, 0x2d, 0x28, 0xea, 0x3d, 0x1e, 0x8b, 0x76, 0x61, 0x43, 0x5a, + 0x30, 0x51, 0x72, 0xdd, 0x4e, 0x8b, 0x86, 0xf7, 0x5b, 0x85, 0x1d, 0x3f, 0xe3, 0x59, 0xe1, 0xf7, + 0xfd, 0x98, 0x8e, 0x45, 0xf9, 0xf5, 0x56, 0xdd, 0x33, 0xe5, 0x1a, 0xf1, 0x5b, 0xe2, 0xbc, 0x59, + 0xef, 0xfa, 0x02, 0xc4, 0xf0, 0x06, 0xca, 0x24, 0x0e, 0x2f, 0xec, 0xdd, 0x40, 0xf4, 0x18, 0x1a, + 0x41, 0xee, 0xdd, 0x48, 0x91, 0x9b, 0xf9, 0xa1, 0xf1, 0xc2, 0x84, 0x76, 0x2e, 0xf9, 0x41, 
0xfc, + 0xf5, 0xd4, 0x3f, 0xcc, 0xc6, 0x66, 0x22, 0x6e, 0xa3, 0x1d, 0x68, 0x26, 0x92, 0x67, 0x94, 0xec, + 0x3e, 0x67, 0x94, 0x67, 0x92, 0x5e, 0x17, 0x29, 0xd3, 0xc4, 0x57, 0xc2, 0xb8, 0x5f, 0x02, 0xdb, + 0xbd, 0x19, 0x2c, 0x35, 0xdd, 0xbb, 0xe3, 0xa2, 0xe8, 0x8e, 0x02, 0x1a, 0xa9, 0x9b, 0xb1, 0x3e, + 0x88, 0xf1, 0x53, 0x7f, 0x26, 0x03, 0x10, 0x2a, 0x31, 0xf9, 0x40, 0x34, 0x28, 0x5c, 0x26, 0xbc, + 0x8a, 0x12, 0x81, 0x4c, 0x22, 0x51, 0xe0, 0xc8, 0x45, 0xc8, 0x63, 0xb0, 0x9b, 0x6c, 0x78, 0xf4, + 0xbd, 0x6e, 0x5a, 0x86, 0x86, 0x50, 0x86, 0x95, 0xa2, 0x52, 0x20, 0x16, 0xdd, 0x2e, 0xb8, 0xe5, + 0x3d, 0x0d, 0x13, 0xb5, 0xce, 0x8a, 0xdc, 0x99, 0x72, 0x9c, 0x33, 0xb7, 0xb3, 0x12, 0xbc, 0x4b, + 0x8f, 0x44, 0x34, 0x8a, 0x16, 0x51, 0x7f, 0x28, 0x13, 0xeb, 0xdf, 0x23, 0x34, 0x2c, 0x76, 0xd5, + 0xa7, 0xea, 0x6f, 0xe5, 0x60, 0xa4, 0x6a, 0x3b, 0x9e, 0x88, 0x1e, 0x74, 0xbc, 0x57, 0x7a, 0x69, + 0x3f, 0x9a, 0xdf, 0xc3, 0x7e, 0xf4, 0x22, 0xe4, 0x25, 0xa7, 0x78, 0x7e, 0x73, 0x65, 0x18, 0x8e, + 0x86, 0xd0, 0x2f, 0xf1, 0xc3, 0xa0, 0xe4, 0x35, 0xf5, 0xe0, 0x81, 0x9d, 0x41, 0xbe, 0x2a, 0x0b, + 0xf0, 0xf6, 0x8b, 0x2f, 0x3e, 0xc2, 0x5d, 0xaa, 0x7e, 0x67, 0x06, 0x26, 0xc4, 0xe5, 0xaf, 0x14, + 0x10, 0x72, 0xd0, 0xbf, 0xb6, 0x97, 0x67, 0x12, 0x0e, 0xd2, 0x7c, 0x1c, 0x33, 0x0c, 0x66, 0x1e, + 0x98, 0x1e, 0xde, 0x7f, 0x49, 0x11, 0x21, 0xa9, 0x80, 0xc9, 0x86, 0x81, 0x4f, 0x47, 0x3e, 0xe0, + 0x5f, 0x6b, 0xe7, 0x42, 0x6b, 0x88, 0x15, 0x98, 0x49, 0xbd, 0xda, 0x56, 0x7f, 0x34, 0x0f, 0xf9, + 0x99, 0x07, 0xb4, 0x71, 0xcc, 0xbb, 0x46, 0x3a, 0x2c, 0xcf, 0x1f, 0xf0, 0xb0, 0x7c, 0x3f, 0x7e, + 0x3a, 0x6f, 0x86, 0xfd, 0x39, 0x10, 0xad, 0x3e, 0xd6, 0xf3, 0xf1, 0xea, 0xfd, 0x9e, 0x3e, 0x7e, + 0x6e, 0x5e, 0xbf, 0x98, 0x83, 0x5c, 0xad, 0x5c, 0x3d, 0xd1, 0x9b, 0x23, 0xd5, 0x9b, 0xde, 0x7e, + 0x10, 0x6a, 0x70, 0xb5, 0x39, 0x14, 0x7a, 0x1e, 0xc7, 0x6e, 0x31, 0xff, 0x22, 0x07, 0xe3, 0xb5, + 0xd9, 0xe5, 0xaa, 0x74, 0xbb, 0x70, 0x8b, 0x7b, 0x87, 0xa2, 0x9f, 0x22, 0xef, 0xd2, 0x8b, 0x09, + 0xb3, 0xea, 0x76, 0xc5, 0xf2, 
0x5e, 0x7d, 0x19, 0x03, 0x49, 0xe2, 0x81, 0x1c, 0xf7, 0x3f, 0x77, + 0xcd, 0xf7, 0xe8, 0x17, 0x30, 0xd4, 0x88, 0xcf, 0x80, 0xbc, 0x0e, 0xb9, 0xdb, 0xc2, 0xcb, 0xa7, + 0x1b, 0x9f, 0x97, 0xae, 0x73, 0x3e, 0x6c, 0x12, 0xcc, 0x75, 0x4c, 0x03, 0x39, 0xb0, 0x52, 0xac, + 0xf0, 0x0d, 0x61, 0x32, 0xec, 0xaa, 0xf0, 0xaa, 0x5f, 0xf8, 0x46, 0x65, 0x9a, 0xd4, 0x60, 0xa4, + 0x4a, 0x9d, 0x96, 0x89, 0x1d, 0xe5, 0xcf, 0xd9, 0xbd, 0x99, 0xb0, 0xfd, 0xeb, 0x48, 0x3b, 0x2c, + 0x84, 0xcc, 0x64, 0x2e, 0xe4, 0x1d, 0x00, 0x6e, 0x55, 0xed, 0x32, 0xc8, 0xf0, 0xe3, 0xb8, 0x1b, + 0xe4, 0x1b, 0x8e, 0x14, 0xcb, 0x5f, 0x62, 0x46, 0xd6, 0xa1, 0xb0, 0x60, 0x1b, 0xe6, 0x3d, 0x93, + 0xbb, 0xf3, 0x62, 0x05, 0x03, 0x3b, 0x3b, 0xd1, 0xb1, 0x0d, 0x46, 0x4b, 0x2a, 0x97, 0x56, 0x4d, + 0x82, 0xb1, 0xfa, 0xb3, 0xfd, 0x90, 0x67, 0xdd, 0x7e, 0x32, 0x7e, 0x0f, 0x32, 0x7e, 0x4b, 0x50, + 0xb8, 0x6b, 0x3b, 0xeb, 0xa6, 0xb5, 0x1a, 0xbc, 0xce, 0x10, 0x27, 0x16, 0xe8, 0x1d, 0xb6, 0xc1, + 0x71, 0xf5, 0xe0, 0x21, 0x87, 0x96, 0x20, 0xdf, 0x61, 0x04, 0xbf, 0x06, 0xc0, 0x43, 0x54, 0x20, + 0xcd, 0x50, 0x18, 0x1e, 0x87, 0x07, 0xb0, 0xc0, 0x07, 0x1f, 0x72, 0x78, 0x9c, 0x90, 0x98, 0x5c, + 0xf1, 0xfd, 0x6b, 0x86, 0xf1, 0xfd, 0x07, 0x1e, 0xcd, 0xa0, 0x7f, 0x8d, 0x6c, 0x04, 0x70, 0x4f, + 0x9b, 0x2a, 0x80, 0x74, 0x67, 0x09, 0x31, 0x41, 0x44, 0x26, 0x07, 0x11, 0xdb, 0x32, 0xe5, 0xca, + 0x52, 0x93, 0x78, 0x90, 0x57, 0x63, 0x4e, 0x15, 0x24, 0xc2, 0xad, 0xab, 0x4f, 0x45, 0xe8, 0x94, + 0x37, 0xba, 0x93, 0x53, 0x9e, 0xfa, 0xfd, 0x39, 0x18, 0x61, 0xdc, 0x6a, 0x9d, 0x56, 0x4b, 0x77, + 0x36, 0x4f, 0x14, 0xf9, 0x20, 0x8a, 0x5c, 0x87, 0x49, 0xf9, 0x11, 0x06, 0x33, 0x5d, 0xfd, 0x38, + 0x69, 0xc1, 0x16, 0x3e, 0x4e, 0xc0, 0x6d, 0x4b, 0x9c, 0xf7, 0x3d, 0x01, 0xc6, 0x13, 0x27, 0x57, + 0x4b, 0xf2, 0x52, 0xbf, 0x25, 0x03, 0x85, 0x38, 0x34, 0xd0, 0xfd, 0x4c, 0xaa, 0xee, 0x3f, 0x0f, + 0xc3, 0xc2, 0x2d, 0x43, 0x37, 0x84, 0x97, 0xe8, 0xf8, 0xf6, 0x56, 0x11, 0x30, 0x2e, 0x40, 0xdd, + 0xa1, 0xba, 0xa1, 0x85, 0x04, 0xe4, 0x15, 0x18, 0xc5, 0x1f, 0x77, 
0x1d, 0xd3, 0xf3, 0x28, 0xef, + 0x8c, 0x3c, 0xbf, 0x2b, 0xe2, 0x05, 0x36, 0x38, 0x42, 0x8b, 0x90, 0xa9, 0xbf, 0x92, 0x85, 0xe1, + 0x5a, 0x67, 0xc5, 0xdd, 0x74, 0x3d, 0xda, 0x3a, 0xe6, 0x3a, 0xe4, 0x1f, 0x2b, 0xe4, 0x53, 0x8f, + 0x15, 0x9e, 0xf6, 0x87, 0x96, 0x74, 0xa7, 0x11, 0x6c, 0x0c, 0x7c, 0x4f, 0xd7, 0x50, 0x8b, 0x06, + 0xf6, 0xae, 0x45, 0xea, 0x3f, 0xcb, 0x42, 0x81, 0x3b, 0x04, 0x4c, 0x9b, 0x6e, 0xe3, 0x10, 0x1e, + 0x29, 0x1d, 0xbd, 0x4c, 0x0f, 0xe6, 0x44, 0xb3, 0x8b, 0xa7, 0x5f, 0xea, 0xcf, 0x66, 0x61, 0xa4, + 0xd4, 0xf1, 0xd6, 0x4a, 0x1e, 0xce, 0x6f, 0x8f, 0xe4, 0xb1, 0xc7, 0x81, 0x26, 0x2f, 0xf5, 0x97, + 0x33, 0x3c, 0xbc, 0xf8, 0xb2, 0xbd, 0x4e, 0xad, 0x43, 0xb8, 0x50, 0x39, 0x8c, 0x63, 0x4c, 0xbf, + 0x27, 0x72, 0x7b, 0xeb, 0x09, 0xbc, 0x06, 0xd4, 0xec, 0x26, 0x3d, 0xde, 0x9f, 0x71, 0x88, 0xd7, + 0x80, 0xbe, 0x40, 0x0e, 0xe1, 0xda, 0xf9, 0xcb, 0x4b, 0x20, 0x87, 0x70, 0x9e, 0xfb, 0xe5, 0x21, + 0x90, 0x5f, 0xc8, 0xc0, 0xf0, 0x94, 0xed, 0x1d, 0xf3, 0x81, 0x2f, 0xbe, 0xe2, 0x78, 0xab, 0xb9, + 0xff, 0x15, 0xc7, 0x5b, 0x37, 0xd5, 0x6f, 0xcf, 0xc2, 0x69, 0x91, 0x02, 0x45, 0x9c, 0xa0, 0x9d, + 0x4c, 0xc7, 0x62, 0xb0, 0x25, 0x45, 0x73, 0x32, 0x0f, 0x09, 0xd1, 0x7c, 0x6f, 0x0e, 0x4e, 0x63, + 0x24, 0x71, 0xb6, 0x1f, 0xfb, 0x32, 0xb0, 0x45, 0x48, 0x23, 0xea, 0xdc, 0xb1, 0x90, 0xe2, 0xdc, + 0xf1, 0x1f, 0xb6, 0x8a, 0xaf, 0xee, 0x21, 0x7b, 0xcc, 0xd5, 0x1a, 0xee, 0xb3, 0x18, 0x57, 0xdf, + 0x2d, 0xc4, 0x05, 0xb8, 0x69, 0x9b, 0x96, 0xf0, 0xb4, 0xe6, 0x66, 0x72, 0x6d, 0x7b, 0xab, 0x78, + 0xe6, 0x5d, 0xdb, 0xb4, 0xea, 0x71, 0x77, 0xeb, 0xbd, 0xd6, 0x17, 0xb2, 0xd6, 0xa4, 0x6a, 0xd4, + 0xdf, 0xc9, 0xc0, 0xf9, 0xa8, 0x16, 0x7f, 0x39, 0xd8, 0x8e, 0x7f, 0x3f, 0x0b, 0x67, 0x6e, 0xa0, + 0x70, 0x02, 0x07, 0xb5, 0x93, 0x79, 0x4b, 0x0c, 0xce, 0x14, 0xd9, 0x9c, 0x58, 0x94, 0xdd, 0x65, + 0x73, 0x32, 0xa9, 0x0b, 0xd9, 0xfc, 0x7a, 0x06, 0x4e, 0x2d, 0x55, 0xa6, 0xcb, 0x5f, 0x26, 0x23, + 0x2a, 0xf9, 0x3d, 0xc7, 0xdc, 0xe0, 0x4c, 0x7c, 0xcf, 0x31, 0x37, 0x3d, 0x3f, 0x97, 0x85, 0x53, + 0xb5, 
0xd2, 0xc2, 0xfc, 0x97, 0xcb, 0x0c, 0x5e, 0x96, 0xbd, 0xa9, 0xfd, 0x23, 0x34, 0x61, 0x0b, + 0xc8, 0x9f, 0x79, 0xe7, 0x7a, 0x77, 0x2f, 0xeb, 0xa4, 0x50, 0x8e, 0xf9, 0xd4, 0x7d, 0x28, 0x42, + 0x61, 0x9a, 0x1f, 0xa1, 0x3e, 0xe6, 0x9a, 0xff, 0x2f, 0x07, 0x60, 0xe4, 0x56, 0x67, 0x85, 0x0a, + 0x87, 0xb0, 0x47, 0xfa, 0xdc, 0xf8, 0x3a, 0x8c, 0x08, 0x31, 0xe0, 0xfd, 0x88, 0x14, 0x22, 0x55, + 0x84, 0x88, 0xe2, 0x91, 0xde, 0x64, 0x22, 0x72, 0x11, 0xf2, 0x77, 0xa8, 0xb3, 0x22, 0xbf, 0xb6, + 0xbf, 0x4f, 0x9d, 0x15, 0x0d, 0xa1, 0x64, 0x3e, 0x7c, 0x0a, 0x54, 0xaa, 0x56, 0x30, 0x8f, 0x99, + 0xb8, 0x72, 0xc4, 0xc4, 0x6c, 0x81, 0x53, 0xa9, 0xde, 0x36, 0x79, 0x06, 0x34, 0x39, 0xd2, 0x47, + 0xbc, 0x24, 0x59, 0x84, 0xc9, 0x88, 0xb3, 0x29, 0x26, 0xf1, 0x1a, 0x4a, 0x61, 0x97, 0x96, 0xbe, + 0x2b, 0x59, 0x94, 0xbc, 0x09, 0xa3, 0x3e, 0x10, 0xdd, 0x26, 0x87, 0xc3, 0xcc, 0x31, 0x01, 0xab, + 0x58, 0x76, 0x8e, 0x48, 0x01, 0x99, 0x01, 0x5e, 0x81, 0x40, 0x0a, 0x83, 0x98, 0xab, 0x6f, 0xa4, + 0x00, 0x79, 0x05, 0x19, 0xe0, 0xf3, 0x35, 0x74, 0xb7, 0x1a, 0xc1, 0xa7, 0xe8, 0x78, 0x7d, 0xe4, + 0x08, 0x38, 0x0f, 0x38, 0x10, 0x21, 0x23, 0x4b, 0x00, 0xa1, 0x5b, 0x8c, 0x08, 0xeb, 0xb2, 0x67, + 0x87, 0x1d, 0x89, 0x85, 0x7c, 0x0f, 0x38, 0xb6, 0x9f, 0x7b, 0x40, 0xf5, 0x5b, 0x73, 0x30, 0x52, + 0x6a, 0xb7, 0x83, 0xa1, 0xf0, 0x01, 0x18, 0x28, 0xb5, 0xdb, 0xb7, 0xb5, 0x8a, 0x9c, 0x8e, 0x43, + 0x6f, 0xb7, 0xeb, 0x1d, 0xc7, 0x94, 0x7d, 0xdd, 0x39, 0x11, 0x29, 0xc3, 0x58, 0xa9, 0xdd, 0xae, + 0x76, 0x56, 0x9a, 0x66, 0x43, 0x4a, 0x4c, 0xc8, 0x53, 0xe4, 0xb6, 0xdb, 0xf5, 0x36, 0x62, 0xe2, + 0xd9, 0x29, 0xa3, 0x65, 0xc8, 0xa7, 0x30, 0x18, 0x9a, 0xc8, 0x8b, 0xc7, 0x33, 0x6f, 0xa9, 0x41, + 0x22, 0x8e, 0xb0, 0x6d, 0x57, 0x03, 0x22, 0x9e, 0xb0, 0xe4, 0xa2, 0x9f, 0x66, 0x86, 0x55, 0x94, + 0xc8, 0x7f, 0x17, 0xb2, 0x24, 0x2f, 0xc0, 0x60, 0xa9, 0xdd, 0x96, 0xee, 0xba, 0xd0, 0x2d, 0x8e, + 0x95, 0x8a, 0xa7, 0x78, 0x15, 0x64, 0xe2, 0xb3, 0xc4, 0xed, 0xb8, 0xed, 0x78, 0x38, 0xa4, 0xc6, + 0xc2, 0xcf, 0xf2, 0xaf, 0xd3, 0x6d, 0x39, 
0xfe, 0x90, 0x16, 0x2d, 0x73, 0xe1, 0x0d, 0x18, 0x8f, + 0xb6, 0x78, 0x4f, 0x59, 0x53, 0xfe, 0x2a, 0x83, 0x52, 0x39, 0xe6, 0x0f, 0x3e, 0x5e, 0x82, 0x5c, + 0xa9, 0xdd, 0x16, 0x93, 0xda, 0xa9, 0x94, 0x4e, 0x8d, 0x47, 0x97, 0x28, 0xb5, 0xdb, 0xfe, 0xa7, + 0x1f, 0xf3, 0x97, 0x63, 0xfb, 0xfa, 0xf4, 0x5f, 0xe0, 0x9f, 0x7e, 0xbc, 0x5f, 0x75, 0xa9, 0x3f, + 0x9a, 0x83, 0x89, 0x52, 0xbb, 0x7d, 0x92, 0x22, 0xe5, 0xb0, 0x62, 0x58, 0xbc, 0x08, 0x20, 0xcd, + 0xb1, 0x83, 0xc1, 0xbb, 0xd6, 0x11, 0x69, 0x7e, 0x55, 0x32, 0x9a, 0x44, 0xe4, 0xab, 0xdf, 0xd0, + 0x9e, 0xd4, 0xef, 0xab, 0x73, 0x38, 0xf1, 0x1d, 0xf7, 0x78, 0x7c, 0x0f, 0x4b, 0xb7, 0x89, 0x3e, + 0x18, 0xd8, 0x53, 0x1f, 0xfc, 0x7c, 0x64, 0xf0, 0x60, 0x4e, 0x8c, 0x93, 0x5e, 0xe8, 0x3f, 0x90, + 0x6d, 0x3d, 0x2e, 0x0b, 0x53, 0x04, 0x09, 0xf3, 0x93, 0x21, 0x8a, 0x90, 0x75, 0x0d, 0x86, 0xaa, + 0x9b, 0x86, 0x16, 0xa3, 0xf5, 0xfb, 0x70, 0x70, 0x4f, 0x7d, 0xb8, 0x95, 0xc5, 0xb0, 0x14, 0x41, + 0xc8, 0xbb, 0x83, 0x6f, 0x51, 0xae, 0x01, 0x70, 0xe7, 0x87, 0xc0, 0xbb, 0x7f, 0x8c, 0x47, 0xb7, + 0xe2, 0x39, 0x12, 0x45, 0x74, 0xab, 0x90, 0x24, 0x70, 0x96, 0xca, 0xa5, 0x3a, 0x4b, 0x5d, 0x81, + 0x21, 0x4d, 0xdf, 0xf8, 0x58, 0x87, 0x8a, 0xa7, 0x50, 0x7e, 0x44, 0x59, 0x7d, 0xa3, 0xfe, 0x69, + 0x06, 0xd4, 0x02, 0x34, 0x51, 0x83, 0xb8, 0x26, 0x92, 0x53, 0x0a, 0x3f, 0x68, 0x0f, 0xa2, 0x99, + 0xec, 0x47, 0xd1, 0xc9, 0x87, 0x21, 0x57, 0xba, 0x5b, 0x13, 0x92, 0x0d, 0xba, 0xb6, 0x74, 0xb7, + 0x26, 0xe4, 0xd5, 0xb5, 0xec, 0xdd, 0x9a, 0xfa, 0xd5, 0x59, 0x20, 0x49, 0x4a, 0xf2, 0x2a, 0x0c, + 0x23, 0x74, 0x95, 0xe9, 0x8c, 0x9c, 0x5c, 0x7b, 0xc3, 0xad, 0x3b, 0x08, 0x8d, 0x58, 0x88, 0x3e, + 0x29, 0x79, 0x0d, 0xa0, 0x74, 0xb7, 0x26, 0xd2, 0xbb, 0x46, 0x92, 0x6b, 0x6f, 0xb8, 0x75, 0x91, + 0x5d, 0x36, 0xe2, 0xba, 0x18, 0x10, 0xa3, 0x71, 0x79, 0xb7, 0x36, 0x67, 0xbb, 0x9e, 0x10, 0x35, + 0x37, 0x2e, 0x37, 0x5c, 0xcc, 0xea, 0x1e, 0x31, 0x2e, 0x39, 0x19, 0x66, 0xa6, 0xbc, 0x5b, 0xe3, + 0x6f, 0xf8, 0x0c, 0xcd, 0x0e, 0xb2, 0x40, 0xf2, 0xcc, 0x94, 0x1b, 0x6e, 0x9d, 
0xbf, 0xff, 0x33, + 0x30, 0x6f, 0x7f, 0x24, 0x33, 0x65, 0xa4, 0x94, 0xfa, 0x8d, 0x43, 0x50, 0x98, 0xd6, 0x3d, 0x7d, + 0x45, 0x77, 0xa9, 0xb4, 0x25, 0x9f, 0xf0, 0x61, 0xfe, 0xe7, 0x48, 0x72, 0x30, 0x56, 0x52, 0xbe, + 0x26, 0x5e, 0x80, 0xbc, 0x1e, 0xf2, 0x0d, 0xf2, 0x86, 0xcb, 0x89, 0x48, 0x57, 0xea, 0x6d, 0x01, + 0xd6, 0x12, 0x84, 0xe4, 0x79, 0x18, 0xf1, 0x61, 0x6c, 0x17, 0x91, 0x0b, 0x75, 0xc6, 0x58, 0x61, + 0x9b, 0x08, 0x4d, 0x46, 0x93, 0xd7, 0x60, 0xd4, 0xff, 0x29, 0xd9, 0xe7, 0x3c, 0xab, 0xea, 0x4a, + 0x62, 0x0b, 0x26, 0x93, 0xca, 0x45, 0x71, 0x7e, 0xeb, 0x8f, 0x14, 0x8d, 0x25, 0x2e, 0x8d, 0x90, + 0x92, 0x4f, 0xc3, 0xb8, 0xff, 0x5b, 0xec, 0x3a, 0xb8, 0xef, 0xe2, 0xf3, 0xbe, 0x12, 0xc6, 0xc5, + 0x7a, 0x35, 0x4a, 0xce, 0xf7, 0x1f, 0x8f, 0xf9, 0x09, 0x34, 0x8d, 0x95, 0xe4, 0xf6, 0x23, 0x56, + 0x01, 0xa9, 0xc0, 0xa4, 0x0f, 0x09, 0x35, 0x74, 0x30, 0xdc, 0x76, 0x1a, 0x2b, 0xf5, 0x54, 0x25, + 0x4d, 0x96, 0x22, 0x4d, 0xb8, 0x18, 0x01, 0x1a, 0xee, 0x9a, 0x79, 0xcf, 0x13, 0x7b, 0x46, 0x11, + 0x12, 0x5e, 0x24, 0x5f, 0x0e, 0xb8, 0x72, 0x1a, 0x3f, 0x8b, 0x7a, 0x34, 0x80, 0x4d, 0x4f, 0x6e, + 0xa4, 0x06, 0xa7, 0x7d, 0xfc, 0x8d, 0x72, 0xb5, 0xea, 0xd8, 0xef, 0xd2, 0x86, 0x57, 0x99, 0x16, + 0x7b, 0x6e, 0x0c, 0xfb, 0x69, 0xac, 0xd4, 0x57, 0x1b, 0x6d, 0xa6, 0x14, 0x0c, 0x17, 0x65, 0x9e, + 0x5a, 0x98, 0xdc, 0x81, 0x33, 0x12, 0xbc, 0x62, 0xb9, 0x9e, 0x6e, 0x35, 0x68, 0x10, 0x6e, 0x07, + 0x0f, 0x05, 0x04, 0x57, 0x53, 0x20, 0xa3, 0x6c, 0xd3, 0x8b, 0x93, 0x37, 0x60, 0xcc, 0x47, 0xf0, + 0xab, 0xc8, 0x11, 0xbc, 0x8a, 0xc4, 0x21, 0x69, 0xac, 0xd4, 0xe3, 0x4f, 0xcd, 0xa3, 0xc4, 0xb2, + 0x46, 0x2d, 0x6f, 0xb6, 0xfd, 0xe0, 0xf5, 0xbe, 0x46, 0x79, 0x9b, 0xed, 0x54, 0x65, 0x64, 0xa4, + 0xe4, 0xcd, 0x50, 0xa3, 0x96, 0x1c, 0x73, 0xd5, 0xf4, 0x93, 0xa3, 0x9d, 0x13, 0xfa, 0x61, 0x23, + 0x30, 0x4d, 0x3f, 0x38, 0xf9, 0x85, 0x12, 0x9c, 0x4a, 0xd1, 0xb1, 0x3d, 0xed, 0x18, 0x3f, 0x93, + 0x0d, 0x1b, 0x71, 0xcc, 0xb7, 0x8d, 0x53, 0x30, 0xe4, 0x7f, 0x89, 0x30, 0x1e, 0x94, 0x6e, 0x43, + 0x33, 0xce, 0xc3, 
0xc7, 0x47, 0xc4, 0x71, 0xcc, 0xb7, 0x92, 0x87, 0x21, 0x8e, 0x2f, 0x66, 0x42, + 0x71, 0x1c, 0xf3, 0xed, 0xe5, 0x2f, 0xe7, 0xc3, 0x39, 0xe9, 0x64, 0x8f, 0x79, 0x58, 0x66, 0x72, + 0xe8, 0x8a, 0x3b, 0xb0, 0x07, 0x57, 0x5c, 0x59, 0x35, 0x07, 0xf7, 0xa7, 0x9a, 0xe4, 0x0d, 0x18, + 0xa9, 0xda, 0xae, 0xb7, 0xea, 0x50, 0xb7, 0x1a, 0xa4, 0x34, 0xc1, 0xd7, 0xeb, 0x6d, 0x01, 0xae, + 0xb7, 0xa3, 0x21, 0xd7, 0x24, 0x72, 0x29, 0x12, 0xdd, 0xf0, 0xde, 0x23, 0xd1, 0xa9, 0xbf, 0x9f, + 0x4b, 0xe8, 0x12, 0x37, 0x7b, 0x8f, 0xa5, 0x2e, 0x1d, 0xc2, 0x44, 0x41, 0xae, 0x87, 0x6b, 0x28, + 0xdf, 0x1f, 0xf4, 0x4b, 0x91, 0x5b, 0x57, 0xc4, 0xf6, 0x20, 0x4a, 0x42, 0x3e, 0x01, 0xe7, 0x22, + 0x80, 0xaa, 0xee, 0xe8, 0x2d, 0xea, 0x85, 0x69, 0x7f, 0x31, 0x16, 0x9f, 0x5f, 0xba, 0xde, 0x0e, + 0xd0, 0x72, 0x2a, 0xe1, 0x2e, 0x1c, 0x24, 0xc5, 0x1c, 0xdc, 0xc3, 0x3b, 0xea, 0x6f, 0xcb, 0x85, + 0x66, 0x52, 0x34, 0xa6, 0xb6, 0x46, 0xdd, 0x4e, 0xd3, 0x7b, 0x74, 0x3b, 0x78, 0x7f, 0x59, 0x8e, + 0xe6, 0x60, 0xa2, 0x74, 0xef, 0x1e, 0x6d, 0x78, 0x7e, 0xaa, 0x00, 0x57, 0x44, 0x51, 0xe5, 0xdb, + 0x16, 0x81, 0x12, 0xa1, 0xdf, 0xdd, 0x48, 0x22, 0xe6, 0x68, 0x31, 0xf5, 0x0f, 0xf2, 0xa0, 0x04, + 0xdb, 0x86, 0xe0, 0xad, 0xe4, 0x11, 0x2e, 0xd1, 0x0f, 0x45, 0xaf, 0x98, 0x30, 0x19, 0x0a, 0x43, + 0x3c, 0x52, 0x53, 0xfa, 0x71, 0x5b, 0x52, 0x8c, 0x33, 0x0b, 0x09, 0xf9, 0x4e, 0xe4, 0x82, 0xd8, + 0x89, 0x90, 0xf0, 0x2d, 0x6a, 0xdd, 0xe5, 0x2c, 0xb4, 0x24, 0x57, 0xf2, 0x4d, 0x19, 0x38, 0xed, + 0x77, 0xca, 0xd2, 0x0a, 0x33, 0xc9, 0xcb, 0x76, 0xc7, 0x0a, 0x5e, 0x70, 0x7d, 0xb8, 0x7b, 0x75, + 0xbc, 0x93, 0xae, 0xa6, 0x15, 0xe6, 0x2d, 0x09, 0x22, 0xfe, 0x04, 0x0a, 0x61, 0x23, 0x4d, 0xbd, + 0x81, 0x44, 0x5a, 0x6a, 0xbd, 0x17, 0x6e, 0xc0, 0xf9, 0xae, 0x2c, 0x77, 0x32, 0x81, 0xfb, 0x65, + 0x13, 0xf8, 0xf7, 0x32, 0xe1, 0x44, 0x14, 0x13, 0x12, 0xb9, 0x0a, 0x10, 0x82, 0xc4, 0xa6, 0x18, + 0x1f, 0x88, 0x85, 0x42, 0xd3, 0x24, 0x0a, 0xb2, 0x04, 0x03, 0x42, 0x2c, 0x3c, 0xc5, 0xfe, 0xfb, + 0x77, 0xe8, 0x85, 0xab, 0xb2, 0x1c, 0x70, 0xc3, 0x2b, 
0xbe, 0x59, 0xb0, 0xb9, 0xf0, 0x1a, 0x8c, + 0xec, 0xf7, 0xbb, 0xbe, 0x29, 0x07, 0x44, 0xde, 0xc1, 0x1e, 0xa1, 0x79, 0x7f, 0x8c, 0xa7, 0xb0, + 0xcb, 0x30, 0xc4, 0x3e, 0x01, 0xd3, 0x18, 0x49, 0x61, 0xcb, 0x3b, 0x02, 0xa6, 0x05, 0xd8, 0x30, + 0xea, 0xdf, 0x60, 0x7a, 0xd4, 0x3f, 0xf5, 0x5b, 0x72, 0x70, 0x56, 0xee, 0x90, 0x69, 0x8a, 0x99, + 0x50, 0x4e, 0x3a, 0xe5, 0x4b, 0xd8, 0x29, 0x2a, 0x0c, 0xf0, 0x8d, 0x8b, 0x48, 0x49, 0xc3, 0x0f, + 0x95, 0x10, 0xa2, 0x09, 0x8c, 0xfa, 0xaf, 0xb3, 0x30, 0x16, 0x18, 0x87, 0xba, 0xe3, 0x3e, 0xc2, + 0xdd, 0xf1, 0x21, 0x18, 0xc3, 0xb8, 0x6d, 0x2d, 0x6a, 0xf1, 0xd8, 0x66, 0xfd, 0x52, 0x0e, 0x29, + 0x1f, 0x21, 0x52, 0x0c, 0x46, 0x08, 0x99, 0xf6, 0x73, 0xcb, 0x4f, 0x8a, 0xa6, 0xc7, 0xcd, 0x3e, + 0x0e, 0x57, 0xbf, 0x3b, 0x07, 0xa3, 0xbe, 0x94, 0xa7, 0xcc, 0xe3, 0x7a, 0x4b, 0x74, 0xb4, 0x42, + 0xbe, 0x06, 0x50, 0xb5, 0x1d, 0x4f, 0x6f, 0x2e, 0x86, 0x9a, 0x8f, 0xc7, 0xab, 0x6d, 0x84, 0xf2, + 0x32, 0x12, 0x09, 0xae, 0x5f, 0xa1, 0x59, 0xcd, 0x27, 0x26, 0xbe, 0x7e, 0x05, 0x50, 0x4d, 0xa2, + 0x50, 0x7f, 0x2a, 0x0b, 0x13, 0x7e, 0x27, 0xcd, 0x3c, 0xa0, 0x8d, 0xce, 0xa3, 0x3c, 0x37, 0x45, + 0xa5, 0xdd, 0xbf, 0xa3, 0xb4, 0xd5, 0xff, 0x5b, 0x9a, 0x48, 0xca, 0x4d, 0xfb, 0x64, 0x22, 0xf9, + 0xeb, 0xd0, 0x71, 0xf5, 0x6b, 0x72, 0x70, 0xda, 0x97, 0xfa, 0x6c, 0xc7, 0xc2, 0x83, 0x89, 0xb2, + 0xde, 0x6c, 0x3e, 0xca, 0xbb, 0xf1, 0x11, 0x5f, 0x10, 0x4b, 0x22, 0x10, 0xaa, 0x48, 0xf7, 0x7a, + 0x4f, 0x80, 0xeb, 0xb6, 0x69, 0x68, 0x32, 0x11, 0x79, 0x13, 0x46, 0xfd, 0x9f, 0x25, 0x67, 0xd5, + 0xdf, 0x82, 0xe3, 0x35, 0x43, 0x50, 0x48, 0x77, 0x22, 0x91, 0x3d, 0x22, 0x05, 0xd4, 0xef, 0xcc, + 0xc1, 0x85, 0xbb, 0xa6, 0x65, 0xd8, 0x1b, 0x6e, 0x99, 0x3a, 0x1e, 0x0f, 0x05, 0x43, 0xe5, 0x70, + 0x64, 0xb5, 0x0e, 0xda, 0xe9, 0x72, 0x38, 0x32, 0x97, 0x83, 0x34, 0x1f, 0x47, 0x5e, 0x81, 0xd1, + 0x1a, 0x75, 0x4c, 0xbd, 0xb9, 0xd8, 0x69, 0xad, 0x50, 0xdf, 0x35, 0x0c, 0x7d, 0xe4, 0x5c, 0x84, + 0xd7, 0x2d, 0x44, 0x68, 0x11, 0x32, 0x72, 0x1e, 0x72, 0xb7, 0xab, 0x8b, 0xe2, 0xe2, 0x08, 
0xf3, + 0x80, 0x75, 0xda, 0x96, 0xc6, 0x60, 0xe4, 0x63, 0x70, 0xa6, 0xac, 0xcd, 0x4f, 0x9b, 0x2e, 0x0f, + 0x1c, 0x62, 0xda, 0x56, 0xd5, 0x36, 0x99, 0x85, 0x9e, 0x0f, 0xbf, 0xb0, 0xe1, 0x34, 0xeb, 0x86, + 0x44, 0x51, 0x6f, 0x23, 0x89, 0x96, 0x5e, 0x92, 0x5c, 0x81, 0xa1, 0x5b, 0x74, 0xf3, 0x36, 0x26, + 0xea, 0xea, 0x47, 0x27, 0x3e, 0xbc, 0x08, 0x5d, 0xa7, 0x9b, 0xf5, 0x0e, 0x03, 0x6a, 0x01, 0x9a, + 0x4c, 0x41, 0x61, 0xe6, 0x81, 0x47, 0x2d, 0x83, 0x1a, 0x41, 0x11, 0x26, 0xda, 0x7e, 0x3f, 0xcc, + 0x1a, 0xc7, 0xd5, 0xc3, 0xb2, 0x09, 0x7a, 0xe4, 0x21, 0x12, 0xee, 0x05, 0x3c, 0x06, 0xc3, 0x3b, + 0x8a, 0x20, 0x41, 0x9f, 0xcc, 0x23, 0x46, 0xaf, 0xfe, 0xe1, 0x60, 0xd0, 0x3b, 0x7e, 0x2e, 0xe7, + 0x63, 0x7f, 0x08, 0x7a, 0xd4, 0x39, 0x9c, 0x3f, 0x06, 0x67, 0xe2, 0x22, 0x75, 0x82, 0x6c, 0x19, + 0x42, 0xb3, 0x36, 0x38, 0x41, 0xdd, 0x4f, 0x64, 0x2c, 0x6e, 0x52, 0xb5, 0xf4, 0x92, 0xf1, 0xb4, + 0xd0, 0x83, 0xbb, 0x49, 0x0b, 0xfd, 0x1c, 0x0c, 0x4c, 0xdb, 0x2d, 0xdd, 0xf4, 0x23, 0x70, 0xe1, + 0x1c, 0x1b, 0xd4, 0x8b, 0x18, 0x4d, 0x50, 0x30, 0xfe, 0xa2, 0x62, 0xec, 0x32, 0x29, 0x29, 0xbd, + 0x5f, 0x80, 0xd9, 0xd0, 0x9a, 0x4c, 0x44, 0x6c, 0x18, 0x13, 0xd5, 0x89, 0x7b, 0x4f, 0xc0, 0xad, + 0xed, 0x2b, 0xbe, 0x8c, 0xba, 0xab, 0xd5, 0xd5, 0x48, 0x39, 0xbe, 0xc9, 0xe5, 0xd9, 0xaa, 0xc5, + 0xc7, 0xf0, 0x1b, 0x50, 0x2d, 0xca, 0x3f, 0x9e, 0x0e, 0x7a, 0x64, 0x17, 0xe9, 0xa0, 0xc9, 0x0c, + 0x4c, 0x62, 0x56, 0x82, 0x60, 0xa3, 0xcb, 0x54, 0x62, 0x14, 0x4d, 0x7e, 0xbc, 0x4e, 0xe3, 0x89, + 0x0c, 0xd8, 0xc7, 0xd5, 0x1b, 0x02, 0xad, 0x25, 0x4b, 0xb0, 0x79, 0x64, 0x71, 0xbe, 0x84, 0xf7, + 0x70, 0x43, 0x7c, 0x1e, 0xb1, 0x9a, 0xba, 0xc6, 0x60, 0xe4, 0x2e, 0x8c, 0xb2, 0x79, 0x2d, 0x18, + 0x26, 0xe3, 0xa8, 0x29, 0x6a, 0x4c, 0x0a, 0x29, 0x53, 0x1f, 0x5f, 0xbd, 0x1a, 0x21, 0x42, 0x8b, + 0x30, 0xba, 0xf0, 0x16, 0x90, 0xa4, 0x9c, 0xf6, 0x74, 0x89, 0xf7, 0x9b, 0xf9, 0x70, 0xa7, 0x7f, + 0xdc, 0xdd, 0xb0, 0x0e, 0x63, 0xfd, 0x8b, 0xe4, 0x08, 0xed, 0xff, 0x52, 0xe6, 0x08, 0x1d, 0x38, + 0xdc, 0x1c, 0xa1, 0xd1, 0x19, 
0x6d, 0xf0, 0x00, 0x33, 0xda, 0x47, 0x53, 0x93, 0xab, 0xf2, 0xfb, + 0x11, 0x09, 0x2e, 0xaf, 0xe6, 0x32, 0xbd, 0xfa, 0x83, 0x19, 0x98, 0x4c, 0x24, 0x27, 0x21, 0x2f, + 0x01, 0x70, 0x88, 0x14, 0xa0, 0x18, 0xe3, 0x67, 0x85, 0x09, 0x4b, 0x84, 0x7d, 0x16, 0x92, 0x91, + 0x6b, 0x30, 0xc4, 0x7f, 0x89, 0x10, 0x7d, 0xc9, 0x22, 0x9d, 0x8e, 0x69, 0x68, 0x01, 0x51, 0x58, + 0x0b, 0x5e, 0xa8, 0xe7, 0x52, 0x8b, 0x78, 0x9b, 0xed, 0xa0, 0x16, 0x46, 0xa6, 0x7e, 0x63, 0x16, + 0x46, 0x83, 0x06, 0x97, 0x8c, 0xa3, 0xd2, 0xfe, 0x01, 0x91, 0xe7, 0x25, 0xb7, 0x53, 0x9e, 0x97, + 0xd8, 0x92, 0x22, 0x12, 0xbb, 0x1c, 0xde, 0xa3, 0xc2, 0xcf, 0x66, 0x61, 0x22, 0xa8, 0xf5, 0x08, + 0xef, 0x6e, 0x1f, 0x22, 0x91, 0x7c, 0x53, 0x06, 0x94, 0x29, 0xb3, 0xd9, 0x34, 0xad, 0xd5, 0x8a, + 0x75, 0xcf, 0x76, 0x5a, 0x38, 0xe7, 0x1f, 0xdd, 0x1d, 0x82, 0xfa, 0x75, 0x19, 0x98, 0x14, 0x0d, + 0x2a, 0xeb, 0x8e, 0x71, 0x74, 0x07, 0xb4, 0xf1, 0x96, 0x1c, 0x9d, 0xbe, 0xa8, 0x3f, 0x91, 0x05, + 0x98, 0xb7, 0x1b, 0xeb, 0xc7, 0xfc, 0x4d, 0xe2, 0xeb, 0x30, 0xc0, 0x5f, 0x75, 0x08, 0x8d, 0x9d, + 0x14, 0x6f, 0xef, 0xd8, 0xa7, 0x71, 0xc4, 0x54, 0x41, 0xac, 0x0c, 0x03, 0xfc, 0x61, 0x88, 0x92, + 0xd1, 0x44, 0x11, 0x56, 0x29, 0xa3, 0x13, 0x4b, 0x57, 0x50, 0x29, 0x83, 0x45, 0x2b, 0xdd, 0xde, + 0x2a, 0xe6, 0x9b, 0x76, 0x63, 0x5d, 0x43, 0x7a, 0xf5, 0xff, 0xcb, 0x70, 0xd9, 0x1d, 0xf3, 0x97, + 0xd5, 0xfe, 0xe7, 0xe7, 0xf7, 0xf8, 0xf9, 0x7f, 0x27, 0x03, 0xa7, 0x35, 0xda, 0xb0, 0xef, 0x53, + 0x67, 0xb3, 0x6c, 0x1b, 0xf4, 0x06, 0xb5, 0xa8, 0x73, 0x54, 0x23, 0xea, 0xa7, 0x31, 0x31, 0x56, + 0xd8, 0x98, 0xdb, 0x2e, 0x35, 0x8e, 0x4f, 0xd2, 0x32, 0xf5, 0x3f, 0x0d, 0x82, 0x92, 0x6a, 0xd8, + 0x1f, 0x5b, 0xc3, 0xb2, 0xeb, 0x6e, 0x2d, 0x7f, 0x58, 0xbb, 0xb5, 0xfe, 0xbd, 0xed, 0xd6, 0x06, + 0xf6, 0xba, 0x5b, 0x1b, 0xdc, 0xcd, 0x6e, 0xad, 0x15, 0xdf, 0xad, 0x0d, 0xe1, 0x6e, 0xed, 0xa5, + 0x9e, 0xbb, 0xb5, 0x19, 0xcb, 0xd8, 0xe7, 0x5e, 0xed, 0xd8, 0xa6, 0xe3, 0xdf, 0xcf, 0x26, 0xf3, + 0x32, 0x9b, 0x14, 0x1b, 0xb6, 0x63, 0x50, 0x43, 0xec, 0x2d, 0xf1, 
0xda, 0xc9, 0x11, 0x30, 0x2d, + 0xc0, 0x92, 0x97, 0x63, 0xe6, 0x37, 0x4f, 0x9c, 0x8f, 0xec, 0x65, 0xf3, 0x3b, 0x6a, 0x74, 0xc7, + 0x4c, 0x9d, 0xf1, 0xfd, 0x9b, 0x3a, 0x87, 0xb0, 0xa7, 0xfc, 0xf7, 0x59, 0x98, 0x94, 0x36, 0xb3, + 0x87, 0xb0, 0x1c, 0x97, 0x60, 0x42, 0x62, 0x88, 0xb6, 0x7d, 0x36, 0xf4, 0x77, 0x65, 0xfb, 0xe3, + 0xb8, 0xbb, 0x6c, 0x9c, 0x9e, 0x55, 0xef, 0x67, 0x2a, 0x15, 0xb3, 0x40, 0x50, 0xbd, 0x0f, 0xe7, + 0x5d, 0x62, 0x8a, 0x5f, 0x5a, 0x40, 0x2f, 0xb9, 0x7c, 0xe5, 0xf7, 0x91, 0x7c, 0x74, 0x1d, 0x4e, + 0x4b, 0x8d, 0x29, 0x75, 0xbc, 0x35, 0xdb, 0x61, 0xad, 0xe8, 0x17, 0x11, 0xbf, 0x7d, 0x5e, 0x29, + 0x34, 0x22, 0xa7, 0x62, 0x88, 0xa9, 0xeb, 0x3e, 0x4a, 0x4b, 0x65, 0xca, 0x36, 0x5f, 0xa9, 0x08, + 0x72, 0x09, 0xf2, 0x28, 0x37, 0x29, 0xb1, 0x62, 0x4c, 0x64, 0x88, 0x27, 0xcf, 0x07, 0x93, 0x4c, + 0x36, 0x7c, 0xb5, 0xc2, 0x27, 0x17, 0xf9, 0x3d, 0xae, 0x98, 0x66, 0xde, 0x82, 0x51, 0x71, 0xfc, + 0x7a, 0x8b, 0x6e, 0x06, 0xd9, 0x53, 0x30, 0x11, 0x90, 0x38, 0x9f, 0xc5, 0xa3, 0xc5, 0x88, 0x37, + 0x5d, 0xa4, 0x84, 0xfa, 0x03, 0x19, 0xb8, 0xa4, 0x51, 0x8b, 0x6e, 0xe8, 0x2b, 0x4d, 0x2a, 0xb5, + 0x5c, 0xac, 0xc0, 0x4c, 0x67, 0x4d, 0xb7, 0xa5, 0x7b, 0x8d, 0xb5, 0x03, 0x69, 0xd0, 0x2c, 0x8c, + 0xca, 0xeb, 0xc4, 0x1e, 0xd6, 0x90, 0x48, 0x39, 0xf5, 0x47, 0xf2, 0x30, 0x38, 0x65, 0x7b, 0x37, + 0xed, 0x03, 0xe6, 0x0a, 0x0e, 0x97, 0xd6, 0xec, 0x1e, 0x8e, 0x0d, 0x5f, 0xc0, 0xca, 0xa5, 0xe4, + 0x3c, 0x78, 0x8a, 0xbb, 0x62, 0x27, 0x12, 0x45, 0xf9, 0x64, 0x7b, 0xcc, 0x12, 0xfc, 0x2a, 0x0c, + 0x63, 0xa4, 0x29, 0xe9, 0xda, 0x05, 0xdf, 0x71, 0x78, 0x0c, 0x18, 0xaf, 0x23, 0x24, 0x25, 0x9f, + 0x88, 0x44, 0xe8, 0x1e, 0x38, 0x78, 0x56, 0x61, 0x39, 0x58, 0xf7, 0x4b, 0xfc, 0xc6, 0x1e, 0xdb, + 0x24, 0xe5, 0x50, 0xc3, 0x03, 0xb9, 0x58, 0x93, 0x02, 0xc2, 0x43, 0xcc, 0xf8, 0x5b, 0x86, 0xb1, + 0x29, 0xdb, 0x93, 0xde, 0x0c, 0x0c, 0x87, 0x4f, 0xce, 0x99, 0xe4, 0xd3, 0x1f, 0x0c, 0x44, 0xcb, + 0xa8, 0x7f, 0x91, 0x87, 0x51, 0xff, 0xe7, 0x11, 0xe9, 0xce, 0x07, 0x60, 0x60, 0xce, 0x96, 0x52, + 0x1c, 
0xe1, 0x3b, 0x83, 0x35, 0xdb, 0x8d, 0x3d, 0xa0, 0x10, 0x44, 0x4c, 0xea, 0x8b, 0xb6, 0x21, + 0xbf, 0x92, 0x41, 0xa9, 0x5b, 0xb6, 0x91, 0x08, 0x55, 0x10, 0x10, 0xb2, 0x49, 0x06, 0x1f, 0x18, + 0x49, 0x37, 0x76, 0xb1, 0x47, 0x45, 0x88, 0x97, 0xb4, 0x72, 0x60, 0xaf, 0x5a, 0x39, 0xb8, 0x5f, + 0xad, 0x1c, 0x3a, 0x5c, 0xad, 0x7c, 0x07, 0x46, 0xb1, 0x26, 0x3f, 0x0b, 0xed, 0xce, 0x06, 0xcc, + 0x79, 0x61, 0x63, 0x8c, 0xf1, 0x76, 0x8b, 0x5c, 0xb4, 0x68, 0x5a, 0x44, 0x58, 0xc5, 0x74, 0x17, + 0x0e, 0x70, 0x6c, 0xf1, 0xfb, 0x19, 0x18, 0xbc, 0x6d, 0xad, 0x5b, 0xf6, 0xc6, 0xc1, 0x34, 0xee, + 0x25, 0x18, 0x11, 0x6c, 0xa4, 0xb5, 0x17, 0x6f, 0xd6, 0x3a, 0x1c, 0x5c, 0x47, 0x4e, 0x9a, 0x4c, + 0x45, 0xde, 0x08, 0x0a, 0xe1, 0x1b, 0xc2, 0x5c, 0x98, 0x24, 0xcc, 0x2f, 0xd4, 0x88, 0x66, 0x09, + 0x92, 0xc9, 0xc9, 0x45, 0xc8, 0x4f, 0xb3, 0xa6, 0x4a, 0xd1, 0xc2, 0x59, 0x53, 0x34, 0x84, 0xaa, + 0x9f, 0xc9, 0xc3, 0x78, 0xec, 0x80, 0xf1, 0x39, 0x18, 0x16, 0x07, 0x7c, 0xa6, 0x9f, 0xb6, 0x08, + 0xaf, 0xd6, 0x02, 0xa0, 0x36, 0xc4, 0xff, 0xac, 0x18, 0xe4, 0xa3, 0x30, 0x68, 0xbb, 0x68, 0x32, + 0xe0, 0xb7, 0x8c, 0x87, 0x43, 0x68, 0xa9, 0xc6, 0xda, 0xce, 0x07, 0x87, 0x20, 0x91, 0x35, 0xd2, + 0x76, 0xf1, 0xd3, 0x5e, 0x86, 0x61, 0xdd, 0x75, 0xa9, 0x57, 0xf7, 0xf4, 0x55, 0x39, 0x93, 0x51, + 0x00, 0x94, 0x47, 0x07, 0x02, 0x97, 0xf5, 0x55, 0xf2, 0x16, 0x8c, 0x35, 0x1c, 0x8a, 0x46, 0x85, + 0xde, 0x64, 0xad, 0x94, 0xb6, 0x0f, 0x11, 0x84, 0xbc, 0x58, 0x86, 0x88, 0x8a, 0x41, 0xee, 0xc0, + 0x98, 0xf8, 0x1c, 0xfe, 0xc0, 0x07, 0x07, 0xda, 0x78, 0xb8, 0x8c, 0x71, 0x91, 0xf0, 0x27, 0x3e, + 0xe2, 0x9d, 0x97, 0x4c, 0x2e, 0xf3, 0x35, 0x24, 0x52, 0xb2, 0x04, 0x64, 0x83, 0xae, 0xa0, 0x71, + 0xc1, 0xea, 0xe2, 0x89, 0x38, 0x44, 0x5a, 0x67, 0x7c, 0x1c, 0x95, 0xc4, 0xca, 0x6f, 0xc6, 0x36, + 0xe8, 0x4a, 0x29, 0x82, 0x24, 0x77, 0xe1, 0x4c, 0xb2, 0x08, 0xfb, 0x64, 0x7e, 0xcf, 0xf4, 0xf4, + 0xf6, 0x56, 0xb1, 0x98, 0x4a, 0x20, 0xb1, 0x3d, 0x95, 0x60, 0x5b, 0x31, 0x6e, 0xe6, 0x87, 0x06, + 0x0b, 0x43, 0xda, 0x38, 0x2b, 0xeb, 0x9b, 
0xea, 0xa6, 0xa1, 0xfe, 0x6a, 0x86, 0x99, 0xe4, 0xec, + 0x83, 0x66, 0x98, 0x20, 0x98, 0xae, 0xb7, 0xf6, 0xa8, 0xeb, 0xad, 0x30, 0x03, 0xf5, 0x80, 0xdb, + 0x63, 0x76, 0xd5, 0x04, 0x96, 0x5c, 0x85, 0x01, 0x43, 0x3e, 0x9d, 0x3c, 0x1b, 0xed, 0x04, 0xbf, + 0x1e, 0x4d, 0x50, 0x91, 0xcb, 0x90, 0x67, 0x4b, 0x56, 0xfc, 0x68, 0x42, 0xb6, 0x2e, 0x34, 0xa4, + 0x50, 0xbf, 0x2a, 0x0b, 0xa3, 0xd2, 0xd7, 0x5c, 0x3f, 0xd0, 0xe7, 0x7c, 0x78, 0x77, 0xcd, 0xf4, + 0xbd, 0xdb, 0x70, 0xcf, 0xea, 0x37, 0xf9, 0xe5, 0x40, 0x14, 0xbb, 0xba, 0xdb, 0x14, 0x82, 0x79, + 0x55, 0x7c, 0xe8, 0xc0, 0xee, 0xb7, 0xe9, 0x8c, 0xfe, 0x66, 0x7e, 0x28, 0x5b, 0xc8, 0xdd, 0xcc, + 0x0f, 0xe5, 0x0b, 0xfd, 0x18, 0xf3, 0x0f, 0x83, 0xf4, 0xf3, 0x33, 0x10, 0xeb, 0x9e, 0xb9, 0x7a, + 0xcc, 0x9f, 0x98, 0x1d, 0x6e, 0x3c, 0xc4, 0x98, 0x6c, 0x8e, 0xf9, 0x7b, 0xb3, 0x2f, 0xa9, 0x6c, + 0x4e, 0x32, 0x56, 0x0b, 0xd9, 0xfc, 0x41, 0x06, 0x94, 0x54, 0xd9, 0x94, 0x8e, 0xc8, 0xe1, 0xe9, + 0xf0, 0xf2, 0x56, 0xff, 0x79, 0x16, 0x26, 0x2b, 0x96, 0x47, 0x57, 0xf9, 0x8e, 0xf1, 0x98, 0x4f, + 0x15, 0xb7, 0x60, 0x44, 0xfa, 0x18, 0xd1, 0xe7, 0x8f, 0x05, 0xa7, 0x15, 0x21, 0xaa, 0x0b, 0x27, + 0xb9, 0xf4, 0xe1, 0x3d, 0xd8, 0x8b, 0x0b, 0xf9, 0x98, 0xcf, 0x39, 0xc7, 0x43, 0xc8, 0xc7, 0x7c, + 0xf2, 0x7a, 0x48, 0x85, 0xfc, 0xd9, 0x1c, 0x9c, 0x4a, 0xa9, 0x9c, 0x5c, 0x42, 0x47, 0x43, 0x0c, + 0xf1, 0x97, 0x09, 0x9f, 0x06, 0xb8, 0x9d, 0x15, 0x8c, 0xee, 0xa7, 0xf9, 0x48, 0xb2, 0x8c, 0x31, + 0x38, 0x96, 0x2a, 0xd3, 0x65, 0x21, 0x55, 0x55, 0x8a, 0x26, 0xc2, 0xc0, 0x69, 0x5f, 0x16, 0xc4, + 0xe9, 0xb0, 0x4d, 0xa3, 0x11, 0x8b, 0xd3, 0xc1, 0xca, 0x90, 0x4f, 0xc2, 0x70, 0xe9, 0xbd, 0x8e, + 0x43, 0x91, 0x2f, 0x97, 0xf8, 0x33, 0x01, 0x5f, 0x1f, 0x91, 0xc6, 0x99, 0x87, 0x1c, 0x61, 0x14, + 0x71, 0xde, 0x21, 0x43, 0xb2, 0x04, 0x03, 0x37, 0x4c, 0x6f, 0xae, 0xb3, 0x22, 0x7a, 0x21, 0x08, + 0x03, 0xc8, 0xa1, 0x69, 0x7c, 0x71, 0x57, 0xce, 0xc3, 0x99, 0xcb, 0x7b, 0x20, 0x5e, 0x80, 0xcc, + 0x43, 0x7f, 0xe9, 0x6e, 0x4d, 0x2b, 0x89, 0x9e, 0x78, 0x52, 0x0e, 0xa8, 0x52, 
0xea, 0xca, 0x0e, + 0xc3, 0x43, 0xe8, 0x72, 0xba, 0x34, 0xa4, 0x57, 0xbf, 0x31, 0x03, 0x17, 0xba, 0x0b, 0x8f, 0xbc, + 0x00, 0x83, 0x9a, 0xdd, 0xa4, 0x25, 0x6d, 0x51, 0xf4, 0x0c, 0x4f, 0x41, 0x6f, 0x37, 0x69, 0x5d, + 0x77, 0xe4, 0xbd, 0x88, 0x4f, 0x46, 0x3e, 0x02, 0x23, 0x15, 0xd7, 0xed, 0x50, 0xa7, 0xf6, 0xd2, + 0x6d, 0xad, 0x22, 0xb6, 0xac, 0xb8, 0x25, 0x32, 0x11, 0x5c, 0x77, 0x5f, 0x8a, 0xc5, 0x18, 0x94, + 0xe9, 0xd5, 0xaf, 0xcf, 0xc0, 0xc5, 0x5e, 0x42, 0x27, 0x2f, 0xc1, 0xd0, 0x32, 0xb5, 0x74, 0xcb, + 0xab, 0x4c, 0x8b, 0x26, 0xe1, 0x0e, 0xd0, 0x43, 0x58, 0x74, 0x23, 0x13, 0x10, 0xb2, 0x42, 0xfc, + 0x50, 0x38, 0xf0, 0x67, 0xe1, 0x07, 0xd8, 0x08, 0x8b, 0x15, 0xf2, 0x09, 0xd5, 0x4f, 0xc0, 0xf9, + 0xae, 0x7d, 0x44, 0x3e, 0x0a, 0xa3, 0x4b, 0xce, 0xaa, 0x6e, 0x99, 0xef, 0xf1, 0x21, 0x96, 0x09, + 0x77, 0xd9, 0xb6, 0x04, 0x97, 0x77, 0x7e, 0x32, 0xbd, 0xba, 0x02, 0x4a, 0xb7, 0x0e, 0x23, 0xb3, + 0x30, 0x8e, 0x51, 0xe8, 0x4b, 0x56, 0x63, 0xcd, 0x76, 0x42, 0xd9, 0xe3, 0x0b, 0x4c, 0x8f, 0x61, + 0xea, 0x3a, 0xa2, 0x62, 0x7d, 0x10, 0x2b, 0xa5, 0xfe, 0x46, 0x16, 0x46, 0xab, 0xcd, 0xce, 0xaa, + 0x29, 0x2d, 0xcc, 0xfb, 0xde, 0xcf, 0xf8, 0xbb, 0x8b, 0xec, 0xde, 0x76, 0x17, 0x6c, 0x3a, 0x73, + 0xf6, 0x39, 0x9d, 0xf9, 0xe5, 0xc8, 0x1b, 0x30, 0xd0, 0xc6, 0xef, 0x88, 0xdf, 0x03, 0xf0, 0xaf, + 0xeb, 0x76, 0x0f, 0xc0, 0xcb, 0xb0, 0xf9, 0xab, 0x71, 0x80, 0xf9, 0x2b, 0x2c, 0x2b, 0x09, 0x34, + 0x5c, 0x84, 0x4f, 0x04, 0x7a, 0x28, 0x02, 0x0d, 0x17, 0xdc, 0x13, 0x81, 0x1e, 0x40, 0xa0, 0x3f, + 0x92, 0x85, 0xf1, 0x68, 0x95, 0xe4, 0x05, 0x18, 0xe1, 0xd5, 0xf0, 0x73, 0xb7, 0x8c, 0xf4, 0x3a, + 0x23, 0x04, 0x6b, 0xc0, 0x7f, 0x88, 0x03, 0xc4, 0x89, 0x35, 0xdd, 0xad, 0x87, 0x27, 0x60, 0xdc, + 0x0f, 0x61, 0x88, 0x7b, 0xf4, 0xc5, 0x50, 0xda, 0xf8, 0x9a, 0xee, 0x96, 0xc3, 0xdf, 0x64, 0x06, + 0x88, 0x43, 0x3b, 0x2e, 0x8d, 0x32, 0xc8, 0x23, 0x03, 0xbe, 0x7a, 0x24, 0xb0, 0xda, 0x24, 0x87, + 0xc9, 0x6c, 0xbe, 0x22, 0x68, 0x36, 0x2a, 0x43, 0x7f, 0xef, 0x53, 0xe4, 0xa7, 0xb7, 0xb7, 0x8a, + 0x67, 0x24, 0xfa, 
0xf4, 0x63, 0x64, 0x4e, 0x30, 0xad, 0x7b, 0x3a, 0x3f, 0xf4, 0xf0, 0x3b, 0x40, + 0xfd, 0xc3, 0xcf, 0x64, 0xa0, 0x7f, 0xc9, 0xa2, 0x4b, 0xf7, 0xc8, 0x8b, 0x30, 0xcc, 0x34, 0x66, + 0xde, 0x66, 0x9d, 0x99, 0x11, 0x8e, 0x40, 0x92, 0x2a, 0x21, 0x62, 0xae, 0x4f, 0x0b, 0xa9, 0xc8, + 0xcb, 0x00, 0xe1, 0x63, 0x5d, 0xa1, 0x7e, 0x44, 0x2e, 0xc3, 0x31, 0x73, 0x7d, 0x9a, 0x44, 0xe7, + 0x97, 0x12, 0x4f, 0x1d, 0x73, 0xc9, 0x52, 0x1c, 0xe3, 0x97, 0x12, 0x03, 0x64, 0x1e, 0x08, 0xfb, + 0x55, 0xd5, 0x5d, 0x77, 0xc3, 0x76, 0x8c, 0xf2, 0x9a, 0x6e, 0xad, 0xd2, 0xf8, 0xf6, 0x34, 0x49, + 0x31, 0xd7, 0xa7, 0xa5, 0x94, 0x23, 0x1f, 0x86, 0x51, 0xd9, 0xf9, 0x3b, 0xee, 0xbd, 0x24, 0xe3, + 0xe6, 0xfa, 0xb4, 0x08, 0x2d, 0xf9, 0x20, 0x8c, 0x88, 0xdf, 0x37, 0x6d, 0xe1, 0x1a, 0x21, 0xc5, + 0x84, 0x93, 0x50, 0x73, 0x7d, 0x9a, 0x4c, 0x29, 0x55, 0x5a, 0x75, 0x4c, 0xcb, 0x13, 0x3e, 0xb3, + 0xf1, 0x4a, 0x11, 0x27, 0x55, 0x8a, 0xbf, 0xc9, 0x47, 0x60, 0x2c, 0x08, 0xb6, 0x87, 0x0f, 0x53, + 0xf8, 0xed, 0xc2, 0x99, 0x58, 0x61, 0x8e, 0x9c, 0xeb, 0xd3, 0xa2, 0xd4, 0xe4, 0x32, 0x0c, 0x68, + 0xd4, 0x35, 0xdf, 0xf3, 0xfd, 0x1e, 0xc6, 0xa5, 0x81, 0x6e, 0xbe, 0xc7, 0xa4, 0x24, 0xf0, 0xac, + 0x77, 0x42, 0x47, 0x0b, 0x71, 0x17, 0x40, 0x62, 0xb5, 0xcc, 0x58, 0x06, 0xeb, 0x1d, 0xc9, 0xcb, + 0xe6, 0xad, 0x30, 0x04, 0xa1, 0xc8, 0xdf, 0x3d, 0x12, 0x8f, 0xf5, 0x22, 0x63, 0xe7, 0xfa, 0xb4, + 0x18, 0xbd, 0x24, 0xd5, 0x69, 0xd3, 0x5d, 0x17, 0xa1, 0xa3, 0xe3, 0x52, 0x65, 0x28, 0x49, 0xaa, + 0xec, 0xa7, 0x54, 0xf5, 0x22, 0xf5, 0x36, 0x6c, 0x67, 0x5d, 0x04, 0x8a, 0x8e, 0x57, 0x2d, 0xb0, + 0x52, 0xd5, 0x02, 0x22, 0x57, 0x1d, 0x3a, 0xcb, 0x27, 0xaa, 0xd6, 0x3d, 0x5d, 0xae, 0x9a, 0x1f, + 0x75, 0xfa, 0x9d, 0x34, 0x4f, 0xf5, 0xfb, 0x54, 0x99, 0x48, 0xed, 0x50, 0xc4, 0x49, 0x1d, 0x8a, + 0xbf, 0x59, 0xa5, 0x55, 0xdb, 0xf1, 0x66, 0x6d, 0x67, 0x43, 0x77, 0x0c, 0xa5, 0x10, 0xad, 0x54, + 0x42, 0xb1, 0x4a, 0xa5, 0x9f, 0xac, 0x83, 0xde, 0x7e, 0xf1, 0x45, 0xbf, 0xdc, 0x64, 0xb4, 0x83, + 0x42, 0x0c, 0xeb, 0xa0, 0xf0, 0x17, 0x29, 0x62, 0x76, 
0x7a, 0x85, 0x20, 0xf9, 0x48, 0xd0, 0xc2, + 0x72, 0x75, 0xae, 0x4f, 0xc3, 0xbc, 0xf5, 0x2a, 0xe4, 0x67, 0x1e, 0xd0, 0x86, 0x72, 0x0a, 0x29, + 0x46, 0x7d, 0x0a, 0x06, 0x9b, 0xeb, 0xd3, 0x10, 0xc7, 0xa6, 0x88, 0x20, 0x37, 0xac, 0x72, 0x3a, + 0x3a, 0x45, 0x04, 0x08, 0x36, 0x45, 0x84, 0x19, 0x64, 0x67, 0x93, 0x19, 0x50, 0x95, 0x33, 0xd1, + 0xb5, 0x26, 0x8e, 0x9f, 0xeb, 0xd3, 0x92, 0x59, 0x53, 0x3f, 0x18, 0x49, 0x0a, 0xaa, 0x9c, 0x8d, + 0x05, 0x62, 0x0c, 0x51, 0x4c, 0x5c, 0x72, 0xfa, 0xd0, 0x25, 0x38, 0xc5, 0x73, 0x8a, 0x8b, 0x50, + 0x8a, 0x62, 0xb2, 0x3a, 0x17, 0xdd, 0x19, 0xa6, 0x90, 0xcc, 0xf5, 0x69, 0x69, 0x25, 0x49, 0x39, + 0x91, 0x5c, 0x53, 0x51, 0xa2, 0x4e, 0x5e, 0x31, 0xf4, 0x5c, 0x9f, 0x96, 0x48, 0xc7, 0xf9, 0xb2, + 0x9c, 0xd5, 0x52, 0x39, 0x1f, 0xed, 0xc4, 0x10, 0xc3, 0x3a, 0x51, 0xca, 0x7e, 0xf9, 0xb2, 0x9c, + 0xe9, 0x50, 0xb9, 0x90, 0x2c, 0x15, 0xce, 0x9c, 0x52, 0x46, 0x44, 0x2d, 0x3d, 0x79, 0x9b, 0xf2, + 0x58, 0xd4, 0x1d, 0x24, 0x8d, 0x66, 0xae, 0x4f, 0x4b, 0x4f, 0xfc, 0xa6, 0xa5, 0x67, 0x3d, 0x53, + 0x2e, 0xf6, 0xe2, 0x19, 0xb4, 0x2e, 0x3d, 0x63, 0x9a, 0xde, 0x23, 0x07, 0x95, 0xf2, 0x78, 0x74, + 0x0f, 0xd9, 0x95, 0x70, 0xae, 0x4f, 0xeb, 0x91, 0xc9, 0xea, 0x76, 0x97, 0x84, 0x50, 0xca, 0x13, + 0xc8, 0xfe, 0x71, 0x69, 0x8b, 0x9a, 0x24, 0x9a, 0xeb, 0xd3, 0xba, 0xa4, 0x93, 0xba, 0xdd, 0x25, + 0x5f, 0x90, 0x52, 0xec, 0xc9, 0x36, 0x90, 0x47, 0x97, 0x6c, 0x43, 0x4b, 0xa9, 0xa9, 0x76, 0x94, + 0x27, 0xa3, 0xaa, 0x9b, 0x42, 0xc2, 0x54, 0x37, 0x2d, 0x49, 0xcf, 0x52, 0x6a, 0x6e, 0x18, 0xe5, + 0xa9, 0x1e, 0x0c, 0x83, 0x36, 0xa6, 0x66, 0x95, 0x59, 0x4a, 0x4d, 0xce, 0xa2, 0xa8, 0x51, 0x86, + 0x29, 0x24, 0x8c, 0x61, 0x5a, 0x5a, 0x97, 0xa5, 0xd4, 0x1c, 0x1e, 0xca, 0xd3, 0x3d, 0x18, 0x86, + 0x2d, 0x4c, 0xcb, 0xfe, 0xf1, 0xc1, 0x48, 0x12, 0x0d, 0xe5, 0x99, 0xe8, 0xbc, 0x21, 0xa1, 0xd8, + 0xbc, 0x21, 0xa7, 0xdb, 0x28, 0x27, 0x22, 0x7c, 0x2b, 0xef, 0x8b, 0x0e, 0xf3, 0x18, 0x9a, 0x0d, + 0xf3, 0x78, 0x4c, 0xf0, 0x72, 0x22, 0xd2, 0xb1, 0x72, 0xa9, 0x1b, 0x13, 0x44, 0x47, 0x99, 
0xf0, + 0xd8, 0xc8, 0x95, 0x94, 0x50, 0xbb, 0xca, 0xb3, 0xd1, 0x07, 0x0a, 0x09, 0x82, 0xb9, 0x3e, 0x2d, + 0x25, 0x40, 0xaf, 0x96, 0x1e, 0x57, 0x4e, 0xb9, 0x1c, 0x1d, 0xb6, 0x69, 0x34, 0x6c, 0xd8, 0xa6, + 0xc6, 0xa4, 0x9b, 0x4f, 0x7b, 0xcf, 0xa5, 0x5c, 0x89, 0x1a, 0x66, 0x49, 0x0a, 0x66, 0x98, 0xa5, + 0xbc, 0x03, 0xd3, 0xd2, 0xa3, 0x95, 0x29, 0xcf, 0xf5, 0x6c, 0x21, 0xd2, 0xa4, 0xb4, 0x90, 0x07, + 0xef, 0x0a, 0x6d, 0xa7, 0xdb, 0xed, 0xa6, 0xad, 0x1b, 0xca, 0xfb, 0x53, 0x6d, 0x27, 0x8e, 0x94, + 0x6c, 0x27, 0x0e, 0x60, 0xab, 0xbc, 0xfc, 0x58, 0x47, 0x79, 0x3e, 0xba, 0xca, 0xcb, 0x38, 0xb6, + 0xca, 0x47, 0x1e, 0xf6, 0x94, 0x13, 0x0f, 0x5b, 0x94, 0x0f, 0x44, 0x15, 0x20, 0x86, 0x66, 0x0a, + 0x10, 0x7f, 0x0a, 0xf3, 0xa9, 0xee, 0x4f, 0x41, 0x94, 0xab, 0xd1, 0xb3, 0xb0, 0x6e, 0x74, 0x73, + 0x7d, 0x5a, 0xf7, 0xe7, 0x24, 0x95, 0x94, 0x97, 0x1d, 0xca, 0xb5, 0xa8, 0x82, 0x25, 0x08, 0x98, + 0x82, 0x25, 0xdf, 0x83, 0x54, 0x52, 0x9e, 0x66, 0x28, 0x2f, 0x74, 0x65, 0x15, 0x7c, 0x73, 0xca, + 0x83, 0x8e, 0x97, 0xe5, 0xb7, 0x15, 0xca, 0x8b, 0xd1, 0xc5, 0x2e, 0xc4, 0xb0, 0xc5, 0x4e, 0x7a, + 0x83, 0xf1, 0xb2, 0xfc, 0xaa, 0x40, 0xb9, 0x9e, 0x2c, 0x15, 0x2e, 0x91, 0xd2, 0xeb, 0x03, 0x2d, + 0xdd, 0x19, 0x5f, 0x79, 0x29, 0xaa, 0x75, 0x69, 0x34, 0x4c, 0xeb, 0x52, 0x1d, 0xf9, 0x67, 0x93, + 0x3e, 0xf5, 0xca, 0xcb, 0xf1, 0x5d, 0x76, 0x14, 0xcf, 0x2c, 0x9f, 0x84, 0x1f, 0xfe, 0x5b, 0xf1, + 0xa0, 0xa7, 0xca, 0x2b, 0xb1, 0x7b, 0xf5, 0x08, 0x96, 0xd9, 0xb7, 0xb1, 0x20, 0xa9, 0x6f, 0xc5, + 0xe3, 0x84, 0x2a, 0xaf, 0xa6, 0x73, 0x08, 0x74, 0x25, 0x1e, 0x57, 0xf4, 0xad, 0x78, 0x68, 0x4d, + 0xe5, 0x83, 0xe9, 0x1c, 0x02, 0xe9, 0xc6, 0x43, 0x71, 0xbe, 0x28, 0x25, 0xfb, 0x50, 0x3e, 0x14, + 0x35, 0x1d, 0x03, 0x04, 0x33, 0x1d, 0xc3, 0x94, 0x20, 0x2f, 0x4a, 0x49, 0x32, 0x94, 0xd7, 0x12, + 0x45, 0x82, 0xc6, 0x4a, 0xa9, 0x34, 0x5e, 0x94, 0x92, 0x4b, 0x28, 0x1f, 0x4e, 0x14, 0x09, 0x5a, + 0x27, 0xa5, 0xa0, 0x30, 0x7a, 0xbd, 0x35, 0x57, 0x5e, 0x4f, 0x7d, 0x38, 0x9b, 0x42, 0x39, 0xd7, + 0xa7, 0xf5, 0x7a, 0xb3, 0xfe, 
0xa9, 0xee, 0x2f, 0x14, 0x94, 0x37, 0xa2, 0x43, 0xb8, 0x1b, 0x1d, + 0x1b, 0xc2, 0x5d, 0x5f, 0x39, 0x7c, 0x24, 0x16, 0x15, 0x48, 0xf9, 0x48, 0x74, 0x8a, 0x8b, 0x20, + 0xd9, 0x14, 0x17, 0x8f, 0x21, 0x14, 0x09, 0x77, 0xa3, 0x7c, 0x34, 0x3a, 0xc5, 0xc9, 0x38, 0x36, + 0xc5, 0x45, 0x42, 0xe3, 0x94, 0x13, 0x51, 0x58, 0x94, 0x37, 0xa3, 0x53, 0x5c, 0x0c, 0xcd, 0xa6, + 0xb8, 0x78, 0xdc, 0x96, 0x8f, 0xc4, 0x82, 0x91, 0x28, 0x6f, 0xa5, 0xb7, 0x1f, 0x91, 0x72, 0xfb, + 0x79, 0xe8, 0x12, 0x2d, 0x3d, 0xaa, 0x86, 0x52, 0x8a, 0x8e, 0xdf, 0x34, 0x1a, 0x36, 0x7e, 0x53, + 0x23, 0x72, 0xc4, 0x37, 0x0e, 0x42, 0xab, 0xa6, 0x7a, 0x6c, 0x1c, 0x42, 0x53, 0x24, 0x05, 0x1c, + 0xd9, 0x23, 0xf3, 0x8d, 0x50, 0xb9, 0xcb, 0x1e, 0xd9, 0xdf, 0x06, 0xc5, 0xe8, 0xd9, 0xec, 0x9a, + 0x70, 0x73, 0x57, 0xa6, 0xa3, 0xb3, 0x6b, 0x82, 0x80, 0xcd, 0xae, 0x49, 0xe7, 0xf8, 0x59, 0x28, + 0x08, 0x2d, 0xe2, 0xef, 0x00, 0x4c, 0x6b, 0x55, 0x99, 0x89, 0x3d, 0x61, 0x8e, 0xe1, 0xd9, 0xec, + 0x14, 0x87, 0xe1, 0x7a, 0xcd, 0x61, 0xe5, 0xa6, 0xd9, 0x5e, 0xb1, 0x75, 0xc7, 0xa8, 0x51, 0xcb, + 0x50, 0x66, 0x63, 0xeb, 0x75, 0x0a, 0x0d, 0xae, 0xd7, 0x29, 0x70, 0x0c, 0xb6, 0x19, 0x83, 0x6b, + 0xb4, 0x41, 0xcd, 0xfb, 0x54, 0xb9, 0x81, 0x6c, 0x8b, 0xdd, 0xd8, 0x0a, 0xb2, 0xb9, 0x3e, 0xad, + 0x1b, 0x07, 0x66, 0xab, 0x2f, 0x6c, 0xd6, 0x3e, 0x36, 0x1f, 0x04, 0x72, 0xa9, 0x3a, 0xb4, 0xad, + 0x3b, 0x54, 0x99, 0x8b, 0xda, 0xea, 0xa9, 0x44, 0xcc, 0x56, 0x4f, 0x45, 0x24, 0xd9, 0xfa, 0x63, + 0xa1, 0xd2, 0x8b, 0x6d, 0x38, 0x22, 0xd2, 0x4b, 0xb3, 0xd9, 0x29, 0x8a, 0x60, 0x02, 0x9a, 0xb7, + 0xad, 0x55, 0x3c, 0xa9, 0xb8, 0x19, 0x9d, 0x9d, 0xba, 0x53, 0xb2, 0xd9, 0xa9, 0x3b, 0x96, 0xa9, + 0x7a, 0x14, 0xcb, 0xc7, 0xe0, 0xad, 0xa8, 0xaa, 0xa7, 0x90, 0x30, 0x55, 0x4f, 0x01, 0x27, 0x19, + 0x6a, 0xd4, 0xa5, 0x9e, 0x32, 0xdf, 0x8b, 0x21, 0x92, 0x24, 0x19, 0x22, 0x38, 0xc9, 0x70, 0x96, + 0x7a, 0x8d, 0x35, 0x65, 0xa1, 0x17, 0x43, 0x24, 0x49, 0x32, 0x44, 0x30, 0xdb, 0x6c, 0x46, 0xc1, + 0x53, 0x9d, 0xe6, 0xba, 0xdf, 0x67, 0x8b, 0xd1, 0xcd, 0x66, 0x57, 
0x42, 0xb6, 0xd9, 0xec, 0x8a, + 0x24, 0x5f, 0xbf, 0xeb, 0x87, 0x06, 0xca, 0x12, 0x56, 0x78, 0x35, 0xb4, 0x0b, 0x76, 0x53, 0x6a, + 0xae, 0x4f, 0xdb, 0xed, 0x43, 0x86, 0xf7, 0x07, 0x5e, 0xb9, 0x4a, 0x15, 0xab, 0x9a, 0x08, 0xce, + 0x2a, 0x38, 0x78, 0xae, 0x4f, 0x0b, 0xfc, 0x76, 0x3f, 0x08, 0x23, 0xf8, 0x51, 0x15, 0xcb, 0xf4, + 0xa6, 0xa7, 0x94, 0x8f, 0x45, 0xb7, 0x4c, 0x12, 0x8a, 0x6d, 0x99, 0xa4, 0x9f, 0x6c, 0x12, 0xc7, + 0x9f, 0x7c, 0x8a, 0x99, 0x9e, 0x52, 0xb4, 0xe8, 0x24, 0x1e, 0x41, 0xb2, 0x49, 0x3c, 0x02, 0x08, + 0xea, 0x9d, 0x76, 0xec, 0xf6, 0xf4, 0x94, 0x52, 0x4b, 0xa9, 0x97, 0xa3, 0x82, 0x7a, 0xf9, 0xcf, + 0xa0, 0xde, 0xda, 0x5a, 0xc7, 0x9b, 0x66, 0xdf, 0xb8, 0x9c, 0x52, 0xaf, 0x8f, 0x0c, 0xea, 0xf5, + 0x01, 0x6c, 0x2a, 0x44, 0x40, 0xd5, 0xb1, 0xd9, 0xa4, 0x7d, 0xcb, 0x6c, 0x36, 0x95, 0xdb, 0xd1, + 0xa9, 0x30, 0x8e, 0x67, 0x53, 0x61, 0x1c, 0xc6, 0x4c, 0x4f, 0xde, 0x2a, 0xba, 0xd2, 0x59, 0x55, + 0xee, 0x44, 0x4d, 0xcf, 0x10, 0xc3, 0x4c, 0xcf, 0xf0, 0x17, 0xee, 0x2e, 0xd8, 0x2f, 0x8d, 0xde, + 0x73, 0xa8, 0xbb, 0xa6, 0xdc, 0x8d, 0xed, 0x2e, 0x24, 0x1c, 0xee, 0x2e, 0xa4, 0xdf, 0x64, 0x15, + 0x1e, 0x8b, 0x2c, 0x34, 0xfe, 0xad, 0x4d, 0x8d, 0xea, 0x4e, 0x63, 0x4d, 0x79, 0x1b, 0x59, 0x3d, + 0x9d, 0xba, 0x54, 0x45, 0x49, 0xe7, 0xfa, 0xb4, 0x5e, 0x9c, 0x70, 0x5b, 0xfe, 0xb1, 0x79, 0x1e, + 0x91, 0x5b, 0xab, 0x96, 0xfd, 0x4d, 0xe8, 0x3b, 0xb1, 0x6d, 0x79, 0x92, 0x04, 0xb7, 0xe5, 0x49, + 0x30, 0x69, 0xc3, 0x13, 0xb1, 0xad, 0xda, 0x82, 0xde, 0x64, 0xfb, 0x12, 0x6a, 0x54, 0xf5, 0xc6, + 0x3a, 0xf5, 0x94, 0x8f, 0x23, 0xef, 0x4b, 0x5d, 0x36, 0x7c, 0x31, 0xea, 0xb9, 0x3e, 0x6d, 0x07, + 0x7e, 0x44, 0x85, 0x7c, 0x6d, 0x76, 0xb9, 0xaa, 0x7c, 0x22, 0x7a, 0xbe, 0xc9, 0x60, 0x73, 0x7d, + 0x1a, 0xe2, 0x98, 0x95, 0x76, 0xbb, 0xbd, 0xea, 0xe8, 0x06, 0xe5, 0x86, 0x16, 0xda, 0x6e, 0xc2, + 0x00, 0xfd, 0x64, 0xd4, 0x4a, 0xeb, 0x46, 0xc7, 0xac, 0xb4, 0x6e, 0x38, 0xa6, 0xa8, 0x91, 0xe4, + 0x53, 0xca, 0x57, 0x44, 0x15, 0x35, 0x82, 0x64, 0x8a, 0x1a, 0x4d, 0x55, 0xf5, 0x36, 0x9c, 0x0d, + 0xf6, 
0xf3, 0x62, 0xfd, 0xe5, 0x9d, 0xa6, 0x7c, 0x0a, 0xf9, 0x3c, 0x91, 0xb8, 0x0c, 0x88, 0x50, + 0xcd, 0xf5, 0x69, 0x5d, 0xca, 0xb3, 0x15, 0x37, 0x91, 0x9c, 0x51, 0x98, 0x17, 0x5f, 0x19, 0x5d, + 0x71, 0xbb, 0x90, 0xb1, 0x15, 0xb7, 0x0b, 0x2a, 0x95, 0xb9, 0x10, 0xaa, 0xbe, 0x03, 0xf3, 0x40, + 0xa6, 0xdd, 0x38, 0xa4, 0x32, 0x17, 0x96, 0xda, 0xca, 0x0e, 0xcc, 0x03, 0x6b, 0xad, 0x1b, 0x07, + 0x72, 0x19, 0x06, 0x6a, 0xb5, 0x05, 0xad, 0x63, 0x29, 0x8d, 0x98, 0x3b, 0x32, 0x42, 0xe7, 0xfa, + 0x34, 0x81, 0x67, 0x66, 0xd0, 0x4c, 0x53, 0x77, 0x3d, 0xb3, 0xe1, 0xe2, 0x88, 0xf1, 0x47, 0x88, + 0x11, 0x35, 0x83, 0xd2, 0x68, 0x98, 0x19, 0x94, 0x06, 0x67, 0xf6, 0x62, 0x59, 0x77, 0x5d, 0xdd, + 0x32, 0x1c, 0x7d, 0x0a, 0x97, 0x09, 0x1a, 0x7b, 0x0c, 0x18, 0xc1, 0x32, 0x7b, 0x31, 0x0a, 0xc1, + 0xc3, 0x77, 0x1f, 0xe2, 0x9b, 0x39, 0xf7, 0x62, 0x87, 0xef, 0x31, 0x3c, 0x1e, 0xbe, 0xc7, 0x60, + 0x68, 0x77, 0xfa, 0x30, 0x8d, 0xae, 0x9a, 0x4c, 0x44, 0xca, 0x6a, 0xcc, 0xee, 0x8c, 0x13, 0xa0, + 0xdd, 0x19, 0x07, 0x46, 0x9a, 0xe4, 0x2f, 0xb7, 0x6b, 0x5d, 0x9a, 0x14, 0xae, 0xb2, 0x89, 0x32, + 0x6c, 0xfd, 0x0e, 0x07, 0xc7, 0xf4, 0xa6, 0xa5, 0xb7, 0xec, 0xe9, 0x29, 0x5f, 0xea, 0x66, 0x74, + 0xfd, 0xee, 0x4a, 0xc8, 0xd6, 0xef, 0xae, 0x48, 0x36, 0xbb, 0xfa, 0x1b, 0xad, 0x35, 0xdd, 0xa1, + 0xc6, 0xb4, 0xe9, 0xe0, 0xc9, 0xe2, 0x26, 0xdf, 0x1a, 0xbe, 0x1b, 0x9d, 0x5d, 0x7b, 0x90, 0xb2, + 0xd9, 0xb5, 0x07, 0x9a, 0x19, 0x79, 0xe9, 0x68, 0x8d, 0xea, 0x86, 0xb2, 0x1e, 0x35, 0xf2, 0xba, + 0x53, 0x32, 0x23, 0xaf, 0x3b, 0xb6, 0xfb, 0xe7, 0xdc, 0x75, 0x4c, 0x8f, 0x2a, 0xcd, 0xdd, 0x7c, + 0x0e, 0x92, 0x76, 0xff, 0x1c, 0x44, 0xb3, 0x0d, 0x61, 0xbc, 0x43, 0x5a, 0xd1, 0x0d, 0x61, 0xb2, + 0x1b, 0xe2, 0x25, 0x98, 0xc5, 0x22, 0x5e, 0x3d, 0x2a, 0x56, 0xd4, 0x62, 0x11, 0x60, 0x66, 0xb1, + 0x84, 0xef, 0x22, 0x23, 0x6f, 0xdd, 0x14, 0x3b, 0xba, 0x86, 0xca, 0x38, 0xb6, 0x86, 0x46, 0xde, + 0xc5, 0x7d, 0x30, 0xf2, 0x90, 0x43, 0x69, 0x47, 0xad, 0x0e, 0x09, 0xc5, 0xac, 0x0e, 0xf9, 0xc9, + 0x47, 0x19, 0x26, 0xf0, 0x16, 0x5c, 0xeb, 
0x04, 0xf7, 0x38, 0x9f, 0x8e, 0x7e, 0x66, 0x0c, 0xcd, + 0x3e, 0x33, 0x06, 0x8a, 0x30, 0x11, 0xd3, 0x96, 0xd3, 0x85, 0x49, 0x78, 0x3e, 0x18, 0x03, 0x91, + 0x79, 0x20, 0xb5, 0xd2, 0xc2, 0x7c, 0xc5, 0xa8, 0xca, 0x57, 0x64, 0x6e, 0xf4, 0x04, 0x36, 0x49, + 0x31, 0xd7, 0xa7, 0xa5, 0x94, 0x23, 0xef, 0xc2, 0x45, 0x01, 0x15, 0xa1, 0x03, 0xaa, 0x8e, 0x7d, + 0xdf, 0x34, 0x82, 0x05, 0xc1, 0x8b, 0x3a, 0x0a, 0xf6, 0xa2, 0x9d, 0xeb, 0xd3, 0x7a, 0xf2, 0xea, + 0x5e, 0x97, 0x58, 0x1f, 0x3a, 0xbb, 0xa9, 0x2b, 0x58, 0x24, 0x7a, 0xf2, 0xea, 0x5e, 0x97, 0x90, + 0xfb, 0xfd, 0xdd, 0xd4, 0x15, 0x74, 0x42, 0x4f, 0x5e, 0xc4, 0x85, 0x62, 0x2f, 0x7c, 0xa9, 0xd9, + 0x54, 0x36, 0xb0, 0xba, 0x67, 0x77, 0x53, 0x5d, 0x09, 0x0d, 0xce, 0x9d, 0x38, 0xb2, 0x59, 0x7a, + 0xa9, 0x4d, 0xad, 0x5a, 0x64, 0x01, 0x7a, 0x10, 0x9d, 0xa5, 0x13, 0x04, 0x6c, 0x96, 0x4e, 0x00, + 0xd9, 0x80, 0x92, 0xdf, 0x03, 0x29, 0x9b, 0xd1, 0x01, 0x25, 0xe3, 0xd8, 0x80, 0x8a, 0xbc, 0x1d, + 0x5a, 0x82, 0x53, 0x4b, 0xeb, 0x9e, 0xee, 0x5b, 0x90, 0xae, 0xe8, 0xca, 0xf7, 0x62, 0x97, 0x4c, + 0x49, 0x12, 0xbc, 0x64, 0x4a, 0x82, 0xd9, 0x18, 0x61, 0xe0, 0xda, 0xa6, 0xd5, 0x98, 0xd5, 0xcd, + 0x66, 0xc7, 0xa1, 0xca, 0xdf, 0x88, 0x8e, 0x91, 0x18, 0x9a, 0x8d, 0x91, 0x18, 0x88, 0x2d, 0xd0, + 0x0c, 0x54, 0x72, 0x5d, 0x73, 0xd5, 0x12, 0xfb, 0xca, 0x4e, 0xd3, 0x53, 0xfe, 0xb3, 0xe8, 0x02, + 0x9d, 0x46, 0xc3, 0x16, 0xe8, 0x34, 0x38, 0x9e, 0x3a, 0xb1, 0x5e, 0x60, 0x8b, 0x87, 0x7c, 0x57, + 0xf9, 0x9f, 0xc7, 0x4e, 0x9d, 0x52, 0x68, 0xf0, 0xd4, 0x29, 0x05, 0xce, 0xd6, 0x47, 0x6e, 0x93, + 0xcd, 0x9b, 0xc1, 0x5d, 0xf5, 0x7f, 0x11, 0x5d, 0x1f, 0xe3, 0x78, 0xb6, 0x3e, 0xc6, 0x61, 0x51, + 0x3e, 0xa2, 0x0b, 0xfe, 0xcb, 0x6e, 0x7c, 0x02, 0xf9, 0x27, 0xca, 0x90, 0x1b, 0x32, 0x1f, 0x31, + 0x52, 0xbe, 0x2a, 0xd3, 0x8d, 0x51, 0x30, 0x3c, 0x12, 0x85, 0xa2, 0x8c, 0x34, 0x7a, 0xdf, 0xa4, + 0x1b, 0xca, 0x57, 0x77, 0x65, 0xc4, 0x09, 0xa2, 0x8c, 0x38, 0x8c, 0xbc, 0x03, 0x67, 0x43, 0xd8, + 0x02, 0x6d, 0xad, 0x04, 0x33, 0xd3, 0xdf, 0xcc, 0x44, 0xcd, 0xe0, 0x74, 0x32, 
0x66, 0x06, 0xa7, + 0x63, 0xd2, 0x58, 0x0b, 0xd1, 0xfd, 0xad, 0x1d, 0x58, 0x07, 0x12, 0xec, 0xc2, 0x20, 0x8d, 0xb5, + 0x90, 0xe6, 0xd7, 0xec, 0xc0, 0x3a, 0x90, 0x69, 0x17, 0x06, 0xe4, 0x1b, 0x32, 0x70, 0x29, 0x1d, + 0x55, 0x6a, 0x36, 0x67, 0x6d, 0x27, 0xc4, 0x29, 0x7f, 0x3b, 0x13, 0x3d, 0x68, 0xd8, 0x5d, 0xb1, + 0xb9, 0x3e, 0x6d, 0x97, 0x15, 0x90, 0x8f, 0xc2, 0x58, 0xa9, 0x63, 0x98, 0x1e, 0x5e, 0xbc, 0x31, + 0xc3, 0xf9, 0x6b, 0x33, 0xb1, 0x2d, 0x8e, 0x8c, 0xc5, 0x2d, 0x8e, 0x0c, 0x20, 0x37, 0x61, 0xb2, + 0x46, 0x1b, 0x1d, 0x8c, 0x37, 0x41, 0xdb, 0xb6, 0xe3, 0x31, 0x1e, 0x5f, 0x97, 0x89, 0x4e, 0x62, + 0x09, 0x0a, 0x36, 0x89, 0x25, 0x80, 0xe4, 0x4e, 0xe2, 0x56, 0x5e, 0x74, 0xe6, 0xd7, 0x67, 0x7a, + 0x5e, 0xcb, 0x07, 0x7d, 0x99, 0x5e, 0x9c, 0x54, 0x63, 0xb7, 0xe8, 0x82, 0xeb, 0x37, 0x64, 0x7a, + 0x5c, 0xa3, 0x4b, 0x33, 0x5c, 0x12, 0xcc, 0x38, 0x46, 0xee, 0xae, 0x05, 0xc7, 0xbf, 0x93, 0xe9, + 0x71, 0xed, 0x1d, 0x72, 0x4c, 0x01, 0x93, 0x57, 0xb8, 0xa7, 0x88, 0x60, 0xf4, 0x77, 0x33, 0x49, + 0x57, 0x91, 0xa0, 0xbc, 0x44, 0xc8, 0x8a, 0xdd, 0x76, 0x03, 0xa5, 0xff, 0x4c, 0x26, 0xe9, 0x9b, + 0x17, 0x16, 0x0b, 0x7f, 0x11, 0x0a, 0x17, 0x66, 0x1e, 0x78, 0xd4, 0xb1, 0xf4, 0x26, 0x76, 0x67, + 0xcd, 0xb3, 0x1d, 0x7d, 0x95, 0xce, 0x58, 0xfa, 0x4a, 0x93, 0x2a, 0xdf, 0x98, 0x89, 0x5a, 0xb0, + 0xdd, 0x49, 0x99, 0x05, 0xdb, 0x1d, 0x4b, 0xd6, 0xe0, 0xb1, 0x34, 0xec, 0xb4, 0xe9, 0x62, 0x3d, + 0xdf, 0x94, 0x89, 0x9a, 0xb0, 0x3d, 0x68, 0x99, 0x09, 0xdb, 0x03, 0x4d, 0xae, 0xc3, 0xf0, 0x94, + 0xed, 0x4f, 0xbf, 0xdf, 0x1c, 0x73, 0x86, 0x0c, 0x30, 0x73, 0x7d, 0x5a, 0x48, 0x26, 0xca, 0x88, + 0x41, 0xfd, 0xd9, 0x64, 0x99, 0xf0, 0xf2, 0x29, 0xf8, 0x21, 0xca, 0x08, 0x71, 0xff, 0x57, 0xc9, + 0x32, 0xe1, 0x1d, 0x57, 0xf0, 0x83, 0xcd, 0x24, 0xbc, 0xc6, 0x85, 0xd9, 0x12, 0xb3, 0xdb, 0xca, + 0x6b, 0x7a, 0xb3, 0x49, 0xad, 0x55, 0xaa, 0x7c, 0x2e, 0x36, 0x93, 0xa4, 0x93, 0xb1, 0x99, 0x24, + 0x1d, 0x43, 0x3e, 0x09, 0xe7, 0xee, 0xe8, 0x4d, 0xd3, 0x08, 0x71, 0x1a, 0x75, 0xdb, 0xb6, 0xe5, + 0x52, 0xe5, 0xf3, 
0x99, 0xe8, 0x6e, 0xba, 0x0b, 0x1d, 0xdb, 0x4d, 0x77, 0x41, 0x91, 0x05, 0x20, + 0xb8, 0x8c, 0x06, 0xb3, 0x05, 0x5b, 0x9f, 0x95, 0xbf, 0x97, 0x89, 0xda, 0xa9, 0x49, 0x12, 0x66, + 0xa7, 0x26, 0xa1, 0xa4, 0xde, 0x3d, 0xc9, 0x92, 0xf2, 0x2d, 0x99, 0xe8, 0x69, 0x4d, 0x37, 0xc2, + 0xb9, 0x3e, 0xad, 0x7b, 0xa6, 0xa6, 0x1b, 0x50, 0xa8, 0x55, 0x2b, 0xb3, 0xb3, 0x33, 0xb5, 0x3b, + 0x95, 0x69, 0x7c, 0xaa, 0x61, 0x28, 0xdf, 0x1a, 0x5b, 0xb1, 0xe2, 0x04, 0x6c, 0xc5, 0x8a, 0xc3, + 0xc8, 0xeb, 0x30, 0xca, 0xda, 0xcf, 0x06, 0x0c, 0x7e, 0xf2, 0xb7, 0x65, 0xa2, 0xe6, 0x94, 0x8c, + 0x64, 0xe6, 0x94, 0xfc, 0x9b, 0xd4, 0xe0, 0x34, 0x93, 0x62, 0xd5, 0xa1, 0xf7, 0xa8, 0x43, 0xad, + 0x86, 0x3f, 0xa6, 0xbf, 0x3d, 0x13, 0xb5, 0x32, 0xd2, 0x88, 0x98, 0x95, 0x91, 0x06, 0x27, 0xeb, + 0x70, 0x31, 0x7e, 0x12, 0x24, 0xbf, 0xeb, 0x55, 0xfe, 0x7e, 0x26, 0x66, 0x0c, 0xf7, 0x20, 0x46, + 0x63, 0xb8, 0x07, 0x9e, 0x58, 0xf0, 0xb8, 0x38, 0x56, 0x11, 0x0e, 0x97, 0xf1, 0xda, 0xbe, 0x83, + 0xd7, 0xf6, 0xbe, 0xd0, 0x21, 0xb0, 0x07, 0xf5, 0x5c, 0x9f, 0xd6, 0x9b, 0x1d, 0xd3, 0xb3, 0x64, + 0x2a, 0x21, 0xe5, 0x3b, 0x33, 0xe9, 0x1e, 0x29, 0x11, 0x37, 0xe5, 0xb4, 0x1c, 0x44, 0xef, 0x74, + 0x4b, 0x84, 0xa3, 0xfc, 0x83, 0xd8, 0x78, 0x4b, 0x27, 0x63, 0xe3, 0xad, 0x4b, 0x26, 0x9d, 0x9b, + 0x30, 0xc9, 0x95, 0xba, 0xaa, 0xe3, 0x30, 0xb4, 0x56, 0xa9, 0xa1, 0xfc, 0xc3, 0xd8, 0x6a, 0x97, + 0xa0, 0x40, 0xd7, 0x9e, 0x38, 0x90, 0x4d, 0xdd, 0xb5, 0xb6, 0x6e, 0x59, 0x78, 0xcc, 0xaa, 0xfc, + 0xd7, 0xb1, 0xa9, 0x3b, 0x44, 0xa1, 0xe3, 0x6e, 0xf0, 0x8b, 0x69, 0x42, 0xaf, 0x24, 0x72, 0xca, + 0x7f, 0x13, 0xd3, 0x84, 0x5e, 0xc4, 0x4c, 0x13, 0x7a, 0x66, 0xa4, 0xbb, 0xd3, 0xe5, 0x8d, 0xbd, + 0xf2, 0x85, 0xd8, 0x8a, 0x9c, 0x4a, 0xc5, 0x56, 0xe4, 0xf4, 0x27, 0xfa, 0x77, 0xba, 0xbc, 0x4f, + 0x57, 0xbe, 0xab, 0x37, 0xdf, 0x70, 0xa5, 0x4f, 0x7f, 0xde, 0x7e, 0xa7, 0xcb, 0xdb, 0x6e, 0xe5, + 0xbb, 0x7b, 0xf3, 0x0d, 0x1d, 0xfb, 0xd2, 0x9f, 0x86, 0xd7, 0xbb, 0xbf, 0x8b, 0x56, 0xfe, 0x51, + 0x7c, 0xea, 0xea, 0x42, 0x88, 0x53, 0x57, 0xb7, 0xc7, 
0xd5, 0x2b, 0x70, 0x9e, 0x6b, 0xc8, 0x0d, + 0x47, 0x6f, 0xaf, 0xd5, 0xa8, 0xe7, 0x99, 0xd6, 0xaa, 0xbf, 0x13, 0xfb, 0x9e, 0x4c, 0xec, 0x78, + 0xac, 0x1b, 0x25, 0x1e, 0x8f, 0x75, 0x43, 0x32, 0xe5, 0x4d, 0xbc, 0x80, 0x56, 0xfe, 0x71, 0x4c, + 0x79, 0x13, 0x14, 0x4c, 0x79, 0x93, 0x0f, 0xa7, 0x6f, 0xa6, 0x3c, 0xf4, 0x55, 0xfe, 0xdb, 0xee, + 0xbc, 0x82, 0xf6, 0xa5, 0xbc, 0x0f, 0xbe, 0x99, 0xf2, 0x9e, 0x55, 0xf9, 0xef, 0xba, 0xf3, 0x0a, + 0x7d, 0x90, 0x92, 0xcf, 0x60, 0xdf, 0x81, 0xb3, 0x7c, 0x36, 0x9f, 0xa5, 0x06, 0x8d, 0x7c, 0xe8, + 0xf7, 0xc6, 0xc6, 0x7e, 0x3a, 0x19, 0x1e, 0xb9, 0xa7, 0x62, 0xd2, 0x58, 0x8b, 0xb6, 0xfe, 0x93, + 0x1d, 0x58, 0x87, 0x1b, 0x82, 0x74, 0x0c, 0x5b, 0x6f, 0xe4, 0xd7, 0x6f, 0xca, 0xf7, 0xc5, 0xd6, + 0x1b, 0x19, 0x89, 0xee, 0x1c, 0xf2, 0x53, 0xb9, 0xd7, 0xa3, 0x2f, 0xbd, 0x94, 0xef, 0x4f, 0x2d, + 0x1c, 0x74, 0x40, 0xf4, 0x59, 0xd8, 0xeb, 0xd1, 0x57, 0x4d, 0xca, 0x0f, 0xa4, 0x16, 0x0e, 0x3e, + 0x20, 0xfa, 0x04, 0x8a, 0x6d, 0x91, 0x3a, 0x9e, 0xcd, 0x59, 0x45, 0xa6, 0x87, 0x7f, 0x1a, 0xdf, + 0x22, 0xa5, 0x92, 0xe1, 0x16, 0x29, 0x15, 0x93, 0xc6, 0x5a, 0x7c, 0xde, 0x0f, 0xee, 0xc0, 0x5a, + 0xda, 0xd8, 0xa5, 0x62, 0xd2, 0x58, 0x8b, 0x8f, 0xff, 0xa1, 0x1d, 0x58, 0x4b, 0x1b, 0xbb, 0x54, + 0x0c, 0x33, 0xc7, 0x42, 0xcc, 0x1d, 0xea, 0xb8, 0xa1, 0xfa, 0xfd, 0xf7, 0x31, 0x73, 0xac, 0x0b, + 0x1d, 0x33, 0xc7, 0xba, 0xa0, 0x52, 0xb9, 0x0b, 0xa1, 0xfc, 0xf0, 0x4e, 0xdc, 0xc3, 0x7b, 0x99, + 0x2e, 0xa8, 0x54, 0xee, 0x42, 0x2e, 0xff, 0x6c, 0x27, 0xee, 0xe1, 0xc5, 0x4c, 0x17, 0x14, 0x33, + 0x8a, 0x6a, 0x9e, 0xee, 0x99, 0x8d, 0x39, 0xdb, 0xf5, 0xa4, 0x45, 0xfe, 0x7f, 0x88, 0x19, 0x45, + 0x69, 0x44, 0xcc, 0x28, 0x4a, 0x83, 0x27, 0x99, 0x0a, 0x69, 0xfc, 0x48, 0x4f, 0xa6, 0xa1, 0xa5, + 0x95, 0x06, 0x4f, 0x32, 0x15, 0x42, 0xf8, 0x1f, 0x7b, 0x32, 0x0d, 0x3d, 0xe5, 0xd3, 0xe0, 0xcc, + 0x32, 0x2d, 0x3b, 0xf6, 0x86, 0x75, 0x93, 0x6e, 0xd0, 0xa6, 0xf8, 0xf4, 0x1f, 0x8d, 0x59, 0xa6, + 0x71, 0x02, 0xbc, 0x45, 0x89, 0xc1, 0xa2, 0x8c, 0xc4, 0xe7, 0xfe, 0x58, 0x57, 0x46, 0xe1, 
0x31, + 0x51, 0x1c, 0x16, 0x65, 0x24, 0x3e, 0xf1, 0xc7, 0xbb, 0x32, 0x0a, 0x8f, 0x89, 0xe2, 0x30, 0x52, + 0x82, 0x71, 0x7c, 0x2b, 0xa1, 0xbb, 0xbe, 0xe7, 0xe7, 0x4f, 0x67, 0xa2, 0xb7, 0x5e, 0x51, 0xf4, + 0x5c, 0x9f, 0x16, 0x2b, 0x20, 0xb3, 0x10, 0x9f, 0xf4, 0x33, 0x5d, 0x58, 0x84, 0xfe, 0x8e, 0x51, + 0x88, 0xcc, 0x42, 0x7c, 0xcc, 0x3f, 0xef, 0xc2, 0x22, 0x74, 0x78, 0x8c, 0x42, 0xc8, 0x87, 0x60, + 0xa4, 0x36, 0xbb, 0x5c, 0xf5, 0x13, 0x9d, 0xfe, 0x6c, 0x26, 0xf6, 0xaa, 0x28, 0xc4, 0xe1, 0xab, + 0xa2, 0xf0, 0x27, 0xf9, 0x28, 0x8c, 0x95, 0x6d, 0xcb, 0xd3, 0x1b, 0xfe, 0x06, 0xf4, 0xe7, 0x62, + 0x67, 0x28, 0x11, 0xec, 0x5c, 0x9f, 0x16, 0x25, 0x97, 0xca, 0x8b, 0xb6, 0xff, 0x7c, 0x7a, 0xf9, + 0xa0, 0xe9, 0x51, 0x72, 0x36, 0xa3, 0xdd, 0xb5, 0x9d, 0xf5, 0xa6, 0xad, 0x1b, 0x7e, 0x40, 0x52, + 0xd1, 0x90, 0x7f, 0x11, 0x9b, 0xd1, 0xd2, 0xc9, 0xd8, 0x8c, 0x96, 0x8e, 0x49, 0x63, 0x2d, 0xba, + 0xe8, 0x17, 0x76, 0x60, 0x1d, 0xce, 0xc3, 0xe9, 0x98, 0x34, 0xd6, 0xe2, 0xf3, 0xff, 0xe5, 0x0e, + 0xac, 0xc3, 0x79, 0x38, 0x1d, 0xc3, 0x4c, 0xeb, 0x1b, 0xa6, 0xe7, 0x3f, 0x6c, 0xfb, 0xc5, 0x98, + 0x69, 0x1d, 0xa2, 0x98, 0x69, 0x1d, 0xfe, 0x22, 0x14, 0x2e, 0x04, 0x4f, 0x25, 0xc3, 0xbd, 0x6b, + 0xc5, 0xba, 0xcf, 0xf6, 0xc7, 0xca, 0xff, 0x14, 0x3b, 0x15, 0xe9, 0x4e, 0x3a, 0xd7, 0xa7, 0xf5, + 0x60, 0x44, 0xaa, 0x31, 0x3f, 0x45, 0x1e, 0xd4, 0x4f, 0xf9, 0xa5, 0x4c, 0x0f, 0x47, 0x45, 0x4e, + 0x93, 0x70, 0x54, 0xe4, 0x60, 0x31, 0x67, 0xad, 0x34, 0xe9, 0xed, 0xc5, 0xca, 0xdb, 0xd2, 0xec, + 0xfa, 0xc5, 0xe4, 0x9c, 0x95, 0x20, 0x12, 0x73, 0x56, 0x02, 0x4e, 0xfe, 0x56, 0x06, 0x9e, 0x89, + 0xcb, 0xf7, 0xed, 0x57, 0x5e, 0x78, 0x4d, 0xa3, 0xf7, 0xed, 0x86, 0x6c, 0x59, 0xfd, 0x32, 0xaf, + 0xe5, 0xf9, 0x6e, 0xdd, 0x95, 0x56, 0x68, 0xae, 0x4f, 0xdb, 0x15, 0xf3, 0x5d, 0xb4, 0x42, 0x28, + 0xcd, 0xaf, 0xec, 0xa9, 0x15, 0x81, 0x0a, 0xed, 0x8a, 0xf9, 0x2e, 0x5a, 0x21, 0x46, 0xc5, 0xaf, + 0xee, 0xa9, 0x15, 0xc1, 0x18, 0xd9, 0x15, 0x73, 0xdc, 0x7d, 0xde, 0xad, 0x55, 0xca, 0x81, 0xaf, + 0xcf, 0xa6, 0xd5, 0x50, 0x7e, 
0x2d, 0xbe, 0xfb, 0x8c, 0x53, 0xe0, 0xee, 0x33, 0x0e, 0x64, 0xcb, + 0xfd, 0x1c, 0xd5, 0x9b, 0x6c, 0x3b, 0x4a, 0x1b, 0xeb, 0x11, 0xe3, 0xed, 0xd7, 0x63, 0xcb, 0x7d, + 0x17, 0x3a, 0xb6, 0xdc, 0x77, 0x41, 0xa5, 0x72, 0x17, 0x12, 0xfa, 0x8d, 0x9d, 0xb8, 0x87, 0xa6, + 0x4a, 0x17, 0x54, 0x2a, 0x77, 0xa1, 0x05, 0xff, 0xf3, 0x4e, 0xdc, 0x43, 0x53, 0xa5, 0x0b, 0x8a, + 0x7c, 0x73, 0x06, 0x2e, 0xa7, 0x75, 0x07, 0x8f, 0xfd, 0xb1, 0x74, 0x9f, 0x3a, 0x8e, 0x69, 0xf8, + 0x17, 0xc8, 0xbf, 0xc9, 0xeb, 0x7b, 0xa1, 0x57, 0x7f, 0xa7, 0x15, 0x9c, 0xeb, 0xd3, 0x76, 0x5d, + 0xc9, 0x2e, 0x5b, 0x24, 0x24, 0xf0, 0x5b, 0x7b, 0x6e, 0x51, 0x20, 0x92, 0x5d, 0x57, 0x82, 0x13, + 0x8e, 0xb9, 0xea, 0x7a, 0xb6, 0x43, 0xab, 0x76, 0xd3, 0x6c, 0xf8, 0xeb, 0xcd, 0x6f, 0xc7, 0x27, + 0x9c, 0x14, 0x22, 0x9c, 0x70, 0x52, 0xe0, 0x49, 0xa6, 0x42, 0x63, 0x7e, 0xa7, 0x27, 0x53, 0xc9, + 0x9c, 0x4b, 0x81, 0x27, 0x99, 0x0a, 0x31, 0xfd, 0x6e, 0x4f, 0xa6, 0x92, 0x39, 0x97, 0x02, 0x27, + 0x16, 0x3c, 0x1e, 0x1a, 0xba, 0xa5, 0x55, 0x6a, 0x79, 0x9a, 0xdd, 0x6c, 0xda, 0x1d, 0x6f, 0xd9, + 0x31, 0x57, 0x57, 0xa9, 0xa3, 0xfc, 0x5e, 0xec, 0x80, 0xac, 0x27, 0xf5, 0x5c, 0x9f, 0xd6, 0x9b, + 0x1d, 0xf1, 0xa0, 0x98, 0x4e, 0x30, 0x6b, 0x3b, 0x0d, 0x3a, 0x6d, 0x5b, 0x54, 0xf9, 0xfd, 0x4c, + 0xf4, 0x7a, 0x7a, 0x07, 0xfa, 0xb9, 0x3e, 0x6d, 0x27, 0x96, 0xe4, 0xd3, 0xf0, 0x44, 0x3a, 0x09, + 0xfb, 0x6f, 0x45, 0x6f, 0xac, 0x2b, 0xff, 0x4b, 0x26, 0xea, 0xf3, 0xd7, 0x9b, 0x7c, 0xae, 0x4f, + 0xdb, 0x81, 0x21, 0x99, 0x86, 0x89, 0x85, 0x72, 0x35, 0xf2, 0xa0, 0xe3, 0x0f, 0x32, 0xb1, 0xe7, + 0x57, 0x51, 0x3c, 0x3e, 0xbf, 0x8a, 0x82, 0x98, 0x3d, 0x15, 0x82, 0x66, 0x2c, 0x43, 0xf9, 0x5f, + 0x63, 0xf6, 0x54, 0x04, 0x8b, 0xfe, 0xa5, 0x32, 0x80, 0xcd, 0xb3, 0x21, 0xc0, 0xbf, 0x97, 0xff, + 0xc3, 0xd8, 0x3c, 0x9b, 0xa0, 0x60, 0xf3, 0x6c, 0x02, 0xc8, 0xac, 0x9c, 0x10, 0xb8, 0x68, 0x0b, + 0x97, 0x5f, 0xd3, 0xb6, 0x94, 0x3f, 0x8a, 0x59, 0x39, 0xe9, 0x64, 0xcc, 0xca, 0x49, 0xc7, 0x30, + 0xd5, 0x9e, 0xb2, 0x3b, 0x96, 0x71, 0x8b, 0x6e, 0xb6, 0x75, 0xd3, 
0xf1, 0xdf, 0x21, 0x29, 0x7f, + 0x1c, 0x53, 0xed, 0x34, 0x22, 0xa6, 0xda, 0x69, 0xf0, 0x04, 0x53, 0xdb, 0xe3, 0xad, 0xfd, 0x57, + 0xbd, 0x98, 0x0a, 0xa2, 0x04, 0x53, 0x01, 0x27, 0x9f, 0xc9, 0xc0, 0xb3, 0x32, 0xe2, 0xa6, 0x6d, + 0x5a, 0xe8, 0x81, 0x7d, 0x87, 0x3a, 0xc1, 0xf7, 0xcc, 0xea, 0x66, 0x93, 0x1a, 0xca, 0x16, 0xaf, + 0xe8, 0x5a, 0x5a, 0x45, 0x3d, 0xca, 0xcd, 0xf5, 0x69, 0xbb, 0xad, 0x82, 0xac, 0xc0, 0xf9, 0x50, + 0xa4, 0xcc, 0x32, 0xa3, 0x56, 0xad, 0x36, 0x53, 0xf3, 0x1c, 0xaa, 0xb7, 0x94, 0x3f, 0x89, 0x1d, + 0xb6, 0x75, 0xa5, 0x44, 0x5f, 0xf2, 0x6e, 0x48, 0xb2, 0x0e, 0x17, 0x43, 0xa4, 0x6f, 0x17, 0x2e, + 0x2f, 0x57, 0x7d, 0x75, 0xfa, 0xd3, 0xd8, 0x31, 0x6d, 0x2f, 0xe2, 0xb9, 0x3e, 0xad, 0x27, 0x33, + 0xbc, 0xf8, 0x28, 0x57, 0x16, 0x58, 0x1b, 0x4c, 0x6b, 0x95, 0xfb, 0x45, 0xfd, 0x59, 0xfc, 0xe2, + 0x23, 0x46, 0x80, 0x17, 0x1f, 0x31, 0x18, 0xde, 0xe6, 0x96, 0x2b, 0x0b, 0xbe, 0xa5, 0xc0, 0x39, + 0xfd, 0x79, 0xfc, 0x36, 0x37, 0x4e, 0x81, 0xb7, 0xb9, 0x71, 0x20, 0xb3, 0xa6, 0x45, 0x80, 0xaa, + 0xaa, 0x46, 0x5d, 0xcf, 0x31, 0xf1, 0xa5, 0x8e, 0x7f, 0xa6, 0xf9, 0xaf, 0x63, 0xd6, 0x74, 0x77, + 0x52, 0x66, 0x4d, 0x77, 0xc7, 0xb2, 0x6f, 0xbf, 0x63, 0x51, 0x2f, 0x62, 0xc1, 0xfc, 0xef, 0xb1, + 0x6f, 0x8f, 0x13, 0xb0, 0x6f, 0x8f, 0xc3, 0xa2, 0x8c, 0x44, 0x2b, 0xff, 0x8f, 0xae, 0x8c, 0xc2, + 0xad, 0x75, 0x1c, 0x16, 0x65, 0x24, 0x96, 0x9b, 0xed, 0xae, 0x8c, 0xc2, 0xad, 0x75, 0x1c, 0x36, + 0x35, 0x08, 0xfd, 0x28, 0xca, 0x9b, 0x03, 0x43, 0x3f, 0x91, 0x29, 0xfc, 0x64, 0xe6, 0xe6, 0xc0, + 0xd0, 0x4f, 0x66, 0x0a, 0x3f, 0xc5, 0xfe, 0xff, 0xa9, 0x4c, 0xe1, 0xa7, 0x33, 0xda, 0xf9, 0xd8, + 0x74, 0x5a, 0x6d, 0xea, 0xc2, 0x6e, 0x4e, 0x45, 0xf1, 0x9f, 0xa9, 0x28, 0x91, 0xac, 0xfe, 0x0b, + 0x19, 0x18, 0xe5, 0x0a, 0x2c, 0x42, 0xc2, 0x5f, 0x80, 0x21, 0xfe, 0xa8, 0xd6, 0x8f, 0x61, 0xa6, + 0x05, 0xbf, 0xc9, 0x25, 0x18, 0x9f, 0xd7, 0x5d, 0x0f, 0x9b, 0x58, 0xb1, 0x0c, 0xfa, 0x00, 0x03, + 0xca, 0xe4, 0xb4, 0x18, 0x94, 0xcc, 0x73, 0x3a, 0x5e, 0x0e, 0xb3, 0xad, 0xe4, 0x76, 0x8c, 0x84, + 0x3e, 
0xf4, 0xc5, 0xad, 0x62, 0x1f, 0x06, 0x3e, 0x8f, 0x95, 0x55, 0x7f, 0x35, 0x03, 0x89, 0xe7, + 0xbe, 0xfb, 0x0f, 0x7d, 0xb8, 0x04, 0x13, 0xb1, 0x0c, 0x3f, 0x22, 0x2a, 0xce, 0x2e, 0x13, 0x00, + 0xc5, 0x4b, 0x93, 0x67, 0x83, 0x68, 0x2c, 0xb7, 0xb5, 0x79, 0x11, 0xe5, 0x9e, 0xa7, 0x0c, 0x76, + 0x9a, 0x9a, 0x84, 0x12, 0x51, 0x8c, 0xff, 0x72, 0x32, 0x4c, 0x3a, 0x42, 0x2e, 0x89, 0x38, 0x8c, + 0x52, 0x02, 0x8e, 0x8e, 0x4b, 0x1d, 0x39, 0x36, 0x3e, 0xc6, 0x5d, 0xfc, 0x28, 0x8c, 0x56, 0x5a, + 0x6d, 0xea, 0xb8, 0xb6, 0xa5, 0x7b, 0xb6, 0x9f, 0xc6, 0x18, 0x23, 0xba, 0x99, 0x12, 0x5c, 0x8e, + 0xe8, 0x26, 0xd3, 0x93, 0x2b, 0xd0, 0xaf, 0xd9, 0x4d, 0xea, 0x2a, 0x39, 0x4c, 0x1c, 0x83, 0xb1, + 0x92, 0x1c, 0x06, 0x90, 0x83, 0xee, 0x21, 0x05, 0x23, 0xe5, 0x29, 0x81, 0xf3, 0x21, 0x29, 0x66, + 0x01, 0x96, 0x49, 0x79, 0x22, 0xe1, 0xe7, 0x61, 0x00, 0x37, 0xbc, 0xae, 0xd2, 0x8f, 0xb4, 0x18, + 0xcc, 0xaf, 0x89, 0x10, 0x39, 0x36, 0x20, 0xa7, 0x21, 0xb7, 0xa0, 0x10, 0x3a, 0x71, 0xdf, 0x70, + 0xec, 0x4e, 0xdb, 0xcf, 0x0a, 0x5d, 0xdc, 0xde, 0x2a, 0x3e, 0xb6, 0x1e, 0xe0, 0xea, 0xab, 0x88, + 0x94, 0x58, 0x24, 0x0a, 0x92, 0x39, 0x98, 0x08, 0x61, 0x4c, 0x44, 0x7e, 0x36, 0x7a, 0x8c, 0x43, + 0x27, 0xf1, 0x62, 0xe2, 0x94, 0x59, 0xc5, 0x8b, 0x91, 0x0a, 0x0c, 0xfa, 0xe1, 0xfa, 0x87, 0x76, + 0x54, 0xd2, 0x53, 0x22, 0x5c, 0xff, 0xa0, 0x1c, 0xa8, 0xdf, 0x2f, 0x4f, 0x66, 0x61, 0x5c, 0xb3, + 0x3b, 0x1e, 0x5d, 0xb6, 0xc5, 0xed, 0xa7, 0x48, 0x0b, 0x81, 0x6d, 0x72, 0x18, 0xa6, 0xee, 0xd9, + 0xf5, 0x06, 0xc7, 0xc9, 0xb1, 0xf1, 0xa2, 0xa5, 0xc8, 0x22, 0x4c, 0x26, 0xdc, 0xdd, 0x31, 0xcc, + 0xcf, 0x30, 0x0f, 0xbc, 0x2e, 0x7d, 0x5e, 0x92, 0x59, 0xb2, 0x28, 0xf9, 0xda, 0x0c, 0x0c, 0x2c, + 0x3b, 0xba, 0xe9, 0xb9, 0x22, 0xe4, 0xcf, 0x99, 0xab, 0x1b, 0x8e, 0xde, 0x66, 0xfa, 0x71, 0x15, + 0xf3, 0xf9, 0xdc, 0xd1, 0x9b, 0x1d, 0xea, 0x4e, 0xdd, 0x65, 0x5f, 0xf7, 0x87, 0x5b, 0xc5, 0xd7, + 0x79, 0x7c, 0xc7, 0xab, 0x0d, 0xbb, 0x75, 0x6d, 0xd5, 0xd1, 0xef, 0x9b, 0x7c, 0x51, 0xd7, 0x9b, + 0xd7, 0x3c, 0xda, 0x44, 0xdf, 0x9d, 0x6b, 
0x7a, 0xdb, 0xbc, 0x86, 0x19, 0xe8, 0xae, 0x05, 0x9c, + 0x78, 0x0d, 0x4c, 0x05, 0x3c, 0xfc, 0x4b, 0x56, 0x01, 0x8e, 0x23, 0x8b, 0x00, 0xe2, 0x53, 0x4b, + 0xed, 0xb6, 0x88, 0x1f, 0x24, 0x79, 0xbc, 0xf8, 0x18, 0xae, 0xd8, 0x81, 0xc0, 0xf4, 0xb6, 0x94, + 0x75, 0x49, 0x93, 0x38, 0x30, 0x2d, 0x58, 0x16, 0x2d, 0xf2, 0xc5, 0x34, 0x26, 0x45, 0x23, 0x14, + 0xa8, 0x14, 0x21, 0xc5, 0x8b, 0x91, 0x15, 0x98, 0x10, 0x7c, 0x83, 0x6c, 0xb1, 0xe3, 0xd1, 0x59, + 0x21, 0x86, 0xe6, 0x4a, 0x1b, 0xb4, 0xd1, 0x10, 0x60, 0xb9, 0x8e, 0x58, 0x09, 0x32, 0x05, 0x63, + 0xfe, 0xdf, 0x8b, 0x7a, 0x8b, 0xba, 0xca, 0x04, 0x6a, 0x2c, 0x26, 0xc6, 0xf1, 0xcb, 0x63, 0xe6, + 0x0a, 0x59, 0x74, 0xd1, 0x22, 0x32, 0x0f, 0xae, 0xf5, 0x85, 0x14, 0x1e, 0x71, 0x9d, 0x8f, 0x16, + 0x21, 0x65, 0x18, 0x0b, 0xc2, 0x17, 0xdc, 0xbe, 0x5d, 0x99, 0xc6, 0x00, 0x45, 0x22, 0x79, 0x49, + 0x2c, 0x8b, 0xaa, 0xcc, 0x24, 0x52, 0x46, 0x8a, 0x5a, 0xc9, 0x23, 0x16, 0xc5, 0xa2, 0x56, 0xb6, + 0x53, 0xa2, 0x56, 0x56, 0xc9, 0x47, 0x60, 0xa4, 0x74, 0xb7, 0x26, 0xa2, 0x71, 0xba, 0xca, 0xa9, + 0x30, 0x63, 0x3a, 0x86, 0x00, 0x15, 0x91, 0x3b, 0xe5, 0xa6, 0xcb, 0xf4, 0x64, 0x06, 0xc6, 0x23, + 0x67, 0x61, 0xae, 0x72, 0x1a, 0x39, 0x60, 0xcb, 0x75, 0xc4, 0xd4, 0x1d, 0x81, 0x92, 0x87, 0x57, + 0xb4, 0x10, 0xd3, 0x9a, 0x69, 0xd3, 0xc5, 0x0c, 0xce, 0x1a, 0xc5, 0xc0, 0x9f, 0x18, 0xee, 0x68, + 0x88, 0x6b, 0x8d, 0x21, 0x50, 0x75, 0x87, 0xe3, 0xe4, 0x1e, 0x8d, 0x15, 0x23, 0xef, 0x02, 0xc1, + 0x9c, 0xcf, 0xd4, 0xf0, 0x0d, 0xa1, 0xca, 0xb4, 0xab, 0x9c, 0xc5, 0x0c, 0x69, 0x24, 0x1e, 0xa7, + 0xaf, 0x32, 0x3d, 0x75, 0x49, 0x4c, 0x1f, 0x4f, 0xe8, 0xbc, 0x54, 0xdd, 0x8f, 0xd1, 0x57, 0x37, + 0x0d, 0xb9, 0xc5, 0x29, 0x5c, 0xc9, 0x06, 0x9c, 0xab, 0x3a, 0xf4, 0xbe, 0x69, 0x77, 0x5c, 0x7f, + 0xf9, 0xf0, 0xe7, 0xad, 0x73, 0x3b, 0xce, 0x5b, 0x4f, 0x89, 0x8a, 0xcf, 0xb4, 0x1d, 0x7a, 0xbf, + 0xee, 0x67, 0xb3, 0x8a, 0xa4, 0x1b, 0xe9, 0xc6, 0x9d, 0x89, 0x0b, 0x83, 0x9e, 0x0a, 0xb8, 0x49, + 0x5d, 0x45, 0x09, 0xa7, 0x5a, 0x1e, 0x62, 0xd6, 0x0c, 0x70, 0xb2, 0xb8, 0x62, 
0xc5, 0x88, 0x06, + 0xe4, 0x46, 0xd9, 0x77, 0x8e, 0x2e, 0x35, 0x1a, 0x76, 0xc7, 0xf2, 0x5c, 0xe5, 0x3c, 0x32, 0x53, + 0x99, 0x58, 0x56, 0x1b, 0x41, 0x8e, 0xbc, 0xba, 0x2e, 0xf0, 0xb2, 0x58, 0x92, 0xa5, 0xc9, 0x3c, + 0x14, 0xaa, 0x0e, 0xba, 0x6a, 0xdc, 0xa2, 0x9b, 0x7c, 0xcb, 0x8e, 0x51, 0x97, 0xc4, 0x54, 0xd9, + 0xe6, 0x38, 0x4c, 0x38, 0xd5, 0x46, 0xac, 0xbc, 0xac, 0xc4, 0x4b, 0xca, 0xb9, 0x94, 0x1e, 0xdb, + 0x5d, 0x2e, 0x25, 0x0a, 0x05, 0xe1, 0x5a, 0xfd, 0xc0, 0xa3, 0x16, 0x5b, 0xea, 0x5d, 0x11, 0x61, + 0x49, 0x89, 0xb9, 0x62, 0x07, 0x78, 0x3e, 0x75, 0x88, 0x51, 0x46, 0x03, 0xb0, 0xdc, 0xb0, 0x78, + 0x91, 0x64, 0xc2, 0xa1, 0xc7, 0xf7, 0x9e, 0x70, 0x88, 0xbc, 0x0a, 0xc3, 0x6c, 0x93, 0x84, 0xbe, + 0xcc, 0x18, 0x4e, 0x49, 0xe4, 0xcc, 0x79, 0xd7, 0x36, 0xad, 0x3a, 0x26, 0xa0, 0x91, 0xc3, 0x04, + 0x07, 0xa4, 0x64, 0x06, 0x86, 0x6a, 0x0d, 0xbb, 0x4d, 0xab, 0xa6, 0x25, 0xc2, 0x25, 0x05, 0xe6, + 0x96, 0x0f, 0xe7, 0xa3, 0xdf, 0x65, 0xbf, 0xea, 0xed, 0x48, 0x52, 0x91, 0xa0, 0xa8, 0xfa, 0x67, + 0x99, 0x90, 0x0f, 0x29, 0x42, 0x3f, 0xfe, 0x2d, 0x4c, 0x9f, 0xe1, 0xed, 0xad, 0x62, 0x3f, 0x16, + 0xd7, 0x38, 0x9c, 0x2c, 0xc3, 0x48, 0xe8, 0xcd, 0xed, 0x2a, 0x59, 0x1c, 0x54, 0x4f, 0xc5, 0xeb, + 0xbd, 0x2a, 0xd1, 0xf0, 0x24, 0x83, 0x18, 0x3d, 0x52, 0x0f, 0xa1, 0x9a, 0xcc, 0xe6, 0xc2, 0xa7, + 0xa0, 0x10, 0x2f, 0x91, 0x92, 0xc6, 0xee, 0x65, 0x39, 0x8d, 0x9d, 0x7c, 0xcd, 0x2f, 0x6a, 0xb5, + 0xa8, 0x21, 0x71, 0x91, 0xd3, 0xdc, 0xbd, 0x06, 0x67, 0xd3, 0x89, 0xd8, 0x07, 0x73, 0x13, 0x2c, + 0x83, 0xfa, 0x8e, 0x1f, 0x8c, 0x26, 0x98, 0x30, 0xbc, 0xd4, 0xff, 0x33, 0x27, 0xaf, 0x8e, 0xe4, + 0x22, 0xe4, 0xa5, 0xac, 0xd8, 0x98, 0xeb, 0x06, 0x33, 0x08, 0xe6, 0x45, 0x0a, 0xaf, 0x61, 0x7f, + 0x73, 0xe8, 0x47, 0x0d, 0x1e, 0xdf, 0xde, 0x2a, 0x42, 0x98, 0xff, 0x44, 0x0b, 0x09, 0xc8, 0x35, + 0x80, 0x6a, 0x67, 0xa5, 0x69, 0x36, 0x30, 0xaf, 0x64, 0x4e, 0x8a, 0xb2, 0x89, 0x50, 0x9e, 0x56, + 0x52, 0x22, 0x21, 0xd7, 0x61, 0xc4, 0x77, 0xe0, 0x0a, 0x73, 0x3d, 0x61, 0xba, 0x41, 0xb1, 0x96, + 0x8a, 0x6c, 0x86, 
0x12, 0x11, 0xf9, 0x30, 0x40, 0x38, 0x59, 0x0b, 0x3b, 0x18, 0x17, 0x72, 0x79, + 0x6e, 0x97, 0x17, 0xf2, 0x90, 0x9a, 0x2d, 0x6b, 0xf2, 0x64, 0xb1, 0x29, 0x52, 0x40, 0xe1, 0xb2, + 0x16, 0x99, 0x61, 0xe4, 0xe1, 0x1b, 0x2d, 0x42, 0x96, 0x60, 0x32, 0x31, 0x3f, 0x88, 0xcc, 0x50, + 0x4f, 0x6d, 0x6f, 0x15, 0x1f, 0x4f, 0x99, 0x5c, 0x64, 0xb3, 0x29, 0x51, 0x96, 0x3c, 0x0d, 0xb9, + 0xdb, 0x5a, 0x45, 0x64, 0xa7, 0xe1, 0x89, 0x8d, 0x22, 0xb1, 0xa1, 0x19, 0x96, 0xbc, 0x06, 0xc0, + 0xb3, 0xec, 0x56, 0x6d, 0xc7, 0x43, 0x7b, 0x6f, 0x8c, 0x27, 0xe8, 0xe3, 0x59, 0x78, 0xeb, 0xcc, + 0xc8, 0x90, 0x3f, 0x3a, 0x24, 0x56, 0xff, 0x66, 0x36, 0x61, 0x74, 0x30, 0xc1, 0x8b, 0x56, 0x48, + 0x9d, 0x8f, 0x82, 0xf7, 0x9b, 0xce, 0x05, 0x2f, 0x11, 0x91, 0xcb, 0x30, 0x54, 0x65, 0x53, 0x7e, + 0xc3, 0x6e, 0x0a, 0x55, 0xc0, 0x10, 0xe5, 0x6d, 0x01, 0xd3, 0x02, 0x2c, 0xb9, 0xce, 0xb3, 0xa6, + 0x59, 0xb1, 0x5c, 0x71, 0x1d, 0x01, 0x8b, 0x27, 0x4d, 0x63, 0x30, 0x56, 0x26, 0x92, 0x40, 0x5f, + 0x94, 0x49, 0x31, 0x78, 0xc2, 0x84, 0xf9, 0xc1, 0x76, 0xa3, 0x7f, 0xa7, 0xed, 0x86, 0xfa, 0x73, + 0x99, 0xe4, 0x04, 0x4a, 0x5e, 0x4e, 0xa6, 0x6d, 0xc2, 0xf9, 0x25, 0x00, 0xca, 0xb5, 0x06, 0x09, + 0x9c, 0x22, 0x09, 0x98, 0xb2, 0xfb, 0x4e, 0xc0, 0x94, 0xdb, 0x63, 0x02, 0x26, 0xf5, 0xff, 0xcd, + 0xf7, 0x7c, 0xe3, 0x7d, 0x24, 0x81, 0xfa, 0x5f, 0x63, 0x5b, 0x66, 0x56, 0x7b, 0xc9, 0x4d, 0x6c, + 0xfc, 0xf8, 0x13, 0xd6, 0xba, 0xce, 0x47, 0xa5, 0xab, 0x45, 0x29, 0xc9, 0x9b, 0x30, 0xea, 0x7f, + 0x00, 0x26, 0xf6, 0x92, 0x12, 0x52, 0x05, 0xe6, 0x4a, 0x2c, 0x05, 0x56, 0xa4, 0x00, 0x79, 0x05, + 0x86, 0xd1, 0x58, 0x6d, 0xeb, 0x0d, 0x3f, 0xeb, 0x1b, 0x4f, 0x13, 0xe7, 0x03, 0xe5, 0x55, 0x26, + 0xa0, 0x24, 0x5f, 0x01, 0x03, 0x22, 0xc5, 0xec, 0x00, 0xce, 0xf5, 0xd7, 0x76, 0xf1, 0x28, 0xfe, + 0xaa, 0x9c, 0x5e, 0x96, 0x6f, 0x3f, 0x11, 0x10, 0xd9, 0x7e, 0xf2, 0xcc, 0xb2, 0xcb, 0x70, 0xaa, + 0xea, 0x50, 0x03, 0xc3, 0x2f, 0xcc, 0x3c, 0x68, 0x3b, 0x22, 0xf9, 0x2f, 0x9f, 0x20, 0xd0, 0xfa, + 0x68, 0xfb, 0x68, 0x66, 0x17, 0x09, 0xbc, 0x9c, 0x7a, 
0x2a, 0xa5, 0x38, 0x33, 0x49, 0x79, 0x4b, + 0x6e, 0xd1, 0xcd, 0x0d, 0xdb, 0x31, 0x78, 0x7e, 0x5c, 0xb1, 0x30, 0x0b, 0x41, 0xaf, 0x0b, 0x94, + 0x6c, 0x92, 0x46, 0x0b, 0x5d, 0x78, 0x0d, 0x46, 0xf6, 0x9b, 0x58, 0xf5, 0x87, 0xb3, 0x5d, 0xa2, + 0xa5, 0x1c, 0xcf, 0xb4, 0xca, 0x53, 0xb1, 0xe9, 0x46, 0xb6, 0xa3, 0x04, 0xbc, 0x5b, 0xe3, 0x83, + 0xe9, 0xa7, 0x08, 0xfd, 0x3c, 0x22, 0x5d, 0x7f, 0x68, 0x5b, 0x7c, 0x1a, 0x9f, 0xaf, 0x70, 0xb8, + 0xfa, 0x17, 0xd9, 0x2e, 0xa1, 0x60, 0x1e, 0x5d, 0x99, 0xb1, 0x85, 0xc7, 0x17, 0x46, 0x65, 0x1a, + 0x25, 0x37, 0x26, 0x16, 0x1e, 0x1f, 0xcc, 0x8c, 0x0a, 0x99, 0x88, 0x5c, 0x05, 0xa8, 0xea, 0x8e, + 0xde, 0xa2, 0x1e, 0xdb, 0x89, 0xf2, 0xb3, 0x1c, 0xb4, 0x42, 0xda, 0x01, 0x54, 0x93, 0x28, 0xd4, + 0xef, 0xcd, 0xf5, 0x0a, 0x95, 0x73, 0x22, 0xfb, 0xbd, 0xc8, 0xfe, 0x3a, 0x8c, 0x04, 0x92, 0xad, + 0x4c, 0xa3, 0xbd, 0x34, 0x16, 0x24, 0x84, 0xe6, 0x60, 0x2c, 0x23, 0x11, 0x91, 0x2b, 0xbc, 0xad, + 0x35, 0xf3, 0x3d, 0x9e, 0x32, 0x73, 0x4c, 0x24, 0x43, 0xd4, 0x3d, 0xbd, 0xee, 0x9a, 0xef, 0x51, + 0x2d, 0x40, 0xab, 0xff, 0x22, 0x9b, 0x1a, 0x6f, 0xe8, 0xa4, 0x8f, 0xf6, 0xd0, 0x47, 0x29, 0x42, + 0xe4, 0x91, 0x92, 0x4e, 0x84, 0xb8, 0x07, 0x21, 0xfe, 0xbb, 0x6c, 0x6a, 0x5c, 0xa9, 0x13, 0x21, + 0xee, 0x65, 0xb6, 0x78, 0x1e, 0x86, 0x35, 0x7b, 0xc3, 0x2d, 0xe3, 0x9e, 0x88, 0xcf, 0x15, 0x38, + 0x51, 0x3b, 0xf6, 0x86, 0x5b, 0xc7, 0xdd, 0x8e, 0x16, 0x12, 0xa8, 0x7f, 0x95, 0xed, 0x11, 0x79, + 0xeb, 0x44, 0xf0, 0x5f, 0xca, 0x25, 0xf2, 0xc7, 0xb2, 0x91, 0xc8, 0x5e, 0x8f, 0xae, 0xb0, 0xaf, + 0x01, 0xd4, 0x1a, 0x6b, 0xb4, 0xa5, 0x4b, 0x69, 0xc7, 0xf1, 0xc8, 0xc2, 0x45, 0x28, 0xdf, 0x06, + 0x4b, 0x24, 0xea, 0x4f, 0x64, 0x63, 0xa1, 0xcd, 0x4e, 0x64, 0xb7, 0x6b, 0xd9, 0x05, 0x5a, 0x27, + 0xa2, 0xb5, 0x9d, 0x48, 0x6e, 0xb7, 0x92, 0xfb, 0xfa, 0x6c, 0x2c, 0xb0, 0xdd, 0x23, 0x2b, 0x3b, + 0x36, 0x00, 0x93, 0x01, 0xf7, 0x1e, 0x59, 0x4d, 0x7a, 0x1e, 0x86, 0x85, 0x1c, 0x82, 0xa5, 0x82, + 0xcf, 0xfb, 0x1c, 0x88, 0x07, 0xb4, 0x01, 0x81, 0xfa, 0xb7, 0xb3, 0x10, 0x0d, 0x38, 0xf8, 
0x88, + 0xea, 0xd0, 0x8f, 0x65, 0xa3, 0xa1, 0x16, 0x1f, 0x5d, 0xfd, 0xb9, 0x0a, 0x50, 0xeb, 0xac, 0x34, + 0xc4, 0x83, 0x86, 0x7e, 0xe9, 0x84, 0x3f, 0x80, 0x6a, 0x12, 0x85, 0xfa, 0x1f, 0xb3, 0xa9, 0xf1, + 0x1f, 0x1f, 0x5d, 0x01, 0xbe, 0x84, 0xa7, 0xe2, 0x0d, 0x2b, 0x9c, 0xc8, 0xf1, 0x10, 0x92, 0x8d, + 0xbf, 0xf8, 0x65, 0x5e, 0x40, 0x48, 0x3e, 0x94, 0x62, 0xae, 0xe1, 0x15, 0x59, 0x68, 0xae, 0xc9, + 0x87, 0xf9, 0x92, 0xe1, 0xf6, 0x2b, 0xd9, 0x9d, 0xc2, 0x65, 0x3e, 0xca, 0xab, 0xea, 0x60, 0x55, + 0xdf, 0xc4, 0xb4, 0x0e, 0xac, 0x27, 0x46, 0xa7, 0xce, 0x6c, 0x6f, 0x15, 0x27, 0xdb, 0x1c, 0x24, + 0x5f, 0xaa, 0x0a, 0x2a, 0xf5, 0xdf, 0xf4, 0xa7, 0xc7, 0x6a, 0x7c, 0x74, 0x45, 0x78, 0x11, 0xf2, + 0x55, 0xdd, 0x5b, 0x13, 0x9a, 0x8c, 0xb7, 0x81, 0x6d, 0xdd, 0x5b, 0xd3, 0x10, 0x4a, 0xae, 0xc0, + 0x90, 0xa6, 0x6f, 0xf0, 0x33, 0x4f, 0x7e, 0x73, 0x86, 0x07, 0x3b, 0x8e, 0xbe, 0x51, 0xe7, 0xe7, + 0x9e, 0x01, 0x9a, 0xa8, 0x30, 0xb0, 0x40, 0xbd, 0x35, 0xdb, 0x10, 0x27, 0xdf, 0x98, 0xe2, 0xbf, + 0x85, 0x10, 0x4d, 0x60, 0x58, 0x65, 0x53, 0xb6, 0xb1, 0x89, 0x37, 0x5f, 0xa3, 0xbc, 0xb2, 0x15, + 0xdb, 0xd8, 0xd4, 0x10, 0x4a, 0xbe, 0x21, 0x03, 0x83, 0x73, 0x54, 0x37, 0xd8, 0x08, 0x19, 0xee, + 0xe5, 0x4e, 0xf4, 0xf6, 0xe1, 0xb8, 0x13, 0x4d, 0xae, 0xf1, 0xca, 0x64, 0x45, 0x11, 0xf5, 0x93, + 0x1b, 0x30, 0x54, 0xd6, 0x3d, 0xba, 0x6a, 0x3b, 0x9b, 0xe8, 0x20, 0x35, 0x1e, 0xbe, 0xf7, 0x8f, + 0xe8, 0x8f, 0x4f, 0xc4, 0x6f, 0xc6, 0x1a, 0xe2, 0x97, 0x16, 0x14, 0x66, 0x62, 0xe1, 0x37, 0x73, + 0xe8, 0x21, 0x25, 0xc4, 0xc2, 0xaf, 0xf0, 0x34, 0x81, 0x09, 0x8f, 0x95, 0x47, 0xd3, 0x8f, 0x95, + 0xd1, 0x7a, 0x44, 0x27, 0xca, 0xb2, 0x6d, 0x50, 0xf4, 0x44, 0x1a, 0x13, 0xd6, 0x23, 0x42, 0xeb, + 0x0d, 0xdb, 0x60, 0xd6, 0x63, 0x40, 0xa2, 0xfe, 0x49, 0x3f, 0xa4, 0x46, 0x76, 0x3b, 0x51, 0xf2, + 0x13, 0x25, 0x0f, 0x95, 0x7c, 0x3a, 0xa1, 0xe4, 0x17, 0x92, 0xb1, 0x02, 0x1f, 0x52, 0x0d, 0xff, + 0xb6, 0x7c, 0x22, 0xd2, 0xe8, 0xa3, 0xbd, 0xbb, 0x0c, 0xa5, 0xd7, 0xbf, 0xa3, 0xf4, 0x82, 0x01, + 0x31, 0xb0, 0xe3, 0x80, 0x18, 
0xdc, 0xed, 0x80, 0x18, 0xea, 0x3a, 0x20, 0x42, 0x05, 0x19, 0xee, + 0xaa, 0x20, 0x15, 0x31, 0x68, 0xa0, 0x77, 0xc6, 0xd3, 0x8b, 0xdb, 0x5b, 0xc5, 0x71, 0x36, 0x9a, + 0x52, 0x53, 0x9d, 0x22, 0x0b, 0xf5, 0x57, 0xf3, 0x3d, 0xc2, 0x03, 0x1f, 0x89, 0x8e, 0xbc, 0x04, + 0xb9, 0x52, 0xbb, 0x2d, 0xf4, 0xe3, 0x94, 0x14, 0x99, 0xb8, 0x4b, 0x29, 0x46, 0x4d, 0x3e, 0x0c, + 0xb9, 0xd2, 0xdd, 0x5a, 0x3c, 0xc9, 0x69, 0xe9, 0x6e, 0x4d, 0x7c, 0x49, 0xd7, 0xb2, 0x77, 0x6b, + 0xe4, 0x8d, 0x30, 0xdb, 0xc8, 0x5a, 0xc7, 0x5a, 0x17, 0x1b, 0x45, 0xe1, 0x47, 0xed, 0x7b, 0xf2, + 0x34, 0x18, 0x8a, 0x6d, 0x17, 0x63, 0xb4, 0x31, 0x6d, 0x1a, 0xd8, 0xbd, 0x36, 0x0d, 0xee, 0xa8, + 0x4d, 0x43, 0xbb, 0xd5, 0xa6, 0xe1, 0x5d, 0x68, 0x13, 0xec, 0xa8, 0x4d, 0x23, 0x07, 0xd7, 0xa6, + 0x36, 0x5c, 0x48, 0x86, 0x74, 0x0f, 0x34, 0x42, 0x03, 0x92, 0xc4, 0x0a, 0xc7, 0x12, 0xbc, 0xfa, + 0xef, 0x70, 0x6c, 0x7d, 0x03, 0xd1, 0x75, 0x97, 0xe1, 0x65, 0xc7, 0xc3, 0x64, 0x69, 0xf5, 0x87, + 0xb3, 0xdd, 0x23, 0xd1, 0x1f, 0xcf, 0x29, 0xee, 0x2b, 0x53, 0xa5, 0x94, 0x8f, 0xbd, 0x81, 0xef, + 0x2a, 0xe5, 0x18, 0xdb, 0x34, 0x99, 0xfd, 0x50, 0xb6, 0x5b, 0x78, 0xfc, 0x03, 0x49, 0xec, 0x7d, + 0x49, 0x67, 0x38, 0x7c, 0x80, 0xe1, 0x46, 0xbd, 0xe0, 0x66, 0x61, 0x54, 0x16, 0xa2, 0x90, 0xd2, + 0x6e, 0x04, 0x1c, 0x29, 0x47, 0xde, 0x08, 0x72, 0xd1, 0x4a, 0xfe, 0x31, 0xe8, 0xe9, 0xe6, 0x8f, + 0xd9, 0x98, 0x7b, 0x8c, 0x4c, 0x4e, 0x9e, 0x87, 0x81, 0x59, 0x4c, 0xee, 0x26, 0x0f, 0x76, 0x9e, + 0xee, 0x4d, 0xf6, 0x5a, 0xe1, 0x34, 0xea, 0xcf, 0x65, 0xe0, 0xd4, 0xad, 0xce, 0x0a, 0x15, 0x8e, + 0x76, 0x41, 0x1b, 0xde, 0x05, 0x60, 0x60, 0xe1, 0x30, 0x93, 0x41, 0x87, 0x99, 0xf7, 0xcb, 0x61, + 0xf4, 0x63, 0x05, 0xae, 0x86, 0xd4, 0xdc, 0x59, 0xe6, 0x71, 0xdf, 0x23, 0x78, 0xbd, 0xb3, 0x42, + 0xeb, 0x09, 0xaf, 0x19, 0x89, 0xfb, 0x85, 0x8f, 0xf0, 0xb7, 0x16, 0xfb, 0x75, 0x50, 0xf9, 0xc1, + 0x6c, 0xd7, 0xcc, 0x05, 0x47, 0x32, 0x4e, 0xa6, 0x60, 0x48, 0xdb, 0x67, 0x4a, 0x75, 0x1f, 0x4f, + 0x3e, 0x91, 0xda, 0x2b, 0x62, 0xac, 0x3c, 0xd6, 0xa3, 0x1f, 0x62, 
0x1c, 0xd3, 0xb8, 0xa4, 0x0b, + 0xec, 0x08, 0x27, 0x96, 0x87, 0x5e, 0x60, 0xbf, 0x95, 0xe9, 0x9a, 0x61, 0xe2, 0xb8, 0x0a, 0x4c, + 0xfd, 0xdf, 0x72, 0x7e, 0x62, 0x8b, 0x03, 0x7d, 0xc2, 0xf3, 0x30, 0x2c, 0x62, 0x9d, 0x44, 0xfd, + 0x84, 0xc5, 0xb1, 0x21, 0x1e, 0x43, 0x07, 0x04, 0xcc, 0xa4, 0x90, 0x5c, 0xcc, 0x25, 0x3f, 0x61, + 0xc9, 0xbd, 0x5c, 0x93, 0x48, 0x98, 0xd1, 0x30, 0xf3, 0xc0, 0xf4, 0xd0, 0x02, 0x61, 0x7d, 0x99, + 0xe3, 0x46, 0x03, 0x7d, 0x60, 0x7a, 0xdc, 0xfe, 0x08, 0xd0, 0xcc, 0x20, 0xe0, 0xb6, 0x88, 0x98, + 0xf7, 0xd0, 0x20, 0xe0, 0xa6, 0x8a, 0x26, 0x30, 0xac, 0xb5, 0xc2, 0xf9, 0x56, 0xb8, 0xb4, 0x88, + 0xd6, 0x0a, 0x77, 0x5d, 0x6c, 0x6d, 0x40, 0xc0, 0x38, 0x6a, 0x74, 0x35, 0x74, 0xe2, 0x43, 0x8e, + 0x0e, 0x42, 0x34, 0x81, 0x21, 0xd7, 0x61, 0xbc, 0xe6, 0xe9, 0x96, 0xa1, 0x3b, 0xc6, 0x52, 0xc7, + 0x6b, 0x77, 0x3c, 0xd9, 0x00, 0x76, 0x3d, 0xc3, 0xee, 0x78, 0x5a, 0x8c, 0x82, 0xbc, 0x00, 0x63, + 0x3e, 0x64, 0xc6, 0x71, 0x6c, 0x47, 0xb6, 0x72, 0x5c, 0xcf, 0xa0, 0x8e, 0xa3, 0x45, 0x09, 0xc8, + 0x87, 0x60, 0xac, 0x62, 0x05, 0xa1, 0x3b, 0xb4, 0x79, 0x61, 0xf3, 0xe0, 0x7b, 0x3e, 0x33, 0x40, + 0xd4, 0x3b, 0x4e, 0x53, 0x8b, 0x12, 0xaa, 0xdb, 0xd9, 0x64, 0xfe, 0x8f, 0x47, 0x77, 0x83, 0x74, + 0x25, 0xea, 0xb8, 0x87, 0xde, 0xaa, 0x68, 0x7c, 0xca, 0x7e, 0xc3, 0xdc, 0x06, 0xbd, 0x0e, 0x43, + 0xb7, 0xe8, 0x26, 0xf7, 0x31, 0x1d, 0x08, 0xdd, 0x92, 0xd7, 0x05, 0x4c, 0x3e, 0xdd, 0xf5, 0xe9, + 0xd4, 0x9f, 0xc9, 0x26, 0x33, 0x9b, 0x3c, 0xba, 0xc2, 0x7e, 0x01, 0x06, 0x51, 0x94, 0x15, 0xff, + 0x7a, 0x01, 0x05, 0x88, 0xe2, 0x8e, 0x7a, 0x3b, 0xfb, 0x64, 0xea, 0x77, 0x0d, 0xc4, 0xd3, 0xdd, + 0x3c, 0xba, 0xd2, 0x7b, 0x1d, 0x46, 0xca, 0xb6, 0xe5, 0x62, 0x0c, 0x80, 0x86, 0xaf, 0xb0, 0xe8, + 0xf8, 0xdf, 0x08, 0xc1, 0xb2, 0x0d, 0x28, 0x51, 0xef, 0x47, 0x79, 0xc9, 0xab, 0x30, 0x8c, 0x22, + 0x47, 0x9b, 0x73, 0x30, 0x7c, 0xbc, 0xb3, 0xc2, 0x80, 0x71, 0x8b, 0x33, 0x24, 0x25, 0xb7, 0x61, + 0xa8, 0xbc, 0x66, 0x36, 0x0d, 0x87, 0x5a, 0xe8, 0x9b, 0x2c, 0x3d, 0xa2, 0x89, 0xf6, 0xe5, 0x55, + 0xfc, 
0x17, 0x69, 0x79, 0x73, 0x1a, 0xa2, 0x58, 0xe4, 0x29, 0x9f, 0x80, 0x5d, 0xf8, 0x96, 0x2c, + 0x40, 0x58, 0x80, 0x3c, 0x09, 0x59, 0xff, 0xbd, 0x38, 0x77, 0x89, 0x89, 0x68, 0x50, 0x16, 0x97, + 0x0a, 0x31, 0xb6, 0xb3, 0x3b, 0x8e, 0xed, 0xdb, 0x30, 0xc0, 0x4f, 0xd7, 0xd0, 0x6b, 0x5d, 0x0a, + 0x71, 0xd2, 0xb5, 0xc1, 0x57, 0x91, 0x9e, 0xdb, 0xd2, 0x68, 0x79, 0x46, 0x3c, 0xc0, 0x39, 0xb3, + 0x0b, 0x0d, 0xe8, 0xc7, 0xbf, 0xc8, 0x25, 0xc8, 0xa3, 0x14, 0x33, 0xb8, 0x67, 0xc6, 0x59, 0x3a, + 0x26, 0x3f, 0xc4, 0xb3, 0x6e, 0x2a, 0xdb, 0x96, 0x27, 0xde, 0x1f, 0x65, 0x2e, 0x8f, 0x0a, 0xb9, + 0x08, 0x58, 0x44, 0x2e, 0x02, 0xa6, 0xfe, 0x52, 0x36, 0x25, 0x11, 0xd3, 0xa3, 0x3b, 0x4c, 0x5e, + 0x03, 0xc0, 0xb8, 0x00, 0x4c, 0x9e, 0xfe, 0x73, 0x10, 0x1c, 0x25, 0xc8, 0x08, 0xd5, 0x36, 0xb2, + 0xed, 0x08, 0x89, 0xd5, 0x5f, 0xce, 0x24, 0xb2, 0xf7, 0x1c, 0x48, 0x8e, 0xb2, 0x55, 0x96, 0xdd, + 0xa7, 0x19, 0xeb, 0xf7, 0x45, 0x6e, 0x6f, 0x7d, 0x11, 0xfd, 0x96, 0x43, 0xb0, 0x4c, 0x8f, 0xf2, + 0x5b, 0xfe, 0x24, 0x9b, 0x96, 0xcb, 0xe8, 0x78, 0xaa, 0xf8, 0xcb, 0x81, 0x51, 0x9a, 0x8f, 0x65, + 0x8f, 0x43, 0x68, 0xac, 0x98, 0x6f, 0xa6, 0x7e, 0x0a, 0x26, 0x62, 0x19, 0x7e, 0x70, 0xfe, 0x97, + 0xc2, 0x22, 0xa5, 0xe7, 0x01, 0xea, 0x1e, 0x51, 0x22, 0x42, 0xa6, 0xfe, 0xa7, 0x4c, 0xef, 0xfc, + 0x4e, 0x47, 0xae, 0x3a, 0x29, 0x02, 0xc8, 0xfd, 0xf5, 0x08, 0xe0, 0x10, 0xb6, 0xc1, 0xc7, 0x5b, + 0x00, 0x0f, 0xc9, 0xe4, 0xf1, 0xa5, 0x16, 0xc0, 0x77, 0x65, 0x76, 0x4c, 0xcf, 0x75, 0xd4, 0x32, + 0x50, 0xff, 0x55, 0x26, 0x35, 0x8d, 0xd6, 0x81, 0xda, 0xf5, 0x06, 0x0c, 0x70, 0x17, 0x1e, 0xd1, + 0x2a, 0x29, 0xf1, 0x38, 0x83, 0x76, 0x29, 0x2f, 0xca, 0x90, 0x79, 0x18, 0xe4, 0x6d, 0x30, 0x44, + 0x6f, 0x3c, 0xd3, 0x23, 0x97, 0x97, 0xd1, 0x6d, 0x72, 0x14, 0x68, 0xf5, 0xe7, 0x33, 0x89, 0xac, + 0x5e, 0x47, 0xf8, 0x6d, 0xe1, 0x54, 0x9d, 0xdb, 0xfd, 0x54, 0xad, 0xfe, 0x69, 0x36, 0x3d, 0xa9, + 0xd8, 0x11, 0x7e, 0xc8, 0x61, 0x1c, 0xa7, 0xed, 0x6f, 0xdd, 0x5a, 0x86, 0xf1, 0xa8, 0x2c, 0xc4, + 0xb2, 0xf5, 0x44, 0x7a, 0x6a, 0xb5, 0x2e, 
0xad, 0x88, 0xf1, 0x50, 0x3f, 0x9f, 0x4d, 0xe6, 0x43, + 0x3b, 0xf2, 0xf9, 0x69, 0x5f, 0xda, 0x42, 0x2a, 0x30, 0x11, 0x7e, 0xc9, 0xb2, 0xe9, 0x35, 0xfd, + 0xd3, 0x7d, 0x8c, 0xd8, 0x20, 0x22, 0x8c, 0x34, 0x4d, 0xd7, 0xab, 0x7b, 0x0c, 0x19, 0x89, 0x75, + 0x11, 0x2d, 0x17, 0x93, 0xca, 0x43, 0xb2, 0x6c, 0x3d, 0x64, 0x52, 0x79, 0x48, 0xd6, 0xb2, 0x23, + 0x97, 0xca, 0xf7, 0x65, 0xbb, 0xe5, 0xcb, 0x3b, 0x72, 0xd9, 0x7c, 0x5c, 0xee, 0x2f, 0xde, 0x32, + 0x21, 0xa5, 0x27, 0xbb, 0x25, 0xa8, 0xeb, 0xc2, 0x33, 0xc1, 0x67, 0x7f, 0x93, 0x58, 0xaa, 0xb0, + 0x1e, 0x92, 0xe1, 0x75, 0x3c, 0x84, 0xf5, 0x90, 0x8c, 0xba, 0x87, 0x4f, 0x58, 0x3f, 0x99, 0xdd, + 0x6d, 0x92, 0xc6, 0x13, 0xe1, 0x25, 0x84, 0xf7, 0xd9, 0x6c, 0x32, 0x79, 0xe8, 0x91, 0x8b, 0x69, + 0x16, 0x06, 0x44, 0x1a, 0xd3, 0xae, 0xc2, 0xe1, 0xf8, 0x6e, 0x26, 0x9b, 0xf8, 0x8e, 0x97, 0x41, + 0xdc, 0x54, 0xed, 0x4e, 0x24, 0x9c, 0x56, 0xfd, 0xab, 0x4c, 0x2c, 0xd3, 0xe6, 0x91, 0x9c, 0x91, + 0xec, 0x6f, 0x75, 0xfb, 0x88, 0x7f, 0x5a, 0x9b, 0x8f, 0xc5, 0x9a, 0x0f, 0xbe, 0x67, 0x9a, 0x7a, + 0xba, 0xd9, 0x8c, 0x97, 0x17, 0x01, 0x16, 0x7e, 0x26, 0x0b, 0x93, 0x09, 0x52, 0x72, 0x29, 0x12, + 0xd2, 0x08, 0xcf, 0x5d, 0x63, 0x9e, 0xf8, 0x3c, 0xb8, 0xd1, 0x1e, 0x8e, 0x8a, 0x2f, 0x41, 0x7e, + 0x5a, 0xdf, 0xe4, 0xdf, 0xd6, 0xcf, 0x59, 0x1a, 0xfa, 0xa6, 0x7c, 0xa4, 0x88, 0x78, 0xb2, 0x02, + 0x67, 0xf8, 0x85, 0x8f, 0x69, 0x5b, 0xcb, 0x66, 0x8b, 0x56, 0xac, 0x05, 0xb3, 0xd9, 0x34, 0x5d, + 0x71, 0x6b, 0xf9, 0xfc, 0xf6, 0x56, 0xf1, 0xb2, 0x67, 0x7b, 0x7a, 0xb3, 0x4e, 0x7d, 0xb2, 0xba, + 0x67, 0xb6, 0x68, 0xdd, 0xb4, 0xea, 0x2d, 0xa4, 0x94, 0x58, 0xa6, 0xb3, 0x22, 0x15, 0x9e, 0xd4, + 0xae, 0xd6, 0xd0, 0x2d, 0x8b, 0x1a, 0x15, 0x6b, 0x6a, 0xd3, 0xa3, 0xfc, 0xb6, 0x33, 0xc7, 0xcf, + 0x3c, 0xf9, 0x43, 0x7b, 0x8e, 0x66, 0x8c, 0x57, 0x18, 0x81, 0x96, 0x52, 0x48, 0xfd, 0xa9, 0x7c, + 0x4a, 0x92, 0xd5, 0x63, 0xa4, 0x3e, 0x7e, 0x4f, 0xe7, 0x77, 0xe8, 0xe9, 0x6b, 0x30, 0x28, 0xb2, + 0x06, 0x89, 0x1b, 0x14, 0x7c, 0x19, 0x70, 0x9f, 0x83, 0xe4, 0x2b, 0x28, 0x41, 
0x45, 0x9a, 0x70, + 0x61, 0x99, 0x75, 0x53, 0x7a, 0x67, 0x0e, 0xec, 0xa3, 0x33, 0x7b, 0xf0, 0x23, 0xef, 0xc0, 0x39, + 0xc4, 0xa6, 0x74, 0xeb, 0x20, 0x56, 0x85, 0xb6, 0x1e, 0xaf, 0x2a, 0xbd, 0x73, 0xbb, 0x95, 0x27, + 0x1f, 0x87, 0xd1, 0x60, 0x80, 0x98, 0xd4, 0x15, 0x57, 0x33, 0x3d, 0xc6, 0x19, 0x0f, 0x93, 0xc8, + 0xc0, 0xe8, 0x8f, 0x17, 0x0d, 0xb5, 0x17, 0xe1, 0xa5, 0xfe, 0x51, 0xa6, 0x57, 0xb2, 0xd7, 0x23, + 0x9f, 0x95, 0x3f, 0x02, 0x83, 0x06, 0xff, 0x28, 0xa1, 0x53, 0xbd, 0xd3, 0xc1, 0x72, 0x52, 0xcd, + 0x2f, 0xa3, 0xfe, 0x71, 0xa6, 0x67, 0x8e, 0xd9, 0xe3, 0xfe, 0x79, 0x9f, 0xcd, 0x75, 0xf9, 0x3c, + 0x31, 0x89, 0x5e, 0x81, 0x82, 0x19, 0x26, 0xc1, 0xab, 0x87, 0xb1, 0xbc, 0xb4, 0x09, 0x09, 0x8e, + 0xa3, 0xeb, 0x65, 0x08, 0x3c, 0xd2, 0x1c, 0xdf, 0xdd, 0xce, 0xad, 0x77, 0x1c, 0x93, 0x8f, 0x4b, + 0xed, 0xb4, 0x1b, 0xf3, 0xc5, 0x73, 0x6f, 0x3b, 0x26, 0xab, 0x40, 0xf7, 0xd6, 0xa8, 0xa5, 0xd7, + 0x37, 0x6c, 0x67, 0x1d, 0x63, 0xf1, 0xf2, 0xc1, 0xa9, 0x4d, 0x70, 0xf8, 0x5d, 0x1f, 0x4c, 0x9e, + 0x86, 0xb1, 0xd5, 0x66, 0x87, 0x06, 0xd1, 0x4f, 0xf9, 0x65, 0xa6, 0x36, 0xca, 0x80, 0xc1, 0x15, + 0xd0, 0xe3, 0x00, 0x48, 0x84, 0x19, 0x67, 0xf8, 0xcd, 0xa5, 0x36, 0xcc, 0x20, 0xcb, 0xa2, 0xbb, + 0x2e, 0x70, 0xad, 0xe6, 0x42, 0xaa, 0x37, 0x6d, 0x6b, 0xb5, 0xee, 0x51, 0xa7, 0x85, 0x0d, 0x45, + 0x6f, 0x0d, 0xed, 0x2c, 0x52, 0xe0, 0xdd, 0x90, 0x3b, 0x6f, 0x5b, 0xab, 0xcb, 0xd4, 0x69, 0xb1, + 0xa6, 0x3e, 0x0f, 0x44, 0x34, 0xd5, 0xc1, 0x53, 0x1d, 0xfe, 0x71, 0xe8, 0xae, 0xa1, 0x89, 0x8f, + 0xe0, 0xc7, 0x3d, 0xf8, 0x61, 0x45, 0x18, 0xe1, 0x21, 0x20, 0xb9, 0xd0, 0xd0, 0x47, 0x43, 0x03, + 0x0e, 0x42, 0x79, 0x9d, 0x05, 0xe1, 0x3e, 0xc2, 0x5d, 0xe4, 0x35, 0xf1, 0x4b, 0xfd, 0x4c, 0x2e, + 0x2d, 0x2d, 0xee, 0x81, 0x14, 0x2d, 0x9c, 0x56, 0xb3, 0x7b, 0x9a, 0x56, 0x27, 0xac, 0x4e, 0xab, + 0xae, 0xb7, 0xdb, 0xf5, 0x7b, 0x66, 0x13, 0xdf, 0xa8, 0xe1, 0xc2, 0xa7, 0x8d, 0x59, 0x9d, 0x56, + 0xa9, 0xdd, 0x9e, 0xe5, 0x40, 0xf2, 0x1c, 0x4c, 0x32, 0x3a, 0xec, 0xa4, 0x80, 0x32, 0x8f, 0x94, + 0x8c, 0x01, 0xc6, 
0x50, 0xf6, 0x69, 0xcf, 0xc3, 0x90, 0xe0, 0xc9, 0xd7, 0xaa, 0x7e, 0x6d, 0x90, + 0x33, 0x73, 0x59, 0xcf, 0x05, 0x6c, 0xf8, 0xe4, 0xda, 0xaf, 0x0d, 0xfb, 0xe5, 0x31, 0x52, 0xb8, + 0xd5, 0x69, 0xf1, 0xf0, 0x62, 0x83, 0x88, 0x0c, 0x7e, 0x93, 0x4b, 0x30, 0xce, 0xb8, 0x04, 0x02, + 0xe3, 0xc1, 0x95, 0xfb, 0xb5, 0x18, 0x94, 0x5c, 0x87, 0xd3, 0x11, 0x08, 0xb7, 0x41, 0xf9, 0x9b, + 0x8b, 0x7e, 0x2d, 0x15, 0xa7, 0xfe, 0x44, 0x2e, 0x9a, 0xac, 0xf7, 0x08, 0x3a, 0xe2, 0x1c, 0x0c, + 0xda, 0xce, 0x6a, 0xbd, 0xe3, 0x34, 0xc5, 0xd8, 0x1b, 0xb0, 0x9d, 0xd5, 0xdb, 0x4e, 0x93, 0x9c, + 0x81, 0x01, 0xd6, 0x3b, 0xa6, 0x21, 0x86, 0x58, 0xbf, 0xde, 0x6e, 0x57, 0x0c, 0x52, 0xe2, 0x1d, + 0x82, 0x81, 0x79, 0xeb, 0x0d, 0xdc, 0xda, 0x73, 0xaf, 0x8b, 0x7e, 0xbe, 0xe2, 0x25, 0x90, 0xd8, + 0x4f, 0x18, 0xae, 0x97, 0x1f, 0x04, 0xc4, 0x58, 0x18, 0xb8, 0x2d, 0x31, 0x78, 0x9f, 0xc4, 0x59, + 0x08, 0x64, 0xc8, 0x82, 0x6f, 0x62, 0x0c, 0x32, 0x0d, 0x24, 0xa4, 0x6a, 0xd9, 0x86, 0x79, 0xcf, + 0xa4, 0xfc, 0x89, 0x4c, 0x3f, 0xbf, 0xd9, 0x4e, 0x62, 0xb5, 0x82, 0xcf, 0x64, 0x41, 0x40, 0xc8, + 0xeb, 0x5c, 0x09, 0x39, 0x1d, 0xae, 0x7d, 0xbc, 0x6f, 0xb9, 0x9d, 0x16, 0x43, 0xa1, 0x66, 0x62, + 0x79, 0x5c, 0x08, 0xd5, 0x45, 0x18, 0x43, 0x3f, 0xd2, 0x1a, 0x6d, 0x62, 0x2e, 0x74, 0x72, 0x1e, + 0x72, 0xb7, 0x7c, 0x4f, 0x52, 0xee, 0xb5, 0xbb, 0x4e, 0x37, 0x35, 0x06, 0x23, 0x6a, 0xe0, 0x06, + 0x90, 0xc5, 0x7b, 0x63, 0x74, 0xc0, 0xe2, 0xb7, 0xfb, 0xfe, 0x9d, 0xbe, 0xfa, 0x3b, 0x83, 0xc9, + 0x0c, 0xd0, 0x47, 0x62, 0x27, 0xcd, 0x01, 0x88, 0x04, 0xef, 0xe1, 0x6d, 0xe4, 0x05, 0x29, 0x9b, + 0x9b, 0xc0, 0x74, 0xe1, 0x21, 0x95, 0x25, 0x57, 0x60, 0x88, 0x7f, 0x51, 0x65, 0x5a, 0xd8, 0x4f, + 0xe8, 0x53, 0xe7, 0xb6, 0xcd, 0x7b, 0xf7, 0xd0, 0x01, 0x2f, 0x40, 0x93, 0x4b, 0x30, 0x38, 0xbd, + 0x58, 0xab, 0x95, 0x16, 0xfd, 0xab, 0x75, 0x7c, 0xfc, 0x63, 0x58, 0x6e, 0xdd, 0xd5, 0x2d, 0x57, + 0xf3, 0x91, 0xe4, 0x69, 0x18, 0xa8, 0x54, 0x91, 0x8c, 0x3f, 0x69, 0x1d, 0xd9, 0xde, 0x2a, 0x0e, + 0x9a, 0x6d, 0x4e, 0x25, 0x50, 0x58, 0xef, 0x9d, 0xca, 
0xb4, 0xe4, 0x5f, 0xc2, 0xeb, 0xbd, 0x6f, + 0x1a, 0x78, 0x4f, 0xaf, 0x05, 0x68, 0xf2, 0x0a, 0x8c, 0xd6, 0xa8, 0x63, 0xea, 0xcd, 0xc5, 0x0e, + 0x6e, 0x3d, 0xa5, 0x10, 0x99, 0x2e, 0xc2, 0xeb, 0x16, 0x22, 0xb4, 0x08, 0x19, 0xb9, 0x08, 0xf9, + 0x39, 0xd3, 0xf2, 0xdf, 0x97, 0xe0, 0x03, 0x84, 0x35, 0xd3, 0xf2, 0x34, 0x84, 0x92, 0xa7, 0x21, + 0x77, 0x73, 0xb9, 0x22, 0x5c, 0xe7, 0x90, 0xd7, 0xbb, 0x5e, 0x24, 0xdc, 0xe6, 0xcd, 0xe5, 0x0a, + 0x79, 0x05, 0x86, 0xd9, 0xa2, 0x48, 0xad, 0x06, 0x75, 0x95, 0x11, 0xfc, 0x18, 0x1e, 0xe3, 0xd1, + 0x07, 0xca, 0x4e, 0x30, 0x01, 0x25, 0xb9, 0x05, 0x85, 0x78, 0xa6, 0x23, 0xf1, 0xc6, 0x09, 0x2d, + 0xb8, 0x0d, 0x81, 0x4b, 0x8b, 0x32, 0x9a, 0x28, 0x48, 0x0c, 0x50, 0xe2, 0x30, 0xb6, 0x4f, 0x44, + 0x2b, 0x96, 0x87, 0x1f, 0xbf, 0xbc, 0xbd, 0x55, 0x7c, 0x26, 0xc1, 0xb4, 0xee, 0x08, 0x2a, 0x89, + 0x7b, 0x57, 0x4e, 0xe4, 0x13, 0x00, 0x25, 0xcf, 0x73, 0xcc, 0x95, 0x0e, 0x33, 0x37, 0xc7, 0x7b, + 0x3f, 0x91, 0x50, 0xb7, 0xb7, 0x8a, 0xa7, 0xf5, 0x80, 0x3c, 0xf5, 0xa1, 0x84, 0xc4, 0x8e, 0xbc, + 0x09, 0xa3, 0x6c, 0xc5, 0xf3, 0x47, 0xa1, 0x32, 0x11, 0xc6, 0x78, 0x64, 0x6b, 0x63, 0xdd, 0x15, + 0x08, 0xd9, 0xc4, 0x94, 0x0b, 0x90, 0x4f, 0xc2, 0x78, 0x64, 0x1c, 0xf3, 0x40, 0xe4, 0x52, 0xc2, + 0x9d, 0x08, 0x96, 0x87, 0x43, 0x44, 0x27, 0xf3, 0x80, 0x75, 0x24, 0x1c, 0x62, 0x94, 0x97, 0xfa, + 0x7f, 0x65, 0xd3, 0x33, 0xaa, 0x1f, 0xc1, 0x4c, 0xbf, 0x4f, 0x2f, 0x89, 0xd8, 0x7c, 0x90, 0x3f, + 0xc0, 0x7c, 0x70, 0x0f, 0x26, 0x4a, 0x46, 0xcb, 0xb4, 0x4a, 0x3c, 0x09, 0xca, 0xc2, 0x6c, 0x09, + 0x57, 0x0e, 0xe9, 0xa9, 0x6b, 0x0c, 0x2d, 0xbe, 0x87, 0x47, 0x45, 0x67, 0xa8, 0xba, 0xce, 0x71, + 0xf5, 0xd6, 0x3d, 0xbd, 0xde, 0xe0, 0xc9, 0xc8, 0xb5, 0x38, 0x53, 0xf5, 0xf3, 0xd9, 0x1d, 0x92, + 0xc0, 0x3f, 0x8a, 0xd2, 0x57, 0x3f, 0x97, 0xed, 0x9d, 0x87, 0xff, 0x91, 0x14, 0xca, 0xbf, 0xcb, + 0xa6, 0x64, 0xc5, 0x3f, 0x90, 0x24, 0xae, 0xc0, 0x10, 0x67, 0x13, 0xb8, 0xa9, 0xe3, 0xe2, 0xc3, + 0x95, 0x15, 0x17, 0x3d, 0x1f, 0x4d, 0x16, 0xe1, 0x74, 0xe9, 0xde, 0x3d, 0xda, 0xf0, 0xc2, 
0xf8, + 0xf8, 0x8b, 0x61, 0x40, 0x63, 0x1e, 0x71, 0x5a, 0xe0, 0xc3, 0xf8, 0xfa, 0x18, 0xb8, 0x27, 0xb5, + 0x1c, 0x59, 0x86, 0xb3, 0x71, 0x78, 0x8d, 0xef, 0x00, 0xf3, 0x52, 0x10, 0xea, 0x04, 0x47, 0xfe, + 0x9f, 0xd6, 0xa5, 0x6c, 0x5a, 0x2b, 0x71, 0x65, 0xed, 0xef, 0xd5, 0x4a, 0x5c, 0x66, 0x53, 0xcb, + 0xa9, 0x3f, 0x93, 0x03, 0xa8, 0xb5, 0x75, 0xcb, 0xc2, 0xe8, 0x1e, 0x8f, 0xae, 0x43, 0xe1, 0xcb, + 0x91, 0x67, 0x04, 0xbb, 0x1d, 0x32, 0xaf, 0x88, 0x68, 0x3c, 0x46, 0xc7, 0xf1, 0x3d, 0x6e, 0x83, + 0x68, 0x20, 0x08, 0x94, 0xcd, 0x86, 0x80, 0x92, 0x54, 0x20, 0x5f, 0x72, 0x56, 0xf9, 0xee, 0x66, + 0xa7, 0x07, 0x8a, 0xba, 0xb3, 0x9a, 0xbe, 0xee, 0x22, 0x0b, 0xf5, 0x9b, 0xb3, 0x3d, 0xf2, 0xfd, + 0x3f, 0x92, 0x93, 0xc8, 0x77, 0x64, 0xbb, 0x65, 0xee, 0x3f, 0xae, 0xae, 0x91, 0x5f, 0x62, 0xe1, + 0x1c, 0x6f, 0xbf, 0xd1, 0x43, 0x14, 0xce, 0x6f, 0x66, 0x93, 0x09, 0xfd, 0x4f, 0x34, 0x47, 0x98, + 0x8b, 0xfb, 0x9a, 0x20, 0x53, 0x45, 0xfa, 0x08, 0xdb, 0xdc, 0xb2, 0x2a, 0xf4, 0xef, 0xd3, 0x3d, + 0x30, 0x4d, 0xa4, 0x27, 0x43, 0xf8, 0x40, 0x5a, 0xfa, 0x5b, 0x59, 0x38, 0x17, 0x8a, 0x54, 0xdc, + 0x09, 0x9d, 0x8c, 0xfc, 0xc3, 0x97, 0xe9, 0xc9, 0xd0, 0x3f, 0xd0, 0xd0, 0x4f, 0x95, 0xe9, 0xc9, + 0xd8, 0x3f, 0x90, 0x9e, 0x7e, 0x7f, 0x76, 0x87, 0x14, 0xdc, 0xc7, 0xfc, 0xd8, 0xf7, 0x2c, 0x0c, + 0x88, 0x8b, 0x16, 0x4c, 0xbc, 0xa9, 0x89, 0x5f, 0xfb, 0x94, 0xd6, 0x3f, 0xcd, 0xee, 0x98, 0x40, + 0xfc, 0x44, 0x5e, 0x92, 0xbc, 0x7e, 0x20, 0xbb, 0x53, 0xea, 0xf3, 0x13, 0x71, 0x49, 0xe2, 0xfa, + 0x8d, 0x2c, 0x9c, 0x66, 0x7f, 0x9a, 0x8d, 0x39, 0xdb, 0xf5, 0x58, 0x53, 0x0f, 0x61, 0x15, 0xde, + 0xdf, 0x8a, 0x71, 0x18, 0xee, 0xf4, 0x7e, 0xf7, 0xe4, 0x0f, 0xd4, 0x3d, 0xfd, 0x07, 0xd8, 0xd2, + 0x24, 0x05, 0x7a, 0x64, 0x4b, 0xf0, 0x97, 0xab, 0x40, 0x0f, 0x61, 0xfd, 0x7d, 0x94, 0x05, 0xfa, + 0x77, 0x73, 0x50, 0x28, 0x3b, 0xf6, 0x86, 0x75, 0x93, 0x6e, 0xd0, 0xe6, 0x91, 0x0d, 0xf7, 0x87, + 0xc2, 0x40, 0x74, 0xf6, 0x69, 0x20, 0xfa, 0xe5, 0xc8, 0x9b, 0x30, 0x11, 0xca, 0x52, 0x0e, 0x69, + 0x89, 0x57, 0xf9, 0x0d, 0x86, 
0xaa, 0xbf, 0xcb, 0x70, 0x22, 0xf6, 0x5a, 0x9c, 0x5a, 0xfd, 0xab, + 0x48, 0x6f, 0x3c, 0xda, 0xe6, 0xfa, 0x81, 0x7b, 0xe3, 0x36, 0x9c, 0x2d, 0x77, 0x1c, 0x87, 0x5a, + 0x5e, 0x7a, 0xa7, 0xe0, 0x4d, 0x5a, 0x83, 0x53, 0xd4, 0x93, 0x9d, 0xd3, 0xa5, 0x30, 0x63, 0x2b, + 0x9e, 0xd2, 0xc5, 0xd9, 0x0e, 0x86, 0x6c, 0x3b, 0x9c, 0x22, 0x8d, 0x6d, 0x7a, 0x61, 0xf5, 0x57, + 0xb2, 0x72, 0xd7, 0x1f, 0xd9, 0xac, 0xf6, 0x65, 0xd1, 0xf5, 0xea, 0x67, 0x72, 0x30, 0xce, 0x9a, + 0xb5, 0xac, 0xbb, 0xeb, 0x27, 0x26, 0xcc, 0x41, 0x16, 0x08, 0xf6, 0x15, 0xbe, 0x24, 0x71, 0xdc, + 0x48, 0x5f, 0xe1, 0xc3, 0xbb, 0x7d, 0x85, 0x8f, 0x57, 0x7f, 0x30, 0x1f, 0x76, 0xc7, 0x89, 0x01, + 0x74, 0xd4, 0xdd, 0x41, 0x96, 0xe0, 0xb4, 0x98, 0xdb, 0x7c, 0x10, 0xe6, 0x36, 0x12, 0xf3, 0x17, + 0x4f, 0x91, 0x2a, 0xa6, 0xc5, 0x8e, 0x4b, 0x9d, 0xba, 0xa7, 0xbb, 0xeb, 0x75, 0x4c, 0x86, 0xa4, + 0xa5, 0x16, 0x64, 0x0c, 0xc5, 0xac, 0x16, 0x65, 0x38, 0x14, 0x32, 0xf4, 0x27, 0xc4, 0x04, 0xc3, + 0xb4, 0x82, 0xea, 0x8f, 0x65, 0xa0, 0x10, 0xff, 0x1c, 0x72, 0x15, 0x86, 0xd8, 0xef, 0x20, 0xc6, + 0x8b, 0xf0, 0x40, 0x0f, 0x39, 0x72, 0x77, 0x26, 0x9f, 0x86, 0xbc, 0x0a, 0xc3, 0xe8, 0x39, 0x86, + 0x05, 0xb2, 0x61, 0x68, 0x9d, 0xb0, 0x00, 0xa6, 0x7b, 0xe7, 0xc5, 0x42, 0x52, 0xf2, 0x3a, 0x8c, + 0x54, 0x42, 0x97, 0x5b, 0x71, 0x01, 0x8d, 0x9e, 0xfe, 0x52, 0xc9, 0x90, 0x40, 0x93, 0xa9, 0xd5, + 0x2f, 0x66, 0x43, 0x55, 0x3f, 0x31, 0x4d, 0x0f, 0x64, 0x9a, 0xfe, 0x70, 0x0e, 0xc6, 0xca, 0xb6, + 0xe5, 0xe9, 0x0d, 0xef, 0xe4, 0x30, 0xf8, 0x20, 0x87, 0x6c, 0xa4, 0x08, 0xfd, 0x33, 0x2d, 0xdd, + 0x6c, 0x0a, 0xc3, 0x07, 0x03, 0x80, 0x53, 0x06, 0xd0, 0x38, 0x9c, 0xdc, 0xc0, 0xb0, 0x57, 0x4c, + 0xd2, 0x81, 0x9f, 0xe0, 0x78, 0x18, 0x2b, 0x59, 0x42, 0x89, 0x6c, 0xe1, 0x1c, 0xc0, 0x47, 0x8e, + 0x5c, 0x52, 0xee, 0xb3, 0x93, 0x83, 0xd1, 0x63, 0xd2, 0x67, 0xdf, 0x96, 0x83, 0xb3, 0x71, 0x7f, + 0xc5, 0x93, 0x01, 0x27, 0x3a, 0xef, 0x6f, 0xc0, 0xe9, 0xb8, 0x6c, 0xa6, 0x99, 0x34, 0xfa, 0x7b, + 0xfb, 0x8e, 0x5c, 0xdd, 0xde, 0x2a, 0x3e, 0x99, 0x74, 0x15, 0x65, 
0x95, 0xa5, 0x7a, 0x93, 0xa4, + 0x56, 0x92, 0xda, 0x33, 0x0f, 0xc9, 0xa3, 0xe8, 0x47, 0xbc, 0x67, 0xbe, 0x23, 0x9b, 0xec, 0x99, + 0x93, 0x09, 0x4f, 0x2c, 0xdc, 0xbf, 0x96, 0x85, 0x67, 0xe2, 0xc2, 0x79, 0xfb, 0x95, 0x17, 0x5e, + 0xd3, 0xa8, 0x1f, 0x23, 0xf5, 0x64, 0x7a, 0x11, 0x4a, 0x8c, 0xc1, 0x6e, 0x75, 0x37, 0x78, 0x28, + 0x29, 0x82, 0xdd, 0x32, 0x88, 0x26, 0x30, 0xbb, 0x10, 0xe7, 0xc9, 0x9c, 0xb0, 0x07, 0x71, 0x7e, + 0xef, 0x8e, 0xe2, 0x3c, 0x19, 0xc8, 0xbe, 0xbb, 0x5a, 0x1e, 0xe0, 0x86, 0xe9, 0x89, 0x48, 0xd2, + 0xc7, 0xfc, 0xaa, 0x4c, 0xf2, 0x73, 0xcd, 0xef, 0xcb, 0xcf, 0x35, 0x8c, 0x0f, 0xd5, 0xbf, 0x8f, + 0xf8, 0x50, 0x6f, 0xc2, 0xa0, 0x90, 0xa3, 0xd8, 0xb7, 0x9f, 0x0b, 0xbf, 0x02, 0xc1, 0xdd, 0xaa, + 0xf7, 0xa5, 0xff, 0x3e, 0x18, 0x74, 0x79, 0xcc, 0x34, 0xb1, 0xaf, 0xc6, 0xe7, 0x3e, 0x02, 0xa4, + 0xf9, 0x7f, 0x90, 0x8b, 0x80, 0xe9, 0x3f, 0xe4, 0xd7, 0x38, 0x3c, 0x1d, 0x08, 0xfb, 0x97, 0x54, + 0x60, 0x50, 0xbc, 0x1a, 0x50, 0x00, 0x5f, 0x76, 0x04, 0x6a, 0x19, 0xf6, 0x33, 0x7f, 0x3c, 0xc0, + 0xcf, 0xac, 0x05, 0xb1, 0xfc, 0x66, 0x5b, 0x80, 0xd4, 0x9f, 0xcf, 0x40, 0x21, 0x5e, 0x88, 0x3c, + 0x0f, 0x03, 0xfc, 0x2f, 0xb1, 0x43, 0xc7, 0xd0, 0xad, 0xbc, 0x84, 0x1c, 0xba, 0x55, 0x50, 0xbf, + 0x02, 0xc3, 0x9a, 0xff, 0x14, 0x44, 0xec, 0xd0, 0xd1, 0x7f, 0x37, 0x78, 0x1f, 0x22, 0xfb, 0xef, + 0x06, 0x94, 0xe4, 0x69, 0xc8, 0x2d, 0x35, 0x0d, 0xb1, 0x31, 0xc7, 0x27, 0x45, 0x76, 0x53, 0x8e, + 0x4b, 0xcb, 0xb0, 0x8c, 0x68, 0x91, 0x6e, 0x08, 0x67, 0x6f, 0x24, 0xb2, 0xe8, 0x86, 0x4c, 0xb4, + 0x48, 0x37, 0xd4, 0x5f, 0xcf, 0xf8, 0xee, 0xbb, 0xf3, 0xa6, 0xeb, 0x55, 0xac, 0xfb, 0x7a, 0xd3, + 0x0c, 0x3a, 0x82, 0xdc, 0x80, 0xf1, 0x10, 0x29, 0x85, 0x38, 0x48, 0x84, 0x02, 0xc2, 0x47, 0xf0, + 0x4f, 0x4a, 0x0f, 0x5f, 0xa2, 0xc5, 0xc8, 0x25, 0x49, 0xf9, 0xa5, 0x53, 0x0b, 0xf9, 0xdd, 0xbc, + 0x70, 0xc5, 0x1e, 0x5d, 0x30, 0x5d, 0xd7, 0xb4, 0x56, 0xf9, 0xf3, 0xcb, 0x1c, 0xbe, 0x84, 0xc2, + 0xf3, 0x93, 0x16, 0x87, 0xd7, 0x1d, 0x86, 0x90, 0xdf, 0xef, 0xc8, 0x05, 0xd4, 0xff, 0x98, 0x81, + 0x0b, 
0x8c, 0x13, 0x06, 0x25, 0x4d, 0x7c, 0xd8, 0x81, 0x06, 0x70, 0xab, 0x87, 0xa4, 0xc4, 0xa8, + 0x7e, 0x2a, 0x19, 0x87, 0x23, 0x46, 0x18, 0xe3, 0xde, 0x43, 0xf6, 0xfb, 0x8b, 0x0b, 0xf7, 0x6b, + 0x19, 0xbc, 0x1e, 0x5c, 0x69, 0xd2, 0xdb, 0x8b, 0x95, 0xb7, 0x0f, 0xe9, 0x02, 0x7b, 0xbf, 0x53, + 0xd7, 0x5b, 0x50, 0x70, 0xb1, 0x2d, 0xf5, 0x8e, 0x65, 0x3e, 0xc0, 0x93, 0x2f, 0xf1, 0x31, 0x67, + 0xa5, 0x8f, 0x91, 0xda, 0xaa, 0x8d, 0x73, 0xfa, 0xdb, 0x96, 0xf9, 0x00, 0x63, 0xb2, 0x7e, 0x14, + 0xc3, 0xdc, 0x4b, 0x14, 0xe4, 0x02, 0x0c, 0x31, 0x3e, 0x56, 0xa0, 0x8c, 0x5a, 0xf0, 0x9b, 0x14, + 0x20, 0xd7, 0x31, 0x0d, 0x6c, 0x66, 0xbf, 0xc6, 0xfe, 0x54, 0x7f, 0x2f, 0x07, 0x93, 0xa5, 0xbb, + 0xb5, 0x4a, 0x39, 0x78, 0xc5, 0x20, 0xde, 0xd5, 0xb6, 0xf6, 0x28, 0x0b, 0x9f, 0x9e, 0x94, 0x61, + 0x9c, 0xc7, 0x45, 0x10, 0xb1, 0xfb, 0xf9, 0xb9, 0x54, 0x3f, 0x7f, 0x4d, 0x11, 0xc5, 0x48, 0x4a, + 0x3a, 0x86, 0x18, 0x11, 0xe2, 0xdf, 0x25, 0x0d, 0x38, 0x1f, 0x21, 0xad, 0xeb, 0x41, 0xd8, 0x39, + 0x3f, 0xe4, 0xc7, 0xb3, 0xdb, 0x5b, 0xc5, 0xa7, 0xbb, 0x12, 0x49, 0xac, 0xcf, 0xc9, 0xac, 0xc3, + 0xf0, 0x75, 0x2e, 0xb9, 0x05, 0x93, 0xbc, 0x3c, 0x1e, 0xda, 0x05, 0x4e, 0x12, 0x8c, 0xb9, 0x14, + 0xde, 0x41, 0x42, 0xca, 0xa1, 0xbc, 0x10, 0xc9, 0x04, 0x2e, 0xde, 0x44, 0xdf, 0x85, 0x33, 0x9c, + 0xbe, 0x4d, 0x1d, 0x1c, 0x89, 0xb6, 0x55, 0x77, 0xa9, 0x27, 0x9e, 0x56, 0x4f, 0x3d, 0xbd, 0xbd, + 0x55, 0x2c, 0xa6, 0x12, 0x48, 0x4c, 0x4f, 0x21, 0x41, 0x35, 0xc0, 0xd7, 0xa8, 0x27, 0xfb, 0x69, + 0x0c, 0xec, 0x41, 0xcd, 0xbf, 0x33, 0x0b, 0xe7, 0xe6, 0xa8, 0xde, 0xf4, 0xd6, 0xca, 0x6b, 0xb4, + 0xb1, 0x7e, 0xe2, 0x2a, 0x1d, 0xb1, 0x5a, 0x52, 0xa5, 0x73, 0x62, 0x22, 0xf7, 0x92, 0xce, 0x89, + 0xc5, 0x2b, 0xa4, 0xf3, 0x03, 0x59, 0xb8, 0x9c, 0xb6, 0x39, 0xc0, 0xdb, 0x01, 0x67, 0xe9, 0x3e, + 0x75, 0x1c, 0xd3, 0xa0, 0x47, 0xb8, 0xa8, 0x1c, 0x9e, 0x3d, 0x2c, 0x77, 0x58, 0x7e, 0x9f, 0x1e, + 0xb1, 0xbb, 0x13, 0xd7, 0x11, 0xe6, 0xec, 0x79, 0xb8, 0xc4, 0xf5, 0xed, 0x59, 0x38, 0x5d, 0x33, + 0x57, 0x5d, 0xcf, 0x76, 0x68, 0x15, 0x03, 
0x94, 0x9c, 0x68, 0x52, 0x57, 0xd1, 0x1c, 0x61, 0x6a, + 0xac, 0x87, 0x5d, 0x34, 0x27, 0x03, 0x4a, 0x88, 0xe6, 0xdf, 0xe6, 0x61, 0x62, 0xa1, 0x5c, 0x15, + 0x3b, 0x74, 0xcc, 0x46, 0x78, 0x3c, 0xdf, 0xd0, 0x86, 0x67, 0x0b, 0xf9, 0x7d, 0x9c, 0x2d, 0x1c, + 0x9e, 0x7b, 0x81, 0x48, 0x96, 0x3a, 0xb0, 0xa7, 0x64, 0xa9, 0xcf, 0xc0, 0x78, 0xab, 0xd1, 0xae, + 0xfb, 0x41, 0xa9, 0x4c, 0x91, 0x9f, 0x59, 0x1b, 0x6d, 0x35, 0xfc, 0x44, 0xb0, 0x15, 0x83, 0x14, + 0x61, 0xa4, 0xd1, 0x34, 0xa9, 0xe5, 0xd5, 0x4d, 0xeb, 0x9e, 0x2d, 0x42, 0x40, 0x01, 0x07, 0x55, + 0xac, 0x7b, 0x36, 0x23, 0x70, 0xf1, 0x7b, 0x38, 0x01, 0x8f, 0xf7, 0x04, 0x1c, 0x84, 0x04, 0x57, + 0xa0, 0x80, 0xa7, 0xf9, 0x0d, 0xbb, 0x59, 0x17, 0xc1, 0xe4, 0x44, 0xb8, 0xa7, 0x09, 0x1f, 0xee, + 0x07, 0x94, 0x7b, 0x0e, 0x26, 0x4d, 0x6b, 0xd5, 0x61, 0x3b, 0x74, 0xbd, 0xe3, 0xf1, 0x44, 0x3a, + 0x22, 0xfc, 0xd3, 0x84, 0x40, 0x94, 0x3a, 0x1e, 0x4f, 0xa5, 0x73, 0x19, 0x0a, 0x34, 0x4e, 0x8a, + 0x51, 0x44, 0xb4, 0x71, 0x1a, 0xa1, 0x54, 0x7f, 0x21, 0x0f, 0x63, 0xa1, 0xba, 0xcd, 0x1c, 0xd1, + 0x91, 0xd8, 0xa3, 0xad, 0x6c, 0xe1, 0x8e, 0x69, 0x70, 0x0f, 0xb7, 0xa9, 0xdf, 0x90, 0x81, 0x41, + 0x91, 0x8b, 0x1b, 0x35, 0xef, 0x28, 0x12, 0x7f, 0x0b, 0x90, 0xfa, 0x5b, 0x19, 0x98, 0x5c, 0x28, + 0x57, 0x6f, 0xd6, 0x96, 0x16, 0xb5, 0x6a, 0x79, 0x81, 0xba, 0xae, 0xbe, 0x4a, 0xc9, 0xfb, 0x60, + 0x50, 0x40, 0xc4, 0x29, 0x13, 0x1e, 0xef, 0xbd, 0xeb, 0xda, 0x96, 0xd3, 0x6e, 0x68, 0x3e, 0x4e, + 0x64, 0x64, 0xca, 0xf6, 0xc8, 0xc8, 0xa4, 0x82, 0x48, 0xda, 0x2b, 0x0e, 0xc8, 0x22, 0x69, 0x7c, + 0xf9, 0xff, 0x64, 0x09, 0x06, 0xda, 0xba, 0xa3, 0xb7, 0xdc, 0x9d, 0x6e, 0xcb, 0x9e, 0xd8, 0xde, + 0x2a, 0x16, 0x38, 0x69, 0xea, 0xed, 0x98, 0x60, 0xa3, 0x7e, 0x73, 0x1e, 0xbf, 0x29, 0x08, 0xa9, + 0x11, 0xa4, 0x7b, 0xde, 0xf7, 0x49, 0xc3, 0xab, 0x90, 0xef, 0xec, 0x71, 0x74, 0x74, 0xc4, 0xe8, + 0x70, 0xf7, 0x35, 0x3a, 0x44, 0x29, 0x5f, 0x2b, 0xf3, 0x7b, 0xd5, 0x4a, 0x77, 0x0f, 0x77, 0xfc, + 0x9c, 0x96, 0x2c, 0xc2, 0x60, 0x8b, 0x77, 0xbf, 0x18, 0x04, 0x41, 0xa0, 0xc8, 
0x84, 0x7e, 0x4c, + 0x9d, 0x17, 0x99, 0x5d, 0x27, 0x45, 0x09, 0x59, 0xb3, 0x04, 0x28, 0xa2, 0xe5, 0x83, 0x47, 0xac, + 0xe5, 0x7f, 0x2f, 0x0f, 0x67, 0x43, 0x8d, 0x58, 0xb4, 0x3d, 0xf3, 0x9e, 0xc9, 0xaf, 0x57, 0x1e, + 0x21, 0xb5, 0x90, 0x3a, 0xb8, 0xff, 0x30, 0x3a, 0x38, 0x54, 0xb3, 0x81, 0x3d, 0xa8, 0xd9, 0xc3, + 0xa4, 0x16, 0xbf, 0x94, 0x83, 0xf3, 0xa1, 0x5a, 0xcc, 0x63, 0xea, 0xbc, 0x5a, 0x6d, 0xa6, 0xe6, + 0x39, 0x54, 0x6f, 0x1d, 0x5b, 0xcd, 0xd0, 0xf7, 0xa4, 0x19, 0xfa, 0xbe, 0x27, 0x0c, 0xb9, 0x27, + 0x07, 0x8e, 0xb8, 0x27, 0xbf, 0x90, 0x87, 0x8b, 0x61, 0x4f, 0x8a, 0xf3, 0xfb, 0xb9, 0xe5, 0xe5, + 0xea, 0x71, 0x9f, 0xfd, 0xf7, 0xde, 0x99, 0xfe, 0x9d, 0x5b, 0x7f, 0xea, 0x9d, 0x5b, 0xb8, 0x20, + 0x0f, 0x74, 0x5d, 0x90, 0x9f, 0x83, 0x30, 0x25, 0xbf, 0x1c, 0xa6, 0x4f, 0xca, 0xd3, 0xef, 0xf8, + 0x79, 0xfa, 0x2f, 0x42, 0x7e, 0xc5, 0x36, 0x78, 0x3a, 0xff, 0x51, 0x5e, 0x1b, 0xfb, 0xad, 0xe1, + 0xbf, 0x11, 0x15, 0x19, 0x3e, 0x62, 0x15, 0xf9, 0xa3, 0x1c, 0x9c, 0x9e, 0xb2, 0x3b, 0x96, 0x71, + 0x8b, 0x6e, 0xb6, 0x75, 0xd3, 0xd1, 0x68, 0xc3, 0xbe, 0xcf, 0x3e, 0xe1, 0xaf, 0xdf, 0x25, 0xf6, + 0xf0, 0xf6, 0xad, 0xcf, 0xc3, 0xf0, 0xb2, 0xbd, 0x4e, 0x2d, 0x29, 0x12, 0x36, 0x66, 0x33, 0xf6, + 0x18, 0x90, 0x47, 0xa4, 0x0a, 0x09, 0xc8, 0x0b, 0x30, 0x38, 0x65, 0xf3, 0xcb, 0x43, 0x29, 0x1d, + 0xeb, 0x8a, 0x2d, 0x2e, 0x0d, 0x25, 0xa1, 0x09, 0x32, 0xf2, 0x2a, 0x0c, 0x57, 0x3b, 0x2b, 0x4d, + 0xb3, 0x71, 0x8b, 0xfa, 0x2f, 0x55, 0xd0, 0x6d, 0xb9, 0x8d, 0xc0, 0xfa, 0x3a, 0xdd, 0x8c, 0x44, + 0x35, 0xf2, 0x49, 0xc9, 0x87, 0x60, 0xcc, 0x97, 0x6f, 0xd9, 0xee, 0x58, 0x1e, 0xaa, 0x91, 0xc8, + 0x83, 0xe9, 0x08, 0x44, 0x1d, 0xaf, 0x49, 0xb4, 0x28, 0x21, 0x79, 0x05, 0x46, 0x7d, 0xc0, 0x82, + 0x6d, 0x50, 0x39, 0xee, 0x63, 0x50, 0xb0, 0x65, 0x1b, 0x54, 0x8b, 0x90, 0xa9, 0xbf, 0x18, 0xef, + 0x5d, 0xdb, 0x0b, 0xd6, 0xf7, 0x93, 0xde, 0xed, 0xd2, 0xbb, 0x4b, 0x30, 0x59, 0x75, 0xe8, 0x7d, + 0xd3, 0xee, 0xb8, 0xf1, 0x5e, 0x7e, 0x6a, 0x7b, 0xab, 0xf8, 0x78, 0x5b, 0x20, 0xeb, 0xa9, 0xdd, + 0x9d, 0x2c, 0x4b, 
0xde, 0x82, 0xd1, 0x45, 0xba, 0x11, 0xf2, 0x1a, 0x0c, 0xa3, 0x9b, 0x59, 0x74, + 0x23, 0x9d, 0x4d, 0xa4, 0x84, 0xfa, 0x8b, 0x59, 0x78, 0x56, 0xee, 0xc7, 0x9b, 0xb6, 0x69, 0xa1, + 0x1b, 0xfe, 0x1d, 0xea, 0x04, 0x46, 0xdb, 0xac, 0x6e, 0x36, 0x0f, 0x18, 0x0c, 0xee, 0xcb, 0xbc, + 0x6b, 0xd5, 0x5f, 0xcb, 0xc2, 0x48, 0xad, 0x5c, 0x59, 0xf0, 0xd7, 0xbf, 0x9d, 0x93, 0xe7, 0x4e, + 0xc1, 0x18, 0x8f, 0x2b, 0x57, 0x32, 0x0c, 0x87, 0xba, 0xae, 0xd8, 0xd7, 0x61, 0xe7, 0x89, 0xd8, + 0x71, 0x3a, 0xc7, 0xc8, 0x97, 0xa9, 0x91, 0x22, 0x6c, 0xba, 0x60, 0xab, 0x22, 0xbe, 0x6d, 0x17, + 0x3b, 0xbe, 0xf0, 0x95, 0x83, 0xce, 0xa0, 0xf2, 0x74, 0x11, 0x90, 0x92, 0xe7, 0x61, 0x60, 0x81, + 0xaf, 0x4a, 0xf9, 0xd0, 0x53, 0x83, 0xaf, 0x46, 0xb2, 0xa7, 0x06, 0xa7, 0x21, 0x97, 0x20, 0x5f, + 0x0d, 0x57, 0x38, 0x9c, 0x53, 0xd8, 0xda, 0x26, 0x7b, 0x30, 0x54, 0xb9, 0x7f, 0x49, 0x7e, 0x8a, + 0xad, 0x4d, 0x03, 0xbb, 0x08, 0xad, 0xc6, 0x96, 0xab, 0xf4, 0xd0, 0x6a, 0x8c, 0x85, 0xfa, 0xbb, + 0x19, 0x18, 0x67, 0xe2, 0x2c, 0xdb, 0xad, 0x96, 0x6d, 0x4d, 0x33, 0x0d, 0x9a, 0x86, 0x41, 0x21, + 0x5c, 0xa1, 0x08, 0xc1, 0x02, 0x2d, 0xc9, 0x9d, 0x3b, 0xae, 0x38, 0xfc, 0x87, 0xdc, 0x4f, 0x7e, + 0xbf, 0xc4, 0xde, 0x77, 0xe4, 0xc3, 0xf7, 0x1d, 0xd2, 0xab, 0x0e, 0x39, 0xc7, 0xb3, 0x44, 0x4d, + 0xde, 0x64, 0x73, 0x65, 0x22, 0xf0, 0x1f, 0xba, 0x68, 0x44, 0xe2, 0xfd, 0xc9, 0xa3, 0x2d, 0x12, + 0xf1, 0xef, 0x3b, 0x72, 0x50, 0x60, 0xad, 0x65, 0xa6, 0xaf, 0x69, 0xad, 0x62, 0xf4, 0xf3, 0x23, + 0x18, 0x56, 0x6f, 0xc0, 0x00, 0x17, 0x6c, 0xc2, 0x25, 0x21, 0x22, 0xf2, 0x78, 0x69, 0x8e, 0x61, + 0xca, 0xc3, 0x03, 0x8d, 0xcb, 0xca, 0xc3, 0x23, 0x92, 0xcb, 0xca, 0xc3, 0x69, 0xc8, 0x15, 0xe8, + 0xe7, 0x2b, 0x52, 0x7f, 0x18, 0x50, 0x1a, 0x17, 0x22, 0x39, 0xf1, 0x07, 0x5f, 0x8a, 0x5e, 0x03, + 0xc0, 0x53, 0xdc, 0x8a, 0x65, 0xd0, 0x07, 0x22, 0x0e, 0x36, 0x76, 0x8d, 0xcb, 0xa0, 0x75, 0x93, + 0x81, 0xe5, 0xc4, 0xc2, 0x21, 0x31, 0x1b, 0x4c, 0xbe, 0xa0, 0xe5, 0xf5, 0x0f, 0x07, 0x53, 0xd0, + 0x35, 0xf1, 0x6a, 0xa3, 0x45, 0xd4, 0xed, 0x2c, 0x4c, 
0x72, 0x55, 0xe2, 0xd0, 0xe3, 0xd9, 0x3b, + 0xaf, 0x01, 0x2c, 0x0b, 0xd3, 0x2d, 0x08, 0x36, 0x8d, 0x42, 0xf4, 0x0d, 0xba, 0x68, 0x22, 0x78, + 0x89, 0x18, 0x13, 0x3b, 0x8b, 0x2c, 0x05, 0x95, 0x69, 0x39, 0xfd, 0x39, 0x15, 0xd0, 0x58, 0xd1, + 0x90, 0x98, 0x5c, 0x83, 0xc1, 0x69, 0xd3, 0x6d, 0x37, 0xf5, 0xc8, 0xa3, 0x67, 0x83, 0x83, 0xe4, + 0x71, 0x28, 0xa8, 0xd4, 0xef, 0xc9, 0xc1, 0x85, 0x32, 0x3f, 0xfe, 0xad, 0x6a, 0xd4, 0xf5, 0x1c, + 0x93, 0x7b, 0x96, 0x9d, 0x5c, 0xfc, 0x24, 0x32, 0xd8, 0xed, 0xe5, 0xe1, 0xc8, 0x22, 0x9c, 0x4e, + 0x93, 0xaa, 0x88, 0x11, 0x8e, 0x21, 0x4a, 0xfd, 0x73, 0xf8, 0x76, 0xdd, 0x91, 0x28, 0xb4, 0xd4, + 0x72, 0xea, 0xd7, 0x66, 0xa1, 0x70, 0xc7, 0xa2, 0x5e, 0xdc, 0xcb, 0x64, 0xdf, 0x7b, 0xbb, 0x70, + 0xeb, 0x9b, 0xdd, 0xc3, 0xd6, 0xd7, 0xdf, 0x11, 0xe6, 0xf6, 0xb8, 0x23, 0x9c, 0x03, 0x68, 0x1c, + 0xc0, 0x37, 0x20, 0x2c, 0x1b, 0x13, 0x44, 0xa8, 0xa5, 0x8f, 0xb8, 0x20, 0xc2, 0xcb, 0xc8, 0x47, + 0x4d, 0x10, 0xcf, 0x35, 0xf8, 0x63, 0xdc, 0x5b, 0xa6, 0x65, 0x90, 0xf3, 0x70, 0xe6, 0x76, 0x6d, + 0x46, 0xab, 0xdf, 0xaa, 0x2c, 0x4e, 0xd7, 0x6f, 0x2f, 0xd6, 0xaa, 0x33, 0xe5, 0xca, 0x6c, 0x65, + 0x66, 0xba, 0xd0, 0x47, 0x4e, 0xc1, 0x44, 0x88, 0x9a, 0xbb, 0xbd, 0x50, 0x5a, 0x2c, 0x64, 0xc8, + 0x24, 0x8c, 0x85, 0xc0, 0xa9, 0xa5, 0xe5, 0x42, 0x96, 0x9c, 0x86, 0x42, 0x08, 0xaa, 0xbd, 0x53, + 0x5b, 0x9e, 0x59, 0x28, 0xe4, 0x9e, 0xfb, 0x8e, 0x0c, 0x00, 0xab, 0x65, 0xc9, 0x31, 0x57, 0x4d, + 0x8b, 0x3c, 0x06, 0xe7, 0x90, 0x68, 0x49, 0xab, 0xdc, 0xa8, 0x2c, 0xc6, 0x6a, 0x3a, 0x03, 0x93, + 0x32, 0x72, 0x7e, 0xa9, 0x5c, 0x9a, 0x2f, 0x64, 0x82, 0x06, 0x08, 0x70, 0xad, 0xb6, 0x24, 0xd5, + 0x26, 0x80, 0x4b, 0xb7, 0x96, 0x4b, 0x85, 0x5c, 0x1c, 0xca, 0xd6, 0x9b, 0x42, 0x9e, 0x9c, 0x83, + 0x53, 0x32, 0x74, 0x66, 0x71, 0x59, 0x2b, 0x55, 0xa6, 0x0b, 0xfd, 0xcf, 0x3d, 0x0b, 0x23, 0xb8, + 0x46, 0x0a, 0x87, 0xde, 0x51, 0x18, 0x5a, 0x9a, 0xaa, 0xcd, 0x68, 0x77, 0xb0, 0x35, 0x00, 0x03, + 0xd3, 0x33, 0x8b, 0xac, 0x65, 0x99, 0xe7, 0xfe, 0x7d, 0x06, 0xa0, 0x36, 0xbb, 0x5c, 0x15, 
0x84, + 0x23, 0x30, 0x58, 0x59, 0xbc, 0x53, 0x9a, 0xaf, 0x30, 0xba, 0x21, 0xc8, 0x2f, 0x55, 0x67, 0x98, + 0x50, 0x86, 0xa1, 0xbf, 0x3c, 0xbf, 0x54, 0x9b, 0x29, 0x64, 0x19, 0x50, 0x9b, 0x29, 0x4d, 0x17, + 0x72, 0x0c, 0x78, 0x57, 0xab, 0x2c, 0xcf, 0x14, 0xf2, 0xec, 0xcf, 0xf9, 0xda, 0x72, 0x69, 0xb9, + 0xd0, 0xcf, 0xfe, 0x9c, 0xc5, 0x3f, 0x07, 0x18, 0xb3, 0xda, 0xcc, 0x32, 0xfe, 0x18, 0x64, 0x4d, + 0x98, 0xf5, 0x7f, 0x0d, 0x31, 0x14, 0x63, 0x3d, 0x5d, 0xd1, 0x0a, 0xc3, 0xec, 0x07, 0x63, 0xc9, + 0x7e, 0x00, 0x6b, 0x9c, 0x36, 0xb3, 0xb0, 0x74, 0x67, 0xa6, 0x30, 0xc2, 0x78, 0x2d, 0xdc, 0x62, + 0xe0, 0x51, 0xf6, 0xa7, 0xb6, 0xc0, 0xfe, 0x1c, 0x63, 0x9c, 0xb4, 0x99, 0xd2, 0x7c, 0xb5, 0xb4, + 0x3c, 0x57, 0x18, 0x67, 0xed, 0x41, 0x9e, 0x13, 0xbc, 0xe4, 0x62, 0x69, 0x61, 0xa6, 0x50, 0x10, + 0x34, 0xd3, 0xf3, 0x95, 0xc5, 0x5b, 0x85, 0x49, 0x6c, 0xc8, 0x3b, 0x0b, 0xf8, 0x83, 0xb0, 0x02, + 0xf8, 0xd7, 0xa9, 0xe7, 0x3e, 0x09, 0x03, 0x4b, 0x35, 0xbc, 0x6a, 0x3c, 0x07, 0xa7, 0x96, 0x6a, + 0xf5, 0xe5, 0x77, 0xaa, 0x33, 0xb1, 0x8e, 0x9b, 0x84, 0x31, 0x1f, 0x31, 0x5f, 0x59, 0xbc, 0xfd, + 0x36, 0x57, 0x10, 0x1f, 0xb4, 0x50, 0x2a, 0x2f, 0xd5, 0x0a, 0x59, 0xd6, 0x8f, 0x3e, 0xe8, 0x6e, + 0x65, 0x71, 0x7a, 0xe9, 0x6e, 0xad, 0x90, 0x7b, 0xee, 0x3e, 0x8c, 0x4e, 0xd3, 0xfb, 0x66, 0x83, + 0x0a, 0x05, 0x79, 0x1c, 0xce, 0x4f, 0xcf, 0xdc, 0xa9, 0x94, 0x67, 0xba, 0xaa, 0x48, 0x14, 0x5d, + 0xaa, 0x56, 0x0a, 0x19, 0x72, 0x16, 0x48, 0x14, 0x7c, 0xb3, 0xb4, 0x30, 0x5b, 0xc8, 0x12, 0x05, + 0x4e, 0x47, 0xe1, 0x95, 0xc5, 0xe5, 0xdb, 0x8b, 0x33, 0x85, 0xdc, 0x73, 0xff, 0x28, 0x03, 0x67, + 0x66, 0x9a, 0xba, 0xeb, 0x99, 0x0d, 0x97, 0xea, 0x4e, 0x63, 0xad, 0xac, 0x7b, 0x74, 0xd5, 0x76, + 0x36, 0x89, 0x0a, 0x4f, 0xcc, 0xcc, 0x97, 0x6a, 0xcb, 0x95, 0x72, 0x6d, 0xa6, 0xa4, 0x95, 0xe7, + 0xea, 0xe5, 0xd2, 0xf2, 0xcc, 0x8d, 0x25, 0xed, 0x9d, 0xfa, 0x8d, 0x99, 0xc5, 0x19, 0xad, 0x34, + 0x5f, 0xe8, 0x23, 0x4f, 0x43, 0xb1, 0x0b, 0x4d, 0x6d, 0xa6, 0x7c, 0x5b, 0xab, 0x2c, 0xbf, 0x53, + 0xc8, 0x90, 0xa7, 0xe0, 0xf1, 
0xae, 0x44, 0xec, 0x77, 0x21, 0x4b, 0x9e, 0x80, 0x0b, 0xdd, 0x48, + 0x3e, 0x36, 0x5f, 0xc8, 0x3d, 0xf7, 0xed, 0x19, 0x20, 0x4b, 0x6d, 0x6a, 0xd5, 0xa2, 0x4d, 0x7c, + 0x12, 0x2e, 0x32, 0xbd, 0xa8, 0x77, 0x6f, 0xe0, 0x53, 0xf0, 0x78, 0x2a, 0x85, 0xd4, 0xbc, 0x22, + 0x3c, 0xd6, 0x85, 0x44, 0x34, 0xee, 0x22, 0x28, 0xe9, 0x04, 0xd8, 0xb4, 0x1f, 0xcd, 0xc0, 0x99, + 0xd4, 0x18, 0xff, 0xe4, 0x32, 0x3c, 0x53, 0x9a, 0x5e, 0x60, 0x7d, 0x53, 0x5e, 0xae, 0x2c, 0x2d, + 0xd6, 0xea, 0x0b, 0xb3, 0xa5, 0x3a, 0xd3, 0xbe, 0xdb, 0xb5, 0x58, 0x6f, 0x5e, 0x02, 0xb5, 0x07, + 0x65, 0x79, 0xae, 0xb4, 0x78, 0x83, 0x0d, 0x3f, 0xf2, 0x0c, 0x3c, 0xd9, 0x95, 0x6e, 0x66, 0xb1, + 0x34, 0x35, 0x3f, 0x33, 0x5d, 0xc8, 0x92, 0xf7, 0xc1, 0x53, 0x5d, 0xa9, 0xa6, 0x2b, 0x35, 0x4e, + 0x96, 0x7b, 0x4e, 0x8f, 0xbc, 0x3c, 0x65, 0x5f, 0x59, 0x5e, 0x5a, 0x5c, 0x2e, 0x95, 0x97, 0xd3, + 0x34, 0xfb, 0x3c, 0x9c, 0x89, 0x60, 0xa7, 0x6e, 0xd7, 0x2a, 0x8b, 0x33, 0xb5, 0x5a, 0x21, 0x93, + 0x40, 0x05, 0xa2, 0xcd, 0x4e, 0x4d, 0x7f, 0xf1, 0xcf, 0x9e, 0xe8, 0xfb, 0xe2, 0x9f, 0x3f, 0x91, + 0xf9, 0xcd, 0x3f, 0x7f, 0x22, 0xf3, 0xa7, 0x7f, 0xfe, 0x44, 0xe6, 0xe3, 0xd7, 0xf7, 0x72, 0x4c, + 0xc9, 0x27, 0xf3, 0x95, 0x01, 0xdc, 0x77, 0xbe, 0xf4, 0xff, 0x07, 0x00, 0x00, 0xff, 0xff, 0xe0, + 0xf3, 0x76, 0xa9, 0x42, 0xdd, 0x01, 0x00, } -func (m *OktaAssignmentMetadata) Marshal() (dAtA []byte, err error) { +func (m *Metadata) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -18932,12 +19806,12 @@ func (m *OktaAssignmentMetadata) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *OktaAssignmentMetadata) MarshalTo(dAtA []byte) (int, error) { +func (m *Metadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *OktaAssignmentMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *Metadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int 
@@ -18946,38 +19820,51 @@ func (m *OktaAssignmentMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.EndingStatus) > 0 { - i -= len(m.EndingStatus) - copy(dAtA[i:], m.EndingStatus) - i = encodeVarintEvents(dAtA, i, uint64(len(m.EndingStatus))) + if len(m.ClusterName) > 0 { + i -= len(m.ClusterName) + copy(dAtA[i:], m.ClusterName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ClusterName))) + i-- + dAtA[i] = 0x32 + } + n1, err1 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Time, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Time):]) + if err1 != nil { + return 0, err1 + } + i -= n1 + i = encodeVarintEvents(dAtA, i, uint64(n1)) + i-- + dAtA[i] = 0x2a + if len(m.Code) > 0 { + i -= len(m.Code) + copy(dAtA[i:], m.Code) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Code))) i-- dAtA[i] = 0x22 } - if len(m.StartingStatus) > 0 { - i -= len(m.StartingStatus) - copy(dAtA[i:], m.StartingStatus) - i = encodeVarintEvents(dAtA, i, uint64(len(m.StartingStatus))) + if len(m.ID) > 0 { + i -= len(m.ID) + copy(dAtA[i:], m.ID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ID))) i-- dAtA[i] = 0x1a } - if len(m.User) > 0 { - i -= len(m.User) - copy(dAtA[i:], m.User) - i = encodeVarintEvents(dAtA, i, uint64(len(m.User))) + if len(m.Type) > 0 { + i -= len(m.Type) + copy(dAtA[i:], m.Type) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Type))) i-- dAtA[i] = 0x12 } - if len(m.Source) > 0 { - i -= len(m.Source) - copy(dAtA[i:], m.Source) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Source))) + if m.Index != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.Index)) i-- - dAtA[i] = 0xa + dAtA[i] = 0x8 } return len(dAtA) - i, nil } -func (m *AccessListMemberMetadata) Marshal() (dAtA []byte, err error) { +func (m *SessionMetadata) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -18987,12 +19874,12 @@ func (m 
*AccessListMemberMetadata) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *AccessListMemberMetadata) MarshalTo(dAtA []byte) (int, error) { +func (m *SessionMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AccessListMemberMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *SessionMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -19001,38 +19888,31 @@ func (m *AccessListMemberMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.AccessListTitle) > 0 { - i -= len(m.AccessListTitle) - copy(dAtA[i:], m.AccessListTitle) - i = encodeVarintEvents(dAtA, i, uint64(len(m.AccessListTitle))) + if len(m.PrivateKeyPolicy) > 0 { + i -= len(m.PrivateKeyPolicy) + copy(dAtA[i:], m.PrivateKeyPolicy) + i = encodeVarintEvents(dAtA, i, uint64(len(m.PrivateKeyPolicy))) i-- - dAtA[i] = 0x22 + dAtA[i] = 0x1a } - if len(m.Members) > 0 { - for iNdEx := len(m.Members) - 1; iNdEx >= 0; iNdEx-- { - { - size, err := m.Members[iNdEx].MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x12 - } + if len(m.WithMFA) > 0 { + i -= len(m.WithMFA) + copy(dAtA[i:], m.WithMFA) + i = encodeVarintEvents(dAtA, i, uint64(len(m.WithMFA))) + i-- + dAtA[i] = 0x12 } - if len(m.AccessListName) > 0 { - i -= len(m.AccessListName) - copy(dAtA[i:], m.AccessListName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.AccessListName))) + if len(m.SessionID) > 0 { + i -= len(m.SessionID) + copy(dAtA[i:], m.SessionID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.SessionID))) i-- dAtA[i] = 0xa } return len(dAtA) - i, nil } -func (m *AccessListMember) Marshal() (dAtA []byte, err error) { +func (m *UserMetadata) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := 
m.MarshalToSizedBuffer(dAtA[:size]) @@ -19042,12 +19922,12 @@ func (m *AccessListMember) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *AccessListMember) MarshalTo(dAtA []byte) (int, error) { +func (m *UserMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AccessListMember) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *UserMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -19056,45 +19936,132 @@ func (m *AccessListMember) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.MembershipKind != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.MembershipKind)) - i-- - dAtA[i] = 0x28 - } - if len(m.MemberName) > 0 { - i -= len(m.MemberName) - copy(dAtA[i:], m.MemberName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.MemberName))) + if len(m.UserClusterName) > 0 { + i -= len(m.UserClusterName) + copy(dAtA[i:], m.UserClusterName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.UserClusterName))) i-- - dAtA[i] = 0x22 - } - if len(m.Reason) > 0 { - i -= len(m.Reason) - copy(dAtA[i:], m.Reason) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Reason))) + dAtA[i] = 0x1 i-- - dAtA[i] = 0x1a + dAtA[i] = 0x82 } - n3, err3 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.RemovedOn, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.RemovedOn):]) - if err3 != nil { - return 0, err3 + { + size := m.UserTraits.Size() + i -= size + if _, err := m.UserTraits.MarshalTo(dAtA[i:]); err != nil { + return 0, err + } + i = encodeVarintEvents(dAtA, i, uint64(size)) } - i -= n3 - i = encodeVarintEvents(dAtA, i, uint64(n3)) i-- - dAtA[i] = 0x12 - n4, err4 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.JoinedOn, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.JoinedOn):]) - if err4 != nil { - return 0, err4 + dAtA[i] = 0x7a + if len(m.UserRoles) > 0 { + for iNdEx := 
len(m.UserRoles) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.UserRoles[iNdEx]) + copy(dAtA[i:], m.UserRoles[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.UserRoles[iNdEx]))) + i-- + dAtA[i] = 0x72 + } + } + if m.UserOrigin != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.UserOrigin)) + i-- + dAtA[i] = 0x68 + } + if len(m.BotInstanceID) > 0 { + i -= len(m.BotInstanceID) + copy(dAtA[i:], m.BotInstanceID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.BotInstanceID))) + i-- + dAtA[i] = 0x62 + } + if len(m.BotName) > 0 { + i -= len(m.BotName) + copy(dAtA[i:], m.BotName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.BotName))) + i-- + dAtA[i] = 0x5a + } + if m.UserKind != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.UserKind)) + i-- + dAtA[i] = 0x50 + } + if len(m.RequiredPrivateKeyPolicy) > 0 { + i -= len(m.RequiredPrivateKeyPolicy) + copy(dAtA[i:], m.RequiredPrivateKeyPolicy) + i = encodeVarintEvents(dAtA, i, uint64(len(m.RequiredPrivateKeyPolicy))) + i-- + dAtA[i] = 0x4a + } + if m.TrustedDevice != nil { + { + size, err := m.TrustedDevice.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x42 + } + if len(m.GCPServiceAccount) > 0 { + i -= len(m.GCPServiceAccount) + copy(dAtA[i:], m.GCPServiceAccount) + i = encodeVarintEvents(dAtA, i, uint64(len(m.GCPServiceAccount))) + i-- + dAtA[i] = 0x3a + } + if len(m.AzureIdentity) > 0 { + i -= len(m.AzureIdentity) + copy(dAtA[i:], m.AzureIdentity) + i = encodeVarintEvents(dAtA, i, uint64(len(m.AzureIdentity))) + i-- + dAtA[i] = 0x32 + } + if len(m.AccessRequests) > 0 { + for iNdEx := len(m.AccessRequests) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.AccessRequests[iNdEx]) + copy(dAtA[i:], m.AccessRequests[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.AccessRequests[iNdEx]))) + i-- + dAtA[i] = 0x2a + } + } + if len(m.AWSRoleARN) > 0 { + i -= len(m.AWSRoleARN) + copy(dAtA[i:], m.AWSRoleARN) + i = 
encodeVarintEvents(dAtA, i, uint64(len(m.AWSRoleARN))) + i-- + dAtA[i] = 0x22 + } + if len(m.Impersonator) > 0 { + i -= len(m.Impersonator) + copy(dAtA[i:], m.Impersonator) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Impersonator))) + i-- + dAtA[i] = 0x1a + } + if len(m.Login) > 0 { + i -= len(m.Login) + copy(dAtA[i:], m.Login) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Login))) + i-- + dAtA[i] = 0x12 + } + if len(m.User) > 0 { + i -= len(m.User) + copy(dAtA[i:], m.User) + i = encodeVarintEvents(dAtA, i, uint64(len(m.User))) + i-- + dAtA[i] = 0xa } - i -= n4 - i = encodeVarintEvents(dAtA, i, uint64(n4)) - i-- - dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *AccessListReviewMembershipRequirementsChanged) Marshal() (dAtA []byte, err error) { +func (m *ServerMetadata) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -19104,12 +20071,12 @@ func (m *AccessListReviewMembershipRequirementsChanged) Marshal() (dAtA []byte, return dAtA[:n], nil } -func (m *AccessListReviewMembershipRequirementsChanged) MarshalTo(dAtA []byte) (int, error) { +func (m *ServerMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AccessListReviewMembershipRequirementsChanged) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *ServerMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -19118,9 +20085,30 @@ func (m *AccessListReviewMembershipRequirementsChanged) MarshalToSizedBuffer(dAt i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.Traits) > 0 { - for k := range m.Traits { - v := m.Traits[k] + if len(m.ServerVersion) > 0 { + i -= len(m.ServerVersion) + copy(dAtA[i:], m.ServerVersion) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ServerVersion))) + i-- + dAtA[i] = 0x42 + } + if len(m.ServerSubKind) > 0 { + i -= len(m.ServerSubKind) + copy(dAtA[i:], m.ServerSubKind) 
+ i = encodeVarintEvents(dAtA, i, uint64(len(m.ServerSubKind))) + i-- + dAtA[i] = 0x3a + } + if len(m.ForwardedBy) > 0 { + i -= len(m.ForwardedBy) + copy(dAtA[i:], m.ForwardedBy) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ForwardedBy))) + i-- + dAtA[i] = 0x32 + } + if len(m.ServerLabels) > 0 { + for k := range m.ServerLabels { + v := m.ServerLabels[k] baseI := i i -= len(v) copy(dAtA[i:], v) @@ -19134,22 +20122,41 @@ func (m *AccessListReviewMembershipRequirementsChanged) MarshalToSizedBuffer(dAt dAtA[i] = 0xa i = encodeVarintEvents(dAtA, i, uint64(baseI-i)) i-- - dAtA[i] = 0x12 + dAtA[i] = 0x2a } } - if len(m.Roles) > 0 { - for iNdEx := len(m.Roles) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.Roles[iNdEx]) - copy(dAtA[i:], m.Roles[iNdEx]) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Roles[iNdEx]))) - i-- - dAtA[i] = 0xa - } + if len(m.ServerAddr) > 0 { + i -= len(m.ServerAddr) + copy(dAtA[i:], m.ServerAddr) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ServerAddr))) + i-- + dAtA[i] = 0x22 + } + if len(m.ServerHostname) > 0 { + i -= len(m.ServerHostname) + copy(dAtA[i:], m.ServerHostname) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ServerHostname))) + i-- + dAtA[i] = 0x1a + } + if len(m.ServerID) > 0 { + i -= len(m.ServerID) + copy(dAtA[i:], m.ServerID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ServerID))) + i-- + dAtA[i] = 0x12 + } + if len(m.ServerNamespace) > 0 { + i -= len(m.ServerNamespace) + copy(dAtA[i:], m.ServerNamespace) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ServerNamespace))) + i-- + dAtA[i] = 0xa } return len(dAtA) - i, nil } -func (m *AccessListReviewMetadata) Marshal() (dAtA []byte, err error) { +func (m *ConnectionMetadata) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -19159,12 +20166,12 @@ func (m *AccessListReviewMetadata) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *AccessListReviewMetadata) MarshalTo(dAtA []byte) (int, 
error) { +func (m *ConnectionMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AccessListReviewMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *ConnectionMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -19173,66 +20180,31 @@ func (m *AccessListReviewMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.AccessListTitle) > 0 { - i -= len(m.AccessListTitle) - copy(dAtA[i:], m.AccessListTitle) - i = encodeVarintEvents(dAtA, i, uint64(len(m.AccessListTitle))) - i-- - dAtA[i] = 0x3a - } - if len(m.RemovedMembers) > 0 { - for iNdEx := len(m.RemovedMembers) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.RemovedMembers[iNdEx]) - copy(dAtA[i:], m.RemovedMembers[iNdEx]) - i = encodeVarintEvents(dAtA, i, uint64(len(m.RemovedMembers[iNdEx]))) - i-- - dAtA[i] = 0x32 - } - } - if len(m.ReviewDayOfMonthChanged) > 0 { - i -= len(m.ReviewDayOfMonthChanged) - copy(dAtA[i:], m.ReviewDayOfMonthChanged) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ReviewDayOfMonthChanged))) - i-- - dAtA[i] = 0x2a - } - if len(m.ReviewFrequencyChanged) > 0 { - i -= len(m.ReviewFrequencyChanged) - copy(dAtA[i:], m.ReviewFrequencyChanged) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ReviewFrequencyChanged))) - i-- - dAtA[i] = 0x22 - } - if m.MembershipRequirementsChanged != nil { - { - size, err := m.MembershipRequirementsChanged.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } + if len(m.Protocol) > 0 { + i -= len(m.Protocol) + copy(dAtA[i:], m.Protocol) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Protocol))) i-- dAtA[i] = 0x1a } - if len(m.ReviewID) > 0 { - i -= len(m.ReviewID) - copy(dAtA[i:], m.ReviewID) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ReviewID))) + if len(m.RemoteAddr) > 0 { + i -= 
len(m.RemoteAddr) + copy(dAtA[i:], m.RemoteAddr) + i = encodeVarintEvents(dAtA, i, uint64(len(m.RemoteAddr))) i-- dAtA[i] = 0x12 } - if len(m.Message) > 0 { - i -= len(m.Message) - copy(dAtA[i:], m.Message) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Message))) + if len(m.LocalAddr) > 0 { + i -= len(m.LocalAddr) + copy(dAtA[i:], m.LocalAddr) + i = encodeVarintEvents(dAtA, i, uint64(len(m.LocalAddr))) i-- dAtA[i] = 0xa } return len(dAtA) - i, nil } -func (m *LockMetadata) Marshal() (dAtA []byte, err error) { +func (m *ClientMetadata) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -19242,12 +20214,12 @@ func (m *LockMetadata) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *LockMetadata) MarshalTo(dAtA []byte) (int, error) { +func (m *ClientMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *LockMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *ClientMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -19256,20 +20228,17 @@ func (m *LockMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - { - size, err := m.Target.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.UserAgent) > 0 { + i -= len(m.UserAgent) + copy(dAtA[i:], m.UserAgent) + i = encodeVarintEvents(dAtA, i, uint64(len(m.UserAgent))) + i-- + dAtA[i] = 0xa } - i-- - dAtA[i] = 0x22 return len(dAtA) - i, nil } -func (m *SessionStart) Marshal() (dAtA []byte, err error) { +func (m *KubernetesClusterMetadata) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -19279,12 +20248,12 @@ func (m *SessionStart) Marshal() (dAtA []byte, err error) { return 
dAtA[:n], nil } -func (m *SessionStart) MarshalTo(dAtA []byte) (int, error) { +func (m *KubernetesClusterMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *SessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *KubernetesClusterMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -19293,119 +20262,54 @@ func (m *SessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.Reason) > 0 { - i -= len(m.Reason) - copy(dAtA[i:], m.Reason) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Reason))) - i-- - dAtA[i] = 0x6a - } - if len(m.Invited) > 0 { - for iNdEx := len(m.Invited) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.Invited[iNdEx]) - copy(dAtA[i:], m.Invited[iNdEx]) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Invited[iNdEx]))) + if len(m.KubernetesLabels) > 0 { + for k := range m.KubernetesLabels { + v := m.KubernetesLabels[k] + baseI := i + i -= len(v) + copy(dAtA[i:], v) + i = encodeVarintEvents(dAtA, i, uint64(len(v))) i-- - dAtA[i] = 0x62 - } - } - if len(m.SessionRecording) > 0 { - i -= len(m.SessionRecording) - copy(dAtA[i:], m.SessionRecording) - i = encodeVarintEvents(dAtA, i, uint64(len(m.SessionRecording))) - i-- - dAtA[i] = 0x52 - } - if len(m.InitialCommand) > 0 { - for iNdEx := len(m.InitialCommand) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.InitialCommand[iNdEx]) - copy(dAtA[i:], m.InitialCommand[iNdEx]) - i = encodeVarintEvents(dAtA, i, uint64(len(m.InitialCommand[iNdEx]))) + dAtA[i] = 0x12 + i -= len(k) + copy(dAtA[i:], k) + i = encodeVarintEvents(dAtA, i, uint64(len(k))) i-- - dAtA[i] = 0x4a + dAtA[i] = 0xa + i = encodeVarintEvents(dAtA, i, uint64(baseI-i)) + i-- + dAtA[i] = 0x22 } } - { - size, err := m.KubernetesPodMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + if len(m.KubernetesGroups) > 0 { + for iNdEx := 
len(m.KubernetesGroups) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.KubernetesGroups[iNdEx]) + copy(dAtA[i:], m.KubernetesGroups[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.KubernetesGroups[iNdEx]))) + i-- + dAtA[i] = 0x1a } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) } - i-- - dAtA[i] = 0x42 - { - size, err := m.KubernetesClusterMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + if len(m.KubernetesUsers) > 0 { + for iNdEx := len(m.KubernetesUsers) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.KubernetesUsers[iNdEx]) + copy(dAtA[i:], m.KubernetesUsers[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.KubernetesUsers[iNdEx]))) + i-- + dAtA[i] = 0x12 } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) } - i-- - dAtA[i] = 0x3a - if len(m.TerminalSize) > 0 { - i -= len(m.TerminalSize) - copy(dAtA[i:], m.TerminalSize) - i = encodeVarintEvents(dAtA, i, uint64(len(m.TerminalSize))) + if len(m.KubernetesCluster) > 0 { + i -= len(m.KubernetesCluster) + copy(dAtA[i:], m.KubernetesCluster) + i = encodeVarintEvents(dAtA, i, uint64(len(m.KubernetesCluster))) i-- - dAtA[i] = 0x32 - } - { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x2a - { - size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x22 - { - size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x1a - { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x12 - { - size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - 
return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + dAtA[i] = 0xa } - i-- - dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *SessionJoin) Marshal() (dAtA []byte, err error) { +func (m *KubernetesPodMetadata) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -19415,12 +20319,12 @@ func (m *SessionJoin) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *SessionJoin) MarshalTo(dAtA []byte) (int, error) { +func (m *KubernetesPodMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *SessionJoin) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *KubernetesPodMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -19429,70 +20333,45 @@ func (m *SessionJoin) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - { - size, err := m.KubernetesClusterMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x32 - { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.KubernetesNodeName) > 0 { + i -= len(m.KubernetesNodeName) + copy(dAtA[i:], m.KubernetesNodeName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.KubernetesNodeName))) + i-- + dAtA[i] = 0x2a } - i-- - dAtA[i] = 0x2a - { - size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.KubernetesContainerImage) > 0 { + i -= len(m.KubernetesContainerImage) + copy(dAtA[i:], m.KubernetesContainerImage) + i = encodeVarintEvents(dAtA, i, uint64(len(m.KubernetesContainerImage))) + i-- + dAtA[i] = 
0x22 } - i-- - dAtA[i] = 0x22 - { - size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.KubernetesContainerName) > 0 { + i -= len(m.KubernetesContainerName) + copy(dAtA[i:], m.KubernetesContainerName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.KubernetesContainerName))) + i-- + dAtA[i] = 0x1a } - i-- - dAtA[i] = 0x1a - { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.KubernetesPodNamespace) > 0 { + i -= len(m.KubernetesPodNamespace) + copy(dAtA[i:], m.KubernetesPodNamespace) + i = encodeVarintEvents(dAtA, i, uint64(len(m.KubernetesPodNamespace))) + i-- + dAtA[i] = 0x12 } - i-- - dAtA[i] = 0x12 - { - size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.KubernetesPodName) > 0 { + i -= len(m.KubernetesPodName) + copy(dAtA[i:], m.KubernetesPodName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.KubernetesPodName))) + i-- + dAtA[i] = 0xa } - i-- - dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *SessionPrint) Marshal() (dAtA []byte, err error) { +func (m *SAMLIdPServiceProviderMetadata) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -19502,12 +20381,12 @@ func (m *SessionPrint) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *SessionPrint) MarshalTo(dAtA []byte) (int, error) { +func (m *SAMLIdPServiceProviderMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *SessionPrint) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *SAMLIdPServiceProviderMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ 
-19516,47 +20395,43 @@ func (m *SessionPrint) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.Offset != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.Offset)) - i-- - dAtA[i] = 0x30 - } - if m.DelayMilliseconds != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.DelayMilliseconds)) - i-- - dAtA[i] = 0x28 - } - if m.Bytes != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.Bytes)) - i-- - dAtA[i] = 0x20 + if len(m.AttributeMapping) > 0 { + for k := range m.AttributeMapping { + v := m.AttributeMapping[k] + baseI := i + i -= len(v) + copy(dAtA[i:], v) + i = encodeVarintEvents(dAtA, i, uint64(len(v))) + i-- + dAtA[i] = 0x12 + i -= len(k) + copy(dAtA[i:], k) + i = encodeVarintEvents(dAtA, i, uint64(len(k))) + i-- + dAtA[i] = 0xa + i = encodeVarintEvents(dAtA, i, uint64(baseI-i)) + i-- + dAtA[i] = 0x1a + } } - if len(m.Data) > 0 { - i -= len(m.Data) - copy(dAtA[i:], m.Data) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Data))) + if len(m.ServiceProviderShortcut) > 0 { + i -= len(m.ServiceProviderShortcut) + copy(dAtA[i:], m.ServiceProviderShortcut) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ServiceProviderShortcut))) i-- - dAtA[i] = 0x1a + dAtA[i] = 0x12 } - if m.ChunkIndex != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.ChunkIndex)) + if len(m.ServiceProviderEntityID) > 0 { + i -= len(m.ServiceProviderEntityID) + copy(dAtA[i:], m.ServiceProviderEntityID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ServiceProviderEntityID))) i-- - dAtA[i] = 0x10 - } - { - size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + dAtA[i] = 0xa } - i-- - dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *DesktopRecording) Marshal() (dAtA []byte, err error) { +func (m *OktaResourcesUpdatedMetadata) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := 
m.MarshalToSizedBuffer(dAtA[:size]) @@ -19566,12 +20441,12 @@ func (m *DesktopRecording) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DesktopRecording) MarshalTo(dAtA []byte) (int, error) { +func (m *OktaResourcesUpdatedMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DesktopRecording) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *OktaResourcesUpdatedMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -19580,32 +20455,67 @@ func (m *DesktopRecording) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.DelayMilliseconds != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.DelayMilliseconds)) + if len(m.DeletedResources) > 0 { + for iNdEx := len(m.DeletedResources) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.DeletedResources[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x32 + } + } + if len(m.UpdatedResources) > 0 { + for iNdEx := len(m.UpdatedResources) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.UpdatedResources[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a + } + } + if len(m.AddedResources) > 0 { + for iNdEx := len(m.AddedResources) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.AddedResources[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 + } + } + if m.Deleted != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.Deleted)) i-- dAtA[i] = 0x18 } - if len(m.Message) > 0 { - i -= len(m.Message) - copy(dAtA[i:], m.Message) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Message))) + if m.Updated != 0 { + i = 
encodeVarintEvents(dAtA, i, uint64(m.Updated)) i-- - dAtA[i] = 0x12 + dAtA[i] = 0x10 } - { - size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if m.Added != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.Added)) + i-- + dAtA[i] = 0x8 } - i-- - dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *DesktopClipboardReceive) Marshal() (dAtA []byte, err error) { +func (m *OktaResource) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -19615,12 +20525,12 @@ func (m *DesktopClipboardReceive) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DesktopClipboardReceive) MarshalTo(dAtA []byte) (int, error) { +func (m *OktaResource) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DesktopClipboardReceive) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *OktaResource) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -19629,62 +20539,24 @@ func (m *DesktopClipboardReceive) MarshalToSizedBuffer(dAtA []byte) (int, error) i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.Length != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.Length)) + if len(m.Description) > 0 { + i -= len(m.Description) + copy(dAtA[i:], m.Description) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Description))) i-- - dAtA[i] = 0x30 + dAtA[i] = 0x12 } - if len(m.DesktopAddr) > 0 { - i -= len(m.DesktopAddr) - copy(dAtA[i:], m.DesktopAddr) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DesktopAddr))) + if len(m.ID) > 0 { + i -= len(m.ID) + copy(dAtA[i:], m.ID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ID))) i-- - dAtA[i] = 0x2a - } - { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, 
uint64(size)) - } - i-- - dAtA[i] = 0x22 - { - size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x1a - { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x12 - { - size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + dAtA[i] = 0xa } - i-- - dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *DesktopClipboardSend) Marshal() (dAtA []byte, err error) { +func (m *OktaAssignmentMetadata) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -19694,12 +20566,12 @@ func (m *DesktopClipboardSend) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DesktopClipboardSend) MarshalTo(dAtA []byte) (int, error) { +func (m *OktaAssignmentMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DesktopClipboardSend) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *OktaAssignmentMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -19708,62 +20580,38 @@ func (m *DesktopClipboardSend) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.Length != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.Length)) + if len(m.EndingStatus) > 0 { + i -= len(m.EndingStatus) + copy(dAtA[i:], m.EndingStatus) + i = encodeVarintEvents(dAtA, i, uint64(len(m.EndingStatus))) i-- - dAtA[i] = 0x30 + dAtA[i] = 0x22 } - if len(m.DesktopAddr) > 0 { - i -= len(m.DesktopAddr) - copy(dAtA[i:], m.DesktopAddr) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DesktopAddr))) + if 
len(m.StartingStatus) > 0 { + i -= len(m.StartingStatus) + copy(dAtA[i:], m.StartingStatus) + i = encodeVarintEvents(dAtA, i, uint64(len(m.StartingStatus))) i-- - dAtA[i] = 0x2a - } - { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x22 - { - size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + dAtA[i] = 0x1a } - i-- - dAtA[i] = 0x1a - { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.User) > 0 { + i -= len(m.User) + copy(dAtA[i:], m.User) + i = encodeVarintEvents(dAtA, i, uint64(len(m.User))) + i-- + dAtA[i] = 0x12 } - i-- - dAtA[i] = 0x12 - { - size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.Source) > 0 { + i -= len(m.Source) + copy(dAtA[i:], m.Source) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Source))) + i-- + dAtA[i] = 0xa } - i-- - dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *DesktopSharedDirectoryStart) Marshal() (dAtA []byte, err error) { +func (m *AccessListMemberMetadata) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -19773,12 +20621,12 @@ func (m *DesktopSharedDirectoryStart) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DesktopSharedDirectoryStart) MarshalTo(dAtA []byte) (int, error) { +func (m *AccessListMemberMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DesktopSharedDirectoryStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *AccessListMemberMetadata) 
MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -19787,79 +20635,38 @@ func (m *DesktopSharedDirectoryStart) MarshalToSizedBuffer(dAtA []byte) (int, er i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.DirectoryID != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.DirectoryID)) - i-- - dAtA[i] = 0x40 - } - if len(m.DirectoryName) > 0 { - i -= len(m.DirectoryName) - copy(dAtA[i:], m.DirectoryName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DirectoryName))) - i-- - dAtA[i] = 0x3a - } - if len(m.DesktopAddr) > 0 { - i -= len(m.DesktopAddr) - copy(dAtA[i:], m.DesktopAddr) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DesktopAddr))) + if len(m.AccessListTitle) > 0 { + i -= len(m.AccessListTitle) + copy(dAtA[i:], m.AccessListTitle) + i = encodeVarintEvents(dAtA, i, uint64(len(m.AccessListTitle))) i-- - dAtA[i] = 0x32 - } - { - size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x2a - { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x22 - { - size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + dAtA[i] = 0x22 } - i-- - dAtA[i] = 0x1a - { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + if len(m.Members) > 0 { + for iNdEx := len(m.Members) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Members[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) } - i-- - dAtA[i] = 0x12 - { - size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) 
- if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.AccessListName) > 0 { + i -= len(m.AccessListName) + copy(dAtA[i:], m.AccessListName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.AccessListName))) + i-- + dAtA[i] = 0xa } - i-- - dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *DesktopSharedDirectoryRead) Marshal() (dAtA []byte, err error) { +func (m *AccessListMember) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -19869,12 +20676,12 @@ func (m *DesktopSharedDirectoryRead) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DesktopSharedDirectoryRead) MarshalTo(dAtA []byte) (int, error) { +func (m *AccessListMember) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DesktopSharedDirectoryRead) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *AccessListMember) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -19883,96 +20690,100 @@ func (m *DesktopSharedDirectoryRead) MarshalToSizedBuffer(dAtA []byte) (int, err i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.Offset != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.Offset)) - i-- - dAtA[i] = 0x58 - } - if m.Length != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.Length)) - i-- - dAtA[i] = 0x50 - } - if len(m.Path) > 0 { - i -= len(m.Path) - copy(dAtA[i:], m.Path) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Path))) - i-- - dAtA[i] = 0x4a - } - if m.DirectoryID != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.DirectoryID)) + if m.MembershipKind != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.MembershipKind)) i-- - dAtA[i] = 0x40 + dAtA[i] = 0x28 } - if len(m.DirectoryName) > 0 { - i -= len(m.DirectoryName) - copy(dAtA[i:], m.DirectoryName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DirectoryName))) + if 
len(m.MemberName) > 0 { + i -= len(m.MemberName) + copy(dAtA[i:], m.MemberName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.MemberName))) i-- - dAtA[i] = 0x3a + dAtA[i] = 0x22 } - if len(m.DesktopAddr) > 0 { - i -= len(m.DesktopAddr) - copy(dAtA[i:], m.DesktopAddr) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DesktopAddr))) + if len(m.Reason) > 0 { + i -= len(m.Reason) + copy(dAtA[i:], m.Reason) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Reason))) i-- - dAtA[i] = 0x32 + dAtA[i] = 0x1a } - { - size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + n4, err4 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.RemovedOn, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.RemovedOn):]) + if err4 != nil { + return 0, err4 } + i -= n4 + i = encodeVarintEvents(dAtA, i, uint64(n4)) i-- - dAtA[i] = 0x2a - { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + dAtA[i] = 0x12 + n5, err5 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.JoinedOn, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.JoinedOn):]) + if err5 != nil { + return 0, err5 } + i -= n5 + i = encodeVarintEvents(dAtA, i, uint64(n5)) i-- - dAtA[i] = 0x22 - { - size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + +func (m *AccessListReviewMembershipRequirementsChanged) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } - i-- - dAtA[i] = 0x1a - { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + return dAtA[:n], nil +} + +func (m 
*AccessListReviewMembershipRequirementsChanged) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *AccessListReviewMembershipRequirementsChanged) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.Traits) > 0 { + for k := range m.Traits { + v := m.Traits[k] + baseI := i + i -= len(v) + copy(dAtA[i:], v) + i = encodeVarintEvents(dAtA, i, uint64(len(v))) + i-- + dAtA[i] = 0x12 + i -= len(k) + copy(dAtA[i:], k) + i = encodeVarintEvents(dAtA, i, uint64(len(k))) + i-- + dAtA[i] = 0xa + i = encodeVarintEvents(dAtA, i, uint64(baseI-i)) + i-- + dAtA[i] = 0x12 } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) } - i-- - dAtA[i] = 0x12 - { - size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + if len(m.Roles) > 0 { + for iNdEx := len(m.Roles) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.Roles[iNdEx]) + copy(dAtA[i:], m.Roles[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Roles[iNdEx]))) + i-- + dAtA[i] = 0xa } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) } - i-- - dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *DesktopSharedDirectoryWrite) Marshal() (dAtA []byte, err error) { +func (m *AccessListReviewMetadata) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -19982,12 +20793,12 @@ func (m *DesktopSharedDirectoryWrite) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DesktopSharedDirectoryWrite) MarshalTo(dAtA []byte) (int, error) { +func (m *AccessListReviewMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DesktopSharedDirectoryWrite) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m 
*AccessListReviewMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -19996,44 +20807,187 @@ func (m *DesktopSharedDirectoryWrite) MarshalToSizedBuffer(dAtA []byte) (int, er i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.Offset != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.Offset)) + if len(m.AccessListTitle) > 0 { + i -= len(m.AccessListTitle) + copy(dAtA[i:], m.AccessListTitle) + i = encodeVarintEvents(dAtA, i, uint64(len(m.AccessListTitle))) i-- - dAtA[i] = 0x58 + dAtA[i] = 0x3a } - if m.Length != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.Length)) + if len(m.RemovedMembers) > 0 { + for iNdEx := len(m.RemovedMembers) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.RemovedMembers[iNdEx]) + copy(dAtA[i:], m.RemovedMembers[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.RemovedMembers[iNdEx]))) + i-- + dAtA[i] = 0x32 + } + } + if len(m.ReviewDayOfMonthChanged) > 0 { + i -= len(m.ReviewDayOfMonthChanged) + copy(dAtA[i:], m.ReviewDayOfMonthChanged) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ReviewDayOfMonthChanged))) i-- - dAtA[i] = 0x50 + dAtA[i] = 0x2a } - if len(m.Path) > 0 { - i -= len(m.Path) - copy(dAtA[i:], m.Path) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Path))) + if len(m.ReviewFrequencyChanged) > 0 { + i -= len(m.ReviewFrequencyChanged) + copy(dAtA[i:], m.ReviewFrequencyChanged) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ReviewFrequencyChanged))) i-- - dAtA[i] = 0x4a + dAtA[i] = 0x22 } - if m.DirectoryID != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.DirectoryID)) + if m.MembershipRequirementsChanged != nil { + { + size, err := m.MembershipRequirementsChanged.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } i-- - dAtA[i] = 0x40 + dAtA[i] = 0x1a } - if len(m.DirectoryName) > 0 { - i -= len(m.DirectoryName) - copy(dAtA[i:], m.DirectoryName) - i = encodeVarintEvents(dAtA, i, 
uint64(len(m.DirectoryName))) + if len(m.ReviewID) > 0 { + i -= len(m.ReviewID) + copy(dAtA[i:], m.ReviewID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ReviewID))) i-- - dAtA[i] = 0x3a + dAtA[i] = 0x12 } - if len(m.DesktopAddr) > 0 { - i -= len(m.DesktopAddr) - copy(dAtA[i:], m.DesktopAddr) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DesktopAddr))) + if len(m.Message) > 0 { + i -= len(m.Message) + copy(dAtA[i:], m.Message) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Message))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + +func (m *LockMetadata) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *LockMetadata) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *LockMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + { + size, err := m.Target.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 + return len(dAtA) - i, nil +} + +func (m *SessionStart) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *SessionStart) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *SessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.Reason) > 0 { + i -= len(m.Reason) + copy(dAtA[i:], m.Reason) + 
i = encodeVarintEvents(dAtA, i, uint64(len(m.Reason))) + i-- + dAtA[i] = 0x6a + } + if len(m.Invited) > 0 { + for iNdEx := len(m.Invited) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.Invited[iNdEx]) + copy(dAtA[i:], m.Invited[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Invited[iNdEx]))) + i-- + dAtA[i] = 0x62 + } + } + if len(m.SessionRecording) > 0 { + i -= len(m.SessionRecording) + copy(dAtA[i:], m.SessionRecording) + i = encodeVarintEvents(dAtA, i, uint64(len(m.SessionRecording))) + i-- + dAtA[i] = 0x52 + } + if len(m.InitialCommand) > 0 { + for iNdEx := len(m.InitialCommand) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.InitialCommand[iNdEx]) + copy(dAtA[i:], m.InitialCommand[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.InitialCommand[iNdEx]))) + i-- + dAtA[i] = 0x4a + } + } + { + size, err := m.KubernetesPodMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x42 + { + size, err := m.KubernetesClusterMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x3a + if len(m.TerminalSize) > 0 { + i -= len(m.TerminalSize) + copy(dAtA[i:], m.TerminalSize) + i = encodeVarintEvents(dAtA, i, uint64(len(m.TerminalSize))) i-- dAtA[i] = 0x32 } { - size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -20043,7 +20997,7 @@ func (m *DesktopSharedDirectoryWrite) MarshalToSizedBuffer(dAtA []byte) (int, er i-- dAtA[i] = 0x2a { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -20085,7 +21039,7 @@ func (m *DesktopSharedDirectoryWrite) MarshalToSizedBuffer(dAtA []byte) (int, er return len(dAtA) - i, nil } -func (m *SessionReject) Marshal() (dAtA 
[]byte, err error) { +func (m *SessionJoin) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -20095,12 +21049,12 @@ func (m *SessionReject) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *SessionReject) MarshalTo(dAtA []byte) (int, error) { +func (m *SessionJoin) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *SessionReject) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *SessionJoin) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -20109,18 +21063,16 @@ func (m *SessionReject) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.Maximum != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.Maximum)) - i-- - dAtA[i] = 0x30 - } - if len(m.Reason) > 0 { - i -= len(m.Reason) - copy(dAtA[i:], m.Reason) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Reason))) - i-- - dAtA[i] = 0x2a + { + size, err := m.KubernetesClusterMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0x32 { size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -20130,7 +21082,7 @@ func (m *SessionReject) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x22 + dAtA[i] = 0x2a { size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -20140,6 +21092,16 @@ func (m *SessionReject) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- + dAtA[i] = 0x22 + { + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- dAtA[i] = 0x1a { size, err := 
m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) @@ -20164,7 +21126,7 @@ func (m *SessionReject) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *SessionConnect) Marshal() (dAtA []byte, err error) { +func (m *SessionPrint) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -20174,12 +21136,12 @@ func (m *SessionConnect) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *SessionConnect) MarshalTo(dAtA []byte) (int, error) { +func (m *SessionPrint) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *SessionConnect) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *SessionPrint) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -20188,26 +21150,33 @@ func (m *SessionConnect) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if m.Offset != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.Offset)) + i-- + dAtA[i] = 0x30 } - i-- - dAtA[i] = 0x1a - { - size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if m.DelayMilliseconds != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.DelayMilliseconds)) + i-- + dAtA[i] = 0x28 + } + if m.Bytes != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.Bytes)) + i-- + dAtA[i] = 0x20 + } + if len(m.Data) > 0 { + i -= len(m.Data) + copy(dAtA[i:], m.Data) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Data))) + i-- + dAtA[i] = 0x1a + } + if m.ChunkIndex != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.ChunkIndex)) + i-- + dAtA[i] = 0x10 } - i-- - dAtA[i] = 0x12 { size, err := 
m.Metadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -20221,7 +21190,7 @@ func (m *SessionConnect) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *FileTransferRequestEvent) Marshal() (dAtA []byte, err error) { +func (m *DesktopRecording) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -20231,12 +21200,12 @@ func (m *FileTransferRequestEvent) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *FileTransferRequestEvent) MarshalTo(dAtA []byte) (int, error) { +func (m *DesktopRecording) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *FileTransferRequestEvent) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *DesktopRecording) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -20245,63 +21214,18 @@ func (m *FileTransferRequestEvent) MarshalToSizedBuffer(dAtA []byte) (int, error i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.Filename) > 0 { - i -= len(m.Filename) - copy(dAtA[i:], m.Filename) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Filename))) - i-- - dAtA[i] = 0x42 - } - if m.Download { - i-- - if m.Download { - dAtA[i] = 1 - } else { - dAtA[i] = 0 - } - i-- - dAtA[i] = 0x38 - } - if len(m.Location) > 0 { - i -= len(m.Location) - copy(dAtA[i:], m.Location) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Location))) - i-- - dAtA[i] = 0x32 - } - if len(m.Requester) > 0 { - i -= len(m.Requester) - copy(dAtA[i:], m.Requester) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Requester))) + if m.DelayMilliseconds != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.DelayMilliseconds)) i-- - dAtA[i] = 0x2a - } - if len(m.Approvers) > 0 { - for iNdEx := len(m.Approvers) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.Approvers[iNdEx]) - copy(dAtA[i:], m.Approvers[iNdEx]) - i = encodeVarintEvents(dAtA, i, 
uint64(len(m.Approvers[iNdEx]))) - i-- - dAtA[i] = 0x22 - } + dAtA[i] = 0x18 } - if len(m.RequestID) > 0 { - i -= len(m.RequestID) - copy(dAtA[i:], m.RequestID) - i = encodeVarintEvents(dAtA, i, uint64(len(m.RequestID))) + if len(m.Message) > 0 { + i -= len(m.Message) + copy(dAtA[i:], m.Message) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Message))) i-- - dAtA[i] = 0x1a - } - { - size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + dAtA[i] = 0x12 } - i-- - dAtA[i] = 0x12 { size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -20315,7 +21239,7 @@ func (m *FileTransferRequestEvent) MarshalToSizedBuffer(dAtA []byte) (int, error return len(dAtA) - i, nil } -func (m *Resize) Marshal() (dAtA []byte, err error) { +func (m *DesktopClipboardReceive) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -20325,12 +21249,12 @@ func (m *Resize) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *Resize) MarshalTo(dAtA []byte) (int, error) { +func (m *DesktopClipboardReceive) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *Resize) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *DesktopClipboardReceive) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -20339,43 +21263,25 @@ func (m *Resize) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - { - size, err := m.KubernetesPodMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x42 - { - size, err := m.KubernetesClusterMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = 
encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.DesktopName) > 0 { + i -= len(m.DesktopName) + copy(dAtA[i:], m.DesktopName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DesktopName))) + i-- + dAtA[i] = 0x3a } - i-- - dAtA[i] = 0x3a - if len(m.TerminalSize) > 0 { - i -= len(m.TerminalSize) - copy(dAtA[i:], m.TerminalSize) - i = encodeVarintEvents(dAtA, i, uint64(len(m.TerminalSize))) + if m.Length != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.Length)) i-- - dAtA[i] = 0x32 + dAtA[i] = 0x30 } - { - size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.DesktopAddr) > 0 { + i -= len(m.DesktopAddr) + copy(dAtA[i:], m.DesktopAddr) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DesktopAddr))) + i-- + dAtA[i] = 0x2a } - i-- - dAtA[i] = 0x2a { size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -20419,7 +21325,7 @@ func (m *Resize) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *SessionEnd) Marshal() (dAtA []byte, err error) { +func (m *DesktopClipboardSend) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -20429,12 +21335,12 @@ func (m *SessionEnd) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *SessionEnd) MarshalTo(dAtA []byte) (int, error) { +func (m *DesktopClipboardSend) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *SessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *DesktopClipboardSend) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -20443,97 +21349,25 @@ func (m *SessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.SessionRecording) > 0 { - i -= len(m.SessionRecording) - 
copy(dAtA[i:], m.SessionRecording) - i = encodeVarintEvents(dAtA, i, uint64(len(m.SessionRecording))) - i-- - dAtA[i] = 0x72 - } - if len(m.InitialCommand) > 0 { - for iNdEx := len(m.InitialCommand) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.InitialCommand[iNdEx]) - copy(dAtA[i:], m.InitialCommand[iNdEx]) - i = encodeVarintEvents(dAtA, i, uint64(len(m.InitialCommand[iNdEx]))) - i-- - dAtA[i] = 0x6a - } - } - { - size, err := m.KubernetesPodMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x62 - { - size, err := m.KubernetesClusterMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x5a - n63, err63 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.EndTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.EndTime):]) - if err63 != nil { - return 0, err63 - } - i -= n63 - i = encodeVarintEvents(dAtA, i, uint64(n63)) - i-- - dAtA[i] = 0x52 - n64, err64 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.StartTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.StartTime):]) - if err64 != nil { - return 0, err64 - } - i -= n64 - i = encodeVarintEvents(dAtA, i, uint64(n64)) - i-- - dAtA[i] = 0x4a - if len(m.Participants) > 0 { - for iNdEx := len(m.Participants) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.Participants[iNdEx]) - copy(dAtA[i:], m.Participants[iNdEx]) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Participants[iNdEx]))) - i-- - dAtA[i] = 0x42 - } - } - if m.Interactive { - i-- - if m.Interactive { - dAtA[i] = 1 - } else { - dAtA[i] = 0 - } + if len(m.DesktopName) > 0 { + i -= len(m.DesktopName) + copy(dAtA[i:], m.DesktopName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DesktopName))) i-- - dAtA[i] = 0x38 + dAtA[i] = 0x3a } - if m.EnhancedRecording { - i-- - if m.EnhancedRecording { - dAtA[i] = 1 - } else { - dAtA[i] = 0 - } + if 
m.Length != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.Length)) i-- dAtA[i] = 0x30 } - { - size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.DesktopAddr) > 0 { + i -= len(m.DesktopAddr) + copy(dAtA[i:], m.DesktopAddr) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DesktopAddr))) + i-- + dAtA[i] = 0x2a } - i-- - dAtA[i] = 0x2a { size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -20577,7 +21411,7 @@ func (m *SessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *BPFMetadata) Marshal() (dAtA []byte, err error) { +func (m *DesktopSharedDirectoryStart) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -20587,12 +21421,12 @@ func (m *BPFMetadata) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *BPFMetadata) MarshalTo(dAtA []byte) (int, error) { +func (m *DesktopSharedDirectoryStart) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *BPFMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *DesktopSharedDirectoryStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -20601,129 +21435,34 @@ func (m *BPFMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.Program) > 0 { - i -= len(m.Program) - copy(dAtA[i:], m.Program) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Program))) + if len(m.DesktopName) > 0 { + i -= len(m.DesktopName) + copy(dAtA[i:], m.DesktopName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DesktopName))) i-- - dAtA[i] = 0x1a + dAtA[i] = 0x4a } - if m.CgroupID != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.CgroupID)) + if m.DirectoryID != 0 { + i = 
encodeVarintEvents(dAtA, i, uint64(m.DirectoryID)) i-- - dAtA[i] = 0x10 - } - if m.PID != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.PID)) - i-- - dAtA[i] = 0x8 - } - return len(dAtA) - i, nil -} - -func (m *Status) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err - } - return dAtA[:n], nil -} - -func (m *Status) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *Status) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if len(m.UserMessage) > 0 { - i -= len(m.UserMessage) - copy(dAtA[i:], m.UserMessage) - i = encodeVarintEvents(dAtA, i, uint64(len(m.UserMessage))) - i-- - dAtA[i] = 0x1a - } - if len(m.Error) > 0 { - i -= len(m.Error) - copy(dAtA[i:], m.Error) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Error))) - i-- - dAtA[i] = 0x12 - } - if m.Success { - i-- - if m.Success { - dAtA[i] = 1 - } else { - dAtA[i] = 0 - } - i-- - dAtA[i] = 0x8 - } - return len(dAtA) - i, nil -} - -func (m *SessionCommand) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err - } - return dAtA[:n], nil -} - -func (m *SessionCommand) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *SessionCommand) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if m.ReturnCode != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.ReturnCode)) - i-- - dAtA[i] = 0x48 - } - if len(m.Argv) > 0 { - for iNdEx := len(m.Argv) - 1; iNdEx >= 0; 
iNdEx-- { - i -= len(m.Argv[iNdEx]) - copy(dAtA[i:], m.Argv[iNdEx]) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Argv[iNdEx]))) - i-- - dAtA[i] = 0x42 - } + dAtA[i] = 0x40 } - if len(m.Path) > 0 { - i -= len(m.Path) - copy(dAtA[i:], m.Path) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Path))) + if len(m.DirectoryName) > 0 { + i -= len(m.DirectoryName) + copy(dAtA[i:], m.DirectoryName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DirectoryName))) i-- dAtA[i] = 0x3a } - if m.PPID != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.PPID)) + if len(m.DesktopAddr) > 0 { + i -= len(m.DesktopAddr) + copy(dAtA[i:], m.DesktopAddr) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DesktopAddr))) i-- - dAtA[i] = 0x30 + dAtA[i] = 0x32 } { - size, err := m.BPFMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -20733,7 +21472,7 @@ func (m *SessionCommand) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x2a { - size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -20775,7 +21514,7 @@ func (m *SessionCommand) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *SessionDisk) Marshal() (dAtA []byte, err error) { +func (m *DesktopSharedDirectoryRead) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -20785,12 +21524,12 @@ func (m *SessionDisk) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *SessionDisk) MarshalTo(dAtA []byte) (int, error) { +func (m *DesktopSharedDirectoryRead) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *SessionDisk) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *DesktopSharedDirectoryRead) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ 
= i var l int @@ -20799,25 +21538,51 @@ func (m *SessionDisk) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.ReturnCode != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.ReturnCode)) + if len(m.DesktopName) > 0 { + i -= len(m.DesktopName) + copy(dAtA[i:], m.DesktopName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DesktopName))) i-- - dAtA[i] = 0x40 + dAtA[i] = 0x62 } - if m.Flags != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.Flags)) + if m.Offset != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.Offset)) i-- - dAtA[i] = 0x38 + dAtA[i] = 0x58 + } + if m.Length != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.Length)) + i-- + dAtA[i] = 0x50 } if len(m.Path) > 0 { i -= len(m.Path) copy(dAtA[i:], m.Path) i = encodeVarintEvents(dAtA, i, uint64(len(m.Path))) i-- + dAtA[i] = 0x4a + } + if m.DirectoryID != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.DirectoryID)) + i-- + dAtA[i] = 0x40 + } + if len(m.DirectoryName) > 0 { + i -= len(m.DirectoryName) + copy(dAtA[i:], m.DirectoryName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DirectoryName))) + i-- + dAtA[i] = 0x3a + } + if len(m.DesktopAddr) > 0 { + i -= len(m.DesktopAddr) + copy(dAtA[i:], m.DesktopAddr) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DesktopAddr))) + i-- dAtA[i] = 0x32 } { - size, err := m.BPFMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -20827,7 +21592,7 @@ func (m *SessionDisk) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x2a { - size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -20869,7 +21634,7 @@ func (m *SessionDisk) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *SessionNetwork) Marshal() (dAtA []byte, err error) { +func (m *DesktopSharedDirectoryWrite) 
Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -20879,12 +21644,12 @@ func (m *SessionNetwork) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *SessionNetwork) MarshalTo(dAtA []byte) (int, error) { +func (m *DesktopSharedDirectoryWrite) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *SessionNetwork) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *DesktopSharedDirectoryWrite) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -20893,42 +21658,51 @@ func (m *SessionNetwork) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.Action != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.Action)) + if len(m.DesktopName) > 0 { + i -= len(m.DesktopName) + copy(dAtA[i:], m.DesktopName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DesktopName))) + i-- + dAtA[i] = 0x62 + } + if m.Offset != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.Offset)) i-- dAtA[i] = 0x58 } - if m.Operation != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.Operation)) + if m.Length != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.Length)) i-- dAtA[i] = 0x50 } - if m.TCPVersion != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.TCPVersion)) + if len(m.Path) > 0 { + i -= len(m.Path) + copy(dAtA[i:], m.Path) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Path))) i-- - dAtA[i] = 0x48 + dAtA[i] = 0x4a } - if m.DstPort != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.DstPort)) + if m.DirectoryID != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.DirectoryID)) i-- dAtA[i] = 0x40 } - if len(m.DstAddr) > 0 { - i -= len(m.DstAddr) - copy(dAtA[i:], m.DstAddr) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DstAddr))) + if len(m.DirectoryName) > 0 { + i -= len(m.DirectoryName) + copy(dAtA[i:], m.DirectoryName) + i = encodeVarintEvents(dAtA, 
i, uint64(len(m.DirectoryName))) i-- dAtA[i] = 0x3a } - if len(m.SrcAddr) > 0 { - i -= len(m.SrcAddr) - copy(dAtA[i:], m.SrcAddr) - i = encodeVarintEvents(dAtA, i, uint64(len(m.SrcAddr))) + if len(m.DesktopAddr) > 0 { + i -= len(m.DesktopAddr) + copy(dAtA[i:], m.DesktopAddr) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DesktopAddr))) i-- dAtA[i] = 0x32 } { - size, err := m.BPFMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -20938,7 +21712,7 @@ func (m *SessionNetwork) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x2a { - size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -20980,7 +21754,7 @@ func (m *SessionNetwork) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *SessionData) Marshal() (dAtA []byte, err error) { +func (m *SessionReject) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -20990,12 +21764,12 @@ func (m *SessionData) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *SessionData) MarshalTo(dAtA []byte) (int, error) { +func (m *SessionReject) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *SessionData) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *SessionReject) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -21004,15 +21778,17 @@ func (m *SessionData) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.BytesReceived != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.BytesReceived)) + if m.Maximum != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.Maximum)) i-- - dAtA[i] = 0x38 + dAtA[i] = 0x30 } - if m.BytesTransmitted != 0 { - i = 
encodeVarintEvents(dAtA, i, uint64(m.BytesTransmitted)) + if len(m.Reason) > 0 { + i -= len(m.Reason) + copy(dAtA[i:], m.Reason) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Reason))) i-- - dAtA[i] = 0x30 + dAtA[i] = 0x2a } { size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) @@ -21023,19 +21799,9 @@ func (m *SessionData) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x2a - { - size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- dAtA[i] = 0x22 { - size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -21067,7 +21833,7 @@ func (m *SessionData) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *SessionLeave) Marshal() (dAtA []byte, err error) { +func (m *SessionConnect) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -21077,12 +21843,12 @@ func (m *SessionLeave) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *SessionLeave) MarshalTo(dAtA []byte) (int, error) { +func (m *SessionConnect) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *SessionLeave) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *SessionConnect) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -21100,29 +21866,9 @@ func (m *SessionLeave) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x2a - { - size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x22 - { - size, err := 
m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- dAtA[i] = 0x1a { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -21144,7 +21890,7 @@ func (m *SessionLeave) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *UserLogin) Marshal() (dAtA []byte, err error) { +func (m *FileTransferRequestEvent) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -21154,12 +21900,12 @@ func (m *UserLogin) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *UserLogin) MarshalTo(dAtA []byte) (int, error) { +func (m *FileTransferRequestEvent) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *UserLogin) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *FileTransferRequestEvent) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -21168,85 +21914,55 @@ func (m *UserLogin) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.ConnectorID) > 0 { - i -= len(m.ConnectorID) - copy(dAtA[i:], m.ConnectorID) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ConnectorID))) + if len(m.Filename) > 0 { + i -= len(m.Filename) + copy(dAtA[i:], m.Filename) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Filename))) i-- - dAtA[i] = 0x52 - } - if len(m.AppliedLoginRules) > 0 { - for iNdEx := len(m.AppliedLoginRules) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.AppliedLoginRules[iNdEx]) - copy(dAtA[i:], m.AppliedLoginRules[iNdEx]) - i = encodeVarintEvents(dAtA, i, uint64(len(m.AppliedLoginRules[iNdEx]))) - i-- - dAtA[i] = 0x4a - } - } - { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) 
- if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + dAtA[i] = 0x42 } - i-- - dAtA[i] = 0x42 - { - size, err := m.ClientMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + if m.Download { + i-- + if m.Download { + dAtA[i] = 1 + } else { + dAtA[i] = 0 } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + i-- + dAtA[i] = 0x38 } - i-- - dAtA[i] = 0x3a - if m.MFADevice != nil { - { - size, err := m.MFADevice.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } + if len(m.Location) > 0 { + i -= len(m.Location) + copy(dAtA[i:], m.Location) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Location))) i-- dAtA[i] = 0x32 } - if m.IdentityAttributes != nil { - { - size, err := m.IdentityAttributes.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } + if len(m.Requester) > 0 { + i -= len(m.Requester) + copy(dAtA[i:], m.Requester) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Requester))) i-- dAtA[i] = 0x2a } - if len(m.Method) > 0 { - i -= len(m.Method) - copy(dAtA[i:], m.Method) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Method))) - i-- - dAtA[i] = 0x22 - } - { - size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + if len(m.Approvers) > 0 { + for iNdEx := len(m.Approvers) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.Approvers[iNdEx]) + copy(dAtA[i:], m.Approvers[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Approvers[iNdEx]))) + i-- + dAtA[i] = 0x22 } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) } - i-- - dAtA[i] = 0x1a + if len(m.RequestID) > 0 { + i -= len(m.RequestID) + copy(dAtA[i:], m.RequestID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.RequestID))) + i-- + dAtA[i] = 0x1a + } { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := 
m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -21268,7 +21984,7 @@ func (m *UserLogin) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *CreateMFAAuthChallenge) Marshal() (dAtA []byte, err error) { +func (m *Resize) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -21278,12 +21994,12 @@ func (m *CreateMFAAuthChallenge) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *CreateMFAAuthChallenge) MarshalTo(dAtA []byte) (int, error) { +func (m *Resize) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *CreateMFAAuthChallenge) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *Resize) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -21292,25 +22008,8 @@ func (m *CreateMFAAuthChallenge) MarshalToSizedBuffer(dAtA []byte) (int, error) i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.ChallengeAllowReuse { - i-- - if m.ChallengeAllowReuse { - dAtA[i] = 1 - } else { - dAtA[i] = 0 - } - i-- - dAtA[i] = 0x20 - } - if len(m.ChallengeScope) > 0 { - i -= len(m.ChallengeScope) - copy(dAtA[i:], m.ChallengeScope) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ChallengeScope))) - i-- - dAtA[i] = 0x1a - } { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.KubernetesPodMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -21318,9 +22017,9 @@ func (m *CreateMFAAuthChallenge) MarshalToSizedBuffer(dAtA []byte) (int, error) i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x12 + dAtA[i] = 0x42 { - size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.KubernetesClusterMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -21328,65 +22027,36 @@ func (m *CreateMFAAuthChallenge) 
MarshalToSizedBuffer(dAtA []byte) (int, error) i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0xa - return len(dAtA) - i, nil -} - -func (m *ValidateMFAAuthResponse) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err - } - return dAtA[:n], nil -} - -func (m *ValidateMFAAuthResponse) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *ValidateMFAAuthResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if m.ChallengeAllowReuse { - i-- - if m.ChallengeAllowReuse { - dAtA[i] = 1 - } else { - dAtA[i] = 0 - } + dAtA[i] = 0x3a + if len(m.TerminalSize) > 0 { + i -= len(m.TerminalSize) + copy(dAtA[i:], m.TerminalSize) + i = encodeVarintEvents(dAtA, i, uint64(len(m.TerminalSize))) i-- - dAtA[i] = 0x30 + dAtA[i] = 0x32 } - if len(m.ChallengeScope) > 0 { - i -= len(m.ChallengeScope) - copy(dAtA[i:], m.ChallengeScope) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ChallengeScope))) - i-- - dAtA[i] = 0x2a + { + size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if m.MFADevice != nil { - { - size, err := m.MFADevice.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + i-- + dAtA[i] = 0x2a + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } - i-- - dAtA[i] = 0x22 + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0x22 { - size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, 
err } @@ -21418,7 +22088,7 @@ func (m *ValidateMFAAuthResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) return len(dAtA) - i, nil } -func (m *ResourceMetadata) Marshal() (dAtA []byte, err error) { +func (m *SessionEnd) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -21428,12 +22098,12 @@ func (m *ResourceMetadata) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *ResourceMetadata) MarshalTo(dAtA []byte) (int, error) { +func (m *SessionEnd) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *ResourceMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *SessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -21442,64 +22112,89 @@ func (m *ResourceMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.TTL) > 0 { - i -= len(m.TTL) - copy(dAtA[i:], m.TTL) - i = encodeVarintEvents(dAtA, i, uint64(len(m.TTL))) + if len(m.SessionRecording) > 0 { + i -= len(m.SessionRecording) + copy(dAtA[i:], m.SessionRecording) + i = encodeVarintEvents(dAtA, i, uint64(len(m.SessionRecording))) i-- - dAtA[i] = 0x22 + dAtA[i] = 0x72 } - if len(m.UpdatedBy) > 0 { - i -= len(m.UpdatedBy) - copy(dAtA[i:], m.UpdatedBy) - i = encodeVarintEvents(dAtA, i, uint64(len(m.UpdatedBy))) - i-- - dAtA[i] = 0x1a + if len(m.InitialCommand) > 0 { + for iNdEx := len(m.InitialCommand) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.InitialCommand[iNdEx]) + copy(dAtA[i:], m.InitialCommand[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.InitialCommand[iNdEx]))) + i-- + dAtA[i] = 0x6a + } } - n108, err108 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) - if err108 != nil { - return 0, err108 + { + size, err := 
m.KubernetesPodMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - i -= n108 - i = encodeVarintEvents(dAtA, i, uint64(n108)) i-- - dAtA[i] = 0x12 - if len(m.Name) > 0 { - i -= len(m.Name) - copy(dAtA[i:], m.Name) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Name))) - i-- - dAtA[i] = 0xa + dAtA[i] = 0x62 + { + size, err := m.KubernetesClusterMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - return len(dAtA) - i, nil -} - -func (m *UserCreate) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err + i-- + dAtA[i] = 0x5a + n64, err64 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.EndTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.EndTime):]) + if err64 != nil { + return 0, err64 } - return dAtA[:n], nil -} - -func (m *UserCreate) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *UserCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) + i -= n64 + i = encodeVarintEvents(dAtA, i, uint64(n64)) + i-- + dAtA[i] = 0x52 + n65, err65 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.StartTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.StartTime):]) + if err65 != nil { + return 0, err65 + } + i -= n65 + i = encodeVarintEvents(dAtA, i, uint64(n65)) + i-- + dAtA[i] = 0x4a + if len(m.Participants) > 0 { + for iNdEx := len(m.Participants) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.Participants[iNdEx]) + copy(dAtA[i:], m.Participants[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Participants[iNdEx]))) + i-- + dAtA[i] = 0x42 + } + } + if 
m.Interactive { + i-- + if m.Interactive { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x38 + } + if m.EnhancedRecording { + i-- + if m.EnhancedRecording { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x30 } { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -21507,25 +22202,19 @@ func (m *UserCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x32 - if len(m.Connector) > 0 { - i -= len(m.Connector) - copy(dAtA[i:], m.Connector) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Connector))) - i-- - dAtA[i] = 0x2a - } - if len(m.Roles) > 0 { - for iNdEx := len(m.Roles) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.Roles[iNdEx]) - copy(dAtA[i:], m.Roles[iNdEx]) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Roles[iNdEx]))) - i-- - dAtA[i] = 0x22 + dAtA[i] = 0x2a + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0x22 { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -21557,7 +22246,7 @@ func (m *UserCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *UserUpdate) Marshal() (dAtA []byte, err error) { +func (m *BPFMetadata) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -21567,12 +22256,107 @@ func (m *UserUpdate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *UserUpdate) MarshalTo(dAtA []byte) (int, error) { +func (m *BPFMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *UserUpdate) 
MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *BPFMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.Program) > 0 { + i -= len(m.Program) + copy(dAtA[i:], m.Program) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Program))) + i-- + dAtA[i] = 0x1a + } + if m.CgroupID != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.CgroupID)) + i-- + dAtA[i] = 0x10 + } + if m.PID != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.PID)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + +func (m *Status) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *Status) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *Status) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.UserMessage) > 0 { + i -= len(m.UserMessage) + copy(dAtA[i:], m.UserMessage) + i = encodeVarintEvents(dAtA, i, uint64(len(m.UserMessage))) + i-- + dAtA[i] = 0x1a + } + if len(m.Error) > 0 { + i -= len(m.Error) + copy(dAtA[i:], m.Error) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Error))) + i-- + dAtA[i] = 0x12 + } + if m.Success { + i-- + if m.Success { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + +func (m *SessionCommand) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *SessionCommand) MarshalTo(dAtA []byte) (int, error) { + size := 
m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *SessionCommand) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -21581,8 +22365,34 @@ func (m *UserUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.ReturnCode != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.ReturnCode)) + i-- + dAtA[i] = 0x48 + } + if len(m.Argv) > 0 { + for iNdEx := len(m.Argv) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.Argv[iNdEx]) + copy(dAtA[i:], m.Argv[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Argv[iNdEx]))) + i-- + dAtA[i] = 0x42 + } + } + if len(m.Path) > 0 { + i -= len(m.Path) + copy(dAtA[i:], m.Path) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Path))) + i-- + dAtA[i] = 0x3a + } + if m.PPID != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.PPID)) + i-- + dAtA[i] = 0x30 + } { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.BPFMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -21590,25 +22400,19 @@ func (m *UserUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x32 - if len(m.Connector) > 0 { - i -= len(m.Connector) - copy(dAtA[i:], m.Connector) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Connector))) - i-- - dAtA[i] = 0x2a - } - if len(m.Roles) > 0 { - for iNdEx := len(m.Roles) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.Roles[iNdEx]) - copy(dAtA[i:], m.Roles[iNdEx]) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Roles[iNdEx]))) - i-- - dAtA[i] = 0x22 + dAtA[i] = 0x2a + { + size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0x22 { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ 
-21640,7 +22444,7 @@ func (m *UserUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *UserDelete) Marshal() (dAtA []byte, err error) { +func (m *SessionDisk) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -21650,12 +22454,12 @@ func (m *UserDelete) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *UserDelete) MarshalTo(dAtA []byte) (int, error) { +func (m *SessionDisk) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *UserDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *SessionDisk) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -21664,8 +22468,35 @@ func (m *UserDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.ReturnCode != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.ReturnCode)) + i-- + dAtA[i] = 0x40 + } + if m.Flags != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.Flags)) + i-- + dAtA[i] = 0x38 + } + if len(m.Path) > 0 { + i -= len(m.Path) + copy(dAtA[i:], m.Path) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Path))) + i-- + dAtA[i] = 0x32 + } { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.BPFMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a + { + size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -21675,7 +22506,7 @@ func (m *UserDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x22 { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -21707,7 +22538,7 @@ func (m *UserDelete) 
MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *UserPasswordChange) Marshal() (dAtA []byte, err error) { +func (m *SessionNetwork) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -21717,12 +22548,12 @@ func (m *UserPasswordChange) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *UserPasswordChange) MarshalTo(dAtA []byte) (int, error) { +func (m *SessionNetwork) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *UserPasswordChange) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *SessionNetwork) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -21731,8 +22562,62 @@ func (m *UserPasswordChange) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.Action != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.Action)) + i-- + dAtA[i] = 0x58 + } + if m.Operation != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.Operation)) + i-- + dAtA[i] = 0x50 + } + if m.TCPVersion != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.TCPVersion)) + i-- + dAtA[i] = 0x48 + } + if m.DstPort != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.DstPort)) + i-- + dAtA[i] = 0x40 + } + if len(m.DstAddr) > 0 { + i -= len(m.DstAddr) + copy(dAtA[i:], m.DstAddr) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DstAddr))) + i-- + dAtA[i] = 0x3a + } + if len(m.SrcAddr) > 0 { + i -= len(m.SrcAddr) + copy(dAtA[i:], m.SrcAddr) + i = encodeVarintEvents(dAtA, i, uint64(len(m.SrcAddr))) + i-- + dAtA[i] = 0x32 + } { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.BPFMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a + { + size, err := 
m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 + { + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -21764,7 +22649,7 @@ func (m *UserPasswordChange) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *AccessRequestCreate) Marshal() (dAtA []byte, err error) { +func (m *SessionData) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -21774,12 +22659,12 @@ func (m *AccessRequestCreate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *AccessRequestCreate) MarshalTo(dAtA []byte) (int, error) { +func (m *SessionData) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AccessRequestCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *SessionData) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -21788,123 +22673,38 @@ func (m *AccessRequestCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.ResourceNames) > 0 { - for iNdEx := len(m.ResourceNames) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.ResourceNames[iNdEx]) - copy(dAtA[i:], m.ResourceNames[iNdEx]) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ResourceNames[iNdEx]))) - i-- - dAtA[i] = 0x1 - i-- - dAtA[i] = 0x8a - } - } - if m.AssumeStartTime != nil { - n124, err124 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.AssumeStartTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.AssumeStartTime):]) - if err124 != nil { - return 0, err124 - } - i -= n124 - i = encodeVarintEvents(dAtA, i, uint64(n124)) - i-- - dAtA[i] = 0x1 - i-- - dAtA[i] = 0x82 - } - if len(m.PromotedAccessListName) > 0 { - i -= 
len(m.PromotedAccessListName) - copy(dAtA[i:], m.PromotedAccessListName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.PromotedAccessListName))) - i-- - dAtA[i] = 0x7a - } - n125, err125 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.MaxDuration, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.MaxDuration):]) - if err125 != nil { - return 0, err125 - } - i -= n125 - i = encodeVarintEvents(dAtA, i, uint64(n125)) - i-- - dAtA[i] = 0x6a - if len(m.RequestedResourceIDs) > 0 { - for iNdEx := len(m.RequestedResourceIDs) - 1; iNdEx >= 0; iNdEx-- { - { - size, err := m.RequestedResourceIDs[iNdEx].MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x62 - } - } - if len(m.ProposedState) > 0 { - i -= len(m.ProposedState) - copy(dAtA[i:], m.ProposedState) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ProposedState))) + if m.BytesReceived != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.BytesReceived)) i-- - dAtA[i] = 0x5a + dAtA[i] = 0x38 } - if len(m.Reviewer) > 0 { - i -= len(m.Reviewer) - copy(dAtA[i:], m.Reviewer) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Reviewer))) + if m.BytesTransmitted != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.BytesTransmitted)) i-- - dAtA[i] = 0x52 + dAtA[i] = 0x30 } - if m.Annotations != nil { - { - size, err := m.Annotations.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } - i-- - dAtA[i] = 0x4a - } - if len(m.Reason) > 0 { - i -= len(m.Reason) - copy(dAtA[i:], m.Reason) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Reason))) - i-- - dAtA[i] = 0x42 - } - if len(m.Delegator) > 0 { - i -= len(m.Delegator) - copy(dAtA[i:], m.Delegator) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Delegator))) - i-- - dAtA[i] = 0x3a - } - if 
len(m.RequestState) > 0 { - i -= len(m.RequestState) - copy(dAtA[i:], m.RequestState) - i = encodeVarintEvents(dAtA, i, uint64(len(m.RequestState))) - i-- - dAtA[i] = 0x32 - } - if len(m.RequestID) > 0 { - i -= len(m.RequestID) - copy(dAtA[i:], m.RequestID) - i = encodeVarintEvents(dAtA, i, uint64(len(m.RequestID))) - i-- - dAtA[i] = 0x2a + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if len(m.Roles) > 0 { - for iNdEx := len(m.Roles) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.Roles[iNdEx]) - copy(dAtA[i:], m.Roles[iNdEx]) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Roles[iNdEx]))) - i-- - dAtA[i] = 0x22 + i-- + dAtA[i] = 0x2a + { + size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0x22 { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -21936,7 +22736,7 @@ func (m *AccessRequestCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *AccessRequestExpire) Marshal() (dAtA []byte, err error) { +func (m *SessionLeave) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -21946,12 +22746,12 @@ func (m *AccessRequestExpire) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *AccessRequestExpire) MarshalTo(dAtA []byte) (int, error) { +func (m *SessionLeave) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AccessRequestExpire) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *SessionLeave) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -21960,25 +22760,38 @@ func (m *AccessRequestExpire) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], 
m.XXX_unrecognized) } - if m.ResourceExpiry != nil { - n130, err130 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.ResourceExpiry, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.ResourceExpiry):]) - if err130 != nil { - return 0, err130 + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } - i -= n130 - i = encodeVarintEvents(dAtA, i, uint64(n130)) - i-- - dAtA[i] = 0x22 + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if len(m.RequestID) > 0 { - i -= len(m.RequestID) - copy(dAtA[i:], m.RequestID) - i = encodeVarintEvents(dAtA, i, uint64(len(m.RequestID))) - i-- - dAtA[i] = 0x1a + i-- + dAtA[i] = 0x2a + { + size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0x22 { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + { + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -22000,7 +22813,7 @@ func (m *AccessRequestExpire) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *ResourceID) Marshal() (dAtA []byte, err error) { +func (m *UserLogin) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -22010,12 +22823,12 @@ func (m *ResourceID) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *ResourceID) MarshalTo(dAtA []byte) (int, error) { +func (m *UserLogin) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *ResourceID) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *UserLogin) MarshalToSizedBuffer(dAtA []byte) (int, error) 
{ i := len(dAtA) _ = i var l int @@ -22024,68 +22837,83 @@ func (m *ResourceID) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.SubResourceName) > 0 { - i -= len(m.SubResourceName) - copy(dAtA[i:], m.SubResourceName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.SubResourceName))) + if len(m.ConnectorID) > 0 { + i -= len(m.ConnectorID) + copy(dAtA[i:], m.ConnectorID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ConnectorID))) i-- - dAtA[i] = 0x22 + dAtA[i] = 0x52 } - if len(m.Name) > 0 { - i -= len(m.Name) - copy(dAtA[i:], m.Name) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Name))) - i-- - dAtA[i] = 0x1a + if len(m.AppliedLoginRules) > 0 { + for iNdEx := len(m.AppliedLoginRules) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.AppliedLoginRules[iNdEx]) + copy(dAtA[i:], m.AppliedLoginRules[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.AppliedLoginRules[iNdEx]))) + i-- + dAtA[i] = 0x4a + } } - if len(m.Kind) > 0 { - i -= len(m.Kind) - copy(dAtA[i:], m.Kind) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Kind))) - i-- - dAtA[i] = 0x12 + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if len(m.ClusterName) > 0 { - i -= len(m.ClusterName) - copy(dAtA[i:], m.ClusterName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ClusterName))) - i-- - dAtA[i] = 0xa + i-- + dAtA[i] = 0x42 + { + size, err := m.ClientMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - return len(dAtA) - i, nil -} - -func (m *AccessRequestDelete) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err + i-- + dAtA[i] = 0x3a + if m.MFADevice != nil { + { + size, err := 
m.MFADevice.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x32 } - return dAtA[:n], nil -} - -func (m *AccessRequestDelete) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *AccessRequestDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) + if m.IdentityAttributes != nil { + { + size, err := m.IdentityAttributes.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a } - if len(m.RequestID) > 0 { - i -= len(m.RequestID) - copy(dAtA[i:], m.RequestID) - i = encodeVarintEvents(dAtA, i, uint64(len(m.RequestID))) + if len(m.Method) > 0 { + i -= len(m.Method) + copy(dAtA[i:], m.Method) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Method))) i-- - dAtA[i] = 0x1a + dAtA[i] = 0x22 + } + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0x1a { size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -22109,7 +22937,7 @@ func (m *AccessRequestDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *PortForward) Marshal() (dAtA []byte, err error) { +func (m *CreateMFAAuthChallenge) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -22119,12 +22947,12 @@ func (m *PortForward) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *PortForward) MarshalTo(dAtA []byte) (int, error) { +func (m *CreateMFAAuthChallenge) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return 
m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *PortForward) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *CreateMFAAuthChallenge) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -22133,53 +22961,23 @@ func (m *PortForward) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - { - size, err := m.KubernetesPodMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x3a - { - size, err := m.KubernetesClusterMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x32 - if len(m.Addr) > 0 { - i -= len(m.Addr) - copy(dAtA[i:], m.Addr) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Addr))) + if m.ChallengeAllowReuse { i-- - dAtA[i] = 0x2a - } - { - size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + if m.ChallengeAllowReuse { + dAtA[i] = 1 + } else { + dAtA[i] = 0 } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + i-- + dAtA[i] = 0x20 } - i-- - dAtA[i] = 0x22 - { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.ChallengeScope) > 0 { + i -= len(m.ChallengeScope) + copy(dAtA[i:], m.ChallengeScope) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ChallengeScope))) + i-- + dAtA[i] = 0x1a } - i-- - dAtA[i] = 0x1a { size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -22203,7 +23001,7 @@ func (m *PortForward) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *X11Forward) Marshal() (dAtA []byte, err error) { +func (m *ValidateMFAAuthResponse) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := 
m.MarshalToSizedBuffer(dAtA[:size]) @@ -22213,12 +23011,12 @@ func (m *X11Forward) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *X11Forward) MarshalTo(dAtA []byte) (int, error) { +func (m *ValidateMFAAuthResponse) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *X11Forward) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *ValidateMFAAuthResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -22227,18 +23025,37 @@ func (m *X11Forward) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - { - size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + if m.ChallengeAllowReuse { + i-- + if m.ChallengeAllowReuse { + dAtA[i] = 1 + } else { + dAtA[i] = 0 } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + i-- + dAtA[i] = 0x30 + } + if len(m.ChallengeScope) > 0 { + i -= len(m.ChallengeScope) + copy(dAtA[i:], m.ChallengeScope) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ChallengeScope))) + i-- + dAtA[i] = 0x2a + } + if m.MFADevice != nil { + { + size, err := m.MFADevice.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 } - i-- - dAtA[i] = 0x22 { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -22270,7 +23087,7 @@ func (m *X11Forward) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *CommandMetadata) Marshal() (dAtA []byte, err error) { +func (m *ResourceMetadata) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -22280,12 +23097,12 @@ func (m *CommandMetadata) Marshal() (dAtA []byte, err error) { return dAtA[:n], 
nil } -func (m *CommandMetadata) MarshalTo(dAtA []byte) (int, error) { +func (m *ResourceMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *CommandMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *ResourceMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -22294,31 +23111,39 @@ func (m *CommandMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.Error) > 0 { - i -= len(m.Error) - copy(dAtA[i:], m.Error) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Error))) + if len(m.TTL) > 0 { + i -= len(m.TTL) + copy(dAtA[i:], m.TTL) + i = encodeVarintEvents(dAtA, i, uint64(len(m.TTL))) i-- - dAtA[i] = 0x1a + dAtA[i] = 0x22 } - if len(m.ExitCode) > 0 { - i -= len(m.ExitCode) - copy(dAtA[i:], m.ExitCode) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ExitCode))) + if len(m.UpdatedBy) > 0 { + i -= len(m.UpdatedBy) + copy(dAtA[i:], m.UpdatedBy) + i = encodeVarintEvents(dAtA, i, uint64(len(m.UpdatedBy))) i-- - dAtA[i] = 0x12 + dAtA[i] = 0x1a } - if len(m.Command) > 0 { - i -= len(m.Command) - copy(dAtA[i:], m.Command) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Command))) + n109, err109 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) + if err109 != nil { + return 0, err109 + } + i -= n109 + i = encodeVarintEvents(dAtA, i, uint64(n109)) + i-- + dAtA[i] = 0x12 + if len(m.Name) > 0 { + i -= len(m.Name) + copy(dAtA[i:], m.Name) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Name))) i-- dAtA[i] = 0xa } return len(dAtA) - i, nil } -func (m *Exec) Marshal() (dAtA []byte, err error) { +func (m *UserCreate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -22328,12 +23153,12 @@ func (m *Exec) Marshal() (dAtA []byte, err 
error) { return dAtA[:n], nil } -func (m *Exec) MarshalTo(dAtA []byte) (int, error) { +func (m *UserCreate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *Exec) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *UserCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -22343,7 +23168,7 @@ func (m *Exec) MarshalToSizedBuffer(dAtA []byte) (int, error) { copy(dAtA[i:], m.XXX_unrecognized) } { - size, err := m.KubernetesPodMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -22351,9 +23176,25 @@ func (m *Exec) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x42 + dAtA[i] = 0x32 + if len(m.Connector) > 0 { + i -= len(m.Connector) + copy(dAtA[i:], m.Connector) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Connector))) + i-- + dAtA[i] = 0x2a + } + if len(m.Roles) > 0 { + for iNdEx := len(m.Roles) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.Roles[iNdEx]) + copy(dAtA[i:], m.Roles[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Roles[iNdEx]))) + i-- + dAtA[i] = 0x22 + } + } { - size, err := m.KubernetesClusterMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -22361,9 +23202,9 @@ func (m *Exec) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x3a + dAtA[i] = 0x1a { - size, err := m.CommandMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -22371,9 +23212,9 @@ func (m *Exec) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x32 + dAtA[i] = 0x12 { - size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) + 
size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -22381,9 +23222,36 @@ func (m *Exec) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x2a + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + +func (m *UserUpdate) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *UserUpdate) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *UserUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } { - size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -22391,9 +23259,25 @@ func (m *Exec) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x22 + dAtA[i] = 0x32 + if len(m.Connector) > 0 { + i -= len(m.Connector) + copy(dAtA[i:], m.Connector) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Connector))) + i-- + dAtA[i] = 0x2a + } + if len(m.Roles) > 0 { + for iNdEx := len(m.Roles) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.Roles[iNdEx]) + copy(dAtA[i:], m.Roles[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Roles[iNdEx]))) + i-- + dAtA[i] = 0x22 + } + } { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -22425,7 +23309,7 @@ func (m *Exec) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *SCP) Marshal() (dAtA []byte, err error) { +func (m *UserDelete) Marshal() (dAtA []byte, err error) 
{ size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -22435,12 +23319,12 @@ func (m *SCP) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *SCP) MarshalTo(dAtA []byte) (int, error) { +func (m *UserDelete) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *SCP) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *UserDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -22449,22 +23333,18 @@ func (m *SCP) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.Action) > 0 { - i -= len(m.Action) - copy(dAtA[i:], m.Action) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Action))) - i-- - dAtA[i] = 0x42 - } - if len(m.Path) > 0 { - i -= len(m.Path) - copy(dAtA[i:], m.Path) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Path))) - i-- - dAtA[i] = 0x3a + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0x22 { - size, err := m.CommandMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -22472,9 +23352,9 @@ func (m *SCP) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x32 + dAtA[i] = 0x1a { - size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -22482,9 +23362,9 @@ func (m *SCP) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x2a + dAtA[i] = 0x12 { - size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, 
err } @@ -22492,7 +23372,34 @@ func (m *SCP) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x22 + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + +func (m *UserPasswordChange) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *UserPasswordChange) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *UserPasswordChange) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } { size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -22526,7 +23433,7 @@ func (m *SCP) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *SFTPAttributes) Marshal() (dAtA []byte, err error) { +func (m *AccessRequestCreate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -22536,12 +23443,12 @@ func (m *SFTPAttributes) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *SFTPAttributes) MarshalTo(dAtA []byte) (int, error) { +func (m *AccessRequestCreate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *SFTPAttributes) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *AccessRequestCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -22550,108 +23457,75 @@ func (m *SFTPAttributes) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.ModificationTime != nil { - n159, err159 := 
github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.ModificationTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.ModificationTime):]) - if err159 != nil { - return 0, err159 + if len(m.ResourceNames) > 0 { + for iNdEx := len(m.ResourceNames) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.ResourceNames[iNdEx]) + copy(dAtA[i:], m.ResourceNames[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ResourceNames[iNdEx]))) + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0x8a } - i -= n159 - i = encodeVarintEvents(dAtA, i, uint64(n159)) - i-- - dAtA[i] = 0x32 } - if m.AccessTime != nil { - n160, err160 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.AccessTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.AccessTime):]) - if err160 != nil { - return 0, err160 + if m.AssumeStartTime != nil { + n125, err125 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.AssumeStartTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.AssumeStartTime):]) + if err125 != nil { + return 0, err125 } - i -= n160 - i = encodeVarintEvents(dAtA, i, uint64(n160)) + i -= n125 + i = encodeVarintEvents(dAtA, i, uint64(n125)) i-- - dAtA[i] = 0x2a - } - if m.Permissions != nil { - n161, err161 := github_com_gogo_protobuf_types.StdUInt32MarshalTo(*m.Permissions, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdUInt32(*m.Permissions):]) - if err161 != nil { - return 0, err161 - } - i -= n161 - i = encodeVarintEvents(dAtA, i, uint64(n161)) + dAtA[i] = 0x1 i-- - dAtA[i] = 0x22 + dAtA[i] = 0x82 } - if m.GID != nil { - n162, err162 := github_com_gogo_protobuf_types.StdUInt32MarshalTo(*m.GID, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdUInt32(*m.GID):]) - if err162 != nil { - return 0, err162 - } - i -= n162 - i = encodeVarintEvents(dAtA, i, uint64(n162)) + if len(m.PromotedAccessListName) > 0 { + i -= len(m.PromotedAccessListName) + copy(dAtA[i:], m.PromotedAccessListName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.PromotedAccessListName))) i-- - dAtA[i] = 0x1a + dAtA[i] = 0x7a 
} - if m.UID != nil { - n163, err163 := github_com_gogo_protobuf_types.StdUInt32MarshalTo(*m.UID, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdUInt32(*m.UID):]) - if err163 != nil { - return 0, err163 - } - i -= n163 - i = encodeVarintEvents(dAtA, i, uint64(n163)) - i-- - dAtA[i] = 0x12 + n126, err126 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.MaxDuration, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.MaxDuration):]) + if err126 != nil { + return 0, err126 } - if m.FileSize != nil { - n164, err164 := github_com_gogo_protobuf_types.StdUInt64MarshalTo(*m.FileSize, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdUInt64(*m.FileSize):]) - if err164 != nil { - return 0, err164 + i -= n126 + i = encodeVarintEvents(dAtA, i, uint64(n126)) + i-- + dAtA[i] = 0x6a + if len(m.RequestedResourceIDs) > 0 { + for iNdEx := len(m.RequestedResourceIDs) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.RequestedResourceIDs[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x62 } - i -= n164 - i = encodeVarintEvents(dAtA, i, uint64(n164)) - i-- - dAtA[i] = 0xa - } - return len(dAtA) - i, nil -} - -func (m *SFTP) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err - } - return dAtA[:n], nil -} - -func (m *SFTP) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *SFTP) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.Error) > 0 { - i -= len(m.Error) - copy(dAtA[i:], m.Error) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Error))) + if len(m.ProposedState) > 0 { + i -= len(m.ProposedState) + copy(dAtA[i:], m.ProposedState) + i = 
encodeVarintEvents(dAtA, i, uint64(len(m.ProposedState))) i-- - dAtA[i] = 0x62 + dAtA[i] = 0x5a } - if m.Action != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.Action)) + if len(m.Reviewer) > 0 { + i -= len(m.Reviewer) + copy(dAtA[i:], m.Reviewer) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Reviewer))) i-- - dAtA[i] = 0x58 + dAtA[i] = 0x52 } - if m.Attributes != nil { + if m.Annotations != nil { { - size, err := m.Attributes.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.Annotations.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -22659,56 +23533,47 @@ func (m *SFTP) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x52 - } - if m.Flags != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.Flags)) - i-- - dAtA[i] = 0x48 + dAtA[i] = 0x4a } - if len(m.TargetPath) > 0 { - i -= len(m.TargetPath) - copy(dAtA[i:], m.TargetPath) - i = encodeVarintEvents(dAtA, i, uint64(len(m.TargetPath))) + if len(m.Reason) > 0 { + i -= len(m.Reason) + copy(dAtA[i:], m.Reason) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Reason))) i-- dAtA[i] = 0x42 } - if len(m.Path) > 0 { - i -= len(m.Path) - copy(dAtA[i:], m.Path) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Path))) + if len(m.Delegator) > 0 { + i -= len(m.Delegator) + copy(dAtA[i:], m.Delegator) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Delegator))) i-- dAtA[i] = 0x3a } - if len(m.WorkingDirectory) > 0 { - i -= len(m.WorkingDirectory) - copy(dAtA[i:], m.WorkingDirectory) - i = encodeVarintEvents(dAtA, i, uint64(len(m.WorkingDirectory))) + if len(m.RequestState) > 0 { + i -= len(m.RequestState) + copy(dAtA[i:], m.RequestState) + i = encodeVarintEvents(dAtA, i, uint64(len(m.RequestState))) i-- dAtA[i] = 0x32 } - { - size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.RequestID) > 0 { + i -= len(m.RequestID) + copy(dAtA[i:], 
m.RequestID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.RequestID))) + i-- + dAtA[i] = 0x2a } - i-- - dAtA[i] = 0x2a - { - size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + if len(m.Roles) > 0 { + for iNdEx := len(m.Roles) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.Roles[iNdEx]) + copy(dAtA[i:], m.Roles[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Roles[iNdEx]))) + i-- + dAtA[i] = 0x22 } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) } - i-- - dAtA[i] = 0x22 { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -22740,7 +23605,7 @@ func (m *SFTP) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *SFTPSummary) Marshal() (dAtA []byte, err error) { +func (m *AccessRequestExpire) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -22750,12 +23615,12 @@ func (m *SFTPSummary) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *SFTPSummary) MarshalTo(dAtA []byte) (int, error) { +func (m *AccessRequestExpire) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *SFTPSummary) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *AccessRequestExpire) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -22764,52 +23629,25 @@ func (m *SFTPSummary) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.FileTransferStats) > 0 { - for iNdEx := len(m.FileTransferStats) - 1; iNdEx >= 0; iNdEx-- { - { - size, err := m.FileTransferStats[iNdEx].MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x32 - } - } - { - 
size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x2a - { - size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + if m.ResourceExpiry != nil { + n131, err131 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.ResourceExpiry, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.ResourceExpiry):]) + if err131 != nil { + return 0, err131 } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + i -= n131 + i = encodeVarintEvents(dAtA, i, uint64(n131)) + i-- + dAtA[i] = 0x22 } - i-- - dAtA[i] = 0x22 - { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.RequestID) > 0 { + i -= len(m.RequestID) + copy(dAtA[i:], m.RequestID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.RequestID))) + i-- + dAtA[i] = 0x1a } - i-- - dAtA[i] = 0x1a { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -22831,7 +23669,7 @@ func (m *SFTPSummary) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *FileTransferStat) Marshal() (dAtA []byte, err error) { +func (m *ResourceID) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -22841,12 +23679,12 @@ func (m *FileTransferStat) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *FileTransferStat) MarshalTo(dAtA []byte) (int, error) { +func (m *ResourceID) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *FileTransferStat) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *ResourceID) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := 
len(dAtA) _ = i var l int @@ -22855,27 +23693,38 @@ func (m *FileTransferStat) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.BytesWritten != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.BytesWritten)) + if len(m.SubResourceName) > 0 { + i -= len(m.SubResourceName) + copy(dAtA[i:], m.SubResourceName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.SubResourceName))) i-- - dAtA[i] = 0x18 + dAtA[i] = 0x22 } - if m.BytesRead != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.BytesRead)) + if len(m.Name) > 0 { + i -= len(m.Name) + copy(dAtA[i:], m.Name) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Name))) i-- - dAtA[i] = 0x10 + dAtA[i] = 0x1a } - if len(m.Path) > 0 { - i -= len(m.Path) - copy(dAtA[i:], m.Path) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Path))) + if len(m.Kind) > 0 { + i -= len(m.Kind) + copy(dAtA[i:], m.Kind) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Kind))) + i-- + dAtA[i] = 0x12 + } + if len(m.ClusterName) > 0 { + i -= len(m.ClusterName) + copy(dAtA[i:], m.ClusterName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ClusterName))) i-- dAtA[i] = 0xa } return len(dAtA) - i, nil } -func (m *Subsystem) Marshal() (dAtA []byte, err error) { +func (m *AccessRequestDelete) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -22885,12 +23734,12 @@ func (m *Subsystem) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *Subsystem) MarshalTo(dAtA []byte) (int, error) { +func (m *AccessRequestDelete) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *Subsystem) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *AccessRequestDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -22899,40 +23748,13 @@ func (m *Subsystem) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= 
len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - { - size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x32 - if len(m.Error) > 0 { - i -= len(m.Error) - copy(dAtA[i:], m.Error) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Error))) - i-- - dAtA[i] = 0x2a - } - if len(m.Name) > 0 { - i -= len(m.Name) - copy(dAtA[i:], m.Name) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Name))) + if len(m.RequestID) > 0 { + i -= len(m.RequestID) + copy(dAtA[i:], m.RequestID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.RequestID))) i-- - dAtA[i] = 0x22 - } - { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + dAtA[i] = 0x1a } - i-- - dAtA[i] = 0x1a { size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -22956,7 +23778,7 @@ func (m *Subsystem) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *ClientDisconnect) Marshal() (dAtA []byte, err error) { +func (m *PortForward) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -22966,12 +23788,12 @@ func (m *ClientDisconnect) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *ClientDisconnect) MarshalTo(dAtA []byte) (int, error) { +func (m *PortForward) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *ClientDisconnect) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *PortForward) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -22980,15 +23802,35 @@ func (m *ClientDisconnect) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.Reason) > 0 { - i -= 
len(m.Reason) - copy(dAtA[i:], m.Reason) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Reason))) + { + size, err := m.KubernetesPodMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x3a + { + size, err := m.KubernetesClusterMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x32 + if len(m.Addr) > 0 { + i -= len(m.Addr) + copy(dAtA[i:], m.Addr) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Addr))) i-- dAtA[i] = 0x2a } { - size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23030,7 +23872,7 @@ func (m *ClientDisconnect) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *AuthAttempt) Marshal() (dAtA []byte, err error) { +func (m *X11Forward) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -23040,12 +23882,12 @@ func (m *AuthAttempt) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *AuthAttempt) MarshalTo(dAtA []byte) (int, error) { +func (m *X11Forward) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AuthAttempt) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *X11Forward) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -23097,7 +23939,7 @@ func (m *AuthAttempt) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *UserTokenCreate) Marshal() (dAtA []byte, err error) { +func (m *CommandMetadata) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -23107,12 +23949,12 @@ func (m *UserTokenCreate) Marshal() 
(dAtA []byte, err error) { return dAtA[:n], nil } -func (m *UserTokenCreate) MarshalTo(dAtA []byte) (int, error) { +func (m *CommandMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *UserTokenCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *CommandMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -23121,40 +23963,31 @@ func (m *UserTokenCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.Error) > 0 { + i -= len(m.Error) + copy(dAtA[i:], m.Error) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Error))) + i-- + dAtA[i] = 0x1a } - i-- - dAtA[i] = 0x1a - { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.ExitCode) > 0 { + i -= len(m.ExitCode) + copy(dAtA[i:], m.ExitCode) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ExitCode))) + i-- + dAtA[i] = 0x12 } - i-- - dAtA[i] = 0x12 - { - size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.Command) > 0 { + i -= len(m.Command) + copy(dAtA[i:], m.Command) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Command))) + i-- + dAtA[i] = 0xa } - i-- - dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *RoleCreate) Marshal() (dAtA []byte, err error) { +func (m *Exec) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -23164,12 +23997,12 @@ func (m *RoleCreate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *RoleCreate) MarshalTo(dAtA 
[]byte) (int, error) { +func (m *Exec) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *RoleCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *Exec) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -23179,7 +24012,7 @@ func (m *RoleCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { copy(dAtA[i:], m.XXX_unrecognized) } { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.KubernetesPodMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23187,9 +24020,9 @@ func (m *RoleCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x22 + dAtA[i] = 0x42 { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.KubernetesClusterMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23197,9 +24030,9 @@ func (m *RoleCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x1a + dAtA[i] = 0x3a { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.CommandMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23207,9 +24040,9 @@ func (m *RoleCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x12 + dAtA[i] = 0x32 { - size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23217,36 +24050,9 @@ func (m *RoleCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0xa - return len(dAtA) - i, nil -} - -func (m *RoleUpdate) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil 
{ - return nil, err - } - return dAtA[:n], nil -} - -func (m *RoleUpdate) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *RoleUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } + dAtA[i] = 0x2a { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23256,7 +24062,7 @@ func (m *RoleUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x22 { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23266,7 +24072,7 @@ func (m *RoleUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x1a { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23288,7 +24094,7 @@ func (m *RoleUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *RoleDelete) Marshal() (dAtA []byte, err error) { +func (m *SCP) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -23298,12 +24104,12 @@ func (m *RoleDelete) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *RoleDelete) MarshalTo(dAtA []byte) (int, error) { +func (m *SCP) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *RoleDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *SCP) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -23312,18 +24118,22 @@ func (m *RoleDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i 
-= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.Action) > 0 { + i -= len(m.Action) + copy(dAtA[i:], m.Action) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Action))) + i-- + dAtA[i] = 0x42 + } + if len(m.Path) > 0 { + i -= len(m.Path) + copy(dAtA[i:], m.Path) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Path))) + i-- + dAtA[i] = 0x3a } - i-- - dAtA[i] = 0x22 { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.CommandMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23331,9 +24141,9 @@ func (m *RoleDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x1a + dAtA[i] = 0x32 { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23341,9 +24151,9 @@ func (m *RoleDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x12 + dAtA[i] = 0x2a { - size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23351,36 +24161,9 @@ func (m *RoleDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0xa - return len(dAtA) - i, nil -} - -func (m *BotCreate) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err - } - return dAtA[:n], nil -} - -func (m *BotCreate) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *BotCreate) MarshalToSizedBuffer(dAtA []byte) 
(int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } + dAtA[i] = 0x22 { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23390,7 +24173,7 @@ func (m *BotCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x1a { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23412,7 +24195,7 @@ func (m *BotCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *BotUpdate) Marshal() (dAtA []byte, err error) { +func (m *SFTPAttributes) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -23422,12 +24205,12 @@ func (m *BotUpdate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *BotUpdate) MarshalTo(dAtA []byte) (int, error) { +func (m *SFTPAttributes) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *BotUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *SFTPAttributes) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -23436,40 +24219,70 @@ func (m *BotUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + if m.ModificationTime != nil { + n160, err160 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.ModificationTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.ModificationTime):]) + if err160 != nil { + return 0, err160 } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x1a 
- { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + i -= n160 + i = encodeVarintEvents(dAtA, i, uint64(n160)) + i-- + dAtA[i] = 0x32 + } + if m.AccessTime != nil { + n161, err161 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.AccessTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.AccessTime):]) + if err161 != nil { + return 0, err161 } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + i -= n161 + i = encodeVarintEvents(dAtA, i, uint64(n161)) + i-- + dAtA[i] = 0x2a } - i-- - dAtA[i] = 0x12 - { - size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + if m.Permissions != nil { + n162, err162 := github_com_gogo_protobuf_types.StdUInt32MarshalTo(*m.Permissions, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdUInt32(*m.Permissions):]) + if err162 != nil { + return 0, err162 } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + i -= n162 + i = encodeVarintEvents(dAtA, i, uint64(n162)) + i-- + dAtA[i] = 0x22 + } + if m.GID != nil { + n163, err163 := github_com_gogo_protobuf_types.StdUInt32MarshalTo(*m.GID, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdUInt32(*m.GID):]) + if err163 != nil { + return 0, err163 + } + i -= n163 + i = encodeVarintEvents(dAtA, i, uint64(n163)) + i-- + dAtA[i] = 0x1a + } + if m.UID != nil { + n164, err164 := github_com_gogo_protobuf_types.StdUInt32MarshalTo(*m.UID, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdUInt32(*m.UID):]) + if err164 != nil { + return 0, err164 + } + i -= n164 + i = encodeVarintEvents(dAtA, i, uint64(n164)) + i-- + dAtA[i] = 0x12 + } + if m.FileSize != nil { + n165, err165 := github_com_gogo_protobuf_types.StdUInt64MarshalTo(*m.FileSize, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdUInt64(*m.FileSize):]) + if err165 != nil { + return 0, err165 + } + i -= n165 + i = encodeVarintEvents(dAtA, i, uint64(n165)) + i-- + dAtA[i] = 0xa } - i-- - dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m 
*BotDelete) Marshal() (dAtA []byte, err error) { +func (m *SFTP) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -23479,12 +24292,12 @@ func (m *BotDelete) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *BotDelete) MarshalTo(dAtA []byte) (int, error) { +func (m *SFTP) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *BotDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *SFTP) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -23493,8 +24306,78 @@ func (m *BotDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.Error) > 0 { + i -= len(m.Error) + copy(dAtA[i:], m.Error) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Error))) + i-- + dAtA[i] = 0x62 + } + if m.Action != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.Action)) + i-- + dAtA[i] = 0x58 + } + if m.Attributes != nil { + { + size, err := m.Attributes.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x52 + } + if m.Flags != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.Flags)) + i-- + dAtA[i] = 0x48 + } + if len(m.TargetPath) > 0 { + i -= len(m.TargetPath) + copy(dAtA[i:], m.TargetPath) + i = encodeVarintEvents(dAtA, i, uint64(len(m.TargetPath))) + i-- + dAtA[i] = 0x42 + } + if len(m.Path) > 0 { + i -= len(m.Path) + copy(dAtA[i:], m.Path) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Path))) + i-- + dAtA[i] = 0x3a + } + if len(m.WorkingDirectory) > 0 { + i -= len(m.WorkingDirectory) + copy(dAtA[i:], m.WorkingDirectory) + i = encodeVarintEvents(dAtA, i, uint64(len(m.WorkingDirectory))) + i-- + dAtA[i] = 0x32 + } { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := 
m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a + { + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23504,7 +24387,7 @@ func (m *BotDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x1a { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23526,7 +24409,7 @@ func (m *BotDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *TrustedClusterCreate) Marshal() (dAtA []byte, err error) { +func (m *SFTPSummary) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -23536,12 +24419,12 @@ func (m *TrustedClusterCreate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *TrustedClusterCreate) MarshalTo(dAtA []byte) (int, error) { +func (m *SFTPSummary) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *TrustedClusterCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *SFTPSummary) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -23550,8 +24433,32 @@ func (m *TrustedClusterCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.FileTransferStats) > 0 { + for iNdEx := len(m.FileTransferStats) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.FileTransferStats[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = 
encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x32 + } + } { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a + { + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23561,7 +24468,7 @@ func (m *TrustedClusterCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x22 { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23571,7 +24478,7 @@ func (m *TrustedClusterCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x1a { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23593,7 +24500,7 @@ func (m *TrustedClusterCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *TrustedClusterDelete) Marshal() (dAtA []byte, err error) { +func (m *FileTransferStat) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -23603,12 +24510,56 @@ func (m *TrustedClusterDelete) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *TrustedClusterDelete) MarshalTo(dAtA []byte) (int, error) { +func (m *FileTransferStat) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *TrustedClusterDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *FileTransferStat) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.BytesWritten 
!= 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.BytesWritten)) + i-- + dAtA[i] = 0x18 + } + if m.BytesRead != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.BytesRead)) + i-- + dAtA[i] = 0x10 + } + if len(m.Path) > 0 { + i -= len(m.Path) + copy(dAtA[i:], m.Path) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Path))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + +func (m *Subsystem) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *Subsystem) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *Subsystem) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -23618,7 +24569,7 @@ func (m *TrustedClusterDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { copy(dAtA[i:], m.XXX_unrecognized) } { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23626,9 +24577,23 @@ func (m *TrustedClusterDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x22 + dAtA[i] = 0x32 + if len(m.Error) > 0 { + i -= len(m.Error) + copy(dAtA[i:], m.Error) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Error))) + i-- + dAtA[i] = 0x2a + } + if len(m.Name) > 0 { + i -= len(m.Name) + copy(dAtA[i:], m.Name) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Name))) + i-- + dAtA[i] = 0x22 + } { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23638,7 +24603,7 @@ func (m *TrustedClusterDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x1a { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := 
m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23660,7 +24625,7 @@ func (m *TrustedClusterDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *ProvisionTokenCreate) Marshal() (dAtA []byte, err error) { +func (m *ClientDisconnect) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -23670,12 +24635,12 @@ func (m *ProvisionTokenCreate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *ProvisionTokenCreate) MarshalTo(dAtA []byte) (int, error) { +func (m *ClientDisconnect) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *ProvisionTokenCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *ClientDisconnect) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -23684,24 +24649,25 @@ func (m *ProvisionTokenCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.JoinMethod) > 0 { - i -= len(m.JoinMethod) - copy(dAtA[i:], m.JoinMethod) - i = encodeVarintEvents(dAtA, i, uint64(len(m.JoinMethod))) + if len(m.Reason) > 0 { + i -= len(m.Reason) + copy(dAtA[i:], m.Reason) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Reason))) i-- dAtA[i] = 0x2a } - if len(m.Roles) > 0 { - for iNdEx := len(m.Roles) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.Roles[iNdEx]) - copy(dAtA[i:], m.Roles[iNdEx]) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Roles[iNdEx]))) - i-- - dAtA[i] = 0x22 + { + size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0x22 { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23711,7 
+24677,7 @@ func (m *ProvisionTokenCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x1a { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23733,7 +24699,7 @@ func (m *ProvisionTokenCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *TrustedClusterTokenCreate) Marshal() (dAtA []byte, err error) { +func (m *AuthAttempt) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -23743,12 +24709,12 @@ func (m *TrustedClusterTokenCreate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *TrustedClusterTokenCreate) MarshalTo(dAtA []byte) (int, error) { +func (m *AuthAttempt) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *TrustedClusterTokenCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *AuthAttempt) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -23758,7 +24724,27 @@ func (m *TrustedClusterTokenCreate) MarshalToSizedBuffer(dAtA []byte) (int, erro copy(dAtA[i:], m.XXX_unrecognized) } { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23768,7 +24754,7 @@ func (m *TrustedClusterTokenCreate) MarshalToSizedBuffer(dAtA []byte) (int, erro i-- dAtA[i] = 0x1a { - size, err := 
m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -23790,7 +24776,7 @@ func (m *TrustedClusterTokenCreate) MarshalToSizedBuffer(dAtA []byte) (int, erro return len(dAtA) - i, nil } -func (m *GithubConnectorCreate) Marshal() (dAtA []byte, err error) { +func (m *UserTokenCreate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -23800,12 +24786,12 @@ func (m *GithubConnectorCreate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *GithubConnectorCreate) MarshalTo(dAtA []byte) (int, error) { +func (m *UserTokenCreate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *GithubConnectorCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *UserTokenCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -23814,16 +24800,6 @@ func (m *GithubConnectorCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x22 { size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -23857,7 +24833,7 @@ func (m *GithubConnectorCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *GithubConnectorUpdate) Marshal() (dAtA []byte, err error) { +func (m *RoleCreate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -23867,12 +24843,12 @@ func (m *GithubConnectorUpdate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *GithubConnectorUpdate) MarshalTo(dAtA []byte) (int, error) { +func (m 
*RoleCreate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *GithubConnectorUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *RoleCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -23924,7 +24900,7 @@ func (m *GithubConnectorUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *GithubConnectorDelete) Marshal() (dAtA []byte, err error) { +func (m *RoleUpdate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -23934,12 +24910,12 @@ func (m *GithubConnectorDelete) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *GithubConnectorDelete) MarshalTo(dAtA []byte) (int, error) { +func (m *RoleUpdate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *GithubConnectorDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *RoleUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -23991,7 +24967,7 @@ func (m *GithubConnectorDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *OIDCConnectorCreate) Marshal() (dAtA []byte, err error) { +func (m *RoleDelete) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -24001,12 +24977,12 @@ func (m *OIDCConnectorCreate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *OIDCConnectorCreate) MarshalTo(dAtA []byte) (int, error) { +func (m *RoleDelete) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *OIDCConnectorCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *RoleDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -24015,6 
+24991,16 @@ func (m *OIDCConnectorCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 { size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -24048,7 +25034,7 @@ func (m *OIDCConnectorCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *OIDCConnectorUpdate) Marshal() (dAtA []byte, err error) { +func (m *BotCreate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -24058,12 +25044,12 @@ func (m *OIDCConnectorUpdate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *OIDCConnectorUpdate) MarshalTo(dAtA []byte) (int, error) { +func (m *BotCreate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *OIDCConnectorUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *BotCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -24105,7 +25091,7 @@ func (m *OIDCConnectorUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *OIDCConnectorDelete) Marshal() (dAtA []byte, err error) { +func (m *BotUpdate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -24115,12 +25101,12 @@ func (m *OIDCConnectorDelete) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *OIDCConnectorDelete) MarshalTo(dAtA []byte) (int, error) { +func (m *BotUpdate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *OIDCConnectorDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { 
+func (m *BotUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -24162,7 +25148,7 @@ func (m *OIDCConnectorDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *SAMLConnectorCreate) Marshal() (dAtA []byte, err error) { +func (m *BotDelete) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -24172,12 +25158,12 @@ func (m *SAMLConnectorCreate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *SAMLConnectorCreate) MarshalTo(dAtA []byte) (int, error) { +func (m *BotDelete) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *SAMLConnectorCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *BotDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -24186,18 +25172,6 @@ func (m *SAMLConnectorCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.Connector != nil { - { - size, err := m.Connector.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x22 - } { size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -24231,7 +25205,7 @@ func (m *SAMLConnectorCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *SAMLConnectorUpdate) Marshal() (dAtA []byte, err error) { +func (m *TrustedClusterCreate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -24241,12 +25215,12 @@ func (m *SAMLConnectorUpdate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *SAMLConnectorUpdate) MarshalTo(dAtA []byte) (int, error) { +func (m *TrustedClusterCreate) MarshalTo(dAtA []byte) 
(int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *SAMLConnectorUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *TrustedClusterCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -24255,18 +25229,16 @@ func (m *SAMLConnectorUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.Connector != nil { - { - size, err := m.Connector.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } - i-- - dAtA[i] = 0x22 + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0x22 { size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -24300,7 +25272,7 @@ func (m *SAMLConnectorUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *SAMLConnectorDelete) Marshal() (dAtA []byte, err error) { +func (m *TrustedClusterDelete) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -24310,12 +25282,12 @@ func (m *SAMLConnectorDelete) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *SAMLConnectorDelete) MarshalTo(dAtA []byte) (int, error) { +func (m *TrustedClusterDelete) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *SAMLConnectorDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *TrustedClusterDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -24324,6 +25296,16 @@ func (m *SAMLConnectorDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + { + size, err := 
m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 { size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -24357,7 +25339,7 @@ func (m *SAMLConnectorDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *KubeRequest) Marshal() (dAtA []byte, err error) { +func (m *ProvisionTokenCreate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -24367,12 +25349,12 @@ func (m *KubeRequest) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *KubeRequest) MarshalTo(dAtA []byte) (int, error) { +func (m *ProvisionTokenCreate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *KubeRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *ProvisionTokenCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -24381,85 +25363,24 @@ func (m *KubeRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - { - size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x6a - { - size, err := m.KubernetesClusterMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x62 - if m.ResponseCode != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.ResponseCode)) - i-- - dAtA[i] = 0x58 - } - if len(m.ResourceName) > 0 { - i -= len(m.ResourceName) - copy(dAtA[i:], m.ResourceName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ResourceName))) - i-- - dAtA[i] = 0x52 - } - if len(m.ResourceKind) > 0 { - i -= len(m.ResourceKind) - 
copy(dAtA[i:], m.ResourceKind) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ResourceKind))) - i-- - dAtA[i] = 0x4a - } - if len(m.ResourceNamespace) > 0 { - i -= len(m.ResourceNamespace) - copy(dAtA[i:], m.ResourceNamespace) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ResourceNamespace))) - i-- - dAtA[i] = 0x42 - } - if len(m.ResourceAPIGroup) > 0 { - i -= len(m.ResourceAPIGroup) - copy(dAtA[i:], m.ResourceAPIGroup) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ResourceAPIGroup))) - i-- - dAtA[i] = 0x3a - } - if len(m.Verb) > 0 { - i -= len(m.Verb) - copy(dAtA[i:], m.Verb) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Verb))) - i-- - dAtA[i] = 0x32 - } - if len(m.RequestPath) > 0 { - i -= len(m.RequestPath) - copy(dAtA[i:], m.RequestPath) - i = encodeVarintEvents(dAtA, i, uint64(len(m.RequestPath))) + if len(m.JoinMethod) > 0 { + i -= len(m.JoinMethod) + copy(dAtA[i:], m.JoinMethod) + i = encodeVarintEvents(dAtA, i, uint64(len(m.JoinMethod))) i-- dAtA[i] = 0x2a } - { - size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + if len(m.Roles) > 0 { + for iNdEx := len(m.Roles) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.Roles[iNdEx]) + copy(dAtA[i:], m.Roles[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Roles[iNdEx]))) + i-- + dAtA[i] = 0x22 } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) } - i-- - dAtA[i] = 0x22 { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -24469,7 +25390,7 @@ func (m *KubeRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x1a { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -24491,7 +25412,7 @@ func (m *KubeRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *AppMetadata) Marshal() (dAtA 
[]byte, err error) { +func (m *TrustedClusterTokenCreate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -24501,12 +25422,12 @@ func (m *AppMetadata) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *AppMetadata) MarshalTo(dAtA []byte) (int, error) { +func (m *TrustedClusterTokenCreate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AppMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *TrustedClusterTokenCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -24515,55 +25436,40 @@ func (m *AppMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.AppTargetPort != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.AppTargetPort)) - i-- - dAtA[i] = 0x28 - } - if len(m.AppName) > 0 { - i -= len(m.AppName) - copy(dAtA[i:], m.AppName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.AppName))) - i-- - dAtA[i] = 0x22 - } - if len(m.AppLabels) > 0 { - for k := range m.AppLabels { - v := m.AppLabels[k] - baseI := i - i -= len(v) - copy(dAtA[i:], v) - i = encodeVarintEvents(dAtA, i, uint64(len(v))) - i-- - dAtA[i] = 0x12 - i -= len(k) - copy(dAtA[i:], k) - i = encodeVarintEvents(dAtA, i, uint64(len(k))) - i-- - dAtA[i] = 0xa - i = encodeVarintEvents(dAtA, i, uint64(baseI-i)) - i-- - dAtA[i] = 0x1a + { + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if len(m.AppPublicAddr) > 0 { - i -= len(m.AppPublicAddr) - copy(dAtA[i:], m.AppPublicAddr) - i = encodeVarintEvents(dAtA, i, uint64(len(m.AppPublicAddr))) - i-- - dAtA[i] = 0x12 + i-- + dAtA[i] = 0x1a + { + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = 
encodeVarintEvents(dAtA, i, uint64(size)) } - if len(m.AppURI) > 0 { - i -= len(m.AppURI) - copy(dAtA[i:], m.AppURI) - i = encodeVarintEvents(dAtA, i, uint64(len(m.AppURI))) - i-- - dAtA[i] = 0xa + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *AppCreate) Marshal() (dAtA []byte, err error) { +func (m *GithubConnectorCreate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -24573,12 +25479,12 @@ func (m *AppCreate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *AppCreate) MarshalTo(dAtA []byte) (int, error) { +func (m *GithubConnectorCreate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AppCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *GithubConnectorCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -24588,7 +25494,7 @@ func (m *AppCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { copy(dAtA[i:], m.XXX_unrecognized) } { - size, err := m.AppMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -24598,7 +25504,7 @@ func (m *AppCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x22 { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -24608,7 +25514,7 @@ func (m *AppCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x1a { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -24630,7 +25536,7 @@ func (m 
*AppCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *AppUpdate) Marshal() (dAtA []byte, err error) { +func (m *GithubConnectorUpdate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -24640,12 +25546,12 @@ func (m *AppUpdate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *AppUpdate) MarshalTo(dAtA []byte) (int, error) { +func (m *GithubConnectorUpdate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AppUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *GithubConnectorUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -24655,7 +25561,7 @@ func (m *AppUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { copy(dAtA[i:], m.XXX_unrecognized) } { - size, err := m.AppMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -24665,7 +25571,7 @@ func (m *AppUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x22 { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -24675,7 +25581,7 @@ func (m *AppUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x1a { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -24697,7 +25603,7 @@ func (m *AppUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *AppDelete) Marshal() (dAtA []byte, err error) { +func (m *GithubConnectorDelete) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -24707,12 +25613,12 @@ func (m 
*AppDelete) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *AppDelete) MarshalTo(dAtA []byte) (int, error) { +func (m *GithubConnectorDelete) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AppDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *GithubConnectorDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -24722,7 +25628,7 @@ func (m *AppDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { copy(dAtA[i:], m.XXX_unrecognized) } { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -24730,7 +25636,7 @@ func (m *AppDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x1a + dAtA[i] = 0x22 { size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -24740,6 +25646,16 @@ func (m *AppDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- + dAtA[i] = 0x1a + { + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- dAtA[i] = 0x12 { size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) @@ -24754,7 +25670,7 @@ func (m *AppDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *AppSessionStart) Marshal() (dAtA []byte, err error) { +func (m *OIDCConnectorCreate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -24764,12 +25680,12 @@ func (m *AppSessionStart) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *AppSessionStart) MarshalTo(dAtA []byte) (int, error) { +func (m *OIDCConnectorCreate) MarshalTo(dAtA []byte) (int, error) { 
size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AppSessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *OIDCConnectorCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -24779,7 +25695,7 @@ func (m *AppSessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { copy(dAtA[i:], m.XXX_unrecognized) } { - size, err := m.AppMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -24787,16 +25703,9 @@ func (m *AppSessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x42 - if len(m.PublicAddr) > 0 { - i -= len(m.PublicAddr) - copy(dAtA[i:], m.PublicAddr) - i = encodeVarintEvents(dAtA, i, uint64(len(m.PublicAddr))) - i-- - dAtA[i] = 0x3a - } + dAtA[i] = 0x1a { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -24804,9 +25713,9 @@ func (m *AppSessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x2a + dAtA[i] = 0x12 { - size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -24814,9 +25723,36 @@ func (m *AppSessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x22 + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + +func (m *OIDCConnectorUpdate) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *OIDCConnectorUpdate) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m 
*OIDCConnectorUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } { - size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -24826,7 +25762,7 @@ func (m *AppSessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x1a { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -24848,7 +25784,7 @@ func (m *AppSessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *AppSessionEnd) Marshal() (dAtA []byte, err error) { +func (m *OIDCConnectorDelete) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -24858,12 +25794,12 @@ func (m *AppSessionEnd) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *AppSessionEnd) MarshalTo(dAtA []byte) (int, error) { +func (m *OIDCConnectorDelete) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AppSessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *OIDCConnectorDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -24873,7 +25809,7 @@ func (m *AppSessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error) { copy(dAtA[i:], m.XXX_unrecognized) } { - size, err := m.AppMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -24881,9 +25817,9 @@ func (m *AppSessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x32 + dAtA[i] = 0x1a { - size, err := 
m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -24891,9 +25827,9 @@ func (m *AppSessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x2a + dAtA[i] = 0x12 { - size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -24901,9 +25837,48 @@ func (m *AppSessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x22 + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + +func (m *SAMLConnectorCreate) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *SAMLConnectorCreate) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *SAMLConnectorCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.Connector != nil { + { + size, err := m.Connector.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 + } { - size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -24913,7 +25888,7 @@ func (m *AppSessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x1a { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -24935,7 +25910,7 @@ func (m 
*AppSessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *AppSessionChunk) Marshal() (dAtA []byte, err error) { +func (m *SAMLConnectorUpdate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -24945,12 +25920,12 @@ func (m *AppSessionChunk) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *AppSessionChunk) MarshalTo(dAtA []byte) (int, error) { +func (m *SAMLConnectorUpdate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AppSessionChunk) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *SAMLConnectorUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -24959,45 +25934,20 @@ func (m *AppSessionChunk) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - { - size, err := m.AppMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + if m.Connector != nil { + { + size, err := m.Connector.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x3a - if len(m.SessionChunkID) > 0 { - i -= len(m.SessionChunkID) - copy(dAtA[i:], m.SessionChunkID) - i = encodeVarintEvents(dAtA, i, uint64(len(m.SessionChunkID))) i-- - dAtA[i] = 0x32 - } - { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x2a - { - size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + dAtA[i] = 0x22 } - i-- - dAtA[i] = 0x22 { - size, err := 
m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -25007,7 +25957,7 @@ func (m *AppSessionChunk) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x1a { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -25029,7 +25979,7 @@ func (m *AppSessionChunk) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *AppSessionRequest) Marshal() (dAtA []byte, err error) { +func (m *SAMLConnectorDelete) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -25039,12 +25989,12 @@ func (m *AppSessionRequest) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *AppSessionRequest) MarshalTo(dAtA []byte) (int, error) { +func (m *SAMLConnectorDelete) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AppSessionRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *SAMLConnectorDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -25054,7 +26004,7 @@ func (m *AppSessionRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { copy(dAtA[i:], m.XXX_unrecognized) } { - size, err := m.AWSRequestMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -25062,9 +26012,9 @@ func (m *AppSessionRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x3a + dAtA[i] = 0x1a { - size, err := m.AppMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -25072,33 +26022,7 @@ func (m *AppSessionRequest) MarshalToSizedBuffer(dAtA 
[]byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x32 - if len(m.Method) > 0 { - i -= len(m.Method) - copy(dAtA[i:], m.Method) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Method))) - i-- - dAtA[i] = 0x2a - } - if len(m.RawQuery) > 0 { - i -= len(m.RawQuery) - copy(dAtA[i:], m.RawQuery) - i = encodeVarintEvents(dAtA, i, uint64(len(m.RawQuery))) - i-- - dAtA[i] = 0x22 - } - if len(m.Path) > 0 { - i -= len(m.Path) - copy(dAtA[i:], m.Path) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Path))) - i-- - dAtA[i] = 0x1a - } - if m.StatusCode != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.StatusCode)) - i-- - dAtA[i] = 0x10 - } + dAtA[i] = 0x12 { size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -25112,7 +26036,7 @@ func (m *AppSessionRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *AWSRequestMetadata) Marshal() (dAtA []byte, err error) { +func (m *KubeRequest) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -25122,12 +26046,12 @@ func (m *AWSRequestMetadata) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *AWSRequestMetadata) MarshalTo(dAtA []byte) (int, error) { +func (m *KubeRequest) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AWSRequestMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *KubeRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -25136,38 +26060,117 @@ func (m *AWSRequestMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.AWSAssumedRole) > 0 { - i -= len(m.AWSAssumedRole) - copy(dAtA[i:], m.AWSAssumedRole) - i = encodeVarintEvents(dAtA, i, uint64(len(m.AWSAssumedRole))) - i-- - dAtA[i] = 0x22 + { + size, err := 
m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if len(m.AWSHost) > 0 { - i -= len(m.AWSHost) - copy(dAtA[i:], m.AWSHost) - i = encodeVarintEvents(dAtA, i, uint64(len(m.AWSHost))) - i-- - dAtA[i] = 0x1a + i-- + dAtA[i] = 0x6a + { + size, err := m.KubernetesClusterMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if len(m.AWSService) > 0 { - i -= len(m.AWSService) - copy(dAtA[i:], m.AWSService) - i = encodeVarintEvents(dAtA, i, uint64(len(m.AWSService))) + i-- + dAtA[i] = 0x62 + if m.ResponseCode != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.ResponseCode)) i-- - dAtA[i] = 0x12 + dAtA[i] = 0x58 } - if len(m.AWSRegion) > 0 { - i -= len(m.AWSRegion) - copy(dAtA[i:], m.AWSRegion) - i = encodeVarintEvents(dAtA, i, uint64(len(m.AWSRegion))) + if len(m.ResourceName) > 0 { + i -= len(m.ResourceName) + copy(dAtA[i:], m.ResourceName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ResourceName))) i-- - dAtA[i] = 0xa + dAtA[i] = 0x52 + } + if len(m.ResourceKind) > 0 { + i -= len(m.ResourceKind) + copy(dAtA[i:], m.ResourceKind) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ResourceKind))) + i-- + dAtA[i] = 0x4a + } + if len(m.ResourceNamespace) > 0 { + i -= len(m.ResourceNamespace) + copy(dAtA[i:], m.ResourceNamespace) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ResourceNamespace))) + i-- + dAtA[i] = 0x42 } + if len(m.ResourceAPIGroup) > 0 { + i -= len(m.ResourceAPIGroup) + copy(dAtA[i:], m.ResourceAPIGroup) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ResourceAPIGroup))) + i-- + dAtA[i] = 0x3a + } + if len(m.Verb) > 0 { + i -= len(m.Verb) + copy(dAtA[i:], m.Verb) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Verb))) + i-- + dAtA[i] = 0x32 + } + if len(m.RequestPath) > 0 { + i -= len(m.RequestPath) + copy(dAtA[i:], m.RequestPath) + i = encodeVarintEvents(dAtA, i, 
uint64(len(m.RequestPath))) + i-- + dAtA[i] = 0x2a + } + { + size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + { + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *DatabaseMetadata) Marshal() (dAtA []byte, err error) { +func (m *AppMetadata) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -25177,12 +26180,12 @@ func (m *DatabaseMetadata) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DatabaseMetadata) MarshalTo(dAtA []byte) (int, error) { +func (m *AppMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DatabaseMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *AppMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -25191,60 +26194,21 @@ func (m *DatabaseMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.DatabaseOrigin) > 0 { - i -= len(m.DatabaseOrigin) - copy(dAtA[i:], m.DatabaseOrigin) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseOrigin))) - i-- - dAtA[i] = 0x6a - } - if len(m.DatabaseType) > 0 { - i -= len(m.DatabaseType) - copy(dAtA[i:], m.DatabaseType) - i = 
encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseType))) - i-- - dAtA[i] = 0x62 - } - if len(m.DatabaseRoles) > 0 { - for iNdEx := len(m.DatabaseRoles) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.DatabaseRoles[iNdEx]) - copy(dAtA[i:], m.DatabaseRoles[iNdEx]) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseRoles[iNdEx]))) - i-- - dAtA[i] = 0x5a - } - } - if len(m.DatabaseGCPInstanceID) > 0 { - i -= len(m.DatabaseGCPInstanceID) - copy(dAtA[i:], m.DatabaseGCPInstanceID) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseGCPInstanceID))) - i-- - dAtA[i] = 0x52 - } - if len(m.DatabaseGCPProjectID) > 0 { - i -= len(m.DatabaseGCPProjectID) - copy(dAtA[i:], m.DatabaseGCPProjectID) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseGCPProjectID))) - i-- - dAtA[i] = 0x4a - } - if len(m.DatabaseAWSRedshiftClusterID) > 0 { - i -= len(m.DatabaseAWSRedshiftClusterID) - copy(dAtA[i:], m.DatabaseAWSRedshiftClusterID) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseAWSRedshiftClusterID))) + if m.AppTargetPort != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.AppTargetPort)) i-- - dAtA[i] = 0x42 + dAtA[i] = 0x28 } - if len(m.DatabaseAWSRegion) > 0 { - i -= len(m.DatabaseAWSRegion) - copy(dAtA[i:], m.DatabaseAWSRegion) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseAWSRegion))) + if len(m.AppName) > 0 { + i -= len(m.AppName) + copy(dAtA[i:], m.AppName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.AppName))) i-- - dAtA[i] = 0x3a + dAtA[i] = 0x22 } - if len(m.DatabaseLabels) > 0 { - for k := range m.DatabaseLabels { - v := m.DatabaseLabels[k] + if len(m.AppLabels) > 0 { + for k := range m.AppLabels { + v := m.AppLabels[k] baseI := i i -= len(v) copy(dAtA[i:], v) @@ -25258,48 +26222,27 @@ func (m *DatabaseMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { dAtA[i] = 0xa i = encodeVarintEvents(dAtA, i, uint64(baseI-i)) i-- - dAtA[i] = 0x32 + dAtA[i] = 0x1a } } - if len(m.DatabaseUser) > 0 { - i -= len(m.DatabaseUser) - copy(dAtA[i:], m.DatabaseUser) - i 
= encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseUser))) - i-- - dAtA[i] = 0x2a - } - if len(m.DatabaseName) > 0 { - i -= len(m.DatabaseName) - copy(dAtA[i:], m.DatabaseName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseName))) - i-- - dAtA[i] = 0x22 - } - if len(m.DatabaseURI) > 0 { - i -= len(m.DatabaseURI) - copy(dAtA[i:], m.DatabaseURI) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseURI))) - i-- - dAtA[i] = 0x1a - } - if len(m.DatabaseProtocol) > 0 { - i -= len(m.DatabaseProtocol) - copy(dAtA[i:], m.DatabaseProtocol) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseProtocol))) + if len(m.AppPublicAddr) > 0 { + i -= len(m.AppPublicAddr) + copy(dAtA[i:], m.AppPublicAddr) + i = encodeVarintEvents(dAtA, i, uint64(len(m.AppPublicAddr))) i-- dAtA[i] = 0x12 } - if len(m.DatabaseService) > 0 { - i -= len(m.DatabaseService) - copy(dAtA[i:], m.DatabaseService) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseService))) + if len(m.AppURI) > 0 { + i -= len(m.AppURI) + copy(dAtA[i:], m.AppURI) + i = encodeVarintEvents(dAtA, i, uint64(len(m.AppURI))) i-- dAtA[i] = 0xa } return len(dAtA) - i, nil } -func (m *DatabaseCreate) Marshal() (dAtA []byte, err error) { +func (m *AppCreate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -25309,12 +26252,12 @@ func (m *DatabaseCreate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DatabaseCreate) MarshalTo(dAtA []byte) (int, error) { +func (m *AppCreate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DatabaseCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *AppCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -25324,7 +26267,7 @@ func (m *DatabaseCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { copy(dAtA[i:], m.XXX_unrecognized) } { - size, err := 
m.DatabaseMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.AppMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -25366,7 +26309,7 @@ func (m *DatabaseCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *DatabaseUpdate) Marshal() (dAtA []byte, err error) { +func (m *AppUpdate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -25376,12 +26319,12 @@ func (m *DatabaseUpdate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DatabaseUpdate) MarshalTo(dAtA []byte) (int, error) { +func (m *AppUpdate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DatabaseUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *AppUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -25391,7 +26334,7 @@ func (m *DatabaseUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { copy(dAtA[i:], m.XXX_unrecognized) } { - size, err := m.DatabaseMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.AppMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -25433,7 +26376,7 @@ func (m *DatabaseUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *DatabaseDelete) Marshal() (dAtA []byte, err error) { +func (m *AppDelete) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -25443,12 +26386,12 @@ func (m *DatabaseDelete) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DatabaseDelete) MarshalTo(dAtA []byte) (int, error) { +func (m *AppDelete) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DatabaseDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *AppDelete) MarshalToSizedBuffer(dAtA 
[]byte) (int, error) { i := len(dAtA) _ = i var l int @@ -25490,7 +26433,7 @@ func (m *DatabaseDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *DatabaseSessionStart) Marshal() (dAtA []byte, err error) { +func (m *AppSessionStart) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -25500,12 +26443,12 @@ func (m *DatabaseSessionStart) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DatabaseSessionStart) MarshalTo(dAtA []byte) (int, error) { +func (m *AppSessionStart) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DatabaseSessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *AppSessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -25515,7 +26458,7 @@ func (m *DatabaseSessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { copy(dAtA[i:], m.XXX_unrecognized) } { - size, err := m.ClientMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.AppMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -25523,32 +26466,14 @@ func (m *DatabaseSessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x4a - if m.PostgresPID != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.PostgresPID)) + dAtA[i] = 0x42 + if len(m.PublicAddr) > 0 { + i -= len(m.PublicAddr) + copy(dAtA[i:], m.PublicAddr) + i = encodeVarintEvents(dAtA, i, uint64(len(m.PublicAddr))) i-- - dAtA[i] = 0x40 - } - { - size, err := m.DatabaseMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x3a - { - size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) 
+ dAtA[i] = 0x3a } - i-- - dAtA[i] = 0x32 { size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -25602,7 +26527,7 @@ func (m *DatabaseSessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *DatabaseSessionQuery) Marshal() (dAtA []byte, err error) { +func (m *AppSessionEnd) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -25612,12 +26537,12 @@ func (m *DatabaseSessionQuery) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DatabaseSessionQuery) MarshalTo(dAtA []byte) (int, error) { +func (m *AppSessionEnd) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DatabaseSessionQuery) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *AppSessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -25627,7 +26552,7 @@ func (m *DatabaseSessionQuery) MarshalToSizedBuffer(dAtA []byte) (int, error) { copy(dAtA[i:], m.XXX_unrecognized) } { - size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.AppMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -25635,25 +26560,19 @@ func (m *DatabaseSessionQuery) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x3a - if len(m.DatabaseQueryParameters) > 0 { - for iNdEx := len(m.DatabaseQueryParameters) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.DatabaseQueryParameters[iNdEx]) - copy(dAtA[i:], m.DatabaseQueryParameters[iNdEx]) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseQueryParameters[iNdEx]))) - i-- - dAtA[i] = 0x32 + dAtA[i] = 0x32 + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if len(m.DatabaseQuery) > 0 { - i -= 
len(m.DatabaseQuery) - copy(dAtA[i:], m.DatabaseQuery) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseQuery))) - i-- - dAtA[i] = 0x2a - } + i-- + dAtA[i] = 0x2a { - size, err := m.DatabaseMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -25695,7 +26614,7 @@ func (m *DatabaseSessionQuery) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *DatabaseSessionCommandResult) Marshal() (dAtA []byte, err error) { +func (m *AppSessionChunk) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -25705,12 +26624,12 @@ func (m *DatabaseSessionCommandResult) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DatabaseSessionCommandResult) MarshalTo(dAtA []byte) (int, error) { +func (m *AppSessionChunk) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DatabaseSessionCommandResult) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *AppSessionChunk) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -25719,13 +26638,25 @@ func (m *DatabaseSessionCommandResult) MarshalToSizedBuffer(dAtA []byte) (int, e i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.AffectedRecords != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.AffectedRecords)) + { + size, err := m.AppMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x3a + if len(m.SessionChunkID) > 0 { + i -= len(m.SessionChunkID) + copy(dAtA[i:], m.SessionChunkID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.SessionChunkID))) i-- - dAtA[i] = 0x30 + dAtA[i] = 0x32 } { - size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) 
if err != nil { return 0, err } @@ -25735,7 +26666,7 @@ func (m *DatabaseSessionCommandResult) MarshalToSizedBuffer(dAtA []byte) (int, e i-- dAtA[i] = 0x2a { - size, err := m.DatabaseMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -25777,7 +26708,7 @@ func (m *DatabaseSessionCommandResult) MarshalToSizedBuffer(dAtA []byte) (int, e return len(dAtA) - i, nil } -func (m *DatabasePermissionUpdate) Marshal() (dAtA []byte, err error) { +func (m *AppSessionRequest) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -25787,12 +26718,12 @@ func (m *DatabasePermissionUpdate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DatabasePermissionUpdate) MarshalTo(dAtA []byte) (int, error) { +func (m *AppSessionRequest) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DatabasePermissionUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *AppSessionRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -25801,39 +26732,8 @@ func (m *DatabasePermissionUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.AffectedObjectCounts) > 0 { - for k := range m.AffectedObjectCounts { - v := m.AffectedObjectCounts[k] - baseI := i - i = encodeVarintEvents(dAtA, i, uint64(v)) - i-- - dAtA[i] = 0x10 - i -= len(k) - copy(dAtA[i:], k) - i = encodeVarintEvents(dAtA, i, uint64(len(k))) - i-- - dAtA[i] = 0xa - i = encodeVarintEvents(dAtA, i, uint64(baseI-i)) - i-- - dAtA[i] = 0x32 - } - } - if len(m.PermissionSummary) > 0 { - for iNdEx := len(m.PermissionSummary) - 1; iNdEx >= 0; iNdEx-- { - { - size, err := m.PermissionSummary[iNdEx].MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = 
encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x2a - } - } { - size, err := m.DatabaseMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.AWSRequestMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -25841,9 +26741,9 @@ func (m *DatabasePermissionUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x22 + dAtA[i] = 0x3a { - size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.AppMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -25851,17 +26751,33 @@ func (m *DatabasePermissionUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x1a - { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + dAtA[i] = 0x32 + if len(m.Method) > 0 { + i -= len(m.Method) + copy(dAtA[i:], m.Method) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Method))) + i-- + dAtA[i] = 0x2a + } + if len(m.RawQuery) > 0 { + i -= len(m.RawQuery) + copy(dAtA[i:], m.RawQuery) + i = encodeVarintEvents(dAtA, i, uint64(len(m.RawQuery))) + i-- + dAtA[i] = 0x22 + } + if len(m.Path) > 0 { + i -= len(m.Path) + copy(dAtA[i:], m.Path) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Path))) + i-- + dAtA[i] = 0x1a + } + if m.StatusCode != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.StatusCode)) + i-- + dAtA[i] = 0x10 } - i-- - dAtA[i] = 0x12 { size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -25875,7 +26791,7 @@ func (m *DatabasePermissionUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error return len(dAtA) - i, nil } -func (m *DatabasePermissionEntry) Marshal() (dAtA []byte, err error) { +func (m *AWSRequestMetadata) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -25885,12 
+26801,12 @@ func (m *DatabasePermissionEntry) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DatabasePermissionEntry) MarshalTo(dAtA []byte) (int, error) { +func (m *AWSRequestMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DatabasePermissionEntry) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *AWSRequestMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -25899,13 +26815,121 @@ func (m *DatabasePermissionEntry) MarshalToSizedBuffer(dAtA []byte) (int, error) i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.Counts) > 0 { - for k := range m.Counts { - v := m.Counts[k] + if len(m.AWSAssumedRole) > 0 { + i -= len(m.AWSAssumedRole) + copy(dAtA[i:], m.AWSAssumedRole) + i = encodeVarintEvents(dAtA, i, uint64(len(m.AWSAssumedRole))) + i-- + dAtA[i] = 0x22 + } + if len(m.AWSHost) > 0 { + i -= len(m.AWSHost) + copy(dAtA[i:], m.AWSHost) + i = encodeVarintEvents(dAtA, i, uint64(len(m.AWSHost))) + i-- + dAtA[i] = 0x1a + } + if len(m.AWSService) > 0 { + i -= len(m.AWSService) + copy(dAtA[i:], m.AWSService) + i = encodeVarintEvents(dAtA, i, uint64(len(m.AWSService))) + i-- + dAtA[i] = 0x12 + } + if len(m.AWSRegion) > 0 { + i -= len(m.AWSRegion) + copy(dAtA[i:], m.AWSRegion) + i = encodeVarintEvents(dAtA, i, uint64(len(m.AWSRegion))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + +func (m *DatabaseMetadata) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *DatabaseMetadata) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *DatabaseMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= 
len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.DatabaseOrigin) > 0 { + i -= len(m.DatabaseOrigin) + copy(dAtA[i:], m.DatabaseOrigin) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseOrigin))) + i-- + dAtA[i] = 0x6a + } + if len(m.DatabaseType) > 0 { + i -= len(m.DatabaseType) + copy(dAtA[i:], m.DatabaseType) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseType))) + i-- + dAtA[i] = 0x62 + } + if len(m.DatabaseRoles) > 0 { + for iNdEx := len(m.DatabaseRoles) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.DatabaseRoles[iNdEx]) + copy(dAtA[i:], m.DatabaseRoles[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseRoles[iNdEx]))) + i-- + dAtA[i] = 0x5a + } + } + if len(m.DatabaseGCPInstanceID) > 0 { + i -= len(m.DatabaseGCPInstanceID) + copy(dAtA[i:], m.DatabaseGCPInstanceID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseGCPInstanceID))) + i-- + dAtA[i] = 0x52 + } + if len(m.DatabaseGCPProjectID) > 0 { + i -= len(m.DatabaseGCPProjectID) + copy(dAtA[i:], m.DatabaseGCPProjectID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseGCPProjectID))) + i-- + dAtA[i] = 0x4a + } + if len(m.DatabaseAWSRedshiftClusterID) > 0 { + i -= len(m.DatabaseAWSRedshiftClusterID) + copy(dAtA[i:], m.DatabaseAWSRedshiftClusterID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseAWSRedshiftClusterID))) + i-- + dAtA[i] = 0x42 + } + if len(m.DatabaseAWSRegion) > 0 { + i -= len(m.DatabaseAWSRegion) + copy(dAtA[i:], m.DatabaseAWSRegion) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseAWSRegion))) + i-- + dAtA[i] = 0x3a + } + if len(m.DatabaseLabels) > 0 { + for k := range m.DatabaseLabels { + v := m.DatabaseLabels[k] baseI := i - i = encodeVarintEvents(dAtA, i, uint64(v)) + i -= len(v) + copy(dAtA[i:], v) + i = encodeVarintEvents(dAtA, i, uint64(len(v))) i-- - dAtA[i] = 0x10 + dAtA[i] = 0x12 i -= len(k) copy(dAtA[i:], k) i = encodeVarintEvents(dAtA, i, uint64(len(k))) @@ -25913,20 +26937,48 @@ func (m *DatabasePermissionEntry) 
MarshalToSizedBuffer(dAtA []byte) (int, error) dAtA[i] = 0xa i = encodeVarintEvents(dAtA, i, uint64(baseI-i)) i-- - dAtA[i] = 0x12 + dAtA[i] = 0x32 } } - if len(m.Permission) > 0 { - i -= len(m.Permission) - copy(dAtA[i:], m.Permission) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Permission))) + if len(m.DatabaseUser) > 0 { + i -= len(m.DatabaseUser) + copy(dAtA[i:], m.DatabaseUser) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseUser))) + i-- + dAtA[i] = 0x2a + } + if len(m.DatabaseName) > 0 { + i -= len(m.DatabaseName) + copy(dAtA[i:], m.DatabaseName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseName))) + i-- + dAtA[i] = 0x22 + } + if len(m.DatabaseURI) > 0 { + i -= len(m.DatabaseURI) + copy(dAtA[i:], m.DatabaseURI) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseURI))) + i-- + dAtA[i] = 0x1a + } + if len(m.DatabaseProtocol) > 0 { + i -= len(m.DatabaseProtocol) + copy(dAtA[i:], m.DatabaseProtocol) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseProtocol))) + i-- + dAtA[i] = 0x12 + } + if len(m.DatabaseService) > 0 { + i -= len(m.DatabaseService) + copy(dAtA[i:], m.DatabaseService) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseService))) i-- dAtA[i] = 0xa } return len(dAtA) - i, nil } -func (m *DatabaseUserCreate) Marshal() (dAtA []byte, err error) { +func (m *DatabaseCreate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -25936,12 +26988,12 @@ func (m *DatabaseUserCreate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DatabaseUserCreate) MarshalTo(dAtA []byte) (int, error) { +func (m *DatabaseCreate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DatabaseUserCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *DatabaseCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -25950,32 +27002,6 @@ func (m 
*DatabaseUserCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.Roles) > 0 { - for iNdEx := len(m.Roles) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.Roles[iNdEx]) - copy(dAtA[i:], m.Roles[iNdEx]) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Roles[iNdEx]))) - i-- - dAtA[i] = 0x3a - } - } - if len(m.Username) > 0 { - i -= len(m.Username) - copy(dAtA[i:], m.Username) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Username))) - i-- - dAtA[i] = 0x32 - } - { - size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x2a { size, err := m.DatabaseMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -25987,7 +27013,7 @@ func (m *DatabaseUserCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x22 { - size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -26019,7 +27045,7 @@ func (m *DatabaseUserCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *DatabaseUserDeactivate) Marshal() (dAtA []byte, err error) { +func (m *DatabaseUpdate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -26029,12 +27055,12 @@ func (m *DatabaseUserDeactivate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DatabaseUserDeactivate) MarshalTo(dAtA []byte) (int, error) { +func (m *DatabaseUpdate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DatabaseUserDeactivate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *DatabaseUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -26043,33 +27069,6 @@ func (m *DatabaseUserDeactivate) 
MarshalToSizedBuffer(dAtA []byte) (int, error) i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.Delete { - i-- - if m.Delete { - dAtA[i] = 1 - } else { - dAtA[i] = 0 - } - i-- - dAtA[i] = 0x38 - } - if len(m.Username) > 0 { - i -= len(m.Username) - copy(dAtA[i:], m.Username) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Username))) - i-- - dAtA[i] = 0x32 - } - { - size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x2a { size, err := m.DatabaseMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -26081,7 +27080,7 @@ func (m *DatabaseUserDeactivate) MarshalToSizedBuffer(dAtA []byte) (int, error) i-- dAtA[i] = 0x22 { - size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -26113,7 +27112,7 @@ func (m *DatabaseUserDeactivate) MarshalToSizedBuffer(dAtA []byte) (int, error) return len(dAtA) - i, nil } -func (m *PostgresParse) Marshal() (dAtA []byte, err error) { +func (m *DatabaseDelete) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -26123,12 +27122,12 @@ func (m *PostgresParse) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *PostgresParse) MarshalTo(dAtA []byte) (int, error) { +func (m *DatabaseDelete) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *PostgresParse) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *DatabaseDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -26137,32 +27136,8 @@ func (m *PostgresParse) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.Query) > 0 { - i -= len(m.Query) - copy(dAtA[i:], m.Query) - i 
= encodeVarintEvents(dAtA, i, uint64(len(m.Query))) - i-- - dAtA[i] = 0x32 - } - if len(m.StatementName) > 0 { - i -= len(m.StatementName) - copy(dAtA[i:], m.StatementName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.StatementName))) - i-- - dAtA[i] = 0x2a - } - { - size, err := m.DatabaseMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x22 { - size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -26194,7 +27169,7 @@ func (m *PostgresParse) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *PostgresBind) Marshal() (dAtA []byte, err error) { +func (m *DatabaseSessionStart) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -26204,12 +27179,12 @@ func (m *PostgresBind) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *PostgresBind) MarshalTo(dAtA []byte) (int, error) { +func (m *DatabaseSessionStart) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *PostgresBind) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *DatabaseSessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -26218,28 +27193,20 @@ func (m *PostgresBind) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.Parameters) > 0 { - for iNdEx := len(m.Parameters) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.Parameters[iNdEx]) - copy(dAtA[i:], m.Parameters[iNdEx]) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Parameters[iNdEx]))) - i-- - dAtA[i] = 0x3a + { + size, err := m.ClientMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } + i -= size + i = 
encodeVarintEvents(dAtA, i, uint64(size)) } - if len(m.PortalName) > 0 { - i -= len(m.PortalName) - copy(dAtA[i:], m.PortalName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.PortalName))) - i-- - dAtA[i] = 0x32 - } - if len(m.StatementName) > 0 { - i -= len(m.StatementName) - copy(dAtA[i:], m.StatementName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.StatementName))) + i-- + dAtA[i] = 0x4a + if m.PostgresPID != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.PostgresPID)) i-- - dAtA[i] = 0x2a + dAtA[i] = 0x40 } { size, err := m.DatabaseMetadata.MarshalToSizedBuffer(dAtA[:i]) @@ -26250,6 +27217,36 @@ func (m *PostgresBind) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- + dAtA[i] = 0x3a + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x32 + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a + { + size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- dAtA[i] = 0x22 { size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) @@ -26284,7 +27281,7 @@ func (m *PostgresBind) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *PostgresExecute) Marshal() (dAtA []byte, err error) { +func (m *DatabaseSessionQuery) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -26294,12 +27291,12 @@ func (m *PostgresExecute) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *PostgresExecute) MarshalTo(dAtA []byte) (int, error) { +func (m *DatabaseSessionQuery) MarshalTo(dAtA []byte) (int, error) { size := m.Size() 
return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *PostgresExecute) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *DatabaseSessionQuery) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -26308,10 +27305,29 @@ func (m *PostgresExecute) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.PortalName) > 0 { - i -= len(m.PortalName) - copy(dAtA[i:], m.PortalName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.PortalName))) + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x3a + if len(m.DatabaseQueryParameters) > 0 { + for iNdEx := len(m.DatabaseQueryParameters) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.DatabaseQueryParameters[iNdEx]) + copy(dAtA[i:], m.DatabaseQueryParameters[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseQueryParameters[iNdEx]))) + i-- + dAtA[i] = 0x32 + } + } + if len(m.DatabaseQuery) > 0 { + i -= len(m.DatabaseQuery) + copy(dAtA[i:], m.DatabaseQuery) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DatabaseQuery))) i-- dAtA[i] = 0x2a } @@ -26358,7 +27374,7 @@ func (m *PostgresExecute) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *PostgresClose) Marshal() (dAtA []byte, err error) { +func (m *DatabaseSessionCommandResult) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -26368,12 +27384,12 @@ func (m *PostgresClose) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *PostgresClose) MarshalTo(dAtA []byte) (int, error) { +func (m *DatabaseSessionCommandResult) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *PostgresClose) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m 
*DatabaseSessionCommandResult) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -26382,20 +27398,21 @@ func (m *PostgresClose) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.PortalName) > 0 { - i -= len(m.PortalName) - copy(dAtA[i:], m.PortalName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.PortalName))) + if m.AffectedRecords != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.AffectedRecords)) i-- - dAtA[i] = 0x32 + dAtA[i] = 0x30 } - if len(m.StatementName) > 0 { - i -= len(m.StatementName) - copy(dAtA[i:], m.StatementName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.StatementName))) - i-- - dAtA[i] = 0x2a + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0x2a { size, err := m.DatabaseMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -26439,7 +27456,7 @@ func (m *PostgresClose) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *PostgresFunctionCall) Marshal() (dAtA []byte, err error) { +func (m *DatabasePermissionUpdate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -26449,12 +27466,12 @@ func (m *PostgresFunctionCall) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *PostgresFunctionCall) MarshalTo(dAtA []byte) (int, error) { +func (m *DatabasePermissionUpdate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *PostgresFunctionCall) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *DatabasePermissionUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -26463,19 +27480,36 @@ func (m *PostgresFunctionCall) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= 
len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.FunctionArgs) > 0 { - for iNdEx := len(m.FunctionArgs) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.FunctionArgs[iNdEx]) - copy(dAtA[i:], m.FunctionArgs[iNdEx]) - i = encodeVarintEvents(dAtA, i, uint64(len(m.FunctionArgs[iNdEx]))) + if len(m.AffectedObjectCounts) > 0 { + for k := range m.AffectedObjectCounts { + v := m.AffectedObjectCounts[k] + baseI := i + i = encodeVarintEvents(dAtA, i, uint64(v)) + i-- + dAtA[i] = 0x10 + i -= len(k) + copy(dAtA[i:], k) + i = encodeVarintEvents(dAtA, i, uint64(len(k))) + i-- + dAtA[i] = 0xa + i = encodeVarintEvents(dAtA, i, uint64(baseI-i)) i-- dAtA[i] = 0x32 } } - if m.FunctionOID != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.FunctionOID)) - i-- - dAtA[i] = 0x28 + if len(m.PermissionSummary) > 0 { + for iNdEx := len(m.PermissionSummary) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.PermissionSummary[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a + } } { size, err := m.DatabaseMetadata.MarshalToSizedBuffer(dAtA[:i]) @@ -26520,7 +27554,7 @@ func (m *PostgresFunctionCall) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *WindowsDesktopSessionStart) Marshal() (dAtA []byte, err error) { +func (m *DatabasePermissionEntry) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -26530,12 +27564,12 @@ func (m *WindowsDesktopSessionStart) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *WindowsDesktopSessionStart) MarshalTo(dAtA []byte) (int, error) { +func (m *DatabasePermissionEntry) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *WindowsDesktopSessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *DatabasePermissionEntry) 
MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -26544,42 +27578,13 @@ func (m *WindowsDesktopSessionStart) MarshalToSizedBuffer(dAtA []byte) (int, err i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.NLA { - i-- - if m.NLA { - dAtA[i] = 1 - } else { - dAtA[i] = 0 - } - i-- - dAtA[i] = 0x68 - } - if m.AllowUserCreation { - i-- - if m.AllowUserCreation { - dAtA[i] = 1 - } else { - dAtA[i] = 0 - } - i-- - dAtA[i] = 0x60 - } - if len(m.DesktopName) > 0 { - i -= len(m.DesktopName) - copy(dAtA[i:], m.DesktopName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DesktopName))) - i-- - dAtA[i] = 0x5a - } - if len(m.DesktopLabels) > 0 { - for k := range m.DesktopLabels { - v := m.DesktopLabels[k] + if len(m.Counts) > 0 { + for k := range m.Counts { + v := m.Counts[k] baseI := i - i -= len(v) - copy(dAtA[i:], v) - i = encodeVarintEvents(dAtA, i, uint64(len(v))) + i = encodeVarintEvents(dAtA, i, uint64(v)) i-- - dAtA[i] = 0x12 + dAtA[i] = 0x10 i -= len(k) copy(dAtA[i:], k) i = encodeVarintEvents(dAtA, i, uint64(len(k))) @@ -26587,34 +27592,56 @@ func (m *WindowsDesktopSessionStart) MarshalToSizedBuffer(dAtA []byte) (int, err dAtA[i] = 0xa i = encodeVarintEvents(dAtA, i, uint64(baseI-i)) i-- - dAtA[i] = 0x52 + dAtA[i] = 0x12 } } - if len(m.WindowsUser) > 0 { - i -= len(m.WindowsUser) - copy(dAtA[i:], m.WindowsUser) - i = encodeVarintEvents(dAtA, i, uint64(len(m.WindowsUser))) + if len(m.Permission) > 0 { + i -= len(m.Permission) + copy(dAtA[i:], m.Permission) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Permission))) i-- - dAtA[i] = 0x4a + dAtA[i] = 0xa } - if len(m.Domain) > 0 { - i -= len(m.Domain) - copy(dAtA[i:], m.Domain) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Domain))) - i-- - dAtA[i] = 0x42 + return len(dAtA) - i, nil +} + +func (m *DatabaseUserCreate) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + 
return nil, err } - if len(m.DesktopAddr) > 0 { - i -= len(m.DesktopAddr) - copy(dAtA[i:], m.DesktopAddr) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DesktopAddr))) - i-- - dAtA[i] = 0x3a + return dAtA[:n], nil +} + +func (m *DatabaseUserCreate) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *DatabaseUserCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.WindowsDesktopService) > 0 { - i -= len(m.WindowsDesktopService) - copy(dAtA[i:], m.WindowsDesktopService) - i = encodeVarintEvents(dAtA, i, uint64(len(m.WindowsDesktopService))) + if len(m.Roles) > 0 { + for iNdEx := len(m.Roles) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.Roles[iNdEx]) + copy(dAtA[i:], m.Roles[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Roles[iNdEx]))) + i-- + dAtA[i] = 0x3a + } + } + if len(m.Username) > 0 { + i -= len(m.Username) + copy(dAtA[i:], m.Username) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Username))) i-- dAtA[i] = 0x32 } @@ -26629,7 +27656,7 @@ func (m *WindowsDesktopSessionStart) MarshalToSizedBuffer(dAtA []byte) (int, err i-- dAtA[i] = 0x2a { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.DatabaseMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -26671,7 +27698,7 @@ func (m *WindowsDesktopSessionStart) MarshalToSizedBuffer(dAtA []byte) (int, err return len(dAtA) - i, nil } -func (m *DatabaseSessionEnd) Marshal() (dAtA []byte, err error) { +func (m *DatabaseUserDeactivate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -26681,12 +27708,12 @@ func (m *DatabaseSessionEnd) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DatabaseSessionEnd) MarshalTo(dAtA []byte) (int, error) { 
+func (m *DatabaseUserDeactivate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DatabaseSessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *DatabaseUserDeactivate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -26695,20 +27722,31 @@ func (m *DatabaseSessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - n364, err364 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.EndTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.EndTime):]) - if err364 != nil { - return 0, err364 + if m.Delete { + i-- + if m.Delete { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x38 } - i -= n364 - i = encodeVarintEvents(dAtA, i, uint64(n364)) - i-- - dAtA[i] = 0x32 - n365, err365 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.StartTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.StartTime):]) - if err365 != nil { - return 0, err365 + if len(m.Username) > 0 { + i -= len(m.Username) + copy(dAtA[i:], m.Username) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Username))) + i-- + dAtA[i] = 0x32 + } + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - i -= n365 - i = encodeVarintEvents(dAtA, i, uint64(n365)) i-- dAtA[i] = 0x2a { @@ -26754,7 +27792,7 @@ func (m *DatabaseSessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *MFADeviceMetadata) Marshal() (dAtA []byte, err error) { +func (m *PostgresParse) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -26764,12 +27802,12 @@ func (m *MFADeviceMetadata) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *MFADeviceMetadata) MarshalTo(dAtA []byte) (int, error) { +func (m 
*PostgresParse) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *MFADeviceMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *PostgresParse) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -26778,56 +27816,22 @@ func (m *MFADeviceMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.DeviceType) > 0 { - i -= len(m.DeviceType) - copy(dAtA[i:], m.DeviceType) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DeviceType))) - i-- - dAtA[i] = 0x1a - } - if len(m.DeviceID) > 0 { - i -= len(m.DeviceID) - copy(dAtA[i:], m.DeviceID) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DeviceID))) + if len(m.Query) > 0 { + i -= len(m.Query) + copy(dAtA[i:], m.Query) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Query))) i-- - dAtA[i] = 0x12 + dAtA[i] = 0x32 } - if len(m.DeviceName) > 0 { - i -= len(m.DeviceName) - copy(dAtA[i:], m.DeviceName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DeviceName))) + if len(m.StatementName) > 0 { + i -= len(m.StatementName) + copy(dAtA[i:], m.StatementName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.StatementName))) i-- - dAtA[i] = 0xa - } - return len(dAtA) - i, nil -} - -func (m *MFADeviceAdd) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err - } - return dAtA[:n], nil -} - -func (m *MFADeviceAdd) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *MFADeviceAdd) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) + dAtA[i] = 0x2a } { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := 
m.DatabaseMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -26837,7 +27841,7 @@ func (m *MFADeviceAdd) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x22 { - size, err := m.MFADeviceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -26869,7 +27873,7 @@ func (m *MFADeviceAdd) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *MFADeviceDelete) Marshal() (dAtA []byte, err error) { +func (m *PostgresBind) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -26879,12 +27883,12 @@ func (m *MFADeviceDelete) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *MFADeviceDelete) MarshalTo(dAtA []byte) (int, error) { +func (m *PostgresBind) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *MFADeviceDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *PostgresBind) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -26893,28 +27897,31 @@ func (m *MFADeviceDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + if len(m.Parameters) > 0 { + for iNdEx := len(m.Parameters) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.Parameters[iNdEx]) + copy(dAtA[i:], m.Parameters[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Parameters[iNdEx]))) + i-- + dAtA[i] = 0x3a } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) } - i-- - dAtA[i] = 0x22 - { - size, err := m.MFADeviceMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.PortalName) > 0 { + i -= 
len(m.PortalName) + copy(dAtA[i:], m.PortalName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.PortalName))) + i-- + dAtA[i] = 0x32 + } + if len(m.StatementName) > 0 { + i -= len(m.StatementName) + copy(dAtA[i:], m.StatementName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.StatementName))) + i-- + dAtA[i] = 0x2a } - i-- - dAtA[i] = 0x1a { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.DatabaseMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -26922,9 +27929,9 @@ func (m *MFADeviceDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x12 + dAtA[i] = 0x22 { - size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -26932,34 +27939,7 @@ func (m *MFADeviceDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0xa - return len(dAtA) - i, nil -} - -func (m *BillingInformationUpdate) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err - } - return dAtA[:n], nil -} - -func (m *BillingInformationUpdate) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *BillingInformationUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } + dAtA[i] = 0x1a { size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -26983,7 +27963,7 @@ func (m *BillingInformationUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error return len(dAtA) - i, nil } -func (m *BillingCardCreate) Marshal() (dAtA []byte, err error) { +func (m *PostgresExecute) Marshal() (dAtA 
[]byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -26993,12 +27973,12 @@ func (m *BillingCardCreate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *BillingCardCreate) MarshalTo(dAtA []byte) (int, error) { +func (m *PostgresExecute) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *BillingCardCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *PostgresExecute) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -27007,8 +27987,15 @@ func (m *BillingCardCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.PortalName) > 0 { + i -= len(m.PortalName) + copy(dAtA[i:], m.PortalName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.PortalName))) + i-- + dAtA[i] = 0x2a + } { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.DatabaseMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -27016,9 +28003,9 @@ func (m *BillingCardCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x12 + dAtA[i] = 0x22 { - size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -27026,34 +28013,7 @@ func (m *BillingCardCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0xa - return len(dAtA) - i, nil -} - -func (m *BillingCardDelete) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err - } - return dAtA[:n], nil -} - -func (m *BillingCardDelete) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return 
m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *BillingCardDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } + dAtA[i] = 0x1a { size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -27077,7 +28037,7 @@ func (m *BillingCardDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *LockCreate) Marshal() (dAtA []byte, err error) { +func (m *PostgresClose) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -27087,12 +28047,12 @@ func (m *LockCreate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *LockCreate) MarshalTo(dAtA []byte) (int, error) { +func (m *PostgresClose) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *LockCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *PostgresClose) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -27101,18 +28061,22 @@ func (m *LockCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - { - size, err := m.Lock.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.PortalName) > 0 { + i -= len(m.PortalName) + copy(dAtA[i:], m.PortalName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.PortalName))) + i-- + dAtA[i] = 0x32 + } + if len(m.StatementName) > 0 { + i -= len(m.StatementName) + copy(dAtA[i:], m.StatementName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.StatementName))) + i-- + dAtA[i] = 0x2a } - i-- - dAtA[i] = 0x2a { - size, err := m.Target.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.DatabaseMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != 
nil { return 0, err } @@ -27122,7 +28086,7 @@ func (m *LockCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x22 { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -27132,7 +28096,7 @@ func (m *LockCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x1a { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -27154,7 +28118,7 @@ func (m *LockCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *LockDelete) Marshal() (dAtA []byte, err error) { +func (m *PostgresFunctionCall) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -27164,12 +28128,12 @@ func (m *LockDelete) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *LockDelete) MarshalTo(dAtA []byte) (int, error) { +func (m *PostgresFunctionCall) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *LockDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *PostgresFunctionCall) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -27178,8 +28142,22 @@ func (m *LockDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.FunctionArgs) > 0 { + for iNdEx := len(m.FunctionArgs) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.FunctionArgs[iNdEx]) + copy(dAtA[i:], m.FunctionArgs[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.FunctionArgs[iNdEx]))) + i-- + dAtA[i] = 0x32 + } + } + if m.FunctionOID != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.FunctionOID)) + i-- + dAtA[i] = 0x28 + } { - size, err := m.Lock.MarshalToSizedBuffer(dAtA[:i]) + size, err := 
m.DatabaseMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -27189,7 +28167,7 @@ func (m *LockDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x22 { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -27199,7 +28177,7 @@ func (m *LockDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x1a { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -27221,7 +28199,7 @@ func (m *LockDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *RecoveryCodeGenerate) Marshal() (dAtA []byte, err error) { +func (m *WindowsCertificateMetadata) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -27231,12 +28209,12 @@ func (m *RecoveryCodeGenerate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *RecoveryCodeGenerate) MarshalTo(dAtA []byte) (int, error) { +func (m *WindowsCertificateMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *RecoveryCodeGenerate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *WindowsCertificateMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -27245,87 +28223,73 @@ func (m *RecoveryCodeGenerate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + if len(m.EnhancedKeyUsage) > 0 { + for iNdEx := len(m.EnhancedKeyUsage) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.EnhancedKeyUsage[iNdEx]) + copy(dAtA[i:], m.EnhancedKeyUsage[iNdEx]) + i = encodeVarintEvents(dAtA, i, 
uint64(len(m.EnhancedKeyUsage[iNdEx]))) + i-- + dAtA[i] = 0x3a } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) } - i-- - dAtA[i] = 0x12 - { - size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + if len(m.ExtendedKeyUsage) > 0 { + dAtA362 := make([]byte, len(m.ExtendedKeyUsage)*10) + var j361 int + for _, num1 := range m.ExtendedKeyUsage { + num := uint64(num1) + for num >= 1<<7 { + dAtA362[j361] = uint8(uint64(num)&0x7f | 0x80) + num >>= 7 + j361++ + } + dAtA362[j361] = uint8(num) + j361++ } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0xa - return len(dAtA) - i, nil -} - -func (m *RecoveryCodeUsed) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err + i -= j361 + copy(dAtA[i:], dAtA362[:j361]) + i = encodeVarintEvents(dAtA, i, uint64(j361)) + i-- + dAtA[i] = 0x32 } - return dAtA[:n], nil -} - -func (m *RecoveryCodeUsed) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *RecoveryCodeUsed) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) + if m.KeyUsage != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.KeyUsage)) + i-- + dAtA[i] = 0x28 } - { - size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + if len(m.CRLDistributionPoints) > 0 { + for iNdEx := len(m.CRLDistributionPoints) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.CRLDistributionPoints[iNdEx]) + copy(dAtA[i:], m.CRLDistributionPoints[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.CRLDistributionPoints[iNdEx]))) + i-- + dAtA[i] = 0x22 } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) } - i-- - dAtA[i] = 0x1a - { - size, err := 
m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.UPN) > 0 { + i -= len(m.UPN) + copy(dAtA[i:], m.UPN) + i = encodeVarintEvents(dAtA, i, uint64(len(m.UPN))) + i-- + dAtA[i] = 0x1a } - i-- - dAtA[i] = 0x12 - { - size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.SerialNumber) > 0 { + i -= len(m.SerialNumber) + copy(dAtA[i:], m.SerialNumber) + i = encodeVarintEvents(dAtA, i, uint64(len(m.SerialNumber))) + i-- + dAtA[i] = 0x12 + } + if len(m.Subject) > 0 { + i -= len(m.Subject) + copy(dAtA[i:], m.Subject) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Subject))) + i-- + dAtA[i] = 0xa } - i-- - dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *WindowsDesktopSessionEnd) Marshal() (dAtA []byte, err error) { +func (m *WindowsDesktopSessionStart) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -27335,12 +28299,12 @@ func (m *WindowsDesktopSessionEnd) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *WindowsDesktopSessionEnd) MarshalTo(dAtA []byte) (int, error) { +func (m *WindowsDesktopSessionStart) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *WindowsDesktopSessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *WindowsDesktopSessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -27349,48 +28313,45 @@ func (m *WindowsDesktopSessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.Participants) > 0 { - for iNdEx := len(m.Participants) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.Participants[iNdEx]) - copy(dAtA[i:], m.Participants[iNdEx]) - i = 
encodeVarintEvents(dAtA, i, uint64(len(m.Participants[iNdEx]))) - i-- - dAtA[i] = 0x6a + if m.CertMetadata != nil { + { + size, err := m.CertMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0x72 } - if m.Recorded { + if m.NLA { i-- - if m.Recorded { + if m.NLA { dAtA[i] = 1 } else { dAtA[i] = 0 } i-- - dAtA[i] = 0x60 + dAtA[i] = 0x68 } - if len(m.DesktopName) > 0 { + if m.AllowUserCreation { + i-- + if m.AllowUserCreation { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x60 + } + if len(m.DesktopName) > 0 { i -= len(m.DesktopName) copy(dAtA[i:], m.DesktopName) i = encodeVarintEvents(dAtA, i, uint64(len(m.DesktopName))) i-- dAtA[i] = 0x5a } - n398, err398 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.EndTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.EndTime):]) - if err398 != nil { - return 0, err398 - } - i -= n398 - i = encodeVarintEvents(dAtA, i, uint64(n398)) - i-- - dAtA[i] = 0x52 - n399, err399 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.StartTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.StartTime):]) - if err399 != nil { - return 0, err399 - } - i -= n399 - i = encodeVarintEvents(dAtA, i, uint64(n399)) - i-- - dAtA[i] = 0x4a if len(m.DesktopLabels) > 0 { for k := range m.DesktopLabels { v := m.DesktopLabels[k] @@ -27407,7 +28368,7 @@ func (m *WindowsDesktopSessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error dAtA[i] = 0xa i = encodeVarintEvents(dAtA, i, uint64(baseI-i)) i-- - dAtA[i] = 0x42 + dAtA[i] = 0x52 } } if len(m.WindowsUser) > 0 { @@ -27415,29 +28376,49 @@ func (m *WindowsDesktopSessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error copy(dAtA[i:], m.WindowsUser) i = encodeVarintEvents(dAtA, i, uint64(len(m.WindowsUser))) i-- - dAtA[i] = 0x3a + dAtA[i] = 0x4a } if len(m.Domain) > 0 { i -= len(m.Domain) copy(dAtA[i:], m.Domain) i = encodeVarintEvents(dAtA, i, uint64(len(m.Domain))) 
i-- - dAtA[i] = 0x32 + dAtA[i] = 0x42 } if len(m.DesktopAddr) > 0 { i -= len(m.DesktopAddr) copy(dAtA[i:], m.DesktopAddr) i = encodeVarintEvents(dAtA, i, uint64(len(m.DesktopAddr))) i-- - dAtA[i] = 0x2a + dAtA[i] = 0x3a } if len(m.WindowsDesktopService) > 0 { i -= len(m.WindowsDesktopService) copy(dAtA[i:], m.WindowsDesktopService) i = encodeVarintEvents(dAtA, i, uint64(len(m.WindowsDesktopService))) i-- - dAtA[i] = 0x22 + dAtA[i] = 0x32 + } + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0x22 { size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -27471,7 +28452,7 @@ func (m *WindowsDesktopSessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error return len(dAtA) - i, nil } -func (m *CertificateCreate) Marshal() (dAtA []byte, err error) { +func (m *DatabaseSessionEnd) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -27481,12 +28462,12 @@ func (m *CertificateCreate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *CertificateCreate) MarshalTo(dAtA []byte) (int, error) { +func (m *DatabaseSessionEnd) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *CertificateCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *DatabaseSessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -27495,8 +28476,43 @@ func (m *CertificateCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.Participants) > 0 { + for iNdEx := len(m.Participants) - 1; 
iNdEx >= 0; iNdEx-- { + i -= len(m.Participants[iNdEx]) + copy(dAtA[i:], m.Participants[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Participants[iNdEx]))) + i-- + dAtA[i] = 0x42 + } + } { - size, err := m.ClientMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x3a + n370, err370 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.EndTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.EndTime):]) + if err370 != nil { + return 0, err370 + } + i -= n370 + i = encodeVarintEvents(dAtA, i, uint64(n370)) + i-- + dAtA[i] = 0x32 + n371, err371 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.StartTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.StartTime):]) + if err371 != nil { + return 0, err371 + } + i -= n371 + i = encodeVarintEvents(dAtA, i, uint64(n371)) + i-- + dAtA[i] = 0x2a + { + size, err := m.DatabaseMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -27505,25 +28521,26 @@ func (m *CertificateCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { } i-- dAtA[i] = 0x22 - if m.Identity != nil { - { - size, err := m.Identity.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + { + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } - i-- - dAtA[i] = 0x1a + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if len(m.CertificateType) > 0 { - i -= len(m.CertificateType) - copy(dAtA[i:], m.CertificateType) - i = encodeVarintEvents(dAtA, i, uint64(len(m.CertificateType))) - i-- - dAtA[i] = 0x12 + i-- + dAtA[i] = 0x1a + { + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0x12 { 
size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -27537,7 +28554,7 @@ func (m *CertificateCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *RenewableCertificateGenerationMismatch) Marshal() (dAtA []byte, err error) { +func (m *MFADeviceMetadata) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -27547,12 +28564,12 @@ func (m *RenewableCertificateGenerationMismatch) Marshal() (dAtA []byte, err err return dAtA[:n], nil } -func (m *RenewableCertificateGenerationMismatch) MarshalTo(dAtA []byte) (int, error) { +func (m *MFADeviceMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *RenewableCertificateGenerationMismatch) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *MFADeviceMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -27561,30 +28578,31 @@ func (m *RenewableCertificateGenerationMismatch) MarshalToSizedBuffer(dAtA []byt i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.DeviceType) > 0 { + i -= len(m.DeviceType) + copy(dAtA[i:], m.DeviceType) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DeviceType))) + i-- + dAtA[i] = 0x1a } - i-- - dAtA[i] = 0x12 - { - size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + if len(m.DeviceID) > 0 { + i -= len(m.DeviceID) + copy(dAtA[i:], m.DeviceID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DeviceID))) + i-- + dAtA[i] = 0x12 + } + if len(m.DeviceName) > 0 { + i -= len(m.DeviceName) + copy(dAtA[i:], m.DeviceName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DeviceName))) + 
i-- + dAtA[i] = 0xa } - i-- - dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *BotJoin) Marshal() (dAtA []byte, err error) { +func (m *MFADeviceAdd) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -27594,12 +28612,12 @@ func (m *BotJoin) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *BotJoin) MarshalTo(dAtA []byte) (int, error) { +func (m *MFADeviceAdd) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *BotJoin) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *MFADeviceAdd) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -27608,13 +28626,6 @@ func (m *BotJoin) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.BotInstanceID) > 0 { - i -= len(m.BotInstanceID) - copy(dAtA[i:], m.BotInstanceID) - i = encodeVarintEvents(dAtA, i, uint64(len(m.BotInstanceID))) - i-- - dAtA[i] = 0x4a - } { size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -27624,49 +28635,19 @@ func (m *BotJoin) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x42 - if len(m.UserName) > 0 { - i -= len(m.UserName) - copy(dAtA[i:], m.UserName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.UserName))) - i-- - dAtA[i] = 0x3a - } - if m.Attributes != nil { - { - size, err := m.Attributes.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + dAtA[i] = 0x22 + { + size, err := m.MFADeviceMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } - i-- - dAtA[i] = 0x32 - } - if len(m.TokenName) > 0 { - i -= len(m.TokenName) - copy(dAtA[i:], m.TokenName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.TokenName))) - i-- - dAtA[i] = 0x2a - } - if 
len(m.Method) > 0 { - i -= len(m.Method) - copy(dAtA[i:], m.Method) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Method))) - i-- - dAtA[i] = 0x22 - } - if len(m.BotName) > 0 { - i -= len(m.BotName) - copy(dAtA[i:], m.BotName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.BotName))) - i-- - dAtA[i] = 0x1a + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0x1a { - size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -27688,7 +28669,7 @@ func (m *BotJoin) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *InstanceJoin) Marshal() (dAtA []byte, err error) { +func (m *MFADeviceDelete) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -27698,12 +28679,12 @@ func (m *InstanceJoin) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *InstanceJoin) MarshalTo(dAtA []byte) (int, error) { +func (m *MFADeviceDelete) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *InstanceJoin) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *MFADeviceDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -27721,64 +28702,19 @@ func (m *InstanceJoin) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x52 - n413, err413 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.TokenExpires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.TokenExpires):]) - if err413 != nil { - return 0, err413 - } - i -= n413 - i = encodeVarintEvents(dAtA, i, uint64(n413)) - i-- - dAtA[i] = 0x4a - if m.Attributes != nil { - { - size, err := m.Attributes.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + dAtA[i] 
= 0x22 + { + size, err := m.MFADeviceMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } - i-- - dAtA[i] = 0x42 - } - if len(m.TokenName) > 0 { - i -= len(m.TokenName) - copy(dAtA[i:], m.TokenName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.TokenName))) - i-- - dAtA[i] = 0x3a - } - if len(m.Method) > 0 { - i -= len(m.Method) - copy(dAtA[i:], m.Method) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Method))) - i-- - dAtA[i] = 0x32 - } - if len(m.Role) > 0 { - i -= len(m.Role) - copy(dAtA[i:], m.Role) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Role))) - i-- - dAtA[i] = 0x2a - } - if len(m.NodeName) > 0 { - i -= len(m.NodeName) - copy(dAtA[i:], m.NodeName) - i = encodeVarintEvents(dAtA, i, uint64(len(m.NodeName))) - i-- - dAtA[i] = 0x22 - } - if len(m.HostID) > 0 { - i -= len(m.HostID) - copy(dAtA[i:], m.HostID) - i = encodeVarintEvents(dAtA, i, uint64(len(m.HostID))) - i-- - dAtA[i] = 0x1a + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0x1a { - size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -27800,7 +28736,7 @@ func (m *InstanceJoin) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *Unknown) Marshal() (dAtA []byte, err error) { +func (m *BillingInformationUpdate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -27810,12 +28746,12 @@ func (m *Unknown) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *Unknown) MarshalTo(dAtA []byte) (int, error) { +func (m *BillingInformationUpdate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *Unknown) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *BillingInformationUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ 
-27824,27 +28760,16 @@ func (m *Unknown) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.Data) > 0 { - i -= len(m.Data) - copy(dAtA[i:], m.Data) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Data))) - i-- - dAtA[i] = 0x22 - } - if len(m.UnknownCode) > 0 { - i -= len(m.UnknownCode) - copy(dAtA[i:], m.UnknownCode) - i = encodeVarintEvents(dAtA, i, uint64(len(m.UnknownCode))) - i-- - dAtA[i] = 0x1a - } - if len(m.UnknownType) > 0 { - i -= len(m.UnknownType) - copy(dAtA[i:], m.UnknownType) - i = encodeVarintEvents(dAtA, i, uint64(len(m.UnknownType))) - i-- - dAtA[i] = 0x12 + { + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0x12 { size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -27858,82 +28783,7 @@ func (m *Unknown) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *DeviceMetadata) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err - } - return dAtA[:n], nil -} - -func (m *DeviceMetadata) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *DeviceMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if len(m.WebAuthenticationId) > 0 { - i -= len(m.WebAuthenticationId) - copy(dAtA[i:], m.WebAuthenticationId) - i = encodeVarintEvents(dAtA, i, uint64(len(m.WebAuthenticationId))) - i-- - dAtA[i] = 0x42 - } - if m.WebAuthentication { - i-- - if m.WebAuthentication { - dAtA[i] = 1 - } else { - dAtA[i] = 0 - } - i-- - dAtA[i] = 0x30 - } - if m.DeviceOrigin != 0 { - i = 
encodeVarintEvents(dAtA, i, uint64(m.DeviceOrigin)) - i-- - dAtA[i] = 0x28 - } - if len(m.CredentialId) > 0 { - i -= len(m.CredentialId) - copy(dAtA[i:], m.CredentialId) - i = encodeVarintEvents(dAtA, i, uint64(len(m.CredentialId))) - i-- - dAtA[i] = 0x22 - } - if len(m.AssetTag) > 0 { - i -= len(m.AssetTag) - copy(dAtA[i:], m.AssetTag) - i = encodeVarintEvents(dAtA, i, uint64(len(m.AssetTag))) - i-- - dAtA[i] = 0x1a - } - if m.OsType != 0 { - i = encodeVarintEvents(dAtA, i, uint64(m.OsType)) - i-- - dAtA[i] = 0x10 - } - if len(m.DeviceId) > 0 { - i -= len(m.DeviceId) - copy(dAtA[i:], m.DeviceId) - i = encodeVarintEvents(dAtA, i, uint64(len(m.DeviceId))) - i-- - dAtA[i] = 0xa - } - return len(dAtA) - i, nil -} - -func (m *DeviceEvent) Marshal() (dAtA []byte, err error) { +func (m *BillingCardCreate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -27943,12 +28793,12 @@ func (m *DeviceEvent) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DeviceEvent) MarshalTo(dAtA []byte) (int, error) { +func (m *BillingCardCreate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DeviceEvent) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *BillingCardCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -27957,42 +28807,16 @@ func (m *DeviceEvent) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.User != nil { - { - size, err := m.User.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x22 - } - if m.Device != nil { - { - size, err := m.Device.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x1a - } - 
if m.Status != nil { - { - size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + { + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } - i-- - dAtA[i] = 0x12 + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0x12 { size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -28006,7 +28830,7 @@ func (m *DeviceEvent) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *DeviceEvent2) Marshal() (dAtA []byte, err error) { +func (m *BillingCardDelete) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -28016,12 +28840,12 @@ func (m *DeviceEvent2) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DeviceEvent2) MarshalTo(dAtA []byte) (int, error) { +func (m *BillingCardDelete) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DeviceEvent2) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *BillingCardDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -28039,29 +28863,7 @@ func (m *DeviceEvent2) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x32 - { - size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x2a - if m.Device != nil { - { - size, err := m.Device.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x1a - } + dAtA[i] = 0x12 { size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -28075,7 +28877,7 @@ func (m *DeviceEvent2) 
MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *DiscoveryConfigCreate) Marshal() (dAtA []byte, err error) { +func (m *LockCreate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -28085,12 +28887,12 @@ func (m *DiscoveryConfigCreate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DiscoveryConfigCreate) MarshalTo(dAtA []byte) (int, error) { +func (m *LockCreate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DiscoveryConfigCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *LockCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -28100,7 +28902,17 @@ func (m *DiscoveryConfigCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { copy(dAtA[i:], m.XXX_unrecognized) } { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.Lock.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a + { + size, err := m.Target.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -28110,7 +28922,7 @@ func (m *DiscoveryConfigCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x22 { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -28120,7 +28932,7 @@ func (m *DiscoveryConfigCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x1a { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -28142,7 +28954,7 @@ func (m *DiscoveryConfigCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m 
*DiscoveryConfigUpdate) Marshal() (dAtA []byte, err error) { +func (m *LockDelete) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -28152,12 +28964,12 @@ func (m *DiscoveryConfigUpdate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DiscoveryConfigUpdate) MarshalTo(dAtA []byte) (int, error) { +func (m *LockDelete) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DiscoveryConfigUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *LockDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -28167,7 +28979,7 @@ func (m *DiscoveryConfigUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { copy(dAtA[i:], m.XXX_unrecognized) } { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.Lock.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -28177,7 +28989,7 @@ func (m *DiscoveryConfigUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x22 { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -28187,7 +28999,7 @@ func (m *DiscoveryConfigUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x1a { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -28209,7 +29021,7 @@ func (m *DiscoveryConfigUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *DiscoveryConfigDelete) Marshal() (dAtA []byte, err error) { +func (m *RecoveryCodeGenerate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -28219,12 +29031,12 @@ func (m *DiscoveryConfigDelete) Marshal() 
(dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DiscoveryConfigDelete) MarshalTo(dAtA []byte) (int, error) { +func (m *RecoveryCodeGenerate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DiscoveryConfigDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *RecoveryCodeGenerate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -28233,26 +29045,6 @@ func (m *DiscoveryConfigDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x22 - { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x1a { size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -28276,7 +29068,7 @@ func (m *DiscoveryConfigDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *DiscoveryConfigDeleteAll) Marshal() (dAtA []byte, err error) { +func (m *RecoveryCodeUsed) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -28286,12 +29078,12 @@ func (m *DiscoveryConfigDeleteAll) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *DiscoveryConfigDeleteAll) MarshalTo(dAtA []byte) (int, error) { +func (m *RecoveryCodeUsed) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *DiscoveryConfigDeleteAll) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *RecoveryCodeUsed) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -28301,7 +29093,7 @@ func (m 
*DiscoveryConfigDeleteAll) MarshalToSizedBuffer(dAtA []byte) (int, error copy(dAtA[i:], m.XXX_unrecognized) } { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -28333,7 +29125,7 @@ func (m *DiscoveryConfigDeleteAll) MarshalToSizedBuffer(dAtA []byte) (int, error return len(dAtA) - i, nil } -func (m *IntegrationCreate) Marshal() (dAtA []byte, err error) { +func (m *WindowsDesktopSessionEnd) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -28343,12 +29135,12 @@ func (m *IntegrationCreate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *IntegrationCreate) MarshalTo(dAtA []byte) (int, error) { +func (m *WindowsDesktopSessionEnd) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *IntegrationCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *WindowsDesktopSessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -28366,19 +29158,98 @@ func (m *IntegrationCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x2a - { - size, err := m.IntegrationMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + dAtA[i] = 0x72 + if len(m.Participants) > 0 { + for iNdEx := len(m.Participants) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.Participants[iNdEx]) + copy(dAtA[i:], m.Participants[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Participants[iNdEx]))) + i-- + dAtA[i] = 0x6a } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) } + if m.Recorded { + i-- + if m.Recorded { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x60 + } + if len(m.DesktopName) > 0 { + i -= len(m.DesktopName) + copy(dAtA[i:], m.DesktopName) + i = encodeVarintEvents(dAtA, i, 
uint64(len(m.DesktopName))) + i-- + dAtA[i] = 0x5a + } + n405, err405 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.EndTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.EndTime):]) + if err405 != nil { + return 0, err405 + } + i -= n405 + i = encodeVarintEvents(dAtA, i, uint64(n405)) i-- - dAtA[i] = 0x22 + dAtA[i] = 0x52 + n406, err406 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.StartTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.StartTime):]) + if err406 != nil { + return 0, err406 + } + i -= n406 + i = encodeVarintEvents(dAtA, i, uint64(n406)) + i-- + dAtA[i] = 0x4a + if len(m.DesktopLabels) > 0 { + for k := range m.DesktopLabels { + v := m.DesktopLabels[k] + baseI := i + i -= len(v) + copy(dAtA[i:], v) + i = encodeVarintEvents(dAtA, i, uint64(len(v))) + i-- + dAtA[i] = 0x12 + i -= len(k) + copy(dAtA[i:], k) + i = encodeVarintEvents(dAtA, i, uint64(len(k))) + i-- + dAtA[i] = 0xa + i = encodeVarintEvents(dAtA, i, uint64(baseI-i)) + i-- + dAtA[i] = 0x42 + } + } + if len(m.WindowsUser) > 0 { + i -= len(m.WindowsUser) + copy(dAtA[i:], m.WindowsUser) + i = encodeVarintEvents(dAtA, i, uint64(len(m.WindowsUser))) + i-- + dAtA[i] = 0x3a + } + if len(m.Domain) > 0 { + i -= len(m.Domain) + copy(dAtA[i:], m.Domain) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Domain))) + i-- + dAtA[i] = 0x32 + } + if len(m.DesktopAddr) > 0 { + i -= len(m.DesktopAddr) + copy(dAtA[i:], m.DesktopAddr) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DesktopAddr))) + i-- + dAtA[i] = 0x2a + } + if len(m.WindowsDesktopService) > 0 { + i -= len(m.WindowsDesktopService) + copy(dAtA[i:], m.WindowsDesktopService) + i = encodeVarintEvents(dAtA, i, uint64(len(m.WindowsDesktopService))) + i-- + dAtA[i] = 0x22 + } { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -28410,7 +29281,7 @@ func (m *IntegrationCreate) MarshalToSizedBuffer(dAtA []byte) (int, 
error) { return len(dAtA) - i, nil } -func (m *IntegrationUpdate) Marshal() (dAtA []byte, err error) { +func (m *CertificateCreate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -28420,12 +29291,12 @@ func (m *IntegrationUpdate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *IntegrationUpdate) MarshalTo(dAtA []byte) (int, error) { +func (m *CertificateCreate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *IntegrationUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *CertificateCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -28434,18 +29305,20 @@ func (m *IntegrationUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - { - size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + if m.CertificateAuthority != nil { + { + size, err := m.CertificateAuthority.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + i-- + dAtA[i] = 0x2a } - i-- - dAtA[i] = 0x2a { - size, err := m.IntegrationMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ClientMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -28454,8 +29327,27 @@ func (m *IntegrationUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { } i-- dAtA[i] = 0x22 + if m.Identity != nil { + { + size, err := m.Identity.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + } + if len(m.CertificateType) > 0 { + i -= len(m.CertificateType) + copy(dAtA[i:], m.CertificateType) + i = encodeVarintEvents(dAtA, i, 
uint64(len(m.CertificateType))) + i-- + dAtA[i] = 0x12 + } { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -28463,7 +29355,82 @@ func (m *IntegrationUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x1a + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + +func (m *CertificateAuthority) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *CertificateAuthority) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *CertificateAuthority) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.SubjectKeyID) > 0 { + i -= len(m.SubjectKeyID) + copy(dAtA[i:], m.SubjectKeyID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.SubjectKeyID))) + i-- + dAtA[i] = 0x1a + } + if len(m.Domain) > 0 { + i -= len(m.Domain) + copy(dAtA[i:], m.Domain) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Domain))) + i-- + dAtA[i] = 0x12 + } + if len(m.Type) > 0 { + i -= len(m.Type) + copy(dAtA[i:], m.Type) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Type))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + +func (m *RenewableCertificateGenerationMismatch) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *RenewableCertificateGenerationMismatch) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m 
*RenewableCertificateGenerationMismatch) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } { size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -28487,7 +29454,7 @@ func (m *IntegrationUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *IntegrationDelete) Marshal() (dAtA []byte, err error) { +func (m *BotJoin) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -28497,12 +29464,12 @@ func (m *IntegrationDelete) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *IntegrationDelete) MarshalTo(dAtA []byte) (int, error) { +func (m *BotJoin) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *IntegrationDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *BotJoin) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -28511,6 +29478,13 @@ func (m *IntegrationDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.BotInstanceID) > 0 { + i -= len(m.BotInstanceID) + copy(dAtA[i:], m.BotInstanceID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.BotInstanceID))) + i-- + dAtA[i] = 0x4a + } { size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -28520,29 +29494,49 @@ func (m *IntegrationDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x2a - { - size, err := m.IntegrationMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + dAtA[i] = 0x42 + if len(m.UserName) > 0 { + i -= len(m.UserName) + copy(dAtA[i:], 
m.UserName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.UserName))) + i-- + dAtA[i] = 0x3a } - i-- - dAtA[i] = 0x22 - { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err + if m.Attributes != nil { + { + size, err := m.Attributes.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + i-- + dAtA[i] = 0x32 + } + if len(m.TokenName) > 0 { + i -= len(m.TokenName) + copy(dAtA[i:], m.TokenName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.TokenName))) + i-- + dAtA[i] = 0x2a + } + if len(m.Method) > 0 { + i -= len(m.Method) + copy(dAtA[i:], m.Method) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Method))) + i-- + dAtA[i] = 0x22 + } + if len(m.BotName) > 0 { + i -= len(m.BotName) + copy(dAtA[i:], m.BotName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.BotName))) + i-- + dAtA[i] = 0x1a } - i-- - dAtA[i] = 0x1a { - size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -28564,7 +29558,7 @@ func (m *IntegrationDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *IntegrationMetadata) Marshal() (dAtA []byte, err error) { +func (m *InstanceJoin) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -28574,12 +29568,12 @@ func (m *IntegrationMetadata) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *IntegrationMetadata) MarshalTo(dAtA []byte) (int, error) { +func (m *InstanceJoin) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *IntegrationMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *InstanceJoin) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ 
-28588,9 +29582,27 @@ func (m *IntegrationMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.AWSRA != nil { + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x52 + n421, err421 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.TokenExpires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.TokenExpires):]) + if err421 != nil { + return 0, err421 + } + i -= n421 + i = encodeVarintEvents(dAtA, i, uint64(n421)) + i-- + dAtA[i] = 0x4a + if m.Attributes != nil { { - size, err := m.AWSRA.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.Attributes.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -28598,55 +29610,67 @@ func (m *IntegrationMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- + dAtA[i] = 0x42 + } + if len(m.TokenName) > 0 { + i -= len(m.TokenName) + copy(dAtA[i:], m.TokenName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.TokenName))) + i-- + dAtA[i] = 0x3a + } + if len(m.Method) > 0 { + i -= len(m.Method) + copy(dAtA[i:], m.Method) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Method))) + i-- + dAtA[i] = 0x32 + } + if len(m.Role) > 0 { + i -= len(m.Role) + copy(dAtA[i:], m.Role) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Role))) + i-- dAtA[i] = 0x2a } - if m.GitHub != nil { - { - size, err := m.GitHub.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } + if len(m.NodeName) > 0 { + i -= len(m.NodeName) + copy(dAtA[i:], m.NodeName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.NodeName))) i-- dAtA[i] = 0x22 } - if m.AzureOIDC != nil { - { - size, err := m.AzureOIDC.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, 
i, uint64(size)) - } + if len(m.HostID) > 0 { + i -= len(m.HostID) + copy(dAtA[i:], m.HostID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.HostID))) i-- dAtA[i] = 0x1a } - if m.AWSOIDC != nil { - { - size, err := m.AWSOIDC.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } - i-- - dAtA[i] = 0x12 + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if len(m.SubKind) > 0 { - i -= len(m.SubKind) - copy(dAtA[i:], m.SubKind) - i = encodeVarintEvents(dAtA, i, uint64(len(m.SubKind))) - i-- - dAtA[i] = 0xa + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *AWSOIDCIntegrationMetadata) Marshal() (dAtA []byte, err error) { +func (m *Unknown) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -28656,12 +29680,12 @@ func (m *AWSOIDCIntegrationMetadata) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *AWSOIDCIntegrationMetadata) MarshalTo(dAtA []byte) (int, error) { +func (m *Unknown) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AWSOIDCIntegrationMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *Unknown) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -28670,24 +29694,41 @@ func (m *AWSOIDCIntegrationMetadata) MarshalToSizedBuffer(dAtA []byte) (int, err i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.IssuerS3URI) > 0 { - i -= len(m.IssuerS3URI) - copy(dAtA[i:], m.IssuerS3URI) - i = encodeVarintEvents(dAtA, i, uint64(len(m.IssuerS3URI))) + if len(m.Data) > 0 { + i 
-= len(m.Data) + copy(dAtA[i:], m.Data) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Data))) i-- - dAtA[i] = 0x12 + dAtA[i] = 0x22 } - if len(m.RoleARN) > 0 { - i -= len(m.RoleARN) - copy(dAtA[i:], m.RoleARN) - i = encodeVarintEvents(dAtA, i, uint64(len(m.RoleARN))) + if len(m.UnknownCode) > 0 { + i -= len(m.UnknownCode) + copy(dAtA[i:], m.UnknownCode) + i = encodeVarintEvents(dAtA, i, uint64(len(m.UnknownCode))) i-- - dAtA[i] = 0xa + dAtA[i] = 0x1a + } + if len(m.UnknownType) > 0 { + i -= len(m.UnknownType) + copy(dAtA[i:], m.UnknownType) + i = encodeVarintEvents(dAtA, i, uint64(len(m.UnknownType))) + i-- + dAtA[i] = 0x12 + } + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *AzureOIDCIntegrationMetadata) Marshal() (dAtA []byte, err error) { +func (m *DeviceMetadata) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -28697,12 +29738,12 @@ func (m *AzureOIDCIntegrationMetadata) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *AzureOIDCIntegrationMetadata) MarshalTo(dAtA []byte) (int, error) { +func (m *DeviceMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AzureOIDCIntegrationMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *DeviceMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -28711,24 +29752,58 @@ func (m *AzureOIDCIntegrationMetadata) MarshalToSizedBuffer(dAtA []byte) (int, e i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.ClientID) > 0 { - i -= len(m.ClientID) - copy(dAtA[i:], m.ClientID) - i = encodeVarintEvents(dAtA, i, uint64(len(m.ClientID))) + if len(m.WebAuthenticationId) > 0 { + i -= len(m.WebAuthenticationId) + 
copy(dAtA[i:], m.WebAuthenticationId) + i = encodeVarintEvents(dAtA, i, uint64(len(m.WebAuthenticationId))) i-- - dAtA[i] = 0x12 + dAtA[i] = 0x42 } - if len(m.TenantID) > 0 { - i -= len(m.TenantID) - copy(dAtA[i:], m.TenantID) - i = encodeVarintEvents(dAtA, i, uint64(len(m.TenantID))) + if m.WebAuthentication { + i-- + if m.WebAuthentication { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x30 + } + if m.DeviceOrigin != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.DeviceOrigin)) + i-- + dAtA[i] = 0x28 + } + if len(m.CredentialId) > 0 { + i -= len(m.CredentialId) + copy(dAtA[i:], m.CredentialId) + i = encodeVarintEvents(dAtA, i, uint64(len(m.CredentialId))) + i-- + dAtA[i] = 0x22 + } + if len(m.AssetTag) > 0 { + i -= len(m.AssetTag) + copy(dAtA[i:], m.AssetTag) + i = encodeVarintEvents(dAtA, i, uint64(len(m.AssetTag))) + i-- + dAtA[i] = 0x1a + } + if m.OsType != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.OsType)) + i-- + dAtA[i] = 0x10 + } + if len(m.DeviceId) > 0 { + i -= len(m.DeviceId) + copy(dAtA[i:], m.DeviceId) + i = encodeVarintEvents(dAtA, i, uint64(len(m.DeviceId))) i-- dAtA[i] = 0xa } return len(dAtA) - i, nil } -func (m *GitHubIntegrationMetadata) Marshal() (dAtA []byte, err error) { +func (m *DeviceEvent) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -28738,12 +29813,12 @@ func (m *GitHubIntegrationMetadata) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *GitHubIntegrationMetadata) MarshalTo(dAtA []byte) (int, error) { +func (m *DeviceEvent) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *GitHubIntegrationMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *DeviceEvent) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -28752,17 +29827,56 @@ func (m *GitHubIntegrationMetadata) MarshalToSizedBuffer(dAtA []byte) (int, erro 
i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.Organization) > 0 { - i -= len(m.Organization) - copy(dAtA[i:], m.Organization) - i = encodeVarintEvents(dAtA, i, uint64(len(m.Organization))) + if m.User != nil { + { + size, err := m.User.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } i-- - dAtA[i] = 0xa + dAtA[i] = 0x22 + } + if m.Device != nil { + { + size, err := m.Device.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a } + if m.Status != nil { + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + } + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *AWSRAIntegrationMetadata) Marshal() (dAtA []byte, err error) { +func (m *DeviceEvent2) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -28772,12 +29886,12 @@ func (m *AWSRAIntegrationMetadata) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *AWSRAIntegrationMetadata) MarshalTo(dAtA []byte) (int, error) { +func (m *DeviceEvent2) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *AWSRAIntegrationMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *DeviceEvent2) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -28786,17 +29900,52 @@ func (m *AWSRAIntegrationMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if 
len(m.TrustAnchorARN) > 0 { - i -= len(m.TrustAnchorARN) - copy(dAtA[i:], m.TrustAnchorARN) - i = encodeVarintEvents(dAtA, i, uint64(len(m.TrustAnchorARN))) + { + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x32 + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a + if m.Device != nil { + { + size, err := m.Device.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } i-- - dAtA[i] = 0xa + dAtA[i] = 0x1a + } + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *PluginCreate) Marshal() (dAtA []byte, err error) { +func (m *DiscoveryConfigCreate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -28806,12 +29955,12 @@ func (m *PluginCreate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *PluginCreate) MarshalTo(dAtA []byte) (int, error) { +func (m *DiscoveryConfigCreate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *PluginCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *DiscoveryConfigCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -28829,9 +29978,66 @@ func (m *PluginCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x2a + dAtA[i] = 0x22 { - size, err := m.PluginMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err 
!= nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + { + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + +func (m *DiscoveryConfigUpdate) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *DiscoveryConfigUpdate) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *DiscoveryConfigUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -28873,7 +30079,7 @@ func (m *PluginCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *PluginUpdate) Marshal() (dAtA []byte, err error) { +func (m *DiscoveryConfigDelete) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -28883,12 +30089,12 @@ func (m *PluginUpdate) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *PluginUpdate) MarshalTo(dAtA []byte) (int, error) { +func (m *DiscoveryConfigDelete) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *PluginUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *DiscoveryConfigDelete) 
MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -28906,9 +30112,9 @@ func (m *PluginUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x2a + dAtA[i] = 0x22 { - size, err := m.PluginMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -28916,9 +30122,56 @@ func (m *PluginUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x22 + dAtA[i] = 0x1a { - size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + +func (m *DiscoveryConfigDeleteAll) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *DiscoveryConfigDeleteAll) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *DiscoveryConfigDeleteAll) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -28950,7 +30203,7 @@ func (m *PluginUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *PluginDelete) Marshal() (dAtA []byte, err error) { +func (m 
*IntegrationCreate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -28960,12 +30213,12 @@ func (m *PluginDelete) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *PluginDelete) MarshalTo(dAtA []byte) (int, error) { +func (m *IntegrationCreate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *PluginDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *IntegrationCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -28985,7 +30238,7 @@ func (m *PluginDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x2a { - size, err := m.PluginMetadata.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.IntegrationMetadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -29027,7 +30280,7 @@ func (m *PluginDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *PluginMetadata) Marshal() (dAtA []byte, err error) { +func (m *IntegrationUpdate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -29037,12 +30290,12 @@ func (m *PluginMetadata) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *PluginMetadata) MarshalTo(dAtA []byte) (int, error) { +func (m *IntegrationUpdate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *PluginMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *IntegrationUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -29051,49 +30304,60 @@ func (m *PluginMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.PluginData != nil { - { - size, err := m.PluginData.MarshalToSizedBuffer(dAtA[:i]) - if 
err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } - i-- - dAtA[i] = 0x2a + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if m.ReusesCredentials { - i-- - if m.ReusesCredentials { - dAtA[i] = 1 - } else { - dAtA[i] = 0 + i-- + dAtA[i] = 0x2a + { + size, err := m.IntegrationMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } - i-- - dAtA[i] = 0x20 + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if m.HasCredentials { - i-- - if m.HasCredentials { - dAtA[i] = 1 - } else { - dAtA[i] = 0 + i-- + dAtA[i] = 0x22 + { + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } - i-- - dAtA[i] = 0x18 + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if len(m.PluginType) > 0 { - i -= len(m.PluginType) - copy(dAtA[i:], m.PluginType) - i = encodeVarintEvents(dAtA, i, uint64(len(m.PluginType))) - i-- - dAtA[i] = 0xa + i-- + dAtA[i] = 0x1a + { + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *OneOf) Marshal() (dAtA []byte, err error) { +func (m *IntegrationDelete) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -29103,12 +30367,12 @@ func (m *OneOf) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *OneOf) MarshalTo(dAtA []byte) (int, error) { +func (m *IntegrationDelete) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func 
(m *OneOf) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *IntegrationDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -29117,49 +30381,86 @@ func (m *OneOf) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - if m.Event != nil { - { - size := m.Event.Size() - i -= size - if _, err := m.Event.MarshalTo(dAtA[i:]); err != nil { - return 0, err - } + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a + { + size, err := m.IntegrationMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 + { + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + { + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *OneOf_UserLogin) MarshalTo(dAtA []byte) (int, error) { +func (m *IntegrationMetadata) Marshal() (dAtA []byte, err error) { size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *OneOf_UserLogin) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - if m.UserLogin != nil { - { - size, err := m.UserLogin.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0xa + dAtA = make([]byte, size) + n, err := 
m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } - return len(dAtA) - i, nil + return dAtA[:n], nil } -func (m *OneOf_UserCreate) MarshalTo(dAtA []byte) (int, error) { + +func (m *IntegrationMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *OneOf_UserCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *IntegrationMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) - if m.UserCreate != nil { + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.AWSRA != nil { { - size, err := m.UserCreate.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.AWSRA.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -29167,20 +30468,11 @@ func (m *OneOf_UserCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x12 + dAtA[i] = 0x2a } - return len(dAtA) - i, nil -} -func (m *OneOf_UserDelete) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *OneOf_UserDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - if m.UserDelete != nil { + if m.GitHub != nil { { - size, err := m.UserDelete.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.GitHub.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -29188,20 +30480,11 @@ func (m *OneOf_UserDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x1a + dAtA[i] = 0x22 } - return len(dAtA) - i, nil -} -func (m *OneOf_UserPasswordChange) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *OneOf_UserPasswordChange) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - if m.UserPasswordChange != nil { + if m.AzureOIDC != nil 
{ { - size, err := m.UserPasswordChange.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.AzureOIDC.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -29209,20 +30492,11 @@ func (m *OneOf_UserPasswordChange) MarshalToSizedBuffer(dAtA []byte) (int, error i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x22 + dAtA[i] = 0x1a } - return len(dAtA) - i, nil -} -func (m *OneOf_SessionStart) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *OneOf_SessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - if m.SessionStart != nil { + if m.AWSOIDC != nil { { - size, err := m.SessionStart.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.AWSOIDC.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -29230,188 +30504,426 @@ func (m *OneOf_SessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x2a + dAtA[i] = 0x12 + } + if len(m.SubKind) > 0 { + i -= len(m.SubKind) + copy(dAtA[i:], m.SubKind) + i = encodeVarintEvents(dAtA, i, uint64(len(m.SubKind))) + i-- + dAtA[i] = 0xa } return len(dAtA) - i, nil } -func (m *OneOf_SessionJoin) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} -func (m *OneOf_SessionJoin) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - if m.SessionJoin != nil { - { - size, err := m.SessionJoin.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x32 +func (m *AWSOIDCIntegrationMetadata) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } - return len(dAtA) - i, nil + return dAtA[:n], nil } -func (m *OneOf_SessionPrint) MarshalTo(dAtA []byte) (int, error) { + +func (m 
*AWSOIDCIntegrationMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *OneOf_SessionPrint) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *AWSOIDCIntegrationMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) - if m.SessionPrint != nil { - { - size, err := m.SessionPrint.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.IssuerS3URI) > 0 { + i -= len(m.IssuerS3URI) + copy(dAtA[i:], m.IssuerS3URI) + i = encodeVarintEvents(dAtA, i, uint64(len(m.IssuerS3URI))) i-- - dAtA[i] = 0x3a + dAtA[i] = 0x12 + } + if len(m.RoleARN) > 0 { + i -= len(m.RoleARN) + copy(dAtA[i:], m.RoleARN) + i = encodeVarintEvents(dAtA, i, uint64(len(m.RoleARN))) + i-- + dAtA[i] = 0xa } return len(dAtA) - i, nil } -func (m *OneOf_SessionReject) MarshalTo(dAtA []byte) (int, error) { + +func (m *AzureOIDCIntegrationMetadata) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *AzureOIDCIntegrationMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *OneOf_SessionReject) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *AzureOIDCIntegrationMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) - if m.SessionReject != nil { - { - size, err := m.SessionReject.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if 
len(m.ClientID) > 0 { + i -= len(m.ClientID) + copy(dAtA[i:], m.ClientID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ClientID))) i-- - dAtA[i] = 0x42 + dAtA[i] = 0x12 + } + if len(m.TenantID) > 0 { + i -= len(m.TenantID) + copy(dAtA[i:], m.TenantID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.TenantID))) + i-- + dAtA[i] = 0xa } return len(dAtA) - i, nil } -func (m *OneOf_Resize) MarshalTo(dAtA []byte) (int, error) { + +func (m *GitHubIntegrationMetadata) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *GitHubIntegrationMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *OneOf_Resize) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *GitHubIntegrationMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) - if m.Resize != nil { - { - size, err := m.Resize.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.Organization) > 0 { + i -= len(m.Organization) + copy(dAtA[i:], m.Organization) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Organization))) i-- - dAtA[i] = 0x4a + dAtA[i] = 0xa } return len(dAtA) - i, nil } -func (m *OneOf_SessionEnd) MarshalTo(dAtA []byte) (int, error) { + +func (m *AWSRAIntegrationMetadata) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *AWSRAIntegrationMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *OneOf_SessionEnd) 
MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *AWSRAIntegrationMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) - if m.SessionEnd != nil { - { - size, err := m.SessionEnd.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) - } + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.TrustAnchorARN) > 0 { + i -= len(m.TrustAnchorARN) + copy(dAtA[i:], m.TrustAnchorARN) + i = encodeVarintEvents(dAtA, i, uint64(len(m.TrustAnchorARN))) i-- - dAtA[i] = 0x52 + dAtA[i] = 0xa } return len(dAtA) - i, nil } -func (m *OneOf_SessionCommand) MarshalTo(dAtA []byte) (int, error) { + +func (m *PluginCreate) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *PluginCreate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *OneOf_SessionCommand) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *PluginCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) - if m.SessionCommand != nil { - { - size, err := m.SessionCommand.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } - i-- - dAtA[i] = 0x5a + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a + { + size, err := m.PluginMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, 
uint64(size)) + } + i-- + dAtA[i] = 0x22 + { + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + { + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *OneOf_SessionDisk) MarshalTo(dAtA []byte) (int, error) { + +func (m *PluginUpdate) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *PluginUpdate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *OneOf_SessionDisk) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *PluginUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) - if m.SessionDisk != nil { - { - size, err := m.SessionDisk.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } - i-- - dAtA[i] = 0x62 + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a + { + size, err := m.PluginMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 + { + size, err := 
m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + { + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *OneOf_SessionNetwork) MarshalTo(dAtA []byte) (int, error) { + +func (m *PluginDelete) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *PluginDelete) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *OneOf_SessionNetwork) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *PluginDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) - if m.SessionNetwork != nil { - { - size, err := m.SessionNetwork.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } - i-- - dAtA[i] = 0x6a + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a + { + size, err := m.PluginMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 + { + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 
0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + { + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *OneOf_SessionData) MarshalTo(dAtA []byte) (int, error) { + +func (m *PluginMetadata) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *PluginMetadata) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *OneOf_SessionData) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *PluginMetadata) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) - if m.SessionData != nil { + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.PluginData != nil { { - size, err := m.SessionData.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.PluginData.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -29419,41 +30931,84 @@ func (m *OneOf_SessionData) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x72 + dAtA[i] = 0x2a + } + if m.ReusesCredentials { + i-- + if m.ReusesCredentials { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x20 + } + if m.HasCredentials { + i-- + if m.HasCredentials { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x18 + } + if len(m.PluginType) > 0 { + i -= len(m.PluginType) + copy(dAtA[i:], m.PluginType) + i = encodeVarintEvents(dAtA, i, 
uint64(len(m.PluginType))) + i-- + dAtA[i] = 0xa } return len(dAtA) - i, nil } -func (m *OneOf_SessionLeave) MarshalTo(dAtA []byte) (int, error) { + +func (m *OneOf) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *OneOf) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *OneOf_SessionLeave) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *OneOf) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) - if m.SessionLeave != nil { + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.Event != nil { { - size, err := m.SessionLeave.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { + size := m.Event.Size() + i -= size + if _, err := m.Event.MarshalTo(dAtA[i:]); err != nil { return 0, err } - i -= size - i = encodeVarintEvents(dAtA, i, uint64(size)) } - i-- - dAtA[i] = 0x7a } return len(dAtA) - i, nil } -func (m *OneOf_PortForward) MarshalTo(dAtA []byte) (int, error) { + +func (m *OneOf_UserLogin) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *OneOf_PortForward) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *OneOf_UserLogin) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) - if m.PortForward != nil { + if m.UserLogin != nil { { - size, err := m.PortForward.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserLogin.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -29461,22 +31016,20 @@ func (m *OneOf_PortForward) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x1 - i-- - dAtA[i] = 0x82 + dAtA[i] = 0xa } return len(dAtA) - i, nil } -func (m *OneOf_X11Forward) MarshalTo(dAtA []byte) 
(int, error) { +func (m *OneOf_UserCreate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *OneOf_X11Forward) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *OneOf_UserCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) - if m.X11Forward != nil { + if m.UserCreate != nil { { - size, err := m.X11Forward.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserCreate.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -29484,22 +31037,20 @@ func (m *OneOf_X11Forward) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x1 - i-- - dAtA[i] = 0x8a + dAtA[i] = 0x12 } return len(dAtA) - i, nil } -func (m *OneOf_SCP) MarshalTo(dAtA []byte) (int, error) { +func (m *OneOf_UserDelete) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *OneOf_SCP) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *OneOf_UserDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) - if m.SCP != nil { + if m.UserDelete != nil { { - size, err := m.SCP.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.UserDelete.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -29507,20 +31058,339 @@ func (m *OneOf_SCP) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintEvents(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x1 - i-- - dAtA[i] = 0x92 + dAtA[i] = 0x1a } return len(dAtA) - i, nil } -func (m *OneOf_Exec) MarshalTo(dAtA []byte) (int, error) { +func (m *OneOf_UserPasswordChange) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *OneOf_Exec) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *OneOf_UserPasswordChange) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) - if m.Exec != nil { + if m.UserPasswordChange != nil { + { + size, err := 
m.UserPasswordChange.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 + } + return len(dAtA) - i, nil +} +func (m *OneOf_SessionStart) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_SessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.SessionStart != nil { + { + size, err := m.SessionStart.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a + } + return len(dAtA) - i, nil +} +func (m *OneOf_SessionJoin) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_SessionJoin) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.SessionJoin != nil { + { + size, err := m.SessionJoin.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x32 + } + return len(dAtA) - i, nil +} +func (m *OneOf_SessionPrint) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_SessionPrint) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.SessionPrint != nil { + { + size, err := m.SessionPrint.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x3a + } + return len(dAtA) - i, nil +} +func (m *OneOf_SessionReject) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_SessionReject) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.SessionReject != nil { + { + size, err := m.SessionReject.MarshalToSizedBuffer(dAtA[:i]) 
+ if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x42 + } + return len(dAtA) - i, nil +} +func (m *OneOf_Resize) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_Resize) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.Resize != nil { + { + size, err := m.Resize.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x4a + } + return len(dAtA) - i, nil +} +func (m *OneOf_SessionEnd) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_SessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.SessionEnd != nil { + { + size, err := m.SessionEnd.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x52 + } + return len(dAtA) - i, nil +} +func (m *OneOf_SessionCommand) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_SessionCommand) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.SessionCommand != nil { + { + size, err := m.SessionCommand.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x5a + } + return len(dAtA) - i, nil +} +func (m *OneOf_SessionDisk) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_SessionDisk) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.SessionDisk != nil { + { + size, err := m.SessionDisk.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = 
encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x62 + } + return len(dAtA) - i, nil +} +func (m *OneOf_SessionNetwork) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_SessionNetwork) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.SessionNetwork != nil { + { + size, err := m.SessionNetwork.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x6a + } + return len(dAtA) - i, nil +} +func (m *OneOf_SessionData) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_SessionData) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.SessionData != nil { + { + size, err := m.SessionData.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x72 + } + return len(dAtA) - i, nil +} +func (m *OneOf_SessionLeave) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_SessionLeave) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.SessionLeave != nil { + { + size, err := m.SessionLeave.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x7a + } + return len(dAtA) - i, nil +} +func (m *OneOf_PortForward) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_PortForward) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.PortForward != nil { + { + size, err := m.PortForward.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- 
+ dAtA[i] = 0x1 + i-- + dAtA[i] = 0x82 + } + return len(dAtA) - i, nil +} +func (m *OneOf_X11Forward) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_X11Forward) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.X11Forward != nil { + { + size, err := m.X11Forward.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0x8a + } + return len(dAtA) - i, nil +} +func (m *OneOf_SCP) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_SCP) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.SCP != nil { + { + size, err := m.SCP.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0x92 + } + return len(dAtA) - i, nil +} +func (m *OneOf_Exec) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_Exec) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.Exec != nil { { size, err := m.Exec.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -33952,6 +35822,351 @@ func (m *OneOf_AutoUpdateAgentRolloutRollback) MarshalToSizedBuffer(dAtA []byte) } return len(dAtA) - i, nil } +func (m *OneOf_MCPSessionStart) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_MCPSessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.MCPSessionStart != nil { + { + size, err := m.MCPSessionStart.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xd + i-- + dAtA[i] = 0xc2 + } + return 
len(dAtA) - i, nil +} +func (m *OneOf_MCPSessionEnd) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_MCPSessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.MCPSessionEnd != nil { + { + size, err := m.MCPSessionEnd.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xd + i-- + dAtA[i] = 0xca + } + return len(dAtA) - i, nil +} +func (m *OneOf_MCPSessionRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_MCPSessionRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.MCPSessionRequest != nil { + { + size, err := m.MCPSessionRequest.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xd + i-- + dAtA[i] = 0xd2 + } + return len(dAtA) - i, nil +} +func (m *OneOf_MCPSessionNotification) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_MCPSessionNotification) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.MCPSessionNotification != nil { + { + size, err := m.MCPSessionNotification.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xd + i-- + dAtA[i] = 0xda + } + return len(dAtA) - i, nil +} +func (m *OneOf_BoundKeypairRecovery) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_BoundKeypairRecovery) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.BoundKeypairRecovery != nil { + { + size, err := m.BoundKeypairRecovery.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 
0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xd + i-- + dAtA[i] = 0xe2 + } + return len(dAtA) - i, nil +} +func (m *OneOf_BoundKeypairRotation) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_BoundKeypairRotation) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.BoundKeypairRotation != nil { + { + size, err := m.BoundKeypairRotation.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xd + i-- + dAtA[i] = 0xea + } + return len(dAtA) - i, nil +} +func (m *OneOf_BoundKeypairJoinStateVerificationFailed) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_BoundKeypairJoinStateVerificationFailed) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.BoundKeypairJoinStateVerificationFailed != nil { + { + size, err := m.BoundKeypairJoinStateVerificationFailed.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xd + i-- + dAtA[i] = 0xf2 + } + return len(dAtA) - i, nil +} +func (m *OneOf_MCPSessionListenSSEStream) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_MCPSessionListenSSEStream) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.MCPSessionListenSSEStream != nil { + { + size, err := m.MCPSessionListenSSEStream.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xd + i-- + dAtA[i] = 0xfa + } + return len(dAtA) - i, nil +} +func (m *OneOf_MCPSessionInvalidHTTPRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return 
m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_MCPSessionInvalidHTTPRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.MCPSessionInvalidHTTPRequest != nil { + { + size, err := m.MCPSessionInvalidHTTPRequest.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xe + i-- + dAtA[i] = 0x82 + } + return len(dAtA) - i, nil +} +func (m *OneOf_SCIMListingEvent) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_SCIMListingEvent) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.SCIMListingEvent != nil { + { + size, err := m.SCIMListingEvent.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xe + i-- + dAtA[i] = 0x8a + } + return len(dAtA) - i, nil +} +func (m *OneOf_SCIMResourceEvent) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_SCIMResourceEvent) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.SCIMResourceEvent != nil { + { + size, err := m.SCIMResourceEvent.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xe + i-- + dAtA[i] = 0x92 + } + return len(dAtA) - i, nil +} +func (m *OneOf_ClientIPRestrictionsUpdate) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_ClientIPRestrictionsUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.ClientIPRestrictionsUpdate != nil { + { + size, err := m.ClientIPRestrictionsUpdate.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } 
+ i-- + dAtA[i] = 0xe + i-- + dAtA[i] = 0x9a + } + return len(dAtA) - i, nil +} +func (m *OneOf_VnetConfigCreate) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_VnetConfigCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.VnetConfigCreate != nil { + { + size, err := m.VnetConfigCreate.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xe + i-- + dAtA[i] = 0xc2 + } + return len(dAtA) - i, nil +} +func (m *OneOf_VnetConfigUpdate) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_VnetConfigUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.VnetConfigUpdate != nil { + { + size, err := m.VnetConfigUpdate.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xe + i-- + dAtA[i] = 0xca + } + return len(dAtA) - i, nil +} +func (m *OneOf_VnetConfigDelete) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *OneOf_VnetConfigDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.VnetConfigDelete != nil { + { + size, err := m.VnetConfigDelete.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xe + i-- + dAtA[i] = 0xd2 + } + return len(dAtA) - i, nil +} func (m *StreamStatus) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -33976,12 +36191,12 @@ func (m *StreamStatus) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - n687, err687 := 
github_com_gogo_protobuf_types.StdTimeMarshalTo(m.LastUploadTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.LastUploadTime):]) - if err687 != nil { - return 0, err687 + n710, err710 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.LastUploadTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.LastUploadTime):]) + if err710 != nil { + return 0, err710 } - i -= n687 - i = encodeVarintEvents(dAtA, i, uint64(n687)) + i -= n710 + i = encodeVarintEvents(dAtA, i, uint64(n710)) i-- dAtA[i] = 0x1a if m.LastEventIndex != 0 { @@ -34077,6 +36292,29 @@ func (m *Identity) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.ScopePin != nil { + { + size, err := m.ScopePin.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0xfa + } + if len(m.JoinToken) > 0 { + i -= len(m.JoinToken) + copy(dAtA[i:], m.JoinToken) + i = encodeVarintEvents(dAtA, i, uint64(len(m.JoinToken))) + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0xf2 + } if len(m.BotInstanceID) > 0 { i -= len(m.BotInstanceID) copy(dAtA[i:], m.BotInstanceID) @@ -34140,12 +36378,12 @@ func (m *Identity) MarshalToSizedBuffer(dAtA []byte) (int, error) { dAtA[i] = 0xc2 } } - n691, err691 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.PreviousIdentityExpires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.PreviousIdentityExpires):]) - if err691 != nil { - return 0, err691 + n715, err715 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.PreviousIdentityExpires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.PreviousIdentityExpires):]) + if err715 != nil { + return 0, err715 } - i -= n691 - i = encodeVarintEvents(dAtA, i, uint64(n691)) + i -= n715 + i = encodeVarintEvents(dAtA, i, uint64(n715)) i-- dAtA[i] = 0x1 i-- @@ -34293,12 +36531,12 @@ func (m *Identity) MarshalToSizedBuffer(dAtA []byte) (int, error) { 
i-- dAtA[i] = 0x4a } - n695, err695 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) - if err695 != nil { - return 0, err695 + n719, err719 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) + if err719 != nil { + return 0, err719 } - i -= n695 - i = encodeVarintEvents(dAtA, i, uint64(n695)) + i -= n719 + i = encodeVarintEvents(dAtA, i, uint64(n719)) i-- dAtA[i] = 0x42 if len(m.KubernetesUsers) > 0 { @@ -34363,6 +36601,102 @@ func (m *Identity) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *ScopePin) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ScopePin) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ScopePin) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.Assignments) > 0 { + for k := range m.Assignments { + v := m.Assignments[k] + baseI := i + if v != nil { + { + size, err := v.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + } + i -= len(k) + copy(dAtA[i:], k) + i = encodeVarintEvents(dAtA, i, uint64(len(k))) + i-- + dAtA[i] = 0xa + i = encodeVarintEvents(dAtA, i, uint64(baseI-i)) + i-- + dAtA[i] = 0x12 + } + } + if len(m.Scope) > 0 { + i -= len(m.Scope) + copy(dAtA[i:], m.Scope) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Scope))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + +func (m *ScopePinnedAssignments) Marshal() (dAtA []byte, err error) { + size := 
m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ScopePinnedAssignments) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ScopePinnedAssignments) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.Roles) > 0 { + for iNdEx := len(m.Roles) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.Roles[iNdEx]) + copy(dAtA[i:], m.Roles[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Roles[iNdEx]))) + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + func (m *RouteToApp) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -39002,6 +41336,49 @@ func (m *OktaUserSync) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *LabelSelector) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *LabelSelector) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *LabelSelector) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.Values) > 0 { + for iNdEx := len(m.Values) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.Values[iNdEx]) + copy(dAtA[i:], m.Values[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Values[iNdEx]))) + i-- + dAtA[i] = 0x12 + } + } + if len(m.Key) > 0 { + i -= len(m.Key) + copy(dAtA[i:], m.Key) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Key))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) 
- i, nil +} + func (m *SPIFFESVIDIssued) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -39026,6 +41403,29 @@ func (m *SPIFFESVIDIssued) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.LabelSelectors) > 0 { + for iNdEx := len(m.LabelSelectors) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.LabelSelectors[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0x82 + } + } + if len(m.NameSelector) > 0 { + i -= len(m.NameSelector) + copy(dAtA[i:], m.NameSelector) + i = encodeVarintEvents(dAtA, i, uint64(len(m.NameSelector))) + i-- + dAtA[i] = 0x7a + } if m.Attributes != nil { { size, err := m.Attributes.MarshalToSizedBuffer(dAtA[:i]) @@ -42797,919 +45197,1540 @@ func (m *SigstorePolicyDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func encodeVarintEvents(dAtA []byte, offset int, v uint64) int { - offset -= sovEvents(v) - base := offset - for v >= 1<<7 { - dAtA[offset] = uint8(v&0x7f | 0x80) - v >>= 7 - offset++ - } - dAtA[offset] = uint8(v) - return base -} -func (m *Metadata) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - if m.Index != 0 { - n += 1 + sovEvents(uint64(m.Index)) - } - l = len(m.Type) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.ID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Code) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = github_com_gogo_protobuf_types.SizeOfStdTime(m.Time) - n += 1 + l + sovEvents(uint64(l)) - l = len(m.ClusterName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) +func (m *MCPSessionStart) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := 
m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } - return n + return dAtA[:n], nil } -func (m *SessionMetadata) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - l = len(m.SessionID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.WithMFA) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.PrivateKeyPolicy) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } - return n +func (m *MCPSessionStart) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *UserMetadata) Size() (n int) { - if m == nil { - return 0 - } +func (m *MCPSessionStart) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i var l int _ = l - l = len(m.User) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Login) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) } - l = len(m.Impersonator) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if len(m.EgressAuthType) > 0 { + i -= len(m.EgressAuthType) + copy(dAtA[i:], m.EgressAuthType) + i = encodeVarintEvents(dAtA, i, uint64(len(m.EgressAuthType))) + i-- + dAtA[i] = 0x62 } - l = len(m.AWSRoleARN) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if len(m.IngressAuthType) > 0 { + i -= len(m.IngressAuthType) + copy(dAtA[i:], m.IngressAuthType) + i = encodeVarintEvents(dAtA, i, uint64(len(m.IngressAuthType))) + i-- + dAtA[i] = 0x5a } - if len(m.AccessRequests) > 0 { - for _, s := range m.AccessRequests { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } + if len(m.ProtocolVersion) > 0 { + i -= len(m.ProtocolVersion) + copy(dAtA[i:], m.ProtocolVersion) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ProtocolVersion))) + i-- + dAtA[i] = 0x52 } - l = len(m.AzureIdentity) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + 
if len(m.ServerInfo) > 0 { + i -= len(m.ServerInfo) + copy(dAtA[i:], m.ServerInfo) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ServerInfo))) + i-- + dAtA[i] = 0x4a } - l = len(m.GCPServiceAccount) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if len(m.ClientInfo) > 0 { + i -= len(m.ClientInfo) + copy(dAtA[i:], m.ClientInfo) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ClientInfo))) + i-- + dAtA[i] = 0x42 } - if m.TrustedDevice != nil { - l = m.TrustedDevice.Size() - n += 1 + l + sovEvents(uint64(l)) + if len(m.McpSessionId) > 0 { + i -= len(m.McpSessionId) + copy(dAtA[i:], m.McpSessionId) + i = encodeVarintEvents(dAtA, i, uint64(len(m.McpSessionId))) + i-- + dAtA[i] = 0x3a } - l = len(m.RequiredPrivateKeyPolicy) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + { + size, err := m.AppMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if m.UserKind != 0 { - n += 1 + sovEvents(uint64(m.UserKind)) + i-- + dAtA[i] = 0x32 + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - l = len(m.BotName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + i-- + dAtA[i] = 0x2a + { + size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - l = len(m.BotInstanceID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + i-- + dAtA[i] = 0x22 + { + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if m.UserOrigin != 0 { - n += 1 + sovEvents(uint64(m.UserOrigin)) + i-- + dAtA[i] = 0x1a + { + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if 
m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - return n + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil } -func (m *ServerMetadata) Size() (n int) { - if m == nil { - return 0 +func (m *MCPSessionEnd) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } + return dAtA[:n], nil +} + +func (m *MCPSessionEnd) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MCPSessionEnd) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i var l int _ = l - l = len(m.ServerNamespace) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) } - l = len(m.ServerID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + { + size := m.Headers.Size() + i -= size + if _, err := m.Headers.MarshalTo(dAtA[i:]); err != nil { + return 0, err + } + i = encodeVarintEvents(dAtA, i, uint64(size)) } - l = len(m.ServerHostname) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + i-- + dAtA[i] = 0x42 + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - l = len(m.ServerAddr) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + i-- + dAtA[i] = 0x3a + { + size, err := m.AppMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if len(m.ServerLabels) > 0 { - for k, v := range m.ServerLabels { - _ = k - _ = v - mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + len(v) + sovEvents(uint64(len(v))) - n += mapEntrySize + 1 + 
sovEvents(uint64(mapEntrySize)) + i-- + dAtA[i] = 0x32 + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - l = len(m.ForwardedBy) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + i-- + dAtA[i] = 0x2a + { + size, err := m.ServerMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - l = len(m.ServerSubKind) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + i-- + dAtA[i] = 0x22 + { + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - l = len(m.ServerVersion) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + i-- + dAtA[i] = 0x1a + { + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - return n + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil } -func (m *ConnectionMetadata) Size() (n int) { - if m == nil { - return 0 +func (m *MCPJSONRPCMessage) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } + return dAtA[:n], nil +} + +func (m *MCPJSONRPCMessage) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MCPJSONRPCMessage) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i var l int _ = l - l = len(m.LocalAddr) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + i -= 
len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) } - l = len(m.RemoteAddr) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.Params != nil { + { + size, err := m.Params.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a } - l = len(m.Protocol) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if len(m.Method) > 0 { + i -= len(m.Method) + copy(dAtA[i:], m.Method) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Method))) + i-- + dAtA[i] = 0x1a } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if len(m.ID) > 0 { + i -= len(m.ID) + copy(dAtA[i:], m.ID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ID))) + i-- + dAtA[i] = 0x12 } - return n + if len(m.JSONRPC) > 0 { + i -= len(m.JSONRPC) + copy(dAtA[i:], m.JSONRPC) + i = encodeVarintEvents(dAtA, i, uint64(len(m.JSONRPC))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil } -func (m *ClientMetadata) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - l = len(m.UserAgent) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) +func (m *MCPSessionRequest) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } - return n + return dAtA[:n], nil } -func (m *KubernetesClusterMetadata) Size() (n int) { - if m == nil { - return 0 - } +func (m *MCPSessionRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MCPSessionRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i var l int _ = l - l = len(m.KubernetesCluster) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) } - if len(m.KubernetesUsers) > 0 
{ - for _, s := range m.KubernetesUsers { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) + { + size := m.Headers.Size() + i -= size + if _, err := m.Headers.MarshalTo(dAtA[i:]); err != nil { + return 0, err } + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if len(m.KubernetesGroups) > 0 { - for _, s := range m.KubernetesGroups { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) + i-- + dAtA[i] = 0x3a + { + size, err := m.Message.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if len(m.KubernetesLabels) > 0 { - for k, v := range m.KubernetesLabels { - _ = k - _ = v - mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + len(v) + sovEvents(uint64(len(v))) - n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) + i-- + dAtA[i] = 0x32 + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } - return n -} - -func (m *KubernetesPodMetadata) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - l = len(m.KubernetesPodName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + i-- + dAtA[i] = 0x2a + { + size, err := m.AppMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - l = len(m.KubernetesPodNamespace) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.KubernetesContainerName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.KubernetesContainerImage) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + i-- + dAtA[i] = 0x22 + { + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - l = len(m.KubernetesNodeName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + i-- + 
dAtA[i] = 0x1a + { + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - return n + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil } -func (m *SAMLIdPServiceProviderMetadata) Size() (n int) { - if m == nil { - return 0 +func (m *MCPSessionNotification) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } + return dAtA[:n], nil +} + +func (m *MCPSessionNotification) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MCPSessionNotification) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i var l int _ = l - l = len(m.ServiceProviderEntityID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) } - l = len(m.ServiceProviderShortcut) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + { + size := m.Headers.Size() + i -= size + if _, err := m.Headers.MarshalTo(dAtA[i:]); err != nil { + return 0, err + } + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if len(m.AttributeMapping) > 0 { - for k, v := range m.AttributeMapping { - _ = k - _ = v - mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + len(v) + sovEvents(uint64(len(v))) - n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) + i-- + dAtA[i] = 0x3a + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if m.XXX_unrecognized != nil { - n += 
len(m.XXX_unrecognized) + i-- + dAtA[i] = 0x32 + { + size, err := m.Message.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - return n + i-- + dAtA[i] = 0x2a + { + size, err := m.AppMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 + { + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + { + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil } -func (m *OktaResourcesUpdatedMetadata) Size() (n int) { - if m == nil { - return 0 +func (m *MCPSessionListenSSEStream) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } + return dAtA[:n], nil +} + +func (m *MCPSessionListenSSEStream) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MCPSessionListenSSEStream) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i var l int _ = l - if m.Added != 0 { - n += 1 + sovEvents(uint64(m.Added)) + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) } - if m.Updated != 0 { - n += 1 + sovEvents(uint64(m.Updated)) + { + size := m.Headers.Size() + i -= size + if _, err := m.Headers.MarshalTo(dAtA[i:]); err != nil { + return 0, err + } + i = encodeVarintEvents(dAtA, i, 
uint64(size)) } - if m.Deleted != 0 { - n += 1 + sovEvents(uint64(m.Deleted)) + i-- + dAtA[i] = 0x32 + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if len(m.AddedResources) > 0 { - for _, e := range m.AddedResources { - l = e.Size() - n += 1 + l + sovEvents(uint64(l)) + i-- + dAtA[i] = 0x2a + { + size, err := m.AppMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if len(m.UpdatedResources) > 0 { - for _, e := range m.UpdatedResources { - l = e.Size() - n += 1 + l + sovEvents(uint64(l)) + i-- + dAtA[i] = 0x22 + { + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if len(m.DeletedResources) > 0 { - for _, e := range m.DeletedResources { - l = e.Size() - n += 1 + l + sovEvents(uint64(l)) + i-- + dAtA[i] = 0x1a + { + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - return n + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil } -func (m *OktaResource) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - l = len(m.ID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Description) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) +func (m *MCPSessionInvalidHTTPRequest) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil 
{ + return nil, err } - return n + return dAtA[:n], nil } -func (m *OktaAssignmentMetadata) Size() (n int) { - if m == nil { - return 0 - } +func (m *MCPSessionInvalidHTTPRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MCPSessionInvalidHTTPRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i var l int _ = l - l = len(m.Source) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) } - l = len(m.User) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + { + size := m.Headers.Size() + i -= size + if _, err := m.Headers.MarshalTo(dAtA[i:]); err != nil { + return 0, err + } + i = encodeVarintEvents(dAtA, i, uint64(size)) } - l = len(m.StartingStatus) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + i-- + dAtA[i] = 0x4a + if len(m.Body) > 0 { + i -= len(m.Body) + copy(dAtA[i:], m.Body) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Body))) + i-- + dAtA[i] = 0x42 } - l = len(m.EndingStatus) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if len(m.RawQuery) > 0 { + i -= len(m.RawQuery) + copy(dAtA[i:], m.RawQuery) + i = encodeVarintEvents(dAtA, i, uint64(len(m.RawQuery))) + i-- + dAtA[i] = 0x3a } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if len(m.Method) > 0 { + i -= len(m.Method) + copy(dAtA[i:], m.Method) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Method))) + i-- + dAtA[i] = 0x32 } - return n -} - -func (m *AccessListMemberMetadata) Size() (n int) { - if m == nil { - return 0 + if len(m.Path) > 0 { + i -= len(m.Path) + copy(dAtA[i:], m.Path) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Path))) + i-- + dAtA[i] = 0x2a } - var l int - _ = l - l = len(m.AccessListName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + { + size, err := m.AppMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = 
encodeVarintEvents(dAtA, i, uint64(size)) } - if len(m.Members) > 0 { - for _, e := range m.Members { - l = e.Size() - n += 1 + l + sovEvents(uint64(l)) + i-- + dAtA[i] = 0x22 + { + size, err := m.SessionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - l = len(m.AccessListTitle) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + i-- + dAtA[i] = 0x1a + { + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - return n + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil } -func (m *AccessListMember) Size() (n int) { - if m == nil { - return 0 +func (m *BoundKeypairRecovery) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } + return dAtA[:n], nil +} + +func (m *BoundKeypairRecovery) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *BoundKeypairRecovery) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i var l int _ = l - l = github_com_gogo_protobuf_types.SizeOfStdTime(m.JoinedOn) - n += 1 + l + sovEvents(uint64(l)) - l = github_com_gogo_protobuf_types.SizeOfStdTime(m.RemovedOn) - n += 1 + l + sovEvents(uint64(l)) - l = len(m.Reason) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) } - l = len(m.MemberName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if len(m.RecoveryMode) > 0 { + i -= len(m.RecoveryMode) + 
copy(dAtA[i:], m.RecoveryMode) + i = encodeVarintEvents(dAtA, i, uint64(len(m.RecoveryMode))) + i-- + dAtA[i] = 0x42 } - if m.MembershipKind != 0 { - n += 1 + sovEvents(uint64(m.MembershipKind)) + if m.RecoveryCount != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.RecoveryCount)) + i-- + dAtA[i] = 0x38 } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if len(m.PublicKey) > 0 { + i -= len(m.PublicKey) + copy(dAtA[i:], m.PublicKey) + i = encodeVarintEvents(dAtA, i, uint64(len(m.PublicKey))) + i-- + dAtA[i] = 0x32 } - return n -} - -func (m *AccessListReviewMembershipRequirementsChanged) Size() (n int) { - if m == nil { - return 0 + if len(m.BotName) > 0 { + i -= len(m.BotName) + copy(dAtA[i:], m.BotName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.BotName))) + i-- + dAtA[i] = 0x2a } - var l int - _ = l - if len(m.Roles) > 0 { - for _, s := range m.Roles { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) + if len(m.TokenName) > 0 { + i -= len(m.TokenName) + copy(dAtA[i:], m.TokenName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.TokenName))) + i-- + dAtA[i] = 0x22 + } + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if len(m.Traits) > 0 { - for k, v := range m.Traits { - _ = k - _ = v - mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + len(v) + sovEvents(uint64(len(v))) - n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) + i-- + dAtA[i] = 0x1a + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - return n + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil } -func 
(m *AccessListReviewMetadata) Size() (n int) { - if m == nil { - return 0 +func (m *BoundKeypairRotation) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } + return dAtA[:n], nil +} + +func (m *BoundKeypairRotation) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *BoundKeypairRotation) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i var l int _ = l - l = len(m.Message) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) } - l = len(m.ReviewID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if len(m.NewPublicKey) > 0 { + i -= len(m.NewPublicKey) + copy(dAtA[i:], m.NewPublicKey) + i = encodeVarintEvents(dAtA, i, uint64(len(m.NewPublicKey))) + i-- + dAtA[i] = 0x3a } - if m.MembershipRequirementsChanged != nil { - l = m.MembershipRequirementsChanged.Size() - n += 1 + l + sovEvents(uint64(l)) + if len(m.PreviousPublicKey) > 0 { + i -= len(m.PreviousPublicKey) + copy(dAtA[i:], m.PreviousPublicKey) + i = encodeVarintEvents(dAtA, i, uint64(len(m.PreviousPublicKey))) + i-- + dAtA[i] = 0x32 } - l = len(m.ReviewFrequencyChanged) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if len(m.BotName) > 0 { + i -= len(m.BotName) + copy(dAtA[i:], m.BotName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.BotName))) + i-- + dAtA[i] = 0x2a } - l = len(m.ReviewDayOfMonthChanged) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if len(m.TokenName) > 0 { + i -= len(m.TokenName) + copy(dAtA[i:], m.TokenName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.TokenName))) + i-- + dAtA[i] = 0x22 } - if len(m.RemovedMembers) > 0 { - for _, s := range m.RemovedMembers { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) + { + size, err := 
m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - l = len(m.AccessListTitle) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + i-- + dAtA[i] = 0x1a + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - return n + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil } -func (m *LockMetadata) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - l = m.Target.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) +func (m *BoundKeypairJoinStateVerificationFailed) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } - return n + return dAtA[:n], nil } -func (m *SessionStart) Size() (n int) { - if m == nil { - return 0 - } +func (m *BoundKeypairJoinStateVerificationFailed) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *BoundKeypairJoinStateVerificationFailed) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ServerMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.TerminalSize) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + 
i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) } - l = m.KubernetesClusterMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.KubernetesPodMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if len(m.InitialCommand) > 0 { - for _, s := range m.InitialCommand { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } + if len(m.BotName) > 0 { + i -= len(m.BotName) + copy(dAtA[i:], m.BotName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.BotName))) + i-- + dAtA[i] = 0x2a } - l = len(m.SessionRecording) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if len(m.TokenName) > 0 { + i -= len(m.TokenName) + copy(dAtA[i:], m.TokenName) + i = encodeVarintEvents(dAtA, i, uint64(len(m.TokenName))) + i-- + dAtA[i] = 0x22 } - if len(m.Invited) > 0 { - for _, s := range m.Invited { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - l = len(m.Reason) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + i-- + dAtA[i] = 0x1a + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - return n + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil } -func (m *SessionJoin) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ServerMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + 
sovEvents(uint64(l)) - l = m.KubernetesClusterMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) +func (m *SCIMRequest) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } - return n + return dAtA[:n], nil } -func (m *SessionPrint) Size() (n int) { - if m == nil { - return 0 - } +func (m *SCIMRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *SCIMRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.ChunkIndex != 0 { - n += 1 + sovEvents(uint64(m.ChunkIndex)) + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) } - l = len(m.Data) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.Body != nil { + { + size, err := m.Body.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x32 } - if m.Bytes != 0 { - n += 1 + sovEvents(uint64(m.Bytes)) + if len(m.Path) > 0 { + i -= len(m.Path) + copy(dAtA[i:], m.Path) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Path))) + i-- + dAtA[i] = 0x2a } - if m.DelayMilliseconds != 0 { - n += 1 + sovEvents(uint64(m.DelayMilliseconds)) + if len(m.Method) > 0 { + i -= len(m.Method) + copy(dAtA[i:], m.Method) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Method))) + i-- + dAtA[i] = 0x22 } - if m.Offset != 0 { - n += 1 + sovEvents(uint64(m.Offset)) + if len(m.UserAgent) > 0 { + i -= len(m.UserAgent) + copy(dAtA[i:], m.UserAgent) + i = encodeVarintEvents(dAtA, i, uint64(len(m.UserAgent))) + i-- + dAtA[i] = 0x1a } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if len(m.SourceAddress) > 0 { + i -= 
len(m.SourceAddress) + copy(dAtA[i:], m.SourceAddress) + i = encodeVarintEvents(dAtA, i, uint64(len(m.SourceAddress))) + i-- + dAtA[i] = 0x12 } - return n + if len(m.ID) > 0 { + i -= len(m.ID) + copy(dAtA[i:], m.ID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ID))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil } -func (m *DesktopRecording) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.Message) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.DelayMilliseconds != 0 { - n += 1 + sovEvents(uint64(m.DelayMilliseconds)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) +func (m *SCIMCommonData) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } - return n + return dAtA[:n], nil } -func (m *DesktopClipboardReceive) Size() (n int) { - if m == nil { - return 0 - } +func (m *SCIMCommonData) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *SCIMCommonData) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.DesktopAddr) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) } - if m.Length != 0 { - n += 1 + sovEvents(uint64(m.Length)) + if len(m.ResourceType) > 0 { + i -= len(m.ResourceType) + copy(dAtA[i:], m.ResourceType) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ResourceType))) + i-- + dAtA[i] = 0x2a } - if m.XXX_unrecognized != nil { - n += 
len(m.XXX_unrecognized) + if len(m.Integration) > 0 { + i -= len(m.Integration) + copy(dAtA[i:], m.Integration) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Integration))) + i-- + dAtA[i] = 0x22 } - return n + if m.Request != nil { + { + size, err := m.Request.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + } + return len(dAtA) - i, nil } -func (m *DesktopClipboardSend) Size() (n int) { - if m == nil { - return 0 +func (m *SCIMListingEvent) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } + return dAtA[:n], nil +} + +func (m *SCIMListingEvent) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *SCIMListingEvent) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.DesktopAddr) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) } - if m.Length != 0 { - n += 1 + sovEvents(uint64(m.Length)) + if m.ResourceCount != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.ResourceCount)) + i-- + dAtA[i] = 0x38 } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.StartIndex != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.StartIndex)) + i-- + dAtA[i] = 0x30 } - return n -} - -func (m *DesktopSharedDirectoryStart) Size() (n int) { - if m == nil { - return 0 + if m.Count != 0 { + i = encodeVarintEvents(dAtA, i, uint64(m.Count)) + i-- + dAtA[i] = 0x28 } - var l int - _ 
= l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.DesktopAddr) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if len(m.Filter) > 0 { + i -= len(m.Filter) + copy(dAtA[i:], m.Filter) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Filter))) + i-- + dAtA[i] = 0x22 } - l = len(m.DirectoryName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + { + size, err := m.SCIMCommonData.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if m.DirectoryID != 0 { - n += 1 + sovEvents(uint64(m.DirectoryID)) + i-- + dAtA[i] = 0x1a + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - return n + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil } -func (m *DesktopSharedDirectoryRead) Size() (n int) { - if m == nil { - return 0 +func (m *SCIMResourceEvent) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } + return dAtA[:n], nil +} + +func (m *SCIMResourceEvent) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *SCIMResourceEvent) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = 
m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.DesktopAddr) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) } - l = len(m.DirectoryName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if len(m.Display) > 0 { + i -= len(m.Display) + copy(dAtA[i:], m.Display) + i = encodeVarintEvents(dAtA, i, uint64(len(m.Display))) + i-- + dAtA[i] = 0x32 } - if m.DirectoryID != 0 { - n += 1 + sovEvents(uint64(m.DirectoryID)) + if len(m.ExternalID) > 0 { + i -= len(m.ExternalID) + copy(dAtA[i:], m.ExternalID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ExternalID))) + i-- + dAtA[i] = 0x2a } - l = len(m.Path) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if len(m.TeleportID) > 0 { + i -= len(m.TeleportID) + copy(dAtA[i:], m.TeleportID) + i = encodeVarintEvents(dAtA, i, uint64(len(m.TeleportID))) + i-- + dAtA[i] = 0x22 } - if m.Length != 0 { - n += 1 + sovEvents(uint64(m.Length)) + { + size, err := m.SCIMCommonData.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if m.Offset != 0 { - n += 1 + sovEvents(uint64(m.Offset)) + i-- + dAtA[i] = 0x1a + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - return n + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil } -func (m *DesktopSharedDirectoryWrite) Size() (n int) { - if m == nil { - 
return 0 +func (m *ClientIPRestrictionsUpdate) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } + return dAtA[:n], nil +} + +func (m *ClientIPRestrictionsUpdate) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ClientIPRestrictionsUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.DesktopAddr) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) } - l = len(m.DirectoryName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if len(m.ClientIPRestrictions) > 0 { + for iNdEx := len(m.ClientIPRestrictions) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.ClientIPRestrictions[iNdEx]) + copy(dAtA[i:], m.ClientIPRestrictions[iNdEx]) + i = encodeVarintEvents(dAtA, i, uint64(len(m.ClientIPRestrictions[iNdEx]))) + i-- + dAtA[i] = 0x32 + } } - if m.DirectoryID != 0 { - n += 1 + sovEvents(uint64(m.DirectoryID)) + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - l = len(m.Path) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + i-- + dAtA[i] = 0x2a + { + size, err := m.ResourceMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if m.Length != 0 { - n += 1 + sovEvents(uint64(m.Length)) + i-- + dAtA[i] = 0x22 + { + size, err := 
m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if m.Offset != 0 { - n += 1 + sovEvents(uint64(m.Offset)) + i-- + dAtA[i] = 0x1a + { + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) } - return n + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil } -func (m *SessionReject) Size() (n int) { - if m == nil { - return 0 +func (m *VnetConfigCreate) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } + return dAtA[:n], nil +} + +func (m *VnetConfigCreate) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *VnetConfigCreate) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ServerMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.Reason) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) } - if m.Maximum != 0 { - n += 1 + sovEvents(uint64(m.Maximum)) + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 + { + size, err := 
m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + +func (m *VnetConfigUpdate) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } + return dAtA[:n], nil +} + +func (m *VnetConfigUpdate) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *VnetConfigUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) } - return n + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 + { + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil } -func (m 
*SessionConnect) Size() (n int) { - if m == nil { - return 0 +func (m *VnetConfigDelete) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } + return dAtA[:n], nil +} + +func (m *VnetConfigDelete) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *VnetConfigDelete) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ServerMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) } - return n + { + size, err := m.ConnectionMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 + { + size, err := m.UserMetadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintEvents(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil } -func (m *FileTransferRequestEvent) Size() (n int) { +func encodeVarintEvents(dAtA []byte, offset int, v uint64) int { + offset -= sovEvents(v) + base := offset + for v >= 1<<7 { + dAtA[offset] = uint8(v&0x7f | 0x80) + v >>= 7 + offset++ + } + dAtA[offset] = uint8(v) + return base +} +func (m *Metadata) Size() (n int) { if m == nil { 
return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.RequestID) + if m.Index != 0 { + n += 1 + sovEvents(uint64(m.Index)) + } + l = len(m.Type) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if len(m.Approvers) > 0 { - for _, s := range m.Approvers { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } - } - l = len(m.Requester) + l = len(m.ID) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.Location) + l = len(m.Code) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.Download { - n += 2 - } - l = len(m.Filename) + l = github_com_gogo_protobuf_types.SizeOfStdTime(m.Time) + n += 1 + l + sovEvents(uint64(l)) + l = len(m.ClusterName) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } @@ -43719,81 +46740,99 @@ func (m *FileTransferRequestEvent) Size() (n int) { return n } -func (m *Resize) Size() (n int) { +func (m *SessionMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ServerMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.TerminalSize) + l = len(m.SessionID) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.WithMFA) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.PrivateKeyPolicy) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = m.KubernetesClusterMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.KubernetesPodMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *SessionEnd) Size() (n int) { +func (m *UserMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + 
sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ServerMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.EnhancedRecording { - n += 2 + l = len(m.User) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - if m.Interactive { - n += 2 + l = len(m.Login) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - if len(m.Participants) > 0 { - for _, s := range m.Participants { + l = len(m.Impersonator) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.AWSRoleARN) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if len(m.AccessRequests) > 0 { + for _, s := range m.AccessRequests { l = len(s) n += 1 + l + sovEvents(uint64(l)) } } - l = github_com_gogo_protobuf_types.SizeOfStdTime(m.StartTime) - n += 1 + l + sovEvents(uint64(l)) - l = github_com_gogo_protobuf_types.SizeOfStdTime(m.EndTime) - n += 1 + l + sovEvents(uint64(l)) - l = m.KubernetesClusterMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.KubernetesPodMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if len(m.InitialCommand) > 0 { - for _, s := range m.InitialCommand { + l = len(m.AzureIdentity) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.GCPServiceAccount) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.TrustedDevice != nil { + l = m.TrustedDevice.Size() + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.RequiredPrivateKeyPolicy) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.UserKind != 0 { + n += 1 + sovEvents(uint64(m.UserKind)) + } + l = len(m.BotName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.BotInstanceID) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.UserOrigin != 0 { + n += 1 + sovEvents(uint64(m.UserOrigin)) + } + if len(m.UserRoles) > 0 { + for _, s := range m.UserRoles { l = len(s) n += 1 + l + 
sovEvents(uint64(l)) } } - l = len(m.SessionRecording) + l = m.UserTraits.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.UserClusterName) if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + n += 2 + l + sovEvents(uint64(l)) } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -43801,19 +46840,45 @@ func (m *SessionEnd) Size() (n int) { return n } -func (m *BPFMetadata) Size() (n int) { +func (m *ServerMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.PID != 0 { - n += 1 + sovEvents(uint64(m.PID)) + l = len(m.ServerNamespace) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - if m.CgroupID != 0 { - n += 1 + sovEvents(uint64(m.CgroupID)) + l = len(m.ServerID) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - l = len(m.Program) + l = len(m.ServerHostname) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.ServerAddr) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if len(m.ServerLabels) > 0 { + for k, v := range m.ServerLabels { + _ = k + _ = v + mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + len(v) + sovEvents(uint64(len(v))) + n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) + } + } + l = len(m.ForwardedBy) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.ServerSubKind) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.ServerVersion) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } @@ -43823,20 +46888,21 @@ func (m *BPFMetadata) Size() (n int) { return n } -func (m *Status) Size() (n int) { +func (m *ConnectionMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.Success { - n += 2 + l = len(m.LocalAddr) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - l = len(m.Error) + l = len(m.RemoteAddr) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.UserMessage) + l = len(m.Protocol) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } @@ -43846,69 +46912,51 @@ func (m *Status) Size() (n int) { return n } -func (m *SessionCommand) 
Size() (n int) { +func (m *ClientMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ServerMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.BPFMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.PPID != 0 { - n += 1 + sovEvents(uint64(m.PPID)) - } - l = len(m.Path) + l = len(m.UserAgent) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if len(m.Argv) > 0 { - for _, s := range m.Argv { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } - } - if m.ReturnCode != 0 { - n += 1 + sovEvents(uint64(m.ReturnCode)) - } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *SessionDisk) Size() (n int) { +func (m *KubernetesClusterMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ServerMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.BPFMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.Path) + l = len(m.KubernetesCluster) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.Flags != 0 { - n += 1 + sovEvents(uint64(m.Flags)) + if len(m.KubernetesUsers) > 0 { + for _, s := range m.KubernetesUsers { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } } - if m.ReturnCode != 0 { - n += 1 + sovEvents(uint64(m.ReturnCode)) + if len(m.KubernetesGroups) > 0 { + for _, s := range m.KubernetesGroups { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } + } + if len(m.KubernetesLabels) > 0 { + for k, v := range m.KubernetesLabels { + _ = k + _ = v + mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + len(v) + sovEvents(uint64(len(v))) + n += mapEntrySize + 1 + 
sovEvents(uint64(mapEntrySize)) + } } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -43916,41 +46964,31 @@ func (m *SessionDisk) Size() (n int) { return n } -func (m *SessionNetwork) Size() (n int) { +func (m *KubernetesPodMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ServerMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.BPFMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.SrcAddr) + l = len(m.KubernetesPodName) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.DstAddr) + l = len(m.KubernetesPodNamespace) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.DstPort != 0 { - n += 1 + sovEvents(uint64(m.DstPort)) - } - if m.TCPVersion != 0 { - n += 1 + sovEvents(uint64(m.TCPVersion)) + l = len(m.KubernetesContainerName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - if m.Operation != 0 { - n += 1 + sovEvents(uint64(m.Operation)) + l = len(m.KubernetesContainerImage) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - if m.Action != 0 { - n += 1 + sovEvents(uint64(m.Action)) + l = len(m.KubernetesNodeName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -43958,27 +46996,27 @@ func (m *SessionNetwork) Size() (n int) { return n } -func (m *SessionData) Size() (n int) { +func (m *SAMLIdPServiceProviderMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ServerMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.BytesTransmitted != 0 { - n += 1 + 
sovEvents(uint64(m.BytesTransmitted)) + l = len(m.ServiceProviderEntityID) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - if m.BytesReceived != 0 { - n += 1 + sovEvents(uint64(m.BytesReceived)) + l = len(m.ServiceProviderShortcut) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if len(m.AttributeMapping) > 0 { + for k, v := range m.AttributeMapping { + _ = k + _ = v + mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + len(v) + sovEvents(uint64(len(v))) + n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) + } } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -43986,63 +47024,56 @@ func (m *SessionData) Size() (n int) { return n } -func (m *SessionLeave) Size() (n int) { +func (m *OktaResourcesUpdatedMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ServerMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) + if m.Added != 0 { + n += 1 + sovEvents(uint64(m.Added)) + } + if m.Updated != 0 { + n += 1 + sovEvents(uint64(m.Updated)) + } + if m.Deleted != 0 { + n += 1 + sovEvents(uint64(m.Deleted)) + } + if len(m.AddedResources) > 0 { + for _, e := range m.AddedResources { + l = e.Size() + n += 1 + l + sovEvents(uint64(l)) + } + } + if len(m.UpdatedResources) > 0 { + for _, e := range m.UpdatedResources { + l = e.Size() + n += 1 + l + sovEvents(uint64(l)) + } + } + if len(m.DeletedResources) > 0 { + for _, e := range m.DeletedResources { + l = e.Size() + n += 1 + l + sovEvents(uint64(l)) + } + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *UserLogin) Size() (n int) { +func (m *OktaResource) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = 
m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.Method) + l = len(m.ID) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.IdentityAttributes != nil { - l = m.IdentityAttributes.Size() - n += 1 + l + sovEvents(uint64(l)) - } - if m.MFADevice != nil { - l = m.MFADevice.Size() - n += 1 + l + sovEvents(uint64(l)) - } - l = m.ClientMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if len(m.AppliedLoginRules) > 0 { - for _, s := range m.AppliedLoginRules { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } - } - l = len(m.ConnectorID) + l = len(m.Description) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } @@ -44052,22 +47083,27 @@ func (m *UserLogin) Size() (n int) { return n } -func (m *CreateMFAAuthChallenge) Size() (n int) { +func (m *OktaAssignmentMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.ChallengeScope) + l = len(m.Source) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.ChallengeAllowReuse { - n += 2 + l = len(m.User) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.StartingStatus) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.EndingStatus) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -44075,134 +47111,134 @@ func (m *CreateMFAAuthChallenge) Size() (n int) { return n } -func (m *ValidateMFAAuthResponse) Size() (n int) { +func (m *AccessListMemberMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.MFADevice != nil { - l = m.MFADevice.Size() + 
l = len(m.AccessListName) + if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.ChallengeScope) + if len(m.Members) > 0 { + for _, e := range m.Members { + l = e.Size() + n += 1 + l + sovEvents(uint64(l)) + } + } + l = len(m.AccessListTitle) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.ChallengeAllowReuse { - n += 2 - } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *ResourceMetadata) Size() (n int) { +func (m *AccessListMember) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.Name) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires) + l = github_com_gogo_protobuf_types.SizeOfStdTime(m.JoinedOn) n += 1 + l + sovEvents(uint64(l)) - l = len(m.UpdatedBy) + l = github_com_gogo_protobuf_types.SizeOfStdTime(m.RemovedOn) + n += 1 + l + sovEvents(uint64(l)) + l = len(m.Reason) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.TTL) + l = len(m.MemberName) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } + if m.MembershipKind != 0 { + n += 1 + sovEvents(uint64(m.MembershipKind)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *UserCreate) Size() (n int) { +func (m *AccessListReviewMembershipRequirementsChanged) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) if len(m.Roles) > 0 { for _, s := range m.Roles { l = len(s) n += 1 + l + sovEvents(uint64(l)) } } - l = len(m.Connector) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if len(m.Traits) > 0 { + for k, v := range m.Traits { + _ = k + _ = v + mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + len(v) + sovEvents(uint64(len(v))) + n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) + } } - l = m.ConnectionMetadata.Size() - n += 1 + 
l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *UserUpdate) Size() (n int) { +func (m *AccessListReviewMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if len(m.Roles) > 0 { - for _, s := range m.Roles { + l = len(m.Message) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.ReviewID) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.MembershipRequirementsChanged != nil { + l = m.MembershipRequirementsChanged.Size() + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.ReviewFrequencyChanged) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.ReviewDayOfMonthChanged) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if len(m.RemovedMembers) > 0 { + for _, s := range m.RemovedMembers { l = len(s) n += 1 + l + sovEvents(uint64(l)) } } - l = len(m.Connector) + l = len(m.AccessListTitle) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *UserDelete) Size() (n int) { +func (m *LockMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.Target.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -44210,7 +47246,7 @@ func (m *UserDelete) Size() (n int) { return n } -func (m *UserPasswordChange) Size() (n int) { +func (m *SessionStart) Size() (n int) { if m == nil { return 0 } @@ -44220,89 +47256,47 @@ func (m 
*UserPasswordChange) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ServerMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } - return n -} - -func (m *AccessRequestCreate) Size() (n int) { - if m == nil { - return 0 + l = len(m.TerminalSize) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - var l int - _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() + l = m.KubernetesClusterMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() + l = m.KubernetesPodMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - if len(m.Roles) > 0 { - for _, s := range m.Roles { + if len(m.InitialCommand) > 0 { + for _, s := range m.InitialCommand { l = len(s) n += 1 + l + sovEvents(uint64(l)) } } - l = len(m.RequestID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.RequestState) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Delegator) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Reason) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.Annotations != nil { - l = m.Annotations.Size() - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Reviewer) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.ProposedState) + l = len(m.SessionRecording) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if len(m.RequestedResourceIDs) > 0 { - for _, e := range m.RequestedResourceIDs { - l = e.Size() + if len(m.Invited) > 0 { + for _, s := range m.Invited { + l = len(s) n += 1 + l + sovEvents(uint64(l)) } } - l = github_com_gogo_protobuf_types.SizeOfStdTime(m.MaxDuration) - n += 1 + l + sovEvents(uint64(l)) - l = len(m.PromotedAccessListName) + l = len(m.Reason) if l > 0 { n += 1 + l + 
sovEvents(uint64(l)) } - if m.AssumeStartTime != nil { - l = github_com_gogo_protobuf_types.SizeOfStdTime(*m.AssumeStartTime) - n += 2 + l + sovEvents(uint64(l)) - } - if len(m.ResourceNames) > 0 { - for _, s := range m.ResourceNames { - l = len(s) - n += 2 + l + sovEvents(uint64(l)) - } - } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *AccessRequestExpire) Size() (n int) { +func (m *SessionJoin) Size() (n int) { if m == nil { return 0 } @@ -44310,43 +47304,45 @@ func (m *AccessRequestExpire) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ServerMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.KubernetesClusterMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.RequestID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.ResourceExpiry != nil { - l = github_com_gogo_protobuf_types.SizeOfStdTime(*m.ResourceExpiry) - n += 1 + l + sovEvents(uint64(l)) - } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *ResourceID) Size() (n int) { +func (m *SessionPrint) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.ClusterName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.ChunkIndex != 0 { + n += 1 + sovEvents(uint64(m.ChunkIndex)) } - l = len(m.Kind) + l = len(m.Data) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.Name) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.Bytes != 0 { + n += 1 + sovEvents(uint64(m.Bytes)) } - l = len(m.SubResourceName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.DelayMilliseconds != 0 { + n += 1 + sovEvents(uint64(m.DelayMilliseconds)) + } + if m.Offset != 0 { + 
n += 1 + sovEvents(uint64(m.Offset)) } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -44354,7 +47350,7 @@ func (m *ResourceID) Size() (n int) { return n } -func (m *AccessRequestDelete) Size() (n int) { +func (m *DesktopRecording) Size() (n int) { if m == nil { return 0 } @@ -44362,19 +47358,20 @@ func (m *AccessRequestDelete) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.RequestID) + l = len(m.Message) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } + if m.DelayMilliseconds != 0 { + n += 1 + sovEvents(uint64(m.DelayMilliseconds)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *PortForward) Size() (n int) { +func (m *DesktopClipboardReceive) Size() (n int) { if m == nil { return 0 } @@ -44384,25 +47381,28 @@ func (m *PortForward) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() + l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.Addr) + l = len(m.DesktopAddr) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.Length != 0 { + n += 1 + sovEvents(uint64(m.Length)) + } + l = len(m.DesktopName) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = m.KubernetesClusterMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.KubernetesPodMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *X11Forward) Size() (n int) { +func (m *DesktopClipboardSend) Size() (n int) { if m == nil { return 0 } @@ -44412,31 +47412,18 @@ func (m *X11Forward) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() n += 
1 + l + sovEvents(uint64(l)) - l = m.Status.Size() + l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } - return n -} - -func (m *CommandMetadata) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - l = len(m.Command) + l = len(m.DesktopAddr) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.ExitCode) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.Length != 0 { + n += 1 + sovEvents(uint64(m.Length)) } - l = len(m.Error) + l = len(m.DesktopName) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } @@ -44446,7 +47433,7 @@ func (m *CommandMetadata) Size() (n int) { return n } -func (m *Exec) Size() (n int) { +func (m *DesktopSharedDirectoryStart) Size() (n int) { if m == nil { return 0 } @@ -44456,25 +47443,34 @@ func (m *Exec) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ServerMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.CommandMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.KubernetesClusterMetadata.Size() + l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.KubernetesPodMetadata.Size() + l = m.Status.Size() n += 1 + l + sovEvents(uint64(l)) + l = len(m.DesktopAddr) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.DirectoryName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.DirectoryID != 0 { + n += 1 + sovEvents(uint64(m.DirectoryID)) + } + l = len(m.DesktopName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *SCP) Size() (n int) { +func (m *DesktopSharedDirectoryRead) Size() (n int) { if m == nil { return 0 } @@ -44484,56 +47480,35 @@ func (m *SCP) Size() (n int) { n += 1 + l + 
sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ServerMetadata.Size() + l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.CommandMetadata.Size() + l = m.Status.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.Path) + l = len(m.DesktopAddr) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.Action) + l = len(m.DirectoryName) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } - return n -} - -func (m *SFTPAttributes) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - if m.FileSize != nil { - l = github_com_gogo_protobuf_types.SizeOfStdUInt64(*m.FileSize) - n += 1 + l + sovEvents(uint64(l)) - } - if m.UID != nil { - l = github_com_gogo_protobuf_types.SizeOfStdUInt32(*m.UID) - n += 1 + l + sovEvents(uint64(l)) + if m.DirectoryID != 0 { + n += 1 + sovEvents(uint64(m.DirectoryID)) } - if m.GID != nil { - l = github_com_gogo_protobuf_types.SizeOfStdUInt32(*m.GID) + l = len(m.Path) + if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.Permissions != nil { - l = github_com_gogo_protobuf_types.SizeOfStdUInt32(*m.Permissions) - n += 1 + l + sovEvents(uint64(l)) + if m.Length != 0 { + n += 1 + sovEvents(uint64(m.Length)) } - if m.AccessTime != nil { - l = github_com_gogo_protobuf_types.SizeOfStdTime(*m.AccessTime) - n += 1 + l + sovEvents(uint64(l)) + if m.Offset != 0 { + n += 1 + sovEvents(uint64(m.Offset)) } - if m.ModificationTime != nil { - l = github_com_gogo_protobuf_types.SizeOfStdTime(*m.ModificationTime) + l = len(m.DesktopName) + if l > 0 { n += 1 + l + sovEvents(uint64(l)) } if m.XXX_unrecognized != nil { @@ -44542,7 +47517,7 @@ func (m *SFTPAttributes) Size() (n int) { return n } -func (m *SFTP) Size() (n int) { +func (m *DesktopSharedDirectoryWrite) Size() (n int) { if m == 
nil { return 0 } @@ -44552,35 +47527,34 @@ func (m *SFTP) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ServerMetadata.Size() + l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.WorkingDirectory) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.DesktopAddr) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.Path) + l = len(m.DirectoryName) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.TargetPath) + if m.DirectoryID != 0 { + n += 1 + sovEvents(uint64(m.DirectoryID)) + } + l = len(m.Path) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.Flags != 0 { - n += 1 + sovEvents(uint64(m.Flags)) - } - if m.Attributes != nil { - l = m.Attributes.Size() - n += 1 + l + sovEvents(uint64(l)) + if m.Length != 0 { + n += 1 + sovEvents(uint64(m.Length)) } - if m.Action != 0 { - n += 1 + sovEvents(uint64(m.Action)) + if m.Offset != 0 { + n += 1 + sovEvents(uint64(m.Offset)) } - l = len(m.Error) + l = len(m.DesktopName) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } @@ -44590,7 +47564,7 @@ func (m *SFTP) Size() (n int) { return n } -func (m *SFTPSummary) Size() (n int) { +func (m *SessionReject) Size() (n int) { if m == nil { return 0 } @@ -44600,17 +47574,16 @@ func (m *SFTPSummary) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.ServerMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - if len(m.FileTransferStats) > 0 { - for _, e := range m.FileTransferStats { - l = e.Size() - n += 1 + l + sovEvents(uint64(l)) - } + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.Reason) + 
if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.Maximum != 0 { + n += 1 + sovEvents(uint64(m.Maximum)) } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -44618,29 +47591,25 @@ func (m *SFTPSummary) Size() (n int) { return n } -func (m *FileTransferStat) Size() (n int) { +func (m *SessionConnect) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.Path) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.BytesRead != 0 { - n += 1 + sovEvents(uint64(m.BytesRead)) - } - if m.BytesWritten != 0 { - n += 1 + sovEvents(uint64(m.BytesWritten)) - } + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ServerMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *Subsystem) Size() (n int) { +func (m *FileTransferRequestEvent) Size() (n int) { if m == nil { return 0 } @@ -44648,27 +47617,40 @@ func (m *Subsystem) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.Name) + l = len(m.RequestID) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.Error) + if len(m.Approvers) > 0 { + for _, s := range m.Approvers { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } + } + l = len(m.Requester) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.Location) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.Download { + n += 2 + } + l = len(m.Filename) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = m.ServerMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *ClientDisconnect) Size() (n int) { +func (m *Resize) Size() (n int) { if m == nil { return 0 } @@ 
-44678,21 +47660,27 @@ func (m *ClientDisconnect) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) l = m.ServerMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.Reason) + l = len(m.TerminalSize) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } + l = m.KubernetesClusterMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.KubernetesPodMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *AuthAttempt) Size() (n int) { +func (m *SessionEnd) Size() (n int) { if m == nil { return 0 } @@ -44702,75 +47690,94 @@ func (m *AuthAttempt) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() + l = m.ServerMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.EnhancedRecording { + n += 2 } - return n -} - -func (m *UserTokenCreate) Size() (n int) { - if m == nil { - return 0 + if m.Interactive { + n += 2 } - var l int - _ = l - l = m.Metadata.Size() + if len(m.Participants) > 0 { + for _, s := range m.Participants { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } + } + l = github_com_gogo_protobuf_types.SizeOfStdTime(m.StartTime) n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() + l = github_com_gogo_protobuf_types.SizeOfStdTime(m.EndTime) n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() + l = m.KubernetesClusterMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.KubernetesPodMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if len(m.InitialCommand) > 0 { + for _, s := range 
m.InitialCommand { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } + } + l = len(m.SessionRecording) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *RoleCreate) Size() (n int) { +func (m *BPFMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) + if m.PID != 0 { + n += 1 + sovEvents(uint64(m.PID)) + } + if m.CgroupID != 0 { + n += 1 + sovEvents(uint64(m.CgroupID)) + } + l = len(m.Program) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *RoleUpdate) Size() (n int) { +func (m *Status) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) + if m.Success { + n += 2 + } + l = len(m.Error) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.UserMessage) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *RoleDelete) Size() (n int) { +func (m *SessionCommand) Size() (n int) { if m == nil { return 0 } @@ -44778,19 +47785,37 @@ func (m *RoleDelete) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = 
m.ServerMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.BPFMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + if m.PPID != 0 { + n += 1 + sovEvents(uint64(m.PPID)) + } + l = len(m.Path) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if len(m.Argv) > 0 { + for _, s := range m.Argv { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } + } + if m.ReturnCode != 0 { + n += 1 + sovEvents(uint64(m.ReturnCode)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *BotCreate) Size() (n int) { +func (m *SessionDisk) Size() (n int) { if m == nil { return 0 } @@ -44798,35 +47823,31 @@ func (m *BotCreate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } - return n -} - -func (m *BotUpdate) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - l = m.Metadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() + l = m.ServerMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() + l = m.BPFMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = len(m.Path) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.Flags != 0 { + n += 1 + sovEvents(uint64(m.Flags)) + } + if m.ReturnCode != 0 { + n += 1 + sovEvents(uint64(m.ReturnCode)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *BotDelete) Size() (n int) { +func (m *SessionNetwork) Size() (n int) { if m == nil { return 0 } @@ -44834,17 +47855,41 @@ func (m *BotDelete) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.SessionMetadata.Size() + n += 1 + l + 
sovEvents(uint64(l)) + l = m.ServerMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.BPFMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.SrcAddr) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.DstAddr) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.DstPort != 0 { + n += 1 + sovEvents(uint64(m.DstPort)) + } + if m.TCPVersion != 0 { + n += 1 + sovEvents(uint64(m.TCPVersion)) + } + if m.Operation != 0 { + n += 1 + sovEvents(uint64(m.Operation)) + } + if m.Action != 0 { + n += 1 + sovEvents(uint64(m.Action)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *TrustedClusterCreate) Size() (n int) { +func (m *SessionData) Size() (n int) { if m == nil { return 0 } @@ -44852,19 +47897,27 @@ func (m *TrustedClusterCreate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ServerMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + if m.BytesTransmitted != 0 { + n += 1 + sovEvents(uint64(m.BytesTransmitted)) + } + if m.BytesReceived != 0 { + n += 1 + sovEvents(uint64(m.BytesReceived)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *TrustedClusterDelete) Size() (n int) { +func (m *SessionLeave) Size() (n int) { if m == nil { return 0 } @@ -44872,10 +47925,12 @@ func (m *TrustedClusterDelete) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ServerMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) l = m.ConnectionMetadata.Size() 
n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { @@ -44884,7 +47939,7 @@ func (m *TrustedClusterDelete) Size() (n int) { return n } -func (m *ProvisionTokenCreate) Size() (n int) { +func (m *UserLogin) Size() (n int) { if m == nil { return 0 } @@ -44892,17 +47947,33 @@ func (m *ProvisionTokenCreate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - if len(m.Roles) > 0 { - for _, s := range m.Roles { + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.Method) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.IdentityAttributes != nil { + l = m.IdentityAttributes.Size() + n += 1 + l + sovEvents(uint64(l)) + } + if m.MFADevice != nil { + l = m.MFADevice.Size() + n += 1 + l + sovEvents(uint64(l)) + } + l = m.ClientMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if len(m.AppliedLoginRules) > 0 { + for _, s := range m.AppliedLoginRules { l = len(s) n += 1 + l + sovEvents(uint64(l)) } } - l = len(m.JoinMethod) + l = len(m.ConnectorID) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } @@ -44912,7 +47983,7 @@ func (m *ProvisionTokenCreate) Size() (n int) { return n } -func (m *TrustedClusterTokenCreate) Size() (n int) { +func (m *CreateMFAAuthChallenge) Size() (n int) { if m == nil { return 0 } @@ -44920,17 +47991,22 @@ func (m *TrustedClusterTokenCreate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = len(m.ChallengeScope) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.ChallengeAllowReuse { + n += 2 + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *GithubConnectorCreate) Size() (n int) { 
+func (m *ValidateMFAAuthResponse) Size() (n int) { if m == nil { return 0 } @@ -44938,39 +48014,54 @@ func (m *GithubConnectorCreate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.Status.Size() n += 1 + l + sovEvents(uint64(l)) + if m.MFADevice != nil { + l = m.MFADevice.Size() + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.ChallengeScope) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.ChallengeAllowReuse { + n += 2 + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *GithubConnectorUpdate) Size() (n int) { +func (m *ResourceMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = len(m.Name) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires) n += 1 + l + sovEvents(uint64(l)) + l = len(m.UpdatedBy) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.TTL) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *GithubConnectorDelete) Size() (n int) { +func (m *UserCreate) Size() (n int) { if m == nil { return 0 } @@ -44978,10 +48069,20 @@ func (m *GithubConnectorDelete) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if len(m.Roles) > 0 { + for _, s := range m.Roles { + l = len(s) + n += 1 + l + 
sovEvents(uint64(l)) + } + } + l = len(m.Connector) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { @@ -44990,7 +48091,7 @@ func (m *GithubConnectorDelete) Size() (n int) { return n } -func (m *OIDCConnectorCreate) Size() (n int) { +func (m *UserUpdate) Size() (n int) { if m == nil { return 0 } @@ -44998,9 +48099,21 @@ func (m *OIDCConnectorCreate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() + if len(m.Roles) > 0 { + for _, s := range m.Roles { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } + } + l = len(m.Connector) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -45008,7 +48121,7 @@ func (m *OIDCConnectorCreate) Size() (n int) { return n } -func (m *OIDCConnectorUpdate) Size() (n int) { +func (m *UserDelete) Size() (n int) { if m == nil { return 0 } @@ -45016,9 +48129,11 @@ func (m *OIDCConnectorUpdate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() + l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -45026,7 +48141,7 @@ func (m *OIDCConnectorUpdate) Size() (n int) { return n } -func (m *OIDCConnectorDelete) Size() (n int) { +func (m *UserPasswordChange) Size() (n int) { if m == nil { return 0 } @@ -45034,17 +48149,17 @@ func (m *OIDCConnectorDelete) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + 
sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *SAMLConnectorCreate) Size() (n int) { +func (m *AccessRequestCreate) Size() (n int) { if m == nil { return 0 } @@ -45052,61 +48167,73 @@ func (m *SAMLConnectorCreate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - if m.Connector != nil { - l = m.Connector.Size() + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if len(m.Roles) > 0 { + for _, s := range m.Roles { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } + } + l = len(m.RequestID) + if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + l = len(m.RequestState) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - return n -} - -func (m *SAMLConnectorUpdate) Size() (n int) { - if m == nil { - return 0 + l = len(m.Delegator) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - var l int - _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.Connector != nil { - l = m.Connector.Size() + l = len(m.Reason) + if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.Annotations != nil { + l = m.Annotations.Size() + n += 1 + l + sovEvents(uint64(l)) } - return n -} - -func (m *SAMLConnectorDelete) Size() (n int) { - if m == nil { - return 0 + l = len(m.Reviewer) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - var l int - _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + 
sovEvents(uint64(l)) - l = m.UserMetadata.Size() + l = len(m.ProposedState) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if len(m.RequestedResourceIDs) > 0 { + for _, e := range m.RequestedResourceIDs { + l = e.Size() + n += 1 + l + sovEvents(uint64(l)) + } + } + l = github_com_gogo_protobuf_types.SizeOfStdTime(m.MaxDuration) n += 1 + l + sovEvents(uint64(l)) + l = len(m.PromotedAccessListName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.AssumeStartTime != nil { + l = github_com_gogo_protobuf_types.SizeOfStdTime(*m.AssumeStartTime) + n += 2 + l + sovEvents(uint64(l)) + } + if len(m.ResourceNames) > 0 { + for _, s := range m.ResourceNames { + l = len(s) + n += 2 + l + sovEvents(uint64(l)) + } + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *KubeRequest) Size() (n int) { +func (m *AccessRequestExpire) Size() (n int) { if m == nil { return 0 } @@ -45114,77 +48241,43 @@ func (m *KubeRequest) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ServerMetadata.Size() + l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.RequestPath) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Verb) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.ResourceAPIGroup) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.ResourceNamespace) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.ResourceKind) + l = len(m.RequestID) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.ResourceName) - if l > 0 { + if m.ResourceExpiry != nil { + l = github_com_gogo_protobuf_types.SizeOfStdTime(*m.ResourceExpiry) n += 1 + l + sovEvents(uint64(l)) } - if m.ResponseCode != 0 { - n += 1 + sovEvents(uint64(m.ResponseCode)) - } - l = m.KubernetesClusterMetadata.Size() - n += 1 + l + 
sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *AppMetadata) Size() (n int) { +func (m *ResourceID) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.AppURI) + l = len(m.ClusterName) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.AppPublicAddr) + l = len(m.Kind) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if len(m.AppLabels) > 0 { - for k, v := range m.AppLabels { - _ = k - _ = v - mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + len(v) + sovEvents(uint64(len(v))) - n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) - } - } - l = len(m.AppName) + l = len(m.Name) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.AppTargetPort != 0 { - n += 1 + sovEvents(uint64(m.AppTargetPort)) + l = len(m.SubResourceName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -45192,7 +48285,7 @@ func (m *AppMetadata) Size() (n int) { return n } -func (m *AppCreate) Size() (n int) { +func (m *AccessRequestDelete) Size() (n int) { if m == nil { return 0 } @@ -45202,17 +48295,17 @@ func (m *AppCreate) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.AppMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) + l = len(m.RequestID) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *AppUpdate) Size() (n int) { +func (m *PortForward) Size() (n int) { if m == nil { return 0 } @@ -45222,9 +48315,17 @@ func (m *AppUpdate) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() + l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = 
m.AppMetadata.Size() + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.Addr) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = m.KubernetesClusterMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.KubernetesPodMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -45232,7 +48333,7 @@ func (m *AppUpdate) Size() (n int) { return n } -func (m *AppDelete) Size() (n int) { +func (m *X11Forward) Size() (n int) { if m == nil { return 0 } @@ -45242,7 +48343,9 @@ func (m *AppDelete) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -45250,35 +48353,31 @@ func (m *AppDelete) Size() (n int) { return n } -func (m *AppSessionStart) Size() (n int) { +func (m *CommandMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ServerMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.PublicAddr) + l = len(m.Command) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.ExitCode) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.Error) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = m.AppMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *AppSessionEnd) Size() (n int) { +func (m *Exec) Size() (n int) { if m == nil { return 0 } @@ -45288,13 +48387,17 @@ func (m *AppSessionEnd) Size() (n int) { n += 1 + l + 
sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) l = m.ServerMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.CommandMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.AppMetadata.Size() + l = m.KubernetesClusterMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.KubernetesPodMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -45302,7 +48405,7 @@ func (m *AppSessionEnd) Size() (n int) { return n } -func (m *AppSessionChunk) Size() (n int) { +func (m *SCP) Size() (n int) { if m == nil { return 0 } @@ -45312,76 +48415,103 @@ func (m *AppSessionChunk) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) l = m.ServerMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.CommandMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.SessionChunkID) + l = len(m.Path) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.Action) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = m.AppMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *AppSessionRequest) Size() (n int) { +func (m *SFTPAttributes) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.StatusCode != 0 { - n += 1 + sovEvents(uint64(m.StatusCode)) + if m.FileSize != nil { + l = github_com_gogo_protobuf_types.SizeOfStdUInt64(*m.FileSize) + n += 1 + l + sovEvents(uint64(l)) } - l = len(m.Path) - if l > 0 { + if m.UID != nil { + l = 
github_com_gogo_protobuf_types.SizeOfStdUInt32(*m.UID) n += 1 + l + sovEvents(uint64(l)) } - l = len(m.RawQuery) - if l > 0 { + if m.GID != nil { + l = github_com_gogo_protobuf_types.SizeOfStdUInt32(*m.GID) n += 1 + l + sovEvents(uint64(l)) } - l = len(m.Method) - if l > 0 { + if m.Permissions != nil { + l = github_com_gogo_protobuf_types.SizeOfStdUInt32(*m.Permissions) + n += 1 + l + sovEvents(uint64(l)) + } + if m.AccessTime != nil { + l = github_com_gogo_protobuf_types.SizeOfStdTime(*m.AccessTime) + n += 1 + l + sovEvents(uint64(l)) + } + if m.ModificationTime != nil { + l = github_com_gogo_protobuf_types.SizeOfStdTime(*m.ModificationTime) n += 1 + l + sovEvents(uint64(l)) } - l = m.AppMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.AWSRequestMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *AWSRequestMetadata) Size() (n int) { +func (m *SFTP) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.AWSRegion) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ServerMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.WorkingDirectory) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.AWSService) + l = len(m.Path) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.AWSHost) + l = len(m.TargetPath) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.AWSAssumedRole) + if m.Flags != 0 { + n += 1 + sovEvents(uint64(m.Flags)) + } + if m.Attributes != nil { + l = m.Attributes.Size() + n += 1 + l + sovEvents(uint64(l)) + } + if m.Action != 0 { + n += 1 + sovEvents(uint64(m.Action)) + } + l = len(m.Error) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } @@ -45391,77 +48521,85 @@ func (m *AWSRequestMetadata) Size() (n 
int) { return n } -func (m *DatabaseMetadata) Size() (n int) { +func (m *SFTPSummary) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.DatabaseService) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.DatabaseProtocol) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.DatabaseURI) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.DatabaseName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ServerMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if len(m.FileTransferStats) > 0 { + for _, e := range m.FileTransferStats { + l = e.Size() + n += 1 + l + sovEvents(uint64(l)) + } } - l = len(m.DatabaseUser) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } - if len(m.DatabaseLabels) > 0 { - for k, v := range m.DatabaseLabels { - _ = k - _ = v - mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + len(v) + sovEvents(uint64(len(v))) - n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) - } + return n +} + +func (m *FileTransferStat) Size() (n int) { + if m == nil { + return 0 } - l = len(m.DatabaseAWSRegion) + var l int + _ = l + l = len(m.Path) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.DatabaseAWSRedshiftClusterID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.BytesRead != 0 { + n += 1 + sovEvents(uint64(m.BytesRead)) } - l = len(m.DatabaseGCPProjectID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.BytesWritten != 0 { + n += 1 + sovEvents(uint64(m.BytesWritten)) } - l = len(m.DatabaseGCPInstanceID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } - if 
len(m.DatabaseRoles) > 0 { - for _, s := range m.DatabaseRoles { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } + return n +} + +func (m *Subsystem) Size() (n int) { + if m == nil { + return 0 } - l = len(m.DatabaseType) + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.Name) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.DatabaseOrigin) + l = len(m.Error) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } + l = m.ServerMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *DatabaseCreate) Size() (n int) { +func (m *ClientDisconnect) Size() (n int) { if m == nil { return 0 } @@ -45471,17 +48609,21 @@ func (m *DatabaseCreate) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() + l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() + l = m.ServerMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = len(m.Reason) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *DatabaseUpdate) Size() (n int) { +func (m *AuthAttempt) Size() (n int) { if m == nil { return 0 } @@ -45491,9 +48633,11 @@ func (m *DatabaseUpdate) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() + l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ServerMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -45501,7 +48645,7 @@ func (m *DatabaseUpdate) Size() (n 
int) { return n } -func (m *DatabaseDelete) Size() (n int) { +func (m *UserTokenCreate) Size() (n int) { if m == nil { return 0 } @@ -45509,17 +48653,17 @@ func (m *DatabaseDelete) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *DatabaseSessionStart) Size() (n int) { +func (m *RoleCreate) Size() (n int) { if m == nil { return 0 } @@ -45527,30 +48671,19 @@ func (m *DatabaseSessionStart) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() + l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ServerMetadata.Size() + l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.PostgresPID != 0 { - n += 1 + sovEvents(uint64(m.PostgresPID)) - } - l = m.ClientMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *DatabaseSessionQuery) Size() (n int) { +func (m *RoleUpdate) Size() (n int) { if m == nil { return 0 } @@ -45558,23 +48691,11 @@ func (m *DatabaseSessionQuery) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() + l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() + l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.DatabaseQuery) - if l > 0 { - n += 1 + l + 
sovEvents(uint64(l)) - } - if len(m.DatabaseQueryParameters) > 0 { - for _, s := range m.DatabaseQueryParameters { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } - } - l = m.Status.Size() + l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -45582,7 +48703,7 @@ func (m *DatabaseSessionQuery) Size() (n int) { return n } -func (m *DatabaseSessionCommandResult) Size() (n int) { +func (m *RoleDelete) Size() (n int) { if m == nil { return 0 } @@ -45590,24 +48711,19 @@ func (m *DatabaseSessionCommandResult) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() + l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() + l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() + l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - if m.AffectedRecords != 0 { - n += 1 + sovEvents(uint64(m.AffectedRecords)) - } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *DatabasePermissionUpdate) Size() (n int) { +func (m *BotCreate) Size() (n int) { if m == nil { return 0 } @@ -45615,57 +48731,35 @@ func (m *DatabasePermissionUpdate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() + l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() + l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - if len(m.PermissionSummary) > 0 { - for _, e := range m.PermissionSummary { - l = e.Size() - n += 1 + l + sovEvents(uint64(l)) - } - } - if len(m.AffectedObjectCounts) > 0 { - for k, v := range m.AffectedObjectCounts { - _ = k - _ = v - mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + 
sovEvents(uint64(v)) - n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) - } - } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *DatabasePermissionEntry) Size() (n int) { +func (m *BotUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.Permission) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if len(m.Counts) > 0 { - for k, v := range m.Counts { - _ = k - _ = v - mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + sovEvents(uint64(v)) - n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) - } - } + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *DatabaseUserCreate) Size() (n int) { +func (m *BotDelete) Size() (n int) { if m == nil { return 0 } @@ -45673,31 +48767,17 @@ func (m *DatabaseUserCreate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() + l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() + l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.Username) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if len(m.Roles) > 0 { - for _, s := range m.Roles { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } - } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *DatabaseUserDeactivate) Size() (n int) { +func (m *TrustedClusterCreate) Size() (n int) { if m == nil { return 0 } @@ -45705,28 +48785,19 @@ func (m *DatabaseUserDeactivate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - 
l = m.SessionMetadata.Size() + l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() + l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() + l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.Username) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.Delete { - n += 2 - } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *PostgresParse) Size() (n int) { +func (m *TrustedClusterDelete) Size() (n int) { if m == nil { return 0 } @@ -45734,27 +48805,19 @@ func (m *PostgresParse) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() + l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() + l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() + l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.StatementName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Query) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *PostgresBind) Size() (n int) { +func (m *ProvisionTokenCreate) Size() (n int) { if m == nil { return 0 } @@ -45762,33 +48825,27 @@ func (m *PostgresBind) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.StatementName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.PortalName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if len(m.Parameters) > 0 { - for _, s := range m.Parameters { + if len(m.Roles) > 0 { + for _, s := range m.Roles { l = 
len(s) n += 1 + l + sovEvents(uint64(l)) } } + l = len(m.JoinMethod) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *PostgresExecute) Size() (n int) { +func (m *TrustedClusterTokenCreate) Size() (n int) { if m == nil { return 0 } @@ -45796,23 +48853,17 @@ func (m *PostgresExecute) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() + l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() + l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.PortalName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *PostgresClose) Size() (n int) { +func (m *GithubConnectorCreate) Size() (n int) { if m == nil { return 0 } @@ -45820,27 +48871,19 @@ func (m *PostgresClose) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() + l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() + l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() + l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.StatementName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.PortalName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *PostgresFunctionCall) Size() (n int) { +func (m *GithubConnectorUpdate) Size() (n int) { if m == nil { return 0 } @@ -45848,28 +48891,19 @@ func (m *PostgresFunctionCall) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() + l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = 
m.SessionMetadata.Size() + l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() + l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - if m.FunctionOID != 0 { - n += 1 + sovEvents(uint64(m.FunctionOID)) - } - if len(m.FunctionArgs) > 0 { - for _, s := range m.FunctionArgs { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } - } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *WindowsDesktopSessionStart) Size() (n int) { +func (m *GithubConnectorDelete) Size() (n int) { if m == nil { return 0 } @@ -45877,55 +48911,19 @@ func (m *WindowsDesktopSessionStart) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() + l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() + l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.WindowsDesktopService) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.DesktopAddr) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Domain) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.WindowsUser) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if len(m.DesktopLabels) > 0 { - for k, v := range m.DesktopLabels { - _ = k - _ = v - mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + len(v) + sovEvents(uint64(len(v))) - n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) - } - } - l = len(m.DesktopName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.AllowUserCreation { - n += 2 - } - if m.NLA { - n += 2 - } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *DatabaseSessionEnd) Size() (n int) { +func (m *OIDCConnectorCreate) Size() (n int) { if m == nil { return 0 } @@ -45933,15 +48931,9 @@ func (m *DatabaseSessionEnd) 
Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = github_com_gogo_protobuf_types.SizeOfStdTime(m.StartTime) + l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = github_com_gogo_protobuf_types.SizeOfStdTime(m.EndTime) + l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -45949,31 +48941,25 @@ func (m *DatabaseSessionEnd) Size() (n int) { return n } -func (m *MFADeviceMetadata) Size() (n int) { +func (m *OIDCConnectorUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.DeviceName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.DeviceID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.DeviceType) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *MFADeviceAdd) Size() (n int) { +func (m *OIDCConnectorDelete) Size() (n int) { if m == nil { return 0 } @@ -45981,11 +48967,9 @@ func (m *MFADeviceAdd) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.MFADeviceMetadata.Size() + l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -45993,7 +48977,7 @@ func (m *MFADeviceAdd) Size() (n int) { return n } -func (m *MFADeviceDelete) Size() (n int) { +func (m 
*SAMLConnectorCreate) Size() (n int) { if m == nil { return 0 } @@ -46001,19 +48985,21 @@ func (m *MFADeviceDelete) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.MFADeviceMetadata.Size() + l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + if m.Connector != nil { + l = m.Connector.Size() + n += 1 + l + sovEvents(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *BillingInformationUpdate) Size() (n int) { +func (m *SAMLConnectorUpdate) Size() (n int) { if m == nil { return 0 } @@ -46021,15 +49007,21 @@ func (m *BillingInformationUpdate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + if m.Connector != nil { + l = m.Connector.Size() + n += 1 + l + sovEvents(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *BillingCardCreate) Size() (n int) { +func (m *SAMLConnectorDelete) Size() (n int) { if m == nil { return 0 } @@ -46037,6 +49029,8 @@ func (m *BillingCardCreate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { @@ -46045,7 +49039,7 @@ func (m *BillingCardCreate) Size() (n int) { return n } -func (m *BillingCardDelete) Size() (n int) { +func (m *KubeRequest) Size() (n int) { if m == nil { return 0 } @@ -46055,35 +49049,83 @@ func (m *BillingCardDelete) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + 
sovEvents(uint64(l)) + l = m.ServerMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.RequestPath) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.Verb) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.ResourceAPIGroup) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.ResourceNamespace) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.ResourceKind) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.ResourceName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.ResponseCode != 0 { + n += 1 + sovEvents(uint64(m.ResponseCode)) + } + l = m.KubernetesClusterMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *LockCreate) Size() (n int) { +func (m *AppMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Target.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Lock.Size() - n += 1 + l + sovEvents(uint64(l)) + l = len(m.AppURI) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.AppPublicAddr) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if len(m.AppLabels) > 0 { + for k, v := range m.AppLabels { + _ = k + _ = v + mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + len(v) + sovEvents(uint64(len(v))) + n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) + } + } + l = len(m.AppName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.AppTargetPort != 0 { + n += 1 + sovEvents(uint64(m.AppTargetPort)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *LockDelete) Size() (n int) { +func (m *AppCreate) Size() (n int) { if m == nil { return 0 
} @@ -46091,11 +49133,11 @@ func (m *LockDelete) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Lock.Size() + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.AppMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -46103,7 +49145,7 @@ func (m *LockDelete) Size() (n int) { return n } -func (m *RecoveryCodeGenerate) Size() (n int) { +func (m *AppUpdate) Size() (n int) { if m == nil { return 0 } @@ -46113,13 +49155,17 @@ func (m *RecoveryCodeGenerate) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.AppMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *RecoveryCodeUsed) Size() (n int) { +func (m *AppDelete) Size() (n int) { if m == nil { return 0 } @@ -46129,7 +49175,7 @@ func (m *RecoveryCodeUsed) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() + l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -46137,7 +49183,7 @@ func (m *RecoveryCodeUsed) Size() (n int) { return n } -func (m *WindowsDesktopSessionEnd) Size() (n int) { +func (m *AppSessionStart) Size() (n int) { if m == nil { return 0 } @@ -46149,54 +49195,23 @@ func (m *WindowsDesktopSessionEnd) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.WindowsDesktopService) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.DesktopAddr) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = 
len(m.Domain) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.WindowsUser) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if len(m.DesktopLabels) > 0 { - for k, v := range m.DesktopLabels { - _ = k - _ = v - mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + len(v) + sovEvents(uint64(len(v))) - n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) - } - } - l = github_com_gogo_protobuf_types.SizeOfStdTime(m.StartTime) + l = m.ServerMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = github_com_gogo_protobuf_types.SizeOfStdTime(m.EndTime) + l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.DesktopName) + l = len(m.PublicAddr) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.Recorded { - n += 2 - } - if len(m.Participants) > 0 { - for _, s := range m.Participants { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } - } + l = m.AppMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *CertificateCreate) Size() (n int) { +func (m *AppSessionEnd) Size() (n int) { if m == nil { return 0 } @@ -46204,15 +49219,15 @@ func (m *CertificateCreate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.CertificateType) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.Identity != nil { - l = m.Identity.Size() - n += 1 + l + sovEvents(uint64(l)) - } - l = m.ClientMetadata.Size() + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ServerMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.AppMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -46220,7 +49235,7 @@ func (m *CertificateCreate) Size() (n int) { return n } -func (m 
*RenewableCertificateGenerationMismatch) Size() (n int) { +func (m *AppSessionChunk) Size() (n int) { if m == nil { return 0 } @@ -46230,55 +49245,25 @@ func (m *RenewableCertificateGenerationMismatch) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } - return n -} - -func (m *BotJoin) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - l = m.Metadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() + l = m.ServerMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.BotName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Method) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.TokenName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.Attributes != nil { - l = m.Attributes.Size() - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.UserName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.BotInstanceID) + l = len(m.SessionChunkID) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } + l = m.AppMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *InstanceJoin) Size() (n int) { +func (m *AppSessionRequest) Size() (n int) { if m == nil { return 0 } @@ -46286,17 +49271,14 @@ func (m *InstanceJoin) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.HostID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.StatusCode != 0 { + n += 1 + sovEvents(uint64(m.StatusCode)) } - l = len(m.NodeName) + l = len(m.Path) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.Role) + l = len(m.RawQuery) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } @@ -46304,17 +49286,9 @@ func (m 
*InstanceJoin) Size() (n int) { if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.TokenName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.Attributes != nil { - l = m.Attributes.Size() - n += 1 + l + sovEvents(uint64(l)) - } - l = github_com_gogo_protobuf_types.SizeOfStdTime(m.TokenExpires) + l = m.AppMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.AWSRequestMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -46322,23 +49296,25 @@ func (m *InstanceJoin) Size() (n int) { return n } -func (m *Unknown) Size() (n int) { +func (m *AWSRequestMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.UnknownType) + l = len(m.AWSRegion) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.UnknownCode) + l = len(m.AWSService) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.Data) + l = len(m.AWSHost) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.AWSAssumedRole) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } @@ -46348,61 +49324,68 @@ func (m *Unknown) Size() (n int) { return n } -func (m *DeviceMetadata) Size() (n int) { +func (m *DatabaseMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.DeviceId) + l = len(m.DatabaseService) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.OsType != 0 { - n += 1 + sovEvents(uint64(m.OsType)) + l = len(m.DatabaseProtocol) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - l = len(m.AssetTag) + l = len(m.DatabaseURI) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.CredentialId) + l = len(m.DatabaseName) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.DeviceOrigin != 0 { - n += 1 + sovEvents(uint64(m.DeviceOrigin)) + l = len(m.DatabaseUser) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - if m.WebAuthentication { - n += 2 + if 
len(m.DatabaseLabels) > 0 { + for k, v := range m.DatabaseLabels { + _ = k + _ = v + mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + len(v) + sovEvents(uint64(len(v))) + n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) + } } - l = len(m.WebAuthenticationId) + l = len(m.DatabaseAWSRegion) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + l = len(m.DatabaseAWSRedshiftClusterID) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - return n -} - -func (m *DeviceEvent) Size() (n int) { - if m == nil { - return 0 + l = len(m.DatabaseGCPProjectID) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - var l int - _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.Status != nil { - l = m.Status.Size() + l = len(m.DatabaseGCPInstanceID) + if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.Device != nil { - l = m.Device.Size() + if len(m.DatabaseRoles) > 0 { + for _, s := range m.DatabaseRoles { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } + } + l = len(m.DatabaseType) + if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.User != nil { - l = m.User.Size() + l = len(m.DatabaseOrigin) + if l > 0 { n += 1 + l + sovEvents(uint64(l)) } if m.XXX_unrecognized != nil { @@ -46411,7 +49394,7 @@ func (m *DeviceEvent) Size() (n int) { return n } -func (m *DeviceEvent2) Size() (n int) { +func (m *DatabaseCreate) Size() (n int) { if m == nil { return 0 } @@ -46419,21 +49402,19 @@ func (m *DeviceEvent2) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - if m.Device != nil { - l = m.Device.Size() - n += 1 + l + sovEvents(uint64(l)) - } - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.DatabaseMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } 
return n } -func (m *DiscoveryConfigCreate) Size() (n int) { +func (m *DatabaseUpdate) Size() (n int) { if m == nil { return 0 } @@ -46445,7 +49426,7 @@ func (m *DiscoveryConfigCreate) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -46453,7 +49434,7 @@ func (m *DiscoveryConfigCreate) Size() (n int) { return n } -func (m *DiscoveryConfigUpdate) Size() (n int) { +func (m *DatabaseDelete) Size() (n int) { if m == nil { return 0 } @@ -46465,15 +49446,13 @@ func (m *DiscoveryConfigUpdate) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *DiscoveryConfigDelete) Size() (n int) { +func (m *DatabaseSessionStart) Size() (n int) { if m == nil { return 0 } @@ -46483,17 +49462,28 @@ func (m *DiscoveryConfigDelete) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ServerMetadata.Size() n += 1 + l + sovEvents(uint64(l)) l = m.ConnectionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.DatabaseMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.PostgresPID != 0 { + n += 1 + sovEvents(uint64(m.PostgresPID)) + } + l = m.ClientMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *DiscoveryConfigDeleteAll) Size() (n int) { +func (m *DatabaseSessionQuery) Size() (n int) { if m == nil { return 0 } @@ -46503,29 +49493,21 @@ func (m 
*DiscoveryConfigDeleteAll) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } - return n -} - -func (m *IntegrationCreate) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.IntegrationMetadata.Size() + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = len(m.DatabaseQuery) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if len(m.DatabaseQueryParameters) > 0 { + for _, s := range m.DatabaseQueryParameters { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } + } + l = m.Status.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -46533,7 +49515,7 @@ func (m *IntegrationCreate) Size() (n int) { return n } -func (m *IntegrationUpdate) Size() (n int) { +func (m *DatabaseSessionCommandResult) Size() (n int) { if m == nil { return 0 } @@ -46543,19 +49525,22 @@ func (m *IntegrationUpdate) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.IntegrationMetadata.Size() + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.Status.Size() n += 1 + l + sovEvents(uint64(l)) + if m.AffectedRecords != 0 { + n += 1 + sovEvents(uint64(m.AffectedRecords)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *IntegrationDelete) Size() (n int) { +func (m *DatabasePermissionUpdate) Size() (n 
int) { if m == nil { return 0 } @@ -46565,43 +49550,47 @@ func (m *IntegrationDelete) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.IntegrationMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + if len(m.PermissionSummary) > 0 { + for _, e := range m.PermissionSummary { + l = e.Size() + n += 1 + l + sovEvents(uint64(l)) + } + } + if len(m.AffectedObjectCounts) > 0 { + for k, v := range m.AffectedObjectCounts { + _ = k + _ = v + mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + sovEvents(uint64(v)) + n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) + } + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *IntegrationMetadata) Size() (n int) { +func (m *DatabasePermissionEntry) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.SubKind) + l = len(m.Permission) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.AWSOIDC != nil { - l = m.AWSOIDC.Size() - n += 1 + l + sovEvents(uint64(l)) - } - if m.AzureOIDC != nil { - l = m.AzureOIDC.Size() - n += 1 + l + sovEvents(uint64(l)) - } - if m.GitHub != nil { - l = m.GitHub.Size() - n += 1 + l + sovEvents(uint64(l)) - } - if m.AWSRA != nil { - l = m.AWSRA.Size() - n += 1 + l + sovEvents(uint64(l)) + if len(m.Counts) > 0 { + for k, v := range m.Counts { + _ = k + _ = v + mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + sovEvents(uint64(v)) + n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) + } } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -46609,19 +49598,31 @@ func (m *IntegrationMetadata) Size() (n int) { return n } -func (m *AWSOIDCIntegrationMetadata) Size() (n int) { +func (m *DatabaseUserCreate) Size() (n int) { if m == nil { return 0 } var l 
int _ = l - l = len(m.RoleARN) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.DatabaseMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.Username) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.IssuerS3URI) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if len(m.Roles) > 0 { + for _, s := range m.Roles { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -46629,19 +49630,28 @@ func (m *AWSOIDCIntegrationMetadata) Size() (n int) { return n } -func (m *AzureOIDCIntegrationMetadata) Size() (n int) { +func (m *DatabaseUserDeactivate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.TenantID) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.DatabaseMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.Username) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.ClientID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.Delete { + n += 2 } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -46649,13 +49659,25 @@ func (m *AzureOIDCIntegrationMetadata) Size() (n int) { return n } -func (m *GitHubIntegrationMetadata) Size() (n int) { +func (m *PostgresParse) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.Organization) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.DatabaseMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = 
len(m.StatementName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.Query) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } @@ -46665,23 +49687,41 @@ func (m *GitHubIntegrationMetadata) Size() (n int) { return n } -func (m *AWSRAIntegrationMetadata) Size() (n int) { +func (m *PostgresBind) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.TrustAnchorARN) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.DatabaseMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.StatementName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.PortalName) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } + if len(m.Parameters) > 0 { + for _, s := range m.Parameters { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *PluginCreate) Size() (n int) { +func (m *PostgresExecute) Size() (n int) { if m == nil { return 0 } @@ -46691,19 +49731,21 @@ func (m *PluginCreate) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.PluginMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = len(m.PortalName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *PluginUpdate) Size() (n int) { +func (m *PostgresClose) Size() (n int) { if m == nil { return 0 } @@ -46713,19 +49755,25 @@ func (m *PluginUpdate) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + 
sovEvents(uint64(l)) - l = m.PluginMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = len(m.StatementName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.PortalName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *PluginDelete) Size() (n int) { +func (m *PostgresFunctionCall) Size() (n int) { if m == nil { return 0 } @@ -46735,52 +49783,64 @@ func (m *PluginDelete) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.PluginMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + if m.FunctionOID != 0 { + n += 1 + sovEvents(uint64(m.FunctionOID)) + } + if len(m.FunctionArgs) > 0 { + for _, s := range m.FunctionArgs { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *PluginMetadata) Size() (n int) { +func (m *WindowsCertificateMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.PluginType) + l = len(m.Subject) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.HasCredentials { - n += 2 - } - if m.ReusesCredentials { - n += 2 + l = len(m.SerialNumber) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - if m.PluginData != nil { - l = m.PluginData.Size() + l = len(m.UPN) + if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if len(m.CRLDistributionPoints) > 0 { + for _, s := range m.CRLDistributionPoints { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } } - return n -} - -func (m *OneOf) 
Size() (n int) { - if m == nil { - return 0 + if m.KeyUsage != 0 { + n += 1 + sovEvents(uint64(m.KeyUsage)) } - var l int - _ = l - if m.Event != nil { - n += m.Event.Size() + if len(m.ExtendedKeyUsage) > 0 { + l = 0 + for _, e := range m.ExtendedKeyUsage { + l += sovEvents(uint64(e)) + } + n += 1 + sovEvents(uint64(l)) + l + } + if len(m.EnhancedKeyUsage) > 0 { + for _, s := range m.EnhancedKeyUsage { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -46788,4164 +49848,3693 @@ func (m *OneOf) Size() (n int) { return n } -func (m *OneOf_UserLogin) Size() (n int) { +func (m *WindowsDesktopSessionStart) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.UserLogin != nil { - l = m.UserLogin.Size() + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.WindowsDesktopService) + if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - return n -} -func (m *OneOf_UserCreate) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - if m.UserCreate != nil { - l = m.UserCreate.Size() + l = len(m.DesktopAddr) + if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - return n -} -func (m *OneOf_UserDelete) Size() (n int) { - if m == nil { - return 0 + l = len(m.Domain) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - var l int - _ = l - if m.UserDelete != nil { - l = m.UserDelete.Size() + l = len(m.WindowsUser) + if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - return n -} -func (m *OneOf_UserPasswordChange) Size() (n int) { - if m == nil { - return 0 + if len(m.DesktopLabels) > 0 { + for k, v := range m.DesktopLabels { + _ = k + _ = v + mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + len(v) + 
sovEvents(uint64(len(v))) + n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) + } } - var l int - _ = l - if m.UserPasswordChange != nil { - l = m.UserPasswordChange.Size() + l = len(m.DesktopName) + if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - return n -} -func (m *OneOf_SessionStart) Size() (n int) { - if m == nil { - return 0 + if m.AllowUserCreation { + n += 2 } - var l int - _ = l - if m.SessionStart != nil { - l = m.SessionStart.Size() + if m.NLA { + n += 2 + } + if m.CertMetadata != nil { + l = m.CertMetadata.Size() n += 1 + l + sovEvents(uint64(l)) } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } return n } -func (m *OneOf_SessionJoin) Size() (n int) { + +func (m *DatabaseSessionEnd) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.SessionJoin != nil { - l = m.SessionJoin.Size() - n += 1 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.DatabaseMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = github_com_gogo_protobuf_types.SizeOfStdTime(m.StartTime) + n += 1 + l + sovEvents(uint64(l)) + l = github_com_gogo_protobuf_types.SizeOfStdTime(m.EndTime) + n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if len(m.Participants) > 0 { + for _, s := range m.Participants { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_SessionPrint) Size() (n int) { + +func (m *MFADeviceMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.SessionPrint != nil { - l = m.SessionPrint.Size() + l = len(m.DeviceName) + if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - return n -} -func (m *OneOf_SessionReject) Size() (n int) { - if m == nil { - return 0 + l = len(m.DeviceID) + if l > 0 { 
+ n += 1 + l + sovEvents(uint64(l)) } - var l int - _ = l - if m.SessionReject != nil { - l = m.SessionReject.Size() + l = len(m.DeviceType) + if l > 0 { n += 1 + l + sovEvents(uint64(l)) } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } return n } -func (m *OneOf_Resize) Size() (n int) { + +func (m *MFADeviceAdd) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.Resize != nil { - l = m.Resize.Size() - n += 1 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.MFADeviceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_SessionEnd) Size() (n int) { + +func (m *MFADeviceDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.SessionEnd != nil { - l = m.SessionEnd.Size() - n += 1 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.MFADeviceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_SessionCommand) Size() (n int) { + +func (m *BillingInformationUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.SessionCommand != nil { - l = m.SessionCommand.Size() - n += 1 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_SessionDisk) Size() (n int) { + +func (m *BillingCardCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.SessionDisk != nil { - l = 
m.SessionDisk.Size() - n += 1 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_SessionNetwork) Size() (n int) { + +func (m *BillingCardDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.SessionNetwork != nil { - l = m.SessionNetwork.Size() - n += 1 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_SessionData) Size() (n int) { + +func (m *LockCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.SessionData != nil { - l = m.SessionData.Size() - n += 1 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Target.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Lock.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_SessionLeave) Size() (n int) { + +func (m *LockDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.SessionLeave != nil { - l = m.SessionLeave.Size() - n += 1 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Lock.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_PortForward) Size() (n int) { + +func (m *RecoveryCodeGenerate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.PortForward 
!= nil { - l = m.PortForward.Size() - n += 2 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_X11Forward) Size() (n int) { + +func (m *RecoveryCodeUsed) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.X11Forward != nil { - l = m.X11Forward.Size() - n += 2 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_SCP) Size() (n int) { + +func (m *WindowsDesktopSessionEnd) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.SCP != nil { - l = m.SCP.Size() - n += 2 + l + sovEvents(uint64(l)) - } - return n -} -func (m *OneOf_Exec) Size() (n int) { - if m == nil { - return 0 + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.WindowsDesktopService) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - var l int - _ = l - if m.Exec != nil { - l = m.Exec.Size() - n += 2 + l + sovEvents(uint64(l)) + l = len(m.DesktopAddr) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - return n -} -func (m *OneOf_Subsystem) Size() (n int) { - if m == nil { - return 0 + l = len(m.Domain) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - var l int - _ = l - if m.Subsystem != nil { - l = m.Subsystem.Size() - n += 2 + l + sovEvents(uint64(l)) + l = len(m.WindowsUser) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if len(m.DesktopLabels) > 0 { + for k, v := range m.DesktopLabels { + _ = k + _ = v + mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + len(v) + 
sovEvents(uint64(len(v))) + n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) + } + } + l = github_com_gogo_protobuf_types.SizeOfStdTime(m.StartTime) + n += 1 + l + sovEvents(uint64(l)) + l = github_com_gogo_protobuf_types.SizeOfStdTime(m.EndTime) + n += 1 + l + sovEvents(uint64(l)) + l = len(m.DesktopName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.Recorded { + n += 2 + } + if len(m.Participants) > 0 { + for _, s := range m.Participants { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } + } + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_ClientDisconnect) Size() (n int) { + +func (m *CertificateCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.ClientDisconnect != nil { - l = m.ClientDisconnect.Size() - n += 2 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.CertificateType) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.Identity != nil { + l = m.Identity.Size() + n += 1 + l + sovEvents(uint64(l)) + } + l = m.ClientMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.CertificateAuthority != nil { + l = m.CertificateAuthority.Size() + n += 1 + l + sovEvents(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_AuthAttempt) Size() (n int) { + +func (m *CertificateAuthority) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.AuthAttempt != nil { - l = m.AuthAttempt.Size() - n += 2 + l + sovEvents(uint64(l)) + l = len(m.Type) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.Domain) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.SubjectKeyID) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_AccessRequestCreate) Size() (n int) { + +func (m 
*RenewableCertificateGenerationMismatch) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.AccessRequestCreate != nil { - l = m.AccessRequestCreate.Size() - n += 2 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_UserTokenCreate) Size() (n int) { + +func (m *BotJoin) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.UserTokenCreate != nil { - l = m.UserTokenCreate.Size() - n += 2 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.BotName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.Method) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.TokenName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.Attributes != nil { + l = m.Attributes.Size() + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.UserName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.BotInstanceID) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_RoleCreate) Size() (n int) { + +func (m *InstanceJoin) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.RoleCreate != nil { - l = m.RoleCreate.Size() - n += 2 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.HostID) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.NodeName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.Role) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.Method) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } 
+ l = len(m.TokenName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.Attributes != nil { + l = m.Attributes.Size() + n += 1 + l + sovEvents(uint64(l)) + } + l = github_com_gogo_protobuf_types.SizeOfStdTime(m.TokenExpires) + n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_RoleDelete) Size() (n int) { + +func (m *Unknown) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.RoleDelete != nil { - l = m.RoleDelete.Size() - n += 2 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.UnknownType) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.UnknownCode) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.Data) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_TrustedClusterCreate) Size() (n int) { + +func (m *DeviceMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.TrustedClusterCreate != nil { - l = m.TrustedClusterCreate.Size() - n += 2 + l + sovEvents(uint64(l)) + l = len(m.DeviceId) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.OsType != 0 { + n += 1 + sovEvents(uint64(m.OsType)) + } + l = len(m.AssetTag) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.CredentialId) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.DeviceOrigin != 0 { + n += 1 + sovEvents(uint64(m.DeviceOrigin)) + } + if m.WebAuthentication { + n += 2 + } + l = len(m.WebAuthenticationId) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_TrustedClusterDelete) Size() (n int) { + +func (m *DeviceEvent) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.TrustedClusterDelete != nil { - l = 
m.TrustedClusterDelete.Size() - n += 2 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.Status != nil { + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + } + if m.Device != nil { + l = m.Device.Size() + n += 1 + l + sovEvents(uint64(l)) + } + if m.User != nil { + l = m.User.Size() + n += 1 + l + sovEvents(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_TrustedClusterTokenCreate) Size() (n int) { + +func (m *DeviceEvent2) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.TrustedClusterTokenCreate != nil { - l = m.TrustedClusterTokenCreate.Size() - n += 2 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.Device != nil { + l = m.Device.Size() + n += 1 + l + sovEvents(uint64(l)) + } + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_GithubConnectorCreate) Size() (n int) { + +func (m *DiscoveryConfigCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.GithubConnectorCreate != nil { - l = m.GithubConnectorCreate.Size() - n += 2 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_GithubConnectorDelete) Size() (n int) { + +func (m *DiscoveryConfigUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.GithubConnectorDelete != nil { - l = m.GithubConnectorDelete.Size() - n += 2 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = 
m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_OIDCConnectorCreate) Size() (n int) { + +func (m *DiscoveryConfigDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.OIDCConnectorCreate != nil { - l = m.OIDCConnectorCreate.Size() - n += 2 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_OIDCConnectorDelete) Size() (n int) { + +func (m *DiscoveryConfigDeleteAll) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.OIDCConnectorDelete != nil { - l = m.OIDCConnectorDelete.Size() - n += 2 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) } return n } -func (m *OneOf_SAMLConnectorCreate) Size() (n int) { + +func (m *IntegrationCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.SAMLConnectorCreate != nil { - l = m.SAMLConnectorCreate.Size() - n += 2 + l + sovEvents(uint64(l)) + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.IntegrationMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if 
m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
 	}
 	return n
 }
-func (m *OneOf_SAMLConnectorDelete) Size() (n int) {
+
+func (m *IntegrationUpdate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.SAMLConnectorDelete != nil {
-		l = m.SAMLConnectorDelete.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.IntegrationMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
 	}
 	return n
 }
-func (m *OneOf_KubeRequest) Size() (n int) {
+
+func (m *IntegrationDelete) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.KubeRequest != nil {
-		l = m.KubeRequest.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.IntegrationMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
 	}
 	return n
 }
-func (m *OneOf_AppSessionStart) Size() (n int) {
+
+func (m *IntegrationMetadata) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.AppSessionStart != nil {
-		l = m.AppSessionStart.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	l = len(m.SubKind)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.AWSOIDC != nil {
+		l = m.AWSOIDC.Size()
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.AzureOIDC != nil {
+		l = m.AzureOIDC.Size()
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.GitHub != nil {
+		l = m.GitHub.Size()
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.AWSRA != nil {
+		l = m.AWSRA.Size()
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
 	}
 	return n
 }
-func (m *OneOf_AppSessionChunk) Size() (n int) {
+
+func (m *AWSOIDCIntegrationMetadata) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.AppSessionChunk != nil {
-		l = m.AppSessionChunk.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	l = len(m.RoleARN)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.IssuerS3URI)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
 	}
 	return n
 }
-func (m *OneOf_AppSessionRequest) Size() (n int) {
+
+func (m *AzureOIDCIntegrationMetadata) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.AppSessionRequest != nil {
-		l = m.AppSessionRequest.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	l = len(m.TenantID)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.ClientID)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
 	}
 	return n
 }
-func (m *OneOf_DatabaseSessionStart) Size() (n int) {
+
+func (m *GitHubIntegrationMetadata) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.DatabaseSessionStart != nil {
-		l = m.DatabaseSessionStart.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	l = len(m.Organization)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
 	}
 	return n
 }
-func (m *OneOf_DatabaseSessionEnd) Size() (n int) {
+
+func (m *AWSRAIntegrationMetadata) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.DatabaseSessionEnd != nil {
-		l = m.DatabaseSessionEnd.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	l = len(m.TrustAnchorARN)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
 	}
 	return n
 }
-func (m *OneOf_DatabaseSessionQuery) Size() (n int) {
+
+func (m *PluginCreate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.DatabaseSessionQuery != nil {
-		l = m.DatabaseSessionQuery.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.PluginMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
 	}
 	return n
 }
-func (m *OneOf_SessionUpload) Size() (n int) {
+
+func (m *PluginUpdate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.SessionUpload != nil {
-		l = m.SessionUpload.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.PluginMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
 	}
 	return n
 }
-func (m *OneOf_MFADeviceAdd) Size() (n int) {
+
+func (m *PluginDelete) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.MFADeviceAdd != nil {
-		l = m.MFADeviceAdd.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.PluginMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
 	}
 	return n
 }
-func (m *OneOf_MFADeviceDelete) Size() (n int) {
+
+func (m *PluginMetadata) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.MFADeviceDelete != nil {
-		l = m.MFADeviceDelete.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	l = len(m.PluginType)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.HasCredentials {
+		n += 2
+	}
+	if m.ReusesCredentials {
+		n += 2
+	}
+	if m.PluginData != nil {
+		l = m.PluginData.Size()
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
 	}
 	return n
 }
-func (m *OneOf_BillingInformationUpdate) Size() (n int) {
+
+func (m *OneOf) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.BillingInformationUpdate != nil {
-		l = m.BillingInformationUpdate.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	if m.Event != nil {
+		n += m.Event.Size()
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
 	}
 	return n
 }
-func (m *OneOf_BillingCardCreate) Size() (n int) {
+
+func (m *OneOf_UserLogin) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.BillingCardCreate != nil {
-		l = m.BillingCardCreate.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	if m.UserLogin != nil {
+		l = m.UserLogin.Size()
+		n += 1 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_BillingCardDelete) Size() (n int) {
+func (m *OneOf_UserCreate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.BillingCardDelete != nil {
-		l = m.BillingCardDelete.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	if m.UserCreate != nil {
+		l = m.UserCreate.Size()
+		n += 1 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_LockCreate) Size() (n int) {
+func (m *OneOf_UserDelete) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.LockCreate != nil {
-		l = m.LockCreate.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	if m.UserDelete != nil {
+		l = m.UserDelete.Size()
+		n += 1 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_LockDelete) Size() (n int) {
+func (m *OneOf_UserPasswordChange) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.LockDelete != nil {
-		l = m.LockDelete.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	if m.UserPasswordChange != nil {
+		l = m.UserPasswordChange.Size()
+		n += 1 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_RecoveryCodeGenerate) Size() (n int) {
+func (m *OneOf_SessionStart) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.RecoveryCodeGenerate != nil {
-		l = m.RecoveryCodeGenerate.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	if m.SessionStart != nil {
+		l = m.SessionStart.Size()
+		n += 1 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_RecoveryCodeUsed) Size() (n int) {
+func (m *OneOf_SessionJoin) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.RecoveryCodeUsed != nil {
-		l = m.RecoveryCodeUsed.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	if m.SessionJoin != nil {
+		l = m.SessionJoin.Size()
+		n += 1 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_DatabaseCreate) Size() (n int) {
+func (m *OneOf_SessionPrint) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.DatabaseCreate != nil {
-		l = m.DatabaseCreate.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	if m.SessionPrint != nil {
+		l = m.SessionPrint.Size()
+		n += 1 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_DatabaseUpdate) Size() (n int) {
+func (m *OneOf_SessionReject) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.DatabaseUpdate != nil {
-		l = m.DatabaseUpdate.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	if m.SessionReject != nil {
+		l = m.SessionReject.Size()
+		n += 1 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_DatabaseDelete) Size() (n int) {
+func (m *OneOf_Resize) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.DatabaseDelete != nil {
-		l = m.DatabaseDelete.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	if m.Resize != nil {
+		l = m.Resize.Size()
+		n += 1 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_AppCreate) Size() (n int) {
+func (m *OneOf_SessionEnd) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.AppCreate != nil {
-		l = m.AppCreate.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	if m.SessionEnd != nil {
+		l = m.SessionEnd.Size()
+		n += 1 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_AppUpdate) Size() (n int) {
+func (m *OneOf_SessionCommand) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.AppUpdate != nil {
-		l = m.AppUpdate.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	if m.SessionCommand != nil {
+		l = m.SessionCommand.Size()
+		n += 1 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_AppDelete) Size() (n int) {
+func (m *OneOf_SessionDisk) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.AppDelete != nil {
-		l = m.AppDelete.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	if m.SessionDisk != nil {
+		l = m.SessionDisk.Size()
+		n += 1 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_WindowsDesktopSessionStart) Size() (n int) {
+func (m *OneOf_SessionNetwork) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.WindowsDesktopSessionStart != nil {
-		l = m.WindowsDesktopSessionStart.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	if m.SessionNetwork != nil {
+		l = m.SessionNetwork.Size()
+		n += 1 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_WindowsDesktopSessionEnd) Size() (n int) {
+func (m *OneOf_SessionData) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.WindowsDesktopSessionEnd != nil {
-		l = m.WindowsDesktopSessionEnd.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	if m.SessionData != nil {
+		l = m.SessionData.Size()
+		n += 1 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_PostgresParse) Size() (n int) {
+func (m *OneOf_SessionLeave) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.PostgresParse != nil {
-		l = m.PostgresParse.Size()
-		n += 2 + l + sovEvents(uint64(l))
+	if m.SessionLeave != nil {
+		l = m.SessionLeave.Size()
+		n += 1 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_PostgresBind) Size() (n int) {
+func (m *OneOf_PortForward) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.PostgresBind != nil {
-		l = m.PostgresBind.Size()
+	if m.PortForward != nil {
+		l = m.PortForward.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_PostgresExecute) Size() (n int) {
+func (m *OneOf_X11Forward) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.PostgresExecute != nil {
-		l = m.PostgresExecute.Size()
+	if m.X11Forward != nil {
+		l = m.X11Forward.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_PostgresClose) Size() (n int) {
+func (m *OneOf_SCP) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.PostgresClose != nil {
-		l = m.PostgresClose.Size()
+	if m.SCP != nil {
+		l = m.SCP.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_PostgresFunctionCall) Size() (n int) {
+func (m *OneOf_Exec) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.PostgresFunctionCall != nil {
-		l = m.PostgresFunctionCall.Size()
+	if m.Exec != nil {
+		l = m.Exec.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_AccessRequestDelete) Size() (n int) {
+func (m *OneOf_Subsystem) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.AccessRequestDelete != nil {
-		l = m.AccessRequestDelete.Size()
+	if m.Subsystem != nil {
+		l = m.Subsystem.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_SessionConnect) Size() (n int) {
+func (m *OneOf_ClientDisconnect) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.SessionConnect != nil {
-		l = m.SessionConnect.Size()
+	if m.ClientDisconnect != nil {
+		l = m.ClientDisconnect.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_CertificateCreate) Size() (n int) {
+func (m *OneOf_AuthAttempt) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.CertificateCreate != nil {
-		l = m.CertificateCreate.Size()
+	if m.AuthAttempt != nil {
+		l = m.AuthAttempt.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_DesktopRecording) Size() (n int) {
+func (m *OneOf_AccessRequestCreate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.DesktopRecording != nil {
-		l = m.DesktopRecording.Size()
+	if m.AccessRequestCreate != nil {
+		l = m.AccessRequestCreate.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_DesktopClipboardSend) Size() (n int) {
+func (m *OneOf_UserTokenCreate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.DesktopClipboardSend != nil {
-		l = m.DesktopClipboardSend.Size()
+	if m.UserTokenCreate != nil {
+		l = m.UserTokenCreate.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_DesktopClipboardReceive) Size() (n int) {
+func (m *OneOf_RoleCreate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.DesktopClipboardReceive != nil {
-		l = m.DesktopClipboardReceive.Size()
+	if m.RoleCreate != nil {
+		l = m.RoleCreate.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_MySQLStatementPrepare) Size() (n int) {
+func (m *OneOf_RoleDelete) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.MySQLStatementPrepare != nil {
-		l = m.MySQLStatementPrepare.Size()
+	if m.RoleDelete != nil {
+		l = m.RoleDelete.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_MySQLStatementExecute) Size() (n int) {
+func (m *OneOf_TrustedClusterCreate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.MySQLStatementExecute != nil {
-		l = m.MySQLStatementExecute.Size()
+	if m.TrustedClusterCreate != nil {
+		l = m.TrustedClusterCreate.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_MySQLStatementSendLongData) Size() (n int) {
+func (m *OneOf_TrustedClusterDelete) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.MySQLStatementSendLongData != nil {
-		l = m.MySQLStatementSendLongData.Size()
+	if m.TrustedClusterDelete != nil {
+		l = m.TrustedClusterDelete.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_MySQLStatementClose) Size() (n int) {
+func (m *OneOf_TrustedClusterTokenCreate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.MySQLStatementClose != nil {
-		l = m.MySQLStatementClose.Size()
+	if m.TrustedClusterTokenCreate != nil {
+		l = m.TrustedClusterTokenCreate.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_MySQLStatementReset) Size() (n int) {
+func (m *OneOf_GithubConnectorCreate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.MySQLStatementReset != nil {
-		l = m.MySQLStatementReset.Size()
+	if m.GithubConnectorCreate != nil {
+		l = m.GithubConnectorCreate.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_MySQLStatementFetch) Size() (n int) {
+func (m *OneOf_GithubConnectorDelete) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.MySQLStatementFetch != nil {
-		l = m.MySQLStatementFetch.Size()
+	if m.GithubConnectorDelete != nil {
+		l = m.GithubConnectorDelete.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_MySQLStatementBulkExecute) Size() (n int) {
+func (m *OneOf_OIDCConnectorCreate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.MySQLStatementBulkExecute != nil {
-		l = m.MySQLStatementBulkExecute.Size()
+	if m.OIDCConnectorCreate != nil {
+		l = m.OIDCConnectorCreate.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_RenewableCertificateGenerationMismatch) Size() (n int) {
+func (m *OneOf_OIDCConnectorDelete) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.RenewableCertificateGenerationMismatch != nil {
-		l = m.RenewableCertificateGenerationMismatch.Size()
+	if m.OIDCConnectorDelete != nil {
+		l = m.OIDCConnectorDelete.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_Unknown) Size() (n int) {
+func (m *OneOf_SAMLConnectorCreate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.Unknown != nil {
-		l = m.Unknown.Size()
+	if m.SAMLConnectorCreate != nil {
+		l = m.SAMLConnectorCreate.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_MySQLInitDB) Size() (n int) {
+func (m *OneOf_SAMLConnectorDelete) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.MySQLInitDB != nil {
-		l = m.MySQLInitDB.Size()
+	if m.SAMLConnectorDelete != nil {
+		l = m.SAMLConnectorDelete.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_MySQLCreateDB) Size() (n int) {
+func (m *OneOf_KubeRequest) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.MySQLCreateDB != nil {
-		l = m.MySQLCreateDB.Size()
+	if m.KubeRequest != nil {
+		l = m.KubeRequest.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_MySQLDropDB) Size() (n int) {
+func (m *OneOf_AppSessionStart) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.MySQLDropDB != nil {
-		l = m.MySQLDropDB.Size()
+	if m.AppSessionStart != nil {
+		l = m.AppSessionStart.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_MySQLShutDown) Size() (n int) {
+func (m *OneOf_AppSessionChunk) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.MySQLShutDown != nil {
-		l = m.MySQLShutDown.Size()
+	if m.AppSessionChunk != nil {
+		l = m.AppSessionChunk.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_MySQLProcessKill) Size() (n int) {
+func (m *OneOf_AppSessionRequest) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.MySQLProcessKill != nil {
-		l = m.MySQLProcessKill.Size()
+	if m.AppSessionRequest != nil {
+		l = m.AppSessionRequest.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_MySQLDebug) Size() (n int) {
+func (m *OneOf_DatabaseSessionStart) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.MySQLDebug != nil {
-		l = m.MySQLDebug.Size()
+	if m.DatabaseSessionStart != nil {
+		l = m.DatabaseSessionStart.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_MySQLRefresh) Size() (n int) {
+func (m *OneOf_DatabaseSessionEnd) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.MySQLRefresh != nil {
-		l = m.MySQLRefresh.Size()
+	if m.DatabaseSessionEnd != nil {
+		l = m.DatabaseSessionEnd.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_AccessRequestResourceSearch) Size() (n int) {
+func (m *OneOf_DatabaseSessionQuery) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.AccessRequestResourceSearch != nil {
-		l = m.AccessRequestResourceSearch.Size()
+	if m.DatabaseSessionQuery != nil {
+		l = m.DatabaseSessionQuery.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_SQLServerRPCRequest) Size() (n int) {
+func (m *OneOf_SessionUpload) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.SQLServerRPCRequest != nil {
-		l = m.SQLServerRPCRequest.Size()
+	if m.SessionUpload != nil {
+		l = m.SessionUpload.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_DatabaseSessionMalformedPacket) Size() (n int) {
+func (m *OneOf_MFADeviceAdd) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.DatabaseSessionMalformedPacket != nil {
-		l = m.DatabaseSessionMalformedPacket.Size()
+	if m.MFADeviceAdd != nil {
+		l = m.MFADeviceAdd.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_SFTP) Size() (n int) {
+func (m *OneOf_MFADeviceDelete) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.SFTP != nil {
-		l = m.SFTP.Size()
+	if m.MFADeviceDelete != nil {
+		l = m.MFADeviceDelete.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_UpgradeWindowStartUpdate) Size() (n int) {
+func (m *OneOf_BillingInformationUpdate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.UpgradeWindowStartUpdate != nil {
-		l = m.UpgradeWindowStartUpdate.Size()
+	if m.BillingInformationUpdate != nil {
+		l = m.BillingInformationUpdate.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_AppSessionEnd) Size() (n int) {
+func (m *OneOf_BillingCardCreate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.AppSessionEnd != nil {
-		l = m.AppSessionEnd.Size()
+	if m.BillingCardCreate != nil {
+		l = m.BillingCardCreate.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_SessionRecordingAccess) Size() (n int) {
+func (m *OneOf_BillingCardDelete) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.SessionRecordingAccess != nil {
-		l = m.SessionRecordingAccess.Size()
+	if m.BillingCardDelete != nil {
+		l = m.BillingCardDelete.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_KubernetesClusterCreate) Size() (n int) {
+func (m *OneOf_LockCreate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.KubernetesClusterCreate != nil {
-		l = m.KubernetesClusterCreate.Size()
+	if m.LockCreate != nil {
+		l = m.LockCreate.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_KubernetesClusterUpdate) Size() (n int) {
+func (m *OneOf_LockDelete) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.KubernetesClusterUpdate != nil {
-		l = m.KubernetesClusterUpdate.Size()
+	if m.LockDelete != nil {
+		l = m.LockDelete.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_KubernetesClusterDelete) Size() (n int) {
+func (m *OneOf_RecoveryCodeGenerate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.KubernetesClusterDelete != nil {
-		l = m.KubernetesClusterDelete.Size()
+	if m.RecoveryCodeGenerate != nil {
+		l = m.RecoveryCodeGenerate.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_SSMRun) Size() (n int) {
+func (m *OneOf_RecoveryCodeUsed) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.SSMRun != nil {
-		l = m.SSMRun.Size()
+	if m.RecoveryCodeUsed != nil {
+		l = m.RecoveryCodeUsed.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_ElasticsearchRequest) Size() (n int) {
+func (m *OneOf_DatabaseCreate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.ElasticsearchRequest != nil {
-		l = m.ElasticsearchRequest.Size()
+	if m.DatabaseCreate != nil {
+		l = m.DatabaseCreate.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_CassandraBatch) Size() (n int) {
+func (m *OneOf_DatabaseUpdate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.CassandraBatch != nil {
-		l = m.CassandraBatch.Size()
+	if m.DatabaseUpdate != nil {
+		l = m.DatabaseUpdate.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_CassandraPrepare) Size() (n int) {
+func (m *OneOf_DatabaseDelete) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.CassandraPrepare != nil {
-		l = m.CassandraPrepare.Size()
+	if m.DatabaseDelete != nil {
+		l = m.DatabaseDelete.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_CassandraRegister) Size() (n int) {
+func (m *OneOf_AppCreate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.CassandraRegister != nil {
-		l = m.CassandraRegister.Size()
+	if m.AppCreate != nil {
+		l = m.AppCreate.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_CassandraExecute) Size() (n int) {
+func (m *OneOf_AppUpdate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.CassandraExecute != nil {
-		l = m.CassandraExecute.Size()
+	if m.AppUpdate != nil {
+		l = m.AppUpdate.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_AppSessionDynamoDBRequest) Size() (n int) {
+func (m *OneOf_AppDelete) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.AppSessionDynamoDBRequest != nil {
-		l = m.AppSessionDynamoDBRequest.Size()
+	if m.AppDelete != nil {
+		l = m.AppDelete.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_DesktopSharedDirectoryStart) Size() (n int) {
+func (m *OneOf_WindowsDesktopSessionStart) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.DesktopSharedDirectoryStart != nil {
-		l = m.DesktopSharedDirectoryStart.Size()
+	if m.WindowsDesktopSessionStart != nil {
+		l = m.WindowsDesktopSessionStart.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_DesktopSharedDirectoryRead) Size() (n int) {
+func (m *OneOf_WindowsDesktopSessionEnd) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.DesktopSharedDirectoryRead != nil {
-		l = m.DesktopSharedDirectoryRead.Size()
+	if m.WindowsDesktopSessionEnd != nil {
+		l = m.WindowsDesktopSessionEnd.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_DesktopSharedDirectoryWrite) Size() (n int) {
+func (m *OneOf_PostgresParse) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.DesktopSharedDirectoryWrite != nil {
-		l = m.DesktopSharedDirectoryWrite.Size()
+	if m.PostgresParse != nil {
+		l = m.PostgresParse.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_DynamoDBRequest) Size() (n int) {
+func (m *OneOf_PostgresBind) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.DynamoDBRequest != nil {
-		l = m.DynamoDBRequest.Size()
+	if m.PostgresBind != nil {
+		l = m.PostgresBind.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_BotJoin) Size() (n int) {
+func (m *OneOf_PostgresExecute) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.BotJoin != nil {
-		l = m.BotJoin.Size()
+	if m.PostgresExecute != nil {
+		l = m.PostgresExecute.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_InstanceJoin) Size() (n int) {
+func (m *OneOf_PostgresClose) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.InstanceJoin != nil {
-		l = m.InstanceJoin.Size()
+	if m.PostgresClose != nil {
+		l = m.PostgresClose.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_DeviceEvent) Size() (n int) {
+func (m *OneOf_PostgresFunctionCall) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.DeviceEvent != nil {
-		l = m.DeviceEvent.Size()
+	if m.PostgresFunctionCall != nil {
+		l = m.PostgresFunctionCall.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_LoginRuleCreate) Size() (n int) {
+func (m *OneOf_AccessRequestDelete) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.LoginRuleCreate != nil {
-		l = m.LoginRuleCreate.Size()
+	if m.AccessRequestDelete != nil {
+		l = m.AccessRequestDelete.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_LoginRuleDelete) Size() (n int) {
+func (m *OneOf_SessionConnect) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.LoginRuleDelete != nil {
-		l = m.LoginRuleDelete.Size()
+	if m.SessionConnect != nil {
+		l = m.SessionConnect.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_SAMLIdPAuthAttempt) Size() (n int) {
+func (m *OneOf_CertificateCreate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.SAMLIdPAuthAttempt != nil {
-		l = m.SAMLIdPAuthAttempt.Size()
+	if m.CertificateCreate != nil {
+		l = m.CertificateCreate.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_SAMLIdPServiceProviderCreate) Size() (n int) {
+func (m *OneOf_DesktopRecording) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.SAMLIdPServiceProviderCreate != nil {
-		l = m.SAMLIdPServiceProviderCreate.Size()
+	if m.DesktopRecording != nil {
+		l = m.DesktopRecording.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_SAMLIdPServiceProviderUpdate) Size() (n int) {
+func (m *OneOf_DesktopClipboardSend) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.SAMLIdPServiceProviderUpdate != nil {
-		l = m.SAMLIdPServiceProviderUpdate.Size()
+	if m.DesktopClipboardSend != nil {
+		l = m.DesktopClipboardSend.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_SAMLIdPServiceProviderDelete) Size() (n int) {
+func (m *OneOf_DesktopClipboardReceive) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.SAMLIdPServiceProviderDelete != nil {
-		l = m.SAMLIdPServiceProviderDelete.Size()
+	if m.DesktopClipboardReceive != nil {
+		l = m.DesktopClipboardReceive.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_SAMLIdPServiceProviderDeleteAll) Size() (n int) {
+func (m *OneOf_MySQLStatementPrepare) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.SAMLIdPServiceProviderDeleteAll != nil {
-		l = m.SAMLIdPServiceProviderDeleteAll.Size()
+	if m.MySQLStatementPrepare != nil {
+		l = m.MySQLStatementPrepare.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_OpenSearchRequest) Size() (n int) {
+func (m *OneOf_MySQLStatementExecute) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.OpenSearchRequest != nil {
-		l = m.OpenSearchRequest.Size()
+	if m.MySQLStatementExecute != nil {
+		l = m.MySQLStatementExecute.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_DeviceEvent2) Size() (n int) {
+func (m *OneOf_MySQLStatementSendLongData) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.DeviceEvent2 != nil {
-		l = m.DeviceEvent2.Size()
+	if m.MySQLStatementSendLongData != nil {
+		l = m.MySQLStatementSendLongData.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_OktaResourcesUpdate) Size() (n int) {
+func (m *OneOf_MySQLStatementClose) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.OktaResourcesUpdate != nil {
-		l = m.OktaResourcesUpdate.Size()
+	if m.MySQLStatementClose != nil {
+		l = m.MySQLStatementClose.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_OktaSyncFailure) Size() (n int) {
+func (m *OneOf_MySQLStatementReset) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.OktaSyncFailure != nil {
-		l = m.OktaSyncFailure.Size()
+	if m.MySQLStatementReset != nil {
+		l = m.MySQLStatementReset.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_OktaAssignmentResult) Size() (n int) {
+func (m *OneOf_MySQLStatementFetch) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.OktaAssignmentResult != nil {
-		l = m.OktaAssignmentResult.Size()
+	if m.MySQLStatementFetch != nil {
+		l = m.MySQLStatementFetch.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_ProvisionTokenCreate) Size() (n int) {
+func (m *OneOf_MySQLStatementBulkExecute) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.ProvisionTokenCreate != nil {
-		l = m.ProvisionTokenCreate.Size()
+	if m.MySQLStatementBulkExecute != nil {
+		l = m.MySQLStatementBulkExecute.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_AccessListCreate) Size() (n int) {
+func (m *OneOf_RenewableCertificateGenerationMismatch) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.AccessListCreate != nil {
-		l = m.AccessListCreate.Size()
+	if m.RenewableCertificateGenerationMismatch != nil {
+		l = m.RenewableCertificateGenerationMismatch.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_AccessListUpdate) Size() (n int) {
+func (m *OneOf_Unknown) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.AccessListUpdate != nil {
-		l = m.AccessListUpdate.Size()
+	if m.Unknown != nil {
+		l = m.Unknown.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_AccessListDelete) Size() (n int) {
+func (m *OneOf_MySQLInitDB) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.AccessListDelete != nil {
-		l = m.AccessListDelete.Size()
+	if m.MySQLInitDB != nil {
+		l = m.MySQLInitDB.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_AccessListReview) Size() (n int) {
+func (m *OneOf_MySQLCreateDB) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.AccessListReview != nil {
-		l = m.AccessListReview.Size()
+	if m.MySQLCreateDB != nil {
+		l = m.MySQLCreateDB.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_AccessListMemberCreate) Size() (n int) {
+func (m *OneOf_MySQLDropDB) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.AccessListMemberCreate != nil {
-		l = m.AccessListMemberCreate.Size()
+	if m.MySQLDropDB != nil {
+		l = m.MySQLDropDB.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_AccessListMemberUpdate) Size() (n int) {
+func (m *OneOf_MySQLShutDown) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.AccessListMemberUpdate != nil {
-		l = m.AccessListMemberUpdate.Size()
+	if m.MySQLShutDown != nil {
+		l = m.MySQLShutDown.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_AccessListMemberDelete) Size() (n int) {
+func (m *OneOf_MySQLProcessKill) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.AccessListMemberDelete != nil {
-		l = m.AccessListMemberDelete.Size()
+	if m.MySQLProcessKill != nil {
+		l = m.MySQLProcessKill.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_AccessListMemberDeleteAllForAccessList) Size() (n int) {
+func (m *OneOf_MySQLDebug) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.AccessListMemberDeleteAllForAccessList != nil {
-		l = m.AccessListMemberDeleteAllForAccessList.Size()
+	if m.MySQLDebug != nil {
+		l = m.MySQLDebug.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_AuditQueryRun) Size() (n int) {
+func (m *OneOf_MySQLRefresh) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.AuditQueryRun != nil {
-		l = m.AuditQueryRun.Size()
+	if m.MySQLRefresh != nil {
+		l = m.MySQLRefresh.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_SecurityReportRun) Size() (n int) {
+func (m *OneOf_AccessRequestResourceSearch) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.SecurityReportRun != nil {
-		l = m.SecurityReportRun.Size()
+	if m.AccessRequestResourceSearch != nil {
+		l = m.AccessRequestResourceSearch.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_GithubConnectorUpdate) Size() (n int) {
+func (m *OneOf_SQLServerRPCRequest) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.GithubConnectorUpdate != nil {
-		l = m.GithubConnectorUpdate.Size()
+	if m.SQLServerRPCRequest != nil {
+		l = m.SQLServerRPCRequest.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_OIDCConnectorUpdate) Size() (n int) {
+func (m *OneOf_DatabaseSessionMalformedPacket) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.OIDCConnectorUpdate != nil {
-		l = m.OIDCConnectorUpdate.Size()
+	if m.DatabaseSessionMalformedPacket != nil {
+		l = m.DatabaseSessionMalformedPacket.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_SAMLConnectorUpdate) Size() (n int) {
+func (m *OneOf_SFTP) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.SAMLConnectorUpdate != nil {
-		l = m.SAMLConnectorUpdate.Size()
+	if m.SFTP != nil {
+		l = m.SFTP.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_RoleUpdate) Size() (n int) {
+func (m *OneOf_UpgradeWindowStartUpdate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.RoleUpdate != nil {
-		l = m.RoleUpdate.Size()
+	if m.UpgradeWindowStartUpdate != nil {
+		l = m.UpgradeWindowStartUpdate.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_UserUpdate) Size() (n int) {
+func (m *OneOf_AppSessionEnd) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.UserUpdate != nil {
-		l = m.UserUpdate.Size()
+	if m.AppSessionEnd != nil {
+		l = m.AppSessionEnd.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_ExternalAuditStorageEnable) Size() (n int) {
+func (m *OneOf_SessionRecordingAccess) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.ExternalAuditStorageEnable != nil {
-		l = m.ExternalAuditStorageEnable.Size()
+	if m.SessionRecordingAccess != nil {
+		l = m.SessionRecordingAccess.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_ExternalAuditStorageDisable) Size() (n int) {
+func (m *OneOf_KubernetesClusterCreate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.ExternalAuditStorageDisable != nil {
-		l = m.ExternalAuditStorageDisable.Size()
+	if m.KubernetesClusterCreate != nil {
+		l = m.KubernetesClusterCreate.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_BotCreate) Size() (n int) {
+func (m *OneOf_KubernetesClusterUpdate) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.BotCreate != nil {
-		l = m.BotCreate.Size()
+	if m.KubernetesClusterUpdate != nil {
+		l = m.KubernetesClusterUpdate.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_BotDelete) Size() (n int) {
+func (m *OneOf_KubernetesClusterDelete) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.BotDelete != nil {
-		l = m.BotDelete.Size()
+	if m.KubernetesClusterDelete != nil {
+		l = m.KubernetesClusterDelete.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_BotUpdate) Size() (n int) {
+func (m *OneOf_SSMRun) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.BotUpdate != nil {
-		l = m.BotUpdate.Size()
+	if m.SSMRun != nil {
+		l = m.SSMRun.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_CreateMFAAuthChallenge) Size() (n int) {
+func (m *OneOf_ElasticsearchRequest) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.CreateMFAAuthChallenge != nil {
-		l = m.CreateMFAAuthChallenge.Size()
+	if m.ElasticsearchRequest != nil {
+		l = m.ElasticsearchRequest.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_ValidateMFAAuthResponse) Size() (n int) {
+func (m *OneOf_CassandraBatch) Size() (n int) {
 	if m == nil {
 		return 0
 	}
 	var l int
 	_ = l
-	if m.ValidateMFAAuthResponse != nil {
-		l = m.ValidateMFAAuthResponse.Size()
+	if m.CassandraBatch != nil {
+		l = m.CassandraBatch.Size()
 		n += 2 + l + sovEvents(uint64(l))
 	}
 	return n
 }
-func (m *OneOf_OktaAccessListSync) Size() (n int) {
+func (m
*OneOf_CassandraPrepare) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.OktaAccessListSync != nil { - l = m.OktaAccessListSync.Size() + if m.CassandraPrepare != nil { + l = m.CassandraPrepare.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_DatabasePermissionUpdate) Size() (n int) { +func (m *OneOf_CassandraRegister) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.DatabasePermissionUpdate != nil { - l = m.DatabasePermissionUpdate.Size() + if m.CassandraRegister != nil { + l = m.CassandraRegister.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_SPIFFESVIDIssued) Size() (n int) { +func (m *OneOf_CassandraExecute) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.SPIFFESVIDIssued != nil { - l = m.SPIFFESVIDIssued.Size() + if m.CassandraExecute != nil { + l = m.CassandraExecute.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_OktaUserSync) Size() (n int) { +func (m *OneOf_AppSessionDynamoDBRequest) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.OktaUserSync != nil { - l = m.OktaUserSync.Size() + if m.AppSessionDynamoDBRequest != nil { + l = m.AppSessionDynamoDBRequest.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_AuthPreferenceUpdate) Size() (n int) { +func (m *OneOf_DesktopSharedDirectoryStart) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.AuthPreferenceUpdate != nil { - l = m.AuthPreferenceUpdate.Size() + if m.DesktopSharedDirectoryStart != nil { + l = m.DesktopSharedDirectoryStart.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_SessionRecordingConfigUpdate) Size() (n int) { +func (m *OneOf_DesktopSharedDirectoryRead) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.SessionRecordingConfigUpdate != nil { - l = m.SessionRecordingConfigUpdate.Size() + if m.DesktopSharedDirectoryRead != nil { + l = m.DesktopSharedDirectoryRead.Size() n += 2 + l + 
sovEvents(uint64(l)) } return n } -func (m *OneOf_ClusterNetworkingConfigUpdate) Size() (n int) { +func (m *OneOf_DesktopSharedDirectoryWrite) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.ClusterNetworkingConfigUpdate != nil { - l = m.ClusterNetworkingConfigUpdate.Size() + if m.DesktopSharedDirectoryWrite != nil { + l = m.DesktopSharedDirectoryWrite.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_DatabaseUserCreate) Size() (n int) { +func (m *OneOf_DynamoDBRequest) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.DatabaseUserCreate != nil { - l = m.DatabaseUserCreate.Size() + if m.DynamoDBRequest != nil { + l = m.DynamoDBRequest.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_DatabaseUserDeactivate) Size() (n int) { +func (m *OneOf_BotJoin) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.DatabaseUserDeactivate != nil { - l = m.DatabaseUserDeactivate.Size() + if m.BotJoin != nil { + l = m.BotJoin.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_AccessPathChanged) Size() (n int) { +func (m *OneOf_InstanceJoin) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.AccessPathChanged != nil { - l = m.AccessPathChanged.Size() + if m.InstanceJoin != nil { + l = m.InstanceJoin.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_SpannerRPC) Size() (n int) { +func (m *OneOf_DeviceEvent) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.SpannerRPC != nil { - l = m.SpannerRPC.Size() + if m.DeviceEvent != nil { + l = m.DeviceEvent.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_DatabaseSessionCommandResult) Size() (n int) { +func (m *OneOf_LoginRuleCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.DatabaseSessionCommandResult != nil { - l = m.DatabaseSessionCommandResult.Size() + if m.LoginRuleCreate != nil { + l = m.LoginRuleCreate.Size() n += 2 + l + sovEvents(uint64(l)) } 
return n } -func (m *OneOf_DiscoveryConfigCreate) Size() (n int) { +func (m *OneOf_LoginRuleDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.DiscoveryConfigCreate != nil { - l = m.DiscoveryConfigCreate.Size() + if m.LoginRuleDelete != nil { + l = m.LoginRuleDelete.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_DiscoveryConfigUpdate) Size() (n int) { +func (m *OneOf_SAMLIdPAuthAttempt) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.DiscoveryConfigUpdate != nil { - l = m.DiscoveryConfigUpdate.Size() + if m.SAMLIdPAuthAttempt != nil { + l = m.SAMLIdPAuthAttempt.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_DiscoveryConfigDelete) Size() (n int) { +func (m *OneOf_SAMLIdPServiceProviderCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.DiscoveryConfigDelete != nil { - l = m.DiscoveryConfigDelete.Size() + if m.SAMLIdPServiceProviderCreate != nil { + l = m.SAMLIdPServiceProviderCreate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_DiscoveryConfigDeleteAll) Size() (n int) { +func (m *OneOf_SAMLIdPServiceProviderUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.DiscoveryConfigDeleteAll != nil { - l = m.DiscoveryConfigDeleteAll.Size() + if m.SAMLIdPServiceProviderUpdate != nil { + l = m.SAMLIdPServiceProviderUpdate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_AccessGraphSettingsUpdate) Size() (n int) { +func (m *OneOf_SAMLIdPServiceProviderDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.AccessGraphSettingsUpdate != nil { - l = m.AccessGraphSettingsUpdate.Size() + if m.SAMLIdPServiceProviderDelete != nil { + l = m.SAMLIdPServiceProviderDelete.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_IntegrationCreate) Size() (n int) { +func (m *OneOf_SAMLIdPServiceProviderDeleteAll) Size() (n int) { if m == nil { return 0 } var l int _ = l - if 
m.IntegrationCreate != nil { - l = m.IntegrationCreate.Size() + if m.SAMLIdPServiceProviderDeleteAll != nil { + l = m.SAMLIdPServiceProviderDeleteAll.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_IntegrationUpdate) Size() (n int) { +func (m *OneOf_OpenSearchRequest) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.IntegrationUpdate != nil { - l = m.IntegrationUpdate.Size() + if m.OpenSearchRequest != nil { + l = m.OpenSearchRequest.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_IntegrationDelete) Size() (n int) { +func (m *OneOf_DeviceEvent2) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.IntegrationDelete != nil { - l = m.IntegrationDelete.Size() + if m.DeviceEvent2 != nil { + l = m.DeviceEvent2.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_SPIFFEFederationCreate) Size() (n int) { +func (m *OneOf_OktaResourcesUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.SPIFFEFederationCreate != nil { - l = m.SPIFFEFederationCreate.Size() + if m.OktaResourcesUpdate != nil { + l = m.OktaResourcesUpdate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_SPIFFEFederationDelete) Size() (n int) { +func (m *OneOf_OktaSyncFailure) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.SPIFFEFederationDelete != nil { - l = m.SPIFFEFederationDelete.Size() + if m.OktaSyncFailure != nil { + l = m.OktaSyncFailure.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_PluginCreate) Size() (n int) { +func (m *OneOf_OktaAssignmentResult) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.PluginCreate != nil { - l = m.PluginCreate.Size() + if m.OktaAssignmentResult != nil { + l = m.OktaAssignmentResult.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_PluginUpdate) Size() (n int) { +func (m *OneOf_ProvisionTokenCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if 
m.PluginUpdate != nil { - l = m.PluginUpdate.Size() + if m.ProvisionTokenCreate != nil { + l = m.ProvisionTokenCreate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_PluginDelete) Size() (n int) { +func (m *OneOf_AccessListCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.PluginDelete != nil { - l = m.PluginDelete.Size() + if m.AccessListCreate != nil { + l = m.AccessListCreate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_AutoUpdateConfigCreate) Size() (n int) { +func (m *OneOf_AccessListUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.AutoUpdateConfigCreate != nil { - l = m.AutoUpdateConfigCreate.Size() + if m.AccessListUpdate != nil { + l = m.AccessListUpdate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_AutoUpdateConfigUpdate) Size() (n int) { +func (m *OneOf_AccessListDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.AutoUpdateConfigUpdate != nil { - l = m.AutoUpdateConfigUpdate.Size() + if m.AccessListDelete != nil { + l = m.AccessListDelete.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_AutoUpdateConfigDelete) Size() (n int) { +func (m *OneOf_AccessListReview) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.AutoUpdateConfigDelete != nil { - l = m.AutoUpdateConfigDelete.Size() + if m.AccessListReview != nil { + l = m.AccessListReview.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_AutoUpdateVersionCreate) Size() (n int) { +func (m *OneOf_AccessListMemberCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.AutoUpdateVersionCreate != nil { - l = m.AutoUpdateVersionCreate.Size() + if m.AccessListMemberCreate != nil { + l = m.AccessListMemberCreate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_AutoUpdateVersionUpdate) Size() (n int) { +func (m *OneOf_AccessListMemberUpdate) Size() (n int) { if m == nil { return 0 } var l 
int _ = l - if m.AutoUpdateVersionUpdate != nil { - l = m.AutoUpdateVersionUpdate.Size() + if m.AccessListMemberUpdate != nil { + l = m.AccessListMemberUpdate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_AutoUpdateVersionDelete) Size() (n int) { +func (m *OneOf_AccessListMemberDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.AutoUpdateVersionDelete != nil { - l = m.AutoUpdateVersionDelete.Size() + if m.AccessListMemberDelete != nil { + l = m.AccessListMemberDelete.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_StaticHostUserCreate) Size() (n int) { +func (m *OneOf_AccessListMemberDeleteAllForAccessList) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.StaticHostUserCreate != nil { - l = m.StaticHostUserCreate.Size() + if m.AccessListMemberDeleteAllForAccessList != nil { + l = m.AccessListMemberDeleteAllForAccessList.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_StaticHostUserUpdate) Size() (n int) { +func (m *OneOf_AuditQueryRun) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.StaticHostUserUpdate != nil { - l = m.StaticHostUserUpdate.Size() + if m.AuditQueryRun != nil { + l = m.AuditQueryRun.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_StaticHostUserDelete) Size() (n int) { +func (m *OneOf_SecurityReportRun) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.StaticHostUserDelete != nil { - l = m.StaticHostUserDelete.Size() + if m.SecurityReportRun != nil { + l = m.SecurityReportRun.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_CrownJewelCreate) Size() (n int) { +func (m *OneOf_GithubConnectorUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.CrownJewelCreate != nil { - l = m.CrownJewelCreate.Size() + if m.GithubConnectorUpdate != nil { + l = m.GithubConnectorUpdate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_CrownJewelUpdate) 
Size() (n int) { +func (m *OneOf_OIDCConnectorUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.CrownJewelUpdate != nil { - l = m.CrownJewelUpdate.Size() + if m.OIDCConnectorUpdate != nil { + l = m.OIDCConnectorUpdate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_CrownJewelDelete) Size() (n int) { +func (m *OneOf_SAMLConnectorUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.CrownJewelDelete != nil { - l = m.CrownJewelDelete.Size() + if m.SAMLConnectorUpdate != nil { + l = m.SAMLConnectorUpdate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_UserTaskCreate) Size() (n int) { +func (m *OneOf_RoleUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.UserTaskCreate != nil { - l = m.UserTaskCreate.Size() + if m.RoleUpdate != nil { + l = m.RoleUpdate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_UserTaskUpdate) Size() (n int) { +func (m *OneOf_UserUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.UserTaskUpdate != nil { - l = m.UserTaskUpdate.Size() + if m.UserUpdate != nil { + l = m.UserUpdate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_UserTaskDelete) Size() (n int) { +func (m *OneOf_ExternalAuditStorageEnable) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.UserTaskDelete != nil { - l = m.UserTaskDelete.Size() + if m.ExternalAuditStorageEnable != nil { + l = m.ExternalAuditStorageEnable.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_SFTPSummary) Size() (n int) { +func (m *OneOf_ExternalAuditStorageDisable) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.SFTPSummary != nil { - l = m.SFTPSummary.Size() + if m.ExternalAuditStorageDisable != nil { + l = m.ExternalAuditStorageDisable.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_ContactCreate) Size() (n int) { +func (m *OneOf_BotCreate) Size() (n int) { if m == 
nil { return 0 } var l int _ = l - if m.ContactCreate != nil { - l = m.ContactCreate.Size() + if m.BotCreate != nil { + l = m.BotCreate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_ContactDelete) Size() (n int) { +func (m *OneOf_BotDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.ContactDelete != nil { - l = m.ContactDelete.Size() + if m.BotDelete != nil { + l = m.BotDelete.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_WorkloadIdentityCreate) Size() (n int) { +func (m *OneOf_BotUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.WorkloadIdentityCreate != nil { - l = m.WorkloadIdentityCreate.Size() + if m.BotUpdate != nil { + l = m.BotUpdate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_WorkloadIdentityUpdate) Size() (n int) { +func (m *OneOf_CreateMFAAuthChallenge) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.WorkloadIdentityUpdate != nil { - l = m.WorkloadIdentityUpdate.Size() + if m.CreateMFAAuthChallenge != nil { + l = m.CreateMFAAuthChallenge.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_WorkloadIdentityDelete) Size() (n int) { +func (m *OneOf_ValidateMFAAuthResponse) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.WorkloadIdentityDelete != nil { - l = m.WorkloadIdentityDelete.Size() + if m.ValidateMFAAuthResponse != nil { + l = m.ValidateMFAAuthResponse.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_GitCommand) Size() (n int) { +func (m *OneOf_OktaAccessListSync) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.GitCommand != nil { - l = m.GitCommand.Size() + if m.OktaAccessListSync != nil { + l = m.OktaAccessListSync.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_UserLoginAccessListInvalid) Size() (n int) { +func (m *OneOf_DatabasePermissionUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if 
m.UserLoginAccessListInvalid != nil { - l = m.UserLoginAccessListInvalid.Size() + if m.DatabasePermissionUpdate != nil { + l = m.DatabasePermissionUpdate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_AccessRequestExpire) Size() (n int) { +func (m *OneOf_SPIFFESVIDIssued) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.AccessRequestExpire != nil { - l = m.AccessRequestExpire.Size() + if m.SPIFFESVIDIssued != nil { + l = m.SPIFFESVIDIssued.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_StableUNIXUserCreate) Size() (n int) { +func (m *OneOf_OktaUserSync) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.StableUNIXUserCreate != nil { - l = m.StableUNIXUserCreate.Size() + if m.OktaUserSync != nil { + l = m.OktaUserSync.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_WorkloadIdentityX509RevocationCreate) Size() (n int) { +func (m *OneOf_AuthPreferenceUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.WorkloadIdentityX509RevocationCreate != nil { - l = m.WorkloadIdentityX509RevocationCreate.Size() + if m.AuthPreferenceUpdate != nil { + l = m.AuthPreferenceUpdate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_WorkloadIdentityX509RevocationDelete) Size() (n int) { +func (m *OneOf_SessionRecordingConfigUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.WorkloadIdentityX509RevocationDelete != nil { - l = m.WorkloadIdentityX509RevocationDelete.Size() + if m.SessionRecordingConfigUpdate != nil { + l = m.SessionRecordingConfigUpdate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_WorkloadIdentityX509RevocationUpdate) Size() (n int) { +func (m *OneOf_ClusterNetworkingConfigUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.WorkloadIdentityX509RevocationUpdate != nil { - l = m.WorkloadIdentityX509RevocationUpdate.Size() + if m.ClusterNetworkingConfigUpdate != nil { + l = 
m.ClusterNetworkingConfigUpdate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_AWSICResourceSync) Size() (n int) { +func (m *OneOf_DatabaseUserCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.AWSICResourceSync != nil { - l = m.AWSICResourceSync.Size() + if m.DatabaseUserCreate != nil { + l = m.DatabaseUserCreate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_HealthCheckConfigCreate) Size() (n int) { +func (m *OneOf_DatabaseUserDeactivate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.HealthCheckConfigCreate != nil { - l = m.HealthCheckConfigCreate.Size() - n += 2 + l + sovEvents(uint64(l)) + if m.DatabaseUserDeactivate != nil { + l = m.DatabaseUserDeactivate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_HealthCheckConfigUpdate) Size() (n int) { +func (m *OneOf_AccessPathChanged) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.HealthCheckConfigUpdate != nil { - l = m.HealthCheckConfigUpdate.Size() + if m.AccessPathChanged != nil { + l = m.AccessPathChanged.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_HealthCheckConfigDelete) Size() (n int) { +func (m *OneOf_SpannerRPC) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.HealthCheckConfigDelete != nil { - l = m.HealthCheckConfigDelete.Size() + if m.SpannerRPC != nil { + l = m.SpannerRPC.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_WorkloadIdentityX509IssuerOverrideCreate) Size() (n int) { +func (m *OneOf_DatabaseSessionCommandResult) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.WorkloadIdentityX509IssuerOverrideCreate != nil { - l = m.WorkloadIdentityX509IssuerOverrideCreate.Size() + if m.DatabaseSessionCommandResult != nil { + l = m.DatabaseSessionCommandResult.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_WorkloadIdentityX509IssuerOverrideDelete) Size() (n int) { +func (m 
*OneOf_DiscoveryConfigCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.WorkloadIdentityX509IssuerOverrideDelete != nil { - l = m.WorkloadIdentityX509IssuerOverrideDelete.Size() + if m.DiscoveryConfigCreate != nil { + l = m.DiscoveryConfigCreate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_SigstorePolicyCreate) Size() (n int) { +func (m *OneOf_DiscoveryConfigUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.SigstorePolicyCreate != nil { - l = m.SigstorePolicyCreate.Size() + if m.DiscoveryConfigUpdate != nil { + l = m.DiscoveryConfigUpdate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_SigstorePolicyUpdate) Size() (n int) { +func (m *OneOf_DiscoveryConfigDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.SigstorePolicyUpdate != nil { - l = m.SigstorePolicyUpdate.Size() + if m.DiscoveryConfigDelete != nil { + l = m.DiscoveryConfigDelete.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_SigstorePolicyDelete) Size() (n int) { +func (m *OneOf_DiscoveryConfigDeleteAll) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.SigstorePolicyDelete != nil { - l = m.SigstorePolicyDelete.Size() + if m.DiscoveryConfigDeleteAll != nil { + l = m.DiscoveryConfigDeleteAll.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_AutoUpdateAgentRolloutTrigger) Size() (n int) { +func (m *OneOf_AccessGraphSettingsUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.AutoUpdateAgentRolloutTrigger != nil { - l = m.AutoUpdateAgentRolloutTrigger.Size() + if m.AccessGraphSettingsUpdate != nil { + l = m.AccessGraphSettingsUpdate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_AutoUpdateAgentRolloutForceDone) Size() (n int) { +func (m *OneOf_IntegrationCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.AutoUpdateAgentRolloutForceDone != nil { - l = 
m.AutoUpdateAgentRolloutForceDone.Size() + if m.IntegrationCreate != nil { + l = m.IntegrationCreate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *OneOf_AutoUpdateAgentRolloutRollback) Size() (n int) { +func (m *OneOf_IntegrationUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.AutoUpdateAgentRolloutRollback != nil { - l = m.AutoUpdateAgentRolloutRollback.Size() + if m.IntegrationUpdate != nil { + l = m.IntegrationUpdate.Size() n += 2 + l + sovEvents(uint64(l)) } return n } -func (m *StreamStatus) Size() (n int) { +func (m *OneOf_IntegrationDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.UploadID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.LastEventIndex != 0 { - n += 1 + sovEvents(uint64(m.LastEventIndex)) - } - l = github_com_gogo_protobuf_types.SizeOfStdTime(m.LastUploadTime) - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.IntegrationDelete != nil { + l = m.IntegrationDelete.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *SessionUpload) Size() (n int) { +func (m *OneOf_SPIFFEFederationCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.SessionURL) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.SPIFFEFederationCreate != nil { + l = m.SPIFFEFederationCreate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *Identity) Size() (n int) { +func (m *OneOf_SPIFFEFederationDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.User) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Impersonator) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if len(m.Roles) > 0 { - for _, s := range m.Roles { - l = len(s) - n += 1 + l + 
sovEvents(uint64(l)) - } - } - if len(m.Usage) > 0 { - for _, s := range m.Usage { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } - } - if len(m.Logins) > 0 { - for _, s := range m.Logins { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } - } - if len(m.KubernetesGroups) > 0 { - for _, s := range m.KubernetesGroups { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } - } - if len(m.KubernetesUsers) > 0 { - for _, s := range m.KubernetesUsers { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } - } - l = github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires) - n += 1 + l + sovEvents(uint64(l)) - l = len(m.RouteToCluster) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.KubernetesCluster) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = m.Traits.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.RouteToApp != nil { - l = m.RouteToApp.Size() - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.TeleportCluster) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.RouteToDatabase != nil { - l = m.RouteToDatabase.Size() - n += 1 + l + sovEvents(uint64(l)) - } - if len(m.DatabaseNames) > 0 { - for _, s := range m.DatabaseNames { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } - } - if len(m.DatabaseUsers) > 0 { - for _, s := range m.DatabaseUsers { - l = len(s) - n += 2 + l + sovEvents(uint64(l)) - } - } - l = len(m.MFADeviceUUID) - if l > 0 { - n += 2 + l + sovEvents(uint64(l)) - } - l = len(m.ClientIP) - if l > 0 { - n += 2 + l + sovEvents(uint64(l)) - } - if len(m.AWSRoleARNs) > 0 { - for _, s := range m.AWSRoleARNs { - l = len(s) - n += 2 + l + sovEvents(uint64(l)) - } - } - if len(m.AccessRequests) > 0 { - for _, s := range m.AccessRequests { - l = len(s) - n += 2 + l + sovEvents(uint64(l)) - } - } - if m.DisallowReissue { - n += 3 - } - if len(m.AllowedResourceIDs) > 0 { - for _, e := range m.AllowedResourceIDs { - l = e.Size() - n += 2 + l + sovEvents(uint64(l)) - } - } - l = 
github_com_gogo_protobuf_types.SizeOfStdTime(m.PreviousIdentityExpires) - n += 2 + l + sovEvents(uint64(l)) - if len(m.AzureIdentities) > 0 { - for _, s := range m.AzureIdentities { - l = len(s) - n += 2 + l + sovEvents(uint64(l)) - } - } - if len(m.GCPServiceAccounts) > 0 { - for _, s := range m.GCPServiceAccounts { - l = len(s) - n += 2 + l + sovEvents(uint64(l)) - } - } - l = len(m.PrivateKeyPolicy) - if l > 0 { - n += 2 + l + sovEvents(uint64(l)) - } - l = len(m.BotName) - if l > 0 { + if m.SPIFFEFederationDelete != nil { + l = m.SPIFFEFederationDelete.Size() n += 2 + l + sovEvents(uint64(l)) } - if m.DeviceExtensions != nil { - l = m.DeviceExtensions.Size() - n += 2 + l + sovEvents(uint64(l)) + return n +} +func (m *OneOf_PluginCreate) Size() (n int) { + if m == nil { + return 0 } - l = len(m.BotInstanceID) - if l > 0 { + var l int + _ = l + if m.PluginCreate != nil { + l = m.PluginCreate.Size() n += 2 + l + sovEvents(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } return n } - -func (m *RouteToApp) Size() (n int) { +func (m *OneOf_PluginUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.Name) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.SessionID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.PublicAddr) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.ClusterName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.AWSRoleARN) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.AzureIdentity) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.GCPServiceAccount) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.URI) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.TargetPort != 0 { - n += 1 + sovEvents(uint64(m.TargetPort)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.PluginUpdate != nil { + l = m.PluginUpdate.Size() + n += 2 + l + sovEvents(uint64(l)) 
} return n } - -func (m *RouteToDatabase) Size() (n int) { +func (m *OneOf_PluginDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.ServiceName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Protocol) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Username) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Database) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if len(m.Roles) > 0 { - for _, s := range m.Roles { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.PluginDelete != nil { + l = m.PluginDelete.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *DeviceExtensions) Size() (n int) { +func (m *OneOf_AutoUpdateConfigCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.DeviceId) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.AssetTag) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.CredentialId) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.AutoUpdateConfigCreate != nil { + l = m.AutoUpdateConfigCreate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *AccessRequestResourceSearch) Size() (n int) { +func (m *OneOf_AutoUpdateConfigUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if len(m.SearchAsRoles) > 0 { - for _, s := range m.SearchAsRoles { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } - } - l = len(m.ResourceType) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Namespace) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if len(m.Labels) > 0 { - for k, v := range m.Labels { - _ = k - _ = v - mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + len(v) 
+ sovEvents(uint64(len(v))) - n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) - } - } - l = len(m.PredicateExpression) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if len(m.SearchKeywords) > 0 { - for _, s := range m.SearchKeywords { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.AutoUpdateConfigUpdate != nil { + l = m.AutoUpdateConfigUpdate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *MySQLStatementPrepare) Size() (n int) { +func (m *OneOf_AutoUpdateConfigDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.Query) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.AutoUpdateConfigDelete != nil { + l = m.AutoUpdateConfigDelete.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *MySQLStatementExecute) Size() (n int) { +func (m *OneOf_AutoUpdateVersionCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.StatementID != 0 { - n += 1 + sovEvents(uint64(m.StatementID)) - } - if len(m.Parameters) > 0 { - for _, s := range m.Parameters { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.AutoUpdateVersionCreate != nil { + l = m.AutoUpdateVersionCreate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m 
*MySQLStatementSendLongData) Size() (n int) { +func (m *OneOf_AutoUpdateVersionUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.StatementID != 0 { - n += 1 + sovEvents(uint64(m.StatementID)) - } - if m.ParameterID != 0 { - n += 1 + sovEvents(uint64(m.ParameterID)) - } - if m.DataSize != 0 { - n += 1 + sovEvents(uint64(m.DataSize)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.AutoUpdateVersionUpdate != nil { + l = m.AutoUpdateVersionUpdate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *MySQLStatementClose) Size() (n int) { +func (m *OneOf_AutoUpdateVersionDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.StatementID != 0 { - n += 1 + sovEvents(uint64(m.StatementID)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.AutoUpdateVersionDelete != nil { + l = m.AutoUpdateVersionDelete.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *MySQLStatementReset) Size() (n int) { +func (m *OneOf_StaticHostUserCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.StatementID != 0 { - n += 1 + sovEvents(uint64(m.StatementID)) - } - if m.XXX_unrecognized != nil { - 
n += len(m.XXX_unrecognized) + if m.StaticHostUserCreate != nil { + l = m.StaticHostUserCreate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *MySQLStatementFetch) Size() (n int) { +func (m *OneOf_StaticHostUserUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.StatementID != 0 { - n += 1 + sovEvents(uint64(m.StatementID)) - } - if m.RowsCount != 0 { - n += 1 + sovEvents(uint64(m.RowsCount)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.StaticHostUserUpdate != nil { + l = m.StaticHostUserUpdate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *MySQLStatementBulkExecute) Size() (n int) { +func (m *OneOf_StaticHostUserDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.StatementID != 0 { - n += 1 + sovEvents(uint64(m.StatementID)) - } - if len(m.Parameters) > 0 { - for _, s := range m.Parameters { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.StaticHostUserDelete != nil { + l = m.StaticHostUserDelete.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *MySQLInitDB) Size() (n int) { +func (m *OneOf_CrownJewelCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + 
sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.SchemaName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.CrownJewelCreate != nil { + l = m.CrownJewelCreate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *MySQLCreateDB) Size() (n int) { +func (m *OneOf_CrownJewelUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.SchemaName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.CrownJewelUpdate != nil { + l = m.CrownJewelUpdate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *MySQLDropDB) Size() (n int) { +func (m *OneOf_CrownJewelDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.SchemaName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.CrownJewelDelete != nil { + l = m.CrownJewelDelete.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *MySQLShutDown) Size() (n int) { +func (m *OneOf_UserTaskCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() 
- n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.UserTaskCreate != nil { + l = m.UserTaskCreate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *MySQLProcessKill) Size() (n int) { +func (m *OneOf_UserTaskUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.ProcessID != 0 { - n += 1 + sovEvents(uint64(m.ProcessID)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.UserTaskUpdate != nil { + l = m.UserTaskUpdate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *MySQLDebug) Size() (n int) { +func (m *OneOf_UserTaskDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.UserTaskDelete != nil { + l = m.UserTaskDelete.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *MySQLRefresh) Size() (n int) { +func (m *OneOf_SFTPSummary) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.Subcommand) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.SFTPSummary != nil { + l = m.SFTPSummary.Size() 
+ n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *SQLServerRPCRequest) Size() (n int) { +func (m *OneOf_ContactCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.Procname) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if len(m.Parameters) > 0 { - for _, s := range m.Parameters { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.ContactCreate != nil { + l = m.ContactCreate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *DatabaseSessionMalformedPacket) Size() (n int) { +func (m *OneOf_ContactDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.Payload) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.ContactDelete != nil { + l = m.ContactDelete.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *ElasticsearchRequest) Size() (n int) { +func (m *OneOf_WorkloadIdentityCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.Path) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.RawQuery) - if l > 0 { - n += 1 + l + 
sovEvents(uint64(l)) - } - l = len(m.Method) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Body) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = m.Headers.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.Category != 0 { - n += 1 + sovEvents(uint64(m.Category)) - } - l = len(m.Target) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Query) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.StatusCode != 0 { - n += 1 + sovEvents(uint64(m.StatusCode)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.WorkloadIdentityCreate != nil { + l = m.WorkloadIdentityCreate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *OpenSearchRequest) Size() (n int) { +func (m *OneOf_WorkloadIdentityUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.Path) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.RawQuery) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Method) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Body) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = m.Headers.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.Category != 0 { - n += 1 + sovEvents(uint64(m.Category)) - } - l = len(m.Target) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Query) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.StatusCode != 0 { - n += 1 + sovEvents(uint64(m.StatusCode)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.WorkloadIdentityUpdate != nil { + l = m.WorkloadIdentityUpdate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *DynamoDBRequest) Size() (n int) { +func (m 
*OneOf_WorkloadIdentityDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.StatusCode != 0 { - n += 1 + sovEvents(uint64(m.StatusCode)) - } - l = len(m.Path) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.RawQuery) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Method) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Target) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.Body != nil { - l = m.Body.Size() - n += 1 + l + sovEvents(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.WorkloadIdentityDelete != nil { + l = m.WorkloadIdentityDelete.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *AppSessionDynamoDBRequest) Size() (n int) { +func (m *OneOf_GitCommand) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.AppMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.AWSRequestMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.SessionChunkID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.StatusCode != 0 { - n += 1 + sovEvents(uint64(m.StatusCode)) - } - l = len(m.Path) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.RawQuery) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Method) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Target) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.GitCommand != nil { + l = m.GitCommand.Size() + n += 2 + l + sovEvents(uint64(l)) } - if m.Body != nil { - l = m.Body.Size() - n += 1 + l + sovEvents(uint64(l)) + return 
n +} +func (m *OneOf_UserLoginAccessListInvalid) Size() (n int) { + if m == nil { + return 0 } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + var l int + _ = l + if m.UserLoginAccessListInvalid != nil { + l = m.UserLoginAccessListInvalid.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *UpgradeWindowStartMetadata) Size() (n int) { +func (m *OneOf_AccessRequestExpire) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.UpgradeWindowStart) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.AccessRequestExpire != nil { + l = m.AccessRequestExpire.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *UpgradeWindowStartUpdate) Size() (n int) { +func (m *OneOf_StableUNIXUserCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UpgradeWindowStartMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.StableUNIXUserCreate != nil { + l = m.StableUNIXUserCreate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *SessionRecordingAccess) Size() (n int) { +func (m *OneOf_WorkloadIdentityX509RevocationCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.SessionID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.SessionType) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.WorkloadIdentityX509RevocationCreate != nil { + l = m.WorkloadIdentityX509RevocationCreate.Size() + n += 2 + l + sovEvents(uint64(l)) } - l = len(m.Format) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + return 
n +} +func (m *OneOf_WorkloadIdentityX509RevocationDelete) Size() (n int) { + if m == nil { + return 0 } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + var l int + _ = l + if m.WorkloadIdentityX509RevocationDelete != nil { + l = m.WorkloadIdentityX509RevocationDelete.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *KubeClusterMetadata) Size() (n int) { +func (m *OneOf_WorkloadIdentityX509RevocationUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - if len(m.KubeLabels) > 0 { - for k, v := range m.KubeLabels { - _ = k - _ = v - mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + len(v) + sovEvents(uint64(len(v))) - n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) - } - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.WorkloadIdentityX509RevocationUpdate != nil { + l = m.WorkloadIdentityX509RevocationUpdate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *KubernetesClusterCreate) Size() (n int) { +func (m *OneOf_AWSICResourceSync) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.KubeClusterMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.AWSICResourceSync != nil { + l = m.AWSICResourceSync.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *KubernetesClusterUpdate) Size() (n int) { +func (m *OneOf_HealthCheckConfigCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.KubeClusterMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != 
nil { - n += len(m.XXX_unrecognized) + if m.HealthCheckConfigCreate != nil { + l = m.HealthCheckConfigCreate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *KubernetesClusterDelete) Size() (n int) { +func (m *OneOf_HealthCheckConfigUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.HealthCheckConfigUpdate != nil { + l = m.HealthCheckConfigUpdate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *SSMRun) Size() (n int) { +func (m *OneOf_HealthCheckConfigDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.CommandID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.InstanceID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.ExitCode != 0 { - n += 1 + sovEvents(uint64(m.ExitCode)) - } - l = len(m.Status) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.AccountID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Region) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.StandardOutput) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.StandardError) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.InvocationURL) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.HealthCheckConfigDelete != nil { + l = m.HealthCheckConfigDelete.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *CassandraPrepare) Size() (n int) { +func (m *OneOf_WorkloadIdentityX509IssuerOverrideCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - 
l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.Query) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Keyspace) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.WorkloadIdentityX509IssuerOverrideCreate != nil { + l = m.WorkloadIdentityX509IssuerOverrideCreate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *CassandraExecute) Size() (n int) { +func (m *OneOf_WorkloadIdentityX509IssuerOverrideDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.QueryId) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.WorkloadIdentityX509IssuerOverrideDelete != nil { + l = m.WorkloadIdentityX509IssuerOverrideDelete.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *CassandraBatch) Size() (n int) { +func (m *OneOf_SigstorePolicyCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.Consistency) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Keyspace) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.BatchType) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if len(m.Children) > 0 { - for _, e := range m.Children { - l = e.Size() - n 
+= 1 + l + sovEvents(uint64(l)) - } - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.SigstorePolicyCreate != nil { + l = m.SigstorePolicyCreate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *CassandraBatch_BatchChild) Size() (n int) { +func (m *OneOf_SigstorePolicyUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.ID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Query) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if len(m.Values) > 0 { - for _, e := range m.Values { - l = e.Size() - n += 1 + l + sovEvents(uint64(l)) - } - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.SigstorePolicyUpdate != nil { + l = m.SigstorePolicyUpdate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *CassandraBatch_BatchChild_Value) Size() (n int) { +func (m *OneOf_SigstorePolicyDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - if m.Type != 0 { - n += 1 + sovEvents(uint64(m.Type)) - } - l = len(m.Contents) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.SigstorePolicyDelete != nil { + l = m.SigstorePolicyDelete.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *CassandraRegister) Size() (n int) { +func (m *OneOf_AutoUpdateAgentRolloutTrigger) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.DatabaseMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if len(m.EventTypes) > 0 { - for _, s := range m.EventTypes { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.AutoUpdateAgentRolloutTrigger != nil { + l = m.AutoUpdateAgentRolloutTrigger.Size() + n += 2 + l 
+ sovEvents(uint64(l)) } return n } - -func (m *LoginRuleCreate) Size() (n int) { +func (m *OneOf_AutoUpdateAgentRolloutForceDone) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.AutoUpdateAgentRolloutForceDone != nil { + l = m.AutoUpdateAgentRolloutForceDone.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *LoginRuleDelete) Size() (n int) { +func (m *OneOf_AutoUpdateAgentRolloutRollback) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.AutoUpdateAgentRolloutRollback != nil { + l = m.AutoUpdateAgentRolloutRollback.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *SAMLIdPAuthAttempt) Size() (n int) { +func (m *OneOf_MCPSessionStart) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SessionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SAMLIdPServiceProviderMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.MCPSessionStart != nil { + l = m.MCPSessionStart.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *SAMLIdPServiceProviderCreate) Size() (n int) { +func (m *OneOf_MCPSessionEnd) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = 
m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SAMLIdPServiceProviderMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.MCPSessionEnd != nil { + l = m.MCPSessionEnd.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *SAMLIdPServiceProviderUpdate) Size() (n int) { +func (m *OneOf_MCPSessionRequest) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SAMLIdPServiceProviderMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.MCPSessionRequest != nil { + l = m.MCPSessionRequest.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *SAMLIdPServiceProviderDelete) Size() (n int) { +func (m *OneOf_MCPSessionNotification) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.SAMLIdPServiceProviderMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.MCPSessionNotification != nil { + l = m.MCPSessionNotification.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *SAMLIdPServiceProviderDeleteAll) Size() (n int) { +func (m *OneOf_BoundKeypairRecovery) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.BoundKeypairRecovery != nil { + l = m.BoundKeypairRecovery.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *OktaResourcesUpdate) Size() (n int) { +func (m *OneOf_BoundKeypairRotation) Size() (n int) { if m == 
nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ServerMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.OktaResourcesUpdatedMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.BoundKeypairRotation != nil { + l = m.BoundKeypairRotation.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *OktaSyncFailure) Size() (n int) { +func (m *OneOf_BoundKeypairJoinStateVerificationFailed) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ServerMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.BoundKeypairJoinStateVerificationFailed != nil { + l = m.BoundKeypairJoinStateVerificationFailed.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *OktaAssignmentResult) Size() (n int) { +func (m *OneOf_MCPSessionListenSSEStream) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ServerMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.OktaAssignmentMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.MCPSessionListenSSEStream != nil { + l = m.MCPSessionListenSSEStream.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *AccessListCreate) Size() (n int) { +func (m *OneOf_MCPSessionInvalidHTTPRequest) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + 
sovEvents(uint64(l)) - l = len(m.AccessListTitle) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.MCPSessionInvalidHTTPRequest != nil { + l = m.MCPSessionInvalidHTTPRequest.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *AccessListUpdate) Size() (n int) { +func (m *OneOf_SCIMListingEvent) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.AccessListTitle) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.SCIMListingEvent != nil { + l = m.SCIMListingEvent.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *AccessListDelete) Size() (n int) { +func (m *OneOf_SCIMResourceEvent) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.AccessListTitle) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.SCIMResourceEvent != nil { + l = m.SCIMResourceEvent.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *AccessListMemberCreate) Size() (n int) { +func (m *OneOf_ClientIPRestrictionsUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.AccessListMemberMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.ClientIPRestrictionsUpdate != nil 
{ + l = m.ClientIPRestrictionsUpdate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *AccessListMemberUpdate) Size() (n int) { +func (m *OneOf_VnetConfigCreate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.AccessListMemberMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.VnetConfigCreate != nil { + l = m.VnetConfigCreate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *AccessListMemberDelete) Size() (n int) { +func (m *OneOf_VnetConfigUpdate) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.AccessListMemberMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.VnetConfigUpdate != nil { + l = m.VnetConfigUpdate.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *AccessListMemberDeleteAllForAccessList) Size() (n int) { +func (m *OneOf_VnetConfigDelete) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.AccessListMemberMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if m.VnetConfigDelete != nil { + l = m.VnetConfigDelete.Size() + n += 2 + l + sovEvents(uint64(l)) } return n } - -func (m *AccessListReview) Size() (n int) { +func (m *StreamStatus) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = 
m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.AccessListReviewMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() + l = len(m.UploadID) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.LastEventIndex != 0 { + n += 1 + sovEvents(uint64(m.LastEventIndex)) + } + l = github_com_gogo_protobuf_types.SizeOfStdTime(m.LastUploadTime) n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -50953,7 +53542,7 @@ func (m *AccessListReview) Size() (n int) { return n } -func (m *AuditQueryRun) Size() (n int) { +func (m *SessionUpload) Size() (n int) { if m == nil { return 0 } @@ -50961,118 +53550,211 @@ func (m *AuditQueryRun) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.AuditQueryDetails.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = len(m.SessionURL) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *AuditQueryDetails) Size() (n int) { +func (m *Identity) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.Name) + l = len(m.User) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.Query) + l = len(m.Impersonator) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.Days != 0 { - n += 1 + sovEvents(uint64(m.Days)) + if len(m.Roles) > 0 { + for _, s := range m.Roles { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } } - if m.ExecutionTimeInMillis != 0 { - n += 1 + sovEvents(uint64(m.ExecutionTimeInMillis)) + if len(m.Usage) > 0 { + for _, s := range m.Usage { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } } - if m.DataScannedInBytes != 0 { - n += 1 + sovEvents(uint64(m.DataScannedInBytes)) + if len(m.Logins) > 0 { 
+ for _, s := range m.Logins { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if len(m.KubernetesGroups) > 0 { + for _, s := range m.KubernetesGroups { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } } - return n -} - -func (m *SecurityReportRun) Size() (n int) { - if m == nil { - return 0 + if len(m.KubernetesUsers) > 0 { + for _, s := range m.KubernetesUsers { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } } - var l int - _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() + l = github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires) n += 1 + l + sovEvents(uint64(l)) - l = len(m.Name) + l = len(m.RouteToCluster) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.Version) + l = len(m.KubernetesCluster) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.TotalExecutionTimeInMillis != 0 { - n += 1 + sovEvents(uint64(m.TotalExecutionTimeInMillis)) + l = m.Traits.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.RouteToApp != nil { + l = m.RouteToApp.Size() + n += 1 + l + sovEvents(uint64(l)) } - if m.TotalDataScannedInBytes != 0 { - n += 1 + sovEvents(uint64(m.TotalDataScannedInBytes)) + l = len(m.TeleportCluster) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - if len(m.AuditQueries) > 0 { - for _, e := range m.AuditQueries { - l = e.Size() + if m.RouteToDatabase != nil { + l = m.RouteToDatabase.Size() + n += 1 + l + sovEvents(uint64(l)) + } + if len(m.DatabaseNames) > 0 { + for _, s := range m.DatabaseNames { + l = len(s) n += 1 + l + sovEvents(uint64(l)) } } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if len(m.DatabaseUsers) > 0 { + for _, s := range m.DatabaseUsers { + l = len(s) + n += 2 + l + sovEvents(uint64(l)) + } } - return n -} + l = len(m.MFADeviceUUID) + if l > 0 { + n += 2 + l + sovEvents(uint64(l)) + } + l = len(m.ClientIP) + if l > 0 
{ + n += 2 + l + sovEvents(uint64(l)) + } + if len(m.AWSRoleARNs) > 0 { + for _, s := range m.AWSRoleARNs { + l = len(s) + n += 2 + l + sovEvents(uint64(l)) + } + } + if len(m.AccessRequests) > 0 { + for _, s := range m.AccessRequests { + l = len(s) + n += 2 + l + sovEvents(uint64(l)) + } + } + if m.DisallowReissue { + n += 3 + } + if len(m.AllowedResourceIDs) > 0 { + for _, e := range m.AllowedResourceIDs { + l = e.Size() + n += 2 + l + sovEvents(uint64(l)) + } + } + l = github_com_gogo_protobuf_types.SizeOfStdTime(m.PreviousIdentityExpires) + n += 2 + l + sovEvents(uint64(l)) + if len(m.AzureIdentities) > 0 { + for _, s := range m.AzureIdentities { + l = len(s) + n += 2 + l + sovEvents(uint64(l)) + } + } + if len(m.GCPServiceAccounts) > 0 { + for _, s := range m.GCPServiceAccounts { + l = len(s) + n += 2 + l + sovEvents(uint64(l)) + } + } + l = len(m.PrivateKeyPolicy) + if l > 0 { + n += 2 + l + sovEvents(uint64(l)) + } + l = len(m.BotName) + if l > 0 { + n += 2 + l + sovEvents(uint64(l)) + } + if m.DeviceExtensions != nil { + l = m.DeviceExtensions.Size() + n += 2 + l + sovEvents(uint64(l)) + } + l = len(m.BotInstanceID) + if l > 0 { + n += 2 + l + sovEvents(uint64(l)) + } + l = len(m.JoinToken) + if l > 0 { + n += 2 + l + sovEvents(uint64(l)) + } + if m.ScopePin != nil { + l = m.ScopePin.Size() + n += 2 + l + sovEvents(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} -func (m *ExternalAuditStorageEnable) Size() (n int) { +func (m *ScopePin) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.Details != nil { - l = m.Details.Size() + l = len(m.Scope) + if l > 0 { n += 1 + l + sovEvents(uint64(l)) } + if len(m.Assignments) > 0 { + for k, v := range m.Assignments { + _ = k + _ = v + l = 0 + if v != nil { + l = v.Size() + l += 1 + sovEvents(uint64(l)) + } + mapEntrySize := 1 + 
len(k) + sovEvents(uint64(len(k))) + l + n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) + } + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *ExternalAuditStorageDisable) Size() (n int) { +func (m *ScopePinnedAssignments) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.Details != nil { - l = m.Details.Size() - n += 1 + l + sovEvents(uint64(l)) + if len(m.Roles) > 0 { + for _, s := range m.Roles { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -51080,47 +53762,46 @@ func (m *ExternalAuditStorageDisable) Size() (n int) { return n } -func (m *ExternalAuditStorageDetails) Size() (n int) { +func (m *RouteToApp) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.IntegrationName) + l = len(m.Name) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.SessionRecordingsUri) + l = len(m.SessionID) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.AthenaWorkgroup) + l = len(m.PublicAddr) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.GlueDatabase) + l = len(m.ClusterName) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.GlueTable) + l = len(m.AWSRoleARN) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.AuditEventsLongTermUri) + l = len(m.AzureIdentity) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.AthenaResultsUri) + l = len(m.GCPServiceAccount) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.PolicyName) + l = len(m.URI) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.Region) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.TargetPort != 0 { + n += 1 + sovEvents(uint64(m.TargetPort)) } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -51128,36 +53809,33 @@ func (m *ExternalAuditStorageDetails) Size() 
(n int) { return n } -func (m *OktaAccessListSync) Size() (n int) { +func (m *RouteToDatabase) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.NumAppFilters != 0 { - n += 1 + sovEvents(uint64(m.NumAppFilters)) - } - if m.NumGroupFilters != 0 { - n += 1 + sovEvents(uint64(m.NumGroupFilters)) - } - if m.NumApps != 0 { - n += 1 + sovEvents(uint64(m.NumApps)) + l = len(m.ServiceName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - if m.NumGroups != 0 { - n += 1 + sovEvents(uint64(m.NumGroups)) + l = len(m.Protocol) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - if m.NumRoles != 0 { - n += 1 + sovEvents(uint64(m.NumRoles)) + l = len(m.Username) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - if m.NumAccessLists != 0 { - n += 1 + sovEvents(uint64(m.NumAccessLists)) + l = len(m.Database) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - if m.NumAccessListMembers != 0 { - n += 1 + sovEvents(uint64(m.NumAccessListMembers)) + if len(m.Roles) > 0 { + for _, s := range m.Roles { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -51165,35 +53843,23 @@ func (m *OktaAccessListSync) Size() (n int) { return n } -func (m *OktaUserSync) Size() (n int) { +func (m *DeviceExtensions) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.OrgUrl) + l = len(m.DeviceId) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.AppId) + l = len(m.AssetTag) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.NumUsersCreated != 0 { - n += 1 + sovEvents(uint64(m.NumUsersCreated)) - } - if m.NumUsersDeleted != 0 { - n += 1 + sovEvents(uint64(m.NumUsersDeleted)) - } - if m.NumUsersModified != 0 { - n += 1 + sovEvents(uint64(m.NumUsersModified)) 
- } - if m.NumUsersTotal != 0 { - n += 1 + sovEvents(uint64(m.NumUsersTotal)) + l = len(m.CredentialId) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -51201,7 +53867,7 @@ func (m *OktaUserSync) Size() (n int) { return n } -func (m *SPIFFESVIDIssued) Size() (n int) { +func (m *AccessRequestResourceSearch) Size() (n int) { if m == nil { return 0 } @@ -51211,65 +53877,45 @@ func (m *SPIFFESVIDIssued) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.SPIFFEID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if len(m.DNSSANs) > 0 { - for _, s := range m.DNSSANs { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } - } - if len(m.IPSANs) > 0 { - for _, s := range m.IPSANs { + if len(m.SearchAsRoles) > 0 { + for _, s := range m.SearchAsRoles { l = len(s) n += 1 + l + sovEvents(uint64(l)) } } - l = len(m.SVIDType) + l = len(m.ResourceType) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.SerialNumber) + l = len(m.Namespace) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.Hint) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if len(m.Labels) > 0 { + for k, v := range m.Labels { + _ = k + _ = v + mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + len(v) + sovEvents(uint64(len(v))) + n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) + } } - l = len(m.JTI) + l = len(m.PredicateExpression) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if len(m.Audiences) > 0 { - for _, s := range m.Audiences { + if len(m.SearchKeywords) > 0 { + for _, s := range m.SearchKeywords { l = len(s) n += 1 + l + sovEvents(uint64(l)) } } - l = len(m.WorkloadIdentity) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.WorkloadIdentityRevision) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.Attributes != nil { - l = 
m.Attributes.Size() - n += 1 + l + sovEvents(uint64(l)) - } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *AuthPreferenceUpdate) Size() (n int) { +func (m *MySQLStatementPrepare) Size() (n int) { if m == nil { return 0 } @@ -51277,14 +53923,15 @@ func (m *AuthPreferenceUpdate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - if m.AdminActionsMFA != 0 { - n += 1 + sovEvents(uint64(m.AdminActionsMFA)) + l = m.DatabaseMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.Query) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -51292,7 +53939,7 @@ func (m *AuthPreferenceUpdate) Size() (n int) { return n } -func (m *ClusterNetworkingConfigUpdate) Size() (n int) { +func (m *MySQLStatementExecute) Size() (n int) { if m == nil { return 0 } @@ -51300,19 +53947,28 @@ func (m *ClusterNetworkingConfigUpdate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + if m.StatementID != 0 { + n += 1 + sovEvents(uint64(m.StatementID)) + } + if len(m.Parameters) > 0 { + for _, s := range m.Parameters { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *SessionRecordingConfigUpdate) Size() (n int) { +func (m *MySQLStatementSendLongData) Size() (n int) { if m == nil { return 0 } @@ -51320,41 +53976,20 @@ func (m 
*SessionRecordingConfigUpdate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) - } - return n -} - -func (m *AccessPathChanged) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - l = m.Metadata.Size() + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.ChangeID) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.AffectedResourceName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.StatementID != 0 { + n += 1 + sovEvents(uint64(m.StatementID)) } - l = len(m.AffectedResourceSource) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.ParameterID != 0 { + n += 1 + sovEvents(uint64(m.ParameterID)) } - l = len(m.AffectedResourceType) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.DataSize != 0 { + n += 1 + sovEvents(uint64(m.DataSize)) } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -51362,7 +53997,7 @@ func (m *AccessPathChanged) Size() (n int) { return n } -func (m *SpannerRPC) Size() (n int) { +func (m *MySQLStatementClose) Size() (n int) { if m == nil { return 0 } @@ -51376,15 +54011,8 @@ func (m *SpannerRPC) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.Procedure) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.Args != nil { - l = m.Args.Size() - n += 1 + l + sovEvents(uint64(l)) + if m.StatementID != 0 { + n += 1 + sovEvents(uint64(m.StatementID)) } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -51392,7 +54020,7 @@ func (m *SpannerRPC) Size() (n int) { return n } -func (m *AccessGraphSettingsUpdate) 
Size() (n int) { +func (m *MySQLStatementReset) Size() (n int) { if m == nil { return 0 } @@ -51400,19 +54028,22 @@ func (m *AccessGraphSettingsUpdate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.DatabaseMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.StatementID != 0 { + n += 1 + sovEvents(uint64(m.StatementID)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *SPIFFEFederationCreate) Size() (n int) { +func (m *MySQLStatementFetch) Size() (n int) { if m == nil { return 0 } @@ -51420,19 +54051,25 @@ func (m *SPIFFEFederationCreate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + if m.StatementID != 0 { + n += 1 + sovEvents(uint64(m.StatementID)) + } + if m.RowsCount != 0 { + n += 1 + sovEvents(uint64(m.RowsCount)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *SPIFFEFederationDelete) Size() (n int) { +func (m *MySQLStatementBulkExecute) Size() (n int) { if m == nil { return 0 } @@ -51440,19 +54077,28 @@ func (m *SPIFFEFederationDelete) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) 
+ if m.StatementID != 0 { + n += 1 + sovEvents(uint64(m.StatementID)) + } + if len(m.Parameters) > 0 { + for _, s := range m.Parameters { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *AutoUpdateConfigCreate) Size() (n int) { +func (m *MySQLInitDB) Size() (n int) { if m == nil { return 0 } @@ -51460,21 +54106,23 @@ func (m *AutoUpdateConfigCreate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = len(m.SchemaName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *AutoUpdateConfigUpdate) Size() (n int) { +func (m *MySQLCreateDB) Size() (n int) { if m == nil { return 0 } @@ -51482,21 +54130,23 @@ func (m *AutoUpdateConfigUpdate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = len(m.SchemaName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *AutoUpdateConfigDelete) Size() (n int) { +func (m *MySQLDropDB) Size() (n int) { if m == nil { return 0 } @@ -51504,21 +54154,23 @@ func (m *AutoUpdateConfigDelete) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n 
+= 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = len(m.SchemaName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *AutoUpdateVersionCreate) Size() (n int) { +func (m *MySQLShutDown) Size() (n int) { if m == nil { return 0 } @@ -51526,13 +54178,11 @@ func (m *AutoUpdateVersionCreate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -51540,7 +54190,7 @@ func (m *AutoUpdateVersionCreate) Size() (n int) { return n } -func (m *AutoUpdateVersionUpdate) Size() (n int) { +func (m *MySQLProcessKill) Size() (n int) { if m == nil { return 0 } @@ -51548,21 +54198,22 @@ func (m *AutoUpdateVersionUpdate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + if m.ProcessID != 0 { + n += 1 + sovEvents(uint64(m.ProcessID)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *AutoUpdateVersionDelete) Size() (n int) { +func (m *MySQLDebug) Size() (n int) { if m == nil { return 0 } @@ -51570,13 
+54221,11 @@ func (m *AutoUpdateVersionDelete) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -51584,7 +54233,7 @@ func (m *AutoUpdateVersionDelete) Size() (n int) { return n } -func (m *AutoUpdateAgentRolloutTrigger) Size() (n int) { +func (m *MySQLRefresh) Size() (n int) { if m == nil { return 0 } @@ -51594,23 +54243,21 @@ func (m *AutoUpdateAgentRolloutTrigger) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - if len(m.Groups) > 0 { - for _, s := range m.Groups { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } - } - l = m.Status.Size() + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = len(m.Subcommand) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *AutoUpdateAgentRolloutForceDone) Size() (n int) { +func (m *SQLServerRPCRequest) Size() (n int) { if m == nil { return 0 } @@ -51620,23 +54267,27 @@ func (m *AutoUpdateAgentRolloutForceDone) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - if len(m.Groups) > 0 { - for _, s := range m.Groups { + l = m.DatabaseMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.Procname) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if len(m.Parameters) > 0 { + for _, s := range 
m.Parameters { l = len(s) n += 1 + l + sovEvents(uint64(l)) } } - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *AutoUpdateAgentRolloutRollback) Size() (n int) { +func (m *DatabaseSessionMalformedPacket) Size() (n int) { if m == nil { return 0 } @@ -51646,23 +54297,21 @@ func (m *AutoUpdateAgentRolloutRollback) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - if len(m.Groups) > 0 { - for _, s := range m.Groups { - l = len(s) - n += 1 + l + sovEvents(uint64(l)) - } - } - l = m.Status.Size() + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = len(m.Payload) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *StaticHostUserCreate) Size() (n int) { +func (m *ElasticsearchRequest) Size() (n int) { if m == nil { return 0 } @@ -51670,21 +54319,51 @@ func (m *StaticHostUserCreate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() + l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = len(m.Path) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.RawQuery) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.Method) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.Body) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = m.Headers.Size() n += 1 + l + sovEvents(uint64(l)) + if m.Category != 0 { + n += 1 + sovEvents(uint64(m.Category)) + } + l = len(m.Target) + if l > 0 { + n += 1 + l + 
sovEvents(uint64(l)) + } + l = len(m.Query) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.StatusCode != 0 { + n += 1 + sovEvents(uint64(m.StatusCode)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *StaticHostUserUpdate) Size() (n int) { +func (m *OpenSearchRequest) Size() (n int) { if m == nil { return 0 } @@ -51692,69 +54371,51 @@ func (m *StaticHostUserUpdate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() + l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + l = len(m.Path) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - return n -} - -func (m *StaticHostUserDelete) Size() (n int) { - if m == nil { - return 0 + l = len(m.RawQuery) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - var l int - _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + l = len(m.Method) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - return n -} - -func (m *CrownJewelCreate) Size() (n int) { - if m == nil { - return 0 + l = len(m.Body) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - var l int - _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = 
m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() + l = m.Headers.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.CrownJewelQuery) + if m.Category != 0 { + n += 1 + sovEvents(uint64(m.Category)) + } + l = len(m.Target) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.Query) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } + if m.StatusCode != 0 { + n += 1 + sovEvents(uint64(m.StatusCode)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *CrownJewelUpdate) Size() (n int) { +func (m *DynamoDBRequest) Size() (n int) { if m == nil { return 0 } @@ -51762,51 +54423,42 @@ func (m *CrownJewelUpdate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.CurrentCrownJewelQuery) + if m.StatusCode != 0 { + n += 1 + sovEvents(uint64(m.StatusCode)) + } + l = len(m.Path) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.UpdatedCrownJewelQuery) + l = len(m.RawQuery) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + l = len(m.Method) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - return n -} - -func (m *CrownJewelDelete) Size() (n int) { - if m == nil { - return 0 + l = len(m.Target) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.Body != nil { + l = m.Body.Size() + n += 1 + l + sovEvents(uint64(l)) } - var l int - _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + 
sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *UserTaskCreate) Size() (n int) { +func (m *AppSessionDynamoDBRequest) Size() (n int) { if m == nil { return 0 } @@ -51814,69 +54466,52 @@ func (m *UserTaskCreate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.AppMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserTaskMetadata.Size() + l = m.AWSRequestMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + l = len(m.SessionChunkID) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) } - return n -} - -func (m *UserTaskUpdate) Size() (n int) { - if m == nil { - return 0 + if m.StatusCode != 0 { + n += 1 + sovEvents(uint64(m.StatusCode)) } - var l int - _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserTaskMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.CurrentUserTaskState) + l = len(m.Path) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.UpdatedUserTaskState) + l = len(m.RawQuery) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.Method) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.Target) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } + if m.Body != nil { + l = m.Body.Size() + n += 1 + l + sovEvents(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return 
n } -func (m *UserTaskMetadata) Size() (n int) { +func (m *UpgradeWindowStartMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.TaskType) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.IssueType) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Integration) + l = len(m.UpgradeWindowStart) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } @@ -51886,7 +54521,7 @@ func (m *UserTaskMetadata) Size() (n int) { return n } -func (m *UserTaskDelete) Size() (n int) { +func (m *UpgradeWindowStartUpdate) Size() (n int) { if m == nil { return 0 } @@ -51894,13 +54529,11 @@ func (m *UserTaskDelete) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UpgradeWindowStartMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -51908,7 +54541,7 @@ func (m *UserTaskDelete) Size() (n int) { return n } -func (m *ContactCreate) Size() (n int) { +func (m *SessionRecordingAccess) Size() (n int) { if m == nil { return 0 } @@ -51916,20 +54549,19 @@ func (m *ContactCreate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) + l = len(m.SessionID) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.Email) + l = len(m.SessionType) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if m.ContactType != 0 { - n += 1 + sovEvents(uint64(m.ContactType)) + l = len(m.Format) + if l > 
0 { + n += 1 + l + sovEvents(uint64(l)) } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -51937,28 +54569,19 @@ func (m *ContactCreate) Size() (n int) { return n } -func (m *ContactDelete) Size() (n int) { +func (m *KubeClusterMetadata) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.Email) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.ContactType != 0 { - n += 1 + sovEvents(uint64(m.ContactType)) + if len(m.KubeLabels) > 0 { + for k, v := range m.KubeLabels { + _ = k + _ = v + mapEntrySize := 1 + len(k) + sovEvents(uint64(len(k))) + 1 + len(v) + sovEvents(uint64(len(v))) + n += mapEntrySize + 1 + sovEvents(uint64(mapEntrySize)) + } } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -51966,7 +54589,7 @@ func (m *ContactDelete) Size() (n int) { return n } -func (m *WorkloadIdentityCreate) Size() (n int) { +func (m *KubernetesClusterCreate) Size() (n int) { if m == nil { return 0 } @@ -51974,23 +54597,19 @@ func (m *WorkloadIdentityCreate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.KubeClusterMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - if m.WorkloadIdentityData != nil { - l = m.WorkloadIdentityData.Size() - n += 1 + l + sovEvents(uint64(l)) - } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *WorkloadIdentityUpdate) Size() (n int) { +func (m *KubernetesClusterUpdate) Size() (n 
int) { if m == nil { return 0 } @@ -51998,23 +54617,19 @@ func (m *WorkloadIdentityUpdate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.KubeClusterMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - if m.WorkloadIdentityData != nil { - l = m.WorkloadIdentityData.Size() - n += 1 + l + sovEvents(uint64(l)) - } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *WorkloadIdentityDelete) Size() (n int) { +func (m *KubernetesClusterDelete) Size() (n int) { if m == nil { return 0 } @@ -52022,11 +54637,9 @@ func (m *WorkloadIdentityDelete) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -52034,7 +54647,7 @@ func (m *WorkloadIdentityDelete) Size() (n int) { return n } -func (m *WorkloadIdentityX509RevocationCreate) Size() (n int) { +func (m *SSMRun) Size() (n int) { if m == nil { return 0 } @@ -52042,13 +54655,38 @@ func (m *WorkloadIdentityX509RevocationCreate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = len(m.Reason) + l = len(m.CommandID) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.InstanceID) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.ExitCode != 0 { + n += 1 + 
sovEvents(uint64(m.ExitCode)) + } + l = len(m.Status) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.AccountID) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.Region) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.StandardOutput) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.StandardError) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.InvocationURL) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } @@ -52058,7 +54696,7 @@ func (m *WorkloadIdentityX509RevocationCreate) Size() (n int) { return n } -func (m *WorkloadIdentityX509RevocationUpdate) Size() (n int) { +func (m *CassandraPrepare) Size() (n int) { if m == nil { return 0 } @@ -52066,13 +54704,17 @@ func (m *WorkloadIdentityX509RevocationUpdate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.Reason) + l = m.DatabaseMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.Query) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.Keyspace) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } @@ -52082,7 +54724,7 @@ func (m *WorkloadIdentityX509RevocationUpdate) Size() (n int) { return n } -func (m *WorkloadIdentityX509RevocationDelete) Size() (n int) { +func (m *CassandraExecute) Size() (n int) { if m == nil { return 0 } @@ -52090,19 +54732,23 @@ func (m *WorkloadIdentityX509RevocationDelete) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.DatabaseMetadata.Size() + n += 1 + l + 
sovEvents(uint64(l)) + l = len(m.QueryId) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *GitCommand) Size() (n int) { +func (m *CassandraBatch) Size() (n int) { if m == nil { return 0 } @@ -52112,24 +54758,24 @@ func (m *GitCommand) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.SessionMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ServerMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.CommandMetadata.Size() + l = m.DatabaseMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = len(m.Service) + l = len(m.Consistency) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.Path) + l = len(m.Keyspace) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if len(m.Actions) > 0 { - for _, e := range m.Actions { + l = len(m.BatchType) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if len(m.Children) > 0 { + for _, e := range m.Children { l = e.Size() n += 1 + l + sovEvents(uint64(l)) } @@ -52140,27 +54786,25 @@ func (m *GitCommand) Size() (n int) { return n } -func (m *GitCommandAction) Size() (n int) { +func (m *CassandraBatch_BatchChild) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.Action) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - l = len(m.Reference) + l = len(m.ID) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.Old) + l = len(m.Query) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - l = len(m.New) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if len(m.Values) > 0 { + for _, e := range m.Values { + l = e.Size() + n += 1 + l + sovEvents(uint64(l)) + } } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -52168,22 +54812,41 @@ func (m *GitCommandAction) Size() (n int) { return n } -func (m *AccessListInvalidMetadata) Size() (n int) { +func (m 
*CassandraBatch_BatchChild_Value) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.AccessListName) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) + if m.Type != 0 { + n += 1 + sovEvents(uint64(m.Type)) } - l = len(m.User) + l = len(m.Contents) if l > 0 { n += 1 + l + sovEvents(uint64(l)) } - if len(m.MissingRoles) > 0 { - for _, s := range m.MissingRoles { + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *CassandraRegister) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.DatabaseMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if len(m.EventTypes) > 0 { + for _, s := range m.EventTypes { l = len(s) n += 1 + l + sovEvents(uint64(l)) } @@ -52194,7 +54857,7 @@ func (m *AccessListInvalidMetadata) Size() (n int) { return n } -func (m *UserLoginAccessListInvalid) Size() (n int) { +func (m *LoginRuleCreate) Size() (n int) { if m == nil { return 0 } @@ -52202,9 +54865,9 @@ func (m *UserLoginAccessListInvalid) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.AccessListInvalidMetadata.Size() + l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.Status.Size() + l = m.UserMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -52212,7 +54875,7 @@ func (m *UserLoginAccessListInvalid) Size() (n int) { return n } -func (m *StableUNIXUserCreate) Size() (n int) { +func (m *LoginRuleDelete) Size() (n int) { if m == nil { return 0 } @@ -52220,38 +54883,39 @@ func (m *StableUNIXUserCreate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) l = m.UserMetadata.Size() n += 1 + l + 
sovEvents(uint64(l)) - if m.StableUnixUser != nil { - l = m.StableUnixUser.Size() - n += 1 + l + sovEvents(uint64(l)) - } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *StableUNIXUser) Size() (n int) { +func (m *SAMLIdPAuthAttempt) Size() (n int) { if m == nil { return 0 } var l int _ = l - l = len(m.Username) - if l > 0 { - n += 1 + l + sovEvents(uint64(l)) - } - if m.Uid != 0 { - n += 1 + sovEvents(uint64(m.Uid)) - } + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.SAMLIdPServiceProviderMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *AWSICResourceSync) Size() (n int) { +func (m *SAMLIdPServiceProviderCreate) Size() (n int) { if m == nil { return 0 } @@ -52259,19 +54923,9 @@ func (m *AWSICResourceSync) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - if m.TotalAccounts != 0 { - n += 1 + sovEvents(uint64(m.TotalAccounts)) - } - if m.TotalAccountAssignments != 0 { - n += 1 + sovEvents(uint64(m.TotalAccountAssignments)) - } - if m.TotalUserGroups != 0 { - n += 1 + sovEvents(uint64(m.TotalUserGroups)) - } - if m.TotalPermissionSets != 0 { - n += 1 + sovEvents(uint64(m.TotalPermissionSets)) - } - l = m.Status.Size() + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.SAMLIdPServiceProviderMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -52279,7 +54933,7 @@ func (m *AWSICResourceSync) Size() (n int) { return n } -func (m *HealthCheckConfigCreate) Size() (n int) { +func (m *SAMLIdPServiceProviderUpdate) Size() (n int) { if m == nil { return 0 } @@ -52289,9 +54943,7 @@ func (m *HealthCheckConfigCreate) Size() (n int) { n += 1 + 
l + sovEvents(uint64(l)) l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SAMLIdPServiceProviderMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -52299,7 +54951,7 @@ func (m *HealthCheckConfigCreate) Size() (n int) { return n } -func (m *HealthCheckConfigUpdate) Size() (n int) { +func (m *SAMLIdPServiceProviderDelete) Size() (n int) { if m == nil { return 0 } @@ -52309,9 +54961,7 @@ func (m *HealthCheckConfigUpdate) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.SAMLIdPServiceProviderMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -52319,7 +54969,7 @@ func (m *HealthCheckConfigUpdate) Size() (n int) { return n } -func (m *HealthCheckConfigDelete) Size() (n int) { +func (m *SAMLIdPServiceProviderDeleteAll) Size() (n int) { if m == nil { return 0 } @@ -52329,17 +54979,13 @@ func (m *HealthCheckConfigDelete) Size() (n int) { n += 1 + l + sovEvents(uint64(l)) l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *WorkloadIdentityX509IssuerOverrideCreate) Size() (n int) { +func (m *OktaResourcesUpdate) Size() (n int) { if m == nil { return 0 } @@ -52347,11 +54993,9 @@ func (m *WorkloadIdentityX509IssuerOverrideCreate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.ServerMetadata.Size() n += 1 + l 
+ sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() + l = m.OktaResourcesUpdatedMetadata.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -52359,7 +55003,7 @@ func (m *WorkloadIdentityX509IssuerOverrideCreate) Size() (n int) { return n } -func (m *WorkloadIdentityX509IssuerOverrideDelete) Size() (n int) { +func (m *OktaSyncFailure) Size() (n int) { if m == nil { return 0 } @@ -52367,11 +55011,9 @@ func (m *WorkloadIdentityX509IssuerOverrideDelete) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.ServerMetadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() + l = m.Status.Size() n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) @@ -52379,7 +55021,7 @@ func (m *WorkloadIdentityX509IssuerOverrideDelete) Size() (n int) { return n } -func (m *SigstorePolicyCreate) Size() (n int) { +func (m *OktaAssignmentResult) Size() (n int) { if m == nil { return 0 } @@ -52387,19 +55029,21 @@ func (m *SigstorePolicyCreate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() + l = m.ServerMetadata.Size() n += 1 + l + sovEvents(uint64(l)) l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.OktaAssignmentMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } -func (m *SigstorePolicyUpdate) Size() (n int) { +func (m *AccessListCreate) Size() (n int) { if m == nil { return 0 } @@ -52407,45 +55051,8204 @@ func (m *SigstorePolicyUpdate) Size() (n int) { _ = l l = m.Metadata.Size() n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + 
sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) l = m.ResourceMetadata.Size() n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.AccessListTitle) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } - return n -} + return n +} + +func (m *AccessListUpdate) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.AccessListTitle) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *AccessListDelete) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.AccessListTitle) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *AccessListMemberCreate) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.AccessListMemberMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *AccessListMemberUpdate) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = 
m.AccessListMemberMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *AccessListMemberDelete) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.AccessListMemberMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *AccessListMemberDeleteAllForAccessList) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.AccessListMemberMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *AccessListReview) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.AccessListReviewMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *AuditQueryRun) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.AuditQueryDetails.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func 
(m *AuditQueryDetails) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.Name) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.Query) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.Days != 0 { + n += 1 + sovEvents(uint64(m.Days)) + } + if m.ExecutionTimeInMillis != 0 { + n += 1 + sovEvents(uint64(m.ExecutionTimeInMillis)) + } + if m.DataScannedInBytes != 0 { + n += 1 + sovEvents(uint64(m.DataScannedInBytes)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *SecurityReportRun) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.Name) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.Version) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.TotalExecutionTimeInMillis != 0 { + n += 1 + sovEvents(uint64(m.TotalExecutionTimeInMillis)) + } + if m.TotalDataScannedInBytes != 0 { + n += 1 + sovEvents(uint64(m.TotalDataScannedInBytes)) + } + if len(m.AuditQueries) > 0 { + for _, e := range m.AuditQueries { + l = e.Size() + n += 1 + l + sovEvents(uint64(l)) + } + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ExternalAuditStorageEnable) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.Details != nil { + l = m.Details.Size() + n += 1 + l + sovEvents(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ExternalAuditStorageDisable) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = 
m.ResourceMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.Details != nil { + l = m.Details.Size() + n += 1 + l + sovEvents(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ExternalAuditStorageDetails) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.IntegrationName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.SessionRecordingsUri) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.AthenaWorkgroup) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.GlueDatabase) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.GlueTable) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.AuditEventsLongTermUri) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.AthenaResultsUri) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.PolicyName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.Region) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *OktaAccessListSync) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.NumAppFilters != 0 { + n += 1 + sovEvents(uint64(m.NumAppFilters)) + } + if m.NumGroupFilters != 0 { + n += 1 + sovEvents(uint64(m.NumGroupFilters)) + } + if m.NumApps != 0 { + n += 1 + sovEvents(uint64(m.NumApps)) + } + if m.NumGroups != 0 { + n += 1 + sovEvents(uint64(m.NumGroups)) + } + if m.NumRoles != 0 { + n += 1 + sovEvents(uint64(m.NumRoles)) + } + if m.NumAccessLists != 0 { + n += 1 + sovEvents(uint64(m.NumAccessLists)) + } + if m.NumAccessListMembers != 0 { + n += 1 + sovEvents(uint64(m.NumAccessListMembers)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m 
*OktaUserSync) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.OrgUrl) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.AppId) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.NumUsersCreated != 0 { + n += 1 + sovEvents(uint64(m.NumUsersCreated)) + } + if m.NumUsersDeleted != 0 { + n += 1 + sovEvents(uint64(m.NumUsersDeleted)) + } + if m.NumUsersModified != 0 { + n += 1 + sovEvents(uint64(m.NumUsersModified)) + } + if m.NumUsersTotal != 0 { + n += 1 + sovEvents(uint64(m.NumUsersTotal)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *LabelSelector) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.Key) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if len(m.Values) > 0 { + for _, s := range m.Values { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *SPIFFESVIDIssued) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.SPIFFEID) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if len(m.DNSSANs) > 0 { + for _, s := range m.DNSSANs { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } + } + if len(m.IPSANs) > 0 { + for _, s := range m.IPSANs { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } + } + l = len(m.SVIDType) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.SerialNumber) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.Hint) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.JTI) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) 
+ } + if len(m.Audiences) > 0 { + for _, s := range m.Audiences { + l = len(s) + n += 1 + l + sovEvents(uint64(l)) + } + } + l = len(m.WorkloadIdentity) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.WorkloadIdentityRevision) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.Attributes != nil { + l = m.Attributes.Size() + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.NameSelector) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if len(m.LabelSelectors) > 0 { + for _, e := range m.LabelSelectors { + l = e.Size() + n += 2 + l + sovEvents(uint64(l)) + } + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *AuthPreferenceUpdate) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.AdminActionsMFA != 0 { + n += 1 + sovEvents(uint64(m.AdminActionsMFA)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ClusterNetworkingConfigUpdate) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *SessionRecordingConfigUpdate) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + 
sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *AccessPathChanged) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.ChangeID) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.AffectedResourceName) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.AffectedResourceSource) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + l = len(m.AffectedResourceType) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *SpannerRPC) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.SessionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.DatabaseMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = len(m.Procedure) + if l > 0 { + n += 1 + l + sovEvents(uint64(l)) + } + if m.Args != nil { + l = m.Args.Size() + n += 1 + l + sovEvents(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *AccessGraphSettingsUpdate) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.UserMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ConnectionMetadata.Size() + n += 1 + l + sovEvents(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *SPIFFEFederationCreate) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Metadata.Size() + n += 1 + l + sovEvents(uint64(l)) + l = m.ResourceMetadata.Size() + n += 1 + l + 
sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *SPIFFEFederationDelete) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *AutoUpdateConfigCreate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *AutoUpdateConfigUpdate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *AutoUpdateConfigDelete) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *AutoUpdateVersionCreate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *AutoUpdateVersionUpdate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *AutoUpdateVersionDelete) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *AutoUpdateAgentRolloutTrigger) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if len(m.Groups) > 0 {
+		for _, s := range m.Groups {
+			l = len(s)
+			n += 1 + l + sovEvents(uint64(l))
+		}
+	}
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *AutoUpdateAgentRolloutForceDone) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if len(m.Groups) > 0 {
+		for _, s := range m.Groups {
+			l = len(s)
+			n += 1 + l + sovEvents(uint64(l))
+		}
+	}
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *AutoUpdateAgentRolloutRollback) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if len(m.Groups) > 0 {
+		for _, s := range m.Groups {
+			l = len(s)
+			n += 1 + l + sovEvents(uint64(l))
+		}
+	}
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *StaticHostUserCreate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *StaticHostUserUpdate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *StaticHostUserDelete) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *CrownJewelCreate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = len(m.CrownJewelQuery)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *CrownJewelUpdate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = len(m.CurrentCrownJewelQuery)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.UpdatedCrownJewelQuery)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *CrownJewelDelete) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *UserTaskCreate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserTaskMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *UserTaskUpdate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserTaskMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = len(m.CurrentUserTaskState)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.UpdatedUserTaskState)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *UserTaskMetadata) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = len(m.TaskType)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.IssueType)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.Integration)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *UserTaskDelete) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *ContactCreate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = len(m.Email)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.ContactType != 0 {
+		n += 1 + sovEvents(uint64(m.ContactType))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *ContactDelete) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = len(m.Email)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.ContactType != 0 {
+		n += 1 + sovEvents(uint64(m.ContactType))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *WorkloadIdentityCreate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.WorkloadIdentityData != nil {
+		l = m.WorkloadIdentityData.Size()
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *WorkloadIdentityUpdate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.WorkloadIdentityData != nil {
+		l = m.WorkloadIdentityData.Size()
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *WorkloadIdentityDelete) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *WorkloadIdentityX509RevocationCreate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = len(m.Reason)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *WorkloadIdentityX509RevocationUpdate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = len(m.Reason)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *WorkloadIdentityX509RevocationDelete) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *GitCommand) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.SessionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ServerMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.CommandMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = len(m.Service)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.Path)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if len(m.Actions) > 0 {
+		for _, e := range m.Actions {
+			l = e.Size()
+			n += 1 + l + sovEvents(uint64(l))
+		}
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *GitCommandAction) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = len(m.Action)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.Reference)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.Old)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.New)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *AccessListInvalidMetadata) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = len(m.AccessListName)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.User)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if len(m.MissingRoles) > 0 {
+		for _, s := range m.MissingRoles {
+			l = len(s)
+			n += 1 + l + sovEvents(uint64(l))
+		}
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *UserLoginAccessListInvalid) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.AccessListInvalidMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *StableUNIXUserCreate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.StableUnixUser != nil {
+		l = m.StableUnixUser.Size()
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *StableUNIXUser) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = len(m.Username)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.Uid != 0 {
+		n += 1 + sovEvents(uint64(m.Uid))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *AWSICResourceSync) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.TotalAccounts != 0 {
+		n += 1 + sovEvents(uint64(m.TotalAccounts))
+	}
+	if m.TotalAccountAssignments != 0 {
+		n += 1 + sovEvents(uint64(m.TotalAccountAssignments))
+	}
+	if m.TotalUserGroups != 0 {
+		n += 1 + sovEvents(uint64(m.TotalUserGroups))
+	}
+	if m.TotalPermissionSets != 0 {
+		n += 1 + sovEvents(uint64(m.TotalPermissionSets))
+	}
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *HealthCheckConfigCreate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *HealthCheckConfigUpdate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *HealthCheckConfigDelete) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *WorkloadIdentityX509IssuerOverrideCreate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *WorkloadIdentityX509IssuerOverrideDelete) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *SigstorePolicyCreate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *SigstorePolicyUpdate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *SigstorePolicyDelete) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *MCPSessionStart) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.SessionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ServerMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.AppMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = len(m.McpSessionId)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.ClientInfo)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.ServerInfo)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.ProtocolVersion)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.IngressAuthType)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.EgressAuthType)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *MCPSessionEnd) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.SessionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ServerMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.AppMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Headers.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *MCPJSONRPCMessage) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = len(m.JSONRPC)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.ID)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.Method)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.Params != nil {
+		l = m.Params.Size()
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *MCPSessionRequest) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.SessionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.AppMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Message.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Headers.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *MCPSessionNotification) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.SessionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.AppMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Message.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Headers.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *MCPSessionListenSSEStream) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.SessionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.AppMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Headers.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *MCPSessionInvalidHTTPRequest) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.SessionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.AppMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = len(m.Path)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.Method)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.RawQuery)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.Body)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = m.Headers.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *BoundKeypairRecovery) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = len(m.TokenName)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.BotName)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.PublicKey)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.RecoveryCount != 0 {
+		n += 1 + sovEvents(uint64(m.RecoveryCount))
+	}
+	l = len(m.RecoveryMode)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *BoundKeypairRotation) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = len(m.TokenName)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.BotName)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.PreviousPublicKey)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.NewPublicKey)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *BoundKeypairJoinStateVerificationFailed) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = len(m.TokenName)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.BotName)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *SCIMRequest) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = len(m.ID)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.SourceAddress)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.UserAgent)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.Method)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.Path)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.Body != nil {
+		l = m.Body.Size()
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *SCIMCommonData) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	if m.Request != nil {
+		l = m.Request.Size()
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.Integration)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.ResourceType)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *SCIMListingEvent) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.SCIMCommonData.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = len(m.Filter)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.Count != 0 {
+		n += 1 + sovEvents(uint64(m.Count))
+	}
+	if m.StartIndex != 0 {
+		n += 1 + sovEvents(uint64(m.StartIndex))
+	}
+	if m.ResourceCount != 0 {
+		n += 1 + sovEvents(uint64(m.ResourceCount))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *SCIMResourceEvent) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.SCIMCommonData.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = len(m.TeleportID)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.ExternalID)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	l = len(m.Display)
+	if l > 0 {
+		n += 1 + l + sovEvents(uint64(l))
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *ClientIPRestrictionsUpdate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ResourceMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if len(m.ClientIPRestrictions) > 0 {
+		for _, s := range m.ClientIPRestrictions {
+			l = len(s)
+			n += 1 + l + sovEvents(uint64(l))
+		}
+	}
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *VnetConfigCreate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *VnetConfigUpdate) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func (m *VnetConfigDelete) Size() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = m.Metadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.Status.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.UserMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	l = m.ConnectionMetadata.Size()
+	n += 1 + l + sovEvents(uint64(l))
+	if m.XXX_unrecognized != nil {
+		n += len(m.XXX_unrecognized)
+	}
+	return n
+}
+
+func sovEvents(x uint64) (n int) {
+	return (math_bits.Len64(x|1) + 6) / 7
+}
+func sozEvents(x uint64) (n int) {
+	return sovEvents(uint64((x << 1) ^ uint64((int64(x) >> 63))))
+}
+func (m *Metadata) Unmarshal(dAtA []byte) error {
+	l := len(dAtA)
+	iNdEx := 0
+	for iNdEx < l {
+		preIndex := iNdEx
+		var wire uint64
+		for shift := uint(0); ; shift += 7 {
+			if shift >= 64 {
+				return ErrIntOverflowEvents
+			}
+			if iNdEx >= l {
+				return io.ErrUnexpectedEOF
+			}
+			b := dAtA[iNdEx]
+			iNdEx++
+			wire |= uint64(b&0x7F) << shift
+			if b < 0x80 {
+				break
+			}
+		}
+		fieldNum := int32(wire >> 3)
+		wireType := int(wire & 0x7)
+		if wireType == 4 {
+			return fmt.Errorf("proto: Metadata: wiretype end group for non-group")
+		}
+		if fieldNum <= 0 {
+			return fmt.Errorf("proto: Metadata: illegal tag %d (wire type %d)", fieldNum, wire)
+		}
+		switch fieldNum {
+		case 1:
+			if wireType != 0 {
+				return fmt.Errorf("proto: wrong wireType = %d for field Index", wireType)
+			}
+			m.Index = 0
+			for shift := uint(0); ; shift += 7 {
+				if shift >= 64 {
+					return ErrIntOverflowEvents
+				}
+				if iNdEx >= l {
+					return io.ErrUnexpectedEOF
+				}
+				b := dAtA[iNdEx]
+				iNdEx++
+				m.Index |= int64(b&0x7F) << shift
+				if b < 0x80 {
+					break
+				}
+			}
+		case 2:
+			if wireType != 2 {
+				return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType)
+			}
+			var stringLen uint64
+			for shift := uint(0); ; shift += 7 {
+				if shift >= 64 {
+					return ErrIntOverflowEvents
+				}
+				if iNdEx >= l {
+					return io.ErrUnexpectedEOF
+				}
+				b := dAtA[iNdEx]
+				iNdEx++
+				stringLen |= uint64(b&0x7F) << shift
+				if b < 0x80 {
+					break
+				}
+			}
+			intStringLen := int(stringLen)
+			if intStringLen < 0 {
+				return ErrInvalidLengthEvents
+			}
+			postIndex := iNdEx + intStringLen
+			if postIndex < 0 {
+				return ErrInvalidLengthEvents
+			}
+			if postIndex > l {
+				return io.ErrUnexpectedEOF
+			}
+			m.Type = string(dAtA[iNdEx:postIndex])
+			iNdEx = postIndex
+		case 3:
+			if wireType != 2 {
+				return fmt.Errorf("proto: wrong wireType = %d for field ID", wireType)
+			}
+			var stringLen uint64
+			for shift := uint(0); ; shift += 7 {
+				if shift >= 64 {
+					return ErrIntOverflowEvents
+				}
+				if iNdEx >= l {
+					return io.ErrUnexpectedEOF
+				}
+				b := dAtA[iNdEx]
+				iNdEx++
+				stringLen |= uint64(b&0x7F) << shift
+				if b < 0x80 {
+					break
+				}
+			}
+			intStringLen := int(stringLen)
+			if intStringLen < 0 {
+				return ErrInvalidLengthEvents
+			}
+			postIndex := iNdEx + intStringLen
+			if postIndex < 0 {
+				return ErrInvalidLengthEvents
+			}
+			if postIndex > l {
+				return io.ErrUnexpectedEOF
+			}
+			m.ID = string(dAtA[iNdEx:postIndex])
+			iNdEx = postIndex
+		case 4:
+			if wireType != 2 {
+				return fmt.Errorf("proto: wrong wireType = %d for field Code", wireType)
+			}
+			var stringLen uint64
+			for shift := uint(0); ; shift += 7 {
+				if shift >= 64 {
+					return ErrIntOverflowEvents
+				}
+				if iNdEx >= l {
+					return io.ErrUnexpectedEOF
+				}
+				b := dAtA[iNdEx]
+				iNdEx++
+				stringLen |= uint64(b&0x7F) << shift
+				if b < 0x80 {
+					break
+				}
+			}
+			intStringLen := int(stringLen)
+			if intStringLen < 0 {
+				return ErrInvalidLengthEvents
+			}
+			postIndex := iNdEx + intStringLen
+			if postIndex < 0 {
+				return ErrInvalidLengthEvents
+			}
+			if postIndex > l {
+				return io.ErrUnexpectedEOF
+			}
+			m.Code = string(dAtA[iNdEx:postIndex])
+			iNdEx = postIndex
+		case 5:
+			if wireType != 2 {
+				return fmt.Errorf("proto: wrong wireType = %d for field Time", wireType)
+			}
+			var msglen int
+			for shift := uint(0); ; shift += 7 {
+				if shift >= 64 {
+					return ErrIntOverflowEvents
+				}
+				if iNdEx >= l {
+					return io.ErrUnexpectedEOF
+				}
+				b := dAtA[iNdEx]
+				iNdEx++
+				msglen |= int(b&0x7F) << shift
+				if b < 0x80 {
+					break
+				}
+			}
+			if msglen < 0 {
+				return ErrInvalidLengthEvents
+			}
+			postIndex := iNdEx + msglen
+			if postIndex < 0 {
+				return ErrInvalidLengthEvents
+			}
+			if postIndex > l {
+				return io.ErrUnexpectedEOF
+			}
+			if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.Time, dAtA[iNdEx:postIndex]); err != nil {
+				return err
+			}
+			iNdEx = postIndex
+		case 6:
+			if wireType != 2 {
+				return fmt.Errorf("proto: wrong wireType = %d for field ClusterName", wireType)
+			}
+			var stringLen uint64
+			for shift := uint(0); ; shift += 7 {
+				if shift >= 64 {
+					return ErrIntOverflowEvents
+				}
+				if iNdEx >= l {
+					return io.ErrUnexpectedEOF
+				}
+				b := dAtA[iNdEx]
+				iNdEx++
+				stringLen |= uint64(b&0x7F) << shift
+				if b < 0x80 {
+					break
+				}
+			}
+			intStringLen := int(stringLen)
+			if intStringLen < 0 {
+				return ErrInvalidLengthEvents
+			}
+			postIndex := iNdEx + intStringLen
+			if postIndex < 0 {
+				return ErrInvalidLengthEvents
+			}
+
if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ClusterName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *SessionMetadata) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SessionMetadata: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SessionMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SessionID", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.SessionID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for 
field WithMFA", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.WithMFA = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field PrivateKeyPolicy", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.PrivateKeyPolicy = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *UserMetadata) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: UserMetadata: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: UserMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field User", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.User = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Login", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return 
ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Login = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Impersonator", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Impersonator = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AWSRoleARN", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.AWSRoleARN = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AccessRequests", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents 
+ } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.AccessRequests = append(m.AccessRequests, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AzureIdentity", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.AzureIdentity = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field GCPServiceAccount", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.GCPServiceAccount = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 8: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field TrustedDevice", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 
0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.TrustedDevice == nil { + m.TrustedDevice = &DeviceMetadata{} + } + if err := m.TrustedDevice.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 9: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field RequiredPrivateKeyPolicy", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.RequiredPrivateKeyPolicy = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 10: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field UserKind", wireType) + } + m.UserKind = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.UserKind |= UserKind(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 11: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field BotName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + 
intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.BotName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 12: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field BotInstanceID", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.BotInstanceID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 13: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field UserOrigin", wireType) + } + m.UserOrigin = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.UserOrigin |= UserOrigin(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 14: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserRoles", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.UserRoles = append(m.UserRoles, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 15: + if 
wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserTraits", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.UserTraits.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 16: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserClusterName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.UserClusterName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ServerMetadata) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ServerMetadata: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ServerMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ServerNamespace", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ServerNamespace = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ServerID", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex 
< 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ServerID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ServerHostname", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ServerHostname = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ServerAddr", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ServerAddr = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ServerLabels", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + 
msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.ServerLabels == nil { + m.ServerLabels = make(map[string]string) + } + var mapkey string + var mapvalue string + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthEvents + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey < 0 { + return ErrInvalidLengthEvents + } + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var stringLenmapvalue uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapvalue |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapvalue := int(stringLenmapvalue) + if intStringLenmapvalue < 0 { + return ErrInvalidLengthEvents + } + postStringIndexmapvalue := iNdEx + intStringLenmapvalue + if postStringIndexmapvalue < 0 { + return ErrInvalidLengthEvents + } + if postStringIndexmapvalue > l { + return io.ErrUnexpectedEOF + } + mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) + iNdEx = postStringIndexmapvalue + } else { + iNdEx = 
entryPreIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + m.ServerLabels[mapkey] = mapvalue + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ForwardedBy", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ForwardedBy = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ServerSubKind", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ServerSubKind = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 8: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ServerVersion", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := 
dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ServerVersion = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ConnectionMetadata) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ConnectionMetadata: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ConnectionMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field LocalAddr", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + 
postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.LocalAddr = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field RemoteAddr", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.RemoteAddr = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Protocol", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Protocol = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ClientMetadata) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ClientMetadata: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ClientMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserAgent", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.UserAgent = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *KubernetesClusterMetadata) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: KubernetesClusterMetadata: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: KubernetesClusterMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesCluster", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.KubernetesCluster = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesUsers", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + 
postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.KubernetesUsers = append(m.KubernetesUsers, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesGroups", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.KubernetesGroups = append(m.KubernetesGroups, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesLabels", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.KubernetesLabels == nil { + m.KubernetesLabels = make(map[string]string) + } + var mapkey string + var mapvalue string + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := 
int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthEvents + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey < 0 { + return ErrInvalidLengthEvents + } + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var stringLenmapvalue uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapvalue |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapvalue := int(stringLenmapvalue) + if intStringLenmapvalue < 0 { + return ErrInvalidLengthEvents + } + postStringIndexmapvalue := iNdEx + intStringLenmapvalue + if postStringIndexmapvalue < 0 { + return ErrInvalidLengthEvents + } + if postStringIndexmapvalue > l { + return io.ErrUnexpectedEOF + } + mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) + iNdEx = postStringIndexmapvalue + } else { + iNdEx = entryPreIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + m.KubernetesLabels[mapkey] = mapvalue + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return 
io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *KubernetesPodMetadata) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: KubernetesPodMetadata: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: KubernetesPodMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesPodName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.KubernetesPodName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesPodNamespace", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + 
intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.KubernetesPodNamespace = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesContainerName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.KubernetesContainerName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesContainerImage", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.KubernetesContainerImage = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesNodeName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx 
>= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.KubernetesNodeName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *SAMLIdPServiceProviderMetadata) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SAMLIdPServiceProviderMetadata: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SAMLIdPServiceProviderMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ServiceProviderEntityID", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } 
+ intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ServiceProviderEntityID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ServiceProviderShortcut", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ServiceProviderShortcut = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AttributeMapping", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.AttributeMapping == nil { + m.AttributeMapping = make(map[string]string) + } + var mapkey string + var mapvalue string + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) 
<< shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthEvents + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey < 0 { + return ErrInvalidLengthEvents + } + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var stringLenmapvalue uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapvalue |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapvalue := int(stringLenmapvalue) + if intStringLenmapvalue < 0 { + return ErrInvalidLengthEvents + } + postStringIndexmapvalue := iNdEx + intStringLenmapvalue + if postStringIndexmapvalue < 0 { + return ErrInvalidLengthEvents + } + if postStringIndexmapvalue > l { + return io.ErrUnexpectedEOF + } + mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) + iNdEx = postStringIndexmapvalue + } else { + iNdEx = entryPreIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + m.AttributeMapping[mapkey] = mapvalue + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + 
} + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *OktaResourcesUpdatedMetadata) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: OktaResourcesUpdatedMetadata: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: OktaResourcesUpdatedMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Added", wireType) + } + m.Added = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Added |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Updated", wireType) + } + m.Updated = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Updated |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Deleted", wireType) + } + m.Deleted = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + 
m.Deleted |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AddedResources", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.AddedResources = append(m.AddedResources, &OktaResource{}) + if err := m.AddedResources[len(m.AddedResources)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UpdatedResources", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.UpdatedResources = append(m.UpdatedResources, &OktaResource{}) + if err := m.UpdatedResources[len(m.UpdatedResources)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DeletedResources", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if 
msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.DeletedResources = append(m.DeletedResources, &OktaResource{}) + if err := m.DeletedResources[len(m.DeletedResources)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *OktaResource) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: OktaResource: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: OktaResource: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ID", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + 
if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Description", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Description = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *OktaAssignmentMetadata) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: OktaAssignmentMetadata: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: OktaAssignmentMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Source", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Source = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field User", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if 
postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.User = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field StartingStatus", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.StartingStatus = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field EndingStatus", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.EndingStatus = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AccessListMemberMetadata) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AccessListMemberMetadata: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AccessListMemberMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AccessListName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.AccessListName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Members", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return 
ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Members = append(m.Members, &AccessListMember{}) + if err := m.Members[len(m.Members)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AccessListTitle", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.AccessListTitle = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AccessListMember) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AccessListMember: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AccessListMember: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field JoinedOn", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.JoinedOn, dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field RemovedOn", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents 
+ } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.RemovedOn, dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Reason", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Reason = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field MemberName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.MemberName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 5: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field MembershipKind", wireType) + } + m.MembershipKind = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.MembershipKind |= v1.MembershipKind(b&0x7F) << shift + if b < 0x80 { + break + } + } + default: + iNdEx 
= preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AccessListReviewMembershipRequirementsChanged) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AccessListReviewMembershipRequirementsChanged: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AccessListReviewMembershipRequirementsChanged: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Roles = append(m.Roles, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Traits", wireType) + } + var msglen 
int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Traits == nil { + m.Traits = make(map[string]string) + } + var mapkey string + var mapvalue string + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthEvents + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey < 0 { + return ErrInvalidLengthEvents + } + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var stringLenmapvalue uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapvalue |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapvalue := int(stringLenmapvalue) + if intStringLenmapvalue < 0 { + return ErrInvalidLengthEvents + } + 
postStringIndexmapvalue := iNdEx + intStringLenmapvalue + if postStringIndexmapvalue < 0 { + return ErrInvalidLengthEvents + } + if postStringIndexmapvalue > l { + return io.ErrUnexpectedEOF + } + mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) + iNdEx = postStringIndexmapvalue + } else { + iNdEx = entryPreIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + m.Traits[mapkey] = mapvalue + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AccessListReviewMetadata) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AccessListReviewMetadata: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AccessListReviewMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Message", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { 
+ return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Message = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ReviewID", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ReviewID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field MembershipRequirementsChanged", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.MembershipRequirementsChanged == nil { + m.MembershipRequirementsChanged = &AccessListReviewMembershipRequirementsChanged{} + } + if err := m.MembershipRequirementsChanged.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { 
+ return fmt.Errorf("proto: wrong wireType = %d for field ReviewFrequencyChanged", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ReviewFrequencyChanged = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ReviewDayOfMonthChanged", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ReviewDayOfMonthChanged = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field RemovedMembers", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return 
io.ErrUnexpectedEOF + } + m.RemovedMembers = append(m.RemovedMembers, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AccessListTitle", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.AccessListTitle = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *LockMetadata) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: LockMetadata: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: LockMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Target", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Target.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *SessionStart) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SessionStart: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SessionStart: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return 
io.ErrUnexpectedEOF + } + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return 
ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field TerminalSize", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.TerminalSize = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesClusterMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.KubernetesClusterMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 8: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesPodMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + 
return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.KubernetesPodMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 9: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field InitialCommand", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.InitialCommand = append(m.InitialCommand, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 10: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SessionRecording", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.SessionRecording = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 12: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Invited", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } 
+ b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Invited = append(m.Invited, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 13: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Reason", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Reason = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *SessionJoin) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SessionJoin: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SessionJoin: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return 
io.ErrUnexpectedEOF + } + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return 
ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesClusterMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.KubernetesClusterMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *SessionPrint) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SessionPrint: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SessionPrint: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field ChunkIndex", wireType) + } + m.ChunkIndex = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.ChunkIndex |= int64(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Data", wireType) + } + var byteLen int + for shift := uint(0); ; shift += 7 { + if shift 
>= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + byteLen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if byteLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + byteLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Data = append(m.Data[:0], dAtA[iNdEx:postIndex]...) + if m.Data == nil { + m.Data = []byte{} + } + iNdEx = postIndex + case 4: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Bytes", wireType) + } + m.Bytes = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Bytes |= int64(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 5: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field DelayMilliseconds", wireType) + } + m.DelayMilliseconds = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.DelayMilliseconds |= int64(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 6: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Offset", wireType) + } + m.Offset = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Offset |= int64(b&0x7F) << shift + if b < 0x80 { + break + } + } + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *DesktopRecording) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: DesktopRecording: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: DesktopRecording: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Message", wireType) + } + var byteLen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + byteLen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if byteLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + byteLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return 
io.ErrUnexpectedEOF + } + m.Message = append(m.Message[:0], dAtA[iNdEx:postIndex]...) + if m.Message == nil { + m.Message = []byte{} + } + iNdEx = postIndex + case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field DelayMilliseconds", wireType) + } + m.DelayMilliseconds = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.DelayMilliseconds |= int64(b&0x7F) << shift + if b < 0x80 { + break + } + } + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *DesktopClipboardReceive) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: DesktopClipboardReceive: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: DesktopClipboardReceive: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen 
|= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF 
+ } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DesktopAddr", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.DesktopAddr = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 6: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Length", wireType) + } + m.Length = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Length |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DesktopName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 
{ + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.DesktopName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *DesktopClipboardSend) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: DesktopClipboardSend: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: DesktopClipboardSend: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return 
fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx 
= postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DesktopAddr", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.DesktopAddr = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 6: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Length", wireType) + } + m.Length = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Length |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DesktopName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.DesktopName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return 
io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *DesktopSharedDirectoryStart) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: DesktopSharedDirectoryStart: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: DesktopSharedDirectoryStart: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return 
ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 
0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DesktopAddr", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.DesktopAddr = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DirectoryName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.DirectoryName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 8: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field DirectoryID", wireType) + } + m.DirectoryID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := 
dAtA[iNdEx] + iNdEx++ + m.DirectoryID |= uint32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 9: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DesktopName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.DesktopName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *DesktopSharedDirectoryRead) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: DesktopSharedDirectoryRead: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: DesktopSharedDirectoryRead: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + 
if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if 
postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DesktopAddr", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.DesktopAddr = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DirectoryName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.DirectoryName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 8: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field DirectoryID", wireType) + } + m.DirectoryID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.DirectoryID |= uint32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 9: + if 
wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Path = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 10: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Length", wireType) + } + m.Length = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Length |= uint32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 11: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Offset", wireType) + } + m.Offset = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Offset |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 12: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DesktopName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + 
return io.ErrUnexpectedEOF + } + m.DesktopName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } -func (m *SigstorePolicyDelete) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - l = m.Metadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.UserMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ConnectionMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - l = m.ResourceMetadata.Size() - n += 1 + l + sovEvents(uint64(l)) - if m.XXX_unrecognized != nil { - n += len(m.XXX_unrecognized) + if iNdEx > l { + return io.ErrUnexpectedEOF } - return n -} - -func sovEvents(x uint64) (n int) { - return (math_bits.Len64(x|1) + 6) / 7 -} -func sozEvents(x uint64) (n int) { - return sovEvents(uint64((x << 1) ^ uint64((int64(x) >> 63)))) + return nil } -func (m *Metadata) Unmarshal(dAtA []byte) error { +func (m *DesktopSharedDirectoryWrite) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -52468,17 +63271,17 @@ func (m *Metadata) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: Metadata: wiretype end group for non-group") + return fmt.Errorf("proto: DesktopSharedDirectoryWrite: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: Metadata: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DesktopSharedDirectoryWrite: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Index", wireType) + if wireType != 2 { + return fmt.Errorf("proto: 
wrong wireType = %d for field Metadata", wireType) } - m.Index = 0 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -52488,14 +63291,160 @@ func (m *Metadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.Index |= int64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := 
m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DesktopAddr", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -52523,11 +63472,11 @@ func (m *Metadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Type = string(dAtA[iNdEx:postIndex]) + m.DesktopAddr = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ID", wireType) + return 
fmt.Errorf("proto: wrong wireType = %d for field DirectoryName", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -52555,13 +63504,251 @@ func (m *Metadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ID = string(dAtA[iNdEx:postIndex]) + m.DirectoryName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 8: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field DirectoryID", wireType) + } + m.DirectoryID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.DirectoryID |= uint32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 9: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Path = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 10: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Length", wireType) + } + m.Length = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Length |= uint32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 11: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Offset", wireType) + } + m.Offset = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 
ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Offset |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 12: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DesktopName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.DesktopName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *SessionReject) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SessionReject: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SessionReject: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return 
io.ErrUnexpectedEOF + } + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 4: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Code", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -52571,27 +63758,28 @@ func (m *Metadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Code = string(dAtA[iNdEx:postIndex]) + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 5: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Time", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -52618,13 +63806,13 @@ func (m *Metadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.Time, dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 6: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ClusterName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Reason", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -52652,8 +63840,27 @@ func (m *Metadata) Unmarshal(dAtA 
[]byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ClusterName = string(dAtA[iNdEx:postIndex]) + m.Reason = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 6: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Maximum", wireType) + } + m.Maximum = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Maximum |= int64(b&0x7F) << shift + if b < 0x80 { + break + } + } default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -52676,7 +63883,7 @@ func (m *Metadata) Unmarshal(dAtA []byte) error { } return nil } -func (m *SessionMetadata) Unmarshal(dAtA []byte) error { +func (m *SessionConnect) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -52699,17 +63906,17 @@ func (m *SessionMetadata) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SessionMetadata: wiretype end group for non-group") + return fmt.Errorf("proto: SessionConnect: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SessionMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SessionConnect: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -52719,29 +63926,30 @@ func (m *SessionMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return 
ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.SessionID = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WithMFA", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -52751,29 +63959,30 @@ func (m *SessionMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.WithMFA = string(dAtA[iNdEx:postIndex]) + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PrivateKeyPolicy", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -52783,23 +63992,24 @@ func (m *SessionMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if 
postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.PrivateKeyPolicy = string(dAtA[iNdEx:postIndex]) + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -52823,7 +64033,7 @@ func (m *SessionMetadata) Unmarshal(dAtA []byte) error { } return nil } -func (m *UserMetadata) Unmarshal(dAtA []byte) error { +func (m *FileTransferRequestEvent) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -52846,17 +64056,17 @@ func (m *UserMetadata) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UserMetadata: wiretype end group for non-group") + return fmt.Errorf("proto: FileTransferRequestEvent: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UserMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: FileTransferRequestEvent: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field User", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -52866,29 +64076,30 @@ func (m *UserMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.User = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + 
return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Login", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -52898,27 +64109,28 @@ func (m *UserMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Login = string(dAtA[iNdEx:postIndex]) + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Impersonator", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RequestID", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -52946,11 +64158,11 @@ func (m *UserMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Impersonator = string(dAtA[iNdEx:postIndex]) + m.RequestID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AWSRoleARN", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Approvers", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -52978,11 +64190,11 @@ func (m *UserMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AWSRoleARN = string(dAtA[iNdEx:postIndex]) + m.Approvers = append(m.Approvers, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex case 5: if wireType != 2 
{ - return fmt.Errorf("proto: wrong wireType = %d for field AccessRequests", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Requester", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -53010,11 +64222,11 @@ func (m *UserMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AccessRequests = append(m.AccessRequests, string(dAtA[iNdEx:postIndex])) + m.Requester = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AzureIdentity", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Location", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -53042,132 +64254,13 @@ func (m *UserMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AzureIdentity = string(dAtA[iNdEx:postIndex]) + m.Location = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 7: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field GCPServiceAccount", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.GCPServiceAccount = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 8: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TrustedDevice", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - 
msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if m.TrustedDevice == nil { - m.TrustedDevice = &DeviceMetadata{} - } - if err := m.TrustedDevice.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 9: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RequiredPrivateKeyPolicy", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.RequiredPrivateKeyPolicy = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 10: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field UserKind", wireType) - } - m.UserKind = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.UserKind |= UserKind(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 11: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field BotName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Download", wireType) } - var stringLen uint64 + var v int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -53177,27 +64270,15 @@ func (m *UserMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + v |= 
int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.BotName = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 12: + m.Download = bool(v != 0) + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field BotInstanceID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Filename", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -53225,27 +64306,8 @@ func (m *UserMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.BotInstanceID = string(dAtA[iNdEx:postIndex]) + m.Filename = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 13: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field UserOrigin", wireType) - } - m.UserOrigin = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.UserOrigin |= UserOrigin(b&0x7F) << shift - if b < 0x80 { - break - } - } default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -53268,7 +64330,7 @@ func (m *UserMetadata) Unmarshal(dAtA []byte) error { } return nil } -func (m *ServerMetadata) Unmarshal(dAtA []byte) error { +func (m *Resize) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -53291,17 +64353,17 @@ func (m *ServerMetadata) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ServerMetadata: wiretype end group for non-group") + return fmt.Errorf("proto: Resize: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ServerMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + 
return fmt.Errorf("proto: Resize: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerNamespace", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -53311,29 +64373,30 @@ func (m *ServerMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.ServerNamespace = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -53343,29 +64406,30 @@ func (m *ServerMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.ServerID = string(dAtA[iNdEx:postIndex]) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex 
case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerHostname", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -53375,29 +64439,30 @@ func (m *ServerMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.ServerHostname = string(dAtA[iNdEx:postIndex]) + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerAddr", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -53407,27 +64472,28 @@ func (m *ServerMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.ServerAddr = string(dAtA[iNdEx:postIndex]) + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d 
for field ServerLabels", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -53454,107 +64520,13 @@ func (m *ServerMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.ServerLabels == nil { - m.ServerLabels = make(map[string]string) - } - var mapkey string - var mapvalue string - for iNdEx < postIndex { - entryPreIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - if fieldNum == 1 { - var stringLenmapkey uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapkey |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapkey := int(stringLenmapkey) - if intStringLenmapkey < 0 { - return ErrInvalidLengthEvents - } - postStringIndexmapkey := iNdEx + intStringLenmapkey - if postStringIndexmapkey < 0 { - return ErrInvalidLengthEvents - } - if postStringIndexmapkey > l { - return io.ErrUnexpectedEOF - } - mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) - iNdEx = postStringIndexmapkey - } else if fieldNum == 2 { - var stringLenmapvalue uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapvalue |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapvalue := int(stringLenmapvalue) - if intStringLenmapvalue < 0 { - return ErrInvalidLengthEvents - } - postStringIndexmapvalue := iNdEx + intStringLenmapvalue - if postStringIndexmapvalue < 0 { - return ErrInvalidLengthEvents - 
} - if postStringIndexmapvalue > l { - return io.ErrUnexpectedEOF - } - mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) - iNdEx = postStringIndexmapvalue - } else { - iNdEx = entryPreIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > postIndex { - return io.ErrUnexpectedEOF - } - iNdEx += skippy - } + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - m.ServerLabels[mapkey] = mapvalue iNdEx = postIndex case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ForwardedBy", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TerminalSize", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -53582,13 +64554,13 @@ func (m *ServerMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ForwardedBy = string(dAtA[iNdEx:postIndex]) + m.TerminalSize = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerSubKind", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesClusterMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -53598,29 +64570,30 @@ func (m *ServerMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.ServerSubKind = string(dAtA[iNdEx:postIndex]) + if err := 
m.KubernetesClusterMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerVersion", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesPodMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -53630,23 +64603,24 @@ func (m *ServerMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.ServerVersion = string(dAtA[iNdEx:postIndex]) + if err := m.KubernetesPodMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -53670,7 +64644,7 @@ func (m *ServerMetadata) Unmarshal(dAtA []byte) error { } return nil } -func (m *ConnectionMetadata) Unmarshal(dAtA []byte) error { +func (m *SessionEnd) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -53693,17 +64667,17 @@ func (m *ConnectionMetadata) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ConnectionMetadata: wiretype end group for non-group") + return fmt.Errorf("proto: SessionEnd: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ConnectionMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SessionEnd: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field 
LocalAddr", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -53713,29 +64687,30 @@ func (m *ConnectionMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.LocalAddr = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RemoteAddr", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -53745,29 +64720,30 @@ func (m *ConnectionMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.RemoteAddr = string(dAtA[iNdEx:postIndex]) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Protocol", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", 
wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -53777,80 +64753,63 @@ func (m *ConnectionMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Protocol = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *ClientMetadata) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents + if msglen < 0 { + return ErrInvalidLengthEvents } - if iNdEx >= l { + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { return io.ErrUnexpectedEOF } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: ClientMetadata: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: ClientMetadata: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + iNdEx = postIndex + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserAgent", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -53860,78 +64819,68 @@ func (m *ClientMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.UserAgent = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil 
{ + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF + iNdEx = postIndex + case 6: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field EnhancedRecording", wireType) } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *KubernetesClusterMetadata) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } } - if iNdEx >= l { - return io.ErrUnexpectedEOF + m.EnhancedRecording = bool(v != 0) + case 7: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Interactive", wireType) } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: KubernetesClusterMetadata: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: KubernetesClusterMetadata: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + m.Interactive = bool(v != 0) + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d 
for field KubernetesCluster", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Participants", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -53959,13 +64908,13 @@ func (m *KubernetesClusterMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.KubernetesCluster = string(dAtA[iNdEx:postIndex]) + m.Participants = append(m.Participants, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 2: + case 9: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubernetesUsers", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StartTime", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -53975,29 +64924,30 @@ func (m *KubernetesClusterMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.KubernetesUsers = append(m.KubernetesUsers, string(dAtA[iNdEx:postIndex])) + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.StartTime, dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 3: + case 10: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubernetesGroups", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field EndTime", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -54007,27 +64957,28 @@ func (m *KubernetesClusterMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift 
+ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.KubernetesGroups = append(m.KubernetesGroups, string(dAtA[iNdEx:postIndex])) + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.EndTime, dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 4: + case 11: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubernetesLabels", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesClusterMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -54054,160 +65005,15 @@ func (m *KubernetesClusterMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.KubernetesLabels == nil { - m.KubernetesLabels = make(map[string]string) - } - var mapkey string - var mapvalue string - for iNdEx < postIndex { - entryPreIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - if fieldNum == 1 { - var stringLenmapkey uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapkey |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapkey := int(stringLenmapkey) - if intStringLenmapkey < 0 { - return ErrInvalidLengthEvents - } - postStringIndexmapkey := iNdEx + intStringLenmapkey - if postStringIndexmapkey < 0 { - return ErrInvalidLengthEvents - } - if postStringIndexmapkey > 
l { - return io.ErrUnexpectedEOF - } - mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) - iNdEx = postStringIndexmapkey - } else if fieldNum == 2 { - var stringLenmapvalue uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapvalue |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapvalue := int(stringLenmapvalue) - if intStringLenmapvalue < 0 { - return ErrInvalidLengthEvents - } - postStringIndexmapvalue := iNdEx + intStringLenmapvalue - if postStringIndexmapvalue < 0 { - return ErrInvalidLengthEvents - } - if postStringIndexmapvalue > l { - return io.ErrUnexpectedEOF - } - mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) - iNdEx = postStringIndexmapvalue - } else { - iNdEx = entryPreIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > postIndex { - return io.ErrUnexpectedEOF - } - iNdEx += skippy - } - } - m.KubernetesLabels[mapkey] = mapvalue - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + if err := m.KubernetesClusterMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *KubernetesPodMetadata) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: KubernetesPodMetadata: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: KubernetesPodMetadata: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + iNdEx = postIndex + case 12: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubernetesPodName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesPodMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -54217,27 +65023,28 @@ func (m *KubernetesPodMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.KubernetesPodName = string(dAtA[iNdEx:postIndex]) + if err := m.KubernetesPodMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 2: + case 13: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubernetesPodNamespace", wireType) + return fmt.Errorf("proto: wrong wireType = %d 
for field InitialCommand", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -54265,11 +65072,11 @@ func (m *KubernetesPodMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.KubernetesPodNamespace = string(dAtA[iNdEx:postIndex]) + m.InitialCommand = append(m.InitialCommand, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 3: + case 14: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubernetesContainerName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionRecording", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -54297,13 +65104,83 @@ func (m *KubernetesPodMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.KubernetesContainerName = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubernetesContainerImage", wireType) + m.SessionRecording = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *BPFMetadata) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: BPFMetadata: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: BPFMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field PID", wireType) + } + m.PID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.PID |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field CgroupID", wireType) } - var stringLen uint64 + m.CgroupID = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -54313,27 +65190,14 @@ func (m *KubernetesPodMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + m.CgroupID |= uint64(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.KubernetesContainerImage = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 5: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field 
KubernetesNodeName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Program", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -54361,7 +65225,7 @@ func (m *KubernetesPodMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.KubernetesNodeName = string(dAtA[iNdEx:postIndex]) + m.Program = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -54385,7 +65249,7 @@ func (m *KubernetesPodMetadata) Unmarshal(dAtA []byte) error { } return nil } -func (m *SAMLIdPServiceProviderMetadata) Unmarshal(dAtA []byte) error { +func (m *Status) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -54408,17 +65272,17 @@ func (m *SAMLIdPServiceProviderMetadata) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SAMLIdPServiceProviderMetadata: wiretype end group for non-group") + return fmt.Errorf("proto: Status: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SAMLIdPServiceProviderMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: Status: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServiceProviderEntityID", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Success", wireType) } - var stringLen uint64 + var v int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -54428,27 +65292,15 @@ func (m *SAMLIdPServiceProviderMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + v |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - 
return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.ServiceProviderEntityID = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex + m.Success = bool(v != 0) case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServiceProviderShortcut", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Error", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -54476,13 +65328,13 @@ func (m *SAMLIdPServiceProviderMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ServiceProviderShortcut = string(dAtA[iNdEx:postIndex]) + m.Error = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AttributeMapping", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMessage", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -54492,118 +65344,23 @@ func (m *SAMLIdPServiceProviderMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if m.AttributeMapping == nil { - m.AttributeMapping = make(map[string]string) - } - var mapkey string - var mapvalue string - for iNdEx < postIndex { - entryPreIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - 
if fieldNum == 1 { - var stringLenmapkey uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapkey |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapkey := int(stringLenmapkey) - if intStringLenmapkey < 0 { - return ErrInvalidLengthEvents - } - postStringIndexmapkey := iNdEx + intStringLenmapkey - if postStringIndexmapkey < 0 { - return ErrInvalidLengthEvents - } - if postStringIndexmapkey > l { - return io.ErrUnexpectedEOF - } - mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) - iNdEx = postStringIndexmapkey - } else if fieldNum == 2 { - var stringLenmapvalue uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapvalue |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapvalue := int(stringLenmapvalue) - if intStringLenmapvalue < 0 { - return ErrInvalidLengthEvents - } - postStringIndexmapvalue := iNdEx + intStringLenmapvalue - if postStringIndexmapvalue < 0 { - return ErrInvalidLengthEvents - } - if postStringIndexmapvalue > l { - return io.ErrUnexpectedEOF - } - mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) - iNdEx = postStringIndexmapvalue - } else { - iNdEx = entryPreIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > postIndex { - return io.ErrUnexpectedEOF - } - iNdEx += skippy - } - } - m.AttributeMapping[mapkey] = mapvalue + m.UserMessage = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -54627,7 +65384,7 @@ func (m *SAMLIdPServiceProviderMetadata) Unmarshal(dAtA []byte) error { } return nil } -func (m *OktaResourcesUpdatedMetadata) Unmarshal(dAtA []byte) error { 
+func (m *SessionCommand) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -54650,17 +65407,17 @@ func (m *OktaResourcesUpdatedMetadata) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: OktaResourcesUpdatedMetadata: wiretype end group for non-group") + return fmt.Errorf("proto: SessionCommand: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: OktaResourcesUpdatedMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SessionCommand: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Added", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - m.Added = 0 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -54670,16 +65427,30 @@ func (m *OktaResourcesUpdatedMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.Added |= int32(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex case 2: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Updated", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - m.Updated = 0 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -54689,16 +65460,30 @@ func (m *OktaResourcesUpdatedMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.Updated |= int32(b&0x7F) 
<< shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex case 3: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Deleted", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } - m.Deleted = 0 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -54708,14 +65493,28 @@ func (m *OktaResourcesUpdatedMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.Deleted |= int32(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AddedResources", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -54742,14 +65541,13 @@ func (m *OktaResourcesUpdatedMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AddedResources = append(m.AddedResources, &OktaResource{}) - if err := m.AddedResources[len(m.AddedResources)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UpdatedResources", 
wireType) + return fmt.Errorf("proto: wrong wireType = %d for field BPFMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -54776,16 +65574,34 @@ func (m *OktaResourcesUpdatedMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.UpdatedResources = append(m.UpdatedResources, &OktaResource{}) - if err := m.UpdatedResources[len(m.UpdatedResources)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.BPFMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 6: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field PPID", wireType) + } + m.PPID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.PPID |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DeletedResources", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -54795,26 +65611,75 @@ func (m *OktaResourcesUpdatedMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.DeletedResources = append(m.DeletedResources, &OktaResource{}) - if err := m.DeletedResources[len(m.DeletedResources)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + m.Path = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 8: + if 
wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Argv", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF } + m.Argv = append(m.Argv, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex + case 9: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field ReturnCode", wireType) + } + m.ReturnCode = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.ReturnCode |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -54837,7 +65702,7 @@ func (m *OktaResourcesUpdatedMetadata) Unmarshal(dAtA []byte) error { } return nil } -func (m *OktaResource) Unmarshal(dAtA []byte) error { +func (m *SessionDisk) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -54860,17 +65725,17 @@ func (m *OktaResource) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: OktaResource: wiretype end group for non-group") + return fmt.Errorf("proto: SessionDisk: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: OktaResource: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SessionDisk: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ID", 
wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -54880,29 +65745,30 @@ func (m *OktaResource) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.ID = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Description", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -54912,80 +65778,63 @@ func (m *OktaResource) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Description = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents + iNdEx = postIndex + case 3: + if wireType != 2 { + 
return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *OktaAssignmentMetadata) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents + if msglen < 0 { + return ErrInvalidLengthEvents } - if iNdEx >= l { + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { return io.ErrUnexpectedEOF } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: OktaAssignmentMetadata: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: OktaAssignmentMetadata: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + iNdEx = postIndex + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Source", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -54995,29 +65844,30 @@ func (m *OktaAssignmentMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= 
uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Source = string(dAtA[iNdEx:postIndex]) + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 2: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field User", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field BPFMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -55027,27 +65877,28 @@ func (m *OktaAssignmentMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.User = string(dAtA[iNdEx:postIndex]) + if err := m.BPFMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 3: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field StartingStatus", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -55075,13 +65926,32 @@ func (m *OktaAssignmentMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.StartingStatus = string(dAtA[iNdEx:postIndex]) + m.Path = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: - if wireType != 2 { 
- return fmt.Errorf("proto: wrong wireType = %d for field EndingStatus", wireType) + case 7: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Flags", wireType) } - var stringLen uint64 + m.Flags = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Flags |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 8: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field ReturnCode", wireType) + } + m.ReturnCode = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -55091,24 +65961,11 @@ func (m *OktaAssignmentMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + m.ReturnCode |= int32(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.EndingStatus = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -55131,7 +65988,7 @@ func (m *OktaAssignmentMetadata) Unmarshal(dAtA []byte) error { } return nil } -func (m *AccessListMemberMetadata) Unmarshal(dAtA []byte) error { +func (m *SessionNetwork) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -55154,17 +66011,17 @@ func (m *AccessListMemberMetadata) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AccessListMemberMetadata: wiretype end group for non-group") + return fmt.Errorf("proto: SessionNetwork: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AccessListMemberMetadata: illegal tag %d (wire type %d)", 
fieldNum, wire) + return fmt.Errorf("proto: SessionNetwork: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessListName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -55174,27 +66031,28 @@ func (m *AccessListMemberMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.AccessListName = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Members", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -55221,16 +66079,15 @@ func (m *AccessListMemberMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Members = append(m.Members, &AccessListMember{}) - if err := m.Members[len(m.Members)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessListTitle", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 
7 { if shift >= 64 { return ErrIntOverflowEvents @@ -55240,78 +66097,28 @@ func (m *AccessListMemberMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.AccessListTitle = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *AccessListMember) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: AccessListMember: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: AccessListMember: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + iNdEx = postIndex + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field JoinedOn", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -55338,13 +66145,13 @@ func (m *AccessListMember) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.JoinedOn, dAtA[iNdEx:postIndex]); err != nil { + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RemovedOn", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field BPFMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -55371,13 +66178,13 @@ func (m *AccessListMember) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.RemovedOn, dAtA[iNdEx:postIndex]); err != nil { + if err := m.BPFMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx 
= postIndex - case 3: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Reason", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SrcAddr", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -55405,11 +66212,11 @@ func (m *AccessListMember) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Reason = string(dAtA[iNdEx:postIndex]) + m.SrcAddr = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MemberName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DstAddr", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -55437,13 +66244,13 @@ func (m *AccessListMember) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.MemberName = string(dAtA[iNdEx:postIndex]) + m.DstAddr = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 5: + case 8: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field MembershipKind", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DstPort", wireType) } - m.MembershipKind = 0 + m.DstPort = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -55453,7 +66260,64 @@ func (m *AccessListMember) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.MembershipKind |= v1.MembershipKind(b&0x7F) << shift + m.DstPort |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 9: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TCPVersion", wireType) + } + m.TCPVersion = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TCPVersion |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 10: + if wireType != 0 { + return 
fmt.Errorf("proto: wrong wireType = %d for field Operation", wireType) + } + m.Operation = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Operation |= SessionNetwork_NetworkOperation(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 11: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Action", wireType) + } + m.Action = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Action |= EventAction(b&0x7F) << shift if b < 0x80 { break } @@ -55480,7 +66344,7 @@ func (m *AccessListMember) Unmarshal(dAtA []byte) error { } return nil } -func (m *AccessListReviewMembershipRequirementsChanged) Unmarshal(dAtA []byte) error { +func (m *SessionData) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -55503,17 +66367,17 @@ func (m *AccessListReviewMembershipRequirementsChanged) Unmarshal(dAtA []byte) e fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AccessListReviewMembershipRequirementsChanged: wiretype end group for non-group") + return fmt.Errorf("proto: SessionData: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AccessListReviewMembershipRequirementsChanged: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SessionData: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -55523,27 +66387,28 @@ func (m 
*AccessListReviewMembershipRequirementsChanged) Unmarshal(dAtA []byte) e } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Roles = append(m.Roles, string(dAtA[iNdEx:postIndex])) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Traits", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -55570,160 +66435,15 @@ func (m *AccessListReviewMembershipRequirementsChanged) Unmarshal(dAtA []byte) e if postIndex > l { return io.ErrUnexpectedEOF } - if m.Traits == nil { - m.Traits = make(map[string]string) - } - var mapkey string - var mapvalue string - for iNdEx < postIndex { - entryPreIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - if fieldNum == 1 { - var stringLenmapkey uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapkey |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapkey := int(stringLenmapkey) - if intStringLenmapkey < 0 { - return ErrInvalidLengthEvents - } - postStringIndexmapkey := iNdEx + intStringLenmapkey - if postStringIndexmapkey < 0 { - return 
ErrInvalidLengthEvents - } - if postStringIndexmapkey > l { - return io.ErrUnexpectedEOF - } - mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) - iNdEx = postStringIndexmapkey - } else if fieldNum == 2 { - var stringLenmapvalue uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapvalue |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapvalue := int(stringLenmapvalue) - if intStringLenmapvalue < 0 { - return ErrInvalidLengthEvents - } - postStringIndexmapvalue := iNdEx + intStringLenmapvalue - if postStringIndexmapvalue < 0 { - return ErrInvalidLengthEvents - } - if postStringIndexmapvalue > l { - return io.ErrUnexpectedEOF - } - mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) - iNdEx = postStringIndexmapvalue - } else { - iNdEx = entryPreIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > postIndex { - return io.ErrUnexpectedEOF - } - iNdEx += skippy - } - } - m.Traits[mapkey] = mapvalue - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *AccessListReviewMetadata) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: AccessListReviewMetadata: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: AccessListReviewMetadata: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + iNdEx = postIndex + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Message", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -55733,29 +66453,30 @@ func (m *AccessListReviewMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Message = string(dAtA[iNdEx:postIndex]) + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 2: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ReviewID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) 
} - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -55765,27 +66486,28 @@ func (m *AccessListReviewMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.ReviewID = string(dAtA[iNdEx:postIndex]) + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 3: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MembershipRequirementsChanged", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -55812,18 +66534,104 @@ func (m *AccessListReviewMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.MembershipRequirementsChanged == nil { - m.MembershipRequirementsChanged = &AccessListReviewMembershipRequirementsChanged{} - } - if err := m.MembershipRequirementsChanged.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: + case 6: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field BytesTransmitted", wireType) + } + m.BytesTransmitted = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.BytesTransmitted |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 7: + if wireType != 0 { + return fmt.Errorf("proto: 
wrong wireType = %d for field BytesReceived", wireType) + } + m.BytesReceived = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.BytesReceived |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *SessionLeave) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SessionLeave: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SessionLeave: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ReviewFrequencyChanged", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -55833,29 +66641,30 @@ func (m *AccessListReviewMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) 
- if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.ReviewFrequencyChanged = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 5: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ReviewDayOfMonthChanged", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -55865,29 +66674,30 @@ func (m *AccessListReviewMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.ReviewDayOfMonthChanged = string(dAtA[iNdEx:postIndex]) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 6: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RemovedMembers", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -55897,29 +66707,30 @@ func (m *AccessListReviewMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen 
< 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.RemovedMembers = append(m.RemovedMembers, string(dAtA[iNdEx:postIndex])) + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 7: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessListTitle", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -55929,78 +66740,28 @@ func (m *AccessListReviewMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.AccessListTitle = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *LockMetadata) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: LockMetadata: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: LockMetadata: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 4: + iNdEx = postIndex + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Target", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -56027,7 +66788,7 @@ func (m *LockMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Target.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -56053,7 +66814,7 @@ func (m *LockMetadata) Unmarshal(dAtA []byte) error { } return nil } -func (m *SessionStart) Unmarshal(dAtA []byte) error { +func (m *UserLogin) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -56076,10 +66837,10 @@ func (m *SessionStart) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SessionStart: wiretype end group for non-group") + return fmt.Errorf("proto: UserLogin: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SessionStart: illegal tag %d (wire type %d)", 
fieldNum, wire) + return fmt.Errorf("proto: UserLogin: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -56150,7 +66911,7 @@ func (m *SessionStart) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -56177,15 +66938,15 @@ func (m *SessionStart) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Method", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -56195,28 +66956,27 @@ func (m *SessionStart) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Method = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field IdentityAttributes", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -56243,15 +67003,18 
@@ func (m *SessionStart) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.IdentityAttributes == nil { + m.IdentityAttributes = &Struct{} + } + if err := m.IdentityAttributes.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TerminalSize", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MFADevice", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -56261,27 +67024,31 @@ func (m *SessionStart) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.TerminalSize = string(dAtA[iNdEx:postIndex]) + if m.MFADevice == nil { + m.MFADevice = &MFADeviceMetadata{} + } + if err := m.MFADevice.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubernetesClusterMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ClientMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -56308,13 +67075,13 @@ func (m *SessionStart) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.KubernetesClusterMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ClientMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 8: if wireType != 2 { - 
return fmt.Errorf("proto: wrong wireType = %d for field KubernetesPodMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -56341,13 +67108,13 @@ func (m *SessionStart) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.KubernetesPodMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 9: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field InitialCommand", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppliedLoginRules", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -56375,11 +67142,11 @@ func (m *SessionStart) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.InitialCommand = append(m.InitialCommand, string(dAtA[iNdEx:postIndex])) + m.AppliedLoginRules = append(m.AppliedLoginRules, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex case 10: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionRecording", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectorID", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -56407,13 +67174,64 @@ func (m *SessionStart) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.SessionRecording = string(dAtA[iNdEx:postIndex]) + m.ConnectorID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 12: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *CreateMFAAuthChallenge) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: CreateMFAAuthChallenge: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: CreateMFAAuthChallenge: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Invited", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -56423,27 +67241,61 @@ func (m *SessionStart) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Invited = append(m.Invited, string(dAtA[iNdEx:postIndex])) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 13: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Reason", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + } + var msglen int + for shift := uint(0); 
; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ChallengeScope", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -56471,8 +67323,28 @@ func (m *SessionStart) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Reason = string(dAtA[iNdEx:postIndex]) + m.ChallengeScope = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 4: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field ChallengeAllowReuse", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.ChallengeAllowReuse = bool(v != 0) default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -56495,7 +67367,7 @@ func (m *SessionStart) Unmarshal(dAtA []byte) error { } return nil } -func (m *SessionJoin) Unmarshal(dAtA []byte) error { +func (m *ValidateMFAAuthResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -56518,10 +67390,10 @@ func (m *SessionJoin) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SessionJoin: wiretype end group for non-group") + return fmt.Errorf("proto: ValidateMFAAuthResponse: wiretype end group for non-group") } if fieldNum <= 
0 { - return fmt.Errorf("proto: SessionJoin: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ValidateMFAAuthResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -56592,7 +67464,7 @@ func (m *SessionJoin) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -56619,13 +67491,13 @@ func (m *SessionJoin) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MFADevice", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -56652,15 +67524,18 @@ func (m *SessionJoin) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.MFADevice == nil { + m.MFADevice = &MFADeviceMetadata{} + } + if err := m.MFADevice.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ChallengeScope", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -56670,28 +67545,130 @@ func (m *SessionJoin) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { 
break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.ChallengeScope = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 6: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field ChallengeAllowReuse", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.ChallengeAllowReuse = bool(v != 0) + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { return err } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ResourceMetadata) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ResourceMetadata: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ResourceMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Name = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 6: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubernetesClusterMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Expires", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -56718,10 +67695,74 @@ func (m *SessionJoin) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.KubernetesClusterMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := 
github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.Expires, dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UpdatedBy", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.UpdatedBy = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field TTL", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.TTL = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -56744,7 +67785,7 @@ func (m *SessionJoin) Unmarshal(dAtA []byte) error { } return nil } -func (m *SessionPrint) Unmarshal(dAtA []byte) error { +func (m *UserCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -56767,10 +67808,10 @@ func (m *SessionPrint) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SessionPrint: 
wiretype end group for non-group") + return fmt.Errorf("proto: UserCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SessionPrint: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UserCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -56807,29 +67848,10 @@ func (m *SessionPrint) Unmarshal(dAtA []byte) error { } iNdEx = postIndex case 2: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field ChunkIndex", wireType) - } - m.ChunkIndex = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.ChunkIndex |= int64(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Data", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var byteLen int + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -56839,31 +67861,30 @@ func (m *SessionPrint) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - byteLen |= int(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - if byteLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + byteLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Data = append(m.Data[:0], dAtA[iNdEx:postIndex]...) 
- if m.Data == nil { - m.Data = []byte{} + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } iNdEx = postIndex - case 4: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Bytes", wireType) + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - m.Bytes = 0 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -56873,35 +67894,30 @@ func (m *SessionPrint) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.Bytes |= int64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - case 5: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field DelayMilliseconds", wireType) + if msglen < 0 { + return ErrInvalidLengthEvents } - m.DelayMilliseconds = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.DelayMilliseconds |= int64(b&0x7F) << shift - if b < 0x80 { - break - } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents } - case 6: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Offset", wireType) + if postIndex > l { + return io.ErrUnexpectedEOF } - m.Offset = 0 + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType) + } + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -56911,67 +67927,29 @@ func (m *SessionPrint) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.Offset |= int64(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - default: - iNdEx = preIndex - skippy, err := 
skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *DesktopRecording) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents } - if iNdEx >= l { + if postIndex > l { return io.ErrUnexpectedEOF } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: DesktopRecording: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: DesktopRecording: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + m.Roles = append(m.Roles, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Connector", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -56981,30 +67959,29 @@ func (m *DesktopRecording) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen 
if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Connector = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Message", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var byteLen int + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -57014,45 +67991,25 @@ func (m *DesktopRecording) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - byteLen |= int(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - if byteLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + byteLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Message = append(m.Message[:0], dAtA[iNdEx:postIndex]...) 
- if m.Message == nil { - m.Message = []byte{} + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } iNdEx = postIndex - case 3: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field DelayMilliseconds", wireType) - } - m.DelayMilliseconds = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.DelayMilliseconds |= int64(b&0x7F) << shift - if b < 0x80 { - break - } - } default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -57075,7 +68032,7 @@ func (m *DesktopRecording) Unmarshal(dAtA []byte) error { } return nil } -func (m *DesktopClipboardReceive) Unmarshal(dAtA []byte) error { +func (m *UserUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -57098,10 +68055,10 @@ func (m *DesktopClipboardReceive) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: DesktopClipboardReceive: wiretype end group for non-group") + return fmt.Errorf("proto: UserUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: DesktopClipboardReceive: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UserUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -57172,7 +68129,7 @@ func (m *DesktopClipboardReceive) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -57199,15 +68156,15 @@ func (m *DesktopClipboardReceive) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := 
m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -57217,28 +68174,27 @@ func (m *DesktopClipboardReceive) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Roles = append(m.Roles, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DesktopAddr", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Connector", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -57266,13 +68222,13 @@ func (m *DesktopClipboardReceive) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.DesktopAddr = string(dAtA[iNdEx:postIndex]) + m.Connector = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 6: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Length", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - m.Length = 0 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -57282,11 
+68238,25 @@ func (m *DesktopClipboardReceive) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.Length |= int32(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -57309,7 +68279,7 @@ func (m *DesktopClipboardReceive) Unmarshal(dAtA []byte) error { } return nil } -func (m *DesktopClipboardSend) Unmarshal(dAtA []byte) error { +func (m *UserDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -57332,10 +68302,10 @@ func (m *DesktopClipboardSend) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: DesktopClipboardSend: wiretype end group for non-group") + return fmt.Errorf("proto: UserDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: DesktopClipboardSend: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UserDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -57406,7 +68376,40 @@ func (m *DesktopClipboardSend) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return 
ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -57433,13 +68436,64 @@ func (m *DesktopClipboardSend) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *UserPasswordChange) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: UserPasswordChange: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: UserPasswordChange: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -57466,15 +68520,15 @@ func (m *DesktopClipboardSend) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DesktopAddr", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -57484,29 +68538,30 @@ func (m *DesktopClipboardSend) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return 
 				ErrInvalidLengthEvents
 			}
-			postIndex := iNdEx + intStringLen
+			postIndex := iNdEx + msglen
 			if postIndex < 0 {
 				return ErrInvalidLengthEvents
 			}
 			if postIndex > l {
 				return io.ErrUnexpectedEOF
 			}
-			m.DesktopAddr = string(dAtA[iNdEx:postIndex])
+			if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+				return err
+			}
 			iNdEx = postIndex
-		case 6:
-			if wireType != 0 {
-				return fmt.Errorf("proto: wrong wireType = %d for field Length", wireType)
+		case 3:
+			if wireType != 2 {
+				return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType)
 			}
-			m.Length = 0
+			var msglen int
 			for shift := uint(0); ; shift += 7 {
 				if shift >= 64 {
 					return ErrIntOverflowEvents
@@ -57516,11 +68571,25 @@ func (m *DesktopClipboardSend) Unmarshal(dAtA []byte) error {
 				}
 				b := dAtA[iNdEx]
 				iNdEx++
-				m.Length |= int32(b&0x7F) << shift
+				msglen |= int(b&0x7F) << shift
 				if b < 0x80 {
 					break
 				}
 			}
+			if msglen < 0 {
+				return ErrInvalidLengthEvents
+			}
+			postIndex := iNdEx + msglen
+			if postIndex < 0 {
+				return ErrInvalidLengthEvents
+			}
+			if postIndex > l {
+				return io.ErrUnexpectedEOF
+			}
+			if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+				return err
+			}
+			iNdEx = postIndex
 		default:
 			iNdEx = preIndex
 			skippy, err := skipEvents(dAtA[iNdEx:])
@@ -57543,7 +68612,7 @@ func (m *DesktopClipboardSend) Unmarshal(dAtA []byte) error {
 	}
 	return nil
 }
-func (m *DesktopSharedDirectoryStart) Unmarshal(dAtA []byte) error {
+func (m *AccessRequestCreate) Unmarshal(dAtA []byte) error {
 	l := len(dAtA)
 	iNdEx := 0
 	for iNdEx < l {
@@ -57566,10 +68635,10 @@ func (m *DesktopSharedDirectoryStart) Unmarshal(dAtA []byte) error {
 		fieldNum := int32(wire >> 3)
 		wireType := int(wire & 0x7)
 		if wireType == 4 {
-			return fmt.Errorf("proto: DesktopSharedDirectoryStart: wiretype end group for non-group")
+			return fmt.Errorf("proto: AccessRequestCreate: wiretype end group for non-group")
 		}
 		if fieldNum <= 0 {
-			return fmt.Errorf("proto: DesktopSharedDirectoryStart: illegal tag %d (wire type %d)", fieldNum, wire)
+			return fmt.Errorf("proto: AccessRequestCreate: illegal tag %d (wire type %d)", fieldNum, wire)
 		}
 		switch fieldNum {
 		case 1:
@@ -57640,7 +68709,7 @@ func (m *DesktopSharedDirectoryStart) Unmarshal(dAtA []byte) error {
 			iNdEx = postIndex
 		case 3:
 			if wireType != 2 {
-				return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType)
+				return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType)
 			}
 			var msglen int
 			for shift := uint(0); ; shift += 7 {
@@ -57667,15 +68736,15 @@ func (m *DesktopSharedDirectoryStart) Unmarshal(dAtA []byte) error {
 			if postIndex > l {
 				return io.ErrUnexpectedEOF
 			}
-			if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+			if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
 				return err
 			}
 			iNdEx = postIndex
 		case 4:
 			if wireType != 2 {
-				return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType)
+				return fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType)
 			}
-			var msglen int
+			var stringLen uint64
 			for shift := uint(0); ; shift += 7 {
 				if shift >= 64 {
 					return ErrIntOverflowEvents
@@ -57685,30 +68754,29 @@ func (m *DesktopSharedDirectoryStart) Unmarshal(dAtA []byte) error {
 				}
 				b := dAtA[iNdEx]
 				iNdEx++
-				msglen |= int(b&0x7F) << shift
+				stringLen |= uint64(b&0x7F) << shift
 				if b < 0x80 {
 					break
 				}
 			}
-			if msglen < 0 {
+			intStringLen := int(stringLen)
+			if intStringLen < 0 {
 				return ErrInvalidLengthEvents
 			}
-			postIndex := iNdEx + msglen
+			postIndex := iNdEx + intStringLen
 			if postIndex < 0 {
 				return ErrInvalidLengthEvents
 			}
 			if postIndex > l {
 				return io.ErrUnexpectedEOF
 			}
-			if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
-				return err
-			}
+			m.Roles = append(m.Roles, string(dAtA[iNdEx:postIndex]))
 			iNdEx = postIndex
 		case 5:
 			if wireType != 2 {
-				return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType)
+				return fmt.Errorf("proto: wrong wireType = %d for field RequestID", wireType)
 			}
-			var msglen int
+			var stringLen uint64
 			for shift := uint(0); ; shift += 7 {
 				if shift >= 64 {
 					return ErrIntOverflowEvents
@@ -57718,28 +68786,27 @@ func (m *DesktopSharedDirectoryStart) Unmarshal(dAtA []byte) error {
 				}
 				b := dAtA[iNdEx]
 				iNdEx++
-				msglen |= int(b&0x7F) << shift
+				stringLen |= uint64(b&0x7F) << shift
 				if b < 0x80 {
 					break
 				}
 			}
-			if msglen < 0 {
+			intStringLen := int(stringLen)
+			if intStringLen < 0 {
 				return ErrInvalidLengthEvents
 			}
-			postIndex := iNdEx + msglen
+			postIndex := iNdEx + intStringLen
 			if postIndex < 0 {
 				return ErrInvalidLengthEvents
 			}
 			if postIndex > l {
 				return io.ErrUnexpectedEOF
 			}
-			if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
-				return err
-			}
+			m.RequestID = string(dAtA[iNdEx:postIndex])
 			iNdEx = postIndex
 		case 6:
 			if wireType != 2 {
-				return fmt.Errorf("proto: wrong wireType = %d for field DesktopAddr", wireType)
+				return fmt.Errorf("proto: wrong wireType = %d for field RequestState", wireType)
 			}
 			var stringLen uint64
 			for shift := uint(0); ; shift += 7 {
@@ -57767,11 +68834,11 @@ func (m *DesktopSharedDirectoryStart) Unmarshal(dAtA []byte) error {
 			if postIndex > l {
 				return io.ErrUnexpectedEOF
 			}
-			m.DesktopAddr = string(dAtA[iNdEx:postIndex])
+			m.RequestState = string(dAtA[iNdEx:postIndex])
 			iNdEx = postIndex
 		case 7:
 			if wireType != 2 {
-				return fmt.Errorf("proto: wrong wireType = %d for field DirectoryName", wireType)
+				return fmt.Errorf("proto: wrong wireType = %d for field Delegator", wireType)
 			}
 			var stringLen uint64
 			for shift := uint(0); ; shift += 7 {
@@ -57799,13 +68866,13 @@ func (m *DesktopSharedDirectoryStart) Unmarshal(dAtA []byte) error {
 			if postIndex > l {
 				return io.ErrUnexpectedEOF
 			}
-			m.DirectoryName = string(dAtA[iNdEx:postIndex])
+			m.Delegator = string(dAtA[iNdEx:postIndex])
 			iNdEx = postIndex
 		case 8:
-			if wireType != 0 {
-				return fmt.Errorf("proto: wrong wireType = %d for field DirectoryID", wireType)
+			if wireType != 2 {
+				return fmt.Errorf("proto: wrong wireType = %d for field Reason", wireType)
} - m.DirectoryID = 0 + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -57815,65 +68882,27 @@ func (m *DesktopSharedDirectoryStart) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.DirectoryID |= uint32(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *DesktopSharedDirectoryRead) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents } - if iNdEx >= l { + if postIndex > l { return io.ErrUnexpectedEOF } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: DesktopSharedDirectoryRead: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: DesktopSharedDirectoryRead: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + m.Reason = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 9: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Annotations", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -57900,15 +68929,18 @@ func 
(m *DesktopSharedDirectoryRead) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.Annotations == nil { + m.Annotations = &Struct{} + } + if err := m.Annotations.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 10: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Reviewer", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -57918,30 +68950,29 @@ func (m *DesktopSharedDirectoryRead) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Reviewer = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: + case 11: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ProposedState", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -57951,28 +68982,27 @@ func (m *DesktopSharedDirectoryRead) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + 
msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.ProposedState = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 12: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RequestedResourceIDs", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -57999,13 +69029,14 @@ func (m *DesktopSharedDirectoryRead) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.RequestedResourceIDs = append(m.RequestedResourceIDs, ResourceID{}) + if err := m.RequestedResourceIDs[len(m.RequestedResourceIDs)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: + case 13: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MaxDuration", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -58032,13 +69063,13 @@ func (m *DesktopSharedDirectoryRead) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.MaxDuration, dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 6: + case 15: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DesktopAddr", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PromotedAccessListName", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -58066,13 +69097,13 @@ func (m *DesktopSharedDirectoryRead) Unmarshal(dAtA []byte) error { if 
postIndex > l { return io.ErrUnexpectedEOF } - m.DesktopAddr = string(dAtA[iNdEx:postIndex]) + m.PromotedAccessListName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 7: + case 16: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DirectoryName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AssumeStartTime", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -58082,46 +69113,31 @@ func (m *DesktopSharedDirectoryRead) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.DirectoryName = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 8: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field DirectoryID", wireType) + if m.AssumeStartTime == nil { + m.AssumeStartTime = new(time.Time) } - m.DirectoryID = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.DirectoryID |= uint32(b&0x7F) << shift - if b < 0x80 { - break - } + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(m.AssumeStartTime, dAtA[iNdEx:postIndex]); err != nil { + return err } - case 9: + iNdEx = postIndex + case 17: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceNames", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -58149,46 +69165,8 @@ func (m *DesktopSharedDirectoryRead) Unmarshal(dAtA 
[]byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Path = string(dAtA[iNdEx:postIndex]) + m.ResourceNames = append(m.ResourceNames, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 10: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Length", wireType) - } - m.Length = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.Length |= uint32(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 11: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Offset", wireType) - } - m.Offset = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.Offset |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -58211,7 +69189,7 @@ func (m *DesktopSharedDirectoryRead) Unmarshal(dAtA []byte) error { } return nil } -func (m *DesktopSharedDirectoryWrite) Unmarshal(dAtA []byte) error { +func (m *AccessRequestExpire) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -58234,10 +69212,10 @@ func (m *DesktopSharedDirectoryWrite) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: DesktopSharedDirectoryWrite: wiretype end group for non-group") + return fmt.Errorf("proto: AccessRequestExpire: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: DesktopSharedDirectoryWrite: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AccessRequestExpire: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -58275,7 +69253,7 @@ func (m *DesktopSharedDirectoryWrite) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 
2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -58302,15 +69280,15 @@ func (m *DesktopSharedDirectoryWrite) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RequestID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -58320,28 +69298,27 @@ func (m *DesktopSharedDirectoryWrite) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.RequestID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceExpiry", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -58368,15 +69345,69 @@ func (m *DesktopSharedDirectoryWrite) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := 
m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.ResourceExpiry == nil { + m.ResourceExpiry = new(time.Time) + } + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(m.ResourceExpiry, dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ResourceID) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ResourceID: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ResourceID: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ClusterName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -58386,28 +69417,27 @@ func (m *DesktopSharedDirectoryWrite) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return 
ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.ClusterName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 6: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DesktopAddr", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Kind", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -58435,11 +69465,11 @@ func (m *DesktopSharedDirectoryWrite) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.DesktopAddr = string(dAtA[iNdEx:postIndex]) + m.Kind = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 7: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DirectoryName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -58467,30 +69497,11 @@ func (m *DesktopSharedDirectoryWrite) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.DirectoryName = string(dAtA[iNdEx:postIndex]) + m.Name = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 8: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field DirectoryID", wireType) - } - m.DirectoryID = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.DirectoryID |= uint32(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 9: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SubResourceName", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 
{ @@ -58518,46 +69529,8 @@ func (m *DesktopSharedDirectoryWrite) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Path = string(dAtA[iNdEx:postIndex]) + m.SubResourceName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 10: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Length", wireType) - } - m.Length = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.Length |= uint32(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 11: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Offset", wireType) - } - m.Offset = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.Offset |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -58580,7 +69553,7 @@ func (m *DesktopSharedDirectoryWrite) Unmarshal(dAtA []byte) error { } return nil } -func (m *SessionReject) Unmarshal(dAtA []byte) error { +func (m *AccessRequestDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -58603,10 +69576,10 @@ func (m *SessionReject) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SessionReject: wiretype end group for non-group") + return fmt.Errorf("proto: AccessRequestDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SessionReject: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AccessRequestDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -58677,7 +69650,90 @@ func (m *SessionReject) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if 
wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RequestID", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.RequestID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+			iNdEx += skippy
+		}
+	}
+
+	if iNdEx > l {
+		return io.ErrUnexpectedEOF
+	}
+	return nil
+}
+func (m *PortForward) Unmarshal(dAtA []byte) error {
+	l := len(dAtA)
+	iNdEx := 0
+	for iNdEx < l {
+		preIndex := iNdEx
+		var wire uint64
+		for shift := uint(0); ; shift += 7 {
+			if shift >= 64 {
+				return ErrIntOverflowEvents
+			}
+			if iNdEx >= l {
+				return io.ErrUnexpectedEOF
+			}
+			b := dAtA[iNdEx]
+			iNdEx++
+			wire |= uint64(b&0x7F) << shift
+			if b < 0x80 {
+				break
+			}
+		}
+		fieldNum := int32(wire >> 3)
+		wireType := int(wire & 0x7)
+		if wireType == 4 {
+			return fmt.Errorf("proto: PortForward: wiretype end group for non-group")
+		}
+		if fieldNum <= 0 {
+			return fmt.Errorf("proto: PortForward: illegal tag %d (wire type %d)", fieldNum, wire)
+		}
+		switch fieldNum {
+		case 1:
+			if wireType != 2 {
+				return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType)
 			}
 			var msglen int
 			for shift := uint(0); ; shift += 7 {
@@ -58704,13 +69760,13 @@ func (m *SessionReject) Unmarshal(dAtA []byte) error {
 			if postIndex > l {
 				return io.ErrUnexpectedEOF
 			}
-			if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+			if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
 				return err
 			}
 			iNdEx = postIndex
-		case 4:
+		case 2:
 			if wireType != 2 {
-				return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType)
+				return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType)
 			}
 			var msglen int
 			for shift := uint(0); ; shift += 7 {
@@ -58737,15 +69793,15 @@ func (m *SessionReject) Unmarshal(dAtA []byte) error {
 			if postIndex > l {
 				return io.ErrUnexpectedEOF
 			}
-			if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+			if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
 				return err
 			}
 			iNdEx = postIndex
-		case 5:
+		case 3:
 			if wireType != 2 {
-				return fmt.Errorf("proto: wrong wireType = %d for field Reason", wireType)
+				return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType)
 			}
-			var stringLen uint64
+			var msglen int
 			for shift := uint(0); ; shift += 7 {
 				if shift >= 64 {
 					return ErrIntOverflowEvents
@@ -58755,29 +69811,30 @@ func (m *SessionReject) Unmarshal(dAtA []byte) error {
 				}
 				b := dAtA[iNdEx]
 				iNdEx++
-				stringLen |= uint64(b&0x7F) << shift
+				msglen |= int(b&0x7F) << shift
 				if b < 0x80 {
 					break
 				}
 			}
-			intStringLen := int(stringLen)
-			if intStringLen < 0 {
+			if msglen < 0 {
 				return ErrInvalidLengthEvents
 			}
-			postIndex := iNdEx + intStringLen
+			postIndex := iNdEx + msglen
 			if postIndex < 0 {
 				return ErrInvalidLengthEvents
 			}
 			if postIndex > l {
 				return io.ErrUnexpectedEOF
 			}
-			m.Reason = string(dAtA[iNdEx:postIndex])
+			if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+				return err
+			}
 			iNdEx = postIndex
-		case 6:
-			if wireType != 0 {
-				return fmt.Errorf("proto: wrong wireType = %d for field Maximum", wireType)
+		case 4:
+			if wireType != 2 {
+				return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType)
 			}
-			m.Maximum = 0
+			var msglen int
 			for shift := uint(0); ; shift += 7 {
 				if shift >= 64 {
 					return ErrIntOverflowEvents
@@ -58787,67 +69844,30 @@ func (m *SessionReject) Unmarshal(dAtA []byte) error {
 				}
 				b := dAtA[iNdEx]
 				iNdEx++
-				m.Maximum |= int64(b&0x7F) << shift
+				msglen |= int(b&0x7F) << shift
 				if b < 0x80 {
 					break
 				}
 			}
-		default:
-			iNdEx = preIndex
-			skippy, err := skipEvents(dAtA[iNdEx:])
-			if err != nil {
-				return err
-			}
-			if (skippy < 0) || (iNdEx+skippy) < 0 {
+			if msglen < 0 {
 				return ErrInvalidLengthEvents
 			}
-			if (iNdEx + skippy) > l {
-				return io.ErrUnexpectedEOF
-			}
-			m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *SessionConnect) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents } - if iNdEx >= l { + if postIndex > l { return io.ErrUnexpectedEOF } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: SessionConnect: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: SessionConnect: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + iNdEx = postIndex + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Addr", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -58857,28 +69877,27 @@ func (m *SessionConnect) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Addr = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = 
%d for field ServerMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesClusterMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -58905,13 +69924,13 @@ func (m *SessionConnect) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.KubernetesClusterMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 3: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesPodMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -58938,7 +69957,7 @@ func (m *SessionConnect) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.KubernetesPodMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -58964,7 +69983,7 @@ func (m *SessionConnect) Unmarshal(dAtA []byte) error { } return nil } -func (m *FileTransferRequestEvent) Unmarshal(dAtA []byte) error { +func (m *X11Forward) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -58987,10 +70006,10 @@ func (m *FileTransferRequestEvent) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: FileTransferRequestEvent: wiretype end group for non-group") + return fmt.Errorf("proto: X11Forward: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: FileTransferRequestEvent: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: X11Forward: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -59028,7 +70047,7 @@ func (m *FileTransferRequestEvent) 
Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -59055,15 +70074,15 @@ func (m *FileTransferRequestEvent) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RequestID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -59073,29 +70092,30 @@ func (m *FileTransferRequestEvent) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.RequestID = string(dAtA[iNdEx:postIndex]) + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Approvers", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -59105,27 +70125,79 @@ func (m *FileTransferRequestEvent) Unmarshal(dAtA []byte) error { } b := 
dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Approvers = append(m.Approvers, string(dAtA[iNdEx:postIndex])) + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 5: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *CommandMetadata) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: CommandMetadata: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: CommandMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Requester", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Command", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -59153,11 +70225,11 @@ func (m *FileTransferRequestEvent) Unmarshal(dAtA 
[]byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Requester = string(dAtA[iNdEx:postIndex]) + m.Command = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 6: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Location", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ExitCode", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -59185,31 +70257,11 @@ func (m *FileTransferRequestEvent) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Location = string(dAtA[iNdEx:postIndex]) + m.ExitCode = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 7: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Download", wireType) - } - var v int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - v |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - m.Download = bool(v != 0) - case 8: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Filename", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Error", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -59237,7 +70289,7 @@ func (m *FileTransferRequestEvent) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Filename = string(dAtA[iNdEx:postIndex]) + m.Error = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -59261,7 +70313,7 @@ func (m *FileTransferRequestEvent) Unmarshal(dAtA []byte) error { } return nil } -func (m *Resize) Unmarshal(dAtA []byte) error { +func (m *Exec) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -59284,10 +70336,10 @@ func (m *Resize) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return 
fmt.Errorf("proto: Resize: wiretype end group for non-group") + return fmt.Errorf("proto: Exec: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: Resize: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: Exec: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -59358,7 +70410,7 @@ func (m *Resize) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -59385,13 +70437,13 @@ func (m *Resize) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -59418,7 +70470,7 @@ func (m *Resize) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -59457,9 +70509,9 @@ func (m *Resize) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TerminalSize", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CommandMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -59469,23 +70521,24 
@@ func (m *Resize) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.TerminalSize = string(dAtA[iNdEx:postIndex]) + if err := m.CommandMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 7: if wireType != 2 { @@ -59575,7 +70628,7 @@ func (m *Resize) Unmarshal(dAtA []byte) error { } return nil } -func (m *SessionEnd) Unmarshal(dAtA []byte) error { +func (m *SCP) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -59598,10 +70651,10 @@ func (m *SessionEnd) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SessionEnd: wiretype end group for non-group") + return fmt.Errorf("proto: SCP: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SessionEnd: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SCP: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -59672,7 +70725,7 @@ func (m *SessionEnd) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -59699,13 +70752,13 @@ func (m *SessionEnd) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); 
err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -59732,7 +70785,7 @@ func (m *SessionEnd) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -59770,10 +70823,10 @@ func (m *SessionEnd) Unmarshal(dAtA []byte) error { } iNdEx = postIndex case 6: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field EnhancedRecording", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field CommandMetadata", wireType) } - var v int + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -59783,17 +70836,30 @@ func (m *SessionEnd) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - v |= int(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - m.EnhancedRecording = bool(v != 0) + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.CommandMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex case 7: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Interactive", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) } - var v int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -59803,15 +70869,27 @@ func (m *SessionEnd) Unmarshal(dAtA []byte) error { } b := 
dAtA[iNdEx] iNdEx++ - v |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - m.Interactive = bool(v != 0) + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Path = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Participants", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Action", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -59839,11 +70917,62 @@ func (m *SessionEnd) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Participants = append(m.Participants, string(dAtA[iNdEx:postIndex])) + m.Action = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 9: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *SFTPAttributes) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SFTPAttributes: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SFTPAttributes: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field StartTime", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field FileSize", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -59870,13 +70999,16 @@ func (m *SessionEnd) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.StartTime, dAtA[iNdEx:postIndex]); err != nil { + if m.FileSize == nil { + m.FileSize = new(uint64) + } + if err := github_com_gogo_protobuf_types.StdUInt64Unmarshal(m.FileSize, dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 10: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field EndTime", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UID", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -59903,13 +71035,16 @@ func (m *SessionEnd) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.EndTime, dAtA[iNdEx:postIndex]); err != nil { + if m.UID == nil { + m.UID = new(uint32) + } + if err := 
github_com_gogo_protobuf_types.StdUInt32Unmarshal(m.UID, dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 11: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubernetesClusterMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field GID", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -59936,13 +71071,16 @@ func (m *SessionEnd) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.KubernetesClusterMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.GID == nil { + m.GID = new(uint32) + } + if err := github_com_gogo_protobuf_types.StdUInt32Unmarshal(m.GID, dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 12: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubernetesPodMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Permissions", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -59969,15 +71107,18 @@ func (m *SessionEnd) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.KubernetesPodMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.Permissions == nil { + m.Permissions = new(uint32) + } + if err := github_com_gogo_protobuf_types.StdUInt32Unmarshal(m.Permissions, dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 13: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field InitialCommand", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AccessTime", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -59987,29 +71128,33 @@ func (m *SessionEnd) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } 
- intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.InitialCommand = append(m.InitialCommand, string(dAtA[iNdEx:postIndex])) + if m.AccessTime == nil { + m.AccessTime = new(time.Time) + } + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(m.AccessTime, dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 14: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionRecording", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ModificationTime", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -60019,23 +71164,27 @@ func (m *SessionEnd) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.SessionRecording = string(dAtA[iNdEx:postIndex]) + if m.ModificationTime == nil { + m.ModificationTime = new(time.Time) + } + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(m.ModificationTime, dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -60059,7 +71208,7 @@ func (m *SessionEnd) Unmarshal(dAtA []byte) error { } return nil } -func (m *BPFMetadata) Unmarshal(dAtA []byte) error { +func (m *SFTP) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -60082,17 +71231,17 @@ func (m *BPFMetadata) Unmarshal(dAtA []byte) error { fieldNum := 
int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: BPFMetadata: wiretype end group for non-group") + return fmt.Errorf("proto: SFTP: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: BPFMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SFTP: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field PID", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - m.PID = 0 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -60102,35 +71251,30 @@ func (m *BPFMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.PID |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - case 2: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field CgroupID", wireType) + if msglen < 0 { + return ErrInvalidLengthEvents } - m.CgroupID = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.CgroupID |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents } - case 3: + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Program", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -60140,100 +71284,30 @@ func (m *BPFMetadata) Unmarshal(dAtA []byte) 
error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Program = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *Status) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: Status: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: Status: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Success", wireType) - } - var v int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - v |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - m.Success = bool(v != 0) 
- case 2: + iNdEx = postIndex + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Error", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -60243,29 +71317,30 @@ func (m *Status) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Error = string(dAtA[iNdEx:postIndex]) + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 3: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMessage", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -60275,78 +71350,28 @@ func (m *Status) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.UserMessage = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err 
!= nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *SessionCommand) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: SessionCommand: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: SessionCommand: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + iNdEx = postIndex + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -60373,15 +71398,15 @@ func (m *SessionCommand) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field WorkingDirectory", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -60391,30 +71416,29 @@ func (m *SessionCommand) Unmarshal(dAtA 
[]byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.WorkingDirectory = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -60424,30 +71448,29 @@ func (m *SessionCommand) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Path = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TargetPath", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -60457,28 +71480,46 @@ func (m *SessionCommand) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + 
stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.TargetPath = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 5: + case 9: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Flags", wireType) + } + m.Flags = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Flags |= uint32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 10: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field BPFMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Attributes", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -60505,34 +71546,18 @@ func (m *SessionCommand) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.BPFMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.Attributes == nil { + m.Attributes = &SFTPAttributes{} + } + if err := m.Attributes.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 6: + case 11: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field PPID", wireType) - } - m.PPID = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.PPID |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 7: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) + return fmt.Errorf("proto: wrong 
wireType = %d for field Action", wireType) } - var stringLen uint64 + m.Action = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -60542,27 +71567,14 @@ func (m *SessionCommand) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + m.Action |= SFTPAction(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Path = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 8: + case 12: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Argv", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Error", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -60590,27 +71602,8 @@ func (m *SessionCommand) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Argv = append(m.Argv, string(dAtA[iNdEx:postIndex])) + m.Error = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 9: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field ReturnCode", wireType) - } - m.ReturnCode = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.ReturnCode |= int32(b&0x7F) << shift - if b < 0x80 { - break - } - } default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -60633,7 +71626,7 @@ func (m *SessionCommand) Unmarshal(dAtA []byte) error { } return nil } -func (m *SessionDisk) Unmarshal(dAtA []byte) error { +func (m *SFTPSummary) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -60656,10 +71649,10 @@ func (m *SessionDisk) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 
0x7) if wireType == 4 { - return fmt.Errorf("proto: SessionDisk: wiretype end group for non-group") + return fmt.Errorf("proto: SFTPSummary: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SessionDisk: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SFTPSummary: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -60729,6 +71722,39 @@ func (m *SessionDisk) Unmarshal(dAtA []byte) error { } iNdEx = postIndex case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } @@ -60761,7 +71787,7 @@ func (m *SessionDisk) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 4: + case 5: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) } @@ -60794,9 +71820,9 @@ func (m *SessionDisk) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 5: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field BPFMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field FileTransferStats", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -60823,11 +71849,63 @@ func (m *SessionDisk) Unmarshal(dAtA []byte) error { if postIndex 
> l { return io.ErrUnexpectedEOF } - if err := m.BPFMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.FileTransferStats = append(m.FileTransferStats, &FileTransferStat{}) + if err := m.FileTransferStats[len(m.FileTransferStats)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 6: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *FileTransferStat) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: FileTransferStat: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: FileTransferStat: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) } @@ -60859,11 +71937,11 @@ func (m *SessionDisk) Unmarshal(dAtA []byte) error { } m.Path = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 7: + case 2: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Flags", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field BytesRead", wireType) } - m.Flags = 0 + m.BytesRead = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ 
-60873,16 +71951,16 @@ func (m *SessionDisk) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.Flags |= int32(b&0x7F) << shift + m.BytesRead |= uint64(b&0x7F) << shift if b < 0x80 { break } } - case 8: + case 3: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field ReturnCode", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field BytesWritten", wireType) } - m.ReturnCode = 0 + m.BytesWritten = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -60892,7 +71970,7 @@ func (m *SessionDisk) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.ReturnCode |= int32(b&0x7F) << shift + m.BytesWritten |= uint64(b&0x7F) << shift if b < 0x80 { break } @@ -60919,7 +71997,7 @@ func (m *SessionDisk) Unmarshal(dAtA []byte) error { } return nil } -func (m *SessionNetwork) Unmarshal(dAtA []byte) error { +func (m *Subsystem) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -60942,10 +72020,10 @@ func (m *SessionNetwork) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SessionNetwork: wiretype end group for non-group") + return fmt.Errorf("proto: Subsystem: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SessionNetwork: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: Subsystem: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -61016,7 +72094,7 @@ func (m *SessionNetwork) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -61043,15 +72121,15 @@ func (m *SessionNetwork) Unmarshal(dAtA []byte) error { if postIndex > l { return 
io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -61061,61 +72139,27 @@ func (m *SessionNetwork) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Name = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field BPFMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.BPFMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 6: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SrcAddr", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Error", 
wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -61143,13 +72187,13 @@ func (m *SessionNetwork) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.SrcAddr = string(dAtA[iNdEx:postIndex]) + m.Error = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 7: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DstAddr", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -61159,100 +72203,25 @@ func (m *SessionNetwork) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.DstAddr = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 8: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field DstPort", wireType) - } - m.DstPort = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.DstPort |= int32(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 9: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field TCPVersion", wireType) - } - m.TCPVersion = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.TCPVersion |= int32(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 10: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = 
%d for field Operation", wireType) - } - m.Operation = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.Operation |= SessionNetwork_NetworkOperation(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 11: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Action", wireType) - } - m.Action = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.Action |= EventAction(b&0x7F) << shift - if b < 0x80 { - break - } + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -61275,7 +72244,7 @@ func (m *SessionNetwork) Unmarshal(dAtA []byte) error { } return nil } -func (m *SessionData) Unmarshal(dAtA []byte) error { +func (m *ClientDisconnect) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -61298,10 +72267,10 @@ func (m *SessionData) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SessionData: wiretype end group for non-group") + return fmt.Errorf("proto: ClientDisconnect: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SessionData: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ClientDisconnect: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -61372,7 +72341,7 @@ func (m *SessionData) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := 
uint(0); ; shift += 7 { @@ -61399,7 +72368,7 @@ func (m *SessionData) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -61438,9 +72407,9 @@ func (m *SessionData) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Reason", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -61450,63 +72419,24 @@ func (m *SessionData) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Reason = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 6: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field BytesTransmitted", wireType) - } - m.BytesTransmitted = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.BytesTransmitted |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 7: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field BytesReceived", wireType) - } - m.BytesReceived = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return 
ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.BytesReceived |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -61529,7 +72459,7 @@ func (m *SessionData) Unmarshal(dAtA []byte) error { } return nil } -func (m *SessionLeave) Unmarshal(dAtA []byte) error { +func (m *AuthAttempt) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -61552,10 +72482,10 @@ func (m *SessionLeave) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SessionLeave: wiretype end group for non-group") + return fmt.Errorf("proto: AuthAttempt: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SessionLeave: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AuthAttempt: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -61620,46 +72550,46 @@ func (m *SessionLeave) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return 
err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -61686,13 +72616,13 @@ func (m *SessionLeave) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -61719,7 +72649,7 @@ func (m *SessionLeave) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -61745,7 +72675,7 @@ func (m *SessionLeave) Unmarshal(dAtA []byte) error { } return nil } -func (m *UserLogin) Unmarshal(dAtA []byte) error { +func (m *UserTokenCreate) 
Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -61768,10 +72698,10 @@ func (m *UserLogin) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UserLogin: wiretype end group for non-group") + return fmt.Errorf("proto: UserTokenCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UserLogin: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UserTokenCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -61809,7 +72739,7 @@ func (m *UserLogin) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -61836,13 +72766,13 @@ func (m *UserLogin) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -61869,45 +72799,64 @@ func (m *UserLogin) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Method", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - 
return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err } - intStringLen := int(stringLen) - if intStringLen < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF } - if postIndex > l { + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *RoleCreate) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { return io.ErrUnexpectedEOF } - m.Method = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 5: + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: RoleCreate: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: RoleCreate: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field IdentityAttributes", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -61934,16 +72883,13 @@ func (m *UserLogin) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.IdentityAttributes == nil { - m.IdentityAttributes = &Struct{} - } - if err := 
m.IdentityAttributes.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 6: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MFADevice", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -61970,16 +72916,13 @@ func (m *UserLogin) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.MFADevice == nil { - m.MFADevice = &MFADeviceMetadata{} - } - if err := m.MFADevice.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 7: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ClientMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -62006,11 +72949,11 @@ func (m *UserLogin) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ClientMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 8: + case 4: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } @@ -62043,70 +72986,6 @@ func (m *UserLogin) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 9: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AppliedLoginRules", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - 
intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.AppliedLoginRules = append(m.AppliedLoginRules, string(dAtA[iNdEx:postIndex])) - iNdEx = postIndex - case 10: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectorID", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.ConnectorID = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -62129,7 +73008,7 @@ func (m *UserLogin) Unmarshal(dAtA []byte) error { } return nil } -func (m *CreateMFAAuthChallenge) Unmarshal(dAtA []byte) error { +func (m *RoleUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -62152,10 +73031,10 @@ func (m *CreateMFAAuthChallenge) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: CreateMFAAuthChallenge: wiretype end group for non-group") + return fmt.Errorf("proto: RoleUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: CreateMFAAuthChallenge: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: RoleUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -62192,6 +73071,39 @@ func (m *CreateMFAAuthChallenge) Unmarshal(dAtA []byte) 
error { } iNdEx = postIndex case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } @@ -62224,11 +73136,11 @@ func (m *CreateMFAAuthChallenge) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 3: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ChallengeScope", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -62238,44 +73150,25 @@ func (m *CreateMFAAuthChallenge) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.ChallengeScope = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 4: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field ChallengeAllowReuse", wireType) - } - var v int - for shift := uint(0); 
; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - v |= int(b&0x7F) << shift - if b < 0x80 { - break - } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - m.ChallengeAllowReuse = bool(v != 0) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -62298,7 +73191,7 @@ func (m *CreateMFAAuthChallenge) Unmarshal(dAtA []byte) error { } return nil } -func (m *ValidateMFAAuthResponse) Unmarshal(dAtA []byte) error { +func (m *RoleDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -62321,10 +73214,10 @@ func (m *ValidateMFAAuthResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ValidateMFAAuthResponse: wiretype end group for non-group") + return fmt.Errorf("proto: RoleDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ValidateMFAAuthResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: RoleDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -62362,7 +73255,7 @@ func (m *ValidateMFAAuthResponse) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -62389,13 +73282,13 @@ func (m *ValidateMFAAuthResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong 
wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -62422,13 +73315,13 @@ func (m *ValidateMFAAuthResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MFADevice", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -62455,18 +73348,99 @@ func (m *ValidateMFAAuthResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.MFADevice == nil { - m.MFADevice = &MFADeviceMetadata{} + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - if err := m.MFADevice.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *BotCreate) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: BotCreate: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: BotCreate: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ChallengeScope", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -62476,29 +73450,30 @@ func (m *ValidateMFAAuthResponse) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := 
int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.ChallengeScope = string(dAtA[iNdEx:postIndex]) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 6: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field ChallengeAllowReuse", wireType) + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var v int + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -62508,12 +73483,25 @@ func (m *ValidateMFAAuthResponse) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - v |= int(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - m.ChallengeAllowReuse = bool(v != 0) + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -62536,7 +73524,7 @@ func (m *ValidateMFAAuthResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *ResourceMetadata) Unmarshal(dAtA []byte) error { +func (m *BotUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -62559,17 +73547,17 @@ func (m *ResourceMetadata) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ResourceMetadata: wiretype end group for non-group") + return fmt.Errorf("proto: BotUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return 
fmt.Errorf("proto: ResourceMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: BotUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -62579,27 +73567,28 @@ func (m *ResourceMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Name = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Expires", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -62626,15 +73615,15 @@ func (m *ResourceMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.Expires, dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UpdatedBy", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 
64 { return ErrIntOverflowEvents @@ -62644,55 +73633,24 @@ func (m *ResourceMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.UpdatedBy = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TTL", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - m.TTL = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -62716,7 +73674,7 @@ func (m *ResourceMetadata) Unmarshal(dAtA []byte) error { } return nil } -func (m *UserCreate) Unmarshal(dAtA []byte) error { +func (m *BotDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -62739,10 +73697,10 @@ func (m *UserCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UserCreate: wiretype end group for non-group") + return fmt.Errorf("proto: BotDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UserCreate: illegal tag %d (wire type 
%d)", fieldNum, wire) + return fmt.Errorf("proto: BotDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -62780,7 +73738,7 @@ func (m *UserCreate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -62807,13 +73765,13 @@ func (m *UserCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -62840,15 +73798,66 @@ func (m *UserCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *TrustedClusterCreate) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: TrustedClusterCreate: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: TrustedClusterCreate: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -62858,29 +73867,30 @@ func (m *UserCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Roles = append(m.Roles, string(dAtA[iNdEx:postIndex])) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 5: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Connector", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - var stringLen uint64 + var msglen int for shift 
:= uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -62890,25 +73900,59 @@ func (m *UserCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Connector = string(dAtA[iNdEx:postIndex]) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 6: + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } @@ -62963,7 +74007,7 @@ func (m *UserCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *UserUpdate) Unmarshal(dAtA []byte) error { +func (m *TrustedClusterDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -62986,10 +74030,10 @@ func (m *UserUpdate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UserUpdate: wiretype end group for non-group") + return 
fmt.Errorf("proto: TrustedClusterDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UserUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: TrustedClusterDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -63027,7 +74071,7 @@ func (m *UserUpdate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -63054,13 +74098,13 @@ func (m *UserUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -63087,75 +74131,11 @@ func (m *UserUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - 
postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Roles = append(m.Roles, string(dAtA[iNdEx:postIndex])) - iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Connector", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Connector = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 6: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } @@ -63210,7 +74190,7 @@ func (m *UserUpdate) Unmarshal(dAtA []byte) error { } return nil } -func (m *UserDelete) Unmarshal(dAtA []byte) error { +func (m *ProvisionTokenCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -63233,10 +74213,10 @@ func (m *UserDelete) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UserDelete: wiretype end group for non-group") + return fmt.Errorf("proto: ProvisionTokenCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UserDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ProvisionTokenCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -63274,7 +74254,7 @@ func (m *UserDelete) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for 
field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -63301,13 +74281,13 @@ func (m *UserDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -63334,15 +74314,15 @@ func (m *UserDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -63352,24 +74332,55 @@ func (m *UserDelete) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + m.Roles = append(m.Roles, 
github_com_gravitational_teleport_api_types.SystemRole(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field JoinMethod", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.JoinMethod = github_com_gravitational_teleport_api_types.JoinMethod(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -63393,7 +74404,7 @@ func (m *UserDelete) Unmarshal(dAtA []byte) error { } return nil } -func (m *UserPasswordChange) Unmarshal(dAtA []byte) error { +func (m *TrustedClusterTokenCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -63416,10 +74427,10 @@ func (m *UserPasswordChange) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UserPasswordChange: wiretype end group for non-group") + return fmt.Errorf("proto: TrustedClusterTokenCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UserPasswordChange: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: TrustedClusterTokenCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -63457,7 +74468,7 @@ func (m *UserPasswordChange) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", 
wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -63484,13 +74495,13 @@ func (m *UserPasswordChange) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -63517,7 +74528,7 @@ func (m *UserPasswordChange) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -63543,7 +74554,7 @@ func (m *UserPasswordChange) Unmarshal(dAtA []byte) error { } return nil } -func (m *AccessRequestCreate) Unmarshal(dAtA []byte) error { +func (m *GithubConnectorCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -63566,10 +74577,10 @@ func (m *AccessRequestCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AccessRequestCreate: wiretype end group for non-group") + return fmt.Errorf("proto: GithubConnectorCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AccessRequestCreate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: GithubConnectorCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -63607,7 +74618,7 @@ func (m *AccessRequestCreate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", 
wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -63634,13 +74645,13 @@ func (m *AccessRequestCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -63667,15 +74678,15 @@ func (m *AccessRequestCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -63685,93 +74696,81 @@ func (m *AccessRequestCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Roles = append(m.Roles, string(dAtA[iNdEx:postIndex])) - iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for 
field RequestID", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err } - postIndex := iNdEx + intStringLen - if postIndex < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - if postIndex > l { + if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.RequestID = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 6: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RequestState", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *GithubConnectorUpdate) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents } - if postIndex > l { + if iNdEx >= l { return io.ErrUnexpectedEOF } - m.RequestState = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 7: + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: GithubConnectorUpdate: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: GithubConnectorUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Delegator", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -63781,29 +74780,30 @@ func (m *AccessRequestCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Delegator = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 8: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Reason", wireType) + return fmt.Errorf("proto: wrong wireType = 
%d for field ResourceMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -63813,27 +74813,28 @@ func (m *AccessRequestCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Reason = string(dAtA[iNdEx:postIndex]) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 9: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Annotations", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -63860,18 +74861,15 @@ func (m *AccessRequestCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Annotations == nil { - m.Annotations = &Struct{} - } - if err := m.Annotations.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 10: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Reviewer", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -63881,59 +74879,79 @@ func (m *AccessRequestCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := 
int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Reviewer = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 11: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ProposedState", wireType) + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err } - intStringLen := int(stringLen) - if intStringLen < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF } - if postIndex > l { + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *GithubConnectorDelete) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { return io.ErrUnexpectedEOF } - m.ProposedState = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 12: + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: GithubConnectorDelete: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: GithubConnectorDelete: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RequestedResourceIDs", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -63960,14 +74978,13 @@ func (m *AccessRequestCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.RequestedResourceIDs = append(m.RequestedResourceIDs, ResourceID{}) - if err := m.RequestedResourceIDs[len(m.RequestedResourceIDs)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 13: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MaxDuration", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -63994,45 +75011,13 @@ func (m *AccessRequestCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := 
github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.MaxDuration, dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 15: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PromotedAccessListName", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.PromotedAccessListName = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 16: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AssumeStartTime", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -64059,18 +75044,15 @@ func (m *AccessRequestCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.AssumeStartTime == nil { - m.AssumeStartTime = new(time.Time) - } - if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(m.AssumeStartTime, dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 17: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceNames", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -64080,23 +75062,24 @@ func (m 
*AccessRequestCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.ResourceNames = append(m.ResourceNames, string(dAtA[iNdEx:postIndex])) + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -64120,7 +75103,7 @@ func (m *AccessRequestCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *AccessRequestExpire) Unmarshal(dAtA []byte) error { +func (m *OIDCConnectorCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -64143,10 +75126,10 @@ func (m *AccessRequestExpire) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AccessRequestExpire: wiretype end group for non-group") + return fmt.Errorf("proto: OIDCConnectorCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AccessRequestExpire: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: OIDCConnectorCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -64217,39 +75200,7 @@ func (m *AccessRequestExpire) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RequestID", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen 
:= int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.RequestID = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceExpiry", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -64276,10 +75227,7 @@ func (m *AccessRequestExpire) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.ResourceExpiry == nil { - m.ResourceExpiry = new(time.Time) - } - if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(m.ResourceExpiry, dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -64305,7 +75253,7 @@ func (m *AccessRequestExpire) Unmarshal(dAtA []byte) error { } return nil } -func (m *ResourceID) Unmarshal(dAtA []byte) error { +func (m *OIDCConnectorUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -64328,17 +75276,17 @@ func (m *ResourceID) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ResourceID: wiretype end group for non-group") + return fmt.Errorf("proto: OIDCConnectorUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ResourceID: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: OIDCConnectorUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ClusterName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for 
shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -64348,29 +75296,30 @@ func (m *ResourceID) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.ClusterName = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Kind", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -64380,29 +75329,30 @@ func (m *ResourceID) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Kind = string(dAtA[iNdEx:postIndex]) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -64412,55 +75362,24 @@ func (m *ResourceID) 
Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Name = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SubResourceName", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - m.SubResourceName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -64484,7 +75403,7 @@ func (m *ResourceID) Unmarshal(dAtA []byte) error { } return nil } -func (m *AccessRequestDelete) Unmarshal(dAtA []byte) error { +func (m *OIDCConnectorDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -64507,10 +75426,10 @@ func (m *AccessRequestDelete) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AccessRequestDelete: wiretype end group for non-group") + return fmt.Errorf("proto: OIDCConnectorDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AccessRequestDelete: illegal tag %d (wire type %d)", fieldNum, 
wire) + return fmt.Errorf("proto: OIDCConnectorDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -64548,7 +75467,7 @@ func (m *AccessRequestDelete) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -64575,15 +75494,15 @@ func (m *AccessRequestDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RequestID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -64593,23 +75512,24 @@ func (m *AccessRequestDelete) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.RequestID = string(dAtA[iNdEx:postIndex]) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -64633,7 +75553,7 @@ func (m *AccessRequestDelete) Unmarshal(dAtA []byte) error { } return nil } -func (m *PortForward) Unmarshal(dAtA []byte) error { +func (m *SAMLConnectorCreate) Unmarshal(dAtA []byte) error { l 
:= len(dAtA) iNdEx := 0 for iNdEx < l { @@ -64656,10 +75576,10 @@ func (m *PortForward) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: PortForward: wiretype end group for non-group") + return fmt.Errorf("proto: SAMLConnectorCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: PortForward: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SAMLConnectorCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -64691,11 +75611,44 @@ func (m *PortForward) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 3: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } @@ -64728,9 +75681,9 @@ func (m *PortForward) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 3: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field 
Connector", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -64757,13 +75710,67 @@ func (m *PortForward) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.Connector == nil { + m.Connector = &types1.SAMLConnectorV2{} + } + if err := m.Connector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *SAMLConnectorUpdate) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SAMLConnectorUpdate: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SAMLConnectorUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -64790,15 +75797,15 @@ func (m *PortForward) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := 
m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Addr", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -64808,27 +75815,28 @@ func (m *PortForward) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Addr = string(dAtA[iNdEx:postIndex]) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 6: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubernetesClusterMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -64855,13 +75863,13 @@ func (m *PortForward) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.KubernetesClusterMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 7: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubernetesPodMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Connector", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -64888,7 +75896,10 @@ func (m 
*PortForward) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.KubernetesPodMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.Connector == nil { + m.Connector = &types1.SAMLConnectorV2{} + } + if err := m.Connector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -64914,7 +75925,7 @@ func (m *PortForward) Unmarshal(dAtA []byte) error { } return nil } -func (m *X11Forward) Unmarshal(dAtA []byte) error { +func (m *SAMLConnectorDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -64937,10 +75948,10 @@ func (m *X11Forward) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: X11Forward: wiretype end group for non-group") + return fmt.Errorf("proto: SAMLConnectorDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: X11Forward: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SAMLConnectorDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -64978,7 +75989,7 @@ func (m *X11Forward) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -65005,46 +76016,13 @@ func (m *X11Forward) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - 
return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -65071,7 +76049,7 @@ func (m *X11Forward) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -65097,7 +76075,7 @@ func (m *X11Forward) Unmarshal(dAtA []byte) error { } return nil } -func (m *CommandMetadata) Unmarshal(dAtA []byte) error { +func (m *KubeRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -65120,17 +76098,17 @@ func (m *CommandMetadata) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: CommandMetadata: wiretype end group for non-group") + return fmt.Errorf("proto: KubeRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: CommandMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: KubeRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Command", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } 
- var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -65140,29 +76118,30 @@ func (m *CommandMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Command = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ExitCode", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -65172,29 +76151,30 @@ func (m *CommandMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.ExitCode = string(dAtA[iNdEx:postIndex]) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Error", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return 
ErrIntOverflowEvents @@ -65204,80 +76184,95 @@ func (m *CommandMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Error = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { return ErrInvalidLengthEvents } - if (iNdEx + skippy) > l { + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *Exec) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - if iNdEx >= l { - return io.ErrUnexpectedEOF + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field RequestPath", wireType) } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: Exec: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: Exec: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.RequestPath = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Verb", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -65287,30 +76282,29 @@ func (m *Exec) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= 
uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Verb = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceAPIGroup", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -65320,30 +76314,29 @@ func (m *Exec) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.ResourceAPIGroup = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceNamespace", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -65353,30 +76346,29 @@ func (m *Exec) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := 
int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.ResourceNamespace = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 9: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceKind", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -65386,30 +76378,29 @@ func (m *Exec) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.ResourceKind = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 5: + case 10: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -65419,30 +76410,29 @@ func (m *Exec) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } 
- postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.ResourceName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 6: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CommandMetadata", wireType) + case 11: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field ResponseCode", wireType) } - var msglen int + m.ResponseCode = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -65452,26 +76442,12 @@ func (m *Exec) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.ResponseCode |= int32(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.CommandMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 7: + case 12: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field KubernetesClusterMetadata", wireType) } @@ -65504,9 +76480,9 @@ func (m *Exec) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 8: + case 13: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubernetesPodMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -65533,7 +76509,7 @@ func (m *Exec) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.KubernetesPodMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = 
postIndex @@ -65559,7 +76535,7 @@ func (m *Exec) Unmarshal(dAtA []byte) error { } return nil } -func (m *SCP) Unmarshal(dAtA []byte) error { +func (m *AppMetadata) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -65582,17 +76558,17 @@ func (m *SCP) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SCP: wiretype end group for non-group") + return fmt.Errorf("proto: AppMetadata: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SCP: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AppMetadata: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppURI", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -65602,30 +76578,29 @@ func (m *SCP) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.AppURI = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppPublicAddr", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -65635,28 
+76610,27 @@ func (m *SCP) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.AppPublicAddr = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppLabels", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -65683,15 +76657,109 @@ func (m *SCP) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + if m.AppLabels == nil { + m.AppLabels = make(map[string]string) + } + var mapkey string + var mapvalue string + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthEvents + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if 
postStringIndexmapkey < 0 { + return ErrInvalidLengthEvents + } + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var stringLenmapvalue uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapvalue |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapvalue := int(stringLenmapvalue) + if intStringLenmapvalue < 0 { + return ErrInvalidLengthEvents + } + postStringIndexmapvalue := iNdEx + intStringLenmapvalue + if postStringIndexmapvalue < 0 { + return ErrInvalidLengthEvents + } + if postStringIndexmapvalue > l { + return io.ErrUnexpectedEOF + } + mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) + iNdEx = postStringIndexmapvalue + } else { + iNdEx = entryPreIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } } + m.AppLabels[mapkey] = mapvalue iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -65701,28 +76769,97 @@ func (m *SCP) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if 
postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.AppName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 5: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field AppTargetPort", wireType) + } + m.AppTargetPort = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.AppTargetPort |= uint32(b&0x7F) << shift + if b < 0x80 { + break + } + } + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AppCreate) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AppCreate: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AppCreate: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -65749,13 +76886,13 @@ func (m *SCP) Unmarshal(dAtA []byte) error { if 
postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 6: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CommandMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -65782,15 +76919,15 @@ func (m *SCP) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.CommandMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 7: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -65800,29 +76937,30 @@ func (m *SCP) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Path = string(dAtA[iNdEx:postIndex]) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 8: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Action", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift 
+= 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -65832,23 +76970,24 @@ func (m *SCP) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Action = string(dAtA[iNdEx:postIndex]) + if err := m.AppMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -65872,7 +77011,7 @@ func (m *SCP) Unmarshal(dAtA []byte) error { } return nil } -func (m *SFTPAttributes) Unmarshal(dAtA []byte) error { +func (m *AppUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -65895,15 +77034,15 @@ func (m *SFTPAttributes) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SFTPAttributes: wiretype end group for non-group") + return fmt.Errorf("proto: AppUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SFTPAttributes: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AppUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field FileSize", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -65930,16 +77069,13 @@ func (m *SFTPAttributes) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.FileSize == nil { - m.FileSize = new(uint64) - } - if err := github_com_gogo_protobuf_types.StdUInt64Unmarshal(m.FileSize, dAtA[iNdEx:postIndex]); err != nil { + 
if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -65966,16 +77102,13 @@ func (m *SFTPAttributes) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.UID == nil { - m.UID = new(uint32) - } - if err := github_com_gogo_protobuf_types.StdUInt32Unmarshal(m.UID, dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field GID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -66002,16 +77135,13 @@ func (m *SFTPAttributes) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.GID == nil { - m.GID = new(uint32) - } - if err := github_com_gogo_protobuf_types.StdUInt32Unmarshal(m.GID, dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Permissions", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -66038,16 +77168,64 @@ func (m *SFTPAttributes) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Permissions == nil { - m.Permissions = new(uint32) - } - if err := github_com_gogo_protobuf_types.StdUInt32Unmarshal(m.Permissions, dAtA[iNdEx:postIndex]); err != nil { + if err := m.AppMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx 
= postIndex - case 5: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AppDelete) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AppDelete: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AppDelete: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessTime", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -66074,16 +77252,46 @@ func (m *SFTPAttributes) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.AccessTime == nil { - m.AccessTime = new(time.Time) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(m.AccessTime, dAtA[iNdEx:postIndex]); err != nil { + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l 
{ + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 6: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ModificationTime", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -66110,10 +77318,7 @@ func (m *SFTPAttributes) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.ModificationTime == nil { - m.ModificationTime = new(time.Time) - } - if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(m.ModificationTime, dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -66139,7 +77344,7 @@ func (m *SFTPAttributes) Unmarshal(dAtA []byte) error { } return nil } -func (m *SFTP) Unmarshal(dAtA []byte) error { +func (m *AppSessionStart) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -66162,10 +77367,10 @@ func (m *SFTP) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SFTP: wiretype end group for non-group") + return fmt.Errorf("proto: AppSessionStart: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SFTP: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AppSessionStart: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -66236,7 +77441,7 @@ func (m *SFTP) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 
2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -66263,13 +77468,13 @@ func (m *SFTP) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -66296,13 +77501,13 @@ func (m *SFTP) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -66329,13 +77534,13 @@ func (m *SFTP) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 6: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WorkingDirectory", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PublicAddr", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -66363,13 +77568,13 @@ func (m *SFTP) Unmarshal(dAtA []byte) error { 
if postIndex > l { return io.ErrUnexpectedEOF } - m.WorkingDirectory = string(dAtA[iNdEx:postIndex]) + m.PublicAddr = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 7: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -66379,29 +77584,81 @@ func (m *SFTP) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Path = string(dAtA[iNdEx:postIndex]) + if err := m.AppMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 8: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AppSessionEnd) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AppSessionEnd: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AppSessionEnd: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TargetPath", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -66411,29 +77668,30 @@ func (m *SFTP) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.TargetPath = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 9: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Flags", wireType) + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - m.Flags = 0 + var msglen int for shift := uint(0); ; shift += 7 { if 
shift >= 64 { return ErrIntOverflowEvents @@ -66443,14 +77701,28 @@ func (m *SFTP) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.Flags |= uint32(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - case 10: + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Attributes", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -66477,18 +77749,48 @@ func (m *SFTP) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Attributes == nil { - m.Attributes = &SFTPAttributes{} + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - if err := m.Attributes.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 11: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Action", wireType) + case 5: + if wireType != 2 { + return 
fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - m.Action = 0 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -66498,16 +77800,30 @@ func (m *SFTP) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.Action |= SFTPAction(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - case 12: + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Error", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -66517,23 +77833,24 @@ func (m *SFTP) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Error = string(dAtA[iNdEx:postIndex]) + if err := m.AppMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -66557,7 +77874,7 @@ func (m *SFTP) Unmarshal(dAtA []byte) error { } return nil } -func (m *SFTPSummary) Unmarshal(dAtA []byte) error { +func (m *AppSessionChunk) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -66580,10 +77897,10 @@ func (m *SFTPSummary) Unmarshal(dAtA []byte) error { fieldNum 
:= int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SFTPSummary: wiretype end group for non-group") + return fmt.Errorf("proto: AppSessionChunk: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SFTPSummary: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AppSessionChunk: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -66648,13 +77965,46 @@ func (m *SFTPSummary) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 3: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -66681,13 +78031,13 @@ func (m *SFTPSummary) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { 
return err } iNdEx = postIndex - case 4: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -66714,15 +78064,15 @@ func (m *SFTPSummary) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionChunkID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -66732,28 +78082,27 @@ func (m *SFTPSummary) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.SessionChunkID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 6: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field FileTransferStats", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -66780,8 +78129,7 @@ func (m *SFTPSummary) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.FileTransferStats = 
append(m.FileTransferStats, &FileTransferStat{}) - if err := m.FileTransferStats[len(m.FileTransferStats)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.AppMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -66807,7 +78155,7 @@ func (m *SFTPSummary) Unmarshal(dAtA []byte) error { } return nil } -func (m *FileTransferStat) Unmarshal(dAtA []byte) error { +func (m *AppSessionRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -66830,17 +78178,17 @@ func (m *FileTransferStat) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: FileTransferStat: wiretype end group for non-group") + return fmt.Errorf("proto: AppSessionRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: FileTransferStat: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AppSessionRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -66850,29 +78198,30 @@ func (m *FileTransferStat) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Path = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex 
case 2: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field BytesRead", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StatusCode", wireType) } - m.BytesRead = 0 + m.StatusCode = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -66882,16 +78231,16 @@ func (m *FileTransferStat) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.BytesRead |= uint64(b&0x7F) << shift + m.StatusCode |= uint32(b&0x7F) << shift if b < 0x80 { break } } case 3: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field BytesWritten", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) } - m.BytesWritten = 0 + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -66901,67 +78250,61 @@ func (m *FileTransferStat) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.BytesWritten |= uint64(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents } - if (skippy < 0) || (iNdEx+skippy) < 0 { + postIndex := iNdEx + intStringLen + if postIndex < 0 { return ErrInvalidLengthEvents } - if (iNdEx + skippy) > l { + if postIndex > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *Subsystem) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents + m.Path = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field RawQuery", wireType) } - if iNdEx >= l { - return io.ErrUnexpectedEOF + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: Subsystem: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: Subsystem: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.RawQuery = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Method", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -66971,28 +78314,27 @@ func (m *Subsystem) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { 
break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Method = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -67019,13 +78361,13 @@ func (m *Subsystem) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.AppMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 3: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AWSRequestMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -67052,13 +78394,64 @@ func (m *Subsystem) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.AWSRequestMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AWSRequestMetadata) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AWSRequestMetadata: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AWSRequestMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AWSRegion", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -67086,11 +78479,11 @@ func (m *Subsystem) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Name = string(dAtA[iNdEx:postIndex]) + m.AWSRegion = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 5: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Error", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AWSService", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -67118,13 +78511,13 @@ func (m *Subsystem) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Error = string(dAtA[iNdEx:postIndex]) + m.AWSService = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 6: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AWSHost", wireType) } - var msglen int + var stringLen uint64 for 
shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -67134,24 +78527,55 @@ func (m *Subsystem) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + m.AWSHost = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AWSAssumedRole", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF } + m.AWSAssumedRole = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -67175,7 +78599,7 @@ func (m *Subsystem) Unmarshal(dAtA []byte) error { } return nil } -func (m *ClientDisconnect) Unmarshal(dAtA []byte) error { +func (m *DatabaseMetadata) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -67198,17 +78622,17 @@ func (m *ClientDisconnect) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ClientDisconnect: wiretype end group for non-group") + return fmt.Errorf("proto: DatabaseMetadata: wiretype end group for non-group") } if 
fieldNum <= 0 { - return fmt.Errorf("proto: ClientDisconnect: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DatabaseMetadata: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseService", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -67218,30 +78642,29 @@ func (m *ClientDisconnect) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.DatabaseService = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseProtocol", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -67251,30 +78674,29 @@ func (m *ClientDisconnect) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := 
m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.DatabaseProtocol = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseURI", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -67284,28 +78706,91 @@ func (m *ClientDisconnect) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.DatabaseURI = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.DatabaseName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseUser", 
wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.DatabaseUser = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseLabels", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -67332,13 +78817,107 @@ func (m *ClientDisconnect) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + if m.DatabaseLabels == nil { + m.DatabaseLabels = make(map[string]string) + } + var mapkey string + var mapvalue string + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthEvents + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey < 0 { + return ErrInvalidLengthEvents + } + if 
postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var stringLenmapvalue uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapvalue |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapvalue := int(stringLenmapvalue) + if intStringLenmapvalue < 0 { + return ErrInvalidLengthEvents + } + postStringIndexmapvalue := iNdEx + intStringLenmapvalue + if postStringIndexmapvalue < 0 { + return ErrInvalidLengthEvents + } + if postStringIndexmapvalue > l { + return io.ErrUnexpectedEOF + } + mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) + iNdEx = postStringIndexmapvalue + } else { + iNdEx = entryPreIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } } + m.DatabaseLabels[mapkey] = mapvalue iNdEx = postIndex - case 5: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Reason", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseAWSRegion", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -67366,97 +78945,13 @@ func (m *ClientDisconnect) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Reason = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *AuthAttempt) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: AuthAttempt: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: AuthAttempt: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.DatabaseAWSRegion = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseAWSRedshiftClusterID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -67466,30 +78961,29 @@ func (m *AuthAttempt) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= 
uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.DatabaseAWSRedshiftClusterID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: + case 9: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseGCPProjectID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -67499,30 +78993,29 @@ func (m *AuthAttempt) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.DatabaseGCPProjectID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 10: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseGCPInstanceID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -67532,81 +79025,29 @@ func (m *AuthAttempt) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b 
< 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.DatabaseGCPInstanceID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *UserTokenCreate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: UserTokenCreate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: UserTokenCreate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 11: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseRoles", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -67616,30 +79057,29 @@ func (m *UserTokenCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - 
msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.DatabaseRoles = append(m.DatabaseRoles, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 2: + case 12: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseType", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -67649,30 +79089,29 @@ func (m *UserTokenCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.DatabaseType = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: + case 13: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseOrigin", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -67682,24 +79121,23 @@ func (m *UserTokenCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + 
stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.DatabaseOrigin = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -67723,7 +79161,7 @@ func (m *UserTokenCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *RoleCreate) Unmarshal(dAtA []byte) error { +func (m *DatabaseCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -67746,10 +79184,10 @@ func (m *RoleCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: RoleCreate: wiretype end group for non-group") + return fmt.Errorf("proto: DatabaseCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: RoleCreate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DatabaseCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -67787,7 +79225,7 @@ func (m *RoleCreate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -67814,13 +79252,13 @@ func (m *RoleCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return 
fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -67847,13 +79285,13 @@ func (m *RoleCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -67880,7 +79318,7 @@ func (m *RoleCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -67906,7 +79344,7 @@ func (m *RoleCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *RoleUpdate) Unmarshal(dAtA []byte) error { +func (m *DatabaseUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -67929,10 +79367,10 @@ func (m *RoleUpdate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: RoleUpdate: wiretype end group for non-group") + return fmt.Errorf("proto: DatabaseUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: RoleUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DatabaseUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -67970,7 +79408,7 @@ func (m *RoleUpdate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - 
return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -67997,13 +79435,13 @@ func (m *RoleUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -68030,13 +79468,13 @@ func (m *RoleUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -68063,7 +79501,7 @@ func (m *RoleUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -68089,7 +79527,7 @@ func (m *RoleUpdate) Unmarshal(dAtA []byte) error { } return nil } -func (m *RoleDelete) Unmarshal(dAtA []byte) error { +func (m *DatabaseDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -68112,10 +79550,10 @@ func (m *RoleDelete) Unmarshal(dAtA []byte) error { fieldNum := 
int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: RoleDelete: wiretype end group for non-group") + return fmt.Errorf("proto: DatabaseDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: RoleDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DatabaseDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -68152,39 +79590,6 @@ func (m *RoleDelete) Unmarshal(dAtA []byte) error { } iNdEx = postIndex case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 3: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } @@ -68217,9 +79622,9 @@ func (m *RoleDelete) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 4: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -68246,7 +79651,7 @@ func (m *RoleDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { 
return err } iNdEx = postIndex @@ -68272,7 +79677,7 @@ func (m *RoleDelete) Unmarshal(dAtA []byte) error { } return nil } -func (m *BotCreate) Unmarshal(dAtA []byte) error { +func (m *DatabaseSessionStart) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -68295,10 +79700,10 @@ func (m *BotCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: BotCreate: wiretype end group for non-group") + return fmt.Errorf("proto: DatabaseSessionStart: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: BotCreate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DatabaseSessionStart: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -68336,7 +79741,7 @@ func (m *BotCreate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -68363,13 +79768,13 @@ func (m *BotCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -68396,64 +79801,79 @@ func (m *BotCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != 
nil { return err } iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) } - if (skippy < 0) || (iNdEx+skippy) < 0 { + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { return ErrInvalidLengthEvents } - if (iNdEx + skippy) > l { + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *BotUpdate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - if iNdEx >= l { + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { return io.ErrUnexpectedEOF } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break + if err := 
m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: BotUpdate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: BotUpdate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + iNdEx = postIndex + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -68480,13 +79900,13 @@ func (m *BotUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -68513,13 +79933,32 @@ func (m *BotUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 3: + case 8: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field PostgresPID", wireType) + } + m.PostgresPID = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.PostgresPID |= uint32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 9: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for 
field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ClientMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -68546,7 +79985,7 @@ func (m *BotUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ClientMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -68572,7 +80011,7 @@ func (m *BotUpdate) Unmarshal(dAtA []byte) error { } return nil } -func (m *BotDelete) Unmarshal(dAtA []byte) error { +func (m *DatabaseSessionQuery) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -68595,10 +80034,10 @@ func (m *BotDelete) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: BotDelete: wiretype end group for non-group") + return fmt.Errorf("proto: DatabaseSessionQuery: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: BotDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DatabaseSessionQuery: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -68636,7 +80075,7 @@ func (m *BotDelete) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -68663,13 +80102,13 @@ func (m *BotDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for 
field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -68696,64 +80135,13 @@ func (m *BotDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *TrustedClusterCreate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: TrustedClusterCreate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: TrustedClusterCreate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -68780,15 +80168,15 @@ func (m *TrustedClusterCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if 
err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseQuery", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -68798,30 +80186,29 @@ func (m *TrustedClusterCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.DatabaseQuery = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseQueryParameters", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -68831,28 +80218,27 @@ func (m *TrustedClusterCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := 
m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + return io.ErrUnexpectedEOF } + m.DatabaseQueryParameters = append(m.DatabaseQueryParameters, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 4: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -68879,7 +80265,7 @@ func (m *TrustedClusterCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -68905,7 +80291,7 @@ func (m *TrustedClusterCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *TrustedClusterDelete) Unmarshal(dAtA []byte) error { +func (m *DatabaseSessionCommandResult) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -68928,10 +80314,10 @@ func (m *TrustedClusterDelete) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: TrustedClusterDelete: wiretype end group for non-group") + return fmt.Errorf("proto: DatabaseSessionCommandResult: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: TrustedClusterDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DatabaseSessionCommandResult: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -68969,7 +80355,7 @@ func (m *TrustedClusterDelete) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := 
uint(0); ; shift += 7 { @@ -68996,13 +80382,13 @@ func (m *TrustedClusterDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -69029,13 +80415,13 @@ func (m *TrustedClusterDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -69062,10 +80448,62 @@ func (m *TrustedClusterDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return 
ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex + case 6: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field AffectedRecords", wireType) + } + m.AffectedRecords = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.AffectedRecords |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -69088,7 +80526,7 @@ func (m *TrustedClusterDelete) Unmarshal(dAtA []byte) error { } return nil } -func (m *ProvisionTokenCreate) Unmarshal(dAtA []byte) error { +func (m *DatabasePermissionUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -69111,10 +80549,10 @@ func (m *ProvisionTokenCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ProvisionTokenCreate: wiretype end group for non-group") + return fmt.Errorf("proto: DatabasePermissionUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ProvisionTokenCreate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DatabasePermissionUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -69152,7 +80590,7 @@ func (m *ProvisionTokenCreate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -69179,13 +80617,13 @@ func (m *ProvisionTokenCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if 
err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -69212,15 +80650,15 @@ func (m *ProvisionTokenCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -69230,29 +80668,30 @@ func (m *ProvisionTokenCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Roles = append(m.Roles, github_com_gravitational_teleport_api_types.SystemRole(dAtA[iNdEx:postIndex])) + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field JoinMethod", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PermissionSummary", wireType) } - var stringLen uint64 + var msglen 
int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -69262,23 +80701,138 @@ func (m *ProvisionTokenCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.JoinMethod = github_com_gravitational_teleport_api_types.JoinMethod(dAtA[iNdEx:postIndex]) + m.PermissionSummary = append(m.PermissionSummary, DatabasePermissionEntry{}) + if err := m.PermissionSummary[len(m.PermissionSummary)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AffectedObjectCounts", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.AffectedObjectCounts == nil { + m.AffectedObjectCounts = make(map[string]int32) + } + var mapkey string + var mapvalue int32 + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := 
uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthEvents + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey < 0 { + return ErrInvalidLengthEvents + } + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + mapvalue |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + } else { + iNdEx = entryPreIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + m.AffectedObjectCounts[mapkey] = mapvalue iNdEx = postIndex default: iNdEx = preIndex @@ -69302,7 +80856,7 @@ func (m *ProvisionTokenCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *TrustedClusterTokenCreate) Unmarshal(dAtA []byte) error { +func (m *DatabasePermissionEntry) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -69325,17 +80879,17 @@ func (m *TrustedClusterTokenCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: TrustedClusterTokenCreate: wiretype end group for non-group") + return fmt.Errorf("proto: DatabasePermissionEntry: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: TrustedClusterTokenCreate: illegal tag %d (wire type %d)", 
fieldNum, wire) + return fmt.Errorf("proto: DatabasePermissionEntry: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Permission", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -69345,28 +80899,27 @@ func (m *TrustedClusterTokenCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Permission = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Counts", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -69393,42 +80946,89 @@ func (m *TrustedClusterTokenCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + if m.Counts == nil { + m.Counts = make(map[string]int32) } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF + var mapkey string + var mapvalue int32 + for iNdEx < postIndex { + 
entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthEvents + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey < 0 { + return ErrInvalidLengthEvents + } + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + mapvalue |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + } else { + iNdEx = entryPreIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy } } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Counts[mapkey] = mapvalue iNdEx = postIndex default: iNdEx = preIndex @@ -69452,7 
+81052,7 @@ func (m *TrustedClusterTokenCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *GithubConnectorCreate) Unmarshal(dAtA []byte) error { +func (m *DatabaseUserCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -69475,10 +81075,10 @@ func (m *GithubConnectorCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GithubConnectorCreate: wiretype end group for non-group") + return fmt.Errorf("proto: DatabaseUserCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GithubConnectorCreate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DatabaseUserCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -69515,39 +81115,6 @@ func (m *GithubConnectorCreate) Unmarshal(dAtA []byte) error { } iNdEx = postIndex case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 3: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } @@ -69580,9 +81147,9 @@ func (m *GithubConnectorCreate) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 4: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return 
fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -69609,64 +81176,13 @@ func (m *GithubConnectorCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *GithubConnectorUpdate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: GithubConnectorUpdate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: GithubConnectorUpdate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -69693,13 +81209,13 @@ func (m *GithubConnectorUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := 
m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -69726,15 +81242,15 @@ func (m *GithubConnectorUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 3: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Username", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -69744,30 +81260,29 @@ func (m *GithubConnectorUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Username = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 
{ return ErrIntOverflowEvents @@ -69777,24 +81292,23 @@ func (m *GithubConnectorUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Roles = append(m.Roles, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex default: iNdEx = preIndex @@ -69818,7 +81332,7 @@ func (m *GithubConnectorUpdate) Unmarshal(dAtA []byte) error { } return nil } -func (m *GithubConnectorDelete) Unmarshal(dAtA []byte) error { +func (m *DatabaseUserDeactivate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -69841,10 +81355,10 @@ func (m *GithubConnectorDelete) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GithubConnectorDelete: wiretype end group for non-group") + return fmt.Errorf("proto: DatabaseUserDeactivate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GithubConnectorDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DatabaseUserDeactivate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -69882,7 +81396,7 @@ func (m *GithubConnectorDelete) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -69909,13 +81423,13 @@ func (m *GithubConnectorDelete) 
Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -69942,13 +81456,13 @@ func (m *GithubConnectorDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -69975,64 +81489,13 @@ func (m *GithubConnectorDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *OIDCConnectorCreate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: OIDCConnectorCreate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: OIDCConnectorCreate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -70059,15 +81522,15 @@ func (m *OIDCConnectorCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Username", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -70077,30 +81540,29 @@ func (m *OIDCConnectorCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - 
postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Username = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + case 7: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Delete", wireType) } - var msglen int + var v int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -70110,25 +81572,12 @@ func (m *OIDCConnectorCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + v |= int(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex + m.Delete = bool(v != 0) default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -70151,7 +81600,7 @@ func (m *OIDCConnectorCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *OIDCConnectorUpdate) Unmarshal(dAtA []byte) error { +func (m *PostgresParse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -70174,10 +81623,10 @@ func (m *OIDCConnectorUpdate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: OIDCConnectorUpdate: wiretype end group for non-group") + return fmt.Errorf("proto: PostgresParse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: OIDCConnectorUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: PostgresParse: 
illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -70215,7 +81664,7 @@ func (m *OIDCConnectorUpdate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -70242,13 +81691,13 @@ func (m *OIDCConnectorUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -70275,64 +81724,13 @@ func (m *OIDCConnectorUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *OIDCConnectorDelete) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: OIDCConnectorDelete: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: OIDCConnectorDelete: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -70359,15 +81757,15 @@ func (m *OIDCConnectorDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StatementName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -70377,30 +81775,29 @@ func (m *OIDCConnectorDelete) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return 
ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.StatementName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Query", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -70410,24 +81807,23 @@ func (m *OIDCConnectorDelete) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Query = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -70451,7 +81847,7 @@ func (m *OIDCConnectorDelete) Unmarshal(dAtA []byte) error { } return nil } -func (m *SAMLConnectorCreate) Unmarshal(dAtA []byte) error { +func (m *PostgresBind) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -70474,10 +81870,10 @@ func (m *SAMLConnectorCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SAMLConnectorCreate: wiretype end group for non-group") + return fmt.Errorf("proto: PostgresBind: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SAMLConnectorCreate: illegal tag 
%d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: PostgresBind: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -70514,39 +81910,6 @@ func (m *SAMLConnectorCreate) Unmarshal(dAtA []byte) error { } iNdEx = postIndex case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 3: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } @@ -70579,9 +81942,9 @@ func (m *SAMLConnectorCreate) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 4: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Connector", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -70608,67 +81971,13 @@ func (m *SAMLConnectorCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Connector == nil { - m.Connector = &types1.SAMLConnectorV2{} - } - if err := m.Connector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - 
} - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *SAMLConnectorUpdate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: SAMLConnectorUpdate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: SAMLConnectorUpdate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -70695,15 +82004,15 @@ func (m *SAMLConnectorUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StatementName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -70713,30 +82022,29 @@ func (m *SAMLConnectorUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen 
|= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.StatementName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PortalName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -70746,30 +82054,29 @@ func (m *SAMLConnectorUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.PortalName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Connector", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Parameters", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -70779,27 +82086,23 @@ func (m *SAMLConnectorUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 
0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if m.Connector == nil { - m.Connector = &types1.SAMLConnectorV2{} - } - if err := m.Connector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Parameters = append(m.Parameters, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex default: iNdEx = preIndex @@ -70823,7 +82126,7 @@ func (m *SAMLConnectorUpdate) Unmarshal(dAtA []byte) error { } return nil } -func (m *SAMLConnectorDelete) Unmarshal(dAtA []byte) error { +func (m *PostgresExecute) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -70846,10 +82149,10 @@ func (m *SAMLConnectorDelete) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SAMLConnectorDelete: wiretype end group for non-group") + return fmt.Errorf("proto: PostgresExecute: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SAMLConnectorDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: PostgresExecute: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -70881,13 +82184,79 @@ func (m *SAMLConnectorDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= 
int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -70914,15 +82283,15 @@ func (m *SAMLConnectorDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 3: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PortalName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -70932,24 +82301,23 @@ func (m 
*SAMLConnectorDelete) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.PortalName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -70973,7 +82341,7 @@ func (m *SAMLConnectorDelete) Unmarshal(dAtA []byte) error { } return nil } -func (m *KubeRequest) Unmarshal(dAtA []byte) error { +func (m *PostgresClose) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -70996,10 +82364,10 @@ func (m *KubeRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: KubeRequest: wiretype end group for non-group") + return fmt.Errorf("proto: PostgresClose: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: KubeRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: PostgresClose: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -71070,7 +82438,7 @@ func (m *KubeRequest) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -71097,13 +82465,13 @@ func (m *KubeRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := 
m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -71130,13 +82498,13 @@ func (m *KubeRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RequestPath", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StatementName", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -71164,11 +82532,11 @@ func (m *KubeRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.RequestPath = string(dAtA[iNdEx:postIndex]) + m.StatementName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Verb", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PortalName", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -71196,13 +82564,64 @@ func (m *KubeRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Verb = string(dAtA[iNdEx:postIndex]) + m.PortalName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 7: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *PostgresFunctionCall) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: PostgresFunctionCall: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: PostgresFunctionCall: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceAPIGroup", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -71212,29 +82631,30 @@ func (m *KubeRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.ResourceAPIGroup = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 8: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceNamespace", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int 
for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -71244,29 +82664,30 @@ func (m *KubeRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.ResourceNamespace = string(dAtA[iNdEx:postIndex]) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 9: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceKind", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -71276,29 +82697,30 @@ func (m *KubeRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.ResourceKind = string(dAtA[iNdEx:postIndex]) + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 10: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return 
ErrIntOverflowEvents @@ -71308,48 +82730,30 @@ func (m *KubeRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.ResourceName = string(dAtA[iNdEx:postIndex]) + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 11: + case 5: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field ResponseCode", wireType) - } - m.ResponseCode = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.ResponseCode |= int32(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 12: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubernetesClusterMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field FunctionOID", wireType) } - var msglen int + m.FunctionOID = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -71359,30 +82763,16 @@ func (m *KubeRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.FunctionOID |= uint32(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.KubernetesClusterMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 13: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for 
field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field FunctionArgs", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -71392,24 +82782,23 @@ func (m *KubeRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.FunctionArgs = append(m.FunctionArgs, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex default: iNdEx = preIndex @@ -71433,7 +82822,7 @@ func (m *KubeRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *AppMetadata) Unmarshal(dAtA []byte) error { +func (m *WindowsCertificateMetadata) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -71452,19 +82841,51 @@ func (m *AppMetadata) Unmarshal(dAtA []byte) error { if b < 0x80 { break } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: AppMetadata: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: AppMetadata: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: WindowsCertificateMetadata: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: WindowsCertificateMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = 
%d for field Subject", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Subject = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AppURI", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SerialNumber", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -71492,11 +82913,11 @@ func (m *AppMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AppURI = string(dAtA[iNdEx:postIndex]) + m.SerialNumber = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AppPublicAddr", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UPN", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -71524,13 +82945,13 @@ func (m *AppMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AppPublicAddr = string(dAtA[iNdEx:postIndex]) + m.UPN = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AppLabels", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CRLDistributionPoints", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -71540,29 +82961,46 @@ func (m *AppMetadata) Unmarshal(dAtA []byte) error { 
} b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if m.AppLabels == nil { - m.AppLabels = make(map[string]string) + m.CRLDistributionPoints = append(m.CRLDistributionPoints, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 5: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field KeyUsage", wireType) } - var mapkey string - var mapvalue string - for iNdEx < postIndex { - entryPreIndex := iNdEx - var wire uint64 + m.KeyUsage = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.KeyUsage |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 6: + if wireType == 0 { + var v int32 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -71572,43 +83010,51 @@ func (m *AppMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - wire |= uint64(b&0x7F) << shift + v |= int32(b&0x7F) << shift if b < 0x80 { break } } - fieldNum := int32(wire >> 3) - if fieldNum == 1 { - var stringLenmapkey uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapkey |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } + m.ExtendedKeyUsage = append(m.ExtendedKeyUsage, v) + } else if wireType == 2 { + var packedLen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents } - intStringLenmapkey := int(stringLenmapkey) - if intStringLenmapkey < 0 { - return ErrInvalidLengthEvents 
+ if iNdEx >= l { + return io.ErrUnexpectedEOF } - postStringIndexmapkey := iNdEx + intStringLenmapkey - if postStringIndexmapkey < 0 { - return ErrInvalidLengthEvents + b := dAtA[iNdEx] + iNdEx++ + packedLen |= int(b&0x7F) << shift + if b < 0x80 { + break } - if postStringIndexmapkey > l { - return io.ErrUnexpectedEOF + } + if packedLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + packedLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + var elementCount int + var count int + for _, integer := range dAtA[iNdEx:postIndex] { + if integer < 128 { + count++ } - mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) - iNdEx = postStringIndexmapkey - } else if fieldNum == 2 { - var stringLenmapvalue uint64 + } + elementCount = count + if elementCount != 0 && len(m.ExtendedKeyUsage) == 0 { + m.ExtendedKeyUsage = make([]int32, 0, elementCount) + } + for iNdEx < postIndex { + var v int32 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -71618,44 +83064,19 @@ func (m *AppMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLenmapvalue |= uint64(b&0x7F) << shift + v |= int32(b&0x7F) << shift if b < 0x80 { break } } - intStringLenmapvalue := int(stringLenmapvalue) - if intStringLenmapvalue < 0 { - return ErrInvalidLengthEvents - } - postStringIndexmapvalue := iNdEx + intStringLenmapvalue - if postStringIndexmapvalue < 0 { - return ErrInvalidLengthEvents - } - if postStringIndexmapvalue > l { - return io.ErrUnexpectedEOF - } - mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) - iNdEx = postStringIndexmapvalue - } else { - iNdEx = entryPreIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > postIndex { - return io.ErrUnexpectedEOF - } - iNdEx += skippy + m.ExtendedKeyUsage = append(m.ExtendedKeyUsage, v) } 
+ } else { + return fmt.Errorf("proto: wrong wireType = %d for field ExtendedKeyUsage", wireType) } - m.AppLabels[mapkey] = mapvalue - iNdEx = postIndex - case 4: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AppName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field EnhancedKeyUsage", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -71683,27 +83104,8 @@ func (m *AppMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AppName = string(dAtA[iNdEx:postIndex]) + m.EnhancedKeyUsage = append(m.EnhancedKeyUsage, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 5: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field AppTargetPort", wireType) - } - m.AppTargetPort = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.AppTargetPort |= uint32(b&0x7F) << shift - if b < 0x80 { - break - } - } default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -71726,7 +83128,7 @@ func (m *AppMetadata) Unmarshal(dAtA []byte) error { } return nil } -func (m *AppCreate) Unmarshal(dAtA []byte) error { +func (m *WindowsDesktopSessionStart) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -71749,10 +83151,10 @@ func (m *AppCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AppCreate: wiretype end group for non-group") + return fmt.Errorf("proto: WindowsDesktopSessionStart: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AppCreate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: WindowsDesktopSessionStart: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -71823,7 +83225,7 @@ func (m *AppCreate) 
Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -71850,13 +83252,13 @@ func (m *AppCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AppMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -71883,66 +83285,112 @@ func (m *AppCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.AppMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } - if (skippy < 0) || (iNdEx+skippy) < 0 { + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { return ErrInvalidLengthEvents } - if (iNdEx + skippy) > l { + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *AppUpdate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - if iNdEx >= l { + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field WindowsDesktopService", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { return io.ErrUnexpectedEOF } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break + m.WindowsDesktopService = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DesktopAddr", wireType) } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: AppUpdate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: AppUpdate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return 
ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.DesktopAddr = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Domain", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -71952,28 +83400,59 @@ func (m *AppUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + m.Domain = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 9: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field WindowsUser", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF } + m.WindowsUser = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: + case 10: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return 
fmt.Errorf("proto: wrong wireType = %d for field DesktopLabels", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -72000,15 +83479,109 @@ func (m *AppUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + if m.DesktopLabels == nil { + m.DesktopLabels = make(map[string]string) + } + var mapkey string + var mapvalue string + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthEvents + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey < 0 { + return ErrInvalidLengthEvents + } + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var stringLenmapvalue uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapvalue |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapvalue := int(stringLenmapvalue) + if intStringLenmapvalue < 0 { + return ErrInvalidLengthEvents + } + postStringIndexmapvalue := iNdEx + intStringLenmapvalue + if postStringIndexmapvalue < 
0 { + return ErrInvalidLengthEvents + } + if postStringIndexmapvalue > l { + return io.ErrUnexpectedEOF + } + mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) + iNdEx = postStringIndexmapvalue + } else { + iNdEx = entryPreIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } } + m.DesktopLabels[mapkey] = mapvalue iNdEx = postIndex - case 3: + case 11: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DesktopName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -72018,28 +83591,67 @@ func (m *AppUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.DesktopName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 12: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field AllowUserCreation", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.AllowUserCreation = bool(v != 0) + case 13: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d 
for field NLA", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.NLA = bool(v != 0) + case 14: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AppMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CertMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -72066,7 +83678,10 @@ func (m *AppUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.AppMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.CertMetadata == nil { + m.CertMetadata = &WindowsCertificateMetadata{} + } + if err := m.CertMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -72092,7 +83707,7 @@ func (m *AppUpdate) Unmarshal(dAtA []byte) error { } return nil } -func (m *AppDelete) Unmarshal(dAtA []byte) error { +func (m *DatabaseSessionEnd) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -72115,10 +83730,10 @@ func (m *AppDelete) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AppDelete: wiretype end group for non-group") + return fmt.Errorf("proto: DatabaseSessionEnd: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AppDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DatabaseSessionEnd: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -72189,91 +83804,7 @@ func (m *AppDelete) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 
64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *AppSessionStart) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: AppSessionStart: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: AppSessionStart: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -72300,13 +83831,13 @@ func (m *AppSessionStart) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := 
m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -72333,13 +83864,13 @@ func (m *AppSessionStart) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 3: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StartTime", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -72366,13 +83897,13 @@ func (m *AppSessionStart) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.StartTime, dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field EndTime", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -72399,11 +83930,11 @@ func (m *AppSessionStart) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.EndTime, dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: + case 7: if wireType != 2 { 
return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } @@ -72436,9 +83967,9 @@ func (m *AppSessionStart) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 7: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PublicAddr", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Participants", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -72466,40 +83997,7 @@ func (m *AppSessionStart) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.PublicAddr = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 8: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AppMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.AppMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Participants = append(m.Participants, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex default: iNdEx = preIndex @@ -72523,7 +84021,7 @@ func (m *AppSessionStart) Unmarshal(dAtA []byte) error { } return nil } -func (m *AppSessionEnd) Unmarshal(dAtA []byte) error { +func (m *MFADeviceMetadata) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -72546,17 +84044,17 @@ func (m *AppSessionEnd) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AppSessionEnd: wiretype end group for non-group") + return fmt.Errorf("proto: MFADeviceMetadata: wiretype end group for 
non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AppSessionEnd: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: MFADeviceMetadata: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DeviceName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -72566,30 +84064,29 @@ func (m *AppSessionEnd) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.DeviceName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DeviceID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -72599,30 +84096,29 @@ func (m *AppSessionEnd) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := 
m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.DeviceID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DeviceType", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -72632,28 +84128,78 @@ func (m *AppSessionEnd) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.DeviceType = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { return err } - iNdEx = postIndex - case 4: + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *MFADeviceAdd) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MFADeviceAdd: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: MFADeviceAdd: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -72680,13 +84226,13 @@ func (m *AppSessionEnd) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -72713,13 +84259,13 @@ func (m *AppSessionEnd) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 6: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field 
AppMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MFADeviceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -72746,7 +84292,40 @@ func (m *AppSessionEnd) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.AppMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.MFADeviceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -72772,7 +84351,7 @@ func (m *AppSessionEnd) Unmarshal(dAtA []byte) error { } return nil } -func (m *AppSessionChunk) Unmarshal(dAtA []byte) error { +func (m *MFADeviceDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -72795,10 +84374,10 @@ func (m *AppSessionChunk) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AppSessionChunk: wiretype end group for non-group") + return fmt.Errorf("proto: MFADeviceDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AppSessionChunk: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: MFADeviceDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -72869,7 +84448,7 @@ 
func (m *AppSessionChunk) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MFADeviceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -72896,13 +84475,13 @@ func (m *AppSessionChunk) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.MFADeviceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -72929,48 +84508,66 @@ func (m *AppSessionChunk) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err } - if msglen < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF } - if postIndex > l { + m.XXX_unrecognized = 
append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *BillingInformationUpdate) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break } - iNdEx = postIndex - case 6: + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: BillingInformationUpdate: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: BillingInformationUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionChunkID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -72980,27 +84577,28 @@ func (m *AppSessionChunk) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.SessionChunkID = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 7: + case 2: if wireType != 2 { - return fmt.Errorf("proto: 
wrong wireType = %d for field AppMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -73027,7 +84625,7 @@ func (m *AppSessionChunk) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.AppMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -73053,7 +84651,7 @@ func (m *AppSessionChunk) Unmarshal(dAtA []byte) error { } return nil } -func (m *AppSessionRequest) Unmarshal(dAtA []byte) error { +func (m *BillingCardCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -73076,10 +84674,10 @@ func (m *AppSessionRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AppSessionRequest: wiretype end group for non-group") + return fmt.Errorf("proto: BillingCardCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AppSessionRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: BillingCardCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -73116,29 +84714,10 @@ func (m *AppSessionRequest) Unmarshal(dAtA []byte) error { } iNdEx = postIndex case 2: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field StatusCode", wireType) - } - m.StatusCode = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.StatusCode |= uint32(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var 
stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -73148,91 +84727,79 @@ func (m *AppSessionRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Path = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RawQuery", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err } - postIndex := iNdEx + intStringLen - if postIndex < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - if postIndex > l { + if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.RawQuery = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Method", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - 
intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *BillingCardDelete) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents } - if postIndex > l { + if iNdEx >= l { return io.ErrUnexpectedEOF } - m.Method = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 6: + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: BillingCardDelete: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: BillingCardDelete: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AppMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -73259,13 +84826,13 @@ func (m *AppSessionRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.AppMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 7: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AWSRequestMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -73292,7 +84859,7 @@ func (m 
*AppSessionRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.AWSRequestMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -73318,7 +84885,7 @@ func (m *AppSessionRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *AWSRequestMetadata) Unmarshal(dAtA []byte) error { +func (m *LockCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -73341,17 +84908,17 @@ func (m *AWSRequestMetadata) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AWSRequestMetadata: wiretype end group for non-group") + return fmt.Errorf("proto: LockCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AWSRequestMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: LockCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AWSRegion", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -73361,29 +84928,30 @@ func (m *AWSRequestMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.AWSRegion = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return 
err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AWSService", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -73393,29 +84961,30 @@ func (m *AWSRequestMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.AWSService = string(dAtA[iNdEx:postIndex]) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AWSHost", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -73425,29 +84994,30 @@ func (m *AWSRequestMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.AWSHost = string(dAtA[iNdEx:postIndex]) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong 
wireType = %d for field AWSAssumedRole", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Target", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -73457,23 +85027,57 @@ func (m *AWSRequestMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.AWSAssumedRole = string(dAtA[iNdEx:postIndex]) + if err := m.Target.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Lock", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Lock.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -73497,7 +85101,7 @@ func (m *AWSRequestMetadata) Unmarshal(dAtA []byte) error { } return nil } -func (m *DatabaseMetadata) Unmarshal(dAtA []byte) error { +func (m *LockDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -73520,17 +85124,17 @@ func (m *DatabaseMetadata) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return 
fmt.Errorf("proto: DatabaseMetadata: wiretype end group for non-group") + return fmt.Errorf("proto: LockDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: DatabaseMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: LockDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseService", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -73540,29 +85144,30 @@ func (m *DatabaseMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.DatabaseService = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseProtocol", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -73572,29 +85177,30 @@ func (m *DatabaseMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + 
postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.DatabaseProtocol = string(dAtA[iNdEx:postIndex]) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseURI", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -73604,29 +85210,30 @@ func (m *DatabaseMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.DatabaseURI = string(dAtA[iNdEx:postIndex]) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Lock", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -73636,59 +85243,79 @@ func (m *DatabaseMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > 
l { return io.ErrUnexpectedEOF } - m.DatabaseName = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseUser", wireType) + if err := m.Lock.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err } - intStringLen := int(stringLen) - if intStringLen < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF } - if postIndex > l { + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *RecoveryCodeGenerate) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { return io.ErrUnexpectedEOF } - m.DatabaseUser = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 6: + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: RecoveryCodeGenerate: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: RecoveryCodeGenerate: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseLabels", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -73715,109 +85342,15 @@ func (m *DatabaseMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.DatabaseLabels == nil { - m.DatabaseLabels = make(map[string]string) - } - var mapkey string - var mapvalue string - for iNdEx < postIndex { - entryPreIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - if fieldNum == 1 { - var stringLenmapkey uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapkey |= uint64(b&0x7F) << shift - if b < 0x80 { - 
break - } - } - intStringLenmapkey := int(stringLenmapkey) - if intStringLenmapkey < 0 { - return ErrInvalidLengthEvents - } - postStringIndexmapkey := iNdEx + intStringLenmapkey - if postStringIndexmapkey < 0 { - return ErrInvalidLengthEvents - } - if postStringIndexmapkey > l { - return io.ErrUnexpectedEOF - } - mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) - iNdEx = postStringIndexmapkey - } else if fieldNum == 2 { - var stringLenmapvalue uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapvalue |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapvalue := int(stringLenmapvalue) - if intStringLenmapvalue < 0 { - return ErrInvalidLengthEvents - } - postStringIndexmapvalue := iNdEx + intStringLenmapvalue - if postStringIndexmapvalue < 0 { - return ErrInvalidLengthEvents - } - if postStringIndexmapvalue > l { - return io.ErrUnexpectedEOF - } - mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) - iNdEx = postStringIndexmapvalue - } else { - iNdEx = entryPreIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > postIndex { - return io.ErrUnexpectedEOF - } - iNdEx += skippy - } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - m.DatabaseLabels[mapkey] = mapvalue iNdEx = postIndex - case 7: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseAWSRegion", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -73827,125 +85360,81 @@ func (m *DatabaseMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= 
uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.DatabaseAWSRegion = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 8: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseAWSRedshiftClusterID", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - m.DatabaseAWSRedshiftClusterID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 9: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseGCPProjectID", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err } - postIndex := iNdEx + intStringLen - if postIndex < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - if postIndex > l { + if (iNdEx + skippy) > l 
{ return io.ErrUnexpectedEOF } - m.DatabaseGCPProjectID = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 10: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseGCPInstanceID", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *RecoveryCodeUsed) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents } - if postIndex > l { + if iNdEx >= l { return io.ErrUnexpectedEOF } - m.DatabaseGCPInstanceID = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 11: + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: RecoveryCodeUsed: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: RecoveryCodeUsed: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseRoles", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -73955,29 +85444,30 @@ func (m 
*DatabaseMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.DatabaseRoles = append(m.DatabaseRoles, string(dAtA[iNdEx:postIndex])) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 12: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseType", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -73987,29 +85477,30 @@ func (m *DatabaseMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.DatabaseType = string(dAtA[iNdEx:postIndex]) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 13: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseOrigin", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -74019,23 +85510,24 @@ func (m *DatabaseMetadata) Unmarshal(dAtA []byte) error 
{ } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.DatabaseOrigin = string(dAtA[iNdEx:postIndex]) + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -74059,7 +85551,7 @@ func (m *DatabaseMetadata) Unmarshal(dAtA []byte) error { } return nil } -func (m *DatabaseCreate) Unmarshal(dAtA []byte) error { +func (m *WindowsDesktopSessionEnd) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -74082,10 +85574,10 @@ func (m *DatabaseCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: DatabaseCreate: wiretype end group for non-group") + return fmt.Errorf("proto: WindowsDesktopSessionEnd: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: DatabaseCreate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: WindowsDesktopSessionEnd: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -74156,7 +85648,7 @@ func (m *DatabaseCreate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -74183,15 +85675,15 @@ func (m *DatabaseCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := 
m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field WindowsDesktopService", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -74201,81 +85693,93 @@ func (m *DatabaseCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.WindowsDesktopService = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DesktopAddr", wireType) } - if (skippy < 0) || (iNdEx+skippy) < 0 { + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - if (iNdEx + skippy) > l { + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *DatabaseUpdate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents + m.DesktopAddr = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Domain", wireType) } - if iNdEx >= l { - return io.ErrUnexpectedEOF + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: DatabaseUpdate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: DatabaseUpdate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Domain = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field WindowsUser", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -74285,28 +85789,27 @@ func (m *DatabaseUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= 
uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.WindowsUser = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DesktopLabels", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -74333,13 +85836,107 @@ func (m *DatabaseUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + if m.DesktopLabels == nil { + m.DesktopLabels = make(map[string]string) + } + var mapkey string + var mapvalue string + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthEvents + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey < 0 { + return ErrInvalidLengthEvents + } + if postStringIndexmapkey > l { + return 
io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var stringLenmapvalue uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapvalue |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapvalue := int(stringLenmapvalue) + if intStringLenmapvalue < 0 { + return ErrInvalidLengthEvents + } + postStringIndexmapvalue := iNdEx + intStringLenmapvalue + if postStringIndexmapvalue < 0 { + return ErrInvalidLengthEvents + } + if postStringIndexmapvalue > l { + return io.ErrUnexpectedEOF + } + mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) + iNdEx = postStringIndexmapvalue + } else { + iNdEx = entryPreIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } } + m.DesktopLabels[mapkey] = mapvalue iNdEx = postIndex - case 3: + case 9: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StartTime", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -74366,13 +85963,13 @@ func (m *DatabaseUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.StartTime, dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: + case 10: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field EndTime", wireType) } var msglen int for 
shift := uint(0); ; shift += 7 { @@ -74399,66 +85996,15 @@ func (m *DatabaseUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.EndTime, dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *DatabaseDelete) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: DatabaseDelete: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: DatabaseDelete: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 11: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DesktopName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -74468,30 +86014,49 @@ func (m *DatabaseDelete) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 
0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.DesktopName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: + case 12: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Recorded", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.Recorded = bool(v != 0) + case 13: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Participants", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -74501,28 +86066,27 @@ func (m *DatabaseDelete) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Participants = append(m.Participants, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 3: + case 14: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for 
field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -74549,7 +86113,7 @@ func (m *DatabaseDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -74575,7 +86139,7 @@ func (m *DatabaseDelete) Unmarshal(dAtA []byte) error { } return nil } -func (m *DatabaseSessionStart) Unmarshal(dAtA []byte) error { +func (m *CertificateCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -74598,10 +86162,10 @@ func (m *DatabaseSessionStart) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: DatabaseSessionStart: wiretype end group for non-group") + return fmt.Errorf("proto: CertificateCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: DatabaseSessionStart: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: CertificateCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -74639,9 +86203,9 @@ func (m *DatabaseSessionStart) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CertificateType", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -74651,28 +86215,27 @@ func (m *DatabaseSessionStart) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := 
iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.CertificateType = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Identity", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -74699,13 +86262,16 @@ func (m *DatabaseSessionStart) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.Identity == nil { + m.Identity = &Identity{} + } + if err := m.Identity.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ClientMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -74732,13 +86298,13 @@ func (m *DatabaseSessionStart) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ClientMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CertificateAuthority", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -74765,15 +86331,69 @@ func (m *DatabaseSessionStart) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if 
m.CertificateAuthority == nil { + m.CertificateAuthority = &CertificateAuthority{} + } + if err := m.CertificateAuthority.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 6: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *CertificateAuthority) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: CertificateAuthority: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: CertificateAuthority: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -74783,30 +86403,29 @@ func (m *DatabaseSessionStart) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := 
iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Type = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 7: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Domain", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -74816,49 +86435,29 @@ func (m *DatabaseSessionStart) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Domain = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 8: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field PostgresPID", wireType) - } - m.PostgresPID = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.PostgresPID |= uint32(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 9: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ClientMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SubjectKeyID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -74868,24 +86467,23 @@ func (m 
*DatabaseSessionStart) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ClientMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.SubjectKeyID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -74909,7 +86507,7 @@ func (m *DatabaseSessionStart) Unmarshal(dAtA []byte) error { } return nil } -func (m *DatabaseSessionQuery) Unmarshal(dAtA []byte) error { +func (m *RenewableCertificateGenerationMismatch) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -74932,10 +86530,10 @@ func (m *DatabaseSessionQuery) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: DatabaseSessionQuery: wiretype end group for non-group") + return fmt.Errorf("proto: RenewableCertificateGenerationMismatch: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: DatabaseSessionQuery: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: RenewableCertificateGenerationMismatch: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -75004,9 +86602,60 @@ func (m *DatabaseSessionQuery) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 3: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *BotJoin) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: BotJoin: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: BotJoin: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -75033,13 +86682,13 @@ func (m *DatabaseSessionQuery) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -75066,13 +86715,13 @@ func (m *DatabaseSessionQuery) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field 
DatabaseQuery", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field BotName", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -75100,11 +86749,11 @@ func (m *DatabaseSessionQuery) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.DatabaseQuery = string(dAtA[iNdEx:postIndex]) + m.BotName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 6: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseQueryParameters", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Method", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -75132,13 +86781,13 @@ func (m *DatabaseSessionQuery) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.DatabaseQueryParameters = append(m.DatabaseQueryParameters, string(dAtA[iNdEx:postIndex])) + m.Method = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 7: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TokenName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -75148,79 +86797,27 @@ func (m *DatabaseSessionQuery) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.TokenName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - 
if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *DatabaseSessionCommandResult) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: DatabaseSessionCommandResult: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: DatabaseSessionCommandResult: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Attributes", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -75247,48 +86844,18 @@ func (m *DatabaseSessionCommandResult) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } 
- postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF + if m.Attributes == nil { + m.Attributes = &Struct{} } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Attributes.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 3: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -75298,28 +86865,27 @@ func (m *DatabaseSessionCommandResult) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.UserName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -75346,15 +86912,15 @@ func (m *DatabaseSessionCommandResult) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: + case 9: if wireType != 2 { - return 
fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field BotInstanceID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -75364,44 +86930,24 @@ func (m *DatabaseSessionCommandResult) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.BotInstanceID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 6: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field AffectedRecords", wireType) - } - m.AffectedRecords = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.AffectedRecords |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -75424,7 +86970,7 @@ func (m *DatabaseSessionCommandResult) Unmarshal(dAtA []byte) error { } return nil } -func (m *DatabasePermissionUpdate) Unmarshal(dAtA []byte) error { +func (m *InstanceJoin) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -75447,10 +86993,10 @@ func (m *DatabasePermissionUpdate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: DatabasePermissionUpdate: wiretype end group for non-group") + return fmt.Errorf("proto: InstanceJoin: wiretype end group for non-group") } if 
fieldNum <= 0 { - return fmt.Errorf("proto: DatabasePermissionUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: InstanceJoin: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -75488,7 +87034,7 @@ func (m *DatabasePermissionUpdate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -75515,15 +87061,15 @@ func (m *DatabasePermissionUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field HostID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -75533,30 +87079,29 @@ func (m *DatabasePermissionUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.HostID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: 
wrong wireType = %d for field NodeName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -75566,30 +87111,29 @@ func (m *DatabasePermissionUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.NodeName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PermissionSummary", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Role", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -75599,31 +87143,29 @@ func (m *DatabasePermissionUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.PermissionSummary = append(m.PermissionSummary, DatabasePermissionEntry{}) - if err := m.PermissionSummary[len(m.PermissionSummary)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Role = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AffectedObjectCounts", 
wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Method", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -75633,104 +87175,157 @@ func (m *DatabasePermissionUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if m.AffectedObjectCounts == nil { - m.AffectedObjectCounts = make(map[string]int32) + m.Method = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field TokenName", wireType) } - var mapkey string - var mapvalue int32 - for iNdEx < postIndex { - entryPreIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents } - fieldNum := int32(wire >> 3) - if fieldNum == 1 { - var stringLenmapkey uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapkey |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapkey := int(stringLenmapkey) - if intStringLenmapkey < 0 { - return ErrInvalidLengthEvents - } - postStringIndexmapkey := iNdEx + intStringLenmapkey - if postStringIndexmapkey < 0 { - return ErrInvalidLengthEvents - } - if 
postStringIndexmapkey > l { - return io.ErrUnexpectedEOF - } - mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) - iNdEx = postStringIndexmapkey - } else if fieldNum == 2 { - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - mapvalue |= int32(b&0x7F) << shift - if b < 0x80 { - break - } - } - } else { - iNdEx = entryPreIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > postIndex { - return io.ErrUnexpectedEOF - } - iNdEx += skippy + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.TokenName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 8: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Attributes", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Attributes == nil { + m.Attributes = &Struct{} + } + if err := m.Attributes.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 9: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field TokenExpires", wireType) + } 
+ var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break } } - m.AffectedObjectCounts[mapkey] = mapvalue + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.TokenExpires, dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 10: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -75754,7 +87349,7 @@ func (m *DatabasePermissionUpdate) Unmarshal(dAtA []byte) error { } return nil } -func (m *DatabasePermissionEntry) Unmarshal(dAtA []byte) error { +func (m *Unknown) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -75777,15 +87372,48 @@ func (m *DatabasePermissionEntry) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: DatabasePermissionEntry: wiretype end group for non-group") + return fmt.Errorf("proto: Unknown: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: 
DatabasePermissionEntry: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: Unknown: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Permission", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UnknownType", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -75813,13 +87441,13 @@ func (m *DatabasePermissionEntry) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Permission = string(dAtA[iNdEx:postIndex]) + m.UnknownType = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Counts", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UnknownCode", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -75829,104 +87457,55 @@ func (m *DatabasePermissionEntry) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - 
postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if m.Counts == nil { - m.Counts = make(map[string]int32) + m.UnknownCode = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Data", wireType) } - var mapkey string - var mapvalue int32 - for iNdEx < postIndex { - entryPreIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents } - fieldNum := int32(wire >> 3) - if fieldNum == 1 { - var stringLenmapkey uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapkey |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapkey := int(stringLenmapkey) - if intStringLenmapkey < 0 { - return ErrInvalidLengthEvents - } - postStringIndexmapkey := iNdEx + intStringLenmapkey - if postStringIndexmapkey < 0 { - return ErrInvalidLengthEvents - } - if postStringIndexmapkey > l { - return io.ErrUnexpectedEOF - } - mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) - iNdEx = postStringIndexmapkey - } else if fieldNum == 2 { - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - mapvalue |= int32(b&0x7F) << shift - if b < 0x80 { - break - } - } - } else { - iNdEx = entryPreIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - 
return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > postIndex { - return io.ErrUnexpectedEOF - } - iNdEx += skippy + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break } } - m.Counts[mapkey] = mapvalue + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Data = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -75950,7 +87529,7 @@ func (m *DatabasePermissionEntry) Unmarshal(dAtA []byte) error { } return nil } -func (m *DatabaseUserCreate) Unmarshal(dAtA []byte) error { +func (m *DeviceMetadata) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -75973,17 +87552,17 @@ func (m *DatabaseUserCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: DatabaseUserCreate: wiretype end group for non-group") + return fmt.Errorf("proto: DeviceMetadata: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: DatabaseUserCreate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DeviceMetadata: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DeviceId", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -75993,30 +87572,29 @@ func (m *DatabaseUserCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := 
int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.DeviceId = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field OsType", wireType) } - var msglen int + m.OsType = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -76026,30 +87604,16 @@ func (m *DatabaseUserCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.OsType |= OSType(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AssetTag", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -76059,30 +87623,29 @@ func (m *DatabaseUserCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { 
return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.AssetTag = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CredentialId", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -76092,30 +87655,29 @@ func (m *DatabaseUserCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.CredentialId = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field DeviceOrigin", wireType) } - var msglen int + m.DeviceOrigin = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -76125,30 +87687,16 @@ func (m *DatabaseUserCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.DeviceOrigin |= DeviceOrigin(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = 
postIndex case 6: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Username", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field WebAuthentication", wireType) } - var stringLen uint64 + var v int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -76158,27 +87706,15 @@ func (m *DatabaseUserCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + v |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Username = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 7: + m.WebAuthentication = bool(v != 0) + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field WebAuthenticationId", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -76206,7 +87742,7 @@ func (m *DatabaseUserCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Roles = append(m.Roles, string(dAtA[iNdEx:postIndex])) + m.WebAuthenticationId = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -76230,7 +87766,7 @@ func (m *DatabaseUserCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *DatabaseUserDeactivate) Unmarshal(dAtA []byte) error { +func (m *DeviceEvent) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -76253,10 +87789,10 @@ func (m *DatabaseUserDeactivate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: DatabaseUserDeactivate: wiretype end group for non-group") + return fmt.Errorf("proto: 
DeviceEvent: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: DatabaseUserDeactivate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DeviceEvent: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -76294,7 +87830,7 @@ func (m *DatabaseUserDeactivate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -76321,13 +87857,16 @@ func (m *DatabaseUserDeactivate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.Status == nil { + m.Status = &Status{} + } + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Device", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -76354,13 +87893,16 @@ func (m *DatabaseUserDeactivate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.Device == nil { + m.Device = &DeviceMetadata{} + } + if err := m.Device.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field User", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -76387,95 +87929,13 @@ func (m *DatabaseUserDeactivate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err 
:= m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF + if m.User == nil { + m.User = &UserMetadata{} } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.User.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 6: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Username", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Username = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 7: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Delete", wireType) - } - var v int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - v |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - m.Delete = bool(v != 0) default: iNdEx = preIndex skippy, err := 
skipEvents(dAtA[iNdEx:]) @@ -76498,7 +87958,7 @@ func (m *DatabaseUserDeactivate) Unmarshal(dAtA []byte) error { } return nil } -func (m *PostgresParse) Unmarshal(dAtA []byte) error { +func (m *DeviceEvent2) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -76521,10 +87981,10 @@ func (m *PostgresParse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: PostgresParse: wiretype end group for non-group") + return fmt.Errorf("proto: DeviceEvent2: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: PostgresParse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DeviceEvent2: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -76556,46 +88016,13 @@ func (m *PostgresParse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Device", wireType) } var 
msglen int for shift := uint(0); ; shift += 7 { @@ -76622,13 +88049,16 @@ func (m *PostgresParse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.Device == nil { + m.Device = &DeviceMetadata{} + } + if err := m.Device.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -76655,47 +88085,15 @@ func (m *PostgresParse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field StatementName", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.StatementName = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Query", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -76705,23 
+88103,24 @@ func (m *PostgresParse) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Query = string(dAtA[iNdEx:postIndex]) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -76745,7 +88144,7 @@ func (m *PostgresParse) Unmarshal(dAtA []byte) error { } return nil } -func (m *PostgresBind) Unmarshal(dAtA []byte) error { +func (m *DiscoveryConfigCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -76768,10 +88167,10 @@ func (m *PostgresBind) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: PostgresBind: wiretype end group for non-group") + return fmt.Errorf("proto: DiscoveryConfigCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: PostgresBind: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DiscoveryConfigCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -76842,7 +88241,7 @@ func (m *PostgresBind) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -76869,13 +88268,13 @@ func (m *PostgresBind) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err 
!= nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -76902,106 +88301,10 @@ func (m *PostgresBind) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field StatementName", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.StatementName = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 6: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PortalName", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return 
io.ErrUnexpectedEOF - } - m.PortalName = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 7: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Parameters", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Parameters = append(m.Parameters, string(dAtA[iNdEx:postIndex])) - iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -77024,7 +88327,7 @@ func (m *PostgresBind) Unmarshal(dAtA []byte) error { } return nil } -func (m *PostgresExecute) Unmarshal(dAtA []byte) error { +func (m *DiscoveryConfigUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -77047,10 +88350,10 @@ func (m *PostgresExecute) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: PostgresExecute: wiretype end group for non-group") + return fmt.Errorf("proto: DiscoveryConfigUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: PostgresExecute: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DiscoveryConfigUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -77121,7 +88424,7 @@ func (m *PostgresExecute) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var 
msglen int for shift := uint(0); ; shift += 7 { @@ -77148,13 +88451,13 @@ func (m *PostgresExecute) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -77181,42 +88484,10 @@ func (m *PostgresExecute) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PortalName", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.PortalName = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -77239,7 +88510,7 @@ func (m *PostgresExecute) Unmarshal(dAtA []byte) error { } return nil } -func (m *PostgresClose) Unmarshal(dAtA []byte) error { +func (m *DiscoveryConfigDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -77262,10 +88533,10 @@ func (m 
*PostgresClose) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: PostgresClose: wiretype end group for non-group") + return fmt.Errorf("proto: DiscoveryConfigDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: PostgresClose: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DiscoveryConfigDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -77336,7 +88607,40 @@ func (m *PostgresClose) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -77363,13 +88667,64 @@ func (m *PostgresClose) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } 
+ if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *DiscoveryConfigDeleteAll) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: DiscoveryConfigDeleteAll: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: DiscoveryConfigDeleteAll: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -77396,15 +88751,15 @@ func (m *PostgresClose) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field StatementName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -77414,29 +88769,30 @@ func (m *PostgresClose) Unmarshal(dAtA []byte) error { } b := 
dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.StatementName = string(dAtA[iNdEx:postIndex]) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 6: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PortalName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -77446,23 +88802,24 @@ func (m *PostgresClose) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.PortalName = string(dAtA[iNdEx:postIndex]) + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -77486,7 +88843,7 @@ func (m *PostgresClose) Unmarshal(dAtA []byte) error { } return nil } -func (m *PostgresFunctionCall) Unmarshal(dAtA []byte) error { +func (m *IntegrationCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -77509,10 +88866,10 @@ func (m *PostgresFunctionCall) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: 
PostgresFunctionCall: wiretype end group for non-group") + return fmt.Errorf("proto: IntegrationCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: PostgresFunctionCall: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: IntegrationCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -77583,7 +88940,7 @@ func (m *PostgresFunctionCall) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -77610,13 +88967,13 @@ func (m *PostgresFunctionCall) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field IntegrationMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -77643,34 +89000,15 @@ func (m *PostgresFunctionCall) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.IntegrationMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 5: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field FunctionOID", wireType) - } - m.FunctionOID = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.FunctionOID |= uint32(b&0x7F) << shift - if 
b < 0x80 { - break - } - } - case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field FunctionArgs", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -77680,23 +89018,24 @@ func (m *PostgresFunctionCall) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.FunctionArgs = append(m.FunctionArgs, string(dAtA[iNdEx:postIndex])) + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -77720,7 +89059,7 @@ func (m *PostgresFunctionCall) Unmarshal(dAtA []byte) error { } return nil } -func (m *WindowsDesktopSessionStart) Unmarshal(dAtA []byte) error { +func (m *IntegrationUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -77743,10 +89082,10 @@ func (m *WindowsDesktopSessionStart) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: WindowsDesktopSessionStart: wiretype end group for non-group") + return fmt.Errorf("proto: IntegrationUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: WindowsDesktopSessionStart: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: IntegrationUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -77817,7 +89156,7 @@ func (m *WindowsDesktopSessionStart) Unmarshal(dAtA []byte) 
error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -77844,13 +89183,13 @@ func (m *WindowsDesktopSessionStart) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field IntegrationMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -77877,13 +89216,13 @@ func (m *WindowsDesktopSessionStart) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.IntegrationMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -77910,47 +89249,66 @@ func (m *WindowsDesktopSessionStart) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 6: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WindowsDesktopService", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return 
ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err } - intStringLen := int(stringLen) - if intStringLen < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF } - if postIndex > l { + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *IntegrationDelete) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { return io.ErrUnexpectedEOF } - m.WindowsDesktopService = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 7: + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: IntegrationDelete: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: IntegrationDelete: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DesktopAddr", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -77960,29 +89318,30 @@ func (m *WindowsDesktopSessionStart) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift 
+ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.DesktopAddr = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 8: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Domain", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -77992,29 +89351,30 @@ func (m *WindowsDesktopSessionStart) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Domain = string(dAtA[iNdEx:postIndex]) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 9: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WindowsUser", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -78024,27 +89384,28 @@ func (m *WindowsDesktopSessionStart) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - 
intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.WindowsUser = string(dAtA[iNdEx:postIndex]) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 10: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DesktopLabels", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field IntegrationMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -78071,109 +89432,15 @@ func (m *WindowsDesktopSessionStart) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.DesktopLabels == nil { - m.DesktopLabels = make(map[string]string) - } - var mapkey string - var mapvalue string - for iNdEx < postIndex { - entryPreIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - if fieldNum == 1 { - var stringLenmapkey uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapkey |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapkey := int(stringLenmapkey) - if intStringLenmapkey < 0 { - return ErrInvalidLengthEvents - } - postStringIndexmapkey := iNdEx + intStringLenmapkey - if postStringIndexmapkey < 0 { - return ErrInvalidLengthEvents - } - if postStringIndexmapkey > l { - return io.ErrUnexpectedEOF - } - mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) - iNdEx = postStringIndexmapkey - } else if 
fieldNum == 2 { - var stringLenmapvalue uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapvalue |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapvalue := int(stringLenmapvalue) - if intStringLenmapvalue < 0 { - return ErrInvalidLengthEvents - } - postStringIndexmapvalue := iNdEx + intStringLenmapvalue - if postStringIndexmapvalue < 0 { - return ErrInvalidLengthEvents - } - if postStringIndexmapvalue > l { - return io.ErrUnexpectedEOF - } - mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) - iNdEx = postStringIndexmapvalue - } else { - iNdEx = entryPreIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > postIndex { - return io.ErrUnexpectedEOF - } - iNdEx += skippy - } + if err := m.IntegrationMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - m.DesktopLabels[mapkey] = mapvalue iNdEx = postIndex - case 11: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DesktopName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -78183,64 +89450,25 @@ func (m *WindowsDesktopSessionStart) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.DesktopName = 
string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 12: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field AllowUserCreation", wireType) - } - var v int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - v |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - m.AllowUserCreation = bool(v != 0) - case 13: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field NLA", wireType) - } - var v int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - v |= int(b&0x7F) << shift - if b < 0x80 { - break - } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - m.NLA = bool(v != 0) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -78263,7 +89491,7 @@ func (m *WindowsDesktopSessionStart) Unmarshal(dAtA []byte) error { } return nil } -func (m *DatabaseSessionEnd) Unmarshal(dAtA []byte) error { +func (m *IntegrationMetadata) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -78286,17 +89514,17 @@ func (m *DatabaseSessionEnd) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: DatabaseSessionEnd: wiretype end group for non-group") + return fmt.Errorf("proto: IntegrationMetadata: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: DatabaseSessionEnd: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: IntegrationMetadata: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d 
for field SubKind", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -78306,28 +89534,27 @@ func (m *DatabaseSessionEnd) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.SubKind = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AWSOIDC", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -78354,13 +89581,16 @@ func (m *DatabaseSessionEnd) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.AWSOIDC == nil { + m.AWSOIDC = &AWSOIDCIntegrationMetadata{} + } + if err := m.AWSOIDC.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AzureOIDC", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -78387,46 +89617,16 @@ func (m *DatabaseSessionEnd) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field 
DatabaseMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF + if m.AzureOIDC == nil { + m.AzureOIDC = &AzureOIDCIntegrationMetadata{} } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.AzureOIDC.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field StartTime", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field GitHub", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -78453,13 +89653,16 @@ func (m *DatabaseSessionEnd) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.StartTime, dAtA[iNdEx:postIndex]); err != nil { + if m.GitHub == nil { + m.GitHub = &GitHubIntegrationMetadata{} + } + if err := m.GitHub.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 6: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field EndTime", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AWSRA", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -78486,7 +89689,10 @@ func (m *DatabaseSessionEnd) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.EndTime, dAtA[iNdEx:postIndex]); err != nil { + if m.AWSRA == nil { + m.AWSRA = &AWSRAIntegrationMetadata{} + } + if err := 
m.AWSRA.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -78512,7 +89718,7 @@ func (m *DatabaseSessionEnd) Unmarshal(dAtA []byte) error { } return nil } -func (m *MFADeviceMetadata) Unmarshal(dAtA []byte) error { +func (m *AWSOIDCIntegrationMetadata) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -78535,15 +89741,15 @@ func (m *MFADeviceMetadata) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: MFADeviceMetadata: wiretype end group for non-group") + return fmt.Errorf("proto: AWSOIDCIntegrationMetadata: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: MFADeviceMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AWSOIDCIntegrationMetadata: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DeviceName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RoleARN", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -78571,43 +89777,11 @@ func (m *MFADeviceMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.DeviceName = string(dAtA[iNdEx:postIndex]) + m.RoleARN = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DeviceID", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l 
{ - return io.ErrUnexpectedEOF - } - m.DeviceID = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DeviceType", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field IssuerS3URI", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -78635,7 +89809,7 @@ func (m *MFADeviceMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.DeviceType = string(dAtA[iNdEx:postIndex]) + m.IssuerS3URI = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -78659,7 +89833,7 @@ func (m *MFADeviceMetadata) Unmarshal(dAtA []byte) error { } return nil } -func (m *MFADeviceAdd) Unmarshal(dAtA []byte) error { +func (m *AzureOIDCIntegrationMetadata) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -78682,17 +89856,17 @@ func (m *MFADeviceAdd) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: MFADeviceAdd: wiretype end group for non-group") + return fmt.Errorf("proto: AzureOIDCIntegrationMetadata: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: MFADeviceAdd: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AzureOIDCIntegrationMetadata: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TenantID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -78702,30 +89876,29 @@ func (m *MFADeviceAdd) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen 
:= int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.TenantID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ClientID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -78735,30 +89908,80 @@ func (m *MFADeviceAdd) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.ClientID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { return err } - iNdEx = postIndex - case 3: + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *GitHubIntegrationMetadata) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: GitHubIntegrationMetadata: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: GitHubIntegrationMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MFADeviceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Organization", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -78768,30 +89991,80 @@ func (m *MFADeviceAdd) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.MFADeviceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.Organization = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { return err } - iNdEx = postIndex - case 4: + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + 
return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AWSRAIntegrationMetadata) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AWSRAIntegrationMetadata: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AWSRAIntegrationMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TrustAnchorARN", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -78801,24 +90074,23 @@ func (m *MFADeviceAdd) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.TrustAnchorARN = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -78842,7 +90114,7 @@ func (m *MFADeviceAdd) Unmarshal(dAtA []byte) error { } return 
nil } -func (m *MFADeviceDelete) Unmarshal(dAtA []byte) error { +func (m *PluginCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -78865,10 +90137,10 @@ func (m *MFADeviceDelete) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: MFADeviceDelete: wiretype end group for non-group") + return fmt.Errorf("proto: PluginCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: MFADeviceDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: PluginCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -78939,7 +90211,7 @@ func (m *MFADeviceDelete) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MFADeviceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -78966,11 +90238,44 @@ func (m *MFADeviceDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.MFADeviceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field PluginMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := 
m.PluginMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } @@ -79025,7 +90330,7 @@ func (m *MFADeviceDelete) Unmarshal(dAtA []byte) error { } return nil } -func (m *BillingInformationUpdate) Unmarshal(dAtA []byte) error { +func (m *PluginUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -79048,10 +90353,10 @@ func (m *BillingInformationUpdate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: BillingInformationUpdate: wiretype end group for non-group") + return fmt.Errorf("proto: PluginUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: BillingInformationUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: PluginUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -79120,60 +90425,42 @@ func (m *BillingInformationUpdate) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF + if msglen < 0 { + return ErrInvalidLengthEvents } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *BillingCardCreate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents } - if iNdEx >= l { + if postIndex > l { return io.ErrUnexpectedEOF } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: BillingCardCreate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: BillingCardCreate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + iNdEx = postIndex + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PluginMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -79200,13 +90487,13 @@ func (m *BillingCardCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.PluginMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -79233,7 +90520,7 @@ func (m *BillingCardCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := 
m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -79259,7 +90546,7 @@ func (m *BillingCardCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *BillingCardDelete) Unmarshal(dAtA []byte) error { +func (m *PluginDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -79282,10 +90569,10 @@ func (m *BillingCardDelete) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: BillingCardDelete: wiretype end group for non-group") + return fmt.Errorf("proto: PluginDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: BillingCardDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: PluginDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -79354,60 +90641,42 @@ func (m *BillingCardDelete) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF + if msglen < 0 { + return ErrInvalidLengthEvents } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *LockCreate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents } - if iNdEx >= l { + if postIndex > l { return io.ErrUnexpectedEOF } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: LockCreate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: LockCreate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + iNdEx = postIndex + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PluginMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -79434,13 +90703,13 @@ func (m *LockCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.PluginMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -79467,15 +90736,66 @@ func (m *LockCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != 
nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 3: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *PluginMetadata) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: PluginMetadata: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: PluginMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PluginType", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -79485,30 +90805,49 @@ func (m *LockCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return 
io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + m.PluginType = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field HasCredentials", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } } - iNdEx = postIndex + m.HasCredentials = bool(v != 0) case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Target", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field ReusesCredentials", wireType) } - var msglen int + var v int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -79518,28 +90857,15 @@ func (m *LockCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + v |= int(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.Target.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex + m.ReusesCredentials = bool(v != 0) case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Lock", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PluginData", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -79566,7 +90892,10 @@ func (m *LockCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Lock.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.PluginData == nil { + m.PluginData = &Struct{} + } + if err := m.PluginData.Unmarshal(dAtA[iNdEx:postIndex]); err != 
nil { return err } iNdEx = postIndex @@ -79592,7 +90921,7 @@ func (m *LockCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *LockDelete) Unmarshal(dAtA []byte) error { +func (m *OneOf) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -79615,15 +90944,15 @@ func (m *LockDelete) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: LockDelete: wiretype end group for non-group") + return fmt.Errorf("proto: OneOf: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: LockDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: OneOf: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserLogin", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -79650,13 +90979,15 @@ func (m *LockDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &UserLogin{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_UserLogin{v} iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -79683,13 +91014,15 @@ func (m *LockDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &UserCreate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_UserCreate{v} iNdEx = postIndex case 3: if wireType != 2 { - return 
fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -79716,13 +91049,15 @@ func (m *LockDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &UserDelete{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_UserDelete{v} iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Lock", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserPasswordChange", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -79749,64 +91084,15 @@ func (m *LockDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Lock.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &UserPasswordChange{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_UserPasswordChange{v} iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *RecoveryCodeGenerate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: RecoveryCodeGenerate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: RecoveryCodeGenerate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionStart", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -79833,13 +91119,15 @@ func (m *RecoveryCodeGenerate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &SessionStart{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_SessionStart{v} iNdEx = postIndex - case 2: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionJoin", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -79866,64 +91154,15 @@ func (m *RecoveryCodeGenerate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &SessionJoin{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = 
&OneOf_SessionJoin{v} iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *RecoveryCodeUsed) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: RecoveryCodeUsed: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: RecoveryCodeUsed: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionPrint", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -79950,13 +91189,15 @@ func (m *RecoveryCodeUsed) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &SessionPrint{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_SessionPrint{v} iNdEx = postIndex - case 2: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionReject", wireType) } var msglen int for shift := uint(0); 
; shift += 7 { @@ -79983,13 +91224,15 @@ func (m *RecoveryCodeUsed) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &SessionReject{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_SessionReject{v} iNdEx = postIndex - case 3: + case 9: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Resize", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -80016,64 +91259,15 @@ func (m *RecoveryCodeUsed) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &Resize{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_Resize{v} iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *WindowsDesktopSessionEnd) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: WindowsDesktopSessionEnd: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: WindowsDesktopSessionEnd: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 10: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionEnd", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -80100,13 +91294,15 @@ func (m *WindowsDesktopSessionEnd) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &SessionEnd{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_SessionEnd{v} iNdEx = postIndex - case 2: + case 11: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionCommand", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -80133,13 +91329,15 @@ func (m *WindowsDesktopSessionEnd) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &SessionCommand{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + 
m.Event = &OneOf_SessionCommand{v} iNdEx = postIndex - case 3: + case 12: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionDisk", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -80166,15 +91364,17 @@ func (m *WindowsDesktopSessionEnd) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &SessionDisk{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_SessionDisk{v} iNdEx = postIndex - case 4: + case 13: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WindowsDesktopService", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionNetwork", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -80184,29 +91384,32 @@ func (m *WindowsDesktopSessionEnd) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.WindowsDesktopService = string(dAtA[iNdEx:postIndex]) + v := &SessionNetwork{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_SessionNetwork{v} iNdEx = postIndex - case 5: + case 14: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DesktopAddr", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionData", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift 
+= 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -80216,29 +91419,32 @@ func (m *WindowsDesktopSessionEnd) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.DesktopAddr = string(dAtA[iNdEx:postIndex]) + v := &SessionData{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_SessionData{v} iNdEx = postIndex - case 6: + case 15: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Domain", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionLeave", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -80248,29 +91454,32 @@ func (m *WindowsDesktopSessionEnd) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Domain = string(dAtA[iNdEx:postIndex]) + v := &SessionLeave{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_SessionLeave{v} iNdEx = postIndex - case 7: + case 16: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WindowsUser", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PortForward", wireType) } - var stringLen uint64 + var msglen int for 
shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -80280,27 +91489,30 @@ func (m *WindowsDesktopSessionEnd) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.WindowsUser = string(dAtA[iNdEx:postIndex]) + v := &PortForward{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_PortForward{v} iNdEx = postIndex - case 8: + case 17: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DesktopLabels", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field X11Forward", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -80327,107 +91539,15 @@ func (m *WindowsDesktopSessionEnd) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.DesktopLabels == nil { - m.DesktopLabels = make(map[string]string) - } - var mapkey string - var mapvalue string - for iNdEx < postIndex { - entryPreIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - if fieldNum == 1 { - var stringLenmapkey uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapkey |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapkey := int(stringLenmapkey) - if intStringLenmapkey < 0 { - 
return ErrInvalidLengthEvents - } - postStringIndexmapkey := iNdEx + intStringLenmapkey - if postStringIndexmapkey < 0 { - return ErrInvalidLengthEvents - } - if postStringIndexmapkey > l { - return io.ErrUnexpectedEOF - } - mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) - iNdEx = postStringIndexmapkey - } else if fieldNum == 2 { - var stringLenmapvalue uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapvalue |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapvalue := int(stringLenmapvalue) - if intStringLenmapvalue < 0 { - return ErrInvalidLengthEvents - } - postStringIndexmapvalue := iNdEx + intStringLenmapvalue - if postStringIndexmapvalue < 0 { - return ErrInvalidLengthEvents - } - if postStringIndexmapvalue > l { - return io.ErrUnexpectedEOF - } - mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) - iNdEx = postStringIndexmapvalue - } else { - iNdEx = entryPreIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > postIndex { - return io.ErrUnexpectedEOF - } - iNdEx += skippy - } + v := &X11Forward{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - m.DesktopLabels[mapkey] = mapvalue + m.Event = &OneOf_X11Forward{v} iNdEx = postIndex - case 9: + case 18: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field StartTime", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SCP", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -80454,13 +91574,15 @@ func (m *WindowsDesktopSessionEnd) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.StartTime, dAtA[iNdEx:postIndex]); err != nil { + v := 
&SCP{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_SCP{v} iNdEx = postIndex - case 10: + case 19: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field EndTime", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Exec", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -80487,15 +91609,17 @@ func (m *WindowsDesktopSessionEnd) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.EndTime, dAtA[iNdEx:postIndex]); err != nil { + v := &Exec{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_Exec{v} iNdEx = postIndex - case 11: + case 20: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DesktopName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Subsystem", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -80505,49 +91629,32 @@ func (m *WindowsDesktopSessionEnd) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { - return io.ErrUnexpectedEOF - } - m.DesktopName = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 12: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Recorded", wireType) - } - var v int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - v |= int(b&0x7F) << shift - if b < 0x80 { - break - } + 
return io.ErrUnexpectedEOF } - m.Recorded = bool(v != 0) - case 13: + v := &Subsystem{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_Subsystem{v} + iNdEx = postIndex + case 21: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Participants", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ClientDisconnect", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -80557,78 +91664,30 @@ func (m *WindowsDesktopSessionEnd) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Participants = append(m.Participants, string(dAtA[iNdEx:postIndex])) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + v := &ClientDisconnect{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *CertificateCreate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: CertificateCreate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: CertificateCreate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + m.Event = &OneOf_ClientDisconnect{v} + iNdEx = postIndex + case 22: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AuthAttempt", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -80655,15 +91714,17 @@ func (m *CertificateCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &AuthAttempt{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_AuthAttempt{v} iNdEx = postIndex - case 2: + case 23: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CertificateType", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AccessRequestCreate", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -80673,27 +91734,30 @@ func (m *CertificateCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } 
- intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.CertificateType = string(dAtA[iNdEx:postIndex]) + v := &AccessRequestCreate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_AccessRequestCreate{v} iNdEx = postIndex - case 3: + case 24: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Identity", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserTokenCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -80720,16 +91784,15 @@ func (m *CertificateCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Identity == nil { - m.Identity = &Identity{} - } - if err := m.Identity.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &UserTokenCreate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_UserTokenCreate{v} iNdEx = postIndex - case 4: + case 25: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ClientMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RoleCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -80756,64 +91819,15 @@ func (m *CertificateCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ClientMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &RoleCreate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_RoleCreate{v} iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - 
return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *RenewableCertificateGenerationMismatch) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: RenewableCertificateGenerationMismatch: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: RenewableCertificateGenerationMismatch: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 26: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RoleDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -80840,13 +91854,15 @@ func (m *RenewableCertificateGenerationMismatch) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &RoleDelete{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_RoleDelete{v} iNdEx = postIndex - case 2: + case 27: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TrustedClusterCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -80873,64 +91889,15 @@ func (m *RenewableCertificateGenerationMismatch) Unmarshal(dAtA []byte) error { if postIndex > l { return 
io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &TrustedClusterCreate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_TrustedClusterCreate{v} iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *BotJoin) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: BotJoin: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: BotJoin: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 28: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TrustedClusterDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -80957,13 +91924,15 @@ func (m *BotJoin) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &TrustedClusterDelete{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_TrustedClusterDelete{v} iNdEx = postIndex - case 2: + case 29: if wireType 
!= 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TrustedClusterTokenCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -80990,15 +91959,17 @@ func (m *BotJoin) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &TrustedClusterTokenCreate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_TrustedClusterTokenCreate{v} iNdEx = postIndex - case 3: + case 30: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field BotName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field GithubConnectorCreate", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -81008,29 +91979,32 @@ func (m *BotJoin) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.BotName = string(dAtA[iNdEx:postIndex]) + v := &GithubConnectorCreate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_GithubConnectorCreate{v} iNdEx = postIndex - case 4: + case 31: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Method", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field GithubConnectorDelete", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -81040,29 +92014,32 @@ func (m *BotJoin) 
Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Method = string(dAtA[iNdEx:postIndex]) + v := &GithubConnectorDelete{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_GithubConnectorDelete{v} iNdEx = postIndex - case 5: + case 32: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TokenName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field OIDCConnectorCreate", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -81072,27 +92049,30 @@ func (m *BotJoin) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.TokenName = string(dAtA[iNdEx:postIndex]) + v := &OIDCConnectorCreate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_OIDCConnectorCreate{v} iNdEx = postIndex - case 6: + case 33: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Attributes", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field OIDCConnectorDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -81119,18 +92099,17 @@ func (m *BotJoin) Unmarshal(dAtA []byte) error { 
if postIndex > l { return io.ErrUnexpectedEOF } - if m.Attributes == nil { - m.Attributes = &Struct{} - } - if err := m.Attributes.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &OIDCConnectorDelete{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_OIDCConnectorDelete{v} iNdEx = postIndex - case 7: + case 34: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SAMLConnectorCreate", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -81140,27 +92119,30 @@ func (m *BotJoin) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.UserName = string(dAtA[iNdEx:postIndex]) + v := &SAMLConnectorCreate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_SAMLConnectorCreate{v} iNdEx = postIndex - case 8: + case 35: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SAMLConnectorDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -81187,15 +92169,17 @@ func (m *BotJoin) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &SAMLConnectorDelete{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_SAMLConnectorDelete{v} iNdEx = postIndex - 
case 9: + case 36: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field BotInstanceID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field KubeRequest", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -81205,78 +92189,30 @@ func (m *BotJoin) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.BotInstanceID = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + v := &KubeRequest{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *InstanceJoin) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: InstanceJoin: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: InstanceJoin: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + m.Event = &OneOf_KubeRequest{v} + iNdEx = postIndex + case 37: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppSessionStart", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -81303,13 +92239,15 @@ func (m *InstanceJoin) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &AppSessionStart{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_AppSessionStart{v} iNdEx = postIndex - case 2: + case 38: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppSessionChunk", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -81336,15 +92274,17 @@ func (m *InstanceJoin) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &AppSessionChunk{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + 
m.Event = &OneOf_AppSessionChunk{v} iNdEx = postIndex - case 3: + case 39: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field HostID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppSessionRequest", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -81354,29 +92294,32 @@ func (m *InstanceJoin) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.HostID = string(dAtA[iNdEx:postIndex]) + v := &AppSessionRequest{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_AppSessionRequest{v} iNdEx = postIndex - case 4: + case 40: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field NodeName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseSessionStart", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -81386,29 +92329,32 @@ func (m *InstanceJoin) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.NodeName = string(dAtA[iNdEx:postIndex]) + v := &DatabaseSessionStart{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); 
err != nil { + return err + } + m.Event = &OneOf_DatabaseSessionStart{v} iNdEx = postIndex - case 5: + case 41: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Role", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseSessionEnd", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -81418,29 +92364,32 @@ func (m *InstanceJoin) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Role = string(dAtA[iNdEx:postIndex]) + v := &DatabaseSessionEnd{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_DatabaseSessionEnd{v} iNdEx = postIndex - case 6: + case 42: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Method", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseSessionQuery", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -81450,29 +92399,32 @@ func (m *InstanceJoin) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Method = string(dAtA[iNdEx:postIndex]) + v := &DatabaseSessionQuery{} + if err := 
v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_DatabaseSessionQuery{v} iNdEx = postIndex - case 7: + case 43: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TokenName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionUpload", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -81482,27 +92434,30 @@ func (m *InstanceJoin) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.TokenName = string(dAtA[iNdEx:postIndex]) + v := &SessionUpload{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_SessionUpload{v} iNdEx = postIndex - case 8: + case 44: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Attributes", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MFADeviceAdd", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -81529,16 +92484,15 @@ func (m *InstanceJoin) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Attributes == nil { - m.Attributes = &Struct{} - } - if err := m.Attributes.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &MFADeviceAdd{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_MFADeviceAdd{v} iNdEx = postIndex - case 9: + case 45: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TokenExpires", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MFADeviceDelete", 
wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -81565,13 +92519,15 @@ func (m *InstanceJoin) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.TokenExpires, dAtA[iNdEx:postIndex]); err != nil { + v := &MFADeviceDelete{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_MFADeviceDelete{v} iNdEx = postIndex - case 10: + case 46: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field BillingInformationUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -81598,64 +92554,15 @@ func (m *InstanceJoin) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &BillingInformationUpdate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_BillingInformationUpdate{v} iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *Unknown) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: Unknown: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: Unknown: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 47: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field BillingCardCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -81682,15 +92589,17 @@ func (m *Unknown) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &BillingCardCreate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_BillingCardCreate{v} iNdEx = postIndex - case 2: + case 48: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UnknownType", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field BillingCardDelete", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -81700,29 +92609,32 @@ func (m *Unknown) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return 
ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.UnknownType = string(dAtA[iNdEx:postIndex]) + v := &BillingCardDelete{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_BillingCardDelete{v} iNdEx = postIndex - case 3: + case 49: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UnknownCode", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field LockCreate", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -81732,29 +92644,32 @@ func (m *Unknown) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.UnknownCode = string(dAtA[iNdEx:postIndex]) + v := &LockCreate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_LockCreate{v} iNdEx = postIndex - case 4: + case 50: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Data", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field LockDelete", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -81764,80 +92679,32 @@ func (m *Unknown) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return 
ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Data = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + v := &LockDelete{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *DeviceMetadata) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: DeviceMetadata: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: DeviceMetadata: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + m.Event = &OneOf_LockDelete{v} + iNdEx = postIndex + case 51: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DeviceId", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RecoveryCodeGenerate", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -81847,29 +92714,32 @@ func (m *DeviceMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift 
if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.DeviceId = string(dAtA[iNdEx:postIndex]) + v := &RecoveryCodeGenerate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_RecoveryCodeGenerate{v} iNdEx = postIndex - case 2: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field OsType", wireType) + case 52: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field RecoveryCodeUsed", wireType) } - m.OsType = 0 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -81879,16 +92749,32 @@ func (m *DeviceMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.OsType |= OSType(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - case 3: + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + v := &RecoveryCodeUsed{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_RecoveryCodeUsed{v} + iNdEx = postIndex + case 53: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AssetTag", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseCreate", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -81898,29 +92784,32 @@ func (m *DeviceMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if 
intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.AssetTag = string(dAtA[iNdEx:postIndex]) + v := &DatabaseCreate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_DatabaseCreate{v} iNdEx = postIndex - case 4: + case 54: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CredentialId", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseUpdate", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -81930,29 +92819,32 @@ func (m *DeviceMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.CredentialId = string(dAtA[iNdEx:postIndex]) + v := &DatabaseUpdate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_DatabaseUpdate{v} iNdEx = postIndex - case 5: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field DeviceOrigin", wireType) + case 55: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseDelete", wireType) } - m.DeviceOrigin = 0 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -81962,16 +92854,32 @@ func (m *DeviceMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.DeviceOrigin |= DeviceOrigin(b&0x7F) << shift + msglen |= int(b&0x7F) << shift 
if b < 0x80 { break } } - case 6: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field WebAuthentication", wireType) + if msglen < 0 { + return ErrInvalidLengthEvents } - var v int + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + v := &DatabaseDelete{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_DatabaseDelete{v} + iNdEx = postIndex + case 56: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AppCreate", wireType) + } + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -81981,17 +92889,32 @@ func (m *DeviceMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - v |= int(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - m.WebAuthentication = bool(v != 0) - case 8: + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + v := &AppCreate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_AppCreate{v} + iNdEx = postIndex + case 57: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WebAuthenticationId", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppUpdate", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -82001,78 +92924,30 @@ func (m *DeviceMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := 
iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.WebAuthenticationId = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + v := &AppUpdate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *DeviceEvent) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: DeviceEvent: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: DeviceEvent: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + m.Event = &OneOf_AppUpdate{v} + iNdEx = postIndex + case 58: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -82099,13 +92974,15 @@ func (m *DeviceEvent) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &AppDelete{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_AppDelete{v} iNdEx = 
postIndex - case 2: + case 59: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field WindowsDesktopSessionStart", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -82132,16 +93009,15 @@ func (m *DeviceEvent) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Status == nil { - m.Status = &Status{} - } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &WindowsDesktopSessionStart{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_WindowsDesktopSessionStart{v} iNdEx = postIndex - case 3: + case 60: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Device", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field WindowsDesktopSessionEnd", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -82168,16 +93044,15 @@ func (m *DeviceEvent) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Device == nil { - m.Device = &DeviceMetadata{} - } - if err := m.Device.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &WindowsDesktopSessionEnd{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_WindowsDesktopSessionEnd{v} iNdEx = postIndex - case 4: + case 61: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field User", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PostgresParse", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -82199,72 +93074,20 @@ func (m *DeviceEvent) Unmarshal(dAtA []byte) error { } postIndex := iNdEx + msglen if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if m.User == nil { - m.User = &UserMetadata{} - } - if err := m.User.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } 
- iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *DeviceEvent2) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: DeviceEvent2: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: DeviceEvent2: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + v := &PostgresParse{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_PostgresParse{v} + iNdEx = postIndex + case 62: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PostgresBind", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -82291,13 +93114,15 @@ func (m *DeviceEvent2) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &PostgresBind{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_PostgresBind{v} iNdEx = postIndex - case 3: + case 63: if 
wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Device", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PostgresExecute", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -82324,16 +93149,15 @@ func (m *DeviceEvent2) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Device == nil { - m.Device = &DeviceMetadata{} - } - if err := m.Device.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &PostgresExecute{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_PostgresExecute{v} iNdEx = postIndex - case 5: + case 64: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PostgresClose", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -82360,13 +93184,15 @@ func (m *DeviceEvent2) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &PostgresClose{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_PostgresClose{v} iNdEx = postIndex - case 6: + case 65: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PostgresFunctionCall", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -82393,64 +93219,15 @@ func (m *DeviceEvent2) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &PostgresFunctionCall{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_PostgresFunctionCall{v} iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if 
(skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *DiscoveryConfigCreate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: DiscoveryConfigCreate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: DiscoveryConfigCreate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 66: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AccessRequestDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -82477,13 +93254,15 @@ func (m *DiscoveryConfigCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &AccessRequestDelete{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_AccessRequestDelete{v} iNdEx = postIndex - case 2: + case 67: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionConnect", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -82510,13 +93289,15 @@ func (m *DiscoveryConfigCreate) Unmarshal(dAtA []byte) error { if 
postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &SessionConnect{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_SessionConnect{v} iNdEx = postIndex - case 3: + case 68: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CertificateCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -82543,13 +93324,15 @@ func (m *DiscoveryConfigCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &CertificateCreate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_CertificateCreate{v} iNdEx = postIndex - case 4: + case 69: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DesktopRecording", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -82576,64 +93359,15 @@ func (m *DiscoveryConfigCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &DesktopRecording{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_DesktopRecording{v} iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *DiscoveryConfigUpdate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: DiscoveryConfigUpdate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: DiscoveryConfigUpdate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 70: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DesktopClipboardSend", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -82660,13 +93394,15 @@ func (m *DiscoveryConfigUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &DesktopClipboardSend{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_DesktopClipboardSend{v} iNdEx = postIndex - case 2: + case 71: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DesktopClipboardReceive", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -82693,13 +93429,15 @@ func (m *DiscoveryConfigUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &DesktopClipboardReceive{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); 
err != nil { return err } + m.Event = &OneOf_DesktopClipboardReceive{v} iNdEx = postIndex - case 3: + case 72: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MySQLStatementPrepare", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -82726,13 +93464,15 @@ func (m *DiscoveryConfigUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &MySQLStatementPrepare{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_MySQLStatementPrepare{v} iNdEx = postIndex - case 4: + case 73: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MySQLStatementExecute", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -82759,64 +93499,15 @@ func (m *DiscoveryConfigUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &MySQLStatementExecute{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_MySQLStatementExecute{v} iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *DiscoveryConfigDelete) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: DiscoveryConfigDelete: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: DiscoveryConfigDelete: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 74: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MySQLStatementSendLongData", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -82843,13 +93534,15 @@ func (m *DiscoveryConfigDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &MySQLStatementSendLongData{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_MySQLStatementSendLongData{v} iNdEx = postIndex - case 2: + case 75: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MySQLStatementClose", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -82876,13 +93569,15 @@ func (m *DiscoveryConfigDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &MySQLStatementClose{} + if err := 
v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_MySQLStatementClose{v} iNdEx = postIndex - case 3: + case 76: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MySQLStatementReset", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -82909,13 +93604,15 @@ func (m *DiscoveryConfigDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &MySQLStatementReset{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_MySQLStatementReset{v} iNdEx = postIndex - case 4: + case 77: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MySQLStatementFetch", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -82942,64 +93639,15 @@ func (m *DiscoveryConfigDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &MySQLStatementFetch{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_MySQLStatementFetch{v} iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *DiscoveryConfigDeleteAll) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: DiscoveryConfigDeleteAll: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: DiscoveryConfigDeleteAll: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 78: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MySQLStatementBulkExecute", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -83026,13 +93674,15 @@ func (m *DiscoveryConfigDeleteAll) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &MySQLStatementBulkExecute{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_MySQLStatementBulkExecute{v} iNdEx = postIndex - case 2: + case 79: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RenewableCertificateGenerationMismatch", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -83059,13 +93709,15 @@ func (m *DiscoveryConfigDeleteAll) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := 
&RenewableCertificateGenerationMismatch{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_RenewableCertificateGenerationMismatch{v} iNdEx = postIndex - case 3: + case 80: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Unknown", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -83092,64 +93744,15 @@ func (m *DiscoveryConfigDeleteAll) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &Unknown{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_Unknown{v} iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *IntegrationCreate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: IntegrationCreate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: IntegrationCreate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 81: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MySQLInitDB", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -83176,13 +93779,15 @@ func (m *IntegrationCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &MySQLInitDB{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_MySQLInitDB{v} iNdEx = postIndex - case 2: + case 82: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MySQLCreateDB", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -83209,13 +93814,15 @@ func (m *IntegrationCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &MySQLCreateDB{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_MySQLCreateDB{v} iNdEx 
= postIndex - case 3: + case 83: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MySQLDropDB", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -83242,13 +93849,15 @@ func (m *IntegrationCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &MySQLDropDB{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_MySQLDropDB{v} iNdEx = postIndex - case 4: + case 84: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field IntegrationMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MySQLShutDown", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -83275,13 +93884,15 @@ func (m *IntegrationCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.IntegrationMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &MySQLShutDown{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_MySQLShutDown{v} iNdEx = postIndex - case 5: + case 85: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MySQLProcessKill", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -83308,64 +93919,15 @@ func (m *IntegrationCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + v := &MySQLProcessKill{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || 
(iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *IntegrationUpdate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: IntegrationUpdate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: IntegrationUpdate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + m.Event = &OneOf_MySQLProcessKill{v} + iNdEx = postIndex + case 86: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MySQLDebug", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -83392,13 +93954,15 @@ func (m *IntegrationUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &MySQLDebug{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_MySQLDebug{v} iNdEx = postIndex - case 2: + case 87: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MySQLRefresh", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -83425,13 +93989,15 @@ func (m *IntegrationUpdate) Unmarshal(dAtA []byte) error { if 
postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &MySQLRefresh{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_MySQLRefresh{v} iNdEx = postIndex - case 3: + case 88: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AccessRequestResourceSearch", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -83458,13 +94024,15 @@ func (m *IntegrationUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &AccessRequestResourceSearch{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_AccessRequestResourceSearch{v} iNdEx = postIndex - case 4: + case 89: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field IntegrationMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SQLServerRPCRequest", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -83491,13 +94059,15 @@ func (m *IntegrationUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.IntegrationMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &SQLServerRPCRequest{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_SQLServerRPCRequest{v} iNdEx = postIndex - case 5: + case 90: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseSessionMalformedPacket", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -83524,64 +94094,15 @@ func (m *IntegrationUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return 
io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &DatabaseSessionMalformedPacket{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_DatabaseSessionMalformedPacket{v} iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *IntegrationDelete) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: IntegrationDelete: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: IntegrationDelete: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 91: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SFTP", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -83608,13 +94129,15 @@ func (m *IntegrationDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &SFTP{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_SFTP{v} iNdEx = postIndex - case 2: + 
case 92: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UpgradeWindowStartUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -83641,13 +94164,15 @@ func (m *IntegrationDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &UpgradeWindowStartUpdate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_UpgradeWindowStartUpdate{v} iNdEx = postIndex - case 3: + case 93: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppSessionEnd", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -83674,13 +94199,15 @@ func (m *IntegrationDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &AppSessionEnd{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_AppSessionEnd{v} iNdEx = postIndex - case 4: + case 94: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field IntegrationMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionRecordingAccess", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -83707,13 +94234,15 @@ func (m *IntegrationDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.IntegrationMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &SessionRecordingAccess{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_SessionRecordingAccess{v} iNdEx = postIndex - case 5: + case 96: if wireType != 2 { - return fmt.Errorf("proto: 
wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesClusterCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -83740,66 +94269,17 @@ func (m *IntegrationDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &KubernetesClusterCreate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_KubernetesClusterCreate{v} iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *IntegrationMetadata) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: IntegrationMetadata: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: IntegrationMetadata: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 97: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SubKind", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesClusterUpdate", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 
{ if shift >= 64 { return ErrIntOverflowEvents @@ -83809,27 +94289,30 @@ func (m *IntegrationMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.SubKind = string(dAtA[iNdEx:postIndex]) + v := &KubernetesClusterUpdate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_KubernetesClusterUpdate{v} iNdEx = postIndex - case 2: + case 98: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AWSOIDC", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesClusterDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -83856,16 +94339,15 @@ func (m *IntegrationMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.AWSOIDC == nil { - m.AWSOIDC = &AWSOIDCIntegrationMetadata{} - } - if err := m.AWSOIDC.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &KubernetesClusterDelete{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_KubernetesClusterDelete{v} iNdEx = postIndex - case 3: + case 99: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AzureOIDC", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SSMRun", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -83892,16 +94374,15 @@ func (m *IntegrationMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.AzureOIDC == nil { - m.AzureOIDC = &AzureOIDCIntegrationMetadata{} - } - if err := m.AzureOIDC.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + 
v := &SSMRun{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_SSMRun{v} iNdEx = postIndex - case 4: + case 100: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field GitHub", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ElasticsearchRequest", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -83928,16 +94409,15 @@ func (m *IntegrationMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.GitHub == nil { - m.GitHub = &GitHubIntegrationMetadata{} - } - if err := m.GitHub.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &ElasticsearchRequest{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_ElasticsearchRequest{v} iNdEx = postIndex - case 5: + case 101: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AWSRA", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CassandraBatch", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -83964,69 +94444,52 @@ func (m *IntegrationMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.AWSRA == nil { - m.AWSRA = &AWSRAIntegrationMetadata{} - } - if err := m.AWSRA.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &CassandraBatch{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_CassandraBatch{v} iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err + case 102: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field CassandraPrepare", wireType) } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + 
iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF + if msglen < 0 { + return ErrInvalidLengthEvents } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *AWSOIDCIntegrationMetadata) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents } - if iNdEx >= l { + if postIndex > l { return io.ErrUnexpectedEOF } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break + v := &CassandraPrepare{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: AWSOIDCIntegrationMetadata: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: AWSOIDCIntegrationMetadata: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + m.Event = &OneOf_CassandraPrepare{v} + iNdEx = postIndex + case 103: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RoleARN", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CassandraRegister", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -84036,29 +94499,32 @@ func (m *AWSOIDCIntegrationMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := 
iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.RoleARN = string(dAtA[iNdEx:postIndex]) + v := &CassandraRegister{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_CassandraRegister{v} iNdEx = postIndex - case 2: + case 104: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field IssuerS3URI", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CassandraExecute", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -84068,80 +94534,32 @@ func (m *AWSOIDCIntegrationMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.IssuerS3URI = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + v := &CassandraExecute{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *AzureOIDCIntegrationMetadata) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: AzureOIDCIntegrationMetadata: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: AzureOIDCIntegrationMetadata: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + m.Event = &OneOf_CassandraExecute{v} + iNdEx = postIndex + case 105: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TenantID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppSessionDynamoDBRequest", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -84151,29 +94569,32 @@ func (m *AzureOIDCIntegrationMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.TenantID = string(dAtA[iNdEx:postIndex]) + v := &AppSessionDynamoDBRequest{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_AppSessionDynamoDBRequest{v} iNdEx = postIndex - case 2: + case 106: if wireType != 2 { - return 
fmt.Errorf("proto: wrong wireType = %d for field ClientID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DesktopSharedDirectoryStart", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -84183,80 +94604,67 @@ func (m *AzureOIDCIntegrationMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.ClientID = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + v := &DesktopSharedDirectoryStart{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents + m.Event = &OneOf_DesktopSharedDirectoryStart{v} + iNdEx = postIndex + case 107: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DesktopSharedDirectoryRead", wireType) } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *GitHubIntegrationMetadata) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents + if msglen < 0 { + return ErrInvalidLengthEvents } - if iNdEx >= l { - return io.ErrUnexpectedEOF + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break + if postIndex > l { + return io.ErrUnexpectedEOF } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: GitHubIntegrationMetadata: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: GitHubIntegrationMetadata: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + v := &DesktopSharedDirectoryRead{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_DesktopSharedDirectoryRead{v} + iNdEx = postIndex + case 108: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Organization", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DesktopSharedDirectoryWrite", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -84266,80 +94674,32 @@ func (m *GitHubIntegrationMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return 
io.ErrUnexpectedEOF } - m.Organization = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + v := &DesktopSharedDirectoryWrite{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *AWSRAIntegrationMetadata) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: AWSRAIntegrationMetadata: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: AWSRAIntegrationMetadata: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + m.Event = &OneOf_DesktopSharedDirectoryWrite{v} + iNdEx = postIndex + case 109: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TrustAnchorARN", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DynamoDBRequest", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -84349,78 +94709,30 @@ func (m *AWSRAIntegrationMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + 
if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.TrustAnchorARN = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + v := &DynamoDBRequest{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *PluginCreate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: PluginCreate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: PluginCreate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + m.Event = &OneOf_DynamoDBRequest{v} + iNdEx = postIndex + case 110: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field BotJoin", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -84447,13 +94759,15 @@ func (m *PluginCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &BotJoin{} + if err 
:= v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_BotJoin{v} iNdEx = postIndex - case 2: + case 111: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field InstanceJoin", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -84480,13 +94794,15 @@ func (m *PluginCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &InstanceJoin{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_InstanceJoin{v} iNdEx = postIndex - case 3: + case 112: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DeviceEvent", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -84513,13 +94829,15 @@ func (m *PluginCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &DeviceEvent{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_DeviceEvent{v} iNdEx = postIndex - case 4: + case 113: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PluginMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field LoginRuleCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -84546,13 +94864,15 @@ func (m *PluginCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.PluginMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &LoginRuleCreate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_LoginRuleCreate{v} iNdEx = postIndex - case 5: + case 114: if 
wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field LoginRuleDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -84579,64 +94899,15 @@ func (m *PluginCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &LoginRuleDelete{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_LoginRuleDelete{v} iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *PluginUpdate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: PluginUpdate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: PluginUpdate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 115: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SAMLIdPAuthAttempt", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -84663,13 +94934,15 @@ func 
(m *PluginUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &SAMLIdPAuthAttempt{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_SAMLIdPAuthAttempt{v} iNdEx = postIndex - case 2: + case 116: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SAMLIdPServiceProviderCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -84696,13 +94969,15 @@ func (m *PluginUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &SAMLIdPServiceProviderCreate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_SAMLIdPServiceProviderCreate{v} iNdEx = postIndex - case 3: + case 117: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SAMLIdPServiceProviderUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -84729,13 +95004,15 @@ func (m *PluginUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &SAMLIdPServiceProviderUpdate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_SAMLIdPServiceProviderUpdate{v} iNdEx = postIndex - case 4: + case 118: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PluginMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SAMLIdPServiceProviderDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -84762,13 +95039,15 @@ func (m *PluginUpdate) 
Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.PluginMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &SAMLIdPServiceProviderDelete{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_SAMLIdPServiceProviderDelete{v} iNdEx = postIndex - case 5: + case 119: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SAMLIdPServiceProviderDeleteAll", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -84795,64 +95074,15 @@ func (m *PluginUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &SAMLIdPServiceProviderDeleteAll{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_SAMLIdPServiceProviderDeleteAll{v} iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *PluginDelete) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: PluginDelete: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: PluginDelete: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 120: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field OpenSearchRequest", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -84879,13 +95109,15 @@ func (m *PluginDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &OpenSearchRequest{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_OpenSearchRequest{v} iNdEx = postIndex - case 2: + case 121: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DeviceEvent2", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -84912,13 +95144,15 @@ func (m *PluginDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &DeviceEvent2{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_DeviceEvent2{v} iNdEx = 
postIndex - case 3: + case 122: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field OktaResourcesUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -84945,13 +95179,15 @@ func (m *PluginDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &OktaResourcesUpdate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_OktaResourcesUpdate{v} iNdEx = postIndex - case 4: + case 123: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PluginMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field OktaSyncFailure", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -84978,13 +95214,15 @@ func (m *PluginDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.PluginMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &OktaSyncFailure{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_OktaSyncFailure{v} iNdEx = postIndex - case 5: + case 124: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field OktaAssignmentResult", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -85011,66 +95249,52 @@ func (m *PluginDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &OktaAssignmentResult{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_OktaAssignmentResult{v} iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) 
- if err != nil { - return err + case 125: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ProvisionTokenCreate", wireType) } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF + if msglen < 0 { + return ErrInvalidLengthEvents } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *PluginMetadata) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents } - if iNdEx >= l { + if postIndex > l { return io.ErrUnexpectedEOF } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break + v := &ProvisionTokenCreate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: PluginMetadata: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: PluginMetadata: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + m.Event = &OneOf_ProvisionTokenCreate{v} + iNdEx = postIndex + case 126: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PluginType", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AccessListCreate", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { 
if shift >= 64 { return ErrIntOverflowEvents @@ -85080,29 +95304,32 @@ func (m *PluginMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.PluginType = string(dAtA[iNdEx:postIndex]) + v := &AccessListCreate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_AccessListCreate{v} iNdEx = postIndex - case 3: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field HasCredentials", wireType) + case 127: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AccessListUpdate", wireType) } - var v int + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -85112,17 +95339,32 @@ func (m *PluginMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - v |= int(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - m.HasCredentials = bool(v != 0) - case 4: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field ReusesCredentials", wireType) + if msglen < 0 { + return ErrInvalidLengthEvents } - var v int + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + v := &AccessListUpdate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_AccessListUpdate{v} + iNdEx = postIndex + case 128: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AccessListDelete", wireType) + } + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return 
ErrIntOverflowEvents @@ -85132,15 +95374,30 @@ func (m *PluginMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - v |= int(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - m.ReusesCredentials = bool(v != 0) - case 5: + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + v := &AccessListDelete{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Event = &OneOf_AccessListDelete{v} + iNdEx = postIndex + case 129: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PluginData", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AccessListReview", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -85167,67 +95424,120 @@ func (m *PluginMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.PluginData == nil { - m.PluginData = &Struct{} - } - if err := m.PluginData.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + v := &AccessListReview{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } + m.Event = &OneOf_AccessListReview{v} iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + case 130: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AccessListMemberCreate", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + v := &AccessListMemberCreate{} + if 
err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { + m.Event = &OneOf_AccessListMemberCreate{v} + iNdEx = postIndex + case 131: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AccessListMemberUpdate", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { return ErrInvalidLengthEvents } - if (iNdEx + skippy) > l { + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *OneOf) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents + v := &AccessListMemberUpdate{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - if iNdEx >= l { + m.Event = &OneOf_AccessListMemberUpdate{v} + iNdEx = postIndex + case 132: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AccessListMemberDelete", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { return io.ErrUnexpectedEOF } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift 
- if b < 0x80 { - break + v := &AccessListMemberDelete{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: OneOf: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: OneOf: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + m.Event = &OneOf_AccessListMemberDelete{v} + iNdEx = postIndex + case 133: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserLogin", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AccessListMemberDeleteAllForAccessList", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -85254,15 +95564,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &UserLogin{} + v := &AccessListMemberDeleteAllForAccessList{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_UserLogin{v} + m.Event = &OneOf_AccessListMemberDeleteAllForAccessList{v} iNdEx = postIndex - case 2: + case 134: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AuditQueryRun", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -85289,15 +95599,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &UserCreate{} + v := &AuditQueryRun{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_UserCreate{v} + m.Event = &OneOf_AuditQueryRun{v} iNdEx = postIndex - case 3: + case 135: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SecurityReportRun", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ 
-85324,15 +95634,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &UserDelete{} + v := &SecurityReportRun{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_UserDelete{v} + m.Event = &OneOf_SecurityReportRun{v} iNdEx = postIndex - case 4: + case 136: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserPasswordChange", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field GithubConnectorUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -85359,15 +95669,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &UserPasswordChange{} + v := &GithubConnectorUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_UserPasswordChange{v} + m.Event = &OneOf_GithubConnectorUpdate{v} iNdEx = postIndex - case 5: + case 137: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionStart", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field OIDCConnectorUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -85394,15 +95704,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SessionStart{} + v := &OIDCConnectorUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SessionStart{v} + m.Event = &OneOf_OIDCConnectorUpdate{v} iNdEx = postIndex - case 6: + case 138: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionJoin", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SAMLConnectorUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -85429,15 +95739,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SessionJoin{} + v := 
&SAMLConnectorUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SessionJoin{v} + m.Event = &OneOf_SAMLConnectorUpdate{v} iNdEx = postIndex - case 7: + case 139: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionPrint", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RoleUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -85464,15 +95774,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SessionPrint{} + v := &RoleUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SessionPrint{v} + m.Event = &OneOf_RoleUpdate{v} iNdEx = postIndex - case 8: + case 140: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionReject", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -85499,15 +95809,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SessionReject{} + v := &UserUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SessionReject{v} + m.Event = &OneOf_UserUpdate{v} iNdEx = postIndex - case 9: + case 141: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Resize", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ExternalAuditStorageEnable", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -85534,15 +95844,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &Resize{} + v := &ExternalAuditStorageEnable{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_Resize{v} + m.Event = &OneOf_ExternalAuditStorageEnable{v} iNdEx = postIndex - case 10: + case 142: if 
wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionEnd", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ExternalAuditStorageDisable", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -85569,15 +95879,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SessionEnd{} + v := &ExternalAuditStorageDisable{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SessionEnd{v} + m.Event = &OneOf_ExternalAuditStorageDisable{v} iNdEx = postIndex - case 11: + case 143: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionCommand", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field BotCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -85604,15 +95914,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SessionCommand{} + v := &BotCreate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SessionCommand{v} + m.Event = &OneOf_BotCreate{v} iNdEx = postIndex - case 12: + case 144: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionDisk", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field BotDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -85639,15 +95949,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SessionDisk{} + v := &BotDelete{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SessionDisk{v} + m.Event = &OneOf_BotDelete{v} iNdEx = postIndex - case 13: + case 145: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionNetwork", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field BotUpdate", wireType) } var msglen int for shift 
:= uint(0); ; shift += 7 { @@ -85674,15 +95984,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SessionNetwork{} + v := &BotUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SessionNetwork{v} + m.Event = &OneOf_BotUpdate{v} iNdEx = postIndex - case 14: + case 146: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionData", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CreateMFAAuthChallenge", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -85709,15 +96019,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SessionData{} + v := &CreateMFAAuthChallenge{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SessionData{v} + m.Event = &OneOf_CreateMFAAuthChallenge{v} iNdEx = postIndex - case 15: + case 147: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionLeave", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ValidateMFAAuthResponse", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -85744,15 +96054,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SessionLeave{} + v := &ValidateMFAAuthResponse{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SessionLeave{v} + m.Event = &OneOf_ValidateMFAAuthResponse{v} iNdEx = postIndex - case 16: + case 148: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PortForward", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field OktaAccessListSync", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -85779,15 +96089,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &PortForward{} 
+ v := &OktaAccessListSync{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_PortForward{v} + m.Event = &OneOf_OktaAccessListSync{v} iNdEx = postIndex - case 17: + case 149: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field X11Forward", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabasePermissionUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -85814,15 +96124,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &X11Forward{} + v := &DatabasePermissionUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_X11Forward{v} + m.Event = &OneOf_DatabasePermissionUpdate{v} iNdEx = postIndex - case 18: + case 150: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SCP", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SPIFFESVIDIssued", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -85849,15 +96159,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SCP{} + v := &SPIFFESVIDIssued{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SCP{v} + m.Event = &OneOf_SPIFFESVIDIssued{v} iNdEx = postIndex - case 19: + case 151: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Exec", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field OktaUserSync", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -85884,15 +96194,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &Exec{} + v := &OktaUserSync{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_Exec{v} + m.Event = &OneOf_OktaUserSync{v} iNdEx = postIndex - case 20: + case 152: if wireType != 2 { - 
return fmt.Errorf("proto: wrong wireType = %d for field Subsystem", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AuthPreferenceUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -85919,15 +96229,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &Subsystem{} + v := &AuthPreferenceUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_Subsystem{v} + m.Event = &OneOf_AuthPreferenceUpdate{v} iNdEx = postIndex - case 21: + case 153: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ClientDisconnect", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionRecordingConfigUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -85954,15 +96264,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &ClientDisconnect{} + v := &SessionRecordingConfigUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_ClientDisconnect{v} + m.Event = &OneOf_SessionRecordingConfigUpdate{v} iNdEx = postIndex - case 22: + case 154: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AuthAttempt", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ClusterNetworkingConfigUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -85989,15 +96299,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &AuthAttempt{} + v := &ClusterNetworkingConfigUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_AuthAttempt{v} + m.Event = &OneOf_ClusterNetworkingConfigUpdate{v} iNdEx = postIndex - case 23: + case 155: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessRequestCreate", wireType) + return 
fmt.Errorf("proto: wrong wireType = %d for field DatabaseUserCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86024,15 +96334,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &AccessRequestCreate{} + v := &DatabaseUserCreate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_AccessRequestCreate{v} + m.Event = &OneOf_DatabaseUserCreate{v} iNdEx = postIndex - case 24: + case 156: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserTokenCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseUserDeactivate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86059,15 +96369,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &UserTokenCreate{} + v := &DatabaseUserDeactivate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_UserTokenCreate{v} + m.Event = &OneOf_DatabaseUserDeactivate{v} iNdEx = postIndex - case 25: + case 157: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RoleCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AccessPathChanged", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86094,15 +96404,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &RoleCreate{} + v := &AccessPathChanged{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_RoleCreate{v} + m.Event = &OneOf_AccessPathChanged{v} iNdEx = postIndex - case 26: + case 158: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RoleDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SpannerRPC", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86129,15 +96439,15 @@ 
func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &RoleDelete{} + v := &SpannerRPC{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_RoleDelete{v} + m.Event = &OneOf_SpannerRPC{v} iNdEx = postIndex - case 27: + case 159: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TrustedClusterCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseSessionCommandResult", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86164,15 +96474,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &TrustedClusterCreate{} + v := &DatabaseSessionCommandResult{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_TrustedClusterCreate{v} + m.Event = &OneOf_DatabaseSessionCommandResult{v} iNdEx = postIndex - case 28: + case 160: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TrustedClusterDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DiscoveryConfigCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86199,15 +96509,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &TrustedClusterDelete{} + v := &DiscoveryConfigCreate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_TrustedClusterDelete{v} + m.Event = &OneOf_DiscoveryConfigCreate{v} iNdEx = postIndex - case 29: + case 161: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TrustedClusterTokenCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DiscoveryConfigUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86234,15 +96544,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } 
- v := &TrustedClusterTokenCreate{} + v := &DiscoveryConfigUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_TrustedClusterTokenCreate{v} + m.Event = &OneOf_DiscoveryConfigUpdate{v} iNdEx = postIndex - case 30: + case 162: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field GithubConnectorCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DiscoveryConfigDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86269,15 +96579,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &GithubConnectorCreate{} + v := &DiscoveryConfigDelete{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_GithubConnectorCreate{v} + m.Event = &OneOf_DiscoveryConfigDelete{v} iNdEx = postIndex - case 31: + case 163: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field GithubConnectorDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DiscoveryConfigDeleteAll", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86304,15 +96614,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &GithubConnectorDelete{} + v := &DiscoveryConfigDeleteAll{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_GithubConnectorDelete{v} + m.Event = &OneOf_DiscoveryConfigDeleteAll{v} iNdEx = postIndex - case 32: + case 164: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field OIDCConnectorCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AccessGraphSettingsUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86339,15 +96649,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &OIDCConnectorCreate{} + v := 
&AccessGraphSettingsUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_OIDCConnectorCreate{v} + m.Event = &OneOf_AccessGraphSettingsUpdate{v} iNdEx = postIndex - case 33: + case 165: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field OIDCConnectorDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field IntegrationCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86374,15 +96684,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &OIDCConnectorDelete{} + v := &IntegrationCreate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_OIDCConnectorDelete{v} + m.Event = &OneOf_IntegrationCreate{v} iNdEx = postIndex - case 34: + case 166: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SAMLConnectorCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field IntegrationUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86409,15 +96719,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SAMLConnectorCreate{} + v := &IntegrationUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SAMLConnectorCreate{v} + m.Event = &OneOf_IntegrationUpdate{v} iNdEx = postIndex - case 35: + case 167: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SAMLConnectorDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field IntegrationDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86444,15 +96754,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SAMLConnectorDelete{} + v := &IntegrationDelete{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = 
&OneOf_SAMLConnectorDelete{v} + m.Event = &OneOf_IntegrationDelete{v} iNdEx = postIndex - case 36: + case 168: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubeRequest", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SPIFFEFederationCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86479,15 +96789,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &KubeRequest{} + v := &SPIFFEFederationCreate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_KubeRequest{v} + m.Event = &OneOf_SPIFFEFederationCreate{v} iNdEx = postIndex - case 37: + case 169: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AppSessionStart", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SPIFFEFederationDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86514,15 +96824,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &AppSessionStart{} + v := &SPIFFEFederationDelete{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_AppSessionStart{v} + m.Event = &OneOf_SPIFFEFederationDelete{v} iNdEx = postIndex - case 38: + case 170: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AppSessionChunk", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PluginCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86549,15 +96859,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &AppSessionChunk{} + v := &PluginCreate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_AppSessionChunk{v} + m.Event = &OneOf_PluginCreate{v} iNdEx = postIndex - case 39: + case 171: if wireType != 2 { - return fmt.Errorf("proto: 
wrong wireType = %d for field AppSessionRequest", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PluginUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86584,15 +96894,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &AppSessionRequest{} + v := &PluginUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_AppSessionRequest{v} + m.Event = &OneOf_PluginUpdate{v} iNdEx = postIndex - case 40: + case 172: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseSessionStart", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PluginDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86619,15 +96929,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &DatabaseSessionStart{} + v := &PluginDelete{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_DatabaseSessionStart{v} + m.Event = &OneOf_PluginDelete{v} iNdEx = postIndex - case 41: + case 173: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseSessionEnd", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AutoUpdateConfigCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86654,15 +96964,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &DatabaseSessionEnd{} + v := &AutoUpdateConfigCreate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_DatabaseSessionEnd{v} + m.Event = &OneOf_AutoUpdateConfigCreate{v} iNdEx = postIndex - case 42: + case 174: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseSessionQuery", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AutoUpdateConfigUpdate", 
wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86689,15 +96999,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &DatabaseSessionQuery{} + v := &AutoUpdateConfigUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_DatabaseSessionQuery{v} + m.Event = &OneOf_AutoUpdateConfigUpdate{v} iNdEx = postIndex - case 43: + case 175: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionUpload", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AutoUpdateConfigDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86724,15 +97034,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SessionUpload{} + v := &AutoUpdateConfigDelete{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SessionUpload{v} + m.Event = &OneOf_AutoUpdateConfigDelete{v} iNdEx = postIndex - case 44: + case 176: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MFADeviceAdd", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AutoUpdateVersionCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86759,15 +97069,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &MFADeviceAdd{} + v := &AutoUpdateVersionCreate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_MFADeviceAdd{v} + m.Event = &OneOf_AutoUpdateVersionCreate{v} iNdEx = postIndex - case 45: + case 177: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MFADeviceDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AutoUpdateVersionUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86794,15 +97104,15 @@ func (m *OneOf) 
Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &MFADeviceDelete{} + v := &AutoUpdateVersionUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_MFADeviceDelete{v} + m.Event = &OneOf_AutoUpdateVersionUpdate{v} iNdEx = postIndex - case 46: + case 178: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field BillingInformationUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AutoUpdateVersionDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86829,15 +97139,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &BillingInformationUpdate{} + v := &AutoUpdateVersionDelete{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_BillingInformationUpdate{v} + m.Event = &OneOf_AutoUpdateVersionDelete{v} iNdEx = postIndex - case 47: + case 179: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field BillingCardCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StaticHostUserCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86864,15 +97174,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &BillingCardCreate{} + v := &StaticHostUserCreate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_BillingCardCreate{v} + m.Event = &OneOf_StaticHostUserCreate{v} iNdEx = postIndex - case 48: + case 180: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field BillingCardDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StaticHostUserUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86899,15 +97209,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v 
:= &BillingCardDelete{} + v := &StaticHostUserUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_BillingCardDelete{v} + m.Event = &OneOf_StaticHostUserUpdate{v} iNdEx = postIndex - case 49: + case 181: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field LockCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StaticHostUserDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86934,15 +97244,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &LockCreate{} + v := &StaticHostUserDelete{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_LockCreate{v} + m.Event = &OneOf_StaticHostUserDelete{v} iNdEx = postIndex - case 50: + case 182: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field LockDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CrownJewelCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -86969,15 +97279,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &LockDelete{} + v := &CrownJewelCreate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_LockDelete{v} + m.Event = &OneOf_CrownJewelCreate{v} iNdEx = postIndex - case 51: + case 183: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RecoveryCodeGenerate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CrownJewelUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87004,15 +97314,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &RecoveryCodeGenerate{} + v := &CrownJewelUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_RecoveryCodeGenerate{v} + 
m.Event = &OneOf_CrownJewelUpdate{v} iNdEx = postIndex - case 52: + case 184: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RecoveryCodeUsed", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CrownJewelDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87039,15 +97349,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &RecoveryCodeUsed{} + v := &CrownJewelDelete{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_RecoveryCodeUsed{v} + m.Event = &OneOf_CrownJewelDelete{v} iNdEx = postIndex - case 53: + case 188: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserTaskCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87074,15 +97384,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &DatabaseCreate{} + v := &UserTaskCreate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_DatabaseCreate{v} + m.Event = &OneOf_UserTaskCreate{v} iNdEx = postIndex - case 54: + case 189: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserTaskUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87109,15 +97419,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &DatabaseUpdate{} + v := &UserTaskUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_DatabaseUpdate{v} + m.Event = &OneOf_UserTaskUpdate{v} iNdEx = postIndex - case 55: + case 190: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseDelete", wireType) + 
return fmt.Errorf("proto: wrong wireType = %d for field UserTaskDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87144,15 +97454,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &DatabaseDelete{} + v := &UserTaskDelete{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_DatabaseDelete{v} + m.Event = &OneOf_UserTaskDelete{v} iNdEx = postIndex - case 56: + case 191: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AppCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SFTPSummary", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87179,15 +97489,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &AppCreate{} + v := &SFTPSummary{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_AppCreate{v} + m.Event = &OneOf_SFTPSummary{v} iNdEx = postIndex - case 57: + case 192: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AppUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ContactCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87214,15 +97524,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &AppUpdate{} + v := &ContactCreate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_AppUpdate{v} + m.Event = &OneOf_ContactCreate{v} iNdEx = postIndex - case 58: + case 193: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AppDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ContactDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87249,15 +97559,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return 
io.ErrUnexpectedEOF } - v := &AppDelete{} + v := &ContactDelete{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_AppDelete{v} + m.Event = &OneOf_ContactDelete{v} iNdEx = postIndex - case 59: + case 194: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WindowsDesktopSessionStart", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentityCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87284,15 +97594,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &WindowsDesktopSessionStart{} + v := &WorkloadIdentityCreate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_WindowsDesktopSessionStart{v} + m.Event = &OneOf_WorkloadIdentityCreate{v} iNdEx = postIndex - case 60: + case 195: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WindowsDesktopSessionEnd", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentityUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87319,15 +97629,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &WindowsDesktopSessionEnd{} + v := &WorkloadIdentityUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_WindowsDesktopSessionEnd{v} + m.Event = &OneOf_WorkloadIdentityUpdate{v} iNdEx = postIndex - case 61: + case 196: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PostgresParse", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentityDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87354,15 +97664,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &PostgresParse{} + v := &WorkloadIdentityDelete{} if err 
:= v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_PostgresParse{v} + m.Event = &OneOf_WorkloadIdentityDelete{v} iNdEx = postIndex - case 62: + case 197: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PostgresBind", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field GitCommand", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87389,15 +97699,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &PostgresBind{} + v := &GitCommand{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_PostgresBind{v} + m.Event = &OneOf_GitCommand{v} iNdEx = postIndex - case 63: + case 198: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PostgresExecute", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserLoginAccessListInvalid", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87424,15 +97734,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &PostgresExecute{} + v := &UserLoginAccessListInvalid{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_PostgresExecute{v} + m.Event = &OneOf_UserLoginAccessListInvalid{v} iNdEx = postIndex - case 64: + case 199: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PostgresClose", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AccessRequestExpire", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87459,15 +97769,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &PostgresClose{} + v := &AccessRequestExpire{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_PostgresClose{v} + m.Event = &OneOf_AccessRequestExpire{v} iNdEx = postIndex 
- case 65: + case 200: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PostgresFunctionCall", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StableUNIXUserCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87494,15 +97804,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &PostgresFunctionCall{} + v := &StableUNIXUserCreate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_PostgresFunctionCall{v} + m.Event = &OneOf_StableUNIXUserCreate{v} iNdEx = postIndex - case 66: + case 201: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessRequestDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentityX509RevocationCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87529,15 +97839,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &AccessRequestDelete{} + v := &WorkloadIdentityX509RevocationCreate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_AccessRequestDelete{v} + m.Event = &OneOf_WorkloadIdentityX509RevocationCreate{v} iNdEx = postIndex - case 67: + case 202: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionConnect", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentityX509RevocationDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87564,15 +97874,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SessionConnect{} + v := &WorkloadIdentityX509RevocationDelete{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SessionConnect{v} + m.Event = &OneOf_WorkloadIdentityX509RevocationDelete{v} iNdEx = postIndex - case 68: + 
case 203: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CertificateCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentityX509RevocationUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87599,15 +97909,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &CertificateCreate{} + v := &WorkloadIdentityX509RevocationUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_CertificateCreate{v} + m.Event = &OneOf_WorkloadIdentityX509RevocationUpdate{v} iNdEx = postIndex - case 69: + case 204: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DesktopRecording", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AWSICResourceSync", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87634,15 +97944,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &DesktopRecording{} + v := &AWSICResourceSync{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_DesktopRecording{v} + m.Event = &OneOf_AWSICResourceSync{v} iNdEx = postIndex - case 70: + case 205: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DesktopClipboardSend", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field HealthCheckConfigCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87669,15 +97979,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &DesktopClipboardSend{} + v := &HealthCheckConfigCreate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_DesktopClipboardSend{v} + m.Event = &OneOf_HealthCheckConfigCreate{v} iNdEx = postIndex - case 71: + case 206: if wireType != 2 { - return fmt.Errorf("proto: 
wrong wireType = %d for field DesktopClipboardReceive", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field HealthCheckConfigUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87704,15 +98014,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &DesktopClipboardReceive{} + v := &HealthCheckConfigUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_DesktopClipboardReceive{v} + m.Event = &OneOf_HealthCheckConfigUpdate{v} iNdEx = postIndex - case 72: + case 207: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MySQLStatementPrepare", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field HealthCheckConfigDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87739,15 +98049,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &MySQLStatementPrepare{} + v := &HealthCheckConfigDelete{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_MySQLStatementPrepare{v} + m.Event = &OneOf_HealthCheckConfigDelete{v} iNdEx = postIndex - case 73: + case 208: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MySQLStatementExecute", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentityX509IssuerOverrideCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87774,15 +98084,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &MySQLStatementExecute{} + v := &WorkloadIdentityX509IssuerOverrideCreate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_MySQLStatementExecute{v} + m.Event = &OneOf_WorkloadIdentityX509IssuerOverrideCreate{v} iNdEx = postIndex - case 74: + case 209: if wireType != 2 { - return 
fmt.Errorf("proto: wrong wireType = %d for field MySQLStatementSendLongData", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentityX509IssuerOverrideDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87809,15 +98119,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &MySQLStatementSendLongData{} + v := &WorkloadIdentityX509IssuerOverrideDelete{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_MySQLStatementSendLongData{v} + m.Event = &OneOf_WorkloadIdentityX509IssuerOverrideDelete{v} iNdEx = postIndex - case 75: + case 210: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MySQLStatementClose", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SigstorePolicyCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87844,15 +98154,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &MySQLStatementClose{} + v := &SigstorePolicyCreate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_MySQLStatementClose{v} + m.Event = &OneOf_SigstorePolicyCreate{v} iNdEx = postIndex - case 76: + case 211: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MySQLStatementReset", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SigstorePolicyUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87879,15 +98189,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &MySQLStatementReset{} + v := &SigstorePolicyUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_MySQLStatementReset{v} + m.Event = &OneOf_SigstorePolicyUpdate{v} iNdEx = postIndex - case 77: + case 212: if wireType != 2 { - return 
fmt.Errorf("proto: wrong wireType = %d for field MySQLStatementFetch", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SigstorePolicyDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87914,15 +98224,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &MySQLStatementFetch{} + v := &SigstorePolicyDelete{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_MySQLStatementFetch{v} + m.Event = &OneOf_SigstorePolicyDelete{v} iNdEx = postIndex - case 78: + case 213: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MySQLStatementBulkExecute", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AutoUpdateAgentRolloutTrigger", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87949,15 +98259,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &MySQLStatementBulkExecute{} + v := &AutoUpdateAgentRolloutTrigger{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_MySQLStatementBulkExecute{v} + m.Event = &OneOf_AutoUpdateAgentRolloutTrigger{v} iNdEx = postIndex - case 79: + case 214: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RenewableCertificateGenerationMismatch", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AutoUpdateAgentRolloutForceDone", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -87984,15 +98294,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &RenewableCertificateGenerationMismatch{} + v := &AutoUpdateAgentRolloutForceDone{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_RenewableCertificateGenerationMismatch{v} + m.Event = &OneOf_AutoUpdateAgentRolloutForceDone{v} iNdEx = postIndex - case 80: + 
case 215: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Unknown", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AutoUpdateAgentRolloutRollback", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -88019,15 +98329,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &Unknown{} + v := &AutoUpdateAgentRolloutRollback{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_Unknown{v} + m.Event = &OneOf_AutoUpdateAgentRolloutRollback{v} iNdEx = postIndex - case 81: + case 216: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MySQLInitDB", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MCPSessionStart", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -88054,15 +98364,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &MySQLInitDB{} + v := &MCPSessionStart{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_MySQLInitDB{v} + m.Event = &OneOf_MCPSessionStart{v} iNdEx = postIndex - case 82: + case 217: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MySQLCreateDB", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MCPSessionEnd", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -88089,15 +98399,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &MySQLCreateDB{} + v := &MCPSessionEnd{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_MySQLCreateDB{v} + m.Event = &OneOf_MCPSessionEnd{v} iNdEx = postIndex - case 83: + case 218: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MySQLDropDB", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field 
MCPSessionRequest", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -88124,15 +98434,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &MySQLDropDB{} + v := &MCPSessionRequest{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_MySQLDropDB{v} + m.Event = &OneOf_MCPSessionRequest{v} iNdEx = postIndex - case 84: + case 219: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MySQLShutDown", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MCPSessionNotification", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -88159,15 +98469,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &MySQLShutDown{} + v := &MCPSessionNotification{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_MySQLShutDown{v} + m.Event = &OneOf_MCPSessionNotification{v} iNdEx = postIndex - case 85: + case 220: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MySQLProcessKill", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field BoundKeypairRecovery", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -88194,15 +98504,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &MySQLProcessKill{} + v := &BoundKeypairRecovery{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_MySQLProcessKill{v} + m.Event = &OneOf_BoundKeypairRecovery{v} iNdEx = postIndex - case 86: + case 221: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MySQLDebug", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field BoundKeypairRotation", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -88229,15 +98539,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) 
error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &MySQLDebug{} + v := &BoundKeypairRotation{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_MySQLDebug{v} + m.Event = &OneOf_BoundKeypairRotation{v} iNdEx = postIndex - case 87: + case 222: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MySQLRefresh", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field BoundKeypairJoinStateVerificationFailed", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -88264,15 +98574,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &MySQLRefresh{} + v := &BoundKeypairJoinStateVerificationFailed{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_MySQLRefresh{v} + m.Event = &OneOf_BoundKeypairJoinStateVerificationFailed{v} iNdEx = postIndex - case 88: + case 223: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessRequestResourceSearch", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MCPSessionListenSSEStream", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -88299,15 +98609,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &AccessRequestResourceSearch{} + v := &MCPSessionListenSSEStream{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_AccessRequestResourceSearch{v} + m.Event = &OneOf_MCPSessionListenSSEStream{v} iNdEx = postIndex - case 89: + case 224: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SQLServerRPCRequest", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MCPSessionInvalidHTTPRequest", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -88334,15 +98644,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { 
return io.ErrUnexpectedEOF } - v := &SQLServerRPCRequest{} + v := &MCPSessionInvalidHTTPRequest{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SQLServerRPCRequest{v} + m.Event = &OneOf_MCPSessionInvalidHTTPRequest{v} iNdEx = postIndex - case 90: + case 225: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseSessionMalformedPacket", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SCIMListingEvent", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -88369,15 +98679,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &DatabaseSessionMalformedPacket{} + v := &SCIMListingEvent{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_DatabaseSessionMalformedPacket{v} + m.Event = &OneOf_SCIMListingEvent{v} iNdEx = postIndex - case 91: + case 226: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SFTP", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SCIMResourceEvent", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -88404,15 +98714,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SFTP{} + v := &SCIMResourceEvent{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SFTP{v} + m.Event = &OneOf_SCIMResourceEvent{v} iNdEx = postIndex - case 92: + case 227: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UpgradeWindowStartUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ClientIPRestrictionsUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -88439,15 +98749,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &UpgradeWindowStartUpdate{} + v := &ClientIPRestrictionsUpdate{} if 
err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_UpgradeWindowStartUpdate{v} + m.Event = &OneOf_ClientIPRestrictionsUpdate{v} iNdEx = postIndex - case 93: + case 232: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AppSessionEnd", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field VnetConfigCreate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -88474,15 +98784,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &AppSessionEnd{} + v := &VnetConfigCreate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_AppSessionEnd{v} + m.Event = &OneOf_VnetConfigCreate{v} iNdEx = postIndex - case 94: + case 233: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionRecordingAccess", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field VnetConfigUpdate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -88509,15 +98819,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SessionRecordingAccess{} + v := &VnetConfigUpdate{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SessionRecordingAccess{v} + m.Event = &OneOf_VnetConfigUpdate{v} iNdEx = postIndex - case 96: + case 234: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubernetesClusterCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field VnetConfigDelete", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -88544,17 +98854,68 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &KubernetesClusterCreate{} + v := &VnetConfigDelete{} if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_KubernetesClusterCreate{v} + 
m.Event = &OneOf_VnetConfigDelete{v} iNdEx = postIndex - case 97: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *StreamStatus) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: StreamStatus: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: StreamStatus: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubernetesClusterUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UploadID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -88564,30 +98925,46 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &KubernetesClusterUpdate{} - if err 
:= v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_KubernetesClusterUpdate{v} + m.UploadID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 98: + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field LastEventIndex", wireType) + } + m.LastEventIndex = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.LastEventIndex |= int64(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubernetesClusterDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field LastUploadTime", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -88614,15 +98991,64 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &KubernetesClusterDelete{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.LastUploadTime, dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_KubernetesClusterDelete{v} iNdEx = postIndex - case 99: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *SessionUpload) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SessionUpload: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SessionUpload: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SSMRun", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -88649,15 +99075,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SSMRun{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SSMRun{v} iNdEx = postIndex - case 100: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ElasticsearchRequest", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -88684,17 +99108,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &ElasticsearchRequest{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_ElasticsearchRequest{v} iNdEx = postIndex - case 101: + case 5: 
if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CassandraBatch", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionURL", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -88704,32 +99126,80 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &CassandraBatch{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.SessionURL = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { return err } - m.Event = &OneOf_CassandraBatch{v} - iNdEx = postIndex - case 102: + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *Identity) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: Identity: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: Identity: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CassandraPrepare", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field User", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -88739,32 +99209,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &CassandraPrepare{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_CassandraPrepare{v} + m.User = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 103: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CassandraRegister", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Impersonator", wireType) } - var msglen int + var stringLen uint64 
for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -88774,32 +99241,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &CassandraRegister{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_CassandraRegister{v} + m.Impersonator = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 104: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CassandraExecute", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -88809,32 +99273,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &CassandraExecute{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_CassandraExecute{v} + m.Roles = append(m.Roles, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 105: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AppSessionDynamoDBRequest", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Usage", wireType) } - 
var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -88844,32 +99305,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &AppSessionDynamoDBRequest{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_AppSessionDynamoDBRequest{v} + m.Usage = append(m.Usage, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 106: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DesktopSharedDirectoryStart", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Logins", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -88879,32 +99337,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &DesktopSharedDirectoryStart{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_DesktopSharedDirectoryStart{v} + m.Logins = append(m.Logins, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 107: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field 
DesktopSharedDirectoryRead", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesGroups", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -88914,32 +99369,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &DesktopSharedDirectoryRead{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_DesktopSharedDirectoryRead{v} + m.KubernetesGroups = append(m.KubernetesGroups, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 108: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DesktopSharedDirectoryWrite", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesUsers", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -88949,30 +99401,27 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &DesktopSharedDirectoryWrite{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_DesktopSharedDirectoryWrite{v} + m.KubernetesUsers = 
append(m.KubernetesUsers, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 109: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DynamoDBRequest", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Expires", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -88999,17 +99448,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &DynamoDBRequest{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.Expires, dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_DynamoDBRequest{v} iNdEx = postIndex - case 110: + case 9: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field BotJoin", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RouteToCluster", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -89019,32 +99466,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &BotJoin{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_BotJoin{v} + m.RouteToCluster = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 111: + case 10: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field InstanceJoin", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field KubernetesCluster", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 
7 { if shift >= 64 { return ErrIntOverflowEvents @@ -89054,30 +99498,27 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &InstanceJoin{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_InstanceJoin{v} + m.KubernetesCluster = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 112: + case 11: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DeviceEvent", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Traits", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -89104,15 +99545,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &DeviceEvent{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Traits.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_DeviceEvent{v} iNdEx = postIndex - case 113: + case 12: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field LoginRuleCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RouteToApp", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -89139,17 +99578,18 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &LoginRuleCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.RouteToApp == nil { + m.RouteToApp = &RouteToApp{} + } + if err := m.RouteToApp.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_LoginRuleCreate{v} iNdEx = postIndex - case 
114: + case 13: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field LoginRuleDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TeleportCluster", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -89159,30 +99599,27 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &LoginRuleDelete{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_LoginRuleDelete{v} + m.TeleportCluster = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 115: + case 14: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SAMLIdPAuthAttempt", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RouteToDatabase", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -89209,17 +99646,18 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SAMLIdPAuthAttempt{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.RouteToDatabase == nil { + m.RouteToDatabase = &RouteToDatabase{} + } + if err := m.RouteToDatabase.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SAMLIdPAuthAttempt{v} iNdEx = postIndex - case 116: + case 15: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SAMLIdPServiceProviderCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseNames", wireType) } - var msglen int + var stringLen uint64 for shift := 
uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -89229,32 +99667,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &SAMLIdPServiceProviderCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_SAMLIdPServiceProviderCreate{v} + m.DatabaseNames = append(m.DatabaseNames, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 117: + case 16: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SAMLIdPServiceProviderUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseUsers", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -89264,32 +99699,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &SAMLIdPServiceProviderUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_SAMLIdPServiceProviderUpdate{v} + m.DatabaseUsers = append(m.DatabaseUsers, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 118: + case 17: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field 
SAMLIdPServiceProviderDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MFADeviceUUID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -89299,32 +99731,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &SAMLIdPServiceProviderDelete{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_SAMLIdPServiceProviderDelete{v} + m.MFADeviceUUID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 119: + case 18: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SAMLIdPServiceProviderDeleteAll", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ClientIP", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -89334,32 +99763,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &SAMLIdPServiceProviderDeleteAll{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_SAMLIdPServiceProviderDeleteAll{v} + m.ClientIP = string(dAtA[iNdEx:postIndex]) iNdEx = 
postIndex - case 120: + case 19: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field OpenSearchRequest", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AWSRoleARNs", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -89369,32 +99795,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &OpenSearchRequest{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_OpenSearchRequest{v} + m.AWSRoleARNs = append(m.AWSRoleARNs, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 121: + case 20: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DeviceEvent2", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AccessRequests", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -89404,30 +99827,47 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &DeviceEvent2{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_DeviceEvent2{v} + 
m.AccessRequests = append(m.AccessRequests, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 122: + case 21: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field DisallowReissue", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.DisallowReissue = bool(v != 0) + case 22: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field OktaResourcesUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AllowedResourceIDs", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -89454,15 +99894,14 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &OktaResourcesUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.AllowedResourceIDs = append(m.AllowedResourceIDs, ResourceID{}) + if err := m.AllowedResourceIDs[len(m.AllowedResourceIDs)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_OktaResourcesUpdate{v} iNdEx = postIndex - case 123: + case 23: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field OktaSyncFailure", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PreviousIdentityExpires", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -89489,17 +99928,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &OktaSyncFailure{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.PreviousIdentityExpires, dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_OktaSyncFailure{v} iNdEx = postIndex - case 124: + case 24: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d 
for field OktaAssignmentResult", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AzureIdentities", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -89509,32 +99946,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &OktaAssignmentResult{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_OktaAssignmentResult{v} + m.AzureIdentities = append(m.AzureIdentities, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 125: + case 25: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ProvisionTokenCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field GCPServiceAccounts", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -89544,32 +99978,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &ProvisionTokenCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_ProvisionTokenCreate{v} + m.GCPServiceAccounts = append(m.GCPServiceAccounts, 
string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 126: + case 26: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessListCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PrivateKeyPolicy", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -89579,32 +100010,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &AccessListCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_AccessListCreate{v} + m.PrivateKeyPolicy = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 127: + case 27: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessListUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field BotName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -89614,30 +100042,27 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &AccessListUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = 
&OneOf_AccessListUpdate{v} + m.BotName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 128: + case 28: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessListDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DeviceExtensions", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -89664,17 +100089,18 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &AccessListDelete{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.DeviceExtensions == nil { + m.DeviceExtensions = &DeviceExtensions{} + } + if err := m.DeviceExtensions.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_AccessListDelete{v} iNdEx = postIndex - case 129: + case 29: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessListReview", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field BotInstanceID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -89684,32 +100110,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &AccessListReview{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_AccessListReview{v} + m.BotInstanceID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 130: + case 30: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessListMemberCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d 
for field JoinToken", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -89719,30 +100142,27 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &AccessListMemberCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_AccessListMemberCreate{v} + m.JoinToken = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 131: + case 31: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessListMemberUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ScopePin", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -89769,17 +100189,69 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &AccessListMemberUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.ScopePin == nil { + m.ScopePin = &ScopePin{} + } + if err := m.ScopePin.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_AccessListMemberUpdate{v} iNdEx = postIndex - case 132: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ScopePin) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ScopePin: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ScopePin: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessListMemberDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Scope", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -89789,30 +100261,27 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &AccessListMemberDelete{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_AccessListMemberDelete{v} + m.Scope = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 133: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessListMemberDeleteAllForAccessList", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Assignments", wireType) } 
var msglen int for shift := uint(0); ; shift += 7 { @@ -89839,17 +100308,162 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &AccessListMemberDeleteAllForAccessList{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + if m.Assignments == nil { + m.Assignments = make(map[string]*ScopePinnedAssignments) } - m.Event = &OneOf_AccessListMemberDeleteAllForAccessList{v} + var mapkey string + var mapvalue *ScopePinnedAssignments + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthEvents + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey < 0 { + return ErrInvalidLengthEvents + } + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var mapmsglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + mapmsglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if mapmsglen < 0 { + return ErrInvalidLengthEvents + } + postmsgIndex := iNdEx + mapmsglen + if postmsgIndex < 0 { + return ErrInvalidLengthEvents + } + if postmsgIndex > l { + 
return io.ErrUnexpectedEOF + } + mapvalue = &ScopePinnedAssignments{} + if err := mapvalue.Unmarshal(dAtA[iNdEx:postmsgIndex]); err != nil { + return err + } + iNdEx = postmsgIndex + } else { + iNdEx = entryPreIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + m.Assignments[mapkey] = mapvalue iNdEx = postIndex - case 134: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ScopePinnedAssignments) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ScopePinnedAssignments: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ScopePinnedAssignments: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AuditQueryRun", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -89859,32 +100473,80 @@ 
func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &AuditQueryRun{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.Roles = append(m.Roles, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { return err } - m.Event = &OneOf_AuditQueryRun{v} - iNdEx = postIndex - case 135: + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *RouteToApp) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: RouteToApp: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: RouteToApp: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SecurityReportRun", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) } - var msglen int + var stringLen uint64 for 
shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -89894,32 +100556,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &SecurityReportRun{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_SecurityReportRun{v} + m.Name = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 136: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field GithubConnectorUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -89929,32 +100588,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &GithubConnectorUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_GithubConnectorUpdate{v} + m.SessionID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 137: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field OIDCConnectorUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PublicAddr", wireType) } - var 
msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -89964,32 +100620,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &OIDCConnectorUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_OIDCConnectorUpdate{v} + m.PublicAddr = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 138: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SAMLConnectorUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ClusterName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -89999,32 +100652,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &SAMLConnectorUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_SAMLConnectorUpdate{v} + m.ClusterName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 139: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RoleUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for 
field AWSRoleARN", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -90034,32 +100684,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &RoleUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_RoleUpdate{v} + m.AWSRoleARN = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 140: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AzureIdentity", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -90069,32 +100716,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &UserUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_UserUpdate{v} + m.AzureIdentity = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 141: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ExternalAuditStorageEnable", wireType) + return fmt.Errorf("proto: wrong wireType 
= %d for field GCPServiceAccount", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -90104,32 +100748,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &ExternalAuditStorageEnable{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_ExternalAuditStorageEnable{v} + m.GCPServiceAccount = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 142: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ExternalAuditStorageDisable", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field URI", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -90139,32 +100780,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &ExternalAuditStorageDisable{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_ExternalAuditStorageDisable{v} + m.URI = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 143: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field 
BotCreate", wireType) + case 9: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TargetPort", wireType) } - var msglen int + m.TargetPort = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -90174,32 +100812,67 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.TargetPort |= uint32(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err } - postIndex := iNdEx + msglen - if postIndex < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - if postIndex > l { + if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - v := &BotCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *RouteToDatabase) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break } - m.Event = &OneOf_BotCreate{v} - iNdEx = postIndex - case 144: + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: RouteToDatabase: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: RouteToDatabase: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field BotDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field 
ServiceName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -90209,32 +100882,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &BotDelete{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_BotDelete{v} + m.ServiceName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 145: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field BotUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Protocol", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -90244,32 +100914,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &BotUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_BotUpdate{v} + m.Protocol = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 146: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CreateMFAAuthChallenge", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field 
Username", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -90279,32 +100946,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &CreateMFAAuthChallenge{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_CreateMFAAuthChallenge{v} + m.Username = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 147: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ValidateMFAAuthResponse", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Database", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -90314,32 +100978,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &ValidateMFAAuthResponse{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_ValidateMFAAuthResponse{v} + m.Database = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 148: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field OktaAccessListSync", wireType) + return 
fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -90349,32 +101010,80 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &OktaAccessListSync{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.Roles = append(m.Roles, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { return err } - m.Event = &OneOf_OktaAccessListSync{v} - iNdEx = postIndex - case 149: + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *DeviceExtensions) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: DeviceExtensions: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: DeviceExtensions: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabasePermissionUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DeviceId", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -90384,32 +101093,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &DatabasePermissionUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_DatabasePermissionUpdate{v} + m.DeviceId = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 150: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SPIFFESVIDIssued", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AssetTag", 
wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -90419,32 +101125,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &SPIFFESVIDIssued{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_SPIFFESVIDIssued{v} + m.AssetTag = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 151: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field OktaUserSync", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CredentialId", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -90454,30 +101157,78 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &OktaUserSync{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.CredentialId = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { return err } - m.Event = &OneOf_OktaUserSync{v} - iNdEx = postIndex - case 152: + if (skippy < 0) || (iNdEx+skippy) < 0 { + return 
ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AccessRequestResourceSearch) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AccessRequestResourceSearch: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AccessRequestResourceSearch: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AuthPreferenceUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -90504,15 +101255,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &AuthPreferenceUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_AuthPreferenceUpdate{v} iNdEx = postIndex - case 153: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionRecordingConfigUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -90539,17 +101288,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := 
&SessionRecordingConfigUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SessionRecordingConfigUpdate{v} iNdEx = postIndex - case 154: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ClusterNetworkingConfigUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SearchAsRoles", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -90559,32 +101306,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &ClusterNetworkingConfigUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_ClusterNetworkingConfigUpdate{v} + m.SearchAsRoles = append(m.SearchAsRoles, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 155: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseUserCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceType", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -90594,32 +101338,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + 
msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &DatabaseUserCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_DatabaseUserCreate{v} + m.ResourceType = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 156: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseUserDeactivate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Namespace", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -90629,30 +101370,27 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &DatabaseUserDeactivate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_DatabaseUserDeactivate{v} + m.Namespace = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 157: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessPathChanged", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Labels", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -90679,17 +101417,109 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &AccessPathChanged{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + if m.Labels == nil { + m.Labels = make(map[string]string) } - m.Event = &OneOf_AccessPathChanged{v} + var mapkey string + var 
mapvalue string + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthEvents + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey < 0 { + return ErrInvalidLengthEvents + } + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var stringLenmapvalue uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapvalue |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapvalue := int(stringLenmapvalue) + if intStringLenmapvalue < 0 { + return ErrInvalidLengthEvents + } + postStringIndexmapvalue := iNdEx + intStringLenmapvalue + if postStringIndexmapvalue < 0 { + return ErrInvalidLengthEvents + } + if postStringIndexmapvalue > l { + return io.ErrUnexpectedEOF + } + mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) + iNdEx = postStringIndexmapvalue + } else { + iNdEx = entryPreIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > postIndex { + return 
io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + m.Labels[mapkey] = mapvalue iNdEx = postIndex - case 158: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SpannerRPC", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PredicateExpression", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -90699,32 +101529,29 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &SpannerRPC{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_SpannerRPC{v} + m.PredicateExpression = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 159: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseSessionCommandResult", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SearchKeywords", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -90734,30 +101561,78 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &DatabaseSessionCommandResult{} - if err := 
v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.SearchKeywords = append(m.SearchKeywords, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { return err } - m.Event = &OneOf_DatabaseSessionCommandResult{v} - iNdEx = postIndex - case 160: + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *MySQLStatementPrepare) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MySQLStatementPrepare: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: MySQLStatementPrepare: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DiscoveryConfigCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -90784,15 +101659,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &DiscoveryConfigCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_DiscoveryConfigCreate{v} iNdEx = postIndex - case 161: + case 2: if wireType != 
2 { - return fmt.Errorf("proto: wrong wireType = %d for field DiscoveryConfigUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -90819,15 +101692,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &DiscoveryConfigUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_DiscoveryConfigUpdate{v} iNdEx = postIndex - case 162: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DiscoveryConfigDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -90854,15 +101725,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &DiscoveryConfigDelete{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_DiscoveryConfigDelete{v} iNdEx = postIndex - case 163: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DiscoveryConfigDeleteAll", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -90889,17 +101758,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &DiscoveryConfigDeleteAll{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_DiscoveryConfigDeleteAll{v} iNdEx = postIndex - case 164: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field 
AccessGraphSettingsUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Query", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -90909,65 +101776,78 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &AccessGraphSettingsUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_AccessGraphSettingsUpdate{v} + m.Query = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 165: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field IntegrationCreate", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err } - if msglen < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF } - if postIndex > l { + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *MySQLStatementExecute) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { return io.ErrUnexpectedEOF } - v := &IntegrationCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break } - m.Event = &OneOf_IntegrationCreate{v} - iNdEx = postIndex - case 166: + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MySQLStatementExecute: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: MySQLStatementExecute: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field IntegrationUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -90994,15 +101874,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &IntegrationUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_IntegrationUpdate{v} iNdEx = postIndex - case 167: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field IntegrationDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -91029,15 +101907,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &IntegrationDelete{} - if err := 
v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_IntegrationDelete{v} iNdEx = postIndex - case 168: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SPIFFEFederationCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -91064,15 +101940,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SPIFFEFederationCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SPIFFEFederationCreate{v} iNdEx = postIndex - case 169: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SPIFFEFederationDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -91099,17 +101973,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SPIFFEFederationDelete{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SPIFFEFederationDelete{v} iNdEx = postIndex - case 170: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PluginCreate", wireType) + case 5: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field StatementID", wireType) } - var msglen int + m.StatementID = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -91119,32 +101991,16 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.StatementID |= uint32(b&0x7F) 
<< shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - v := &PluginCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_PluginCreate{v} - iNdEx = postIndex - case 171: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PluginUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Parameters", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -91154,30 +102010,78 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &PluginUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.Parameters = append(m.Parameters, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { return err } - m.Event = &OneOf_PluginUpdate{v} - iNdEx = postIndex - case 172: + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *MySQLStatementSendLongData) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MySQLStatementSendLongData: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: MySQLStatementSendLongData: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PluginDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -91204,15 +102108,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &PluginDelete{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_PluginDelete{v} iNdEx = postIndex - case 173: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AutoUpdateConfigCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -91239,15 +102141,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &AutoUpdateConfigCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = 
&OneOf_AutoUpdateConfigCreate{v} iNdEx = postIndex - case 174: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AutoUpdateConfigUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -91274,15 +102174,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &AutoUpdateConfigUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_AutoUpdateConfigUpdate{v} iNdEx = postIndex - case 175: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AutoUpdateConfigDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -91309,17 +102207,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &AutoUpdateConfigDelete{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_AutoUpdateConfigDelete{v} iNdEx = postIndex - case 176: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AutoUpdateVersionCreate", wireType) + case 5: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field StatementID", wireType) } - var msglen int + m.StatementID = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -91329,32 +102225,16 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.StatementID |= uint32(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex 
< 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - v := &AutoUpdateVersionCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_AutoUpdateVersionCreate{v} - iNdEx = postIndex - case 177: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AutoUpdateVersionUpdate", wireType) + case 6: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field ParameterID", wireType) } - var msglen int + m.ParameterID = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -91364,32 +102244,16 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.ParameterID |= uint32(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - v := &AutoUpdateVersionUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_AutoUpdateVersionUpdate{v} - iNdEx = postIndex - case 178: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AutoUpdateVersionDelete", wireType) + case 7: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field DataSize", wireType) } - var msglen int + m.DataSize = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -91399,30 +102263,65 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.DataSize |= uint32(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err } - postIndex := iNdEx + msglen - if postIndex < 0 { + if (skippy < 0) || 
(iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - if postIndex > l { + if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - v := &AutoUpdateVersionDelete{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *MySQLStatementClose) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents } - m.Event = &OneOf_AutoUpdateVersionDelete{v} - iNdEx = postIndex - case 179: + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MySQLStatementClose: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: MySQLStatementClose: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field StaticHostUserCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -91449,15 +102348,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &StaticHostUserCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_StaticHostUserCreate{v} iNdEx = postIndex - case 180: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field StaticHostUserUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field 
UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -91484,15 +102381,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &StaticHostUserUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_StaticHostUserUpdate{v} iNdEx = postIndex - case 181: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field StaticHostUserDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -91519,15 +102414,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &StaticHostUserDelete{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_StaticHostUserDelete{v} iNdEx = postIndex - case 182: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CrownJewelCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -91554,17 +102447,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &CrownJewelCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_CrownJewelCreate{v} iNdEx = postIndex - case 183: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CrownJewelUpdate", wireType) + case 5: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field StatementID", wireType) } - var msglen int + m.StatementID = 0 for shift := uint(0); 
; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -91574,30 +102465,65 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.StatementID |= uint32(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err } - postIndex := iNdEx + msglen - if postIndex < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - if postIndex > l { + if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - v := &CrownJewelUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *MySQLStatementReset) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents } - m.Event = &OneOf_CrownJewelUpdate{v} - iNdEx = postIndex - case 184: + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MySQLStatementReset: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: MySQLStatementReset: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CrownJewelDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -91624,15 +102550,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l 
{ return io.ErrUnexpectedEOF } - v := &CrownJewelDelete{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_CrownJewelDelete{v} iNdEx = postIndex - case 188: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserTaskCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -91659,15 +102583,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &UserTaskCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_UserTaskCreate{v} iNdEx = postIndex - case 189: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserTaskUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -91694,15 +102616,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &UserTaskUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_UserTaskUpdate{v} iNdEx = postIndex - case 190: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserTaskDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -91729,17 +102649,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &UserTaskDelete{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := 
m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_UserTaskDelete{v} iNdEx = postIndex - case 191: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SFTPSummary", wireType) + case 5: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field StatementID", wireType) } - var msglen int + m.StatementID = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -91749,30 +102667,65 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.StatementID |= uint32(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err } - postIndex := iNdEx + msglen - if postIndex < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - if postIndex > l { + if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - v := &SFTPSummary{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *MySQLStatementFetch) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents } - m.Event = &OneOf_SFTPSummary{v} - iNdEx = postIndex - case 192: + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MySQLStatementFetch: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: MySQLStatementFetch: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ContactCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -91799,15 +102752,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &ContactCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_ContactCreate{v} iNdEx = postIndex - case 193: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ContactDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -91834,15 +102785,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &ContactDelete{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err 
} - m.Event = &OneOf_ContactDelete{v} iNdEx = postIndex - case 194: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentityCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -91869,15 +102818,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &WorkloadIdentityCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_WorkloadIdentityCreate{v} iNdEx = postIndex - case 195: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentityUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -91904,17 +102851,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &WorkloadIdentityUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_WorkloadIdentityUpdate{v} iNdEx = postIndex - case 196: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentityDelete", wireType) + case 5: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field StatementID", wireType) } - var msglen int + m.StatementID = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -91924,32 +102869,16 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.StatementID |= uint32(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if 
postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - v := &WorkloadIdentityDelete{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_WorkloadIdentityDelete{v} - iNdEx = postIndex - case 197: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field GitCommand", wireType) + case 6: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field RowsCount", wireType) } - var msglen int + m.RowsCount = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -91959,30 +102888,65 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.RowsCount |= uint32(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err } - postIndex := iNdEx + msglen - if postIndex < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - if postIndex > l { + if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - v := &GitCommand{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *MySQLStatementBulkExecute) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents } - m.Event = &OneOf_GitCommand{v} - iNdEx = postIndex - case 198: + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MySQLStatementBulkExecute: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: MySQLStatementBulkExecute: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserLoginAccessListInvalid", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -92009,15 +102973,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &UserLoginAccessListInvalid{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_UserLoginAccessListInvalid{v} iNdEx = postIndex - case 199: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessRequestExpire", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -92044,15 +103006,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &AccessRequestExpire{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := 
m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_AccessRequestExpire{v} iNdEx = postIndex - case 200: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field StableUNIXUserCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -92079,15 +103039,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &StableUNIXUserCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_StableUNIXUserCreate{v} iNdEx = postIndex - case 201: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentityX509RevocationCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -92114,17 +103072,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &WorkloadIdentityX509RevocationCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_WorkloadIdentityX509RevocationCreate{v} iNdEx = postIndex - case 202: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentityX509RevocationDelete", wireType) + case 5: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field StatementID", wireType) } - var msglen int + m.StatementID = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -92134,32 +103090,16 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.StatementID |= 
uint32(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - v := &WorkloadIdentityX509RevocationDelete{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_WorkloadIdentityX509RevocationDelete{v} - iNdEx = postIndex - case 203: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentityX509RevocationUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Parameters", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -92169,65 +103109,78 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &WorkloadIdentityX509RevocationUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_WorkloadIdentityX509RevocationUpdate{v} + m.Parameters = append(m.Parameters, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 204: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AWSICResourceSync", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != 
nil { + return err } - if msglen < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF } - if postIndex > l { + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *MySQLInitDB) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { return io.ErrUnexpectedEOF } - v := &AWSICResourceSync{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break } - m.Event = &OneOf_AWSICResourceSync{v} - iNdEx = postIndex - case 205: + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MySQLInitDB: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: MySQLInitDB: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field HealthCheckConfigCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -92254,15 +103207,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &HealthCheckConfigCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_HealthCheckConfigCreate{v} iNdEx = postIndex - case 206: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d 
for field HealthCheckConfigUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -92289,15 +103240,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &HealthCheckConfigUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_HealthCheckConfigUpdate{v} iNdEx = postIndex - case 207: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field HealthCheckConfigDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -92324,15 +103273,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &HealthCheckConfigDelete{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_HealthCheckConfigDelete{v} iNdEx = postIndex - case 208: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentityX509IssuerOverrideCreate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -92359,17 +103306,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &WorkloadIdentityX509IssuerOverrideCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_WorkloadIdentityX509IssuerOverrideCreate{v} iNdEx = postIndex - case 209: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field 
WorkloadIdentityX509IssuerOverrideDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SchemaName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -92379,65 +103324,78 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &WorkloadIdentityX509IssuerOverrideDelete{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_WorkloadIdentityX509IssuerOverrideDelete{v} + m.SchemaName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 210: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SigstorePolicyCreate", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err } - if msglen < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF } - if postIndex > l { + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *MySQLCreateDB) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { return io.ErrUnexpectedEOF } - v := &SigstorePolicyCreate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break } - m.Event = &OneOf_SigstorePolicyCreate{v} - iNdEx = postIndex - case 211: + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MySQLCreateDB: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: MySQLCreateDB: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SigstorePolicyUpdate", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -92464,15 +103422,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SigstorePolicyUpdate{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SigstorePolicyUpdate{v} iNdEx = postIndex - case 212: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SigstorePolicyDelete", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -92499,15 +103455,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &SigstorePolicyDelete{} - if err := 
v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_SigstorePolicyDelete{v} iNdEx = postIndex - case 213: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AutoUpdateAgentRolloutTrigger", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -92534,15 +103488,13 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &AutoUpdateAgentRolloutTrigger{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_AutoUpdateAgentRolloutTrigger{v} iNdEx = postIndex - case 214: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AutoUpdateAgentRolloutForceDone", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -92569,17 +103521,15 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - v := &AutoUpdateAgentRolloutForceDone{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - m.Event = &OneOf_AutoUpdateAgentRolloutForceDone{v} iNdEx = postIndex - case 215: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AutoUpdateAgentRolloutRollback", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SchemaName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -92589,26 +103539,23 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= 
int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - v := &AutoUpdateAgentRolloutRollback{} - if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - m.Event = &OneOf_AutoUpdateAgentRolloutRollback{v} + m.SchemaName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -92632,7 +103579,7 @@ func (m *OneOf) Unmarshal(dAtA []byte) error { } return nil } -func (m *StreamStatus) Unmarshal(dAtA []byte) error { +func (m *MySQLDropDB) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -92655,17 +103602,17 @@ func (m *StreamStatus) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: StreamStatus: wiretype end group for non-group") + return fmt.Errorf("proto: MySQLDropDB: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: StreamStatus: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: MySQLDropDB: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UploadID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -92675,46 +103622,28 @@ func (m *StreamStatus) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return 
ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.UploadID = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field LastEventIndex", wireType) - } - m.LastEventIndex = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.LastEventIndex |= int64(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field LastUploadTime", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -92741,64 +103670,13 @@ func (m *StreamStatus) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.LastUploadTime, dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *SessionUpload) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: SessionUpload: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: SessionUpload: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -92825,13 +103703,13 @@ func (m *SessionUpload) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -92858,13 +103736,13 @@ func (m *SessionUpload) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d 
for field SessionURL", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SchemaName", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -92892,7 +103770,7 @@ func (m *SessionUpload) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.SessionURL = string(dAtA[iNdEx:postIndex]) + m.SchemaName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -92916,7 +103794,7 @@ func (m *SessionUpload) Unmarshal(dAtA []byte) error { } return nil } -func (m *Identity) Unmarshal(dAtA []byte) error { +func (m *MySQLShutDown) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -92939,17 +103817,17 @@ func (m *Identity) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: Identity: wiretype end group for non-group") + return fmt.Errorf("proto: MySQLShutDown: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: Identity: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: MySQLShutDown: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field User", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -92959,29 +103837,30 @@ func (m *Identity) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.User = 
string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Impersonator", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -92991,29 +103870,30 @@ func (m *Identity) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Impersonator = string(dAtA[iNdEx:postIndex]) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -93023,29 +103903,30 @@ func (m *Identity) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Roles = append(m.Roles, string(dAtA[iNdEx:postIndex])) + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != 
nil { + return err + } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Usage", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -93055,29 +103936,81 @@ func (m *Identity) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Usage = append(m.Usage, string(dAtA[iNdEx:postIndex])) + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 5: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *MySQLProcessKill) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MySQLProcessKill: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: MySQLProcessKill: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Logins", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -93087,29 +104020,30 @@ func (m *Identity) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Logins = append(m.Logins, string(dAtA[iNdEx:postIndex])) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 6: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubernetesGroups", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := 
uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -93119,29 +104053,30 @@ func (m *Identity) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.KubernetesGroups = append(m.KubernetesGroups, string(dAtA[iNdEx:postIndex])) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 7: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubernetesUsers", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -93151,27 +104086,28 @@ func (m *Identity) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.KubernetesUsers = append(m.KubernetesUsers, string(dAtA[iNdEx:postIndex])) + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 8: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Expires", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -93198,15 
+104134,15 @@ func (m *Identity) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.Expires, dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 9: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RouteToCluster", wireType) + case 5: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field ProcessID", wireType) } - var stringLen uint64 + m.ProcessID = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -93216,29 +104152,67 @@ func (m *Identity) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + m.ProcessID |= uint32(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err } - postIndex := iNdEx + intStringLen - if postIndex < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - if postIndex > l { + if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.RouteToCluster = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 10: + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *MySQLDebug) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MySQLDebug: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: MySQLDebug: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubernetesCluster", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -93248,27 +104222,28 @@ func (m *Identity) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.KubernetesCluster = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 11: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Traits", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -93295,13 +104270,13 @@ 
func (m *Identity) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Traits.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 12: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RouteToApp", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -93328,18 +104303,15 @@ func (m *Identity) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.RouteToApp == nil { - m.RouteToApp = &RouteToApp{} - } - if err := m.RouteToApp.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 13: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TeleportCluster", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -93349,27 +104321,79 @@ func (m *Identity) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.TeleportCluster = string(dAtA[iNdEx:postIndex]) + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 14: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if 
(skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *MySQLRefresh) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MySQLRefresh: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: MySQLRefresh: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RouteToDatabase", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -93396,18 +104420,15 @@ func (m *Identity) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.RouteToDatabase == nil { - m.RouteToDatabase = &RouteToDatabase{} - } - if err := m.RouteToDatabase.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 15: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseNames", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -93417,29 +104438,30 @@ func (m *Identity) 
Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.DatabaseNames = append(m.DatabaseNames, string(dAtA[iNdEx:postIndex])) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 16: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseUsers", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -93449,29 +104471,30 @@ func (m *Identity) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.DatabaseUsers = append(m.DatabaseUsers, string(dAtA[iNdEx:postIndex])) + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 17: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MFADeviceUUID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -93481,27 +104504,28 @@ func (m *Identity) Unmarshal(dAtA 
[]byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.MFADeviceUUID = string(dAtA[iNdEx:postIndex]) + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 18: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ClientIP", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Subcommand", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -93529,13 +104553,64 @@ func (m *Identity) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ClientIP = string(dAtA[iNdEx:postIndex]) + m.Subcommand = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 19: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *SQLServerRPCRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SQLServerRPCRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SQLServerRPCRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AWSRoleARNs", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -93545,29 +104620,30 @@ func (m *Identity) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.AWSRoleARNs = append(m.AWSRoleARNs, string(dAtA[iNdEx:postIndex])) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 20: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessRequests", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen 
int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -93577,29 +104653,30 @@ func (m *Identity) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.AccessRequests = append(m.AccessRequests, string(dAtA[iNdEx:postIndex])) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 21: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field DisallowReissue", wireType) + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } - var v int + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -93609,15 +104686,28 @@ func (m *Identity) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - v |= int(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - m.DisallowReissue = bool(v != 0) - case 22: + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AllowedResourceIDs", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -93644,16 +104734,15 @@ func (m *Identity) Unmarshal(dAtA []byte) error { if postIndex > l { return 
io.ErrUnexpectedEOF } - m.AllowedResourceIDs = append(m.AllowedResourceIDs, ResourceID{}) - if err := m.AllowedResourceIDs[len(m.AllowedResourceIDs)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 23: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PreviousIdentityExpires", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Procname", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -93663,28 +104752,27 @@ func (m *Identity) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.PreviousIdentityExpires, dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Procname = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 24: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AzureIdentities", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Parameters", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -93712,13 +104800,64 @@ func (m *Identity) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AzureIdentities = append(m.AzureIdentities, string(dAtA[iNdEx:postIndex])) + m.Parameters = append(m.Parameters, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 25: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + 
if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *DatabaseSessionMalformedPacket) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: DatabaseSessionMalformedPacket: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: DatabaseSessionMalformedPacket: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field GCPServiceAccounts", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -93728,29 +104867,30 @@ func (m *Identity) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.GCPServiceAccounts = append(m.GCPServiceAccounts, string(dAtA[iNdEx:postIndex])) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return 
err + } iNdEx = postIndex - case 26: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PrivateKeyPolicy", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -93760,29 +104900,30 @@ func (m *Identity) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.PrivateKeyPolicy = string(dAtA[iNdEx:postIndex]) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 27: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field BotName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -93792,27 +104933,28 @@ func (m *Identity) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.BotName = string(dAtA[iNdEx:postIndex]) + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 28: + case 4: if wireType != 2 { - return 
fmt.Errorf("proto: wrong wireType = %d for field DeviceExtensions", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -93839,18 +104981,15 @@ func (m *Identity) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.DeviceExtensions == nil { - m.DeviceExtensions = &DeviceExtensions{} - } - if err := m.DeviceExtensions.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 29: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field BotInstanceID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Payload", wireType) } - var stringLen uint64 + var byteLen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -93860,23 +104999,25 @@ func (m *Identity) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + byteLen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if byteLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + byteLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.BotInstanceID = string(dAtA[iNdEx:postIndex]) + m.Payload = append(m.Payload[:0], dAtA[iNdEx:postIndex]...) 
+ if m.Payload == nil { + m.Payload = []byte{} + } iNdEx = postIndex default: iNdEx = preIndex @@ -93900,7 +105041,7 @@ func (m *Identity) Unmarshal(dAtA []byte) error { } return nil } -func (m *RouteToApp) Unmarshal(dAtA []byte) error { +func (m *ElasticsearchRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -93923,17 +105064,17 @@ func (m *RouteToApp) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: RouteToApp: wiretype end group for non-group") + return fmt.Errorf("proto: ElasticsearchRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: RouteToApp: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ElasticsearchRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -93943,29 +105084,30 @@ func (m *RouteToApp) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Name = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) 
} - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -93975,29 +105117,30 @@ func (m *RouteToApp) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.SessionID = string(dAtA[iNdEx:postIndex]) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PublicAddr", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -94007,29 +105150,30 @@ func (m *RouteToApp) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.PublicAddr = string(dAtA[iNdEx:postIndex]) + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ClusterName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { 
return ErrIntOverflowEvents @@ -94039,27 +105183,28 @@ func (m *RouteToApp) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.ClusterName = string(dAtA[iNdEx:postIndex]) + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AWSRoleARN", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -94087,11 +105232,11 @@ func (m *RouteToApp) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AWSRoleARN = string(dAtA[iNdEx:postIndex]) + m.Path = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AzureIdentity", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RawQuery", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -94119,145 +105264,11 @@ func (m *RouteToApp) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AzureIdentity = string(dAtA[iNdEx:postIndex]) + m.RawQuery = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field GCPServiceAccount", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { 
- break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.GCPServiceAccount = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 8: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field URI", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.URI = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 9: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field TargetPort", wireType) - } - m.TargetPort = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.TargetPort |= uint32(b&0x7F) << shift - if b < 0x80 { - break - } - } - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *RouteToDatabase) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: RouteToDatabase: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: RouteToDatabase: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServiceName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Method", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -94285,13 +105296,13 @@ func (m *RouteToDatabase) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ServiceName = string(dAtA[iNdEx:postIndex]) + m.Method = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Protocol", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Body", wireType) } - var stringLen uint64 + var byteLen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -94301,29 +105312,31 @@ func (m *RouteToDatabase) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + byteLen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if byteLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + byteLen if postIndex < 0 { return 
ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Protocol = string(dAtA[iNdEx:postIndex]) + m.Body = append(m.Body[:0], dAtA[iNdEx:postIndex]...) + if m.Body == nil { + m.Body = []byte{} + } iNdEx = postIndex - case 3: + case 9: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Username", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Headers", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -94333,27 +105346,47 @@ func (m *RouteToDatabase) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Username = string(dAtA[iNdEx:postIndex]) + if err := m.Headers.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 4: + case 10: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Category", wireType) + } + m.Category = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Category |= ElasticsearchCategory(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 11: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Database", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Target", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -94381,11 +105414,11 @@ func (m *RouteToDatabase) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Database = 
string(dAtA[iNdEx:postIndex]) + m.Target = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 5: + case 12: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Query", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -94413,8 +105446,27 @@ func (m *RouteToDatabase) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Roles = append(m.Roles, string(dAtA[iNdEx:postIndex])) + m.Query = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 13: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field StatusCode", wireType) + } + m.StatusCode = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.StatusCode |= uint32(b&0x7F) << shift + if b < 0x80 { + break + } + } default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -94437,7 +105489,7 @@ func (m *RouteToDatabase) Unmarshal(dAtA []byte) error { } return nil } -func (m *DeviceExtensions) Unmarshal(dAtA []byte) error { +func (m *OpenSearchRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -94460,17 +105512,17 @@ func (m *DeviceExtensions) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: DeviceExtensions: wiretype end group for non-group") + return fmt.Errorf("proto: OpenSearchRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: DeviceExtensions: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: OpenSearchRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DeviceId", wireType) + return fmt.Errorf("proto: wrong 
wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -94480,29 +105532,30 @@ func (m *DeviceExtensions) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.DeviceId = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AssetTag", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -94512,29 +105565,30 @@ func (m *DeviceExtensions) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.AssetTag = string(dAtA[iNdEx:postIndex]) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CredentialId", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := 
uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -94544,78 +105598,28 @@ func (m *DeviceExtensions) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.CredentialId = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *AccessRequestResourceSearch) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: AccessRequestResourceSearch: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: AccessRequestResourceSearch: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + iNdEx = postIndex + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -94642,15 +105646,15 @@ func (m *AccessRequestResourceSearch) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -94660,28 +105664,27 @@ func (m *AccessRequestResourceSearch) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := 
int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Path = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SearchAsRoles", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RawQuery", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -94709,11 +105712,11 @@ func (m *AccessRequestResourceSearch) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.SearchAsRoles = append(m.SearchAsRoles, string(dAtA[iNdEx:postIndex])) + m.RawQuery = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceType", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Method", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -94741,13 +105744,13 @@ func (m *AccessRequestResourceSearch) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ResourceType = string(dAtA[iNdEx:postIndex]) + m.Method = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 5: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Namespace", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Body", wireType) } - var stringLen uint64 + var byteLen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -94757,27 +105760,29 @@ func (m *AccessRequestResourceSearch) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + byteLen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := 
int(stringLen) - if intStringLen < 0 { + if byteLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + byteLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Namespace = string(dAtA[iNdEx:postIndex]) + m.Body = append(m.Body[:0], dAtA[iNdEx:postIndex]...) + if m.Body == nil { + m.Body = []byte{} + } iNdEx = postIndex - case 6: + case 9: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Labels", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Headers", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -94804,107 +105809,32 @@ func (m *AccessRequestResourceSearch) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Labels == nil { - m.Labels = make(map[string]string) + if err := m.Headers.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - var mapkey string - var mapvalue string - for iNdEx < postIndex { - entryPreIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } + iNdEx = postIndex + case 10: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Category", wireType) + } + m.Category = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents } - fieldNum := int32(wire >> 3) - if fieldNum == 1 { - var stringLenmapkey uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapkey |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapkey := int(stringLenmapkey) - if intStringLenmapkey < 0 { - return ErrInvalidLengthEvents - } - 
postStringIndexmapkey := iNdEx + intStringLenmapkey - if postStringIndexmapkey < 0 { - return ErrInvalidLengthEvents - } - if postStringIndexmapkey > l { - return io.ErrUnexpectedEOF - } - mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) - iNdEx = postStringIndexmapkey - } else if fieldNum == 2 { - var stringLenmapvalue uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapvalue |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapvalue := int(stringLenmapvalue) - if intStringLenmapvalue < 0 { - return ErrInvalidLengthEvents - } - postStringIndexmapvalue := iNdEx + intStringLenmapvalue - if postStringIndexmapvalue < 0 { - return ErrInvalidLengthEvents - } - if postStringIndexmapvalue > l { - return io.ErrUnexpectedEOF - } - mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) - iNdEx = postStringIndexmapvalue - } else { - iNdEx = entryPreIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > postIndex { - return io.ErrUnexpectedEOF - } - iNdEx += skippy + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Category |= OpenSearchCategory(b&0x7F) << shift + if b < 0x80 { + break } } - m.Labels[mapkey] = mapvalue - iNdEx = postIndex - case 7: + case 11: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PredicateExpression", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Target", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -94932,11 +105862,11 @@ func (m *AccessRequestResourceSearch) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.PredicateExpression = string(dAtA[iNdEx:postIndex]) + m.Target = string(dAtA[iNdEx:postIndex]) iNdEx = 
postIndex - case 8: + case 12: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SearchKeywords", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Query", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -94964,8 +105894,27 @@ func (m *AccessRequestResourceSearch) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.SearchKeywords = append(m.SearchKeywords, string(dAtA[iNdEx:postIndex])) + m.Query = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 13: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field StatusCode", wireType) + } + m.StatusCode = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.StatusCode |= uint32(b&0x7F) << shift + if b < 0x80 { + break + } + } default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -94988,7 +105937,7 @@ func (m *AccessRequestResourceSearch) Unmarshal(dAtA []byte) error { } return nil } -func (m *MySQLStatementPrepare) Unmarshal(dAtA []byte) error { +func (m *DynamoDBRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -95011,10 +105960,10 @@ func (m *MySQLStatementPrepare) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: MySQLStatementPrepare: wiretype end group for non-group") + return fmt.Errorf("proto: DynamoDBRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: MySQLStatementPrepare: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: DynamoDBRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -95145,131 +106094,34 @@ func (m *MySQLStatementPrepare) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := 
m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Query", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Query = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *MySQLStatementExecute) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: MySQLStatementExecute: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: MySQLStatementExecute: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 5: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field StatusCode", wireType) + } + m.StatusCode = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.StatusCode |= uint32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong 
wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -95279,30 +106131,29 @@ func (m *MySQLStatementExecute) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Path = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RawQuery", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -95312,30 +106163,29 @@ func (m *MySQLStatementExecute) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.RawQuery = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return 
fmt.Errorf("proto: wrong wireType = %d for field Method", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -95345,30 +106195,29 @@ func (m *MySQLStatementExecute) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Method = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 5: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field StatementID", wireType) + case 9: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Target", wireType) } - m.StatementID = 0 + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -95378,16 +106227,29 @@ func (m *MySQLStatementExecute) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.StatementID |= uint32(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - case 6: + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Target = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 10: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Parameters", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Body", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { 
return ErrIntOverflowEvents @@ -95397,23 +106259,27 @@ func (m *MySQLStatementExecute) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Parameters = append(m.Parameters, string(dAtA[iNdEx:postIndex])) + if m.Body == nil { + m.Body = &Struct{} + } + if err := m.Body.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -95437,7 +106303,7 @@ func (m *MySQLStatementExecute) Unmarshal(dAtA []byte) error { } return nil } -func (m *MySQLStatementSendLongData) Unmarshal(dAtA []byte) error { +func (m *AppSessionDynamoDBRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -95460,10 +106326,10 @@ func (m *MySQLStatementSendLongData) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: MySQLStatementSendLongData: wiretype end group for non-group") + return fmt.Errorf("proto: AppSessionDynamoDBRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: MySQLStatementSendLongData: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AppSessionDynamoDBRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -95534,7 +106400,7 @@ func (m *MySQLStatementSendLongData) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppMetadata", wireType) } var msglen int for shift := uint(0); ; shift 
+= 7 { @@ -95561,13 +106427,13 @@ func (m *MySQLStatementSendLongData) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.AppMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AWSRequestMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -95594,15 +106460,15 @@ func (m *MySQLStatementSendLongData) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.AWSRequestMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 5: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field StatementID", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SessionChunkID", wireType) } - m.StatementID = 0 + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -95612,16 +106478,29 @@ func (m *MySQLStatementSendLongData) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.StatementID |= uint32(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.SessionChunkID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex case 6: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field ParameterID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StatusCode", wireType) } - 
m.ParameterID = 0 + m.StatusCode = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -95631,16 +106510,16 @@ func (m *MySQLStatementSendLongData) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.ParameterID |= uint32(b&0x7F) << shift + m.StatusCode |= uint32(b&0x7F) << shift if b < 0x80 { break } } case 7: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field DataSize", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) } - m.DataSize = 0 + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -95650,67 +106529,29 @@ func (m *MySQLStatementSendLongData) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.DataSize |= uint32(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *MySQLStatementClose) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents } - if iNdEx >= l { + if postIndex > l { return io.ErrUnexpectedEOF } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: MySQLStatementClose: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: MySQLStatementClose: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + m.Path = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RawQuery", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -95720,30 +106561,29 @@ func (m *MySQLStatementClose) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.RawQuery = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: + case 9: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d 
for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Method", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -95753,30 +106593,29 @@ func (m *MySQLStatementClose) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Method = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: + case 10: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Target", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -95786,28 +106625,27 @@ func (m *MySQLStatementClose) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Target = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 11: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong 
wireType = %d for field Body", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -95834,29 +106672,13 @@ func (m *MySQLStatementClose) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.Body == nil { + m.Body = &Struct{} + } + if err := m.Body.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field StatementID", wireType) - } - m.StatementID = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.StatementID |= uint32(b&0x7F) << shift - if b < 0x80 { - break - } - } default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -95879,7 +106701,7 @@ func (m *MySQLStatementClose) Unmarshal(dAtA []byte) error { } return nil } -func (m *MySQLStatementReset) Unmarshal(dAtA []byte) error { +func (m *UpgradeWindowStartMetadata) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -95902,116 +106724,17 @@ func (m *MySQLStatementReset) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: MySQLStatementReset: wiretype end group for non-group") + return fmt.Errorf("proto: UpgradeWindowStartMetadata: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: MySQLStatementReset: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UpgradeWindowStartMetadata: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - 
return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UpgradeWindowStart", wireType) } - 
var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -96021,44 +106744,24 @@ func (m *MySQLStatementReset) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.UpgradeWindowStart = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 5: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field StatementID", wireType) - } - m.StatementID = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.StatementID |= uint32(b&0x7F) << shift - if b < 0x80 { - break - } - } default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -96081,7 +106784,7 @@ func (m *MySQLStatementReset) Unmarshal(dAtA []byte) error { } return nil } -func (m *MySQLStatementFetch) Unmarshal(dAtA []byte) error { +func (m *UpgradeWindowStartUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -96104,10 +106807,10 @@ func (m *MySQLStatementFetch) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: MySQLStatementFetch: wiretype end group for non-group") + return fmt.Errorf("proto: UpgradeWindowStartUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: MySQLStatementFetch: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: 
UpgradeWindowStartUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -96211,7 +106914,7 @@ func (m *MySQLStatementFetch) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UpgradeWindowStartMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -96238,48 +106941,10 @@ func (m *MySQLStatementFetch) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UpgradeWindowStartMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field StatementID", wireType) - } - m.StatementID = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.StatementID |= uint32(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 6: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field RowsCount", wireType) - } - m.RowsCount = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.RowsCount |= uint32(b&0x7F) << shift - if b < 0x80 { - break - } - } default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -96302,7 +106967,7 @@ func (m *MySQLStatementFetch) Unmarshal(dAtA []byte) error { } return nil } -func (m *MySQLStatementBulkExecute) Unmarshal(dAtA []byte) error { +func (m *SessionRecordingAccess) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -96325,10 +106990,10 @@ func (m *MySQLStatementBulkExecute) Unmarshal(dAtA []byte) error 
{ fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: MySQLStatementBulkExecute: wiretype end group for non-group") + return fmt.Errorf("proto: SessionRecordingAccess: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: MySQLStatementBulkExecute: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SessionRecordingAccess: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -96366,9 +107031,9 @@ func (m *MySQLStatementBulkExecute) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -96378,28 +107043,27 @@ func (m *MySQLStatementBulkExecute) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.SessionID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -96426,15 +107090,15 @@ func (m *MySQLStatementBulkExecute) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := 
m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionType", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -96444,47 +107108,27 @@ func (m *MySQLStatementBulkExecute) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.SessionType = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 5: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field StatementID", wireType) - } - m.StatementID = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.StatementID |= uint32(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Parameters", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Format", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -96512,7 +107156,7 @@ func (m *MySQLStatementBulkExecute) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Parameters = append(m.Parameters, string(dAtA[iNdEx:postIndex])) + m.Format = 
string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -96536,7 +107180,7 @@ func (m *MySQLStatementBulkExecute) Unmarshal(dAtA []byte) error { } return nil } -func (m *MySQLInitDB) Unmarshal(dAtA []byte) error { +func (m *KubeClusterMetadata) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -96559,15 +107203,15 @@ func (m *MySQLInitDB) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: MySQLInitDB: wiretype end group for non-group") + return fmt.Errorf("proto: KubeClusterMetadata: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: MySQLInitDB: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: KubeClusterMetadata: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field KubeLabels", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -96594,13 +107238,158 @@ func (m *MySQLInitDB) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + if m.KubeLabels == nil { + m.KubeLabels = make(map[string]string) + } + var mapkey string + var mapvalue string + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] 
+ iNdEx++ + stringLenmapkey |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthEvents + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey < 0 { + return ErrInvalidLengthEvents + } + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var stringLenmapvalue uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapvalue |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapvalue := int(stringLenmapvalue) + if intStringLenmapvalue < 0 { + return ErrInvalidLengthEvents + } + postStringIndexmapvalue := iNdEx + intStringLenmapvalue + if postStringIndexmapvalue < 0 { + return ErrInvalidLengthEvents + } + if postStringIndexmapvalue > l { + return io.ErrUnexpectedEOF + } + mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) + iNdEx = postStringIndexmapvalue + } else { + iNdEx = entryPreIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } } + m.KubeLabels[mapkey] = mapvalue iNdEx = postIndex - case 2: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *KubernetesClusterCreate) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: KubernetesClusterCreate: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: KubernetesClusterCreate: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -96627,13 +107416,13 @@ func (m *MySQLInitDB) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 3: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -96660,13 +107449,13 @@ func (m *MySQLInitDB) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong 
wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -96693,15 +107482,15 @@ func (m *MySQLInitDB) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SchemaName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field KubeClusterMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -96711,23 +107500,24 @@ func (m *MySQLInitDB) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.SchemaName = string(dAtA[iNdEx:postIndex]) + if err := m.KubeClusterMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -96751,7 +107541,7 @@ func (m *MySQLInitDB) Unmarshal(dAtA []byte) error { } return nil } -func (m *MySQLCreateDB) Unmarshal(dAtA []byte) error { +func (m *KubernetesClusterUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -96774,10 +107564,10 @@ func (m *MySQLCreateDB) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: MySQLCreateDB: wiretype end group for non-group") 
+ return fmt.Errorf("proto: KubernetesClusterUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: MySQLCreateDB: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: KubernetesClusterUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -96848,7 +107638,7 @@ func (m *MySQLCreateDB) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -96875,13 +107665,13 @@ func (m *MySQLCreateDB) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field KubeClusterMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -96908,42 +107698,10 @@ func (m *MySQLCreateDB) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.KubeClusterMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SchemaName", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if 
intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.SchemaName = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -96966,7 +107724,7 @@ func (m *MySQLCreateDB) Unmarshal(dAtA []byte) error { } return nil } -func (m *MySQLDropDB) Unmarshal(dAtA []byte) error { +func (m *KubernetesClusterDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -96989,10 +107747,10 @@ func (m *MySQLDropDB) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: MySQLDropDB: wiretype end group for non-group") + return fmt.Errorf("proto: KubernetesClusterDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: MySQLDropDB: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: KubernetesClusterDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -97063,40 +107821,7 @@ func (m *MySQLDropDB) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong 
wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -97123,42 +107848,10 @@ func (m *MySQLDropDB) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SchemaName", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.SchemaName = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -97181,7 +107874,7 @@ func (m *MySQLDropDB) Unmarshal(dAtA []byte) error { } return nil } -func (m *MySQLShutDown) Unmarshal(dAtA []byte) error { +func (m *SSMRun) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -97204,10 +107897,10 @@ func (m *MySQLShutDown) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: MySQLShutDown: wiretype end group for non-group") + return fmt.Errorf("proto: SSMRun: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: MySQLShutDown: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SSMRun: illegal tag %d (wire type %d)", 
fieldNum, wire) } switch fieldNum { case 1: @@ -97245,9 +107938,9 @@ func (m *MySQLShutDown) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CommandID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -97257,30 +107950,29 @@ func (m *MySQLShutDown) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.CommandID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field InstanceID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -97290,30 +107982,48 @@ func (m *MySQLShutDown) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.InstanceID = 
string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 4: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field ExitCode", wireType) + } + m.ExitCode = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.ExitCode |= int64(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -97323,81 +108033,29 @@ func (m *MySQLShutDown) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Status = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *MySQLProcessKill) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: MySQLProcessKill: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: MySQLProcessKill: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AccountID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -97407,30 +108065,29 @@ func (m *MySQLProcessKill) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.AccountID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Region", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); 
; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -97440,30 +108097,29 @@ func (m *MySQLProcessKill) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Region = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StandardOutput", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -97473,30 +108129,29 @@ func (m *MySQLProcessKill) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.StandardOutput = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 9: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StandardError", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return 
ErrIntOverflowEvents @@ -97506,30 +108161,29 @@ func (m *MySQLProcessKill) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.StandardError = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 5: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field ProcessID", wireType) + case 10: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field InvocationURL", wireType) } - m.ProcessID = 0 + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -97539,11 +108193,24 @@ func (m *MySQLProcessKill) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.ProcessID |= uint32(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.InvocationURL = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -97566,7 +108233,7 @@ func (m *MySQLProcessKill) Unmarshal(dAtA []byte) error { } return nil } -func (m *MySQLDebug) Unmarshal(dAtA []byte) error { +func (m *CassandraPrepare) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -97589,10 +108256,10 @@ func (m *MySQLDebug) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if 
wireType == 4 { - return fmt.Errorf("proto: MySQLDebug: wiretype end group for non-group") + return fmt.Errorf("proto: CassandraPrepare: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: MySQLDebug: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: CassandraPrepare: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -97727,6 +108394,70 @@ func (m *MySQLDebug) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Query", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Query = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Keyspace", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Keyspace = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -97749,7 +108480,7 @@ func (m *MySQLDebug) 
Unmarshal(dAtA []byte) error { } return nil } -func (m *MySQLRefresh) Unmarshal(dAtA []byte) error { +func (m *CassandraExecute) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -97772,10 +108503,10 @@ func (m *MySQLRefresh) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: MySQLRefresh: wiretype end group for non-group") + return fmt.Errorf("proto: CassandraExecute: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: MySQLRefresh: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: CassandraExecute: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -97912,7 +108643,7 @@ func (m *MySQLRefresh) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Subcommand", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field QueryId", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -97940,7 +108671,7 @@ func (m *MySQLRefresh) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Subcommand = string(dAtA[iNdEx:postIndex]) + m.QueryId = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -97964,7 +108695,7 @@ func (m *MySQLRefresh) Unmarshal(dAtA []byte) error { } return nil } -func (m *SQLServerRPCRequest) Unmarshal(dAtA []byte) error { +func (m *CassandraBatch) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -97987,10 +108718,10 @@ func (m *SQLServerRPCRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SQLServerRPCRequest: wiretype end group for non-group") + return fmt.Errorf("proto: CassandraBatch: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: 
SQLServerRPCRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: CassandraBatch: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -98078,25 +108809,90 @@ func (m *SQLServerRPCRequest) Unmarshal(dAtA []byte) error { break } } - if msglen < 0 { + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Consistency", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := 
m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Consistency = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Keyspace", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -98106,28 +108902,27 @@ func (m *SQLServerRPCRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Keyspace = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 5: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Procname", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field BatchType", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -98155,13 +108950,13 @@ func (m *SQLServerRPCRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Procname = string(dAtA[iNdEx:postIndex]) + m.BatchType = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 6: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Parameters", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Children", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -98171,23 +108966,25 @@ func (m 
*SQLServerRPCRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Parameters = append(m.Parameters, string(dAtA[iNdEx:postIndex])) + m.Children = append(m.Children, &CassandraBatch_BatchChild{}) + if err := m.Children[len(m.Children)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -98211,7 +109008,7 @@ func (m *SQLServerRPCRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *DatabaseSessionMalformedPacket) Unmarshal(dAtA []byte) error { +func (m *CassandraBatch_BatchChild) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -98234,17 +109031,17 @@ func (m *DatabaseSessionMalformedPacket) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: DatabaseSessionMalformedPacket: wiretype end group for non-group") + return fmt.Errorf("proto: BatchChild: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: DatabaseSessionMalformedPacket: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: BatchChild: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -98254,30 +109051,29 @@ func (m *DatabaseSessionMalformedPacket) Unmarshal(dAtA []byte) error { 
} b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.ID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Query", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -98287,28 +109083,27 @@ func (m *DatabaseSessionMalformedPacket) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Query = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Values", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -98335,15 +109130,67 @@ func (m *DatabaseSessionMalformedPacket) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.Values = append(m.Values, 
&CassandraBatch_BatchChild_Value{}) + if err := m.Values[len(m.Values)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err } - var msglen int + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *CassandraBatch_BatchChild_Value) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: Value: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: Value: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType) + } + m.Type = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -98353,28 +109200,14 @@ func (m *DatabaseSessionMalformedPacket) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.Type |= uint32(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return 
io.ErrUnexpectedEOF - } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 5: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Payload", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Contents", wireType) } var byteLen int for shift := uint(0); ; shift += 7 { @@ -98401,9 +109234,9 @@ func (m *DatabaseSessionMalformedPacket) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Payload = append(m.Payload[:0], dAtA[iNdEx:postIndex]...) - if m.Payload == nil { - m.Payload = []byte{} + m.Contents = append(m.Contents[:0], dAtA[iNdEx:postIndex]...) + if m.Contents == nil { + m.Contents = []byte{} } iNdEx = postIndex default: @@ -98428,7 +109261,7 @@ func (m *DatabaseSessionMalformedPacket) Unmarshal(dAtA []byte) error { } return nil } -func (m *ElasticsearchRequest) Unmarshal(dAtA []byte) error { +func (m *CassandraRegister) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -98451,10 +109284,10 @@ func (m *ElasticsearchRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ElasticsearchRequest: wiretype end group for non-group") + return fmt.Errorf("proto: CassandraRegister: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ElasticsearchRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: CassandraRegister: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -98591,7 +109424,7 @@ func (m *ElasticsearchRequest) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field EventTypes", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -98619,13 
+109452,64 @@ func (m *ElasticsearchRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Path = string(dAtA[iNdEx:postIndex]) + m.EventTypes = append(m.EventTypes, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 6: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *LoginRuleCreate) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: LoginRuleCreate: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: LoginRuleCreate: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RawQuery", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -98635,29 +109519,30 @@ func (m *ElasticsearchRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - 
postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.RawQuery = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 7: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Method", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -98667,29 +109552,30 @@ func (m *ElasticsearchRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Method = string(dAtA[iNdEx:postIndex]) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 8: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Body", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var byteLen int + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -98699,29 +109585,79 @@ func (m *ElasticsearchRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - byteLen |= int(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - if byteLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + byteLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > 
l { return io.ErrUnexpectedEOF } - m.Body = append(m.Body[:0], dAtA[iNdEx:postIndex]...) - if m.Body == nil { - m.Body = []byte{} + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } iNdEx = postIndex - case 9: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *LoginRuleDelete) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: LoginRuleDelete: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: LoginRuleDelete: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Headers", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -98748,34 +109684,15 @@ func (m *ElasticsearchRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Headers.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 10: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Category", 
wireType) - } - m.Category = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.Category |= ElasticsearchCategory(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 11: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Target", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -98785,29 +109702,30 @@ func (m *ElasticsearchRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Target = string(dAtA[iNdEx:postIndex]) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 12: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Query", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -98817,43 +109735,25 @@ func (m *ElasticsearchRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return 
ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Query = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 13: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field StatusCode", wireType) - } - m.StatusCode = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.StatusCode |= uint32(b&0x7F) << shift - if b < 0x80 { - break - } + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -98876,7 +109776,7 @@ func (m *ElasticsearchRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *OpenSearchRequest) Unmarshal(dAtA []byte) error { +func (m *SAMLIdPAuthAttempt) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -98899,10 +109799,10 @@ func (m *OpenSearchRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: OpenSearchRequest: wiretype end group for non-group") + return fmt.Errorf("proto: SAMLIdPAuthAttempt: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: OpenSearchRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SAMLIdPAuthAttempt: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -99006,7 +109906,7 @@ func (m *OpenSearchRequest) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -99033,15 +109933,99 @@ func (m *OpenSearchRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF 
} - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SAMLIdPServiceProviderMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.SAMLIdPServiceProviderMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *SAMLIdPServiceProviderCreate) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SAMLIdPServiceProviderCreate: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SAMLIdPServiceProviderCreate: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -99051,29 +110035,30 @@ func (m *OpenSearchRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Path = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 6: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RawQuery", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return 
ErrIntOverflowEvents @@ -99083,29 +110068,30 @@ func (m *OpenSearchRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.RawQuery = string(dAtA[iNdEx:postIndex]) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 7: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Method", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SAMLIdPServiceProviderMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -99115,61 +110101,79 @@ func (m *OpenSearchRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Method = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 8: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Body", wireType) + if err := m.SAMLIdPServiceProviderMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - var byteLen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - byteLen |= int(b&0x7F) << shift - if b < 0x80 { - 
break - } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err } - if byteLen < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + byteLen - if postIndex < 0 { - return ErrInvalidLengthEvents + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF } - if postIndex > l { + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *SAMLIdPServiceProviderUpdate) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { return io.ErrUnexpectedEOF } - m.Body = append(m.Body[:0], dAtA[iNdEx:postIndex]...) - if m.Body == nil { - m.Body = []byte{} + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break } - iNdEx = postIndex - case 9: + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SAMLIdPServiceProviderUpdate: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SAMLIdPServiceProviderUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Headers", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -99196,34 +110200,15 @@ func (m *OpenSearchRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Headers.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 10: - if wireType != 0 { - return 
fmt.Errorf("proto: wrong wireType = %d for field Category", wireType) - } - m.Category = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.Category |= OpenSearchCategory(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 11: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Target", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -99233,29 +110218,30 @@ func (m *OpenSearchRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Target = string(dAtA[iNdEx:postIndex]) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 12: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Query", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SAMLIdPServiceProviderMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -99265,43 +110251,25 @@ func (m *OpenSearchRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + 
intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Query = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 13: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field StatusCode", wireType) - } - m.StatusCode = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.StatusCode |= uint32(b&0x7F) << shift - if b < 0x80 { - break - } + if err := m.SAMLIdPServiceProviderMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -99324,7 +110292,7 @@ func (m *OpenSearchRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *DynamoDBRequest) Unmarshal(dAtA []byte) error { +func (m *SAMLIdPServiceProviderDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -99347,10 +110315,10 @@ func (m *DynamoDBRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: DynamoDBRequest: wiretype end group for non-group") + return fmt.Errorf("proto: SAMLIdPServiceProviderDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: DynamoDBRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SAMLIdPServiceProviderDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -99388,7 +110356,7 @@ func (m *DynamoDBRequest) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -99415,13 
+110383,13 @@ func (m *DynamoDBRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SAMLIdPServiceProviderMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -99448,13 +110416,64 @@ func (m *DynamoDBRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SAMLIdPServiceProviderMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *SAMLIdPServiceProviderDeleteAll) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SAMLIdPServiceProviderDeleteAll: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SAMLIdPServiceProviderDeleteAll: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -99481,34 +110500,15 @@ func (m *DynamoDBRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field StatusCode", wireType) - } - m.StatusCode = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.StatusCode |= uint32(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 6: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - var stringLen uint64 + 
var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -99518,61 +110518,81 @@ func (m *DynamoDBRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Path = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 7: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RawQuery", wireType) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err } - intStringLen := int(stringLen) - if intStringLen < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF } - if postIndex > l { + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *OktaResourcesUpdate) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { return io.ErrUnexpectedEOF } - m.RawQuery = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 8: + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: OktaResourcesUpdate: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: OktaResourcesUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Method", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -99582,29 +110602,30 @@ func (m *DynamoDBRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Method = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 9: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Target", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) 
} - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -99614,27 +110635,28 @@ func (m *DynamoDBRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Target = string(dAtA[iNdEx:postIndex]) + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 10: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Body", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field OktaResourcesUpdatedMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -99661,10 +110683,7 @@ func (m *DynamoDBRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Body == nil { - m.Body = &Struct{} - } - if err := m.Body.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.OktaResourcesUpdatedMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -99690,7 +110709,7 @@ func (m *DynamoDBRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *AppSessionDynamoDBRequest) Unmarshal(dAtA []byte) error { +func (m *OktaSyncFailure) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -99713,10 +110732,10 @@ func (m *AppSessionDynamoDBRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AppSessionDynamoDBRequest: wiretype end group for non-group") + return fmt.Errorf("proto: OktaSyncFailure: wiretype end group for non-group") } if 
fieldNum <= 0 { - return fmt.Errorf("proto: AppSessionDynamoDBRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: OktaSyncFailure: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -99754,7 +110773,7 @@ func (m *AppSessionDynamoDBRequest) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -99781,13 +110800,13 @@ func (m *AppSessionDynamoDBRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AppMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -99814,99 +110833,66 @@ func (m *AppSessionDynamoDBRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.AppMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AWSRequestMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + 
return err } - postIndex := iNdEx + msglen - if postIndex < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - if postIndex > l { + if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - if err := m.AWSRequestMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionChunkID", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *OktaAssignmentResult) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents } - if postIndex > l { + if iNdEx >= l { return io.ErrUnexpectedEOF } - m.SessionChunkID = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 6: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field StatusCode", wireType) - } - m.StatusCode = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.StatusCode |= uint32(b&0x7F) << shift - if b < 0x80 { - break - } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break } - case 7: + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if 
wireType == 4 { + return fmt.Errorf("proto: OktaAssignmentResult: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: OktaAssignmentResult: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -99916,29 +110902,30 @@ func (m *AppSessionDynamoDBRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Path = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 8: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RawQuery", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -99948,29 +110935,30 @@ func (m *AppSessionDynamoDBRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return 
io.ErrUnexpectedEOF } - m.RawQuery = string(dAtA[iNdEx:postIndex]) + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 9: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Method", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -99980,29 +110968,30 @@ func (m *AppSessionDynamoDBRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Method = string(dAtA[iNdEx:postIndex]) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 10: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Target", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -100012,27 +111001,28 @@ func (m *AppSessionDynamoDBRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Target = 
string(dAtA[iNdEx:postIndex]) + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 11: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Body", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field OktaAssignmentMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -100059,10 +111049,7 @@ func (m *AppSessionDynamoDBRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Body == nil { - m.Body = &Struct{} - } - if err := m.Body.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.OktaAssignmentMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -100088,7 +111075,7 @@ func (m *AppSessionDynamoDBRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *UpgradeWindowStartMetadata) Unmarshal(dAtA []byte) error { +func (m *AccessListCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -100111,15 +111098,114 @@ func (m *UpgradeWindowStartMetadata) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UpgradeWindowStartMetadata: wiretype end group for non-group") + return fmt.Errorf("proto: AccessListCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UpgradeWindowStartMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AccessListCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UpgradeWindowStart", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := 
dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AccessListTitle", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -100147,7 +111233,7 @@ func (m *UpgradeWindowStartMetadata) Unmarshal(dAtA 
[]byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.UpgradeWindowStart = string(dAtA[iNdEx:postIndex]) + m.AccessListTitle = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -100171,7 +111257,7 @@ func (m *UpgradeWindowStartMetadata) Unmarshal(dAtA []byte) error { } return nil } -func (m *UpgradeWindowStartUpdate) Unmarshal(dAtA []byte) error { +func (m *AccessListUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -100194,10 +111280,10 @@ func (m *UpgradeWindowStartUpdate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UpgradeWindowStartUpdate: wiretype end group for non-group") + return fmt.Errorf("proto: AccessListUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UpgradeWindowStartUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AccessListUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -100235,7 +111321,7 @@ func (m *UpgradeWindowStartUpdate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -100262,13 +111348,13 @@ func (m *UpgradeWindowStartUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift 
+= 7 { @@ -100295,15 +111381,15 @@ func (m *UpgradeWindowStartUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UpgradeWindowStartMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AccessListTitle", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -100313,24 +111399,23 @@ func (m *UpgradeWindowStartUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UpgradeWindowStartMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.AccessListTitle = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -100354,7 +111439,7 @@ func (m *UpgradeWindowStartUpdate) Unmarshal(dAtA []byte) error { } return nil } -func (m *SessionRecordingAccess) Unmarshal(dAtA []byte) error { +func (m *AccessListDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -100377,10 +111462,10 @@ func (m *SessionRecordingAccess) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SessionRecordingAccess: wiretype end group for non-group") + return fmt.Errorf("proto: AccessListDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return 
fmt.Errorf("proto: SessionRecordingAccess: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AccessListDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -100418,9 +111503,9 @@ func (m *SessionRecordingAccess) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -100430,27 +111515,28 @@ func (m *SessionRecordingAccess) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.SessionID = string(dAtA[iNdEx:postIndex]) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -100477,45 +111563,13 @@ func (m *SessionRecordingAccess) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionType", wireType) - } - var stringLen uint64 - for shift := uint(0); 
; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.SessionType = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Format", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AccessListTitle", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -100543,7 +111597,7 @@ func (m *SessionRecordingAccess) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Format = string(dAtA[iNdEx:postIndex]) + m.AccessListTitle = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -100567,7 +111621,7 @@ func (m *SessionRecordingAccess) Unmarshal(dAtA []byte) error { } return nil } -func (m *KubeClusterMetadata) Unmarshal(dAtA []byte) error { +func (m *AccessListMemberCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -100590,15 +111644,15 @@ func (m *KubeClusterMetadata) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: KubeClusterMetadata: wiretype end group for non-group") + return fmt.Errorf("proto: AccessListMemberCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: KubeClusterMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AccessListMemberCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field 
KubeLabels", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -100625,103 +111679,108 @@ func (m *KubeClusterMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.KubeLabels == nil { - m.KubeLabels = make(map[string]string) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - var mapkey string - var mapvalue string - for iNdEx < postIndex { - entryPreIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents } - fieldNum := int32(wire >> 3) - if fieldNum == 1 { - var stringLenmapkey uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLenmapkey |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapkey := int(stringLenmapkey) - if intStringLenmapkey < 0 { - return ErrInvalidLengthEvents - } - postStringIndexmapkey := iNdEx + intStringLenmapkey - if postStringIndexmapkey < 0 { - return ErrInvalidLengthEvents - } - if postStringIndexmapkey > l { - return io.ErrUnexpectedEOF - } - mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) - iNdEx = postStringIndexmapkey - } else if fieldNum == 2 { - var stringLenmapvalue uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - 
stringLenmapvalue |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLenmapvalue := int(stringLenmapvalue) - if intStringLenmapvalue < 0 { - return ErrInvalidLengthEvents - } - postStringIndexmapvalue := iNdEx + intStringLenmapvalue - if postStringIndexmapvalue < 0 { - return ErrInvalidLengthEvents - } - if postStringIndexmapvalue > l { - return io.ErrUnexpectedEOF - } - mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) - iNdEx = postStringIndexmapvalue - } else { - iNdEx = entryPreIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > postIndex { - return io.ErrUnexpectedEOF - } - iNdEx += skippy + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break } } - m.KubeLabels[mapkey] = mapvalue + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AccessListMemberMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.AccessListMemberMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return 
fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -100745,7 +111804,7 @@ func (m *KubeClusterMetadata) Unmarshal(dAtA []byte) error { } return nil } -func (m *KubernetesClusterCreate) Unmarshal(dAtA []byte) error { +func (m *AccessListMemberUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -100768,10 +111827,10 @@ func (m *KubernetesClusterCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: KubernetesClusterCreate: wiretype end group for non-group") + return fmt.Errorf("proto: AccessListMemberUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: KubernetesClusterCreate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AccessListMemberUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -100809,7 +111868,7 @@ func (m *KubernetesClusterCreate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -100836,13 +111895,13 @@ func (m *KubernetesClusterCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return 
io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AccessListMemberMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -100869,13 +111928,13 @@ func (m *KubernetesClusterCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.AccessListMemberMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubeClusterMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -100902,7 +111961,7 @@ func (m *KubernetesClusterCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.KubeClusterMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -100928,7 +111987,7 @@ func (m *KubernetesClusterCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *KubernetesClusterUpdate) Unmarshal(dAtA []byte) error { +func (m *AccessListMemberDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -100951,10 +112010,10 @@ func (m *KubernetesClusterUpdate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: KubernetesClusterUpdate: wiretype end group for non-group") + return fmt.Errorf("proto: AccessListMemberDelete: wiretype end group for non-group") } if 
fieldNum <= 0 { - return fmt.Errorf("proto: KubernetesClusterUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AccessListMemberDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -100992,7 +112051,7 @@ func (m *KubernetesClusterUpdate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -101019,13 +112078,13 @@ func (m *KubernetesClusterUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AccessListMemberMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -101052,13 +112111,13 @@ func (m *KubernetesClusterUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.AccessListMemberMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field KubeClusterMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -101085,7 +112144,7 @@ func (m *KubernetesClusterUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.KubeClusterMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + 
if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -101111,7 +112170,7 @@ func (m *KubernetesClusterUpdate) Unmarshal(dAtA []byte) error { } return nil } -func (m *KubernetesClusterDelete) Unmarshal(dAtA []byte) error { +func (m *AccessListMemberDeleteAllForAccessList) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -101134,10 +112193,10 @@ func (m *KubernetesClusterDelete) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: KubernetesClusterDelete: wiretype end group for non-group") + return fmt.Errorf("proto: AccessListMemberDeleteAllForAccessList: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: KubernetesClusterDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AccessListMemberDeleteAllForAccessList: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -101175,7 +112234,7 @@ func (m *KubernetesClusterDelete) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -101202,11 +112261,161 @@ func (m *KubernetesClusterDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AccessListMemberMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := 
dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.AccessListMemberMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AccessListReview) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AccessListReview: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AccessListReview: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } @@ -101239,6 +112448,72 @@ func (m *KubernetesClusterDelete) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AccessListReviewMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := 
dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.AccessListReviewMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -101261,7 +112536,7 @@ func (m *KubernetesClusterDelete) Unmarshal(dAtA []byte) error { } return nil } -func (m *SSMRun) Unmarshal(dAtA []byte) error { +func (m *AuditQueryRun) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -101284,10 +112559,10 @@ func (m *SSMRun) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SSMRun: wiretype end group for non-group") + return fmt.Errorf("proto: AuditQueryRun: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SSMRun: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AuditQueryRun: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -101325,9 +112600,9 @@ func (m *SSMRun) Unmarshal(dAtA []byte) error { iNdEx = 
postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CommandID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -101337,29 +112612,30 @@ func (m *SSMRun) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.CommandID = string(dAtA[iNdEx:postIndex]) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field InstanceID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -101369,48 +112645,30 @@ func (m *SSMRun) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.InstanceID = string(dAtA[iNdEx:postIndex]) + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 4: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field ExitCode", wireType) - } - 
m.ExitCode = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.ExitCode |= int64(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AuditQueryDetails", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -101420,27 +112678,79 @@ func (m *SSMRun) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Status = string(dAtA[iNdEx:postIndex]) + if err := m.AuditQueryDetails.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 6: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AuditQueryDetails) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AuditQueryDetails: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AuditQueryDetails: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccountID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -101468,11 +112778,11 @@ func (m *SSMRun) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AccountID = string(dAtA[iNdEx:postIndex]) + m.Name = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 7: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Region", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Query", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -101500,13 +112810,13 @@ func (m *SSMRun) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Region = string(dAtA[iNdEx:postIndex]) + m.Query = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 8: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field StandardOutput", wireType) + case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Days", wireType) } - var stringLen uint64 + m.Days = 0 
for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -101516,29 +112826,16 @@ func (m *SSMRun) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + m.Days |= int32(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.StandardOutput = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 9: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field StandardError", wireType) + case 4: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field ExecutionTimeInMillis", wireType) } - var stringLen uint64 + m.ExecutionTimeInMillis = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -101548,29 +112845,16 @@ func (m *SSMRun) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + m.ExecutionTimeInMillis |= int64(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.StandardError = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 10: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field InvocationURL", wireType) + case 5: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field DataScannedInBytes", wireType) } - var stringLen uint64 + m.DataScannedInBytes = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -101580,24 +112864,11 @@ func (m *SSMRun) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) 
<< shift + m.DataScannedInBytes |= int64(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.InvocationURL = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -101620,7 +112891,7 @@ func (m *SSMRun) Unmarshal(dAtA []byte) error { } return nil } -func (m *CassandraPrepare) Unmarshal(dAtA []byte) error { +func (m *SecurityReportRun) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -101643,10 +112914,10 @@ func (m *CassandraPrepare) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: CassandraPrepare: wiretype end group for non-group") + return fmt.Errorf("proto: SecurityReportRun: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: CassandraPrepare: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SecurityReportRun: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -101717,7 +112988,7 @@ func (m *CassandraPrepare) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -101744,15 +113015,15 @@ func (m *CassandraPrepare) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = 
%d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -101762,28 +113033,27 @@ func (m *CassandraPrepare) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Name = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Query", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Version", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -101795,29 +113065,67 @@ func (m *CassandraPrepare) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Version = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 6: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TotalExecutionTimeInMillis", wireType) + } + m.TotalExecutionTimeInMillis = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + 
m.TotalExecutionTimeInMillis |= int64(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents + case 7: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TotalDataScannedInBytes", wireType) } - if postIndex > l { - return io.ErrUnexpectedEOF + m.TotalDataScannedInBytes = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.TotalDataScannedInBytes |= int64(b&0x7F) << shift + if b < 0x80 { + break + } } - m.Query = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 6: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Keyspace", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AuditQueries", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -101827,23 +113135,25 @@ func (m *CassandraPrepare) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Keyspace = string(dAtA[iNdEx:postIndex]) + m.AuditQueries = append(m.AuditQueries, &AuditQueryDetails{}) + if err := m.AuditQueries[len(m.AuditQueries)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -101867,7 +113177,7 @@ func (m *CassandraPrepare) Unmarshal(dAtA []byte) error { } return nil } -func (m *CassandraExecute) 
Unmarshal(dAtA []byte) error { +func (m *ExternalAuditStorageEnable) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -101890,10 +113200,10 @@ func (m *CassandraExecute) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: CassandraExecute: wiretype end group for non-group") + return fmt.Errorf("proto: ExternalAuditStorageEnable: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: CassandraExecute: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ExternalAuditStorageEnable: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -101931,7 +113241,7 @@ func (m *CassandraExecute) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -101958,13 +113268,13 @@ func (m *CassandraExecute) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Details", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -101991,13 +113301,67 @@ func (m *CassandraExecute) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.Details == nil { + m.Details = &ExternalAuditStorageDetails{} + } + if err := m.Details.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return 
err } iNdEx = postIndex - case 4: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ExternalAuditStorageDisable) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ExternalAuditStorageDisable: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ExternalAuditStorageDisable: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -102024,15 +113388,15 @@ func (m *CassandraExecute) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field QueryId", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := 
uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -102042,23 +113406,60 @@ func (m *CassandraExecute) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.QueryId = string(dAtA[iNdEx:postIndex]) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Details", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Details == nil { + m.Details = &ExternalAuditStorageDetails{} + } + if err := m.Details.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -102082,7 +113483,7 @@ func (m *CassandraExecute) Unmarshal(dAtA []byte) error { } return nil } -func (m *CassandraBatch) Unmarshal(dAtA []byte) error { +func (m *ExternalAuditStorageDetails) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -102105,17 +113506,17 @@ func (m *CassandraBatch) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: CassandraBatch: wiretype end group for non-group") + return 
fmt.Errorf("proto: ExternalAuditStorageDetails: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: CassandraBatch: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ExternalAuditStorageDetails: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { - case 1: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field IntegrationName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -102125,30 +113526,29 @@ func (m *CassandraBatch) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.IntegrationName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionRecordingsUri", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -102158,30 +113558,29 @@ func (m *CassandraBatch) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + 
intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.SessionRecordingsUri = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AthenaWorkgroup", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -102191,30 +113590,29 @@ func (m *CassandraBatch) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.AthenaWorkgroup = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field GlueDatabase", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -102224,28 +113622,27 @@ func (m *CassandraBatch) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return 
ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.GlueDatabase = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 5: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Consistency", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field GlueTable", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -102273,11 +113670,11 @@ func (m *CassandraBatch) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Consistency = string(dAtA[iNdEx:postIndex]) + m.GlueTable = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 6: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Keyspace", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AuditEventsLongTermUri", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -102305,11 +113702,11 @@ func (m *CassandraBatch) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Keyspace = string(dAtA[iNdEx:postIndex]) + m.AuditEventsLongTermUri = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 7: + case 9: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field BatchType", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AthenaResultsUri", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -102337,13 +113734,13 @@ func (m *CassandraBatch) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.BatchType = string(dAtA[iNdEx:postIndex]) + m.AthenaResultsUri = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 8: + case 10: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Children", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PolicyName", wireType) } - var msglen int 
+ var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -102353,25 +113750,55 @@ func (m *CassandraBatch) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Children = append(m.Children, &CassandraBatch_BatchChild{}) - if err := m.Children[len(m.Children)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + m.PolicyName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 11: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Region", wireType) } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Region = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -102395,7 +113822,7 @@ func (m *CassandraBatch) Unmarshal(dAtA []byte) error { } return nil } -func (m *CassandraBatch_BatchChild) Unmarshal(dAtA []byte) error { +func (m *OktaAccessListSync) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -102418,17 +113845,17 @@ func (m *CassandraBatch_BatchChild) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: BatchChild: 
wiretype end group for non-group") + return fmt.Errorf("proto: OktaAccessListSync: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: BatchChild: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: OktaAccessListSync: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -102438,29 +113865,30 @@ func (m *CassandraBatch_BatchChild) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.ID = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Query", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -102470,29 +113898,30 @@ func (m *CassandraBatch_BatchChild) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { 
return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Query = string(dAtA[iNdEx:postIndex]) + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Values", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field NumAppFilters", wireType) } - var msglen int + m.NumAppFilters = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -102502,82 +113931,54 @@ func (m *CassandraBatch_BatchChild) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.NumAppFilters |= int32(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Values = append(m.Values, &CassandraBatch_BatchChild_Value{}) - if err := m.Values[len(m.Values)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF + case 4: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field NumGroupFilters", wireType) } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *CassandraBatch_BatchChild_Value) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents + m.NumGroupFilters = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.NumGroupFilters |= int32(b&0x7F) << shift + if b < 0x80 { + break + } } - if iNdEx >= l { - return io.ErrUnexpectedEOF + case 5: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field NumApps", wireType) } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break + m.NumApps = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.NumApps |= int32(b&0x7F) << shift + if b < 0x80 { + break + } } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: Value: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: Value: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 6: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field NumGroups", wireType) } - m.Type = 0 + m.NumGroups = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -102587,16 +113988,16 @@ func (m *CassandraBatch_BatchChild_Value) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.Type |= uint32(b&0x7F) << shift + m.NumGroups |= int32(b&0x7F) << shift if b < 0x80 { break } } - case 2: - if wireType != 2 { - return fmt.Errorf("proto: 
wrong wireType = %d for field Contents", wireType) + case 7: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field NumRoles", wireType) } - var byteLen int + m.NumRoles = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -102606,26 +114007,49 @@ func (m *CassandraBatch_BatchChild_Value) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - byteLen |= int(b&0x7F) << shift + m.NumRoles |= int32(b&0x7F) << shift if b < 0x80 { break } } - if byteLen < 0 { - return ErrInvalidLengthEvents + case 8: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field NumAccessLists", wireType) } - postIndex := iNdEx + byteLen - if postIndex < 0 { - return ErrInvalidLengthEvents + m.NumAccessLists = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.NumAccessLists |= int32(b&0x7F) << shift + if b < 0x80 { + break + } } - if postIndex > l { - return io.ErrUnexpectedEOF + case 9: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field NumAccessListMembers", wireType) } - m.Contents = append(m.Contents[:0], dAtA[iNdEx:postIndex]...) 
- if m.Contents == nil { - m.Contents = []byte{} + m.NumAccessListMembers = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.NumAccessListMembers |= int32(b&0x7F) << shift + if b < 0x80 { + break + } } - iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -102648,7 +114072,7 @@ func (m *CassandraBatch_BatchChild_Value) Unmarshal(dAtA []byte) error { } return nil } -func (m *CassandraRegister) Unmarshal(dAtA []byte) error { +func (m *OktaUserSync) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -102671,10 +114095,10 @@ func (m *CassandraRegister) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: CassandraRegister: wiretype end group for non-group") + return fmt.Errorf("proto: OktaUserSync: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: CassandraRegister: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: OktaUserSync: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -102712,7 +114136,7 @@ func (m *CassandraRegister) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -102739,15 +114163,15 @@ func (m *CassandraRegister) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", 
wireType) + return fmt.Errorf("proto: wrong wireType = %d for field OrgUrl", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -102757,30 +114181,29 @@ func (m *CassandraRegister) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.OrgUrl = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppId", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -102790,30 +114213,29 @@ func (m *CassandraRegister) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.AppId = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field EventTypes", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field 
NumUsersCreated", wireType) } - var stringLen uint64 + m.NumUsersCreated = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -102823,24 +114245,68 @@ func (m *CassandraRegister) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + m.NumUsersCreated |= int32(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents + case 6: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field NumUsersDeleted", wireType) } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents + m.NumUsersDeleted = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.NumUsersDeleted |= int32(b&0x7F) << shift + if b < 0x80 { + break + } } - if postIndex > l { - return io.ErrUnexpectedEOF + case 7: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field NumUsersModified", wireType) + } + m.NumUsersModified = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.NumUsersModified |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 8: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field NumUsersTotal", wireType) + } + m.NumUsersTotal = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.NumUsersTotal |= int32(b&0x7F) << shift + if b < 0x80 { + break + } } - m.EventTypes = append(m.EventTypes, string(dAtA[iNdEx:postIndex])) - iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -102863,7 +114329,7 @@ func (m *CassandraRegister) 
Unmarshal(dAtA []byte) error { } return nil } -func (m *LoginRuleCreate) Unmarshal(dAtA []byte) error { +func (m *LabelSelector) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -102886,17 +114352,17 @@ func (m *LoginRuleCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: LoginRuleCreate: wiretype end group for non-group") + return fmt.Errorf("proto: LabelSelector: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: LoginRuleCreate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: LabelSelector: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Key", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -102906,63 +114372,29 @@ func (m *LoginRuleCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Key = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= 
int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Values", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -102972,24 +114404,23 @@ func (m *LoginRuleCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Values = append(m.Values, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex default: iNdEx = preIndex @@ -103013,7 +114444,7 @@ func (m *LoginRuleCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *LoginRuleDelete) Unmarshal(dAtA []byte) error { +func (m *SPIFFESVIDIssued) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -103036,10 +114467,10 @@ func (m *LoginRuleDelete) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: LoginRuleDelete: wiretype end group for non-group") + return fmt.Errorf("proto: SPIFFESVIDIssued: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: LoginRuleDelete: 
illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SPIFFESVIDIssued: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -103077,7 +114508,7 @@ func (m *LoginRuleDelete) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -103104,13 +114535,13 @@ func (m *LoginRuleDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -103137,66 +114568,79 @@ func (m *LoginRuleDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SPIFFEID", wireType) } - if (skippy < 0) || (iNdEx+skippy) < 0 { + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { return 
ErrInvalidLengthEvents } - if (iNdEx + skippy) > l { + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *SAMLIdPAuthAttempt) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents + m.SPIFFEID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DNSSANs", wireType) } - if iNdEx >= l { - return io.ErrUnexpectedEOF + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: SAMLIdPAuthAttempt: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: SAMLIdPAuthAttempt: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.DNSSANs = append(m.DNSSANs, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field IPSANs", 
wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -103206,30 +114650,29 @@ func (m *SAMLIdPAuthAttempt) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.IPSANs = append(m.IPSANs, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 2: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SVIDType", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -103239,30 +114682,29 @@ func (m *SAMLIdPAuthAttempt) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.SVIDType = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SerialNumber", wireType) } - var msglen int + var stringLen uint64 for 
shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -103272,30 +114714,29 @@ func (m *SAMLIdPAuthAttempt) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.SerialNumber = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 9: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Hint", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -103305,30 +114746,29 @@ func (m *SAMLIdPAuthAttempt) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Hint = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 5: + case 10: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SAMLIdPServiceProviderMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field JTI", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return 
ErrIntOverflowEvents @@ -103338,79 +114778,123 @@ func (m *SAMLIdPAuthAttempt) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SAMLIdPServiceProviderMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.JTI = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err + case 11: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Audiences", wireType) } - if (skippy < 0) || (iNdEx+skippy) < 0 { + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - if (iNdEx + skippy) > l { + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *SAMLIdPServiceProviderCreate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents + m.Audiences = append(m.Audiences, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 12: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentity", wireType) } - if iNdEx >= l { + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { return io.ErrUnexpectedEOF } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break + m.WorkloadIdentity = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 13: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentityRevision", wireType) } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: SAMLIdPServiceProviderCreate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: SAMLIdPServiceProviderCreate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) 
+ if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.WorkloadIdentityRevision = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 14: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Attributes", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -103437,15 +114921,18 @@ func (m *SAMLIdPServiceProviderCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.Attributes == nil { + m.Attributes = &Struct{} + } + if err := m.Attributes.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 15: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field NameSelector", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -103455,28 +114942,27 @@ func (m *SAMLIdPServiceProviderCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.NameSelector = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: + case 16: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field 
SAMLIdPServiceProviderMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field LabelSelectors", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -103503,7 +114989,8 @@ func (m *SAMLIdPServiceProviderCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SAMLIdPServiceProviderMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.LabelSelectors = append(m.LabelSelectors, &LabelSelector{}) + if err := m.LabelSelectors[len(m.LabelSelectors)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -103529,7 +115016,7 @@ func (m *SAMLIdPServiceProviderCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *SAMLIdPServiceProviderUpdate) Unmarshal(dAtA []byte) error { +func (m *AuthPreferenceUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -103552,10 +115039,10 @@ func (m *SAMLIdPServiceProviderUpdate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SAMLIdPServiceProviderUpdate: wiretype end group for non-group") + return fmt.Errorf("proto: AuthPreferenceUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SAMLIdPServiceProviderUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AuthPreferenceUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -103593,7 +115080,7 @@ func (m *SAMLIdPServiceProviderUpdate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -103620,13 +115107,13 @@ func (m *SAMLIdPServiceProviderUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return 
io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SAMLIdPServiceProviderMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -103653,10 +115140,62 @@ func (m *SAMLIdPServiceProviderUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SAMLIdPServiceProviderMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field AdminActionsMFA", wireType) + } + m.AdminActionsMFA = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.AdminActionsMFA |= AdminActionsMFAStatus(b&0x7F) << shift + if b < 0x80 { + break + } + } default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -103679,7 +115218,7 @@ 
func (m *SAMLIdPServiceProviderUpdate) Unmarshal(dAtA []byte) error { } return nil } -func (m *SAMLIdPServiceProviderDelete) Unmarshal(dAtA []byte) error { +func (m *ClusterNetworkingConfigUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -103702,10 +115241,10 @@ func (m *SAMLIdPServiceProviderDelete) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SAMLIdPServiceProviderDelete: wiretype end group for non-group") + return fmt.Errorf("proto: ClusterNetworkingConfigUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SAMLIdPServiceProviderDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ClusterNetworkingConfigUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -103743,7 +115282,7 @@ func (m *SAMLIdPServiceProviderDelete) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -103770,97 +115309,13 @@ func (m *SAMLIdPServiceProviderDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SAMLIdPServiceProviderMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return 
ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.SAMLIdPServiceProviderMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *SAMLIdPServiceProviderDeleteAll) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: SAMLIdPServiceProviderDeleteAll: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: SAMLIdPServiceProviderDeleteAll: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -103887,13 +115342,13 @@ func (m *SAMLIdPServiceProviderDeleteAll) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return 
err } iNdEx = postIndex - case 2: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -103920,7 +115375,7 @@ func (m *SAMLIdPServiceProviderDeleteAll) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -103946,7 +115401,7 @@ func (m *SAMLIdPServiceProviderDeleteAll) Unmarshal(dAtA []byte) error { } return nil } -func (m *OktaResourcesUpdate) Unmarshal(dAtA []byte) error { +func (m *SessionRecordingConfigUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -103969,10 +115424,10 @@ func (m *OktaResourcesUpdate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: OktaResourcesUpdate: wiretype end group for non-group") + return fmt.Errorf("proto: SessionRecordingConfigUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: OktaResourcesUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SessionRecordingConfigUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -104004,130 +115459,13 @@ func (m *OktaResourcesUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= 
l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field OktaResourcesUpdatedMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.OktaResourcesUpdatedMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *OktaSyncFailure) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: OktaSyncFailure: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: OktaSyncFailure: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -104154,13 +115492,13 @@ func (m *OktaSyncFailure) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -104187,13 +115525,13 @@ func (m *OktaSyncFailure) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 3: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for 
field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -104220,7 +115558,7 @@ func (m *OktaSyncFailure) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -104246,7 +115584,7 @@ func (m *OktaSyncFailure) Unmarshal(dAtA []byte) error { } return nil } -func (m *OktaAssignmentResult) Unmarshal(dAtA []byte) error { +func (m *AccessPathChanged) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -104269,10 +115607,10 @@ func (m *OktaAssignmentResult) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: OktaAssignmentResult: wiretype end group for non-group") + return fmt.Errorf("proto: AccessPathChanged: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: OktaAssignmentResult: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AccessPathChanged: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -104310,9 +115648,9 @@ func (m *OktaAssignmentResult) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ChangeID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -104322,30 +115660,29 @@ func (m *OktaAssignmentResult) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := 
int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.ChangeID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AffectedResourceName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -104355,30 +115692,29 @@ func (m *OktaAssignmentResult) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.AffectedResourceName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AffectedResourceSource", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -104388,30 +115724,29 @@ func (m *OktaAssignmentResult) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return 
ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.AffectedResourceSource = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field OktaAssignmentMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AffectedResourceType", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -104421,24 +115756,23 @@ func (m *OktaAssignmentResult) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.OktaAssignmentMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.AffectedResourceType = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -104462,7 +115796,7 @@ func (m *OktaAssignmentResult) Unmarshal(dAtA []byte) error { } return nil } -func (m *AccessListCreate) Unmarshal(dAtA []byte) error { +func (m *SpannerRPC) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -104485,10 +115819,10 @@ func (m *AccessListCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AccessListCreate: wiretype end group for non-group") + return fmt.Errorf("proto: SpannerRPC: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: 
AccessListCreate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SpannerRPC: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -104526,7 +115860,7 @@ func (m *AccessListCreate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -104553,11 +115887,77 @@ func (m *AccessListCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if 
postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } @@ -104590,9 +115990,9 @@ func (m *AccessListCreate) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 4: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessListTitle", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Procedure", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -104620,7 +116020,43 @@ func (m *AccessListCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AccessListTitle = string(dAtA[iNdEx:postIndex]) + m.Procedure = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Args", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Args == nil { + m.Args = &Struct{} + } + if err := m.Args.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -104644,7 +116080,7 @@ func (m *AccessListCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *AccessListUpdate) Unmarshal(dAtA []byte) error { +func (m *AccessGraphSettingsUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -104667,10 +116103,10 @@ func (m 
*AccessListUpdate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AccessListUpdate: wiretype end group for non-group") + return fmt.Errorf("proto: AccessGraphSettingsUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AccessListUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AccessGraphSettingsUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -104708,7 +116144,7 @@ func (m *AccessListUpdate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -104735,13 +116171,13 @@ func (m *AccessListUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -104768,15 +116204,15 @@ func (m *AccessListUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessListTitle", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var stringLen uint64 + var msglen int for 
shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -104786,23 +116222,24 @@ func (m *AccessListUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.AccessListTitle = string(dAtA[iNdEx:postIndex]) + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -104826,7 +116263,7 @@ func (m *AccessListUpdate) Unmarshal(dAtA []byte) error { } return nil } -func (m *AccessListDelete) Unmarshal(dAtA []byte) error { +func (m *SPIFFEFederationCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -104849,10 +116286,10 @@ func (m *AccessListDelete) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AccessListDelete: wiretype end group for non-group") + return fmt.Errorf("proto: SPIFFEFederationCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AccessListDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SPIFFEFederationCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -104923,7 +116360,7 @@ func (m *AccessListDelete) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -104950,15 +116387,15 @@ func (m *AccessListDelete) 
Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessListTitle", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -104968,23 +116405,24 @@ func (m *AccessListDelete) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.AccessListTitle = string(dAtA[iNdEx:postIndex]) + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -105008,7 +116446,7 @@ func (m *AccessListDelete) Unmarshal(dAtA []byte) error { } return nil } -func (m *AccessListMemberCreate) Unmarshal(dAtA []byte) error { +func (m *SPIFFEFederationDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -105031,10 +116469,10 @@ func (m *AccessListMemberCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AccessListMemberCreate: wiretype end group for non-group") + return fmt.Errorf("proto: SPIFFEFederationDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AccessListMemberCreate: illegal tag %d (wire type %d)", fieldNum, wire) + return 
fmt.Errorf("proto: SPIFFEFederationDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -105105,7 +116543,7 @@ func (m *AccessListMemberCreate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessListMemberMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -105132,13 +116570,13 @@ func (m *AccessListMemberCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.AccessListMemberMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -105165,7 +116603,7 @@ func (m *AccessListMemberCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -105191,7 +116629,7 @@ func (m *AccessListMemberCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *AccessListMemberUpdate) Unmarshal(dAtA []byte) error { +func (m *AutoUpdateConfigCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -105214,10 +116652,10 @@ func (m *AccessListMemberUpdate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AccessListMemberUpdate: wiretype end group for non-group") + return fmt.Errorf("proto: AutoUpdateConfigCreate: wiretype end group for non-group") 
} if fieldNum <= 0 { - return fmt.Errorf("proto: AccessListMemberUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AutoUpdateConfigCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -105288,7 +116726,7 @@ func (m *AccessListMemberUpdate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessListMemberMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -105315,11 +116753,44 @@ func (m *AccessListMemberUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.AccessListMemberMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } @@ -105374,7 +116845,7 @@ func (m *AccessListMemberUpdate) Unmarshal(dAtA []byte) error { } return nil } -func (m *AccessListMemberDelete) Unmarshal(dAtA []byte) error { +func (m *AutoUpdateConfigUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 
0 for iNdEx < l { @@ -105397,10 +116868,10 @@ func (m *AccessListMemberDelete) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AccessListMemberDelete: wiretype end group for non-group") + return fmt.Errorf("proto: AutoUpdateConfigUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AccessListMemberDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AutoUpdateConfigUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -105438,7 +116909,7 @@ func (m *AccessListMemberDelete) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -105465,13 +116936,13 @@ func (m *AccessListMemberDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessListMemberMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -105498,13 +116969,13 @@ func (m *AccessListMemberDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.AccessListMemberMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return 
fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -105531,7 +117002,40 @@ func (m *AccessListMemberDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -105557,7 +117061,7 @@ func (m *AccessListMemberDelete) Unmarshal(dAtA []byte) error { } return nil } -func (m *AccessListMemberDeleteAllForAccessList) Unmarshal(dAtA []byte) error { +func (m *AutoUpdateConfigDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -105580,10 +117084,10 @@ func (m *AccessListMemberDeleteAllForAccessList) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AccessListMemberDeleteAllForAccessList: wiretype end group for non-group") + return fmt.Errorf("proto: AutoUpdateConfigDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AccessListMemberDeleteAllForAccessList: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AutoUpdateConfigDelete: 
illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -105654,7 +117158,7 @@ func (m *AccessListMemberDeleteAllForAccessList) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessListMemberMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -105681,11 +117185,44 @@ func (m *AccessListMemberDeleteAllForAccessList) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.AccessListMemberMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } @@ -105740,7 +117277,7 @@ func (m *AccessListMemberDeleteAllForAccessList) Unmarshal(dAtA []byte) error { } return nil } -func (m *AccessListReview) Unmarshal(dAtA []byte) error { +func (m *AutoUpdateVersionCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -105763,10 +117300,10 @@ func (m *AccessListReview) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 
3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AccessListReview: wiretype end group for non-group") + return fmt.Errorf("proto: AutoUpdateVersionCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AccessListReview: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AutoUpdateVersionCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -105837,7 +117374,7 @@ func (m *AccessListReview) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessListReviewMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -105864,11 +117401,44 @@ func (m *AccessListReview) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.AccessListReviewMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } @@ -105923,7 +117493,7 @@ func (m *AccessListReview) 
Unmarshal(dAtA []byte) error { } return nil } -func (m *AuditQueryRun) Unmarshal(dAtA []byte) error { +func (m *AutoUpdateVersionUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -105946,10 +117516,10 @@ func (m *AuditQueryRun) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AuditQueryRun: wiretype end group for non-group") + return fmt.Errorf("proto: AutoUpdateVersionUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AuditQueryRun: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AutoUpdateVersionUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -105987,7 +117557,7 @@ func (m *AuditQueryRun) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -106014,13 +117584,13 @@ func (m *AuditQueryRun) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -106047,13 +117617,13 @@ func (m *AuditQueryRun) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - 
return fmt.Errorf("proto: wrong wireType = %d for field AuditQueryDetails", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -106080,7 +117650,40 @@ func (m *AuditQueryRun) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.AuditQueryDetails.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -106106,7 +117709,7 @@ func (m *AuditQueryRun) Unmarshal(dAtA []byte) error { } return nil } -func (m *AuditQueryDetails) Unmarshal(dAtA []byte) error { +func (m *AutoUpdateVersionDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -106129,17 +117732,17 @@ func (m *AuditQueryDetails) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AuditQueryDetails: wiretype end group for non-group") + return fmt.Errorf("proto: AutoUpdateVersionDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AuditQueryDetails: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: 
AutoUpdateVersionDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -106149,29 +117752,30 @@ func (m *AuditQueryDetails) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Name = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Query", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -106181,29 +117785,30 @@ func (m *AuditQueryDetails) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Query = string(dAtA[iNdEx:postIndex]) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 3: - if wireType 
!= 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Days", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - m.Days = 0 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -106213,16 +117818,30 @@ func (m *AuditQueryDetails) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.Days |= int32(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex case 4: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field ExecutionTimeInMillis", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - m.ExecutionTimeInMillis = 0 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -106232,16 +117851,30 @@ func (m *AuditQueryDetails) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.ExecutionTimeInMillis |= int64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex case 5: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field DataScannedInBytes", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } - m.DataScannedInBytes = 0 + var msglen int for shift := uint(0); ; 
shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -106251,11 +117884,25 @@ func (m *AuditQueryDetails) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.DataScannedInBytes |= int64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -106278,7 +117925,7 @@ func (m *AuditQueryDetails) Unmarshal(dAtA []byte) error { } return nil } -func (m *SecurityReportRun) Unmarshal(dAtA []byte) error { +func (m *AutoUpdateAgentRolloutTrigger) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -106301,10 +117948,10 @@ func (m *SecurityReportRun) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SecurityReportRun: wiretype end group for non-group") + return fmt.Errorf("proto: AutoUpdateAgentRolloutTrigger: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SecurityReportRun: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AutoUpdateAgentRolloutTrigger: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -106375,7 +118022,7 @@ func (m *SecurityReportRun) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -106402,13 +118049,13 @@ func (m *SecurityReportRun) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if 
err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Groups", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -106436,81 +118083,11 @@ func (m *SecurityReportRun) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Name = string(dAtA[iNdEx:postIndex]) + m.Groups = append(m.Groups, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Version", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Version = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 6: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field TotalExecutionTimeInMillis", wireType) - } - m.TotalExecutionTimeInMillis = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.TotalExecutionTimeInMillis |= int64(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 7: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field TotalDataScannedInBytes", wireType) - } - m.TotalDataScannedInBytes = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - 
return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.TotalDataScannedInBytes |= int64(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 8: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AuditQueries", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -106537,8 +118114,7 @@ func (m *SecurityReportRun) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AuditQueries = append(m.AuditQueries, &AuditQueryDetails{}) - if err := m.AuditQueries[len(m.AuditQueries)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -106564,7 +118140,7 @@ func (m *SecurityReportRun) Unmarshal(dAtA []byte) error { } return nil } -func (m *ExternalAuditStorageEnable) Unmarshal(dAtA []byte) error { +func (m *AutoUpdateAgentRolloutForceDone) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -106587,10 +118163,10 @@ func (m *ExternalAuditStorageEnable) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ExternalAuditStorageEnable: wiretype end group for non-group") + return fmt.Errorf("proto: AutoUpdateAgentRolloutForceDone: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ExternalAuditStorageEnable: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AutoUpdateAgentRolloutForceDone: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -106622,13 +118198,46 @@ func (m *ExternalAuditStorageEnable) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := 
m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -106655,13 +118264,45 @@ func (m *ExternalAuditStorageEnable) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 3: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Details", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Groups", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return 
ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Groups = append(m.Groups, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -106688,10 +118329,7 @@ func (m *ExternalAuditStorageEnable) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Details == nil { - m.Details = &ExternalAuditStorageDetails{} - } - if err := m.Details.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -106717,7 +118355,7 @@ func (m *ExternalAuditStorageEnable) Unmarshal(dAtA []byte) error { } return nil } -func (m *ExternalAuditStorageDisable) Unmarshal(dAtA []byte) error { +func (m *AutoUpdateAgentRolloutRollback) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -106740,10 +118378,10 @@ func (m *ExternalAuditStorageDisable) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ExternalAuditStorageDisable: wiretype end group for non-group") + return fmt.Errorf("proto: AutoUpdateAgentRolloutRollback: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ExternalAuditStorageDisable: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AutoUpdateAgentRolloutRollback: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -106781,7 +118419,7 @@ func (m *ExternalAuditStorageDisable) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ 
-106808,13 +118446,13 @@ func (m *ExternalAuditStorageDisable) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Details", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -106841,10 +118479,72 @@ func (m *ExternalAuditStorageDisable) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Details == nil { - m.Details = &ExternalAuditStorageDetails{} + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - if err := m.Details.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Groups", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Groups = append(m.Groups, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << 
shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -106870,7 +118570,7 @@ func (m *ExternalAuditStorageDisable) Unmarshal(dAtA []byte) error { } return nil } -func (m *ExternalAuditStorageDetails) Unmarshal(dAtA []byte) error { +func (m *StaticHostUserCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -106893,17 +118593,17 @@ func (m *ExternalAuditStorageDetails) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ExternalAuditStorageDetails: wiretype end group for non-group") + return fmt.Errorf("proto: StaticHostUserCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ExternalAuditStorageDetails: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: StaticHostUserCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { - case 3: + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field IntegrationName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -106913,29 +118613,30 @@ func (m *ExternalAuditStorageDetails) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l 
{ return io.ErrUnexpectedEOF } - m.IntegrationName = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 4: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionRecordingsUri", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -106945,29 +118646,30 @@ func (m *ExternalAuditStorageDetails) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.SessionRecordingsUri = string(dAtA[iNdEx:postIndex]) + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 5: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AthenaWorkgroup", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -106977,29 +118679,30 @@ func (m *ExternalAuditStorageDetails) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } 
- m.AthenaWorkgroup = string(dAtA[iNdEx:postIndex]) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 6: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field GlueDatabase", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -107009,29 +118712,30 @@ func (m *ExternalAuditStorageDetails) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.GlueDatabase = string(dAtA[iNdEx:postIndex]) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 7: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field GlueTable", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -107041,29 +118745,81 @@ func (m *ExternalAuditStorageDetails) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.GlueTable = 
string(dAtA[iNdEx:postIndex]) + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 8: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *StaticHostUserUpdate) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: StaticHostUserUpdate: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: StaticHostUserUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AuditEventsLongTermUri", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -107073,29 +118829,30 @@ func (m *ExternalAuditStorageDetails) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if 
postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.AuditEventsLongTermUri = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 9: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AthenaResultsUri", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -107105,29 +118862,30 @@ func (m *ExternalAuditStorageDetails) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.AthenaResultsUri = string(dAtA[iNdEx:postIndex]) + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 10: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PolicyName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -107137,29 +118895,30 @@ func (m *ExternalAuditStorageDetails) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return 
ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.PolicyName = string(dAtA[iNdEx:postIndex]) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 11: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Region", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -107169,23 +118928,57 @@ func (m *ExternalAuditStorageDetails) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Region = string(dAtA[iNdEx:postIndex]) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -107209,7 +119002,7 @@ func (m *ExternalAuditStorageDetails) Unmarshal(dAtA 
[]byte) error { } return nil } -func (m *OktaAccessListSync) Unmarshal(dAtA []byte) error { +func (m *StaticHostUserDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -107232,10 +119025,10 @@ func (m *OktaAccessListSync) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: OktaAccessListSync: wiretype end group for non-group") + return fmt.Errorf("proto: StaticHostUserDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: OktaAccessListSync: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: StaticHostUserDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -107305,10 +119098,10 @@ func (m *OktaAccessListSync) Unmarshal(dAtA []byte) error { } iNdEx = postIndex case 3: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field NumAppFilters", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - m.NumAppFilters = 0 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -107318,16 +119111,30 @@ func (m *OktaAccessListSync) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.NumAppFilters |= int32(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex case 4: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field NumGroupFilters", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - m.NumGroupFilters = 0 + 
var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -107337,54 +119144,30 @@ func (m *OktaAccessListSync) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.NumGroupFilters |= int32(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - case 5: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field NumApps", wireType) + if msglen < 0 { + return ErrInvalidLengthEvents } - m.NumApps = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.NumApps |= int32(b&0x7F) << shift - if b < 0x80 { - break - } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents } - case 6: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field NumGroups", wireType) + if postIndex > l { + return io.ErrUnexpectedEOF } - m.NumGroups = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.NumGroups |= int32(b&0x7F) << shift - if b < 0x80 { - break - } + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - case 7: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field NumRoles", wireType) + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - m.NumRoles = 0 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -107394,49 +119177,25 @@ func (m *OktaAccessListSync) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.NumRoles |= int32(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - case 8: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field 
NumAccessLists", wireType) + if msglen < 0 { + return ErrInvalidLengthEvents } - m.NumAccessLists = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.NumAccessLists |= int32(b&0x7F) << shift - if b < 0x80 { - break - } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents } - case 9: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field NumAccessListMembers", wireType) + if postIndex > l { + return io.ErrUnexpectedEOF } - m.NumAccessListMembers = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.NumAccessListMembers |= int32(b&0x7F) << shift - if b < 0x80 { - break - } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -107459,7 +119218,7 @@ func (m *OktaAccessListSync) Unmarshal(dAtA []byte) error { } return nil } -func (m *OktaUserSync) Unmarshal(dAtA []byte) error { +func (m *CrownJewelCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -107482,10 +119241,10 @@ func (m *OktaUserSync) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: OktaUserSync: wiretype end group for non-group") + return fmt.Errorf("proto: CrownJewelCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: OktaUserSync: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: CrownJewelCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -107556,9 +119315,9 @@ func (m *OktaUserSync) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - 
return fmt.Errorf("proto: wrong wireType = %d for field OrgUrl", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -107568,29 +119327,30 @@ func (m *OktaUserSync) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.OrgUrl = string(dAtA[iNdEx:postIndex]) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AppId", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -107600,29 +119360,30 @@ func (m *OktaUserSync) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.AppId = string(dAtA[iNdEx:postIndex]) + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 5: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field NumUsersCreated", wireType) + if wireType != 2 { + 
return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - m.NumUsersCreated = 0 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -107632,54 +119393,30 @@ func (m *OktaUserSync) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.NumUsersCreated |= int32(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - case 6: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field NumUsersDeleted", wireType) + if msglen < 0 { + return ErrInvalidLengthEvents } - m.NumUsersDeleted = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.NumUsersDeleted |= int32(b&0x7F) << shift - if b < 0x80 { - break - } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents } - case 7: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field NumUsersModified", wireType) + if postIndex > l { + return io.ErrUnexpectedEOF } - m.NumUsersModified = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.NumUsersModified |= int32(b&0x7F) << shift - if b < 0x80 { - break - } + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - case 8: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field NumUsersTotal", wireType) + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field CrownJewelQuery", wireType) } - m.NumUsersTotal = 0 + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -107689,11 +119426,24 @@ func (m *OktaUserSync) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.NumUsersTotal |= 
int32(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.CrownJewelQuery = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -107716,7 +119466,7 @@ func (m *OktaUserSync) Unmarshal(dAtA []byte) error { } return nil } -func (m *SPIFFESVIDIssued) Unmarshal(dAtA []byte) error { +func (m *CrownJewelUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -107739,10 +119489,10 @@ func (m *SPIFFESVIDIssued) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SPIFFESVIDIssued: wiretype end group for non-group") + return fmt.Errorf("proto: CrownJewelUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SPIFFESVIDIssued: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: CrownJewelUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -107780,7 +119530,7 @@ func (m *SPIFFESVIDIssued) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -107807,13 +119557,13 @@ func (m *SPIFFESVIDIssued) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType 
= %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -107840,15 +119590,15 @@ func (m *SPIFFESVIDIssued) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SPIFFEID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -107858,29 +119608,30 @@ func (m *SPIFFESVIDIssued) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.SPIFFEID = string(dAtA[iNdEx:postIndex]) + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DNSSANs", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -107890,27 +119641,28 @@ func (m *SPIFFESVIDIssued) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - 
intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.DNSSANs = append(m.DNSSANs, string(dAtA[iNdEx:postIndex])) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field IPSANs", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CurrentCrownJewelQuery", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -107938,11 +119690,11 @@ func (m *SPIFFESVIDIssued) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.IPSANs = append(m.IPSANs, string(dAtA[iNdEx:postIndex])) + m.CurrentCrownJewelQuery = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SVIDType", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UpdatedCrownJewelQuery", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -107970,77 +119722,64 @@ func (m *SPIFFESVIDIssued) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.SVIDType = string(dAtA[iNdEx:postIndex]) + m.UpdatedCrownJewelQuery = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 8: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SerialNumber", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents + default: + iNdEx = preIndex + 
skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err } - postIndex := iNdEx + intStringLen - if postIndex < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - if postIndex > l { + if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.SerialNumber = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 9: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Hint", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *CrownJewelDelete) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents } - if postIndex > l { + if iNdEx >= l { return io.ErrUnexpectedEOF } - m.Hint = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 10: + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: CrownJewelDelete: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: CrownJewelDelete: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field JTI", wireType) + return fmt.Errorf("proto: wrong 
wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -108050,29 +119789,30 @@ func (m *SPIFFESVIDIssued) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.JTI = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 11: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Audiences", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -108082,29 +119822,30 @@ func (m *SPIFFESVIDIssued) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Audiences = append(m.Audiences, string(dAtA[iNdEx:postIndex])) + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 12: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentity", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen 
uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -108114,29 +119855,30 @@ func (m *SPIFFESVIDIssued) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.WorkloadIdentity = string(dAtA[iNdEx:postIndex]) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 13: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentityRevision", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -108146,27 +119888,28 @@ func (m *SPIFFESVIDIssued) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.WorkloadIdentityRevision = string(dAtA[iNdEx:postIndex]) + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 14: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Attributes", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := 
uint(0); ; shift += 7 { @@ -108193,10 +119936,7 @@ func (m *SPIFFESVIDIssued) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Attributes == nil { - m.Attributes = &Struct{} - } - if err := m.Attributes.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -108222,7 +119962,7 @@ func (m *SPIFFESVIDIssued) Unmarshal(dAtA []byte) error { } return nil } -func (m *AuthPreferenceUpdate) Unmarshal(dAtA []byte) error { +func (m *UserTaskCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -108245,10 +119985,10 @@ func (m *AuthPreferenceUpdate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AuthPreferenceUpdate: wiretype end group for non-group") + return fmt.Errorf("proto: UserTaskCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AuthPreferenceUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UserTaskCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -108318,6 +120058,39 @@ func (m *AuthPreferenceUpdate) Unmarshal(dAtA []byte) error { } iNdEx = postIndex case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } 
+ iNdEx = postIndex + case 4: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } @@ -108350,7 +120123,7 @@ func (m *AuthPreferenceUpdate) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 4: + case 5: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } @@ -108383,11 +120156,11 @@ func (m *AuthPreferenceUpdate) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 5: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field AdminActionsMFA", wireType) + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserTaskMetadata", wireType) } - m.AdminActionsMFA = 0 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -108397,11 +120170,25 @@ func (m *AuthPreferenceUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.AdminActionsMFA |= AdminActionsMFAStatus(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.UserTaskMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -108424,7 +120211,7 @@ func (m *AuthPreferenceUpdate) Unmarshal(dAtA []byte) error { } return nil } -func (m *ClusterNetworkingConfigUpdate) Unmarshal(dAtA []byte) error { +func (m *UserTaskUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -108447,10 +120234,10 @@ func (m *ClusterNetworkingConfigUpdate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ClusterNetworkingConfigUpdate: wiretype end group 
for non-group") + return fmt.Errorf("proto: UserTaskUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ClusterNetworkingConfigUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UserTaskUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -108521,7 +120308,7 @@ func (m *ClusterNetworkingConfigUpdate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -108548,13 +120335,13 @@ func (m *ClusterNetworkingConfigUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -108581,64 +120368,13 @@ func (m *ClusterNetworkingConfigUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *SessionRecordingConfigUpdate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: SessionRecordingConfigUpdate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: SessionRecordingConfigUpdate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -108665,13 +120401,13 @@ func (m *SessionRecordingConfigUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserTaskMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -108698,15 +120434,15 @@ func (m *SessionRecordingConfigUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserTaskMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 3: + 
case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CurrentUserTaskState", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -108716,30 +120452,29 @@ func (m *SessionRecordingConfigUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.CurrentUserTaskState = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UpdatedUserTaskState", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -108749,24 +120484,23 @@ func (m *SessionRecordingConfigUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.UpdatedUserTaskState = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex 
default: iNdEx = preIndex @@ -108790,7 +120524,7 @@ func (m *SessionRecordingConfigUpdate) Unmarshal(dAtA []byte) error { } return nil } -func (m *AccessPathChanged) Unmarshal(dAtA []byte) error { +func (m *UserTaskMetadata) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -108813,80 +120547,15 @@ func (m *AccessPathChanged) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AccessPathChanged: wiretype end group for non-group") + return fmt.Errorf("proto: UserTaskMetadata: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AccessPathChanged: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UserTaskMetadata: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ChangeID", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - 
postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.ChangeID = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AffectedResourceName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TaskType", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -108914,11 +120583,11 @@ func (m *AccessPathChanged) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AffectedResourceName = string(dAtA[iNdEx:postIndex]) + m.TaskType = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AffectedResourceSource", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field IssueType", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -108946,11 +120615,11 @@ func (m *AccessPathChanged) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AffectedResourceSource = string(dAtA[iNdEx:postIndex]) + m.IssueType = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 5: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AffectedResourceType", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Integration", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -108978,7 +120647,7 @@ func (m *AccessPathChanged) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AffectedResourceType = string(dAtA[iNdEx:postIndex]) + m.Integration = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -109002,7 +120671,7 @@ func (m *AccessPathChanged) Unmarshal(dAtA []byte) error { } return nil } -func (m *SpannerRPC) Unmarshal(dAtA []byte) error { +func (m *UserTaskDelete) 
Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -109025,10 +120694,10 @@ func (m *SpannerRPC) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SpannerRPC: wiretype end group for non-group") + return fmt.Errorf("proto: UserTaskDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SpannerRPC: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: UserTaskDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -109066,7 +120735,7 @@ func (m *SpannerRPC) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -109093,13 +120762,13 @@ func (m *SpannerRPC) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -109126,13 +120795,13 @@ func (m *SpannerRPC) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DatabaseMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field 
UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -109159,78 +120828,13 @@ func (m *SpannerRPC) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.DatabaseMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 6: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Procedure", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Procedure = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 7: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Args", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ 
-109257,10 +120861,7 @@ func (m *SpannerRPC) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Args == nil { - m.Args = &Struct{} - } - if err := m.Args.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -109286,7 +120887,7 @@ func (m *SpannerRPC) Unmarshal(dAtA []byte) error { } return nil } -func (m *AccessGraphSettingsUpdate) Unmarshal(dAtA []byte) error { +func (m *ContactCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -109309,10 +120910,10 @@ func (m *AccessGraphSettingsUpdate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AccessGraphSettingsUpdate: wiretype end group for non-group") + return fmt.Errorf("proto: ContactCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AccessGraphSettingsUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ContactCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -109350,7 +120951,7 @@ func (m *AccessGraphSettingsUpdate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -109377,7 +120978,7 @@ func (m *AccessGraphSettingsUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -109447,93 +121048,9 @@ func (m *AccessGraphSettingsUpdate) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - default: - 
iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *SPIFFEFederationCreate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: SPIFFEFederationCreate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: SPIFFEFederationCreate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 2: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen 
int for shift := uint(0); ; shift += 7 { @@ -109560,15 +121077,15 @@ func (m *SPIFFEFederationCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 3: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Email", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -109578,30 +121095,29 @@ func (m *SPIFFEFederationCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Email = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + case 7: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field ContactType", wireType) } - var msglen int + m.ContactType = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -109611,25 +121127,11 @@ func (m *SPIFFEFederationCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.ContactType |= ContactType(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 
{ - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -109652,7 +121154,7 @@ func (m *SPIFFEFederationCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *SPIFFEFederationDelete) Unmarshal(dAtA []byte) error { +func (m *ContactDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -109675,10 +121177,10 @@ func (m *SPIFFEFederationDelete) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SPIFFEFederationDelete: wiretype end group for non-group") + return fmt.Errorf("proto: ContactDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SPIFFEFederationDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ContactDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -109743,130 +121245,13 @@ func (m *SPIFFEFederationDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); 
err != nil { - return err - } - iNdEx = postIndex - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *AutoUpdateConfigCreate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: AutoUpdateConfigCreate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: AutoUpdateConfigCreate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -109893,13 +121278,13 @@ func (m *AutoUpdateConfigCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -109926,13 +121311,13 @@ func (m *AutoUpdateConfigCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 3: + case 5: if 
wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -109959,15 +121344,15 @@ func (m *AutoUpdateConfigCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Email", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -109977,30 +121362,29 @@ func (m *AutoUpdateConfigCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Email = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + case 7: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field ContactType", wireType) } - var msglen int + m.ContactType = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -110010,25 +121394,11 @@ func (m *AutoUpdateConfigCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) 
<< shift + m.ContactType |= ContactType(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -110051,7 +121421,7 @@ func (m *AutoUpdateConfigCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *AutoUpdateConfigUpdate) Unmarshal(dAtA []byte) error { +func (m *WorkloadIdentityCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -110074,10 +121444,10 @@ func (m *AutoUpdateConfigUpdate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AutoUpdateConfigUpdate: wiretype end group for non-group") + return fmt.Errorf("proto: WorkloadIdentityCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AutoUpdateConfigUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: WorkloadIdentityCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -110115,7 +121485,7 @@ func (m *AutoUpdateConfigUpdate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -110142,7 +121512,7 @@ func (m *AutoUpdateConfigUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -110214,7 
+121584,7 @@ func (m *AutoUpdateConfigUpdate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentityData", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -110241,7 +121611,10 @@ func (m *AutoUpdateConfigUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.WorkloadIdentityData == nil { + m.WorkloadIdentityData = &Struct{} + } + if err := m.WorkloadIdentityData.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -110267,7 +121640,7 @@ func (m *AutoUpdateConfigUpdate) Unmarshal(dAtA []byte) error { } return nil } -func (m *AutoUpdateConfigDelete) Unmarshal(dAtA []byte) error { +func (m *WorkloadIdentityUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -110290,10 +121663,10 @@ func (m *AutoUpdateConfigDelete) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AutoUpdateConfigDelete: wiretype end group for non-group") + return fmt.Errorf("proto: WorkloadIdentityUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AutoUpdateConfigDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: WorkloadIdentityUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -110430,7 +121803,127 @@ func (m *AutoUpdateConfigDelete) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentityData", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if 
shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.WorkloadIdentityData == nil { + m.WorkloadIdentityData = &Struct{} + } + if err := m.WorkloadIdentityData.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *WorkloadIdentityDelete) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: WorkloadIdentityDelete: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: WorkloadIdentityDelete: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := 
dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -110457,7 +121950,73 @@ func (m *AutoUpdateConfigDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + 
postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -110483,7 +122042,7 @@ func (m *AutoUpdateConfigDelete) Unmarshal(dAtA []byte) error { } return nil } -func (m *AutoUpdateVersionCreate) Unmarshal(dAtA []byte) error { +func (m *WorkloadIdentityX509RevocationCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -110506,10 +122065,10 @@ func (m *AutoUpdateVersionCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AutoUpdateVersionCreate: wiretype end group for non-group") + return fmt.Errorf("proto: WorkloadIdentityX509RevocationCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AutoUpdateVersionCreate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: WorkloadIdentityX509RevocationCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -110646,9 +122205,9 @@ func (m *AutoUpdateVersionCreate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Reason", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -110658,24 +122217,23 @@ func (m *AutoUpdateVersionCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return 
ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Reason = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -110699,7 +122257,7 @@ func (m *AutoUpdateVersionCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *AutoUpdateVersionUpdate) Unmarshal(dAtA []byte) error { +func (m *WorkloadIdentityX509RevocationUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -110722,10 +122280,10 @@ func (m *AutoUpdateVersionUpdate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AutoUpdateVersionUpdate: wiretype end group for non-group") + return fmt.Errorf("proto: WorkloadIdentityX509RevocationUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AutoUpdateVersionUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: WorkloadIdentityX509RevocationUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -110763,7 +122321,7 @@ func (m *AutoUpdateVersionUpdate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -110790,7 +122348,7 @@ func (m *AutoUpdateVersionUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -110862,9 +122420,9 @@ func (m *AutoUpdateVersionUpdate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong 
wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Reason", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -110874,24 +122432,23 @@ func (m *AutoUpdateVersionUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Reason = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -110915,7 +122472,7 @@ func (m *AutoUpdateVersionUpdate) Unmarshal(dAtA []byte) error { } return nil } -func (m *AutoUpdateVersionDelete) Unmarshal(dAtA []byte) error { +func (m *WorkloadIdentityX509RevocationDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -110938,10 +122495,10 @@ func (m *AutoUpdateVersionDelete) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AutoUpdateVersionDelete: wiretype end group for non-group") + return fmt.Errorf("proto: WorkloadIdentityX509RevocationDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AutoUpdateVersionDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: WorkloadIdentityX509RevocationDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -111076,39 +122633,6 @@ func (m *AutoUpdateVersionDelete) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 5: - if wireType != 2 { - 
return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -111131,7 +122655,7 @@ func (m *AutoUpdateVersionDelete) Unmarshal(dAtA []byte) error { } return nil } -func (m *AutoUpdateAgentRolloutTrigger) Unmarshal(dAtA []byte) error { +func (m *GitCommand) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -111154,10 +122678,10 @@ func (m *AutoUpdateAgentRolloutTrigger) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AutoUpdateAgentRolloutTrigger: wiretype end group for non-group") + return fmt.Errorf("proto: GitCommand: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AutoUpdateAgentRolloutTrigger: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: GitCommand: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -111261,39 +122785,7 @@ func (m *AutoUpdateAgentRolloutTrigger) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Groups", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= 
uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Groups = append(m.Groups, string(dAtA[iNdEx:postIndex])) - iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -111320,64 +122812,13 @@ func (m *AutoUpdateAgentRolloutTrigger) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *AutoUpdateAgentRolloutForceDone) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: AutoUpdateAgentRolloutForceDone: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: AutoUpdateAgentRolloutForceDone: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -111404,13 +122845,13 @@ func (m *AutoUpdateAgentRolloutForceDone) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CommandMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -111437,15 +122878,15 @@ func (m *AutoUpdateAgentRolloutForceDone) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.CommandMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = 
postIndex - case 3: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Service", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -111455,28 +122896,27 @@ func (m *AutoUpdateAgentRolloutForceDone) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Service = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 9: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Groups", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -111504,11 +122944,11 @@ func (m *AutoUpdateAgentRolloutForceDone) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Groups = append(m.Groups, string(dAtA[iNdEx:postIndex])) + m.Path = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 5: + case 10: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Actions", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -111535,7 +122975,8 @@ func (m *AutoUpdateAgentRolloutForceDone) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.Actions = 
append(m.Actions, &GitCommandAction{}) + if err := m.Actions[len(m.Actions)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -111561,7 +123002,7 @@ func (m *AutoUpdateAgentRolloutForceDone) Unmarshal(dAtA []byte) error { } return nil } -func (m *AutoUpdateAgentRolloutRollback) Unmarshal(dAtA []byte) error { +func (m *GitCommandAction) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -111584,17 +123025,17 @@ func (m *AutoUpdateAgentRolloutRollback) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AutoUpdateAgentRolloutRollback: wiretype end group for non-group") + return fmt.Errorf("proto: GitCommandAction: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AutoUpdateAgentRolloutRollback: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: GitCommandAction: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Action", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -111604,30 +123045,29 @@ func (m *AutoUpdateAgentRolloutRollback) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Action = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: 
if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Reference", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -111637,30 +123077,29 @@ func (m *AutoUpdateAgentRolloutRollback) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Reference = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Old", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -111670,28 +123109,27 @@ func (m *AutoUpdateAgentRolloutRollback) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Old = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for 
field Groups", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field New", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -111719,40 +123157,7 @@ func (m *AutoUpdateAgentRolloutRollback) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Groups = append(m.Groups, string(dAtA[iNdEx:postIndex])) - iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.New = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -111776,7 +123181,7 @@ func (m *AutoUpdateAgentRolloutRollback) Unmarshal(dAtA []byte) error { } return nil } -func (m *StaticHostUserCreate) Unmarshal(dAtA []byte) error { +func (m *AccessListInvalidMetadata) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -111799,17 +123204,17 @@ func (m *StaticHostUserCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: StaticHostUserCreate: wiretype end group for non-group") + return fmt.Errorf("proto: AccessListInvalidMetadata: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: StaticHostUserCreate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AccessListInvalidMetadata: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if 
wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AccessListName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -111819,30 +123224,29 @@ func (m *StaticHostUserCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.AccessListName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field User", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -111852,28 +123256,110 @@ func (m *StaticHostUserCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.User = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + 
return fmt.Errorf("proto: wrong wireType = %d for field MissingRoles", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.MissingRoles = append(m.MissingRoles, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *UserLoginAccessListInvalid) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: UserLoginAccessListInvalid: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: UserLoginAccessListInvalid: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -111900,13 +123386,13 @@ func (m *StaticHostUserCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AccessListInvalidMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -111933,13 +123419,13 @@ func (m *StaticHostUserCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.AccessListInvalidMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field 
ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -111966,7 +123452,7 @@ func (m *StaticHostUserCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -111992,7 +123478,7 @@ func (m *StaticHostUserCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *StaticHostUserUpdate) Unmarshal(dAtA []byte) error { +func (m *StableUNIXUserCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -112015,10 +123501,10 @@ func (m *StaticHostUserUpdate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: StaticHostUserUpdate: wiretype end group for non-group") + return fmt.Errorf("proto: StableUNIXUserCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: StaticHostUserUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: StableUNIXUserCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -112056,7 +123542,7 @@ func (m *StaticHostUserUpdate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -112083,13 +123569,13 @@ func (m *StaticHostUserUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if 
wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StableUnixUser", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -112116,15 +123602,69 @@ func (m *StaticHostUserUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.StableUnixUser == nil { + m.StableUnixUser = &StableUNIXUser{} + } + if err := m.StableUnixUser.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *StableUNIXUser) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: StableUNIXUser: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: StableUNIXUser: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Username", wireType) } - var msglen int + var stringLen uint64 for shift := 
uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -112134,30 +123674,29 @@ func (m *StaticHostUserUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Username = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Uid", wireType) } - var msglen int + m.Uid = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -112167,25 +123706,11 @@ func (m *StaticHostUserUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.Uid |= int32(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -112208,7 +123733,7 @@ func (m *StaticHostUserUpdate) Unmarshal(dAtA []byte) error { } return nil } -func (m *StaticHostUserDelete) Unmarshal(dAtA []byte) error { +func (m *AWSICResourceSync) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -112231,10 +123756,10 @@ func (m *StaticHostUserDelete) Unmarshal(dAtA []byte) error { fieldNum 
:= int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: StaticHostUserDelete: wiretype end group for non-group") + return fmt.Errorf("proto: AWSICResourceSync: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: StaticHostUserDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AWSICResourceSync: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -112271,10 +123796,10 @@ func (m *StaticHostUserDelete) Unmarshal(dAtA []byte) error { } iNdEx = postIndex case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TotalAccounts", wireType) } - var msglen int + m.TotalAccounts = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -112284,30 +123809,16 @@ func (m *StaticHostUserDelete) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.TotalAccounts |= int32(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TotalAccountAssignments", wireType) } - var msglen int + m.TotalAccountAssignments = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -112317,30 +123828,16 @@ func (m *StaticHostUserDelete) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.TotalAccountAssignments |= int32(b&0x7F) << 
shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TotalUserGroups", wireType) } - var msglen int + m.TotalUserGroups = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -112350,30 +123847,16 @@ func (m *StaticHostUserDelete) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.TotalUserGroups |= int32(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TotalPermissionSets", wireType) } - var msglen int + m.TotalPermissionSets = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -112383,79 +123866,14 @@ func (m *StaticHostUserDelete) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.TotalPermissionSets |= int32(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := 
m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *CrownJewelCreate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: CrownJewelCreate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: CrownJewelCreate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -112482,46 +123900,64 @@ func (m *CrownJewelCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= 
l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err } - if msglen < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF } - if postIndex > l { + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *HealthCheckConfigCreate) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break } - iNdEx = postIndex - case 3: + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: HealthCheckConfigCreate: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: HealthCheckConfigCreate: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -112548,13 +123984,13 @@ func (m *CrownJewelCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { 
return err } iNdEx = postIndex - case 4: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -112581,13 +124017,13 @@ func (m *CrownJewelCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -112614,15 +124050,15 @@ func (m *CrownJewelCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 6: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CrownJewelQuery", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -112632,23 +124068,24 @@ func (m *CrownJewelCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return 
io.ErrUnexpectedEOF } - m.CrownJewelQuery = string(dAtA[iNdEx:postIndex]) + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -112672,7 +124109,7 @@ func (m *CrownJewelCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *CrownJewelUpdate) Unmarshal(dAtA []byte) error { +func (m *HealthCheckConfigUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -112695,10 +124132,10 @@ func (m *CrownJewelUpdate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: CrownJewelUpdate: wiretype end group for non-group") + return fmt.Errorf("proto: HealthCheckConfigUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: CrownJewelUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: HealthCheckConfigUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -112736,7 +124173,7 @@ func (m *CrownJewelUpdate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -112763,7 +124200,7 @@ func (m *CrownJewelUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -112833,9 +124270,60 @@ func (m *CrownJewelUpdate) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 5: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return 
ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *HealthCheckConfigDelete) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: HealthCheckConfigDelete: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: HealthCheckConfigDelete: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -112862,15 +124350,15 @@ func (m *CrownJewelUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 6: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CurrentCrownJewelQuery", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -112880,29 +124368,30 @@ func (m *CrownJewelUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - 
stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.CurrentCrownJewelQuery = string(dAtA[iNdEx:postIndex]) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 7: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UpdatedCrownJewelQuery", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -112912,78 +124401,28 @@ func (m *CrownJewelUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.UpdatedCrownJewelQuery = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *CrownJewelDelete) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: CrownJewelDelete: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: CrownJewelDelete: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + iNdEx = postIndex + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -113010,13 +124449,64 @@ func (m *CrownJewelDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *WorkloadIdentityX509IssuerOverrideCreate) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: WorkloadIdentityX509IssuerOverrideCreate: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: WorkloadIdentityX509IssuerOverrideCreate: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -113043,11 +124533,11 @@ func (m *CrownJewelDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 3: + case 2: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } @@ -113080,7 +124570,7 @@ func (m *CrownJewelDelete) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 4: + case 3: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } @@ -113113,7 +124603,7 @@ func (m *CrownJewelDelete) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 5: + case 4: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", 
wireType) } @@ -113168,7 +124658,7 @@ func (m *CrownJewelDelete) Unmarshal(dAtA []byte) error { } return nil } -func (m *UserTaskCreate) Unmarshal(dAtA []byte) error { +func (m *WorkloadIdentityX509IssuerOverrideDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -113191,10 +124681,10 @@ func (m *UserTaskCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UserTaskCreate: wiretype end group for non-group") + return fmt.Errorf("proto: WorkloadIdentityX509IssuerOverrideDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UserTaskCreate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: WorkloadIdentityX509IssuerOverrideDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -113231,72 +124721,6 @@ func (m *UserTaskCreate) Unmarshal(dAtA []byte) error { } iNdEx = postIndex case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) 
<< shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 4: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } @@ -113329,7 +124753,7 @@ func (m *UserTaskCreate) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 5: + case 3: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } @@ -113362,9 +124786,9 @@ func (m *UserTaskCreate) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 6: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserTaskMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -113391,7 +124815,7 @@ func (m *UserTaskCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserTaskMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -113417,7 +124841,7 @@ func (m *UserTaskCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *UserTaskUpdate) Unmarshal(dAtA []byte) error { +func (m *SigstorePolicyCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -113440,10 +124864,10 @@ func (m *UserTaskUpdate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UserTaskUpdate: wiretype end group for non-group") + return fmt.Errorf("proto: SigstorePolicyCreate: wiretype end group for non-group") } if fieldNum 
<= 0 { - return fmt.Errorf("proto: UserTaskUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SigstorePolicyCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -113481,7 +124905,7 @@ func (m *UserTaskUpdate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -113508,13 +124932,13 @@ func (m *UserTaskUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -113541,13 +124965,13 @@ func (m *UserTaskUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -113574,13 +124998,64 @@ func (m *UserTaskUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } 
iNdEx = postIndex - case 5: + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthEvents + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *SigstorePolicyUpdate) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SigstorePolicyUpdate: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SigstorePolicyUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -113607,13 +125082,13 @@ func (m *UserTaskUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 6: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserTaskMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -113640,15 +125115,15 
@@ func (m *UserTaskUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserTaskMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 7: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CurrentUserTaskState", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -113658,29 +125133,30 @@ func (m *UserTaskUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.CurrentUserTaskState = string(dAtA[iNdEx:postIndex]) + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 8: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UpdatedUserTaskState", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -113690,23 +125166,24 @@ func (m *UserTaskUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + 
msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.UpdatedUserTaskState = string(dAtA[iNdEx:postIndex]) + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -113730,7 +125207,7 @@ func (m *UserTaskUpdate) Unmarshal(dAtA []byte) error { } return nil } -func (m *UserTaskMetadata) Unmarshal(dAtA []byte) error { +func (m *SigstorePolicyDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -113753,17 +125230,17 @@ func (m *UserTaskMetadata) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UserTaskMetadata: wiretype end group for non-group") + return fmt.Errorf("proto: SigstorePolicyDelete: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UserTaskMetadata: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SigstorePolicyDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TaskType", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -113773,29 +125250,30 @@ func (m *UserTaskMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.TaskType = string(dAtA[iNdEx:postIndex]) + if err := 
m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field IssueType", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -113805,29 +125283,30 @@ func (m *UserTaskMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.IssueType = string(dAtA[iNdEx:postIndex]) + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Integration", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -113837,23 +125316,57 @@ func (m *UserTaskMetadata) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Integration = string(dAtA[iNdEx:postIndex]) + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx 
= postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -113877,7 +125390,7 @@ func (m *UserTaskMetadata) Unmarshal(dAtA []byte) error { } return nil } -func (m *UserTaskDelete) Unmarshal(dAtA []byte) error { +func (m *MCPSessionStart) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -113900,10 +125413,10 @@ func (m *UserTaskDelete) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UserTaskDelete: wiretype end group for non-group") + return fmt.Errorf("proto: MCPSessionStart: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UserTaskDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: MCPSessionStart: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -113941,7 +125454,7 @@ func (m *UserTaskDelete) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -113968,13 +125481,13 @@ func (m *UserTaskDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return 
io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -114001,13 +125514,13 @@ func (m *UserTaskDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -114034,7 +125547,7 @@ func (m *UserTaskDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -114071,60 +125584,9 @@ func (m *UserTaskDelete) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *ContactCreate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: ContactCreate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: ContactCreate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -114151,15 +125613,15 @@ func (m *ContactCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.AppMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field McpSessionId", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -114169,30 +125631,29 @@ func (m *ContactCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx 
+ msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.McpSessionId = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ClientInfo", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -114202,30 +125663,29 @@ func (m *ContactCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.ClientInfo = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 9: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerInfo", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -114235,30 +125695,29 @@ func (m *ContactCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { 
return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.ServerInfo = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 5: + case 10: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ProtocolVersion", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -114268,28 +125727,27 @@ func (m *ContactCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.ProtocolVersion = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 6: + case 11: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Email", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field IngressAuthType", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -114317,13 +125775,13 @@ func (m *ContactCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Email = string(dAtA[iNdEx:postIndex]) + m.IngressAuthType = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 7: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field ContactType", wireType) + case 12: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field EgressAuthType", wireType) } - m.ContactType = 0 + var stringLen uint64 for shift := 
uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -114333,11 +125791,24 @@ func (m *ContactCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.ContactType |= ContactType(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.EgressAuthType = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -114360,7 +125831,7 @@ func (m *ContactCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *ContactDelete) Unmarshal(dAtA []byte) error { +func (m *MCPSessionEnd) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -114383,10 +125854,10 @@ func (m *ContactDelete) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ContactDelete: wiretype end group for non-group") + return fmt.Errorf("proto: MCPSessionEnd: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ContactDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: MCPSessionEnd: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -114418,13 +125889,46 @@ func (m *ContactDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + 
if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -114451,13 +125955,13 @@ func (m *ContactDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 3: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -114484,11 +125988,11 @@ func (m *ContactDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: + case 5: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } @@ -114521,9 +126025,9 @@ func (m *ContactDelete) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 5: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = 
%d for field AppMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -114550,15 +126054,15 @@ func (m *ContactDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.AppMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 6: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Email", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -114568,29 +126072,30 @@ func (m *ContactDelete) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Email = string(dAtA[iNdEx:postIndex]) + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 7: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field ContactType", wireType) + case 8: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Headers", wireType) } - m.ContactType = 0 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -114600,11 +126105,25 @@ func (m *ContactDelete) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.ContactType |= ContactType(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + 
return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Headers.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -114627,7 +126146,7 @@ func (m *ContactDelete) Unmarshal(dAtA []byte) error { } return nil } -func (m *WorkloadIdentityCreate) Unmarshal(dAtA []byte) error { +func (m *MCPJSONRPCMessage) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -114650,17 +126169,17 @@ func (m *WorkloadIdentityCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: WorkloadIdentityCreate: wiretype end group for non-group") + return fmt.Errorf("proto: MCPJSONRPCMessage: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: WorkloadIdentityCreate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: MCPJSONRPCMessage: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field JSONRPC", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -114670,30 +126189,29 @@ func (m *WorkloadIdentityCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.JSONRPC = 
string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -114703,63 +126221,29 @@ func (m *WorkloadIdentityCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.ID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Method", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ 
-114769,28 +126253,27 @@ func (m *WorkloadIdentityCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Method = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentityData", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Params", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -114817,10 +126300,10 @@ func (m *WorkloadIdentityCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.WorkloadIdentityData == nil { - m.WorkloadIdentityData = &Struct{} + if m.Params == nil { + m.Params = &Struct{} } - if err := m.WorkloadIdentityData.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Params.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -114846,7 +126329,7 @@ func (m *WorkloadIdentityCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *WorkloadIdentityUpdate) Unmarshal(dAtA []byte) error { +func (m *MCPSessionRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -114869,10 +126352,10 @@ func (m *WorkloadIdentityUpdate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: WorkloadIdentityUpdate: wiretype end group for non-group") + return fmt.Errorf("proto: MCPSessionRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: 
WorkloadIdentityUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: MCPSessionRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -114910,7 +126393,7 @@ func (m *WorkloadIdentityUpdate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -114937,13 +126420,13 @@ func (m *WorkloadIdentityUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -114970,13 +126453,13 @@ func (m *WorkloadIdentityUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -115003,133 +126486,13 @@ func (m *WorkloadIdentityUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.AppMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { 
return err } iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field WorkloadIdentityData", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if m.WorkloadIdentityData == nil { - m.WorkloadIdentityData = &Struct{} - } - if err := m.WorkloadIdentityData.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *WorkloadIdentityDelete) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: WorkloadIdentityDelete: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: WorkloadIdentityDelete: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -115156,13 +126519,13 @@ func (m *WorkloadIdentityDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != 
nil { return err } iNdEx = postIndex - case 3: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Message", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -115189,13 +126552,13 @@ func (m *WorkloadIdentityDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Message.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Headers", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -115222,7 +126585,7 @@ func (m *WorkloadIdentityDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Headers.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -115248,7 +126611,7 @@ func (m *WorkloadIdentityDelete) Unmarshal(dAtA []byte) error { } return nil } -func (m *WorkloadIdentityX509RevocationCreate) Unmarshal(dAtA []byte) error { +func (m *MCPSessionNotification) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -115271,10 +126634,10 @@ func (m *WorkloadIdentityX509RevocationCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: WorkloadIdentityX509RevocationCreate: wiretype end group for non-group") + return fmt.Errorf("proto: MCPSessionNotification: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: WorkloadIdentityX509RevocationCreate: illegal tag %d (wire type %d)", fieldNum, wire) + return 
fmt.Errorf("proto: MCPSessionNotification: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -115312,7 +126675,7 @@ func (m *WorkloadIdentityX509RevocationCreate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -115339,13 +126702,13 @@ func (m *WorkloadIdentityX509RevocationCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -115372,13 +126735,13 @@ func (m *WorkloadIdentityX509RevocationCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -115405,15 +126768,15 @@ func (m *WorkloadIdentityX509RevocationCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.AppMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = 
postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Reason", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Message", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -115423,78 +126786,28 @@ func (m *WorkloadIdentityX509RevocationCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Reason = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + if err := m.Message.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *WorkloadIdentityX509RevocationUpdate) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: WorkloadIdentityX509RevocationUpdate: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: WorkloadIdentityX509RevocationUpdate: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + iNdEx = postIndex + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -115521,13 +126834,13 @@ func (m *WorkloadIdentityX509RevocationUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Headers", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -115554,46 +126867,64 @@ func (m *WorkloadIdentityX509RevocationUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Headers.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { 
return err } iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } + default: + iNdEx = preIndex + skippy, err := skipEvents(dAtA[iNdEx:]) + if err != nil { + return err } - if msglen < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF } - if postIndex > l { + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *MCPSessionListenSSEStream) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break } - iNdEx = postIndex - case 4: + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MCPSessionListenSSEStream: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: MCPSessionListenSSEStream: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; 
shift += 7 { @@ -115620,15 +126951,15 @@ func (m *WorkloadIdentityX509RevocationUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 5: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Reason", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -115638,78 +126969,28 @@ func (m *WorkloadIdentityX509RevocationUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Reason = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *WorkloadIdentityX509RevocationDelete) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: WorkloadIdentityX509RevocationDelete: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: WorkloadIdentityX509RevocationDelete: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + iNdEx = postIndex + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -115736,13 +127017,13 @@ func (m *WorkloadIdentityX509RevocationDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -115769,13 +127050,13 @@ func (m *WorkloadIdentityX509RevocationDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := 
m.AppMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 3: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -115802,13 +127083,13 @@ func (m *WorkloadIdentityX509RevocationDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Headers", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -115835,7 +127116,7 @@ func (m *WorkloadIdentityX509RevocationDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Headers.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -115861,7 +127142,7 @@ func (m *WorkloadIdentityX509RevocationDelete) Unmarshal(dAtA []byte) error { } return nil } -func (m *GitCommand) Unmarshal(dAtA []byte) error { +func (m *MCPSessionInvalidHTTPRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -115884,10 +127165,10 @@ func (m *GitCommand) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GitCommand: wiretype end group for non-group") + return fmt.Errorf("proto: MCPSessionInvalidHTTPRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GitCommand: illegal tag %d (wire type %d)", fieldNum, wire) + return 
fmt.Errorf("proto: MCPSessionInvalidHTTPRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -115958,7 +127239,7 @@ func (m *GitCommand) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -115985,13 +127266,13 @@ func (m *GitCommand) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SessionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AppMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -116018,15 +127299,15 @@ func (m *GitCommand) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.SessionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.AppMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServerMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -116036,30 +127317,29 @@ func (m *GitCommand) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + 
msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ServerMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Path = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CommandMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Method", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -116069,28 +127349,27 @@ func (m *GitCommand) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.CommandMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Method = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 8: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Service", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RawQuery", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -116118,13 +127397,13 @@ func (m *GitCommand) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Service = string(dAtA[iNdEx:postIndex]) + m.RawQuery = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 9: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Body", wireType) } - var stringLen uint64 + var byteLen int for shift := uint(0); ; shift += 7 { 
if shift >= 64 { return ErrIntOverflowEvents @@ -116134,27 +127413,29 @@ func (m *GitCommand) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + byteLen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if byteLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + byteLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Path = string(dAtA[iNdEx:postIndex]) + m.Body = append(m.Body[:0], dAtA[iNdEx:postIndex]...) + if m.Body == nil { + m.Body = []byte{} + } iNdEx = postIndex - case 10: + case 9: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Actions", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Headers", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -116181,8 +127462,7 @@ func (m *GitCommand) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Actions = append(m.Actions, &GitCommandAction{}) - if err := m.Actions[len(m.Actions)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Headers.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -116208,7 +127488,7 @@ func (m *GitCommand) Unmarshal(dAtA []byte) error { } return nil } -func (m *GitCommandAction) Unmarshal(dAtA []byte) error { +func (m *BoundKeypairRecovery) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -116231,17 +127511,17 @@ func (m *GitCommandAction) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: GitCommandAction: wiretype end group for non-group") + return fmt.Errorf("proto: BoundKeypairRecovery: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: GitCommandAction: illegal tag %d (wire type %d)", fieldNum, wire) 
+ return fmt.Errorf("proto: BoundKeypairRecovery: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Action", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -116251,29 +127531,30 @@ func (m *GitCommandAction) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Action = string(dAtA[iNdEx:postIndex]) + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Reference", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -116283,61 +127564,30 @@ func (m *GitCommandAction) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.Reference = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field 
Old", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - m.Old = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field New", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -116347,110 +127597,28 @@ func (m *GitCommandAction) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - m.New = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *AccessListInvalidMetadata) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: AccessListInvalidMetadata: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: AccessListInvalidMetadata: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessListName", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.AccessListName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field User", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TokenName", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -116478,11 +127646,11 @@ func (m *AccessListInvalidMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.User = string(dAtA[iNdEx:postIndex]) + m.TokenName = string(dAtA[iNdEx:postIndex]) iNdEx = 
postIndex - case 3: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MissingRoles", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field BotName", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -116510,64 +127678,13 @@ func (m *AccessListInvalidMetadata) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.MissingRoles = append(m.MissingRoles, string(dAtA[iNdEx:postIndex])) + m.BotName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthEvents - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *UserLoginAccessListInvalid) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: UserLoginAccessListInvalid: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: UserLoginAccessListInvalid: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PublicKey", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return 
ErrIntOverflowEvents @@ -116577,30 +127694,29 @@ func (m *UserLoginAccessListInvalid) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.PublicKey = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccessListInvalidMetadata", wireType) + case 7: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field RecoveryCount", wireType) } - var msglen int + m.RecoveryCount = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -116610,30 +127726,16 @@ func (m *UserLoginAccessListInvalid) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.RecoveryCount |= uint32(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.AccessListInvalidMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 3: + case 8: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RecoveryMode", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -116643,24 +127745,23 @@ func (m *UserLoginAccessListInvalid) Unmarshal(dAtA []byte) error { } b := 
dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.RecoveryMode = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -116684,7 +127785,7 @@ func (m *UserLoginAccessListInvalid) Unmarshal(dAtA []byte) error { } return nil } -func (m *StableUNIXUserCreate) Unmarshal(dAtA []byte) error { +func (m *BoundKeypairRotation) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -116707,10 +127808,10 @@ func (m *StableUNIXUserCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: StableUNIXUserCreate: wiretype end group for non-group") + return fmt.Errorf("proto: BoundKeypairRotation: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: StableUNIXUserCreate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: BoundKeypairRotation: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -116748,7 +127849,7 @@ func (m *StableUNIXUserCreate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -116775,13 +127876,13 @@ func (m *StableUNIXUserCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := 
m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field StableUnixUser", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -116808,67 +127909,77 @@ func (m *StableUNIXUserCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.StableUnixUser == nil { - m.StableUnixUser = &StableUNIXUser{} - } - if err := m.StableUnixUser.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipEvents(dAtA[iNdEx:]) - if err != nil { - return err + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field TokenName", wireType) } - if (skippy < 0) || (iNdEx+skippy) < 0 { + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - if (iNdEx + skippy) > l { + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { return io.ErrUnexpectedEOF } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *StableUNIXUser) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents + m.TokenName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field BotName", wireType) } - if iNdEx >= l { - return io.ErrUnexpectedEOF + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: StableUNIXUser: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: StableUNIXUser: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.BotName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Username", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PreviousPublicKey", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -116896,13 +128007,13 @@ func (m *StableUNIXUser) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Username = string(dAtA[iNdEx:postIndex]) + m.PreviousPublicKey = 
string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Uid", wireType) + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field NewPublicKey", wireType) } - m.Uid = 0 + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -116912,11 +128023,24 @@ func (m *StableUNIXUser) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.Uid |= int32(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.NewPublicKey = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -116939,7 +128063,7 @@ func (m *StableUNIXUser) Unmarshal(dAtA []byte) error { } return nil } -func (m *AWSICResourceSync) Unmarshal(dAtA []byte) error { +func (m *BoundKeypairJoinStateVerificationFailed) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -116962,10 +128086,10 @@ func (m *AWSICResourceSync) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AWSICResourceSync: wiretype end group for non-group") + return fmt.Errorf("proto: BoundKeypairJoinStateVerificationFailed: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AWSICResourceSync: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: BoundKeypairJoinStateVerificationFailed: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -117002,10 +128126,10 @@ func (m *AWSICResourceSync) Unmarshal(dAtA []byte) error { } iNdEx = postIndex case 2: - if wireType != 0 { - 
return fmt.Errorf("proto: wrong wireType = %d for field TotalAccounts", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } - m.TotalAccounts = 0 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -117015,16 +128139,30 @@ func (m *AWSICResourceSync) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.TotalAccounts |= int32(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex case 3: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field TotalAccountAssignments", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } - m.TotalAccountAssignments = 0 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -117034,16 +128172,30 @@ func (m *AWSICResourceSync) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.TotalAccountAssignments |= int32(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex case 4: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field TotalUserGroups", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field TokenName", wireType) } - m.TotalUserGroups = 0 + var stringLen uint64 for shift := 
uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -117053,35 +128205,29 @@ func (m *AWSICResourceSync) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.TotalUserGroups |= int32(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - case 5: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field TotalPermissionSets", wireType) + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents } - m.TotalPermissionSets = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.TotalPermissionSets |= int32(b&0x7F) << shift - if b < 0x80 { - break - } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents } - case 6: + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.TokenName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field BotName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -117091,24 +128237,23 @@ func (m *AWSICResourceSync) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.BotName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -117132,7 +128277,7 @@ func (m 
*AWSICResourceSync) Unmarshal(dAtA []byte) error { } return nil } -func (m *HealthCheckConfigCreate) Unmarshal(dAtA []byte) error { +func (m *SCIMRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -117155,17 +128300,17 @@ func (m *HealthCheckConfigCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: HealthCheckConfigCreate: wiretype end group for non-group") + return fmt.Errorf("proto: SCIMRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: HealthCheckConfigCreate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SCIMRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -117175,30 +128320,29 @@ func (m *HealthCheckConfigCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.ID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SourceAddress", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 
7 { if shift >= 64 { return ErrIntOverflowEvents @@ -117208,30 +128352,29 @@ func (m *HealthCheckConfigCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.SourceAddress = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserAgent", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -117241,28 +128384,91 @@ func (m *HealthCheckConfigCreate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.UserAgent = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Method", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return 
io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Method = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Path = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Body", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -117289,7 +128495,10 @@ func (m *HealthCheckConfigCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.Body == nil { + m.Body = &Struct{} + } + if err := m.Body.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -117315,7 +128524,7 @@ func (m *HealthCheckConfigCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *HealthCheckConfigUpdate) Unmarshal(dAtA []byte) error { +func (m *SCIMCommonData) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -117338,15 +128547,15 @@ func (m *HealthCheckConfigUpdate) Unmarshal(dAtA []byte) error { fieldNum 
:= int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: HealthCheckConfigUpdate: wiretype end group for non-group") + return fmt.Errorf("proto: SCIMCommonData: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: HealthCheckConfigUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SCIMCommonData: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { - case 1: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Request", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -117373,48 +128582,18 @@ func (m *HealthCheckConfigUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowEvents - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthEvents - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents - } - if postIndex > l { - return io.ErrUnexpectedEOF + if m.Request == nil { + m.Request = &SCIMRequest{} } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Request.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 3: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Integration", wireType) } - var msglen int + var 
stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -117424,30 +128603,29 @@ func (m *HealthCheckConfigUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Integration = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 5: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceType", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -117457,24 +128635,23 @@ func (m *HealthCheckConfigUpdate) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.ResourceType = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -117498,7 +128675,7 @@ func (m *HealthCheckConfigUpdate) Unmarshal(dAtA []byte) error { } return nil } -func (m *HealthCheckConfigDelete) Unmarshal(dAtA []byte) error { +func (m *SCIMListingEvent) Unmarshal(dAtA []byte) error { l := 
len(dAtA) iNdEx := 0 for iNdEx < l { @@ -117521,10 +128698,10 @@ func (m *HealthCheckConfigDelete) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: HealthCheckConfigDelete: wiretype end group for non-group") + return fmt.Errorf("proto: SCIMListingEvent: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: HealthCheckConfigDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SCIMListingEvent: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -117556,13 +128733,46 @@ func (m *HealthCheckConfigDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SCIMCommonData", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -117589,15 +128799,15 @@ func (m *HealthCheckConfigDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return 
io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SCIMCommonData.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 3: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Filter", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -117607,30 +128817,29 @@ func (m *HealthCheckConfigDelete) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Filter = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + case 5: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Count", wireType) } - var msglen int + m.Count = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -117640,25 +128849,49 @@ func (m *HealthCheckConfigDelete) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.Count |= int32(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthEvents + case 6: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field StartIndex", wireType) } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthEvents + m.StartIndex = 0 + for 
shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.StartIndex |= int32(b&0x7F) << shift + if b < 0x80 { + break + } } - if postIndex > l { - return io.ErrUnexpectedEOF + case 7: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field ResourceCount", wireType) } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + m.ResourceCount = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.ResourceCount |= uint32(b&0x7F) << shift + if b < 0x80 { + break + } } - iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -117681,7 +128914,7 @@ func (m *HealthCheckConfigDelete) Unmarshal(dAtA []byte) error { } return nil } -func (m *WorkloadIdentityX509IssuerOverrideCreate) Unmarshal(dAtA []byte) error { +func (m *SCIMResourceEvent) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -117704,10 +128937,10 @@ func (m *WorkloadIdentityX509IssuerOverrideCreate) Unmarshal(dAtA []byte) error fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: WorkloadIdentityX509IssuerOverrideCreate: wiretype end group for non-group") + return fmt.Errorf("proto: SCIMResourceEvent: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: WorkloadIdentityX509IssuerOverrideCreate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SCIMResourceEvent: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -117745,7 +128978,7 @@ func (m *WorkloadIdentityX509IssuerOverrideCreate) Unmarshal(dAtA []byte) error iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", 
wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -117772,13 +129005,13 @@ func (m *WorkloadIdentityX509IssuerOverrideCreate) Unmarshal(dAtA []byte) error if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SCIMCommonData", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -117805,15 +129038,15 @@ func (m *WorkloadIdentityX509IssuerOverrideCreate) Unmarshal(dAtA []byte) error if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SCIMCommonData.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TeleportID", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowEvents @@ -117823,24 +129056,87 @@ func (m *WorkloadIdentityX509IssuerOverrideCreate) Unmarshal(dAtA []byte) error } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthEvents } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthEvents } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + m.TeleportID = 
string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ExternalID", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ExternalID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Display", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Display = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -117864,7 +129160,7 @@ func (m *WorkloadIdentityX509IssuerOverrideCreate) Unmarshal(dAtA []byte) error } return nil } -func (m *WorkloadIdentityX509IssuerOverrideDelete) Unmarshal(dAtA []byte) error { +func (m *ClientIPRestrictionsUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -117887,10 +129183,10 @@ func (m *WorkloadIdentityX509IssuerOverrideDelete) Unmarshal(dAtA []byte) error fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: 
WorkloadIdentityX509IssuerOverrideDelete: wiretype end group for non-group") + return fmt.Errorf("proto: ClientIPRestrictionsUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: WorkloadIdentityX509IssuerOverrideDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ClientIPRestrictionsUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -118025,6 +129321,71 @@ func (m *WorkloadIdentityX509IssuerOverrideDelete) Unmarshal(dAtA []byte) error return err } iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ClientIPRestrictions", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowEvents + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthEvents + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthEvents + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ClientIPRestrictions = append(m.ClientIPRestrictions, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex default: iNdEx 
= preIndex skippy, err := skipEvents(dAtA[iNdEx:]) @@ -118047,7 +129408,7 @@ func (m *WorkloadIdentityX509IssuerOverrideDelete) Unmarshal(dAtA []byte) error } return nil } -func (m *SigstorePolicyCreate) Unmarshal(dAtA []byte) error { +func (m *VnetConfigCreate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -118070,10 +129431,10 @@ func (m *SigstorePolicyCreate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SigstorePolicyCreate: wiretype end group for non-group") + return fmt.Errorf("proto: VnetConfigCreate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SigstorePolicyCreate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: VnetConfigCreate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -118111,7 +129472,7 @@ func (m *SigstorePolicyCreate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -118138,13 +129499,13 @@ func (m *SigstorePolicyCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -118171,13 +129532,13 @@ func (m *SigstorePolicyCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := 
m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -118204,7 +129565,7 @@ func (m *SigstorePolicyCreate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -118230,7 +129591,7 @@ func (m *SigstorePolicyCreate) Unmarshal(dAtA []byte) error { } return nil } -func (m *SigstorePolicyUpdate) Unmarshal(dAtA []byte) error { +func (m *VnetConfigUpdate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -118253,10 +129614,10 @@ func (m *SigstorePolicyUpdate) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SigstorePolicyUpdate: wiretype end group for non-group") + return fmt.Errorf("proto: VnetConfigUpdate: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SigstorePolicyUpdate: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: VnetConfigUpdate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -118294,7 +129655,7 @@ func (m *SigstorePolicyUpdate) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -118321,13 +129682,13 @@ func (m 
*SigstorePolicyUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -118354,13 +129715,13 @@ func (m *SigstorePolicyUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -118387,7 +129748,7 @@ func (m *SigstorePolicyUpdate) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -118413,7 +129774,7 @@ func (m *SigstorePolicyUpdate) Unmarshal(dAtA []byte) error { } return nil } -func (m *SigstorePolicyDelete) Unmarshal(dAtA []byte) error { +func (m *VnetConfigDelete) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -118436,10 +129797,10 @@ func (m *SigstorePolicyDelete) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SigstorePolicyDelete: wiretype end group for non-group") + return fmt.Errorf("proto: VnetConfigDelete: 
wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SigstorePolicyDelete: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: VnetConfigDelete: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -118477,7 +129838,7 @@ func (m *SigstorePolicyDelete) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -118504,13 +129865,13 @@ func (m *SigstorePolicyDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field UserMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -118537,13 +129898,13 @@ func (m *SigstorePolicyDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.UserMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceMetadata", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ConnectionMetadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -118570,7 +129931,7 @@ func (m *SigstorePolicyDelete) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := 
m.ConnectionMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex diff --git a/api/types/events/events_test.go b/api/types/events/events_test.go index 74abf79895739..e0032ca295149 100644 --- a/api/types/events/events_test.go +++ b/api/types/events/events_test.go @@ -17,6 +17,7 @@ limitations under the License. package events import ( + "bytes" "strings" "testing" @@ -24,8 +25,12 @@ import ( "github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp/cmpopts" "github.com/stretchr/testify/require" + + "github.com/gravitational/teleport/api/types/wrappers" ) +// TestTrimToMaxSize tests TrimToMaxSize implementation of several events. +// It also tests trimEventToMaxSize used by these events. func TestTrimToMaxSize(t *testing.T) { type messageSizeTrimmer interface { TrimToMaxSize(int) AuditEvent @@ -46,7 +51,7 @@ func TestTrimToMaxSize(t *testing.T) { DatabaseQuery: strings.Repeat("A", 7000), }, want: &DatabaseSessionQuery{ - DatabaseQuery: strings.Repeat("A", 5375), + DatabaseQuery: strings.Repeat("A", 5373), }, }, { @@ -60,7 +65,7 @@ func TestTrimToMaxSize(t *testing.T) { }, }, want: &DatabaseSessionQuery{ - DatabaseQuery: strings.Repeat("A", 590), + DatabaseQuery: strings.Repeat("A", 589), DatabaseQueryParameters: []string{ strings.Repeat("A", 89), strings.Repeat("A", 89), @@ -100,7 +105,7 @@ func TestTrimToMaxSize(t *testing.T) { DatabaseQuery: `{` + strings.Repeat(`"a": "b",`, 100) + "}", }, want: &DatabaseSessionQuery{ - DatabaseQuery: `{"a": "b","a":`, + DatabaseQuery: `{"a": "b","a"`, }, }, // UserLogin @@ -115,8 +120,8 @@ func TestTrimToMaxSize(t *testing.T) { }, want: &UserLogin{ Status: Status{ - Error: strings.Repeat("A", 1336), - UserMessage: strings.Repeat("A", 1336), + Error: strings.Repeat("A", 1335), + UserMessage: strings.Repeat("A", 1335), }, }, cmpOpts: []cmp.Option{ @@ -125,6 +130,35 @@ func TestTrimToMaxSize(t *testing.T) { cmpopts.IgnoreFields(UserLogin{}, "IdentityAttributes"), }, }, + { + name: 
"MCPSessionInvalidHTTPRequest trimmed", + maxSize: 200, + in: &MCPSessionInvalidHTTPRequest{ + // Metadata not being trimmed. + Metadata: Metadata{ + Code: "TMCP006E", + Type: "mcp.session.invalid_http_request", + }, + Path: strings.Repeat("/path", 10), + Body: bytes.Repeat([]byte("body"), 10), + Headers: wrappers.Traits{ + "A": {strings.Repeat("a", 20)}, + "B": {strings.Repeat("b", 20)}, + }, + }, + want: &MCPSessionInvalidHTTPRequest{ + Metadata: Metadata{ + Code: "TMCP006E", + Type: "mcp.session.invalid_http_request", + }, + Path: "/path/path/path/", + Body: []byte("bodybodybodybody"), + Headers: wrappers.Traits{ + "A": {strings.Repeat("a", 16)}, + "B": {strings.Repeat("b", 16)}, + }, + }, + }, } for _, tc := range testCases { @@ -195,8 +229,56 @@ func TestStructTrimToMaxSize(t *testing.T) { for _, tc := range testCases { t.Run(tc.name, func(t *testing.T) { - got := tc.in.trimToMaxSize(tc.maxSize) + got := tc.in.trimToMaxFieldSize(tc.maxSize) require.Equal(t, tc.want, got) }) } } + +func TestTrimMCPJSONRPCMessage(t *testing.T) { + m := MCPJSONRPCMessage{ + JSONRPC: "2.0", + ID: "some-id", + Method: "tools/call", + Params: &Struct{ + Struct: types.Struct{ + Fields: map[string]*types.Value{ + strings.Repeat("A", 100): { + Kind: &types.Value_StringValue{ + StringValue: "A", + }, + }, + }, + }, + }, + } + + orgSize := m.Size() + t.Run("not trimmed", func(t *testing.T) { + notTrimmed := m.trimToMaxFieldSize(10000) + require.Equal(t, orgSize, m.Size()) + require.Equal(t, notTrimmed, m) + }) + + t.Run("trimmed", func(t *testing.T) { + trimmed := m.trimToMaxFieldSize(maxSizePerField(50, m.nonEmptyStrs())) + require.Equal(t, orgSize, m.Size()) + require.Less(t, trimmed.Size(), 50) + require.Equal(t, MCPJSONRPCMessage{ + JSONRPC: "2.0", + ID: "some-id", + Method: "tools/ca", + Params: &Struct{ + Struct: types.Struct{ + Fields: map[string]*types.Value{ + strings.Repeat("A", 8): { + Kind: &types.Value_StringValue{ + StringValue: "A", + }, + }, + }, + }, + }, + }, trimmed) + 
}) +} diff --git a/api/types/events/oneof.go b/api/types/events/oneof.go index 094f4d02ac3f2..3cd88b23673d6 100644 --- a/api/types/events/oneof.go +++ b/api/types/events/oneof.go @@ -886,6 +886,60 @@ func ToOneOf(in AuditEvent) (*OneOf, error) { out.Event = &OneOf_SigstorePolicyDelete{ SigstorePolicyDelete: e, } + case *MCPSessionStart: + out.Event = &OneOf_MCPSessionStart{ + MCPSessionStart: e, + } + case *MCPSessionEnd: + out.Event = &OneOf_MCPSessionEnd{ + MCPSessionEnd: e, + } + case *MCPSessionRequest: + out.Event = &OneOf_MCPSessionRequest{ + MCPSessionRequest: e, + } + case *MCPSessionNotification: + out.Event = &OneOf_MCPSessionNotification{ + MCPSessionNotification: e, + } + case *MCPSessionListenSSEStream: + out.Event = &OneOf_MCPSessionListenSSEStream{ + MCPSessionListenSSEStream: e, + } + case *MCPSessionInvalidHTTPRequest: + out.Event = &OneOf_MCPSessionInvalidHTTPRequest{ + MCPSessionInvalidHTTPRequest: e, + } + case *BoundKeypairRecovery: + out.Event = &OneOf_BoundKeypairRecovery{ + BoundKeypairRecovery: e, + } + case *BoundKeypairRotation: + out.Event = &OneOf_BoundKeypairRotation{ + BoundKeypairRotation: e, + } + case *BoundKeypairJoinStateVerificationFailed: + out.Event = &OneOf_BoundKeypairJoinStateVerificationFailed{ + BoundKeypairJoinStateVerificationFailed: e, + } + case *SCIMListingEvent: + out.Event = &OneOf_SCIMListingEvent{SCIMListingEvent: e} + case *SCIMResourceEvent: + out.Event = &OneOf_SCIMResourceEvent{SCIMResourceEvent: e} + case *ClientIPRestrictionsUpdate: + out.Event = &OneOf_ClientIPRestrictionsUpdate{ClientIPRestrictionsUpdate: e} + case *VnetConfigCreate: + out.Event = &OneOf_VnetConfigCreate{ + VnetConfigCreate: e, + } + case *VnetConfigUpdate: + out.Event = &OneOf_VnetConfigUpdate{ + VnetConfigUpdate: e, + } + case *VnetConfigDelete: + out.Event = &OneOf_VnetConfigDelete{ + VnetConfigDelete: e, + } default: slog.ErrorContext(context.Background(), "Attempted to convert dynamic event of unknown type into protobuf event.", 
"event_type", in.GetType()) unknown := &Unknown{} diff --git a/api/types/events/struct.go b/api/types/events/struct.go index 71c73d5454e73..6670b3eee6c1a 100644 --- a/api/types/events/struct.go +++ b/api/types/events/struct.go @@ -20,7 +20,7 @@ import ( "bytes" "encoding/json" - "github.com/gogo/protobuf/jsonpb" + "github.com/gogo/protobuf/jsonpb" //nolint:depguard // needed for backwards compatibility "github.com/gogo/protobuf/types" "github.com/gravitational/trace" ) diff --git a/api/types/events/trimmer.go b/api/types/events/trimmer.go new file mode 100644 index 0000000000000..98fb346087fa9 --- /dev/null +++ b/api/types/events/trimmer.go @@ -0,0 +1,153 @@ +/* + * Teleport + * Copyright (C) 2025 Gravitational, Inc. + * + * This program is free software: you can redistribute it and/or modify + * it under the terms of the GNU Affero General Public License as published by + * the Free Software Foundation, either version 3 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU Affero General Public License for more details. + * + * You should have received a copy of the GNU Affero General Public License + * along with this program. If not, see . + */ + +package events + +import ( + "google.golang.org/protobuf/protoadapt" + + "github.com/gravitational/teleport/api/types/wrappers" + "github.com/gravitational/teleport/api/utils" +) + +// fieldTrimer handles trimming a field from an event. fieldTrimmer usually +// holds both the field from the source event (for calculation) and the pointer +// to the field from the target copy (for trimming updates). +type fieldTrimmer interface { + // emptyTarget empties the target field. + emptyTarget() + // nonEmptyStrs calculates non-empty strings from this field. 
+ nonEmptyStrs() int + // trimToMaxFieldSize trims and updates the target field. + trimToMaxFieldSize(maxFieldSize int) +} + +type trimmableEvent interface { + AuditEvent + protoadapt.MessageV1 +} + +// trimEventToMaxSize handles trimming of an event. +// See MCPSessionInvalidHTTPRequest.TrimToMaxSize for example. +func trimEventToMaxSize[T trimmableEvent](m T, maxSize int, makeTrimmer func(m T, out T) fieldTrimmer) AuditEvent { + size := m.Size() + if size <= maxSize { + return m + } + + out := utils.CloneProtoMsg(m) + trimmer := makeTrimmer(m, out) + trimmer.emptyTarget() // Empty before adjusting max size + maxSize = adjustedMaxSize(out, maxSize) + maxFieldSize := maxSizePerField(maxSize, trimmer.nonEmptyStrs()) + trimmer.trimToMaxFieldSize(maxFieldSize) + return out +} + +// fieldTrimmers is a slice of fieldTrimmer that implements fieldTrimmer. +type fieldTrimmers []fieldTrimmer + +func (t fieldTrimmers) emptyTarget() { + for _, e := range t { + e.emptyTarget() + } +} +func (t fieldTrimmers) nonEmptyStrs() (sum int) { + for _, e := range t { + sum += e.nonEmptyStrs() + } + return sum +} +func (t fieldTrimmers) trimToMaxFieldSize(maxFieldSize int) { + for _, e := range t { + e.trimToMaxFieldSize(maxFieldSize) + } +} + +type baseTrimmer struct { + emptyTargetFunc func() + nonEmptyStrsFunc func() int + trimToMaxFieldSizeFunc func(int) +} + +func newBaseTrimmer[Source any, Target any]( + source Source, + target *Target, + nonEmptyStrs func(Source) int, + trimToMaxFieldSize func(Source, int) Target, +) fieldTrimmer { + return &baseTrimmer{ + emptyTargetFunc: func() { + var empty Target + *target = empty + }, + nonEmptyStrsFunc: func() int { + return nonEmptyStrs(source) + }, + trimToMaxFieldSizeFunc: func(maxFieldSize int) { + *target = trimToMaxFieldSize(source, maxFieldSize) + }, + } +} + +func (t *baseTrimmer) emptyTarget() { t.emptyTargetFunc() } +func (t *baseTrimmer) nonEmptyStrs() int { return t.nonEmptyStrsFunc() } +func (t *baseTrimmer) 
trimToMaxFieldSize(maxFieldSize int) { t.trimToMaxFieldSizeFunc(maxFieldSize) } + +func newStrTrimmer(source string, target *string) fieldTrimmer { + return newBaseTrimmer(source, target, nonEmptyStr, trimStr) +} + +func newStrSliceTrimmer(source []string, target *[]string) fieldTrimmer { + return newBaseTrimmer( + source, + target, + func(source []string) int { + return nonEmptyStrsInSlice(source) + }, + trimStrSlice, + ) +} + +func newBytesTrimmer(source []byte, target *[]byte) fieldTrimmer { + return newBaseTrimmer(source, target, nonEmptyStr, trimStr) +} + +func newTraitsTrimmer(source wrappers.Traits, target *wrappers.Traits) fieldTrimmer { + return newBaseTrimmer(source, target, nonEmptyTraits, trimTraits) +} + +// trimmableField defines an interface for any struct (that is a field of an +// event) that can be trimmed. +type trimmableField[T any] interface { + nonEmptyStrs() int + trimToMaxFieldSize(maxFieldSize int) T +} + +func newGenericTrimmer[T any, Trimmable trimmableField[T]](source Trimmable, target *T) fieldTrimmer { + return newBaseTrimmer( + source, + target, + func(source Trimmable) int { + return source.nonEmptyStrs() + }, + func(source Trimmable, maxFieldSize int) T { + return source.trimToMaxFieldSize(maxFieldSize) + }, + ) +} diff --git a/api/types/events/trimmer_test.go b/api/types/events/trimmer_test.go new file mode 100644 index 0000000000000..7bd3f935dd07e --- /dev/null +++ b/api/types/events/trimmer_test.go @@ -0,0 +1,118 @@ +/* + * Teleport + * Copyright (C) 2025 Gravitational, Inc. + * + * This program is free software: you can redistribute it and/or modify + * it under the terms of the GNU Affero General Public License as published by + * the Free Software Foundation, either version 3 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the + * GNU Affero General Public License for more details. + * + * You should have received a copy of the GNU Affero General Public License + * along with this program. If not, see <https://www.gnu.org/licenses/>. + */ + +package events + +import ( + "bytes" + "strings" + "testing" + + "github.com/stretchr/testify/require" + + "github.com/gravitational/teleport/api/types/wrappers" +) + +const trimmerTestMaxFieldSize = 10 + +type trimmerTestSuite[T any] struct { + source T + target T + makeTrimmer func(T, *T) fieldTrimmer + + expectNonEmptyStrs int + expectTrimmedValue T +} + +func (s *trimmerTestSuite[T]) Run(t *testing.T) { + require.NotEmpty(t, s.target) + + trimmer := s.makeTrimmer(s.source, &s.target) + t.Run("emptyTarget", func(t *testing.T) { + trimmer.emptyTarget() + require.Empty(t, s.target) + }) + + t.Run("nonEmptyStrs", func(t *testing.T) { + require.Equal(t, s.expectNonEmptyStrs, trimmer.nonEmptyStrs()) + }) + + t.Run("trimToMaxFieldSize not trimmed", func(t *testing.T) { + trimmer.trimToMaxFieldSize(10000) + require.Equal(t, s.source, s.target) + }) + + t.Run("trimToMaxFieldSize trimmed", func(t *testing.T) { + trimmer.trimToMaxFieldSize(trimmerTestMaxFieldSize) + require.Equal(t, s.expectTrimmedValue, s.target) + }) +} + +func Test_newStrTrimmer(t *testing.T) { + s := trimmerTestSuite[string]{ + source: strings.Repeat("source", 10), + target: "some-initial-value", + makeTrimmer: newStrTrimmer, + expectNonEmptyStrs: 1, + // trimStr reserves two bytes for quotes so 10-2=8 + expectTrimmedValue: "sourceso", + } + s.Run(t) +} + +func Test_newBytesTrimmer(t *testing.T) { + s := trimmerTestSuite[[]byte]{ + source: bytes.Repeat([]byte("source"), 10), + target: []byte("some-initial-value"), + makeTrimmer: newBytesTrimmer, + expectNonEmptyStrs: 1, + // trimStr reserves two bytes for quotes so 10-2=8 + expectTrimmedValue: []byte("sourceso"), + } + s.Run(t) +} + +func Test_newStrSliceTrimmer(t *testing.T) { + s := trimmerTestSuite[[]string]{ + source: []string{strings.Repeat("a",
100), strings.Repeat("b", 100)}, + target: []string{"some-initial-value"}, + makeTrimmer: newStrSliceTrimmer, + expectNonEmptyStrs: 2, + expectTrimmedValue: []string{strings.Repeat("a", 8), strings.Repeat("b", 8)}, + } + s.Run(t) +} + +func Test_newTraitsTrimmer(t *testing.T) { + s := trimmerTestSuite[wrappers.Traits]{ + source: wrappers.Traits{ + "a": {strings.Repeat("a", 100)}, + "bc": {strings.Repeat("b", 100), strings.Repeat("c", 100)}, + }, + target: wrappers.Traits{ + "some": {"initial", "value"}, + }, + makeTrimmer: newTraitsTrimmer, + expectNonEmptyStrs: 5, // count keys and values + expectTrimmedValue: wrappers.Traits{ + "a": {strings.Repeat("a", 8)}, + "bc": {strings.Repeat("b", 8), strings.Repeat("c", 8)}, + }, + } + s.Run(t) +} diff --git a/api/types/externalauditstorage/convert/v1/externalauditstorage.go b/api/types/externalauditstorage/convert/v1/externalauditstorage.go index c97e21a48fa2e..54d1fbd66e318 100644 --- a/api/types/externalauditstorage/convert/v1/externalauditstorage.go +++ b/api/types/externalauditstorage/convert/v1/externalauditstorage.go @@ -31,20 +31,20 @@ func FromProtoDraft(in *externalauditstoragev1.ExternalAuditStorage) (*externala return nil, trace.BadParameter("External Audit Storage message is nil") } - if in.Spec == nil { + if in.GetSpec() == nil { return nil, trace.BadParameter("spec is missing") } - externalAuditStorage, err := externalauditstorage.NewDraftExternalAuditStorage(headerv1.FromMetadataProto(in.Header.Metadata), externalauditstorage.ExternalAuditStorageSpec{ - IntegrationName: in.Spec.IntegrationName, - PolicyName: in.Spec.PolicyName, - Region: in.Spec.Region, - SessionRecordingsURI: in.Spec.SessionRecordingsUri, - AthenaWorkgroup: in.Spec.AthenaWorkgroup, - GlueDatabase: in.Spec.GlueDatabase, - GlueTable: in.Spec.GlueTable, - AuditEventsLongTermURI: in.Spec.AuditEventsLongTermUri, - AthenaResultsURI: in.Spec.AthenaResultsUri, + externalAuditStorage, err := 
externalauditstorage.NewDraftExternalAuditStorage(headerv1.FromMetadataProto(in.GetHeader().GetMetadata()), externalauditstorage.ExternalAuditStorageSpec{ + IntegrationName: in.GetSpec().GetIntegrationName(), + PolicyName: in.GetSpec().GetPolicyName(), + Region: in.GetSpec().GetRegion(), + SessionRecordingsURI: in.GetSpec().GetSessionRecordingsUri(), + AthenaWorkgroup: in.GetSpec().GetAthenaWorkgroup(), + GlueDatabase: in.GetSpec().GetGlueDatabase(), + GlueTable: in.GetSpec().GetGlueTable(), + AuditEventsLongTermURI: in.GetSpec().GetAuditEventsLongTermUri(), + AthenaResultsURI: in.GetSpec().GetAthenaResultsUri(), }) if err != nil { return nil, trace.Wrap(err) @@ -59,20 +59,20 @@ func FromProtoCluster(in *externalauditstoragev1.ExternalAuditStorage) (*externa return nil, trace.BadParameter("External Audit Storage message is nil") } - if in.Spec == nil { + if in.GetSpec() == nil { return nil, trace.BadParameter("spec is missing") } - externalAuditStorage, err := externalauditstorage.NewClusterExternalAuditStorage(headerv1.FromMetadataProto(in.Header.Metadata), externalauditstorage.ExternalAuditStorageSpec{ - IntegrationName: in.Spec.IntegrationName, - PolicyName: in.Spec.PolicyName, - Region: in.Spec.Region, - SessionRecordingsURI: in.Spec.SessionRecordingsUri, - AthenaWorkgroup: in.Spec.AthenaWorkgroup, - GlueDatabase: in.Spec.GlueDatabase, - GlueTable: in.Spec.GlueTable, - AuditEventsLongTermURI: in.Spec.AuditEventsLongTermUri, - AthenaResultsURI: in.Spec.AthenaResultsUri, + externalAuditStorage, err := externalauditstorage.NewClusterExternalAuditStorage(headerv1.FromMetadataProto(in.GetHeader().GetMetadata()), externalauditstorage.ExternalAuditStorageSpec{ + IntegrationName: in.GetSpec().GetIntegrationName(), + PolicyName: in.GetSpec().GetPolicyName(), + Region: in.GetSpec().GetRegion(), + SessionRecordingsURI: in.GetSpec().GetSessionRecordingsUri(), + AthenaWorkgroup: in.GetSpec().GetAthenaWorkgroup(), + GlueDatabase: in.GetSpec().GetGlueDatabase(), + GlueTable: 
in.GetSpec().GetGlueTable(), + AuditEventsLongTermURI: in.GetSpec().GetAuditEventsLongTermUri(), + AthenaResultsURI: in.GetSpec().GetAthenaResultsUri(), }) if err != nil { return nil, trace.Wrap(err) diff --git a/api/types/github.go b/api/types/github.go index 1a5f3ebb9eae6..5659d0648cb32 100644 --- a/api/types/github.go +++ b/api/types/github.go @@ -73,6 +73,12 @@ type GithubConnector interface { GetAPIEndpointURL() string // GetClientRedirectSettings returns the client redirect settings. GetClientRedirectSettings() *SSOClientRedirectSettings + // GetUserMatchers returns the set of glob patterns to narrow down which username(s) this auth connector should + // match for identifier-first login. + GetUserMatchers() []string + // SetUserMatchers sets the set of glob patterns to narrow down which username(s) this auth connector should match + // for identifier-first login. + SetUserMatchers([]string) } // NewGithubConnector creates a new Github connector from name and spec @@ -323,6 +329,21 @@ func (c *GithubConnectorV3) MapClaims(claims GithubClaims) ([]string, []string, return utils.Deduplicate(roles), utils.Deduplicate(kubeGroups), utils.Deduplicate(kubeUsers) } +// GetUserMatchers returns the set of glob patterns to narrow down which username(s) this auth connector should +// match for identifier-first login. +func (r *GithubConnectorV3) GetUserMatchers() []string { + if r.Spec.UserMatchers == nil { + return nil + } + return r.Spec.UserMatchers +} + +// SetUserMatchers sets the set of glob patterns to narrow down which username(s) this auth connector should match +// for identifier-first login. 
+func (r *GithubConnectorV3) SetUserMatchers(userMatchers []string) { + r.Spec.UserMatchers = userMatchers +} + // SetExpiry sets expiry time for the object func (r *GithubAuthRequest) SetExpiry(expires time.Time) { r.Expires = &expires diff --git a/api/types/header/convert/v1/header.go b/api/types/header/convert/v1/header.go index d8493d848a17a..670a1138c08d2 100644 --- a/api/types/header/convert/v1/header.go +++ b/api/types/header/convert/v1/header.go @@ -48,18 +48,21 @@ func ToResourceHeaderProto(resourceHeader header.ResourceHeader) *headerv1.Resou // FromMetadataProto converts v1 metadata into an internal metadata object. func FromMetadataProto(msg *headerv1.Metadata) header.Metadata { + if msg == nil { + return header.Metadata{} + } // We map the zero protobuf time (nil) to the zero go time. var expires time.Time - if msg.Expires != nil { - expires = msg.Expires.AsTime() + if msg.GetExpires() != nil { + expires = msg.GetExpires().AsTime() } return header.Metadata{ - Name: msg.Name, - Description: msg.Description, - Labels: msg.Labels, + Name: msg.GetName(), + Description: msg.GetDescription(), + Labels: msg.GetLabels(), Expires: expires, - Revision: msg.Revision, + Revision: msg.GetRevision(), } } diff --git a/api/types/integration.go b/api/types/integration.go index e13ae146818d7..d9ca87362f138 100644 --- a/api/types/integration.go +++ b/api/types/integration.go @@ -20,6 +20,7 @@ import ( "encoding/json" "fmt" "net/url" + "slices" "github.com/gravitational/trace" "google.golang.org/protobuf/encoding/protojson" @@ -42,6 +43,14 @@ const ( IntegrationSubKindAWSRolesAnywhere = "aws-ra" ) +// integrationSubKindValues is a list of supported integration subkind values. +var integrationSubKindValues = []string{ + IntegrationSubKindAWSOIDC, + IntegrationSubKindAzureOIDC, + IntegrationSubKindAWSRolesAnywhere, + IntegrationSubKindGitHub, +} + const ( // IntegrationAWSOIDCAudienceUnspecified denotes an empty audience value. 
Empty audience value // is used to maintain default OIDC integration behavior and backward compatibility. @@ -50,6 +59,21 @@ const ( IntegrationAWSOIDCAudienceAWSIdentityCenter = "aws-identity-center" ) +// integrationAWSOIDCAudienceValues is a list of the supported AWS OIDC Audience +// values. If this list is updated, be sure to also update the audience field's +// godoc string in the [AWSOIDCIntegrationSpecV1] protobuf definition. +var integrationAWSOIDCAudienceValues = []string{ + IntegrationAWSOIDCAudienceUnspecified, + IntegrationAWSOIDCAudienceAWSIdentityCenter, +} + +const ( + // IntegrationAWSRolesAnywhereProfileSyncStatusSuccess indicates that the profile sync was successful. + IntegrationAWSRolesAnywhereProfileSyncStatusSuccess = "SUCCESS" + // IntegrationAWSRolesAnywhereProfileSyncStatusError indicates that the profile sync failed. + IntegrationAWSRolesAnywhereProfileSyncStatusError = "ERROR" +) + // Integration specifies a connection configuration between Teleport and a 3rd party system. type Integration interface { ResourceWithLabels @@ -69,6 +93,8 @@ type Integration interface { // GetAzureOIDCIntegrationSpec returns the `azure-oidc` spec fields. GetAzureOIDCIntegrationSpec() *AzureOIDCIntegrationSpecV1 + // SetAzureOIDCIntegrationSpec sets the `azure-oidc` spec fields. + SetAzureOIDCIntegrationSpec(*AzureOIDCIntegrationSpecV1) // GetGitHubIntegrationSpec returns the GitHub spec. GetGitHubIntegrationSpec() *GitHubIntegrationSpecV1 @@ -86,6 +112,12 @@ type Integration interface { GetCredentials() PluginCredentials // WithoutCredentials returns a copy without credentials. WithoutCredentials() Integration + + // GetStatus retrieves the integration status. + GetStatus() IntegrationStatusV1 + // SetStatus updates the integration status. + SetStatus(IntegrationStatusV1) + // Clone returns a copy of the integration.
Clone() Integration } @@ -285,12 +317,13 @@ func (s *IntegrationSpecV1_AWSOIDC) CheckAndSetDefaults() error { // ValidateAudience validates if the audience field is configured with // a supported audience value. func (s *IntegrationSpecV1_AWSOIDC) ValidateAudience() error { - switch s.AWSOIDC.Audience { - case IntegrationAWSOIDCAudienceUnspecified, IntegrationAWSOIDCAudienceAWSIdentityCenter: - return nil - default: - return trace.BadParameter("unsupported audience value %q", s.AWSOIDC.Audience) + if !slices.Contains(integrationAWSOIDCAudienceValues, s.AWSOIDC.Audience) { + return trace.BadParameter("unsupported audience value %q, supported values are %q", + s.AWSOIDC.Audience, + integrationAWSOIDCAudienceValues, + ) } + return nil } // Validate validates the configuration for Azure OIDC integration subkind. @@ -388,6 +421,13 @@ func (ig *IntegrationV1) GetAzureOIDCIntegrationSpec() *AzureOIDCIntegrationSpec return ig.Spec.GetAzureOIDC() } +// SetAzureOIDCIntegrationSpec sets the `azure-oidc` spec fields. +func (ig *IntegrationV1) SetAzureOIDCIntegrationSpec(spec *AzureOIDCIntegrationSpecV1) { + ig.Spec.SubKindSpec = &IntegrationSpecV1_AzureOIDC{ + AzureOIDC: spec, + } +} + // GetGitHubIntegrationSpec returns the GitHub spec. 
func (ig *IntegrationV1) GetGitHubIntegrationSpec() *GitHubIntegrationSpecV1 { return ig.Spec.GetGitHub() @@ -467,6 +507,7 @@ func (ig *IntegrationV1) UnmarshalJSON(data []byte) error { AWSRA json.RawMessage `json:"aws_ra"` Credentials json.RawMessage `json:"credentials"` } `json:"spec"` + Status IntegrationStatusV1 `json:"status,omitempty"` }{} err := json.Unmarshal(data, &d) @@ -475,6 +516,7 @@ func (ig *IntegrationV1) UnmarshalJSON(data []byte) error { } integration.ResourceHeader = d.ResourceHeader + integration.Status = d.Status if len(d.Spec.Credentials) != 0 { var credentials PluginCredentialsV1 if err := (protojson.UnmarshalOptions{DiscardUnknown: true}).Unmarshal(d.Spec.Credentials, protoadapt.MessageV2Of(&credentials)); err != nil { @@ -554,9 +596,11 @@ func (ig *IntegrationV1) MarshalJSON() ([]byte, error) { AWSRA AWSRAIntegrationSpecV1 `json:"aws_ra,omitempty"` Credentials json.RawMessage `json:"credentials,omitempty"` } `json:"spec"` + Status IntegrationStatusV1 `json:"status,omitempty"` }{} d.ResourceHeader = ig.ResourceHeader + d.Status = ig.Status if ig.Spec.Credentials != nil { data, err := protojson.Marshal(protoadapt.MessageV2Of(ig.Spec.Credentials)) if err != nil { @@ -589,13 +633,23 @@ func (ig *IntegrationV1) MarshalJSON() ([]byte, error) { } d.Spec.AWSRA = *ig.GetAWSRolesAnywhereIntegrationSpec() default: - return nil, trace.BadParameter("invalid subkind %q", ig.SubKind) + return nil, trace.BadParameter("invalid subkind %q, supported values are %q", ig.SubKind, integrationSubKindValues) } out, err := json.Marshal(d) return out, trace.Wrap(err) } +// SetStatus updates the integration status. +func (ig *IntegrationV1) SetStatus(status IntegrationStatusV1) { + ig.Status = status +} + +// GetStatus retrieves the integration status. +func (ig *IntegrationV1) GetStatus() IntegrationStatusV1 { + return ig.Status +} + // SetCredentials updates credentials. 
func (ig *IntegrationV1) SetCredentials(creds PluginCredentials) error { if creds == nil { diff --git a/api/types/integration_test.go b/api/types/integration_test.go index 5684a0038ad24..9b9b875751759 100644 --- a/api/types/integration_test.go +++ b/api/types/integration_test.go @@ -18,6 +18,7 @@ package types import ( "encoding/json" + "strings" "testing" "github.com/google/uuid" @@ -68,6 +69,10 @@ func TestIntegrationJSONMarshalCycle(t *testing.T) { require.Equal(t, &ig2, ig) }) } + + aws.SubKind = "" + _, err = json.MarshalIndent(aws, "", " ") + require.ErrorContains(t, err, `invalid subkind "", supported values are ["aws-oidc" "azure-oidc" "aws-ra" "github"]`) } func TestIntegrationCheckAndSetDefaults(t *testing.T) { @@ -223,7 +228,12 @@ func TestIntegrationCheckAndSetDefaults(t *testing.T) { }, ) }, - expectedErrorIs: trace.IsBadParameter, + expectedErrorIs: func(err error) bool { + return trace.IsBadParameter(err) && + strings.Contains(err.Error(), + `unsupported audience value "testvalue", supported values are ["" "aws-identity-center"]`, + ) + }, }, { name: "azure-oidc: valid", @@ -468,8 +478,8 @@ func TestIntegrationCheckAndSetDefaults(t *testing.T) { t.Run(tt.name, func(t *testing.T) { name := uuid.NewString() ig, err := tt.integration(name) - require.True(t, tt.expectedErrorIs(err), "expected another error", err) - if err != nil { + require.True(t, tt.expectedErrorIs(err), "expected another error %v", err) + if err != nil && trace.IsBadParameter(err) { return } diff --git a/api/types/kubernetes.go b/api/types/kubernetes.go index 0ddac7c6ce28d..9d872f57a7f75 100644 --- a/api/types/kubernetes.go +++ b/api/types/kubernetes.go @@ -21,6 +21,7 @@ import ( "regexp" "slices" "sort" + "strings" "time" "github.com/gravitational/trace" @@ -78,6 +79,8 @@ type KubeCluster interface { // GetCloud gets the cloud this kube cluster is running on, or an empty string if it // isn't running on a cloud provider. 
GetCloud() string + // IsEqual determines if two Kubernetes cluster resources are equivalent. + IsEqual(KubeCluster) bool } // DiscoveredEKSCluster represents a server discovered by EKS discovery fetchers. @@ -397,7 +400,7 @@ func (k *KubernetesClusterV3) CheckAndSetDefaults() error { return nil } -// IsEqual determines if two user resources are equivalent to one another. +// IsEqual determines if two Kubernetes cluster resources are equivalent. func (k *KubernetesClusterV3) IsEqual(i KubeCluster) bool { if other, ok := i.(*KubernetesClusterV3); ok { return deriveTeleportEqualKubernetesClusterV3(k, other) @@ -547,28 +550,14 @@ func DeduplicateKubeClusters(kubeclusters []KubeCluster) []KubeCluster { var _ ResourceWithLabels = (*KubernetesResourceV1)(nil) -// NewKubernetesPodV1 creates a new kubernetes resource with kind "pod". -func NewKubernetesPodV1(meta Metadata, spec KubernetesResourceSpecV1) (*KubernetesResourceV1, error) { - pod := &KubernetesResourceV1{ - Kind: KindKubePod, - Metadata: meta, - Spec: spec, - } - - if err := pod.CheckAndSetDefaults(); err != nil { - return nil, trace.Wrap(err) - } - return pod, nil -} - // NewKubernetesResourceV1 creates a new kubernetes resource. -func NewKubernetesResourceV1(kind string, meta Metadata, spec KubernetesResourceSpecV1) (*KubernetesResourceV1, error) { +func NewKubernetesResourceV1(kind string, namespaced bool, meta Metadata, spec KubernetesResourceSpecV1) (*KubernetesResourceV1, error) { resource := &KubernetesResourceV1{ Kind: kind, Metadata: meta, Spec: spec, } - if err := resource.CheckAndSetDefaults(); err != nil { + if err := resource.CheckAndSetDefaults(namespaced); err != nil { return nil, trace.Wrap(err) } return resource, nil @@ -631,17 +620,17 @@ func (k *KubernetesResourceV1) SetRevision(rev string) { // CheckAndSetDefaults validates the Resource and sets any empty fields to // default values.
-func (k *KubernetesResourceV1) CheckAndSetDefaults() error { +func (k *KubernetesResourceV1) CheckAndSetDefaults(namespaced bool) error { k.setStaticFields() - if !slices.Contains(KubernetesResourcesKinds, k.Kind) { - return trace.BadParameter("invalid kind %q defined; allowed values: %v", k.Kind, KubernetesResourcesKinds) + if !slices.Contains(KubernetesResourcesKinds, k.Kind) && !strings.HasPrefix(k.Kind, AccessRequestPrefixKindKube) { + return trace.BadParameter("invalid kind %q defined; allowed values: %v, %s", k.Kind, KubernetesResourcesKinds, AccessRequestPrefixKindKube) } if err := k.Metadata.CheckAndSetDefaults(); err != nil { return trace.Wrap(err) } // Unless the resource is cluster-wide, it must have a namespace. - if len(k.Spec.Namespace) == 0 && !slices.Contains(KubernetesClusterWideResourceKinds, k.Kind) { + if len(k.Spec.Namespace) == 0 && namespaced { return trace.BadParameter("missing kubernetes namespace") } @@ -753,3 +742,27 @@ func (k KubeResources) AsResources() ResourcesWithLabels { } return resources } + +// KubeResource represents either a KubernetesResource or RequestKubernetesResource. +type KubeResource interface { + GetAPIGroup() string + GetKind() string + GetNamespace() string + SetAPIGroup(string) + SetKind(string) + SetNamespace(string) +} + +// Setter/Getter to enable generics. 
+func (m *RequestKubernetesResource) GetAPIGroup() string { return m.APIGroup } +func (m *KubernetesResource) GetAPIGroup() string { return m.APIGroup } +func (m *RequestKubernetesResource) SetAPIGroup(group string) { m.APIGroup = group } +func (m *KubernetesResource) SetAPIGroup(group string) { m.APIGroup = group } +func (m *RequestKubernetesResource) GetKind() string { return m.Kind } +func (m *KubernetesResource) GetKind() string { return m.Kind } +func (m *RequestKubernetesResource) SetKind(kind string) { m.Kind = kind } +func (m *KubernetesResource) SetKind(kind string) { m.Kind = kind } +func (m *RequestKubernetesResource) GetNamespace() string { return "" } +func (m *KubernetesResource) GetNamespace() string { return m.Namespace } +func (m *RequestKubernetesResource) SetNamespace(ns string) {} +func (m *KubernetesResource) SetNamespace(ns string) { m.Namespace = ns } diff --git a/api/types/kubernetes_server.go b/api/types/kubernetes_server.go index f4cee7ca36492..85800921f4bfc 100644 --- a/api/types/kubernetes_server.go +++ b/api/types/kubernetes_server.go @@ -58,6 +58,22 @@ type KubeServer interface { SetCluster(KubeCluster) error // ProxiedService provides common methods for a proxied service. ProxiedService + // GetTargetHealth gets health details for a target Kubernetes cluster. + GetTargetHealth() *TargetHealth + // SetTargetHealth sets health details for a target Kubernetes cluster. + SetTargetHealth(h *TargetHealth) + // GetTargetHealthStatus gets the health status of a target Kubernetes cluster. + GetTargetHealthStatus() TargetHealthStatus + // SetTargetHealthStatus sets the health status of a target Kubernetes cluster. + SetTargetHealthStatus(status TargetHealthStatus) + // GetRelayGroup returns the name of the Relay group that the kube server is + // connected to. + GetRelayGroup() string + // GetRelayIDs returns the list of Relay host IDs that the kube server is + // connected to. 
+ GetRelayIDs() []string + // GetScope returns the scope this server belongs to. + GetScope() string } // NewKubernetesServerV3 creates a new kube server instance. @@ -241,6 +257,22 @@ func (s *KubernetesServerV3) SetProxyIDs(proxyIDs []string) { s.Spec.ProxyIDs = proxyIDs } +// GetRelayGroup implements [KubeServer]. +func (s *KubernetesServerV3) GetRelayGroup() string { + if s == nil { + return "" + } + return s.Spec.RelayGroup +} + +// GetRelayIDs implements [KubeServer]. +func (s *KubernetesServerV3) GetRelayIDs() []string { + if s == nil { + return nil + } + return s.Spec.RelayIds +} + // GetLabel retrieves the label with the provided key. If not found // value will be empty and ok will be false. func (s *KubernetesServerV3) GetLabel(key string) (value string, ok bool) { @@ -309,6 +341,60 @@ func (k *KubernetesServerV3) IsEqual(i KubeServer) bool { return false } +// GetTargetHealth gets health details for a target Kubernetes cluster. +func (s *KubernetesServerV3) GetTargetHealth() *TargetHealth { + return s.GetStatus().GetTargetHealth() +} + +// SetTargetHealth sets health details for a target Kubernetes cluster. +func (s *KubernetesServerV3) SetTargetHealth(h *TargetHealth) { + if s.Status == nil { + s.Status = &KubernetesServerStatusV3{} + } + s.Status.TargetHealth = h +} + +// GetTargetHealthStatus gets the health status of a target Kubernetes cluster. +func (s *KubernetesServerV3) GetTargetHealthStatus() TargetHealthStatus { + health := s.GetStatus().GetTargetHealth() + if health == nil { + return TargetHealthStatusUnknown + } + return TargetHealthStatus(health.Status) +} + +// SetTargetHealthStatus sets the health status of a target Kubernetes cluster. 
+func (s *KubernetesServerV3) SetTargetHealthStatus(status TargetHealthStatus) { + if s.Status == nil { + s.Status = &KubernetesServerStatusV3{} + } + if s.Status.TargetHealth == nil { + s.Status.TargetHealth = &TargetHealth{} + } + s.Status.TargetHealth.Status = string(status) +} + +// GetStatus gets the Kubernetes server status. +func (s *KubernetesServerV3) GetStatus() *KubernetesServerStatusV3 { + if s == nil { + return nil + } + return s.Status +} + +// GetScope returns the scope this server belongs to. +func (s *KubernetesServerV3) GetScope() string { + return s.Scope +} + +// GetTargetHealth gets the health of a Kubernetes cluster. +func (s *KubernetesServerStatusV3) GetTargetHealth() *TargetHealth { + if s == nil { + return nil + } + return s.TargetHealth +} + // KubeServers represents a list of kube servers. type KubeServers []KubeServer diff --git a/api/types/lock.go b/api/types/lock.go index 4293bb5fb77b7..aa2d7abb61af4 100644 --- a/api/types/lock.go +++ b/api/types/lock.go @@ -272,7 +272,9 @@ func (t LockTarget) IsEmpty() bool { t.WindowsDesktop == "" && t.AccessRequest == "" && t.Device == "" && - t.ServerID == "" + t.ServerID == "" && + t.BotInstanceID == "" && + t.JoinToken == "" } // Match returns true if the lock's target is matched by this target. @@ -288,7 +290,9 @@ func (t LockTarget) Match(lock Lock) bool { (t.WindowsDesktop == "" || lockTarget.WindowsDesktop == t.WindowsDesktop) && (t.AccessRequest == "" || lockTarget.AccessRequest == t.AccessRequest) && (t.Device == "" || lockTarget.Device == t.Device) && - (t.ServerID == "" || lockTarget.ServerID == t.ServerID) + (t.ServerID == "" || lockTarget.ServerID == t.ServerID) && + (t.BotInstanceID == "" || lockTarget.BotInstanceID == t.BotInstanceID) && + (t.JoinToken == "" || lockTarget.JoinToken == t.JoinToken) } // String returns string representation of the LockTarget. 
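The LockTarget changes above extend the existing matching convention to the new BotInstanceID and JoinToken fields: an empty field on the query target acts as a wildcard, while a non-empty field must match exactly. A minimal standalone sketch of that rule (the struct here is illustrative, not the real types.LockTarget):

```go
package main

import "fmt"

// target is a simplified stand-in for types.LockTarget, covering only a few
// of the fields touched by this diff.
type target struct {
	User          string
	BotInstanceID string
	JoinToken     string
}

// matches mirrors the Match convention in the diff: an empty field on the
// query acts as a wildcard, a non-empty field must match exactly.
func matches(query, lock target) bool {
	return (query.User == "" || lock.User == query.User) &&
		(query.BotInstanceID == "" || lock.BotInstanceID == query.BotInstanceID) &&
		(query.JoinToken == "" || lock.JoinToken == query.JoinToken)
}

func main() {
	lock := target{User: "alice", BotInstanceID: "b-123"}
	fmt.Println(matches(target{BotInstanceID: "b-123"}, lock)) // true: other fields are wildcards
	fmt.Println(matches(target{BotInstanceID: "b-999"}, lock)) // false: explicit mismatch
}
```

This wildcard semantics is presumably also why IsEmpty must enumerate every field, including the two new ones: a target with all fields empty would otherwise match every lock.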
@@ -305,5 +309,7 @@ func (t LockTarget) Equals(t2 LockTarget) bool { t.WindowsDesktop == t2.WindowsDesktop && t.AccessRequest == t2.AccessRequest && t.Device == t2.Device && - t.ServerID == t2.ServerID + t.ServerID == t2.ServerID && + t.BotInstanceID == t2.BotInstanceID && + t.JoinToken == t2.JoinToken } diff --git a/api/types/matchers.go b/api/types/matchers.go index e27cc61ac3384..3fce91a72f845 100644 --- a/api/types/matchers.go +++ b/api/types/matchers.go @@ -16,6 +16,13 @@ limitations under the License. package types +import ( + "net/url" + "strings" + + "github.com/gravitational/trace" +) + // Matcher is an interface for cloud resource matchers. type Matcher interface { // GetTypes gets the types that the matcher can match. @@ -23,3 +30,41 @@ type Matcher interface { // CopyWithTypes copies the matcher with new types. CopyWithTypes(t []string) Matcher } + +// CheckAndSetDefaults checks and sets defaults for HTTPProxySettings. +func (settings *HTTPProxySettings) CheckAndSetDefaults() error { + if settings == nil { + return nil + } + + if !isValidHTTPProxyURL(settings.HTTPProxy) { + return trace.BadParameter("invalid http_proxy setting: %q", settings.HTTPProxy) + } + if !isValidHTTPProxyURL(settings.HTTPSProxy) { + return trace.BadParameter("invalid https_proxy setting: %q", settings.HTTPSProxy) + } + + // NO_PROXY can contain multiple comma-separated values. + // Each value can have multiple formats: IP address, CIDR, domain name, etc. + // Each tool might have its own rules for parsing and validating NO_PROXY values. + // Due to this complexity and ambiguity, we skip strict validation here. + + return nil +} + +// We expect these variables to be used by Go code, so this method must allow at least all possible variations that are allowed by the golang.org/x/net/http/httpproxy. 
+func isValidHTTPProxyURL(proxyURL string) bool { + if proxyURL == "" { + return true + } + + if !strings.HasPrefix(proxyURL, "https://") && !strings.HasPrefix(proxyURL, "http://") { + // See https://cs.opensource.google/go/x/net/+/refs/tags/v0.46.0:http/httpproxy/proxy.go;drc=cde1dda944dcf6350753df966bb5bda87a544842;l=154 + proxyURL = "http://" + proxyURL + } + if _, err := url.Parse(proxyURL); err != nil { + return false + } + + return true +} diff --git a/api/types/matchers_aws.go b/api/types/matchers_aws.go index ed548235b2603..8f9182dd7b268 100644 --- a/api/types/matchers_aws.go +++ b/api/types/matchers_aws.go @@ -38,6 +38,10 @@ const ( // that will be called when executing the SSM command. AWSInstallerDocument = "TeleportDiscoveryInstaller" + // AWSSSMDocumentRunShellScript is the `AWS-RunShellScript` SSM Document name. + // It is available in all AWS accounts and does not need to be manually created. + AWSSSMDocumentRunShellScript = "AWS-RunShellScript" + // AWSAgentlessInstallerDocument is the name of the default AWS document // that will be called when executing the SSM command. AWSAgentlessInstallerDocument = "TeleportAgentlessDiscoveryInstaller" @@ -56,6 +60,8 @@ const ( AWSMatcherRedshiftServerless = "redshift-serverless" // AWSMatcherElastiCache is the AWS matcher type for ElastiCache databases. AWSMatcherElastiCache = "elasticache" + // AWSMatcherElastiCacheServerless is the AWS matcher type for ElastiCacheServerless databases. + AWSMatcherElastiCacheServerless = "elasticache-serverless" + // AWSMatcherMemoryDB is the AWS matcher type for MemoryDB databases. AWSMatcherMemoryDB = "memorydb" // AWSMatcherOpenSearch is the AWS matcher type for OpenSearch
@@ -81,6 +87,7 @@ var SupportedAWSDatabaseMatchers = []string{ AWSMatcherRedshift, AWSMatcherRedshiftServerless, AWSMatcherElastiCache, + AWSMatcherElastiCacheServerless, AWSMatcherMemoryDB, AWSMatcherOpenSearch, AWSMatcherDocumentDB, @@ -109,6 +116,23 @@ func (m AWSMatcher) CopyWithTypes(t []string) Matcher { return newMatcher } +func isAlphanumericIncluding(s string, extraChars ...rune) bool { + for _, r := range s { + if (r >= 'a' && r <= 'z') || (r >= 'A' && r <= 'Z') || (r >= '0' && r <= '9') || slices.Contains(extraChars, r) { + continue + } + + return false + } + + return true +} + +// IsRegionWildcard returns true if the matcher is configured to discover resources in all regions. +func (m *AWSMatcher) IsRegionWildcard() bool { + return len(m.Regions) == 1 && m.Regions[0] == Wildcard +} + // CheckAndSetDefaults that the matcher is correct and adds default values. func (m *AWSMatcher) CheckAndSetDefaults() error { for _, matcherType := range m.Types { @@ -123,10 +147,17 @@ func (m *AWSMatcher) CheckAndSetDefaults() error { } if len(m.Regions) == 0 { - return trace.BadParameter("discovery service requires at least one region") + return trace.BadParameter("discovery service requires at least one region, for EC2 you can also set the region to %q to iterate over all regions (requires account:ListRegions IAM permission)", Wildcard) } for _, region := range m.Regions { + if region == Wildcard { + if len(m.Regions) > 1 { + return trace.BadParameter("when using %q as region, no other regions can be specified", Wildcard) + } + break + } + if err := awsapiutils.IsValidRegion(region); err != nil { return trace.BadParameter("discovery service does not support region %q", region) } @@ -199,6 +230,22 @@ func (m *AWSMatcher) CheckAndSetDefaults() error { m.Params.SSHDConfig = SSHDConfigPath } + if m.Params.Suffix != "" { + if !isAlphanumericIncluding(m.Params.Suffix, '-') { + return trace.BadParameter("install.suffix can only contain alphanumeric characters and hyphens") + 
} + } + + if m.Params.UpdateGroup != "" { + if !isAlphanumericIncluding(m.Params.UpdateGroup, '-') { + return trace.BadParameter("install.update_group can only contain alphanumeric characters and hyphens") + } + } + + if err := m.Params.HTTPProxySettings.CheckAndSetDefaults(); err != nil { + return trace.Wrap(err) + } + if m.Params.ScriptName == "" { m.Params.ScriptName = DefaultInstallerScriptNameAgentless if m.Params.InstallTeleport { diff --git a/api/types/matchers_aws_test.go b/api/types/matchers_aws_test.go index 32b3faf67200e..449ff82596524 100644 --- a/api/types/matchers_aws_test.go +++ b/api/types/matchers_aws_test.go @@ -133,12 +133,12 @@ func TestAWSMatcherCheckAndSetDefaults(t *testing.T) { errCheck: isBadParameterErr, }, { - name: "wildcard is invalid for regions", + name: "wildcard is valid for the ec2 type", in: &AWSMatcher{ - Types: []string{"ec2", "rds"}, + Types: []string{"ec2"}, Regions: []string{"*"}, }, - errCheck: isBadParameterErr, + errCheck: require.NoError, }, { name: "invalid type", @@ -175,6 +175,14 @@ func TestAWSMatcherCheckAndSetDefaults(t *testing.T) { }, errCheck: isBadParameterErr, }, + { + name: "wildcard region is valid for ec2 type", + in: &AWSMatcher{ + Types: []string{"ec2"}, + Regions: []string{"*"}, + }, + errCheck: require.NoError, + }, { name: "no region", in: &AWSMatcher{ @@ -326,6 +334,41 @@ func TestAWSMatcherCheckAndSetDefaults(t *testing.T) { SSM: &AWSSSM{DocumentName: "TeleportDiscoveryInstaller"}, }, }, + { + name: "invalid update group", + in: &AWSMatcher{ + Types: []string{"ec2"}, + Regions: []string{"us-east-1"}, + Params: &InstallerParams{ + UpdateGroup: "invalid!", + }, + }, + errCheck: isBadParameterErr, + }, + { + name: "invalid install suffix", + in: &AWSMatcher{ + Types: []string{"ec2"}, + Regions: []string{"us-east-1"}, + Params: &InstallerParams{ + Suffix: "invalid!", + }, + }, + errCheck: isBadParameterErr, + }, + { + name: "invalid proxy settings", + in: &AWSMatcher{ + Types: []string{"ec2"}, + Regions: 
[]string{"us-east-1"}, + Params: &InstallerParams{ + HTTPProxySettings: &HTTPProxySettings{ + HTTPProxy: "not a valid url", + }, + }, + }, + errCheck: isBadParameterErr, + }, } { t.Run(tt.name, func(t *testing.T) { err := tt.in.CheckAndSetDefaults() diff --git a/api/types/matchers_azure.go b/api/types/matchers_azure.go index edadd4883a703..32da6c422cd30 100644 --- a/api/types/matchers_azure.go +++ b/api/types/matchers_azure.go @@ -90,6 +90,18 @@ func (m *AzureMatcher) CheckAndSetDefaults() error { m.Params.Azure = &AzureInstallerParams{} } + if m.Params.Suffix != "" { + if !isAlphanumericIncluding(m.Params.Suffix, '-') { + return trace.BadParameter("install.suffix can only contain alphanumeric characters and hyphens") + } + } + + if m.Params.UpdateGroup != "" { + if !isAlphanumericIncluding(m.Params.UpdateGroup, '-') { + return trace.BadParameter("install.update_group can only contain alphanumeric characters and hyphens") + } + } + switch m.Params.JoinMethod { case JoinMethodAzure, "": m.Params.JoinMethod = JoinMethodAzure @@ -104,6 +116,10 @@ func (m *AzureMatcher) CheckAndSetDefaults() error { if m.Params.ScriptName == "" { m.Params.ScriptName = DefaultInstallerScriptName } + + if err := m.Params.HTTPProxySettings.CheckAndSetDefaults(); err != nil { + return trace.Wrap(err) + } } if slices.Contains(m.Regions, Wildcard) || len(m.Regions) == 0 { diff --git a/api/types/matchers_azure_test.go b/api/types/matchers_azure_test.go index 850b30e26d66c..376b5a1cc2f8f 100644 --- a/api/types/matchers_azure_test.go +++ b/api/types/matchers_azure_test.go @@ -119,6 +119,39 @@ func TestAzureMatcherCheckAndSetDefaults(t *testing.T) { }, errCheck: isBadParameterErr, }, + { + name: "invalid install suffix", + in: &AzureMatcher{ + Types: []string{"vm"}, + Params: &InstallerParams{ + Suffix: "$SHELL", + }, + }, + errCheck: isBadParameterErr, + }, + { + name: "invalid update groups", + in: &AzureMatcher{ + Types: []string{"vm"}, + Params: &InstallerParams{ + UpdateGroup: "$SHELL", + 
}, + }, + errCheck: isBadParameterErr, + }, + { + name: "invalid proxy settings", + in: &AzureMatcher{ + Types: []string{"vm"}, + Regions: []string{"us-east-1"}, + Params: &InstallerParams{ + HTTPProxySettings: &HTTPProxySettings{ + HTTPProxy: "not a valid url", + }, + }, + }, + errCheck: isBadParameterErr, + }, } { t.Run(tt.name, func(t *testing.T) { err := tt.in.CheckAndSetDefaults() diff --git a/api/types/matchers_gcp.go b/api/types/matchers_gcp.go index e60fc8c73b7ab..4020b9711ec7d 100644 --- a/api/types/matchers_gcp.go +++ b/api/types/matchers_gcp.go @@ -82,6 +82,18 @@ func (m *GCPMatcher) CheckAndSetDefaults() error { m.Params = &InstallerParams{} } + if m.Params.Suffix != "" { + if !isAlphanumericIncluding(m.Params.Suffix, '-') { + return trace.BadParameter("install.suffix can only contain alphanumeric characters and hyphens") + } + } + + if m.Params.UpdateGroup != "" { + if !isAlphanumericIncluding(m.Params.UpdateGroup, '-') { + return trace.BadParameter("install.update_group can only contain alphanumeric characters and hyphens") + } + } + switch m.Params.JoinMethod { case JoinMethodGCP, "": m.Params.JoinMethod = JoinMethodGCP @@ -96,6 +108,10 @@ func (m *GCPMatcher) CheckAndSetDefaults() error { if m.Params.ScriptName == "" { m.Params.ScriptName = DefaultInstallerScriptName } + + if err := m.Params.HTTPProxySettings.CheckAndSetDefaults(); err != nil { + return trace.Wrap(err) + } } if slices.Contains(m.Locations, Wildcard) || len(m.Locations) == 0 { diff --git a/api/types/matchers_gcp_test.go b/api/types/matchers_gcp_test.go index f56c93172b654..2cfa6d1f63831 100644 --- a/api/types/matchers_gcp_test.go +++ b/api/types/matchers_gcp_test.go @@ -148,6 +148,41 @@ func TestGCPMatcherCheckAndSetDefaults(t *testing.T) { }, errCheck: isBadParameterErr, }, + { + name: "invalid install suffix", + in: &GCPMatcher{ + Types: []string{"gce"}, + ProjectIDs: []string{"project001"}, + Params: &InstallerParams{ + Suffix: "$SHELL", + }, + }, + errCheck: isBadParameterErr, + 
}, + { + name: "invalid update groups", + in: &GCPMatcher{ + Types: []string{"gce"}, + ProjectIDs: []string{"project001"}, + Params: &InstallerParams{ + UpdateGroup: "$SHELL", + }, + }, + errCheck: isBadParameterErr, + }, + { + name: "invalid proxy settings", + in: &GCPMatcher{ + Types: []string{"gce"}, + ProjectIDs: []string{"project001"}, + Params: &InstallerParams{ + HTTPProxySettings: &HTTPProxySettings{ + HTTPProxy: "not a valid url", + }, + }, + }, + errCheck: isBadParameterErr, + }, } { t.Run(tt.name, func(t *testing.T) { err := tt.in.CheckAndSetDefaults() diff --git a/api/types/matchers_test.go b/api/types/matchers_test.go new file mode 100644 index 0000000000000..7ff55610837bf --- /dev/null +++ b/api/types/matchers_test.go @@ -0,0 +1,65 @@ +// Copyright 2025 Gravitational, Inc +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package types + +import ( + "testing" + + "github.com/stretchr/testify/require" +) + +func TestHTTPProxySettingsCheckAndSetDefaults(t *testing.T) { + for _, tt := range []struct { + name string + in *HTTPProxySettings + errCheck require.ErrorAssertionFunc + }{ + { + name: "valid", + in: &HTTPProxySettings{ + HTTPProxy: "http://proxy.example.com:8080", + HTTPSProxy: "http://proxy.example.com:8080", + NoProxy: "internal.local", + }, + errCheck: require.NoError, + }, + { + name: "invalid http_proxy", + in: &HTTPProxySettings{ + HTTPProxy: "not a valid url", + }, + errCheck: require.Error, + }, + { + name: "invalid https_proxy", + in: &HTTPProxySettings{ + HTTPSProxy: "not a valid url", + }, + errCheck: require.Error, + }, + { + name: "no_proxy is always valid", + in: &HTTPProxySettings{ + NoProxy: "internal.local, ::1/128,", + }, + errCheck: require.NoError, + }, + } { + t.Run(tt.name, func(t *testing.T) { + err := tt.in.CheckAndSetDefaults() + tt.errCheck(t, err) + }) + } +} diff --git a/api/types/mfa.go b/api/types/mfa.go index bd0f42aa21729..cf9f000cabfa2 100644 --- a/api/types/mfa.go +++ b/api/types/mfa.go @@ -18,7 +18,7 @@ import ( "bytes" "time" - "github.com/gogo/protobuf/jsonpb" + "github.com/gogo/protobuf/jsonpb" //nolint:depguard // needed for backwards compatibility "github.com/gravitational/trace" "github.com/gravitational/teleport/api/utils" diff --git a/api/types/msgraph.go b/api/types/msgraph.go new file mode 100644 index 0000000000000..aa4733f43091c --- /dev/null +++ b/api/types/msgraph.go @@ -0,0 +1,74 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package types + +import ( + "slices" + + "github.com/gravitational/trace" +) + +const ( + // MSGraphDefaultLoginEndpoint is the endpoint under which Microsoft identity platform APIs are available. + MSGraphDefaultLoginEndpoint = "https://login.microsoftonline.com" + // MSGraphDefaultEndpoint is the endpoint under which the Microsoft Graph API is available. + MSGraphDefaultEndpoint = "https://graph.microsoft.com" +) + +var ( + validLoginEndpoints = []string{ + "https://login.microsoftonline.com", + "https://login.microsoftonline.us", + "https://login.chinacloudapi.cn", + } + validGraphEndpoints = []string{ + "https://graph.microsoft.com", + "https://graph.microsoft.us", + "https://dod-graph.microsoft.us", + "https://microsoftgraph.chinacloudapi.cn", + } +) + +// ValidateMSGraphEndpoints checks if API endpoints point to one of the official deployments of +// the Microsoft identity platform and Microsoft Graph. +// https://learn.microsoft.com/en-us/graph/deployments +func ValidateMSGraphEndpoints(loginEndpoint, graphEndpoint string) error { + if loginEndpoint != "" && !slices.Contains(validLoginEndpoints, loginEndpoint) { + return trace.BadParameter("expected login endpoint to be one of %q, got %q", validLoginEndpoints, loginEndpoint) + } + + if graphEndpoint != "" && !slices.Contains(validGraphEndpoints, graphEndpoint) { + return trace.BadParameter("expected graph endpoint to be one of %q, got %q", validGraphEndpoints, graphEndpoint) + } + + return nil +} + +const ( + // EntraIDSecurityGroups represents security-enabled Entra ID groups.
+ EntraIDSecurityGroups = "security-groups" + // EntraIDDirectoryRoles represents Entra ID directory roles. + EntraIDDirectoryRoles = "directory-roles" + // EntraIDAllGroups represents all types of Entra ID groups, including directory roles. + EntraIDAllGroups = "all-groups" +) + +// EntraIDGroupsTypes defines the supported Entra ID +// group types for the Entra ID groups provider. +var EntraIDGroupsTypes = []string{ + EntraIDSecurityGroups, + EntraIDDirectoryRoles, + EntraIDAllGroups, +} diff --git a/api/types/msgraph_test.go b/api/types/msgraph_test.go new file mode 100644 index 0000000000000..511ad2febd8fc --- /dev/null +++ b/api/types/msgraph_test.go @@ -0,0 +1,65 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License.
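The endpoint allow-lists above drive a simple membership check. A standalone sketch follows; the function name `endpointsValid` and its boolean return are ours for illustration — the real `ValidateMSGraphEndpoints` returns a `trace.BadParameter` error instead:

```go
package main

import (
	"fmt"
	"slices"
)

// Endpoint lists copied from the diff: the official Microsoft identity
// platform and Microsoft Graph deployments (worldwide, US government, China).
var (
	validLoginEndpoints = []string{
		"https://login.microsoftonline.com",
		"https://login.microsoftonline.us",
		"https://login.chinacloudapi.cn",
	}
	validGraphEndpoints = []string{
		"https://graph.microsoft.com",
		"https://graph.microsoft.us",
		"https://dod-graph.microsoft.us",
		"https://microsoftgraph.chinacloudapi.cn",
	}
)

// endpointsValid reports whether each endpoint is either empty
// (meaning "use the default") or one of the official deployments.
func endpointsValid(login, graph string) bool {
	if login != "" && !slices.Contains(validLoginEndpoints, login) {
		return false
	}
	if graph != "" && !slices.Contains(validGraphEndpoints, graph) {
		return false
	}
	return true
}

func main() {
	fmt.Println(endpointsValid("", ""))                          // true: empty values fall back to defaults
	fmt.Println(endpointsValid("", "https://graph.windows.net")) // false: not an official Graph deployment
}
```

Note the two endpoints are validated independently — the check does not require the login and graph endpoints to belong to the same national cloud, which matches the "pair is not matched" test case in the diff.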
+ +package types + +import ( + "testing" + + "github.com/stretchr/testify/require" +) + +func TestValidateMSGraphEndpoints(t *testing.T) { + for _, tt := range []struct { + name string + loginEndpoint string + graphEndpoint string + errAssertion require.ErrorAssertionFunc + }{ + { + name: "valid endpoints", + loginEndpoint: "https://login.microsoftonline.com", + graphEndpoint: "https://graph.microsoft.com", + errAssertion: require.NoError, + }, + { + name: "empty value is permitted", + loginEndpoint: "", + graphEndpoint: "", + errAssertion: require.NoError, + }, + { + name: "login and graph endpoint pair is not matched", + loginEndpoint: "https://login.microsoftonline.com", + graphEndpoint: "", + errAssertion: require.NoError, + }, + { + name: "empty login endpoint and invalid graph endpoint not allowed", + loginEndpoint: "", + graphEndpoint: "https://graph.windows.net", + errAssertion: require.Error, + }, + { + name: "invalid login and graph endpoint", + loginEndpoint: "https://login.microsoft.com", + graphEndpoint: "https://graph.windows.net", + errAssertion: require.Error, + }, + } { + t.Run(tt.name, func(t *testing.T) { + tt.errAssertion(t, ValidateMSGraphEndpoints(tt.loginEndpoint, tt.graphEndpoint)) + }) + } +} diff --git a/api/types/oidc.go b/api/types/oidc.go index 518a04943300c..535e8fdc9e663 100644 --- a/api/types/oidc.go +++ b/api/types/oidc.go @@ -19,7 +19,9 @@ package types import ( "net/netip" "net/url" + "os" "slices" + "strconv" "strings" "time" @@ -121,6 +123,20 @@ type OIDCConnector interface { SetPKCEMode(mode constants.OIDCPKCEMode) // GetPKCEMode will return the PKCEMode of the connector. GetPKCEMode() constants.OIDCPKCEMode + // GetUserMatchers returns the set of glob patterns to narrow down which username(s) this auth connector should + // match for identifier-first login. + GetUserMatchers() []string + // GetRequestObjectMode will return the RequestObjectMode of the connector. 
+ GetRequestObjectMode() constants.OIDCRequestObjectMode + // SetRequestObjectMode sets the RequestObjectMode of the connector. + SetRequestObjectMode(mode constants.OIDCRequestObjectMode) + // SetUserMatchers sets the set of glob patterns to narrow down which username(s) this auth connector should match + // for identifier-first login. + SetUserMatchers([]string) + // GetEntraIDGroupsProvider returns Entra ID groups provider. + GetEntraIDGroupsProvider() *EntraIDGroupsProvider + // IsEntraIDGroupsProviderDisabled checks if the Entra ID groups provider is disabled. + IsEntraIDGroupsProviderDisabled() bool } // NewOIDCConnector returns a new OIDCConnector based off a name and OIDCConnectorSpecV3. @@ -502,6 +518,13 @@ func (o *OIDCConnectorV3) Validate() error { } } + entra := o.GetEntraIDGroupsProvider() + if entra != nil { + if err := entra.checkAndSetDefaults(); err != nil { + return trace.Wrap(err) + } + } + return nil } @@ -565,12 +588,50 @@ func (o *OIDCConnectorV3) WithMFASettings() error { o.Spec.ClientSecret = o.Spec.MFASettings.ClientSecret o.Spec.ACR = o.Spec.MFASettings.AcrValues o.Spec.Prompt = o.Spec.MFASettings.Prompt - o.Spec.MaxAge = &MaxAge{ - Value: o.Spec.MFASettings.MaxAge, + // Overwrite the base connector's request object mode iff the MFA setting's + // request object mode is explicitly set. Otherwise, the base setting should be assumed. + if o.Spec.MFASettings.RequestObjectMode != string(constants.OIDCRequestObjectModeUnknown) { + o.Spec.RequestObjectMode = o.Spec.MFASettings.RequestObjectMode + } + + // In rare cases, some providers will complain about the presence of the 'max_age' + // parameter in auth requests. Provide users with a workaround to omit it. 
+ omitMaxAge, _ := strconv.ParseBool(os.Getenv("TELEPORT_OIDC_OMIT_MFA_MAX_AGE")) + if omitMaxAge { + o.Spec.MaxAge = nil + } else { + o.Spec.MaxAge = &MaxAge{ + Value: o.Spec.MFASettings.MaxAge, + } } return nil } +// GetUserMatchers returns the set of glob patterns to narrow down which username(s) this auth connector should +// match for identifier-first login. +func (r *OIDCConnectorV3) GetUserMatchers() []string { + if r.Spec.UserMatchers == nil { + return nil + } + return r.Spec.UserMatchers +} + +// GetRequestObjectMode returns the configured OIDC request object mode. +func (r *OIDCConnectorV3) GetRequestObjectMode() constants.OIDCRequestObjectMode { + return constants.OIDCRequestObjectMode(r.Spec.RequestObjectMode) +} + +// SetRequestObjectMode sets the OIDC request object mode. +func (r *OIDCConnectorV3) SetRequestObjectMode(mode constants.OIDCRequestObjectMode) { + r.Spec.RequestObjectMode = string(mode) +} + +// SetUserMatchers sets the set of glob patterns to narrow down which username(s) this auth connector should match +// for identifier-first login. +func (r *OIDCConnectorV3) SetUserMatchers(userMatchers []string) { + r.Spec.UserMatchers = userMatchers +} + // Check returns nil if all parameters are great, err otherwise func (r *OIDCAuthRequest) Check() error { switch { @@ -596,3 +657,28 @@ func (r *OIDCAuthRequest) Check() error { } return nil } + +func (e *EntraIDGroupsProvider) checkAndSetDefaults() error { + if e.GroupType != "" { + if !slices.Contains(EntraIDGroupsTypes, e.GroupType) { + return trace.BadParameter("expected group type to be one of %q, got %q", EntraIDGroupsTypes, e.GroupType) + } + } + + if err := ValidateMSGraphEndpoints("", e.GraphEndpoint); err != nil { + return trace.Wrap(err) + } + + return nil +} + +// GetEntraIDGroupsProvider returns Entra ID groups provider. 
+func (o *OIDCConnectorV3) GetEntraIDGroupsProvider() *EntraIDGroupsProvider { + return o.Spec.EntraIdGroupsProvider +} + +// IsEntraIDGroupsProviderDisabled checks if the Entra ID groups provider is disabled. +func (o *OIDCConnectorV3) IsEntraIDGroupsProviderDisabled() bool { + entra := o.Spec.EntraIdGroupsProvider + return entra != nil && entra.Disabled +} diff --git a/api/types/oidc_test.go b/api/types/oidc_test.go new file mode 100644 index 0000000000000..c382042dedc51 --- /dev/null +++ b/api/types/oidc_test.go @@ -0,0 +1,93 @@ +// Copyright 2025 Gravitational, Inc +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package types + +import ( + "testing" + + "github.com/stretchr/testify/require" + + "github.com/gravitational/teleport/api/types/wrappers" +) + +func TestOIDCValidate(t *testing.T) { + tests := []struct { + name string + entra *EntraIDGroupsProvider + errAssertion require.ErrorAssertionFunc + }{ + { + name: "invalid group type", + entra: &EntraIDGroupsProvider{ + GroupType: "random", + }, + errAssertion: require.Error, + }, + { + name: "invalid endpoint", + entra: &EntraIDGroupsProvider{ + GroupType: "all-groups", + GraphEndpoint: "https://example.com", + }, + errAssertion: require.Error, + }, + { + name: "empty entra_id_groups_provider", + entra: nil, + errAssertion: require.NoError, + }, + { + name: "disabled state should not skip invalid configuration", + entra: &EntraIDGroupsProvider{ + Disabled: true, + GroupType: "all-groups", + GraphEndpoint: "https://example.com", + }, + errAssertion: require.Error, + }, + { + name: "valid", + entra: &EntraIDGroupsProvider{ + Disabled: false, + GroupType: "all-groups", + GraphEndpoint: "https://graph.microsoft.com", + }, + errAssertion: require.NoError, + }, + } + + for _, test := range tests { + t.Run(test.name, func(t *testing.T) { + connector, err := NewOIDCConnector("test-connector", OIDCConnectorSpecV3{ + ClientID: "testid", + ClientSecret: "secret", + ClaimsToRoles: []ClaimMapping{ + { + Claim: "groups", + Value: "*", + Roles: []string{"requester"}, + }, + }, + RedirectURLs: wrappers.Strings{ + "https://example.com/proxy/oidc/callback", + }, + EntraIdGroupsProvider: test.entra, + }) + require.NoError(t, err) + + test.errAssertion(t, connector.Validate()) + }) + } +} diff --git a/api/types/plugin.go b/api/types/plugin.go index 6ba08b8af2942..76f0b7faa1d7c 100644 --- a/api/types/plugin.go +++ b/api/types/plugin.go @@ -21,7 +21,7 @@ import ( "net/url" "time" - "github.com/gogo/protobuf/jsonpb" + "github.com/gogo/protobuf/jsonpb" //nolint:depguard // needed for backwards compatibility "github.com/gravitational/trace" 
"github.com/gravitational/teleport/api/utils" @@ -37,6 +37,7 @@ var AllPluginTypes = []PluginType{ PluginTypeOpenAI, PluginTypeOkta, PluginTypeJamf, + PluginTypeIntune, PluginTypeJira, PluginTypeOpsgenie, PluginTypePagerDuty, @@ -62,6 +63,8 @@ const ( PluginTypeOkta = "okta" // PluginTypeJamf is the Jamf MDM plugin PluginTypeJamf = "jamf" + // PluginTypeIntune is the Intune MDM plugin + PluginTypeIntune = "intune" // PluginTypeJira is the Jira access plugin PluginTypeJira = "jira" // PluginTypeOpsgenie is the Opsgenie access request plugin @@ -120,6 +123,7 @@ type Plugin interface { SetCredentials(PluginCredentials) error SetStatus(PluginStatus) error GetGeneration() string + CloneWithoutSecrets() Plugin } // PluginCredentials are the credentials embedded in Plugin @@ -244,7 +248,23 @@ func (p *PluginV1) CheckAndSetDefaults() error { if len(staticCreds.Labels) == 0 { return trace.BadParameter("labels must be specified") } - + case *PluginSpecV1_Intune: + if settings.Intune == nil { + return trace.BadParameter("missing Intune settings") + } + if err := settings.Intune.Validate(); err != nil { + return trace.Wrap(err) + } + if p.Credentials == nil { + return trace.BadParameter("credentials must be set") + } + staticCreds := p.Credentials.GetStaticCredentialsRef() + if staticCreds == nil { + return trace.BadParameter("Intune plugin must be used with the static credentials ref type") + } + if len(staticCreds.Labels) == 0 { + return trace.BadParameter("labels must be specified") + } case *PluginSpecV1_Jira: if settings.Jira == nil { return trace.BadParameter("missing Jira settings") @@ -418,7 +438,8 @@ func (p *PluginV1) CheckAndSetDefaults() error { return nil } -// WithoutSecrets returns an instance of resource without secrets. +// WithoutSecrets returns the Plugin as a Resource, with secrets removed. +// If you want to have a copy of the Plugin without secrets use CloneWithoutSecrets instead. 
func (p *PluginV1) WithoutSecrets() Resource { if p.Credentials == nil { return p @@ -429,6 +450,15 @@ return p2 } +// CloneWithoutSecrets returns a deep copy of the Plugin instance with secrets removed. +// Use this when you need a separate Plugin object without secrets, +// rather than a Resource interface value as returned by WithoutSecrets. +func (p *PluginV1) CloneWithoutSecrets() Plugin { + out := p.Clone().(*PluginV1) + out.SetCredentials(nil) + return out +} + func (p *PluginV1) setStaticFields() { p.Kind = KindPlugin p.Version = V1 @@ -555,6 +585,8 @@ func (p *PluginV1) GetType() PluginType { return PluginTypeOkta case *PluginSpecV1_Jamf: return PluginTypeJamf + case *PluginSpecV1_Intune: + return PluginTypeIntune case *PluginSpecV1_Jira: return PluginTypeJira case *PluginSpecV1_Opsgenie: @@ -754,9 +786,11 @@ func (c *PluginEntraIDSettings) Validate() error { } func (c *PluginSCIMSettings) CheckAndSetDefaults() error { - if c.SamlConnectorName == "" { - return trace.BadParameter("saml_connector_name must be set") + if c.SamlConnectorName == "" && c.ConnectorInfo == nil { + // Don't mention the legacy field in the error message. + return trace.BadParameter("connector_info must be set") } + return nil } @@ -770,6 +804,17 @@ func (c *PluginDatadogAccessSettings) CheckAndSetDefaults() error { return nil } +const ( + // AWSICRolesSyncModeAll indicates that the AWS Identity Center integration + // should create and maintain roles for all possible Account Assignments. + AWSICRolesSyncModeAll string = "ALL" + + // AWSICRolesSyncModeNone indicates that the AWS Identity Center integration + // should *not* create any roles representing potential Account + // Assignments.
+ AWSICRolesSyncModeNone string = "NONE" +) + func (c *PluginAWSICSettings) CheckAndSetDefaults() error { // Handle legacy records that pre-date the polymorphic Credentials settings @@ -974,7 +1019,7 @@ func (c *PluginNetIQSettings) Validate() error { return nil } -// CheckAndSetDefaults checks that the required fields for the Github plugin are set. +// Validate checks that the required fields for the Github plugin are set. func (c *PluginGithubSettings) Validate() error { if c.ClientId == "" { return trace.BadParameter("client_id must be set") @@ -984,3 +1029,38 @@ func (c *PluginGithubSettings) Validate() error { } return nil } + +// Validate checks that the required fields for the Intune plugin are set. +func (c *PluginIntuneSettings) Validate() error { + if c.Tenant == "" { + return trace.BadParameter("tenant must be set") + } + + if err := ValidateMSGraphEndpoints(c.LoginEndpoint, c.GraphEndpoint); err != nil { + return trace.Wrap(err) + } + + return nil +} + +// UnmarshalJSON implements [json.Unmarshaler] for the PluginSyncFilter, forcing +// it to use the `jsonpb` unmarshaler, which understands how to unpack values +// generated from a protobuf `oneof` directive. +func (s *PluginSyncFilter) UnmarshalJSON(b []byte) error { + if err := (&jsonpb.Unmarshaler{AllowUnknownFields: true}).Unmarshal(bytes.NewReader(b), s); err != nil { + return trace.Wrap(err) + } + return nil +} + +// MarshalJSON implements [json.Marshaler] for the PluginSyncFilter, forcing +// it to use the `jsonpb` marshaler, which understands how to pack values +// generated from a protobuf `oneof` directive. 
+func (s PluginSyncFilter) MarshalJSON() ([]byte, error) { + m := jsonpb.Marshaler{} + var buf bytes.Buffer + if err := m.Marshal(&buf, &s); err != nil { + return nil, trace.Wrap(err) + } + return buf.Bytes(), nil +} diff --git a/api/types/plugin_static_credentials.go b/api/types/plugin_static_credentials.go index 5dd8397a29c26..3433c6a1a4b78 100644 --- a/api/types/plugin_static_credentials.go +++ b/api/types/plugin_static_credentials.go @@ -82,7 +82,11 @@ func (p *PluginStaticCredentialsV1) CheckAndSetDefaults() error { return trace.Wrap(err) } - switch credentials := p.Spec.Credentials.(type) { + return trace.Wrap(p.Spec.CheckAndSetDefaults()) +} + +func (ps *PluginStaticCredentialsSpecV1) CheckAndSetDefaults() error { + switch credentials := ps.Credentials.(type) { case *PluginStaticCredentialsSpecV1_APIToken: if credentials.APIToken == "" { return trace.BadParameter("api token object is missing") diff --git a/api/types/plugin_test.go b/api/types/plugin_test.go index 176f1adb71f8d..bff364606e906 100644 --- a/api/types/plugin_test.go +++ b/api/types/plugin_test.go @@ -512,6 +512,192 @@ func TestPluginJamfValidation(t *testing.T) { } } +func TestPluginIntuneValidation(t *testing.T) { + testCases := []struct { + name string + settings *PluginSpecV1_Intune + creds *PluginCredentialsV1 + assertErr require.ErrorAssertionFunc + }{ + { + name: "no settings", + settings: &PluginSpecV1_Intune{ + Intune: nil, + }, + creds: nil, + assertErr: func(t require.TestingT, err error, args ...any) { + require.True(t, trace.IsBadParameter(err)) + require.Contains(t, err.Error(), "missing Intune settings") + }, + }, + { + name: "no tenant", + settings: &PluginSpecV1_Intune{ + Intune: &PluginIntuneSettings{}, + }, + creds: nil, + assertErr: func(t require.TestingT, err error, args ...any) { + require.True(t, trace.IsBadParameter(err)) + require.Contains(t, err.Error(), "tenant must be set") + }, + }, + { + name: "no credentials inner", + settings: &PluginSpecV1_Intune{ + Intune: 
&PluginIntuneSettings{Tenant: "foo"}, + }, + creds: &PluginCredentialsV1{}, + assertErr: func(t require.TestingT, err error, args ...any) { + require.True(t, trace.IsBadParameter(err)) + require.Contains(t, err.Error(), "must be used with the static credentials ref type") + }, + }, + { + name: "invalid credential type (oauth2)", + settings: &PluginSpecV1_Intune{ + Intune: &PluginIntuneSettings{Tenant: "foo"}, + }, + creds: &PluginCredentialsV1{ + Credentials: &PluginCredentialsV1_Oauth2AccessToken{}, + }, + assertErr: func(t require.TestingT, err error, args ...any) { + require.True(t, trace.IsBadParameter(err)) + require.Contains(t, err.Error(), "must be used with the static credentials ref type") + }, + }, + { + name: "invalid credentials (static credentials)", + settings: &PluginSpecV1_Intune{ + Intune: &PluginIntuneSettings{Tenant: "foo"}, + }, + creds: &PluginCredentialsV1{ + Credentials: &PluginCredentialsV1_StaticCredentialsRef{ + &PluginStaticCredentialsRef{ + Labels: map[string]string{}, + }, + }, + }, + assertErr: func(t require.TestingT, err error, args ...any) { + require.True(t, trace.IsBadParameter(err)) + require.Contains(t, err.Error(), "labels must be specified") + }, + }, + { + name: "valid credentials (static credentials)", + settings: &PluginSpecV1_Intune{ + Intune: &PluginIntuneSettings{Tenant: "foo"}, + }, + creds: &PluginCredentialsV1{ + Credentials: &PluginCredentialsV1_StaticCredentialsRef{ + &PluginStaticCredentialsRef{ + Labels: map[string]string{ + "label1": "value1", + }, + }, + }, + }, + assertErr: func(t require.TestingT, err error, args ...any) { + require.NoError(t, err) + }, + }, + { + name: "invalid login endpoint", + settings: &PluginSpecV1_Intune{ + Intune: &PluginIntuneSettings{ + Tenant: "foo", + LoginEndpoint: "example.com", + }, + }, + creds: &PluginCredentialsV1{ + Credentials: &PluginCredentialsV1_StaticCredentialsRef{ + &PluginStaticCredentialsRef{ + Labels: map[string]string{ + "label1": "value1", + }, + }, + }, + }, + 
assertErr: func(t require.TestingT, err error, args ...any) { + require.True(t, trace.IsBadParameter(err)) + require.Contains(t, err.Error(), "login endpoint") + }, + }, + { + name: "valid login endpoint", + settings: &PluginSpecV1_Intune{ + Intune: &PluginIntuneSettings{ + Tenant: "foo", + LoginEndpoint: "https://login.microsoftonline.us", + }, + }, + creds: &PluginCredentialsV1{ + Credentials: &PluginCredentialsV1_StaticCredentialsRef{ + &PluginStaticCredentialsRef{ + Labels: map[string]string{ + "label1": "value1", + }, + }, + }, + }, + assertErr: func(t require.TestingT, err error, args ...any) { + require.NoError(t, err) + }, + }, + { + name: "invalid graph endpoint", + settings: &PluginSpecV1_Intune{ + Intune: &PluginIntuneSettings{ + Tenant: "foo", + GraphEndpoint: "example.com", + }, + }, + creds: &PluginCredentialsV1{ + Credentials: &PluginCredentialsV1_StaticCredentialsRef{ + &PluginStaticCredentialsRef{ + Labels: map[string]string{ + "label1": "value1", + }, + }, + }, + }, + assertErr: func(t require.TestingT, err error, args ...any) { + require.True(t, trace.IsBadParameter(err)) + require.ErrorContains(t, err, "graph endpoint") + }, + }, + { + name: "valid graph endpoint", + settings: &PluginSpecV1_Intune{ + Intune: &PluginIntuneSettings{ + Tenant: "foo", + GraphEndpoint: "https://graph.microsoft.us", + }, + }, + creds: &PluginCredentialsV1{ + Credentials: &PluginCredentialsV1_StaticCredentialsRef{ + &PluginStaticCredentialsRef{ + Labels: map[string]string{ + "label1": "value1", + }, + }, + }, + }, + assertErr: func(t require.TestingT, err error, args ...any) { + require.NoError(t, err) + }, + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + plugin := NewPluginV1(Metadata{Name: "foobar"}, PluginSpecV1{ + Settings: tc.settings, + }, tc.creds) + tc.assertErr(t, plugin.CheckAndSetDefaults()) + }) + } +} + func TestPluginMattermostValidation(t *testing.T) { defaultSettings := &PluginSpecV1_Mattermost{ Mattermost: 
&PluginMattermostSettings{ @@ -1133,6 +1319,36 @@ func TestPluginAWSICSettings(t *testing.T) { require.Equal(t, "some-oidc-integration", oidc.IntegrationName) }, }, + { + name: "(role sync mode) empty value is not an error", + mutateSettings: func(cfg *PluginAWSICSettings) { + cfg.RolesSyncMode = "" + }, + assertErr: require.NoError, + assertValue: func(t *testing.T, cfg *PluginAWSICSettings) { + require.Empty(t, cfg.RolesSyncMode) + }, + }, + { + name: "(role sync mode) value is preserved", + mutateSettings: func(cfg *PluginAWSICSettings) { + cfg.RolesSyncMode = AWSICRolesSyncModeNone + }, + assertErr: require.NoError, + assertValue: func(t *testing.T, cfg *PluginAWSICSettings) { + require.Equal(t, AWSICRolesSyncModeNone, cfg.RolesSyncMode) + }, + }, + { + // Technically, an invalid Role Sync Mode *is* an error, it's just not + // enforced when deserializing the plugin record. Validation is + // required at time of use. + name: "(role sync mode) invalid value is not an error", + mutateSettings: func(cfg *PluginAWSICSettings) { + cfg.RolesSyncMode = "banana" + }, + assertErr: require.NoError, + }, } for _, tc := range testCases { diff --git a/api/types/provisioning.go b/api/types/provisioning.go index 0ea9ab37edabe..6ad29daefa47b 100644 --- a/api/types/provisioning.go +++ b/api/types/provisioning.go @@ -20,6 +20,7 @@ import ( "crypto/x509" "encoding/pem" "fmt" + "net/url" "slices" "strings" "time" @@ -86,6 +87,8 @@ const ( // JoinMethodBoundKeypair indicates the node will join using the Bound // Keypair join method. See lib/boundkeypair for more. JoinMethodBoundKeypair JoinMethod = "bound_keypair" + // JoinMethodEnv0 indicates the node will join using the env0 join method. 
+ JoinMethodEnv0 JoinMethod = "env0" ) var JoinMethods = []JoinMethod{ @@ -105,6 +108,7 @@ var JoinMethods = []JoinMethod{ JoinMethodTerraformCloud, JoinMethodOracle, JoinMethodBoundKeypair, + JoinMethodEnv0, } func ValidateJoinMethod(method JoinMethod) error { @@ -122,6 +126,7 @@ var ( KubernetesJoinTypeUnspecified KubernetesJoinType = "" KubernetesJoinTypeInCluster KubernetesJoinType = "in_cluster" KubernetesJoinTypeStaticJWKS KubernetesJoinType = "static_jwks" + KubernetesJoinTypeOIDC KubernetesJoinType = "oidc" ) // ProvisionToken is a provisioning token @@ -145,6 +150,8 @@ type ProvisionToken interface { GetGCPRules() *ProvisionTokenSpecV2GCP // GetGithubRules will return the GitHub rules within this token. GetGithubRules() *ProvisionTokenSpecV2GitHub + // GetGitlabRules will return the GitLab rules within this token. + GetGitlabRules() *ProvisionTokenSpecV2GitLab // GetAWSIIDTTL returns the TTL of EC2 IIDs GetAWSIIDTTL() Duration // GetJoinMethod returns joining method that must be used with this token. @@ -171,6 +178,11 @@ type ProvisionToken interface { // join methods where the name is secret. This should be used when logging // the token name. GetSafeName() string + + // GetAssignedScope always returns an empty string because a [ProvisionToken] is always + // unscoped + GetAssignedScope() string + // Clone creates a copy of the token. 
Clone() ProvisionToken } @@ -441,17 +453,21 @@ func (p *ProvisionTokenV2) CheckAndSetDefaults() error { return trace.Wrap(err, "spec.azure_devops: failed validation") } case JoinMethodBoundKeypair: - providerCfg := p.Spec.BoundKeypair - if providerCfg == nil { - return trace.BadParameter( - "spec.bound_keypair: must be configured for the join method %q", - JoinMethodBoundKeypair, - ) + if p.Spec.BoundKeypair == nil { + p.Spec.BoundKeypair = &ProvisionTokenSpecV2BoundKeypair{} } - if err := providerCfg.checkAndSetDefaults(); err != nil { + if err := p.Spec.BoundKeypair.checkAndSetDefaults(); err != nil { return trace.Wrap(err, "spec.bound_keypair: failed validation") } + case JoinMethodEnv0: + if p.Spec.Env0 == nil { + p.Spec.Env0 = &ProvisionTokenSpecV2Env0{} + } + + if err := p.Spec.Env0.checkAndSetDefaults(); err != nil { + return trace.Wrap(err, "spec.env0: failed validation") + } default: return trace.BadParameter("unknown join method %q", p.Spec.JoinMethod) } @@ -501,6 +517,11 @@ func (p *ProvisionTokenV2) GetGithubRules() *ProvisionTokenSpecV2GitHub { return p.Spec.GitHub } +// GetGitlabRules will return the GitLab rules within this token. +func (p *ProvisionTokenV2) GetGitlabRules() *ProvisionTokenSpecV2GitLab { + return p.Spec.GitLab +} + // GetAWSIIDTTL returns the TTL of EC2 IIDs func (p *ProvisionTokenV2) GetAWSIIDTTL() Duration { return p.Spec.AWSIIDTTL @@ -636,6 +657,12 @@ func (p *ProvisionTokenV2) GetSafeName() string { return name } +// GetAssignedScope always returns an empty string because a [ProvisionTokenV2] is always +// unscoped +func (p *ProvisionTokenV2) GetAssignedScope() string { + return "" +} + // String returns the human readable representation of a provisioning token. 
func (p ProvisionTokenV2) String() string { expires := "never" @@ -787,10 +814,34 @@ func (a *ProvisionTokenSpecV2Kubernetes) checkAndSetDefaults() error { if a.StaticJWKS.JWKS == "" { return trace.BadParameter("static_jwks.jwks: must be set when type is %q", KubernetesJoinTypeStaticJWKS) } + case KubernetesJoinTypeOIDC: + if a.OIDC == nil { + return trace.BadParameter("oidc: must be set when type is %q", KubernetesJoinTypeOIDC) + } + if a.OIDC.Issuer == "" { + return trace.BadParameter("oidc.issuer: must be set when type is %q", KubernetesJoinTypeOIDC) + } + + parsed, err := url.Parse(a.OIDC.Issuer) + if err != nil { + return trace.BadParameter("oidc.issuer: must be a valid URL") + } + + if parsed.Scheme == "http" { + if !a.OIDC.InsecureAllowHTTPIssuer { + return trace.BadParameter("oidc.issuer: must be https:// unless insecure_allow_http_issuer is set") + } + } else if parsed.Scheme != "https" { + return trace.BadParameter("oidc.issuer: invalid URL scheme, must be https://") + } default: return trace.BadParameter( "type: must be one of (%s), got %q", - utils.JoinStrings(JoinMethods, ", "), + utils.JoinStrings([]string{ + string(KubernetesJoinTypeInCluster), + string(KubernetesJoinTypeStaticJWKS), + string(KubernetesJoinTypeOIDC), + }, ", "), a.Type, ) } @@ -999,6 +1050,9 @@ func (a *ProvisionTokenSpecV2Oracle) checkAndSetDefaults() error { i, ) } + if len(rule.Instances) > 100 { + return trace.BadParameter("allow[%d]: maximum 100 instances may be set (found %d)", i, len(rule.Instances)) + } } return nil } @@ -1032,22 +1086,16 @@ func (a *ProvisionTokenSpecV2AzureDevops) checkAndSetDefaults() error { } func (a *ProvisionTokenSpecV2BoundKeypair) checkAndSetDefaults() error { - // Note: don't attempt to initialize onboarding - at least for now - as it - // has required keys. This behavior may be relaxed when we add - // server-generated joining secrets. 
if a.Onboarding == nil { - return trace.BadParameter("spec.bound_keypair.onboarding is required") - } - - if a.Onboarding.RegistrationSecret == "" && a.Onboarding.InitialPublicKey == "" { - return trace.BadParameter("at least one of [initial_join_secret, " + - "initial_public_key] is required in spec.bound_keypair.onboarding") + a.Onboarding = &ProvisionTokenSpecV2BoundKeypair_OnboardingSpec{} } if a.Recovery == nil { a.Recovery = &ProvisionTokenSpecV2BoundKeypair_RecoverySpec{} } + // Limit must be >= 1 for the token to be useful. If zero, assume it's unset + // and provide a sane default. if a.Recovery.Limit == 0 { a.Recovery.Limit = 1 } @@ -1057,3 +1105,21 @@ func (a *ProvisionTokenSpecV2BoundKeypair) checkAndSetDefaults() error { return nil } + +func (a *ProvisionTokenSpecV2Env0) checkAndSetDefaults() error { + if len(a.Allow) == 0 { + return trace.BadParameter("the %q join method requires at least one token allow rule", JoinMethodEnv0) + } + + for i, allowRule := range a.Allow { + if allowRule.OrganizationID == "" { + return trace.BadParameter("allow[%d]: organization_id must be set", i) + } + + if allowRule.ProjectID == "" && allowRule.ProjectName == "" { + return trace.BadParameter("allow[%d]: at least one of ['project_id', 'project_name'] must be set", i) + } + } + + return nil +} diff --git a/api/types/provisioning_test.go b/api/types/provisioning_test.go index 8fcfe17e7e48e..f632fd689a828 100644 --- a/api/types/provisioning_test.go +++ b/api/types/provisioning_test.go @@ -17,6 +17,7 @@ limitations under the License. 
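The env0 allow-rule checks above can be sketched as a standalone validator. The `Env0Rule` struct here is a hypothetical stand-in for `ProvisionTokenSpecV2Env0_Rule`, holding only the fields the validation inspects; the rules mirror `checkAndSetDefaults`: at least one allow rule, `organization_id` always set, and at least one of `project_id`/`project_name` on every rule.

```go
package main

import (
	"errors"
	"fmt"
)

// Env0Rule is a hypothetical stand-in for ProvisionTokenSpecV2Env0_Rule,
// holding only the fields the validation below inspects.
type Env0Rule struct {
	OrganizationID string
	ProjectID      string
	ProjectName    string
}

// validateEnv0Rules mirrors the checks in checkAndSetDefaults: at least one
// allow rule, organization_id always present, and at least one of
// project_id/project_name set on every rule.
func validateEnv0Rules(rules []Env0Rule) error {
	if len(rules) == 0 {
		return errors.New(`the "env0" join method requires at least one token allow rule`)
	}
	for i, r := range rules {
		if r.OrganizationID == "" {
			return fmt.Errorf("allow[%d]: organization_id must be set", i)
		}
		if r.ProjectID == "" && r.ProjectName == "" {
			return fmt.Errorf("allow[%d]: at least one of ['project_id', 'project_name'] must be set", i)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateEnv0Rules([]Env0Rule{{OrganizationID: "org", ProjectName: "proj"}})) // <nil>
	fmt.Println(validateEnv0Rules([]Env0Rule{{ProjectName: "proj"}}))                        // allow[0]: organization_id must be set
}
```

Note that, unlike the bound-keypair method (whose nil spec is now defaulted in place), env0 fails fast when no allow rules are present, since an unrestricted env0 token would match any deployment.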
package types import ( + "fmt" "testing" "time" @@ -579,6 +580,79 @@ func TestProvisionTokenV2_CheckAndSetDefaults(t *testing.T) { }, wantErr: true, }, + { + desc: "kubernetes: oidc must have valid issuer", + token: &ProvisionTokenV2{ + Metadata: Metadata{ + Name: "test", + }, + Spec: ProvisionTokenSpecV2{ + Roles: []SystemRole{RoleNode}, + JoinMethod: JoinMethodKubernetes, + Kubernetes: &ProvisionTokenSpecV2Kubernetes{ + Type: KubernetesJoinTypeOIDC, + Allow: []*ProvisionTokenSpecV2Kubernetes_Rule{ + { + ServiceAccount: "namespace:my-service-account", + }, + }, + OIDC: &ProvisionTokenSpecV2Kubernetes_OIDCConfig{ + Issuer: "https://example.com", + }, + }, + }, + }, + wantErr: false, + }, + { + desc: "kubernetes: http issuers not allowed without override", + token: &ProvisionTokenV2{ + Metadata: Metadata{ + Name: "test", + }, + Spec: ProvisionTokenSpecV2{ + Roles: []SystemRole{RoleNode}, + JoinMethod: JoinMethodKubernetes, + Kubernetes: &ProvisionTokenSpecV2Kubernetes{ + Type: KubernetesJoinTypeOIDC, + Allow: []*ProvisionTokenSpecV2Kubernetes_Rule{ + { + ServiceAccount: "namespace:my-service-account", + }, + }, + OIDC: &ProvisionTokenSpecV2Kubernetes_OIDCConfig{ + Issuer: "http://example.com", + }, + }, + }, + }, + wantErr: true, + }, + { + desc: "kubernetes: http issuers are allowed with override", + token: &ProvisionTokenV2{ + Metadata: Metadata{ + Name: "test", + }, + Spec: ProvisionTokenSpecV2{ + Roles: []SystemRole{RoleNode}, + JoinMethod: JoinMethodKubernetes, + Kubernetes: &ProvisionTokenSpecV2Kubernetes{ + Type: KubernetesJoinTypeOIDC, + Allow: []*ProvisionTokenSpecV2Kubernetes_Rule{ + { + ServiceAccount: "namespace:my-service-account", + }, + }, + OIDC: &ProvisionTokenSpecV2Kubernetes_OIDCConfig{ + Issuer: "http://example.com", + InsecureAllowHTTPIssuer: true, + }, + }, + }, + }, + wantErr: false, + }, { desc: "gitlab empty allow rules", token: &ProvisionTokenV2{ @@ -1408,6 +1482,8 @@ func TestProvisionTokenV2_CheckAndSetDefaults(t *testing.T) { }, }, { + 
// note: missing onboarding config is allowed; we'll generate some + // fields at creation/upsert time. desc: "bound keypair missing onboarding config", token: &ProvisionTokenV2{ Metadata: Metadata{ @@ -1419,6 +1495,177 @@ func TestProvisionTokenV2_CheckAndSetDefaults(t *testing.T) { BoundKeypair: &ProvisionTokenSpecV2BoundKeypair{}, }, }, + wantErr: false, + }, + { + desc: "env0 success", + token: &ProvisionTokenV2{ + Metadata: Metadata{ + Name: "test", + }, + Spec: ProvisionTokenSpecV2{ + Roles: []SystemRole{RoleNode}, + JoinMethod: JoinMethodEnv0, + Env0: &ProvisionTokenSpecV2Env0{ + Allow: []*ProvisionTokenSpecV2Env0_Rule{ + { + OrganizationID: "organization-id", + ProjectID: "project-id", + ProjectName: "project-name", + TemplateID: "template-id", + TemplateName: "template-name", + EnvironmentID: "environment-id", + EnvironmentName: "environment-name", + WorkspaceName: "workspace-name", + DeploymentType: "deployment-type", + DeployerEmail: "deployer-email", + Env0Tag: "custom-tag", + }, + }, + }, + }, + }, + expected: &ProvisionTokenV2{ + Kind: "token", + Version: "v2", + Metadata: Metadata{ + Name: "test", + Namespace: "default", + }, + Spec: ProvisionTokenSpecV2{ + Roles: []SystemRole{RoleNode}, + JoinMethod: JoinMethodEnv0, + Env0: &ProvisionTokenSpecV2Env0{ + Allow: []*ProvisionTokenSpecV2Env0_Rule{ + { + OrganizationID: "organization-id", + ProjectID: "project-id", + ProjectName: "project-name", + TemplateID: "template-id", + TemplateName: "template-name", + EnvironmentID: "environment-id", + EnvironmentName: "environment-name", + WorkspaceName: "workspace-name", + DeploymentType: "deployment-type", + DeployerEmail: "deployer-email", + Env0Tag: "custom-tag", + }, + }, + }, + }, + }, + wantErr: false, + }, + { + desc: "env0 multiple rules - success", + token: &ProvisionTokenV2{ + Metadata: Metadata{ + Name: "test", + }, + Spec: ProvisionTokenSpecV2{ + Roles: []SystemRole{RoleNode}, + JoinMethod: JoinMethodEnv0, + Env0: &ProvisionTokenSpecV2Env0{ + Allow: 
[]*ProvisionTokenSpecV2Env0_Rule{ + { + OrganizationID: "organization-id", + ProjectID: "project-id", + }, + { + OrganizationID: "organization-id", + ProjectName: "project-name", + }, + }, + }, + }, + }, + expected: &ProvisionTokenV2{ + Kind: "token", + Version: "v2", + Metadata: Metadata{ + Name: "test", + Namespace: "default", + }, + Spec: ProvisionTokenSpecV2{ + Roles: []SystemRole{RoleNode}, + JoinMethod: JoinMethodEnv0, + Env0: &ProvisionTokenSpecV2Env0{ + Allow: []*ProvisionTokenSpecV2Env0_Rule{ + { + OrganizationID: "organization-id", + ProjectID: "project-id", + }, + { + OrganizationID: "organization-id", + ProjectName: "project-name", + }, + }, + }, + }, + }, + wantErr: false, + }, + { + desc: "env0 missing organization", + token: &ProvisionTokenV2{ + Metadata: Metadata{ + Name: "test", + }, + Spec: ProvisionTokenSpecV2{ + Roles: []SystemRole{RoleNode}, + JoinMethod: JoinMethodEnv0, + Env0: &ProvisionTokenSpecV2Env0{ + Allow: []*ProvisionTokenSpecV2Env0_Rule{ + { + ProjectName: "test", + TemplateName: "test", + }, + }, + }, + }, + }, + wantErr: true, + }, + { + desc: "env0 missing project", + token: &ProvisionTokenV2{ + Metadata: Metadata{ + Name: "test", + }, + Spec: ProvisionTokenSpecV2{ + Roles: []SystemRole{RoleNode}, + JoinMethod: JoinMethodEnv0, + Env0: &ProvisionTokenSpecV2Env0{ + Allow: []*ProvisionTokenSpecV2Env0_Rule{ + { + OrganizationID: "test", + TemplateName: "test", + }, + }, + }, + }, + }, + wantErr: true, + }, + { + desc: "oracle too many instance IDs", + token: &ProvisionTokenV2{ + Metadata: Metadata{ + Name: "test", + }, + Spec: ProvisionTokenSpecV2{ + Roles: []SystemRole{RoleNode}, + JoinMethod: JoinMethodOracle, + Oracle: &ProvisionTokenSpecV2Oracle{ + Allow: []*ProvisionTokenSpecV2Oracle_Rule{ + { + Tenancy: "ocid.tenancy.oc1..mytentant", + Instances: genOCIDs(101), + }, + }, + }, + }, + }, wantErr: true, }, } @@ -1442,6 +1689,14 @@ func TestProvisionTokenV2_CheckAndSetDefaults(t *testing.T) { } } +func genOCIDs(count int) []string { 
+ out := make([]string, count) + for i := range count { + out[i] = fmt.Sprintf("ocid.instance.oc1.region.%d", i) + } + return out +} + func TestProvisionTokenV2_GetSafeName(t *testing.T) { t.Run("token join method (short)", func(t *testing.T) { tok, err := NewProvisionToken("1234", []SystemRole{RoleNode}, time.Now()) diff --git a/api/types/resource.go b/api/types/resource.go index 544f7b1f92b8c..ea7786c0b0775 100644 --- a/api/types/resource.go +++ b/api/types/resource.go @@ -92,6 +92,11 @@ func ResourceNames[R Resource, S ~[]R](s S) iter.Seq[string] { return iterutils.Map(GetName, slices.Values(s)) } +// CompareResourceByNames compares resources by their names. +func CompareResourceByNames[R Resource](a, b R) int { + return strings.Compare(a.GetName(), b.GetName()) +} + // ResourceDetails includes details about the resource type ResourceDetails struct { Hostname string @@ -534,9 +539,19 @@ func MatchKinds(resource ResourceWithLabels, kinds []string) bool { if len(kinds) == 0 { return true } + resourceKind := resource.GetKind() switch resourceKind { - case KindApp, KindSAMLIdPServiceProvider, KindIdentityCenterAccount: + case KindApp: + if slices.Contains(kinds, KindApp) { + return true + } + + // MCP server resources are subkinds of app resources, but it is + // possible for certain APIs like ListUnifiedResources to use KindMCP as + // a kind filter. 
+ return resource.GetSubKind() == SubKindMCP && slices.Contains(kinds, KindMCP) + case KindSAMLIdPServiceProvider, KindIdentityCenterAccount: return slices.Contains(kinds, KindApp) default: return slices.Contains(kinds, resourceKind) @@ -746,7 +761,7 @@ func GetRevision(v any) (string, error) { case Resource: return r.GetRevision(), nil case ResourceMetadata: - return r.GetMetadata().Revision, nil + return r.GetMetadata().GetRevision(), nil } return "", trace.BadParameter("unable to determine revision from resource of type %T", v) } diff --git a/api/types/resource_ids.go b/api/types/resource_ids.go index 3c6289351947d..0ae351da078f0 100644 --- a/api/types/resource_ids.go +++ b/api/types/resource_ids.go @@ -32,15 +32,29 @@ func (id *ResourceID) CheckAndSetDefaults() error { if len(id.Kind) == 0 { return trace.BadParameter("ResourceID must include Kind") } - if !slices.Contains(RequestableResourceKinds, id.Kind) { - return trace.BadParameter("Resource kind %q is invalid or unsupported", id.Kind) - } if len(id.Name) == 0 { return trace.BadParameter("ResourceID must include Name") } + // TODO(@creack): DELETE IN v20.0.0. Here to maintain backwards compatibility with older clients. + if id.Kind != KindKubeNamespace && slices.Contains(KubernetesResourcesKinds, id.Kind) { + apiGroup := KubernetesResourcesV7KindGroups[id.Kind] + if slices.Contains(KubernetesClusterWideResourceKinds, id.Kind) { + id.Kind = AccessRequestPrefixKindKubeClusterWide + KubernetesResourcesKindsPlurals[id.Kind] + } else { + id.Kind = AccessRequestPrefixKindKubeNamespaced + KubernetesResourcesKindsPlurals[id.Kind] + } + if apiGroup != "" { + id.Kind += "." 
+ apiGroup + } + } + + if id.Kind != KindKubeNamespace && !slices.Contains(RequestableResourceKinds, id.Kind) && !strings.HasPrefix(id.Kind, AccessRequestPrefixKindKube) { + return trace.BadParameter("Resource kind %q is invalid or unsupported", id.Kind) + } + + switch { - case slices.Contains(KubernetesResourcesKinds, id.Kind): + case id.Kind == KindKubeNamespace || strings.HasPrefix(id.Kind, AccessRequestPrefixKindKube): return trace.Wrap(id.validateK8sSubResource()) case id.SubResourceName != "": return trace.BadParameter("resource kind %q doesn't allow sub resources", id.Kind) @@ -52,17 +66,17 @@ func (id *ResourceID) validateK8sSubResource() error { if id.SubResourceName == "" { return trace.BadParameter("resource of kind %q must include a subresource name", id.Kind) } - isResourceNamespaceScoped := slices.Contains(KubernetesClusterWideResourceKinds, id.Kind) + isResourceClusterwide := id.Kind == KindKubeNamespace || slices.Contains(KubernetesClusterWideResourceKinds, id.Kind) || strings.HasPrefix(id.Kind, AccessRequestPrefixKindKubeClusterWide) switch split := strings.Split(id.SubResourceName, "/"); { - case isResourceNamespaceScoped && len(split) != 1: + case isResourceClusterwide && len(split) != 1: return trace.BadParameter("subresource %q must follow the following format: <name>", id.SubResourceName) - case isResourceNamespaceScoped && split[0] == "": + case isResourceClusterwide && split[0] == "": return trace.BadParameter("subresource %q must include a non-empty name: <name>", id.SubResourceName) - case !isResourceNamespaceScoped && len(split) != 2: + case !isResourceClusterwide && len(split) != 2: return trace.BadParameter("subresource %q must follow the following format: <namespace>/<name>", id.SubResourceName) - case !isResourceNamespaceScoped && split[0] == "": + case !isResourceClusterwide && split[0] == "": return trace.BadParameter("subresource %q must include a non-empty namespace: <namespace>/<name>", id.SubResourceName) - case !isResourceNamespaceScoped && split[1] == "": + case 
!isResourceClusterwide && split[1] == "": return trace.BadParameter("subresource %q must include a non-empty name: <namespace>/<name>", id.SubResourceName) } @@ -95,9 +109,10 @@ func ResourceIDFromString(raw string) (ResourceID, error) { Kind: parts[1], Name: parts[2], } + switch { - case slices.Contains(KubernetesResourcesKinds, resourceID.Kind): - isResourceNamespaceScoped := slices.Contains(KubernetesClusterWideResourceKinds, resourceID.Kind) + case slices.Contains(KubernetesResourcesKinds, resourceID.Kind) || strings.HasPrefix(resourceID.Kind, AccessRequestPrefixKindKube) || resourceID.Kind == KindKubeNamespace: + isResourceClusterWide := resourceID.Kind == KindKubeNamespace || slices.Contains(KubernetesClusterWideResourceKinds, resourceID.Kind) || strings.HasPrefix(resourceID.Kind, AccessRequestPrefixKindKubeClusterWide) // Kubernetes forbids slashes "/" in Namespaces and Pod names, so it's safe to // explode the resourceID.Name and extract the last two entries as namespace // and name. @@ -107,10 +122,10 @@ func ResourceIDFromString(raw string) (ResourceID, error) { // will fail because, for kind=pod, it's mandatory to present a non-empty // namespace and name. 
splits := strings.Split(resourceID.Name, "/") - if !isResourceNamespaceScoped && len(splits) >= 3 { + if !isResourceClusterWide && len(splits) >= 3 { resourceID.Name = strings.Join(splits[:len(splits)-2], "/") resourceID.SubResourceName = strings.Join(splits[len(splits)-2:], "/") - } else if isResourceNamespaceScoped && len(splits) >= 2 { + } else if isResourceClusterWide && len(splits) >= 2 { resourceID.Name = strings.Join(splits[:len(splits)-1], "/") resourceID.SubResourceName = strings.Join(splits[len(splits)-1:], "/") } diff --git a/api/types/resource_ids_test.go b/api/types/resource_ids_test.go index 8b95e2df1c8d4..47a474b5cab95 100644 --- a/api/types/resource_ids_test.go +++ b/api/types/resource_ids_test.go @@ -107,317 +107,327 @@ func TestResourceIDs(t *testing.T) { desc: "pod resource name in cluster with slash", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubePod, + Kind: "kube:ns:pods", Name: "cluster/1", SubResourceName: "namespace/pod*", }}, - expected: `["/one/pod/cluster/1/namespace/pod*"]`, + expected: `["/one/kube:ns:pods/cluster/1/namespace/pod*"]`, }, { desc: "pod resource name", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubePod, + Kind: "kube:ns:pods", Name: "cluster", SubResourceName: "namespace/pod*", }}, - expected: `["/one/pod/cluster/namespace/pod*"]`, + expected: `["/one/kube:ns:pods/cluster/namespace/pod*"]`, }, { desc: "pod resource name with missing namespace", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubePod, + Kind: "kube:ns:pods", Name: "cluster", SubResourceName: "/pod*", }}, - expected: `["/one/pod/cluster//pod*"]`, + expected: `["/one/kube:ns:pods/cluster//pod*"]`, expectParseError: true, }, { desc: "pod resource name with missing namespace and pod name", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubePod, + Kind: "kube:ns:pods", Name: "cluster", }}, - expected: `["/one/pod/cluster"]`, + expected: `["/one/kube:ns:pods/cluster"]`, expectParseError: true, }, { desc: "pod resource name in cluster with 
slash", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubePod, + Kind: "kube:ns:pods", Name: "cluster", SubResourceName: "namespace/pod*", }}, - expected: `["/one/pod/cluster/namespace/pod*"]`, + expected: `["/one/kube:ns:pods/cluster/namespace/pod*"]`, }, { desc: "secret resource name with missing namespace", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubeSecret, + Kind: "kube:ns:secrets", Name: "cluster", SubResourceName: "/secret*", }}, - expected: `["/one/secret/cluster//secret*"]`, + expected: `["/one/kube:ns:secrets/cluster//secret*"]`, expectParseError: true, }, { desc: "secret resource name with missing namespace and pod name", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubeSecret, + Kind: "kube:ns:secrets", Name: "cluster", }}, - expected: `["/one/secret/cluster"]`, + expected: `["/one/kube:ns:secrets/cluster"]`, expectParseError: true, }, { desc: "secret resource name in cluster with slash", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubeSecret, + Kind: "kube:ns:secrets", Name: "cluster", SubResourceName: "namespace/secret*", }}, - expected: `["/one/secret/cluster/namespace/secret*"]`, + expected: `["/one/kube:ns:secrets/cluster/namespace/secret*"]`, }, { desc: "configmap resource name with missing namespace", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubeConfigmap, + Kind: "kube:ns:configmaps", Name: "cluster", SubResourceName: "/configmap*", }}, - expected: `["/one/configmap/cluster//configmap*"]`, + expected: `["/one/kube:ns:configmaps/cluster//configmap*"]`, expectParseError: true, }, { desc: "configmap resource name with missing namespace and pod name", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubeConfigmap, + Kind: "kube:ns:configmaps", Name: "cluster", }}, - expected: `["/one/configmap/cluster"]`, + expected: `["/one/kube:ns:configmaps/cluster"]`, expectParseError: true, }, { desc: "configmap resource name in cluster with slash", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubeConfigmap, + 
Kind: "kube:ns:configmaps", Name: "cluster", SubResourceName: "namespace/configmap*", }}, - expected: `["/one/configmap/cluster/namespace/configmap*"]`, + expected: `["/one/kube:ns:configmaps/cluster/namespace/configmap*"]`, }, { desc: "service resource name with missing namespace", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubeService, + Kind: "kube:ns:services", Name: "cluster", SubResourceName: "/service*", }}, - expected: `["/one/service/cluster//service*"]`, + expected: `["/one/kube:ns:services/cluster//service*"]`, expectParseError: true, }, { desc: "service resource name with missing namespace and pod name", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubeService, + Kind: "kube:ns:services", Name: "cluster", }}, - expected: `["/one/service/cluster"]`, + expected: `["/one/kube:ns:services/cluster"]`, expectParseError: true, }, { desc: "service resource name in cluster with slash", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubeService, + Kind: "kube:ns:services", Name: "cluster", SubResourceName: "namespace/service*", }}, - expected: `["/one/service/cluster/namespace/service*"]`, + expected: `["/one/kube:ns:services/cluster/namespace/service*"]`, }, { desc: "service_account resource name with missing namespace", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubeServiceAccount, + Kind: "kube:ns:serviceaccounts", Name: "cluster", SubResourceName: "/service_account*", }}, - expected: `["/one/serviceaccount/cluster//service_account*"]`, + expected: `["/one/kube:ns:serviceaccounts/cluster//service_account*"]`, expectParseError: true, }, { desc: "service_account resource name with missing namespace and pod name", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubeServiceAccount, + Kind: "kube:ns:serviceaccounts", Name: "cluster", }}, - expected: `["/one/serviceaccount/cluster"]`, + expected: `["/one/kube:ns:serviceaccounts/cluster"]`, expectParseError: true, }, { desc: "service_account resource name in cluster with slash", in: 
[]ResourceID{{ ClusterName: "one", - Kind: KindKubeServiceAccount, + Kind: "kube:ns:serviceaccounts", Name: "cluster", SubResourceName: "namespace/service_account*", }}, - expected: `["/one/serviceaccount/cluster/namespace/service_account*"]`, + expected: `["/one/kube:ns:serviceaccounts/cluster/namespace/service_account*"]`, }, { desc: "persistent_volume_claim resource name with missing namespace", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubePersistentVolumeClaim, + Kind: "kube:ns:persistentvolumeclaims", Name: "cluster", SubResourceName: "/persistent_volume_claim*", }}, - expected: `["/one/persistentvolumeclaim/cluster//persistent_volume_claim*"]`, + expected: `["/one/kube:ns:persistentvolumeclaims/cluster//persistent_volume_claim*"]`, expectParseError: true, }, { desc: "persistent_volume_claim resource name with missing namespace and pod name", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubePersistentVolumeClaim, + Kind: "kube:ns:persistentvolumeclaims", Name: "cluster", }}, - expected: `["/one/persistentvolumeclaim/cluster"]`, + expected: `["/one/kube:ns:persistentvolumeclaims/cluster"]`, expectParseError: true, }, { desc: "namespace resource name with missing namespace and pod name", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubeNamespace, + Kind: "kube:cw:namespaces", Name: "cluster", }}, - expected: `["/one/namespace/cluster"]`, + expected: `["/one/kube:cw:namespaces/cluster"]`, expectParseError: true, }, { desc: "namespace resource name in cluster with slash", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubeNamespace, + Kind: "kube:cw:namespaces", Name: "cluster", SubResourceName: "namespace*", }}, - expected: `["/one/namespace/cluster/namespace*"]`, + expected: `["/one/kube:cw:namespaces/cluster/namespace*"]`, }, { desc: "kube_node resource name with missing namespace and pod name", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubeNode, + Kind: "kube:cw:nodes", Name: "cluster", }}, - expected: 
`["/one/kube_node/cluster"]`, + expected: `["/one/kube:cw:nodes/cluster"]`, expectParseError: true, }, { desc: "kube_node resource name in cluster with slash", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubeNode, + Kind: "kube:cw:nodes", Name: "cluster", SubResourceName: "kube_node*", }}, - expected: `["/one/kube_node/cluster/kube_node*"]`, + expected: `["/one/kube:cw:nodes/cluster/kube_node*"]`, }, { desc: "persistent_volume resource name with missing namespace and pod name", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubePersistentVolume, + Kind: "kube:cw:persistentvolumes", Name: "cluster", }}, - expected: `["/one/persistentvolume/cluster"]`, + expected: `["/one/kube:cw:persistentvolumes/cluster"]`, expectParseError: true, }, { desc: "persistent_volume resource name in cluster with slash", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubePersistentVolume, + Kind: "kube:cw:persistentvolumes", Name: "cluster", SubResourceName: "persistent_volume*", }}, - expected: `["/one/persistentvolume/cluster/persistent_volume*"]`, + expected: `["/one/kube:cw:persistentvolumes/cluster/persistent_volume*"]`, }, { desc: "cluster_role resource name with missing namespace and pod name", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubeClusterRole, + Kind: "kube:cw:clusterroles.rbac.authorization.k8s.io", Name: "cluster", }}, - expected: `["/one/clusterrole/cluster"]`, + expected: `["/one/kube:cw:clusterroles.rbac.authorization.k8s.io/cluster"]`, expectParseError: true, }, { desc: "cluster_role resource name in cluster with slash", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubeClusterRole, + Kind: "kube:cw:clusterroles.rbac.authorization.k8s.io", Name: "cluster", SubResourceName: "cluster_role*", }}, - expected: `["/one/clusterrole/cluster/cluster_role*"]`, + expected: `["/one/kube:cw:clusterroles.rbac.authorization.k8s.io/cluster/cluster_role*"]`, }, { desc: "cluster_role_binding resource name with missing namespace and pod name", in: 
[]ResourceID{{ ClusterName: "one", - Kind: KindKubeClusterRoleBinding, + Kind: "kube:cw:clusterrolebindings.rbac.authorization.k8s.io", Name: "cluster", }}, - expected: `["/one/clusterrolebinding/cluster"]`, + expected: `["/one/kube:cw:clusterrolebindings.rbac.authorization.k8s.io/cluster"]`, expectParseError: true, }, { desc: "cluster_role_binding resource name in cluster with slash", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubeClusterRoleBinding, + Kind: "kube:cw:clusterrolebindings.rbac.authorization.k8s.io", Name: "cluster", SubResourceName: "cluster_role_binding*", }}, - expected: `["/one/clusterrolebinding/cluster/cluster_role_binding*"]`, + expected: `["/one/kube:cw:clusterrolebindings.rbac.authorization.k8s.io/cluster/cluster_role_binding*"]`, }, { desc: "certificate_signing_request resource name with missing namespace and pod name", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubeCertificateSigningRequest, + Kind: "kube:cw:certificatesigningrequests.certificates.k8s.io", Name: "cluster", }}, - expected: `["/one/certificatesigningrequest/cluster"]`, + expected: `["/one/kube:cw:certificatesigningrequests.certificates.k8s.io/cluster"]`, expectParseError: true, }, { desc: "certificate_signing_request resource name in cluster with slash", in: []ResourceID{{ ClusterName: "one", - Kind: KindKubeCertificateSigningRequest, + Kind: "kube:cw:certificatesigningrequests.certificates.k8s.io", Name: "cluster", SubResourceName: "certificate_signing_request*", }}, - expected: `["/one/certificatesigningrequest/cluster/certificate_signing_request*"]`, + expected: `["/one/kube:cw:certificatesigningrequests.certificates.k8s.io/cluster/certificate_signing_request*"]`, + }, + { + desc: "full kube namespace access", + in: []ResourceID{{ + ClusterName: "one", + Kind: "kube:ns:*.*", // kind: *, api group: *. + Name: "cluster", + SubResourceName: "default/*", // namespace: default, resource name: *. 
+ }}, + expected: `["/one/kube:ns:*.*/cluster/default/*"]`, }, } for _, tc := range testCases { @@ -452,3 +462,341 @@ func TestResourceIDs(t *testing.T) { }) } } + +// TODO(@creack): DELETE IN v20.0.0 when we no longer support legacy kube kinds. +func TestLegacyKubeResourceIDs(t *testing.T) { + testCases := []struct { + desc string + expect []ResourceID + in string + expectParseError bool + }{ + { + desc: "pod resource name in cluster with slash", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:ns:pods", + Name: "cluster/1", + SubResourceName: "namespace/pod*", + }}, + in: `["/one/pod/cluster/1/namespace/pod*"]`, + }, + { + desc: "pod resource name", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:ns:pods", + Name: "cluster", + SubResourceName: "namespace/pod*", + }}, + in: `["/one/pod/cluster/namespace/pod*"]`, + }, + { + desc: "pod resource name with missing namespace", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:ns:pods", + Name: "cluster", + SubResourceName: "/pod*", + }}, + in: `["/one/pod/cluster//pod*"]`, + expectParseError: true, + }, + { + desc: "pod resource name with missing namespace and pod name", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:ns:pods", + Name: "cluster", + }}, + in: `["/one/pod/cluster"]`, + expectParseError: true, + }, + { + desc: "pod resource name in cluster with slash", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:ns:pods", + Name: "cluster", + SubResourceName: "namespace/pod*", + }}, + in: `["/one/pod/cluster/namespace/pod*"]`, + }, + { + desc: "secret resource name with missing namespace", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:ns:secrets", + Name: "cluster", + SubResourceName: "/secret*", + }}, + in: `["/one/secret/cluster//secret*"]`, + expectParseError: true, + }, + { + desc: "secret resource name with missing namespace and pod name", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:ns:secrets", + Name: "cluster", + }}, 
+ in: `["/one/secret/cluster"]`, + expectParseError: true, + }, + { + desc: "secret resource name in cluster with slash", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:ns:secrets", + Name: "cluster", + SubResourceName: "namespace/secret*", + }}, + in: `["/one/secret/cluster/namespace/secret*"]`, + }, + { + desc: "configmap resource name with missing namespace", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:ns:configmaps", + Name: "cluster", + SubResourceName: "/configmap*", + }}, + in: `["/one/configmap/cluster//configmap*"]`, + expectParseError: true, + }, + { + desc: "configmap resource name with missing namespace and pod name", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:ns:configmaps", + Name: "cluster", + }}, + in: `["/one/configmap/cluster"]`, + expectParseError: true, + }, + { + desc: "configmap resource name in cluster with slash", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:ns:configmaps", + Name: "cluster", + SubResourceName: "namespace/configmap*", + }}, + in: `["/one/configmap/cluster/namespace/configmap*"]`, + }, + { + desc: "service resource name with missing namespace", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:ns:services", + Name: "cluster", + SubResourceName: "/service*", + }}, + in: `["/one/service/cluster//service*"]`, + expectParseError: true, + }, + { + desc: "service resource name with missing namespace and pod name", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:ns:services", + Name: "cluster", + }}, + in: `["/one/service/cluster"]`, + expectParseError: true, + }, + { + desc: "service resource name in cluster with slash", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:ns:services", + Name: "cluster", + SubResourceName: "namespace/service*", + }}, + in: `["/one/service/cluster/namespace/service*"]`, + }, + { + desc: "service_account resource name with missing namespace", + expect: []ResourceID{{ + ClusterName: "one", + Kind: 
"kube:ns:serviceaccounts", + Name: "cluster", + SubResourceName: "/service_account*", + }}, + in: `["/one/serviceaccount/cluster//service_account*"]`, + expectParseError: true, + }, + { + desc: "service_account resource name with missing namespace and pod name", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:ns:serviceaccounts", + Name: "cluster", + }}, + in: `["/one/serviceaccount/cluster"]`, + expectParseError: true, + }, + { + desc: "service_account resource name in cluster with slash", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:ns:serviceaccounts", + Name: "cluster", + SubResourceName: "namespace/service_account*", + }}, + in: `["/one/serviceaccount/cluster/namespace/service_account*"]`, + }, + { + desc: "persistent_volume_claim resource name with missing namespace", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:ns:persistentvolumeclaims", + Name: "cluster", + SubResourceName: "/persistent_volume_claim*", + }}, + in: `["/one/persistentvolumeclaim/cluster//persistent_volume_claim*"]`, + expectParseError: true, + }, + { + desc: "persistent_volume_claim resource name with missing namespace and pod name", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:ns:persistentvolumeclaims", + Name: "cluster", + }}, + in: `["/one/persistentvolumeclaim/cluster"]`, + expectParseError: true, + }, + { + desc: "namespace resource name with missing namespace and pod name", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "namespaces", + Name: "cluster", + }}, + in: `["/one/namespace/cluster"]`, + expectParseError: true, + }, + { + desc: "namespace resource name in cluster with slash", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "namespace", + Name: "cluster", + SubResourceName: "namespace*", + }}, + in: `["/one/namespace/cluster/namespace*"]`, + }, + { + desc: "kube_node resource name with missing namespace and pod name", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:cw:nodes", + Name: 
"cluster", + }}, + in: `["/one/kube_node/cluster"]`, + expectParseError: true, + }, + { + desc: "kube_node resource name in cluster with slash", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:cw:nodes", + Name: "cluster", + SubResourceName: "kube_node*", + }}, + in: `["/one/kube_node/cluster/kube_node*"]`, + }, + { + desc: "persistent_volume resource name with missing namespace and pod name", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:cw:persistentvolumes", + Name: "cluster", + }}, + in: `["/one/persistentvolume/cluster"]`, + expectParseError: true, + }, + { + desc: "persistent_volume resource name in cluster with slash", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:cw:persistentvolumes", + Name: "cluster", + SubResourceName: "persistent_volume*", + }}, + in: `["/one/persistentvolume/cluster/persistent_volume*"]`, + }, + { + desc: "cluster_role resource name with missing namespace and pod name", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:cw:clusterroles.rbac.authorization.k8s.io", + Name: "cluster", + }}, + in: `["/one/clusterrole/cluster"]`, + expectParseError: true, + }, + { + desc: "cluster_role resource name in cluster with slash", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:cw:clusterroles.rbac.authorization.k8s.io", + Name: "cluster", + SubResourceName: "cluster_role*", + }}, + in: `["/one/clusterrole/cluster/cluster_role*"]`, + }, + { + desc: "cluster_role_binding resource name with missing namespace and pod name", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:cw:clusterrolebindings.rbac.authorization.k8s.io", + Name: "cluster", + }}, + in: `["/one/clusterrolebinding/cluster"]`, + expectParseError: true, + }, + { + desc: "cluster_role_binding resource name in cluster with slash", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:cw:clusterrolebindings.rbac.authorization.k8s.io", + Name: "cluster", + SubResourceName: "cluster_role_binding*", + }}, + 
in: `["/one/clusterrolebinding/cluster/cluster_role_binding*"]`, + }, + { + desc: "certificate_signing_request resource name with missing namespace and pod name", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:cw:certificatesigningrequests.certificates.k8s.io", + Name: "cluster", + }}, + in: `["/one/certificatesigningrequest/cluster"]`, + expectParseError: true, + }, + { + desc: "certificate_signing_request resource name in cluster with slash", + expect: []ResourceID{{ + ClusterName: "one", + Kind: "kube:cw:certificatesigningrequests.certificates.k8s.io", + Name: "cluster", + SubResourceName: "certificate_signing_request*", + }}, + in: `["/one/certificatesigningrequest/cluster/certificate_signing_request*"]`, + }, + } + for _, tc := range testCases { + t.Run(tc.desc, func(t *testing.T) { + parsed, err := ResourceIDsFromString(tc.in) + if tc.expectParseError { + require.Error(t, err, "expected to get an error parsing resource IDs") + return + } + require.NoError(t, err) + require.Equal(t, tc.expect, parsed, "parsed resource IDs do not match the originals") + }) + } +} diff --git a/api/types/role.go b/api/types/role.go index c31d7d36892d9..476afc7a531f2 100644 --- a/api/types/role.go +++ b/api/types/role.go @@ -152,6 +152,8 @@ type Role interface { // SetKubeResources configures the Kubernetes Resources for the RoleConditionType. SetKubeResources(rct RoleConditionType, pods []KubernetesResource) + // GetRequestKubernetesResources returns the request Kubernetes resources. + GetRequestKubernetesResources(rct RoleConditionType) []RequestKubernetesResource // SetRequestKubernetesResources sets the request kubernetes resources. 
SetRequestKubernetesResources(rct RoleConditionType, resources []RequestKubernetesResource) @@ -298,6 +300,12 @@ type Role interface { // SetIdentityCenterAccountAssignments sets the allow or deny Account // Assignments for the role SetIdentityCenterAccountAssignments(RoleConditionType, []IdentityCenterAccountAssignment) + + // GetMCPPermissions returns the allow or deny MCP permissions. + GetMCPPermissions(RoleConditionType) *MCPPermissions + // SetMCPPermissions sets the allow or deny MCP permissions. + SetMCPPermissions(RoleConditionType, *MCPPermissions) + // Clone creates a copy of the role. Clone() Role } @@ -464,7 +472,12 @@ func (r *RoleV6) SetKubeGroups(rct RoleConditionType, groups []string) { // access to. func (r *RoleV6) GetKubeResources(rct RoleConditionType) []KubernetesResource { if rct == Allow { - return r.convertAllowKubernetesResourcesBetweenRoleVersions(r.Spec.Allow.KubernetesResources) + out := r.convertAllowKubernetesResourcesBetweenRoleVersions(r.Spec.Allow.KubernetesResources) + // We need to support `kubectl auth can-i` as we prompt the user to use this when they get an access denied error. + // Inject a selfsubjectaccessreviews resource to allow for it. It can still be explicitly denied by the role if + // set in the `deny` section. + out = append(out, KubernetesResourceSelfSubjectAccessReview) + return out } return r.convertKubernetesResourcesBetweenRoleVersions(r.Spec.Deny.KubernetesResources) } @@ -476,6 +489,8 @@ func (r *RoleV6) GetKubeResources(rct RoleConditionType) []KubernetesResource { // For roles v8, it returns the list as it is. // // For roles <=v7, it maps the legacy teleport Kinds to k8s plurals and sets the APIGroup to wildcard. +// +// Must be in sync with RoleV6.convertRequestKubernetesResourcesBetweenRoleVersions.
func (r *RoleV6) convertKubernetesResourcesBetweenRoleVersions(resources []KubernetesResource) []KubernetesResource { switch r.Version { case V8: @@ -490,7 +505,7 @@ func (r *RoleV6) convertKubernetesResourcesBetweenRoleVersions(resources []Kuber if r.Kind == KindKubeNamespace { r.Kind = Wildcard if r.Name == Wildcard { - r.Namespace = "^" + Wildcard + "$" + r.Namespace = "^.+$" } else { r.Namespace = r.Name } @@ -509,9 +524,11 @@ func (r *RoleV6) convertKubernetesResourcesBetweenRoleVersions(resources []Kuber r.Namespace = "" } if k, ok := KubernetesResourcesKindsPlurals[r.Kind]; ok { // Can be empty if the kind is a wildcard. + r.APIGroup = KubernetesResourcesV7KindGroups[r.Kind] r.Kind = k + } else { + r.APIGroup = Wildcard } - r.APIGroup = Wildcard v7resources[i] = r if r.Kind == Wildcard { // If we have a wildcard, inject the clusterwide resources. for _, elem := range KubernetesClusterWideResourceKinds { @@ -574,13 +591,16 @@ func (r *RoleV6) convertAllowKubernetesResourcesBetweenRoleVersions(resources [] v6resources := slices.Clone(resources) for i, r := range v6resources { if k, ok := KubernetesResourcesKindsPlurals[r.Kind]; ok { + r.APIGroup = KubernetesResourcesV7KindGroups[r.Kind] r.Kind = k + } else { + r.APIGroup = Wildcard } - r.APIGroup = Wildcard v6resources[i] = r } for _, resource := range KubernetesResourcesKinds { // Iterate over the list to have deterministic order. + group := KubernetesResourcesV7KindGroups[resource] resource = KubernetesResourcesKindsPlurals[resource] // Ignore Pod resources for older roles because Pods were already supported // so we don't need to keep backwards compatibility for them. 
@@ -589,7 +609,7 @@ func (r *RoleV6) convertAllowKubernetesResourcesBetweenRoleVersions([] if resource == "pods" || resource == "namespaces" { continue } - v6resources = append(v6resources, KubernetesResource{Kind: resource, Name: Wildcard, Namespace: Wildcard, Verbs: []string{Wildcard}, APIGroup: Wildcard}) + v6resources = append(v6resources, KubernetesResource{Kind: resource, Name: Wildcard, Namespace: Wildcard, Verbs: []string{Wildcard}, APIGroup: group}) } return v6resources } @@ -607,6 +627,20 @@ func (r *RoleV6) SetKubeResources(rct RoleConditionType, pods []KubernetesResour } } +// GetRequestKubernetesResources returns the upgraded request kubernetes resources. +func (r *RoleV6) GetRequestKubernetesResources(rct RoleConditionType) []RequestKubernetesResource { + if rct == Allow { + if r.Spec.Allow.Request == nil { + return nil + } + return r.convertRequestKubernetesResourcesBetweenRoleVersions(r.Spec.Allow.Request.KubernetesResources) + } + if r.Spec.Deny.Request == nil { + return nil + } + return r.convertRequestKubernetesResourcesBetweenRoleVersions(r.Spec.Deny.Request.KubernetesResources) +} + // SetRequestKubernetesResources sets the request kubernetes resources. func (r *RoleV6) SetRequestKubernetesResources(rct RoleConditionType, resources []RequestKubernetesResource) { roleConditions := &r.Spec.Allow @@ -650,6 +684,40 @@ func (r *RoleV6) GetAccessRequestConditions(rct RoleConditionType) AccessRequest return *cond } +// convertRequestKubernetesResourcesBetweenRoleVersions converts Access Request Kubernetes resources between role versions. +// +// This is required to keep compatibility between role versions to avoid breaking changes +// when using an older role version. +// +// For roles v8, it returns the list as it is. +// +// For roles <=v7, it maps the legacy teleport Kinds to k8s plurals and sets the APIGroup to the matching legacy group (wildcard when unknown). +// +// Must be in sync with RoleV6.convertKubernetesResourcesBetweenRoleVersions.
+func (r *RoleV6) convertRequestKubernetesResourcesBetweenRoleVersions(resources []RequestKubernetesResource) []RequestKubernetesResource { + if len(resources) == 0 { + return nil + } + switch r.Version { + case V8: + return resources + default: + v7resources := slices.Clone(resources) + for i, r := range v7resources { + if k, ok := KubernetesResourcesKindsPlurals[r.Kind]; ok { // Can be empty if the kind is a wildcard. + r.APIGroup = KubernetesResourcesV7KindGroups[r.Kind] + r.Kind = k + } else if r.Kind == KindKubeNamespace { + r.Kind = "namespaces" + } else { + r.APIGroup = Wildcard + } + v7resources[i] = r + } + return v7resources + } +} + // SetAccessRequestConditions sets allow/deny conditions for access requests. func (r *RoleV6) SetAccessRequestConditions(rct RoleConditionType, cond AccessRequestConditions) { if rct == Allow { @@ -1797,13 +1865,20 @@ func ProcessNamespace(namespace string) string { // WhereExpr is a tree like structure representing a `where` (sub-)expression. type WhereExpr struct { - Field string - Literal interface{} - And, Or WhereExpr2 - Not *WhereExpr - Equals, Contains WhereExpr2 + Field string + Literal interface{} + And, Or WhereExpr2 + Not *WhereExpr + Equals, Contains WhereExpr2 + ContainsAny, ContainsAll WhereExpr2 + CanView *WhereNoExpr + MapRef *WhereExpr2 } +// WhereNoExpr is an empty `where` expression used by +// functions without arguments like `can_view()`. +type WhereNoExpr struct{} + // WhereExpr2 is a pair of `where` (sub-)expressions. 
type WhereExpr2 struct { L, R *WhereExpr @@ -1832,6 +1907,18 @@ func (e WhereExpr) String() string { if e.Contains.L != nil && e.Contains.R != nil { return fmt.Sprintf("contains(%s, %s)", e.Contains.L, e.Contains.R) } + if e.ContainsAny.L != nil && e.ContainsAny.R != nil { + return fmt.Sprintf("contains_any(%s, %s)", e.ContainsAny.L, e.ContainsAny.R) + } + if e.ContainsAll.L != nil && e.ContainsAll.R != nil { + return fmt.Sprintf("contains_all(%s, %s)", e.ContainsAll.L, e.ContainsAll.R) + } + if e.CanView != nil { + return "can_view()" + } + if e.MapRef != nil && e.MapRef.L != nil && e.MapRef.R != nil { + return fmt.Sprintf("%s[%q]", e.MapRef.L, e.MapRef.R) + } return "" } @@ -1916,7 +2003,7 @@ func (r *RoleV6) GetRoleConditions(rct RoleConditionType) RoleConditions { return roleConditions } -// GetRoleConditions returns the role conditions for the role. +// GetRequestReasonMode returns the request reason mode for the role. func (r *RoleV6) GetRequestReasonMode(rct RoleConditionType) RequestReasonMode { roleConditions := r.GetRoleConditions(rct) if roleConditions.Request == nil || roleConditions.Request.Reason == nil { @@ -1981,12 +2068,12 @@ func validateKubeResources(roleVersion string, kubeResources []KubernetesResourc } } - // Only Pod resources are supported in role version <=V6. - // This is mandatory because we must append the other resources to the - // kubernetes resources. switch roleVersion { // Teleport does not support role versions < v3. case V6, V5, V4, V3: + // Only Pod resources are supported in role version <=V6. + // This is mandatory because we must append the other resources to the + // kubernetes resources. if kubeResource.Kind != KindKubePod { return trace.BadParameter("KubernetesResource kind %q is not supported in role version %q. 
Upgrade the role version to %q", kubeResource.Kind, roleVersion, V8) } @@ -2023,7 +2110,7 @@ func validateKubeResources(roleVersion string, kubeResources []KubernetesResourc } // Best effort attempt to validate if the namespace field is needed. if kubeResource.Namespace == "" { - if _, ok := kubernetesNamespacedResourceKinds[groupKind{kubeResource.APIGroup, kubeResource.Kind}]; ok { + if apiGroup, ok := kubernetesNamespacedResourceKinds[kubeResource.Kind]; ok && apiGroup == kubeResource.APIGroup { return trace.BadParameter("KubernetesResource %q must include Namespace", kubeResource.Kind) } } @@ -2037,26 +2124,50 @@ func validateKubeResources(roleVersion string, kubeResources []KubernetesResourc } // validateRequestKubeResources validates each kubeResources entry for `allow.request.kubernetes_resources` field. -// Currently the only supported field for this particular field is: -// - Kind (belonging to KubernetesResourcesKinds) +// Currently the only supported fields for this field are: +// - Kind +// - APIGroup // // Mimics types.KubernetesResource data model, but opted to create own type as we don't support other fields yet. -// -// TODO(@creack): Handle rolev8 kind/group to support CRDs. Still use the teleport kinds for now. func validateRequestKubeResources(roleVersion string, kubeResources []RequestKubernetesResource) error { for _, kubeResource := range kubeResources { - if !slices.Contains(KubernetesResourcesKinds, kubeResource.Kind) && kubeResource.Kind != Wildcard { - return trace.BadParameter("request.kubernetes_resource kind %q is invalid or unsupported; Supported: %v", kubeResource.Kind, append([]string{Wildcard}, KubernetesResourcesKinds...)) - } - - // Only Pod resources are supported in role version <=V6. - // This is mandatory because we must append the other resources to the - // kubernetes resources.
switch roleVersion { + case V8: + if kubeResource.Kind == "" { + return trace.BadParameter("request.kubernetes_resource kind is required in role version %q", roleVersion) + } + // If we have a kind that matches a role v7 one, check the api group. + if slices.Contains(KubernetesResourcesKinds, kubeResource.Kind) { + // If the api group is a wildcard or matches the v7 group, it is almost certainly a mistake; reject the role. + if kubeResource.APIGroup == Wildcard || kubeResource.APIGroup == KubernetesResourcesV7KindGroups[kubeResource.Kind] { + return trace.BadParameter("request.kubernetes_resource kind %q is invalid. Please use plural name for role version %q", kubeResource.Kind, roleVersion) + } + } + // Only allow empty string for known core resources. + if kubeResource.APIGroup == "" { + if _, ok := KubernetesCoreResourceKinds[kubeResource.Kind]; !ok { + return trace.BadParameter("request.kubernetes_resource api_group is required for resource %q in role version %q", kubeResource.Kind, roleVersion) + } + } + case V7: + if kubeResource.APIGroup != "" { + return trace.BadParameter("request.kubernetes_resource api_group is not supported in role version %q. Upgrade the role version to %q", roleVersion, V8) + } + if !slices.Contains(KubernetesResourcesKinds, kubeResource.Kind) && kubeResource.Kind != Wildcard { + return trace.BadParameter("request.kubernetes_resource kind %q is invalid or unsupported in role version %q; Supported: %v", + kubeResource.Kind, roleVersion, append([]string{Wildcard}, KubernetesResourcesKinds...)) + } // Teleport does not support role versions < v3. case V6, V5, V4, V3: + if kubeResource.APIGroup != "" { + return trace.BadParameter("request.kubernetes_resource api_group is not supported in role version %q. Upgrade the role version to %q", roleVersion, V8) + } + // Only Pod resources are supported in role version <=V6. + // This is mandatory because we must append the other resources to the + // kubernetes resources.
if kubeResource.Kind != KindKubePod { - return trace.BadParameter("request.kubernetes_resources kind %q is not supported in role version %q. Upgrade the role version to %q", kubeResource.Kind, roleVersion, V8) + return trace.BadParameter("request.kubernetes_resources kind %q is not supported in role version %q. Upgrade the role version to %q", + kubeResource.Kind, roleVersion, V8) } } } @@ -2065,8 +2176,8 @@ func validateRequestKubeResources(roleVersion string, kubeResources []RequestKub // ClusterResource returns the resource name in the following format // <namespace>/<name>. -func (k *KubernetesResource) ClusterResource() string { - return path.Join(k.Namespace, k.Name) +func (m *KubernetesResource) ClusterResource() string { + return path.Join(m.Namespace, m.Name) } // IsEmpty will return true if the condition is empty. @@ -2275,6 +2386,23 @@ func (r *RoleV6) SetIdentityCenterAccountAssignments(rct RoleConditionType, assi cond.AccountAssignments = assignments } +// GetMCPPermissions returns the allow or deny MCP permissions. +func (r *RoleV6) GetMCPPermissions(rct RoleConditionType) *MCPPermissions { + if rct == Allow { + return r.Spec.Allow.MCP + } + return r.Spec.Deny.MCP +} + +// SetMCPPermissions sets the allow or deny MCP permissions.
+func (r *RoleV6) SetMCPPermissions(rct RoleConditionType, perms *MCPPermissions) { + if rct == Allow { + r.Spec.Allow.MCP = perms + } else { + r.Spec.Deny.MCP = perms + } +} + func (r *RoleV6) Clone() Role { return utils.CloneProtoMsg(r) } diff --git a/api/types/role_test.go b/api/types/role_test.go index b7955781add08..6007177d61425 100644 --- a/api/types/role_test.go +++ b/api/types/role_test.go @@ -267,6 +267,12 @@ func TestRole_GetKubeResources(t *testing.T) { Name: "test", Verbs: []string{Wildcard}, }, + { + Kind: KindKubeJob, + Namespace: "test", + Name: "test", + Verbs: []string{Wildcard}, + }, }, }, wantDeny: []KubernetesResource{ @@ -275,7 +281,14 @@ func TestRole_GetKubeResources(t *testing.T) { Namespace: "test", Name: "test", Verbs: []string{Wildcard}, - APIGroup: Wildcard, + APIGroup: "", + }, + { + Kind: "jobs", + Namespace: "test", + Name: "test", + Verbs: []string{Wildcard}, + APIGroup: "batch", }, }, assertErrorCreation: require.NoError, @@ -306,6 +319,11 @@ func TestRole_GetKubeResources(t *testing.T) { Namespace: "test", Name: "test", }, + { + Kind: KindKubeDeployment, + Namespace: "test", + Name: "test", + }, }, }, assertErrorCreation: require.NoError, @@ -314,7 +332,13 @@ func TestRole_GetKubeResources(t *testing.T) { Kind: "pods", Namespace: "test", Name: "test", - APIGroup: Wildcard, + APIGroup: "", + }, + { + Kind: "deployments", + Namespace: "test", + Name: "test", + APIGroup: "apps", }, }, }, @@ -353,7 +377,7 @@ func TestRole_GetKubeResources(t *testing.T) { Kind: "pods", Namespace: "test", Name: "test", - APIGroup: Wildcard, + APIGroup: "", }, }, }, @@ -553,7 +577,7 @@ func TestRole_GetKubeResources(t *testing.T) { Namespace: "test", Name: "test", Verbs: []string{Wildcard}, - APIGroup: Wildcard, + APIGroup: "", }, }, appendV7KubeResources()...), @@ -602,7 +626,7 @@ func TestRole_GetKubeResources(t *testing.T) { Namespace: "test", Name: "test", Verbs: []string{Wildcard}, - APIGroup: Wildcard, + APIGroup: "", }, }, 
appendV7KubeResources()...), @@ -661,6 +685,7 @@ func TestRole_GetKubeResources(t *testing.T) { } if tt.wantDeny == nil { got := r.GetKubeResources(Allow) + tt.wantAllow = append(tt.wantAllow, KubernetesResourceSelfSubjectAccessReview) require.Equal(t, tt.wantAllow, got) } got := r.GetKubeResources(Deny) @@ -879,6 +904,7 @@ func appendV7KubeResources() []KubernetesResource { resources := []KubernetesResource{} // append other kubernetes resources for _, resource := range KubernetesResourcesKinds { + group := KubernetesResourcesV7KindGroups[resource] resource = KubernetesResourcesKindsPlurals[resource] if resource == "pods" || resource == "namespaces" { continue @@ -888,7 +914,7 @@ func appendV7KubeResources() []KubernetesResource { Namespace: Wildcard, Name: Wildcard, Verbs: []string{Wildcard}, - APIGroup: Wildcard, + APIGroup: group, }, ) } diff --git a/api/types/saml.go b/api/types/saml.go index 24dde3df1a251..b085904af4a30 100644 --- a/api/types/saml.go +++ b/api/types/saml.go @@ -117,6 +117,16 @@ type SAMLConnector interface { GetForceAuthn() bool // GetPreferredRequestBinding returns PreferredRequestBinding. GetPreferredRequestBinding() string + // GetUserMatchers returns the set of glob patterns to narrow down which username(s) this auth connector should + // match for identifier-first login. + GetUserMatchers() []string + // SetUserMatchers sets the set of glob patterns to narrow down which username(s) this auth connector should match + // for identifier-first login. + SetUserMatchers([]string) + // GetIncludeSubject returns true if the Subject element should be included in the AuthnRequest. + GetIncludeSubject() bool + // SetIncludeSubject sets whether the Subject element should be included. + SetIncludeSubject(bool) } // NewSAMLConnector returns a new SAMLConnector based off a name and SAMLConnectorSpecV2. 
@@ -448,6 +458,29 @@ func (o *SAMLConnectorV2) GetForceAuthn() bool { return o.Spec.ForceAuthn == SAMLForceAuthn_FORCE_AUTHN_YES } +// GetUserMatchers returns the set of glob patterns to narrow down which username(s) this auth connector should +// match for identifier-first login. +func (r *SAMLConnectorV2) GetUserMatchers() []string { + if r.Spec.UserMatchers == nil { + return nil + } + return r.Spec.UserMatchers +} + +// SetUserMatchers sets the set of glob patterns to narrow down which username(s) this auth connector should match +// for identifier-first login. +func (r *SAMLConnectorV2) SetUserMatchers(userMatchers []string) { + r.Spec.UserMatchers = userMatchers +} + +func (r *SAMLConnectorV2) GetIncludeSubject() bool { + return r.Spec.IncludeSubject +} + +func (r *SAMLConnectorV2) SetIncludeSubject(includeSubject bool) { + r.Spec.IncludeSubject = includeSubject +} + const ( // SAMLRequestHTTPRedirectBinding is the SAML http-redirect binding request name. SAMLRequestHTTPRedirectBinding = "http-redirect" diff --git a/api/types/secreports/convert/v1/secreport.go b/api/types/secreports/convert/v1/secreport.go index eff3c64aed25c..a24b24edbfc38 100644 --- a/api/types/secreports/convert/v1/secreport.go +++ b/api/types/secreports/convert/v1/secreport.go @@ -34,7 +34,7 @@ func FromProtoAuditQuery(in *secreportsv1.AuditQuery) (*secreports.AuditQuery, e Query: in.GetSpec().GetQuery(), Description: in.GetSpec().GetDescription(), } - out, err := secreports.NewAuditQuery(headerv1.FromMetadataProto(in.Header.Metadata), spec) + out, err := secreports.NewAuditQuery(headerv1.FromMetadataProto(in.GetHeader().GetMetadata()), spec) if err != nil { return nil, trace.Wrap(err) } @@ -76,7 +76,7 @@ func FromProtoReport(in *secreportsv1.Report) (*secreports.Report, error) { Title: in.GetSpec().GetTitle(), Version: in.GetSpec().GetVersion(), } - out, err := secreports.NewReport(headerv1.FromMetadataProto(in.Header.Metadata), spec) + out, err := 
secreports.NewReport(headerv1.FromMetadataProto(in.GetHeader().GetMetadata()), spec) if err != nil { return nil, trace.Wrap(err) } diff --git a/api/types/semaphore.go b/api/types/semaphore.go index 68d32850de075..e6b5e07d7cebf 100644 --- a/api/types/semaphore.go +++ b/api/types/semaphore.go @@ -327,17 +327,37 @@ type Semaphores interface { CancelSemaphoreLease(ctx context.Context, lease SemaphoreLease) error // GetSemaphores returns a list of semaphores matching supplied filter. GetSemaphores(ctx context.Context, filter SemaphoreFilter) ([]Semaphore, error) + // ListSemaphores returns a page of semaphores matching supplied filter. + ListSemaphores(ctx context.Context, limit int, start string, filter *SemaphoreFilter) ([]Semaphore, string, error) // DeleteSemaphore deletes a semaphore matching supplied filter. DeleteSemaphore(ctx context.Context, filter SemaphoreFilter) error } // Match checks if the supplied semaphore matches this filter. func (f *SemaphoreFilter) Match(sem Semaphore) bool { - if f.SemaphoreKind != "" && f.SemaphoreKind != sem.GetSubKind() { + if f.GetSemaphoreKind() != "" && f.GetSemaphoreKind() != sem.GetSubKind() { return false } - if f.SemaphoreName != "" && f.SemaphoreName != sem.GetName() { + if f.GetSemaphoreName() != "" && f.GetSemaphoreName() != sem.GetName() { return false } return true } + +// GetSemaphoreKind returns the semaphore kind to filter by if filter is non-nil +func (f *SemaphoreFilter) GetSemaphoreKind() string { + if f == nil { + return "" + } + + return f.SemaphoreKind +} + +// GetSemaphoreName returns the semaphore name to filter by if filter is non-nil +func (f *SemaphoreFilter) GetSemaphoreName() string { + if f == nil { + return "" + } + + return f.SemaphoreName +} diff --git a/api/types/server.go b/api/types/server.go index c2e139480f89a..194bf810344b7 100644 --- a/api/types/server.go +++ b/api/types/server.go @@ -27,6 +27,7 @@ import ( "github.com/google/uuid" "github.com/gravitational/trace" + componentfeaturesv1 
"github.com/gravitational/teleport/api/gen/proto/go/teleport/componentfeatures/v1" "github.com/gravitational/teleport/api/utils" "github.com/gravitational/teleport/api/utils/aws" ) @@ -76,6 +77,13 @@ type Server interface { // ProxiedService provides common methods for a proxied service. ProxiedService + // GetRelayGroup returns the name of the Relay group that the server is + // connected to. + GetRelayGroup() string + // GetRelayIDs returns the list of Relay host IDs that the server is + // connected to. + GetRelayIDs() []string + // DeepCopy creates a clone of this server value DeepCopy() Server @@ -105,6 +113,13 @@ type Server interface { // GetGitHub returns the GitHub server spec. GetGitHub() *GitHubServerMetadata + // GetScope returns the scope this server belongs to. + GetScope() string + + // GetComponentFeatures returns the supported features for the server. + GetComponentFeatures() *componentfeaturesv1.ComponentFeatures + // SetComponentFeatures sets the supported features for the server. + SetComponentFeatures(*componentfeaturesv1.ComponentFeatures) } // NewServer creates an instance of Server. @@ -186,6 +201,16 @@ func NewGitHubServerWithName(name string, githubSpec GitHubServerMetadata) (Serv return server, nil } +// GetComponentFeatures returns the supported features for the server. +func (s *ServerV2) GetComponentFeatures() *componentfeaturesv1.ComponentFeatures { + return s.Spec.ComponentFeatures +} + +// SetComponentFeatures sets the supported features for the server. +func (s *ServerV2) SetComponentFeatures(features *componentfeaturesv1.ComponentFeatures) { + s.Spec.ComponentFeatures = features +} + // GetVersion returns resource version func (s *ServerV2) GetVersion() string { return s.Version @@ -384,6 +409,22 @@ func (s *ServerV2) SetProxyIDs(proxyIDs []string) { s.Spec.ProxyIDs = proxyIDs } +// GetRelayGroup implements [Server]. 
+func (s *ServerV2) GetRelayGroup() string { + if s == nil { + return "" + } + return s.Spec.RelayGroup +} + +// GetRelayIDs implements [Server]. +func (s *ServerV2) GetRelayIDs() []string { + if s == nil { + return nil + } + return s.Spec.RelayIds +} + // GetAllLabels returns the full key:value map of both static labels and // "command labels" func (s *ServerV2) GetAllLabels() map[string]string { @@ -726,6 +767,11 @@ func (s *ServerV2) SetCloudMetadata(meta *CloudMetadata) { s.Spec.CloudMetadata = meta } +// GetScope returns the scope this server belongs to. +func (s *ServerV2) GetScope() string { + return s.Scope +} + // CommandLabel is a label that has a value as a result of the // output generated by running command, e.g. hostname type CommandLabel interface { diff --git a/api/types/server_test.go b/api/types/server_test.go index 4e1476e9cbf38..7c398173ed4a5 100644 --- a/api/types/server_test.go +++ b/api/types/server_test.go @@ -19,7 +19,9 @@ package types import ( "fmt" "testing" + "time" + "github.com/google/go-cmp/cmp" "github.com/gravitational/trace" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" @@ -818,3 +820,41 @@ func TestGitServerOrgDomain(t *testing.T) { _, ok = GetGitHubOrgFromNodeAddr("my-server.example.teleport.sh:22") require.False(t, ok) } + +func TestServerLabels(t *testing.T) { + emptyLabels := make(map[string]string) + // empty + server := &ServerV2{} + require.Empty(t, server.GetAllLabels()) + require.True(t, MatchLabels(server, emptyLabels)) + require.False(t, MatchLabels(server, map[string]string{"a": "b"})) + + // more complex + server = &ServerV2{ + Metadata: Metadata{ + Labels: map[string]string{ + "role": "database", + }, + }, + Spec: ServerSpecV2{ + CmdLabels: map[string]CommandLabelV2{ + "time": { + Period: NewDuration(time.Second), + Command: []string{"time"}, + Result: "now", + }, + }, + }, + } + + require.Empty(t, cmp.Diff(server.GetAllLabels(), map[string]string{ + "role": "database", + "time": "now", 
+ })) + + require.True(t, MatchLabels(server, emptyLabels)) + require.False(t, MatchLabels(server, map[string]string{"a": "b"})) + require.True(t, MatchLabels(server, map[string]string{"role": "database"})) + require.True(t, MatchLabels(server, map[string]string{"time": "now"})) + require.True(t, MatchLabels(server, map[string]string{"time": "now", "role": "database"})) +} diff --git a/api/types/session_tracker.go b/api/types/session_tracker.go index 62089117bc976..76f6431ab81f9 100644 --- a/api/types/session_tracker.go +++ b/api/types/session_tracker.go @@ -105,7 +105,7 @@ type SessionTracker interface { RemoveParticipant(string) error // UpdatePresence updates presence timestamp of a participant. - UpdatePresence(string, time.Time) error + UpdatePresence(username string, cluster string, t time.Time) error // GetKubeCluster returns the name of the kubernetes cluster the session is running in. GetKubeCluster() string @@ -320,9 +320,14 @@ func (s *SessionTrackerV1) GetHostUser() string { } // UpdatePresence updates presence timestamp of a participant. -func (s *SessionTrackerV1) UpdatePresence(user string, t time.Time) error { +func (s *SessionTrackerV1) UpdatePresence(user, userCluster string, t time.Time) error { idx := slices.IndexFunc(s.Spec.Participants, func(participant Participant) bool { - return participant.User == user + // participant.Cluster == "" is a legacy participant that was created + // before cluster field was added, so we allow updating presence for + // such participants as well. 
+ // TODO(tigrato): DELETE IN 20.0.0 when participants created without + // the cluster field can no longer exist. + return participant.User == user && (participant.Cluster == userCluster || participant.Cluster == "") }) if idx < 0 { diff --git a/api/types/session_tracker_test.go b/api/types/session_tracker_test.go index 59cbb7fbd84fa..9d3c7601491c4 100644 --- a/api/types/session_tracker_test.go +++ b/api/types/session_tracker_test.go @@ -34,12 +34,20 @@ func TestSessionTrackerV1_UpdatePresence(t *testing.T) { { ID: "1", User: "llama", + Cluster: "teleport-local", Mode: string(SessionPeerMode), LastActive: now, }, { ID: "2", User: "fish", + Cluster: "teleport-remote", + Mode: string(SessionModeratorMode), + LastActive: now, + }, + { + ID: "3", + User: "cat", Mode: string(SessionModeratorMode), LastActive: now, }, @@ -48,11 +56,13 @@ require.NoError(t, err) // Presence cannot be updated for a non-existent user - err = s.UpdatePresence("alpaca", now.Add(time.Hour)) + err = s.UpdatePresence("alpaca", "", now.Add(time.Hour)) require.ErrorIs(t, err, trace.NotFound("participant alpaca not found")) // Update presence for just the user fish - require.NoError(t, s.UpdatePresence("fish", now.Add(time.Hour))) + require.NoError(t, s.UpdatePresence("fish", "teleport-remote", now.Add(time.Hour))) + // Try to update presence for user fish again, but with a different cluster. + require.Error(t, s.UpdatePresence("fish", "teleport-local", now.Add(time.Hour))) // Verify that llama has not been active but that fish was for _, participant := range s.GetParticipants() { diff --git a/api/types/sessionrecording.go b/api/types/sessionrecording.go index 832bfcc59a8ec..ad1fc5af60c8f 100644 --- a/api/types/sessionrecording.go +++ b/api/types/sessionrecording.go @@ -17,6 +17,7 @@ limitations under the License. 
package types import ( + "iter" "slices" "strings" "time" @@ -43,8 +44,24 @@ type SessionRecordingConfig interface { // SetProxyChecksHostKeys sets if the proxy will check host keys. SetProxyChecksHostKeys(bool) + // GetEncrypted gets if session recordings should be encrypted or not. + GetEncrypted() bool + + // GetEncryptionConfig gets the encryption config from the session recording config. + GetEncryptionConfig() *SessionRecordingEncryptionConfig + + // GetEncryptionKeys gets the encryption keys for the session recording config. + GetEncryptionKeys() []*AgeEncryptionKey + + // SetEncryptionKeys sets the encryption keys for the session recording config. + // It returns true if there was a change applied and false otherwise. + SetEncryptionKeys(iter.Seq[*AgeEncryptionKey]) bool + // Clone returns a copy of the resource. Clone() SessionRecordingConfig + + // CheckAndSetDefaults verifies the constraints for a SessionRecordingConfig + CheckAndSetDefaults() error } // NewSessionRecordingConfigFromConfigFile is a convenience method to create @@ -163,6 +180,68 @@ func (c *SessionRecordingConfigV2) SetProxyChecksHostKeys(t bool) { c.Spec.ProxyChecksHostKeys = NewBoolOption(t) } +// GetEncrypted gets if session recordings should be encrypted or not. +func (c *SessionRecordingConfigV2) GetEncrypted() bool { + encryption := c.GetEncryptionConfig() + return encryption != nil && encryption.Enabled +} + +// GetEncryptionConfig gets the encryption config from the session recording config. +func (c *SessionRecordingConfigV2) GetEncryptionConfig() *SessionRecordingEncryptionConfig { + if c == nil { + return nil + } + + return c.Spec.Encryption +} + +// GetEncryptionKeys gets the encryption keys for the session recording config. +func (c *SessionRecordingConfigV2) GetEncryptionKeys() []*AgeEncryptionKey { + if c.Status != nil { + return c.Status.EncryptionKeys + } + + return nil +} + +// SetEncryptionKeys sets the encryption keys for the session recording config. 
+// It returns true if there was a change applied and false otherwise. +func (c *SessionRecordingConfigV2) SetEncryptionKeys(keys iter.Seq[*AgeEncryptionKey]) bool { + existingKeys := make(map[string]struct{}) + for _, key := range c.GetEncryptionKeys() { + existingKeys[string(key.PublicKey)] = struct{}{} + } + + var keysChanged bool + var newKeys []*AgeEncryptionKey + addedKeys := make(map[string]struct{}) + for key := range keys { + if !keysChanged { + if _, exists := existingKeys[string(key.PublicKey)]; !exists { + keysChanged = true + } + } + + if _, added := addedKeys[string(key.PublicKey)]; !added { + addedKeys[string(key.PublicKey)] = struct{}{} + newKeys = append(newKeys, key) + } + + } + + shouldUpdate := len(addedKeys) > 0 && (keysChanged || len(existingKeys) != len(addedKeys)) + if !shouldUpdate { + return false + } + + if c.Status == nil { + c.Status = &SessionRecordingConfigStatus{} + } + c.Status.EncryptionKeys = newKeys + + return true +} + // Clone returns a copy of the resource. func (c *SessionRecordingConfigV2) Clone() SessionRecordingConfig { return utils.CloneProtoMsg(c) diff --git a/api/types/sessionrecording_test.go b/api/types/sessionrecording_test.go new file mode 100644 index 0000000000000..375bef60ac9ab --- /dev/null +++ b/api/types/sessionrecording_test.go @@ -0,0 +1,187 @@ +// Teleport +// Copyright (C) 2025 Gravitational, Inc. +// +// This program is free software: you can redistribute it and/or modify +// it under the terms of the GNU Affero General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// This program is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Affero General Public License for more details. 
+// +// You should have received a copy of the GNU Affero General Public License +// along with this program. If not, see <https://www.gnu.org/licenses/>. + +package types_test + +import ( + "slices" + "testing" + + "github.com/stretchr/testify/require" + + "github.com/gravitational/teleport/api/types" +) + +func TestSetEncryptionKeys(t *testing.T) { + cases := []struct { + name string + initialKeys []*types.AgeEncryptionKey + newKeys []*types.AgeEncryptionKey + expectChange bool + }{ + { + name: "adding new keys to empty list", + expectChange: true, + newKeys: []*types.AgeEncryptionKey{ + { + PublicKey: []byte("123"), + }, + { + PublicKey: []byte("456"), + }, + }, + }, { + name: "adding new keys to existing list", + expectChange: true, + initialKeys: []*types.AgeEncryptionKey{ + { + PublicKey: []byte("123"), + }, + { + PublicKey: []byte("456"), + }, + }, + newKeys: []*types.AgeEncryptionKey{ + { + PublicKey: []byte("123"), + }, + { + PublicKey: []byte("456"), + }, + { + PublicKey: []byte("789"), + }, + }, + }, { + name: "replacing existing keys", + expectChange: true, + initialKeys: []*types.AgeEncryptionKey{ + { + PublicKey: []byte("123"), + }, + { + PublicKey: []byte("456"), + }, + }, + newKeys: []*types.AgeEncryptionKey{ + { + PublicKey: []byte("321"), + }, + { + PublicKey: []byte("654"), + }, + }, + }, { + name: "removing from existing keys", + expectChange: true, + initialKeys: []*types.AgeEncryptionKey{ + { + PublicKey: []byte("123"), + }, + { + PublicKey: []byte("456"), + }, + { + PublicKey: []byte("789"), + }, + }, + newKeys: []*types.AgeEncryptionKey{ + { + PublicKey: []byte("123"), + }, + { + PublicKey: []byte("456"), + }, + }, + }, { + name: "try to remove all keys", + expectChange: false, + initialKeys: []*types.AgeEncryptionKey{ + { + PublicKey: []byte("123"), + }, + { + PublicKey: []byte("456"), + }, + { + PublicKey: []byte("789"), + }, + }, + }, { + name: "no change", + expectChange: false, + initialKeys: []*types.AgeEncryptionKey{ + { + PublicKey: []byte("123"), + }, + { 
PublicKey: []byte("456"), + }, + }, + newKeys: []*types.AgeEncryptionKey{ + { + PublicKey: []byte("123"), + }, + { + PublicKey: []byte("456"), + }, + }, + }, { + name: "adding duplicates", + expectChange: false, + initialKeys: []*types.AgeEncryptionKey{ + { + PublicKey: []byte("123"), + }, + { + PublicKey: []byte("456"), + }, + }, + newKeys: []*types.AgeEncryptionKey{ + { + PublicKey: []byte("123"), + }, + { + PublicKey: []byte("123"), + }, + { + PublicKey: []byte("456"), + }, + { + PublicKey: []byte("456"), + }, + }, + }, + } + + for _, c := range cases { + t.Run(c.name, func(t *testing.T) { + src := &types.SessionRecordingConfigV2{ + Status: &types.SessionRecordingConfigStatus{ + EncryptionKeys: c.initialKeys, + }, + } + + keysChanged := src.SetEncryptionKeys(slices.Values(c.newKeys)) + require.Equal(t, c.expectChange, keysChanged) + if keysChanged { + require.Equal(t, c.newKeys, src.Status.EncryptionKeys) + } else { + require.Equal(t, c.initialKeys, src.Status.EncryptionKeys) + } + }) + } +} diff --git a/api/types/signaturealgorithmsuite.go b/api/types/signaturealgorithmsuite.go index fdd8f91cc5ca6..78dcbda98af05 100644 --- a/api/types/signaturealgorithmsuite.go +++ b/api/types/signaturealgorithmsuite.go @@ -20,23 +20,35 @@ import ( "github.com/gravitational/trace" ) -// MarshalText marshals a SignatureAlgorithmSuite value to text. This gets used -// by json.Marshal. -func (s SignatureAlgorithmSuite) MarshalText() ([]byte, error) { +// SignatureAlgorithmSuiteToString converts a [SignatureAlgorithmSuite] to a user-friendly string. 
+func SignatureAlgorithmSuiteToString(s SignatureAlgorithmSuite) string { switch s { case SignatureAlgorithmSuite_SIGNATURE_ALGORITHM_SUITE_LEGACY: - return []byte("legacy"), nil + return "legacy" case SignatureAlgorithmSuite_SIGNATURE_ALGORITHM_SUITE_BALANCED_V1: - return []byte("balanced-v1"), nil + return "balanced-v1" case SignatureAlgorithmSuite_SIGNATURE_ALGORITHM_SUITE_FIPS_V1: - return []byte("fips-v1"), nil + return "fips-v1" case SignatureAlgorithmSuite_SIGNATURE_ALGORITHM_SUITE_HSM_V1: - return []byte("hsm-v1"), nil + return "hsm-v1" default: - return []byte(s.String()), nil + return s.String() } } +// SignatureAlgorithmSuiteFromString parses a string to return a [SignatureAlgorithmSuite]. +func SignatureAlgorithmSuiteFromString(str string) (SignatureAlgorithmSuite, error) { + var suite SignatureAlgorithmSuite + err := suite.UnmarshalText([]byte(str)) + return suite, trace.Wrap(err) +} + +// MarshalText marshals a SignatureAlgorithmSuite value to text. This gets used +// by json.Marshal. +func (s SignatureAlgorithmSuite) MarshalText() ([]byte, error) { + return []byte(SignatureAlgorithmSuiteToString(s)), nil +} + // UnmarshalJSON unmarshals a SignatureAlgorithmSuite and supports the custom // string format or numeric types matching an enum value. func (s *SignatureAlgorithmSuite) UnmarshalJSON(data []byte) error { diff --git a/api/types/summarizer/summarizer.go b/api/types/summarizer/summarizer.go new file mode 100644 index 0000000000000..5f290283ad550 --- /dev/null +++ b/api/types/summarizer/summarizer.go @@ -0,0 +1,193 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package summarizer + +import ( + "slices" + "strings" + + "github.com/gravitational/trace" + + headerv1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/header/v1" + summarizerv1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/summarizer/v1" + "github.com/gravitational/teleport/api/types" +) + +// NewInferenceModel creates a new InferenceModel resource with the given name +// and spec. +func NewInferenceModel(name string, spec *summarizerv1.InferenceModelSpec) *summarizerv1.InferenceModel { + return &summarizerv1.InferenceModel{ + Kind: types.KindInferenceModel, + Version: types.V1, + Metadata: &headerv1.Metadata{ + Name: name, + }, + Spec: spec, + } +} + +// ValidateInferenceModel validates an InferenceModel. 
+func ValidateInferenceModel(m *summarizerv1.InferenceModel) error { + switch { + case m == nil: + return trace.BadParameter("inference model is nil") + case m.GetKind() != types.KindInferenceModel: + return trace.BadParameter("kind must be %s, got %s", types.KindInferenceModel, m.GetKind()) + case m.GetSubKind() != "": + return trace.BadParameter("subkind must be empty") + case m.GetVersion() == "": + return trace.BadParameter("version is required") + case m.GetVersion() != types.V1: + return trace.BadParameter("unsupported version %s, supported: %s", m.GetVersion(), types.V1) + + case m.GetMetadata() == nil: + return trace.BadParameter("metadata is required") + case m.GetMetadata().GetName() == "": + return trace.BadParameter("metadata.name is required") + case m.GetMetadata().GetName() == "teleport-cloud-default": + return trace.BadParameter("metadata.name \"teleport-cloud-default\" is reserved") + + case m.GetSpec() == nil: + return trace.BadParameter("spec is required") + } + + provider := m.GetSpec().GetProvider() + switch p := provider.(type) { + case nil: + return trace.BadParameter( + // Unfortunately, there's no way to tell between a missing and + // unsupported one once the object is parsed from YAML. There may be a + // way to do it if it was created from binary wire format, but it's not + // worth the effort. + "missing or unsupported inference provider in spec, supported providers: openai", + ) + case *summarizerv1.InferenceModelSpec_Openai: + if p.Openai.GetOpenaiModelId() == "" { + return trace.BadParameter("spec.openai.openai_model_id is required") + } + case *summarizerv1.InferenceModelSpec_Bedrock: + if p.Bedrock.GetBedrockModelId() == "" { + return trace.BadParameter("spec.bedrock.bedrock_model_id is required") + } + if p.Bedrock.GetRegion() == "" { + return trace.BadParameter("spec.bedrock.region is required") + } + } + + return nil +} + +// NewInferenceSecret creates a new InferenceSecret resource with the given name +// and spec. 
+func NewInferenceSecret(name string, spec *summarizerv1.InferenceSecretSpec) *summarizerv1.InferenceSecret { + return &summarizerv1.InferenceSecret{ + Kind: types.KindInferenceSecret, + Version: types.V1, + Metadata: &headerv1.Metadata{ + Name: name, + }, + Spec: spec, + } +} + +// ValidateInferenceSecret validates an inference secret. +func ValidateInferenceSecret(s *summarizerv1.InferenceSecret) error { + switch { + case s == nil: + return trace.BadParameter("inference secret is nil") + case s.GetKind() != types.KindInferenceSecret: + return trace.BadParameter("kind must be %s, got %s", types.KindInferenceSecret, s.GetKind()) + case s.GetSubKind() != "": + return trace.BadParameter("subkind must be empty") + case s.GetVersion() == "": + return trace.BadParameter("version is required") + case s.GetVersion() != types.V1: + return trace.BadParameter("unsupported version %s, supported: %s", s.GetVersion(), types.V1) + + case s.GetMetadata() == nil: + return trace.BadParameter("metadata is required") + case s.GetMetadata().GetName() == "": + return trace.BadParameter("metadata.name is required") + + case s.GetSpec() == nil: + return trace.BadParameter("spec is required") + case s.GetSpec().GetValue() == "": + return trace.BadParameter("spec.value is required") + } + + return nil +} + +// NewInferencePolicy creates a new InferencePolicy resource with the given name +// and spec. +func NewInferencePolicy(name string, spec *summarizerv1.InferencePolicySpec) *summarizerv1.InferencePolicy { + return &summarizerv1.InferencePolicy{ + Kind: types.KindInferencePolicy, + Version: types.V1, + Metadata: &headerv1.Metadata{ + Name: name, + }, + Spec: spec, + } +} + +// ValidateInferencePolicy validates an InferencePolicy. This function doesn't +// validate the Filter field, as it's unable to access the lib/services +// package; to fully validate a policy, use +// lib/services.ValidateInferencePolicy. 
+func ValidateInferencePolicy(p *summarizerv1.InferencePolicy) error { + switch { + case p == nil: + return trace.BadParameter("inference policy is nil") + case p.GetKind() != types.KindInferencePolicy: + return trace.BadParameter("kind must be %s, got %s", types.KindInferencePolicy, p.GetKind()) + case p.GetSubKind() != "": + return trace.BadParameter("subkind must be empty") + case p.GetVersion() == "": + return trace.BadParameter("version is required") + case p.GetVersion() != types.V1: + return trace.BadParameter("unsupported version %s, supported: %s", p.GetVersion(), types.V1) + + case p.GetMetadata() == nil: + return trace.BadParameter("metadata is required") + case p.GetMetadata().GetName() == "": + return trace.BadParameter("metadata.name is required") + + case p.GetSpec() == nil: + return trace.BadParameter("spec is required") + case p.GetSpec().GetModel() == "": + return trace.BadParameter("spec.model is required") + } + + kinds := p.GetSpec().GetKinds() + if len(kinds) == 0 { + return trace.BadParameter("spec.kinds are required") + } + supportedKinds := []string{ + string(types.SSHSessionKind), + string(types.KubernetesSessionKind), + string(types.DatabaseSessionKind), + } + for _, kind := range kinds { + if !slices.Contains(supportedKinds, kind) { + return trace.BadParameter( + "unsupported kind in spec.kinds: %s, supported: %v", + kind, strings.Join(supportedKinds, ", "), + ) + } + } + + return nil +} diff --git a/api/types/summarizer/summarizer_test.go b/api/types/summarizer/summarizer_test.go new file mode 100644 index 0000000000000..0fd1ad53df4bb --- /dev/null +++ b/api/types/summarizer/summarizer_test.go @@ -0,0 +1,244 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package summarizer + +import ( + "testing" + + "github.com/gravitational/trace" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + "google.golang.org/protobuf/proto" + + summarizerv1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/summarizer/v1" + "github.com/gravitational/teleport/api/types" +) + +func TestValidateInferenceModel(t *testing.T) { + t.Parallel() + validOpenAI := NewInferenceModel("my-model", &summarizerv1.InferenceModelSpec{ + Provider: &summarizerv1.InferenceModelSpec_Openai{ + Openai: &summarizerv1.OpenAIProvider{ + OpenaiModelId: "gpt-4o", + }, + }, + }) + validBedrock := NewInferenceModel("my-model", &summarizerv1.InferenceModelSpec{ + Provider: &summarizerv1.InferenceModelSpec_Bedrock{ + Bedrock: &summarizerv1.BedrockProvider{ + BedrockModelId: "amazon.nova-lite-v1:0", + Region: "us-west-2", + }, + }, + }) + require.NoError(t, ValidateInferenceModel(validOpenAI)) + require.NoError(t, ValidateInferenceModel(validBedrock)) + + cases := []struct { + base *summarizerv1.InferenceModel + fn func(m *summarizerv1.InferenceModel) + msg string + }{ + { + base: validOpenAI, + fn: func(m *summarizerv1.InferenceModel) { m.Kind = "other" }, + msg: "kind must be inference_model, got other", + }, + { + base: validOpenAI, + fn: func(m *summarizerv1.InferenceModel) { m.SubKind = "foo" }, + msg: "subkind must be empty", + }, + { + base: validOpenAI, + fn: func(m *summarizerv1.InferenceModel) { m.Version = "" }, + msg: "version is required", + }, + { + base: validOpenAI, + fn: func(m 
*summarizerv1.InferenceModel) { m.Version = types.V2 }, + msg: "unsupported version v2, supported: v1", + }, + { + base: validOpenAI, + fn: func(m *summarizerv1.InferenceModel) { m.Metadata = nil }, + msg: "metadata is required", + }, + { + base: validOpenAI, + fn: func(m *summarizerv1.InferenceModel) { m.Metadata.Name = "" }, + msg: "metadata.name is required", + }, + { + base: validOpenAI, + fn: func(m *summarizerv1.InferenceModel) { m.Metadata.Name = "teleport-cloud-default" }, + msg: "metadata.name \"teleport-cloud-default\" is reserved", + }, + { + base: validOpenAI, + fn: func(m *summarizerv1.InferenceModel) { m.Spec = nil }, + msg: "spec is required", + }, + { + base: validOpenAI, + fn: func(m *summarizerv1.InferenceModel) { m.Spec.Provider = nil }, + msg: "missing or unsupported inference provider in spec, supported providers: openai", + }, + { + base: validOpenAI, + fn: func(m *summarizerv1.InferenceModel) { m.Spec.GetOpenai().OpenaiModelId = "" }, + msg: "spec.openai.openai_model_id is required", + }, + { + base: validBedrock, + fn: func(m *summarizerv1.InferenceModel) { m.Spec.GetBedrock().BedrockModelId = "" }, + msg: "spec.bedrock.bedrock_model_id is required", + }, + { + base: validBedrock, + fn: func(m *summarizerv1.InferenceModel) { m.Spec.GetBedrock().Region = "" }, + msg: "spec.bedrock.region is required", + }, + } + + for _, tc := range cases { + t.Run(tc.msg, func(t *testing.T) { + m := proto.CloneOf(tc.base) + tc.fn(m) + assert.ErrorIs(t, ValidateInferenceModel(m), &trace.BadParameterError{Message: tc.msg}) + }) + } +} + +func TestValidateInferenceSecret(t *testing.T) { + t.Parallel() + valid := NewInferenceSecret("my-secret", &summarizerv1.InferenceSecretSpec{ + Value: "super-secret-value", + }) + require.NoError(t, ValidateInferenceSecret(valid)) + + cases := []struct { + fn func(s *summarizerv1.InferenceSecret) + msg string + }{ + { + fn: func(s *summarizerv1.InferenceSecret) { s.Kind = "other" }, + msg: "kind must be inference_secret, got 
other", + }, + { + fn: func(s *summarizerv1.InferenceSecret) { s.SubKind = "foo" }, + msg: "subkind must be empty", + }, + { + fn: func(s *summarizerv1.InferenceSecret) { s.Version = "" }, + msg: "version is required", + }, + { + fn: func(s *summarizerv1.InferenceSecret) { s.Version = types.V2 }, + msg: "unsupported version v2, supported: v1", + }, + { + fn: func(s *summarizerv1.InferenceSecret) { s.Metadata = nil }, + msg: "metadata is required", + }, + { + fn: func(s *summarizerv1.InferenceSecret) { s.Metadata.Name = "" }, + msg: "metadata.name is required", + }, + { + fn: func(s *summarizerv1.InferenceSecret) { s.Spec = nil }, + msg: "spec is required", + }, + { + fn: func(s *summarizerv1.InferenceSecret) { s.Spec.Value = "" }, + msg: "spec.value is required", + }, + } + + for _, tc := range cases { + t.Run(tc.msg, func(t *testing.T) { + s := proto.CloneOf(valid) + tc.fn(s) + assert.ErrorIs(t, ValidateInferenceSecret(s), &trace.BadParameterError{Message: tc.msg}) + }) + } +} + +func TestValidateInferencePolicy(t *testing.T) { + t.Parallel() + valid := NewInferencePolicy("my-policy", &summarizerv1.InferencePolicySpec{ + Kinds: []string{"ssh", "k8s", "db"}, + Filter: `equals(resource.metadata.labels["env"], "prod") || equals(user.metadata.name, "admin")`, + Model: "my-model", + }) + require.NoError(t, ValidateInferencePolicy(valid)) + // Empty filter should also be valid. 
+ valid.Spec.Filter = "" + require.NoError(t, ValidateInferencePolicy(valid)) + + cases := []struct { + fn func(p *summarizerv1.InferencePolicy) + msg string + }{ + { + fn: func(p *summarizerv1.InferencePolicy) { p.Kind = "other" }, + msg: "kind must be inference_policy, got other", + }, + { + fn: func(p *summarizerv1.InferencePolicy) { p.SubKind = "foo" }, + msg: "subkind must be empty", + }, + { + fn: func(p *summarizerv1.InferencePolicy) { p.Version = "" }, + msg: "version is required", + }, + { + fn: func(p *summarizerv1.InferencePolicy) { p.Version = types.V2 }, + msg: "unsupported version v2, supported: v1", + }, + { + fn: func(p *summarizerv1.InferencePolicy) { p.Metadata = nil }, + msg: "metadata is required", + }, + { + fn: func(p *summarizerv1.InferencePolicy) { p.Metadata.Name = "" }, + msg: "metadata.name is required", + }, + { + fn: func(p *summarizerv1.InferencePolicy) { p.Spec = nil }, + msg: "spec is required", + }, + { + fn: func(p *summarizerv1.InferencePolicy) { p.Spec.Kinds = nil }, + msg: "spec.kinds are required", + }, + { + fn: func(p *summarizerv1.InferencePolicy) { p.Spec.Kinds = []string{"foo"} }, + msg: "unsupported kind in spec.kinds: foo, supported: ssh, k8s, db", + }, + { + fn: func(p *summarizerv1.InferencePolicy) { p.Spec.Model = "" }, + msg: "spec.model is required", + }, + } + + for _, tc := range cases { + t.Run(tc.msg, func(t *testing.T) { + p := proto.CloneOf(valid) + tc.fn(p) + assert.ErrorContains(t, ValidateInferencePolicy(p), tc.msg) + }) + } +} diff --git a/api/types/system_role.go b/api/types/system_role.go index 9b353254368ca..15d13746eff88 100644 --- a/api/types/system_role.go +++ b/api/types/system_role.go @@ -39,6 +39,8 @@ const ( RoleProxy SystemRole = "Proxy" // RoleAdmin is admin role RoleAdmin SystemRole = "Admin" + // RoleRelay is the system role for a relay in the cluster. 
+ RoleRelay SystemRole = "Relay" // RoleProvisionToken is a role for nodes authenticated using provisioning tokens RoleProvisionToken SystemRole = "ProvisionToken" // RoleTrustedCluster is a role needed for tokens used to add trusted clusters. @@ -90,6 +92,7 @@ var roleMappings = map[string]SystemRole{ "node": RoleNode, "proxy": RoleProxy, "admin": RoleAdmin, + "relay": RoleRelay, "provisiontoken": RoleProvisionToken, "trusted_cluster": RoleTrustedCluster, "trustedcluster": RoleTrustedCluster, @@ -132,6 +135,7 @@ var localServiceMappings = map[SystemRole]struct{}{ RoleAuth: {}, RoleNode: {}, RoleProxy: {}, + RoleRelay: {}, RoleKube: {}, RoleApp: {}, RoleDatabase: {}, diff --git a/api/types/target_health.go b/api/types/target_health.go index 07d548fb1160c..40a16b0f19dce 100644 --- a/api/types/target_health.go +++ b/api/types/target_health.go @@ -17,6 +17,7 @@ limitations under the License. package types import ( + "iter" "time" ) @@ -24,8 +25,10 @@ import ( type TargetHealthProtocol string const ( - // TargetHealthProtocolTCP is a target health check protocol. - TargetHealthProtocolTCP TargetHealthProtocol = "TCP" + // TargetHealthProtocolTCP is the TCP target health check protocol. + TargetHealthProtocolTCP TargetHealthProtocol = "tcp" + // TargetHealthProtocolHTTP is the HTTP target health check protocol. + TargetHealthProtocolHTTP TargetHealthProtocol = "http" ) // TargetHealthStatus is a target resource's health status. @@ -38,8 +41,40 @@ const ( TargetHealthStatusUnhealthy TargetHealthStatus = "unhealthy" // TargetHealthStatusUnknown indicates that an unknown health check target health status. TargetHealthStatusUnknown TargetHealthStatus = "unknown" + // TargetHealthStatusMixed indicates the resource has a mix of health + // statuses. This can happen when multiple agents proxy the same resource. + TargetHealthStatusMixed TargetHealthStatus = "mixed" ) +// Canonical converts a status into its canonical form. 
+// An empty or unknown status is converted to [TargetHealthStatusUnknown]. +// +// Returns only a healthy, unhealthy, or unknown status. +func (s TargetHealthStatus) Canonical() TargetHealthStatus { + switch s { + case TargetHealthStatusHealthy, TargetHealthStatusUnhealthy: + return s + default: + return TargetHealthStatusUnknown + } +} + +// AggregateHealthStatus aggregates health statuses into a single status. If +// there is a mix of different statuses then the aggregate status is "mixed". +func AggregateHealthStatus(statuses iter.Seq[TargetHealthStatus]) TargetHealthStatus { + first := true + out := TargetHealthStatusUnknown + for s := range statuses { + if first { + out = s.Canonical() + first = false + } else if out != s.Canonical() { + return TargetHealthStatusMixed + } + } + return out +} + // TargetHealthTransitionReason is the reason for the target health status // transition. type TargetHealthTransitionReason string @@ -66,17 +101,17 @@ func (t *TargetHealth) GetTransitionTimestamp() time.Time { return *t.TransitionTimestamp } -// TargetHealthGetter provides [TargetHealth] information. -type TargetHealthGetter interface { - // GetTargetHealth returns the target health. - GetTargetHealth() TargetHealth +// TargetHealthStatusGetter is a type that can return [TargetHealthStatus]. +type TargetHealthStatusGetter interface { + // GetTargetHealthStatus returns the target health status. + GetTargetHealthStatus() TargetHealthStatus } -// GroupByTargetHealth groups resources by target health and returns [TargetHealthGroups]. -func GroupByTargetHealth[T TargetHealthGetter](resources []T) TargetHealthGroups[T] { +// GroupByTargetHealthStatus groups resources by target health and returns [TargetHealthGroups]. 
+func GroupByTargetHealthStatus[T TargetHealthStatusGetter](resources []T) TargetHealthGroups[T] { var groups TargetHealthGroups[T] for _, r := range resources { - switch TargetHealthStatus(r.GetTargetHealth().Status) { + switch r.GetTargetHealthStatus() { case TargetHealthStatusHealthy: groups.Healthy = append(groups.Healthy, r) case TargetHealthStatusUnhealthy: @@ -90,15 +125,20 @@ func GroupByTargetHealth[T TargetHealthGetter](resources []T) TargetHealthGroups } // TargetHealthGroups holds resources grouped by target health status. -type TargetHealthGroups[T TargetHealthGetter] struct { +type TargetHealthGroups[T TargetHealthStatusGetter] struct { // Healthy is the resources with [TargetHealthStatusHealthy]. Healthy []T // Unhealthy is the resources with [TargetHealthStatusUnhealthy]. Unhealthy []T // Unknown is the resources with any status that isn't healthy or unhealthy. - // Namely [TargetHealthStatusUnknown] and the empty string are grouped - // together. + // Namely [TargetHealthStatusUnknown], [TargetHealthStatusMixed], and the + // empty string are grouped together. // Agents running with a version prior to health checks will always report // an empty health status. + // A mixed status should only be set if health statuses for multiple + // servers are aggregated. An aggregated mixed status is equivalent to + // "unknown" because the underlying statuses that compose the mix are not + // known, although it really doesn't make sense to aggregate the health + // status before grouping it (please don't do that). 
Unknown []T } diff --git a/api/types/target_health_test.go b/api/types/target_health_test.go index 769c51b095a02..ebe98145eb69f 100644 --- a/api/types/target_health_test.go +++ b/api/types/target_health_test.go @@ -19,6 +19,7 @@ package types import ( "fmt" "math/rand/v2" + "slices" "testing" "github.com/stretchr/testify/require" @@ -65,23 +66,72 @@ func TestGroupByTargetHealth(t *testing.T) { rand.Shuffle(len(servers), func(i, j int) { servers[i], servers[j] = servers[j], servers[i] }) - groups := GroupByTargetHealth(servers) + groups := GroupByTargetHealthStatus(servers) for _, server := range groups.Healthy { require.Equal(t, TargetHealthStatusHealthy, - TargetHealthStatus(server.GetTargetHealth().Status), + server.GetTargetHealthStatus(), "server %s is in the wrong group", server.GetName(), ) } for _, server := range groups.Unhealthy { require.Equal(t, TargetHealthStatusUnhealthy, - TargetHealthStatus(server.GetTargetHealth().Status), + server.GetTargetHealthStatus(), "server %s is in the wrong group", server.GetName(), ) } for _, server := range groups.Unknown { require.Contains(t, []TargetHealthStatus{TargetHealthStatusUnknown, ""}, - TargetHealthStatus(server.GetTargetHealth().Status), + server.GetTargetHealthStatus(), "server %s is in the wrong group", server.GetName(), ) } } + +func TestTargetHealthStatusCanonical(t *testing.T) { + tests := []struct { + name string + input TargetHealthStatus + expected TargetHealthStatus + }{ + {"healthy remains healthy", TargetHealthStatusHealthy, TargetHealthStatusHealthy}, + {"unhealthy remains unhealthy", TargetHealthStatusUnhealthy, TargetHealthStatusUnhealthy}, + {"unknown becomes unknown", TargetHealthStatusUnknown, TargetHealthStatusUnknown}, + {"mixed becomes unknown", TargetHealthStatusMixed, TargetHealthStatusUnknown}, + {"empty string becomes unknown", TargetHealthStatus(""), TargetHealthStatusUnknown}, + {"random string becomes unknown", TargetHealthStatus("invalid"), TargetHealthStatusUnknown}, + } + for _, 
test := range tests { + t.Run(test.name, func(t *testing.T) { + actual := test.input.Canonical() + require.Equal(t, test.expected, actual) + }) + } +} + +func TestTargetHealthStatusesAggregate(t *testing.T) { + tests := []struct { + name string + input []TargetHealthStatus + expected TargetHealthStatus + }{ + {"empty list returns unknown", []TargetHealthStatus{}, TargetHealthStatusUnknown}, + {"one healthy", []TargetHealthStatus{TargetHealthStatusHealthy}, TargetHealthStatusHealthy}, + {"one unhealthy", []TargetHealthStatus{TargetHealthStatusUnhealthy}, TargetHealthStatusUnhealthy}, + {"one unknown", []TargetHealthStatus{TargetHealthStatusUnknown}, TargetHealthStatusUnknown}, + {"one mixed", []TargetHealthStatus{TargetHealthStatusMixed}, TargetHealthStatusUnknown}, + {"all healthy", []TargetHealthStatus{TargetHealthStatusHealthy, TargetHealthStatusHealthy}, TargetHealthStatusHealthy}, + {"all unhealthy", []TargetHealthStatus{TargetHealthStatusUnhealthy, TargetHealthStatusUnhealthy}, TargetHealthStatusUnhealthy}, + {"all unknown", []TargetHealthStatus{TargetHealthStatusUnknown, TargetHealthStatusUnknown}, TargetHealthStatusUnknown}, + {"all empty", []TargetHealthStatus{"", ""}, TargetHealthStatusUnknown}, + {"empty and unknown", []TargetHealthStatus{"", TargetHealthStatusUnknown}, TargetHealthStatusUnknown}, + {"healthy and unhealthy", []TargetHealthStatus{TargetHealthStatusHealthy, TargetHealthStatusUnhealthy}, TargetHealthStatusMixed}, + {"unhealthy and unknown", []TargetHealthStatus{TargetHealthStatusUnhealthy, TargetHealthStatusUnknown}, TargetHealthStatusMixed}, + {"healthy and unhealthy and unknown", []TargetHealthStatus{TargetHealthStatusHealthy, TargetHealthStatusUnhealthy, TargetHealthStatusUnknown}, TargetHealthStatusMixed}, + } + for _, test := range tests { + t.Run(test.name, func(t *testing.T) { + actual := AggregateHealthStatus(slices.Values(test.input)) + require.Equal(t, test.expected, actual) + }) + } +} diff --git a/api/types/trait/trait.go 
b/api/types/trait/trait.go index c700e9a310d1f..b7429902da084 100644 --- a/api/types/trait/trait.go +++ b/api/types/trait/trait.go @@ -16,5 +16,33 @@ limitations under the License. package trait +import ( + "slices" +) + // Traits is a mapping of traits to values. type Traits map[string][]string + +// Clone returns a deep copy of the traits map. +func (t Traits) Clone() Traits { + if t == nil { + return nil + } + out := make(Traits, len(t)) + for key, values := range t { + out[key] = slices.Clone(values) + } + return out +} + +// Merge merges src traits into dst. If you don't want dst to be mutated, call Clone() on it first. +// Duplicated values are removed, but the order of values for a given trait is not guaranteed. +func Merge(dst, src Traits) { + for key, values := range src { + dst[key] = append(dst[key], values...) + } + for key, values := range dst { + slices.Sort(values) + dst[key] = slices.Compact(values) + } +} diff --git a/api/types/trait/trait_test.go b/api/types/trait/trait_test.go new file mode 100644 index 0000000000000..2b6c9c81760af --- /dev/null +++ b/api/types/trait/trait_test.go @@ -0,0 +1,62 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package trait + +import ( + "testing" + + "github.com/google/go-cmp/cmp" + "github.com/stretchr/testify/require" +) + +func TestMerge(t *testing.T) { + testCases := []struct { + name string + dst Traits + src Traits + expectedDst Traits + }{ + { + name: "typical", + dst: Traits{ + "only_dst": []string{"dst1"}, + "logins": []string{"root", "ec2-user"}, + }, + src: Traits{ + "only_src": []string{"src1"}, + "logins": []string{"ubuntu", "ec2-user"}, + }, + expectedDst: Traits{ + "only_src": []string{"src1"}, + "only_dst": []string{"dst1"}, + "logins": []string{"ec2-user", "root", "ubuntu"}, + }, + }, + { + name: "empty_dst", + dst: Traits{}, + src: Traits{ + "only_src": []string{"src1"}, + }, + expectedDst: Traits{ + "only_src": []string{"src1"}, + }, + }, + } + for _, tt := range testCases { + Merge(tt.dst, tt.src) + require.Empty(t, cmp.Diff(tt.expectedDst, tt.dst)) + } +} diff --git a/api/types/trust.go b/api/types/trust.go index 528ab0ef45baa..cd018414277cb 100644 --- a/api/types/trust.go +++ b/api/types/trust.go @@ -18,6 +18,7 @@ package types import ( "fmt" + "slices" "strings" "time" @@ -72,7 +73,8 @@ const ( ) // CertAuthTypes lists all certificate authority types. -var CertAuthTypes = []CertAuthType{HostCA, +var CertAuthTypes = []CertAuthType{ + HostCA, UserCA, DatabaseCA, DatabaseClientCA, @@ -93,7 +95,9 @@ func (c CertAuthType) NewlyAdded() bool { return c.addedInMajorVer() >= api.VersionMajor } -// addedInVer return the major version in which given CA was added. +// addedInMajorVer returns the major version in which the given CA was added. +// The returned version must be the X.0.0 release in which the CA first +// existed. 
func (c CertAuthType) addedInMajorVer() int64 { switch c { case DatabaseCA: @@ -105,7 +109,7 @@ func (c CertAuthType) addedInMajorVer() int64 { case SPIFFECA: return 15 case OktaCA: - return 16 + return 17 case AWSRACA, BoundKeypairCA: return 18 default: @@ -125,10 +129,8 @@ const authTypeNotSupported string = "authority type is not supported" // Check checks if certificate authority type value is correct func (c CertAuthType) Check() error { - for _, caType := range CertAuthTypes { - if c == caType { - return nil - } + if slices.Contains(CertAuthTypes, c) { + return nil } return trace.BadParameter("%q %s", c, authTypeNotSupported) diff --git a/api/types/types.pb.go b/api/types/types.pb.go index 0c25b0e2509cb..514ee98863be7 100644 --- a/api/types/types.pb.go +++ b/api/types/types.pb.go @@ -12,7 +12,8 @@ import ( github_com_gogo_protobuf_types "github.com/gogo/protobuf/types" types "github.com/gogo/protobuf/types" github_com_gravitational_teleport_api_constants "github.com/gravitational/teleport/api/constants" - v1 "github.com/gravitational/teleport/api/gen/proto/go/attestation/v1" + v11 "github.com/gravitational/teleport/api/gen/proto/go/attestation/v1" + v1 "github.com/gravitational/teleport/api/gen/proto/go/teleport/componentfeatures/v1" _ "github.com/gravitational/teleport/api/types/wrappers" github_com_gravitational_teleport_api_types_wrappers "github.com/gravitational/teleport/api/types/wrappers" io "io" @@ -368,6 +369,38 @@ func (RequestState) EnumDescriptor() ([]byte, []int) { return fileDescriptor_9198ee693835762e, []int{8} } +// AccessRequestKind represents the kind of Access Request being made (short/long-term). +type AccessRequestKind int32 + +const ( + // UNDEFINED is the default value, and represents an undefined request kind. + AccessRequestKind_UNDEFINED AccessRequestKind = 0 + // SHORT_TERM represents a short-term request, either role-based or resource-based. 
+ AccessRequestKind_SHORT_TERM AccessRequestKind = 1 + // LONG_TERM represents a long-term resource-based request. + AccessRequestKind_LONG_TERM AccessRequestKind = 2 +) + +var AccessRequestKind_name = map[int32]string{ + 0: "UNDEFINED", + 1: "SHORT_TERM", + 2: "LONG_TERM", +} + +var AccessRequestKind_value = map[string]int32{ + "UNDEFINED": 0, + "SHORT_TERM": 1, + "LONG_TERM": 2, +} + +func (x AccessRequestKind) String() string { + return proto.EnumName(AccessRequestKind_name, int32(x)) +} + +func (AccessRequestKind) EnumDescriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{9} +} + type AccessRequestScope int32 const ( @@ -403,7 +436,7 @@ func (x AccessRequestScope) String() string { } func (AccessRequestScope) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{9} + return fileDescriptor_9198ee693835762e, []int{10} } // CreateHostUserMode determines whether host user creation should be @@ -446,7 +479,7 @@ func (x CreateHostUserMode) String() string { } func (CreateHostUserMode) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{10} + return fileDescriptor_9198ee693835762e, []int{11} } // CreateDatabaseUserMode determines whether database user creation should be @@ -483,7 +516,7 @@ func (x CreateDatabaseUserMode) String() string { } func (CreateDatabaseUserMode) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{11} + return fileDescriptor_9198ee693835762e, []int{12} } // CertExtensionMode specifies the type of extension to use in the cert. @@ -508,7 +541,7 @@ func (x CertExtensionMode) String() string { } func (CertExtensionMode) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{12} + return fileDescriptor_9198ee693835762e, []int{13} } // CertExtensionType represents the certificate type the extension is for. 
@@ -533,7 +566,7 @@ func (x CertExtensionType) String() string { } func (CertExtensionType) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{13} + return fileDescriptor_9198ee693835762e, []int{14} } // PasswordState indicates what is known about existence of user's password. @@ -565,7 +598,7 @@ func (x PasswordState) String() string { } func (PasswordState) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{14} + return fileDescriptor_9198ee693835762e, []int{15} } // MFADeviceKind indicates what is known about existence of user's MFA device. @@ -601,7 +634,7 @@ func (x MFADeviceKind) String() string { } func (MFADeviceKind) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{15} + return fileDescriptor_9198ee693835762e, []int{16} } // SAMLForceAuthn specified whether existing SAML sessions should be accepted or re-authentication @@ -634,7 +667,7 @@ func (x SAMLForceAuthn) String() string { } func (SAMLForceAuthn) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{16} + return fileDescriptor_9198ee693835762e, []int{17} } // SessionState represents the state of a session. 
@@ -668,7 +701,7 @@ func (x SessionState) String() string { } func (SessionState) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{17} + return fileDescriptor_9198ee693835762e, []int{18} } // AlertSeverity represents how problematic/urgent an alert is, and is used to assist @@ -698,7 +731,7 @@ func (x AlertSeverity) String() string { } func (AlertSeverity) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{18} + return fileDescriptor_9198ee693835762e, []int{19} } // RequireMFAType is a type of MFA requirement enforced outside of login, @@ -747,7 +780,7 @@ func (x RequireMFAType) String() string { } func (RequireMFAType) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{19} + return fileDescriptor_9198ee693835762e, []int{20} } // SignatureAlgorithmSuite represents the suite of cryptographic signature algorithms used in the cluster. @@ -797,7 +830,7 @@ func (x SignatureAlgorithmSuite) String() string { } func (SignatureAlgorithmSuite) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{20} + return fileDescriptor_9198ee693835762e, []int{21} } // EntraIDCredentialsSource defines the credentials source for Entra ID. 
@@ -831,7 +864,7 @@ func (x EntraIDCredentialsSource) String() string { } func (EntraIDCredentialsSource) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{21} + return fileDescriptor_9198ee693835762e, []int{22} } // AWSICCredentialsSource indicates where the AWS Identity Center plugin will @@ -871,7 +904,7 @@ func (x AWSICCredentialsSource) String() string { } func (AWSICCredentialsSource) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{22} + return fileDescriptor_9198ee693835762e, []int{23} } // AWSICGroupImportStatus defines Identity Center group and group members @@ -910,7 +943,7 @@ func (x AWSICGroupImportStatusCode) String() string { } func (AWSICGroupImportStatusCode) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{23} + return fileDescriptor_9198ee693835762e, []int{24} } type PluginStatusCode int32 @@ -956,7 +989,7 @@ func (x PluginStatusCode) String() string { } func (PluginStatusCode) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{24} + return fileDescriptor_9198ee693835762e, []int{25} } // OktaPluginSyncStatusCode indicates the possible states of an Okta @@ -992,7 +1025,7 @@ func (x OktaPluginSyncStatusCode) String() string { } func (OktaPluginSyncStatusCode) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{25} + return fileDescriptor_9198ee693835762e, []int{26} } // HeadlessAuthenticationState is a headless authentication state. @@ -1027,7 +1060,7 @@ func (x HeadlessAuthenticationState) String() string { } func (HeadlessAuthenticationState) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{26} + return fileDescriptor_9198ee693835762e, []int{27} } // InstallParamEnrollMode is the mode used to enroll the node into the cluster. 
@@ -1060,7 +1093,7 @@ func (x InstallParamEnrollMode) String() string { } func (InstallParamEnrollMode) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{27} + return fileDescriptor_9198ee693835762e, []int{28} } // The type of a KeepAlive. When adding a new type, please double-check @@ -1143,7 +1176,7 @@ func (x CertAuthoritySpecV2_SigningAlgType) String() string { } func (CertAuthoritySpecV2_SigningAlgType) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{60, 0} + return fileDescriptor_9198ee693835762e, []int{65, 0} } // FIPSEndpointState represents an AWS FIPS endpoint state. @@ -1176,7 +1209,7 @@ func (x ClusterAuditConfigSpecV2_FIPSEndpointState) String() string { } func (ClusterAuditConfigSpecV2_FIPSEndpointState) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{88, 0} + return fileDescriptor_9198ee693835762e, []int{94, 0} } // TraceType is an identification of the checkpoint. @@ -1246,7 +1279,7 @@ func (x ConnectionDiagnosticTrace_TraceType) String() string { } func (ConnectionDiagnosticTrace_TraceType) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{272, 0} + return fileDescriptor_9198ee693835762e, []int{290, 0} } // StatusType describes whether this was a success or a failure. @@ -1275,7 +1308,7 @@ func (x ConnectionDiagnosticTrace_StatusType) String() string { } func (ConnectionDiagnosticTrace_StatusType) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{272, 1} + return fileDescriptor_9198ee693835762e, []int{290, 1} } // OktaAssignmentStatus represents the status of an Okta assignment. 
@@ -1315,7 +1348,7 @@ func (x OktaAssignmentSpecV1_OktaAssignmentStatus) String() string { } func (OktaAssignmentSpecV1_OktaAssignmentStatus) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{362, 0} + return fileDescriptor_9198ee693835762e, []int{383, 0} } // OktaAssignmentTargetType is the type of Okta object that an assignment is targeting. @@ -1347,7 +1380,7 @@ func (x OktaAssignmentTargetV1_OktaAssignmentTargetType) String() string { } func (OktaAssignmentTargetV1_OktaAssignmentTargetType) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{363, 0} + return fileDescriptor_9198ee693835762e, []int{384, 0} } type KeepAlive struct { @@ -1626,10 +1659,12 @@ type DatabaseServerV3 struct { // Spec is the database server spec. Spec DatabaseServerSpecV3 `protobuf:"bytes,5,opt,name=Spec,proto3" json:"spec"` // Status is the database server status. - Status DatabaseServerStatusV3 `protobuf:"bytes,6,opt,name=Status,proto3" json:"status"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Status DatabaseServerStatusV3 `protobuf:"bytes,6,opt,name=Status,proto3" json:"status"` + // The advertised scope of the server, which cannot change once assigned. + Scope string `protobuf:"bytes,7,opt,name=scope,proto3" json:"scope,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *DatabaseServerV3) Reset() { *m = DatabaseServerV3{} } @@ -1677,7 +1712,11 @@ type DatabaseServerSpecV3 struct { // Database is the database proxied by this database server. Database *DatabaseV3 `protobuf:"bytes,12,opt,name=Database,proto3" json:"database,omitempty"` // ProxyIDs is a list of proxy IDs this server is expected to be connected to. 
- ProxyIDs []string `protobuf:"bytes,13,rep,name=ProxyIDs,proto3" json:"proxy_ids,omitempty"` + ProxyIDs []string `protobuf:"bytes,13,rep,name=ProxyIDs,proto3" json:"proxy_ids,omitempty"` + // the name of the Relay group that the server is connected to + RelayGroup string `protobuf:"bytes,14,opt,name=relay_group,json=relayGroup,proto3" json:"relay_group,omitempty"` + // the list of Relay host IDs that the server is connected to + RelayIds []string `protobuf:"bytes,15,rep,name=relay_ids,json=relayIds,proto3" json:"relay_ids,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -1971,11 +2010,17 @@ func (m *DatabaseAdminUser) XXX_DiscardUnknown() { var xxx_messageInfo_DatabaseAdminUser proto.InternalMessageInfo -// OracleOptions contains information about privileged database user used -// for database audit. +// OracleOptions contains Oracle-specific configuration options. type OracleOptions struct { - // AuditUser is the Oracle database user privilege to access internal Oracle audit trail. - AuditUser string `protobuf:"bytes,1,opt,name=AuditUser,proto3" json:"audit_user"` + // AuditUser is the name of the Oracle database user that should be used to access + // the internal audit trail. + AuditUser string `protobuf:"bytes,1,opt,name=AuditUser,proto3" json:"audit_user,omitempty"` + // RetryCount is the maximum number of times to retry connecting to a + // host upon failure. If not specified it defaults to 2, for a total of 3 connection attempts. + RetryCount int32 `protobuf:"varint,2,opt,name=RetryCount,proto3" json:"retry_count,omitempty"` + // ShuffleHostnames, when true, randomizes the order of hosts to connect to from + // the provided list. 
+ ShuffleHostnames bool `protobuf:"varint,3,opt,name=ShuffleHostnames,proto3" json:"shuffle_hostnames,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -2074,7 +2119,7 @@ type AWS struct { RDS RDS `protobuf:"bytes,3,opt,name=RDS,proto3" json:"rds,omitempty"` // AccountID is the AWS account ID this database belongs to. AccountID string `protobuf:"bytes,4,opt,name=AccountID,proto3" json:"account_id,omitempty"` - // ElastiCache contains AWS ElastiCache Redis specific metadata. + // ElastiCache contains Amazon ElastiCache Redis-specific metadata. ElastiCache ElastiCache `protobuf:"bytes,5,opt,name=ElastiCache,proto3" json:"elasticache,omitempty"` // SecretStore contains secret store configurations. SecretStore SecretStore `protobuf:"bytes,6,opt,name=SecretStore,proto3" json:"secret_store,omitempty"` @@ -2082,7 +2127,7 @@ type AWS struct { MemoryDB MemoryDB `protobuf:"bytes,7,opt,name=MemoryDB,proto3" json:"memorydb,omitempty"` // RDSProxy contains AWS Proxy specific metadata. RDSProxy RDSProxy `protobuf:"bytes,8,opt,name=RDSProxy,proto3" json:"rdsproxy,omitempty"` - // RedshiftServerless contains AWS Redshift Serverless specific metadata. + // RedshiftServerless contains Amazon Redshift Serverless-specific metadata. RedshiftServerless RedshiftServerless `protobuf:"bytes,9,opt,name=RedshiftServerless,proto3" json:"redshift_serverless,omitempty"` // ExternalID is an optional AWS external ID used to enable assuming an AWS role across accounts. ExternalID string `protobuf:"bytes,10,opt,name=ExternalID,proto3" json:"external_id,omitempty"` @@ -2097,11 +2142,13 @@ type AWS struct { IAMPolicyStatus IAMPolicyStatus `protobuf:"varint,14,opt,name=IAMPolicyStatus,proto3,enum=types.IAMPolicyStatus" json:"iam_policy_status"` // SessionTags is a list of AWS STS session tags. 
SessionTags map[string]string `protobuf:"bytes,15,rep,name=SessionTags,proto3" json:"session_tags,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` - // DocumentDB contains AWS DocumentDB specific metadata. - DocumentDB DocumentDB `protobuf:"bytes,16,opt,name=DocumentDB,proto3" json:"docdb,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + // DocumentDB contains Amazon DocumentDB-specific metadata. + DocumentDB DocumentDB `protobuf:"bytes,16,opt,name=DocumentDB,proto3" json:"docdb,omitempty"` + // ElastiCacheServerless contains Amazon ElastiCache Serverless metadata. + ElastiCacheServerless ElastiCacheServerless `protobuf:"bytes,17,opt,name=ElastiCacheServerless,proto3" json:"elasticache_serverless,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *AWS) Reset() { *m = AWS{} } @@ -2323,7 +2370,7 @@ func (m *RDSProxy) XXX_DiscardUnknown() { var xxx_messageInfo_RDSProxy proto.InternalMessageInfo -// ElastiCache contains AWS ElastiCache Redis specific metadata. +// ElastiCache contains Amazon ElastiCache Redis-specific metadata. type ElastiCache struct { // ReplicationGroupID is the Redis replication group ID. ReplicationGroupID string `protobuf:"bytes,1,opt,name=ReplicationGroupID,proto3" json:"replication_group_id,omitempty"` @@ -2371,6 +2418,48 @@ func (m *ElastiCache) XXX_DiscardUnknown() { var xxx_messageInfo_ElastiCache proto.InternalMessageInfo +// ElastiCacheServerless contains Amazon ElastiCache Serverless metadata. +type ElastiCacheServerless struct { + // CacheName is an ElastiCache Serverless cache name. 
+ CacheName string `protobuf:"bytes,1,opt,name=CacheName,proto3" json:"cache_name,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ElastiCacheServerless) Reset() { *m = ElastiCacheServerless{} } +func (m *ElastiCacheServerless) String() string { return proto.CompactTextString(m) } +func (*ElastiCacheServerless) ProtoMessage() {} +func (*ElastiCacheServerless) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{20} +} +func (m *ElastiCacheServerless) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ElastiCacheServerless) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ElastiCacheServerless.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ElastiCacheServerless) XXX_Merge(src proto.Message) { + xxx_messageInfo_ElastiCacheServerless.Merge(m, src) +} +func (m *ElastiCacheServerless) XXX_Size() int { + return m.Size() +} +func (m *ElastiCacheServerless) XXX_DiscardUnknown() { + xxx_messageInfo_ElastiCacheServerless.DiscardUnknown(m) +} + +var xxx_messageInfo_ElastiCacheServerless proto.InternalMessageInfo + // MemoryDB contains AWS MemoryDB specific metadata. type MemoryDB struct { // ClusterName is the name of the MemoryDB cluster. 
@@ -2390,7 +2479,7 @@ func (m *MemoryDB) Reset() { *m = MemoryDB{} } func (m *MemoryDB) String() string { return proto.CompactTextString(m) } func (*MemoryDB) ProtoMessage() {} func (*MemoryDB) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{20} + return fileDescriptor_9198ee693835762e, []int{21} } func (m *MemoryDB) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2419,7 +2508,7 @@ func (m *MemoryDB) XXX_DiscardUnknown() { var xxx_messageInfo_MemoryDB proto.InternalMessageInfo -// RedshiftServerless contains AWS Redshift Serverless specific metadata. +// RedshiftServerless contains Amazon Redshift Serverless-specific metadata. type RedshiftServerless struct { // WorkgroupName is the workgroup name. WorkgroupName string `protobuf:"bytes,1,opt,name=WorkgroupName,proto3" json:"workgroup_name,omitempty"` @@ -2436,7 +2525,7 @@ func (m *RedshiftServerless) Reset() { *m = RedshiftServerless{} } func (m *RedshiftServerless) String() string { return proto.CompactTextString(m) } func (*RedshiftServerless) ProtoMessage() {} func (*RedshiftServerless) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{21} + return fileDescriptor_9198ee693835762e, []int{22} } func (m *RedshiftServerless) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2482,7 +2571,7 @@ func (m *OpenSearch) Reset() { *m = OpenSearch{} } func (m *OpenSearch) String() string { return proto.CompactTextString(m) } func (*OpenSearch) ProtoMessage() {} func (*OpenSearch) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{22} + return fileDescriptor_9198ee693835762e, []int{23} } func (m *OpenSearch) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2511,7 +2600,7 @@ func (m *OpenSearch) XXX_DiscardUnknown() { var xxx_messageInfo_OpenSearch proto.InternalMessageInfo -// DocumentDB contains AWS DocumentDB specific metadata. +// DocumentDB contains Amazon DocumentDB-specific metadata. 
type DocumentDB struct { // ClusterID is the cluster identifier. ClusterID string `protobuf:"bytes,1,opt,name=ClusterID,proto3" json:"cluster_id,omitempty"` @@ -2528,7 +2617,7 @@ func (m *DocumentDB) Reset() { *m = DocumentDB{} } func (m *DocumentDB) String() string { return proto.CompactTextString(m) } func (*DocumentDB) ProtoMessage() {} func (*DocumentDB) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{23} + return fileDescriptor_9198ee693835762e, []int{24} } func (m *DocumentDB) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2557,12 +2646,15 @@ func (m *DocumentDB) XXX_DiscardUnknown() { var xxx_messageInfo_DocumentDB proto.InternalMessageInfo -// GCPCloudSQL contains parameters specific to GCP Cloud SQL databases. +// GCPCloudSQL contains parameters specific to GCP databases. +// The name "GCPCloudSQL" is a legacy from a time when only GCP Cloud SQL was supported. type GCPCloudSQL struct { // ProjectID is the GCP project ID the Cloud SQL instance resides in. ProjectID string `protobuf:"bytes,1,opt,name=ProjectID,proto3" json:"project_id,omitempty"` // InstanceID is the Cloud SQL instance ID. - InstanceID string `protobuf:"bytes,2,opt,name=InstanceID,proto3" json:"instance_id,omitempty"` + InstanceID string `protobuf:"bytes,2,opt,name=InstanceID,proto3" json:"instance_id,omitempty"` + // AlloyDB contains AlloyDB specific configuration elements. 
+ AlloyDB AlloyDB `protobuf:"bytes,3,opt,name=AlloyDB,proto3" json:"alloydb,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -2572,7 +2664,7 @@ func (m *GCPCloudSQL) Reset() { *m = GCPCloudSQL{} } func (m *GCPCloudSQL) String() string { return proto.CompactTextString(m) } func (*GCPCloudSQL) ProtoMessage() {} func (*GCPCloudSQL) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{24} + return fileDescriptor_9198ee693835762e, []int{25} } func (m *GCPCloudSQL) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2601,6 +2693,50 @@ func (m *GCPCloudSQL) XXX_DiscardUnknown() { var xxx_messageInfo_GCPCloudSQL proto.InternalMessageInfo +// AlloyDB contains AlloyDB-specific configuration elements. +type AlloyDB struct { + // EndpointType is the database endpoint type to use. Should be one of: "private", "public", "psc". + EndpointType string `protobuf:"bytes,1,opt,name=EndpointType,proto3" json:"endpoint_type,omitempty"` + // EndpointOverride is an override of the endpoint address to use. 
+ EndpointOverride string `protobuf:"bytes,2,opt,name=EndpointOverride,proto3" json:"endpoint_override,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *AlloyDB) Reset() { *m = AlloyDB{} } +func (m *AlloyDB) String() string { return proto.CompactTextString(m) } +func (*AlloyDB) ProtoMessage() {} +func (*AlloyDB) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{26} +} +func (m *AlloyDB) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *AlloyDB) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_AlloyDB.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *AlloyDB) XXX_Merge(src proto.Message) { + xxx_messageInfo_AlloyDB.Merge(m, src) +} +func (m *AlloyDB) XXX_Size() int { + return m.Size() +} +func (m *AlloyDB) XXX_DiscardUnknown() { + xxx_messageInfo_AlloyDB.DiscardUnknown(m) +} + +var xxx_messageInfo_AlloyDB proto.InternalMessageInfo + // Azure contains Azure specific database metadata. type Azure struct { // Name is the Azure database server name. 
@@ -2620,7 +2756,7 @@ func (m *Azure) Reset() { *m = Azure{} } func (m *Azure) String() string { return proto.CompactTextString(m) } func (*Azure) ProtoMessage() {} func (*Azure) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{25} + return fileDescriptor_9198ee693835762e, []int{27} } func (m *Azure) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2662,7 +2798,7 @@ func (m *AzureRedis) Reset() { *m = AzureRedis{} } func (m *AzureRedis) String() string { return proto.CompactTextString(m) } func (*AzureRedis) ProtoMessage() {} func (*AzureRedis) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{26} + return fileDescriptor_9198ee693835762e, []int{28} } func (m *AzureRedis) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2718,7 +2854,7 @@ func (m *AD) Reset() { *m = AD{} } func (m *AD) String() string { return proto.CompactTextString(m) } func (*AD) ProtoMessage() {} func (*AD) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{27} + return fileDescriptor_9198ee693835762e, []int{29} } func (m *AD) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2774,7 +2910,7 @@ func (m *DatabaseTLS) Reset() { *m = DatabaseTLS{} } func (m *DatabaseTLS) String() string { return proto.CompactTextString(m) } func (*DatabaseTLS) ProtoMessage() {} func (*DatabaseTLS) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{28} + return fileDescriptor_9198ee693835762e, []int{30} } func (m *DatabaseTLS) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2817,7 +2953,7 @@ func (m *MySQLOptions) Reset() { *m = MySQLOptions{} } func (m *MySQLOptions) String() string { return proto.CompactTextString(m) } func (*MySQLOptions) ProtoMessage() {} func (*MySQLOptions) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{29} + return fileDescriptor_9198ee693835762e, []int{31} } func (m *MySQLOptions) XXX_Unmarshal(b []byte) error { 
return m.Unmarshal(b) @@ -2859,7 +2995,7 @@ func (m *MongoAtlas) Reset() { *m = MongoAtlas{} } func (m *MongoAtlas) String() string { return proto.CompactTextString(m) } func (*MongoAtlas) ProtoMessage() {} func (*MongoAtlas) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{30} + return fileDescriptor_9198ee693835762e, []int{32} } func (m *MongoAtlas) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2902,7 +3038,7 @@ func (m *InstanceV1) Reset() { *m = InstanceV1{} } func (m *InstanceV1) String() string { return proto.CompactTextString(m) } func (*InstanceV1) ProtoMessage() {} func (*InstanceV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{31} + return fileDescriptor_9198ee693835762e, []int{33} } func (m *InstanceV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2963,7 +3099,7 @@ func (m *InstanceSpecV1) Reset() { *m = InstanceSpecV1{} } func (m *InstanceSpecV1) String() string { return proto.CompactTextString(m) } func (*InstanceSpecV1) ProtoMessage() {} func (*InstanceSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{32} + return fileDescriptor_9198ee693835762e, []int{34} } func (m *InstanceSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3009,7 +3145,7 @@ func (m *SystemClockMeasurement) Reset() { *m = SystemClockMeasurement{} func (m *SystemClockMeasurement) String() string { return proto.CompactTextString(m) } func (*SystemClockMeasurement) ProtoMessage() {} func (*SystemClockMeasurement) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{33} + return fileDescriptor_9198ee693835762e, []int{35} } func (m *SystemClockMeasurement) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3079,7 +3215,7 @@ func (m *InstanceControlLogEntry) Reset() { *m = InstanceControlLogEntry func (m *InstanceControlLogEntry) String() string { return proto.CompactTextString(m) } func (*InstanceControlLogEntry) 
ProtoMessage() {} func (*InstanceControlLogEntry) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{34} + return fileDescriptor_9198ee693835762e, []int{36} } func (m *InstanceControlLogEntry) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3133,7 +3269,7 @@ func (m *UpdaterV2Info) Reset() { *m = UpdaterV2Info{} } func (m *UpdaterV2Info) String() string { return proto.CompactTextString(m) } func (*UpdaterV2Info) ProtoMessage() {} func (*UpdaterV2Info) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{35} + return fileDescriptor_9198ee693835762e, []int{37} } func (m *UpdaterV2Info) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3191,7 +3327,7 @@ func (m *InstanceFilter) Reset() { *m = InstanceFilter{} } func (m *InstanceFilter) String() string { return proto.CompactTextString(m) } func (*InstanceFilter) ProtoMessage() {} func (*InstanceFilter) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{36} + return fileDescriptor_9198ee693835762e, []int{38} } func (m *InstanceFilter) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3231,16 +3367,18 @@ type ServerV2 struct { // Metadata is resource metadata Metadata Metadata `protobuf:"bytes,4,opt,name=Metadata,proto3" json:"metadata"` // Spec is a server spec - Spec ServerSpecV2 `protobuf:"bytes,5,opt,name=Spec,proto3" json:"spec"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Spec ServerSpecV2 `protobuf:"bytes,5,opt,name=Spec,proto3" json:"spec"` + // The advertised scope of the server, which cannot change once assigned. 
+ Scope string `protobuf:"bytes,6,opt,name=scope,proto3" json:"scope,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *ServerV2) Reset() { *m = ServerV2{} } func (*ServerV2) ProtoMessage() {} func (*ServerV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{37} + return fileDescriptor_9198ee693835762e, []int{39} } func (m *ServerV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3295,7 +3433,13 @@ type ServerSpecV2 struct { CloudMetadata *CloudMetadata `protobuf:"bytes,14,opt,name=CloudMetadata,proto3" json:"cloud_metadata,omitempty"` // GitHub contains info about GitHub proxies where each server represents a // GitHub organization. - GitHub *GitHubServerMetadata `protobuf:"bytes,15,opt,name=git_hub,json=gitHub,proto3" json:"github,omitempty"` + GitHub *GitHubServerMetadata `protobuf:"bytes,15,opt,name=git_hub,json=gitHub,proto3" json:"github,omitempty"` + // the name of the Relay group that the server is connected to + RelayGroup string `protobuf:"bytes,16,opt,name=relay_group,json=relayGroup,proto3" json:"relay_group,omitempty"` + // the list of Relay host IDs that the server is connected to + RelayIds []string `protobuf:"bytes,17,rep,name=relay_ids,json=relayIds,proto3" json:"relay_ids,omitempty"` + // component_features represents features supported by this server + ComponentFeatures *v1.ComponentFeatures `protobuf:"bytes,18,opt,name=component_features,json=componentFeatures,proto3" json:"component_features,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -3305,7 +3449,7 @@ func (m *ServerSpecV2) Reset() { *m = ServerSpecV2{} } func (m *ServerSpecV2) String() string { return proto.CompactTextString(m) } func (*ServerSpecV2) ProtoMessage() {} func (*ServerSpecV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{38} + return 
fileDescriptor_9198ee693835762e, []int{40} } func (m *ServerSpecV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3359,7 +3503,7 @@ func (m *AWSInfo) Reset() { *m = AWSInfo{} } func (m *AWSInfo) String() string { return proto.CompactTextString(m) } func (*AWSInfo) ProtoMessage() {} func (*AWSInfo) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{39} + return fileDescriptor_9198ee693835762e, []int{41} } func (m *AWSInfo) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3402,7 +3546,7 @@ func (m *CloudMetadata) Reset() { *m = CloudMetadata{} } func (m *CloudMetadata) String() string { return proto.CompactTextString(m) } func (*CloudMetadata) ProtoMessage() {} func (*CloudMetadata) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{40} + return fileDescriptor_9198ee693835762e, []int{42} } func (m *CloudMetadata) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3447,7 +3591,7 @@ func (m *GitHubServerMetadata) Reset() { *m = GitHubServerMetadata{} } func (m *GitHubServerMetadata) String() string { return proto.CompactTextString(m) } func (*GitHubServerMetadata) ProtoMessage() {} func (*GitHubServerMetadata) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{41} + return fileDescriptor_9198ee693835762e, []int{43} } func (m *GitHubServerMetadata) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3487,16 +3631,18 @@ type AppServerV3 struct { // Metadata is the app server metadata. Metadata Metadata `protobuf:"bytes,4,opt,name=Metadata,proto3" json:"metadata"` // Spec is the app server spec. - Spec AppServerSpecV3 `protobuf:"bytes,5,opt,name=Spec,proto3" json:"spec"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Spec AppServerSpecV3 `protobuf:"bytes,5,opt,name=Spec,proto3" json:"spec"` + // The advertised scope of the server, which cannot change once assigned. 
+ Scope string `protobuf:"bytes,6,opt,name=scope,proto3" json:"scope,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *AppServerV3) Reset() { *m = AppServerV3{} } func (*AppServerV3) ProtoMessage() {} func (*AppServerV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{42} + return fileDescriptor_9198ee693835762e, []int{44} } func (m *AppServerV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3538,17 +3684,23 @@ type AppServerSpecV3 struct { // App is the app proxied by this app server. App *AppV3 `protobuf:"bytes,5,opt,name=App,proto3" json:"app"` // ProxyIDs is a list of proxy IDs this server is expected to be connected to. - ProxyIDs []string `protobuf:"bytes,6,rep,name=ProxyIDs,proto3" json:"proxy_ids,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + ProxyIDs []string `protobuf:"bytes,6,rep,name=ProxyIDs,proto3" json:"proxy_ids,omitempty"` + // the name of the Relay group that the server is connected to + RelayGroup string `protobuf:"bytes,7,opt,name=relay_group,json=relayGroup,proto3" json:"relay_group,omitempty"` + // the list of Relay host IDs that the server is connected to + RelayIds []string `protobuf:"bytes,8,rep,name=relay_ids,json=relayIds,proto3" json:"relay_ids,omitempty"` + // component_features contains features supported by this app server. 
+ ComponentFeatures *v1.ComponentFeatures `protobuf:"bytes,9,opt,name=component_features,json=componentFeatures,proto3" json:"component_features,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *AppServerSpecV3) Reset() { *m = AppServerSpecV3{} } func (m *AppServerSpecV3) String() string { return proto.CompactTextString(m) } func (*AppServerSpecV3) ProtoMessage() {} func (*AppServerSpecV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{43} + return fileDescriptor_9198ee693835762e, []int{45} } func (m *AppServerSpecV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3590,7 +3742,7 @@ func (m *AppV3List) Reset() { *m = AppV3List{} } func (m *AppV3List) String() string { return proto.CompactTextString(m) } func (*AppV3List) ProtoMessage() {} func (*AppV3List) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{44} + return fileDescriptor_9198ee693835762e, []int{46} } func (m *AppV3List) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3640,7 +3792,7 @@ type AppV3 struct { func (m *AppV3) Reset() { *m = AppV3{} } func (*AppV3) ProtoMessage() {} func (*AppV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{45} + return fileDescriptor_9198ee693835762e, []int{47} } func (m *AppV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3692,7 +3844,7 @@ func (m *CORSPolicy) Reset() { *m = CORSPolicy{} } func (m *CORSPolicy) String() string { return proto.CompactTextString(m) } func (*CORSPolicy) ProtoMessage() {} func (*CORSPolicy) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{46} + return fileDescriptor_9198ee693835762e, []int{48} } func (m *CORSPolicy) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3740,7 +3892,7 @@ func (m *IdentityCenterPermissionSet) Reset() { *m = IdentityCenterPermi func (m *IdentityCenterPermissionSet) String() 
string { return proto.CompactTextString(m) } func (*IdentityCenterPermissionSet) ProtoMessage() {} func (*IdentityCenterPermissionSet) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{47} + return fileDescriptor_9198ee693835762e, []int{49} } func (m *IdentityCenterPermissionSet) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3785,7 +3937,7 @@ func (m *AppIdentityCenter) Reset() { *m = AppIdentityCenter{} } func (m *AppIdentityCenter) String() string { return proto.CompactTextString(m) } func (*AppIdentityCenter) ProtoMessage() {} func (*AppIdentityCenter) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{48} + return fileDescriptor_9198ee693835762e, []int{50} } func (m *AppIdentityCenter) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3853,17 +4005,19 @@ type AppSpecV3 struct { // request originated from. This should be true if your proxy has multiple proxy public addrs and you // want the app to be accessible from any of them. If `public_addr` is explicitly set in the app spec, // setting this value to true will overwrite that public address in the web UI. - UseAnyProxyPublicAddr bool `protobuf:"varint,14,opt,name=UseAnyProxyPublicAddr,proto3" json:"use_any_proxy_public_addr,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + UseAnyProxyPublicAddr bool `protobuf:"varint,14,opt,name=UseAnyProxyPublicAddr,proto3" json:"use_any_proxy_public_addr,omitempty"` + // MCP contains MCP server-related configurations. 
+ MCP *MCP `protobuf:"bytes,15,opt,name=MCP,proto3" json:"mcp,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *AppSpecV3) Reset() { *m = AppSpecV3{} } func (m *AppSpecV3) String() string { return proto.CompactTextString(m) } func (*AppSpecV3) ProtoMessage() {} func (*AppSpecV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{49} + return fileDescriptor_9198ee693835762e, []int{51} } func (m *AppSpecV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3892,6 +4046,53 @@ func (m *AppSpecV3) XXX_DiscardUnknown() { var xxx_messageInfo_AppSpecV3 proto.InternalMessageInfo +// MCP contains MCP server-related configurations. +type MCP struct { + // Command to launch stdio-based MCP servers. + Command string `protobuf:"bytes,1,opt,name=command,proto3" json:"command,omitempty"` + // Args to execute with the command. + Args []string `protobuf:"bytes,2,rep,name=args,proto3" json:"args,omitempty"` + // RunAsHostUser is the host user account under which the command will be + // executed. Required for stdio-based MCP servers. 
+ RunAsHostUser string `protobuf:"bytes,3,opt,name=run_as_host_user,json=runAsHostUser,proto3" json:"run_as_host_user,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *MCP) Reset() { *m = MCP{} } +func (m *MCP) String() string { return proto.CompactTextString(m) } +func (*MCP) ProtoMessage() {} +func (*MCP) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{52} +} +func (m *MCP) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *MCP) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MCP.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *MCP) XXX_Merge(src proto.Message) { + xxx_messageInfo_MCP.Merge(m, src) +} +func (m *MCP) XXX_Size() int { + return m.Size() +} +func (m *MCP) XXX_DiscardUnknown() { + xxx_messageInfo_MCP.DiscardUnknown(m) +} + +var xxx_messageInfo_MCP proto.InternalMessageInfo + // Rewrite is a list of rewriting rules to apply to requests and responses. 
type Rewrite struct { // Redirect defines a list of hosts which will be rewritten to the public @@ -3911,7 +4112,7 @@ func (m *Rewrite) Reset() { *m = Rewrite{} } func (m *Rewrite) String() string { return proto.CompactTextString(m) } func (*Rewrite) ProtoMessage() {} func (*Rewrite) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{50} + return fileDescriptor_9198ee693835762e, []int{53} } func (m *Rewrite) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3955,7 +4156,7 @@ func (m *Header) Reset() { *m = Header{} } func (m *Header) String() string { return proto.CompactTextString(m) } func (*Header) ProtoMessage() {} func (*Header) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{51} + return fileDescriptor_9198ee693835762e, []int{54} } func (m *Header) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4002,7 +4203,7 @@ type PortRange struct { func (m *PortRange) Reset() { *m = PortRange{} } func (*PortRange) ProtoMessage() {} func (*PortRange) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{52} + return fileDescriptor_9198ee693835762e, []int{55} } func (m *PortRange) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4049,7 +4250,7 @@ func (m *CommandLabelV2) Reset() { *m = CommandLabelV2{} } func (m *CommandLabelV2) String() string { return proto.CompactTextString(m) } func (*CommandLabelV2) ProtoMessage() {} func (*CommandLabelV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{53} + return fileDescriptor_9198ee693835762e, []int{56} } func (m *CommandLabelV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4094,7 +4295,7 @@ func (m *AppAWS) Reset() { *m = AppAWS{} } func (m *AppAWS) String() string { return proto.CompactTextString(m) } func (*AppAWS) ProtoMessage() {} func (*AppAWS) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{54} + return fileDescriptor_9198ee693835762e, 
[]int{57} } func (m *AppAWS) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4142,7 +4343,7 @@ func (m *AppAWSRolesAnywhereProfile) Reset() { *m = AppAWSRolesAnywhereP func (m *AppAWSRolesAnywhereProfile) String() string { return proto.CompactTextString(m) } func (*AppAWSRolesAnywhereProfile) ProtoMessage() {} func (*AppAWSRolesAnywhereProfile) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{55} + return fileDescriptor_9198ee693835762e, []int{58} } func (m *AppAWSRolesAnywhereProfile) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4188,7 +4389,7 @@ func (m *SSHKeyPair) Reset() { *m = SSHKeyPair{} } func (m *SSHKeyPair) String() string { return proto.CompactTextString(m) } func (*SSHKeyPair) ProtoMessage() {} func (*SSHKeyPair) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{56} + return fileDescriptor_9198ee693835762e, []int{59} } func (m *SSHKeyPair) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4224,17 +4425,19 @@ type TLSKeyPair struct { // Key is a PEM encoded TLS key Key []byte `protobuf:"bytes,2,opt,name=Key,proto3" json:"key,omitempty"` // KeyType is the type of the Key. - KeyType PrivateKeyType `protobuf:"varint,3,opt,name=KeyType,proto3,enum=types.PrivateKeyType" json:"key_type,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + KeyType PrivateKeyType `protobuf:"varint,3,opt,name=KeyType,proto3,enum=types.PrivateKeyType" json:"key_type,omitempty"` + // CRL is an empty DER-encoded revocation list. 
+ CRL []byte `protobuf:"bytes,4,opt,name=CRL,proto3" json:"crl"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *TLSKeyPair) Reset() { *m = TLSKeyPair{} } func (m *TLSKeyPair) String() string { return proto.CompactTextString(m) } func (*TLSKeyPair) ProtoMessage() {} func (*TLSKeyPair) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{57} + return fileDescriptor_9198ee693835762e, []int{60} } func (m *TLSKeyPair) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4280,7 +4483,7 @@ func (m *JWTKeyPair) Reset() { *m = JWTKeyPair{} } func (m *JWTKeyPair) String() string { return proto.CompactTextString(m) } func (*JWTKeyPair) ProtoMessage() {} func (*JWTKeyPair) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{58} + return fileDescriptor_9198ee693835762e, []int{61} } func (m *JWTKeyPair) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4309,6 +4512,97 @@ func (m *JWTKeyPair) XXX_DiscardUnknown() { var xxx_messageInfo_JWTKeyPair proto.InternalMessageInfo +// EncryptionKeyPair is a PKIX ASN.1 DER encoded keypair used for encrypting and decrypting data. +type EncryptionKeyPair struct { + // PublicKey is a PKIX ASN.1 DER encoded public key. + PublicKey []byte `protobuf:"bytes,1,opt,name=public_key,json=publicKey,proto3" json:"public_key,omitempty"` + // PrivateKey is a PKCS#8 ASN.1 DER encoded private key. + PrivateKey []byte `protobuf:"bytes,2,opt,name=private_key,json=privateKey,proto3" json:"private_key,omitempty"` + // PrivateKeyType is the type of the PrivateKey. + PrivateKeyType PrivateKeyType `protobuf:"varint,3,opt,name=private_key_type,json=privateKeyType,proto3,enum=types.PrivateKeyType" json:"private_key_type,omitempty"` + // Hash function used during OAEP encryption/decryption. It maps directly to the possible + // values of [crypto.Hash] in the go crypto package. 
+ Hash uint32 `protobuf:"varint,4,opt,name=hash,proto3" json:"hash,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *EncryptionKeyPair) Reset() { *m = EncryptionKeyPair{} } +func (m *EncryptionKeyPair) String() string { return proto.CompactTextString(m) } +func (*EncryptionKeyPair) ProtoMessage() {} +func (*EncryptionKeyPair) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{62} +} +func (m *EncryptionKeyPair) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *EncryptionKeyPair) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_EncryptionKeyPair.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *EncryptionKeyPair) XXX_Merge(src proto.Message) { + xxx_messageInfo_EncryptionKeyPair.Merge(m, src) +} +func (m *EncryptionKeyPair) XXX_Size() int { + return m.Size() +} +func (m *EncryptionKeyPair) XXX_DiscardUnknown() { + xxx_messageInfo_EncryptionKeyPair.DiscardUnknown(m) +} + +var xxx_messageInfo_EncryptionKeyPair proto.InternalMessageInfo + +// A public key to be used as a recipient during age encryption of session recordings. +type AgeEncryptionKey struct { + // A PKIX ASN.1 DER encoded public key used for key wrapping during age encryption. Expected to be RSA 4096. 
+ PublicKey []byte `protobuf:"bytes,1,opt,name=public_key,json=publicKey,proto3" json:"public_key"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *AgeEncryptionKey) Reset() { *m = AgeEncryptionKey{} } +func (m *AgeEncryptionKey) String() string { return proto.CompactTextString(m) } +func (*AgeEncryptionKey) ProtoMessage() {} +func (*AgeEncryptionKey) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{63} +} +func (m *AgeEncryptionKey) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *AgeEncryptionKey) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_AgeEncryptionKey.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *AgeEncryptionKey) XXX_Merge(src proto.Message) { + xxx_messageInfo_AgeEncryptionKey.Merge(m, src) +} +func (m *AgeEncryptionKey) XXX_Size() int { + return m.Size() +} +func (m *AgeEncryptionKey) XXX_DiscardUnknown() { + xxx_messageInfo_AgeEncryptionKey.DiscardUnknown(m) +} + +var xxx_messageInfo_AgeEncryptionKey proto.InternalMessageInfo + // CertAuthorityV2 is version 2 resource spec for Cert Authority type CertAuthorityV2 struct { // Kind is a resource kind @@ -4329,7 +4623,7 @@ type CertAuthorityV2 struct { func (m *CertAuthorityV2) Reset() { *m = CertAuthorityV2{} } func (*CertAuthorityV2) ProtoMessage() {} func (*CertAuthorityV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{59} + return fileDescriptor_9198ee693835762e, []int{64} } func (m *CertAuthorityV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4390,7 +4684,7 @@ func (m *CertAuthoritySpecV2) Reset() { *m = CertAuthoritySpecV2{} } func (m *CertAuthoritySpecV2) String() string { return proto.CompactTextString(m) } func (*CertAuthoritySpecV2) 
ProtoMessage() {} func (*CertAuthoritySpecV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{60} + return fileDescriptor_9198ee693835762e, []int{65} } func (m *CertAuthoritySpecV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4436,7 +4730,7 @@ func (m *CAKeySet) Reset() { *m = CAKeySet{} } func (m *CAKeySet) String() string { return proto.CompactTextString(m) } func (*CAKeySet) ProtoMessage() {} func (*CAKeySet) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{61} + return fileDescriptor_9198ee693835762e, []int{66} } func (m *CAKeySet) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4481,7 +4775,7 @@ func (m *RoleMapping) Reset() { *m = RoleMapping{} } func (m *RoleMapping) String() string { return proto.CompactTextString(m) } func (*RoleMapping) ProtoMessage() {} func (*RoleMapping) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{62} + return fileDescriptor_9198ee693835762e, []int{67} } func (m *RoleMapping) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4529,7 +4823,7 @@ type ProvisionTokenV1 struct { func (m *ProvisionTokenV1) Reset() { *m = ProvisionTokenV1{} } func (*ProvisionTokenV1) ProtoMessage() {} func (*ProvisionTokenV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{63} + return fileDescriptor_9198ee693835762e, []int{68} } func (m *ProvisionTokenV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4582,7 +4876,7 @@ type ProvisionTokenV2 struct { func (m *ProvisionTokenV2) Reset() { *m = ProvisionTokenV2{} } func (*ProvisionTokenV2) ProtoMessage() {} func (*ProvisionTokenV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{64} + return fileDescriptor_9198ee693835762e, []int{69} } func (m *ProvisionTokenV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4624,7 +4918,7 @@ func (m *ProvisionTokenV2List) Reset() { *m = ProvisionTokenV2List{} } func (m 
*ProvisionTokenV2List) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenV2List) ProtoMessage() {} func (*ProvisionTokenV2List) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{65} + return fileDescriptor_9198ee693835762e, []int{70} } func (m *ProvisionTokenV2List) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4676,7 +4970,7 @@ func (m *TokenRule) Reset() { *m = TokenRule{} } func (m *TokenRule) String() string { return proto.CompactTextString(m) } func (*TokenRule) ProtoMessage() {} func (*TokenRule) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{66} + return fileDescriptor_9198ee693835762e, []int{71} } func (m *TokenRule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4756,17 +5050,19 @@ type ProvisionTokenSpecV2 struct { // BoundKeypair allows the configuration of options specific to the "bound_keypair" join method. BoundKeypair *ProvisionTokenSpecV2BoundKeypair `protobuf:"bytes,19,opt,name=BoundKeypair,proto3" json:"bound_keypair,omitempty"` // AzureDevops allows the configuration of options specific to the "azure_devops" join method. - AzureDevops *ProvisionTokenSpecV2AzureDevops `protobuf:"bytes,20,opt,name=AzureDevops,proto3" json:"azure_devops,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + AzureDevops *ProvisionTokenSpecV2AzureDevops `protobuf:"bytes,20,opt,name=AzureDevops,proto3" json:"azure_devops,omitempty"` + // Env0 allows the configuration of options specific to the "env0" join method. 
+ Env0 *ProvisionTokenSpecV2Env0 `protobuf:"bytes,21,opt,name=Env0,proto3" json:"env0,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *ProvisionTokenSpecV2) Reset() { *m = ProvisionTokenSpecV2{} } func (m *ProvisionTokenSpecV2) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2) ProtoMessage() {} func (*ProvisionTokenSpecV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{67} + return fileDescriptor_9198ee693835762e, []int{72} } func (m *ProvisionTokenSpecV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4815,7 +5111,7 @@ func (m *ProvisionTokenSpecV2AzureDevops) Reset() { *m = ProvisionTokenS func (m *ProvisionTokenSpecV2AzureDevops) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2AzureDevops) ProtoMessage() {} func (*ProvisionTokenSpecV2AzureDevops) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{68} + return fileDescriptor_9198ee693835762e, []int{73} } func (m *ProvisionTokenSpecV2AzureDevops) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4887,7 +5183,7 @@ func (m *ProvisionTokenSpecV2AzureDevops_Rule) Reset() { *m = ProvisionT func (m *ProvisionTokenSpecV2AzureDevops_Rule) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2AzureDevops_Rule) ProtoMessage() {} func (*ProvisionTokenSpecV2AzureDevops_Rule) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{68, 0} + return fileDescriptor_9198ee693835762e, []int{73, 0} } func (m *ProvisionTokenSpecV2AzureDevops_Rule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4939,7 +5235,7 @@ func (m *ProvisionTokenSpecV2TPM) Reset() { *m = ProvisionTokenSpecV2TPM func (m *ProvisionTokenSpecV2TPM) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2TPM) ProtoMessage() {} func 
(*ProvisionTokenSpecV2TPM) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{69} + return fileDescriptor_9198ee693835762e, []int{74} } func (m *ProvisionTokenSpecV2TPM) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4995,7 +5291,7 @@ func (m *ProvisionTokenSpecV2TPM_Rule) Reset() { *m = ProvisionTokenSpec func (m *ProvisionTokenSpecV2TPM_Rule) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2TPM_Rule) ProtoMessage() {} func (*ProvisionTokenSpecV2TPM_Rule) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{69, 0} + return fileDescriptor_9198ee693835762e, []int{74, 0} } func (m *ProvisionTokenSpecV2TPM_Rule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5064,7 +5360,7 @@ func (m *ProvisionTokenSpecV2GitHub) Reset() { *m = ProvisionTokenSpecV2 func (m *ProvisionTokenSpecV2GitHub) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2GitHub) ProtoMessage() {} func (*ProvisionTokenSpecV2GitHub) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{70} + return fileDescriptor_9198ee693835762e, []int{75} } func (m *ProvisionTokenSpecV2GitHub) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5126,7 +5422,7 @@ func (m *ProvisionTokenSpecV2GitHub_Rule) Reset() { *m = ProvisionTokenS func (m *ProvisionTokenSpecV2GitHub_Rule) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2GitHub_Rule) ProtoMessage() {} func (*ProvisionTokenSpecV2GitHub_Rule) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{70, 0} + return fileDescriptor_9198ee693835762e, []int{75, 0} } func (m *ProvisionTokenSpecV2GitHub_Rule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5179,7 +5475,7 @@ func (m *ProvisionTokenSpecV2GitLab) Reset() { *m = ProvisionTokenSpecV2 func (m *ProvisionTokenSpecV2GitLab) String() string { return proto.CompactTextString(m) } func 
(*ProvisionTokenSpecV2GitLab) ProtoMessage() {} func (*ProvisionTokenSpecV2GitLab) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{71} + return fileDescriptor_9198ee693835762e, []int{76} } func (m *ProvisionTokenSpecV2GitLab) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5281,7 +5577,7 @@ func (m *ProvisionTokenSpecV2GitLab_Rule) Reset() { *m = ProvisionTokenS func (m *ProvisionTokenSpecV2GitLab_Rule) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2GitLab_Rule) ProtoMessage() {} func (*ProvisionTokenSpecV2GitLab_Rule) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{71, 0} + return fileDescriptor_9198ee693835762e, []int{76, 0} } func (m *ProvisionTokenSpecV2GitLab_Rule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5326,7 +5622,7 @@ func (m *ProvisionTokenSpecV2CircleCI) Reset() { *m = ProvisionTokenSpec func (m *ProvisionTokenSpecV2CircleCI) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2CircleCI) ProtoMessage() {} func (*ProvisionTokenSpecV2CircleCI) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{72} + return fileDescriptor_9198ee693835762e, []int{77} } func (m *ProvisionTokenSpecV2CircleCI) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5367,7 +5663,7 @@ func (m *ProvisionTokenSpecV2CircleCI_Rule) Reset() { *m = ProvisionToke func (m *ProvisionTokenSpecV2CircleCI_Rule) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2CircleCI_Rule) ProtoMessage() {} func (*ProvisionTokenSpecV2CircleCI_Rule) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{72, 0} + return fileDescriptor_9198ee693835762e, []int{77, 0} } func (m *ProvisionTokenSpecV2CircleCI_Rule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5404,7 +5700,10 @@ type ProvisionTokenSpecV2Spacelift struct { Allow []*ProvisionTokenSpecV2Spacelift_Rule 
`protobuf:"bytes,1,rep,name=Allow,proto3" json:"allow,omitempty"` // Hostname is the hostname of the Spacelift tenant that tokens // will originate from. E.g `example.app.spacelift.io` - Hostname string `protobuf:"bytes,2,opt,name=Hostname,proto3" json:"hostname,omitempty"` + Hostname string `protobuf:"bytes,2,opt,name=Hostname,proto3" json:"hostname,omitempty"` + // EnableGlobMatching enables glob-style matching for the space_id and + // caller_id fields in the rules. + EnableGlobMatching bool `protobuf:"varint,3,opt,name=EnableGlobMatching,proto3" json:"enable_glob_matching,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -5414,7 +5713,7 @@ func (m *ProvisionTokenSpecV2Spacelift) Reset() { *m = ProvisionTokenSpe func (m *ProvisionTokenSpecV2Spacelift) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2Spacelift) ProtoMessage() {} func (*ProvisionTokenSpecV2Spacelift) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{73} + return fileDescriptor_9198ee693835762e, []int{78} } func (m *ProvisionTokenSpecV2Spacelift) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5446,9 +5745,17 @@ var xxx_messageInfo_ProvisionTokenSpecV2Spacelift proto.InternalMessageInfo type ProvisionTokenSpecV2Spacelift_Rule struct { // SpaceID is the ID of the space in which the run that owns the token was // executed. + // + // This field supports "glob-style" matching when enable_glob_matching is true: + // - Use '*' to match zero or more characters. + // - Use '?' to match any single character. SpaceID string `protobuf:"bytes,1,opt,name=SpaceID,proto3" json:"space_id,omitempty"` // CallerID is the ID of the caller, ie. the stack or module that generated // the run. + // + // This field supports "glob-style" matching when enable_glob_matching is true: + // - Use '*' to match zero or more characters. + // - Use '?' to match any single character. 
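The glob-style matching described for these Spacelift rule fields (`*` for zero or more characters, `?` for exactly one) behaves like Go's standard `path.Match` for slash-free IDs. A minimal sketch under that assumption — `matchField` is a hypothetical helper for illustration, not Teleport's actual matcher:

```go
// Sketch of glob-style rule matching using the stdlib path.Match, which
// supports the same '*' and '?' wildcards for values containing no '/'.
// This is illustrative only, not Teleport's implementation.
package main

import (
	"fmt"
	"path"
)

// matchField reports whether value satisfies the glob pattern.
// path.Match only errors on malformed patterns; treat that as no match.
func matchField(pattern, value string) bool {
	ok, err := path.Match(pattern, value)
	return err == nil && ok
}

func main() {
	fmt.Println(matchField("deployments-*", "deployments-prod")) // true
	fmt.Println(matchField("stack-??", "stack-01"))              // true
	fmt.Println(matchField("deployments-*", "infra-prod"))       // false
}
```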
CallerID string `protobuf:"bytes,2,opt,name=CallerID,proto3" json:"caller_id,omitempty"` // CallerType is the type of the caller, ie. the entity that owns the run - // either `stack` or `module`. @@ -5465,7 +5772,7 @@ func (m *ProvisionTokenSpecV2Spacelift_Rule) Reset() { *m = ProvisionTok func (m *ProvisionTokenSpecV2Spacelift_Rule) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2Spacelift_Rule) ProtoMessage() {} func (*ProvisionTokenSpecV2Spacelift_Rule) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{73, 0} + return fileDescriptor_9198ee693835762e, []int{78, 0} } func (m *ProvisionTokenSpecV2Spacelift_Rule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5504,20 +5811,23 @@ type ProvisionTokenSpecV2Kubernetes struct { // Service Account token. Support values: // - `in_cluster` // - `static_jwks` + // - `oidc` // If unset, this defaults to `in_cluster`. Type KubernetesJoinType `protobuf:"bytes,2,opt,name=Type,proto3,casttype=KubernetesJoinType" json:"type,omitempty"` // StaticJWKS is the configuration specific to the `static_jwks` type. - StaticJWKS *ProvisionTokenSpecV2Kubernetes_StaticJWKSConfig `protobuf:"bytes,3,opt,name=StaticJWKS,proto3" json:"static_jwks,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + StaticJWKS *ProvisionTokenSpecV2Kubernetes_StaticJWKSConfig `protobuf:"bytes,3,opt,name=StaticJWKS,proto3" json:"static_jwks,omitempty"` + // OIDCConfig configures the `oidc` type. 
+ OIDC *ProvisionTokenSpecV2Kubernetes_OIDCConfig `protobuf:"bytes,4,opt,name=OIDC,proto3" json:"oidc,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *ProvisionTokenSpecV2Kubernetes) Reset() { *m = ProvisionTokenSpecV2Kubernetes{} } func (m *ProvisionTokenSpecV2Kubernetes) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2Kubernetes) ProtoMessage() {} func (*ProvisionTokenSpecV2Kubernetes) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{74} + return fileDescriptor_9198ee693835762e, []int{79} } func (m *ProvisionTokenSpecV2Kubernetes) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5564,7 +5874,7 @@ func (m *ProvisionTokenSpecV2Kubernetes_StaticJWKSConfig) String() string { } func (*ProvisionTokenSpecV2Kubernetes_StaticJWKSConfig) ProtoMessage() {} func (*ProvisionTokenSpecV2Kubernetes_StaticJWKSConfig) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{74, 0} + return fileDescriptor_9198ee693835762e, []int{79, 0} } func (m *ProvisionTokenSpecV2Kubernetes_StaticJWKSConfig) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5593,6 +5903,58 @@ func (m *ProvisionTokenSpecV2Kubernetes_StaticJWKSConfig) XXX_DiscardUnknown() { var xxx_messageInfo_ProvisionTokenSpecV2Kubernetes_StaticJWKSConfig proto.InternalMessageInfo +type ProvisionTokenSpecV2Kubernetes_OIDCConfig struct { + // Issuer is the URI of the OIDC issuer. It must have an accessible and + // OIDC-compliant `/.well-known/openid-configuration` endpoint. This should + // be a valid URL and must exactly match the `issuer` field in a service + // account JWT. For example: + // https://oidc.eks.us-west-2.amazonaws.com/id/12345... + Issuer string `protobuf:"bytes,1,opt,name=Issuer,proto3" json:"issuer,omitempty"` + // InsecureAllowHTTPIssuer is a flag that, if set, disables the requirement + // that the issuer must use HTTPS.
+ InsecureAllowHTTPIssuer bool `protobuf:"varint,2,opt,name=InsecureAllowHTTPIssuer,proto3" json:"insecure_allow_http_issuer"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ProvisionTokenSpecV2Kubernetes_OIDCConfig) Reset() { + *m = ProvisionTokenSpecV2Kubernetes_OIDCConfig{} +} +func (m *ProvisionTokenSpecV2Kubernetes_OIDCConfig) String() string { + return proto.CompactTextString(m) +} +func (*ProvisionTokenSpecV2Kubernetes_OIDCConfig) ProtoMessage() {} +func (*ProvisionTokenSpecV2Kubernetes_OIDCConfig) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{79, 1} +} +func (m *ProvisionTokenSpecV2Kubernetes_OIDCConfig) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ProvisionTokenSpecV2Kubernetes_OIDCConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ProvisionTokenSpecV2Kubernetes_OIDCConfig.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ProvisionTokenSpecV2Kubernetes_OIDCConfig) XXX_Merge(src proto.Message) { + xxx_messageInfo_ProvisionTokenSpecV2Kubernetes_OIDCConfig.Merge(m, src) +} +func (m *ProvisionTokenSpecV2Kubernetes_OIDCConfig) XXX_Size() int { + return m.Size() +} +func (m *ProvisionTokenSpecV2Kubernetes_OIDCConfig) XXX_DiscardUnknown() { + xxx_messageInfo_ProvisionTokenSpecV2Kubernetes_OIDCConfig.DiscardUnknown(m) +} + +var xxx_messageInfo_ProvisionTokenSpecV2Kubernetes_OIDCConfig proto.InternalMessageInfo + // Rule is a set of properties the Kubernetes-issued token might have to be // allowed to use this ProvisionToken type ProvisionTokenSpecV2Kubernetes_Rule struct { @@ -5608,7 +5970,7 @@ func (m *ProvisionTokenSpecV2Kubernetes_Rule) Reset() { *m = ProvisionTo func (m *ProvisionTokenSpecV2Kubernetes_Rule) String() string { return 
proto.CompactTextString(m) } func (*ProvisionTokenSpecV2Kubernetes_Rule) ProtoMessage() {} func (*ProvisionTokenSpecV2Kubernetes_Rule) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{74, 1} + return fileDescriptor_9198ee693835762e, []int{79, 2} } func (m *ProvisionTokenSpecV2Kubernetes_Rule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5652,7 +6014,7 @@ func (m *ProvisionTokenSpecV2Azure) Reset() { *m = ProvisionTokenSpecV2A func (m *ProvisionTokenSpecV2Azure) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2Azure) ProtoMessage() {} func (*ProvisionTokenSpecV2Azure) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{75} + return fileDescriptor_9198ee693835762e, []int{80} } func (m *ProvisionTokenSpecV2Azure) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5698,7 +6060,7 @@ func (m *ProvisionTokenSpecV2Azure_Rule) Reset() { *m = ProvisionTokenSp func (m *ProvisionTokenSpecV2Azure_Rule) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2Azure_Rule) ProtoMessage() {} func (*ProvisionTokenSpecV2Azure_Rule) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{75, 0} + return fileDescriptor_9198ee693835762e, []int{80, 0} } func (m *ProvisionTokenSpecV2Azure_Rule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5742,7 +6104,7 @@ func (m *ProvisionTokenSpecV2GCP) Reset() { *m = ProvisionTokenSpecV2GCP func (m *ProvisionTokenSpecV2GCP) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2GCP) ProtoMessage() {} func (*ProvisionTokenSpecV2GCP) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{76} + return fileDescriptor_9198ee693835762e, []int{81} } func (m *ProvisionTokenSpecV2GCP) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5791,7 +6153,7 @@ func (m *ProvisionTokenSpecV2GCP_Rule) Reset() { *m = ProvisionTokenSpec func (m 
*ProvisionTokenSpecV2GCP_Rule) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2GCP_Rule) ProtoMessage() {} func (*ProvisionTokenSpecV2GCP_Rule) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{76, 0} + return fileDescriptor_9198ee693835762e, []int{81, 0} } func (m *ProvisionTokenSpecV2GCP_Rule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5848,7 +6210,7 @@ func (m *ProvisionTokenSpecV2TerraformCloud) Reset() { *m = ProvisionTok func (m *ProvisionTokenSpecV2TerraformCloud) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2TerraformCloud) ProtoMessage() {} func (*ProvisionTokenSpecV2TerraformCloud) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{77} + return fileDescriptor_9198ee693835762e, []int{82} } func (m *ProvisionTokenSpecV2TerraformCloud) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5913,7 +6275,7 @@ func (m *ProvisionTokenSpecV2TerraformCloud_Rule) Reset() { func (m *ProvisionTokenSpecV2TerraformCloud_Rule) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2TerraformCloud_Rule) ProtoMessage() {} func (*ProvisionTokenSpecV2TerraformCloud_Rule) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{77, 0} + return fileDescriptor_9198ee693835762e, []int{82, 0} } func (m *ProvisionTokenSpecV2TerraformCloud_Rule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5965,7 +6327,7 @@ func (m *ProvisionTokenSpecV2Bitbucket) Reset() { *m = ProvisionTokenSpe func (m *ProvisionTokenSpecV2Bitbucket) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2Bitbucket) ProtoMessage() {} func (*ProvisionTokenSpecV2Bitbucket) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{78} + return fileDescriptor_9198ee693835762e, []int{83} } func (m *ProvisionTokenSpecV2Bitbucket) XXX_Unmarshal(b []byte) error { return 
m.Unmarshal(b) @@ -6023,7 +6385,7 @@ func (m *ProvisionTokenSpecV2Bitbucket_Rule) Reset() { *m = ProvisionTok func (m *ProvisionTokenSpecV2Bitbucket_Rule) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2Bitbucket_Rule) ProtoMessage() {} func (*ProvisionTokenSpecV2Bitbucket_Rule) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{78, 0} + return fileDescriptor_9198ee693835762e, []int{83, 0} } func (m *ProvisionTokenSpecV2Bitbucket_Rule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6067,7 +6429,7 @@ func (m *ProvisionTokenSpecV2Oracle) Reset() { *m = ProvisionTokenSpecV2 func (m *ProvisionTokenSpecV2Oracle) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2Oracle) ProtoMessage() {} func (*ProvisionTokenSpecV2Oracle) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{79} + return fileDescriptor_9198ee693835762e, []int{84} } func (m *ProvisionTokenSpecV2Oracle) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6108,7 +6470,11 @@ type ProvisionTokenSpecV2Oracle_Rule struct { // Regions is a list of regions an instance is allowed to join from. Both // full region names ("us-phoenix-1") and abbreviations ("phx") are allowed. // If empty, any region is allowed. - Regions []string `protobuf:"bytes,3,rep,name=Regions,proto3" json:"regions,omitempty"` + Regions []string `protobuf:"bytes,3,rep,name=Regions,proto3" json:"regions,omitempty"` + // Instances is a list of the OCIDs of specific instances that are allowed + // to join. If empty, any instance matching the other fields in the rule is allowed. + // Limited to 100 instance OCIDs per rule. 
+ Instances []string `protobuf:"bytes,4,rep,name=Instances,proto3" json:"instances,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -6118,7 +6484,7 @@ func (m *ProvisionTokenSpecV2Oracle_Rule) Reset() { *m = ProvisionTokenS func (m *ProvisionTokenSpecV2Oracle_Rule) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2Oracle_Rule) ProtoMessage() {} func (*ProvisionTokenSpecV2Oracle_Rule) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{79, 0} + return fileDescriptor_9198ee693835762e, []int{84, 0} } func (m *ProvisionTokenSpecV2Oracle_Rule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6147,6 +6513,124 @@ func (m *ProvisionTokenSpecV2Oracle_Rule) XXX_DiscardUnknown() { var xxx_messageInfo_ProvisionTokenSpecV2Oracle_Rule proto.InternalMessageInfo +// ProvisionTokenSpecV2Env0 contains env0-specific parts of the +// ProvisionTokenSpecV2. +type ProvisionTokenSpecV2Env0 struct { + // Allow is a list of Rules; jobs must match at least one + // allow rule to use this token.
+ Allow []*ProvisionTokenSpecV2Env0_Rule `protobuf:"bytes,1,rep,name=Allow,proto3" json:"allow,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ProvisionTokenSpecV2Env0) Reset() { *m = ProvisionTokenSpecV2Env0{} } +func (m *ProvisionTokenSpecV2Env0) String() string { return proto.CompactTextString(m) } +func (*ProvisionTokenSpecV2Env0) ProtoMessage() {} +func (*ProvisionTokenSpecV2Env0) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{85} +} +func (m *ProvisionTokenSpecV2Env0) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ProvisionTokenSpecV2Env0) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ProvisionTokenSpecV2Env0.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ProvisionTokenSpecV2Env0) XXX_Merge(src proto.Message) { + xxx_messageInfo_ProvisionTokenSpecV2Env0.Merge(m, src) +} +func (m *ProvisionTokenSpecV2Env0) XXX_Size() int { + return m.Size() +} +func (m *ProvisionTokenSpecV2Env0) XXX_DiscardUnknown() { + xxx_messageInfo_ProvisionTokenSpecV2Env0.DiscardUnknown(m) +} + +var xxx_messageInfo_ProvisionTokenSpecV2Env0 proto.InternalMessageInfo + +// Rule is a set of properties the env0 environment might have to be allowed +// to use this provision token. +type ProvisionTokenSpecV2Env0_Rule struct { + // OrganizationID is the unique organization identifier, corresponding to + // `organizationId` in an Env0 OIDC token. + OrganizationID string `protobuf:"bytes,1,opt,name=OrganizationID,proto3" json:"organization_id,omitempty"` + // ProjectID is a unique project identifier, corresponding to `projectId` in + // an Env0 OIDC token. 
+ ProjectID string `protobuf:"bytes,2,opt,name=ProjectID,proto3" json:"project_id,omitempty"` + // ProjectName is the name of the project under which the job was run + // corresponding to `projectName` in an Env0 OIDC token. + ProjectName string `protobuf:"bytes,3,opt,name=ProjectName,proto3" json:"project_name,omitempty"` + // TemplateID is the unique identifier of the Env0 template, corresponding + // to `templateId` in an Env0 OIDC token. + TemplateID string `protobuf:"bytes,4,opt,name=TemplateID,proto3" json:"template_id,omitempty"` + // TemplateName is the name of the Env0 template, corresponding to + // `templateName` in an Env0 OIDC token. + TemplateName string `protobuf:"bytes,5,opt,name=TemplateName,proto3" json:"template_name,omitempty"` + // EnvironmentID is the unique identifier of the Env0 environment, + // corresponding to `environmentId` in an Env0 OIDC token. + EnvironmentID string `protobuf:"bytes,6,opt,name=EnvironmentID,proto3" json:"environment_id,omitempty"` + // EnvironmentName is the name of the Env0 environment, corresponding to + // `environmentName` in an Env0 OIDC token. + EnvironmentName string `protobuf:"bytes,7,opt,name=EnvironmentName,proto3" json:"environment_name,omitempty"` + // WorkspaceName is the name of the Env0 workspace, corresponding to + // `workspaceName` in an Env0 OIDC token. + WorkspaceName string `protobuf:"bytes,8,opt,name=WorkspaceName,proto3" json:"workspace_name,omitempty"` + // DeploymentType is the env0 deployment type, such as "deploy", "destroy", + // etc. Corresponds to `deploymentType` in an Env0 OIDC token. + DeploymentType string `protobuf:"bytes,9,opt,name=DeploymentType,proto3" json:"deployment_type,omitempty"` + // DeployerEmail is the email of the person that triggered the deployment, + // corresponding to `deployerEmail` in an Env0 OIDC token. 
+ DeployerEmail string `protobuf:"bytes,10,opt,name=DeployerEmail,proto3" json:"deployer_email,omitempty"` + // Env0Tag is a custom tag value corresponding to `env0Tag` when + // `ENV0_OIDC_TAG` is set. + Env0Tag string `protobuf:"bytes,11,opt,name=Env0Tag,proto3" json:"env0_tag,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ProvisionTokenSpecV2Env0_Rule) Reset() { *m = ProvisionTokenSpecV2Env0_Rule{} } +func (m *ProvisionTokenSpecV2Env0_Rule) String() string { return proto.CompactTextString(m) } +func (*ProvisionTokenSpecV2Env0_Rule) ProtoMessage() {} +func (*ProvisionTokenSpecV2Env0_Rule) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{85, 0} +} +func (m *ProvisionTokenSpecV2Env0_Rule) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ProvisionTokenSpecV2Env0_Rule) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ProvisionTokenSpecV2Env0_Rule.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ProvisionTokenSpecV2Env0_Rule) XXX_Merge(src proto.Message) { + xxx_messageInfo_ProvisionTokenSpecV2Env0_Rule.Merge(m, src) +} +func (m *ProvisionTokenSpecV2Env0_Rule) XXX_Size() int { + return m.Size() +} +func (m *ProvisionTokenSpecV2Env0_Rule) XXX_DiscardUnknown() { + xxx_messageInfo_ProvisionTokenSpecV2Env0_Rule.DiscardUnknown(m) +} + +var xxx_messageInfo_ProvisionTokenSpecV2Env0_Rule proto.InternalMessageInfo + // ProvisionTokenSpecV2BoundKeypair contains configuration for bound_keypair // type join tokens. 
type ProvisionTokenSpecV2BoundKeypair struct { @@ -6170,7 +6654,7 @@ func (m *ProvisionTokenSpecV2BoundKeypair) Reset() { *m = ProvisionToken func (m *ProvisionTokenSpecV2BoundKeypair) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenSpecV2BoundKeypair) ProtoMessage() {} func (*ProvisionTokenSpecV2BoundKeypair) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{80} + return fileDescriptor_9198ee693835762e, []int{86} } func (m *ProvisionTokenSpecV2BoundKeypair) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6213,11 +6697,11 @@ type ProvisionTokenSpecV2BoundKeypair_OnboardingSpec struct { // public key on first join, which may be used instead of preregistering a // public key with `initial_public_key`. If `initial_public_key` is set, // this value is ignored. Otherwise, if set, this value will be used to - // populate `.status.bound_keypair.intitial_join_secret`. If unset and no + // populate `.status.bound_keypair.registration_secret`. If unset and no // `initial_public_key` is provided, a random secure value will be generated // server-side to populate the status field. RegistrationSecret string `protobuf:"bytes,2,opt,name=RegistrationSecret,proto3" json:"registration_secret,omitempty"` - // MustRegisterBefore is an optional time before which registeration via + // MustRegisterBefore is an optional time before which registration via // initial join secret must be performed. Attempts to register using an // initial join secret after this timestamp will not be allowed. 
This may be // modified after creation if necessary to allow the initial registration to @@ -6236,7 +6720,7 @@ func (m *ProvisionTokenSpecV2BoundKeypair_OnboardingSpec) String() string { } func (*ProvisionTokenSpecV2BoundKeypair_OnboardingSpec) ProtoMessage() {} func (*ProvisionTokenSpecV2BoundKeypair_OnboardingSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{80, 0} + return fileDescriptor_9198ee693835762e, []int{86, 0} } func (m *ProvisionTokenSpecV2BoundKeypair_OnboardingSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6278,18 +6762,18 @@ type ProvisionTokenSpecV2BoundKeypair_RecoverySpec struct { Limit uint32 `protobuf:"varint,1,opt,name=Limit,proto3" json:"limit"` // Mode sets the recovery rule enforcement mode. It may be one of these // values: - // - standard (or unset): all configured rules enforced. The recovery limit - // and client join state are required and verified. This is the most - // secure recovery mode. - // - relaxed: recovery limit is not enforced, but client join state is still - // required. This effectively allows unlimited recovery attempts, but - // client join state still helps mitigate stolen credentials. - // - insecure: neither the recovery limit nor client join state are - // enforced. This allows any client with the private key to join freely. - // This is less secure, but can be useful in certain situations, like in - // otherwise unsupported CI/CD providers. This mode should be used with - // care, and RBAC rules should be configured to heavily restrict which - // resources this identity can access. + // - standard (or unset): all configured rules enforced. The recovery limit + // and client join state are required and verified. This is the most + // secure recovery mode. + // - relaxed: recovery limit is not enforced, but client join state is still + // required. 
This effectively allows unlimited recovery attempts, but + // client join state still helps mitigate stolen credentials. + // - insecure: neither the recovery limit nor client join state are + // enforced. This allows any client with the private key to join freely. + // This is less secure, but can be useful in certain situations, like in + // otherwise unsupported CI/CD providers. This mode should be used with + // care, and RBAC rules should be configured to heavily restrict which + // resources this identity can access. Mode string `protobuf:"bytes,2,opt,name=Mode,proto3" json:"mode"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` @@ -6304,7 +6788,7 @@ func (m *ProvisionTokenSpecV2BoundKeypair_RecoverySpec) String() string { } func (*ProvisionTokenSpecV2BoundKeypair_RecoverySpec) ProtoMessage() {} func (*ProvisionTokenSpecV2BoundKeypair_RecoverySpec) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{80, 1} + return fileDescriptor_9198ee693835762e, []int{86, 1} } func (m *ProvisionTokenSpecV2BoundKeypair_RecoverySpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6348,7 +6832,7 @@ func (m *ProvisionTokenStatusV2) Reset() { *m = ProvisionTokenStatusV2{} func (m *ProvisionTokenStatusV2) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenStatusV2) ProtoMessage() {} func (*ProvisionTokenStatusV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{81} + return fileDescriptor_9198ee693835762e, []int{87} } func (m *ProvisionTokenStatusV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6383,7 +6867,7 @@ type ProvisionTokenStatusV2BoundKeypair struct { // RegistrationSecret contains a secret value that may be used for public key // registration during the initial join process if no public key is // preregistered. If `.spec.bound_keypair.onboarding.initial_public_key` - // is set, †his field will remain empty. 
Otherwise, if + // is set, this field will remain empty. Otherwise, if // `.spec.bound_keypair.onboarding.registration_secret` is set, that value // will be copied here. If that field is unset, a value will be randomly // generated. @@ -6406,9 +6890,9 @@ type ProvisionTokenStatusV2BoundKeypair struct { // or `insecure`. RecoveryCount uint32 `protobuf:"varint,4,opt,name=RecoveryCount,proto3" json:"recovery_count"` // LastRecoveredAt contains a timestamp of the last successful recovery - // attempt. Note that normal renewals do not count as a recovery attempt, - // however onboarding does, either with a preregistered key or registration - // secret. This corresponds with the last time `bound_bot_instance_id` was + // attempt. Note that normal renewals with valid client certificates do not + // count as a recovery attempt, however the initial join during onboarding + // does. This corresponds with the last time `bound_bot_instance_id` was // updated. LastRecoveredAt *time.Time `protobuf:"bytes,5,opt,name=LastRecoveredAt,proto3,stdtime" json:"last_recovered_at,omitempty"` // LastRotatedAt contains a timestamp of the last time the keypair was @@ -6423,7 +6907,7 @@ func (m *ProvisionTokenStatusV2BoundKeypair) Reset() { *m = ProvisionTok func (m *ProvisionTokenStatusV2BoundKeypair) String() string { return proto.CompactTextString(m) } func (*ProvisionTokenStatusV2BoundKeypair) ProtoMessage() {} func (*ProvisionTokenStatusV2BoundKeypair) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{82} + return fileDescriptor_9198ee693835762e, []int{88} } func (m *ProvisionTokenStatusV2BoundKeypair) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6472,7 +6956,7 @@ type StaticTokensV2 struct { func (m *StaticTokensV2) Reset() { *m = StaticTokensV2{} } func (*StaticTokensV2) ProtoMessage() {} func (*StaticTokensV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{83} + return fileDescriptor_9198ee693835762e, []int{89} 
} func (m *StaticTokensV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6515,7 +6999,7 @@ func (m *StaticTokensSpecV2) Reset() { *m = StaticTokensSpecV2{} } func (m *StaticTokensSpecV2) String() string { return proto.CompactTextString(m) } func (*StaticTokensSpecV2) ProtoMessage() {} func (*StaticTokensSpecV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{84} + return fileDescriptor_9198ee693835762e, []int{90} } func (m *StaticTokensSpecV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6564,7 +7048,7 @@ type ClusterNameV2 struct { func (m *ClusterNameV2) Reset() { *m = ClusterNameV2{} } func (*ClusterNameV2) ProtoMessage() {} func (*ClusterNameV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{85} + return fileDescriptor_9198ee693835762e, []int{91} } func (m *ClusterNameV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6610,7 +7094,7 @@ func (m *ClusterNameSpecV2) Reset() { *m = ClusterNameSpecV2{} } func (m *ClusterNameSpecV2) String() string { return proto.CompactTextString(m) } func (*ClusterNameSpecV2) ProtoMessage() {} func (*ClusterNameSpecV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{86} + return fileDescriptor_9198ee693835762e, []int{92} } func (m *ClusterNameSpecV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6660,7 +7144,7 @@ func (m *ClusterAuditConfigV2) Reset() { *m = ClusterAuditConfigV2{} } func (m *ClusterAuditConfigV2) String() string { return proto.CompactTextString(m) } func (*ClusterAuditConfigV2) ProtoMessage() {} func (*ClusterAuditConfigV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{87} + return fileDescriptor_9198ee693835762e, []int{93} } func (m *ClusterAuditConfigV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6730,7 +7214,7 @@ func (m *ClusterAuditConfigSpecV2) Reset() { *m = ClusterAuditConfigSpec func (m *ClusterAuditConfigSpecV2) 
String() string { return proto.CompactTextString(m) } func (*ClusterAuditConfigSpecV2) ProtoMessage() {} func (*ClusterAuditConfigSpecV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{88} + return fileDescriptor_9198ee693835762e, []int{94} } func (m *ClusterAuditConfigSpecV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6781,7 +7265,7 @@ func (m *ClusterNetworkingConfigV2) Reset() { *m = ClusterNetworkingConf func (m *ClusterNetworkingConfigV2) String() string { return proto.CompactTextString(m) } func (*ClusterNetworkingConfigV2) ProtoMessage() {} func (*ClusterNetworkingConfigV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{89} + return fileDescriptor_9198ee693835762e, []int{95} } func (m *ClusterNetworkingConfigV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6822,9 +7306,9 @@ type ClusterNetworkingConfigSpecV2 struct { // KeepAliveCountMax is the number of keep-alive messages that can be // missed before the server disconnects the connection to the client. KeepAliveCountMax int64 `protobuf:"varint,3,opt,name=KeepAliveCountMax,proto3" json:"keep_alive_count_max"` - // SessionControlTimeout is the session control lease expiry and defines - // the upper limit of how long a node may be out of contact with the auth - // server before it begins terminating controlled sessions. + // SessionControlTimeout is the session control lease expiry and defines the + // upper limit of how long a node may be out of contact with the Auth Service + // before it begins terminating controlled sessions. SessionControlTimeout Duration `protobuf:"varint,4,opt,name=SessionControlTimeout,proto3,casttype=Duration" json:"session_control_timeout"` // ClientIdleTimeoutMessage is the message sent to the user when a connection times out. 
ClientIdleTimeoutMessage string `protobuf:"bytes,5,opt,name=ClientIdleTimeoutMessage,proto3" json:"idle_timeout_message"` @@ -6860,7 +7344,7 @@ func (m *ClusterNetworkingConfigSpecV2) Reset() { *m = ClusterNetworking func (m *ClusterNetworkingConfigSpecV2) String() string { return proto.CompactTextString(m) } func (*ClusterNetworkingConfigSpecV2) ProtoMessage() {} func (*ClusterNetworkingConfigSpecV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{90} + return fileDescriptor_9198ee693835762e, []int{96} } func (m *ClusterNetworkingConfigSpecV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6892,6 +7376,7 @@ var xxx_messageInfo_ClusterNetworkingConfigSpecV2 proto.InternalMessageInfo // TunnelStrategyV1 defines possible tunnel strategy types. type TunnelStrategyV1 struct { // Types that are valid to be assigned to Strategy: + // // *TunnelStrategyV1_AgentMesh // *TunnelStrategyV1_ProxyPeering Strategy isTunnelStrategyV1_Strategy `protobuf_oneof:"Strategy"` @@ -6904,7 +7389,7 @@ func (m *TunnelStrategyV1) Reset() { *m = TunnelStrategyV1{} } func (m *TunnelStrategyV1) String() string { return proto.CompactTextString(m) } func (*TunnelStrategyV1) ProtoMessage() {} func (*TunnelStrategyV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{91} + return fileDescriptor_9198ee693835762e, []int{97} } func (m *TunnelStrategyV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6989,7 +7474,7 @@ func (m *AgentMeshTunnelStrategy) Reset() { *m = AgentMeshTunnelStrategy func (m *AgentMeshTunnelStrategy) String() string { return proto.CompactTextString(m) } func (*AgentMeshTunnelStrategy) ProtoMessage() {} func (*AgentMeshTunnelStrategy) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{92} + return fileDescriptor_9198ee693835762e, []int{98} } func (m *AgentMeshTunnelStrategy) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7030,7 +7515,7 @@ func (m 
*ProxyPeeringTunnelStrategy) Reset() { *m = ProxyPeeringTunnelSt func (m *ProxyPeeringTunnelStrategy) String() string { return proto.CompactTextString(m) } func (*ProxyPeeringTunnelStrategy) ProtoMessage() {} func (*ProxyPeeringTunnelStrategy) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{93} + return fileDescriptor_9198ee693835762e, []int{99} } func (m *ProxyPeeringTunnelStrategy) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7071,17 +7556,19 @@ type SessionRecordingConfigV2 struct { // Metadata is resource metadata Metadata Metadata `protobuf:"bytes,4,opt,name=Metadata,proto3" json:"metadata"` // Spec is a SessionRecordingConfig specification - Spec SessionRecordingConfigSpecV2 `protobuf:"bytes,5,opt,name=Spec,proto3" json:"spec"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Spec SessionRecordingConfigSpecV2 `protobuf:"bytes,5,opt,name=Spec,proto3" json:"spec"` + // Status is the SessionRecordingConfig status containing active encryption keys + Status *SessionRecordingConfigStatus `protobuf:"bytes,6,opt,name=Status,proto3" json:"status,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *SessionRecordingConfigV2) Reset() { *m = SessionRecordingConfigV2{} } func (m *SessionRecordingConfigV2) String() string { return proto.CompactTextString(m) } func (*SessionRecordingConfigV2) ProtoMessage() {} func (*SessionRecordingConfigV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{94} + return fileDescriptor_9198ee693835762e, []int{100} } func (m *SessionRecordingConfigV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7110,6 +7597,144 @@ func (m *SessionRecordingConfigV2) XXX_DiscardUnknown() { var xxx_messageInfo_SessionRecordingConfigV2 proto.InternalMessageInfo +// KeyLabel combines a label that can be used to identify one or more 
keys with a keystore type that +// determines where the keys can be found. +type KeyLabel struct { + // Type represents which keystore should be searched when looking up keys by label. + Type string `protobuf:"bytes,1,opt,name=type,proto3" json:"type"` + // Label is a value that can be used with the related keystore in order to find relevant keys. + Label string `protobuf:"bytes,2,opt,name=label,proto3" json:"label"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *KeyLabel) Reset() { *m = KeyLabel{} } +func (m *KeyLabel) String() string { return proto.CompactTextString(m) } +func (*KeyLabel) ProtoMessage() {} +func (*KeyLabel) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{101} +} +func (m *KeyLabel) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *KeyLabel) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_KeyLabel.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *KeyLabel) XXX_Merge(src proto.Message) { + xxx_messageInfo_KeyLabel.Merge(m, src) +} +func (m *KeyLabel) XXX_Size() int { + return m.Size() +} +func (m *KeyLabel) XXX_DiscardUnknown() { + xxx_messageInfo_KeyLabel.DiscardUnknown(m) +} + +var xxx_messageInfo_KeyLabel proto.InternalMessageInfo + +// ManualKeyManagementConfig defines whether or not recording encryption keys should be managed externally +// and how to query those keys. +type ManualKeyManagementConfig struct { + // Enabled controls whether or not recording encryption keys should be managed externally. + Enabled bool `protobuf:"varint,1,opt,name=enabled,proto3" json:"enabled,omitempty"` + // ActiveKeys describe which keys should be queried for active recording encryption and replay.
+ ActiveKeys []*KeyLabel `protobuf:"bytes,2,rep,name=active_keys,json=activeKeys,proto3" json:"active_keys,omitempty"` + // RotatedKeys describe which keys should be queried for historical replay. + RotatedKeys []*KeyLabel `protobuf:"bytes,3,rep,name=rotated_keys,json=rotatedKeys,proto3" json:"rotated_keys,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ManualKeyManagementConfig) Reset() { *m = ManualKeyManagementConfig{} } +func (m *ManualKeyManagementConfig) String() string { return proto.CompactTextString(m) } +func (*ManualKeyManagementConfig) ProtoMessage() {} +func (*ManualKeyManagementConfig) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{102} +} +func (m *ManualKeyManagementConfig) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ManualKeyManagementConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ManualKeyManagementConfig.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ManualKeyManagementConfig) XXX_Merge(src proto.Message) { + xxx_messageInfo_ManualKeyManagementConfig.Merge(m, src) +} +func (m *ManualKeyManagementConfig) XXX_Size() int { + return m.Size() +} +func (m *ManualKeyManagementConfig) XXX_DiscardUnknown() { + xxx_messageInfo_ManualKeyManagementConfig.DiscardUnknown(m) +} + +var xxx_messageInfo_ManualKeyManagementConfig proto.InternalMessageInfo + +// SessionRecordingEncryptionConfig configures if and how session recordings +// should be encrypted. +type SessionRecordingEncryptionConfig struct { + // Enabled controls whether or not session recordings should be encrypted. 
+ Enabled bool `protobuf:"varint,1,opt,name=enabled,proto3" json:"enabled,omitempty"` + // ManualKeyManagement defines whether or not recording encryption keys should be managed externally + // and how to query those keys. + ManualKeyManagement *ManualKeyManagementConfig `protobuf:"bytes,2,opt,name=manual_key_management,json=manualKeyManagement,proto3" json:"manual_key_management,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *SessionRecordingEncryptionConfig) Reset() { *m = SessionRecordingEncryptionConfig{} } +func (m *SessionRecordingEncryptionConfig) String() string { return proto.CompactTextString(m) } +func (*SessionRecordingEncryptionConfig) ProtoMessage() {} +func (*SessionRecordingEncryptionConfig) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{103} +} +func (m *SessionRecordingEncryptionConfig) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *SessionRecordingEncryptionConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_SessionRecordingEncryptionConfig.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *SessionRecordingEncryptionConfig) XXX_Merge(src proto.Message) { + xxx_messageInfo_SessionRecordingEncryptionConfig.Merge(m, src) +} +func (m *SessionRecordingEncryptionConfig) XXX_Size() int { + return m.Size() +} +func (m *SessionRecordingEncryptionConfig) XXX_DiscardUnknown() { + xxx_messageInfo_SessionRecordingEncryptionConfig.DiscardUnknown(m) +} + +var xxx_messageInfo_SessionRecordingEncryptionConfig proto.InternalMessageInfo + // SessionRecordingConfigSpecV2 is the actual data we care about // for SessionRecordingConfig. 
type SessionRecordingConfigSpecV2 struct { @@ -7117,17 +7742,19 @@ type SessionRecordingConfigSpecV2 struct { Mode string `protobuf:"bytes,1,opt,name=Mode,proto3" json:"mode"` // ProxyChecksHostKeys is used to control if the proxy will check host keys // when in recording mode. - ProxyChecksHostKeys *BoolOption `protobuf:"bytes,2,opt,name=ProxyChecksHostKeys,proto3,customtype=BoolOption" json:"proxy_checks_host_keys"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + ProxyChecksHostKeys *BoolOption `protobuf:"bytes,2,opt,name=ProxyChecksHostKeys,proto3,customtype=BoolOption" json:"proxy_checks_host_keys"` + // Encryption configures if and how session recordings should be encrypted. + Encryption *SessionRecordingEncryptionConfig `protobuf:"bytes,3,opt,name=encryption,proto3" json:"encryption,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *SessionRecordingConfigSpecV2) Reset() { *m = SessionRecordingConfigSpecV2{} } func (m *SessionRecordingConfigSpecV2) String() string { return proto.CompactTextString(m) } func (*SessionRecordingConfigSpecV2) ProtoMessage() {} func (*SessionRecordingConfigSpecV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{95} + return fileDescriptor_9198ee693835762e, []int{104} } func (m *SessionRecordingConfigSpecV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7156,6 +7783,50 @@ func (m *SessionRecordingConfigSpecV2) XXX_DiscardUnknown() { var xxx_messageInfo_SessionRecordingConfigSpecV2 proto.InternalMessageInfo +// SessionRecordingConfigStatus contains the currently active age encryption keys used +// for encrypted session recording. +type SessionRecordingConfigStatus struct { + // EncryptionKeys contain the currently active age encryption keys used for + // encrypted session recording. 
+ EncryptionKeys []*AgeEncryptionKey `protobuf:"bytes,1,rep,name=encryption_keys,json=encryptionKeys,proto3" json:"encryption_keys"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *SessionRecordingConfigStatus) Reset() { *m = SessionRecordingConfigStatus{} } +func (m *SessionRecordingConfigStatus) String() string { return proto.CompactTextString(m) } +func (*SessionRecordingConfigStatus) ProtoMessage() {} +func (*SessionRecordingConfigStatus) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{105} +} +func (m *SessionRecordingConfigStatus) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *SessionRecordingConfigStatus) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_SessionRecordingConfigStatus.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *SessionRecordingConfigStatus) XXX_Merge(src proto.Message) { + xxx_messageInfo_SessionRecordingConfigStatus.Merge(m, src) +} +func (m *SessionRecordingConfigStatus) XXX_Size() int { + return m.Size() +} +func (m *SessionRecordingConfigStatus) XXX_DiscardUnknown() { + xxx_messageInfo_SessionRecordingConfigStatus.DiscardUnknown(m) +} + +var xxx_messageInfo_SessionRecordingConfigStatus proto.InternalMessageInfo + // AuthPreferenceV2 implements the AuthPreference interface. 
type AuthPreferenceV2 struct { // Kind is a resource kind @@ -7177,7 +7848,7 @@ type AuthPreferenceV2 struct { func (m *AuthPreferenceV2) Reset() { *m = AuthPreferenceV2{} } func (*AuthPreferenceV2) ProtoMessage() {} func (*AuthPreferenceV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{96} + return fileDescriptor_9198ee693835762e, []int{106} } func (m *AuthPreferenceV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7276,7 +7947,7 @@ func (m *AuthPreferenceSpecV2) Reset() { *m = AuthPreferenceSpecV2{} } func (m *AuthPreferenceSpecV2) String() string { return proto.CompactTextString(m) } func (*AuthPreferenceSpecV2) ProtoMessage() {} func (*AuthPreferenceSpecV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{97} + return fileDescriptor_9198ee693835762e, []int{107} } func (m *AuthPreferenceSpecV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7327,7 +7998,7 @@ func (m *StableUNIXUserConfig) Reset() { *m = StableUNIXUserConfig{} } func (m *StableUNIXUserConfig) String() string { return proto.CompactTextString(m) } func (*StableUNIXUserConfig) ProtoMessage() {} func (*StableUNIXUserConfig) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{98} + return fileDescriptor_9198ee693835762e, []int{108} } func (m *StableUNIXUserConfig) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7378,7 +8049,7 @@ func (m *U2F) Reset() { *m = U2F{} } func (m *U2F) String() string { return proto.CompactTextString(m) } func (*U2F) ProtoMessage() {} func (*U2F) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{99} + return fileDescriptor_9198ee693835762e, []int{109} } func (m *U2F) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7446,7 +8117,7 @@ func (m *Webauthn) Reset() { *m = Webauthn{} } func (m *Webauthn) String() string { return proto.CompactTextString(m) } func (*Webauthn) ProtoMessage() {} func (*Webauthn) Descriptor() 
([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{100} + return fileDescriptor_9198ee693835762e, []int{110} } func (m *Webauthn) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7488,6 +8159,8 @@ type DeviceTrust struct { // endpoints. // - "required": enforces the presence of device extensions for sensitive // endpoints. + // - "required-for-humans": enforces the presence of device extensions for + // sensitive endpoints, for human users only (bots are exempt). // // Mode is always "off" for OSS. // Defaults to "optional" for Enterprise. @@ -7517,7 +8190,7 @@ func (m *DeviceTrust) Reset() { *m = DeviceTrust{} } func (m *DeviceTrust) String() string { return proto.CompactTextString(m) } func (*DeviceTrust) ProtoMessage() {} func (*DeviceTrust) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{101} + return fileDescriptor_9198ee693835762e, []int{111} } func (m *DeviceTrust) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7567,7 +8240,7 @@ func (m *HardwareKey) Reset() { *m = HardwareKey{} } func (m *HardwareKey) String() string { return proto.CompactTextString(m) } func (*HardwareKey) ProtoMessage() {} func (*HardwareKey) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{102} + return fileDescriptor_9198ee693835762e, []int{112} } func (m *HardwareKey) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7614,7 +8287,7 @@ func (m *HardwareKeySerialNumberValidation) Reset() { *m = HardwareKeySe func (m *HardwareKeySerialNumberValidation) String() string { return proto.CompactTextString(m) } func (*HardwareKeySerialNumberValidation) ProtoMessage() {} func (*HardwareKeySerialNumberValidation) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{103} + return fileDescriptor_9198ee693835762e, []int{113} } func (m *HardwareKeySerialNumberValidation) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7664,7 +8337,7 @@ func (m *Namespace) Reset() 
{ *m = Namespace{} } func (m *Namespace) String() string { return proto.CompactTextString(m) } func (*Namespace) ProtoMessage() {} func (*Namespace) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{104} + return fileDescriptor_9198ee693835762e, []int{114} } func (m *Namespace) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7704,7 +8377,7 @@ func (m *NamespaceSpec) Reset() { *m = NamespaceSpec{} } func (m *NamespaceSpec) String() string { return proto.CompactTextString(m) } func (*NamespaceSpec) ProtoMessage() {} func (*NamespaceSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{105} + return fileDescriptor_9198ee693835762e, []int{115} } func (m *NamespaceSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7752,7 +8425,7 @@ type UserTokenV3 struct { func (m *UserTokenV3) Reset() { *m = UserTokenV3{} } func (*UserTokenV3) ProtoMessage() {} func (*UserTokenV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{106} + return fileDescriptor_9198ee693835762e, []int{116} } func (m *UserTokenV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7799,7 +8472,7 @@ func (m *UserTokenSpecV3) Reset() { *m = UserTokenSpecV3{} } func (m *UserTokenSpecV3) String() string { return proto.CompactTextString(m) } func (*UserTokenSpecV3) ProtoMessage() {} func (*UserTokenSpecV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{107} + return fileDescriptor_9198ee693835762e, []int{117} } func (m *UserTokenSpecV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7847,7 +8520,7 @@ type UserTokenSecretsV3 struct { func (m *UserTokenSecretsV3) Reset() { *m = UserTokenSecretsV3{} } func (*UserTokenSecretsV3) ProtoMessage() {} func (*UserTokenSecretsV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{108} + return fileDescriptor_9198ee693835762e, []int{118} } func (m *UserTokenSecretsV3) XXX_Unmarshal(b 
[]byte) error { return m.Unmarshal(b) @@ -7892,7 +8565,7 @@ func (m *UserTokenSecretsSpecV3) Reset() { *m = UserTokenSecretsSpecV3{} func (m *UserTokenSecretsSpecV3) String() string { return proto.CompactTextString(m) } func (*UserTokenSecretsSpecV3) ProtoMessage() {} func (*UserTokenSecretsSpecV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{109} + return fileDescriptor_9198ee693835762e, []int{119} } func (m *UserTokenSecretsSpecV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7941,7 +8614,7 @@ type AccessRequestV3 struct { func (m *AccessRequestV3) Reset() { *m = AccessRequestV3{} } func (*AccessRequestV3) ProtoMessage() {} func (*AccessRequestV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{110} + return fileDescriptor_9198ee693835762e, []int{120} } func (m *AccessRequestV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -7993,7 +8666,7 @@ func (m *AccessReviewThreshold) Reset() { *m = AccessReviewThreshold{} } func (m *AccessReviewThreshold) String() string { return proto.CompactTextString(m) } func (*AccessReviewThreshold) ProtoMessage() {} func (*AccessReviewThreshold) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{111} + return fileDescriptor_9198ee693835762e, []int{121} } func (m *AccessReviewThreshold) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8038,7 +8711,7 @@ func (m *PromotedAccessList) Reset() { *m = PromotedAccessList{} } func (m *PromotedAccessList) String() string { return proto.CompactTextString(m) } func (*PromotedAccessList) ProtoMessage() {} func (*PromotedAccessList) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{112} + return fileDescriptor_9198ee693835762e, []int{122} } func (m *PromotedAccessList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8083,7 +8756,7 @@ func (m *AccessRequestDryRunEnrichment) Reset() { *m = AccessRequestDryR func (m 
*AccessRequestDryRunEnrichment) String() string { return proto.CompactTextString(m) } func (*AccessRequestDryRunEnrichment) ProtoMessage() {} func (*AccessRequestDryRunEnrichment) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{113} + return fileDescriptor_9198ee693835762e, []int{123} } func (m *AccessRequestDryRunEnrichment) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8144,7 +8817,7 @@ func (m *AccessReview) Reset() { *m = AccessReview{} } func (m *AccessReview) String() string { return proto.CompactTextString(m) } func (*AccessReview) ProtoMessage() {} func (*AccessReview) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{114} + return fileDescriptor_9198ee693835762e, []int{124} } func (m *AccessReview) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8189,7 +8862,7 @@ func (m *AccessReviewSubmission) Reset() { *m = AccessReviewSubmission{} func (m *AccessReviewSubmission) String() string { return proto.CompactTextString(m) } func (*AccessReviewSubmission) ProtoMessage() {} func (*AccessReviewSubmission) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{115} + return fileDescriptor_9198ee693835762e, []int{125} } func (m *AccessReviewSubmission) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8232,7 +8905,7 @@ func (m *ThresholdIndexSet) Reset() { *m = ThresholdIndexSet{} } func (m *ThresholdIndexSet) String() string { return proto.CompactTextString(m) } func (*ThresholdIndexSet) ProtoMessage() {} func (*ThresholdIndexSet) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{116} + return fileDescriptor_9198ee693835762e, []int{126} } func (m *ThresholdIndexSet) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8275,7 +8948,7 @@ func (m *ThresholdIndexSets) Reset() { *m = ThresholdIndexSets{} } func (m *ThresholdIndexSets) String() string { return proto.CompactTextString(m) } func (*ThresholdIndexSets) 
ProtoMessage() {} func (*ThresholdIndexSets) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{117} + return fileDescriptor_9198ee693835762e, []int{127} } func (m *ThresholdIndexSets) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8372,17 +9045,22 @@ type AccessRequestSpecV3 struct { // ResourceExpiry is the time at which the access request resource will expire. ResourceExpiry *time.Time `protobuf:"bytes,22,opt,name=ResourceExpiry,proto3,stdtime" json:"expiry,omitempty"` // DryRunEnrichment contains the extra info added in response to a dry run request. - DryRunEnrichment *AccessRequestDryRunEnrichment `protobuf:"bytes,23,opt,name=DryRunEnrichment,proto3" json:"dry_run_enrichment,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + DryRunEnrichment *AccessRequestDryRunEnrichment `protobuf:"bytes,23,opt,name=DryRunEnrichment,proto3" json:"dry_run_enrichment,omitempty"` + // RequestKind indicates the kind (short/long-term) of request. + RequestKind AccessRequestKind `protobuf:"varint,24,opt,name=RequestKind,proto3,enum=types.AccessRequestKind" json:"request_kind,omitempty"` + // LongTermResourceGrouping contains information about how resources can be grouped + // based on Access List promotions for long-term Access Requests. 
+ LongTermGrouping *LongTermResourceGrouping `protobuf:"bytes,25,opt,name=LongTermGrouping,proto3" json:"long_term_grouping,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *AccessRequestSpecV3) Reset() { *m = AccessRequestSpecV3{} } func (m *AccessRequestSpecV3) String() string { return proto.CompactTextString(m) } func (*AccessRequestSpecV3) ProtoMessage() {} func (*AccessRequestSpecV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{118} + return fileDescriptor_9198ee693835762e, []int{128} } func (m *AccessRequestSpecV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8439,7 +9117,7 @@ func (m *AccessRequestFilter) Reset() { *m = AccessRequestFilter{} } func (m *AccessRequestFilter) String() string { return proto.CompactTextString(m) } func (*AccessRequestFilter) ProtoMessage() {} func (*AccessRequestFilter) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{119} + return fileDescriptor_9198ee693835762e, []int{129} } func (m *AccessRequestFilter) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8495,7 +9173,7 @@ func (m *AccessCapabilities) Reset() { *m = AccessCapabilities{} } func (m *AccessCapabilities) String() string { return proto.CompactTextString(m) } func (*AccessCapabilities) ProtoMessage() {} func (*AccessCapabilities) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{120} + return fileDescriptor_9198ee693835762e, []int{130} } func (m *AccessCapabilities) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8553,7 +9231,7 @@ func (m *AccessCapabilitiesRequest) Reset() { *m = AccessCapabilitiesReq func (m *AccessCapabilitiesRequest) String() string { return proto.CompactTextString(m) } func (*AccessCapabilitiesRequest) ProtoMessage() {} func (*AccessCapabilitiesRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{121} + 
return fileDescriptor_9198ee693835762e, []int{131} } func (m *AccessCapabilitiesRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8582,6 +9260,101 @@ func (m *AccessCapabilitiesRequest) XXX_DiscardUnknown() { var xxx_messageInfo_AccessCapabilitiesRequest proto.InternalMessageInfo +// RemoteAccessCapabilities is a summary of the capabilities that a remote cluster +// user is granted in the target cluster. +// buf:lint:ignore PAGINATION_REQUIRED +type RemoteAccessCapabilities struct { + // ApplicableRolesForResources is a list of the remote-cluster roles applicable + // for access to a given set of resources. This will always be a subset of the + // SearchAsRoles supplied in the [RemoteAccessCapabilitiesRequest] + ApplicableRolesForResources []string `protobuf:"bytes,1,rep,name=ApplicableRolesForResources,proto3" json:"applicable_roles,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *RemoteAccessCapabilities) Reset() { *m = RemoteAccessCapabilities{} } +func (m *RemoteAccessCapabilities) String() string { return proto.CompactTextString(m) } +func (*RemoteAccessCapabilities) ProtoMessage() {} +func (*RemoteAccessCapabilities) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{132} +} +func (m *RemoteAccessCapabilities) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *RemoteAccessCapabilities) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_RemoteAccessCapabilities.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *RemoteAccessCapabilities) XXX_Merge(src proto.Message) { + xxx_messageInfo_RemoteAccessCapabilities.Merge(m, src) +} +func (m *RemoteAccessCapabilities) XXX_Size() int { + return m.Size() +} +func (m *RemoteAccessCapabilities)
XXX_DiscardUnknown() { + xxx_messageInfo_RemoteAccessCapabilities.DiscardUnknown(m) +} + +var xxx_messageInfo_RemoteAccessCapabilities proto.InternalMessageInfo + +// RemoteAccessCapabilitiesRequest encodes parameters for the GetRemoteAccessCapabilities method. +// buf:lint:ignore PAGINATION_REQUIRED +type RemoteAccessCapabilitiesRequest struct { + // User is the name of the target user on their home cluster. + User string `protobuf:"bytes,1,opt,name=User,proto3" json:"user,omitempty"` + // SearchAsRoles holds the roles the target user may use when searching for + // resources on the user's home cluster. + SearchAsRoles []string `protobuf:"bytes,2,rep,name=SearchAsRoles,proto3" json:"remote_search_as_roles,omitempty"` + // ResourceIDs is the list of IDs of the resources for which we would like + // to view the necessary roles. + ResourceIDs []ResourceID `protobuf:"bytes,3,rep,name=ResourceIDs,proto3" json:"resource_ids,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *RemoteAccessCapabilitiesRequest) Reset() { *m = RemoteAccessCapabilitiesRequest{} } +func (m *RemoteAccessCapabilitiesRequest) String() string { return proto.CompactTextString(m) } +func (*RemoteAccessCapabilitiesRequest) ProtoMessage() {} +func (*RemoteAccessCapabilitiesRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{133} +} +func (m *RemoteAccessCapabilitiesRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *RemoteAccessCapabilitiesRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_RemoteAccessCapabilitiesRequest.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *RemoteAccessCapabilitiesRequest) XXX_Merge(src proto.Message) { + 
xxx_messageInfo_RemoteAccessCapabilitiesRequest.Merge(m, src) +} +func (m *RemoteAccessCapabilitiesRequest) XXX_Size() int { + return m.Size() +} +func (m *RemoteAccessCapabilitiesRequest) XXX_DiscardUnknown() { + xxx_messageInfo_RemoteAccessCapabilitiesRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_RemoteAccessCapabilitiesRequest proto.InternalMessageInfo + // RequestKubernetesResource is the Kubernetes resource identifier used // in access request settings. // Modeled after existing message KubernetesResource. @@ -8599,7 +9372,7 @@ func (m *RequestKubernetesResource) Reset() { *m = RequestKubernetesReso func (m *RequestKubernetesResource) String() string { return proto.CompactTextString(m) } func (*RequestKubernetesResource) ProtoMessage() {} func (*RequestKubernetesResource) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{122} + return fileDescriptor_9198ee693835762e, []int{134} } func (m *RequestKubernetesResource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8652,7 +9425,7 @@ func (m *ResourceID) Reset() { *m = ResourceID{} } func (m *ResourceID) String() string { return proto.CompactTextString(m) } func (*ResourceID) ProtoMessage() {} func (*ResourceID) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{123} + return fileDescriptor_9198ee693835762e, []int{135} } func (m *ResourceID) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8701,7 +9474,7 @@ type PluginDataV3 struct { func (m *PluginDataV3) Reset() { *m = PluginDataV3{} } func (*PluginDataV3) ProtoMessage() {} func (*PluginDataV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{124} + return fileDescriptor_9198ee693835762e, []int{136} } func (m *PluginDataV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8744,7 +9517,7 @@ func (m *PluginDataEntry) Reset() { *m = PluginDataEntry{} } func (m *PluginDataEntry) String() string { return proto.CompactTextString(m) } func 
(*PluginDataEntry) ProtoMessage() {} func (*PluginDataEntry) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{125} + return fileDescriptor_9198ee693835762e, []int{137} } func (m *PluginDataEntry) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8786,7 +9559,7 @@ func (m *PluginDataSpecV3) Reset() { *m = PluginDataSpecV3{} } func (m *PluginDataSpecV3) String() string { return proto.CompactTextString(m) } func (*PluginDataSpecV3) ProtoMessage() {} func (*PluginDataSpecV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{126} + return fileDescriptor_9198ee693835762e, []int{138} } func (m *PluginDataSpecV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8833,7 +9606,7 @@ func (m *PluginDataFilter) Reset() { *m = PluginDataFilter{} } func (m *PluginDataFilter) String() string { return proto.CompactTextString(m) } func (*PluginDataFilter) ProtoMessage() {} func (*PluginDataFilter) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{127} + return fileDescriptor_9198ee693835762e, []int{139} } func (m *PluginDataFilter) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8884,7 +9657,7 @@ func (m *PluginDataUpdateParams) Reset() { *m = PluginDataUpdateParams{} func (m *PluginDataUpdateParams) String() string { return proto.CompactTextString(m) } func (*PluginDataUpdateParams) ProtoMessage() {} func (*PluginDataUpdateParams) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{128} + return fileDescriptor_9198ee693835762e, []int{140} } func (m *PluginDataUpdateParams) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8928,7 +9701,7 @@ func (m *RoleFilter) Reset() { *m = RoleFilter{} } func (m *RoleFilter) String() string { return proto.CompactTextString(m) } func (*RoleFilter) ProtoMessage() {} func (*RoleFilter) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{129} + return 
fileDescriptor_9198ee693835762e, []int{141} } func (m *RoleFilter) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8964,7 +9737,7 @@ type RoleV6 struct { // SubKind is an optional resource sub kind, used in some resources SubKind string `protobuf:"bytes,2,opt,name=SubKind,proto3" json:"sub_kind,omitempty"` // Version is the resource version. It must be specified. - // Supported values are: `v3`, `v4`, `v5`, `v6`, `v7`. + // Supported values are: `v3`, `v4`, `v5`, `v6`, `v7`, `v8`. Version string `protobuf:"bytes,3,opt,name=Version,proto3" json:"version"` // Metadata is resource metadata Metadata Metadata `protobuf:"bytes,4,opt,name=Metadata,proto3" json:"metadata"` @@ -8978,7 +9751,7 @@ type RoleV6 struct { func (m *RoleV6) Reset() { *m = RoleV6{} } func (*RoleV6) ProtoMessage() {} func (*RoleV6) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{130} + return fileDescriptor_9198ee693835762e, []int{142} } func (m *RoleV6) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9025,7 +9798,7 @@ func (m *RoleSpecV6) Reset() { *m = RoleSpecV6{} } func (m *RoleSpecV6) String() string { return proto.CompactTextString(m) } func (*RoleSpecV6) ProtoMessage() {} func (*RoleSpecV6) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{131} + return fileDescriptor_9198ee693835762e, []int{143} } func (m *RoleSpecV6) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9066,7 +9839,7 @@ func (m *SSHLocalPortForwarding) Reset() { *m = SSHLocalPortForwarding{} func (m *SSHLocalPortForwarding) String() string { return proto.CompactTextString(m) } func (*SSHLocalPortForwarding) ProtoMessage() {} func (*SSHLocalPortForwarding) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{132} + return fileDescriptor_9198ee693835762e, []int{144} } func (m *SSHLocalPortForwarding) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9107,7 +9880,7 @@ func (m *SSHRemotePortForwarding) Reset() { 
*m = SSHRemotePortForwarding func (m *SSHRemotePortForwarding) String() string { return proto.CompactTextString(m) } func (*SSHRemotePortForwarding) ProtoMessage() {} func (*SSHRemotePortForwarding) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{133} + return fileDescriptor_9198ee693835762e, []int{145} } func (m *SSHRemotePortForwarding) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9151,7 +9924,7 @@ func (m *SSHPortForwarding) Reset() { *m = SSHPortForwarding{} } func (m *SSHPortForwarding) String() string { return proto.CompactTextString(m) } func (*SSHPortForwarding) ProtoMessage() {} func (*SSHPortForwarding) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{134} + return fileDescriptor_9198ee693835762e, []int{146} } func (m *SSHPortForwarding) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9283,7 +10056,7 @@ func (m *RoleOptions) Reset() { *m = RoleOptions{} } func (m *RoleOptions) String() string { return proto.CompactTextString(m) } func (*RoleOptions) ProtoMessage() {} func (*RoleOptions) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{135} + return fileDescriptor_9198ee693835762e, []int{147} } func (m *RoleOptions) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9329,7 +10102,7 @@ func (m *RecordSession) Reset() { *m = RecordSession{} } func (m *RecordSession) String() string { return proto.CompactTextString(m) } func (*RecordSession) ProtoMessage() {} func (*RecordSession) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{136} + return fileDescriptor_9198ee693835762e, []int{148} } func (m *RecordSession) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9381,7 +10154,7 @@ func (m *CertExtension) Reset() { *m = CertExtension{} } func (m *CertExtension) String() string { return proto.CompactTextString(m) } func (*CertExtension) ProtoMessage() {} func (*CertExtension) Descriptor() ([]byte, []int) { 
- return fileDescriptor_9198ee693835762e, []int{137} + return fileDescriptor_9198ee693835762e, []int{149} } func (m *CertExtension) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9516,17 +10289,19 @@ type RoleConditions struct { WorkloadIdentityLabels Labels `protobuf:"bytes,44,opt,name=WorkloadIdentityLabels,proto3,customtype=Labels" json:"workload_identity_labels,omitempty"` // WorkloadIdentityLabelsExpression is a predicate expression used to // allow/deny access to issuing a WorkloadIdentity. - WorkloadIdentityLabelsExpression string `protobuf:"bytes,45,opt,name=WorkloadIdentityLabelsExpression,proto3" json:"workload_identity_labels_expression,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + WorkloadIdentityLabelsExpression string `protobuf:"bytes,45,opt,name=WorkloadIdentityLabelsExpression,proto3" json:"workload_identity_labels_expression,omitempty"` + // MCPPermissions defines MCP server-related permissions.
+ MCP *MCPPermissions `protobuf:"bytes,46,opt,name=MCP,proto3" json:"mcp,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *RoleConditions) Reset() { *m = RoleConditions{} } func (m *RoleConditions) String() string { return proto.CompactTextString(m) } func (*RoleConditions) ProtoMessage() {} func (*RoleConditions) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{138} + return fileDescriptor_9198ee693835762e, []int{150} } func (m *RoleConditions) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9569,7 +10344,7 @@ func (m *IdentityCenterAccountAssignment) Reset() { *m = IdentityCenterA func (m *IdentityCenterAccountAssignment) String() string { return proto.CompactTextString(m) } func (*IdentityCenterAccountAssignment) ProtoMessage() {} func (*IdentityCenterAccountAssignment) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{139} + return fileDescriptor_9198ee693835762e, []int{151} } func (m *IdentityCenterAccountAssignment) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9610,7 +10385,7 @@ func (m *GitHubPermission) Reset() { *m = GitHubPermission{} } func (m *GitHubPermission) String() string { return proto.CompactTextString(m) } func (*GitHubPermission) ProtoMessage() {} func (*GitHubPermission) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{140} + return fileDescriptor_9198ee693835762e, []int{152} } func (m *GitHubPermission) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9639,6 +10414,51 @@ func (m *GitHubPermission) XXX_DiscardUnknown() { var xxx_messageInfo_GitHubPermission proto.InternalMessageInfo +// MCPPermissions defines MCP server-related permissions. +type MCPPermissions struct { + // Tools defines the list of tools allowed or denied for this role. Each entry + // can be a literal string, a glob pattern (e.g. 
"prefix_*"), or a regular + // expression (must start with '^' and end with '$'). If the list is empty, no + // tools are allowed. + Tools []string `protobuf:"bytes,1,rep,name=tools,proto3" json:"tools,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *MCPPermissions) Reset() { *m = MCPPermissions{} } +func (m *MCPPermissions) String() string { return proto.CompactTextString(m) } +func (*MCPPermissions) ProtoMessage() {} +func (*MCPPermissions) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{153} +} +func (m *MCPPermissions) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *MCPPermissions) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_MCPPermissions.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *MCPPermissions) XXX_Merge(src proto.Message) { + xxx_messageInfo_MCPPermissions.Merge(m, src) +} +func (m *MCPPermissions) XXX_Size() int { + return m.Size() +} +func (m *MCPPermissions) XXX_DiscardUnknown() { + xxx_messageInfo_MCPPermissions.DiscardUnknown(m) +} + +var xxx_messageInfo_MCPPermissions proto.InternalMessageInfo + // SPIFFERoleCondition sets out which SPIFFE identities this role is allowed or // denied to generate. The Path matcher is required, and is evaluated first. If, // the Path does not match then the other matcher fields are not evaluated. 
@@ -9686,7 +10506,7 @@ func (m *SPIFFERoleCondition) Reset() { *m = SPIFFERoleCondition{} } func (m *SPIFFERoleCondition) String() string { return proto.CompactTextString(m) } func (*SPIFFERoleCondition) ProtoMessage() {} func (*SPIFFERoleCondition) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{141} + return fileDescriptor_9198ee693835762e, []int{154} } func (m *SPIFFERoleCondition) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9730,7 +10550,7 @@ func (m *DatabasePermission) Reset() { *m = DatabasePermission{} } func (m *DatabasePermission) String() string { return proto.CompactTextString(m) } func (*DatabasePermission) ProtoMessage() {} func (*DatabasePermission) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{142} + return fileDescriptor_9198ee693835762e, []int{155} } func (m *DatabasePermission) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9783,7 +10603,7 @@ func (m *KubernetesResource) Reset() { *m = KubernetesResource{} } func (m *KubernetesResource) String() string { return proto.CompactTextString(m) } func (*KubernetesResource) ProtoMessage() {} func (*KubernetesResource) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{143} + return fileDescriptor_9198ee693835762e, []int{156} } func (m *KubernetesResource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9836,7 +10656,7 @@ func (m *SessionRequirePolicy) Reset() { *m = SessionRequirePolicy{} } func (m *SessionRequirePolicy) String() string { return proto.CompactTextString(m) } func (*SessionRequirePolicy) ProtoMessage() {} func (*SessionRequirePolicy) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{144} + return fileDescriptor_9198ee693835762e, []int{157} } func (m *SessionRequirePolicy) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9884,7 +10704,7 @@ func (m *SessionJoinPolicy) Reset() { *m = SessionJoinPolicy{} } func (m 
*SessionJoinPolicy) String() string { return proto.CompactTextString(m) } func (*SessionJoinPolicy) ProtoMessage() {} func (*SessionJoinPolicy) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{145} + return fileDescriptor_9198ee693835762e, []int{158} } func (m *SessionJoinPolicy) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9960,7 +10780,7 @@ func (m *AccessRequestConditions) Reset() { *m = AccessRequestConditions func (m *AccessRequestConditions) String() string { return proto.CompactTextString(m) } func (*AccessRequestConditions) ProtoMessage() {} func (*AccessRequestConditions) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{146} + return fileDescriptor_9198ee693835762e, []int{159} } func (m *AccessRequestConditions) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9996,17 +10816,20 @@ type AccessRequestConditionsReason struct { // has the request reason mode set to "required", then reason is required for all Access Requests // requesting roles or resources allowed by this role. It applies only to users who have this // role assigned. - Mode RequestReasonMode `protobuf:"bytes,1,opt,name=Mode,proto3,casttype=RequestReasonMode" json:"mode,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Mode RequestReasonMode `protobuf:"bytes,1,opt,name=Mode,proto3,casttype=RequestReasonMode" json:"mode,omitempty"` + // Prompt is a custom message shown to the user for the requested roles or for resources + // searchable as other roles. It applies only to the requested roles and resources that specify the prompt. 
+ Prompt string `protobuf:"bytes,2,opt,name=Prompt,proto3" json:"prompt,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *AccessRequestConditionsReason) Reset() { *m = AccessRequestConditionsReason{} } func (m *AccessRequestConditionsReason) String() string { return proto.CompactTextString(m) } func (*AccessRequestConditionsReason) ProtoMessage() {} func (*AccessRequestConditionsReason) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{147} + return fileDescriptor_9198ee693835762e, []int{160} } func (m *AccessRequestConditionsReason) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10059,7 +10882,7 @@ func (m *AccessReviewConditions) Reset() { *m = AccessReviewConditions{} func (m *AccessReviewConditions) String() string { return proto.CompactTextString(m) } func (*AccessReviewConditions) ProtoMessage() {} func (*AccessReviewConditions) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{148} + return fileDescriptor_9198ee693835762e, []int{161} } func (m *AccessReviewConditions) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10101,7 +10924,7 @@ func (m *AccessRequestAllowedPromotion) Reset() { *m = AccessRequestAllo func (m *AccessRequestAllowedPromotion) String() string { return proto.CompactTextString(m) } func (*AccessRequestAllowedPromotion) ProtoMessage() {} func (*AccessRequestAllowedPromotion) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{149} + return fileDescriptor_9198ee693835762e, []int{162} } func (m *AccessRequestAllowedPromotion) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10144,7 +10967,7 @@ func (m *AccessRequestAllowedPromotions) Reset() { *m = AccessRequestAll func (m *AccessRequestAllowedPromotions) String() string { return proto.CompactTextString(m) } func (*AccessRequestAllowedPromotions) ProtoMessage() {} func 
(*AccessRequestAllowedPromotions) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{150} + return fileDescriptor_9198ee693835762e, []int{163} } func (m *AccessRequestAllowedPromotions) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10173,6 +10996,100 @@ func (m *AccessRequestAllowedPromotions) XXX_DiscardUnknown() { var xxx_messageInfo_AccessRequestAllowedPromotions proto.InternalMessageInfo +// ResourceIDList represents a list of ResourceID objects. +type ResourceIDList struct { + ResourceIds []ResourceID `protobuf:"bytes,1,rep,name=resource_ids,json=resourceIds,proto3" json:"resource_ids"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *ResourceIDList) Reset() { *m = ResourceIDList{} } +func (m *ResourceIDList) String() string { return proto.CompactTextString(m) } +func (*ResourceIDList) ProtoMessage() {} +func (*ResourceIDList) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{164} +} +func (m *ResourceIDList) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ResourceIDList) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_ResourceIDList.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *ResourceIDList) XXX_Merge(src proto.Message) { + xxx_messageInfo_ResourceIDList.Merge(m, src) +} +func (m *ResourceIDList) XXX_Size() int { + return m.Size() +} +func (m *ResourceIDList) XXX_DiscardUnknown() { + xxx_messageInfo_ResourceIDList.DiscardUnknown(m) +} + +var xxx_messageInfo_ResourceIDList proto.InternalMessageInfo + +// LongTermResourceGrouping contains information about how resources can be grouped +// based on Access List promotions for long-term Access Requests. 
+type LongTermResourceGrouping struct { + // AccessListToResources maps applicable Access List names to the resources they can grant, + // including the optimal grouping. + AccessListToResources map[string]ResourceIDList `protobuf:"bytes,1,rep,name=AccessListToResources,proto3" json:"grouped_by_access_list" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` + // RecommendedAccessList is the name of the Access List that would provide + // access to the most resources. If multiple Access Lists provide the same + // number of resources, the first one found will be used. + RecommendedAccessList string `protobuf:"bytes,2,opt,name=RecommendedAccessList,proto3" json:"recommended_access_list"` + // ValidationMessage is a user-friendly message explaining any grouping error, if CanProceed is false. + ValidationMessage string `protobuf:"bytes,3,opt,name=ValidationMessage,proto3" json:"validation_message,omitempty"` + // CanProceed represents the validity of the long-term grouping. If all requested + // resources cannot be grouped together, this will be false. 
+ CanProceed bool `protobuf:"varint,4,opt,name=CanProceed,proto3" json:"can_proceed"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *LongTermResourceGrouping) Reset() { *m = LongTermResourceGrouping{} } +func (m *LongTermResourceGrouping) String() string { return proto.CompactTextString(m) } +func (*LongTermResourceGrouping) ProtoMessage() {} +func (*LongTermResourceGrouping) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{165} +} +func (m *LongTermResourceGrouping) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *LongTermResourceGrouping) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_LongTermResourceGrouping.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *LongTermResourceGrouping) XXX_Merge(src proto.Message) { + xxx_messageInfo_LongTermResourceGrouping.Merge(m, src) +} +func (m *LongTermResourceGrouping) XXX_Size() int { + return m.Size() +} +func (m *LongTermResourceGrouping) XXX_DiscardUnknown() { + xxx_messageInfo_LongTermResourceGrouping.DiscardUnknown(m) +} + +var xxx_messageInfo_LongTermResourceGrouping proto.InternalMessageInfo + // ClaimMapping maps a claim to teleport roles. type ClaimMapping struct { // Claim is a claim name. 
@@ -10190,7 +11107,7 @@ func (m *ClaimMapping) Reset() { *m = ClaimMapping{} } func (m *ClaimMapping) String() string { return proto.CompactTextString(m) } func (*ClaimMapping) ProtoMessage() {} func (*ClaimMapping) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{151} + return fileDescriptor_9198ee693835762e, []int{166} } func (m *ClaimMapping) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10236,7 +11153,7 @@ func (m *TraitMapping) Reset() { *m = TraitMapping{} } func (m *TraitMapping) String() string { return proto.CompactTextString(m) } func (*TraitMapping) ProtoMessage() {} func (*TraitMapping) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{152} + return fileDescriptor_9198ee693835762e, []int{167} } func (m *TraitMapping) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10285,7 +11202,7 @@ func (m *Rule) Reset() { *m = Rule{} } func (m *Rule) String() string { return proto.CompactTextString(m) } func (*Rule) ProtoMessage() {} func (*Rule) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{153} + return fileDescriptor_9198ee693835762e, []int{168} } func (m *Rule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10333,7 +11250,7 @@ func (m *ImpersonateConditions) Reset() { *m = ImpersonateConditions{} } func (m *ImpersonateConditions) String() string { return proto.CompactTextString(m) } func (*ImpersonateConditions) ProtoMessage() {} func (*ImpersonateConditions) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{154} + return fileDescriptor_9198ee693835762e, []int{169} } func (m *ImpersonateConditions) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10375,7 +11292,7 @@ func (m *BoolValue) Reset() { *m = BoolValue{} } func (m *BoolValue) String() string { return proto.CompactTextString(m) } func (*BoolValue) ProtoMessage() {} func (*BoolValue) Descriptor() ([]byte, []int) { - return 
fileDescriptor_9198ee693835762e, []int{155} + return fileDescriptor_9198ee693835762e, []int{170} } func (m *BoolValue) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10407,7 +11324,9 @@ var xxx_messageInfo_BoolValue proto.InternalMessageInfo // UserFilter matches user resources. type UserFilter struct { // SearchKeywords is a list of search keywords to match against resource field values. - SearchKeywords []string `protobuf:"bytes,1,rep,name=SearchKeywords,proto3" json:"search_keywords,omitempty"` + SearchKeywords []string `protobuf:"bytes,1,rep,name=SearchKeywords,proto3" json:"search_keywords,omitempty"` + // SkipSystemUsers filters out teleport system users from the results. + SkipSystemUsers bool `protobuf:"varint,2,opt,name=SkipSystemUsers,proto3" json:"skip_system_users,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -10417,7 +11336,7 @@ func (m *UserFilter) Reset() { *m = UserFilter{} } func (m *UserFilter) String() string { return proto.CompactTextString(m) } func (*UserFilter) ProtoMessage() {} func (*UserFilter) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{156} + return fileDescriptor_9198ee693835762e, []int{171} } func (m *UserFilter) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10468,7 +11387,7 @@ type UserV2 struct { func (m *UserV2) Reset() { *m = UserV2{} } func (*UserV2) ProtoMessage() {} func (*UserV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{157} + return fileDescriptor_9198ee693835762e, []int{172} } func (m *UserV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10517,7 +11436,7 @@ func (m *UserStatusV2) Reset() { *m = UserStatusV2{} } func (m *UserStatusV2) String() string { return proto.CompactTextString(m) } func (*UserStatusV2) ProtoMessage() {} func (*UserStatusV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{158} + return 
fileDescriptor_9198ee693835762e, []int{173} } func (m *UserStatusV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10591,7 +11510,7 @@ func (m *UserSpecV2) Reset() { *m = UserSpecV2{} } func (m *UserSpecV2) String() string { return proto.CompactTextString(m) } func (*UserSpecV2) ProtoMessage() {} func (*UserSpecV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{159} + return fileDescriptor_9198ee693835762e, []int{174} } func (m *UserSpecV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10641,7 +11560,7 @@ type ExternalIdentity struct { func (m *ExternalIdentity) Reset() { *m = ExternalIdentity{} } func (*ExternalIdentity) ProtoMessage() {} func (*ExternalIdentity) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{160} + return fileDescriptor_9198ee693835762e, []int{175} } func (m *ExternalIdentity) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10689,7 +11608,7 @@ func (m *LoginStatus) Reset() { *m = LoginStatus{} } func (m *LoginStatus) String() string { return proto.CompactTextString(m) } func (*LoginStatus) ProtoMessage() {} func (*LoginStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{161} + return fileDescriptor_9198ee693835762e, []int{176} } func (m *LoginStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10734,7 +11653,7 @@ type CreatedBy struct { func (m *CreatedBy) Reset() { *m = CreatedBy{} } func (*CreatedBy) ProtoMessage() {} func (*CreatedBy) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{162} + return fileDescriptor_9198ee693835762e, []int{177} } func (m *CreatedBy) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10783,7 +11702,7 @@ func (m *LocalAuthSecrets) Reset() { *m = LocalAuthSecrets{} } func (m *LocalAuthSecrets) String() string { return proto.CompactTextString(m) } func (*LocalAuthSecrets) ProtoMessage() {} func (*LocalAuthSecrets) Descriptor() ([]byte, 
[]int) { - return fileDescriptor_9198ee693835762e, []int{163} + return fileDescriptor_9198ee693835762e, []int{178} } func (m *LocalAuthSecrets) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10840,7 +11759,7 @@ func (m *MFADevice) Reset() { *m = MFADevice{} } func (m *MFADevice) String() string { return proto.CompactTextString(m) } func (*MFADevice) ProtoMessage() {} func (*MFADevice) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{164} + return fileDescriptor_9198ee693835762e, []int{179} } func (m *MFADevice) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10950,7 +11869,7 @@ func (m *TOTPDevice) Reset() { *m = TOTPDevice{} } func (m *TOTPDevice) String() string { return proto.CompactTextString(m) } func (*TOTPDevice) ProtoMessage() {} func (*TOTPDevice) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{165} + return fileDescriptor_9198ee693835762e, []int{180} } func (m *TOTPDevice) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -10996,7 +11915,7 @@ func (m *U2FDevice) Reset() { *m = U2FDevice{} } func (m *U2FDevice) String() string { return proto.CompactTextString(m) } func (*U2FDevice) ProtoMessage() {} func (*U2FDevice) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{166} + return fileDescriptor_9198ee693835762e, []int{181} } func (m *U2FDevice) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11078,7 +11997,7 @@ func (m *WebauthnDevice) Reset() { *m = WebauthnDevice{} } func (m *WebauthnDevice) String() string { return proto.CompactTextString(m) } func (*WebauthnDevice) ProtoMessage() {} func (*WebauthnDevice) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{167} + return fileDescriptor_9198ee693835762e, []int{182} } func (m *WebauthnDevice) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11124,7 +12043,7 @@ func (m *SSOMFADevice) Reset() { *m = SSOMFADevice{} } func (m *SSOMFADevice) 
String() string { return proto.CompactTextString(m) } func (*SSOMFADevice) ProtoMessage() {} func (*SSOMFADevice) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{168} + return fileDescriptor_9198ee693835762e, []int{183} } func (m *SSOMFADevice) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11167,7 +12086,7 @@ func (m *WebauthnLocalAuth) Reset() { *m = WebauthnLocalAuth{} } func (m *WebauthnLocalAuth) String() string { return proto.CompactTextString(m) } func (*WebauthnLocalAuth) ProtoMessage() {} func (*WebauthnLocalAuth) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{169} + return fileDescriptor_9198ee693835762e, []int{184} } func (m *WebauthnLocalAuth) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11213,7 +12132,7 @@ func (m *ConnectorRef) Reset() { *m = ConnectorRef{} } func (m *ConnectorRef) String() string { return proto.CompactTextString(m) } func (*ConnectorRef) ProtoMessage() {} func (*ConnectorRef) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{170} + return fileDescriptor_9198ee693835762e, []int{185} } func (m *ConnectorRef) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11255,7 +12174,7 @@ func (m *UserRef) Reset() { *m = UserRef{} } func (m *UserRef) String() string { return proto.CompactTextString(m) } func (*UserRef) ProtoMessage() {} func (*UserRef) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{171} + return fileDescriptor_9198ee693835762e, []int{186} } func (m *UserRef) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11305,7 +12224,7 @@ func (m *ReverseTunnelV2) Reset() { *m = ReverseTunnelV2{} } func (m *ReverseTunnelV2) String() string { return proto.CompactTextString(m) } func (*ReverseTunnelV2) ProtoMessage() {} func (*ReverseTunnelV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{172} + return fileDescriptor_9198ee693835762e, []int{187} } 
func (m *ReverseTunnelV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11352,7 +12271,7 @@ func (m *ReverseTunnelSpecV2) Reset() { *m = ReverseTunnelSpecV2{} } func (m *ReverseTunnelSpecV2) String() string { return proto.CompactTextString(m) } func (*ReverseTunnelSpecV2) ProtoMessage() {} func (*ReverseTunnelSpecV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{173} + return fileDescriptor_9198ee693835762e, []int{188} } func (m *ReverseTunnelSpecV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11401,7 +12320,7 @@ type TunnelConnectionV2 struct { func (m *TunnelConnectionV2) Reset() { *m = TunnelConnectionV2{} } func (*TunnelConnectionV2) ProtoMessage() {} func (*TunnelConnectionV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{174} + return fileDescriptor_9198ee693835762e, []int{189} } func (m *TunnelConnectionV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11449,7 +12368,7 @@ func (m *TunnelConnectionSpecV2) Reset() { *m = TunnelConnectionSpecV2{} func (m *TunnelConnectionSpecV2) String() string { return proto.CompactTextString(m) } func (*TunnelConnectionSpecV2) ProtoMessage() {} func (*TunnelConnectionSpecV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{175} + return fileDescriptor_9198ee693835762e, []int{190} } func (m *TunnelConnectionSpecV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11497,7 +12416,7 @@ func (m *SemaphoreFilter) Reset() { *m = SemaphoreFilter{} } func (m *SemaphoreFilter) String() string { return proto.CompactTextString(m) } func (*SemaphoreFilter) ProtoMessage() {} func (*SemaphoreFilter) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{176} + return fileDescriptor_9198ee693835762e, []int{191} } func (m *SemaphoreFilter) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11548,7 +12467,7 @@ func (m *AcquireSemaphoreRequest) Reset() { *m = 
AcquireSemaphoreRequest func (m *AcquireSemaphoreRequest) String() string { return proto.CompactTextString(m) } func (*AcquireSemaphoreRequest) ProtoMessage() {} func (*AcquireSemaphoreRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{177} + return fileDescriptor_9198ee693835762e, []int{192} } func (m *AcquireSemaphoreRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11596,7 +12515,7 @@ func (m *SemaphoreLease) Reset() { *m = SemaphoreLease{} } func (m *SemaphoreLease) String() string { return proto.CompactTextString(m) } func (*SemaphoreLease) ProtoMessage() {} func (*SemaphoreLease) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{178} + return fileDescriptor_9198ee693835762e, []int{193} } func (m *SemaphoreLease) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11642,7 +12561,7 @@ func (m *SemaphoreLeaseRef) Reset() { *m = SemaphoreLeaseRef{} } func (m *SemaphoreLeaseRef) String() string { return proto.CompactTextString(m) } func (*SemaphoreLeaseRef) ProtoMessage() {} func (*SemaphoreLeaseRef) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{179} + return fileDescriptor_9198ee693835762e, []int{194} } func (m *SemaphoreLeaseRef) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11691,7 +12610,7 @@ type SemaphoreV3 struct { func (m *SemaphoreV3) Reset() { *m = SemaphoreV3{} } func (*SemaphoreV3) ProtoMessage() {} func (*SemaphoreV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{180} + return fileDescriptor_9198ee693835762e, []int{195} } func (m *SemaphoreV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11733,7 +12652,7 @@ func (m *SemaphoreSpecV3) Reset() { *m = SemaphoreSpecV3{} } func (m *SemaphoreSpecV3) String() string { return proto.CompactTextString(m) } func (*SemaphoreSpecV3) ProtoMessage() {} func (*SemaphoreSpecV3) Descriptor() ([]byte, []int) { - return 
fileDescriptor_9198ee693835762e, []int{181} + return fileDescriptor_9198ee693835762e, []int{196} } func (m *SemaphoreSpecV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11782,7 +12701,7 @@ type WebSessionV2 struct { func (m *WebSessionV2) Reset() { *m = WebSessionV2{} } func (*WebSessionV2) ProtoMessage() {} func (*WebSessionV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{182} + return fileDescriptor_9198ee693835762e, []int{197} } func (m *WebSessionV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11867,7 +12786,7 @@ func (m *WebSessionSpecV2) Reset() { *m = WebSessionSpecV2{} } func (m *WebSessionSpecV2) String() string { return proto.CompactTextString(m) } func (*WebSessionSpecV2) ProtoMessage() {} func (*WebSessionSpecV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{183} + return fileDescriptor_9198ee693835762e, []int{198} } func (m *WebSessionSpecV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11912,7 +12831,7 @@ func (m *DeviceWebToken) Reset() { *m = DeviceWebToken{} } func (m *DeviceWebToken) String() string { return proto.CompactTextString(m) } func (*DeviceWebToken) ProtoMessage() {} func (*DeviceWebToken) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{184} + return fileDescriptor_9198ee693835762e, []int{199} } func (m *DeviceWebToken) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -11954,7 +12873,7 @@ func (m *WebSessionFilter) Reset() { *m = WebSessionFilter{} } func (m *WebSessionFilter) String() string { return proto.CompactTextString(m) } func (*WebSessionFilter) ProtoMessage() {} func (*WebSessionFilter) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{185} + return fileDescriptor_9198ee693835762e, []int{200} } func (m *WebSessionFilter) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12025,7 +12944,7 @@ func (m *SAMLSessionData) Reset() { *m = SAMLSessionData{} 
} func (m *SAMLSessionData) String() string { return proto.CompactTextString(m) } func (*SAMLSessionData) ProtoMessage() {} func (*SAMLSessionData) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{186} + return fileDescriptor_9198ee693835762e, []int{201} } func (m *SAMLSessionData) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12074,7 +12993,7 @@ func (m *SAMLAttribute) Reset() { *m = SAMLAttribute{} } func (m *SAMLAttribute) String() string { return proto.CompactTextString(m) } func (*SAMLAttribute) ProtoMessage() {} func (*SAMLAttribute) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{187} + return fileDescriptor_9198ee693835762e, []int{202} } func (m *SAMLAttribute) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12121,7 +13040,7 @@ func (m *SAMLAttributeValue) Reset() { *m = SAMLAttributeValue{} } func (m *SAMLAttributeValue) String() string { return proto.CompactTextString(m) } func (*SAMLAttributeValue) ProtoMessage() {} func (*SAMLAttributeValue) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{188} + return fileDescriptor_9198ee693835762e, []int{203} } func (m *SAMLAttributeValue) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12172,7 +13091,7 @@ func (m *SAMLNameID) Reset() { *m = SAMLNameID{} } func (m *SAMLNameID) String() string { return proto.CompactTextString(m) } func (*SAMLNameID) ProtoMessage() {} func (*SAMLNameID) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{189} + return fileDescriptor_9198ee693835762e, []int{204} } func (m *SAMLNameID) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12221,7 +13140,7 @@ type RemoteClusterV3 struct { func (m *RemoteClusterV3) Reset() { *m = RemoteClusterV3{} } func (*RemoteClusterV3) ProtoMessage() {} func (*RemoteClusterV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{190} + return fileDescriptor_9198ee693835762e, 
[]int{205} } func (m *RemoteClusterV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12265,7 +13184,7 @@ func (m *RemoteClusterStatusV3) Reset() { *m = RemoteClusterStatusV3{} } func (m *RemoteClusterStatusV3) String() string { return proto.CompactTextString(m) } func (*RemoteClusterStatusV3) ProtoMessage() {} func (*RemoteClusterStatusV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{191} + return fileDescriptor_9198ee693835762e, []int{206} } func (m *RemoteClusterStatusV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12315,7 +13234,7 @@ func (m *KubernetesCluster) Reset() { *m = KubernetesCluster{} } func (m *KubernetesCluster) String() string { return proto.CompactTextString(m) } func (*KubernetesCluster) ProtoMessage() {} func (*KubernetesCluster) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{192} + return fileDescriptor_9198ee693835762e, []int{207} } func (m *KubernetesCluster) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12364,7 +13283,7 @@ type KubernetesClusterV3 struct { func (m *KubernetesClusterV3) Reset() { *m = KubernetesClusterV3{} } func (*KubernetesClusterV3) ProtoMessage() {} func (*KubernetesClusterV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{193} + return fileDescriptor_9198ee693835762e, []int{208} } func (m *KubernetesClusterV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12415,7 +13334,7 @@ func (m *KubernetesClusterSpecV3) Reset() { *m = KubernetesClusterSpecV3 func (m *KubernetesClusterSpecV3) String() string { return proto.CompactTextString(m) } func (*KubernetesClusterSpecV3) ProtoMessage() {} func (*KubernetesClusterSpecV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{194} + return fileDescriptor_9198ee693835762e, []int{209} } func (m *KubernetesClusterSpecV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12463,7 +13382,7 @@ func (m 
*KubeAzure) Reset() { *m = KubeAzure{} } func (m *KubeAzure) String() string { return proto.CompactTextString(m) } func (*KubeAzure) ProtoMessage() {} func (*KubeAzure) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{195} + return fileDescriptor_9198ee693835762e, []int{210} } func (m *KubeAzure) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12509,7 +13428,7 @@ func (m *KubeAWS) Reset() { *m = KubeAWS{} } func (m *KubeAWS) String() string { return proto.CompactTextString(m) } func (*KubeAWS) ProtoMessage() {} func (*KubeAWS) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{196} + return fileDescriptor_9198ee693835762e, []int{211} } func (m *KubeAWS) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12555,7 +13474,7 @@ func (m *KubeGCP) Reset() { *m = KubeGCP{} } func (m *KubeGCP) String() string { return proto.CompactTextString(m) } func (*KubeGCP) ProtoMessage() {} func (*KubeGCP) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{197} + return fileDescriptor_9198ee693835762e, []int{212} } func (m *KubeGCP) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12597,7 +13516,7 @@ func (m *KubernetesClusterV3List) Reset() { *m = KubernetesClusterV3List func (m *KubernetesClusterV3List) String() string { return proto.CompactTextString(m) } func (*KubernetesClusterV3List) ProtoMessage() {} func (*KubernetesClusterV3List) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{198} + return fileDescriptor_9198ee693835762e, []int{213} } func (m *KubernetesClusterV3List) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12637,16 +13556,20 @@ type KubernetesServerV3 struct { // Metadata is the Kubernetes server metadata. Metadata Metadata `protobuf:"bytes,4,opt,name=Metadata,proto3" json:"metadata"` // Spec is the Kubernetes server spec. 
- Spec KubernetesServerSpecV3 `protobuf:"bytes,5,opt,name=Spec,proto3" json:"spec"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Spec KubernetesServerSpecV3 `protobuf:"bytes,5,opt,name=Spec,proto3" json:"spec"` + // Status is the Kubernetes server status. + Status *KubernetesServerStatusV3 `protobuf:"bytes,6,opt,name=status,proto3" json:"status,omitempty"` + // The advertised scope of the server, which cannot change once assigned. + Scope string `protobuf:"bytes,7,opt,name=scope,proto3" json:"scope,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *KubernetesServerV3) Reset() { *m = KubernetesServerV3{} } func (*KubernetesServerV3) ProtoMessage() {} func (*KubernetesServerV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{199} + return fileDescriptor_9198ee693835762e, []int{214} } func (m *KubernetesServerV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12688,7 +13611,11 @@ type KubernetesServerSpecV3 struct { // Cluster is a Kubernetes Cluster proxied by this Kubernetes server. Cluster *KubernetesClusterV3 `protobuf:"bytes,5,opt,name=Cluster,proto3" json:"cluster"` // ProxyIDs is a list of proxy IDs this server is expected to be connected to. 
- ProxyIDs []string `protobuf:"bytes,6,rep,name=ProxyIDs,proto3" json:"proxy_ids,omitempty"` + ProxyIDs []string `protobuf:"bytes,6,rep,name=ProxyIDs,proto3" json:"proxy_ids,omitempty"` + // RelayGroup is the name of the Relay group that the server is connected to. + RelayGroup string `protobuf:"bytes,7,opt,name=relay_group,json=relayGroup,proto3" json:"relay_group,omitempty"` + // RelayIds is the list of Relay host IDs that the server is connected to. + RelayIds []string `protobuf:"bytes,8,rep,name=relay_ids,json=relayIds,proto3" json:"relay_ids,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -12698,7 +13625,7 @@ func (m *KubernetesServerSpecV3) Reset() { *m = KubernetesServerSpecV3{} func (m *KubernetesServerSpecV3) String() string { return proto.CompactTextString(m) } func (*KubernetesServerSpecV3) ProtoMessage() {} func (*KubernetesServerSpecV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{200} + return fileDescriptor_9198ee693835762e, []int{215} } func (m *KubernetesServerSpecV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12727,6 +13654,49 @@ func (m *KubernetesServerSpecV3) XXX_DiscardUnknown() { var xxx_messageInfo_KubernetesServerSpecV3 proto.InternalMessageInfo +// KubernetesServerStatusV3 is the Kubernetes server status. +type KubernetesServerStatusV3 struct { + // TargetHealth is the health status between the Teleport agent 
+ TargetHealth *TargetHealth `protobuf:"bytes,1,opt,name=target_health,json=targetHealth,proto3" json:"target_health,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *KubernetesServerStatusV3) Reset() { *m = KubernetesServerStatusV3{} } +func (m *KubernetesServerStatusV3) String() string { return proto.CompactTextString(m) } +func (*KubernetesServerStatusV3) ProtoMessage() {} +func (*KubernetesServerStatusV3) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{216} +} +func (m *KubernetesServerStatusV3) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *KubernetesServerStatusV3) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_KubernetesServerStatusV3.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *KubernetesServerStatusV3) XXX_Merge(src proto.Message) { + xxx_messageInfo_KubernetesServerStatusV3.Merge(m, src) +} +func (m *KubernetesServerStatusV3) XXX_Size() int { + return m.Size() +} +func (m *KubernetesServerStatusV3) XXX_DiscardUnknown() { + xxx_messageInfo_KubernetesServerStatusV3.DiscardUnknown(m) +} + +var xxx_messageInfo_KubernetesServerStatusV3 proto.InternalMessageInfo + // WebTokenV3 describes a web token. Web tokens are used as a transport to relay bearer tokens // to the client. 
// Initially bound to a web session, these have been factored out into a separate resource to @@ -12750,7 +13720,7 @@ type WebTokenV3 struct { func (m *WebTokenV3) Reset() { *m = WebTokenV3{} } func (*WebTokenV3) ProtoMessage() {} func (*WebTokenV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{201} + return fileDescriptor_9198ee693835762e, []int{217} } func (m *WebTokenV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12794,7 +13764,7 @@ func (m *WebTokenSpecV3) Reset() { *m = WebTokenSpecV3{} } func (m *WebTokenSpecV3) String() string { return proto.CompactTextString(m) } func (*WebTokenSpecV3) ProtoMessage() {} func (*WebTokenSpecV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{202} + return fileDescriptor_9198ee693835762e, []int{218} } func (m *WebTokenSpecV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12838,7 +13808,7 @@ func (m *GetWebSessionRequest) Reset() { *m = GetWebSessionRequest{} } func (m *GetWebSessionRequest) String() string { return proto.CompactTextString(m) } func (*GetWebSessionRequest) ProtoMessage() {} func (*GetWebSessionRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{203} + return fileDescriptor_9198ee693835762e, []int{219} } func (m *GetWebSessionRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12882,7 +13852,7 @@ func (m *DeleteWebSessionRequest) Reset() { *m = DeleteWebSessionRequest func (m *DeleteWebSessionRequest) String() string { return proto.CompactTextString(m) } func (*DeleteWebSessionRequest) ProtoMessage() {} func (*DeleteWebSessionRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{204} + return fileDescriptor_9198ee693835762e, []int{220} } func (m *DeleteWebSessionRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12926,7 +13896,7 @@ func (m *GetWebTokenRequest) Reset() { *m = GetWebTokenRequest{} } func (m 
*GetWebTokenRequest) String() string { return proto.CompactTextString(m) } func (*GetWebTokenRequest) ProtoMessage() {} func (*GetWebTokenRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{205} + return fileDescriptor_9198ee693835762e, []int{221} } func (m *GetWebTokenRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -12970,7 +13940,7 @@ func (m *DeleteWebTokenRequest) Reset() { *m = DeleteWebTokenRequest{} } func (m *DeleteWebTokenRequest) String() string { return proto.CompactTextString(m) } func (*DeleteWebTokenRequest) ProtoMessage() {} func (*DeleteWebTokenRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{206} + return fileDescriptor_9198ee693835762e, []int{222} } func (m *DeleteWebTokenRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13012,7 +13982,7 @@ func (m *ResourceRequest) Reset() { *m = ResourceRequest{} } func (m *ResourceRequest) String() string { return proto.CompactTextString(m) } func (*ResourceRequest) ProtoMessage() {} func (*ResourceRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{207} + return fileDescriptor_9198ee693835762e, []int{223} } func (m *ResourceRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13061,7 +14031,7 @@ func (m *ResourceWithSecretsRequest) Reset() { *m = ResourceWithSecretsR func (m *ResourceWithSecretsRequest) String() string { return proto.CompactTextString(m) } func (*ResourceWithSecretsRequest) ProtoMessage() {} func (*ResourceWithSecretsRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{208} + return fileDescriptor_9198ee693835762e, []int{224} } func (m *ResourceWithSecretsRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13108,7 +14078,7 @@ func (m *ResourcesWithSecretsRequest) Reset() { *m = ResourcesWithSecret func (m *ResourcesWithSecretsRequest) String() string { return proto.CompactTextString(m) } 
func (*ResourcesWithSecretsRequest) ProtoMessage() {} func (*ResourcesWithSecretsRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{209} + return fileDescriptor_9198ee693835762e, []int{225} } func (m *ResourcesWithSecretsRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13152,7 +14122,7 @@ func (m *ResourceInNamespaceRequest) Reset() { *m = ResourceInNamespaceR func (m *ResourceInNamespaceRequest) String() string { return proto.CompactTextString(m) } func (*ResourceInNamespaceRequest) ProtoMessage() {} func (*ResourceInNamespaceRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{210} + return fileDescriptor_9198ee693835762e, []int{226} } func (m *ResourceInNamespaceRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13194,7 +14164,7 @@ func (m *ResourcesInNamespaceRequest) Reset() { *m = ResourcesInNamespac func (m *ResourcesInNamespaceRequest) String() string { return proto.CompactTextString(m) } func (*ResourcesInNamespaceRequest) ProtoMessage() {} func (*ResourcesInNamespaceRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{211} + return fileDescriptor_9198ee693835762e, []int{227} } func (m *ResourcesInNamespaceRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13245,7 +14215,7 @@ func (m *OIDCConnectorV3) Reset() { *m = OIDCConnectorV3{} } func (m *OIDCConnectorV3) String() string { return proto.CompactTextString(m) } func (*OIDCConnectorV3) ProtoMessage() {} func (*OIDCConnectorV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{212} + return fileDescriptor_9198ee693835762e, []int{228} } func (m *OIDCConnectorV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13287,7 +14257,7 @@ func (m *OIDCConnectorV3List) Reset() { *m = OIDCConnectorV3List{} } func (m *OIDCConnectorV3List) String() string { return proto.CompactTextString(m) } func (*OIDCConnectorV3List) 
ProtoMessage() {} func (*OIDCConnectorV3List) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{213} + return fileDescriptor_9198ee693835762e, []int{229} } func (m *OIDCConnectorV3List) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13367,17 +14337,28 @@ type OIDCConnectorSpecV3 struct { // MFASettings contains settings to enable SSO MFA checks through this auth connector. MFASettings *OIDCConnectorMFASettings `protobuf:"bytes,19,opt,name=MFASettings,proto3" json:"mfa,omitempty"` // PKCEMode represents the configuration state for PKCE (Proof Key for Code Exchange). It can be "enabled" or "disabled" - PKCEMode string `protobuf:"bytes,20,opt,name=PKCEMode,proto3" json:"pkce_mode,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + PKCEMode string `protobuf:"bytes,20,opt,name=PKCEMode,proto3" json:"pkce_mode,omitempty"` + // UserMatchers is a set of glob patterns to narrow down which username(s) this auth connector should + // match for identifier-first login. + UserMatchers []string `protobuf:"bytes,21,rep,name=UserMatchers,proto3" json:"user_matchers,omitempty"` + // RequestObjectMode determines how JWT-Secured Authorization Requests will be used for authorization + // requests. JARs, or request objects, can provide integrity protection, source authentication, and confidentiality + // for authorization request parameters. + RequestObjectMode string `protobuf:"bytes,22,opt,name=RequestObjectMode,proto3" json:"request_object_mode,omitempty"` + // EntraIDGroupsProvider configures out-of-band user groups provider. + // It works by following through the groups claim source, which is sent for the "groups" + // claim when the user's group membership exceeds 200 max item limit. 
+ EntraIdGroupsProvider *EntraIDGroupsProvider `protobuf:"bytes,23,opt,name=entra_id_groups_provider,json=entraIdGroupsProvider,proto3" json:"entra_id_groups_provider,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *OIDCConnectorSpecV3) Reset() { *m = OIDCConnectorSpecV3{} } func (m *OIDCConnectorSpecV3) String() string { return proto.CompactTextString(m) } func (*OIDCConnectorSpecV3) ProtoMessage() {} func (*OIDCConnectorSpecV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{214} + return fileDescriptor_9198ee693835762e, []int{230} } func (m *OIDCConnectorSpecV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13406,6 +14387,65 @@ func (m *OIDCConnectorSpecV3) XXX_DiscardUnknown() { var xxx_messageInfo_OIDCConnectorSpecV3 proto.InternalMessageInfo +// EntraIDGroupsProvider configures an out-of-band user groups provider. +// It works by following the groups claim source, which is sent for the +// "groups" claim when the user's group membership exceeds the 200-item limit. +type EntraIDGroupsProvider struct { + // Disabled specifies that the groups provider should be disabled + // even when Entra ID responds with a groups claim source. + // Users may choose to disable it if they are using + // integrations such as SCIM or a similar groups importer, as + // connector-based role mapping may not be needed in such a scenario. + Disabled bool `protobuf:"varint,1,opt,name=disabled,proto3" json:"disabled,omitempty"` + // GroupType is a user group type filter. Defaults to "security-groups". + // Value can be "security-groups", "directory-roles", "all-groups". + GroupType string `protobuf:"bytes,2,opt,name=group_type,json=groupType,proto3" json:"group_type,omitempty"` + // GraphEndpoint is a Microsoft Graph API endpoint. 
+ // The groups claim source endpoint provided by Entra ID points to the + // now-retired Azure AD Graph endpoint ("https://graph.windows.net"). + // To convert it to the newer Microsoft Graph API endpoint, + // Teleport defaults to the Microsoft Graph global service endpoint ("https://graph.microsoft.com"). + // Update GraphEndpoint to point to a different Microsoft Graph national + // cloud deployment endpoint. + GraphEndpoint string `protobuf:"bytes,3,opt,name=graph_endpoint,json=graphEndpoint,proto3" json:"graph_endpoint,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *EntraIDGroupsProvider) Reset() { *m = EntraIDGroupsProvider{} } +func (m *EntraIDGroupsProvider) String() string { return proto.CompactTextString(m) } +func (*EntraIDGroupsProvider) ProtoMessage() {} +func (*EntraIDGroupsProvider) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{231} +} +func (m *EntraIDGroupsProvider) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *EntraIDGroupsProvider) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_EntraIDGroupsProvider.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *EntraIDGroupsProvider) XXX_Merge(src proto.Message) { + xxx_messageInfo_EntraIDGroupsProvider.Merge(m, src) +} +func (m *EntraIDGroupsProvider) XXX_Size() int { + return m.Size() +} +func (m *EntraIDGroupsProvider) XXX_DiscardUnknown() { + xxx_messageInfo_EntraIDGroupsProvider.DiscardUnknown(m) +} + +var xxx_messageInfo_EntraIDGroupsProvider proto.InternalMessageInfo + // MaxAge allows the max_age parameter to be nullable to preserve backwards // compatibility. The duration is stored as nanoseconds. 
type MaxAge struct { @@ -13419,7 +14459,7 @@ func (m *MaxAge) Reset() { *m = MaxAge{} } func (m *MaxAge) String() string { return proto.CompactTextString(m) } func (*MaxAge) ProtoMessage() {} func (*MaxAge) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{215} + return fileDescriptor_9198ee693835762e, []int{232} } func (m *MaxAge) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13464,7 +14504,7 @@ func (m *SSOClientRedirectSettings) Reset() { *m = SSOClientRedirectSett func (m *SSOClientRedirectSettings) String() string { return proto.CompactTextString(m) } func (*SSOClientRedirectSettings) ProtoMessage() {} func (*SSOClientRedirectSettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{216} + return fileDescriptor_9198ee693835762e, []int{233} } func (m *SSOClientRedirectSettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13511,7 +14551,13 @@ type OIDCConnectorMFASettings struct { // MaxAge is the amount of time in nanoseconds that an IdP session is valid for. Defaults to // 0 to always force re-authentication for MFA checks. This should only be set to a non-zero // value if the IdP is setup to perform MFA checks on top of active user sessions. - MaxAge Duration `protobuf:"varint,6,opt,name=max_age,json=maxAge,proto3,casttype=Duration" json:"max_age,omitempty"` + MaxAge Duration `protobuf:"varint,6,opt,name=max_age,json=maxAge,proto3,casttype=Duration" json:"max_age,omitempty"` + // RequestObjectMode determines how JWT-Secured Authorization Requests will be used for authorization + // requests. JARs, or request objects, can provide integrity protection, source authentication, and confidentiality + // for authorization request parameters. If omitted, MFA flows will default to the `RequestObjectMode` behavior + // specified in the base OIDC connector. Set this property to 'none' to explicitly disable request objects for + // the MFA client. 
+ RequestObjectMode string `protobuf:"bytes,7,opt,name=RequestObjectMode,proto3" json:"request_object_mode,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -13521,7 +14567,7 @@ func (m *OIDCConnectorMFASettings) Reset() { *m = OIDCConnectorMFASettin func (m *OIDCConnectorMFASettings) String() string { return proto.CompactTextString(m) } func (*OIDCConnectorMFASettings) ProtoMessage() {} func (*OIDCConnectorMFASettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{217} + return fileDescriptor_9198ee693835762e, []int{234} } func (m *OIDCConnectorMFASettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13602,11 +14648,14 @@ type OIDCAuthRequest struct { // TLS cert in case of successful auth. TlsPublicKey []byte `protobuf:"bytes,21,opt,name=tls_public_key,json=tlsPublicKey,proto3" json:"tls_pub_key,omitempty"` // SshAttestationStatement is an attestation statement for the given SSH public key. - SshAttestationStatement *v1.AttestationStatement `protobuf:"bytes,22,opt,name=ssh_attestation_statement,json=sshAttestationStatement,proto3" json:"ssh_attestation_statement,omitempty"` + SshAttestationStatement *v11.AttestationStatement `protobuf:"bytes,22,opt,name=ssh_attestation_statement,json=sshAttestationStatement,proto3" json:"ssh_attestation_statement,omitempty"` // TlsAttestationStatement is an attestation statement for the given TLS public key. - TlsAttestationStatement *v1.AttestationStatement `protobuf:"bytes,23,opt,name=tls_attestation_statement,json=tlsAttestationStatement,proto3" json:"tls_attestation_statement,omitempty"` + TlsAttestationStatement *v11.AttestationStatement `protobuf:"bytes,23,opt,name=tls_attestation_statement,json=tlsAttestationStatement,proto3" json:"tls_attestation_statement,omitempty"` // pkce_verifier is used to verified a generated code challenge. 
- PkceVerifier string `protobuf:"bytes,24,opt,name=pkce_verifier,json=pkceVerifier,proto3" json:"pkce_verifier"` + PkceVerifier string `protobuf:"bytes,24,opt,name=pkce_verifier,json=pkceVerifier,proto3" json:"pkce_verifier"` + // LoginHint is an optional username/email provided by the client that will be passed + // to the IdP via the 'login_hint' query parameter. + LoginHint string `protobuf:"bytes,25,opt,name=login_hint,json=loginHint,proto3" json:"login_hint,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -13616,7 +14665,7 @@ func (m *OIDCAuthRequest) Reset() { *m = OIDCAuthRequest{} } func (m *OIDCAuthRequest) String() string { return proto.CompactTextString(m) } func (*OIDCAuthRequest) ProtoMessage() {} func (*OIDCAuthRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{218} + return fileDescriptor_9198ee693835762e, []int{235} } func (m *OIDCAuthRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13667,7 +14716,7 @@ func (m *SAMLConnectorV2) Reset() { *m = SAMLConnectorV2{} } func (m *SAMLConnectorV2) String() string { return proto.CompactTextString(m) } func (*SAMLConnectorV2) ProtoMessage() {} func (*SAMLConnectorV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{219} + return fileDescriptor_9198ee693835762e, []int{236} } func (m *SAMLConnectorV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13709,7 +14758,7 @@ func (m *SAMLConnectorV2List) Reset() { *m = SAMLConnectorV2List{} } func (m *SAMLConnectorV2List) String() string { return proto.CompactTextString(m) } func (*SAMLConnectorV2List) ProtoMessage() {} func (*SAMLConnectorV2List) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{220} + return fileDescriptor_9198ee693835762e, []int{237} } func (m *SAMLConnectorV2List) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13789,17 +14838,24 @@ type 
SAMLConnectorSpecV2 struct { // But we never honored request binding value provided by the IdP and always used http-redirect // binding as a default. Setting up PreferredRequestBinding value lets us preserve existing // auth connector behavior and only use http-post binding if it is explicitly configured. - PreferredRequestBinding string `protobuf:"bytes,19,opt,name=PreferredRequestBinding,proto3" json:"preferred_request_binding,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + PreferredRequestBinding string `protobuf:"bytes,19,opt,name=PreferredRequestBinding,proto3" json:"preferred_request_binding,omitempty"` + // UserMatchers is a set of glob patterns to narrow down which username(s) this auth connector should + // match for identifier-first login. + UserMatchers []string `protobuf:"bytes,20,rep,name=UserMatchers,proto3" json:"user_matchers,omitempty"` + // IncludeSubject is a flag that indicates whether the Subject element is included in the SAML + // authentication request. Defaults to false. + // Note: Some IdPs will reject requests that contain a Subject. 
+ IncludeSubject bool `protobuf:"varint,21,opt,name=IncludeSubject,proto3" json:"include_subject,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *SAMLConnectorSpecV2) Reset() { *m = SAMLConnectorSpecV2{} } func (m *SAMLConnectorSpecV2) String() string { return proto.CompactTextString(m) } func (*SAMLConnectorSpecV2) ProtoMessage() {} func (*SAMLConnectorSpecV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{221} + return fileDescriptor_9198ee693835762e, []int{238} } func (m *SAMLConnectorSpecV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13858,7 +14914,7 @@ func (m *SAMLConnectorMFASettings) Reset() { *m = SAMLConnectorMFASettin func (m *SAMLConnectorMFASettings) String() string { return proto.CompactTextString(m) } func (*SAMLConnectorMFASettings) ProtoMessage() {} func (*SAMLConnectorMFASettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{222} + return fileDescriptor_9198ee693835762e, []int{239} } func (m *SAMLConnectorMFASettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13933,15 +14989,18 @@ type SAMLAuthRequest struct { // TLS cert in case of successful auth. TlsPublicKey []byte `protobuf:"bytes,20,opt,name=tls_public_key,json=tlsPublicKey,proto3" json:"tls_pub_key,omitempty"` // SshAttestationStatement is an attestation statement for the given SSH public key. - SshAttestationStatement *v1.AttestationStatement `protobuf:"bytes,21,opt,name=ssh_attestation_statement,json=sshAttestationStatement,proto3" json:"ssh_attestation_statement,omitempty"` + SshAttestationStatement *v11.AttestationStatement `protobuf:"bytes,21,opt,name=ssh_attestation_statement,json=sshAttestationStatement,proto3" json:"ssh_attestation_statement,omitempty"` // TlsAttestationStatement is an attestation statement for the given TLS public key. 
- TlsAttestationStatement *v1.AttestationStatement `protobuf:"bytes,22,opt,name=tls_attestation_statement,json=tlsAttestationStatement,proto3" json:"tls_attestation_statement,omitempty"` + TlsAttestationStatement *v11.AttestationStatement `protobuf:"bytes,22,opt,name=tls_attestation_statement,json=tlsAttestationStatement,proto3" json:"tls_attestation_statement,omitempty"` // PostForm is the HTML form value that contains the SAML authentication request data. // Value is only set if the PreferredRequestBinding in the SAMLConnectorSpecV2 // is "http-post". In any other case, RedirectURL field will be populated. PostForm []byte `protobuf:"bytes,23,opt,name=PostForm,proto3" json:"post_form,omitempty"` // ClientVersion is the version of tsh or Proxy that is sending the SAMLAuthRequest request. - ClientVersion string `protobuf:"bytes,24,opt,name=ClientVersion,proto3" json:"client_version,omitempty"` + ClientVersion string `protobuf:"bytes,24,opt,name=ClientVersion,proto3" json:"client_version,omitempty"` + // SubjectIdentifier is an optional username/email provided by the client that will be + // passed to prepopulate the IdP's login form. + SubjectIdentifier string `protobuf:"bytes,25,opt,name=SubjectIdentifier,proto3" json:"subject_identifier,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -13951,7 +15010,7 @@ func (m *SAMLAuthRequest) Reset() { *m = SAMLAuthRequest{} } func (m *SAMLAuthRequest) String() string { return proto.CompactTextString(m) } func (*SAMLAuthRequest) ProtoMessage() {} func (*SAMLAuthRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{223} + return fileDescriptor_9198ee693835762e, []int{240} } func (m *SAMLAuthRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -13997,7 +15056,7 @@ func (m *AttributeMapping) Reset() { *m = AttributeMapping{} } func (m *AttributeMapping) String() string { return proto.CompactTextString(m) } func
(*AttributeMapping) ProtoMessage() {} func (*AttributeMapping) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{224} + return fileDescriptor_9198ee693835762e, []int{241} } func (m *AttributeMapping) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14042,7 +15101,7 @@ func (m *AsymmetricKeyPair) Reset() { *m = AsymmetricKeyPair{} } func (m *AsymmetricKeyPair) String() string { return proto.CompactTextString(m) } func (*AsymmetricKeyPair) ProtoMessage() {} func (*AsymmetricKeyPair) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{225} + return fileDescriptor_9198ee693835762e, []int{242} } func (m *AsymmetricKeyPair) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14093,7 +15152,7 @@ func (m *GithubConnectorV3) Reset() { *m = GithubConnectorV3{} } func (m *GithubConnectorV3) String() string { return proto.CompactTextString(m) } func (*GithubConnectorV3) ProtoMessage() {} func (*GithubConnectorV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{226} + return fileDescriptor_9198ee693835762e, []int{243} } func (m *GithubConnectorV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14135,7 +15194,7 @@ func (m *GithubConnectorV3List) Reset() { *m = GithubConnectorV3List{} } func (m *GithubConnectorV3List) String() string { return proto.CompactTextString(m) } func (*GithubConnectorV3List) ProtoMessage() {} func (*GithubConnectorV3List) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{227} + return fileDescriptor_9198ee693835762e, []int{244} } func (m *GithubConnectorV3List) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14189,16 +15248,19 @@ type GithubConnectorSpecV3 struct { // ClientRedirectSettings defines which client redirect URLs are allowed for // non-browser SSO logins other than the standard localhost ones. 
ClientRedirectSettings *SSOClientRedirectSettings `protobuf:"bytes,9,opt,name=ClientRedirectSettings,proto3" json:"client_redirect_settings,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + // UserMatchers is a set of glob patterns to narrow down which username(s) this auth connector should + // match for identifier-first login. + UserMatchers []string `protobuf:"bytes,10,rep,name=UserMatchers,proto3" json:"user_matchers,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *GithubConnectorSpecV3) Reset() { *m = GithubConnectorSpecV3{} } func (m *GithubConnectorSpecV3) String() string { return proto.CompactTextString(m) } func (*GithubConnectorSpecV3) ProtoMessage() {} func (*GithubConnectorSpecV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{228} + return fileDescriptor_9198ee693835762e, []int{245} } func (m *GithubConnectorSpecV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14271,9 +15333,9 @@ type GithubAuthRequest struct { // TLS cert in case of successful auth. TlsPublicKey []byte `protobuf:"bytes,20,opt,name=tls_public_key,json=tlsPublicKey,proto3" json:"tls_pub_key,omitempty"` // SshAttestationStatement is an attestation statement for the given SSH public key. - SshAttestationStatement *v1.AttestationStatement `protobuf:"bytes,21,opt,name=ssh_attestation_statement,json=sshAttestationStatement,proto3" json:"ssh_attestation_statement,omitempty"` + SshAttestationStatement *v11.AttestationStatement `protobuf:"bytes,21,opt,name=ssh_attestation_statement,json=sshAttestationStatement,proto3" json:"ssh_attestation_statement,omitempty"` // TlsAttestationStatement is an attestation statement for the given TLS public key. 
- TlsAttestationStatement *v1.AttestationStatement `protobuf:"bytes,22,opt,name=tls_attestation_statement,json=tlsAttestationStatement,proto3" json:"tls_attestation_statement,omitempty"` + TlsAttestationStatement *v11.AttestationStatement `protobuf:"bytes,22,opt,name=tls_attestation_statement,json=tlsAttestationStatement,proto3" json:"tls_attestation_statement,omitempty"` // AuthenticatedUser is the username of an authenticated Teleport user. This // OAuth flow is used to retrieve GitHub identity info which will be added to // the existing user. @@ -14287,7 +15349,7 @@ func (m *GithubAuthRequest) Reset() { *m = GithubAuthRequest{} } func (m *GithubAuthRequest) String() string { return proto.CompactTextString(m) } func (*GithubAuthRequest) ProtoMessage() {} func (*GithubAuthRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{229} + return fileDescriptor_9198ee693835762e, []int{246} } func (m *GithubAuthRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14331,7 +15393,7 @@ func (m *SSOWarnings) Reset() { *m = SSOWarnings{} } func (m *SSOWarnings) String() string { return proto.CompactTextString(m) } func (*SSOWarnings) ProtoMessage() {} func (*SSOWarnings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{230} + return fileDescriptor_9198ee693835762e, []int{247} } func (m *SSOWarnings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14387,7 +15449,7 @@ func (m *CreateUserParams) Reset() { *m = CreateUserParams{} } func (m *CreateUserParams) String() string { return proto.CompactTextString(m) } func (*CreateUserParams) ProtoMessage() {} func (*CreateUserParams) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{231} + return fileDescriptor_9198ee693835762e, []int{248} } func (m *CreateUserParams) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14472,7 +15534,7 @@ func (m *SSODiagnosticInfo) Reset() { *m = SSODiagnosticInfo{} } func (m 
*SSODiagnosticInfo) String() string { return proto.CompactTextString(m) } func (*SSODiagnosticInfo) ProtoMessage() {} func (*SSODiagnosticInfo) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{232} + return fileDescriptor_9198ee693835762e, []int{249} } func (m *SSODiagnosticInfo) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14516,7 +15578,7 @@ func (m *GithubTokenInfo) Reset() { *m = GithubTokenInfo{} } func (m *GithubTokenInfo) String() string { return proto.CompactTextString(m) } func (*GithubTokenInfo) ProtoMessage() {} func (*GithubTokenInfo) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{233} + return fileDescriptor_9198ee693835762e, []int{250} } func (m *GithubTokenInfo) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14567,7 +15629,7 @@ func (m *GithubClaims) Reset() { *m = GithubClaims{} } func (m *GithubClaims) String() string { return proto.CompactTextString(m) } func (*GithubClaims) ProtoMessage() {} func (*GithubClaims) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{234} + return fileDescriptor_9198ee693835762e, []int{251} } func (m *GithubClaims) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14619,7 +15681,7 @@ func (m *TeamMapping) Reset() { *m = TeamMapping{} } func (m *TeamMapping) String() string { return proto.CompactTextString(m) } func (*TeamMapping) ProtoMessage() {} func (*TeamMapping) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{235} + return fileDescriptor_9198ee693835762e, []int{252} } func (m *TeamMapping) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14665,7 +15727,7 @@ func (m *TeamRolesMapping) Reset() { *m = TeamRolesMapping{} } func (m *TeamRolesMapping) String() string { return proto.CompactTextString(m) } func (*TeamRolesMapping) ProtoMessage() {} func (*TeamRolesMapping) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{236} + 
return fileDescriptor_9198ee693835762e, []int{253} } func (m *TeamRolesMapping) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14715,7 +15777,7 @@ type TrustedClusterV2 struct { func (m *TrustedClusterV2) Reset() { *m = TrustedClusterV2{} } func (*TrustedClusterV2) ProtoMessage() {} func (*TrustedClusterV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{237} + return fileDescriptor_9198ee693835762e, []int{254} } func (m *TrustedClusterV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14757,7 +15819,7 @@ func (m *TrustedClusterV2List) Reset() { *m = TrustedClusterV2List{} } func (m *TrustedClusterV2List) String() string { return proto.CompactTextString(m) } func (*TrustedClusterV2List) ProtoMessage() {} func (*TrustedClusterV2List) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{238} + return fileDescriptor_9198ee693835762e, []int{255} } func (m *TrustedClusterV2List) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14813,7 +15875,7 @@ func (m *TrustedClusterSpecV2) Reset() { *m = TrustedClusterSpecV2{} } func (m *TrustedClusterSpecV2) String() string { return proto.CompactTextString(m) } func (*TrustedClusterSpecV2) ProtoMessage() {} func (*TrustedClusterSpecV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{239} + return fileDescriptor_9198ee693835762e, []int{256} } func (m *TrustedClusterSpecV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14867,7 +15929,7 @@ func (m *LockV2) Reset() { *m = LockV2{} } func (m *LockV2) String() string { return proto.CompactTextString(m) } func (*LockV2) ProtoMessage() {} func (*LockV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{240} + return fileDescriptor_9198ee693835762e, []int{257} } func (m *LockV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14917,7 +15979,7 @@ func (m *LockSpecV2) Reset() { *m = LockSpecV2{} } func (m *LockSpecV2) 
String() string { return proto.CompactTextString(m) } func (*LockSpecV2) ProtoMessage() {} func (*LockSpecV2) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{241} + return fileDescriptor_9198ee693835762e, []int{258} } func (m *LockSpecV2) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -14965,7 +16027,14 @@ type LockTarget struct { // Requires Teleport Enterprise. Device string `protobuf:"bytes,8,opt,name=Device,proto3" json:"device,omitempty"` // ServerID is the host id of the Teleport instance. - ServerID string `protobuf:"bytes,9,opt,name=ServerID,proto3" json:"server_id,omitempty"` + ServerID string `protobuf:"bytes,9,opt,name=ServerID,proto3" json:"server_id,omitempty"` + // BotInstanceID is the bot instance ID if this is a bot identity and is + // ignored otherwise. + BotInstanceID string `protobuf:"bytes,10,opt,name=BotInstanceID,proto3" json:"bot_instance_id,omitempty"` + // JoinToken is the name of the join token used when this identity originally + // joined. This is only valid for bot identities, and cannot be used to target + // `token`-joined bots. + JoinToken string `protobuf:"bytes,11,opt,name=JoinToken,proto3" json:"join_token,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -14974,7 +16043,7 @@ type LockTarget struct { func (m *LockTarget) Reset() { *m = LockTarget{} } func (*LockTarget) ProtoMessage() {} func (*LockTarget) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{242} + return fileDescriptor_9198ee693835762e, []int{259} } func (m *LockTarget) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15003,6 +16072,51 @@ func (m *LockTarget) XXX_DiscardUnknown() { var xxx_messageInfo_LockTarget proto.InternalMessageInfo +// LockFilter encodes optional filters to apply when listing Lock resources +type LockFilter struct { + // Targets is a list of targets. 
Every returned lock must match at least + // one of the targets. + Targets []*LockTarget `protobuf:"bytes,1,rep,name=targets,proto3" json:"targets,omitempty"` + // InForceOnly specifies whether to return active locks only. + InForceOnly bool `protobuf:"varint,2,opt,name=in_force_only,json=inForceOnly,proto3" json:"in_force_only,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *LockFilter) Reset() { *m = LockFilter{} } +func (m *LockFilter) String() string { return proto.CompactTextString(m) } +func (*LockFilter) ProtoMessage() {} +func (*LockFilter) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{260} +} +func (m *LockFilter) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *LockFilter) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_LockFilter.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *LockFilter) XXX_Merge(src proto.Message) { + xxx_messageInfo_LockFilter.Merge(m, src) +} +func (m *LockFilter) XXX_Size() int { + return m.Size() +} +func (m *LockFilter) XXX_DiscardUnknown() { + xxx_messageInfo_LockFilter.DiscardUnknown(m) +} + +var xxx_messageInfo_LockFilter proto.InternalMessageInfo + // AddressCondition represents a set of addresses. Presently the addresses are specified // exclusively in terms of IPv4/IPv6 ranges. 
type AddressCondition struct { @@ -15018,7 +16132,7 @@ func (m *AddressCondition) Reset() { *m = AddressCondition{} } func (m *AddressCondition) String() string { return proto.CompactTextString(m) } func (*AddressCondition) ProtoMessage() {} func (*AddressCondition) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{243} + return fileDescriptor_9198ee693835762e, []int{261} } func (m *AddressCondition) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15061,7 +16175,7 @@ func (m *NetworkRestrictionsSpecV4) Reset() { *m = NetworkRestrictionsSp func (m *NetworkRestrictionsSpecV4) String() string { return proto.CompactTextString(m) } func (*NetworkRestrictionsSpecV4) ProtoMessage() {} func (*NetworkRestrictionsSpecV4) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{244} + return fileDescriptor_9198ee693835762e, []int{262} } func (m *NetworkRestrictionsSpecV4) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15114,7 +16228,7 @@ func (m *NetworkRestrictionsV4) Reset() { *m = NetworkRestrictionsV4{} } func (m *NetworkRestrictionsV4) String() string { return proto.CompactTextString(m) } func (*NetworkRestrictionsV4) ProtoMessage() {} func (*NetworkRestrictionsV4) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{245} + return fileDescriptor_9198ee693835762e, []int{263} } func (m *NetworkRestrictionsV4) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15158,7 +16272,7 @@ func (m *WindowsDesktopServiceV3) Reset() { *m = WindowsDesktopServiceV3 func (m *WindowsDesktopServiceV3) String() string { return proto.CompactTextString(m) } func (*WindowsDesktopServiceV3) ProtoMessage() {} func (*WindowsDesktopServiceV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{246} + return fileDescriptor_9198ee693835762e, []int{264} } func (m *WindowsDesktopServiceV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15196,7 +16310,11 @@ type 
WindowsDesktopServiceSpecV3 struct { // Hostname is the desktop service hostname. Hostname string `protobuf:"bytes,3,opt,name=Hostname,proto3" json:"hostname"` // ProxyIDs is a list of proxy IDs this server is expected to be connected to. - ProxyIDs []string `protobuf:"bytes,4,rep,name=ProxyIDs,proto3" json:"proxy_ids,omitempty"` + ProxyIDs []string `protobuf:"bytes,4,rep,name=ProxyIDs,proto3" json:"proxy_ids,omitempty"` + // the name of the Relay group that the server is connected to + RelayGroup string `protobuf:"bytes,5,opt,name=relay_group,json=relayGroup,proto3" json:"relay_group,omitempty"` + // the list of Relay host IDs that the server is connected to + RelayIds []string `protobuf:"bytes,6,rep,name=relay_ids,json=relayIds,proto3" json:"relay_ids,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -15206,7 +16324,7 @@ func (m *WindowsDesktopServiceSpecV3) Reset() { *m = WindowsDesktopServi func (m *WindowsDesktopServiceSpecV3) String() string { return proto.CompactTextString(m) } func (*WindowsDesktopServiceSpecV3) ProtoMessage() {} func (*WindowsDesktopServiceSpecV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{247} + return fileDescriptor_9198ee693835762e, []int{265} } func (m *WindowsDesktopServiceSpecV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15250,7 +16368,7 @@ func (m *WindowsDesktopFilter) Reset() { *m = WindowsDesktopFilter{} } func (m *WindowsDesktopFilter) String() string { return proto.CompactTextString(m) } func (*WindowsDesktopFilter) ProtoMessage() {} func (*WindowsDesktopFilter) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{248} + return fileDescriptor_9198ee693835762e, []int{266} } func (m *WindowsDesktopFilter) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15294,7 +16412,7 @@ func (m *WindowsDesktopV3) Reset() { *m = WindowsDesktopV3{} } func (m *WindowsDesktopV3) String() 
string { return proto.CompactTextString(m) } func (*WindowsDesktopV3) ProtoMessage() {} func (*WindowsDesktopV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{249} + return fileDescriptor_9198ee693835762e, []int{267} } func (m *WindowsDesktopV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15347,7 +16465,7 @@ func (m *WindowsDesktopSpecV3) Reset() { *m = WindowsDesktopSpecV3{} } func (m *WindowsDesktopSpecV3) String() string { return proto.CompactTextString(m) } func (*WindowsDesktopSpecV3) ProtoMessage() {} func (*WindowsDesktopSpecV3) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{250} + return fileDescriptor_9198ee693835762e, []int{268} } func (m *WindowsDesktopSpecV3) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15391,7 +16509,7 @@ func (m *DynamicWindowsDesktopV1) Reset() { *m = DynamicWindowsDesktopV1 func (m *DynamicWindowsDesktopV1) String() string { return proto.CompactTextString(m) } func (*DynamicWindowsDesktopV1) ProtoMessage() {} func (*DynamicWindowsDesktopV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{251} + return fileDescriptor_9198ee693835762e, []int{269} } func (m *DynamicWindowsDesktopV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15442,7 +16560,7 @@ func (m *DynamicWindowsDesktopSpecV1) Reset() { *m = DynamicWindowsDeskt func (m *DynamicWindowsDesktopSpecV1) String() string { return proto.CompactTextString(m) } func (*DynamicWindowsDesktopSpecV1) ProtoMessage() {} func (*DynamicWindowsDesktopSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{252} + return fileDescriptor_9198ee693835762e, []int{270} } func (m *DynamicWindowsDesktopSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15483,7 +16601,7 @@ func (m *Resolution) Reset() { *m = Resolution{} } func (m *Resolution) String() string { return proto.CompactTextString(m) } func (*Resolution) 
ProtoMessage() {} func (*Resolution) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{253} + return fileDescriptor_9198ee693835762e, []int{271} } func (m *Resolution) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15570,7 +16688,7 @@ func (m *RegisterUsingTokenRequest) Reset() { *m = RegisterUsingTokenReq func (m *RegisterUsingTokenRequest) String() string { return proto.CompactTextString(m) } func (*RegisterUsingTokenRequest) ProtoMessage() {} func (*RegisterUsingTokenRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{254} + return fileDescriptor_9198ee693835762e, []int{272} } func (m *RegisterUsingTokenRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15624,7 +16742,7 @@ func (m *RecoveryCodesV1) Reset() { *m = RecoveryCodesV1{} } func (m *RecoveryCodesV1) String() string { return proto.CompactTextString(m) } func (*RecoveryCodesV1) ProtoMessage() {} func (*RecoveryCodesV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{255} + return fileDescriptor_9198ee693835762e, []int{273} } func (m *RecoveryCodesV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15669,7 +16787,7 @@ func (m *RecoveryCodesSpecV1) Reset() { *m = RecoveryCodesSpecV1{} } func (m *RecoveryCodesSpecV1) String() string { return proto.CompactTextString(m) } func (*RecoveryCodesSpecV1) ProtoMessage() {} func (*RecoveryCodesSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{256} + return fileDescriptor_9198ee693835762e, []int{274} } func (m *RecoveryCodesSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15713,7 +16831,7 @@ func (m *RecoveryCode) Reset() { *m = RecoveryCode{} } func (m *RecoveryCode) String() string { return proto.CompactTextString(m) } func (*RecoveryCode) ProtoMessage() {} func (*RecoveryCode) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{257} + return 
fileDescriptor_9198ee693835762e, []int{275} } func (m *RecoveryCode) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15753,7 +16871,7 @@ func (m *NullableSessionState) Reset() { *m = NullableSessionState{} } func (m *NullableSessionState) String() string { return proto.CompactTextString(m) } func (*NullableSessionState) ProtoMessage() {} func (*NullableSessionState) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{258} + return fileDescriptor_9198ee693835762e, []int{276} } func (m *NullableSessionState) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15799,7 +16917,7 @@ func (m *SessionTrackerFilter) Reset() { *m = SessionTrackerFilter{} } func (m *SessionTrackerFilter) String() string { return proto.CompactTextString(m) } func (*SessionTrackerFilter) ProtoMessage() {} func (*SessionTrackerFilter) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{259} + return fileDescriptor_9198ee693835762e, []int{277} } func (m *SessionTrackerFilter) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15843,7 +16961,7 @@ func (m *SessionTrackerV1) Reset() { *m = SessionTrackerV1{} } func (m *SessionTrackerV1) String() string { return proto.CompactTextString(m) } func (*SessionTrackerV1) ProtoMessage() {} func (*SessionTrackerV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{260} + return fileDescriptor_9198ee693835762e, []int{278} } func (m *SessionTrackerV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -15941,7 +17059,7 @@ func (m *SessionTrackerSpecV1) Reset() { *m = SessionTrackerSpecV1{} } func (m *SessionTrackerSpecV1) String() string { return proto.CompactTextString(m) } func (*SessionTrackerSpecV1) ProtoMessage() {} func (*SessionTrackerSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{261} + return fileDescriptor_9198ee693835762e, []int{279} } func (m *SessionTrackerSpecV1) XXX_Unmarshal(b []byte) error { 
return m.Unmarshal(b) @@ -15988,7 +17106,7 @@ func (m *SessionTrackerPolicySet) Reset() { *m = SessionTrackerPolicySet func (m *SessionTrackerPolicySet) String() string { return proto.CompactTextString(m) } func (*SessionTrackerPolicySet) ProtoMessage() {} func (*SessionTrackerPolicySet) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{262} + return fileDescriptor_9198ee693835762e, []int{280} } func (m *SessionTrackerPolicySet) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16026,17 +17144,19 @@ type Participant struct { // Mode is the participant mode. Mode string `protobuf:"bytes,3,opt,name=Mode,proto3" json:"mode,omitempty"` // LastActive is the last time this party was active in the session. - LastActive time.Time `protobuf:"bytes,4,opt,name=LastActive,proto3,stdtime" json:"last_active,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + LastActive time.Time `protobuf:"bytes,4,opt,name=LastActive,proto3,stdtime" json:"last_active,omitempty"` + // Cluster is the cluster name the user is authenticated against. 
+ Cluster string `protobuf:"bytes,5,opt,name=Cluster,proto3" json:"cluster,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *Participant) Reset() { *m = Participant{} } func (m *Participant) String() string { return proto.CompactTextString(m) } func (*Participant) ProtoMessage() {} func (*Participant) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{263} + return fileDescriptor_9198ee693835762e, []int{281} } func (m *Participant) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16080,7 +17200,7 @@ func (m *UIConfigV1) Reset() { *m = UIConfigV1{} } func (m *UIConfigV1) String() string { return proto.CompactTextString(m) } func (*UIConfigV1) ProtoMessage() {} func (*UIConfigV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{264} + return fileDescriptor_9198ee693835762e, []int{282} } func (m *UIConfigV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16126,7 +17246,7 @@ func (m *UIConfigSpecV1) Reset() { *m = UIConfigSpecV1{} } func (m *UIConfigSpecV1) String() string { return proto.CompactTextString(m) } func (*UIConfigSpecV1) ProtoMessage() {} func (*UIConfigSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{265} + return fileDescriptor_9198ee693835762e, []int{283} } func (m *UIConfigSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16177,7 +17297,7 @@ func (m *InstallerV1) Reset() { *m = InstallerV1{} } func (m *InstallerV1) String() string { return proto.CompactTextString(m) } func (*InstallerV1) ProtoMessage() {} func (*InstallerV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{266} + return fileDescriptor_9198ee693835762e, []int{284} } func (m *InstallerV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16219,7 +17339,7 @@ func (m *InstallerSpecV1) Reset() { *m = InstallerSpecV1{} } func (m 
*InstallerSpecV1) String() string { return proto.CompactTextString(m) } func (*InstallerSpecV1) ProtoMessage() {} func (*InstallerSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{267} + return fileDescriptor_9198ee693835762e, []int{285} } func (m *InstallerSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16261,7 +17381,7 @@ func (m *InstallerV1List) Reset() { *m = InstallerV1List{} } func (m *InstallerV1List) String() string { return proto.CompactTextString(m) } func (*InstallerV1List) ProtoMessage() {} func (*InstallerV1List) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{268} + return fileDescriptor_9198ee693835762e, []int{286} } func (m *InstallerV1List) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16305,7 +17425,7 @@ func (m *SortBy) Reset() { *m = SortBy{} } func (m *SortBy) String() string { return proto.CompactTextString(m) } func (*SortBy) ProtoMessage() {} func (*SortBy) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{269} + return fileDescriptor_9198ee693835762e, []int{287} } func (m *SortBy) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16351,7 +17471,7 @@ func (m *ConnectionDiagnosticV1) Reset() { *m = ConnectionDiagnosticV1{} func (m *ConnectionDiagnosticV1) String() string { return proto.CompactTextString(m) } func (*ConnectionDiagnosticV1) ProtoMessage() {} func (*ConnectionDiagnosticV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{270} + return fileDescriptor_9198ee693835762e, []int{288} } func (m *ConnectionDiagnosticV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16401,7 +17521,7 @@ func (m *ConnectionDiagnosticSpecV1) Reset() { *m = ConnectionDiagnostic func (m *ConnectionDiagnosticSpecV1) String() string { return proto.CompactTextString(m) } func (*ConnectionDiagnosticSpecV1) ProtoMessage() {} func (*ConnectionDiagnosticSpecV1) Descriptor() ([]byte, []int) { 
- return fileDescriptor_9198ee693835762e, []int{271} + return fileDescriptor_9198ee693835762e, []int{289} } func (m *ConnectionDiagnosticSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16447,7 +17567,7 @@ func (m *ConnectionDiagnosticTrace) Reset() { *m = ConnectionDiagnosticT func (m *ConnectionDiagnosticTrace) String() string { return proto.CompactTextString(m) } func (*ConnectionDiagnosticTrace) ProtoMessage() {} func (*ConnectionDiagnosticTrace) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{272} + return fileDescriptor_9198ee693835762e, []int{290} } func (m *ConnectionDiagnosticTrace) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16490,7 +17610,7 @@ func (m *DatabaseServiceV1) Reset() { *m = DatabaseServiceV1{} } func (m *DatabaseServiceV1) String() string { return proto.CompactTextString(m) } func (*DatabaseServiceV1) ProtoMessage() {} func (*DatabaseServiceV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{273} + return fileDescriptor_9198ee693835762e, []int{291} } func (m *DatabaseServiceV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16534,7 +17654,7 @@ func (m *DatabaseServiceSpecV1) Reset() { *m = DatabaseServiceSpecV1{} } func (m *DatabaseServiceSpecV1) String() string { return proto.CompactTextString(m) } func (*DatabaseServiceSpecV1) ProtoMessage() {} func (*DatabaseServiceSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{274} + return fileDescriptor_9198ee693835762e, []int{292} } func (m *DatabaseServiceSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16576,7 +17696,7 @@ func (m *DatabaseResourceMatcher) Reset() { *m = DatabaseResourceMatcher func (m *DatabaseResourceMatcher) String() string { return proto.CompactTextString(m) } func (*DatabaseResourceMatcher) ProtoMessage() {} func (*DatabaseResourceMatcher) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{275} 
+ return fileDescriptor_9198ee693835762e, []int{293} } func (m *DatabaseResourceMatcher) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16620,7 +17740,7 @@ func (m *ResourceMatcherAWS) Reset() { *m = ResourceMatcherAWS{} } func (m *ResourceMatcherAWS) String() string { return proto.CompactTextString(m) } func (*ResourceMatcherAWS) ProtoMessage() {} func (*ResourceMatcherAWS) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{276} + return fileDescriptor_9198ee693835762e, []int{294} } func (m *ResourceMatcherAWS) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16662,7 +17782,7 @@ func (m *ClusterAlert) Reset() { *m = ClusterAlert{} } func (m *ClusterAlert) String() string { return proto.CompactTextString(m) } func (*ClusterAlert) ProtoMessage() {} func (*ClusterAlert) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{277} + return fileDescriptor_9198ee693835762e, []int{295} } func (m *ClusterAlert) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16708,7 +17828,7 @@ func (m *ClusterAlertSpec) Reset() { *m = ClusterAlertSpec{} } func (m *ClusterAlertSpec) String() string { return proto.CompactTextString(m) } func (*ClusterAlertSpec) ProtoMessage() {} func (*ClusterAlertSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{278} + return fileDescriptor_9198ee693835762e, []int{296} } func (m *ClusterAlertSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16761,7 +17881,7 @@ func (m *GetClusterAlertsRequest) Reset() { *m = GetClusterAlertsRequest func (m *GetClusterAlertsRequest) String() string { return proto.CompactTextString(m) } func (*GetClusterAlertsRequest) ProtoMessage() {} func (*GetClusterAlertsRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{279} + return fileDescriptor_9198ee693835762e, []int{297} } func (m *GetClusterAlertsRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ 
-16811,7 +17931,7 @@ func (m *AlertAcknowledgement) Reset() { *m = AlertAcknowledgement{} } func (m *AlertAcknowledgement) String() string { return proto.CompactTextString(m) } func (*AlertAcknowledgement) ProtoMessage() {} func (*AlertAcknowledgement) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{280} + return fileDescriptor_9198ee693835762e, []int{298} } func (m *AlertAcknowledgement) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16863,7 +17983,7 @@ func (m *Release) Reset() { *m = Release{} } func (m *Release) String() string { return proto.CompactTextString(m) } func (*Release) ProtoMessage() {} func (*Release) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{281} + return fileDescriptor_9198ee693835762e, []int{299} } func (m *Release) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16921,7 +18041,7 @@ func (m *Asset) Reset() { *m = Asset{} } func (m *Asset) String() string { return proto.CompactTextString(m) } func (*Asset) ProtoMessage() {} func (*Asset) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{282} + return fileDescriptor_9198ee693835762e, []int{300} } func (m *Asset) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -16974,7 +18094,7 @@ func (m *PluginV1) Reset() { *m = PluginV1{} } func (m *PluginV1) String() string { return proto.CompactTextString(m) } func (*PluginV1) ProtoMessage() {} func (*PluginV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{283} + return fileDescriptor_9198ee693835762e, []int{301} } func (m *PluginV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -17027,6 +18147,7 @@ type PluginSpecV1 struct { // *PluginSpecV1_Msteams // *PluginSpecV1_NetIq // *PluginSpecV1_Github + // *PluginSpecV1_Intune Settings isPluginSpecV1_Settings `protobuf_oneof:"settings"` // generation contains a unique ID that should: // - Be created by the backend on plugin creation. 
@@ -17043,7 +18164,7 @@ func (m *PluginSpecV1) Reset() { *m = PluginSpecV1{} } func (m *PluginSpecV1) String() string { return proto.CompactTextString(m) } func (*PluginSpecV1) ProtoMessage() {} func (*PluginSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{284} + return fileDescriptor_9198ee693835762e, []int{302} } func (m *PluginSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -17136,6 +18257,9 @@ type PluginSpecV1_NetIq struct { type PluginSpecV1_Github struct { Github *PluginGithubSettings `protobuf:"bytes,20,opt,name=github,proto3,oneof" json:"github,omitempty"` } +type PluginSpecV1_Intune struct { + Intune *PluginIntuneSettings `protobuf:"bytes,21,opt,name=intune,proto3,oneof" json:"intune,omitempty"` +} func (*PluginSpecV1_SlackAccessPlugin) isPluginSpecV1_Settings() {} func (*PluginSpecV1_Opsgenie) isPluginSpecV1_Settings() {} @@ -17156,6 +18280,7 @@ func (*PluginSpecV1_Email) isPluginSpecV1_Settings() {} func (*PluginSpecV1_Msteams) isPluginSpecV1_Settings() {} func (*PluginSpecV1_NetIq) isPluginSpecV1_Settings() {} func (*PluginSpecV1_Github) isPluginSpecV1_Settings() {} +func (*PluginSpecV1_Intune) isPluginSpecV1_Settings() {} func (m *PluginSpecV1) GetSettings() isPluginSpecV1_Settings { if m != nil { @@ -17297,6 +18422,13 @@ func (m *PluginSpecV1) GetGithub() *PluginGithubSettings { return nil } +func (m *PluginSpecV1) GetIntune() *PluginIntuneSettings { + if x, ok := m.GetSettings().(*PluginSpecV1_Intune); ok { + return x.Intune + } + return nil +} + // XXX_OneofWrappers is for the internal use of the proto package. 
func (*PluginSpecV1) XXX_OneofWrappers() []interface{} { return []interface{}{ @@ -17319,6 +18451,7 @@ func (*PluginSpecV1) XXX_OneofWrappers() []interface{} { (*PluginSpecV1_Msteams)(nil), (*PluginSpecV1_NetIq)(nil), (*PluginSpecV1_Github)(nil), + (*PluginSpecV1_Intune)(nil), } } @@ -17347,7 +18480,7 @@ func (m *PluginGithubSettings) Reset() { *m = PluginGithubSettings{} } func (m *PluginGithubSettings) String() string { return proto.CompactTextString(m) } func (*PluginGithubSettings) ProtoMessage() {} func (*PluginGithubSettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{285} + return fileDescriptor_9198ee693835762e, []int{303} } func (m *PluginGithubSettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -17387,7 +18520,7 @@ func (m *PluginSlackAccessSettings) Reset() { *m = PluginSlackAccessSett func (m *PluginSlackAccessSettings) String() string { return proto.CompactTextString(m) } func (*PluginSlackAccessSettings) ProtoMessage() {} func (*PluginSlackAccessSettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{286} + return fileDescriptor_9198ee693835762e, []int{304} } func (m *PluginSlackAccessSettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -17428,7 +18561,7 @@ func (m *PluginGitlabSettings) Reset() { *m = PluginGitlabSettings{} } func (m *PluginGitlabSettings) String() string { return proto.CompactTextString(m) } func (*PluginGitlabSettings) ProtoMessage() {} func (*PluginGitlabSettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{287} + return fileDescriptor_9198ee693835762e, []int{305} } func (m *PluginGitlabSettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -17475,7 +18608,7 @@ func (m *PluginOpsgenieAccessSettings) Reset() { *m = PluginOpsgenieAcce func (m *PluginOpsgenieAccessSettings) String() string { return proto.CompactTextString(m) } func (*PluginOpsgenieAccessSettings) ProtoMessage() {} func 
(*PluginOpsgenieAccessSettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{288} + return fileDescriptor_9198ee693835762e, []int{306} } func (m *PluginOpsgenieAccessSettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -17523,7 +18656,7 @@ func (m *PluginServiceNowSettings) Reset() { *m = PluginServiceNowSettin func (m *PluginServiceNowSettings) String() string { return proto.CompactTextString(m) } func (*PluginServiceNowSettings) ProtoMessage() {} func (*PluginServiceNowSettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{289} + return fileDescriptor_9198ee693835762e, []int{307} } func (m *PluginServiceNowSettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -17569,7 +18702,7 @@ func (m *PluginPagerDutySettings) Reset() { *m = PluginPagerDutySettings func (m *PluginPagerDutySettings) String() string { return proto.CompactTextString(m) } func (*PluginPagerDutySettings) ProtoMessage() {} func (*PluginPagerDutySettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{290} + return fileDescriptor_9198ee693835762e, []int{308} } func (m *PluginPagerDutySettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -17615,7 +18748,7 @@ func (m *PluginJiraSettings) Reset() { *m = PluginJiraSettings{} } func (m *PluginJiraSettings) String() string { return proto.CompactTextString(m) } func (*PluginJiraSettings) ProtoMessage() {} func (*PluginJiraSettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{291} + return fileDescriptor_9198ee693835762e, []int{309} } func (m *PluginJiraSettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -17655,7 +18788,7 @@ func (m *PluginOpenAISettings) Reset() { *m = PluginOpenAISettings{} } func (m *PluginOpenAISettings) String() string { return proto.CompactTextString(m) } func (*PluginOpenAISettings) ProtoMessage() {} func (*PluginOpenAISettings) Descriptor() 
([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{292} + return fileDescriptor_9198ee693835762e, []int{310} } func (m *PluginOpenAISettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -17706,7 +18839,7 @@ func (m *PluginMattermostSettings) Reset() { *m = PluginMattermostSettin func (m *PluginMattermostSettings) String() string { return proto.CompactTextString(m) } func (*PluginMattermostSettings) ProtoMessage() {} func (*PluginMattermostSettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{293} + return fileDescriptor_9198ee693835762e, []int{311} } func (m *PluginMattermostSettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -17748,7 +18881,7 @@ func (m *PluginJamfSettings) Reset() { *m = PluginJamfSettings{} } func (m *PluginJamfSettings) String() string { return proto.CompactTextString(m) } func (*PluginJamfSettings) ProtoMessage() {} func (*PluginJamfSettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{294} + return fileDescriptor_9198ee693835762e, []int{312} } func (m *PluginJamfSettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -17777,6 +18910,61 @@ func (m *PluginJamfSettings) XXX_DiscardUnknown() { var xxx_messageInfo_PluginJamfSettings proto.InternalMessageInfo +// Defines settings for Intune plugin. +type PluginIntuneSettings struct { + // Tenant is the primary domain name (e.g. contoso.onmicrosoft.com) or the tenant ID (e.g. + // 38d49456-54d4-455d-a8d6-c383c71e0a6d) of an organization within Microsoft Entra ID. + // + // https://learn.microsoft.com/en-us/partner-center/account-settings/find-ids-and-domain-names#find-the-microsoft-entra-tenant-id-and-primary-domain-name + Tenant string `protobuf:"bytes,1,opt,name=tenant,proto3" json:"tenant,omitempty"` + // login_endpoint points to one of the national deployments of Microsoft Entra ID. + // Optional, defaults to "https://login.microsoftonline.com". 
+ // + // https://learn.microsoft.com/en-us/graph/deployments + LoginEndpoint string `protobuf:"bytes,2,opt,name=login_endpoint,json=loginEndpoint,proto3" json:"login_endpoint,omitempty"` + // graph_endpoint points to one of the national deployments of Microsoft Graph. + // Optional, defaults to "https://graph.microsoft.com". + // + // https://learn.microsoft.com/en-us/graph/deployments + GraphEndpoint string `protobuf:"bytes,3,opt,name=graph_endpoint,json=graphEndpoint,proto3" json:"graph_endpoint,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *PluginIntuneSettings) Reset() { *m = PluginIntuneSettings{} } +func (m *PluginIntuneSettings) String() string { return proto.CompactTextString(m) } +func (*PluginIntuneSettings) ProtoMessage() {} +func (*PluginIntuneSettings) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{313} +} +func (m *PluginIntuneSettings) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *PluginIntuneSettings) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_PluginIntuneSettings.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *PluginIntuneSettings) XXX_Merge(src proto.Message) { + xxx_messageInfo_PluginIntuneSettings.Merge(m, src) +} +func (m *PluginIntuneSettings) XXX_Size() int { + return m.Size() +} +func (m *PluginIntuneSettings) XXX_DiscardUnknown() { + xxx_messageInfo_PluginIntuneSettings.DiscardUnknown(m) +} + +var xxx_messageInfo_PluginIntuneSettings proto.InternalMessageInfo + // Defines settings for the Okta plugin. type PluginOktaSettings struct { // OrgUrl is the Okta organization URL to use for API communication. 
@@ -17800,7 +18988,7 @@ func (m *PluginOktaSettings) Reset() { *m = PluginOktaSettings{} } func (m *PluginOktaSettings) String() string { return proto.CompactTextString(m) } func (*PluginOktaSettings) ProtoMessage() {} func (*PluginOktaSettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{295} + return fileDescriptor_9198ee693835762e, []int{314} } func (m *PluginOktaSettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -17847,7 +19035,7 @@ func (m *PluginOktaCredentialsInfo) Reset() { *m = PluginOktaCredentials func (m *PluginOktaCredentialsInfo) String() string { return proto.CompactTextString(m) } func (*PluginOktaCredentialsInfo) ProtoMessage() {} func (*PluginOktaCredentialsInfo) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{296} + return fileDescriptor_9198ee693835762e, []int{315} } func (m *PluginOktaCredentialsInfo) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -17938,17 +19126,21 @@ type PluginOktaSyncSettings struct { // DisableAssignDefaultRoles prevents the builtin okta-requester role from being assigned to all // synchronized users. This is allows for a more advanced RBAC setup where not all // Okta-originated users are allowed request all Okta-originated resources. - DisableAssignDefaultRoles bool `protobuf:"varint,13,opt,name=disable_assign_default_roles,json=disableAssignDefaultRoles,proto3" json:"disable_assign_default_roles,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + DisableAssignDefaultRoles bool `protobuf:"varint,13,opt,name=disable_assign_default_roles,json=disableAssignDefaultRoles,proto3" json:"disable_assign_default_roles,omitempty"` + // TimeBetweenImports controls the time between Okta syncs, i.e. importing Okta users, apps and + // groups to Teleport. This doesn't affect how quickly Teleport changes are propagated to Okta if + // bidirectional sync is enabled.
The default value is 30m. + TimeBetweenImports string `protobuf:"bytes,14,opt,name=time_between_imports,json=timeBetweenImports,proto3" json:"time_between_imports,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *PluginOktaSyncSettings) Reset() { *m = PluginOktaSyncSettings{} } func (m *PluginOktaSyncSettings) String() string { return proto.CompactTextString(m) } func (*PluginOktaSyncSettings) ProtoMessage() {} func (*PluginOktaSyncSettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{297} + return fileDescriptor_9198ee693835762e, []int{316} } func (m *PluginOktaSyncSettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -17989,7 +19181,7 @@ func (m *DiscordChannels) Reset() { *m = DiscordChannels{} } func (m *DiscordChannels) String() string { return proto.CompactTextString(m) } func (*DiscordChannels) ProtoMessage() {} func (*DiscordChannels) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{298} + return fileDescriptor_9198ee693835762e, []int{317} } func (m *DiscordChannels) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -18033,7 +19225,7 @@ func (m *PluginDiscordSettings) Reset() { *m = PluginDiscordSettings{} } func (m *PluginDiscordSettings) String() string { return proto.CompactTextString(m) } func (*PluginDiscordSettings) ProtoMessage() {} func (*PluginDiscordSettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{299} + return fileDescriptor_9198ee693835762e, []int{318} } func (m *PluginDiscordSettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -18078,7 +19270,7 @@ func (m *PluginEntraIDSettings) Reset() { *m = PluginEntraIDSettings{} } func (m *PluginEntraIDSettings) String() string { return proto.CompactTextString(m) } func (*PluginEntraIDSettings) ProtoMessage() {} func (*PluginEntraIDSettings) Descriptor() ([]byte, []int) { - return 
fileDescriptor_9198ee693835762e, []int{300} + return fileDescriptor_9198ee693835762e, []int{319} } func (m *PluginEntraIDSettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -18122,17 +19314,19 @@ type PluginEntraIDSyncSettings struct { // entra_app_id refers to the Entra Application ID that supports the SSO for "sso_connector_id". // This field is populated on a best-effort basis for legacy plugins but mandatory for plugins created after its introduction. // For existing plugins, it is filled in using the entity descriptor url when utilized. - EntraAppId string `protobuf:"bytes,5,opt,name=entra_app_id,json=entraAppId,proto3" json:"entra_app_id,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + EntraAppId string `protobuf:"bytes,5,opt,name=entra_app_id,json=entraAppId,proto3" json:"entra_app_id,omitempty"` + // GroupFilters configures which groups should be included or excluded. + GroupFilters []*PluginSyncFilter `protobuf:"bytes,6,rep,name=group_filters,json=groupFilters,proto3" json:"group_filters,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *PluginEntraIDSyncSettings) Reset() { *m = PluginEntraIDSyncSettings{} } func (m *PluginEntraIDSyncSettings) String() string { return proto.CompactTextString(m) } func (*PluginEntraIDSyncSettings) ProtoMessage() {} func (*PluginEntraIDSyncSettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{301} + return fileDescriptor_9198ee693835762e, []int{320} } func (m *PluginEntraIDSyncSettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -18161,6 +19355,142 @@ func (m *PluginEntraIDSyncSettings) XXX_DiscardUnknown() { var xxx_messageInfo_PluginEntraIDSyncSettings proto.InternalMessageInfo +// PluginSyncFilter can specify inclusion or exclusion of a resource.
+type PluginSyncFilter struct { + // Include describes that the resource should be explicitly included. + // + // Types that are valid to be assigned to Include: + // + // *PluginSyncFilter_Id + // *PluginSyncFilter_NameRegex + Include isPluginSyncFilter_Include `protobuf_oneof:"include"` + // Exclude specifies which resources should be explicitly excluded. + // + // Types that are valid to be assigned to Exclude: + // + // *PluginSyncFilter_ExcludeId + // *PluginSyncFilter_ExcludeNameRegex + Exclude isPluginSyncFilter_Exclude `protobuf_oneof:"exclude"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *PluginSyncFilter) Reset() { *m = PluginSyncFilter{} } +func (m *PluginSyncFilter) String() string { return proto.CompactTextString(m) } +func (*PluginSyncFilter) ProtoMessage() {} +func (*PluginSyncFilter) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{321} +} +func (m *PluginSyncFilter) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *PluginSyncFilter) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_PluginSyncFilter.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *PluginSyncFilter) XXX_Merge(src proto.Message) { + xxx_messageInfo_PluginSyncFilter.Merge(m, src) +} +func (m *PluginSyncFilter) XXX_Size() int { + return m.Size() +} +func (m *PluginSyncFilter) XXX_DiscardUnknown() { + xxx_messageInfo_PluginSyncFilter.DiscardUnknown(m) +} + +var xxx_messageInfo_PluginSyncFilter proto.InternalMessageInfo + +type isPluginSyncFilter_Include interface { + isPluginSyncFilter_Include() + Equal(interface{}) bool + MarshalTo([]byte) (int, error) + Size() int +} +type isPluginSyncFilter_Exclude interface { + isPluginSyncFilter_Exclude() + Equal(interface{}) bool +
MarshalTo([]byte) (int, error) + Size() int +} + +type PluginSyncFilter_Id struct { + Id string `protobuf:"bytes,1,opt,name=id,proto3,oneof" json:"id,omitempty"` +} +type PluginSyncFilter_NameRegex struct { + NameRegex string `protobuf:"bytes,2,opt,name=name_regex,json=nameRegex,proto3,oneof" json:"name_regex,omitempty"` +} +type PluginSyncFilter_ExcludeId struct { + ExcludeId string `protobuf:"bytes,3,opt,name=exclude_id,json=excludeId,proto3,oneof" json:"id,omitempty"` +} +type PluginSyncFilter_ExcludeNameRegex struct { + ExcludeNameRegex string `protobuf:"bytes,4,opt,name=exclude_name_regex,json=excludeNameRegex,proto3,oneof" json:"name_regex,omitempty"` +} + +func (*PluginSyncFilter_Id) isPluginSyncFilter_Include() {} +func (*PluginSyncFilter_NameRegex) isPluginSyncFilter_Include() {} +func (*PluginSyncFilter_ExcludeId) isPluginSyncFilter_Exclude() {} +func (*PluginSyncFilter_ExcludeNameRegex) isPluginSyncFilter_Exclude() {} + +func (m *PluginSyncFilter) GetInclude() isPluginSyncFilter_Include { + if m != nil { + return m.Include + } + return nil +} +func (m *PluginSyncFilter) GetExclude() isPluginSyncFilter_Exclude { + if m != nil { + return m.Exclude + } + return nil +} + +func (m *PluginSyncFilter) GetId() string { + if x, ok := m.GetInclude().(*PluginSyncFilter_Id); ok { + return x.Id + } + return "" +} + +func (m *PluginSyncFilter) GetNameRegex() string { + if x, ok := m.GetInclude().(*PluginSyncFilter_NameRegex); ok { + return x.NameRegex + } + return "" +} + +func (m *PluginSyncFilter) GetExcludeId() string { + if x, ok := m.GetExclude().(*PluginSyncFilter_ExcludeId); ok { + return x.ExcludeId + } + return "" +} + +func (m *PluginSyncFilter) GetExcludeNameRegex() string { + if x, ok := m.GetExclude().(*PluginSyncFilter_ExcludeNameRegex); ok { + return x.ExcludeNameRegex + } + return "" +} + +// XXX_OneofWrappers is for the internal use of the proto package. 
+func (*PluginSyncFilter) XXX_OneofWrappers() []interface{} { + return []interface{}{ + (*PluginSyncFilter_Id)(nil), + (*PluginSyncFilter_NameRegex)(nil), + (*PluginSyncFilter_ExcludeId)(nil), + (*PluginSyncFilter_ExcludeNameRegex)(nil), + } +} + // AccessGraphSettings controls settings for syncing access graph specific data. type PluginEntraIDAccessGraphSettings struct { // AppSsoSettingsCache is an array of single sign-on settings for Entra enterprise applications. @@ -18177,7 +19507,7 @@ func (m *PluginEntraIDAccessGraphSettings) Reset() { *m = PluginEntraIDA func (m *PluginEntraIDAccessGraphSettings) String() string { return proto.CompactTextString(m) } func (*PluginEntraIDAccessGraphSettings) ProtoMessage() {} func (*PluginEntraIDAccessGraphSettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{302} + return fileDescriptor_9198ee693835762e, []int{322} } func (m *PluginEntraIDAccessGraphSettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -18223,7 +19553,7 @@ func (m *PluginEntraIDAppSSOSettings) Reset() { *m = PluginEntraIDAppSSO func (m *PluginEntraIDAppSSOSettings) String() string { return proto.CompactTextString(m) } func (*PluginEntraIDAppSSOSettings) ProtoMessage() {} func (*PluginEntraIDAppSSOSettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{303} + return fileDescriptor_9198ee693835762e, []int{323} } func (m *PluginEntraIDAppSSOSettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -18256,20 +19586,27 @@ var xxx_messageInfo_PluginEntraIDAppSSOSettings proto.InternalMessageInfo type PluginSCIMSettings struct { // SamlConnectorName is the name of the SAML Connector that users provisioned // by this SCIM plugin will use to log in to Teleport. - SamlConnectorName string `protobuf:"bytes,1,opt,name=saml_connector_name,json=samlConnectorName,proto3" json:"saml_connector_name,omitempty"` + // DEPRECATED: Use ConnectorInfo instead. 
+ // This is an old field added when the Okta SCIM plugin was created, + // and its usage was limited to SAML connectors only. + SamlConnectorName string `protobuf:"bytes,1,opt,name=saml_connector_name,json=samlConnectorName,proto3" json:"saml_connector_name,omitempty"` // Deprecated: Do not use. // DefaultRole is the default role assigned to users provisioned by this // plugin. - DefaultRole string `protobuf:"bytes,2,opt,name=default_role,json=defaultRole,proto3" json:"default_role,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + DefaultRole string `protobuf:"bytes,2,opt,name=default_role,json=defaultRole,proto3" json:"default_role,omitempty"` // Deprecated: Do not use. + // ConnectorInfo contains information about the user's origin as provided + // by the SCIM plugin. It enables matching a SAML/OIDC external user + // with a SCIM-persisted user, allowing the ephemeral user entry to be updated to a SCIM user. + ConnectorInfo *PluginSCIMSettings_ConnectorInfo `protobuf:"bytes,3,opt,name=connector_info,json=connectorInfo,proto3" json:"connector_info"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *PluginSCIMSettings) Reset() { *m = PluginSCIMSettings{} } func (m *PluginSCIMSettings) String() string { return proto.CompactTextString(m) } func (*PluginSCIMSettings) ProtoMessage() {} func (*PluginSCIMSettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{304} + return fileDescriptor_9198ee693835762e, []int{324} } func (m *PluginSCIMSettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -18298,6 +19635,50 @@ func (m *PluginSCIMSettings) XXX_DiscardUnknown() { var xxx_messageInfo_PluginSCIMSettings proto.InternalMessageInfo +type PluginSCIMSettings_ConnectorInfo struct { + // Name is the name of the connector.
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name"` + // Type is the type of the connector: types.KindSAML, types.KindOIDC, etc. + // Note: The name of the connector is not unique across types. + Type string `protobuf:"bytes,2,opt,name=type,proto3" json:"type"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *PluginSCIMSettings_ConnectorInfo) Reset() { *m = PluginSCIMSettings_ConnectorInfo{} } +func (m *PluginSCIMSettings_ConnectorInfo) String() string { return proto.CompactTextString(m) } +func (*PluginSCIMSettings_ConnectorInfo) ProtoMessage() {} +func (*PluginSCIMSettings_ConnectorInfo) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{324, 0} +} +func (m *PluginSCIMSettings_ConnectorInfo) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *PluginSCIMSettings_ConnectorInfo) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_PluginSCIMSettings_ConnectorInfo.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *PluginSCIMSettings_ConnectorInfo) XXX_Merge(src proto.Message) { + xxx_messageInfo_PluginSCIMSettings_ConnectorInfo.Merge(m, src) +} +func (m *PluginSCIMSettings_ConnectorInfo) XXX_Size() int { + return m.Size() +} +func (m *PluginSCIMSettings_ConnectorInfo) XXX_DiscardUnknown() { + xxx_messageInfo_PluginSCIMSettings_ConnectorInfo.DiscardUnknown(m) +} + +var xxx_messageInfo_PluginSCIMSettings_ConnectorInfo proto.InternalMessageInfo + // PluginDatadogAccessSettings defines the settings for a Datadog Incident Management plugin type PluginDatadogAccessSettings struct { // ApiEndpoint is the Datadog API endpoint.
@@ -18313,7 +19694,7 @@ func (m *PluginDatadogAccessSettings) Reset() { *m = PluginDatadogAccess func (m *PluginDatadogAccessSettings) String() string { return proto.CompactTextString(m) } func (*PluginDatadogAccessSettings) ProtoMessage() {} func (*PluginDatadogAccessSettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{305} + return fileDescriptor_9198ee693835762e, []int{325} } func (m *PluginDatadogAccessSettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -18391,17 +19772,31 @@ type PluginAWSICSettings struct { GroupSyncFilters []*AWSICResourceFilter `protobuf:"bytes,10,rep,name=group_sync_filters,json=groupSyncFilters,proto3" json:"group_sync_filters,omitempty"` // Credentials represents the AWS credentials used by the Identity Center // integration - Credentials *AWSICCredentials `protobuf:"bytes,11,opt,name=credentials,proto3" json:"credentials,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Credentials *AWSICCredentials `protobuf:"bytes,11,opt,name=credentials,proto3" json:"credentials,omitempty"` + // RolesSyncMode indicates how the Identity Center integration will create and + // manage roles representing potential Identity Center Account Assignments. + // + // Possible values are ALL or NONE: + // + // ALL: indicates that the AWS Identity Center integration should + // create and maintain roles for all possible Account Assignments. + // NONE: indicates that the AWS Identity Center integration should + // not create any roles representing potential Account + // Assignments. 
+ // + // For backwards compatibility, an empty value is treated as equivalent + // to ALL. + RolesSyncMode string `protobuf:"bytes,12,opt,name=roles_sync_mode,json=rolesSyncMode,proto3" json:"roles_sync_mode,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *PluginAWSICSettings) Reset() { *m = PluginAWSICSettings{} } func (m *PluginAWSICSettings) String() string { return proto.CompactTextString(m) } func (*PluginAWSICSettings) ProtoMessage() {} func (*PluginAWSICSettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{306} + return fileDescriptor_9198ee693835762e, []int{326} } func (m *PluginAWSICSettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -18446,7 +19841,7 @@ func (m *AWSICCredentials) Reset() { *m = AWSICCredentials{} } func (m *AWSICCredentials) String() string { return proto.CompactTextString(m) } func (*AWSICCredentials) ProtoMessage() {} func (*AWSICCredentials) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{307} + return fileDescriptor_9198ee693835762e, []int{327} } func (m *AWSICCredentials) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -18536,7 +19931,7 @@ func (m *AWSICCredentialSourceSystem) Reset() { *m = AWSICCredentialSour func (m *AWSICCredentialSourceSystem) String() string { return proto.CompactTextString(m) } func (*AWSICCredentialSourceSystem) ProtoMessage() {} func (*AWSICCredentialSourceSystem) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{308} + return fileDescriptor_9198ee693835762e, []int{328} } func (m *AWSICCredentialSourceSystem) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -18580,7 +19975,7 @@ func (m *AWSICCredentialSourceOIDC) Reset() { *m = AWSICCredentialSource func (m *AWSICCredentialSourceOIDC) String() string { return proto.CompactTextString(m) } func (*AWSICCredentialSourceOIDC) ProtoMessage() {} func 
(*AWSICCredentialSourceOIDC) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{309} + return fileDescriptor_9198ee693835762e, []int{329} } func (m *AWSICCredentialSourceOIDC) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -18636,7 +20031,7 @@ func (m *AWSICResourceFilter) Reset() { *m = AWSICResourceFilter{} } func (m *AWSICResourceFilter) String() string { return proto.CompactTextString(m) } func (*AWSICResourceFilter) ProtoMessage() {} func (*AWSICResourceFilter) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{310} + return fileDescriptor_9198ee693835762e, []int{330} } func (m *AWSICResourceFilter) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -18759,7 +20154,7 @@ func (m *AWSICUserSyncFilter) Reset() { *m = AWSICUserSyncFilter{} } func (m *AWSICUserSyncFilter) String() string { return proto.CompactTextString(m) } func (*AWSICUserSyncFilter) ProtoMessage() {} func (*AWSICUserSyncFilter) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{311} + return fileDescriptor_9198ee693835762e, []int{331} } func (m *AWSICUserSyncFilter) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -18806,7 +20201,7 @@ func (m *AWSICProvisioningSpec) Reset() { *m = AWSICProvisioningSpec{} } func (m *AWSICProvisioningSpec) String() string { return proto.CompactTextString(m) } func (*AWSICProvisioningSpec) ProtoMessage() {} func (*AWSICProvisioningSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{312} + return fileDescriptor_9198ee693835762e, []int{332} } func (m *AWSICProvisioningSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -18848,7 +20243,7 @@ func (m *PluginAWSICStatusV1) Reset() { *m = PluginAWSICStatusV1{} } func (m *PluginAWSICStatusV1) String() string { return proto.CompactTextString(m) } func (*PluginAWSICStatusV1) ProtoMessage() {} func (*PluginAWSICStatusV1) Descriptor() ([]byte, []int) { - return 
fileDescriptor_9198ee693835762e, []int{313} + return fileDescriptor_9198ee693835762e, []int{333} } func (m *PluginAWSICStatusV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -18893,7 +20288,7 @@ func (m *AWSICGroupImportStatus) Reset() { *m = AWSICGroupImportStatus{} func (m *AWSICGroupImportStatus) String() string { return proto.CompactTextString(m) } func (*AWSICGroupImportStatus) ProtoMessage() {} func (*AWSICGroupImportStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{314} + return fileDescriptor_9198ee693835762e, []int{334} } func (m *AWSICGroupImportStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -18944,7 +20339,7 @@ func (m *PluginEmailSettings) Reset() { *m = PluginEmailSettings{} } func (m *PluginEmailSettings) String() string { return proto.CompactTextString(m) } func (*PluginEmailSettings) ProtoMessage() {} func (*PluginEmailSettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{315} + return fileDescriptor_9198ee693835762e, []int{335} } func (m *PluginEmailSettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -19032,7 +20427,7 @@ func (m *MailgunSpec) Reset() { *m = MailgunSpec{} } func (m *MailgunSpec) String() string { return proto.CompactTextString(m) } func (*MailgunSpec) ProtoMessage() {} func (*MailgunSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{316} + return fileDescriptor_9198ee693835762e, []int{336} } func (m *MailgunSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -19079,7 +20474,7 @@ func (m *SMTPSpec) Reset() { *m = SMTPSpec{} } func (m *SMTPSpec) String() string { return proto.CompactTextString(m) } func (*SMTPSpec) ProtoMessage() {} func (*SMTPSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{317} + return fileDescriptor_9198ee693835762e, []int{337} } func (m *SMTPSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -19129,7 
+20524,7 @@ func (m *PluginMSTeamsSettings) Reset() { *m = PluginMSTeamsSettings{} } func (m *PluginMSTeamsSettings) String() string { return proto.CompactTextString(m) } func (*PluginMSTeamsSettings) ProtoMessage() {} func (*PluginMSTeamsSettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{318} + return fileDescriptor_9198ee693835762e, []int{338} } func (m *PluginMSTeamsSettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -19176,7 +20571,7 @@ func (m *PluginNetIQSettings) Reset() { *m = PluginNetIQSettings{} } func (m *PluginNetIQSettings) String() string { return proto.CompactTextString(m) } func (*PluginNetIQSettings) ProtoMessage() {} func (*PluginNetIQSettings) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{319} + return fileDescriptor_9198ee693835762e, []int{339} } func (m *PluginNetIQSettings) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -19221,7 +20616,7 @@ func (m *PluginBootstrapCredentialsV1) Reset() { *m = PluginBootstrapCre func (m *PluginBootstrapCredentialsV1) String() string { return proto.CompactTextString(m) } func (*PluginBootstrapCredentialsV1) ProtoMessage() {} func (*PluginBootstrapCredentialsV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{320} + return fileDescriptor_9198ee693835762e, []int{340} } func (m *PluginBootstrapCredentialsV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -19321,7 +20716,7 @@ func (m *PluginIdSecretCredential) Reset() { *m = PluginIdSecretCredenti func (m *PluginIdSecretCredential) String() string { return proto.CompactTextString(m) } func (*PluginIdSecretCredential) ProtoMessage() {} func (*PluginIdSecretCredential) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{321} + return fileDescriptor_9198ee693835762e, []int{341} } func (m *PluginIdSecretCredential) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -19364,7 +20759,7 @@ func (m 
*PluginOAuth2AuthorizationCodeCredentials) Reset() { func (m *PluginOAuth2AuthorizationCodeCredentials) String() string { return proto.CompactTextString(m) } func (*PluginOAuth2AuthorizationCodeCredentials) ProtoMessage() {} func (*PluginOAuth2AuthorizationCodeCredentials) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{322} + return fileDescriptor_9198ee693835762e, []int{342} } func (m *PluginOAuth2AuthorizationCodeCredentials) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -19424,7 +20819,7 @@ func (m *PluginStatusV1) Reset() { *m = PluginStatusV1{} } func (m *PluginStatusV1) String() string { return proto.CompactTextString(m) } func (*PluginStatusV1) ProtoMessage() {} func (*PluginStatusV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{323} + return fileDescriptor_9198ee693835762e, []int{343} } func (m *PluginStatusV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -19553,7 +20948,7 @@ func (m *PluginNetIQStatusV1) Reset() { *m = PluginNetIQStatusV1{} } func (m *PluginNetIQStatusV1) String() string { return proto.CompactTextString(m) } func (*PluginNetIQStatusV1) ProtoMessage() {} func (*PluginNetIQStatusV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{324} + return fileDescriptor_9198ee693835762e, []int{344} } func (m *PluginNetIQStatusV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -19599,7 +20994,7 @@ func (m *PluginGitlabStatusV1) Reset() { *m = PluginGitlabStatusV1{} } func (m *PluginGitlabStatusV1) String() string { return proto.CompactTextString(m) } func (*PluginGitlabStatusV1) ProtoMessage() {} func (*PluginGitlabStatusV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{325} + return fileDescriptor_9198ee693835762e, []int{345} } func (m *PluginGitlabStatusV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -19643,7 +21038,7 @@ func (m *PluginEntraIDStatusV1) Reset() { *m = 
PluginEntraIDStatusV1{} } func (m *PluginEntraIDStatusV1) String() string { return proto.CompactTextString(m) } func (*PluginEntraIDStatusV1) ProtoMessage() {} func (*PluginEntraIDStatusV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{326} + return fileDescriptor_9198ee693835762e, []int{346} } func (m *PluginEntraIDStatusV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -19688,16 +21083,19 @@ type PluginOktaStatusV1 struct { // AccessListSyncDetails are status details relating to synchronizing access // lists from Okta. AccessListsSyncDetails *PluginOktaStatusDetailsAccessListsSync `protobuf:"bytes,5,opt,name=access_lists_sync_details,json=accessListsSyncDetails,proto3" json:"access_lists_sync_details,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + // SystemLogExportDetails are the status details related to the System Logs + // exporter. + SystemLogExportDetails *PluginOktaStatusSystemLogExporter `protobuf:"bytes,6,opt,name=system_log_export_details,json=systemLogExportDetails,proto3" json:"system_log_export_details,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *PluginOktaStatusV1) Reset() { *m = PluginOktaStatusV1{} } func (m *PluginOktaStatusV1) String() string { return proto.CompactTextString(m) } func (*PluginOktaStatusV1) ProtoMessage() {} func (*PluginOktaStatusV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{327} + return fileDescriptor_9198ee693835762e, []int{347} } func (m *PluginOktaStatusV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -19750,7 +21148,7 @@ func (m *PluginOktaStatusDetailsSSO) Reset() { *m = PluginOktaStatusDeta func (m *PluginOktaStatusDetailsSSO) String() string { return proto.CompactTextString(m) } func (*PluginOktaStatusDetailsSSO) ProtoMessage() {} func 
(*PluginOktaStatusDetailsSSO) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{328} + return fileDescriptor_9198ee693835762e, []int{348} } func (m *PluginOktaStatusDetailsSSO) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -19807,7 +21205,7 @@ func (m *PluginOktaStatusDetailsAppGroupSync) Reset() { *m = PluginOktaS func (m *PluginOktaStatusDetailsAppGroupSync) String() string { return proto.CompactTextString(m) } func (*PluginOktaStatusDetailsAppGroupSync) ProtoMessage() {} func (*PluginOktaStatusDetailsAppGroupSync) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{329} + return fileDescriptor_9198ee693835762e, []int{349} } func (m *PluginOktaStatusDetailsAppGroupSync) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -19861,7 +21259,7 @@ func (m *PluginOktaStatusDetailsUsersSync) Reset() { *m = PluginOktaStat func (m *PluginOktaStatusDetailsUsersSync) String() string { return proto.CompactTextString(m) } func (*PluginOktaStatusDetailsUsersSync) ProtoMessage() {} func (*PluginOktaStatusDetailsUsersSync) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{330} + return fileDescriptor_9198ee693835762e, []int{350} } func (m *PluginOktaStatusDetailsUsersSync) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -19904,7 +21302,7 @@ func (m *PluginOktaStatusDetailsSCIM) Reset() { *m = PluginOktaStatusDet func (m *PluginOktaStatusDetailsSCIM) String() string { return proto.CompactTextString(m) } func (*PluginOktaStatusDetailsSCIM) ProtoMessage() {} func (*PluginOktaStatusDetailsSCIM) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{331} + return fileDescriptor_9198ee693835762e, []int{351} } func (m *PluginOktaStatusDetailsSCIM) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -19966,7 +21364,7 @@ func (m *PluginOktaStatusDetailsAccessListsSync) Reset() { func (m *PluginOktaStatusDetailsAccessListsSync) String() string { return 
proto.CompactTextString(m) } func (*PluginOktaStatusDetailsAccessListsSync) ProtoMessage() {} func (*PluginOktaStatusDetailsAccessListsSync) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{332} + return fileDescriptor_9198ee693835762e, []int{352} } func (m *PluginOktaStatusDetailsAccessListsSync) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -19995,6 +21393,58 @@ func (m *PluginOktaStatusDetailsAccessListsSync) XXX_DiscardUnknown() { var xxx_messageInfo_PluginOktaStatusDetailsAccessListsSync proto.InternalMessageInfo +// PluginOktaStatusSystemLogExporter are details related to the +// current status of the Okta integration w/r/t system logs sync. +type PluginOktaStatusSystemLogExporter struct { + // Enabled is whether Okta System Log exporter is enabled. + Enabled bool `protobuf:"varint,1,opt,name=enabled,proto3" json:"enabled,omitempty"` + // StatusCode indicates the current state of the service + StatusCode OktaPluginSyncStatusCode `protobuf:"varint,2,opt,name=status_code,json=statusCode,proto3,enum=types.OktaPluginSyncStatusCode" json:"status_code,omitempty"` + // LastSuccessful is the date of the last successful run. + LastSuccessful *time.Time `protobuf:"bytes,3,opt,name=last_successful,json=lastSuccessful,proto3,stdtime" json:"last_successful"` + // LastFailed is the date of the last failed run. + LastFailed *time.Time `protobuf:"bytes,4,opt,name=last_failed,json=lastFailed,proto3,stdtime" json:"last_failed"` + // Error contains a textual description of the reason the last synchronization + // failed. Only valid when StatusCode is OKTA_PLUGIN_SYNC_STATUS_CODE_ERROR. 
+ Error string `protobuf:"bytes,9,opt,name=error,proto3" json:"error,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *PluginOktaStatusSystemLogExporter) Reset() { *m = PluginOktaStatusSystemLogExporter{} } +func (m *PluginOktaStatusSystemLogExporter) String() string { return proto.CompactTextString(m) } +func (*PluginOktaStatusSystemLogExporter) ProtoMessage() {} +func (*PluginOktaStatusSystemLogExporter) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{353} +} +func (m *PluginOktaStatusSystemLogExporter) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *PluginOktaStatusSystemLogExporter) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_PluginOktaStatusSystemLogExporter.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *PluginOktaStatusSystemLogExporter) XXX_Merge(src proto.Message) { + xxx_messageInfo_PluginOktaStatusSystemLogExporter.Merge(m, src) +} +func (m *PluginOktaStatusSystemLogExporter) XXX_Size() int { + return m.Size() +} +func (m *PluginOktaStatusSystemLogExporter) XXX_DiscardUnknown() { + xxx_messageInfo_PluginOktaStatusSystemLogExporter.DiscardUnknown(m) +} + +var xxx_messageInfo_PluginOktaStatusSystemLogExporter proto.InternalMessageInfo + // PluginCredentialsV1 represents "live" credentials // that are used by the plugin to authenticate to the 3rd party API. 
type PluginCredentialsV1 struct { @@ -20014,7 +21464,7 @@ func (m *PluginCredentialsV1) Reset() { *m = PluginCredentialsV1{} } func (m *PluginCredentialsV1) String() string { return proto.CompactTextString(m) } func (*PluginCredentialsV1) ProtoMessage() {} func (*PluginCredentialsV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{333} + return fileDescriptor_9198ee693835762e, []int{354} } func (m *PluginCredentialsV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -20045,6 +21495,7 @@ var xxx_messageInfo_PluginCredentialsV1 proto.InternalMessageInfo type isPluginCredentialsV1_Credentials interface { isPluginCredentialsV1_Credentials() + Equal(interface{}) bool MarshalTo([]byte) (int, error) Size() int } @@ -20125,7 +21576,7 @@ func (m *PluginOAuth2AccessTokenCredentials) Reset() { *m = PluginOAuth2 func (m *PluginOAuth2AccessTokenCredentials) String() string { return proto.CompactTextString(m) } func (*PluginOAuth2AccessTokenCredentials) ProtoMessage() {} func (*PluginOAuth2AccessTokenCredentials) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{334} + return fileDescriptor_9198ee693835762e, []int{355} } func (m *PluginOAuth2AccessTokenCredentials) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -20166,7 +21617,7 @@ func (m *PluginBearerTokenCredentials) Reset() { *m = PluginBearerTokenC func (m *PluginBearerTokenCredentials) String() string { return proto.CompactTextString(m) } func (*PluginBearerTokenCredentials) ProtoMessage() {} func (*PluginBearerTokenCredentials) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{335} + return fileDescriptor_9198ee693835762e, []int{356} } func (m *PluginBearerTokenCredentials) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -20208,7 +21659,7 @@ func (m *PluginStaticCredentialsRef) Reset() { *m = PluginStaticCredenti func (m *PluginStaticCredentialsRef) String() string { return proto.CompactTextString(m) } func 
(*PluginStaticCredentialsRef) ProtoMessage() {} func (*PluginStaticCredentialsRef) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{336} + return fileDescriptor_9198ee693835762e, []int{357} } func (m *PluginStaticCredentialsRef) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -20250,7 +21701,7 @@ func (m *PluginListV1) Reset() { *m = PluginListV1{} } func (m *PluginListV1) String() string { return proto.CompactTextString(m) } func (*PluginListV1) ProtoMessage() {} func (*PluginListV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{337} + return fileDescriptor_9198ee693835762e, []int{358} } func (m *PluginListV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -20293,7 +21744,7 @@ type PluginStaticCredentialsV1 struct { func (m *PluginStaticCredentialsV1) Reset() { *m = PluginStaticCredentialsV1{} } func (*PluginStaticCredentialsV1) ProtoMessage() {} func (*PluginStaticCredentialsV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{338} + return fileDescriptor_9198ee693835762e, []int{359} } func (m *PluginStaticCredentialsV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -20341,7 +21792,7 @@ func (m *PluginStaticCredentialsSpecV1) Reset() { *m = PluginStaticCrede func (m *PluginStaticCredentialsSpecV1) String() string { return proto.CompactTextString(m) } func (*PluginStaticCredentialsSpecV1) ProtoMessage() {} func (*PluginStaticCredentialsSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{339} + return fileDescriptor_9198ee693835762e, []int{360} } func (m *PluginStaticCredentialsSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -20372,6 +21823,7 @@ var xxx_messageInfo_PluginStaticCredentialsSpecV1 proto.InternalMessageInfo type isPluginStaticCredentialsSpecV1_Credentials interface { isPluginStaticCredentialsSpecV1_Credentials() + Equal(interface{}) bool MarshalTo([]byte) (int, error) Size() int } 
@@ -20468,7 +21920,7 @@ func (m *PluginStaticCredentialsBasicAuth) Reset() { *m = PluginStaticCr func (m *PluginStaticCredentialsBasicAuth) String() string { return proto.CompactTextString(m) } func (*PluginStaticCredentialsBasicAuth) ProtoMessage() {} func (*PluginStaticCredentialsBasicAuth) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{340} + return fileDescriptor_9198ee693835762e, []int{361} } func (m *PluginStaticCredentialsBasicAuth) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -20514,7 +21966,7 @@ func (m *PluginStaticCredentialsOAuthClientSecret) Reset() { func (m *PluginStaticCredentialsOAuthClientSecret) String() string { return proto.CompactTextString(m) } func (*PluginStaticCredentialsOAuthClientSecret) ProtoMessage() {} func (*PluginStaticCredentialsOAuthClientSecret) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{341} + return fileDescriptor_9198ee693835762e, []int{362} } func (m *PluginStaticCredentialsOAuthClientSecret) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -20562,7 +22014,7 @@ func (m *PluginStaticCredentialsSSHCertAuthorities) String() string { } func (*PluginStaticCredentialsSSHCertAuthorities) ProtoMessage() {} func (*PluginStaticCredentialsSSHCertAuthorities) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{342} + return fileDescriptor_9198ee693835762e, []int{363} } func (m *PluginStaticCredentialsSSHCertAuthorities) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -20605,7 +22057,7 @@ type SAMLIdPServiceProviderV1 struct { func (m *SAMLIdPServiceProviderV1) Reset() { *m = SAMLIdPServiceProviderV1{} } func (*SAMLIdPServiceProviderV1) ProtoMessage() {} func (*SAMLIdPServiceProviderV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{343} + return fileDescriptor_9198ee693835762e, []int{364} } func (m *SAMLIdPServiceProviderV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ 
-20676,7 +22128,7 @@ func (m *SAMLIdPServiceProviderSpecV1) Reset() { *m = SAMLIdPServiceProv func (m *SAMLIdPServiceProviderSpecV1) String() string { return proto.CompactTextString(m) } func (*SAMLIdPServiceProviderSpecV1) ProtoMessage() {} func (*SAMLIdPServiceProviderSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{344} + return fileDescriptor_9198ee693835762e, []int{365} } func (m *SAMLIdPServiceProviderSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -20723,7 +22175,7 @@ func (m *SAMLAttributeMapping) Reset() { *m = SAMLAttributeMapping{} } func (m *SAMLAttributeMapping) String() string { return proto.CompactTextString(m) } func (*SAMLAttributeMapping) ProtoMessage() {} func (*SAMLAttributeMapping) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{345} + return fileDescriptor_9198ee693835762e, []int{366} } func (m *SAMLAttributeMapping) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -20765,7 +22217,7 @@ func (m *IdPOptions) Reset() { *m = IdPOptions{} } func (m *IdPOptions) String() string { return proto.CompactTextString(m) } func (*IdPOptions) ProtoMessage() {} func (*IdPOptions) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{346} + return fileDescriptor_9198ee693835762e, []int{367} } func (m *IdPOptions) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -20807,7 +22259,7 @@ func (m *IdPSAMLOptions) Reset() { *m = IdPSAMLOptions{} } func (m *IdPSAMLOptions) String() string { return proto.CompactTextString(m) } func (*IdPSAMLOptions) ProtoMessage() {} func (*IdPSAMLOptions) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{347} + return fileDescriptor_9198ee693835762e, []int{368} } func (m *IdPSAMLOptions) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -20857,7 +22309,7 @@ func (m *KubernetesResourceV1) Reset() { *m = KubernetesResourceV1{} } func (m *KubernetesResourceV1) String() string { 
return proto.CompactTextString(m) } func (*KubernetesResourceV1) ProtoMessage() {} func (*KubernetesResourceV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{348} + return fileDescriptor_9198ee693835762e, []int{369} } func (m *KubernetesResourceV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -20899,7 +22351,7 @@ func (m *KubernetesResourceSpecV1) Reset() { *m = KubernetesResourceSpec func (m *KubernetesResourceSpecV1) String() string { return proto.CompactTextString(m) } func (*KubernetesResourceSpecV1) ProtoMessage() {} func (*KubernetesResourceSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{349} + return fileDescriptor_9198ee693835762e, []int{370} } func (m *KubernetesResourceSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -20945,7 +22397,7 @@ func (m *ClusterMaintenanceConfigV1) Reset() { *m = ClusterMaintenanceCo func (m *ClusterMaintenanceConfigV1) String() string { return proto.CompactTextString(m) } func (*ClusterMaintenanceConfigV1) ProtoMessage() {} func (*ClusterMaintenanceConfigV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{350} + return fileDescriptor_9198ee693835762e, []int{371} } func (m *ClusterMaintenanceConfigV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -20987,7 +22439,7 @@ func (m *ClusterMaintenanceConfigSpecV1) Reset() { *m = ClusterMaintenan func (m *ClusterMaintenanceConfigSpecV1) String() string { return proto.CompactTextString(m) } func (*ClusterMaintenanceConfigSpecV1) ProtoMessage() {} func (*ClusterMaintenanceConfigSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{351} + return fileDescriptor_9198ee693835762e, []int{372} } func (m *ClusterMaintenanceConfigSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -21033,7 +22485,7 @@ func (m *AgentUpgradeWindow) Reset() { *m = AgentUpgradeWindow{} } func (m *AgentUpgradeWindow) String() string { 
return proto.CompactTextString(m) } func (*AgentUpgradeWindow) ProtoMessage() {} func (*AgentUpgradeWindow) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{352} + return fileDescriptor_9198ee693835762e, []int{373} } func (m *AgentUpgradeWindow) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -21080,7 +22532,7 @@ func (m *ScheduledAgentUpgradeWindow) Reset() { *m = ScheduledAgentUpgra func (m *ScheduledAgentUpgradeWindow) String() string { return proto.CompactTextString(m) } func (*ScheduledAgentUpgradeWindow) ProtoMessage() {} func (*ScheduledAgentUpgradeWindow) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{353} + return fileDescriptor_9198ee693835762e, []int{374} } func (m *ScheduledAgentUpgradeWindow) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -21123,7 +22575,7 @@ func (m *AgentUpgradeSchedule) Reset() { *m = AgentUpgradeSchedule{} } func (m *AgentUpgradeSchedule) String() string { return proto.CompactTextString(m) } func (*AgentUpgradeSchedule) ProtoMessage() {} func (*AgentUpgradeSchedule) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{354} + return fileDescriptor_9198ee693835762e, []int{375} } func (m *AgentUpgradeSchedule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -21166,7 +22618,7 @@ type UserGroupV1 struct { func (m *UserGroupV1) Reset() { *m = UserGroupV1{} } func (*UserGroupV1) ProtoMessage() {} func (*UserGroupV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{355} + return fileDescriptor_9198ee693835762e, []int{376} } func (m *UserGroupV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -21208,7 +22660,7 @@ func (m *UserGroupSpecV1) Reset() { *m = UserGroupSpecV1{} } func (m *UserGroupSpecV1) String() string { return proto.CompactTextString(m) } func (*UserGroupSpecV1) ProtoMessage() {} func (*UserGroupSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, 
[]int{356} + return fileDescriptor_9198ee693835762e, []int{377} } func (m *UserGroupSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -21252,7 +22704,7 @@ func (m *OktaImportRuleSpecV1) Reset() { *m = OktaImportRuleSpecV1{} } func (m *OktaImportRuleSpecV1) String() string { return proto.CompactTextString(m) } func (*OktaImportRuleSpecV1) ProtoMessage() {} func (*OktaImportRuleSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{357} + return fileDescriptor_9198ee693835762e, []int{378} } func (m *OktaImportRuleSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -21296,7 +22748,7 @@ func (m *OktaImportRuleMappingV1) Reset() { *m = OktaImportRuleMappingV1 func (m *OktaImportRuleMappingV1) String() string { return proto.CompactTextString(m) } func (*OktaImportRuleMappingV1) ProtoMessage() {} func (*OktaImportRuleMappingV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{358} + return fileDescriptor_9198ee693835762e, []int{379} } func (m *OktaImportRuleMappingV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -21339,7 +22791,7 @@ type OktaImportRuleV1 struct { func (m *OktaImportRuleV1) Reset() { *m = OktaImportRuleV1{} } func (*OktaImportRuleV1) ProtoMessage() {} func (*OktaImportRuleV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{359} + return fileDescriptor_9198ee693835762e, []int{380} } func (m *OktaImportRuleV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -21387,7 +22839,7 @@ func (m *OktaImportRuleMatchV1) Reset() { *m = OktaImportRuleMatchV1{} } func (m *OktaImportRuleMatchV1) String() string { return proto.CompactTextString(m) } func (*OktaImportRuleMatchV1) ProtoMessage() {} func (*OktaImportRuleMatchV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{360} + return fileDescriptor_9198ee693835762e, []int{381} } func (m *OktaImportRuleMatchV1) XXX_Unmarshal(b []byte) error { return 
m.Unmarshal(b) @@ -21430,7 +22882,7 @@ type OktaAssignmentV1 struct { func (m *OktaAssignmentV1) Reset() { *m = OktaAssignmentV1{} } func (*OktaAssignmentV1) ProtoMessage() {} func (*OktaAssignmentV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{361} + return fileDescriptor_9198ee693835762e, []int{382} } func (m *OktaAssignmentV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -21484,7 +22936,7 @@ func (m *OktaAssignmentSpecV1) Reset() { *m = OktaAssignmentSpecV1{} } func (m *OktaAssignmentSpecV1) String() string { return proto.CompactTextString(m) } func (*OktaAssignmentSpecV1) ProtoMessage() {} func (*OktaAssignmentSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{362} + return fileDescriptor_9198ee693835762e, []int{383} } func (m *OktaAssignmentSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -21528,7 +22980,7 @@ func (m *OktaAssignmentTargetV1) Reset() { *m = OktaAssignmentTargetV1{} func (m *OktaAssignmentTargetV1) String() string { return proto.CompactTextString(m) } func (*OktaAssignmentTargetV1) ProtoMessage() {} func (*OktaAssignmentTargetV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{363} + return fileDescriptor_9198ee693835762e, []int{384} } func (m *OktaAssignmentTargetV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -21564,16 +23016,18 @@ type IntegrationV1 struct { // Header is the resource header. ResourceHeader `protobuf:"bytes,1,opt,name=Header,proto3,embedded=Header" json:""` // Spec is an Integration specification. - Spec IntegrationSpecV1 `protobuf:"bytes,2,opt,name=Spec,proto3" json:"spec"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Spec IntegrationSpecV1 `protobuf:"bytes,2,opt,name=Spec,proto3" json:"spec"` + // Status is the Integration status.
+ Status IntegrationStatusV1 `protobuf:"bytes,3,opt,name=Status,proto3" json:"status"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *IntegrationV1) Reset() { *m = IntegrationV1{} } func (*IntegrationV1) ProtoMessage() {} func (*IntegrationV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{364} + return fileDescriptor_9198ee693835762e, []int{385} } func (m *IntegrationV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -21622,7 +23076,7 @@ func (m *IntegrationSpecV1) Reset() { *m = IntegrationSpecV1{} } func (m *IntegrationSpecV1) String() string { return proto.CompactTextString(m) } func (*IntegrationSpecV1) ProtoMessage() {} func (*IntegrationSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{365} + return fileDescriptor_9198ee693835762e, []int{386} } func (m *IntegrationSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -21720,6 +23174,48 @@ func (*IntegrationSpecV1) XXX_OneofWrappers() []interface{} { } } +// IntegrationStatusV1 contains the status of the integration. +type IntegrationStatusV1 struct { + // AWSRolesAnywhere contains the specific status fields related to the AWS Roles Anywhere Integration subkind.
+ AWSRolesAnywhere *AWSRAIntegrationStatusV1 `protobuf:"bytes,1,opt,name=AWSRolesAnywhere,proto3" json:"aws_ra,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *IntegrationStatusV1) Reset() { *m = IntegrationStatusV1{} } +func (m *IntegrationStatusV1) String() string { return proto.CompactTextString(m) } +func (*IntegrationStatusV1) ProtoMessage() {} +func (*IntegrationStatusV1) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{387} +} +func (m *IntegrationStatusV1) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *IntegrationStatusV1) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_IntegrationStatusV1.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *IntegrationStatusV1) XXX_Merge(src proto.Message) { + xxx_messageInfo_IntegrationStatusV1.Merge(m, src) +} +func (m *IntegrationStatusV1) XXX_Size() int { + return m.Size() +} +func (m *IntegrationStatusV1) XXX_DiscardUnknown() { + xxx_messageInfo_IntegrationStatusV1.DiscardUnknown(m) +} + +var xxx_messageInfo_IntegrationStatusV1 proto.InternalMessageInfo + // AWSOIDCIntegrationSpecV1 contains the spec properties for the AWS OIDC SubKind Integration. type AWSOIDCIntegrationSpecV1 struct { // RoleARN contains the Role ARN used to set up the Integration. @@ -21729,7 +23225,7 @@ type AWSOIDCIntegrationSpecV1 struct { // This bucket/prefix/* files must be publicly accessible and contain the following: // > .well-known/openid-configuration // > .well-known/jwks - // Format: s3:/// + // Format: `s3:///` // Optional. The proxy's endpoint is used if it is not specified. // // DEPRECATED: Thumbprint validation requires the issuer to update the IdP in AWS everytime the issuer changes the certificate. 
@@ -21737,9 +23233,9 @@ type AWSOIDCIntegrationSpecV1 struct { // Amazon is now trusting all the root certificate authorities, and this workaround is no longer needed. // DELETE IN 18.0. IssuerS3URI string `protobuf:"bytes,2,opt,name=IssuerS3URI,proto3" json:"issuer_s3_uri,omitempty"` // Deprecated: Do not use. - // Audience is used to record a name of a plugin or a discover service in Teleport - // that depends on this integration. - // Audience value can be empty or configured with supported preset audience type. + // Audience is used to record a name of a plugin or a discover service in + // Teleport that depends on this integration. + // Audience value can either be empty or "aws-identity-center". // Preset audience may impose specific behavior on the integration CRUD API, // such as preventing integration from update or deletion. Empty audience value // should be treated as a default and backward-compatible behavior of the integration. @@ -21753,7 +23249,7 @@ func (m *AWSOIDCIntegrationSpecV1) Reset() { *m = AWSOIDCIntegrationSpec func (m *AWSOIDCIntegrationSpecV1) String() string { return proto.CompactTextString(m) } func (*AWSOIDCIntegrationSpecV1) ProtoMessage() {} func (*AWSOIDCIntegrationSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{366} + return fileDescriptor_9198ee693835762e, []int{388} } func (m *AWSOIDCIntegrationSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -21799,7 +23295,7 @@ func (m *AzureOIDCIntegrationSpecV1) Reset() { *m = AzureOIDCIntegration func (m *AzureOIDCIntegrationSpecV1) String() string { return proto.CompactTextString(m) } func (*AzureOIDCIntegrationSpecV1) ProtoMessage() {} func (*AzureOIDCIntegrationSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{367} + return fileDescriptor_9198ee693835762e, []int{389} } func (m *AzureOIDCIntegrationSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -21841,7 +23337,7 @@ func (m 
*GitHubIntegrationSpecV1) Reset() { *m = GitHubIntegrationSpecV1 func (m *GitHubIntegrationSpecV1) String() string { return proto.CompactTextString(m) } func (*GitHubIntegrationSpecV1) ProtoMessage() {} func (*GitHubIntegrationSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{368} + return fileDescriptor_9198ee693835762e, []int{390} } func (m *GitHubIntegrationSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -21886,7 +23382,7 @@ func (m *AWSRAIntegrationSpecV1) Reset() { *m = AWSRAIntegrationSpecV1{} func (m *AWSRAIntegrationSpecV1) String() string { return proto.CompactTextString(m) } func (*AWSRAIntegrationSpecV1) ProtoMessage() {} func (*AWSRAIntegrationSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{369} + return fileDescriptor_9198ee693835762e, []int{391} } func (m *AWSRAIntegrationSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -21925,7 +23421,21 @@ type AWSRolesAnywhereProfileSyncConfig struct { // ProfileAcceptsRoleSessionName indicates whether the profile accepts a custom Role Session name. ProfileAcceptsRoleSessionName bool `protobuf:"varint,3,opt,name=ProfileAcceptsRoleSessionName,proto3" json:"profile_accepts_role_session_name"` // RoleARN is the ARN of the IAM Role to assume when accessing the AWS APIs. - RoleARN string `protobuf:"bytes,4,opt,name=RoleARN,proto3" json:"role_arn"` + RoleARN string `protobuf:"bytes,4,opt,name=RoleARN,proto3" json:"role_arn"` + // ProfileNameFilters is a list of filters applied to the profile name. + // Only matching profiles will be synchronized as application servers. + // If empty, no filtering is applied. 
+ // + // Filters can be globs, for example: + // + // profile* + // *name* + // + // Or regexes if they're prefixed and suffixed with ^ and $, for example: + // + // ^profile.*$ + // ^.*name.*$ + ProfileNameFilters []string `protobuf:"bytes,5,rep,name=ProfileNameFilters,proto3" json:"profile_name_filters"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -21935,7 +23445,7 @@ func (m *AWSRolesAnywhereProfileSyncConfig) Reset() { *m = AWSRolesAnywh func (m *AWSRolesAnywhereProfileSyncConfig) String() string { return proto.CompactTextString(m) } func (*AWSRolesAnywhereProfileSyncConfig) ProtoMessage() {} func (*AWSRolesAnywhereProfileSyncConfig) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{370} + return fileDescriptor_9198ee693835762e, []int{392} } func (m *AWSRolesAnywhereProfileSyncConfig) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -21964,6 +23474,102 @@ func (m *AWSRolesAnywhereProfileSyncConfig) XXX_DiscardUnknown() { var xxx_messageInfo_AWSRolesAnywhereProfileSyncConfig proto.InternalMessageInfo +// AWSRAIntegrationStatusV1 contains the status properties for the AWS IAM Roles Anywhere SubKind Integration. +type AWSRAIntegrationStatusV1 struct { + // LastProfileSync is the summary of the last profile sync iteration. 
+ LastProfileSync *AWSRolesAnywhereProfileSyncIterationSummary `protobuf:"bytes,1,opt,name=LastProfileSync,proto3" json:"last_profile_sync"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *AWSRAIntegrationStatusV1) Reset() { *m = AWSRAIntegrationStatusV1{} } +func (m *AWSRAIntegrationStatusV1) String() string { return proto.CompactTextString(m) } +func (*AWSRAIntegrationStatusV1) ProtoMessage() {} +func (*AWSRAIntegrationStatusV1) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{393} +} +func (m *AWSRAIntegrationStatusV1) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *AWSRAIntegrationStatusV1) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_AWSRAIntegrationStatusV1.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *AWSRAIntegrationStatusV1) XXX_Merge(src proto.Message) { + xxx_messageInfo_AWSRAIntegrationStatusV1.Merge(m, src) +} +func (m *AWSRAIntegrationStatusV1) XXX_Size() int { + return m.Size() +} +func (m *AWSRAIntegrationStatusV1) XXX_DiscardUnknown() { + xxx_messageInfo_AWSRAIntegrationStatusV1.DiscardUnknown(m) +} + +var xxx_messageInfo_AWSRAIntegrationStatusV1 proto.InternalMessageInfo + +// AWSRolesAnywhereProfileSyncIterationSummary contains the summary of a single profile sync iteration. +type AWSRolesAnywhereProfileSyncIterationSummary struct { + // StartTime is the time when the sync iteration started. + StartTime time.Time `protobuf:"bytes,1,opt,name=StartTime,proto3,stdtime" json:"start_time"` + // EndTime is the time when the sync iteration ended. + EndTime time.Time `protobuf:"bytes,2,opt,name=EndTime,proto3,stdtime" json:"end_time"` + // Status is the result of the sync iteration: SUCCESS or ERROR. 
+ Status string `protobuf:"bytes,3,opt,name=Status,proto3" json:"status"` + // ErrorMessage holds the error message when status is ERROR. + ErrorMessage string `protobuf:"bytes,4,opt,name=ErrorMessage,proto3" json:"error_message"` + // SyncedProfiles is the number of profiles synchronized as application servers. + SyncedProfiles int32 `protobuf:"varint,5,opt,name=SyncedProfiles,proto3" json:"synced_profiles"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *AWSRolesAnywhereProfileSyncIterationSummary) Reset() { + *m = AWSRolesAnywhereProfileSyncIterationSummary{} +} +func (m *AWSRolesAnywhereProfileSyncIterationSummary) String() string { + return proto.CompactTextString(m) +} +func (*AWSRolesAnywhereProfileSyncIterationSummary) ProtoMessage() {} +func (*AWSRolesAnywhereProfileSyncIterationSummary) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{394} +} +func (m *AWSRolesAnywhereProfileSyncIterationSummary) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *AWSRolesAnywhereProfileSyncIterationSummary) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_AWSRolesAnywhereProfileSyncIterationSummary.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *AWSRolesAnywhereProfileSyncIterationSummary) XXX_Merge(src proto.Message) { + xxx_messageInfo_AWSRolesAnywhereProfileSyncIterationSummary.Merge(m, src) +} +func (m *AWSRolesAnywhereProfileSyncIterationSummary) XXX_Size() int { + return m.Size() +} +func (m *AWSRolesAnywhereProfileSyncIterationSummary) XXX_DiscardUnknown() { + xxx_messageInfo_AWSRolesAnywhereProfileSyncIterationSummary.DiscardUnknown(m) +} + +var xxx_messageInfo_AWSRolesAnywhereProfileSyncIterationSummary proto.InternalMessageInfo + // HeadlessAuthentication 
holds data for an ongoing headless authentication attempt. type HeadlessAuthentication struct { // Header is the resource header. @@ -21992,7 +23598,7 @@ func (m *HeadlessAuthentication) Reset() { *m = HeadlessAuthentication{} func (m *HeadlessAuthentication) String() string { return proto.CompactTextString(m) } func (*HeadlessAuthentication) ProtoMessage() {} func (*HeadlessAuthentication) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{371} + return fileDescriptor_9198ee693835762e, []int{395} } func (m *HeadlessAuthentication) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -22049,7 +23655,7 @@ func (m *WatchKind) Reset() { *m = WatchKind{} } func (m *WatchKind) String() string { return proto.CompactTextString(m) } func (*WatchKind) ProtoMessage() {} func (*WatchKind) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{372} + return fileDescriptor_9198ee693835762e, []int{396} } func (m *WatchKind) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -22099,7 +23705,7 @@ func (m *WatchStatusV1) Reset() { *m = WatchStatusV1{} } func (m *WatchStatusV1) String() string { return proto.CompactTextString(m) } func (*WatchStatusV1) ProtoMessage() {} func (*WatchStatusV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{373} + return fileDescriptor_9198ee693835762e, []int{397} } func (m *WatchStatusV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -22140,7 +23746,7 @@ func (m *WatchStatusSpecV1) Reset() { *m = WatchStatusSpecV1{} } func (m *WatchStatusSpecV1) String() string { return proto.CompactTextString(m) } func (*WatchStatusSpecV1) ProtoMessage() {} func (*WatchStatusSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{374} + return fileDescriptor_9198ee693835762e, []int{398} } func (m *WatchStatusSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -22190,7 +23796,7 @@ func (m *ServerInfoV1) Reset() { *m = 
ServerInfoV1{} } func (m *ServerInfoV1) String() string { return proto.CompactTextString(m) } func (*ServerInfoV1) ProtoMessage() {} func (*ServerInfoV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{375} + return fileDescriptor_9198ee693835762e, []int{399} } func (m *ServerInfoV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -22232,7 +23838,7 @@ func (m *ServerInfoSpecV1) Reset() { *m = ServerInfoSpecV1{} } func (m *ServerInfoSpecV1) String() string { return proto.CompactTextString(m) } func (*ServerInfoSpecV1) ProtoMessage() {} func (*ServerInfoSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{376} + return fileDescriptor_9198ee693835762e, []int{400} } func (m *ServerInfoSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -22289,7 +23895,7 @@ func (m *JamfSpecV1) Reset() { *m = JamfSpecV1{} } func (m *JamfSpecV1) String() string { return proto.CompactTextString(m) } func (*JamfSpecV1) ProtoMessage() {} func (*JamfSpecV1) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{377} + return fileDescriptor_9198ee693835762e, []int{401} } func (m *JamfSpecV1) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -22354,7 +23960,7 @@ func (m *JamfInventoryEntry) Reset() { *m = JamfInventoryEntry{} } func (m *JamfInventoryEntry) String() string { return proto.CompactTextString(m) } func (*JamfInventoryEntry) ProtoMessage() {} func (*JamfInventoryEntry) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{378} + return fileDescriptor_9198ee693835762e, []int{402} } func (m *JamfInventoryEntry) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -22411,7 +24017,7 @@ type MessageWithHeader struct { func (m *MessageWithHeader) Reset() { *m = MessageWithHeader{} } func (*MessageWithHeader) ProtoMessage() {} func (*MessageWithHeader) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{379} + return 
fileDescriptor_9198ee693835762e, []int{403} } func (m *MessageWithHeader) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -22462,20 +24068,22 @@ type AWSMatcher struct { // KubeAppDiscovery controls whether Kubernetes App Discovery will be enabled for agents running on // discovered clusters, currently only affects AWS EKS discovery in integration mode. KubeAppDiscovery bool `protobuf:"varint,8,opt,name=KubeAppDiscovery,proto3" json:"kube_app_discovery,omitempty"` - // SetupAccessForARN is the role that the discovery service should create EKS Access Entries for. + // SetupAccessForARN is the role that the Discovery Service should create EKS Access Entries for. // This value should match the IAM identity that Teleport Kubernetes Service uses. - // If this value is empty, the discovery service will attempt to set up access for its own identity (self). - SetupAccessForARN string `protobuf:"bytes,9,opt,name=SetupAccessForARN,proto3" json:"setup_access_for_arn,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + // If this value is empty, the Discovery Service will attempt to set up access for its own identity (self). + SetupAccessForARN string `protobuf:"bytes,9,opt,name=SetupAccessForARN,proto3" json:"setup_access_for_arn,omitempty"` + // Organization is an AWS Organization matcher for discovering resources across multiple accounts under an Organization.
+ Organization *AWSOrganizationMatcher `protobuf:"bytes,10,opt,name=Organization,proto3" json:"organization,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *AWSMatcher) Reset() { *m = AWSMatcher{} } func (m *AWSMatcher) String() string { return proto.CompactTextString(m) } func (*AWSMatcher) ProtoMessage() {} func (*AWSMatcher) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{380} + return fileDescriptor_9198ee693835762e, []int{404} } func (m *AWSMatcher) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -22504,6 +24112,100 @@ func (m *AWSMatcher) XXX_DiscardUnknown() { var xxx_messageInfo_AWSMatcher proto.InternalMessageInfo +// AWSOrganizationMatcher specifies an Organization and rules for discovering accounts under that organization. +type AWSOrganizationMatcher struct { + // OrganizationID is the AWS Organization ID to match against. + // Required. + OrganizationID string `protobuf:"bytes,1,opt,name=OrganizationID,proto3" json:"organization_id,omitempty"` + // OrganizationalUnits contains rules for matching AWS accounts based on their Organizational Units.
+ OrganizationalUnits *AWSOrganizationUnitsMatcher `protobuf:"bytes,2,opt,name=OrganizationalUnits,proto3" json:"organizational_units,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *AWSOrganizationMatcher) Reset() { *m = AWSOrganizationMatcher{} } +func (m *AWSOrganizationMatcher) String() string { return proto.CompactTextString(m) } +func (*AWSOrganizationMatcher) ProtoMessage() {} +func (*AWSOrganizationMatcher) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{405} +} +func (m *AWSOrganizationMatcher) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *AWSOrganizationMatcher) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_AWSOrganizationMatcher.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *AWSOrganizationMatcher) XXX_Merge(src proto.Message) { + xxx_messageInfo_AWSOrganizationMatcher.Merge(m, src) +} +func (m *AWSOrganizationMatcher) XXX_Size() int { + return m.Size() +} +func (m *AWSOrganizationMatcher) XXX_DiscardUnknown() { + xxx_messageInfo_AWSOrganizationMatcher.DiscardUnknown(m) +} + +var xxx_messageInfo_AWSOrganizationMatcher proto.InternalMessageInfo + +// AWSOrganizationUnitsMatcher contains rules for matching accounts under an Organization. +// Accounts that belong to an excluded Organizational Unit, and its children, will be excluded even if they were included. +type AWSOrganizationUnitsMatcher struct { + // Include is a list of AWS Organizational Unit IDs to match. + // Only exact matches or wildcard (*) are supported. + // If empty, all Organizational Units are included by default. 
+ Include []string `protobuf:"bytes,1,rep,name=Include,proto3" json:"include,omitempty"` + // Exclude is a list of AWS Organizational Unit IDs to exclude. + // Only exact matches or wildcard (*) are supported. + // If empty, no Organizational Units are excluded by default. + Exclude []string `protobuf:"bytes,2,rep,name=Exclude,proto3" json:"exclude,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *AWSOrganizationUnitsMatcher) Reset() { *m = AWSOrganizationUnitsMatcher{} } +func (m *AWSOrganizationUnitsMatcher) String() string { return proto.CompactTextString(m) } +func (*AWSOrganizationUnitsMatcher) ProtoMessage() {} +func (*AWSOrganizationUnitsMatcher) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{406} +} +func (m *AWSOrganizationUnitsMatcher) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *AWSOrganizationUnitsMatcher) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_AWSOrganizationUnitsMatcher.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *AWSOrganizationUnitsMatcher) XXX_Merge(src proto.Message) { + xxx_messageInfo_AWSOrganizationUnitsMatcher.Merge(m, src) +} +func (m *AWSOrganizationUnitsMatcher) XXX_Size() int { + return m.Size() +} +func (m *AWSOrganizationUnitsMatcher) XXX_DiscardUnknown() { + xxx_messageInfo_AWSOrganizationUnitsMatcher.DiscardUnknown(m) +} + +var xxx_messageInfo_AWSOrganizationUnitsMatcher proto.InternalMessageInfo + // AssumeRole provides a role ARN and ExternalID to assume an AWS role // when interacting with AWS resources. 
type AssumeRole struct { @@ -22520,7 +24222,7 @@ func (m *AssumeRole) Reset() { *m = AssumeRole{} } func (m *AssumeRole) String() string { return proto.CompactTextString(m) } func (*AssumeRole) ProtoMessage() {} func (*AssumeRole) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{381} + return fileDescriptor_9198ee693835762e, []int{407} } func (m *AssumeRole) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -22572,17 +24274,32 @@ type InstallerParams struct { // 0: uses eice for EC2 matchers which use an integration and script for all the other methods // 1: uses script mode // 2: uses eice mode - EnrollMode InstallParamEnrollMode `protobuf:"varint,8,opt,name=EnrollMode,proto3,enum=types.InstallParamEnrollMode" json:"enroll_mode,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + EnrollMode InstallParamEnrollMode `protobuf:"varint,8,opt,name=EnrollMode,proto3,enum=types.InstallParamEnrollMode" json:"enroll_mode,omitempty"` + // Suffix indicates the installation suffix for the teleport installation. + // Set this value if you want multiple installations of Teleport. + // See --install-suffix flag in teleport-update program. + // Note: only supported for Amazon EC2. + // Suffix name can only contain alphanumeric characters and hyphens. + Suffix string `protobuf:"bytes,9,opt,name=Suffix,proto3" json:"suffix,omitempty"` + // UpdateGroup indicates the update group for the teleport installation. + // This value is used to group installations in order to update them in batches. + // See --group flag in teleport-update program. + // Note: only supported for Amazon EC2. + // Group name can only contain alphanumeric characters and hyphens. + UpdateGroup string `protobuf:"bytes,10,opt,name=UpdateGroup,proto3" json:"update_group,omitempty"` + // HTTPProxySettings defines HTTP proxy settings for making HTTP requests. 
+ // When set, this will set the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables before running the installation. + HTTPProxySettings *HTTPProxySettings `protobuf:"bytes,11,opt,name=HTTPProxySettings,proto3" json:"http_proxy_settings,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *InstallerParams) Reset() { *m = InstallerParams{} } func (m *InstallerParams) String() string { return proto.CompactTextString(m) } func (*InstallerParams) ProtoMessage() {} func (*InstallerParams) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{382} + return fileDescriptor_9198ee693835762e, []int{408} } func (m *InstallerParams) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -22611,6 +24328,55 @@ func (m *InstallerParams) XXX_DiscardUnknown() { var xxx_messageInfo_InstallerParams proto.InternalMessageInfo +// HTTPProxySettings defines HTTP proxy settings for making HTTP and HTTPS requests. +type HTTPProxySettings struct { + // HTTPProxy is the URL for the HTTP proxy to use when making requests. + // When applied, this will set the HTTP_PROXY environment variable. + HTTPProxy string `protobuf:"bytes,1,opt,name=HTTPProxy,proto3" json:"http_proxy,omitempty"` + // HTTPSProxy is the URL for the HTTPS Proxy to use when making requests. + // When applied, this will set the HTTPS_PROXY environment variable. + HTTPSProxy string `protobuf:"bytes,2,opt,name=HTTPSProxy,proto3" json:"https_proxy,omitempty"` + // NoProxy is a comma separated list of URLs that will be excluded from proxying. + // When applied, this will set the NO_PROXY environment variable. 
+ NoProxy string `protobuf:"bytes,3,opt,name=NoProxy,proto3" json:"no_proxy,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *HTTPProxySettings) Reset() { *m = HTTPProxySettings{} } +func (m *HTTPProxySettings) String() string { return proto.CompactTextString(m) } +func (*HTTPProxySettings) ProtoMessage() {} +func (*HTTPProxySettings) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{409} +} +func (m *HTTPProxySettings) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *HTTPProxySettings) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_HTTPProxySettings.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *HTTPProxySettings) XXX_Merge(src proto.Message) { + xxx_messageInfo_HTTPProxySettings.Merge(m, src) +} +func (m *HTTPProxySettings) XXX_Size() int { + return m.Size() +} +func (m *HTTPProxySettings) XXX_DiscardUnknown() { + xxx_messageInfo_HTTPProxySettings.DiscardUnknown(m) +} + +var xxx_messageInfo_HTTPProxySettings proto.InternalMessageInfo + // AWSSSM provides options to use when executing SSM documents type AWSSSM struct { // DocumentName is the name of the document to use when executing an @@ -22625,7 +24391,7 @@ func (m *AWSSSM) Reset() { *m = AWSSSM{} } func (m *AWSSSM) String() string { return proto.CompactTextString(m) } func (*AWSSSM) ProtoMessage() {} func (*AWSSSM) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{383} + return fileDescriptor_9198ee693835762e, []int{410} } func (m *AWSSSM) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -22668,7 +24434,7 @@ func (m *AzureInstallerParams) Reset() { *m = AzureInstallerParams{} } func (m *AzureInstallerParams) String() string { return 
proto.CompactTextString(m) } func (*AzureInstallerParams) ProtoMessage() {} func (*AzureInstallerParams) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{384} + return fileDescriptor_9198ee693835762e, []int{411} } func (m *AzureInstallerParams) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -22712,17 +24478,20 @@ type AzureMatcher struct { ResourceTags Labels `protobuf:"bytes,5,opt,name=ResourceTags,proto3,customtype=Labels" json:"tags,omitempty"` // Params sets the join method when installing on // discovered Azure nodes. - Params *InstallerParams `protobuf:"bytes,6,opt,name=Params,proto3" json:"install_params,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Params *InstallerParams `protobuf:"bytes,6,opt,name=Params,proto3" json:"install_params,omitempty"` + // Integration is the integration name used to generate credentials to interact with Azure APIs. + // Environment credentials will not be used when this value is set. 
+ Integration string `protobuf:"bytes,7,opt,name=Integration,proto3" json:"integration,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *AzureMatcher) Reset() { *m = AzureMatcher{} } func (m *AzureMatcher) String() string { return proto.CompactTextString(m) } func (*AzureMatcher) ProtoMessage() {} func (*AzureMatcher) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{385} + return fileDescriptor_9198ee693835762e, []int{412} } func (m *AzureMatcher) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -22777,7 +24546,7 @@ func (m *GCPMatcher) Reset() { *m = GCPMatcher{} } func (m *GCPMatcher) String() string { return proto.CompactTextString(m) } func (*GCPMatcher) ProtoMessage() {} func (*GCPMatcher) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{386} + return fileDescriptor_9198ee693835762e, []int{413} } func (m *GCPMatcher) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -22823,7 +24592,7 @@ func (m *KubernetesMatcher) Reset() { *m = KubernetesMatcher{} } func (m *KubernetesMatcher) String() string { return proto.CompactTextString(m) } func (*KubernetesMatcher) ProtoMessage() {} func (*KubernetesMatcher) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{387} + return fileDescriptor_9198ee693835762e, []int{414} } func (m *KubernetesMatcher) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -22865,7 +24634,7 @@ func (m *OktaOptions) Reset() { *m = OktaOptions{} } func (m *OktaOptions) String() string { return proto.CompactTextString(m) } func (*OktaOptions) ProtoMessage() {} func (*OktaOptions) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{388} + return fileDescriptor_9198ee693835762e, []int{415} } func (m *OktaOptions) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -22911,7 +24680,7 @@ func (m *AccessGraphSync) Reset() { *m = 
AccessGraphSync{} } func (m *AccessGraphSync) String() string { return proto.CompactTextString(m) } func (*AccessGraphSync) ProtoMessage() {} func (*AccessGraphSync) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{389} + return fileDescriptor_9198ee693835762e, []int{416} } func (m *AccessGraphSync) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -22956,7 +24725,7 @@ func (m *AccessGraphAWSSyncCloudTrailLogs) Reset() { *m = AccessGraphAWS func (m *AccessGraphAWSSyncCloudTrailLogs) String() string { return proto.CompactTextString(m) } func (*AccessGraphAWSSyncCloudTrailLogs) ProtoMessage() {} func (*AccessGraphAWSSyncCloudTrailLogs) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{390} + return fileDescriptor_9198ee693835762e, []int{417} } func (m *AccessGraphAWSSyncCloudTrailLogs) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -22985,6 +24754,49 @@ func (m *AccessGraphAWSSyncCloudTrailLogs) XXX_DiscardUnknown() { var xxx_messageInfo_AccessGraphAWSSyncCloudTrailLogs proto.InternalMessageInfo +// AccessGraphAWSSyncEKSAuditLogs defines the settings for ingesting Kubernetes apiserver +// audit logs from EKS clusters. +type AccessGraphAWSSyncEKSAuditLogs struct { + // The tags of EKS clusters for which apiserver audit logs should be fetched. 
+ Tags Labels `protobuf:"bytes,1,opt,name=Tags,proto3,customtype=Labels" json:"tags,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *AccessGraphAWSSyncEKSAuditLogs) Reset() { *m = AccessGraphAWSSyncEKSAuditLogs{} } +func (m *AccessGraphAWSSyncEKSAuditLogs) String() string { return proto.CompactTextString(m) } +func (*AccessGraphAWSSyncEKSAuditLogs) ProtoMessage() {} +func (*AccessGraphAWSSyncEKSAuditLogs) Descriptor() ([]byte, []int) { + return fileDescriptor_9198ee693835762e, []int{418} +} +func (m *AccessGraphAWSSyncEKSAuditLogs) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *AccessGraphAWSSyncEKSAuditLogs) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_AccessGraphAWSSyncEKSAuditLogs.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *AccessGraphAWSSyncEKSAuditLogs) XXX_Merge(src proto.Message) { + xxx_messageInfo_AccessGraphAWSSyncEKSAuditLogs.Merge(m, src) +} +func (m *AccessGraphAWSSyncEKSAuditLogs) XXX_Size() int { + return m.Size() +} +func (m *AccessGraphAWSSyncEKSAuditLogs) XXX_DiscardUnknown() { + xxx_messageInfo_AccessGraphAWSSyncEKSAuditLogs.DiscardUnknown(m) +} + +var xxx_messageInfo_AccessGraphAWSSyncEKSAuditLogs proto.InternalMessageInfo + // AccessGraphAWSSync is a configuration for AWS Access Graph service poll service. type AccessGraphAWSSync struct { // Regions are AWS regions to import resources from. @@ -22995,6 +24807,7 @@ type AccessGraphAWSSync struct { Integration string `protobuf:"bytes,4,opt,name=Integration,proto3" json:"integration,omitempty"` // Configuration settings for collecting AWS CloudTrail logs via an SQS queue. 
CloudTrailLogs *AccessGraphAWSSyncCloudTrailLogs `protobuf:"bytes,5,opt,name=cloud_trail_logs,json=cloudTrailLogs,proto3" json:"cloud_trail_logs,omitempty"` + EksAuditLogs *AccessGraphAWSSyncEKSAuditLogs `protobuf:"bytes,6,opt,name=eks_audit_logs,json=eksAuditLogs,proto3" json:"eks_audit_logs,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -23004,7 +24817,7 @@ func (m *AccessGraphAWSSync) Reset() { *m = AccessGraphAWSSync{} } func (m *AccessGraphAWSSync) String() string { return proto.CompactTextString(m) } func (*AccessGraphAWSSync) ProtoMessage() {} func (*AccessGraphAWSSync) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{391} + return fileDescriptor_9198ee693835762e, []int{419} } func (m *AccessGraphAWSSync) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -23048,7 +24861,7 @@ func (m *AccessGraphAzureSync) Reset() { *m = AccessGraphAzureSync{} } func (m *AccessGraphAzureSync) String() string { return proto.CompactTextString(m) } func (*AccessGraphAzureSync) ProtoMessage() {} func (*AccessGraphAzureSync) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{392} + return fileDescriptor_9198ee693835762e, []int{420} } func (m *AccessGraphAzureSync) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -23104,7 +24917,7 @@ func (m *TargetHealth) Reset() { *m = TargetHealth{} } func (m *TargetHealth) String() string { return proto.CompactTextString(m) } func (*TargetHealth) ProtoMessage() {} func (*TargetHealth) Descriptor() ([]byte, []int) { - return fileDescriptor_9198ee693835762e, []int{393} + return fileDescriptor_9198ee693835762e, []int{421} } func (m *TargetHealth) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -23143,6 +24956,7 @@ func init() { proto.RegisterEnum("types.SecondFactorType", SecondFactorType_name, SecondFactorType_value) proto.RegisterEnum("types.UserTokenUsage", UserTokenUsage_name, 
UserTokenUsage_value) proto.RegisterEnum("types.RequestState", RequestState_name, RequestState_value) + proto.RegisterEnum("types.AccessRequestKind", AccessRequestKind_name, AccessRequestKind_value) proto.RegisterEnum("types.AccessRequestScope", AccessRequestScope_name, AccessRequestScope_value) proto.RegisterEnum("types.CreateHostUserMode", CreateHostUserMode_name, CreateHostUserMode_value) proto.RegisterEnum("types.CreateDatabaseUserMode", CreateDatabaseUserMode_name, CreateDatabaseUserMode_value) @@ -23192,11 +25006,13 @@ func init() { proto.RegisterType((*RDS)(nil), "types.RDS") proto.RegisterType((*RDSProxy)(nil), "types.RDSProxy") proto.RegisterType((*ElastiCache)(nil), "types.ElastiCache") + proto.RegisterType((*ElastiCacheServerless)(nil), "types.ElastiCacheServerless") proto.RegisterType((*MemoryDB)(nil), "types.MemoryDB") proto.RegisterType((*RedshiftServerless)(nil), "types.RedshiftServerless") proto.RegisterType((*OpenSearch)(nil), "types.OpenSearch") proto.RegisterType((*DocumentDB)(nil), "types.DocumentDB") proto.RegisterType((*GCPCloudSQL)(nil), "types.GCPCloudSQL") + proto.RegisterType((*AlloyDB)(nil), "types.AlloyDB") proto.RegisterType((*Azure)(nil), "types.Azure") proto.RegisterType((*AzureRedis)(nil), "types.AzureRedis") proto.RegisterType((*AD)(nil), "types.AD") @@ -23225,6 +25041,7 @@ func init() { proto.RegisterType((*AppIdentityCenter)(nil), "types.AppIdentityCenter") proto.RegisterType((*AppSpecV3)(nil), "types.AppSpecV3") proto.RegisterMapType((map[string]CommandLabelV2)(nil), "types.AppSpecV3.DynamicLabelsEntry") + proto.RegisterType((*MCP)(nil), "types.MCP") proto.RegisterType((*Rewrite)(nil), "types.Rewrite") proto.RegisterType((*Header)(nil), "types.Header") proto.RegisterType((*PortRange)(nil), "types.PortRange") @@ -23234,6 +25051,8 @@ func init() { proto.RegisterType((*SSHKeyPair)(nil), "types.SSHKeyPair") proto.RegisterType((*TLSKeyPair)(nil), "types.TLSKeyPair") proto.RegisterType((*JWTKeyPair)(nil), "types.JWTKeyPair") + 
proto.RegisterType((*EncryptionKeyPair)(nil), "types.EncryptionKeyPair") + proto.RegisterType((*AgeEncryptionKey)(nil), "types.AgeEncryptionKey") proto.RegisterType((*CertAuthorityV2)(nil), "types.CertAuthorityV2") proto.RegisterType((*CertAuthoritySpecV2)(nil), "types.CertAuthoritySpecV2") proto.RegisterType((*CAKeySet)(nil), "types.CAKeySet") @@ -23257,6 +25076,7 @@ func init() { proto.RegisterType((*ProvisionTokenSpecV2Spacelift_Rule)(nil), "types.ProvisionTokenSpecV2Spacelift.Rule") proto.RegisterType((*ProvisionTokenSpecV2Kubernetes)(nil), "types.ProvisionTokenSpecV2Kubernetes") proto.RegisterType((*ProvisionTokenSpecV2Kubernetes_StaticJWKSConfig)(nil), "types.ProvisionTokenSpecV2Kubernetes.StaticJWKSConfig") + proto.RegisterType((*ProvisionTokenSpecV2Kubernetes_OIDCConfig)(nil), "types.ProvisionTokenSpecV2Kubernetes.OIDCConfig") proto.RegisterType((*ProvisionTokenSpecV2Kubernetes_Rule)(nil), "types.ProvisionTokenSpecV2Kubernetes.Rule") proto.RegisterType((*ProvisionTokenSpecV2Azure)(nil), "types.ProvisionTokenSpecV2Azure") proto.RegisterType((*ProvisionTokenSpecV2Azure_Rule)(nil), "types.ProvisionTokenSpecV2Azure.Rule") @@ -23268,6 +25088,8 @@ func init() { proto.RegisterType((*ProvisionTokenSpecV2Bitbucket_Rule)(nil), "types.ProvisionTokenSpecV2Bitbucket.Rule") proto.RegisterType((*ProvisionTokenSpecV2Oracle)(nil), "types.ProvisionTokenSpecV2Oracle") proto.RegisterType((*ProvisionTokenSpecV2Oracle_Rule)(nil), "types.ProvisionTokenSpecV2Oracle.Rule") + proto.RegisterType((*ProvisionTokenSpecV2Env0)(nil), "types.ProvisionTokenSpecV2Env0") + proto.RegisterType((*ProvisionTokenSpecV2Env0_Rule)(nil), "types.ProvisionTokenSpecV2Env0.Rule") proto.RegisterType((*ProvisionTokenSpecV2BoundKeypair)(nil), "types.ProvisionTokenSpecV2BoundKeypair") proto.RegisterType((*ProvisionTokenSpecV2BoundKeypair_OnboardingSpec)(nil), "types.ProvisionTokenSpecV2BoundKeypair.OnboardingSpec") proto.RegisterType((*ProvisionTokenSpecV2BoundKeypair_RecoverySpec)(nil), 
"types.ProvisionTokenSpecV2BoundKeypair.RecoverySpec") @@ -23285,7 +25107,11 @@ func init() { proto.RegisterType((*AgentMeshTunnelStrategy)(nil), "types.AgentMeshTunnelStrategy") proto.RegisterType((*ProxyPeeringTunnelStrategy)(nil), "types.ProxyPeeringTunnelStrategy") proto.RegisterType((*SessionRecordingConfigV2)(nil), "types.SessionRecordingConfigV2") + proto.RegisterType((*KeyLabel)(nil), "types.KeyLabel") + proto.RegisterType((*ManualKeyManagementConfig)(nil), "types.ManualKeyManagementConfig") + proto.RegisterType((*SessionRecordingEncryptionConfig)(nil), "types.SessionRecordingEncryptionConfig") proto.RegisterType((*SessionRecordingConfigSpecV2)(nil), "types.SessionRecordingConfigSpecV2") + proto.RegisterType((*SessionRecordingConfigStatus)(nil), "types.SessionRecordingConfigStatus") proto.RegisterType((*AuthPreferenceV2)(nil), "types.AuthPreferenceV2") proto.RegisterType((*AuthPreferenceSpecV2)(nil), "types.AuthPreferenceSpecV2") proto.RegisterType((*StableUNIXUserConfig)(nil), "types.StableUNIXUserConfig") @@ -23313,6 +25139,8 @@ func init() { proto.RegisterType((*AccessRequestFilter)(nil), "types.AccessRequestFilter") proto.RegisterType((*AccessCapabilities)(nil), "types.AccessCapabilities") proto.RegisterType((*AccessCapabilitiesRequest)(nil), "types.AccessCapabilitiesRequest") + proto.RegisterType((*RemoteAccessCapabilities)(nil), "types.RemoteAccessCapabilities") + proto.RegisterType((*RemoteAccessCapabilitiesRequest)(nil), "types.RemoteAccessCapabilitiesRequest") proto.RegisterType((*RequestKubernetesResource)(nil), "types.RequestKubernetesResource") proto.RegisterType((*ResourceID)(nil), "types.ResourceID") proto.RegisterType((*PluginDataV3)(nil), "types.PluginDataV3") @@ -23336,6 +25164,7 @@ func init() { proto.RegisterType((*RoleConditions)(nil), "types.RoleConditions") proto.RegisterType((*IdentityCenterAccountAssignment)(nil), "types.IdentityCenterAccountAssignment") proto.RegisterType((*GitHubPermission)(nil), "types.GitHubPermission") + 
proto.RegisterType((*MCPPermissions)(nil), "types.MCPPermissions") proto.RegisterType((*SPIFFERoleCondition)(nil), "types.SPIFFERoleCondition") proto.RegisterType((*DatabasePermission)(nil), "types.DatabasePermission") proto.RegisterType((*KubernetesResource)(nil), "types.KubernetesResource") @@ -23346,6 +25175,9 @@ func init() { proto.RegisterType((*AccessReviewConditions)(nil), "types.AccessReviewConditions") proto.RegisterType((*AccessRequestAllowedPromotion)(nil), "types.AccessRequestAllowedPromotion") proto.RegisterType((*AccessRequestAllowedPromotions)(nil), "types.AccessRequestAllowedPromotions") + proto.RegisterType((*ResourceIDList)(nil), "types.ResourceIDList") + proto.RegisterType((*LongTermResourceGrouping)(nil), "types.LongTermResourceGrouping") + proto.RegisterMapType((map[string]ResourceIDList)(nil), "types.LongTermResourceGrouping.AccessListToResourcesEntry") proto.RegisterType((*ClaimMapping)(nil), "types.ClaimMapping") proto.RegisterType((*TraitMapping)(nil), "types.TraitMapping") proto.RegisterType((*Rule)(nil), "types.Rule") @@ -23399,6 +25231,7 @@ func init() { proto.RegisterType((*KubernetesClusterV3List)(nil), "types.KubernetesClusterV3List") proto.RegisterType((*KubernetesServerV3)(nil), "types.KubernetesServerV3") proto.RegisterType((*KubernetesServerSpecV3)(nil), "types.KubernetesServerSpecV3") + proto.RegisterType((*KubernetesServerStatusV3)(nil), "types.KubernetesServerStatusV3") proto.RegisterType((*WebTokenV3)(nil), "types.WebTokenV3") proto.RegisterType((*WebTokenSpecV3)(nil), "types.WebTokenSpecV3") proto.RegisterType((*GetWebSessionRequest)(nil), "types.GetWebSessionRequest") @@ -23413,6 +25246,7 @@ func init() { proto.RegisterType((*OIDCConnectorV3)(nil), "types.OIDCConnectorV3") proto.RegisterType((*OIDCConnectorV3List)(nil), "types.OIDCConnectorV3List") proto.RegisterType((*OIDCConnectorSpecV3)(nil), "types.OIDCConnectorSpecV3") + proto.RegisterType((*EntraIDGroupsProvider)(nil), "types.EntraIDGroupsProvider") 
proto.RegisterType((*MaxAge)(nil), "types.MaxAge") proto.RegisterType((*SSOClientRedirectSettings)(nil), "types.SSOClientRedirectSettings") proto.RegisterType((*OIDCConnectorMFASettings)(nil), "types.OIDCConnectorMFASettings") @@ -23441,6 +25275,7 @@ func init() { proto.RegisterType((*LockV2)(nil), "types.LockV2") proto.RegisterType((*LockSpecV2)(nil), "types.LockSpecV2") proto.RegisterType((*LockTarget)(nil), "types.LockTarget") + proto.RegisterType((*LockFilter)(nil), "types.LockFilter") proto.RegisterType((*AddressCondition)(nil), "types.AddressCondition") proto.RegisterType((*NetworkRestrictionsSpecV4)(nil), "types.NetworkRestrictionsSpecV4") proto.RegisterType((*NetworkRestrictionsV4)(nil), "types.NetworkRestrictionsV4") @@ -23494,6 +25329,7 @@ func init() { proto.RegisterType((*PluginOpenAISettings)(nil), "types.PluginOpenAISettings") proto.RegisterType((*PluginMattermostSettings)(nil), "types.PluginMattermostSettings") proto.RegisterType((*PluginJamfSettings)(nil), "types.PluginJamfSettings") + proto.RegisterType((*PluginIntuneSettings)(nil), "types.PluginIntuneSettings") proto.RegisterType((*PluginOktaSettings)(nil), "types.PluginOktaSettings") proto.RegisterType((*PluginOktaCredentialsInfo)(nil), "types.PluginOktaCredentialsInfo") proto.RegisterType((*PluginOktaSyncSettings)(nil), "types.PluginOktaSyncSettings") @@ -23502,9 +25338,11 @@ func init() { proto.RegisterMapType((map[string]*DiscordChannels)(nil), "types.PluginDiscordSettings.RoleToRecipientsEntry") proto.RegisterType((*PluginEntraIDSettings)(nil), "types.PluginEntraIDSettings") proto.RegisterType((*PluginEntraIDSyncSettings)(nil), "types.PluginEntraIDSyncSettings") + proto.RegisterType((*PluginSyncFilter)(nil), "types.PluginSyncFilter") proto.RegisterType((*PluginEntraIDAccessGraphSettings)(nil), "types.PluginEntraIDAccessGraphSettings") proto.RegisterType((*PluginEntraIDAppSSOSettings)(nil), "types.PluginEntraIDAppSSOSettings") proto.RegisterType((*PluginSCIMSettings)(nil), 
"types.PluginSCIMSettings") + proto.RegisterType((*PluginSCIMSettings_ConnectorInfo)(nil), "types.PluginSCIMSettings.ConnectorInfo") proto.RegisterType((*PluginDatadogAccessSettings)(nil), "types.PluginDatadogAccessSettings") proto.RegisterType((*PluginAWSICSettings)(nil), "types.PluginAWSICSettings") proto.RegisterType((*AWSICCredentials)(nil), "types.AWSICCredentials") @@ -23534,6 +25372,7 @@ func init() { proto.RegisterType((*PluginOktaStatusDetailsUsersSync)(nil), "types.PluginOktaStatusDetailsUsersSync") proto.RegisterType((*PluginOktaStatusDetailsSCIM)(nil), "types.PluginOktaStatusDetailsSCIM") proto.RegisterType((*PluginOktaStatusDetailsAccessListsSync)(nil), "types.PluginOktaStatusDetailsAccessListsSync") + proto.RegisterType((*PluginOktaStatusSystemLogExporter)(nil), "types.PluginOktaStatusSystemLogExporter") proto.RegisterType((*PluginCredentialsV1)(nil), "types.PluginCredentialsV1") proto.RegisterType((*PluginOAuth2AccessTokenCredentials)(nil), "types.PluginOAuth2AccessTokenCredentials") proto.RegisterType((*PluginBearerTokenCredentials)(nil), "types.PluginBearerTokenCredentials") @@ -23569,11 +25408,14 @@ func init() { proto.RegisterType((*OktaAssignmentTargetV1)(nil), "types.OktaAssignmentTargetV1") proto.RegisterType((*IntegrationV1)(nil), "types.IntegrationV1") proto.RegisterType((*IntegrationSpecV1)(nil), "types.IntegrationSpecV1") + proto.RegisterType((*IntegrationStatusV1)(nil), "types.IntegrationStatusV1") proto.RegisterType((*AWSOIDCIntegrationSpecV1)(nil), "types.AWSOIDCIntegrationSpecV1") proto.RegisterType((*AzureOIDCIntegrationSpecV1)(nil), "types.AzureOIDCIntegrationSpecV1") proto.RegisterType((*GitHubIntegrationSpecV1)(nil), "types.GitHubIntegrationSpecV1") proto.RegisterType((*AWSRAIntegrationSpecV1)(nil), "types.AWSRAIntegrationSpecV1") proto.RegisterType((*AWSRolesAnywhereProfileSyncConfig)(nil), "types.AWSRolesAnywhereProfileSyncConfig") + proto.RegisterType((*AWSRAIntegrationStatusV1)(nil), "types.AWSRAIntegrationStatusV1") + 
proto.RegisterType((*AWSRolesAnywhereProfileSyncIterationSummary)(nil), "types.AWSRolesAnywhereProfileSyncIterationSummary") proto.RegisterType((*HeadlessAuthentication)(nil), "types.HeadlessAuthentication") proto.RegisterType((*WatchKind)(nil), "types.WatchKind") proto.RegisterMapType((map[string]string)(nil), "types.WatchKind.FilterEntry") @@ -23586,8 +25428,11 @@ func init() { proto.RegisterType((*JamfInventoryEntry)(nil), "types.JamfInventoryEntry") proto.RegisterType((*MessageWithHeader)(nil), "types.MessageWithHeader") proto.RegisterType((*AWSMatcher)(nil), "types.AWSMatcher") + proto.RegisterType((*AWSOrganizationMatcher)(nil), "types.AWSOrganizationMatcher") + proto.RegisterType((*AWSOrganizationUnitsMatcher)(nil), "types.AWSOrganizationUnitsMatcher") proto.RegisterType((*AssumeRole)(nil), "types.AssumeRole") proto.RegisterType((*InstallerParams)(nil), "types.InstallerParams") + proto.RegisterType((*HTTPProxySettings)(nil), "types.HTTPProxySettings") proto.RegisterType((*AWSSSM)(nil), "types.AWSSSM") proto.RegisterType((*AzureInstallerParams)(nil), "types.AzureInstallerParams") proto.RegisterType((*AzureMatcher)(nil), "types.AzureMatcher") @@ -23596,6 +25441,7 @@ func init() { proto.RegisterType((*OktaOptions)(nil), "types.OktaOptions") proto.RegisterType((*AccessGraphSync)(nil), "types.AccessGraphSync") proto.RegisterType((*AccessGraphAWSSyncCloudTrailLogs)(nil), "types.AccessGraphAWSSyncCloudTrailLogs") + proto.RegisterType((*AccessGraphAWSSyncEKSAuditLogs)(nil), "types.AccessGraphAWSSyncEKSAuditLogs") proto.RegisterType((*AccessGraphAWSSync)(nil), "types.AccessGraphAWSSync") proto.RegisterType((*AccessGraphAzureSync)(nil), "types.AccessGraphAzureSync") proto.RegisterType((*TargetHealth)(nil), "types.TargetHealth") @@ -23604,2067 +25450,2261 @@ func init() { func init() { proto.RegisterFile("teleport/legacy/types/types.proto", fileDescriptor_9198ee693835762e) } var fileDescriptor_9198ee693835762e = []byte{ - // 32922 bytes of a gzipped FileDescriptorProto 
- 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0xfd, 0x7d, 0x70, 0x1c, 0x49, - 0x76, 0x18, 0x88, 0x4f, 0x77, 0xe3, 0xa3, 0xf1, 0xd0, 0x00, 0x1a, 0x09, 0x90, 0x04, 0x31, 0xc3, - 0x69, 0x4e, 0xcd, 0x0c, 0x87, 0x9c, 0x19, 0x92, 0x4b, 0x70, 0x87, 0xbb, 0xb3, 0xf3, 0xb5, 0x0d, - 0x34, 0x48, 0x34, 0x09, 0x82, 0x3d, 0xd5, 0x00, 0xb9, 0xa3, 0xdd, 0xd9, 0xda, 0x42, 0x77, 0x02, - 0xa8, 0x41, 0x77, 0x57, 0x6f, 0x55, 0x35, 0x49, 0x68, 0xad, 0x9f, 0x25, 0x4b, 0xfb, 0x93, 0x15, - 0x3a, 0x49, 0x96, 0x2c, 0x59, 0xeb, 0x0b, 0x5b, 0xe1, 0x90, 0xed, 0x3b, 0x9f, 0x1d, 0x52, 0xd8, - 0x92, 0x7d, 0xe7, 0x0b, 0x85, 0xe4, 0xd5, 0x9d, 0x2d, 0xeb, 0x1c, 0x17, 0x27, 0x85, 0xcf, 0xf7, - 0xb5, 0xa1, 0x80, 0xc2, 0xa7, 0x8b, 0xfb, 0x03, 0x11, 0x17, 0x27, 0xdd, 0x45, 0x5c, 0xc4, 0xad, - 0x42, 0xf6, 0x45, 0xbe, 0xcc, 0xac, 0xca, 0xac, 0xaa, 0x6e, 0x34, 0x66, 0x38, 0xb2, 0xb8, 0xa1, - 0x7f, 0x48, 0xf4, 0xcb, 0xf7, 0x5e, 0x56, 0x7e, 0xbf, 0x7c, 0xf9, 0x3e, 0xe0, 0x85, 0x80, 0xb6, - 0x68, 0xd7, 0xf5, 0x82, 0xab, 0x2d, 0xba, 0x6b, 0x37, 0x0e, 0xae, 0x06, 0x07, 0x5d, 0xea, 0xf3, - 0x7f, 0xaf, 0x74, 0x3d, 0x37, 0x70, 0xc9, 0x28, 0xfe, 0x58, 0x9c, 0xdf, 0x75, 0x77, 0x5d, 0x84, - 0x5c, 0x65, 0x7f, 0xf1, 0xc2, 0xc5, 0xe7, 0x77, 0x5d, 0x77, 0xb7, 0x45, 0xaf, 0xe2, 0xaf, 0xed, - 0xde, 0xce, 0xd5, 0x66, 0xcf, 0xb3, 0x03, 0xc7, 0xed, 0x88, 0xf2, 0x52, 0xbc, 0x3c, 0x70, 0xda, - 0xd4, 0x0f, 0xec, 0x76, 0xb7, 0x1f, 0x83, 0x47, 0x9e, 0xdd, 0xed, 0x52, 0x4f, 0xd4, 0xbe, 0x78, - 0x29, 0xfc, 0x40, 0x3b, 0x08, 0x18, 0x25, 0x63, 0x7e, 0xf5, 0xe1, 0x35, 0xf5, 0xa7, 0x40, 0xbd, - 0xd1, 0xa7, 0x2d, 0x5e, 0xcf, 0x0f, 0x68, 0xd3, 0x6a, 0xd2, 0x87, 0x4e, 0x83, 0x5a, 0x1e, 0xfd, - 0x7a, 0xcf, 0xf1, 0x68, 0x9b, 0x76, 0x02, 0x41, 0x77, 0x39, 0x9d, 0x4e, 0x7e, 0x48, 0xec, 0x8b, - 0x8c, 0x5f, 0xc8, 0xc1, 0xc4, 0x1d, 0x4a, 0xbb, 0xe5, 0x96, 0xf3, 0x90, 0x92, 0x17, 0x61, 0x64, - 0xc3, 0x6e, 0xd3, 0x85, 0xcc, 0xf9, 0xcc, 0xc5, 0x89, 0xe5, 0x99, 0xa3, 0xc3, 0xd2, 0xa4, 0x4f, - 0xbd, 0x87, 0xd4, 0xb3, 0x3a, 0x76, 
0x9b, 0x9a, 0x58, 0x48, 0x5e, 0x83, 0x09, 0xf6, 0xbf, 0xdf, - 0xb5, 0x1b, 0x74, 0x21, 0x8b, 0x98, 0x53, 0x47, 0x87, 0xa5, 0x89, 0x8e, 0x04, 0x9a, 0x51, 0x39, - 0xa9, 0xc2, 0xf8, 0xea, 0xe3, 0xae, 0xe3, 0x51, 0x7f, 0x61, 0xe4, 0x7c, 0xe6, 0xe2, 0xe4, 0xd2, - 0xe2, 0x15, 0xde, 0x47, 0x57, 0x64, 0x1f, 0x5d, 0xd9, 0x94, 0x9d, 0xb8, 0x3c, 0xf7, 0xdb, 0x87, - 0xa5, 0x67, 0x8e, 0x0e, 0x4b, 0xe3, 0x94, 0x93, 0xfc, 0x95, 0xdf, 0x2f, 0x65, 0x4c, 0x49, 0x4f, - 0xde, 0x86, 0x91, 0xcd, 0x83, 0x2e, 0x5d, 0x98, 0x38, 0x9f, 0xb9, 0x38, 0xbd, 0xf4, 0xfc, 0x15, - 0x3e, 0xac, 0xe1, 0xc7, 0x47, 0x7f, 0x31, 0xac, 0xe5, 0xfc, 0xd1, 0x61, 0x69, 0x84, 0xa1, 0x98, - 0x48, 0x45, 0x2e, 0xc3, 0xd8, 0x9a, 0xeb, 0x07, 0xd5, 0xca, 0x02, 0xe0, 0x27, 0x9f, 0x3a, 0x3a, - 0x2c, 0xcd, 0xee, 0xb9, 0x7e, 0x60, 0x39, 0xcd, 0xd7, 0xdd, 0xb6, 0x13, 0xd0, 0x76, 0x37, 0x38, - 0x30, 0x05, 0x92, 0xf1, 0x18, 0xa6, 0x34, 0x7e, 0x64, 0x12, 0xc6, 0xb7, 0x36, 0xee, 0x6c, 0xdc, - 0x7b, 0xb0, 0x51, 0x7c, 0x86, 0xe4, 0x61, 0x64, 0xe3, 0x5e, 0x65, 0xb5, 0x98, 0x21, 0xe3, 0x90, - 0x2b, 0xd7, 0x6a, 0xc5, 0x2c, 0x29, 0x40, 0xbe, 0x52, 0xde, 0x2c, 0x2f, 0x97, 0xeb, 0xab, 0xc5, - 0x1c, 0x99, 0x83, 0x99, 0x07, 0xd5, 0x8d, 0xca, 0xbd, 0x07, 0x75, 0xab, 0xb2, 0x5a, 0xbf, 0xb3, - 0x79, 0xaf, 0x56, 0x1c, 0x21, 0xd3, 0x00, 0x77, 0xb6, 0x96, 0x57, 0xcd, 0x8d, 0xd5, 0xcd, 0xd5, - 0x7a, 0x71, 0x94, 0xcc, 0x43, 0x51, 0x92, 0x58, 0xf5, 0x55, 0xf3, 0x7e, 0x75, 0x65, 0xb5, 0x38, - 0x76, 0x7b, 0x24, 0x9f, 0x2b, 0x8e, 0x98, 0xe3, 0xeb, 0xd4, 0xf6, 0x69, 0xb5, 0x62, 0xfc, 0xed, - 0x1c, 0xe4, 0xef, 0xd2, 0xc0, 0x6e, 0xda, 0x81, 0x4d, 0x9e, 0xd3, 0xc6, 0x07, 0x9b, 0xa8, 0x0c, - 0xcc, 0x8b, 0xc9, 0x81, 0x19, 0x3d, 0x3a, 0x2c, 0x65, 0x2e, 0xab, 0x03, 0xf2, 0x16, 0x4c, 0x56, - 0xa8, 0xdf, 0xf0, 0x9c, 0x2e, 0x9b, 0x6c, 0x0b, 0x39, 0x44, 0x3b, 0x7b, 0x74, 0x58, 0x3a, 0xd5, - 0x8c, 0xc0, 0x4a, 0x87, 0xa8, 0xd8, 0xa4, 0x0a, 0x63, 0xeb, 0xf6, 0x36, 0x6d, 0xf9, 0x0b, 0xa3, - 0xe7, 0x73, 0x17, 0x27, 0x97, 0x9e, 0x15, 0x83, 0x20, 0x3f, 0xf0, 0x0a, 
0x2f, 0x5d, 0xed, 0x04, - 0xde, 0xc1, 0xf2, 0xfc, 0xd1, 0x61, 0xa9, 0xd8, 0x42, 0x80, 0xda, 0xc1, 0x1c, 0x85, 0xd4, 0xa3, - 0x89, 0x31, 0x76, 0xec, 0xc4, 0x38, 0xf7, 0xdb, 0x87, 0xa5, 0x0c, 0x1b, 0x30, 0x31, 0x31, 0x22, - 0x7e, 0xfa, 0x14, 0x59, 0x82, 0xbc, 0x49, 0x1f, 0x3a, 0x3e, 0x6b, 0x59, 0x1e, 0x5b, 0x76, 0xfa, - 0xe8, 0xb0, 0x44, 0x3c, 0x01, 0x53, 0x3e, 0x23, 0xc4, 0x5b, 0x7c, 0x13, 0x26, 0x95, 0xaf, 0x26, - 0x45, 0xc8, 0xed, 0xd3, 0x03, 0xde, 0xc3, 0x26, 0xfb, 0x93, 0xcc, 0xc3, 0xe8, 0x43, 0xbb, 0xd5, - 0x13, 0x5d, 0x6a, 0xf2, 0x1f, 0x5f, 0xc8, 0x7e, 0x3e, 0x73, 0x7b, 0x24, 0x3f, 0x5e, 0xcc, 0x9b, - 0xd9, 0x6a, 0xc5, 0xf8, 0x99, 0x11, 0xc8, 0x9b, 0x2e, 0x5f, 0xc0, 0xe4, 0x12, 0x8c, 0xd6, 0x03, - 0x3b, 0x90, 0xc3, 0x34, 0x77, 0x74, 0x58, 0x9a, 0x61, 0x8b, 0x9b, 0x2a, 0xf5, 0x73, 0x0c, 0x86, - 0x5a, 0xdb, 0xb3, 0x7d, 0x39, 0x5c, 0x88, 0xda, 0x65, 0x00, 0x15, 0x15, 0x31, 0xc8, 0x05, 0x18, - 0xb9, 0xeb, 0x36, 0xa9, 0x18, 0x31, 0x72, 0x74, 0x58, 0x9a, 0x6e, 0xbb, 0x4d, 0x15, 0x11, 0xcb, - 0xc9, 0xeb, 0x30, 0xb1, 0xd2, 0xf3, 0x3c, 0xda, 0x61, 0x73, 0x7d, 0x04, 0x91, 0xa7, 0x8f, 0x0e, - 0x4b, 0xd0, 0xe0, 0x40, 0xcb, 0x69, 0x9a, 0x11, 0x02, 0x1b, 0x86, 0x7a, 0x60, 0x7b, 0x01, 0x6d, - 0x2e, 0x8c, 0x0e, 0x35, 0x0c, 0x6c, 0x7d, 0xce, 0xfa, 0x9c, 0x24, 0x3e, 0x0c, 0x82, 0x13, 0x59, - 0x83, 0xc9, 0x5b, 0x9e, 0xdd, 0xa0, 0x35, 0xea, 0x39, 0x6e, 0x13, 0xc7, 0x37, 0xb7, 0x7c, 0xe1, - 0xe8, 0xb0, 0x74, 0x7a, 0x97, 0x81, 0xad, 0x2e, 0xc2, 0x23, 0xea, 0xef, 0x1e, 0x96, 0xf2, 0x15, - 0xb1, 0xd5, 0x9a, 0x2a, 0x29, 0xf9, 0x1a, 0x1b, 0x1c, 0x3f, 0xc0, 0xae, 0xa5, 0xcd, 0x85, 0xf1, - 0x63, 0x3f, 0xd1, 0x10, 0x9f, 0x78, 0xba, 0x65, 0xfb, 0x81, 0xe5, 0x71, 0xba, 0xd8, 0x77, 0xaa, - 0x2c, 0xc9, 0x3d, 0xc8, 0xd7, 0x1b, 0x7b, 0xb4, 0xd9, 0x6b, 0x51, 0x9c, 0x32, 0x93, 0x4b, 0x67, - 0xc4, 0xa4, 0x96, 0xe3, 0x29, 0x8b, 0x97, 0x17, 0x05, 0x6f, 0xe2, 0x0b, 0x88, 0x3a, 0x9f, 0x24, - 0xd6, 0x17, 0xf2, 0xdf, 0xfa, 0x5b, 0xa5, 0x67, 0x7e, 0xf0, 0xf7, 0xce, 0x3f, 0x63, 0xfc, 0x17, - 0x59, 0x28, 
0xc6, 0x99, 0x90, 0x1d, 0x98, 0xda, 0xea, 0x36, 0xed, 0x80, 0xae, 0xb4, 0x1c, 0xda, - 0x09, 0x7c, 0x9c, 0x24, 0x83, 0xdb, 0xf4, 0x92, 0xa8, 0x77, 0xa1, 0x87, 0x84, 0x56, 0x83, 0x53, - 0xc6, 0x5a, 0xa5, 0xb3, 0x8d, 0xea, 0xa9, 0xe3, 0x06, 0xee, 0xe3, 0x0c, 0x3b, 0x59, 0x3d, 0x7c, - 0xeb, 0xef, 0x53, 0x8f, 0x60, 0x2b, 0x26, 0x50, 0xa7, 0xb9, 0x7d, 0x80, 0x33, 0x73, 0xf8, 0x09, - 0xc4, 0x48, 0x52, 0x26, 0x10, 0x03, 0x1b, 0xff, 0x5b, 0x06, 0xa6, 0x4d, 0xea, 0xbb, 0x3d, 0xaf, - 0x41, 0xd7, 0xa8, 0xdd, 0xa4, 0x1e, 0x9b, 0xfe, 0x77, 0x9c, 0x4e, 0x53, 0xac, 0x29, 0x9c, 0xfe, - 0xfb, 0x4e, 0x47, 0xdd, 0xba, 0xb1, 0x9c, 0x7c, 0x06, 0xc6, 0xeb, 0xbd, 0x6d, 0x44, 0xcd, 0x46, - 0x3b, 0x80, 0xdf, 0xdb, 0xb6, 0x62, 0xe8, 0x12, 0x8d, 0x5c, 0x85, 0xf1, 0xfb, 0xd4, 0xf3, 0xa3, - 0xdd, 0x10, 0x8f, 0x86, 0x87, 0x1c, 0xa4, 0x12, 0x08, 0x2c, 0x72, 0x2b, 0xda, 0x91, 0xc5, 0xa1, - 0x36, 0x13, 0xdb, 0x07, 0xa3, 0xa9, 0xd2, 0x16, 0x10, 0x75, 0xaa, 0x48, 0x2c, 0xe3, 0x7f, 0xce, - 0x42, 0xb1, 0x62, 0x07, 0xf6, 0xb6, 0xed, 0x8b, 0xfe, 0xbc, 0x7f, 0x9d, 0xed, 0xf1, 0x4a, 0x43, - 0x71, 0x8f, 0x67, 0x5f, 0xfe, 0xb1, 0x9b, 0xf7, 0x72, 0xbc, 0x79, 0x93, 0xec, 0x84, 0x15, 0xcd, - 0x8b, 0x1a, 0xf5, 0xce, 0xf1, 0x8d, 0x2a, 0x8a, 0x46, 0xe5, 0x65, 0xa3, 0xa2, 0xa6, 0x90, 0x77, - 0x60, 0xa4, 0xde, 0xa5, 0x0d, 0xb1, 0x89, 0xc8, 0x73, 0x41, 0x6f, 0x1c, 0x43, 0xb8, 0x7f, 0x7d, - 0xb9, 0x20, 0xd8, 0x8c, 0xf8, 0x5d, 0xda, 0x30, 0x91, 0x8c, 0xac, 0xc2, 0x18, 0xdb, 0x10, 0x7b, - 0xf2, 0x30, 0x38, 0x97, 0xce, 0x00, 0x51, 0xee, 0x5f, 0x5f, 0x9e, 0x16, 0x2c, 0xc6, 0x7c, 0x84, - 0x98, 0x82, 0x58, 0x59, 0x7b, 0xff, 0x34, 0x07, 0xf3, 0x69, 0xb5, 0xab, 0xdd, 0x31, 0x36, 0xa0, - 0x3b, 0x2e, 0x42, 0x9e, 0x49, 0x02, 0xec, 0x74, 0xc5, 0x5d, 0x67, 0x62, 0xb9, 0xc0, 0x5a, 0xbe, - 0x27, 0x60, 0x66, 0x58, 0x4a, 0x5e, 0x0c, 0x05, 0x8b, 0x7c, 0xc4, 0x4f, 0x08, 0x16, 0x52, 0x9c, - 0x60, 0x53, 0x46, 0xee, 0x04, 0x28, 0x7f, 0x44, 0xbd, 0x2b, 0xc1, 0xd1, 0x94, 0xf1, 0x04, 0x44, - 0x3b, 0xad, 0xe4, 0xd9, 0xb2, 0x0a, 0x79, 0xd9, 
0xac, 0x85, 0x02, 0x32, 0x9a, 0x8d, 0x75, 0xd5, - 0xfd, 0xeb, 0x7c, 0x4e, 0x34, 0xc5, 0x6f, 0x95, 0x8d, 0xc4, 0x21, 0xd7, 0x21, 0x5f, 0xf3, 0xdc, - 0xc7, 0x07, 0xd5, 0x8a, 0xbf, 0x30, 0x75, 0x3e, 0x77, 0x71, 0x62, 0xf9, 0xcc, 0xd1, 0x61, 0x69, - 0xae, 0xcb, 0x60, 0x96, 0xd3, 0x54, 0x0f, 0xec, 0x10, 0xf1, 0xf6, 0x48, 0x3e, 0x53, 0xcc, 0xde, - 0x1e, 0xc9, 0x67, 0x8b, 0x39, 0x2e, 0xa5, 0xdc, 0x1e, 0xc9, 0x8f, 0x14, 0x47, 0x6f, 0x8f, 0xe4, - 0x47, 0x51, 0x6e, 0x99, 0x28, 0xc2, 0xed, 0x91, 0xfc, 0x64, 0xb1, 0xa0, 0x09, 0x0d, 0xc8, 0x20, - 0x70, 0x1b, 0x6e, 0xcb, 0xcc, 0x6d, 0x99, 0x55, 0x73, 0x6c, 0xa5, 0xbc, 0x42, 0xbd, 0xc0, 0xcc, - 0x95, 0x1f, 0xd4, 0xcd, 0xa9, 0xca, 0x41, 0xc7, 0x6e, 0x3b, 0x0d, 0x7e, 0x02, 0x9b, 0xb9, 0x5b, - 0x2b, 0x35, 0xa3, 0x03, 0xa7, 0xd3, 0x87, 0x9d, 0x6c, 0x42, 0x61, 0xd3, 0xf6, 0x76, 0x69, 0xb0, - 0x46, 0xed, 0x56, 0xb0, 0xb7, 0x30, 0x8d, 0x1d, 0x30, 0x27, 0x3a, 0x40, 0x2d, 0x5a, 0x7e, 0xf6, - 0xe8, 0xb0, 0x74, 0x26, 0x40, 0x88, 0xb5, 0x87, 0x20, 0xa5, 0x49, 0x1a, 0x17, 0xa3, 0x0c, 0xd3, - 0x51, 0xdf, 0xad, 0x3b, 0x7e, 0x40, 0xae, 0xc2, 0x84, 0x84, 0xb0, 0xfd, 0x39, 0x97, 0xda, 0xcb, - 0x66, 0x84, 0x63, 0xfc, 0x56, 0x16, 0x20, 0x2a, 0x79, 0x4a, 0x97, 0xf0, 0xe7, 0xb4, 0x25, 0x7c, - 0x2a, 0xbe, 0x02, 0xfb, 0x2f, 0xde, 0xf7, 0x62, 0x8b, 0xf7, 0x4c, 0x9c, 0x74, 0xf8, 0x65, 0xfb, - 0x0b, 0xe3, 0xd1, 0x60, 0x88, 0x05, 0x7b, 0x11, 0xc2, 0x09, 0x24, 0x3a, 0x14, 0x57, 0x62, 0x57, - 0x4e, 0xaa, 0xb0, 0x94, 0x9c, 0x05, 0x36, 0xc1, 0x44, 0xa7, 0x8e, 0x1f, 0x1d, 0x96, 0x72, 0x3d, - 0xcf, 0xc1, 0x49, 0x47, 0xae, 0x82, 0x98, 0x76, 0xa2, 0x03, 0xd9, 0x6c, 0x9f, 0x6d, 0xd8, 0x56, - 0x83, 0x7a, 0x41, 0xd4, 0xe3, 0x0b, 0x19, 0x39, 0x3b, 0x49, 0x17, 0xf4, 0xa9, 0xb9, 0x30, 0x82, - 0xd3, 0xe0, 0x62, 0x6a, 0xaf, 0x5c, 0xd1, 0x50, 0xb9, 0xf4, 0x7b, 0x5e, 0x1e, 0xa6, 0x4d, 0x5e, - 0x66, 0x25, 0x24, 0x61, 0xbd, 0x02, 0x72, 0x1d, 0xd8, 0x8a, 0x10, 0xbd, 0x0f, 0xa2, 0x9e, 0xf2, - 0x83, 0xfa, 0xf2, 0x29, 0xc1, 0x69, 0xca, 0x7e, 0xa4, 0x92, 0x33, 0x6c, 0xf2, 0x16, 
0xb0, 0x25, - 0x23, 0xfa, 0x9d, 0x08, 0xa2, 0x5b, 0x2b, 0xb5, 0x95, 0x96, 0xdb, 0x6b, 0xd6, 0xdf, 0x5f, 0x8f, - 0x88, 0x77, 0x1b, 0x5d, 0x95, 0xf8, 0xd6, 0x4a, 0x8d, 0xbc, 0x05, 0xa3, 0xe5, 0xef, 0xef, 0x79, - 0x54, 0x88, 0x55, 0x05, 0x59, 0x27, 0x83, 0x2d, 0x9f, 0x11, 0x84, 0x33, 0x36, 0xfb, 0xa9, 0x8a, - 0xa3, 0x58, 0xce, 0x6a, 0xde, 0x5c, 0xaf, 0x0b, 0x91, 0x89, 0xc4, 0xba, 0x65, 0x73, 0x5d, 0xf9, - 0xec, 0x40, 0x6b, 0x35, 0xa3, 0x22, 0x57, 0x21, 0x5b, 0xae, 0xe0, 0x45, 0x6e, 0x72, 0x69, 0x42, - 0x56, 0x5b, 0x59, 0x9e, 0x17, 0x24, 0x05, 0x5b, 0x5d, 0x06, 0xd9, 0x72, 0x85, 0x2c, 0xc3, 0xe8, - 0xdd, 0x83, 0xfa, 0xfb, 0xeb, 0x62, 0xf3, 0x94, 0x4b, 0x1e, 0x61, 0xf7, 0x70, 0x9b, 0xf1, 0xa3, - 0x2f, 0x6e, 0x1f, 0xf8, 0x5f, 0x6f, 0xa9, 0x5f, 0x8c, 0x68, 0xa4, 0x06, 0x13, 0xe5, 0x66, 0xdb, - 0xe9, 0x6c, 0xf9, 0xd4, 0x5b, 0x98, 0x44, 0x3e, 0x0b, 0xb1, 0xef, 0x0e, 0xcb, 0x97, 0x17, 0x8e, - 0x0e, 0x4b, 0xf3, 0x36, 0xfb, 0x69, 0xf5, 0x7c, 0xea, 0x29, 0xdc, 0x22, 0x26, 0xa4, 0x06, 0x70, - 0xd7, 0xed, 0xec, 0xba, 0xe5, 0xa0, 0x65, 0xfb, 0xb1, 0xed, 0x38, 0x2a, 0x08, 0xa5, 0x9e, 0x53, - 0x6d, 0x06, 0xb3, 0x6c, 0x06, 0x54, 0x18, 0x2a, 0x3c, 0xc8, 0x4d, 0x18, 0xbb, 0xe7, 0xd9, 0x8d, - 0x16, 0x5d, 0x98, 0x42, 0x6e, 0xf3, 0x82, 0x1b, 0x07, 0xca, 0x96, 0x2e, 0x08, 0x86, 0x45, 0x17, - 0xc1, 0xea, 0xed, 0x8a, 0x23, 0x2e, 0x3e, 0x00, 0x92, 0x9c, 0x93, 0x29, 0x77, 0x9b, 0xd7, 0xd4, - 0xbb, 0x4d, 0xb4, 0xe8, 0x57, 0xdc, 0x76, 0xdb, 0xee, 0x34, 0x91, 0xf6, 0xfe, 0x92, 0x72, 0xe5, - 0x31, 0xbe, 0x0e, 0xb3, 0x89, 0xce, 0x3a, 0xe6, 0x5a, 0xfa, 0x2e, 0xcc, 0x54, 0xe8, 0x8e, 0xdd, - 0x6b, 0x05, 0xe1, 0xc9, 0xc5, 0x97, 0x28, 0x5e, 0x10, 0x9b, 0xbc, 0xc8, 0x92, 0xc7, 0x95, 0x19, - 0x47, 0x36, 0xde, 0x81, 0x29, 0xad, 0xf9, 0xec, 0x86, 0x53, 0xee, 0x35, 0x9d, 0x00, 0x07, 0x32, - 0x13, 0xdd, 0x70, 0x6c, 0x06, 0xc4, 0xe1, 0x32, 0x23, 0x04, 0xe3, 0xef, 0xa8, 0x42, 0x96, 0x3c, - 0x49, 0x2e, 0x87, 0xfb, 0x41, 0x26, 0x12, 0xf9, 0x12, 0xfb, 0x41, 0xb8, 0x1b, 0x5c, 0xe2, 0x6b, - 0x33, 0x9b, 0x58, 0x9b, 
0x93, 0x62, 0x24, 0x72, 0xf6, 0x23, 0x9f, 0xaf, 0xc8, 0x70, 0xa6, 0xe6, - 0x3e, 0xfe, 0x4c, 0x7d, 0x0f, 0x0a, 0x77, 0xed, 0x8e, 0xbd, 0x4b, 0x9b, 0xac, 0x05, 0x7c, 0xef, - 0x99, 0xe0, 0x47, 0x5a, 0x9b, 0xc3, 0xb1, 0x95, 0xea, 0x24, 0xd2, 0x08, 0xc8, 0x35, 0xb9, 0xb2, - 0x47, 0x53, 0x56, 0xf6, 0x94, 0xa8, 0x7d, 0x14, 0x57, 0xb6, 0x58, 0xcf, 0xc6, 0xb7, 0x27, 0xb0, - 0x8d, 0xe4, 0x75, 0x18, 0x33, 0xe9, 0x2e, 0x3b, 0x6a, 0x32, 0xd1, 0x20, 0x79, 0x08, 0x51, 0x3b, - 0x86, 0xe3, 0xa0, 0x5c, 0x43, 0x9b, 0xfe, 0x9e, 0xb3, 0x13, 0x88, 0xde, 0x09, 0xe5, 0x1a, 0x01, - 0x56, 0xe4, 0x1a, 0x01, 0xd1, 0x6f, 0xe1, 0x1c, 0xc6, 0x76, 0x3f, 0xb3, 0x52, 0x17, 0x9d, 0x26, - 0x7b, 0xd8, 0xac, 0x28, 0xdb, 0x88, 0xa7, 0x49, 0x25, 0x0c, 0x9b, 0xdc, 0x80, 0x89, 0x72, 0xa3, - 0xe1, 0xf6, 0x94, 0xab, 0x2e, 0x5f, 0xb7, 0x1c, 0xa8, 0x6b, 0x76, 0x22, 0x54, 0x52, 0x87, 0xc9, - 0x55, 0x76, 0x3f, 0x74, 0x56, 0xec, 0xc6, 0x9e, 0xec, 0x24, 0xb9, 0x87, 0x29, 0x25, 0xd1, 0xca, - 0xa5, 0x08, 0x6c, 0x30, 0xa0, 0xaa, 0x1b, 0x51, 0x70, 0xc9, 0x26, 0x4c, 0xd6, 0x69, 0xc3, 0xa3, - 0x41, 0x3d, 0x70, 0x3d, 0x1a, 0xdb, 0x92, 0x95, 0x92, 0xe5, 0xe7, 0xe5, 0x15, 0xd5, 0x47, 0xa0, - 0xe5, 0x33, 0xa8, 0xca, 0x55, 0x41, 0xe6, 0x77, 0x8d, 0xb6, 0xeb, 0x1d, 0x54, 0x96, 0xc5, 0x36, - 0x1d, 0x9d, 0xe9, 0x1c, 0xac, 0xde, 0x35, 0x18, 0xa4, 0xb9, 0xad, 0xdf, 0x35, 0x38, 0x16, 0x8e, - 0x54, 0xa5, 0x8e, 0xb2, 0x9c, 0xd8, 0xb4, 0x67, 0xa2, 0x5e, 0x46, 0xb0, 0x32, 0x52, 0x4d, 0x1f, - 0x25, 0x41, 0x6d, 0xa4, 0x04, 0x16, 0xe9, 0x02, 0x91, 0xa3, 0xc6, 0xc5, 0xb3, 0x16, 0xf5, 0x7d, - 0xb1, 0x97, 0x9f, 0x8d, 0x0d, 0x7e, 0x84, 0xb0, 0xfc, 0xb2, 0x60, 0x7e, 0x4e, 0x4e, 0x03, 0x71, - 0xbd, 0x64, 0x85, 0x4a, 0x3d, 0x29, 0xbc, 0xc9, 0x9b, 0x00, 0xab, 0x8f, 0x03, 0xea, 0x75, 0xec, - 0x56, 0xa8, 0xbe, 0x43, 0x8d, 0x15, 0x15, 0x50, 0x7d, 0xa0, 0x15, 0x64, 0xb2, 0x02, 0x53, 0x65, - 0xdf, 0xef, 0xb5, 0xa9, 0xe9, 0xb6, 0x68, 0xd9, 0xdc, 0xc0, 0x7d, 0x7f, 0x62, 0xf9, 0xdc, 0xd1, - 0x61, 0xe9, 0xac, 0x8d, 0x05, 0x96, 0xe7, 0xb6, 0xa8, 0x65, 
0x7b, 0xea, 0xec, 0xd6, 0x69, 0xc8, - 0x3d, 0x80, 0x7b, 0x5d, 0xda, 0xa9, 0x53, 0xdb, 0x6b, 0xec, 0xc5, 0xb6, 0xf9, 0xa8, 0x60, 0xf9, - 0x39, 0xd1, 0xc2, 0x79, 0xb7, 0x4b, 0x3b, 0x3e, 0xc2, 0xd4, 0xaf, 0x8a, 0x30, 0xc9, 0x03, 0x98, - 0xa9, 0x96, 0xef, 0xd6, 0xdc, 0x96, 0xd3, 0x38, 0x10, 0x92, 0xd3, 0x34, 0x2a, 0x35, 0x4f, 0x0b, - 0xae, 0xb1, 0x52, 0xbe, 0x3d, 0x39, 0x76, 0xdb, 0xea, 0x22, 0xd4, 0x12, 0xf2, 0x53, 0x9c, 0x0b, - 0xf9, 0x80, 0xcd, 0x41, 0x9f, 0x09, 0x83, 0x9b, 0xf6, 0xae, 0xbf, 0x30, 0xa3, 0x29, 0xe9, 0xca, - 0x0f, 0xea, 0x57, 0x94, 0x52, 0x2e, 0xa6, 0x2c, 0xf2, 0x89, 0x88, 0x50, 0x2b, 0xb0, 0x77, 0x7d, - 0x7d, 0x22, 0x86, 0xd8, 0xe4, 0x36, 0x40, 0xc5, 0x6d, 0xf4, 0xda, 0xb4, 0x13, 0x54, 0x96, 0x17, - 0x8a, 0xfa, 0xd5, 0x23, 0x2c, 0x88, 0xb6, 0xb6, 0xa6, 0xdb, 0xd0, 0x66, 0xa2, 0x42, 0xbd, 0xf8, - 0x2e, 0x14, 0xe3, 0x1f, 0x72, 0x42, 0xbd, 0xdb, 0x54, 0x71, 0x5a, 0x69, 0xfd, 0xea, 0x63, 0xc7, - 0x0f, 0x7c, 0xe3, 0x1b, 0xda, 0x0a, 0x64, 0xbb, 0xc3, 0x1d, 0x7a, 0x50, 0xf3, 0xe8, 0x8e, 0xf3, - 0x58, 0x6c, 0x66, 0xb8, 0x3b, 0xec, 0xd3, 0x03, 0xab, 0x8b, 0x50, 0x75, 0x77, 0x08, 0x51, 0xc9, - 0x67, 0x21, 0x7f, 0xe7, 0x6e, 0xfd, 0x0e, 0x3d, 0xa8, 0x56, 0xc4, 0x41, 0xc5, 0xc9, 0xda, 0xbe, - 0xc5, 0x48, 0xb5, 0xb9, 0x16, 0x62, 0x1a, 0xcb, 0xd1, 0x4e, 0xc8, 0x6a, 0x5e, 0x69, 0xf5, 0xfc, - 0x80, 0x7a, 0xd5, 0x8a, 0x5a, 0x73, 0x83, 0x03, 0x63, 0xfb, 0x52, 0x88, 0x6a, 0xfc, 0xfb, 0x2c, - 0xee, 0x82, 0x6c, 0xc2, 0x57, 0x3b, 0x7e, 0x60, 0x77, 0x1a, 0x34, 0x64, 0x80, 0x13, 0xde, 0x11, - 0xd0, 0xd8, 0x84, 0x8f, 0x90, 0xf5, 0xaa, 0xb3, 0x43, 0x57, 0xcd, 0xaa, 0x94, 0x0a, 0x97, 0x6a, - 0x45, 0xd5, 0x0a, 0x7b, 0x02, 0x1a, 0xab, 0x32, 0x42, 0x26, 0x17, 0x60, 0xbc, 0x5a, 0xbe, 0x5b, - 0xee, 0x05, 0x7b, 0xb8, 0x07, 0xe7, 0xb9, 0x7c, 0xce, 0x66, 0xab, 0xdd, 0x0b, 0xf6, 0x4c, 0x59, - 0x48, 0xae, 0xe2, 0xbd, 0xa7, 0x43, 0x03, 0xae, 0x3d, 0x16, 0x87, 0xae, 0xcf, 0x41, 0xb1, 0x6b, - 0x0f, 0x03, 0x91, 0x57, 0x61, 0xf4, 0x7e, 0x6d, 0xa5, 0x5a, 0x11, 0x17, 0x75, 0x3c, 0x89, 0x1e, - 
0x76, 0x1b, 0xfa, 0x97, 0x70, 0x14, 0xb2, 0x0a, 0xd3, 0x75, 0xda, 0xe8, 0x79, 0x4e, 0x70, 0x70, - 0xcb, 0x73, 0x7b, 0x5d, 0x7f, 0x61, 0x1c, 0xeb, 0xc0, 0x95, 0xee, 0x8b, 0x12, 0x6b, 0x17, 0x8b, - 0x14, 0xea, 0x18, 0x91, 0xf1, 0x9b, 0x99, 0x68, 0x9b, 0x24, 0x17, 0x34, 0xb1, 0x06, 0x55, 0x4e, - 0x4c, 0xac, 0x51, 0x55, 0x4e, 0x28, 0xe0, 0x98, 0x40, 0x56, 0x7a, 0x7e, 0xe0, 0xb6, 0x57, 0x3b, - 0xcd, 0xae, 0xeb, 0x74, 0x02, 0xa4, 0xe2, 0x9d, 0x6f, 0x1c, 0x1d, 0x96, 0x9e, 0x6f, 0x60, 0xa9, - 0x45, 0x45, 0xb1, 0x15, 0xe3, 0x92, 0x42, 0xfd, 0x09, 0xc6, 0xc3, 0xf8, 0x57, 0x59, 0xed, 0x78, - 0x63, 0x9f, 0x67, 0xd2, 0x6e, 0xcb, 0x69, 0xa0, 0x06, 0x01, 0x1b, 0x1a, 0xce, 0x2a, 0xfc, 0x3c, - 0x2f, 0x2a, 0xe5, 0x3d, 0xa4, 0xf3, 0x4e, 0xa1, 0x26, 0x5f, 0x84, 0x02, 0x93, 0x34, 0xc4, 0x4f, - 0x7f, 0x21, 0x8b, 0x9d, 0xfd, 0x1c, 0x2a, 0x0f, 0x7d, 0xea, 0x85, 0x6c, 0x34, 0x11, 0x45, 0xa5, - 0x20, 0x4d, 0x58, 0xd8, 0xf4, 0xec, 0x8e, 0xef, 0x04, 0xab, 0x9d, 0x86, 0x77, 0x80, 0x92, 0xd1, - 0x6a, 0xc7, 0xde, 0x6e, 0xd1, 0x26, 0x36, 0x37, 0xbf, 0x7c, 0xf1, 0xe8, 0xb0, 0xf4, 0x52, 0xc0, - 0x71, 0x2c, 0x1a, 0x22, 0x59, 0x94, 0x63, 0x29, 0x9c, 0xfb, 0x72, 0x62, 0x92, 0x94, 0xec, 0x56, - 0x7c, 0x3b, 0xe2, 0x42, 0x02, 0x4a, 0x52, 0xe1, 0x68, 0xb0, 0x3d, 0x4c, 0xfd, 0x4c, 0x95, 0xc0, - 0xf8, 0x7f, 0x32, 0xd1, 0x01, 0x4c, 0xde, 0x86, 0x49, 0xb1, 0x62, 0x94, 0x79, 0x81, 0x3b, 0xa8, - 0x5c, 0x5e, 0xb1, 0x91, 0x55, 0xd1, 0xd9, 0xbd, 0xbf, 0xbc, 0xb2, 0xae, 0xcc, 0x0d, 0xbc, 0xf7, - 0xdb, 0x8d, 0x56, 0x9c, 0x4a, 0xa2, 0xb1, 0x49, 0xb0, 0xb9, 0x5e, 0xd7, 0x7b, 0x05, 0x27, 0x41, - 0xd0, 0xf2, 0x53, 0xba, 0x41, 0x41, 0xfe, 0xe4, 0x0d, 0xff, 0x9f, 0x32, 0x69, 0xe7, 0x3c, 0x59, - 0x86, 0xa9, 0x07, 0xae, 0xb7, 0x8f, 0xe3, 0xab, 0x74, 0x02, 0x8e, 0xfc, 0x23, 0x59, 0x10, 0x6f, - 0x90, 0x4e, 0xa2, 0x7e, 0x9b, 0xd2, 0x1b, 0xfa, 0xb7, 0xc5, 0x38, 0x68, 0x04, 0x6c, 0x1c, 0x42, - 0x8e, 0xe1, 0xea, 0xc0, 0x71, 0x88, 0x3e, 0x41, 0x9b, 0xc2, 0x2a, 0xba, 0xf1, 0xeb, 0x19, 0xf5, - 0x3c, 0x67, 0x9d, 0x5c, 0x71, 0xdb, 
0xb6, 0xd3, 0x51, 0x9a, 0xc3, 0xdf, 0xc3, 0x10, 0x1a, 0xff, - 0x12, 0x05, 0x99, 0x5c, 0x87, 0x3c, 0xff, 0x15, 0xee, 0xb5, 0xa8, 0x45, 0x13, 0x84, 0xfa, 0x41, - 0x21, 0x11, 0x13, 0x23, 0x93, 0x3b, 0xe9, 0xc8, 0x7c, 0x3b, 0xa3, 0x1e, 0xc5, 0x1f, 0xf7, 0xb0, - 0x89, 0x1d, 0x32, 0xd9, 0x93, 0x1c, 0x32, 0x9f, 0xb8, 0x09, 0x3f, 0x98, 0x81, 0x49, 0x45, 0x4b, - 0xc1, 0xda, 0x50, 0xf3, 0xdc, 0x8f, 0x68, 0x23, 0xd0, 0xdb, 0xd0, 0xe5, 0xc0, 0x58, 0x1b, 0x42, - 0xd4, 0x4f, 0xd0, 0x06, 0xe3, 0x8f, 0x32, 0xe2, 0x8e, 0x34, 0xf4, 0x36, 0xaf, 0x6f, 0xc9, 0xd9, - 0x93, 0x1c, 0x91, 0x5f, 0x84, 0x51, 0x93, 0x36, 0x1d, 0x5f, 0xdc, 0x6f, 0x66, 0xd5, 0xfb, 0x18, - 0x16, 0x44, 0x72, 0x93, 0xc7, 0x7e, 0xaa, 0xe7, 0x1b, 0x96, 0x33, 0x41, 0xb6, 0xea, 0xdf, 0x6c, - 0xd1, 0xc7, 0x0e, 0x5f, 0x8c, 0xe2, 0xa8, 0xc5, 0xe3, 0xcd, 0xf1, 0xad, 0x1d, 0x56, 0x22, 0x24, - 0x6a, 0x75, 0xe1, 0x69, 0x34, 0xc6, 0x07, 0x00, 0x51, 0x95, 0xe4, 0x0e, 0x14, 0xc5, 0x6c, 0x70, - 0x3a, 0xbb, 0x5c, 0x90, 0x12, 0x7d, 0x50, 0x3a, 0x3a, 0x2c, 0x3d, 0xdb, 0x08, 0xcb, 0x84, 0xd4, - 0xa9, 0xf0, 0x4d, 0x10, 0x1a, 0xff, 0x7b, 0x0e, 0xb2, 0x65, 0x1c, 0x90, 0x3b, 0xf4, 0x20, 0xb0, - 0xb7, 0x6f, 0x3a, 0x2d, 0x6d, 0x31, 0xed, 0x23, 0xd4, 0xda, 0x71, 0x34, 0x75, 0x85, 0x82, 0xcc, - 0x16, 0xd3, 0x1d, 0x6f, 0xfb, 0x0d, 0x24, 0x54, 0x16, 0xd3, 0xbe, 0xb7, 0xfd, 0x46, 0x9c, 0x2c, - 0x44, 0x24, 0x06, 0x8c, 0xf1, 0x85, 0x25, 0xe6, 0x20, 0x1c, 0x1d, 0x96, 0xc6, 0xf8, 0xfa, 0x33, - 0x45, 0x09, 0x39, 0x0b, 0xb9, 0x7a, 0x6d, 0x43, 0xec, 0x80, 0xa8, 0x16, 0xf4, 0xbb, 0x1d, 0x93, - 0xc1, 0x58, 0x9d, 0xeb, 0x95, 0x72, 0x0d, 0x15, 0x01, 0xa3, 0x51, 0x9d, 0xad, 0xa6, 0xdd, 0x8d, - 0xab, 0x02, 0x42, 0x44, 0xf2, 0x0e, 0x4c, 0xde, 0xa9, 0xac, 0xac, 0xb9, 0x3e, 0xdf, 0xbd, 0xc6, - 0xa2, 0xc9, 0xbf, 0xdf, 0x6c, 0x58, 0xa8, 0xf9, 0x8f, 0x1f, 0x03, 0x0a, 0x3e, 0xb1, 0xe0, 0x34, - 0x63, 0xc5, 0x86, 0xc4, 0x69, 0x50, 0x71, 0x29, 0xdd, 0x88, 0xde, 0x19, 0x5e, 0x39, 0x3a, 0x2c, - 0xbd, 0x88, 0x5f, 0xe0, 0x73, 0x14, 0x4b, 0x5e, 0x67, 0x63, 0x5c, 0xfb, 
0xb0, 0x21, 0x5f, 0x81, - 0x53, 0xc9, 0x92, 0x7a, 0xf8, 0x3e, 0x71, 0xe1, 0xe8, 0xb0, 0x64, 0xa4, 0xf2, 0xf7, 0xb5, 0xf9, - 0x9b, 0xce, 0xc4, 0xf8, 0x66, 0x16, 0x26, 0x15, 0x35, 0x1f, 0xf9, 0xac, 0x78, 0x96, 0xce, 0x68, - 0x17, 0x18, 0x05, 0x83, 0x95, 0x72, 0x9d, 0x50, 0xdb, 0x6d, 0x52, 0xf1, 0x48, 0x1d, 0xe9, 0x5f, - 0xb2, 0xc3, 0xe8, 0x5f, 0xde, 0x04, 0xe0, 0x53, 0x18, 0xfb, 0x49, 0x91, 0x86, 0x14, 0xeb, 0x14, - 0x75, 0x5a, 0x45, 0xc8, 0xe4, 0x3e, 0xcc, 0x6d, 0x7a, 0x3d, 0x3f, 0xa8, 0x1f, 0xf8, 0x01, 0x6d, - 0x33, 0x6e, 0x35, 0xd7, 0x6d, 0x89, 0xe5, 0xf3, 0xd2, 0xd1, 0x61, 0xe9, 0x3c, 0x9a, 0xd4, 0x58, - 0x3e, 0x96, 0xe3, 0x07, 0x58, 0x5d, 0xd7, 0x55, 0xb5, 0x32, 0x69, 0x0c, 0x0c, 0x13, 0x0a, 0xaa, - 0x4e, 0x87, 0x1d, 0x8c, 0xe2, 0x09, 0x4f, 0x68, 0xea, 0x95, 0x83, 0x51, 0x7c, 0x65, 0xf2, 0x49, - 0x51, 0x27, 0x31, 0x3e, 0xab, 0xea, 0x13, 0x87, 0xdd, 0x97, 0x8c, 0xbf, 0x94, 0x89, 0x76, 0xc1, - 0xfb, 0xd7, 0xc8, 0x5b, 0x30, 0xc6, 0x9f, 0x4c, 0xc5, 0xcb, 0xf2, 0xa9, 0xf0, 0x4e, 0xae, 0xbe, - 0xa7, 0x72, 0x45, 0xfe, 0xef, 0x72, 0xb3, 0x8a, 0x67, 0x4c, 0x41, 0x12, 0xbe, 0x01, 0xe8, 0xea, - 0x40, 0xc9, 0x1d, 0xb5, 0xdd, 0xd7, 0xd2, 0xde, 0x00, 0x8c, 0x7f, 0x31, 0x0a, 0xd3, 0x3a, 0x9a, - 0xfa, 0xae, 0x9a, 0x19, 0xea, 0x5d, 0xf5, 0x8b, 0x90, 0x17, 0xf3, 0x4d, 0x0a, 0x94, 0x2f, 0xe1, - 0xcb, 0x88, 0x80, 0x69, 0xf6, 0x02, 0xc0, 0x87, 0x83, 0x5d, 0xd1, 0xcd, 0x90, 0x8a, 0x2c, 0x29, - 0xaf, 0x76, 0xb9, 0x48, 0xc6, 0x92, 0xaf, 0x76, 0xea, 0x72, 0x0e, 0xdf, 0xef, 0x2e, 0xc3, 0x18, - 0xbb, 0x9e, 0x84, 0x1a, 0x24, 0xfc, 0x4a, 0x76, 0x73, 0x89, 0x19, 0x06, 0x71, 0x24, 0xf2, 0x00, - 0xf2, 0xeb, 0xb6, 0x1f, 0xd4, 0x29, 0xed, 0x0c, 0x61, 0x31, 0x51, 0x12, 0x5d, 0x35, 0x87, 0xe6, - 0x08, 0x3e, 0xa5, 0x9d, 0xd8, 0x93, 0x77, 0xc8, 0x8c, 0x7c, 0x08, 0xb0, 0xe2, 0x76, 0x02, 0xcf, - 0x6d, 0xad, 0xbb, 0xbb, 0x0b, 0x63, 0x78, 0x75, 0x7f, 0x3e, 0x36, 0x00, 0x11, 0x02, 0xbf, 0xbd, - 0x87, 0xfa, 0xa9, 0x06, 0x2f, 0xb0, 0x5a, 0xee, 0xae, 0xba, 0x0e, 0x22, 0x7c, 0x72, 0x13, 0x8a, - 0x52, 0x2f, 
0xb2, 0xd5, 0xdd, 0xf5, 0x70, 0x82, 0x8c, 0x47, 0x82, 0x13, 0x7d, 0x1c, 0x58, 0x3d, - 0x01, 0x57, 0x37, 0xfa, 0x38, 0x0d, 0xf9, 0x0a, 0x9c, 0x89, 0xc3, 0xe4, 0x28, 0xe7, 0xa3, 0x2b, - 0x85, 0xca, 0x2e, 0x65, 0xde, 0xf7, 0x63, 0x41, 0x6e, 0xc1, 0x0c, 0xeb, 0x90, 0xbb, 0xd4, 0xf6, - 0x7b, 0xdc, 0xac, 0x4d, 0x68, 0x96, 0xe4, 0x83, 0xb0, 0x58, 0x85, 0x2d, 0xb7, 0xb1, 0xaf, 0x20, - 0x99, 0x71, 0x2a, 0x72, 0x03, 0x26, 0xb9, 0x9d, 0x82, 0x57, 0xed, 0xec, 0xb8, 0xe2, 0xd9, 0x40, - 0x6a, 0xd3, 0x45, 0xc9, 0xfd, 0x25, 0x56, 0x66, 0xaa, 0x88, 0xc6, 0x61, 0x16, 0x4e, 0xa7, 0xd7, - 0x41, 0xfe, 0x22, 0x9c, 0x12, 0xfd, 0xd9, 0xa2, 0x9e, 0x82, 0x33, 0x84, 0x05, 0xc7, 0x65, 0x31, - 0x4e, 0x2f, 0x34, 0x42, 0x06, 0xe1, 0x86, 0xc3, 0x58, 0xc4, 0x26, 0x45, 0x7a, 0x3d, 0xe4, 0x6b, - 0x30, 0xa9, 0x56, 0x9b, 0x1d, 0xde, 0x18, 0x66, 0x40, 0x5d, 0x2a, 0x4b, 0x62, 0xc3, 0x8c, 0x49, - 0xbf, 0xde, 0xa3, 0x7e, 0x20, 0xcd, 0x71, 0x84, 0xc4, 0x72, 0x36, 0x51, 0x8b, 0x44, 0x08, 0xd5, - 0x5e, 0x45, 0x8f, 0x53, 0x5a, 0xd2, 0x68, 0xf2, 0x5b, 0x8c, 0x7d, 0x9c, 0x9f, 0xf1, 0xdd, 0x2c, - 0x9c, 0xe9, 0x33, 0x9d, 0xd9, 0x8e, 0x87, 0xf2, 0xa4, 0xb2, 0xe3, 0xc5, 0xc4, 0x48, 0x6e, 0xcb, - 0x77, 0x1e, 0xb2, 0x42, 0x02, 0x1b, 0x59, 0x2e, 0x1e, 0x1d, 0x96, 0x0a, 0xda, 0x4a, 0xcd, 0x56, - 0x2b, 0xe4, 0x36, 0x8c, 0xb0, 0x6e, 0x18, 0xc2, 0x24, 0x45, 0x2a, 0x3d, 0xa7, 0x03, 0x47, 0xdd, - 0x20, 0xb0, 0x6f, 0x90, 0x07, 0xf9, 0x2c, 0xe4, 0x36, 0x37, 0xd7, 0x71, 0x77, 0xc8, 0xe1, 0xec, - 0x9e, 0x0a, 0x82, 0x96, 0xb6, 0x19, 0x4d, 0x31, 0xda, 0xb0, 0x47, 0x4c, 0x86, 0x4e, 0xbe, 0x14, - 0x33, 0x95, 0x7b, 0x75, 0xf0, 0x52, 0x1e, 0xde, 0x72, 0xee, 0x13, 0x18, 0xac, 0x19, 0x3f, 0x91, - 0x91, 0x56, 0x41, 0x62, 0xf2, 0x93, 0xf3, 0x72, 0x9d, 0xe0, 0xc5, 0x5c, 0x70, 0x51, 0x41, 0xe4, - 0x79, 0x00, 0xfe, 0x73, 0x6b, 0x4b, 0x74, 0x7a, 0xc1, 0x54, 0x20, 0xe4, 0x0b, 0x21, 0x4b, 0xa1, - 0xca, 0xcc, 0xa1, 0x24, 0x10, 0x5b, 0x6b, 0xbc, 0xcc, 0xd4, 0x51, 0x8d, 0xdf, 0xc8, 0x46, 0xa7, - 0xc6, 0x4d, 0xa7, 0x15, 0x50, 0x8f, 0x2c, 0xf2, 
0x43, 0x20, 0xba, 0xcd, 0x98, 0xe1, 0x6f, 0xb2, - 0x10, 0x9d, 0x28, 0xbc, 0x69, 0xe1, 0xd1, 0xf1, 0xaa, 0x72, 0x74, 0xe4, 0xf0, 0xe8, 0x98, 0xee, - 0x7b, 0x48, 0xbc, 0x9a, 0xb2, 0x13, 0xe2, 0xd6, 0x9f, 0xb2, 0xdb, 0xbd, 0x04, 0x53, 0x1b, 0xee, - 0xea, 0xe3, 0x20, 0x44, 0x64, 0x5b, 0x7e, 0xde, 0xd4, 0x81, 0x8c, 0xe3, 0xbd, 0x56, 0x93, 0x7a, - 0x9b, 0x7b, 0x76, 0x47, 0x33, 0x2e, 0x31, 0x13, 0x70, 0x86, 0xbb, 0x41, 0x1f, 0xe9, 0xb8, 0xe3, - 0x1c, 0x37, 0x0e, 0x8f, 0x0f, 0x4e, 0x3e, 0x31, 0x38, 0xc6, 0x0f, 0x65, 0x65, 0x77, 0xdd, 0x5f, - 0x7a, 0x4a, 0xcd, 0x0e, 0xde, 0xd0, 0xcc, 0x0e, 0xe6, 0xc2, 0x07, 0x93, 0xd0, 0x66, 0x67, 0x29, - 0x4d, 0xe0, 0x50, 0x6c, 0x06, 0xfe, 0xce, 0x18, 0x14, 0x54, 0x74, 0xd6, 0x0f, 0xe5, 0x66, 0xd3, - 0x53, 0xfb, 0xc1, 0x6e, 0x36, 0x3d, 0x13, 0xa1, 0x9a, 0x65, 0x4f, 0x6e, 0xa0, 0x65, 0xcf, 0x57, - 0x61, 0x62, 0xa5, 0xdd, 0xd4, 0xde, 0xff, 0x8d, 0x94, 0xcf, 0xbb, 0x12, 0x22, 0xf1, 0xd5, 0x1b, - 0xbe, 0x03, 0x34, 0xda, 0xcd, 0xe4, 0xab, 0x7f, 0xc4, 0x52, 0x33, 0x0a, 0x1a, 0xfd, 0x24, 0x46, - 0x41, 0x37, 0x60, 0x62, 0xcb, 0xa7, 0x9b, 0xbd, 0x4e, 0x87, 0xb6, 0x70, 0xe2, 0xe5, 0xf9, 0xf5, - 0xb9, 0xe7, 0x53, 0x2b, 0x40, 0xa8, 0xfa, 0x01, 0x21, 0xaa, 0x3a, 0xc0, 0xe3, 0x03, 0x06, 0xf8, - 0x3a, 0xe4, 0x6b, 0x94, 0x7a, 0xd8, 0xa7, 0x93, 0xd1, 0x2d, 0xa9, 0x4b, 0xa9, 0x67, 0xb1, 0x8e, - 0xd5, 0x8c, 0x85, 0x04, 0xa2, 0x66, 0x61, 0x54, 0x18, 0xd2, 0xc2, 0x88, 0xbc, 0x00, 0x85, 0x6e, - 0x6f, 0xbb, 0xe5, 0x34, 0x90, 0xaf, 0x30, 0x4d, 0x32, 0x27, 0x39, 0x8c, 0xb1, 0xf5, 0xc9, 0x97, - 0x60, 0x0a, 0xd5, 0x06, 0xe1, 0x94, 0x9b, 0xd6, 0x8e, 0x76, 0xad, 0x8c, 0x4b, 0xdf, 0x0d, 0x06, - 0xb2, 0x52, 0x0c, 0xf1, 0x74, 0x46, 0xe4, 0x36, 0x8c, 0xef, 0x3a, 0x81, 0xb5, 0xd7, 0xdb, 0x5e, - 0x98, 0xd1, 0xac, 0xd8, 0x6e, 0x39, 0xc1, 0x5a, 0x6f, 0x9b, 0x0f, 0x79, 0xc8, 0x1a, 0xf7, 0xe8, - 0x5d, 0x27, 0xd8, 0xeb, 0xa9, 0xaf, 0x1c, 0x63, 0xbb, 0x88, 0xbb, 0x58, 0x87, 0x69, 0x7d, 0x56, - 0x3c, 0x81, 0xb7, 0xf7, 0xd0, 0xf2, 0x2a, 0x5f, 0x9c, 0xb8, 0x3d, 0x92, 0x87, 0xe2, 
0x24, 0xb7, - 0xb9, 0x32, 0xa1, 0x16, 0xf6, 0x8f, 0x49, 0xee, 0xf4, 0xb6, 0xa9, 0xd7, 0xa1, 0x01, 0xf5, 0xc5, - 0x1d, 0xdd, 0x37, 0x47, 0xca, 0xdd, 0xae, 0x6f, 0xfc, 0xe3, 0x2c, 0x8c, 0x97, 0x1f, 0xd4, 0x71, - 0xd7, 0x7f, 0x5d, 0x7d, 0x38, 0x55, 0x5f, 0xd0, 0xc3, 0x87, 0x53, 0xf5, 0xb9, 0xf4, 0x6a, 0x8a, - 0x96, 0x05, 0x7d, 0x03, 0x14, 0x2d, 0x8b, 0xa6, 0x1f, 0x8a, 0xde, 0x90, 0x73, 0x43, 0xbc, 0x21, - 0x87, 0x6a, 0xfe, 0x91, 0xe3, 0xd5, 0xfc, 0x6f, 0xc1, 0x64, 0xb5, 0x13, 0xd0, 0x5d, 0x2f, 0x5a, - 0x35, 0xa1, 0xc6, 0x27, 0x04, 0xab, 0x37, 0x6f, 0x05, 0x9b, 0x4d, 0x49, 0xfe, 0xb4, 0x10, 0x3e, - 0x29, 0xe0, 0x94, 0xe4, 0x2f, 0x10, 0x31, 0x75, 0x9d, 0x44, 0x34, 0x2a, 0xb1, 0xf9, 0x26, 0xed, - 0x74, 0xb8, 0xd0, 0x37, 0x1d, 0xbd, 0xad, 0xb1, 0x8e, 0x5d, 0x9e, 0x4d, 0xb7, 0xd3, 0x31, 0xfe, - 0x6a, 0x06, 0xe6, 0xd3, 0xa6, 0x11, 0x79, 0x17, 0x0a, 0xae, 0xb7, 0x6b, 0x77, 0x9c, 0xef, 0xe7, - 0x2d, 0x52, 0x74, 0xca, 0x2a, 0x5c, 0xd5, 0xa4, 0xa9, 0x70, 0xd6, 0x21, 0x4a, 0xcb, 0x75, 0x15, - 0x58, 0x6a, 0x87, 0x28, 0x60, 0xe3, 0x47, 0xb3, 0x30, 0x59, 0xee, 0x76, 0x9f, 0x72, 0xd3, 0xd3, - 0xcf, 0x6b, 0x07, 0x88, 0xd4, 0x40, 0x84, 0xed, 0xea, 0x6f, 0xb8, 0xa6, 0x9c, 0x21, 0xbf, 0x9c, - 0x85, 0x99, 0x18, 0x85, 0xfa, 0xf5, 0x99, 0x21, 0x2d, 0x45, 0xb3, 0x43, 0x5a, 0x8a, 0xe6, 0x86, - 0xb3, 0x14, 0x1d, 0xf9, 0x24, 0x87, 0xc2, 0x2b, 0x90, 0x2b, 0x77, 0xbb, 0x71, 0x0b, 0x90, 0x6e, - 0xf7, 0xfe, 0x75, 0xae, 0x04, 0xb3, 0xbb, 0x5d, 0x93, 0x61, 0x68, 0x3b, 0xf5, 0xd8, 0x90, 0x3b, - 0xb5, 0x71, 0x19, 0x26, 0x90, 0x17, 0xda, 0x4b, 0x9e, 0x07, 0xdc, 0x62, 0x84, 0xa9, 0xa4, 0x56, - 0x97, 0xd8, 0x7c, 0xfe, 0x38, 0x03, 0xa3, 0xf8, 0xfb, 0x29, 0x9d, 0x63, 0x4b, 0xda, 0x1c, 0x2b, - 0x2a, 0x73, 0x6c, 0x98, 0xd9, 0xf5, 0xf7, 0x73, 0x00, 0x2b, 0xf7, 0xcc, 0x3a, 0xd7, 0x95, 0x92, - 0x9b, 0x30, 0x63, 0xb7, 0x5a, 0xee, 0x23, 0xda, 0xb4, 0x5c, 0xcf, 0xd9, 0x75, 0x3a, 0xbc, 0xe7, - 0xa4, 0x59, 0x82, 0x5e, 0xa4, 0x3e, 0x56, 0x8a, 0xa2, 0x7b, 0xbc, 0x44, 0xe5, 0xd3, 0xa6, 0xc1, - 0x9e, 0xdb, 0x94, 0x6a, 
0x13, 0x8d, 0x8f, 0x28, 0x4a, 0xe1, 0x73, 0x97, 0x97, 0xa8, 0x7c, 0xf6, - 0x50, 0x0d, 0x24, 0x65, 0x68, 0x8d, 0x8f, 0x28, 0x4a, 0xe1, 0xc3, 0x75, 0x47, 0x3e, 0x59, 0x87, - 0x59, 0x84, 0x58, 0x0d, 0x8f, 0x36, 0x69, 0x27, 0x70, 0xec, 0x96, 0x2f, 0x14, 0x6d, 0xa8, 0x51, - 0x4e, 0x14, 0xaa, 0x8a, 0x06, 0x2c, 0x5c, 0x89, 0xca, 0xc8, 0x15, 0x18, 0x6f, 0xdb, 0x8f, 0x2d, - 0x7b, 0x97, 0x1b, 0xe8, 0x4c, 0x71, 0xc5, 0x8c, 0x00, 0xa9, 0xc7, 0x48, 0xdb, 0x7e, 0x5c, 0xde, - 0xa5, 0xac, 0x15, 0xf4, 0x71, 0xd7, 0xf5, 0x95, 0x56, 0x8c, 0x45, 0xad, 0x88, 0x15, 0xa9, 0xad, - 0x10, 0x45, 0xa2, 0x15, 0xc6, 0x2f, 0x65, 0xe0, 0xd9, 0x2a, 0x7e, 0x45, 0x70, 0xb0, 0x42, 0x3b, - 0x01, 0xf5, 0x6a, 0xd4, 0x6b, 0x3b, 0x68, 0xae, 0x50, 0xa7, 0x01, 0x79, 0x11, 0x72, 0x65, 0x73, - 0x43, 0xcc, 0x5f, 0xbe, 0xdf, 0x6b, 0xc6, 0x23, 0xac, 0x34, 0xd4, 0xdd, 0x65, 0x8f, 0x79, 0x53, - 0x28, 0x43, 0xa1, 0xec, 0xfb, 0xce, 0x6e, 0xa7, 0xcd, 0xfd, 0x75, 0x72, 0x9a, 0x79, 0x8a, 0x80, - 0x27, 0x1e, 0xc3, 0x54, 0x12, 0xe3, 0x3f, 0xcf, 0xc0, 0x6c, 0xb9, 0xdb, 0xd5, 0x3f, 0x59, 0x37, - 0x8d, 0xca, 0x0c, 0x6f, 0x1a, 0xe5, 0xc0, 0xb4, 0xd6, 0x5c, 0x3e, 0xa5, 0x22, 0xc1, 0x77, 0x40, - 0xcf, 0xf0, 0xcf, 0xee, 0x86, 0x20, 0xcb, 0xd7, 0xdf, 0xf5, 0x63, 0x8c, 0x8d, 0x7f, 0x37, 0x8e, - 0x7b, 0x88, 0xd8, 0x6d, 0x85, 0xf1, 0x6e, 0x26, 0xc5, 0x78, 0xf7, 0x4d, 0x50, 0x24, 0x1c, 0xf5, - 0x88, 0x53, 0x64, 0x45, 0x55, 0xeb, 0x15, 0x21, 0x93, 0xfd, 0xb8, 0x19, 0x6f, 0x0e, 0x5b, 0xf3, - 0x62, 0x7c, 0x01, 0x3f, 0x11, 0x0b, 0xde, 0x35, 0x20, 0xd5, 0x0e, 0xda, 0x1a, 0xd0, 0xfa, 0xbe, - 0xd3, 0xbd, 0x4f, 0x3d, 0x67, 0xe7, 0x40, 0x2c, 0x00, 0xec, 0x7c, 0x47, 0x94, 0x5a, 0xfe, 0xbe, - 0xd3, 0xb5, 0x1e, 0x62, 0xb9, 0x99, 0x42, 0x43, 0xde, 0x83, 0x71, 0x93, 0x3e, 0xf2, 0x9c, 0x40, - 0x1a, 0xa7, 0x4d, 0x87, 0x4a, 0x5c, 0x84, 0xf2, 0xb5, 0xe0, 0xf1, 0x1f, 0xea, 0xae, 0x28, 0xca, - 0xc9, 0x12, 0x17, 0x52, 0xb8, 0x11, 0xda, 0x54, 0xd4, 0xda, 0xf2, 0x83, 0x7a, 0x3f, 0x19, 0x85, - 0x5c, 0x82, 0x51, 0x94, 0x74, 0xc4, 0x5d, 0x00, 0x7d, 0xd1, 
0x50, 0x76, 0x56, 0xc5, 0x30, 0xc4, - 0x40, 0x9d, 0x80, 0x7c, 0xcc, 0xf7, 0x17, 0xf2, 0x28, 0xa5, 0x2b, 0x90, 0xb8, 0x98, 0x36, 0x71, - 0x22, 0x31, 0x6d, 0x1d, 0x8a, 0x26, 0x77, 0x6b, 0x6d, 0x96, 0xbb, 0xf8, 0x62, 0xec, 0x2f, 0x00, - 0xae, 0xe4, 0xf3, 0x47, 0x87, 0xa5, 0xe7, 0x84, 0xcb, 0x6b, 0xd3, 0xb2, 0xbb, 0xfc, 0xa1, 0x59, - 0xdb, 0x46, 0xe2, 0x94, 0xe4, 0x4d, 0x18, 0x61, 0x5b, 0xaf, 0x30, 0xf8, 0x95, 0x2f, 0x6f, 0xd1, - 0x6e, 0xcc, 0x17, 0x67, 0xc3, 0xd5, 0xf6, 0x04, 0x24, 0x21, 0x16, 0x4c, 0xeb, 0xd3, 0x5d, 0xd8, - 0x7e, 0x2d, 0x44, 0xfd, 0xa9, 0x97, 0x8b, 0xe7, 0x38, 0x01, 0xb3, 0x1a, 0x08, 0x54, 0x57, 0x40, - 0x6c, 0x91, 0xae, 0x42, 0x7e, 0x73, 0xa5, 0x56, 0x73, 0xbd, 0x80, 0x5f, 0x75, 0xa2, 0x93, 0x85, - 0xc1, 0x4c, 0xbb, 0xb3, 0x4b, 0xf9, 0x59, 0x1c, 0x34, 0xba, 0x56, 0x97, 0xa1, 0xa9, 0x67, 0xb1, - 0x24, 0x25, 0x1f, 0xc2, 0xa9, 0x2d, 0x9f, 0x96, 0x3b, 0x07, 0x78, 0x3a, 0x2b, 0x4b, 0x65, 0x1a, - 0xa7, 0x1e, 0x3e, 0x28, 0xb1, 0xab, 0xa0, 0xdd, 0x39, 0xb0, 0xf8, 0xa9, 0x9e, 0xbe, 0x70, 0xd2, - 0xb9, 0x7c, 0x7a, 0xb6, 0xc4, 0xff, 0x24, 0x03, 0xca, 0x84, 0xcd, 0x9b, 0xb4, 0xe9, 0x78, 0xb4, - 0x11, 0x88, 0xc3, 0x50, 0x78, 0x6e, 0x72, 0x58, 0xcc, 0x66, 0x14, 0x61, 0xe4, 0x5d, 0x18, 0x17, - 0x9b, 0xb6, 0xd8, 0xa4, 0xe4, 0x44, 0x17, 0x4f, 0x1c, 0xdc, 0xc5, 0x37, 0xb1, 0xe1, 0x4b, 0x22, - 0xb6, 0x47, 0xde, 0x7e, 0xb0, 0xb9, 0xd2, 0xb2, 0x9d, 0xb6, 0x2f, 0x76, 0x5e, 0x5c, 0xa6, 0x1f, - 0x3d, 0x0a, 0xac, 0x06, 0x42, 0xd5, 0x3d, 0x32, 0x44, 0x35, 0x6e, 0xc9, 0x17, 0x96, 0x63, 0x0c, - 0x9f, 0x4b, 0x30, 0x7a, 0x3f, 0xd2, 0xc3, 0x2d, 0x4f, 0x1c, 0x1d, 0x96, 0x78, 0xeb, 0x4d, 0x0e, - 0x37, 0x28, 0x4c, 0x84, 0x03, 0xcd, 0x78, 0xb1, 0x1f, 0xc8, 0x6b, 0x8a, 0xf3, 0x62, 0x43, 0x6e, - 0x22, 0x94, 0x09, 0x46, 0xab, 0x9d, 0x26, 0x22, 0x64, 0x11, 0x01, 0xbb, 0x87, 0x76, 0x9a, 0x38, - 0x2f, 0xd4, 0xd6, 0x09, 0x34, 0x45, 0xfc, 0xf8, 0xf1, 0x0c, 0x4c, 0xeb, 0xa3, 0x40, 0xae, 0xc0, - 0x98, 0x70, 0xce, 0xcc, 0xa0, 0x5a, 0x93, 0x71, 0x1b, 0xe3, 0x6e, 0x99, 0x9a, 0x33, 0xa6, 0xc0, - 
0x62, 0x52, 0x96, 0xe0, 0x20, 0x44, 0x0c, 0x94, 0xb2, 0x1a, 0x1c, 0x64, 0xca, 0x32, 0x62, 0xb0, - 0x8b, 0x9f, 0xdf, 0x6b, 0x05, 0xea, 0x73, 0xac, 0x87, 0x10, 0x53, 0x94, 0x18, 0xdf, 0xce, 0xc0, - 0x18, 0xdf, 0x89, 0x62, 0x86, 0x9d, 0x99, 0x93, 0x18, 0x76, 0x7e, 0x03, 0xe6, 0x4d, 0xb7, 0x45, - 0xfd, 0x72, 0xe7, 0xe0, 0xd1, 0x1e, 0xf5, 0x68, 0xcd, 0x73, 0x77, 0xe4, 0xcb, 0xf1, 0xe4, 0xd2, - 0x0b, 0xda, 0x8e, 0x97, 0x86, 0xc8, 0x9f, 0xfe, 0x3c, 0x56, 0xc2, 0xd6, 0x05, 0x16, 0xb1, 0xc5, - 0x11, 0x7b, 0x69, 0x4e, 0xad, 0xc4, 0xf8, 0x07, 0x19, 0x58, 0xec, 0xcf, 0x1a, 0xcf, 0x2b, 0xfe, - 0x67, 0x24, 0x28, 0xf0, 0xf3, 0x8a, 0x43, 0x63, 0xd6, 0xa6, 0x0a, 0x32, 0x31, 0xe1, 0x54, 0xb9, - 0xd1, 0xa0, 0xdd, 0x80, 0x31, 0x16, 0x36, 0x92, 0xa1, 0x20, 0x91, 0xe7, 0xfa, 0x0c, 0x1b, 0x11, - 0xb8, 0xdd, 0xaa, 0xb4, 0xdc, 0xc4, 0x59, 0x97, 0x4e, 0x6a, 0x1c, 0x66, 0x00, 0xea, 0xf5, 0xb5, - 0x3b, 0xf4, 0xa0, 0x66, 0x3b, 0x28, 0x19, 0xf0, 0xc5, 0x7d, 0x47, 0x2c, 0xdf, 0x82, 0xb0, 0xb5, - 0xe0, 0x7b, 0xc2, 0x3e, 0x3d, 0xd0, 0x6c, 0x2d, 0x24, 0x2a, 0x6f, 0x95, 0xf3, 0xd0, 0x0e, 0x28, - 0x23, 0x44, 0x3d, 0xb0, 0x6c, 0x15, 0x42, 0x63, 0x94, 0x0a, 0x32, 0xf9, 0x10, 0xa6, 0xa3, 0x5f, - 0xa1, 0xc5, 0xc8, 0x74, 0xb8, 0x45, 0xe8, 0x85, 0xcb, 0xcf, 0x1f, 0x1d, 0x96, 0x16, 0x15, 0xae, - 0x71, 0x5b, 0x92, 0x18, 0x33, 0xe3, 0x17, 0x33, 0x68, 0x27, 0x25, 0x1b, 0x78, 0x01, 0x46, 0x42, - 0xcb, 0xfe, 0x82, 0xd8, 0xde, 0xf5, 0x67, 0x65, 0x2c, 0x67, 0x82, 0x5c, 0xd4, 0x12, 0x3c, 0x14, - 0xf5, 0x16, 0xb0, 0x52, 0x72, 0x0b, 0xc6, 0x87, 0xfa, 0x66, 0x5c, 0x8e, 0x29, 0xdf, 0x2a, 0xa9, - 0x71, 0x14, 0x6e, 0x3f, 0xd8, 0xfc, 0xde, 0x1d, 0x85, 0x9f, 0xca, 0xc2, 0x0c, 0xeb, 0xd7, 0x72, - 0x2f, 0xd8, 0x73, 0x3d, 0x27, 0x38, 0x78, 0x6a, 0x35, 0xd2, 0x6f, 0x6b, 0x97, 0xbd, 0x45, 0x79, - 0x8e, 0xa9, 0x6d, 0x1b, 0x4a, 0x31, 0xfd, 0xdf, 0x8e, 0xc2, 0x5c, 0x0a, 0x15, 0x79, 0x5d, 0x7b, - 0xe6, 0x5a, 0x90, 0x81, 0x2a, 0xbe, 0x7b, 0x58, 0x2a, 0x48, 0xf4, 0xcd, 0x28, 0x70, 0xc5, 0x92, - 0x6e, 0x74, 0xc8, 0x7b, 0x0a, 0x5f, 
0xbd, 0x54, 0xa3, 0x43, 0xdd, 0xd4, 0xf0, 0x12, 0x8c, 0xe2, - 0xce, 0x24, 0x0c, 0x6d, 0x51, 0x94, 0xc3, 0xbd, 0x4e, 0x33, 0x2c, 0x62, 0x00, 0xb2, 0x06, 0xe3, - 0xec, 0x8f, 0xbb, 0x76, 0x57, 0xbc, 0x39, 0x93, 0x50, 0xdd, 0x80, 0xd0, 0xae, 0xd3, 0xd9, 0x55, - 0x35, 0x0e, 0x2d, 0x6a, 0xb5, 0xed, 0xae, 0x26, 0x73, 0x72, 0x44, 0x4d, 0x73, 0x91, 0xef, 0xaf, - 0xb9, 0xc8, 0x1c, 0xab, 0xb9, 0xd8, 0x01, 0xa8, 0x3b, 0xbb, 0x1d, 0xa7, 0xb3, 0x5b, 0x6e, 0xed, - 0x8a, 0x70, 0x1f, 0x97, 0xfa, 0x8f, 0xc2, 0x95, 0x08, 0x19, 0x27, 0xee, 0xb3, 0x68, 0x18, 0xc2, - 0x61, 0x96, 0xdd, 0xda, 0xd5, 0xfc, 0xfb, 0x14, 0xce, 0x64, 0x03, 0xa0, 0xdc, 0x08, 0x9c, 0x87, - 0x6c, 0x0a, 0xfb, 0x42, 0x40, 0x94, 0x9f, 0xbc, 0x52, 0xbe, 0x43, 0x0f, 0xf0, 0x52, 0x23, 0x9f, - 0xd8, 0x6d, 0x44, 0x65, 0x2b, 0x41, 0x73, 0xde, 0x8a, 0x38, 0x90, 0x2e, 0x9c, 0x2a, 0x37, 0x9b, - 0x0e, 0x6b, 0x83, 0xdd, 0xda, 0xe4, 0x81, 0x5a, 0x90, 0x75, 0x21, 0x9d, 0xf5, 0x25, 0xf9, 0x2a, - 0x6c, 0x87, 0x54, 0x96, 0x8c, 0xef, 0x12, 0xab, 0x26, 0x9d, 0xb1, 0x51, 0x87, 0x69, 0xbd, 0xf1, - 0x7a, 0x98, 0x92, 0x02, 0xe4, 0xcd, 0x7a, 0xd9, 0xaa, 0xaf, 0x95, 0xaf, 0x15, 0x33, 0xa4, 0x08, - 0x05, 0xf1, 0x6b, 0xc9, 0x5a, 0x7a, 0xe3, 0x46, 0x31, 0xab, 0x41, 0xde, 0xb8, 0xb6, 0x54, 0xcc, - 0x2d, 0x66, 0x17, 0x32, 0x31, 0xd7, 0xde, 0xf1, 0x62, 0x9e, 0x2b, 0x9b, 0x8d, 0x5f, 0xc9, 0x40, - 0x5e, 0x7e, 0x3b, 0xb9, 0x01, 0xb9, 0x7a, 0x7d, 0x2d, 0xe6, 0x1c, 0x1b, 0x9d, 0x32, 0x7c, 0x3f, - 0xf5, 0x7d, 0xd5, 0x03, 0x82, 0x11, 0x30, 0xba, 0xcd, 0xf5, 0xba, 0x90, 0xd7, 0x24, 0x5d, 0xb4, - 0x79, 0x73, 0xba, 0x14, 0x8f, 0xc1, 0x1b, 0x90, 0xbb, 0xfd, 0x60, 0x53, 0x5c, 0xdf, 0x24, 0x5d, - 0xb4, 0x9f, 0x72, 0xba, 0x8f, 0x1e, 0xa9, 0xbb, 0x3c, 0x23, 0x30, 0x4c, 0x98, 0x54, 0x26, 0x32, - 0x17, 0x50, 0xda, 0x6e, 0x18, 0x9b, 0x43, 0x08, 0x28, 0x0c, 0x62, 0x8a, 0x12, 0x26, 0xb6, 0xad, - 0xbb, 0x0d, 0xbb, 0x25, 0x24, 0x1d, 0x14, 0xdb, 0x5a, 0x0c, 0x60, 0x72, 0xb8, 0xf1, 0x9b, 0x19, - 0x28, 0xd6, 0x3c, 0x97, 0xc7, 0x0f, 0xd9, 0x74, 0xf7, 0x69, 0xe7, 0xfe, 
0x35, 0x72, 0x59, 0x2e, - 0xb9, 0x4c, 0xa8, 0x42, 0x1b, 0xc5, 0x25, 0x17, 0x7b, 0x87, 0x14, 0xcb, 0x4e, 0x09, 0x7f, 0x92, - 0x1d, 0x3e, 0x6c, 0xc2, 0x31, 0xe1, 0x4f, 0x4a, 0x30, 0x8a, 0x9f, 0x23, 0x36, 0x47, 0xfc, 0xf2, - 0x80, 0x01, 0x4c, 0x0e, 0x57, 0xf6, 0xa6, 0xc3, 0x6c, 0xa2, 0x0d, 0x4b, 0xdf, 0x53, 0xa1, 0x07, - 0xf4, 0xc6, 0xf5, 0xdf, 0xaf, 0xc9, 0x9d, 0x3e, 0xa1, 0x07, 0x62, 0x0c, 0xb8, 0xe7, 0xe0, 0x12, - 0x7f, 0x9e, 0xe0, 0xfe, 0x37, 0xaa, 0x12, 0x2a, 0xe1, 0xc9, 0xfc, 0x01, 0xcc, 0xc7, 0xfb, 0x17, - 0x75, 0xa5, 0x65, 0x98, 0xd1, 0xe1, 0x52, 0x6d, 0x7a, 0x26, 0xb5, 0xde, 0xfb, 0x4b, 0x66, 0x1c, - 0xdf, 0xf8, 0x5f, 0x33, 0x30, 0x81, 0x7f, 0x9a, 0x3d, 0x2e, 0x6d, 0x96, 0x1f, 0xd4, 0x85, 0x06, - 0x47, 0x95, 0x36, 0xed, 0x47, 0xbe, 0x34, 0xed, 0xd3, 0x36, 0xac, 0x10, 0x59, 0x90, 0xf2, 0x67, - 0x18, 0xa9, 0x3b, 0x0c, 0x49, 0xf9, 0x7b, 0x8d, 0x1f, 0x23, 0x15, 0xc8, 0x68, 0xcc, 0xce, 0xc5, - 0x5f, 0xd5, 0xd0, 0x0a, 0xe9, 0xdc, 0x96, 0x6e, 0xcc, 0xce, 0xd1, 0xd0, 0xce, 0xea, 0x41, 0x9d, - 0x49, 0xc4, 0xaa, 0x9d, 0x15, 0xfb, 0x46, 0x4d, 0x1a, 0x16, 0x48, 0xc6, 0xaf, 0x4f, 0xc5, 0x3b, - 0x50, 0x9c, 0x9e, 0x27, 0x5c, 0x68, 0x6f, 0xc1, 0x68, 0xb9, 0xd5, 0x72, 0x1f, 0x89, 0x2d, 0x47, - 0x5e, 0xb0, 0xc3, 0xfe, 0xe3, 0x87, 0x23, 0x6a, 0x1f, 0x35, 0x27, 0x67, 0x06, 0x20, 0x2b, 0x30, - 0x51, 0x7e, 0x50, 0xaf, 0x56, 0x2b, 0x9b, 0x9b, 0xdc, 0xa1, 0x33, 0xb7, 0xfc, 0xb2, 0xec, 0x1f, - 0xc7, 0x69, 0x5a, 0x71, 0x43, 0x90, 0xe8, 0xe2, 0x14, 0xd1, 0x91, 0x77, 0x00, 0x6e, 0xbb, 0x4e, - 0x87, 0x6b, 0x5b, 0x45, 0xe3, 0xcf, 0x1d, 0x1d, 0x96, 0x26, 0x3f, 0x72, 0x9d, 0x8e, 0x50, 0xcf, - 0xb2, 0x6f, 0x8f, 0x90, 0x4c, 0xe5, 0x6f, 0xd6, 0xd3, 0xcb, 0x2e, 0x37, 0x10, 0x1d, 0x8d, 0x7a, - 0x7a, 0xdb, 0x4d, 0xa8, 0x05, 0x25, 0x1a, 0x69, 0xc3, 0x4c, 0xbd, 0xb7, 0xbb, 0x4b, 0xd9, 0x31, - 0x21, 0xd4, 0x5e, 0x63, 0xe2, 0x4a, 0x1e, 0x46, 0xff, 0xe2, 0x17, 0x41, 0x76, 0x0b, 0xf5, 0x97, - 0x5f, 0x67, 0xab, 0xe2, 0x3b, 0x87, 0x25, 0x61, 0x60, 0xc2, 0xe4, 0x3e, 0x5f, 0xd2, 0x27, 0x95, - 0x5e, 0x71, 
0xde, 0xe4, 0x1e, 0x8c, 0xf1, 0xa7, 0x2d, 0xe1, 0xa0, 0xf8, 0xc2, 0x80, 0x15, 0xc8, - 0x11, 0xfb, 0x3d, 0x9e, 0xf2, 0x52, 0xf2, 0x00, 0xf2, 0x2b, 0x8e, 0xd7, 0x68, 0xd1, 0x95, 0xaa, - 0x10, 0x24, 0x5e, 0x1c, 0xc0, 0x52, 0xa2, 0xf2, 0x7e, 0x69, 0xe0, 0xaf, 0x86, 0xa3, 0x0a, 0x16, - 0x12, 0x83, 0xfc, 0xd5, 0x0c, 0x3c, 0x1b, 0x7e, 0x7d, 0x79, 0x97, 0x76, 0x82, 0xbb, 0x76, 0xd0, - 0xd8, 0xa3, 0x9e, 0xe8, 0xa5, 0x89, 0x41, 0xbd, 0xf4, 0x85, 0x44, 0x2f, 0x5d, 0x8c, 0x7a, 0xc9, - 0x66, 0xcc, 0xac, 0x36, 0xe7, 0x96, 0xec, 0xb3, 0x41, 0xb5, 0x12, 0x0b, 0x20, 0x7a, 0xb4, 0x15, - 0x96, 0x6a, 0x2f, 0x0f, 0x68, 0x70, 0x84, 0x2c, 0x1c, 0xd3, 0xc2, 0xdf, 0x9a, 0x65, 0x75, 0x08, - 0x25, 0x77, 0xa4, 0x37, 0x30, 0x17, 0x71, 0xce, 0x0f, 0xe0, 0xcd, 0x3d, 0x84, 0xe7, 0x06, 0xf8, - 0xfd, 0xf3, 0xd1, 0x5e, 0xb7, 0xb7, 0x85, 0x54, 0x73, 0xcc, 0x68, 0xaf, 0xdb, 0xd1, 0x68, 0xb7, - 0xec, 0xf8, 0x68, 0xaf, 0xdb, 0xdb, 0x64, 0x85, 0x87, 0x30, 0xe0, 0xfe, 0xee, 0xcf, 0x0f, 0xe2, - 0xb6, 0x52, 0xe3, 0xc7, 0x7c, 0x4a, 0x28, 0x83, 0x2f, 0xc3, 0x44, 0xbd, 0x6b, 0x37, 0x68, 0xcb, - 0xd9, 0x09, 0x84, 0x45, 0xc0, 0x4b, 0x03, 0x58, 0x85, 0xb8, 0xe2, 0x05, 0x58, 0xfe, 0x54, 0xef, - 0x5c, 0x21, 0x0e, 0xfb, 0xc2, 0xcd, 0xda, 0x5d, 0x61, 0x14, 0x30, 0xe8, 0x0b, 0x37, 0x6b, 0x77, - 0x85, 0x00, 0xd3, 0x6d, 0x6b, 0x02, 0x4c, 0xed, 0x2e, 0xe9, 0xc2, 0xf4, 0x26, 0xf5, 0x3c, 0x7b, - 0xc7, 0xf5, 0xda, 0x5c, 0xcd, 0xca, 0x7d, 0x28, 0x2f, 0x0d, 0xe2, 0xa7, 0x11, 0x70, 0xed, 0x62, - 0x20, 0x61, 0x56, 0x5c, 0x37, 0x1b, 0xe3, 0xcf, 0xfa, 0x64, 0xd9, 0x09, 0xb6, 0x7b, 0x8d, 0x7d, - 0x1a, 0x2c, 0xcc, 0x1e, 0xdb, 0x27, 0x21, 0x2e, 0xef, 0x93, 0x6d, 0xf9, 0x53, 0xed, 0x93, 0x10, - 0x87, 0x4d, 0x03, 0x11, 0xa8, 0x80, 0x1c, 0x3b, 0x0d, 0x38, 0x22, 0x9f, 0x06, 0xfd, 0x22, 0x16, - 0x90, 0x3d, 0x28, 0x2c, 0xbb, 0xbd, 0x0e, 0x93, 0x6b, 0xbb, 0xb6, 0xe3, 0x2d, 0xcc, 0x21, 0xdb, - 0x57, 0x06, 0x7d, 0xb0, 0x82, 0xce, 0xed, 0xef, 0xb7, 0x19, 0x84, 0x89, 0xce, 0x0c, 0xa4, 0x3e, - 0x98, 0xa8, 0xa8, 0xa4, 0x09, 0x93, 0x38, 0x95, 
0x2b, 0xf4, 0xa1, 0xdb, 0xf5, 0x17, 0xe6, 0xb1, - 0xa2, 0x0b, 0xc7, 0x2d, 0x0a, 0x8e, 0xcd, 0x5f, 0xe6, 0x71, 0x69, 0x58, 0x4d, 0x84, 0xa8, 0x5a, - 0x6c, 0x05, 0xd1, 0xf8, 0xc7, 0xa3, 0x50, 0x3a, 0x86, 0x19, 0xb9, 0x2f, 0xcf, 0x26, 0x2e, 0x01, - 0xbc, 0x36, 0xdc, 0x37, 0x5c, 0x39, 0xf6, 0xd8, 0x7a, 0x0b, 0xa6, 0xef, 0x29, 0x46, 0x02, 0xa1, - 0xd1, 0x06, 0xd2, 0xa8, 0xe6, 0x03, 0x96, 0xd3, 0x34, 0x63, 0xa8, 0x8b, 0x7f, 0x9c, 0x83, 0x11, - 0x14, 0x2c, 0x5e, 0x84, 0x5c, 0xbd, 0xb7, 0xad, 0x3e, 0x74, 0xf9, 0xda, 0x76, 0xcd, 0x4a, 0xc9, - 0xdb, 0x30, 0x29, 0xdc, 0x71, 0x94, 0xdb, 0x29, 0x76, 0x92, 0xf4, 0xdd, 0x89, 0xfb, 0x42, 0x28, - 0xe8, 0xe4, 0x3d, 0x28, 0xd4, 0x9c, 0x2e, 0x6d, 0x39, 0x1d, 0xaa, 0x58, 0xf6, 0xe3, 0x58, 0x76, - 0x05, 0x3c, 0xf1, 0xf8, 0xa5, 0x12, 0xe8, 0x8e, 0x43, 0x23, 0xc3, 0x3b, 0x0e, 0xbd, 0x07, 0x85, - 0x0a, 0xdd, 0x71, 0x3a, 0x8e, 0xe8, 0x9f, 0xd1, 0xa8, 0xe2, 0x66, 0x08, 0xd7, 0xa9, 0x35, 0x02, - 0xb2, 0x0c, 0x53, 0x26, 0xed, 0xba, 0xbe, 0x13, 0xb8, 0xde, 0xc1, 0x96, 0x59, 0x15, 0x06, 0x25, - 0xa8, 0xa0, 0xf3, 0xc2, 0x02, 0xab, 0xe7, 0xa9, 0x27, 0x91, 0x4e, 0x42, 0x36, 0x60, 0x36, 0x02, - 0xe8, 0x86, 0x58, 0xe2, 0xa5, 0x23, 0xe4, 0x93, 0x34, 0xa1, 0x4e, 0x92, 0xea, 0xdf, 0x64, 0xd2, - 0x1d, 0x61, 0x90, 0x1d, 0xff, 0x26, 0x8f, 0xee, 0xa4, 0x7f, 0x93, 0x49, 0x77, 0x8c, 0x5f, 0xcb, - 0xc1, 0x99, 0x3e, 0x5b, 0x1b, 0xd9, 0xd0, 0xa7, 0xeb, 0x8b, 0x83, 0x77, 0xc2, 0xe3, 0xa7, 0xe9, - 0x3a, 0x14, 0x57, 0xef, 0xe0, 0x85, 0x9e, 0xbf, 0x23, 0xaf, 0x94, 0xa5, 0x10, 0x8a, 0xcd, 0xa7, - 0xfb, 0xe8, 0x8c, 0x21, 0xdf, 0x9f, 0x1b, 0x5a, 0xd0, 0x94, 0x04, 0xe5, 0xe2, 0x0f, 0x65, 0xc5, - 0xbc, 0x8d, 0x45, 0xb8, 0xcc, 0x9c, 0x28, 0xc2, 0xe5, 0x17, 0xa1, 0xb0, 0x7a, 0x87, 0xab, 0xdb, - 0xd6, 0x6c, 0x7f, 0x4f, 0xcc, 0x29, 0xec, 0x42, 0xba, 0x2f, 0xdf, 0x4d, 0xf6, 0x6c, 0xed, 0x62, - 0xab, 0x51, 0x90, 0x2d, 0x98, 0xe3, 0xdf, 0xe6, 0xec, 0x38, 0x0d, 0x1e, 0x28, 0xcf, 0xb1, 0x5b, - 0x62, 0x86, 0xbd, 0x78, 0x74, 0x58, 0x2a, 0xd1, 0x7d, 0x74, 0x33, 0x11, 0xe5, 0x96, 
0x8f, 0x08, - 0xaa, 0xbf, 0x49, 0x0a, 0xbd, 0x1a, 0x76, 0xcb, 0x9c, 0xc0, 0x0a, 0x59, 0x6d, 0xac, 0x6e, 0x86, - 0xcb, 0x91, 0x8c, 0x3f, 0x1c, 0x85, 0xc5, 0xfe, 0x62, 0x17, 0x79, 0x5f, 0x1f, 0xc0, 0x0b, 0xc7, - 0x0a, 0x6a, 0xc7, 0x8f, 0xe1, 0x97, 0x60, 0x7e, 0xb5, 0x13, 0x50, 0xaf, 0xeb, 0x39, 0x32, 0x5c, - 0xd7, 0x9a, 0xeb, 0x4b, 0xb7, 0x1e, 0x54, 0xb2, 0xd3, 0xb0, 0x5c, 0x38, 0xa8, 0xa1, 0x8f, 0x94, - 0xaa, 0x64, 0x4f, 0xe3, 0x40, 0x56, 0x61, 0x5a, 0x81, 0xb7, 0x7a, 0xbb, 0xea, 0xe3, 0xb8, 0xca, - 0xb3, 0xd5, 0x53, 0x7d, 0x1e, 0x62, 0x44, 0xe8, 0x3a, 0x14, 0xd8, 0x81, 0xd3, 0xb8, 0xfd, 0xe0, - 0x4e, 0x5d, 0x0c, 0x27, 0x77, 0x1d, 0x42, 0xa8, 0xf5, 0xd1, 0xa3, 0x7d, 0x4d, 0x6e, 0x8a, 0x90, - 0x17, 0x7f, 0xf1, 0x44, 0x3b, 0xe1, 0xe7, 0x01, 0xa2, 0xa5, 0xa4, 0xba, 0xde, 0x47, 0x4b, 0x4f, - 0xf7, 0x0e, 0x94, 0x50, 0xb2, 0x06, 0x33, 0xd1, 0xaf, 0x7b, 0x8f, 0x3a, 0xd4, 0x13, 0x4d, 0x45, - 0x15, 0xac, 0xb2, 0x72, 0x5d, 0x56, 0xa6, 0x8a, 0xe2, 0x31, 0x32, 0xb2, 0x04, 0xf9, 0x07, 0xae, - 0xb7, 0xbf, 0xc3, 0xc6, 0x78, 0x24, 0xba, 0x2c, 0x3c, 0x12, 0x30, 0x55, 0x28, 0x96, 0x78, 0x6c, - 0xb9, 0xac, 0x76, 0x1e, 0x3a, 0x9e, 0x8b, 0x06, 0x05, 0xaa, 0x49, 0x1d, 0x8d, 0xc0, 0x5a, 0xd0, - 0x93, 0x08, 0x4c, 0x2e, 0xc1, 0x68, 0xb9, 0x11, 0xb8, 0x9e, 0xd8, 0xfe, 0xf8, 0x4c, 0x61, 0x00, - 0x6d, 0xa6, 0x30, 0x00, 0xeb, 0x44, 0xb6, 0x27, 0x8d, 0x47, 0x9d, 0xa8, 0x6f, 0x44, 0xac, 0x94, - 0x5d, 0x76, 0x4c, 0xba, 0x83, 0xda, 0x51, 0x2d, 0x7e, 0xeb, 0x4e, 0x42, 0xaf, 0x2e, 0xd0, 0x8c, - 0x1f, 0x86, 0xbe, 0x53, 0x9e, 0x49, 0x97, 0x27, 0x9b, 0xf2, 0xeb, 0xf6, 0x10, 0x53, 0xfe, 0xf5, - 0xd0, 0xe7, 0x50, 0x0d, 0x63, 0x84, 0x10, 0x55, 0xae, 0x11, 0xde, 0x87, 0xfa, 0xfc, 0xcb, 0x9d, - 0x64, 0xfe, 0xfd, 0xbd, 0xfc, 0x49, 0xe6, 0x9f, 0xe8, 0xdf, 0xec, 0xb0, 0xfd, 0x9b, 0x1b, 0xaa, - 0x7f, 0xd9, 0xa1, 0x12, 0x06, 0x0f, 0xae, 0xd9, 0x81, 0xb6, 0x23, 0x86, 0x11, 0x9f, 0xad, 0xae, - 0xad, 0x05, 0xd8, 0xd3, 0x49, 0x14, 0x21, 0x01, 0x39, 0x8c, 0x26, 0x85, 0x84, 0x18, 0xbd, 0x8a, - 0xce, 0x36, 0x02, 0x79, 
0xe6, 0xd7, 0xd1, 0x83, 0x4d, 0x4c, 0x36, 0x6e, 0x6e, 0x22, 0xc5, 0x04, - 0xee, 0xdc, 0xa6, 0xbd, 0x4f, 0x68, 0x44, 0xf1, 0x79, 0x3e, 0x7e, 0xa2, 0x79, 0xce, 0x2d, 0xac, - 0xbd, 0x75, 0x77, 0xd7, 0x91, 0x7e, 0x4e, 0xd2, 0xc2, 0xda, 0xb3, 0x5a, 0x0c, 0x1a, 0xb3, 0xb0, - 0xe6, 0xa8, 0xe4, 0x32, 0x8c, 0xb1, 0x1f, 0xd5, 0x8a, 0xb0, 0x81, 0x40, 0xa5, 0x07, 0x12, 0xe9, - 0xce, 0x65, 0x1c, 0x49, 0x56, 0xb3, 0xda, 0xb6, 0x9d, 0x96, 0x08, 0x74, 0x13, 0x55, 0x43, 0x19, - 0x34, 0x5e, 0x0d, 0xa2, 0x92, 0x06, 0x14, 0x4c, 0xba, 0x53, 0xf3, 0xdc, 0x80, 0x36, 0x02, 0xda, - 0x14, 0x17, 0x3d, 0xa9, 0xeb, 0x58, 0x76, 0x5d, 0x7e, 0x89, 0x45, 0x3f, 0xa4, 0xcc, 0x77, 0x0e, - 0x4b, 0xc0, 0x40, 0xdc, 0x73, 0x91, 0x89, 0x3c, 0x6c, 0xfc, 0xbb, 0x92, 0x58, 0x3d, 0xd8, 0x54, - 0xa6, 0xe4, 0x1b, 0x6c, 0xab, 0x0f, 0xbb, 0x24, 0xaa, 0xac, 0xd0, 0xa7, 0xb2, 0x37, 0x52, 0x2b, - 0x2b, 0x29, 0xbd, 0x9d, 0x5a, 0x69, 0x6a, 0x25, 0xe4, 0x1d, 0x98, 0x5c, 0xa9, 0xae, 0xb8, 0x9d, - 0x1d, 0x67, 0xb7, 0xbe, 0x56, 0xc6, 0xdb, 0xa2, 0x90, 0xd7, 0x1a, 0x8e, 0xd5, 0x40, 0xb8, 0xe5, - 0xef, 0xd9, 0x5a, 0xec, 0x85, 0x08, 0x9f, 0xdc, 0x82, 0x69, 0xf9, 0xd3, 0xa4, 0x3b, 0x4c, 0x5e, - 0x9b, 0x56, 0x3c, 0x9d, 0x43, 0x0e, 0xac, 0x23, 0x74, 0x91, 0x2d, 0x46, 0xc6, 0x26, 0x63, 0x85, - 0x76, 0x5b, 0xee, 0x01, 0xfb, 0xbc, 0x4d, 0x87, 0x7a, 0x78, 0x2d, 0x14, 0x93, 0xb1, 0x19, 0x96, - 0x58, 0x81, 0xa3, 0x5b, 0x7e, 0xe8, 0x44, 0x4c, 0xf4, 0x13, 0x53, 0xfc, 0xbe, 0xe3, 0x3b, 0xdb, - 0x4e, 0xcb, 0x09, 0x0e, 0xf0, 0x42, 0x28, 0x64, 0x1f, 0xb9, 0x2e, 0x1e, 0x86, 0xa5, 0xaa, 0xe8, - 0x97, 0x20, 0x35, 0x7e, 0x25, 0x0b, 0xcf, 0x0d, 0x52, 0x8e, 0x90, 0xba, 0xbe, 0x0f, 0x5e, 0x1c, - 0x42, 0xa1, 0x72, 0xfc, 0x4e, 0xb8, 0xda, 0xe7, 0x9e, 0x81, 0x9d, 0x11, 0xbb, 0x67, 0xa8, 0x9d, - 0x11, 0xbb, 0x71, 0x3c, 0x14, 0xdb, 0xdc, 0xc7, 0x8d, 0x02, 0x70, 0x03, 0x26, 0x56, 0xdc, 0x4e, - 0x40, 0x1f, 0x07, 0xb1, 0x98, 0x37, 0x1c, 0x18, 0x8f, 0x80, 0x20, 0x51, 0x8d, 0x7f, 0x9f, 0x85, - 0x73, 0x03, 0xb5, 0x03, 0x64, 0x53, 0xef, 0xb5, 0x4b, 0xc3, 
0xa8, 0x14, 0x8e, 0xef, 0xb6, 0xa5, - 0x84, 0xc5, 0xf0, 0xb1, 0x5e, 0xaa, 0x8b, 0xff, 0x7d, 0x46, 0x74, 0xd2, 0x67, 0x60, 0x1c, 0xab, - 0x0a, 0xbb, 0x88, 0x6b, 0xe1, 0x71, 0x17, 0x76, 0x74, 0x2d, 0x3c, 0x47, 0x23, 0xd7, 0x21, 0xbf, - 0x62, 0xb7, 0x5a, 0x4a, 0x44, 0x20, 0xbc, 0xe0, 0x37, 0x10, 0x16, 0x33, 0x7b, 0x97, 0x88, 0xec, - 0xd8, 0xe2, 0x7f, 0x2b, 0x67, 0x05, 0x6e, 0x96, 0x82, 0x2c, 0x76, 0x5c, 0x28, 0xc8, 0x18, 0xfe, - 0xbc, 0xe1, 0x86, 0x31, 0x47, 0x78, 0xf8, 0x73, 0x06, 0xd0, 0xc2, 0x9f, 0x33, 0x80, 0xf1, 0xab, - 0x39, 0x78, 0x7e, 0xb0, 0x8a, 0x8b, 0x6c, 0xe9, 0x43, 0xf0, 0xea, 0x50, 0x8a, 0xb1, 0xe3, 0xc7, - 0x40, 0x26, 0x13, 0xe0, 0x1d, 0x72, 0x31, 0xe9, 0x6a, 0xf8, 0xdd, 0xc3, 0x92, 0xe2, 0x4b, 0x71, - 0xdb, 0x75, 0x3a, 0xca, 0x9b, 0xec, 0xd7, 0x13, 0x87, 0xfa, 0xe4, 0xd2, 0x8d, 0xe1, 0xbe, 0x2c, - 0xa2, 0xe3, 0xfb, 0xca, 0xb0, 0xc2, 0xc0, 0x17, 0xa0, 0x18, 0x27, 0x25, 0x17, 0x60, 0x04, 0x3f, - 0x40, 0xf1, 0x97, 0x8c, 0x71, 0xc0, 0xf2, 0xc5, 0xbb, 0x62, 0xee, 0x60, 0x90, 0x24, 0xd5, 0xa1, - 0x5f, 0x50, 0x8a, 0x20, 0x49, 0x5a, 0x34, 0x00, 0x3d, 0x48, 0x92, 0x4a, 0x64, 0xfc, 0x49, 0x06, - 0xce, 0xf6, 0xd5, 0x51, 0x90, 0x9a, 0x3e, 0x60, 0x2f, 0x1f, 0xa7, 0xd4, 0x38, 0x76, 0xac, 0x16, - 0x7f, 0x42, 0xce, 0xfd, 0x77, 0xa1, 0x50, 0xef, 0x6d, 0xc7, 0xaf, 0x76, 0x3c, 0x84, 0x99, 0x02, - 0x57, 0x4f, 0x30, 0x15, 0x9f, 0xb5, 0x5f, 0x7a, 0xc1, 0x0b, 0xd3, 0x45, 0xc5, 0x5e, 0x3a, 0x8c, - 0xe2, 0x91, 0x0c, 0x12, 0xa5, 0x13, 0x19, 0xbf, 0x9c, 0x4d, 0xbf, 0x23, 0xdf, 0x5a, 0xa9, 0x9d, - 0xe4, 0x8e, 0x7c, 0x6b, 0xa5, 0x76, 0x7c, 0xdb, 0xff, 0x2b, 0xd9, 0x76, 0x6e, 0x54, 0xc4, 0x77, - 0x3c, 0xf9, 0xf6, 0x21, 0x8d, 0x8a, 0xc4, 0xee, 0xe8, 0xc7, 0x8c, 0x8a, 0x04, 0x32, 0x79, 0x03, - 0x26, 0xd6, 0x5d, 0x1e, 0xbf, 0x49, 0xb6, 0x98, 0x87, 0xb9, 0x90, 0x40, 0x75, 0x7b, 0x0c, 0x31, - 0xd9, 0xb5, 0x44, 0x1f, 0x78, 0x69, 0x16, 0x8e, 0xd7, 0x92, 0xd8, 0x74, 0xd1, 0x5f, 0x08, 0x74, - 0x32, 0xe3, 0x1f, 0x8d, 0x82, 0x71, 0xbc, 0x7e, 0x93, 0x7c, 0xa0, 0xf7, 0xdd, 0x95, 0xa1, 0x35, - 
0xa3, 0x43, 0x6d, 0xb9, 0xe5, 0x5e, 0xd3, 0xa1, 0x9d, 0x86, 0x1e, 0x7c, 0x49, 0xc0, 0xd4, 0x2d, - 0x50, 0xe2, 0x7d, 0x9c, 0x60, 0x02, 0x8b, 0xff, 0x32, 0x17, 0x2d, 0xb5, 0xd8, 0xd1, 0x98, 0xf9, - 0x18, 0x47, 0x23, 0xb9, 0x03, 0x45, 0x15, 0xa2, 0xe8, 0xd8, 0x50, 0x72, 0xd1, 0x18, 0xc5, 0x3e, - 0x2a, 0x41, 0xa8, 0x9f, 0xaf, 0xb9, 0xe1, 0xcf, 0xd7, 0x98, 0x8e, 0x6f, 0xe4, 0x64, 0x3a, 0x3e, - 0x11, 0xac, 0xc9, 0x17, 0x87, 0xd6, 0xa8, 0x1e, 0xac, 0x29, 0xe5, 0xe0, 0x52, 0xd1, 0x65, 0xbc, - 0x29, 0xfc, 0xa9, 0x84, 0x5b, 0x09, 0xe3, 0x4d, 0x71, 0xfa, 0xb4, 0x78, 0x53, 0x21, 0x09, 0x3b, - 0x00, 0xcd, 0x5e, 0x87, 0xe7, 0xd9, 0x18, 0x8f, 0x0e, 0x40, 0xaf, 0xd7, 0xb1, 0xe2, 0xb9, 0x36, - 0x42, 0x44, 0xe3, 0x9f, 0x8e, 0xa4, 0x0b, 0x07, 0x91, 0x0a, 0xfc, 0x04, 0xc2, 0x41, 0x48, 0xf4, - 0xe9, 0xcc, 0xd4, 0x2d, 0x98, 0x93, 0x96, 0xc5, 0x58, 0x7b, 0x93, 0x7a, 0x5b, 0xe6, 0xba, 0x18, - 0x62, 0x54, 0x39, 0x85, 0x36, 0xc9, 0x5d, 0x51, 0x6e, 0xf5, 0x3c, 0x4d, 0xe5, 0x94, 0x42, 0xbf, - 0xf8, 0x4f, 0xa4, 0x46, 0x4d, 0x1d, 0x04, 0xf4, 0x02, 0xcf, 0xa4, 0x0d, 0x42, 0xaf, 0xa7, 0x0d, - 0xa3, 0x4e, 0xc2, 0xf7, 0xde, 0x50, 0xfb, 0xb9, 0xa5, 0xcb, 0x8a, 0xaa, 0xc6, 0x54, 0xe7, 0x12, - 0x23, 0x22, 0xbb, 0x70, 0x36, 0x12, 0xa5, 0x95, 0x9b, 0x02, 0x72, 0xe4, 0x0d, 0xbe, 0x74, 0x74, - 0x58, 0x7a, 0x59, 0x11, 0xc5, 0xd5, 0x0b, 0x47, 0x8c, 0x7b, 0x7f, 0x5e, 0x6c, 0xbf, 0x5d, 0xf6, - 0xec, 0x4e, 0x63, 0x4f, 0x99, 0xf3, 0xb8, 0xdf, 0x6e, 0x23, 0x34, 0x11, 0x72, 0x26, 0x42, 0x36, - 0xfe, 0x76, 0x36, 0x5d, 0x25, 0x21, 0x5e, 0x3a, 0x4e, 0xa0, 0x92, 0xe0, 0x14, 0xc7, 0x9f, 0x12, - 0xff, 0x48, 0x9e, 0x12, 0x2f, 0xc3, 0xf8, 0x26, 0xed, 0xd8, 0x9d, 0x30, 0x94, 0x13, 0x5a, 0x5c, - 0x04, 0x1c, 0x64, 0xca, 0x32, 0xf2, 0x3e, 0x90, 0x9a, 0xed, 0xd1, 0x4e, 0xb0, 0xe2, 0xb6, 0xbb, - 0xb6, 0x17, 0xb4, 0x31, 0x13, 0x09, 0x3f, 0x1a, 0x5e, 0x38, 0x3a, 0x2c, 0x9d, 0xeb, 0x62, 0xa9, - 0xd5, 0x50, 0x8a, 0xd5, 0x88, 0x80, 0x49, 0x62, 0x72, 0x15, 0xc6, 0xa5, 0x21, 0x41, 0x2e, 0x8a, - 0xee, 0x98, 0x34, 0x22, 0x90, 0x58, 
0xc6, 0xbf, 0x1c, 0x85, 0xf3, 0xc7, 0x3d, 0xeb, 0x90, 0x1d, - 0x80, 0x7b, 0x9d, 0x6d, 0xd7, 0xf6, 0x9a, 0x4e, 0x67, 0x57, 0xf8, 0x5c, 0xde, 0x18, 0xf2, 0x4d, - 0xe8, 0x4a, 0x44, 0xc9, 0x0a, 0xb9, 0x87, 0xab, 0x1b, 0xc2, 0x4c, 0x85, 0x33, 0xf9, 0x2a, 0xe4, - 0x4d, 0xda, 0x70, 0x1f, 0x52, 0xa1, 0xba, 0x9b, 0x5c, 0xfa, 0xec, 0xb0, 0xb5, 0x48, 0x3a, 0xac, - 0x03, 0x5d, 0xff, 0x3c, 0x01, 0x31, 0x43, 0x9e, 0xe4, 0x6b, 0x30, 0xc9, 0x13, 0xce, 0x94, 0x77, - 0x02, 0xa1, 0xde, 0x3b, 0x3e, 0x74, 0x47, 0x86, 0x6d, 0x92, 0x3c, 0x85, 0x8d, 0x65, 0xef, 0x68, - 0xae, 0x04, 0x3c, 0x74, 0x87, 0xc2, 0x72, 0xf1, 0x3f, 0xcb, 0xc2, 0xb4, 0xde, 0x60, 0xb2, 0x0e, - 0xc5, 0x6a, 0xc7, 0x09, 0x1c, 0xbb, 0xa5, 0x9b, 0x9a, 0x8a, 0x3b, 0xa6, 0xc3, 0xcb, 0xac, 0x54, - 0x93, 0xd3, 0x04, 0x25, 0x9b, 0x33, 0x6c, 0xe8, 0xfc, 0x80, 0x5b, 0x38, 0xf0, 0x48, 0xab, 0x62, - 0x11, 0xbf, 0xc0, 0x03, 0xfb, 0x46, 0xa5, 0x16, 0x8f, 0x6d, 0xac, 0x47, 0x91, 0x8c, 0x13, 0x93, - 0x87, 0x40, 0xee, 0xf6, 0xfc, 0x80, 0x97, 0x50, 0x6f, 0x99, 0xee, 0xb8, 0xde, 0x30, 0x31, 0x3b, - 0x5e, 0x15, 0x9d, 0xf3, 0x7c, 0xbb, 0xe7, 0x07, 0x96, 0x27, 0xc8, 0xad, 0x6d, 0xa4, 0x8f, 0x75, - 0x52, 0x4a, 0x0d, 0x8b, 0x77, 0xa1, 0xa0, 0x8e, 0x1a, 0x5a, 0x7c, 0x39, 0x6d, 0x47, 0xda, 0xde, - 0x73, 0x8b, 0x2f, 0x06, 0x30, 0x39, 0x9c, 0x3c, 0x27, 0x82, 0x5c, 0x65, 0x23, 0xc3, 0xa8, 0x28, - 0x98, 0x95, 0xf1, 0x23, 0x19, 0x38, 0x9d, 0x6e, 0x2d, 0x44, 0x3e, 0x8a, 0xbd, 0x6a, 0x66, 0x06, - 0xbd, 0xf9, 0x4a, 0x13, 0xa3, 0x8f, 0xf7, 0xae, 0x69, 0xfc, 0xe5, 0x91, 0x84, 0x94, 0x95, 0xc2, - 0x91, 0xdc, 0x4a, 0x1d, 0xc7, 0x8c, 0x72, 0x2e, 0x26, 0xc7, 0x31, 0x75, 0xf4, 0xde, 0x86, 0x69, - 0x64, 0x1c, 0x4d, 0x2e, 0x45, 0x1f, 0xca, 0x3f, 0x39, 0x9a, 0x5a, 0x66, 0x0c, 0x97, 0x54, 0x81, - 0x20, 0x64, 0xd9, 0x0d, 0x14, 0xe7, 0x72, 0xe5, 0xa2, 0xc9, 0x39, 0x6c, 0xbb, 0x81, 0xa5, 0xba, - 0x99, 0xa7, 0x10, 0x91, 0xcf, 0xc3, 0x94, 0x1c, 0xce, 0x15, 0xbc, 0xd5, 0x8c, 0xe0, 0x30, 0xe2, - 0x7d, 0x48, 0xae, 0x45, 0x0b, 0x45, 0x51, 0x53, 0x47, 0x24, 0x6d, 0x1e, 
0x6e, 0x48, 0x00, 0x69, - 0xb3, 0x1c, 0x0c, 0x11, 0xd3, 0xe9, 0x15, 0x31, 0xfb, 0x9e, 0xe5, 0x29, 0xa6, 0x24, 0xad, 0x65, - 0x07, 0xb1, 0xa9, 0x17, 0xe7, 0x4d, 0x76, 0x61, 0x4a, 0x49, 0x3d, 0x55, 0x0e, 0x86, 0xc8, 0x7c, - 0xf6, 0xb2, 0xa8, 0xec, 0xac, 0x9a, 0xcf, 0x2a, 0x59, 0x95, 0xce, 0xd7, 0xf8, 0x89, 0x2c, 0x4c, - 0xf3, 0xdb, 0x22, 0x37, 0x19, 0x7b, 0x6a, 0x6d, 0xfb, 0xde, 0xd2, 0x6c, 0xfb, 0x64, 0x78, 0x71, - 0xb5, 0x69, 0x43, 0x59, 0x62, 0xef, 0x01, 0x49, 0xd2, 0x10, 0x13, 0x0a, 0x2a, 0x74, 0xb0, 0x1d, - 0xde, 0xb5, 0x28, 0x12, 0xbd, 0xb8, 0xac, 0xa3, 0x65, 0xa5, 0x6f, 0x6a, 0x3c, 0x8c, 0x1f, 0xcf, - 0xc2, 0x94, 0x62, 0x89, 0xfd, 0xd4, 0x76, 0xfc, 0x17, 0xb4, 0x8e, 0x5f, 0x08, 0xa3, 0x6b, 0x84, - 0x2d, 0x1b, 0xaa, 0xdf, 0x7b, 0x30, 0x9b, 0x20, 0x89, 0x1b, 0xb4, 0x67, 0x86, 0x31, 0x68, 0x7f, - 0x3d, 0x19, 0xd6, 0x9a, 0x27, 0xb5, 0x0b, 0x83, 0x9c, 0xaa, 0x71, 0xb4, 0x7f, 0x2a, 0x0b, 0xf3, - 0xe2, 0x17, 0xe6, 0x81, 0xe0, 0xea, 0x92, 0xa7, 0x76, 0x2c, 0xca, 0xda, 0x58, 0x94, 0xf4, 0xb1, - 0x50, 0x1a, 0xd8, 0x7f, 0x48, 0x8c, 0x1f, 0x01, 0x58, 0xe8, 0x47, 0x30, 0x74, 0xd8, 0xad, 0x28, - 0xac, 0x47, 0x76, 0x88, 0xb0, 0x1e, 0xeb, 0x50, 0xc4, 0xaa, 0x84, 0x2b, 0x92, 0xbf, 0x65, 0x56, - 0x45, 0x27, 0xa1, 0xf4, 0xc1, 0x93, 0x75, 0x08, 0xff, 0x25, 0x3f, 0xa6, 0x75, 0x4f, 0x50, 0x92, - 0x5f, 0xcc, 0xc0, 0x34, 0x02, 0x57, 0x1f, 0x32, 0x71, 0x93, 0x31, 0x1b, 0x11, 0xf1, 0x1e, 0x42, - 0x6b, 0xbd, 0x7a, 0xe0, 0x39, 0x9d, 0x5d, 0x61, 0xae, 0xb7, 0x2d, 0xcc, 0xf5, 0xde, 0xe6, 0x66, - 0x86, 0x57, 0x1a, 0x6e, 0xfb, 0xea, 0xae, 0x67, 0x3f, 0x74, 0xb8, 0x93, 0x81, 0xdd, 0xba, 0x1a, - 0xe5, 0x62, 0xed, 0x3a, 0xb1, 0x2c, 0xa9, 0x82, 0x15, 0x9a, 0x42, 0xf2, 0x0f, 0xa5, 0x58, 0x6d, - 0xfc, 0x71, 0x40, 0xff, 0x22, 0xf2, 0x7d, 0x70, 0x86, 0xc7, 0x5f, 0x5e, 0x71, 0x3b, 0x81, 0xd3, - 0xe9, 0xb9, 0x3d, 0x7f, 0xd9, 0x6e, 0xec, 0xf7, 0xba, 0xbe, 0x88, 0xca, 0x83, 0x2d, 0x6f, 0x84, - 0x85, 0xd6, 0x36, 0x2f, 0xd5, 0x22, 0xe3, 0xa5, 0x33, 0x20, 0x6b, 0x30, 0xcb, 0x8b, 0xca, 0xbd, - 0xc0, 0xad, 
0x37, 0xec, 0x16, 0x13, 0x88, 0xc7, 0x91, 0x2b, 0xb7, 0x49, 0xea, 0x05, 0xae, 0xe5, - 0x73, 0xb8, 0xfa, 0x56, 0x90, 0x20, 0x22, 0x55, 0x98, 0x31, 0xa9, 0xdd, 0xbc, 0x6b, 0x3f, 0x5e, - 0xb1, 0xbb, 0x76, 0xc3, 0x09, 0x78, 0x42, 0x88, 0x1c, 0x57, 0x29, 0x78, 0xd4, 0x6e, 0x5a, 0x6d, - 0xfb, 0xb1, 0xd5, 0x10, 0x85, 0xfa, 0x7b, 0xb3, 0x46, 0x17, 0xb2, 0x72, 0x3a, 0x21, 0xab, 0x89, - 0x38, 0x2b, 0xa7, 0xd3, 0x9f, 0x55, 0x44, 0x27, 0x59, 0xf1, 0xcc, 0x5c, 0xdc, 0x6d, 0x12, 0xce, - 0x67, 0x2e, 0x66, 0x14, 0x56, 0x22, 0x9d, 0x17, 0xba, 0x50, 0xc6, 0x59, 0x29, 0x74, 0x6c, 0xe6, - 0x3d, 0xf0, 0x9c, 0x80, 0xaa, 0x2d, 0x9c, 0xc4, 0xcf, 0xc2, 0xfe, 0x47, 0x87, 0xd3, 0x7e, 0x4d, - 0x4c, 0x50, 0x46, 0xdc, 0x94, 0x46, 0x16, 0x12, 0xdc, 0xd2, 0x5b, 0x99, 0xa0, 0x0c, 0xb9, 0xa9, - 0xed, 0x9c, 0xc2, 0x76, 0x2a, 0xdc, 0xfa, 0x34, 0x34, 0x41, 0x49, 0x36, 0x58, 0xa7, 0x05, 0xec, - 0xe6, 0xee, 0x76, 0x84, 0x3f, 0xe7, 0x34, 0x7e, 0xda, 0x4b, 0x42, 0x6c, 0x28, 0x7a, 0xb2, 0xd8, - 0x4a, 0xf1, 0xee, 0x8c, 0x13, 0x93, 0xbf, 0x00, 0x33, 0x5b, 0x3e, 0xbd, 0x59, 0xad, 0xd5, 0x65, - 0xb8, 0x66, 0x7c, 0xde, 0x9a, 0x5e, 0xba, 0x76, 0xcc, 0xa6, 0x73, 0x45, 0xa5, 0xc1, 0xd4, 0xa6, - 0x7c, 0xdc, 0x7a, 0x3e, 0xb5, 0x76, 0x9c, 0xae, 0x1f, 0xc6, 0xbe, 0x57, 0xc7, 0x2d, 0x56, 0x95, - 0xb1, 0x06, 0xb3, 0x09, 0x36, 0x64, 0x1a, 0x80, 0x01, 0xad, 0xad, 0x8d, 0xfa, 0xea, 0x66, 0xf1, - 0x19, 0x52, 0x84, 0x02, 0xfe, 0x5e, 0xdd, 0x28, 0x2f, 0xaf, 0xaf, 0x56, 0x8a, 0x19, 0x32, 0x0b, - 0x53, 0x08, 0xa9, 0x54, 0xeb, 0x1c, 0x94, 0xe5, 0x19, 0xe9, 0xcc, 0x22, 0x5f, 0xba, 0x01, 0x5b, - 0x00, 0x78, 0xa6, 0x18, 0x7f, 0x3d, 0x0b, 0x67, 0xe5, 0xb1, 0x42, 0x83, 0x47, 0xae, 0xb7, 0xef, - 0x74, 0x76, 0x9f, 0xf2, 0xd3, 0xe1, 0xa6, 0x76, 0x3a, 0xbc, 0x14, 0x3b, 0xa9, 0x63, 0xad, 0x1c, - 0x70, 0x44, 0x7c, 0x7b, 0x02, 0xce, 0x0d, 0xa4, 0x22, 0xef, 0xb3, 0xd3, 0xdc, 0xa1, 0x9d, 0xa0, - 0xda, 0x6c, 0x51, 0x26, 0xa2, 0xba, 0xbd, 0x40, 0xf8, 0x0f, 0xbf, 0x88, 0x2f, 0x4a, 0x58, 0x68, - 0x39, 0xcd, 0x16, 0xb5, 0x02, 0x5e, 0xac, 0x4d, 
0xb7, 0x24, 0x35, 0x63, 0x19, 0xa6, 0x59, 0xae, - 0x76, 0x02, 0xea, 0x3d, 0x44, 0xbf, 0x9b, 0x90, 0xe5, 0x3e, 0xa5, 0x5d, 0xcb, 0x66, 0xa5, 0x96, - 0x23, 0x8a, 0x75, 0x96, 0x09, 0x6a, 0x72, 0x53, 0x61, 0x89, 0x52, 0xfe, 0x5d, 0xfb, 0xb1, 0xb0, - 0xdd, 0x17, 0xe9, 0x3f, 0x42, 0x96, 0x3c, 0x14, 0x46, 0xdb, 0x7e, 0x6c, 0x26, 0x49, 0xc8, 0x87, - 0x70, 0x4a, 0x1c, 0x40, 0x22, 0x54, 0xa3, 0x6c, 0x31, 0x0f, 0x04, 0xf9, 0x0a, 0xbb, 0x98, 0x49, - 0xf7, 0x5b, 0x19, 0x7e, 0x35, 0xad, 0xd5, 0xe9, 0x5c, 0xc8, 0x26, 0x3b, 0x90, 0x63, 0xdd, 0x71, - 0x97, 0xfa, 0xbe, 0x8c, 0x77, 0x22, 0x74, 0xb3, 0x6a, 0x67, 0x5a, 0x6d, 0x5e, 0x6e, 0xf6, 0xa5, - 0x24, 0x6b, 0x30, 0xfd, 0x80, 0x6e, 0xab, 0xe3, 0x33, 0x16, 0x6e, 0x55, 0xc5, 0x47, 0x74, 0xbb, - 0xff, 0xe0, 0xc4, 0xe8, 0x88, 0x83, 0x2f, 0xd4, 0x8f, 0x0f, 0xd6, 0xd9, 0xc5, 0xb9, 0x43, 0x3d, - 0xbc, 0xff, 0x8e, 0xe3, 0x66, 0xb0, 0x10, 0x49, 0xc8, 0x7a, 0xb9, 0xd0, 0x1d, 0x61, 0x88, 0x81, - 0x96, 0x80, 0x5b, 0xb1, 0x24, 0xc5, 0x49, 0xae, 0xe4, 0x6b, 0x30, 0x63, 0xba, 0xbd, 0xc0, 0xe9, - 0xec, 0xd6, 0xd9, 0x0d, 0x93, 0xee, 0xf2, 0x03, 0x29, 0x8a, 0x26, 0x1d, 0x2b, 0x15, 0x76, 0x51, - 0x1c, 0x68, 0xf9, 0x02, 0xaa, 0x9d, 0x08, 0x3a, 0x01, 0xf9, 0x2a, 0x4c, 0xf3, 0x90, 0x77, 0x61, - 0x05, 0x13, 0x5a, 0xa6, 0x42, 0xbd, 0xf0, 0xfe, 0x35, 0x61, 0x6a, 0x8d, 0xd0, 0xb4, 0x0a, 0x62, - 0xdc, 0xc8, 0x97, 0x45, 0x67, 0xd5, 0x9c, 0xce, 0x6e, 0x38, 0x8d, 0x01, 0x7b, 0xfe, 0x72, 0xd4, - 0x25, 0x5d, 0xf6, 0xb9, 0x72, 0x1a, 0xf7, 0xf1, 0x1b, 0x49, 0xf2, 0x21, 0x01, 0x9c, 0x2b, 0xfb, - 0xbe, 0xe3, 0x07, 0xc2, 0xcb, 0x7e, 0xf5, 0x31, 0x6d, 0xf4, 0x18, 0xf2, 0x03, 0xd7, 0xdb, 0xa7, - 0x1e, 0xf7, 0x5c, 0x1c, 0x5d, 0xbe, 0x72, 0x74, 0x58, 0x7a, 0xd5, 0x46, 0x44, 0x4b, 0x38, 0xe6, - 0x5b, 0x54, 0xa2, 0x5a, 0x8f, 0x38, 0xae, 0xd2, 0x86, 0xc1, 0x4c, 0xc9, 0x57, 0xe1, 0xf4, 0x8a, - 0xed, 0xd3, 0x6a, 0xc7, 0xa7, 0x1d, 0xdf, 0x09, 0x9c, 0x87, 0x54, 0x74, 0x2a, 0x1e, 0x7e, 0x79, - 0x1e, 0x46, 0xbc, 0x61, 0xfb, 0x6c, 0x61, 0x86, 0x28, 0x96, 0x18, 0x14, 0x35, 0x4a, 
0x79, 0x3a, - 0x17, 0x62, 0xc2, 0x74, 0xbd, 0xbe, 0x56, 0x71, 0xec, 0x70, 0x5d, 0x4d, 0x61, 0x7f, 0xbd, 0x8a, - 0x8f, 0x4b, 0xfe, 0x9e, 0xd5, 0x74, 0xec, 0x70, 0x41, 0xf5, 0xe9, 0xac, 0x18, 0x07, 0xe3, 0x30, - 0x03, 0xc5, 0xf8, 0x50, 0x92, 0x2f, 0xc1, 0x04, 0x77, 0xba, 0xa0, 0xfe, 0x9e, 0xd0, 0xbf, 0x48, - 0x1b, 0xfe, 0x10, 0xae, 0x13, 0x89, 0x48, 0x39, 0xdc, 0xa5, 0x83, 0xaa, 0xa6, 0x9e, 0x6b, 0xcf, - 0x98, 0x11, 0x33, 0xd2, 0x84, 0x02, 0x1f, 0x2d, 0x8a, 0x91, 0xf0, 0x63, 0xb1, 0x07, 0xd4, 0xa2, - 0x18, 0x7f, 0x6e, 0xe0, 0xcc, 0xe7, 0x04, 0x47, 0xd0, 0xaa, 0xd0, 0xb8, 0x2e, 0x03, 0xe4, 0x25, - 0xa1, 0x71, 0x16, 0xce, 0xf4, 0xf9, 0x66, 0xe3, 0x21, 0xea, 0x9c, 0xfb, 0xd4, 0x48, 0xbe, 0x04, - 0xf3, 0x48, 0xb8, 0xe2, 0x76, 0x3a, 0xb4, 0x11, 0xe0, 0x76, 0x24, 0xdf, 0x7f, 0x73, 0xdc, 0x4c, - 0x93, 0xb7, 0xb7, 0x11, 0x22, 0x58, 0xf1, 0x67, 0xe0, 0x54, 0x0e, 0xc6, 0xcf, 0x67, 0x61, 0x41, - 0xec, 0x70, 0x26, 0x6d, 0xb8, 0xa8, 0x7d, 0x7c, 0xca, 0x4f, 0xd4, 0x55, 0xed, 0x44, 0x7d, 0x31, - 0x0c, 0xf9, 0x99, 0xd6, 0xc8, 0x01, 0x07, 0xea, 0x2f, 0x67, 0xe0, 0xb9, 0x41, 0x44, 0xa1, 0x56, - 0x31, 0x93, 0xa6, 0x55, 0x24, 0x5d, 0x98, 0xc3, 0x01, 0x5d, 0xd9, 0xa3, 0x8d, 0x7d, 0x7f, 0xcd, - 0xf5, 0x03, 0xf4, 0x25, 0xce, 0xf6, 0xb1, 0xb6, 0x7a, 0x3d, 0xd5, 0xda, 0xea, 0x34, 0x9f, 0x65, - 0x0d, 0xe4, 0xc1, 0x73, 0x13, 0xec, 0xd3, 0x03, 0xdf, 0x4c, 0x63, 0x6d, 0xfc, 0x74, 0x96, 0x5d, - 0xd9, 0x82, 0xbd, 0x9a, 0x47, 0x77, 0xa8, 0x47, 0x3b, 0x0d, 0xfa, 0x3d, 0xe6, 0x13, 0xaa, 0x37, - 0x6e, 0x28, 0x0d, 0xc6, 0xb7, 0xa6, 0x61, 0x3e, 0x8d, 0x8c, 0xf5, 0x8b, 0x72, 0x69, 0xce, 0x4b, - 0x27, 0x7e, 0x71, 0x55, 0xfe, 0x66, 0x06, 0x0a, 0x75, 0xda, 0x70, 0x3b, 0xcd, 0x9b, 0x68, 0x0e, - 0x2b, 0x7a, 0xc7, 0xe6, 0x42, 0x03, 0x83, 0x5b, 0x3b, 0x31, 0x3b, 0xd9, 0xef, 0x1e, 0x96, 0xbe, - 0x38, 0xdc, 0x5d, 0xb5, 0xe1, 0xa2, 0xee, 0x33, 0xc0, 0xb4, 0x82, 0x61, 0x15, 0xfc, 0x6b, 0x4c, - 0xad, 0x5a, 0xb2, 0x0c, 0x53, 0x62, 0xc1, 0xba, 0x6a, 0xee, 0x04, 0x1e, 0x17, 0x55, 0x16, 0x24, - 0x9e, 0x4f, 0x35, 0x12, 
0x72, 0x1d, 0x72, 0x5b, 0x4b, 0x37, 0xc5, 0x28, 0xc8, 0xd4, 0x8c, 0x5b, - 0x4b, 0x37, 0x51, 0x21, 0xc6, 0x2e, 0x19, 0x53, 0xbd, 0x25, 0xcd, 0xd0, 0x74, 0x6b, 0xe9, 0x26, - 0xf9, 0x8b, 0x70, 0xaa, 0xe2, 0xf8, 0xa2, 0x0a, 0xee, 0x9f, 0xdc, 0xc4, 0xa8, 0x1c, 0x63, 0x7d, - 0xe6, 0xef, 0xe7, 0x52, 0xe7, 0xef, 0x0b, 0xcd, 0x90, 0x89, 0xc5, 0x9d, 0x9f, 0x9b, 0xf1, 0x1c, - 0x11, 0xe9, 0xf5, 0x90, 0x8f, 0x60, 0x1a, 0xdf, 0xc6, 0xd0, 0x65, 0x1b, 0x93, 0x93, 0x8d, 0xf7, - 0xa9, 0xf9, 0x33, 0xa9, 0x35, 0x2f, 0xf2, 0x68, 0x75, 0xe8, 0xf8, 0x8d, 0x89, 0xcc, 0xb4, 0x7b, - 0xbf, 0xc6, 0x99, 0xdc, 0x86, 0x19, 0x21, 0x80, 0xdd, 0xdb, 0xd9, 0xdc, 0xa3, 0x15, 0xfb, 0x40, - 0xd8, 0x88, 0xe2, 0x9d, 0x4e, 0x48, 0x6d, 0x96, 0xbb, 0x63, 0x05, 0x7b, 0xd4, 0x6a, 0xda, 0x9a, - 0xa8, 0x12, 0x23, 0x24, 0xdf, 0x80, 0xc9, 0x75, 0xb7, 0xc1, 0x64, 0x6f, 0xdc, 0x1b, 0xb8, 0xd9, - 0xe8, 0x07, 0x6c, 0x29, 0xb7, 0x38, 0x38, 0x26, 0x50, 0x7d, 0xf7, 0xb0, 0xf4, 0xd6, 0x49, 0xa7, - 0x8d, 0x52, 0x81, 0xa9, 0xd6, 0x46, 0x56, 0x20, 0xff, 0x80, 0x6e, 0xb3, 0xd6, 0xc6, 0xd3, 0x94, - 0x4b, 0xb0, 0x30, 0x28, 0x17, 0xbf, 0x34, 0x83, 0x72, 0x01, 0x23, 0x1e, 0xcc, 0x62, 0xff, 0xd4, - 0x6c, 0xdf, 0x7f, 0xe4, 0x7a, 0x4d, 0xcc, 0x0f, 0xd9, 0xcf, 0x22, 0x75, 0x29, 0xb5, 0xf3, 0x9f, - 0xe3, 0x9d, 0xdf, 0x55, 0x38, 0xa8, 0x22, 0x64, 0x82, 0x3d, 0xf9, 0x1a, 0x4c, 0x8b, 0xc8, 0x5f, - 0x77, 0x6f, 0x96, 0x71, 0x25, 0x14, 0xb4, 0xd8, 0x26, 0x7a, 0xa1, 0x7c, 0xaf, 0x42, 0x58, 0x18, - 0x43, 0xa7, 0xbd, 0x63, 0xeb, 0x0f, 0xcf, 0x2a, 0x09, 0xa9, 0xc1, 0x64, 0x85, 0x3e, 0x74, 0x1a, - 0x14, 0xe3, 0x2f, 0x08, 0x77, 0xc5, 0x30, 0xef, 0x71, 0x54, 0xc2, 0xb5, 0x31, 0x4d, 0x04, 0xf0, - 0x68, 0x0e, 0xba, 0xab, 0x49, 0x88, 0x48, 0x6e, 0x40, 0xae, 0x5a, 0xa9, 0x09, 0x6f, 0xc5, 0xd9, - 0x30, 0xbe, 0x5e, 0x4d, 0x66, 0x89, 0x45, 0x1b, 0x6e, 0xa7, 0xa9, 0xf9, 0x3a, 0x56, 0x2b, 0x35, - 0xb2, 0x03, 0x53, 0xd8, 0x01, 0x6b, 0xd4, 0xe6, 0x7d, 0x3b, 0xd3, 0xa7, 0x6f, 0xaf, 0xa4, 0xf6, - 0xed, 0x02, 0xef, 0xdb, 0x3d, 0x41, 0xad, 0xa5, 0xbd, 0x54, 
0xd9, 0x32, 0xa1, 0x56, 0xa4, 0xe2, - 0x95, 0xc9, 0x1a, 0x37, 0xd7, 0xd1, 0x46, 0x55, 0x08, 0xb5, 0x32, 0x73, 0x6f, 0x98, 0x3d, 0xb2, - 0xaf, 0x33, 0x74, 0x92, 0x0f, 0xf9, 0x02, 0x8c, 0xdc, 0xdb, 0x0f, 0x6c, 0xe1, 0x97, 0x28, 0xfb, - 0x91, 0x81, 0x64, 0xf3, 0x51, 0x0f, 0xe9, 0xee, 0x6b, 0x11, 0x9b, 0x91, 0x86, 0x0d, 0xc5, 0x9a, - 0xed, 0x35, 0x1f, 0xd9, 0x1e, 0x06, 0xc1, 0x99, 0xd3, 0x58, 0x28, 0x25, 0x7c, 0x28, 0xf6, 0x04, - 0x20, 0xf6, 0xc0, 0xa9, 0xb2, 0x20, 0xdf, 0x07, 0x67, 0x7d, 0x67, 0xb7, 0x63, 0x07, 0x3d, 0x8f, - 0x5a, 0x76, 0x6b, 0xd7, 0xf5, 0x9c, 0x60, 0xaf, 0x6d, 0xf9, 0x3d, 0x27, 0xa0, 0xe8, 0x20, 0x38, - 0x1d, 0xca, 0x8c, 0x75, 0x89, 0x57, 0x96, 0x68, 0x75, 0x86, 0x65, 0x9e, 0xf1, 0xd3, 0x0b, 0xc8, - 0x97, 0x61, 0x4a, 0xdd, 0x92, 0xfd, 0x85, 0x53, 0xe7, 0x73, 0x17, 0xa7, 0xc3, 0xab, 0x47, 0x7c, - 0x0b, 0x97, 0x19, 0x63, 0x94, 0x33, 0xc2, 0xd7, 0x33, 0xc6, 0x28, 0xbc, 0x88, 0x09, 0x67, 0x7c, - 0xae, 0xdf, 0xe8, 0x75, 0x9c, 0xc7, 0x98, 0x15, 0x58, 0xd8, 0x32, 0x2f, 0x9c, 0xd6, 0x8e, 0xbe, - 0x3a, 0x62, 0x6d, 0x6d, 0x54, 0xbf, 0xb4, 0xe5, 0x53, 0x4f, 0x98, 0x34, 0xcf, 0x73, 0xda, 0xad, - 0x8e, 0xf3, 0x38, 0x82, 0x86, 0xe9, 0xfc, 0x49, 0x71, 0xce, 0x9c, 0x15, 0xab, 0x40, 0x8c, 0xdc, - 0xdd, 0x9b, 0x65, 0x73, 0xbc, 0x56, 0xbd, 0x5f, 0x6f, 0xb9, 0x81, 0xb1, 0x07, 0xf3, 0x69, 0x5c, - 0xc9, 0x02, 0x8c, 0x8b, 0xdc, 0x74, 0x78, 0x38, 0xe6, 0x4d, 0xf9, 0x93, 0x3c, 0x0b, 0x13, 0x3b, - 0x8e, 0xe7, 0x07, 0x56, 0xcf, 0xe1, 0xf2, 0xc2, 0xa8, 0x99, 0x47, 0xc0, 0x96, 0xd3, 0x24, 0x67, - 0x21, 0x8f, 0x6f, 0x5c, 0xac, 0x2c, 0x87, 0x65, 0xe3, 0xec, 0xf7, 0x96, 0xd3, 0x34, 0xfe, 0xcb, - 0x0c, 0x1e, 0x41, 0xe4, 0x55, 0x0c, 0x22, 0x1b, 0xda, 0x9f, 0xa0, 0xfe, 0xd9, 0xee, 0xc6, 0x72, - 0xbd, 0x71, 0x14, 0xf2, 0x3a, 0x8c, 0xdd, 0xb4, 0x1b, 0x34, 0x34, 0x6b, 0x40, 0xe4, 0x1d, 0x84, - 0xa8, 0xca, 0x6a, 0x8e, 0xc3, 0xe4, 0x63, 0xbe, 0x34, 0xcb, 0x41, 0x40, 0x7d, 0xbe, 0x7f, 0xae, - 0x94, 0xa5, 0x29, 0x03, 0xca, 0xc7, 0x62, 0x49, 0xdb, 0x11, 0x42, 0xcc, 0x25, 0x2d, 0x95, 0x83, - 
0xf1, 0x87, 0x99, 0x68, 0x4f, 0x25, 0xaf, 0xc0, 0x88, 0x59, 0x0b, 0xbf, 0x9f, 0x87, 0xe5, 0x89, - 0x7d, 0x3e, 0x22, 0x90, 0x2f, 0xc3, 0x29, 0x85, 0x4f, 0xc2, 0x3f, 0xee, 0x65, 0x8c, 0x1a, 0xa3, - 0x7c, 0x49, 0xba, 0x93, 0x5c, 0x3a, 0x0f, 0xbc, 0x0c, 0x44, 0x05, 0x15, 0xda, 0x71, 0x38, 0x6f, - 0xa5, 0xb1, 0x2a, 0xef, 0x26, 0x22, 0xc4, 0x1b, 0x9b, 0xc6, 0x81, 0x07, 0x8d, 0x31, 0x7e, 0x23, - 0xa3, 0xed, 0x95, 0xe4, 0x82, 0x26, 0xe7, 0xe2, 0xba, 0x8e, 0x29, 0x05, 0xb8, 0xc4, 0xfb, 0x26, - 0x40, 0xb9, 0x17, 0xb8, 0xab, 0x1d, 0xcf, 0x6d, 0xb5, 0x44, 0xc4, 0x33, 0x1e, 0x8e, 0xa2, 0x17, - 0xb8, 0x16, 0x45, 0xb0, 0x16, 0x8e, 0x22, 0x44, 0x4e, 0x75, 0x25, 0xcc, 0x7d, 0x5c, 0x57, 0x42, - 0xe3, 0xe7, 0xb2, 0xda, 0x0e, 0xc3, 0xa4, 0x5c, 0x31, 0xe9, 0x55, 0x9b, 0xeb, 0xae, 0xf3, 0xd0, - 0xf2, 0x5b, 0xae, 0x16, 0x7c, 0x4f, 0xa0, 0x91, 0xbf, 0x9c, 0x81, 0xd3, 0xdc, 0x27, 0x6f, 0xa3, - 0xd7, 0xde, 0xa6, 0xde, 0x7d, 0xbb, 0xe5, 0x34, 0xa3, 0x10, 0xdd, 0x91, 0x01, 0xbe, 0x52, 0x4d, - 0x3a, 0x3e, 0xbf, 0x68, 0x73, 0x1f, 0x41, 0xab, 0x83, 0x85, 0xd6, 0xc3, 0xb0, 0x54, 0xbd, 0x68, - 0xa7, 0xd3, 0x93, 0x2a, 0x4c, 0xd6, 0x9c, 0x0e, 0xa6, 0x02, 0x8d, 0xa2, 0x58, 0xbc, 0xc2, 0x5d, - 0x6c, 0xd9, 0x14, 0x6e, 0xec, 0xd1, 0x01, 0x5b, 0xb7, 0x4a, 0x6b, 0xfc, 0x4a, 0x06, 0x5e, 0x38, - 0xf6, 0x83, 0xc9, 0x55, 0x18, 0x5f, 0x55, 0xd7, 0x3f, 0xb7, 0x04, 0x4a, 0xa6, 0xab, 0x94, 0x58, - 0xe4, 0x2b, 0x70, 0x4a, 0x65, 0xb5, 0xe9, 0xd9, 0x8e, 0xea, 0x4d, 0x9c, 0xd2, 0x01, 0x01, 0x43, - 0x89, 0x8b, 0xad, 0xe9, 0x4c, 0x8c, 0xff, 0x37, 0x03, 0x13, 0xa1, 0x3b, 0xd2, 0x53, 0x7a, 0x9d, - 0xb9, 0xa1, 0x5d, 0x67, 0x64, 0xae, 0x83, 0xb0, 0x55, 0xdc, 0xf4, 0x28, 0xe5, 0x0a, 0x3a, 0xa3, - 0x38, 0x6f, 0x21, 0xe0, 0x47, 0xb3, 0x30, 0xc9, 0xb6, 0x6a, 0xfe, 0xa6, 0xfd, 0xbd, 0x15, 0xf1, - 0x3d, 0x6c, 0xd7, 0x50, 0x31, 0xb9, 0xff, 0x6d, 0x06, 0xdf, 0x3a, 0x54, 0x0a, 0xd6, 0x1b, 0x0c, - 0xa4, 0xf6, 0x06, 0x3b, 0x51, 0x4d, 0x84, 0xf2, 0x08, 0xc5, 0xeb, 0xa2, 0x27, 0x44, 0x84, 0xe2, - 0x96, 0xc9, 0x60, 0xe4, 0x8b, 0x30, 
0xba, 0x85, 0x9a, 0x5b, 0x3d, 0xa2, 0x5e, 0xc8, 0x1f, 0x0b, - 0xf9, 0x7e, 0xdf, 0xf3, 0xf5, 0xf0, 0xd5, 0x9c, 0x90, 0xd4, 0x61, 0x7c, 0xc5, 0xa3, 0x76, 0x40, - 0x9b, 0xa2, 0x43, 0x86, 0x8a, 0x07, 0xd5, 0xe0, 0x24, 0xf1, 0x78, 0x50, 0x82, 0x13, 0xdb, 0xc7, - 0x48, 0xd4, 0x46, 0xb4, 0xda, 0xf1, 0x9f, 0xda, 0x41, 0x7f, 0x4f, 0x1b, 0xf4, 0x73, 0x89, 0x41, - 0xe7, 0xcd, 0x1b, 0x6a, 0xec, 0x7f, 0x33, 0x03, 0xa7, 0xd3, 0x09, 0xc9, 0x8b, 0x30, 0x76, 0x6f, - 0xb3, 0x16, 0x59, 0xca, 0x61, 0x53, 0xdc, 0x2e, 0xaa, 0x4d, 0x4c, 0x51, 0x44, 0x2e, 0xc3, 0xd8, - 0xfb, 0xe6, 0x4a, 0x64, 0x10, 0x86, 0x1b, 0xdc, 0xd7, 0x99, 0xe4, 0xa5, 0x9d, 0x6a, 0x02, 0x49, - 0x1d, 0xdb, 0xdc, 0x13, 0x1b, 0xdb, 0x9f, 0xca, 0xc2, 0x4c, 0xb9, 0xd1, 0xa0, 0xbe, 0x2f, 0x32, - 0x6c, 0x3d, 0xb5, 0x03, 0x9b, 0x1e, 0x6e, 0x51, 0x6b, 0xdb, 0x50, 0xa3, 0xfa, 0x5b, 0x19, 0x1e, - 0x3c, 0x95, 0x51, 0x3d, 0x74, 0xe8, 0xa3, 0xcd, 0x3d, 0x8f, 0xfa, 0x7b, 0x6e, 0xab, 0x39, 0x74, - 0x86, 0x57, 0x26, 0x33, 0x62, 0x16, 0x2a, 0xd5, 0xc0, 0x61, 0x07, 0x21, 0x9a, 0xcc, 0xc8, 0x33, - 0x55, 0x5d, 0x85, 0xf1, 0x72, 0xb7, 0xeb, 0xb9, 0x0f, 0xf9, 0xb2, 0x17, 0x01, 0xea, 0x6d, 0x0e, - 0xd2, 0x22, 0x60, 0x71, 0x10, 0xfb, 0x8c, 0x0a, 0xed, 0x1c, 0xa8, 0xe6, 0x69, 0x4d, 0xda, 0x51, - 0x2f, 0x25, 0x58, 0x6e, 0xd4, 0x81, 0xd4, 0x3c, 0xb7, 0xed, 0x06, 0xb4, 0xc9, 0xdb, 0x83, 0x81, - 0xc3, 0x8e, 0x8d, 0x35, 0xbc, 0xe9, 0x04, 0x2d, 0x2d, 0xd6, 0x70, 0xc0, 0x00, 0x26, 0x87, 0xb3, - 0xb3, 0xfb, 0x9c, 0xd6, 0xa7, 0x15, 0xef, 0xc0, 0xec, 0x75, 0x56, 0x3b, 0x9e, 0xd3, 0xd8, 0x43, - 0x1f, 0xd7, 0x0d, 0x00, 0x93, 0xda, 0xbe, 0xdb, 0x51, 0x84, 0xb5, 0x2b, 0x3c, 0xbf, 0x2d, 0x83, - 0x26, 0xf5, 0x0e, 0xb3, 0x82, 0x53, 0x44, 0x65, 0x2a, 0x1c, 0x48, 0x19, 0xa6, 0xf8, 0x2f, 0xd6, - 0x98, 0x6e, 0x28, 0x88, 0x3f, 0xcb, 0x3d, 0x4e, 0x91, 0x65, 0x17, 0x4b, 0xf4, 0x68, 0x14, 0x0a, - 0x85, 0xf1, 0x7f, 0x8d, 0x42, 0x41, 0x1d, 0x52, 0x62, 0xf0, 0x64, 0x8d, 0xae, 0xa7, 0xc6, 0xef, - 0xb3, 0x11, 0x62, 0x8a, 0x92, 0x28, 0xf8, 0x65, 0xf6, 0xd8, 0xe0, 0x97, 
0x0f, 0x60, 0xaa, 0xe6, - 0xb9, 0x18, 0xfe, 0x1f, 0x5f, 0x9b, 0xc5, 0xfe, 0x3d, 0xa7, 0x68, 0x0d, 0xd8, 0xec, 0xc3, 0xf7, - 0x6c, 0xbc, 0x97, 0x75, 0x05, 0xb6, 0xc5, 0x44, 0x5f, 0x4d, 0x67, 0xa6, 0xf1, 0xe1, 0xa6, 0x32, - 0xac, 0x25, 0x6a, 0x52, 0x1b, 0xde, 0x68, 0xdd, 0x54, 0x86, 0x41, 0xd4, 0x0d, 0x62, 0xf4, 0x49, - 0x6d, 0x10, 0xe4, 0xe7, 0x32, 0x30, 0x59, 0xee, 0x74, 0x44, 0x50, 0xcd, 0x63, 0x42, 0x80, 0x7d, - 0x45, 0x58, 0xcb, 0xbc, 0xf5, 0xb1, 0xac, 0x65, 0x50, 0xd8, 0xf2, 0x51, 0x52, 0x8f, 0x2a, 0xd4, - 0x02, 0xe3, 0x44, 0x60, 0xf2, 0x16, 0x14, 0xc3, 0x95, 0x59, 0xed, 0x34, 0xe9, 0x63, 0xca, 0x73, - 0xf5, 0x4f, 0x89, 0x9c, 0x42, 0xaa, 0x64, 0x1e, 0x47, 0x24, 0x9b, 0x00, 0x76, 0xb8, 0x24, 0xc4, - 0x23, 0xde, 0xd9, 0xe8, 0xc1, 0x25, 0xb6, 0x66, 0xc4, 0xed, 0x01, 0x7f, 0xe3, 0x83, 0xa4, 0x7a, - 0x7b, 0x88, 0xf8, 0x90, 0x36, 0xcc, 0x94, 0x7d, 0xbf, 0xd7, 0xa6, 0xf5, 0xc0, 0xf6, 0x02, 0x4c, - 0x1c, 0x08, 0xc3, 0x9b, 0x81, 0xda, 0x48, 0xca, 0x66, 0x84, 0x17, 0x58, 0x29, 0x59, 0x04, 0xe3, - 0xbc, 0x79, 0x06, 0x27, 0xf3, 0x4c, 0xf2, 0x7b, 0xf9, 0x4a, 0xfd, 0xa9, 0x0c, 0x9c, 0x56, 0x27, - 0x7d, 0xbd, 0xb7, 0x2d, 0xd2, 0x26, 0x90, 0x2b, 0x30, 0x21, 0xe6, 0x64, 0x78, 0x89, 0x4c, 0xe6, - 0x3f, 0x8c, 0x50, 0xc8, 0x2a, 0x9b, 0x86, 0x8c, 0x87, 0xb8, 0x75, 0xcc, 0xc5, 0x36, 0x57, 0x56, - 0xb4, 0xbc, 0x10, 0x25, 0x70, 0x64, 0xbf, 0xf5, 0xf9, 0xc9, 0x20, 0xc6, 0xbb, 0x30, 0xab, 0x8f, - 0x44, 0x9d, 0x06, 0xe4, 0x12, 0x8c, 0xcb, 0xe1, 0xcb, 0xa4, 0x0f, 0x9f, 0x2c, 0x37, 0x1e, 0x00, - 0x49, 0xd0, 0xfb, 0x68, 0xd6, 0xc6, 0xee, 0xe7, 0xdc, 0xec, 0x52, 0x3e, 0x2a, 0x27, 0x10, 0x97, - 0xe7, 0xc4, 0xf7, 0x4d, 0x6a, 0x8e, 0x8d, 0x98, 0x42, 0xe2, 0xb7, 0x8a, 0x30, 0x97, 0x72, 0x50, - 0x1c, 0x23, 0xc8, 0x95, 0xf4, 0x0d, 0x62, 0x22, 0x8c, 0x20, 0x28, 0xb7, 0x85, 0x77, 0x61, 0xf4, - 0xd8, 0xed, 0x80, 0xbb, 0xb5, 0xc6, 0x76, 0x01, 0x4e, 0xf6, 0xa9, 0x08, 0x73, 0x6a, 0xc4, 0xd0, - 0xd1, 0x27, 0x16, 0x31, 0x14, 0x43, 0x06, 0x29, 0x9b, 0xb8, 0x1e, 0xc6, 0x88, 0xe7, 0xf3, 0x4c, - 0x6c, 0x5b, 
0x3a, 0x09, 0xe7, 0xe1, 0xbb, 0xad, 0x87, 0x54, 0xf0, 0x18, 0x57, 0x79, 0x60, 0x41, - 0x2a, 0x0f, 0x85, 0x84, 0xfc, 0x83, 0x0c, 0x10, 0x01, 0x51, 0xf7, 0xac, 0xfc, 0xa0, 0x3d, 0xab, - 0xf9, 0x64, 0xf6, 0xac, 0x73, 0xf2, 0x1b, 0xd3, 0xf7, 0xae, 0x94, 0xcf, 0x22, 0x7f, 0x2f, 0x03, - 0xb3, 0x3c, 0xd2, 0xa4, 0xfa, 0xb1, 0x03, 0xa3, 0x07, 0x36, 0x9e, 0xcc, 0xc7, 0x3e, 0x27, 0x32, - 0xb9, 0xa6, 0x7f, 0x6b, 0xf2, 0xa3, 0xc8, 0xf7, 0x01, 0x84, 0x2b, 0x8a, 0xa7, 0xd1, 0x98, 0x5c, - 0x7a, 0x2e, 0x65, 0x17, 0x08, 0x91, 0xa2, 0xf4, 0x85, 0x41, 0x48, 0xa7, 0x6e, 0x9b, 0x11, 0x37, - 0xf2, 0x17, 0x79, 0x0c, 0xfe, 0x10, 0x22, 0x82, 0xec, 0x2e, 0x4c, 0x62, 0x2d, 0x9f, 0xed, 0x2f, - 0xc8, 0x5d, 0x49, 0x23, 0xe3, 0x49, 0x57, 0x42, 0x23, 0x6b, 0x2f, 0x68, 0xc7, 0xe3, 0xf0, 0xc7, - 0x29, 0x30, 0x76, 0x35, 0x7e, 0x3d, 0x4f, 0x31, 0xd8, 0x67, 0x7f, 0x3b, 0x2b, 0xd7, 0x02, 0xdf, - 0xdf, 0x62, 0xce, 0x48, 0x08, 0x22, 0xef, 0x03, 0x09, 0x43, 0x34, 0x72, 0x18, 0x95, 0xe9, 0x07, - 0xf9, 0x63, 0x41, 0x14, 0xea, 0xd1, 0x93, 0xc5, 0xea, 0x24, 0x49, 0x12, 0x13, 0x0a, 0xf3, 0xa2, - 0xd1, 0x0c, 0xca, 0x1d, 0x88, 0xab, 0x15, 0x7f, 0x61, 0x5a, 0x0b, 0x61, 0x1c, 0x95, 0x2c, 0x3f, - 0x2f, 0xb3, 0xf5, 0x86, 0x9e, 0xc8, 0xba, 0x4b, 0x6f, 0x2a, 0x3b, 0x72, 0x03, 0x26, 0x30, 0xd4, - 0xc8, 0x9a, 0x34, 0xd6, 0x13, 0x86, 0x43, 0x18, 0x94, 0xc4, 0xda, 0xd3, 0x4d, 0xee, 0x22, 0x54, - 0x76, 0x87, 0xe1, 0x12, 0x20, 0xaa, 0xf4, 0x85, 0x92, 0xa6, 0xe9, 0x1d, 0x58, 0x5e, 0x4f, 0x0f, - 0x63, 0x83, 0x48, 0xe4, 0x6b, 0x30, 0x79, 0xd7, 0x7e, 0x1c, 0x66, 0x05, 0x9e, 0x1d, 0x3e, 0xf7, - 0x70, 0xdb, 0x7e, 0x1c, 0xa6, 0x04, 0x8e, 0x3b, 0x30, 0x29, 0x2c, 0xc9, 0x87, 0x00, 0xca, 0x3b, - 0x03, 0x39, 0xb6, 0x82, 0x17, 0x64, 0x60, 0xee, 0xd4, 0xf7, 0x07, 0xe4, 0xaf, 0x30, 0x8c, 0x49, - 0x0e, 0xf3, 0x9f, 0x9e, 0xe4, 0x70, 0xea, 0xd3, 0x93, 0x1c, 0xf8, 0x33, 0x17, 0x1f, 0x7b, 0xdc, - 0xc1, 0x0f, 0x84, 0x96, 0x7f, 0x50, 0x6d, 0xcf, 0x49, 0x53, 0x50, 0x3c, 0x0a, 0x0e, 0x62, 0x55, - 0xc4, 0xf8, 0x11, 0x0f, 0x8a, 0xf1, 0x8b, 0xc1, 
0xc2, 0x19, 0xcd, 0xb2, 0x70, 0xe0, 0x25, 0x82, - 0xab, 0x5b, 0xc5, 0x34, 0xb2, 0x68, 0x08, 0x57, 0x85, 0xba, 0x38, 0xcd, 0xe2, 0x36, 0x9c, 0xed, - 0xbb, 0x21, 0xa4, 0xe4, 0x99, 0xb9, 0xaa, 0xe7, 0x99, 0x39, 0xdb, 0x4f, 0x70, 0xf0, 0xf5, 0xdc, - 0x99, 0x73, 0xc5, 0xf9, 0xfe, 0x32, 0xd7, 0x77, 0xb2, 0x31, 0x41, 0x42, 0xdc, 0xf1, 0x78, 0xa6, - 0xe9, 0x7e, 0x92, 0x56, 0xb6, 0x5a, 0x61, 0x97, 0x3a, 0x14, 0x35, 0x94, 0x4c, 0x5f, 0x4c, 0xd4, - 0x50, 0x45, 0x15, 0x14, 0x3a, 0x3e, 0xa9, 0x4c, 0xf1, 0x36, 0x4c, 0xd7, 0xa9, 0xed, 0x35, 0xf6, - 0xee, 0xd0, 0x83, 0x47, 0xae, 0xd7, 0xe4, 0x19, 0x69, 0xc5, 0xcd, 0xc2, 0xc7, 0x12, 0x3d, 0x64, - 0x83, 0x8a, 0x4b, 0x2a, 0x32, 0x26, 0xc7, 0x28, 0xd6, 0x7e, 0x36, 0x75, 0x6f, 0x66, 0x08, 0x83, - 0xc2, 0x75, 0x90, 0x37, 0x42, 0xf1, 0x93, 0x7a, 0x6a, 0x06, 0x4d, 0x4f, 0x02, 0x53, 0xa4, 0x50, - 0xea, 0x19, 0xbf, 0x97, 0x03, 0xc2, 0x6b, 0x5a, 0xb1, 0xbb, 0x36, 0x46, 0xac, 0x71, 0x30, 0x2c, - 0x6d, 0x51, 0xe0, 0xd8, 0xdb, 0x2d, 0xaa, 0xc6, 0x74, 0x16, 0x26, 0xdf, 0x61, 0x99, 0x15, 0xbf, - 0xbe, 0x25, 0x08, 0xfb, 0x6c, 0xe0, 0xd9, 0x4f, 0xb2, 0x81, 0x7f, 0x0d, 0x9e, 0x2d, 0x77, 0xbb, - 0x2d, 0xa7, 0x11, 0xd6, 0x72, 0xd3, 0xf5, 0xe4, 0x72, 0xd1, 0x62, 0x21, 0xd8, 0x21, 0x5a, 0xe2, - 0x4b, 0x07, 0xb1, 0x50, 0xa4, 0x2f, 0x7e, 0xe1, 0x55, 0x63, 0x6b, 0x49, 0xe9, 0x2b, 0xed, 0x8a, - 0xac, 0x90, 0x48, 0x1e, 0x8e, 0x27, 0xa5, 0xaf, 0xd1, 0x28, 0x53, 0x8c, 0x7c, 0xe1, 0x4e, 0x97, - 0xe0, 0x42, 0x12, 0xf2, 0x36, 0x4c, 0x96, 0x7b, 0x81, 0x2b, 0x18, 0x0b, 0x5f, 0x85, 0xc8, 0xab, - 0x40, 0x7c, 0x8a, 0x76, 0xa1, 0x8b, 0xd0, 0x8d, 0x3f, 0xc8, 0xc1, 0xd9, 0xe4, 0xf0, 0x8a, 0xd2, - 0x70, 0x7d, 0x64, 0x8e, 0x59, 0x1f, 0x69, 0xb3, 0x21, 0x1b, 0xe5, 0x0e, 0x7c, 0x12, 0xb3, 0x21, - 0x87, 0xec, 0x3e, 0xe6, 0x6c, 0xa8, 0xc3, 0xa4, 0x7a, 0x8a, 0x8f, 0x7c, 0xdc, 0x53, 0x5c, 0xe5, - 0x42, 0x2e, 0xc1, 0x28, 0x0f, 0x29, 0x36, 0x1a, 0x3d, 0x08, 0xc6, 0xa3, 0x89, 0x71, 0x0c, 0xf2, - 0xff, 0x83, 0xf3, 0x7c, 0x4f, 0x8a, 0x37, 0x76, 0xf9, 0x40, 0x72, 0x14, 0x03, 0xb7, 
0x74, 0x74, - 0x58, 0xba, 0xc2, 0xb5, 0x56, 0x56, 0xa2, 0xdb, 0xac, 0xed, 0x03, 0x4b, 0x7e, 0x99, 0x52, 0xc9, - 0xb1, 0xbc, 0x8d, 0xc7, 0x70, 0x56, 0x94, 0x46, 0xc1, 0x6c, 0x64, 0x21, 0x1b, 0xe4, 0xfd, 0x48, - 0xf1, 0x88, 0x83, 0x1c, 0xd3, 0x29, 0x62, 0x39, 0xb9, 0x0e, 0xf9, 0x72, 0xad, 0xca, 0xb3, 0x99, - 0x2b, 0xa1, 0x88, 0xec, 0xae, 0xc3, 0xa3, 0xae, 0x68, 0xd1, 0x0d, 0x04, 0xa2, 0xf1, 0xeb, 0x19, - 0x80, 0xa8, 0xd3, 0xc8, 0xe5, 0x34, 0xf7, 0x31, 0x9e, 0x8e, 0x8a, 0x83, 0x75, 0xcf, 0x31, 0xa9, - 0x13, 0xcd, 0xa6, 0xea, 0x44, 0xa5, 0x52, 0x2d, 0x97, 0xaa, 0x54, 0xab, 0xc0, 0x4c, 0xbd, 0xb7, - 0x2d, 0xeb, 0x8e, 0x07, 0xbf, 0xf0, 0x7b, 0xdb, 0x69, 0x5d, 0x19, 0x27, 0x31, 0x7e, 0x2c, 0x0b, - 0x85, 0x5a, 0xab, 0xb7, 0xeb, 0x74, 0x2a, 0x76, 0x60, 0x3f, 0xb5, 0x6a, 0xda, 0x37, 0x35, 0x35, - 0x6d, 0xe8, 0x25, 0x19, 0x36, 0x6c, 0x28, 0x1d, 0xed, 0xcf, 0x66, 0x60, 0x26, 0x22, 0xe1, 0x27, - 0xfc, 0x1a, 0x8c, 0xb0, 0x1f, 0x42, 0x0f, 0x70, 0x3e, 0xc1, 0x18, 0xb1, 0xae, 0x84, 0x7f, 0x09, - 0xc5, 0xa9, 0x9e, 0xca, 0x1b, 0x39, 0x2c, 0x7e, 0x0e, 0x26, 0x22, 0xb6, 0x49, 0xc1, 0x61, 0x5e, - 0x15, 0x1c, 0x26, 0xd4, 0x4c, 0x74, 0xbf, 0x9a, 0x81, 0x62, 0xbc, 0x25, 0xe4, 0x0e, 0x8c, 0x33, - 0x4e, 0x0e, 0x95, 0x2a, 0x8a, 0x97, 0xfa, 0xb4, 0xf9, 0x8a, 0x40, 0xe3, 0x9f, 0x87, 0x9d, 0x4f, - 0x39, 0xc4, 0x94, 0x1c, 0x16, 0x4d, 0x28, 0xa8, 0x58, 0x29, 0x5f, 0xf7, 0xba, 0x2e, 0xd6, 0x9c, - 0x4e, 0xef, 0x07, 0xf5, 0xab, 0xff, 0x86, 0xf6, 0xd5, 0x42, 0x62, 0xb9, 0xa0, 0x4d, 0xae, 0xd4, - 0xa5, 0x88, 0x93, 0x06, 0x13, 0xee, 0x89, 0x7d, 0x23, 0xab, 0x86, 0x82, 0x4c, 0x4c, 0xe8, 0x10, - 0x8f, 0xbc, 0x0e, 0x63, 0xbc, 0x3e, 0x35, 0xaf, 0x77, 0x17, 0x21, 0xea, 0x95, 0x81, 0xe3, 0x18, - 0x7f, 0x33, 0x07, 0xa7, 0xa3, 0xcf, 0xdb, 0xea, 0x36, 0xed, 0x80, 0xd6, 0x6c, 0xcf, 0x6e, 0xfb, - 0xc7, 0xac, 0x80, 0x8b, 0x89, 0x4f, 0x13, 0x61, 0x15, 0x38, 0x4c, 0xf9, 0x20, 0x23, 0xf6, 0x41, - 0xa8, 0x0e, 0xe6, 0x1f, 0x24, 0x3f, 0x83, 0xdc, 0x81, 0x5c, 0x9d, 0x06, 0x62, 0xc3, 0xbe, 0x90, - 0xe8, 0x55, 0xf5, 0xbb, 
0xae, 0xd4, 0x69, 0xc0, 0x07, 0x91, 0x07, 0xd9, 0xd4, 0x02, 0x18, 0x30, - 0x2e, 0xe4, 0x01, 0x8c, 0xad, 0x3e, 0xee, 0xd2, 0x46, 0x80, 0x99, 0x95, 0x14, 0x4f, 0xfe, 0x74, - 0x7e, 0x1c, 0x97, 0xb3, 0x9c, 0x17, 0x32, 0xb8, 0x9e, 0xcd, 0x50, 0xb0, 0x5b, 0xbc, 0x01, 0x79, - 0x59, 0xf9, 0x49, 0x66, 0xee, 0xe2, 0x9b, 0x30, 0xa9, 0x54, 0x72, 0xa2, 0x49, 0xff, 0x0b, 0x6c, - 0x5f, 0x75, 0x5b, 0x54, 0x4c, 0x9c, 0xd5, 0x84, 0x80, 0xa9, 0x24, 0x25, 0xe6, 0x02, 0xa6, 0xb5, - 0x2f, 0x8a, 0x06, 0x48, 0x9a, 0x55, 0x98, 0xa9, 0xef, 0x3b, 0xdd, 0x28, 0x11, 0x87, 0x76, 0x8c, - 0x63, 0xe2, 0x53, 0xa1, 0xc3, 0x88, 0x1f, 0xe3, 0x71, 0x3a, 0xe3, 0x4f, 0x32, 0x30, 0xc6, 0xfe, - 0xba, 0x7f, 0xe3, 0x29, 0xdd, 0x32, 0xaf, 0x6b, 0x5b, 0xe6, 0xac, 0x92, 0x58, 0x0b, 0x37, 0x8e, - 0x1b, 0xc7, 0x6c, 0x96, 0x87, 0x62, 0x80, 0x38, 0x32, 0xb9, 0x05, 0xe3, 0xc2, 0x36, 0x4e, 0xb8, - 0x31, 0xa8, 0x99, 0xba, 0xa4, 0xd5, 0x5c, 0xa8, 0xec, 0x70, 0xbb, 0x71, 0xed, 0x90, 0xa4, 0x66, - 0x97, 0x01, 0x99, 0x12, 0x45, 0x4d, 0xd4, 0xc9, 0xd8, 0xac, 0xb8, 0x1d, 0x9e, 0x67, 0xca, 0x5f, - 0x3e, 0x23, 0x38, 0xf5, 0x0b, 0x54, 0x54, 0x16, 0xaf, 0x59, 0xb9, 0x41, 0x4c, 0x4e, 0x0b, 0x26, - 0xe9, 0x0f, 0x5d, 0x6d, 0x38, 0x5d, 0xaf, 0xaf, 0xa1, 0x1d, 0x6d, 0xcd, 0xf5, 0x82, 0x9b, 0xae, - 0xf7, 0x48, 0xc4, 0x63, 0xa9, 0xeb, 0x36, 0x24, 0x69, 0xd6, 0x8d, 0xaf, 0xa4, 0x5a, 0x37, 0x0e, - 0xb0, 0x33, 0x31, 0x3a, 0x70, 0xa6, 0x5e, 0x5f, 0xe3, 0x59, 0x9e, 0xfe, 0x34, 0xea, 0xfb, 0xd5, - 0x0c, 0xcc, 0xd6, 0xeb, 0x6b, 0xb1, 0xaa, 0xd6, 0x65, 0x7a, 0xa9, 0x8c, 0xf6, 0x90, 0x9d, 0xde, - 0x11, 0x38, 0x0a, 0x19, 0x2e, 0x16, 0x36, 0xb4, 0x20, 0xe1, 0x9c, 0x09, 0xa9, 0x85, 0x09, 0xad, - 0xb2, 0x9a, 0x6b, 0x4b, 0x9f, 0x86, 0xa2, 0xb2, 0x5f, 0x38, 0x86, 0xb2, 0x52, 0x5d, 0xd9, 0xcf, - 0x20, 0xc6, 0xbf, 0x38, 0xcd, 0x53, 0x66, 0xc9, 0xd9, 0xf2, 0x0e, 0x14, 0x04, 0x3d, 0xfa, 0x7f, - 0x08, 0x9b, 0x9e, 0xb3, 0x6c, 0x83, 0xdc, 0xe1, 0x70, 0x9e, 0xfd, 0xe4, 0xbb, 0x87, 0xa5, 0x11, - 0xd6, 0x35, 0xa6, 0x86, 0x4e, 0xee, 0xc1, 0xd4, 0x5d, 0xfb, 
0xb1, 0xa2, 0xd9, 0xe1, 0xde, 0x7d, - 0x97, 0xd8, 0xae, 0xd2, 0xb6, 0x1f, 0x0f, 0x61, 0x3d, 0xaa, 0xd3, 0x93, 0x7d, 0x98, 0xd6, 0xdb, - 0x24, 0x66, 0x60, 0x72, 0xc4, 0xae, 0xa5, 0x8e, 0xd8, 0xd9, 0xae, 0xeb, 0x05, 0xd6, 0x4e, 0x48, - 0xae, 0xa5, 0x87, 0x8b, 0xb1, 0x26, 0xef, 0xc0, 0xac, 0x12, 0x8a, 0xfd, 0xa6, 0xeb, 0xb5, 0x6d, - 0x79, 0x4b, 0xc3, 0xe7, 0x0e, 0x34, 0x2b, 0xdb, 0x41, 0xb0, 0x99, 0xc4, 0x24, 0x5f, 0x4e, 0xf3, - 0x98, 0x1c, 0x8d, 0x4c, 0x68, 0x53, 0x3c, 0x26, 0xfb, 0x99, 0xd0, 0x26, 0x7d, 0x27, 0x77, 0x07, - 0x99, 0xd8, 0xe7, 0x79, 0xeb, 0x87, 0x32, 0xa1, 0x0f, 0x47, 0xae, 0x8f, 0x29, 0xfd, 0x12, 0xe4, - 0x96, 0x6b, 0x37, 0xf1, 0x91, 0x4e, 0xda, 0xd3, 0x75, 0xf6, 0xec, 0x4e, 0x03, 0x6f, 0x4f, 0xc2, - 0xb1, 0x45, 0x3d, 0x28, 0x97, 0x6b, 0x37, 0x89, 0x0d, 0x73, 0x98, 0xee, 0x3b, 0xf8, 0xd2, 0xb5, - 0x6b, 0xca, 0x50, 0xe5, 0xf1, 0xd3, 0xae, 0x8a, 0x4f, 0x2b, 0x61, 0xb2, 0xf0, 0xc0, 0x7a, 0x7c, - 0xed, 0x5a, 0xea, 0x80, 0x84, 0x1f, 0x96, 0xc6, 0x8b, 0x1d, 0x58, 0x77, 0xed, 0xc7, 0x91, 0x3f, - 0x92, 0x2f, 0x7c, 0xcf, 0xcf, 0xc9, 0xa9, 0x15, 0xf9, 0x32, 0x69, 0x07, 0x96, 0x4e, 0xc4, 0x2e, - 0xbf, 0xd1, 0x04, 0xf3, 0x85, 0xd7, 0xde, 0xa2, 0xd4, 0x5c, 0xca, 0x00, 0x05, 0xea, 0x0d, 0x4e, - 0x41, 0x27, 0x5b, 0xe1, 0x15, 0x9e, 0x5f, 0x81, 0xd1, 0xd0, 0x7d, 0x62, 0xf9, 0xaa, 0x7a, 0x85, - 0xe7, 0xfa, 0x42, 0xad, 0x59, 0x33, 0xa1, 0xde, 0x87, 0x3b, 0x68, 0x99, 0x3a, 0x97, 0xa4, 0x66, - 0xa0, 0x70, 0x72, 0xcd, 0x00, 0x85, 0x91, 0x75, 0xb7, 0xb1, 0x2f, 0x22, 0x1d, 0xbf, 0xcf, 0x76, - 0xe1, 0x96, 0xdb, 0xd8, 0x7f, 0x72, 0xae, 0x03, 0xc8, 0x9e, 0x6c, 0xf0, 0xe8, 0x3b, 0x5e, 0x53, - 0xf4, 0x89, 0x30, 0x47, 0x9f, 0x0f, 0xaf, 0xc6, 0x4a, 0x59, 0x14, 0x93, 0xc7, 0x6b, 0xca, 0xae, - 0x35, 0x75, 0x72, 0x42, 0xa1, 0x58, 0xa1, 0xfe, 0x7e, 0xe0, 0x76, 0x57, 0x5a, 0x4e, 0x17, 0x03, - 0x5a, 0x89, 0x54, 0x39, 0x43, 0xef, 0xc9, 0x4d, 0x4e, 0x6f, 0x35, 0x24, 0x03, 0x33, 0xc1, 0x92, - 0x7c, 0x19, 0xa6, 0xd9, 0xe4, 0x5e, 0x7d, 0x1c, 0xd0, 0x0e, 0x1f, 0xf9, 0x59, 0x94, 0xe8, 0xe6, - 
0x95, 0x44, 0x93, 0x61, 0x21, 0x9f, 0x53, 0xb8, 0xd8, 0x69, 0x48, 0xa0, 0x45, 0x89, 0xd6, 0x58, - 0x91, 0x26, 0x2c, 0xdc, 0xb5, 0x1f, 0x47, 0x17, 0x65, 0x75, 0x92, 0x12, 0x9c, 0x60, 0x17, 0x8f, - 0x0e, 0x4b, 0x2f, 0xb1, 0x09, 0x16, 0x65, 0x6f, 0xea, 0x33, 0x5f, 0xfb, 0x72, 0x22, 0xdf, 0x80, - 0x33, 0xa2, 0x59, 0x15, 0x4c, 0x88, 0xed, 0x7a, 0x07, 0xf5, 0x3d, 0x1b, 0x5d, 0x11, 0xe7, 0xfa, - 0x74, 0xd8, 0xd5, 0xf4, 0x2d, 0x51, 0x76, 0x58, 0x53, 0xf2, 0xb1, 0x7c, 0xce, 0xc8, 0xec, 0x57, - 0x03, 0xf9, 0x08, 0xa6, 0xf9, 0xcb, 0xe4, 0x9a, 0xeb, 0x07, 0xa8, 0xe1, 0x99, 0x3f, 0x99, 0x7f, - 0x0d, 0x7f, 0xee, 0xe4, 0x3e, 0x69, 0x31, 0x8d, 0x50, 0x8c, 0x33, 0x79, 0x0b, 0x4d, 0x58, 0x79, - 0x1c, 0xf7, 0x6a, 0x0d, 0x35, 0xec, 0xe2, 0x04, 0xea, 0x3a, 0x1d, 0x4b, 0xaa, 0x59, 0xba, 0xe1, - 0x76, 0xa1, 0x62, 0x93, 0x07, 0x30, 0x59, 0xaf, 0xaf, 0xdd, 0x74, 0x98, 0x5c, 0xd2, 0x95, 0x0a, - 0xf3, 0xe4, 0x57, 0xbe, 0x98, 0xfa, 0x95, 0x53, 0xbe, 0xbf, 0x67, 0x61, 0xce, 0xe6, 0x86, 0xdb, - 0x3d, 0x30, 0x55, 0x4e, 0x29, 0x3e, 0x27, 0x67, 0x9e, 0xb0, 0xcf, 0x49, 0x15, 0x66, 0x14, 0x3b, - 0x6a, 0x34, 0xcb, 0x59, 0x88, 0x82, 0x7f, 0xaa, 0x3e, 0x26, 0x71, 0x2f, 0xeb, 0x38, 0x9d, 0x74, - 0x36, 0x39, 0x7b, 0x52, 0x67, 0x13, 0x07, 0x66, 0xf9, 0x60, 0x88, 0x79, 0x80, 0x23, 0xbd, 0xd8, - 0xa7, 0x0f, 0x2f, 0xa5, 0xf6, 0xe1, 0x9c, 0x18, 0x69, 0x39, 0xc9, 0xf0, 0x25, 0x3e, 0xc9, 0x95, - 0xec, 0x00, 0x11, 0x40, 0x3b, 0xb0, 0xb7, 0x6d, 0x9f, 0x62, 0x5d, 0xcf, 0xf6, 0xa9, 0xeb, 0xa5, - 0xd4, 0xba, 0xa6, 0x65, 0x5d, 0xdb, 0xbc, 0x9a, 0x14, 0x8e, 0xa4, 0x23, 0xeb, 0x91, 0xf3, 0x0b, - 0x3b, 0xf6, 0x39, 0x4d, 0x31, 0x9e, 0x44, 0xe0, 0x71, 0x34, 0xe3, 0x93, 0x36, 0xde, 0xef, 0x29, - 0x9c, 0xc9, 0x63, 0x38, 0x9d, 0xfc, 0x0a, 0xac, 0xf3, 0x1c, 0xd6, 0x79, 0x4e, 0xab, 0x33, 0x8e, - 0xc4, 0xe7, 0x8d, 0xde, 0xac, 0x78, 0xad, 0x7d, 0xf8, 0x93, 0x1f, 0xc9, 0xc0, 0x99, 0xbb, 0x37, - 0xcb, 0xf7, 0xa9, 0xc7, 0xc5, 0x12, 0xc7, 0xed, 0x84, 0xde, 0xe9, 0xcf, 0x8b, 0xc7, 0x93, 0xf8, - 0xc3, 0x91, 0x94, 0x38, 0x70, 0xab, 
0x60, 0xa2, 0xfb, 0x8b, 0xed, 0x1d, 0xdb, 0x7a, 0xa8, 0xb0, - 0x48, 0x71, 0x61, 0xff, 0xd6, 0xef, 0x97, 0x32, 0x66, 0xbf, 0xaa, 0x48, 0x0b, 0x16, 0xf5, 0x6e, - 0x91, 0xee, 0x40, 0x7b, 0xb4, 0xd5, 0x5a, 0x28, 0xe1, 0x8c, 0x7e, 0xfd, 0xe8, 0xb0, 0x74, 0x31, - 0xd1, 0xbb, 0xa1, 0x8b, 0x11, 0xc3, 0x54, 0x1a, 0x3c, 0x80, 0x1f, 0x69, 0xa7, 0x08, 0xdd, 0x0b, - 0xe7, 0xb5, 0x30, 0x56, 0x89, 0xf2, 0x30, 0xcc, 0xda, 0x39, 0xb6, 0xde, 0xfb, 0x0a, 0x88, 0x66, - 0x92, 0xf3, 0xed, 0x91, 0xfc, 0x54, 0x71, 0x3a, 0xc5, 0x4f, 0xc6, 0xf8, 0x76, 0x36, 0x76, 0x30, - 0x92, 0x2a, 0x8c, 0x8b, 0xf9, 0xde, 0xf7, 0x92, 0x71, 0x2e, 0x75, 0x56, 0x8f, 0x8b, 0xa5, 0x63, - 0x4a, 0x7a, 0xf2, 0x88, 0xb1, 0xc2, 0x46, 0x8b, 0x1b, 0xef, 0x87, 0xfc, 0xdc, 0x43, 0x90, 0x76, - 0xc2, 0x57, 0x4e, 0xee, 0x53, 0xaa, 0xbb, 0x2c, 0xe3, 0x51, 0x2f, 0x6b, 0x23, 0xfb, 0x3c, 0xef, - 0x6f, 0x2e, 0x74, 0x4b, 0xd4, 0x93, 0xfc, 0x3e, 0xb1, 0x0a, 0x59, 0x2d, 0xc6, 0x6f, 0x64, 0x60, - 0x4a, 0x3b, 0x59, 0xc9, 0x0d, 0xc5, 0xeb, 0x36, 0x0a, 0x44, 0xa1, 0xe1, 0xe0, 0x66, 0x1b, 0xf7, - 0xc7, 0xbd, 0xa1, 0x04, 0x70, 0xec, 0x43, 0x87, 0x8b, 0x2d, 0xee, 0x84, 0x3d, 0x58, 0x3f, 0x5c, - 0x82, 0x51, 0x1e, 0xc1, 0x67, 0x24, 0x32, 0xba, 0x44, 0xfd, 0x8a, 0xc9, 0xe1, 0xc6, 0x7f, 0x5a, - 0x82, 0x69, 0xfd, 0x46, 0x4c, 0x5e, 0x87, 0x31, 0x54, 0xe8, 0x4b, 0xf5, 0x0a, 0xaa, 0x85, 0x50, - 0xe7, 0xaf, 0xf9, 0x25, 0x71, 0x1c, 0xf2, 0x32, 0x40, 0x68, 0xc0, 0x2f, 0x9f, 0xb3, 0x46, 0x8f, - 0x0e, 0x4b, 0x99, 0xcb, 0xa6, 0x52, 0x40, 0xbe, 0x0a, 0xb0, 0xe1, 0x36, 0xa9, 0x48, 0x63, 0x99, - 0x1b, 0x64, 0x88, 0xf2, 0x4a, 0x22, 0x8d, 0xe5, 0xa9, 0x8e, 0xdb, 0xa4, 0xc9, 0x9c, 0x95, 0x0a, - 0x47, 0xf2, 0x05, 0x18, 0x35, 0x7b, 0x2d, 0x2a, 0x9f, 0x3d, 0x26, 0xe5, 0x09, 0xd7, 0x6b, 0xd1, - 0x48, 0x4f, 0xe0, 0xf5, 0xe2, 0x36, 0x96, 0x0c, 0x40, 0xde, 0xe3, 0xe9, 0x2d, 0x45, 0xc0, 0xf5, - 0xd1, 0xe8, 0x81, 0x4f, 0x91, 0x7c, 0x12, 0x21, 0xd7, 0x15, 0x12, 0x72, 0x0f, 0xc6, 0xd5, 0x97, - 0x29, 0x25, 0x7c, 0x83, 0xfa, 0x7a, 0xa9, 0x28, 0x1d, 0x44, 0xe4, 0xd9, 
0xf8, 0xa3, 0x95, 0xe4, - 0x42, 0xde, 0x86, 0x09, 0xc6, 0x9e, 0xed, 0x1c, 0xbe, 0xb8, 0xd5, 0xe0, 0x33, 0x9e, 0xf2, 0x41, - 0x6c, 0xf7, 0xd1, 0xc2, 0xa2, 0x87, 0x04, 0xe4, 0xcb, 0x30, 0x51, 0xee, 0x76, 0x45, 0x57, 0x0f, - 0x34, 0x50, 0xba, 0x90, 0xe8, 0xea, 0x79, 0xbb, 0xdb, 0x4d, 0xf6, 0x74, 0xc4, 0x8f, 0xec, 0x86, - 0xd1, 0x03, 0x87, 0x49, 0x49, 0xfa, 0x6a, 0xa2, 0x82, 0x05, 0x19, 0x10, 0x2f, 0x51, 0x89, 0xce, - 0x97, 0x74, 0xa1, 0x18, 0x09, 0x95, 0xa2, 0x2e, 0x18, 0x54, 0xd7, 0xe5, 0x44, 0x5d, 0xea, 0x00, - 0x26, 0xaa, 0x4b, 0x70, 0x27, 0x4d, 0x98, 0x96, 0x07, 0x94, 0xa8, 0x6f, 0x72, 0x50, 0x7d, 0x2f, - 0x27, 0xea, 0x9b, 0x6b, 0x6e, 0x27, 0xeb, 0x89, 0xf1, 0x24, 0x6f, 0xc3, 0x94, 0x84, 0xe0, 0xfa, - 0x40, 0xc3, 0x20, 0xa1, 0x10, 0x6c, 0x6e, 0xa3, 0xcb, 0x90, 0xd6, 0x2b, 0x1a, 0xb2, 0x4a, 0xcd, - 0x67, 0xc7, 0x94, 0x46, 0x1d, 0x9f, 0x15, 0x3a, 0x32, 0xf9, 0x00, 0x26, 0xab, 0x6d, 0xd6, 0x10, - 0xb7, 0x63, 0x07, 0x54, 0x38, 0xf6, 0x4a, 0x63, 0x2b, 0xa5, 0x44, 0x99, 0xaa, 0x68, 0x66, 0xe2, - 0x44, 0x45, 0xea, 0x35, 0x53, 0xa1, 0x60, 0x9d, 0xc7, 0x9f, 0x22, 0xc5, 0x1c, 0x96, 0x4e, 0xbf, - 0xe7, 0x52, 0x0c, 0x9e, 0x14, 0xf6, 0x22, 0xb8, 0x36, 0x83, 0xca, 0xa7, 0xc0, 0x58, 0x62, 0x03, - 0x95, 0x27, 0x79, 0x07, 0x26, 0x45, 0xb6, 0xe6, 0xb2, 0xb9, 0xe1, 0x2f, 0x14, 0x23, 0x7b, 0x6d, - 0x99, 0xd8, 0xd9, 0xb2, 0xbd, 0x98, 0x65, 0x6f, 0x84, 0x4f, 0xbe, 0x04, 0xf3, 0x0f, 0x9c, 0x4e, - 0xd3, 0x7d, 0xe4, 0x8b, 0x63, 0x4a, 0x6c, 0x74, 0xb3, 0x91, 0x5f, 0xe1, 0x23, 0x5e, 0x1e, 0xca, - 0x82, 0x89, 0x8d, 0x2f, 0x95, 0x03, 0xf9, 0x81, 0x04, 0x67, 0x3e, 0x83, 0xc8, 0xa0, 0x19, 0xb4, - 0x94, 0x98, 0x41, 0xc9, 0xea, 0xe3, 0xd3, 0x29, 0xb5, 0x1a, 0xe2, 0x02, 0xd1, 0xcf, 0xf7, 0xdb, - 0xae, 0xd3, 0x59, 0x98, 0xc3, 0xbd, 0xf0, 0xd9, 0x78, 0x78, 0x10, 0xc4, 0xab, 0xb9, 0x2d, 0xa7, - 0x71, 0xb0, 0x6c, 0x1c, 0x1d, 0x96, 0x9e, 0x8f, 0xcb, 0xfc, 0x1f, 0xb9, 0xda, 0x73, 0x49, 0x0a, - 0x6b, 0xf2, 0x01, 0x14, 0xd8, 0xff, 0xa1, 0x52, 0x62, 0x5e, 0x33, 0x91, 0x55, 0x30, 0x45, 0x3d, - 0x38, 0x46, 
0x98, 0x4e, 0x3a, 0x45, 0x5f, 0xa1, 0xb1, 0x22, 0x6f, 0x02, 0x30, 0xb1, 0x49, 0x6c, - 0xc7, 0xa7, 0xa2, 0x3c, 0x12, 0x28, 0x75, 0x25, 0x37, 0xe2, 0x08, 0x99, 0xbc, 0x0d, 0x93, 0xec, - 0x57, 0xbd, 0xd7, 0x74, 0xd9, 0xda, 0x38, 0x8d, 0xb4, 0xdc, 0xc7, 0x9a, 0xd1, 0xfa, 0x1c, 0xae, - 0xf9, 0x58, 0x47, 0xe8, 0x64, 0x0d, 0x66, 0x30, 0xdf, 0x87, 0x88, 0x34, 0xef, 0x50, 0x7f, 0xe1, - 0x8c, 0x62, 0x42, 0x81, 0x29, 0x55, 0x9d, 0xb0, 0x4c, 0xbd, 0xcb, 0xc4, 0xc8, 0x88, 0x0f, 0x73, - 0xc9, 0x37, 0x68, 0x7f, 0x61, 0x01, 0x3b, 0x49, 0x4a, 0xf0, 0x49, 0x0c, 0xbe, 0x1f, 0xb3, 0x11, - 0x51, 0x36, 0x2e, 0xf9, 0xa8, 0xa4, 0x56, 0x98, 0xc6, 0x9d, 0x98, 0x40, 0x6e, 0xad, 0xd4, 0xe2, - 0x09, 0x31, 0xce, 0x62, 0x0b, 0x70, 0x98, 0x77, 0x1b, 0x5d, 0x6b, 0x40, 0x52, 0x8c, 0x14, 0x6a, - 0xf2, 0xfd, 0x70, 0x4a, 0xee, 0x20, 0xa2, 0x48, 0xcc, 0xeb, 0xc5, 0x13, 0xee, 0xc4, 0xcd, 0xed, - 0xb0, 0xea, 0xc4, 0x94, 0x4e, 0xaf, 0x82, 0xd8, 0x30, 0x89, 0xc3, 0x2a, 0x6a, 0x7c, 0x76, 0x50, - 0x8d, 0x17, 0x13, 0x35, 0x9e, 0xc6, 0x89, 0x92, 0xac, 0x4c, 0xe5, 0x49, 0x96, 0x61, 0x4a, 0xac, - 0x23, 0x31, 0xdb, 0x9e, 0xc3, 0xde, 0x42, 0x25, 0x96, 0x5c, 0x81, 0x89, 0x09, 0xa7, 0x93, 0xa8, - 0x3b, 0x32, 0x7f, 0x4c, 0x3a, 0xa7, 0xed, 0xc8, 0xf1, 0x37, 0x24, 0x1d, 0x99, 0xed, 0x48, 0x91, - 0x14, 0xb3, 0xfa, 0xb8, 0xeb, 0x09, 0x15, 0xd5, 0xf3, 0x51, 0x76, 0x4a, 0x45, 0xf8, 0xb1, 0x68, - 0x88, 0xa1, 0x6e, 0x09, 0x69, 0x1c, 0xc8, 0x16, 0xcc, 0x85, 0xa7, 0xb6, 0xc2, 0xb8, 0x14, 0xa5, - 0x5c, 0x88, 0x8e, 0xfa, 0x74, 0xbe, 0x69, 0xf4, 0xc4, 0x86, 0x33, 0xda, 0x39, 0xad, 0xb0, 0x3e, - 0x8f, 0xac, 0x5f, 0x61, 0x37, 0x32, 0xfd, 0x90, 0x4f, 0x67, 0xdf, 0x8f, 0x0f, 0xf9, 0x08, 0x16, - 0xe3, 0x67, 0xb3, 0x52, 0xcb, 0x0b, 0x58, 0xcb, 0xab, 0x47, 0x87, 0xa5, 0x0b, 0x89, 0xe3, 0x3d, - 0xbd, 0xa2, 0x01, 0xdc, 0xc8, 0x57, 0x61, 0x41, 0x3f, 0x9f, 0x95, 0x9a, 0x0c, 0xac, 0x09, 0x97, - 0x4e, 0x78, 0xb0, 0xa7, 0xd7, 0xd0, 0x97, 0x07, 0x09, 0xa0, 0x94, 0x3a, 0xbb, 0x95, 0x6a, 0x5e, - 0x8c, 0x1a, 0x94, 0x58, 0x25, 0xe9, 0xd5, 0x1d, 
0xc7, 0x92, 0x3c, 0x82, 0xe7, 0xd3, 0x8e, 0x09, - 0xa5, 0xd2, 0x97, 0x42, 0x25, 0xf0, 0x6b, 0xe9, 0x47, 0x4e, 0x7a, 0xcd, 0xc7, 0xb0, 0x25, 0x5f, - 0x86, 0x53, 0xca, 0xfa, 0x52, 0xea, 0x7b, 0x19, 0xeb, 0xc3, 0xa8, 0x00, 0xea, 0xc2, 0x4c, 0xaf, - 0x25, 0x9d, 0x07, 0x69, 0xc3, 0x9c, 0x6c, 0x38, 0x6a, 0xdb, 0xc5, 0xd1, 0x73, 0x41, 0xdb, 0x55, - 0x93, 0x18, 0xcb, 0xe7, 0xc5, 0xae, 0xba, 0xd0, 0xdc, 0xb6, 0xba, 0x11, 0xa1, 0x3a, 0xd3, 0x53, - 0xf8, 0x92, 0x35, 0x18, 0xab, 0xd7, 0xaa, 0x37, 0x6f, 0xae, 0x2e, 0xbc, 0x82, 0x35, 0x48, 0xbf, - 0x3f, 0x0e, 0xd4, 0x2e, 0x4d, 0xc2, 0xc6, 0xb1, 0xeb, 0xec, 0xec, 0x68, 0x0f, 0x56, 0x1c, 0x95, - 0xfc, 0x00, 0x5a, 0x17, 0xb2, 0x1d, 0xb5, 0xec, 0xfb, 0xce, 0x6e, 0x87, 0x27, 0xb3, 0x78, 0x55, - 0x7b, 0xef, 0x97, 0xe9, 0x4d, 0x56, 0x30, 0x71, 0x6c, 0x02, 0x9d, 0x4b, 0x9b, 0xec, 0xfe, 0x2f, - 0x76, 0x6e, 0xcb, 0x8e, 0x58, 0xa9, 0x9b, 0x78, 0xb2, 0x22, 0xd6, 0x6f, 0xbb, 0x4e, 0x60, 0xed, - 0xf5, 0xb4, 0xe6, 0x2f, 0xbc, 0xa6, 0x05, 0x13, 0xe7, 0xe9, 0x74, 0x95, 0x5e, 0x7b, 0x49, 0x54, - 0xf8, 0x1c, 0xbf, 0x2d, 0xf7, 0xe9, 0xb9, 0xd9, 0xdd, 0x18, 0x9d, 0x4f, 0x7e, 0x38, 0x03, 0xa7, - 0x1f, 0xb8, 0xde, 0x7e, 0xcb, 0xb5, 0x9b, 0xb2, 0x55, 0x62, 0x0f, 0x7f, 0x7d, 0xd0, 0x1e, 0xfe, - 0xd9, 0xc4, 0x1e, 0x6e, 0x3c, 0x12, 0x6c, 0xac, 0x30, 0x3b, 0x4c, 0x62, 0x3f, 0xef, 0x53, 0x15, - 0xf9, 0x01, 0x38, 0x9f, 0x5e, 0xa2, 0x4c, 0xca, 0xcb, 0x38, 0x29, 0xaf, 0x1d, 0x1d, 0x96, 0x2e, - 0xf7, 0xab, 0x29, 0x7d, 0x82, 0x1e, 0xcb, 0xfa, 0xf6, 0x48, 0xfe, 0x62, 0xf1, 0xd2, 0xed, 0x91, - 0xfc, 0xa5, 0xe2, 0xab, 0xe6, 0x73, 0xf5, 0xf2, 0xdd, 0xf5, 0x6a, 0x53, 0x1e, 0xae, 0x32, 0x81, - 0x0d, 0xa7, 0x31, 0x2f, 0x0c, 0x2a, 0x8d, 0x38, 0x1a, 0x7f, 0x2d, 0x03, 0xa5, 0x63, 0x26, 0x09, - 0x3b, 0xcf, 0xa2, 0x91, 0xa8, 0x87, 0x49, 0x13, 0xb8, 0x63, 0x60, 0x58, 0x60, 0xe9, 0x66, 0x23, - 0x3a, 0x09, 0x3a, 0x8d, 0x8a, 0xdc, 0x6b, 0x8a, 0xef, 0x70, 0x32, 0xe7, 0x9a, 0xc4, 0x32, 0xd6, - 0xa1, 0x18, 0x9f, 0x3c, 0xe4, 0xf3, 0x30, 0xa5, 0x66, 0x7e, 0x92, 0xaa, 0x04, 0x1e, 
0x31, 0xc7, - 0xdb, 0xd5, 0x0e, 0x44, 0x0d, 0xd1, 0xf8, 0x85, 0x0c, 0xcc, 0xa5, 0xac, 0x30, 0x72, 0x01, 0x46, - 0x30, 0x35, 0xab, 0x62, 0x35, 0x14, 0x4b, 0xc9, 0x8a, 0xe5, 0xe4, 0x33, 0x30, 0x5e, 0xd9, 0xa8, - 0xd7, 0xcb, 0x1b, 0x52, 0x19, 0xc1, 0x0f, 0xe2, 0x8e, 0x6f, 0xf9, 0xb6, 0x6e, 0x6c, 0x20, 0xd0, - 0xc8, 0x65, 0x18, 0xab, 0xd6, 0x90, 0x40, 0x49, 0x0b, 0xe3, 0x74, 0xe3, 0xf8, 0x02, 0xc9, 0xf8, - 0x89, 0x0c, 0x90, 0xe4, 0x76, 0x41, 0xae, 0xc1, 0xa4, 0xba, 0x29, 0xf1, 0xf6, 0xe2, 0x0b, 0xac, - 0xb2, 0x70, 0x4c, 0x15, 0x87, 0x54, 0x60, 0xf4, 0xae, 0x1d, 0x34, 0xf6, 0x42, 0x2b, 0x87, 0xd4, - 0x65, 0x71, 0x26, 0xb1, 0x2c, 0x46, 0xdb, 0x8c, 0xca, 0xe4, 0xc4, 0xc6, 0x1f, 0x67, 0x80, 0xa4, - 0x1b, 0x3c, 0x0e, 0x65, 0x65, 0xf5, 0x86, 0x12, 0x7b, 0x42, 0xb5, 0x78, 0x0c, 0x33, 0xe7, 0xaa, - 0x6a, 0x80, 0x28, 0x4a, 0xc5, 0x05, 0x4d, 0xed, 0xd4, 0xdf, 0x61, 0xf9, 0x12, 0x8c, 0xde, 0xa7, - 0xde, 0xb6, 0xb4, 0x05, 0x47, 0xfb, 0xd1, 0x87, 0x0c, 0xa0, 0xaa, 0x61, 0x10, 0x43, 0x33, 0xbd, - 0x1c, 0x1d, 0xd6, 0xf4, 0xf2, 0x0f, 0x32, 0x30, 0x9f, 0x76, 0xb1, 0x39, 0xc6, 0x19, 0xd9, 0x88, - 0xf9, 0x51, 0xa3, 0x59, 0x16, 0xb7, 0x48, 0x0d, 0xbd, 0xa7, 0x4b, 0x30, 0xca, 0x7a, 0x48, 0x4e, - 0x0b, 0xd4, 0x9d, 0xb1, 0x2e, 0xf4, 0x4d, 0x0e, 0x67, 0x08, 0x51, 0x36, 0x8f, 0x51, 0x8e, 0xc0, - 0x93, 0x78, 0x70, 0x38, 0x43, 0xb8, 0xeb, 0x36, 0xa9, 0xd4, 0x29, 0x21, 0x42, 0x9b, 0x01, 0x4c, - 0x0e, 0x27, 0x17, 0x60, 0xfc, 0x5e, 0x67, 0x9d, 0xda, 0x0f, 0x65, 0xd6, 0x30, 0x34, 0x23, 0x73, - 0x3b, 0x56, 0x8b, 0xc1, 0x4c, 0x59, 0x68, 0xfc, 0x6c, 0x06, 0x66, 0x13, 0x77, 0xaa, 0xe3, 0xfd, - 0xad, 0x07, 0xfb, 0x10, 0x0e, 0xd3, 0x3e, 0xfe, 0xf9, 0x23, 0xe9, 0x9f, 0x6f, 0xfc, 0x37, 0x63, - 0x70, 0xa6, 0x8f, 0x8a, 0x2b, 0xf2, 0x71, 0xce, 0x1c, 0xeb, 0xe3, 0xfc, 0x15, 0x98, 0x5a, 0x69, - 0xd9, 0x4e, 0xdb, 0xdf, 0x74, 0xa3, 0x2f, 0x8e, 0x5c, 0xa5, 0xb0, 0x4c, 0x78, 0x5c, 0x84, 0x3e, - 0x35, 0x67, 0x1b, 0x48, 0x61, 0x05, 0x6e, 0x52, 0xc2, 0xd6, 0x98, 0x25, 0xbc, 0x8c, 0x73, 0x7f, - 0x46, 0xbc, 0x8c, 0x75, 
0xbf, 0xb7, 0x91, 0x27, 0xea, 0xf7, 0x96, 0x6e, 0x5d, 0x3e, 0xfa, 0x49, - 0x7c, 0x0d, 0x56, 0x60, 0x8a, 0xdb, 0xd1, 0x95, 0x7d, 0x3e, 0x48, 0x63, 0x09, 0xdb, 0x3b, 0xdb, - 0x4f, 0x8e, 0x85, 0x46, 0x43, 0xd6, 0x74, 0x1f, 0xad, 0x71, 0x7c, 0x68, 0xbe, 0xd0, 0xdf, 0x07, - 0x4b, 0x0f, 0xf4, 0xa3, 0xfa, 0x62, 0x7d, 0x03, 0xe6, 0xd3, 0xee, 0xc8, 0x0b, 0x79, 0xcd, 0x44, - 0xb7, 0xaf, 0x3d, 0xf8, 0xf0, 0x37, 0xed, 0xfd, 0xd4, 0x9b, 0xb6, 0xf4, 0x9d, 0x9f, 0xe8, 0xef, - 0x78, 0x14, 0xad, 0x05, 0x8e, 0x3b, 0xd8, 0xc3, 0xde, 0xf8, 0x4a, 0x2c, 0xf8, 0x41, 0x9c, 0x9c, - 0xbc, 0xa5, 0xc5, 0xa8, 0x7a, 0x25, 0x19, 0xa3, 0x2a, 0x3d, 0xde, 0x01, 0x4f, 0x00, 0xf5, 0xb3, - 0x59, 0xdd, 0x63, 0xfb, 0xcf, 0xe2, 0x42, 0xbd, 0x04, 0xa3, 0x0f, 0xf6, 0xa8, 0x27, 0xcf, 0x14, - 0xfc, 0x90, 0x47, 0x0c, 0xa0, 0x7e, 0x08, 0x62, 0x90, 0x9b, 0x30, 0x5d, 0xe3, 0x13, 0x57, 0xce, - 0xc6, 0x91, 0x48, 0x51, 0xd3, 0x15, 0xea, 0xc4, 0x94, 0xe9, 0x18, 0xa3, 0x32, 0x6e, 0xc5, 0x3a, - 0x5d, 0x44, 0xd8, 0xe2, 0x3e, 0x58, 0x5c, 0xea, 0x98, 0x8e, 0x7c, 0xe9, 0xa2, 0xcd, 0xd6, 0x8c, - 0x41, 0x8d, 0x1d, 0x78, 0x7e, 0x20, 0x23, 0x76, 0xd8, 0x43, 0x37, 0xfc, 0x15, 0x33, 0xd7, 0x1e, - 0x48, 0x6a, 0x2a, 0x74, 0xc6, 0x37, 0xa0, 0xa0, 0xf6, 0x32, 0x1e, 0x41, 0xec, 0xb7, 0x98, 0x15, - 0xfc, 0x08, 0x62, 0x00, 0x93, 0xc3, 0xa3, 0x07, 0xa0, 0x6c, 0xfa, 0x03, 0x50, 0x34, 0xfc, 0xb9, - 0xe3, 0x86, 0x9f, 0x55, 0x8e, 0x3b, 0x9c, 0x52, 0x39, 0xfe, 0x56, 0x2b, 0xc7, 0xb8, 0x57, 0x26, - 0x87, 0x3f, 0xd1, 0xca, 0xff, 0xb9, 0x4c, 0x20, 0x88, 0x2e, 0x5e, 0x72, 0xb9, 0x67, 0xa2, 0x5c, - 0xb1, 0x69, 0xab, 0x37, 0xc2, 0x8c, 0x04, 0x91, 0xec, 0xb1, 0x82, 0xc8, 0x09, 0x26, 0x22, 0x0a, - 0xcb, 0x7c, 0x48, 0x47, 0x22, 0xe1, 0xd1, 0x4e, 0x98, 0xc8, 0x48, 0x2c, 0xe3, 0x5b, 0x19, 0x38, - 0x95, 0xaa, 0x68, 0x67, 0xb5, 0x72, 0x8d, 0xbe, 0xb2, 0x0e, 0xe3, 0xea, 0x7c, 0x8e, 0x71, 0x92, - 0xf8, 0x21, 0xc3, 0xb7, 0xc5, 0x78, 0x01, 0x26, 0xc2, 0x67, 0x5e, 0x32, 0x2f, 0x87, 0x8e, 0x07, - 0x48, 0x14, 0xaf, 0x85, 0x75, 0x00, 0xf6, 0x05, 0x4f, 0xd4, 
0x1e, 0xdb, 0xf8, 0xe7, 0x59, 0x18, - 0x63, 0x5c, 0x9f, 0xda, 0x50, 0xce, 0xe9, 0x46, 0xd4, 0xac, 0x49, 0xfd, 0x03, 0x38, 0x93, 0x55, - 0x18, 0xe3, 0xc9, 0xef, 0xc4, 0x93, 0xe1, 0x9c, 0x4a, 0x26, 0xb3, 0xe2, 0x85, 0x81, 0x2f, 0x7c, - 0x84, 0x68, 0xaa, 0x05, 0x84, 0x28, 0xb6, 0xd8, 0xbf, 0x93, 0x81, 0x82, 0x4a, 0x4c, 0x3e, 0x80, - 0x69, 0x19, 0x9e, 0x96, 0x07, 0x83, 0x11, 0x6f, 0xd2, 0xd2, 0x7e, 0x4c, 0x86, 0xa7, 0x55, 0x83, - 0xc7, 0x68, 0xf8, 0xea, 0x56, 0xdd, 0x55, 0x91, 0x49, 0x13, 0x48, 0x7b, 0xc7, 0xb6, 0x1e, 0x51, - 0x7b, 0x9f, 0xfa, 0x81, 0xc5, 0xed, 0x7c, 0xc4, 0xd3, 0xb5, 0x64, 0x7f, 0xf7, 0x66, 0x99, 0x9b, - 0xf8, 0xb0, 0x91, 0x10, 0x71, 0x86, 0x13, 0x34, 0xea, 0x7b, 0x5c, 0x7b, 0xc7, 0x7e, 0xc0, 0x0b, - 0x39, 0x9d, 0xf1, 0x87, 0x63, 0x7c, 0xba, 0x89, 0x78, 0xd6, 0xdb, 0x30, 0x7d, 0xaf, 0x5a, 0x59, - 0x51, 0xb4, 0xf3, 0x7a, 0x3a, 0xb4, 0xd5, 0xc7, 0x01, 0xf5, 0x3a, 0x76, 0x4b, 0x5e, 0x92, 0xa3, - 0x23, 0xc8, 0x75, 0x9a, 0x8d, 0x74, 0xcd, 0x7d, 0x8c, 0x23, 0xab, 0x83, 0x5f, 0xc7, 0xc3, 0x3a, - 0xb2, 0x43, 0xd6, 0xe1, 0xdb, 0xed, 0x56, 0x9f, 0x3a, 0x74, 0x8e, 0x64, 0x0f, 0xef, 0xcb, 0x7b, - 0xbd, 0x6d, 0xa5, 0x96, 0xdc, 0xe0, 0x5a, 0x5e, 0x14, 0xb5, 0x3c, 0x2b, 0x74, 0x31, 0xa9, 0xf5, - 0x24, 0xb8, 0x46, 0xfb, 0xc4, 0xc8, 0xb1, 0xfb, 0xc4, 0xff, 0x3f, 0x03, 0x63, 0x5c, 0x7c, 0x15, - 0xd3, 0xb8, 0x8f, 0x80, 0xfc, 0xe0, 0xc9, 0x08, 0xc8, 0x45, 0x3c, 0x27, 0xb4, 0x09, 0xcd, 0xcb, - 0x48, 0x25, 0xb6, 0x2e, 0xa4, 0x0b, 0x01, 0xbe, 0xb3, 0xf1, 0x92, 0xe3, 0x97, 0x05, 0xa9, 0x46, - 0xa1, 0x48, 0xc6, 0x8f, 0xf5, 0x3f, 0x97, 0xe1, 0x5b, 0xc6, 0x45, 0x28, 0x12, 0x3d, 0x00, 0xc9, - 0x3a, 0x4c, 0x88, 0x00, 0x27, 0xcb, 0x07, 0xe2, 0x35, 0xbd, 0xa8, 0xd9, 0x43, 0x35, 0x97, 0x0f, - 0x22, 0xd1, 0x5c, 0x84, 0x48, 0xb1, 0xb6, 0x55, 0x5f, 0x82, 0x88, 0x01, 0xb9, 0xc7, 0x33, 0x9d, - 0xf3, 0x78, 0xdf, 0x7a, 0x8a, 0x8f, 0x10, 0x2e, 0xe2, 0xbd, 0xc9, 0x28, 0x09, 0x29, 0xe1, 0xbd, - 0x23, 0x1e, 0x64, 0x1d, 0x8a, 0x68, 0x43, 0x47, 0x9b, 0x7c, 0xd5, 0x54, 0x2b, 0x3c, 0x88, 0x86, - 
0xb0, 0x83, 0x0e, 0x78, 0x99, 0x58, 0x6e, 0x31, 0x4f, 0xcf, 0x04, 0xa5, 0xf1, 0x33, 0x59, 0x28, - 0xc6, 0x67, 0x1f, 0x79, 0x1b, 0x26, 0xc3, 0x78, 0xeb, 0xa1, 0xaf, 0x39, 0xbe, 0xaa, 0x45, 0x01, - 0xda, 0xf5, 0xfc, 0xd8, 0x0a, 0x3a, 0x59, 0x82, 0x3c, 0x5b, 0xc4, 0x9d, 0x28, 0x5c, 0x26, 0x6e, - 0xdb, 0x3d, 0x01, 0x53, 0x6f, 0xf5, 0x12, 0x8f, 0xd4, 0x61, 0x8e, 0x2d, 0x9a, 0xba, 0xd3, 0xd9, - 0x6d, 0xd1, 0x75, 0x77, 0xd7, 0xed, 0x05, 0x51, 0xba, 0x68, 0x7e, 0x81, 0xb1, 0xdb, 0x2d, 0xad, - 0x58, 0x4f, 0x16, 0x9d, 0x42, 0x4d, 0x2e, 0xf3, 0x63, 0xa6, 0x5a, 0x11, 0xc6, 0x30, 0x78, 0x54, - 0xa3, 0x11, 0x97, 0xf6, 0xf1, 0x02, 0x49, 0xd9, 0x59, 0x7f, 0x3f, 0x0b, 0x93, 0xca, 0xf4, 0x23, - 0x97, 0x20, 0x5f, 0xf5, 0xd7, 0xdd, 0xc6, 0x7e, 0x18, 0x3f, 0x74, 0xea, 0xe8, 0xb0, 0x34, 0xe1, - 0xf8, 0x56, 0x0b, 0x81, 0x66, 0x58, 0x4c, 0x96, 0x61, 0x8a, 0xff, 0x25, 0x13, 0xe7, 0x64, 0x23, - 0x85, 0x1c, 0x47, 0x96, 0x29, 0x73, 0xd4, 0xcd, 0x56, 0x23, 0x21, 0x1f, 0x02, 0x70, 0x00, 0x06, - 0x6f, 0xc8, 0x0d, 0x1f, 0x76, 0x42, 0x54, 0x90, 0x12, 0xb6, 0x41, 0x61, 0x48, 0xbe, 0xc6, 0xc3, - 0xb9, 0xcb, 0xe5, 0x32, 0x32, 0x7c, 0xdc, 0x0c, 0xc6, 0xdf, 0x4a, 0x0f, 0xdf, 0xa3, 0xb2, 0x14, - 0xb9, 0xae, 0x16, 0x65, 0x62, 0xd3, 0x72, 0x80, 0x88, 0x0a, 0x86, 0xf1, 0xbf, 0x64, 0x94, 0x45, - 0x46, 0x36, 0x60, 0x22, 0x9c, 0x40, 0xc2, 0x0e, 0x2d, 0xbc, 0x62, 0x48, 0xb8, 0x49, 0x77, 0x96, - 0x9f, 0x15, 0x26, 0x71, 0x73, 0xe1, 0x34, 0xd4, 0xd6, 0x9c, 0x04, 0x92, 0x2f, 0xc2, 0x08, 0x76, - 0x5d, 0xf6, 0xd8, 0xa6, 0xc9, 0x53, 0x7e, 0x84, 0xf5, 0x19, 0x36, 0x04, 0x29, 0xc9, 0x67, 0x84, - 0x8b, 0x38, 0xef, 0xfc, 0x69, 0xe5, 0xa8, 0x66, 0xdf, 0x11, 0x1e, 0xef, 0x51, 0x04, 0x27, 0x65, - 0xf6, 0xfc, 0xb5, 0x2c, 0x14, 0xe3, 0x4b, 0x9b, 0xbc, 0x07, 0x05, 0x79, 0xfc, 0xae, 0xd9, 0x22, - 0xeb, 0x4b, 0x41, 0x64, 0x5d, 0x91, 0x67, 0xf0, 0x9e, 0xad, 0xda, 0xad, 0x99, 0x1a, 0x01, 0x93, - 0x85, 0x36, 0x45, 0x18, 0x48, 0x65, 0x51, 0x05, 0x6e, 0xd0, 0x8d, 0x45, 0x11, 0x97, 0x68, 0xe4, - 0x0d, 0xc8, 0xdd, 0xbd, 0x59, 0x16, 
0x5e, 0x81, 0xc5, 0xf8, 0x21, 0xcd, 0xcd, 0x6b, 0x75, 0x63, - 0x5f, 0x86, 0x4f, 0xd6, 0x95, 0x80, 0xfb, 0x63, 0x9a, 0x8d, 0xa2, 0x04, 0x87, 0x8d, 0x3b, 0x3e, - 0xf2, 0xfe, 0xed, 0x91, 0x7c, 0xae, 0x38, 0x22, 0x82, 0x30, 0xff, 0xeb, 0x1c, 0x4c, 0x84, 0xf5, - 0x13, 0xa2, 0x3a, 0x68, 0x0b, 0x67, 0xec, 0xb3, 0x90, 0x97, 0xd2, 0x9d, 0x70, 0x0e, 0x1c, 0xf7, - 0x85, 0x64, 0xb7, 0x00, 0x52, 0x8c, 0xe3, 0xbb, 0x82, 0x29, 0x7f, 0x92, 0x6b, 0x10, 0xca, 0x68, - 0xfd, 0x84, 0xb9, 0x11, 0x36, 0x60, 0x66, 0x88, 0x46, 0xa6, 0x21, 0xeb, 0xf0, 0xc0, 0x76, 0x13, - 0x66, 0xd6, 0x69, 0x92, 0xf7, 0x20, 0x6f, 0x37, 0x9b, 0x98, 0xc5, 0x76, 0x88, 0x04, 0xb8, 0x79, - 0xc6, 0x8d, 0x9f, 0x19, 0x48, 0x55, 0x0e, 0x48, 0x19, 0x26, 0x78, 0xa4, 0x70, 0x9f, 0x36, 0x87, - 0x38, 0x80, 0x22, 0x0e, 0x18, 0x60, 0x7c, 0xcb, 0xa7, 0x4d, 0xf2, 0x0a, 0x8c, 0xb0, 0xd1, 0x14, - 0x27, 0x8e, 0x14, 0x2a, 0xd9, 0x60, 0xf2, 0x0e, 0x5b, 0x7b, 0xc6, 0x44, 0x04, 0xf2, 0x12, 0xe4, - 0x7a, 0x4b, 0x3b, 0xe2, 0x2c, 0x29, 0x46, 0xc9, 0x2f, 0x42, 0x34, 0x56, 0x4c, 0xae, 0x43, 0xfe, - 0x91, 0x9e, 0x37, 0xe1, 0x54, 0x6c, 0x18, 0x43, 0xfc, 0x10, 0x91, 0xbc, 0x02, 0x39, 0xdf, 0x77, - 0x85, 0x15, 0xd4, 0x5c, 0x68, 0x9a, 0x7a, 0x2f, 0x1c, 0x35, 0xc6, 0xdd, 0xf7, 0xdd, 0xe5, 0x3c, - 0x8c, 0xf1, 0x03, 0xc6, 0x78, 0x1e, 0x20, 0xfa, 0xc6, 0xa4, 0xb3, 0xa7, 0xf1, 0x21, 0x4c, 0x84, - 0xdf, 0x46, 0xce, 0x01, 0xec, 0xd3, 0x03, 0x6b, 0xcf, 0xee, 0x34, 0x5b, 0x5c, 0x3a, 0x2d, 0x98, - 0x13, 0xfb, 0xf4, 0x60, 0x0d, 0x01, 0xe4, 0x0c, 0x8c, 0x77, 0xd9, 0xf0, 0x8b, 0x39, 0x5e, 0x30, - 0xc7, 0xba, 0xbd, 0x6d, 0x36, 0x95, 0x17, 0x60, 0x1c, 0xf5, 0xac, 0x62, 0x45, 0x4e, 0x99, 0xf2, - 0xa7, 0xf1, 0x47, 0x39, 0x4c, 0x2f, 0xa6, 0x34, 0x88, 0xbc, 0x08, 0x53, 0x0d, 0x8f, 0xe2, 0x59, - 0x66, 0x33, 0x09, 0x4d, 0xd4, 0x53, 0x88, 0x80, 0xd5, 0x26, 0xb9, 0x00, 0x33, 0x51, 0x26, 0x68, - 0xab, 0xb1, 0x2d, 0xf2, 0xa1, 0x14, 0xcc, 0xa9, 0xae, 0xcc, 0x07, 0xbd, 0xb2, 0x8d, 0x91, 0x1b, - 0x8b, 0x6a, 0xe0, 0x71, 0xd6, 0x23, 0x62, 0xfe, 0xcd, 0x28, 0x70, 0x34, 
0xe8, 0x3c, 0x0d, 0x63, - 0xb6, 0xbd, 0xdb, 0x73, 0x78, 0x84, 0xb5, 0x82, 0x29, 0x7e, 0x91, 0xd7, 0x60, 0x36, 0x8a, 0xe4, - 0x2f, 0x9b, 0x31, 0x8a, 0xcd, 0x28, 0x86, 0x05, 0x2b, 0x1c, 0x4e, 0x2e, 0x03, 0x51, 0xeb, 0x73, - 0xb7, 0x3f, 0xa2, 0x0d, 0x3e, 0x27, 0x0b, 0xe6, 0xac, 0x52, 0x72, 0x0f, 0x0b, 0xc8, 0x0b, 0x50, - 0xf0, 0xa8, 0x8f, 0xd2, 0x21, 0x76, 0x1b, 0x66, 0xdf, 0x34, 0x27, 0x25, 0x8c, 0xf5, 0xdd, 0x45, - 0x28, 0x2a, 0xdd, 0x81, 0xb1, 0xdd, 0x79, 0x2a, 0x10, 0x73, 0x3a, 0x82, 0x9b, 0xdd, 0x6a, 0x93, - 0x7c, 0x09, 0x16, 0x15, 0x4c, 0x9e, 0x08, 0xd4, 0xa2, 0x2d, 0x67, 0xd7, 0xd9, 0x6e, 0x51, 0x31, - 0xdf, 0x92, 0xb3, 0x3a, 0xbc, 0x42, 0x9a, 0x0b, 0x11, 0x35, 0x4f, 0x11, 0xba, 0x2a, 0x68, 0xc9, - 0x3a, 0xcc, 0xc7, 0x38, 0xd3, 0xa6, 0xd5, 0xeb, 0xf6, 0x0d, 0x69, 0x18, 0xf1, 0x24, 0x3a, 0x4f, - 0xda, 0xdc, 0xea, 0x1a, 0xdf, 0x80, 0x82, 0x3a, 0x27, 0x59, 0x27, 0xa8, 0x72, 0x89, 0x98, 0x7d, - 0x93, 0x21, 0xac, 0xca, 0xee, 0x85, 0xd3, 0x11, 0x0a, 0x0e, 0x22, 0xdf, 0x5e, 0xa6, 0x42, 0x28, - 0x0e, 0xe1, 0x0b, 0x50, 0x68, 0x3a, 0x7e, 0xb7, 0x65, 0x1f, 0xa0, 0x59, 0x9e, 0x18, 0xe9, 0x49, - 0x01, 0x43, 0xc5, 0xcf, 0x32, 0xcc, 0x26, 0xf6, 0x41, 0x45, 0xd2, 0xe0, 0xfb, 0xfa, 0x60, 0x49, - 0xc3, 0xe8, 0x40, 0x41, 0x3d, 0xd7, 0x8e, 0x49, 0xdc, 0x73, 0x1a, 0x03, 0xfe, 0xf0, 0x4d, 0x7f, - 0xec, 0xe8, 0xb0, 0x94, 0x75, 0x9a, 0x18, 0xe6, 0xe7, 0x22, 0xe4, 0xa5, 0xc4, 0x26, 0x04, 0x25, - 0x7c, 0x4c, 0x90, 0xef, 0x99, 0x66, 0x58, 0x6a, 0xbc, 0x02, 0xe3, 0xe2, 0xe8, 0x1a, 0xfc, 0x84, - 0x60, 0x7c, 0x33, 0x0b, 0x33, 0x26, 0x65, 0x1b, 0x2b, 0xe5, 0xd9, 0xba, 0x9e, 0xda, 0x2b, 0x7a, - 0x7a, 0x04, 0x5f, 0xad, 0x6d, 0x03, 0xf2, 0x64, 0xfd, 0x52, 0x06, 0xe6, 0x52, 0x70, 0x3f, 0x56, - 0x9e, 0xe8, 0x1b, 0x30, 0x51, 0x71, 0xec, 0x56, 0xb9, 0xd9, 0x0c, 0xa3, 0xff, 0xa0, 0x9c, 0x8f, - 0xc9, 0xe4, 0x6c, 0x06, 0x55, 0x85, 0x98, 0x10, 0x95, 0xbc, 0x2a, 0x26, 0x45, 0x2e, 0xec, 0x56, - 0x9c, 0x14, 0xdf, 0x3d, 0x2c, 0x01, 0xff, 0xa6, 0xcd, 0x70, 0x8a, 0x60, 0x54, 0x6d, 0x0e, 0x8c, - 0x9c, 0xb1, 
0x9e, 0xda, 0xa1, 0x4b, 0x8f, 0xaa, 0x1d, 0x6f, 0xde, 0x50, 0xa9, 0xb2, 0x7e, 0x32, - 0x0b, 0xa7, 0xd3, 0x09, 0x3f, 0x6e, 0xca, 0x6f, 0x4c, 0x52, 0xa6, 0x64, 0x02, 0xc0, 0x94, 0xdf, - 0x3c, 0xa3, 0x19, 0xe2, 0x47, 0x08, 0x64, 0x87, 0xa7, 0xd6, 0x5f, 0xa3, 0xb6, 0x17, 0x6c, 0x53, - 0x3b, 0x18, 0x42, 0x92, 0x97, 0x26, 0x18, 0x0b, 0x28, 0x4c, 0xec, 0x49, 0xca, 0xb4, 0xcc, 0xfa, - 0x21, 0xdb, 0x70, 0xa2, 0x8c, 0x0c, 0x31, 0x51, 0xbe, 0x0e, 0x33, 0x75, 0xda, 0xb6, 0xbb, 0x7b, - 0xae, 0x27, 0x83, 0x2c, 0x5c, 0x81, 0xa9, 0x10, 0x94, 0x3a, 0x5b, 0xf4, 0x62, 0x0d, 0x5f, 0xe9, - 0x88, 0x68, 0x2b, 0xd1, 0x8b, 0x8d, 0xbf, 0x9e, 0x85, 0x33, 0xe5, 0x86, 0xb0, 0x27, 0x15, 0x05, - 0xd2, 0xec, 0xfd, 0x53, 0xae, 0x9b, 0x5c, 0x85, 0x89, 0xbb, 0xf6, 0xe3, 0x75, 0x6a, 0xfb, 0xd4, - 0x17, 0x69, 0x26, 0xb8, 0xd8, 0x6b, 0x3f, 0x8e, 0x1e, 0x7f, 0xcc, 0x08, 0x47, 0x55, 0x23, 0x8c, - 0x7c, 0x42, 0x35, 0x82, 0x01, 0x63, 0x6b, 0x6e, 0xab, 0x29, 0xce, 0x7a, 0xf1, 0xe2, 0xbc, 0x87, - 0x10, 0x53, 0x94, 0x18, 0x7f, 0x90, 0x81, 0xe9, 0xf0, 0x8b, 0xf1, 0x13, 0x3e, 0xf5, 0x2e, 0xb9, - 0x00, 0xe3, 0x58, 0x51, 0xb5, 0xa2, 0x1e, 0x1a, 0x2d, 0x8a, 0x69, 0x33, 0x9b, 0xa6, 0x2c, 0x54, - 0x7b, 0x62, 0xf4, 0x93, 0xf5, 0x84, 0xf1, 0xf7, 0xf1, 0x31, 0x5b, 0x6d, 0x25, 0x3b, 0x89, 0x94, - 0x0f, 0xc9, 0x0c, 0xf9, 0x21, 0xd9, 0x27, 0x36, 0x24, 0xb9, 0xbe, 0x43, 0xf2, 0xa3, 0x59, 0x98, - 0x0c, 0x3f, 0xf6, 0x7b, 0x2c, 0x1d, 0x45, 0xd8, 0xae, 0xa1, 0x02, 0x23, 0xd5, 0x95, 0xbd, 0x42, - 0xc4, 0x1f, 0xfa, 0x22, 0x8c, 0x89, 0xc5, 0x94, 0x89, 0x99, 0x7f, 0xc7, 0x46, 0x77, 0x79, 0x5a, - 0xb0, 0x1e, 0xc3, 0x01, 0xf5, 0x4d, 0x41, 0x87, 0x91, 0xa7, 0x1e, 0xd0, 0x6d, 0x61, 0xdb, 0xf0, - 0xd4, 0x9e, 0x51, 0xe9, 0x91, 0xa7, 0xa2, 0x86, 0x0d, 0x75, 0x3a, 0xfd, 0x77, 0x79, 0x28, 0xc6, - 0x49, 0x8e, 0x4f, 0xf8, 0x51, 0xeb, 0x6d, 0xf3, 0xab, 0x0a, 0x4f, 0xf8, 0xd1, 0xed, 0x6d, 0x9b, - 0x0c, 0x86, 0xf6, 0x52, 0x9e, 0xf3, 0x10, 0x5b, 0x5d, 0x10, 0xf6, 0x52, 0x9e, 0xf3, 0x50, 0xb3, - 0x97, 0xf2, 0x9c, 0x87, 0xa8, 0x48, 0x58, 0xaf, 
0x63, 0x54, 0x06, 0xbc, 0xa7, 0x08, 0x45, 0x42, - 0xcb, 0x8f, 0x67, 0x31, 0x94, 0x68, 0xec, 0xa8, 0x5c, 0xa6, 0xb6, 0x27, 0x92, 0x53, 0x88, 0xed, - 0x0c, 0x8f, 0xca, 0x6d, 0x04, 0x5b, 0x01, 0x83, 0x9b, 0x2a, 0x12, 0x69, 0x01, 0x51, 0x7e, 0xca, - 0x05, 0x7c, 0xfc, 0xdd, 0x5a, 0x9a, 0x6e, 0xce, 0xab, 0xac, 0x2d, 0x75, 0x35, 0xa7, 0xf0, 0x7d, - 0x92, 0xda, 0xdf, 0x9a, 0x08, 0x5e, 0x8b, 0x0a, 0xa4, 0xfc, 0xb1, 0xcc, 0x64, 0x34, 0x19, 0xe0, - 0xc1, 0x6d, 0x43, 0x35, 0x52, 0xc4, 0x84, 0xbc, 0x0b, 0x93, 0x6a, 0xac, 0x0d, 0x1e, 0x11, 0xe2, - 0x39, 0x1e, 0xb9, 0xb3, 0x4f, 0xe6, 0x6b, 0x95, 0x80, 0x6c, 0xc3, 0x99, 0x15, 0xb7, 0xe3, 0xf7, - 0xda, 0x32, 0x46, 0x68, 0x14, 0x6f, 0x1d, 0x70, 0x28, 0xd0, 0x71, 0xbf, 0x21, 0x50, 0x44, 0x68, - 0x07, 0xe9, 0x5b, 0xa3, 0x5f, 0x40, 0xfa, 0x31, 0x22, 0x9b, 0x30, 0x89, 0x1a, 0x54, 0x61, 0x27, - 0x39, 0xa9, 0x6f, 0x1b, 0x51, 0x49, 0x85, 0x2d, 0x0c, 0x1e, 0x6a, 0xce, 0x6e, 0xb7, 0xa4, 0x6b, - 0x87, 0xaa, 0x09, 0x56, 0x90, 0xc9, 0x87, 0x30, 0xcd, 0xaf, 0x68, 0x0f, 0xe8, 0x36, 0x9f, 0x3b, - 0x05, 0x4d, 0x13, 0xa1, 0x17, 0xf2, 0xc7, 0x7c, 0xa1, 0xb7, 0x7e, 0x44, 0xb7, 0xf9, 0xd8, 0x6b, - 0x8e, 0x55, 0x1a, 0x3e, 0xd9, 0x82, 0xb9, 0x35, 0xdb, 0xe7, 0x40, 0x25, 0x68, 0xc2, 0x14, 0x6a, - 0x68, 0xd1, 0xe0, 0x7d, 0xcf, 0xf6, 0xa5, 0x22, 0x3c, 0x35, 0x48, 0x42, 0x1a, 0x3d, 0xf9, 0x66, - 0x06, 0x16, 0x34, 0x3d, 0xb9, 0xb0, 0x33, 0xc3, 0xc0, 0xb3, 0xd3, 0xf8, 0xe4, 0x55, 0x92, 0x42, - 0x69, 0x1f, 0x34, 0x3e, 0x24, 0x31, 0x55, 0xbc, 0x17, 0x95, 0xab, 0x96, 0xe4, 0xfd, 0x78, 0x88, - 0x85, 0x8a, 0x6b, 0x7a, 0x46, 0x5f, 0xa8, 0xb1, 0x75, 0x2d, 0xd1, 0x8c, 0x1b, 0xf1, 0xfe, 0x16, - 0x8a, 0xae, 0x4c, 0xa8, 0xe8, 0x9a, 0x87, 0x51, 0xec, 0x55, 0x19, 0x7a, 0x0b, 0x7f, 0x18, 0x9f, - 0x51, 0xf7, 0x21, 0x21, 0x16, 0x0e, 0xdc, 0x87, 0x8c, 0xff, 0x61, 0x0c, 0x66, 0x62, 0xd3, 0x42, - 0xdc, 0x53, 0x33, 0x89, 0x7b, 0x6a, 0x1d, 0x80, 0xab, 0x7a, 0x87, 0xd4, 0xc9, 0x4a, 0xef, 0xcd, - 0x49, 0xe1, 0x7b, 0x1d, 0xae, 0x29, 0x85, 0x0d, 0x63, 0xca, 0x57, 0xec, 0x90, 0x3a, 
0xf2, 0x90, - 0x29, 0x5f, 0xf4, 0x0a, 0xd3, 0x88, 0x0d, 0x29, 0xc1, 0x28, 0x46, 0xea, 0x55, 0x9d, 0x67, 0x1d, - 0x06, 0x30, 0x39, 0x9c, 0xbc, 0x08, 0x63, 0x4c, 0x88, 0xaa, 0x56, 0xc4, 0x26, 0x88, 0x67, 0x0b, - 0x93, 0xb2, 0x98, 0xc4, 0x22, 0x8a, 0xc8, 0x0d, 0x28, 0xf0, 0xbf, 0x44, 0x6c, 0x9e, 0x31, 0xdd, - 0x62, 0xd2, 0x72, 0x9a, 0x32, 0x3c, 0x8f, 0x86, 0xc7, 0x6e, 0x17, 0xf5, 0x1e, 0xaa, 0x75, 0xaa, - 0x15, 0x11, 0xb0, 0x1e, 0x6f, 0x17, 0x3e, 0x07, 0xb2, 0x2a, 0x22, 0x04, 0x26, 0xcb, 0x08, 0x17, - 0x96, 0x3c, 0xde, 0x29, 0x51, 0x96, 0xe1, 0xae, 0x2b, 0xa6, 0x28, 0x21, 0x97, 0xf8, 0x4b, 0x0c, - 0x8a, 0x85, 0x3c, 0x6b, 0x2b, 0xbe, 0x5b, 0xa0, 0x62, 0x02, 0x65, 0xc3, 0xb0, 0x98, 0x55, 0xce, - 0xfe, 0x5e, 0x6d, 0xdb, 0x4e, 0x4b, 0x6c, 0x2b, 0x58, 0x39, 0xe2, 0x52, 0x06, 0x35, 0x23, 0x04, - 0xf2, 0x36, 0x4c, 0xf3, 0xec, 0x8a, 0xed, 0xb6, 0xdb, 0x41, 0xf6, 0x93, 0x51, 0xf4, 0x3d, 0x91, - 0xf1, 0x91, 0x15, 0xf1, 0x5a, 0x62, 0xb8, 0xec, 0x3c, 0xc1, 0x57, 0xde, 0x1e, 0x7f, 0x23, 0x2a, - 0x44, 0xe7, 0x09, 0x92, 0xfa, 0x1c, 0x6e, 0xaa, 0x48, 0xe4, 0x4d, 0x98, 0x62, 0x3f, 0x6f, 0x39, - 0x0f, 0x29, 0xaf, 0x70, 0x2a, 0x32, 0x6f, 0x40, 0xaa, 0x5d, 0x56, 0xc2, 0xeb, 0xd3, 0x31, 0xc9, - 0xfb, 0x70, 0x0a, 0x39, 0x35, 0xdc, 0x2e, 0x6d, 0x96, 0x77, 0x76, 0x9c, 0x96, 0xc3, 0xad, 0xd1, - 0x78, 0x14, 0x1a, 0xd4, 0xc1, 0xf3, 0x8a, 0x11, 0xc3, 0xb2, 0x23, 0x14, 0x33, 0x9d, 0x92, 0x3c, - 0x80, 0xe2, 0x4a, 0xcf, 0x0f, 0xdc, 0x76, 0x39, 0x08, 0x3c, 0x67, 0xbb, 0x17, 0x50, 0x7f, 0x61, - 0x46, 0x8b, 0xd5, 0xc2, 0x16, 0x47, 0x58, 0xc8, 0xf5, 0x41, 0x0d, 0xa4, 0xb0, 0xec, 0x90, 0xc4, - 0x4c, 0x30, 0x31, 0xfe, 0x4d, 0x06, 0xa6, 0x34, 0x52, 0xf2, 0x06, 0x14, 0x6e, 0x7a, 0x0e, 0xed, - 0x34, 0x5b, 0x07, 0xca, 0x45, 0x15, 0x6f, 0x31, 0x3b, 0x02, 0xce, 0x5b, 0xad, 0xa1, 0x85, 0x7a, - 0x9e, 0x6c, 0xaa, 0xa9, 0xe8, 0x55, 0xee, 0xc3, 0x2d, 0x26, 0x68, 0x2e, 0x0a, 0x1e, 0x85, 0x13, - 0x54, 0xcc, 0x4e, 0x05, 0x85, 0xbc, 0x03, 0x63, 0xfc, 0x3d, 0x58, 0xd8, 0x2d, 0x9e, 0x4d, 0x6b, - 0x26, 0x8f, 0x17, 0x80, 
0x13, 0x11, 0x8d, 0x7e, 0x7c, 0x53, 0x10, 0x19, 0x3f, 0x97, 0x01, 0x92, - 0x44, 0x3d, 0x46, 0xef, 0x75, 0xac, 0x31, 0xd1, 0x17, 0xc3, 0xd5, 0x98, 0xd3, 0x74, 0xe6, 0xac, - 0x26, 0x5e, 0xc0, 0x3b, 0x5e, 0xac, 0x3a, 0x55, 0x11, 0xc7, 0x8b, 0x8d, 0x1f, 0xc9, 0x02, 0x44, - 0xd8, 0xe4, 0xf3, 0x3c, 0x37, 0xdd, 0xfb, 0x3d, 0xbb, 0xe5, 0xec, 0x38, 0x7a, 0x84, 0x60, 0x64, - 0xf2, 0x75, 0x59, 0x62, 0xea, 0x88, 0xe4, 0x3d, 0x98, 0xa9, 0xd7, 0x74, 0x5a, 0xc5, 0x96, 0xde, - 0xef, 0x5a, 0x31, 0xf2, 0x38, 0x36, 0xda, 0x27, 0xab, 0xa3, 0xc1, 0xed, 0x93, 0xf9, 0x40, 0x88, - 0x12, 0xb6, 0xb1, 0xd4, 0x6b, 0xc2, 0x5d, 0xa0, 0x19, 0xbe, 0x6a, 0xe2, 0xd7, 0xf9, 0x5d, 0xab, - 0x2b, 0xfc, 0x08, 0xd8, 0x3e, 0xa1, 0xe1, 0x45, 0x1d, 0x39, 0xda, 0x27, 0x26, 0xc0, 0xcf, 0xa3, - 0xda, 0xaf, 0xed, 0x06, 0x54, 0x68, 0x3b, 0x9e, 0xda, 0x7b, 0x4f, 0x64, 0x4c, 0x30, 0xaa, 0xb9, - 0x3a, 0x6b, 0xad, 0x13, 0x06, 0x33, 0xd7, 0xa3, 0x4b, 0x0a, 0x37, 0x2b, 0x48, 0xb1, 0xb1, 0xf9, - 0xbb, 0x19, 0x38, 0x95, 0x4a, 0x4b, 0xae, 0x00, 0x44, 0x3a, 0x25, 0xd1, 0x4b, 0xb8, 0x63, 0x46, - 0x21, 0x93, 0x4c, 0x05, 0x83, 0x7c, 0x25, 0xae, 0x0d, 0x3a, 0xfe, 0x20, 0x5c, 0x94, 0x91, 0x0a, - 0x75, 0x6d, 0x50, 0x8a, 0x0e, 0xc8, 0xf8, 0xa5, 0x1c, 0xcc, 0x2a, 0x11, 0x99, 0xf8, 0xb7, 0x1e, - 0x63, 0x2f, 0xbe, 0x0f, 0x05, 0xd6, 0x1a, 0xa7, 0x21, 0x7c, 0x75, 0xb8, 0xe1, 0xcb, 0xab, 0x09, - 0x67, 0x55, 0xc1, 0xed, 0x8a, 0x8a, 0xcc, 0xe3, 0x87, 0xe2, 0xd6, 0x89, 0x0f, 0x12, 0x8d, 0xa4, - 0x9f, 0x8e, 0xc6, 0x9c, 0xf8, 0x30, 0x55, 0x39, 0xe8, 0xd8, 0xed, 0xb0, 0x36, 0x6e, 0x00, 0xf3, - 0x5a, 0xdf, 0xda, 0x34, 0x6c, 0x5e, 0x5d, 0xe4, 0xd6, 0xc5, 0xcb, 0x52, 0x22, 0x0a, 0x68, 0x54, - 0x8b, 0xef, 0xc1, 0x6c, 0xe2, 0xa3, 0x4f, 0x14, 0xca, 0xf4, 0x01, 0x90, 0xe4, 0x77, 0xa4, 0x70, - 0x78, 0x4d, 0x0f, 0x94, 0x7b, 0x2a, 0x7c, 0xbc, 0x6e, 0xb7, 0xed, 0x4e, 0x93, 0x9b, 0xd3, 0x2c, - 0xa9, 0x81, 0x4e, 0x7f, 0x3e, 0xab, 0x3a, 0x0c, 0x3f, 0xed, 0xab, 0xee, 0x8b, 0xda, 0x6d, 0xf8, - 0xf9, 0x7e, 0x63, 0x3a, 0x94, 0xd6, 0xe1, 0x3b, 0x39, 0x38, 
0xd3, 0x87, 0x92, 0x1c, 0xc4, 0x27, - 0x11, 0xd7, 0x42, 0x5c, 0x1b, 0x5c, 0xe1, 0x93, 0x98, 0x4a, 0xe4, 0xf3, 0x3c, 0x64, 0x88, 0x48, - 0x64, 0xcd, 0xef, 0xdf, 0xa8, 0xc6, 0xdf, 0x0f, 0xa1, 0xf1, 0x58, 0x21, 0x1c, 0x4a, 0xde, 0x83, - 0x51, 0xf4, 0x16, 0x8f, 0xc5, 0x84, 0x64, 0x18, 0x08, 0x57, 0xa2, 0x9a, 0xb2, 0x9f, 0x5a, 0x54, - 0x53, 0x06, 0x20, 0x9f, 0x83, 0x5c, 0xf9, 0x41, 0x5d, 0x8c, 0xcb, 0xb4, 0x4a, 0xfe, 0xa0, 0x1e, - 0x25, 0xa7, 0xb1, 0xb5, 0x2c, 0x32, 0x8c, 0x82, 0x11, 0xde, 0x5a, 0xa9, 0x89, 0x51, 0x51, 0x09, - 0x6f, 0xad, 0xd4, 0x22, 0xc2, 0xdd, 0x86, 0x16, 0x61, 0xeb, 0xd6, 0x4a, 0xed, 0xd3, 0x9b, 0xf6, - 0xff, 0x51, 0x96, 0xc7, 0x39, 0xe1, 0x0d, 0x7b, 0x0f, 0x0a, 0x5a, 0x20, 0xf3, 0x4c, 0x24, 0x8f, - 0x85, 0x91, 0xea, 0x63, 0x16, 0x43, 0x1a, 0x81, 0x4c, 0xf3, 0xc4, 0x7e, 0xab, 0x01, 0xdc, 0xc3, - 0x34, 0x4f, 0xc8, 0x21, 0xee, 0x4a, 0xa4, 0x93, 0x90, 0xeb, 0x90, 0xdf, 0xa4, 0x1d, 0xbb, 0x13, - 0x84, 0x0a, 0x51, 0x34, 0x2e, 0x0e, 0x10, 0xa6, 0x4b, 0x0d, 0x21, 0x22, 0x1a, 0xc2, 0xf6, 0xb6, - 0xfd, 0x86, 0xe7, 0x60, 0x3c, 0xa4, 0xf0, 0x2c, 0xe6, 0x86, 0xb0, 0x4a, 0x89, 0xce, 0x20, 0x46, - 0x64, 0xfc, 0x7c, 0x06, 0xc6, 0xc5, 0x40, 0xf2, 0xf4, 0x7c, 0xbb, 0xd1, 0x59, 0x22, 0x9c, 0x07, - 0x76, 0x9d, 0xb8, 0xf3, 0xc0, 0x2e, 0x0f, 0x3a, 0x34, 0x21, 0xbc, 0xf1, 0xc2, 0xa7, 0x41, 0x9c, - 0x8d, 0xd2, 0x57, 0x54, 0xcf, 0xbe, 0x16, 0xa2, 0x0e, 0xeb, 0xc5, 0x65, 0xfc, 0x4d, 0xf1, 0x65, - 0xb7, 0x56, 0x6a, 0x64, 0x09, 0xf2, 0xeb, 0x2e, 0x8f, 0x9f, 0xa5, 0xe6, 0x9a, 0x6e, 0x09, 0x98, - 0xda, 0x41, 0x12, 0x8f, 0x7d, 0x5f, 0xcd, 0x73, 0xc5, 0x5d, 0x46, 0xf9, 0xbe, 0x2e, 0x07, 0xc6, - 0xbe, 0x2f, 0x44, 0x1d, 0xfa, 0xfb, 0x68, 0xca, 0x26, 0x71, 0xff, 0x3a, 0xe6, 0xbf, 0xb9, 0xad, - 0x7a, 0xc7, 0x89, 0x22, 0xb9, 0x53, 0x2c, 0xf6, 0xdb, 0x29, 0xee, 0x5f, 0x37, 0x53, 0xa8, 0xf0, - 0x5d, 0x2d, 0x02, 0xd7, 0xa9, 0xf7, 0xf0, 0x29, 0xde, 0xa5, 0xd3, 0xdf, 0xd5, 0xe2, 0xcd, 0x1b, - 0x6a, 0x93, 0xfe, 0x9d, 0x2c, 0x9c, 0x4e, 0x27, 0x54, 0xdb, 0x92, 0x19, 0xd0, 0x96, 0x8b, 0x90, - 
0x5f, 0x73, 0xfd, 0x40, 0x31, 0x12, 0x44, 0xf5, 0xff, 0x9e, 0x80, 0x99, 0x61, 0x29, 0xbb, 0x73, - 0xb3, 0xbf, 0xc3, 0xe5, 0x89, 0xfc, 0x30, 0xba, 0x07, 0xbb, 0x73, 0xf3, 0x22, 0x72, 0x0b, 0xf2, - 0xa6, 0x70, 0xb4, 0x8a, 0x75, 0x8d, 0x04, 0x87, 0xd2, 0x14, 0xf1, 0x04, 0x44, 0x8b, 0x27, 0x2f, - 0x60, 0xa4, 0x0c, 0xe3, 0x62, 0xf4, 0x63, 0x4f, 0xc7, 0x29, 0x53, 0x46, 0x4f, 0xf1, 0x20, 0xe9, - 0xd8, 0x8e, 0x82, 0x8f, 0x80, 0xd5, 0x8a, 0xf4, 0x99, 0xc2, 0x1d, 0x85, 0x3f, 0x12, 0xea, 0xf6, - 0x98, 0x21, 0xa2, 0xf1, 0xcd, 0x2c, 0x80, 0xd4, 0xda, 0x3c, 0xb5, 0x33, 0xec, 0x73, 0xda, 0x0c, - 0x53, 0xec, 0x8d, 0x86, 0xcf, 0x81, 0x7d, 0x0f, 0xcd, 0x79, 0x86, 0xcf, 0x80, 0x5d, 0x82, 0xd1, - 0xcd, 0x48, 0xa1, 0x25, 0x5c, 0x52, 0x50, 0x1d, 0xcd, 0xe1, 0xc6, 0x36, 0xcc, 0xdf, 0xa2, 0x41, - 0xa4, 0xde, 0x92, 0x4f, 0x8f, 0x83, 0xd9, 0xbe, 0x0e, 0x13, 0x02, 0x3f, 0xdc, 0xbf, 0xb8, 0x2e, - 0x46, 0x04, 0xcc, 0x41, 0x5d, 0x8c, 0x44, 0x60, 0xbb, 0x51, 0x85, 0xb6, 0x68, 0x40, 0x3f, 0xdd, - 0x6a, 0xea, 0x40, 0x78, 0x53, 0xb0, 0x65, 0xc3, 0xd5, 0x70, 0x6c, 0xff, 0xdc, 0x87, 0x53, 0xe1, - 0xb7, 0x3f, 0x49, 0xbe, 0x57, 0xd9, 0x95, 0x52, 0x64, 0x47, 0x88, 0x38, 0x0e, 0xb0, 0x3d, 0xf9, - 0xbd, 0x0c, 0x2c, 0x4a, 0x8a, 0x07, 0x4e, 0x68, 0x39, 0x39, 0x14, 0x31, 0x79, 0x1b, 0x26, 0x15, - 0x1a, 0x11, 0xde, 0x1f, 0xf5, 0xd4, 0x8f, 0x9c, 0x60, 0xcf, 0xf2, 0x39, 0x5c, 0xd5, 0x53, 0x2b, - 0xe8, 0x64, 0x1b, 0x16, 0xeb, 0xe5, 0xbb, 0xeb, 0xf7, 0xed, 0x96, 0xd3, 0xc4, 0x6d, 0x60, 0xc3, - 0xbd, 0xe9, 0xb6, 0x5a, 0xee, 0xa3, 0x2d, 0x73, 0x5d, 0xe6, 0xe8, 0xc1, 0xa8, 0x20, 0xa8, 0xf4, - 0x7e, 0x18, 0xa2, 0x59, 0x1d, 0xd7, 0xda, 0x41, 0x44, 0xab, 0xe7, 0xb5, 0x7c, 0x73, 0x00, 0x17, - 0xe3, 0x9f, 0x65, 0xe0, 0xd9, 0xd0, 0x39, 0x29, 0xa5, 0x7d, 0xb1, 0x16, 0x64, 0x9e, 0x64, 0x0b, - 0xb2, 0x4f, 0xa4, 0x05, 0x1b, 0xd1, 0xf8, 0x54, 0x3b, 0xa1, 0x63, 0xb8, 0xfc, 0x7e, 0xa2, 0x8e, - 0x8f, 0x18, 0x95, 0xe7, 0x12, 0xae, 0xe6, 0x8a, 0x47, 0xb9, 0xf1, 0x96, 0xd2, 0x21, 0x29, 0x0c, - 0x35, 0xe2, 0x4c, 0x9c, 0xf8, 0x9b, 
0x59, 0x98, 0xb9, 0x57, 0xad, 0xac, 0x84, 0x76, 0x54, 0xdf, - 0x63, 0xb9, 0xc6, 0xb5, 0xb6, 0xf5, 0xdf, 0x39, 0x8d, 0x2d, 0x98, 0x8b, 0x75, 0x03, 0x0a, 0x41, - 0xef, 0x72, 0xd7, 0x99, 0x10, 0x2c, 0x05, 0xa0, 0xd3, 0x69, 0xec, 0xef, 0x5f, 0x37, 0x63, 0xd8, - 0xc6, 0xff, 0x09, 0x31, 0xbe, 0x62, 0x33, 0x7e, 0x1d, 0x26, 0xaa, 0xbe, 0xdf, 0xa3, 0xde, 0x96, - 0xb9, 0xae, 0x2a, 0x3d, 0x1c, 0x04, 0xb2, 0x39, 0x64, 0x46, 0x08, 0xe4, 0x12, 0xe4, 0x45, 0x8c, - 0x78, 0xb9, 0xbb, 0xa1, 0xfe, 0x39, 0x0c, 0x31, 0x6f, 0x86, 0xc5, 0xe4, 0x0d, 0x28, 0xf0, 0xbf, - 0xf9, 0x8c, 0x16, 0x1d, 0x8e, 0x6a, 0x4e, 0x81, 0xce, 0x57, 0x80, 0xa9, 0xa1, 0x91, 0x57, 0x21, - 0x57, 0x5e, 0x31, 0x85, 0x62, 0x4b, 0x48, 0xc0, 0x9e, 0xc5, 0xb5, 0x8f, 0xda, 0x75, 0x68, 0xc5, - 0x64, 0x72, 0xac, 0x8c, 0xb5, 0x21, 0x74, 0xf2, 0x38, 0x03, 0xa4, 0xde, 0x2c, 0x76, 0x2c, 0x23, - 0x8c, 0x5c, 0x85, 0xf1, 0x0a, 0x37, 0xfe, 0x13, 0x1a, 0x79, 0x9e, 0x93, 0x92, 0x83, 0xb4, 0xd8, - 0x12, 0x1c, 0x44, 0x2e, 0xc9, 0xac, 0x76, 0xf9, 0xc8, 0x03, 0xa7, 0x4f, 0xea, 0xba, 0xd7, 0x61, - 0x4c, 0x44, 0x52, 0x9f, 0x50, 0x52, 0xd7, 0xc4, 0x23, 0xa8, 0x0b, 0x9c, 0xa4, 0x2b, 0x2e, 0x3c, - 0x49, 0x57, 0xdc, 0x6d, 0x38, 0x73, 0x0b, 0xf5, 0x50, 0x7a, 0x3c, 0xb0, 0x2d, 0xb3, 0x2a, 0x34, - 0xfb, 0xf8, 0xa0, 0xc5, 0x55, 0x55, 0xf1, 0x90, 0x62, 0x56, 0xcf, 0x53, 0x53, 0x2c, 0xf7, 0x63, - 0x44, 0xbe, 0x04, 0xf3, 0x69, 0x45, 0x42, 0xff, 0x8f, 0x91, 0xaf, 0xd2, 0x2b, 0x50, 0x23, 0x5f, - 0xa5, 0x71, 0x20, 0xeb, 0x50, 0xe4, 0xf0, 0x72, 0xb3, 0xed, 0x74, 0xf8, 0x1b, 0x06, 0x7f, 0x1f, - 0x40, 0x97, 0x18, 0xc1, 0xd5, 0x66, 0x85, 0xfc, 0x2d, 0x43, 0x73, 0xa2, 0x8a, 0x51, 0x92, 0xbf, - 0x92, 0x61, 0xf7, 0x52, 0x1e, 0x77, 0x1c, 0xb7, 0xcf, 0x69, 0xf1, 0x1a, 0x1a, 0x7a, 0x35, 0xd5, - 0x03, 0xcf, 0xe9, 0xec, 0x0a, 0x07, 0xa9, 0x4d, 0xe1, 0x20, 0xf5, 0xf6, 0xc7, 0x72, 0x90, 0xe2, - 0xac, 0xfc, 0xa3, 0xc3, 0x52, 0xc1, 0x13, 0x75, 0xe2, 0x2a, 0xd2, 0xbe, 0x80, 0x75, 0x1d, 0x7a, - 0x09, 0x6f, 0x75, 0x78, 0xd4, 0x63, 0xda, 0xe4, 0x8d, 0x9c, 0xc1, 0x8d, 
0x1d, 0xbb, 0xce, 0xe6, - 0x9b, 0x78, 0x88, 0x90, 0x68, 0x68, 0x2a, 0x07, 0x76, 0x85, 0x96, 0x4e, 0x38, 0xdc, 0xaf, 0xb8, - 0x18, 0x5d, 0xa1, 0xa5, 0xc7, 0x8e, 0x85, 0xd3, 0x48, 0x9d, 0x3c, 0x1a, 0x09, 0xb9, 0x0a, 0x63, - 0x77, 0xed, 0xc7, 0xe5, 0x5d, 0x2a, 0x72, 0xb0, 0x4e, 0xc9, 0xed, 0x0f, 0x81, 0xcb, 0xf9, 0xdf, - 0xe5, 0x5e, 0x1b, 0xcf, 0x98, 0x02, 0x8d, 0xfc, 0x60, 0x06, 0x4e, 0xf3, 0x65, 0x2c, 0x5b, 0x59, - 0xa7, 0x41, 0xc0, 0xfa, 0x41, 0x84, 0x4f, 0x3c, 0x1f, 0x99, 0x9e, 0xa7, 0xe3, 0x61, 0x0c, 0x01, - 0x43, 0xec, 0x0c, 0x61, 0xc7, 0xf9, 0xa2, 0x54, 0x8b, 0x43, 0x9d, 0x4a, 0x4f, 0x36, 0x61, 0xf2, - 0xee, 0xcd, 0x72, 0x58, 0x2d, 0x0f, 0x4e, 0x5f, 0x4a, 0xdb, 0x1d, 0x15, 0xb4, 0x34, 0x9f, 0x09, - 0x95, 0x0d, 0x8a, 0xfe, 0x77, 0x56, 0x56, 0xd1, 0x6d, 0x7f, 0x3e, 0x52, 0x26, 0x74, 0xf7, 0x1b, - 0x34, 0x1e, 0x20, 0x3b, 0x44, 0x14, 0xce, 0x11, 0x9f, 0x93, 0x9d, 0x48, 0x2e, 0xab, 0x9e, 0xb8, - 0x39, 0xe4, 0x30, 0xde, 0xb6, 0x1f, 0x5b, 0xf6, 0x2e, 0xd5, 0x8c, 0x04, 0x84, 0xf2, 0xfe, 0x67, - 0x33, 0x70, 0xb6, 0x6f, 0x3f, 0x91, 0x1b, 0x70, 0xc6, 0xe6, 0xfe, 0xe5, 0xd6, 0x5e, 0x10, 0x74, - 0x7d, 0x4b, 0xde, 0xb0, 0x84, 0xef, 0xae, 0x79, 0x4a, 0x14, 0xaf, 0xb1, 0x52, 0x79, 0xe9, 0xf2, - 0xc9, 0x7b, 0xf0, 0x9c, 0xd3, 0xf1, 0x69, 0xa3, 0xe7, 0x51, 0x4b, 0x32, 0x68, 0x38, 0x4d, 0xcf, - 0xf2, 0xec, 0xce, 0xae, 0x74, 0x44, 0x36, 0xcf, 0x4a, 0x1c, 0xe1, 0xc3, 0xbe, 0xe2, 0x34, 0x3d, - 0x13, 0x11, 0x8c, 0x7f, 0x93, 0x81, 0x85, 0x7e, 0xfd, 0x48, 0x16, 0x60, 0x9c, 0x2a, 0xb9, 0x6d, - 0xf2, 0xa6, 0xfc, 0x49, 0x9e, 0x85, 0xe8, 0x78, 0x10, 0x22, 0x43, 0xbe, 0x21, 0xf2, 0x8c, 0xa0, - 0x65, 0xbf, 0x7a, 0x18, 0x08, 0xfb, 0xec, 0x42, 0x43, 0x3d, 0x12, 0xce, 0x01, 0x44, 0x67, 0x00, - 0xd7, 0xcb, 0x98, 0x13, 0x76, 0xc3, 0xe3, 0xcb, 0x95, 0x9c, 0x86, 0x31, 0xbe, 0xc7, 0x0a, 0xf7, - 0x0f, 0xf1, 0x8b, 0x1d, 0xf6, 0xa2, 0x93, 0xf1, 0x70, 0xc8, 0x2d, 0x17, 0xb4, 0xce, 0x1e, 0x6b, - 0xe3, 0xe0, 0x18, 0xbf, 0x56, 0xe0, 0x72, 0x47, 0xb9, 0x17, 0xec, 0x49, 0x49, 0x65, 0x29, 0xcd, - 0x5d, 0x8e, 
0x9b, 0x92, 0x2a, 0x66, 0xe9, 0xba, 0x93, 0x9c, 0x7c, 0xfa, 0xca, 0xa6, 0x3e, 0x7d, - 0xbd, 0x0e, 0x13, 0x2b, 0x7b, 0xb4, 0xb1, 0x1f, 0xfa, 0x20, 0xe5, 0xc5, 0xdb, 0x02, 0x03, 0xf2, - 0x30, 0xf2, 0x11, 0x02, 0xb9, 0x0a, 0x80, 0x5e, 0xba, 0x5c, 0x20, 0x57, 0x52, 0xc1, 0xa0, 0x53, - 0xaf, 0xb0, 0xce, 0x51, 0x50, 0x90, 0x7d, 0xdd, 0xbc, 0xa9, 0x9a, 0xf3, 0x70, 0xf6, 0xbe, 0xb7, - 0x23, 0xd0, 0x23, 0x04, 0xd6, 0x3c, 0x65, 0x33, 0x12, 0x47, 0x67, 0x31, 0xb1, 0x63, 0xa9, 0x48, - 0xe4, 0x73, 0x30, 0xbe, 0x42, 0xbd, 0x60, 0x73, 0x73, 0x1d, 0x6d, 0x68, 0x78, 0x06, 0x94, 0x3c, - 0x66, 0xab, 0x08, 0x82, 0xd6, 0x77, 0x0f, 0x4b, 0x53, 0x81, 0xd3, 0xa6, 0x61, 0x64, 0x77, 0x53, - 0x62, 0x93, 0x65, 0x28, 0xf2, 0x57, 0xfe, 0xe8, 0x2a, 0x85, 0xc7, 0x63, 0x9e, 0x1f, 0xd6, 0xc2, - 0x24, 0xe0, 0x11, 0xdd, 0x0e, 0x73, 0x75, 0x24, 0xf0, 0xc9, 0xaa, 0x4c, 0x71, 0xa3, 0x7e, 0x36, - 0x44, 0xcb, 0x31, 0xbe, 0x6d, 0xb0, 0xaf, 0x4f, 0x52, 0x90, 0x32, 0x4c, 0xad, 0xb8, 0xed, 0xae, - 0x1d, 0x38, 0x98, 0x40, 0xf4, 0x40, 0x9c, 0x84, 0xa8, 0x9f, 0x6c, 0xa8, 0x05, 0xda, 0xb1, 0xaa, - 0x16, 0x90, 0x9b, 0x30, 0x6d, 0xba, 0x3d, 0xd6, 0xed, 0x52, 0xa9, 0xc0, 0x0f, 0x3b, 0xb4, 0x74, - 0xf1, 0x58, 0x09, 0x3b, 0x9b, 0x85, 0x06, 0x41, 0x8b, 0x82, 0xab, 0x51, 0x91, 0x8d, 0x94, 0xd7, - 0x1d, 0xf5, 0x84, 0x53, 0x33, 0x76, 0x24, 0x98, 0xa5, 0x3c, 0x0c, 0x5d, 0x87, 0xc9, 0x7a, 0xfd, - 0xde, 0x26, 0xf5, 0x83, 0x9b, 0x2d, 0xf7, 0x11, 0x1e, 0x70, 0x79, 0x91, 0x60, 0xce, 0x77, 0xad, - 0x80, 0xfa, 0x81, 0xb5, 0xd3, 0x72, 0x1f, 0x99, 0x2a, 0x16, 0xf9, 0x2a, 0xeb, 0x0f, 0x45, 0x1c, - 0x14, 0xf1, 0x7e, 0x07, 0x49, 0xac, 0x78, 0x8c, 0x44, 0x8b, 0x80, 0xc9, 0xad, 0x7a, 0x67, 0x29, - 0xe8, 0xe8, 0x22, 0xe7, 0xb9, 0x8f, 0x0f, 0xca, 0xcd, 0xa6, 0x47, 0x7d, 0x5f, 0x9c, 0x44, 0xdc, - 0x45, 0x0e, 0x75, 0x27, 0x36, 0x2f, 0xd0, 0x5c, 0xe4, 0x14, 0x02, 0xb2, 0xc2, 0x44, 0x24, 0x36, - 0x8a, 0x68, 0x7b, 0x55, 0xad, 0xe1, 0x61, 0x22, 0x94, 0xb2, 0x62, 0xcc, 0xb9, 0x95, 0x96, 0xd3, - 0xd5, 0x25, 0x21, 0x85, 0x86, 0x54, 0x61, 0x86, 
0x03, 0xd8, 0xd2, 0xe2, 0xe9, 0xa5, 0xe6, 0xa2, - 0x04, 0x17, 0x82, 0x0d, 0x9a, 0x0b, 0x60, 0x8a, 0x29, 0x35, 0x28, 0x6c, 0x8c, 0x8e, 0xbc, 0x07, - 0xd3, 0x18, 0xbb, 0x3f, 0xf4, 0x33, 0xc2, 0x33, 0xa1, 0xc0, 0x63, 0xdb, 0x8a, 0x92, 0x98, 0xf3, - 0x5e, 0xc1, 0xf7, 0xf7, 0x6a, 0xd2, 0x01, 0x89, 0x31, 0x40, 0x73, 0x9f, 0x88, 0xc1, 0xa9, 0x88, - 0x81, 0x28, 0x89, 0x33, 0x08, 0x5a, 0x7e, 0xc4, 0xe0, 0x67, 0x32, 0x70, 0x96, 0x55, 0xa4, 0xba, - 0x14, 0xe1, 0xa6, 0x80, 0xb6, 0x4c, 0x3c, 0xef, 0xc8, 0xe5, 0x2b, 0x52, 0x3e, 0xb9, 0xa2, 0xa0, - 0x5d, 0x79, 0x78, 0xed, 0x4a, 0x39, 0xfa, 0x59, 0x97, 0x44, 0x3c, 0xda, 0x67, 0x5f, 0x9e, 0xaa, - 0x1c, 0xe8, 0xfb, 0x7b, 0x69, 0x1c, 0xf0, 0xa3, 0xd8, 0xc7, 0xa7, 0x7f, 0xd4, 0x99, 0x8f, 0xfd, - 0x51, 0x7d, 0x79, 0xaa, 0x1f, 0x15, 0xb4, 0xfc, 0xd4, 0x8f, 0xba, 0x01, 0x53, 0x78, 0x4a, 0x0b, - 0xe9, 0xc8, 0x13, 0x59, 0x4d, 0x70, 0x4d, 0x68, 0x05, 0x66, 0x81, 0xfd, 0xbc, 0x2f, 0x7e, 0xdd, - 0x1e, 0xc9, 0x8f, 0x17, 0xf3, 0xb7, 0x47, 0xf2, 0xb3, 0x45, 0x62, 0x4e, 0x84, 0x1d, 0x6f, 0x9e, - 0x4a, 0xfd, 0x10, 0xbc, 0xb5, 0xb2, 0x1b, 0x76, 0x74, 0xf5, 0xfa, 0xde, 0xf2, 0xaf, 0xd1, 0xda, - 0x36, 0xc0, 0xbf, 0x66, 0x8b, 0xbb, 0x7b, 0x2b, 0xdd, 0x20, 0x6f, 0xad, 0x1a, 0x38, 0x7e, 0x6b, - 0x8d, 0xd1, 0x98, 0x31, 0x6c, 0xe3, 0x9b, 0x93, 0x31, 0xbe, 0xc2, 0xa6, 0xd6, 0x80, 0x31, 0x7e, - 0x29, 0x15, 0x9d, 0x8c, 0xc6, 0x15, 0xfc, 0xca, 0x6a, 0x8a, 0x12, 0x72, 0x16, 0x72, 0xf5, 0xfa, - 0x3d, 0xd1, 0xc9, 0x68, 0x59, 0xeb, 0xfb, 0xae, 0xc9, 0x60, 0x6c, 0x84, 0xd0, 0x5c, 0x56, 0xc9, - 0xb8, 0xc0, 0x4e, 0x32, 0x13, 0xa1, 0xac, 0xbf, 0xe5, 0x15, 0x71, 0x24, 0xea, 0x6f, 0x71, 0x45, - 0x8c, 0x2e, 0x86, 0x2b, 0xb0, 0x50, 0xf6, 0x7d, 0xea, 0xb1, 0x19, 0x21, 0xac, 0x30, 0x3d, 0x71, - 0x8d, 0x11, 0x47, 0x30, 0x56, 0x6a, 0x37, 0x7c, 0xb3, 0x2f, 0x22, 0xb9, 0x08, 0xf9, 0x72, 0xaf, - 0xe9, 0xd0, 0x4e, 0x43, 0x8b, 0x1f, 0x67, 0x0b, 0x98, 0x19, 0x96, 0x92, 0xf7, 0xe1, 0x54, 0x2c, - 0xbe, 0xa4, 0xe8, 0x81, 0xf1, 0x68, 0x57, 0x95, 0xd7, 0xac, 0xc8, 0x72, 0x84, 0x77, 
0x49, 0x3a, - 0x25, 0x29, 0x43, 0x71, 0x15, 0xfd, 0xc9, 0x2a, 0x94, 0x3f, 0x62, 0xb9, 0x1e, 0x77, 0x24, 0xe4, - 0x97, 0x62, 0x11, 0x45, 0xb3, 0x19, 0x16, 0x9a, 0x09, 0x74, 0x72, 0x07, 0xe6, 0xe2, 0x30, 0x76, - 0x36, 0xf3, 0xfb, 0x2f, 0xee, 0x6a, 0x09, 0x2e, 0x78, 0x3a, 0xa7, 0x51, 0x91, 0x6d, 0x98, 0x8d, - 0x2c, 0xa7, 0xf4, 0x5b, 0xb1, 0x34, 0xc8, 0x0e, 0xcb, 0xe5, 0xcd, 0xf8, 0x59, 0x31, 0x19, 0xe7, - 0x22, 0x2b, 0xac, 0xf0, 0x76, 0x6c, 0x26, 0xd9, 0x91, 0x26, 0x4c, 0xd7, 0x9d, 0xdd, 0x8e, 0xd3, - 0xd9, 0xbd, 0x43, 0x0f, 0x6a, 0xb6, 0xe3, 0x09, 0xd3, 0x58, 0x69, 0xf8, 0x5e, 0xf6, 0x0f, 0xda, - 0x6d, 0x1a, 0x78, 0xb8, 0xea, 0x59, 0x39, 0x3a, 0xcb, 0xb3, 0xdb, 0xce, 0xa2, 0xcf, 0xe9, 0xd0, - 0xbf, 0xb4, 0x6b, 0x3b, 0xda, 0xf1, 0xae, 0xf3, 0xd4, 0x34, 0x13, 0x85, 0x21, 0x35, 0x13, 0x2d, - 0x98, 0x5d, 0xed, 0x34, 0xbc, 0x03, 0x7c, 0x4b, 0x94, 0x1f, 0x37, 0x75, 0xcc, 0xc7, 0xbd, 0x24, - 0x3e, 0xee, 0x39, 0x5b, 0xce, 0xb0, 0xb4, 0xcf, 0x4b, 0x32, 0x26, 0x75, 0x98, 0x45, 0x11, 0xbf, - 0x5a, 0xa9, 0x55, 0x3b, 0x4e, 0xe0, 0xd8, 0x01, 0x6d, 0x0a, 0xb1, 0x21, 0xcc, 0x53, 0xc3, 0x6f, - 0xa0, 0x4e, 0xb3, 0x6b, 0x39, 0x12, 0x45, 0x65, 0x9a, 0xa0, 0x1f, 0x74, 0x0d, 0x9c, 0xf9, 0x53, - 0xba, 0x06, 0x56, 0x61, 0x26, 0x1e, 0x73, 0xa2, 0x18, 0x9d, 0xf6, 0x3e, 0x16, 0x31, 0xa1, 0xc1, - 0xed, 0xa1, 0x98, 0xa8, 0xa5, 0x86, 0x8d, 0x45, 0x9b, 0x88, 0xdd, 0x28, 0x67, 0xb5, 0x1b, 0xa5, - 0xb6, 0x2b, 0x9d, 0xe4, 0x46, 0x59, 0x03, 0xb8, 0xe9, 0x7a, 0x0d, 0x5a, 0x46, 0x47, 0x6e, 0xa2, - 0x65, 0xf3, 0x62, 0x4c, 0xa3, 0x42, 0xbe, 0x7e, 0x76, 0xd8, 0x6f, 0x2b, 0xee, 0x8f, 0xaf, 0xf0, - 0x20, 0x36, 0x9c, 0xa9, 0x79, 0x74, 0x87, 0x7a, 0x1e, 0x6d, 0x8a, 0x1b, 0xcc, 0xb2, 0xd3, 0x69, - 0xca, 0x14, 0x6d, 0x22, 0x9e, 0x77, 0x57, 0xa2, 0x84, 0x86, 0xe4, 0xdb, 0x1c, 0x49, 0x3d, 0x4c, - 0xfb, 0xf0, 0x31, 0x7e, 0x3c, 0x0b, 0x0b, 0xfd, 0x5a, 0x3c, 0xe0, 0xee, 0xf7, 0x1a, 0x24, 0x37, - 0x11, 0x71, 0x07, 0x2c, 0xd2, 0xf8, 0x56, 0xb2, 0x04, 0xe9, 0x7b, 0x85, 0xb8, 0x13, 0xce, 0xc5, - 0x09, 0xb6, 0xbc, 0x16, 
0xb9, 0x01, 0x93, 0x4a, 0xff, 0xe0, 0x76, 0xdd, 0xaf, 0x37, 0x4d, 0xd8, - 0x89, 0xba, 0xec, 0x34, 0x88, 0xd3, 0x42, 0xde, 0x19, 0xf9, 0x2f, 0x52, 0xe4, 0xee, 0xf2, 0x63, - 0xdc, 0x22, 0xc2, 0xf7, 0x5d, 0x42, 0x00, 0x8f, 0x06, 0xbe, 0xcb, 0x9a, 0xf8, 0xb7, 0xf1, 0x8b, - 0x05, 0x7e, 0xe8, 0xab, 0x57, 0xc6, 0x7e, 0xb6, 0xd2, 0xb1, 0xab, 0x64, 0xf6, 0x24, 0x57, 0xc9, - 0xdc, 0xf1, 0x57, 0xc9, 0x91, 0xe3, 0xae, 0x92, 0xb1, 0xbb, 0xde, 0xe8, 0x09, 0xef, 0x7a, 0xe3, - 0x27, 0xba, 0xeb, 0x69, 0xd7, 0xd0, 0xfc, 0x71, 0xd7, 0xd0, 0x3f, 0xbf, 0x19, 0x3e, 0xad, 0x37, - 0xc3, 0x34, 0xa9, 0xf0, 0x44, 0x37, 0xc3, 0xc4, 0xc5, 0x6e, 0xf6, 0xc9, 0x5c, 0xec, 0xc8, 0x13, - 0xbb, 0xd8, 0xcd, 0x7d, 0xd2, 0x8b, 0xdd, 0xfc, 0x93, 0xbc, 0xd8, 0x9d, 0xfa, 0xb3, 0x78, 0xb1, - 0x3b, 0xfd, 0x1f, 0xe6, 0x62, 0x77, 0x1d, 0xf2, 0x35, 0xd7, 0x0f, 0x6e, 0xba, 0x5e, 0x1b, 0xef, - 0x96, 0x05, 0xa1, 0x92, 0x75, 0x7d, 0x9e, 0x25, 0x59, 0x13, 0xae, 0x04, 0x22, 0x59, 0x96, 0x13, - 0x4e, 0xde, 0xa4, 0x16, 0x22, 0xad, 0xb8, 0x98, 0x29, 0xe2, 0x42, 0x95, 0x9c, 0x6f, 0x82, 0xe4, - 0xf6, 0x48, 0x7e, 0xac, 0x38, 0x7e, 0x7b, 0x24, 0x5f, 0x2c, 0xce, 0x0e, 0x71, 0x33, 0xfc, 0x0b, - 0x50, 0x8c, 0x0b, 0xab, 0xc7, 0x87, 0x7b, 0x7e, 0x62, 0xb1, 0x36, 0x99, 0x28, 0x1d, 0x17, 0x16, - 0xc9, 0x55, 0x80, 0x9a, 0xe7, 0x3c, 0xb4, 0x03, 0x7a, 0x47, 0x9a, 0xfd, 0x89, 0xf8, 0xe6, 0x1c, - 0xca, 0x26, 0xa8, 0xa9, 0xa0, 0x84, 0xf7, 0xa4, 0x6c, 0xda, 0x3d, 0xc9, 0xf8, 0xb1, 0x2c, 0xcc, - 0xf2, 0x80, 0x75, 0x4f, 0xff, 0x9b, 0xed, 0xbb, 0xda, 0xed, 0xf7, 0xb9, 0x28, 0xa3, 0x82, 0xda, - 0xba, 0x01, 0xaf, 0xb6, 0x1f, 0xc2, 0xa9, 0x44, 0x57, 0xe0, 0x0d, 0xb8, 0x22, 0x43, 0x05, 0x26, - 0xee, 0xc0, 0x0b, 0xe9, 0x95, 0xdc, 0xbf, 0x6e, 0x26, 0x28, 0x8c, 0x3f, 0x1a, 0x49, 0xf0, 0x17, - 0xef, 0xb7, 0xea, 0x8b, 0x6c, 0xe6, 0x64, 0x2f, 0xb2, 0xd9, 0xe1, 0x5e, 0x64, 0x63, 0x12, 0x44, - 0x6e, 0x18, 0x09, 0xe2, 0x7d, 0x98, 0xda, 0xa4, 0x76, 0xdb, 0xdf, 0x74, 0x45, 0x7a, 0x2e, 0xee, - 0x64, 0x22, 0x23, 0x01, 0xb2, 0x32, 0x79, 0x81, 0x0b, 0x8d, 
0x65, 0x03, 0x46, 0xc0, 0xce, 0x48, - 0x9e, 0xaf, 0xcb, 0xd4, 0x39, 0xa8, 0xb7, 0xf2, 0xd1, 0x01, 0xb7, 0xf2, 0x3a, 0x14, 0x04, 0x5d, - 0x14, 0xe3, 0x3a, 0xba, 0x3e, 0xb2, 0x22, 0x84, 0xcb, 0xda, 0xa5, 0xb7, 0xe7, 0x74, 0x58, 0x3b, - 0xbf, 0x39, 0x6a, 0x4c, 0x58, 0x17, 0xac, 0x76, 0x9a, 0x5d, 0xd7, 0xe9, 0x60, 0x17, 0x8c, 0x47, - 0x5d, 0x40, 0x05, 0x98, 0x77, 0x81, 0x82, 0x44, 0xde, 0x86, 0xe9, 0x72, 0xad, 0xaa, 0x92, 0xe5, - 0xa3, 0x47, 0x61, 0xbb, 0xeb, 0x58, 0x1a, 0x69, 0x0c, 0x77, 0xd0, 0x4d, 0x6a, 0xe2, 0x4f, 0xe7, - 0x26, 0x65, 0xfc, 0x46, 0x41, 0x2e, 0xef, 0x4f, 0xf7, 0x69, 0x44, 0x7f, 0xec, 0xc8, 0x9d, 0xf0, - 0xb1, 0x63, 0xe4, 0x38, 0x29, 0x53, 0x11, 0x66, 0xc7, 0x3e, 0xf1, 0xc3, 0xc5, 0xf8, 0x09, 0xc5, - 0xd3, 0xd8, 0xda, 0xc9, 0x0f, 0xb3, 0x76, 0x52, 0x45, 0xda, 0x89, 0x4f, 0x2e, 0xd2, 0xc2, 0x89, - 0x45, 0xda, 0x7a, 0xe4, 0x84, 0x3d, 0x79, 0xac, 0x6f, 0xcb, 0x39, 0xa1, 0x35, 0x98, 0x4d, 0x0f, - 0x27, 0x18, 0xba, 0x63, 0x7f, 0x4f, 0xc9, 0xc9, 0x5f, 0x4b, 0x97, 0x93, 0x07, 0x9f, 0x1f, 0x7f, - 0x2e, 0x29, 0xff, 0xb9, 0xa4, 0xfc, 0xa7, 0x22, 0x29, 0xdf, 0x03, 0x62, 0xf7, 0x82, 0x3d, 0xda, - 0x09, 0x9c, 0x06, 0x86, 0xb4, 0x65, 0x43, 0x8c, 0x32, 0xb3, 0x58, 0x23, 0xc9, 0x52, 0x75, 0x8d, - 0x68, 0xa5, 0x6c, 0x06, 0xf0, 0x30, 0xa0, 0x43, 0x4b, 0xc0, 0x1e, 0xae, 0xa8, 0x07, 0xb6, 0xd7, - 0x41, 0x35, 0xd1, 0x55, 0x18, 0x97, 0x21, 0x54, 0x33, 0x91, 0x92, 0x39, 0x19, 0x3b, 0x55, 0x62, - 0x91, 0x25, 0xc8, 0x4b, 0x62, 0x35, 0x11, 0xd0, 0x23, 0x01, 0xd3, 0xa2, 0x53, 0x0a, 0x98, 0xf1, - 0x9f, 0x8c, 0xc8, 0x5d, 0x9b, 0x7d, 0x70, 0xcd, 0xf6, 0xec, 0x36, 0xe6, 0x08, 0x0c, 0x17, 0x95, - 0x22, 0x7f, 0xc7, 0xd6, 0x61, 0xcc, 0x2d, 0x41, 0x27, 0xf9, 0x58, 0x31, 0x70, 0xa3, 0x34, 0xcc, - 0xb9, 0x21, 0xd2, 0x30, 0xbf, 0xa9, 0xe5, 0x30, 0x1e, 0x89, 0x92, 0x66, 0xb2, 0x9d, 0x6c, 0x70, - 0xf6, 0xe2, 0x1b, 0x6a, 0xb2, 0xe1, 0xd1, 0x28, 0x22, 0x19, 0x52, 0x0e, 0x48, 0x33, 0x1c, 0x5e, - 0x28, 0xc6, 0x4e, 0x12, 0x5d, 0x7a, 0xfc, 0x3f, 0x68, 0x74, 0xe9, 0x55, 0x00, 0x71, 0xba, 0x46, - 
0xb6, 0x08, 0x2f, 0xe3, 0xe6, 0x23, 0x4c, 0xac, 0x83, 0xa0, 0xd5, 0x27, 0xfd, 0x88, 0x42, 0x68, - 0xfc, 0x2b, 0x02, 0xb3, 0xf5, 0xfa, 0xbd, 0x8a, 0x63, 0xef, 0x76, 0x5c, 0x3f, 0x70, 0x1a, 0xd5, - 0xce, 0x8e, 0xcb, 0xa4, 0xe9, 0xf0, 0x04, 0x50, 0xe2, 0x02, 0x47, 0xbb, 0x7f, 0x58, 0xcc, 0x6e, - 0x6b, 0xab, 0x9e, 0x27, 0xf5, 0x99, 0xfc, 0xb6, 0x46, 0x19, 0xc0, 0xe4, 0x70, 0x26, 0xb0, 0xd6, - 0x7b, 0x18, 0x95, 0x43, 0x18, 0x7c, 0xa0, 0xc0, 0xea, 0x73, 0x90, 0x29, 0xcb, 0x08, 0x4d, 0x4e, - 0x58, 0x71, 0x81, 0x39, 0xa3, 0xc5, 0xa8, 0x8e, 0x8a, 0xf9, 0xda, 0x15, 0xf2, 0x07, 0xee, 0xda, - 0x5d, 0x84, 0xab, 0x36, 0x70, 0x89, 0x35, 0x70, 0x00, 0xa7, 0x34, 0x7f, 0xed, 0x61, 0xdf, 0x57, - 0x5e, 0x15, 0x02, 0xb2, 0x81, 0x76, 0xc6, 0x29, 0x8f, 0x2c, 0x6a, 0xd2, 0xbf, 0xd4, 0x1a, 0xc8, - 0x8f, 0x65, 0xe0, 0x5c, 0x6a, 0x49, 0xb8, 0xba, 0x27, 0xb5, 0x38, 0xe1, 0xca, 0xa6, 0xc1, 0xd3, - 0x1b, 0xf6, 0xab, 0xda, 0x4a, 0xd9, 0x0a, 0x06, 0xd7, 0x44, 0x7e, 0x2d, 0x03, 0x67, 0x34, 0x8c, - 0x70, 0xb7, 0xf4, 0xc3, 0x50, 0x26, 0xa9, 0xf3, 0xfa, 0xa3, 0x27, 0x33, 0xaf, 0x5f, 0xd4, 0xdb, - 0x12, 0xed, 0x96, 0x6a, 0x1b, 0xfa, 0x7d, 0x21, 0x79, 0x08, 0xb3, 0x58, 0x24, 0xdf, 0x7a, 0xd8, - 0x9c, 0x15, 0x4f, 0x44, 0xf3, 0xd1, 0x67, 0xf3, 0x18, 0x04, 0x98, 0xa2, 0x7e, 0xe9, 0x3b, 0x87, - 0xa5, 0x29, 0x0d, 0x5d, 0x46, 0xde, 0xb6, 0xa2, 0x07, 0x23, 0xa7, 0xb3, 0xe3, 0xaa, 0xfb, 0x7e, - 0xa2, 0x0a, 0xf2, 0xcf, 0x32, 0x5c, 0xfd, 0xcf, 0x9b, 0x71, 0xd3, 0x73, 0xdb, 0x61, 0xb9, 0x34, - 0xa6, 0xec, 0xd3, 0x6d, 0xad, 0x27, 0xd3, 0x6d, 0x2f, 0xe3, 0x27, 0xf3, 0x3d, 0xc1, 0xda, 0xf1, - 0xdc, 0x76, 0xf4, 0xf9, 0x6a, 0xc7, 0xf5, 0xfd, 0x48, 0xf2, 0x43, 0x19, 0x38, 0xab, 0x69, 0x2d, - 0xd5, 0x34, 0x28, 0x22, 0xd2, 0xc3, 0x5c, 0x18, 0x03, 0x26, 0x2a, 0x5a, 0xbe, 0x22, 0xe6, 0xff, - 0x05, 0xfc, 0x02, 0x25, 0xe4, 0x28, 0x43, 0xb2, 0xda, 0x1c, 0x4b, 0xf9, 0x84, 0xfe, 0xb5, 0x10, - 0x07, 0x66, 0xd1, 0xa4, 0x46, 0x33, 0xfa, 0x9d, 0xef, 0x6f, 0xf4, 0x1b, 0x26, 0x38, 0xc2, 0xe4, - 0x07, 0xfd, 0x2d, 0x7f, 0x93, 0x5c, 
0xc9, 0x0f, 0xc0, 0xd9, 0x04, 0x30, 0x5c, 0x6d, 0xa7, 0xfa, - 0xae, 0xb6, 0xd7, 0x8e, 0x0e, 0x4b, 0xaf, 0xa4, 0xd5, 0x96, 0xb6, 0xd2, 0xfa, 0xd7, 0x40, 0x6c, - 0x80, 0xa8, 0x50, 0x48, 0x3f, 0xe9, 0x13, 0xf4, 0x35, 0x31, 0x3f, 0x14, 0x7c, 0xb6, 0x97, 0x2b, - 0xdf, 0xa0, 0x1e, 0x79, 0x11, 0x12, 0xa1, 0x50, 0x50, 0x12, 0x3f, 0x1c, 0x08, 0x2b, 0x93, 0x3e, - 0x95, 0x7c, 0xe7, 0xb0, 0xa4, 0x61, 0xb3, 0x3b, 0x90, 0x9a, 0x51, 0x42, 0x13, 0x36, 0x55, 0x44, - 0xf2, 0xab, 0x19, 0x98, 0x67, 0x80, 0x68, 0x52, 0x89, 0x46, 0x2d, 0x0c, 0x9a, 0xf5, 0x7b, 0x4f, - 0x66, 0xd6, 0xbf, 0x80, 0xdf, 0xa8, 0xce, 0xfa, 0x44, 0x97, 0xa4, 0x7e, 0x1c, 0xce, 0x76, 0xcd, - 0x7a, 0x4b, 0x9b, 0xed, 0x67, 0x87, 0x98, 0xed, 0x7c, 0x00, 0x8e, 0x9f, 0xed, 0x7d, 0x6b, 0x21, - 0x9b, 0x50, 0x10, 0xd7, 0x1f, 0xde, 0x61, 0xcf, 0x6b, 0x21, 0xa8, 0xd5, 0x22, 0x7e, 0x27, 0x15, - 0x79, 0x31, 0x12, 0x2d, 0xd4, 0xb8, 0x90, 0x0e, 0xcc, 0xf1, 0xdf, 0xba, 0x7a, 0xa9, 0xd4, 0x57, - 0xbd, 0x74, 0x51, 0xb4, 0xe8, 0xbc, 0xe0, 0x1f, 0xd3, 0x32, 0xa9, 0xa1, 0xa3, 0x52, 0x18, 0x93, - 0x2e, 0x10, 0x0d, 0xcc, 0x17, 0xed, 0xf9, 0xc1, 0x4a, 0xa5, 0x57, 0x44, 0x9d, 0xa5, 0x78, 0x9d, - 0xf1, 0x95, 0x9b, 0xc2, 0x9b, 0xd8, 0x30, 0x23, 0xa0, 0xee, 0x3e, 0xe5, 0x3b, 0xfc, 0x0b, 0x5a, - 0xf0, 0xae, 0x58, 0x29, 0xbf, 0xc3, 0xc9, 0x9a, 0x30, 0xb8, 0x5a, 0x6c, 0x43, 0x8f, 0xf3, 0x23, - 0xf7, 0x60, 0xb6, 0xdc, 0xed, 0xb6, 0x1c, 0xda, 0xc4, 0x56, 0x9a, 0x3d, 0xd6, 0x26, 0x23, 0x4a, - 0x2d, 0x67, 0xf3, 0x42, 0x71, 0xb1, 0xf4, 0x7a, 0xb1, 0xed, 0x26, 0x41, 0x6b, 0xfc, 0x68, 0x26, - 0xf1, 0xd1, 0xe4, 0x75, 0x98, 0xc0, 0x1f, 0x4a, 0x3c, 0x18, 0xd4, 0xd2, 0xf0, 0x4f, 0x44, 0xfd, - 0x4f, 0x84, 0xc0, 0x84, 0x25, 0x35, 0x26, 0x64, 0x8e, 0x0b, 0x4b, 0x42, 0x95, 0x10, 0x29, 0x0f, - 0x4a, 0xd2, 0x19, 0x23, 0x17, 0x09, 0x5d, 0xe8, 0x8c, 0x21, 0x5c, 0x30, 0x8c, 0x7f, 0x98, 0xd5, - 0xa7, 0x1d, 0xb9, 0xa8, 0xc8, 0xed, 0x4a, 0x54, 0x4a, 0x29, 0xb7, 0x2b, 0xd2, 0xfa, 0xdf, 0xcd, - 0xc0, 0xdc, 0x3d, 0x25, 0x91, 0xe9, 0xa6, 0x8b, 0xe3, 0x32, 0x38, 0xb5, 
0xe7, 0x93, 0xca, 0x36, - 0xa8, 0x66, 0x50, 0x65, 0x33, 0x05, 0xa7, 0x8c, 0x99, 0xf6, 0x3d, 0xe8, 0xa8, 0x87, 0x1f, 0xa6, - 0x24, 0x7d, 0xe4, 0xe8, 0x1c, 0x7e, 0xc2, 0x2c, 0x19, 0xc6, 0x4f, 0x66, 0x61, 0x52, 0x59, 0x31, - 0xe4, 0xb3, 0x50, 0x50, 0xab, 0x55, 0x55, 0x7c, 0xea, 0x57, 0x9a, 0x1a, 0x16, 0xea, 0xf8, 0xa8, - 0xdd, 0xd6, 0x74, 0x7c, 0x6c, 0x5d, 0x20, 0xf4, 0x84, 0x37, 0xa1, 0xf7, 0x52, 0x6e, 0x42, 0x38, - 0xcb, 0x15, 0x9d, 0xce, 0xc0, 0xfb, 0xd0, 0xdb, 0xc9, 0xfb, 0x10, 0xaa, 0x97, 0x14, 0xfa, 0xfe, - 0xb7, 0x22, 0xe3, 0xa7, 0x33, 0x50, 0x8c, 0xaf, 0xe9, 0x4f, 0xa5, 0x57, 0x4e, 0xf0, 0x9e, 0xf3, - 0x13, 0xd9, 0x30, 0x49, 0x8c, 0xf4, 0x56, 0x7e, 0x5a, 0x0d, 0x0d, 0xdf, 0xd1, 0x9e, 0x5a, 0x9e, - 0xd5, 0x03, 0xef, 0xa9, 0x71, 0x3e, 0xd2, 0xa3, 0x6d, 0x8e, 0x7c, 0xeb, 0x6f, 0x95, 0x9e, 0x31, - 0x3e, 0x80, 0xf9, 0x78, 0x77, 0xe0, 0x73, 0x4b, 0x19, 0x66, 0x74, 0x78, 0x3c, 0xc5, 0x54, 0x9c, - 0xca, 0x8c, 0xe3, 0x1b, 0xbf, 0x9b, 0x8d, 0xf3, 0x16, 0x46, 0x87, 0x6c, 0x8f, 0x52, 0xed, 0x5c, - 0xc4, 0x1e, 0xc5, 0x41, 0xa6, 0x2c, 0x3b, 0x49, 0x6a, 0xb7, 0xd0, 0xe7, 0x36, 0x97, 0xee, 0x73, - 0x4b, 0x6e, 0xc4, 0x2c, 0xa8, 0x95, 0x00, 0x51, 0x8f, 0xe8, 0xb6, 0x15, 0x59, 0x51, 0x27, 0x0c, - 0xa7, 0xe7, 0xb5, 0x68, 0xe7, 0x92, 0x7e, 0x34, 0xd2, 0xae, 0x07, 0x58, 0xc0, 0x89, 0x53, 0x91, - 0xc9, 0x1a, 0x8c, 0xb3, 0xcf, 0xbc, 0x6b, 0x77, 0xc5, 0x2b, 0x0a, 0x09, 0x3d, 0xf0, 0x5b, 0xe1, - 0xfd, 0x50, 0x71, 0xc2, 0x6f, 0x51, 0x26, 0x21, 0xa8, 0x13, 0x4b, 0x20, 0x1a, 0xff, 0x77, 0x86, - 0xad, 0xff, 0xc6, 0xfe, 0xf7, 0x58, 0x7e, 0x38, 0xd6, 0xa4, 0x01, 0x36, 0xb1, 0xff, 0x36, 0xcb, - 0xd3, 0xfe, 0x88, 0xe9, 0xf3, 0x26, 0x8c, 0x6d, 0xda, 0xde, 0xae, 0x48, 0xe9, 0xad, 0x73, 0xe1, - 0x05, 0x51, 0xf8, 0xaa, 0x00, 0x7f, 0x9b, 0x82, 0x40, 0x55, 0x9d, 0x65, 0x87, 0x52, 0x9d, 0x29, - 0x9a, 0xfb, 0xdc, 0x13, 0xd3, 0xdc, 0x7f, 0x5f, 0x98, 0xe1, 0xa7, 0x1c, 0x0c, 0x11, 0x4c, 0xfb, - 0x7c, 0x3c, 0xa1, 0x56, 0x22, 0xec, 0x79, 0xc4, 0x8e, 0xdc, 0x50, 0x53, 0x74, 0x29, 0xce, 0x9f, - 0xc7, 0x24, 
0xe3, 0x32, 0xfe, 0xeb, 0x1c, 0xef, 0x63, 0xd1, 0x51, 0x17, 0x34, 0x17, 0x77, 0x5c, - 0x27, 0x31, 0xad, 0x26, 0x77, 0x76, 0xbf, 0x00, 0x23, 0x6c, 0x6e, 0x8a, 0xde, 0x44, 0x3c, 0x36, - 0x7f, 0x55, 0x3c, 0x56, 0xce, 0xd6, 0x32, 0x9e, 0x49, 0x6a, 0xee, 0x45, 0x3c, 0xb6, 0xd4, 0xb5, - 0x8c, 0x18, 0xac, 0x05, 0x61, 0x02, 0x0b, 0xb5, 0x05, 0xed, 0x1d, 0x3b, 0x99, 0x29, 0x4f, 0xc9, - 0x9a, 0xb3, 0x0a, 0xd3, 0x0f, 0x9c, 0x4e, 0xd3, 0x7d, 0xe4, 0x57, 0xa8, 0xbf, 0x1f, 0xb8, 0x5d, - 0x61, 0x07, 0x8c, 0x1a, 0xfe, 0x47, 0xbc, 0xc4, 0x6a, 0xf2, 0x22, 0xf5, 0x39, 0x44, 0x27, 0x22, - 0xcb, 0x30, 0xa5, 0x85, 0x80, 0x15, 0x8f, 0x94, 0xa8, 0xe3, 0xd4, 0x03, 0xc8, 0xaa, 0x3a, 0x4e, - 0x8d, 0x84, 0x9d, 0xd2, 0xe2, 0xfb, 0x95, 0xa7, 0xca, 0xc4, 0xb7, 0x0b, 0x1c, 0x72, 0x1d, 0xf2, - 0x3c, 0x4e, 0x48, 0xb5, 0xa2, 0x3e, 0x4f, 0xf9, 0x08, 0x8b, 0xc5, 0xd9, 0x91, 0x88, 0x51, 0x5c, - 0x08, 0xee, 0x24, 0x67, 0x8e, 0x6c, 0xb8, 0x4d, 0x6a, 0x7c, 0x06, 0x8a, 0x62, 0xd3, 0x89, 0xb2, - 0xc5, 0x3f, 0x07, 0x23, 0x2b, 0xd5, 0x8a, 0xa9, 0x6e, 0x14, 0x0d, 0xa7, 0xe9, 0x99, 0x08, 0x45, - 0x1f, 0xb9, 0x0d, 0x1a, 0x3c, 0x72, 0xbd, 0x7d, 0x93, 0xfa, 0x81, 0xe7, 0xf0, 0xe4, 0x9c, 0xb8, - 0xd4, 0x3e, 0x4b, 0xde, 0x86, 0x51, 0x34, 0x4e, 0x8d, 0xed, 0xfd, 0xf1, 0x3a, 0x96, 0xa7, 0xc4, - 0x14, 0x1d, 0x45, 0x4b, 0x57, 0x93, 0x13, 0x91, 0x37, 0x61, 0xa4, 0x42, 0x3b, 0x07, 0xb1, 0xbc, - 0x81, 0x09, 0xe2, 0x70, 0xc9, 0x37, 0x69, 0xe7, 0xc0, 0x44, 0x12, 0xe3, 0xa7, 0xb3, 0x70, 0x2a, - 0xe5, 0xb3, 0xee, 0x7f, 0xf6, 0x29, 0xdd, 0xf7, 0x96, 0xb5, 0x7d, 0x4f, 0xbe, 0x39, 0xf7, 0xed, - 0xf8, 0xd4, 0x6d, 0xf0, 0x6f, 0x64, 0xe0, 0x8c, 0x3e, 0x59, 0x85, 0x35, 0xfa, 0xfd, 0xeb, 0xe4, - 0x2d, 0x18, 0x5b, 0xa3, 0x76, 0x93, 0xca, 0x24, 0x61, 0xa7, 0xc2, 0xe8, 0x7e, 0x3c, 0x8c, 0x00, - 0x2f, 0xe4, 0x6c, 0x23, 0xa7, 0x53, 0x0e, 0x25, 0x15, 0xf1, 0x71, 0x5c, 0x40, 0x37, 0x64, 0x70, - 0x92, 0xb4, 0xaa, 0x06, 0x58, 0x6e, 0x7c, 0x27, 0x03, 0xcf, 0x0e, 0xa0, 0x61, 0x03, 0xc7, 0x86, - 0x5e, 0x1d, 0x38, 0x3c, 0x33, 0x11, 0x4a, 0xde, 
0x85, 0x99, 0x4d, 0x21, 0xe0, 0xcb, 0xe1, 0xc8, - 0x46, 0x6b, 0x47, 0xca, 0xfe, 0xd2, 0xb6, 0xc8, 0x8c, 0x23, 0x6b, 0x51, 0x73, 0x72, 0x03, 0xa3, - 0xe6, 0xa8, 0x41, 0x68, 0x46, 0x86, 0x0d, 0x42, 0xf3, 0x01, 0xcc, 0xeb, 0x6d, 0x13, 0xb1, 0x80, - 0xa3, 0x10, 0x3c, 0x99, 0xfe, 0x21, 0x78, 0x06, 0x46, 0x1c, 0x35, 0x7e, 0x32, 0x03, 0x45, 0x9d, - 0xf7, 0x27, 0x1d, 0xcf, 0x77, 0xb4, 0xf1, 0x7c, 0x36, 0x7d, 0x3c, 0xfb, 0x0f, 0xe4, 0xff, 0x91, - 0x89, 0x37, 0x76, 0xa8, 0x11, 0x34, 0x60, 0xac, 0xe2, 0xb6, 0x6d, 0x47, 0x0e, 0x1c, 0xba, 0x92, - 0x34, 0x11, 0x62, 0x8a, 0x92, 0xe1, 0x22, 0x16, 0x9d, 0x87, 0xd1, 0x0d, 0xb7, 0x53, 0xae, 0x08, - 0x9b, 0x5c, 0xe4, 0xd3, 0x71, 0x3b, 0x96, 0xdd, 0x34, 0x79, 0x01, 0x59, 0x07, 0xa8, 0x37, 0x3c, - 0x4a, 0x3b, 0x75, 0xe7, 0xfb, 0x69, 0x4c, 0x96, 0x60, 0x3d, 0xd4, 0xea, 0xe1, 0xc6, 0xc2, 0x9f, - 0x52, 0x11, 0xd1, 0xf2, 0x9d, 0xef, 0x57, 0xf7, 0x5e, 0x85, 0x1e, 0xd7, 0x95, 0x08, 0xea, 0x16, - 0x1b, 0x87, 0x6b, 0x9f, 0xc6, 0xba, 0x4a, 0xad, 0x0a, 0x7b, 0xf8, 0x5a, 0xea, 0x70, 0xfc, 0x4e, - 0x06, 0x9e, 0x1d, 0x40, 0xf3, 0x04, 0x46, 0xe5, 0x4f, 0xbb, 0xc3, 0x29, 0x40, 0x44, 0x84, 0x69, - 0x99, 0x9d, 0x66, 0xc0, 0x13, 0xff, 0x4d, 0x89, 0xb4, 0xcc, 0x0c, 0xa0, 0xa5, 0x65, 0x66, 0x00, - 0x76, 0xae, 0xae, 0x51, 0x67, 0x77, 0x8f, 0x9b, 0x5c, 0x4d, 0xf1, 0xbd, 0x61, 0x0f, 0x21, 0xea, - 0xb9, 0xca, 0x71, 0x8c, 0x7f, 0x3d, 0x06, 0x67, 0x4d, 0xba, 0xeb, 0xb0, 0x9b, 0xc7, 0x96, 0xef, - 0x74, 0x76, 0xb5, 0x20, 0x3e, 0x46, 0x6c, 0xe5, 0x8a, 0x8c, 0x17, 0x0c, 0x12, 0xce, 0xc4, 0x4b, - 0x90, 0x67, 0xc7, 0xaa, 0xb2, 0x78, 0xf1, 0x15, 0xab, 0xe3, 0x36, 0xa9, 0x88, 0x12, 0x2d, 0x8b, - 0xc9, 0xab, 0x42, 0x10, 0x52, 0x72, 0x12, 0x31, 0x41, 0xe8, 0xbb, 0x87, 0x25, 0xa8, 0x1f, 0xf8, - 0x01, 0xc5, 0x4b, 0xb0, 0x10, 0x86, 0xc2, 0xdb, 0xca, 0x48, 0x9f, 0xdb, 0xca, 0x5d, 0x98, 0x2f, - 0x37, 0xf9, 0xe9, 0x68, 0xb7, 0x6a, 0x9e, 0xd3, 0x69, 0x38, 0x5d, 0xbb, 0x25, 0x6f, 0xe0, 0xd8, - 0xcb, 0x76, 0x58, 0x6e, 0x75, 0x43, 0x04, 0x33, 0x95, 0x8c, 0x35, 0xa3, 0xb2, 0x51, 
0xc7, 0x08, - 0x31, 0xe2, 0x81, 0x12, 0x9b, 0xd1, 0xec, 0xf8, 0xd8, 0x0a, 0xdf, 0x0c, 0x8b, 0xf1, 0x9e, 0x84, - 0xcf, 0xd1, 0x9b, 0xeb, 0xf5, 0x3b, 0x22, 0x6b, 0x9a, 0x4c, 0x99, 0xc0, 0x0d, 0x0d, 0x82, 0x96, - 0x8f, 0xe6, 0x8d, 0x1a, 0x5e, 0x44, 0x57, 0xaf, 0xaf, 0x31, 0xba, 0x7c, 0x82, 0xce, 0xf7, 0xf7, - 0x54, 0x3a, 0x8e, 0x47, 0xae, 0xb2, 0xa9, 0xd0, 0x76, 0x03, 0x8a, 0x53, 0x78, 0x22, 0xba, 0x55, - 0x79, 0x08, 0xe5, 0xb7, 0x2a, 0x05, 0x85, 0xbc, 0x0d, 0x73, 0xab, 0x2b, 0x4b, 0x52, 0xad, 0x5c, - 0x71, 0x1b, 0x3d, 0x34, 0x0c, 0x00, 0xac, 0x0f, 0xc7, 0x90, 0x36, 0x96, 0xd8, 0x6e, 0x92, 0x86, - 0x46, 0x2e, 0xc0, 0x78, 0xb5, 0xc2, 0xfb, 0x7e, 0x52, 0xcd, 0x0b, 0x26, 0xac, 0x9d, 0x64, 0x21, - 0xb9, 0x17, 0x89, 0xfd, 0x85, 0x63, 0xe5, 0xf3, 0xb3, 0x43, 0x88, 0xfc, 0x6f, 0xc2, 0xd4, 0xb2, - 0x1b, 0x54, 0x3b, 0x7e, 0x60, 0x77, 0x1a, 0xb4, 0x5a, 0x51, 0x83, 0x74, 0x6f, 0xbb, 0x81, 0xe5, - 0x88, 0x12, 0xf6, 0xe5, 0x3a, 0x26, 0xf9, 0x3c, 0x92, 0xde, 0xa2, 0x1d, 0xea, 0x45, 0xc1, 0xb9, - 0x47, 0x79, 0xdf, 0x32, 0xd2, 0xdd, 0xb0, 0xc4, 0xd4, 0x11, 0x89, 0x09, 0xa7, 0x30, 0xc9, 0xbf, - 0xdb, 0xf3, 0xf5, 0xca, 0x67, 0x22, 0x91, 0xb6, 0x2b, 0x10, 0xac, 0xf8, 0x57, 0xa4, 0x93, 0x8a, - 0x3c, 0x68, 0x3c, 0x7b, 0xe9, 0x8a, 0xdb, 0xa4, 0x3e, 0xdf, 0x81, 0xbe, 0x87, 0xf2, 0xa0, 0x29, - 0x6d, 0x1b, 0xb0, 0x2b, 0xff, 0xc7, 0x98, 0x07, 0x2d, 0x81, 0x4b, 0x3e, 0x0f, 0xa3, 0xf8, 0x53, - 0x48, 0xcc, 0x73, 0x29, 0x6c, 0x23, 0x69, 0xb9, 0xc1, 0x30, 0x4d, 0x4e, 0x40, 0xaa, 0x30, 0x2e, - 0xae, 0x63, 0x27, 0xc9, 0xe6, 0x23, 0xee, 0x75, 0x7c, 0xb6, 0x09, 0x7a, 0xa3, 0x09, 0x05, 0xb5, - 0x42, 0xb6, 0xca, 0xd6, 0x6c, 0x7f, 0x8f, 0x36, 0xd9, 0x2f, 0x91, 0x88, 0x0f, 0x57, 0xd9, 0x1e, - 0x42, 0x2d, 0xf6, 0x1d, 0xa6, 0x82, 0xc2, 0xce, 0xe9, 0xaa, 0xbf, 0xe5, 0x8b, 0x4f, 0x11, 0x0a, - 0x1a, 0x07, 0x95, 0x7d, 0x4d, 0x53, 0x14, 0x19, 0xdf, 0x07, 0xf3, 0x1b, 0xbd, 0x56, 0xcb, 0xde, - 0x6e, 0x51, 0x99, 0xa8, 0x05, 0x33, 0xa2, 0x2f, 0xc3, 0x68, 0x5d, 0xc9, 0xb1, 0x1e, 0x26, 0xcb, - 0x54, 0x70, 0xd0, 0x58, 
0x35, 0x83, 0x11, 0x80, 0x62, 0xd9, 0xd5, 0x39, 0xa9, 0xf1, 0xdb, 0x19, - 0x98, 0x97, 0x46, 0x06, 0x9e, 0xdd, 0xd8, 0x0f, 0x13, 0xed, 0x5f, 0xd0, 0xe6, 0x1a, 0x2e, 0x82, - 0xd8, 0x34, 0xe2, 0xb3, 0xee, 0xb6, 0xfc, 0x08, 0x5d, 0x08, 0x4a, 0xfb, 0xe0, 0xe3, 0x3e, 0x86, - 0xbc, 0x0d, 0x93, 0xe2, 0xc8, 0x55, 0x22, 0x70, 0x62, 0x00, 0x32, 0x71, 0x9d, 0x8c, 0x9b, 0xbc, - 0xa8, 0xe8, 0x28, 0xdf, 0xe9, 0x4d, 0xf9, 0xa4, 0x72, 0x45, 0xba, 0x7c, 0xa7, 0xd7, 0x31, 0x60, - 0xea, 0x7e, 0x7b, 0x32, 0xde, 0xb7, 0x62, 0xee, 0xde, 0x50, 0x63, 0xee, 0x65, 0xa2, 0x9b, 0x77, - 0x14, 0x73, 0x4f, 0xbd, 0x79, 0x87, 0xa8, 0xe1, 0x98, 0x64, 0x8f, 0x19, 0x93, 0x77, 0xe5, 0x98, - 0xe4, 0xfa, 0x4f, 0x8c, 0xb9, 0x01, 0xe3, 0x50, 0x8f, 0x56, 0xc8, 0xc8, 0x50, 0xca, 0x98, 0x67, - 0x30, 0xb9, 0x00, 0x27, 0x89, 0xef, 0xcc, 0x82, 0x93, 0xaa, 0xe1, 0x19, 0x1d, 0x9e, 0xe9, 0x31, - 0xdb, 0xfd, 0x17, 0xa0, 0x50, 0x0e, 0x02, 0xbb, 0xb1, 0x47, 0x9b, 0x15, 0xb6, 0x3d, 0x29, 0x41, - 0xb5, 0x6c, 0x01, 0x57, 0x5f, 0xe6, 0x54, 0x5c, 0x1e, 0xee, 0xd6, 0xf6, 0x85, 0x91, 0x6c, 0x18, - 0xee, 0x96, 0x41, 0xf4, 0x70, 0xb7, 0x0c, 0x42, 0xae, 0xc2, 0x78, 0xb5, 0xf3, 0xd0, 0x61, 0x7d, - 0xc2, 0xe3, 0x6a, 0xa1, 0x46, 0xcb, 0xe1, 0x20, 0x75, 0x73, 0x15, 0x58, 0xe4, 0x4d, 0xe5, 0xa2, - 0x34, 0x11, 0x29, 0x48, 0xb8, 0xa2, 0x2c, 0x8c, 0x81, 0xa3, 0x5e, 0x82, 0xc2, 0x9b, 0xd3, 0x0d, - 0x18, 0x97, 0xfa, 0x4f, 0x88, 0x4e, 0x10, 0x41, 0x99, 0x0c, 0x41, 0x21, 0x91, 0x31, 0x69, 0xba, - 0x92, 0x50, 0x70, 0x52, 0x49, 0x9a, 0xae, 0x24, 0x14, 0xd4, 0x92, 0xa6, 0x2b, 0xa9, 0x05, 0x43, - 0xd5, 0x51, 0xe1, 0x58, 0xd5, 0xd1, 0x7d, 0x28, 0xd4, 0x6c, 0x2f, 0x70, 0x98, 0xdc, 0xd3, 0x09, - 0xfc, 0x85, 0x29, 0x4d, 0xdb, 0xaa, 0x14, 0x2d, 0x3f, 0x2f, 0x13, 0x77, 0x77, 0x15, 0x7c, 0x3d, - 0xc3, 0x74, 0x04, 0x4f, 0x37, 0x91, 0x9d, 0xfe, 0x24, 0x26, 0xb2, 0xd8, 0xa9, 0xa8, 0x61, 0x9b, - 0x89, 0x34, 0x3e, 0x78, 0x11, 0x8a, 0xa9, 0xd9, 0x42, 0x44, 0xf2, 0x15, 0x28, 0xb0, 0xbf, 0x6b, - 0x6e, 0xcb, 0x69, 0x38, 0xd4, 0x5f, 0x28, 0x62, 0xe3, 0x9e, 
0x4f, 0x5d, 0xfd, 0x88, 0x74, 0x50, - 0xa7, 0x01, 0x5f, 0xc0, 0xc8, 0x38, 0xae, 0x3a, 0xd7, 0xb8, 0x91, 0xf7, 0xa0, 0xc0, 0x66, 0xdf, - 0xb6, 0xed, 0x73, 0x71, 0x77, 0x36, 0x32, 0x72, 0x6e, 0x0a, 0x78, 0x22, 0xe2, 0xb4, 0x4a, 0xc0, - 0x8e, 0xf9, 0x72, 0x97, 0x6f, 0x90, 0x44, 0x99, 0xed, 0xdd, 0xc4, 0xe6, 0x28, 0xd1, 0xc8, 0x17, - 0xa1, 0x50, 0xee, 0x76, 0xa3, 0x1d, 0x67, 0x4e, 0x51, 0xb4, 0x75, 0xbb, 0x56, 0xea, 0xae, 0xa3, - 0x51, 0xc4, 0x37, 0xe6, 0xf9, 0x13, 0x6d, 0xcc, 0xe4, 0x72, 0x78, 0x03, 0x38, 0x15, 0xe9, 0x82, - 0xc5, 0x65, 0x54, 0xbb, 0x4e, 0xf0, 0xcb, 0xc0, 0x0a, 0x4c, 0x71, 0xe5, 0xa8, 0x94, 0x66, 0x4e, - 0x27, 0x56, 0x4f, 0x8a, 0x50, 0xa3, 0xd3, 0x90, 0x55, 0x98, 0xe6, 0x5e, 0xde, 0x2d, 0x11, 0x0a, - 0x7c, 0xe1, 0x0c, 0xae, 0x5a, 0xe4, 0xc2, 0x9d, 0xc3, 0x5b, 0x98, 0x21, 0xc6, 0xd6, 0xb8, 0xc4, - 0x88, 0x8c, 0x3f, 0xc8, 0xc0, 0x99, 0x3e, 0x23, 0x1e, 0x06, 0x8a, 0xce, 0x0c, 0x0e, 0x14, 0xcd, - 0x76, 0x0e, 0x5d, 0xd3, 0x82, 0xed, 0x4f, 0x3a, 0x6f, 0x85, 0xf2, 0x96, 0x0b, 0x44, 0x24, 0x61, - 0x12, 0x55, 0xdf, 0x76, 0x51, 0xa1, 0x9b, 0x4b, 0x1e, 0x42, 0x02, 0x8f, 0x7f, 0x14, 0x0f, 0xaf, - 0x29, 0x72, 0x3c, 0x85, 0xc3, 0xfa, 0x91, 0xab, 0xad, 0xe0, 0x14, 0xd6, 0xc6, 0x61, 0x06, 0x26, - 0x95, 0x75, 0x48, 0xce, 0x2b, 0xae, 0xc1, 0x45, 0x9e, 0x25, 0x4c, 0xe1, 0x90, 0xe5, 0x27, 0x11, - 0x2e, 0xaa, 0xec, 0xf1, 0x6a, 0x6b, 0x8c, 0x44, 0xa6, 0x04, 0xd3, 0x8e, 0x05, 0x21, 0xc3, 0x72, - 0xf2, 0x21, 0xc0, 0xba, 0xed, 0x07, 0xe5, 0x46, 0xe0, 0x3c, 0xa4, 0x43, 0x1c, 0x3a, 0x32, 0xbc, - 0xe0, 0x29, 0xcc, 0x4b, 0x61, 0x23, 0x59, 0xec, 0x8c, 0x50, 0x18, 0x1a, 0x7f, 0x29, 0x03, 0xb0, - 0x55, 0x5d, 0xc1, 0x68, 0xf8, 0x9f, 0x54, 0x28, 0x48, 0x8f, 0x30, 0x2c, 0xb9, 0x0f, 0x10, 0x07, - 0xfe, 0xc7, 0x0c, 0x4c, 0xeb, 0x68, 0xe4, 0x5d, 0x98, 0xa9, 0x37, 0x3c, 0xb7, 0xd5, 0xda, 0xb6, - 0x1b, 0xfb, 0xeb, 0x4e, 0x87, 0xf2, 0xa8, 0xab, 0xa3, 0xfc, 0x2c, 0xf2, 0xc3, 0x22, 0xab, 0xc5, - 0xca, 0xcc, 0x38, 0x32, 0xf9, 0xe1, 0x0c, 0x4c, 0xd5, 0xf7, 0xdc, 0x47, 0x61, 0x10, 0x53, 0x31, - 
0x20, 0x1f, 0xb2, 0xb5, 0xed, 0xef, 0xb9, 0x8f, 0xa2, 0x14, 0xa3, 0x9a, 0x85, 0xe9, 0x3b, 0xc3, - 0x3d, 0xfe, 0x37, 0x5c, 0xbc, 0x8f, 0x04, 0xfe, 0x15, 0xad, 0x12, 0x53, 0xaf, 0xd3, 0xf8, 0x93, - 0x0c, 0x4c, 0xe2, 0xcd, 0xa5, 0xd5, 0x42, 0x99, 0xeb, 0x7b, 0x29, 0x5f, 0x65, 0xd8, 0xae, 0x01, - 0x03, 0xfb, 0x06, 0xcc, 0xc4, 0xd0, 0x88, 0x01, 0x63, 0x75, 0xf4, 0xfa, 0x57, 0x95, 0x1e, 0x3c, - 0x0e, 0x80, 0x29, 0x4a, 0x8c, 0x55, 0x85, 0xec, 0xfe, 0x35, 0x7c, 0x0c, 0x5e, 0x02, 0x70, 0x24, - 0x48, 0xde, 0x6c, 0x48, 0xfc, 0x4b, 0xee, 0x5f, 0x33, 0x15, 0x2c, 0x63, 0x03, 0xc6, 0xea, 0xae, - 0x17, 0x2c, 0x1f, 0xf0, 0xcb, 0x44, 0x85, 0xfa, 0x0d, 0xf5, 0xb5, 0xd7, 0xc1, 0xb7, 0x98, 0x86, - 0x29, 0x8a, 0x48, 0x09, 0x46, 0x6f, 0x3a, 0xb4, 0xd5, 0x54, 0xad, 0x80, 0x77, 0x18, 0xc0, 0xe4, - 0x70, 0x76, 0xe1, 0x3a, 0x1d, 0x25, 0x8d, 0x89, 0xcc, 0x8d, 0x3f, 0xe9, 0xba, 0x59, 0xd1, 0xfa, - 0xf7, 0x85, 0x30, 0x51, 0x43, 0xb2, 0xa6, 0x01, 0x5d, 0xfd, 0x0f, 0x33, 0xb0, 0xd8, 0x9f, 0x44, - 0xb5, 0x60, 0xce, 0x0c, 0xb0, 0x60, 0x7e, 0x39, 0xfe, 0x3a, 0x89, 0x68, 0xe2, 0x75, 0x32, 0x7a, - 0x93, 0xac, 0xa0, 0x01, 0x79, 0x83, 0xca, 0x4c, 0x31, 0xe7, 0x07, 0x7c, 0x33, 0x22, 0xf2, 0x61, - 0x0e, 0x90, 0xc6, 0x14, 0xb4, 0xc6, 0x6f, 0x8c, 0xc0, 0xd9, 0xbe, 0x14, 0x64, 0x4d, 0xc9, 0x3f, - 0x35, 0x1d, 0x66, 0xbe, 0xe9, 0x8b, 0x7f, 0x05, 0xff, 0x45, 0x1b, 0xc1, 0xb8, 0x57, 0xda, 0xbd, - 0x30, 0xef, 0x50, 0x16, 0x79, 0xbd, 0x76, 0x2c, 0x2f, 0x8e, 0x8e, 0xcc, 0x20, 0x99, 0x82, 0x08, - 0xfd, 0x17, 0x69, 0x60, 0x3b, 0x2d, 0x5f, 0x5d, 0x76, 0x4d, 0x0e, 0x32, 0x65, 0x59, 0x64, 0x56, - 0x3e, 0x92, 0x6e, 0x56, 0x6e, 0xfc, 0xbb, 0x0c, 0x4c, 0x84, 0x9f, 0x4d, 0x16, 0xe1, 0xf4, 0xa6, - 0x59, 0x5e, 0x59, 0xb5, 0x36, 0x3f, 0xa8, 0xad, 0x5a, 0x5b, 0x1b, 0xf5, 0xda, 0xea, 0x4a, 0xf5, - 0x66, 0x75, 0xb5, 0x52, 0x7c, 0x86, 0xcc, 0xc2, 0xd4, 0xd6, 0xc6, 0x9d, 0x8d, 0x7b, 0x0f, 0x36, - 0xac, 0x55, 0xd3, 0xbc, 0x67, 0x16, 0x33, 0x64, 0x0a, 0x26, 0xcc, 0xe5, 0xf2, 0x8a, 0xb5, 0x71, - 0xaf, 0xb2, 0x5a, 0xcc, 0x92, 0x22, 
0x14, 0x56, 0xee, 0x6d, 0x6c, 0xac, 0xae, 0x6c, 0x56, 0xef, - 0x57, 0x37, 0x3f, 0x28, 0xe6, 0x08, 0x81, 0x69, 0x44, 0xa8, 0x99, 0xd5, 0x8d, 0x95, 0x6a, 0xad, - 0xbc, 0x5e, 0x1c, 0x61, 0x30, 0x86, 0xaf, 0xc0, 0x46, 0x43, 0x46, 0x77, 0xb6, 0x96, 0x57, 0x8b, - 0x63, 0x0c, 0x85, 0xfd, 0xa5, 0xa0, 0x8c, 0xb3, 0xea, 0x11, 0xa5, 0x52, 0xde, 0x2c, 0x2f, 0x97, - 0xeb, 0xab, 0xff, 0x1f, 0x7b, 0xdf, 0x16, 0x23, 0xc9, 0x91, 0x1c, 0xb6, 0xd5, 0xdd, 0x33, 0xd3, - 0x13, 0xf3, 0xaa, 0xc9, 0x9d, 0xdd, 0x9d, 0x7d, 0xef, 0x16, 0xc9, 0x15, 0x39, 0x3c, 0xf2, 0xb8, - 0x4b, 0xf3, 0xc8, 0x3d, 0xf1, 0xa1, 0x9a, 0xee, 0x9a, 0x99, 0xde, 0xed, 0x17, 0xab, 0x7a, 0x76, - 0xb5, 0x47, 0x49, 0xa5, 0xda, 0xee, 0x9a, 0x99, 0xe2, 0xf6, 0x74, 0x35, 0xab, 0xaa, 0xb9, 0x9c, - 0x83, 0x01, 0x9f, 0x20, 0xf8, 0x04, 0x58, 0x96, 0x75, 0x96, 0xcf, 0x30, 0x21, 0xd8, 0x90, 0x01, - 0x1f, 0x0c, 0x7d, 0x08, 0xf0, 0xaf, 0xe1, 0xfb, 0x3a, 0xf8, 0x47, 0xc0, 0x41, 0x86, 0x0d, 0xff, - 0x9d, 0x0c, 0x42, 0x3a, 0xc3, 0x86, 0x21, 0xf8, 0x4f, 0xb0, 0x3f, 0x04, 0x08, 0x30, 0x32, 0x32, - 0xb3, 0x2a, 0xeb, 0xd1, 0xbd, 0xb3, 0x47, 0x9e, 0x6d, 0x01, 0xfa, 0x9a, 0xe9, 0xc8, 0x88, 0xa8, - 0x7c, 0x67, 0x64, 0x44, 0x64, 0x84, 0x5a, 0x25, 0x17, 0xe0, 0x6c, 0x0a, 0x64, 0x37, 0x3b, 0xbb, - 0x8d, 0xb6, 0xba, 0x48, 0x36, 0x40, 0x8d, 0x61, 0xf5, 0x6d, 0x7b, 0xdf, 0x32, 0x4c, 0x15, 0xb2, - 0xd0, 0xb6, 0xde, 0x32, 0xd4, 0x25, 0xed, 0x3d, 0xf6, 0x5e, 0x90, 0x75, 0x35, 0x39, 0x0f, 0xc4, - 0xea, 0xe9, 0xbd, 0x7d, 0x2b, 0xd3, 0xf8, 0x25, 0x58, 0xb0, 0xf6, 0x6b, 0x35, 0xc3, 0xb2, 0x54, - 0x85, 0x00, 0xcc, 0xef, 0xe8, 0x8d, 0xa6, 0x51, 0x57, 0x4b, 0xda, 0xf7, 0x14, 0x58, 0x17, 0x12, - 0xa0, 0x30, 0x44, 0x7d, 0xc9, 0xb5, 0xf8, 0x7e, 0xea, 0x62, 0x2b, 0x1e, 0x7f, 0x65, 0x3e, 0x32, - 0x63, 0x19, 0xfe, 0x73, 0x05, 0xce, 0x15, 0x62, 0x93, 0x47, 0xa0, 0x8a, 0x1a, 0xb4, 0x9c, 0xa8, - 0x7f, 0x94, 0xec, 0x63, 0xd7, 0x32, 0x5f, 0xc9, 0xa0, 0x31, 0x55, 0x69, 0x92, 0x11, 0x3b, 0xc7, - 0xe6, 0xf4, 0xf9, 0x1a, 0xb4, 0xcf, 0x15, 0xb8, 0x30, 0xe5, 0x33, 0xa4, 
0x06, 0xf3, 0x71, 0xe6, - 0x9e, 0x19, 0x6e, 0x72, 0x1b, 0x3f, 0xf9, 0xe2, 0x3a, 0x47, 0xc4, 0x14, 0xc2, 0xf8, 0x9f, 0x39, - 0x1f, 0xa7, 0xe2, 0xc1, 0x7c, 0x38, 0xac, 0xfb, 0x2e, 0x66, 0x7a, 0x9e, 0x7f, 0x49, 0x7f, 0x68, - 0x6d, 0x2f, 0xf1, 0xbe, 0x2b, 0x3b, 0x4f, 0x43, 0x4c, 0x88, 0xa3, 0x7d, 0x5f, 0xa1, 0xc2, 0x5d, - 0x16, 0x91, 0xca, 0xbc, 0x7a, 0x18, 0x4e, 0x8e, 0x5d, 0xd3, 0x1f, 0xba, 0xba, 0xd9, 0xe6, 0xc7, - 0x06, 0x4a, 0xab, 0x0e, 0x16, 0xe0, 0xb5, 0xc2, 0x76, 0x82, 0xd4, 0x73, 0xff, 0x14, 0x0d, 0xb9, - 0x0b, 0x60, 0x7c, 0x16, 0xb9, 0xc1, 0xc8, 0x19, 0xc6, 0x81, 0x5b, 0x58, 0x44, 0x2b, 0x0e, 0x4d, - 0xcb, 0xdb, 0x12, 0xb2, 0xf6, 0x5d, 0x05, 0x96, 0xf9, 0xa5, 0x49, 0x1f, 0xba, 0x41, 0xf4, 0xe5, - 0xa6, 0xd7, 0xdd, 0xd4, 0xf4, 0x8a, 0x5f, 0x85, 0x48, 0xfc, 0x69, 0x71, 0xe1, 0xcc, 0xfa, 0x0f, - 0x0a, 0xa8, 0x59, 0x44, 0xf2, 0x3e, 0x54, 0x2d, 0xf7, 0x53, 0x37, 0xf0, 0xa2, 0x13, 0xbe, 0x51, - 0x8a, 0x1c, 0x87, 0x0c, 0x87, 0x97, 0xb1, 0xf9, 0x10, 0xf2, 0x5f, 0x66, 0x4c, 0x73, 0xda, 0xfd, - 0x5e, 0x52, 0x7b, 0x94, 0xbf, 0x2a, 0xb5, 0x87, 0xf6, 0x67, 0x25, 0xb8, 0xb0, 0xeb, 0x46, 0x72, - 0x9b, 0x62, 0xf7, 0x85, 0x37, 0x4e, 0xd7, 0x2e, 0xa9, 0x25, 0x9b, 0xb0, 0x80, 0x45, 0x62, 0x7c, - 0x4d, 0xf1, 0x93, 0x6c, 0xc7, 0xf3, 0xba, 0x9c, 0x4a, 0xa2, 0x36, 0xe5, 0xdb, 0xaf, 0x4b, 0x69, - 0x95, 0xe2, 0x69, 0x7d, 0x0b, 0x56, 0x31, 0xa2, 0xff, 0x84, 0x2e, 0x07, 0x77, 0xc0, 0xd5, 0x3f, - 0x55, 0x33, 0x03, 0x25, 0x5b, 0xa0, 0x52, 0x88, 0xde, 0x7f, 0x32, 0xf2, 0x9f, 0x0e, 0xdd, 0xc1, - 0xa1, 0x3b, 0xc0, 0x63, 0xbd, 0x6a, 0xe6, 0xe0, 0x82, 0xe7, 0xfe, 0x88, 0x5d, 0xdd, 0xdc, 0x01, - 0xea, 0x68, 0x38, 0xcf, 0x04, 0x7a, 0xe9, 0x2e, 0x2c, 0xfd, 0x8c, 0x29, 0xd2, 0xb4, 0x3f, 0x55, - 0x60, 0x03, 0x1b, 0x27, 0x7d, 0x58, 0xa4, 0xaf, 0x15, 0xbd, 0x25, 0x65, 0x0d, 0x72, 0x28, 0x28, - 0xbd, 0x14, 0xe2, 0x5e, 0x4c, 0x74, 0x42, 0xa5, 0x53, 0xe8, 0x84, 0xac, 0xe7, 0x49, 0xd5, 0x7f, - 0x4a, 0x95, 0xd6, 0xbd, 0x4a, 0xb5, 0xac, 0x56, 0x92, 0x21, 0xd7, 0x7e, 0xb3, 0x04, 0x0b, 0xa6, - 0x8b, 0x39, 
0xcc, 0xc9, 0x2d, 0x58, 0x68, 0xfb, 0x91, 0x1b, 0xb6, 0x52, 0x09, 0xeb, 0x47, 0x14, - 0x64, 0x1f, 0x0f, 0x4c, 0x51, 0x48, 0x27, 0x7c, 0x37, 0xf0, 0x07, 0x93, 0x7e, 0x24, 0x4f, 0xf8, - 0x31, 0x03, 0x99, 0xa2, 0x8c, 0x7c, 0x0d, 0x16, 0x39, 0xe7, 0xd8, 0x50, 0x8c, 0x1e, 0xcf, 0x81, - 0x1b, 0xe7, 0xc0, 0x4f, 0x10, 0x50, 0xa6, 0x65, 0x02, 0x46, 0x45, 0x92, 0x69, 0x73, 0x32, 0x83, - 0x10, 0xd5, 0xe7, 0x66, 0x88, 0xea, 0x6f, 0xc0, 0xbc, 0x1e, 0x86, 0x6e, 0x24, 0xa2, 0x1d, 0x2c, - 0xc7, 0xe1, 0xe2, 0x42, 0x37, 0x62, 0x8c, 0x1d, 0x2c, 0x37, 0x39, 0x9e, 0xf6, 0x97, 0x25, 0x98, - 0xc3, 0x7f, 0xd1, 0x0c, 0x1b, 0xf4, 0x8f, 0x52, 0x66, 0xd8, 0xa0, 0x7f, 0x64, 0x22, 0x94, 0xdc, - 0x46, 0x4d, 0x85, 0x48, 0x70, 0xc5, 0x5b, 0x8f, 0x2a, 0xf8, 0x41, 0x02, 0x36, 0x65, 0x9c, 0xd8, - 0x6b, 0xa0, 0x5c, 0x18, 0xe3, 0xe4, 0x3c, 0x94, 0x3a, 0x16, 0x6f, 0x31, 0x86, 0xc9, 0xf2, 0x43, - 0xb3, 0xd4, 0xb1, 0xb0, 0x37, 0xf6, 0xf4, 0x3b, 0x6f, 0x7d, 0x83, 0x37, 0x94, 0xf5, 0xc6, 0x91, - 0x73, 0xe7, 0xad, 0x6f, 0x98, 0xbc, 0x84, 0xf6, 0x2f, 0xd6, 0x19, 0x8d, 0xb9, 0xec, 0x2d, 0x3f, - 0xf6, 0x2f, 0xb6, 0x0d, 0x0d, 0xb7, 0x66, 0x82, 0x40, 0xee, 0xc0, 0x12, 0x8f, 0x09, 0x81, 0xf8, - 0x52, 0xcc, 0x06, 0x1e, 0x33, 0x82, 0x51, 0xc8, 0x48, 0xcc, 0xac, 0xc7, 0x07, 0x48, 0xa4, 0xe1, - 0xe5, 0x66, 0x3d, 0x31, 0x84, 0xa1, 0x29, 0xa1, 0xd0, 0x2a, 0x31, 0xbb, 0x60, 0xf2, 0x46, 0x1f, - 0xab, 0xc4, 0x8d, 0x87, 0x98, 0x3d, 0x21, 0x46, 0xd0, 0xfe, 0xb0, 0x04, 0xd5, 0xee, 0x70, 0x72, - 0xe8, 0x8d, 0x1e, 0xdc, 0x26, 0x04, 0xf0, 0x1a, 0x27, 0xd2, 0x6b, 0xd0, 0xff, 0xc9, 0x45, 0xa8, - 0x8a, 0x9b, 0x9b, 0xd8, 0x90, 0x42, 0x7e, 0x6b, 0xdb, 0x04, 0x31, 0xee, 0x3c, 0x1e, 0x9a, 0xf8, - 0x49, 0x6e, 0x43, 0x7c, 0xff, 0x9a, 0x76, 0x51, 0xab, 0xd0, 0xc5, 0x62, 0xc6, 0x68, 0xe4, 0x35, - 0xc0, 0x43, 0x82, 0x5f, 0x1e, 0x84, 0x42, 0x9b, 0x55, 0x8d, 0xcb, 0x29, 0x8c, 0x04, 0xd1, 0xc8, - 0x9b, 0xc0, 0x27, 0x26, 0x4f, 0xf7, 0x7e, 0x2e, 0x4d, 0xc0, 0x12, 0x68, 0x0a, 0x12, 0x8e, 0x4a, - 0xde, 0x85, 0xa5, 0x7e, 0xe0, 0xa2, 0x25, 0xd3, 
0x19, 0x26, 0x59, 0xdc, 0x65, 0xca, 0x5a, 0x52, - 0xfe, 0xe0, 0xb6, 0x29, 0xa3, 0x6b, 0xdf, 0x5f, 0x84, 0x65, 0xb9, 0x3e, 0xc4, 0x84, 0xb3, 0xe1, - 0x90, 0xde, 0xdd, 0xb9, 0x33, 0xdb, 0x18, 0x0b, 0xf9, 0x71, 0x7a, 0x23, 0x5d, 0x21, 0x8a, 0xc7, - 0x3c, 0xdb, 0x44, 0x30, 0x8b, 0xbd, 0x33, 0xe6, 0x7a, 0x98, 0x80, 0x19, 0x1e, 0xd1, 0xa1, 0xea, - 0x8f, 0xc3, 0x43, 0x77, 0xe4, 0x09, 0x7b, 0xcb, 0x0b, 0x29, 0x46, 0x1d, 0x5e, 0x98, 0xe3, 0x15, - 0x93, 0x91, 0xb7, 0x60, 0xde, 0x1f, 0xbb, 0x23, 0xc7, 0xe3, 0x67, 0xdc, 0xe5, 0x0c, 0x03, 0x77, - 0xa4, 0x37, 0x24, 0x42, 0x8e, 0x4c, 0xbe, 0x0e, 0x15, 0xff, 0x49, 0x3c, 0x5e, 0x17, 0xd3, 0x44, - 0x4f, 0x22, 0x47, 0x22, 0x41, 0x44, 0x4a, 0xf0, 0xb1, 0x73, 0x7c, 0xc0, 0x47, 0x2c, 0x4d, 0x70, - 0xcf, 0x39, 0x3e, 0x90, 0x09, 0x28, 0x22, 0xf9, 0x00, 0x60, 0xec, 0x1c, 0xba, 0x81, 0x3d, 0x98, - 0x44, 0x27, 0x7c, 0xdc, 0xae, 0xa5, 0xc8, 0xba, 0xb4, 0xb8, 0x3e, 0x89, 0x4e, 0x24, 0xda, 0xc5, - 0xb1, 0x00, 0x12, 0x1d, 0xe0, 0xd8, 0x89, 0x22, 0x37, 0x38, 0xf6, 0xb9, 0x37, 0x61, 0x12, 0xfc, - 0x90, 0x31, 0x68, 0xc5, 0xc5, 0x12, 0x07, 0x89, 0x08, 0x2b, 0xed, 0x05, 0x0e, 0x4f, 0xba, 0x9f, - 0xa9, 0xb4, 0x17, 0xa4, 0x5a, 0x49, 0x11, 0xc9, 0x3b, 0xb0, 0x30, 0xf0, 0xc2, 0xbe, 0x1f, 0x0c, - 0x78, 0x94, 0x93, 0x2b, 0x29, 0x9a, 0x3a, 0x2b, 0x93, 0xc8, 0x04, 0x3a, 0xad, 0x2d, 0x0f, 0x7e, - 0xda, 0xf6, 0x9f, 0xa2, 0x9a, 0x3f, 0x5b, 0x5b, 0x2b, 0x2e, 0x96, 0x6b, 0x9b, 0x10, 0xd1, 0xa1, - 0x3c, 0xf4, 0xa2, 0xa1, 0xf3, 0x98, 0xdb, 0xce, 0xd3, 0x43, 0xb9, 0x8b, 0x45, 0xf2, 0x50, 0x32, - 0x64, 0x72, 0x17, 0xaa, 0xee, 0x28, 0x0a, 0x1c, 0xdb, 0x1b, 0xf0, 0xa7, 0x98, 0xe9, 0x4a, 0xd3, - 0x03, 0xd8, 0x69, 0xd4, 0xe5, 0x4a, 0x23, 0x7e, 0x63, 0x40, 0xfb, 0x27, 0xec, 0x7b, 0xc7, 0xfc, - 0x05, 0x65, 0xba, 0x7f, 0xac, 0x5a, 0xa3, 0x25, 0xf7, 0x0f, 0x45, 0x24, 0xef, 0xc3, 0x02, 0x5d, - 0xbf, 0x03, 0xff, 0x90, 0x07, 0x9a, 0xd0, 0xd2, 0xfd, 0xc3, 0xca, 0x72, 0xd3, 0x55, 0x10, 0xd1, - 0x85, 0xec, 0x3c, 0x0d, 0x6d, 0xaf, 0x8f, 0x31, 0x31, 0xb3, 0xcb, 0x51, 0x7f, 0x68, 
0x35, 0x6a, - 0x12, 0xd9, 0x9c, 0xf3, 0x34, 0x6c, 0xf4, 0xc9, 0x1d, 0x98, 0xc3, 0xcc, 0x13, 0x3c, 0x00, 0x66, - 0x9a, 0x06, 0x73, 0x4e, 0xc8, 0x34, 0x88, 0x4a, 0x07, 0xf2, 0x38, 0xc4, 0x47, 0x29, 0x3c, 0xff, - 0x43, 0xba, 0x4f, 0x5a, 0x16, 0xbe, 0x54, 0x91, 0xab, 0xc8, 0xd1, 0x69, 0x15, 0x47, 0x6e, 0x64, - 0x7b, 0x9f, 0xf0, 0x0c, 0x0e, 0xe9, 0xcf, 0xb5, 0xdd, 0xa8, 0xf1, 0xa1, 0xfc, 0xb9, 0x91, 0x1b, - 0x35, 0x3e, 0xe1, 0x43, 0x77, 0x34, 0x79, 0x8c, 0xba, 0xf4, 0x82, 0xa1, 0x3b, 0x9a, 0x64, 0x87, - 0xee, 0x68, 0xf2, 0x98, 0x5c, 0x03, 0x48, 0xbc, 0x10, 0x98, 0x7d, 0xc7, 0x94, 0x20, 0xdf, 0xac, - 0xfc, 0x8f, 0x7f, 0x79, 0x5d, 0xd9, 0x06, 0xa8, 0x8a, 0xa8, 0x39, 0x54, 0x9e, 0xde, 0x28, 0x62, - 0x4a, 0x6e, 0xc2, 0xb2, 0x1c, 0xd3, 0x87, 0xef, 0xea, 0x4b, 0xce, 0xd8, 0x13, 0x51, 0x7d, 0x66, - 0x27, 0x42, 0x78, 0x15, 0xd6, 0x53, 0x4f, 0x80, 0x12, 0x87, 0x40, 0x53, 0x95, 0x0b, 0xf0, 0x10, - 0xad, 0x01, 0x84, 0x91, 0x13, 0x44, 0xf6, 0xc0, 0x89, 0x4e, 0xa3, 0xde, 0xad, 0xd2, 0x8d, 0x99, - 0x79, 0x5c, 0x23, 0x5d, 0xdd, 0x89, 0x5c, 0xd6, 0x38, 0xad, 0x09, 0x17, 0xa7, 0x6e, 0x9a, 0xe4, - 0x15, 0x50, 0x0f, 0x1c, 0xae, 0x32, 0xed, 0x1f, 0x39, 0xa3, 0x91, 0x3b, 0xe4, 0x0d, 0x5b, 0x13, - 0xf0, 0x1a, 0x03, 0x73, 0x6e, 0x1f, 0x48, 0xbd, 0x23, 0xad, 0x96, 0x53, 0xf4, 0x0e, 0x67, 0xf0, - 0x43, 0x05, 0xae, 0xcc, 0xda, 0x7b, 0xc9, 0x25, 0xa8, 0x8e, 0x03, 0xcf, 0x47, 0x19, 0x9f, 0xf7, - 0xa1, 0xf8, 0x8d, 0x79, 0x22, 0x50, 0x18, 0x8d, 0x9c, 0x43, 0xfe, 0xa6, 0xc6, 0x5c, 0x44, 0x48, - 0xcf, 0x39, 0x0c, 0x69, 0x17, 0x0f, 0xdc, 0x03, 0x67, 0x32, 0x8c, 0xec, 0xb0, 0x7f, 0xe4, 0x0e, - 0xf0, 0xd5, 0x1b, 0x7a, 0x52, 0x9a, 0x2a, 0x2f, 0xb0, 0x04, 0x3c, 0x57, 0xe3, 0xb9, 0x29, 0x35, - 0xbe, 0x57, 0xa9, 0x2a, 0x6a, 0xc9, 0x44, 0xd7, 0x35, 0xed, 0x3b, 0x25, 0xd8, 0x9c, 0xb6, 0xd9, - 0x90, 0xf7, 0x8a, 0xfa, 0x80, 0x59, 0x7d, 0x64, 0xb8, 0x6c, 0xf5, 0x91, 0x67, 0xcf, 0x1d, 0x88, - 0xdf, 0xac, 0x3d, 0x2b, 0xfe, 0x84, 0x80, 0x51, 0x9a, 0xb1, 0x13, 0x86, 0x4f, 0xe9, 0x7e, 0x5a, - 0x96, 0xa2, 0x10, 0x73, 
0x98, 0x4c, 0x23, 0x60, 0xe4, 0x6d, 0x80, 0xfe, 0xd0, 0x0f, 0x5d, 0x74, - 0xae, 0xe0, 0x82, 0x1a, 0xf3, 0xc4, 0x8f, 0xa1, 0xb2, 0x35, 0x1d, 0xa1, 0x35, 0x7f, 0x20, 0xe6, - 0x93, 0x03, 0x17, 0xa6, 0x9c, 0x2e, 0x74, 0x78, 0xf0, 0x11, 0x1a, 0xdb, 0x4c, 0x78, 0xfe, 0x2f, - 0x0a, 0x61, 0x79, 0x6b, 0xb2, 0x3d, 0x5e, 0x9a, 0x36, 0x47, 0x4e, 0x80, 0xe4, 0x8f, 0x10, 0xca, - 0x9d, 0x7b, 0x9e, 0x4f, 0x82, 0x98, 0x3b, 0x83, 0xec, 0x07, 0x43, 0x72, 0x1d, 0x96, 0x44, 0xa6, - 0x52, 0x7a, 0x11, 0x62, 0xcc, 0x81, 0x83, 0xee, 0xbb, 0x38, 0x79, 0x30, 0x06, 0x2c, 0xbe, 0x4c, - 0xe4, 0x2b, 0x6f, 0x11, 0x21, 0xbd, 0x93, 0xb1, 0x68, 0xdd, 0x15, 0x31, 0xbf, 0xd3, 0x07, 0x3b, - 0x2f, 0xfd, 0xa7, 0x8a, 0x18, 0xfe, 0xfc, 0xc9, 0xf8, 0xac, 0xfa, 0x11, 0xc0, 0x87, 0x61, 0xbc, - 0x62, 0xf8, 0x3f, 0x15, 0xf9, 0xc4, 0xaa, 0xe3, 0x22, 0x1f, 0xff, 0x49, 0x6e, 0xc1, 0x5a, 0xc0, - 0x1c, 0x8b, 0x23, 0x9f, 0xf7, 0x27, 0x4b, 0x8b, 0xb2, 0xc2, 0xc0, 0x3d, 0x1f, 0xfb, 0x94, 0xd7, - 0xeb, 0x5e, 0xdc, 0x61, 0x92, 0xa0, 0x40, 0x5e, 0x87, 0x45, 0x2a, 0x28, 0x60, 0xf8, 0xa1, 0xcc, - 0x8b, 0x14, 0xc4, 0x43, 0xb1, 0xcb, 0xac, 0x7e, 0xcc, 0xff, 0xe7, 0xbc, 0xfe, 0x6d, 0x49, 0x30, - 0x93, 0xc5, 0x14, 0x72, 0x01, 0x16, 0xfc, 0xe0, 0x50, 0x6a, 0xda, 0xbc, 0x1f, 0x1c, 0xd2, 0x76, - 0xbd, 0x0c, 0x2a, 0x7b, 0x20, 0xc5, 0x02, 0x55, 0x84, 0x27, 0x23, 0xa6, 0xc7, 0xa8, 0x9a, 0xab, - 0x0c, 0xbe, 0x1f, 0xba, 0x81, 0x75, 0x32, 0xea, 0x53, 0xcc, 0x30, 0xf4, 0x6d, 0x39, 0x8a, 0x18, - 0x6f, 0xf6, 0x6a, 0x18, 0xfa, 0x49, 0x38, 0xb1, 0x01, 0xd9, 0x86, 0x15, 0xca, 0x27, 0x8e, 0x65, - 0xc6, 0x77, 0xc0, 0xab, 0x79, 0x29, 0xea, 0x64, 0xd4, 0x17, 0x55, 0x34, 0x97, 0x43, 0xe9, 0x17, - 0xb9, 0x0f, 0xaa, 0x24, 0x6e, 0xe2, 0x8b, 0xd9, 0x8c, 0x93, 0x7b, 0xc2, 0x46, 0x12, 0x53, 0x1b, - 0xa3, 0x03, 0xdf, 0x5c, 0xeb, 0xa7, 0x01, 0xf1, 0x4e, 0x30, 0xaf, 0x2e, 0x98, 0x9b, 0xbc, 0xb9, - 0x21, 0x7a, 0x4f, 0xda, 0x43, 0xff, 0xd0, 0x76, 0x3f, 0xa3, 0x63, 0xa2, 0xfd, 0x81, 0x22, 0xf6, - 0xda, 0x02, 0xa6, 0x44, 0x83, 0x95, 0x23, 0x27, 0xb4, 0xc3, 
0xf0, 0x98, 0x39, 0xf5, 0xf1, 0x50, - 0xca, 0x4b, 0x47, 0x4e, 0x68, 0x85, 0xc7, 0x22, 0x6f, 0xcb, 0x39, 0x8a, 0xe3, 0x3b, 0x93, 0xe8, - 0xc8, 0x96, 0x85, 0x6b, 0xd6, 0xa3, 0x67, 0x8f, 0x9c, 0xb0, 0x43, 0xcb, 0x24, 0xde, 0xe4, 0x45, - 0x58, 0x45, 0xbe, 0x7d, 0x4f, 0x30, 0xc6, 0x60, 0x24, 0xe6, 0x32, 0x65, 0xdc, 0xf7, 0x18, 0x67, - 0x3e, 0xb8, 0x3f, 0xae, 0xc0, 0xf9, 0xe2, 0xde, 0xc3, 0xe9, 0x4b, 0xfb, 0x1c, 0x9f, 0x4d, 0xf2, - 0xba, 0x2d, 0x52, 0x08, 0x0b, 0x24, 0x53, 0x34, 0x78, 0xa5, 0xc2, 0xc1, 0xdb, 0x82, 0x75, 0x64, - 0xc4, 0xc5, 0xf8, 0xa1, 0x17, 0x46, 0x3c, 0x3e, 0x8a, 0xb9, 0x46, 0x0b, 0xd8, 0x7e, 0xdf, 0xa4, - 0x60, 0xf2, 0x12, 0xac, 0x8a, 0x1d, 0xdb, 0x7f, 0x3a, 0xa2, 0x1f, 0x66, 0xdb, 0xf5, 0x0a, 0x87, - 0x76, 0x10, 0x48, 0xce, 0xc1, 0xbc, 0x33, 0x1e, 0xd3, 0x4f, 0xb2, 0x5d, 0x7a, 0xce, 0x19, 0x8f, - 0x59, 0x6e, 0x21, 0x7c, 0x24, 0x6a, 0x1f, 0xa0, 0x0b, 0x16, 0xf7, 0x21, 0x35, 0x97, 0x11, 0xc8, - 0xdc, 0xb2, 0x42, 0xba, 0x2f, 0x50, 0x5a, 0x81, 0xb2, 0x80, 0x28, 0xe0, 0x8c, 0x63, 0x84, 0x8b, - 0x50, 0x15, 0xce, 0x00, 0xec, 0x55, 0x8c, 0xb9, 0xe0, 0x70, 0x47, 0x80, 0xb7, 0xe0, 0xc2, 0xc0, - 0x0b, 0xf9, 0x68, 0xd3, 0x26, 0x8d, 0xc7, 0xfc, 0x59, 0x2a, 0x0b, 0x63, 0x6c, 0x6e, 0xf0, 0x62, - 0xda, 0x93, 0xfa, 0x78, 0x1c, 0x3f, 0x4e, 0xbd, 0x24, 0xc8, 0x1e, 0x7b, 0x2c, 0x5e, 0x1b, 0x73, - 0x88, 0xc5, 0xc5, 0x01, 0x48, 0xb9, 0xc9, 0x31, 0xb6, 0x65, 0x04, 0xb1, 0x4c, 0xe2, 0x95, 0x64, - 0x33, 0xe5, 0x21, 0x97, 0x5c, 0xd0, 0x64, 0x8c, 0x83, 0x86, 0x50, 0xf2, 0x36, 0x4c, 0x9d, 0x8b, - 0x28, 0xe1, 0x56, 0xcd, 0x73, 0xac, 0x9c, 0x39, 0xfa, 0x36, 0xfd, 0x43, 0x03, 0x0b, 0xc9, 0x07, - 0x70, 0x45, 0x54, 0xd0, 0x09, 0x43, 0xef, 0x70, 0x64, 0x8b, 0x51, 0x40, 0x5f, 0x0c, 0x94, 0x72, - 0xab, 0xe6, 0x45, 0x8e, 0xa3, 0x23, 0x4a, 0x9d, 0x61, 0xe0, 0xb3, 0x46, 0x3e, 0x9b, 0xde, 0x81, - 0x35, 0x2e, 0xb0, 0x73, 0x21, 0x01, 0x7b, 0x9b, 0x6f, 0x61, 0xf4, 0x26, 0xcd, 0xf3, 0x55, 0x01, - 0x07, 0x35, 0x06, 0x82, 0xf2, 0xbf, 0x28, 0x70, 0xae, 0x50, 0xe2, 0x27, 0xbf, 0x0e, 0xec, 0x9d, - 
0x61, 0xe4, 0xdb, 0x81, 0xdb, 0xf7, 0xc6, 0x1e, 0x06, 0x6e, 0x61, 0x1a, 0xf1, 0x3b, 0xb3, 0xee, - 0x0a, 0xf8, 0x66, 0xb1, 0xe7, 0x9b, 0x31, 0x11, 0x53, 0xd5, 0xa9, 0x41, 0x06, 0x7c, 0xe9, 0x23, - 0x38, 0x57, 0x88, 0x5a, 0xa0, 0x42, 0xfb, 0x5a, 0x3a, 0x59, 0xba, 0xb0, 0x71, 0x66, 0x1a, 0x2d, - 0xa9, 0xd6, 0x78, 0xf3, 0x7e, 0x14, 0x37, 0x2f, 0x73, 0x37, 0x20, 0x46, 0x76, 0x67, 0x2b, 0xba, - 0xde, 0x0a, 0xa2, 0xe9, 0x9b, 0xdb, 0x47, 0x70, 0x8e, 0x2f, 0xaf, 0xc3, 0xc0, 0x19, 0x1f, 0x25, - 0xec, 0x58, 0x45, 0x7f, 0xa1, 0x88, 0x1d, 0x5b, 0x77, 0xbb, 0x14, 0x3f, 0xe6, 0x7a, 0xd6, 0xc9, - 0x03, 0x79, 0x1b, 0x7e, 0xa3, 0x24, 0x36, 0xb3, 0x82, 0xea, 0x14, 0x2c, 0x5c, 0xa5, 0x68, 0xe1, - 0x9e, 0x7e, 0xd7, 0x68, 0x03, 0x91, 0xb7, 0x6b, 0x3e, 0xef, 0x99, 0x3f, 0x9e, 0xb8, 0xe6, 0xf1, - 0x8a, 0x48, 0x9b, 0x1f, 0x5b, 0x08, 0xe6, 0x7a, 0x3f, 0x0b, 0xa2, 0xb2, 0x78, 0x9c, 0x0f, 0x9e, - 0x1f, 0x9d, 0x55, 0x06, 0x68, 0x0c, 0xc8, 0x0d, 0x58, 0x66, 0x37, 0xba, 0xd4, 0xae, 0x02, 0x08, - 0xd3, 0xe9, 0xd6, 0x22, 0xfa, 0x40, 0x81, 0x1b, 0xcf, 0xea, 0x43, 0xf2, 0x10, 0xce, 0xa3, 0x57, - 0x50, 0xe8, 0xc7, 0xc3, 0x60, 0xf7, 0x9d, 0xfe, 0x91, 0xcb, 0x67, 0xad, 0x56, 0x38, 0x18, 0xe3, - 0xb1, 0x65, 0x75, 0xa4, 0x71, 0x18, 0x8f, 0xad, 0xd0, 0x17, 0xbf, 0x6b, 0x94, 0x9c, 0xd7, 0x61, - 0x00, 0x97, 0x67, 0x50, 0x4a, 0x5b, 0xa3, 0x22, 0x6f, 0x8d, 0x2f, 0x83, 0x7a, 0xe0, 0x0e, 0xe8, - 0x35, 0xc7, 0x1d, 0x60, 0xd5, 0x3e, 0xbd, 0x83, 0x1d, 0xbf, 0x6c, 0xae, 0xc6, 0x70, 0x2b, 0xf4, - 0x1f, 0xdc, 0xe1, 0x5f, 0x39, 0x16, 0x87, 0xbe, 0x7c, 0x2b, 0x25, 0xaf, 0xc3, 0xd9, 0x4c, 0x50, - 0x9c, 0x24, 0xca, 0x82, 0xb9, 0x4e, 0x8b, 0xd2, 0x21, 0xd4, 0x6e, 0xc2, 0xb2, 0xbc, 0x91, 0x08, - 0x09, 0x6f, 0x90, 0x6c, 0x1d, 0xfc, 0x73, 0x13, 0xd1, 0xa8, 0xc2, 0x0b, 0xed, 0x69, 0xee, 0x5a, - 0xaf, 0x01, 0x89, 0x6f, 0x2e, 0xf1, 0x46, 0xc1, 0x3f, 0xb8, 0x2e, 0x4a, 0xe2, 0x15, 0xce, 0x3f, - 0xfb, 0xdb, 0xf3, 0x70, 0xb6, 0xe0, 0x26, 0x4c, 0x5e, 0x03, 0xd5, 0x1b, 0x45, 0xee, 0x61, 0x20, - 0x5d, 0xcd, 0x98, 0xf4, 0x5e, 0xda, 
0x54, 0xcc, 0x35, 0xa9, 0x8c, 0xab, 0x38, 0xe7, 0x59, 0x96, - 0x7f, 0xfe, 0x3d, 0xfe, 0x8b, 0x6e, 0x20, 0x4e, 0x20, 0xb4, 0x77, 0xf4, 0x5f, 0xd2, 0x80, 0x75, - 0xcc, 0x08, 0x12, 0x7a, 0x3e, 0x26, 0x16, 0x41, 0x51, 0xac, 0x92, 0xba, 0x2f, 0x63, 0x4d, 0xba, - 0x12, 0x12, 0x95, 0xc5, 0x4c, 0x75, 0x9c, 0x81, 0x90, 0x5f, 0x84, 0x4b, 0xd2, 0x89, 0x6a, 0x67, - 0x56, 0x1f, 0x3e, 0xc0, 0x30, 0x2f, 0x38, 0xf1, 0xd9, 0x5a, 0x4f, 0xad, 0xc3, 0x6d, 0x60, 0x59, - 0x84, 0xbd, 0xc1, 0xd8, 0xce, 0xa5, 0x90, 0xc1, 0xe6, 0xb2, 0x84, 0x08, 0x97, 0x28, 0x56, 0x63, - 0x30, 0xce, 0x64, 0x93, 0xc1, 0x56, 0x77, 0x0b, 0x57, 0xe8, 0x02, 0xae, 0xd0, 0xab, 0x72, 0x63, - 0x72, 0xeb, 0x13, 0x7b, 0xb1, 0x60, 0x8d, 0x1e, 0xc2, 0x7a, 0x72, 0xd2, 0x89, 0x03, 0xba, 0x9a, - 0xca, 0xfa, 0x8f, 0x0c, 0x85, 0x04, 0xc9, 0x4e, 0x6c, 0x16, 0x28, 0x22, 0x47, 0x28, 0x87, 0x43, - 0x99, 0xa4, 0x08, 0x42, 0xd2, 0x84, 0x0d, 0xe7, 0x69, 0x28, 0x72, 0x93, 0x86, 0xf1, 0xb7, 0x16, - 0xf3, 0xdf, 0x12, 0xf6, 0x3a, 0x46, 0x6a, 0x12, 0xe7, 0x69, 0xc8, 0x53, 0x96, 0x86, 0x82, 0xdb, - 0xc7, 0x40, 0x98, 0xd8, 0x91, 0xaa, 0x37, 0x3c, 0x8b, 0x17, 0x4f, 0x6c, 0x9a, 0xa3, 0x94, 0x83, - 0xba, 0x61, 0xa9, 0x5c, 0xf3, 0x5e, 0x5a, 0xc7, 0xba, 0x94, 0x32, 0x10, 0x66, 0x7b, 0x9b, 0x19, - 0x2f, 0x25, 0x7c, 0xf9, 0xaa, 0x29, 0x81, 0xf9, 0x6a, 0xf8, 0x5c, 0x01, 0x35, 0xcb, 0x82, 0xbc, - 0x0b, 0xf3, 0x4c, 0x98, 0xe0, 0x27, 0x93, 0x56, 0xfc, 0x2d, 0x36, 0x82, 0x4c, 0xae, 0xd8, 0x3b, - 0x63, 0x72, 0x1a, 0xf2, 0x0d, 0xa8, 0xf8, 0xde, 0x40, 0x18, 0x32, 0x6f, 0xcc, 0xa2, 0xed, 0x34, - 0xea, 0x35, 0x54, 0x7e, 0x7a, 0x03, 0x7e, 0xf5, 0xd8, 0xae, 0xc2, 0x3c, 0xeb, 0x30, 0xed, 0x63, - 0xb8, 0x3c, 0xe3, 0x83, 0xc4, 0x80, 0xb5, 0x8c, 0x91, 0xf7, 0x94, 0xf6, 0x5f, 0x27, 0xb1, 0xff, - 0x06, 0x42, 0x26, 0x1e, 0xc2, 0xc5, 0xa9, 0x15, 0x24, 0x8d, 0xa9, 0x3b, 0x03, 0x86, 0x1b, 0xc9, - 0x96, 0xc9, 0x93, 0x30, 0xb3, 0x6b, 0xf0, 0xaf, 0xfd, 0x4e, 0x09, 0xce, 0x16, 0x4c, 0x0e, 0xa2, - 0x41, 0x49, 0xec, 0xe1, 0x79, 0x17, 0xc2, 0xbd, 0x33, 0x66, 0xc9, 0x1b, 
0x90, 0xbb, 0x00, 0x98, - 0xdb, 0x35, 0x70, 0x0f, 0xdd, 0xcf, 0xb8, 0x8e, 0x00, 0x6f, 0xee, 0x09, 0x34, 0x45, 0xb3, 0x88, - 0x66, 0x19, 0x0a, 0x26, 0xb7, 0x01, 0xdc, 0xcf, 0xfa, 0xc3, 0xc9, 0xc0, 0x8d, 0x6f, 0x5d, 0x05, - 0x9f, 0x51, 0xcc, 0x45, 0x8e, 0xd5, 0x18, 0x90, 0x3d, 0x20, 0x82, 0x44, 0xfa, 0x6a, 0xe5, 0x19, - 0x5f, 0x55, 0x4c, 0x95, 0x53, 0xb5, 0xc5, 0xc7, 0xf9, 0xe8, 0x2e, 0xc2, 0x82, 0x37, 0xc2, 0x12, - 0xfa, 0x2f, 0x47, 0xd2, 0xfe, 0x48, 0xe1, 0xfd, 0x91, 0x5e, 0xe4, 0xa4, 0x07, 0xdc, 0x87, 0x80, - 0x6f, 0x08, 0xb7, 0xa6, 0x6f, 0x08, 0xb2, 0x69, 0x96, 0xc7, 0x9d, 0x41, 0x80, 0x6c, 0x80, 0x64, - 0x90, 0x2f, 0x61, 0x34, 0xe5, 0xc3, 0xf7, 0x11, 0x9c, 0x2b, 0xdc, 0xb0, 0xe9, 0x2d, 0x02, 0x5d, - 0x91, 0x93, 0x0b, 0xf2, 0x02, 0xfd, 0x4d, 0x6f, 0xc8, 0x37, 0x61, 0xf9, 0xb1, 0xeb, 0x04, 0x6e, - 0xc0, 0xaf, 0x67, 0xfc, 0x54, 0x64, 0x30, 0xf9, 0x76, 0x36, 0x48, 0x9f, 0x4e, 0xdc, 0xea, 0x42, - 0x5a, 0x70, 0x96, 0xed, 0x1a, 0xde, 0x31, 0x6a, 0x04, 0xb8, 0xa5, 0x46, 0x49, 0xdd, 0x89, 0x91, - 0x04, 0xef, 0x1f, 0x0d, 0xc4, 0x62, 0xd4, 0xe6, 0xfa, 0x61, 0x16, 0x44, 0x85, 0x9a, 0xf3, 0xc5, - 0xd8, 0x64, 0x1b, 0x96, 0x18, 0x73, 0xa6, 0x1b, 0x62, 0x26, 0xf6, 0x9b, 0x33, 0xbf, 0x50, 0xc3, - 0x17, 0x3a, 0x61, 0xfc, 0x3f, 0xbd, 0x94, 0xa1, 0x37, 0x93, 0x7d, 0x2c, 0x7b, 0x10, 0x98, 0xcb, - 0x08, 0xe4, 0x9e, 0x03, 0xda, 0x7f, 0x56, 0x44, 0x53, 0x53, 0xea, 0x65, 0x7a, 0xb2, 0x86, 0xee, - 0x48, 0x78, 0x51, 0x2c, 0x9a, 0xfc, 0xd7, 0x73, 0x9e, 0xf6, 0xe4, 0x6d, 0x58, 0xa6, 0x6c, 0x0f, - 0x27, 0x23, 0x76, 0xe2, 0x96, 0x53, 0xf1, 0xf0, 0x5a, 0xac, 0x88, 0x0e, 0xdb, 0xde, 0x19, 0x73, - 0xe9, 0x38, 0xf9, 0x49, 0x5e, 0x87, 0xc5, 0xf0, 0x38, 0x1a, 0xcb, 0xe7, 0xb4, 0x30, 0xb5, 0x59, - 0xad, 0x5e, 0x97, 0x93, 0x54, 0x29, 0x4e, 0xa2, 0x32, 0xd9, 0x9e, 0x67, 0xc6, 0x36, 0xed, 0x55, - 0x58, 0x92, 0x78, 0xd3, 0xc6, 0xb0, 0xf7, 0xac, 0xa2, 0x31, 0xec, 0x17, 0x1f, 0xec, 0xc7, 0x50, - 0x15, 0x2c, 0x09, 0x81, 0xca, 0x91, 0x1f, 0x0a, 0x39, 0x07, 0xff, 0xa7, 0x30, 0xbc, 0xc8, 0xd1, - 0x46, 0xce, 
0x99, 0xf8, 0x3f, 0x8a, 0xd3, 0xa8, 0x16, 0xc6, 0x28, 0xca, 0xe8, 0xc3, 0x1c, 0x6b, - 0x50, 0x28, 0xbc, 0x37, 0x0c, 0x99, 0x67, 0xb3, 0xd0, 0xe5, 0xc4, 0xf7, 0x90, 0x8c, 0x3e, 0x7e, - 0x9a, 0xd8, 0x98, 0x92, 0x9a, 0x4b, 0x79, 0xa9, 0x99, 0xc5, 0x39, 0xe3, 0x94, 0xec, 0xcb, 0x80, - 0x30, 0x94, 0x9a, 0x25, 0xc1, 0xa8, 0x92, 0x12, 0x8c, 0x24, 0xc5, 0x6c, 0x32, 0x7a, 0x4c, 0xe8, - 0x16, 0x8a, 0xd9, 0xac, 0xa8, 0xf6, 0x83, 0x78, 0x86, 0xa4, 0x2c, 0x02, 0xe4, 0x0e, 0x9c, 0x63, - 0xda, 0x11, 0x9e, 0xbe, 0x3e, 0x23, 0x23, 0x9e, 0xc5, 0x42, 0x96, 0xfd, 0x2e, 0x96, 0x15, 0x9f, - 0xad, 0x78, 0x24, 0x6f, 0xc0, 0x46, 0x9c, 0x3b, 0x39, 0x7c, 0xe2, 0x8d, 0x59, 0xee, 0xc8, 0x13, - 0xae, 0xb7, 0x20, 0xa2, 0xcc, 0x7a, 0xe2, 0x8d, 0x31, 0x8f, 0xa4, 0xe8, 0xe1, 0x7f, 0x5d, 0x12, - 0xea, 0xec, 0x6d, 0xdf, 0x8f, 0xc2, 0x28, 0x70, 0xc6, 0x29, 0x9b, 0x27, 0x39, 0x86, 0x8b, 0x58, - 0xa5, 0x3b, 0x98, 0xc0, 0xca, 0x0f, 0x84, 0xfa, 0x3f, 0x5e, 0x60, 0x4b, 0x77, 0xbe, 0x9e, 0xd6, - 0x47, 0xe9, 0x14, 0x5b, 0x97, 0x91, 0xe9, 0xba, 0x92, 0xb8, 0xee, 0x9d, 0x31, 0x2f, 0x30, 0x9e, - 0x39, 0x2c, 0xb2, 0x57, 0xb0, 0xd7, 0x64, 0x8d, 0x9e, 0xdb, 0xc9, 0xc6, 0x93, 0xe6, 0x2a, 0x6f, - 0x49, 0xe4, 0x7d, 0x58, 0xf4, 0x06, 0x72, 0xd2, 0xe6, 0xac, 0xb9, 0xad, 0x31, 0x60, 0xe9, 0x23, - 0x12, 0x1e, 0x74, 0x69, 0x78, 0x1c, 0xba, 0xbd, 0x92, 0x92, 0x5c, 0xb4, 0x6d, 0xa1, 0x39, 0xcd, - 0x93, 0x91, 0xd5, 0xe4, 0xec, 0xc3, 0x73, 0x0e, 0x77, 0x81, 0x24, 0x81, 0x85, 0xc9, 0x7f, 0x69, - 0x7f, 0x17, 0x5e, 0x3e, 0x6d, 0x1f, 0xd1, 0x1d, 0x63, 0x4a, 0x87, 0x2f, 0xb2, 0xb8, 0xd7, 0xe9, - 0x7e, 0xbb, 0x09, 0x72, 0xbc, 0x7e, 0x4f, 0x4c, 0x11, 0x01, 0xdb, 0x0f, 0x3c, 0xed, 0x7f, 0x96, - 0x61, 0x35, 0x6d, 0x0f, 0x27, 0xaf, 0x42, 0x45, 0xda, 0x28, 0x2f, 0x14, 0x18, 0xcd, 0x71, 0x7b, - 0x44, 0xa4, 0x53, 0x6d, 0x8c, 0xe4, 0x1e, 0xac, 0xa2, 0x87, 0x3e, 0x0a, 0x88, 0x91, 0xc7, 0x4d, - 0x44, 0xa7, 0x35, 0xfe, 0x2c, 0x53, 0x5a, 0x7a, 0x30, 0xd2, 0x42, 0xc9, 0xdc, 0x59, 0x99, 0x6e, - 0xee, 0xe4, 0x4d, 0x99, 0x62, 0xee, 0x9c, 0x9b, 
0x61, 0xee, 0x4c, 0x28, 0x65, 0x73, 0x27, 0x1a, - 0xbd, 0x17, 0xa6, 0x19, 0xbd, 0x13, 0x1a, 0x66, 0xf4, 0x4e, 0xcc, 0x95, 0xd5, 0xa9, 0xe6, 0xca, - 0x84, 0x86, 0x9b, 0x2b, 0x13, 0x03, 0xe2, 0xe2, 0x54, 0x03, 0xa2, 0x44, 0xc4, 0x0c, 0x88, 0x2f, - 0xf2, 0x8e, 0x0d, 0x9c, 0xa7, 0x36, 0xf6, 0x38, 0xbf, 0xf1, 0x60, 0x97, 0x99, 0xce, 0x53, 0x74, - 0xbd, 0xa5, 0x82, 0x09, 0xf7, 0xd7, 0xd5, 0x7e, 0x98, 0xd9, 0x80, 0xc4, 0x98, 0xbf, 0x04, 0xab, - 0xec, 0x1c, 0xe6, 0xf1, 0xd4, 0xd9, 0x41, 0xbc, 0x62, 0xae, 0x08, 0x28, 0xd3, 0x97, 0xfe, 0x02, - 0xac, 0xc5, 0x68, 0x5c, 0x65, 0x88, 0xa1, 0x01, 0xcc, 0x98, 0x9a, 0x2b, 0x0b, 0x65, 0x7e, 0x01, - 0x8f, 0x15, 0x97, 0xe2, 0xc7, 0x02, 0x89, 0xbd, 0x06, 0x24, 0x41, 0x8b, 0x5f, 0x2f, 0x54, 0x10, - 0x75, 0x3d, 0x46, 0x8d, 0x9f, 0x18, 0xfc, 0x13, 0x25, 0x63, 0xa8, 0xfb, 0x79, 0x55, 0xff, 0x55, - 0x88, 0xbf, 0x6e, 0x73, 0x63, 0x8b, 0x68, 0x81, 0x2a, 0x0a, 0xba, 0x1c, 0xae, 0x1d, 0x66, 0xd5, - 0x62, 0x3f, 0xa7, 0x5a, 0x69, 0x3f, 0x2a, 0xa7, 0x8c, 0x18, 0xe2, 0x33, 0x54, 0xbe, 0x09, 0x7d, - 0x9b, 0x0f, 0x31, 0xdf, 0x7e, 0x6f, 0x4e, 0x99, 0xa6, 0xdc, 0x5f, 0xdb, 0xb2, 0x3a, 0x26, 0x84, - 0xa1, 0x2f, 0xdc, 0xb7, 0x6d, 0xa6, 0xee, 0x91, 0xee, 0x71, 0x82, 0x1d, 0xdb, 0x6b, 0xb7, 0x66, - 0xb3, 0x13, 0x5a, 0x62, 0xba, 0x4a, 0x51, 0xed, 0x13, 0xff, 0x12, 0x1f, 0xd8, 0x07, 0xb4, 0xf9, - 0x85, 0x69, 0xe6, 0xe5, 0x02, 0xc5, 0x5e, 0x8e, 0x39, 0xf6, 0x12, 0x72, 0x46, 0x15, 0x72, 0x28, - 0xb3, 0x35, 0x60, 0x19, 0x4d, 0x04, 0x82, 0x61, 0xa5, 0xc0, 0xbd, 0x20, 0xdf, 0xf8, 0x5a, 0xa3, - 0x65, 0x2e, 0x51, 0x3a, 0xc1, 0xe6, 0x08, 0x2e, 0xca, 0x8a, 0xfd, 0x74, 0x25, 0xe7, 0x44, 0x16, - 0x84, 0x99, 0x3d, 0x90, 0xe8, 0xff, 0xb1, 0xaa, 0xe7, 0x9d, 0x34, 0x80, 0xa3, 0xe1, 0xd3, 0x85, - 0xe9, 0x63, 0x32, 0x23, 0x27, 0x65, 0x22, 0xdb, 0x94, 0x64, 0xd9, 0x46, 0xd6, 0xf3, 0x97, 0xd3, - 0x7a, 0xfe, 0x1d, 0xb8, 0x41, 0xb7, 0x23, 0x3e, 0xa8, 0xee, 0xa7, 0x6e, 0x70, 0xe2, 0x8f, 0x30, - 0xd6, 0xdd, 0x38, 0x5e, 0x95, 0xcc, 0x30, 0x71, 0x85, 0xe2, 0xe1, 0x90, 0x19, 0x1c, 
0xab, 0x85, - 0x48, 0x2c, 0x86, 0xe3, 0xbf, 0x2a, 0xc3, 0x0b, 0xa7, 0x18, 0xf7, 0x19, 0x75, 0xff, 0xa5, 0xb4, - 0x04, 0x5e, 0x4a, 0xe9, 0x3f, 0x29, 0x53, 0x7e, 0xb8, 0x9c, 0x8c, 0xfa, 0x53, 0xe4, 0xef, 0x5f, - 0x87, 0x35, 0x76, 0x82, 0xb0, 0xb7, 0x1b, 0x07, 0x93, 0xe1, 0x29, 0x8e, 0x90, 0xcb, 0xe2, 0xa1, - 0x79, 0x86, 0x14, 0x4f, 0x15, 0xdc, 0x38, 0xad, 0x18, 0x46, 0x7a, 0xb0, 0x84, 0x68, 0x07, 0x8e, - 0x37, 0x3c, 0xd5, 0x8b, 0x67, 0xf1, 0x8c, 0x5d, 0x26, 0x63, 0x4f, 0xce, 0x28, 0x60, 0x07, 0x7f, - 0x93, 0x5b, 0xb0, 0x36, 0x9a, 0x1c, 0x53, 0xd9, 0x92, 0x4d, 0x2a, 0xee, 0x22, 0x3b, 0x67, 0xae, - 0x8c, 0x26, 0xc7, 0xfa, 0x78, 0x8c, 0x73, 0x03, 0x7d, 0x69, 0xd7, 0x29, 0x1e, 0x5b, 0xfe, 0x02, - 0x73, 0x1e, 0x31, 0x29, 0x03, 0xb6, 0x01, 0x70, 0xdc, 0x0d, 0x60, 0x2f, 0x2b, 0x78, 0x6e, 0x4f, - 0xf6, 0x43, 0xfb, 0xdf, 0x25, 0xa1, 0xd5, 0x9d, 0xbe, 0x80, 0xfe, 0x76, 0x88, 0x0a, 0x86, 0xe8, - 0x65, 0x50, 0x69, 0xd7, 0x27, 0xbb, 0x53, 0x3c, 0x46, 0xab, 0xa3, 0xc9, 0x71, 0xdc, 0x77, 0x72, - 0xc7, 0xcf, 0xcb, 0x1d, 0xff, 0xb6, 0xd0, 0xfa, 0x16, 0xee, 0x33, 0xd3, 0xbb, 0x9c, 0x8a, 0x5e, - 0xb7, 0x4e, 0xb7, 0x9b, 0xfc, 0xed, 0xb8, 0x15, 0x8c, 0x5b, 0xc6, 0x04, 0x3a, 0x97, 0x33, 0x81, - 0x16, 0xac, 0xbd, 0xf9, 0xa2, 0xb5, 0x97, 0x33, 0xb8, 0x2e, 0x14, 0x18, 0x5c, 0x0b, 0x17, 0x68, - 0xf5, 0x19, 0x0b, 0x74, 0x51, 0x9e, 0x27, 0xff, 0xbd, 0x24, 0x44, 0xaf, 0xf4, 0x5d, 0xea, 0x23, - 0x38, 0x2b, 0xee, 0x52, 0xec, 0x08, 0x4a, 0xec, 0xe8, 0x4b, 0x77, 0x5e, 0x29, 0xba, 0x45, 0x21, - 0x5a, 0xc1, 0x4d, 0x67, 0x9d, 0xdf, 0x9f, 0x92, 0xf2, 0xff, 0x7f, 0x6e, 0x4e, 0xe4, 0x11, 0x9c, - 0xc7, 0x54, 0x3b, 0x7d, 0xd9, 0x03, 0xc0, 0x0e, 0xdc, 0x03, 0x3e, 0x1f, 0x6e, 0xe6, 0xee, 0x19, - 0x5e, 0x5f, 0xaa, 0x8e, 0xe9, 0x1e, 0xec, 0x9d, 0x31, 0x37, 0xc2, 0x02, 0x78, 0xf6, 0x52, 0xf6, - 0x47, 0x0a, 0x68, 0xcf, 0xee, 0x2f, 0xbc, 0x3f, 0x67, 0x3b, 0x9c, 0xde, 0x9f, 0xa5, 0xde, 0x7b, - 0x01, 0x56, 0x02, 0xf7, 0x20, 0x70, 0xc3, 0xa3, 0x94, 0x92, 0x6b, 0x99, 0x03, 0x45, 0xc7, 0x88, - 0x78, 0xdf, 0xcf, 0x75, 
0xab, 0x11, 0x44, 0xda, 0x4e, 0x7c, 0xd7, 0x2e, 0x1c, 0x07, 0x3a, 0x9b, - 0xe4, 0x0a, 0xb2, 0x1f, 0xf7, 0x2a, 0xd5, 0x92, 0x5a, 0x36, 0x79, 0x54, 0xf2, 0x03, 0x6f, 0xe8, - 0x6a, 0xff, 0x2e, 0x96, 0x2c, 0x8a, 0x3a, 0x8f, 0x7c, 0x24, 0xbd, 0x78, 0x2a, 0xe7, 0xe4, 0x99, - 0x22, 0x92, 0xd3, 0x68, 0x20, 0x9b, 0x5f, 0x91, 0x06, 0xf2, 0xae, 0x70, 0x9b, 0xa6, 0x7b, 0xde, - 0x83, 0xdb, 0xe4, 0x15, 0x58, 0x60, 0x9e, 0xd2, 0xa2, 0xba, 0x6b, 0xa9, 0xea, 0x3e, 0xb8, 0x6d, - 0x8a, 0x72, 0xed, 0xf3, 0xd8, 0x3f, 0x25, 0xd7, 0x88, 0x07, 0xb7, 0xc9, 0xdb, 0xa7, 0x7b, 0xc1, - 0x54, 0x15, 0x2f, 0x98, 0xe2, 0xd7, 0x4b, 0xef, 0xa4, 0x5e, 0x2f, 0xbd, 0x38, 0xbb, 0xb7, 0xb8, - 0xd7, 0x11, 0x8b, 0xf4, 0x1c, 0xc7, 0x0a, 0xd5, 0xfe, 0xba, 0x04, 0x57, 0x67, 0x52, 0x90, 0x2b, - 0x50, 0xd5, 0xbb, 0x8d, 0x5e, 0x32, 0xbe, 0x74, 0xcd, 0x08, 0x08, 0xd9, 0x85, 0xc5, 0x6d, 0x27, - 0xf4, 0xfa, 0x74, 0x1a, 0x17, 0x1a, 0xc1, 0x73, 0x6c, 0x63, 0xf4, 0xbd, 0x33, 0x66, 0x42, 0x4b, - 0x6c, 0x58, 0xc7, 0xb5, 0x90, 0xca, 0xa3, 0x59, 0x2e, 0xd0, 0xd3, 0xe4, 0x18, 0xe6, 0xc8, 0xe8, - 0x3e, 0x93, 0x03, 0x92, 0xc7, 0x40, 0x2c, 0x6b, 0xaf, 0xe6, 0x06, 0x11, 0xd7, 0x5f, 0x44, 0x5e, - 0xfc, 0x1c, 0xe6, 0x8d, 0x67, 0xf4, 0x5d, 0x8e, 0x6e, 0xef, 0x8c, 0x59, 0xc0, 0x8d, 0xdc, 0x04, - 0x39, 0xe1, 0x2b, 0x9e, 0xd1, 0xcb, 0x7b, 0x67, 0x4c, 0x18, 0xc7, 0x89, 0x5f, 0xb3, 0x3b, 0xc1, - 0xa7, 0x42, 0x24, 0x9a, 0xde, 0x4f, 0xcf, 0x11, 0x68, 0xff, 0x65, 0xa8, 0x76, 0x85, 0x5b, 0xa2, - 0xf4, 0xf2, 0x50, 0xb8, 0x20, 0x9a, 0x71, 0xa9, 0xf6, 0x0f, 0x15, 0xa1, 0xd3, 0x79, 0x76, 0x7f, - 0x4a, 0x99, 0x50, 0x07, 0xb3, 0x33, 0xa1, 0x0e, 0x7e, 0xc6, 0x4c, 0xa8, 0x9a, 0x07, 0xaf, 0x9c, - 0xba, 0xef, 0xc9, 0xbb, 0xa0, 0x62, 0x92, 0x49, 0x47, 0x1a, 0x47, 0xb6, 0x04, 0xd7, 0xe3, 0xcc, - 0x2b, 0x7b, 0x3c, 0x33, 0xaf, 0xb9, 0xd6, 0x4f, 0x53, 0x6b, 0x7f, 0xc8, 0x33, 0xee, 0x34, 0x06, - 0xdd, 0x8c, 0xb5, 0xf5, 0xcb, 0x3e, 0x56, 0x35, 0x52, 0xeb, 0xf1, 0x05, 0x29, 0xa3, 0x77, 0xfe, - 0x5b, 0xd3, 0xdf, 0xac, 0x4a, 0x8b, 0xf3, 0x9f, 0x95, 0xe1, 
0xca, 0x2c, 0x72, 0xa2, 0x83, 0x6a, - 0x64, 0x72, 0xf7, 0xcb, 0x19, 0xe0, 0x72, 0x49, 0xff, 0xcd, 0x1c, 0x3a, 0x1d, 0x5b, 0x06, 0x8b, - 0x5f, 0x62, 0xe2, 0xd8, 0x72, 0x52, 0x3a, 0xb6, 0xa2, 0x98, 0xbc, 0x00, 0xf3, 0x7a, 0xcd, 0x4a, - 0x32, 0xd5, 0xe2, 0x93, 0x29, 0xa7, 0x1f, 0xe2, 0x63, 0x1c, 0x5e, 0x44, 0x7e, 0x2d, 0x9f, 0x9c, - 0x99, 0xa7, 0xa8, 0xbd, 0x2c, 0x75, 0x48, 0x2e, 0x19, 0x16, 0xd6, 0x37, 0x49, 0xde, 0xc4, 0xf3, - 0xa1, 0x98, 0xf9, 0x44, 0xcf, 0x1a, 0xcc, 0x77, 0x03, 0x37, 0x74, 0x23, 0xf9, 0x39, 0xd3, 0x18, - 0x21, 0x26, 0x2f, 0xe1, 0x8f, 0x8d, 0x9c, 0x13, 0x16, 0x5b, 0x6a, 0x5e, 0x8e, 0x21, 0x88, 0xaf, - 0x93, 0x28, 0xd8, 0x94, 0x50, 0x28, 0x41, 0xd3, 0x99, 0x8c, 0xfa, 0x47, 0xfb, 0x66, 0x93, 0x0b, - 0x57, 0x8c, 0x60, 0x88, 0x50, 0xda, 0xc0, 0xd0, 0x94, 0x50, 0xb4, 0xdf, 0x52, 0x60, 0xa3, 0xa8, - 0x1d, 0xe4, 0x0a, 0x54, 0x46, 0x85, 0x79, 0xa8, 0x47, 0x2c, 0x24, 0xce, 0x12, 0x1a, 0xef, 0x0e, - 0xfc, 0xe0, 0xd8, 0x89, 0xe4, 0x47, 0x5f, 0x12, 0xd8, 0x44, 0x63, 0xe3, 0x0e, 0xfe, 0x4f, 0xae, - 0x8b, 0x53, 0xa9, 0x9c, 0xcb, 0x5c, 0x8d, 0x7f, 0x34, 0x1d, 0xa0, 0x31, 0xe8, 0x76, 0xc6, 0x2c, - 0x19, 0xd3, 0x9b, 0x50, 0xa1, 0xd5, 0xca, 0xcc, 0x5e, 0x3a, 0x7f, 0xf4, 0x56, 0x93, 0x23, 0xb1, - 0x5a, 0x85, 0xce, 0xf1, 0xd0, 0x44, 0x64, 0xed, 0x21, 0xac, 0xa6, 0x31, 0x88, 0x91, 0x8e, 0xc7, - 0xbf, 0x74, 0x47, 0xe5, 0x9c, 0xb6, 0x7d, 0x9f, 0x3d, 0x3c, 0xde, 0xbe, 0xf8, 0x93, 0x2f, 0xae, - 0x03, 0xfd, 0xc9, 0x68, 0x8a, 0xe2, 0xf5, 0x6b, 0xbf, 0x5b, 0x82, 0x8d, 0x24, 0xd6, 0x91, 0x58, - 0x43, 0x7f, 0x63, 0x03, 0x6f, 0xe8, 0xa9, 0xc0, 0x10, 0x42, 0xb4, 0xcc, 0x37, 0x70, 0xc6, 0x7b, - 0xf4, 0x5d, 0xd8, 0x9c, 0x86, 0x4f, 0x5e, 0x85, 0x45, 0x0c, 0xb9, 0x39, 0x76, 0xfa, 0xae, 0xbc, - 0xcd, 0x8e, 0x04, 0xd0, 0x4c, 0xca, 0xb5, 0x3f, 0x51, 0xe0, 0x12, 0x7f, 0x2e, 0xdb, 0x72, 0xbc, - 0x11, 0xda, 0x8a, 0xfa, 0xee, 0x57, 0x13, 0x38, 0x66, 0x37, 0xb5, 0x8f, 0xbd, 0x94, 0x7e, 0x15, - 0x9d, 0xfb, 0xda, 0xf4, 0xd6, 0x92, 0x57, 0x30, 0x8c, 0x2c, 0x77, 0x27, 0xab, 0xb0, 0x40, 0x5d, - 
0x23, 0x0a, 0x90, 0x03, 0x75, 0x21, 0x86, 0xf6, 0xf7, 0xe0, 0xda, 0xec, 0x0f, 0x90, 0x5f, 0x85, - 0x15, 0xcc, 0x93, 0xba, 0x3f, 0x3e, 0x0c, 0x9c, 0x81, 0x2b, 0xb4, 0x88, 0x42, 0xd9, 0x2d, 0x97, - 0xb1, 0xa8, 0xb8, 0x3c, 0x70, 0xd4, 0x21, 0x66, 0x60, 0xe5, 0x44, 0xa9, 0x37, 0xe9, 0x32, 0x37, - 0xed, 0x3b, 0x0a, 0x90, 0x3c, 0x0f, 0xf2, 0x0d, 0x58, 0xde, 0xef, 0xd5, 0xac, 0xc8, 0x09, 0xa2, - 0x3d, 0x7f, 0x12, 0xf0, 0x90, 0xb4, 0x2c, 0x8e, 0x50, 0xd4, 0xb7, 0x99, 0x55, 0xf0, 0xc8, 0x9f, - 0x04, 0x66, 0x0a, 0x0f, 0x33, 0x6c, 0xba, 0xee, 0x93, 0x81, 0x73, 0x92, 0xce, 0xb0, 0xc9, 0x61, - 0xa9, 0x0c, 0x9b, 0x1c, 0xa6, 0xfd, 0x40, 0x81, 0xcb, 0xe2, 0x9d, 0xc4, 0xa0, 0xa0, 0x2e, 0x35, - 0x8c, 0x96, 0x17, 0x88, 0x2c, 0x07, 0xb3, 0x84, 0xf8, 0x75, 0x11, 0x50, 0x12, 0x2b, 0x88, 0xd2, - 0x3c, 0xa3, 0x25, 0xbf, 0x04, 0x15, 0x2b, 0xf2, 0xc7, 0xa7, 0x88, 0x28, 0xa9, 0xc6, 0x23, 0x1a, - 0xf9, 0x63, 0x64, 0x81, 0x94, 0x9a, 0x0b, 0x1b, 0x72, 0xe5, 0x44, 0x8d, 0x49, 0x0b, 0x16, 0x78, - 0x38, 0xe2, 0x8c, 0x03, 0xde, 0x8c, 0x36, 0x6d, 0xaf, 0x89, 0xb0, 0x95, 0x3c, 0x1e, 0xbf, 0x29, - 0x78, 0x68, 0xff, 0x48, 0x81, 0x25, 0x2a, 0xd8, 0xe0, 0xbd, 0xf5, 0xcb, 0x4e, 0xe9, 0xb4, 0xa8, - 0x2c, 0xfc, 0x49, 0x63, 0xf6, 0xa7, 0x3a, 0x8d, 0xdf, 0x82, 0xb5, 0x0c, 0x01, 0xd1, 0x30, 0x60, - 0xd9, 0xd0, 0xeb, 0x3b, 0x2c, 0x61, 0x1f, 0xf3, 0xc5, 0x4c, 0xc1, 0xb4, 0x7f, 0xa0, 0xc0, 0x46, - 0xe7, 0x49, 0xe4, 0x30, 0xe3, 0xbd, 0x39, 0x19, 0x8a, 0xf5, 0x4e, 0x85, 0x35, 0xf1, 0xe0, 0x86, - 0x05, 0x53, 0x62, 0xc2, 0x1a, 0x87, 0x99, 0x71, 0x29, 0xd9, 0x83, 0x2a, 0x3f, 0x5f, 0x42, 0x1e, - 0x3a, 0xff, 0x9a, 0xa4, 0x3e, 0x49, 0x18, 0x73, 0x24, 0xda, 0x12, 0xdc, 0xc2, 0x38, 0x8d, 0x19, - 0x53, 0x6b, 0x7f, 0xa9, 0xc0, 0x85, 0x29, 0x34, 0xe4, 0x3d, 0x98, 0xc3, 0x40, 0x0f, 0x7c, 0xf4, - 0xae, 0x4c, 0xf9, 0x44, 0xd4, 0x3f, 0x7a, 0x70, 0x9b, 0x1d, 0x44, 0xc7, 0xf4, 0x87, 0xc9, 0xa8, - 0xc8, 0x47, 0xb0, 0xa8, 0x0f, 0x06, 0xfc, 0x02, 0x57, 0x4a, 0x5d, 0xe0, 0xa6, 0x7c, 0xf1, 0xf5, - 0x18, 0x9f, 0x5d, 0xe0, 0xd8, 0x93, 
0xe3, 0xc1, 0xc0, 0xe6, 0x41, 0x2c, 0x12, 0x7e, 0x97, 0xde, - 0x85, 0xd5, 0x34, 0xf2, 0x73, 0xbd, 0xbb, 0xff, 0x5c, 0x01, 0x35, 0x5d, 0x87, 0x9f, 0x4f, 0xc0, - 0xcd, 0xa2, 0x61, 0x7e, 0xc6, 0xa4, 0xfa, 0xc7, 0x25, 0x38, 0x57, 0xd8, 0xc3, 0xe4, 0x35, 0x98, - 0xd7, 0xc7, 0xe3, 0x46, 0x9d, 0xcf, 0x2a, 0x2e, 0x21, 0xa1, 0x7e, 0x3d, 0x75, 0xbf, 0x65, 0x48, - 0xe4, 0x4d, 0xa8, 0x32, 0x1f, 0x91, 0xba, 0xd8, 0x70, 0x30, 0x82, 0x20, 0x77, 0x60, 0x49, 0x07, - 0xb1, 0x17, 0x88, 0x64, 0x07, 0x56, 0x79, 0xec, 0x3d, 0x74, 0x18, 0x8a, 0xf3, 0x25, 0xa1, 0x8f, - 0x95, 0x50, 0xda, 0x33, 0x57, 0xa3, 0xd4, 0xde, 0x99, 0xa1, 0x22, 0x4d, 0x50, 0x91, 0xa7, 0xcc, - 0x89, 0x45, 0xd2, 0x97, 0x7c, 0xef, 0xa6, 0xf0, 0xca, 0x51, 0xc6, 0xc3, 0xc5, 0xfc, 0xdf, 0x8f, - 0xdd, 0x51, 0xf4, 0xf3, 0x1b, 0xae, 0xe4, 0x1b, 0xa7, 0x1a, 0xae, 0xef, 0x57, 0xd8, 0x62, 0xce, - 0x92, 0x51, 0x89, 0x46, 0x4a, 0x8f, 0x82, 0x12, 0x0d, 0xbd, 0x9f, 0xf1, 0xe8, 0x72, 0x75, 0x58, - 0x60, 0x51, 0xff, 0xc4, 0xca, 0xb8, 0x5a, 0x58, 0x05, 0x86, 0xf3, 0xe0, 0x36, 0x13, 0x5f, 0x58, - 0xc4, 0x89, 0xd0, 0x14, 0xa4, 0xe4, 0x01, 0x2c, 0xd5, 0x86, 0xae, 0x33, 0x9a, 0x8c, 0x7b, 0xa7, - 0x33, 0x50, 0x6f, 0xf2, 0xb6, 0x2c, 0xf7, 0x19, 0x19, 0x1a, 0xb6, 0x71, 0x27, 0x97, 0x19, 0x91, - 0x5e, 0xfc, 0x08, 0xbd, 0x82, 0xba, 0xd9, 0x37, 0x66, 0xf4, 0x4f, 0x16, 0x88, 0x74, 0xe9, 0x08, - 0x0b, 0xfc, 0x95, 0xba, 0x0d, 0xab, 0x4d, 0x27, 0x8c, 0x7a, 0x81, 0x33, 0x0a, 0x31, 0x02, 0xf9, - 0x29, 0xa2, 0xa9, 0x5e, 0xe6, 0x15, 0x66, 0x3a, 0xdb, 0x28, 0x26, 0x65, 0x3a, 0xdb, 0x34, 0x3b, - 0x2a, 0x2f, 0xed, 0x78, 0x23, 0x67, 0xe8, 0x7d, 0x5b, 0xc4, 0xea, 0x60, 0xf2, 0xd2, 0x81, 0x00, - 0x9a, 0x49, 0xb9, 0xf6, 0x2b, 0xb9, 0x71, 0x63, 0xb5, 0x5c, 0x82, 0x05, 0x1e, 0xc9, 0x89, 0x45, - 0x36, 0xea, 0x1a, 0xed, 0x7a, 0xa3, 0xbd, 0xab, 0x2a, 0x64, 0x15, 0xa0, 0x6b, 0x76, 0x6a, 0x86, - 0x65, 0xd1, 0xdf, 0x25, 0xfa, 0x9b, 0x87, 0x3d, 0xda, 0xd9, 0x6f, 0xaa, 0x65, 0x29, 0xf2, 0x51, - 0x45, 0xfb, 0xb1, 0x02, 0xe7, 0x8b, 0x87, 0x92, 0xf4, 0x00, 0x63, 0x5f, 
0x71, 0x57, 0x85, 0x6f, - 0xcc, 0x1c, 0xf7, 0x42, 0x70, 0x36, 0x86, 0x56, 0xc4, 0x62, 0x33, 0x95, 0x84, 0x99, 0x8d, 0x05, - 0x7b, 0xf0, 0x06, 0x66, 0xc9, 0x1b, 0x68, 0x35, 0xd8, 0x9c, 0xc6, 0x23, 0xdd, 0xd4, 0x35, 0x58, - 0xd2, 0xbb, 0xdd, 0x66, 0xa3, 0xa6, 0xf7, 0x1a, 0x9d, 0xb6, 0xaa, 0x90, 0x45, 0x98, 0xdb, 0x35, - 0x3b, 0xfb, 0x5d, 0xb5, 0xa4, 0xfd, 0x9e, 0x02, 0x2b, 0x8d, 0xc4, 0x89, 0xf2, 0xcb, 0x2e, 0xbe, - 0x6f, 0xa6, 0x16, 0xdf, 0x66, 0x1c, 0x25, 0x2e, 0xfe, 0xc0, 0xa9, 0x56, 0xde, 0xbf, 0x29, 0xc3, - 0x7a, 0x8e, 0x86, 0x58, 0xb0, 0xa0, 0x3f, 0xb4, 0x3a, 0x8d, 0x7a, 0x8d, 0xd7, 0xec, 0x7a, 0xe2, - 0x34, 0x87, 0xd9, 0x46, 0x73, 0x5f, 0x61, 0x91, 0x55, 0x9e, 0x86, 0xb6, 0xef, 0x0d, 0xfa, 0x29, - 0xaf, 0x4d, 0xc1, 0x09, 0x4f, 0xb2, 0x6f, 0x4f, 0x02, 0x74, 0x44, 0xe5, 0xb5, 0x8e, 0x7d, 0xf1, - 0x04, 0x3c, 0xcf, 0x18, 0x5d, 0x33, 0x1d, 0x5a, 0x9e, 0x67, 0x9d, 0xf0, 0x23, 0x6d, 0x98, 0xdf, - 0xf5, 0xa2, 0xbd, 0xc9, 0x63, 0xbe, 0x7e, 0xaf, 0x25, 0xb9, 0x27, 0xf7, 0x26, 0x8f, 0xf3, 0x6c, - 0x51, 0xab, 0xc9, 0x5e, 0x55, 0xa7, 0x58, 0x72, 0x2e, 0xe4, 0x3e, 0xcc, 0xe9, 0x0f, 0x2d, 0x53, - 0xe7, 0xab, 0x4b, 0x72, 0x4b, 0x34, 0xf5, 0x29, 0xdc, 0x68, 0xeb, 0x03, 0x27, 0xc5, 0x8d, 0xf1, - 0xc8, 0x46, 0x96, 0xa8, 0x3c, 0x57, 0x64, 0x89, 0xed, 0x15, 0x58, 0xe2, 0x17, 0x32, 0xbc, 0xeb, - 0xfc, 0x48, 0x81, 0xcd, 0x69, 0xc3, 0x40, 0xef, 0x78, 0xe9, 0x08, 0x52, 0xe7, 0xe3, 0x4c, 0x67, - 0x69, 0xd7, 0x61, 0x81, 0x46, 0x3e, 0x80, 0x25, 0xe6, 0x5e, 0x66, 0xbd, 0xb9, 0x6f, 0x36, 0xf8, - 0xdc, 0xbf, 0xfa, 0x17, 0x5f, 0x5c, 0xbf, 0xc0, 0x3d, 0xd2, 0xc2, 0x37, 0xed, 0x49, 0xe0, 0x25, - 0xa4, 0x9b, 0x8a, 0x29, 0x53, 0x50, 0x91, 0xdc, 0x99, 0x0c, 0x3c, 0x57, 0x5c, 0x48, 0x44, 0x94, - 0x1d, 0x0e, 0x93, 0x0f, 0x48, 0x01, 0xd3, 0xbe, 0xab, 0xc0, 0xa5, 0xe9, 0x63, 0x4e, 0x0f, 0xdd, - 0x1e, 0xf3, 0xd2, 0x13, 0x71, 0x6e, 0xf0, 0xd0, 0x8d, 0x5d, 0xf9, 0x64, 0x9e, 0x02, 0x91, 0x12, - 0x71, 0x75, 0x99, 0xd0, 0xb8, 0x20, 0x51, 0xac, 0x4d, 0x93, 0x89, 0x04, 0xa2, 0xf6, 0x08, 0x2e, - 0x4c, 0x99, 
0x21, 0xe4, 0xfd, 0xc2, 0xfc, 0x89, 0xf8, 0xfc, 0x59, 0x7e, 0xdf, 0x9e, 0x4a, 0xc4, - 0x2b, 0xc1, 0xb5, 0xff, 0xc4, 0xfc, 0x52, 0x0b, 0xa6, 0x0b, 0x95, 0x0f, 0x30, 0x5f, 0x9f, 0x3e, - 0xea, 0x1f, 0xf9, 0x41, 0x32, 0x58, 0x28, 0x1f, 0x44, 0xb4, 0xc4, 0x76, 0xb0, 0x28, 0x33, 0x68, - 0x19, 0x2a, 0xe2, 0xc3, 0x7a, 0x37, 0xf0, 0x0f, 0x3c, 0xf6, 0x60, 0x8f, 0x5d, 0xeb, 0xf8, 0xca, - 0x7a, 0x59, 0x9a, 0xb0, 0xfe, 0xd0, 0x0d, 0xf5, 0xd1, 0xc9, 0xd3, 0x23, 0x37, 0x70, 0x73, 0xf8, - 0x71, 0x52, 0x1e, 0x0a, 0x66, 0xee, 0x0f, 0x7d, 0x2c, 0x30, 0xf3, 0xbc, 0xb5, 0xef, 0x95, 0xe0, - 0xe6, 0x33, 0x39, 0x9e, 0x36, 0xed, 0xe0, 0xd7, 0x01, 0x38, 0x2d, 0xed, 0x01, 0x49, 0x69, 0x23, - 0x2a, 0xe3, 0x04, 0x23, 0x53, 0x42, 0x21, 0x4f, 0xe0, 0xaa, 0xf8, 0xd5, 0xef, 0xbb, 0xe3, 0x28, - 0xa4, 0xf5, 0xe0, 0x71, 0x6c, 0xe3, 0x08, 0x3e, 0x55, 0xcc, 0xad, 0x7f, 0x33, 0xe6, 0xc1, 0x30, - 0x99, 0xf7, 0xbc, 0x08, 0x89, 0x8b, 0xaa, 0xa3, 0xd9, 0xbc, 0xc8, 0xad, 0x64, 0x25, 0x55, 0x12, - 0x95, 0xaf, 0x58, 0x49, 0xf1, 0xfa, 0xd1, 0x7e, 0x52, 0x82, 0xf3, 0x74, 0x47, 0x1e, 0xba, 0x61, - 0xa8, 0x4f, 0xa2, 0x23, 0xba, 0x6a, 0xd9, 0x1d, 0x85, 0xbc, 0x0d, 0xf3, 0x47, 0xcf, 0x67, 0x81, - 0x60, 0xe8, 0x84, 0x00, 0x4a, 0x39, 0xe2, 0x6d, 0x35, 0xfd, 0x9f, 0xbc, 0x03, 0x73, 0xa8, 0x60, - 0xe3, 0xc2, 0x84, 0xb8, 0x04, 0x16, 0x7f, 0x1a, 0xd5, 0x6f, 0x26, 0x23, 0xa0, 0xfd, 0x9c, 0xa4, - 0x74, 0xe3, 0xfb, 0x99, 0x50, 0x3c, 0xc5, 0x59, 0xdd, 0xcc, 0xc5, 0xe3, 0x03, 0x87, 0xe7, 0x49, - 0xdb, 0x82, 0x75, 0xb1, 0x6a, 0xc6, 0x22, 0xdc, 0x38, 0xb7, 0x7c, 0xaf, 0xf1, 0xf8, 0x0f, 0x63, - 0x11, 0x72, 0xfc, 0x45, 0x58, 0x0d, 0xc3, 0x23, 0x9b, 0x87, 0x0f, 0x7a, 0x22, 0x32, 0x99, 0x98, - 0xcb, 0x61, 0x78, 0xc4, 0xe2, 0x08, 0xdd, 0x77, 0x4f, 0x28, 0x16, 0xba, 0xf8, 0x26, 0x58, 0x55, - 0x86, 0x15, 0x0d, 0xc3, 0x18, 0x8b, 0x47, 0xbe, 0x82, 0x04, 0x4b, 0xfb, 0x6f, 0x25, 0x58, 0x7c, - 0x48, 0x05, 0x77, 0x54, 0x47, 0xcd, 0x56, 0x6f, 0xdd, 0x81, 0xa5, 0xa6, 0xef, 0x70, 0xa3, 0x23, - 0x7f, 0x61, 0xcc, 0xde, 0x04, 0x0c, 0x7d, 0x47, 
0xd8, 0x2f, 0x43, 0x53, 0x46, 0x7a, 0x46, 0xe8, - 0xa7, 0x7b, 0x30, 0xcf, 0x8c, 0xc0, 0x5c, 0xd3, 0x2a, 0xae, 0x6e, 0x71, 0x8d, 0x5e, 0x67, 0xc5, - 0x92, 0x9d, 0x8c, 0x19, 0x92, 0xe5, 0x7b, 0x04, 0xf7, 0xff, 0x97, 0x94, 0x6f, 0x73, 0xa7, 0x53, - 0xbe, 0x49, 0x61, 0xa3, 0xe7, 0x4f, 0x13, 0x36, 0xfa, 0xd2, 0x5d, 0x58, 0x92, 0xea, 0xf3, 0x5c, - 0x37, 0xb9, 0xdf, 0x28, 0xc1, 0x0a, 0xb6, 0x2a, 0x76, 0x2d, 0xfb, 0x9b, 0xa9, 0x4a, 0xfc, 0x66, - 0x4a, 0x95, 0xb8, 0x29, 0x8f, 0x17, 0x6b, 0xd9, 0x0c, 0x1d, 0xe2, 0x3d, 0x58, 0xcf, 0x21, 0x92, - 0xb7, 0x60, 0x8e, 0x56, 0x5f, 0xa8, 0x5e, 0xd4, 0xec, 0x0c, 0x48, 0x52, 0x8c, 0xd0, 0x86, 0x87, - 0x26, 0xc3, 0xd6, 0xfe, 0x97, 0x02, 0xcb, 0x3c, 0x83, 0xe0, 0xe8, 0xc0, 0x7f, 0x66, 0x77, 0xde, - 0xca, 0x76, 0x27, 0x0b, 0x64, 0xc8, 0xbb, 0xf3, 0xff, 0x76, 0x27, 0xde, 0x4d, 0x75, 0xe2, 0x85, - 0x38, 0xe0, 0xb8, 0x68, 0xce, 0x8c, 0x3e, 0xfc, 0x21, 0xa6, 0xe0, 0x48, 0x23, 0x92, 0x5f, 0x83, - 0xc5, 0xb6, 0xfb, 0x34, 0xa5, 0xc1, 0xb8, 0x35, 0x85, 0xe9, 0xeb, 0x31, 0x22, 0x5b, 0x53, 0xec, - 0x5d, 0x8e, 0xfb, 0xd4, 0xce, 0xd9, 0x9f, 0x13, 0x96, 0x97, 0xde, 0x85, 0xd5, 0x34, 0xd9, 0xf3, - 0x4c, 0x7d, 0x1e, 0x0e, 0x05, 0x63, 0x73, 0xfe, 0x56, 0x19, 0x20, 0x89, 0x24, 0x41, 0x17, 0x60, - 0xca, 0xf5, 0x46, 0x18, 0x7f, 0x10, 0x24, 0xcf, 0x71, 0xe1, 0x91, 0x73, 0x8b, 0x1b, 0x29, 0x4a, - 0xd3, 0x03, 0xc2, 0x8f, 0x44, 0x34, 0x1c, 0xe6, 0x66, 0x38, 0x74, 0x98, 0x4b, 0x7e, 0x79, 0xfb, - 0x45, 0xcc, 0xff, 0x11, 0x43, 0x53, 0xd1, 0xba, 0xab, 0xf5, 0x09, 0xcf, 0x3b, 0x84, 0x01, 0x0c, - 0xea, 0x14, 0x21, 0x17, 0x9d, 0xa5, 0xf2, 0x7c, 0xd1, 0x59, 0xba, 0xb0, 0xe8, 0x8d, 0x3e, 0x75, - 0x47, 0x91, 0x1f, 0x9c, 0xa0, 0x65, 0x26, 0x51, 0xf9, 0xd2, 0x2e, 0x68, 0x88, 0x32, 0x36, 0x0e, - 0x28, 0x23, 0xc4, 0xf8, 0xf2, 0x30, 0xc4, 0xc0, 0x38, 0xa6, 0xc4, 0x9c, 0x3a, 0xcf, 0x22, 0x4b, - 0xdc, 0xab, 0x54, 0xab, 0xea, 0xe2, 0xbd, 0x4a, 0x75, 0x51, 0x05, 0x53, 0x32, 0xab, 0xc6, 0x66, - 0x53, 0xc9, 0xd2, 0x99, 0xb6, 0x62, 0x6a, 0x7f, 0x55, 0x02, 0x92, 0xaf, 0x06, 0xf9, 
0x26, 0x2c, - 0xb1, 0x0d, 0xd6, 0x0e, 0xc2, 0x4f, 0xf8, 0xbb, 0x24, 0xf6, 0x48, 0x50, 0x02, 0xcb, 0x11, 0x4e, - 0x19, 0xd8, 0x0c, 0x3f, 0x19, 0x92, 0x5f, 0x85, 0xb3, 0xd8, 0xbd, 0x63, 0x37, 0xf0, 0xfc, 0x81, - 0x8d, 0xe9, 0x28, 0x9c, 0x21, 0xcf, 0xdd, 0xfe, 0xda, 0x5f, 0x7c, 0x71, 0xfd, 0x6a, 0x41, 0xf1, - 0x94, 0x61, 0xc0, 0x80, 0x10, 0x5d, 0xc4, 0xec, 0x32, 0x44, 0xd2, 0x03, 0x55, 0xa6, 0x3f, 0x98, - 0x0c, 0x87, 0x7c, 0x64, 0xb7, 0xa8, 0x50, 0x97, 0x2d, 0x9b, 0xc2, 0x78, 0x35, 0x61, 0xbc, 0x33, - 0x19, 0x0e, 0xc9, 0xdb, 0x00, 0xfe, 0xc8, 0x3e, 0xf6, 0xc2, 0x90, 0xd9, 0xfb, 0xe2, 0xb7, 0x6a, - 0x09, 0x54, 0x1e, 0x0c, 0x7f, 0xd4, 0x62, 0x40, 0xf2, 0x77, 0x00, 0x03, 0xa3, 0x61, 0xc4, 0x40, - 0xe6, 0xd3, 0xc6, 0xe5, 0x3c, 0x01, 0x4c, 0x87, 0xd2, 0x39, 0x74, 0x2d, 0xef, 0xdb, 0xe2, 0x49, - 0xdf, 0xb7, 0x60, 0x9d, 0xbb, 0xef, 0x3f, 0xf4, 0xa2, 0x23, 0x7e, 0xdb, 0xfc, 0x32, 0x57, 0x55, - 0xe9, 0xba, 0xf9, 0xa7, 0x15, 0x00, 0xfd, 0xa1, 0x25, 0x82, 0xf1, 0xbe, 0x02, 0x73, 0xf4, 0x0e, - 0x2d, 0x74, 0x71, 0x68, 0xc9, 0x40, 0xbe, 0xb2, 0x25, 0x03, 0x31, 0xe8, 0x6a, 0x34, 0xf1, 0xf5, - 0x8d, 0xd0, 0xc3, 0xe1, 0x6a, 0x64, 0x0f, 0x72, 0x52, 0xc9, 0x50, 0x38, 0x16, 0x69, 0x02, 0x24, - 0xe1, 0x71, 0xf9, 0xad, 0x70, 0x3d, 0x89, 0x33, 0xc9, 0x0b, 0x78, 0x92, 0xb7, 0xe4, 0x89, 0xa5, - 0x3c, 0x7d, 0x12, 0x34, 0x72, 0x1f, 0x2a, 0x3d, 0x27, 0x8e, 0xdc, 0x32, 0x25, 0x68, 0xf0, 0x0d, - 0x9e, 0x5b, 0x3f, 0x09, 0x1c, 0xbc, 0x1a, 0x39, 0x87, 0x72, 0xed, 0x90, 0x09, 0x31, 0x60, 0xbe, - 0xeb, 0x04, 0xce, 0x71, 0x38, 0x2d, 0xd8, 0x3c, 0x2b, 0x15, 0x29, 0x66, 0x10, 0x28, 0xcb, 0x14, - 0xac, 0x98, 0xdc, 0x81, 0xb2, 0x65, 0xb5, 0x78, 0xa8, 0xbc, 0x95, 0x44, 0xe0, 0xb7, 0xac, 0x16, - 0x73, 0x0d, 0x08, 0xc3, 0x63, 0x89, 0x8c, 0x22, 0x93, 0x5f, 0x84, 0x25, 0xe9, 0x3e, 0xc2, 0x83, - 0x4c, 0x62, 0x1f, 0x48, 0xef, 0x3b, 0xe5, 0x4d, 0x43, 0xc2, 0x26, 0x4d, 0x50, 0xef, 0x4f, 0x1e, - 0xbb, 0xfa, 0x78, 0x8c, 0x21, 0x23, 0x3e, 0x75, 0x03, 0x26, 0xc8, 0x55, 0x93, 0xec, 0x2c, 0xf8, - 0x98, 0x6a, 0x20, 0x4a, 
0x65, 0x7d, 0x64, 0x96, 0x92, 0x74, 0x61, 0xdd, 0x72, 0xa3, 0xc9, 0x98, - 0x79, 0x69, 0xed, 0xb0, 0x8b, 0x10, 0x0b, 0x49, 0x89, 0x89, 0x2c, 0x42, 0x5a, 0x28, 0x5c, 0xe3, - 0x0e, 0x72, 0x97, 0xa1, 0x3c, 0xb1, 0xe6, 0xca, 0x43, 0x2e, 0x4b, 0xf0, 0xca, 0x0c, 0x09, 0x9e, - 0xca, 0xc7, 0xb9, 0xb0, 0xc9, 0x78, 0x0f, 0x91, 0xc2, 0x26, 0xa7, 0x82, 0x25, 0xff, 0xa0, 0x22, - 0x45, 0xee, 0xe7, 0x63, 0xf1, 0x1e, 0xc0, 0x3d, 0xdf, 0x1b, 0xb5, 0xdc, 0xe8, 0xc8, 0x1f, 0x48, - 0xaf, 0x77, 0x97, 0x3e, 0xf6, 0xbd, 0x91, 0x7d, 0x8c, 0xe0, 0xbf, 0xfa, 0xe2, 0xba, 0x84, 0x64, - 0x4a, 0xff, 0x93, 0xaf, 0xc1, 0x22, 0xfd, 0xd5, 0x4b, 0x7c, 0xcd, 0x98, 0xda, 0x1e, 0xa9, 0x59, - 0xce, 0xbc, 0x04, 0x81, 0xdc, 0xc5, 0x2c, 0x91, 0xde, 0x38, 0x92, 0x84, 0x57, 0x91, 0x12, 0xd2, - 0x1b, 0x47, 0xd9, 0xa7, 0xbb, 0x12, 0x32, 0xd9, 0x8b, 0xab, 0x2e, 0x12, 0xbb, 0xf2, 0x64, 0x94, - 0xfc, 0xfd, 0x2f, 0x16, 0xd9, 0x22, 0x0b, 0x84, 0xfc, 0xfe, 0x37, 0x43, 0x86, 0x95, 0xb0, 0xf6, - 0xea, 0xfc, 0xd6, 0x39, 0x27, 0x55, 0x22, 0x3c, 0x1a, 0xf0, 0x3b, 0x64, 0xaa, 0x12, 0x31, 0x32, - 0xd9, 0x86, 0x35, 0x26, 0xf5, 0xc7, 0x29, 0xe0, 0xb9, 0x88, 0x8b, 0x7b, 0x5b, 0x92, 0x23, 0x5e, - 0xfe, 0x7c, 0x86, 0x80, 0xec, 0xc0, 0x1c, 0x6a, 0x10, 0xf8, 0xe3, 0x9c, 0xcb, 0xb2, 0x26, 0x29, - 0xbb, 0x8e, 0x70, 0x5f, 0x41, 0x1d, 0x92, 0xbc, 0xaf, 0x20, 0x2a, 0xf9, 0x65, 0x00, 0x63, 0x14, - 0xf8, 0xc3, 0x21, 0xe6, 0x29, 0xa9, 0xa6, 0x9e, 0xff, 0x73, 0x3e, 0xc8, 0x25, 0x41, 0xe2, 0x31, - 0xb5, 0xf1, 0xb7, 0x9d, 0xc9, 0x66, 0x22, 0xf1, 0xd2, 0x1a, 0x30, 0xcf, 0x16, 0x23, 0xe6, 0xfc, - 0xe1, 0x99, 0x11, 0xa5, 0x8c, 0x31, 0x2c, 0xe7, 0x0f, 0x87, 0xe7, 0x73, 0xfe, 0x48, 0x04, 0xda, - 0x7d, 0xd8, 0x28, 0x6a, 0x58, 0x4a, 0xe7, 0xa1, 0x9c, 0x56, 0xe7, 0xf1, 0x07, 0x65, 0x58, 0x46, - 0x6e, 0x62, 0x17, 0xd6, 0x61, 0xc5, 0x9a, 0x3c, 0x8e, 0x03, 0xe2, 0x8a, 0xdd, 0x18, 0xeb, 0x17, - 0xca, 0x05, 0xb2, 0x99, 0x37, 0x45, 0x41, 0x0c, 0x58, 0x15, 0x27, 0xc1, 0xae, 0x78, 0xc8, 0x12, - 0xa7, 0xdb, 0x11, 0xef, 0x7b, 0xb8, 0x0f, 0xad, 0xac, 0xd0, 
0x48, 0x13, 0x25, 0xe7, 0x41, 0xf9, - 0x79, 0xce, 0x83, 0xca, 0xa9, 0xce, 0x83, 0x8f, 0x60, 0x59, 0x7c, 0x0d, 0x77, 0xf2, 0xb9, 0x2f, - 0xb7, 0x93, 0xa7, 0x98, 0x91, 0x66, 0xbc, 0xa3, 0xcf, 0xcf, 0xdc, 0xd1, 0xd1, 0x76, 0x2e, 0x56, - 0xd9, 0x18, 0x61, 0xf9, 0x8d, 0x5d, 0xfb, 0xf3, 0x32, 0xc0, 0x6e, 0xad, 0xfb, 0x33, 0x9c, 0x92, - 0x6f, 0xc1, 0x62, 0xd3, 0x17, 0x66, 0x53, 0xc9, 0x5e, 0x35, 0x14, 0x40, 0x59, 0x5c, 0x88, 0x31, - 0xe3, 0xd3, 0xad, 0xfc, 0x55, 0x9c, 0x6e, 0x77, 0x51, 0xaf, 0xf3, 0xb1, 0xdb, 0x8f, 0x92, 0xcc, - 0xcf, 0xb8, 0x64, 0x44, 0x3c, 0xbb, 0xb4, 0xd9, 0x4c, 0x42, 0xa6, 0xbb, 0x13, 0xf7, 0xc8, 0x12, - 0x41, 0x2a, 0x78, 0x2a, 0x56, 0xdc, 0x9d, 0x44, 0xa4, 0x0f, 0x11, 0xf7, 0x42, 0xde, 0x1e, 0x32, - 0x64, 0x5f, 0xed, 0x80, 0x90, 0x0f, 0x63, 0x17, 0xda, 0x85, 0x59, 0x3d, 0xa4, 0xe5, 0x7a, 0x68, - 0xaa, 0xe3, 0xac, 0xf6, 0x63, 0x45, 0xce, 0x75, 0xf6, 0x33, 0x0c, 0xf5, 0x3b, 0x00, 0xb1, 0xdf, - 0x8a, 0x18, 0xeb, 0x38, 0x8e, 0x01, 0x83, 0xca, 0xbd, 0x9c, 0xe0, 0x4a, 0xad, 0x29, 0x7f, 0x55, - 0xad, 0xe9, 0xc1, 0x52, 0xe7, 0x49, 0xe4, 0x24, 0x8e, 0x4e, 0x60, 0xc5, 0x92, 0x2c, 0xee, 0x4c, - 0x65, 0x54, 0xcb, 0x9d, 0x93, 0xe4, 0xe0, 0x29, 0x22, 0xb0, 0x44, 0xa8, 0xfd, 0xb5, 0x02, 0x6b, - 0x72, 0x88, 0xa2, 0x93, 0x51, 0x9f, 0xbc, 0xcf, 0x52, 0x2f, 0x28, 0xa9, 0x2b, 0x8b, 0x84, 0x44, - 0xb7, 0xdc, 0x93, 0x51, 0x9f, 0x09, 0x40, 0xce, 0x53, 0xb9, 0xb2, 0x94, 0x90, 0x3c, 0x86, 0xe5, - 0xae, 0x3f, 0x1c, 0x52, 0xb1, 0x26, 0xf8, 0x94, 0x5f, 0x00, 0x28, 0xa3, 0xac, 0xf5, 0x4c, 0x54, - 0x68, 0xfb, 0x05, 0x7e, 0xcf, 0xbd, 0x30, 0xa6, 0xfb, 0xbd, 0xc7, 0xe9, 0x12, 0xb6, 0x9f, 0xe3, - 0x4b, 0x55, 0x99, 0x67, 0x72, 0x36, 0xa5, 0x73, 0x76, 0xc9, 0xb5, 0xa4, 0xc5, 0x58, 0xcf, 0x19, - 0x67, 0x93, 0xf6, 0xf7, 0x15, 0xb8, 0x91, 0x6f, 0x5a, 0x6d, 0xe8, 0x4f, 0x06, 0xbd, 0xc0, 0xf1, - 0x86, 0x4d, 0xff, 0x30, 0x64, 0x21, 0xeb, 0x0f, 0x13, 0x0d, 0x35, 0x0f, 0x59, 0x7f, 0xe8, 0x65, - 0x43, 0xd6, 0xe3, 0xd3, 0xf5, 0x37, 0xa1, 0x6a, 0x7d, 0x68, 0x7d, 0x38, 0x71, 0xc5, 0x5d, 0x98, - 
0xed, 0x0f, 0xe1, 0x27, 0xa1, 0xfd, 0x09, 0x05, 0xca, 0x27, 0x86, 0x40, 0xd4, 0xfe, 0x7d, 0x09, - 0x48, 0xbe, 0x1e, 0xf2, 0x16, 0xac, 0xfc, 0x3f, 0x10, 0xc9, 0x33, 0xa2, 0x6c, 0xe5, 0xb9, 0x44, - 0xd9, 0x4f, 0x40, 0xed, 0xd3, 0x7e, 0xb4, 0x23, 0xda, 0x91, 0xf6, 0xd0, 0x8f, 0x4f, 0x84, 0x5f, - 0x98, 0x3a, 0xa7, 0xd2, 0x1d, 0xcf, 0xf6, 0xa4, 0x2c, 0x13, 0xf9, 0x70, 0xeb, 0xa7, 0xf0, 0xb5, - 0xdf, 0x57, 0x60, 0xa3, 0x68, 0x0a, 0xd0, 0xc3, 0x53, 0x3e, 0x4d, 0xe3, 0xb3, 0x1c, 0x0f, 0x4f, - 0xf9, 0x00, 0x4e, 0x9f, 0xe8, 0x19, 0xa2, 0x6c, 0x7f, 0x94, 0x9e, 0xa7, 0x3f, 0xb4, 0xff, 0x5a, - 0x86, 0x65, 0x66, 0xd4, 0xdc, 0x73, 0x9d, 0x61, 0x74, 0x44, 0x07, 0x57, 0xe4, 0xa0, 0x94, 0x5c, - 0x5f, 0x67, 0x24, 0x9f, 0xbc, 0x83, 0xe9, 0xfe, 0x23, 0xbf, 0xef, 0x0f, 0x65, 0xa5, 0xe0, 0x98, - 0xc3, 0x32, 0xd9, 0xfe, 0x11, 0x46, 0xe7, 0x2e, 0x4f, 0x29, 0x50, 0x4e, 0xe6, 0x2e, 0x33, 0x74, - 0xcb, 0x73, 0x97, 0x1b, 0x95, 0x3f, 0x83, 0xb3, 0x89, 0x9d, 0x3a, 0xb6, 0x6e, 0x9f, 0xe2, 0x55, - 0xd1, 0x16, 0x7f, 0x55, 0x74, 0x2d, 0x31, 0x7d, 0xa3, 0xcd, 0x1e, 0x4b, 0x33, 0x79, 0x18, 0x8a, - 0x3e, 0x41, 0xee, 0x83, 0x9a, 0x80, 0x79, 0x82, 0x08, 0x26, 0xf1, 0x62, 0xf0, 0x26, 0x89, 0x6d, - 0x2e, 0x57, 0x44, 0x8e, 0x90, 0x1e, 0x72, 0x09, 0xcc, 0x48, 0x9e, 0x95, 0x09, 0xf3, 0x4f, 0xcc, - 0x0b, 0x5f, 0x0f, 0xc9, 0x87, 0x5c, 0x86, 0x8c, 0x8e, 0x91, 0xc8, 0x6b, 0xb2, 0x90, 0x8c, 0x11, - 0x7f, 0x8f, 0x2f, 0x8f, 0x11, 0xc7, 0xda, 0xfa, 0x9e, 0x02, 0x6b, 0x0d, 0xbd, 0xc5, 0x73, 0x18, - 0xb2, 0x5e, 0xbd, 0x09, 0x57, 0x1b, 0x7a, 0xcb, 0xee, 0x76, 0x9a, 0x8d, 0xda, 0x23, 0xbb, 0x30, - 0x35, 0xd1, 0x55, 0xb8, 0x98, 0x47, 0x49, 0x4c, 0xfa, 0x57, 0x60, 0x33, 0x5f, 0x2c, 0xd2, 0x17, - 0x15, 0x13, 0x8b, 0x4c, 0x47, 0xe5, 0xad, 0x0f, 0x60, 0x4d, 0xa4, 0xea, 0xe9, 0x35, 0x2d, 0x4c, - 0x06, 0xb8, 0x06, 0x4b, 0x0f, 0x0c, 0xb3, 0xb1, 0xf3, 0xc8, 0xde, 0xd9, 0x6f, 0x36, 0xd5, 0x33, - 0x64, 0x05, 0x16, 0x39, 0xa0, 0xa6, 0xab, 0x0a, 0x59, 0x86, 0x6a, 0xa3, 0x6d, 0x19, 0xb5, 0x7d, - 0xd3, 0x50, 0x4b, 0x5b, 0xff, 0x42, 
0x81, 0x95, 0xfd, 0xf1, 0xc0, 0x89, 0xdc, 0x80, 0xb7, 0xe8, - 0x1a, 0x5c, 0xda, 0xef, 0xd6, 0xf5, 0x9e, 0x61, 0x16, 0x37, 0xe7, 0x1c, 0xac, 0x67, 0xca, 0x3b, - 0xf7, 0x55, 0x85, 0x5c, 0x86, 0x0b, 0x19, 0x70, 0xbd, 0x61, 0xe9, 0xdb, 0xac, 0x15, 0x17, 0xe1, - 0x5c, 0xa6, 0xb0, 0xdb, 0x68, 0xb7, 0x8d, 0xba, 0x5a, 0xa6, 0x0d, 0xcc, 0x7d, 0xce, 0x34, 0xf4, - 0x3a, 0x25, 0x55, 0x2b, 0x5b, 0x1f, 0xc0, 0x6a, 0x37, 0x7e, 0xa7, 0x80, 0x1e, 0x03, 0x0b, 0x50, - 0x36, 0xf5, 0x87, 0xea, 0x19, 0x02, 0x30, 0xdf, 0xbd, 0x5f, 0xb3, 0x6e, 0xdf, 0x56, 0x15, 0xb2, - 0x04, 0x0b, 0xbb, 0xb5, 0xae, 0x7d, 0xbf, 0x65, 0xa9, 0x25, 0xfa, 0x43, 0x7f, 0x68, 0xe1, 0x8f, - 0xf2, 0xd6, 0x1b, 0x68, 0xe5, 0xfb, 0xec, 0xa4, 0xe9, 0x85, 0x91, 0x3b, 0x72, 0x03, 0xec, 0xa3, - 0x65, 0xa8, 0x5a, 0x2e, 0x95, 0x57, 0x22, 0x97, 0x75, 0x50, 0x6b, 0x32, 0x8c, 0xbc, 0xf1, 0xd0, - 0xfd, 0x4c, 0x55, 0xb6, 0xee, 0xc2, 0x9a, 0xe9, 0x4f, 0x22, 0x6f, 0x74, 0x68, 0x45, 0x14, 0xe3, - 0xf0, 0x04, 0xdb, 0xdc, 0xd6, 0x5b, 0xdb, 0x8d, 0xdd, 0xfd, 0xce, 0xbe, 0x65, 0xb7, 0xf4, 0x5e, - 0x6d, 0x8f, 0xf9, 0x2b, 0xb4, 0x3a, 0x56, 0xcf, 0x36, 0x8d, 0x9a, 0xd1, 0xee, 0xa9, 0xca, 0xd6, - 0xef, 0xa2, 0x06, 0xb7, 0xef, 0x8f, 0x06, 0x3b, 0x4e, 0x3f, 0xf2, 0x03, 0xac, 0xb0, 0x06, 0xd7, - 0x2c, 0xa3, 0xd6, 0x69, 0xd7, 0xed, 0x1d, 0xbd, 0xd6, 0xeb, 0x98, 0x45, 0xb9, 0xbb, 0x2e, 0xc1, - 0xf9, 0x02, 0x9c, 0x4e, 0xaf, 0xab, 0x2a, 0xe4, 0x3a, 0x5c, 0x2e, 0x28, 0x7b, 0x68, 0x6c, 0xeb, - 0xfb, 0xbd, 0xbd, 0xb6, 0x5a, 0x9a, 0x42, 0x6c, 0x59, 0x1d, 0xb5, 0xbc, 0xf5, 0xdb, 0x0a, 0xac, - 0xee, 0x87, 0xfc, 0x79, 0xd4, 0x3e, 0x46, 0x95, 0xb8, 0x01, 0x57, 0xf6, 0x2d, 0xc3, 0xb4, 0x7b, - 0x9d, 0xfb, 0x46, 0xdb, 0xde, 0xb7, 0xf4, 0xdd, 0x6c, 0x6d, 0xae, 0xc3, 0x65, 0x09, 0xc3, 0x34, - 0x6a, 0x9d, 0x07, 0x86, 0x69, 0x77, 0x75, 0xcb, 0x7a, 0xd8, 0x31, 0xeb, 0xaa, 0x42, 0xbf, 0x58, - 0x80, 0xd0, 0xda, 0xd1, 0x59, 0x6d, 0x52, 0x65, 0x6d, 0xe3, 0xa1, 0xde, 0xb4, 0xb7, 0x3b, 0x3d, - 0xb5, 0xbc, 0xd5, 0xa2, 0xb7, 0x08, 0xcc, 0xa0, 0xc3, 0x5c, 0xdc, 0xab, 
0x50, 0x69, 0x77, 0xda, - 0x46, 0xd6, 0xcb, 0x65, 0x19, 0xaa, 0x7a, 0xb7, 0x6b, 0x76, 0x1e, 0xe0, 0xe4, 0x01, 0x98, 0xaf, - 0x1b, 0xed, 0x06, 0xce, 0x96, 0x65, 0xa8, 0x76, 0xcd, 0x4e, 0xab, 0xd3, 0x33, 0xea, 0x6a, 0x65, - 0xcb, 0x14, 0x07, 0xab, 0x60, 0xda, 0xf7, 0x99, 0x4b, 0x49, 0xdd, 0xd8, 0xd1, 0xf7, 0x9b, 0x3d, - 0x3e, 0x44, 0x8f, 0x6c, 0xd3, 0xf8, 0x70, 0xdf, 0xb0, 0x7a, 0x96, 0xaa, 0x10, 0x15, 0x96, 0xdb, - 0x86, 0x51, 0xb7, 0x6c, 0xd3, 0x78, 0xd0, 0x30, 0x1e, 0xaa, 0x25, 0xca, 0x93, 0xfd, 0x4f, 0xbf, - 0xb0, 0xf5, 0x03, 0x05, 0x08, 0xcb, 0x3e, 0x24, 0x52, 0xda, 0xe2, 0x8c, 0xb9, 0x06, 0x97, 0xf6, - 0xe8, 0x50, 0x63, 0xd3, 0x5a, 0x9d, 0x7a, 0xb6, 0xcb, 0xce, 0x03, 0xc9, 0x94, 0x77, 0x76, 0x76, - 0x70, 0x59, 0x9c, 0xcd, 0xc0, 0xeb, 0x66, 0xa7, 0xab, 0x96, 0x2e, 0x95, 0xaa, 0x0a, 0xb9, 0x90, - 0x2b, 0xbc, 0x6f, 0x18, 0x5d, 0xb5, 0x4c, 0x87, 0x28, 0x53, 0x20, 0x96, 0x2c, 0x23, 0xaf, 0x6c, - 0x7d, 0x57, 0x81, 0xf3, 0xac, 0x9a, 0x62, 0xfd, 0xc7, 0x55, 0xbd, 0x02, 0x9b, 0x3c, 0xa7, 0x5a, - 0x51, 0x45, 0x37, 0x40, 0x4d, 0x95, 0xb2, 0x6a, 0x9e, 0x83, 0xf5, 0x14, 0x14, 0xeb, 0x51, 0xa2, - 0xbb, 0x5b, 0x0a, 0xbc, 0x6d, 0x58, 0x3d, 0xdb, 0xd8, 0xd9, 0xe9, 0x98, 0x3d, 0x56, 0x91, 0xf2, - 0x96, 0x06, 0xeb, 0x35, 0x37, 0x88, 0x8c, 0xcf, 0x22, 0x77, 0x14, 0x7a, 0xfe, 0x08, 0xab, 0xb0, - 0x02, 0x8b, 0xc6, 0x2f, 0xf7, 0x8c, 0xb6, 0xd5, 0xe8, 0xb4, 0xd5, 0x33, 0x5b, 0x57, 0x32, 0x38, - 0x62, 0x1d, 0x5b, 0xd6, 0x9e, 0x7a, 0x66, 0xcb, 0x81, 0x15, 0xf1, 0x02, 0x88, 0xcd, 0x8a, 0x6b, - 0x70, 0x49, 0xcc, 0x35, 0xdc, 0x13, 0xb2, 0x4d, 0xd8, 0x84, 0x8d, 0x7c, 0xb9, 0xd1, 0x53, 0x15, - 0x3a, 0x0a, 0x99, 0x12, 0x0a, 0x2f, 0x6d, 0xfd, 0xa6, 0x02, 0x2b, 0xb1, 0xb1, 0x16, 0x8d, 0x41, - 0xd7, 0xe1, 0x72, 0x6b, 0x47, 0xb7, 0xeb, 0xc6, 0x83, 0x46, 0xcd, 0xb0, 0xef, 0x37, 0xda, 0xf5, - 0xcc, 0x47, 0x2e, 0xc2, 0xb9, 0x02, 0x04, 0xfc, 0xca, 0x26, 0x6c, 0x64, 0x8b, 0x7a, 0x74, 0xa9, - 0x96, 0x68, 0xd7, 0x67, 0x4b, 0xe2, 0x75, 0x5a, 0xde, 0x7a, 0x00, 0xab, 0x96, 0xde, 0x6a, 0xee, - 0xf8, 0x41, 
0xdf, 0xd5, 0x27, 0xd1, 0xd1, 0x88, 0x6e, 0x9a, 0x3b, 0x1d, 0xb3, 0x66, 0xd8, 0x88, - 0x92, 0xa9, 0xc1, 0x59, 0x58, 0x93, 0x0b, 0x1f, 0x19, 0x74, 0xfa, 0x12, 0x58, 0x95, 0x81, 0xed, - 0x8e, 0x5a, 0xda, 0xfa, 0x15, 0x58, 0x4e, 0x65, 0xb6, 0xbf, 0x00, 0x67, 0xe5, 0xdf, 0x5d, 0x77, - 0x34, 0xf0, 0x46, 0x87, 0xea, 0x99, 0x6c, 0x81, 0x39, 0x19, 0x8d, 0x68, 0x01, 0xae, 0x67, 0xb9, - 0xa0, 0xe7, 0x06, 0xc7, 0xde, 0xc8, 0x89, 0xdc, 0x81, 0x5a, 0xda, 0x7a, 0x1d, 0x56, 0x52, 0xf9, - 0xb4, 0xe8, 0xc0, 0x35, 0x3b, 0x7c, 0x03, 0x6e, 0x19, 0xf5, 0xc6, 0x7e, 0x4b, 0x9d, 0xa3, 0x2b, - 0x79, 0xaf, 0xb1, 0xbb, 0xa7, 0xc2, 0xd6, 0xef, 0x29, 0xb0, 0xca, 0xb3, 0xe4, 0xb6, 0x76, 0x74, - 0x31, 0xd4, 0x74, 0x9a, 0xb1, 0x2c, 0x7d, 0x86, 0x65, 0x31, 0xe7, 0xae, 0x2b, 0xb0, 0xc9, 0x7f, - 0xd8, 0x7a, 0xbb, 0x6e, 0xef, 0xe9, 0x66, 0xfd, 0xa1, 0x6e, 0xd2, 0xb9, 0xf7, 0x48, 0x2d, 0xe1, - 0x82, 0x92, 0x20, 0x76, 0xaf, 0xb3, 0x5f, 0xdb, 0x53, 0xcb, 0x74, 0xfe, 0xa6, 0xe0, 0xdd, 0x46, - 0x5b, 0xad, 0xe0, 0xf2, 0xcc, 0x61, 0x23, 0x5b, 0x5a, 0x3e, 0xb7, 0xf5, 0x53, 0x05, 0x2e, 0x58, - 0xde, 0xe1, 0xc8, 0x89, 0x26, 0x81, 0xab, 0x0f, 0x0f, 0xfd, 0xc0, 0x8b, 0x8e, 0x8e, 0xad, 0x89, - 0x17, 0xb9, 0xe4, 0x15, 0x78, 0xc9, 0x6a, 0xec, 0xb6, 0xf5, 0x1e, 0x5d, 0x5e, 0x7a, 0x73, 0xb7, - 0x63, 0x36, 0x7a, 0x7b, 0x2d, 0xdb, 0xda, 0x6f, 0xe4, 0x66, 0xde, 0x8b, 0x70, 0x63, 0x3a, 0x6a, - 0xd3, 0xd8, 0xd5, 0x6b, 0x8f, 0x54, 0x65, 0x36, 0xc3, 0x6d, 0xbd, 0xa9, 0xb7, 0x6b, 0x46, 0xdd, - 0x7e, 0x70, 0x5b, 0x2d, 0x91, 0x97, 0xe0, 0xe6, 0x74, 0xd4, 0x9d, 0x46, 0xd7, 0xa2, 0x68, 0xe5, - 0xd9, 0xdf, 0xdd, 0xb3, 0x5a, 0x14, 0xab, 0xb2, 0xf5, 0xfb, 0x0a, 0x6c, 0x4e, 0x8b, 0x8a, 0x4b, - 0x6e, 0x81, 0x66, 0xb4, 0x7b, 0xa6, 0xde, 0xa8, 0xdb, 0x35, 0xd3, 0xa8, 0x1b, 0xed, 0x5e, 0x43, - 0x6f, 0x5a, 0xb6, 0xd5, 0xd9, 0xa7, 0xb3, 0x29, 0xf1, 0xc1, 0x7b, 0x01, 0xae, 0xcf, 0xc0, 0xeb, - 0x34, 0xea, 0x35, 0x55, 0x21, 0xb7, 0xe1, 0xb5, 0x19, 0x48, 0xd6, 0x23, 0xab, 0x67, 0xb4, 0xe4, - 0x12, 0xb5, 0x84, 0x1b, 0x56, 0x71, 0x40, 0x50, 
0xda, 0x3a, 0x2c, 0x99, 0x5d, 0xb1, 0x9b, 0x70, - 0x75, 0x2a, 0x16, 0xaf, 0xd6, 0x0b, 0x70, 0x7d, 0x2a, 0x0a, 0xab, 0x94, 0x5a, 0xda, 0xfa, 0x08, - 0x2e, 0x4d, 0x0f, 0x5e, 0x47, 0xcf, 0x8b, 0xf4, 0x90, 0x57, 0xa1, 0x52, 0xa7, 0x47, 0x54, 0x2a, - 0xab, 0x24, 0x9d, 0x9d, 0xa6, 0xd1, 0x68, 0x75, 0xe9, 0x46, 0xc8, 0x0f, 0x17, 0x3c, 0x3d, 0xbe, - 0xa3, 0x80, 0x9a, 0x8d, 0xf8, 0x94, 0x73, 0xe7, 0x34, 0xf7, 0xdb, 0x6d, 0x76, 0xd0, 0xad, 0xc1, - 0x52, 0xa7, 0xb7, 0x67, 0x98, 0x3c, 0x61, 0x27, 0x66, 0xe8, 0xdc, 0x6f, 0xd3, 0xa5, 0xdd, 0x31, - 0x1b, 0xdf, 0xc2, 0x13, 0x6f, 0x13, 0x36, 0xac, 0xa6, 0x5e, 0xbb, 0x6f, 0xb7, 0x3b, 0x3d, 0xbb, - 0xd1, 0xb6, 0x6b, 0x7b, 0x7a, 0xbb, 0x6d, 0x34, 0x55, 0xa0, 0x7b, 0x76, 0xe7, 0x7e, 0x4f, 0xb7, - 0x6b, 0x9d, 0xf6, 0x4e, 0x63, 0x97, 0xb3, 0xd8, 0xc0, 0x59, 0x30, 0x2d, 0x80, 0x01, 0xf9, 0x1a, - 0xbc, 0x8c, 0x34, 0xdd, 0xe6, 0xfe, 0x6e, 0xa3, 0x6d, 0x5b, 0x8f, 0xda, 0x35, 0x21, 0x76, 0xd5, - 0xf2, 0x67, 0xc5, 0xcb, 0xf0, 0xe2, 0x4c, 0xec, 0x24, 0xe3, 0xe6, 0x2d, 0xd0, 0x66, 0x62, 0xf2, - 0xf6, 0x6d, 0xfd, 0x89, 0x02, 0x97, 0x67, 0x38, 0xdd, 0x90, 0xd7, 0xe0, 0x95, 0x3d, 0x43, 0xaf, - 0x37, 0x0d, 0xcb, 0xc2, 0x1d, 0x8e, 0x0e, 0x22, 0xf3, 0x06, 0x2d, 0x3c, 0x09, 0x5e, 0x81, 0x97, - 0x66, 0xa3, 0x27, 0x32, 0xc5, 0xcb, 0xf0, 0xe2, 0x6c, 0x54, 0x2e, 0x63, 0x94, 0xc8, 0x16, 0xdc, - 0x9a, 0x8d, 0x19, 0xcb, 0x26, 0xe5, 0xad, 0xdf, 0x51, 0xe0, 0x7c, 0xb1, 0x9e, 0x9b, 0xd6, 0xad, - 0xd1, 0xb6, 0x7a, 0x7a, 0xb3, 0x69, 0x77, 0x75, 0x53, 0x6f, 0xd9, 0x46, 0xdb, 0xec, 0x34, 0x9b, - 0x45, 0x67, 0xf2, 0x8b, 0x70, 0x63, 0x3a, 0xaa, 0x55, 0x33, 0x1b, 0x5d, 0x7a, 0xec, 0x68, 0x70, - 0x6d, 0x3a, 0x96, 0xd1, 0xa8, 0x19, 0x6a, 0x69, 0xfb, 0xbd, 0x3f, 0xfe, 0xf3, 0x6b, 0x67, 0xfe, - 0xf8, 0xa7, 0xd7, 0x94, 0xff, 0xf8, 0xd3, 0x6b, 0xca, 0x9f, 0xfd, 0xf4, 0x9a, 0xf2, 0xad, 0x57, - 0x4f, 0x97, 0xac, 0x1a, 0x6f, 0xed, 0x8f, 0xe7, 0xf1, 0xfa, 0xf7, 0xe6, 0xff, 0x09, 0x00, 0x00, - 0xff, 0xff, 0x7e, 0x4e, 0x71, 0x5d, 0xf5, 0xe0, 0x01, 0x00, -} + // 35492 bytes of a 
gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0xfd, 0x79, 0x90, 0x1c, 0x59, + 0x7a, 0x18, 0x86, 0x6f, 0x55, 0xf5, 0x51, 0xfd, 0xf5, 0x55, 0xfd, 0x70, 0x35, 0x7a, 0x06, 0xd3, + 0x98, 0x9c, 0x19, 0x0c, 0x30, 0x07, 0x30, 0x03, 0xec, 0x60, 0x77, 0x8e, 0xdd, 0xd9, 0xea, 0x03, + 0x40, 0x03, 0x0d, 0xa0, 0x27, 0xab, 0x01, 0xcc, 0x72, 0x8f, 0xdc, 0xec, 0xaa, 0xd7, 0xdd, 0x39, + 0x5d, 0x95, 0x59, 0x9b, 0x99, 0xd5, 0x40, 0x6b, 0xc9, 0x9f, 0xa4, 0x9f, 0xb8, 0x3a, 0x4c, 0xaf, + 0x78, 0x88, 0x14, 0x57, 0x11, 0x22, 0x83, 0x21, 0x8b, 0x0e, 0x59, 0x0a, 0xfa, 0x20, 0x25, 0x1f, + 0x21, 0x9b, 0x22, 0x6d, 0x51, 0x92, 0x15, 0x36, 0xc5, 0xb0, 0xec, 0xb0, 0xbd, 0xc1, 0x68, 0x86, + 0x44, 0x86, 0x43, 0xc6, 0x1f, 0x0a, 0x4a, 0x76, 0x28, 0xc2, 0xcb, 0x50, 0xd8, 0xf1, 0xbe, 0xef, + 0xbd, 0xcc, 0xf7, 0x32, 0xb3, 0xba, 0xab, 0x67, 0x30, 0x14, 0xb1, 0xc1, 0x7f, 0x80, 0xae, 0xef, + 0x7d, 0xdf, 0xf7, 0x8e, 0x7c, 0xc7, 0xf7, 0xbe, 0xf7, 0x1d, 0xf0, 0x7c, 0xcc, 0xdb, 0xbc, 0x1b, + 0x84, 0xf1, 0xa5, 0x36, 0xdf, 0x72, 0x9b, 0x7b, 0x97, 0xe2, 0xbd, 0x2e, 0x8f, 0xe8, 0xdf, 0x8b, + 0xdd, 0x30, 0x88, 0x03, 0x36, 0x8c, 0x3f, 0xe6, 0x8e, 0x6f, 0x05, 0x5b, 0x01, 0x42, 0x2e, 0x89, + 0xbf, 0xa8, 0x70, 0xee, 0xb9, 0xad, 0x20, 0xd8, 0x6a, 0xf3, 0x4b, 0xf8, 0x6b, 0xa3, 0xb7, 0x79, + 0xa9, 0xd5, 0x0b, 0xdd, 0xd8, 0x0b, 0x7c, 0x59, 0x3e, 0x9f, 0x2d, 0x8f, 0xbd, 0x0e, 0x8f, 0x62, + 0xb7, 0xd3, 0xed, 0xc7, 0xe0, 0x61, 0xe8, 0x76, 0xbb, 0x3c, 0x94, 0xb5, 0xcf, 0x5d, 0x48, 0x1a, + 0xe8, 0xc6, 0xb1, 0xa0, 0x14, 0xcc, 0x2f, 0xed, 0xbe, 0xa9, 0xff, 0x94, 0xa8, 0x57, 0x13, 0xd4, + 0x66, 0xd0, 0xe9, 0x06, 0x3e, 0xf7, 0xe3, 0x4d, 0xee, 0xc6, 0xbd, 0x90, 0x47, 0x82, 0x20, 0x01, + 0x3a, 0x0a, 0x9a, 0xa3, 0x33, 0xc7, 0x20, 0xec, 0x45, 0x31, 0x6f, 0x39, 0x2d, 0xbe, 0xeb, 0x35, + 0xb9, 0x13, 0xf2, 0x6f, 0xf6, 0xbc, 0x90, 0x77, 0xb8, 0x1f, 0x4b, 0xba, 0xd7, 0x8b, 0xe9, 0x54, + 0x07, 0x32, 0x3d, 0xb1, 0x7e, 0xbe, 0x02, 0x63, 0xb7, 0x38, 0xef, 0xd6, 0xdb, 0xde, 0x2e, 0x67, + 0x2f, 
0xc0, 0xd0, 0x1d, 0xb7, 0xc3, 0x67, 0x4b, 0x67, 0x4b, 0xe7, 0xc7, 0x16, 0xa6, 0x1f, 0xef, + 0xcf, 0x8f, 0x47, 0x3c, 0xdc, 0xe5, 0xa1, 0xe3, 0xbb, 0x1d, 0x6e, 0x63, 0x21, 0x7b, 0x15, 0xc6, + 0xc4, 0xff, 0x51, 0xd7, 0x6d, 0xf2, 0xd9, 0x32, 0x62, 0x4e, 0x3e, 0xde, 0x9f, 0x1f, 0xf3, 0x15, + 0xd0, 0x4e, 0xcb, 0xd9, 0x0a, 0x8c, 0x2e, 0x3f, 0xea, 0x7a, 0x21, 0x8f, 0x66, 0x87, 0xce, 0x96, + 0xce, 0x8f, 0x5f, 0x9e, 0xbb, 0x48, 0x63, 0x7b, 0x51, 0x8d, 0xed, 0xc5, 0x75, 0x35, 0xf8, 0x0b, + 0xc7, 0xfe, 0xd1, 0xfe, 0xfc, 0x67, 0x1e, 0xef, 0xcf, 0x8f, 0x72, 0x22, 0xf9, 0x89, 0xdf, 0x99, + 0x2f, 0xd9, 0x8a, 0x9e, 0xbd, 0x07, 0x43, 0xeb, 0x7b, 0x5d, 0x3e, 0x3b, 0x76, 0xb6, 0x74, 0x7e, + 0xea, 0xf2, 0x73, 0x17, 0x69, 0x3a, 0x24, 0x8d, 0x4f, 0xff, 0x12, 0x58, 0x0b, 0xd5, 0xc7, 0xfb, + 0xf3, 0x43, 0x02, 0xc5, 0x46, 0x2a, 0xf6, 0x3a, 0x8c, 0xdc, 0x08, 0xa2, 0x78, 0x65, 0x69, 0x16, + 0xb0, 0xc9, 0x27, 0x1e, 0xef, 0xcf, 0xcf, 0x6c, 0x07, 0x51, 0xec, 0x78, 0xad, 0xd7, 0x82, 0x8e, + 0x17, 0xf3, 0x4e, 0x37, 0xde, 0xb3, 0x25, 0x92, 0xf5, 0x08, 0x26, 0x0d, 0x7e, 0x6c, 0x1c, 0x46, + 0xef, 0xdd, 0xb9, 0x75, 0xe7, 0xee, 0x83, 0x3b, 0xb5, 0xcf, 0xb0, 0x2a, 0x0c, 0xdd, 0xb9, 0xbb, + 0xb4, 0x5c, 0x2b, 0xb1, 0x51, 0xa8, 0xd4, 0xd7, 0xd6, 0x6a, 0x65, 0x36, 0x01, 0xd5, 0xa5, 0xfa, + 0x7a, 0x7d, 0xa1, 0xde, 0x58, 0xae, 0x55, 0xd8, 0x31, 0x98, 0x7e, 0xb0, 0x72, 0x67, 0xe9, 0xee, + 0x83, 0x86, 0xb3, 0xb4, 0xdc, 0xb8, 0xb5, 0x7e, 0x77, 0xad, 0x36, 0xc4, 0xa6, 0x00, 0x6e, 0xdd, + 0x5b, 0x58, 0xb6, 0xef, 0x2c, 0xaf, 0x2f, 0x37, 0x6a, 0xc3, 0xec, 0x38, 0xd4, 0x14, 0x89, 0xd3, + 0x58, 0xb6, 0xef, 0xaf, 0x2c, 0x2e, 0xd7, 0x46, 0x6e, 0x0e, 0x55, 0x2b, 0xb5, 0x21, 0x7b, 0x74, + 0x95, 0xbb, 0x11, 0x5f, 0x59, 0xb2, 0xfe, 0x83, 0x0a, 0x54, 0x6f, 0xf3, 0xd8, 0x6d, 0xb9, 0xb1, + 0xcb, 0x9e, 0x35, 0xbe, 0x0f, 0x76, 0x51, 0xfb, 0x30, 0x2f, 0xe4, 0x3f, 0xcc, 0xf0, 0xe3, 0xfd, + 0xf9, 0xd2, 0xeb, 0xfa, 0x07, 0x79, 0x17, 0xc6, 0x97, 0x78, 0xd4, 0x0c, 0xbd, 0xae, 0x98, 0xa4, + 0xb3, 0x15, 0x44, 0x3b, 0xfd, 0x78, 0x7f, 
0xfe, 0x44, 0x2b, 0x05, 0x6b, 0x03, 0xa2, 0x63, 0xb3, + 0x15, 0x18, 0x59, 0x75, 0x37, 0x78, 0x3b, 0x9a, 0x1d, 0x3e, 0x5b, 0x39, 0x3f, 0x7e, 0xf9, 0x19, + 0xf9, 0x11, 0x54, 0x03, 0x2f, 0x52, 0xe9, 0xb2, 0x1f, 0x87, 0x7b, 0x0b, 0xc7, 0x1f, 0xef, 0xcf, + 0xd7, 0xda, 0x08, 0xd0, 0x07, 0x98, 0x50, 0x58, 0x23, 0x9d, 0x18, 0x23, 0x87, 0x4e, 0x8c, 0x33, + 0xff, 0x68, 0x7f, 0xbe, 0x24, 0x3e, 0x98, 0x9c, 0x18, 0x29, 0x3f, 0x73, 0x8a, 0x5c, 0x86, 0xaa, + 0xcd, 0x77, 0xbd, 0x48, 0xf4, 0xac, 0x8a, 0x3d, 0x3b, 0xf9, 0x78, 0x7f, 0x9e, 0x85, 0x12, 0xa6, + 0x35, 0x23, 0xc1, 0x9b, 0x7b, 0x1b, 0xc6, 0xb5, 0x56, 0xb3, 0x1a, 0x54, 0x76, 0xf8, 0x1e, 0x8d, + 0xb0, 0x2d, 0xfe, 0x64, 0xc7, 0x61, 0x78, 0xd7, 0x6d, 0xf7, 0xe4, 0x90, 0xda, 0xf4, 0xe3, 0x9d, + 0xf2, 0xe7, 0x4b, 0x37, 0x87, 0xaa, 0xa3, 0xb5, 0xaa, 0x5d, 0x5e, 0x59, 0xb2, 0x7e, 0x6a, 0x08, + 0xaa, 0x76, 0x40, 0x0b, 0x9f, 0x5d, 0x80, 0xe1, 0x46, 0xec, 0xc6, 0xea, 0x33, 0x1d, 0x7b, 0xbc, + 0x3f, 0x3f, 0x2d, 0x36, 0x05, 0xae, 0xd5, 0x4f, 0x18, 0x02, 0x75, 0x6d, 0xdb, 0x8d, 0xd4, 0xe7, + 0x42, 0xd4, 0xae, 0x00, 0xe8, 0xa8, 0x88, 0xc1, 0xce, 0xc1, 0xd0, 0xed, 0xa0, 0xc5, 0xe5, 0x17, + 0x63, 0x8f, 0xf7, 0xe7, 0xa7, 0x3a, 0x41, 0x4b, 0x47, 0xc4, 0x72, 0xf6, 0x1a, 0x8c, 0x2d, 0xf6, + 0xc2, 0x90, 0xfb, 0x62, 0xae, 0x0f, 0x21, 0xf2, 0xd4, 0xe3, 0xfd, 0x79, 0x68, 0x12, 0xd0, 0xf1, + 0x5a, 0x76, 0x8a, 0x20, 0x3e, 0x43, 0x23, 0x76, 0xc3, 0x98, 0xb7, 0x66, 0x87, 0x07, 0xfa, 0x0c, + 0x62, 0x7d, 0xce, 0x44, 0x44, 0x92, 0xfd, 0x0c, 0x92, 0x13, 0xbb, 0x01, 0xe3, 0xd7, 0x43, 0xb7, + 0xc9, 0xd7, 0x78, 0xe8, 0x05, 0x2d, 0xfc, 0xbe, 0x95, 0x85, 0x73, 0x8f, 0xf7, 0xe7, 0x4f, 0x6e, + 0x09, 0xb0, 0xd3, 0x45, 0x78, 0x4a, 0xfd, 0xfd, 0xfd, 0xf9, 0xea, 0x92, 0xdc, 0xa2, 0x6d, 0x9d, + 0x94, 0x7d, 0x43, 0x7c, 0x9c, 0x28, 0xc6, 0xa1, 0xe5, 0xad, 0xd9, 0xd1, 0x43, 0x9b, 0x68, 0xc9, + 0x26, 0x9e, 0x6c, 0xbb, 0x51, 0xec, 0x84, 0x44, 0x97, 0x69, 0xa7, 0xce, 0x92, 0xdd, 0x85, 0x6a, + 0xa3, 0xb9, 0xcd, 0x5b, 0xbd, 0x36, 0xc7, 0x29, 0x33, 0x7e, 0xf9, 0x94, 0x9c, 
0xd4, 0xea, 0x7b, + 0xaa, 0xe2, 0x85, 0x39, 0xc9, 0x9b, 0x45, 0x12, 0xa2, 0xcf, 0x27, 0x85, 0xf5, 0x4e, 0xf5, 0xbb, + 0xbf, 0x30, 0xff, 0x99, 0x3f, 0xf5, 0xdb, 0x67, 0x3f, 0x63, 0xfd, 0x17, 0x65, 0xa8, 0x65, 0x99, + 0xb0, 0x4d, 0x98, 0xbc, 0xd7, 0x6d, 0xb9, 0x31, 0x5f, 0x6c, 0x7b, 0xdc, 0x8f, 0x23, 0x9c, 0x24, + 0x07, 0xf7, 0xe9, 0x45, 0x59, 0xef, 0x6c, 0x0f, 0x09, 0x9d, 0x26, 0x51, 0x66, 0x7a, 0x65, 0xb2, + 0x4d, 0xeb, 0x69, 0xe0, 0x06, 0x1e, 0xe1, 0x0c, 0x3b, 0x5a, 0x3d, 0xb4, 0xf5, 0xf7, 0xa9, 0x47, + 0xb2, 0x95, 0x13, 0xc8, 0x6f, 0x6d, 0xec, 0xe1, 0xcc, 0x1c, 0x7c, 0x02, 0x09, 0x92, 0x82, 0x09, + 0x24, 0xc0, 0xd6, 0xef, 0x95, 0x60, 0xca, 0xe6, 0x51, 0xd0, 0x0b, 0x9b, 0xfc, 0x06, 0x77, 0x5b, + 0x3c, 0x14, 0xd3, 0xff, 0x96, 0xe7, 0xb7, 0xe4, 0x9a, 0xc2, 0xe9, 0xbf, 0xe3, 0xf9, 0xfa, 0xd6, + 0x8d, 0xe5, 0xec, 0x0d, 0x18, 0x6d, 0xf4, 0x36, 0x10, 0xb5, 0x9c, 0xee, 0x00, 0x51, 0x6f, 0xc3, + 0xc9, 0xa0, 0x2b, 0x34, 0x76, 0x09, 0x46, 0xef, 0xf3, 0x30, 0x4a, 0x77, 0x43, 0x3c, 0x1a, 0x76, + 0x09, 0xa4, 0x13, 0x48, 0x2c, 0x76, 0x3d, 0xdd, 0x91, 0xe5, 0xa1, 0x36, 0x9d, 0xd9, 0x07, 0xd3, + 0xa9, 0xd2, 0x91, 0x10, 0x7d, 0xaa, 0x28, 0x2c, 0xeb, 0x5f, 0x96, 0xa1, 0xb6, 0xe4, 0xc6, 0xee, + 0x86, 0x1b, 0xc9, 0xf1, 0xbc, 0x7f, 0x45, 0xec, 0xf1, 0x5a, 0x47, 0x71, 0x8f, 0x17, 0x2d, 0xff, + 0xd8, 0xdd, 0x7b, 0x29, 0xdb, 0xbd, 0x71, 0x71, 0xc2, 0xca, 0xee, 0xa5, 0x9d, 0xfa, 0xc2, 0xe1, + 0x9d, 0xaa, 0xc9, 0x4e, 0x55, 0x55, 0xa7, 0xd2, 0xae, 0xb0, 0x2f, 0xc0, 0x50, 0xa3, 0xcb, 0x9b, + 0x72, 0x13, 0x51, 0xe7, 0x82, 0xd9, 0x39, 0x81, 0x70, 0xff, 0xca, 0xc2, 0x84, 0x64, 0x33, 0x14, + 0x75, 0x79, 0xd3, 0x46, 0x32, 0xb6, 0x0c, 0x23, 0x62, 0x43, 0xec, 0xa9, 0xc3, 0xe0, 0x4c, 0x31, + 0x03, 0x44, 0xb9, 0x7f, 0x65, 0x61, 0x4a, 0xb2, 0x18, 0x89, 0x10, 0x62, 0x4b, 0x62, 0xb1, 0x55, + 0x47, 0xcd, 0xa0, 0xcb, 0x71, 0xa3, 0x18, 0xb3, 0xe9, 0x87, 0xb6, 0x22, 0xff, 0x4d, 0x05, 0x8e, + 0x17, 0xb5, 0x49, 0x1f, 0xa4, 0x91, 0x03, 0x06, 0xe9, 0x3c, 0x54, 0x85, 0x7c, 0x20, 0xce, 0x5c, + 0xaa, 0x62, 0x61, 
0x42, 0x8c, 0xc7, 0xb6, 0x84, 0xd9, 0x49, 0x29, 0x7b, 0x21, 0x11, 0x37, 0xaa, + 0x29, 0x3f, 0x29, 0x6e, 0x28, 0x21, 0x43, 0x4c, 0x24, 0xb5, 0x3f, 0xa0, 0x54, 0x92, 0x8e, 0xb9, + 0x02, 0xa7, 0x13, 0x29, 0x94, 0x10, 0xe3, 0x0c, 0x53, 0x27, 0xce, 0x32, 0x54, 0x55, 0xb7, 0x66, + 0x27, 0x90, 0xd1, 0x4c, 0x66, 0x00, 0xef, 0x5f, 0xa1, 0x99, 0xd2, 0x92, 0xbf, 0x75, 0x36, 0x0a, + 0x87, 0x5d, 0x81, 0xea, 0x5a, 0x18, 0x3c, 0xda, 0x5b, 0x59, 0x8a, 0x66, 0x27, 0xcf, 0x56, 0xce, + 0x8f, 0x2d, 0x9c, 0x7a, 0xbc, 0x3f, 0x7f, 0xac, 0x2b, 0x60, 0x8e, 0xd7, 0xd2, 0x8f, 0xf1, 0x04, + 0x91, 0xcd, 0xc3, 0x78, 0xc8, 0xdb, 0xee, 0x9e, 0xb3, 0x15, 0x06, 0xbd, 0xee, 0xec, 0x14, 0x8e, + 0x3c, 0x20, 0xe8, 0xba, 0x80, 0xb0, 0x67, 0x60, 0x8c, 0x10, 0xbc, 0x56, 0x34, 0x3b, 0x2d, 0xd8, + 0xda, 0x55, 0x04, 0xac, 0xb4, 0xa2, 0x9b, 0x43, 0xd5, 0x52, 0xad, 0x7c, 0x73, 0xa8, 0x5a, 0xae, + 0x55, 0x48, 0xf2, 0xb9, 0x39, 0x54, 0x1d, 0xaa, 0x0d, 0xdf, 0x1c, 0xaa, 0x0e, 0xa3, 0x2c, 0x34, + 0x56, 0x83, 0x9b, 0x43, 0xd5, 0xf1, 0xda, 0x84, 0x21, 0x88, 0x60, 0xf5, 0x71, 0xd0, 0x0c, 0xda, + 0x76, 0xe5, 0x9e, 0xbd, 0x62, 0x8f, 0x2c, 0xd6, 0x17, 0x79, 0x18, 0xdb, 0x95, 0xfa, 0x83, 0x86, + 0x3d, 0xb9, 0xb4, 0xe7, 0xbb, 0x1d, 0xaf, 0x49, 0xa7, 0xba, 0x5d, 0xb9, 0xbe, 0xb8, 0x66, 0xf9, + 0x70, 0xb2, 0x78, 0x2a, 0xb1, 0x75, 0x98, 0x58, 0x77, 0xc3, 0x2d, 0x1e, 0xdf, 0xe0, 0x6e, 0x3b, + 0xde, 0xc6, 0xf6, 0x8f, 0x5f, 0x3e, 0x26, 0x87, 0x4f, 0x2f, 0x5a, 0x78, 0xe6, 0xf1, 0xfe, 0xfc, + 0xa9, 0x18, 0x21, 0xce, 0x36, 0x82, 0xb4, 0x01, 0x31, 0xb8, 0x58, 0x75, 0x98, 0x4a, 0x47, 0x7e, + 0xd5, 0x8b, 0x62, 0x76, 0x09, 0xc6, 0x14, 0x44, 0xec, 0xf9, 0x95, 0xc2, 0x6f, 0x64, 0xa7, 0x38, + 0xd6, 0x3f, 0x28, 0x03, 0xa4, 0x25, 0x4f, 0xe9, 0xb6, 0xf0, 0x39, 0x63, 0x5b, 0x38, 0x91, 0x5d, + 0xd5, 0xfd, 0x37, 0x84, 0xf7, 0x33, 0x1b, 0xc2, 0xa9, 0x2c, 0xe9, 0x21, 0x5b, 0x81, 0xb6, 0xe8, + 0x7f, 0x7e, 0x34, 0xfd, 0x18, 0x72, 0xb9, 0x9f, 0x87, 0x64, 0x02, 0xc9, 0x01, 0xc5, 0x75, 0xdc, + 0x55, 0x93, 0x2a, 0x29, 0x65, 0xa7, 0x41, 0x4c, 0x30, 
0x39, 0xa8, 0xa3, 0x8f, 0xf7, 0xe7, 0x2b, + 0xbd, 0xd0, 0xc3, 0x49, 0xc7, 0x2e, 0x81, 0x9c, 0x76, 0x72, 0x00, 0xc5, 0x5a, 0x99, 0x69, 0xba, + 0x4e, 0x93, 0x87, 0x71, 0x3a, 0xe2, 0xb3, 0x25, 0x35, 0x3b, 0x59, 0x17, 0xcc, 0xa9, 0x39, 0x3b, + 0x84, 0xd3, 0xe0, 0x7c, 0xe1, 0xa8, 0x5c, 0x34, 0x50, 0x49, 0xa2, 0x3e, 0xab, 0x0e, 0xe8, 0x16, + 0x95, 0x39, 0x39, 0xe9, 0xda, 0xac, 0x80, 0x5d, 0x01, 0xb1, 0x22, 0xe4, 0xe8, 0x83, 0xac, 0xa7, + 0xfe, 0xa0, 0xb1, 0x70, 0x42, 0x72, 0x9a, 0x74, 0x1f, 0xea, 0xe4, 0x02, 0x9b, 0xbd, 0x0b, 0x62, + 0xc9, 0xc8, 0x71, 0x67, 0x92, 0xe8, 0xfa, 0xe2, 0xda, 0x62, 0x3b, 0xe8, 0xb5, 0x1a, 0x1f, 0xac, + 0xa6, 0xc4, 0x5b, 0xcd, 0xae, 0x4e, 0x7c, 0x7d, 0x71, 0x8d, 0xbd, 0x0b, 0xc3, 0xf5, 0x3f, 0xd1, + 0x0b, 0xb9, 0x14, 0xd5, 0x26, 0x54, 0x9d, 0x02, 0xb6, 0x70, 0x4a, 0x12, 0x4e, 0xbb, 0xe2, 0xa7, + 0x2e, 0xe2, 0x62, 0xb9, 0xa8, 0x79, 0x7d, 0xb5, 0x21, 0xc5, 0x30, 0x96, 0x19, 0x96, 0xf5, 0x55, + 0xad, 0xd9, 0xb1, 0xd1, 0x6b, 0x41, 0xc5, 0x2e, 0x41, 0xb9, 0xbe, 0x84, 0x97, 0xc3, 0xf1, 0xcb, + 0x63, 0xaa, 0xda, 0xa5, 0x85, 0xe3, 0x92, 0x64, 0xc2, 0xd5, 0x97, 0x41, 0xb9, 0xbe, 0xc4, 0x16, + 0x60, 0xf8, 0xf6, 0x5e, 0xe3, 0x83, 0x55, 0xb9, 0xf5, 0xaa, 0x25, 0x8f, 0xb0, 0xbb, 0xb8, 0xcd, + 0x44, 0x69, 0x8b, 0x3b, 0x7b, 0xd1, 0x37, 0xdb, 0x7a, 0x8b, 0x11, 0x8d, 0xad, 0xc1, 0x58, 0xbd, + 0xd5, 0xf1, 0xfc, 0x7b, 0x11, 0x0f, 0x67, 0xc7, 0x91, 0xcf, 0x6c, 0xa6, 0xdd, 0x49, 0xf9, 0xc2, + 0xec, 0xe3, 0xfd, 0xf9, 0xe3, 0xae, 0xf8, 0xe9, 0xf4, 0x22, 0x1e, 0x6a, 0xdc, 0x52, 0x26, 0x6c, + 0x0d, 0xe0, 0x76, 0xe0, 0x6f, 0x05, 0xf5, 0xb8, 0xed, 0x46, 0x99, 0xcd, 0x3c, 0x2d, 0x48, 0x24, + 0xa9, 0x13, 0x1d, 0x01, 0x73, 0x5c, 0x01, 0xd4, 0x18, 0x6a, 0x3c, 0xd8, 0x35, 0x18, 0xb9, 0x1b, + 0xba, 0xcd, 0x36, 0x9f, 0x9d, 0x44, 0x6e, 0xc7, 0x25, 0x37, 0x02, 0xaa, 0x9e, 0xce, 0x4a, 0x86, + 0xb5, 0x00, 0xc1, 0xfa, 0x8d, 0x8d, 0x10, 0xe7, 0x1e, 0x00, 0xcb, 0xcf, 0xc9, 0x82, 0xfb, 0xd2, + 0xab, 0xfa, 0x7d, 0x29, 0x5d, 0xf4, 0x8b, 0x41, 0xa7, 0xe3, 0xfa, 0x2d, 0xa4, 0xbd, 0x7f, 
0x59, + 0xbb, 0x46, 0x59, 0xdf, 0x84, 0x99, 0xdc, 0x60, 0x1d, 0x72, 0xd5, 0xfd, 0x22, 0x4c, 0x2f, 0xf1, + 0x4d, 0xb7, 0xd7, 0x8e, 0x93, 0x73, 0x8f, 0x96, 0x28, 0x5e, 0x3a, 0x5b, 0x54, 0xe4, 0xa8, 0xc3, + 0xce, 0xce, 0x22, 0x5b, 0xff, 0xa4, 0x04, 0x93, 0x46, 0xff, 0xd9, 0x55, 0x18, 0xab, 0xf7, 0x5a, + 0x5e, 0x8c, 0x5f, 0x92, 0x2a, 0xa5, 0xef, 0x25, 0x80, 0xf9, 0xef, 0xa5, 0x50, 0xd9, 0xdb, 0x00, + 0x36, 0x8f, 0xc3, 0xbd, 0xc5, 0xa0, 0xe7, 0xc7, 0xd8, 0x88, 0x61, 0xba, 0x4e, 0x87, 0x02, 0xea, + 0x34, 0x05, 0x58, 0xff, 0x30, 0x29, 0x32, 0xbb, 0x05, 0xb5, 0xc6, 0x76, 0x6f, 0x73, 0xb3, 0xcd, + 0x95, 0xd8, 0x10, 0xe1, 0x56, 0x52, 0x5d, 0x98, 0x7f, 0xbc, 0x3f, 0xff, 0x4c, 0x44, 0x65, 0x8e, + 0x92, 0x2e, 0xf4, 0xef, 0x9b, 0x23, 0xb4, 0xfe, 0xba, 0x2e, 0x4b, 0xaa, 0xc3, 0xed, 0xf5, 0x64, + 0x8b, 0x2a, 0xa5, 0x92, 0x6d, 0x6e, 0x8b, 0x4a, 0x36, 0xa8, 0x0b, 0xb4, 0x5d, 0x94, 0x73, 0xdb, + 0xc5, 0xb8, 0x9c, 0x1c, 0x15, 0xf7, 0x61, 0x44, 0x9b, 0x44, 0xb2, 0x78, 0x2a, 0x1f, 0x7f, 0xf1, + 0xbc, 0x0f, 0x13, 0xb7, 0x5d, 0xdf, 0xdd, 0xe2, 0x2d, 0x31, 0x92, 0xb4, 0x1d, 0x8e, 0xd1, 0x29, + 0xdb, 0x21, 0x38, 0x8e, 0xbb, 0xde, 0x6f, 0x83, 0x80, 0xbd, 0xa9, 0x36, 0x9b, 0xe1, 0x82, 0xcd, + 0x66, 0x52, 0xd6, 0x3e, 0x8c, 0x9b, 0x8d, 0xdc, 0x62, 0xac, 0x5f, 0x05, 0xec, 0x23, 0x7b, 0x0d, + 0x46, 0x6c, 0xbe, 0x25, 0x4e, 0xbf, 0x52, 0x3a, 0x6f, 0x42, 0x84, 0xe8, 0x03, 0x43, 0x38, 0x28, + 0xa8, 0xf1, 0x56, 0xb4, 0xed, 0x6d, 0xc6, 0x72, 0x74, 0x12, 0x41, 0x4d, 0x82, 0x35, 0x41, 0x4d, + 0x42, 0x4c, 0x65, 0x03, 0xc1, 0xc4, 0x86, 0x6c, 0x2f, 0x35, 0xe4, 0xa0, 0xa9, 0x11, 0xb6, 0x97, + 0xb4, 0x9d, 0x2d, 0x34, 0xc4, 0x2c, 0x81, 0x8d, 0x53, 0xb3, 0x89, 0x13, 0x29, 0xb9, 0xd1, 0xd3, + 0xd4, 0x24, 0xa0, 0xa9, 0xc0, 0x4a, 0x51, 0x59, 0x03, 0xc6, 0x97, 0xc5, 0x35, 0xd8, 0x5b, 0x74, + 0x9b, 0xdb, 0x6a, 0x90, 0xd4, 0xb6, 0xaa, 0x95, 0xa4, 0x9b, 0x09, 0x47, 0x60, 0x53, 0x00, 0x75, + 0x15, 0x90, 0x86, 0xcb, 0xd6, 0x61, 0xbc, 0xc1, 0x9b, 0x21, 0x8f, 0x1b, 0x71, 0x10, 0xf2, 0xcc, + 0x29, 0xa1, 0x95, 0x2c, 0x3c, 
0xa7, 0x6e, 0xe2, 0x11, 0x02, 0x9d, 0x48, 0x40, 0x75, 0xae, 0x1a, + 0x32, 0x5d, 0xa9, 0x3a, 0x41, 0xb8, 0xb7, 0xb4, 0x20, 0x4f, 0x8e, 0x54, 0xcc, 0x20, 0xb0, 0x7e, + 0xa5, 0x12, 0x90, 0xd6, 0x86, 0x79, 0xa5, 0x22, 0x2c, 0xfc, 0x52, 0x4b, 0x0d, 0x14, 0x4e, 0xe5, + 0x39, 0x32, 0x9d, 0x8e, 0x32, 0x82, 0xb5, 0x2f, 0xd5, 0x8a, 0x50, 0xb4, 0x35, 0xbe, 0x94, 0xc4, + 0x62, 0x5d, 0x60, 0xea, 0xab, 0x91, 0xc4, 0xd8, 0xe6, 0x51, 0x24, 0x8f, 0x97, 0xd3, 0x99, 0x8f, + 0x9f, 0x22, 0x2c, 0xbc, 0x24, 0x99, 0x9f, 0x51, 0xd3, 0x40, 0xde, 0xa2, 0x45, 0xa1, 0x56, 0x4f, + 0x01, 0x6f, 0xb1, 0x93, 0x2c, 0x3f, 0x8a, 0x79, 0xe8, 0xbb, 0xed, 0x44, 0x4b, 0x89, 0x3b, 0x09, + 0x97, 0x50, 0xf3, 0x43, 0x6b, 0xc8, 0x6c, 0x11, 0x26, 0xeb, 0x51, 0xd4, 0xeb, 0x70, 0x3b, 0x68, + 0xf3, 0xba, 0x7d, 0x07, 0x8f, 0xa2, 0xb1, 0x85, 0x33, 0x8f, 0xf7, 0xe7, 0x4f, 0xbb, 0x58, 0xe0, + 0x84, 0x41, 0x9b, 0x3b, 0x6e, 0xa8, 0xcf, 0x6e, 0x93, 0x86, 0xdd, 0x05, 0xb8, 0xdb, 0xe5, 0x7e, + 0x83, 0xbb, 0x61, 0x73, 0x3b, 0x73, 0xf2, 0xa4, 0x05, 0x0b, 0xcf, 0xca, 0x1e, 0x1e, 0x0f, 0xba, + 0xdc, 0x8f, 0x10, 0xa6, 0xb7, 0x2a, 0xc5, 0x64, 0x0f, 0x60, 0x7a, 0xa5, 0x7e, 0x7b, 0x2d, 0x68, + 0x7b, 0xcd, 0x3d, 0x29, 0xcc, 0x4d, 0xa1, 0xee, 0xf6, 0xa4, 0xe4, 0x9a, 0x29, 0xa5, 0xed, 0xc9, + 0x73, 0x3b, 0x4e, 0x17, 0xa1, 0x8e, 0x14, 0xe9, 0xb2, 0x5c, 0xd8, 0x97, 0xc5, 0x1c, 0x8c, 0x84, + 0x7c, 0xba, 0xee, 0x6e, 0xd1, 0x9d, 0x22, 0xbd, 0x73, 0xd6, 0x1f, 0x34, 0x2e, 0x6a, 0xa5, 0x24, + 0x39, 0xcd, 0xd1, 0x44, 0x44, 0xa8, 0x13, 0xbb, 0x5b, 0x91, 0x39, 0x11, 0x13, 0x6c, 0x76, 0x13, + 0x60, 0x29, 0x68, 0xf6, 0x3a, 0xdc, 0x8f, 0x97, 0x16, 0x66, 0x6b, 0xe6, 0x5d, 0x2a, 0x29, 0x48, + 0xb7, 0xb6, 0x56, 0xd0, 0x34, 0x66, 0xa2, 0x46, 0xcd, 0x7e, 0x18, 0x4e, 0x68, 0x2b, 0x47, 0x9b, + 0x45, 0x33, 0xc8, 0xf6, 0xd9, 0xfc, 0x4a, 0xd4, 0x26, 0xd2, 0x79, 0x59, 0xc3, 0x59, 0x6d, 0x4d, + 0x16, 0xcf, 0xa5, 0xe2, 0x4a, 0xe6, 0xbe, 0x08, 0xb5, 0xec, 0x30, 0x1c, 0x51, 0xb9, 0x39, 0x59, + 0x9b, 0xd2, 0xc6, 0x7e, 0xf9, 0x91, 0x17, 0xc5, 0x91, 0xf5, 0x2d, 
0x63, 0xfd, 0x8b, 0xbd, 0xe9, + 0x16, 0xdf, 0x5b, 0x0b, 0xf9, 0xa6, 0xf7, 0x48, 0x3f, 0x36, 0x77, 0xf8, 0x9e, 0xd3, 0x45, 0xa8, + 0xbe, 0x37, 0x25, 0xa8, 0xec, 0xb3, 0x50, 0xbd, 0x75, 0xbb, 0x71, 0x8b, 0xef, 0xad, 0x2c, 0xc9, + 0x93, 0x9b, 0xc8, 0x3a, 0x91, 0x23, 0x48, 0x8d, 0x99, 0x9e, 0x60, 0x5a, 0x0b, 0xe9, 0x3e, 0x2c, + 0x6a, 0x5e, 0x6c, 0xf7, 0xa2, 0x98, 0x87, 0x2b, 0x4b, 0x7a, 0xcd, 0x4d, 0x02, 0x66, 0x76, 0xc5, + 0x04, 0xd5, 0xfa, 0x7f, 0xcb, 0xb8, 0x07, 0x8b, 0xe5, 0xb6, 0xe2, 0x47, 0xb1, 0xeb, 0x37, 0x79, + 0xc2, 0x00, 0x97, 0x9b, 0x27, 0xa1, 0x99, 0xe5, 0x96, 0x22, 0x9b, 0x55, 0x97, 0x07, 0xae, 0x9a, + 0x64, 0x05, 0xd2, 0x6a, 0xad, 0x2c, 0xe9, 0xaa, 0xf7, 0x50, 0x42, 0x33, 0x55, 0xa6, 0xc8, 0xec, + 0x1c, 0x8c, 0xae, 0xd4, 0x6f, 0xd7, 0x7b, 0xf1, 0x36, 0x9e, 0x00, 0x55, 0xba, 0xb0, 0x88, 0xb5, + 0xe2, 0xf6, 0xe2, 0x6d, 0x5b, 0x15, 0xb2, 0x4b, 0x78, 0x11, 0xf4, 0x79, 0x4c, 0x2a, 0x7a, 0x79, + 0xe4, 0x47, 0x04, 0xca, 0xdc, 0x03, 0x05, 0x88, 0xbd, 0x02, 0xc3, 0xf7, 0xd7, 0x16, 0x57, 0x96, + 0xa4, 0xde, 0x03, 0xcf, 0xc1, 0xdd, 0x6e, 0xd3, 0x6c, 0x09, 0xa1, 0xb0, 0x65, 0x98, 0x6a, 0xf0, + 0x66, 0x2f, 0xf4, 0x62, 0xba, 0xda, 0x47, 0xb3, 0xa3, 0x58, 0x07, 0xee, 0x33, 0x91, 0x2c, 0x21, + 0x3d, 0x80, 0x5e, 0x57, 0x86, 0xc8, 0xfa, 0xf5, 0x52, 0xba, 0x49, 0xb3, 0x73, 0x86, 0x9c, 0x87, + 0x7a, 0x3d, 0x21, 0xd0, 0xe8, 0x7a, 0x3d, 0x94, 0xf8, 0x6c, 0x60, 0x8b, 0xbd, 0x28, 0x0e, 0x3a, + 0xcb, 0x7e, 0xab, 0x1b, 0x78, 0x7e, 0x8c, 0x54, 0x34, 0xf8, 0xd6, 0xe3, 0xfd, 0xf9, 0xe7, 0x9a, + 0x58, 0xea, 0x70, 0x59, 0xec, 0x64, 0xb8, 0x14, 0x50, 0x7f, 0x82, 0xef, 0x61, 0xfd, 0xe3, 0xb2, + 0x71, 0xb8, 0x8a, 0xe6, 0xd9, 0xbc, 0xdb, 0xf6, 0x9a, 0xa8, 0x90, 0xc1, 0x8e, 0x26, 0xb3, 0x0a, + 0x9b, 0x17, 0xa6, 0xa5, 0x34, 0x42, 0x26, 0xef, 0x02, 0x6a, 0xf6, 0x25, 0x98, 0x10, 0x72, 0x8e, + 0xfc, 0x19, 0xcd, 0x96, 0x71, 0xb0, 0x9f, 0x45, 0x0d, 0x6d, 0xc4, 0xc3, 0x84, 0x8d, 0x21, 0x20, + 0xe9, 0x14, 0xac, 0x05, 0xb3, 0xeb, 0xa1, 0xeb, 0x47, 0x5e, 0xbc, 0xec, 0x37, 0xc3, 0x3d, 0x94, + 0xcb, 
0x96, 0x7d, 0x77, 0xa3, 0xcd, 0x5b, 0x52, 0xd2, 0x3c, 0xff, 0x78, 0x7f, 0xfe, 0xc5, 0x98, + 0x70, 0x1c, 0x9e, 0x20, 0x39, 0x9c, 0xb0, 0x34, 0xce, 0x7d, 0x39, 0x09, 0x39, 0x4e, 0x0d, 0x2b, + 0x3e, 0xd0, 0x91, 0x88, 0x82, 0x72, 0x5c, 0xf2, 0x35, 0xc4, 0x56, 0xa7, 0x37, 0x53, 0x27, 0xb0, + 0xee, 0xf6, 0xd9, 0x28, 0x71, 0xa1, 0x09, 0x90, 0x36, 0x43, 0x68, 0xa1, 0xe1, 0x6e, 0x98, 0xf9, + 0xc2, 0x29, 0xaa, 0xf5, 0x6f, 0x4a, 0xa9, 0x3c, 0xc1, 0xde, 0x83, 0x71, 0xb9, 0x04, 0x35, 0x36, + 0x78, 0x20, 0xa8, 0xf5, 0x9a, 0x61, 0xa4, 0xa3, 0xb3, 0x37, 0x60, 0xb4, 0xbe, 0xb8, 0xaa, 0x4d, + 0x36, 0xd4, 0xac, 0xb8, 0xcd, 0x76, 0x96, 0x4a, 0xa1, 0x89, 0x59, 0xb5, 0xbe, 0xda, 0x30, 0x87, + 0x19, 0x67, 0x55, 0xdc, 0x8e, 0x0a, 0xc6, 0x55, 0x43, 0xfe, 0xe4, 0x23, 0xf9, 0xbf, 0x96, 0x8a, + 0xc4, 0x16, 0xb6, 0x00, 0x93, 0x0f, 0x82, 0x70, 0x07, 0x27, 0x8c, 0x36, 0x08, 0x38, 0x95, 0x1e, + 0xaa, 0x82, 0x6c, 0x87, 0x4c, 0x12, 0xbd, 0x6d, 0xda, 0x68, 0x98, 0x6d, 0xcb, 0x70, 0x30, 0x08, + 0xc4, 0x77, 0x48, 0x38, 0x26, 0xcb, 0x0d, 0xbf, 0x43, 0xda, 0x04, 0x63, 0x4d, 0xe8, 0xe8, 0xd6, + 0x7f, 0x53, 0xd2, 0xc5, 0x13, 0x31, 0xc8, 0x4b, 0x41, 0xc7, 0xf5, 0x7c, 0xad, 0x3b, 0xf4, 0x8a, + 0x89, 0xd0, 0x6c, 0x4b, 0x34, 0x64, 0x76, 0x05, 0xaa, 0xf4, 0x2b, 0xd9, 0xbc, 0x51, 0xcb, 0x29, + 0x09, 0xcd, 0x93, 0x47, 0x21, 0xe6, 0xbe, 0x4c, 0xe5, 0xa8, 0x5f, 0xe6, 0xd7, 0x4a, 0xba, 0x64, + 0xf1, 0x71, 0x4f, 0xaf, 0xcc, 0xa9, 0x55, 0x3e, 0xca, 0xa9, 0xf5, 0x89, 0xbb, 0xf0, 0x1b, 0x25, + 0x18, 0xd7, 0xf4, 0x40, 0xa2, 0x0f, 0x6b, 0x61, 0xf0, 0x11, 0x6f, 0xc6, 0x66, 0x1f, 0xba, 0x04, + 0xcc, 0xf4, 0x21, 0x41, 0xfd, 0x24, 0x7d, 0x58, 0x84, 0xd1, 0x7a, 0xbb, 0x1d, 0x88, 0x6b, 0x02, + 0xdd, 0xa1, 0xa6, 0x94, 0xd4, 0x47, 0xd0, 0x85, 0xd3, 0xea, 0x85, 0xc9, 0x15, 0x00, 0x43, 0x34, + 0x53, 0x94, 0xd6, 0xcf, 0x96, 0x12, 0x2e, 0xb9, 0x41, 0x29, 0x1d, 0x71, 0x50, 0xc4, 0x25, 0x5e, + 0xfd, 0xbe, 0xbb, 0xcb, 0xc3, 0xd0, 0x6b, 0xa9, 0xa5, 0x81, 0x97, 0xf8, 0x84, 0x49, 0x20, 0x0b, + 0xf5, 0x4b, 0x7c, 0x96, 0xd0, 0xfa, 0x57, 
0x25, 0x79, 0xa3, 0x1d, 0xf8, 0x58, 0x34, 0x8f, 0xb0, + 0xf2, 0x51, 0x44, 0x8a, 0x2f, 0xc1, 0xb0, 0xcd, 0x5b, 0x5e, 0x24, 0x47, 0x72, 0x46, 0xbf, 0x3d, + 0x63, 0x41, 0x2a, 0xe5, 0x86, 0xe2, 0xa7, 0x2e, 0x0f, 0x60, 0xb9, 0xb8, 0x76, 0xac, 0x44, 0xd7, + 0xda, 0xfc, 0x91, 0x47, 0x7b, 0x8d, 0x14, 0x4d, 0x50, 0x1c, 0xf0, 0x22, 0x67, 0x53, 0x94, 0x48, + 0x99, 0x55, 0xdf, 0x57, 0x0c, 0x1a, 0xeb, 0xcb, 0x00, 0x69, 0x95, 0x62, 0x38, 0xe5, 0x64, 0xf7, + 0xfc, 0x2d, 0x12, 0x3c, 0xe5, 0x18, 0xe0, 0x70, 0x36, 0x93, 0x32, 0x79, 0x47, 0xd0, 0x87, 0x33, + 0x4b, 0x68, 0xfd, 0x1f, 0x15, 0x28, 0xd7, 0x71, 0xbe, 0xdd, 0xe2, 0x7b, 0xb1, 0xbb, 0x71, 0xcd, + 0x6b, 0x1b, 0x7b, 0xc5, 0x0e, 0x42, 0x9d, 0x4d, 0xcf, 0xd0, 0x77, 0x69, 0xc8, 0x62, 0xaf, 0xb8, + 0x15, 0x6e, 0xbc, 0x85, 0x84, 0xda, 0x5e, 0xb1, 0x13, 0x6e, 0xbc, 0x95, 0x25, 0x4b, 0x10, 0x99, + 0x05, 0x23, 0xb4, 0x6f, 0xc8, 0x25, 0x06, 0x8f, 0xf7, 0xe7, 0x47, 0x68, 0x7b, 0xb1, 0x65, 0x09, + 0x3b, 0x0d, 0x95, 0xc6, 0xda, 0x1d, 0xb9, 0xc1, 0xa3, 0x5e, 0x39, 0xea, 0xfa, 0xb6, 0x80, 0x89, + 0x3a, 0x57, 0x97, 0xea, 0x6b, 0xa8, 0xb6, 0x19, 0x4e, 0xeb, 0x6c, 0xb7, 0xdc, 0x6e, 0x56, 0x71, + 0x93, 0x20, 0xb2, 0x2f, 0xc0, 0xf8, 0xad, 0xa5, 0xc5, 0x1b, 0x41, 0x44, 0x9b, 0xf3, 0x48, 0x3a, + 0x8d, 0x77, 0x5a, 0x4d, 0x54, 0x21, 0xe5, 0x4e, 0x39, 0x0d, 0x9f, 0x39, 0x70, 0x52, 0xb0, 0x12, + 0x9f, 0xc4, 0x6b, 0x72, 0xa9, 0x42, 0xb8, 0x93, 0x3e, 0x73, 0xbd, 0xfc, 0x78, 0x7f, 0xfe, 0x05, + 0x6c, 0x41, 0x44, 0x28, 0x8e, 0x52, 0x3e, 0x64, 0xb8, 0xf6, 0x61, 0xc3, 0xbe, 0x0a, 0x27, 0xf2, + 0x25, 0x8d, 0xe4, 0x79, 0xec, 0xdc, 0xe3, 0xfd, 0x79, 0xab, 0x90, 0x7f, 0x64, 0xcc, 0xdf, 0x62, + 0x26, 0xd6, 0xb7, 0xcb, 0x30, 0xae, 0xe9, 0x89, 0xd9, 0x67, 0xa5, 0xad, 0x44, 0xc9, 0xb8, 0x6e, + 0x6a, 0x18, 0xa2, 0x94, 0x94, 0x8a, 0x9d, 0xa0, 0xc5, 0xa5, 0xe5, 0x44, 0xaa, 0x2d, 0x2b, 0x0f, + 0xa2, 0x2d, 0x7b, 0x1b, 0x80, 0xa6, 0x30, 0x8e, 0x93, 0x26, 0x3d, 0x6a, 0x26, 0x53, 0xfa, 0xb4, + 0x4a, 0x91, 0xd9, 0x7d, 0x38, 0xb6, 0x1e, 0xf6, 0xa2, 0xb8, 0xb1, 0x17, 0xc5, 
0xbc, 0x23, 0xb8, + 0xad, 0x05, 0x41, 0x5b, 0x2e, 0x9f, 0x17, 0xc5, 0xad, 0x0f, 0xed, 0xbc, 0x9c, 0x08, 0xcb, 0xb1, + 0x01, 0x4e, 0x37, 0x08, 0x74, 0x1d, 0x5a, 0x11, 0x03, 0xcb, 0x86, 0x09, 0x5d, 0x03, 0x27, 0xce, + 0x7d, 0xf9, 0xae, 0x2c, 0x9f, 0x7a, 0xb4, 0x73, 0x5f, 0xb6, 0x32, 0xff, 0xce, 0x6d, 0x92, 0x58, + 0x9f, 0xd5, 0x15, 0xd2, 0x83, 0xee, 0x4b, 0xd6, 0xff, 0xbf, 0x94, 0x6e, 0xf2, 0xf7, 0xdf, 0x64, + 0xef, 0xc2, 0x08, 0xbd, 0xe3, 0x4b, 0x73, 0x87, 0x13, 0x89, 0x06, 0x45, 0x7f, 0xe4, 0xa7, 0x97, + 0xa0, 0xdf, 0x22, 0x5b, 0x9f, 0xcf, 0xd8, 0x92, 0x24, 0x79, 0x44, 0x32, 0xf5, 0xc9, 0x8a, 0x3b, + 0x3e, 0x97, 0xbc, 0x59, 0xf4, 0x88, 0x64, 0xfd, 0xc6, 0x30, 0x4c, 0x99, 0x68, 0xfa, 0x63, 0x7f, + 0x69, 0xa0, 0xc7, 0xfe, 0x2f, 0x41, 0x55, 0xce, 0x37, 0x25, 0x80, 0xbf, 0x88, 0x4f, 0x6b, 0x12, + 0x66, 0x18, 0xb1, 0x00, 0x7d, 0x0e, 0x3b, 0x68, 0x73, 0x3b, 0xa1, 0x62, 0x97, 0xb5, 0x47, 0xe3, + 0x4a, 0x2a, 0x42, 0x2a, 0xb5, 0xae, 0xbe, 0x9c, 0x93, 0xe7, 0xe3, 0xd7, 0x61, 0x44, 0x5c, 0xe7, + 0x12, 0x7d, 0x1f, 0xb6, 0x52, 0xdc, 0xf4, 0x32, 0xd6, 0x6a, 0x84, 0xc4, 0x1e, 0x40, 0x75, 0xd5, + 0x8d, 0xe2, 0x06, 0xe7, 0xfe, 0x00, 0x66, 0x3c, 0xf3, 0x72, 0xa8, 0x8e, 0xa1, 0x8d, 0x4c, 0xc4, + 0xb9, 0x9f, 0xb1, 0xc3, 0x48, 0x98, 0xb1, 0xaf, 0x01, 0x2c, 0x06, 0x7e, 0x1c, 0x06, 0xed, 0xd5, + 0x60, 0x6b, 0x76, 0x04, 0x15, 0x2d, 0xcf, 0x65, 0x3e, 0x40, 0x8a, 0x40, 0xba, 0x96, 0x44, 0x9b, + 0xd8, 0xa4, 0x02, 0xa7, 0x1d, 0x6c, 0xe9, 0xeb, 0x20, 0xc5, 0x67, 0xd7, 0xa0, 0xa6, 0xb4, 0x58, + 0xf7, 0xba, 0x5b, 0x21, 0x4e, 0x90, 0xd1, 0x54, 0x2e, 0xe4, 0x8f, 0x62, 0xa7, 0x27, 0xe1, 0xc6, + 0xb9, 0x99, 0xa1, 0x61, 0x5f, 0x85, 0x53, 0x59, 0x98, 0xfa, 0xca, 0xd5, 0xf4, 0x0a, 0xa6, 0xb3, + 0x2b, 0x98, 0xf7, 0xfd, 0x58, 0xb0, 0xeb, 0x30, 0x2d, 0x06, 0xe4, 0x36, 0x77, 0xa3, 0x1e, 0xd9, + 0x5a, 0x4a, 0x3d, 0xa0, 0xb2, 0x52, 0x90, 0xab, 0xb0, 0x1d, 0x34, 0x77, 0x34, 0x24, 0x3b, 0x4b, + 0xc5, 0xae, 0xc2, 0x38, 0x19, 0xcf, 0x84, 0x2b, 0xfe, 0x66, 0x20, 0xdf, 0x9d, 0xd4, 0x73, 0x8c, + 0x2c, 0xb9, 0x7f, 
0x59, 0x94, 0xd9, 0x3a, 0xa2, 0xb5, 0x5f, 0x86, 0x93, 0xc5, 0x75, 0xb0, 0x3f, + 0x09, 0x27, 0xe4, 0x78, 0xb6, 0x79, 0xa8, 0xe1, 0x0c, 0x60, 0x56, 0xf4, 0xba, 0xfc, 0x4e, 0xcf, + 0x37, 0x13, 0x06, 0xc9, 0x86, 0x23, 0x58, 0x64, 0x26, 0x45, 0x71, 0x3d, 0xec, 0x1b, 0x30, 0xae, + 0x57, 0x5b, 0x1e, 0xdc, 0x42, 0xeb, 0x80, 0xba, 0x74, 0x96, 0xcc, 0x85, 0x69, 0x9b, 0x7f, 0xb3, + 0xc7, 0xa3, 0x58, 0xd9, 0x88, 0x49, 0x89, 0xe5, 0x74, 0xae, 0x16, 0x85, 0x90, 0x28, 0x29, 0x6b, + 0x21, 0x51, 0x3a, 0xca, 0x02, 0xf8, 0xbb, 0x82, 0x7d, 0x96, 0x9f, 0xf5, 0xfd, 0x32, 0x9c, 0xea, + 0x33, 0x9d, 0xc5, 0x8e, 0xa7, 0x49, 0x86, 0xb8, 0xe3, 0x65, 0x04, 0x42, 0x32, 0x30, 0x3d, 0x0b, + 0x65, 0x29, 0x81, 0x0d, 0x2d, 0xd4, 0x1e, 0xef, 0xcf, 0x4f, 0x18, 0x2b, 0xb5, 0xbc, 0xb2, 0xc4, + 0x6e, 0xc2, 0x90, 0x18, 0x86, 0x01, 0xec, 0xa4, 0x94, 0x8a, 0x7a, 0x2a, 0xf6, 0xf4, 0x0d, 0x02, + 0xc7, 0x06, 0x79, 0xb0, 0xcf, 0x42, 0x65, 0x7d, 0x7d, 0x15, 0x77, 0x87, 0x0a, 0xce, 0xee, 0xc9, + 0x38, 0x6e, 0x1b, 0x9b, 0xd1, 0xa4, 0xa0, 0x4d, 0x46, 0xc4, 0x16, 0xe8, 0xec, 0xc3, 0x8c, 0xfd, + 0xe6, 0x2b, 0x07, 0x2f, 0xe5, 0xc1, 0xcd, 0x39, 0x3f, 0x81, 0x15, 0xa5, 0xf5, 0x9d, 0x92, 0x32, + 0x55, 0x93, 0x93, 0x9f, 0x9d, 0x55, 0xeb, 0x04, 0x15, 0x19, 0x92, 0x8b, 0x0e, 0x62, 0xcf, 0x01, + 0xd0, 0xcf, 0x7b, 0xf7, 0xe4, 0xa0, 0x4f, 0xd8, 0x1a, 0x84, 0xbd, 0x93, 0xb0, 0x94, 0x8a, 0xe7, + 0x0a, 0x4a, 0x02, 0x99, 0xb5, 0x46, 0x65, 0xb6, 0x89, 0x6a, 0xfd, 0x6a, 0x39, 0x3d, 0x35, 0xae, + 0x79, 0xed, 0x98, 0x87, 0x6c, 0x8e, 0x0e, 0x81, 0xf4, 0xb2, 0x66, 0x27, 0xbf, 0xd9, 0x6c, 0x7a, + 0xa2, 0x50, 0xd7, 0x92, 0xa3, 0xe3, 0x15, 0xed, 0xe8, 0xa8, 0xe0, 0xd1, 0x31, 0xd5, 0xf7, 0x90, + 0x78, 0xa5, 0x60, 0x27, 0xc4, 0xad, 0xbf, 0x60, 0xb7, 0x7b, 0x11, 0x26, 0xef, 0x04, 0xcb, 0x8f, + 0xe2, 0x04, 0x51, 0x6c, 0xf9, 0x55, 0xdb, 0x04, 0x0a, 0x8e, 0x77, 0xdb, 0x2d, 0x1e, 0xae, 0x6f, + 0xbb, 0xbe, 0x61, 0xdb, 0x64, 0xe7, 0xe0, 0x02, 0xf7, 0x0e, 0x7f, 0x68, 0xe2, 0x92, 0x09, 0x55, + 0x0e, 0x9e, 0xfd, 0x38, 0xd5, 0xdc, 0xc7, 0xb1, 0x7e, 
0xb6, 0xac, 0x86, 0xeb, 0xfe, 0xe5, 0xa7, + 0xd4, 0x6e, 0xe5, 0x2d, 0xc3, 0x6e, 0xe5, 0x58, 0xf2, 0xbc, 0x95, 0x98, 0x8c, 0x5d, 0x2e, 0xb4, + 0x5a, 0x49, 0xec, 0xcf, 0x46, 0x8a, 0xed, 0xcf, 0xfe, 0xbb, 0x51, 0x98, 0xd0, 0x99, 0x88, 0xd1, + 0xa9, 0xb7, 0x5a, 0xa1, 0x3e, 0x3a, 0x6e, 0xab, 0x15, 0xda, 0x08, 0x35, 0xcc, 0xcd, 0x2a, 0x07, + 0x9a, 0x9b, 0x7d, 0x1d, 0xc6, 0x16, 0x3b, 0x2d, 0xc3, 0xac, 0xc4, 0x2a, 0x68, 0xf4, 0xc5, 0x04, + 0x89, 0xd6, 0x74, 0xf2, 0x96, 0xd3, 0xec, 0xb4, 0xf2, 0xc6, 0x24, 0x29, 0x4b, 0xc3, 0x52, 0x6d, + 0xf8, 0x93, 0x58, 0xaa, 0x5d, 0x85, 0xb1, 0x7b, 0x11, 0x5f, 0xef, 0xf9, 0x3e, 0x6f, 0xe3, 0x28, + 0x55, 0x49, 0x67, 0xd0, 0x8b, 0xb8, 0x13, 0x23, 0x54, 0x6f, 0x40, 0x82, 0xaa, 0x7f, 0xf6, 0xd1, + 0x03, 0x3e, 0xfb, 0x15, 0xa8, 0xae, 0x71, 0x1e, 0xe2, 0x98, 0x8e, 0xa7, 0x77, 0xa7, 0x2e, 0xe7, + 0xa1, 0x23, 0x06, 0xd6, 0xb0, 0x60, 0x93, 0x88, 0x86, 0xd9, 0xdb, 0xc4, 0xa0, 0x66, 0x6f, 0xcf, + 0xc3, 0x44, 0xb7, 0xb7, 0xd1, 0xf6, 0x9a, 0xc8, 0x57, 0xda, 0xcb, 0xd9, 0xe3, 0x04, 0x13, 0x6c, + 0x23, 0xf6, 0x21, 0x4c, 0xa2, 0xae, 0x24, 0x99, 0x88, 0x53, 0xc6, 0x81, 0x6f, 0x94, 0x91, 0x4c, + 0xde, 0x14, 0x20, 0xa7, 0xc0, 0x66, 0xd4, 0x64, 0xc4, 0x6e, 0xc2, 0xe8, 0x96, 0x17, 0x3b, 0xdb, + 0xbd, 0x8d, 0xd9, 0x69, 0xc3, 0xe0, 0xf2, 0xba, 0x17, 0xdf, 0xe8, 0x6d, 0xd0, 0x27, 0x4f, 0x58, + 0xe3, 0xce, 0xbd, 0xe5, 0xc5, 0xdb, 0x3d, 0x5d, 0x1d, 0x32, 0xb2, 0x85, 0xb8, 0x59, 0xfb, 0xbd, + 0xda, 0xc1, 0xf6, 0x7b, 0x33, 0xa6, 0xfd, 0x1e, 0x73, 0x80, 0xe5, 0x5d, 0x58, 0x66, 0x19, 0x36, + 0xea, 0x8d, 0x8b, 0xca, 0x17, 0xe5, 0x62, 0xce, 0xf7, 0xe5, 0xe2, 0xee, 0x9b, 0x17, 0x17, 0x15, + 0xf0, 0x9a, 0x04, 0xda, 0x33, 0xcd, 0x2c, 0x68, 0xae, 0x01, 0x53, 0xe6, 0xa4, 0x7d, 0x02, 0x16, + 0x27, 0x89, 0xbd, 0x61, 0xb5, 0x36, 0x76, 0x73, 0xa8, 0x0a, 0xb5, 0x71, 0xb2, 0x34, 0xb4, 0x61, + 0x2d, 0xf9, 0x7c, 0x36, 0xbb, 0xd5, 0xdb, 0xe0, 0xa1, 0xcf, 0x63, 0x1e, 0x49, 0xc5, 0x42, 0x64, + 0x0f, 0xd5, 0xbb, 0xdd, 0xc8, 0xfa, 0xdb, 0x65, 0x18, 0xad, 0x3f, 0x68, 0xe0, 0x51, 0xf5, 
0x9a, + 0xfe, 0x36, 0x5f, 0x4a, 0xad, 0xed, 0xd3, 0xb7, 0x79, 0xfd, 0x45, 0xfe, 0x52, 0x81, 0xe6, 0x0b, + 0xbd, 0x6c, 0x34, 0xcd, 0x97, 0xa1, 0xef, 0x4a, 0xcd, 0x14, 0x2a, 0x03, 0x98, 0x29, 0x24, 0x6f, + 0x39, 0x43, 0x87, 0xbf, 0xe5, 0xbc, 0x0b, 0xe3, 0x2b, 0x7e, 0xcc, 0xb7, 0xc2, 0x74, 0x51, 0x27, + 0x5a, 0xb8, 0x04, 0xac, 0xab, 0x0b, 0x34, 0x6c, 0xb1, 0x62, 0xe8, 0xfd, 0x28, 0x79, 0x37, 0xc2, + 0x15, 0x43, 0xcf, 0x4c, 0x19, 0x15, 0xaa, 0x42, 0xb4, 0x96, 0x32, 0xcb, 0x41, 0x59, 0xa7, 0x95, + 0x4c, 0x45, 0x1e, 0x0d, 0xec, 0xc2, 0x4c, 0xb1, 0x75, 0x9a, 0xf5, 0x97, 0x4a, 0x70, 0xbc, 0x68, + 0x96, 0xb3, 0x2f, 0xc2, 0x44, 0x10, 0x6e, 0xb9, 0xbe, 0xf7, 0x27, 0xa8, 0x47, 0x9a, 0x9e, 0x5f, + 0x87, 0xeb, 0x8a, 0x3c, 0x1d, 0x2e, 0x06, 0x44, 0xeb, 0xb9, 0xa9, 0x96, 0x2c, 0x1c, 0x10, 0x0d, + 0x6c, 0xfd, 0x5c, 0x19, 0xc6, 0xeb, 0xdd, 0xee, 0x53, 0x6e, 0xc4, 0xfd, 0x79, 0xe3, 0xd4, 0x53, + 0x6a, 0x93, 0xa4, 0x5f, 0x07, 0x98, 0x6b, 0x1e, 0x76, 0xf0, 0xfd, 0x8f, 0x15, 0x98, 0xce, 0xf0, + 0xd1, 0xfb, 0x54, 0x1a, 0xd0, 0xe6, 0xba, 0x3c, 0xa0, 0xcd, 0x75, 0x65, 0x30, 0x9b, 0xeb, 0xa1, + 0x4f, 0x72, 0x92, 0xbd, 0x0c, 0x95, 0x7a, 0xb7, 0x9b, 0x35, 0x3d, 0xea, 0x76, 0xef, 0x5f, 0x21, + 0x7d, 0x9e, 0xdb, 0xed, 0xda, 0x02, 0xc3, 0x38, 0x5e, 0x46, 0x3e, 0xa6, 0x55, 0xf5, 0xe8, 0xc1, + 0xbb, 0x72, 0x75, 0xa0, 0x5d, 0x79, 0xec, 0x89, 0xed, 0xca, 0xd6, 0xeb, 0x30, 0x86, 0x5d, 0x45, + 0xd3, 0xe6, 0xb3, 0x80, 0xfb, 0xa2, 0xb4, 0x6a, 0x36, 0x86, 0x42, 0xee, 0x98, 0x7f, 0x50, 0x82, + 0x61, 0xfc, 0xfd, 0x94, 0x2e, 0x8c, 0xcb, 0xc6, 0xc2, 0xa8, 0x69, 0x0b, 0xa3, 0xef, 0x92, 0xd0, + 0x26, 0xff, 0xdf, 0xac, 0x00, 0x2c, 0xde, 0xb5, 0x1b, 0xa4, 0x95, 0x66, 0xd7, 0x60, 0xda, 0x6d, + 0xb7, 0x83, 0x87, 0xbc, 0xe5, 0x04, 0xa1, 0xb7, 0xe5, 0xf9, 0x34, 0x72, 0xca, 0x5c, 0xc7, 0x2c, + 0xd2, 0x9f, 0xd1, 0x65, 0xd1, 0x5d, 0x2a, 0xd1, 0xf9, 0x74, 0x78, 0xbc, 0x1d, 0xb4, 0x94, 0x82, + 0xca, 0xe0, 0x23, 0x8b, 0x0a, 0xf8, 0xdc, 0xa6, 0x12, 0x9d, 0xcf, 0x36, 0x2a, 0xdc, 0xd4, 0x6d, + 0xc5, 0xe0, 0x23, 0x8b, 0x0a, 
0xf8, 0x90, 0x96, 0x2e, 0x62, 0xab, 0x80, 0x8f, 0x2e, 0x0f, 0x9d, + 0x66, 0xc8, 0x5b, 0xdc, 0x8f, 0x3d, 0xb7, 0x1d, 0x49, 0x95, 0x26, 0xea, 0xee, 0x73, 0x85, 0xba, + 0x4a, 0x07, 0x0b, 0x17, 0xd3, 0x32, 0x76, 0x11, 0x46, 0x3b, 0xee, 0x23, 0xc7, 0xdd, 0x22, 0xc3, + 0xb5, 0x49, 0x52, 0x81, 0x49, 0x90, 0x7e, 0xf6, 0x75, 0xdc, 0x47, 0xf5, 0x2d, 0x2e, 0x7a, 0xc1, + 0x1f, 0x75, 0x83, 0x48, 0xeb, 0xc5, 0x48, 0xda, 0x8b, 0x4c, 0x91, 0xde, 0x0b, 0x59, 0x24, 0x7b, + 0x61, 0xfd, 0x52, 0x09, 0x9e, 0x59, 0xc1, 0x56, 0xc4, 0x7b, 0x8b, 0xdc, 0x8f, 0x79, 0xb8, 0xc6, + 0xc3, 0x8e, 0x87, 0x86, 0x34, 0x0d, 0x1e, 0xb3, 0x17, 0xa0, 0x52, 0xb7, 0xef, 0xc8, 0xf9, 0x4b, + 0x87, 0x94, 0x61, 0x54, 0x25, 0x4a, 0x13, 0x2d, 0x69, 0xf9, 0x90, 0xd7, 0x9b, 0x3a, 0x4c, 0xd4, + 0xa3, 0xc8, 0xdb, 0xf2, 0x3b, 0xe4, 0xae, 0x57, 0x31, 0xcc, 0xb6, 0x24, 0x3c, 0xf7, 0xaa, 0xaa, + 0x93, 0x58, 0xff, 0x79, 0x09, 0x66, 0xea, 0xdd, 0xae, 0xd9, 0x64, 0xd3, 0x64, 0xb0, 0x34, 0xb8, + 0xc9, 0xa0, 0x07, 0x53, 0x46, 0x77, 0x69, 0x4a, 0xa5, 0x97, 0x89, 0x03, 0x46, 0x86, 0x9a, 0xdd, + 0x4d, 0x40, 0x4e, 0x64, 0x5a, 0x9c, 0x64, 0x18, 0x5b, 0xff, 0x49, 0x15, 0xf7, 0x10, 0x79, 0x18, + 0x48, 0x3b, 0xfb, 0x52, 0x81, 0x9d, 0xfd, 0xdb, 0xa0, 0x89, 0x65, 0xfa, 0xb9, 0xac, 0xc9, 0xdf, + 0xba, 0x7e, 0x31, 0x45, 0x66, 0x3b, 0x59, 0x8b, 0xfb, 0x0a, 0xf6, 0xe6, 0x85, 0xec, 0x02, 0x7e, + 0x22, 0xc6, 0xf6, 0x37, 0x80, 0xad, 0xf8, 0x68, 0x05, 0xc3, 0x1b, 0x3b, 0x5e, 0xf7, 0x3e, 0x0f, + 0xbd, 0xcd, 0x3d, 0xb9, 0x00, 0x70, 0xf0, 0x3d, 0x59, 0xea, 0x44, 0x3b, 0x5e, 0xd7, 0xd9, 0xc5, + 0x72, 0xbb, 0x80, 0x86, 0xbd, 0x0f, 0xa3, 0x36, 0x7f, 0x18, 0x7a, 0xb1, 0x32, 0xda, 0x9c, 0x4a, + 0xd4, 0xe5, 0x08, 0xa5, 0xb5, 0x10, 0xd2, 0x0f, 0x7d, 0x57, 0x94, 0xe5, 0xec, 0x32, 0x49, 0x56, + 0x64, 0x9c, 0x39, 0x99, 0xf6, 0xb6, 0xfe, 0xa0, 0xd1, 0x4f, 0xb0, 0x62, 0x17, 0x60, 0x18, 0xc5, + 0x33, 0x79, 0xbf, 0x42, 0x57, 0x54, 0xbc, 0x8f, 0xe8, 0xb2, 0x23, 0x62, 0xa0, 0xf6, 0x45, 0x99, + 0x99, 0xa8, 0xc3, 0x47, 0x83, 0x64, 0x65, 0xcb, 0xb1, 0x23, 0xc9, 
0x96, 0xab, 0x50, 0xb3, 0xc9, + 0xab, 0xbd, 0x55, 0xef, 0xa2, 0xe9, 0x41, 0x34, 0x0b, 0xb8, 0x92, 0xcf, 0x3e, 0xde, 0x9f, 0x7f, + 0x56, 0x7a, 0xbc, 0xb7, 0x1c, 0xb7, 0x4b, 0x16, 0x0b, 0xc6, 0x36, 0x92, 0xa5, 0x64, 0x6f, 0xc3, + 0x90, 0xd8, 0x7a, 0xa5, 0x6d, 0xbe, 0x7a, 0xe3, 0x4c, 0x77, 0x63, 0x5a, 0x9c, 0xcd, 0xc0, 0xd8, + 0x13, 0x90, 0x84, 0x39, 0x30, 0x65, 0x4e, 0x77, 0x69, 0x13, 0x39, 0x9b, 0x8e, 0xa7, 0x59, 0x2e, + 0x1f, 0x3e, 0x25, 0xcc, 0x69, 0x22, 0x50, 0x5f, 0x01, 0x99, 0x45, 0xba, 0x0c, 0xd5, 0xf5, 0xc5, + 0xb5, 0xb5, 0x20, 0x8c, 0xe9, 0xfa, 0x98, 0x9e, 0x2c, 0x02, 0x66, 0xbb, 0xfe, 0x16, 0x27, 0x51, + 0x21, 0x6e, 0x76, 0x1d, 0x71, 0x60, 0x1b, 0xa2, 0x82, 0x22, 0x65, 0x5f, 0x83, 0x13, 0xf7, 0x22, + 0x5e, 0xf7, 0xf7, 0x50, 0x78, 0xd0, 0x96, 0xca, 0x14, 0x4e, 0x3d, 0x7c, 0xba, 0x13, 0xd7, 0x6b, + 0xd7, 0xdf, 0x73, 0x48, 0xe8, 0x28, 0x5e, 0x38, 0xc5, 0x5c, 0xd8, 0x25, 0xa8, 0xdc, 0x5e, 0x5c, + 0x93, 0xf7, 0x4c, 0x65, 0xb2, 0x7c, 0x7b, 0x71, 0x8d, 0x26, 0x52, 0xc7, 0x74, 0x01, 0xb9, 0xbd, + 0xb8, 0xf6, 0xe9, 0xf9, 0x09, 0x7c, 0x15, 0x5b, 0xc2, 0x66, 0x61, 0xb4, 0x49, 0x38, 0x92, 0x9b, + 0xfa, 0xc9, 0x18, 0x0c, 0xb9, 0xe1, 0x96, 0x3c, 0x06, 0x6d, 0xfc, 0x9b, 0xbd, 0x0c, 0xb5, 0xb0, + 0xe7, 0x3b, 0x6e, 0x44, 0x8f, 0xa0, 0xbd, 0x88, 0x87, 0xb4, 0xcd, 0xda, 0x93, 0x61, 0xcf, 0xaf, + 0x47, 0x42, 0x2c, 0x14, 0x53, 0xd7, 0xfa, 0x3b, 0x25, 0xd0, 0xd6, 0x4f, 0xd5, 0xe6, 0x2d, 0x2f, + 0xe4, 0xcd, 0x58, 0x9e, 0xcd, 0xd2, 0x8f, 0x9c, 0x60, 0x19, 0xd3, 0x6e, 0x84, 0xb1, 0x2f, 0xc2, + 0xa8, 0x3c, 0x43, 0xe4, 0x9e, 0xa9, 0xd6, 0x9d, 0x7c, 0xdb, 0xa2, 0x80, 0x03, 0xb9, 0xf3, 0x47, + 0x11, 0x89, 0x2d, 0xfb, 0xe6, 0x83, 0xf5, 0xc5, 0xb6, 0xeb, 0x75, 0x22, 0x79, 0x10, 0xe0, 0xae, + 0xf1, 0xd1, 0xc3, 0xd8, 0x69, 0x22, 0x54, 0xdf, 0xb2, 0x13, 0x54, 0xeb, 0xba, 0x7a, 0x5a, 0x3b, + 0xc4, 0x65, 0x62, 0x1e, 0x86, 0xef, 0xa7, 0x0a, 0xd8, 0x85, 0xb1, 0xc7, 0xfb, 0xf3, 0x34, 0xb6, + 0x36, 0xc1, 0x2d, 0x0e, 0x63, 0xc9, 0xbc, 0x13, 0xbc, 0xc4, 0x0f, 0xe4, 0x35, 0x49, 0xbc, 0xc4, + 0x0c, 
0xb4, 0x11, 0x2a, 0xe4, 0xb4, 0x65, 0xbf, 0x85, 0x08, 0x65, 0x44, 0xc0, 0xe1, 0xe1, 0x7e, + 0x0b, 0xa7, 0xa9, 0xde, 0x3b, 0x89, 0xa6, 0x49, 0x43, 0x3f, 0x56, 0x82, 0x29, 0xf3, 0x1b, 0xb3, + 0x8b, 0x30, 0x22, 0x5d, 0xc5, 0x4b, 0xa8, 0xcf, 0x16, 0xdc, 0x46, 0xc8, 0x49, 0xdc, 0x70, 0x0d, + 0x97, 0x58, 0x42, 0xe8, 0x93, 0x1c, 0xa4, 0xc4, 0x83, 0x42, 0x9f, 0x9c, 0x05, 0xb6, 0x2a, 0x63, + 0x96, 0xb8, 0x3c, 0x47, 0xbd, 0x76, 0xac, 0xbf, 0xc3, 0x87, 0x08, 0xb1, 0x65, 0x89, 0xf5, 0x6b, + 0x25, 0x18, 0xa1, 0x8d, 0x31, 0x63, 0x7f, 0x5d, 0x3a, 0x8a, 0xfd, 0xf5, 0xb7, 0xe0, 0xb8, 0x1d, + 0xb4, 0x79, 0x54, 0xf7, 0xf7, 0x1e, 0x6e, 0xf3, 0x90, 0xaf, 0x85, 0xc1, 0xa6, 0x32, 0x19, 0x18, + 0xbf, 0xfc, 0xbc, 0xb1, 0x01, 0x17, 0x21, 0xd2, 0x9b, 0x6f, 0x28, 0x4a, 0xc4, 0x32, 0xc5, 0x22, + 0xb1, 0x56, 0x33, 0x26, 0x06, 0x85, 0x95, 0x58, 0x7f, 0xab, 0x04, 0x73, 0xfd, 0x59, 0xe3, 0xf1, + 0x49, 0x7f, 0xa6, 0x72, 0x0b, 0x1d, 0x9f, 0x04, 0xcd, 0x18, 0x85, 0x6b, 0xc8, 0xcc, 0x86, 0x13, + 0xf5, 0x66, 0x93, 0x77, 0x63, 0xc1, 0x58, 0x1a, 0x13, 0x27, 0x72, 0x4d, 0x95, 0x54, 0x56, 0x2e, + 0x22, 0x90, 0x79, 0xb9, 0x32, 0xb0, 0xc6, 0x59, 0x57, 0x4c, 0x6a, 0xfd, 0x5e, 0x09, 0xa0, 0xd1, + 0xb8, 0x71, 0x8b, 0xef, 0xad, 0xb9, 0x1e, 0x0a, 0x2a, 0xb4, 0xd7, 0xdc, 0x92, 0x9b, 0xc3, 0x84, + 0xb4, 0x21, 0xa2, 0x2d, 0x6a, 0x87, 0xef, 0x19, 0x36, 0x44, 0x0a, 0x95, 0x7a, 0xe5, 0xed, 0xba, + 0x31, 0x17, 0x84, 0xf8, 0x00, 0xa0, 0x7a, 0x85, 0xd0, 0x0c, 0xa5, 0x86, 0xcc, 0xbe, 0x06, 0x53, + 0xe9, 0xaf, 0xc4, 0x12, 0x6a, 0x2a, 0xd9, 0x80, 0xcc, 0xc2, 0x85, 0xe7, 0x1e, 0xef, 0xcf, 0xcf, + 0x69, 0x5c, 0xb3, 0xe6, 0x40, 0x19, 0x66, 0xef, 0x0c, 0xfd, 0x8b, 0x5f, 0x98, 0x2f, 0xa1, 0xb9, + 0xda, 0xfa, 0x6a, 0x43, 0x75, 0xf3, 0x1c, 0x0c, 0x25, 0x6e, 0x38, 0x13, 0xf2, 0xcc, 0x31, 0xad, + 0x0a, 0xb0, 0x5c, 0x48, 0x97, 0x69, 0x7f, 0x70, 0x83, 0x35, 0xfb, 0x21, 0x4a, 0xd9, 0x75, 0x18, + 0x1d, 0xa8, 0xe5, 0xb8, 0x28, 0x0b, 0x5a, 0xac, 0xa8, 0x85, 0xd0, 0xb5, 0x68, 0xd3, 0x23, 0xd2, + 0x04, 0x09, 0x5d, 0xcd, 0xb0, 0x6d, 0x0b, 
0x98, 0xb5, 0x5f, 0x02, 0xb8, 0xf9, 0x60, 0xfd, 0x07, + 0xf6, 0x33, 0x59, 0x3f, 0x59, 0x86, 0x99, 0xd4, 0x94, 0x55, 0xf5, 0xf3, 0x73, 0x00, 0x69, 0x97, + 0x0e, 0xef, 0x68, 0x37, 0xe9, 0xe8, 0x3b, 0x30, 0xae, 0x55, 0x3e, 0x40, 0x4f, 0xbb, 0x69, 0x4f, + 0x1d, 0xa8, 0x65, 0x1b, 0xfe, 0x09, 0xfb, 0xda, 0x35, 0xf0, 0xc5, 0xec, 0xdb, 0x76, 0x23, 0xb2, + 0x1c, 0x9f, 0xa4, 0xd9, 0x27, 0x7e, 0xeb, 0xb3, 0x4f, 0xfc, 0xb6, 0xea, 0x50, 0xab, 0x6f, 0x71, + 0x63, 0x54, 0xd8, 0xeb, 0x05, 0x23, 0x82, 0x1a, 0xce, 0x14, 0xaa, 0x8d, 0x83, 0xf5, 0xe3, 0x65, + 0x98, 0x16, 0x33, 0xb9, 0xde, 0x8b, 0xb7, 0x83, 0xd0, 0x8b, 0xf7, 0x9e, 0xda, 0x27, 0xa0, 0xf7, + 0x8c, 0x3b, 0xff, 0x9c, 0x92, 0x4e, 0xf4, 0xbe, 0xf5, 0x7f, 0x09, 0xd2, 0xce, 0xbb, 0xff, 0x61, + 0x18, 0x8e, 0x15, 0x50, 0xb1, 0xd7, 0x8c, 0x77, 0xe5, 0x59, 0x15, 0xae, 0xe8, 0xfb, 0xfb, 0xf3, + 0x13, 0x0a, 0x7d, 0x3d, 0x0d, 0x5f, 0x74, 0xd9, 0x34, 0x62, 0xa6, 0x91, 0xc2, 0x67, 0x66, 0xdd, + 0x88, 0xd9, 0x34, 0x5d, 0xbe, 0x00, 0xc3, 0x78, 0x22, 0x48, 0x4f, 0x00, 0x94, 0xe8, 0xf1, 0x8c, + 0x31, 0x2c, 0xf9, 0x04, 0x80, 0xdd, 0x80, 0x51, 0xf1, 0xc7, 0x6d, 0xb7, 0x2b, 0x8d, 0x3c, 0x58, + 0xa2, 0x14, 0x43, 0x68, 0xd7, 0xf3, 0xb7, 0x74, 0xbd, 0x58, 0x9b, 0x3b, 0x1d, 0xb7, 0x6b, 0x5c, + 0x3d, 0x08, 0xd1, 0xd0, 0xaf, 0x55, 0xfb, 0xeb, 0xd7, 0x4a, 0x87, 0xea, 0xd7, 0x36, 0x01, 0x1a, + 0xde, 0x96, 0xef, 0xf9, 0x5b, 0xf5, 0xf6, 0x96, 0x0c, 0xfa, 0x74, 0xa1, 0xff, 0x57, 0xb8, 0x98, + 0x22, 0xe3, 0x1a, 0x79, 0x06, 0x2d, 0xb1, 0x08, 0xe6, 0xb8, 0xed, 0x2d, 0xc3, 0x23, 0x5b, 0xe3, + 0xcc, 0xee, 0x00, 0xd4, 0x9b, 0xb1, 0xb7, 0x2b, 0x56, 0x4b, 0x24, 0xef, 0x09, 0xaa, 0xc9, 0x8b, + 0xf5, 0x5b, 0x7c, 0x0f, 0xef, 0xb6, 0xca, 0xa6, 0xc5, 0x45, 0x54, 0x31, 0xeb, 0x0d, 0x77, 0xdb, + 0x94, 0x03, 0xeb, 0xc2, 0x89, 0x7a, 0xab, 0xe5, 0x89, 0x3e, 0xb8, 0xed, 0x75, 0x0a, 0xd7, 0x85, + 0xac, 0x27, 0x8a, 0x59, 0x5f, 0x50, 0x66, 0x18, 0x6e, 0x42, 0xe5, 0xa8, 0x28, 0x5f, 0x99, 0x6a, + 0x8a, 0x19, 0x5b, 0x0d, 0x98, 0x32, 0x3b, 0x6f, 0x06, 0xab, 0x9a, 0x80, 0xaa, 
0xdd, 0xa8, 0x3b, + 0x8d, 0x1b, 0xf5, 0x37, 0x6b, 0x25, 0x56, 0x83, 0x09, 0xf9, 0xeb, 0xb2, 0x73, 0xf9, 0xad, 0xab, + 0xb5, 0xb2, 0x01, 0x79, 0xeb, 0xcd, 0xcb, 0xb5, 0xca, 0x5c, 0x79, 0xb6, 0x94, 0x09, 0xc6, 0x30, + 0x5a, 0xab, 0xd2, 0x43, 0x89, 0xf5, 0xcb, 0x25, 0xa8, 0xaa, 0xb6, 0xb3, 0xab, 0x50, 0x69, 0x34, + 0x6e, 0x64, 0xc2, 0x19, 0xa4, 0xa7, 0x3b, 0x9d, 0x60, 0x91, 0xb1, 0xd3, 0x08, 0x02, 0x41, 0xb7, + 0xbe, 0xda, 0x90, 0x72, 0xb2, 0xa2, 0x4b, 0x8f, 0x4b, 0xa2, 0x2b, 0xf0, 0xf1, 0xbe, 0x0a, 0x95, + 0x9b, 0x0f, 0xd6, 0xe5, 0x2d, 0x5e, 0xd1, 0xa5, 0xc7, 0x14, 0xd1, 0x7d, 0xf4, 0x50, 0x3f, 0x57, + 0x05, 0x81, 0x65, 0xc3, 0xb8, 0x36, 0x91, 0x49, 0x30, 0xec, 0x04, 0x49, 0x84, 0x26, 0x29, 0x18, + 0x0a, 0x88, 0x2d, 0x4b, 0x84, 0xb8, 0xbc, 0x1a, 0x34, 0xdd, 0xb6, 0x94, 0x30, 0x51, 0x5c, 0x6e, + 0x0b, 0x80, 0x4d, 0x70, 0xeb, 0xd7, 0x4b, 0x50, 0x5b, 0x0b, 0x03, 0x8a, 0x22, 0xb5, 0x1e, 0xec, + 0x70, 0xff, 0xfe, 0x9b, 0xec, 0x75, 0xb5, 0xe4, 0x4a, 0x89, 0xa2, 0x77, 0x18, 0x97, 0x5c, 0xe6, + 0xe1, 0x5f, 0x2e, 0x3b, 0x2d, 0x08, 0x56, 0x79, 0xf0, 0xe0, 0x39, 0x87, 0x04, 0xc1, 0x9a, 0x87, + 0x61, 0x6c, 0x8e, 0xdc, 0x1c, 0xb1, 0xe5, 0xb1, 0x00, 0xd8, 0x04, 0xd7, 0xf6, 0xa6, 0xfd, 0x72, + 0xae, 0x0f, 0x97, 0x7f, 0xa0, 0x02, 0xd0, 0x98, 0x9d, 0x3b, 0xe0, 0xe5, 0xfe, 0x56, 0x9f, 0x00, + 0x34, 0x19, 0x06, 0xe4, 0x58, 0x7d, 0x99, 0x9e, 0xd6, 0xc8, 0x3d, 0x51, 0xd7, 0x45, 0xe6, 0x62, + 0x4f, 0x7c, 0x19, 0x8e, 0x67, 0xc7, 0x17, 0x55, 0xe6, 0x75, 0x98, 0x36, 0xe1, 0x4a, 0x7b, 0x7e, + 0xaa, 0xb0, 0xde, 0xfb, 0x97, 0xed, 0x2c, 0xbe, 0xf5, 0xcf, 0x4b, 0x30, 0x86, 0x7f, 0xda, 0x3d, + 0x92, 0xf2, 0xeb, 0x0f, 0x1a, 0x52, 0x91, 0xa7, 0x4b, 0xf9, 0xee, 0xc3, 0x48, 0xd9, 0xd2, 0x1a, + 0x1b, 0x56, 0x82, 0x2c, 0x49, 0xe9, 0x09, 0x51, 0xa9, 0x90, 0x13, 0x52, 0x7a, 0x6b, 0x8c, 0x32, + 0xa4, 0x12, 0x19, 0x9d, 0x63, 0xe8, 0xda, 0xa1, 0x5b, 0x36, 0x22, 0x5d, 0xd0, 0x36, 0x9d, 0x63, + 0x08, 0x0d, 0x0d, 0x1b, 0x1f, 0x34, 0xc4, 0x4d, 0x44, 0x37, 0x6c, 0x14, 0x6d, 0x34, 0x6e, 0x21, + 0x12, 0xc9, 0xfa, 
0xb3, 0x53, 0xd9, 0x01, 0x94, 0xa7, 0xe7, 0x11, 0x17, 0xda, 0xbb, 0x30, 0x5c, + 0x6f, 0xb7, 0x83, 0x87, 0x72, 0xcb, 0x51, 0x7a, 0x96, 0x64, 0xfc, 0xe8, 0x70, 0x44, 0x25, 0xb4, + 0x11, 0x96, 0x42, 0x00, 0xd8, 0x22, 0x8c, 0xd5, 0x1f, 0x34, 0x56, 0x56, 0x96, 0xd6, 0xd7, 0xc9, + 0xdf, 0xbd, 0xb2, 0xf0, 0x92, 0x1a, 0x1f, 0xcf, 0x6b, 0x39, 0x59, 0xcb, 0xab, 0xf4, 0xc2, 0x9a, + 0xd2, 0xb1, 0x2f, 0x00, 0xdc, 0x0c, 0x3c, 0x9f, 0x94, 0xee, 0xb2, 0xf3, 0x67, 0x1e, 0xef, 0xcf, + 0x8f, 0x7f, 0x14, 0x78, 0xbe, 0xd4, 0xd2, 0x8b, 0xb6, 0xa7, 0x48, 0xb6, 0xf6, 0xb7, 0x18, 0xe9, + 0x85, 0x80, 0x2c, 0xb2, 0x87, 0xd3, 0x91, 0xde, 0x08, 0x72, 0xda, 0x61, 0x85, 0xc6, 0x3a, 0x30, + 0xdd, 0xe8, 0x6d, 0x6d, 0x71, 0x71, 0x4c, 0x48, 0xed, 0xe7, 0x88, 0x54, 0xb4, 0x24, 0x31, 0x20, + 0xe9, 0x02, 0x2e, 0x6e, 0xff, 0xd1, 0xc2, 0x6b, 0x62, 0x55, 0x7c, 0x6f, 0x7f, 0x5e, 0x5a, 0x74, + 0x09, 0x11, 0x33, 0x52, 0xf4, 0x79, 0xdd, 0x67, 0x96, 0x37, 0xbb, 0x0b, 0x23, 0xf4, 0x2c, 0x2b, + 0xfd, 0xb7, 0x9f, 0x3f, 0x60, 0x05, 0x12, 0x62, 0x3f, 0xbb, 0x04, 0x2a, 0x65, 0x0f, 0xa0, 0xba, + 0xe8, 0x85, 0xcd, 0x36, 0x5f, 0x5c, 0x91, 0x82, 0xc4, 0x0b, 0x07, 0xb0, 0x54, 0xa8, 0x34, 0x2e, + 0x4d, 0xfc, 0xd5, 0xf4, 0x74, 0xc1, 0x42, 0x61, 0xb0, 0xbf, 0x54, 0x82, 0x67, 0x92, 0xd6, 0xd7, + 0xb7, 0xb8, 0x1f, 0xdf, 0x76, 0xe3, 0xe6, 0x36, 0x0f, 0xe5, 0x28, 0x8d, 0x1d, 0x34, 0x4a, 0xef, + 0xe4, 0x46, 0xe9, 0x7c, 0x3a, 0x4a, 0xae, 0x60, 0xe6, 0x74, 0x88, 0x5b, 0x7e, 0xcc, 0x0e, 0xaa, + 0x95, 0x39, 0x00, 0xa9, 0xc1, 0x81, 0x34, 0x0d, 0x7d, 0xe9, 0x80, 0x0e, 0xa7, 0xc8, 0xd2, 0x73, + 0x36, 0xf9, 0x6d, 0xb8, 0x32, 0x24, 0x50, 0x76, 0x4b, 0x05, 0x4b, 0x20, 0x11, 0xe7, 0xec, 0x01, + 0xbc, 0x29, 0x80, 0xc2, 0xb1, 0x03, 0x22, 0xb5, 0xd0, 0xd7, 0x5e, 0x75, 0x37, 0xa4, 0x54, 0x73, + 0xc8, 0xd7, 0x5e, 0x75, 0xd3, 0xaf, 0xdd, 0x76, 0xb3, 0x5f, 0x7b, 0xd5, 0xdd, 0x60, 0x8b, 0x14, + 0x74, 0x86, 0x22, 0x94, 0x3c, 0x77, 0x10, 0x37, 0xa5, 0x79, 0x2c, 0x08, 0x3e, 0xf3, 0x15, 0x18, + 0x6b, 0x74, 0xdd, 0x26, 0x6f, 0x7b, 0x9b, 0xb1, 0x34, 
0xb6, 0x79, 0xf1, 0x00, 0x56, 0x09, 0xae, + 0xb4, 0x5e, 0x50, 0x3f, 0xf5, 0x1b, 0x5e, 0x82, 0x23, 0x5a, 0xb8, 0xbe, 0x76, 0x5b, 0xea, 0x41, + 0x0f, 0x6a, 0xe1, 0xfa, 0xda, 0x6d, 0x29, 0xc0, 0x74, 0x3b, 0x86, 0x00, 0xb3, 0x76, 0x9b, 0x75, + 0x61, 0x6a, 0x9d, 0x87, 0xa1, 0xbb, 0x19, 0x84, 0x1d, 0xd2, 0xb6, 0x93, 0x8b, 0xf9, 0x85, 0x83, + 0xf8, 0x19, 0x04, 0xa4, 0x64, 0x8e, 0x15, 0xcc, 0xc9, 0xaa, 0xe8, 0x33, 0xfc, 0xc5, 0x98, 0x2c, + 0x78, 0xf1, 0x46, 0xaf, 0xb9, 0xc3, 0x63, 0xe9, 0x78, 0x7e, 0xd0, 0x98, 0x24, 0xb8, 0x34, 0x26, + 0x1b, 0xea, 0xa7, 0x3e, 0x26, 0x09, 0x8e, 0x98, 0x06, 0x32, 0xb4, 0x0c, 0x3b, 0x74, 0x1a, 0x10, + 0x22, 0x4d, 0x83, 0x7e, 0x31, 0x66, 0xd8, 0x36, 0x4c, 0x2c, 0x04, 0x3d, 0x5f, 0xc8, 0xb5, 0x5d, + 0xd7, 0x0b, 0x67, 0x8f, 0x21, 0xdb, 0x97, 0x0f, 0x6a, 0xb0, 0x86, 0x4e, 0x0e, 0x2f, 0x1b, 0x02, + 0x22, 0x44, 0x67, 0x01, 0xd2, 0xdf, 0xcd, 0x74, 0x54, 0xd6, 0x82, 0x71, 0x9c, 0xca, 0x4b, 0x7c, + 0x37, 0xe8, 0x46, 0xb3, 0xc7, 0xb1, 0xa2, 0x73, 0x87, 0x2d, 0x0a, 0xc2, 0x26, 0xab, 0x12, 0x5c, + 0x1a, 0x4e, 0x0b, 0x21, 0xfa, 0x63, 0x86, 0x86, 0xc8, 0xae, 0xc3, 0xd0, 0xb2, 0xbf, 0xfb, 0xc6, + 0xec, 0x09, 0x64, 0x3f, 0x7f, 0x00, 0x7b, 0x81, 0x46, 0x57, 0x73, 0xee, 0xef, 0xbe, 0xa1, 0x5f, + 0xcd, 0x45, 0x89, 0xf5, 0xb7, 0x87, 0x61, 0xfe, 0x90, 0x56, 0xb1, 0xfb, 0xea, 0x90, 0x23, 0x51, + 0xe2, 0xd5, 0xc1, 0x3a, 0x73, 0xf1, 0xd0, 0xf3, 0xef, 0x5d, 0x98, 0xba, 0xab, 0x59, 0xca, 0x24, + 0x96, 0x4b, 0x48, 0xa3, 0xdb, 0xd0, 0x38, 0x5e, 0xcb, 0xce, 0xa0, 0xce, 0xfd, 0x41, 0x05, 0x86, + 0x50, 0x42, 0x79, 0x01, 0x2a, 0x8d, 0xde, 0x86, 0xfe, 0x70, 0x1a, 0x19, 0xfb, 0xbe, 0x28, 0x65, + 0xef, 0xc1, 0xb8, 0xf4, 0x13, 0xd4, 0xae, 0xb9, 0x38, 0xda, 0xca, 0xa9, 0x30, 0xeb, 0xc5, 0xa4, + 0xa1, 0xb3, 0xf7, 0x61, 0x62, 0xcd, 0xeb, 0xf2, 0xb6, 0xe7, 0x73, 0xcd, 0x27, 0x07, 0x27, 0x45, + 0x57, 0xc2, 0x73, 0x8f, 0xa9, 0x3a, 0x81, 0xe9, 0xd1, 0x38, 0x34, 0xb8, 0x47, 0xe3, 0xfb, 0x30, + 0xb1, 0xc4, 0x37, 0x3d, 0xdf, 0x93, 0xe3, 0x33, 0x9c, 0x56, 0xdc, 0x4a, 0xe0, 0x26, 0xb5, 
0x41, + 0xc0, 0x16, 0x60, 0xd2, 0xe6, 0xdd, 0x20, 0xf2, 0xe2, 0x20, 0xdc, 0xbb, 0x67, 0xaf, 0x48, 0xab, + 0x2a, 0xd4, 0xb0, 0x86, 0x49, 0x81, 0xd3, 0x0b, 0xf5, 0x23, 0xcd, 0x24, 0x61, 0x77, 0x60, 0x26, + 0x05, 0x98, 0xc6, 0x92, 0xf2, 0xe5, 0x2c, 0xe1, 0x93, 0x77, 0x7e, 0xc8, 0x93, 0x9a, 0x6d, 0xb2, + 0xf9, 0xa6, 0x74, 0xa5, 0xc8, 0xb6, 0x29, 0xe4, 0x9b, 0xc5, 0x6d, 0xb2, 0xf9, 0xa6, 0xf5, 0x77, + 0x2b, 0x70, 0xaa, 0xcf, 0x1e, 0xc9, 0xee, 0x98, 0xd3, 0xf5, 0x85, 0x83, 0xb7, 0xd4, 0xc3, 0xa7, + 0xe9, 0x2a, 0xd4, 0x96, 0x6f, 0xa1, 0x66, 0x80, 0xec, 0x12, 0x16, 0xeb, 0x4a, 0x9a, 0xc5, 0xee, + 0xf3, 0x1d, 0x74, 0xa3, 0x52, 0xf6, 0x0c, 0x4d, 0x23, 0x5e, 0x56, 0x8e, 0x72, 0xee, 0x4f, 0x97, + 0xe5, 0xbc, 0xcd, 0x04, 0x4c, 0x2e, 0x1d, 0x29, 0x60, 0xf2, 0x97, 0x60, 0x62, 0xf9, 0x16, 0xa9, + 0x43, 0x6f, 0x28, 0x0d, 0x9c, 0x1c, 0x42, 0xbe, 0xa3, 0xde, 0xe1, 0x32, 0xba, 0x38, 0x83, 0x82, + 0xdd, 0x83, 0x63, 0xd4, 0x36, 0x6f, 0xd3, 0x6b, 0x52, 0xdc, 0x55, 0xcf, 0x6d, 0xcb, 0x19, 0xf6, + 0xc2, 0xe3, 0xfd, 0xf9, 0x79, 0xbe, 0x83, 0x0e, 0x62, 0xb2, 0xdc, 0x89, 0x10, 0x41, 0xf7, 0x14, + 0x2b, 0xa0, 0xd7, 0x23, 0x2e, 0xda, 0x63, 0x58, 0xa1, 0xa8, 0x4d, 0xd4, 0x2d, 0x70, 0x09, 0xc9, + 0xfa, 0xfd, 0x61, 0x98, 0xeb, 0x2f, 0xbf, 0xb1, 0x0f, 0xcc, 0x0f, 0x78, 0xee, 0x50, 0x89, 0xef, + 0xf0, 0x6f, 0xf8, 0x21, 0x1c, 0x5f, 0xf6, 0x63, 0x1e, 0x76, 0x43, 0x4f, 0x45, 0x6a, 0xbc, 0x11, + 0x44, 0xca, 0x21, 0x0f, 0x5f, 0x49, 0x78, 0x52, 0x2e, 0x5d, 0x4b, 0xf1, 0x61, 0x4f, 0x7f, 0x25, + 0x29, 0xe2, 0xc0, 0x96, 0x61, 0x4a, 0x83, 0xb7, 0x7b, 0x5b, 0xba, 0xb1, 0x85, 0xce, 0xb3, 0xdd, + 0xd3, 0xbd, 0x95, 0x32, 0x44, 0xe8, 0xf4, 0x17, 0xbb, 0xb1, 0xd7, 0xbc, 0xf9, 0xe0, 0x56, 0x43, + 0x7e, 0x4e, 0x72, 0xfa, 0x43, 0xa8, 0xf3, 0xd1, 0xc3, 0x1d, 0x43, 0x00, 0x4b, 0x91, 0xe7, 0xfe, + 0xda, 0x91, 0x76, 0xc2, 0xcf, 0x03, 0xa4, 0x4b, 0x49, 0x0f, 0x32, 0x92, 0x2e, 0x3d, 0xd3, 0xaf, + 0x57, 0x41, 0xd9, 0x0d, 0x98, 0x4e, 0x7f, 0xdd, 0x7d, 0xe8, 0xab, 0x07, 0x4f, 0x52, 0x1b, 0x6b, + 0x2b, 0x37, 0x10, 0x65, 0xba, 
0x4c, 0x9f, 0x21, 0x63, 0x97, 0xa1, 0xfa, 0x20, 0x08, 0x77, 0x36, + 0xc5, 0x37, 0x1e, 0x4a, 0x6f, 0x1d, 0x0f, 0x25, 0x4c, 0x97, 0xae, 0x15, 0x9e, 0x58, 0x2e, 0xcb, + 0xfe, 0xae, 0x17, 0x06, 0x68, 0xa0, 0xa2, 0xdb, 0x95, 0xf2, 0x14, 0x6c, 0x04, 0x97, 0x4a, 0xc1, + 0xec, 0x02, 0x0c, 0xd7, 0x9b, 0x71, 0x10, 0xca, 0xed, 0x8f, 0x66, 0x8a, 0x00, 0x18, 0x33, 0x45, + 0x00, 0xc4, 0x20, 0x8a, 0x3d, 0x69, 0x34, 0x1d, 0x44, 0x73, 0x23, 0x12, 0xa5, 0xe2, 0xd6, 0x64, + 0xf3, 0x4d, 0x54, 0xb3, 0x1a, 0xe1, 0xc0, 0x37, 0x73, 0x4f, 0x22, 0x12, 0xcd, 0xfa, 0x33, 0xd0, + 0x77, 0xca, 0x0b, 0x31, 0xf5, 0x68, 0x53, 0x7e, 0xd5, 0x1d, 0x60, 0xca, 0xbf, 0x96, 0x78, 0x0b, + 0xeb, 0x11, 0xec, 0x10, 0xa2, 0x0b, 0x48, 0xd2, 0x6f, 0xd8, 0x9c, 0x7f, 0x95, 0xa3, 0xcc, 0xbf, + 0xbf, 0x51, 0x3d, 0xca, 0xfc, 0x93, 0xe3, 0x5b, 0x1e, 0x74, 0x7c, 0x2b, 0x03, 0x8d, 0xaf, 0x38, + 0x54, 0x92, 0x58, 0xf4, 0x6b, 0x6e, 0x6c, 0xec, 0x88, 0x49, 0x02, 0x01, 0xa7, 0xeb, 0x1a, 0xb1, + 0x55, 0x4d, 0x12, 0x4d, 0x48, 0x40, 0x0e, 0xc3, 0x79, 0x21, 0x21, 0x43, 0xaf, 0xa3, 0x8b, 0x8d, + 0x40, 0x9d, 0xf9, 0x0d, 0xf4, 0x3d, 0x95, 0x93, 0x8d, 0xcc, 0x97, 0x94, 0x98, 0x40, 0x6e, 0xa9, + 0xc6, 0xfb, 0x91, 0x41, 0x94, 0x9d, 0xe7, 0xa3, 0x47, 0x9a, 0xe7, 0xe4, 0x05, 0x11, 0xae, 0x06, + 0x5b, 0x9e, 0xf2, 0x50, 0x54, 0x5e, 0x10, 0xa1, 0xd3, 0x16, 0xd0, 0x8c, 0x17, 0x04, 0xa1, 0xb2, + 0xd7, 0x61, 0x44, 0xfc, 0x58, 0x59, 0x92, 0x36, 0x35, 0xa8, 0x3d, 0x41, 0x22, 0xd3, 0x2d, 0x94, + 0x90, 0x54, 0x35, 0xcb, 0x1d, 0xd7, 0x6b, 0xcb, 0x80, 0x62, 0x69, 0x35, 0x5c, 0x40, 0xb3, 0xd5, + 0x20, 0x2a, 0x6b, 0xc2, 0x84, 0xcd, 0x37, 0xd7, 0xc2, 0x20, 0xe6, 0xcd, 0x98, 0xb7, 0xe4, 0x8d, + 0x51, 0x29, 0x4d, 0x16, 0x82, 0x80, 0x6e, 0xc3, 0xe8, 0x41, 0x58, 0xfa, 0xde, 0xfe, 0x3c, 0x08, + 0x10, 0xf9, 0x1c, 0x0b, 0x91, 0x47, 0x7c, 0xff, 0xae, 0x22, 0xd6, 0x0f, 0x36, 0x9d, 0x29, 0xfb, + 0x96, 0xd8, 0xea, 0x93, 0x21, 0x49, 0x2b, 0x9b, 0xe8, 0x53, 0xd9, 0x5b, 0x85, 0x95, 0xcd, 0x6b, + 0xa3, 0x5d, 0x58, 0x69, 0x61, 0x25, 0xec, 0x0b, 0x30, 0xbe, 0xb8, 
0xb2, 0x18, 0xf8, 0x9b, 0xde, + 0x56, 0xe3, 0x46, 0x1d, 0xaf, 0x9d, 0x52, 0x5e, 0x6b, 0x7a, 0x4e, 0x13, 0xe1, 0x4e, 0xb4, 0xed, + 0x1a, 0x41, 0x61, 0x52, 0x7c, 0x76, 0x1d, 0xa6, 0xd4, 0x4f, 0x9b, 0x6f, 0x0a, 0x79, 0x6d, 0x4a, + 0x8b, 0x51, 0x90, 0x70, 0x10, 0x03, 0x61, 0x8a, 0x6c, 0x19, 0x32, 0x31, 0x19, 0x97, 0x78, 0xb7, + 0x1d, 0xec, 0x89, 0xe6, 0xad, 0x7b, 0x3c, 0xc4, 0xfb, 0xa5, 0x9c, 0x8c, 0xad, 0xa4, 0xc4, 0x89, + 0x3d, 0xd3, 0x92, 0xc8, 0x24, 0x12, 0xa2, 0x9f, 0x9c, 0xe2, 0xf7, 0xbd, 0xc8, 0xdb, 0xf0, 0xda, + 0x5e, 0xbc, 0x47, 0x9e, 0x1c, 0x24, 0xfb, 0xa8, 0x75, 0xb1, 0x9b, 0x94, 0xea, 0xa2, 0x5f, 0x8e, + 0xd4, 0xfa, 0xe5, 0x32, 0x3c, 0x7b, 0x90, 0x96, 0x85, 0x35, 0xcc, 0x7d, 0xf0, 0xfc, 0x00, 0x9a, + 0x99, 0xc3, 0x77, 0xc2, 0xe5, 0x3e, 0xf7, 0x0c, 0x1c, 0x8c, 0xcc, 0x3d, 0x43, 0x1f, 0x8c, 0xcc, + 0x8d, 0x63, 0x57, 0x6e, 0x73, 0x1f, 0x37, 0x3c, 0xc9, 0x55, 0x18, 0x5b, 0x0c, 0xfc, 0x98, 0x3f, + 0x8a, 0x33, 0xd1, 0xbd, 0x08, 0x98, 0x0d, 0xcd, 0xa2, 0x50, 0xad, 0x7f, 0x5a, 0x81, 0x33, 0x07, + 0xaa, 0x19, 0xd8, 0xba, 0x39, 0x6a, 0x17, 0x06, 0xd1, 0x4d, 0x1c, 0x3e, 0x6c, 0x97, 0x73, 0x06, + 0xf2, 0x87, 0xfb, 0x97, 0xdb, 0xc0, 0x28, 0xe6, 0xd0, 0xf5, 0x76, 0xb0, 0x81, 0x8a, 0x28, 0xcf, + 0xdf, 0x92, 0xb1, 0x8a, 0xc8, 0x57, 0x1a, 0x4b, 0x9d, 0xad, 0x76, 0xb0, 0x41, 0x0a, 0x2d, 0xcf, + 0xd7, 0xc5, 0xa2, 0x02, 0xea, 0xb9, 0x7f, 0x5a, 0x92, 0x03, 0xff, 0x06, 0x8c, 0x62, 0xf3, 0x93, + 0x61, 0xa7, 0x27, 0x02, 0xdc, 0xd9, 0x3d, 0xf3, 0x89, 0x80, 0xd0, 0xd8, 0x15, 0xa8, 0x2e, 0xba, + 0xed, 0xb6, 0x16, 0x4f, 0x0d, 0xb5, 0x0f, 0x4d, 0x84, 0x65, 0xfc, 0x49, 0x14, 0xa2, 0x38, 0x0a, + 0xe9, 0x6f, 0xed, 0xfc, 0xc1, 0x0d, 0x58, 0x92, 0x65, 0x8e, 0x20, 0x0d, 0x19, 0x33, 0x74, 0xa0, + 0xbb, 0xc2, 0x90, 0x96, 0xa1, 0x43, 0x00, 0x8c, 0x0c, 0x1d, 0x02, 0x60, 0xfd, 0xf5, 0x61, 0x78, + 0xee, 0x60, 0xfd, 0x1b, 0xbb, 0x67, 0x7e, 0xd6, 0x57, 0x06, 0xd2, 0xda, 0x1d, 0xfe, 0x5d, 0x55, + 0xbe, 0x1b, 0x1a, 0x90, 0xf3, 0x79, 0xc7, 0xe3, 0xef, 0xef, 0xcf, 0x6b, 0x4e, 0x4a, 0x37, 0x03, + 0xcf, 
0xd7, 0x1e, 0x8c, 0xbf, 0x99, 0x13, 0x14, 0xc6, 0x2f, 0x5f, 0x1d, 0xac, 0x65, 0x29, 0x1d, + 0xed, 0x55, 0x03, 0x0a, 0x18, 0xec, 0x43, 0x18, 0xba, 0xbb, 0xb2, 0xb4, 0x28, 0x5f, 0x6f, 0xde, + 0x18, 0xac, 0x32, 0x41, 0x21, 0xab, 0x41, 0xed, 0x47, 0xe0, 0xb5, 0x9a, 0xba, 0xf6, 0x43, 0x94, + 0xcf, 0xbd, 0x03, 0xb5, 0x6c, 0xa3, 0xd8, 0x39, 0x18, 0xc2, 0xae, 0x69, 0x7e, 0xd9, 0x99, 0xb6, + 0x61, 0xf9, 0xdc, 0x4f, 0x97, 0x00, 0xd2, 0x4a, 0x84, 0xb8, 0xb5, 0x12, 0x45, 0xbd, 0x24, 0xc8, + 0x2f, 0x8a, 0x5b, 0x1e, 0x42, 0xf4, 0x13, 0x94, 0x70, 0xd8, 0x87, 0xe8, 0x17, 0x8e, 0xf6, 0xb9, + 0xf8, 0x51, 0x6e, 0xac, 0xaf, 0xaf, 0x49, 0x72, 0xb2, 0x81, 0x42, 0x99, 0x3a, 0x31, 0xec, 0x25, + 0x13, 0xf7, 0xed, 0x38, 0xee, 0x3a, 0xc4, 0xd2, 0xee, 0x47, 0x3e, 0x77, 0x5b, 0x2e, 0x16, 0x8c, + 0xa9, 0xa7, 0xc7, 0x33, 0x91, 0xed, 0x92, 0x31, 0xf5, 0x8c, 0x60, 0x28, 0x66, 0x4c, 0x3d, 0x9d, + 0xc8, 0xfa, 0xb7, 0x25, 0x38, 0xdd, 0x57, 0xd1, 0xc3, 0xd6, 0xcc, 0x19, 0xfa, 0xd2, 0x61, 0x9a, + 0xa1, 0x43, 0x27, 0xe7, 0xdc, 0x77, 0xd4, 0x62, 0xff, 0x22, 0x4c, 0x34, 0x7a, 0x1b, 0xd9, 0xfb, + 0x31, 0xc5, 0xdb, 0xd4, 0xe0, 0xba, 0x18, 0xa0, 0xe3, 0x8b, 0xfe, 0xab, 0x20, 0x20, 0xd2, 0x9e, + 0x58, 0x73, 0x62, 0x48, 0x82, 0x18, 0xe5, 0x63, 0x0a, 0x9a, 0x44, 0xd6, 0x7f, 0x5c, 0x2e, 0x56, + 0x34, 0x5c, 0x5f, 0x5c, 0x3b, 0x8a, 0xa2, 0xe1, 0xfa, 0xe2, 0xda, 0xe1, 0x7d, 0xff, 0x6f, 0x55, + 0xdf, 0xc9, 0xb4, 0x8e, 0x8e, 0x0d, 0xf5, 0x12, 0xa5, 0x4c, 0xeb, 0xe4, 0x11, 0x13, 0x65, 0x4c, + 0xeb, 0x24, 0x32, 0x7b, 0x0b, 0xc6, 0x56, 0x03, 0x0a, 0xf7, 0xa7, 0x7a, 0x4c, 0x51, 0x7e, 0x14, + 0x50, 0x3f, 0x63, 0x12, 0x4c, 0x71, 0xb7, 0x33, 0x3f, 0xbc, 0xf2, 0xd5, 0xc0, 0x79, 0x98, 0x99, + 0x2e, 0xe6, 0x7b, 0x8d, 0x49, 0x66, 0xfd, 0x67, 0xc3, 0x60, 0x1d, 0xae, 0x6d, 0x66, 0x5f, 0x36, + 0xc7, 0xee, 0xe2, 0xc0, 0x7a, 0xea, 0x81, 0xce, 0xad, 0x7a, 0xaf, 0xe5, 0x71, 0xbf, 0x69, 0x86, + 0xd6, 0x93, 0x30, 0x7d, 0xcf, 0x57, 0x78, 0x1f, 0x27, 0x96, 0xca, 0xdc, 0x3f, 0xac, 0xa4, 0x4b, + 0x2d, 0x23, 0x5f, 0x94, 0x3e, 0x86, 0x7c, 
0xc1, 0x6e, 0x41, 0x4d, 0x87, 0x68, 0x8a, 0x4a, 0x14, + 0xff, 0x0c, 0x46, 0x99, 0x46, 0xe5, 0x08, 0x4d, 0x21, 0xa5, 0x32, 0xb8, 0x90, 0x92, 0x51, 0x94, + 0x0e, 0x1d, 0x4d, 0x51, 0x2a, 0x43, 0xf1, 0x45, 0xf2, 0x94, 0x1e, 0x36, 0x43, 0xf1, 0x15, 0x9c, + 0xd4, 0x3a, 0xba, 0x8a, 0x26, 0x88, 0x3f, 0xb5, 0x68, 0x53, 0x49, 0x34, 0x41, 0xa2, 0x2f, 0x8a, + 0x26, 0x98, 0x90, 0x88, 0x13, 0xdf, 0xee, 0xf9, 0x94, 0xfb, 0x6a, 0x34, 0x3d, 0xf1, 0xc3, 0x9e, + 0xef, 0x64, 0xf3, 0x5f, 0x25, 0x88, 0xd6, 0x7f, 0x39, 0x54, 0x2c, 0x61, 0xa5, 0x0f, 0x12, 0x47, + 0x90, 0xb0, 0x12, 0xa2, 0x4f, 0x67, 0xa6, 0xde, 0x83, 0x63, 0xca, 0xdc, 0x1f, 0x6b, 0x6f, 0xf1, + 0xf0, 0x9e, 0xbd, 0x2a, 0x3f, 0x31, 0xea, 0xed, 0x12, 0x47, 0x81, 0xae, 0x2c, 0x77, 0x7a, 0xa1, + 0xa1, 0xb7, 0x2b, 0xa0, 0x9f, 0xfb, 0x3b, 0x4a, 0x2d, 0xa9, 0x7f, 0x04, 0x0c, 0x82, 0x51, 0x2a, + 0xfa, 0x08, 0xbd, 0x9e, 0xf1, 0x19, 0x4d, 0x12, 0xda, 0x7b, 0x13, 0x15, 0xf2, 0x3d, 0x53, 0xe0, + 0xd6, 0xd5, 0xce, 0x26, 0x97, 0x0c, 0x11, 0xdb, 0x82, 0xd3, 0xe9, 0x7d, 0x44, 0xbb, 0x6e, 0x21, + 0x47, 0xea, 0xf0, 0x85, 0xc7, 0xfb, 0xf3, 0x2f, 0x69, 0xf7, 0x19, 0xfd, 0xd6, 0x96, 0xe1, 0xde, + 0x9f, 0x97, 0xd8, 0x6f, 0x17, 0x42, 0xd7, 0x6f, 0x6e, 0x6b, 0x73, 0x1e, 0xf7, 0xdb, 0x0d, 0x84, + 0xe6, 0x22, 0x6e, 0xa5, 0xc8, 0xd6, 0xf7, 0xca, 0xc5, 0x7a, 0x1d, 0xf9, 0xee, 0x74, 0x04, 0xbd, + 0x0e, 0x51, 0x1c, 0x7e, 0x4a, 0xfc, 0x0b, 0x75, 0x4a, 0xbc, 0x04, 0xa3, 0xeb, 0xdc, 0x77, 0xfd, + 0x24, 0x92, 0x1d, 0xda, 0xbf, 0xc4, 0x04, 0xb2, 0x55, 0x19, 0xfb, 0x00, 0xd8, 0x9a, 0x1b, 0x72, + 0x3f, 0x5e, 0x0c, 0x3a, 0x5d, 0x37, 0x8c, 0x3b, 0x98, 0x1d, 0x8c, 0x8e, 0x86, 0xe7, 0x1f, 0xef, + 0xcf, 0x9f, 0xe9, 0x62, 0xa9, 0xd3, 0xd4, 0x8a, 0x75, 0x89, 0x3c, 0x4f, 0xcc, 0x2e, 0xc1, 0xa8, + 0x32, 0xeb, 0xa8, 0xa4, 0xc1, 0x80, 0xf3, 0x26, 0x1d, 0x0a, 0x4b, 0x9c, 0x4a, 0xca, 0xf9, 0x5c, + 0x85, 0xe3, 0xc7, 0x65, 0xa9, 0xdc, 0xd3, 0x8d, 0x53, 0x29, 0xc1, 0xb4, 0xfe, 0xeb, 0x11, 0x98, + 0xed, 0xf7, 0xa6, 0xc5, 0xee, 0x9a, 0x43, 0xfb, 0xe2, 0x21, 0x6f, 0x60, 0x87, 
0x0f, 0xec, 0x6f, + 0x0e, 0x3f, 0xd9, 0xfd, 0xdc, 0xd8, 0x82, 0xcb, 0x1f, 0x7b, 0x0b, 0xae, 0x1c, 0x6d, 0x0b, 0x7e, + 0x1b, 0x60, 0x9d, 0x77, 0xba, 0x6d, 0x37, 0xe6, 0xc9, 0x5b, 0x13, 0x45, 0x89, 0x95, 0xd0, 0x8c, + 0xb7, 0x41, 0x8a, 0xcc, 0xde, 0x87, 0x09, 0xf5, 0x4b, 0x33, 0x08, 0xa1, 0xec, 0x44, 0x8a, 0x38, + 0xfb, 0xcc, 0xa5, 0x13, 0x88, 0xbd, 0x43, 0x5b, 0x5a, 0x89, 0x0f, 0x3f, 0x3d, 0x4b, 0x68, 0x2b, + 0xd3, 0xdc, 0x3b, 0x0c, 0x12, 0x21, 0x89, 0x68, 0x00, 0x2d, 0x54, 0x20, 0x4a, 0x22, 0x3a, 0x97, + 0x4c, 0x53, 0xb2, 0x64, 0xf9, 0xe3, 0xa4, 0x7a, 0xf4, 0xe3, 0xc4, 0xd4, 0xa3, 0xa8, 0x2c, 0xa1, + 0x05, 0x7a, 0x94, 0x8c, 0xa1, 0xb4, 0x49, 0x24, 0x9a, 0x42, 0x10, 0x53, 0x69, 0x86, 0x4d, 0x69, + 0xc9, 0x82, 0x9c, 0xe2, 0xcc, 0x24, 0x21, 0xdf, 0x98, 0xdd, 0x37, 0xd6, 0xdd, 0x2d, 0x19, 0x81, + 0x44, 0xfa, 0xc6, 0xec, 0xbe, 0xe1, 0xc4, 0xee, 0x96, 0xe9, 0x1b, 0x83, 0x68, 0xd6, 0x3f, 0x1c, + 0x86, 0xb3, 0x87, 0x3d, 0x6d, 0xb3, 0x4d, 0x80, 0xbb, 0xfe, 0x46, 0xe0, 0x86, 0x2d, 0x71, 0x53, + 0x2f, 0x1d, 0x7a, 0x9f, 0xd3, 0x89, 0x2f, 0xa6, 0x94, 0xa2, 0x90, 0xec, 0xb7, 0x83, 0x04, 0x66, + 0x6b, 0x9c, 0xd9, 0xd7, 0xa1, 0x6a, 0xf3, 0x66, 0xb0, 0xcb, 0xe5, 0xab, 0xc3, 0xf8, 0xe5, 0xcf, + 0x0e, 0x5a, 0x8b, 0xa2, 0xc3, 0x3a, 0xd0, 0x49, 0x3f, 0x94, 0x10, 0x3b, 0xe1, 0xc9, 0xbe, 0x01, + 0xe3, 0x94, 0x7a, 0xb1, 0xbe, 0x19, 0xcb, 0x97, 0x89, 0xc3, 0xe3, 0x85, 0x95, 0xc4, 0xaa, 0xa2, + 0x64, 0x8e, 0x8e, 0xbb, 0x69, 0x78, 0xd5, 0x51, 0xbc, 0x30, 0x8d, 0xe5, 0xdc, 0x7f, 0x54, 0x86, + 0x29, 0xb3, 0xc3, 0x6c, 0x15, 0x6a, 0x2b, 0xbe, 0x17, 0x7b, 0x6e, 0xdb, 0xf4, 0x62, 0x90, 0xea, + 0x31, 0x8f, 0xca, 0x9c, 0x42, 0x23, 0xff, 0x1c, 0xa5, 0xd8, 0xa9, 0xc5, 0x86, 0x19, 0xc5, 0x64, + 0xe5, 0x45, 0xe1, 0xf0, 0xe5, 0xce, 0xf1, 0x3c, 0xe5, 0x7e, 0x48, 0x4b, 0x1d, 0x4a, 0x7f, 0x61, + 0x86, 0xfa, 0xce, 0x12, 0xb3, 0x5d, 0x60, 0xb7, 0x7b, 0x51, 0x4c, 0x25, 0x3c, 0x5c, 0xe0, 0x9b, + 0x41, 0x38, 0x48, 0xa0, 0xb0, 0x57, 0xe4, 0xe0, 0x3c, 0xd7, 0xe9, 0x45, 0xb1, 0x13, 0x4a, 0x72, + 0x67, 0x03, 0xe9, 
0x33, 0x83, 0x54, 0x50, 0xc3, 0xdc, 0x6d, 0x98, 0xd0, 0xbf, 0x1a, 0x5a, 0xbd, + 0x7a, 0x1d, 0x4f, 0xf9, 0x7d, 0x91, 0xd5, 0xab, 0x00, 0xd8, 0x04, 0x67, 0xcf, 0xca, 0xc8, 0x9a, + 0xe5, 0xd4, 0x38, 0x34, 0x8d, 0xa0, 0x69, 0xfd, 0x68, 0x09, 0x4e, 0x16, 0x5b, 0x4c, 0xb2, 0x8f, + 0x32, 0x96, 0x1d, 0xa5, 0x83, 0xec, 0x5e, 0x94, 0x99, 0xe5, 0xc7, 0xb3, 0xed, 0xb0, 0xfe, 0xfc, + 0x50, 0xee, 0x6e, 0x53, 0xc0, 0x91, 0x5d, 0x2f, 0xfc, 0x8e, 0x25, 0x4d, 0x1a, 0xcd, 0x7f, 0xc7, + 0xc2, 0xaf, 0xf7, 0x1e, 0x4c, 0x21, 0xe3, 0x74, 0x72, 0x69, 0x4f, 0x39, 0xd4, 0x64, 0xcd, 0x5b, + 0x22, 0x83, 0xcb, 0x56, 0x80, 0x21, 0x64, 0x21, 0x88, 0xb5, 0xe0, 0x30, 0x9a, 0x3e, 0x8b, 0x38, + 0x6c, 0x04, 0xb1, 0xa3, 0x87, 0x89, 0x29, 0x20, 0x62, 0x9f, 0x87, 0x49, 0xf5, 0x39, 0x29, 0x1f, + 0x91, 0xe6, 0xf1, 0xa1, 0xd6, 0x22, 0xa5, 0x24, 0xb2, 0x4d, 0x44, 0xd6, 0xa1, 0x18, 0x87, 0x12, + 0xc8, 0x5b, 0xf5, 0x78, 0x80, 0x40, 0x92, 0x2f, 0xcb, 0xd9, 0xf7, 0x0c, 0x25, 0x5b, 0x55, 0xb4, + 0x8e, 0x1b, 0x67, 0xa6, 0x5e, 0x96, 0x37, 0xdb, 0x82, 0x49, 0x2d, 0x09, 0x6b, 0x3d, 0x1e, 0x20, + 0x07, 0xf0, 0x4b, 0xb2, 0xb2, 0xd3, 0x7a, 0x66, 0xd7, 0x7c, 0x55, 0x26, 0x5f, 0xeb, 0x3b, 0x65, + 0x98, 0x22, 0xd5, 0x11, 0x99, 0xcd, 0x3e, 0xb5, 0xf6, 0xcd, 0xef, 0x1a, 0xf6, 0xcd, 0x2a, 0x03, + 0x8d, 0xde, 0xb5, 0x81, 0xbc, 0x51, 0xb6, 0x81, 0xe5, 0x69, 0x98, 0x0d, 0x13, 0x3a, 0xf4, 0x60, + 0x5b, 0xe4, 0x37, 0xd3, 0x64, 0x45, 0x52, 0x27, 0x88, 0xd6, 0xe5, 0x91, 0x6d, 0xf0, 0xb0, 0x7e, + 0xac, 0x0c, 0x93, 0x9a, 0x37, 0xca, 0x53, 0x3b, 0xf0, 0xef, 0x18, 0x03, 0x3f, 0x9b, 0x04, 0xef, + 0x4a, 0x7a, 0x36, 0xd0, 0xb8, 0xf7, 0x60, 0x26, 0x47, 0x92, 0x75, 0xea, 0x29, 0x0d, 0xe2, 0xd4, + 0xf3, 0x5a, 0x3e, 0xf7, 0x08, 0xa5, 0x77, 0x4e, 0x02, 0xc7, 0xeb, 0xc9, 0x4e, 0x7e, 0xbc, 0x0c, + 0xc7, 0xe5, 0x2f, 0x4c, 0x59, 0x46, 0x4a, 0xd0, 0xa7, 0xf6, 0x5b, 0xd4, 0x8d, 0x6f, 0x31, 0x6f, + 0x7e, 0x0b, 0xad, 0x83, 0xfd, 0x3f, 0x89, 0xf5, 0xa3, 0x00, 0xb3, 0xfd, 0x08, 0x06, 0x8e, 0xf5, + 0x99, 0x86, 0xe5, 0x2a, 0x0f, 0x10, 0x96, 0x6b, 0x15, 
0x6a, 0x58, 0x95, 0x74, 0x83, 0x8d, 0xee, + 0xd9, 0x2b, 0x72, 0x90, 0x50, 0xfa, 0xa0, 0x0c, 0x73, 0xd2, 0x77, 0x36, 0xca, 0x3c, 0x18, 0xe6, + 0x28, 0xd9, 0x5f, 0x2b, 0xc1, 0x14, 0x02, 0x97, 0x77, 0xc5, 0x25, 0x4f, 0x30, 0x1b, 0x92, 0xf1, + 0x9a, 0x12, 0x8b, 0xe5, 0x46, 0x1c, 0x7a, 0xfe, 0x96, 0x34, 0x59, 0xde, 0x90, 0x26, 0xcb, 0xef, + 0x91, 0xa9, 0xf5, 0xc5, 0x66, 0xd0, 0xb9, 0xb4, 0x15, 0xba, 0xbb, 0x1e, 0x39, 0x5a, 0xb9, 0xed, + 0x4b, 0x2a, 0x20, 0xd0, 0x25, 0xb7, 0xeb, 0x5d, 0xc2, 0x31, 0xbd, 0x94, 0x61, 0x85, 0xe6, 0xe0, + 0xd4, 0x50, 0x8e, 0xd5, 0x66, 0xdf, 0x35, 0xcd, 0x16, 0xb1, 0x1f, 0x82, 0x53, 0xf4, 0x42, 0xb4, + 0x18, 0xf8, 0xb1, 0xe7, 0xf7, 0x82, 0x5e, 0xb4, 0xe0, 0x36, 0x77, 0x7a, 0xdd, 0x48, 0x06, 0xfd, + 0xc3, 0x9e, 0x37, 0x93, 0x42, 0x67, 0x83, 0x4a, 0x8d, 0x70, 0xbc, 0xc5, 0x0c, 0xd8, 0x0d, 0x98, + 0xa1, 0xa2, 0x7a, 0x2f, 0x0e, 0x1a, 0x4d, 0xb7, 0x2d, 0x04, 0xe2, 0x51, 0xe4, 0x4a, 0x76, 0x99, + 0xbd, 0x38, 0x70, 0x22, 0x82, 0xeb, 0xcf, 0x9c, 0x39, 0x22, 0xb6, 0x02, 0xd3, 0x36, 0x77, 0x5b, + 0xb7, 0xdd, 0x47, 0x8b, 0x6e, 0xd7, 0x6d, 0x7a, 0x31, 0xe5, 0x0c, 0xab, 0x90, 0x22, 0x2f, 0xe4, + 0x6e, 0xcb, 0xe9, 0xb8, 0x8f, 0x9c, 0xa6, 0x2c, 0x34, 0x4d, 0x65, 0x0c, 0xba, 0x84, 0x95, 0xe7, + 0x27, 0xac, 0xc6, 0xb2, 0xac, 0x3c, 0xbf, 0x3f, 0xab, 0x94, 0x4e, 0xb1, 0xa2, 0x7c, 0xb2, 0xe4, + 0xb2, 0x2f, 0xae, 0x21, 0x25, 0x8d, 0x95, 0x4c, 0x42, 0x8b, 0xee, 0xfb, 0x59, 0x56, 0x1a, 0x9d, + 0x98, 0x79, 0x0f, 0x42, 0x2f, 0xe6, 0x7a, 0x0f, 0xc7, 0xb1, 0x59, 0x38, 0xfe, 0x18, 0xec, 0xa0, + 0x5f, 0x17, 0x73, 0x94, 0x29, 0x37, 0xad, 0x93, 0x13, 0x39, 0x6e, 0xc5, 0xbd, 0xcc, 0x51, 0x26, + 0xdc, 0xf4, 0x7e, 0x4e, 0x62, 0x3f, 0x35, 0x6e, 0x7d, 0x3a, 0x9a, 0xa3, 0x64, 0x77, 0xc4, 0xa0, + 0xc5, 0xdc, 0x17, 0x33, 0x5a, 0xc6, 0x12, 0x98, 0xc2, 0xa6, 0xbd, 0x28, 0xc5, 0x86, 0x5a, 0xa8, + 0x8a, 0x9d, 0x82, 0xc8, 0x02, 0x59, 0x62, 0xf6, 0xc3, 0x30, 0x7d, 0x2f, 0xe2, 0xd7, 0x56, 0xd6, + 0x1a, 0x2a, 0x49, 0x03, 0xbe, 0xcc, 0x4f, 0x5d, 0x7e, 0xf3, 0x90, 0x4d, 0xe7, 0xa2, 0x4e, 
0x83, + 0x49, 0xfe, 0xe9, 0xbb, 0xf5, 0x22, 0xee, 0x6c, 0x7a, 0xdd, 0x28, 0x49, 0x50, 0xa4, 0x7f, 0xb7, + 0x4c, 0x55, 0xd6, 0x0d, 0x98, 0xc9, 0xb1, 0x61, 0x53, 0x00, 0x02, 0xe8, 0xdc, 0xbb, 0xd3, 0x58, + 0x5e, 0xaf, 0x7d, 0x86, 0xd5, 0x60, 0x02, 0x7f, 0x2f, 0xdf, 0xa9, 0x2f, 0xac, 0x2e, 0x2f, 0xd5, + 0x4a, 0x6c, 0x06, 0x26, 0x11, 0xb2, 0xb4, 0xd2, 0x20, 0x50, 0x99, 0xf2, 0x28, 0xdb, 0x35, 0x5a, + 0xba, 0x31, 0xbe, 0xe9, 0x8a, 0x33, 0xc5, 0xfa, 0x2b, 0x65, 0x38, 0xad, 0x8e, 0x15, 0x1e, 0x8b, + 0x6b, 0xb6, 0xe7, 0x6f, 0x3d, 0xe5, 0xa7, 0xc3, 0x35, 0xe3, 0x74, 0x78, 0x31, 0x73, 0x52, 0x67, + 0x7a, 0x79, 0xc0, 0x11, 0xf1, 0x6b, 0x63, 0x70, 0xe6, 0x40, 0x2a, 0xf6, 0x81, 0x38, 0xcd, 0x3d, + 0xee, 0xc7, 0x2b, 0xad, 0x36, 0x17, 0x22, 0x6a, 0xd0, 0x8b, 0x65, 0xec, 0x8a, 0x17, 0xf0, 0xe1, + 0x1a, 0x0b, 0x1d, 0xaf, 0xd5, 0xe6, 0x4e, 0x4c, 0xc5, 0xc6, 0x74, 0xcb, 0x53, 0x0b, 0x96, 0xb7, + 0x38, 0xef, 0xd6, 0xdb, 0xde, 0x2e, 0x5f, 0xf1, 0x63, 0x1e, 0xee, 0xa2, 0xef, 0x61, 0xc2, 0x72, + 0x87, 0xf3, 0xae, 0xe3, 0x8a, 0x52, 0xc7, 0x93, 0xc5, 0x26, 0xcb, 0x1c, 0x35, 0xbb, 0xa6, 0xb1, + 0x44, 0x29, 0xff, 0xb6, 0xfb, 0x48, 0xfa, 0x2f, 0xc9, 0x1c, 0x6d, 0x09, 0x4b, 0x8a, 0x0a, 0xd5, + 0x71, 0x1f, 0xd9, 0x79, 0x12, 0xf6, 0x35, 0x38, 0x21, 0x0f, 0x20, 0x19, 0x1f, 0x5a, 0xf5, 0x98, + 0xa2, 0x4f, 0xbf, 0x2c, 0x2e, 0x66, 0x2a, 0xf4, 0x83, 0x8a, 0xf9, 0x5e, 0xd4, 0xeb, 0x62, 0x2e, + 0x6c, 0x5d, 0x1c, 0xc8, 0x99, 0xe1, 0xb8, 0xcd, 0xa3, 0x48, 0x85, 0xfe, 0x92, 0xea, 0x38, 0x7d, + 0x30, 0x9d, 0x0e, 0x95, 0xdb, 0x7d, 0x29, 0xd9, 0x0d, 0x98, 0x7a, 0xc0, 0x37, 0xf4, 0xef, 0x33, + 0x92, 0x6c, 0x55, 0xb5, 0x87, 0x7c, 0xa3, 0xff, 0xc7, 0xc9, 0xd0, 0x31, 0x0f, 0x8d, 0x6b, 0x1e, + 0xed, 0xad, 0x8a, 0x8b, 0xb3, 0xcf, 0x43, 0xbc, 0xff, 0x8e, 0xe2, 0x66, 0x30, 0x9b, 0x4a, 0xc8, + 0x66, 0xb9, 0xd4, 0xd8, 0x62, 0xb4, 0x9d, 0xb6, 0x84, 0x3b, 0xe2, 0xa2, 0x9c, 0xb1, 0xbb, 0x31, + 0xa9, 0xd8, 0x37, 0x60, 0xda, 0x0e, 0x7a, 0xb1, 0xe7, 0x6f, 0x35, 0xc4, 0x0d, 0x93, 0x6f, 0xd1, + 0x81, 0x94, 0xa6, 0xb0, 0xc8, 
0x94, 0x4a, 0x93, 0x4e, 0x02, 0x3a, 0x91, 0x84, 0x1a, 0x27, 0x82, + 0x49, 0xc0, 0xbe, 0x0e, 0x53, 0x14, 0x51, 0x37, 0xa9, 0x60, 0xcc, 0xc8, 0xaf, 0x6d, 0x16, 0xde, + 0x7f, 0x53, 0xba, 0x9b, 0x20, 0xb4, 0xa8, 0x82, 0x0c, 0x37, 0xf6, 0x15, 0x39, 0x58, 0x6b, 0x9e, + 0xbf, 0x95, 0x4c, 0x63, 0xc0, 0x91, 0x7f, 0x3d, 0x1d, 0x92, 0xae, 0x68, 0xae, 0x9a, 0xc6, 0x7d, + 0x7c, 0xe7, 0xf2, 0x7c, 0x58, 0x0c, 0x67, 0xea, 0x51, 0xe4, 0x45, 0xb1, 0x8c, 0xf0, 0xb2, 0xfc, + 0x88, 0x37, 0x7b, 0x02, 0xf9, 0x41, 0x10, 0xee, 0xf0, 0x90, 0xbc, 0xb7, 0x87, 0x17, 0x2e, 0x3e, + 0xde, 0x9f, 0x7f, 0xc5, 0x45, 0x44, 0x47, 0x06, 0x85, 0x71, 0xb8, 0x42, 0x75, 0x1e, 0x12, 0xae, + 0xd6, 0x87, 0x83, 0x99, 0xb2, 0xaf, 0xc3, 0xc9, 0x45, 0x37, 0xe2, 0x2b, 0x7e, 0xc4, 0xfd, 0xc8, + 0x8b, 0xbd, 0x5d, 0x2e, 0x07, 0x15, 0x0f, 0xbf, 0x2a, 0xe5, 0x2e, 0x69, 0xba, 0x91, 0x58, 0x98, + 0x09, 0x8a, 0x23, 0x3f, 0x8a, 0x9e, 0x1a, 0xa5, 0x98, 0x0b, 0xb3, 0x61, 0xaa, 0xd1, 0xb8, 0xb1, + 0xe4, 0xb9, 0xc9, 0xba, 0x9a, 0xc4, 0xf1, 0x7a, 0x05, 0x9f, 0x74, 0xa3, 0x6d, 0xa7, 0xe5, 0xb9, + 0xc9, 0x82, 0xea, 0x33, 0x58, 0x19, 0x0e, 0xd6, 0x7e, 0x09, 0x6a, 0xd9, 0x4f, 0xc9, 0x3e, 0x84, + 0x31, 0x72, 0x3c, 0xe3, 0xd1, 0xb6, 0xd4, 0xbf, 0x28, 0x3f, 0xa6, 0x04, 0x6e, 0x12, 0xc9, 0xa0, + 0x71, 0xe4, 0xd6, 0xc6, 0x75, 0x2b, 0xf5, 0x1b, 0x9f, 0xb1, 0x53, 0x66, 0xac, 0x05, 0x13, 0xf4, + 0xb5, 0x38, 0xa6, 0xdf, 0xc9, 0xc4, 0xbd, 0xd1, 0x8b, 0x32, 0xfc, 0xc9, 0x37, 0x83, 0xe6, 0x04, + 0x21, 0x18, 0x55, 0x18, 0x5c, 0x17, 0x00, 0xaa, 0x8a, 0xd0, 0x3a, 0x0d, 0xa7, 0xfa, 0xb4, 0xd9, + 0xda, 0xc5, 0x97, 0x9e, 0x3e, 0x35, 0xb2, 0x0f, 0xe1, 0x38, 0x12, 0x2e, 0x06, 0xbe, 0xcf, 0x9b, + 0x31, 0x6e, 0x47, 0xca, 0xea, 0xa2, 0x42, 0x16, 0xe6, 0xd4, 0xdf, 0x66, 0x82, 0x90, 0x4b, 0xe2, + 0x5c, 0xc8, 0xc1, 0xfa, 0x97, 0x65, 0x98, 0x95, 0x3b, 0x9c, 0xcd, 0x9b, 0x01, 0x6a, 0x1f, 0x9f, + 0xf2, 0x13, 0x75, 0xd9, 0x38, 0x51, 0x5f, 0x48, 0x22, 0x8a, 0x17, 0x75, 0xf2, 0x00, 0xe7, 0xea, + 0x7b, 0x19, 0xe7, 0xea, 0x43, 0x18, 0x51, 0x32, 0xd8, 0x59, 0x25, 
0xb8, 0xf5, 0x73, 0xb3, 0xb6, + 0x56, 0xa0, 0x7a, 0x8b, 0xef, 0xa1, 0xff, 0xa4, 0x18, 0xdf, 0x38, 0xbd, 0xb9, 0x55, 0x55, 0x34, + 0x0d, 0x1b, 0xff, 0x65, 0xf3, 0x30, 0x8c, 0xde, 0x98, 0x7a, 0xec, 0x2b, 0x04, 0xd8, 0xf4, 0x9f, + 0xf5, 0x7f, 0x96, 0xe0, 0xf4, 0x6d, 0xd7, 0xef, 0xb9, 0xed, 0x5b, 0x7c, 0x8f, 0x72, 0x4c, 0x77, + 0xe8, 0x13, 0x6f, 0x7a, 0x5b, 0xec, 0x12, 0x8c, 0xca, 0xbc, 0x7d, 0xc8, 0xbf, 0x4a, 0xef, 0x68, + 0xf9, 0x54, 0x7e, 0x0a, 0x8b, 0xdd, 0x82, 0x71, 0x2d, 0x50, 0x84, 0x74, 0x3a, 0x56, 0x23, 0xaf, + 0xda, 0x2c, 0x9d, 0xac, 0x8b, 0x03, 0x4a, 0xb8, 0x69, 0x40, 0x89, 0x3b, 0x30, 0xa1, 0xb4, 0x5d, + 0xc8, 0xad, 0x52, 0xcc, 0x6d, 0x2e, 0xd5, 0x91, 0xe7, 0x02, 0x47, 0x8c, 0x4b, 0x38, 0x86, 0x8b, + 0xf8, 0xcd, 0x12, 0x9c, 0xcd, 0x8e, 0x7c, 0x1a, 0xf3, 0xe5, 0xe3, 0x76, 0x79, 0x0f, 0x4e, 0x74, + 0x70, 0x00, 0x31, 0x20, 0x4d, 0x27, 0x19, 0x42, 0xb9, 0x17, 0x28, 0x77, 0xd3, 0xbe, 0x83, 0x4c, + 0x6f, 0xe2, 0x85, 0x2c, 0xf4, 0x37, 0xf1, 0x4e, 0x9e, 0xde, 0xfa, 0x99, 0x32, 0x3c, 0x7b, 0xd0, + 0x9c, 0x4c, 0x94, 0xd6, 0xa5, 0x22, 0xa5, 0x35, 0xeb, 0xc2, 0x31, 0xdc, 0x2f, 0x16, 0xb7, 0x79, + 0x73, 0x07, 0xe3, 0xc1, 0xdd, 0xa2, 0x8f, 0x56, 0x6c, 0x87, 0xfc, 0x5a, 0xa1, 0x1d, 0xf2, 0x49, + 0xda, 0xc4, 0x9a, 0xc8, 0x83, 0x42, 0xcd, 0x89, 0x6f, 0x60, 0x17, 0xb1, 0x66, 0x1c, 0x20, 0x4d, + 0xb5, 0x29, 0xb5, 0xfc, 0x2f, 0xf7, 0x59, 0x13, 0xd9, 0x2f, 0x83, 0x31, 0xfb, 0xc5, 0xba, 0x38, + 0x9e, 0xb2, 0xd0, 0x27, 0x4e, 0x0a, 0xb5, 0x7a, 0x7d, 0x87, 0x85, 0xd2, 0x28, 0xdf, 0x83, 0x69, + 0x2d, 0xe3, 0x27, 0xce, 0x2d, 0x53, 0xf1, 0x97, 0x0d, 0x06, 0x44, 0xcf, 0xb2, 0x19, 0x1a, 0x7b, + 0x8a, 0xeb, 0x38, 0x91, 0xf5, 0x93, 0x65, 0xa8, 0xd5, 0x7b, 0xf1, 0xf6, 0x5a, 0xc8, 0x37, 0x79, + 0xc8, 0xfd, 0x26, 0xff, 0x01, 0x0b, 0x2a, 0x61, 0x76, 0x6e, 0x20, 0xf5, 0xdf, 0x77, 0xa7, 0xe0, + 0x78, 0x11, 0x99, 0x18, 0x97, 0xf5, 0xc2, 0x7d, 0x0b, 0xf5, 0x4c, 0xdf, 0x2e, 0xc1, 0x44, 0x83, + 0x37, 0x03, 0xbf, 0x75, 0x0d, 0xdd, 0x60, 0xe4, 0xe8, 0xb8, 0x24, 0x71, 0x0b, 0xb8, 0xb3, 0x99, + 0xf1, 
0x8f, 0xf9, 0xfe, 0xfe, 0xfc, 0x97, 0x06, 0x53, 0xf4, 0x34, 0x03, 0x7c, 0x38, 0x88, 0x31, + 0x6d, 0x7b, 0x52, 0x05, 0xb5, 0xc6, 0x36, 0xaa, 0x65, 0x0b, 0x30, 0x29, 0x4f, 0xbb, 0x40, 0xcf, + 0x76, 0x46, 0x39, 0x0b, 0x54, 0x41, 0xee, 0x89, 0xd6, 0x20, 0x61, 0x57, 0xa0, 0x72, 0xef, 0xf2, + 0x35, 0xf9, 0x15, 0x54, 0x1c, 0xc9, 0x7b, 0x97, 0xaf, 0xa1, 0x36, 0x59, 0x4c, 0xe8, 0xc9, 0xde, + 0x65, 0xc3, 0xc1, 0xe4, 0xde, 0xe5, 0x6b, 0xec, 0x4f, 0xc2, 0x89, 0x25, 0x2f, 0x92, 0x55, 0x50, + 0x80, 0x93, 0x16, 0x06, 0x52, 0x1b, 0xe9, 0xb3, 0x3a, 0x3f, 0x57, 0xb8, 0x3a, 0x9f, 0x6f, 0x25, + 0x4c, 0x1c, 0x8a, 0x9e, 0xd2, 0xca, 0x66, 0x75, 0x2b, 0xae, 0x87, 0x7d, 0x04, 0x53, 0x68, 0x75, + 0x80, 0x31, 0x5f, 0x30, 0xfd, 0xf2, 0x68, 0x9f, 0x9a, 0xdf, 0x28, 0xac, 0x79, 0x8e, 0x4c, 0x42, + 0x31, 0x72, 0x0c, 0xa6, 0x6a, 0x36, 0x94, 0x66, 0x06, 0x67, 0x76, 0x13, 0xa6, 0xe5, 0xed, 0xe5, + 0xee, 0xe6, 0xfa, 0x36, 0x5f, 0x72, 0xf7, 0xe4, 0x53, 0x38, 0x2a, 0x44, 0xe4, 0x95, 0xc7, 0x09, + 0x36, 0x9d, 0x78, 0x9b, 0x3b, 0x2d, 0xd7, 0x90, 0xf3, 0x33, 0x84, 0xec, 0x5b, 0x30, 0xbe, 0x1a, + 0x34, 0xc5, 0xc5, 0x15, 0x77, 0x3e, 0x7a, 0x0d, 0xff, 0xb2, 0xd8, 0xa8, 0xda, 0x04, 0xce, 0xdc, + 0x46, 0xbe, 0xbf, 0x3f, 0xff, 0xee, 0x51, 0xa7, 0x8d, 0x56, 0x81, 0xad, 0xd7, 0xc6, 0x16, 0xa1, + 0xfa, 0x80, 0x6f, 0x88, 0xde, 0xfa, 0x32, 0x92, 0x81, 0x5a, 0x75, 0x0a, 0x2c, 0x1d, 0xc9, 0xe4, + 0x2f, 0xc3, 0x91, 0x4c, 0xc2, 0x58, 0x08, 0x33, 0x38, 0x3e, 0x6b, 0x6e, 0x14, 0x3d, 0x0c, 0xc2, + 0x16, 0x66, 0x4e, 0xef, 0xe7, 0x89, 0x72, 0xb9, 0x70, 0xf0, 0x9f, 0xa5, 0xc1, 0xef, 0x6a, 0x1c, + 0xf4, 0xfb, 0x57, 0x8e, 0x3d, 0xfb, 0x06, 0x4c, 0xc9, 0x08, 0xb2, 0xb7, 0xaf, 0xd5, 0x71, 0x25, + 0x4c, 0x18, 0x71, 0xd8, 0xcc, 0x42, 0xf5, 0xd8, 0x8b, 0xb0, 0x24, 0xf8, 0x61, 0x67, 0xd3, 0x35, + 0x6d, 0xa5, 0x74, 0x12, 0xb6, 0x06, 0xe3, 0x4b, 0x7c, 0xd7, 0x6b, 0x72, 0x0c, 0xe0, 0x24, 0xe3, + 0x1d, 0xa8, 0x60, 0x5b, 0x5a, 0x09, 0x1d, 0xe7, 0x2d, 0x04, 0x50, 0x38, 0x28, 0xd3, 0xc5, 0x34, + 0x41, 0x64, 0x57, 0xa1, 0xb2, 0xb2, 0xb4, 
0x26, 0xc3, 0x1d, 0xcc, 0x24, 0x71, 0x9a, 0xd7, 0x64, + 0x0e, 0x40, 0xf2, 0xdd, 0xf2, 0x5a, 0x46, 0xb0, 0x84, 0x95, 0xa5, 0x35, 0xb6, 0x09, 0x93, 0x64, + 0x93, 0xcc, 0x5d, 0x1a, 0xdb, 0xe9, 0x3e, 0x63, 0x7b, 0xb1, 0x70, 0x6c, 0x67, 0xa5, 0xad, 0xb3, + 0xa4, 0xd6, 0xd7, 0xbd, 0xc1, 0x56, 0xdc, 0x08, 0x97, 0xf8, 0xa6, 0xdb, 0x6b, 0x2b, 0x2d, 0xf6, + 0xfa, 0xfa, 0x2a, 0xfa, 0xa6, 0xc8, 0x1b, 0x61, 0x8b, 0x0a, 0x93, 0xf1, 0xeb, 0x1f, 0x4d, 0x25, + 0xcf, 0x87, 0xbd, 0x03, 0x43, 0x77, 0x77, 0x62, 0x57, 0x06, 0x36, 0x50, 0xe3, 0x28, 0x40, 0xaa, + 0xfb, 0x64, 0x54, 0xbe, 0x63, 0x64, 0x53, 0x41, 0x1a, 0xf1, 0x29, 0x6e, 0xb8, 0x61, 0xeb, 0xa1, + 0x1b, 0x62, 0x70, 0xc2, 0x63, 0x06, 0x0b, 0xad, 0x84, 0x3e, 0xc5, 0xb6, 0x04, 0x64, 0xac, 0x03, + 0x74, 0x16, 0xec, 0x87, 0xe0, 0x74, 0xe4, 0x6d, 0xf9, 0x18, 0x23, 0xdf, 0x71, 0xdb, 0x5b, 0x41, + 0xe8, 0xc5, 0xdb, 0x1d, 0x27, 0xea, 0x79, 0x31, 0xc7, 0x08, 0x03, 0x53, 0xc9, 0x85, 0xab, 0xa1, + 0xf0, 0xea, 0x0a, 0xad, 0x21, 0xb0, 0xec, 0x53, 0x51, 0x71, 0x01, 0xfb, 0x0a, 0x4c, 0xea, 0x5b, + 0x72, 0x34, 0x7b, 0xe2, 0x6c, 0xe5, 0xfc, 0x54, 0x72, 0x54, 0x67, 0xb7, 0x70, 0x95, 0xe3, 0x51, + 0x3b, 0x23, 0x22, 0x33, 0xc7, 0xa3, 0xc6, 0x8b, 0xd9, 0x70, 0x2a, 0x22, 0xe5, 0x60, 0xcf, 0xf7, + 0x1e, 0x61, 0x90, 0x5c, 0xe9, 0xc3, 0x34, 0x7b, 0xd2, 0x38, 0xfa, 0x1a, 0x88, 0x75, 0xef, 0xce, + 0xca, 0x87, 0xf7, 0x22, 0x1e, 0x4a, 0x57, 0xa6, 0xe3, 0x44, 0x7b, 0xcf, 0xf7, 0x1e, 0xa5, 0x50, + 0xd2, 0x3c, 0xde, 0x1c, 0xaa, 0xb2, 0xda, 0x31, 0x7b, 0x46, 0xae, 0x02, 0xf9, 0xe5, 0x6e, 0x5f, + 0xab, 0xdb, 0xa3, 0x6b, 0x2b, 0xf7, 0x1b, 0xed, 0x20, 0xb6, 0xb6, 0xe1, 0x78, 0x11, 0x57, 0x36, + 0x9b, 0x91, 0x40, 0x53, 0x51, 0xf3, 0x19, 0x18, 0xdb, 0xf4, 0xc2, 0x28, 0x76, 0x7a, 0x1e, 0xc9, + 0x0b, 0xc3, 0x76, 0x15, 0x01, 0xf7, 0xbc, 0x16, 0x3b, 0x0d, 0x55, 0x7c, 0x20, 0x16, 0x65, 0x15, + 0x2c, 0x1b, 0x15, 0xbf, 0xef, 0x79, 0x2d, 0xeb, 0xbf, 0x2a, 0xe1, 0x11, 0xc4, 0x5e, 0xc1, 0x64, + 0x04, 0x89, 0xb9, 0x18, 0x3e, 0xde, 0xb8, 0xdd, 0x4c, 0xf2, 0x69, 0x42, 0x61, 
0xaf, 0xc1, 0xc8, + 0x35, 0xb7, 0xc9, 0x13, 0x4b, 0x3c, 0x44, 0xde, 0x44, 0x88, 0x7e, 0x23, 0x21, 0x1c, 0x71, 0xb9, + 0xa4, 0xa5, 0x59, 0x8f, 0x63, 0x1e, 0xd1, 0xfe, 0xb9, 0x58, 0x57, 0xd6, 0x77, 0x78, 0xb9, 0x94, + 0x4b, 0xda, 0x4d, 0x11, 0x32, 0xae, 0xe8, 0x85, 0x1c, 0xac, 0xdf, 0x2f, 0xa5, 0x7b, 0x2a, 0x7b, + 0x19, 0x86, 0xec, 0xb5, 0xa4, 0xfd, 0x14, 0xd7, 0x2f, 0xd3, 0x7c, 0x44, 0x60, 0x5f, 0x81, 0x13, + 0x1a, 0x9f, 0x9c, 0x5f, 0xfc, 0x4b, 0x18, 0x76, 0x4e, 0x6b, 0x49, 0xb1, 0x73, 0x7c, 0x31, 0x0f, + 0xbc, 0x49, 0xa7, 0x05, 0x4b, 0xdc, 0xf7, 0x88, 0xb7, 0xd6, 0x59, 0x9d, 0x77, 0x0b, 0x11, 0xb2, + 0x9d, 0x2d, 0xe2, 0x40, 0x51, 0xe7, 0xac, 0x5f, 0x2d, 0x19, 0x7b, 0x25, 0x3b, 0x67, 0x48, 0xf1, + 0xb8, 0xae, 0x33, 0x1a, 0x35, 0x92, 0xe7, 0xdf, 0x06, 0xa8, 0xf7, 0xe2, 0x60, 0xd9, 0x0f, 0x83, + 0x76, 0x5b, 0xba, 0x69, 0xd0, 0x55, 0xab, 0x17, 0x07, 0x0e, 0x47, 0xb0, 0x11, 0xcf, 0x2a, 0x41, + 0x2e, 0x0c, 0x21, 0x50, 0xf9, 0xb8, 0x21, 0x04, 0xc4, 0xbd, 0xc4, 0xd8, 0x1e, 0xde, 0x00, 0x35, + 0xe9, 0x75, 0xbf, 0xa8, 0xae, 0xb7, 0xeb, 0x44, 0xed, 0xc0, 0x88, 0x9a, 0x2c, 0xd1, 0xd8, 0x9f, + 0x2f, 0xc1, 0x49, 0xf2, 0xc5, 0xbf, 0xd3, 0xeb, 0x6c, 0xf0, 0xf0, 0xbe, 0xdb, 0xf6, 0x5a, 0x69, + 0x7e, 0x9a, 0xd4, 0xf1, 0x4e, 0xab, 0xa6, 0x18, 0x9f, 0xb4, 0x54, 0x14, 0x1b, 0xc0, 0xf1, 0xb1, + 0xd0, 0xd9, 0x4d, 0x4a, 0x75, 0x2d, 0x55, 0x31, 0x3d, 0x5b, 0x81, 0xf1, 0x35, 0xcf, 0xc7, 0x14, + 0xfb, 0x69, 0x18, 0xac, 0x97, 0x29, 0xb4, 0x86, 0x98, 0xc2, 0xcd, 0x6d, 0x7e, 0xc0, 0xd6, 0xad, + 0xd3, 0x5a, 0xbf, 0x5c, 0x82, 0xe7, 0x0f, 0x6d, 0xb0, 0xb8, 0x81, 0x2e, 0x0f, 0x74, 0x03, 0x55, + 0xc9, 0xf3, 0xbf, 0x0a, 0x27, 0x74, 0x56, 0xeb, 0xa1, 0xeb, 0xe9, 0x51, 0x44, 0x0a, 0x06, 0x20, + 0x16, 0x28, 0x59, 0xb1, 0xb5, 0x98, 0x89, 0xf5, 0xff, 0x94, 0x60, 0x2c, 0x71, 0x43, 0x7e, 0x4a, + 0xaf, 0x33, 0x57, 0x8d, 0xeb, 0x8c, 0xca, 0x43, 0x96, 0xf4, 0x8a, 0xec, 0xf6, 0x0a, 0x1e, 0x44, + 0xa6, 0x35, 0xa7, 0x6d, 0x04, 0xfc, 0xb9, 0x32, 0x8c, 0x8b, 0xad, 0x9a, 0x0c, 0x42, 0x7e, 0xb0, + 0xd2, 0x1d, 0x25, 
0xfd, 0x1a, 0x28, 0xb7, 0xcb, 0x3f, 0x2b, 0xe1, 0x43, 0xa1, 0x4e, 0x21, 0x46, + 0x43, 0x80, 0xf4, 0xd1, 0x10, 0x27, 0xaa, 0x8d, 0x50, 0xca, 0x74, 0xb1, 0x2a, 0x47, 0x42, 0x66, + 0xba, 0x68, 0xdb, 0x02, 0xc6, 0xbe, 0x04, 0xc3, 0xf7, 0xf0, 0xd9, 0xc3, 0x8c, 0xfe, 0x9b, 0xf0, + 0xc7, 0x42, 0xda, 0xef, 0x7b, 0x91, 0x99, 0x06, 0x85, 0x08, 0x59, 0x03, 0x46, 0x17, 0x43, 0xee, + 0xc6, 0xbc, 0x25, 0x07, 0x64, 0xa0, 0x80, 0x92, 0x4d, 0x22, 0xc9, 0x06, 0x94, 0x94, 0x9c, 0xc4, + 0x3e, 0xc6, 0xd2, 0x3e, 0xa2, 0xc9, 0x5b, 0xf4, 0xd4, 0x7e, 0xf4, 0xf7, 0x8d, 0x8f, 0x7e, 0x26, + 0xf7, 0xd1, 0xa9, 0x7b, 0x03, 0x7d, 0xfb, 0x5f, 0x2f, 0xc1, 0xc9, 0x62, 0x42, 0xf6, 0x02, 0x8c, + 0xdc, 0x5d, 0x5f, 0x4b, 0xcd, 0x4c, 0xb1, 0x2b, 0x41, 0x17, 0x95, 0x42, 0xb6, 0x2c, 0x62, 0xaf, + 0xc3, 0xc8, 0x07, 0xf6, 0x62, 0x6a, 0x4d, 0x89, 0x1b, 0xdc, 0x37, 0x85, 0xe4, 0x65, 0x9c, 0x6a, + 0x12, 0x49, 0xff, 0xb6, 0x95, 0x27, 0xf6, 0x6d, 0x7f, 0xbc, 0x0c, 0xd3, 0xf5, 0x66, 0x93, 0x47, + 0x91, 0xcc, 0x89, 0xfb, 0xd4, 0x7e, 0xd8, 0xe2, 0x78, 0xcd, 0x46, 0xdf, 0x06, 0xfa, 0xaa, 0xff, + 0xa0, 0x44, 0x51, 0xef, 0x05, 0xd5, 0xae, 0xc7, 0x1f, 0xae, 0x6f, 0x87, 0x3c, 0xda, 0x0e, 0xda, + 0xad, 0x41, 0x73, 0x9f, 0xa3, 0xcc, 0x88, 0x79, 0x63, 0x75, 0xeb, 0xa0, 0x4d, 0x84, 0x18, 0x32, + 0x23, 0xe5, 0x96, 0xbd, 0x04, 0xa3, 0xf5, 0x6e, 0x37, 0x0c, 0x76, 0x69, 0xd9, 0xcb, 0x44, 0x47, + 0x2e, 0x81, 0x8c, 0x10, 0x9a, 0x04, 0x12, 0xcd, 0x58, 0xe2, 0xfe, 0x9e, 0x6e, 0xdb, 0xd9, 0xe2, + 0xbe, 0x7e, 0x29, 0xc1, 0x72, 0xab, 0x01, 0x6c, 0x2d, 0x0c, 0x3a, 0x41, 0xcc, 0x5b, 0xd4, 0x1f, + 0x8c, 0x3c, 0x7a, 0x68, 0x92, 0x88, 0x75, 0x2f, 0x6e, 0x1b, 0x49, 0x22, 0x62, 0x01, 0xb0, 0x09, + 0x2e, 0xce, 0xee, 0x33, 0xc6, 0x98, 0x2e, 0x85, 0x7b, 0x76, 0xcf, 0x5f, 0xf6, 0x43, 0xaf, 0xb9, + 0x8d, 0xb1, 0x2d, 0xee, 0x00, 0xd8, 0xdc, 0x8d, 0x02, 0x5f, 0x13, 0xd6, 0x2e, 0x0a, 0xf1, 0x2b, + 0x44, 0x68, 0x5e, 0xef, 0x30, 0x23, 0x39, 0xa5, 0x54, 0xb6, 0xc6, 0x81, 0xd5, 0x61, 0x92, 0x7e, + 0x89, 0xce, 0x74, 0x13, 0x41, 0xfc, 0x19, 0x8a, 0x34, 
0x81, 0x2c, 0xbb, 0x58, 0x62, 0x46, 0xa1, + 0xd2, 0x28, 0xac, 0x7f, 0x3d, 0x0c, 0x13, 0xfa, 0x27, 0x65, 0x16, 0xa5, 0x57, 0x0f, 0x42, 0x3d, + 0x00, 0xb0, 0x8b, 0x10, 0x5b, 0x96, 0xa4, 0xd1, 0xb3, 0xcb, 0x87, 0x46, 0xcf, 0x7e, 0x00, 0x93, + 0x6b, 0x61, 0x80, 0x69, 0xa4, 0xd0, 0x54, 0x43, 0xee, 0xdf, 0xc7, 0x34, 0xad, 0x81, 0x98, 0x7d, + 0x68, 0x0c, 0x82, 0xf7, 0xb2, 0xae, 0xc4, 0x76, 0x84, 0xe8, 0x6b, 0xe8, 0xcc, 0x0c, 0x3e, 0x64, + 0x67, 0x26, 0x7a, 0xa2, 0x67, 0x74, 0xa4, 0x4e, 0x9b, 0x76, 0x66, 0x02, 0xa2, 0x6f, 0x10, 0xc3, + 0x4f, 0x6a, 0x83, 0x60, 0x3f, 0x53, 0x82, 0xf1, 0xba, 0xef, 0xcb, 0xa8, 0xdc, 0x87, 0xc4, 0x10, + 0xfd, 0xaa, 0x34, 0x35, 0x7b, 0xf7, 0x63, 0x99, 0x9a, 0xa1, 0xb0, 0x15, 0xa1, 0xa4, 0x9e, 0x56, + 0x68, 0x44, 0xd6, 0x4b, 0xc1, 0xec, 0x5d, 0xa8, 0x25, 0x2b, 0x73, 0xc5, 0x6f, 0xf1, 0x47, 0x3c, + 0x9a, 0x1d, 0x3d, 0x5b, 0x39, 0x3f, 0x29, 0x13, 0x6a, 0xea, 0x92, 0x79, 0x16, 0x91, 0xad, 0x03, + 0xb8, 0xc9, 0x92, 0x90, 0x2f, 0xe0, 0xa7, 0xd3, 0xd7, 0xca, 0xcc, 0x9a, 0x51, 0x0f, 0x35, 0xe2, + 0x37, 0xbe, 0xe6, 0x9b, 0x0f, 0x35, 0xc9, 0xd2, 0xea, 0xc0, 0x74, 0x3d, 0x8a, 0x7a, 0x1d, 0xde, + 0x88, 0xdd, 0x30, 0xc6, 0x54, 0xdf, 0x30, 0xb8, 0x0d, 0xb5, 0x8b, 0xa4, 0x62, 0x46, 0x84, 0xb1, + 0x53, 0x90, 0xf7, 0x3b, 0xcb, 0x9b, 0xd2, 0x97, 0xda, 0xa7, 0xf2, 0xed, 0xa5, 0x95, 0xfa, 0xe3, + 0x25, 0x38, 0xa9, 0x4f, 0xfa, 0x46, 0x6f, 0x43, 0xa6, 0xdf, 0x62, 0x17, 0x61, 0x4c, 0xce, 0xc9, + 0xe4, 0x12, 0x99, 0xcf, 0x58, 0x9e, 0xa2, 0xb0, 0x65, 0x31, 0x0d, 0x05, 0x0f, 0x79, 0xeb, 0x38, + 0x96, 0xd9, 0x5c, 0x45, 0x11, 0xbe, 0xd7, 0xc9, 0x94, 0xeb, 0xe2, 0xb7, 0x39, 0x3f, 0x05, 0xc4, + 0xfa, 0x22, 0xcc, 0x98, 0x5f, 0xa2, 0xc1, 0x63, 0x76, 0x01, 0x46, 0xd5, 0xe7, 0x2b, 0x15, 0x7f, + 0x3e, 0x55, 0x6e, 0x3d, 0x00, 0x96, 0xa3, 0x8f, 0xd0, 0x26, 0x54, 0xdc, 0xcf, 0xe9, 0xe9, 0x42, + 0x59, 0x64, 0xe4, 0x10, 0x17, 0x8e, 0xc9, 0xf6, 0x8d, 0x1b, 0x21, 0x02, 0x30, 0x15, 0xd9, 0x77, + 0x18, 0x1c, 0x2b, 0x38, 0x28, 0x0e, 0x11, 0xe4, 0xe6, 0xcd, 0x0d, 0x62, 0x2c, 0x09, 0x41, 
0xac, + 0xb6, 0x85, 0x2f, 0xc2, 0xf0, 0xa1, 0xdb, 0x01, 0x85, 0x9e, 0xc8, 0xec, 0x02, 0x44, 0xf6, 0xa9, + 0x08, 0x73, 0x7a, 0xc8, 0xf1, 0xe1, 0x27, 0x16, 0x72, 0x1c, 0x43, 0x05, 0x6a, 0x9b, 0xb8, 0x19, + 0xbe, 0x90, 0x32, 0xf0, 0xe7, 0xb6, 0x2d, 0x93, 0x84, 0x78, 0x44, 0x41, 0x7b, 0x97, 0x4b, 0x1e, + 0xa3, 0x3a, 0x0f, 0x2c, 0x28, 0xe4, 0xa1, 0x91, 0xb0, 0xbf, 0x55, 0x02, 0x26, 0x21, 0xfa, 0x9e, + 0x55, 0x3d, 0x68, 0xcf, 0x6a, 0x3d, 0x99, 0x3d, 0xeb, 0x8c, 0x6a, 0x63, 0xf1, 0xde, 0x55, 0xd0, + 0x2c, 0xf6, 0x37, 0x4a, 0x30, 0x43, 0xa1, 0xaa, 0xf5, 0xc6, 0x1e, 0x18, 0x7e, 0xb8, 0xf9, 0x64, + 0x1a, 0xfb, 0x6c, 0x84, 0xd5, 0xf6, 0x69, 0x6b, 0xbe, 0x51, 0xec, 0x87, 0x00, 0x92, 0x15, 0x45, + 0xe9, 0xd8, 0xc6, 0x2f, 0x3f, 0x5b, 0xb0, 0x0b, 0x24, 0x48, 0x69, 0x6a, 0xf1, 0x38, 0xa1, 0x33, + 0xdc, 0x19, 0x13, 0x28, 0xfb, 0x93, 0x94, 0x3c, 0x29, 0x81, 0xc8, 0x28, 0xfd, 0xb3, 0xe3, 0x58, + 0xcb, 0x67, 0xfb, 0x0b, 0x72, 0x17, 0x8b, 0xc8, 0x28, 0x79, 0x5f, 0xe2, 0xa1, 0x10, 0xc6, 0x9d, + 0x6c, 0x02, 0xa5, 0x2c, 0x05, 0x26, 0xbf, 0xc0, 0xd6, 0x53, 0xfa, 0xef, 0x3e, 0xfb, 0xdb, 0x69, + 0xb5, 0x16, 0x68, 0x7f, 0xcb, 0xf8, 0xcf, 0x22, 0x88, 0x7d, 0x00, 0x2c, 0x89, 0xf1, 0x4c, 0x30, + 0xae, 0x52, 0x83, 0xd3, 0x63, 0x41, 0x1a, 0x2b, 0x3a, 0x54, 0xc5, 0xfa, 0x24, 0xc9, 0x13, 0x33, + 0x0e, 0xc7, 0x65, 0xa7, 0x05, 0x94, 0x62, 0x5e, 0xac, 0x2c, 0x45, 0xb3, 0x53, 0x46, 0x0e, 0x84, + 0xb4, 0x64, 0xe1, 0x39, 0xd9, 0xce, 0x93, 0x49, 0xf0, 0x0c, 0x33, 0x0a, 0x45, 0x21, 0x3b, 0x76, + 0x15, 0xc6, 0x30, 0xc4, 0xd8, 0x0d, 0x65, 0xe9, 0x2a, 0xad, 0xee, 0x30, 0x18, 0x99, 0xb3, 0x6d, + 0xda, 0xab, 0xa6, 0xa8, 0xe2, 0x0e, 0x43, 0x12, 0x20, 0xaa, 0xf4, 0xa5, 0x92, 0xa6, 0x15, 0xee, + 0x39, 0x61, 0xcf, 0x0c, 0x5f, 0x87, 0x48, 0xec, 0x1b, 0x30, 0x7e, 0xdb, 0x7d, 0xa4, 0xd4, 0x42, + 0x52, 0x6d, 0x7f, 0x98, 0xf7, 0x1f, 0xf6, 0xa6, 0xe3, 0x3e, 0x72, 0x5a, 0xbd, 0x6c, 0xea, 0x40, + 0xf2, 0xfe, 0xd3, 0x58, 0xb2, 0xaf, 0x01, 0x68, 0xef, 0x0c, 0xec, 0xd0, 0x0a, 0x9e, 0x57, 0x99, + 0x3d, 0x0a, 0xdf, 0x1f, 0x90, 
0xbf, 0xc6, 0x30, 0x23, 0x39, 0x1c, 0xff, 0xf4, 0x24, 0x87, 0x13, + 0x9f, 0x9e, 0xe4, 0x40, 0xcf, 0x5c, 0xf4, 0xed, 0x71, 0x07, 0xdf, 0x93, 0x5a, 0xfe, 0x83, 0x6a, + 0x53, 0x66, 0x07, 0x35, 0x3c, 0x0a, 0xf6, 0x32, 0x55, 0x64, 0xf8, 0xb1, 0x10, 0x6a, 0xd9, 0x8b, + 0xc1, 0xec, 0x29, 0xc3, 0x2c, 0xf7, 0xc0, 0x4b, 0x04, 0xa9, 0x5b, 0xe5, 0x34, 0x72, 0x78, 0x02, + 0xd7, 0x85, 0xba, 0xdc, 0xc5, 0xe3, 0x3e, 0x8c, 0x4b, 0x76, 0x78, 0x39, 0x9d, 0x35, 0x2c, 0x34, + 0x8d, 0xea, 0x44, 0xb9, 0xb4, 0x97, 0x91, 0x87, 0x53, 0xe6, 0xea, 0xaa, 0x33, 0x62, 0x1d, 0xa8, + 0xad, 0x06, 0xfe, 0xd6, 0x3a, 0x0f, 0x3b, 0x18, 0x6c, 0x46, 0xec, 0x4d, 0xa7, 0x0d, 0x07, 0x14, + 0x55, 0x6c, 0xc4, 0xa4, 0xf1, 0xfc, 0x2d, 0xea, 0x46, 0x3b, 0xf0, 0xb7, 0x9c, 0x98, 0x87, 0x1d, + 0x8a, 0x62, 0x63, 0x1a, 0x05, 0xe6, 0x58, 0xcf, 0x6d, 0xc0, 0xe9, 0xbe, 0xfb, 0x5a, 0x41, 0x16, + 0xc5, 0x4b, 0x66, 0x16, 0xc5, 0xd3, 0xfd, 0xe4, 0x9f, 0xc8, 0xcc, 0x7f, 0x7f, 0xac, 0x76, 0xbc, + 0xbf, 0xe8, 0xf8, 0xbd, 0x72, 0x46, 0x1e, 0x92, 0x57, 0xd5, 0xb3, 0x50, 0x3e, 0x40, 0x60, 0x2c, + 0xaf, 0x2c, 0x89, 0xbb, 0x29, 0x4a, 0x4c, 0x5a, 0xe2, 0x5b, 0x21, 0x31, 0xe9, 0x12, 0x17, 0xca, + 0x4e, 0x9f, 0x54, 0x34, 0x7a, 0x0f, 0xa6, 0x1a, 0xdc, 0x0d, 0x9b, 0xdb, 0xb7, 0xf8, 0xde, 0xc3, + 0x20, 0x6c, 0xa9, 0x68, 0x05, 0x94, 0x97, 0x03, 0x4b, 0xcc, 0x60, 0x49, 0x3a, 0x2e, 0x5b, 0x52, + 0xe1, 0xbf, 0x86, 0xb1, 0xf6, 0xd3, 0x85, 0x47, 0x8c, 0x40, 0x38, 0x28, 0x32, 0x18, 0x7b, 0x2b, + 0x91, 0xa2, 0x79, 0xa8, 0x67, 0xc1, 0x0f, 0x15, 0xb0, 0x40, 0x98, 0xe6, 0xa1, 0xf5, 0xdb, 0x15, + 0x60, 0x54, 0xd3, 0xa2, 0xdb, 0x75, 0x31, 0xe0, 0x9e, 0x87, 0xe1, 0xf9, 0x6b, 0x12, 0xc7, 0xdd, + 0x68, 0x73, 0x3d, 0xb7, 0x85, 0x74, 0xfb, 0x48, 0xca, 0x9c, 0xec, 0x2d, 0x34, 0x47, 0xd8, 0xe7, + 0x1c, 0x2a, 0x7f, 0x92, 0x73, 0xe8, 0x1b, 0xf0, 0x4c, 0xbd, 0xdb, 0x6d, 0x7b, 0xcd, 0xa4, 0x96, + 0x6b, 0x41, 0xa8, 0x26, 0xbc, 0x11, 0x85, 0xc8, 0x4d, 0xd0, 0x72, 0x2d, 0x3d, 0x88, 0x85, 0x26, + 0x44, 0xd2, 0xbd, 0x5d, 0x0f, 0x0d, 0xaa, 0xd6, 0x69, 0xd1, 0x4d, 
0x5f, 0x23, 0x51, 0x3c, 0xbc, + 0x50, 0x09, 0x91, 0xc3, 0x69, 0xa6, 0x42, 0xf5, 0x50, 0x5f, 0x2c, 0x88, 0x26, 0x24, 0xec, 0x3d, + 0x18, 0xaf, 0xf7, 0xe2, 0x40, 0x32, 0x96, 0xfe, 0x4a, 0xa9, 0x67, 0x91, 0x6c, 0x8a, 0x71, 0x2f, + 0x4d, 0xd1, 0xad, 0xdf, 0xad, 0xc0, 0xe9, 0xfc, 0xe7, 0x95, 0xa5, 0xc9, 0xfa, 0x28, 0x1d, 0xb2, + 0x3e, 0x8a, 0x66, 0x43, 0x39, 0x4d, 0xa5, 0xfd, 0x24, 0x66, 0x03, 0x05, 0xfb, 0xfb, 0x98, 0xb3, + 0xa1, 0x21, 0xf6, 0xda, 0x54, 0x18, 0x19, 0xfa, 0xb8, 0xc2, 0x88, 0xce, 0x85, 0x5d, 0x80, 0x61, + 0x8a, 0x88, 0x3a, 0x9c, 0xbe, 0x6b, 0x66, 0x83, 0xa1, 0x12, 0x06, 0xfb, 0xff, 0xc1, 0x59, 0xda, + 0x93, 0xb2, 0x9d, 0x5d, 0xd8, 0x53, 0x1c, 0xe5, 0x87, 0xbb, 0xfc, 0x78, 0x7f, 0xfe, 0x22, 0x29, + 0xdf, 0x9c, 0xdc, 0xb0, 0x39, 0x1b, 0x7b, 0x8e, 0x6a, 0x99, 0x56, 0xc9, 0xa1, 0xbc, 0xad, 0x1f, + 0x86, 0x59, 0xca, 0x13, 0x55, 0xb0, 0x92, 0x0f, 0x59, 0x29, 0xa5, 0x4f, 0xbc, 0x52, 0xac, 0xc7, + 0x25, 0x98, 0xef, 0x57, 0xfd, 0x51, 0x67, 0xda, 0x4d, 0x98, 0xa4, 0xdd, 0xb1, 0x1e, 0xe9, 0xb7, + 0x59, 0x4a, 0x48, 0x8a, 0x75, 0x38, 0xb4, 0x9f, 0x3a, 0x6e, 0x94, 0x6b, 0xa5, 0x49, 0x9a, 0x9d, + 0x15, 0x95, 0x27, 0x31, 0x2b, 0xac, 0x47, 0x70, 0x5a, 0x9d, 0xc6, 0x49, 0xd4, 0x40, 0x55, 0x2e, + 0x7a, 0xb9, 0x93, 0xaa, 0xaa, 0xb1, 0x97, 0x99, 0xa3, 0x1c, 0xcb, 0xd9, 0x15, 0xa8, 0xd6, 0xd7, + 0x56, 0xf0, 0x8c, 0xd5, 0x03, 0x4c, 0xba, 0x5d, 0x8f, 0x0e, 0x65, 0x23, 0x84, 0x93, 0x44, 0xc4, + 0x1c, 0x9e, 0x69, 0x4b, 0xd8, 0xeb, 0x45, 0xde, 0xba, 0x94, 0x79, 0x96, 0xc0, 0xa6, 0xa3, 0xae, + 0xd2, 0xa2, 0x97, 0x0b, 0xb5, 0xe8, 0x4a, 0x0d, 0x5b, 0x29, 0x54, 0xc3, 0x2e, 0xc1, 0x74, 0xa3, + 0xb7, 0xa1, 0xea, 0xce, 0x46, 0xf8, 0x8a, 0x7a, 0x1b, 0x45, 0xb3, 0x36, 0x4b, 0x62, 0xfd, 0x85, + 0x32, 0x4c, 0xac, 0xb5, 0x7b, 0x5b, 0x9e, 0xbf, 0xe4, 0xc6, 0xee, 0x53, 0xab, 0xd8, 0x7f, 0xdb, + 0x50, 0xec, 0x27, 0x4e, 0xe9, 0x49, 0xc7, 0x06, 0xd2, 0xea, 0xff, 0x74, 0x09, 0xa6, 0x53, 0x12, + 0x12, 0xa6, 0x6e, 0xc0, 0x90, 0xf8, 0x21, 0x35, 0x47, 0x67, 0x73, 0x8c, 0x11, 0xeb, 0x62, 0xf2, + 0x97, 
0x54, 0xb5, 0xbb, 0xa6, 0x29, 0x91, 0x28, 0x9e, 0xfb, 0x1c, 0x8c, 0xa5, 0x6c, 0xf3, 0x32, + 0xda, 0x71, 0x5d, 0x46, 0x1b, 0xd3, 0x53, 0x5a, 0xff, 0x4a, 0x09, 0x6a, 0xd9, 0x9e, 0xb0, 0x5b, + 0x30, 0x2a, 0x38, 0x79, 0x3c, 0xca, 0xc6, 0x4c, 0xca, 0x60, 0x5e, 0x94, 0x68, 0xd4, 0x3c, 0x1c, + 0x7c, 0x4e, 0x10, 0x5b, 0x71, 0x98, 0xb3, 0x61, 0x42, 0xc7, 0x2a, 0x68, 0xdd, 0x6b, 0xa6, 0x04, + 0x79, 0xb2, 0x78, 0x1c, 0xf4, 0x56, 0xff, 0x55, 0xa3, 0xd5, 0x52, 0x38, 0x3c, 0x67, 0x4c, 0xae, + 0xc2, 0xa5, 0x88, 0x93, 0x06, 0x73, 0x6b, 0xcb, 0x2d, 0xba, 0xac, 0x07, 0x0d, 0xcf, 0x4d, 0xe8, + 0x04, 0x8f, 0xbd, 0x06, 0x23, 0x54, 0x9f, 0x9c, 0x67, 0x28, 0xe6, 0x75, 0x11, 0xa2, 0x5f, 0x32, + 0x09, 0xc7, 0xfa, 0xb9, 0x0a, 0x9c, 0x4c, 0x9b, 0x77, 0xaf, 0xdb, 0x72, 0x63, 0xbe, 0xe6, 0x86, + 0x6e, 0x27, 0x3a, 0x64, 0x05, 0x9c, 0xcf, 0x35, 0x4d, 0x46, 0xb1, 0x21, 0x98, 0xd6, 0x20, 0x2b, + 0xd3, 0x20, 0x7c, 0x40, 0xa0, 0x06, 0xa9, 0x66, 0xb0, 0x5b, 0x50, 0x69, 0xf0, 0x58, 0x9e, 0x8d, + 0xe7, 0x72, 0xa3, 0xaa, 0xb7, 0xeb, 0x62, 0x83, 0xc7, 0xf4, 0x11, 0x29, 0x1c, 0xbb, 0x11, 0x2f, + 0x46, 0x70, 0x61, 0x0f, 0x60, 0x64, 0xf9, 0x51, 0x97, 0x37, 0x63, 0x4c, 0xe6, 0xa9, 0x05, 0x4e, + 0x29, 0xe6, 0x47, 0xb8, 0xc4, 0xf2, 0xb8, 0xbc, 0xb5, 0x99, 0x89, 0xcb, 0x25, 0xbb, 0xb9, 0xab, + 0x50, 0x55, 0x95, 0x1f, 0x65, 0xe6, 0xce, 0xbd, 0x0d, 0xe3, 0x5a, 0x25, 0x47, 0x9a, 0xf4, 0x3f, + 0x2f, 0xf6, 0xd5, 0xa0, 0xcd, 0xe5, 0xc4, 0x59, 0xce, 0xc9, 0xf2, 0xa5, 0x34, 0x02, 0xa8, 0x3c, + 0x7b, 0x76, 0x64, 0xd1, 0x01, 0x42, 0xfd, 0x0a, 0x4c, 0x37, 0x76, 0xbc, 0x6e, 0x9a, 0xfb, 0xcd, + 0x90, 0x98, 0xa2, 0x1d, 0xaf, 0xeb, 0x48, 0xad, 0x57, 0xf6, 0x14, 0xcb, 0xd2, 0x59, 0xff, 0xb6, + 0x04, 0x23, 0xe2, 0xaf, 0xfb, 0x57, 0x9f, 0xd2, 0x2d, 0xf3, 0x8a, 0xb1, 0x65, 0xce, 0x68, 0xb9, + 0x5c, 0x71, 0xe3, 0xb8, 0x7a, 0xc8, 0x66, 0xb9, 0x2f, 0x3f, 0x10, 0x21, 0xb3, 0xeb, 0x30, 0x2a, + 0xad, 0x29, 0xa5, 0xd7, 0x98, 0x9e, 0x1c, 0x56, 0xd9, 0x59, 0x26, 0xea, 0xb1, 0xa0, 0x9b, 0xd5, + 0x27, 0x2a, 0x6a, 0x71, 0xef, 0x52, 0x59, 
0xf8, 0xf4, 0x8c, 0xff, 0x82, 0xcd, 0x62, 0xe0, 0x53, + 0x6a, 0xd3, 0x68, 0xe1, 0x94, 0xe4, 0xd4, 0x2f, 0x1a, 0x63, 0x5d, 0xbe, 0x7f, 0x56, 0x0e, 0x62, + 0x72, 0x52, 0x32, 0x29, 0x7e, 0x1a, 0xed, 0xc0, 0xc9, 0x46, 0xe3, 0x06, 0x5a, 0x5e, 0xaf, 0x05, + 0x61, 0x7c, 0x2d, 0x08, 0x1f, 0xca, 0xf0, 0x57, 0x0d, 0xd3, 0xea, 0xa8, 0xc8, 0x1e, 0xf6, 0xe5, + 0x42, 0x7b, 0xd8, 0x03, 0x2c, 0x93, 0x2c, 0x1f, 0x4e, 0x35, 0x1a, 0x37, 0x48, 0x62, 0xfb, 0xc3, + 0xa8, 0xef, 0x57, 0x4a, 0x30, 0xd3, 0x68, 0xdc, 0xc8, 0x54, 0xb5, 0xaa, 0x32, 0x9a, 0x96, 0x0c, + 0xd3, 0x87, 0xe2, 0x81, 0xc0, 0xaf, 0x50, 0x22, 0x09, 0xbc, 0x69, 0xa4, 0x93, 0x21, 0x26, 0x6c, + 0x2d, 0xc9, 0xa1, 0x5a, 0x36, 0x3c, 0x09, 0xfb, 0x74, 0x34, 0x75, 0xe7, 0x22, 0xa1, 0xd2, 0x7c, + 0x1e, 0x12, 0x10, 0xeb, 0x37, 0x4e, 0x52, 0x96, 0x56, 0x35, 0x5b, 0xbe, 0x00, 0x13, 0x92, 0x1e, + 0xdd, 0xed, 0xa4, 0x15, 0xd8, 0x69, 0xb1, 0x41, 0x6e, 0x12, 0x9c, 0x12, 0xee, 0x7d, 0x7f, 0x7f, + 0x7e, 0x48, 0x0c, 0x8d, 0x6d, 0xa0, 0xb3, 0xbb, 0x30, 0x79, 0xdb, 0x7d, 0xa4, 0xe9, 0x02, 0xc9, + 0x99, 0xfa, 0x82, 0xd8, 0x55, 0x3a, 0xee, 0xa3, 0x01, 0xec, 0x8d, 0x4d, 0x7a, 0xb6, 0x03, 0x53, + 0x66, 0x9f, 0xe4, 0x0c, 0xcc, 0x7f, 0xb1, 0x37, 0x0b, 0xbf, 0xd8, 0xe9, 0x6e, 0x10, 0xc6, 0xce, + 0x66, 0x42, 0x6e, 0x64, 0x24, 0xce, 0xb0, 0x66, 0x5f, 0x80, 0x19, 0x2d, 0x69, 0xcf, 0xb5, 0x20, + 0xec, 0xb8, 0xea, 0x42, 0x8c, 0x0f, 0x64, 0x68, 0x88, 0xb8, 0x89, 0x60, 0x3b, 0x8f, 0xc9, 0xbe, + 0x52, 0xe4, 0xa0, 0x3e, 0x9c, 0x1a, 0x5d, 0x17, 0x38, 0xa8, 0xf7, 0x33, 0xba, 0xce, 0xbb, 0xaa, + 0x6f, 0x1d, 0xe4, 0x94, 0x51, 0xa5, 0xde, 0x0f, 0xe4, 0x74, 0x91, 0x7c, 0xb9, 0x3e, 0xce, 0x17, + 0x97, 0xa1, 0xb2, 0xb0, 0x76, 0x0d, 0x9f, 0x75, 0x95, 0x05, 0xa6, 0xbf, 0xed, 0xfa, 0x4d, 0xbc, + 0xa8, 0x4a, 0x8f, 0x26, 0xfd, 0xa0, 0x5c, 0x58, 0xbb, 0xc6, 0x5c, 0x38, 0xb6, 0xc6, 0xc3, 0x8e, + 0x17, 0x7f, 0xf8, 0xe6, 0x9b, 0xda, 0xa7, 0xaa, 0x62, 0xd3, 0x2e, 0xc9, 0xa6, 0xcd, 0x77, 0x11, + 0xc5, 0x79, 0xf4, 0xe6, 0x9b, 0x85, 0x1f, 0x24, 0x69, 0x58, 0x11, 0x2f, 0x71, 
0x60, 0xdd, 0x76, + 0x1f, 0xa5, 0xee, 0x9f, 0x91, 0x0c, 0xf5, 0x71, 0x46, 0x4d, 0xad, 0xd4, 0x75, 0xd4, 0x38, 0xb0, + 0x4c, 0x22, 0xf6, 0x1e, 0xea, 0xc2, 0x55, 0x68, 0x17, 0xe9, 0x24, 0x3d, 0xa7, 0x74, 0xdd, 0x2a, + 0x1e, 0x8c, 0x7e, 0x2d, 0xd2, 0xd0, 0xd9, 0xbd, 0x44, 0x5b, 0x42, 0x77, 0x40, 0x19, 0x6c, 0xf0, + 0x92, 0xae, 0x2d, 0x21, 0x0d, 0xb3, 0xd1, 0xad, 0xe9, 0x44, 0xc5, 0x46, 0xfe, 0xb0, 0xb6, 0xc9, + 0x25, 0xaf, 0x84, 0x99, 0x38, 0xba, 0x12, 0x86, 0xc3, 0xd0, 0x6a, 0xd0, 0xdc, 0x91, 0x39, 0x31, + 0x3e, 0x10, 0xbb, 0x70, 0x3b, 0x68, 0xee, 0x3c, 0x39, 0x67, 0x13, 0x64, 0xcf, 0xee, 0x50, 0xb0, + 0xb3, 0xb0, 0x25, 0xc7, 0x44, 0x3a, 0x30, 0x1c, 0x4f, 0xee, 0x9b, 0x5a, 0x59, 0x1a, 0x02, 0x2d, + 0x6c, 0xa9, 0xa1, 0xb5, 0x4d, 0x72, 0xc6, 0xa1, 0xb6, 0xc4, 0xa3, 0x9d, 0x38, 0xe8, 0x2e, 0xb6, + 0xbd, 0x2e, 0xc6, 0x0f, 0x94, 0xd9, 0x19, 0x07, 0xde, 0x93, 0x5b, 0x44, 0xef, 0x34, 0x15, 0x03, + 0x3b, 0xc7, 0x92, 0x7d, 0x05, 0xa6, 0xc4, 0xe4, 0x5e, 0x7e, 0x14, 0x73, 0x9f, 0xbe, 0xfc, 0x0c, + 0x4a, 0x74, 0xc7, 0xb5, 0xdc, 0xe6, 0x49, 0x21, 0xcd, 0x29, 0x5c, 0xec, 0x3c, 0x21, 0x30, 0xf2, + 0x89, 0x18, 0xac, 0x58, 0x0b, 0x66, 0x6f, 0xbb, 0x8f, 0xd2, 0x8b, 0xb2, 0x3e, 0x49, 0x19, 0x4e, + 0xb0, 0xf3, 0x8f, 0xf7, 0xe7, 0x5f, 0x14, 0x13, 0x2c, 0x4d, 0x18, 0xda, 0x67, 0xbe, 0xf6, 0xe5, + 0xc4, 0xbe, 0x05, 0xa7, 0x64, 0xb7, 0x96, 0xbc, 0x10, 0x3d, 0xbc, 0xf6, 0x1a, 0xdb, 0x2e, 0x7a, + 0x7e, 0x1f, 0xeb, 0x33, 0x60, 0x97, 0x8a, 0xb7, 0x44, 0x35, 0x60, 0x2d, 0xc5, 0xc7, 0x89, 0x88, + 0x91, 0xdd, 0xaf, 0x06, 0xf6, 0x11, 0x4c, 0xd1, 0x5b, 0xf6, 0x8d, 0x20, 0x8a, 0x51, 0xc5, 0x71, + 0xfc, 0x68, 0x1e, 0x59, 0xf4, 0x40, 0x4e, 0x3e, 0x9a, 0x19, 0x95, 0x48, 0x86, 0x33, 0x7b, 0x17, + 0x8d, 0x9e, 0x29, 0xe3, 0xcf, 0xca, 0x1a, 0xbe, 0xc9, 0xc8, 0x13, 0xa8, 0xeb, 0xf9, 0x8e, 0xd2, + 0x5d, 0x74, 0x93, 0xed, 0x42, 0xc7, 0x66, 0x0f, 0x60, 0xbc, 0xd1, 0xb8, 0x71, 0xcd, 0x13, 0x72, + 0x49, 0x57, 0x3d, 0xb1, 0xe4, 0x5b, 0xf9, 0x42, 0x61, 0x2b, 0x27, 0xa3, 0x68, 0xdb, 0xd9, 0xf4, + 0xda, 0xdc, 0x69, 
0x06, 0xdd, 0x3d, 0x5b, 0xe7, 0x54, 0xe0, 0xa5, 0x74, 0xea, 0x09, 0x7b, 0x29, + 0xad, 0xc0, 0xb4, 0x66, 0x79, 0x8f, 0x86, 0x5c, 0xb3, 0x69, 0x84, 0x73, 0xdd, 0x2b, 0x29, 0x1b, + 0xd4, 0x22, 0x4b, 0xa7, 0xdc, 0x93, 0x4e, 0x1f, 0xd5, 0x3d, 0xc9, 0x83, 0x19, 0xfa, 0x18, 0x72, + 0x1e, 0xe0, 0x97, 0x9e, 0xeb, 0x33, 0x86, 0x17, 0x0a, 0xc7, 0xf0, 0x98, 0xfc, 0xd2, 0x6a, 0x92, + 0xa1, 0xed, 0x46, 0x9e, 0x2b, 0xdb, 0x04, 0x26, 0x81, 0x6e, 0xec, 0x6e, 0xb8, 0x11, 0xc7, 0xba, + 0x9e, 0xe9, 0x53, 0xd7, 0x8b, 0x85, 0x75, 0x4d, 0xa9, 0xba, 0x36, 0xa8, 0x9a, 0x02, 0x8e, 0xcc, + 0x57, 0xf5, 0xa8, 0xf9, 0x85, 0x03, 0xfb, 0xac, 0xf1, 0x06, 0x91, 0x47, 0x20, 0xc7, 0xe8, 0xec, + 0xa4, 0xcd, 0x8e, 0x7b, 0x01, 0x67, 0xf6, 0x08, 0x4e, 0xe6, 0x5b, 0x81, 0x75, 0x9e, 0xc1, 0x3a, + 0xcf, 0x18, 0x75, 0x66, 0x91, 0x68, 0xde, 0x98, 0xdd, 0xca, 0xd6, 0xda, 0x87, 0x3f, 0xfb, 0xd1, + 0x12, 0x9c, 0xba, 0x7d, 0xad, 0x7e, 0x9f, 0x87, 0x24, 0x96, 0x78, 0x81, 0x9f, 0x04, 0x03, 0x79, + 0x4e, 0xbe, 0x53, 0x65, 0x9f, 0x1a, 0x95, 0xc4, 0x81, 0x5b, 0x85, 0x10, 0xdd, 0x5f, 0xe8, 0x6c, + 0xba, 0xce, 0xae, 0xc6, 0xa2, 0x20, 0x62, 0xc8, 0x77, 0x7f, 0x67, 0xbe, 0x64, 0xf7, 0xab, 0x8a, + 0xb5, 0x61, 0xce, 0x1c, 0x16, 0xe5, 0x40, 0xb6, 0xcd, 0xdb, 0xed, 0xd9, 0x79, 0x9c, 0xd1, 0xaf, + 0x3d, 0xde, 0x9f, 0x3f, 0x9f, 0x1b, 0xdd, 0xc4, 0x29, 0x4d, 0x60, 0x6a, 0x1d, 0x3e, 0x80, 0x1f, + 0xeb, 0x14, 0x08, 0xdd, 0xb3, 0x67, 0x8d, 0xa8, 0x81, 0xb9, 0xf2, 0x24, 0xaa, 0xe5, 0x19, 0xb1, + 0xde, 0xfb, 0x0a, 0x88, 0x76, 0x9e, 0xf3, 0xcd, 0xa1, 0xea, 0x64, 0x6d, 0xaa, 0xc0, 0xb3, 0xca, + 0xfa, 0xb5, 0x72, 0xe6, 0x60, 0x64, 0x2b, 0x30, 0x2a, 0xe7, 0x7b, 0xdf, 0x4b, 0xc6, 0x99, 0xc2, + 0x59, 0x3d, 0x2a, 0x97, 0x8e, 0xad, 0xe8, 0xd9, 0x43, 0xc1, 0x0a, 0x3b, 0x2d, 0x6f, 0xbc, 0x5f, + 0xa3, 0x73, 0x0f, 0x41, 0xc6, 0x09, 0xbf, 0x74, 0x74, 0x2f, 0x64, 0xd3, 0x57, 0x1d, 0x8f, 0x7a, + 0x55, 0x1b, 0xdb, 0x81, 0x4a, 0xa3, 0x71, 0x43, 0x5e, 0x9a, 0xbf, 0x2c, 0x77, 0xc8, 0x4f, 0xa1, + 0x42, 0x51, 0x8b, 0xf5, 0xab, 0x25, 0x98, 0x34, 0x4e, 
0x56, 0x76, 0x55, 0xf3, 0xd3, 0x4e, 0x5f, + 0x95, 0x0d, 0x1c, 0xdc, 0x6c, 0xb3, 0x1e, 0xdc, 0x57, 0xb5, 0x78, 0xb9, 0x7d, 0xe8, 0x70, 0xb1, + 0x65, 0x83, 0x12, 0x1c, 0xac, 0x1f, 0x9e, 0x87, 0x61, 0x0a, 0x98, 0x36, 0x94, 0x9a, 0xe9, 0xa2, + 0x7e, 0xc5, 0x26, 0xb8, 0xf5, 0xbb, 0xf3, 0x30, 0x65, 0xde, 0x88, 0xd9, 0x6b, 0x30, 0x82, 0x6f, + 0x27, 0x4a, 0xbd, 0x82, 0x6a, 0x21, 0x7c, 0x5e, 0x31, 0x3c, 0xd9, 0x08, 0x87, 0xbd, 0x04, 0x90, + 0xb8, 0x7c, 0xa8, 0x37, 0x81, 0xe1, 0xc7, 0xfb, 0xf3, 0xa5, 0xd7, 0x6d, 0xad, 0x80, 0x7d, 0x1d, + 0xe0, 0x4e, 0xd0, 0xe2, 0x32, 0x73, 0x7a, 0xe5, 0x20, 0xd3, 0xa5, 0x97, 0x73, 0x99, 0xd3, 0x4f, + 0xf8, 0x41, 0x8b, 0xe7, 0xd3, 0xa4, 0x6b, 0x1c, 0xd9, 0x3b, 0x30, 0x6c, 0xf7, 0xda, 0x5c, 0xbd, + 0x30, 0x8d, 0xab, 0x13, 0xae, 0xd7, 0xe6, 0xa9, 0x9e, 0x20, 0xec, 0x65, 0xad, 0x72, 0x05, 0x80, + 0xbd, 0x4f, 0x19, 0xd5, 0x65, 0x56, 0x99, 0xe1, 0xf4, 0x2d, 0x55, 0x93, 0x7c, 0x72, 0x79, 0x65, + 0x34, 0x12, 0x76, 0x17, 0x46, 0xf5, 0x47, 0x40, 0x2d, 0x5a, 0x8e, 0xfe, 0x50, 0xac, 0x29, 0x1d, + 0x64, 0x78, 0xfd, 0xec, 0xfb, 0xa0, 0xe2, 0xc2, 0xde, 0x83, 0x31, 0xc1, 0x5e, 0xec, 0x1c, 0x91, + 0xbc, 0xd5, 0xe0, 0x3b, 0x90, 0xd6, 0x20, 0xb1, 0xfb, 0x18, 0x51, 0xf6, 0x13, 0x02, 0xf6, 0x15, + 0x18, 0xab, 0x77, 0xbb, 0x72, 0xa8, 0x0f, 0x34, 0x69, 0x3b, 0x97, 0x1b, 0xea, 0xe3, 0x6e, 0xb7, + 0x9b, 0x1f, 0xe9, 0x94, 0x1f, 0xdb, 0x4a, 0x82, 0xb5, 0x0e, 0x92, 0x05, 0xff, 0x95, 0x5c, 0x05, + 0xb3, 0x2a, 0xfe, 0x68, 0xae, 0x12, 0x93, 0x2f, 0xeb, 0x42, 0x2d, 0x15, 0x2a, 0x65, 0x5d, 0x70, + 0x50, 0x5d, 0xaf, 0xe7, 0xea, 0xd2, 0x3f, 0x60, 0xae, 0xba, 0x1c, 0x77, 0xd6, 0x82, 0x29, 0x75, + 0x40, 0xc9, 0xfa, 0xc6, 0x0f, 0xaa, 0xef, 0xa5, 0x5c, 0x7d, 0xc7, 0x5a, 0x1b, 0xf9, 0x7a, 0x32, + 0x3c, 0xd9, 0x7b, 0x30, 0xa9, 0x20, 0xb8, 0x3e, 0xd0, 0x94, 0x4c, 0x2a, 0x04, 0x5b, 0x1b, 0xe8, + 0x64, 0x66, 0x8c, 0x8a, 0x81, 0xac, 0x53, 0xd3, 0xec, 0x98, 0x34, 0xa8, 0xb3, 0xb3, 0xc2, 0x44, + 0x66, 0x5f, 0x86, 0xf1, 0x95, 0x8e, 0xe8, 0x48, 0xe0, 0xbb, 0x31, 0x97, 0xae, 0xe0, 0xca, 
0x3c, + 0x4f, 0x2b, 0xd1, 0xa6, 0x2a, 0x1a, 0x26, 0x79, 0x69, 0x91, 0x7e, 0xcd, 0xd4, 0x28, 0xc4, 0xe0, + 0xd1, 0xab, 0xaf, 0x9c, 0xc3, 0xca, 0x4d, 0xfc, 0x4c, 0x81, 0x89, 0x9c, 0xc6, 0x5e, 0x66, 0x10, + 0x11, 0x50, 0xf5, 0xea, 0x9a, 0xc9, 0xde, 0xa4, 0xf3, 0x64, 0x5f, 0x80, 0xf1, 0xfa, 0x83, 0x86, + 0xd8, 0xb0, 0xea, 0xf6, 0x9d, 0x68, 0xb6, 0x96, 0x5a, 0xf8, 0xbb, 0x0f, 0xe9, 0xd5, 0xd1, 0x71, + 0xc3, 0x8c, 0x2d, 0x78, 0x8a, 0xcf, 0x3e, 0x84, 0xe3, 0x0f, 0x3c, 0xbf, 0x15, 0x3c, 0x8c, 0xe4, + 0x31, 0x25, 0x37, 0xba, 0x99, 0xf4, 0x29, 0xf3, 0x21, 0x95, 0x27, 0xb2, 0x60, 0x6e, 0xe3, 0x2b, + 0xe4, 0xc0, 0x7e, 0x24, 0xc7, 0x99, 0x66, 0x10, 0x3b, 0x68, 0x06, 0x5d, 0xce, 0xcd, 0xa0, 0x7c, + 0xf5, 0xd9, 0xe9, 0x54, 0x58, 0x0d, 0x0b, 0x80, 0x99, 0xe7, 0xfb, 0xcd, 0xc0, 0xf3, 0x67, 0x8f, + 0xe1, 0x5e, 0xf8, 0x4c, 0x36, 0x60, 0x0c, 0xe2, 0xad, 0x05, 0x6d, 0xaf, 0xb9, 0x47, 0x39, 0xfc, + 0xb2, 0x32, 0xff, 0x47, 0x81, 0xf1, 0x5c, 0x52, 0xc0, 0x9a, 0x7d, 0x19, 0x26, 0xc4, 0xff, 0x89, + 0x52, 0xe2, 0xb8, 0x61, 0x54, 0xad, 0x61, 0xca, 0x7a, 0xf0, 0x1b, 0x09, 0xbe, 0x45, 0xfa, 0x0a, + 0x83, 0x15, 0x7b, 0x1b, 0x40, 0x88, 0x4d, 0x72, 0x3b, 0x3e, 0x91, 0x26, 0xcb, 0x42, 0xa9, 0x2b, + 0xbf, 0x11, 0xa7, 0xc8, 0xec, 0x3d, 0x18, 0x17, 0xbf, 0x1a, 0xbd, 0x56, 0x20, 0xd6, 0xc6, 0x49, + 0xa4, 0x25, 0xaf, 0x7c, 0x41, 0x1b, 0x11, 0xdc, 0xf0, 0xca, 0x4f, 0xd1, 0xd9, 0x0d, 0x98, 0xc6, + 0xa4, 0x66, 0x32, 0x9d, 0x8e, 0xc7, 0xa3, 0xd9, 0x53, 0xda, 0x1b, 0x3c, 0x66, 0xf1, 0xf7, 0x92, + 0x32, 0xfd, 0x2e, 0x93, 0x21, 0x63, 0x11, 0x1c, 0xcb, 0xbf, 0x41, 0x47, 0xb3, 0xb3, 0x38, 0x48, + 0x4a, 0x82, 0xcf, 0x63, 0xd0, 0x7e, 0x2c, 0xbe, 0x88, 0xb6, 0x71, 0xa9, 0x47, 0x25, 0xbd, 0xc2, + 0x22, 0xee, 0xcc, 0x06, 0x76, 0x7d, 0x71, 0x2d, 0x9b, 0xf5, 0xeb, 0x34, 0xf6, 0x00, 0x3f, 0xf3, + 0x56, 0xb3, 0xeb, 0x1c, 0x90, 0xf9, 0xab, 0x80, 0x9a, 0xfd, 0x09, 0x38, 0xa1, 0x76, 0x10, 0x59, + 0x24, 0xe7, 0xf5, 0xdc, 0x11, 0x77, 0xe2, 0xd6, 0x46, 0x52, 0x75, 0x6e, 0x4a, 0x17, 0x57, 0xc1, + 0x5c, 0x18, 0xc7, 0xcf, 0x2a, 
0x6b, 0x7c, 0xe6, 0xa0, 0x1a, 0xcf, 0xe7, 0x6a, 0x3c, 0x89, 0x13, + 0x25, 0x5f, 0x99, 0xce, 0x93, 0xd2, 0x78, 0xe0, 0x3a, 0x92, 0xb3, 0xed, 0x59, 0x1c, 0x2d, 0x99, + 0xc6, 0x83, 0x56, 0x60, 0x6e, 0xc2, 0x99, 0x24, 0xfa, 0x8e, 0x4c, 0x8f, 0x49, 0x67, 0x8c, 0x1d, + 0x39, 0x67, 0x09, 0x61, 0x20, 0x8b, 0x1d, 0x29, 0x95, 0x62, 0x96, 0x1f, 0x75, 0x43, 0xa9, 0xa2, + 0x7a, 0x2e, 0xcd, 0x63, 0xae, 0x09, 0x3f, 0x0e, 0x4f, 0x30, 0xf4, 0x2d, 0xa1, 0x88, 0x03, 0xbb, + 0x07, 0xc7, 0x92, 0x53, 0x5b, 0x63, 0x3c, 0x9f, 0xe6, 0x95, 0x4a, 0x8f, 0xfa, 0x62, 0xbe, 0x45, + 0xf4, 0xcc, 0x85, 0x53, 0xc6, 0x39, 0xad, 0xb1, 0x3e, 0x8b, 0xac, 0x5f, 0x16, 0x37, 0x32, 0xf3, + 0x90, 0x2f, 0x66, 0xdf, 0x8f, 0x0f, 0xfb, 0x08, 0xe6, 0xb2, 0x67, 0xb3, 0x56, 0xcb, 0xf3, 0x58, + 0xcb, 0x2b, 0x8f, 0xf7, 0xe7, 0xcf, 0xe5, 0x8e, 0xf7, 0xe2, 0x8a, 0x0e, 0xe0, 0xc6, 0xbe, 0x0e, + 0xb3, 0xe6, 0xf9, 0xac, 0xd5, 0x64, 0x61, 0x4d, 0xb8, 0x74, 0x92, 0x83, 0xbd, 0xb8, 0x86, 0xbe, + 0x3c, 0x58, 0x0c, 0xf3, 0x85, 0xb3, 0x5b, 0xab, 0xe6, 0x85, 0xb4, 0x43, 0xb9, 0x55, 0x52, 0x5c, + 0xdd, 0x61, 0x2c, 0xd9, 0x43, 0x78, 0xae, 0xe8, 0x98, 0xd0, 0x2a, 0x7d, 0x31, 0x51, 0x02, 0xbf, + 0x5a, 0x7c, 0xe4, 0x14, 0xd7, 0x7c, 0x08, 0x5b, 0xf6, 0x15, 0x38, 0xa1, 0xad, 0x2f, 0xad, 0xbe, + 0x97, 0xb0, 0x3e, 0x8c, 0x23, 0xa1, 0x2f, 0xcc, 0xe2, 0x5a, 0x8a, 0x79, 0xb0, 0x0e, 0x1c, 0x53, + 0x1d, 0x47, 0x6d, 0xbb, 0x3c, 0x7a, 0xce, 0x19, 0xbb, 0x6a, 0x1e, 0x63, 0xe1, 0xac, 0xdc, 0x55, + 0x67, 0x5b, 0x1b, 0x4e, 0x37, 0x25, 0xd4, 0x67, 0x7a, 0x01, 0x5f, 0x76, 0x03, 0x46, 0x1a, 0x6b, + 0x2b, 0xd7, 0xae, 0x2d, 0xcf, 0xbe, 0x8c, 0x35, 0x28, 0x4f, 0x51, 0x02, 0x1a, 0x97, 0x26, 0x69, + 0x4e, 0xda, 0xf5, 0x36, 0x37, 0x8d, 0x07, 0x2b, 0x42, 0x65, 0x3f, 0x82, 0x86, 0x9c, 0x62, 0x47, + 0xad, 0x47, 0x91, 0xb7, 0xe5, 0x53, 0xc6, 0xae, 0x57, 0x8c, 0xf7, 0x7e, 0x95, 0xc3, 0x6d, 0x91, + 0xfb, 0x31, 0x0f, 0x73, 0xe8, 0x24, 0x6d, 0x8a, 0xfb, 0xbf, 0xdc, 0xb9, 0x1d, 0x37, 0x65, 0xa5, + 0x6f, 0xe2, 0xf9, 0x8a, 0xc4, 0xb8, 0x6d, 0x79, 0xb1, 0xb3, 0xdd, 
0x33, 0xba, 0x3f, 0xfb, 0xaa, + 0x11, 0xc2, 0xed, 0xba, 0x17, 0xdf, 0xe8, 0x6d, 0x68, 0xa3, 0xf6, 0xa2, 0xac, 0xf0, 0x59, 0xba, + 0x2d, 0xf7, 0x19, 0xb9, 0x99, 0xad, 0x0c, 0x5d, 0xc4, 0xfe, 0x4c, 0x09, 0x4e, 0x3e, 0x08, 0xc2, + 0x9d, 0x76, 0xe0, 0xb6, 0x54, 0xaf, 0xe4, 0x1e, 0xfe, 0xda, 0x41, 0x7b, 0xf8, 0x67, 0x73, 0x7b, + 0xb8, 0xf5, 0x50, 0xb2, 0x71, 0x92, 0x14, 0x78, 0xb9, 0xfd, 0xbc, 0x4f, 0x55, 0xec, 0x47, 0xe0, + 0x6c, 0x71, 0x89, 0x36, 0x29, 0x5f, 0xc7, 0x49, 0xf9, 0xe6, 0xe3, 0xfd, 0xf9, 0xd7, 0xfb, 0xd5, + 0x54, 0x3c, 0x41, 0x0f, 0x65, 0xcd, 0xde, 0x81, 0xca, 0xed, 0xc5, 0xb5, 0xd9, 0x8b, 0xc6, 0xd3, + 0xf3, 0xed, 0xc5, 0x35, 0x6d, 0xa0, 0x48, 0xa3, 0xd9, 0x69, 0x1a, 0x1a, 0xcd, 0xdb, 0x8b, 0x6b, + 0x37, 0x87, 0xaa, 0xe7, 0x6b, 0x17, 0x6e, 0x0e, 0x55, 0x2f, 0xd4, 0x5e, 0xb1, 0x9f, 0x6d, 0xd4, + 0x6f, 0xaf, 0xae, 0xb4, 0xd4, 0xc1, 0xac, 0x32, 0xfc, 0x51, 0x7d, 0xf6, 0xb9, 0x83, 0x4a, 0xd3, + 0xd6, 0x58, 0x7f, 0xb9, 0x04, 0xf3, 0x87, 0x4c, 0x30, 0x71, 0x16, 0xa6, 0x8d, 0x6b, 0x24, 0xf9, + 0x6d, 0xc8, 0x0d, 0x35, 0x29, 0x70, 0x4c, 0x93, 0x13, 0x93, 0x04, 0x5d, 0x94, 0x65, 0x72, 0x5a, + 0xcd, 0x53, 0x3d, 0x9f, 0x94, 0x56, 0x61, 0x59, 0xab, 0x50, 0xcb, 0x4e, 0x3c, 0xf6, 0x79, 0x98, + 0xd4, 0x73, 0xb2, 0x29, 0x35, 0x04, 0xc5, 0x67, 0x0a, 0xb7, 0x8c, 0xc3, 0xd4, 0x40, 0xb4, 0xce, + 0xc1, 0x94, 0x39, 0xc4, 0xec, 0x38, 0x0c, 0xc7, 0x41, 0xd0, 0x96, 0x3c, 0x6c, 0xfa, 0x61, 0xfd, + 0x7c, 0x09, 0x8e, 0x15, 0xac, 0x62, 0x76, 0x0e, 0x86, 0xd6, 0xdc, 0x78, 0x5b, 0xb7, 0x4c, 0xea, + 0xba, 0x46, 0x24, 0x36, 0x2c, 0x67, 0x6f, 0xc0, 0xe8, 0xd2, 0x9d, 0x46, 0xa3, 0x7e, 0x47, 0x29, + 0x3c, 0xe8, 0xb0, 0xf7, 0x23, 0x27, 0x72, 0x4d, 0x83, 0x06, 0x89, 0xc6, 0x5e, 0x87, 0x91, 0x95, + 0x35, 0x24, 0xd0, 0xf2, 0xeb, 0x79, 0xdd, 0x2c, 0xbe, 0x44, 0xb2, 0xbe, 0x53, 0x02, 0x96, 0xdf, + 0x92, 0xd8, 0x9b, 0x30, 0xae, 0x6f, 0x7c, 0x34, 0x2e, 0xf8, 0xca, 0xab, 0x2d, 0x4e, 0x5b, 0xc7, + 0x61, 0x4b, 0x30, 0x8c, 0x79, 0xb7, 0x13, 0x4b, 0x8a, 0xc2, 0xa5, 0x77, 0x2a, 0xb7, 0xf4, 0x86, + 0x31, 
0x97, 0xb7, 0x4d, 0xc4, 0xd6, 0x1f, 0x94, 0x80, 0x15, 0x1b, 0x55, 0x0e, 0x64, 0xc9, 0xf5, + 0x96, 0x16, 0x11, 0x45, 0xb7, 0xaa, 0xf4, 0x15, 0x50, 0x57, 0x35, 0xa4, 0xb1, 0x53, 0xce, 0x19, + 0xaa, 0xad, 0xfe, 0x6e, 0xf4, 0x17, 0x60, 0xf8, 0x3e, 0x0f, 0x37, 0x94, 0x69, 0x3f, 0x9a, 0x03, + 0xef, 0x0a, 0x80, 0xae, 0xea, 0x41, 0x0c, 0xc3, 0xbc, 0x73, 0x78, 0x50, 0xf3, 0xce, 0xdf, 0x2d, + 0xc1, 0xf1, 0xa2, 0xcb, 0xd3, 0x21, 0x2e, 0xf2, 0x56, 0xc6, 0xbb, 0x1f, 0x4d, 0xbf, 0xc8, 0xc0, + 0x38, 0xf1, 0xe9, 0x9f, 0x87, 0x61, 0x31, 0x42, 0x6a, 0x5a, 0xa0, 0x7e, 0x4e, 0x0c, 0x61, 0x64, + 0x13, 0x5c, 0x20, 0xa4, 0x09, 0x9a, 0x86, 0x09, 0x81, 0xf2, 0x32, 0x11, 0x5c, 0x20, 0xdc, 0x0e, + 0x5a, 0x5c, 0xe9, 0xad, 0x10, 0xa1, 0x23, 0x00, 0x36, 0xc1, 0xd9, 0x39, 0x18, 0xbd, 0xeb, 0xaf, + 0x72, 0x77, 0x57, 0xa5, 0x5f, 0x45, 0x53, 0xb5, 0xc0, 0x77, 0xda, 0x02, 0x66, 0xab, 0x42, 0xeb, + 0xa7, 0x4b, 0x30, 0x93, 0xbb, 0xb7, 0x1d, 0x1e, 0x05, 0xe0, 0x60, 0xcf, 0xd6, 0x41, 0xfa, 0x47, + 0xcd, 0x1f, 0x2a, 0x6e, 0xbe, 0xf5, 0xdf, 0x8f, 0xc0, 0xa9, 0x3e, 0x6a, 0xb4, 0xd4, 0xf3, 0xbe, + 0x74, 0xa8, 0xe7, 0xfd, 0x57, 0x61, 0x72, 0xb1, 0xed, 0x7a, 0x9d, 0x68, 0x3d, 0x48, 0x5b, 0x9c, + 0x3a, 0xf0, 0x61, 0x99, 0x74, 0xa0, 0x49, 0x3c, 0xbd, 0x4e, 0x37, 0x91, 0xc2, 0x89, 0x83, 0xbc, + 0x14, 0x6f, 0x30, 0xcb, 0xf9, 0xbe, 0x57, 0xfe, 0x88, 0xf8, 0xbe, 0x9b, 0xde, 0x98, 0x43, 0x4f, + 0xd4, 0x1b, 0xb3, 0xd8, 0x59, 0x60, 0xf8, 0x93, 0xb8, 0x8e, 0x2c, 0x66, 0x4d, 0xcc, 0x47, 0x72, + 0xf6, 0x7d, 0x87, 0xdb, 0x96, 0xdf, 0x30, 0x3d, 0x07, 0x47, 0xf1, 0x31, 0xfb, 0x5c, 0x7f, 0xcf, + 0x40, 0x33, 0xfc, 0x94, 0xee, 0x21, 0xf8, 0x2d, 0x38, 0x5e, 0x74, 0x0f, 0x9f, 0xad, 0x1a, 0x66, + 0xc0, 0x7d, 0x6d, 0xce, 0x07, 0xbf, 0xcd, 0xef, 0x14, 0xde, 0xe6, 0x55, 0x44, 0x87, 0xb1, 0xfe, + 0xee, 0x70, 0xe9, 0x5a, 0x20, 0xdc, 0x83, 0xe3, 0x3e, 0x58, 0xff, 0x5e, 0x36, 0x26, 0x47, 0x96, + 0x9e, 0xbd, 0x6b, 0x84, 0x4e, 0x7b, 0x39, 0x1f, 0x3a, 0xad, 0x38, 0x0c, 0x07, 0x3d, 0x45, 0xbc, + 0x06, 0x23, 0xd2, 0x16, 0x44, 0x0b, 0x67, 
0x92, 0xb3, 0x01, 0x91, 0x38, 0xd6, 0x4f, 0x97, 0xcd, + 0xb0, 0x03, 0x7f, 0x14, 0xd7, 0xf5, 0x05, 0x18, 0x7e, 0xb0, 0xcd, 0x43, 0x75, 0x04, 0x61, 0x43, + 0x1e, 0x0a, 0x80, 0xde, 0x10, 0xc4, 0x60, 0xd7, 0x60, 0x6a, 0x8d, 0xe6, 0xb9, 0x9a, 0xbc, 0x43, + 0xa9, 0xee, 0xa8, 0x2b, 0x35, 0x9c, 0x05, 0xb3, 0x37, 0x43, 0x65, 0x5d, 0xcf, 0x7c, 0x22, 0x19, + 0x26, 0x8e, 0x3c, 0xf0, 0x48, 0x48, 0x99, 0x4a, 0x1d, 0x42, 0xd3, 0xbd, 0xd9, 0xce, 0x40, 0xad, + 0x4d, 0x78, 0xee, 0x40, 0x46, 0x42, 0x36, 0x80, 0x6e, 0xf2, 0x2b, 0x63, 0x41, 0x7e, 0x20, 0xa9, + 0xad, 0xd1, 0x59, 0xab, 0xa9, 0x8f, 0xe8, 0xca, 0x12, 0x3a, 0xa9, 0xbe, 0x03, 0x13, 0xba, 0xbf, + 0x86, 0xe4, 0x5c, 0xe0, 0xde, 0x31, 0x24, 0x3e, 0x88, 0x3d, 0xae, 0x90, 0x57, 0x5a, 0x91, 0xf5, + 0xcf, 0x2b, 0x30, 0xdb, 0xcf, 0x4b, 0x92, 0xfd, 0x44, 0x12, 0x71, 0x07, 0x5d, 0x10, 0x03, 0xd3, + 0x57, 0x66, 0xfc, 0xf2, 0x3b, 0x87, 0xb8, 0x59, 0x5e, 0x2c, 0x24, 0x26, 0xe3, 0xe7, 0xc4, 0xd5, + 0x04, 0xe5, 0x00, 0xde, 0x72, 0x36, 0xf6, 0x1c, 0xcd, 0x1f, 0xd7, 0x2e, 0xae, 0x98, 0x7d, 0x00, + 0x27, 0x6c, 0xde, 0x0c, 0x3a, 0x1d, 0xee, 0xb7, 0x74, 0xff, 0x48, 0xb9, 0x04, 0x64, 0xf0, 0x99, + 0x04, 0xc1, 0x64, 0x59, 0x48, 0xc9, 0xee, 0xc0, 0x4c, 0x1a, 0xdd, 0x4e, 0xe5, 0x37, 0xd1, 0xd2, + 0x80, 0xa5, 0xd1, 0xf8, 0x54, 0x76, 0x13, 0xfd, 0x3e, 0x96, 0x23, 0x65, 0x97, 0x00, 0x16, 0x5d, + 0x7f, 0x2d, 0x0c, 0x9a, 0x5c, 0x06, 0x88, 0xa8, 0x4a, 0xd3, 0x40, 0x17, 0x23, 0xe2, 0x08, 0xb0, + 0xad, 0xa1, 0xcc, 0x39, 0x30, 0xd7, 0x7f, 0xa0, 0x0a, 0x0c, 0xb8, 0x5f, 0x35, 0xfd, 0x02, 0x4e, + 0xe4, 0x3e, 0xb4, 0xe0, 0xa3, 0xdb, 0x75, 0x7f, 0x0b, 0x26, 0xf4, 0x85, 0x89, 0x42, 0x8e, 0xf8, + 0x2d, 0xb7, 0x1d, 0x12, 0x72, 0x04, 0xc0, 0x26, 0x78, 0xfa, 0x8c, 0x59, 0x2e, 0x7e, 0xc6, 0x4c, + 0x77, 0x8c, 0xca, 0x61, 0x3b, 0x86, 0xa8, 0x1c, 0xcf, 0x50, 0xad, 0x72, 0xfc, 0xad, 0x57, 0x8e, + 0xf1, 0xfe, 0x6c, 0x82, 0x3f, 0xd1, 0xca, 0xff, 0xbe, 0xca, 0xf5, 0x8d, 0x3e, 0xa1, 0xa6, 0xa7, + 0x97, 0xf4, 0x09, 0xcd, 0x9f, 0x0f, 0x29, 0x66, 0x2a, 0xea, 0x96, 0x0f, 0x15, 
0x75, 0x8f, 0xb0, + 0x77, 0xe1, 0xb5, 0x8d, 0x76, 0x81, 0xa1, 0xf4, 0x7a, 0xe2, 0xe6, 0x0c, 0xbd, 0x14, 0x96, 0xf5, + 0xdd, 0x12, 0x9c, 0x28, 0x7c, 0x2e, 0x12, 0xb5, 0xd2, 0xbb, 0x94, 0xb6, 0x75, 0x67, 0x1f, 0xa5, + 0x08, 0xe3, 0x28, 0x71, 0x93, 0x06, 0xef, 0x8b, 0xf5, 0x3c, 0x8c, 0x25, 0xc6, 0x0a, 0xe2, 0xfa, + 0x47, 0x9f, 0x8e, 0x02, 0xc3, 0xca, 0x37, 0xef, 0x9f, 0x2f, 0x01, 0x88, 0x26, 0x7c, 0x8a, 0x6e, + 0x05, 0x34, 0x06, 0x7d, 0xdc, 0x0a, 0xb2, 0xe3, 0x91, 0xa5, 0xb3, 0xfe, 0x7e, 0x19, 0x46, 0xc4, + 0x5f, 0x4f, 0x6d, 0x38, 0xfc, 0x62, 0xb7, 0x02, 0xd1, 0xa5, 0x03, 0x92, 0x7f, 0x2c, 0x67, 0x92, + 0x7f, 0x1c, 0xd3, 0xc9, 0x54, 0x5a, 0xde, 0x24, 0x78, 0x50, 0xbf, 0x64, 0x1f, 0x9a, 0x77, 0xc2, + 0x3f, 0x29, 0xc1, 0x84, 0x4e, 0xcc, 0xbe, 0x0c, 0x53, 0x2a, 0xc4, 0x37, 0x05, 0xd4, 0x92, 0x56, + 0x1a, 0xca, 0xa2, 0x52, 0x85, 0xf8, 0xd6, 0x03, 0x70, 0x19, 0xf8, 0xba, 0xa4, 0xd0, 0xd5, 0x91, + 0x59, 0x0b, 0x58, 0x67, 0xd3, 0x75, 0x1e, 0x72, 0x77, 0x87, 0x47, 0xb1, 0x43, 0x96, 0x6f, 0xd2, + 0x98, 0x43, 0xb1, 0xbf, 0x7d, 0xad, 0x4e, 0x46, 0x6f, 0x18, 0x56, 0x80, 0x62, 0xb5, 0xe7, 0x68, + 0xf4, 0x17, 0xea, 0xce, 0xa6, 0xfb, 0x80, 0x0a, 0x89, 0xce, 0xfa, 0xfd, 0x11, 0x9a, 0xb9, 0x32, + 0x27, 0xc0, 0x06, 0x4c, 0xdd, 0x5d, 0x59, 0x5a, 0xd4, 0xde, 0xab, 0xcc, 0xb4, 0x0c, 0xcb, 0x8f, + 0x62, 0x1e, 0xfa, 0x6e, 0x5b, 0xa9, 0x7e, 0x52, 0x09, 0x28, 0xf0, 0x5a, 0xcd, 0xe2, 0xb7, 0xac, + 0x0c, 0x47, 0x51, 0x07, 0x29, 0x99, 0x92, 0x3a, 0xca, 0x03, 0xd6, 0x11, 0xb9, 0x9d, 0x76, 0x9f, + 0x3a, 0x4c, 0x8e, 0x6c, 0x1b, 0xb5, 0x40, 0xdb, 0xbd, 0x0d, 0xad, 0x96, 0xca, 0xc1, 0xb5, 0xbc, + 0x20, 0x6b, 0x79, 0x46, 0x6a, 0x27, 0x0b, 0xeb, 0xc9, 0x71, 0x4d, 0xf7, 0x9c, 0xa1, 0x43, 0xf7, + 0x9c, 0x3f, 0x5b, 0x82, 0x11, 0xba, 0x6c, 0xc9, 0x69, 0xdc, 0xe7, 0x3a, 0xf7, 0xe0, 0xc9, 0x5c, + 0xe7, 0x6a, 0x78, 0xe6, 0x18, 0x13, 0x9a, 0xca, 0xd8, 0x52, 0x66, 0x5d, 0xb0, 0x44, 0xca, 0xd9, + 0xf2, 0x7c, 0x2d, 0x07, 0xce, 0x81, 0xcb, 0x82, 0xad, 0xa4, 0xe1, 0x9c, 0x46, 0x0f, 0x8d, 0xe1, + 0xa1, 0x42, 0x60, 
0x8d, 0xca, 0x70, 0x4e, 0x66, 0x10, 0xa7, 0x55, 0x18, 0x93, 0x41, 0xa2, 0x16, + 0xf6, 0xa4, 0x7d, 0x49, 0xcd, 0xb0, 0x10, 0x6c, 0x2d, 0xec, 0xa5, 0x17, 0x49, 0x19, 0x66, 0xca, + 0xd9, 0xd0, 0xbd, 0x6b, 0x52, 0x06, 0xec, 0x2e, 0x8c, 0xa5, 0x39, 0x13, 0xcc, 0x1c, 0x63, 0x09, + 0x5c, 0xc6, 0xcc, 0x54, 0x91, 0x66, 0x0a, 0x52, 0x24, 0xa4, 0x3c, 0xd8, 0x2a, 0xd4, 0xd0, 0xaa, + 0x94, 0xb7, 0x68, 0xd5, 0xac, 0x2c, 0x51, 0x20, 0x22, 0x29, 0x3e, 0xc5, 0x54, 0x26, 0x97, 0x5b, + 0xc6, 0xa1, 0x38, 0x47, 0x69, 0xfd, 0x54, 0x19, 0x6a, 0xd9, 0xd9, 0xc7, 0xde, 0x83, 0xf1, 0x24, + 0x67, 0x45, 0x12, 0xe8, 0x02, 0xdf, 0x99, 0xd3, 0x24, 0x17, 0x46, 0xc8, 0x0b, 0x1d, 0x9d, 0x5d, + 0x86, 0xaa, 0x58, 0xc4, 0x7e, 0x1a, 0x72, 0x18, 0xb7, 0xed, 0x9e, 0x84, 0xe9, 0x3a, 0x28, 0x85, + 0xc7, 0x1a, 0x70, 0x4c, 0x2c, 0x9a, 0x86, 0xe7, 0x6f, 0xb5, 0xf9, 0x6a, 0xb0, 0x15, 0xf4, 0xe2, + 0x7b, 0xf6, 0xaa, 0xdc, 0xc3, 0xe9, 0xba, 0xed, 0x76, 0xda, 0x46, 0x71, 0xa8, 0xdb, 0x23, 0x16, + 0x51, 0xb3, 0xd7, 0xe9, 0x98, 0x59, 0x59, 0x92, 0xe6, 0x61, 0x78, 0xec, 0xa3, 0x59, 0xa3, 0xd1, + 0x78, 0x89, 0xa4, 0xed, 0xac, 0xbf, 0x53, 0x86, 0x71, 0x6d, 0xfa, 0xb1, 0x0b, 0x50, 0x5d, 0x89, + 0x56, 0x83, 0xe6, 0x4e, 0x12, 0x83, 0x79, 0xf2, 0xf1, 0xfe, 0xfc, 0x98, 0x17, 0x39, 0x6d, 0x04, + 0xda, 0x49, 0x31, 0x5b, 0x80, 0x49, 0xfa, 0x4b, 0x49, 0xb6, 0xe5, 0x54, 0xcd, 0x4c, 0xc8, 0x05, + 0x52, 0xad, 0x49, 0xc2, 0xbe, 0x06, 0x40, 0x00, 0x0c, 0x80, 0x53, 0x19, 0x3c, 0x74, 0x8f, 0xac, + 0xa0, 0x20, 0xf4, 0x8d, 0xc6, 0x90, 0x7d, 0x83, 0x52, 0x62, 0xa8, 0xe5, 0x32, 0x34, 0x78, 0xec, + 0x21, 0xc1, 0xdf, 0x29, 0x0e, 0x81, 0xa6, 0xb3, 0x94, 0xc9, 0x36, 0xe7, 0x54, 0x66, 0xf5, 0x7a, + 0x8c, 0x88, 0x1a, 0x86, 0xf5, 0xbf, 0x97, 0xb4, 0x45, 0xc6, 0xee, 0xc0, 0x58, 0x32, 0x81, 0xa4, + 0x65, 0x66, 0x72, 0xc3, 0x55, 0x70, 0x9b, 0x6f, 0x2e, 0x3c, 0x23, 0x8d, 0x44, 0x8f, 0x25, 0xd3, + 0xd0, 0x58, 0x73, 0x0a, 0xc8, 0xbe, 0x04, 0x43, 0x38, 0x74, 0xe5, 0x43, 0xbb, 0xa6, 0x4e, 0xf9, + 0x21, 0x31, 0x66, 0xd8, 0x11, 0xa4, 0x64, 0x6f, 0xc8, 
0xa8, 0x01, 0x34, 0xf8, 0x53, 0xda, 0x51, + 0x2d, 0xda, 0x91, 0x1c, 0xef, 0x69, 0x14, 0x3c, 0x6d, 0xf6, 0xfc, 0xe5, 0x32, 0xd4, 0xb2, 0x4b, + 0x9b, 0xbd, 0x0f, 0x13, 0xea, 0xf8, 0xbd, 0xe1, 0xca, 0xb4, 0x73, 0x13, 0x32, 0xed, 0x9b, 0x3a, + 0x83, 0xb7, 0x5d, 0xdd, 0x92, 0xd3, 0x36, 0x08, 0x84, 0x2c, 0xb4, 0x2e, 0x43, 0xe9, 0x6a, 0x8b, + 0x2a, 0x0e, 0xe2, 0x6e, 0x26, 0x13, 0x83, 0x42, 0x63, 0x6f, 0x41, 0xe5, 0xf6, 0xb5, 0xba, 0xf4, + 0x93, 0xad, 0x65, 0x0f, 0x69, 0xf9, 0x3c, 0x63, 0x98, 0xbf, 0x0b, 0x7c, 0xb6, 0xaa, 0x25, 0x2d, + 0x19, 0x31, 0xac, 0x76, 0x15, 0x38, 0xe9, 0xdc, 0xe1, 0xd9, 0x4b, 0x6e, 0x0e, 0x55, 0x2b, 0xb5, + 0x21, 0x19, 0xc8, 0xfe, 0x7f, 0xaa, 0xc0, 0x58, 0x52, 0x3f, 0x63, 0x7a, 0xc8, 0x02, 0x19, 0x9e, + 0xe0, 0x34, 0x54, 0x95, 0x74, 0x27, 0xdd, 0x65, 0x47, 0x23, 0x29, 0xd9, 0xcd, 0x82, 0x12, 0xe3, + 0x68, 0x57, 0xb0, 0xd5, 0x4f, 0xf6, 0x26, 0x24, 0x32, 0x5a, 0x3f, 0x61, 0x8e, 0x6e, 0xe2, 0x09, + 0x1a, 0x9b, 0x82, 0xb2, 0x47, 0xc1, 0x41, 0xc7, 0xec, 0xb2, 0xd7, 0x62, 0xef, 0x43, 0xd5, 0x6d, + 0xe1, 0xfd, 0x75, 0x90, 0x0c, 0xfc, 0x55, 0xc1, 0x8d, 0xce, 0x0c, 0xa4, 0xaa, 0xc7, 0xac, 0x0e, + 0x63, 0x94, 0x6d, 0x21, 0xe2, 0xad, 0x01, 0x0e, 0xa0, 0x94, 0x03, 0x26, 0x69, 0xb8, 0x17, 0xf1, + 0x16, 0x7b, 0x19, 0x86, 0xc4, 0xd7, 0x94, 0x27, 0x8e, 0x12, 0x2a, 0xc5, 0xc7, 0xa4, 0x01, 0xbb, + 0xf1, 0x19, 0x1b, 0x11, 0xd8, 0x8b, 0x50, 0xe9, 0x5d, 0xde, 0x94, 0x67, 0x49, 0x2d, 0x4d, 0x20, + 0x94, 0xa0, 0x89, 0x62, 0x76, 0x05, 0xaa, 0x0f, 0xcd, 0xdc, 0x33, 0x27, 0x32, 0x9f, 0x31, 0xc1, + 0x4f, 0x10, 0xd9, 0xcb, 0x50, 0x89, 0xa2, 0x40, 0xda, 0x05, 0x1e, 0x4b, 0x8c, 0xb5, 0xef, 0x26, + 0x5f, 0x4d, 0x70, 0x8f, 0xa2, 0x60, 0xa1, 0x0a, 0x23, 0x74, 0xc0, 0x58, 0xcf, 0x01, 0xa4, 0x6d, + 0xcc, 0xdf, 0x9e, 0xad, 0xaf, 0xc1, 0x58, 0xd2, 0x36, 0x76, 0x06, 0x60, 0x87, 0xef, 0x39, 0xdb, + 0xae, 0xdf, 0x6a, 0x93, 0x74, 0x3a, 0x61, 0x8f, 0xed, 0xf0, 0xbd, 0x1b, 0x08, 0x60, 0xa7, 0x60, + 0xb4, 0x2b, 0x3e, 0xbf, 0x9c, 0xe3, 0x13, 0xf6, 0x48, 0xb7, 0xb7, 0x21, 0xa6, 0xf2, 0x2c, 
0x8c, + 0xe2, 0xab, 0x80, 0x5c, 0x91, 0x93, 0xb6, 0xfa, 0x69, 0xfd, 0xab, 0x0a, 0xe6, 0x37, 0xd5, 0x3a, + 0xc4, 0x5e, 0x80, 0xc9, 0x66, 0xc8, 0xf1, 0x2c, 0x73, 0x85, 0x84, 0x26, 0xeb, 0x99, 0x48, 0x81, + 0x2b, 0x2d, 0x76, 0x0e, 0xa6, 0xbb, 0xbd, 0x8d, 0xb6, 0xd7, 0xc4, 0x1c, 0x6b, 0xcd, 0x0d, 0x99, + 0x53, 0x6a, 0xc2, 0x9e, 0x24, 0xf0, 0x2d, 0xbe, 0xb7, 0xb8, 0x81, 0xd1, 0x6f, 0x6b, 0x7a, 0xf2, + 0x06, 0xcc, 0xad, 0x47, 0xf3, 0x6f, 0x5a, 0x83, 0xa3, 0x89, 0xf3, 0x49, 0x18, 0x71, 0xdd, 0xad, + 0x9e, 0x47, 0x4a, 0x88, 0x09, 0x5b, 0xfe, 0x62, 0xaf, 0xc2, 0x4c, 0x9a, 0x0d, 0x45, 0x75, 0x63, + 0x18, 0xbb, 0x51, 0x4b, 0x0a, 0x16, 0x09, 0xce, 0x5e, 0x07, 0xa6, 0xd7, 0x17, 0x6c, 0x7c, 0xc4, + 0x9b, 0x34, 0x27, 0x27, 0xec, 0x19, 0xad, 0xe4, 0x2e, 0x16, 0xb0, 0xe7, 0x51, 0x17, 0x85, 0xd2, + 0x21, 0x0e, 0x1b, 0xa6, 0xff, 0x46, 0x95, 0x13, 0xc2, 0xc4, 0xd8, 0x9d, 0x87, 0x9a, 0x36, 0x1c, + 0x98, 0x1f, 0x83, 0xd2, 0x29, 0xd9, 0x53, 0x29, 0xdc, 0xee, 0xae, 0xb4, 0xd8, 0x87, 0x30, 0xa7, + 0x61, 0x52, 0x26, 0x72, 0x87, 0xb7, 0xbd, 0x2d, 0x6f, 0xa3, 0xcd, 0xe5, 0x7c, 0xcb, 0xcf, 0xea, + 0xe4, 0x3a, 0x6a, 0xcf, 0xa6, 0xd4, 0x94, 0xa3, 0x7c, 0x59, 0xd2, 0xb2, 0x55, 0x38, 0x9e, 0xe1, + 0xcc, 0x5b, 0x4e, 0xaf, 0xdb, 0x37, 0x2c, 0x6c, 0xca, 0x93, 0x99, 0x3c, 0x79, 0xeb, 0x5e, 0xd7, + 0xfa, 0x16, 0x4c, 0xe8, 0x73, 0x52, 0x0c, 0x82, 0x2e, 0x97, 0xc8, 0xd9, 0x37, 0x9e, 0xc0, 0x56, + 0xc4, 0xbd, 0x70, 0x2a, 0x45, 0xc1, 0x8f, 0x48, 0xdb, 0xcb, 0x64, 0x02, 0xc5, 0x4f, 0xf8, 0x3c, + 0x4c, 0xb4, 0xbc, 0xa8, 0xdb, 0x76, 0xf7, 0xd0, 0x50, 0x55, 0x7e, 0xe9, 0x71, 0x09, 0x43, 0xbd, + 0xe3, 0x02, 0xcc, 0xe4, 0xf6, 0x41, 0x4d, 0xd2, 0xa0, 0x7d, 0xfd, 0x60, 0x49, 0xc3, 0xf2, 0x61, + 0x42, 0x3f, 0xd7, 0x0e, 0x49, 0x7e, 0x76, 0x12, 0xa3, 0x8d, 0xd1, 0xa6, 0x3f, 0xf2, 0x78, 0x7f, + 0xbe, 0xec, 0xb5, 0x30, 0xc6, 0xd8, 0x79, 0xa8, 0x2a, 0x89, 0x4d, 0x0a, 0x4a, 0xf8, 0xf4, 0xa5, + 0x5e, 0xf8, 0xed, 0xa4, 0xd4, 0x7a, 0x19, 0x46, 0xe5, 0xd1, 0x75, 0xf0, 0x83, 0x97, 0xf5, 0xed, + 0x32, 0x4c, 0xdb, 0x5c, 0x6c, 
0xac, 0x9c, 0xd2, 0x85, 0x3e, 0xb5, 0x57, 0xf4, 0xe2, 0x28, 0xe8, + 0x46, 0xdf, 0x0e, 0xc8, 0x7c, 0xfd, 0x4b, 0x25, 0x38, 0x56, 0x80, 0xcb, 0x2e, 0x17, 0x85, 0xbe, + 0xc1, 0x08, 0x70, 0xca, 0x22, 0x0c, 0x07, 0xd3, 0x88, 0x7f, 0x73, 0x15, 0xc6, 0x96, 0x3c, 0xb7, + 0x5d, 0x6f, 0xb5, 0x92, 0xd0, 0x63, 0x28, 0xe7, 0x63, 0x36, 0x5b, 0x57, 0x40, 0x75, 0x21, 0x26, + 0x41, 0x65, 0xaf, 0xc8, 0x49, 0x51, 0x49, 0x86, 0x15, 0x27, 0xc5, 0xf7, 0xf7, 0xe7, 0x81, 0xda, + 0xb4, 0x9e, 0x4c, 0x11, 0xcc, 0x4c, 0x40, 0xc0, 0xd4, 0x3d, 0xf1, 0xa9, 0xfd, 0x74, 0xc5, 0x99, + 0x09, 0xb2, 0xdd, 0x1b, 0x28, 0xdd, 0xe0, 0x5f, 0x2c, 0xc3, 0xc9, 0x62, 0xc2, 0x8f, 0xf5, 0x29, + 0x5f, 0x83, 0x31, 0x4c, 0x63, 0xa9, 0x65, 0x53, 0x99, 0x7a, 0xbc, 0x3f, 0x0f, 0x94, 0xf3, 0x12, + 0xf1, 0x53, 0x04, 0xb6, 0x09, 0x93, 0xab, 0x6e, 0x14, 0xdf, 0xe0, 0x6e, 0x18, 0x6f, 0x70, 0x37, + 0x1e, 0x40, 0x92, 0x57, 0x46, 0x49, 0xb3, 0x28, 0x4c, 0x6c, 0x2b, 0xca, 0x8c, 0xac, 0x6d, 0xb2, + 0x4d, 0x26, 0xca, 0xd0, 0x00, 0x13, 0xe5, 0x9b, 0x30, 0xdd, 0xe0, 0x1d, 0xb7, 0xbb, 0x1d, 0x84, + 0x2a, 0xec, 0xc8, 0x45, 0x98, 0x4c, 0x40, 0x85, 0xb3, 0xc5, 0x2c, 0x36, 0xf0, 0xb5, 0x81, 0x48, + 0xb7, 0x12, 0xb3, 0xd8, 0xfa, 0x2b, 0x65, 0x38, 0x55, 0x6f, 0x4a, 0x0b, 0x6b, 0x59, 0xa0, 0x1c, + 0x41, 0x3e, 0xe5, 0xba, 0xd9, 0x25, 0x18, 0xbb, 0xed, 0x3e, 0x5a, 0xe5, 0x6e, 0xc4, 0x23, 0x99, + 0xaa, 0x87, 0xc4, 0x5e, 0xf7, 0x51, 0xfa, 0x54, 0x69, 0xa7, 0x38, 0xba, 0x1a, 0x61, 0xe8, 0x13, + 0xaa, 0x11, 0x2c, 0x18, 0xb9, 0x11, 0xb4, 0x5b, 0xf2, 0xac, 0x97, 0xf6, 0x11, 0xdb, 0x08, 0xb1, + 0x65, 0x89, 0xf5, 0xbb, 0x25, 0x98, 0x4a, 0x5a, 0x8c, 0x4d, 0xf8, 0xd4, 0x87, 0xe4, 0x1c, 0x8c, + 0x62, 0x45, 0x2b, 0x4b, 0xfa, 0xa1, 0xd1, 0xe6, 0x98, 0xb7, 0xbb, 0x65, 0xab, 0x42, 0x7d, 0x24, + 0x86, 0x3f, 0xd9, 0x48, 0x58, 0x7f, 0x13, 0x4d, 0x2f, 0xf4, 0x5e, 0x8a, 0x93, 0x48, 0x6b, 0x48, + 0x69, 0xc0, 0x86, 0x94, 0x9f, 0xd8, 0x27, 0xa9, 0xf4, 0xfd, 0x24, 0x7f, 0xae, 0x0c, 0xe3, 0x49, + 0x63, 0x7f, 0xc0, 0x52, 0xfa, 0x24, 0xfd, 0x1a, 0x28, 0x54, 0x58, 
0x43, 0xdb, 0x2b, 0x64, 0x44, + 0xae, 0x2f, 0xc1, 0x88, 0x5c, 0x4c, 0xa5, 0x8c, 0x43, 0x44, 0xe6, 0xeb, 0x2e, 0x4c, 0x49, 0xd6, + 0x23, 0xf8, 0x41, 0x23, 0x5b, 0xd2, 0x61, 0x2c, 0xb6, 0x07, 0x7c, 0x43, 0x5a, 0xe2, 0x3c, 0xb5, + 0x67, 0x54, 0x71, 0x2c, 0xb6, 0xb4, 0x63, 0x03, 0x9d, 0x4e, 0xbf, 0x59, 0x85, 0x5a, 0x96, 0xe4, + 0xf0, 0xa4, 0x49, 0x6b, 0xbd, 0x0d, 0xba, 0xaa, 0x50, 0xd2, 0xa4, 0x6e, 0x6f, 0xc3, 0x16, 0x30, + 0xb4, 0xee, 0x0b, 0xbd, 0x5d, 0xec, 0xf5, 0x84, 0xb4, 0xee, 0x0b, 0xbd, 0x5d, 0xc3, 0xba, 0x2f, + 0xf4, 0x76, 0x51, 0x91, 0xb0, 0xda, 0xc0, 0x38, 0x25, 0x78, 0x4f, 0x91, 0x8a, 0x84, 0x76, 0x94, + 0xcd, 0x04, 0xab, 0xd0, 0xc4, 0x51, 0xb9, 0xc0, 0xdd, 0x50, 0x26, 0xf8, 0x91, 0xdb, 0x19, 0x1e, + 0x95, 0x1b, 0x08, 0x76, 0x62, 0x01, 0xb7, 0x75, 0x24, 0xd6, 0x06, 0xa6, 0xfd, 0x54, 0x0b, 0xf8, + 0xf0, 0xbb, 0xb5, 0x32, 0x66, 0x3e, 0xae, 0xb3, 0x76, 0xf4, 0xd5, 0x5c, 0xc0, 0xf7, 0x49, 0x6a, + 0x7f, 0xd7, 0x64, 0x00, 0x70, 0x54, 0x20, 0x55, 0x0f, 0x65, 0xa6, 0xe2, 0x2b, 0x01, 0x05, 0x08, + 0x4f, 0xd4, 0x48, 0x29, 0x13, 0xf6, 0x45, 0x18, 0xd7, 0xa3, 0xcf, 0x50, 0x8c, 0x94, 0x67, 0x29, + 0x6c, 0x70, 0x1a, 0x76, 0xc6, 0xb4, 0xd3, 0xd1, 0xc3, 0xcc, 0x6c, 0xc0, 0xa9, 0xc5, 0xc0, 0x8f, + 0x7a, 0x1d, 0xf5, 0x8c, 0x9e, 0xe6, 0xac, 0x00, 0xfc, 0x14, 0x18, 0xca, 0xa2, 0x29, 0x51, 0xd4, + 0x13, 0xbc, 0x0a, 0x52, 0x62, 0x5c, 0x40, 0xfa, 0x31, 0x62, 0xeb, 0x30, 0x8e, 0x1a, 0x54, 0x69, + 0x39, 0x3c, 0x6e, 0x6e, 0x1b, 0x69, 0xc9, 0x92, 0x58, 0x18, 0x14, 0x7c, 0xd1, 0xed, 0xb4, 0x95, + 0xb3, 0x93, 0xae, 0x09, 0xd6, 0x90, 0xd9, 0xd7, 0x60, 0x8a, 0xae, 0x68, 0x0f, 0xf8, 0x06, 0xcd, + 0x9d, 0x09, 0x43, 0x13, 0x61, 0x16, 0x92, 0x2d, 0x89, 0xd4, 0x5b, 0x3f, 0xe4, 0x1b, 0xf4, 0xed, + 0x0d, 0x57, 0x43, 0x03, 0x9f, 0xdd, 0x83, 0x63, 0x37, 0xdc, 0x88, 0x80, 0x5a, 0x18, 0x91, 0x49, + 0xd4, 0xd0, 0xa2, 0x0b, 0xc8, 0xb6, 0x1b, 0x29, 0x45, 0x78, 0x61, 0xd8, 0x90, 0x22, 0x7a, 0xf6, + 0xed, 0x12, 0xcc, 0x1a, 0x7a, 0x72, 0x69, 0x15, 0x89, 0xc1, 0xbb, 0xa7, 0xf0, 0xc9, 0x4b, 0x05, + 0xbc, 
0xee, 0x87, 0x46, 0x9f, 0x24, 0xa3, 0x8a, 0x0f, 0xd3, 0x72, 0xdd, 0xb7, 0xa2, 0x1f, 0x0f, + 0xb9, 0x50, 0x71, 0x4d, 0x4f, 0x9b, 0x0b, 0x35, 0xb3, 0xae, 0x15, 0x9a, 0x75, 0x35, 0x3b, 0xde, + 0x52, 0xd1, 0x55, 0x4a, 0x14, 0x5d, 0x68, 0x30, 0x2c, 0x3e, 0x84, 0x0c, 0x46, 0x87, 0x3f, 0xac, + 0x37, 0xf4, 0x7d, 0x48, 0x8a, 0x85, 0x07, 0xee, 0x43, 0xd6, 0xff, 0x32, 0x02, 0xd3, 0x99, 0x69, + 0x21, 0xef, 0xa9, 0xa5, 0xdc, 0x3d, 0xb5, 0x01, 0x40, 0xaa, 0xde, 0x01, 0x75, 0xb2, 0xca, 0x9f, + 0x79, 0x5c, 0x46, 0x23, 0x48, 0xd6, 0x94, 0xc6, 0x46, 0x30, 0xa5, 0x15, 0x3b, 0xa0, 0x8e, 0x3c, + 0x61, 0x4a, 0x8b, 0x5e, 0x63, 0x9a, 0xb2, 0x61, 0xf3, 0x30, 0x8c, 0x61, 0xc2, 0x75, 0x77, 0x72, + 0x4f, 0x00, 0x6c, 0x82, 0xb3, 0x17, 0x60, 0x44, 0x08, 0x51, 0x2b, 0x4b, 0x72, 0x13, 0xc4, 0xb3, + 0x45, 0x48, 0x59, 0x42, 0x62, 0x91, 0x45, 0xec, 0x2a, 0x4c, 0xd0, 0x5f, 0x32, 0x5a, 0xd5, 0x88, + 0x69, 0xdf, 0xeb, 0x78, 0x2d, 0x15, 0xb0, 0xca, 0xc0, 0x13, 0xb7, 0x8b, 0x46, 0x0f, 0xd5, 0x3a, + 0x2b, 0x4b, 0x32, 0xe9, 0x07, 0xde, 0x2e, 0x22, 0x02, 0x8a, 0x2a, 0x52, 0x04, 0x21, 0xcb, 0x48, + 0xa7, 0xae, 0x2a, 0xde, 0x29, 0x51, 0x96, 0x21, 0x67, 0x2e, 0x5b, 0x96, 0xb0, 0x0b, 0xf4, 0x12, + 0x83, 0x62, 0x21, 0x65, 0xbe, 0xc6, 0x77, 0x0b, 0x54, 0x4c, 0xa0, 0x6c, 0x98, 0x14, 0x8b, 0xca, + 0xc5, 0xdf, 0xcb, 0x1d, 0xd7, 0x6b, 0xcb, 0x6d, 0x05, 0x2b, 0x47, 0x5c, 0x2e, 0xa0, 0x76, 0x8a, + 0xc0, 0xde, 0x83, 0x29, 0xca, 0x50, 0xdb, 0xe9, 0x04, 0x3e, 0xb2, 0x1f, 0x4f, 0x4d, 0xe2, 0x64, + 0xd6, 0x5c, 0x51, 0x44, 0xb5, 0x64, 0x70, 0xc5, 0x79, 0x82, 0xaf, 0xbc, 0x3d, 0x7a, 0x23, 0x9a, + 0x48, 0xcf, 0x13, 0x24, 0x8d, 0x08, 0x6e, 0xeb, 0x48, 0xec, 0x6d, 0x98, 0x14, 0x3f, 0xaf, 0x7b, + 0xbb, 0x9c, 0x2a, 0x9c, 0x4c, 0x4d, 0x25, 0x90, 0x6a, 0x4b, 0x94, 0x50, 0x7d, 0x26, 0x26, 0xfb, + 0x00, 0x4e, 0x20, 0xa7, 0x66, 0xd0, 0xe5, 0xad, 0xfa, 0xe6, 0xa6, 0xd7, 0xf6, 0xc8, 0x76, 0x72, + 0x2a, 0xb5, 0x61, 0xa2, 0x8a, 0x11, 0xc3, 0x71, 0x53, 0x14, 0xbb, 0x98, 0x92, 0x3d, 0x80, 0xda, + 0x62, 0x2f, 0x8a, 0x83, 0x4e, 0x3d, 0x8e, 
0x43, 0x6f, 0xa3, 0x17, 0xf3, 0x68, 0x76, 0xda, 0x88, + 0x5e, 0x24, 0x16, 0x47, 0x52, 0x48, 0xfa, 0xa0, 0x26, 0x52, 0x38, 0x6e, 0x42, 0x62, 0xe7, 0x98, + 0x58, 0xff, 0x73, 0x09, 0x26, 0x0d, 0x52, 0xf6, 0x16, 0x4c, 0x5c, 0x0b, 0x3d, 0xee, 0xb7, 0xda, + 0x7b, 0xda, 0x45, 0x15, 0x6f, 0x31, 0x9b, 0x12, 0x4e, 0xbd, 0x36, 0xd0, 0x12, 0x3d, 0x4f, 0xb9, + 0xd0, 0xb0, 0xf9, 0x12, 0x45, 0x35, 0x90, 0x13, 0xb4, 0x92, 0x86, 0x53, 0xc3, 0x09, 0x2a, 0x67, + 0xa7, 0x86, 0xc2, 0xbe, 0x00, 0x23, 0xf4, 0x1e, 0x2c, 0xad, 0x6c, 0x4f, 0x17, 0x75, 0x93, 0x22, + 0x68, 0xe0, 0x44, 0x44, 0x03, 0xa2, 0xc8, 0x96, 0x44, 0xd6, 0xcf, 0x94, 0x80, 0xe5, 0x51, 0x0f, + 0xd1, 0x7b, 0x1d, 0x6a, 0x98, 0xf4, 0xa5, 0x64, 0x35, 0x56, 0x0c, 0x9d, 0xb9, 0xa8, 0x89, 0x0a, + 0x68, 0xe0, 0xe5, 0xaa, 0xd3, 0x15, 0x71, 0x54, 0x6c, 0xfd, 0x68, 0x19, 0x20, 0xc5, 0x66, 0x9f, + 0xa7, 0xfc, 0x9e, 0x1f, 0xf4, 0xdc, 0xb6, 0xb7, 0xe9, 0x99, 0x41, 0xa3, 0x91, 0xc9, 0x37, 0x55, + 0x89, 0x6d, 0x22, 0xb2, 0xf7, 0x61, 0xba, 0xb1, 0x66, 0xd2, 0x6a, 0x1e, 0x22, 0x51, 0xd7, 0xc9, + 0x90, 0x67, 0xb1, 0xd1, 0x9a, 0x5e, 0xff, 0x1a, 0x64, 0x4d, 0x4f, 0x1f, 0x42, 0x96, 0x88, 0x8d, + 0xa5, 0xb1, 0x26, 0x9d, 0x60, 0x5a, 0xc9, 0xab, 0x26, 0xb6, 0x2e, 0xea, 0x3a, 0x5d, 0xe9, 0x1d, + 0x23, 0xf6, 0x09, 0x03, 0x2f, 0x1d, 0xc8, 0xe1, 0x3e, 0x51, 0x32, 0x7e, 0x16, 0xd5, 0x7e, 0x9d, + 0x20, 0xe6, 0x52, 0xdb, 0xf1, 0xd4, 0xde, 0x7b, 0x52, 0x63, 0x82, 0x61, 0xc3, 0xf9, 0xdf, 0xe8, + 0x9d, 0x34, 0x98, 0xb9, 0x92, 0x5e, 0x52, 0xc8, 0xac, 0xa0, 0xc0, 0xc6, 0xe6, 0x17, 0x4b, 0x70, + 0xa2, 0x90, 0x96, 0x5d, 0x04, 0x48, 0x75, 0x4a, 0x72, 0x94, 0x70, 0xc7, 0x4c, 0x83, 0x88, 0xd9, + 0x1a, 0x06, 0xfb, 0x6a, 0x56, 0x1b, 0x74, 0xf8, 0x41, 0x38, 0xa7, 0x62, 0x77, 0x9a, 0xda, 0xa0, + 0x02, 0x1d, 0x90, 0xf5, 0x4b, 0x15, 0x98, 0xd1, 0x62, 0x94, 0x51, 0x5b, 0x0f, 0xf1, 0x6e, 0xd8, + 0x81, 0x09, 0xd1, 0x1b, 0xaf, 0x29, 0xbd, 0xd7, 0xc8, 0xf0, 0xe5, 0x95, 0x9c, 0xfb, 0xb6, 0xe4, + 0x76, 0x51, 0x47, 0x26, 0xa3, 0x52, 0xdc, 0x3a, 0xf1, 0x41, 0xa2, 0x99, 0xf7, 
0x5c, 0x33, 0x98, + 0xb3, 0x08, 0x26, 0x97, 0xf6, 0x7c, 0xb7, 0x93, 0xd4, 0x46, 0x06, 0x30, 0xaf, 0xf6, 0xad, 0xcd, + 0xc0, 0xa6, 0xea, 0x52, 0x47, 0x47, 0x2a, 0x2b, 0x88, 0xb1, 0x61, 0x50, 0xcd, 0xbd, 0x0f, 0x33, + 0xb9, 0x46, 0x1f, 0x29, 0xb8, 0xef, 0x03, 0x60, 0xf9, 0x76, 0x0c, 0x6e, 0x22, 0x2a, 0xce, 0x3c, + 0xd7, 0x6f, 0x91, 0x39, 0xcd, 0x65, 0xdd, 0x44, 0xf4, 0x67, 0xcb, 0xba, 0x0b, 0xfd, 0xd3, 0xbe, + 0xea, 0xbe, 0x64, 0xdc, 0x86, 0x9f, 0xeb, 0xf7, 0x4d, 0x07, 0xd2, 0x3a, 0x7c, 0xaf, 0x02, 0xa7, + 0xfa, 0x50, 0xb2, 0xbd, 0xec, 0x24, 0x22, 0x2d, 0xc4, 0x9b, 0x07, 0x57, 0xf8, 0x24, 0xa6, 0x12, + 0xfb, 0x3c, 0x05, 0xd1, 0x69, 0x62, 0x4e, 0x7e, 0x79, 0xff, 0x46, 0x35, 0xfe, 0x4e, 0x02, 0xcd, + 0x46, 0xcf, 0x21, 0x28, 0x7b, 0x1f, 0x86, 0x31, 0x7e, 0x42, 0x26, 0x4a, 0xaa, 0xc0, 0x40, 0xb8, + 0x16, 0xe7, 0x57, 0xfc, 0x34, 0xe2, 0xfc, 0x0a, 0x00, 0xfb, 0x1c, 0x54, 0xea, 0x0f, 0x1a, 0xf2, + 0xbb, 0x4c, 0xe9, 0xe4, 0x0f, 0x1a, 0x69, 0x82, 0x2f, 0xd7, 0xc8, 0xc4, 0x25, 0x28, 0x04, 0xe1, + 0xf5, 0xc5, 0x35, 0xf9, 0x55, 0x74, 0xc2, 0xeb, 0x8b, 0x6b, 0x29, 0xe1, 0x96, 0xe9, 0xa1, 0x79, + 0x7d, 0x71, 0xed, 0xd3, 0x9b, 0xf6, 0xff, 0x7e, 0x99, 0x22, 0xff, 0x50, 0xc7, 0xde, 0x87, 0x09, + 0x23, 0xb4, 0x7f, 0x49, 0xb7, 0x29, 0x97, 0x06, 0xf6, 0x19, 0x8b, 0x21, 0x83, 0x40, 0xa5, 0xca, + 0x4b, 0x6c, 0xe0, 0x75, 0x63, 0x9b, 0x84, 0x43, 0xd6, 0xf1, 0xcd, 0x24, 0x61, 0x57, 0xa0, 0xba, + 0xce, 0x7d, 0xd7, 0x8f, 0x13, 0x85, 0x28, 0x1a, 0x2a, 0xc7, 0x08, 0x33, 0xa5, 0x86, 0x04, 0x11, + 0x6d, 0x6a, 0x7b, 0x1b, 0x51, 0x33, 0xf4, 0x30, 0x42, 0x58, 0x72, 0x16, 0x93, 0x4d, 0xad, 0x56, + 0x62, 0x32, 0xc8, 0x10, 0x59, 0x3f, 0x5b, 0x82, 0x51, 0xf9, 0x21, 0x29, 0xc5, 0xe9, 0x56, 0x7a, + 0x96, 0x48, 0x57, 0x97, 0x2d, 0x2f, 0xeb, 0xea, 0xb2, 0x45, 0x61, 0xb8, 0xc6, 0xa4, 0x8f, 0x69, + 0xf2, 0x34, 0x88, 0xb3, 0x51, 0x79, 0x4f, 0x9b, 0x19, 0x2c, 0x13, 0xd4, 0x41, 0x7d, 0x0e, 0xad, + 0x9f, 0x93, 0x2d, 0xbb, 0xbe, 0xb8, 0xc6, 0x2e, 0x43, 0x75, 0x35, 0xa0, 0x88, 0x72, 0x7a, 0xbe, + 0xfe, 0xb6, 0x84, 
0xe9, 0x03, 0xa4, 0xf0, 0x44, 0xfb, 0xd6, 0xc2, 0x40, 0xde, 0x65, 0xb4, 0xf6, + 0x75, 0x09, 0x98, 0x69, 0x5f, 0x82, 0x3a, 0x70, 0xfb, 0x78, 0xc1, 0x26, 0x71, 0xff, 0x0a, 0xfa, + 0x17, 0xdc, 0xd4, 0x7d, 0x39, 0x65, 0x91, 0xda, 0x29, 0xe6, 0xfa, 0xed, 0x14, 0xf7, 0xaf, 0xd8, + 0x05, 0x54, 0xd6, 0xef, 0x95, 0x75, 0x66, 0x0d, 0x1e, 0xee, 0x3e, 0xc5, 0xbb, 0x74, 0xf1, 0xbb, + 0x5a, 0xb6, 0x7b, 0xfd, 0x37, 0x69, 0xf6, 0x39, 0x90, 0x82, 0x92, 0xd4, 0xe3, 0xcd, 0xf7, 0x63, + 0x21, 0x65, 0x24, 0x5b, 0xa2, 0x8b, 0x03, 0x16, 0xef, 0x56, 0x74, 0xab, 0xb5, 0xe9, 0x87, 0xb6, + 0xe7, 0x7f, 0xa7, 0x02, 0x27, 0x8b, 0xdb, 0xa1, 0x0f, 0x4d, 0xe9, 0x80, 0xa1, 0x39, 0x0f, 0xd5, + 0x1b, 0x41, 0x14, 0x6b, 0x36, 0x87, 0xf8, 0x9a, 0xb0, 0x2d, 0x61, 0x76, 0x52, 0x2a, 0xae, 0xf0, + 0xe2, 0xef, 0x64, 0xb5, 0x23, 0x3f, 0x0c, 0x9f, 0x23, 0xae, 0xf0, 0x54, 0xc4, 0xae, 0x43, 0xd5, + 0x96, 0x5e, 0x86, 0x99, 0x91, 0x56, 0xe0, 0x44, 0x38, 0x63, 0xa1, 0x84, 0x18, 0x09, 0x1b, 0x24, + 0x8c, 0xd5, 0x61, 0x54, 0x4e, 0xa6, 0xcc, 0x4b, 0x74, 0xc1, 0x0c, 0x34, 0x73, 0xa8, 0x28, 0x3a, + 0xb1, 0x41, 0xe1, 0x9b, 0xe2, 0xca, 0x92, 0x72, 0x18, 0xc4, 0x0d, 0x8a, 0xde, 0x1c, 0x4d, 0xf3, + 0xce, 0x04, 0x91, 0xcd, 0xc3, 0x78, 0xc8, 0xdb, 0xee, 0x1e, 0xed, 0x7e, 0x72, 0xdc, 0x01, 0x41, + 0xb4, 0xed, 0x3d, 0x03, 0x63, 0x84, 0xe0, 0xb5, 0xa4, 0x06, 0xc1, 0xae, 0x22, 0x60, 0xa5, 0x15, + 0x59, 0xeb, 0x30, 0xdb, 0xef, 0x9b, 0x8a, 0x3b, 0x52, 0xec, 0x86, 0x5b, 0x1c, 0x05, 0xd2, 0xb6, + 0xf4, 0x26, 0x4f, 0x8d, 0x85, 0xd6, 0xb1, 0xec, 0x06, 0x16, 0xd9, 0x13, 0xb1, 0xf6, 0xcb, 0xfa, + 0x76, 0x19, 0x40, 0x29, 0xa6, 0x9e, 0xda, 0x45, 0xf4, 0x39, 0x63, 0x11, 0x69, 0x26, 0x55, 0x94, + 0xfc, 0x7e, 0x10, 0x09, 0xe7, 0x2e, 0x5a, 0x2c, 0x69, 0xf8, 0x87, 0xe7, 0xd7, 0x5d, 0x4f, 0x75, + 0x76, 0xd2, 0x83, 0x07, 0x35, 0xee, 0x04, 0xb7, 0x36, 0xe0, 0xf8, 0x75, 0x1e, 0xa7, 0x1a, 0x3c, + 0xf5, 0xba, 0x7a, 0x30, 0xdb, 0xd7, 0x60, 0x4c, 0xe2, 0x27, 0x5b, 0x34, 0xa9, 0x9b, 0x64, 0x94, + 0x2c, 0x54, 0x37, 0x29, 0x04, 0xb1, 0xe1, 0x2e, 0xf1, 
0x36, 0x8f, 0xf9, 0xa7, 0x5b, 0x4d, 0x03, + 0x18, 0x75, 0x05, 0x7b, 0x36, 0x58, 0x0d, 0x87, 0x8e, 0xcf, 0x7d, 0x38, 0x91, 0xb4, 0xfd, 0x49, + 0xf2, 0xbd, 0x24, 0x6e, 0xcd, 0x32, 0x25, 0x4a, 0xca, 0xf1, 0x00, 0xf3, 0x9a, 0xdf, 0x2e, 0xc1, + 0x9c, 0xa2, 0x78, 0xe0, 0x25, 0xc6, 0xa1, 0x03, 0x11, 0xb3, 0xf7, 0x60, 0x5c, 0xa3, 0x91, 0xce, + 0x37, 0xa8, 0x8a, 0x7f, 0xe8, 0xc5, 0xdb, 0x4e, 0x44, 0x70, 0x5d, 0x15, 0xaf, 0xa1, 0xb3, 0x0d, + 0x98, 0x6b, 0xd4, 0x6f, 0xaf, 0xa6, 0xee, 0x73, 0x77, 0x82, 0x6b, 0x41, 0xbb, 0x1d, 0x3c, 0xbc, + 0x67, 0xaf, 0xaa, 0x1c, 0x68, 0x18, 0x0a, 0x08, 0xf5, 0xfa, 0x9a, 0x0f, 0x9e, 0x1f, 0x38, 0x9b, + 0x88, 0xe8, 0xf4, 0xc2, 0x76, 0x64, 0x1f, 0xc0, 0xc5, 0xfa, 0x7b, 0x25, 0x78, 0x26, 0xf1, 0xe5, + 0x2a, 0xe8, 0x5f, 0xa6, 0x07, 0xa5, 0x27, 0xd9, 0x83, 0xf2, 0x13, 0xe9, 0xc1, 0x9d, 0xf4, 0xfb, + 0xac, 0xf8, 0x49, 0xa4, 0x06, 0xd5, 0x7e, 0xa6, 0x7f, 0x1f, 0xf9, 0x55, 0x9e, 0xcd, 0xc5, 0x7e, + 0xd0, 0x42, 0x3c, 0x58, 0xef, 0x6a, 0x03, 0x52, 0xc0, 0xd0, 0x20, 0x2e, 0x65, 0x89, 0xbf, 0x5d, + 0x86, 0xe9, 0xbb, 0x2b, 0x4b, 0x8b, 0x89, 0xa9, 0xd8, 0x53, 0xbb, 0x69, 0x16, 0x1b, 0x63, 0x19, + 0x7d, 0xeb, 0xbf, 0x73, 0x5a, 0xf7, 0xe0, 0x58, 0x66, 0x18, 0x50, 0xce, 0xfb, 0x22, 0x79, 0x07, + 0x25, 0x60, 0x25, 0xe3, 0x9d, 0x2c, 0x62, 0x7f, 0xff, 0x8a, 0x9d, 0xc1, 0xb6, 0xfe, 0xb7, 0x89, + 0x0c, 0x5f, 0xb9, 0x19, 0xbf, 0x06, 0x63, 0x2b, 0x51, 0xd4, 0xe3, 0xe1, 0x3d, 0x7b, 0x55, 0xd7, + 0xeb, 0x78, 0x08, 0x14, 0x73, 0xc8, 0x4e, 0x11, 0xd8, 0x05, 0xa8, 0xca, 0xc4, 0x10, 0x6a, 0x77, + 0x43, 0x15, 0x7b, 0x92, 0x57, 0xc2, 0x4e, 0x8a, 0xd9, 0x5b, 0x30, 0x41, 0x7f, 0xd3, 0x8c, 0x96, + 0x03, 0x8e, 0x9a, 0x5c, 0x89, 0x4e, 0x2b, 0xc0, 0x36, 0xd0, 0xd8, 0x2b, 0x50, 0xa9, 0x2f, 0xda, + 0x52, 0x77, 0x27, 0x85, 0xfc, 0xd0, 0x21, 0x05, 0xab, 0x71, 0xe3, 0x5b, 0xb4, 0x85, 0xa8, 0xae, + 0x82, 0xe4, 0xc8, 0x67, 0x07, 0x9c, 0x01, 0x4a, 0x35, 0x98, 0x11, 0x15, 0x10, 0xc6, 0x2e, 0xc1, + 0xe8, 0x12, 0xd9, 0x37, 0xca, 0x47, 0x07, 0x4a, 0x5d, 0x4c, 0x20, 0x23, 0xd8, 0x0b, 0x81, 
0xd8, + 0x05, 0x95, 0x35, 0xb4, 0x9a, 0x3a, 0x19, 0xf5, 0x49, 0x0d, 0x9a, 0xba, 0xcc, 0x8f, 0x1d, 0xee, + 0x32, 0x9f, 0x77, 0x76, 0x87, 0x27, 0xe9, 0xec, 0xbe, 0x01, 0xa7, 0xae, 0xa3, 0xaa, 0xcd, 0x0c, + 0x02, 0x78, 0xcf, 0x5e, 0x91, 0x8f, 0x17, 0xf8, 0x66, 0x47, 0xda, 0xb8, 0x6c, 0x1c, 0x41, 0xa7, + 0x17, 0xea, 0x99, 0xf8, 0xfb, 0x31, 0x62, 0x1f, 0xc2, 0xf1, 0xa2, 0x22, 0xf9, 0xc4, 0x81, 0xe1, + 0xee, 0x8a, 0x2b, 0xd0, 0xc3, 0xdd, 0x15, 0x71, 0x60, 0xab, 0x50, 0x23, 0x78, 0xbd, 0xd5, 0xf1, + 0x7c, 0x7a, 0xa6, 0x99, 0x4c, 0x9d, 0xa6, 0x25, 0x57, 0x57, 0x14, 0xd2, 0x73, 0x8d, 0xe1, 0x27, + 0x96, 0xa1, 0x64, 0x3f, 0x51, 0x12, 0x57, 0x6f, 0x4a, 0x36, 0x80, 0xdb, 0xe7, 0x94, 0x7c, 0xf0, + 0x4d, 0x1c, 0xb7, 0x1a, 0x71, 0xe8, 0xf9, 0x5b, 0xd2, 0x07, 0x6c, 0x5d, 0xfa, 0x80, 0xbd, 0xf7, + 0xb1, 0x7c, 0xc0, 0x88, 0x55, 0xf4, 0x78, 0x7f, 0x7e, 0x22, 0x94, 0x75, 0xe2, 0x2a, 0x32, 0x5a, + 0x20, 0x86, 0x0e, 0xfd, 0xf0, 0xef, 0xf9, 0x14, 0xea, 0x9c, 0xb7, 0xa8, 0x93, 0xd3, 0xb8, 0xb1, + 0xe3, 0xd0, 0xb9, 0xb4, 0x89, 0x27, 0x08, 0xb9, 0x8e, 0x16, 0x72, 0x60, 0x0b, 0xf4, 0x74, 0x24, + 0x8e, 0x52, 0x72, 0xc3, 0xae, 0xa5, 0x5a, 0x02, 0xe5, 0x94, 0xe4, 0xe0, 0x34, 0xd2, 0x27, 0x8f, + 0x41, 0xc2, 0x2e, 0xc1, 0xc8, 0x6d, 0xf7, 0x51, 0x7d, 0x8b, 0xcb, 0x54, 0xdd, 0x93, 0x6a, 0xfb, + 0x43, 0xe0, 0x42, 0xf5, 0xb7, 0xc8, 0x31, 0xe5, 0x33, 0xb6, 0x44, 0x63, 0x7f, 0xaa, 0x04, 0x27, + 0x69, 0x19, 0xab, 0x5e, 0x36, 0x78, 0x1c, 0x8b, 0x71, 0x90, 0x31, 0x53, 0xcf, 0xa6, 0xd6, 0xf5, + 0xc5, 0x78, 0x18, 0xd4, 0xc3, 0x92, 0x3b, 0x43, 0x32, 0x70, 0x91, 0x2c, 0x35, 0x82, 0xcf, 0x17, + 0xd2, 0xb3, 0x75, 0x18, 0xbf, 0x7d, 0xad, 0x9e, 0x54, 0x7b, 0xcc, 0xb8, 0xb3, 0x19, 0x3b, 0x9f, + 0x86, 0x56, 0xe4, 0x16, 0xa2, 0xb3, 0xc1, 0xeb, 0xc8, 0xad, 0xc5, 0x65, 0x0c, 0xa3, 0x71, 0x3c, + 0xd5, 0x97, 0x74, 0x77, 0x9a, 0x3c, 0x1b, 0x15, 0x3f, 0x41, 0x64, 0xef, 0x93, 0xa7, 0x2a, 0x46, + 0x5b, 0x12, 0xb7, 0xf1, 0x13, 0x69, 0x60, 0x5b, 0x0a, 0xa7, 0x2f, 0x0b, 0x74, 0x4d, 0x8f, 0x4e, + 0xc0, 0xee, 0x82, 0x0a, 0xcb, 
0x41, 0x86, 0xef, 0x58, 0xfd, 0xc9, 0xd4, 0x3b, 0x4c, 0x59, 0x3b, + 0x90, 0xbd, 0x7c, 0xb6, 0x21, 0x79, 0x5a, 0x76, 0x0f, 0x66, 0xb9, 0x1f, 0x87, 0xae, 0xe3, 0xb5, + 0x64, 0x08, 0xcc, 0xe4, 0x01, 0x45, 0x26, 0xe8, 0x56, 0x4f, 0x07, 0xcb, 0x02, 0x6d, 0x65, 0x89, + 0x9e, 0x54, 0xd5, 0xae, 0x69, 0x9f, 0x40, 0xea, 0x95, 0x96, 0x09, 0x96, 0x8e, 0x2e, 0x7b, 0x70, + 0xa2, 0x90, 0x8a, 0xcd, 0x41, 0xb5, 0xe5, 0x45, 0x69, 0x9a, 0xab, 0xaa, 0x9d, 0xfc, 0x66, 0x67, + 0x00, 0x28, 0x74, 0xa0, 0x66, 0x9e, 0x3e, 0x86, 0x10, 0x7c, 0x0d, 0x7b, 0x09, 0xa6, 0xb6, 0x42, + 0xb7, 0xbb, 0xed, 0x70, 0xbf, 0xd5, 0x0d, 0x3c, 0x5f, 0x9e, 0x1f, 0xf6, 0x24, 0x42, 0x97, 0x25, + 0xd0, 0xfa, 0x9c, 0x9a, 0xa8, 0xec, 0x75, 0xdd, 0x39, 0xbc, 0x82, 0x5f, 0x69, 0xb4, 0xe3, 0x3e, + 0x72, 0xdc, 0x2d, 0x6e, 0xd8, 0x9a, 0xc8, 0x37, 0xa0, 0x9f, 0x2e, 0xc1, 0xe9, 0xbe, 0x73, 0x91, + 0x5d, 0x85, 0x53, 0x2e, 0x45, 0xc9, 0x70, 0xb6, 0xe3, 0xb8, 0x1b, 0x39, 0xea, 0x66, 0xad, 0x42, + 0x8f, 0x9d, 0x90, 0xc5, 0x37, 0x44, 0xa9, 0xba, 0x6c, 0x47, 0xec, 0x7d, 0x78, 0xd6, 0xf3, 0x23, + 0xde, 0xec, 0x85, 0xdc, 0x51, 0x0c, 0x9a, 0x5e, 0x2b, 0x74, 0x42, 0xd7, 0xdf, 0x52, 0xbe, 0xf1, + 0xf6, 0x69, 0x85, 0x23, 0x23, 0x71, 0x2c, 0x7a, 0xad, 0xd0, 0x46, 0x04, 0xeb, 0x17, 0xcb, 0x30, + 0xdb, 0x6f, 0xae, 0xb2, 0x59, 0x18, 0xe5, 0xbe, 0x3e, 0x9a, 0xea, 0xa7, 0xb8, 0xde, 0x26, 0x47, + 0xb0, 0x1c, 0xcb, 0x6a, 0x53, 0x26, 0x70, 0x42, 0x07, 0x11, 0xfd, 0xc0, 0x95, 0x23, 0x39, 0xd1, + 0xd4, 0x8f, 0xdd, 0x33, 0x00, 0xe9, 0x39, 0x4b, 0xea, 0x3d, 0x7b, 0xcc, 0x6d, 0x86, 0xb4, 0x25, + 0xb2, 0x93, 0x30, 0x42, 0xe7, 0x98, 0xf4, 0x22, 0x92, 0xbf, 0x84, 0x40, 0x25, 0x07, 0x19, 0x0f, + 0xe0, 0xca, 0xc2, 0x84, 0x31, 0xd8, 0x23, 0x1d, 0xfa, 0x38, 0x85, 0xf3, 0x79, 0xf4, 0xe3, 0xcf, + 0x67, 0xeb, 0xff, 0x9e, 0x20, 0x61, 0xb1, 0xde, 0x8b, 0xb7, 0x95, 0x78, 0x79, 0xb9, 0xc8, 0x8d, + 0x93, 0x4c, 0x9c, 0x35, 0x77, 0x09, 0xd3, 0x79, 0x53, 0x3d, 0xc9, 0x96, 0x0b, 0x9f, 0x64, 0x5f, + 0x83, 0xb1, 0xc5, 0x6d, 0xde, 0xdc, 0x49, 0x7c, 0xe3, 0xaa, 0xf2, 
0xcd, 0x4b, 0x00, 0x29, 0xe1, + 0x47, 0x8a, 0xc0, 0x2e, 0x01, 0xa0, 0xf7, 0x38, 0xdd, 0xa2, 0xb4, 0xa4, 0x5d, 0xe8, 0x6c, 0x2e, + 0xad, 0xc6, 0x34, 0x14, 0x64, 0xdf, 0xb0, 0xaf, 0xe9, 0x66, 0x66, 0xc4, 0x3e, 0x0a, 0x37, 0x25, + 0x7a, 0x8a, 0x20, 0xba, 0xa7, 0x9d, 0x20, 0x52, 0xde, 0xa9, 0xe5, 0x8e, 0x19, 0x1d, 0x89, 0x7d, + 0x0e, 0x46, 0x17, 0x79, 0x18, 0xaf, 0xaf, 0xaf, 0xa2, 0x6d, 0x17, 0xe5, 0xaa, 0xaa, 0x62, 0x5e, + 0xa1, 0x38, 0x6e, 0x7f, 0x7f, 0x7f, 0x7e, 0x32, 0xf6, 0x3a, 0x3c, 0xc9, 0xc1, 0x61, 0x2b, 0x6c, + 0xb6, 0x00, 0x35, 0xb2, 0x3e, 0x49, 0xef, 0xbf, 0x28, 0xd3, 0x54, 0x49, 0xc2, 0x92, 0xa6, 0x2a, + 0x0f, 0xf9, 0x46, 0x92, 0x55, 0x29, 0x87, 0xcf, 0x96, 0x55, 0x32, 0x32, 0xbd, 0xd9, 0x90, 0xee, + 0xa1, 0xd9, 0xbd, 0x5e, 0xb4, 0x3e, 0x4f, 0xc1, 0xea, 0x30, 0xb9, 0x18, 0x74, 0xba, 0x6e, 0xec, + 0x61, 0xae, 0xe3, 0x3d, 0x29, 0xbe, 0xe0, 0x6e, 0xda, 0xd4, 0x0b, 0x0c, 0x59, 0x48, 0x2f, 0x60, + 0xd7, 0x60, 0xca, 0x0e, 0x7a, 0x62, 0xd8, 0x95, 0x76, 0x8a, 0x24, 0x14, 0xb4, 0xc0, 0x0a, 0x45, + 0x89, 0x10, 0xa8, 0xa4, 0x2a, 0xca, 0x88, 0x57, 0x6e, 0x50, 0xb1, 0x3b, 0x05, 0xaf, 0x8e, 0xba, + 0x58, 0xa2, 0xe7, 0x56, 0xca, 0x31, 0x2b, 0x78, 0xb0, 0xbc, 0x02, 0xe3, 0x8d, 0xc6, 0xdd, 0x75, + 0x1e, 0xc5, 0xd7, 0xda, 0xc1, 0x43, 0x94, 0x4a, 0xaa, 0x32, 0x15, 0x68, 0x14, 0x38, 0xb1, 0x58, + 0x11, 0x9b, 0xed, 0xe0, 0xa1, 0xad, 0x63, 0xb1, 0xaf, 0x8b, 0xf1, 0xd0, 0x64, 0x78, 0x19, 0x99, + 0xfd, 0xa0, 0x6b, 0x06, 0x9e, 0xfd, 0xe9, 0x22, 0x10, 0x97, 0x0d, 0x73, 0xb0, 0x34, 0x74, 0x74, + 0xdd, 0x0c, 0x83, 0x47, 0x7b, 0xf5, 0x56, 0x2b, 0xe4, 0x51, 0x24, 0xc5, 0x07, 0x72, 0xdd, 0x44, + 0x25, 0x9c, 0x4b, 0x05, 0x86, 0xeb, 0xa6, 0x46, 0xc0, 0x16, 0x85, 0x5c, 0x2b, 0xbe, 0x22, 0xda, + 0x04, 0xae, 0xac, 0xa1, 0x04, 0x20, 0x1f, 0x0b, 0xe4, 0x37, 0x27, 0xeb, 0x41, 0xaf, 0x6b, 0x8a, + 0xaf, 0x1a, 0x0d, 0x5b, 0x81, 0x69, 0x02, 0x88, 0xa5, 0x45, 0x89, 0x00, 0x8f, 0xa5, 0xa9, 0x88, + 0x24, 0x1b, 0x3c, 0x4c, 0x31, 0x19, 0xa0, 0x1e, 0x7f, 0x23, 0x43, 0xc7, 0xde, 0x87, 0x29, 0xcc, + 0xb2, 
0x92, 0xf8, 0xbf, 0xe1, 0x41, 0x3e, 0x41, 0x51, 0xc8, 0x65, 0x49, 0xc6, 0xa9, 0x74, 0x22, + 0x8a, 0xb6, 0xd7, 0x94, 0x63, 0x9c, 0x60, 0x80, 0x66, 0x68, 0x29, 0x83, 0x13, 0x29, 0x03, 0x59, + 0x92, 0x65, 0x10, 0xb7, 0xa3, 0x94, 0xc1, 0x4f, 0x95, 0xe0, 0xb4, 0xa8, 0x48, 0x77, 0x75, 0xc3, + 0x4d, 0x01, 0x6d, 0xec, 0x28, 0x43, 0xd4, 0xeb, 0x17, 0x95, 0x50, 0x79, 0x51, 0x43, 0xbb, 0xb8, + 0xfb, 0xe6, 0xc5, 0x7a, 0xfa, 0xb3, 0xa1, 0x88, 0x28, 0x2e, 0x73, 0x5f, 0x9e, 0xba, 0xf0, 0x1e, + 0x45, 0xdb, 0x45, 0x1c, 0xb0, 0x51, 0xa2, 0xf1, 0xc5, 0x8d, 0x3a, 0xf5, 0xb1, 0x1b, 0xd5, 0x97, + 0xa7, 0xde, 0xa8, 0xb8, 0x1d, 0x15, 0x36, 0xea, 0x2a, 0x4c, 0xa2, 0x68, 0x25, 0x45, 0xda, 0x50, + 0xe6, 0x9f, 0xc2, 0x35, 0x61, 0x14, 0xd8, 0x13, 0xe2, 0xe7, 0x7d, 0xf9, 0x8b, 0x7d, 0x0e, 0xa4, + 0x45, 0xea, 0xb6, 0x10, 0x15, 0x4e, 0xa7, 0x97, 0xc7, 0x14, 0xaa, 0xbf, 0xc0, 0x20, 0xf4, 0x86, + 0xe7, 0xc7, 0x37, 0x87, 0xaa, 0xa3, 0xb5, 0xea, 0xcd, 0xa1, 0xea, 0x4c, 0x8d, 0xd9, 0x63, 0xc9, + 0x17, 0xb3, 0x4f, 0x14, 0xf6, 0x00, 0x75, 0x14, 0x8d, 0xfa, 0xed, 0xd5, 0xf4, 0xa2, 0xfd, 0x83, + 0xe5, 0x30, 0x66, 0xf4, 0xed, 0x00, 0x87, 0xb1, 0x7b, 0x14, 0xbf, 0x40, 0x1b, 0x06, 0xa5, 0xa3, + 0x30, 0xc0, 0x59, 0x1d, 0x45, 0x86, 0xc6, 0xce, 0x60, 0x5b, 0xdf, 0x99, 0xc8, 0xf0, 0x95, 0x46, + 0xe2, 0x16, 0x8c, 0x90, 0x0a, 0x42, 0x0e, 0x32, 0x5a, 0x0b, 0x91, 0x82, 0xc2, 0x96, 0x25, 0xec, + 0x34, 0x54, 0x1a, 0x8d, 0xbb, 0x72, 0x90, 0xd1, 0x54, 0x3c, 0x8a, 0x02, 0x5b, 0xc0, 0xc4, 0x17, + 0x42, 0xfb, 0x6f, 0x2d, 0xa9, 0x8e, 0x38, 0x02, 0x6d, 0x84, 0x8a, 0xf1, 0x56, 0x0a, 0x81, 0xa1, + 0x74, 0xbc, 0xa5, 0x42, 0x20, 0x55, 0x03, 0x2c, 0xc2, 0x6c, 0x3d, 0x8a, 0x78, 0x28, 0x66, 0x84, + 0x34, 0x2b, 0x0e, 0xe5, 0xa5, 0x55, 0x9e, 0xdd, 0x58, 0xa9, 0xdb, 0x8c, 0xec, 0xbe, 0x88, 0xec, + 0x3c, 0x54, 0xeb, 0xbd, 0x96, 0xc7, 0xfd, 0xa6, 0x11, 0xbe, 0xd3, 0x95, 0x30, 0x3b, 0x29, 0x65, + 0x1f, 0xc0, 0x89, 0x4c, 0x18, 0x60, 0x39, 0x02, 0xa3, 0xe9, 0x76, 0xac, 0x2e, 0xd5, 0xa9, 0x29, + 0x14, 0x0d, 0x49, 0x31, 0x25, 0xab, 0x43, 
0x6d, 0x19, 0x1d, 0x24, 0x97, 0x38, 0xbd, 0xca, 0x06, + 0x21, 0x79, 0xc6, 0x92, 0x0a, 0x44, 0x06, 0x4a, 0x6e, 0x25, 0x85, 0x76, 0x0e, 0x9d, 0xdd, 0x82, + 0x63, 0x59, 0x98, 0x38, 0xd4, 0x49, 0xdb, 0x81, 0xdb, 0x61, 0x8e, 0x0b, 0x1e, 0xeb, 0x45, 0x54, + 0x6c, 0x03, 0x66, 0x52, 0x53, 0x40, 0x53, 0x07, 0xa2, 0x3c, 0x0c, 0x92, 0x72, 0xa5, 0x07, 0x79, + 0x46, 0x4e, 0xc6, 0x63, 0xa9, 0x59, 0x61, 0xa2, 0x0b, 0xb1, 0xf3, 0xec, 0x58, 0x0b, 0xa6, 0x1a, + 0xde, 0x96, 0xef, 0xf9, 0x5b, 0xb7, 0xf8, 0xde, 0x9a, 0xeb, 0x85, 0xd2, 0xd6, 0x5b, 0x79, 0x72, + 0xd4, 0xa3, 0xbd, 0x4e, 0x87, 0xc7, 0x21, 0xae, 0x7a, 0x51, 0x8e, 0xd1, 0x1f, 0xc4, 0xdd, 0x76, + 0x2e, 0x22, 0x3a, 0x74, 0x98, 0xee, 0xba, 0x9e, 0x21, 0x17, 0x98, 0x3c, 0x0d, 0x3d, 0xd4, 0xc4, + 0x80, 0x7a, 0xa8, 0x36, 0xcc, 0x2c, 0xfb, 0xcd, 0x70, 0x0f, 0x1f, 0xc7, 0x55, 0xe3, 0x26, 0x0f, + 0x69, 0xdc, 0x8b, 0xb2, 0x71, 0xcf, 0xba, 0x6a, 0x86, 0x15, 0x35, 0x2f, 0xcf, 0x98, 0x35, 0x60, + 0x06, 0x2f, 0x1b, 0x2b, 0x4b, 0x6b, 0x2b, 0xbe, 0x17, 0x7b, 0x6e, 0xcc, 0x5b, 0x52, 0xde, 0x48, + 0x52, 0x91, 0x91, 0xbe, 0xc1, 0x6b, 0x75, 0x1d, 0x4f, 0xa1, 0xe8, 0x4c, 0x73, 0xf4, 0x07, 0x5d, + 0xfa, 0xa7, 0xff, 0x90, 0x2e, 0xfd, 0x2b, 0x30, 0x9d, 0x0d, 0xa2, 0x52, 0x4b, 0xc5, 0x84, 0x08, + 0x8b, 0x84, 0xb4, 0x11, 0xf4, 0x50, 0xbe, 0x34, 0xc2, 0x74, 0x65, 0xc2, 0xa7, 0x64, 0xf4, 0x07, + 0x33, 0x86, 0xfe, 0xc0, 0xd8, 0x95, 0x8e, 0xa2, 0x3f, 0x58, 0x03, 0xb8, 0x16, 0x84, 0x4d, 0x5e, + 0xc7, 0xc8, 0x04, 0xcc, 0x48, 0xd8, 0x28, 0x98, 0xa6, 0x85, 0xb4, 0x7e, 0x36, 0xc5, 0x6f, 0x27, + 0x1b, 0x60, 0x42, 0xe3, 0xc1, 0x5c, 0x38, 0xb5, 0x16, 0xf2, 0x4d, 0x1e, 0x86, 0xbc, 0x25, 0xaf, + 0x3e, 0x0b, 0x9e, 0xdf, 0x52, 0x59, 0x38, 0x65, 0xca, 0x86, 0xae, 0x42, 0x49, 0x3c, 0x23, 0x36, + 0x08, 0x49, 0x3f, 0x85, 0xfb, 0xf0, 0xc9, 0xe9, 0x2f, 0x8e, 0x1f, 0x55, 0x7f, 0xb1, 0x0c, 0x53, + 0x2b, 0x7e, 0xb3, 0xdd, 0x6b, 0x71, 0x69, 0xc1, 0x2d, 0x73, 0x68, 0xa2, 0x0c, 0xe8, 0x51, 0x89, + 0x23, 0x0d, 0xbd, 0xf5, 0x75, 0x65, 0x12, 0x59, 0x3f, 0x56, 0x86, 0xd9, 0x7e, 
0x23, 0x7f, 0xc0, + 0x6d, 0xf8, 0x55, 0xc8, 0x6f, 0x66, 0xf2, 0x56, 0x5c, 0xe3, 0xd9, 0x2d, 0xed, 0x32, 0x14, 0xef, + 0x59, 0xf2, 0x96, 0x7c, 0x2c, 0x4b, 0x70, 0x2f, 0x6c, 0xb3, 0xab, 0x30, 0xae, 0x7d, 0x27, 0x3c, + 0x36, 0xfa, 0x7d, 0x55, 0x1b, 0x36, 0xd3, 0x4f, 0x77, 0x12, 0xe4, 0xa9, 0xa5, 0x6e, 0xd1, 0xf4, + 0x8b, 0xd5, 0x28, 0x0e, 0xc5, 0x08, 0x99, 0x1a, 0x45, 0x51, 0xc0, 0x18, 0xe0, 0x11, 0x25, 0x5f, + 0xb2, 0xf1, 0x6f, 0xeb, 0x5f, 0x4f, 0x90, 0xf0, 0xa1, 0xdf, 0x79, 0xfb, 0x39, 0x21, 0x64, 0xee, + 0xc2, 0xe5, 0xa3, 0xdc, 0x85, 0x2b, 0x87, 0xdf, 0x85, 0x87, 0x0e, 0xbb, 0x0b, 0x67, 0x2e, 0xab, + 0xc3, 0x47, 0xbc, 0xac, 0x8e, 0x1e, 0xe9, 0xb2, 0x6a, 0xdc, 0xa3, 0xab, 0x87, 0xdd, 0xa3, 0xff, + 0xf8, 0x6a, 0xfb, 0xb4, 0x5e, 0x6d, 0x8b, 0xa4, 0xd3, 0x23, 0x5d, 0x6d, 0x73, 0x37, 0xd3, 0x99, + 0x27, 0x73, 0x33, 0x65, 0x4f, 0xec, 0x66, 0x7a, 0xec, 0x93, 0xde, 0x4c, 0x8f, 0x3f, 0xc9, 0x9b, + 0xe9, 0x89, 0x3f, 0x8a, 0x37, 0xd3, 0x93, 0xff, 0x6e, 0x6e, 0xa6, 0x57, 0xa0, 0xba, 0x16, 0x44, + 0xf1, 0xb5, 0x20, 0xec, 0xe0, 0xe5, 0x78, 0x42, 0x3e, 0x04, 0x04, 0x11, 0x25, 0xe4, 0x37, 0x84, + 0x3c, 0x89, 0xc8, 0x16, 0xd4, 0x84, 0x53, 0x37, 0xba, 0xd9, 0xf4, 0x2d, 0x46, 0xce, 0x14, 0x79, + 0xb1, 0xcb, 0xcf, 0x37, 0x75, 0xd1, 0xbb, 0x03, 0x33, 0xca, 0x0d, 0x0a, 0x03, 0x9c, 0xe0, 0xb5, + 0xf8, 0x74, 0xba, 0x32, 0x53, 0x7f, 0x29, 0x55, 0x6a, 0x24, 0x90, 0xcd, 0x92, 0xde, 0x1c, 0xaa, + 0x8e, 0xd4, 0x46, 0x6f, 0x0e, 0x55, 0x6b, 0xb5, 0x99, 0x01, 0x6e, 0xbc, 0x3f, 0x0c, 0xb5, 0xac, + 0x10, 0x7e, 0x78, 0x16, 0x81, 0x27, 0x16, 0x60, 0x57, 0x5c, 0x11, 0xb2, 0x42, 0x30, 0xbb, 0x04, + 0xb0, 0x16, 0x7a, 0xbb, 0x6e, 0xcc, 0x6f, 0x29, 0xfb, 0x5c, 0x99, 0x36, 0x83, 0xa0, 0x62, 0xc2, + 0xdb, 0x1a, 0x4a, 0x72, 0xff, 0x2b, 0x17, 0xdd, 0xff, 0xac, 0xbf, 0x50, 0x86, 0x19, 0x8a, 0x2c, + 0xf9, 0xf4, 0x5b, 0x1e, 0x7c, 0xd1, 0xb8, 0xd5, 0x3f, 0x9b, 0x26, 0x03, 0xd2, 0x7b, 0x77, 0x80, + 0xed, 0xc1, 0xd7, 0xe0, 0x44, 0x6e, 0x28, 0xf0, 0x66, 0xbf, 0xa4, 0x62, 0x7a, 0xe6, 0xee, 0xf6, + 0xb3, 0xc5, 0x95, 
0xdc, 0xbf, 0x62, 0xe7, 0x28, 0xac, 0x5f, 0x1e, 0xce, 0xf1, 0x97, 0x56, 0x08, + 0xba, 0x5d, 0x41, 0xe9, 0x68, 0x76, 0x05, 0xe5, 0xc1, 0xec, 0x0a, 0x32, 0x12, 0x49, 0x65, 0x10, + 0x89, 0xe4, 0x03, 0x98, 0x5c, 0xe7, 0x6e, 0x27, 0x5a, 0x0f, 0x64, 0x66, 0x49, 0xf2, 0x06, 0x53, + 0x21, 0x3b, 0x45, 0x99, 0xba, 0x98, 0x26, 0x56, 0xed, 0xb1, 0x20, 0x10, 0x67, 0x2e, 0xa5, 0x9a, + 0xb4, 0x4d, 0x0e, 0xba, 0xb6, 0x61, 0xf8, 0x00, 0x6d, 0x43, 0x03, 0x26, 0x24, 0x5d, 0x9a, 0x3a, + 0x21, 0xbd, 0x16, 0x8b, 0x22, 0x84, 0xab, 0xda, 0x95, 0x5b, 0xf6, 0x54, 0x52, 0x3b, 0xdd, 0x88, + 0x0d, 0x26, 0x62, 0x08, 0xd4, 0xc3, 0x99, 0x18, 0x82, 0xd1, 0x74, 0x08, 0xd4, 0x23, 0x1b, 0x0d, + 0x81, 0x86, 0xc4, 0xde, 0x83, 0xa9, 0xfa, 0xda, 0x8a, 0x4e, 0x56, 0x4d, 0x4d, 0x1b, 0xdc, 0xae, + 0xe7, 0x18, 0xa4, 0x19, 0xdc, 0x83, 0x6e, 0x88, 0x63, 0x7f, 0x48, 0x37, 0xc4, 0xec, 0x5d, 0x06, + 0x8e, 0x78, 0x97, 0xb1, 0x7e, 0x75, 0x42, 0xed, 0x0f, 0x9f, 0xee, 0x63, 0x93, 0xf9, 0x7c, 0x54, + 0x39, 0xe2, 0xf3, 0xd1, 0xd0, 0x61, 0x62, 0xaf, 0x26, 0x5d, 0x8f, 0x7c, 0xe2, 0xa7, 0xa0, 0xd1, + 0x23, 0xca, 0xcb, 0x99, 0xc5, 0x57, 0x1d, 0x64, 0xf1, 0x15, 0xca, 0xd8, 0x63, 0x9f, 0x5c, 0xc6, + 0x86, 0x23, 0xcb, 0xd8, 0x8d, 0x34, 0xdc, 0xc2, 0xf8, 0xa1, 0x5e, 0x6c, 0x67, 0xa4, 0x3a, 0x65, + 0xa6, 0x38, 0x70, 0x68, 0x12, 0x78, 0xe1, 0x07, 0x4a, 0x70, 0xff, 0x46, 0xb1, 0xe0, 0x7e, 0xf0, + 0x01, 0xf4, 0xc7, 0xa2, 0xfb, 0x1f, 0x8b, 0xee, 0x7f, 0x28, 0xa2, 0xfb, 0x5d, 0x60, 0x6e, 0x2f, + 0xde, 0x16, 0x12, 0x70, 0x13, 0x83, 0x57, 0x8b, 0x4f, 0x8c, 0x42, 0xbc, 0x5c, 0x23, 0xf9, 0x52, + 0x7d, 0x8d, 0x18, 0xa5, 0x62, 0x06, 0x50, 0xc0, 0xdf, 0x81, 0x45, 0xe8, 0x10, 0x57, 0xd4, 0x03, + 0x37, 0xf4, 0xf1, 0x40, 0xba, 0x04, 0xa3, 0x2a, 0x58, 0x72, 0x29, 0xd5, 0xbe, 0xe7, 0xa3, 0x24, + 0x2b, 0x2c, 0x76, 0x19, 0xaa, 0x8a, 0x58, 0x4f, 0x50, 0xf7, 0x50, 0xc2, 0x8c, 0x38, 0xb4, 0x12, + 0x66, 0xfd, 0x87, 0x43, 0x6a, 0xd7, 0x16, 0x0d, 0x5e, 0x73, 0x43, 0xb7, 0x83, 0xf9, 0x71, 0x93, + 0x45, 0xa5, 0x09, 0xf0, 0x99, 0x75, 0x98, 0x71, 0x40, 
0x32, 0x49, 0x3e, 0x56, 0xb4, 0xeb, 0xd7, + 0x60, 0x44, 0xca, 0x4f, 0x24, 0xf1, 0xa3, 0xec, 0x90, 0xcb, 0xc4, 0x2d, 0x71, 0xd8, 0xdb, 0x46, + 0xfe, 0xfe, 0xa1, 0x34, 0x61, 0xb4, 0xd8, 0xc9, 0x0e, 0xce, 0xdc, 0x7f, 0x55, 0x4f, 0xb4, 0x3f, + 0x9c, 0xc6, 0x1e, 0x44, 0xca, 0x03, 0x52, 0xec, 0x27, 0x37, 0x92, 0x91, 0xa3, 0xc4, 0x91, 0x1f, + 0xfd, 0x77, 0x1a, 0x47, 0x7e, 0x19, 0x40, 0x9e, 0xae, 0xa9, 0x75, 0xc7, 0x4b, 0xb8, 0xf9, 0x48, + 0x4f, 0x83, 0x38, 0x6e, 0xf7, 0x49, 0x8b, 0xa5, 0x11, 0x5a, 0xff, 0x98, 0xc1, 0x4c, 0xa3, 0x71, + 0x77, 0xc9, 0x73, 0xb7, 0xfc, 0x20, 0x8a, 0xbd, 0xe6, 0x8a, 0xbf, 0x19, 0x08, 0x71, 0x3c, 0x39, + 0x01, 0xb4, 0x08, 0xe0, 0xe9, 0xee, 0x9f, 0x14, 0x8b, 0xeb, 0xde, 0x72, 0x18, 0x2a, 0x05, 0x2b, + 0x5d, 0xf7, 0xb8, 0x00, 0xd8, 0x04, 0x17, 0x12, 0x6f, 0xa3, 0x87, 0xf1, 0x77, 0xa4, 0x09, 0x0d, + 0x4a, 0xbc, 0x11, 0x81, 0x6c, 0x55, 0xc6, 0x78, 0x7e, 0xc2, 0xca, 0x1b, 0xd0, 0x29, 0x23, 0x1a, + 0x7d, 0x5a, 0x4c, 0x6b, 0x57, 0xca, 0x1f, 0xb8, 0x6b, 0x77, 0x11, 0xae, 0x9b, 0x82, 0xe6, 0xd6, + 0xc0, 0x1e, 0x9c, 0x30, 0x22, 0x33, 0x0c, 0xfa, 0xf0, 0xf4, 0x8a, 0x94, 0xb0, 0x2d, 0x34, 0xb7, + 0x2f, 0x78, 0x7d, 0xd2, 0x13, 0xde, 0x16, 0xd6, 0xc0, 0xfe, 0x42, 0x09, 0xce, 0x14, 0x96, 0x24, + 0xab, 0x7b, 0xdc, 0xc8, 0x08, 0xa0, 0x6d, 0x1a, 0x94, 0xda, 0xb7, 0x5f, 0xd5, 0x4e, 0xc1, 0x56, + 0x70, 0x70, 0x4d, 0xec, 0xef, 0x96, 0xe0, 0x94, 0x81, 0x91, 0xec, 0x96, 0x51, 0x12, 0xb4, 0xa8, + 0x70, 0x5e, 0x7f, 0xf4, 0x64, 0xe6, 0xf5, 0x0b, 0x66, 0x5f, 0xd2, 0xdd, 0x52, 0xef, 0x43, 0xbf, + 0x16, 0xb2, 0x5d, 0x98, 0xc1, 0x22, 0xf5, 0x08, 0x26, 0xe6, 0xac, 0x7c, 0x3b, 0x3b, 0x9e, 0x36, + 0x9b, 0xa2, 0x8d, 0x08, 0xd9, 0x7a, 0xe1, 0xf2, 0xf7, 0xf6, 0xe7, 0x27, 0x0d, 0x74, 0x15, 0x63, + 0xdf, 0x49, 0x5f, 0xd2, 0x3c, 0x7f, 0x33, 0x30, 0x54, 0x27, 0xd9, 0x2a, 0xd8, 0xdf, 0x2b, 0xd1, + 0x7b, 0x04, 0x75, 0xe3, 0x5a, 0x18, 0x74, 0x92, 0x72, 0x65, 0x53, 0xdc, 0x67, 0xd8, 0xda, 0x4f, + 0x66, 0xd8, 0x5e, 0xc2, 0x26, 0xd3, 0x9e, 0xe0, 0x6c, 0x86, 0x41, 0x27, 0x6d, 0xbe, 0x3e, 
0x70, + 0x7d, 0x1b, 0xc9, 0xfe, 0x74, 0x09, 0x4e, 0x1b, 0x6a, 0x54, 0x3d, 0x79, 0x92, 0x8c, 0xe9, 0x92, + 0xf8, 0xb6, 0x69, 0x45, 0x0b, 0x17, 0xe5, 0xfc, 0x3f, 0x87, 0x2d, 0xd0, 0x82, 0x0b, 0x0b, 0x24, + 0xa7, 0x43, 0x58, 0x5a, 0x13, 0xfa, 0xd7, 0xc2, 0x3c, 0x98, 0x41, 0x23, 0x25, 0xc3, 0xf6, 0xfd, + 0x78, 0x7f, 0xdb, 0xf7, 0x24, 0xf1, 0x1e, 0xa6, 0x39, 0xe9, 0x6f, 0x00, 0x9f, 0xe7, 0xca, 0x7e, + 0x04, 0x4e, 0xe7, 0x80, 0xc9, 0x6a, 0x3b, 0xd1, 0x77, 0xb5, 0xbd, 0xfa, 0x78, 0x7f, 0xfe, 0xe5, + 0xa2, 0xda, 0x8a, 0x56, 0x5a, 0xff, 0x1a, 0x98, 0x0b, 0x90, 0x16, 0x4a, 0xe9, 0xa7, 0x78, 0x82, + 0xbe, 0x2a, 0xe7, 0x87, 0x86, 0x2f, 0xf6, 0x72, 0xad, 0x0d, 0xfa, 0x91, 0x97, 0x22, 0x31, 0x0e, + 0x13, 0x5a, 0x8a, 0x97, 0x3d, 0x69, 0xb7, 0xd3, 0xa7, 0x92, 0xef, 0xed, 0xcf, 0x1b, 0xd8, 0xe2, + 0x0e, 0xa4, 0xe7, 0x8e, 0x31, 0x84, 0x4d, 0x1d, 0x91, 0xfd, 0x4a, 0x09, 0x8e, 0x0b, 0x40, 0x3a, + 0xa9, 0x64, 0xa7, 0x66, 0x0f, 0x9a, 0xf5, 0xdb, 0x4f, 0x66, 0xd6, 0x3f, 0x8f, 0x6d, 0xd4, 0x67, + 0x7d, 0x6e, 0x48, 0x0a, 0x1b, 0x87, 0xb3, 0xdd, 0xb0, 0x87, 0x33, 0x66, 0xfb, 0xe9, 0x01, 0x66, + 0x3b, 0x7d, 0x80, 0xc3, 0x67, 0x7b, 0xdf, 0x5a, 0xd8, 0x3a, 0x4c, 0xc8, 0xeb, 0x0f, 0x0d, 0xd8, + 0x73, 0x86, 0xff, 0xa8, 0x5e, 0x44, 0x77, 0x52, 0x99, 0x01, 0x27, 0xd7, 0x43, 0x83, 0x0b, 0xf3, + 0xe1, 0x18, 0xfd, 0x36, 0xf5, 0x53, 0xf3, 0x7d, 0xf5, 0x53, 0xe7, 0x65, 0x8f, 0xce, 0x4a, 0xfe, + 0x19, 0x35, 0x95, 0x1e, 0x24, 0xae, 0x80, 0x31, 0xeb, 0x02, 0x33, 0xc0, 0xb4, 0x68, 0xcf, 0x1e, + 0xac, 0x95, 0x7a, 0x59, 0xd6, 0x39, 0x9f, 0xad, 0x33, 0xbb, 0x72, 0x0b, 0x78, 0x33, 0x17, 0xa6, + 0x25, 0x34, 0xd8, 0xe1, 0xb4, 0xc3, 0x3f, 0x6f, 0x84, 0xe9, 0xcb, 0x94, 0xd2, 0x1d, 0x4e, 0xd5, + 0x84, 0x61, 0x14, 0x33, 0x1b, 0x7a, 0x96, 0x1f, 0xbb, 0x0b, 0x33, 0xf5, 0x6e, 0xb7, 0xed, 0xf1, + 0x16, 0xf6, 0xd2, 0xee, 0x89, 0x3e, 0x59, 0x69, 0xca, 0x53, 0x97, 0x0a, 0xe5, 0xc5, 0x32, 0xec, + 0x65, 0xb6, 0x9b, 0x1c, 0xad, 0xf5, 0xe7, 0x4a, 0xb9, 0x46, 0xb3, 0xd7, 0x60, 0x0c, 0x7f, 0x68, + 0x91, 0x9f, 0x50, 0x4b, 0x43, 
0x4d, 0x44, 0xfd, 0x4f, 0x8a, 0x20, 0x84, 0x25, 0x3d, 0xfa, 0x6b, + 0x85, 0x84, 0x25, 0xa9, 0x4a, 0x48, 0x95, 0x07, 0xf3, 0xca, 0x27, 0xa9, 0x92, 0x0a, 0x5d, 0xe8, + 0x93, 0x24, 0x3d, 0x91, 0xac, 0xff, 0xb4, 0x6c, 0x4e, 0x3b, 0x76, 0x5e, 0x93, 0xdb, 0xb5, 0xf8, + 0xb3, 0x4a, 0x6e, 0xd7, 0xa4, 0xf5, 0x5f, 0x2c, 0xc1, 0xb1, 0xbb, 0x5a, 0x22, 0xee, 0xf5, 0x00, + 0xbf, 0xcb, 0xc1, 0x29, 0xa7, 0x9f, 0x54, 0x16, 0x5c, 0x3d, 0x03, 0xb8, 0x98, 0x29, 0x38, 0x65, + 0xec, 0xa2, 0xf6, 0xa0, 0xbf, 0x2a, 0x36, 0x4c, 0x4b, 0x46, 0x4c, 0xe8, 0x04, 0x3f, 0x62, 0x3e, + 0x1c, 0xeb, 0x2f, 0x96, 0x61, 0x5c, 0x5b, 0x31, 0xec, 0xb3, 0x30, 0xa1, 0x57, 0xab, 0xab, 0xf8, + 0xf4, 0x56, 0xda, 0x06, 0x16, 0xea, 0xf8, 0xb8, 0xdb, 0x31, 0x74, 0x7c, 0x62, 0x5d, 0x20, 0xf4, + 0x88, 0x37, 0xa1, 0xf7, 0x0b, 0x6e, 0x42, 0x38, 0xcb, 0x35, 0x9d, 0xce, 0x81, 0xf7, 0xa1, 0xf7, + 0xf2, 0xf7, 0x21, 0x54, 0x2f, 0x69, 0xf4, 0xfd, 0x6f, 0x45, 0xd6, 0x4f, 0x96, 0xa0, 0x96, 0x5d, + 0xd3, 0x9f, 0xca, 0xa8, 0x1c, 0xe1, 0x41, 0xe8, 0x3b, 0xe5, 0x24, 0x1d, 0x94, 0x0a, 0x24, 0xf0, + 0xb4, 0x5a, 0x60, 0x7e, 0xc1, 0x78, 0xab, 0x79, 0xc6, 0x0c, 0xb1, 0xa9, 0x47, 0xf4, 0x29, 0x8e, + 0xab, 0x3b, 0xf4, 0xdd, 0x5f, 0x98, 0xff, 0x8c, 0xf5, 0x65, 0x38, 0x9e, 0x1d, 0x0e, 0x7c, 0xaf, + 0xa9, 0xc3, 0xb4, 0x09, 0xcf, 0x26, 0x93, 0xcb, 0x52, 0xd9, 0x59, 0x7c, 0xeb, 0xb7, 0xca, 0x59, + 0xde, 0xd2, 0x1a, 0x53, 0xec, 0x51, 0xba, 0xe1, 0x8d, 0xdc, 0xa3, 0x08, 0x64, 0xab, 0xb2, 0xa3, + 0x24, 0x84, 0x4c, 0x5c, 0xcf, 0x2b, 0xc5, 0xae, 0xe7, 0xec, 0x6a, 0xc6, 0x26, 0x5d, 0x0b, 0x05, + 0xf7, 0x90, 0x6f, 0x38, 0xa9, 0x5d, 0x7a, 0xce, 0x14, 0xfd, 0xb8, 0x91, 0xd7, 0x40, 0xd1, 0x0f, + 0xa7, 0xda, 0xf5, 0x18, 0x0b, 0x88, 0xb8, 0x10, 0x99, 0xdd, 0x80, 0x51, 0xd1, 0xcc, 0xdb, 0x6e, + 0x57, 0x3e, 0xc3, 0xb0, 0x24, 0x38, 0x46, 0x3b, 0xb9, 0x1f, 0x6a, 0xf1, 0x31, 0xda, 0x5c, 0x48, + 0x08, 0xfa, 0xc4, 0x92, 0x88, 0xd6, 0xff, 0x55, 0x12, 0xeb, 0xbf, 0xb9, 0xf3, 0x03, 0x96, 0x09, + 0x52, 0x74, 0xe9, 0x00, 0x63, 0xe1, 0x7f, 0x56, 0xa6, 0x04, 0x5f, 
0x72, 0xfa, 0xbc, 0x0d, 0x23, + 0x14, 0x35, 0x43, 0x86, 0xd2, 0xd0, 0xb9, 0x50, 0x41, 0x1a, 0xa8, 0x8e, 0x02, 0x6a, 0xd8, 0x92, + 0x40, 0x57, 0x9d, 0x95, 0x07, 0x52, 0x9d, 0x69, 0x9a, 0xfb, 0xca, 0x13, 0xd3, 0xdc, 0xff, 0x50, + 0x92, 0xcb, 0xab, 0x1e, 0x0f, 0x10, 0x36, 0xff, 0x6c, 0x36, 0x75, 0x5e, 0x2e, 0xc1, 0x41, 0xca, + 0x8e, 0x5d, 0xd5, 0x93, 0xf1, 0x69, 0x3e, 0xd0, 0x87, 0xa4, 0xdd, 0xb3, 0x7e, 0x6d, 0x88, 0xc6, + 0x58, 0x0e, 0xd4, 0x39, 0x23, 0xd2, 0x03, 0xae, 0x93, 0x8c, 0x56, 0x93, 0x62, 0x3e, 0x9c, 0x83, + 0x21, 0x31, 0x37, 0xe5, 0x68, 0x22, 0x9e, 0x98, 0xbf, 0x3a, 0x9e, 0x28, 0x17, 0x6b, 0x19, 0xcf, + 0x24, 0x3d, 0x63, 0x2b, 0x1e, 0x5b, 0xfa, 0x5a, 0x46, 0x0c, 0xd1, 0x83, 0x24, 0x55, 0x8d, 0xde, + 0x83, 0xce, 0xa6, 0x9b, 0xcf, 0x89, 0xa9, 0xe5, 0xc7, 0x5a, 0x86, 0xa9, 0x07, 0x9e, 0xdf, 0x0a, + 0x1e, 0x46, 0x4b, 0x3c, 0xda, 0x89, 0x83, 0xae, 0x34, 0x90, 0x46, 0x0d, 0xff, 0x43, 0x2a, 0x71, + 0x5a, 0x54, 0xa4, 0x3f, 0x87, 0x98, 0x44, 0x6c, 0x01, 0x26, 0x8d, 0x60, 0xcf, 0xf2, 0x95, 0x13, + 0x75, 0x9c, 0x66, 0xa8, 0x68, 0x5d, 0xc7, 0x69, 0x90, 0x88, 0x53, 0x5a, 0xb6, 0x5f, 0x7b, 0xeb, + 0xcc, 0xb5, 0x5d, 0xe2, 0xb0, 0x2b, 0x50, 0xa5, 0x98, 0x31, 0x2b, 0x4b, 0xfa, 0xf3, 0x54, 0x84, + 0xb0, 0x4c, 0x44, 0x2d, 0x85, 0xc8, 0x16, 0x61, 0x72, 0x21, 0x88, 0x57, 0xfc, 0x28, 0x76, 0xfd, + 0x26, 0x4f, 0x42, 0x5b, 0x63, 0x67, 0x37, 0x82, 0xd8, 0xf1, 0x64, 0x89, 0x49, 0x6f, 0xd2, 0x88, + 0xa1, 0xbe, 0x19, 0x78, 0x3e, 0x6d, 0x9d, 0xe3, 0xe9, 0x50, 0x7f, 0x14, 0x78, 0x7e, 0x2e, 0x9a, + 0x74, 0x8a, 0x9a, 0xc6, 0x66, 0x21, 0xff, 0x4d, 0x7b, 0xe8, 0x4e, 0xd0, 0xe2, 0x96, 0x4c, 0xc3, + 0x27, 0xe3, 0x21, 0xbf, 0x0a, 0xa3, 0xb4, 0xf8, 0xb2, 0x09, 0xbf, 0xd3, 0x59, 0x66, 0x2b, 0x0c, + 0x66, 0xc1, 0xa4, 0xe7, 0x3b, 0x64, 0x12, 0x19, 0xf8, 0x6d, 0x4a, 0x67, 0x55, 0xb5, 0xc7, 0x3d, + 0x1f, 0x0d, 0x21, 0xef, 0xfa, 0xed, 0x3d, 0xeb, 0x0d, 0xa8, 0xc9, 0x0d, 0x35, 0xc9, 0x2f, 0x8c, + 0x86, 0x19, 0x2b, 0x4b, 0xb6, 0xbe, 0x09, 0x36, 0xbd, 0x56, 0x68, 0x23, 0x14, 0x5d, 0x34, 0xef, + 0xf0, 
0xf8, 0x61, 0x10, 0xee, 0xd8, 0x3c, 0x8a, 0x43, 0x8f, 0xd2, 0x15, 0xe3, 0x36, 0xf2, 0x59, + 0xf6, 0x1e, 0x0c, 0xa3, 0x45, 0x72, 0xe6, 0x5c, 0xcb, 0xd6, 0xb1, 0x30, 0x29, 0x97, 0xdf, 0x30, + 0x9a, 0x37, 0xdb, 0x44, 0xc4, 0xde, 0x86, 0xa1, 0x25, 0xee, 0xef, 0x65, 0xb2, 0x9f, 0xe6, 0x88, + 0x93, 0xed, 0xac, 0xc5, 0xfd, 0x3d, 0x1b, 0x49, 0xac, 0x9f, 0x2c, 0xc3, 0x89, 0x82, 0x66, 0xdd, + 0xff, 0xec, 0x53, 0xba, 0xa7, 0x2f, 0x18, 0x7b, 0xba, 0x7a, 0x90, 0xef, 0x3b, 0xf0, 0x85, 0x5b, + 0xfc, 0x5f, 0x2d, 0xc1, 0x29, 0x73, 0x21, 0x4a, 0x17, 0x84, 0xfb, 0x57, 0xd8, 0xbb, 0x30, 0x72, + 0x83, 0xbb, 0x2d, 0xae, 0x52, 0x1d, 0x66, 0x13, 0x8a, 0x53, 0x21, 0xb1, 0x4d, 0xfd, 0xca, 0x09, + 0xca, 0x96, 0x64, 0xe3, 0xe8, 0xf2, 0x61, 0xa9, 0xf8, 0x43, 0xff, 0x1f, 0x7b, 0xdf, 0x16, 0x1b, + 0x49, 0x76, 0x1d, 0x36, 0xd5, 0xdd, 0x24, 0x9b, 0x87, 0xaf, 0xe2, 0x1d, 0xce, 0x0c, 0x87, 0x33, + 0x3b, 0x8f, 0xda, 0xdd, 0xf1, 0x2e, 0x57, 0xfb, 0x98, 0xd9, 0xec, 0x63, 0xa4, 0x7d, 0xa8, 0xd8, + 0x5d, 0x24, 0x7b, 0xa6, 0xd9, 0xdd, 0x5b, 0xd5, 0x9c, 0xf1, 0x68, 0x25, 0x97, 0x6b, 0xba, 0x8b, + 0x64, 0xed, 0x34, 0xbb, 0x7a, 0xab, 0xaa, 0x77, 0x86, 0x82, 0x01, 0xcb, 0x16, 0x22, 0x03, 0x0e, + 0x6c, 0x39, 0xb6, 0x83, 0x2c, 0x84, 0x04, 0x09, 0x10, 0xc1, 0xf0, 0x87, 0x91, 0x04, 0xc9, 0x8f, + 0x61, 0x01, 0x49, 0xfc, 0x27, 0x40, 0x50, 0x9e, 0x80, 0x81, 0x28, 0xc6, 0xc2, 0x96, 0x10, 0x20, + 0x10, 0x9c, 0x2f, 0x23, 0xf9, 0x50, 0xa0, 0x20, 0xb8, 0xe7, 0xde, 0x5b, 0x75, 0xab, 0xba, 0xba, + 0xc9, 0xd1, 0xac, 0x9c, 0xd8, 0xd1, 0x0f, 0xc1, 0x3e, 0xf7, 0x9c, 0x53, 0xf7, 0x79, 0xee, 0xb9, + 0xe7, 0x9e, 0x7b, 0x4e, 0xde, 0xa7, 0x26, 0xb8, 0xb5, 0x7c, 0x5c, 0x80, 0x0b, 0x13, 0x68, 0xe8, + 0xc0, 0xd1, 0xa1, 0x97, 0x07, 0x0e, 0xf5, 0x01, 0x84, 0x92, 0x77, 0x60, 0xa9, 0xcd, 0x0f, 0x2f, + 0x62, 0x38, 0x0a, 0x89, 0x5c, 0x10, 0xe7, 0x1a, 0xe1, 0xc8, 0x65, 0x66, 0x91, 0x53, 0xc1, 0xba, + 0x8a, 0x13, 0x83, 0x75, 0xc9, 0xb1, 0xaf, 0x4a, 0x3f, 0x61, 0xec, 0xab, 0xa9, 0xc9, 0xb1, 0xaf, + 0xa6, 0x33, 0xb1, 0xaf, 0xee, 0xc1, 0x4a, 
0xba, 0x67, 0xf8, 0xfa, 0x4f, 0xe2, 0x86, 0x29, 0xe3, + 0xe3, 0x86, 0x4d, 0x8c, 0xba, 0xac, 0xfd, 0xa6, 0x02, 0x6a, 0x9a, 0xf7, 0x93, 0xce, 0x86, 0xb7, + 0x53, 0xb3, 0xe1, 0x42, 0xfe, 0x6c, 0x18, 0x3f, 0x0d, 0xfe, 0xbb, 0x92, 0x6d, 0xec, 0x89, 0xc6, + 0x5f, 0x83, 0xe9, 0xaa, 0x7f, 0xe8, 0x78, 0x62, 0xd8, 0xf1, 0xf5, 0x51, 0x17, 0x21, 0x26, 0x2f, + 0x39, 0x59, 0x98, 0xb5, 0x2b, 0x30, 0xd5, 0xf0, 0xfb, 0x7a, 0x95, 0xbb, 0x4f, 0x23, 0x9f, 0xbe, + 0xdf, 0xb7, 0x9d, 0xae, 0xc9, 0x0a, 0x48, 0x1d, 0xc0, 0xea, 0x04, 0xae, 0xdb, 0xb7, 0xbc, 0x2f, + 0xbb, 0x19, 0x2d, 0x8b, 0xf6, 0x50, 0x6f, 0x88, 0x62, 0x89, 0x5d, 0x32, 0x23, 0xa2, 0x1d, 0x7a, + 0x5f, 0x96, 0x77, 0x25, 0x89, 0x1e, 0x57, 0x25, 0x0f, 0x6c, 0x99, 0x19, 0x87, 0xeb, 0x3f, 0x8d, + 0x55, 0x99, 0xfb, 0x29, 0xec, 0xe1, 0xeb, 0xb9, 0xc3, 0xf1, 0xef, 0x14, 0xb8, 0x30, 0x81, 0xe6, + 0x53, 0x18, 0x95, 0xbf, 0xea, 0x0e, 0x77, 0x01, 0x12, 0x22, 0x4c, 0x73, 0xef, 0x75, 0x79, 0xc8, + 0xb8, 0x05, 0x9e, 0xe6, 0x9e, 0x02, 0x52, 0x69, 0xee, 0x29, 0x80, 0x6a, 0x1c, 0xdb, 0xae, 0xb7, + 0x7f, 0xc0, 0xbc, 0xd9, 0x16, 0x98, 0x64, 0x39, 0x40, 0x88, 0xac, 0x71, 0x30, 0x1c, 0xed, 0x3f, + 0x4e, 0xc3, 0x79, 0xd3, 0xdd, 0xf7, 0xe8, 0x99, 0x6c, 0x37, 0xf4, 0xfa, 0xfb, 0xa9, 0x28, 0x5f, + 0x5a, 0x66, 0xe5, 0xf2, 0xac, 0x3f, 0x14, 0x12, 0xcf, 0xc4, 0xe7, 0xa1, 0x4c, 0xf7, 0x7c, 0x69, + 0xf1, 0xe2, 0xfd, 0x5e, 0xdf, 0xef, 0xba, 0x3c, 0x52, 0xbe, 0x28, 0x26, 0xeb, 0x5c, 0x45, 0x94, + 0xf2, 0xb2, 0x51, 0x15, 0xf1, 0x47, 0x9f, 0x5c, 0x06, 0x96, 0xe4, 0x9e, 0x96, 0x72, 0x35, 0x31, + 0x3e, 0xc7, 0x95, 0xc6, 0x9c, 0xe3, 0x76, 0x60, 0x45, 0xef, 0xb2, 0xbd, 0xd5, 0xe9, 0xb5, 0x02, + 0xaf, 0xdf, 0xf1, 0x06, 0x4e, 0x4f, 0xd8, 0x26, 0xb0, 0x97, 0x9d, 0xb8, 0xdc, 0x1e, 0xc4, 0x08, + 0x66, 0x2e, 0x19, 0x6d, 0x46, 0xb5, 0x61, 0x61, 0x08, 0x29, 0x7e, 0x75, 0x8b, 0xcd, 0xe8, 0xf6, + 0x43, 0x6c, 0x45, 0x68, 0xc6, 0xc5, 0x78, 0x82, 0xc4, 0x8b, 0xfa, 0x76, 0xdd, 0xba, 0xcd, 0x33, + 0x47, 0x8a, 0xb4, 0x31, 0xcc, 0x05, 0x23, 0xea, 0x85, 0xe8, 0x39, 0x9a, 0xc2, 
0x4b, 0xe8, 0x2c, + 0x6b, 0x9b, 0xd2, 0x95, 0x47, 0xe8, 0xc2, 0xf0, 0x40, 0xa6, 0x63, 0x78, 0xe4, 0x65, 0x3a, 0x15, + 0x0e, 0xfd, 0xc8, 0xc5, 0x29, 0x3c, 0x9b, 0x9c, 0x37, 0x03, 0x84, 0xb2, 0xf3, 0xa6, 0x84, 0x42, + 0xde, 0x82, 0xd3, 0x46, 0xe5, 0x86, 0x30, 0xb8, 0x57, 0xfd, 0xce, 0x10, 0x5d, 0x26, 0x00, 0xbf, + 0x87, 0x63, 0xe8, 0x76, 0x6e, 0x50, 0x69, 0x92, 0x87, 0x46, 0xae, 0xc1, 0x4c, 0xad, 0x2a, 0x2b, + 0x82, 0x3c, 0x37, 0x22, 0xf7, 0x03, 0x13, 0x85, 0xa4, 0x99, 0x1c, 0x88, 0xe6, 0x8f, 0x3d, 0xb9, + 0x9c, 0x3f, 0xc1, 0x61, 0xe8, 0x66, 0x56, 0x91, 0x95, 0x12, 0x15, 0x64, 0x14, 0xd9, 0xac, 0xfa, + 0xfa, 0x26, 0x92, 0x6e, 0xb9, 0x7d, 0x37, 0x48, 0x12, 0x14, 0x4c, 0xb1, 0xbe, 0xa5, 0xa4, 0xfb, + 0x71, 0x89, 0x99, 0x46, 0x24, 0x26, 0x9c, 0x69, 0x05, 0xee, 0x47, 0x9e, 0x3f, 0x0c, 0xd3, 0x1f, + 0x5f, 0x4a, 0x94, 0xfd, 0x01, 0x47, 0xb0, 0xb3, 0xb5, 0xc8, 0x27, 0xe5, 0xb9, 0x20, 0x59, 0x06, + 0xe7, 0x8a, 0xdf, 0x75, 0x43, 0x26, 0x81, 0xfe, 0x06, 0xe5, 0x82, 0x94, 0xda, 0x36, 0x41, 0x2a, + 0x7f, 0x03, 0x73, 0x41, 0x8e, 0xe0, 0x92, 0x37, 0x61, 0x0a, 0x7f, 0x72, 0x7d, 0xfb, 0x74, 0x0e, + 0xdb, 0x44, 0xd7, 0xee, 0x50, 0x4c, 0x93, 0x11, 0x90, 0x1a, 0xcc, 0xf0, 0x83, 0xea, 0xe3, 0x64, + 0x34, 0xe3, 0x27, 0x5e, 0x36, 0xdb, 0x38, 0xbd, 0xd6, 0x85, 0x79, 0xf9, 0x83, 0x74, 0x95, 0x6d, + 0x3b, 0xe1, 0x81, 0xdb, 0xa5, 0xbf, 0x78, 0x32, 0x52, 0x5c, 0x65, 0x07, 0x08, 0xb5, 0x69, 0x3d, + 0x4c, 0x09, 0x85, 0xee, 0xd3, 0xb5, 0x70, 0x37, 0xe4, 0x55, 0xe1, 0xa6, 0x2b, 0x0f, 0xcd, 0xa0, + 0x5d, 0x93, 0x17, 0x69, 0x5f, 0x80, 0x95, 0xc6, 0xb0, 0xd7, 0x73, 0xee, 0xf7, 0x5c, 0x91, 0xac, + 0x2a, 0x72, 0x22, 0x97, 0x6c, 0xc0, 0x14, 0xfe, 0x83, 0x1f, 0x5a, 0x4c, 0x12, 0x06, 0x4b, 0x38, + 0xe8, 0x07, 0xac, 0x60, 0x88, 0x30, 0xfa, 0x33, 0x15, 0x22, 0x8c, 0x02, 0xb4, 0x6f, 0x2b, 0xb0, + 0x22, 0xdc, 0x2f, 0x02, 0xa7, 0xf3, 0xc0, 0x0d, 0xb8, 0xc2, 0x75, 0x2d, 0x35, 0xd7, 0x70, 0x11, + 0x64, 0xa6, 0x11, 0x9b, 0x75, 0xb7, 0x44, 0x25, 0xd2, 0x4a, 0x50, 0x5e, 0x85, 0x8f, 0xab, 0x0c, + 0x79, 0x0b, 0xe6, 
0xf8, 0x96, 0x2b, 0x45, 0x21, 0xc6, 0x08, 0x85, 0xfc, 0xa0, 0x9d, 0x75, 0x06, + 0x92, 0xd1, 0x51, 0xbf, 0x4b, 0x37, 0xe5, 0x49, 0xf5, 0x8a, 0x7c, 0xfd, 0x2e, 0xfd, 0x8d, 0x09, + 0x53, 0xf7, 0xdf, 0xcc, 0x65, 0xfb, 0x96, 0xcf, 0xdd, 0xd7, 0xe5, 0xa0, 0x9c, 0x4a, 0x72, 0x50, + 0x4e, 0x82, 0x72, 0xca, 0x07, 0xe5, 0x18, 0x35, 0x1e, 0x93, 0xc2, 0x31, 0x63, 0xf2, 0x8e, 0x18, + 0x93, 0xe2, 0xf8, 0x89, 0x71, 0x7a, 0xc2, 0x38, 0x58, 0xc9, 0x0a, 0x29, 0x9d, 0xc8, 0x4c, 0x75, + 0x0a, 0x13, 0xac, 0x30, 0x92, 0xac, 0x64, 0xe6, 0x9c, 0x64, 0xdb, 0xd7, 0xd4, 0xc9, 0x99, 0x1e, + 0x23, 0xee, 0x3f, 0x0b, 0xf3, 0x7a, 0x14, 0x39, 0x9d, 0x03, 0xb7, 0x5b, 0xa5, 0xe2, 0x49, 0x8a, + 0xba, 0xe7, 0x70, 0xb8, 0x7c, 0x67, 0x29, 0xe3, 0xb2, 0x90, 0xdf, 0x4e, 0xc8, 0xdd, 0x87, 0xe3, + 0x90, 0xdf, 0x14, 0x92, 0x0e, 0xf9, 0x4d, 0x21, 0xe4, 0x65, 0x98, 0xa9, 0xf5, 0x3f, 0xf2, 0x68, + 0x9f, 0xb0, 0xc0, 0x7b, 0x68, 0xeb, 0xf3, 0x18, 0x48, 0x16, 0xae, 0x1c, 0x8b, 0xdc, 0x94, 0x8e, + 0x59, 0xb3, 0x89, 0x35, 0x45, 0x44, 0xef, 0xe5, 0x45, 0xf2, 0x11, 0x2a, 0x3e, 0x77, 0xbd, 0x0e, + 0x33, 0xc2, 0x32, 0x0c, 0xc9, 0x0e, 0xc2, 0x29, 0x47, 0xc3, 0x9d, 0x08, 0x64, 0xba, 0x7e, 0xe4, + 0xa4, 0xaa, 0x73, 0xc9, 0xfa, 0x91, 0x93, 0xaa, 0xca, 0xeb, 0x47, 0x4e, 0xaf, 0x1a, 0x1b, 0xd5, + 0xe6, 0x8f, 0x35, 0xaa, 0xdd, 0x81, 0xf9, 0x96, 0x13, 0x44, 0x1e, 0xd5, 0x7b, 0xfa, 0x51, 0xb8, + 0xba, 0x90, 0xb2, 0x43, 0x4b, 0x45, 0x1b, 0x97, 0xf8, 0x40, 0x9e, 0x1d, 0x48, 0xf8, 0xe9, 0x2c, + 0xfb, 0x09, 0x3c, 0xdf, 0x79, 0x78, 0xf1, 0x49, 0x9c, 0x87, 0xb1, 0x53, 0xd1, 0xf6, 0xb8, 0x94, + 0xd8, 0xc2, 0xf0, 0x20, 0x94, 0x31, 0x40, 0xc6, 0x88, 0xe4, 0x8b, 0x30, 0x4f, 0xff, 0x6f, 0xf9, + 0x3d, 0xaf, 0xe3, 0xb9, 0xe1, 0xaa, 0x8a, 0x8d, 0xbb, 0x94, 0xbb, 0xfa, 0x11, 0xe9, 0xc8, 0x72, + 0x23, 0xb6, 0x80, 0x91, 0x71, 0xf6, 0x52, 0x21, 0xc5, 0x8d, 0xbc, 0x0b, 0xf3, 0x74, 0xf6, 0xdd, + 0x77, 0x42, 0xa6, 0xee, 0x2e, 0x27, 0xee, 0xdf, 0x5d, 0x0e, 0x1f, 0x89, 0xba, 0x2f, 0x13, 0xd0, + 0x6d, 0x5e, 0x1f, 0x30, 0x01, 0x49, 0xa4, 0xd9, 0x3e, 
0x18, 0x11, 0x8e, 0x02, 0x8d, 0x7c, 0x1e, + 0xe6, 0xf5, 0xc1, 0x20, 0x91, 0x38, 0xa7, 0x25, 0x13, 0xe4, 0x60, 0x60, 0xe7, 0x4a, 0x9d, 0x14, + 0x45, 0x56, 0x30, 0xaf, 0x3c, 0x96, 0x60, 0x26, 0x2f, 0xc6, 0x27, 0x80, 0x33, 0x89, 0x95, 0x9c, + 0x1f, 0x46, 0x53, 0xc7, 0x09, 0x76, 0x18, 0xa8, 0xc0, 0x02, 0x33, 0xe8, 0x09, 0x6d, 0xe6, 0xec, + 0xc8, 0xea, 0xc9, 0x51, 0x6a, 0xd2, 0x34, 0xec, 0xc5, 0xb7, 0x17, 0x79, 0x4e, 0x8f, 0xa7, 0x43, + 0x58, 0x3d, 0x87, 0xab, 0x96, 0xbf, 0xf8, 0xc6, 0x12, 0xcc, 0x92, 0xe5, 0xa4, 0xb8, 0x64, 0x88, + 0xb4, 0x1f, 0x28, 0x70, 0x6e, 0xcc, 0x88, 0xc7, 0xc1, 0xf2, 0x95, 0xc9, 0xc1, 0xf2, 0xa9, 0xe4, + 0x48, 0xdb, 0x69, 0xb0, 0xfd, 0xa3, 0xef, 0xec, 0x62, 0x7d, 0xcb, 0x07, 0xc2, 0x13, 0xd1, 0xf1, + 0x4f, 0xdf, 0xf2, 0xd1, 0xd4, 0x5d, 0x1c, 0xdd, 0x84, 0x38, 0x1e, 0xab, 0x14, 0x8b, 0xbf, 0xcb, + 0xf3, 0xdc, 0xc5, 0xc3, 0xfa, 0x81, 0x9f, 0x5a, 0xc1, 0x39, 0xac, 0xb5, 0xaf, 0x17, 0x60, 0x4e, + 0x5a, 0x87, 0xe4, 0x8a, 0xf4, 0x8a, 0x5b, 0x65, 0x99, 0x12, 0x25, 0x0e, 0x05, 0xb6, 0x13, 0xe1, + 0xa2, 0x2a, 0x1c, 0x6f, 0xd0, 0xc7, 0xd8, 0x6a, 0x52, 0x42, 0x81, 0x4c, 0x30, 0x35, 0x2c, 0x27, + 0x5f, 0x02, 0xa8, 0x3b, 0x61, 0xa4, 0x77, 0x22, 0xef, 0x23, 0xf7, 0x04, 0x9b, 0x8e, 0x88, 0x3f, + 0x7a, 0x06, 0x73, 0xf3, 0x38, 0x48, 0x96, 0xd9, 0x23, 0x24, 0x86, 0x74, 0x08, 0xe4, 0x38, 0xf0, + 0x7c, 0x08, 0x46, 0x45, 0x88, 0xc0, 0xd2, 0x7e, 0x55, 0x01, 0xd8, 0xad, 0x55, 0x30, 0x85, 0xc8, + 0x93, 0x6a, 0x11, 0xf9, 0x31, 0xcb, 0x05, 0xf7, 0x09, 0xfa, 0xc3, 0x9f, 0x28, 0xb0, 0x98, 0x46, + 0x23, 0xef, 0xc0, 0x92, 0xd5, 0x09, 0xfc, 0x5e, 0xef, 0xbe, 0xd3, 0x79, 0x50, 0xf7, 0xfa, 0x2e, + 0x8b, 0xe3, 0x3c, 0xc5, 0x36, 0xaf, 0x30, 0x2e, 0xb2, 0x7b, 0xb4, 0xcc, 0xcc, 0x22, 0x93, 0xaf, + 0x2a, 0xb0, 0x60, 0x1d, 0xf8, 0x0f, 0xe3, 0xb0, 0xc8, 0x7c, 0x04, 0xbf, 0x44, 0x85, 0x41, 0x78, + 0xe0, 0x3f, 0x4c, 0xf2, 0x32, 0xa7, 0x9c, 0x75, 0xdf, 0x3e, 0x99, 0x1f, 0x45, 0xc7, 0xc7, 0x03, + 0x4c, 0x14, 0xbe, 0x94, 0xfa, 0x88, 0x99, 0xfe, 0xa6, 0xf6, 0x63, 0x05, 0xe6, 0xf0, 0xa8, 
0xd3, + 0xeb, 0xa1, 0x92, 0xf6, 0x37, 0x29, 0xc9, 0x6f, 0xdc, 0xae, 0x09, 0x03, 0xfb, 0x1a, 0x2c, 0x65, + 0xd0, 0x88, 0x06, 0xd3, 0x16, 0x46, 0x74, 0x90, 0xad, 0x24, 0x2c, 0xc6, 0x83, 0xc9, 0x4b, 0x34, + 0x43, 0x22, 0xbb, 0x73, 0x1d, 0xef, 0xd5, 0x6f, 0x00, 0x78, 0x02, 0x24, 0x8e, 0x42, 0x24, 0x5b, + 0x93, 0x3b, 0xd7, 0x4d, 0x09, 0x4b, 0x6b, 0xc0, 0xb4, 0xe5, 0x07, 0xd1, 0xc6, 0x11, 0x3b, 0x7d, + 0x54, 0xdd, 0xb0, 0x23, 0x5f, 0x9c, 0x7b, 0x78, 0xad, 0xd5, 0x31, 0x79, 0x11, 0xb9, 0x0c, 0x53, + 0x9b, 0x9e, 0xdb, 0xeb, 0xca, 0x0e, 0xd5, 0x7b, 0x14, 0x60, 0x32, 0x38, 0x3d, 0xa1, 0x9d, 0x4d, + 0x32, 0x6d, 0x25, 0x9e, 0xdb, 0x4f, 0xba, 0x6e, 0x2a, 0xa9, 0xfe, 0xbd, 0x1a, 0x67, 0xb7, 0x19, + 0xfd, 0xd2, 0x84, 0xae, 0xfe, 0xe7, 0x0a, 0xac, 0x8d, 0x27, 0x91, 0x9d, 0xc1, 0x95, 0x09, 0xce, + 0xe0, 0xcf, 0x66, 0x2f, 0x7a, 0x11, 0x8d, 0x5f, 0xf4, 0x26, 0xd7, 0xbb, 0x55, 0xf4, 0xc5, 0xef, + 0xb8, 0x22, 0xbd, 0xd6, 0x95, 0x09, 0x75, 0x46, 0x44, 0x36, 0xcc, 0x11, 0xd2, 0x98, 0x9c, 0x56, + 0xfb, 0x57, 0x25, 0x38, 0x3f, 0x96, 0x82, 0x6c, 0x4b, 0x49, 0xfb, 0x16, 0xe3, 0x74, 0x61, 0x63, + 0xf1, 0x5f, 0xc2, 0xbf, 0xe8, 0x6e, 0x99, 0x7d, 0xe0, 0xd7, 0x8c, 0x93, 0xb5, 0x15, 0x90, 0xd7, + 0x0b, 0xc7, 0xf2, 0x62, 0xe8, 0xc8, 0x0c, 0x46, 0xf3, 0xb6, 0xe1, 0x5b, 0x52, 0x37, 0x72, 0xbc, + 0x5e, 0x28, 0x2f, 0xbb, 0x2e, 0x03, 0x99, 0xa2, 0x2c, 0xf1, 0xd0, 0x2f, 0xe5, 0x7b, 0xe8, 0x6b, + 0xff, 0x5b, 0x81, 0xd9, 0xb8, 0xda, 0x64, 0x0d, 0xce, 0xb6, 0x4d, 0xbd, 0x62, 0xd8, 0xed, 0x7b, + 0x2d, 0xc3, 0xde, 0x6d, 0x58, 0x2d, 0xa3, 0x52, 0xdb, 0xac, 0x19, 0x55, 0xf5, 0x14, 0x59, 0x86, + 0x85, 0xdd, 0xc6, 0xed, 0x46, 0xf3, 0x6e, 0xc3, 0x36, 0x4c, 0xb3, 0x69, 0xaa, 0x0a, 0x59, 0x80, + 0x59, 0x73, 0x43, 0xaf, 0xd8, 0x8d, 0x66, 0xd5, 0x50, 0x0b, 0x44, 0x85, 0xf9, 0x4a, 0xb3, 0xd1, + 0x30, 0x2a, 0xed, 0xda, 0x9d, 0x5a, 0xfb, 0x9e, 0x5a, 0x24, 0x04, 0x16, 0x11, 0xa1, 0x65, 0xd6, + 0x1a, 0x95, 0x5a, 0x4b, 0xaf, 0xab, 0x25, 0x0a, 0xa3, 0xf8, 0x12, 0x6c, 0x2a, 0x66, 0x74, 0x7b, + 0x77, 0xc3, 0x50, 0xa7, 0x29, 
0x0a, 0xfd, 0x4f, 0x42, 0x99, 0xa1, 0x9f, 0x47, 0x94, 0xaa, 0xde, + 0xd6, 0x37, 0x74, 0xcb, 0x50, 0xcb, 0xe4, 0x1c, 0x9c, 0x4e, 0x81, 0xec, 0x7a, 0x73, 0xab, 0xd6, + 0x50, 0x67, 0xc9, 0x0a, 0xa8, 0x31, 0xac, 0xba, 0x61, 0xef, 0x5a, 0x86, 0xa9, 0x42, 0x16, 0xda, + 0xd0, 0x77, 0x0c, 0x75, 0x4e, 0x7b, 0x9b, 0x3d, 0xbd, 0x64, 0x5d, 0x4d, 0xce, 0x02, 0xb1, 0xda, + 0x7a, 0x7b, 0xd7, 0xca, 0x34, 0x7e, 0x0e, 0x66, 0xac, 0xdd, 0x4a, 0xc5, 0xb0, 0x2c, 0x55, 0x21, + 0x00, 0xd3, 0x9b, 0x7a, 0xad, 0x6e, 0x54, 0xd5, 0x82, 0xf6, 0x5b, 0x0a, 0x2c, 0x0b, 0x95, 0x51, + 0xdc, 0x7b, 0x3d, 0xe1, 0x5a, 0x7c, 0x27, 0x75, 0x12, 0x16, 0xef, 0xe8, 0x32, 0x1f, 0x99, 0xb0, + 0x0c, 0xff, 0x81, 0x02, 0x67, 0x72, 0xb1, 0xc9, 0x3d, 0x50, 0x45, 0x0d, 0xe2, 0x37, 0xb1, 0x4a, + 0x4a, 0xe3, 0x16, 0x74, 0x19, 0x34, 0x66, 0x5b, 0x8d, 0xb7, 0x2b, 0x73, 0x84, 0xcd, 0xc9, 0xb3, + 0xd2, 0x68, 0x1f, 0x2b, 0x70, 0x6e, 0xcc, 0x67, 0x48, 0x05, 0xa6, 0xe3, 0x74, 0x67, 0x13, 0x3c, + 0x0e, 0x57, 0xbe, 0xf7, 0xc9, 0x65, 0x8e, 0x88, 0x79, 0xd7, 0xf1, 0x3f, 0x73, 0x3a, 0xce, 0x5f, + 0x86, 0x49, 0xc4, 0x58, 0xf7, 0x9d, 0xcf, 0xf4, 0x3c, 0xff, 0x92, 0x7e, 0xd7, 0xda, 0x98, 0xe3, + 0x7d, 0x57, 0x74, 0x1e, 0x86, 0x98, 0x45, 0x4c, 0xfb, 0x5d, 0x85, 0x6a, 0x83, 0x59, 0x44, 0xaa, + 0x24, 0xeb, 0x61, 0x38, 0x3c, 0x74, 0x4d, 0xbf, 0xe7, 0xea, 0x66, 0x83, 0x6f, 0x1b, 0xa8, 0xde, + 0x3a, 0x58, 0x80, 0xe7, 0x10, 0xdb, 0x09, 0x52, 0xa1, 0x1c, 0x52, 0x34, 0xe4, 0x26, 0x80, 0xf1, + 0x28, 0x72, 0x83, 0xbe, 0xd3, 0x8b, 0x83, 0xf2, 0xb0, 0xa8, 0x69, 0x1c, 0x9a, 0x56, 0xd0, 0x25, + 0x64, 0xed, 0x6b, 0x0a, 0xcc, 0x73, 0x65, 0x49, 0xef, 0xb9, 0x41, 0xf4, 0x64, 0xd3, 0xeb, 0x66, + 0x6a, 0x7a, 0xc5, 0x0f, 0x6c, 0x24, 0xfe, 0xb4, 0x38, 0x77, 0x66, 0xfd, 0x5b, 0x05, 0xd4, 0x2c, + 0x22, 0x79, 0x07, 0xca, 0x96, 0xfb, 0x91, 0x1b, 0x78, 0xd1, 0x11, 0x17, 0x94, 0x22, 0x31, 0x2c, + 0xc3, 0xe1, 0x65, 0x6c, 0x3e, 0x84, 0xfc, 0x97, 0x19, 0xd3, 0x9c, 0x54, 0xde, 0x4b, 0x76, 0x92, + 0xe2, 0xa7, 0x65, 0x27, 0xd1, 0xfe, 0xac, 0x00, 0xe7, 0xb6, 0xdc, 
0x48, 0x6e, 0x53, 0xec, 0x09, + 0xf2, 0xca, 0xc9, 0xda, 0x25, 0xb5, 0x64, 0x15, 0x66, 0xb0, 0x48, 0x8c, 0xaf, 0x29, 0x7e, 0x92, + 0x8d, 0x78, 0x5e, 0x17, 0x53, 0x99, 0x27, 0xc7, 0x7c, 0xfb, 0x25, 0x29, 0x17, 0x5d, 0x3c, 0xad, + 0xaf, 0xc1, 0x22, 0xe6, 0x08, 0x19, 0xd2, 0xe5, 0xe0, 0x76, 0xb9, 0xbd, 0xa8, 0x6c, 0x66, 0xa0, + 0x64, 0x1d, 0x54, 0x0a, 0xd1, 0x3b, 0x0f, 0xfa, 0xfe, 0xc3, 0x9e, 0xdb, 0xdd, 0x77, 0xbb, 0xb8, + 0xad, 0x97, 0xcd, 0x11, 0xb8, 0xe0, 0xb9, 0xdb, 0x67, 0x67, 0x3d, 0xb7, 0x8b, 0x46, 0x1d, 0xce, + 0x33, 0x81, 0xae, 0xdd, 0x84, 0xb9, 0x9f, 0x30, 0xaf, 0xa4, 0xf6, 0x5f, 0x14, 0x58, 0xc1, 0xc6, + 0x49, 0x1f, 0x16, 0x39, 0xbf, 0x45, 0x6f, 0x49, 0xa9, 0xd6, 0x1c, 0x0a, 0x4a, 0x2f, 0x85, 0xb8, + 0x17, 0x13, 0x23, 0x52, 0xe1, 0x04, 0x46, 0x24, 0xc9, 0x06, 0x56, 0xfa, 0xb4, 0x6c, 0x60, 0xb7, + 0x4a, 0xe5, 0xa2, 0x5a, 0x4a, 0x86, 0x5c, 0xfb, 0x6a, 0x01, 0x66, 0x4c, 0xb7, 0xe7, 0x3a, 0xa1, + 0x4b, 0xae, 0xc1, 0x4c, 0xc3, 0x8f, 0xdc, 0x70, 0xa7, 0x2a, 0x7b, 0x59, 0xf7, 0x29, 0xc8, 0x3e, + 0xec, 0x9a, 0xa2, 0x90, 0x4e, 0xf8, 0x56, 0xe0, 0x77, 0x87, 0x9d, 0x48, 0x9e, 0xf0, 0x03, 0x06, + 0x32, 0x45, 0x19, 0xf9, 0x0c, 0xcc, 0x72, 0xce, 0xf1, 0xcd, 0x32, 0x3a, 0x8f, 0x07, 0x0c, 0x88, + 0xd9, 0x84, 0x62, 0x04, 0xd4, 0x69, 0x99, 0x82, 0x51, 0x92, 0x74, 0xda, 0x11, 0x9d, 0x41, 0xa8, + 0xea, 0x53, 0x13, 0x54, 0xf5, 0x57, 0x60, 0x5a, 0x0f, 0x43, 0x37, 0x12, 0x91, 0x27, 0xe6, 0xe3, + 0x90, 0x84, 0xa1, 0x1b, 0x31, 0xc6, 0x0e, 0x96, 0x9b, 0x1c, 0x4f, 0xfb, 0xcb, 0x02, 0x4c, 0xe1, + 0xbf, 0x78, 0x6f, 0x1b, 0x74, 0x0e, 0x52, 0xf7, 0xb6, 0x41, 0xe7, 0xc0, 0x44, 0x28, 0xb9, 0x8e, + 0xa6, 0x0d, 0x91, 0x15, 0x90, 0xb7, 0x1e, 0x6d, 0xf6, 0xdd, 0x04, 0x6c, 0xca, 0x38, 0xb1, 0x9b, + 0x41, 0x31, 0x37, 0xde, 0xcc, 0x59, 0x28, 0x34, 0x2d, 0xde, 0x62, 0x0c, 0x81, 0xe6, 0x87, 0x66, + 0xa1, 0x69, 0x61, 0x6f, 0x6c, 0xeb, 0x37, 0x5e, 0x7b, 0x9d, 0x37, 0x94, 0xf5, 0xc6, 0x81, 0x73, + 0xe3, 0xb5, 0xd7, 0x4d, 0x5e, 0x42, 0xfb, 0x17, 0xeb, 0x8c, 0xb7, 0xbf, 0x2c, 0x2c, 0x02, 0xf6, + 0x2f, 
0xb6, 0x0d, 0x6f, 0x7a, 0xcd, 0x04, 0x81, 0xdc, 0x80, 0x39, 0x1e, 0x9f, 0x03, 0xf1, 0xa5, + 0xf8, 0x19, 0x3c, 0x7e, 0x07, 0xa3, 0x90, 0x91, 0xd8, 0x3d, 0x20, 0x1f, 0x20, 0x91, 0xbb, 0x9c, + 0xdf, 0x03, 0x8a, 0x21, 0x0c, 0x4d, 0x09, 0x85, 0x56, 0x89, 0x5d, 0x24, 0x26, 0xe1, 0x0e, 0xb0, + 0x4a, 0xfc, 0xb6, 0x11, 0xf3, 0xb1, 0xc4, 0x08, 0xda, 0xef, 0x17, 0xa0, 0xdc, 0xea, 0x0d, 0xf7, + 0xbd, 0xfe, 0x9d, 0xeb, 0x84, 0x00, 0x1e, 0xe3, 0x44, 0xc2, 0x1e, 0xfa, 0x3f, 0x39, 0x0f, 0x65, + 0x71, 0x72, 0x13, 0x02, 0x29, 0xe4, 0xa7, 0xb6, 0x55, 0x10, 0xe3, 0xce, 0x63, 0xdd, 0x89, 0x9f, + 0xe4, 0x3a, 0xc4, 0xe7, 0xaf, 0x71, 0x07, 0xb5, 0x12, 0x5d, 0x2c, 0x66, 0x8c, 0x46, 0x5e, 0x04, + 0xdc, 0x24, 0xf8, 0xe1, 0x41, 0x58, 0xc0, 0x59, 0xd5, 0xb8, 0x9e, 0xc2, 0x48, 0x10, 0x8d, 0xbc, + 0x9a, 0xc9, 0xad, 0x77, 0x26, 0x4d, 0xc0, 0xb2, 0xaf, 0x09, 0x12, 0x91, 0x57, 0xef, 0x2d, 0x98, + 0xeb, 0x04, 0x2e, 0x5e, 0x7d, 0x3a, 0x3d, 0xf1, 0xfe, 0x76, 0x2d, 0x45, 0x59, 0x49, 0xca, 0xef, + 0x5c, 0x37, 0x65, 0x74, 0xed, 0x3b, 0xb3, 0x30, 0x2f, 0xd7, 0x87, 0x98, 0x70, 0x3a, 0xec, 0xd1, + 0xb3, 0x3b, 0xf7, 0x0b, 0x1c, 0x60, 0x21, 0xdf, 0x4e, 0xaf, 0xa4, 0x2b, 0x44, 0xf1, 0x98, 0x93, + 0xa0, 0x08, 0x2c, 0xb2, 0x7d, 0xca, 0x5c, 0x0e, 0x13, 0x30, 0xc3, 0x23, 0x3a, 0x94, 0xfd, 0x41, + 0xb8, 0xef, 0xf6, 0x3d, 0x71, 0x41, 0xf3, 0x74, 0x8a, 0x51, 0x93, 0x17, 0x8e, 0xf0, 0x8a, 0xc9, + 0xc8, 0x6b, 0x30, 0xed, 0x0f, 0xdc, 0xbe, 0xe3, 0xf1, 0x3d, 0xee, 0x42, 0x86, 0x81, 0xdb, 0xd7, + 0x6b, 0x12, 0x21, 0x47, 0x26, 0x2f, 0x43, 0xc9, 0x7f, 0x10, 0x8f, 0xd7, 0xf9, 0x34, 0xd1, 0x83, + 0xc8, 0x91, 0x48, 0x10, 0x91, 0x12, 0x7c, 0xe0, 0x1c, 0xee, 0xf1, 0x11, 0x4b, 0x13, 0xdc, 0x72, + 0x0e, 0xf7, 0x64, 0x02, 0x8a, 0x48, 0xde, 0x05, 0x18, 0x38, 0xfb, 0x6e, 0x60, 0x77, 0x87, 0xd1, + 0x11, 0x1f, 0xb7, 0x4b, 0x29, 0xb2, 0x16, 0x2d, 0xae, 0x0e, 0xa3, 0x23, 0x89, 0x76, 0x76, 0x20, + 0x80, 0x44, 0x07, 0x38, 0x74, 0xa2, 0xc8, 0x0d, 0x0e, 0x7d, 0xee, 0x98, 0x99, 0x04, 0xd8, 0x64, + 0x0c, 0x76, 0xe2, 0x62, 0x89, 0x83, 0x44, 
0x84, 0x95, 0xf6, 0x02, 0x07, 0xaf, 0xe1, 0x47, 0x2a, + 0xed, 0x05, 0xa9, 0x56, 0x52, 0x44, 0xf2, 0x26, 0xcc, 0x74, 0xbd, 0xb0, 0xe3, 0x07, 0x5d, 0x1e, + 0x71, 0xe6, 0x62, 0x8a, 0xa6, 0xca, 0xca, 0x24, 0x32, 0x81, 0x4e, 0x6b, 0xcb, 0x03, 0xec, 0x36, + 0xfc, 0x87, 0x78, 0x2f, 0x90, 0xad, 0xad, 0x15, 0x17, 0xcb, 0xb5, 0x4d, 0x88, 0xe8, 0x50, 0xee, + 0x7b, 0x51, 0xcf, 0xb9, 0xcf, 0x2f, 0xdb, 0xd3, 0x43, 0xb9, 0x85, 0x45, 0xf2, 0x50, 0x32, 0x64, + 0x72, 0x13, 0xca, 0x22, 0x59, 0x07, 0x7f, 0xd5, 0x9a, 0xae, 0x34, 0x4f, 0xb6, 0x21, 0x57, 0x9a, + 0xa7, 0xe7, 0xa0, 0xfd, 0x13, 0x76, 0xbc, 0x43, 0xfe, 0x18, 0x35, 0xdd, 0x3f, 0x56, 0xa5, 0xb6, + 0x23, 0xf7, 0x0f, 0x45, 0x24, 0xef, 0xc0, 0x0c, 0x5d, 0xbf, 0x5d, 0x7f, 0x9f, 0xc7, 0xec, 0xd0, + 0xd2, 0xfd, 0xc3, 0xca, 0x46, 0xa6, 0xab, 0x20, 0xa2, 0x0b, 0xd9, 0x79, 0x18, 0xda, 0x5e, 0x07, + 0xe3, 0xae, 0x66, 0x97, 0xa3, 0x7e, 0xd7, 0xaa, 0x55, 0x24, 0xb2, 0x29, 0xe7, 0x61, 0x58, 0xeb, + 0x90, 0x1b, 0x30, 0x85, 0xb9, 0x6c, 0x78, 0x90, 0xd5, 0x34, 0x0d, 0x66, 0xb1, 0x91, 0x69, 0x10, + 0x95, 0x0e, 0xe4, 0x61, 0x88, 0xef, 0x7b, 0x78, 0x46, 0x99, 0x74, 0x9f, 0xec, 0x58, 0xf8, 0xe8, + 0x47, 0xae, 0x22, 0x47, 0xa7, 0x55, 0xec, 0xbb, 0x91, 0xed, 0x7d, 0xc8, 0x73, 0xc2, 0xa4, 0x3f, + 0xd7, 0x70, 0xa3, 0xda, 0x7b, 0xf2, 0xe7, 0xfa, 0x6e, 0x54, 0xfb, 0x90, 0x0f, 0xdd, 0xc1, 0xf0, + 0x3e, 0x1a, 0xdf, 0x73, 0x86, 0xee, 0x60, 0x98, 0x1d, 0xba, 0x83, 0xe1, 0x7d, 0x4a, 0xe6, 0xf5, + 0xa3, 0x61, 0xdf, 0xe5, 0xaf, 0x4b, 0xd3, 0x64, 0x35, 0x2c, 0x92, 0xc9, 0x18, 0x32, 0xb9, 0x04, + 0x90, 0x78, 0x3b, 0xb0, 0x7b, 0x24, 0x53, 0x82, 0x7c, 0xb6, 0xf4, 0xdf, 0xfe, 0xf1, 0x65, 0x65, + 0x03, 0xa0, 0x2c, 0x02, 0x1f, 0x51, 0x35, 0x7c, 0x25, 0xaf, 0x2e, 0xe4, 0x2a, 0xcc, 0xcb, 0x61, + 0x99, 0xf8, 0x66, 0x30, 0xe7, 0x0c, 0x3c, 0x11, 0x98, 0x69, 0x72, 0xb6, 0x90, 0x17, 0x60, 0x39, + 0xf5, 0x08, 0x2b, 0x71, 0x5b, 0x34, 0x55, 0xb9, 0x00, 0xf7, 0xde, 0x0a, 0x40, 0x18, 0x39, 0x41, + 0x64, 0x77, 0x9d, 0xe8, 0x24, 0x66, 0xe4, 0x32, 0x95, 0xe7, 0xcc, 0xe7, 0x1d, 
0xe9, 0xaa, 0x4e, + 0xe4, 0xb2, 0xc6, 0x69, 0x75, 0x38, 0x3f, 0x56, 0xd6, 0x92, 0xe7, 0x41, 0xdd, 0x73, 0xb8, 0xa5, + 0xb5, 0x73, 0xe0, 0xf4, 0xfb, 0x6e, 0x8f, 0x37, 0x6c, 0x49, 0xc0, 0x2b, 0x0c, 0xcc, 0xb9, 0xbd, + 0x2b, 0xf5, 0x8e, 0xb4, 0xc8, 0x4e, 0xd0, 0x3b, 0x9c, 0xc1, 0xb7, 0x14, 0xb8, 0x38, 0x49, 0x64, + 0x93, 0x35, 0x28, 0x0f, 0x02, 0xcf, 0xc7, 0xa3, 0x01, 0xef, 0x43, 0xf1, 0x1b, 0x93, 0xa9, 0xa0, + 0x0e, 0x1b, 0x39, 0xfb, 0xfc, 0x55, 0x93, 0x39, 0x8b, 0x90, 0xb6, 0xb3, 0x1f, 0xd2, 0x2e, 0xee, + 0xba, 0x7b, 0xce, 0xb0, 0x17, 0xd9, 0x61, 0xe7, 0xc0, 0xed, 0xe2, 0xbb, 0x43, 0xf4, 0xf7, 0x34, + 0x55, 0x5e, 0x60, 0x09, 0xf8, 0x48, 0x8d, 0xa7, 0xc6, 0xd4, 0xf8, 0x56, 0xa9, 0xac, 0xa8, 0x05, + 0x13, 0x5d, 0xe4, 0xb4, 0xaf, 0x14, 0x60, 0x75, 0x9c, 0x8c, 0x22, 0x6f, 0xe7, 0xf5, 0x01, 0xbb, + 0x5d, 0x92, 0xe1, 0xf2, 0xed, 0x92, 0x3c, 0x7b, 0x6e, 0x40, 0xfc, 0x6a, 0xf0, 0xb8, 0x08, 0x20, + 0x02, 0x46, 0x69, 0x06, 0x4e, 0x18, 0x3e, 0xa4, 0x62, 0xb8, 0x28, 0x05, 0xc8, 0xe6, 0x30, 0x99, + 0x46, 0xc0, 0xc8, 0x1b, 0x00, 0x9d, 0x9e, 0x1f, 0xba, 0xe8, 0xc4, 0xc1, 0xf5, 0x3b, 0xf6, 0x16, + 0x22, 0x86, 0xca, 0xb7, 0xf6, 0x08, 0xad, 0xf8, 0x5d, 0x31, 0x9f, 0x1c, 0x38, 0x37, 0x66, 0x53, + 0xa2, 0xc3, 0x83, 0xcf, 0x00, 0x99, 0x0c, 0xe2, 0x89, 0x08, 0x29, 0x84, 0x25, 0xd0, 0xca, 0xf6, + 0x78, 0x61, 0xdc, 0x1c, 0x39, 0x02, 0x32, 0xba, 0xf3, 0x50, 0xee, 0xdc, 0xf7, 0x7f, 0x18, 0xc4, + 0xdc, 0x19, 0x64, 0x37, 0xe8, 0x91, 0xcb, 0x30, 0x27, 0xb2, 0x42, 0xd3, 0xf3, 0x13, 0x63, 0x0e, + 0x1c, 0x74, 0xdb, 0xc5, 0xc9, 0x83, 0x61, 0x81, 0x59, 0x62, 0x24, 0xb6, 0xf2, 0x66, 0x11, 0xd2, + 0x3e, 0x1a, 0x88, 0xd6, 0x5d, 0x14, 0xf3, 0x3b, 0xad, 0x0f, 0xf0, 0xd2, 0xbf, 0xa7, 0x88, 0xe1, + 0x1f, 0xdd, 0x50, 0x8f, 0xab, 0x1f, 0x01, 0x7c, 0x9a, 0xc7, 0x2b, 0x86, 0xff, 0x53, 0x4d, 0x51, + 0xac, 0x3a, 0xae, 0x29, 0xf2, 0x9f, 0xe4, 0x1a, 0x2c, 0x05, 0xcc, 0xfd, 0x39, 0xf2, 0x79, 0x7f, + 0xb2, 0xdc, 0x41, 0x0b, 0x0c, 0xdc, 0xf6, 0xb1, 0x4f, 0x79, 0xbd, 0x6e, 0xc5, 0x1d, 0x26, 0xe9, + 0x17, 0xe4, 0x25, 
0x98, 0xa5, 0xfa, 0x05, 0x06, 0x80, 0xca, 0xbc, 0x09, 0x42, 0x3c, 0xd4, 0xd6, + 0xcc, 0xf2, 0x07, 0xfc, 0x7f, 0xce, 0xeb, 0x57, 0x63, 0x01, 0x98, 0x96, 0xaa, 0xe4, 0x2c, 0x4c, + 0xb3, 0x6c, 0xe6, 0xbc, 0x6d, 0xfc, 0x17, 0x79, 0x16, 0x16, 0xd9, 0x43, 0xde, 0xcc, 0xc0, 0x2e, + 0x20, 0x34, 0x9e, 0xde, 0x27, 0x4b, 0x3c, 0xc5, 0x2b, 0xf1, 0x87, 0x05, 0xd1, 0x22, 0x59, 0xc5, + 0x22, 0xe7, 0x60, 0xc6, 0x0f, 0xf6, 0xa5, 0xfe, 0x9d, 0xf6, 0x83, 0x7d, 0xda, 0xb9, 0xcf, 0x81, + 0xca, 0xde, 0xc9, 0xb1, 0x78, 0x25, 0xe1, 0x51, 0xbf, 0xc3, 0xdf, 0x4a, 0x2c, 0x32, 0xf8, 0x6e, + 0xe8, 0x06, 0xd6, 0x51, 0xbf, 0x43, 0x31, 0xc3, 0xd0, 0xb7, 0xe5, 0x60, 0x72, 0xbc, 0x22, 0x8b, + 0x61, 0xe8, 0x27, 0x51, 0xe5, 0xba, 0x64, 0x03, 0x16, 0x28, 0x9f, 0x38, 0x26, 0x1e, 0x17, 0xc3, + 0x4f, 0x8d, 0x6a, 0x80, 0x47, 0xfd, 0x8e, 0xa8, 0xa2, 0x39, 0x1f, 0x4a, 0xbf, 0xc8, 0x6d, 0x50, + 0x25, 0x55, 0x19, 0x1f, 0x4e, 0x67, 0xde, 0x03, 0x24, 0x6c, 0x24, 0x15, 0xbb, 0xd6, 0xdf, 0xf3, + 0xcd, 0xa5, 0x4e, 0x1a, 0x10, 0x8b, 0xa3, 0x69, 0x75, 0xc6, 0x5c, 0xe5, 0xcd, 0x0d, 0xd1, 0x55, + 0xd4, 0xee, 0xf9, 0xfb, 0xb6, 0xfb, 0x88, 0x4e, 0x0c, 0xed, 0x1f, 0x29, 0x42, 0xe0, 0xe7, 0x30, + 0x25, 0x1a, 0x2c, 0x1c, 0x38, 0xa1, 0x1d, 0x86, 0x87, 0xcc, 0x83, 0x91, 0x87, 0xf8, 0x9e, 0x3b, + 0x70, 0x42, 0x2b, 0x3c, 0x14, 0x09, 0x91, 0xce, 0x50, 0x1c, 0xdf, 0x19, 0x46, 0x07, 0xb6, 0x7c, + 0x30, 0x60, 0x3d, 0x7a, 0xfa, 0xc0, 0x09, 0x9b, 0xb4, 0x4c, 0xe2, 0x4d, 0x9e, 0x81, 0x45, 0xe4, + 0xdb, 0xf1, 0x04, 0x63, 0x8c, 0x49, 0x63, 0xce, 0x53, 0xc6, 0x1d, 0x8f, 0x71, 0xe6, 0x83, 0xfb, + 0xe3, 0x12, 0x9c, 0xcd, 0xef, 0x3d, 0x5c, 0x43, 0xb4, 0xcf, 0xf1, 0xf5, 0x2c, 0xaf, 0xdb, 0x2c, + 0x85, 0xb0, 0x78, 0x42, 0x79, 0x83, 0x57, 0xc8, 0x1d, 0xbc, 0x75, 0x58, 0x46, 0x46, 0xfc, 0x08, + 0xd2, 0xf3, 0xc2, 0x88, 0x87, 0xc9, 0x31, 0x97, 0x68, 0x01, 0xdb, 0x74, 0xea, 0x14, 0x4c, 0x67, + 0xa6, 0xd8, 0x36, 0xfc, 0x87, 0x7d, 0xfa, 0x61, 0xb6, 0x67, 0x2c, 0x70, 0x68, 0x13, 0x81, 0xe4, + 0x0c, 0x4c, 0x3b, 0x83, 0x01, 0xfd, 0x24, 0xdb, 0x2a, 
0xa6, 0x9c, 0xc1, 0x80, 0x65, 0x01, 0x63, + 0xf9, 0xd6, 0xf6, 0xd0, 0xdf, 0x4c, 0xbc, 0x04, 0x98, 0x47, 0x20, 0xf3, 0x41, 0xc3, 0xb7, 0x04, + 0x94, 0x56, 0xa0, 0xcc, 0x20, 0x0a, 0x38, 0x83, 0x18, 0xe1, 0x3c, 0x94, 0x85, 0xe7, 0x03, 0x7b, + 0x1c, 0x65, 0xce, 0x38, 0xdc, 0xeb, 0xe1, 0x35, 0x38, 0xc7, 0x93, 0xbb, 0xd9, 0xac, 0x49, 0x83, + 0x01, 0x7f, 0x9d, 0xcc, 0xc2, 0x6b, 0x9b, 0x2b, 0xbc, 0x98, 0xf6, 0xa4, 0x3e, 0x18, 0xc4, 0x6f, + 0x94, 0xd7, 0x04, 0xd9, 0x7d, 0x8f, 0x85, 0xed, 0x63, 0xde, 0xbf, 0xb8, 0x38, 0x00, 0x29, 0x57, + 0x39, 0xc6, 0x86, 0x8c, 0x20, 0x96, 0x49, 0xbc, 0x92, 0x6c, 0x66, 0xf8, 0xe4, 0xea, 0x13, 0xde, + 0x8f, 0xe3, 0xa0, 0x21, 0x94, 0xbc, 0x01, 0x63, 0xe7, 0x22, 0x6a, 0xe7, 0x65, 0xf3, 0x0c, 0x2b, + 0x67, 0x5e, 0xcd, 0x75, 0x7f, 0xdf, 0xc0, 0x42, 0xf2, 0x2e, 0x5c, 0x14, 0x15, 0x74, 0xc2, 0xd0, + 0xdb, 0xef, 0xdb, 0x62, 0x14, 0xd0, 0xf1, 0x04, 0x35, 0xf4, 0xb2, 0x79, 0x9e, 0xe3, 0xe8, 0x88, + 0x52, 0x65, 0x18, 0xec, 0x75, 0xeb, 0x2b, 0xb0, 0x12, 0x79, 0x87, 0xae, 0x7d, 0xdf, 0x8d, 0x1e, + 0xba, 0x6e, 0xdf, 0xf6, 0x0e, 0x29, 0x5f, 0x16, 0x30, 0x66, 0xd6, 0x24, 0xb4, 0x6c, 0x83, 0x15, + 0xd5, 0x58, 0x09, 0x9f, 0x7f, 0x6f, 0xc2, 0x12, 0x3f, 0x9e, 0x70, 0xdd, 0x06, 0xc7, 0x87, 0x4b, + 0x5e, 0x7c, 0xcc, 0xc1, 0x72, 0xd1, 0x01, 0x07, 0xd5, 0xba, 0x82, 0xf2, 0x4f, 0x15, 0x38, 0x93, + 0x7b, 0xbe, 0x21, 0xbf, 0x08, 0xec, 0x81, 0x6a, 0xe4, 0xdb, 0x81, 0xdb, 0xf1, 0x06, 0x1e, 0x46, + 0xfc, 0x61, 0xf6, 0xff, 0x1b, 0x93, 0x4e, 0x46, 0xf8, 0xd8, 0xb5, 0xed, 0x9b, 0x31, 0x11, 0x33, + 0x4c, 0xaa, 0x41, 0x06, 0xbc, 0xf6, 0x3e, 0x9c, 0xc9, 0x45, 0xcd, 0x31, 0x18, 0x7e, 0x46, 0x36, + 0x18, 0x26, 0x37, 0xba, 0x99, 0x46, 0x4b, 0x86, 0x44, 0xde, 0xbc, 0x3f, 0x8e, 0x9b, 0x97, 0x39, + 0x09, 0x11, 0x23, 0x2b, 0x0b, 0xf3, 0x0e, 0xf3, 0x82, 0x68, 0xbc, 0x38, 0x7c, 0x1f, 0xce, 0xf0, + 0x05, 0xc9, 0xb6, 0x82, 0x98, 0x1d, 0xab, 0xe8, 0xcf, 0xe5, 0xb1, 0x63, 0x2b, 0x75, 0x8b, 0xe2, + 0xc7, 0x5c, 0x4f, 0x3b, 0xa3, 0x40, 0xde, 0x86, 0x7f, 0x5d, 0x10, 0xe2, 0x2f, 0xa7, 0x3a, 
0x39, + 0x4b, 0x5d, 0xc9, 0x5b, 0xea, 0x27, 0x97, 0x33, 0x0d, 0x20, 0xb2, 0x80, 0xe7, 0x2b, 0x85, 0xb9, + 0x2b, 0x5e, 0x4e, 0x67, 0x7e, 0x94, 0xc4, 0x25, 0x5b, 0x3a, 0xe6, 0x72, 0x27, 0x0b, 0xa2, 0x47, + 0x08, 0xb6, 0xad, 0xd2, 0x4f, 0xb2, 0x1d, 0xbf, 0xcc, 0x00, 0xb5, 0x2e, 0xb9, 0x02, 0xf3, 0xec, + 0xfc, 0x9a, 0x92, 0x43, 0x80, 0x30, 0x1d, 0x85, 0xd1, 0x5b, 0x79, 0xc2, 0x28, 0xb9, 0x88, 0xe0, + 0xaa, 0xeb, 0x51, 0xbf, 0xc3, 0xe4, 0x4e, 0x5a, 0x4a, 0xf1, 0x1e, 0xfc, 0xf5, 0x02, 0xa8, 0x59, + 0x44, 0xa2, 0x41, 0xc1, 0xeb, 0x8e, 0xf3, 0xa4, 0xd9, 0x3e, 0x65, 0x16, 0xbc, 0x2e, 0xb9, 0x09, + 0x80, 0x39, 0x50, 0x03, 0x77, 0xdf, 0x7d, 0xc4, 0x55, 0x58, 0x54, 0x2c, 0x13, 0x68, 0x8a, 0x66, + 0x16, 0x8d, 0x8d, 0x14, 0x4c, 0xae, 0x03, 0xb8, 0x8f, 0x58, 0xf6, 0x0a, 0xb1, 0x1f, 0xe7, 0x7c, + 0x46, 0x31, 0x67, 0x39, 0x56, 0xad, 0x4b, 0xb6, 0x81, 0x08, 0x12, 0xe9, 0xab, 0xa5, 0x63, 0xbe, + 0xaa, 0x98, 0x2a, 0xa7, 0x6a, 0x88, 0x8f, 0xf3, 0x43, 0xe0, 0x2c, 0xcc, 0xf0, 0x04, 0x1a, 0xf4, + 0x5f, 0x8e, 0xa4, 0xfd, 0x8a, 0x02, 0x57, 0x8e, 0x9b, 0x8e, 0xe4, 0x2e, 0x9c, 0x45, 0xff, 0xb3, + 0xd0, 0x8f, 0x67, 0xb4, 0xdd, 0x71, 0x3a, 0x07, 0x2e, 0x17, 0x00, 0x5a, 0xee, 0xbc, 0x1e, 0x0c, + 0x2c, 0xab, 0x29, 0x4d, 0xe9, 0xc1, 0xc0, 0x0a, 0x7d, 0xf1, 0xbb, 0x42, 0xc9, 0xf9, 0x80, 0x74, + 0xe1, 0xc2, 0x04, 0x4a, 0x69, 0x5f, 0x52, 0xe4, 0x7d, 0xe9, 0x39, 0x50, 0xf7, 0xdc, 0x2e, 0x3d, + 0xe8, 0xba, 0x5d, 0xac, 0xda, 0x47, 0x37, 0x70, 0x4c, 0xe6, 0xcd, 0xc5, 0x18, 0x6e, 0x85, 0xfe, + 0x9d, 0x1b, 0xfc, 0x2b, 0x7f, 0x10, 0xab, 0x5c, 0xb2, 0x3d, 0x83, 0xdc, 0x80, 0xd3, 0x99, 0xc8, + 0x54, 0x52, 0xa8, 0x93, 0xc2, 0xaa, 0x62, 0x2e, 0xd3, 0xe2, 0x74, 0x2c, 0xc3, 0x67, 0x61, 0x5e, + 0x16, 0xe5, 0x7c, 0x2a, 0x50, 0xe4, 0xb9, 0x6e, 0x22, 0xc0, 0xc9, 0x7d, 0x58, 0x94, 0x56, 0x18, + 0x55, 0x8d, 0x8a, 0x39, 0x62, 0x40, 0xae, 0xcd, 0x4b, 0xc9, 0xd2, 0xeb, 0xef, 0xf9, 0xcc, 0x0d, + 0x2b, 0xcd, 0xc2, 0x5c, 0xe8, 0xc8, 0x28, 0x6b, 0xef, 0x49, 0xa1, 0x19, 0x51, 0x01, 0xba, 0x08, + 0xa5, 0x7e, 0x6e, 0x8c, 0xf5, 
0x3e, 0x4b, 0xc3, 0x5e, 0x8a, 0x72, 0x23, 0x09, 0x47, 0xf1, 0x11, + 0x81, 0x77, 0xd7, 0x50, 0x0c, 0x4a, 0xae, 0x25, 0xe7, 0x24, 0xd6, 0x82, 0x17, 0x81, 0xc4, 0x67, + 0xef, 0x78, 0xcf, 0xe0, 0x62, 0x66, 0x59, 0x94, 0xc4, 0xc2, 0x9e, 0x7f, 0xf6, 0x5f, 0x4e, 0xc3, + 0xe9, 0x1c, 0x13, 0x10, 0x79, 0x11, 0x54, 0xaf, 0x1f, 0xb9, 0xfb, 0x81, 0x64, 0x5c, 0x48, 0xc6, + 0x68, 0x49, 0x2a, 0xe3, 0xb6, 0xfd, 0xe9, 0xc0, 0xdd, 0x8f, 0xef, 0x09, 0x4c, 0xfe, 0x8b, 0xee, + 0x25, 0x4e, 0x20, 0xcc, 0xd6, 0xf4, 0x5f, 0x52, 0x83, 0x65, 0x4c, 0xb7, 0x14, 0x7a, 0x3e, 0x66, + 0x6d, 0xc2, 0xc3, 0x44, 0x29, 0x65, 0x28, 0xc2, 0x9a, 0xb4, 0x24, 0x24, 0x7a, 0x9a, 0x30, 0xd5, + 0x41, 0x06, 0x42, 0x3e, 0x07, 0x6b, 0x92, 0x3a, 0x66, 0x67, 0x04, 0x31, 0x3e, 0x55, 0x32, 0xcf, + 0x39, 0xb1, 0x62, 0x56, 0x4d, 0x89, 0xe4, 0x0d, 0x60, 0x09, 0xf9, 0xbd, 0xee, 0xc0, 0x1e, 0xc9, + 0xcf, 0x85, 0xcd, 0x65, 0x59, 0x5e, 0xd6, 0x28, 0x56, 0xad, 0x3b, 0xc8, 0xa4, 0xea, 0xc2, 0x56, + 0xb7, 0x72, 0x85, 0xf5, 0x0c, 0x0a, 0xeb, 0xa7, 0xe4, 0xc6, 0x8c, 0x88, 0x6a, 0x36, 0xd3, 0x47, + 0xc5, 0xf5, 0x3e, 0x2c, 0x27, 0x6a, 0x92, 0x90, 0xb9, 0x65, 0x5c, 0xf4, 0x6b, 0x32, 0x43, 0x71, + 0xfc, 0x60, 0xd2, 0x94, 0x05, 0x9b, 0x19, 0x21, 0x94, 0x43, 0x2a, 0x0d, 0x53, 0x04, 0x21, 0xa9, + 0xc3, 0x8a, 0xf3, 0x30, 0x14, 0x69, 0xbe, 0xc3, 0xf8, 0x5b, 0xb3, 0xa3, 0xdf, 0x12, 0x17, 0xd5, + 0x5c, 0xc4, 0x13, 0xe7, 0x61, 0xc8, 0xb3, 0x7f, 0x87, 0x82, 0xdb, 0x07, 0x40, 0xd8, 0x36, 0x91, + 0xaa, 0x37, 0x1c, 0xc7, 0x8b, 0xe7, 0x08, 0x1f, 0xa1, 0x94, 0x03, 0x43, 0x62, 0xa9, 0x5c, 0xf3, + 0x76, 0xfa, 0x72, 0x61, 0x2e, 0x75, 0x33, 0x9e, 0xed, 0x6d, 0x76, 0x6b, 0x2f, 0xe1, 0xcb, 0xc6, + 0x12, 0x09, 0x8c, 0xe7, 0x63, 0x0c, 0x1c, 0x87, 0xf5, 0x38, 0xf4, 0xbb, 0x2e, 0xf3, 0xf6, 0x36, + 0x17, 0x10, 0x4c, 0x2b, 0xb0, 0x93, 0xd8, 0x2c, 0x3e, 0x56, 0x40, 0xcd, 0x7e, 0x8a, 0xbc, 0x05, + 0xd3, 0x4c, 0x63, 0xe5, 0xca, 0x8c, 0x96, 0x5f, 0x27, 0x36, 0xd2, 0x4c, 0x79, 0xdd, 0x3e, 0x65, + 0x72, 0x1a, 0xf2, 0x3a, 0x94, 0x7c, 0xaf, 0x2b, 0x6e, 0xfa, 0xaf, 
0x4c, 0xa2, 0x6d, 0xd6, 0xaa, + 0x15, 0xbc, 0x1d, 0xf0, 0xba, 0xfc, 0x90, 0xbd, 0x51, 0x86, 0x69, 0xd6, 0xb1, 0xda, 0x07, 0x70, + 0x61, 0xc2, 0x07, 0x89, 0x01, 0x4b, 0x19, 0x2f, 0x88, 0x13, 0x3a, 0x48, 0x38, 0x89, 0x83, 0x44, + 0x20, 0x0e, 0x5e, 0x3d, 0x38, 0x3f, 0xb6, 0x82, 0xa4, 0x36, 0x56, 0x82, 0x60, 0x68, 0xa3, 0x6c, + 0x99, 0x3c, 0x59, 0x33, 0xd2, 0x85, 0x7f, 0xed, 0x37, 0x0a, 0x70, 0x3a, 0x67, 0x12, 0xfd, 0x7f, + 0xab, 0x4a, 0xfc, 0x81, 0xc2, 0xfb, 0x23, 0x2d, 0x0c, 0x48, 0x1b, 0xb8, 0x93, 0x0d, 0x17, 0x1c, + 0xd7, 0xc6, 0x0b, 0x0e, 0xd9, 0x77, 0x81, 0xc7, 0xb8, 0x42, 0x80, 0x7c, 0x43, 0xcf, 0x20, 0x4f, + 0xe0, 0x55, 0xc0, 0x87, 0xef, 0x7d, 0x38, 0x93, 0x2b, 0xd8, 0xe9, 0x51, 0x15, 0x9d, 0xfb, 0x13, + 0x2b, 0xcc, 0x0c, 0xfd, 0xbd, 0x1b, 0xa0, 0x85, 0xef, 0xbe, 0xeb, 0x04, 0x6e, 0xc0, 0x6d, 0x00, + 0xdc, 0xc2, 0xc7, 0x60, 0xb2, 0x09, 0xa0, 0x9b, 0xde, 0xc5, 0xf8, 0xb5, 0x24, 0xd9, 0x81, 0xd3, + 0x4c, 0xba, 0xb0, 0xa3, 0x9c, 0xcd, 0xaf, 0x32, 0x95, 0x94, 0xe1, 0x05, 0x49, 0xf0, 0x90, 0xcb, + 0x8e, 0x75, 0x8c, 0xda, 0x5c, 0xde, 0xcf, 0x82, 0xa8, 0xf2, 0x76, 0x36, 0x1f, 0x9b, 0x6c, 0xc0, + 0x1c, 0x63, 0xce, 0xac, 0xa0, 0xcc, 0x07, 0xe5, 0xea, 0xc4, 0x2f, 0x54, 0xf0, 0xcd, 0x5b, 0x18, + 0xff, 0x4f, 0x4f, 0xfe, 0xe8, 0xee, 0x67, 0x1f, 0xca, 0x2e, 0x36, 0xe6, 0x3c, 0x02, 0xb9, 0x6b, + 0x8d, 0xf6, 0x9f, 0x14, 0xd1, 0xd4, 0xd4, 0xfd, 0x0b, 0xdd, 0x81, 0x43, 0xb7, 0x2f, 0xdc, 0x8c, + 0x66, 0x4d, 0xfe, 0xeb, 0x31, 0xb5, 0x02, 0xf2, 0x06, 0xcc, 0x53, 0xb6, 0xfb, 0xc3, 0x3e, 0xdb, + 0x99, 0x8b, 0xa9, 0xd8, 0x9b, 0x3b, 0xac, 0x88, 0x0e, 0xdb, 0xf6, 0x29, 0x73, 0xee, 0x30, 0xf9, + 0x49, 0x5e, 0x82, 0xd9, 0xf0, 0x30, 0x1a, 0xc8, 0xfb, 0xb9, 0xb8, 0x8b, 0xb6, 0x76, 0xda, 0x2d, + 0x4e, 0x52, 0xa6, 0x38, 0x89, 0x71, 0x70, 0x63, 0x9a, 0xdd, 0x46, 0x6b, 0x2f, 0xc0, 0x9c, 0xc4, + 0x9b, 0x36, 0x86, 0xbd, 0x10, 0x17, 0x8d, 0x61, 0xbf, 0xf8, 0x60, 0xdf, 0x87, 0xb2, 0x60, 0x49, + 0x08, 0x94, 0x0e, 0xfc, 0x50, 0xe8, 0x43, 0xf8, 0x3f, 0x85, 0xa1, 0xb5, 0x80, 0x36, 0x72, 0xca, + 0xc4, 
0xff, 0xf1, 0x04, 0x86, 0x17, 0x20, 0x18, 0xb1, 0x1d, 0x5f, 0x05, 0xc4, 0x66, 0x3a, 0x0a, + 0x6f, 0xf7, 0x42, 0xf6, 0x56, 0x40, 0x18, 0x0c, 0xe3, 0xa3, 0x6b, 0xe6, 0xc2, 0x6a, 0x9c, 0x7a, + 0x9c, 0x3a, 0x68, 0x15, 0x46, 0x0f, 0x5a, 0x2c, 0xa6, 0x22, 0xa7, 0x64, 0x5f, 0x06, 0x84, 0xb1, + 0x83, 0x56, 0xa2, 0x40, 0x95, 0x52, 0x0a, 0x94, 0x74, 0x05, 0x91, 0x8c, 0x1e, 0x3b, 0xa7, 0x89, + 0x2b, 0x88, 0xac, 0x4a, 0xf7, 0xcd, 0x78, 0x86, 0xa4, 0xae, 0xcc, 0xc8, 0x0d, 0x38, 0xc3, 0x4c, + 0x70, 0x2c, 0x99, 0x5d, 0x56, 0x97, 0x3c, 0x8d, 0x85, 0x2c, 0x05, 0x69, 0xac, 0x53, 0x1e, 0x6f, + 0x62, 0x27, 0xaf, 0xc0, 0x4a, 0x9c, 0x4a, 0x3f, 0x7c, 0xe0, 0x0d, 0x58, 0xe6, 0xdf, 0x23, 0x6e, + 0x1c, 0x23, 0xa2, 0xcc, 0x7a, 0xe0, 0x0d, 0x30, 0x0b, 0xb0, 0xe8, 0xe1, 0xdf, 0x2b, 0x88, 0x8b, + 0x9b, 0x0d, 0xdf, 0x8f, 0xc2, 0x28, 0x70, 0x06, 0x29, 0xa7, 0x00, 0x72, 0x08, 0xe7, 0xb1, 0x4a, + 0x37, 0x30, 0x7b, 0x9f, 0x1f, 0x88, 0x8b, 0xae, 0x78, 0x81, 0xcd, 0xdd, 0x78, 0x39, 0x6d, 0xf4, + 0xd4, 0x29, 0xb6, 0x2e, 0x23, 0xd3, 0x75, 0x25, 0x71, 0xdd, 0x3e, 0x65, 0x9e, 0x63, 0x3c, 0x47, + 0xb0, 0xc8, 0x76, 0x8e, 0xac, 0xc9, 0x7a, 0x05, 0x6c, 0x24, 0x82, 0x27, 0xcd, 0x55, 0x16, 0x49, + 0xe4, 0x1d, 0x98, 0xf5, 0xba, 0x72, 0x0e, 0xff, 0xec, 0x7d, 0x74, 0xad, 0xcb, 0x72, 0xdd, 0x24, + 0x3c, 0xe8, 0xd2, 0xf0, 0x38, 0x74, 0x63, 0x21, 0xa5, 0xe1, 0x68, 0xdb, 0xe2, 0x8e, 0x60, 0x94, + 0x8c, 0x2c, 0x26, 0x7b, 0x1f, 0xee, 0x73, 0x28, 0x05, 0x92, 0x6c, 0x3b, 0x26, 0xff, 0xc5, 0xbb, + 0xfc, 0x97, 0xe0, 0xb9, 0x93, 0xf6, 0x14, 0x95, 0x1b, 0x63, 0xba, 0x7d, 0x96, 0x45, 0xda, 0x4f, + 0xf7, 0xde, 0x55, 0x90, 0x33, 0x84, 0x78, 0x62, 0xa2, 0x08, 0xd8, 0x6e, 0xe0, 0x69, 0x7f, 0x51, + 0x84, 0xc5, 0xb4, 0xdb, 0x08, 0x79, 0x01, 0x4a, 0x92, 0xb8, 0x3c, 0x97, 0xe3, 0x5b, 0x82, 0x42, + 0x12, 0x91, 0x4e, 0x24, 0x1e, 0xc9, 0x2d, 0x58, 0xc4, 0x97, 0x2f, 0xa8, 0xc6, 0x45, 0x1e, 0xbf, + 0x12, 0x3d, 0xe9, 0x65, 0xe7, 0x3c, 0xa5, 0xa5, 0xdb, 0x23, 0x2d, 0x94, 0xbc, 0x02, 0x4a, 0xe3, + 0xbd, 0x02, 0x78, 0x53, 0xc6, 0x78, 0x05, 
0x4c, 0x4d, 0xf0, 0x0a, 0x48, 0x28, 0x65, 0xaf, 0x00, + 0xf4, 0x0d, 0x99, 0x19, 0xe7, 0x1b, 0x92, 0xd0, 0x30, 0xdf, 0x90, 0xe4, 0x56, 0xbf, 0x3c, 0xf6, + 0x56, 0x3f, 0xa1, 0xe1, 0xb7, 0xfa, 0xc9, 0x3d, 0xfb, 0xec, 0xd8, 0x7b, 0x76, 0x89, 0x88, 0xdd, + 0xb3, 0x3f, 0xc3, 0x3b, 0x36, 0x70, 0x1e, 0xda, 0xd8, 0xe3, 0xfc, 0x7c, 0x84, 0x5d, 0x66, 0x3a, + 0x0f, 0xd1, 0x43, 0x9d, 0xaa, 0x27, 0xdc, 0xad, 0x5d, 0xfb, 0x56, 0x46, 0x0c, 0x89, 0x31, 0x7f, + 0x16, 0x16, 0xd9, 0x6e, 0xcc, 0x33, 0x38, 0xb0, 0xed, 0x78, 0xc1, 0x5c, 0x10, 0x50, 0x66, 0x9a, + 0xff, 0x39, 0x58, 0x8a, 0xd1, 0xb8, 0x75, 0x1a, 0x43, 0x6e, 0x98, 0x31, 0x35, 0xb7, 0x4b, 0xcb, + 0xfc, 0x02, 0x1e, 0x9d, 0x32, 0xc5, 0x8f, 0x19, 0x77, 0x5f, 0x04, 0x92, 0xa0, 0xc5, 0x8f, 0x7c, + 0x4a, 0x88, 0xba, 0x1c, 0xa3, 0xc6, 0x2f, 0x71, 0x7e, 0x47, 0xc9, 0x5c, 0x4c, 0xff, 0xb4, 0xaa, + 0xff, 0x02, 0xc4, 0x5f, 0xb7, 0xf9, 0xe5, 0xa2, 0x68, 0x81, 0x2a, 0x0a, 0x5a, 0x1c, 0xae, 0xed, + 0x67, 0xed, 0xa9, 0x3f, 0xa5, 0x5a, 0x69, 0xbf, 0x57, 0x4a, 0xdd, 0x97, 0x89, 0xcf, 0x50, 0x2d, + 0x27, 0xf4, 0x6d, 0x3e, 0xc4, 0x5c, 0x08, 0x5f, 0x1d, 0x33, 0x4d, 0xf9, 0xb3, 0x06, 0xcb, 0x6a, + 0x9a, 0x10, 0x86, 0xbe, 0x78, 0xe5, 0x60, 0x33, 0xe3, 0x96, 0x74, 0xea, 0x13, 0xec, 0x98, 0xc4, + 0x5d, 0x9f, 0xcc, 0x4e, 0x5c, 0x48, 0xd0, 0x55, 0x8a, 0x46, 0xae, 0xf8, 0x97, 0xf8, 0xc0, 0x2e, + 0xe0, 0x1d, 0x77, 0x98, 0x66, 0x9e, 0x67, 0x0a, 0x1a, 0x61, 0x8e, 0xbd, 0x84, 0x9c, 0xf1, 0xb6, + 0x22, 0x94, 0xd9, 0x1a, 0x30, 0x8f, 0xb7, 0x51, 0x82, 0x61, 0x29, 0xc7, 0x0b, 0x67, 0xb4, 0xf1, + 0x95, 0xda, 0x8e, 0x39, 0x47, 0xe9, 0x04, 0x9b, 0x03, 0x38, 0x2f, 0xdf, 0x21, 0xa5, 0x2b, 0x39, + 0x25, 0xf2, 0xae, 0x4c, 0xec, 0x81, 0xe4, 0xaa, 0x09, 0xab, 0x7a, 0xd6, 0x49, 0x03, 0xc4, 0x97, + 0x3a, 0x70, 0x7e, 0xe4, 0x06, 0x25, 0xfe, 0x12, 0xf3, 0x0a, 0x7b, 0x6e, 0xcc, 0x97, 0x32, 0x57, + 0x2b, 0x6e, 0x60, 0x9e, 0x0d, 0xd3, 0x20, 0xfe, 0x11, 0x7c, 0x46, 0x34, 0x7e, 0xe0, 0x27, 0xe4, + 0xfe, 0x4d, 0xd4, 0xa8, 0x82, 0xac, 0x46, 0xc9, 0xf7, 0x56, 0xc5, 0xf4, 0xbd, 
0xd5, 0x26, 0x5c, + 0xa1, 0x32, 0x8f, 0xcf, 0x1c, 0xf7, 0x23, 0x37, 0x38, 0xf2, 0xfb, 0x18, 0xc2, 0x73, 0x10, 0x2f, + 0x7d, 0x76, 0xd1, 0x76, 0x91, 0xe2, 0xe1, 0xbc, 0x30, 0x38, 0xd6, 0x0e, 0x22, 0xb1, 0xd0, 0xb4, + 0xff, 0xa4, 0x08, 0x4f, 0x9f, 0x60, 0x72, 0x4d, 0xa8, 0xfb, 0xe7, 0xd3, 0xca, 0x7e, 0x21, 0x65, + 0x9d, 0xa7, 0x4c, 0x13, 0x73, 0xf7, 0x18, 0x55, 0xff, 0x17, 0x61, 0x89, 0x6d, 0x53, 0xec, 0x1d, + 0xd5, 0xde, 0xb0, 0x77, 0x82, 0x7d, 0xea, 0x82, 0x88, 0x12, 0x91, 0x21, 0xc5, 0xad, 0x0b, 0xa5, + 0xb3, 0x15, 0xc3, 0x48, 0x1b, 0xe6, 0x10, 0x6d, 0xcf, 0xf1, 0x7a, 0x27, 0x0a, 0x57, 0x20, 0x62, + 0x50, 0xc8, 0x64, 0xec, 0xbd, 0x28, 0x05, 0x6c, 0xe2, 0x6f, 0x72, 0x0d, 0x96, 0xfa, 0xc3, 0x43, + 0xaa, 0xc6, 0xb2, 0x99, 0xcb, 0xdd, 0xd5, 0xa7, 0xcc, 0x85, 0xfe, 0xf0, 0x50, 0x1f, 0x0c, 0x70, + 0x02, 0xa2, 0x5f, 0xfb, 0x32, 0xc5, 0x63, 0x32, 0x46, 0x60, 0x4e, 0x23, 0x26, 0x65, 0xc0, 0xa4, + 0x0c, 0xc7, 0x5d, 0x01, 0xf6, 0xca, 0x89, 0xe7, 0x50, 0x66, 0x3f, 0xb4, 0xff, 0x59, 0x10, 0x86, + 0xf2, 0xf1, 0xab, 0xf4, 0x67, 0x43, 0x94, 0x33, 0x44, 0xcf, 0x81, 0x4a, 0xbb, 0x3e, 0x11, 0x81, + 0xf1, 0x18, 0x2d, 0xf6, 0x87, 0x87, 0x71, 0xdf, 0xc9, 0x1d, 0x3f, 0x2d, 0x77, 0xfc, 0x1b, 0xc2, + 0x10, 0x9d, 0x2b, 0xcc, 0xc6, 0x77, 0x39, 0xd5, 0xef, 0xae, 0x9d, 0x4c, 0x64, 0xfd, 0x6c, 0xdc, + 0x72, 0xc6, 0x2d, 0x73, 0xa5, 0x3f, 0x35, 0x72, 0xa5, 0x9f, 0xb3, 0xf6, 0xa6, 0xf3, 0xd6, 0xde, + 0x88, 0x03, 0xc1, 0x4c, 0x8e, 0x03, 0x41, 0xee, 0x02, 0x2d, 0x1f, 0xb3, 0x40, 0x67, 0xe5, 0x79, + 0xf2, 0xdd, 0x02, 0x5c, 0x3d, 0x76, 0xdf, 0xf8, 0xd9, 0x48, 0xe7, 0x8c, 0x74, 0x7e, 0x7f, 0xfe, + 0x45, 0x41, 0xe8, 0xcb, 0xe9, 0x63, 0xf0, 0xfb, 0x70, 0x5a, 0x1c, 0x83, 0x99, 0xde, 0x90, 0xf8, + 0xd9, 0xcc, 0xdd, 0x78, 0x3e, 0xef, 0x00, 0x8c, 0x68, 0x39, 0x87, 0xd4, 0x65, 0x7e, 0xf4, 0x4d, + 0xca, 0xff, 0xdf, 0x39, 0xf4, 0x92, 0x7b, 0x70, 0x16, 0x33, 0xb2, 0x75, 0x64, 0x0f, 0x21, 0x3b, + 0x70, 0xf7, 0x78, 0xaf, 0x5f, 0x1d, 0x39, 0x1c, 0x7a, 0x1d, 0xa9, 0x3a, 0xa6, 0xbb, 0xb7, 0x7d, + 0xca, 0x5c, 0x09, 
0x73, 0xe0, 0xdc, 0xd4, 0x94, 0x39, 0x55, 0xff, 0x0b, 0x05, 0xb4, 0xe3, 0x7b, + 0x0d, 0x0d, 0x20, 0xd9, 0x6e, 0x9f, 0x35, 0xe7, 0x1c, 0xa9, 0x0f, 0x9f, 0x86, 0x85, 0xc0, 0xdd, + 0x0b, 0xdc, 0xf0, 0x20, 0x65, 0xa5, 0x9c, 0xe7, 0x40, 0xd1, 0x3d, 0x22, 0x39, 0xc4, 0x63, 0x1d, + 0x48, 0x05, 0x51, 0xec, 0x7b, 0x7b, 0x71, 0xd2, 0x98, 0xd0, 0x99, 0x25, 0x57, 0x93, 0xfd, 0x88, + 0xfd, 0xbc, 0x0a, 0x6a, 0xd1, 0xe4, 0xe9, 0x2c, 0xf6, 0xbc, 0x9e, 0xab, 0xfd, 0x51, 0xac, 0xbb, + 0xe5, 0x75, 0x27, 0x79, 0x5f, 0x7a, 0xdf, 0x59, 0x1c, 0x51, 0x4b, 0xf3, 0x48, 0x4e, 0x62, 0x4e, + 0xae, 0x7f, 0x4a, 0xe6, 0xe4, 0x9b, 0xe2, 0x91, 0x08, 0xdd, 0x55, 0xee, 0x5c, 0x27, 0xcf, 0xc3, + 0x0c, 0x7b, 0x17, 0x22, 0xaa, 0xbb, 0x94, 0xaa, 0xee, 0x9d, 0xeb, 0xa6, 0x28, 0xd7, 0x3e, 0x8e, + 0x3d, 0xda, 0x46, 0x1a, 0x71, 0xe7, 0x3a, 0x79, 0xe3, 0x64, 0xef, 0x35, 0xcb, 0xe2, 0xbd, 0x66, + 0xfc, 0x56, 0xf3, 0xcd, 0xd4, 0x5b, 0xcd, 0x67, 0x26, 0xf7, 0x16, 0x77, 0x96, 0x64, 0x29, 0x02, + 0xe2, 0x38, 0xcf, 0xda, 0x57, 0x8b, 0xf0, 0xd4, 0x44, 0x0a, 0x72, 0x11, 0xca, 0x7a, 0xab, 0xd6, + 0x4e, 0x46, 0x99, 0xae, 0x22, 0x01, 0x21, 0x5b, 0x30, 0xbb, 0xe1, 0x84, 0x5e, 0x87, 0x4e, 0xe9, + 0x5c, 0x27, 0x98, 0x11, 0xb6, 0x31, 0xfa, 0xf6, 0x29, 0x33, 0xa1, 0x25, 0x36, 0x2c, 0xe3, 0xba, + 0x48, 0x65, 0x70, 0x2e, 0xe6, 0x18, 0xdd, 0x46, 0x18, 0x8e, 0x90, 0x51, 0xc9, 0x33, 0x02, 0x24, + 0xf7, 0x81, 0x58, 0xd6, 0x76, 0xc5, 0x0d, 0x22, 0x6e, 0x86, 0x8a, 0xbc, 0xf8, 0xf1, 0xdf, 0x2b, + 0xc7, 0xf4, 0xdd, 0x08, 0xdd, 0xf6, 0x29, 0x33, 0x87, 0x1b, 0xb9, 0x0a, 0x72, 0xaa, 0x71, 0xd4, + 0x82, 0xe6, 0xb7, 0x4f, 0x99, 0x30, 0x88, 0x53, 0x8e, 0xe7, 0xcb, 0x86, 0x5f, 0x12, 0xaa, 0xe7, + 0xf8, 0xde, 0x7a, 0x8c, 0x3c, 0x2d, 0xcf, 0x41, 0xb9, 0x25, 0x7c, 0xaa, 0xa5, 0xd7, 0xd6, 0xc2, + 0x7f, 0xda, 0x8c, 0x4b, 0xf9, 0xcc, 0xfe, 0x6d, 0x45, 0x98, 0xe9, 0x8e, 0xef, 0x5b, 0x29, 0x1f, + 0x77, 0x77, 0x72, 0x3e, 0xee, 0xee, 0x4f, 0x98, 0x8f, 0x9b, 0x57, 0xca, 0x87, 0xe7, 0x4f, 0x3c, + 0x1a, 0xe4, 0x2d, 0x50, 0x31, 0x5f, 0xb1, 0x23, 0x8d, 
0x6c, 0x3a, 0x38, 0x39, 0x0b, 0xbe, 0xd9, + 0x72, 0xbc, 0xc0, 0x5c, 0xea, 0xa4, 0xa9, 0xf9, 0x07, 0x7f, 0x9f, 0xa7, 0x70, 0xab, 0x75, 0x5b, + 0x99, 0xab, 0xf7, 0x27, 0x7d, 0xb2, 0x6f, 0xa4, 0xd6, 0xa9, 0xd8, 0xeb, 0xf2, 0xbf, 0x35, 0xfe, + 0xe5, 0xbe, 0xb4, 0x68, 0xff, 0x7e, 0x11, 0x2e, 0x4e, 0x22, 0x27, 0x3a, 0xa8, 0x06, 0x0b, 0x03, + 0xca, 0x1f, 0x4b, 0xfa, 0x81, 0x9c, 0x52, 0x94, 0x85, 0x08, 0xb5, 0xbb, 0x71, 0xa1, 0x39, 0x82, + 0x4e, 0xc7, 0x99, 0xc1, 0xe2, 0xf7, 0xe8, 0x38, 0xce, 0x9c, 0x94, 0x8e, 0xb3, 0x28, 0x26, 0x4f, + 0xc3, 0xb4, 0x5e, 0xb1, 0x92, 0xdc, 0xe9, 0xf8, 0x70, 0xd4, 0xe9, 0x84, 0xf8, 0x24, 0x91, 0x17, + 0x91, 0x5f, 0x00, 0x35, 0x9b, 0x3a, 0x91, 0x27, 0x4d, 0xbf, 0x20, 0x75, 0xc8, 0x48, 0x76, 0x45, + 0xac, 0x6f, 0x92, 0x0d, 0x90, 0x27, 0xd8, 0x32, 0x47, 0x78, 0x11, 0x0d, 0xa6, 0x5b, 0x81, 0x1b, + 0xba, 0x91, 0xfc, 0xa8, 0x73, 0x80, 0x10, 0x93, 0x97, 0xf0, 0x27, 0x97, 0xce, 0x11, 0x0b, 0xc9, + 0x37, 0x2d, 0x87, 0x5e, 0xc5, 0x37, 0x9a, 0x14, 0x6c, 0x4a, 0x28, 0x94, 0xa0, 0xee, 0x0c, 0xfb, + 0x9d, 0x83, 0x5d, 0xb3, 0xce, 0xd5, 0x5a, 0x46, 0xd0, 0x43, 0x28, 0x6d, 0x60, 0x68, 0x4a, 0x28, + 0xda, 0xaf, 0x29, 0xb0, 0x92, 0xd7, 0x8e, 0x63, 0xbc, 0x76, 0x5e, 0x81, 0x39, 0xbc, 0xa1, 0xdd, + 0xf3, 0x83, 0x43, 0x27, 0x92, 0x9f, 0xbe, 0x4a, 0x60, 0x13, 0x6f, 0x94, 0x37, 0xf1, 0x7f, 0x72, + 0x59, 0xec, 0x56, 0x52, 0x62, 0x14, 0x04, 0xf0, 0x8d, 0x4b, 0xd3, 0x01, 0x6a, 0xdd, 0x56, 0x73, + 0xc0, 0xb2, 0xfb, 0xbd, 0x0a, 0x25, 0x5a, 0xad, 0xcc, 0xec, 0xa5, 0xf3, 0x47, 0xdf, 0xa9, 0x73, + 0x24, 0x56, 0xab, 0xd0, 0x39, 0xec, 0x99, 0x88, 0xac, 0xdd, 0x85, 0xc5, 0x34, 0x06, 0x31, 0xd2, + 0x09, 0x5e, 0xe6, 0x6e, 0xa8, 0x9c, 0xd3, 0x86, 0xef, 0xb3, 0xf0, 0x0b, 0x1b, 0xe7, 0xbf, 0xf7, + 0xc9, 0x65, 0xa0, 0x3f, 0x19, 0x4d, 0x5e, 0x02, 0x18, 0xed, 0xeb, 0x05, 0x58, 0x49, 0x42, 0xc4, + 0x89, 0x35, 0xf4, 0xd7, 0x36, 0xfc, 0x90, 0x9e, 0x0a, 0x8f, 0x23, 0x94, 0xd0, 0xd1, 0x06, 0x4e, + 0x88, 0xca, 0xb1, 0x05, 0xab, 0xe3, 0xf0, 0xc9, 0x0b, 0x30, 0x8b, 0x91, 0x8a, 0x07, 0x4e, 
0xc7, + 0x95, 0x45, 0x6e, 0x5f, 0x00, 0xcd, 0xa4, 0x5c, 0xfb, 0xae, 0x02, 0x6b, 0x3c, 0x68, 0xc0, 0x8e, + 0xe3, 0xf5, 0xf1, 0x42, 0xb0, 0xe3, 0x7e, 0x3a, 0xe1, 0xb3, 0xb6, 0x52, 0x72, 0xec, 0xd9, 0x74, + 0x6c, 0x88, 0x91, 0xaf, 0x8d, 0x6f, 0x2d, 0x79, 0x1e, 0xa3, 0x6f, 0x73, 0x37, 0xd3, 0x12, 0x8b, + 0x6f, 0xd8, 0xa7, 0x00, 0x39, 0xbe, 0x21, 0x62, 0x68, 0xbf, 0x0c, 0x97, 0x26, 0x7f, 0x80, 0x7c, + 0x09, 0x16, 0x30, 0xf1, 0xf6, 0xee, 0x60, 0x3f, 0x70, 0xba, 0xae, 0x30, 0x12, 0x8b, 0xbb, 0x0c, + 0xb9, 0x8c, 0x05, 0x13, 0xe7, 0xf1, 0xf6, 0xf6, 0x31, 0xa5, 0x37, 0x27, 0x4a, 0x45, 0xe6, 0x90, + 0xb9, 0x69, 0x5f, 0x51, 0x80, 0x8c, 0xf2, 0x20, 0xaf, 0xc3, 0xfc, 0x6e, 0xbb, 0x62, 0x45, 0x4e, + 0x10, 0x6d, 0xfb, 0xc3, 0x80, 0x47, 0xf2, 0x66, 0xe1, 0xd7, 0xa2, 0x8e, 0xcd, 0xae, 0x7e, 0x0f, + 0xfc, 0x61, 0x60, 0xa6, 0xf0, 0x30, 0x65, 0xb3, 0xeb, 0x3e, 0xe8, 0x3a, 0x47, 0xe9, 0x94, 0xcd, + 0x1c, 0x96, 0x4a, 0xd9, 0xcc, 0x61, 0xda, 0x37, 0x15, 0xb8, 0x20, 0x9e, 0x7d, 0x75, 0x73, 0xea, + 0x52, 0xc1, 0x20, 0xa3, 0x81, 0x48, 0x9b, 0x33, 0x49, 0xd1, 0x5f, 0x16, 0x71, 0x78, 0xb1, 0x82, + 0xa8, 0xf1, 0x33, 0x5a, 0xf2, 0x79, 0x28, 0x59, 0x91, 0x3f, 0x38, 0x41, 0x20, 0x5e, 0x35, 0x1e, + 0xd1, 0xc8, 0x1f, 0x20, 0x0b, 0xa4, 0xd4, 0x5c, 0x58, 0x91, 0x2b, 0x27, 0x6a, 0x4c, 0x76, 0x60, + 0x86, 0x47, 0x71, 0xcf, 0x78, 0x93, 0x4e, 0x68, 0xd3, 0xc6, 0x92, 0x88, 0xf6, 0xcb, 0x13, 0xbc, + 0x98, 0x82, 0x87, 0xf6, 0x9b, 0x0a, 0xcc, 0x51, 0x55, 0x07, 0x2d, 0x06, 0x4f, 0x3a, 0xa5, 0xd3, + 0x2a, 0xb4, 0xf0, 0x33, 0x8f, 0xd9, 0x9f, 0x68, 0x37, 0x7e, 0x0d, 0x96, 0x32, 0x04, 0x44, 0xc3, + 0x38, 0x8f, 0x3d, 0xaf, 0xe3, 0xb0, 0x0c, 0xb0, 0xcc, 0x47, 0x3b, 0x05, 0xd3, 0x7e, 0x5d, 0x81, + 0x95, 0xe6, 0x83, 0xc8, 0x61, 0x1e, 0x1a, 0xe6, 0xb0, 0x27, 0xd6, 0x3b, 0x55, 0xdf, 0xc4, 0xfb, + 0x41, 0x16, 0x52, 0x8e, 0xa9, 0x6f, 0x1c, 0x66, 0xc6, 0xa5, 0x64, 0x1b, 0xca, 0x7c, 0x7f, 0x09, + 0x79, 0xbe, 0x92, 0x4b, 0x92, 0x39, 0x23, 0x61, 0xcc, 0x91, 0x68, 0x4b, 0x50, 0x84, 0x71, 0x1a, + 0x33, 0xa6, 0xd6, 0xfe, 0x52, 
0x81, 0x73, 0x63, 0x68, 0xc8, 0xdb, 0x30, 0x85, 0xe1, 0x6e, 0xf8, + 0xe8, 0x5d, 0x1c, 0xf3, 0x89, 0xa8, 0x73, 0x70, 0xe7, 0x3a, 0xdb, 0x88, 0x0e, 0xe9, 0x0f, 0x93, + 0x51, 0x91, 0xf7, 0x61, 0x56, 0xef, 0x76, 0xf9, 0xc1, 0xae, 0x90, 0x3a, 0xd8, 0x8d, 0xf9, 0xe2, + 0x4b, 0x31, 0x3e, 0x3b, 0xd8, 0xb1, 0xc0, 0x0b, 0xdd, 0xae, 0xcd, 0x43, 0xf9, 0x24, 0xfc, 0xd6, + 0xde, 0x82, 0xc5, 0x34, 0xf2, 0x63, 0x45, 0x1f, 0xf9, 0x58, 0x01, 0x35, 0x5d, 0x87, 0x9f, 0x4e, + 0x9c, 0xe2, 0xbc, 0x61, 0x3e, 0x66, 0x52, 0xfd, 0xdd, 0x02, 0x9c, 0xc9, 0xed, 0x61, 0xf2, 0x22, + 0x4c, 0xeb, 0x83, 0x41, 0xad, 0xca, 0x67, 0x15, 0xd7, 0x90, 0xf0, 0x66, 0x23, 0x75, 0xee, 0x65, + 0x48, 0xe4, 0x55, 0x28, 0x33, 0x47, 0xa0, 0xaa, 0x10, 0x38, 0x18, 0x78, 0x95, 0x7b, 0x29, 0xa5, + 0x33, 0x87, 0x08, 0x44, 0xb2, 0x09, 0x8b, 0x3c, 0x64, 0x29, 0x7a, 0x85, 0xc5, 0x09, 0xf8, 0xd0, + 0x91, 0x4e, 0x5c, 0x97, 0x30, 0x7f, 0xb2, 0x94, 0xec, 0xcc, 0x50, 0x91, 0x3a, 0xa8, 0xc8, 0x53, + 0xe6, 0xc4, 0xd2, 0x97, 0x48, 0x8e, 0x98, 0x63, 0x78, 0x8d, 0x50, 0xc6, 0xc3, 0xc5, 0x5e, 0xd2, + 0x1c, 0xba, 0xfd, 0xe8, 0xa7, 0x37, 0x5c, 0xc9, 0x37, 0x4e, 0x34, 0x5c, 0xbf, 0x5b, 0x62, 0x8b, + 0x39, 0x4b, 0x46, 0x35, 0x1a, 0x29, 0xdf, 0x16, 0x6a, 0x34, 0xf4, 0xc4, 0xc6, 0x83, 0x72, 0x56, + 0x61, 0xa6, 0xcd, 0x73, 0x29, 0xb1, 0x95, 0xf1, 0x54, 0x6e, 0x15, 0x18, 0xce, 0x9d, 0xeb, 0x4c, + 0x7d, 0xe1, 0xb9, 0x95, 0x4c, 0x41, 0x4a, 0xee, 0xc0, 0x5c, 0xa5, 0xe7, 0x3a, 0xfd, 0xe1, 0xa0, + 0x7d, 0x32, 0xff, 0x83, 0x55, 0xde, 0x96, 0xf9, 0x0e, 0x23, 0x43, 0xbf, 0x05, 0x94, 0xe4, 0x32, + 0x23, 0xd2, 0x8e, 0x43, 0x71, 0x94, 0xd0, 0x56, 0xfa, 0xca, 0x84, 0xfe, 0xc9, 0x02, 0x91, 0x2e, + 0x1d, 0x67, 0x86, 0xc7, 0xea, 0xb0, 0x61, 0xb1, 0xee, 0x84, 0x51, 0x3b, 0x70, 0xfa, 0x21, 0x26, + 0x6e, 0x38, 0x41, 0x10, 0xea, 0x0b, 0xbc, 0xc2, 0xcc, 0x86, 0x1a, 0xc5, 0xa4, 0xcc, 0x86, 0x9a, + 0x66, 0x47, 0xf5, 0xa5, 0x4d, 0xaf, 0xef, 0xf4, 0xbc, 0x2f, 0x8b, 0x88, 0x45, 0x4c, 0x5f, 0xda, + 0x13, 0x40, 0x33, 0x29, 0xd7, 0xbe, 0x38, 0x32, 0x6e, 0xac, 0x96, 
0x73, 0x30, 0xc3, 0xe3, 0xd9, + 0xb1, 0xf8, 0x6e, 0x2d, 0xa3, 0x51, 0xad, 0x35, 0xb6, 0x54, 0x85, 0x2c, 0x02, 0xb4, 0xcc, 0x66, + 0xc5, 0xb0, 0x2c, 0xfa, 0xbb, 0x40, 0x7f, 0xf3, 0xe0, 0x6f, 0x9b, 0xbb, 0x75, 0xb5, 0x28, 0xc5, + 0x7f, 0x2b, 0x69, 0xdf, 0x51, 0xe0, 0x6c, 0xfe, 0x50, 0x92, 0x36, 0x77, 0xd4, 0x67, 0x9e, 0x28, + 0xaf, 0x4f, 0x1c, 0xf7, 0x5c, 0x70, 0x36, 0x92, 0x60, 0xc4, 0x22, 0xd4, 0x15, 0xc4, 0x05, 0x27, + 0x0b, 0x79, 0xe3, 0x75, 0xcd, 0x82, 0xd7, 0xd5, 0x2a, 0xb0, 0x3a, 0x8e, 0x47, 0xba, 0xa9, 0x4b, + 0x30, 0xa7, 0xb7, 0x5a, 0xf5, 0x5a, 0x45, 0x6f, 0xd7, 0x9a, 0x0d, 0x55, 0x21, 0xb3, 0x30, 0xb5, + 0x65, 0x36, 0x77, 0x5b, 0x6a, 0x41, 0xfb, 0x13, 0x05, 0x16, 0x6a, 0x89, 0xa7, 0xec, 0x93, 0x2e, + 0xbe, 0xcf, 0xa6, 0x16, 0xdf, 0x6a, 0x1c, 0x2b, 0x33, 0xfe, 0xc0, 0x04, 0x0d, 0x72, 0x23, 0x0e, + 0x68, 0x54, 0x4c, 0x79, 0x94, 0xc8, 0xd4, 0x22, 0x54, 0x4c, 0x9c, 0x33, 0x30, 0x1d, 0xf0, 0x48, + 0x5a, 0xbd, 0xff, 0xac, 0x08, 0xcb, 0x23, 0xdf, 0x25, 0x16, 0xcc, 0xe8, 0x77, 0xad, 0x66, 0xad, + 0x5a, 0xe1, 0xad, 0xbb, 0x9c, 0x78, 0x57, 0x62, 0x0a, 0xec, 0x91, 0x9a, 0xb2, 0x18, 0x55, 0x0f, + 0x43, 0xdb, 0xf7, 0xba, 0x9d, 0x94, 0x7b, 0xaf, 0xe0, 0x84, 0xbb, 0xe1, 0x97, 0x87, 0x01, 0x7a, + 0x2c, 0xf3, 0x96, 0xc7, 0x4e, 0x9b, 0x02, 0x3e, 0xca, 0x18, 0x7d, 0x78, 0x1d, 0x5a, 0x3e, 0xca, + 0x3a, 0xe1, 0x47, 0x1a, 0x30, 0xbd, 0xe5, 0x45, 0xdb, 0xc3, 0xfb, 0xbc, 0x57, 0x2e, 0x25, 0x09, + 0x91, 0xb7, 0x87, 0xf7, 0x47, 0xd9, 0xa2, 0xc5, 0x94, 0xc5, 0xa7, 0x48, 0xb1, 0xe4, 0x5c, 0xc8, + 0x6d, 0x98, 0xd2, 0xef, 0x5a, 0xa6, 0xce, 0x57, 0xa8, 0xe4, 0xbf, 0x6a, 0xea, 0x63, 0xb8, 0xd1, + 0xd6, 0x07, 0x4e, 0x8a, 0x1b, 0xe3, 0x91, 0x8d, 0xd1, 0x53, 0x7a, 0xac, 0x18, 0x3d, 0x1b, 0x0b, + 0x30, 0xc7, 0x0f, 0x75, 0x78, 0x5e, 0x7a, 0x04, 0xa7, 0x73, 0x86, 0x9a, 0x38, 0xe8, 0x25, 0x8f, + 0xb7, 0xee, 0x7a, 0xff, 0xe8, 0xe1, 0x81, 0x1b, 0xb8, 0xa3, 0x63, 0x97, 0xae, 0xbb, 0x98, 0x25, + 0xb9, 0xb5, 0x37, 0x47, 0xd8, 0x69, 0x7f, 0xac, 0xc0, 0xea, 0xb8, 0x09, 0x40, 0x4f, 0xa8, 0xe9, + 0x28, 
0x80, 0x67, 0xe3, 0xc4, 0x9f, 0x69, 0xef, 0x76, 0x81, 0x46, 0xde, 0x85, 0x39, 0xe6, 0x01, + 0x69, 0xbd, 0xba, 0x6b, 0xd6, 0xf8, 0xca, 0x7d, 0xea, 0x87, 0x9f, 0x5c, 0x3e, 0xc7, 0x9d, 0x26, + 0xc3, 0x57, 0xed, 0x61, 0xe0, 0x25, 0xa4, 0xab, 0x8a, 0x29, 0x53, 0xd0, 0x03, 0x85, 0x33, 0xec, + 0x7a, 0xae, 0x38, 0x4e, 0x89, 0x48, 0x69, 0x1c, 0x26, 0x6f, 0xef, 0x02, 0xa6, 0x7d, 0x4d, 0x81, + 0xb5, 0xf1, 0xb3, 0x8d, 0xaa, 0x0c, 0x6d, 0xe6, 0x48, 0x2a, 0x62, 0x95, 0xa1, 0xca, 0x10, 0x7b, + 0x9b, 0xca, 0x3c, 0x05, 0x22, 0x25, 0xe2, 0x86, 0x3f, 0x61, 0x2f, 0x42, 0xa2, 0xd8, 0x2e, 0x28, + 0x13, 0x09, 0x44, 0xed, 0x1e, 0x9c, 0x1b, 0x33, 0x37, 0xc9, 0x3b, 0xb9, 0xe9, 0x84, 0x31, 0x16, + 0x85, 0x1c, 0x6c, 0x24, 0x95, 0x97, 0x5e, 0x82, 0x6b, 0xff, 0x81, 0xb9, 0x4e, 0xe7, 0x4c, 0x54, + 0xaa, 0xdd, 0x60, 0xfa, 0x5a, 0xbd, 0xdf, 0x39, 0xf0, 0x83, 0x64, 0xb0, 0x50, 0xbb, 0x89, 0x68, + 0x89, 0xed, 0x60, 0x51, 0x66, 0xd0, 0x32, 0x54, 0xc4, 0x87, 0xe5, 0x56, 0xe0, 0xef, 0x79, 0xec, + 0xe1, 0x32, 0x3b, 0x94, 0xf2, 0x35, 0xfd, 0x9c, 0x34, 0xdd, 0xe4, 0xe9, 0x33, 0x82, 0x1f, 0xe7, + 0x71, 0xa3, 0x60, 0xe6, 0x9b, 0xd3, 0xc1, 0x02, 0x73, 0x94, 0xb7, 0xf6, 0xbd, 0x02, 0x5c, 0x3d, + 0x96, 0xe3, 0x49, 0xb3, 0xf0, 0xbe, 0x0c, 0xc0, 0x69, 0x69, 0x0f, 0x48, 0x26, 0x27, 0x51, 0x19, + 0x27, 0xe8, 0x9b, 0x12, 0x0a, 0x79, 0x00, 0x4f, 0x89, 0x5f, 0x9d, 0x8e, 0x3b, 0x88, 0x42, 0x5a, + 0x0f, 0x1e, 0xbc, 0x3c, 0x8e, 0xc2, 0x56, 0xde, 0x78, 0xf6, 0x87, 0x9f, 0x5c, 0xbe, 0x1a, 0xf3, + 0x60, 0x98, 0xec, 0x81, 0x87, 0x88, 0x83, 0x8e, 0x86, 0xaf, 0xc9, 0xbc, 0xc8, 0xb5, 0x64, 0x25, + 0x95, 0x12, 0x13, 0xb6, 0x58, 0x49, 0xc9, 0xfa, 0xd9, 0x06, 0xc2, 0x19, 0x51, 0xb2, 0x4d, 0xf9, + 0x2e, 0x9b, 0x49, 0x4d, 0x51, 0x13, 0x66, 0x48, 0x63, 0xe5, 0x66, 0x0e, 0x8d, 0xf6, 0x3b, 0x6c, + 0x61, 0xe7, 0x4a, 0x07, 0xf2, 0x10, 0x96, 0xa8, 0x96, 0x21, 0x75, 0x36, 0x97, 0x2b, 0x37, 0x8e, + 0x1f, 0xe8, 0x5a, 0xc4, 0x83, 0xf5, 0x58, 0xc3, 0xc3, 0x43, 0x27, 0x38, 0xda, 0x38, 0x2f, 0xd2, + 0xc9, 0xa2, 0x36, 0x23, 0x8f, 0xbd, 0x99, 
0xfd, 0x8a, 0xf6, 0x83, 0x02, 0xbc, 0xf0, 0x18, 0xbc, + 0x49, 0x0b, 0x66, 0xf1, 0x3c, 0x8f, 0x9a, 0xe0, 0xf1, 0xf6, 0x80, 0xb3, 0x7c, 0x6f, 0xe4, 0xc1, + 0x7a, 0x62, 0x3d, 0x30, 0x61, 0x42, 0x6e, 0xd1, 0xe9, 0xd4, 0x45, 0x7e, 0xc7, 0xdb, 0x06, 0x56, + 0x84, 0x99, 0xcc, 0xed, 0x77, 0x13, 0x6e, 0x82, 0x81, 0x14, 0x87, 0xb0, 0x38, 0x36, 0x0e, 0xe1, + 0x6b, 0x30, 0x6f, 0x48, 0x0e, 0xb6, 0x7c, 0xf8, 0xf1, 0xd6, 0x20, 0xe5, 0x8d, 0x6b, 0xa6, 0xd0, + 0xc8, 0xe7, 0x60, 0x91, 0x79, 0x0f, 0xf0, 0xde, 0x61, 0xbe, 0x6d, 0x53, 0x3c, 0xbb, 0x0a, 0x96, + 0x88, 0xae, 0x0e, 0xcd, 0x0c, 0x2a, 0x5d, 0x58, 0x67, 0xa9, 0x56, 0xd2, 0x73, 0xc3, 0x50, 0x1f, + 0x46, 0x07, 0x74, 0xd7, 0x61, 0xe7, 0x74, 0xf2, 0x06, 0x4c, 0x1f, 0x3c, 0xde, 0xed, 0x1c, 0x43, + 0x27, 0x04, 0x50, 0xd3, 0x17, 0xe1, 0x52, 0xe8, 0xff, 0xe4, 0x4d, 0x98, 0x42, 0x23, 0x33, 0x57, + 0xa8, 0x85, 0x21, 0x24, 0xff, 0xd3, 0x68, 0x82, 0x36, 0x19, 0x01, 0x5d, 0xad, 0x49, 0x9e, 0x5c, + 0xbe, 0x1f, 0x0b, 0xe3, 0x6b, 0x9c, 0x2a, 0xd7, 0x9c, 0x3d, 0xdc, 0x73, 0x78, 0xf2, 0xd9, 0x75, + 0x58, 0x16, 0xb2, 0x77, 0x20, 0x32, 0x95, 0x70, 0xbf, 0x9b, 0x25, 0x1e, 0xd2, 0x69, 0x20, 0xb2, + 0x95, 0x3c, 0x03, 0x8b, 0x61, 0x78, 0x60, 0xf3, 0x40, 0x82, 0x0f, 0x44, 0x12, 0x34, 0x73, 0x3e, + 0x0c, 0x0f, 0x58, 0x44, 0xc1, 0xdb, 0xee, 0x11, 0xc5, 0xc2, 0xb7, 0x0c, 0x09, 0x56, 0x99, 0x61, + 0x45, 0xbd, 0x30, 0xc6, 0xe2, 0x31, 0x30, 0x21, 0xc1, 0xd2, 0xfe, 0x6b, 0x01, 0x66, 0xef, 0xd2, + 0xc3, 0x2b, 0x9a, 0x64, 0x27, 0x9b, 0x78, 0x6f, 0xc0, 0x5c, 0xdd, 0x77, 0xf8, 0x15, 0x3d, 0x8f, + 0xd7, 0xc1, 0x1e, 0x3f, 0xf5, 0x7c, 0x47, 0xdc, 0xf6, 0x87, 0xa6, 0x8c, 0x74, 0x4c, 0x10, 0xc8, + 0x5b, 0x30, 0xcd, 0x56, 0x38, 0xbf, 0x6d, 0x10, 0xe6, 0x8b, 0xb8, 0x46, 0x2f, 0xb1, 0x62, 0xe9, + 0x0e, 0x99, 0x49, 0x09, 0xf9, 0x2c, 0xcd, 0x1f, 0x3a, 0x49, 0x06, 0xe8, 0xa9, 0x93, 0x19, 0xa0, + 0xa5, 0x8c, 0x13, 0xd3, 0x27, 0xc9, 0x38, 0xb1, 0x76, 0x13, 0xe6, 0xa4, 0xfa, 0x3c, 0x96, 0x35, + 0xe3, 0x57, 0x0a, 0xb0, 0x80, 0xad, 0x8a, 0xa5, 0xd6, 0x5f, 0x4f, 0x73, 0xfa, 
0x67, 0x53, 0xe6, + 0xf4, 0x55, 0x79, 0xbc, 0xb8, 0xd3, 0xcf, 0x78, 0x3b, 0xfa, 0x2d, 0x58, 0x1e, 0x41, 0x24, 0xaf, + 0xc1, 0x14, 0xad, 0xbe, 0x30, 0x3f, 0xaa, 0xd9, 0x19, 0x90, 0x64, 0x27, 0xa3, 0x0d, 0x0f, 0x4d, + 0x86, 0xad, 0xfd, 0x0f, 0x05, 0xe6, 0x79, 0x5a, 0xe6, 0xfe, 0x9e, 0x7f, 0x6c, 0x77, 0x5e, 0xcb, + 0x76, 0x27, 0x0b, 0x69, 0xcc, 0xbb, 0xf3, 0xaf, 0xba, 0x13, 0x6f, 0xa6, 0x3a, 0xf1, 0x5c, 0x9c, + 0xab, 0x44, 0x34, 0x67, 0x42, 0x1f, 0x7e, 0x0b, 0xb3, 0x77, 0xa5, 0x11, 0xc9, 0x2f, 0xc0, 0x6c, + 0xc3, 0x7d, 0x98, 0xb2, 0xe2, 0x5d, 0x1b, 0xc3, 0xf4, 0xa5, 0x18, 0x91, 0xad, 0x29, 0xf6, 0x00, + 0xd1, 0x7d, 0x68, 0x8f, 0xf8, 0x66, 0x24, 0x2c, 0xd7, 0xde, 0x82, 0xc5, 0x34, 0xd9, 0xe3, 0x4c, + 0x7d, 0x1e, 0xe1, 0x0c, 0xa3, 0x74, 0xff, 0x5a, 0x11, 0x20, 0x09, 0x0e, 0x45, 0x17, 0x60, 0xca, + 0x1d, 0x4c, 0x5c, 0x80, 0x22, 0x48, 0x9e, 0xe3, 0xc2, 0x4b, 0xec, 0x1a, 0xbf, 0xa8, 0x2b, 0x8c, + 0xcf, 0x25, 0xd3, 0x17, 0x01, 0xee, 0x98, 0x27, 0x75, 0xcf, 0x61, 0x6f, 0x8f, 0x8a, 0x1b, 0xcf, + 0x60, 0xea, 0xb0, 0x18, 0x9a, 0xca, 0xdb, 0x51, 0xae, 0x0e, 0x79, 0xca, 0x42, 0x0c, 0x07, 0x54, + 0xa5, 0x08, 0x23, 0x01, 0xd7, 0x4a, 0x8f, 0x17, 0x70, 0xad, 0x05, 0xb3, 0x5e, 0xff, 0x23, 0xb7, + 0x1f, 0xf9, 0xc1, 0x11, 0xde, 0x4e, 0x26, 0xd7, 0x1e, 0xb4, 0x0b, 0x6a, 0xa2, 0x8c, 0x8d, 0x03, + 0x6a, 0x9a, 0x31, 0xbe, 0x3c, 0x0c, 0x31, 0x30, 0xf6, 0xdc, 0x99, 0x52, 0xa7, 0x59, 0x9c, 0xa6, + 0x5b, 0xa5, 0x72, 0x59, 0x9d, 0xbd, 0x55, 0x2a, 0xcf, 0xaa, 0x60, 0x4a, 0xce, 0x06, 0xb1, 0x33, + 0x81, 0x74, 0xf3, 0x9f, 0xbe, 0xd5, 0xd7, 0x7e, 0x54, 0x00, 0x32, 0x5a, 0x0d, 0xf2, 0x59, 0x98, + 0x63, 0x02, 0xd6, 0x0e, 0xc2, 0x0f, 0xf9, 0x03, 0x4c, 0xf6, 0x6a, 0x5a, 0x02, 0xcb, 0xb1, 0xce, + 0x19, 0xd8, 0x0c, 0x3f, 0xec, 0x91, 0x2f, 0xc1, 0x69, 0xec, 0xde, 0x81, 0x1b, 0x78, 0x7e, 0xd7, + 0xc6, 0x4c, 0x56, 0x4e, 0x0f, 0xc7, 0xaa, 0xb8, 0xf1, 0xe2, 0x0f, 0x3f, 0xb9, 0xfc, 0x54, 0x4e, + 0xf1, 0x98, 0x61, 0xc0, 0xf0, 0x4a, 0x2d, 0xc4, 0x6c, 0x31, 0x44, 0xd2, 0x06, 0x55, 0xa6, 0xdf, + 0x1b, 0xf6, 0x7a, 
0x7c, 0x64, 0xd7, 0xe9, 0xd1, 0x20, 0x5b, 0x36, 0x86, 0xf1, 0x62, 0xc2, 0x78, + 0x73, 0xd8, 0xeb, 0x91, 0x37, 0x00, 0xfc, 0xbe, 0x7d, 0xe8, 0x85, 0x21, 0xbb, 0xf3, 0x8e, 0x1f, + 0xe5, 0x26, 0x50, 0x79, 0x30, 0xfc, 0xfe, 0x0e, 0x03, 0x92, 0xbf, 0x05, 0x18, 0x22, 0x15, 0x63, + 0x07, 0x73, 0x6d, 0x86, 0x9d, 0x16, 0x04, 0x30, 0x1d, 0x1d, 0x6f, 0xdf, 0xb5, 0xbc, 0x2f, 0x8b, + 0xb7, 0xcb, 0x5f, 0x80, 0x65, 0xae, 0x19, 0xdd, 0xf5, 0xa2, 0x03, 0x6e, 0x71, 0x79, 0x12, 0x73, + 0x8d, 0x64, 0x2e, 0xf9, 0xa3, 0x29, 0x00, 0xfd, 0xae, 0x25, 0xc2, 0xf2, 0x3f, 0x0f, 0x53, 0x6d, + 0xca, 0x86, 0xdb, 0xa3, 0x51, 0xe1, 0x42, 0xbe, 0xf2, 0x6d, 0x1e, 0x62, 0xd0, 0xd5, 0x68, 0xe2, + 0x33, 0x43, 0x61, 0x8b, 0xc6, 0xd5, 0xc8, 0x5e, 0x1e, 0xa6, 0xf2, 0xa8, 0x71, 0x2c, 0x52, 0x07, + 0x48, 0x02, 0xe5, 0x73, 0xab, 0xc6, 0x72, 0x12, 0x71, 0x9a, 0x17, 0xf0, 0xfc, 0xb0, 0xc9, 0x5b, + 0x72, 0x79, 0xfa, 0x24, 0x68, 0xe4, 0x36, 0x94, 0xda, 0x4e, 0x1c, 0x07, 0x6d, 0x4c, 0xfa, 0x80, + 0x2b, 0xb4, 0xf5, 0xa9, 0x14, 0x02, 0x8b, 0x91, 0xb3, 0x2f, 0xd7, 0x0e, 0x99, 0x10, 0x03, 0xa6, + 0x5b, 0x4e, 0xe0, 0x1c, 0x86, 0xe3, 0xd2, 0xce, 0xb0, 0x52, 0x91, 0x9d, 0x0e, 0x81, 0xb2, 0x4e, + 0xc1, 0x8a, 0xc9, 0x0d, 0x28, 0x5a, 0xd6, 0x0e, 0x7f, 0x1e, 0xb1, 0x90, 0x9c, 0x26, 0x2c, 0x6b, + 0x87, 0x29, 0xbd, 0x61, 0x78, 0x28, 0x91, 0x51, 0x64, 0xf2, 0x39, 0x98, 0x93, 0x0e, 0x29, 0x3c, + 0xdc, 0x34, 0xf6, 0x81, 0xf4, 0x90, 0x5d, 0x16, 0x1a, 0x12, 0x36, 0xa9, 0x83, 0x7a, 0x7b, 0x78, + 0xdf, 0xd5, 0x07, 0x03, 0x0c, 0xa7, 0xf4, 0x91, 0x1b, 0x30, 0x45, 0xae, 0x9c, 0x24, 0x76, 0xc3, + 0x57, 0xa3, 0x5d, 0x51, 0x2a, 0x9b, 0x43, 0xb2, 0x94, 0xa4, 0x05, 0xcb, 0x96, 0x1b, 0x0d, 0x07, + 0xcc, 0x9b, 0x71, 0x93, 0x1d, 0xa7, 0x59, 0x70, 0x6a, 0xcc, 0x81, 0x15, 0xd2, 0x42, 0xe1, 0x48, + 0xba, 0x37, 0x72, 0xa4, 0x1e, 0x25, 0x26, 0x5f, 0xca, 0x1c, 0xfc, 0x21, 0x6b, 0x7b, 0x92, 0x4b, + 0x45, 0x06, 0x8a, 0x93, 0xdb, 0x05, 0xfe, 0x33, 0xb3, 0x0b, 0xe4, 0x30, 0x21, 0x06, 0x2c, 0xca, + 0xe0, 0xd8, 0xfa, 0x81, 0x91, 0x0a, 0x52, 0xa1, 0x4f, 
0x53, 0xe6, 0x8c, 0x0c, 0x11, 0x79, 0x04, + 0xa7, 0x65, 0x88, 0xd3, 0xdb, 0xed, 0x7b, 0x51, 0x98, 0x49, 0x89, 0x9d, 0xa9, 0x02, 0xa2, 0x88, + 0xc6, 0x60, 0xc7, 0xf9, 0x29, 0x16, 0xf6, 0x90, 0x22, 0x48, 0x1f, 0xcd, 0xfb, 0x84, 0xf6, 0xcb, + 0x18, 0x8a, 0x61, 0x1c, 0x5f, 0x96, 0x38, 0x11, 0x9f, 0xf2, 0xcb, 0x57, 0x47, 0xfc, 0xa1, 0x7f, + 0x3a, 0x71, 0x22, 0x82, 0x28, 0x81, 0xc1, 0xde, 0xfe, 0xcb, 0xcb, 0x95, 0x87, 0x03, 0x90, 0x09, + 0x38, 0x96, 0xe6, 0xca, 0xcb, 0x55, 0x3e, 0xc3, 0x2b, 0x93, 0xce, 0xf0, 0x2f, 0xe7, 0x24, 0xbf, + 0x40, 0x4b, 0x84, 0x94, 0xfc, 0x22, 0x95, 0xf2, 0xe2, 0x7f, 0x4d, 0x49, 0xf9, 0x97, 0xf8, 0x3a, + 0x7a, 0x1b, 0xe0, 0x96, 0xef, 0xf5, 0x77, 0xdc, 0xe8, 0xc0, 0xef, 0x4a, 0x03, 0x37, 0xf7, 0x81, + 0xef, 0xf5, 0xed, 0x43, 0x04, 0xff, 0xe8, 0x93, 0xcb, 0x12, 0x92, 0x29, 0xfd, 0x4f, 0x3e, 0x03, + 0xb3, 0xf4, 0x57, 0x3b, 0xf1, 0xa7, 0x65, 0xd7, 0x8e, 0x48, 0xcd, 0x52, 0x25, 0x27, 0x08, 0xe4, + 0x26, 0x26, 0x07, 0xf7, 0x06, 0x91, 0x74, 0xf0, 0x10, 0x99, 0xc0, 0xbd, 0x41, 0x94, 0x8d, 0x2f, + 0x21, 0x21, 0x93, 0xed, 0xb8, 0xea, 0x6d, 0x9e, 0xaf, 0x8b, 0xe7, 0x20, 0xe7, 0x41, 0x2a, 0xb0, + 0xc8, 0x16, 0xb9, 0xbc, 0xe4, 0x20, 0x15, 0x19, 0x32, 0xac, 0x84, 0xb5, 0x5d, 0xe5, 0x76, 0xa7, + 0x29, 0xa9, 0x12, 0xe1, 0x41, 0x97, 0x5b, 0x91, 0x52, 0x95, 0x88, 0x91, 0xc9, 0x06, 0x2c, 0xb1, + 0x13, 0x5b, 0x2b, 0xf0, 0x1f, 0x1d, 0x61, 0x22, 0xea, 0xe9, 0x64, 0x5f, 0x1a, 0x50, 0x20, 0x9e, + 0x19, 0xe5, 0xcf, 0x67, 0x08, 0xc8, 0x26, 0x4c, 0xa1, 0x0d, 0x91, 0xbf, 0x1d, 0xbd, 0x20, 0x5b, + 0xb1, 0xb3, 0x32, 0x10, 0xf7, 0x04, 0xb4, 0x5f, 0xcb, 0x7b, 0x02, 0xa2, 0x92, 0x9f, 0x07, 0x30, + 0xfa, 0x81, 0xdf, 0xeb, 0x61, 0x7a, 0xba, 0x72, 0x2a, 0x96, 0x0d, 0xe7, 0x83, 0x5c, 0x12, 0x24, + 0x9e, 0x19, 0x05, 0x7f, 0xdb, 0x99, 0x24, 0x76, 0x12, 0x2f, 0xf2, 0x19, 0x98, 0xb6, 0x86, 0x7b, + 0x7b, 0xde, 0x23, 0x2e, 0x90, 0x58, 0x66, 0x36, 0x84, 0xc8, 0x82, 0x98, 0xe1, 0x90, 0xb7, 0x60, + 0x6e, 0x77, 0xd0, 0x75, 0x22, 0x17, 0xef, 0x1d, 0x79, 0xba, 0x4f, 0x94, 0x2b, 0x43, 0x04, 
0xb3, + 0xa7, 0x0b, 0xb2, 0x54, 0x95, 0xd0, 0x89, 0x07, 0xcb, 0xdb, 0xed, 0x76, 0x0b, 0xbb, 0x47, 0x3c, + 0x80, 0xe7, 0xa1, 0x62, 0xc4, 0x09, 0x66, 0xa4, 0x7c, 0xe3, 0x2a, 0xd5, 0x58, 0x0e, 0xa2, 0x68, + 0x60, 0xb3, 0x2e, 0x17, 0xb1, 0xb7, 0x64, 0x01, 0x39, 0x42, 0xa5, 0xfd, 0xa1, 0x92, 0xf3, 0x2d, + 0xf2, 0x3a, 0xcc, 0xc6, 0x40, 0x39, 0x93, 0x6d, 0xc2, 0x5e, 0x56, 0x32, 0x62, 0x54, 0x3a, 0x8b, + 0xe8, 0x0f, 0x8b, 0x11, 0x4a, 0x99, 0x67, 0x28, 0x61, 0x38, 0x42, 0x29, 0x21, 0xd3, 0x03, 0x64, + 0xc3, 0x67, 0x74, 0x92, 0xe5, 0xb9, 0xef, 0x8f, 0x10, 0x09, 0x34, 0xad, 0x06, 0xd3, 0x6c, 0x6b, + 0xc3, 0xe4, 0x9b, 0x3c, 0x45, 0xb9, 0x94, 0xba, 0x91, 0x25, 0xdf, 0xe4, 0xf0, 0xd1, 0xe4, 0x9b, + 0x12, 0x81, 0x76, 0x1b, 0x56, 0xf2, 0xa6, 0x5a, 0xca, 0x0e, 0xad, 0x9c, 0xd4, 0x0e, 0xfd, 0xa7, + 0x45, 0x98, 0x47, 0x6e, 0x42, 0x54, 0xea, 0xb0, 0x60, 0x0d, 0xef, 0xc7, 0x89, 0x26, 0x84, 0x6e, + 0x83, 0xf5, 0x0b, 0xe5, 0x02, 0xd9, 0x71, 0x28, 0x45, 0x41, 0x77, 0x13, 0xa1, 0x57, 0x6d, 0x89, + 0x97, 0xaf, 0x71, 0xde, 0x4b, 0xf1, 0x20, 0x98, 0xbf, 0x87, 0x91, 0x77, 0x93, 0x34, 0x51, 0xa2, + 0x5d, 0x15, 0x1f, 0x47, 0xbb, 0x2a, 0x9d, 0x48, 0xbb, 0x7a, 0x1f, 0xe6, 0xc5, 0xd7, 0x50, 0x2f, + 0x9a, 0x7a, 0x32, 0xbd, 0x28, 0xc5, 0x8c, 0xd4, 0x63, 0xfd, 0x68, 0x7a, 0xa2, 0x7e, 0x84, 0xde, + 0x58, 0x42, 0xee, 0x0d, 0x10, 0x96, 0xa3, 0x26, 0x3d, 0x89, 0xca, 0xa3, 0xfd, 0x79, 0x11, 0x60, + 0xab, 0xd2, 0xfa, 0x09, 0x14, 0xd6, 0xd7, 0x60, 0xb6, 0xee, 0x0b, 0x2f, 0x1e, 0xc9, 0x7d, 0xa2, + 0x27, 0x80, 0xf2, 0xa2, 0x8a, 0x31, 0x63, 0x45, 0xb3, 0xf8, 0x69, 0x28, 0x9a, 0x37, 0xd1, 0x50, + 0xff, 0x81, 0xdb, 0x89, 0x6a, 0x55, 0x31, 0xb2, 0xd8, 0x72, 0x11, 0x2d, 0x3a, 0xed, 0xc5, 0x21, + 0x21, 0xd3, 0xcd, 0x86, 0x3b, 0x08, 0x8b, 0x00, 0x5a, 0xdc, 0x34, 0x8e, 0x9b, 0x8d, 0x88, 0x42, + 0x26, 0x62, 0x72, 0xc9, 0xd2, 0x3e, 0x43, 0xf6, 0x29, 0x8f, 0xe6, 0x7b, 0xf1, 0x4b, 0x8f, 0x99, + 0x49, 0x3d, 0xa4, 0x8d, 0xf4, 0xd0, 0xd8, 0xf7, 0x1d, 0xda, 0x77, 0x14, 0x39, 0x63, 0xf1, 0x4f, + 0x30, 0xd4, 0x6f, 0x02, 0xc4, 
0x6e, 0x94, 0x62, 0xac, 0xe3, 0xd8, 0x49, 0x0c, 0x2a, 0xf7, 0x72, + 0x82, 0x2b, 0xb5, 0xa6, 0xf8, 0x69, 0xb5, 0xa6, 0x0d, 0x73, 0xcd, 0x07, 0x91, 0x93, 0xf8, 0xdd, + 0x82, 0x15, 0x1f, 0x2a, 0x51, 0xac, 0x15, 0xf1, 0x9e, 0xe5, 0x8c, 0x74, 0x24, 0x1d, 0x73, 0x1a, + 0x95, 0x08, 0xb5, 0x1f, 0x2b, 0xb0, 0x24, 0x87, 0x7f, 0x3c, 0xea, 0x77, 0xc8, 0x3b, 0x2c, 0x1f, + 0x9a, 0x92, 0xb2, 0x1e, 0x48, 0x48, 0x54, 0x5e, 0x1f, 0xf5, 0x3b, 0xec, 0x2c, 0xe2, 0x3c, 0x94, + 0x2b, 0x4b, 0x09, 0xc9, 0x7d, 0x98, 0x6f, 0xf9, 0xbd, 0x1e, 0x5d, 0x6e, 0xc1, 0x47, 0xfc, 0x2c, + 0x4e, 0x19, 0x65, 0xef, 0x08, 0x44, 0x85, 0x36, 0x9e, 0xe6, 0x26, 0xa7, 0x73, 0x03, 0xba, 0x7d, + 0x7b, 0x9c, 0x2e, 0x61, 0xfb, 0x31, 0xc6, 0xc5, 0x90, 0x79, 0x26, 0xaa, 0x46, 0x3a, 0xf3, 0xae, + 0x5c, 0x4b, 0x5a, 0x8c, 0xf5, 0x9c, 0xa0, 0x6a, 0x68, 0x7f, 0x5b, 0x81, 0x2b, 0xa3, 0x4d, 0xab, + 0xf4, 0xfc, 0x61, 0xb7, 0x1d, 0x38, 0x5e, 0xaf, 0xee, 0xef, 0x87, 0x2c, 0x8f, 0xd4, 0x7e, 0x72, + 0xe5, 0xc8, 0xf3, 0x48, 0xed, 0x7b, 0xd9, 0x3c, 0x52, 0x18, 0x2e, 0xe7, 0x55, 0x28, 0x5b, 0xef, + 0x59, 0xef, 0x0d, 0x5d, 0x61, 0x96, 0x62, 0xf2, 0x21, 0xfc, 0x30, 0xb4, 0x3f, 0xa4, 0x40, 0x79, + 0xbb, 0x11, 0x88, 0xda, 0x21, 0x5c, 0x1a, 0xad, 0x86, 0x71, 0xdb, 0xd2, 0x87, 0x5d, 0x2f, 0xc2, + 0x4a, 0x08, 0x01, 0xa2, 0x7c, 0x0a, 0x02, 0x44, 0xfb, 0xa7, 0x45, 0x20, 0xa3, 0xdf, 0x93, 0xb7, + 0x0b, 0xe5, 0xff, 0xc2, 0x61, 0x3c, 0x23, 0xd1, 0x4b, 0x8f, 0x75, 0x88, 0xfd, 0x10, 0xd4, 0x0e, + 0x1d, 0x36, 0x3b, 0xa2, 0xe3, 0x66, 0xf7, 0xfc, 0x78, 0xf7, 0xfa, 0xb9, 0xb1, 0x53, 0x38, 0x3d, + 0xce, 0x4c, 0x04, 0x66, 0x99, 0xc8, 0x1b, 0x71, 0x27, 0x3d, 0x2f, 0x3c, 0x58, 0x74, 0x1f, 0x84, + 0xb6, 0x43, 0xc7, 0x88, 0x7d, 0x70, 0x3a, 0xe5, 0x07, 0x3d, 0x79, 0x44, 0x99, 0x60, 0x4c, 0x33, + 0x90, 0xb7, 0x4e, 0xf7, 0x41, 0x18, 0xe3, 0x6a, 0xdf, 0x50, 0x60, 0x25, 0x6f, 0x72, 0x53, 0x9d, + 0x42, 0x56, 0x32, 0xd2, 0x27, 0x54, 0x59, 0x2f, 0xc9, 0x9c, 0x50, 0xd3, 0x44, 0xd9, 0xae, 0x2f, + 0x3c, 0xd6, 0x66, 0xfa, 0x83, 0x22, 0xcc, 0x33, 0xef, 0xa1, 0x6d, 
0xd7, 0xe9, 0x45, 0x07, 0x74, + 0x1e, 0x89, 0x1c, 0xf9, 0xd2, 0x1b, 0x93, 0x09, 0xc9, 0xf1, 0x6f, 0x40, 0xb9, 0x45, 0xc5, 0x42, + 0xc7, 0xef, 0xc9, 0x37, 0x0f, 0x03, 0x0e, 0x93, 0x97, 0x8c, 0xc0, 0x43, 0x5d, 0x5e, 0xbe, 0x39, + 0x64, 0xba, 0x3c, 0x42, 0x52, 0xba, 0x3c, 0xbb, 0x43, 0x7c, 0x04, 0xa7, 0x13, 0x87, 0xb0, 0xf8, + 0x76, 0xf2, 0x04, 0xcf, 0x69, 0xd7, 0xf9, 0xd5, 0xec, 0xa5, 0xc4, 0xc7, 0x0c, 0xaf, 0x31, 0xb1, + 0x34, 0x93, 0xf6, 0x2d, 0xef, 0x13, 0xe4, 0x36, 0xa8, 0x09, 0x98, 0xe7, 0xa3, 0x63, 0x47, 0x33, + 0x0c, 0x99, 0x29, 0xb1, 0x1d, 0x49, 0x4d, 0x37, 0x42, 0x48, 0xb7, 0xef, 0x04, 0x66, 0x24, 0x2f, + 0xe7, 0x85, 0xa7, 0x42, 0xcc, 0x0b, 0x2f, 0x46, 0xe5, 0xed, 0x3b, 0x43, 0x46, 0xc7, 0x48, 0xdc, + 0xa7, 0xce, 0x24, 0x63, 0xc4, 0x6f, 0x52, 0xe5, 0x31, 0xe2, 0x58, 0xeb, 0xbf, 0xa5, 0xc0, 0x52, + 0x4d, 0xdf, 0xe1, 0x39, 0xd6, 0x59, 0xaf, 0x5e, 0x85, 0xa7, 0x6a, 0xfa, 0x8e, 0xdd, 0x6a, 0xd6, + 0x6b, 0x95, 0x7b, 0x76, 0x6e, 0x26, 0xd4, 0xa7, 0xe0, 0xfc, 0x28, 0x4a, 0xe2, 0x3b, 0x77, 0x11, + 0x56, 0x47, 0x8b, 0x45, 0xb6, 0xd4, 0x7c, 0x62, 0x91, 0x58, 0xb5, 0xb8, 0xfe, 0x2e, 0x2c, 0x89, + 0xcc, 0xa0, 0xed, 0xba, 0x85, 0x27, 0xbc, 0x25, 0x98, 0xbb, 0x63, 0x98, 0xb5, 0xcd, 0x7b, 0xf6, + 0xe6, 0x6e, 0xbd, 0xae, 0x9e, 0x22, 0x0b, 0x30, 0xcb, 0x01, 0x15, 0x5d, 0x55, 0xc8, 0x3c, 0x94, + 0x6b, 0x0d, 0xcb, 0xa8, 0xec, 0x9a, 0x86, 0x5a, 0x58, 0xff, 0x87, 0x0a, 0x2c, 0xb0, 0x33, 0x5b, + 0xc0, 0x5b, 0x74, 0x09, 0xd6, 0x76, 0x5b, 0x55, 0xbd, 0x6d, 0x98, 0xf9, 0xcd, 0x39, 0x03, 0xcb, + 0x99, 0xf2, 0xe6, 0x6d, 0x55, 0x21, 0x17, 0xe0, 0x5c, 0x06, 0x5c, 0xad, 0x59, 0xfa, 0x06, 0x6b, + 0xc5, 0x79, 0x38, 0x93, 0x29, 0x6c, 0xd5, 0x1a, 0x0d, 0xa3, 0xaa, 0x16, 0x69, 0x03, 0x47, 0x3e, + 0x67, 0x1a, 0x7a, 0x95, 0x92, 0xaa, 0xa5, 0xf5, 0x77, 0x61, 0xb1, 0x15, 0x3f, 0x14, 0x44, 0xd7, + 0xbc, 0x19, 0x28, 0x9a, 0xfa, 0x5d, 0xf5, 0x14, 0x01, 0x98, 0x6e, 0xdd, 0xae, 0x58, 0xd7, 0xaf, + 0xab, 0x0a, 0x99, 0x83, 0x99, 0xad, 0x4a, 0xcb, 0xbe, 0xbd, 0x63, 0xa9, 0x05, 0xfa, 0x43, 0xbf, + 0x6b, 
0xe1, 0x8f, 0xe2, 0xfa, 0x2b, 0xe8, 0x90, 0xf2, 0xe8, 0xa8, 0xee, 0x85, 0x91, 0xdb, 0x77, + 0x03, 0xec, 0xa3, 0x79, 0x28, 0x5b, 0x2e, 0xd5, 0xc4, 0x22, 0x97, 0x75, 0xd0, 0xce, 0xb0, 0x17, + 0x79, 0x83, 0x9e, 0xfb, 0x48, 0x55, 0xd6, 0x6f, 0xc2, 0x92, 0xe9, 0x0f, 0xe9, 0x09, 0xd2, 0x8a, + 0x28, 0xc6, 0xfe, 0x11, 0xb6, 0xb9, 0xa1, 0xef, 0x6c, 0xd4, 0xb6, 0x76, 0x9b, 0xbb, 0x96, 0xbd, + 0xa3, 0xb7, 0x2b, 0xdb, 0xcc, 0x31, 0x70, 0xa7, 0x69, 0xb5, 0x6d, 0xd3, 0xa8, 0x18, 0x8d, 0xb6, + 0xaa, 0xac, 0x7f, 0x1d, 0xaf, 0x89, 0x3a, 0x7e, 0xbf, 0xbb, 0xe9, 0x74, 0x22, 0x3f, 0xc0, 0x0a, + 0x6b, 0x70, 0xc9, 0x32, 0x2a, 0xcd, 0x46, 0xd5, 0xde, 0xd4, 0x2b, 0xed, 0xa6, 0x99, 0x97, 0x2a, + 0x78, 0x0d, 0xce, 0xe6, 0xe0, 0x34, 0xdb, 0x2d, 0x55, 0x21, 0x97, 0xe1, 0x42, 0x4e, 0xd9, 0x5d, + 0x63, 0x43, 0xdf, 0x6d, 0x6f, 0x37, 0xd4, 0xc2, 0x18, 0x62, 0xcb, 0x6a, 0xaa, 0xc5, 0xf5, 0xbf, + 0xa3, 0xc0, 0xe2, 0x6e, 0xc8, 0x5f, 0x29, 0xef, 0xa2, 0x57, 0xc0, 0x15, 0xb8, 0xb8, 0x6b, 0x19, + 0xa6, 0xdd, 0x6e, 0xde, 0x36, 0x1a, 0xf6, 0xae, 0xa5, 0x6f, 0x65, 0x6b, 0x73, 0x19, 0x2e, 0x48, + 0x18, 0xa6, 0x51, 0x69, 0xde, 0x31, 0x4c, 0xbb, 0xa5, 0x5b, 0xd6, 0xdd, 0xa6, 0x59, 0x55, 0x15, + 0xfa, 0xc5, 0x1c, 0x84, 0x9d, 0x4d, 0x9d, 0xd5, 0x26, 0x55, 0xd6, 0x30, 0xee, 0xea, 0x75, 0x7b, + 0xa3, 0xd9, 0x56, 0x8b, 0xeb, 0x3b, 0xf4, 0x70, 0x85, 0x09, 0x3b, 0xd9, 0x5b, 0xb2, 0x32, 0x94, + 0x1a, 0xcd, 0x86, 0x91, 0x75, 0x27, 0x9d, 0x87, 0xb2, 0xde, 0x6a, 0x99, 0xcd, 0x3b, 0x38, 0x79, + 0x00, 0xa6, 0xab, 0x46, 0xa3, 0x86, 0xb3, 0x65, 0x1e, 0xca, 0x2d, 0xb3, 0xb9, 0xd3, 0x6c, 0x1b, + 0x55, 0xb5, 0xb4, 0xae, 0xc3, 0x32, 0xdb, 0x12, 0x38, 0x53, 0xbc, 0x4b, 0x5c, 0x80, 0xd9, 0xdd, + 0x46, 0xd5, 0xd8, 0xac, 0x35, 0xb0, 0x2d, 0x8b, 0x00, 0xd6, 0x76, 0xd3, 0x6c, 0xdb, 0x6d, 0xc3, + 0xdc, 0x61, 0x19, 0x98, 0xeb, 0xcd, 0xc6, 0x16, 0xfb, 0x59, 0x58, 0x37, 0x85, 0x1a, 0x20, 0xea, + 0xd5, 0xf1, 0x99, 0xfb, 0x67, 0xd5, 0xd8, 0xd4, 0x77, 0xeb, 0x6d, 0x3e, 0xca, 0xf7, 0x6c, 0xd3, + 0x78, 0x6f, 0xd7, 0xb0, 0xda, 0x96, 0xaa, 
0x10, 0x15, 0xe6, 0x1b, 0x86, 0x51, 0xb5, 0x6c, 0xd3, + 0xb8, 0x53, 0x33, 0xee, 0xaa, 0x05, 0x5a, 0x2d, 0xf6, 0x3f, 0xad, 0xe4, 0xfa, 0x37, 0x15, 0x20, + 0x2c, 0x5f, 0xea, 0xb6, 0x1f, 0x46, 0xb4, 0xf7, 0x71, 0xd2, 0x5d, 0x82, 0xb5, 0x6d, 0x3a, 0x5b, + 0xb0, 0x77, 0x76, 0x9a, 0xd5, 0x6c, 0xaf, 0x9f, 0x05, 0x92, 0x29, 0x6f, 0x6e, 0x6e, 0xe2, 0xca, + 0x3a, 0x9d, 0x81, 0x57, 0xcd, 0x66, 0x4b, 0x2d, 0xac, 0x15, 0xca, 0x0a, 0x39, 0x37, 0x52, 0x78, + 0xdb, 0x30, 0x5a, 0x6a, 0x91, 0x8e, 0x72, 0xa6, 0x40, 0xac, 0x7a, 0x46, 0x5e, 0x5a, 0xff, 0x9a, + 0x02, 0x67, 0x59, 0x35, 0x85, 0x08, 0x89, 0xab, 0x7a, 0x11, 0x56, 0x79, 0x16, 0xe8, 0xbc, 0x8a, + 0xae, 0x80, 0x9a, 0x2a, 0x65, 0xd5, 0x3c, 0x03, 0xcb, 0x29, 0x28, 0xd6, 0xa3, 0x40, 0x05, 0x64, + 0x0a, 0xbc, 0x61, 0x58, 0x6d, 0xdb, 0xd8, 0xdc, 0xa4, 0x43, 0x82, 0x15, 0x29, 0xae, 0x6b, 0xb0, + 0x5c, 0x71, 0x83, 0xc8, 0x78, 0x14, 0xb9, 0xfd, 0xd0, 0xf3, 0xfb, 0x58, 0x85, 0x05, 0x98, 0x35, + 0x7e, 0xbe, 0x6d, 0x34, 0xac, 0x5a, 0xb3, 0xa1, 0x9e, 0x5a, 0xbf, 0x98, 0xc1, 0x11, 0xa2, 0xc0, + 0xb2, 0xb6, 0xd5, 0x53, 0xeb, 0x0e, 0x2c, 0x88, 0xf7, 0xbb, 0x6c, 0x62, 0x5d, 0x82, 0x35, 0x31, + 0x5d, 0x51, 0xac, 0x64, 0x9b, 0xb0, 0x0a, 0x2b, 0xa3, 0xe5, 0x46, 0x5b, 0x55, 0xe8, 0x28, 0x64, + 0x4a, 0x28, 0xbc, 0xb0, 0xfe, 0x55, 0x05, 0x16, 0x62, 0xa7, 0x12, 0x9c, 0x68, 0x97, 0xe1, 0xc2, + 0xce, 0xa6, 0x6e, 0x57, 0x8d, 0x3b, 0xb5, 0x8a, 0x61, 0xdf, 0xae, 0x35, 0xaa, 0x99, 0x8f, 0x9c, + 0x87, 0x33, 0x39, 0x08, 0xf8, 0x95, 0x55, 0x58, 0xc9, 0x16, 0xb5, 0xe9, 0x6a, 0x2f, 0xd0, 0xae, + 0xcf, 0x96, 0xc4, 0x4b, 0xbd, 0xb8, 0x7e, 0x07, 0x16, 0x2d, 0x7d, 0xa7, 0xbe, 0xe9, 0x07, 0x1d, + 0x57, 0x1f, 0x46, 0x07, 0x7d, 0x2a, 0x77, 0x37, 0x9b, 0x66, 0xc5, 0xb0, 0x11, 0x25, 0x53, 0x83, + 0xd3, 0xb0, 0x24, 0x17, 0xde, 0x33, 0xe8, 0xf4, 0x25, 0xb0, 0x28, 0x03, 0x1b, 0x4d, 0xb5, 0xb0, + 0xfe, 0x45, 0x98, 0xe7, 0x9e, 0x66, 0xac, 0xff, 0xce, 0xc1, 0x69, 0xf9, 0x77, 0xcb, 0xed, 0x77, + 0xbd, 0xfe, 0xbe, 0x7a, 0x2a, 0x5b, 0x60, 0x0e, 0xfb, 0x7d, 0x5a, 0x80, 0x22, 
0x41, 0x2e, 0x68, + 0xbb, 0xc1, 0xa1, 0xd7, 0x77, 0x22, 0xb7, 0xab, 0x16, 0xd6, 0x5f, 0x82, 0x85, 0x54, 0x06, 0x60, + 0x3a, 0x70, 0xf5, 0x26, 0x97, 0xe1, 0x3b, 0x46, 0xb5, 0xb6, 0xbb, 0xa3, 0x4e, 0x51, 0x61, 0xb0, + 0x5d, 0xdb, 0xda, 0x56, 0x61, 0xfd, 0xb7, 0x15, 0x58, 0xa4, 0xeb, 0xd1, 0x0b, 0xdc, 0x9d, 0x4d, + 0x5d, 0x0c, 0x35, 0x9d, 0x66, 0x2c, 0xaf, 0xb8, 0x61, 0x59, 0xcc, 0x11, 0xfb, 0x22, 0xac, 0xf2, + 0x1f, 0xb6, 0xde, 0xa8, 0xda, 0xdb, 0xba, 0x59, 0xbd, 0xab, 0x9b, 0x74, 0xee, 0xdd, 0x53, 0x0b, + 0xb8, 0xa0, 0x24, 0x88, 0xdd, 0x6e, 0xee, 0x56, 0xb6, 0xd5, 0x22, 0x9d, 0xbf, 0x29, 0x78, 0xab, + 0xd6, 0x50, 0x4b, 0xb8, 0x3c, 0x47, 0xb0, 0x91, 0x2d, 0x2d, 0x9f, 0x5a, 0xff, 0xbe, 0x02, 0xe7, + 0x2c, 0x6f, 0xbf, 0xef, 0x44, 0xc3, 0xc0, 0xd5, 0x7b, 0xfb, 0x7e, 0xe0, 0x45, 0x07, 0x87, 0xd6, + 0xd0, 0x8b, 0x5c, 0xf2, 0x3c, 0x3c, 0x6b, 0xd5, 0xb6, 0x1a, 0x7a, 0x9b, 0x2e, 0x2f, 0xbd, 0xbe, + 0xd5, 0x34, 0x6b, 0xed, 0xed, 0x1d, 0xdb, 0xda, 0xad, 0x8d, 0xcc, 0xbc, 0x67, 0xe0, 0xca, 0x78, + 0xd4, 0xba, 0xb1, 0xa5, 0x57, 0xee, 0xa9, 0xca, 0x64, 0x86, 0x1b, 0x7a, 0x5d, 0x6f, 0x54, 0x8c, + 0xaa, 0x7d, 0xe7, 0xba, 0x5a, 0x20, 0xcf, 0xc2, 0xd5, 0xf1, 0xa8, 0x9b, 0xb5, 0x96, 0x45, 0xd1, + 0x8a, 0x93, 0xbf, 0xbb, 0x6d, 0xed, 0x50, 0xac, 0xd2, 0xfa, 0x37, 0x14, 0x58, 0x1d, 0x97, 0xd9, + 0x82, 0x5c, 0x03, 0xcd, 0x68, 0xb4, 0x4d, 0xbd, 0x56, 0xb5, 0x2b, 0xa6, 0x51, 0x35, 0x1a, 0xed, + 0x9a, 0x5e, 0xb7, 0x6c, 0xab, 0xb9, 0x4b, 0x67, 0x53, 0xe2, 0x2f, 0xff, 0x34, 0x5c, 0x9e, 0x80, + 0xd7, 0xac, 0x55, 0x2b, 0xaa, 0x42, 0xae, 0xc3, 0x8b, 0x13, 0x90, 0xac, 0x7b, 0x56, 0xdb, 0xd8, + 0x91, 0x4b, 0xd4, 0x02, 0x0a, 0xac, 0xfc, 0x48, 0xee, 0xb4, 0x75, 0x58, 0x32, 0xb9, 0x62, 0x57, + 0xe1, 0xa9, 0xb1, 0x58, 0xbc, 0x5a, 0x4f, 0xc3, 0xe5, 0xb1, 0x28, 0xac, 0x52, 0x6a, 0x61, 0xfd, + 0x7d, 0x58, 0x1b, 0x1f, 0x4d, 0x98, 0xee, 0x17, 0xe9, 0x21, 0x2f, 0x43, 0xa9, 0x4a, 0x77, 0xb9, + 0x54, 0x1e, 0x7c, 0x3a, 0x3b, 0x4d, 0xa3, 0xb6, 0xd3, 0xa2, 0x82, 0x90, 0x6f, 0x2e, 0xb8, 0x7b, + 0x7c, 0x45, 0x89, 
0x33, 0x75, 0x24, 0x3c, 0xb3, 0x4f, 0x2f, 0xcc, 0xdd, 0x46, 0x83, 0xed, 0x95, + 0x4b, 0x30, 0xd7, 0x6c, 0x6f, 0x1b, 0xa6, 0x6d, 0x98, 0x66, 0xd3, 0x54, 0x0b, 0x74, 0x77, 0xda, + 0x6d, 0xd0, 0xa5, 0xdd, 0x34, 0x6b, 0x5f, 0xc0, 0x4d, 0x73, 0x15, 0x56, 0xac, 0xba, 0x5e, 0xb9, + 0x6d, 0x37, 0x9a, 0x6d, 0xbb, 0xd6, 0xb0, 0x2b, 0xdb, 0x7a, 0xa3, 0x61, 0xd4, 0x55, 0xa0, 0x32, + 0xbb, 0x79, 0xbb, 0xad, 0xdb, 0x95, 0x66, 0x63, 0xb3, 0xb6, 0xc5, 0x59, 0xac, 0xe0, 0x2c, 0x18, + 0x17, 0xfc, 0x87, 0x7c, 0x06, 0x9e, 0x43, 0x9a, 0x56, 0x7d, 0x77, 0xab, 0xd6, 0xb0, 0xad, 0x7b, + 0x8d, 0x8a, 0xd0, 0xdc, 0x2a, 0xa3, 0x7b, 0xc5, 0x73, 0xf0, 0xcc, 0x44, 0x6c, 0xa1, 0xca, 0x2a, + 0x74, 0x76, 0x4d, 0xc4, 0xe4, 0xed, 0x5b, 0xff, 0xae, 0x02, 0x17, 0x26, 0x38, 0x07, 0x92, 0x17, + 0xe1, 0xf9, 0x6d, 0x43, 0xaf, 0xd6, 0x0d, 0xcb, 0x42, 0x09, 0x47, 0x07, 0x91, 0xbd, 0xdc, 0xc8, + 0xdd, 0x09, 0x9e, 0x87, 0x67, 0x27, 0xa3, 0x27, 0x6a, 0xc9, 0x73, 0xf0, 0xcc, 0x64, 0x54, 0xae, + 0xa6, 0x14, 0xc8, 0x3a, 0x5c, 0x9b, 0x8c, 0x19, 0xab, 0x37, 0xc5, 0xf5, 0xdf, 0x50, 0xe0, 0x6c, + 0xfe, 0x9d, 0x0e, 0xad, 0x5b, 0xad, 0x61, 0xb5, 0xf5, 0x7a, 0xdd, 0x6e, 0xe9, 0xa6, 0xbe, 0x63, + 0x1b, 0x0d, 0xb3, 0x59, 0xaf, 0xe7, 0xed, 0xc9, 0xcf, 0xc0, 0x95, 0xf1, 0xa8, 0x56, 0xc5, 0xac, + 0xb5, 0xe8, 0xb6, 0xa3, 0xc1, 0xa5, 0xf1, 0x58, 0x46, 0xad, 0x62, 0xa8, 0x85, 0x8d, 0xb7, 0xbf, + 0xfd, 0xe7, 0x97, 0x4e, 0x7d, 0xfb, 0xfb, 0x97, 0x94, 0x7f, 0xff, 0xfd, 0x4b, 0xca, 0x9f, 0x7d, + 0xff, 0x92, 0xf2, 0x85, 0x17, 0xd8, 0x73, 0x88, 0x97, 0x3a, 0xfe, 0xe1, 0xcb, 0xfb, 0x81, 0xf3, + 0x91, 0x17, 0xf1, 0x1b, 0xd6, 0x97, 0xc5, 0x95, 0xdc, 0xcb, 0xce, 0xc0, 0x7b, 0x19, 0x8f, 0xfc, + 0xf7, 0xa7, 0xf1, 0x04, 0xf9, 0xea, 0xff, 0x09, 0x00, 0x00, 0xff, 0xff, 0xc0, 0xfd, 0x9e, 0xbc, + 0x9f, 0x06, 0x02, 0x00, +} + +func (this *SSHKeyPair) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + that1, ok := that.(*SSHKeyPair) + if !ok { + that2, ok := that.(SSHKeyPair) + if ok { + that1 = &that2 + } else { + return 
false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if !bytes.Equal(this.PublicKey, that1.PublicKey) { + return false + } + if !bytes.Equal(this.PrivateKey, that1.PrivateKey) { + return false + } + if this.PrivateKeyType != that1.PrivateKeyType { + return false + } + if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { + return false + } + return true +} func (this *PluginSpecV1) Equal(that interface{}) bool { if that == nil { return this == nil @@ -26157,6 +28197,30 @@ func (this *PluginSpecV1_Github) Equal(that interface{}) bool { } return true } +func (this *PluginSpecV1_Intune) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginSpecV1_Intune) + if !ok { + that2, ok := that.(PluginSpecV1_Intune) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if !this.Intune.Equal(that1.Intune) { + return false + } + return true +} func (this *PluginGithubSettings) Equal(that interface{}) bool { if that == nil { return this == nil @@ -26479,6 +28543,39 @@ func (this *PluginJamfSettings) Equal(that interface{}) bool { } return true } +func (this *PluginIntuneSettings) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginIntuneSettings) + if !ok { + that2, ok := that.(PluginIntuneSettings) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if this.Tenant != that1.Tenant { + return false + } + if this.LoginEndpoint != that1.LoginEndpoint { + return false + } + if this.GraphEndpoint != that1.GraphEndpoint { + return false + } + if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { + return false + } + return true +} func (this *PluginOktaSettings) Equal(that interface{}) bool { if that == nil { return this == 
nil @@ -26624,6 +28721,9 @@ func (this *PluginOktaSyncSettings) Equal(that interface{}) bool { if this.DisableAssignDefaultRoles != that1.DisableAssignDefaultRoles { return false } + if this.TimeBetweenImports != that1.TimeBetweenImports { + return false + } if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { return false } @@ -26762,11 +28862,157 @@ func (this *PluginEntraIDSyncSettings) Equal(that interface{}) bool { if this.EntraAppId != that1.EntraAppId { return false } + if len(this.GroupFilters) != len(that1.GroupFilters) { + return false + } + for i := range this.GroupFilters { + if !this.GroupFilters[i].Equal(that1.GroupFilters[i]) { + return false + } + } if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { return false } return true } +func (this *PluginSyncFilter) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginSyncFilter) + if !ok { + that2, ok := that.(PluginSyncFilter) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if that1.Include == nil { + if this.Include != nil { + return false + } + } else if this.Include == nil { + return false + } else if !this.Include.Equal(that1.Include) { + return false + } + if that1.Exclude == nil { + if this.Exclude != nil { + return false + } + } else if this.Exclude == nil { + return false + } else if !this.Exclude.Equal(that1.Exclude) { + return false + } + if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { + return false + } + return true +} +func (this *PluginSyncFilter_Id) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginSyncFilter_Id) + if !ok { + that2, ok := that.(PluginSyncFilter_Id) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if this.Id != that1.Id { + return 
false + } + return true +} +func (this *PluginSyncFilter_NameRegex) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginSyncFilter_NameRegex) + if !ok { + that2, ok := that.(PluginSyncFilter_NameRegex) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if this.NameRegex != that1.NameRegex { + return false + } + return true +} +func (this *PluginSyncFilter_ExcludeId) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginSyncFilter_ExcludeId) + if !ok { + that2, ok := that.(PluginSyncFilter_ExcludeId) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if this.ExcludeId != that1.ExcludeId { + return false + } + return true +} +func (this *PluginSyncFilter_ExcludeNameRegex) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginSyncFilter_ExcludeNameRegex) + if !ok { + that2, ok := that.(PluginSyncFilter_ExcludeNameRegex) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if this.ExcludeNameRegex != that1.ExcludeNameRegex { + return false + } + return true +} func (this *PluginEntraIDAccessGraphSettings) Equal(that interface{}) bool { if that == nil { return this == nil @@ -26854,6 +29100,39 @@ func (this *PluginSCIMSettings) Equal(that interface{}) bool { if this.DefaultRole != that1.DefaultRole { return false } + if !this.ConnectorInfo.Equal(that1.ConnectorInfo) { + return false + } + if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { + return false + } + return true +} +func (this *PluginSCIMSettings_ConnectorInfo) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := 
that.(*PluginSCIMSettings_ConnectorInfo) + if !ok { + that2, ok := that.(PluginSCIMSettings_ConnectorInfo) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if this.Name != that1.Name { + return false + } + if this.Type != that1.Type { + return false + } if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { return false } @@ -26961,6 +29240,9 @@ func (this *PluginAWSICSettings) Equal(that interface{}) bool { if !this.Credentials.Equal(that1.Credentials) { return false } + if this.RolesSyncMode != that1.RolesSyncMode { + return false + } if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { return false } @@ -27520,6 +29802,225 @@ func (this *PluginNetIQSettings) Equal(that interface{}) bool { } return true } +func (this *PluginIdSecretCredential) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginIdSecretCredential) + if !ok { + that2, ok := that.(PluginIdSecretCredential) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if this.Id != that1.Id { + return false + } + if this.Secret != that1.Secret { + return false + } + if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { + return false + } + return true +} +func (this *PluginCredentialsV1) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginCredentialsV1) + if !ok { + that2, ok := that.(PluginCredentialsV1) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if that1.Credentials == nil { + if this.Credentials != nil { + return false + } + } else if this.Credentials == nil { + return false + } else if !this.Credentials.Equal(that1.Credentials) { + return false + } + if 
!bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { + return false + } + return true +} +func (this *PluginCredentialsV1_Oauth2AccessToken) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginCredentialsV1_Oauth2AccessToken) + if !ok { + that2, ok := that.(PluginCredentialsV1_Oauth2AccessToken) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if !this.Oauth2AccessToken.Equal(that1.Oauth2AccessToken) { + return false + } + return true +} +func (this *PluginCredentialsV1_BearerToken) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginCredentialsV1_BearerToken) + if !ok { + that2, ok := that.(PluginCredentialsV1_BearerToken) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if !this.BearerToken.Equal(that1.BearerToken) { + return false + } + return true +} +func (this *PluginCredentialsV1_IdSecret) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginCredentialsV1_IdSecret) + if !ok { + that2, ok := that.(PluginCredentialsV1_IdSecret) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if !this.IdSecret.Equal(that1.IdSecret) { + return false + } + return true +} +func (this *PluginCredentialsV1_StaticCredentialsRef) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginCredentialsV1_StaticCredentialsRef) + if !ok { + that2, ok := that.(PluginCredentialsV1_StaticCredentialsRef) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if 
!this.StaticCredentialsRef.Equal(that1.StaticCredentialsRef) { + return false + } + return true +} +func (this *PluginOAuth2AccessTokenCredentials) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginOAuth2AccessTokenCredentials) + if !ok { + that2, ok := that.(PluginOAuth2AccessTokenCredentials) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if this.AccessToken != that1.AccessToken { + return false + } + if this.RefreshToken != that1.RefreshToken { + return false + } + if !this.Expires.Equal(that1.Expires) { + return false + } + if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { + return false + } + return true +} +func (this *PluginBearerTokenCredentials) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginBearerTokenCredentials) + if !ok { + that2, ok := that.(PluginBearerTokenCredentials) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if this.Token != that1.Token { + return false + } + if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { + return false + } + return true +} func (this *PluginStaticCredentialsRef) Equal(that interface{}) bool { if that == nil { return this == nil @@ -27552,6 +30053,251 @@ func (this *PluginStaticCredentialsRef) Equal(that interface{}) bool { } return true } +func (this *PluginStaticCredentialsSpecV1) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginStaticCredentialsSpecV1) + if !ok { + that2, ok := that.(PluginStaticCredentialsSpecV1) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if that1.Credentials == nil { + if this.Credentials != nil { + 
return false + } + } else if this.Credentials == nil { + return false + } else if !this.Credentials.Equal(that1.Credentials) { + return false + } + if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { + return false + } + return true +} +func (this *PluginStaticCredentialsSpecV1_APIToken) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginStaticCredentialsSpecV1_APIToken) + if !ok { + that2, ok := that.(PluginStaticCredentialsSpecV1_APIToken) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if this.APIToken != that1.APIToken { + return false + } + return true +} +func (this *PluginStaticCredentialsSpecV1_BasicAuth) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginStaticCredentialsSpecV1_BasicAuth) + if !ok { + that2, ok := that.(PluginStaticCredentialsSpecV1_BasicAuth) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if !this.BasicAuth.Equal(that1.BasicAuth) { + return false + } + return true +} +func (this *PluginStaticCredentialsSpecV1_OAuthClientSecret) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginStaticCredentialsSpecV1_OAuthClientSecret) + if !ok { + that2, ok := that.(PluginStaticCredentialsSpecV1_OAuthClientSecret) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if !this.OAuthClientSecret.Equal(that1.OAuthClientSecret) { + return false + } + return true +} +func (this *PluginStaticCredentialsSpecV1_SSHCertAuthorities) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginStaticCredentialsSpecV1_SSHCertAuthorities) + if !ok { + that2, 
ok := that.(PluginStaticCredentialsSpecV1_SSHCertAuthorities) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if !this.SSHCertAuthorities.Equal(that1.SSHCertAuthorities) { + return false + } + return true +} +func (this *PluginStaticCredentialsSpecV1_PrivateKey) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginStaticCredentialsSpecV1_PrivateKey) + if !ok { + that2, ok := that.(PluginStaticCredentialsSpecV1_PrivateKey) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if !bytes.Equal(this.PrivateKey, that1.PrivateKey) { + return false + } + return true +} +func (this *PluginStaticCredentialsBasicAuth) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginStaticCredentialsBasicAuth) + if !ok { + that2, ok := that.(PluginStaticCredentialsBasicAuth) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if this.Username != that1.Username { + return false + } + if this.Password != that1.Password { + return false + } + if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { + return false + } + return true +} +func (this *PluginStaticCredentialsOAuthClientSecret) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginStaticCredentialsOAuthClientSecret) + if !ok { + that2, ok := that.(PluginStaticCredentialsOAuthClientSecret) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if this.ClientId != that1.ClientId { + return false + } + if this.ClientSecret != that1.ClientSecret { + return false + } + if 
!bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { + return false + } + return true +} +func (this *PluginStaticCredentialsSSHCertAuthorities) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*PluginStaticCredentialsSSHCertAuthorities) + if !ok { + that2, ok := that.(PluginStaticCredentialsSSHCertAuthorities) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if len(this.CertAuthorities) != len(that1.CertAuthorities) { + return false + } + for i := range this.CertAuthorities { + if !this.CertAuthorities[i].Equal(that1.CertAuthorities[i]) { + return false + } + } + if !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) { + return false + } + return true +} func (this *JamfSpecV1) Equal(that interface{}) bool { if that == nil { return this == nil @@ -27999,6 +30745,13 @@ func (m *DatabaseServerV3) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.Scope) > 0 { + i -= len(m.Scope) + copy(dAtA[i:], m.Scope) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Scope))) + i-- + dAtA[i] = 0x3a + } { size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -28077,6 +30830,22 @@ func (m *DatabaseServerSpecV3) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.RelayIds) > 0 { + for iNdEx := len(m.RelayIds) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.RelayIds[iNdEx]) + copy(dAtA[i:], m.RelayIds[iNdEx]) + i = encodeVarintTypes(dAtA, i, uint64(len(m.RelayIds[iNdEx]))) + i-- + dAtA[i] = 0x7a + } + } + if len(m.RelayGroup) > 0 { + i -= len(m.RelayGroup) + copy(dAtA[i:], m.RelayGroup) + i = encodeVarintTypes(dAtA, i, uint64(len(m.RelayGroup))) + i-- + dAtA[i] = 0x72 + } if len(m.ProxyIDs) > 0 { for iNdEx := len(m.ProxyIDs) - 1; iNdEx >= 0; iNdEx-- { i -= 
len(m.ProxyIDs[iNdEx]) @@ -28519,6 +31288,21 @@ func (m *OracleOptions) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.ShuffleHostnames { + i-- + if m.ShuffleHostnames { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x18 + } + if m.RetryCount != 0 { + i = encodeVarintTypes(dAtA, i, uint64(m.RetryCount)) + i-- + dAtA[i] = 0x10 + } if len(m.AuditUser) > 0 { i -= len(m.AuditUser) copy(dAtA[i:], m.AuditUser) @@ -28626,6 +31410,18 @@ func (m *AWS) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + { + size, err := m.ElastiCacheServerless.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0x8a { size, err := m.DocumentDB.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -29039,6 +31835,40 @@ func (m *ElastiCache) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *ElastiCacheServerless) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ElastiCacheServerless) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ElastiCacheServerless) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.CacheName) > 0 { + i -= len(m.CacheName) + copy(dAtA[i:], m.CacheName) + i = encodeVarintTypes(dAtA, i, uint64(len(m.CacheName))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + func (m *MemoryDB) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -29265,6 +32095,16 @@ 
func (m *GCPCloudSQL) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + { + size, err := m.AlloyDB.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a if len(m.InstanceID) > 0 { i -= len(m.InstanceID) copy(dAtA[i:], m.InstanceID) @@ -29282,6 +32122,47 @@ func (m *GCPCloudSQL) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *AlloyDB) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *AlloyDB) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *AlloyDB) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.EndpointOverride) > 0 { + i -= len(m.EndpointOverride) + copy(dAtA[i:], m.EndpointOverride) + i = encodeVarintTypes(dAtA, i, uint64(len(m.EndpointOverride))) + i-- + dAtA[i] = 0x12 + } + if len(m.EndpointType) > 0 { + i -= len(m.EndpointType) + copy(dAtA[i:], m.EndpointType) + i = encodeVarintTypes(dAtA, i, uint64(len(m.EndpointType))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + func (m *Azure) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -29707,12 +32588,12 @@ func (m *InstanceSpecV1) MarshalToSizedBuffer(dAtA []byte) (int, error) { dAtA[i] = 0x32 } } - n46, err46 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.LastSeen, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.LastSeen):]) - if err46 != nil { - return 0, err46 + n48, err48 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.LastSeen, 
dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.LastSeen):]) + if err48 != nil { + return 0, err48 } - i -= n46 - i = encodeVarintTypes(dAtA, i, uint64(n46)) + i -= n48 + i = encodeVarintTypes(dAtA, i, uint64(n48)) i-- dAtA[i] = 0x2a if len(m.AuthID) > 0 { @@ -29772,28 +32653,28 @@ func (m *SystemClockMeasurement) MarshalToSizedBuffer(dAtA []byte) (int, error) i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - n47, err47 := github_com_gogo_protobuf_types.StdDurationMarshalTo(m.RequestDuration, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdDuration(m.RequestDuration):]) - if err47 != nil { - return 0, err47 + n49, err49 := github_com_gogo_protobuf_types.StdDurationMarshalTo(m.RequestDuration, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdDuration(m.RequestDuration):]) + if err49 != nil { + return 0, err49 } - i -= n47 - i = encodeVarintTypes(dAtA, i, uint64(n47)) + i -= n49 + i = encodeVarintTypes(dAtA, i, uint64(n49)) i-- dAtA[i] = 0x1a - n48, err48 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.SystemClock, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.SystemClock):]) - if err48 != nil { - return 0, err48 + n50, err50 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.SystemClock, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.SystemClock):]) + if err50 != nil { + return 0, err50 } - i -= n48 - i = encodeVarintTypes(dAtA, i, uint64(n48)) + i -= n50 + i = encodeVarintTypes(dAtA, i, uint64(n50)) i-- dAtA[i] = 0x12 - n49, err49 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.ControllerSystemClock, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.ControllerSystemClock):]) - if err49 != nil { - return 0, err49 + n51, err51 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.ControllerSystemClock, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.ControllerSystemClock):]) + if err51 != nil { + return 0, err51 } - i -= n49 - i = encodeVarintTypes(dAtA, i, uint64(n49)) + i -= n51 + i = encodeVarintTypes(dAtA, 
i, uint64(n51)) i-- dAtA[i] = 0xa return len(dAtA) - i, nil @@ -29847,12 +32728,12 @@ func (m *InstanceControlLogEntry) MarshalToSizedBuffer(dAtA []byte) (int, error) i-- dAtA[i] = 0x20 } - n50, err50 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Time, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Time):]) - if err50 != nil { - return 0, err50 + n52, err52 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Time, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Time):]) + if err52 != nil { + return 0, err52 } - i -= n50 - i = encodeVarintTypes(dAtA, i, uint64(n50)) + i -= n52 + i = encodeVarintTypes(dAtA, i, uint64(n52)) i-- dAtA[i] = 0x1a if m.ID != 0 { @@ -30028,6 +32909,13 @@ func (m *ServerV2) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.Scope) > 0 { + i -= len(m.Scope) + copy(dAtA[i:], m.Scope) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Scope))) + i-- + dAtA[i] = 0x32 + } { size, err := m.Spec.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -30096,6 +32984,40 @@ func (m *ServerSpecV2) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.ComponentFeatures != nil { + { + size, err := m.ComponentFeatures.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0x92 + } + if len(m.RelayIds) > 0 { + for iNdEx := len(m.RelayIds) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.RelayIds[iNdEx]) + copy(dAtA[i:], m.RelayIds[iNdEx]) + i = encodeVarintTypes(dAtA, i, uint64(len(m.RelayIds[iNdEx]))) + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0x8a + } + } + if len(m.RelayGroup) > 0 { + i -= len(m.RelayGroup) + copy(dAtA[i:], m.RelayGroup) + i = encodeVarintTypes(dAtA, i, uint64(len(m.RelayGroup))) + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0x82 + } if m.GitHub != nil { { size, err := 
m.GitHub.MarshalToSizedBuffer(dAtA[:i]) @@ -30386,6 +33308,13 @@ func (m *AppServerV3) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.Scope) > 0 { + i -= len(m.Scope) + copy(dAtA[i:], m.Scope) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Scope))) + i-- + dAtA[i] = 0x32 + } { size, err := m.Spec.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -30454,6 +33383,34 @@ func (m *AppServerSpecV3) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.ComponentFeatures != nil { + { + size, err := m.ComponentFeatures.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x4a + } + if len(m.RelayIds) > 0 { + for iNdEx := len(m.RelayIds) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.RelayIds[iNdEx]) + copy(dAtA[i:], m.RelayIds[iNdEx]) + i = encodeVarintTypes(dAtA, i, uint64(len(m.RelayIds[iNdEx]))) + i-- + dAtA[i] = 0x42 + } + } + if len(m.RelayGroup) > 0 { + i -= len(m.RelayGroup) + copy(dAtA[i:], m.RelayGroup) + i = encodeVarintTypes(dAtA, i, uint64(len(m.RelayGroup))) + i-- + dAtA[i] = 0x3a + } if len(m.ProxyIDs) > 0 { for iNdEx := len(m.ProxyIDs) - 1; iNdEx >= 0; iNdEx-- { i -= len(m.ProxyIDs[iNdEx]) @@ -30816,6 +33773,18 @@ func (m *AppSpecV3) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.MCP != nil { + { + size, err := m.MCP.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x7a + } if m.UseAnyProxyPublicAddr { i-- if m.UseAnyProxyPublicAddr { @@ -30971,6 +33940,56 @@ func (m *AppSpecV3) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *MCP) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + 
n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *MCP) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MCP) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.RunAsHostUser) > 0 { + i -= len(m.RunAsHostUser) + copy(dAtA[i:], m.RunAsHostUser) + i = encodeVarintTypes(dAtA, i, uint64(len(m.RunAsHostUser))) + i-- + dAtA[i] = 0x1a + } + if len(m.Args) > 0 { + for iNdEx := len(m.Args) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.Args[iNdEx]) + copy(dAtA[i:], m.Args[iNdEx]) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Args[iNdEx]))) + i-- + dAtA[i] = 0x12 + } + } + if len(m.Command) > 0 { + i -= len(m.Command) + copy(dAtA[i:], m.Command) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Command))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + func (m *Rewrite) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -31314,6 +34333,13 @@ func (m *TLSKeyPair) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.CRL) > 0 { + i -= len(m.CRL) + copy(dAtA[i:], m.CRL) + i = encodeVarintTypes(dAtA, i, uint64(len(m.CRL))) + i-- + dAtA[i] = 0x22 + } if m.KeyType != 0 { i = encodeVarintTypes(dAtA, i, uint64(m.KeyType)) i-- @@ -31382,6 +34408,91 @@ func (m *JWTKeyPair) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *EncryptionKeyPair) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *EncryptionKeyPair) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return 
m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *EncryptionKeyPair) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.Hash != 0 { + i = encodeVarintTypes(dAtA, i, uint64(m.Hash)) + i-- + dAtA[i] = 0x20 + } + if m.PrivateKeyType != 0 { + i = encodeVarintTypes(dAtA, i, uint64(m.PrivateKeyType)) + i-- + dAtA[i] = 0x18 + } + if len(m.PrivateKey) > 0 { + i -= len(m.PrivateKey) + copy(dAtA[i:], m.PrivateKey) + i = encodeVarintTypes(dAtA, i, uint64(len(m.PrivateKey))) + i-- + dAtA[i] = 0x12 + } + if len(m.PublicKey) > 0 { + i -= len(m.PublicKey) + copy(dAtA[i:], m.PublicKey) + i = encodeVarintTypes(dAtA, i, uint64(len(m.PublicKey))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + +func (m *AgeEncryptionKey) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *AgeEncryptionKey) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *AgeEncryptionKey) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.PublicKey) > 0 { + i -= len(m.PublicKey) + copy(dAtA[i:], m.PublicKey) + i = encodeVarintTypes(dAtA, i, uint64(len(m.PublicKey))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + func (m *CertAuthorityV2) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -31694,12 +34805,12 @@ func (m *ProvisionTokenV1) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x1a } - n75, err75 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, 
dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) - if err75 != nil { - return 0, err75 + n80, err80 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) + if err80 != nil { + return 0, err80 } - i -= n75 - i = encodeVarintTypes(dAtA, i, uint64(n75)) + i -= n80 + i = encodeVarintTypes(dAtA, i, uint64(n80)) i-- dAtA[i] = 0x12 if len(m.Roles) > 0 { @@ -31916,6 +35027,20 @@ func (m *ProvisionTokenSpecV2) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.Env0 != nil { + { + size, err := m.Env0.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0xaa + } if m.AzureDevops != nil { { size, err := m.AzureDevops.MarshalToSizedBuffer(dAtA[:i]) @@ -32840,6 +35965,16 @@ func (m *ProvisionTokenSpecV2Spacelift) MarshalToSizedBuffer(dAtA []byte) (int, i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.EnableGlobMatching { + i-- + if m.EnableGlobMatching { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x18 + } if len(m.Hostname) > 0 { i -= len(m.Hostname) copy(dAtA[i:], m.Hostname) @@ -32943,6 +36078,18 @@ func (m *ProvisionTokenSpecV2Kubernetes) MarshalToSizedBuffer(dAtA []byte) (int, i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.OIDC != nil { + { + size, err := m.OIDC.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 + } if m.StaticJWKS != nil { { size, err := m.StaticJWKS.MarshalToSizedBuffer(dAtA[:i]) @@ -33013,6 +36160,50 @@ func (m *ProvisionTokenSpecV2Kubernetes_StaticJWKSConfig) MarshalToSizedBuffer(d return len(dAtA) - i, nil } +func (m *ProvisionTokenSpecV2Kubernetes_OIDCConfig) Marshal() (dAtA []byte, err error) { + size := 
m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ProvisionTokenSpecV2Kubernetes_OIDCConfig) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ProvisionTokenSpecV2Kubernetes_OIDCConfig) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.InsecureAllowHTTPIssuer { + i-- + if m.InsecureAllowHTTPIssuer { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x10 + } + if len(m.Issuer) > 0 { + i -= len(m.Issuer) + copy(dAtA[i:], m.Issuer) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Issuer))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + func (m *ProvisionTokenSpecV2Kubernetes_Rule) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -33532,6 +36723,15 @@ func (m *ProvisionTokenSpecV2Oracle_Rule) MarshalToSizedBuffer(dAtA []byte) (int i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.Instances) > 0 { + for iNdEx := len(m.Instances) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.Instances[iNdEx]) + copy(dAtA[i:], m.Instances[iNdEx]) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Instances[iNdEx]))) + i-- + dAtA[i] = 0x22 + } + } if len(m.Regions) > 0 { for iNdEx := len(m.Regions) - 1; iNdEx >= 0; iNdEx-- { i -= len(m.Regions[iNdEx]) @@ -33560,6 +36760,151 @@ func (m *ProvisionTokenSpecV2Oracle_Rule) MarshalToSizedBuffer(dAtA []byte) (int return len(dAtA) - i, nil } +func (m *ProvisionTokenSpecV2Env0) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ProvisionTokenSpecV2Env0) MarshalTo(dAtA []byte) (int, error) { + 
size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ProvisionTokenSpecV2Env0) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.Allow) > 0 { + for iNdEx := len(m.Allow) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Allow[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + +func (m *ProvisionTokenSpecV2Env0_Rule) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ProvisionTokenSpecV2Env0_Rule) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ProvisionTokenSpecV2Env0_Rule) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.Env0Tag) > 0 { + i -= len(m.Env0Tag) + copy(dAtA[i:], m.Env0Tag) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Env0Tag))) + i-- + dAtA[i] = 0x5a + } + if len(m.DeployerEmail) > 0 { + i -= len(m.DeployerEmail) + copy(dAtA[i:], m.DeployerEmail) + i = encodeVarintTypes(dAtA, i, uint64(len(m.DeployerEmail))) + i-- + dAtA[i] = 0x52 + } + if len(m.DeploymentType) > 0 { + i -= len(m.DeploymentType) + copy(dAtA[i:], m.DeploymentType) + i = encodeVarintTypes(dAtA, i, uint64(len(m.DeploymentType))) + i-- + dAtA[i] = 0x4a + } + if len(m.WorkspaceName) > 0 { + i -= len(m.WorkspaceName) + copy(dAtA[i:], m.WorkspaceName) + i = encodeVarintTypes(dAtA, i, uint64(len(m.WorkspaceName))) + i-- + dAtA[i] = 0x42 + } + if len(m.EnvironmentName) > 0 { + i -= 
len(m.EnvironmentName) + copy(dAtA[i:], m.EnvironmentName) + i = encodeVarintTypes(dAtA, i, uint64(len(m.EnvironmentName))) + i-- + dAtA[i] = 0x3a + } + if len(m.EnvironmentID) > 0 { + i -= len(m.EnvironmentID) + copy(dAtA[i:], m.EnvironmentID) + i = encodeVarintTypes(dAtA, i, uint64(len(m.EnvironmentID))) + i-- + dAtA[i] = 0x32 + } + if len(m.TemplateName) > 0 { + i -= len(m.TemplateName) + copy(dAtA[i:], m.TemplateName) + i = encodeVarintTypes(dAtA, i, uint64(len(m.TemplateName))) + i-- + dAtA[i] = 0x2a + } + if len(m.TemplateID) > 0 { + i -= len(m.TemplateID) + copy(dAtA[i:], m.TemplateID) + i = encodeVarintTypes(dAtA, i, uint64(len(m.TemplateID))) + i-- + dAtA[i] = 0x22 + } + if len(m.ProjectName) > 0 { + i -= len(m.ProjectName) + copy(dAtA[i:], m.ProjectName) + i = encodeVarintTypes(dAtA, i, uint64(len(m.ProjectName))) + i-- + dAtA[i] = 0x1a + } + if len(m.ProjectID) > 0 { + i -= len(m.ProjectID) + copy(dAtA[i:], m.ProjectID) + i = encodeVarintTypes(dAtA, i, uint64(len(m.ProjectID))) + i-- + dAtA[i] = 0x12 + } + if len(m.OrganizationID) > 0 { + i -= len(m.OrganizationID) + copy(dAtA[i:], m.OrganizationID) + i = encodeVarintTypes(dAtA, i, uint64(len(m.OrganizationID))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + func (m *ProvisionTokenSpecV2BoundKeypair) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -33585,12 +36930,12 @@ func (m *ProvisionTokenSpecV2BoundKeypair) MarshalToSizedBuffer(dAtA []byte) (in copy(dAtA[i:], m.XXX_unrecognized) } if m.RotateAfter != nil { - n97, err97 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.RotateAfter, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.RotateAfter):]) - if err97 != nil { - return 0, err97 + n104, err104 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.RotateAfter, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.RotateAfter):]) + if err104 != nil { + return 0, err104 } - i -= n97 - i = encodeVarintTypes(dAtA, i, uint64(n97)) + i -= n104 + 
i = encodeVarintTypes(dAtA, i, uint64(n104)) i-- dAtA[i] = 0x1a } @@ -33646,12 +36991,12 @@ func (m *ProvisionTokenSpecV2BoundKeypair_OnboardingSpec) MarshalToSizedBuffer(d copy(dAtA[i:], m.XXX_unrecognized) } if m.MustRegisterBefore != nil { - n100, err100 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.MustRegisterBefore, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.MustRegisterBefore):]) - if err100 != nil { - return 0, err100 + n107, err107 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.MustRegisterBefore, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.MustRegisterBefore):]) + if err107 != nil { + return 0, err107 } - i -= n100 - i = encodeVarintTypes(dAtA, i, uint64(n100)) + i -= n107 + i = encodeVarintTypes(dAtA, i, uint64(n107)) i-- dAtA[i] = 0x1a } @@ -33775,22 +37120,22 @@ func (m *ProvisionTokenStatusV2BoundKeypair) MarshalToSizedBuffer(dAtA []byte) ( copy(dAtA[i:], m.XXX_unrecognized) } if m.LastRotatedAt != nil { - n102, err102 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.LastRotatedAt, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.LastRotatedAt):]) - if err102 != nil { - return 0, err102 + n109, err109 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.LastRotatedAt, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.LastRotatedAt):]) + if err109 != nil { + return 0, err109 } - i -= n102 - i = encodeVarintTypes(dAtA, i, uint64(n102)) + i -= n109 + i = encodeVarintTypes(dAtA, i, uint64(n109)) i-- dAtA[i] = 0x32 } if m.LastRecoveredAt != nil { - n103, err103 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.LastRecoveredAt, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.LastRecoveredAt):]) - if err103 != nil { - return 0, err103 + n110, err110 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.LastRecoveredAt, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.LastRecoveredAt):]) + if err110 != nil { + return 0, err110 } - i -= n103 - i = encodeVarintTypes(dAtA, i, uint64(n103)) + 
i -= n110 + i = encodeVarintTypes(dAtA, i, uint64(n110)) i-- dAtA[i] = 0x2a } @@ -34564,119 +37909,339 @@ func (m *SessionRecordingConfigV2) MarshalToSizedBuffer(dAtA []byte) (int, error i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - { - size, err := m.Spec.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintTypes(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x2a - { - size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintTypes(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x22 - if len(m.Version) > 0 { - i -= len(m.Version) - copy(dAtA[i:], m.Version) - i = encodeVarintTypes(dAtA, i, uint64(len(m.Version))) - i-- - dAtA[i] = 0x1a - } - if len(m.SubKind) > 0 { - i -= len(m.SubKind) - copy(dAtA[i:], m.SubKind) - i = encodeVarintTypes(dAtA, i, uint64(len(m.SubKind))) - i-- - dAtA[i] = 0x12 - } - if len(m.Kind) > 0 { - i -= len(m.Kind) - copy(dAtA[i:], m.Kind) - i = encodeVarintTypes(dAtA, i, uint64(len(m.Kind))) - i-- - dAtA[i] = 0xa - } - return len(dAtA) - i, nil -} - -func (m *SessionRecordingConfigSpecV2) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err - } - return dAtA[:n], nil -} - -func (m *SessionRecordingConfigSpecV2) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *SessionRecordingConfigSpecV2) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) - } - if m.ProxyChecksHostKeys != nil { + if m.Status != nil { { - size := m.ProxyChecksHostKeys.Size() - i -= size - if _, err := m.ProxyChecksHostKeys.MarshalTo(dAtA[i:]); err != nil { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil 
{ return 0, err } + i -= size i = encodeVarintTypes(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x12 - } - if len(m.Mode) > 0 { - i -= len(m.Mode) - copy(dAtA[i:], m.Mode) - i = encodeVarintTypes(dAtA, i, uint64(len(m.Mode))) - i-- - dAtA[i] = 0xa - } - return len(dAtA) - i, nil -} - -func (m *AuthPreferenceV2) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err - } - return dAtA[:n], nil -} - -func (m *AuthPreferenceV2) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *AuthPreferenceV2) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.XXX_unrecognized != nil { - i -= len(m.XXX_unrecognized) - copy(dAtA[i:], m.XXX_unrecognized) + dAtA[i] = 0x32 + } + { + size, err := m.Spec.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a + { + size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 + if len(m.Version) > 0 { + i -= len(m.Version) + copy(dAtA[i:], m.Version) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Version))) + i-- + dAtA[i] = 0x1a + } + if len(m.SubKind) > 0 { + i -= len(m.SubKind) + copy(dAtA[i:], m.SubKind) + i = encodeVarintTypes(dAtA, i, uint64(len(m.SubKind))) + i-- + dAtA[i] = 0x12 + } + if len(m.Kind) > 0 { + i -= len(m.Kind) + copy(dAtA[i:], m.Kind) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Kind))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + +func (m *KeyLabel) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *KeyLabel) 
MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *KeyLabel) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.Label) > 0 { + i -= len(m.Label) + copy(dAtA[i:], m.Label) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Label))) + i-- + dAtA[i] = 0x12 + } + if len(m.Type) > 0 { + i -= len(m.Type) + copy(dAtA[i:], m.Type) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Type))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + +func (m *ManualKeyManagementConfig) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ManualKeyManagementConfig) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ManualKeyManagementConfig) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.RotatedKeys) > 0 { + for iNdEx := len(m.RotatedKeys) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.RotatedKeys[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + } + } + if len(m.ActiveKeys) > 0 { + for iNdEx := len(m.ActiveKeys) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.ActiveKeys[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + } + } + if m.Enabled { + i-- + if m.Enabled { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} 
+ +func (m *SessionRecordingEncryptionConfig) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *SessionRecordingEncryptionConfig) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *SessionRecordingEncryptionConfig) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.ManualKeyManagement != nil { + { + size, err := m.ManualKeyManagement.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + } + if m.Enabled { + i-- + if m.Enabled { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + +func (m *SessionRecordingConfigSpecV2) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *SessionRecordingConfigSpecV2) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *SessionRecordingConfigSpecV2) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.Encryption != nil { + { + size, err := m.Encryption.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + } + if m.ProxyChecksHostKeys != nil { + { + size := m.ProxyChecksHostKeys.Size() + i -= size + if _, err := 
m.ProxyChecksHostKeys.MarshalTo(dAtA[i:]); err != nil { + return 0, err + } + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + } + if len(m.Mode) > 0 { + i -= len(m.Mode) + copy(dAtA[i:], m.Mode) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Mode))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + +func (m *SessionRecordingConfigStatus) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *SessionRecordingConfigStatus) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *SessionRecordingConfigStatus) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.EncryptionKeys) > 0 { + for iNdEx := len(m.EncryptionKeys) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.EncryptionKeys[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + +func (m *AuthPreferenceV2) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *AuthPreferenceV2) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *AuthPreferenceV2) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) } { size, err := m.Spec.MarshalToSizedBuffer(dAtA[:i]) @@ -34761,20 +38326,20 @@ func (m *AuthPreferenceSpecV2) 
MarshalToSizedBuffer(dAtA []byte) (int, error) { dAtA[i] = 0xb2 } if len(m.SecondFactors) > 0 { - dAtA123 := make([]byte, len(m.SecondFactors)*10) - var j122 int + dAtA133 := make([]byte, len(m.SecondFactors)*10) + var j132 int for _, num := range m.SecondFactors { for num >= 1<<7 { - dAtA123[j122] = uint8(uint64(num)&0x7f | 0x80) + dAtA133[j132] = uint8(uint64(num)&0x7f | 0x80) num >>= 7 - j122++ + j132++ } - dAtA123[j122] = uint8(num) - j122++ + dAtA133[j132] = uint8(num) + j132++ } - i -= j122 - copy(dAtA[i:], dAtA123[:j122]) - i = encodeVarintTypes(dAtA, i, uint64(j122)) + i -= j132 + copy(dAtA[i:], dAtA133[:j132]) + i = encodeVarintTypes(dAtA, i, uint64(j132)) i-- dAtA[i] = 0x1 i-- @@ -35447,12 +39012,12 @@ func (m *UserTokenSpecV3) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - n139, err139 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Created, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Created):]) - if err139 != nil { - return 0, err139 + n149, err149 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Created, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Created):]) + if err149 != nil { + return 0, err149 } - i -= n139 - i = encodeVarintTypes(dAtA, i, uint64(n139)) + i -= n149 + i = encodeVarintTypes(dAtA, i, uint64(n149)) i-- dAtA[i] = 0x22 if m.Usage != 0 { @@ -35569,12 +39134,12 @@ func (m *UserTokenSecretsSpecV3) MarshalToSizedBuffer(dAtA []byte) (int, error) i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - n142, err142 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Created, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Created):]) - if err142 != nil { - return 0, err142 + n152, err152 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Created, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Created):]) + if err152 != nil { + return 0, err152 } - i -= n142 - i = encodeVarintTypes(dAtA, i, uint64(n142)) + i -= n152 + 
i = encodeVarintTypes(dAtA, i, uint64(n152)) i-- dAtA[i] = 0x1a if len(m.QRCode) > 0 { @@ -35822,12 +39387,12 @@ func (m *AccessReview) MarshalToSizedBuffer(dAtA []byte) (int, error) { copy(dAtA[i:], m.XXX_unrecognized) } if m.AssumeStartTime != nil { - n145, err145 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.AssumeStartTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.AssumeStartTime):]) - if err145 != nil { - return 0, err145 + n155, err155 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.AssumeStartTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.AssumeStartTime):]) + if err155 != nil { + return 0, err155 } - i -= n145 - i = encodeVarintTypes(dAtA, i, uint64(n145)) + i -= n155 + i = encodeVarintTypes(dAtA, i, uint64(n155)) i-- dAtA[i] = 0x52 } @@ -35844,20 +39409,20 @@ func (m *AccessReview) MarshalToSizedBuffer(dAtA []byte) (int, error) { dAtA[i] = 0x4a } if len(m.ThresholdIndexes) > 0 { - dAtA148 := make([]byte, len(m.ThresholdIndexes)*10) - var j147 int + dAtA158 := make([]byte, len(m.ThresholdIndexes)*10) + var j157 int for _, num := range m.ThresholdIndexes { for num >= 1<<7 { - dAtA148[j147] = uint8(uint64(num)&0x7f | 0x80) + dAtA158[j157] = uint8(uint64(num)&0x7f | 0x80) num >>= 7 - j147++ + j157++ } - dAtA148[j147] = uint8(num) - j147++ + dAtA158[j157] = uint8(num) + j157++ } - i -= j147 - copy(dAtA[i:], dAtA148[:j147]) - i = encodeVarintTypes(dAtA, i, uint64(j147)) + i -= j157 + copy(dAtA[i:], dAtA158[:j157]) + i = encodeVarintTypes(dAtA, i, uint64(j157)) i-- dAtA[i] = 0x3a } @@ -35871,12 +39436,12 @@ func (m *AccessReview) MarshalToSizedBuffer(dAtA []byte) (int, error) { } i-- dAtA[i] = 0x32 - n150, err150 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Created, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Created):]) - if err150 != nil { - return 0, err150 + n160, err160 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Created, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Created):]) + 
if err160 != nil { + return 0, err160 } - i -= n150 - i = encodeVarintTypes(dAtA, i, uint64(n150)) + i -= n160 + i = encodeVarintTypes(dAtA, i, uint64(n160)) i-- dAtA[i] = 0x2a if len(m.Reason) > 0 { @@ -35979,20 +39544,20 @@ func (m *ThresholdIndexSet) MarshalToSizedBuffer(dAtA []byte) (int, error) { copy(dAtA[i:], m.XXX_unrecognized) } if len(m.Indexes) > 0 { - dAtA153 := make([]byte, len(m.Indexes)*10) - var j152 int + dAtA163 := make([]byte, len(m.Indexes)*10) + var j162 int for _, num := range m.Indexes { for num >= 1<<7 { - dAtA153[j152] = uint8(uint64(num)&0x7f | 0x80) + dAtA163[j162] = uint8(uint64(num)&0x7f | 0x80) num >>= 7 - j152++ + j162++ } - dAtA153[j152] = uint8(num) - j152++ + dAtA163[j162] = uint8(num) + j162++ } - i -= j152 - copy(dAtA[i:], dAtA153[:j152]) - i = encodeVarintTypes(dAtA, i, uint64(j152)) + i -= j162 + copy(dAtA[i:], dAtA163[:j162]) + i = encodeVarintTypes(dAtA, i, uint64(j162)) i-- dAtA[i] = 0xa } @@ -36064,6 +39629,27 @@ func (m *AccessRequestSpecV3) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.LongTermGrouping != nil { + { + size, err := m.LongTermGrouping.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0xca + } + if m.RequestKind != 0 { + i = encodeVarintTypes(dAtA, i, uint64(m.RequestKind)) + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0xc0 + } if m.DryRunEnrichment != nil { { size, err := m.DryRunEnrichment.MarshalToSizedBuffer(dAtA[:i]) @@ -36079,24 +39665,24 @@ func (m *AccessRequestSpecV3) MarshalToSizedBuffer(dAtA []byte) (int, error) { dAtA[i] = 0xba } if m.ResourceExpiry != nil { - n155, err155 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.ResourceExpiry, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.ResourceExpiry):]) - if err155 != nil { - return 0, err155 + n166, err166 := 
github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.ResourceExpiry, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.ResourceExpiry):]) + if err166 != nil { + return 0, err166 } - i -= n155 - i = encodeVarintTypes(dAtA, i, uint64(n155)) + i -= n166 + i = encodeVarintTypes(dAtA, i, uint64(n166)) i-- dAtA[i] = 0x1 i-- dAtA[i] = 0xb2 } if m.AssumeStartTime != nil { - n156, err156 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.AssumeStartTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.AssumeStartTime):]) - if err156 != nil { - return 0, err156 + n167, err167 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.AssumeStartTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.AssumeStartTime):]) + if err167 != nil { + return 0, err167 } - i -= n156 - i = encodeVarintTypes(dAtA, i, uint64(n156)) + i -= n167 + i = encodeVarintTypes(dAtA, i, uint64(n167)) i-- dAtA[i] = 0x1 i-- @@ -36116,22 +39702,22 @@ func (m *AccessRequestSpecV3) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0xa2 } - n158, err158 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.SessionTTL, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.SessionTTL):]) - if err158 != nil { - return 0, err158 + n169, err169 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.SessionTTL, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.SessionTTL):]) + if err169 != nil { + return 0, err169 } - i -= n158 - i = encodeVarintTypes(dAtA, i, uint64(n158)) + i -= n169 + i = encodeVarintTypes(dAtA, i, uint64(n169)) i-- dAtA[i] = 0x1 i-- dAtA[i] = 0x92 - n159, err159 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.MaxDuration, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.MaxDuration):]) - if err159 != nil { - return 0, err159 + n170, err170 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.MaxDuration, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.MaxDuration):]) + if err170 != nil { + return 0, err170 } - i -= n159 - i = 
encodeVarintTypes(dAtA, i, uint64(n159)) + i -= n170 + i = encodeVarintTypes(dAtA, i, uint64(n170)) i-- dAtA[i] = 0x1 i-- @@ -36264,20 +39850,20 @@ func (m *AccessRequestSpecV3) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x32 } - n163, err163 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) - if err163 != nil { - return 0, err163 + n174, err174 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) + if err174 != nil { + return 0, err174 } - i -= n163 - i = encodeVarintTypes(dAtA, i, uint64(n163)) + i -= n174 + i = encodeVarintTypes(dAtA, i, uint64(n174)) i-- dAtA[i] = 0x2a - n164, err164 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Created, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Created):]) - if err164 != nil { - return 0, err164 + n175, err175 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Created, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Created):]) + if err175 != nil { + return 0, err175 } - i -= n164 - i = encodeVarintTypes(dAtA, i, uint64(n164)) + i -= n175 + i = encodeVarintTypes(dAtA, i, uint64(n175)) i-- dAtA[i] = 0x22 if m.State != 0 { @@ -36537,6 +40123,99 @@ func (m *AccessCapabilitiesRequest) MarshalToSizedBuffer(dAtA []byte) (int, erro return len(dAtA) - i, nil } +func (m *RemoteAccessCapabilities) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *RemoteAccessCapabilities) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *RemoteAccessCapabilities) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], 
m.XXX_unrecognized) + } + if len(m.ApplicableRolesForResources) > 0 { + for iNdEx := len(m.ApplicableRolesForResources) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.ApplicableRolesForResources[iNdEx]) + copy(dAtA[i:], m.ApplicableRolesForResources[iNdEx]) + i = encodeVarintTypes(dAtA, i, uint64(len(m.ApplicableRolesForResources[iNdEx]))) + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + +func (m *RemoteAccessCapabilitiesRequest) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *RemoteAccessCapabilitiesRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *RemoteAccessCapabilitiesRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.ResourceIDs) > 0 { + for iNdEx := len(m.ResourceIDs) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.ResourceIDs[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + } + } + if len(m.SearchAsRoles) > 0 { + for iNdEx := len(m.SearchAsRoles) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.SearchAsRoles[iNdEx]) + copy(dAtA[i:], m.SearchAsRoles[iNdEx]) + i = encodeVarintTypes(dAtA, i, uint64(len(m.SearchAsRoles[iNdEx]))) + i-- + dAtA[i] = 0x12 + } + } + if len(m.User) > 0 { + i -= len(m.User) + copy(dAtA[i:], m.User) + i = encodeVarintTypes(dAtA, i, uint64(len(m.User))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + func (m *RequestKubernetesResource) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -37281,12 +40960,12 @@ func (m *RoleOptions) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- 
dAtA[i] = 0xfa } - n178, err178 := github_com_gogo_protobuf_types.StdDurationMarshalTo(m.MFAVerificationInterval, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdDuration(m.MFAVerificationInterval):]) - if err178 != nil { - return 0, err178 + n189, err189 := github_com_gogo_protobuf_types.StdDurationMarshalTo(m.MFAVerificationInterval, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdDuration(m.MFAVerificationInterval):]) + if err189 != nil { + return 0, err189 } - i -= n178 - i = encodeVarintTypes(dAtA, i, uint64(n178)) + i -= n189 + i = encodeVarintTypes(dAtA, i, uint64(n189)) i-- dAtA[i] = 0x1 i-- @@ -37696,6 +41375,20 @@ func (m *RoleConditions) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.MCP != nil { + { + size, err := m.MCP.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2 + i-- + dAtA[i] = 0xf2 + } if len(m.WorkloadIdentityLabelsExpression) > 0 { i -= len(m.WorkloadIdentityLabelsExpression) copy(dAtA[i:], m.WorkloadIdentityLabelsExpression) @@ -38259,6 +41952,42 @@ func (m *GitHubPermission) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *MCPPermissions) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *MCPPermissions) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MCPPermissions) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.Tools) > 0 { + for iNdEx := len(m.Tools) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.Tools[iNdEx]) + copy(dAtA[i:], m.Tools[iNdEx]) + i = 
encodeVarintTypes(dAtA, i, uint64(len(m.Tools[iNdEx]))) + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + func (m *SPIFFERoleCondition) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -38700,6 +42429,13 @@ func (m *AccessRequestConditionsReason) MarshalToSizedBuffer(dAtA []byte) (int, i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.Prompt) > 0 { + i -= len(m.Prompt) + copy(dAtA[i:], m.Prompt) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Prompt))) + i-- + dAtA[i] = 0x12 + } if len(m.Mode) > 0 { i -= len(m.Mode) copy(dAtA[i:], m.Mode) @@ -38851,6 +42587,122 @@ func (m *AccessRequestAllowedPromotions) MarshalToSizedBuffer(dAtA []byte) (int, return len(dAtA) - i, nil } +func (m *ResourceIDList) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ResourceIDList) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ResourceIDList) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.ResourceIds) > 0 { + for iNdEx := len(m.ResourceIds) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.ResourceIds[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + +func (m *LongTermResourceGrouping) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *LongTermResourceGrouping) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return 
m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *LongTermResourceGrouping) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.CanProceed { + i-- + if m.CanProceed { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x20 + } + if len(m.ValidationMessage) > 0 { + i -= len(m.ValidationMessage) + copy(dAtA[i:], m.ValidationMessage) + i = encodeVarintTypes(dAtA, i, uint64(len(m.ValidationMessage))) + i-- + dAtA[i] = 0x1a + } + if len(m.RecommendedAccessList) > 0 { + i -= len(m.RecommendedAccessList) + copy(dAtA[i:], m.RecommendedAccessList) + i = encodeVarintTypes(dAtA, i, uint64(len(m.RecommendedAccessList))) + i-- + dAtA[i] = 0x12 + } + if len(m.AccessListToResources) > 0 { + for k := range m.AccessListToResources { + v := m.AccessListToResources[k] + baseI := i + { + size, err := (&v).MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + i -= len(k) + copy(dAtA[i:], k) + i = encodeVarintTypes(dAtA, i, uint64(len(k))) + i-- + dAtA[i] = 0xa + i = encodeVarintTypes(dAtA, i, uint64(baseI-i)) + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + func (m *ClaimMapping) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -39125,6 +42977,16 @@ func (m *UserFilter) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.SkipSystemUsers { + i-- + if m.SkipSystemUsers { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x10 + } if len(m.SearchKeywords) > 0 { for iNdEx := len(m.SearchKeywords) - 1; iNdEx >= 0; iNdEx-- { i -= len(m.SearchKeywords[iNdEx]) @@ -39307,12 +43169,12 @@ func (m *UserSpecV2) MarshalToSizedBuffer(dAtA []byte) (int, error) { } i-- dAtA[i] = 0x42 - n209, err209 := 
github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) - if err209 != nil { - return 0, err209 + n222, err222 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) + if err222 != nil { + return 0, err222 } - i -= n209 - i = encodeVarintTypes(dAtA, i, uint64(n209)) + i -= n222 + i = encodeVarintTypes(dAtA, i, uint64(n222)) i-- dAtA[i] = 0x3a { @@ -39468,20 +43330,20 @@ func (m *LoginStatus) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - n212, err212 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.LockExpires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.LockExpires):]) - if err212 != nil { - return 0, err212 + n225, err225 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.LockExpires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.LockExpires):]) + if err225 != nil { + return 0, err225 } - i -= n212 - i = encodeVarintTypes(dAtA, i, uint64(n212)) + i -= n225 + i = encodeVarintTypes(dAtA, i, uint64(n225)) i-- dAtA[i] = 0x22 - n213, err213 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.LockedTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.LockedTime):]) - if err213 != nil { - return 0, err213 + n226, err226 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.LockedTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.LockedTime):]) + if err226 != nil { + return 0, err226 } - i -= n213 - i = encodeVarintTypes(dAtA, i, uint64(n213)) + i -= n226 + i = encodeVarintTypes(dAtA, i, uint64(n226)) i-- dAtA[i] = 0x1a if len(m.LockedMessage) > 0 { @@ -39538,12 +43400,12 @@ func (m *CreatedBy) MarshalToSizedBuffer(dAtA []byte) (int, error) { } i-- dAtA[i] = 0x1a - n215, err215 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Time, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Time):]) - if err215 != nil { - 
return 0, err215 + n228, err228 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Time, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Time):]) + if err228 != nil { + return 0, err228 } - i -= n215 - i = encodeVarintTypes(dAtA, i, uint64(n215)) + i -= n228 + i = encodeVarintTypes(dAtA, i, uint64(n228)) i-- dAtA[i] = 0x12 if m.Connector != nil { @@ -39661,20 +43523,20 @@ func (m *MFADevice) MarshalToSizedBuffer(dAtA []byte) (int, error) { } } } - n218, err218 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.LastUsed, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.LastUsed):]) - if err218 != nil { - return 0, err218 + n231, err231 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.LastUsed, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.LastUsed):]) + if err231 != nil { + return 0, err231 } - i -= n218 - i = encodeVarintTypes(dAtA, i, uint64(n218)) + i -= n231 + i = encodeVarintTypes(dAtA, i, uint64(n231)) i-- dAtA[i] = 0x3a - n219, err219 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.AddedAt, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.AddedAt):]) - if err219 != nil { - return 0, err219 + n232, err232 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.AddedAt, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.AddedAt):]) + if err232 != nil { + return 0, err232 } - i -= n219 - i = encodeVarintTypes(dAtA, i, uint64(n219)) + i -= n232 + i = encodeVarintTypes(dAtA, i, uint64(n232)) i-- dAtA[i] = 0x32 if len(m.Id) > 0 { @@ -40371,12 +44233,12 @@ func (m *TunnelConnectionSpecV2) MarshalToSizedBuffer(dAtA []byte) (int, error) i-- dAtA[i] = 0x22 } - n231, err231 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.LastHeartbeat, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.LastHeartbeat):]) - if err231 != nil { - return 0, err231 + n244, err244 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.LastHeartbeat, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.LastHeartbeat):]) + if err244 != nil { + return 
0, err244 } - i -= n231 - i = encodeVarintTypes(dAtA, i, uint64(n231)) + i -= n244 + i = encodeVarintTypes(dAtA, i, uint64(n244)) i-- dAtA[i] = 0x1a if len(m.ProxyName) > 0 { @@ -40468,12 +44330,12 @@ func (m *AcquireSemaphoreRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) i-- dAtA[i] = 0x2a } - n232, err232 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) - if err232 != nil { - return 0, err232 + n245, err245 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) + if err245 != nil { + return 0, err245 } - i -= n232 - i = encodeVarintTypes(dAtA, i, uint64(n232)) + i -= n245 + i = encodeVarintTypes(dAtA, i, uint64(n245)) i-- dAtA[i] = 0x22 if m.MaxLeases != 0 { @@ -40522,12 +44384,12 @@ func (m *SemaphoreLease) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - n233, err233 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) - if err233 != nil { - return 0, err233 + n246, err246 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) + if err246 != nil { + return 0, err246 } - i -= n233 - i = encodeVarintTypes(dAtA, i, uint64(n233)) + i -= n246 + i = encodeVarintTypes(dAtA, i, uint64(n246)) i-- dAtA[i] = 0x2a if len(m.LeaseID) > 0 { @@ -40585,12 +44447,12 @@ func (m *SemaphoreLeaseRef) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x1a } - n234, err234 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) - if err234 != nil { - return 0, err234 + n247, err247 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) + if err247 != nil { + 
return 0, err247 } - i -= n234 - i = encodeVarintTypes(dAtA, i, uint64(n234)) + i -= n247 + i = encodeVarintTypes(dAtA, i, uint64(n247)) i-- dAtA[i] = 0x12 if len(m.LeaseID) > 0 { @@ -40862,28 +44724,28 @@ func (m *WebSessionSpecV2) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x48 } - n241, err241 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.LoginTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.LoginTime):]) - if err241 != nil { - return 0, err241 + n254, err254 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.LoginTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.LoginTime):]) + if err254 != nil { + return 0, err254 } - i -= n241 - i = encodeVarintTypes(dAtA, i, uint64(n241)) + i -= n254 + i = encodeVarintTypes(dAtA, i, uint64(n254)) i-- dAtA[i] = 0x42 - n242, err242 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) - if err242 != nil { - return 0, err242 + n255, err255 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) + if err255 != nil { + return 0, err255 } - i -= n242 - i = encodeVarintTypes(dAtA, i, uint64(n242)) + i -= n255 + i = encodeVarintTypes(dAtA, i, uint64(n255)) i-- dAtA[i] = 0x3a - n243, err243 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.BearerTokenExpires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.BearerTokenExpires):]) - if err243 != nil { - return 0, err243 + n256, err256 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.BearerTokenExpires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.BearerTokenExpires):]) + if err256 != nil { + return 0, err256 } - i -= n243 - i = encodeVarintTypes(dAtA, i, uint64(n243)) + i -= n256 + i = encodeVarintTypes(dAtA, i, uint64(n256)) i-- dAtA[i] = 0x32 if len(m.BearerToken) > 0 { @@ -41116,20 +44978,20 @@ func (m *SAMLSessionData) MarshalToSizedBuffer(dAtA []byte) (int, error) { 
i-- dAtA[i] = 0x22 } - n244, err244 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.ExpireTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.ExpireTime):]) - if err244 != nil { - return 0, err244 + n257, err257 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.ExpireTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.ExpireTime):]) + if err257 != nil { + return 0, err257 } - i -= n244 - i = encodeVarintTypes(dAtA, i, uint64(n244)) + i -= n257 + i = encodeVarintTypes(dAtA, i, uint64(n257)) i-- dAtA[i] = 0x1a - n245, err245 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.CreateTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.CreateTime):]) - if err245 != nil { - return 0, err245 + n258, err258 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.CreateTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.CreateTime):]) + if err258 != nil { + return 0, err258 } - i -= n245 - i = encodeVarintTypes(dAtA, i, uint64(n245)) + i -= n258 + i = encodeVarintTypes(dAtA, i, uint64(n258)) i-- dAtA[i] = 0x12 if len(m.ID) > 0 { @@ -41411,12 +45273,12 @@ func (m *RemoteClusterStatusV3) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - n249, err249 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.LastHeartbeat, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.LastHeartbeat):]) - if err249 != nil { - return 0, err249 + n262, err262 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.LastHeartbeat, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.LastHeartbeat):]) + if err262 != nil { + return 0, err262 } - i -= n249 - i = encodeVarintTypes(dAtA, i, uint64(n249)) + i -= n262 + i = encodeVarintTypes(dAtA, i, uint64(n262)) i-- dAtA[i] = 0x12 if len(m.Connection) > 0 { @@ -41878,6 +45740,25 @@ func (m *KubernetesServerV3) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.Scope) > 0 { + i 
-= len(m.Scope) + copy(dAtA[i:], m.Scope) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Scope))) + i-- + dAtA[i] = 0x3a + } + if m.Status != nil { + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x32 + } { size, err := m.Spec.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -41946,6 +45827,22 @@ func (m *KubernetesServerSpecV3) MarshalToSizedBuffer(dAtA []byte) (int, error) i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.RelayIds) > 0 { + for iNdEx := len(m.RelayIds) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.RelayIds[iNdEx]) + copy(dAtA[i:], m.RelayIds[iNdEx]) + i = encodeVarintTypes(dAtA, i, uint64(len(m.RelayIds[iNdEx]))) + i-- + dAtA[i] = 0x42 + } + } + if len(m.RelayGroup) > 0 { + i -= len(m.RelayGroup) + copy(dAtA[i:], m.RelayGroup) + i = encodeVarintTypes(dAtA, i, uint64(len(m.RelayGroup))) + i-- + dAtA[i] = 0x3a + } if len(m.ProxyIDs) > 0 { for iNdEx := len(m.ProxyIDs) - 1; iNdEx >= 0; iNdEx-- { i -= len(m.ProxyIDs[iNdEx]) @@ -42001,6 +45898,45 @@ func (m *KubernetesServerSpecV3) MarshalToSizedBuffer(dAtA []byte) (int, error) return len(dAtA) - i, nil } +func (m *KubernetesServerStatusV3) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *KubernetesServerStatusV3) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *KubernetesServerStatusV3) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.TargetHealth != nil { + { + size, err := m.TargetHealth.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = 
encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + func (m *WebTokenV3) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -42617,6 +46553,40 @@ func (m *OIDCConnectorSpecV3) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.EntraIdGroupsProvider != nil { + { + size, err := m.EntraIdGroupsProvider.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0xba + } + if len(m.RequestObjectMode) > 0 { + i -= len(m.RequestObjectMode) + copy(dAtA[i:], m.RequestObjectMode) + i = encodeVarintTypes(dAtA, i, uint64(len(m.RequestObjectMode))) + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0xb2 + } + if len(m.UserMatchers) > 0 { + for iNdEx := len(m.UserMatchers) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.UserMatchers[iNdEx]) + copy(dAtA[i:], m.UserMatchers[iNdEx]) + i = encodeVarintTypes(dAtA, i, uint64(len(m.UserMatchers[iNdEx]))) + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0xaa + } + } if len(m.PKCEMode) > 0 { i -= len(m.PKCEMode) copy(dAtA[i:], m.PKCEMode) @@ -42793,6 +46763,57 @@ func (m *OIDCConnectorSpecV3) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *EntraIDGroupsProvider) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *EntraIDGroupsProvider) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *EntraIDGroupsProvider) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.GraphEndpoint) > 0 { + i -= 
len(m.GraphEndpoint) + copy(dAtA[i:], m.GraphEndpoint) + i = encodeVarintTypes(dAtA, i, uint64(len(m.GraphEndpoint))) + i-- + dAtA[i] = 0x1a + } + if len(m.GroupType) > 0 { + i -= len(m.GroupType) + copy(dAtA[i:], m.GroupType) + i = encodeVarintTypes(dAtA, i, uint64(len(m.GroupType))) + i-- + dAtA[i] = 0x12 + } + if m.Disabled { + i-- + if m.Disabled { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + func (m *MaxAge) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -42894,6 +46915,13 @@ func (m *OIDCConnectorMFASettings) MarshalToSizedBuffer(dAtA []byte) (int, error i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.RequestObjectMode) > 0 { + i -= len(m.RequestObjectMode) + copy(dAtA[i:], m.RequestObjectMode) + i = encodeVarintTypes(dAtA, i, uint64(len(m.RequestObjectMode))) + i-- + dAtA[i] = 0x3a + } if m.MaxAge != 0 { i = encodeVarintTypes(dAtA, i, uint64(m.MaxAge)) i-- @@ -42964,6 +46992,15 @@ func (m *OIDCAuthRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.LoginHint) > 0 { + i -= len(m.LoginHint) + copy(dAtA[i:], m.LoginHint) + i = encodeVarintTypes(dAtA, i, uint64(len(m.LoginHint))) + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0xca + } if len(m.PkceVerifier) > 0 { i -= len(m.PkceVerifier) copy(dAtA[i:], m.PkceVerifier) @@ -43292,6 +47329,29 @@ func (m *SAMLConnectorSpecV2) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.IncludeSubject { + i-- + if m.IncludeSubject { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0xa8 + } + if len(m.UserMatchers) > 0 { + for iNdEx := len(m.UserMatchers) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.UserMatchers[iNdEx]) + copy(dAtA[i:], m.UserMatchers[iNdEx]) + i = encodeVarintTypes(dAtA, i, uint64(len(m.UserMatchers[iNdEx]))) + i-- + 
dAtA[i] = 0x1 + i-- + dAtA[i] = 0xa2 + } + } if len(m.PreferredRequestBinding) > 0 { i -= len(m.PreferredRequestBinding) copy(dAtA[i:], m.PreferredRequestBinding) @@ -43565,6 +47625,15 @@ func (m *SAMLAuthRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.SubjectIdentifier) > 0 { + i -= len(m.SubjectIdentifier) + copy(dAtA[i:], m.SubjectIdentifier) + i = encodeVarintTypes(dAtA, i, uint64(len(m.SubjectIdentifier))) + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0xca + } if len(m.ClientVersion) > 0 { i -= len(m.ClientVersion) copy(dAtA[i:], m.ClientVersion) @@ -43984,6 +48053,15 @@ func (m *GithubConnectorSpecV3) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.UserMatchers) > 0 { + for iNdEx := len(m.UserMatchers) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.UserMatchers[iNdEx]) + copy(dAtA[i:], m.UserMatchers[iNdEx]) + i = encodeVarintTypes(dAtA, i, uint64(len(m.UserMatchers[iNdEx]))) + i-- + dAtA[i] = 0x52 + } + } if m.ClientRedirectSettings != nil { { size, err := m.ClientRedirectSettings.MarshalToSizedBuffer(dAtA[:i]) @@ -44203,12 +48281,12 @@ func (m *GithubAuthRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { dAtA[i] = 0x62 } if m.Expires != nil { - n287, err287 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.Expires):]) - if err287 != nil { - return 0, err287 + n303, err303 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.Expires):]) + if err303 != nil { + return 0, err303 } - i -= n287 - i = encodeVarintTypes(dAtA, i, uint64(n287)) + i -= n303 + i = encodeVarintTypes(dAtA, i, uint64(n303)) i-- dAtA[i] = 0x5a } @@ -45220,21 +49298,21 @@ func (m *LockSpecV2) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x2a } - n305, err305 := 
github_com_gogo_protobuf_types.StdTimeMarshalTo(m.CreatedAt, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.CreatedAt):]) - if err305 != nil { - return 0, err305 + n321, err321 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.CreatedAt, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.CreatedAt):]) + if err321 != nil { + return 0, err321 } - i -= n305 - i = encodeVarintTypes(dAtA, i, uint64(n305)) + i -= n321 + i = encodeVarintTypes(dAtA, i, uint64(n321)) i-- dAtA[i] = 0x22 if m.Expires != nil { - n306, err306 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.Expires):]) - if err306 != nil { - return 0, err306 + n322, err322 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.Expires):]) + if err322 != nil { + return 0, err322 } - i -= n306 - i = encodeVarintTypes(dAtA, i, uint64(n306)) + i -= n322 + i = encodeVarintTypes(dAtA, i, uint64(n322)) i-- dAtA[i] = 0x1a } @@ -45282,6 +49360,20 @@ func (m *LockTarget) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.JoinToken) > 0 { + i -= len(m.JoinToken) + copy(dAtA[i:], m.JoinToken) + i = encodeVarintTypes(dAtA, i, uint64(len(m.JoinToken))) + i-- + dAtA[i] = 0x5a + } + if len(m.BotInstanceID) > 0 { + i -= len(m.BotInstanceID) + copy(dAtA[i:], m.BotInstanceID) + i = encodeVarintTypes(dAtA, i, uint64(len(m.BotInstanceID))) + i-- + dAtA[i] = 0x52 + } if len(m.ServerID) > 0 { i -= len(m.ServerID) copy(dAtA[i:], m.ServerID) @@ -45341,6 +49433,57 @@ func (m *LockTarget) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *LockFilter) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *LockFilter) MarshalTo(dAtA 
[]byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *LockFilter) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.InForceOnly { + i-- + if m.InForceOnly { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x10 + } + if len(m.Targets) > 0 { + for iNdEx := len(m.Targets) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Targets[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + func (m *AddressCondition) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -45569,6 +49712,22 @@ func (m *WindowsDesktopServiceSpecV3) MarshalToSizedBuffer(dAtA []byte) (int, er i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.RelayIds) > 0 { + for iNdEx := len(m.RelayIds) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.RelayIds[iNdEx]) + copy(dAtA[i:], m.RelayIds[iNdEx]) + i = encodeVarintTypes(dAtA, i, uint64(len(m.RelayIds[iNdEx]))) + i-- + dAtA[i] = 0x32 + } + } + if len(m.RelayGroup) > 0 { + i -= len(m.RelayGroup) + copy(dAtA[i:], m.RelayGroup) + i = encodeVarintTypes(dAtA, i, uint64(len(m.RelayGroup))) + i-- + dAtA[i] = 0x2a + } if len(m.ProxyIDs) > 0 { for iNdEx := len(m.ProxyIDs) - 1; iNdEx >= 0; iNdEx-- { i -= len(m.ProxyIDs[iNdEx]) @@ -45951,12 +50110,12 @@ func (m *RegisterUsingTokenRequest) MarshalToSizedBuffer(dAtA []byte) (int, erro dAtA[i] = 0x6a } if m.Expires != nil { - n318, err318 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.Expires):]) - if err318 != nil { - return 0, err318 + n334, err334 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.Expires, 
dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.Expires):]) + if err334 != nil { + return 0, err334 } - i -= n318 - i = encodeVarintTypes(dAtA, i, uint64(n318)) + i -= n334 + i = encodeVarintTypes(dAtA, i, uint64(n334)) i-- dAtA[i] = 0x62 } @@ -46136,12 +50295,12 @@ func (m *RecoveryCodesSpecV1) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - n321, err321 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Created, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Created):]) - if err321 != nil { - return 0, err321 + n337, err337 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Created, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Created):]) + if err337 != nil { + return 0, err337 } - i -= n321 - i = encodeVarintTypes(dAtA, i, uint64(n321)) + i -= n337 + i = encodeVarintTypes(dAtA, i, uint64(n337)) i-- dAtA[i] = 0x12 if len(m.Codes) > 0 { @@ -46521,20 +50680,20 @@ func (m *SessionTrackerSpecV1) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x32 } - n325, err325 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) - if err325 != nil { - return 0, err325 + n341, err341 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) + if err341 != nil { + return 0, err341 } - i -= n325 - i = encodeVarintTypes(dAtA, i, uint64(n325)) + i -= n341 + i = encodeVarintTypes(dAtA, i, uint64(n341)) i-- dAtA[i] = 0x2a - n326, err326 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Created, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Created):]) - if err326 != nil { - return 0, err326 + n342, err342 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Created, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Created):]) + if err342 != nil { + return 0, err342 } - i -= n326 - i = encodeVarintTypes(dAtA, i, 
uint64(n326)) + i -= n342 + i = encodeVarintTypes(dAtA, i, uint64(n342)) i-- dAtA[i] = 0x22 if m.State != 0 { @@ -46638,12 +50797,19 @@ func (m *Participant) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - n327, err327 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.LastActive, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.LastActive):]) - if err327 != nil { - return 0, err327 + if len(m.Cluster) > 0 { + i -= len(m.Cluster) + copy(dAtA[i:], m.Cluster) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Cluster))) + i-- + dAtA[i] = 0x2a + } + n343, err343 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.LastActive, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.LastActive):]) + if err343 != nil { + return 0, err343 } - i -= n327 - i = encodeVarintTypes(dAtA, i, uint64(n327)) + i -= n343 + i = encodeVarintTypes(dAtA, i, uint64(n343)) i-- dAtA[i] = 0x22 if len(m.Mode) > 0 { @@ -47355,12 +51521,12 @@ func (m *ClusterAlertSpec) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - n340, err340 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Created, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Created):]) - if err340 != nil { - return 0, err340 + n356, err356 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Created, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Created):]) + if err356 != nil { + return 0, err356 } - i -= n340 - i = encodeVarintTypes(dAtA, i, uint64(n340)) + i -= n356 + i = encodeVarintTypes(dAtA, i, uint64(n356)) i-- dAtA[i] = 0x1a if len(m.Message) > 0 { @@ -47490,12 +51656,12 @@ func (m *AlertAcknowledgement) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - n341, err341 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) - if err341 != 
nil { - return 0, err341 + n357, err357 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) + if err357 != nil { + return 0, err357 } - i -= n341 - i = encodeVarintTypes(dAtA, i, uint64(n341)) + i -= n357 + i = encodeVarintTypes(dAtA, i, uint64(n357)) i-- dAtA[i] = 0x22 if len(m.Reason) > 0 { @@ -48223,6 +52389,29 @@ func (m *PluginSpecV1_Github) MarshalToSizedBuffer(dAtA []byte) (int, error) { } return len(dAtA) - i, nil } +func (m *PluginSpecV1_Intune) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *PluginSpecV1_Intune) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + if m.Intune != nil { + { + size, err := m.Intune.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0xaa + } + return len(dAtA) - i, nil +} func (m *PluginGithubSettings) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -48247,12 +52436,12 @@ func (m *PluginGithubSettings) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - n365, err365 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.StartDate, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.StartDate):]) - if err365 != nil { - return 0, err365 + n382, err382 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.StartDate, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.StartDate):]) + if err382 != nil { + return 0, err382 } - i -= n365 - i = encodeVarintTypes(dAtA, i, uint64(n365)) + i -= n382 + i = encodeVarintTypes(dAtA, i, uint64(n382)) i-- dAtA[i] = 0x22 if len(m.OrganizationName) > 0 { @@ -48671,6 +52860,54 @@ func (m *PluginJamfSettings) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *PluginIntuneSettings) 
Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *PluginIntuneSettings) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *PluginIntuneSettings) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.GraphEndpoint) > 0 { + i -= len(m.GraphEndpoint) + copy(dAtA[i:], m.GraphEndpoint) + i = encodeVarintTypes(dAtA, i, uint64(len(m.GraphEndpoint))) + i-- + dAtA[i] = 0x1a + } + if len(m.LoginEndpoint) > 0 { + i -= len(m.LoginEndpoint) + copy(dAtA[i:], m.LoginEndpoint) + i = encodeVarintTypes(dAtA, i, uint64(len(m.LoginEndpoint))) + i-- + dAtA[i] = 0x12 + } + if len(m.Tenant) > 0 { + i -= len(m.Tenant) + copy(dAtA[i:], m.Tenant) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Tenant))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + func (m *PluginOktaSettings) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -48827,6 +53064,13 @@ func (m *PluginOktaSyncSettings) MarshalToSizedBuffer(dAtA []byte) (int, error) i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.TimeBetweenImports) > 0 { + i -= len(m.TimeBetweenImports) + copy(dAtA[i:], m.TimeBetweenImports) + i = encodeVarintTypes(dAtA, i, uint64(len(m.TimeBetweenImports))) + i-- + dAtA[i] = 0x72 + } if m.DisableAssignDefaultRoles { i-- if m.DisableAssignDefaultRoles { @@ -49109,6 +53353,20 @@ func (m *PluginEntraIDSyncSettings) MarshalToSizedBuffer(dAtA []byte) (int, erro i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.GroupFilters) > 0 { + for iNdEx := len(m.GroupFilters) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := 
m.GroupFilters[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x32 + } + } if len(m.EntraAppId) > 0 { i -= len(m.EntraAppId) copy(dAtA[i:], m.EntraAppId) @@ -49147,6 +53405,107 @@ func (m *PluginEntraIDSyncSettings) MarshalToSizedBuffer(dAtA []byte) (int, erro return len(dAtA) - i, nil } +func (m *PluginSyncFilter) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *PluginSyncFilter) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *PluginSyncFilter) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.Exclude != nil { + { + size := m.Exclude.Size() + i -= size + if _, err := m.Exclude.MarshalTo(dAtA[i:]); err != nil { + return 0, err + } + } + } + if m.Include != nil { + { + size := m.Include.Size() + i -= size + if _, err := m.Include.MarshalTo(dAtA[i:]); err != nil { + return 0, err + } + } + } + return len(dAtA) - i, nil +} + +func (m *PluginSyncFilter_Id) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *PluginSyncFilter_Id) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + i -= len(m.Id) + copy(dAtA[i:], m.Id) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Id))) + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} +func (m *PluginSyncFilter_NameRegex) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *PluginSyncFilter_NameRegex) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + i -= len(m.NameRegex) + copy(dAtA[i:], 
m.NameRegex) + i = encodeVarintTypes(dAtA, i, uint64(len(m.NameRegex))) + i-- + dAtA[i] = 0x12 + return len(dAtA) - i, nil +} +func (m *PluginSyncFilter_ExcludeId) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *PluginSyncFilter_ExcludeId) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + i -= len(m.ExcludeId) + copy(dAtA[i:], m.ExcludeId) + i = encodeVarintTypes(dAtA, i, uint64(len(m.ExcludeId))) + i-- + dAtA[i] = 0x1a + return len(dAtA) - i, nil +} +func (m *PluginSyncFilter_ExcludeNameRegex) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *PluginSyncFilter_ExcludeNameRegex) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + i -= len(m.ExcludeNameRegex) + copy(dAtA[i:], m.ExcludeNameRegex) + i = encodeVarintTypes(dAtA, i, uint64(len(m.ExcludeNameRegex))) + i-- + dAtA[i] = 0x22 + return len(dAtA) - i, nil +} func (m *PluginEntraIDAccessGraphSettings) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -49253,6 +53612,18 @@ func (m *PluginSCIMSettings) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.ConnectorInfo != nil { + { + size, err := m.ConnectorInfo.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + } if len(m.DefaultRole) > 0 { i -= len(m.DefaultRole) copy(dAtA[i:], m.DefaultRole) @@ -49270,6 +53641,47 @@ func (m *PluginSCIMSettings) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *PluginSCIMSettings_ConnectorInfo) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m 
*PluginSCIMSettings_ConnectorInfo) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *PluginSCIMSettings_ConnectorInfo) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.Type) > 0 { + i -= len(m.Type) + copy(dAtA[i:], m.Type) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Type))) + i-- + dAtA[i] = 0x12 + } + if len(m.Name) > 0 { + i -= len(m.Name) + copy(dAtA[i:], m.Name) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Name))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + func (m *PluginDatadogAccessSettings) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -49335,6 +53747,13 @@ func (m *PluginAWSICSettings) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.RolesSyncMode) > 0 { + i -= len(m.RolesSyncMode) + copy(dAtA[i:], m.RolesSyncMode) + i = encodeVarintTypes(dAtA, i, uint64(len(m.RolesSyncMode))) + i-- + dAtA[i] = 0x62 + } if m.Credentials != nil { { size, err := m.Credentials.MarshalToSizedBuffer(dAtA[:i]) @@ -50364,12 +54783,12 @@ func (m *PluginStatusV1) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x32 } - n382, err382 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.LastSyncTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.LastSyncTime):]) - if err382 != nil { - return 0, err382 + n400, err400 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.LastSyncTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.LastSyncTime):]) + if err400 != nil { + return 0, err400 } - i -= n382 - i = encodeVarintTypes(dAtA, i, uint64(n382)) + i -= n400 + i = encodeVarintTypes(dAtA, i, uint64(n400)) i-- dAtA[i] = 0x1a if len(m.ErrorMessage) > 0 { @@ -50642,6 +55061,18 @@ func (m *PluginOktaStatusV1) 
MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.SystemLogExportDetails != nil { + { + size, err := m.SystemLogExportDetails.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x32 + } if m.AccessListsSyncDetails != nil { { size, err := m.AccessListsSyncDetails.MarshalToSizedBuffer(dAtA[:i]) @@ -50807,22 +55238,22 @@ func (m *PluginOktaStatusDetailsAppGroupSync) MarshalToSizedBuffer(dAtA []byte) dAtA[i] = 0x28 } if m.LastFailed != nil { - n393, err393 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.LastFailed, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.LastFailed):]) - if err393 != nil { - return 0, err393 + n412, err412 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.LastFailed, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.LastFailed):]) + if err412 != nil { + return 0, err412 } - i -= n393 - i = encodeVarintTypes(dAtA, i, uint64(n393)) + i -= n412 + i = encodeVarintTypes(dAtA, i, uint64(n412)) i-- dAtA[i] = 0x22 } if m.LastSuccessful != nil { - n394, err394 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.LastSuccessful, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.LastSuccessful):]) - if err394 != nil { - return 0, err394 + n413, err413 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.LastSuccessful, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.LastSuccessful):]) + if err413 != nil { + return 0, err413 } - i -= n394 - i = encodeVarintTypes(dAtA, i, uint64(n394)) + i -= n413 + i = encodeVarintTypes(dAtA, i, uint64(n413)) i-- dAtA[i] = 0x1a } @@ -50881,22 +55312,22 @@ func (m *PluginOktaStatusDetailsUsersSync) MarshalToSizedBuffer(dAtA []byte) (in dAtA[i] = 0x28 } if m.LastFailed != nil { - n395, err395 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.LastFailed, 
dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.LastFailed):]) - if err395 != nil { - return 0, err395 + n414, err414 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.LastFailed, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.LastFailed):]) + if err414 != nil { + return 0, err414 } - i -= n395 - i = encodeVarintTypes(dAtA, i, uint64(n395)) + i -= n414 + i = encodeVarintTypes(dAtA, i, uint64(n414)) i-- dAtA[i] = 0x22 } if m.LastSuccessful != nil { - n396, err396 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.LastSuccessful, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.LastSuccessful):]) - if err396 != nil { - return 0, err396 + n415, err415 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.LastSuccessful, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.LastSuccessful):]) + if err415 != nil { + return 0, err415 } - i -= n396 - i = encodeVarintTypes(dAtA, i, uint64(n396)) + i -= n415 + i = encodeVarintTypes(dAtA, i, uint64(n415)) i-- dAtA[i] = 0x1a } @@ -51015,22 +55446,91 @@ func (m *PluginOktaStatusDetailsAccessListsSync) MarshalToSizedBuffer(dAtA []byt } } if m.LastFailed != nil { - n397, err397 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.LastFailed, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.LastFailed):]) - if err397 != nil { - return 0, err397 + n416, err416 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.LastFailed, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.LastFailed):]) + if err416 != nil { + return 0, err416 + } + i -= n416 + i = encodeVarintTypes(dAtA, i, uint64(n416)) + i-- + dAtA[i] = 0x22 + } + if m.LastSuccessful != nil { + n417, err417 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.LastSuccessful, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.LastSuccessful):]) + if err417 != nil { + return 0, err417 + } + i -= n417 + i = encodeVarintTypes(dAtA, i, uint64(n417)) + i-- + dAtA[i] = 0x1a + } + if m.StatusCode != 0 { + i = encodeVarintTypes(dAtA, i, 
uint64(m.StatusCode)) + i-- + dAtA[i] = 0x10 + } + if m.Enabled { + i-- + if m.Enabled { + dAtA[i] = 1 + } else { + dAtA[i] = 0 + } + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + +func (m *PluginOktaStatusSystemLogExporter) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *PluginOktaStatusSystemLogExporter) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *PluginOktaStatusSystemLogExporter) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.Error) > 0 { + i -= len(m.Error) + copy(dAtA[i:], m.Error) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Error))) + i-- + dAtA[i] = 0x4a + } + if m.LastFailed != nil { + n418, err418 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.LastFailed, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.LastFailed):]) + if err418 != nil { + return 0, err418 } - i -= n397 - i = encodeVarintTypes(dAtA, i, uint64(n397)) + i -= n418 + i = encodeVarintTypes(dAtA, i, uint64(n418)) i-- dAtA[i] = 0x22 } if m.LastSuccessful != nil { - n398, err398 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.LastSuccessful, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.LastSuccessful):]) - if err398 != nil { - return 0, err398 + n419, err419 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.LastSuccessful, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.LastSuccessful):]) + if err419 != nil { + return 0, err419 } - i -= n398 - i = encodeVarintTypes(dAtA, i, uint64(n398)) + i -= n419 + i = encodeVarintTypes(dAtA, i, uint64(n419)) i-- dAtA[i] = 0x1a } @@ -51196,12 +55696,12 @@ func (m *PluginOAuth2AccessTokenCredentials) 
MarshalToSizedBuffer(dAtA []byte) ( i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - n403, err403 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) - if err403 != nil { - return 0, err403 + n424, err424 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Expires, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Expires):]) + if err424 != nil { + return 0, err424 } - i -= n403 - i = encodeVarintTypes(dAtA, i, uint64(n403)) + i -= n424 + i = encodeVarintTypes(dAtA, i, uint64(n424)) i-- dAtA[i] = 0x1a if len(m.RefreshToken) > 0 { @@ -52159,20 +56659,20 @@ func (m *ScheduledAgentUpgradeWindow) MarshalToSizedBuffer(dAtA []byte) (int, er i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } - n418, err418 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Stop, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Stop):]) - if err418 != nil { - return 0, err418 + n439, err439 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Stop, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Stop):]) + if err439 != nil { + return 0, err439 } - i -= n418 - i = encodeVarintTypes(dAtA, i, uint64(n418)) + i -= n439 + i = encodeVarintTypes(dAtA, i, uint64(n439)) i-- dAtA[i] = 0x12 - n419, err419 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Start, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Start):]) - if err419 != nil { - return 0, err419 + n440, err440 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Start, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Start):]) + if err440 != nil { + return 0, err440 } - i -= n419 - i = encodeVarintTypes(dAtA, i, uint64(n419)) + i -= n440 + i = encodeVarintTypes(dAtA, i, uint64(n440)) i-- dAtA[i] = 0xa return len(dAtA) - i, nil @@ -52599,12 +57099,12 @@ func (m *OktaAssignmentSpecV1) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x30 } - n426, err426 := 
github_com_gogo_protobuf_types.StdTimeMarshalTo(m.LastTransition, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.LastTransition):]) - if err426 != nil { - return 0, err426 + n447, err447 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.LastTransition, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.LastTransition):]) + if err447 != nil { + return 0, err447 } - i -= n426 - i = encodeVarintTypes(dAtA, i, uint64(n426)) + i -= n447 + i = encodeVarintTypes(dAtA, i, uint64(n447)) i-- dAtA[i] = 0x2a if m.Status != 0 { @@ -52612,12 +57112,12 @@ func (m *OktaAssignmentSpecV1) MarshalToSizedBuffer(dAtA []byte) (int, error) { i-- dAtA[i] = 0x20 } - n427, err427 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.CleanupTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.CleanupTime):]) - if err427 != nil { - return 0, err427 + n448, err448 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.CleanupTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.CleanupTime):]) + if err448 != nil { + return 0, err448 } - i -= n427 - i = encodeVarintTypes(dAtA, i, uint64(n427)) + i -= n448 + i = encodeVarintTypes(dAtA, i, uint64(n448)) i-- dAtA[i] = 0x1a if len(m.Targets) > 0 { @@ -52707,6 +57207,16 @@ func (m *IntegrationV1) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a { size, err := m.Spec.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -52862,6 +57372,45 @@ func (m *IntegrationSpecV1_AWSRA) MarshalToSizedBuffer(dAtA []byte) (int, error) } return len(dAtA) - i, nil } +func (m *IntegrationStatusV1) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m 
*IntegrationStatusV1) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *IntegrationStatusV1) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.AWSRolesAnywhere != nil { + { + size, err := m.AWSRolesAnywhere.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + func (m *AWSOIDCIntegrationSpecV1) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -53055,6 +57604,15 @@ func (m *AWSRolesAnywhereProfileSyncConfig) MarshalToSizedBuffer(dAtA []byte) (i i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.ProfileNameFilters) > 0 { + for iNdEx := len(m.ProfileNameFilters) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.ProfileNameFilters[iNdEx]) + copy(dAtA[i:], m.ProfileNameFilters[iNdEx]) + i = encodeVarintTypes(dAtA, i, uint64(len(m.ProfileNameFilters[iNdEx]))) + i-- + dAtA[i] = 0x2a + } + } if len(m.RoleARN) > 0 { i -= len(m.RoleARN) copy(dAtA[i:], m.RoleARN) @@ -53092,6 +57650,107 @@ func (m *AWSRolesAnywhereProfileSyncConfig) MarshalToSizedBuffer(dAtA []byte) (i return len(dAtA) - i, nil } +func (m *AWSRAIntegrationStatusV1) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *AWSRAIntegrationStatusV1) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *AWSRAIntegrationStatusV1) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], 
m.XXX_unrecognized) + } + if m.LastProfileSync != nil { + { + size, err := m.LastProfileSync.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + +func (m *AWSRolesAnywhereProfileSyncIterationSummary) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *AWSRolesAnywhereProfileSyncIterationSummary) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *AWSRolesAnywhereProfileSyncIterationSummary) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.SyncedProfiles != 0 { + i = encodeVarintTypes(dAtA, i, uint64(m.SyncedProfiles)) + i-- + dAtA[i] = 0x28 + } + if len(m.ErrorMessage) > 0 { + i -= len(m.ErrorMessage) + copy(dAtA[i:], m.ErrorMessage) + i = encodeVarintTypes(dAtA, i, uint64(len(m.ErrorMessage))) + i-- + dAtA[i] = 0x22 + } + if len(m.Status) > 0 { + i -= len(m.Status) + copy(dAtA[i:], m.Status) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Status))) + i-- + dAtA[i] = 0x1a + } + n460, err460 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.EndTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.EndTime):]) + if err460 != nil { + return 0, err460 + } + i -= n460 + i = encodeVarintTypes(dAtA, i, uint64(n460)) + i-- + dAtA[i] = 0x12 + n461, err461 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.StartTime, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.StartTime):]) + if err461 != nil { + return 0, err461 + } + i -= n461 + i = encodeVarintTypes(dAtA, i, uint64(n461)) + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + func (m 
*HeadlessAuthentication) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -53668,6 +58327,18 @@ func (m *AWSMatcher) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.Organization != nil { + { + size, err := m.Organization.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x52 + } if len(m.SetupAccessForARN) > 0 { i -= len(m.SetupAccessForARN) copy(dAtA[i:], m.SetupAccessForARN) @@ -53759,6 +58430,97 @@ func (m *AWSMatcher) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *AWSOrganizationMatcher) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *AWSOrganizationMatcher) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *AWSOrganizationMatcher) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if m.OrganizationalUnits != nil { + { + size, err := m.OrganizationalUnits.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + } + if len(m.OrganizationID) > 0 { + i -= len(m.OrganizationID) + copy(dAtA[i:], m.OrganizationID) + i = encodeVarintTypes(dAtA, i, uint64(len(m.OrganizationID))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + +func (m *AWSOrganizationUnitsMatcher) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return 
dAtA[:n], nil +} + +func (m *AWSOrganizationUnitsMatcher) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *AWSOrganizationUnitsMatcher) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.Exclude) > 0 { + for iNdEx := len(m.Exclude) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.Exclude[iNdEx]) + copy(dAtA[i:], m.Exclude[iNdEx]) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Exclude[iNdEx]))) + i-- + dAtA[i] = 0x12 + } + } + if len(m.Include) > 0 { + for iNdEx := len(m.Include) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.Include[iNdEx]) + copy(dAtA[i:], m.Include[iNdEx]) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Include[iNdEx]))) + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + func (m *AssumeRole) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -53824,6 +58586,32 @@ func (m *InstallerParams) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.HTTPProxySettings != nil { + { + size, err := m.HTTPProxySettings.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x5a + } + if len(m.UpdateGroup) > 0 { + i -= len(m.UpdateGroup) + copy(dAtA[i:], m.UpdateGroup) + i = encodeVarintTypes(dAtA, i, uint64(len(m.UpdateGroup))) + i-- + dAtA[i] = 0x52 + } + if len(m.Suffix) > 0 { + i -= len(m.Suffix) + copy(dAtA[i:], m.Suffix) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Suffix))) + i-- + dAtA[i] = 0x4a + } if m.EnrollMode != 0 { i = encodeVarintTypes(dAtA, i, uint64(m.EnrollMode)) i-- @@ -53889,6 +58677,54 @@ func (m *InstallerParams) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *HTTPProxySettings) Marshal() 
(dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *HTTPProxySettings) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *HTTPProxySettings) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + if len(m.NoProxy) > 0 { + i -= len(m.NoProxy) + copy(dAtA[i:], m.NoProxy) + i = encodeVarintTypes(dAtA, i, uint64(len(m.NoProxy))) + i-- + dAtA[i] = 0x1a + } + if len(m.HTTPSProxy) > 0 { + i -= len(m.HTTPSProxy) + copy(dAtA[i:], m.HTTPSProxy) + i = encodeVarintTypes(dAtA, i, uint64(len(m.HTTPSProxy))) + i-- + dAtA[i] = 0x12 + } + if len(m.HTTPProxy) > 0 { + i -= len(m.HTTPProxy) + copy(dAtA[i:], m.HTTPProxy) + i = encodeVarintTypes(dAtA, i, uint64(len(m.HTTPProxy))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + func (m *AWSSSM) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -53981,6 +58817,13 @@ func (m *AzureMatcher) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if len(m.Integration) > 0 { + i -= len(m.Integration) + copy(dAtA[i:], m.Integration) + i = encodeVarintTypes(dAtA, i, uint64(len(m.Integration))) + i-- + dAtA[i] = 0x3a + } if m.Params != nil { { size, err := m.Params.MarshalToSizedBuffer(dAtA[:i]) @@ -54262,12 +59105,12 @@ func (m *AccessGraphSync) MarshalToSizedBuffer(dAtA []byte) (int, error) { dAtA[i] = 0x1a } } - n454, err454 := github_com_gogo_protobuf_types.StdDurationMarshalTo(m.PollInterval, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdDuration(m.PollInterval):]) - if err454 != nil { - return 0, err454 + n483, err483 := 
github_com_gogo_protobuf_types.StdDurationMarshalTo(m.PollInterval, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdDuration(m.PollInterval):]) + if err483 != nil { + return 0, err483 } - i -= n454 - i = encodeVarintTypes(dAtA, i, uint64(n454)) + i -= n483 + i = encodeVarintTypes(dAtA, i, uint64(n483)) i-- dAtA[i] = 0x12 if len(m.AWS) > 0 { @@ -54328,6 +59171,43 @@ func (m *AccessGraphAWSSyncCloudTrailLogs) MarshalToSizedBuffer(dAtA []byte) (in return len(dAtA) - i, nil } +func (m *AccessGraphAWSSyncEKSAuditLogs) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *AccessGraphAWSSyncEKSAuditLogs) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *AccessGraphAWSSyncEKSAuditLogs) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.XXX_unrecognized != nil { + i -= len(m.XXX_unrecognized) + copy(dAtA[i:], m.XXX_unrecognized) + } + { + size := m.Tags.Size() + i -= size + if _, err := m.Tags.MarshalTo(dAtA[i:]); err != nil { + return 0, err + } + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + func (m *AccessGraphAWSSync) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -54352,6 +59232,18 @@ func (m *AccessGraphAWSSync) MarshalToSizedBuffer(dAtA []byte) (int, error) { i -= len(m.XXX_unrecognized) copy(dAtA[i:], m.XXX_unrecognized) } + if m.EksAuditLogs != nil { + { + size, err := m.EksAuditLogs.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintTypes(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x32 + } if m.CloudTrailLogs != nil { { size, err := m.CloudTrailLogs.MarshalToSizedBuffer(dAtA[:i]) @@ -54482,12 +59374,12 @@ func (m *TargetHealth) MarshalToSizedBuffer(dAtA 
[]byte) (int, error) { dAtA[i] = 0x2a } if m.TransitionTimestamp != nil { - n457, err457 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.TransitionTimestamp, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.TransitionTimestamp):]) - if err457 != nil { - return 0, err457 + n488, err488 := github_com_gogo_protobuf_types.StdTimeMarshalTo(*m.TransitionTimestamp, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(*m.TransitionTimestamp):]) + if err488 != nil { + return 0, err488 } - i -= n457 - i = encodeVarintTypes(dAtA, i, uint64(n457)) + i -= n488 + i = encodeVarintTypes(dAtA, i, uint64(n488)) i-- dAtA[i] = 0x22 } @@ -54700,6 +59592,10 @@ func (m *DatabaseServerV3) Size() (n int) { n += 1 + l + sovTypes(uint64(l)) l = m.Status.Size() n += 1 + l + sovTypes(uint64(l)) + l = len(m.Scope) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -54736,6 +59632,16 @@ func (m *DatabaseServerSpecV3) Size() (n int) { n += 1 + l + sovTypes(uint64(l)) } } + l = len(m.RelayGroup) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + if len(m.RelayIds) > 0 { + for _, s := range m.RelayIds { + l = len(s) + n += 1 + l + sovTypes(uint64(l)) + } + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -54889,6 +59795,12 @@ func (m *OracleOptions) Size() (n int) { if l > 0 { n += 1 + l + sovTypes(uint64(l)) } + if m.RetryCount != 0 { + n += 1 + sovTypes(uint64(m.RetryCount)) + } + if m.ShuffleHostnames { + n += 2 + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -54974,6 +59886,8 @@ func (m *AWS) Size() (n int) { } l = m.DocumentDB.Size() n += 2 + l + sovTypes(uint64(l)) + l = m.ElastiCacheServerless.Size() + n += 2 + l + sovTypes(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -55112,6 +60026,22 @@ func (m *ElastiCache) Size() (n int) { return n } +func (m *ElastiCacheServerless) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = 
len(m.CacheName) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + func (m *MemoryDB) Size() (n int) { if m == nil { return 0 @@ -55225,6 +60155,28 @@ func (m *GCPCloudSQL) Size() (n int) { if l > 0 { n += 1 + l + sovTypes(uint64(l)) } + l = m.AlloyDB.Size() + n += 1 + l + sovTypes(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *AlloyDB) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.EndpointType) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.EndpointOverride) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -55584,6 +60536,10 @@ func (m *ServerV2) Size() (n int) { n += 1 + l + sovTypes(uint64(l)) l = m.Spec.Size() n += 1 + l + sovTypes(uint64(l)) + l = len(m.Scope) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -55646,6 +60602,20 @@ func (m *ServerSpecV2) Size() (n int) { l = m.GitHub.Size() n += 1 + l + sovTypes(uint64(l)) } + l = len(m.RelayGroup) + if l > 0 { + n += 2 + l + sovTypes(uint64(l)) + } + if len(m.RelayIds) > 0 { + for _, s := range m.RelayIds { + l = len(s) + n += 2 + l + sovTypes(uint64(l)) + } + } + if m.ComponentFeatures != nil { + l = m.ComponentFeatures.Size() + n += 2 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -55746,6 +60716,10 @@ func (m *AppServerV3) Size() (n int) { n += 1 + l + sovTypes(uint64(l)) l = m.Spec.Size() n += 1 + l + sovTypes(uint64(l)) + l = len(m.Scope) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -55782,6 +60756,20 @@ func (m *AppServerSpecV3) Size() (n int) { n += 1 + l + sovTypes(uint64(l)) } } + l = len(m.RelayGroup) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + if 
len(m.RelayIds) > 0 { + for _, s := range m.RelayIds { + l = len(s) + n += 1 + l + sovTypes(uint64(l)) + } + } + if m.ComponentFeatures != nil { + l = m.ComponentFeatures.Size() + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -55993,6 +60981,36 @@ func (m *AppSpecV3) Size() (n int) { if m.UseAnyProxyPublicAddr { n += 2 } + if m.MCP != nil { + l = m.MCP.Size() + n += 1 + l + sovTypes(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *MCP) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.Command) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + if len(m.Args) > 0 { + for _, s := range m.Args { + l = len(s) + n += 1 + l + sovTypes(uint64(l)) + } + } + l = len(m.RunAsHostUser) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -56169,6 +61187,10 @@ func (m *TLSKeyPair) Size() (n int) { if m.KeyType != 0 { n += 1 + sovTypes(uint64(m.KeyType)) } + l = len(m.CRL) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -56198,6 +61220,48 @@ func (m *JWTKeyPair) Size() (n int) { return n } +func (m *EncryptionKeyPair) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.PublicKey) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.PrivateKey) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + if m.PrivateKeyType != 0 { + n += 1 + sovTypes(uint64(m.PrivateKeyType)) + } + if m.Hash != 0 { + n += 1 + sovTypes(uint64(m.Hash)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *AgeEncryptionKey) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.PublicKey) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + 
func (m *CertAuthorityV2) Size() (n int) { if m == nil { return 0 @@ -56510,6 +61574,10 @@ func (m *ProvisionTokenSpecV2) Size() (n int) { l = m.AzureDevops.Size() n += 2 + l + sovTypes(uint64(l)) } + if m.Env0 != nil { + l = m.Env0.Size() + n += 2 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -56864,6 +61932,9 @@ func (m *ProvisionTokenSpecV2Spacelift) Size() (n int) { if l > 0 { n += 1 + l + sovTypes(uint64(l)) } + if m.EnableGlobMatching { + n += 2 + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -56918,6 +61989,10 @@ func (m *ProvisionTokenSpecV2Kubernetes) Size() (n int) { l = m.StaticJWKS.Size() n += 1 + l + sovTypes(uint64(l)) } + if m.OIDC != nil { + l = m.OIDC.Size() + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -56940,6 +62015,25 @@ func (m *ProvisionTokenSpecV2Kubernetes_StaticJWKSConfig) Size() (n int) { return n } +func (m *ProvisionTokenSpecV2Kubernetes_OIDCConfig) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.Issuer) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + if m.InsecureAllowHTTPIssuer { + n += 2 + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + func (m *ProvisionTokenSpecV2Kubernetes_Rule) Size() (n int) { if m == nil { return 0 @@ -57204,6 +62298,86 @@ func (m *ProvisionTokenSpecV2Oracle_Rule) Size() (n int) { n += 1 + l + sovTypes(uint64(l)) } } + if len(m.Instances) > 0 { + for _, s := range m.Instances { + l = len(s) + n += 1 + l + sovTypes(uint64(l)) + } + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ProvisionTokenSpecV2Env0) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.Allow) > 0 { + for _, e := range m.Allow { + l = e.Size() + n += 1 + l + sovTypes(uint64(l)) + } + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + 
+func (m *ProvisionTokenSpecV2Env0_Rule) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.OrganizationID) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.ProjectID) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.ProjectName) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.TemplateID) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.TemplateName) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.EnvironmentID) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.EnvironmentName) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.WorkspaceName) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.DeploymentType) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.DeployerEmail) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.Env0Tag) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -57675,6 +62849,76 @@ func (m *SessionRecordingConfigV2) Size() (n int) { n += 1 + l + sovTypes(uint64(l)) l = m.Spec.Size() n += 1 + l + sovTypes(uint64(l)) + if m.Status != nil { + l = m.Status.Size() + n += 1 + l + sovTypes(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *KeyLabel) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.Type) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.Label) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *ManualKeyManagementConfig) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.Enabled { + n += 2 + } + if len(m.ActiveKeys) > 0 { + for _, e := range m.ActiveKeys { + l = e.Size() + n += 1 + l + sovTypes(uint64(l)) + } + } + if len(m.RotatedKeys) > 0 { + for _, e := range m.RotatedKeys { + l = 
e.Size() + n += 1 + l + sovTypes(uint64(l)) + } + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *SessionRecordingEncryptionConfig) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.Enabled { + n += 2 + } + if m.ManualKeyManagement != nil { + l = m.ManualKeyManagement.Size() + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -57695,6 +62939,28 @@ func (m *SessionRecordingConfigSpecV2) Size() (n int) { l = m.ProxyChecksHostKeys.Size() n += 1 + l + sovTypes(uint64(l)) } + if m.Encryption != nil { + l = m.Encryption.Size() + n += 1 + l + sovTypes(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *SessionRecordingConfigStatus) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.EncryptionKeys) > 0 { + for _, e := range m.EncryptionKeys { + l = e.Size() + n += 1 + l + sovTypes(uint64(l)) + } + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -58402,6 +63668,13 @@ func (m *AccessRequestSpecV3) Size() (n int) { l = m.DryRunEnrichment.Size() n += 2 + l + sovTypes(uint64(l)) } + if m.RequestKind != 0 { + n += 2 + sovTypes(uint64(m.RequestKind)) + } + if m.LongTermGrouping != nil { + l = m.LongTermGrouping.Size() + n += 2 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -58519,6 +63792,52 @@ func (m *AccessCapabilitiesRequest) Size() (n int) { return n } +func (m *RemoteAccessCapabilities) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.ApplicableRolesForResources) > 0 { + for _, s := range m.ApplicableRolesForResources { + l = len(s) + n += 1 + l + sovTypes(uint64(l)) + } + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *RemoteAccessCapabilitiesRequest) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = 
len(m.User) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + if len(m.SearchAsRoles) > 0 { + for _, s := range m.SearchAsRoles { + l = len(s) + n += 1 + l + sovTypes(uint64(l)) + } + } + if len(m.ResourceIDs) > 0 { + for _, e := range m.ResourceIDs { + l = e.Size() + n += 1 + l + sovTypes(uint64(l)) + } + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + func (m *RequestKubernetesResource) Size() (n int) { if m == nil { return 0 @@ -59203,6 +64522,10 @@ func (m *RoleConditions) Size() (n int) { if l > 0 { n += 2 + l + sovTypes(uint64(l)) } + if m.MCP != nil { + l = m.MCP.Size() + n += 2 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -59247,6 +64570,24 @@ func (m *GitHubPermission) Size() (n int) { return n } +func (m *MCPPermissions) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.Tools) > 0 { + for _, s := range m.Tools { + l = len(s) + n += 1 + l + sovTypes(uint64(l)) + } + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + func (m *SPIFFERoleCondition) Size() (n int) { if m == nil { return 0 @@ -59469,6 +64810,10 @@ func (m *AccessRequestConditionsReason) Size() (n int) { if l > 0 { n += 1 + l + sovTypes(uint64(l)) } + l = len(m.Prompt) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -59543,6 +64888,56 @@ func (m *AccessRequestAllowedPromotions) Size() (n int) { return n } +func (m *ResourceIDList) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.ResourceIds) > 0 { + for _, e := range m.ResourceIds { + l = e.Size() + n += 1 + l + sovTypes(uint64(l)) + } + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *LongTermResourceGrouping) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.AccessListToResources) > 0 { + for k, v := range 
m.AccessListToResources { + _ = k + _ = v + l = v.Size() + mapEntrySize := 1 + len(k) + sovTypes(uint64(len(k))) + 1 + l + sovTypes(uint64(l)) + n += mapEntrySize + 1 + sovTypes(uint64(mapEntrySize)) + } + } + l = len(m.RecommendedAccessList) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.ValidationMessage) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + if m.CanProceed { + n += 2 + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + func (m *ClaimMapping) Size() (n int) { if m == nil { return 0 @@ -59684,6 +65079,9 @@ func (m *UserFilter) Size() (n int) { n += 1 + l + sovTypes(uint64(l)) } } + if m.SkipSystemUsers { + n += 2 + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -60937,6 +66335,14 @@ func (m *KubernetesServerV3) Size() (n int) { n += 1 + l + sovTypes(uint64(l)) l = m.Spec.Size() n += 1 + l + sovTypes(uint64(l)) + if m.Status != nil { + l = m.Status.Size() + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.Scope) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -60973,6 +66379,32 @@ func (m *KubernetesServerSpecV3) Size() (n int) { n += 1 + l + sovTypes(uint64(l)) } } + l = len(m.RelayGroup) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + if len(m.RelayIds) > 0 { + for _, s := range m.RelayIds { + l = len(s) + n += 1 + l + sovTypes(uint64(l)) + } + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *KubernetesServerStatusV3) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.TargetHealth != nil { + l = m.TargetHealth.Size() + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -61328,6 +66760,43 @@ func (m *OIDCConnectorSpecV3) Size() (n int) { if l > 0 { n += 2 + l + sovTypes(uint64(l)) } + if len(m.UserMatchers) > 0 { + for _, s := range m.UserMatchers { + l = len(s) + n += 2 + l + 
sovTypes(uint64(l)) + } + } + l = len(m.RequestObjectMode) + if l > 0 { + n += 2 + l + sovTypes(uint64(l)) + } + if m.EntraIdGroupsProvider != nil { + l = m.EntraIdGroupsProvider.Size() + n += 2 + l + sovTypes(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *EntraIDGroupsProvider) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.Disabled { + n += 2 + } + l = len(m.GroupType) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.GraphEndpoint) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -61401,6 +66870,10 @@ func (m *OIDCConnectorMFASettings) Size() (n int) { if m.MaxAge != 0 { n += 1 + sovTypes(uint64(m.MaxAge)) } + l = len(m.RequestObjectMode) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -61497,6 +66970,10 @@ func (m *OIDCAuthRequest) Size() (n int) { if l > 0 { n += 2 + l + sovTypes(uint64(l)) } + l = len(m.LoginHint) + if l > 0 { + n += 2 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -61631,6 +67108,15 @@ func (m *SAMLConnectorSpecV2) Size() (n int) { if l > 0 { n += 2 + l + sovTypes(uint64(l)) } + if len(m.UserMatchers) > 0 { + for _, s := range m.UserMatchers { + l = len(s) + n += 2 + l + sovTypes(uint64(l)) + } + } + if m.IncludeSubject { + n += 3 + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -61765,6 +67251,10 @@ func (m *SAMLAuthRequest) Size() (n int) { if l > 0 { n += 2 + l + sovTypes(uint64(l)) } + l = len(m.SubjectIdentifier) + if l > 0 { + n += 2 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -61909,6 +67399,12 @@ func (m *GithubConnectorSpecV3) Size() (n int) { l = m.ClientRedirectSettings.Size() n += 1 + l + sovTypes(uint64(l)) } + if len(m.UserMatchers) > 0 { + for _, s := range m.UserMatchers { 
+ l = len(s) + n += 1 + l + sovTypes(uint64(l)) + } + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -62473,6 +67969,35 @@ func (m *LockTarget) Size() (n int) { if l > 0 { n += 1 + l + sovTypes(uint64(l)) } + l = len(m.BotInstanceID) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.JoinToken) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *LockFilter) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.Targets) > 0 { + for _, e := range m.Targets { + l = e.Size() + n += 1 + l + sovTypes(uint64(l)) + } + } + if m.InForceOnly { + n += 2 + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -62587,6 +68112,16 @@ func (m *WindowsDesktopServiceSpecV3) Size() (n int) { n += 1 + l + sovTypes(uint64(l)) } } + l = len(m.RelayGroup) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + if len(m.RelayIds) > 0 { + for _, s := range m.RelayIds { + l = len(s) + n += 1 + l + sovTypes(uint64(l)) + } + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -63071,6 +68606,10 @@ func (m *Participant) Size() (n int) { } l = github_com_gogo_protobuf_types.SizeOfStdTime(m.LastActive) n += 1 + l + sovTypes(uint64(l)) + l = len(m.Cluster) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -63799,6 +69338,18 @@ func (m *PluginSpecV1_Github) Size() (n int) { } return n } +func (m *PluginSpecV1_Intune) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.Intune != nil { + l = m.Intune.Size() + n += 2 + l + sovTypes(uint64(l)) + } + return n +} func (m *PluginGithubSettings) Size() (n int) { if m == nil { return 0 @@ -64017,6 +69568,30 @@ func (m *PluginJamfSettings) Size() (n int) { return n } +func (m *PluginIntuneSettings) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.Tenant) + if l > 
0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.LoginEndpoint) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.GraphEndpoint) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + func (m *PluginOktaSettings) Size() (n int) { if m == nil { return 0 @@ -64127,6 +69702,10 @@ func (m *PluginOktaSyncSettings) Size() (n int) { if m.DisableAssignDefaultRoles { n += 2 } + l = len(m.TimeBetweenImports) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -64223,12 +69802,76 @@ func (m *PluginEntraIDSyncSettings) Size() (n int) { if l > 0 { n += 1 + l + sovTypes(uint64(l)) } + if len(m.GroupFilters) > 0 { + for _, e := range m.GroupFilters { + l = e.Size() + n += 1 + l + sovTypes(uint64(l)) + } + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } return n } +func (m *PluginSyncFilter) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.Include != nil { + n += m.Include.Size() + } + if m.Exclude != nil { + n += m.Exclude.Size() + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *PluginSyncFilter_Id) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.Id) + n += 1 + l + sovTypes(uint64(l)) + return n +} +func (m *PluginSyncFilter_NameRegex) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.NameRegex) + n += 1 + l + sovTypes(uint64(l)) + return n +} +func (m *PluginSyncFilter_ExcludeId) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.ExcludeId) + n += 1 + l + sovTypes(uint64(l)) + return n +} +func (m *PluginSyncFilter_ExcludeNameRegex) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.ExcludeNameRegex) + n += 1 + l + sovTypes(uint64(l)) + return n +} func (m *PluginEntraIDAccessGraphSettings) Size() (n int) { if 
m == nil { return 0 @@ -64281,6 +69924,30 @@ func (m *PluginSCIMSettings) Size() (n int) { if l > 0 { n += 1 + l + sovTypes(uint64(l)) } + if m.ConnectorInfo != nil { + l = m.ConnectorInfo.Size() + n += 1 + l + sovTypes(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *PluginSCIMSettings_ConnectorInfo) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.Name) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.Type) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -64364,6 +70031,10 @@ func (m *PluginAWSICSettings) Size() (n int) { l = m.Credentials.Size() n += 1 + l + sovTypes(uint64(l)) } + l = len(m.RolesSyncMode) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -64983,6 +70654,10 @@ func (m *PluginOktaStatusV1) Size() (n int) { l = m.AccessListsSyncDetails.Size() n += 1 + l + sovTypes(uint64(l)) } + if m.SystemLogExportDetails != nil { + l = m.SystemLogExportDetails.Size() + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -65150,6 +70825,36 @@ func (m *PluginOktaStatusDetailsAccessListsSync) Size() (n int) { return n } +func (m *PluginOktaStatusSystemLogExporter) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.Enabled { + n += 2 + } + if m.StatusCode != 0 { + n += 1 + sovTypes(uint64(m.StatusCode)) + } + if m.LastSuccessful != nil { + l = github_com_gogo_protobuf_types.SizeOfStdTime(*m.LastSuccessful) + n += 1 + l + sovTypes(uint64(l)) + } + if m.LastFailed != nil { + l = github_com_gogo_protobuf_types.SizeOfStdTime(*m.LastFailed) + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.Error) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + func (m *PluginCredentialsV1) 
Size() (n int) { if m == nil { return 0 @@ -65898,6 +71603,8 @@ func (m *IntegrationV1) Size() (n int) { n += 1 + l + sovTypes(uint64(l)) l = m.Spec.Size() n += 1 + l + sovTypes(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovTypes(uint64(l)) if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -65971,6 +71678,22 @@ func (m *IntegrationSpecV1_AWSRA) Size() (n int) { } return n } +func (m *IntegrationStatusV1) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.AWSRolesAnywhere != nil { + l = m.AWSRolesAnywhere.Size() + n += 1 + l + sovTypes(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + func (m *AWSOIDCIntegrationSpecV1) Size() (n int) { if m == nil { return 0 @@ -66071,6 +71794,55 @@ func (m *AWSRolesAnywhereProfileSyncConfig) Size() (n int) { if l > 0 { n += 1 + l + sovTypes(uint64(l)) } + if len(m.ProfileNameFilters) > 0 { + for _, s := range m.ProfileNameFilters { + l = len(s) + n += 1 + l + sovTypes(uint64(l)) + } + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *AWSRAIntegrationStatusV1) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.LastProfileSync != nil { + l = m.LastProfileSync.Size() + n += 1 + l + sovTypes(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *AWSRolesAnywhereProfileSyncIterationSummary) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = github_com_gogo_protobuf_types.SizeOfStdTime(m.StartTime) + n += 1 + l + sovTypes(uint64(l)) + l = github_com_gogo_protobuf_types.SizeOfStdTime(m.EndTime) + n += 1 + l + sovTypes(uint64(l)) + l = len(m.Status) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.ErrorMessage) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + if m.SyncedProfiles != 0 { + n += 1 + sovTypes(uint64(m.SyncedProfiles)) + } if m.XXX_unrecognized != nil { n += 
len(m.XXX_unrecognized) } @@ -66365,6 +72137,54 @@ func (m *AWSMatcher) Size() (n int) { if l > 0 { n += 1 + l + sovTypes(uint64(l)) } + if m.Organization != nil { + l = m.Organization.Size() + n += 1 + l + sovTypes(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *AWSOrganizationMatcher) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.OrganizationID) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + if m.OrganizationalUnits != nil { + l = m.OrganizationalUnits.Size() + n += 1 + l + sovTypes(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *AWSOrganizationUnitsMatcher) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.Include) > 0 { + for _, s := range m.Include { + l = len(s) + n += 1 + l + sovTypes(uint64(l)) + } + } + if len(m.Exclude) > 0 { + for _, s := range m.Exclude { + l = len(s) + n += 1 + l + sovTypes(uint64(l)) + } + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -66427,6 +72247,42 @@ func (m *InstallerParams) Size() (n int) { if m.EnrollMode != 0 { n += 1 + sovTypes(uint64(m.EnrollMode)) } + l = len(m.Suffix) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.UpdateGroup) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + if m.HTTPProxySettings != nil { + l = m.HTTPProxySettings.Size() + n += 1 + l + sovTypes(uint64(l)) + } + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + +func (m *HTTPProxySettings) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.HTTPProxy) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.HTTPSProxy) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } + l = len(m.NoProxy) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -66501,6 +72357,10 @@ func (m *AzureMatcher) Size() 
(n int) { l = m.Params.Size() n += 1 + l + sovTypes(uint64(l)) } + l = len(m.Integration) + if l > 0 { + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -66638,6 +72498,20 @@ func (m *AccessGraphAWSSyncCloudTrailLogs) Size() (n int) { return n } +func (m *AccessGraphAWSSyncEKSAuditLogs) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.Tags.Size() + n += 1 + l + sovTypes(uint64(l)) + if m.XXX_unrecognized != nil { + n += len(m.XXX_unrecognized) + } + return n +} + func (m *AccessGraphAWSSync) Size() (n int) { if m == nil { return 0 @@ -66662,6 +72536,10 @@ func (m *AccessGraphAWSSync) Size() (n int) { l = m.CloudTrailLogs.Size() n += 1 + l + sovTypes(uint64(l)) } + if m.EksAuditLogs != nil { + l = m.EksAuditLogs.Size() + n += 1 + l + sovTypes(uint64(l)) + } if m.XXX_unrecognized != nil { n += len(m.XXX_unrecognized) } @@ -68126,6 +74004,38 @@ func (m *DatabaseServerV3) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Scope", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Scope = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -68374,6 +74284,70 @@ func (m *DatabaseServerSpecV3) Unmarshal(dAtA []byte) error { } m.ProxyIDs = append(m.ProxyIDs, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex + case 14: + if wireType != 2 { + return fmt.Errorf("proto: 
wrong wireType = %d for field RelayGroup", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.RelayGroup = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 15: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field RelayIds", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.RelayIds = append(m.RelayIds, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -69566,6 +75540,45 @@ func (m *OracleOptions) Unmarshal(dAtA []byte) error { } m.AuditUser = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field RetryCount", wireType) + } + m.RetryCount = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.RetryCount |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d 
for field ShuffleHostnames", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.ShuffleHostnames = bool(v != 0) default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -70402,6 +76415,39 @@ func (m *AWS) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 17: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ElastiCacheServerless", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ElastiCacheServerless.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -71199,7 +77245,7 @@ func (m *ElastiCache) Unmarshal(dAtA []byte) error { } return nil } -func (m *MemoryDB) Unmarshal(dAtA []byte) error { +func (m *ElastiCacheServerless) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -71222,15 +77268,15 @@ func (m *MemoryDB) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: MemoryDB: wiretype end group for non-group") + return fmt.Errorf("proto: ElastiCacheServerless: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: MemoryDB: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ElastiCacheServerless: illegal tag %d (wire type %d)", 
fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ClusterName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CacheName", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -71258,11 +77304,94 @@ func (m *MemoryDB) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ClusterName = string(dAtA[iNdEx:postIndex]) + m.CacheName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *MemoryDB) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MemoryDB: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: MemoryDB: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ACLName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ClusterName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ 
+ stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ClusterName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ACLName", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -71900,6 +78029,154 @@ func (m *GCPCloudSQL) Unmarshal(dAtA []byte) error { } m.InstanceID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AlloyDB", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.AlloyDB.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AlloyDB) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AlloyDB: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AlloyDB: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field EndpointType", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.EndpointType = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field EndpointOverride", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return 
ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.EndpointOverride = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -74352,6 +80629,38 @@ func (m *ServerV2) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Scope", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Scope = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -74849,60 +81158,160 @@ func (m *ServerSpecV2) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipTypes(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthTypes - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *AWSInfo) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: AWSInfo: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: AWSInfo: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 16: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AccountID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RelayGroup", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.RelayGroup = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 17: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field RelayIds", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + 
} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.RelayIds = append(m.RelayIds, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 18: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ComponentFeatures", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.ComponentFeatures == nil { + m.ComponentFeatures = &v1.ComponentFeatures{} + } + if err := m.ComponentFeatures.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AWSInfo) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AWSInfo: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AWSInfo: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AccountID", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -75507,6 +81916,38 @@ func (m *AppServerV3) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Scope", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Scope = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -75755,6 +82196,106 @@ func (m *AppServerSpecV3) Unmarshal(dAtA []byte) error { } m.ProxyIDs = append(m.ProxyIDs, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex + case 7: + if wireType != 2 { + return 
fmt.Errorf("proto: wrong wireType = %d for field RelayGroup", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.RelayGroup = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 8: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field RelayIds", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.RelayIds = append(m.RelayIds, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 9: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ComponentFeatures", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.ComponentFeatures == nil { + m.ComponentFeatures = &v1.ComponentFeatures{} 
+ } + if err := m.ComponentFeatures.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -77125,6 +83666,42 @@ func (m *AppSpecV3) Unmarshal(dAtA []byte) error { } } m.UseAnyProxyPublicAddr = bool(v != 0) + case 15: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field MCP", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.MCP == nil { + m.MCP = &MCP{} + } + if err := m.MCP.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -77147,7 +83724,7 @@ func (m *AppSpecV3) Unmarshal(dAtA []byte) error { } return nil } -func (m *Rewrite) Unmarshal(dAtA []byte) error { +func (m *MCP) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -77170,15 +83747,15 @@ func (m *Rewrite) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: Rewrite: wiretype end group for non-group") + return fmt.Errorf("proto: MCP: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: Rewrite: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: MCP: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Redirect", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Command", wireType) } var stringLen 
uint64 for shift := uint(0); ; shift += 7 { @@ -77206,13 +83783,13 @@ func (m *Rewrite) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Redirect = append(m.Redirect, string(dAtA[iNdEx:postIndex])) + m.Command = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Headers", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Args", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -77222,29 +83799,27 @@ func (m *Rewrite) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthTypes } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthTypes } if postIndex > l { return io.ErrUnexpectedEOF } - m.Headers = append(m.Headers, &Header{}) - if err := m.Headers[len(m.Headers)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Args = append(m.Args, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field JWTClaims", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RunAsHostUser", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -77272,7 +83847,7 @@ func (m *Rewrite) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.JWTClaims = string(dAtA[iNdEx:postIndex]) + m.RunAsHostUser = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -77296,7 +83871,7 @@ func (m *Rewrite) Unmarshal(dAtA []byte) error { } return nil } -func (m *Header) Unmarshal(dAtA []byte) error { +func (m *Rewrite) Unmarshal(dAtA []byte) error { l := 
len(dAtA) iNdEx := 0 for iNdEx < l { @@ -77319,15 +83894,15 @@ func (m *Header) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: Header: wiretype end group for non-group") + return fmt.Errorf("proto: Rewrite: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: Header: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: Rewrite: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Redirect", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -77355,11 +83930,45 @@ func (m *Header) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Name = string(dAtA[iNdEx:postIndex]) + m.Redirect = append(m.Redirect, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Value", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Headers", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Headers = append(m.Headers, &Header{}) + if err := m.Headers[len(m.Headers)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field JWTClaims", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ 
-77387,7 +83996,7 @@ func (m *Header) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Value = string(dAtA[iNdEx:postIndex]) + m.JWTClaims = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -77411,7 +84020,7 @@ func (m *Header) Unmarshal(dAtA []byte) error { } return nil } -func (m *PortRange) Unmarshal(dAtA []byte) error { +func (m *Header) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -77434,17 +84043,17 @@ func (m *PortRange) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: PortRange: wiretype end group for non-group") + return fmt.Errorf("proto: Header: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: PortRange: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: Header: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Port", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) } - m.Port = 0 + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -77454,16 +84063,29 @@ func (m *PortRange) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.Port |= uint32(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Name = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex case 2: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field EndPort", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field 
Value", wireType) } - m.EndPort = 0 + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -77473,11 +84095,24 @@ func (m *PortRange) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.EndPort |= uint32(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Value = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -77500,7 +84135,7 @@ func (m *PortRange) Unmarshal(dAtA []byte) error { } return nil } -func (m *CommandLabelV2) Unmarshal(dAtA []byte) error { +func (m *PortRange) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -77523,17 +84158,17 @@ func (m *CommandLabelV2) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: CommandLabelV2: wiretype end group for non-group") + return fmt.Errorf("proto: PortRange: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: CommandLabelV2: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: PortRange: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Period", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Port", wireType) } - m.Period = 0 + m.Port = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -77543,48 +84178,16 @@ func (m *CommandLabelV2) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.Period |= Duration(b&0x7F) << shift + m.Port |= uint32(b&0x7F) << shift if b < 0x80 { break } } case 2: - if 
wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Command", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthTypes - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthTypes - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Command = append(m.Command, string(dAtA[iNdEx:postIndex])) - iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Result", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field EndPort", wireType) } - var stringLen uint64 + m.EndPort = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -77594,24 +84197,11 @@ func (m *CommandLabelV2) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + m.EndPort |= uint32(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthTypes - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthTypes - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Result = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -77634,7 +84224,7 @@ func (m *CommandLabelV2) Unmarshal(dAtA []byte) error { } return nil } -func (m *AppAWS) Unmarshal(dAtA []byte) error { +func (m *CommandLabelV2) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -77657,15 +84247,34 @@ func (m *AppAWS) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return 
fmt.Errorf("proto: AppAWS: wiretype end group for non-group") + return fmt.Errorf("proto: CommandLabelV2: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AppAWS: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: CommandLabelV2: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Period", wireType) + } + m.Period = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Period |= Duration(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ExternalID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Command", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -77693,13 +84302,13 @@ func (m *AppAWS) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ExternalID = string(dAtA[iNdEx:postIndex]) + m.Command = append(m.Command, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 2: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RolesAnywhereProfile", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Result", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -77709,27 +84318,142 @@ func (m *AppAWS) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthTypes } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthTypes } if postIndex > l { return 
io.ErrUnexpectedEOF } - if m.RolesAnywhereProfile == nil { - m.RolesAnywhereProfile = &AppAWSRolesAnywhereProfile{} - } - if err := m.RolesAnywhereProfile.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Result = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AppAWS) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AppAWS: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AppAWS: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ExternalID", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ExternalID = 
string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field RolesAnywhereProfile", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.RolesAnywhereProfile == nil { + m.RolesAnywhereProfile = &AppAWSRolesAnywhereProfile{} + } + if err := m.RolesAnywhereProfile.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -78110,6 +84834,40 @@ func (m *TLSKeyPair) Unmarshal(dAtA []byte) error { break } } + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field CRL", wireType) + } + var byteLen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + byteLen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if byteLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + byteLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.CRL = append(m.CRL[:0], dAtA[iNdEx:postIndex]...) 
+ if m.CRL == nil { + m.CRL = []byte{} + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -78270,7 +85028,7 @@ func (m *JWTKeyPair) Unmarshal(dAtA []byte) error { } return nil } -func (m *CertAuthorityV2) Unmarshal(dAtA []byte) error { +func (m *EncryptionKeyPair) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -78293,17 +85051,17 @@ func (m *CertAuthorityV2) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: CertAuthorityV2: wiretype end group for non-group") + return fmt.Errorf("proto: EncryptionKeyPair: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: CertAuthorityV2: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: EncryptionKeyPair: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Kind", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PublicKey", wireType) } - var stringLen uint64 + var byteLen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -78313,29 +85071,31 @@ func (m *CertAuthorityV2) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + byteLen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if byteLen < 0 { return ErrInvalidLengthTypes } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + byteLen if postIndex < 0 { return ErrInvalidLengthTypes } if postIndex > l { return io.ErrUnexpectedEOF } - m.Kind = string(dAtA[iNdEx:postIndex]) + m.PublicKey = append(m.PublicKey[:0], dAtA[iNdEx:postIndex]...) 
+ if m.PublicKey == nil { + m.PublicKey = []byte{} + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SubKind", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PrivateKey", wireType) } - var stringLen uint64 + var byteLen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -78345,29 +85105,31 @@ func (m *CertAuthorityV2) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + byteLen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if byteLen < 0 { return ErrInvalidLengthTypes } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + byteLen if postIndex < 0 { return ErrInvalidLengthTypes } if postIndex > l { return io.ErrUnexpectedEOF } - m.SubKind = string(dAtA[iNdEx:postIndex]) + m.PrivateKey = append(m.PrivateKey[:0], dAtA[iNdEx:postIndex]...) + if m.PrivateKey == nil { + m.PrivateKey = []byte{} + } iNdEx = postIndex case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Version", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field PrivateKeyType", wireType) } - var stringLen uint64 + m.PrivateKeyType = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -78377,62 +85139,16 @@ func (m *CertAuthorityV2) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + m.PrivateKeyType |= PrivateKeyType(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthTypes - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthTypes - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Version = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d 
for field Metadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthTypes - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthTypes - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Spec", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Hash", wireType) } - var msglen int + m.Hash = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -78442,25 +85158,309 @@ func (m *CertAuthorityV2) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + m.Hash |= uint32(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { - return ErrInvalidLengthTypes - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthTypes - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.Spec.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AgeEncryptionKey) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AgeEncryptionKey: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AgeEncryptionKey: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field PublicKey", wireType) + } + var byteLen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + byteLen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if byteLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + byteLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.PublicKey = append(m.PublicKey[:0], dAtA[iNdEx:postIndex]...) + if m.PublicKey == nil { + m.PublicKey = []byte{} + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *CertAuthorityV2) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: CertAuthorityV2: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: CertAuthorityV2: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Kind", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Kind = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SubKind", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return 
ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.SubKind = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Version", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Version = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Spec", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if 
postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Spec.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -80426,6 +87426,42 @@ func (m *ProvisionTokenSpecV2) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 21: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Env0", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Env0 == nil { + m.Env0 = &ProvisionTokenSpecV2Env0{} + } + if err := m.Env0.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -82671,6 +89707,26 @@ func (m *ProvisionTokenSpecV2Spacelift) Unmarshal(dAtA []byte) error { } m.Hostname = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field EnableGlobMatching", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.EnableGlobMatching = bool(v != 0) default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -83003,62 +90059,11 @@ func (m *ProvisionTokenSpecV2Kubernetes) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipTypes(dAtA[iNdEx:]) - if err != nil { - return err - } - if 
(skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthTypes - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *ProvisionTokenSpecV2Kubernetes_StaticJWKSConfig) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: StaticJWKSConfig: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: StaticJWKSConfig: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field JWKS", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field OIDC", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -83068,23 +90073,27 @@ func (m *ProvisionTokenSpecV2Kubernetes_StaticJWKSConfig) Unmarshal(dAtA []byte) } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthTypes } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthTypes } if postIndex > l { return io.ErrUnexpectedEOF } - m.JWKS = string(dAtA[iNdEx:postIndex]) + if m.OIDC == nil { + m.OIDC = &ProvisionTokenSpecV2Kubernetes_OIDCConfig{} + } + if err := 
m.OIDC.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -83108,7 +90117,7 @@ func (m *ProvisionTokenSpecV2Kubernetes_StaticJWKSConfig) Unmarshal(dAtA []byte) } return nil } -func (m *ProvisionTokenSpecV2Kubernetes_Rule) Unmarshal(dAtA []byte) error { +func (m *ProvisionTokenSpecV2Kubernetes_StaticJWKSConfig) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -83131,15 +90140,15 @@ func (m *ProvisionTokenSpecV2Kubernetes_Rule) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: Rule: wiretype end group for non-group") + return fmt.Errorf("proto: StaticJWKSConfig: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: Rule: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: StaticJWKSConfig: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServiceAccount", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field JWKS", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -83167,7 +90176,7 @@ func (m *ProvisionTokenSpecV2Kubernetes_Rule) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ServiceAccount = string(dAtA[iNdEx:postIndex]) + m.JWKS = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -83191,7 +90200,7 @@ func (m *ProvisionTokenSpecV2Kubernetes_Rule) Unmarshal(dAtA []byte) error { } return nil } -func (m *ProvisionTokenSpecV2Azure) Unmarshal(dAtA []byte) error { +func (m *ProvisionTokenSpecV2Kubernetes_OIDCConfig) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -83214,17 +90223,17 @@ func (m *ProvisionTokenSpecV2Azure) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 
{ - return fmt.Errorf("proto: ProvisionTokenSpecV2Azure: wiretype end group for non-group") + return fmt.Errorf("proto: OIDCConfig: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ProvisionTokenSpecV2Azure: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: OIDCConfig: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Allow", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Issuer", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -83234,26 +90243,44 @@ func (m *ProvisionTokenSpecV2Azure) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthTypes } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthTypes } if postIndex > l { return io.ErrUnexpectedEOF } - m.Allow = append(m.Allow, &ProvisionTokenSpecV2Azure_Rule{}) - if err := m.Allow[len(m.Allow)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Issuer = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field InsecureAllowHTTPIssuer", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.InsecureAllowHTTPIssuer = bool(v != 0) default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -83276,7 +90303,7 @@ func (m *ProvisionTokenSpecV2Azure) Unmarshal(dAtA []byte) error { } return nil } 
-func (m *ProvisionTokenSpecV2Azure_Rule) Unmarshal(dAtA []byte) error { +func (m *ProvisionTokenSpecV2Kubernetes_Rule) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -83307,39 +90334,7 @@ func (m *ProvisionTokenSpecV2Azure_Rule) Unmarshal(dAtA []byte) error { switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Subscription", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthTypes - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthTypes - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Subscription = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceGroups", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ServiceAccount", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -83367,7 +90362,7 @@ func (m *ProvisionTokenSpecV2Azure_Rule) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ResourceGroups = append(m.ResourceGroups, string(dAtA[iNdEx:postIndex])) + m.ServiceAccount = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -83391,7 +90386,7 @@ func (m *ProvisionTokenSpecV2Azure_Rule) Unmarshal(dAtA []byte) error { } return nil } -func (m *ProvisionTokenSpecV2GCP) Unmarshal(dAtA []byte) error { +func (m *ProvisionTokenSpecV2Azure) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -83414,10 +90409,10 @@ func (m *ProvisionTokenSpecV2GCP) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire 
& 0x7) if wireType == 4 { - return fmt.Errorf("proto: ProvisionTokenSpecV2GCP: wiretype end group for non-group") + return fmt.Errorf("proto: ProvisionTokenSpecV2Azure: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ProvisionTokenSpecV2GCP: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ProvisionTokenSpecV2Azure: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -83449,7 +90444,7 @@ func (m *ProvisionTokenSpecV2GCP) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Allow = append(m.Allow, &ProvisionTokenSpecV2GCP_Rule{}) + m.Allow = append(m.Allow, &ProvisionTokenSpecV2Azure_Rule{}) if err := m.Allow[len(m.Allow)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } @@ -83476,7 +90471,7 @@ func (m *ProvisionTokenSpecV2GCP) Unmarshal(dAtA []byte) error { } return nil } -func (m *ProvisionTokenSpecV2GCP_Rule) Unmarshal(dAtA []byte) error { +func (m *ProvisionTokenSpecV2Azure_Rule) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -83507,7 +90502,7 @@ func (m *ProvisionTokenSpecV2GCP_Rule) Unmarshal(dAtA []byte) error { switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ProjectIDs", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Subscription", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -83535,43 +90530,11 @@ func (m *ProvisionTokenSpecV2GCP_Rule) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ProjectIDs = append(m.ProjectIDs, string(dAtA[iNdEx:postIndex])) + m.Subscription = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Locations", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return 
io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthTypes - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthTypes - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Locations = append(m.Locations, string(dAtA[iNdEx:postIndex])) - iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ServiceAccounts", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceGroups", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -83599,7 +90562,7 @@ func (m *ProvisionTokenSpecV2GCP_Rule) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ServiceAccounts = append(m.ServiceAccounts, string(dAtA[iNdEx:postIndex])) + m.ResourceGroups = append(m.ResourceGroups, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex default: iNdEx = preIndex @@ -83623,7 +90586,7 @@ func (m *ProvisionTokenSpecV2GCP_Rule) Unmarshal(dAtA []byte) error { } return nil } -func (m *ProvisionTokenSpecV2TerraformCloud) Unmarshal(dAtA []byte) error { +func (m *ProvisionTokenSpecV2GCP) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -83646,10 +90609,242 @@ func (m *ProvisionTokenSpecV2TerraformCloud) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ProvisionTokenSpecV2TerraformCloud: wiretype end group for non-group") + return fmt.Errorf("proto: ProvisionTokenSpecV2GCP: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ProvisionTokenSpecV2TerraformCloud: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ProvisionTokenSpecV2GCP: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 
{ + return fmt.Errorf("proto: wrong wireType = %d for field Allow", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Allow = append(m.Allow, &ProvisionTokenSpecV2GCP_Rule{}) + if err := m.Allow[len(m.Allow)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ProvisionTokenSpecV2GCP_Rule) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: Rule: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: Rule: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ProjectIDs", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ProjectIDs = append(m.ProjectIDs, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Locations", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if 
postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Locations = append(m.Locations, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ServiceAccounts", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ServiceAccounts = append(m.ServiceAccounts, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ProvisionTokenSpecV2TerraformCloud) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ProvisionTokenSpecV2TerraformCloud: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ProvisionTokenSpecV2TerraformCloud: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -84585,6 +91780,526 @@ func (m *ProvisionTokenSpecV2Oracle_Rule) Unmarshal(dAtA []byte) error { } m.Regions = append(m.Regions, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Instances", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Instances = append(m.Instances, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = 
append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ProvisionTokenSpecV2Env0) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ProvisionTokenSpecV2Env0: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ProvisionTokenSpecV2Env0: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Allow", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Allow = append(m.Allow, &ProvisionTokenSpecV2Env0_Rule{}) + if err := m.Allow[len(m.Allow)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ProvisionTokenSpecV2Env0_Rule) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: Rule: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: Rule: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field OrganizationID", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.OrganizationID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ProjectID", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + 
return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ProjectID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ProjectName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ProjectName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field TemplateID", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.TemplateID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field TemplateName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + 
} + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.TemplateName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field EnvironmentID", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.EnvironmentID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field EnvironmentName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.EnvironmentName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 8: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field WorkspaceName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + 
intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.WorkspaceName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 9: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DeploymentType", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.DeploymentType = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 10: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DeployerEmail", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.DeployerEmail = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 11: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Env0Tag", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + 
iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Env0Tag = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -87514,6 +95229,403 @@ func (m *SessionRecordingConfigV2) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Status == nil { + m.Status = &SessionRecordingConfigStatus{} + } + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *KeyLabel) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: KeyLabel: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: KeyLabel: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Type = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Label", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if 
postIndex > l { + return io.ErrUnexpectedEOF + } + m.Label = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ManualKeyManagementConfig) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ManualKeyManagementConfig: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ManualKeyManagementConfig: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Enabled", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.Enabled = bool(v != 0) + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ActiveKeys", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if 
msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ActiveKeys = append(m.ActiveKeys, &KeyLabel{}) + if err := m.ActiveKeys[len(m.ActiveKeys)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field RotatedKeys", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.RotatedKeys = append(m.RotatedKeys, &KeyLabel{}) + if err := m.RotatedKeys[len(m.RotatedKeys)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *SessionRecordingEncryptionConfig) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SessionRecordingEncryptionConfig: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SessionRecordingEncryptionConfig: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Enabled", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.Enabled = bool(v != 0) + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ManualKeyManagement", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.ManualKeyManagement == nil { + m.ManualKeyManagement = &ManualKeyManagementConfig{} + } + if err := m.ManualKeyManagement.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex 
default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -87633,6 +95745,127 @@ func (m *SessionRecordingConfigSpecV2) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Encryption", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Encryption == nil { + m.Encryption = &SessionRecordingEncryptionConfig{} + } + if err := m.Encryption.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *SessionRecordingConfigStatus) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SessionRecordingConfigStatus: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SessionRecordingConfigStatus: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field EncryptionKeys", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.EncryptionKeys = append(m.EncryptionKeys, &AgeEncryptionKey{}) + if err := m.EncryptionKeys[len(m.EncryptionKeys)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -92522,6 +100755,61 @@ func (m *AccessRequestSpecV3) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 24: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field RequestKind", wireType) + } + m.RequestKind = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + 
} + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.RequestKind |= AccessRequestKind(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 25: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field LongTermGrouping", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.LongTermGrouping == nil { + m.LongTermGrouping = &LongTermResourceGrouping{} + } + if err := m.LongTermGrouping.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -93189,7 +101477,7 @@ func (m *AccessCapabilitiesRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *RequestKubernetesResource) Unmarshal(dAtA []byte) error { +func (m *RemoteAccessCapabilities) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -93212,47 +101500,15 @@ func (m *RequestKubernetesResource) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: RequestKubernetesResource: wiretype end group for non-group") + return fmt.Errorf("proto: RemoteAccessCapabilities: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: RequestKubernetesResource: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: RemoteAccessCapabilities: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Kind", wireType) - } - var 
stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthTypes - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthTypes - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Kind = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field APIGroup", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ApplicableRolesForResources", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -93280,7 +101536,7 @@ func (m *RequestKubernetesResource) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.APIGroup = string(dAtA[iNdEx:postIndex]) + m.ApplicableRolesForResources = append(m.ApplicableRolesForResources, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex default: iNdEx = preIndex @@ -93304,7 +101560,7 @@ func (m *RequestKubernetesResource) Unmarshal(dAtA []byte) error { } return nil } -func (m *ResourceID) Unmarshal(dAtA []byte) error { +func (m *RemoteAccessCapabilitiesRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -93327,15 +101583,15 @@ func (m *ResourceID) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ResourceID: wiretype end group for non-group") + return fmt.Errorf("proto: RemoteAccessCapabilitiesRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ResourceID: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: RemoteAccessCapabilitiesRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { 
case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ClusterName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field User", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -93363,11 +101619,11 @@ func (m *ResourceID) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ClusterName = string(dAtA[iNdEx:postIndex]) + m.User = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Kind", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SearchAsRoles", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -93395,11 +101651,96 @@ func (m *ResourceID) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Kind = string(dAtA[iNdEx:postIndex]) + m.SearchAsRoles = append(m.SearchAsRoles, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceIDs", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ResourceIDs = append(m.ResourceIDs, ResourceID{}) + if err := m.ResourceIDs[len(m.ResourceIDs)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx 
+ skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *RequestKubernetesResource) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: RequestKubernetesResource: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: RequestKubernetesResource: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Kind", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -93427,11 +101768,11 @@ func (m *ResourceID) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Name = string(dAtA[iNdEx:postIndex]) + m.Kind = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SubResourceName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field APIGroup", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -93459,7 +101800,7 @@ func (m *ResourceID) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.SubResourceName = string(dAtA[iNdEx:postIndex]) + m.APIGroup = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -93483,7 +101824,7 @@ func (m *ResourceID) Unmarshal(dAtA []byte) error { } return nil } -func (m *PluginDataV3) Unmarshal(dAtA []byte) 
error { +func (m *ResourceID) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -93506,15 +101847,15 @@ func (m *PluginDataV3) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: PluginDataV3: wiretype end group for non-group") + return fmt.Errorf("proto: ResourceID: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: PluginDataV3: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ResourceID: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Kind", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ClusterName", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -93542,11 +101883,11 @@ func (m *PluginDataV3) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Kind = string(dAtA[iNdEx:postIndex]) + m.ClusterName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SubKind", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Kind", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -93574,11 +101915,11 @@ func (m *PluginDataV3) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.SubKind = string(dAtA[iNdEx:postIndex]) + m.Kind = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Version", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -93606,46 +101947,13 @@ func (m *PluginDataV3) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Version = string(dAtA[iNdEx:postIndex]) + 
m.Name = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthTypes - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthTypes - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Spec", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SubResourceName", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -93655,24 +101963,236 @@ func (m *PluginDataV3) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthTypes } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthTypes } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Spec.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.SubResourceName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *PluginDataV3) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: PluginDataV3: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: PluginDataV3: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Kind", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Kind = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SubKind", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return 
ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.SubKind = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Version", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Version = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Spec", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if 
postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Spec.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -98045,6 +106565,42 @@ func (m *RoleConditions) Unmarshal(dAtA []byte) error { } m.WorkloadIdentityLabelsExpression = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 46: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field MCP", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.MCP == nil { + m.MCP = &MCPPermissions{} + } + if err := m.MCP.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -98265,7 +106821,7 @@ func (m *GitHubPermission) Unmarshal(dAtA []byte) error { } return nil } -func (m *SPIFFERoleCondition) Unmarshal(dAtA []byte) error { +func (m *MCPPermissions) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -98288,15 +106844,15 @@ func (m *SPIFFERoleCondition) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: SPIFFERoleCondition: wiretype end group for non-group") + return fmt.Errorf("proto: MCPPermissions: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: SPIFFERoleCondition: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: MCPPermissions: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong 
wireType = %d for field Path", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Tools", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -98324,11 +106880,94 @@ func (m *SPIFFERoleCondition) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Path = string(dAtA[iNdEx:postIndex]) + m.Tools = append(m.Tools, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 2: + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *SPIFFERoleCondition) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SPIFFERoleCondition: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SPIFFERoleCondition: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field DNSSANs", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + 
} + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Path = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field DNSSANs", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -99546,6 +108185,38 @@ func (m *AccessRequestConditionsReason) Unmarshal(dAtA []byte) error { } m.Mode = RequestReasonMode(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Prompt", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Prompt = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -99917,7 +108588,7 @@ func (m *AccessRequestAllowedPromotions) Unmarshal(dAtA []byte) error { } return nil } -func (m *ClaimMapping) Unmarshal(dAtA []byte) error { +func (m *ResourceIDList) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -99940,49 +108611,17 @@ func (m *ClaimMapping) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ClaimMapping: wiretype end group for non-group") + return fmt.Errorf("proto: ResourceIDList: wiretype end group for non-group") } if fieldNum <= 0 { - return 
fmt.Errorf("proto: ClaimMapping: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ResourceIDList: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Claim", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceIds", wireType) } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthTypes - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthTypes - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Claim = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Value", wireType) - } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -99992,55 +108631,25 @@ func (m *ClaimMapping) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthTypes } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthTypes } if postIndex > l { return io.ErrUnexpectedEOF } - m.Value = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := 
dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthTypes - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthTypes - } - if postIndex > l { - return io.ErrUnexpectedEOF + m.ResourceIds = append(m.ResourceIds, ResourceID{}) + if err := m.ResourceIds[len(m.ResourceIds)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - m.Roles = append(m.Roles, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex default: iNdEx = preIndex @@ -100064,7 +108673,7 @@ func (m *ClaimMapping) Unmarshal(dAtA []byte) error { } return nil } -func (m *TraitMapping) Unmarshal(dAtA []byte) error { +func (m *LongTermResourceGrouping) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -100087,15 +108696,144 @@ func (m *TraitMapping) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: TraitMapping: wiretype end group for non-group") + return fmt.Errorf("proto: LongTermResourceGrouping: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: TraitMapping: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: LongTermResourceGrouping: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Trait", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AccessListToResources", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + 
} + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.AccessListToResources == nil { + m.AccessListToResources = make(map[string]ResourceIDList) + } + var mapkey string + mapvalue := &ResourceIDList{} + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthTypes + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey < 0 { + return ErrInvalidLengthTypes + } + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var mapmsglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + mapmsglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if mapmsglen < 0 { + return ErrInvalidLengthTypes + } + postmsgIndex := iNdEx + mapmsglen + if postmsgIndex < 0 { + return ErrInvalidLengthTypes + } + if postmsgIndex > l { + return io.ErrUnexpectedEOF + } + mapvalue = &ResourceIDList{} + if err := mapvalue.Unmarshal(dAtA[iNdEx:postmsgIndex]); err != nil { + return err + } + iNdEx = postmsgIndex + } else { + iNdEx = entryPreIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || 
(iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + m.AccessListToResources[mapkey] = *mapvalue + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field RecommendedAccessList", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -100123,11 +108861,11 @@ func (m *TraitMapping) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Trait = string(dAtA[iNdEx:postIndex]) + m.RecommendedAccessList = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Value", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ValidationMessage", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -100155,13 +108893,13 @@ func (m *TraitMapping) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Value = string(dAtA[iNdEx:postIndex]) + m.ValidationMessage = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType) + case 4: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field CanProceed", wireType) } - var stringLen uint64 + var v int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -100171,24 +108909,12 @@ func (m *TraitMapping) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + v |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthTypes - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthTypes - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Roles = append(m.Roles, string(dAtA[iNdEx:postIndex])) - iNdEx = 
postIndex + m.CanProceed = bool(v != 0) default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -100211,7 +108937,7 @@ func (m *TraitMapping) Unmarshal(dAtA []byte) error { } return nil } -func (m *Rule) Unmarshal(dAtA []byte) error { +func (m *ClaimMapping) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -100234,15 +108960,15 @@ func (m *Rule) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: Rule: wiretype end group for non-group") + return fmt.Errorf("proto: ClaimMapping: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: Rule: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ClaimMapping: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Resources", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Claim", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -100270,11 +108996,11 @@ func (m *Rule) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Resources = append(m.Resources, string(dAtA[iNdEx:postIndex])) + m.Claim = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Verbs", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Value", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -100302,43 +109028,11 @@ func (m *Rule) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Verbs = append(m.Verbs, string(dAtA[iNdEx:postIndex])) + m.Value = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Where", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if 
shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthTypes - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthTypes - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Where = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Actions", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -100366,7 +109060,7 @@ func (m *Rule) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Actions = append(m.Actions, string(dAtA[iNdEx:postIndex])) + m.Roles = append(m.Roles, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex default: iNdEx = preIndex @@ -100390,7 +109084,7 @@ func (m *Rule) Unmarshal(dAtA []byte) error { } return nil } -func (m *ImpersonateConditions) Unmarshal(dAtA []byte) error { +func (m *TraitMapping) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -100413,15 +109107,15 @@ func (m *ImpersonateConditions) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ImpersonateConditions: wiretype end group for non-group") + return fmt.Errorf("proto: TraitMapping: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ImpersonateConditions: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: TraitMapping: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Users", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field 
Trait", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -100449,11 +109143,11 @@ func (m *ImpersonateConditions) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Users = append(m.Users, string(dAtA[iNdEx:postIndex])) + m.Trait = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Value", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -100481,11 +109175,11 @@ func (m *ImpersonateConditions) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Roles = append(m.Roles, string(dAtA[iNdEx:postIndex])) + m.Value = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Where", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -100513,7 +109207,7 @@ func (m *ImpersonateConditions) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Where = string(dAtA[iNdEx:postIndex]) + m.Roles = append(m.Roles, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex default: iNdEx = preIndex @@ -100537,7 +109231,7 @@ func (m *ImpersonateConditions) Unmarshal(dAtA []byte) error { } return nil } -func (m *BoolValue) Unmarshal(dAtA []byte) error { +func (m *Rule) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -100560,17 +109254,17 @@ func (m *BoolValue) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: BoolValue: wiretype end group for non-group") + return fmt.Errorf("proto: Rule: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: BoolValue: illegal tag %d (wire type %d)", fieldNum, 
wire) + return fmt.Errorf("proto: Rule: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Value", wireType) + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Resources", wireType) } - var v int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -100580,12 +109274,120 @@ func (m *BoolValue) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - v |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - m.Value = bool(v != 0) + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Resources = append(m.Resources, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Verbs", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Verbs = append(m.Verbs, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Where", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= 
uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Where = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Actions", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Actions = append(m.Actions, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -100608,7 +109410,7 @@ func (m *BoolValue) Unmarshal(dAtA []byte) error { } return nil } -func (m *UserFilter) Unmarshal(dAtA []byte) error { +func (m *ImpersonateConditions) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -100631,15 +109433,233 @@ func (m *UserFilter) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: UserFilter: wiretype end group for non-group") + return fmt.Errorf("proto: ImpersonateConditions: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: UserFilter: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ImpersonateConditions: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for 
field SearchKeywords", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Users", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Users = append(m.Users, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Roles", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Roles = append(m.Roles, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Where", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + 
} + m.Where = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *BoolValue) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: BoolValue: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: BoolValue: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Value", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.Value = bool(v != 0) + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *UserFilter) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: UserFilter: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: UserFilter: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SearchKeywords", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -100669,6 +109689,26 @@ func (m *UserFilter) Unmarshal(dAtA []byte) error { } m.SearchKeywords = append(m.SearchKeywords, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field SkipSystemUsers", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.SkipSystemUsers = bool(v != 0) default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -109199,6 +118239,74 @@ func (m *KubernetesServerV3) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + 
msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Status == nil { + m.Status = &KubernetesServerStatusV3{} + } + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Scope", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Scope = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -109447,6 +118555,157 @@ func (m *KubernetesServerSpecV3) Unmarshal(dAtA []byte) error { } m.ProxyIDs = append(m.ProxyIDs, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field RelayGroup", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.RelayGroup = 
string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 8: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field RelayIds", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.RelayIds = append(m.RelayIds, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *KubernetesServerStatusV3) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: KubernetesServerStatusV3: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: KubernetesServerStatusV3: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field TargetHealth", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.TargetHealth == nil { + m.TargetHealth = &TargetHealth{} + } + if err := m.TargetHealth.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -111690,62 +120949,11 @@ func (m *OIDCConnectorSpecV3) Unmarshal(dAtA []byte) error { } m.PKCEMode = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipTypes(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthTypes - } - if (iNdEx + skippy) > l { - return 
io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *MaxAge) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: MaxAge: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: MaxAge: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Value", wireType) + case 21: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserMatchers", wireType) } - m.Value = 0 + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -111755,65 +120963,27 @@ func (m *MaxAge) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.Value |= Duration(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - default: - iNdEx = preIndex - skippy, err := skipTypes(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthTypes } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *SSOClientRedirectSettings) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes } - if iNdEx >= l { + if postIndex > l { return io.ErrUnexpectedEOF } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: SSOClientRedirectSettings: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: SSOClientRedirectSettings: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + m.UserMatchers = append(m.UserMatchers, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 22: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AllowedHttpsHostnames", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RequestObjectMode", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -111841,13 +121011,13 @@ func (m *SSOClientRedirectSettings) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AllowedHttpsHostnames = append(m.AllowedHttpsHostnames, string(dAtA[iNdEx:postIndex])) + m.RequestObjectMode = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 2: + case 23: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field InsecureAllowedCidrRanges", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field EntraIdGroupsProvider", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -111857,23 +121027,27 @@ func (m 
*SSOClientRedirectSettings) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthTypes } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthTypes } if postIndex > l { return io.ErrUnexpectedEOF } - m.InsecureAllowedCidrRanges = append(m.InsecureAllowedCidrRanges, string(dAtA[iNdEx:postIndex])) + if m.EntraIdGroupsProvider == nil { + m.EntraIdGroupsProvider = &EntraIDGroupsProvider{} + } + if err := m.EntraIdGroupsProvider.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -111897,7 +121071,7 @@ func (m *SSOClientRedirectSettings) Unmarshal(dAtA []byte) error { } return nil } -func (m *OIDCConnectorMFASettings) Unmarshal(dAtA []byte) error { +func (m *EntraIDGroupsProvider) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -111920,15 +121094,15 @@ func (m *OIDCConnectorMFASettings) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: OIDCConnectorMFASettings: wiretype end group for non-group") + return fmt.Errorf("proto: EntraIDGroupsProvider: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: OIDCConnectorMFASettings: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: EntraIDGroupsProvider: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Enabled", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Disabled", wireType) } var v int for shift := uint(0); ; shift += 7 { @@ -111945,10 +121119,10 @@ func (m *OIDCConnectorMFASettings) Unmarshal(dAtA []byte) error { break } } - 
m.Enabled = bool(v != 0) + m.Disabled = bool(v != 0) case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ClientId", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field GroupType", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -111976,11 +121150,11 @@ func (m *OIDCConnectorMFASettings) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ClientId = string(dAtA[iNdEx:postIndex]) + m.GroupType = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ClientSecret", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field GraphEndpoint", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -112008,77 +121182,64 @@ func (m *OIDCConnectorMFASettings) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ClientSecret = string(dAtA[iNdEx:postIndex]) + m.GraphEndpoint = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AcrValues", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthTypes + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err } - postIndex := iNdEx + intStringLen - if postIndex < 0 { + if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthTypes } - if postIndex > l { + if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } - m.AcrValues = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 5: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field 
Prompt", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthTypes - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthTypes + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *MaxAge) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes } - if postIndex > l { + if iNdEx >= l { return io.ErrUnexpectedEOF } - m.Prompt = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 6: + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MaxAge: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: MaxAge: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field MaxAge", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Value", wireType) } - m.MaxAge = 0 + m.Value = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -112088,7 +121249,7 @@ func (m *OIDCConnectorMFASettings) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.MaxAge |= Duration(b&0x7F) << shift + m.Value |= Duration(b&0x7F) << shift if b < 0x80 { break } @@ -112115,7 +121276,7 @@ func (m *OIDCConnectorMFASettings) Unmarshal(dAtA []byte) error { } 
return nil } -func (m *OIDCAuthRequest) Unmarshal(dAtA []byte) error { +func (m *SSOClientRedirectSettings) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -112138,15 +121299,15 @@ func (m *OIDCAuthRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: OIDCAuthRequest: wiretype end group for non-group") + return fmt.Errorf("proto: SSOClientRedirectSettings: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: OIDCAuthRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SSOClientRedirectSettings: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ConnectorID", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AllowedHttpsHostnames", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -112174,11 +121335,11 @@ func (m *OIDCAuthRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ConnectorID = string(dAtA[iNdEx:postIndex]) + m.AllowedHttpsHostnames = append(m.AllowedHttpsHostnames, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field InsecureAllowedCidrRanges", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -112206,11 +121367,62 @@ func (m *OIDCAuthRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Type = string(dAtA[iNdEx:postIndex]) + m.InsecureAllowedCidrRanges = append(m.InsecureAllowedCidrRanges, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 3: + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || 
(iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *OIDCConnectorMFASettings) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: OIDCConnectorMFASettings: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: OIDCConnectorMFASettings: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field CheckUser", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Enabled", wireType) } var v int for shift := uint(0); ; shift += 7 { @@ -112227,10 +121439,10 @@ func (m *OIDCAuthRequest) Unmarshal(dAtA []byte) error { break } } - m.CheckUser = bool(v != 0) - case 4: + m.Enabled = bool(v != 0) + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field StateToken", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ClientId", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -112258,11 +121470,11 @@ func (m *OIDCAuthRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.StateToken = string(dAtA[iNdEx:postIndex]) + m.ClientId = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 5: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CSRFToken", 
wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ClientSecret", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -112290,11 +121502,11 @@ func (m *OIDCAuthRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.CSRFToken = string(dAtA[iNdEx:postIndex]) + m.ClientSecret = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 6: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field RedirectURL", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AcrValues", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -112322,13 +121534,13 @@ func (m *OIDCAuthRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.RedirectURL = string(dAtA[iNdEx:postIndex]) + m.AcrValues = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 8: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field CertTTL", wireType) + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Prompt", wireType) } - m.CertTTL = 0 + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -112338,16 +121550,29 @@ func (m *OIDCAuthRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - m.CertTTL |= time.Duration(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - case 9: + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Prompt = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 6: if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field CreateWebSession", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MaxAge", wireType) } - var v int + m.MaxAge = 0 
for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -112357,15 +121582,316 @@ func (m *OIDCAuthRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - v |= int(b&0x7F) << shift + m.MaxAge |= Duration(b&0x7F) << shift if b < 0x80 { break } } - m.CreateWebSession = bool(v != 0) - case 10: + case 7: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ClientRedirectURL", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RequestObjectMode", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.RequestObjectMode = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *OIDCAuthRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: OIDCAuthRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: OIDCAuthRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectorID", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ConnectorID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return 
ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Type = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field CheckUser", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.CheckUser = bool(v != 0) + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field StateToken", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.StateToken = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field CSRFToken", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.CSRFToken = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field 
RedirectURL", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.RedirectURL = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 8: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field CertTTL", wireType) + } + m.CertTTL = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.CertTTL |= time.Duration(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 9: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field CreateWebSession", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.CreateWebSession = bool(v != 0) + case 10: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ClientRedirectURL", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -112741,7 +122267,7 @@ func (m *OIDCAuthRequest) Unmarshal(dAtA []byte) error { return io.ErrUnexpectedEOF } if m.SshAttestationStatement == nil { - m.SshAttestationStatement = &v1.AttestationStatement{} + m.SshAttestationStatement = &v11.AttestationStatement{} } if err := m.SshAttestationStatement.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err @@ -112777,7 +122303,7 @@ func (m *OIDCAuthRequest) Unmarshal(dAtA []byte) error 
{ return io.ErrUnexpectedEOF } if m.TlsAttestationStatement == nil { - m.TlsAttestationStatement = &v1.AttestationStatement{} + m.TlsAttestationStatement = &v11.AttestationStatement{} } if err := m.TlsAttestationStatement.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err @@ -112815,6 +122341,38 @@ func (m *OIDCAuthRequest) Unmarshal(dAtA []byte) error { } m.PkceVerifier = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 25: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field LoginHint", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.LoginHint = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -113765,6 +123323,58 @@ func (m *SAMLConnectorSpecV2) Unmarshal(dAtA []byte) error { } m.PreferredRequestBinding = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 20: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserMatchers", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.UserMatchers = 
append(m.UserMatchers, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 21: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field IncludeSubject", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.IncludeSubject = bool(v != 0) default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -114631,7 +124241,7 @@ func (m *SAMLAuthRequest) Unmarshal(dAtA []byte) error { return io.ErrUnexpectedEOF } if m.SshAttestationStatement == nil { - m.SshAttestationStatement = &v1.AttestationStatement{} + m.SshAttestationStatement = &v11.AttestationStatement{} } if err := m.SshAttestationStatement.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err @@ -114667,7 +124277,7 @@ func (m *SAMLAuthRequest) Unmarshal(dAtA []byte) error { return io.ErrUnexpectedEOF } if m.TlsAttestationStatement == nil { - m.TlsAttestationStatement = &v1.AttestationStatement{} + m.TlsAttestationStatement = &v11.AttestationStatement{} } if err := m.TlsAttestationStatement.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err @@ -114739,6 +124349,38 @@ func (m *SAMLAuthRequest) Unmarshal(dAtA []byte) error { } m.ClientVersion = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 25: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SubjectIdentifier", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex 
> l { + return io.ErrUnexpectedEOF + } + m.SubjectIdentifier = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -115646,6 +125288,38 @@ func (m *GithubConnectorSpecV3) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 10: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserMatchers", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.UserMatchers = append(m.UserMatchers, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -116278,7 +125952,7 @@ func (m *GithubAuthRequest) Unmarshal(dAtA []byte) error { return io.ErrUnexpectedEOF } if m.SshAttestationStatement == nil { - m.SshAttestationStatement = &v1.AttestationStatement{} + m.SshAttestationStatement = &v11.AttestationStatement{} } if err := m.SshAttestationStatement.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err @@ -116314,7 +125988,7 @@ func (m *GithubAuthRequest) Unmarshal(dAtA []byte) error { return io.ErrUnexpectedEOF } if m.TlsAttestationStatement == nil { - m.TlsAttestationStatement = &v1.AttestationStatement{} + m.TlsAttestationStatement = &v11.AttestationStatement{} } if err := m.TlsAttestationStatement.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err @@ -119445,6 +129119,70 @@ func (m *LockTarget) Unmarshal(dAtA []byte) error { } m.ServerID = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 10: + if wireType != 2 { + return 
fmt.Errorf("proto: wrong wireType = %d for field BotInstanceID", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.BotInstanceID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 11: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field JoinToken", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.JoinToken = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -119467,7 +129205,7 @@ func (m *LockTarget) Unmarshal(dAtA []byte) error { } return nil } -func (m *AddressCondition) Unmarshal(dAtA []byte) error { +func (m *LockFilter) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -119490,17 +129228,17 @@ func (m *AddressCondition) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AddressCondition: wiretype end group for non-group") + return fmt.Errorf("proto: LockFilter: wiretype end group for non-group") } if fieldNum <= 0 { - return 
fmt.Errorf("proto: AddressCondition: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: LockFilter: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field CIDR", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Targets", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -119510,24 +129248,46 @@ func (m *AddressCondition) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthTypes } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthTypes } if postIndex > l { return io.ErrUnexpectedEOF } - m.CIDR = string(dAtA[iNdEx:postIndex]) + m.Targets = append(m.Targets, &LockTarget{}) + if err := m.Targets[len(m.Targets)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field InForceOnly", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.InForceOnly = bool(v != 0) default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -119550,7 +129310,7 @@ func (m *AddressCondition) Unmarshal(dAtA []byte) error { } return nil } -func (m *NetworkRestrictionsSpecV4) Unmarshal(dAtA []byte) error { +func (m *AddressCondition) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -119573,51 +129333,17 @@ func (m *NetworkRestrictionsSpecV4) Unmarshal(dAtA []byte) 
error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: NetworkRestrictionsSpecV4: wiretype end group for non-group") + return fmt.Errorf("proto: AddressCondition: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: NetworkRestrictionsSpecV4: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AddressCondition: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Allow", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthTypes - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthTypes - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Allow = append(m.Allow, AddressCondition{}) - if err := m.Allow[len(m.Allow)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Deny", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field CIDR", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -119627,25 +129353,142 @@ func (m *NetworkRestrictionsSpecV4) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthTypes } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthTypes } if postIndex > l { return 
io.ErrUnexpectedEOF } - m.Deny = append(m.Deny, AddressCondition{}) - if err := m.Deny[len(m.Deny)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.CIDR = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *NetworkRestrictionsSpecV4) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: NetworkRestrictionsSpecV4: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: NetworkRestrictionsSpecV4: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Allow", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Allow = append(m.Allow, AddressCondition{}) + if err := 
m.Allow[len(m.Allow)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Deny", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Deny = append(m.Deny, AddressCondition{}) + if err := m.Deny[len(m.Deny)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex default: iNdEx = preIndex @@ -120156,6 +129999,70 @@ func (m *WindowsDesktopServiceSpecV3) Unmarshal(dAtA []byte) error { } m.ProxyIDs = append(m.ProxyIDs, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field RelayGroup", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.RelayGroup = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field RelayIds", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b 
:= dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.RelayIds = append(m.RelayIds, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -123379,6 +133286,38 @@ func (m *Participant) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Cluster", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Cluster = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -127248,6 +137187,41 @@ func (m *PluginSpecV1) Unmarshal(dAtA []byte) error { } m.Settings = &PluginSpecV1_Github{v} iNdEx = postIndex + case 21: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Intune", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > 
l { + return io.ErrUnexpectedEOF + } + v := &PluginIntuneSettings{} + if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + m.Settings = &PluginSpecV1_Intune{v} + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -128553,7 +138527,7 @@ func (m *PluginJamfSettings) Unmarshal(dAtA []byte) error { } return nil } -func (m *PluginOktaSettings) Unmarshal(dAtA []byte) error { +func (m *PluginIntuneSettings) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -128576,15 +138550,15 @@ func (m *PluginOktaSettings) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: PluginOktaSettings: wiretype end group for non-group") + return fmt.Errorf("proto: PluginIntuneSettings: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: PluginOktaSettings: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: PluginIntuneSettings: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field OrgUrl", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Tenant", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -128612,31 +138586,178 @@ func (m *PluginOktaSettings) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.OrgUrl = string(dAtA[iNdEx:postIndex]) + m.Tenant = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field EnableUserSync", wireType) - } - var v int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - v |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - m.EnableUserSync = bool(v != 0) - case 3: if 
wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SsoConnectorId", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field LoginEndpoint", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.LoginEndpoint = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field GraphEndpoint", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.GraphEndpoint = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *PluginOktaSettings) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: PluginOktaSettings: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: PluginOktaSettings: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field OrgUrl", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.OrgUrl = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field EnableUserSync", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.EnableUserSync = bool(v != 0) + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SsoConnectorId", wireType) } var stringLen 
uint64 for shift := uint(0); ; shift += 7 { @@ -129244,6 +139365,38 @@ func (m *PluginOktaSyncSettings) Unmarshal(dAtA []byte) error { } } m.DisableAssignDefaultRoles = bool(v != 0) + case 14: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field TimeBetweenImports", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.TimeBetweenImports = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -129828,60 +139981,9 @@ func (m *PluginEntraIDSyncSettings) Unmarshal(dAtA []byte) error { } m.EntraAppId = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipTypes(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthTypes - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
- iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *PluginEntraIDAccessGraphSettings) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: PluginEntraIDAccessGraphSettings: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: PluginEntraIDAccessGraphSettings: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 6: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AppSsoSettingsCache", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field GroupFilters", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -129908,8 +140010,8 @@ func (m *PluginEntraIDAccessGraphSettings) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AppSsoSettingsCache = append(m.AppSsoSettingsCache, &PluginEntraIDAppSSOSettings{}) - if err := m.AppSsoSettingsCache[len(m.AppSsoSettingsCache)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.GroupFilters = append(m.GroupFilters, &PluginSyncFilter{}) + if err := m.GroupFilters[len(m.GroupFilters)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -129935,7 +140037,7 @@ func (m *PluginEntraIDAccessGraphSettings) Unmarshal(dAtA []byte) error { } return nil } -func (m *PluginEntraIDAppSSOSettings) Unmarshal(dAtA []byte) error { +func (m *PluginSyncFilter) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -129958,15 +140060,15 @@ func (m *PluginEntraIDAppSSOSettings) 
Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: PluginEntraIDAppSSOSettings: wiretype end group for non-group") + return fmt.Errorf("proto: PluginSyncFilter: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: PluginEntraIDAppSSOSettings: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: PluginSyncFilter: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AppId", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Id", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -129994,13 +140096,13 @@ func (m *PluginEntraIDAppSSOSettings) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.AppId = string(dAtA[iNdEx:postIndex]) + m.Include = &PluginSyncFilter_Id{string(dAtA[iNdEx:postIndex])} iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field FederatedSsoV2", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field NameRegex", wireType) } - var byteLen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -130010,80 +140112,344 @@ func (m *PluginEntraIDAppSSOSettings) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - byteLen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if byteLen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthTypes } - postIndex := iNdEx + byteLen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthTypes } if postIndex > l { return io.ErrUnexpectedEOF } - m.FederatedSsoV2 = append(m.FederatedSsoV2[:0], dAtA[iNdEx:postIndex]...) 
- if m.FederatedSsoV2 == nil { - m.FederatedSsoV2 = []byte{} - } + m.Include = &PluginSyncFilter_NameRegex{string(dAtA[iNdEx:postIndex])} iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipTypes(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthTypes - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *PluginSCIMSettings) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: PluginSCIMSettings: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: PluginSCIMSettings: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SamlConnectorName", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ExcludeId", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Exclude = 
&PluginSyncFilter_ExcludeId{string(dAtA[iNdEx:postIndex])} + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ExcludeNameRegex", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Exclude = &PluginSyncFilter_ExcludeNameRegex{string(dAtA[iNdEx:postIndex])} + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *PluginEntraIDAccessGraphSettings) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: PluginEntraIDAccessGraphSettings: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: PluginEntraIDAccessGraphSettings: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AppSsoSettingsCache", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.AppSsoSettingsCache = append(m.AppSsoSettingsCache, &PluginEntraIDAppSSOSettings{}) + if err := m.AppSsoSettingsCache[len(m.AppSsoSettingsCache)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *PluginEntraIDAppSSOSettings) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: PluginEntraIDAppSSOSettings: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: PluginEntraIDAppSSOSettings: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AppId", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.AppId = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field FederatedSsoV2", wireType) + } + var byteLen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + byteLen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if byteLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + byteLen + if postIndex < 0 { + return 
ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.FederatedSsoV2 = append(m.FederatedSsoV2[:0], dAtA[iNdEx:postIndex]...) + if m.FederatedSsoV2 == nil { + m.FederatedSsoV2 = []byte{} + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *PluginSCIMSettings) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: PluginSCIMSettings: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: PluginSCIMSettings: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SamlConnectorName", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -130145,6 +140511,157 @@ func (m *PluginSCIMSettings) Unmarshal(dAtA []byte) error { } m.DefaultRole = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ConnectorInfo", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + 
msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.ConnectorInfo == nil { + m.ConnectorInfo = &PluginSCIMSettings_ConnectorInfo{} + } + if err := m.ConnectorInfo.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *PluginSCIMSettings_ConnectorInfo) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ConnectorInfo: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ConnectorInfo: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen 
< 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Name = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Type = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -130664,6 +141181,38 @@ func (m *PluginAWSICSettings) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 12: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field RolesSyncMode", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.RolesSyncMode = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -133643,6 +144192,42 @@ func (m *PluginOktaStatusV1) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 6: + if 
wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SystemLogExportDetails", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.SystemLogExportDetails == nil { + m.SystemLogExportDetails = &PluginOktaStatusSystemLogExporter{} + } + if err := m.SystemLogExportDetails.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -134644,6 +145229,200 @@ func (m *PluginOktaStatusDetailsAccessListsSync) Unmarshal(dAtA []byte) error { } return nil } +func (m *PluginOktaStatusSystemLogExporter) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: PluginOktaStatusSystemLogExporter: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: PluginOktaStatusSystemLogExporter: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Enabled", wireType) + } + var v int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := 
dAtA[iNdEx] + iNdEx++ + v |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.Enabled = bool(v != 0) + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field StatusCode", wireType) + } + m.StatusCode = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.StatusCode |= OktaPluginSyncStatusCode(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field LastSuccessful", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.LastSuccessful == nil { + m.LastSuccessful = new(time.Time) + } + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(m.LastSuccessful, dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field LastFailed", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.LastFailed == nil { + m.LastFailed = new(time.Time) + } + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(m.LastFailed, 
dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 9: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Error", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Error = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} func (m *PluginCredentialsV1) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 @@ -138871,6 +149650,39 @@ func (m *IntegrationV1) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -139120,7 +149932,94 @@ func (m *IntegrationSpecV1) Unmarshal(dAtA []byte) error { } return nil } -func (m *AWSOIDCIntegrationSpecV1) Unmarshal(dAtA []byte) error { +func (m *IntegrationStatusV1) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: IntegrationStatusV1: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: IntegrationStatusV1: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AWSRolesAnywhere", 
wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.AWSRolesAnywhere == nil { + m.AWSRolesAnywhere = &AWSRAIntegrationStatusV1{} + } + if err := m.AWSRolesAnywhere.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AWSOIDCIntegrationSpecV1) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -139717,6 +150616,38 @@ func (m *AWSRolesAnywhereProfileSyncConfig) Unmarshal(dAtA []byte) error { } m.RoleARN = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ProfileNameFilters", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + 
m.ProfileNameFilters = append(m.ProfileNameFilters, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -139739,7 +150670,7 @@ func (m *AWSRolesAnywhereProfileSyncConfig) Unmarshal(dAtA []byte) error { } return nil } -func (m *HeadlessAuthentication) Unmarshal(dAtA []byte) error { +func (m *AWSRAIntegrationStatusV1) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -139762,15 +150693,15 @@ func (m *HeadlessAuthentication) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: HeadlessAuthentication: wiretype end group for non-group") + return fmt.Errorf("proto: AWSRAIntegrationStatusV1: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: HeadlessAuthentication: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AWSRAIntegrationStatusV1: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResourceHeader", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field LastProfileSync", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -139797,15 +150728,69 @@ func (m *HeadlessAuthentication) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ResourceHeader.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.LastProfileSync == nil { + m.LastProfileSync = &AWSRolesAnywhereProfileSyncIterationSummary{} + } + if err := m.LastProfileSync.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 2: + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + 
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AWSRolesAnywhereProfileSyncIterationSummary) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AWSRolesAnywhereProfileSyncIterationSummary: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AWSRolesAnywhereProfileSyncIterationSummary: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field User", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field StartTime", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -139815,46 +150800,28 @@ func (m *HeadlessAuthentication) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthTypes } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthTypes } if postIndex > l { return io.ErrUnexpectedEOF } - m.User = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 4: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field State", wireType) - } - m.State = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return 
ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.State |= HeadlessAuthenticationState(b&0x7F) << shift - if b < 0x80 { - break - } + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.StartTime, dAtA[iNdEx:postIndex]); err != nil { + return err } - case 5: + iNdEx = postIndex + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MfaDevice", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field EndTime", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -139881,16 +150848,13 @@ func (m *HeadlessAuthentication) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.MfaDevice == nil { - m.MfaDevice = &MFADevice{} - } - if err := m.MfaDevice.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.EndTime, dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 6: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ClientIpAddress", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -139918,13 +150882,13 @@ func (m *HeadlessAuthentication) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.ClientIpAddress = string(dAtA[iNdEx:postIndex]) + m.Status = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 7: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SshPublicKey", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ErrorMessage", wireType) } - var byteLen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -139934,31 +150898,29 @@ func (m *HeadlessAuthentication) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - byteLen |= int(b&0x7F) << shift + 
stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if byteLen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthTypes } - postIndex := iNdEx + byteLen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthTypes } if postIndex > l { return io.ErrUnexpectedEOF } - m.SshPublicKey = append(m.SshPublicKey[:0], dAtA[iNdEx:postIndex]...) - if m.SshPublicKey == nil { - m.SshPublicKey = []byte{} - } + m.ErrorMessage = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 8: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TlsPublicKey", wireType) + case 5: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field SyncedProfiles", wireType) } - var byteLen int + m.SyncedProfiles = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -139968,26 +150930,11 @@ func (m *HeadlessAuthentication) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - byteLen |= int(b&0x7F) << shift + m.SyncedProfiles |= int32(b&0x7F) << shift if b < 0x80 { break } } - if byteLen < 0 { - return ErrInvalidLengthTypes - } - postIndex := iNdEx + byteLen - if postIndex < 0 { - return ErrInvalidLengthTypes - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.TlsPublicKey = append(m.TlsPublicKey[:0], dAtA[iNdEx:postIndex]...) 
- if m.TlsPublicKey == nil { - m.TlsPublicKey = []byte{} - } - iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -140010,7 +150957,7 @@ func (m *HeadlessAuthentication) Unmarshal(dAtA []byte) error { } return nil } -func (m *WatchKind) Unmarshal(dAtA []byte) error { +func (m *HeadlessAuthentication) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -140033,15 +150980,286 @@ func (m *WatchKind) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: WatchKind: wiretype end group for non-group") + return fmt.Errorf("proto: HeadlessAuthentication: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: WatchKind: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: HeadlessAuthentication: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Kind", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ResourceHeader", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ResourceHeader.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field User", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] 
+ iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.User = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 4: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field State", wireType) + } + m.State = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.State |= HeadlessAuthenticationState(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field MfaDevice", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.MfaDevice == nil { + m.MfaDevice = &MFADevice{} + } + if err := m.MfaDevice.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ClientIpAddress", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + 
intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ClientIpAddress = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SshPublicKey", wireType) + } + var byteLen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + byteLen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if byteLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + byteLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.SshPublicKey = append(m.SshPublicKey[:0], dAtA[iNdEx:postIndex]...) + if m.SshPublicKey == nil { + m.SshPublicKey = []byte{} + } + iNdEx = postIndex + case 8: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field TlsPublicKey", wireType) + } + var byteLen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + byteLen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if byteLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + byteLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.TlsPublicKey = append(m.TlsPublicKey[:0], dAtA[iNdEx:postIndex]...) + if m.TlsPublicKey == nil { + m.TlsPublicKey = []byte{} + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *WatchKind) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: WatchKind: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: WatchKind: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Kind", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -141787,6 +153005,276 @@ func (m *AWSMatcher) Unmarshal(dAtA []byte) error { } m.SetupAccessForARN = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 10: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Organization", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Organization == nil { + m.Organization = &AWSOrganizationMatcher{} + } + if err := m.Organization.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return 
ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AWSOrganizationMatcher) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AWSOrganizationMatcher: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AWSOrganizationMatcher: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field OrganizationID", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.OrganizationID = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field OrganizationalUnits", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << 
shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.OrganizationalUnits == nil { + m.OrganizationalUnits = &AWSOrganizationUnitsMatcher{} + } + if err := m.OrganizationalUnits.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AWSOrganizationUnitsMatcher) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AWSOrganizationUnitsMatcher: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AWSOrganizationUnitsMatcher: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Include", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if 
intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Include = append(m.Include, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Exclude", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Exclude = append(m.Exclude, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -142131,13 +153619,283 @@ func (m *InstallerParams) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.PublicProxyAddr = string(dAtA[iNdEx:postIndex]) + m.PublicProxyAddr = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Azure", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Azure == nil { + m.Azure = &AzureInstallerParams{} + } + if err := m.Azure.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return 
err + } + iNdEx = postIndex + case 8: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field EnrollMode", wireType) + } + m.EnrollMode = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.EnrollMode |= InstallParamEnrollMode(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 9: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Suffix", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Suffix = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 10: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UpdateGroup", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.UpdateGroup = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 11: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field HTTPProxySettings", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 
ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.HTTPProxySettings == nil { + m.HTTPProxySettings = &HTTPProxySettings{} + } + if err := m.HTTPProxySettings.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *HTTPProxySettings) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: HTTPProxySettings: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: HTTPProxySettings: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field HTTPProxy", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= 
uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.HTTPProxy = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field HTTPSProxy", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.HTTPSProxy = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 7: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Azure", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field NoProxy", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -142147,47 +153905,24 @@ func (m *InstallerParams) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthTypes } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthTypes } if postIndex > l { return io.ErrUnexpectedEOF } - if m.Azure == nil { - m.Azure = &AzureInstallerParams{} - } - if err := m.Azure.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.NoProxy = 
string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 8: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field EnrollMode", wireType) - } - m.EnrollMode = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.EnrollMode |= InstallParamEnrollMode(b&0x7F) << shift - if b < 0x80 { - break - } - } default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -142602,6 +154337,38 @@ func (m *AzureMatcher) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Integration", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Integration = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) @@ -142928,17 +154695,235 @@ func (m *KubernetesMatcher) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: KubernetesMatcher: wiretype end group for non-group") + return fmt.Errorf("proto: KubernetesMatcher: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: KubernetesMatcher: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Types", wireType) + } + var stringLen uint64 + for shift := 
uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Types = append(m.Types, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Namespaces", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Namespaces = append(m.Namespaces, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Labels", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Labels.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != 
nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *OktaOptions) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: OktaOptions: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: OktaOptions: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field SyncPeriod", wireType) + } + m.SyncPeriod = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.SyncPeriod |= Duration(b&0x7F) << shift + if b < 0x80 { + break + } + } + default: + iNdEx = preIndex + skippy, err := skipTypes(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthTypes + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) 
+ iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AccessGraphSync) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AccessGraphSync: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: KubernetesMatcher: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AccessGraphSync: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Types", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field AWS", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -142948,29 +154933,31 @@ func (m *KubernetesMatcher) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthTypes } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthTypes } if postIndex > l { return io.ErrUnexpectedEOF } - m.Types = append(m.Types, string(dAtA[iNdEx:postIndex])) + m.AWS = append(m.AWS, &AccessGraphAWSSync{}) + if err := m.AWS[len(m.AWS)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Namespaces", wireType) + return 
fmt.Errorf("proto: wrong wireType = %d for field PollInterval", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -142980,27 +154967,28 @@ func (m *KubernetesMatcher) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthTypes } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthTypes } if postIndex > l { return io.ErrUnexpectedEOF } - m.Namespaces = append(m.Namespaces, string(dAtA[iNdEx:postIndex])) + if err := github_com_gogo_protobuf_types.StdDurationUnmarshal(&m.PollInterval, dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Labels", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Azure", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -143027,7 +155015,8 @@ func (m *KubernetesMatcher) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.Labels.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.Azure = append(m.Azure, &AccessGraphAzureSync{}) + if err := m.Azure[len(m.Azure)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -143053,77 +155042,7 @@ func (m *KubernetesMatcher) Unmarshal(dAtA []byte) error { } return nil } -func (m *OktaOptions) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := 
int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: OktaOptions: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: OktaOptions: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field SyncPeriod", wireType) - } - m.SyncPeriod = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.SyncPeriod |= Duration(b&0x7F) << shift - if b < 0x80 { - break - } - } - default: - iNdEx = preIndex - skippy, err := skipTypes(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthTypes - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...) - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *AccessGraphSync) Unmarshal(dAtA []byte) error { +func (m *AccessGraphAWSSyncCloudTrailLogs) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -143146,17 +155065,17 @@ func (m *AccessGraphSync) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AccessGraphSync: wiretype end group for non-group") + return fmt.Errorf("proto: AccessGraphAWSSyncCloudTrailLogs: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AccessGraphSync: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AccessGraphAWSSyncCloudTrailLogs: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AWS", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field 
Region", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -143166,64 +155085,29 @@ func (m *AccessGraphSync) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthTypes } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthTypes } if postIndex > l { return io.ErrUnexpectedEOF } - m.AWS = append(m.AWS, &AccessGraphAWSSync{}) - if err := m.AWS[len(m.AWS)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Region = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PollInterval", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthTypes - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthTypes - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := github_com_gogo_protobuf_types.StdDurationUnmarshal(&m.PollInterval, dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Azure", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SQSQueue", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -143233,25 +155117,23 @@ func (m *AccessGraphSync) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + 
stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthTypes } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthTypes } if postIndex > l { return io.ErrUnexpectedEOF } - m.Azure = append(m.Azure, &AccessGraphAzureSync{}) - if err := m.Azure[len(m.Azure)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.SQSQueue = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -143275,7 +155157,7 @@ func (m *AccessGraphSync) Unmarshal(dAtA []byte) error { } return nil } -func (m *AccessGraphAWSSyncCloudTrailLogs) Unmarshal(dAtA []byte) error { +func (m *AccessGraphAWSSyncEKSAuditLogs) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -143298,17 +155180,17 @@ func (m *AccessGraphAWSSyncCloudTrailLogs) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: AccessGraphAWSSyncCloudTrailLogs: wiretype end group for non-group") + return fmt.Errorf("proto: AccessGraphAWSSyncEKSAuditLogs: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: AccessGraphAWSSyncCloudTrailLogs: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: AccessGraphAWSSyncEKSAuditLogs: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Region", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Tags", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowTypes @@ -143318,55 +155200,24 @@ func (m *AccessGraphAWSSyncCloudTrailLogs) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 
0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthTypes } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthTypes } if postIndex > l { return io.ErrUnexpectedEOF } - m.Region = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SQSQueue", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowTypes - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthTypes - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthTypes - } - if postIndex > l { - return io.ErrUnexpectedEOF + if err := m.Tags.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - m.SQSQueue = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -143555,6 +155406,42 @@ func (m *AccessGraphAWSSync) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field EksAuditLogs", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowTypes + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthTypes + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthTypes + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.EksAuditLogs == nil { + m.EksAuditLogs = &AccessGraphAWSSyncEKSAuditLogs{} + } + if err := m.EksAuditLogs.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx 
= postIndex default: iNdEx = preIndex skippy, err := skipTypes(dAtA[iNdEx:]) diff --git a/api/types/user.go b/api/types/user.go index 4005465989ea6..610b45b98459f 100644 --- a/api/types/user.go +++ b/api/types/user.go @@ -44,6 +44,10 @@ func (f *UserFilter) Match(user *UserV2) bool { } } + if f.SkipSystemUsers && IsSystemResource(user) { + return false + } + return true } @@ -59,6 +63,8 @@ type User interface { GetOIDCIdentities() []ExternalIdentity // GetSAMLIdentities returns a list of connected SAML identities GetSAMLIdentities() []ExternalIdentity + // SetSAMLIdentities sets a list of connected SAML identities + SetSAMLIdentities([]ExternalIdentity) // GetGithubIdentities returns a list of connected Github identities GetGithubIdentities() []ExternalIdentity // SetGithubIdentities sets the list of connected GitHub identities @@ -123,6 +129,10 @@ type User interface { SetHostUserUID(uid string) // SetHostUserGID sets the GID for host users SetHostUserGID(gid string) + // SetMCPTools sets a list of allowed MCP tools for the user + SetMCPTools(mcpTools []string) + // SetDefaultRelayAddr sets the trait for the default relay address. + SetDefaultRelayAddr(addr string) // GetCreatedBy returns information about user GetCreatedBy() CreatedBy // SetCreatedBy sets created by information @@ -417,6 +427,23 @@ func (u *UserV2) SetHostUserGID(uid string) { u.setTrait(constants.TraitHostUserGID, []string{uid}) } +// SetMCPTools sets a list of allowed MCP tools for the user +func (u *UserV2) SetMCPTools(mcpTools []string) { + u.setTrait(constants.TraitMCPTools, mcpTools) +} + +// SetDefaultRelayAddr implements [User]. 
+func (u *UserV2) SetDefaultRelayAddr(addr string) { + if addr == "" { + delete(u.Spec.Traits, constants.TraitDefaultRelayAddr) + return + } + if u.Spec.Traits == nil { + u.Spec.Traits = make(map[string][]string) + } + u.Spec.Traits[constants.TraitDefaultRelayAddr] = []string{addr} +} + // GetStatus returns login status of the user func (u *UserV2) GetStatus() LoginStatus { return u.Spec.Status @@ -432,6 +459,11 @@ func (u *UserV2) GetSAMLIdentities() []ExternalIdentity { return u.Spec.SAMLIdentities } +// SetSAMLIdentities sets a list of connected SAML identities +func (u *UserV2) SetSAMLIdentities(identities []ExternalIdentity) { + u.Spec.SAMLIdentities = identities +} + // GetGithubIdentities returns a list of connected Github identities func (u *UserV2) GetGithubIdentities() []ExternalIdentity { return u.Spec.GithubIdentities diff --git a/api/types/userloginstate/convert/v1/user_login_state.go b/api/types/userloginstate/convert/v1/user_login_state.go index 91c423ab10ed2..1b38b209578cc 100644 --- a/api/types/userloginstate/convert/v1/user_login_state.go +++ b/api/types/userloginstate/convert/v1/user_login_state.go @@ -32,13 +32,16 @@ func FromProto(msg *userloginstatev1.UserLoginState) (*userloginstate.UserLoginS return nil, trace.BadParameter("spec is missing") } - uls, err := userloginstate.New(headerv1.FromMetadataProto(msg.Header.Metadata), userloginstate.Spec{ - OriginalRoles: msg.Spec.GetOriginalRoles(), - OriginalTraits: traitv1.FromProto(msg.Spec.OriginalTraits), - Roles: msg.Spec.Roles, - Traits: traitv1.FromProto(msg.Spec.Traits), - UserType: types.UserType(msg.Spec.UserType), - GitHubIdentity: externalIdentityFromProto(msg.Spec.GitHubIdentity), + uls, err := userloginstate.New(headerv1.FromMetadataProto(msg.GetHeader().GetMetadata()), userloginstate.Spec{ + OriginalRoles: msg.GetSpec().GetOriginalRoles(), + OriginalTraits: traitv1.FromProto(msg.GetSpec().GetOriginalTraits()), + AccessListRoles: msg.GetSpec().GetAccessListRoles(), + AccessListTraits: 
traitv1.FromProto(msg.GetSpec().GetAccessListTraits()), + Roles: msg.GetSpec().GetRoles(), + Traits: traitv1.FromProto(msg.GetSpec().GetTraits()), + UserType: types.UserType(msg.GetSpec().GetUserType()), + GitHubIdentity: externalIdentityFromProto(msg.GetSpec().GetGitHubIdentity()), + SAMLIdentities: externalIdentitiesFromProto(msg.GetSpec().GetSamlIdentities()), }) return uls, trace.Wrap(err) @@ -49,12 +52,15 @@ func ToProto(uls *userloginstate.UserLoginState) *userloginstatev1.UserLoginStat return &userloginstatev1.UserLoginState{ Header: headerv1.ToResourceHeaderProto(uls.ResourceHeader), Spec: &userloginstatev1.Spec{ - OriginalRoles: uls.GetOriginalRoles(), - OriginalTraits: traitv1.ToProto(uls.GetOriginalTraits()), - Roles: uls.GetRoles(), - Traits: traitv1.ToProto(uls.GetTraits()), - UserType: string(uls.Spec.UserType), - GitHubIdentity: externalIdentityToProto(uls.Spec.GitHubIdentity), + OriginalRoles: uls.GetOriginalRoles(), + OriginalTraits: traitv1.ToProto(uls.GetOriginalTraits()), + AccessListRoles: uls.GetAccessListRoles(), + AccessListTraits: traitv1.ToProto(uls.GetAccessListTraits()), + Roles: uls.GetRoles(), + Traits: traitv1.ToProto(uls.GetTraits()), + UserType: string(uls.Spec.UserType), + GitHubIdentity: externalIdentityToProto(uls.Spec.GitHubIdentity), + SamlIdentities: externalIdentitiesToProto(uls.Spec.SAMLIdentities), }, } } @@ -62,19 +68,41 @@ func ToProto(uls *userloginstate.UserLoginState) *userloginstatev1.UserLoginStat func externalIdentityFromProto(identity *userloginstatev1.ExternalIdentity) *userloginstate.ExternalIdentity { if identity != nil { return &userloginstate.ExternalIdentity{ - UserID: identity.UserId, - Username: identity.Username, + ConnectorID: identity.ConnectorId, + UserID: identity.UserId, + Username: identity.Username, + GrantedRoles: identity.GrantedRoles, + GrantedTraits: traitv1.FromProto(identity.GrantedTraits), } } return nil } +func externalIdentitiesFromProto(identities []*userloginstatev1.ExternalIdentity) 
[]userloginstate.ExternalIdentity { + var res []userloginstate.ExternalIdentity + for _, identity := range identities { + res = append(res, *externalIdentityFromProto(identity)) + } + return res +} + func externalIdentityToProto(identity *userloginstate.ExternalIdentity) *userloginstatev1.ExternalIdentity { if identity != nil { return &userloginstatev1.ExternalIdentity{ - UserId: identity.UserID, - Username: identity.Username, + ConnectorId: identity.ConnectorID, + UserId: identity.UserID, + Username: identity.Username, + GrantedRoles: identity.GrantedRoles, + GrantedTraits: traitv1.ToProto(identity.GrantedTraits), } } return nil } + +func externalIdentitiesToProto(identities []userloginstate.ExternalIdentity) []*userloginstatev1.ExternalIdentity { + var res []*userloginstatev1.ExternalIdentity + for _, identity := range identities { + res = append(res, externalIdentityToProto(&identity)) + } + return res +} diff --git a/api/types/userloginstate/derived.gen.go b/api/types/userloginstate/derived.gen.go new file mode 100644 index 0000000000000..f21a22821a324 --- /dev/null +++ b/api/types/userloginstate/derived.gen.go @@ -0,0 +1,360 @@ +// Code generated by goderive DO NOT EDIT. + +package userloginstate + +import ( + header "github.com/gravitational/teleport/api/types/header" + "time" +) + +// deriveTeleportEqualUserLoginState returns whether this and that are equal. +func deriveTeleportEqualUserLoginState(this, that *UserLoginState) bool { + return (this == nil && that == nil) || + this != nil && that != nil && + deriveTeleportEqual(&this.ResourceHeader, &that.ResourceHeader) && + deriveTeleportEqual_(&this.Spec, &that.Spec) +} + +// deriveDeepCopyUserLoginState recursively copies the contents of src into dst. 
+func deriveDeepCopyUserLoginState(dst, src *UserLoginState) { + func() { + field := new(header.ResourceHeader) + deriveDeepCopy(field, &src.ResourceHeader) + dst.ResourceHeader = *field + }() + func() { + field := new(Spec) + deriveDeepCopy_(field, &src.Spec) + dst.Spec = *field + }() +} + +// deriveTeleportEqual returns whether this and that are equal. +func deriveTeleportEqual(this, that *header.ResourceHeader) bool { + return (this == nil && that == nil) || + this != nil && that != nil && + this.Kind == that.Kind && + this.SubKind == that.SubKind && + this.Version == that.Version && + deriveTeleportEqual_1(&this.Metadata, &that.Metadata) +} + +// deriveTeleportEqual_ returns whether this and that are equal. +func deriveTeleportEqual_(this, that *Spec) bool { + return (this == nil && that == nil) || + this != nil && that != nil && + deriveTeleportEqual_2(this.OriginalRoles, that.OriginalRoles) && + deriveTeleportEqual_3(this.OriginalTraits, that.OriginalTraits) && + deriveTeleportEqual_2(this.Roles, that.Roles) && + deriveTeleportEqual_3(this.Traits, that.Traits) && + deriveTeleportEqual_2(this.AccessListRoles, that.AccessListRoles) && + deriveTeleportEqual_3(this.AccessListTraits, that.AccessListTraits) && + this.UserType == that.UserType && + deriveTeleportEqual_4(this.GitHubIdentity, that.GitHubIdentity) && + deriveTeleportEqual_5(this.SAMLIdentities, that.SAMLIdentities) +} + +// deriveDeepCopy recursively copies the contents of src into dst. +func deriveDeepCopy(dst, src *header.ResourceHeader) { + dst.Kind = src.Kind + dst.SubKind = src.SubKind + dst.Version = src.Version + func() { + field := new(header.Metadata) + deriveDeepCopy_1(field, &src.Metadata) + dst.Metadata = *field + }() +} + +// deriveDeepCopy_ recursively copies the contents of src into dst. 
+func deriveDeepCopy_(dst, src *Spec) { + if src.OriginalRoles == nil { + dst.OriginalRoles = nil + } else { + if dst.OriginalRoles != nil { + if len(src.OriginalRoles) > len(dst.OriginalRoles) { + if cap(dst.OriginalRoles) >= len(src.OriginalRoles) { + dst.OriginalRoles = (dst.OriginalRoles)[:len(src.OriginalRoles)] + } else { + dst.OriginalRoles = make([]string, len(src.OriginalRoles)) + } + } else if len(src.OriginalRoles) < len(dst.OriginalRoles) { + dst.OriginalRoles = (dst.OriginalRoles)[:len(src.OriginalRoles)] + } + } else { + dst.OriginalRoles = make([]string, len(src.OriginalRoles)) + } + copy(dst.OriginalRoles, src.OriginalRoles) + } + if src.OriginalTraits != nil { + dst.OriginalTraits = make(map[string][]string, len(src.OriginalTraits)) + deriveDeepCopy_2(dst.OriginalTraits, src.OriginalTraits) + } else { + dst.OriginalTraits = nil + } + if src.Roles == nil { + dst.Roles = nil + } else { + if dst.Roles != nil { + if len(src.Roles) > len(dst.Roles) { + if cap(dst.Roles) >= len(src.Roles) { + dst.Roles = (dst.Roles)[:len(src.Roles)] + } else { + dst.Roles = make([]string, len(src.Roles)) + } + } else if len(src.Roles) < len(dst.Roles) { + dst.Roles = (dst.Roles)[:len(src.Roles)] + } + } else { + dst.Roles = make([]string, len(src.Roles)) + } + copy(dst.Roles, src.Roles) + } + if src.Traits != nil { + dst.Traits = make(map[string][]string, len(src.Traits)) + deriveDeepCopy_2(dst.Traits, src.Traits) + } else { + dst.Traits = nil + } + if src.AccessListRoles == nil { + dst.AccessListRoles = nil + } else { + if dst.AccessListRoles != nil { + if len(src.AccessListRoles) > len(dst.AccessListRoles) { + if cap(dst.AccessListRoles) >= len(src.AccessListRoles) { + dst.AccessListRoles = (dst.AccessListRoles)[:len(src.AccessListRoles)] + } else { + dst.AccessListRoles = make([]string, len(src.AccessListRoles)) + } + } else if len(src.AccessListRoles) < len(dst.AccessListRoles) { + dst.AccessListRoles = (dst.AccessListRoles)[:len(src.AccessListRoles)] + } + } else { 
+ dst.AccessListRoles = make([]string, len(src.AccessListRoles)) + } + copy(dst.AccessListRoles, src.AccessListRoles) + } + if src.AccessListTraits != nil { + dst.AccessListTraits = make(map[string][]string, len(src.AccessListTraits)) + deriveDeepCopy_2(dst.AccessListTraits, src.AccessListTraits) + } else { + dst.AccessListTraits = nil + } + dst.UserType = src.UserType + if src.GitHubIdentity == nil { + dst.GitHubIdentity = nil + } else { + dst.GitHubIdentity = new(ExternalIdentity) + deriveDeepCopy_3(dst.GitHubIdentity, src.GitHubIdentity) + } + if src.SAMLIdentities == nil { + dst.SAMLIdentities = nil + } else { + if dst.SAMLIdentities != nil { + if len(src.SAMLIdentities) > len(dst.SAMLIdentities) { + if cap(dst.SAMLIdentities) >= len(src.SAMLIdentities) { + dst.SAMLIdentities = (dst.SAMLIdentities)[:len(src.SAMLIdentities)] + } else { + dst.SAMLIdentities = make([]ExternalIdentity, len(src.SAMLIdentities)) + } + } else if len(src.SAMLIdentities) < len(dst.SAMLIdentities) { + dst.SAMLIdentities = (dst.SAMLIdentities)[:len(src.SAMLIdentities)] + } + } else { + dst.SAMLIdentities = make([]ExternalIdentity, len(src.SAMLIdentities)) + } + deriveDeepCopy_4(dst.SAMLIdentities, src.SAMLIdentities) + } +} + +// deriveTeleportEqual_1 returns whether this and that are equal. +func deriveTeleportEqual_1(this, that *header.Metadata) bool { + return (this == nil && that == nil) || + this != nil && that != nil && + this.Name == that.Name && + this.Description == that.Description && + deriveTeleportEqual_6(this.Labels, that.Labels) && + this.Expires.Equal(that.Expires) +} + +// deriveTeleportEqual_2 returns whether this and that are equal. 
+func deriveTeleportEqual_2(this, that []string) bool { + if this == nil || that == nil { + return this == nil && that == nil + } + if len(this) != len(that) { + return false + } + for i := 0; i < len(this); i++ { + if !(this[i] == that[i]) { + return false + } + } + return true +} + +// deriveTeleportEqual_3 returns whether this and that are equal. +func deriveTeleportEqual_3(this, that map[string][]string) bool { + if this == nil || that == nil { + return this == nil && that == nil + } + if len(this) != len(that) { + return false + } + for k, v := range this { + thatv, ok := that[k] + if !ok { + return false + } + if !(deriveTeleportEqual_2(v, thatv)) { + return false + } + } + return true +} + +// deriveTeleportEqual_4 returns whether this and that are equal. +func deriveTeleportEqual_4(this, that *ExternalIdentity) bool { + return (this == nil && that == nil) || + this != nil && that != nil && + this.ConnectorID == that.ConnectorID && + this.UserID == that.UserID && + this.Username == that.Username && + deriveTeleportEqual_2(this.GrantedRoles, that.GrantedRoles) && + deriveTeleportEqual_3(this.GrantedTraits, that.GrantedTraits) +} + +// deriveTeleportEqual_5 returns whether this and that are equal. +func deriveTeleportEqual_5(this, that []ExternalIdentity) bool { + if this == nil || that == nil { + return this == nil && that == nil + } + if len(this) != len(that) { + return false + } + for i := 0; i < len(this); i++ { + if !(deriveTeleportEqual_4(&this[i], &that[i])) { + return false + } + } + return true +} + +// deriveDeepCopy_1 recursively copies the contents of src into dst. 
+func deriveDeepCopy_1(dst, src *header.Metadata) { + dst.Name = src.Name + dst.Description = src.Description + if src.Labels != nil { + dst.Labels = make(map[string]string, len(src.Labels)) + deriveDeepCopy_5(dst.Labels, src.Labels) + } else { + dst.Labels = nil + } + func() { + field := new(time.Time) + deriveDeepCopy_6(field, &src.Expires) + dst.Expires = *field + }() + dst.Revision = src.Revision +} + +// deriveDeepCopy_2 recursively copies the contents of src into dst. +func deriveDeepCopy_2(dst, src map[string][]string) { + for src_key, src_value := range src { + if src_value == nil { + dst[src_key] = nil + } else { + if dst[src_key] != nil { + if len(src_value) > len(dst[src_key]) { + if cap(dst[src_key]) >= len(src_value) { + dst[src_key] = (dst[src_key])[:len(src_value)] + } else { + dst[src_key] = make([]string, len(src_value)) + } + } else if len(src_value) < len(dst[src_key]) { + dst[src_key] = (dst[src_key])[:len(src_value)] + } + } else { + dst[src_key] = make([]string, len(src_value)) + } + copy(dst[src_key], src_value) + } + } +} + +// deriveDeepCopy_3 recursively copies the contents of src into dst.
+func deriveDeepCopy_3(dst, src *ExternalIdentity) { + dst.ConnectorID = src.ConnectorID + dst.UserID = src.UserID + dst.Username = src.Username + if src.GrantedRoles == nil { + dst.GrantedRoles = nil + } else { + if dst.GrantedRoles != nil { + if len(src.GrantedRoles) > len(dst.GrantedRoles) { + if cap(dst.GrantedRoles) >= len(src.GrantedRoles) { + dst.GrantedRoles = (dst.GrantedRoles)[:len(src.GrantedRoles)] + } else { + dst.GrantedRoles = make([]string, len(src.GrantedRoles)) + } + } else if len(src.GrantedRoles) < len(dst.GrantedRoles) { + dst.GrantedRoles = (dst.GrantedRoles)[:len(src.GrantedRoles)] + } + } else { + dst.GrantedRoles = make([]string, len(src.GrantedRoles)) + } + copy(dst.GrantedRoles, src.GrantedRoles) + } + if src.GrantedTraits != nil { + dst.GrantedTraits = make(map[string][]string, len(src.GrantedTraits)) + deriveDeepCopy_2(dst.GrantedTraits, src.GrantedTraits) + } else { + dst.GrantedTraits = nil + } +} + +// deriveDeepCopy_4 recursively copies the contents of src into dst. +func deriveDeepCopy_4(dst, src []ExternalIdentity) { + for src_i, src_value := range src { + func() { + field := new(ExternalIdentity) + deriveDeepCopy_3(field, &src_value) + dst[src_i] = *field + }() + } +} + +// deriveTeleportEqual_6 returns whether this and that are equal. +func deriveTeleportEqual_6(this, that map[string]string) bool { + if this == nil || that == nil { + return this == nil && that == nil + } + if len(this) != len(that) { + return false + } + for k, v := range this { + thatv, ok := that[k] + if !ok { + return false + } + if !(v == thatv) { + return false + } + } + return true +} + +// deriveDeepCopy_5 recursively copies the contents of src into dst. +func deriveDeepCopy_5(dst, src map[string]string) { + for src_key, src_value := range src { + dst[src_key] = src_value + } +} + +// deriveDeepCopy_6 recursively copies the contents of src into dst. 
+func deriveDeepCopy_6(dst, src *time.Time) { + *dst = *src +} diff --git a/api/types/userloginstate/user_login_state.go b/api/types/userloginstate/user_login_state.go index e649c31f035f9..79f67ace589bd 100644 --- a/api/types/userloginstate/user_login_state.go +++ b/api/types/userloginstate/user_login_state.go @@ -23,7 +23,6 @@ import ( "github.com/gravitational/teleport/api/types/header" "github.com/gravitational/teleport/api/types/header/convert/legacy" "github.com/gravitational/teleport/api/types/trait" - "github.com/gravitational/teleport/api/utils" ) // UserLoginState is the ephemeral user login state. This will hold data to differentiate @@ -42,32 +41,75 @@ type UserLoginState struct { // Spec is the specification for the user login state. type Spec struct { - // OriginalRoles is the list of the original roles from the user login state. + // OriginalRoles are the user roles that are part of the user's static definition. These roles + // are not affected by access granted by access lists and are obtained prior to granting access + // list access. Basically, [OriginalRoles] = [Roles] - [AccessListRoles]. OriginalRoles []string `json:"original_roles" yaml:"original_roles"` - // OriginalTraits is the list of the original traits from the user login state. + // OriginalTraits are the user traits that are part of the user's static definition. These + // traits are not affected by access granted by access lists and are obtained prior to granting + // access list access. Basically, [OriginalTraits] = [Traits] - [AccessListTraits]. OriginalTraits trait.Traits `json:"original_traits" yaml:"original_traits"` - // Roles is the list of roles attached to the user login state. + // Roles are the user roles attached to the user. Basically, [Roles] = [OriginalRoles] + + // [AccessListRoles]. Roles []string `json:"roles" yaml:"roles"` - // Traits are the traits attached to the user login state. + // Traits are the traits attached to the user. 
Basically, [Traits] = [OriginalTraits] + + // [AccessListTraits]. Traits trait.Traits `json:"traits" yaml:"traits"` + // AccessListRoles are roles granted to this user by the Access Lists + // membership/ownership. Basically, [AccessListRoles] = [Roles] - [OriginalRoles]. + AccessListRoles []string `json:"access_list_roles,omitempty" yaml:"access_list_roles"` + + // AccessListTraits are traits granted to this user by the Access Lists membership/ownership. + // Basically, [AccessListTraits] = [Traits] - [OriginalTraits]. + AccessListTraits trait.Traits `json:"access_list_traits" yaml:"access_list_traits"` + + // UserType is the type of user that this state represents. UserType types.UserType `json:"user_type" yaml:"user_type"` // GitHubIdentity is user's attached GitHub identity GitHubIdentity *ExternalIdentity `json:"github_identity,omitempty" yaml:"github_identity"` + + // SAMLIdentities are the identities created from the SAML connectors this user has used + // to log in. + // + // NOTE: There is no mechanism to clean up those identities. If the user is deleted, the + // user_login_state and its saml_identities will not be deleted. Even if the user still + // exists, an expired SAML identity isn't cleared from the user_login_state. This means the + // information stored here can be used only as long as there is a background sync running + // that keeps the user's info up-to-date. E.g. the Okta assignment creator uses this + // information, but it runs only when Okta user sync is active and periodically updates the + // user, which in turn updates the user_login_state. + // + // NOTE2: This field isn't currently used. It's introduced so we can resolve the + // https://github.com/gravitational/teleport.e/issues/6723 issue in stages. + // STAGE 1 is to introduce this field and give existing Teleport + // installations enough time to update and populate this field.
+ // STAGE 2, in the v19 release (or maybe even v20), is to deploy the actual fix PR + // (https://github.com/gravitational/teleport.e/pull/7168), which reads this field and calculates + // access to Okta resources. See more details in the description of the fix PR. + // + // TODO(kopiczko) v19: consider proceeding with the STAGE 2 described above. + SAMLIdentities []ExternalIdentity `json:"saml_identities,omitempty" yaml:"saml_identities"` } // ExternalIdentity defines an external identity attached to this user state. type ExternalIdentity struct { + // ConnectorID is the connector this identity was created with. It's empty for the local + // user. + ConnectorID string // UserId is the unique identifier of the external identity such as GitHub // user ID. UserID string // Username is the username of the external identity. Username string + // GrantedRoles are roles specific to this identity, e.g. from the connector's attributes mapping. + GrantedRoles []string + // GrantedTraits are traits specific to this identity, e.g. from the connector's roles attributes mapping. + GrantedTraits trait.Traits } // New creates a new user login state. @@ -102,31 +144,55 @@ func (u *UserLoginState) CheckAndSetDefaults() error { // Clone returns a copy of the member. func (u *UserLoginState) Clone() *UserLoginState { - var copy *UserLoginState - utils.StrictObjectToStruct(u, &copy) - return copy + if u == nil { + return nil + } + out := &UserLoginState{} + deriveDeepCopyUserLoginState(out, u) + return out } -// GetOriginalRoles returns the original roles that the user login state was derived from. +// IsEqual compares two user login states for equality. +func (u *UserLoginState) IsEqual(i *UserLoginState) bool { + return deriveTeleportEqualUserLoginState(u, i) +} + +// GetOriginalRoles returns the original roles that the user login state was derived from. It's the +// same as GetRoles() - GetAccessListRoles().
func (u *UserLoginState) GetOriginalRoles() []string { return u.Spec.OriginalRoles } -// GetOriginalTraits returns the original traits that the user login state was derived from. +// GetOriginalTraits returns the original traits that the user login state was derived from. It's +// the same as GetTraits() - GetAccessListTraits(). func (u *UserLoginState) GetOriginalTraits() map[string][]string { return u.Spec.OriginalTraits } -// GetRoles returns the roles attached to the user login state. +// GetRoles returns the roles attached to the user login state. It's the same as GetOriginalRoles() +// + GetAccessListRoles(). func (u *UserLoginState) GetRoles() []string { return u.Spec.Roles } -// GetTraits returns the traits attached to the user login state. +// GetTraits returns the traits attached to the user login state. It's the same as +// GetOriginalTraits() + GetAccessListTraits(). func (u *UserLoginState) GetTraits() map[string][]string { return u.Spec.Traits } +// GetAccessListRoles returns roles granted to this user by the Access Lists membership/ownership. +// It's the same as GetRoles() - GetOriginalRoles(). +func (u *UserLoginState) GetAccessListRoles() []string { + return u.Spec.AccessListRoles +} + +// GetAccessListTraits returns traits granted to this user by the Access Lists +// membership/ownership. It's the same as GetTraits() - GetOriginalTraits(). +func (u *UserLoginState) GetAccessListTraits() map[string][]string { + return u.Spec.AccessListTraits +} + // GetUserType returns the user type for the user login state. 
func (u *UserLoginState) GetUserType() types.UserType { return u.Spec.UserType diff --git a/api/types/wrappers/wrappers.go b/api/types/wrappers/wrappers.go index c84a62420c7b0..7de3c632fc8d9 100644 --- a/api/types/wrappers/wrappers.go +++ b/api/types/wrappers/wrappers.go @@ -19,6 +19,7 @@ package wrappers import ( "encoding/json" + "slices" "github.com/gogo/protobuf/proto" "github.com/gravitational/trace" @@ -58,6 +59,21 @@ func UnmarshalTraits(data []byte, traits *Traits) error { return nil } +// Clone returns a copy of the Traits map, omitting keys whose value is empty or a single empty string. +func (l Traits) Clone() Traits { + if l == nil { + return nil + } + clone := make(Traits, len(l)) + for key, vals := range l { + if len(vals) == 0 || len(vals) == 1 && vals[0] == "" { + continue + } + clone[key] = slices.Clone(vals) + } + return clone +} + // Marshal marshals value into protobuf representation func (l Traits) Marshal() ([]byte, error) { return proto.Marshal(l.protoType()) diff --git a/api/utils/aws/endpoint.go b/api/utils/aws/endpoint.go index 315658234c56b..c5d677c79af2a 100644 --- a/api/utils/aws/endpoint.go +++ b/api/utils/aws/endpoint.go @@ -58,7 +58,15 @@ func IsRedshiftServerlessEndpoint(uri string) bool { // IsElastiCacheEndpoint returns true if the input URI is an ElastiCache // endpoint. func IsElastiCacheEndpoint(uri string) bool { - return isAWSServiceEndpoint(uri, ElastiCacheServiceName) + _, err := ParseElastiCacheEndpoint(uri) + return err == nil +} + +// IsElastiCacheServerlessEndpoint returns true if the input URI is an ElastiCache Serverless +// endpoint.
+func IsElastiCacheServerlessEndpoint(uri string) bool { + _, err := ParseElastiCacheServerlessEndpoint(uri) + return err == nil } // IsMemoryDBEndpoint returns true if the input URI is a MemoryDB @@ -556,6 +564,58 @@ func trimElastiCacheShardAndNodeID(input string) string { return strings.Join(parts, "-") } +// ParseElastiCacheServerlessEndpoint extracts the details from the provided +// ElastiCache Serverless Redis endpoint, which should be in the form +// <name>.serverless.<short-region>.cache.amazonaws.com:<port> +func ParseElastiCacheServerlessEndpoint(endpoint string) (*RedisEndpointInfo, error) { + endpoint, err := removeSchemaAndPort(endpoint) + if err != nil { + return nil, trace.Wrap(err) + } + + // Remove partition suffix. Note that endpoints for CN regions use the same + // format except they end with AWSCNEndpointSuffix. + endpointWithoutSuffix, _, err := removePartitionSuffix(endpoint) + if err != nil { + return nil, trace.Wrap(err) + } + + // Split into parts to extract details. They look like this in general: + // <name>.serverless.<short-region>.cache + // + // Note that ElastiCache uses short region codes like "use1".
+ const ( + nameIdx = iota + serverlessIdx + shortRegionIdx + svcIdx + numParts + ) + parts := strings.Split(endpointWithoutSuffix, ".") + if len(parts) != numParts || parts[serverlessIdx] != "serverless" || parts[svcIdx] != ElastiCacheServiceName { + return nil, trace.BadParameter("unknown ElastiCache Redis endpoint format %q", endpoint) + } + + region, ok := ShortRegionToRegion(parts[shortRegionIdx]) + if !ok { + return nil, trace.BadParameter("%v is not a valid region", parts[shortRegionIdx]) + } + + // A serverless configuration endpoint carries a generated suffix after the cache name: + // example-<suffix>.serverless.cac1.cache.amazonaws.com:6379 + info := &RedisEndpointInfo{ + ID: parts[nameIdx], + Region: region, + TransitEncryptionEnabled: true, + EndpointType: ElastiCacheConfigurationEndpoint, + } + nameParts := strings.Split(info.ID, "-") + if len(nameParts) > 1 { + info.ID = strings.Join(nameParts[:len(nameParts)-1], "-") + } + return info, nil +} + // ParseMemoryDBEndpoint extracts the details from the provided // MemoryDB endpoint.
// diff --git a/api/utils/aws/endpoint_test.go b/api/utils/aws/endpoint_test.go index 51c408fb54d12..82efc20ed9541 100644 --- a/api/utils/aws/endpoint_test.go +++ b/api/utils/aws/endpoint_test.go @@ -322,6 +322,73 @@ func TestParseElastiCacheEndpoint(t *testing.T) { } } +func TestParseElastiCacheServerlessEndpoint(t *testing.T) { + tests := []struct { + name string + inputURI string + expectInfo *RedisEndpointInfo + expectError bool + }{ + { + name: "normal endpoint", + inputURI: "example-asdf123.serverless.cac1.cache.amazonaws.com:6379", + expectInfo: &RedisEndpointInfo{ + ID: "example", + Region: "ca-central-1", + TransitEncryptionEnabled: true, + EndpointType: ElastiCacheConfigurationEndpoint, + }, + }, + { + name: "CN endpoint", + inputURI: "example-asdf123.serverless.cnn1.cache.amazonaws.com.cn:6379", + expectInfo: &RedisEndpointInfo{ + ID: "example", + Region: "cn-north-1", + TransitEncryptionEnabled: true, + EndpointType: ElastiCacheConfigurationEndpoint, + }, + }, + { + name: "endpoint with schema and parameters", + inputURI: "redis://example-asdf123.serverless.cac1.cache.amazonaws.com:6379?a=b&c=d", + expectInfo: &RedisEndpointInfo{ + ID: "example", + Region: "ca-central-1", + TransitEncryptionEnabled: true, + EndpointType: ElastiCacheConfigurationEndpoint, + }, + }, + { + name: "invalid suffix", + inputURI: "example.serverless.cac1.cache.amazonaws.io:6379", + expectError: true, + }, + { + name: "invalid url", + inputURI: "://example.serverless.cac1.cache.amazonaws.com:6379", + expectError: true, + }, + { + name: "invalid format", + inputURI: "example.serverless.cache.amazonaws.com:6379", + expectError: true, + }, + } + + for _, test := range tests { + t.Run(test.name, func(t *testing.T) { + actualInfo, err := ParseElastiCacheServerlessEndpoint(test.inputURI) + if test.expectError { + require.Error(t, err) + } else { + require.NoError(t, err) + require.Equal(t, test.expectInfo, actualInfo) + } + }) + } +} + func TestParseMemoryDBEndpoint(t *testing.T) { 
t.Parallel() diff --git a/api/utils/aws/fuzz_test.go b/api/utils/aws/fuzz_test.go index c65d018119ec3..fa37cc0b7761e 100644 --- a/api/utils/aws/fuzz_test.go +++ b/api/utils/aws/fuzz_test.go @@ -91,6 +91,24 @@ func FuzzParseElastiCacheEndpoint(f *testing.F) { }) } +func FuzzParseElastiCacheServerlessEndpoint(f *testing.F) { + f.Add("") + f.Add(":123") + f.Add("foo:123") + f.Add("cache.b.c.usnw1.amazonaws.com") + f.Add("a.b.c.d.amazonaws.com:6379") + f.Add("a.serverless.c.cac1.cache.amazonaws.com:6379") + f.Add("example-cache-abc123.serverless.cac1.cache.amazonaws.com:6379") + f.Add("redis://example-cache-abc123.serverless.cac1.cache.amazonaws.com:6379") + f.Add("://://example-cache-abc123.serverless.cac1.cache.amazonaws.com:6379") + + f.Fuzz(func(t *testing.T, endpoint string) { + require.NotPanics(t, func() { + _, _ = ParseElastiCacheServerlessEndpoint(endpoint) + }) + }) +} + func FuzzParseDynamoDBEndpoint(f *testing.F) { f.Add("") f.Add(":123") diff --git a/api/utils/aws/identifiers.go b/api/utils/aws/identifiers.go index fd71aaf22b8e8..d245006bda012 100644 --- a/api/utils/aws/identifiers.go +++ b/api/utils/aws/identifiers.go @@ -53,6 +53,28 @@ func IsValidIAMRoleName(roleName string) error { return nil } +// IsValidIAMRolesAnywhereTrustAnchorName checks whether the AWS IAM Roles Anywhere Trust Anchor name is valid. +// Validation based on the AWS documentation. +// See https://docs.aws.amazon.com/rolesanywhere/latest/APIReference/API_CreateTrustAnchor.html#API_CreateTrustAnchor_RequestBody +func IsValidIAMRolesAnywhereTrustAnchorName(name string) error { + if !matchRolesAnywhereTrustAnchorName(name) { + return trace.BadParameter("trust anchor name is invalid") + } + + return nil +} + +// IsValidIAMRolesAnywhereProfileName checks whether the AWS IAM Roles Anywhere Profile name is valid. +// Validation based on the AWS documentation. 
+// See https://docs.aws.amazon.com/rolesanywhere/latest/APIReference/API_CreateProfile.html#API_CreateProfile_RequestBody +func IsValidIAMRolesAnywhereProfileName(name string) error { + if !matchRolesAnywhereProfileName(name) { + return trace.BadParameter("profile name is invalid") + } + + return nil +} + // IsValidIAMPolicyName checks whether the policy name is a valid AWS IAM Policy // identifier. // @@ -167,7 +189,7 @@ var ( // // Reference: // https://github.com/aws/aws-sdk-go-v2/blob/main/codegen/smithy-aws-go-codegen/src/main/resources/software/amazon/smithy/aws/go/codegen/endpoints.json - matchRegion = regexp.MustCompile(`^[a-z]{2}(-gov|-iso|-isob|-isoe|-isof)?-\w+-\d+$`) + matchRegion = regexp.MustCompile(`^(eusc-)?[a-z]{2}(-gov|-iso|-isob|-isoe|-isof)?-\w+-\d+$`) // https://docs.aws.amazon.com/athena/latest/APIReference/API_CreateWorkGroup.html matchAthenaWorkgroupName = regexp.MustCompile(`^[a-zA-Z0-9._-]{1,128}$`).MatchString @@ -180,6 +202,16 @@ var ( // > special characters other than underscore (_) are not supported matchGlueName = regexp.MustCompile(`^[a-z0-9_]{1,255}$`).MatchString + // matchRolesAnywhereTrustAnchorName is a regex that matches against AWS IAM Roles Anywhere Trust Anchor Names. + // See https://docs.aws.amazon.com/rolesanywhere/latest/APIReference/API_CreateTrustAnchor.html#API_CreateTrustAnchor_RequestBody + matchRolesAnywhereTrustAnchorName = baseResourceNameMatcher + + // matchRolesAnywhereProfileName is a regex that matches against AWS IAM Roles Anywhere Profile Names. 
+ // See https://docs.aws.amazon.com/rolesanywhere/latest/APIReference/API_CreateProfile.html#API_CreateProfile_RequestBody + matchRolesAnywhereProfileName = baseResourceNameMatcher + + baseResourceNameMatcher = regexp.MustCompile(`^[ a-zA-Z0-9-_]{1,255}$`).MatchString + // https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html validPartitions = []string{"aws", "aws-cn", "aws-us-gov"} ) diff --git a/api/utils/aws/identifiers_test.go b/api/utils/aws/identifiers_test.go index a6c701a2e10f9..666b8b1231365 100644 --- a/api/utils/aws/identifiers_test.go +++ b/api/utils/aws/identifiers_test.go @@ -391,3 +391,69 @@ func TestIsValidGlueResourceName(t *testing.T) { }) } } + +func TestIsValidIAMRolesAnywhereTrustAnchorName(t *testing.T) { + for _, tt := range []struct { + name string + trustAnchorName string + errCheck require.ErrorAssertionFunc + }{ + { + name: "valid", + trustAnchorName: "aA0-_", + errCheck: require.NoError, + }, + { + name: "empty", + trustAnchorName: "", + errCheck: require.Error, + }, + { + name: "too long", + trustAnchorName: strings.Repeat("a", 256), + errCheck: require.Error, + }, + { + name: "invalid chars", + trustAnchorName: "+", + errCheck: require.Error, + }, + } { + t.Run(tt.name, func(t *testing.T) { + tt.errCheck(t, IsValidIAMRolesAnywhereTrustAnchorName(tt.trustAnchorName)) + }) + } +} + +func TestIsValidIAMRolesAnywhereProfileName(t *testing.T) { + for _, tt := range []struct { + name string + profileName string + errCheck require.ErrorAssertionFunc + }{ + { + name: "valid", + profileName: "aA0-_", + errCheck: require.NoError, + }, + { + name: "empty", + profileName: "", + errCheck: require.Error, + }, + { + name: "too long", + profileName: strings.Repeat("a", 256), + errCheck: require.Error, + }, + { + name: "invalid chars", + profileName: "+", + errCheck: require.Error, + }, + } { + t.Run(tt.name, func(t *testing.T) { + tt.errCheck(t, IsValidIAMRolesAnywhereProfileName(tt.profileName)) + }) + } +} diff --git 
a/api/utils/clientutils/resources.go b/api/utils/clientutils/resources.go index d10fa7f76cfe4..ee5d2cdff9de6 100644 --- a/api/utils/clientutils/resources.go +++ b/api/utils/clientutils/resources.go @@ -20,6 +20,7 @@ package clientutils import ( "context" + "iter" "github.com/gravitational/trace" @@ -28,26 +29,180 @@ import ( // IterateResources is a helper that iterates through each resource from all // pages and passes them one by one to the provided callback. +// Deprecated: Prefer using [Resources] instead. +// TODO(tross): DELETE IN 19.0.0 func IterateResources[T any]( ctx context.Context, - listPageFunc func(context.Context, int, string) ([]T, string, error), + pageFunc func(context.Context, int, string) ([]T, string, error), callback func(T) error, ) error { - var pageToken string - for { - page, nextToken, err := listPageFunc(ctx, defaults.DefaultChunkSize, pageToken) + for item, err := range Resources(ctx, pageFunc) { if err != nil { return trace.Wrap(err) } - for _, resource := range page { - if err := callback(resource); err != nil { - return trace.Wrap(err) + + if err := callback(item); err != nil { + return trace.Wrap(err) + } + } + + return nil +} + +// rangeParams are parameters provided to [rangeInternal]. +type rangeParams[T any] struct { + // start is the minimum inclusive key in the range yielded by the iteration. + // Empty string means start of the range. + start string + // end is the upper bound (exclusive) key in the range yielded by the iteration. + // Empty string means full remainder of the range. + end string + // pageSize is an optional maximum number of items to retrieve via [rangeParams.pageFunc]. + // The default value is 0; in that case the backend is assumed to impose a page size. + pageSize int + // pageFunc is a user-provided function to retrieve a single page of items. + pageFunc func(context.Context, int, string) ([]T, string, error) + // keyFunc is a user-provided function to retrieve a backend key for a given item.
+ // This key is compared against the range end key, when one is given. + // Backend keys are assumed to be sorted lexicographically. + keyFunc func(item T) string +} + +// rangeInternal is the internal implementation of the resource range getters. +// The iterator will only produce an error if one is encountered retrieving a page. +func rangeInternal[T any](ctx context.Context, params rangeParams[T]) iter.Seq2[T, error] { + return func(yield func(T, error) bool) { + pageToken := params.start + pageSize := params.pageSize + isLookingForEnd := params.end != "" && params.keyFunc != nil + + for { + page, nextToken, lastPageSize, err := Page(ctx, pageSize, pageToken, params.pageFunc) + if err != nil { + yield(*new(T), trace.Wrap(err)) + return + } + + for _, resource := range page { + if isLookingForEnd && params.keyFunc(resource) >= params.end { + return + } + + if !yield(resource, nil) { + return + } + } + + pageToken = nextToken + if nextToken == "" { + return + } + + // Note that the server may return a smaller page at its own discretion; + // we use the last successfully requested page size here to allow the server + // to temporarily lower the size if needed. + pageSize = lastPageSize + } + } +} + +// ResourcesWithPageSize returns an iterator over all resources from every page, limited to pageSize, produced from the pageFunc. +// The iterator will only produce an error if one is encountered retrieving a page.
+func Resources[T any](ctx context.Context, pageFunc func(context.Context, int, string) ([]T, string, error)) iter.Seq2[T, error] { + return rangeInternal(ctx, rangeParams[T]{ + pageFunc: pageFunc, + pageSize: defaults.DefaultChunkSize, + }) +} + +// RangeResources returns resources within the range [start, end). + +// Example use: + +// func (c *Client) RangeFoos(ctx context.Context, start, end string) iter.Seq2[Foo, error] { +// return clientutils.RangeResources(ctx, start, end, c.ListFoos, Foo.GetName) +// } +func RangeResources[T any](ctx context.Context, start, end string, + pageFunc func(context.Context, int, string) ([]T, string, error), + keyFunc func(item T) string) iter.Seq2[T, error] { + + return rangeInternal(ctx, rangeParams[T]{ + start: start, + end: end, + pageFunc: pageFunc, + keyFunc: keyFunc, + pageSize: defaults.DefaultChunkSize, + }) +} + +// CollectWithFallback collects all items in a collection using pageFunc; if pageFunc returns a +// NotImplemented error, fallbackFunc is attempted instead. +// Example: +// +// foos, err := clientutils.CollectWithFallback(ctx, client.ListFoos, client.GetFoos) +func CollectWithFallback[T any](ctx context.Context, + pageFunc func(context.Context, int, string) ([]T, string, error), + fallbackFunc func(context.Context) ([]T, error)) ([]T, error) { + + var out []T + for item, err := range Resources(ctx, pageFunc) { + if err == nil { + out = append(out, item) + continue + } + + if trace.IsNotImplemented(err) { + fallbackOut, fallbackErr := fallbackFunc(ctx) + if fallbackErr != nil { + return nil, trace.Wrap(fallbackErr) } + return fallbackOut, nil } - if nextToken == "" { - return nil + return nil, trace.Wrap(err) + } + + return out, nil +} + +// Page is a client-side utility that implements automatic page size adjustment.
+func Page[T any]( + ctx context.Context, + pageSize int, + pageToken string, + pageFunc func(context.Context, int, string) ([]T, string, error), +) ( + _ []T, + nextPageToken string, + lastPageSize int, + _ error, +) { + for { + page, nextToken, err := pageFunc(ctx, pageSize, pageToken) + if err != nil { + if trace.IsLimitExceeded(err) { + // Cut pageSize in half if the gRPC max message size is exceeded. + pageSize /= 2 + // This is an extremely unlikely scenario, but better to cover it anyway. + if pageSize == 0 { + return nil, "", 0, trace.Wrap(err, "resource is too large to retrieve, token: %q", pageToken) + } + + continue + } + + return nil, "", pageSize, trace.Wrap(err) } - pageToken = nextToken + + return page, nextToken, pageSize, nil } } diff --git a/api/utils/clientutils/resources_test.go b/api/utils/clientutils/resources_test.go index dffc1867e4269..9398cdd8743e4 100644 --- a/api/utils/clientutils/resources_test.go +++ b/api/utils/clientutils/resources_test.go @@ -20,57 +20,306 @@ package clientutils import ( "context" + "fmt" + "strconv" "testing" "github.com/gravitational/trace" + "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" "github.com/gravitational/teleport/api/defaults" ) + +const totalItems = defaults.DefaultChunkSize*2 + 5 + type mockPaginator struct { - accessDenied bool + accessDenied bool + maxSupportedPageSize int + pageCalls int +} + +func generatePage(start, count int) []int { + page := make([]int, count) + for i := range count { + page[i] = start + i + } + return page +} + +func limitCount(start, pageSize int) int { + if start >= totalItems { + return 0 + } + if start+pageSize > totalItems { + return totalItems - start + } + return pageSize } -func (m *mockPaginator) List(_ context.Context, pageSize int, token string) ([]bool, string, error) { +func nextToken(start, pageSize int) string { + if start+pageSize > totalItems { + return "" + } + return strconv.Itoa(start + pageSize) +} + +func startIndex(token string)
int { + var start int + if token != "" { + start, _ = strconv.Atoi(token) + } + return start +} + +func (m *mockPaginator) List(_ context.Context, pageSize int, token string) ([]int, string, error) { + m.pageCalls++ if m.accessDenied { return nil, "", trace.AccessDenied("access denied") } - switch token { - case "": - return make([]bool, pageSize), "page1", nil - case "page1": - return make([]bool, pageSize), "page2", nil - case "page2": - return make([]bool, 5), "", nil - default: + + if pageSize > m.maxSupportedPageSize { + return nil, "", trace.LimitExceeded("page size %d exceeded the limit", pageSize) + } + + start := startIndex(token) + if start >= totalItems { return nil, "", trace.BadParameter("invalid token") } + count := limitCount(start, pageSize) + next := nextToken(start, pageSize) + + return generatePage(start, count), next, nil } func TestIterateResources(t *testing.T) { t.Run("success", func(t *testing.T) { var count int - paginator := mockPaginator{} - err := IterateResources(context.Background(), paginator.List, func(bool) error { + paginator := mockPaginator{maxSupportedPageSize: defaults.DefaultChunkSize} + err := IterateResources(t.Context(), paginator.List, func(int) error { count++ return nil }) - require.NoError(t, err) - require.Equal(t, defaults.DefaultChunkSize*2+5, count) + assert.NoError(t, err) + assert.Equal(t, totalItems, count) }) t.Run("paginator error", func(t *testing.T) { - paginator := mockPaginator{accessDenied: true} - err := IterateResources(context.Background(), paginator.List, func(bool) error { + paginator := mockPaginator{accessDenied: true, maxSupportedPageSize: defaults.DefaultChunkSize} + err := IterateResources(t.Context(), paginator.List, func(int) error { return nil }) - require.Error(t, err) + assert.Error(t, err) }) t.Run("callback error", func(t *testing.T) { - paginator := mockPaginator{} - err := IterateResources(context.Background(), paginator.List, func(bool) error { + paginator := 
mockPaginator{maxSupportedPageSize: defaults.DefaultChunkSize} + err := IterateResources(t.Context(), paginator.List, func(int) error { return trace.BadParameter("error") }) - require.Error(t, err) + assert.Error(t, err) + }) +} + +func TestResources(t *testing.T) { + t.Run("success", func(t *testing.T) { + paginator := mockPaginator{maxSupportedPageSize: defaults.DefaultChunkSize} + var count int + for _, err := range Resources(t.Context(), paginator.List) { + count++ + require.NoError(t, err) + } + + assert.Equal(t, totalItems, count) + assert.Equal(t, 3, paginator.pageCalls) }) + t.Run("paginator error", func(t *testing.T) { + paginator := mockPaginator{accessDenied: true, maxSupportedPageSize: defaults.DefaultChunkSize} + var count int + for _, err := range Resources(t.Context(), paginator.List) { + count++ + require.Error(t, err) + } + assert.Equal(t, 1, count) + assert.Equal(t, 1, paginator.pageCalls) + }) + + t.Run("limit exceeded", func(t *testing.T) { + paginator := mockPaginator{maxSupportedPageSize: 0} + var count int + for _, err := range Resources(t.Context(), paginator.List) { + count++ + require.Error(t, err) + } + assert.Equal(t, 1, count) + assert.Equal(t, 10, paginator.pageCalls) + }) +} + +func TestResourcesWithCustomPageSize(t *testing.T) { + paginator := mockPaginator{maxSupportedPageSize: defaults.DefaultChunkSize} + var count int + for _, err := range ResourcesWithPageSize(t.Context(), paginator.List, 10) { + count++ + require.NoError(t, err) + } + assert.Equal(t, totalItems, count) + assert.Equal(t, 201, paginator.pageCalls) +} + +func TestRangeResources(t *testing.T) { + t.Parallel() + keyFunc := func(item int) string { + return fmt.Sprintf("%06d", item) + } + + tests := []struct { + name string + start string + end string + expectedItemCount int + expectedListCalls int + accessDenied bool + maxSupportedPageSize int + errFn func(require.TestingT, error, ...any) + }{ + { + name: "RangeAllItems", + expectedItemCount: totalItems, + 
expectedListCalls: 3, + maxSupportedPageSize: defaults.DefaultChunkSize, + errFn: require.NoError, + }, + { + name: "RangeAccessDenied", + expectedItemCount: 0, + expectedListCalls: 1, + accessDenied: true, + maxSupportedPageSize: defaults.DefaultChunkSize, + errFn: require.Error, + }, + { + name: "RangeWithEnd", + expectedItemCount: 20, + expectedListCalls: 1, + end: keyFunc(20), + maxSupportedPageSize: defaults.DefaultChunkSize, + errFn: require.NoError, + }, + { + name: "RangeWithStart", + expectedItemCount: totalItems - 1337, + expectedListCalls: 1, + start: keyFunc(1337), + maxSupportedPageSize: defaults.DefaultChunkSize, + errFn: require.NoError, + }, + { + name: "RangeSpan", + expectedItemCount: 1500 - 500, + // The end marker is not inclusive and the number of items falls exactly on the page size, so in this case two calls will be made. + expectedListCalls: 2, + start: keyFunc(500), + end: keyFunc(1500), + maxSupportedPageSize: defaults.DefaultChunkSize, + errFn: require.NoError, + }, + { + name: "RangeLimitExceeded", + expectedItemCount: 0, + expectedListCalls: 10, + start: keyFunc(500), + end: keyFunc(1500), + maxSupportedPageSize: -1, + errFn: require.Error, + }, + { + name: "RangeLimitExceededWithRecovery", + expectedItemCount: 1000, + expectedListCalls: 4, + start: keyFunc(500), + end: keyFunc(1500), + maxSupportedPageSize: defaults.DefaultChunkSize / 2, + errFn: require.NoError, + }, + } + + for _, tc := range tests { + t.Run(tc.name, func(t *testing.T) { + paginator := mockPaginator{accessDenied: tc.accessDenied, maxSupportedPageSize: tc.maxSupportedPageSize} + var count int + + for _, err := range RangeResources(t.Context(), tc.start, tc.end, paginator.List, keyFunc) { + if err == nil { + count++ + } + + tc.errFn(t, err) + } + + assert.Equal(t, tc.expectedItemCount, count) + assert.Equal(t, tc.expectedListCalls, paginator.pageCalls) + }) + } +} + +func TestCollectWithFallback(t *testing.T) { + t.Parallel() + + testCases := []struct { + name string + pageFunc 
func(context.Context, int, string) ([]string, string, error) + fallbackFunc func(context.Context) ([]string, error) + err error + }{ + { + name: "happy primary path", + pageFunc: func(context.Context, int, string) ([]string, string, error) { + return []string{"hello", "world"}, "", nil + }, + fallbackFunc: func(context.Context) ([]string, error) { + panic("unexpected call") + }, + err: nil, + }, + + { + name: "fallback fail", + pageFunc: func(context.Context, int, string) ([]string, string, error) { + return nil, "", trace.NotImplemented("") + }, + fallbackFunc: func(context.Context) ([]string, error) { + return nil, trace.BadParameter("") + }, + err: trace.BadParameter(""), + }, + { + name: "fallback success", + pageFunc: func(context.Context, int, string) ([]string, string, error) { + return nil, "", trace.NotImplemented("") + }, + fallbackFunc: func(context.Context) ([]string, error) { + return []string{"hello", "world"}, nil + }, + err: nil, + }, + { + name: "fallback no match", + pageFunc: func(context.Context, int, string) ([]string, string, error) { + return nil, "", trace.BadParameter("") + }, + fallbackFunc: func(context.Context) ([]string, error) { + panic("unexpected call") + }, + err: trace.BadParameter(""), + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + out, err := CollectWithFallback(context.Background(), tc.pageFunc, tc.fallbackFunc) + if tc.err == nil { + require.NotNil(t, out) + } + require.ErrorIs(t, err, tc.err) + }) + } } diff --git a/api/utils/gcp/alloydb.go b/api/utils/gcp/alloydb.go new file mode 100644 index 0000000000000..82ccc55a62474 --- /dev/null +++ b/api/utils/gcp/alloydb.go @@ -0,0 +1,149 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package gcp + +import ( + "fmt" + "strings" + + "github.com/gravitational/trace" +) + +// AlloyDBFullInstanceName fully identifies a particular AlloyDB instance. +// The "full" is in contrast with the "InstanceID" field, which is also referred to as the "instance name", +// yet it isn't a globally unique instance identifier. +type AlloyDBFullInstanceName struct { + // ProjectID is the project ID. + ProjectID string + // Location is the location, also known as the region. + Location string + // ClusterID is the cluster ID. + ClusterID string + // InstanceID is the instance ID. + InstanceID string +} + +// ParentClusterName returns the full name of the parent cluster. +func (info AlloyDBFullInstanceName) ParentClusterName() string { + return fmt.Sprintf( + "projects/%s/locations/%s/clusters/%s", info.ProjectID, info.Location, info.ClusterID, + ) +} + +// InstanceName returns the full name of the instance. +func (info AlloyDBFullInstanceName) InstanceName() string { + return fmt.Sprintf( + "projects/%s/locations/%s/clusters/%s/instances/%s", info.ProjectID, info.Location, info.ClusterID, info.InstanceID, + ) +} + +const ( + // alloyDBScheme is the custom URI scheme used to disambiguate AlloyDB URIs from all others. + alloyDBScheme = "alloydb://" +) + +// IsAlloyDBConnectionURI returns true if the uri can possibly be parsed as an AlloyDB connection URI. +// +// It doesn't try to parse it; it merely checks for the presence of the custom `alloydb://` scheme. 
+func IsAlloyDBConnectionURI(uri string) bool { + return strings.HasPrefix(uri, alloyDBScheme) +} + +// ParseAlloyDBConnectionURI parses a "connection URI" (as it is called by GCP in some places) into the full instance name. +// The URI format requires the custom scheme `alloydb://`, which we use to disambiguate AlloyDB URIs from all others. +// +// Example URI: alloydb://projects/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/my-instance +func ParseAlloyDBConnectionURI(connectionURI string) (*AlloyDBFullInstanceName, error) { + if connectionURI == "" { + return nil, trace.BadParameter("connection URI cannot be empty") + } + + uriNoPrefix, found := strings.CutPrefix(connectionURI, alloyDBScheme) + if !found { + return nil, trace.BadParameter("invalid connection URI %q: should start with %v", connectionURI, alloyDBScheme) + } + + parts := strings.Split(uriNoPrefix, "/") + if len(parts) != 8 { + return nil, trace.BadParameter("invalid connection URI %q: wrong number of parts", connectionURI) + } + + switch { + case parts[0] != "projects": + return nil, trace.BadParameter("invalid connection URI %q: expected 'projects', got %q", connectionURI, parts[0]) + case parts[2] != "locations": + return nil, trace.BadParameter("invalid connection URI %q: expected 'locations', got %q", connectionURI, parts[2]) + case parts[4] != "clusters": + return nil, trace.BadParameter("invalid connection URI %q: expected 'clusters', got %q", connectionURI, parts[4]) + case parts[6] != "instances": + return nil, trace.BadParameter("invalid connection URI %q: expected 'instances', got %q", connectionURI, parts[6]) + } + + project, location, cluster, instance := parts[1], parts[3], parts[5], parts[7] + + switch { + case project == "": + return nil, trace.BadParameter("invalid connection URI %q: project cannot be empty", connectionURI) + case location == "": + return nil, trace.BadParameter("invalid connection URI %q: location cannot be empty", connectionURI) + case cluster == 
"": + return nil, trace.BadParameter("invalid connection URI %q: cluster cannot be empty", connectionURI) + case instance == "": + return nil, trace.BadParameter("invalid connection URI %q: instance cannot be empty", connectionURI) + } + + // '?' cannot be a part of a valid instance name; this looks like an attempt at a query parameter. + if strings.Contains(instance, "?") { + return nil, trace.BadParameter("invalid connection URI %q: query parameters are not accepted", connectionURI) + } + + return &AlloyDBFullInstanceName{ + ProjectID: project, + Location: location, + ClusterID: cluster, + InstanceID: instance, + }, nil +} + +// AlloyDBEndpointType is the AlloyDB endpoint type. +type AlloyDBEndpointType string + +const ( + // AlloyDBEndpointTypePublic is the public endpoint type. + AlloyDBEndpointTypePublic AlloyDBEndpointType = "public" + // AlloyDBEndpointTypePrivate is the private endpoint type. + AlloyDBEndpointTypePrivate AlloyDBEndpointType = "private" + // AlloyDBEndpointTypePSC is the PSC endpoint type. + AlloyDBEndpointTypePSC AlloyDBEndpointType = "psc" +) + +// AlloyDBEndpointTypes is a list of all known AlloyDB endpoint types. +var AlloyDBEndpointTypes = []AlloyDBEndpointType{ + AlloyDBEndpointTypePublic, + AlloyDBEndpointTypePrivate, + AlloyDBEndpointTypePSC, +} + +// ValidateAlloyDBEndpointType checks that endpointType is either empty or one of the known AlloyDB endpoint types. +func ValidateAlloyDBEndpointType(endpointType string) error { + if endpointType == "" { + return nil + } + for _, t := range AlloyDBEndpointTypes { + if endpointType == string(t) { + return nil + } + } + return trace.BadParameter("invalid alloy db endpoint type: %v, expected one of %v", endpointType, AlloyDBEndpointTypes) +} diff --git a/api/utils/gcp/alloydb_test.go b/api/utils/gcp/alloydb_test.go new file mode 100644 index 0000000000000..b24d92665a155 --- /dev/null +++ b/api/utils/gcp/alloydb_test.go @@ -0,0 +1,155 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package gcp + +import ( + "testing" + + "github.com/stretchr/testify/require" +) + +func TestIsAlloyDBConnectionURI(t *testing.T) { + require.True(t, IsAlloyDBConnectionURI("alloydb://dummy")) + require.False(t, IsAlloyDBConnectionURI("http://dummy")) + require.False(t, IsAlloyDBConnectionURI("just/some/stuff")) +} + +func TestParseAlloyDBConnectionURI(t *testing.T) { + tests := []struct { + name string + uri string + want *AlloyDBFullInstanceName + wantErr string + }{ + { + name: "valid address", + uri: "alloydb://projects/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/my-instance", + want: &AlloyDBFullInstanceName{ + ProjectID: "my-project-123456", + Location: "europe-west1", + ClusterID: "my-cluster", + InstanceID: "my-instance", + }, + }, + { + name: "empty string is rejected", + uri: "", + wantErr: `connection URI cannot be empty`, + }, + { + name: "missing 'projects'", + uri: "alloydb://PROJECTS/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/my-instance", + wantErr: `invalid connection URI "alloydb://PROJECTS/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/my-instance": expected 'projects', got "PROJECTS"`, + }, + { + name: "missing 'locations'", + uri: "alloydb://projects/my-project-123456/LOCATIONS/europe-west1/clusters/my-cluster/instances/my-instance", + wantErr: `invalid connection URI "alloydb://projects/my-project-123456/LOCATIONS/europe-west1/clusters/my-cluster/instances/my-instance": expected 'locations', got "LOCATIONS"`, + }, + { + name: "missing 
'clusters'", + uri: "alloydb://projects/my-project-123456/locations/europe-west1/CLUSTERS/my-cluster/instances/my-instance", + wantErr: `invalid connection URI "alloydb://projects/my-project-123456/locations/europe-west1/CLUSTERS/my-cluster/instances/my-instance": expected 'clusters', got "CLUSTERS"`, + }, + { + name: "missing 'instances'", + uri: "alloydb://projects/my-project-123456/locations/europe-west1/clusters/my-cluster/INSTANCES/my-instance", + wantErr: `invalid connection URI "alloydb://projects/my-project-123456/locations/europe-west1/clusters/my-cluster/INSTANCES/my-instance": expected 'instances', got "INSTANCES"`, + }, + { + name: "empty project", + uri: "alloydb://projects//locations/europe-west1/clusters/my-cluster/instances/my-instance", + wantErr: `invalid connection URI "alloydb://projects//locations/europe-west1/clusters/my-cluster/instances/my-instance": project cannot be empty`, + }, + { + name: "empty location", + uri: "alloydb://projects/my-project-123456/locations//clusters/my-cluster/instances/my-instance", + wantErr: `invalid connection URI "alloydb://projects/my-project-123456/locations//clusters/my-cluster/instances/my-instance": location cannot be empty`, + }, + { + name: "empty cluster", + uri: "alloydb://projects/my-project-123456/locations/europe-west1/clusters//instances/my-instance", + wantErr: `invalid connection URI "alloydb://projects/my-project-123456/locations/europe-west1/clusters//instances/my-instance": cluster cannot be empty`, + }, + { + name: "empty instance", + uri: "alloydb://projects/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/", + wantErr: `invalid connection URI "alloydb://projects/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/": instance cannot be empty`, + }, + { + name: "missing scheme", + uri: "projects/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/my-instance", + wantErr: `invalid connection URI 
"projects/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/my-instance": should start with alloydb://`, + }, + { + name: "invalid address", + uri: "alloydb://invalid", + wantErr: `invalid connection URI "alloydb://invalid": wrong number of parts`, + }, + { + name: "query params are not accepted", + uri: "alloydb://projects/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/my-instance?foo=bar", + wantErr: `invalid connection URI "alloydb://projects/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/my-instance?foo=bar": query parameters are not accepted`, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + parsed, err := ParseAlloyDBConnectionURI(tt.uri) + if tt.wantErr == "" { + require.NoError(t, err) + require.Equal(t, tt.want, parsed) + } else { + require.ErrorContains(t, err, tt.wantErr) + } + }) + } +} + +func TestAlloyDBResourceNames(t *testing.T) { + info := AlloyDBFullInstanceName{ + ProjectID: "my-project-123456", + Location: "europe-west1", + ClusterID: "my-cluster", + InstanceID: "my-instance", + } + require.Equal(t, "projects/my-project-123456/locations/europe-west1/clusters/my-cluster", info.ParentClusterName()) + require.Equal(t, "projects/my-project-123456/locations/europe-west1/clusters/my-cluster/instances/my-instance", info.InstanceName()) +} + +func TestValidateAlloyDBEndpointType(t *testing.T) { + tests := []struct { + name string + str string + wantErr string + }{ + {name: "empty string", str: "", wantErr: ""}, + {name: "private", str: "private", wantErr: ""}, + {name: "public", str: "public", wantErr: ""}, + {name: "psc", str: "psc", wantErr: ""}, + {name: "caps", str: "PUBLIC", wantErr: `invalid alloy db endpoint type: PUBLIC, expected one of [public private psc]`}, + } + + for _, test := range tests { + t.Run(test.name, func(t *testing.T) { + err := ValidateAlloyDBEndpointType(test.str) + if test.wantErr != "" { + require.ErrorContains(t, err, 
test.wantErr) + } else { + require.NoError(t, err) + } + }) + } +} diff --git a/api/utils/grpc/size.go b/api/utils/grpc/size.go new file mode 100644 index 0000000000000..2c37e2df117b3 --- /dev/null +++ b/api/utils/grpc/size.go @@ -0,0 +1,116 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package grpc + +import ( + "fmt" + "math" + "os" + "strconv" + "strings" + "unicode" +) + +// defaultClientRecvSize is the gRPC client default for received message sizes. +const defaultClientRecvSize = 4 * 1024 * 1024 // 4MB + +// parseBytes takes a human-readable representation of bytes such as '24mb' and returns the number of bytes as an integer. +// +// Only a subset of SI prefixes is supported, up to gi/gb. +func parseBytes(s string) (int, error) { + const ( + Byte = 1 << (iota * 10) + KiByte + MiByte + GiByte + IByte = 1 + KByte = IByte * 1000 + MByte = KByte * 1000 + GByte = MByte * 1000 + ) + + var bytesSizeTable = map[string]int{ + "b": Byte, + "kib": KiByte, + "kb": KByte, + "mib": MiByte, + "mb": MByte, + "gib": GiByte, + "gb": GByte, + "": Byte, + "ki": KiByte, + "k": KByte, + "mi": MiByte, + "m": MByte, + "gi": GiByte, + "g": GByte, + } + + lastDigit := 0 + for _, r := range s { + if !unicode.IsDigit(r) && r != '.' 
{ + break + } + lastDigit++ + } + + num := s[:lastDigit] + + f, err := strconv.ParseFloat(num, 32) + if err != nil { + return 0, err + } + value := float32(f) + + extra := strings.ToLower(strings.TrimSpace(s[lastDigit:])) + if m, ok := bytesSizeTable[extra]; ok { + value *= float32(m) + if value >= math.MaxInt32 { + return 0, fmt.Errorf("too large: %v", s) + } + return int(value), nil + } + + return 0, fmt.Errorf("unhandled size name: %v", extra) +} + +// MaxClientRecvMsgSize returns the maximum message size, in bytes, that the client can receive. +// +// By default 4MB is returned; to override this, set the `TELEPORT_UNSTABLE_GRPC_RECV_SIZE` environment +// variable. If the value cannot be parsed or exceeds int32 limits, the default value is returned. +// +// The result of this call can be passed directly into `grpc.MaxCallRecvMsgSize`, for example: +// +// conn, err := grpc.DialContext(ctx, target, +// grpc.WithDefaultCallOptions( +// grpc.MaxCallRecvMsgSize(grpcutils.MaxClientRecvMsgSize()), +// ), +// ) +func MaxClientRecvMsgSize() int { + val := os.Getenv("TELEPORT_UNSTABLE_GRPC_RECV_SIZE") + if val == "" { + return defaultClientRecvSize + } + + size, err := parseBytes(val) + if err != nil { + return defaultClientRecvSize + } + + return size +} diff --git a/api/utils/grpc/size_test.go b/api/utils/grpc/size_test.go new file mode 100644 index 0000000000000..4752746732553 --- /dev/null +++ b/api/utils/grpc/size_test.go @@ -0,0 +1,98 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+// See the License for the specific language governing permissions and +// limitations under the License. + +package grpc + +import ( + "testing" + + "github.com/stretchr/testify/assert" +) + +func TestMaxClientRecvMsgSize(t *testing.T) { + testCases := []struct { + desc string + size string + bytes int + }{ + { + desc: "Decimal", + size: "1234", + bytes: 1234, + }, + { + desc: "Unset", + size: "", + bytes: defaultClientRecvSize, + }, + { + desc: "Unhandled units", + size: "4TB", + bytes: defaultClientRecvSize, + }, + { + desc: "Too large", + size: "20GB", + bytes: defaultClientRecvSize, + }, + { + desc: "Rubbish", + size: "foobar", + bytes: defaultClientRecvSize, + }, + { + desc: "Human mib", + size: "8mib", + bytes: 8 * 1024 * 1024, + }, + { + desc: "Human kib", + size: "8kib", + bytes: 8 * 1024, + }, + { + desc: "Human mb", + size: "8mb", + bytes: 8 * 1000 * 1000, + }, + { + desc: "Floats", + size: "2.5kb", + bytes: 2500, + }, + { + desc: "Human kb", + size: "8kb", + bytes: 8 * 1000, + }, + { + desc: "Human m", + size: "8m", + bytes: 8 * 1000 * 1000, + }, + { + desc: "Human k", + size: "8k", + bytes: 8 * 1000, + }, + } + + for _, tt := range testCases { + t.Run(tt.desc, func(t *testing.T) { + t.Setenv("TELEPORT_UNSTABLE_GRPC_RECV_SIZE", tt.size) + assert.Equal(t, tt.bytes, MaxClientRecvMsgSize()) + }) + } +} diff --git a/api/utils/grpc/stream/stream.go b/api/utils/grpc/stream/stream.go index 7fe4694da7954..fecd9d2b7d1f6 100644 --- a/api/utils/grpc/stream/stream.go +++ b/api/utils/grpc/stream/stream.go @@ -47,17 +47,45 @@ type ReadWriter struct { wLock sync.Mutex rLock sync.Mutex rBytes []byte + + options *Options +} + +// Options holds configuration options for NewReadWriter. +type Options struct { + // DisableChunking disables automatic splitting of data messages + // that exceed MaxChunkSize during writes. + // This is useful when the receiver does not support chunked reads. 
+ DisableChunking bool +} + +// Option allows setting options as functional arguments to NewReadWriter. +type Option func(s *Options) + +// WithDisabledChunking disables automatic splitting of data messages +// that exceed MaxChunkSize during writes. +// This is useful when the receiver does not support chunked reads. +func WithDisabledChunking() Option { + return func(s *Options) { + s.DisableChunking = true + } } // NewReadWriter creates a new ReadWriter that leverages the provided // source to retrieve data from and write data to. -func NewReadWriter(source Source) (*ReadWriter, error) { +func NewReadWriter(source Source, opts ...Option) (*ReadWriter, error) { if source == nil { return nil, trace.BadParameter("parameter source required") } + options := &Options{} + for _, opt := range opts { + opt(options) + } + return &ReadWriter{ - source: source, + source: source, + options: options, }, nil } @@ -102,7 +130,7 @@ func (c *ReadWriter) Read(b []byte) (n int, err error) { // the grpc stream. To prevent exhausting the stream all // sends on the stream are limited to be at most MaxChunkSize. // If the data exceeds the MaxChunkSize it will be sent in -// batches. +// batches. This behavior can be disabled by using WithDisabledChunking. 
func (c *ReadWriter) Write(b []byte) (int, error) { c.wLock.Lock() defer c.wLock.Unlock() @@ -110,7 +138,7 @@ func (c *ReadWriter) Write(b []byte) (int, error) { var sent int for len(b) > 0 { chunk := b - if len(chunk) > MaxChunkSize { + if !c.options.DisableChunking && len(chunk) > MaxChunkSize { chunk = chunk[:MaxChunkSize] } diff --git a/api/utils/grpc/stream/stream_test.go b/api/utils/grpc/stream/stream_test.go index 35dfb7070b53c..6658e4f97daf6 100644 --- a/api/utils/grpc/stream/stream_test.go +++ b/api/utils/grpc/stream/stream_test.go @@ -52,7 +52,7 @@ func (m *mockStream) Recv() ([]byte, error) { return b[:n], err } -func newStreamPipe(t *testing.T) (*ReadWriter, net.Conn) { +func newStreamPipe(t *testing.T, opts ...Option) (*ReadWriter, net.Conn) { local, remote := net.Pipe() stream := newMockStream(context.Background(), remote) @@ -63,7 +63,7 @@ func newStreamPipe(t *testing.T) (*ReadWriter, net.Conn) { require.NoError(t, remote.SetReadDeadline(timeout)) require.NoError(t, remote.SetWriteDeadline(timeout)) - streamConn, err := NewReadWriter(stream) + streamConn, err := NewReadWriter(stream, opts...) 
require.NoError(t, err) return streamConn, local @@ -122,6 +122,30 @@ func TestReadWriter_WriteChunk(t *testing.T) { wg.Wait() } +func TestReadWriter_WriteDisabledChunk(t *testing.T) { + streamConn, local := newStreamPipe(t, WithDisabledChunking()) + wg := &sync.WaitGroup{} + wg.Add(2) + + data := make([]byte, MaxChunkSize+1) + go func() { + defer wg.Done() + n, err := streamConn.Write(data) + assert.NoError(t, err) + assert.Len(t, data, n) + }() + go func() { + defer wg.Done() + b := make([]byte, 2*MaxChunkSize) + n, err := local.Read(b) + assert.NoError(t, err) + assert.Len(t, data, n) + assert.Equal(t, data[:n], b[:n]) + }() + + wg.Wait() +} + func TestReadWriter_Read(t *testing.T) { streamConn, local := newStreamPipe(t) wg := &sync.WaitGroup{} diff --git a/api/utils/iterutils/iter.go b/api/utils/iterutils/iter.go index 3d0ddf495f43c..91d68bb6741a7 100644 --- a/api/utils/iterutils/iter.go +++ b/api/utils/iterutils/iter.go @@ -35,3 +35,18 @@ func Map[In, Out any](f func(In) Out, seq iter.Seq[In]) iter.Seq[Out] { } } } + +// Filter returns an iterator over seq that only includes the values v for which +// f(v) is true. +// +// Copied from https://github.com/golang/go/issues/61898. We should switch to an +// official package once it is available. 
+func Filter[V any](f func(V) bool, seq iter.Seq[V]) iter.Seq[V] { + return func(yield func(V) bool) { + for v := range seq { + if f(v) && !yield(v) { + return + } + } + } +} diff --git a/api/utils/iterutils/iter_test.go b/api/utils/iterutils/iter_test.go index e6bf3e3feecd2..763fe63ceccbe 100644 --- a/api/utils/iterutils/iter_test.go +++ b/api/utils/iterutils/iter_test.go @@ -37,3 +37,21 @@ func ExampleMap() { // HELLO WORLD // FOO } + +func ExampleFilter() { + inputs := []string{ + "a", + "bb", + "ccc", + "dddd", + } + isOddLen := func(s string) bool { + return len(s)%2 == 1 + } + for filtered := range Filter(isOddLen, slices.Values(inputs)) { + fmt.Println(filtered) + } + // Output: + // a + // ccc +} diff --git a/api/utils/keypaths/keypaths.go b/api/utils/keypaths/keypaths.go index 6e0bc36d1e3c7..cd42d96bc7aa3 100644 --- a/api/utils/keypaths/keypaths.go +++ b/api/utils/keypaths/keypaths.go @@ -29,7 +29,7 @@ const ( // sessionKeyDir is a sub-directory where session keys are stored sessionKeyDir = "keys" // sshDirSuffix is the suffix of a sub-directory where SSH certificates are stored. - sshDirSuffix = "-ssh" + SSHDirSuffix = "-ssh" // fileNameKnownHosts is a file where known hosts are stored. fileNameKnownHosts = "known_hosts" // FileExtTLSCertLegacy is the legacy suffix/extension of a file where a TLS cert is stored. @@ -73,6 +73,18 @@ const ( profileFileExt = ".yaml" // oracleWalletDirSuffix is the suffix of the oracle wallet database directory. oracleWalletDirSuffix = "-wallet" + // VNetClientSSHKey is the file name of the SSH key used by third-party SSH + // clients to connect to VNet SSH. + VNetClientSSHKey = "id_vnet" + // VNetClientSSHKeyPub is the file name of the SSH public key matching + // VNetClientSSHKey. + VNetClientSSHKeyPub = VNetClientSSHKey + fileExtPub + // vnetKnownHosts is the file name of the known_hosts file trusted by + // third-party SSH clients connecting to VNet SSH. 
+ vnetKnownHosts = "vnet_known_hosts" + // VNetSSHConfig is the file name of the generated OpenSSH-compatible config + // file to be used by third-party SSH clients connecting to VNet SSH. + VNetSSHConfig = "vnet_ssh_config" ) // Here's the file layout of all these keypaths. @@ -81,6 +93,10 @@ const ( // ├── one.example.com.yaml --> file containing profile details for proxy "one.example.com" // ├── two.example.com.yaml --> file containing profile details for proxy "two.example.com" // ├── known_hosts --> trusted certificate authorities (their keys) in a format similar to known_hosts +// ├── id_vnet --> SSH Private Key for third-party clients of VNet SSH +// ├── id_vnet.pub --> SSH Public Key for third-party clients of VNet SSH +// ├── vnet_known_hosts --> trusted certificate authorities (their keys) for third-party clients of VNet SSH +// ├── vnet_ssh_config --> OpenSSH-compatible config file for third-party clients of VNet SSH // └── keys --> session keys directory // ├── one.example.com --> Proxy hostname // │ ├── certs.pem --> TLS CA certs for the Teleport CA @@ -241,7 +257,7 @@ func TLSCAsPathCluster(baseDir, proxy, cluster string) string { // // /keys//-ssh func SSHDir(baseDir, proxy, username string) string { - return filepath.Join(ProxyKeyDir(baseDir, proxy), username+sshDirSuffix) + return filepath.Join(ProxyKeyDir(baseDir, proxy), username+SSHDirSuffix) } // PPKFilePath returns the path to the user's PuTTY PPK-formatted keypair @@ -429,6 +445,27 @@ func IdentitySSHCertPath(path string) string { return path + fileExtSSHCert } +// VNetClientSSHKeyPath returns the path to the VNet client SSH private key. +func VNetClientSSHKeyPath(baseDir string) string { + return filepath.Join(baseDir, VNetClientSSHKey) +} + +// VNetClientSSHKeyPubPath returns the path to the VNet client SSH public key. 
+func VNetClientSSHKeyPubPath(baseDir string) string { + return filepath.Join(baseDir, VNetClientSSHKeyPub) +} + +// VNetKnownHostsPath returns the path to the VNet known_hosts file. +func VNetKnownHostsPath(baseDir string) string { + return filepath.Join(baseDir, vnetKnownHosts) +} + +// VNetSSHConfigPath returns the path to VNet's generated OpenSSH-compatible +// config file. +func VNetSSHConfigPath(baseDir string) string { + return filepath.Join(baseDir, VNetSSHConfig) +} + // TrimKeyPathSuffix returns the given path with any key suffix/extension trimmed off. func TrimKeyPathSuffix(path string) string { return strings.TrimSuffix(path, fileExtTLSKey) diff --git a/api/utils/keys/alias.go b/api/utils/keys/alias.go deleted file mode 100644 index 8323a9c9e53ff..0000000000000 --- a/api/utils/keys/alias.go +++ /dev/null @@ -1,26 +0,0 @@ -/* -Copyright 2025 Gravitational, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - http://www.apache.org/licenses/LICENSE-2.0 -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package keys - -import "github.com/gravitational/teleport/api/utils/keys/hardwarekey" - -// Temporary aliases for types moved to the hardwarekey or piv packages -// TODO(Joerger): Remove once /e no longer relies on them. - -// AttestationStatement is an attestation statement for a hardware private key -// that supports json marshaling through the standard json/encoding package. -type AttestationStatement = hardwarekey.AttestationStatement - -// AttestationStatementFromProto converts an AttestationStatement from its protobuf form. 
-var AttestationStatementFromProto = hardwarekey.AttestationStatementFromProto diff --git a/api/utils/keys/hardwarekey/attestation.go b/api/utils/keys/hardwarekey/attestation.go index 94de91e177dc0..cf8a93717233d 100644 --- a/api/utils/keys/hardwarekey/attestation.go +++ b/api/utils/keys/hardwarekey/attestation.go @@ -17,7 +17,7 @@ package hardwarekey import ( "bytes" - "github.com/gogo/protobuf/jsonpb" + "github.com/gogo/protobuf/jsonpb" //nolint:depguard // needed for backwards compatibility "github.com/gravitational/trace" attestationv1 "github.com/gravitational/teleport/api/gen/proto/go/attestation/v1" diff --git a/api/utils/keys/piv/service.go b/api/utils/keys/piv/service.go index 50a54335a02c6..a2b51278129ed 100644 --- a/api/utils/keys/piv/service.go +++ b/api/utils/keys/piv/service.go @@ -25,7 +25,7 @@ import ( "io" "sync" - "github.com/go-piv/piv-go/piv" + "github.com/go-piv/piv-go/v2/piv" "github.com/gravitational/trace" "github.com/gravitational/teleport/api/utils/keys/hardwarekey" diff --git a/api/utils/keys/piv/service_test.go b/api/utils/keys/piv/service_test.go index 471a42842f7ab..1845e2d94d9b8 100644 --- a/api/utils/keys/piv/service_test.go +++ b/api/utils/keys/piv/service_test.go @@ -26,7 +26,7 @@ import ( "testing" "time" - pivgo "github.com/go-piv/piv-go/piv" + pivgo "github.com/go-piv/piv-go/v2/piv" "github.com/gravitational/trace" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" diff --git a/api/utils/keys/piv/yubikey.go b/api/utils/keys/piv/yubikey.go index bb1eccdc0e0fb..006c967639efe 100644 --- a/api/utils/keys/piv/yubikey.go +++ b/api/utils/keys/piv/yubikey.go @@ -34,7 +34,7 @@ import ( "sync" "time" - "github.com/go-piv/piv-go/piv" + "github.com/go-piv/piv-go/v2/piv" "github.com/gravitational/trace" "github.com/gravitational/teleport/api" @@ -439,7 +439,11 @@ func (y *YubiKey) setPINAndPUKFromDefault(ctx context.Context, prompt hardwareke y.pinCache.mu.Lock() defer y.pinCache.mu.Unlock() - ctx, cancel := 
context.WithTimeout(ctx, pinPromptTimeout) + // Use a longer timeout than pinPromptTimeout since this specific prompt requires the user to + // re-type both PIN and PUK. The user might also want to save the values somewhere. + // pinPromptTimeout just doesn't give enough time for that. + const newPinPromptTimeout = 3 * time.Minute + ctx, cancel := context.WithTimeout(ctx, newPinPromptTimeout) defer cancel() pinAndPUK, err := prompt.ChangePIN(ctx, keyInfo) @@ -684,7 +688,7 @@ func (c *sharedPIVConnection) reset() error { return trace.Wrap(c.conn.Reset()) } -func (c *sharedPIVConnection) setCertificate(key [24]byte, slot piv.Slot, cert *x509.Certificate) error { +func (c *sharedPIVConnection) setCertificate(key []byte, slot piv.Slot, cert *x509.Certificate) error { release, err := c.connect() if err != nil { return trace.Wrap(err) @@ -711,7 +715,7 @@ func (c *sharedPIVConnection) certificate(slot piv.Slot) (*x509.Certificate, err return cert, trace.Wrap(err) } -func (c *sharedPIVConnection) generateKey(key [24]byte, slot piv.Slot, opts piv.Key) (crypto.PublicKey, error) { +func (c *sharedPIVConnection) generateKey(key []byte, slot piv.Slot, opts piv.Key) (crypto.PublicKey, error) { release, err := c.connect() if err != nil { return nil, trace.Wrap(err) diff --git a/api/utils/keys/piv/yubikey_test.go b/api/utils/keys/piv/yubikey_test.go index 0715910564efa..2876d533b2604 100644 --- a/api/utils/keys/piv/yubikey_test.go +++ b/api/utils/keys/piv/yubikey_test.go @@ -24,7 +24,7 @@ import ( "sync" "testing" - "github.com/go-piv/piv-go/piv" + "github.com/go-piv/piv-go/v2/piv" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" diff --git a/api/utils/keys/policy_piv.go b/api/utils/keys/policy_piv.go deleted file mode 100644 index a99fc532f3010..0000000000000 --- a/api/utils/keys/policy_piv.go +++ /dev/null @@ -1,45 +0,0 @@ -//go:build piv && !pivtest - -/* -Copyright 2025 Gravitational, Inc. 
-Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - http://www.apache.org/licenses/LICENSE-2.0 -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package keys - -import ( - "github.com/go-piv/piv-go/piv" -) - -// GetPrivateKeyPolicyFromAttestation returns the PrivateKeyPolicy satisfied by the given hardware key attestation. -// TODO(Joerger): Move to /e where this is used. -func GetPrivateKeyPolicyFromAttestation(att *piv.Attestation) PrivateKeyPolicy { - if att == nil { - return PrivateKeyPolicyNone - } - - isTouchPolicy := att.TouchPolicy == piv.TouchPolicyCached || - att.TouchPolicy == piv.TouchPolicyAlways - - isPINPolicy := att.PINPolicy == piv.PINPolicyOnce || - att.PINPolicy == piv.PINPolicyAlways - - switch { - case isPINPolicy && isTouchPolicy: - return PrivateKeyPolicyHardwareKeyTouchAndPIN - case isPINPolicy: - return PrivateKeyPolicyHardwareKeyPIN - case isTouchPolicy: - return PrivateKeyPolicyHardwareKeyTouch - default: - return PrivateKeyPolicyHardwareKey - } -} diff --git a/api/utils/keys/privatekey.go b/api/utils/keys/privatekey.go index fbc745a59f36b..c1fd3470c9967 100644 --- a/api/utils/keys/privatekey.go +++ b/api/utils/keys/privatekey.go @@ -55,6 +55,7 @@ type cryptoPublicKeyI interface { // custom implementation for a non-standard private key, such as a hardware key. type PrivateKey struct { crypto.Signer + // sshPub is the public key in ssh.PublicKey form. sshPub ssh.PublicKey // keyPEM is PEM-encoded private key data which can be parsed with ParsePrivateKey. 
@@ -243,6 +244,10 @@ func LoadPrivateKey(keyFile string) (*PrivateKey, error) { priv, err := ParsePrivateKey(keyPEM) if err != nil { + // Treat malformed keys the same as missing keys. + if trace.IsBadParameter(err) { + return nil, trace.NotFound("%s", err.Error()) + } return nil, trace.Wrap(err) } return priv, nil @@ -306,14 +311,14 @@ func ParsePrivateKey(keyPEM []byte, opts ...ParsePrivateKeyOpt) (*PrivateKey, er hwSigner, err := hardwarekey.DecodeSigner(block.Bytes, hwks, appliedOpts.ContextualKeyInfo) if err != nil { - return nil, trace.Wrap(err, "failed to parse hardware key signer") + return nil, trace.BadParameter("failed to parse hardware key signer: %s", err.Error()) } return newPrivateKeyWithKeyPEM(hwSigner, keyPEM) case OpenSSHPrivateKeyType: priv, err := ssh.ParseRawPrivateKey(keyPEM) if err != nil { - return nil, trace.Wrap(err) + return nil, trace.BadParameter("%s", err.Error()) } cryptoSigner, ok := priv.(crypto.Signer) if !ok { @@ -354,7 +359,7 @@ func ParsePrivateKey(keyPEM []byte, opts ...ParsePrivateKeyOpt) (*PrivateKey, er // If all three parse functions returned an error, preferedErr is // guaranteed to be set to the error from the parse function that // usually matches the PEM block type. - return nil, trace.Wrap(preferredErr, "parsing private key PEM") + return nil, trace.BadParameter("parsing private key PEM: %s", preferredErr.Error()) default: return nil, trace.BadParameter("unexpected private key PEM type %q", block.Type) } @@ -395,6 +400,21 @@ func MarshalPrivateKey(key crypto.Signer) ([]byte, error) { } } +// MarshalDecrypter will return a PEM encoded crypto.Decrypter. 
+// [key] must be an *rsa.PrivateKey +func MarshalDecrypter(key crypto.Decrypter) ([]byte, error) { + switch privateKey := key.(type) { + case *rsa.PrivateKey: + privPEM := pem.EncodeToMemory(&pem.Block{ + Type: PKCS1PrivateKeyType, + Bytes: x509.MarshalPKCS1PrivateKey(privateKey), + }) + return privPEM, nil + default: + return nil, trace.BadParameter("unsupported private key type %T", key) + } +} + // LoadKeyPair returns the PrivateKey for the given private and public key files. func LoadKeyPair(privFile, sshPubFile string, opts ...ParsePrivateKeyOpt) (*PrivateKey, error) { privPEM, err := os.ReadFile(privFile) @@ -409,6 +429,10 @@ func LoadKeyPair(privFile, sshPubFile string, opts ...ParsePrivateKeyOpt) (*Priv priv, err := ParseKeyPair(privPEM, marshaledSSHPub, opts...) if err != nil { + // Treat malformed keys the same as missing keys. + if trace.IsBadParameter(err) { + return nil, trace.NotFound("%s", err.Error()) + } return nil, trace.Wrap(err) } return priv, nil @@ -444,6 +468,10 @@ func LoadX509KeyPair(certFile, keyFile string) (tls.Certificate, error) { tlsCert, err := X509KeyPair(certPEMBlock, keyPEMBlock) if err != nil { + // Treat malformed keys the same as missing keys. + if trace.IsBadParameter(err) { + return tls.Certificate{}, trace.NotFound("%s", err.Error()) + } return tls.Certificate{}, trace.Wrap(err) } diff --git a/api/utils/keys/privatekey_test.go b/api/utils/keys/privatekey_test.go index 4be10d55e6d51..c9bea6ac52a96 100644 --- a/api/utils/keys/privatekey_test.go +++ b/api/utils/keys/privatekey_test.go @@ -33,6 +33,7 @@ import ( "github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp/cmpopts" + "github.com/gravitational/trace" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" @@ -172,26 +173,56 @@ func TestParseMismatchedPEMHeader(t *testing.T) { // that the preferredErr logic in Parse(Private|Public)Key returns an error for // each PEM type. 
func TestParseCorruptedKey(t *testing.T) { - for _, tc := range []string{ - "RSA PRIVATE KEY", - "PRIVATE KEY", - "EC PRIVATE KEY", - } { - t.Run(tc, func(t *testing.T) { - b := pem.EncodeToMemory(&pem.Block{Type: tc, Bytes: []byte("foo")}) - _, err := keys.ParsePrivateKey(b) - require.Error(t, err) + t.Parallel() + privateKeyTests := []struct { + name string + pemData []byte + }{ + { + name: "PRIVATE KEY", + pemData: pem.EncodeToMemory(&pem.Block{Type: "PRIVATE KEY", Bytes: []byte("foo")}), + }, + { + name: "RSA PRIVATE KEY", + pemData: pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: []byte("foo")}), + }, + { + name: "EC PRIVATE KEY", + pemData: pem.EncodeToMemory(&pem.Block{Type: "EC PRIVATE KEY", Bytes: []byte("foo")}), + }, + { + name: "not a private key pem file", + pemData: []byte("foo"), + }, + } + for _, tc := range privateKeyTests { + t.Run(tc.name, func(t *testing.T) { + _, err := keys.ParsePrivateKey(tc.pemData) + require.True(t, trace.IsBadParameter(err), "wanted BadParameter, got: %v", err) }) } - for _, tc := range []string{ - "RSA PUBLIC KEY", - "PUBLIC KEY", - } { - t.Run(tc, func(t *testing.T) { - b := pem.EncodeToMemory(&pem.Block{Type: tc, Bytes: []byte("foo")}) - _, err := keys.ParsePublicKey(b) - require.Error(t, err) + publicKeyTests := []struct { + name string + pemData []byte + }{ + { + name: "RSA PUBLIC KEY", + pemData: pem.EncodeToMemory(&pem.Block{Type: "RSA PUBLIC KEY", Bytes: []byte("foo")}), + }, + { + name: "PUBLIC KEY", + pemData: pem.EncodeToMemory(&pem.Block{Type: "PUBLIC KEY", Bytes: []byte("foo")}), + }, + { + name: "not a public key pem file", + pemData: []byte("foo"), + }, + } + for _, tc := range publicKeyTests { + t.Run(tc.name, func(t *testing.T) { + _, err := keys.ParsePublicKey(tc.pemData) + require.True(t, trace.IsBadParameter(err), "wanted BadParameter, got: %v", err) }) } } diff --git a/api/utils/keys/publickey.go b/api/utils/keys/publickey.go index 0979caf266c60..77a4489786bcb 100644 --- 
a/api/utils/keys/publickey.go +++ b/api/utils/keys/publickey.go @@ -89,5 +89,5 @@ func ParsePublicKey(keyPEM []byte) (crypto.PublicKey, error) { // If both parse functions returned an error, preferedErr is guaranteed to // be set to the error from the parse function that usually matches the PEM // block type. - return nil, trace.Wrap(preferredErr, "parsing public key PEM") + return nil, trace.BadParameter("parsing public key PEM: %s", preferredErr) } diff --git a/api/utils/retryutils/retryv2.go index b54ae2e7cdfef..0b02a64d9fa52 100644 --- a/api/utils/retryutils/retryv2.go +++ b/api/utils/retryutils/retryv2.go @@ -51,6 +51,31 @@ type Driver interface { Check() error } +// NewConstantDriver creates a constant retry driver with the supplied step value. Resulting +// retries always back off by exactly the provided step. +func NewConstantDriver(step time.Duration) Driver { + return constantDriver{step} +} + +type constantDriver struct { + step time.Duration +} + +func (d constantDriver) Duration(_ int64) time.Duration { + return d.step +} + +func (d constantDriver) Check() error { + if d.step <= 0 { + return trace.BadParameter("constant driver requires positive step value") + } + + if d.step > maxBackoff { + return trace.BadParameter("constant backoff step value too large: %v (max=%v)", d.step, maxBackoff) + } + return nil +} + // NewLinearDriver creates a linear retry driver with the supplied step value. Resulting retries // have increase their backoff by a fixed step amount on each increment, with the first retry // having a base step amount of zero.
@@ -220,9 +245,7 @@ func (r *RetryV2) Duration() time.Duration { return 0 } - if a > r.Max { - a = r.Max - } + a = min(a, r.Max) if r.Jitter != nil { a = r.Jitter(a) diff --git a/api/utils/retryutils/update_with_retry.go b/api/utils/retryutils/update_with_retry.go new file mode 100644 index 0000000000000..2c94587df5e97 --- /dev/null +++ b/api/utils/retryutils/update_with_retry.go @@ -0,0 +1,119 @@ +// Teleport +// Copyright (C) 2025 Gravitational, Inc. +// +// This program is free software: you can redistribute it and/or modify +// it under the terms of the GNU Affero General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// This program is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Affero General Public License for more details. +// +// You should have received a copy of the GNU Affero General Public License +// along with this program. If not, see . + +package retryutils + +import ( + "context" + "time" + + "github.com/gravitational/trace" + "github.com/jonboulle/clockwork" +) + +const ( + // updateWithRetryMaxRetries is the default number of retries in case of conflicts. + updateWithRetryMaxRetries = 3 + // updateWithRetryHalfJitterBetweenAttempts is the default base delay that [UpdateWithRetry] + // waits between attempts (half-jittered). + updateWithRetryHalfJitterBetweenAttempts = 2 * time.Second +) + +type updateWithRetryOptions struct { + maxRetries int + retryConfig RetryV2Config +} + +// UpdateWithRetryOpt is the option type for [UpdateWithRetry]. See the With* functions in this +// package for the available options. +type UpdateWithRetryOpt func(*updateWithRetryOptions) + +// WithMaxRetries changes the maximum number of retry attempts from the default +// [updateWithRetryMaxRetries].
+func WithMaxRetries(maxRetries int) UpdateWithRetryOpt { + return func(o *updateWithRetryOptions) { + o.maxRetries = maxRetries + } +} + +// WithRetryConfig changes the default retry configuration. +func WithRetryConfig(config RetryV2Config) UpdateWithRetryOpt { + return func(o *updateWithRetryOptions) { + o.retryConfig = config + } +} + +// RefreshFn refreshes the resource if needed, taking into account whether this is a retry attempt +// or the first attempt. +type RefreshFn[R any] func(ctx context.Context, isRetry bool) (R, error) + +// UpdateFn conditionally updates the resource. +type UpdateFn[R any] func(ctx context.Context, resource R) error + +// UpdateWithRetry tries to conditionally update a resource, by default retrying 3 times with a 2s +// half-jittered constant backoff between retries. The retry configuration can be changed with +// the option arguments. +// +// UpdateWithRetry will retry only if updateFn returns an error matched with trace.IsCompareFailed. +// It will return any other error immediately without retries.
+func UpdateWithRetry[R any](ctx context.Context, clock clockwork.Clock, refreshFn RefreshFn[R], updateFn UpdateFn[R], opts ...UpdateWithRetryOpt) error { + options := updateWithRetryOptions{ + maxRetries: updateWithRetryMaxRetries, + retryConfig: RetryV2Config{ + Clock: clock, + Driver: NewConstantDriver(updateWithRetryHalfJitterBetweenAttempts), + Max: updateWithRetryHalfJitterBetweenAttempts, + Jitter: HalfJitter, + }, + } + for _, o := range opts { + o(&options) + } + + retry, err := NewRetryV2(options.retryConfig) + if err != nil { + return trace.Wrap(err, "creating retry instance") + } + + var updateErr error + attempts := min(options.maxRetries+1, maxAttempts) + for i := range attempts { + lastAttempt := i == attempts-1 + + isRetry := i > 0 + resource, err := refreshFn(ctx, isRetry) + if err != nil { + return trace.Wrap(err) + } + + updateErr = updateFn(ctx, resource) + switch { + case !lastAttempt && trace.IsCompareFailed(updateErr): + d := retry.Duration() + select { + case <-clock.After(d): + continue + case <-ctx.Done(): + return trace.Wrap(ctx.Err()) + } + case updateErr != nil: + return trace.Wrap(updateErr) + default: + return nil + } + } + return trace.Wrap(updateErr) +} diff --git a/api/utils/retryutils/update_with_retry_test.go b/api/utils/retryutils/update_with_retry_test.go new file mode 100644 index 0000000000000..a0300322d7864 --- /dev/null +++ b/api/utils/retryutils/update_with_retry_test.go @@ -0,0 +1,242 @@ +// Teleport +// Copyright (C) 2025 Gravitational, Inc. +// +// This program is free software: you can redistribute it and/or modify +// it under the terms of the GNU Affero General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// This program is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the +// GNU Affero General Public License for more details. +// +// You should have received a copy of the GNU Affero General Public License +// along with this program. If not, see . + +package retryutils + +import ( + "context" + "errors" + "fmt" + "sync" + "testing" + "time" + + "github.com/gravitational/trace" + "github.com/jonboulle/clockwork" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func setupUpdateWithRetryTest[R any](t *testing.T) (*clockwork.FakeClock, *refreshFnRecorder[R], *updateFnRecorder[R]) { + t.Helper() + return clockwork.NewFakeClock(), new(refreshFnRecorder[R]), new(updateFnRecorder[R]) + +} + +func Test_UpdateWithRetry_returns_error_if_refreshFn_returns_error(t *testing.T) { + t.Parallel() + + ctx := context.Background() + clock, refreshRecorder, updateRecorder := setupUpdateWithRetryTest[string](t) + + refreshRecorder.retErr = errors.New("error from retryFn") + + err := UpdateWithRetry(ctx, clock, refreshRecorder.Refresh, updateRecorder.Update) + require.Error(t, err) + require.Equal(t, refreshRecorder.retErr.Error(), err.Error()) + + require.Equal(t, 1, refreshRecorder.callCnt) + require.Equal(t, 0, updateRecorder.callCnt) + + require.False(t, refreshRecorder.lastArgIsRetry) +} + +func Test_UpdateWithRetry_passes_value_from_refreshFn_to_updateFn(t *testing.T) { + t.Parallel() + + ctx := context.Background() + clock, refreshRecorder, updateRecorder := setupUpdateWithRetryTest[string](t) + + resource := "test_resource_value_1" + + refreshRecorder.retResource = resource + + err := UpdateWithRetry(ctx, clock, refreshRecorder.Refresh, updateRecorder.Update) + require.NoError(t, err) + + require.Equal(t, 1, refreshRecorder.callCnt) + require.Equal(t, 1, updateRecorder.callCnt) + require.Equal(t, resource, updateRecorder.lastArgResource) + + require.False(t, refreshRecorder.lastArgIsRetry) +} + +func Test_UpdateWithRetry_retries_if_updateFn_returns_CompareFailedError(t *testing.T) { + t.Parallel() + + 
ctx := context.Background() + clock, refreshRecorder, updateRecorder := setupUpdateWithRetryTest[string](t) + + updateRecorder.retErr = trace.CompareFailed("test_compare_failed_1") + + updateWithRetryRetCh := make(chan error) + + go func() { + updateWithRetryRetCh <- UpdateWithRetry(ctx, clock, refreshRecorder.Refresh, updateRecorder.Update) + }() + + require.EventuallyWithT(t, func(c *assert.CollectT) { + require.Equal(c, 1, refreshRecorder.read().callCnt) + require.Equal(c, 1, updateRecorder.read().callCnt) + require.False(c, refreshRecorder.read().lastArgIsRetry) + }, 4*time.Second, 10*time.Millisecond) + + clock.BlockUntilContext(ctx, 1) + + totalAttempts := updateWithRetryMaxRetries + 1 + + for expectedCallCnt := 2; expectedCallCnt < totalAttempts; expectedCallCnt++ { + clock.Advance(updateWithRetryHalfJitterBetweenAttempts) + clock.BlockUntilContext(ctx, 1) + + require.Equal(t, expectedCallCnt, refreshRecorder.read().callCnt) + require.Equal(t, expectedCallCnt, updateRecorder.read().callCnt) + require.True(t, refreshRecorder.read().lastArgIsRetry) + } + + clock.Advance(updateWithRetryHalfJitterBetweenAttempts) + + err := <-updateWithRetryRetCh + require.Error(t, err) + require.Equal(t, updateRecorder.read().retErr.Error(), err.Error()) + + require.Equal(t, totalAttempts, refreshRecorder.read().callCnt) + require.Equal(t, totalAttempts, updateRecorder.read().callCnt) + require.True(t, refreshRecorder.read().lastArgIsRetry) +} + +func Test_UpdateWithRetry_do_not_retry_if_updateFn_returns_different_error(t *testing.T) { + t.Parallel() + + ctx := context.Background() + clock, refreshRecorder, updateRecorder := setupUpdateWithRetryTest[string](t) + + updateRecorder.retErr = errors.New("different_update_error_1") + + err := UpdateWithRetry(ctx, clock, refreshRecorder.Refresh, updateRecorder.Update) + require.Error(t, err) + require.Equal(t, updateRecorder.retErr.Error(), err.Error()) + + require.Equal(t, 1, refreshRecorder.callCnt) + require.Equal(t, 1, 
updateRecorder.callCnt) +} + +func Test_UpdateWithRetry_stops_retrying_if_updateFn_returns_different_error_at_some_point(t *testing.T) { + t.Parallel() + + ctx := context.Background() + clock, refreshRecorder, updateRecorder := setupUpdateWithRetryTest[string](t) + + updateRecorder.retErr = trace.CompareFailed("test_compare_failed_1") + + updateWithRetryRetCh := make(chan error) + + go func() { + updateWithRetryRetCh <- UpdateWithRetry(ctx, clock, refreshRecorder.Refresh, updateRecorder.Update) + }() + + require.EventuallyWithT(t, func(c *assert.CollectT) { + require.Equal(c, 1, refreshRecorder.read().callCnt) + require.Equal(c, 1, updateRecorder.read().callCnt) + require.False(c, refreshRecorder.read().lastArgIsRetry) + }, 4*time.Second, 10*time.Millisecond) + + clock.BlockUntilContext(ctx, 1) + + totalAttempts := updateWithRetryMaxRetries + 1 + somePoint := totalAttempts - 1 + + for expectedCallCnt := 2; expectedCallCnt < totalAttempts; expectedCallCnt++ { + clock.Advance(updateWithRetryHalfJitterBetweenAttempts) + clock.BlockUntilContext(ctx, 1) + + if expectedCallCnt == somePoint { + updateRecorder.mu.Lock() + updateRecorder.retErr = fmt.Errorf("achieved some point (%d)", somePoint) + updateRecorder.mu.Unlock() + break + } + + require.Equal(t, expectedCallCnt, refreshRecorder.read().callCnt) + require.Equal(t, expectedCallCnt, updateRecorder.read().callCnt) + require.True(t, refreshRecorder.read().lastArgIsRetry) + } + + clock.Advance(updateWithRetryHalfJitterBetweenAttempts) + + err := <-updateWithRetryRetCh + require.Error(t, err) + require.Equal(t, updateRecorder.read().retErr.Error(), err.Error()) + + require.Equal(t, totalAttempts, refreshRecorder.read().callCnt) + require.Equal(t, totalAttempts, updateRecorder.read().callCnt) + require.True(t, refreshRecorder.read().lastArgIsRetry) +} + +type refreshFnRecorder[R any] struct { + mu sync.RWMutex + refreshFnRecorderState[R] +} + +type refreshFnRecorderState[R any] struct { + callCnt int + + lastArgIsRetry 
bool + + retResource R + retErr error +} + +func (r *refreshFnRecorder[R]) read() refreshFnRecorderState[R] { + r.mu.RLock() + defer r.mu.RUnlock() + return r.refreshFnRecorderState +} + +func (r *refreshFnRecorder[R]) Refresh(_ context.Context, isRetry bool) (R, error) { + r.mu.Lock() + defer r.mu.Unlock() + r.callCnt++ + r.lastArgIsRetry = isRetry + return r.retResource, r.retErr +} + +type updateFnRecorder[R any] struct { + mu sync.RWMutex + updateFnRecorderState[R] +} + +type updateFnRecorderState[R any] struct { + callCnt int + + lastArgResource R + + retErr error +} + +func (u *updateFnRecorder[R]) read() updateFnRecorderState[R] { + u.mu.RLock() + defer u.mu.RUnlock() + return u.updateFnRecorderState +} + +func (u *updateFnRecorder[R]) Update(_ context.Context, resource R) error { + u.mu.Lock() + defer u.mu.Unlock() + u.callCnt++ + u.lastArgResource = resource + return u.retErr +} diff --git a/api/utils/sshutils/callback.go b/api/utils/sshutils/callback.go index c96e4ff39dc8d..a69931f28d870 100644 --- a/api/utils/sshutils/callback.go +++ b/api/utils/sshutils/callback.go @@ -70,23 +70,16 @@ func NewHostKeyCallback(conf HostKeyCallbackConfig) (ssh.HostKeyCallback, error) return checker.CheckHostKey, nil } -func makeIsHostAuthorityFunc(getCheckers CheckersGetter) func(key ssh.PublicKey, host string) bool { - return func(key ssh.PublicKey, host string) bool { +func makeIsHostAuthorityFunc(getCheckers CheckersGetter) func(authority ssh.PublicKey, host string) bool { + return func(authority ssh.PublicKey, host string) bool { checkers, err := getCheckers() if err != nil { slog.ErrorContext(context.Background(), "Failed to get checkers.", "host", host, "error", err) return false } for _, checker := range checkers { - switch v := key.(type) { - case *ssh.Certificate: - if KeysEqual(v.SignatureKey, checker) { - return true - } - default: - if KeysEqual(key, checker) { - return true - } + if KeysEqual(authority, checker) { + return true } } 
slog.DebugContext(context.Background(), "No CA found for target host.", "host", host) diff --git a/api/utils/sshutils/callback_test.go b/api/utils/sshutils/callback_test.go new file mode 100644 index 0000000000000..78ab289e33b1e --- /dev/null +++ b/api/utils/sshutils/callback_test.go @@ -0,0 +1,56 @@ +// Copyright 2025 Gravitational, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package sshutils + +import ( + "crypto/ed25519" + "crypto/rand" + "testing" + + "github.com/stretchr/testify/require" + "golang.org/x/crypto/ssh" +) + +func TestMakeIsHostAuthorityFunc(t *testing.T) { + rawCA1, _, err := ed25519.GenerateKey(rand.Reader) + require.NoError(t, err) + ca1, err := ssh.NewPublicKey(rawCA1) + require.NoError(t, err) + + rawCA2, _, err := ed25519.GenerateKey(rand.Reader) + require.NoError(t, err) + ca2, err := ssh.NewPublicKey(rawCA2) + require.NoError(t, err) + + rawCA3, _, err := ed25519.GenerateKey(rand.Reader) + require.NoError(t, err) + ca3, err := ssh.NewPublicKey(rawCA3) + require.NoError(t, err) + + isHostAuthority := makeIsHostAuthorityFunc(func() ([]ssh.PublicKey, error) { + return []ssh.PublicKey{ca1, ca2}, nil + }) + + cert1 := &ssh.Certificate{ + Key: ca1, + SignatureKey: ca1, + } + + require.True(t, isHostAuthority(ca1, "")) + require.True(t, isHostAuthority(ca2, "")) + require.False(t, isHostAuthority(ca3, "")) + + require.False(t, isHostAuthority(cert1, ""), "a certificate signed by a certificate should not pass validation") +} 
diff --git a/api/utils/sshutils/checker.go b/api/utils/sshutils/checker.go index ba63ddb345a89..cc37a9d7421c9 100644 --- a/api/utils/sshutils/checker.go +++ b/api/utils/sshutils/checker.go @@ -20,6 +20,7 @@ import ( "crypto/ecdsa" "crypto/elliptic" "crypto/rsa" + "fmt" "net" "github.com/gravitational/trace" @@ -28,13 +29,25 @@ import ( "github.com/gravitational/teleport/api/constants" ) +type FIPSError struct { + message string +} + +func (f *FIPSError) Error() string { + return f.message +} + +func newFIPSError(message string, args ...any) error { + return &FIPSError{fmt.Sprintf("FIPS: "+message, args...)} +} + // CertChecker is a drop-in replacement for ssh.CertChecker. In FIPS mode, // checks if the certificate (or key) were generated with a supported algorithm. type CertChecker struct { ssh.CertChecker // FIPS means in addition to checking the validity of the key or - // certificate, also check that FIPS 140-2 algorithms were used. + // certificate, also check that FIPS algorithms were used. FIPS bool // OnCheckCert is called when validating host certificate. 
@@ -123,12 +136,12 @@ func (c *CertChecker) validateFIPS(key ssh.PublicKey) error { func validateFIPSAlgorithm(key ssh.PublicKey) error { cryptoKey, ok := key.(ssh.CryptoPublicKey) if !ok { - return trace.BadParameter("unable to determine underlying public key") + return trace.Wrap(newFIPSError("unable to determine underlying public key")) } switch k := cryptoKey.CryptoPublicKey().(type) { case *rsa.PublicKey: if k.N.BitLen() != constants.RSAKeySize { - return trace.BadParameter("found %v-bit RSA key, only %v-bit supported", k.N.BitLen(), constants.RSAKeySize) + return trace.Wrap(newFIPSError("found %v-bit RSA key, only %v-bit supported", k.N.BitLen(), constants.RSAKeySize)) } case *ecdsa.PublicKey: if k.Curve != elliptic.P256() && k.Curve != elliptic.P384() { @@ -136,10 +149,10 @@ func validateFIPSAlgorithm(key ssh.PublicKey) error { if params == nil { return trace.BadParameter("unable to determine curve of ECDSA public key") } - return trace.BadParameter("found ECDSA key with curve %s, only P-256 and P-384 are supported", params.Name) + return trace.Wrap(newFIPSError("found ECDSA key with curve %s, only P-256 and P-384 are supported", params.Name)) } default: - return trace.BadParameter("only RSA and ECDSA keys supported") + return trace.Wrap(newFIPSError("only RSA and ECDSA keys supported")) } return nil } diff --git a/api/utils/time.go b/api/utils/time.go index 288a02d8f6b11..14411752a8314 100644 --- a/api/utils/time.go +++ b/api/utils/time.go @@ -18,6 +18,8 @@ package utils import ( "time" + + "google.golang.org/protobuf/types/known/timestamppb" ) // UTC converts time to UTC timezone @@ -41,3 +43,28 @@ const HumanTimeFormatString = "Mon Jan _2 15:04 UTC" func HumanTimeFormat(d time.Time) string { return d.Format(HumanTimeFormatString) } + +// TimeFromProto converts a protobuf Timestamp to a Go time.Time, preserving +// the zero value across the conversion boundary (standard go/proto timestamp +// conversion doesn't preserve "zeroness"). 
+func TimeFromProto(t *timestamppb.Timestamp) time.Time { + // use the zero time to represent the nil timestamp. note that this is conceptually distinct + // from using t.GetSeconds() == 0 && t.GetNanos() == 0. a timestamppb that happens to be created + // targeting the unix epoch isn't necessarily equivalent to a zero go timestamp, since the zero + // value for the go timestamp isn't the unix epoch. + if t == nil || (t.GetSeconds() == 0 && t.GetNanos() == 0) { + return time.Time{} + } + + return t.AsTime() +} + +// TimeIntoProto converts a Go time.Time to a protobuf Timestamp, preserving +// the zero value across the conversion boundary (standard go/proto timestamp +// conversion doesn't preserve "zeroness"). +func TimeIntoProto(t time.Time) *timestamppb.Timestamp { + if t.IsZero() { + return nil + } + return timestamppb.New(t) +} diff --git a/api/version.go b/api/version.go index 9ad60b95cb8a7..7390ce1e099c6 100644 --- a/api/version.go +++ b/api/version.go @@ -2,10 +2,10 @@ package api -const Version = "18.0.0-dev" +const Version = "18.6.1" const VersionMajor = 18 -const VersionMinor = 0 -const VersionPatch = 0 -const VersionPreRelease = "dev" +const VersionMinor = 6 +const VersionPatch = 1 +const VersionPreRelease = "" const VersionMetadata = "" diff --git a/assets/aws/files/install-hardened.sh b/assets/aws/files/install-hardened.sh index c0db8a739bf3b..48b80057d9a1f 100644 --- a/assets/aws/files/install-hardened.sh +++ b/assets/aws/files/install-hardened.sh @@ -19,6 +19,8 @@ ln -s /opt/certbot/bin/certbot /usr/local/bin/certbot useradd -r teleport -u "${TELEPORT_UID}" -d /var/lib/teleport # Add teleport to adm group to read and write logs usermod -a -G adm teleport +# Disable password age on ec2-user account to prevent users being locked out (V-230367) +chage --maxdays -1 ec2-user # Setup teleport run dir for pid files install -d -m 0700 -o teleport -g adm /var/lib/teleport diff --git a/assets/backport/go.mod index
15f4ee14685a1..0a0c1059ee7d8 100644 --- a/assets/backport/go.mod +++ b/assets/backport/go.mod @@ -1,14 +1,12 @@ module github.com/teleport/assets/backport -go 1.23.7 - -toolchain go1.24.1 +go 1.24.11 require ( github.com/google/go-github/v41 v41.0.0 github.com/gravitational/trace v1.5.1 github.com/stretchr/testify v1.10.0 - golang.org/x/oauth2 v0.29.0 + golang.org/x/oauth2 v0.30.0 gopkg.in/yaml.v2 v2.4.0 ) @@ -16,7 +14,7 @@ require ( github.com/davecgh/go-spew v1.1.1 // indirect github.com/google/go-querystring v1.1.0 // indirect github.com/pmezard/go-difflib v1.0.0 // indirect - golang.org/x/crypto v0.36.0 // indirect - golang.org/x/net v0.38.0 // indirect + golang.org/x/crypto v0.45.0 // indirect + golang.org/x/net v0.47.0 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect ) diff --git a/assets/backport/go.sum b/assets/backport/go.sum index 3c5b3595ca5de..a17ce125a5f77 100644 --- a/assets/backport/go.sum +++ b/assets/backport/go.sum @@ -3,9 +3,8 @@ github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ= github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38= -github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= github.com/google/go-github/v41 v41.0.0 h1:HseJrM2JFf2vfiZJ8anY2hqBjdfY1Vlj/K27ueww4gg= github.com/google/go-github/v41 v41.0.0/go.mod h1:XgmCA5H323A9rtgExdTcnDkcqp6S30AVACCBDOonIxg= github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD/fhyJ8= @@ -18,15 +17,15 @@ github.com/stretchr/testify v1.10.0 
h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOf github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20210817164053-32db794688a5/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= -golang.org/x/crypto v0.36.0 h1:AnAEvhDddvBdpY+uR+MyHmuZzzNqXSe/GvuDeob5L34= -golang.org/x/crypto v0.36.0/go.mod h1:Y4J0ReaxCR1IMaabaSMugxJES1EpwhBHhv2bDHklZvc= +golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q= +golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4= golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= -golang.org/x/net v0.38.0 h1:vRMAPTMaeGqVhG5QyLJHqNDwecKTomGeqbnfZyKlBI8= -golang.org/x/net v0.38.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8= +golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY= +golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= -golang.org/x/oauth2 v0.29.0 h1:WdYw2tdTK1S8olAzWHdgeqfy+Mtm9XNhv/xJsY65d98= -golang.org/x/oauth2 v0.29.0/go.mod h1:onh5ek6nERTohokkhCD/y2cV4Do3fxFHFuAejCkRWT8= +golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI= +golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= diff --git 
a/assets/loadtest/helm/node-agent/templates/deployment.yaml b/assets/loadtest/helm/node-agent/templates/deployment.yaml index 7295a8ea89005..bd81cdd7cd30f 100644 --- a/assets/loadtest/helm/node-agent/templates/deployment.yaml +++ b/assets/loadtest/helm/node-agent/templates/deployment.yaml @@ -30,6 +30,7 @@ spec: - name: SSL_CERT_FILE value: /etc/teleport-tls-ca/ca.pem {{- end }} + {{- if $.Values.extraEnv }}{{ toYaml $.Values.extraEnv | nindent 12 }}{{ end }} volumeMounts: - mountPath: /etc/teleport-config name: config diff --git a/assets/loadtest/helm/node-agent/values.yaml b/assets/loadtest/helm/node-agent/values.yaml index 46f09427ef2d6..e17220ac1c3fa 100644 --- a/assets/loadtest/helm/node-agent/values.yaml +++ b/assets/loadtest/helm/node-agent/values.yaml @@ -19,11 +19,24 @@ joinParams: # DO NOT USE THIS IN PRODUCTION token_name: "" +# pod tolerations +# array of https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.33/#toleration-v1-core tolerations: [] +# pod affinity rules +# https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.33/#affinity-v1-core affinity: {} tls: existingCASecretName: "" +# envvars set in each container +# array of https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.33/#envvar-v1-core +extraEnv: [] + +# resource requirements for each container +# https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.33/#resourcerequirements-v1-core +resources: {} + +# Teleport labels labels: {} diff --git a/buf-go.gen.yaml b/buf-go.gen.yaml index 1b814fa2ef672..3564ee5f443e0 100644 --- a/buf-go.gen.yaml +++ b/buf-go.gen.yaml @@ -5,6 +5,7 @@ inputs: exclude_paths: # generated by buf-gogo.gen.yaml - api/proto/teleport/attestation/ + - api/proto/teleport/componentfeatures/ - api/proto/teleport/legacy/ - api/proto/teleport/mfa/ - api/proto/teleport/usageevents/ @@ -16,6 +17,7 @@ inputs: # excluded by buf-gogo.gen.yaml - api/proto/teleport/legacy/client/proto/event.proto - 
api/proto/teleport/legacy/client/proto/inventory.proto + - api/proto/teleport/legacy/client/proto/requestable_roles.proto plugins: - local: diff --git a/buf-gogo.gen.yaml b/buf-gogo.gen.yaml index c3044013e6e73..e0e8fd85c935e 100644 --- a/buf-gogo.gen.yaml +++ b/buf-gogo.gen.yaml @@ -4,6 +4,7 @@ inputs: - directory: . paths: - api/proto/teleport/attestation/ + - api/proto/teleport/componentfeatures/ - api/proto/teleport/legacy/ - api/proto/teleport/mfa/ - api/proto/teleport/usageevents/ @@ -12,6 +13,7 @@ inputs: # generated by buf-go.gen.yaml - api/proto/teleport/legacy/client/proto/event.proto - api/proto/teleport/legacy/client/proto/inventory.proto + - api/proto/teleport/legacy/client/proto/requestable_roles.proto plugins: - local: diff --git a/buf-legacy.yaml b/buf-legacy.yaml index f50c9f7b20e5b..5aa86eff420f0 100644 --- a/buf-legacy.yaml +++ b/buf-legacy.yaml @@ -21,6 +21,7 @@ lint: # - COMMENT_MESSAGE - COMMENT_RPC - COMMENT_SERVICE + - PAGINATION_REQUIRED except: # MINIMAL - PACKAGE_DIRECTORY_MATCH @@ -35,3 +36,12 @@ lint: - RPC_REQUEST_RESPONSE_UNIQUE - RPC_REQUEST_STANDARD_NAME - RPC_RESPONSE_STANDARD_NAME +plugins: + - plugin: + - env + - GOWORK=off + - go + - -C + - ./build.assets/tooling + - run + - ./cmd/buf-plugin-linters diff --git a/buf-ts.gen.yaml b/buf-ts.gen.yaml index 75ca359210c8a..b41c6ae9d5701 100644 --- a/buf-ts.gen.yaml +++ b/buf-ts.gen.yaml @@ -3,11 +3,6 @@ version: v2 inputs: - directory: . 
paths: - - api/proto/teleport/accesslist/ - - api/proto/teleport/devicetrust/ - - api/proto/teleport/header/ - - api/proto/teleport/trait/ - - api/proto/teleport/legacy/types/trusted_device_requirement.proto - api/proto/teleport/userpreferences/ - proto/prehog/ - proto/teleport/lib/teleterm/ @@ -33,10 +28,12 @@ plugins: - protoc-gen-ts out: gen/proto/ts opt: - # the next time we tweak the ts codegen we should put the options in - # alphabetical order - - eslint_disable - add_pb_suffix + - eslint_disable + # By default, only the proto files passed as input to protoc are generated, not the files they + # import (with the exception of well-known types which are always generated when imported). + # generate_dependencies generates code for dependencies too. + - generate_dependencies - server_grpc1 - ts_nocheck strategy: all diff --git a/buf.yaml b/buf.yaml index 88945bfcb3a0e..81f74dce9733e 100644 --- a/buf.yaml +++ b/buf.yaml @@ -17,6 +17,7 @@ lint: - COMMENT_RPC - COMMENT_SERVICE - PACKAGE_NO_IMPORT_CYCLE + - PAGINATION_REQUIRED - UNARY_RPC except: - FIELD_NOT_REQUIRED @@ -75,6 +76,8 @@ lint: - api/proto/teleport/workloadidentity/v1/revocation_service.proto - proto/accessgraph/v1alpha/access_graph_service.proto - proto/teleport/lib/teleterm/v1/service.proto + - api/proto/teleport/recordingmetadata/v1/recordingmetadata_service.proto + - api/proto/teleport/join/v1/joinservice.proto breaking: use: @@ -83,3 +86,15 @@ breaking: ignore: # TODO(codingllama): Remove ignore once the PDP API is stable. - api/proto/teleport/decision/v1alpha1 + # TODO(nklaassen): Remove ignore once the new join API is stable. 
+ - api/proto/teleport/join/v1 + +plugins: + - plugin: + - env + - GOWORK=off + - go + - -C + - ./build.assets/tooling + - run + - ./cmd/buf-plugin-linters diff --git a/build.assets/Dockerfile b/build.assets/Dockerfile index 7926786363285..82f6429998ef5 100644 --- a/build.assets/Dockerfile +++ b/build.assets/Dockerfile @@ -142,6 +142,7 @@ RUN apt-get -y update && \ git \ gnupg \ gzip \ + jq \ libc6-dev \ libelf-dev \ libpam-dev \ @@ -181,8 +182,6 @@ RUN apt-get -y update && \ echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \ tee /etc/apt/sources.list.d/hashicorp.list > /dev/null && \ apt-get update && apt-get install terraform -y --no-install-recommends && \ - # Manually install the wasm-opt binary from the binaryen package on ARM64. - if [ "$BUILDARCH" = "arm64" ]; then apt-get install -y binaryen; fi && \ pip3 --no-cache-dir install yamllint && \ dpkg-reconfigure locales && \ apt-get -y clean && \ @@ -228,9 +227,10 @@ RUN curl --proto '=https' --tlsv1.2 -fsSL https://sh.rustup.rs | sh -s -- -y --p rustup target add wasm32-unknown-unknown && \ if [ "$BUILDARCH" = "amd64" ]; then rustup target add aarch64-unknown-linux-gnu i686-unknown-linux-gnu; fi -ARG WASM_PACK_VERSION -# Install wasm-pack for targeting WebAssembly from Rust. -RUN cargo install wasm-pack --locked --version ${WASM_PACK_VERSION} +ARG WASM_OPT_VERSION +ARG WASM_BINDGEN_VERSION +# Install wasm-bindgen-cli and wasm-opt for building the Rust WebAssembly module. +RUN cargo install --locked wasm-opt@${WASM_OPT_VERSION} wasm-bindgen-cli@${WASM_BINDGEN_VERSION} # Switch back to root for the remaining instructions and keep it as the default # user. @@ -249,7 +249,7 @@ RUN corepack enable yarn pnpm # Install Go.
ARG GOLANG_VERSION -RUN mkdir -p /opt && cd /opt && curl -fsSL https://storage.googleapis.com/golang/$GOLANG_VERSION.linux-${BUILDARCH}.tar.gz | tar xz && \ +RUN mkdir -p /opt && cd /opt && curl -fsSL https://go.dev/dl/$GOLANG_VERSION.linux-${BUILDARCH}.tar.gz | tar xz && \ mkdir -p /go/src/github.com/gravitational/teleport && \ chmod a+w /go && \ chmod a+w /var/lib && \ diff --git a/build.assets/Dockerfile-arm b/build.assets/Dockerfile-arm index 24ae7e6715589..c702e2de6a7e6 100644 --- a/build.assets/Dockerfile-arm +++ b/build.assets/Dockerfile-arm @@ -63,7 +63,7 @@ RUN corepack enable yarn pnpm ARG GOLANG_VERSION RUN mkdir -p /opt && \ cd /opt && \ - curl -fsSL "https://storage.googleapis.com/golang/$GOLANG_VERSION.linux-$BUILDARCH.tar.gz" | tar xz && \ + curl -fsSL "https://go.dev/dl/$GOLANG_VERSION.linux-$BUILDARCH.tar.gz" | tar xz && \ mkdir -p /go/src/github.com/gravitational/teleport && \ chmod a+w /go && \ chmod a+w /var/lib && \ diff --git a/build.assets/Dockerfile-centos7 b/build.assets/Dockerfile-centos7 index c3862ee508836..a2e379e3a2970 100644 --- a/build.assets/Dockerfile-centos7 +++ b/build.assets/Dockerfile-centos7 @@ -245,7 +245,7 @@ COPY --from=git2 /opt/git / # Install Go. ARG GOLANG_VERSION -RUN mkdir -p /opt && cd /opt && curl -fsSL https://storage.googleapis.com/golang/${GOLANG_VERSION}.linux-${TARGETARCH}.tar.gz | tar xz && \ +RUN mkdir -p /opt && cd /opt && curl -fsSL https://go.dev/dl/${GOLANG_VERSION}.linux-${TARGETARCH}.tar.gz | tar xz && \ mkdir -p /go/src/github.com/gravitational/teleport && \ chmod a+w /go && \ chmod a+w /var/lib && \ @@ -279,11 +279,6 @@ RUN curl --proto '=https' --tlsv1.2 -fsSL https://sh.rustup.rs | sh -s -- -y --p rustc --version && \ rustup target add wasm32-unknown-unknown -# Install wasm-pack for targeting WebAssembly from Rust. -ARG WASM_PACK_VERSION -# scl enable is required to use the newer C compiler installed above. Without it, the build fails. 
-RUN scl enable ${DEVTOOLSET} "cargo install wasm-pack --locked --version ${WASM_PACK_VERSION}" - # Do a quick switch back to root and copy/setup libfido2 and libpcsclite binaries. # Do this last to take better advantage of the multi-stage build. USER root diff --git a/build.assets/Dockerfile-grpcbox b/build.assets/Dockerfile-grpcbox index e15807374df43..8360d67ec9616 100644 --- a/build.assets/Dockerfile-grpcbox +++ b/build.assets/Dockerfile-grpcbox @@ -1,6 +1,6 @@ # syntax=docker/dockerfile:1 -FROM docker.io/golang:1.24.3 +FROM docker.io/golang:1.24.11 # Image layers go from less likely to most likely to change. RUN apt-get update && \ @@ -40,3 +40,14 @@ RUN \ go -C /teleport-module run -exec true google.golang.org/grpc/cmd/protoc-gen-go-grpc && \ go -C /teleport-module run -exec true google.golang.org/protobuf/cmd/protoc-gen-go && \ rm -rf /teleport-module + +# Pre-cache buf plugin +COPY build.assets/tooling /tooling-tmp/ +RUN \ + go -C /tooling-tmp run -exec true ./cmd/buf-plugin-linters && \ + rm -rf /tooling-tmp + +ARG UID +ARG GID +RUN mkdir -p /.cache /.npm && \ + chown -R $UID:$GID /.cache /.npm /go/pkg/mod/ diff --git a/build.assets/Dockerfile-node b/build.assets/Dockerfile-node index 59a3100c80e1e..3aea917065e15 100644 --- a/build.assets/Dockerfile-node +++ b/build.assets/Dockerfile-node @@ -49,8 +49,6 @@ RUN apt-get -y update && \ # Used during tag builds to build the RPM package of Connect. rpm \ && \ - # Manually install the wasm-opt binary from the binaryen package on ARM64. - if [ "$BUILDARCH" = "arm64" ]; then apt-get install -y binaryen; fi && \ dpkg-reconfigure locales && \ apt-get -y clean && \ rm -rf /var/lib/apt/lists/* @@ -83,6 +81,7 @@ RUN curl --proto '=https' --tlsv1.2 -fsSL https://sh.rustup.rs | sh -s -- -y --p rustup target add wasm32-unknown-unknown && \ if [ "$BUILDARCH" = "amd64" ]; then rustup target add aarch64-unknown-linux-gnu; fi -# Install wasm-pack for targeting WebAssembly from Rust. 
-ARG WASM_PACK_VERSION -RUN cargo install wasm-pack --locked --version ${WASM_PACK_VERSION} +ARG WASM_OPT_VERSION +ARG WASM_BINDGEN_VERSION +# Install wasm-bindgen-cli and wasm-opt for building the Rust WebAssembly module. +RUN cargo install --locked wasm-opt@${WASM_OPT_VERSION} wasm-bindgen-cli@${WASM_BINDGEN_VERSION} diff --git a/build.assets/Makefile b/build.assets/Makefile index 17106405bbc3c..61c0162fec07f 100644 --- a/build.assets/Makefile +++ b/build.assets/Makefile @@ -56,6 +56,10 @@ endif # $(ARCH) is the target architecture we want to build for. REQUIRE_HOST_ARCH = $(if $(filter-out $(ARCH),$(RUNTIME_ARCH)),$(error Cannot cross-compile $@ $(ARCH) on $(RUNTIME_ARCH))) +# Determine which version of wasm-bindgen should be installed on build containers +# responsible for building webassets. +WASM_BINDGEN_VERSION ?= $(shell $(MAKE) -C .. --no-print-directory print-wasm-bindgen-version) + # This determines which make target we call in this repo's top level Makefile when # make release in this Makefile is called. Currently this supports its default value # (release) and release-unix-preserving-webassets.
See the release-arm target for @@ -176,7 +180,8 @@ buildbox: --build-arg GOLANG_VERSION=$(GOLANG_VERSION) \ --build-arg GOLANGCI_LINT_VERSION=$(GOLANGCI_LINT_VERSION) \ --build-arg RUST_VERSION=$(RUST_VERSION) \ - --build-arg WASM_PACK_VERSION=$(WASM_PACK_VERSION) \ + --build-arg WASM_OPT_VERSION=$(WASM_OPT_VERSION) \ + --build-arg WASM_BINDGEN_VERSION=$(WASM_BINDGEN_VERSION) \ --build-arg NODE_VERSION=$(NODE_VERSION) \ --build-arg LIBBPF_VERSION=$(LIBBPF_VERSION) \ --build-arg BUF_VERSION=$(BUF_VERSION) \ @@ -219,7 +224,6 @@ buildbox-centos7: --build-arg TARGETARCH=$(RUNTIME_ARCH) \ --build-arg GOLANG_VERSION=$(GOLANG_VERSION) \ --build-arg RUST_VERSION=$(RUST_VERSION) \ - --build-arg WASM_PACK_VERSION=$(WASM_PACK_VERSION) \ --build-arg DEVTOOLSET=$(DEVTOOLSET) \ --build-arg LIBBPF_VERSION=$(LIBBPF_VERSION) \ --build-arg LIBPCSCLITE_VERSION=$(LIBPCSCLITE_VERSION) \ @@ -248,7 +252,6 @@ buildbox-centos7-fips: --build-arg TARGETARCH=$(RUNTIME_ARCH) \ --build-arg GOLANG_VERSION=$(GOLANG_VERSION) \ --build-arg RUST_VERSION=$(RUST_VERSION) \ - --build-arg WASM_PACK_VERSION=$(WASM_PACK_VERSION) \ --build-arg DEVTOOLSET=$(DEVTOOLSET) \ --build-arg LIBBPF_VERSION=$(LIBBPF_VERSION) \ --build-arg LIBPCSCLITE_VERSION=$(LIBPCSCLITE_VERSION) \ @@ -297,7 +300,8 @@ buildbox-node: --build-arg GID=$(GID) \ --build-arg NODE_VERSION=$(NODE_VERSION) \ --build-arg RUST_VERSION=$(RUST_VERSION) \ - --build-arg WASM_PACK_VERSION=$(WASM_PACK_VERSION) \ + --build-arg WASM_OPT_VERSION=$(WASM_OPT_VERSION) \ + --build-arg WASM_BINDGEN_VERSION=$(WASM_BINDGEN_VERSION) \ --cache-to type=inline \ --cache-from $(BUILDBOX_NODE) \ $(if $(PUSH),--push,--load) \ @@ -738,11 +742,18 @@ print-rust-version: @echo $(RUST_VERSION) # -# Print the wasm-pack version used to build Teleport. +# Print the wasm-opt version used to build Teleport. +# +.PHONY:print-wasm-opt-version +print-wasm-opt-version: + @echo $(WASM_OPT_VERSION) + +# +# Print the wasm-bindgen version used to build Teleport. 
# -.PHONY:print-wasm-pack-version -print-wasm-pack-version: - @echo $(WASM_PACK_VERSION) +.PHONY:print-wasm-bindgen-version +print-wasm-bindgen-version: + @echo $(WASM_BINDGEN_VERSION) # # Print the Node version used to build Teleport Connect. diff --git a/build.assets/build-fido2-macos.sh b/build.assets/build-fido2-macos.sh index fa861639fafb2..4b0288ea5500f 100755 --- a/build.assets/build-fido2-macos.sh +++ b/build.assets/build-fido2-macos.sh @@ -90,6 +90,7 @@ cbor_build() { cmake \ -DCMAKE_OSX_ARCHITECTURES="$C_ARCH" \ + -DCMAKE_POLICY_VERSION_MINIMUM=3.5 \ -DCMAKE_BUILD_TYPE=Release \ -DCMAKE_INSTALL_PREFIX="$dest" \ -DCMAKE_OSX_DEPLOYMENT_TARGET="$MACOS_VERSION_MIN" \ diff --git a/build.assets/build-package.sh b/build.assets/build-package.sh index 186d5d5684a70..e2cc788ec7043 100755 --- a/build.assets/build-package.sh +++ b/build.assets/build-package.sh @@ -305,9 +305,6 @@ if [[ "${PACKAGE_TYPE}" != "pkg" ]]; then mv -v ${TAR_PATH}/examples/systemd/post-install ${PACKAGE_TEMPDIR} mv -v ${TAR_PATH}/examples/systemd/before-remove ${PACKAGE_TEMPDIR} - # create versions folder - mkdir -p ${PACKAGE_TEMPDIR}/buildroot${LINUX_DATA_DIR}/versions - # /var/lib/teleport # shellcheck disable=SC2174 mkdir -p -m0700 ${PACKAGE_TEMPDIR}/buildroot${LINUX_DATA_DIR} diff --git a/build.assets/buildbox/Dockerfile b/build.assets/buildbox/Dockerfile index b818138d8657e..41b22f9a4b5b1 100644 --- a/build.assets/buildbox/Dockerfile +++ b/build.assets/buildbox/Dockerfile @@ -69,7 +69,7 @@ ARG BUILDARCH ARG GOLANG_VERSION # Set BUILDARCH if not set when not using buildkit. Only works for arm64 and amd64. 
RUN BUILDARCH=${BUILDARCH:-$(uname -m | sed 's/aarch64/arm64/g; s/x86_64/amd64/g')}; \ - curl -fsSL https://storage.googleapis.com/golang/${GOLANG_VERSION}.linux-${BUILDARCH}.tar.gz | \ + curl -fsSL https://go.dev/dl/${GOLANG_VERSION}.linux-${BUILDARCH}.tar.gz | \ tar -C /opt -xz && \ /opt/go/bin/go version diff --git a/build.assets/charts/Dockerfile-distroless b/build.assets/charts/Dockerfile-distroless index afc9ef481d768..3df8ee5f4f56d 100644 --- a/build.assets/charts/Dockerfile-distroless +++ b/build.assets/charts/Dockerfile-distroless @@ -33,6 +33,7 @@ FROM $BASE_IMAGE COPY --from=teleport /opt/staging / COPY --from=staging /opt/staging/root / COPY --from=staging /opt/staging/status /var/lib/dpkg/status.d +ENV TELEPORT_TOOLS_VERSION=off # Attempt a graceful shutdown by default # See https://goteleport.com/docs/reference/signals/ for signal reference. STOPSIGNAL SIGQUIT diff --git a/build.assets/charts/Dockerfile-tbot-distroless b/build.assets/charts/Dockerfile-tbot-distroless index 1b368d8aee725..5199acf6b2ffb 100644 --- a/build.assets/charts/Dockerfile-tbot-distroless +++ b/build.assets/charts/Dockerfile-tbot-distroless @@ -18,5 +18,6 @@ RUN --mount=type=bind,target=/ctx dpkg-deb -R /ctx/$TELEPORT_DEB_FILE_NAME /opt/ FROM $BASE_IMAGE COPY --from=teleport /opt/staging/opt/teleport/system/bin/tbot /usr/local/bin/tbot +COPY --from=teleport /opt/staging/opt/teleport/system/bin/fdpass-teleport /usr/local/bin/fdpass-teleport ENTRYPOINT ["/usr/local/bin/tbot"] CMD ["start"] diff --git a/build.assets/grpcbox.mk b/build.assets/grpcbox.mk index 6b49e1d580c16..c8d193ff37519 100644 --- a/build.assets/grpcbox.mk +++ b/build.assets/grpcbox.mk @@ -19,14 +19,17 @@ DOCKER ?= docker # GRPCBOX_RUN has the necessary invocation to run a command inside the grpcbox. # Use this variable to run it from other Makefiles. 
-GRPCBOX_RUN := $(DOCKER) run -it --rm -v "$$(pwd)/../:/workdir" -w /workdir $(GRPCBOX) +UID := $$(id -u) +GID := $$(id -g) +GRPCBOX_RUN := $(DOCKER) run -it --rm -u $(UID):$(GID) -v "$$(pwd)/../:/workdir" -w /workdir $(GRPCBOX) # grpcbox builds a codegen-focused buildbox. # It's leaner, meaner, faster and not supposed to compile code. .PHONY: grpcbox grpcbox: - $(DOCKER) buildx build \ + $(DOCKER) build \ --build-arg BUF_VERSION=$(BUF_VERSION) \ + --build-arg UID=$(UID) --build-arg GID=$(GID) \ -f Dockerfile-grpcbox \ -t "$(GRPCBOX)" \ ../ diff --git a/build.assets/images.mk b/build.assets/images.mk index b13c9c43e58cb..b62efa3b4b785 100644 --- a/build.assets/images.mk +++ b/build.assets/images.mk @@ -15,7 +15,7 @@ BUILDBOX_CENTOS7_ASSETS = $(BUILDBOX_BASE_NAME)-centos7-assets:$(BUILDBOX_VERSIO BUILDBOX_CENTOS7_FIPS = $(BUILDBOX_BASE_NAME)-centos7-fips:$(BUILDBOX_VERSION)-$(RUNTIME_ARCH) BUILDBOX_ARM = $(BUILDBOX_BASE_NAME)-arm:$(BUILDBOX_VERSION) BUILDBOX_UI = $(BUILDBOX_BASE_NAME)-ui:$(BUILDBOX_VERSION) -BUILDBOX_NODE = $(BUILDBOX_BASE_NAME)-node:$(BUILDBOX_VERSION) +BUILDBOX_NODE = $(BUILDBOX_BASE_NAME)-node:$(BUILDBOX_VERSION)-$(RUNTIME_ARCH) BUILDBOX_NG = $(BUILDBOX_BASE_NAME)-ng:$(BUILDBOX_VERSION) BUILDBOX_THIRDPARTY = $(BUILDBOX_BASE_NAME)-thirdparty:$(BUILDBOX_VERSION) diff --git a/build.assets/install b/build.assets/install index eb52ec7d99959..493bdcedc633d 100755 --- a/build.assets/install +++ b/build.assets/install @@ -14,7 +14,7 @@ VARDIR=/var/lib/teleport echo "Starting Teleport installation..." cd $(dirname $0) mkdir -p $VARDIR $BINDIR -cp -f teleport tctl tsh tbot teleport-update $BINDIR/ || exit 1 +cp -f teleport tctl tsh tbot fdpass-teleport teleport-update $BINDIR/ || exit 1 # # What operating system is the user running? 
diff --git a/build.assets/kubectl-version/main.go b/build.assets/kubectl-version/main.go deleted file mode 100644 index ff65540420efe..0000000000000 --- a/build.assets/kubectl-version/main.go +++ /dev/null @@ -1,60 +0,0 @@ -/* - * Teleport - * Copyright (C) 2023 Gravitational, Inc. - * - * This program is free software: you can redistribute it and/or modify - * it under the terms of the GNU Affero General Public License as published by - * the Free Software Foundation, either version 3 of the License, or - * (at your option) any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU Affero General Public License for more details. - * - * You should have received a copy of the GNU Affero General Public License - * along with this program. If not, see . - */ - -// Command version-check that outputs the version of kubectl used by the build. -// It doesn't live in the build.assets/tooling/ directory because it needs to -// be built with the go.mod file from the teleport root directory. -package main - -import ( - "fmt" - "os" - "runtime/debug" - "strings" - - // Import the kubectl module to get the version - // This is a hack to get the version of kubectl - // without having to parse the go.mod file. - _ "k8s.io/kubectl" -) - -func main() { - version, ok := getKubectlModVersion() - if !ok { - fmt.Println("kubectl version not found") - os.Exit(1) - } - fmt.Println(version) -} - -// getKubectlModVersion returns the version of the kubectl module -// and replaces the v0 prefix with v1. -// This is a hack to get the version of kubectl -// without having to parse the go.mod file. 
-func getKubectlModVersion() (string, bool) { - info, ok := debug.ReadBuildInfo() - if !ok { - return "", false - } - for _, dep := range info.Deps { - if dep.Path == "k8s.io/kubectl" { - return strings.Replace(dep.Version, "v0.", "v1.", 1), true - } - } - return "", false -} diff --git a/build.assets/macos/tsh/tsh.app/Contents/Info.plist b/build.assets/macos/tsh/tsh.app/Contents/Info.plist index 3b35b688471cb..a8a877b571813 100644 --- a/build.assets/macos/tsh/tsh.app/Contents/Info.plist +++ b/build.assets/macos/tsh/tsh.app/Contents/Info.plist @@ -19,13 +19,13 @@ CFBundlePackageType APPL CFBundleShortVersionString - 1.0 + 18.6.1 CFBundleSupportedPlatforms MacOSX CFBundleVersion - 1.0 + 18.6.1 DTCompiler com.apple.compilers.llvm.clang.1_0 DTPlatformBuild diff --git a/build.assets/macos/tshdev/tsh.app/Contents/Info.plist b/build.assets/macos/tshdev/tsh.app/Contents/Info.plist index 0383531e7294a..5bdd4440a0a81 100644 --- a/build.assets/macos/tshdev/tsh.app/Contents/Info.plist +++ b/build.assets/macos/tshdev/tsh.app/Contents/Info.plist @@ -17,13 +17,13 @@ CFBundlePackageType APPL CFBundleShortVersionString - 1.0 + 18.6.1 CFBundleSupportedPlatforms MacOSX CFBundleVersion - 1.0 + 18.6.1 DTCompiler com.apple.compilers.llvm.clang.1_0 DTPlatformBuild diff --git a/build.assets/tooling/cmd/buf-plugin-linters/README.md b/build.assets/tooling/cmd/buf-plugin-linters/README.md new file mode 100644 index 0000000000000..0998b4885f569 --- /dev/null +++ b/build.assets/tooling/cmd/buf-plugin-linters/README.md @@ -0,0 +1,30 @@ +# `buf-plugin-linters` + +`buf-plugin-linters` is a `buf`[1] plugin for linting `.proto` files for APIs that should +conform to Teleport resource guidelines for pagination[2].
+ +## Usage + +To enable the plugin in `buf.yaml`: + +```yaml +version: v2 + +lint: + use: + - PAGINATION_REQUIRED +plugins: + - plugin: + - env + - GOWORK=off + - go + - -C + - ./build.assets/tooling + - run + - ./cmd/buf-plugin-linters +``` + +See `default.go` for the default configuration. + +[1]: http://buf.build/ +[2]: https://github.com/gravitational/teleport/blob/master/rfd/0153-resource-guidelines.md#list diff --git a/build.assets/tooling/cmd/buf-plugin-linters/config.go b/build.assets/tooling/cmd/buf-plugin-linters/config.go new file mode 100644 index 0000000000000..7ceda86526e2d --- /dev/null +++ b/build.assets/tooling/cmd/buf-plugin-linters/config.go @@ -0,0 +1,93 @@ +// Teleport +// Copyright (C) 2025 Gravitational, Inc. +// +// This program is free software: you can redistribute it and/or modify +// it under the terms of the GNU Affero General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// This program is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Affero General Public License for more details. +// +// You should have received a copy of the GNU Affero General Public License +// along with this program. If not, see . + +package main + +import ( + "strings" + + "google.golang.org/protobuf/reflect/protoreflect" +) + +// methodPaginatedPrefix is a list of RPC method name prefixes which should be paginated. +var methodPaginatedPrefix = []string{"List", "Search", "Get"} + +// methodPrefixMustHaveRepeated is a prefix for methods that must have a repeated field without exception. 
+var methodPrefixMustHaveRepeated = "List" + +const ( + // Naming based on rfd/0153-resource-guidelines.md + paginationRequestSizeName = "page_size" + paginationRequestTokenName = "page_token" + paginationResponseNextTokenName = "next_page_token" +) + +type fieldNames struct { + size string // Name of the max page size field in the request. + token string // Name of the current page token field in the request. + next string // Name of the next page token field in the response. +} + +type Config struct { + // overwrites maps the full name of a proto method to a struct of field names. + overwrites map[protoreflect.FullName]fieldNames + + // skips contains entries of RPCs which should be skipped. + skips map[protoreflect.FullName]struct{} +} + +func (c *Config) getPageSizeFieldName(method protoreflect.MethodDescriptor) string { + if overwrite, ok := c.overwrites[method.FullName()]; ok && overwrite.size != "" { + return overwrite.size + } + return paginationRequestSizeName +} + +func (c *Config) getPageFieldName(method protoreflect.MethodDescriptor) string { + if overwrite, ok := c.overwrites[method.FullName()]; ok && overwrite.token != "" { + return overwrite.token + } + return paginationRequestTokenName +} + +func (c *Config) getNextPageFieldName(method protoreflect.MethodDescriptor) string { + if overwrite, ok := c.overwrites[method.FullName()]; ok && overwrite.next != "" { + return overwrite.next + } + return paginationResponseNextTokenName +} + +func (c *Config) shouldSkip(method protoreflect.MethodDescriptor) bool { + if _, skipped := c.skips[method.FullName()]; skipped { + return true + } + + if !hasAnyPrefix(string(method.Name()), methodPaginatedPrefix) { + return true + } + + return false +} + +// hasAnyPrefix returns true if the given name starts with any of the given prefixes. +func hasAnyPrefix(name string, prefixes []string) bool { + for _, prefix := range prefixes { + if strings.HasPrefix(name, prefix) { + return true + } + } + return false +} diff --git
a/build.assets/tooling/cmd/buf-plugin-linters/default.go b/build.assets/tooling/cmd/buf-plugin-linters/default.go new file mode 100644 index 0000000000000..d4c9ee1305770 --- /dev/null +++ b/build.assets/tooling/cmd/buf-plugin-linters/default.go @@ -0,0 +1,204 @@ +// Teleport +// Copyright (C) 2025 Gravitational, Inc. +// +// This program is free software: you can redistribute it and/or modify +// it under the terms of the GNU Affero General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// This program is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Affero General Public License for more details. +// +// You should have received a copy of the GNU Affero General Public License +// along with this program. If not, see . + +package main + +import "google.golang.org/protobuf/reflect/protoreflect" + +func newDefaultConfig() *Config { + return &Config{ + // Existing RPCs that do not follow RFD-153 naming: + overwrites: map[protoreflect.FullName]fieldNames{ + "teleport.accesslist.v1.AccessListService.ListAccessLists": { + token: "next_token", + next: "next_token", + }, + "teleport.accesslist.v1.AccessListService.ListAccessListReviews": { + token: "next_token", + next: "next_token", + }, + "teleport.accesslist.v1.AccessListService.ListAllAccessListReviews": { + token: "next_token", + next: "next_token", + }, + + "teleport.auditlog.v1.AuditLogService.GetUnstructuredEvents": { + size: "limit", + token: "start_key", + next: "last_key", + }, + "teleport.autoupdate.v1.AutoUpdateService.ListAutoUpdateAgentReports": { + token: "next_token", + next: "next_key", + }, + "teleport.discoveryconfig.v1.DiscoveryConfigService.ListDiscoveryConfigs": { + token: "next_token", + next: "next_key", + }, + + 
"teleport.integration.v1.AWSOIDCService.ListDatabases": { + token: "next_token", + next: "next_token", + }, + "teleport.integration.v1.AWSOIDCService.ListSecurityGroups": { + token: "next_token", + next: "next_token", + }, + "teleport.integration.v1.AWSOIDCService.ListSubnets": { + token: "next_token", + next: "next_token", + }, + "teleport.integration.v1.AWSOIDCService.ListVPCs": { + token: "next_token", + next: "next_token", + }, + "teleport.integration.v1.AWSOIDCService.ListDeployedDatabaseServices": { + token: "next_token", + next: "next_token", + }, + "teleport.integration.v1.AWSOIDCService.ListEKSClusters": { + token: "next_token", + next: "next_token", + }, + "teleport.integration.v1.AWSRolesAnywhereService.ListRolesAnywhereProfiles": { + size: "page_size", + token: "next_page_token", + }, + "teleport.integration.v1.IntegrationService.ListIntegrations": { + size: "limit", + token: "next_key", + next: "next_key", + }, + "teleport.kube.v1.KubeService.ListKubernetesResources": { + size: "limit", + token: "start_key", + next: "next_key", + }, + "teleport.plugins.v1.PluginService.ListPlugins": { + token: "start_key", + next: "next_key", + }, + "teleport.scim.v1.SCIMService.ListSCIMResources": { + // This rpc uses a different scheme for pagination (index based). 
+ token: "page", + size: "page", + next: "start_index", + }, + "teleport.scopes.joining.v1.ScopedJoiningService.ListScopedTokens": { + token: "cursor", + size: "limit", + next: "cursor", + }, + "teleport.lib.teleterm.v1.TerminalService.ListKubernetesResources": { + size: "limit", + token: "next_key", + next: "next_key", + }, + "teleport.lib.teleterm.v1.TerminalService.ListDatabaseServers": { + next: "next_key", + }, + "teleport.lib.teleterm.v1.TerminalService.ListUnifiedResources": { + size: "limit", + token: "start_key", + next: "next_key", + }, + + // Testing only: + "test.foo.bar.v1.config.FooService.SearchFoos": { + size: "max", + token: "token", + next: "next", + }, + }, + skips: map[protoreflect.FullName]struct{}{ + // TODO(okraport): review the following and remove the skip: + "proto.AuthService.GetAccessCapabilities": {}, + "proto.AuthService.GetAccountRecoveryCodes": {}, + "proto.AuthService.GetAlertAcks": {}, + "proto.AuthService.GetApps": {}, + "proto.AuthService.GetClusterAlerts": {}, + "proto.AuthService.GetEvents": {}, + "proto.AuthService.GetGithubConnectors": {}, + "proto.AuthService.GetInstallers": {}, + "proto.AuthService.GetKubernetesClusters": {}, + "proto.AuthService.GetLocks": {}, + "proto.AuthService.GetMFADevices": {}, + "proto.AuthService.GetOIDCConnectors": {}, + "proto.AuthService.GetPluginData": {}, + "proto.AuthService.GetSAMLConnectors": {}, + "proto.AuthService.GetSemaphores": {}, + "proto.AuthService.GetSessionEvents": {}, + "proto.AuthService.GetSnowflakeSessions": {}, + "proto.AuthService.GetSSHTargets": {}, + "proto.AuthService.GetSSODiagnosticInfo": {}, + "proto.AuthService.GetTokens": {}, + "proto.AuthService.GetTrustedClusters": {}, + "proto.AuthService.GetWebTokens": {}, + "proto.AuthService.GetWindowsDesktops": {}, + "proto.AuthService.GetWindowsDesktopServices": {}, + "proto.AuthService.ListAccessRequests": {}, + "proto.AuthService.ListApps": {}, + "proto.AuthService.ListProvisionTokens": {}, + 
"proto.AuthService.ListReleases": {}, + "proto.AuthService.ListResources": {}, + "proto.AuthService.ListRoles": {}, + "proto.AuthService.ListSAMLIdPServiceProviders": {}, + "proto.AuthService.ListUnifiedResources": {}, + "proto.AuthService.ListUserGroups": {}, + "proto.AuthService.ListWindowsDesktops": {}, + "teleport.accesslist.v1.AccessListService.GetAccessListOwners": {}, + "teleport.accesslist.v1.AccessListService.GetAccessLists": {}, + "teleport.accesslist.v1.AccessListService.GetAccessListsToReview": {}, + "teleport.accesslist.v1.AccessListService.GetSuggestedAccessLists": {}, + "teleport.integration.v1.AWSOIDCService.ListDatabases": {}, + "teleport.integration.v1.AWSOIDCService.ListDeployedDatabaseServices": {}, + "teleport.integration.v1.AWSOIDCService.ListEKSClusters": {}, + "teleport.integration.v1.AWSOIDCService.ListSecurityGroups": {}, + "teleport.integration.v1.AWSOIDCService.ListSubnets": {}, + "teleport.integration.v1.AWSOIDCService.ListVPCs": {}, + "teleport.kube.v1.KubeService.ListKubernetesResources": {}, + "teleport.lib.teleterm.v1.TerminalService.GetSuggestedAccessLists": {}, + "teleport.okta.v1.OktaService.GetApps": {}, + "teleport.okta.v1.OktaService.GetGroups": {}, + "teleport.plugins.v1.PluginService.GetAvailablePluginTypes": {}, + "teleport.plugins.v1.PluginService.SearchPluginStaticCredentials": {}, + "teleport.secreports.v1.SecReportsService.GetSchema": {}, + "teleport.trust.v1.TrustService.GetCertAuthorities": {}, + "teleport.userloginstate.v1.UserLoginStateService.GetUserLoginStates": {}, + "teleport.lib.teleterm.auto_update.v1.AutoUpdateService.GetClusterVersions": {}, + "teleport.lib.teleterm.v1.TerminalService.GetAccessRequests": {}, + "teleport.lib.teleterm.v1.TerminalService.GetRequestableRoles": {}, + "teleport.lib.teleterm.v1.TerminalService.ListKubernetesResources": {}, + "teleport.lib.teleterm.v1.TerminalService.ListDatabaseServers": {}, + "teleport.lib.teleterm.v1.TerminalService.ListGateways": {}, + 
"teleport.lib.teleterm.v1.TerminalService.ListRootClusters": {}, + "teleport.lib.teleterm.v1.TerminalService.ListLeafClusters": {}, + "teleport.lib.teleterm.v1.TerminalService.ListDatabaseUsers": {}, + "proto.AuthService.GetInventoryStatus": {}, + + // RPCs deprecated from v19 onwards: + "proto.AuthService.GetDatabases": {}, + // repeated field `schemas` in `Resource` does not require pagination. + "teleport.scim.v1.SCIMService.GetSCIMResource": {}, + // `Device` message contains repeated field `DeviceCollectedData` but is not paginated. + "teleport.devicetrust.v1.DeviceTrustService.GetDevice": {}, + // AuthSettings contains repeated `auth_providers` but does not need pagination. + "teleport.lib.teleterm.v1.TerminalService.GetAuthSettings": {}, + // GetServiceInfoResponse does not require pagination. + "teleport.lib.teleterm.vnet.v1.VnetService.GetServiceInfo": {}, + }, + } +} diff --git a/build.assets/tooling/cmd/buf-plugin-linters/main.go b/build.assets/tooling/cmd/buf-plugin-linters/main.go new file mode 100644 index 0000000000000..951594c273aba --- /dev/null +++ b/build.assets/tooling/cmd/buf-plugin-linters/main.go @@ -0,0 +1,170 @@ +// Teleport +// Copyright (C) 2025 Gravitational, Inc. +// +// This program is free software: you can redistribute it and/or modify +// it under the terms of the GNU Affero General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// This program is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Affero General Public License for more details. +// +// You should have received a copy of the GNU Affero General Public License +// along with this program. If not, see . 
+ +package main + +import ( + "context" + "strings" + + "buf.build/go/bufplugin/check" + "buf.build/go/bufplugin/check/checkutil" + "google.golang.org/protobuf/reflect/protoreflect" + "google.golang.org/protobuf/types/descriptorpb" +) + +const ( + paginationRuleID = "PAGINATION_REQUIRED" +) + +var paginationSpec = &check.Spec{ + Rules: []*check.RuleSpec{ + { + ID: paginationRuleID, + Purpose: "Ensure RPCs returning a repeated field use pagination fields.", + Type: check.RuleTypeLint, + Handler: checkutil.NewMethodRuleHandler(checkPaginationMethod), + }, + }, +} + +func main() { + check.Main(paginationSpec) +} + +// checkPaginationMethod implements MethodRuleHandler for RuleSpec +func checkPaginationMethod( + _ context.Context, + responseWriter check.ResponseWriter, + request check.Request, + method protoreflect.MethodDescriptor, +) error { + + if method.IsStreamingServer() || method.IsStreamingClient() { + // Streaming APIs do not expect pagination. + return nil + } + + if isMethodDeprecated(method) { + // deprecated methods are skipped + return nil + } + + config := newDefaultConfig() + if config.shouldSkip(method) { + return nil + } + + resp := method.Output() + if !hasRepeated(resp.Fields()) { + // Check if the method *should* have the repeated field. 
+ if strings.HasPrefix(string(method.Name()), methodPrefixMustHaveRepeated) { + responseWriter.AddAnnotation( + check.WithDescriptor(method), + check.WithMessagef( + "repeated fields expected for RPC names starting with: %q (RFD-0153)", + methodPrefixMustHaveRepeated, + ), + ) + } + return nil + } + + sizeName := config.getPageSizeFieldName(method) + tokenName := config.getPageFieldName(method) + nextName := config.getNextPageFieldName(method) + + req := method.Input() + + if !hasFieldName(req.Fields(), sizeName) { + responseWriter.AddAnnotation( + check.WithDescriptor(req), + check.WithMessagef( + "%q taken by %q is missing page size field name: %q (RFD-0153)", + req.Name(), + method.FullName(), + sizeName, + ), + ) + } + + if !hasFieldName(req.Fields(), tokenName) { + responseWriter.AddAnnotation( + check.WithDescriptor(req), + check.WithMessagef( + "%q taken by %q is missing page token field name: %q (RFD-0153)", + req.Name(), + method.FullName(), + tokenName, + ), + ) + } + + if !hasFieldName(resp.Fields(), nextName) { + responseWriter.AddAnnotation( + check.WithDescriptor(resp), + check.WithMessagef( + "%q returned by %q is missing next page token field name: %q (RFD-0153)", + resp.Name(), + method.FullName(), + nextName, + ), + ) + } + + return nil +} + +// hasFieldName returns true if any of the given proto fields has the given name. +func hasFieldName(fields protoreflect.FieldDescriptors, name string) bool { + for i := 0; i < fields.Len(); i++ { + if string(fields.Get(i).Name()) == name { + return true + } + } + + return false +} + +// hasRepeated returns true if any of the proto fields is marked as `repeated`. +func hasRepeated(fields protoreflect.FieldDescriptors) bool { + for i := 0; i < fields.Len(); i++ { + field := fields.Get(i) + + if field.IsMap() { + // maps are technically repeated under the hood, but currently out of scope for the linter. 
+ continue + } + + if field.Cardinality() == protoreflect.Repeated { + return true + } + } + + return false +} + +// isMethodDeprecated returns true if the RPC has been marked as deprecated. +func isMethodDeprecated(method protoreflect.MethodDescriptor) bool { + options := method.Options() + if options == nil { + return false + } + if opts, ok := options.(*descriptorpb.MethodOptions); ok { + return opts.GetDeprecated() + } + return false +} diff --git a/build.assets/tooling/cmd/buf-plugin-linters/main_test.go b/build.assets/tooling/cmd/buf-plugin-linters/main_test.go new file mode 100644 index 0000000000000..9b4bb453f1c79 --- /dev/null +++ b/build.assets/tooling/cmd/buf-plugin-linters/main_test.go @@ -0,0 +1,204 @@ +// Teleport +// Copyright (C) 2025 Gravitational, Inc. +// +// This program is free software: you can redistribute it and/or modify +// it under the terms of the GNU Affero General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// This program is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Affero General Public License for more details. +// +// You should have received a copy of the GNU Affero General Public License +// along with this program. If not, see . 
+ +package main + +import ( + "testing" + + "buf.build/go/bufplugin/check/checktest" +) + +func TestSpec(t *testing.T) { + t.Parallel() + checktest.SpecTest(t, paginationSpec) +} + +func TestSimple(t *testing.T) { + t.Parallel() + + checktest.CheckTest{ + Request: &checktest.RequestSpec{ + Files: &checktest.ProtoFileSpec{ + DirPaths: []string{"testdata"}, + FilePaths: []string{"correct.proto"}, + }, + RuleIDs: []string{paginationRuleID}, + }, + Spec: paginationSpec, + ExpectedAnnotations: []checktest.ExpectedAnnotation{}, + }.Run(t) +} + +func TestDefaultConfig(t *testing.T) { + t.Parallel() + + checktest.CheckTest{ + Request: &checktest.RequestSpec{ + Files: &checktest.ProtoFileSpec{ + DirPaths: []string{"testdata"}, + FilePaths: []string{"bad.proto"}, + }, + RuleIDs: []string{paginationRuleID}, + }, + Spec: paginationSpec, + ExpectedAnnotations: []checktest.ExpectedAnnotation{ + { + RuleID: paginationRuleID, + FileLocation: &checktest.ExpectedFileLocation{ + FileName: "bad.proto", + StartLine: 30, + StartColumn: 2, + EndLine: 30, + EndColumn: 86, + }, + Message: "repeated fields expected for RPC names starting with: \"List\" (RFD-0153)", + }, + { + RuleID: paginationRuleID, + FileLocation: &checktest.ExpectedFileLocation{ + FileName: "bad.proto", + StartLine: 33, + StartColumn: 0, + EndLine: 35, + EndColumn: 1, + }, + Message: "\"ListMissingPageReqFoosRequest\" taken by \"bad.FooService.ListMissingPageReqFoos\" is missing page token field name: \"page_token\" (RFD-0153)", + }, + { + RuleID: paginationRuleID, + FileLocation: &checktest.ExpectedFileLocation{ + FileName: "bad.proto", + StartLine: 42, + StartColumn: 0, + EndLine: 44, + EndColumn: 1, + }, + Message: "\"ListMissingPageSizeFoosRequest\" taken by \"bad.FooService.ListMissingPageSizeFoos\" is missing page token field name: \"page_token\" (RFD-0153)", + }, + { + RuleID: paginationRuleID, + FileLocation: &checktest.ExpectedFileLocation{ + FileName: "bad.proto", + StartLine: 57, + StartColumn: 0, + EndLine: 
59, + EndColumn: 1, + }, + Message: "\"ListMissingNextpageFoosResponse\" returned by \"bad.FooService.ListMissingNextpageFoos\" is missing next page token field name: \"next_page_token\" (RFD-0153)", + }, + { + RuleID: paginationRuleID, + FileLocation: &checktest.ExpectedFileLocation{ + FileName: "bad.proto", + StartLine: 62, + StartColumn: 0, + EndLine: 63, + EndColumn: 1, + }, + Message: "\"ListMissingAllFoosRequest\" taken by \"bad.FooService.ListMissingAllFoos\" is missing page size field name: \"page_size\" (RFD-0153)", + }, + { + RuleID: paginationRuleID, + FileLocation: &checktest.ExpectedFileLocation{ + FileName: "bad.proto", + StartLine: 62, + StartColumn: 0, + EndLine: 63, + EndColumn: 1, + }, + Message: "\"ListMissingAllFoosRequest\" taken by \"bad.FooService.ListMissingAllFoos\" is missing page token field name: \"page_token\" (RFD-0153)", + }, + { + RuleID: paginationRuleID, + FileLocation: &checktest.ExpectedFileLocation{ + FileName: "bad.proto", + StartLine: 65, + StartColumn: 0, + EndLine: 67, + EndColumn: 1, + }, + Message: "\"ListMissingAllFoosResponse\" returned by \"bad.FooService.ListMissingAllFoos\" is missing next page token field name: \"next_page_token\" (RFD-0153)", + }, + }, + }.Run(t) +} + +func TestConfig(t *testing.T) { + t.Parallel() + + checktest.CheckTest{ + Request: &checktest.RequestSpec{ + Files: &checktest.ProtoFileSpec{ + DirPaths: []string{"testdata"}, + FilePaths: []string{"config.proto"}, + }, + RuleIDs: []string{paginationRuleID}, + }, + Spec: paginationSpec, + ExpectedAnnotations: []checktest.ExpectedAnnotation{}, + }.Run(t) +} + +func TestRepeatFields(t *testing.T) { + t.Parallel() + + checktest.CheckTest{ + Request: &checktest.RequestSpec{ + Files: &checktest.ProtoFileSpec{ + DirPaths: []string{"testdata"}, + FilePaths: []string{"repeat.proto"}, + }, + RuleIDs: []string{paginationRuleID}, + }, + Spec: paginationSpec, + ExpectedAnnotations: []checktest.ExpectedAnnotation{ + { + RuleID: paginationRuleID, + FileLocation: 
&checktest.ExpectedFileLocation{ + FileName: "repeat.proto", + StartLine: 30, + StartColumn: 0, + EndLine: 31, + EndColumn: 1, + }, + Message: "\"GetFoosRequest\" taken by \"repeat.FooService.GetFoos\" is missing page size field name: \"page_size\" (RFD-0153)", + }, + { + RuleID: paginationRuleID, + FileLocation: &checktest.ExpectedFileLocation{ + FileName: "repeat.proto", + StartLine: 30, + StartColumn: 0, + EndLine: 31, + EndColumn: 1, + }, + Message: "\"GetFoosRequest\" taken by \"repeat.FooService.GetFoos\" is missing page token field name: \"page_token\" (RFD-0153)", + }, + { + RuleID: paginationRuleID, + FileLocation: &checktest.ExpectedFileLocation{ + FileName: "repeat.proto", + StartLine: 33, + StartColumn: 0, + EndLine: 35, + EndColumn: 1, + }, + Message: "\"GetFoosResponse\" returned by \"repeat.FooService.GetFoos\" is missing next page token field name: \"next_page_token\" (RFD-0153)", + }, + }, + }.Run(t) +} diff --git a/build.assets/tooling/cmd/buf-plugin-linters/testdata/bad.proto b/build.assets/tooling/cmd/buf-plugin-linters/testdata/bad.proto new file mode 100644 index 0000000000000..81d560d2d46c2 --- /dev/null +++ b/build.assets/tooling/cmd/buf-plugin-linters/testdata/bad.proto @@ -0,0 +1,80 @@ +// Teleport +// Copyright (C) 2025 Gravitational, Inc. +// +// This program is free software: you can redistribute it and/or modify +// it under the terms of the GNU Affero General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// This program is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Affero General Public License for more details. +// +// You should have received a copy of the GNU Affero General Public License +// along with this program. If not, see . 
+syntax = "proto3"; + +package bad; + +message Foo { + string kind = 1; + string sub_kind = 2; + string version = 3; +} + +service FooService { + rpc ListMissingPageReqFoos(ListMissingPageReqFoosRequest) returns (ListMissingPageReqFoosResponse); + rpc ListMissingPageSizeFoos(ListMissingPageSizeFoosRequest) returns (ListMissingPageSizeFoosResponse); + rpc ListMissingNextpageFoos(ListMissingNextpageFoosRequest) returns (ListMissingNextpageFoosResponse); + rpc ListMissingAllFoos(ListMissingAllFoosRequest) returns (ListMissingAllFoosResponse); + rpc ListWithoutRepeat(ListWithoutRepeatRequest) returns (ListWithoutRepeatResponse); +} + +message ListMissingPageReqFoosRequest { + int32 page_size = 1; +} + +message ListMissingPageReqFoosResponse { + repeated Foo foos = 1; + string next_page_token = 2; +} + +message ListMissingPageSizeFoosRequest { + int32 page_size = 1; +} + +message ListMissingPageSizeFoosResponse { + repeated Foo foos = 1; + string next_page_token = 2; +} + + +message ListMissingNextpageFoosRequest { + int32 page_size = 1; + string page_token = 2; +} + +message ListMissingNextpageFoosResponse { + repeated Foo foos = 1; +} + + +message ListMissingAllFoosRequest { +} + +message ListMissingAllFoosResponse { + repeated Foo foos = 1; +} + +message ListWithoutRepeatRequest { + int32 page_size = 1; + string page_token = 2; +} + +message ListWithoutRepeatResponse { + Foo foos = 1; + string next_page_token = 2; +} + + diff --git a/build.assets/tooling/cmd/buf-plugin-linters/testdata/buf.yaml b/build.assets/tooling/cmd/buf-plugin-linters/testdata/buf.yaml new file mode 100644 index 0000000000000..4d943895d1471 --- /dev/null +++ b/build.assets/tooling/cmd/buf-plugin-linters/testdata/buf.yaml @@ -0,0 +1,6 @@ +version: v2 + +lint: + ignore: + # Ignore all test data + - . 
diff --git a/build.assets/tooling/cmd/buf-plugin-linters/testdata/config.proto b/build.assets/tooling/cmd/buf-plugin-linters/testdata/config.proto new file mode 100644 index 0000000000000..9a86e2b16e57d --- /dev/null +++ b/build.assets/tooling/cmd/buf-plugin-linters/testdata/config.proto @@ -0,0 +1,40 @@ +// Teleport +// Copyright (C) 2025 Gravitational, Inc. +// +// This program is free software: you can redistribute it and/or modify +// it under the terms of the GNU Affero General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// This program is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Affero General Public License for more details. +// +// You should have received a copy of the GNU Affero General Public License +// along with this program. If not, see . +syntax = "proto3"; + +package test.foo.bar.v1.config; + +message Foo { + string kind = 1; + string sub_kind = 2; + string version = 3; + +} + +service FooService { + rpc SearchFoos(SearchFoosRequest) returns (SearchFoosResponse); +} + +message SearchFoosRequest { + int32 max = 1; + string token = 2; +} + +message SearchFoosResponse { + repeated Foo foos = 1; + string next = 2; +} + diff --git a/build.assets/tooling/cmd/buf-plugin-linters/testdata/correct.proto b/build.assets/tooling/cmd/buf-plugin-linters/testdata/correct.proto new file mode 100644 index 0000000000000..5588a5e915fae --- /dev/null +++ b/build.assets/tooling/cmd/buf-plugin-linters/testdata/correct.proto @@ -0,0 +1,79 @@ +// Teleport +// Copyright (C) 2025 Gravitational, Inc. 
+// +// This program is free software: you can redistribute it and/or modify +// it under the terms of the GNU Affero General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// This program is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Affero General Public License for more details. +// +// You should have received a copy of the GNU Affero General Public License +// along with this program. If not, see . +syntax = "proto3"; + +package teleport.foo.v1; + +import "google/protobuf/timestamp.proto"; +import "google/protobuf/empty.proto"; + +message Foo { + string kind = 1; + string sub_kind = 2; + string version = 3; + FooSpec spec = 5; + FooStatus status = 6; +} + +message FooSpec { + string bar = 1; + int32 baz = 2; + bool qux = 3; +} + +message FooStatus { + google.protobuf.Timestamp next_audit = 1; + string teleport_host = 2; +} + +service FooService { + rpc GetFoo(GetFooRequest) returns (Foo); + rpc ListFoos(ListFoosRequest) returns (ListFoosResponse); + rpc CreateFoo(CreateFooRequest) returns (Foo); + rpc UpdateFoo(UpdateFooRequest) returns (Foo); + rpc UpsertFoo(UpsertFooRequest) returns (Foo); + rpc DeleteFoo(DeleteFooRequest) returns (google.protobuf.Empty); +} + +message GetFooRequest { + string foo_id = 1; +} + +message ListFoosRequest { + int32 page_size = 1; + string page_token = 2; +} + +message ListFoosResponse { + repeated Foo foos = 1; + string next_page_token = 2; +} + +message CreateFooRequest { + Foo foo = 1; +} + +message UpdateFooRequest { + Foo foo = 1; +} + +message UpsertFooRequest { + Foo foo = 2; +} + +message DeleteFooRequest { + string foo_id = 1; +} \ No newline at end of file diff --git a/build.assets/tooling/cmd/buf-plugin-linters/testdata/repeat.proto 
b/build.assets/tooling/cmd/buf-plugin-linters/testdata/repeat.proto new file mode 100644 index 0000000000000..801b3ae3a45f8 --- /dev/null +++ b/build.assets/tooling/cmd/buf-plugin-linters/testdata/repeat.proto @@ -0,0 +1,37 @@ +// Teleport +// Copyright (C) 2025 Gravitational, Inc. +// +// This program is free software: you can redistribute it and/or modify +// it under the terms of the GNU Affero General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// This program is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Affero General Public License for more details. +// +// You should have received a copy of the GNU Affero General Public License +// along with this program. If not, see . +syntax = "proto3"; + +package repeat; + +message Foo { + string kind = 1; + string sub_kind = 2; + string version = 3; + +} + +service FooService { + rpc GetFoos(GetFoosRequest) returns (GetFoosResponse); +} + +message GetFoosRequest { +} + +message GetFoosResponse { + repeated Foo foos = 1; +} + diff --git a/build.assets/tooling/cmd/gen-athena-docs/main.go b/build.assets/tooling/cmd/gen-athena-docs/main.go new file mode 100644 index 0000000000000..ce2f58c8d1d92 --- /dev/null +++ b/build.assets/tooling/cmd/gen-athena-docs/main.go @@ -0,0 +1,86 @@ +// Teleport +// Copyright (C) 2025 Gravitational, Inc. +// +// This program is free software: you can redistribute it and/or modify +// it under the terms of the GNU Affero General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. 
+// +// This program is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Affero General Public License for more details. +// +// You should have received a copy of the GNU Affero General Public License +// along with this program. If not, see . + +package main + +import ( + _ "embed" + "fmt" + "os" + "regexp" + "slices" + "strings" + "text/template" + "unicode" + "unicode/utf8" + + "github.com/gravitational/teleport/gen/go/eventschema" +) + +// colNameList returns a comma-separated list of column names from a given +// Access Monitoring event view, for use in an example query command. It +// includes at most the first three columns. +func colNameList(cols []*eventschema.ColumnSchemaDetails) string { + var sb strings.Builder + for i := range min(3, len(cols)) { + if i != 0 { + sb.WriteString(",") + } + sb.WriteString(cols[i].NameSQL()) + } + return sb.String() +} + +var descPredicate = regexp.MustCompile(`^(is|are) `) + +// prepareDescription returns a description of the column data provided in col. +func prepareDescription(col *eventschema.ColumnSchemaDetails) string { + // Remove the initial verb, since there is no subject in the sentence. + desc := descPredicate.ReplaceAllString(col.Description, "") + + // Capitalize the first word in the description. + if r, size := utf8.DecodeRuneInString(desc); r != utf8.RuneError { + desc = string(unicode.ToUpper(r)) + desc[size:] + } + + return desc +} + +// docTempl is the template that represents an Access Monitoring event reference +// docs page. The assumption is that "@" characters are replaced with backticks +// before rendering the template. 
+// +//go:embed schema-reference.mdx.tmpl +var docTempl string + +func main() { + data, err := eventschema.GetViewsDetails() + if err != nil { + fmt.Fprintf(os.Stderr, "Cannot generate an Access Monitoring schema reference: %v\n", err) + os.Exit(1) + } + + slices.SortFunc(data, func(a, b *eventschema.TableSchemaDetails) int { + return strings.Compare(a.Name, b.Name) + }) + + template.Must(template.New("event-reference").Funcs( + template.FuncMap{ + "ColNameList": colNameList, + "PrepareDescription": prepareDescription, + }, + ).Parse(docTempl)).Execute(os.Stdout, data) +} diff --git a/build.assets/tooling/cmd/gen-athena-docs/schema-reference.mdx.tmpl b/build.assets/tooling/cmd/gen-athena-docs/schema-reference.mdx.tmpl new file mode 100644 index 0000000000000..26c1f988e943c --- /dev/null +++ b/build.assets/tooling/cmd/gen-athena-docs/schema-reference.mdx.tmpl @@ -0,0 +1,28 @@ +{/*generated file. DO NOT EDIT.*/} +{/*To generate, run make access-monitoring-reference*/} +{/*vale messaging.capitalization = NO*/} +{/*vale messaging.consistent-terms = NO*/} + +{{ range . -}} +## {{ .Name }} + +`{{ .Name }}` {{ .Description }}. + +Example query: + +```code +$ tctl audit query exec \ + 'select {{ ColNameList .Columns }} from {{ .SQLViewName }} limit 1' +``` + +Columns: + +|SQL Name|Type|Description| +|---|---|---| +{{- range .Columns }} +|{{ .NameSQL }}|{{ .Type }}|{{ PrepareDescription . 
}}| +{{- end }} + +{{ end }} +{/*vale messaging.capitalization = YES*/} +{/*vale messaging.consistent-terms = YES*/} diff --git a/build.assets/tooling/cmd/goderive/main.go b/build.assets/tooling/cmd/goderive/main.go index 1260866add8bc..bba584bf50076 100644 --- a/build.assets/tooling/cmd/goderive/main.go +++ b/build.assets/tooling/cmd/goderive/main.go @@ -25,6 +25,7 @@ import ( "github.com/awalterschulze/goderive/derive" + "github.com/gravitational/teleport/build.assets/tooling/cmd/goderive/plugin/deepcopy" "github.com/gravitational/teleport/build.assets/tooling/cmd/goderive/plugin/teleportequal" ) @@ -32,6 +33,7 @@ func main() { // Establish Teleport derive plugins of interest. plugins := []derive.Plugin{ teleportequal.NewPlugin(), + deepcopy.NewPlugin(), } // Parse args, which are just paths at the moment.. diff --git a/build.assets/tooling/cmd/goderive/plugin/deepcopy/0001-add-time-deepcopy-support.patch b/build.assets/tooling/cmd/goderive/plugin/deepcopy/0001-add-time-deepcopy-support.patch new file mode 100644 index 0000000000000..9d8229a04411c --- /dev/null +++ b/build.assets/tooling/cmd/goderive/plugin/deepcopy/0001-add-time-deepcopy-support.patch @@ -0,0 +1,99 @@ +--- a/deepcopy/deepcopy.go ++++ b/deepcopy/deepcopy.go +@@ -17,8 +17,8 @@ + // The deriveDeepCopy function is a maintainable and fast way to implement fast copy functions. + // + // When goderive walks over your code it is looking for a function that: +-// - was not implemented (or was previously derived) and +-// - has a predefined prefix. ++// - was not implemented (or was previously derived) and ++// - has a predefined prefix. + // + // In the following code the deriveDeepCopy function will be found, because + // it was not implemented and it has a prefix deriveDeepCopy. 
+@@ -29,7 +29,7 @@ + // import "sort" + // + // type MyStruct struct { +-// Int64 int64 ++// Int64 int64 + // StringPtr *string + // } + // +@@ -43,25 +43,27 @@ + // } + // + // The initial type that is passed into deriveDeepCopy needs to have a reference type: +-// - pointer +-// - slice +-// - map ++// - pointer ++// - slice ++// - map ++// + // , otherwise we are not able to modify the input parameter and then what are you really copying, + // but as we go deeper we support most types. + // + // Supported types: +-// - basic types +-// - named structs +-// - slices +-// - maps +-// - pointers to these types +-// - private fields of structs in external packages (using reflect and unsafe) +-// - and many more ++// - basic types ++// - named structs ++// - slices ++// - maps ++// - pointers to these types ++// - private fields of structs in external packages (using reflect and unsafe) ++// - and many more ++// + // Unsupported types: +-// - chan +-// - interface +-// - function +-// - unnamed structs, which are not comparable with the == operator ++// - chan ++// - interface ++// - function ++// - unnamed structs, which are not comparable with the == operator + // + // Example output can be found here: + // https://github.com/awalterschulze/goderive/tree/master/example/plugin/deepcopy +@@ -140,6 +142,12 @@ func (g *gen) genStatement(typ types.Type, this, that string) error { + p.P("%s = %s", that, this) + return nil + } ++ ++ if typ.String() == "*time.Time" { ++ p.P("*%s = *%s", that, this) ++ return nil ++ } ++ + switch ttyp := typ.Underlying().(type) { + case *types.Pointer: + reftyp := ttyp.Elem() +@@ -243,13 +251,6 @@ func nullable(typ types.Type) bool { + return false + } + +-func not(s string) string { +- if strings.HasPrefix(s, "(") && strings.HasSuffix(s, ")") { +- return "!" 
+ s +- } +- return "!(" + s + ")" +-} +- + func wrap(value string) string { + if strings.HasPrefix(value, "*") || + strings.HasPrefix(value, "&") || +@@ -261,7 +262,7 @@ func wrap(value string) string { + + func prepend(before, after string) string { + bs := strings.Split(before, ".") +- b := strings.Replace(bs[0], "*", "", -1) ++ b := strings.ReplaceAll(bs[0], "*", "") + return b + "_" + after + } diff --git a/build.assets/tooling/cmd/goderive/plugin/deepcopy/README.md b/build.assets/tooling/cmd/goderive/plugin/deepcopy/README.md new file mode 100644 index 0000000000000..2240233502150 --- /dev/null +++ b/build.assets/tooling/cmd/goderive/plugin/deepcopy/README.md @@ -0,0 +1,16 @@ +# Deepcopy Patch Workflow + +This repository vendors `deepcopy.go` from [goderive](https://github.com/awalterschulze/goderive) and applies a custom patch to support `*time.Time` deep copy. + +## Workflow + +1. **Fetch upstream `deepcopy.go`** +```bash +$ GODERIVE_VERSION=$(go list -m -f '{{.Version}}' github.com/awalterschulze/goderive) +$ curl -fsSL "https://raw.githubusercontent.com/awalterschulze/goderive/refs/tags/${GODERIVE_VERSION}/plugin/deepcopy/deepcopy.go" -o deepcopy.go +``` + +2. **Apply custom patch** +```bash +$ patch deepcopy.go < 0001-add-time-deepcopy-support.patch +``` diff --git a/build.assets/tooling/cmd/goderive/plugin/deepcopy/deepcopy.go b/build.assets/tooling/cmd/goderive/plugin/deepcopy/deepcopy.go new file mode 100644 index 0000000000000..979d8aab0548a --- /dev/null +++ b/build.assets/tooling/cmd/goderive/plugin/deepcopy/deepcopy.go @@ -0,0 +1,418 @@ +// Copyright 2017 Walter Schulze +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Package deepcopy contains the implementation of the deepcopy plugin, which generates the deriveDeepCopy function. +// +// The deriveDeepCopy function is a maintainable and fast way to implement fast copy functions. +// +// When goderive walks over your code it is looking for a function that: +// - was not implemented (or was previously derived) and +// - has a predefined prefix. +// +// In the following code the deriveDeepCopy function will be found, because +// it was not implemented and it has a prefix deriveDeepCopy. +// This prefix is configurable. +// +// package main +// +// import "sort" +// +// type MyStruct struct { +// Int64 int64 +// StringPtr *string +// } +// +// func (m *MyStruct) Clone() *MyStruct { +// if m == nil { +// return nil +// } +// n := &MyStruct{} +// deriveDeepCopy(n, m) +// return n +// } +// +// The initial type that is passed into deriveDeepCopy needs to have a reference type: +// - pointer +// - slice +// - map +// +// , otherwise we are not able to modify the input parameter and then what are you really copying, +// but as we go deeper we support most types. 
+// +// Supported types: +// - basic types +// - named structs +// - slices +// - maps +// - pointers to these types +// - private fields of structs in external packages (using reflect and unsafe) +// - and many more +// +// Unsupported types: +// - chan +// - interface +// - function +// - unnamed structs, which are not comparable with the == operator +// +// Example output can be found here: +// https://github.com/awalterschulze/goderive/tree/master/example/plugin/deepcopy +// +// This plugin has been tested thoroughly. +package deepcopy + +import ( + "fmt" + "go/types" + "strings" + + "github.com/awalterschulze/goderive/derive" +) + +// NewPlugin creates a new deepcopy plugin. +// This function returns the plugin name, default prefix and a constructor for the deepcopy code generator. +func NewPlugin() derive.Plugin { + return derive.NewPlugin("deepcopy", "deriveDeepCopy", New) +} + +// New is a constructor for the deepcopy code generator. +// This generator should be reconstructed for each package. 
+func New(typesMap derive.TypesMap, p derive.Printer, deps map[string]derive.Dependency) derive.Generator { + return &gen{ + TypesMap: typesMap, + printer: p, + bytesPkg: p.NewImport("bytes", "bytes"), + reflectPkg: p.NewImport("reflect", "reflect"), + unsafePkg: p.NewImport("unsafe", "unsafe"), + } +} + +type gen struct { + derive.TypesMap + printer derive.Printer + bytesPkg derive.Import + reflectPkg derive.Import + unsafePkg derive.Import +} + +func (g *gen) Add(name string, typs []types.Type) (string, error) { + if len(typs) != 2 { + return "", fmt.Errorf("%s does not have two arguments", name) + } + if !types.Identical(typs[0], typs[1]) { + return "", fmt.Errorf("%s has two arguments, but they are of different types %s != %s", + name, g.TypeString(typs[0]), g.TypeString(typs[1])) + } + return g.SetFuncName(name, typs[0]) +} + +func (g *gen) Generate(typs []types.Type) error { + return g.genFunc(typs[0]) +} + +func (g *gen) genFunc(typ types.Type) error { + p := g.printer + g.Generating(typ) + typeStr := g.TypeString(typ) + p.P("") + p.P("// %s recursively copies the contents of src into dst.", g.GetFuncName(typ)) + p.P("func %s(dst, src %s) {", g.GetFuncName(typ), typeStr) + p.In() + if err := g.genStatement(typ, "src", "dst"); err != nil { + return err + } + p.Out() + p.P("}") + return nil +} + +func (g *gen) genStatement(typ types.Type, this, that string) error { + p := g.printer + if canCopy(typ) { + p.P("%s = %s", that, this) + return nil + } + + if typ.String() == "*time.Time" { + p.P("*%s = *%s", that, this) + return nil + } + + switch ttyp := typ.Underlying().(type) { + case *types.Pointer: + reftyp := ttyp.Elem() + g.TypeString(reftyp) + thisref, thatref := "*"+this, "*"+that + named, isNamed := reftyp.(*types.Named) + strct, isStruct := reftyp.Underlying().(*types.Struct) + if !isStruct { + if err := g.genField(reftyp, thisref, thatref); err != nil { + return err + } + return nil + } else if isNamed { + external := g.TypesMap.IsExternal(named) + 
fields := derive.Fields(g.TypesMap, strct, external) + if len(fields.Fields) > 0 { + thisv := prepend(this, "v") + thatv := prepend(that, "v") + if fields.Reflect { + p.P(thisv+` := `+g.reflectPkg()+`.Indirect(`+g.reflectPkg()+`.ValueOf(%s))`, this) + p.P(thatv+` := `+g.reflectPkg()+`.Indirect(`+g.reflectPkg()+`.ValueOf(%s))`, that) + } + for _, field := range fields.Fields { + fieldType := field.Type + var thisField, thatField string + if field.Private() && external { + thisField, thatField = field.Name(thisv, g.unsafePkg), field.Name(thatv, g.unsafePkg) + } else { + thisField, thatField = field.Name(this, nil), field.Name(that, nil) + } + if err := g.genField(fieldType, thisField, thatField); err != nil { + return err + } + } + } + return nil + } + case *types.Slice: + elmType := ttyp.Elem() + if canCopy(elmType) { + p.P("copy(%s, %s)", that, this) + return nil + } + thisvalue := prepend(this, "value") + thisi := prepend(this, "i") + p.P("for %s, %s := range %s {", thisi, thisvalue, this) + p.In() + if err := g.genField(elmType, thisvalue, wrap(that)+"["+thisi+"]"); err != nil { + return err + } + p.Out() + p.P("}") + return nil + case *types.Array: + elmType := ttyp.Elem() + thisvalue := prepend(this, "value") + thisi := prepend(this, "i") + p.P("for %s, %s := range %s {", thisi, thisvalue, this) + p.In() + if err := g.genField(elmType, thisvalue, wrap(that)+"["+thisi+"]"); err != nil { + return err + } + p.Out() + p.P("}") + return nil + case *types.Map: + elmType := ttyp.Elem() + keyType := ttyp.Key() + thiskey, thisvalue := prepend(this, "key"), prepend(this, "value") + p.P("for %s, %s := range %s {", thiskey, thisvalue, this) + p.In() + thatkey := thiskey + if !canCopy(keyType) { + if err := g.genField(keyType, thatkey, thiskey); err != nil { + return err + } + thatkey = prepend(that, "key") + } + if nullable(elmType) { + p.P("if %s == nil {", thisvalue) + p.In() + p.P("%s = nil", wrap(that)+"["+thatkey+"]") + p.Out() + p.P("}") + } + if err := 
g.genField(elmType, thisvalue, wrap(that)+"["+thatkey+"]"); err != nil { + return err + } + p.Out() + p.P("}") + return nil + } + return fmt.Errorf("unsupported deepcopy type: %s", g.TypeString(typ)) +} + +func nullable(typ types.Type) bool { + switch typ.(type) { + case *types.Pointer, *types.Slice, *types.Map: + return true + } + return false +} + +func wrap(value string) string { + if strings.HasPrefix(value, "*") || + strings.HasPrefix(value, "&") || + strings.HasSuffix(value, "]") { + return "(" + value + ")" + } + return value +} + +func prepend(before, after string) string { + bs := strings.Split(before, ".") + b := strings.ReplaceAll(bs[0], "*", "") + return b + "_" + after +} + +func canCopy(tt types.Type) bool { + t := tt.Underlying() + switch typ := t.(type) { + case *types.Basic: + return typ.Kind() != types.UntypedNil + case *types.Struct: + for i := 0; i < typ.NumFields(); i++ { + f := typ.Field(i) + ft := f.Type() + if !canCopy(ft) { + return false + } + } + return true + case *types.Array: + return canCopy(typ.Elem()) + } + return false +} + +func hasDeepCopyMethod(typ *types.Named) bool { + for i := 0; i < typ.NumMethods(); i++ { + meth := typ.Method(i) + if meth.Name() != "DeepCopy" { + continue + } + sig, ok := meth.Type().(*types.Signature) + if !ok { + // impossible, but lets check anyway + continue + } + if sig.Params().Len() != 1 { + continue + } + res := sig.Results() + if res.Len() != 0 { + continue + } + return true + } + return false +} + +func (g *gen) genField(fieldType types.Type, thisField, thatField string) error { + p := g.printer + if canCopy(fieldType) { + p.P("%s = %s", thatField, thisField) + return nil + } + switch typ := fieldType.Underlying().(type) { + case *types.Pointer: + p.P("if %s == nil {", thisField) + p.In() + p.P("%s = nil", thatField) + p.Out() + p.P("} else {") + p.In() + ref := typ.Elem() + p.P("%s = new(%s)", thatField, g.TypeString(typ.Elem())) + if named, ok := ref.(*types.Named); ok && 
hasDeepCopyMethod(named) { + p.P("%s.DeepCopy(%s)", wrap(thisField), thatField) + } else if canCopy(typ.Elem()) { + p.P("*%s = *%s", thatField, thisField) + } else { + p.P("%s(%s, %s)", g.GetFuncName(typ), thatField, thisField) + } + p.Out() + p.P("}") + return nil + case *types.Array: + g.genStatement(fieldType, thisField, thatField) + return nil + case *types.Slice: + p.P("if %s == nil {", thisField) // nil + p.In() + p.P("%s = nil", thatField) + p.Out() + p.P("} else {") // nil + p.In() + p.P("if %s != nil {", thatField) // not nil + p.In() + p.P("if len(%s) > len(%s) {", thisField, thatField) // len + p.In() + p.P("if cap(%s) >= len(%s) {", thatField, thisField) // cap + p.In() + p.P("%s = (%s)[:len(%s)]", thatField, thatField, thisField) + p.Out() + p.P("} else {") // cap + p.In() + p.P("%s = make(%s, len(%s))", thatField, g.TypeString(typ), thisField) + p.Out() + p.P("}") + p.Out() + p.P("} else if len(%s) < len(%s) {", thisField, thatField) // len + p.In() + p.P("%s = (%s)[:len(%s)]", thatField, thatField, thisField) + p.Out() + p.P("}") // len + p.Out() + p.P("} else {") // not nil + p.In() + p.P("%s = make(%s, len(%s))", thatField, g.TypeString(typ), thisField) + p.Out() + p.P("}") // not nil + named, isNamed := fieldType.(*types.Named) + if isNamed && hasDeepCopyMethod(named) { + p.P("%s.DeepCopy(%s)", wrap(thisField), thatField) + } else if canCopy(typ.Elem()) { + p.P("copy(%s, %s)", thatField, thisField) + } else { + p.P("%s(%s, %s)", g.GetFuncName(typ), thatField, thisField) + } + p.Out() + p.P("}") // nil + return nil + case *types.Map: + p.P("if %s != nil {", thisField) + p.In() + p.P("%s = make(%s, len(%s))", thatField, g.TypeString(typ), thisField) + named, isNamed := fieldType.(*types.Named) + if isNamed && hasDeepCopyMethod(named) { + p.P("%s.DeepCopy(%s)", wrap(thisField), thatField) + } else { + p.P("%s(%s, %s)", g.GetFuncName(typ), thatField, thisField) + } + p.Out() + p.P("} else {") + p.In() + p.P("%s = nil", thatField) + p.Out() + p.P("}") 
+		return nil
+	case *types.Struct:
+		p.P("func() {")
+		p.In()
+		p.P("field := new(%s)", g.TypeString(fieldType))
+		named, isNamed := fieldType.(*types.Named)
+		if isNamed && hasDeepCopyMethod(named) {
+			p.P("%s.DeepCopy(field)", wrap(thisField))
+		} else {
+			p.P("%s(field, &%s)", g.GetFuncName(types.NewPointer(fieldType)), wrap(thisField))
+		}
+		p.P("%s = *field", thatField)
+		p.Out()
+		p.P("}()")
+		return nil
+	default: // *Chan, *Tuple, *Signature, *Interface, *types.Basic.Kind() == types.UntypedNil, *Struct
+		return fmt.Errorf("unsupported field type %s", g.TypeString(fieldType))
+	}
+}
diff --git a/build.assets/tooling/cmd/govulncheck-report/main.go b/build.assets/tooling/cmd/govulncheck-report/main.go
new file mode 100644
index 0000000000000..a03d6fe459ef3
--- /dev/null
+++ b/build.assets/tooling/cmd/govulncheck-report/main.go
@@ -0,0 +1,186 @@
+// Teleport
+// Copyright (C) 2025 Gravitational, Inc.
+//
+// This program is free software: you can redistribute it and/or modify
+// it under the terms of the GNU Affero General Public License as published by
+// the Free Software Foundation, either version 3 of the License, or
+// (at your option) any later version.
+//
+// This program is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU Affero General Public License for more details.
+//
+// You should have received a copy of the GNU Affero General Public License
+// along with this program. If not, see .
+
+// govulncheck-report wraps govulncheck, allows silencing vulnerabilities, and
+// prints a brief report at the end.
+//
+// govulncheck must be on PATH.
+// +// Usage: govulncheck-report -ignore=GO-2022-0635 -ignore=GO-2022-0646 -- [govulncheck flags here] +package main + +import ( + "bufio" + "context" + "errors" + "flag" + "fmt" + "io" + "os" + "os/exec" + "regexp" + "slices" + "strings" +) + +const programName = "govulncheck-report" + +// Eg: "Vulnerability #1: GO-2025-4192" +var vulnRE = regexp.MustCompile(`^\s*Vulnerability #\d+: (GO-\d+-\d+)$`) + +type govulnReport struct { + govulnArgs []string + ignoredVulns []string +} + +func newGovulnReport(thisArgs, govulnArgs []string) (*govulnReport, error) { + cmd := &govulnReport{ + govulnArgs: govulnArgs, + } + + fs := flag.NewFlagSet(programName, flag.ExitOnError) + fs.Var(stringsFlag{dst: &cmd.ignoredVulns}, "ignore", "Vulnerability IDs to ignore") + + if err := fs.Parse(thisArgs); err != nil { + // Unreachable, we have flag.ExitOnError set. + panic(err) + } + if fs.NArg() > 0 { + return nil, fmt.Errorf("arguments not supported, found trailing arguments: %v", fs.Args()) + } + + return cmd, nil +} + +func (c *govulnReport) Run(ctx context.Context) (exitCode int, _ error) { + // Make sure we can run properly. + for _, arg := range c.govulnArgs { + switch arg { + case "-json", "--json", "-format", "--format": + return 0, fmt.Errorf("%s doesn't support -json or non-text -format", programName) + default: + // OK, continue. + } + } + + pr, pw := io.Pipe() + stdout := os.Stdout + stderr := os.Stderr + + cmd := exec.CommandContext(ctx, "govulncheck", c.govulnArgs...) + cmd.Stdin = os.Stdin + cmd.Stdout = pw + cmd.Stderr = stderr + + // Don't read until the goroutine below exits. + var vulns []string + + // Parse and echo stdout. + doneC := make(chan struct{}) + go func() { + defer close(doneC) + scan := bufio.NewScanner(pr) + for scan.Scan() { + line := scan.Text() + stdout.WriteString(line) + stdout.WriteString("\n") + + if matches := vulnRE.FindStringSubmatch(line); len(matches) == 2 { + vulns = append(vulns, matches[1]) + } + } + // No need to check scan.Err(). 
Once the pipe is drained we're done. + _ = scan.Err() + }() + + err := cmd.Run() + pw.Close() + pr.Close() + <-doneC + + var ee *exec.ExitError + if errors.As(err, &ee) { + exitCode = ee.ExitCode() + } else { + return 0, err // Unexpected error from cmd.Run() + } + // Do not print report if we found no vulnerabilities. + if len(vulns) == 0 { + return exitCode, nil + } + + // Print report/redact exit code. + slices.Sort(vulns) + fmt.Fprintf(stderr, "\n%s:\n", programName) + allIgnored := true + for _, vuln := range vulns { + var suffix string + if ignored := slices.Contains(c.ignoredVulns, vuln); ignored { + suffix = " (ignored)" + } else { + allIgnored = false + } + fmt.Fprintf(stderr, " * %s%s\n", vuln, suffix) + } + if allIgnored { + return 0, nil + } + + return exitCode, nil +} + +type stringsFlag struct { + dst *[]string +} + +func (f stringsFlag) Set(val string) error { + *f.dst = append(*f.dst, val) + return nil +} + +func (f stringsFlag) String() string { + if f.dst == nil { + return "" + } + return strings.Join(*f.dst, ",") +} + +func main() { + thisArgs, govulnArgs := splitArgs(os.Args[1:]) // Remove program name. 
+
+	cmd, err := newGovulnReport(thisArgs, govulnArgs)
+	if err != nil {
+		panic(err)
+	}
+
+	ctx := context.Background()
+	exitCode, err := cmd.Run(ctx)
+	if err != nil {
+		panic(err)
+	}
+	os.Exit(exitCode)
+}
+
+func splitArgs(args []string) (thisArgs, govulnArgs []string) {
+	for i, arg := range args {
+		if arg == "--" {
+			govulnArgs = args[i+1:]
+			break
+		}
+		thisArgs = append(thisArgs, arg)
+	}
+	return thisArgs, govulnArgs
+}
diff --git a/build.assets/tooling/cmd/resource-ref-generator/README.md b/build.assets/tooling/cmd/resource-ref-generator/README.md
new file mode 100644
index 0000000000000..c44341af3e481
--- /dev/null
+++ b/build.assets/tooling/cmd/resource-ref-generator/README.md
@@ -0,0 +1,84 @@
+# Resource reference generator
+
+The resource reference generator is a Go program that produces a comprehensive
+reference guide for all dynamic Teleport resources and the fields they include.
+It uses the Teleport source as the basis for the guide.
+
+## Usage
+
+From the root of your `gravitational/teleport` clone:
+
+```
+$ make gen-resource-docs
+```
+
+## How it works
+
+The resource reference generator works by:
+
+1. Identifying Go types that correspond to the fields of each dynamic resource
+   struct identified in a configuration file.
+1. Retrieving reference information about dynamic resources and their fields
+   using a combination of Go comments and type information.
+
+### Editing source files
+
+The resource reference indicates which Go source files the reference generator
+used for each entry. The generator is only aware of Go source files, not
+protobuf message definitions.
+
+If a source file is based on a protobuf message definition, edit the message
+definition first, then run:
+
+```
+$ make grpc
+```
+
+After that, you can run the reference generator.
+
+## Configuration
+
+The generator uses a YAML configuration file with the following fields.
+
+### Main config
+
+- `source` (string): the path to the root of a Go project directory.
+ +- `destination` (string): the directory path in which to place reference pages. + +- `resources` (array of resource configuration objects): Teleport dynamic + resources to represent in the reference docs. + +### Resource configuration + +In the `resources` field of the configuration file, each resource configuration +object has the following structure: + +- `type`: The name of the struct type declaration that represents the resource, + e.g., `RoleV6`. +- `package`: The name of the Go package that includes the type declaration, + e.g., `types`. +- `yaml_kind`: The value of `kind` to include in the resource manifest, e.g., + "role". +- `yaml_version`: The value of `version` to include in the resource manifest, + e.g., "v6". + +### Example + +```yaml +source: "../../../../api" +destination: "../../../../docs/pages/reference/tctl-resources" +resources: + - type: RoleV6 + package: types + yaml_kind: "role" + yaml_version: "v6" + - type: OIDCConnectorV3 + package: types + yaml_kind: "oidc" + yaml_version: "v3" + - type: SAMLConnectorV2 + package: types + yaml_kind: "saml" + yaml_version: "v2" +``` diff --git a/build.assets/tooling/cmd/resource-ref-generator/config.yaml b/build.assets/tooling/cmd/resource-ref-generator/config.yaml new file mode 100644 index 0000000000000..86795daffef87 --- /dev/null +++ b/build.assets/tooling/cmd/resource-ref-generator/config.yaml @@ -0,0 +1,22 @@ +source: "../../../../api" +destination: "../../../../docs/pages/reference/infrastructure-as-code/teleport-resources" + +# Please alphabetize the resource list by type name. 
+resources: + - type: DiscoveryConfig + package: discoveryconfig + yaml_kind: discovery_config + yaml_version: v1 + - type: InstallerV1 + package: types + yaml_kind: installer + yaml_version: v1 + - type: OIDCConnectorV3 + package: types + yaml_kind: oidc + yaml_version: v3 + - type: SAMLConnectorV2 + package: types + yaml_kind: saml + yaml_version: v2 + diff --git a/build.assets/tooling/cmd/resource-ref-generator/main.go b/build.assets/tooling/cmd/resource-ref-generator/main.go new file mode 100644 index 0000000000000..82b201819ed0c --- /dev/null +++ b/build.assets/tooling/cmd/resource-ref-generator/main.go @@ -0,0 +1,64 @@ +// Teleport +// Copyright (C) 2025 Gravitational, Inc. +// +// This program is free software: you can redistribute it and/or modify +// it under the terms of the GNU Affero General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// This program is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Affero General Public License for more details. +// +// You should have received a copy of the GNU Affero General Public License +// along with this program. If not, see . 
+ +package main + +import ( + "flag" + "fmt" + "os" + "path/filepath" + "text/template" + + "gopkg.in/yaml.v3" + + "github.com/gravitational/teleport/build.assets/tooling/cmd/resource-ref-generator/reference" +) + +const tmplBase = "reference.tmpl" + +var tmplPath = filepath.Join("reference", tmplBase) + +const configHelp string = `The path to a YAML configuration file (see the README)` + +func main() { + conf := flag.String("config", "conf.yaml", configHelp) + flag.Parse() + + conffile, err := os.Open(*conf) + if err != nil { + fmt.Fprintf(os.Stderr, "Could not open the configuration file %v: %v\n", *conf, err) + os.Exit(1) + } + defer conffile.Close() + + var genconf reference.GeneratorConfig + if err := yaml.NewDecoder(conffile).Decode(&genconf); err != nil { + fmt.Fprintf(os.Stderr, "Invalid configuration file: %v\n", err) + os.Exit(1) + } + tmpl, err := template.New(tmplBase).ParseFiles(tmplPath) + if err != nil { + fmt.Fprintf(os.Stderr, "Cannot open resource reference template at %v: %v\n", tmplPath, err) + os.Exit(1) + } + + err = reference.Generate(genconf, tmpl) + if err != nil { + fmt.Fprintf(os.Stderr, "Could not generate the resource reference: %v\n", err) + os.Exit(1) + } +} diff --git a/build.assets/tooling/cmd/resource-ref-generator/reference/reference.go b/build.assets/tooling/cmd/resource-ref-generator/reference/reference.go new file mode 100644 index 0000000000000..dbe4c757f6802 --- /dev/null +++ b/build.assets/tooling/cmd/resource-ref-generator/reference/reference.go @@ -0,0 +1,153 @@ +// Teleport +// Copyright (C) 2025 Gravitational, Inc. +// +// This program is free software: you can redistribute it and/or modify +// it under the terms of the GNU Affero General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. 
+// +// This program is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Affero General Public License for more details. +// +// You should have received a copy of the GNU Affero General Public License +// along with this program. If not, see . + +package reference + +import ( + "errors" + "fmt" + "os" + "path/filepath" + "strings" + "text/template" + + "github.com/gravitational/teleport/build.assets/tooling/cmd/resource-ref-generator/resource" +) + +// pageContent represents a reference page for a single resource and its related +// fields. Fields must be exported so we can use them in templates. +type pageContent struct { + Resource resourceSection + Fields map[resource.PackageInfo]resource.ReferenceEntry +} + +// resourceSection represents a top-level section of the resource reference +// dedicated to a dynamic resource. +type resourceSection struct { + Version string + Kind string + resource.ReferenceEntry +} + +// TypeInfo represents the name and package name of an exported Go type. It +// makes no guarantees about whether the type was actually declared within the +// package. +type TypeInfo struct { + // Go package path (not a file path) + Package string `yaml:"package"` + // Name of the type, e.g., Metadata + Name string `yaml:"name"` +} + +// ResourceConfig describes a resource type to include in the reference. +type ResourceConfig struct { + // The name of the struct type as declared in the Go source, e.g., + // RoleV6. + TypeName string `yaml:"type"` + // The final path segment in the name of the Go package containing this + // type declaration, e.g., "api". + PackageName string `yaml:"package"` + // The value of the "kind" field within a YAML manifest for this + // resource, e.g., "role". 
+ KindValue string `yaml:"yaml_kind"` + // The value of the "version" field within a YAML manifest for this + // resource, e.g., "v6". + VersionValue string `yaml:"yaml_version"` +} + +// GeneratorConfig is the user-facing configuration for the resource reference +// generator. +type GeneratorConfig struct { + Resources []ResourceConfig `yaml:"resources"` + // Path to the root of the Go project directory. + SourcePath string `yaml:"source"` + // Directory where the generator writes reference pages. + DestinationDirectory string `yaml:"destination"` +} + +type GenerationError struct { + messages []error +} + +func (g GenerationError) Error() string { + // Begin with a newline to format the first list item below the outer + // error. + final := "\n" + for _, e := range g.messages { + final += fmt.Sprintf("- %v\n", e) + } + return final +} + +// Generate uses the provided user-facing configuration to write the resource +// reference to fs. +func Generate(conf GeneratorConfig, tmpl *template.Template) error { + sourceData, err := resource.NewSourceData(conf.SourcePath) + if err != nil { + return fmt.Errorf("loading Go source files: %w", err) + } + + var errs GenerationError + for _, r := range conf.Resources { + k := resource.PackageInfo{ + DeclName: r.TypeName, + PackageName: r.PackageName, + } + + decl, ok := sourceData.TypeDecls[k] + if !ok { + errs.messages = append(errs.messages, fmt.Errorf("creating a reference entry for declaration %v.%v: cannot find a declaration of this resource type", k.PackageName, k.DeclName)) + continue + } + + pc := pageContent{} + + // decl is a dynamic resource type, so get data for the type and + // its dependencies. 
+ entries, err := resource.ReferenceDataFromDeclaration(decl, sourceData.TypeDecls) + if errors.As(err, &resource.NotAGenDeclError{}) { + continue + } + if err != nil { + errs.messages = append(errs.messages, fmt.Errorf("creating a reference entry for declaration %v.%v in file %v: %w", k.PackageName, k.DeclName, decl.FilePath, err)) + } + + pc.Resource.ReferenceEntry = entries[k] + delete(entries, k) + pc.Fields = entries + + pc.Resource.Kind = r.KindValue + pc.Resource.Version = r.VersionValue + + filename := strings.ReplaceAll(strings.ToLower(pc.Resource.SectionName), " ", "-") + docpath := filepath.Join(conf.DestinationDirectory, filename+".mdx") + doc, err := os.Create(docpath) + if err != nil { + errs.messages = append(errs.messages, fmt.Errorf("cannot create page at %v: %w", docpath, err)) + continue + } + defer doc.Close() + + if err := tmpl.Execute(doc, pc); err != nil { + errs.messages = append(errs.messages, fmt.Errorf("cannot populate the resource reference template: %w", err)) + } + } + if len(errs.messages) > 0 { + return errs + } + + return nil +} diff --git a/build.assets/tooling/cmd/resource-ref-generator/reference/reference.tmpl b/build.assets/tooling/cmd/resource-ref-generator/reference/reference.tmpl new file mode 100644 index 0000000000000..36e3eff4fb73c --- /dev/null +++ b/build.assets/tooling/cmd/resource-ref-generator/reference/reference.tmpl @@ -0,0 +1,49 @@ +--- +title: {{ .Resource.SectionName }} Reference +description: Provides a reference of fields within the {{ .Resource.SectionName }} resource, which you can manage with tctl. +sidebar_label: {{ .Resource.SectionName }} +--- +{/* vale 3rd-party-products.former-names = NO */} +{/* vale messaging.capitalization = NO */} +{/* Automatically generated from: {{ .Resource.SourcePath}} */} +{/* DO NOT EDIT */} + +**Kind**: `{{ .Resource.Kind }}`
+**Version**: `{{ .Resource.Version }}` + +{{ .Resource.Description }} + +Example: + +```yaml +{{ .Resource.YAMLExample -}} +``` + +{{- if gt (len .Resource.Fields) 0 }} +|Field Name|Description|Type| +|---|---|---| +{{ range .Resource.Fields -}} +|{{.Name}}|{{.Description}}|{{.Type}}| +{{ end }} +{{- end }} + +{{- range .Fields }} +## {{ .SectionName }} + +{{ .Description }} + +{{ if ne .YAMLExample "" }} +Example: + +```yaml +{{ .YAMLExample -}} +``` +{{ end }} +{{- if gt (len .Fields) 0 }} +|Field Name|Description|Type| +|---|---|---| +{{ range .Fields -}} +|{{.Name}}|{{.Description}}|{{.Type}}| +{{ end }} +{{- end }} +{{- end }} diff --git a/build.assets/tooling/cmd/resource-ref-generator/resource/resource.go b/build.assets/tooling/cmd/resource-ref-generator/resource/resource.go new file mode 100644 index 0000000000000..491575c455f84 --- /dev/null +++ b/build.assets/tooling/cmd/resource-ref-generator/resource/resource.go @@ -0,0 +1,913 @@ +// Teleport +// Copyright (C) 2025 Gravitational, Inc. +// +// This program is free software: you can redistribute it and/or modify +// it under the terms of the GNU Affero General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// This program is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Affero General Public License for more details. +// +// You should have received a copy of the GNU Affero General Public License +// along with this program. If not, see . + +package resource + +import ( + "bytes" + "errors" + "fmt" + "go/ast" + "go/parser" + "go/token" + "io/fs" + "os" + "path/filepath" + "regexp" + "slices" + "strings" +) + +// PackageInfo is used to look up a Go declaration in a map of declaration names +// to resource data. 
+type PackageInfo struct { + DeclName string + PackageName string +} + +// ReferenceEntry represents a section in the resource reference docs. +type ReferenceEntry struct { + SectionName string + Description string + SourcePath string + Fields []Field + YAMLExample string +} + +// DeclarationInfo includes data about a declaration so the generator can +// convert it into a ReferenceEntry. +type DeclarationInfo struct { + FilePath string + Decl ast.Decl + PackageName string + // Maps the file-scoped name of each import (if given) to the + // corresponding full package path. + NamedImports map[string]string +} + +// Field represents a row in a table that provides information about a field in +// the resource reference. +type Field struct { + Name string + Description string + Type string +} + +// sortFieldsByName sorts a and b ascending by Name. +func sortFieldsByName(a, b Field) int { + return strings.Compare(a.Name, b.Name) +} + +type SourceData struct { + // TypeDecls maps package and declaration names to data that the generator + // uses to format documentation for dynamic resource fields. + TypeDecls map[PackageInfo]DeclarationInfo +} + +func NewSourceData(rootPath string) (SourceData, error) { + // All declarations within the source tree. We use this to extract + // information about dynamic resource fields, which we can look up by + // package and declaration name. + typeDecls := make(map[PackageInfo]DeclarationInfo) + + // Load each file in the source directory individually. Not using + // packages.Load here since the resulting []*Package does not expose + // individual file names, which we need so contributors who want to edit + // the resulting docs page know which files to modify. 
+ err := filepath.Walk(rootPath, func(currentPath string, info fs.FileInfo, err error) error { + if err != nil { + return fmt.Errorf("loading Go source: %w", err) + } + + if info.IsDir() { + return nil + } + + if filepath.Ext(info.Name()) != ".go" { + return nil + } + + // Open the file so we can pass it to ParseFile. Otherwise, + // ParseFile always reads from the OS FS, not from fs. + f, err := os.Open(currentPath) + if err != nil { + return err + } + defer f.Close() + fset := token.NewFileSet() + file, err := parser.ParseFile(fset, currentPath, f, parser.ParseComments) + if err != nil { + return err + } + + // Use a relative path from the source directory for cleaner + // paths + relDeclPath, err := filepath.Rel(rootPath, currentPath) + if err != nil { + return err + } + + // Collect information from each file: + // - Imported packages and their aliases + // - Possible function declarations (for identifying relevant + // methods later) + // - Type declarations + pn := NamedImports(file) + for _, decl := range file.Decls { + l, ok := decl.(*ast.GenDecl) + if !ok { + continue + } + if len(l.Specs) != 1 { + continue + } + spec, ok := l.Specs[0].(*ast.TypeSpec) + if !ok { + continue + } + + typeDecls[PackageInfo{ + DeclName: spec.Name.Name, + PackageName: file.Name.Name, + }] = DeclarationInfo{ + Decl: l, + FilePath: relDeclPath, + PackageName: file.Name.Name, + NamedImports: pn, + } + } + return nil + }) + if err != nil { + return SourceData{}, fmt.Errorf("loading Go source files: %w", err) + } + return SourceData{ + TypeDecls: typeDecls, + }, nil +} + +// rawField contains simplified information about a struct field type. This +// prevents passing around AST nodes and makes testing easier. +type rawField struct { + // Package that declares the field type + packageName string + // A declaration's GoDoc, including newline characters but not comment + // characters. + doc string + // The type of the field. + kind yamlKindNode + // Original name of the field. 
+	name string
+	// Name as it appears in YAML, based on the "json" struct tag and
+	// marshaling rules in the encoding/json package.
+	jsonName string
+	// The entire struct tag expression for the field.
+	tags string
+}
+
+// rawType contains simplified information about a type, which may or may not be
+// a struct. This prevents passing around AST nodes and makes testing easier.
+type rawType struct {
+	// A declaration's GoDoc, including newline characters but not comment
+	// characters.
+	doc string
+	// The name of the type declaration.
+	name string
+	// Struct fields within the type. Empty if not a struct.
+	fields []rawField
+}
+
+// yamlKindNode represents a node in a potentially recursive YAML type, such as
+// an integer, a map of integers to strings, a sequence of maps of strings to
+// strings, etc. Used for printing example YAML documents and tables of fields.
+// This is not intended to be a comprehensive YAML AST.
+type yamlKindNode interface {
+	// Generate a string representation to include in a table of fields.
+	formatForTable() string
+	// Generate an example YAML value for the type with the provided number
+	// of indentations.
+	formatForExampleYAML(indents int) string
+	// Get the custom children of this yamlKindNode. Must call
+	// customFieldData on its own children before returning.
+	customFieldData() []PackageInfo
+}
+
+// nonYAMLKind represents a field type that we cannot convert to YAML. Consumers
+// should return an error if there is no way to avoid creating a reference entry
+// for this kind.
+type nonYAMLKind struct{}
+
+func (n nonYAMLKind) formatForTable() string {
+	return ""
+}
+
+func (n nonYAMLKind) formatForExampleYAML(indents int) string {
+	return "# See description"
+}
+
+func (n nonYAMLKind) customFieldData() []PackageInfo {
+	return []PackageInfo{}
+}
+
+// yamlSequence is a list of elements.
+type yamlSequence struct { + elementKind yamlKindNode +} + +func (y yamlSequence) formatForTable() string { + return `[]` + y.elementKind.formatForTable() +} + +func (y yamlSequence) formatForExampleYAML(indents int) string { + var leading string + indents++ + for i := 0; i < indents; i++ { + leading += " " + } + el := y.elementKind.formatForExampleYAML(indents) + // Trim leading indentation since each element is already indented. + el = strings.TrimLeft(el, " ") + // Always start a sequence on a new line + return fmt.Sprintf(` +%v- %v +%v- %v +%v- %v`, + leading, el, + leading, el, + leading, el, + ) +} + +func (y yamlSequence) customFieldData() []PackageInfo { + return y.elementKind.customFieldData() +} + +// yamlMapping is a mapping of keys to values. +type yamlMapping struct { + keyKind yamlKindNode + valueKind yamlKindNode +} + +func (y yamlMapping) formatForExampleYAML(indents int) string { + var leading string + // Add an extra indent for mappings + indents = indents + 1 + for i := 0; i < indents; i++ { + leading += " " + } + + val := y.valueKind.formatForExampleYAML(indents) + // Remove leading indentation on the first line of the value since the + // key/value pair is already indented. This does not affect subsequent + // lines of the value. + val = strings.TrimLeft(val, " ") + + kv := fmt.Sprintf("%v%v: %v", leading, y.keyKind.formatForExampleYAML(0), val) + return fmt.Sprintf("\n%v\n%v\n%v", kv, kv, kv) +} + +func (y yamlMapping) formatForTable() string { + return fmt.Sprintf("map[%v]%v", y.keyKind.formatForTable(), y.valueKind.formatForTable()) +} + +func (y yamlMapping) customFieldData() []PackageInfo { + k := y.keyKind.customFieldData() + v := y.valueKind.customFieldData() + return append(k, v...) 
+} + +type yamlString struct{} + +func (y yamlString) formatForTable() string { + return "string" +} + +func (y yamlString) formatForExampleYAML(indents int) string { + return `"string"` +} + +func (y yamlString) customFieldData() []PackageInfo { + return []PackageInfo{} +} + +type yamlBase64 struct{} + +func (y yamlBase64) formatForTable() string { + return "base64-encoded string" +} + +func (y yamlBase64) formatForExampleYAML(indents int) string { + return "BASE64_STRING" +} + +func (y yamlBase64) customFieldData() []PackageInfo { + return []PackageInfo{} +} + +type yamlNumber struct{} + +func (y yamlNumber) formatForTable() string { + return "number" +} + +func (y yamlNumber) formatForExampleYAML(indents int) string { + return "1" +} + +func (y yamlNumber) customFieldData() []PackageInfo { + return []PackageInfo{} +} + +type yamlBool struct{} + +func (y yamlBool) formatForTable() string { + return "Boolean" +} + +func (y yamlBool) formatForExampleYAML(indents int) string { + return "true" +} + +func (y yamlBool) customFieldData() []PackageInfo { + return []PackageInfo{} +} + +// A type declared by the program, i.e., not one of Go's predeclared types. +type yamlCustomType struct { + name string + // Used to look up more information about the declaration of the custom + // type so we can populate additional reference entries. 
+ declarationInfo PackageInfo +} + +func (y yamlCustomType) customFieldData() []PackageInfo { + return []PackageInfo{ + y.declarationInfo, + } +} + +func (y yamlCustomType) formatForExampleYAML(indents int) string { + var leading string + for i := 0; i < indents; i++ { + leading += " " + } + + return leading + "# [...]" +} + +func (y yamlCustomType) formatForTable() string { + return fmt.Sprintf( + "[%v](#%v)", + y.name, + strings.ReplaceAll(strings.ToLower(y.name), " ", "-"), + ) +} + +type NotAGenDeclError struct{} + +func (e NotAGenDeclError) Error() string { + return "the declaration is not a GenDecl" +} + +// typeForDecl returns a representation of the type spec of decl to use for +// further processing. Returns an error if there is either no type spec or more +// than one. +func typeForDecl(decl DeclarationInfo, allDecls map[PackageInfo]DeclarationInfo) (rawType, error) { + gendecl, ok := decl.Decl.(*ast.GenDecl) + if !ok { + return rawType{}, NotAGenDeclError{} + } + + if len(gendecl.Specs) == 0 { + return rawType{}, errors.New("declaration has no specs") + } + + if len(gendecl.Specs) > 1 { + return rawType{}, errors.New("declaration contains more than one type spec") + } + + if gendecl.Specs[0] == nil { + return rawType{}, errors.New("no spec found") + } + + t, ok := gendecl.Specs[0].(*ast.TypeSpec) + if !ok { + return rawType{}, errors.New("no type spec found") + } + + str, ok := t.Type.(*ast.StructType) + // The declaration is not a struct, but we may still want to include it + // in the reference. Return a rawType with no fields. + if !ok { + return rawType{ + name: t.Name.Name, + doc: gendecl.Doc.Text(), + fields: []rawField{}, + }, nil + } + + // We have determined that decl is a struct type, so collect its fields. 
+	var rawFields []rawField
+	for _, field := range str.Fields.List {
+		f, err := makeRawField(field, decl.PackageName, allDecls, decl.NamedImports)
+		if err != nil {
+			return rawType{}, err
+		}
+
+		// The struct field name begins with a lowercase letter, so the
+		// field is not exported. Ignore it.
+		if f.name != "" && f.name[0] >= 'a' && f.name[0] <= 'z' {
+			continue
+		}
+
+		jsonName := getJSONTag(f.tags)
+		// This field is ignored, so skip it.
+		// See: https://pkg.go.dev/encoding/json#Marshal
+		if jsonName == "-" {
+			continue
+		}
+		// Use the exported field declaration name as the field name
+		// per JSON marshaling rules.
+		if jsonName == "" {
+			f.jsonName = f.name
+		}
+
+		rawFields = append(rawFields, f)
+	}
+
+	result := rawType{
+		name: t.Name.Name,
+		// Preserve newlines for downstream processing.
+		doc:    gendecl.Doc.Text(),
+		fields: rawFields,
+	}
+
+	return result, nil
+}
+
+// makeYAMLExample creates an example YAML document illustrating the fields
+// within a declaration. This appears at the end of a section within the
+// reference.
+func makeYAMLExample(fields []rawField) (string, error) {
+	var buf bytes.Buffer
+
+	for _, field := range fields {
+		example := field.kind.formatForExampleYAML(0) + "\n"
+		buf.WriteString(getJSONTag(field.tags) + ": ")
+		buf.WriteString(example)
+	}
+
+	return buf.String(), nil
+}
+
+// Key-value pair for the "json" tag within a struct tag. See:
+// https://pkg.go.dev/reflect#StructTag
+var jsonTagKeyValue = regexp.MustCompile(`json:"([^"]+)"`)
+
+// getJSONTag returns the "json" tag value from the provided struct tag
+// expression.
+func getJSONTag(tags string) string {
+	kv := jsonTagKeyValue.FindStringSubmatch(tags)
+
+	// No "json" tag, or a "json" tag with no value.
+ if len(kv) != 2 { + return "" + } + + return strings.TrimSuffix(kv[1], ",omitempty") +} + +var camelCaseExceptions = []string{ + "IdP", +} +var abbreviationWordBoundary *regexp.Regexp = regexp.MustCompile(`([A-Z]{2,})([A-Z][a-z0-9])`) +var camelCaseWordBoundary *regexp.Regexp = regexp.MustCompile( + fmt.Sprintf(`(%v|[a-z]+)([A-Z])`, strings.Join(camelCaseExceptions, "|")), +) + +// makeSectionName edits the original name of a declaration to make it more +// suitable as a section within the resource reference. +func makeSectionName(original string) string { + s := abbreviationWordBoundary.ReplaceAllString(original, "$1 $2") + s = camelCaseWordBoundary.ReplaceAllString(s, "$1 $2") + return s +} + +// isByteSlice returns whether t is a []byte. +func isByteSlice(t *ast.ArrayType) bool { + i, ok := t.Elt.(*ast.Ident) + if !ok { + return false + } + return i.Name == "byte" +} + +// getYAMLTypeForExpr takes an AST type expression and recursively +// traverses it to populate a yamlKindNode. Each iteration converts a +// single *ast.Expr into a single yamlKindNode, returning the new node. +func getYAMLTypeForExpr(exp ast.Expr, pkg string, allDecls map[PackageInfo]DeclarationInfo, namedImports map[string]string) (yamlKindNode, error) { + switch t := exp.(type) { + case *ast.StarExpr: + // Ignore the star, since YAML fields are unmarshaled as the + // values they point to. 
+		return getYAMLTypeForExpr(t.X, pkg, allDecls, namedImports)
+	case *ast.Ident:
+		switch t.Name {
+		case "string":
+			return yamlString{}, nil
+		case "uint", "uint8", "uint16", "uint32", "uint64", "int", "int8", "int16", "int32", "int64", "float32", "float64":
+			return yamlNumber{}, nil
+		case "bool":
+			return yamlBool{}, nil
+		default:
+			info := PackageInfo{
+				DeclName:    t.Name,
+				PackageName: pkg,
+			}
+			if _, ok := allDecls[info]; !ok {
+				return nonYAMLKind{}, nil
+			}
+
+			return yamlCustomType{
+				name:            makeSectionName(t.Name),
+				declarationInfo: info,
+			}, nil
+		}
+	case *ast.MapType:
+		k, err := getYAMLTypeForExpr(t.Key, pkg, allDecls, namedImports)
+		if err != nil {
+			return nil, err
+		}
+
+		v, err := getYAMLTypeForExpr(t.Value, pkg, allDecls, namedImports)
+		if err != nil {
+			return nil, err
+		}
+		return yamlMapping{
+			keyKind:   k,
+			valueKind: v,
+		}, nil
+	case *ast.ArrayType:
+		// Byte slices marshal to base64-encoded strings.
+		if isByteSlice(t) {
+			return yamlBase64{}, nil
+		}
+		e, err := getYAMLTypeForExpr(t.Elt, pkg, allDecls, namedImports)
+		if err != nil {
+			return nil, err
+		}
+		return yamlSequence{
+			elementKind: e,
+		}, nil
+	case *ast.SelectorExpr:
+		var pkg string
+		x, ok := t.X.(*ast.Ident)
+		if ok {
+			pkg = x.Name
+			if i, ok := namedImports[x.Name]; ok {
+				pkg = i
+			}
+		}
+		info := PackageInfo{
+			DeclName:    t.Sel.Name,
+			PackageName: pkg,
+		}
+		if _, ok := allDecls[info]; !ok {
+			return nonYAMLKind{}, nil
+		}
+
+		return yamlCustomType{
+			name:            makeSectionName(t.Sel.Name),
+			declarationInfo: info,
+		}, nil
+	default:
+		return nonYAMLKind{}, nil
+	}
+}
+
+// getYAMLType returns YAML type information for a struct field so we can print
+// information about it in the resource reference.
+func getYAMLType(field *ast.Field, pkg string, allDecls map[PackageInfo]DeclarationInfo, namedImports map[string]string) (yamlKindNode, error) {
+	return getYAMLTypeForExpr(field.Type, pkg, allDecls, namedImports)
+}
+
+// makeRawField translates an *ast.Field into a rawField for downstream
+// processing. packageName is the name of the package that declares the struct
+// containing this field.
+func makeRawField(field *ast.Field, packageName string, allDecls map[PackageInfo]DeclarationInfo, namedImports map[string]string) (rawField, error) {
+	doc := field.Doc.Text()
+	if len(field.Names) > 1 {
+		return rawField{}, fmt.Errorf("field %+v in %v contains more than one name", field, packageName)
+	}
+
+	var name string
+	// If the field has no name, it is likely an embedded struct, so leave
+	// the name empty.
+	if len(field.Names) == 1 {
+		name = field.Names[0].Name
+	}
+
+	tn, err := getYAMLType(field, packageName, allDecls, namedImports)
+	if err != nil {
+		return rawField{}, err
+	}
+
+	// Indicate which package declared this field depending on whether the
+	// field's type name includes the name of another package.
+	pkg := packageName
+	s, ok := field.Type.(*ast.SelectorExpr)
+	if ok {
+		i, ok := s.X.(*ast.Ident)
+		if ok {
+			pkg = i.Name
+		}
+	}
+
+	var tag string
+	if field.Tag != nil {
+		tag = field.Tag.Value
+	}
+
+	return rawField{
+		packageName: pkg,
+		doc:         doc,
+		kind:        tn,
+		name:        name,
+		jsonName:    getJSONTag(tag),
+		tags:        tag,
+	}, nil
+}
+
+// makeFieldTableInfo assembles a slice of human-readable information about
+// fields within a Go struct to include within the resource reference.
+func makeFieldTableInfo(fields []rawField) ([]Field, error) {
+	var result []Field
+	for _, field := range fields {
+		desc := field.doc
+		typ := field.kind.formatForTable()
+		// Escape pipes so they do not affect table rendering.
+		desc = strings.ReplaceAll(desc, "|", `\|`)
+		// Remove surrounding spaces and inner line breaks.
+ desc = strings.Trim(strings.ReplaceAll(desc, "\n", " "), " ") + + // Escape angle brackets so the docs engine handles them as + // strings instead of HTML tags. + desc = strings.ReplaceAll(desc, "<", `\<`) + desc = strings.ReplaceAll(desc, ">", `\>`) + + result = append(result, Field{ + Description: printableDescription(desc, field.name), + Name: field.jsonName, + Type: typ, + }) + } + return result, nil +} + +// curlyBracePairPattern matches a pair of curly braces, with a capture group +// for the content enclosed by the braces. +var curlyBracePairPattern = regexp.MustCompile(`\{([^}]*)\}`) + +// printableDescription modifies a field or type description to make it suitable +// for reading on a docs page. +// +// ident is the name of a Go identifier. printableDescription removes the name +// from the description so we can include it within the resource reference, +// fixing capitalization issues resulting from removing the name. Since the +// identifier's name within the source won't mean anything to a docs reader, +// removing it makes the description easier to read. +// +// Since curly brace pairs break docs site builds, printableDescription also +// encloses any curly brace pairs with backticks. +func printableDescription(description, ident string) string { + result := curlyBracePairPattern.ReplaceAllString(description, "`{$1}`") + // Replace any double-backticks resulting from the previous operation. + // This is a hack to avoid the need for more complex logic. Double + // backticks won't render as expected in the docs anyway, so it's fine + // to replace ones that don't result from the earlier replacement. 
+ result = strings.ReplaceAll(result, "``", "`") + + // Not possible to trim the name from description + if len(ident) > len(result) { + return result + } + + switch { + case strings.HasPrefix(result, ident+" are "): + result = strings.TrimPrefix(result, ident+" are ") + case strings.HasPrefix(result, ident+" is "): + result = strings.TrimPrefix(result, ident+" is ") + case strings.HasPrefix(result, ident+" "): + result = strings.TrimPrefix(result, ident+" ") + case strings.HasPrefix(result, ident): + result = strings.TrimPrefix(result, ident) + } + + // Make sure the result begins with a capital letter + if len(result) > 0 { + result = strings.ToUpper(result[:1]) + result[1:] + } + + return result +} + +// allFieldsForDecl finds embedded structs within fld and recursively +// processes the fields of those structs as though the fields belonged to the +// containing struct. Uses decl and allDecls to look up fields within the base +// structs. Returns a modified slice of fields that include all non-embedded +// fields within fld. +func allFieldsForDecl(decl DeclarationInfo, fld []rawField, allDecls map[PackageInfo]DeclarationInfo) ([]rawField, error) { + fieldsToProcess := []rawField{} + for _, l := range fld { + // Not an embedded struct field, so append it to the final + // result. + if l.name != "" { + fieldsToProcess = append(fieldsToProcess, l) + continue + } + c, ok := l.kind.(yamlCustomType) + // Not an embedded struct since it's not a declared type. + if !ok { + continue + } + + // Find the package name to use to look up the declaration from + // its identifier. + var pkg string + i, ok := decl.NamedImports[l.packageName] + switch { + // The file that made the declaration provided a name for the + // package associated with the identifier, so find the full + // package path and use that to look up the declaration. 
+		case ok:
+			pkg = i
+		case l.packageName != "":
+			pkg = l.packageName
+		// If the field's type has no package name, assume the field's
+		// package name is the same as the one for decl.
+		default:
+			pkg = decl.PackageName
+		}
+		p := PackageInfo{
+			DeclName:    c.declarationInfo.DeclName,
+			PackageName: pkg,
+		}
+
+		// We expect to find a declaration of the embedded struct.
+		d, ok := allDecls[p]
+		if !ok {
+			return nil, fmt.Errorf(
+				"%v: field %v.%v is not declared anywhere",
+				decl.FilePath,
+				l.packageName,
+				c.name,
+			)
+		}
+		e, err := typeForDecl(d, allDecls)
+		if err != nil && !errors.As(err, &NotAGenDeclError{}) {
+			return nil, err
+		}
+
+		// The embedded struct field may have its own embedded struct
+		// fields.
+		nf, err := allFieldsForDecl(decl, e.fields, allDecls)
+		if err != nil {
+			return nil, err
+		}
+
+		fieldsToProcess = append(fieldsToProcess, nf...)
+	}
+	return fieldsToProcess, nil
+}
+
+// NamedImports creates a mapping from the provided name of each package import
+// to the final segment of the imported package's path.
+func NamedImports(file *ast.File) map[string]string {
+	m := make(map[string]string)
+	for _, i := range file.Imports {
+		if i.Name == nil {
+			continue
+		}
+		s := strings.Trim(i.Path.Value, "\"")
+		p := strings.Split(s, "/")
+		// Consumers check the named imports map against the final path
+		// segment of a package path.
+		if len(p) > 1 {
+			s = p[len(p)-1]
+		}
+		m[i.Name.Name] = s
+	}
+	return m
+}
+
+// ReferenceDataFromDeclaration gets data for the reference by examining decl,
+// looking up the declarations of decl's field types in allDecls.
+func ReferenceDataFromDeclaration(decl DeclarationInfo, allDecls map[PackageInfo]DeclarationInfo) (map[PackageInfo]ReferenceEntry, error) {
+	rs, err := typeForDecl(decl, allDecls)
+	if err != nil {
+		return nil, err
+	}
+
+	fieldsToProcess, err := allFieldsForDecl(decl, rs.fields, allDecls)
+	if err != nil {
+		return nil, err
+	}
+
+	example, err := makeYAMLExample(fieldsToProcess)
+	if err != nil {
+		return nil, err
+	}
+
+	// Initialize the return value and insert the root reference entry
+	// provided by decl.
+	refs := make(map[PackageInfo]ReferenceEntry)
+	description := strings.Trim(strings.ReplaceAll(rs.doc, "\n", " "), " ")
+	entry := ReferenceEntry{
+		SectionName: makeSectionName(rs.name),
+		Description: printableDescription(description, rs.name),
+		SourcePath:  decl.FilePath,
+		YAMLExample: example,
+		Fields:      []Field{},
+	}
+	key := PackageInfo{
+		DeclName:    rs.name,
+		PackageName: decl.PackageName,
+	}
+
+	fld, err := makeFieldTableInfo(fieldsToProcess)
+	if err != nil {
+		return nil, err
+	}
+	slices.SortFunc(fld, sortFieldsByName)
+	entry.Fields = fld
+	refs[key] = entry
+
+	// For any fields within decl that have a custom type, look up the
+	// declaration for that type and create a separate reference entry for
+	// it.
+	for _, f := range fieldsToProcess {
+		// Don't make separate reference entries for embedded structs
+		// since they are part of the containing struct for the purposes
+		// of unmarshaling YAML.
+		if f.name == "" {
+			continue
+		}
+
+		c := f.kind.customFieldData()
+
+		for _, d := range c {
+			// Find the package name to use to look up the declaration from
+			// its identifier.
+			i, ok := decl.NamedImports[d.PackageName]
+			// The file that made the declaration provided a name for the
+			// package associated with the identifier, so find the full
+			// package path and use that to look up the declaration.
+			if ok {
+				d.PackageName = i
+			}
+
+			// Get information about the field type's declaration. If we
+			// can't find it, the field type was probably declared in the
+			// standard library or a third-party package. In this case,
+			// leave it to the GoDoc to describe the field type.
+			gd, ok := allDecls[d]
+			if !ok {
+				continue
+			}
+			r, err := ReferenceDataFromDeclaration(gd, allDecls)
+			if errors.As(err, &NotAGenDeclError{}) {
+				continue
+			}
+			if err != nil {
+				return nil, err
+			}
+
+			for k, v := range r {
+				slices.SortFunc(v.Fields, sortFieldsByName)
+				refs[k] = v
+			}
+		}
+	}
+	return refs, nil
+}
diff --git a/build.assets/tooling/cmd/resource-ref-generator/resource/resource_test.go b/build.assets/tooling/cmd/resource-ref-generator/resource/resource_test.go
new file mode 100644
index 0000000000000..21b41d8949a3a
--- /dev/null
+++ b/build.assets/tooling/cmd/resource-ref-generator/resource/resource_test.go
@@ -0,0 +1,2099 @@
+// Teleport
+// Copyright (C) 2025 Gravitational, Inc.
+//
+// This program is free software: you can redistribute it and/or modify
+// it under the terms of the GNU Affero General Public License as published by
+// the Free Software Foundation, either version 3 of the License, or
+// (at your option) any later version.
+//
+// This program is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU Affero General Public License for more details.
+//
+// You should have received a copy of the GNU Affero General Public License
+// along with this program. If not, see <https://www.gnu.org/licenses/>.
+
+package resource
+
+import (
+	"go/parser"
+	"go/token"
+	"os"
+	"path/filepath"
+	"strconv"
+	"strings"
+	"testing"
+
+	"github.com/stretchr/testify/assert"
+)
+
+// replaceBackticks replaces the "BACKTICK" placeholder text with backticks so
+// we can include struct tags within source fixtures.
+func replaceBackticks(source string) string { + return strings.ReplaceAll(source, "BACKTICK", "`") +} + +func TestReferenceDataFromDeclaration(t *testing.T) { + cases := []struct { + description string + source string + expected map[PackageInfo]ReferenceEntry + // Go source fixtures that the test uses for named type fields. + declSources []string + // Substring to expect in a resulting error message + errorSubstring string + declInfo PackageInfo + }{ + { + description: "scalar fields with one field ignored", + declInfo: PackageInfo{ + DeclName: "Metadata", + PackageName: "mypkg", + }, + source: ` +package mypkg + +// Metadata describes information about a dynamic resource. Every dynamic +// resource in Teleport has a metadata object. +type Metadata struct { + // Name is the name of the resource. + Name string BACKTICKprotobuf:"bytes,1,opt,name=Name,proto3" json:"name"BACKTICK + // Namespace is the resource's namespace + Namespace string BACKTICKprotobuf:"bytes,2,opt,name=Namespace,proto3" json:"-"BACKTICK + // Description is the resource's description. + Description string BACKTICKprotobuf:"bytes,3,opt,name=Description,proto3" json:"description,omitempty"BACKTICK + // Age is the resource's age in seconds. + Age uint BACKTICKjson:"age"BACKTICK + // Active indicates whether the resource is currently in use. + Active bool BACKTICKjson:"active"BACKTICK +} +`, + expected: map[PackageInfo]ReferenceEntry{ + PackageInfo{ + DeclName: "Metadata", + PackageName: "mypkg", + }: { + SectionName: "Metadata", + Description: "Describes information about a dynamic resource. 
Every dynamic resource in Teleport has a metadata object.", + SourcePath: "src/myfile.go", + YAMLExample: `name: "string" +description: "string" +age: 1 +active: true +`, + Fields: []Field{ + Field{ + Name: "active", + Description: "Indicates whether the resource is currently in use.", + Type: "Boolean", + }, + Field{ + Name: "age", + Description: "The resource's age in seconds.", + Type: "number", + }, + Field{ + Name: "description", + Description: "The resource's description.", + Type: "string", + }, + Field{ + Name: "name", + Description: "The name of the resource.", + Type: "string", + }, + }, + }, + }, + }, + { + description: "sequences of scalars", + declInfo: PackageInfo{ + DeclName: "Metadata", + PackageName: "mypkg", + }, + source: ` +package mypkg + +// Metadata describes information about a dynamic resource. Every dynamic +// resource in Teleport has a metadata object. +type Metadata struct { + // Names is a list of names. + Names []string BACKTICKjson:"names"BACKTICK + // Numbers is a list of numbers. + Numbers []int BACKTICKjson:"numbers"BACKTICK + // Booleans is a list of Booleans. + Booleans []bool BACKTICKjson:"booleans"BACKTICK +} +`, + expected: map[PackageInfo]ReferenceEntry{ + PackageInfo{ + DeclName: "Metadata", + PackageName: "mypkg", + }: { + SectionName: "Metadata", + Description: "Describes information about a dynamic resource. 
Every dynamic resource in Teleport has a metadata object.", + SourcePath: "src/myfile.go", + YAMLExample: `names: + - "string" + - "string" + - "string" +numbers: + - 1 + - 1 + - 1 +booleans: + - true + - true + - true +`, + Fields: []Field{ + Field{ + Name: "booleans", + Description: "A list of Booleans.", + Type: "[]Boolean", + }, + Field{ + Name: "names", + Description: "A list of names.", + Type: "[]string", + }, + Field{ + Name: "numbers", + Description: "A list of numbers.", + Type: "[]number", + }, + }, + }, + }, + }, + { + description: "a map of strings to sequences", + declInfo: PackageInfo{ + DeclName: "Metadata", + PackageName: "mypkg", + }, + source: ` +package mypkg + +// Metadata describes information about a dynamic resource. Every dynamic +// resource in Teleport has a metadata object. +type Metadata struct { + // Attributes indicates additional data for the resource. + Attributes map[string][]string BACKTICKjson:"attributes"BACKTICK +} +`, + expected: map[PackageInfo]ReferenceEntry{ + PackageInfo{ + DeclName: "Metadata", + PackageName: "mypkg", + }: { + SectionName: "Metadata", + Description: "Describes information about a dynamic resource. Every dynamic resource in Teleport has a metadata object.", + SourcePath: "src/myfile.go", + YAMLExample: `attributes: + "string": + - "string" + - "string" + - "string" + "string": + - "string" + - "string" + - "string" + "string": + - "string" + - "string" + - "string" +`, + Fields: []Field{ + Field{ + Name: "attributes", + Description: "Indicates additional data for the resource.", + Type: "map[string][]string", + }, + }, + }, + }, + }, + { + description: "an undeclared custom type field", + declInfo: PackageInfo{ + DeclName: "Server", + PackageName: "mypkg", + }, + source: ` +package mypkg + +// Server includes information about a server registered with Teleport. +type Server struct { + // Name is the name of the resource. 
+ Name string BACKTICKprotobuf:"bytes,1,opt,name=Name,proto3" json:"name"BACKTICK + // Spec contains information about the server. + Spec types.ServerSpecV1 BACKTICKjson:"spec"BACKTICK +} +`, + expected: map[PackageInfo]ReferenceEntry{ + PackageInfo{ + DeclName: "Server", + PackageName: "mypkg", + }: ReferenceEntry{ + SectionName: "Server", + Description: "Includes information about a server registered with Teleport.", + SourcePath: "src/myfile.go", + Fields: []Field{ + { + Name: "name", + Description: "The name of the resource.", + Type: "string", + }, + { + Name: "spec", + Description: "Contains information about the server.", + Type: "", + }, + }, + YAMLExample: "name: \"string\"\nspec: # See description\n", + }, + }, + }, + { + description: "named scalar type", + declInfo: PackageInfo{ + PackageName: "mypkg", + DeclName: "Server", + }, + source: `package mypkg +// Server includes information about a server registered with Teleport. +type Server struct { + // Name is the name of the resource. + Name string BACKTICKprotobuf:"bytes,1,opt,name=Name,proto3" json:"name"BACKTICK + // Spec contains information about the server. + Spec types.ServerSpecV1 BACKTICKjson:"spec"BACKTICK + // Label specifies labels for the server. 
+ Label Labels BACKTICKjson:"labels"BACKTICK +} +`, + declSources: []string{ + `package mypkg + +// Labels is a slice of strings that we'll process downstream +type Labels []string +`, + }, + expected: map[PackageInfo]ReferenceEntry{ + PackageInfo{ + DeclName: "Server", + PackageName: "mypkg", + }: ReferenceEntry{ + SectionName: "Server", + Description: "Includes information about a server registered with Teleport.", + SourcePath: "src/myfile.go", + Fields: []Field{ + { + Name: "labels", + Description: "Specifies labels for the server.", + Type: "[Labels](#labels)", + }, + { + Name: "name", + Description: "The name of the resource.", + Type: "string", + }, + { + Name: "spec", + Description: "Contains information about the server.", + Type: "", + }, + }, + YAMLExample: "name: \"string\"\nspec: # See description\nlabels: # [...]\n", + }, + PackageInfo{ + DeclName: "Labels", + PackageName: "mypkg", + }: ReferenceEntry{ + SectionName: "Labels", + Description: "A slice of strings that we'll process downstream", + SourcePath: "src/myfile0.go", + Fields: nil, + YAMLExample: "", + }, + }, + }, + { + description: "custom type fields with a custom JSON unmarshaller", + declInfo: PackageInfo{ + DeclName: "Server", + PackageName: "mypkg", + }, + source: ` +package mypkg + +// Server includes information about a server registered with Teleport. +type Server struct { + // Name is the name of the resource. + Name string BACKTICKprotobuf:"bytes,1,opt,name=Name,proto3" json:"name"BACKTICK + // Spec contains information about the server. 
+ Spec types.ServerSpecV1 BACKTICKjson:"spec"BACKTICK +} +`, + declSources: []string{ + `package mypkg + +func (s *Server) UnmarshalJSON (b []byte) error { + return nil +} +`, + }, + expected: map[PackageInfo]ReferenceEntry{ + PackageInfo{ + DeclName: "Server", + PackageName: "mypkg", + }: ReferenceEntry{ + SectionName: "Server", + Description: "Includes information about a server registered with Teleport.", + SourcePath: "src/myfile.go", + Fields: []Field{ + { + Name: "name", + Description: "The name of the resource.", + Type: "string", + }, + { + Name: "spec", + Description: "Contains information about the server.", + Type: "", + }, + }, + YAMLExample: "name: \"string\"\nspec: # See description\n", + }, + }, + }, + { + description: "custom type with custom YAML unmarshaller", + declInfo: PackageInfo{ + DeclName: "Application", + PackageName: "mypkg", + }, + source: ` +package mypkg + +// Application includes information about an application registered with Teleport. +type Application struct { + // Name is the name of the resource. + Name string BACKTICKprotobuf:"bytes,1,opt,name=Name,proto3" json:"name"BACKTICK + // Spec contains information about the application. 
+ Spec types.AppSpecV1 BACKTICKjson:"spec"BACKTICK +} +`, + declSources: []string{ + `package mypkg + +func (a *Application) UnmarshalYAML(value *yaml.Node) error { + return nil +} +`, + }, + expected: map[PackageInfo]ReferenceEntry{ + PackageInfo{ + DeclName: "Application", + PackageName: "mypkg", + }: ReferenceEntry{ + SectionName: "Application", + Description: "Includes information about an application registered with Teleport.", + SourcePath: "src/myfile.go", + Fields: []Field{ + { + Name: "name", + Description: "The name of the resource.", + Type: "string", + }, + { + Name: "spec", + Description: "Contains information about the application.", + Type: "", + }, + }, + YAMLExample: "name: \"string\"\nspec: # See description\n", + }, + }, + }, + { + description: "a custom type field declared in a second source file", + declInfo: PackageInfo{ + DeclName: "Server", + PackageName: "mypkg", + }, + source: ` +package mypkg + +// Server includes information about a server registered with Teleport. +type Server struct { + // Name is the name of the resource. + Name string BACKTICKprotobuf:"bytes,1,opt,name=Name,proto3" json:"name"BACKTICK + // Spec contains information about the server. + Spec types.ServerSpecV1 BACKTICKjson:"spec"BACKTICK +} +`, + declSources: []string{`package types +// ServerSpecV1 includes aspects of a proxied server. +type ServerSpecV1 struct { + // The address of the server. + Address string BACKTICKjson:"address"BACKTICK + // How long the resource is valid. + TTL int BACKTICKjson:"ttl"BACKTICK + // Whether the server is active. + IsActive bool BACKTICKjson:"is_active"BACKTICK +}`, + }, + expected: map[PackageInfo]ReferenceEntry{ + PackageInfo{ + DeclName: "Server", + PackageName: "mypkg", + }: { + SectionName: "Server", + Description: "Includes information about a server registered with Teleport.", + SourcePath: "src/myfile.go", + YAMLExample: `name: "string" +spec: # [...] 
+`, + Fields: []Field{ + Field{ + Name: "name", + Description: "The name of the resource.", + Type: "string", + }, + Field{ + Name: "spec", + Description: "Contains information about the server.", + Type: "[Server Spec V1](#server-spec-v1)", + }, + }, + }, + PackageInfo{ + DeclName: "ServerSpecV1", + PackageName: "types", + }: { + SectionName: "Server Spec V1", + Description: "Includes aspects of a proxied server.", + SourcePath: "src/myfile0.go", + YAMLExample: `address: "string" +ttl: 1 +is_active: true +`, + Fields: []Field{ + Field{ + Name: "address", + Description: "The address of the server.", + Type: "string", + }, + Field{ + Name: "is_active", + Description: "Whether the server is active.", + Type: "Boolean", + }, + Field{ + Name: "ttl", + Description: "How long the resource is valid.", + Type: "number", + }, + }, + }, + }, + }, + { + description: "composite field type with named scalar type", + declInfo: PackageInfo{ + DeclName: "Server", + PackageName: "mypkg", + }, + source: ` +package mypkg + +// Server includes information about a server registered with Teleport. +type Server struct { + // Spec contains information about the server. + Spec types.ServerSpecV1 BACKTICKjson:"spec"BACKTICK + // LabelMaps includes a map of strings to labels. + LabelMaps []map[string]types.Label BACKTICKjson:"label_maps"BACKTICK +} +`, + declSources: []string{`package types +// ServerSpecV1 includes aspects of a proxied server. +type ServerSpecV1 struct { + // The address of the server. + Address string BACKTICKjson:"address"BACKTICK +}`, + `package types + +// Label is a custom type that we unmarshal in a non-default way. +type Label string +`, + }, + expected: map[PackageInfo]ReferenceEntry{ + PackageInfo{ + DeclName: "Server", + PackageName: "mypkg", + }: { + SectionName: "Server", + Description: "Includes information about a server registered with Teleport.", + SourcePath: "src/myfile.go", + YAMLExample: `spec: # [...] +label_maps: + - + "string": # [...] 
+    "string": # [...]
+    "string": # [...]
+  -
+    "string": # [...]
+    "string": # [...]
+    "string": # [...]
+  -
+    "string": # [...]
+    "string": # [...]
+    "string": # [...]
+`,
+					Fields: []Field{
+						Field{
+							Name:        "label_maps",
+							Description: "Includes a map of strings to labels.",
+							Type:        "[]map[string][Label](#label)",
+						},
+						Field{
+							Name:        "spec",
+							Description: "Contains information about the server.",
+							Type:        "[Server Spec V1](#server-spec-v1)",
+						},
+					},
+				},
+				PackageInfo{
+					DeclName:    "ServerSpecV1",
+					PackageName: "types",
+				}: {
+					SectionName: "Server Spec V1",
+					Description: "Includes aspects of a proxied server.",
+					SourcePath:  "src/myfile0.go",
+					YAMLExample: `address: "string"
+`,
+					Fields: []Field{
+						Field{
+							Name:        "address",
+							Description: "The address of the server.",
+							Type:        "string",
+						},
+					},
+				},
+				PackageInfo{
+					DeclName:    "Label",
+					PackageName: "types",
+				}: {
+					SectionName: "Label",
+					Description: "A custom type that we unmarshal in a non-default way.",
+					SourcePath:  "src/myfile1.go",
+					Fields:      nil,
+				},
+			},
+		},
+		{
+			description: "struct type with an interface field",
+			declInfo: PackageInfo{
+				DeclName:    "Server",
+				PackageName: "mypkg",
+			},
+			source: `
+package mypkg
+
+// Server includes information about a server registered with Teleport.
+type Server struct {
+	// The name of the server.
+	Name string BACKTICKjson:"name"BACKTICK
+	// Impl is the implementation of the server.
+	Impl ServerImplementation BACKTICKjson:"impl"BACKTICK
+}
+`,
+			declSources: []string{`package mypkg
+// ServerImplementation is a remote service with a URL.
+type ServerImplementation interface{ + GetURL() string +} +`, + }, + expected: map[PackageInfo]ReferenceEntry{ + PackageInfo{ + DeclName: "Server", + PackageName: "mypkg", + }: { + SectionName: "Server", + Description: "Includes information about a server registered with Teleport.", + SourcePath: "src/myfile.go", + Fields: []Field{ + { + Name: "impl", + Description: "The implementation of the server.", + Type: "[Server Implementation](#server-implementation)", + }, + { + Name: "name", + Description: "The name of the server.", + Type: "string", + }, + }, + YAMLExample: `name: "string" +impl: # [...] +`, + }, + PackageInfo{ + DeclName: "ServerImplementation", + PackageName: "mypkg", + }: ReferenceEntry{ + SectionName: "Server Implementation", + Description: "A remote service with a URL.", + SourcePath: "src/myfile0.go", + Fields: nil, + YAMLExample: "", + }, + }, + }, + { + description: "embedded struct", + declInfo: PackageInfo{ + DeclName: "MyResource", + PackageName: "mypkg", + }, + source: `package mypkg +// MyResource is a resource declared for testing. +type MyResource struct{ + // Alias is another name to call the resource. + Alias string BACKTICKjson:"alias"BACKTICK + types.Metadata +} +`, + declSources: []string{ + `package types + +// Metadata describes information about a dynamic resource. Every dynamic +// resource in Teleport has a metadata object. +type Metadata struct { + // Name is the name of the resource. + Name string BACKTICKprotobuf:"bytes,1,opt,name=Name,proto3" json:"name"BACKTICK + // Active indicates whether the resource is currently in use. 
+ Active bool BACKTICKjson:"active"BACKTICK +}`, + }, + expected: map[PackageInfo]ReferenceEntry{ + PackageInfo{ + DeclName: "MyResource", + PackageName: "mypkg", + }: { + SectionName: "My Resource", + Description: "A resource declared for testing.", + SourcePath: "src/myfile.go", + Fields: []Field{ + { + Name: "active", + Description: "Indicates whether the resource is currently in use.", + Type: "Boolean", + }, + { + Name: "alias", + Description: "Another name to call the resource.", + Type: "string", + }, + { + Name: "name", + Description: "The name of the resource.", + Type: "string", + }, + }, + YAMLExample: `alias: "string" +name: "string" +active: true +`, + }, + }, + }, + { + description: "embedded struct with struct field", + declInfo: PackageInfo{ + DeclName: "MyResource", + PackageName: "mypkg", + }, + source: `package mypkg +// MyResource is a resource declared for testing. +type MyResource struct{ + // Alias is another name to call the resource. + Alias string BACKTICKjson:"alias"BACKTICK + types.Header +} +`, + declSources: []string{ + `package types +type Header struct { + // Metadata is the resource metadata + Metadata Metadata BACKTICKjson:"metadata"BACKTICK +} + +// Metadata describes information about a dynamic resource. Every dynamic +// resource in Teleport has a metadata object. +type Metadata struct { + // Name is the name of the resource. + Name string BACKTICKprotobuf:"bytes,1,opt,name=Name,proto3" json:"name"BACKTICK + // Active indicates whether the resource is currently in use. 
+ Active bool BACKTICKjson:"active"BACKTICK +}`, + }, + expected: map[PackageInfo]ReferenceEntry{ + PackageInfo{ + DeclName: "MyResource", + PackageName: "mypkg", + }: { + SectionName: "My Resource", + Description: "A resource declared for testing.", + SourcePath: "src/myfile.go", + Fields: []Field{ + { + Name: "alias", + Description: "Another name to call the resource.", + Type: "string", + }, + { + Name: "metadata", + Description: "The resource metadata", + Type: "[Metadata](#metadata)", + }, + }, + YAMLExample: `alias: "string" +metadata: # [...] +`, + }, + PackageInfo{ + DeclName: "Metadata", + PackageName: "types", + }: { + SectionName: "Metadata", + SourcePath: "src/myfile0.go", + Description: "Describes information about a dynamic resource. Every dynamic resource in Teleport has a metadata object.", + Fields: []Field{ + { + Name: "active", + Description: "Indicates whether the resource is currently in use.", + Type: "Boolean", + }, + { + Name: "name", + Description: "The name of the resource.", + Type: "string", + }, + }, + YAMLExample: `name: "string" +active: true +`, + }, + }, + }, + { + description: "embedded struct with base in the same package", + declInfo: PackageInfo{ + DeclName: "MyResource", + PackageName: "mypkg", + }, + source: `package mypkg +// MyResource is a resource declared for testing. +type MyResource struct{ + // Alias is another name to call the resource. + Alias string BACKTICKjson:"alias"BACKTICK + Metadata +} +`, + declSources: []string{ + `package mypkg + +// Metadata describes information about a dynamic resource. Every dynamic +// resource in Teleport has a metadata object. +type Metadata struct { + // Name is the name of the resource. + Name string BACKTICKprotobuf:"bytes,1,opt,name=Name,proto3" json:"name"BACKTICK + // Active indicates whether the resource is currently in use. 
+ Active bool BACKTICKjson:"active"BACKTICK +}`, + }, + expected: map[PackageInfo]ReferenceEntry{ + PackageInfo{ + DeclName: "MyResource", + PackageName: "mypkg", + }: { + SectionName: "My Resource", + Description: "A resource declared for testing.", + SourcePath: "src/myfile.go", + Fields: []Field{ + { + Name: "active", + Description: "Indicates whether the resource is currently in use.", + Type: "Boolean", + }, + { + Name: "alias", + Description: "Another name to call the resource.", + Type: "string", + }, + { + Name: "name", + Description: "The name of the resource.", + Type: "string", + }, + }, + YAMLExample: `alias: "string" +name: "string" +active: true +`, + }, + }, + }, + { + description: "struct with two embedded structs", + declInfo: PackageInfo{ + DeclName: "MyResource", + PackageName: "mypkg", + }, + source: `package mypkg +// MyResource is a resource declared for testing. +type MyResource struct{ + // Alias is another name to call the resource. + Alias string BACKTICKjson:"alias"BACKTICK + types.Metadata + moretypes.ActivityStatus +} +`, + declSources: []string{ + `package types + +// Metadata describes information about a dynamic resource. Every dynamic +// resource in Teleport has a metadata object. +type Metadata struct { + // Name is the name of the resource. + Name string BACKTICKprotobuf:"bytes,1,opt,name=Name,proto3" json:"name"BACKTICK +}`, + `package moretypes + +// ActivityStatus indicates the status of a resource +type ActivityStatus struct{ + // Active indicates whether the resource is currently in use. 
+ Active bool BACKTICKjson:"active"BACKTICK +}`, + }, + expected: map[PackageInfo]ReferenceEntry{ + PackageInfo{ + DeclName: "MyResource", + PackageName: "mypkg", + }: { + SectionName: "My Resource", + Description: "A resource declared for testing.", + SourcePath: "src/myfile.go", + Fields: []Field{ + { + Name: "active", + Description: "Indicates whether the resource is currently in use.", + Type: "Boolean", + }, + { + Name: "alias", + Description: "Another name to call the resource.", + Type: "string", + }, + { + Name: "name", + Description: "The name of the resource.", + Type: "string", + }, + }, + YAMLExample: `alias: "string" +name: "string" +active: true +`, + }, + }, + }, + { + description: "embedded struct with an embedded struct", + declInfo: PackageInfo{ + DeclName: "MyResource", + PackageName: "mypkg", + }, + source: `package mypkg +// MyResource is a resource declared for testing. +type MyResource struct{ + // Alias is another name to call the resource. + Alias string BACKTICKjson:"alias"BACKTICK + types.Metadata +} +`, + declSources: []string{ + `package types + +// Metadata describes information about a dynamic resource. Every dynamic +// resource in Teleport has a metadata object. +type Metadata struct { + // Name is the name of the resource. + Name string BACKTICKprotobuf:"bytes,1,opt,name=Name,proto3" json:"name"BACKTICK + moretypes.ActivityStatus +}`, + `package moretypes + +// ActivityStatus indicates the status of a resource +type ActivityStatus struct{ + // Active indicates whether the resource is currently in use. 
+	Active bool BACKTICKjson:"active"BACKTICK
+}`,
+			},
+			expected: map[PackageInfo]ReferenceEntry{
+				PackageInfo{
+					DeclName:    "MyResource",
+					PackageName: "mypkg",
+				}: {
+					SectionName: "My Resource",
+					Description: "A resource declared for testing.",
+					SourcePath:  "src/myfile.go",
+					Fields: []Field{
+						{
+							Name:        "active",
+							Description: "Indicates whether the resource is currently in use.",
+							Type:        "Boolean",
+						},
+						{
+							Name:        "alias",
+							Description: "Another name to call the resource.",
+							Type:        "string",
+						},
+						{
+							Name:        "name",
+							Description: "The name of the resource.",
+							Type:        "string",
+						},
+					},
+					YAMLExample: `alias: "string"
+name: "string"
+active: true
+`,
+				},
+			},
+		},
+		{
+			description: "ignored fields with non-YAML-compatible types",
+			declInfo: PackageInfo{
+				DeclName:    "Metadata",
+				PackageName: "mypkg",
+			},
+			source: `
+package mypkg
+
+// Metadata describes information about a dynamic resource. Every dynamic
+// resource in Teleport has a metadata object.
+type Metadata struct {
+	// Name is the name of the resource.
+	Name string BACKTICKprotobuf:"bytes,1,opt,name=Name,proto3" json:"name"BACKTICK
+	XXX_NoUnkeyedLiteral struct{} BACKTICKjson:"-"BACKTICK
+	XXX_unrecognized []byte BACKTICKjson:"-"BACKTICK
+
+}
+`,
+			expected: map[PackageInfo]ReferenceEntry{
+				PackageInfo{
+					DeclName:    "Metadata",
+					PackageName: "mypkg",
+				}: {
+					SectionName: "Metadata",
+					Description: "Describes information about a dynamic resource. Every dynamic resource in Teleport has a metadata object.",
+					SourcePath:  "src/myfile.go",
+					YAMLExample: `name: "string"
+`,
+					Fields: []Field{
+						Field{
+							Name:        "name",
+							Description: "The name of the resource.",
+							Type:        "string",
+						},
+					},
+				},
+			},
+		},
+		{
+			description: "non-embedded custom field type declared in the same package as the containing struct",
+			declInfo: PackageInfo{
+				DeclName:    "DatabaseServerV3",
+				PackageName: "typestest",
+			},
+			source: `package typestest
+
+// DatabaseServerV3 represents a database access server.
+type DatabaseServerV3 struct { + // Kind is the database server resource kind. + Kind string BACKTICKprotobuf:"bytes,1,opt,name=Kind,proto3" json:"kind"BACKTICK + // Metadata is the database server metadata. + Metadata Metadata BACKTICKprotobuf:"bytes,4,opt,name=Metadata,proto3" json:"metadata"BACKTICK +} +`, + declSources: []string{ + `package typestest + +// Metadata is resource metadata +type Metadata struct { + // Name is an object name + Name string BACKTICKprotobuf:"bytes,1,opt,name=Name,proto3" json:"name"BACKTICK + // Description is object description + Description string BACKTICKprotobuf:"bytes,3,opt,name=Description,proto3" json:"description,omitempty"BACKTICK +}`, + }, + expected: map[PackageInfo]ReferenceEntry{ + PackageInfo{ + DeclName: "DatabaseServerV3", + PackageName: "typestest", + }: ReferenceEntry{ + SectionName: "Database Server V3", + Description: "Represents a database access server.", + SourcePath: "src/myfile.go", + Fields: []Field{ + Field{ + Name: "kind", + Description: "The database server resource kind.", + Type: "string", + }, + Field{ + Name: "metadata", + Description: "The database server metadata.", + Type: "[Metadata](#metadata)", + }, + }, + YAMLExample: `kind: "string" +metadata: # [...] +`, + }, + PackageInfo{ + DeclName: "Metadata", + PackageName: "typestest", + }: ReferenceEntry{ + SectionName: "Metadata", + Description: "Resource metadata", + SourcePath: "src/myfile0.go", + Fields: []Field{ + { + Name: "description", + Description: "Object description", + Type: "string", + }, + { + Name: "name", + Description: "An object name", + Type: "string", + }, + }, + YAMLExample: `name: "string" +description: "string" +`, + }, + }, + }, + { + description: "pointer field", + declInfo: PackageInfo{ + DeclName: "DatabaseServerV3", + PackageName: "typestest", + }, + source: `package typestest + +// DatabaseServerV3 represents a database access server. +type DatabaseServerV3 struct { + // Metadata is the database server metadata. 
+ Metadata *Metadata BACKTICKprotobuf:"bytes,4,opt,name=Metadata,proto3" json:"metadata"BACKTICK +} +`, + declSources: []string{ + `package typestest + +// Metadata is resource metadata +type Metadata struct { + // Name is an object name + Name string BACKTICKprotobuf:"bytes,1,opt,name=Name,proto3" json:"name"BACKTICK +}`, + }, + expected: map[PackageInfo]ReferenceEntry{ + PackageInfo{ + DeclName: "DatabaseServerV3", + PackageName: "typestest", + }: ReferenceEntry{ + SectionName: "Database Server V3", + Description: "Represents a database access server.", + SourcePath: "src/myfile.go", + Fields: []Field{ + Field{ + Name: "metadata", + Description: "The database server metadata.", + Type: "[Metadata](#metadata)", + }, + }, + YAMLExample: `metadata: # [...] +`, + }, + PackageInfo{ + DeclName: "Metadata", + PackageName: "typestest", + }: ReferenceEntry{ + SectionName: "Metadata", + Description: "Resource metadata", + SourcePath: "src/myfile0.go", + Fields: []Field{ + { + Name: "name", + Description: "An object name", + Type: "string", + }, + }, + YAMLExample: `name: "string" +`, + }, + }, + }, + { + description: "map of strings to an undeclared field", + declInfo: PackageInfo{ + PackageName: "mypkg", + DeclName: "Server", + }, + source: ` +package mypkg + +// Server includes information about a server registered with Teleport. +type Server struct { + // Name is the name of the server. + Name string BACKTICKjson:"name"BACKTICK + // LabelMaps includes a map of strings to labels. + LabelMaps []map[string]types.Label BACKTICKjson:"label_maps"BACKTICK +} +`, + declSources: []string{`package types +// ServerSpecV1 includes aspects of a proxied server. +type ServerSpecV1 struct { + // The address of the server. 
+	Address string BACKTICKjson:"address"BACKTICK
+}`,
+			},
+			expected: map[PackageInfo]ReferenceEntry{
+				PackageInfo{
+					DeclName:    "Server",
+					PackageName: "mypkg",
+				}: {
+					SectionName: "Server",
+					Description: "Includes information about a server registered with Teleport.",
+					SourcePath:  "src/myfile.go",
+					YAMLExample: `name: "string"
+label_maps:
+  -
+    "string": # See description
+    "string": # See description
+    "string": # See description
+  -
+    "string": # See description
+    "string": # See description
+    "string": # See description
+  -
+    "string": # See description
+    "string": # See description
+    "string": # See description
+`,
+					Fields: []Field{
+						Field{
+							Name:        "label_maps",
+							Description: "Includes a map of strings to labels.",
+							Type:        "[]map[string]",
+						},
+						Field{
+							Name:        "name",
+							Description: "The name of the server.",
+							Type:        "string",
+						},
+					},
+				},
+			},
+		},
+		{
+			description: "type parameter",
+			declInfo: PackageInfo{
+				PackageName: "mypkg",
+				DeclName:    "Resource",
+			},
+			source: `package mypkg
+// Resource is a resource.
+type Resource struct {
+	// The name of the resource.
+	Name string BACKTICKjson:"name"BACKTICK
+}
+`,
+			declSources: []string{
+				`package mypkg
+// streamFunc is a wrapper that converts a closure into a stream.
+type streamFunc[T any] struct { + fn func() (T, error) + doneFuncs []func() + item T + err error +} + +func (stream *streamFunc[T]) Next() bool { + stream.item, stream.err = stream.fn() + return stream.err == nil +} +`, + }, + expected: map[PackageInfo]ReferenceEntry{ + PackageInfo{ + PackageName: "mypkg", + DeclName: "Resource", + }: ReferenceEntry{ + SectionName: "Resource", + Description: "A resource.", + SourcePath: "src/myfile.go", + YAMLExample: `name: "string" +`, + Fields: []Field{ + Field{ + Name: "name", + Description: "The name of the resource.", + Type: "string", + }, + }, + }, + }, + }, + { + description: "field type not declared in a loaded package", + declInfo: PackageInfo{ + PackageName: "mypkg", + DeclName: "Resource", + }, + source: `package mypkg + +// Resource is a resource. +type Resource struct { + // The name of the resource. + Name string BACKTICKjson:"name"BACKTICK + // How much time must elapse before the resource expires. + Expiry time.Time BACKTICKjson:"expiry"BACKTICK +} +`, + expected: map[PackageInfo]ReferenceEntry{ + PackageInfo{ + PackageName: "mypkg", + DeclName: "Resource", + }: ReferenceEntry{ + SectionName: "Resource", + Description: "A resource.", + SourcePath: "src/myfile.go", + YAMLExample: `name: "string" +expiry: # See description +`, + Fields: []Field{ + Field{ + Name: "expiry", + Description: "How much time must elapse before the resource expires.", + Type: "", + }, + Field{ + Name: "name", + Description: "The name of the resource.", + Type: "string", + }, + }, + }, + }, + declSources: []string{}, + }, + { + description: "byte slice", + declInfo: PackageInfo{ + PackageName: "mypkg", + DeclName: "Metadata", + }, + source: ` +package mypkg + +// Metadata describes information about a dynamic resource. Every dynamic +// resource in Teleport has a metadata object. +type Metadata struct { + // Name is the name of the resource. 
+ Name string BACKTICKprotobuf:"bytes,1,opt,name=Name,proto3" json:"name"BACKTICK + // PrivateKey is the private key of the resource. + PrivateKey []byte BACKTICKjson:"private_key"BACKTICK +} +`, + expected: map[PackageInfo]ReferenceEntry{ + PackageInfo{ + DeclName: "Metadata", + PackageName: "mypkg", + }: { + SectionName: "Metadata", + Description: "Describes information about a dynamic resource. Every dynamic resource in Teleport has a metadata object.", + SourcePath: "src/myfile.go", + YAMLExample: `name: "string" +private_key: BASE64_STRING +`, + Fields: []Field{ + Field{ + Name: "name", + Description: "The name of the resource.", + Type: "string", + }, + Field{ + Name: "private_key", + Description: "The private key of the resource.", + Type: "base64-encoded string", + }, + }, + }, + }, + }, + { + description: "named import in embedded struct field", + declInfo: PackageInfo{ + PackageName: "mypkg", + DeclName: "Server", + }, + source: ` +package mypkg + +// Server includes information about a server registered with Teleport. +type Server struct { + // Name is the name of the resource. + Name string BACKTICKprotobuf:"bytes,1,opt,name=Name,proto3" json:"name"BACKTICK + // Spec contains information about the server. + Spec types.ServerSpecV1 BACKTICKjson:"spec"BACKTICK +} +`, + declSources: []string{`package types + +import alias "otherpkg" + +// ServerSpecV1 includes aspects of a proxied server. +type ServerSpecV1 struct { + alias.ServerSpec +}`, + `package otherpkg + +type ServerSpec struct { + // The address of the server. + Address string BACKTICKjson:"address"BACKTICK +}`, + }, + expected: map[PackageInfo]ReferenceEntry{ + PackageInfo{ + DeclName: "Server", + PackageName: "mypkg", + }: { + SectionName: "Server", + Description: "Includes information about a server registered with Teleport.", + SourcePath: "src/myfile.go", + YAMLExample: `name: "string" +spec: # [...] 
+`, + Fields: []Field{ + Field{ + Name: "name", + Description: "The name of the resource.", + Type: "string", + }, + Field{ + Name: "spec", + Description: "Contains information about the server.", + Type: "[Server Spec V1](#server-spec-v1)", + }, + }, + }, + PackageInfo{ + DeclName: "ServerSpecV1", + PackageName: "types", + }: { + SectionName: "Server Spec V1", + Description: "Includes aspects of a proxied server.", + SourcePath: "src/myfile0.go", + YAMLExample: `address: "string" +`, + Fields: []Field{ + Field{ + Name: "address", + Description: "The address of the server.", + Type: "string", + }, + }, + }, + }, + }, + { + description: "named import in named struct field", + declInfo: PackageInfo{ + PackageName: "mypkg", + DeclName: "Server", + }, + source: ` +package mypkg + +// Server includes information about a server registered with Teleport. +type Server struct { + // Spec contains information about the server. + Spec types.ServerSpecV1 BACKTICKjson:"spec"BACKTICK +} +`, + declSources: []string{`package types +import alias "otherpkg" + +// ServerSpecV1 includes aspects of a proxied server. +type ServerSpecV1 struct { + // Address information. + Info alias.AddressInfo BACKTICKjson:"info"BACKTICK +}`, + + `package otherpkg +// AddressInfo provides information about an address. +type AddressInfo struct { + // The address of the server. 
+	Address string BACKTICKjson:"address"BACKTICK
+}`,
+			},
+			expected: map[PackageInfo]ReferenceEntry{
+				PackageInfo{
+					DeclName:    "AddressInfo",
+					PackageName: "otherpkg",
+				}: {
+					SectionName: "Address Info",
+					Description: "Provides information about an address.",
+					SourcePath:  "src/myfile1.go",
+					Fields: []Field{
+						{
+							Name:        "address",
+							Description: "The address of the server.",
+							Type:        "string",
+						},
+					},
+					YAMLExample: "address: \"string\"\n",
+				},
+				PackageInfo{
+					DeclName:    "Server",
+					PackageName: "mypkg",
+				}: {
+					SectionName: "Server",
+					Description: "Includes information about a server registered with Teleport.",
+					SourcePath:  "src/myfile.go",
+					YAMLExample: `spec: # [...]
+`,
+					Fields: []Field{
+						Field{
+							Name:        "spec",
+							Description: "Contains information about the server.",
+							Type:        "[Server Spec V1](#server-spec-v1)",
+						},
+					},
+				},
+				PackageInfo{
+					DeclName:    "ServerSpecV1",
+					PackageName: "types",
+				}: {
+					SectionName: "Server Spec V1",
+					Description: "Includes aspects of a proxied server.",
+					SourcePath:  "src/myfile0.go",
+					YAMLExample: `info: # [...]
+`,
+					Fields: []Field{
+						Field{
+							Name:        "info",
+							Description: "Address information.",
+							Type:        "[Address Info](#address-info)",
+						},
+					},
+				},
+			},
+		},
+		{
+			description: "scalar fields with three unexported fields",
+			declInfo: PackageInfo{
+				DeclName:    "Metadata",
+				PackageName: "mypkg",
+			},
+			source: `
+package mypkg
+
+// Metadata describes information about a dynamic resource. Every dynamic
+// resource in Teleport has a metadata object.
+type Metadata struct {
+	// Name is the name of the resource.
+	Name string BACKTICKprotobuf:"bytes,1,opt,name=Name,proto3" json:"name"BACKTICK
+	// Description is the resource's description.
+ Description string BACKTICKprotobuf:"bytes,3,opt,name=Description,proto3" json:"description,omitempty"BACKTICK + state protoimpl.MessageState BACKTICKprotogen:"open.v1"BACKTICK + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} +`, + expected: map[PackageInfo]ReferenceEntry{ + PackageInfo{ + DeclName: "Metadata", + PackageName: "mypkg", + }: { + SectionName: "Metadata", + Description: "Describes information about a dynamic resource. Every dynamic resource in Teleport has a metadata object.", + SourcePath: "src/myfile.go", + YAMLExample: `name: "string" +description: "string" +`, + Fields: []Field{ + Field{ + Name: "description", + Description: "The resource's description.", + Type: "string", + }, + Field{ + Name: "name", + Description: "The name of the resource.", + Type: "string", + }, + }, + }, + }, + }, + { + description: "curly braces in descriptions", + declInfo: PackageInfo{ + DeclName: "Metadata", + PackageName: "mypkg", + }, + source: ` +package mypkg + +// Metadata describes information about a {dynamic resource}. Every dynamic +// resource in Teleport has a metadata object. +type Metadata struct { + // Name is the {name of the resource}. + Name string BACKTICKprotobuf:"bytes,1,opt,name=Name,proto3" json:"name"BACKTICK +} +`, + expected: map[PackageInfo]ReferenceEntry{ + PackageInfo{ + DeclName: "Metadata", + PackageName: "mypkg", + }: { + SectionName: "Metadata", + Description: "Describes information about a `{dynamic resource}`. 
Every dynamic resource in Teleport has a metadata object.", + SourcePath: "src/myfile.go", + YAMLExample: `name: "string" +`, + Fields: []Field{ + Field{ + Name: "name", + Description: "The `{name of the resource}`.", + Type: "string", + }, + }, + }, + }, + }, + } + + for _, tc := range cases { + t.Run(tc.description, func(t *testing.T) { + tmp := t.TempDir() + if err := os.Mkdir(filepath.Join(tmp, "src"), 0777); err != nil { + t.Fatal(err) + } + + // Make a map of filenames to content + sources := make(map[string][]byte) + sources[filepath.Join(tmp, "src", "myfile.go")] = []byte(replaceBackticks(tc.source)) + for i, s := range tc.declSources { + sources[filepath.Join(tmp, "src", "myfile"+strconv.Itoa(i)+".go")] = []byte(replaceBackticks(s)) + } + + // Write the source content to a temporary directory + for n, v := range sources { + f, err := os.Create(n) + if err != nil { + t.Fatal(err) + } + + if _, err := f.Write(v); err != nil { + t.Fatal(err) + } + } + + sourceData, err := NewSourceData(tmp) + if err != nil { + t.Fatal(err) + } + + di, ok := sourceData.TypeDecls[tc.declInfo] + if !ok { + t.Fatalf("expected data for %v.%v not found in the source", tc.declInfo.PackageName, tc.declInfo.DeclName) + } + + r, err := ReferenceDataFromDeclaration(di, sourceData.TypeDecls) + if tc.errorSubstring == "" { + assert.NoError(t, err) + } else { + assert.ErrorContains(t, err, tc.errorSubstring) + } + + assert.Equal(t, tc.expected, r) + }) + } +} + +func TestNamedImports(t *testing.T) { + cases := []struct { + description string + input string + expected map[string]string + }{ + { + description: "single-line format", + input: `package mypkg +import alias "otherpkg" +`, + expected: map[string]string{ + "alias": "otherpkg", + }, + }, + { + description: "multi-line format", + input: `package mypkg +import ( + alias "first" + alias2 "second" +) +`, + expected: map[string]string{ + "alias": "first", + "alias2": "second", + }, + }, + { + description: "multi-segment package path", + 
input: `package mypkg
+import alias "my/multi/segment/package"
+`,
+			expected: map[string]string{
+				"alias": "package",
+			},
+		},
+	}
+
+	for _, c := range cases {
+		t.Run(c.description, func(t *testing.T) {
+			fset := token.NewFileSet()
+			f, err := parser.ParseFile(fset,
+				"myfile.go",
+				c.input,
+				parser.ParseComments,
+			)
+			assert.NoError(t, err)
+			assert.Equal(t, c.expected, NamedImports(f))
+		})
+	}
+}
+
+func TestMakeFieldTableInfo(t *testing.T) {
+	cases := []struct {
+		description string
+		input       []rawField
+		expected    []Field
+	}{
+		{
+			description: "angle brackets in GoDoc",
+			input: []rawField{
+				rawField{
+					packageName: "mypkg",
+					doc:         `An ID, e.g., "<id>"`,
+					kind:        yamlString{},
+					name:        "ObjectID",
+					jsonName:    "object_id",
+					tags:        `json:"object_id"`,
+				},
+			},
+			expected: []Field{
+				{
+					Name:        "object_id",
+					Description: `An ID, e.g., "\<id\>"`,
+					Type:        "string",
+				},
+			},
+		},
+		{
+			description: "pipe in field description",
+			input: []rawField{
+				{
+					packageName: "mypkg",
+					doc:         "Specifies the locking mode (strict|best_effort) to be applied with the role.",
+					kind: yamlCustomType{
+						name: "LockingMode",
+						declarationInfo: PackageInfo{
+							DeclName:    "LockingMode",
+							PackageName: "mypkg",
+						},
+					},
+					name:     "LockingMode",
+					jsonName: "locking_mode",
+					tags:     "json:\"locking_mode\"",
+				},
+			},
+			expected: []Field{
+				{
+					Name:        "locking_mode",
+					Description: `Specifies the locking mode (strict\|best_effort) to be applied with the role.`,
+					Type:        "[LockingMode](#lockingmode)",
+				},
+			},
+		},
+	}
+	for _, c := range cases {
+		t.Run(c.description, func(t *testing.T) {
+			f, err := makeFieldTableInfo(c.input)
+			assert.NoError(t, err)
+			assert.Equal(t, c.expected, f)
+		})
+	}
+}
+
+func TestGetJSONTag(t *testing.T) {
+	cases := []struct {
+		description string
+		input       string
+		expected    string
+	}{
+		{
+			description: "one well-formed struct tag",
+			input:       `json:"my_tag"`,
+			expected:    "my_tag",
+		},
+		{
+			description: "multiple well-formed struct tags",
+			input:       `json:"json_tag" 
yaml:"yaml_tag" other:"other-tag"`, + expected: "json_tag", + }, + { + description: "omitempty option in tag value", + input: `json:"json_tag,omitempty" yaml:"yaml_tag" other:"other-tag"`, + expected: "json_tag", + }, + { + description: "No JSON tag", + input: `other:"other-tag"`, + expected: "", + }, + { + description: "Empty JSON tag with the omitempty option", + input: `json:",omitempty" other:"other-tag"`, + expected: "", + }, + { + description: "Ignored JSON field", + input: `json:"-" other:"other-tag"`, + expected: "-", + }, + { + description: "empty JSON tag", + input: `json:"" yaml:"yaml_tag" other:"other-tag"`, + expected: "", + }, + } + + for _, c := range cases { + t.Run(c.description, func(t *testing.T) { + g := getJSONTag(c.input) + assert.Equal(t, c.expected, g) + }) + } +} + +func TestPrintableDescription(t *testing.T) { + cases := []struct { + description string + input string + name string + expected string + }{ + { + description: "short description", + input: "A", + name: "MyDecl", + expected: "A", + }, + { + description: "no description", + input: "", + name: "MyDecl", + expected: "", + }, + { + description: "GoDoc consists only of declaration name", + input: "MyDecl", + name: "MyDecl", + expected: "", + }, + { + description: "description containing name", + input: "MyDecl is a declaration that we will describe in the docs.", + name: "MyDecl", + expected: "A declaration that we will describe in the docs.", + }, + { + description: "description containing name and \"are\"", + input: "MyDecls are things that we will describe in the docs.", + name: "MyDecls", + expected: "Things that we will describe in the docs.", + }, + + { + description: "description with no name", + input: "Declaration that we will describe in the docs.", + name: "MyDecl", + expected: "Declaration that we will describe in the docs.", + }, + { + description: "description beginning with name and non-is verb", + input: "MyDecl performs an action.", + name: "MyDecl", + expected: 
"Performs an action.", + }, + { + description: "curly brace pair and identifier name", + input: "MyDecl performs an action, such as {updating, deleting}", + name: "MyDecl", + expected: "Performs an action, such as `{updating, deleting}`", + }, + { + description: "curly brace pair and no identifier name", + input: "Performs an action, such as {updating, deleting}", + name: "MyDecl", + expected: "Performs an action, such as `{updating, deleting}`", + }, + { + description: "curly brace pair with existing backticks", + input: "Performs an action, such as `{updating, deleting}`", + name: "MyDecl", + expected: "Performs an action, such as `{updating, deleting}`", + }, + } + + for _, c := range cases { + t.Run(c.description, func(t *testing.T) { + assert.Equal(t, c.expected, printableDescription(c.input, c.name)) + }) + } +} + +func TestMakeYAMLExample(t *testing.T) { + cases := []struct { + description string + input []rawField + expected string + }{ + { + description: "all scalars", + input: []rawField{ + rawField{ + doc: "myInt is an int", + kind: yamlNumber{}, + name: "myInt", + tags: `json:"my_int"`, + }, + rawField{ + doc: "myBool is a Boolean", + kind: yamlBool{}, + name: "myBool", + tags: `json:"my_bool"`, + }, + rawField{ + doc: "myString is a string", + kind: yamlString{}, + tags: `json:"my_string"`, + }, + }, + expected: `my_int: 1 +my_bool: true +my_string: "string" +`, + }, + { + description: "sequence of sequence of strings", + input: []rawField{ + rawField{ + name: "mySeq", + jsonName: "my_seq", + doc: "mySeq is a sequence of sequences of strings", + tags: `json:"my_seq"`, + kind: yamlSequence{ + elementKind: yamlSequence{ + elementKind: yamlString{}, + }, + }, + }, + }, + expected: `my_seq: + - + - "string" + - "string" + - "string" + - + - "string" + - "string" + - "string" + - + - "string" + - "string" + - "string" +`, + }, + { + description: "maps of numbers to strings", + input: []rawField{ + rawField{ + name: "myMap", + jsonName: "my_map", + doc: 
"myMap is a map of ints to strings", + tags: `json:"my_map"`, + kind: yamlMapping{ + keyKind: yamlNumber{}, + valueKind: yamlString{}, + }, + }, + }, + expected: `my_map: + 1: "string" + 1: "string" + 1: "string" +`, + }, + { + description: "sequence of maps of strings to Booleans", + input: []rawField{ + rawField{ + name: "mySeq", + jsonName: "my_seq", + doc: "mySeq is a complex type", + tags: `json:"my_seq"`, + kind: yamlSequence{ + elementKind: yamlMapping{ + keyKind: yamlString{}, + valueKind: yamlBool{}, + }, + }, + }, + }, + expected: `my_seq: + - + "string": true + "string": true + "string": true + - + "string": true + "string": true + "string": true + - + "string": true + "string": true + "string": true +`, + }, + { + description: "sequences of custom types", + input: []rawField{ + rawField{ + name: "labels", + jsonName: "labels", + doc: "labels is a list of labels", + tags: `json:"labels"`, + kind: yamlSequence{ + elementKind: yamlCustomType{ + name: "label", + declarationInfo: PackageInfo{ + DeclName: "label", + PackageName: "mypkg", + }, + }, + }, + }, + }, + expected: `labels: + - # [...] + - # [...] + - # [...] +`, + }, + { + description: "maps of strings to custom types", + input: []rawField{ + rawField{ + name: "labels", + jsonName: "labels", + doc: "labels is a map of strings to labels", + tags: `json:"labels"`, + kind: yamlMapping{ + keyKind: yamlString{}, + valueKind: yamlCustomType{ + name: "label", + declarationInfo: PackageInfo{ + DeclName: "label", + PackageName: "mypkg", + }, + }, + }, + }, + }, + expected: `labels: + "string": # [...] + "string": # [...] + "string": # [...] 
+`, + }, + } + + for _, c := range cases { + t.Run(c.description, func(t *testing.T) { + e, err := makeYAMLExample(c.input) + assert.NoError(t, err) + assert.Equal(t, c.expected, e) + }) + } +} + +func TestMakeSectionName(t *testing.T) { + cases := []struct { + description string + original string + expected string + }{ + { + description: "camel-case name", + original: "ServerSpec", + expected: "Server Spec", + }, + { + description: "camel-case name with three words", + original: "MyExcellentResource", + expected: "My Excellent Resource", + }, + { + description: "camel-case name with version", + original: "ServerSpecV2", + expected: "Server Spec V2", + }, + { + description: "abbreviation", + original: "SAMLConnector", + expected: "SAML Connector", + }, + { + description: "IdP", + original: "IdPSAMLOptions", + expected: "IdP SAML Options", + }, + } + + for _, c := range cases { + t.Run(c.description, func(t *testing.T) { + assert.Equal(t, c.expected, makeSectionName(c.original)) + }) + } +} diff --git a/build.assets/tooling/go.mod b/build.assets/tooling/go.mod index 3b3db97ae3c58..3855ae295db91 100644 --- a/build.assets/tooling/go.mod +++ b/build.assets/tooling/go.mod @@ -1,38 +1,51 @@ module github.com/gravitational/teleport/build.assets/tooling -go 1.24.3 +go 1.24.11 require ( + buf.build/go/bufplugin v0.9.0 github.com/Masterminds/sprig/v3 v3.3.0 github.com/alecthomas/kingpin/v2 v2.4.0 // replaced github.com/awalterschulze/goderive v0.5.1 - github.com/bmatcuk/doublestar/v4 v4.8.1 + github.com/bmatcuk/doublestar/v4 v4.9.0 github.com/coreos/go-semver v0.3.1 github.com/gogo/protobuf v1.3.2 github.com/google/go-github/v41 v41.0.0 + github.com/gravitational/teleport v0.0.0-00010101000000-000000000000 github.com/gravitational/trace v1.5.1 - github.com/stretchr/testify v1.10.0 + github.com/stretchr/testify v1.11.1 github.com/waigani/diffparser v0.0.0-20190828052634-7391f219313d - golang.org/x/mod v0.24.0 - golang.org/x/oauth2 v0.29.0 - helm.sh/helm/v3 v3.17.3 + 
golang.org/x/mod v0.29.0 + golang.org/x/oauth2 v0.30.0 + google.golang.org/protobuf v1.36.10 + helm.sh/helm/v3 v3.18.5 howett.net/plist v1.0.1 - k8s.io/apiextensions-apiserver v0.33.0 + k8s.io/apiextensions-apiserver v0.33.3 ) require ( - dario.cat/mergo v1.0.1 // indirect + buf.build/gen/go/bufbuild/bufplugin/protocolbuffers/go v1.36.3-20250121211742-6d880cc6cc8d.1 // indirect + buf.build/gen/go/bufbuild/protovalidate/protocolbuffers/go v1.36.10-20250912141014-52f32327d4b0.1 // indirect + buf.build/gen/go/pluginrpc/pluginrpc/protocolbuffers/go v1.36.3-20241007202033-cf42259fcbfc.1 // indirect + buf.build/go/protovalidate v0.14.0 // indirect + buf.build/go/spdx v0.2.0 // indirect + cel.dev/expr v0.24.0 // indirect + dario.cat/mergo v1.0.2 // indirect github.com/Masterminds/goutils v1.1.1 // indirect github.com/Masterminds/semver/v3 v3.3.0 // indirect github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 // indirect + github.com/antlr4-go/antlr/v4 v4.13.1 // indirect + github.com/bufbuild/protocompile v0.14.1 // indirect github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect - github.com/fxamacker/cbor/v2 v2.7.0 // indirect - github.com/go-logr/logr v1.4.2 // indirect + github.com/fxamacker/cbor/v2 v2.8.0 // indirect + github.com/go-logr/logr v1.4.3 // indirect + github.com/google/cel-go v0.25.0 // indirect github.com/google/go-querystring v1.1.0 // indirect github.com/google/uuid v1.6.0 // indirect github.com/huandu/xstrings v1.5.0 // indirect github.com/json-iterator/go v1.1.12 // indirect github.com/kisielk/gotool v1.0.0 // indirect + github.com/mattn/go-isatty v0.0.20 // indirect github.com/mitchellh/copystructure v1.2.0 // indirect github.com/mitchellh/reflectwalk v1.0.2 // indirect github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect @@ -40,23 +53,261 @@ require ( github.com/pkg/errors v0.9.1 // indirect github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect 
github.com/shopspring/decimal v1.4.0 // indirect - github.com/spf13/cast v1.7.0 // indirect + github.com/spf13/cast v1.7.1 // indirect + github.com/spf13/pflag v1.0.7 // indirect + github.com/stoewer/go-strcase v1.3.0 // indirect github.com/x448/float16 v0.8.4 // indirect github.com/xhit/go-str2duration/v2 v2.1.0 // indirect - golang.org/x/crypto v0.36.0 // indirect - golang.org/x/net v0.38.0 // indirect - golang.org/x/text v0.23.0 // indirect - golang.org/x/tools v0.26.0 // indirect + go.yaml.in/yaml/v2 v2.4.2 // indirect + golang.org/x/crypto v0.45.0 // indirect + golang.org/x/exp v0.0.0-20250620022241-b7579e27df2b // indirect + golang.org/x/net v0.47.0 // indirect + golang.org/x/sync v0.18.0 // indirect + golang.org/x/sys v0.38.0 // indirect + golang.org/x/text v0.31.0 // indirect + golang.org/x/tools v0.38.0 // indirect + google.golang.org/genproto/googleapis/api v0.0.0-20250818200422-3122310a409c // indirect + google.golang.org/genproto/googleapis/rpc v0.0.0-20250818200422-3122310a409c // indirect gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect gopkg.in/inf.v0 v0.9.1 // indirect gopkg.in/yaml.v3 v3.0.1 - k8s.io/apimachinery v0.33.0 // indirect + k8s.io/apimachinery v0.33.3 // indirect k8s.io/klog/v2 v2.130.1 // indirect - k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 // indirect + k8s.io/utils v0.0.0-20241210054802-24370beab758 // indirect + pluginrpc.com/pluginrpc v0.5.0 // indirect sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 // indirect sigs.k8s.io/randfill v1.0.0 // indirect sigs.k8s.io/structured-merge-diff/v4 v4.6.0 // indirect - sigs.k8s.io/yaml v1.4.0 // indirect + sigs.k8s.io/yaml v1.6.0 // indirect +) + +require ( + cloud.google.com/go/auth v0.16.5 // indirect + cloud.google.com/go/auth/oauth2adapt v0.2.8 // indirect + cloud.google.com/go/compute/metadata v0.8.0 // indirect + connectrpc.com/connect v1.18.1 // indirect + filippo.io/age v1.2.1 // indirect + github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.2 // indirect + 
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.11.0 // indirect + github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 // indirect + github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect + github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2 // indirect + github.com/BurntSushi/toml v1.5.0 // indirect + github.com/armon/go-radix v1.0.0 // indirect + github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect + github.com/aws/aws-sdk-go-v2 v1.39.6 // indirect + github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3 // indirect + github.com/aws/aws-sdk-go-v2/config v1.31.0 // indirect + github.com/aws/aws-sdk-go-v2/credentials v1.18.4 // indirect + github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.3 // indirect + github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.75 // indirect + github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.13 // indirect + github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.13 // indirect + github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 // indirect + github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.34 // indirect + github.com/aws/aws-sdk-go-v2/service/athena v1.50.4 // indirect + github.com/aws/aws-sdk-go-v2/service/bedrockruntime v1.41.0 // indirect + github.com/aws/aws-sdk-go-v2/service/glue v1.109.2 // indirect + github.com/aws/aws-sdk-go-v2/service/identitystore v1.28.2 // indirect + github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.0 // indirect + github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.1 // indirect + github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.3 // indirect + github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.15 // indirect + github.com/aws/aws-sdk-go-v2/service/organizations v1.38.3 // indirect + github.com/aws/aws-sdk-go-v2/service/s3 v1.79.3 // indirect + github.com/aws/aws-sdk-go-v2/service/sns v1.34.4 // indirect + github.com/aws/aws-sdk-go-v2/service/sqs v1.38.5 // indirect + 
github.com/aws/aws-sdk-go-v2/service/sso v1.28.0 // indirect + github.com/aws/aws-sdk-go-v2/service/ssoadmin v1.30.2 // indirect + github.com/aws/aws-sdk-go-v2/service/ssooidc v1.33.0 // indirect + github.com/aws/aws-sdk-go-v2/service/sts v1.37.0 // indirect + github.com/aws/smithy-go v1.23.2 // indirect + github.com/beevik/etree v1.5.1 // indirect + github.com/beorn7/perks v1.0.1 // indirect + github.com/blang/semver v3.5.1+incompatible // indirect + github.com/boombuler/barcode v1.0.1 // indirect + github.com/cenkalti/backoff/v4 v4.3.0 // indirect + github.com/cenkalti/backoff/v5 v5.0.2 // indirect + github.com/cespare/xxhash/v2 v2.3.0 // indirect + github.com/charlievieth/strcase v0.0.5 // indirect + github.com/crewjam/httperr v0.2.0 // indirect + github.com/crewjam/saml v0.4.14 // indirect + github.com/cyberphone/json-canonicalization v0.0.0-20231011164504-785e29786b46 // indirect + github.com/di-wu/parser v0.3.0 // indirect + github.com/di-wu/xsd-datetime v1.0.0 // indirect + github.com/digitorus/pkcs7 v0.0.0-20250730155240-ffadbf3f398c // indirect + github.com/digitorus/timestamp v0.0.0-20231217203849-220c5c2851b7 // indirect + github.com/dlclark/regexp2 v1.11.0 // indirect + github.com/elimity-com/scim v0.0.0-20240320110924-172bf2aee9c8 // indirect + github.com/emicklei/go-restful/v3 v3.11.3 // indirect + github.com/evanphx/json-patch/v5 v5.9.11 // indirect + github.com/felixge/httpsnoop v1.0.4 // indirect + github.com/fsnotify/fsnotify v1.9.0 // indirect + github.com/ghodss/yaml v1.0.0 // indirect + github.com/go-chi/chi/v5 v5.2.2 // indirect + github.com/go-jose/go-jose/v3 v3.0.4 // indirect + github.com/go-jose/go-jose/v4 v4.1.1 // indirect + github.com/go-logr/stdr v1.2.2 // indirect + github.com/go-openapi/analysis v0.23.0 // indirect + github.com/go-openapi/errors v0.22.2 // indirect + github.com/go-openapi/jsonpointer v0.21.0 // indirect + github.com/go-openapi/jsonreference v0.21.0 // indirect + github.com/go-openapi/loads v0.22.0 // indirect + 
github.com/go-openapi/runtime v0.28.0 // indirect + github.com/go-openapi/spec v0.21.0 // indirect + github.com/go-openapi/strfmt v0.23.0 // indirect + github.com/go-openapi/swag v0.23.1 // indirect + github.com/go-openapi/validate v0.24.0 // indirect + github.com/go-piv/piv-go/v2 v2.4.0 // indirect + github.com/go-viper/mapstructure/v2 v2.4.0 // indirect + github.com/go-webauthn/webauthn v0.11.2 // indirect + github.com/go-webauthn/x v0.1.20 // indirect + github.com/gobwas/httphead v0.1.0 // indirect + github.com/gobwas/pool v0.2.1 // indirect + github.com/gobwas/ws v1.4.0 // indirect + github.com/gofrs/flock v0.12.1 // indirect + github.com/golang-jwt/jwt/v4 v4.5.2 // indirect + github.com/golang-jwt/jwt/v5 v5.3.0 // indirect + github.com/golang/snappy v0.0.4 // indirect + github.com/google/btree v1.1.3 // indirect + github.com/google/certificate-transparency-go v1.3.1 // indirect + github.com/google/gnostic-models v0.6.9 // indirect + github.com/google/go-attestation v0.5.1 // indirect + github.com/google/go-cmp v0.7.0 // indirect + github.com/google/go-containerregistry v0.20.6 // indirect + github.com/google/go-github/v70 v70.0.0 // indirect + github.com/google/go-tpm v0.9.4 // indirect + github.com/google/go-tpm-tools v0.4.5 // indirect + github.com/google/go-tspi v0.3.0 // indirect + github.com/google/s2a-go v0.1.9 // indirect + github.com/google/safetext v0.0.0-20240104143208-7a7d9b3d812f // indirect + github.com/googleapis/enterprise-certificate-proxy v0.3.6 // indirect + github.com/googleapis/gax-go/v2 v2.15.0 // indirect + github.com/gorilla/securecookie v1.1.2 // indirect + github.com/gravitational/license v0.0.0-20250329001817-070456fa8ec1 // indirect + github.com/gravitational/roundtrip v1.0.3 // indirect + github.com/gravitational/teleport/api v0.0.0 // indirect + github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.1 // indirect + github.com/hashicorp/go-cleanhttp v0.5.2 // indirect + github.com/hashicorp/go-retryablehttp v0.7.8 // indirect + 
github.com/hashicorp/go-uuid v1.0.3 // indirect + github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect + github.com/in-toto/attestation v1.1.1 // indirect + github.com/in-toto/in-toto-golang v0.9.0 // indirect + github.com/inconshreveable/mousetrap v1.1.0 // indirect + github.com/jackc/pgpassfile v1.0.0 // indirect + github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect + github.com/jackc/pgx/v5 v5.7.4 // indirect + github.com/jackc/puddle/v2 v2.2.2 // indirect + github.com/jcmturner/aescts/v2 v2.0.0 // indirect + github.com/jcmturner/dnsutils/v2 v2.0.0 // indirect + github.com/jcmturner/gofork v1.7.6 // indirect + github.com/jcmturner/goidentity/v6 v6.0.1 // indirect + github.com/jcmturner/gokrb5/v8 v8.4.4 // indirect + github.com/jcmturner/rpc/v2 v2.0.3 // indirect + github.com/jedisct1/go-minisign v0.0.0-20230811132847-661be99b8267 // indirect + github.com/jonboulle/clockwork v0.5.0 // indirect + github.com/josharian/intern v1.0.0 // indirect + github.com/julienschmidt/httprouter v1.3.0 // indirect + github.com/kelseyhightower/envconfig v1.4.0 // indirect + github.com/klauspost/compress v1.18.0 // indirect + github.com/kr/pretty v0.3.1 // indirect + github.com/kr/text v0.2.0 // indirect + github.com/kylelemons/godebug v1.1.0 // indirect + github.com/letsencrypt/boulder v0.0.0-20240620165639-de9c06129bec // indirect + github.com/mailru/easyjson v0.9.0 // indirect + github.com/mattermost/xml-roundtrip-validator v0.1.0 // indirect + github.com/mitchellh/mapstructure v1.5.1-0.20231216201459-8508981c8b6c // indirect + github.com/moby/term v0.5.2 // indirect + github.com/muhlemmer/gu v0.3.1 // indirect + github.com/muhlemmer/httpforwarded v0.1.0 // indirect + github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect + github.com/ohler55/ojg v1.26.8 // indirect + github.com/oklog/ulid v1.3.1 // indirect + github.com/okta/okta-sdk-golang/v2 v2.20.0 // indirect + github.com/openai/openai-go v1.8.2 // indirect + 
github.com/opencontainers/go-digest v1.0.0 // indirect + github.com/opentracing/opentracing-go v1.2.0 // indirect + github.com/patrickmn/go-cache v2.1.1-0.20191004192108-46f407853014+incompatible // indirect + github.com/pelletier/go-toml/v2 v2.2.3 // indirect + github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c // indirect + github.com/pkoukk/tiktoken-go v0.1.8 // indirect + github.com/pquerna/otp v1.4.0 // indirect + github.com/prometheus/client_golang v1.23.0 // indirect + github.com/prometheus/client_model v0.6.2 // indirect + github.com/prometheus/common v0.65.0 // indirect + github.com/prometheus/procfs v0.16.1 // indirect + github.com/rogpeppe/go-internal v1.14.1 // indirect + github.com/rs/cors v1.11.1 // indirect + github.com/russellhaering/gosaml2 v0.10.0 // indirect + github.com/russellhaering/goxmldsig v1.5.0 // indirect + github.com/sagikazarmark/locafero v0.7.0 // indirect + github.com/sassoftware/relic v7.2.1+incompatible // indirect + github.com/scim2/filter-parser/v2 v2.2.0 // indirect + github.com/secure-systems-lab/go-securesystemslib v0.9.1 // indirect + github.com/shibumi/go-pathspec v1.3.0 // indirect + github.com/sigstore/protobuf-specs v0.5.0 // indirect + github.com/sigstore/rekor v1.4.1 // indirect + github.com/sigstore/sigstore v1.9.5 // indirect + github.com/sigstore/sigstore-go v0.7.1 // indirect + github.com/sigstore/timestamp-authority v1.2.5 // indirect + github.com/sijms/go-ora/v2 v2.8.24 // indirect + github.com/sirupsen/logrus v1.9.3 // indirect + github.com/sourcegraph/conc v0.3.0 // indirect + github.com/spf13/afero v1.12.0 // indirect + github.com/spf13/cobra v1.9.1 // indirect + github.com/spf13/viper v1.20.1 // indirect + github.com/spiffe/go-spiffe/v2 v2.5.0 // indirect + github.com/stretchr/objx v0.5.2 // indirect + github.com/subosito/gotenv v1.6.0 // indirect + github.com/theupdateframework/go-tuf v0.7.0 // indirect + github.com/theupdateframework/go-tuf/v2 v2.0.2 // indirect + github.com/tidwall/gjson v1.14.4 // 
indirect + github.com/tidwall/match v1.1.1 // indirect + github.com/tidwall/pretty v1.2.1 // indirect + github.com/tidwall/sjson v1.2.5 // indirect + github.com/titanous/rocacheck v0.0.0-20171023193734-afe73141d399 // indirect + github.com/transparency-dev/merkle v0.0.2 // indirect + github.com/vulcand/predicate v1.2.0 // indirect + github.com/xdg-go/pbkdf2 v1.0.0 // indirect + github.com/xdg-go/scram v1.1.2 // indirect + github.com/xdg-go/stringprep v1.0.4 // indirect + github.com/zeebo/errs v1.4.0 // indirect + github.com/zitadel/logging v0.6.2 // indirect + github.com/zitadel/oidc/v3 v3.43.0 // indirect + github.com/zitadel/schema v1.3.1 // indirect + gitlab.com/gitlab-org/api/client-go v0.127.0 // indirect + go.mongodb.org/mongo-driver v1.17.4 // indirect + go.opentelemetry.io/auto/sdk v1.1.0 // indirect + go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 // indirect + go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 // indirect + go.opentelemetry.io/otel v1.37.0 // indirect + go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.37.0 // indirect + go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.37.0 // indirect + go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.37.0 // indirect + go.opentelemetry.io/otel/metric v1.37.0 // indirect + go.opentelemetry.io/otel/sdk v1.37.0 // indirect + go.opentelemetry.io/otel/trace v1.37.0 // indirect + go.opentelemetry.io/proto/otlp v1.7.0 // indirect + go.uber.org/multierr v1.11.0 // indirect + go.uber.org/zap v1.27.0 // indirect + golang.org/x/term v0.37.0 // indirect + golang.org/x/time v0.12.0 // indirect + google.golang.org/api v0.248.0 // indirect + google.golang.org/grpc v1.75.0 // indirect + google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.5.1 // indirect + gopkg.in/yaml.v2 v2.4.0 // indirect + k8s.io/api v0.33.3 // indirect + k8s.io/client-go v0.33.3 // indirect + k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff // indirect 
+ mvdan.cc/sh/v3 v3.7.0 // indirect + sigs.k8s.io/controller-runtime v0.20.4 // indirect ) replace github.com/alecthomas/kingpin/v2 => github.com/gravitational/kingpin/v2 v2.1.11-0.20230515143221-4ec6b70ecd33 + +replace github.com/gravitational/teleport => ../.. + +replace github.com/gravitational/teleport/api => ../../api + +replace github.com/vulcand/predicate => github.com/gravitational/predicate v1.3.4 diff --git a/build.assets/tooling/go.sum b/build.assets/tooling/go.sum index 300c74177a2b3..14691330be42b 100644 --- a/build.assets/tooling/go.sum +++ b/build.assets/tooling/go.sum @@ -1,59 +1,569 @@ -dario.cat/mergo v1.0.1 h1:Ra4+bf83h2ztPIQYNP99R6m+Y7KfnARDfID+a+vLl4s= -dario.cat/mergo v1.0.1/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk= +buf.build/gen/go/bufbuild/bufplugin/protocolbuffers/go v1.36.3-20250121211742-6d880cc6cc8d.1 h1:1v+ez1GRKKKdI1IwDDQqV98lGKo8489+Ekql+prUW6c= +buf.build/gen/go/bufbuild/bufplugin/protocolbuffers/go v1.36.3-20250121211742-6d880cc6cc8d.1/go.mod h1:MYDFm9IHRP085R5Bis68mLc0mIqp5Q27Uk4o8YXjkAI= +buf.build/gen/go/bufbuild/protovalidate/protocolbuffers/go v1.36.10-20250912141014-52f32327d4b0.1 h1:31on4W/yPcV4nZHL4+UCiCvLPsMqe/vJcNg8Rci0scc= +buf.build/gen/go/bufbuild/protovalidate/protocolbuffers/go v1.36.10-20250912141014-52f32327d4b0.1/go.mod h1:fUl8CEN/6ZAMk6bP8ahBJPUJw7rbp+j4x+wCcYi2IG4= +buf.build/gen/go/pluginrpc/pluginrpc/protocolbuffers/go v1.36.3-20241007202033-cf42259fcbfc.1 h1:NOipq02MS20WQCr6rfAG1o0n2AuQnY4Xg9avLl16csA= +buf.build/gen/go/pluginrpc/pluginrpc/protocolbuffers/go v1.36.3-20241007202033-cf42259fcbfc.1/go.mod h1:jceo5esD5zSbflHHGad57RXzBpRrcPaiLrLQRA+Mbec= +buf.build/go/bufplugin v0.9.0 h1:ktZJNP3If7ldcWVqh46XKeiYJVPxHQxCfjzVQDzZ/lo= +buf.build/go/bufplugin v0.9.0/go.mod h1:Z0CxA3sKQ6EPz/Os4kJJneeRO6CjPeidtP1ABh5jPPY= +buf.build/go/protovalidate v0.14.0 h1:kr/rC/no+DtRyYX+8KXLDxNnI1rINz0imk5K44ZpZ3A= +buf.build/go/protovalidate v0.14.0/go.mod h1:+F/oISho9MO7gJQNYC2VWLzcO1fTPmaTA08SDYJZncA= 
+buf.build/go/spdx v0.2.0 h1:IItqM0/cMxvFJJumcBuP8NrsIzMs/UYjp/6WSpq8LTw= +buf.build/go/spdx v0.2.0/go.mod h1:bXdwQFem9Si3nsbNy8aJKGPoaPi5DKwdeEp5/ArZ6w8= +c2sp.org/CCTV/age v0.0.0-20240306222714-3ec4d716e805 h1:u2qwJeEvnypw+OCPUHmoZE3IqwfuN5kgDfo5MLzpNM0= +c2sp.org/CCTV/age v0.0.0-20240306222714-3ec4d716e805/go.mod h1:FomMrUJ2Lxt5jCLmZkG3FHa72zUprnhd3v/Z18Snm4w= +cel.dev/expr v0.24.0 h1:56OvJKSH3hDGL0ml5uSxZmz3/3Pq4tJ+fb1unVLAFcY= +cel.dev/expr v0.24.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw= +cloud.google.com/go v0.121.6 h1:waZiuajrI28iAf40cWgycWNgaXPO06dupuS+sgibK6c= +cloud.google.com/go v0.121.6/go.mod h1:coChdst4Ea5vUpiALcYKXEpR1S9ZgXbhEzzMcMR66vI= +cloud.google.com/go/alloydb v1.16.1 h1:pW4D0O2jAfAjoOEI1bgChPwMHWE8X8BjwSO0tfWkWvk= +cloud.google.com/go/alloydb v1.16.1/go.mod h1:zeZuGJ5mEaQE70FMXEvZIp5hQLR9yrGnHo1YUOncWRY= +cloud.google.com/go/auth v0.16.5 h1:mFWNQ2FEVWAliEQWpAdH80omXFokmrnbDhUS9cBywsI= +cloud.google.com/go/auth v0.16.5/go.mod h1:utzRfHMP+Vv0mpOkTRQoWD2q3BatTOoWbA7gCc2dUhQ= +cloud.google.com/go/auth/oauth2adapt v0.2.8 h1:keo8NaayQZ6wimpNSmW5OPc283g65QNIiLpZnkHRbnc= +cloud.google.com/go/auth/oauth2adapt v0.2.8/go.mod h1:XQ9y31RkqZCcwJWNSx2Xvric3RrU88hAYYbjDWYDL+c= +cloud.google.com/go/compute v1.38.0 h1:MilCLYQW2m7Dku8hRIIKo4r0oKastlD74sSu16riYKs= +cloud.google.com/go/compute v1.38.0/go.mod h1:oAFNIuXOmXbK/ssXm3z4nZB8ckPdjltJ7xhHCdbWFZM= +cloud.google.com/go/compute/metadata v0.8.0 h1:HxMRIbao8w17ZX6wBnjhcDkW6lTFpgcaobyVfZWqRLA= +cloud.google.com/go/compute/metadata v0.8.0/go.mod h1:sYOGTp851OV9bOFJ9CH7elVvyzopvWQFNNghtDQ/Biw= +cloud.google.com/go/container v1.43.0 h1:A6J92FJPfxTvyX7MHF+w4t2W9WCqvHOi9UB5SAeSy3w= +cloud.google.com/go/container v1.43.0/go.mod h1:ETU9WZ1KM9ikEKLzrhRVao7KHtalDQu6aPqM34zDr/U= +cloud.google.com/go/iam v1.5.2 h1:qgFRAGEmd8z6dJ/qyEchAuL9jpswyODjA2lS+w234g8= +cloud.google.com/go/iam v1.5.2/go.mod h1:SE1vg0N81zQqLzQEwxL2WI6yhetBdbNQuTvIKCSkUHE= +cloud.google.com/go/kms v1.22.0 
h1:dBRIj7+GDeeEvatJeTB19oYZNV0aj6wEqSIT/7gLqtk= +cloud.google.com/go/kms v1.22.0/go.mod h1:U7mf8Sva5jpOb4bxYZdtw/9zsbIjrklYwPcvMk34AL8= +cloud.google.com/go/longrunning v0.6.7 h1:IGtfDWHhQCgCjwQjV9iiLnUta9LBCo8R9QmAFsS/PrE= +cloud.google.com/go/longrunning v0.6.7/go.mod h1:EAFV3IZAKmM56TyiE6VAP3VoTzhZzySwI/YI1s/nRsY= +cloud.google.com/go/resourcemanager v1.10.6 h1:LIa8kKE8HF71zm976oHMqpWFiaDHVw/H1YMO71lrGmo= +cloud.google.com/go/resourcemanager v1.10.6/go.mod h1:VqMoDQ03W4yZmxzLPrB+RuAoVkHDS5tFUUQUhOtnRTg= +connectrpc.com/connect v1.18.1 h1:PAg7CjSAGvscaf6YZKUefjoih5Z/qYkyaTrBW8xvYPw= +connectrpc.com/connect v1.18.1/go.mod h1:0292hj1rnx8oFrStN7cB4jjVBeqs+Yx5yDIC2prWDO8= +dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8= +dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA= +filippo.io/age v1.2.1 h1:X0TZjehAZylOIj4DubWYU1vWQxv9bJpo+Uu2/LGhi1o= +filippo.io/age v1.2.1/go.mod h1:JL9ew2lTN+Pyft4RiNGguFfOpewKwSHm5ayKD/A4004= +filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA= +filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4= github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 h1:bvDV9vkmnHYOMsOr4WLk+Vo07yKIzd94sVoIqshQ4bU= github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24/go.mod h1:8o94RPi1/7XTJvwPpRSzSUedZrtlirdB3r9Z20bi2f8= +github.com/AdamKorcz/go-fuzz-headers-1 v0.0.0-20230919221257-8b5d3ce2d11d h1:zjqpY4C7H15HjRPEenkS4SAn3Jy2eRRjkjZbGR30TOg= +github.com/AdamKorcz/go-fuzz-headers-1 v0.0.0-20230919221257-8b5d3ce2d11d/go.mod h1:XNqJ7hv2kY++g8XEHREpi+JqZo3+0l+CH2egBVN4yqM= +github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.2 h1:Hr5FTipp7SL07o2FvoVOX9HRiRH3CR3Mj8pxqCcdD5A= +github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.2/go.mod h1:QyVsSSN64v5TGltphKLQ2sQxe4OBQg0J1eKRcVBnfgE= +github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.11.0 h1:MhRfI58HblXzCtWEZCO0feHs8LweePB3s90r7WaR1KU= 
+github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.11.0/go.mod h1:okZ+ZURbArNdlJ+ptXoyHNuOETzOl1Oww19rm8I2WLA= +github.com/Azure/azure-sdk-for-go/sdk/azidentity/cache v0.3.2 h1:yz1bePFlP5Vws5+8ez6T3HWXPmwOK7Yvq8QxDBD3SKY= +github.com/Azure/azure-sdk-for-go/sdk/azidentity/cache v0.3.2/go.mod h1:Pa9ZNPuoNu/GztvBSKk9J1cDJW6vk/n0zLtV4mgd8N8= +github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 h1:9iefClla7iYpfYWdzPCRDozdmndjTm8DXdpCzPajMgA= +github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2/go.mod h1:XtLgD3ZD34DAaVIIAyG3objl5DynM3CQ/vMcbBNJZGI= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/authorization/armauthorization/v2 v2.2.0 h1:Hp+EScFOu9HeCbeW8WU2yQPJd4gGwhMgKxWe+G6jNzw= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/authorization/armauthorization/v2 v2.2.0/go.mod h1:/pz8dyNQe+Ey3yBp/XuYz7oqX8YDNWVpPB0hH3XWfbc= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/compute/armcompute/v6 v6.4.0 h1:z7Mqz6l0EFH549GvHEqfjKvi+cRScxLWbaoeLm9wxVQ= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/compute/armcompute/v6 v6.4.0/go.mod h1:v6gbfH+7DG7xH2kUNs+ZJ9tF6O3iNnR85wMtmr+F54o= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v6 v6.6.0 h1:xkWEcbsnJWid3rOf/S/LOHy1I55JA+4kw/f8Tnm+Onc= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v6 v6.6.0/go.mod h1:OWKfCmX4X3Vp2w7GSx1LZn8566tOHJBA6K0IAUVNYx0= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/msi/armmsi v1.2.0 h1:z4YeiSXxnUI+PqB46Yj6MZA3nwb1CcJIkEMDrzUd8Cs= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/msi/armmsi v1.2.0/go.mod h1:rko9SzMxcMk0NJsNAxALEGaTYyy79bNRwxgJfrH0Spw= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/mysql/armmysql v1.2.0 h1:dhywcZH9yPDIje9aTqwy6psZSPzI6CJLYEprDahIBSQ= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/mysql/armmysql v1.2.0/go.mod h1:6z3b+JdBLH0eMzfBex/cvEIoEFVEwXuB0wbgdfN11iM= 
+github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/mysql/armmysqlflexibleservers v1.2.0 h1:3jDMffAwnvs6qmOqhjNVHB29AKxs6brnzJeo65E1YwM= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/mysql/armmysqlflexibleservers v1.2.0/go.mod h1:0mKVz3WT8oNjBunT1zD/HPwMleQ72QClMa7Gmsm+6Kc= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/postgresql/armpostgresql v1.2.0 h1:0hXKrsbh2M6CQyW0TDC9Bsyd99vQmrOxiBTUfQHZjPA= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/postgresql/armpostgresql v1.2.0/go.mod h1:bvZZor36Jg9q9kouuMyfJ+ay77+qK+YUfThXH1FdXjU= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/postgresql/armpostgresqlflexibleservers v1.1.0 h1:HzqcSJWx32XQdr8KtxAu/SZJj0PqDo9tKf2YGPdynV0= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/postgresql/armpostgresqlflexibleservers v1.1.0/go.mod h1:nKcJObAisSPDrO9lMuuCBoYY7Ki7ADt8p6XmBhpKNTk= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/redis/armredis/v3 v3.3.0 h1:EkL5dmUoy1OlzVfsbkcHayOvOJgheyRYL3wM/RHizzg= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/redis/armredis/v3 v3.3.0/go.mod h1:DiazWkJHUUKUZGpIdV7JhDTjebBxdfsJ386dE5w7G3o= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/redisenterprise/armredisenterprise v1.2.0 h1:hTmVmyvriwO+ymGLEsH7HZokVwinC2MZl8F0LjvPdHU= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/redisenterprise/armredisenterprise v1.2.0/go.mod h1:uHEpZj4TWSZEp35rIByJ8RX7hQBm3bxfPxS4tiz+x+g= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/sql/armsql v1.2.0 h1:S087deZ0kP1RUg4pU7w9U9xpUedTCbOtz+mnd0+hrkQ= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/sql/armsql v1.2.0/go.mod h1:B4cEyXrWBmbfMDAPnpJ1di7MAt5DKP57jPEObAvZChg= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/subscription/armsubscription v1.2.0 h1:UrGzkHueDwAWDdjQxC+QaXHd4tVCkISYE9j7fSSXF8k= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/subscription/armsubscription v1.2.0/go.mod h1:qskvSQeW+cxEE2bcKYyKimB1/KiQ9xpJ99bcHY0BX6c= 
+github.com/Azure/azure-sdk-for-go/sdk/security/keyvault/azkeys v1.4.0 h1:E4MgwLBGeVB5f2MdcIVD3ELVAWpr+WD6MUe1i+tM/PA= +github.com/Azure/azure-sdk-for-go/sdk/security/keyvault/azkeys v1.4.0/go.mod h1:Y2b/1clN4zsAoUd/pgNAQHjLDnTis/6ROkUfyob6psM= +github.com/Azure/azure-sdk-for-go/sdk/security/keyvault/internal v1.2.0 h1:nCYfgcSyHZXJI8J0IWE5MsCGlb2xp9fJiXyxWgmOFg4= +github.com/Azure/azure-sdk-for-go/sdk/security/keyvault/internal v1.2.0/go.mod h1:ucUjca2JtSZboY8IoUqyQyuuXvwbMBVwFOm0vdQPNhA= +github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c h1:udKWzYgxTojEKWjV8V+WSxDXJ4NFATAsZjh8iIbsQIg= +github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E= +github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358 h1:mFRzDkZVAjdal+s7s0MwaRv9igoPqLRdzOLzw/8Xvq8= +github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358/go.mod h1:chxPXzSsl7ZWRAuOIE23GDNzjWuZquvFlgA8xmpunjU= +github.com/AzureAD/microsoft-authentication-extensions-for-go/cache v0.1.1 h1:WJTmL004Abzc5wDB5VtZG2PJk5ndYDgVacGqfirKxjM= +github.com/AzureAD/microsoft-authentication-extensions-for-go/cache v0.1.1/go.mod h1:tCcJZ0uHAmvjsVYzEFivsRTN00oz5BEsRgQHu5JZ9WE= +github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2 h1:oygO0locgZJe7PpYPXT5A29ZkwJaPqcva7BVeemZOZs= +github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI= +github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg= +github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho= +github.com/MakeNowJust/heredoc v1.0.0 h1:cXCdzVdstXyiTqTvfqk9SDHpKNjxuom+DOlyEeQ4pzQ= +github.com/MakeNowJust/heredoc v1.0.0/go.mod h1:mG5amYoWBHf8vpLOuehzbGGw0EHxpZZ6lCpQ4fNJ8LE= github.com/Masterminds/goutils v1.1.1 h1:5nUrii3FMTL5diU80unEVvNevw1nH4+ZV4DSLVJLSYI= github.com/Masterminds/goutils v1.1.1/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU= 
github.com/Masterminds/semver/v3 v3.3.0 h1:B8LGeaivUe71a5qox1ICM/JLl0NqZSW5CHyL+hmvYS0= github.com/Masterminds/semver/v3 v3.3.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM= github.com/Masterminds/sprig/v3 v3.3.0 h1:mQh0Yrg1XPo6vjYXgtf5OtijNAKJRNcTdOOGZe3tPhs= github.com/Masterminds/sprig/v3 v3.3.0/go.mod h1:Zy1iXRYNqNLUolqCpL4uhk6SHUMAOSCzdgBfDb35Lz0= +github.com/Masterminds/squirrel v1.5.4 h1:uUcX/aBc8O7Fg9kaISIUsHXdKuqehiXAMQTYX8afzqM= +github.com/Masterminds/squirrel v1.5.4/go.mod h1:NNaOrjSoIDfDA40n7sr2tPNZRfjzjA400rg+riTZj10= +github.com/ThalesIgnite/crypto11 v1.2.5 h1:1IiIIEqYmBvUYFeMnHqRft4bwf/O36jryEUpY+9ef8E= +github.com/ThalesIgnite/crypto11 v1.2.5/go.mod h1:ILDKtnCKiQ7zRoNxcp36Y1ZR8LBPmR2E23+wTQe/MlE= github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 h1:s6gZFSlWYmbqAuRjVTiNNhvNRfY2Wxp9nhfyel4rklc= github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137/go.mod h1:OMCwj8VM1Kc9e19TLln2VL61YJF0x1XFtfdL4JdbSyE= +github.com/alessio/shellescape v1.4.1 h1:V7yhSDDn8LP4lc4jS8pFkt0zCnzVJlG5JXy9BVKJUX0= +github.com/alessio/shellescape v1.4.1/go.mod h1:PZAiSCk0LJaZkiCSkPv8qIobYglO3FPpyFjDCtHLS30= +github.com/antlr4-go/antlr/v4 v4.13.1 h1:SqQKkuVZ+zWkMMNkjy5FZe5mr5WURWnlpmOuzYWrPrQ= +github.com/antlr4-go/antlr/v4 v4.13.1/go.mod h1:GKmUxMtwp6ZgGwZSva4eWPC5mS6vUAmOABFgjdkM7Nw= +github.com/armon/go-radix v1.0.0 h1:F4z6KzEeeQIMeLFa97iZU6vupzoecKdU5TX24SNppXI= +github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8= +github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 h1:DklsrG3dyBCFEj5IhUbnKptjxatkF07cF2ak3yi77so= +github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2/go.mod h1:WaHUgvxTVq04UNunO+XhnAqY/wQc+bxr74GqbsZ/Jqw= github.com/awalterschulze/goderive v0.5.1 h1:H2XNRDw0Ordwj/pgLAVqQgDC1LQWP7L99ofOs72bjqg= github.com/awalterschulze/goderive v0.5.1/go.mod h1:DLlff0SRVo846CBrp8nXuXJ4mdWA92ai5CYTr+LV/II= -github.com/bmatcuk/doublestar/v4 v4.8.1 
h1:54Bopc5c2cAvhLRAzqOGCYHYyhcDHsFF4wWIR5wKP38= -github.com/bmatcuk/doublestar/v4 v4.8.1/go.mod h1:xBQ8jztBU6kakFMg+8WGxn0c6z1fTSPVIjEY1Wr7jzc= +github.com/aws/aws-sdk-go v1.55.7 h1:UJrkFq7es5CShfBwlWAC8DA077vp8PyVbQd3lqLiztE= +github.com/aws/aws-sdk-go v1.55.7/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU= +github.com/aws/aws-sdk-go-v2 v1.39.6 h1:2JrPCVgWJm7bm83BDwY5z8ietmeJUbh3O2ACnn+Xsqk= +github.com/aws/aws-sdk-go-v2 v1.39.6/go.mod h1:c9pm7VwuW0UPxAEYGyTmyurVcNrbF6Rt/wixFqDhcjE= +github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3 h1:DHctwEM8P8iTXFxC/QK0MRjwEpWQeM9yzidCRjldUz0= +github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3/go.mod h1:xdCzcZEtnSTKVDOmUZs4l/j3pSV6rpo1WXl5ugNsL8Y= +github.com/aws/aws-sdk-go-v2/config v1.31.0 h1:9yH0xiY5fUnVNLRWO0AtayqwU1ndriZdN78LlhruJR4= +github.com/aws/aws-sdk-go-v2/config v1.31.0/go.mod h1:VeV3K72nXnhbe4EuxxhzsDc/ByrCSlZwUnWH52Nde/I= +github.com/aws/aws-sdk-go-v2/credentials v1.18.4 h1:IPd0Algf1b+Qy9BcDp0sCUcIWdCQPSzDoMK3a8pcbUM= +github.com/aws/aws-sdk-go-v2/credentials v1.18.4/go.mod h1:nwg78FjH2qvsRM1EVZlX9WuGUJOL5od+0qvm0adEzHk= +github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.3 h1:GicIdnekoJsjq9wqnvyi2elW6CGMSYKhdozE7/Svh78= +github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.3/go.mod h1:R7BIi6WNC5mc1kfRM7XM/VHC3uRWkjc396sfabq4iOo= +github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.75 h1:S61/E3N01oral6B3y9hZ2E1iFDqCZPPOBoBQretCnBI= +github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.75/go.mod h1:bDMQbkI1vJbNjnvJYpPTSNYBkI/VIv18ngWb/K84tkk= +github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.13 h1:a+8/MLcWlIxo1lF9xaGt3J/u3yOZx+CdSveSNwjhD40= +github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.13/go.mod h1:oGnKwIYZ4XttyU2JWxFrwvhF6YKiK/9/wmE3v3Iu9K8= +github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.13 h1:HBSI2kDkMdWz4ZM7FjwE7e/pWDEZ+nR95x8Ztet1ooY= +github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.13/go.mod h1:YE94ZoDArI7awZqJzBAZ3PDD2zSfuP7w6P2knOzIn8M= 
+github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 h1:bIqFDwgGXXN1Kpp99pDOdKMTTb5d2KyU5X/BZxjOkRo= +github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3/go.mod h1:H5O/EsxDWyU+LP/V8i5sm8cxoZgc2fdNR9bxlOFrQTo= +github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.34 h1:ZNTqv4nIdE/DiBfUUfXcLZ/Spcuz+RjeziUtNJackkM= +github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.34/go.mod h1:zf7Vcd1ViW7cPqYWEHLHJkS50X0JS2IKz9Cgaj6ugrs= +github.com/aws/aws-sdk-go-v2/service/athena v1.50.4 h1:QWhxjrA0r+FQnDAATdGqLXUvYW0MdUIvCBK89BN3OfU= +github.com/aws/aws-sdk-go-v2/service/athena v1.50.4/go.mod h1:xsG8Y2fMenmHTdukyknTUO1uQhEZ/entaNHvPmD1klE= +github.com/aws/aws-sdk-go-v2/service/bedrockruntime v1.41.0 h1:xdYdX+JpIFByMG8JQe9iWM9CqepyjhenukxTVQnuCbM= +github.com/aws/aws-sdk-go-v2/service/bedrockruntime v1.41.0/go.mod h1:c1Ik+59wgLIJFhsSY8cAnw6QooiogpTZKP0rtkVcpCQ= +github.com/aws/aws-sdk-go-v2/service/ec2 v1.213.0 h1:9nUhN6dRT2chbA7E9y3JDGpIV1C7cZpfiRvX63EB5XA= +github.com/aws/aws-sdk-go-v2/service/ec2 v1.213.0/go.mod h1:ouvGEfHbLaIlWwpDpOVWPWR+YwO0HDv3vm5tYLq8ImY= +github.com/aws/aws-sdk-go-v2/service/ec2instanceconnect v1.28.2 h1:se3+XU16LNr8JoHdJBrBNJKvn1dnJcnW3qRlo5g2vKI= +github.com/aws/aws-sdk-go-v2/service/ec2instanceconnect v1.28.2/go.mod h1:OCIzmvYHkq7q6zRwmTyBjWSsE4EfLRtbEoAEgY+iFD4= +github.com/aws/aws-sdk-go-v2/service/ecs v1.56.2 h1:oYHra2ttm7jOSY/wfuTeEnH164O6Eo3AuygreQKa+Gg= +github.com/aws/aws-sdk-go-v2/service/ecs v1.56.2/go.mod h1:wAtdeFanDuF9Re/ge4DRDaYe3Wy1OGrU7jG042UcuI4= +github.com/aws/aws-sdk-go-v2/service/eks v1.64.0 h1:EYeOThTRysemFtC6J6h6b7dNg3jN03QuO5cg92ojIQE= +github.com/aws/aws-sdk-go-v2/service/eks v1.64.0/go.mod h1:v1xXy6ea0PHtWkjFUvAUh6B/5wv7UF909Nru0dOIJDk= +github.com/aws/aws-sdk-go-v2/service/elasticache v1.46.0 h1:UficfhqlA7k0zQ/x9pNKmyIIeHfvJUfdbzOQJKGJkt8= +github.com/aws/aws-sdk-go-v2/service/elasticache v1.46.0/go.mod h1:477YEP4FkrM0oUcw+w4vk4+XTB7WacLzPGPFj69kwkg= +github.com/aws/aws-sdk-go-v2/service/glue v1.109.2 
h1:cp6rvdJiV36VupuDMvrdnILZXctf6BANWzKtv4nA4xQ= +github.com/aws/aws-sdk-go-v2/service/glue v1.109.2/go.mod h1:6FqWCqW0Py6VOvY42NQyf9e7N+sNVnDEiHFklCCCoQc= +github.com/aws/aws-sdk-go-v2/service/iam v1.41.1 h1:Kq3R+K49y23CGC5UQF3Vpw5oZEQk5gF/nn+MekPD0ZY= +github.com/aws/aws-sdk-go-v2/service/iam v1.41.1/go.mod h1:mPJkGQzeCoPs82ElNILor2JzZgYENr4UaSKUT8K27+c= +github.com/aws/aws-sdk-go-v2/service/identitystore v1.28.2 h1:hWqvzMaaiHhwndQhy1rF/qoHidaa4KzevkiDaMvjk3Q= +github.com/aws/aws-sdk-go-v2/service/identitystore v1.28.2/go.mod h1:7nGvrQXBNp7k5yYpwpmxGucYTPY39d0cxjmANAeWwYE= +github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.0 h1:6+lZi2JeGKtCraAj1rpoZfKqnQ9SptseRZioejfUOLM= +github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.0/go.mod h1:eb3gfbVIxIoGgJsi9pGne19dhCBpK6opTYpQqAmdy44= +github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.1 h1:4nm2G6A4pV9rdlWzGMPv4BNtQp22v1hg3yrtkYpeLl8= +github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.1/go.mod h1:iu6FSzgt+M2/x3Dk8zhycdIcHjEFb36IS8HVUVFoMg0= +github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.3 h1:ieRzyHXypu5ByllM7Sp4hC5f/1Fy5wqxqY0yB85hC7s= +github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.3/go.mod h1:O5ROz8jHiOAKAwx179v+7sHMhfobFVi6nZt8DEyiYoM= +github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.15 h1:moLQUoVq91LiqT1nbvzDukyqAlCv89ZmwaHw/ZFlFZg= +github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.15/go.mod h1:ZH34PJUc8ApjBIfgQCFvkWcUDBtl/WTD+uiYHjd8igA= +github.com/aws/aws-sdk-go-v2/service/kms v1.44.0 h1:Z95XCqqSnwXr0AY7PgsiOUBhUG2GoDM5getw6RfD1Lg= +github.com/aws/aws-sdk-go-v2/service/kms v1.44.0/go.mod h1:DqcSngL7jJeU1fOzh5Ll5rSvX/MlMV6OZlE4mVdFAQc= +github.com/aws/aws-sdk-go-v2/service/memorydb v1.27.0 h1:ggjjmfNX+nlv+nWHXOLr1pl36buP25Y9GZBEPMSofGw= +github.com/aws/aws-sdk-go-v2/service/memorydb v1.27.0/go.mod h1:pfuDC5zBwunXdE44WT1PRbtzuXWGohKFcFLtv+ezI6k= +github.com/aws/aws-sdk-go-v2/service/opensearch 
v1.46.3 h1:vWClqL1dTCuPtWkaGDW7Y6P9ocqHtfFrjlkWYARm1qI= +github.com/aws/aws-sdk-go-v2/service/opensearch v1.46.3/go.mod h1:51rUy2+lDiOQVlekScV044he709HMMhCdUDHqSBojgg= +github.com/aws/aws-sdk-go-v2/service/organizations v1.38.3 h1:rAUHsUFmux71j/4wQ5nUHsXyJxSMRgMlDnmFfahDhSk= +github.com/aws/aws-sdk-go-v2/service/organizations v1.38.3/go.mod h1:iYC/SPpI4WveHr4ZzPFWTmXRODyJub5Aif75W7Ll+yM= +github.com/aws/aws-sdk-go-v2/service/rds v1.95.0 h1:7KmQEDuz6XWafMaeIahplfGSEakzX4RMSrNHyvhkEq8= +github.com/aws/aws-sdk-go-v2/service/rds v1.95.0/go.mod h1:CXiHj5rVyQ5Q3zNSoYzwaJfWm8IGDweyyCGfO8ei5fQ= +github.com/aws/aws-sdk-go-v2/service/redshift v1.54.3 h1:LNOKEsPjtoBrV2WYUb2zPLOOtD5sKt907LZ/h0cYHSk= +github.com/aws/aws-sdk-go-v2/service/redshift v1.54.3/go.mod h1:TC8pNvjiikrjpX2MEzX/cEJ4/T4XIoSY4BskVvHj8bk= +github.com/aws/aws-sdk-go-v2/service/redshiftserverless v1.27.1 h1:+nldYGx2Kdn7jzntjm2fG7sRonfQ08l1RqkLSTed8G8= +github.com/aws/aws-sdk-go-v2/service/redshiftserverless v1.27.1/go.mod h1:gpRsJN3qxZbsj1NhAoCNX02zJ4RZUB5v/7o4QrnGTcA= +github.com/aws/aws-sdk-go-v2/service/resourcegroupstaggingapi v1.26.4 h1:QqXnA7s6sxFe6B6dkocEfZ9ap1bAmEXp4W32n9n+cmU= +github.com/aws/aws-sdk-go-v2/service/resourcegroupstaggingapi v1.26.4/go.mod h1:cgPfPTC/V3JqwCKed7Q6d0FrgarV7ltz4Bz6S4Q+Dqk= +github.com/aws/aws-sdk-go-v2/service/rolesanywhere v1.17.2 h1:0LBxtAX2bHcfPr6VSzQSvJlR1nzlna7xp031gEjbWGU= +github.com/aws/aws-sdk-go-v2/service/rolesanywhere v1.17.2/go.mod h1:NW+LcIadUUlDgM3gb8+97lr6zSKExHR58NRRWSWkXl8= +github.com/aws/aws-sdk-go-v2/service/s3 v1.79.3 h1:BRXS0U76Z8wfF+bnkilA2QwpIch6URlm++yPUt9QPmQ= +github.com/aws/aws-sdk-go-v2/service/s3 v1.79.3/go.mod h1:bNXKFFyaiVvWuR6O16h/I1724+aXe/tAkA9/QS01t5k= +github.com/aws/aws-sdk-go-v2/service/secretsmanager v1.35.4 h1:EKXYJ8kgz4fiqef8xApu7eH0eae2SrVG+oHCLFybMRI= +github.com/aws/aws-sdk-go-v2/service/secretsmanager v1.35.4/go.mod h1:yGhDiLKguA3iFJYxbrQkQiNzuy+ddxesSZYWVeeEH5Q= +github.com/aws/aws-sdk-go-v2/service/sns v1.34.4 
h1:ihddI5wufQQCJiujUgAvWRqZcfDmSKIfXlAuX7T95cg= +github.com/aws/aws-sdk-go-v2/service/sns v1.34.4/go.mod h1:PJtxxMdj747j8DeZENRTTYAz/lx/pADn/U0k7YNNiUY= +github.com/aws/aws-sdk-go-v2/service/sqs v1.38.5 h1:KNgVWw8qbPzjYnIF1gL0EAszy6VKGnmUK6VSm1huYY8= +github.com/aws/aws-sdk-go-v2/service/sqs v1.38.5/go.mod h1:Bar4MrRxeqdn6XIh8JGfiXuFRmyrrsZNTJotxEJmWW0= +github.com/aws/aws-sdk-go-v2/service/ssm v1.59.0 h1:KWArCwA/WkuHWKfygkNz0B6YS6OvdgoJUaJHX0Qby1s= +github.com/aws/aws-sdk-go-v2/service/ssm v1.59.0/go.mod h1:PUWUl5MDiYNQkUHN9Pyd9kgtA/YhbxnSnHP+yQqzrM8= +github.com/aws/aws-sdk-go-v2/service/sso v1.28.0 h1:Mc/MKBf2m4VynyJkABoVEN+QzkfLqGj0aiJuEe7cMeM= +github.com/aws/aws-sdk-go-v2/service/sso v1.28.0/go.mod h1:iS5OmxEcN4QIPXARGhavH7S8kETNL11kym6jhoS7IUQ= +github.com/aws/aws-sdk-go-v2/service/ssoadmin v1.30.2 h1:j3YvW9+qUFIzshXoPclOEUOSlXgr9vCU6OsB/CVRKGM= +github.com/aws/aws-sdk-go-v2/service/ssoadmin v1.30.2/go.mod h1:znVkl7Y14sZKEL/sbRQ6qgD8wj8VdTcVVQp5iRaKXcc= +github.com/aws/aws-sdk-go-v2/service/ssooidc v1.33.0 h1:6csaS/aJmqZQbKhi1EyEMM7yBW653Wy/B9hnBofW+sw= +github.com/aws/aws-sdk-go-v2/service/ssooidc v1.33.0/go.mod h1:59qHWaY5B+Rs7HGTuVGaC32m0rdpQ68N8QCN3khYiqs= +github.com/aws/aws-sdk-go-v2/service/sts v1.37.0 h1:MG9VFW43M4A8BYeAfaJJZWrroinxeTi2r3+SnmLQfSA= +github.com/aws/aws-sdk-go-v2/service/sts v1.37.0/go.mod h1:JdeBDPgpJfuS6rU/hNglmOigKhyEZtBmbraLE4GK1J8= +github.com/aws/smithy-go v1.23.2 h1:Crv0eatJUQhaManss33hS5r40CG3ZFH+21XSkqMrIUM= +github.com/aws/smithy-go v1.23.2/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0= +github.com/aws/smithy-go/tracing/smithyoteltracing v1.0.4 h1:Gx4ipHtKfaABSHAVo4Zjo2E4ClKzYqZ2NrPO0gy6qvY= +github.com/aws/smithy-go/tracing/smithyoteltracing v1.0.4/go.mod h1:nnwXv9COKyqd4q7jpPrxRaW9L+Qfwb4aGTdZqsIpOho= +github.com/beevik/etree v1.5.1 h1:TC3zyxYp+81wAmbsi8SWUpZCurbxa6S8RITYRSkNRwo= +github.com/beevik/etree v1.5.1/go.mod h1:gPNJNaBGVZ9AwsidazFZyygnd+0pAU38N4D+WemwKNs= +github.com/beorn7/perks v1.0.1 
h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= +github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= +github.com/blang/semver v3.5.1+incompatible h1:cQNTCjp13qL8KC3Nbxr/y2Bqb63oX6wdnnjpJbkM4JQ= +github.com/blang/semver v3.5.1+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk= +github.com/blang/semver/v4 v4.0.0 h1:1PFHFE6yCCTv8C1TeyNNarDzntLi7wMI5i/pzqYIsAM= +github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ= +github.com/bmatcuk/doublestar/v4 v4.9.0 h1:DBvuZxjdKkRP/dr4GVV4w2fnmrk5Hxc90T51LZjv0JA= +github.com/bmatcuk/doublestar/v4 v4.9.0/go.mod h1:xBQ8jztBU6kakFMg+8WGxn0c6z1fTSPVIjEY1Wr7jzc= +github.com/boombuler/barcode v1.0.1-0.20190219062509-6c824513bacc/go.mod h1:paBWMcWSl3LHKBqUq+rly7CNSldXjb2rDl3JlRe0mD8= +github.com/boombuler/barcode v1.0.1 h1:NDBbPmhS+EqABEs5Kg3n/5ZNjy73Pz7SIV+KCeqyXcs= +github.com/boombuler/barcode v1.0.1/go.mod h1:paBWMcWSl3LHKBqUq+rly7CNSldXjb2rDl3JlRe0mD8= +github.com/bufbuild/protocompile v0.14.1 h1:iA73zAf/fyljNjQKwYzUHD6AD4R8KMasmwa/FBatYVw= +github.com/bufbuild/protocompile v0.14.1/go.mod h1:ppVdAIhbr2H8asPk6k4pY7t9zB1OU5DoEw9xY/FUi1c= +github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8= +github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE= +github.com/cenkalti/backoff/v5 v5.0.2 h1:rIfFVxEf1QsI7E1ZHfp/B4DF/6QBAUhmgkxc0H7Zss8= +github.com/cenkalti/backoff/v5 v5.0.2/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw= +github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= +github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= +github.com/chai2010/gettext-go v1.0.2 h1:1Lwwip6Q2QGsAdl/ZKPCwTe9fe0CjlUbqj5bFNSjIRk= +github.com/chai2010/gettext-go v1.0.2/go.mod h1:y+wnP2cHYaVj19NZhYKAwEMH2CI1gNHeQQ+5AjwawxA= +github.com/charlievieth/strcase v0.0.5 h1:gV4iXVyD6eI5KdfOV+/vIVCKXZwtCWOmDMcu7Uy00Rs= 
+github.com/charlievieth/strcase v0.0.5/go.mod h1:FIOYY1aDBMSIOFqmVomHBpoK+bteGlESRsgsdWjrhx8= +github.com/codahale/rfc6979 v0.0.0-20141003034818-6a90f24967eb h1:EDmT6Q9Zs+SbUoc7Ik9EfrFqcylYqgPZ9ANSbTAntnE= +github.com/codahale/rfc6979 v0.0.0-20141003034818-6a90f24967eb/go.mod h1:ZjrT6AXHbDs86ZSdt/osfBi5qfexBrKUdONk989Wnk4= +github.com/containerd/containerd v1.7.29 h1:90fWABQsaN9mJhGkoVnuzEY+o1XDPbg9BTC9QTAHnuE= +github.com/containerd/containerd v1.7.29/go.mod h1:azUkWcOvHrWvaiUjSQH0fjzuHIwSPg1WL5PshGP4Szs= +github.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI= +github.com/containerd/errdefs v1.0.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M= +github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I= +github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo= +github.com/containerd/platforms v0.2.1 h1:zvwtM3rz2YHPQsF2CHYM8+KtB5dvhISiXh5ZpSBQv6A= +github.com/containerd/platforms v0.2.1/go.mod h1:XHCb+2/hzowdiut9rkudds9bE5yJ7npe7dG/wG+uFPw= github.com/coreos/go-semver v0.3.1 h1:yi21YpKnrx1gt5R+la8n5WgS0kCrsPp33dmEyHReZr4= github.com/coreos/go-semver v0.3.1/go.mod h1:irMmmIw/7yzSRPWryHsK7EYSg09caPQL03VsM8rvUec= +github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g= +github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= +github.com/creack/pty v1.1.24 h1:bJrF4RRfyJnbTJqzRLHzcGaZK1NeM5kTC9jGgovnR1s= +github.com/creack/pty v1.1.24/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE= +github.com/crewjam/httperr v0.2.0 h1:b2BfXR8U3AlIHwNeFFvZ+BV1LFvKLlzMjzaTnZMybNo= +github.com/crewjam/httperr v0.2.0/go.mod h1:Jlz+Sg/XqBQhyMjdDiC+GNNRzZTD7x39Gu3pglZ5oH4= +github.com/crewjam/saml v0.4.14 h1:g9FBNx62osKusnFzs3QTN5L9CVA/Egfgm+stJShzw/c= +github.com/crewjam/saml v0.4.14/go.mod h1:UVSZCf18jJkk6GpWNVqcyQJMD5HsRugBPf4I1nl2mME= +github.com/cyberphone/json-canonicalization v0.0.0-20231011164504-785e29786b46 
h1:2Dx4IHfC1yHWI12AxQDJM1QbRCDfk6M+blLzlZCXdrc= +github.com/cyberphone/json-canonicalization v0.0.0-20231011164504-785e29786b46/go.mod h1:uzvlm1mxhHkdfqitSA92i7Se+S9ksOn3a3qmv/kyOCw= +github.com/cyphar/filepath-securejoin v0.4.1 h1:JyxxyPEaktOD+GAnqIqTf9A8tHyAG22rowi7HkoSU1s= +github.com/cyphar/filepath-securejoin v0.4.1/go.mod h1:Sdj7gXlvMcPZsbhwhQ33GguGLDGQL7h7bg04C/+u9jI= +github.com/danieljoos/wincred v1.2.2 h1:774zMFJrqaeYCK2W57BgAem/MLi6mtSE47MB6BOJ0i0= +github.com/danieljoos/wincred v1.2.2/go.mod h1:w7w4Utbrz8lqeMbDAK0lkNJUv5sAOkFi7nd/ogr0Uh8= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/di-wu/parser v0.2.2/go.mod h1:SLp58pW6WamdmznrVRrw2NTyn4wAvT9rrEFynKX7nYo= +github.com/di-wu/parser v0.3.0 h1:NMOvy5ifswgt4gsdhySVcKOQtvjC43cHZIfViWctqQY= +github.com/di-wu/parser v0.3.0/go.mod h1:SLp58pW6WamdmznrVRrw2NTyn4wAvT9rrEFynKX7nYo= +github.com/di-wu/xsd-datetime v1.0.0 h1:vZoGNkbzpBNoc+JyfVLEbutNDNydYV8XwHeV7eUJoxI= +github.com/di-wu/xsd-datetime v1.0.0/go.mod h1:i3iEhrP3WchwseOBeIdW/zxeoleXTOzx1WyDXgdmOww= +github.com/digitorus/pkcs7 v0.0.0-20230713084857-e76b763bdc49/go.mod h1:SKVExuS+vpu2l9IoOc0RwqE7NYnb0JlcFHFnEJkVDzc= +github.com/digitorus/pkcs7 v0.0.0-20250730155240-ffadbf3f398c h1:g349iS+CtAvba7i0Ee9EP1TlTZ9w+UncBY6HSmsFZa0= +github.com/digitorus/pkcs7 v0.0.0-20250730155240-ffadbf3f398c/go.mod h1:mCGGmWkOQvEuLdIRfPIpXViBfpWto4AhwtJlAvo62SQ= +github.com/digitorus/timestamp v0.0.0-20231217203849-220c5c2851b7 h1:lxmTCgmHE1GUYL7P0MlNa00M67axePTq+9nBSGddR8I= +github.com/digitorus/timestamp v0.0.0-20231217203849-220c5c2851b7/go.mod h1:GvWntX9qiTlOud0WkQ6ewFm0LPy5JUR1Xo0Ngbd1w6Y= 
+github.com/dlclark/regexp2 v1.11.0 h1:G/nrcoOa7ZXlpoa/91N3X7mM3r8eIlMBBJZvsz/mxKI= +github.com/dlclark/regexp2 v1.11.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8= +github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY= +github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto= +github.com/elimity-com/scim v0.0.0-20240320110924-172bf2aee9c8 h1:0+BTyxIYgiVAry/P5s8R4dYuLkhB9Nhso8ogFWNr4IQ= +github.com/elimity-com/scim v0.0.0-20240320110924-172bf2aee9c8/go.mod h1:JkjcmqbLW+khwt2fmBPJFBhx2zGZ8XobRZ+O0VhlwWo= +github.com/emicklei/go-restful/v3 v3.11.3 h1:yagOQz/38xJmcNeZJtrUcKjkHRltIaIFXKWeG1SkWGE= +github.com/emicklei/go-restful/v3 v3.11.3/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= +github.com/evanphx/json-patch v5.9.11+incompatible h1:ixHHqfcGvxhWkniF1tWxBHA0yb4Z+d1UQi45df52xW8= +github.com/evanphx/json-patch v5.9.11+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= +github.com/evanphx/json-patch/v5 v5.9.11 h1:/8HVnzMq13/3x9TPvjG08wUGqBTmZBsCWzjTM0wiaDU= +github.com/evanphx/json-patch/v5 v5.9.11/go.mod h1:3j+LviiESTElxA4p3EMKAB9HXj3/XEtnUf6OZxqIQTM= +github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f h1:Wl78ApPPB2Wvf/TIe2xdyJxTlb6obmF18d8QdkxNDu4= +github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f/go.mod h1:OSYXu++VVOHnXeitef/D8n/6y4QV8uLHSFXX4NeXMGc= +github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM= +github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU= +github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg= +github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U= github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8= github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0= -github.com/fxamacker/cbor/v2 v2.7.0 
h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E= -github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ= -github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY= -github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k= +github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0= +github.com/fxamacker/cbor/v2 v2.8.0 h1:fFtUGXUzXPHTIUdne5+zzMPTfffl3RD5qYnkY40vtxU= +github.com/fxamacker/cbor/v2 v2.8.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ= +github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk= +github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= +github.com/go-asn1-ber/asn1-ber v1.5.8-0.20250403174932-29230038a667 h1:BP4M0CvQ4S3TGls2FvczZtj5Re/2ZzkV9VwqPHH/3Bo= +github.com/go-asn1-ber/asn1-ber v1.5.8-0.20250403174932-29230038a667/go.mod h1:hEBeB/ic+5LoWskz+yKT7vGhhPYkProFKoKdwZRWMe0= +github.com/go-chi/chi/v5 v5.2.2 h1:CMwsvRVTbXVytCk1Wd72Zy1LAsAh9GxMmSNWLHCG618= +github.com/go-chi/chi/v5 v5.2.2/go.mod h1:L2yAIGWB3H+phAw1NxKwWM+7eUH/lU8pOMm5hHcoops= +github.com/go-errors/errors v1.4.2 h1:J6MZopCL4uSllY1OfXM374weqZFFItUbrImctkmUxIA= +github.com/go-errors/errors v1.4.2/go.mod h1:sIVyrIiJhuEF+Pj9Ebtd6P/rEYROXFi3BopGUQ5a5Og= +github.com/go-gorp/gorp/v3 v3.1.0 h1:ItKF/Vbuj31dmV4jxA1qblpSwkl9g1typ24xoe70IGs= +github.com/go-gorp/gorp/v3 v3.1.0/go.mod h1:dLEjIyyRNiXvNZ8PSmzpt1GsWAUK8kjVhEpjH8TixEw= +github.com/go-jose/go-jose/v3 v3.0.4 h1:Wp5HA7bLQcKnf6YYao/4kpRpVMp/yf6+pJKV8WFSaNY= +github.com/go-jose/go-jose/v3 v3.0.4/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ= +github.com/go-jose/go-jose/v4 v4.1.1 h1:JYhSgy4mXXzAdF3nUx3ygx347LRXJRrpgyU3adRmkAI= +github.com/go-jose/go-jose/v4 v4.1.1/go.mod h1:BdsZGqgdO3b6tTc6LSE56wcDbMMLuPsw5d4ZD5f94kA= +github.com/go-ldap/ldap/v3 v3.4.11 
h1:4k0Yxweg+a3OyBLjdYn5OKglv18JNvfDykSoI8bW0gU= +github.com/go-ldap/ldap/v3 v3.4.11/go.mod h1:bY7t0FLK8OAVpp/vV6sSlpz3EQDGcQwc8pF0ujLgKvM= +github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= +github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI= +github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag= +github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE= +github.com/go-logr/zapr v1.3.0 h1:XGdV8XW8zdwFiwOA2Dryh1gj2KRQyOOoNmBy4EplIcQ= +github.com/go-logr/zapr v1.3.0/go.mod h1:YKepepNBd1u/oyhd/yQmtjVXmm9uML4IXUgMOwR8/Gg= +github.com/go-openapi/analysis v0.23.0 h1:aGday7OWupfMs+LbmLZG4k0MYXIANxcuBTYUC03zFCU= +github.com/go-openapi/analysis v0.23.0/go.mod h1:9mz9ZWaSlV8TvjQHLl2mUW2PbZtemkE8yA5v22ohupo= +github.com/go-openapi/errors v0.22.2 h1:rdxhzcBUazEcGccKqbY1Y7NS8FDcMyIRr0934jrYnZg= +github.com/go-openapi/errors v0.22.2/go.mod h1:+n/5UdIqdVnLIJ6Q9Se8HNGUXYaY6CN8ImWzfi/Gzp0= +github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ= +github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY= +github.com/go-openapi/jsonreference v0.21.0 h1:Rs+Y7hSXT83Jacb7kFyjn4ijOuVGSvOdF2+tg1TRrwQ= +github.com/go-openapi/jsonreference v0.21.0/go.mod h1:LmZmgsrTkVg9LG4EaHeY8cBDslNPMo06cago5JNLkm4= +github.com/go-openapi/loads v0.22.0 h1:ECPGd4jX1U6NApCGG1We+uEozOAvXvJSF4nnwHZ8Aco= +github.com/go-openapi/loads v0.22.0/go.mod h1:yLsaTCS92mnSAZX5WWoxszLj0u+Ojl+Zs5Stn1oF+rs= +github.com/go-openapi/runtime v0.28.0 h1:gpPPmWSNGo214l6n8hzdXYhPuJcGtziTOgUpvsFWGIQ= +github.com/go-openapi/runtime v0.28.0/go.mod h1:QN7OzcS+XuYmkQLw05akXk0jRH/eZ3kb18+1KwW9gyc= +github.com/go-openapi/spec v0.21.0 h1:LTVzPc3p/RzRnkQqLRndbAzjY0d0BCL72A6j3CdL9ZY= +github.com/go-openapi/spec v0.21.0/go.mod 
h1:78u6VdPw81XU44qEWGhtr982gJ5BWg2c0I5XwVMotYk= +github.com/go-openapi/strfmt v0.23.0 h1:nlUS6BCqcnAk0pyhi9Y+kdDVZdZMHfEKQiS4HaMgO/c= +github.com/go-openapi/strfmt v0.23.0/go.mod h1:NrtIpfKtWIygRkKVsxh7XQMDQW5HKQl6S5ik2elW+K4= +github.com/go-openapi/swag v0.23.1 h1:lpsStH0n2ittzTnbaSloVZLuB5+fvSY/+hnagBjSNZU= +github.com/go-openapi/swag v0.23.1/go.mod h1:STZs8TbRvEQQKUA+JZNAm3EWlgaOBGpyFDqQnDHMef0= +github.com/go-openapi/validate v0.24.0 h1:LdfDKwNbpB6Vn40xhTdNZAnfLECL81w+VX3BumrGD58= +github.com/go-openapi/validate v0.24.0/go.mod h1:iyeX1sEufmv3nPbBdX3ieNviWnOZaJ1+zquzJEf2BAQ= +github.com/go-piv/piv-go/v2 v2.4.0 h1:xamQ/fR4MJiw/Ndbk6yi7MVwhjrwlnDAPuaH9zcGb+I= +github.com/go-piv/piv-go/v2 v2.4.0/go.mod h1:ShZi74nnrWNQEdWzRUd/3cSig3uNOcEZp+EWl0oewnI= +github.com/go-sql-driver/mysql v1.9.3 h1:U/N249h2WzJ3Ukj8SowVFjdtZKfu9vlLZxjPXV1aweo= +github.com/go-sql-driver/mysql v1.9.3/go.mod h1:qn46aNg1333BRMNU69Lq93t8du/dwxI64Gl8i5p1WMU= +github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI= +github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8= +github.com/go-test/deep v1.1.1 h1:0r/53hagsehfO4bzD2Pgr/+RgHqhmf+k1Bpse2cTu1U= +github.com/go-test/deep v1.1.1/go.mod h1:5C2ZWiW0ErCdrYzpqxLbTX7MG14M9iiw8DgHncVwcsE= +github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs= +github.com/go-viper/mapstructure/v2 v2.4.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM= +github.com/go-webauthn/webauthn v0.11.2 h1:Fgx0/wlmkClTKlnOsdOQ+K5HcHDsDcYIvtYmfhEOSUc= +github.com/go-webauthn/webauthn v0.11.2/go.mod h1:aOtudaF94pM71g3jRwTYYwQTG1KyTILTcZqN1srkmD0= +github.com/go-webauthn/x v0.1.20 h1:brEBDqfiPtNNCdS/peu8gARtq8fIPsHz0VzpPjGvgiw= +github.com/go-webauthn/x v0.1.20/go.mod h1:n/gAc8ssZJGATM0qThE+W+vfgXiMedsWi3wf/C4lld0= +github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y= +github.com/gobwas/glob v0.2.3/go.mod 
h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8= +github.com/gobwas/httphead v0.1.0 h1:exrUm0f4YX0L7EBwZHuCF4GDp8aJfVeBrlLQrs6NqWU= +github.com/gobwas/httphead v0.1.0/go.mod h1:O/RXo79gxV8G+RqlR/otEwx4Q36zl9rqC5u12GKvMCM= +github.com/gobwas/pool v0.2.1 h1:xfeeEhW7pwmX8nuLVlqbzVc7udMDrwetjEv+TZIz1og= +github.com/gobwas/pool v0.2.1/go.mod h1:q8bcK0KcYlCgd9e7WYLm9LpyS+YeLd8JVDW6WezmKEw= +github.com/gobwas/ws v1.4.0 h1:CTaoG1tojrh4ucGPcoJFiAQUAsEWekEWvLy7GsVNqGs= +github.com/gobwas/ws v1.4.0/go.mod h1:G3gNqMNtPppf5XUz7O4shetPpcZ1VJ7zt18dlUeakrc= +github.com/godbus/dbus v0.0.0-20190726142602-4481cbc300e2 h1:ZpnhV/YsD2/4cESfV5+Hoeu/iUR3ruzNvZ+yQfO03a0= +github.com/godbus/dbus/v5 v5.1.0 h1:4KLkAxT3aOY8Li4FRJe/KvhoNFFxo0m6fNuFUO8QJUk= +github.com/godbus/dbus/v5 v5.1.0/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA= +github.com/gofrs/flock v0.12.1 h1:MTLVXXHf8ekldpJk3AKicLij9MdwOWkZ+a/jHHZby9E= +github.com/gofrs/flock v0.12.1/go.mod h1:9zxTsyu5xtJ9DK+1tFZyibEV7y3uwDxPPfbxeeHCoD0= github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= +github.com/golang-jwt/jwt/v4 v4.5.2 h1:YtQM7lnr8iZ+j5q71MGKkNw9Mn7AjHM68uc9g5fXeUI= +github.com/golang-jwt/jwt/v4 v4.5.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0= +github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo= +github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE= +github.com/golang/mock v1.7.0-rc.1 h1:YojYx61/OLFsiv6Rw1Z96LpldJIy31o+UHmwAUMJ6/U= +github.com/golang/mock v1.7.0-rc.1/go.mod h1:s42URUywIqd+OcERslBJvOjepvNymP31m3q8d/GkuRs= github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek= +github.com/golang/protobuf v1.5.4/go.mod 
h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps= +github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM= +github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= +github.com/google/btree v1.1.3 h1:CVpQJjYgC4VbzxeGVHfvZrv1ctoYCAI8vbl07Fcxlyg= +github.com/google/btree v1.1.3/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4= +github.com/google/cel-go v0.25.0 h1:jsFw9Fhn+3y2kBbltZR4VEz5xKkcIFRPDnuEzAGv5GY= +github.com/google/cel-go v0.25.0/go.mod h1:hjEb6r5SuOSlhCHmFoLzu8HGCERvIsDAbxDAyNU/MmI= +github.com/google/certificate-transparency-go v1.0.21/go.mod h1:QeJfpSbVSfYc7RgB3gJFj9cbuQMMchQxrWXz8Ruopmg= +github.com/google/certificate-transparency-go v1.3.1 h1:akbcTfQg0iZlANZLn0L9xOeWtyCIdeoYhKrqi5iH3Go= +github.com/google/certificate-transparency-go v1.3.1/go.mod h1:gg+UQlx6caKEDQ9EElFOujyxEQEfOiQzAt6782Bvi8k= +github.com/google/gnostic-models v0.6.9 h1:MU/8wDLif2qCXZmzncUQ/BOfxWfthHi63KqpoNbWqVw= +github.com/google/gnostic-models v0.6.9/go.mod h1:CiWsm0s6BSQd1hRn8/QmxqB6BesYcbSZxsz9b0KuDBw= +github.com/google/go-attestation v0.5.1 h1:jqtOrLk5MNdliTKjPbIPrAaRKJaKW+0LIU2n/brJYms= +github.com/google/go-attestation v0.5.1/go.mod h1:KqGatdUhg5kPFkokyzSBDxwSCFyRgIgtRkMp6c3lOBQ= github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8= github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU= +github.com/google/go-configfs-tsm v0.3.3-0.20240919001351-b4b5b84fdcbc h1:SG12DWUUM5igxm+//YX5Yq4vhdoRnOG9HkCodkOn+YU= +github.com/google/go-configfs-tsm v0.3.3-0.20240919001351-b4b5b84fdcbc/go.mod h1:EL1GTDFMb5PZQWDviGfZV9n87WeGTR/JUg13RfwkgRo= +github.com/google/go-containerregistry v0.20.6 
h1:cvWX87UxxLgaH76b4hIvya6Dzz9qHB31qAwjAohdSTU=
+github.com/google/go-containerregistry v0.20.6/go.mod h1:T0x8MuoAoKX/873bkeSfLD2FAkwCDf9/HZgsFJ02E2Y=
github.com/google/go-github/v41 v41.0.0 h1:HseJrM2JFf2vfiZJ8anY2hqBjdfY1Vlj/K27ueww4gg=
github.com/google/go-github/v41 v41.0.0/go.mod h1:XgmCA5H323A9rtgExdTcnDkcqp6S30AVACCBDOonIxg=
+github.com/google/go-github/v70 v70.0.0 h1:/tqCp5KPrcvqCc7vIvYyFYTiCGrYvaWoYMGHSQbo55o=
+github.com/google/go-github/v70 v70.0.0/go.mod h1:xBUZgo8MI3lUL/hwxl3hlceJW1U8MVnXP3zUyI+rhQY=
github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD/fhyJ8=
github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17icRSOU623lUBU=
+github.com/google/go-sev-guest v0.12.1 h1:H4rFYnPIn8HtqEsNTmh56Zxcf9BI9n48ZSYCnpYLYvc=
+github.com/google/go-sev-guest v0.12.1/go.mod h1:SK9vW+uyfuzYdVN0m8BShL3OQCtXZe/JPF7ZkpD3760=
+github.com/google/go-tdx-guest v0.3.2-0.20241009005452-097ee70d0843 h1:+MoPobRN9HrDhGyn6HnF5NYo4uMBKaiFqAtf/D/OB4A=
+github.com/google/go-tdx-guest v0.3.2-0.20241009005452-097ee70d0843/go.mod h1:g/n8sKITIT9xRivBUbizo34DTsUm2nN2uU3A662h09g=
+github.com/google/go-tpm v0.9.4 h1:awZRf9FwOeTunQmHoDYSHJps3ie6f1UlhS1fOdPEt1I=
+github.com/google/go-tpm v0.9.4/go.mod h1:h9jEsEECg7gtLis0upRBQU+GhYVH6jMjrFxI8u6bVUY=
+github.com/google/go-tpm-tools v0.4.5 h1:3fhthtyMDbIZFR5/0y1hvUoZ1Kf4i1eZ7C73R4Pvd+k=
+github.com/google/go-tpm-tools v0.4.5/go.mod h1:ktjTNq8yZFD6TzdBFefUfen96rF3NpYwpSb2d8bc+Y8=
+github.com/google/go-tspi v0.3.0 h1:ADtq8RKfP+jrTyIWIZDIYcKOMecRqNJFOew2IT0Inus=
+github.com/google/go-tspi v0.3.0/go.mod h1:xfMGI3G0PhxCdNVcYr1C4C+EizojDg/TXuX5by8CiHI=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
+github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
+github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
+github.com/google/logger v1.1.1 h1:+6Z2geNxc9G+4D4oDO9njjjn2d0wN5d7uOo0vOIW1NQ=
+github.com/google/logger v1.1.1/go.mod h1:BkeJZ+1FhQ+/d087r4dzojEg1u2ZX+ZqG1jTUrLM+zQ=
+github.com/google/pprof v0.0.0-20250602020802-c6617b811d0e h1:FJta/0WsADCe1r9vQjdHbd3KuiLPu7Y9WlyLGwMUNyE=
+github.com/google/pprof v0.0.0-20250602020802-c6617b811d0e/go.mod h1:5hDyRhoBCxViHszMt12TnOpEI4VVi+U8Gm9iphldiMA=
+github.com/google/renameio/v2 v2.0.0 h1:UifI23ZTGY8Tt29JbYFiuyIU3eX+RNFtUwefq9qAhxg=
+github.com/google/renameio/v2 v2.0.0/go.mod h1:BtmJXm5YlszgC+TD4HOEEUFgkJP3nLxehU6hfe7jRt4=
+github.com/google/s2a-go v0.1.9 h1:LGD7gtMgezd8a/Xak7mEWL0PjoTQFvpRudN895yqKW0=
+github.com/google/s2a-go v0.1.9/go.mod h1:YA0Ei2ZQL3acow2O62kdp9UlnvMmU7kA6Eutn0dXayM=
+github.com/google/safetext v0.0.0-20240104143208-7a7d9b3d812f h1:o2yGZLlsOj5H5uvtQNEdi6DeA0GbUP3lm0gWW5RvY0s=
+github.com/google/safetext v0.0.0-20240104143208-7a7d9b3d812f/go.mod h1:H3K1Iu/utuCfa10JO+GsmKUYSWi7ug57Rk6GaDRHaaQ=
+github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 h1:El6M4kTTCOh6aBiKaUGG7oYTSPP8MxqL4YI3kZKwcP4=
+github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510/go.mod h1:pupxD2MaaD3pAXIBCelhxNneeOaAeabZDe5s4K6zSpQ=
+github.com/google/tink/go v1.7.0 h1:6Eox8zONGebBFcCBqkVmt60LaWZa6xg1cl/DwAh/J1w=
+github.com/google/tink/go v1.7.0/go.mod h1:GAUOd+QE3pgj9q8VKIGTCP33c/B7eb4NhxLcgTJZStM=
+github.com/google/trillian v1.7.2 h1:EPBxc4YWY4Ak8tcuhyFleY+zYlbCDCa4Sn24e1Ka8Js=
+github.com/google/trillian v1.7.2/go.mod h1:mfQJW4qRH6/ilABtPYNBerVJAJ/upxHLX81zxNQw05s=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/googleapis/enterprise-certificate-proxy v0.3.6 h1:GW/XbdyBFQ8Qe+YAmFU9uHLo7OnF5tL52HFAgMmyrf4=
+github.com/googleapis/enterprise-certificate-proxy v0.3.6/go.mod h1:MkHOF77EYAE7qfSuSS9PU6g4Nt4e11cnsDUowfwewLA=
+github.com/googleapis/gax-go/v2 v2.15.0 h1:SyjDc1mGgZU5LncH8gimWo9lW1DtIfPibOG81vgd/bo=
+github.com/googleapis/gax-go/v2 v2.15.0/go.mod h1:zVVkkxAQHa1RQpg9z2AUCMnKhi0Qld9rcmyfL1OZhoc=
+github.com/gorilla/securecookie v1.1.1/go.mod h1:ra0sb63/xPlUeL+yeDciTfxMRAA+MP+HVt/4epWDjd4=
+github.com/gorilla/securecookie v1.1.2 h1:YCIWL56dvtr73r6715mJs5ZvhtnY73hBvEF8kXD8ePA=
+github.com/gorilla/securecookie v1.1.2/go.mod h1:NfCASbcHqRSY+3a8tlWJwsQap2VX5pwzwo4h3eOamfo=
+github.com/gorilla/sessions v1.2.1 h1:DHd3rPN5lE3Ts3D8rKkQ8x/0kqfeNmBAaiSi+o7FsgI=
+github.com/gorilla/sessions v1.2.1/go.mod h1:dk2InVEVJ0sfLlnXv9EAgkf6ecYs/i80K/zI+bUmuGM=
+github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
+github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
+github.com/gosuri/uitable v0.0.4 h1:IG2xLKRvErL3uhY6e1BylFzG+aJiwQviDDTfOKeKTpY=
+github.com/gosuri/uitable v0.0.4/go.mod h1:tKR86bXuXPZazfOTG1FIzvjIdXzd0mo4Vtn16vt0PJo=
github.com/gravitational/kingpin/v2 v2.1.11-0.20230515143221-4ec6b70ecd33 h1:VFER/+0TfRypJhc9XeuggTtEZzhhe75DSVqMv/avHEU=
github.com/gravitational/kingpin/v2 v2.1.11-0.20230515143221-4ec6b70ecd33/go.mod h1:0gyi0zQnjuFk8xrkNKamJoyUo382HRL7ATRpFZCw6tE=
+github.com/gravitational/license v0.0.0-20250329001817-070456fa8ec1 h1:Kt7aT9N7vbZmcMejGXnSAGap8TUwH3fMoHE8cQm14wc=
+github.com/gravitational/license v0.0.0-20250329001817-070456fa8ec1/go.mod h1:n4RXV6T3SJ/vrJqmc4vBeHpaBspxWENDD67ssQlXXkg=
+github.com/gravitational/predicate v1.3.4 h1:9N3JhBXNPcUh0w8DdlpnVmfnH9Z3xxbw43sD3E19VBE=
+github.com/gravitational/predicate v1.3.4/go.mod h1:cTQkp40X3YejTcWsZGvzAtfa28VXfBxT10H/Grt0Fzs=
+github.com/gravitational/roundtrip v1.0.3 h1:n5JLvJVs8XrnJxXGYOzb8I9zGMEr5WVhSA1qGuxzwnU=
+github.com/gravitational/roundtrip v1.0.3/go.mod h1:AR9OSmv3WN0qJObMyMYJUrPRXtzhdjAQSKIACv7F9b4=
github.com/gravitational/trace v1.5.1 h1:CdSymAjkE1VOef+lsC5x29jX9WbgI0fBtnRqeT4Fh+c=
github.com/gravitational/trace v1.5.1/go.mod h1:sJKfJHIQ7IkG8kvYpFPEr6mj3WDEdZ0YAc7xAD8w7lw=
+github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 h1:+ngKgrYPPJrOjhax5N+uePQ0Fh1Z7PheYoUI/0nzkPA=
+github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
+github.com/grpc-ecosystem/go-grpc-middleware v1.4.0 h1:UH//fgunKIs4JdUbpDl1VZCDaL56wXCB/5+wF6uHfaI=
+github.com/grpc-ecosystem/go-grpc-middleware v1.4.0/go.mod h1:g5qyo/la0ALbONm6Vbp88Yd8NsDy6rZz+RcrMPxvld8=
+github.com/grpc-ecosystem/go-grpc-middleware/providers/prometheus v1.0.1 h1:qnpSQwGEnkcRpTqNOIR6bJbR0gAorgP9CSALpRcKoAA=
+github.com/grpc-ecosystem/go-grpc-middleware/providers/prometheus v1.0.1/go.mod h1:lXGCsh6c22WGtjr+qGHj1otzZpV/1kwTMAqkwZsnWRU=
+github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.3.1 h1:KcFzXwzM/kGhIRHvc8jdixfIJjVzuUJdnv+5xsPutog=
+github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.3.1/go.mod h1:qOchhhIlmRcqk/O9uCo/puJlyo07YINaIqdZfZG3Jkc=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.1 h1:X5VWvz21y3gzm9Nw/kaUeku/1+uBhcekkmy4IkffJww=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.1/go.mod h1:Zanoh4+gvIgluNqcfMVTJueD4wSS5hT7zTt4Mrutd90=
+github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
+github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
+github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ=
+github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48=
+github.com/hashicorp/go-hclog v1.6.3 h1:Qr2kF+eVWjTiYmU7Y31tYlP1h0q/X3Nl3tPGdaB11/k=
+github.com/hashicorp/go-hclog v1.6.3/go.mod h1:W4Qnvbt70Wk/zYJryRzDRU/4r0kIg0PVHBcfoyhpF5M=
+github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
+github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
+github.com/hashicorp/go-retryablehttp v0.7.8 h1:ylXZWnqa7Lhqpk0L1P1LzDtGcCR0rPVUrx/c8Unxc48=
+github.com/hashicorp/go-retryablehttp v0.7.8/go.mod h1:rjiScheydd+CxvumBsIrFKlx3iS0jrZ7LvzFGFmuKbw=
+github.com/hashicorp/go-rootcerts v1.0.2 h1:jzhAVGtqPKbwpyCPELlgNWhE1znq+qwJtW5Oi2viEzc=
+github.com/hashicorp/go-rootcerts v1.0.2/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8=
+github.com/hashicorp/go-secure-stdlib/parseutil v0.1.7 h1:UpiO20jno/eV1eVZcxqWnUohyKRe1g8FPV/xH1s/2qs=
+github.com/hashicorp/go-secure-stdlib/parseutil v0.1.7/go.mod h1:QmrqtbKuxxSWTN3ETMPuB+VtEiBJ/A9XhoYGv8E1uD8=
+github.com/hashicorp/go-secure-stdlib/strutil v0.1.2 h1:kes8mmyCpxJsI7FTwtzRqEy9CdjCtrXrXGuOpxEA7Ts=
+github.com/hashicorp/go-secure-stdlib/strutil v0.1.2/go.mod h1:Gou2R9+il93BqX25LAKCLuM+y9U2T4hlwvT1yprcna4=
+github.com/hashicorp/go-sockaddr v1.0.2 h1:ztczhD1jLxIRjVejw8gFomI1BQZOe2WoVOu0SyteCQc=
+github.com/hashicorp/go-sockaddr v1.0.2/go.mod h1:rB4wwRAUzs07qva3c5SdrY/NEtAUjGlgmH/UkBUC97A=
+github.com/hashicorp/go-uuid v1.0.2/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
+github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8=
+github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
+github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
+github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
+github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
+github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
+github.com/hashicorp/vault/api v1.16.0 h1:nbEYGJiAPGzT9U4oWgaaB0g+Rj8E59QuHKyA5LhwQN4=
+github.com/hashicorp/vault/api v1.16.0/go.mod h1:KhuUhzOD8lDSk29AtzNjgAu2kxRA9jL9NAbkFlqvkBA=
+github.com/hinshun/vt10x v0.0.0-20220301184237-5011da428d02 h1:AgcIVYPa6XJnU3phs104wLj8l5GEththEw6+F79YsIY=
+github.com/hinshun/vt10x v0.0.0-20220301184237-5011da428d02/go.mod h1:Q48J4R4DvxnHolD5P8pOtXigYlRuPLGl6moFx3ulM68=
+github.com/howeyc/gopass v0.0.0-20210920133722-c8aef6fb66ef h1:A9HsByNhogrvm9cWb28sjiS3i7tcKCkflWFEkHfuAgM=
+github.com/howeyc/gopass v0.0.0-20210920133722-c8aef6fb66ef/go.mod h1:lADxMC39cJJqL93Duh1xhAs4I2Zs8mKS89XWXFGp9cs=
github.com/huandu/xstrings v1.5.0 h1:2ag3IFq9ZDANvthTwTiqSSZLjDc+BedvHPAp5tJy2TI=
github.com/huandu/xstrings v1.5.0/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
+github.com/in-toto/attestation v1.1.1 h1:QD3d+oATQ0dFsWoNh5oT0udQ3tUrOsZZ0Fc3tSgWbzI=
+github.com/in-toto/attestation v1.1.1/go.mod h1:Dcq1zVwA2V7Qin8I7rgOi+i837wEf/mOZwRm047Sjys=
+github.com/in-toto/in-toto-golang v0.9.0 h1:tHny7ac4KgtsfrG6ybU8gVOZux2H8jN05AXJ9EBM1XU=
+github.com/in-toto/in-toto-golang v0.9.0/go.mod h1:xsBVrVsHNsB61++S6Dy2vWosKhuA3lUTQd+eF9HdeMo=
+github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
+github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
+github.com/jackc/pgerrcode v0.0.0-20240316143900-6e2875d9b438 h1:Dj0L5fhJ9F82ZJyVOmBx6msDp/kfd1t9GRfny/mfJA0=
+github.com/jackc/pgerrcode v0.0.0-20240316143900-6e2875d9b438/go.mod h1:a/s9Lp5W7n/DD0VrVoyJ00FbP2ytTPDVOivvn2bMlds=
+github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
+github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
+github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 h1:iCEnooe7UlwOQYpKFhBabPMi4aNAfoODPEFNiAnClxo=
+github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=
+github.com/jackc/pgx/v5 v5.7.4 h1:9wKznZrhWa2QiHL+NjTSPP6yjl3451BX3imWDnokYlg=
+github.com/jackc/pgx/v5 v5.7.4/go.mod h1:ncY89UGWxg82EykZUwSpUKEfccBGGYq1xjrOpsbsfGQ=
+github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
+github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
+github.com/jcmturner/aescts/v2 v2.0.0 h1:9YKLH6ey7H4eDBXW8khjYslgyqG2xZikXP0EQFKrle8=
+github.com/jcmturner/aescts/v2 v2.0.0/go.mod h1:AiaICIRyfYg35RUkr8yESTqvSy7csK90qZ5xfvvsoNs=
+github.com/jcmturner/dnsutils/v2 v2.0.0 h1:lltnkeZGL0wILNvrNiVCR6Ro5PGU/SeBvVO/8c/iPbo=
+github.com/jcmturner/dnsutils/v2 v2.0.0/go.mod h1:b0TnjGOvI/n42bZa+hmXL+kFJZsFT7G4t3HTlQ184QM=
+github.com/jcmturner/gofork v1.7.6 h1:QH0l3hzAU1tfT3rZCnW5zXl+orbkNMMRGJfdJjHVETg=
+github.com/jcmturner/gofork v1.7.6/go.mod h1:1622LH6i/EZqLloHfE7IeZ0uEJwMSUyQ/nDd82IeqRo=
+github.com/jcmturner/goidentity/v6 v6.0.1 h1:VKnZd2oEIMorCTsFBnJWbExfNN7yZr3EhJAxwOkZg6o=
+github.com/jcmturner/goidentity/v6 v6.0.1/go.mod h1:X1YW3bgtvwAXju7V3LCIMpY0Gbxyjn/mY9zx4tFonSg=
+github.com/jcmturner/gokrb5/v8 v8.4.4 h1:x1Sv4HaTpepFkXbt2IkL29DXRf8sOfZXo8eRKh687T8=
+github.com/jcmturner/gokrb5/v8 v8.4.4/go.mod h1:1btQEpgT6k+unzCwX1KdWMEwPPkkgBtP+F6aCACiMrs=
+github.com/jcmturner/rpc/v2 v2.0.3 h1:7FXXj8Ti1IaVFpSAziCZWNzbNuZmnvw/i6CqLNdWfZY=
+github.com/jcmturner/rpc/v2 v2.0.3/go.mod h1:VUJYCIDm3PVOEHw8sgt091/20OJjskO/YJki3ELg/Hc=
+github.com/jedisct1/go-minisign v0.0.0-20230811132847-661be99b8267 h1:TMtDYDHKYY15rFihtRfck/bfFqNfvcabqvXAFQfAUpY=
+github.com/jedisct1/go-minisign v0.0.0-20230811132847-661be99b8267/go.mod h1:h1nSAbGFqGVzn6Jyl1R/iCcBUHN4g+gW1u9CoBTrb9E=
+github.com/jellydator/ttlcache/v3 v3.4.0 h1:YS4P125qQS0tNhtL6aeYkheEaB/m8HCqdMMP4mnWdTY=
+github.com/jellydator/ttlcache/v3 v3.4.0/go.mod h1:Hw9EgjymziQD3yGsQdf1FqFdpp7YjFMd4Srg5EJlgD4=
+github.com/jeremija/gosubmit v0.2.8 h1:mmSITBz9JxVtu8eqbN+zmmwX7Ij2RidQxhcwRVI4wqA=
+github.com/jeremija/gosubmit v0.2.8/go.mod h1:Ui+HS073lCFREXBbdfrJzMB57OI/bdxTiLtrDHHhFPI=
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
+github.com/jmespath/go-jmespath v0.4.1-0.20220621161143-b0104c826a24 h1:liMMTbpW34dhU4az1GN0pTPADwNmvoRSeoZ6PItiqnY=
+github.com/jmespath/go-jmespath v0.4.1-0.20220621161143-b0104c826a24/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
+github.com/jmhodges/clock v1.2.0 h1:eq4kys+NI0PLngzaHEe7AmPT90XMGIEySD1JfV1PDIs=
+github.com/jmhodges/clock v1.2.0/go.mod h1:qKjhA7x7u/lQpPB1XAqX1b1lCI/w3/fNuYpI/ZjLynI=
+github.com/jmoiron/sqlx v1.4.0 h1:1PLqN7S1UYp5t4SrVVnt4nUVNemrDAtxlulVe+Qgm3o=
+github.com/jmoiron/sqlx v1.4.0/go.mod h1:ZrZ7UsYB/weZdl2Bxg6jCRO9c3YHl8r3ahlKmRT4JLY=
+github.com/jonboulle/clockwork v0.5.0 h1:Hyh9A8u51kptdkR+cqRpT1EebBwTn1oK9YfGYbdFz6I=
+github.com/jonboulle/clockwork v0.5.0/go.mod h1:3mZlmanh0g2NDKO5TWZVJAfofYk64M7XN3SzBPjZF60=
+github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
+github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
+github.com/julienschmidt/httprouter v1.3.0 h1:U0609e9tgbseu3rBINet9P48AI/D3oJs4dN7jwJOQ1U=
+github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
+github.com/kelseyhightower/envconfig v1.4.0 h1:Im6hONhd3pLkfDFsbRgu68RDNkGF1r3dvMUtDTo2cv8=
+github.com/kelseyhightower/envconfig v1.4.0/go.mod h1:cccZRl6mQpaq41TPp5QxidR+Sa3axMbJDNb//FQX6Gg=
+github.com/keybase/go-keychain v0.0.1 h1:way+bWYa6lDppZoZcgMbYsvC7GxljxrskdNInRtuthU=
+github.com/keybase/go-keychain v0.0.1/go.mod h1:PdEILRW3i9D8JcdM+FmY6RwkHGnhHxXwkPPMeUgOK1k=
+github.com/keys-pub/go-libfido2 v1.5.3-0.20220306005615-8ab03fb1ec27 h1:10nfvqVK4/KINnLT8bDICrRnfguTJ300dNGpW8D2bQo=
+github.com/keys-pub/go-libfido2 v1.5.3-0.20220306005615-8ab03fb1ec27/go.mod h1:P0V19qHwJNY0htZwZDe9Ilvs/nokGhdFX7faKFyZ6+U=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0 h1:AV2c/EiW3KqPNT9ZKl07ehoAGi4C5/01Cfbblndcapg=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
+github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
+github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
@@ -61,116 +571,464 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
+github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
+github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
+github.com/lann/builder v0.0.0-20180802200727-47ae307949d0 h1:SOEGU9fKiNWd/HOJuq6+3iTQz8KNCLtVX6idSoTLdUw=
+github.com/lann/builder v0.0.0-20180802200727-47ae307949d0/go.mod h1:dXGbAdH5GtBTC4WfIxhKZfyBF/HBFgRZSWwZ9g/He9o=
+github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0 h1:P6pPBnrTSX3DEVR4fDembhRWSsG5rVo6hYhAB/ADZrk=
+github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0/go.mod h1:vmVJ0l/dxyfGW6FmdpVm2joNMFikkuWg0EoCKLGUMNw=
+github.com/letsencrypt/boulder v0.0.0-20240620165639-de9c06129bec h1:2tTW6cDth2TSgRbAhD7yjZzTQmcN25sDRPEeinR51yQ=
+github.com/letsencrypt/boulder v0.0.0-20240620165639-de9c06129bec/go.mod h1:TmwEoGCwIti7BCeJ9hescZgRtatxRE+A72pCoPfmcfk=
+github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
+github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
+github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de h1:9TO3cAIGXtEhnIaL+V+BEER86oLrvS+kWobKpbJuye0=
+github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de/go.mod h1:zAbeS9B/r2mtpb6U+EI2rYA5OAXxsYw6wTamcNW+zcE=
+github.com/mailru/easyjson v0.9.0 h1:PrnmzHw7262yW8sTBwxi1PdJA3Iw/EKBa8psRf7d9a4=
+github.com/mailru/easyjson v0.9.0/go.mod h1:1+xMtQp2MRNVL/V1bOzuP3aP8VNwRW55fQUto+XFtTU=
+github.com/mattermost/xml-roundtrip-validator v0.1.0 h1:RXbVD2UAl7A7nOTR4u7E3ILa4IbtvKBHw64LDsmu9hU=
+github.com/mattermost/xml-roundtrip-validator v0.1.0/go.mod h1:qccnGMcpgwcNaBnxqpJpWWUiPNr5H3O8eDgGV9gT5To=
+github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
+github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
+github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
+github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
+github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc=
+github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
+github.com/mattn/go-sqlite3 v1.14.28 h1:ThEiQrnbtumT+QMknw63Befp/ce/nUPgBPMlRFEum7A=
+github.com/mattn/go-sqlite3 v1.14.28/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
+github.com/miekg/pkcs11 v1.1.1 h1:Ugu9pdy6vAYku5DEpVWVFPYnzV+bxB+iRdbuFSu7TvU=
+github.com/miekg/pkcs11 v1.1.1/go.mod h1:XsNlhZGX73bx86s2hdc/FuaLm2CPZJemRLMA+WTFxgs=
github.com/mitchellh/copystructure v1.2.0 h1:vpKXTN4ewci03Vljg/q9QvCGUDttBOGBIa15WveJJGw=
github.com/mitchellh/copystructure v1.2.0/go.mod h1:qLl+cE2AmVv+CoeAwDPye/v+N2HKCj9FbZEVFJRxO9s=
+github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
+github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
+github.com/mitchellh/go-wordwrap v1.0.1 h1:TLuKupo69TCn6TQSyGxwI1EblZZEsQ0vMlAFQflz0v0=
+github.com/mitchellh/go-wordwrap v1.0.1/go.mod h1:R62XHJLzvMFRBbcrT7m7WgmE1eOyTSsCt+hzestvNj0=
+github.com/mitchellh/mapstructure v1.5.1-0.20231216201459-8508981c8b6c h1:cqn374mizHuIWj+OSJCajGr/phAmuMug9qIX3l9CflE=
+github.com/mitchellh/mapstructure v1.5.1-0.20231216201459-8508981c8b6c/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/mitchellh/reflectwalk v1.0.2 h1:G2LzWKi524PWgd3mLHV8Y5k7s6XUvT0Gef6zxSIeXaQ=
github.com/mitchellh/reflectwalk v1.0.2/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
+github.com/moby/spdystream v0.5.0 h1:7r0J1Si3QO/kjRitvSLVVFUjxMEb/YLj6S9FF62JBCU=
+github.com/moby/spdystream v0.5.0/go.mod h1:xBAYlnt/ay+11ShkdFKNAG7LsyK/tmNBVvVOwrfMgdI=
+github.com/moby/term v0.5.2 h1:6qk3FJAFDs6i/q3W/pQ97SX192qKfZgGjCQqfCJkgzQ=
+github.com/moby/term v0.5.2/go.mod h1:d3djjFCrjnB+fl8NJux+EJzu0msscUP+f8it8hPkFLc=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
+github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 h1:n6/2gBQ3RWajuToeY6ZtZTIKv2v7ThUy5KKusIT0yc0=
+github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00/go.mod h1:Pm3mSP3c5uWn86xMLZ5Sa7JB9GsEZySvHYXCTK4E9q4=
+github.com/montanaflynn/stats v0.7.1 h1:etflOAAHORrCC44V+aR6Ftzort912ZU+YLiSTuV8eaE=
+github.com/montanaflynn/stats v0.7.1/go.mod h1:etXPPgVO6n31NxCd9KQUMvCM+ve0ruNzt6R8Bnaayow=
+github.com/muhlemmer/gu v0.3.1 h1:7EAqmFrW7n3hETvuAdmFmn4hS8W+z3LgKtrnow+YzNM=
+github.com/muhlemmer/gu v0.3.1/go.mod h1:YHtHR+gxM+bKEIIs7Hmi9sPT3ZDUvTN/i88wQpZkrdM=
+github.com/muhlemmer/httpforwarded v0.1.0 h1:x4DLrzXdliq8mprgUMR0olDvHGkou5BJsK/vWUetyzY=
+github.com/muhlemmer/httpforwarded v0.1.0/go.mod h1:yo9czKedo2pdZhoXe+yDkGVbU0TJ0q9oQ90BVoDEtw0=
+github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
+github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
+github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f h1:y5//uYreIhSUg3J1GEMiLbxo1LJaP8RfCpH6pymGZus=
+github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
+github.com/ohler55/ojg v1.26.8 h1:njM65m+ej8sLHiFZIhJK9UkwOmDPsUikjGbTgcwu8CU=
+github.com/ohler55/ojg v1.26.8/go.mod h1:/Y5dGWkekv9ocnUixuETqiL58f+5pAsUfg5P8e7Pa2o=
+github.com/oklog/ulid v1.3.1 h1:EGfNDEx6MqHz8B3uNV6QAib1UR2Lm97sHi3ocA6ESJ4=
+github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
+github.com/okta/okta-sdk-golang/v2 v2.20.0 h1:EDKM+uOPfihOMNwgHMdno+NAsIfyXkVnoFAYVPay0YU=
+github.com/okta/okta-sdk-golang/v2 v2.20.0/go.mod h1:FMy5hN5G8Rd/VoS0XrfyPPhIfOVo78ZK7lvwiQRS2+U=
+github.com/onsi/ginkgo/v2 v2.22.0 h1:Yed107/8DjTr0lKCNt7Dn8yQ6ybuDRQoMGrNFKzMfHg=
+github.com/onsi/ginkgo/v2 v2.22.0/go.mod h1:7Du3c42kxCUegi0IImZ1wUQzMBVecgIHjR1C+NkhLQo=
+github.com/onsi/gomega v1.36.1 h1:bJDPBO7ibjxcbHMgSCoo4Yj18UWbKDlLwX1x9sybDcw=
+github.com/onsi/gomega v1.36.1/go.mod h1:PvZbdDc8J6XJEpDK4HCuRBm8a6Fzp9/DmhC9C7yFlog=
+github.com/openai/openai-go v1.8.2 h1:UqSkJ1vCOPUpz9Ka5tS0324EJFEuOvMc+lA/EarJWP8=
+github.com/openai/openai-go v1.8.2/go.mod h1:g461MYGXEXBVdV5SaR/5tNzNbSfwTBBefwc+LlDCK0Y=
+github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
+github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
+github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
+github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
+github.com/opentracing/opentracing-go v1.2.0 h1:uEJPy/1a5RIPAJ0Ov+OIO8OxWu77jEv+1B0VhjKrZUs=
+github.com/opentracing/opentracing-go v1.2.0/go.mod h1:GxEUsuufX4nBwe+T+Wl9TAgYrxe9dPLANfrWvHYVTgc=
+github.com/oracle/oci-go-sdk/v65 v65.89.3 h1:KSUykb5Ou54jF4SeJNjBwcDg+umbAwcvT+xhrvNDog0=
+github.com/oracle/oci-go-sdk/v65 v65.89.3/go.mod h1:u6XRPsw9tPziBh76K7GrrRXPa8P8W3BQeqJ6ZZt9VLA=
+github.com/patrickmn/go-cache v2.1.1-0.20191004192108-46f407853014+incompatible h1:IWzUvJ72xMjmrjR9q3H1PF+jwdN0uNQiR2t1BLNalyo=
+github.com/patrickmn/go-cache v2.1.1-0.20191004192108-46f407853014+incompatible/go.mod h1:3Qf8kWWT7OJRJbdiICTKqZju1ZixQ/KpMGzzAfe6+WQ=
+github.com/pelletier/go-toml/v2 v2.2.3 h1:YmeHyLY8mFWbdkNWwpr+qIL2bEqT0o95WSdkNHvL12M=
+github.com/pelletier/go-toml/v2 v2.2.3/go.mod h1:MfCQTFTvCcUyyvvwm1+G6H/jORL20Xlb6rzQu9GuUkc=
+github.com/peterbourgon/diskv v2.0.1+incompatible h1:UBdAOUP5p4RWqPBg048CAvpKN+vxiaj6gdUUzhl4XmI=
+github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
+github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
+github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
+github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
+github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pkoukk/tiktoken-go v0.1.8 h1:85ENo+3FpWgAACBaEUVp+lctuTcYUO7BtmfhlN/QTRo=
+github.com/pkoukk/tiktoken-go v0.1.8/go.mod h1:9NiV+i9mJKGj1rYOT+njbv+ZwA/zJxYdewGl6qVatpg=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
-github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
-github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
+github.com/pquerna/otp v1.4.0 h1:wZvl1TIVxKRThZIBiwOOHOGP/1+nZyWBil9Y2XNEDzg=
+github.com/pquerna/otp v1.4.0/go.mod h1:dkJfzwRKNiegxyNb54X/3fLwhCynbMspSyWKnvi1AEg=
+github.com/prometheus/client_golang v1.23.0 h1:ust4zpdl9r4trLY/gSjlm07PuiBq2ynaXXlptpfy8Uc=
+github.com/prometheus/client_golang v1.23.0/go.mod h1:i/o0R9ByOnHX0McrTMTyhYvKE4haaf2mW08I+jGAjEE=
+github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
+github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
+github.com/prometheus/common v0.65.0 h1:QDwzd+G1twt//Kwj/Ww6E9FQq1iVMmODnILtW1t2VzE=
+github.com/prometheus/common v0.65.0/go.mod h1:0gZns+BLRQ3V6NdaerOhMbwwRbNh9hkGINtQAsP5GS8=
+github.com/prometheus/procfs v0.16.1 h1:hZ15bTNuirocR6u0JZ6BAHHmwS1p8B4P6MRqxtzMyRg=
+github.com/prometheus/procfs v0.16.1/go.mod h1:teAbpZRB1iIAJYREa1LsoWUXykVXA1KlTmWl8x/U+Is=
+github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
+github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
+github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
+github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
+github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
+github.com/rs/cors v1.11.1 h1:eU3gRzXLRK57F5rKMGMZURNdIG4EoAmX8k94r9wXWHA=
+github.com/rs/cors v1.11.1/go.mod h1:XyqrcTp5zjWr1wsJ8PIRZssZ8b/WMcMf71DJnit4EMU=
+github.com/rubenv/sql-migrate v1.8.0 h1:dXnYiJk9k3wetp7GfQbKJcPHjVJL6YK19tKj8t2Ns0o=
+github.com/rubenv/sql-migrate v1.8.0/go.mod h1:F2bGFBwCU+pnmbtNYDeKvSuvL6lBVtXDXUUv5t+u1qw=
+github.com/russellhaering/gosaml2 v0.10.0 h1:z7JTpKmC4JVG94tvSQz4lszUdKLt+uy5c6lEkhdEz3Y=
+github.com/russellhaering/gosaml2 v0.10.0/go.mod h1:XLwI/5aWV4E2X9p+qj6LgRwiYGv2nh4YS6pQBGlQ0Cc=
+github.com/russellhaering/goxmldsig v1.5.0 h1:AU2UkkYIUOTyZRbe08XMThaOCelArgvNfYapcmSjBNw=
+github.com/russellhaering/goxmldsig v1.5.0/go.mod h1:x98CjQNFJcWfMxeOrMnMKg70lvDP6tE0nTaeUnjXDmk=
+github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
+github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
+github.com/ryanuber/go-glob v1.0.0 h1:iQh3xXAumdQ+4Ufa5b25cRpC5TYKlno6hsv6Cb3pkBk=
+github.com/ryanuber/go-glob v1.0.0/go.mod h1:807d1WSdnB0XRJzKNil9Om6lcp/3a0v4qIHxIXzX/Yc=
+github.com/sagikazarmark/locafero v0.7.0 h1:5MqpDsTGNDhY8sGp0Aowyf0qKsPrhewaLSsFaodPcyo=
+github.com/sagikazarmark/locafero v0.7.0/go.mod h1:2za3Cg5rMaTMoG/2Ulr9AwtFaIppKXTRYnozin4aB5k=
+github.com/santhosh-tekuri/jsonschema/v6 v6.0.2 h1:KRzFb2m7YtdldCEkzs6KqmJw4nqEVZGK7IN2kJkjTuQ=
+github.com/santhosh-tekuri/jsonschema/v6 v6.0.2/go.mod h1:JXeL+ps8p7/KNMjDQk3TCwPpBy0wYklyWTfbkIzdIFU=
+github.com/sassoftware/relic v7.2.1+incompatible h1:Pwyh1F3I0r4clFJXkSI8bOyJINGqpgjJU3DYAZeI05A=
+github.com/sassoftware/relic v7.2.1+incompatible/go.mod h1:CWfAxv73/iLZ17rbyhIEq3K9hs5w6FpNMdUT//qR+zk=
+github.com/sassoftware/relic/v7 v7.6.2 h1:rS44Lbv9G9eXsukknS4mSjIAuuX+lMq/FnStgmZlUv4=
+github.com/sassoftware/relic/v7 v7.6.2/go.mod h1:kjmP0IBVkJZ6gXeAu35/KCEfca//+PKM6vTAsyDPY+k=
+github.com/scim2/filter-parser/v2 v2.2.0 h1:QGadEcsmypxg8gYChRSM2j1edLyE/2j72j+hdmI4BJM=
+github.com/scim2/filter-parser/v2 v2.2.0/go.mod h1:jWnkDToqX/Y0ugz0P5VvpVEUKcWcyHHj+X+je9ce5JA=
+github.com/secure-systems-lab/go-securesystemslib v0.9.1 h1:nZZaNz4DiERIQguNy0cL5qTdn9lR8XKHf4RUyG1Sx3g=
+github.com/secure-systems-lab/go-securesystemslib v0.9.1/go.mod h1:np53YzT0zXGMv6x4iEWc9Z59uR+x+ndLwCLqPYpLXVU=
+github.com/sergi/go-diff v1.3.1 h1:xkr+Oxo4BOQKmkn/B9eMK0g5Kg/983T9DqqPHwYqD+8=
+github.com/sergi/go-diff v1.3.1/go.mod h1:aMJSSKb2lpPvRNec0+w3fl7LP9IOFzdc9Pa4NFbPK1I=
+github.com/shibumi/go-pathspec v1.3.0 h1:QUyMZhFo0Md5B8zV8x2tesohbb5kfbpTi9rBnKh5dkI=
+github.com/shibumi/go-pathspec v1.3.0/go.mod h1:Xutfslp817l2I1cZvgcfeMQJG5QnU2lh5tVaaMCl3jE=
github.com/shopspring/decimal v1.4.0 h1:bxl37RwXBklmTi0C79JfXCEBD1cqqHt0bbgBAGFp81k=
github.com/shopspring/decimal v1.4.0/go.mod h1:gawqmDU56v4yIKSwfBSFip1HdCCXN8/+DMd9qYNcwME=
-github.com/spf13/cast v1.7.0 h1:ntdiHjuueXFgm5nzDRdOS4yfT43P5Fnud6DH50rz/7w=
-github.com/spf13/cast v1.7.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
-github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
-github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
+github.com/sigstore/protobuf-specs v0.5.0 h1:F8YTI65xOHw70NrvPwJ5PhAzsvTnuJMGLkA4FIkofAY=
+github.com/sigstore/protobuf-specs v0.5.0/go.mod h1:+gXR+38nIa2oEupqDdzg4qSBT0Os+sP7oYv6alWewWc=
+github.com/sigstore/rekor v1.4.1 h1:KK3McuHnptIE9mdNlrc9qh/OVE0AXf4rnScMxJE6xH4=
+github.com/sigstore/rekor v1.4.1/go.mod h1:/McBsz/vrtfi4EInxSIk/MGbDXzgv2+1FQUg1R/uSnE=
+github.com/sigstore/sigstore v1.9.5 h1:Wm1LT9yF4LhQdEMy5A2JeGRHTrAWGjT3ubE5JUSrGVU=
+github.com/sigstore/sigstore v1.9.5/go.mod h1:VtxgvGqCmEZN9X2zhFSOkfXxvKUjpy8RpUW39oCtoII=
+github.com/sigstore/sigstore-go v0.7.1 h1:lyzi3AjO6+BHc5zCf9fniycqPYOt3RaC08M/FRmQhVY=
+github.com/sigstore/sigstore-go v0.7.1/go.mod h1:AIRj4I3LC82qd07VFm3T2zXYiddxeBV1k/eoS8nTz0E=
+github.com/sigstore/sigstore/pkg/signature/kms/aws v1.9.5 h1:qp2VFyKuFQvTGmZwk5Q7m5nE4NwnF9tHwkyz0gtWAck=
+github.com/sigstore/sigstore/pkg/signature/kms/aws v1.9.5/go.mod h1:DKlQjjr+GsWljEYPycI0Sf8URLCk4EbGA9qYjF47j4g=
+github.com/sigstore/sigstore/pkg/signature/kms/azure v1.9.5 h1:CRZcdYn5AOptStsLRAAACudAVmb1qUbhMlzrvm7ju3o=
+github.com/sigstore/sigstore/pkg/signature/kms/azure v1.9.5/go.mod h1:b9rFfITq2fp1M3oJmq6lFFhSrAz5vOEJH1qzbMsZWN4=
+github.com/sigstore/sigstore/pkg/signature/kms/gcp v1.9.6-0.20250729224751-181c5d3339b3 h1:a7Yz8C0aBa/LjeiTa9ZLYi9B74GNhFRnUIUdvN6ddVk=
+github.com/sigstore/sigstore/pkg/signature/kms/gcp v1.9.6-0.20250729224751-181c5d3339b3/go.mod h1:tRtJzSZ48MXJV9bmS8pkb3mP36PCad/Cs+BmVJ3Z4O4=
+github.com/sigstore/sigstore/pkg/signature/kms/hashivault v1.9.5 h1:S2ukEfN1orLKw2wEQIUHDDlzk0YcylhcheeZ5TGk8LI=
+github.com/sigstore/sigstore/pkg/signature/kms/hashivault v1.9.5/go.mod h1:m7sQxVJmDa+rsmS1m6biQxaLX83pzNS7ThUEyjOqkCU=
+github.com/sigstore/timestamp-authority v1.2.5 h1:W22JmwRv1Salr/NFFuP7iJuhytcZszQjldoB8GiEdnw=
+github.com/sigstore/timestamp-authority v1.2.5/go.mod h1:gWPKWq4HMWgPCETre0AakgBzcr9DRqHrsgbrRqsigOs=
+github.com/sijms/go-ora/v2 v2.8.24 h1:TODRWjWGwJ1VlBOhbTLat+diTYe8HXq2soJeB+HMjnw=
+github.com/sijms/go-ora/v2 v2.8.24/go.mod h1:QgFInVi3ZWyqAiJwzBQA+nbKYKH77tdp1PYoCqhR2dU=
+github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
+github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
+github.com/sony/gobreaker v0.5.0 h1:dRCvqm0P490vZPmy7ppEk2qCnCieBooFJ+YoXGYB+yg=
+github.com/sony/gobreaker v0.5.0/go.mod h1:ZKptC7FHNvhBz7dN2LGjPVBz2sZJmc0/PkyDJOjmxWY=
+github.com/sourcegraph/conc v0.3.0 h1:OQTbbt6P72L20UqAkXXuLOj79LfEanQ+YQFNpLA9ySo=
+github.com/sourcegraph/conc v0.3.0/go.mod h1:Sdozi7LEKbFPqYX2/J+iBAM6HpqSLTASQIKqDmF7Mt0=
+github.com/spf13/afero v1.12.0 h1:UcOPyRBYczmFn6yvphxkn9ZEOY65cpwGKb5mL36mrqs=
+github.com/spf13/afero v1.12.0/go.mod h1:ZTlWwG4/ahT8W7T0WQ5uYmjI9duaLQGy3Q2OAl4sk/4=
+github.com/spf13/cast v1.7.1 h1:cuNEagBQEHWN1FnbGEjCXL2szYEXqfJPbP2HNUaca9Y=
+github.com/spf13/cast v1.7.1/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
+github.com/spf13/cobra v1.9.1 h1:CXSaggrXdbHK9CF+8ywj8Amf7PBRmPCOJugH954Nnlo=
+github.com/spf13/cobra v1.9.1/go.mod h1:nDyEzZ8ogv936Cinf6g1RU9MRY64Ir93oCnqb9wxYW0=
+github.com/spf13/pflag v1.0.6/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
+github.com/spf13/pflag v1.0.7 h1:vN6T9TfwStFPFM5XzjsvmzZkLuaLX+HS+0SeFLRgU6M=
+github.com/spf13/pflag v1.0.7/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
+github.com/spf13/viper v1.20.1 h1:ZMi+z/lvLyPSCoNtFCpqjy0S4kPbirhpTMwl8BkW9X4=
+github.com/spf13/viper v1.20.1/go.mod h1:P9Mdzt1zoHIG8m2eZQinpiBjo6kCmZSKBClNNqjJvu4=
+github.com/spiffe/go-spiffe/v2 v2.5.0 h1:N2I01KCUkv1FAjZXJMwh95KK1ZIQLYbPfhaxw8WS0hE=
+github.com/spiffe/go-spiffe/v2 v2.5.0/go.mod h1:P+NxobPc6wXhVtINNtFjNWGBTreew1GBUCwT2wPmb7g=
+github.com/stoewer/go-strcase v1.3.0 h1:g0eASXYtp+yvN9fK8sH94oCIk0fau9uV1/ZdJ0AVEzs=
+github.com/stoewer/go-strcase v1.3.0/go.mod h1:fAH5hQ5pehh+j3nZfvwdk2RgEgQjAoM8wodgtPmh1xo=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
+github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
+github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
+github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
+github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
-github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
-github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
+github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
+github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
+github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
+github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
+github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
+github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
+github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
+github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
+github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
+github.com/thales-e-security/pool v0.0.2 h1:RAPs4q2EbWsTit6tpzuvTFlgFRJ3S8Evf5gtvVDbmPg=
+github.com/thales-e-security/pool v0.0.2/go.mod h1:qtpMm2+thHtqhLzTwgDBj/OuNnMpupY8mv0Phz0gjhU=
+github.com/theupdateframework/go-tuf v0.7.0 h1:CqbQFrWo1ae3/I0UCblSbczevCCbS31Qvs5LdxRWqRI=
+github.com/theupdateframework/go-tuf v0.7.0/go.mod h1:uEB7WSY+7ZIugK6R1hiBMBjQftaFzn7ZCDJcp1tCUug=
+github.com/theupdateframework/go-tuf/v2 v2.0.2 h1:PyNnjV9BJNzN1ZE6BcWK+5JbF+if370jjzO84SS+Ebo=
+github.com/theupdateframework/go-tuf/v2 v2.0.2/go.mod h1:baB22nBHeHBCeuGZcIlctNq4P61PcOdyARlplg5xmLA=
+github.com/tidwall/gjson v1.14.2/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
+github.com/tidwall/gjson v1.14.4 h1:uo0p8EbA09J7RQaflQ1aBRffTR7xedD2bcIVSYxLnkM=
+github.com/tidwall/gjson v1.14.4/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
+github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA=
+github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
+github.com/tidwall/pretty v1.2.0/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
+github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4=
+github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
+github.com/tidwall/sjson v1.2.5 h1:kLy8mja+1c9jlljvWTlSazM7cKDRfJuR/bOJhcY5NcY=
+github.com/tidwall/sjson v1.2.5/go.mod h1:Fvgq9kS/6ociJEDnK0Fk1cpYF4FIW6ZF7LAe+6jwd28=
+github.com/tink-crypto/tink-go-awskms/v2 v2.1.0 h1:N9UxlsOzu5mttdjhxkDLbzwtEecuXmlxZVo/ds7JKJI=
+github.com/tink-crypto/tink-go-awskms/v2 v2.1.0/go.mod h1:PxSp9GlOkKL9rlybW804uspnHuO9nbD98V/fDX4uSis=
+github.com/tink-crypto/tink-go-gcpkms/v2 v2.2.0 h1:3B9i6XBXNTRspfkTC0asN5W0K6GhOSgcujNiECNRNb0=
+github.com/tink-crypto/tink-go-gcpkms/v2 v2.2.0/go.mod h1:jY5YN2BqD/KSCHM9SqZPIpJNG/u3zwfLXHgws4x2IRw=
+github.com/tink-crypto/tink-go/v2 v2.4.0 h1:8VPZeZI4EeZ8P/vB6SIkhlStrJfivTJn+cQ4dtyHNh0=
+github.com/tink-crypto/tink-go/v2 v2.4.0/go.mod h1:l//evrF2Y3MjdbpNDNGnKgCpo5zSmvUvnQ4MU+yE2sw=
+github.com/titanous/rocacheck v0.0.0-20171023193734-afe73141d399 h1:e/5i7d4oYZ+C1wj2THlRK+oAhjeS/TRQwMfkIuet3w0=
+github.com/titanous/rocacheck v0.0.0-20171023193734-afe73141d399/go.mod h1:LdwHTNJT99C5fTAzDz0ud328OgXz+gierycbcIx2fRs=
+github.com/transparency-dev/merkle v0.0.2 h1:Q9nBoQcZcgPamMkGn7ghV8XiTZ/kRxn1yCG81+twTK4=
+github.com/transparency-dev/merkle v0.0.2/go.mod h1:pqSy+OXefQ1EDUVmAJ8MUhHB9TXGuzVAT58PqBoHz1A=
github.com/waigani/diffparser v0.0.0-20190828052634-7391f219313d h1:xQcF7b7cZLWZG/+7A4G7un1qmEDYHIvId9qxRS1mZMs=
github.com/waigani/diffparser v0.0.0-20190828052634-7391f219313d/go.mod h1:BzSc3WEF8R+lCaP5iGFRxd5kIXy4JKOZAwNe1w0cdc0=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
+github.com/xdg-go/pbkdf2 v1.0.0 h1:Su7DPu48wXMwC3bs7MCNG+z4FhcyEuz5dlvchbq0B0c=
+github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
+github.com/xdg-go/scram v1.1.2 h1:FHX5I5B4i4hKRVRBCFRxq1iQRej7WO3hhBuJf+UUySY=
+github.com/xdg-go/scram v1.1.2/go.mod h1:RT/sEzTbU5y00aCK8UOx6R7YryM0iF1N2MOmC3kKLN4=
+github.com/xdg-go/stringprep v1.0.4 h1:XLI/Ng3O1Atzq0oBs3TWm+5ZVgkq2aqdlvP9JtoZ6c8=
+github.com/xdg-go/stringprep v1.0.4/go.mod h1:mPGuuIYwz7CmR2bT9j4GbQqutWS1zV24gijq1dTyGkM=
github.com/xhit/go-str2duration/v2 v2.1.0 h1:lxklc02Drh6ynqX+DdPyp5pCKLUQpRT8bp8Ydu2Bstc=
github.com/xhit/go-str2duration/v2 v2.1.0/go.mod h1:ohY8p+0f07DiV6Em5LKB0s2YpLtXVyJfNt1+BlmyAsU=
+github.com/xlab/treeprint v1.2.0 h1:HzHnuAF1plUN2zGlAFHbSQP2qJ0ZAD3XF5XD7OesXRQ=
+github.com/xlab/treeprint v1.2.0/go.mod h1:gj5Gd3gPdKtR1ikdDK6fnFLdmIS0X30kTTuNd/WEJu0=
+github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 h1:ilQV1hzziu+LLM3zUTJ0trRztfwgjqKnBWNtSRkbmwM=
+github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78/go.mod h1:aL8wCCfTfSfmXjznFBSZNN13rSJjlIOI1fUNAtF7rmI=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
+github.com/zalando/go-keyring v0.2.3 h1:v9CUu9phlABObO4LPWycf+zwMG7nlbb3t/B5wa97yms=
+github.com/zalando/go-keyring v0.2.3/go.mod h1:HL4k+OXQfJUWaMnqyuSOc0drfGPX2b51Du6K+MRgZMk=
+github.com/zeebo/errs v1.4.0 h1:XNdoD/RRMKP7HD0UhJnIzUy74ISdGGxURlYG8HSWSfM=
+github.com/zeebo/errs v1.4.0/go.mod h1:sgbWHsvVuTPHcqJJGQ1WhI5KbWlHYz+2+2C/LSEtCw4=
+github.com/zitadel/logging v0.6.2 h1:MW2kDDR0ieQynPZ0KIZPrh9ote2WkxfBif5QoARDQcU=
+github.com/zitadel/logging v0.6.2/go.mod h1:z6VWLWUkJpnNVDSLzrPSQSQyttysKZ6bCRongw0ROK4=
+github.com/zitadel/oidc/v3 v3.43.0 h1:LokviPoiTNNPbIAMO/eb6Kq9PNWPWp0mA1oWtdLc+Qs=
+github.com/zitadel/oidc/v3 v3.43.0/go.mod h1:5ki8s9CWoB4iGmtULndiVxwM8xt7IylZIaudro7jEq4=
+github.com/zitadel/schema v1.3.1 h1:QT3kwiRIRXXLVAs6gCK/u044WmUVh6IlbLXUsn6yRQU=
+github.com/zitadel/schema v1.3.1/go.mod h1:071u7D2LQacy1HAN+YnMd/mx1qVE2isb0Mjeqg46xnU=
+gitlab.com/gitlab-org/api/client-go v0.127.0 h1:8xnxcNKGF2gDazEoMs+hOZfOspSSw8D0vAoWhQk9U+U=
+gitlab.com/gitlab-org/api/client-go v0.127.0/go.mod h1:bYC6fPORKSmtuPRyD9Z2rtbAjE7UeNatu2VWHRf4/LE=
+go.mongodb.org/mongo-driver v1.17.4 h1:jUorfmVzljjr0FLzYQsGP8cgN/qzzxlY9Vh0C9KFXVw=
+go.mongodb.org/mongo-driver v1.17.4/go.mod h1:Hy04i7O2kC4RS06ZrhPRqj/u4DTYkFDAAccj+rVKqgQ=
+go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
+go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
+go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 h1:q4XOmH/0opmeuJtPsbFNivyl7bCt7yRBbeEm2sC/XtQ=
+go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0/go.mod h1:snMWehoOh2wsEwnvvwtDyFCxVeDAODenXHtn5vzrKjo=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 h1:F7Jx+6hwnZ41NSFTO5q4LYDtJRXBf2PD0rNBkeB/lus= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0/go.mod h1:UHB22Z8QsdRDrnAtX4PntOl36ajSxcdUMt1sF7Y6E7Q= +go.opentelemetry.io/otel v1.37.0 h1:9zhNfelUvx0KBfu/gb+ZgeAfAgtWrfHJZcAqFC228wQ= +go.opentelemetry.io/otel v1.37.0/go.mod h1:ehE/umFRLnuLa/vSccNq9oS1ErUlkkK71gMcN34UG8I= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.37.0 h1:Ahq7pZmv87yiyn3jeFz/LekZmPLLdKejuO3NcK9MssM= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.37.0/go.mod h1:MJTqhM0im3mRLw1i8uGHnCvUEeS7VwRyxlLC78PA18M= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.37.0 h1:EtFWSnwW9hGObjkIdmlnWSydO+Qs8OwzfzXLUPg4xOc= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.37.0/go.mod h1:QjUEoiGCPkvFZ/MjK6ZZfNOS6mfVEVKYE99dFhuN2LI= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.37.0 h1:bDMKF3RUSxshZ5OjOTi8rsHGaPKsAt76FaqgvIUySLc= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.37.0/go.mod h1:dDT67G/IkA46Mr2l9Uj7HsQVwsjASyV9SjGofsiUZDA= +go.opentelemetry.io/otel/metric v1.37.0 h1:mvwbQS5m0tbmqML4NqK+e3aDiO02vsf/WgbsdpcPoZE= +go.opentelemetry.io/otel/metric v1.37.0/go.mod h1:04wGrZurHYKOc+RKeye86GwKiTb9FKm1WHtO+4EVr2E= +go.opentelemetry.io/otel/sdk v1.37.0 h1:ItB0QUqnjesGRvNcmAcU0LyvkVyGJ2xftD29bWdDvKI= +go.opentelemetry.io/otel/sdk v1.37.0/go.mod h1:VredYzxUvuo2q3WRcDnKDjbdvmO0sCzOvVAiY+yUkAg= +go.opentelemetry.io/otel/sdk/metric v1.37.0 h1:90lI228XrB9jCMuSdA0673aubgRobVZFhbjxHHspCPc= +go.opentelemetry.io/otel/sdk/metric v1.37.0/go.mod h1:cNen4ZWfiD37l5NhS+Keb5RXVWZWpRE+9WyVCpbo5ps= +go.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mxVK7z4= +go.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0= +go.opentelemetry.io/proto/otlp v1.7.0 h1:jX1VolD6nHuFzOYso2E73H85i92Mv8JQYk0K9vz09os= +go.opentelemetry.io/proto/otlp 
v1.7.0/go.mod h1:fSKjH6YJ7HDlwzltzyMj036AJ3ejJLCgCSHGj4efDDo= +go.step.sm/crypto v0.70.0 h1:Q9Ft7N637mucyZcHZd1+0VVQJVwDCKqcb9CYcYi7cds= +go.step.sm/crypto v0.70.0/go.mod h1:pzfUhS5/ue7ev64PLlEgXvhx1opwbhFCjkvlhsxVds0= +go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= +go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= +go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0= +go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y= +go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8= +go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E= +go.yaml.in/yaml/v2 v2.4.2 h1:DzmwEr2rDGHl7lsFgAHxmNz/1NlQ7xLIrlN2h5d1eGI= +go.yaml.in/yaml/v2 v2.4.2/go.mod h1:081UH+NErpNdqlCXm3TtEran0rJZGxAYx9hb/ELlsPU= +go.yaml.in/yaml/v3 v3.0.3 h1:bXOww4E/J3f66rav3pX3m8w6jDE4knZjGOw8b5Y6iNE= +go.yaml.in/yaml/v3 v3.0.3/go.mod h1:tBHosrYAkRZjRAOREWbDnBXUf08JOwYq++0QNwQiWzI= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20210817164053-32db794688a5/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= -golang.org/x/crypto v0.36.0 h1:AnAEvhDddvBdpY+uR+MyHmuZzzNqXSe/GvuDeob5L34= -golang.org/x/crypto v0.36.0/go.mod h1:Y4J0ReaxCR1IMaabaSMugxJES1EpwhBHhv2bDHklZvc= +golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= +golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58= +golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU= +golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q= +golang.org/x/crypto v0.45.0/go.mod 
h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4= +golang.org/x/exp v0.0.0-20250620022241-b7579e27df2b h1:M2rDM6z3Fhozi9O7NWsxAkg/yqS/lQJ6PmkyIV3YP+o= +golang.org/x/exp v0.0.0-20250620022241-b7579e27df2b/go.mod h1:3//PLf8L/X+8b4vuAfHzxeRUl04Adcb341+IGKfnqS8= golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= -golang.org/x/mod v0.24.0 h1:ZfthKaKaT4NrhGVZHO1/WDTwGES4De8KtWO0SIbNJMU= -golang.org/x/mod v0.24.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww= +golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= +golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA= +golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w= golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= -golang.org/x/net v0.38.0 h1:vRMAPTMaeGqVhG5QyLJHqNDwecKTomGeqbnfZyKlBI8= -golang.org/x/net v0.38.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8= +golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= +golang.org/x/net v0.6.0/go.mod 
h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= +golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= +golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= +golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY= +golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= -golang.org/x/oauth2 v0.29.0 h1:WdYw2tdTK1S8olAzWHdgeqfy+Mtm9XNhv/xJsY65d98= -golang.org/x/oauth2 v0.29.0/go.mod h1:onh5ek6nERTohokkhCD/y2cV4Do3fxFHFuAejCkRWT8= +golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI= +golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.12.0 h1:MHc5BpPuC30uJk597Ri8TV3CNZcTLu6B6z4lJy+g6Jw= -golang.org/x/sync v0.12.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA= +golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I= +golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys 
v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc= +golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= +golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= +golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= +golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= +golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk= +golang.org/x/term v0.37.0 h1:8EGAD0qCmHYZg6J17DvsMy9/wJ7/D/4pV/wfnld5lTU= +golang.org/x/term v0.37.0/go.mod h1:5pB4lxRNYYVZuTLmy8oR2BH8dflOR+IbTYFD8fi3254= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/text v0.3.3/go.mod 
h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= -golang.org/x/text v0.23.0 h1:D71I7dUrlY+VX0gQShAThNGHFxZ13dGLBHQLVl1mJlY= -golang.org/x/text v0.23.0/go.mod h1:/BLNzu4aZCJ1+kcD0DNRotWKage4q2rGVAg4o22unh4= +golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= +golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ= +golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU= +golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM= +golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM= +golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE= +golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= -golang.org/x/tools v0.26.0 h1:v/60pFQmzmT9ExmjDv2gGIfi3OqfKoEP6I5+umXlbnQ= -golang.org/x/tools v0.26.0/go.mod h1:TPVVj70c7JJ3WCazhD8OdXcZg/og+b9+tH/KxylGwH0= +golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= +golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= +golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ= +golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors 
v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk= +gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E= +google.golang.org/api v0.248.0 h1:hUotakSkcwGdYUqzCRc5yGYsg4wXxpkKlW5ryVqvC1Y= +google.golang.org/api v0.248.0/go.mod h1:yAFUAF56Li7IuIQbTFoLwXTCI6XCFKueOlS7S9e4F9k= google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= +google.golang.org/genproto v0.0.0-20250603155806-513f23925822 h1:rHWScKit0gvAPuOnu87KpaYtjK5zBMLcULh7gxkCXu4= +google.golang.org/genproto v0.0.0-20250603155806-513f23925822/go.mod h1:HubltRL7rMh0LfnQPkMH4NPDFEWp0jw3vixw7jEM53s= +google.golang.org/genproto/googleapis/api v0.0.0-20250818200422-3122310a409c h1:AtEkQdl5b6zsybXcbz00j1LwNodDuH6hVifIaNqk7NQ= +google.golang.org/genproto/googleapis/api v0.0.0-20250818200422-3122310a409c/go.mod h1:ea2MjsO70ssTfCjiwHgI0ZFqcw45Ksuk2ckf9G468GA= +google.golang.org/genproto/googleapis/rpc v0.0.0-20250818200422-3122310a409c h1:qXWI/sQtv5UKboZ/zUk7h+mrf/lXORyI+n9DKDAusdg= +google.golang.org/genproto/googleapis/rpc v0.0.0-20250818200422-3122310a409c/go.mod h1:gw1tLEfykwDz2ET4a12jcXt4couGAm7IwsVaTy0Sflo= +google.golang.org/grpc v1.75.0 h1:+TW+dqTd2Biwe6KKfhE5JpiYIBWq865PhKGSXiivqt4= +google.golang.org/grpc v1.75.0/go.mod h1:JtPAzKiq4v1xcAB2hydNlWI2RnF85XXcV0mhKXr2ecQ= +google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.5.1 h1:F29+wU6Ee6qgu9TddPgooOdaqsxTMunOoj8KA5yuS5A= +google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.5.1/go.mod h1:5KF+wpkbTSbGcR9zteSqZV6fqFOWBl4Yde8En8MryZA= +google.golang.org/protobuf v1.36.10 h1:AYd7cD/uASjIL6Q9LiTjz8JLcrh/88q5UObnmY3aOOE= +google.golang.org/protobuf v1.36.10/go.mod 
h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= +gopkg.in/evanphx/json-patch.v4 v4.12.0 h1:n6jtcsulIzXPJaxegRbvFNNrZDjbij7ny3gmSPG+6V4= +gopkg.in/evanphx/json-patch.v4 v4.12.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M= gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= gopkg.in/yaml.v1 v1.0.0-20140924161607-9f9df34309c0/go.mod h1:WDnlLJ4WF5VGsH/HVa3CI79GS0ol3YnhVnKP89i0kNg= gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= +gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= +gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -helm.sh/helm/v3 v3.17.3 h1:3n5rW3D0ArjFl0p4/oWO8IbY/HKaNNwJtOQFdH2AZHg= -helm.sh/helm/v3 v3.17.3/go.mod h1:+uJKMH/UiMzZQOALR3XUf3BLIoczI2RKKD6bMhPh4G8= +gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo= +gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw= +helm.sh/helm/v3 v3.18.5 h1:Cc3Z5vd6kDrZq9wO9KxKLNEickiTho6/H/dBNRVSos4= +helm.sh/helm/v3 v3.18.5/go.mod h1:L/dXDR2r539oPlFP1PJqKAC1CUgqHJDLkxKpDGrWnyg= howett.net/plist v1.0.1 h1:37GdZ8tP09Q35o9ych3ehygcsL+HqKSwzctveSlarvM= howett.net/plist v1.0.1/go.mod h1:lqaXoTrLY4hg8tnEzNru53gicrbv7rrk+2xJA/7hw9g= -k8s.io/apiextensions-apiserver v0.33.0 
h1:d2qpYL7Mngbsc1taA4IjJPRJ9ilnsXIrndH+r9IimOs= -k8s.io/apiextensions-apiserver v0.33.0/go.mod h1:VeJ8u9dEEN+tbETo+lFkwaaZPg6uFKLGj5vyNEwwSzc= -k8s.io/apimachinery v0.33.0 h1:1a6kHrJxb2hs4t8EE5wuR/WxKDwGN1FKH3JvDtA0CIQ= -k8s.io/apimachinery v0.33.0/go.mod h1:BHW0YOu7n22fFv/JkYOEfkUYNRN0fj0BlvMFWA7b+SM= +k8s.io/api v0.33.3 h1:SRd5t//hhkI1buzxb288fy2xvjubstenEKL9K51KBI8= +k8s.io/api v0.33.3/go.mod h1:01Y/iLUjNBM3TAvypct7DIj0M0NIZc+PzAHCIo0CYGE= +k8s.io/apiextensions-apiserver v0.33.3 h1:qmOcAHN6DjfD0v9kxL5udB27SRP6SG/MTopmge3MwEs= +k8s.io/apiextensions-apiserver v0.33.3/go.mod h1:oROuctgo27mUsyp9+Obahos6CWcMISSAPzQ77CAQGz8= +k8s.io/apimachinery v0.33.3 h1:4ZSrmNa0c/ZpZJhAgRdcsFcZOw1PQU1bALVQ0B3I5LA= +k8s.io/apimachinery v0.33.3/go.mod h1:BHW0YOu7n22fFv/JkYOEfkUYNRN0fj0BlvMFWA7b+SM= +k8s.io/apiserver v0.33.3 h1:Wv0hGc+QFdMJB4ZSiHrCgN3zL3QRatu56+rpccKC3J4= +k8s.io/apiserver v0.33.3/go.mod h1:05632ifFEe6TxwjdAIrwINHWE2hLwyADFk5mBsQa15E= +k8s.io/cli-runtime v0.33.3 h1:Dgy4vPjNIu8LMJBSvs8W0LcdV0PX/8aGG1DA1W8lklA= +k8s.io/cli-runtime v0.33.3/go.mod h1:yklhLklD4vLS8HNGgC9wGiuHWze4g7x6XQZ+8edsKEo= +k8s.io/client-go v0.33.3 h1:M5AfDnKfYmVJif92ngN532gFqakcGi6RvaOF16efrpA= +k8s.io/client-go v0.33.3/go.mod h1:luqKBQggEf3shbxHY4uVENAxrDISLOarxpTKMiUuujg= +k8s.io/component-base v0.33.3 h1:mlAuyJqyPlKZM7FyaoM/LcunZaaY353RXiOd2+B5tGA= +k8s.io/component-base v0.33.3/go.mod h1:ktBVsBzkI3imDuxYXmVxZ2zxJnYTZ4HAsVj9iF09qp4= k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk= k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= -k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 h1:M3sRQVHv7vB20Xc2ybTt7ODCeFj6JSWYFzOFnYeS6Ro= -k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff h1:/usPimJzUKKu+m+TE36gUyGcf03XZEP0ZIKgKj35LS4= +k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff/go.mod h1:5jIi+8yX4RIb8wk3XwBo5Pq2ccx4FP10ohkbSKCZoK8= 
+k8s.io/kubectl v0.33.3 h1:r/phHvH1iU7gO/l7tTjQk2K01ER7/OAJi8uFHHyWSac= +k8s.io/kubectl v0.33.3/go.mod h1:euj2bG56L6kUGOE/ckZbCoudPwuj4Kud7BR0GzyNiT0= +k8s.io/utils v0.0.0-20241210054802-24370beab758 h1:sdbE21q2nlQtFh65saZY+rRM6x6aJJI8IUa1AmH/qa0= +k8s.io/utils v0.0.0-20241210054802-24370beab758/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +mvdan.cc/sh/v3 v3.7.0 h1:lSTjdP/1xsddtaKfGg7Myu7DnlHItd3/M2tomOcNNBg= +mvdan.cc/sh/v3 v3.7.0/go.mod h1:K2gwkaesF/D7av7Kxl0HbF5kGOd2ArupNTX3X44+8l8= +oras.land/oras-go/v2 v2.6.0 h1:X4ELRsiGkrbeox69+9tzTu492FMUu7zJQW6eJU+I2oc= +oras.land/oras-go/v2 v2.6.0/go.mod h1:magiQDfG6H1O9APp+rOsvCPcW1GD2MM7vgnKY0Y+u1o= +pluginrpc.com/pluginrpc v0.5.0 h1:tOQj2D35hOmvHyPu8e7ohW2/QvAnEtKscy2IJYWQ2yo= +pluginrpc.com/pluginrpc v0.5.0/go.mod h1:UNWZ941hcVAoOZUn8YZsMmOZBzbUjQa3XMns8RQLp9o= +sigs.k8s.io/controller-runtime v0.20.4 h1:X3c+Odnxz+iPTRobG4tp092+CvBU9UK0t/bRf+n0DGU= +sigs.k8s.io/controller-runtime v0.20.4/go.mod h1:xg2XB0K5ShQzAgsoujxuKN4LNXR2LfwwHsPj7Iaw+XY= sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 h1:/Rv+M11QRah1itp8VhT6HoVx1Ray9eB4DBr+K+/sCJ8= sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3/go.mod h1:18nIHnGi6636UCz6m8i4DhaJ65T6EruyzmoQqI2BVDo= +sigs.k8s.io/kustomize/api v0.19.0 h1:F+2HB2mU1MSiR9Hp1NEgoU2q9ItNOaBJl0I4Dlus5SQ= +sigs.k8s.io/kustomize/api v0.19.0/go.mod h1:/BbwnivGVcBh1r+8m3tH1VNxJmHSk1PzP5fkP6lbL1o= +sigs.k8s.io/kustomize/kyaml v0.19.0 h1:RFge5qsO1uHhwJsu3ipV7RNolC7Uozc0jUBC/61XSlA= +sigs.k8s.io/kustomize/kyaml v0.19.0/go.mod h1:FeKD5jEOH+FbZPpqUghBP8mrLjJ3+zD3/rf9NNu1cwY= sigs.k8s.io/randfill v0.0.0-20250304075658-069ef1bbf016/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY= sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU= sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY= sigs.k8s.io/structured-merge-diff/v4 v4.6.0 h1:IUA9nvMmnKWcj5jl84xn+T5MnlZKThmUW1TdblaLVAc= sigs.k8s.io/structured-merge-diff/v4 v4.6.0/go.mod 
h1:dDy58f92j70zLsuZVuUX5Wp9vtxXpaZnkPGWeqDfCps= -sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E= sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY= +sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs= +sigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4= +software.sslmate.com/src/go-pkcs12 v0.5.0 h1:EC6R394xgENTpZ4RltKydeDUjtlM5drOYIG9c6TVj2M= +software.sslmate.com/src/go-pkcs12 v0.5.0/go.mod h1:Qiz0EyvDRJjjxGyUQa2cCNZn/wMyzrRJ/qcDXOQazLI= diff --git a/build.assets/versions.mk b/build.assets/versions.mk index f5a118bc18d5d..704b9b393a366 100644 --- a/build.assets/versions.mk +++ b/build.assets/versions.mk @@ -3,22 +3,22 @@ # Keep versions in sync with devbox.json, when applicable. # Sync with devbox.json. -GOLANG_VERSION ?= go1.24.3 +GOLANG_VERSION ?= go1.24.11 GOLANGCI_LINT_VERSION ?= v2.1.5 # NOTE: Remember to update engines.node in package.json to match the major version. -NODE_VERSION ?= 22.14.0 +NODE_VERSION ?= 22.21.0 # Run lint-rust check locally before merging code after you bump this. RUST_VERSION ?= 1.81.0 -WASM_PACK_VERSION ?= 0.12.1 +WASM_OPT_VERSION ?= 0.116.1 LIBBPF_VERSION ?= 1.2.2 LIBPCSCLITE_VERSION ?= 1.9.9-teleport DEVTOOLSET ?= devtoolset-12 # Protogen related versions. -BUF_VERSION ?= v1.50.1 +BUF_VERSION ?= v1.56.0 # Keep in sync with api/proto/buf.yaml (and buf.lock). GOGO_PROTO_TAG ?= v1.3.2 NODE_GRPC_TOOLS_VERSION ?= 1.12.4 diff --git a/build.assets/windows/build.ps1 b/build.assets/windows/build.ps1 index db5ad6a23a5c5..30efd34ab5a5b 100644 --- a/build.assets/windows/build.ps1 +++ b/build.assets/windows/build.ps1 @@ -164,23 +164,15 @@ function Enable-Node { } } -function Install-WasmPack { +function Install-WasmDeps { <# .SYNOPSIS - Builds and installs wasm-pack and dependent tooling. + Builds and installs wasm-bindgen-cli, wasm-opt, and wasm32-unknown-unknown toolchain. 
#> - [CmdletBinding()] - param( - [Parameter(Mandatory)] - [string] $WasmPackVersion - ) - begin { - Write-Host "::group::Installing wasm-pack $WasmPackVersion" - # TODO(camscale): Don't hard-code wasm-binden-cli version - cargo install wasm-bindgen-cli --locked --version 0.2.99 - cargo install wasm-pack --locked --version "$WasmPackVersion" - Write-Host "::endgroup::" - } + + Write-Host "::group::Installing wasm-bindgen-cli, wasm-opt, and wasm32-unknown-unknown toolchain" + make -C "$TeleportSourceDirectory" ensure-wasm-deps + Write-Host "::endgroup::" } function Install-Wintun { @@ -349,8 +341,7 @@ function Install-BuildRequirements { $GoVersion = $(make --no-print-directory -C "$TeleportSourceDirectory/build.assets" print-go-version).TrimStart("go") Install-Go -GoVersion "$GoVersion" -ToolchainDir "$InstallDirectory" - $WasmPackVersion = $(make --no-print-directory -C "$TeleportSourceDirectory/build.assets" print-wasm-pack-version).Trim() - Install-WasmPack -WasmPackVersion "$WasmPackVersion" + Install-WasmDeps } Write-Host $("All build requirements installed in {0:g}" -f $CommandDuration) } diff --git a/constants.go b/constants.go index 4ddbea83d3002..c52e95ad3df36 100644 --- a/constants.go +++ b/constants.go @@ -120,6 +120,9 @@ const ( // ComponentProxy is SSH proxy (SSH server forwarding connections) ComponentProxy = "proxy" + // ComponentRelay is the component name for the relay service. + ComponentRelay = "relay" + // ComponentProxyPeer is the proxy peering component of the proxy service ComponentProxyPeer = "proxy:peer" @@ -298,6 +301,12 @@ const ( // ComponentForwardingGit represents the SSH proxy that forwards Git commands. ComponentForwardingGit = "git:forward" + // ComponentMCP represents the MCP server handler. 
+ ComponentMCP = "mcp" + + // ComponentRecordingEncryption represents recording encryption + ComponentRecordingEncryption = "recording-encryption" + // VerboseLogsEnvVar forces all logs to be verbose (down to DEBUG level) VerboseLogsEnvVar = "TELEPORT_DEBUG" @@ -455,6 +464,14 @@ const ( ) const ( + // CertExtensionScopePin is used to pin a user certificate to a specific scope and + // set of scoped roles. This constrains a user's access to resources based on both + // the scoping rules and scoped roles defined. + CertExtensionScopePin = "scope-pin@goteleport.com" + // CertExtensionAgentScope is used to pin an agent/host certificate to a specific scope. + // This constrains other identities' access to the agent itself as well as the agent's + // access to other resources based on scoping rules. + CertExtensionAgentScope = "agent-scope@goteleport.com" // CertExtensionPermitX11Forwarding allows X11 forwarding for certificate CertExtensionPermitX11Forwarding = "permit-X11-forwarding" // CertExtensionPermitAgentForwarding allows agent forwarding for certificate @@ -520,6 +537,9 @@ const ( // Machine ID bot instance, if any. This identifier is persisted through // certificate renewals. CertExtensionBotInstanceID = "bot-instance-id@goteleport.com" + // CertExtensionJoinToken is the name of the join token used to join this + // bot, if any. + CertExtensionJoinToken = "join-token@goteleport.com" // CertCriticalOptionSourceAddress is a critical option that defines IP addresses (in CIDR notation) // from which this certificate is accepted for authentication. @@ -585,6 +605,10 @@ const MaxHTTPRequestSize = 10 * 1024 * 1024 // to prevent resource exhaustion attacks. const MaxHTTPResponseSize = 10 * 1024 * 1024 +// MaxUsernameLength is the maximum allowed length (characters) for usernames. +// This limit prevents sending extremely long usernames that could clog up logs or exhaust resources. 
+const MaxUsernameLength = 1000 + const ( // CertificateFormatOldSSH is used to make Teleport interoperate with older // versions of OpenSSH. @@ -654,6 +678,10 @@ const ( // TraitInternalGitHubOrgs is the variable used to store allowed GitHub // organizations for GitHub integrations. TraitInternalGitHubOrgs = "{{internal.github_orgs}}" + + // TraitInternalMCPTools is the variable used to store allowed MCP tools for + // MCP servers. + TraitInternalMCPTools = "{{internal.mcp_tools}}" ) // SCP is Secure Copy. @@ -735,14 +763,35 @@ const ( // credentials using any workload_identity resource. This exists to simplify // Day 0 UX experience with workload identity. PresetWildcardWorkloadIdentityIssuerRoleName = "wildcard-workload-identity-issuer" + + // PresetAccessPluginRoleName is a name of a preset role that includes + // permissions required by self-hosted access request plugin. + PresetAccessPluginRoleName = "access-plugin" + + // PresetListAccessRequestResourcesRoleName is a name of a preset role that + // includes permissions to read access request resources. + PresetListAccessRequestResourcesRoleName = "list-access-request-resources" + + // PresetMCPUserRoleName is a name of a preset role that allows + // accessing MCP servers. + PresetMCPUserRoleName = "mcp-user" ) var PresetRoles = []string{PresetEditorRoleName, PresetAccessRoleName, PresetAuditorRoleName} const ( - // PresetDefaultHealthCheckConfigName is the name of a preset - // default health_check_config that enables health checks for all resources. - PresetDefaultHealthCheckConfigName = "default" + // VirtualDefaultHealthCheckConfigDBName is the name of a virtual + // health_check_config that enables health checks for all database + // resources. For historical reasons, it's value is "default" even + // though it applies to databases only. 
+ VirtualDefaultHealthCheckConfigDBName = "default" + // VirtualDefaultHealthCheckConfigKubeName is the name of a virtual + // health_check_config that enables health checks for all Kubernetes + // resources. + VirtualDefaultHealthCheckConfigKubeName = "default-kube" + // VirtualDefaultHealthCheckConfigCount is the number of virtual + // health_check_config resources. + VirtualDefaultHealthCheckConfigCount = 2 ) const ( @@ -782,9 +831,22 @@ const ( CurrentSessionIDRequest = "current-session-id@goteleport.com" // SessionIDQueryRequest is sent by clients to ask servers if they - // will generate their own session ID when a new session is created. + // will generate and share their own session ID when a new session + // is started (session and exec/shell channels accepted). + // + // TODO(Joerger): DELETE IN v20.0.0 + // All v17+ servers set the session ID. v19+ clients stop checking. SessionIDQueryRequest = "session-id-query@goteleport.com" + // SessionIDQueryRequestV2 is sent by clients to ask servers if they + // will generate and share their own session ID when a new session + // channel is accepted, rather than when the shell/exec channel is. + // + // TODO(Joerger): DELETE IN v21.0.0 + // all v19+ servers set the session ID directly after accepting the session channel. + // clients should stop checking in v21, and servers should stop responding to the query in v22. + SessionIDQueryRequestV2 = "session-id-query-v2@goteleport.com" + // ForceTerminateRequest is an SSH request to forcefully terminate a session. 
ForceTerminateRequest = "x-teleport-force-terminate" diff --git a/darwin-signing.mk b/darwin-signing.mk index d4f6823114e97..201c7df2734b3 100644 --- a/darwin-signing.mk +++ b/darwin-signing.mk @@ -121,6 +121,11 @@ define notarize_teleport_pkg $(call notarize_pkg,$(TELEPORT_PKG_UNSIGNED),$(TELEPORT_PKG_SIGNED)) endef +NOTARIZE_TELEPORT_TOOLS_PKG = $(if $(SHOULD_NOTARIZE),$(notarize_teleport_tools_pkg),$(not_notarizing_cmd)) +define notarize_teleport_tools_pkg + $(call notarize_pkg,$(TELEPORT_TOOLS_PKG_UNSIGNED),$(TELEPORT_TOOLS_PKG_SIGNED)) +endef + define notarize_app_bundle $(eval $@_BUNDLE = $(1)) $(eval $@_BUNDLE_ID = $(2)) diff --git a/devbox.json b/devbox.json index 9e6cb5d6dbfb3..4576f87314496 100644 --- a/devbox.json +++ b/devbox.json @@ -14,7 +14,6 @@ "yamllint@latest", "zlib@latest", "rustup@latest", - "wasm-pack@latest", "wasm-bindgen-cli@latest", "pkg-config@latest", diff --git a/devbox.lock b/devbox.lock index 42454fe15ef70..33c209f4fe5db 100644 --- a/devbox.lock +++ b/devbox.lock @@ -1458,54 +1458,6 @@ } } }, - "wasm-pack@latest": { - "last_modified": "2024-11-16T04:25:12Z", - "resolved": "github:NixOS/nixpkgs/34a626458d686f1b58139620a8b2793e9e123bba#wasm-pack", - "source": "devbox-search", - "version": "0.13.1", - "systems": { - "aarch64-darwin": { - "outputs": [ - { - "name": "out", - "path": "/nix/store/1p81s2kwmj415jvg1v2avgq83mqx7fns-wasm-pack-0.13.1", - "default": true - } - ], - "store_path": "/nix/store/1p81s2kwmj415jvg1v2avgq83mqx7fns-wasm-pack-0.13.1" - }, - "aarch64-linux": { - "outputs": [ - { - "name": "out", - "path": "/nix/store/377fivabb47p4xczxxdjlikpzqjiyw1l-wasm-pack-0.13.1", - "default": true - } - ], - "store_path": "/nix/store/377fivabb47p4xczxxdjlikpzqjiyw1l-wasm-pack-0.13.1" - }, - "x86_64-darwin": { - "outputs": [ - { - "name": "out", - "path": "/nix/store/b491smk9is9kmys0v26gbdnqgic3z993-wasm-pack-0.13.1", - "default": true - } - ], - "store_path": "/nix/store/b491smk9is9kmys0v26gbdnqgic3z993-wasm-pack-0.13.1" - }, - 
"x86_64-linux": { - "outputs": [ - { - "name": "out", - "path": "/nix/store/b8630fh9mnsi9gakpldyp3xjrdh760yi-wasm-pack-0.13.1", - "default": true - } - ], - "store_path": "/nix/store/b8630fh9mnsi9gakpldyp3xjrdh760yi-wasm-pack-0.13.1" - } - } - }, "yamllint@latest": { "last_modified": "2024-11-16T04:25:12Z", "resolved": "github:NixOS/nixpkgs/34a626458d686f1b58139620a8b2793e9e123bba#yamllint", diff --git a/docs/.vale.ini b/docs/.vale.ini index d117e2a7b349e..96fba68e771b9 100644 --- a/docs/.vale.ini +++ b/docs/.vale.ini @@ -5,5 +5,5 @@ MinAlertLevel = suggestion mdx = md [*.mdx] -BasedOnStyles = messaging,examples,3rd-party-products,structure +BasedOnStyles = messaging,examples,3rd-party-products CommentDelimiters = "{/*,*/}" diff --git a/docs/config.json b/docs/config.json index 74184915c54d1..80803ee0b06a4 100644 --- a/docs/config.json +++ b/docs/config.json @@ -6,7 +6,7 @@ "nodeIP": "ip-172-31-35-170" }, "access_graph": { - "version": "1.24.4" + "version": "1.29.5" }, "ansible": { "min_version": "2.9.6" @@ -17,8 +17,8 @@ "aws_secret_access_key": "zyxw9876-this-is-an-example" }, "cloud": { - "version": "17.4.10", - "major_version": "17", + "version": "18.2.8", + "major_version": "18", "sla": { "monthly_percentage": "99.9%", "monthly_downtime": "44 minutes" @@ -51,7 +51,8 @@ }, "kubernetes": { "major_version": "1", - "minor_version": "17" + "minor_version": "17", + "health_check_min_version": "18.3.0" }, "mongodb": { "min_version": "3.6" @@ -65,19 +66,18 @@ "ca_pin": "sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678" }, "teleport": { - "git": "api/14.0.0-gd1e081e", - "major_version": "17", - "version": "17.0.0-dev", + "major_version": "18", + "version": "18.6.1", "url": "teleport.example.com", - "golang": "1.22", + "golang": "1.24.11", "plugin": { - "version": "13.3.7" + "version": "18.6.1" }, "helm_repo_url": "https://charts.releases.teleport.dev", - "latest_oss_docker_image": "public.ecr.aws/gravitational/teleport-distroless:13.3.7", - 
"latest_oss_debug_docker_image": "public.ecr.aws/gravitational/teleport-distroless-debug:13.3.7", - "latest_ent_docker_image": "public.ecr.aws/gravitational/teleport-ent-distroless:13.3.7", - "latest_ent_debug_docker_image": "public.ecr.aws/gravitational/teleport-ent-distroless-debug:13.3.7", + "latest_oss_docker_image": "public.ecr.aws/gravitational/teleport-distroless:18.6.1", + "latest_oss_debug_docker_image": "public.ecr.aws/gravitational/teleport-distroless-debug:18.6.1", + "latest_ent_docker_image": "public.ecr.aws/gravitational/teleport-ent-distroless:18.6.1", + "latest_ent_debug_docker_image": "public.ecr.aws/gravitational/teleport-ent-distroless-debug:18.6.1", "teleport_install_script_url": "https://cdn.teleport.dev/install.sh" }, "terraform": { @@ -97,97 +97,97 @@ "redirects": [ { "source": "/admin-guides/access-controls/guides/moderated-sessions/", - "destination": "/admin-guides/access-controls/guides/joining-sessions/", + "destination": "/zero-trust-access/authentication/joining-sessions/", "permanent": true }, { "source": "/reference/operator-resources/resources.teleport.dev_accesslists/", - "destination": "/reference/operator-resources/resources-teleport-dev-accesslists/", + "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-accesslists/", "permanent": true }, { "source": "/admin-guides/deploy-a-cluster/deployments/ibm/", - "destination": "/admin-guides/deploy-a-cluster/helm-deployments/ibm/", + "destination": "/zero-trust-access/deploy-a-cluster/helm-deployments/ibm/", "permanent": true }, { "source": "/reference/operator-resources/resources.teleport.dev_githubconnectors/", - "destination": "/reference/operator-resources/resources-teleport-dev-githubconnectors/", + "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-githubconnectors/", "permanent": true }, { "source": "/reference/operator-resources/resources.teleport.dev_loginrules/", - "destination": 
"/reference/operator-resources/resources-teleport-dev-loginrules/", + "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-loginrules/", "permanent": true }, { "source": "/reference/operator-resources/resources.teleport.dev_oidcconnectors/", - "destination": "/reference/operator-resources/resources-teleport-dev-oidcconnectors/", + "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-oidcconnectors/", "permanent": true }, { "source": "/reference/operator-resources/resources.teleport.dev_oktaimportrules/", - "destination": "/reference/operator-resources/resources-teleport-dev-oktaimportrules/", + "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-oktaimportrules/", "permanent": true }, { "source": "/reference/operator-resources/resources.teleport.dev_openssheiceserversv2/", - "destination": "/reference/operator-resources/resources-teleport-dev-openssheiceserversv2/", + "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-openssheiceserversv2/", "permanent": true }, { "source": "/reference/operator-resources/resources.teleport.dev_opensshserversv2/", - "destination": "/reference/operator-resources/resources-teleport-dev-opensshserversv2/", + "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-opensshserversv2/", "permanent": true }, { "source": "/reference/operator-resources/resources.teleport.dev_provisiontokens/", - "destination": "/reference/operator-resources/resources-teleport-dev-provisiontokens/", + "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-provisiontokens/", "permanent": true }, { "source": "/reference/operator-resources/resources.teleport.dev_roles/", - "destination": "/reference/operator-resources/resources-teleport-dev-roles/", + "destination": 
"/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-roles/", "permanent": true }, { "source": "/reference/operator-resources/resources.teleport.dev_rolesv6/", - "destination": "/reference/operator-resources/resources-teleport-dev-rolesv6/", + "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-rolesv6/", "permanent": true }, { "source": "/reference/operator-resources/resources.teleport.dev_rolesv7/", - "destination": "/reference/operator-resources/resources-teleport-dev-rolesv7/", + "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-rolesv7/", "permanent": true }, { "source": "/reference/operator-resources/resources.teleport.dev_samlconnectors/", - "destination": "/reference/operator-resources/resources-teleport-dev-samlconnectors/", + "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-samlconnectors/", "permanent": true }, { "source": "/reference/operator-resources/resources.teleport.dev_users/", - "destination": "/reference/operator-resources/resources-teleport-dev-users/", + "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-users/", "permanent": true }, { "source": "/access-controls/guides/role-templates/", - "destination": "/admin-guides/access-controls/guides/role-templates/", + "destination": "/zero-trust-access/rbac-get-started/role-templates/", "permanent": true }, { "source": "/enterprise/sso/", - "destination": "/admin-guides/access-controls/sso/", + "destination": "/zero-trust-access/sso/", "permanent": true }, { "source": "/access-controls/guides/headless/", - "destination": "/admin-guides/access-controls/guides/headless/", + "destination": "/zero-trust-access/authentication/headless/", "permanent": true }, { "source": "/application-access/okta/hosted-guide/", - "destination": "/admin-guides/access-controls/okta/", + "destination": 
"/identity-governance/integrations/okta/", "permanent": true }, { @@ -202,7 +202,7 @@ }, { "source": "/enterprise/sso/google-workspace/", - "destination": "/admin-guides/access-controls/sso/google-workspace/", + "destination": "/zero-trust-access/sso/integrate-idp/google-workspace/", "permanent": true }, { @@ -210,49 +210,59 @@ "destination": "/enroll-resources/kubernetes-access/", "permanent": true }, + { + "source": "/kubernetes-ssh/", + "destination": "/enroll-resources/kubernetes-access/controls/", + "permanent": true + }, { "source": "/machine-id/introduction/", - "destination": "/enroll-resources/machine-id/introduction/", + "destination": "/machine-workload-identity/introduction/", "permanent": true }, { "source": "/machine-id/reference/telemetry/", - "destination": "/reference/machine-id/telemetry/", + "destination": "/reference/machine-workload-identity/telemetry/", "permanent": true }, { "source": "/machine-id/reference/v16-upgrade-guide/", - "destination": "/reference/machine-id/v16-upgrade-guide/", + "destination": "/reference/machine-workload-identity/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/getting-started/", + "destination": "/machine-workload-identity/getting-started/", "permanent": true }, { "source": "/management/export-audit-events/fluentd/", - "destination": "/admin-guides/management/export-audit-events/fluentd/", + "destination": "/zero-trust-access/export-audit-events/fluentd/", "permanent": true }, { "source": "/setup/admin/github-sso/", - "destination": "/admin-guides/access-controls/sso/github-sso/", + "destination": "/zero-trust-access/sso/integrate-idp/github-sso/", "permanent": true }, { "source": "/access-controls/device-trust/guide/", - "destination": "/admin-guides/access-controls/device-trust/guide/", + "destination": "/identity-governance/device-trust/guide/", "permanent": true }, { - "source": "/access-controls/access-request-plugins/", - "destination": 
"/admin-guides/access-controls/access-request-plugins/", + "source": "/upgrading/automatic-agent-updates/", + "destination": "/upgrading/agent-managed-updates-v1/", "permanent": true }, { - "source": "/upgrading/automatic-agent-updates/", - "destination": "/upgrading/agent-managed-updates-v1/", + "source": "/upgrading/upgrading-reference/", + "destination": "/upgrading/upgrading-manual/", "permanent": true }, { "source": "/access-controls/compliance-frameworks/fedramp/", - "destination": "/admin-guides/access-controls/compliance-frameworks/fedramp/", + "destination": "/zero-trust-access/compliance-frameworks/fedramp/", "permanent": true }, { @@ -260,69 +270,59 @@ "destination": "/enroll-resources/server-access/", "permanent": true }, - { - "source": "/enroll-resources/auto-discovery/reference/", - "destination": "/reference/agent-services/auto-discovery-reference/", - "permanent": true - }, - { - "source": "/enroll-resources/auto-discovery/reference/aws-iam/", - "destination": "/reference/agent-services/auto-discovery-reference/aws-iam/", - "permanent": true - }, { "source": "/reference/agent-services/kubernetes-application-discovery/", - "destination": "/reference/agent-services/auto-discovery-reference/kubernetes-application-discovery/", + "destination": "/enroll-resources/auto-discovery/kubernetes-applications/reference/", "permanent": true }, { "source": "/reference/operator-resources/resources.teleport.dev_trustedclustersv2/", - "destination": "/reference/operator-resources/resources-teleport-dev-trustedclustersv2/", + "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-trustedclustersv2/", "permanent": true }, { "source": "/enroll-resources/application-access/okta/hosted-guide/", - "destination": "/admin-guides/access-controls/okta/", + "destination": "/identity-governance/integrations/okta/", "permanent": true }, { "source": "/enroll-resources/application-access/okta/scim-only/", - "destination": 
"/admin-guides/access-controls/okta/scim-integration/", + "destination": "/identity-governance/integrations/okta/scim-integration/", "permanent": true }, { "source": "/enroll-resources/application-access/okta/sync-scim/", - "destination": "/admin-guides/access-controls/okta/scim-integration/", + "destination": "/identity-governance/integrations/okta/scim-integration/", "permanent": true }, { "source": "/enroll-resources/application-access/okta/app-and-group-sync/", - "destination": "/admin-guides/access-controls/okta/app-and-group-sync/", + "destination": "/identity-governance/integrations/okta/app-and-group-sync/", "permanent": true }, { "source": "/enroll-resources/application-access/okta/guided-sso/", - "destination": "/admin-guides/access-controls/okta/guided-sso/", + "destination": "/identity-governance/integrations/okta/guided-sso/", "permanent": true }, { "source": "/enroll-resources/application-access/okta/", - "destination": "/admin-guides/access-controls/okta/", + "destination": "/identity-governance/integrations/okta/", "permanent": true }, { "source": "/enroll-resources/application-access/okta/scim-integration/", - "destination": "/admin-guides/access-controls/okta/scim-integration/", + "destination": "/identity-governance/integrations/okta/scim-integration/", "permanent": true }, { "source": "/enroll-resources/application-access/okta/user-sync/", - "destination": "/admin-guides/access-controls/okta/user-sync/", + "destination": "/identity-governance/integrations/okta/user-sync/", "permanent": true }, { "source": "/api/getting-started/", - "destination": "/admin-guides/api/getting-started/", + "destination": "/zero-trust-access/api/getting-started/", "permanent": true }, { @@ -369,6 +369,3102 @@ "source": "/enroll-resources/agents/introduction/", "destination": "/enroll-resources/agents/", "permanent": true + }, + { + "source": "/admin-guides/access-controls/access-lists/", + "destination": "/identity-governance/access-lists/", + "permanent": true + }, 
+ { + "source": "/admin-guides/access-controls/access-lists/nested-access-lists/", + "destination": "/identity-governance/access-lists/nested-access-lists/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/access-monitoring/", + "destination": "/identity-governance/access-monitoring/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/access-request-plugins/", + "destination": "/identity-governance/access-requests/plugins/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/access-request-plugins/datadog-hosted/", + "destination": "/identity-governance/access-requests/plugins/datadog-hosted/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/access-request-plugins/notification-routing-rules/", + "destination": "/identity-governance/access-requests/notification-routing-rules/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/access-request-plugins/opsgenie/", + "destination": "/identity-governance/access-requests/plugins/opsgenie/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/access-request-plugins/servicenow/", + "destination": "/identity-governance/access-requests/plugins/servicenow/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/access-request-plugins/ssh-approval-discord/", + "destination": "/identity-governance/access-requests/plugins/discord/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/access-request-plugins/ssh-approval-jira/", + "destination": "/identity-governance/access-requests/plugins/jira/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/access-request-plugins/ssh-approval-email/", + "destination": "/identity-governance/access-requests/plugins/email/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/access-request-plugins/ssh-approval-mattermost/", + "destination": 
"/identity-governance/access-requests/plugins/mattermost/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/access-request-plugins/ssh-approval-msteams/", + "destination": "/identity-governance/access-requests/plugins/msteams/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/access-requests/access-request-configuration/", + "destination": "/identity-governance/access-requests/access-request-configuration/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/access-requests/", + "destination": "/identity-governance/access-requests/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/access-requests/automatic-reviews/", + "destination": "/identity-governance/access-requests/automatic-reviews/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/access-requests/oss-role-requests/", + "destination": "/identity-governance/access-requests/oss-role-requests/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/access-requests/resource-requests/", + "destination": "/identity-governance/access-requests/resource-requests/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/access-requests/role-requests/", + "destination": "/identity-governance/access-requests/role-requests/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/device-trust/device-management/", + "destination": "/identity-governance/device-trust/device-management/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/device-trust/", + "destination": "/identity-governance/device-trust/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/device-trust/enforcing-device-trust/", + "destination": "/identity-governance/device-trust/enforcing-device-trust/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/device-trust/guide/", + "destination": 
"/identity-governance/device-trust/guide/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/device-trust/jamf-integration/", + "destination": "/identity-governance/device-trust/jamf-integration/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/guides/locking/", + "destination": "/identity-governance/locking/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/okta/app-and-group-sync/", + "destination": "/identity-governance/integrations/okta/app-and-group-sync/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/okta/guided-sso/", + "destination": "/identity-governance/integrations/okta/guided-sso/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/okta/", + "destination": "/identity-governance/integrations/okta/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/okta/scim-integration/", + "destination": "/identity-governance/integrations/okta/scim-integration/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/okta/user-sync/", + "destination": "/identity-governance/integrations/okta/user-sync/", + "permanent": true + }, + { + "source": "/admin-guides/deploy-a-cluster/linux-demo/", + "destination": "/get-started/deploy-community/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/access-guides/", + "destination": "/machine-workload-identity/access-guides/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/access-guides/ansible/", + "destination": "/machine-workload-identity/access-guides/ansible/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/access-guides/applications/", + "destination": "/machine-workload-identity/access-guides/applications/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/access-guides/databases/", + "destination": "/machine-workload-identity/access-guides/databases/", + "permanent": true + }, 
+ { + "source": "/enroll-resources/machine-id/access-guides/kubernetes/", + "destination": "/machine-workload-identity/access-guides/kubernetes/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/access-guides/ssh/", + "destination": "/machine-workload-identity/access-guides/ssh/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/access-guides/tctl/", + "destination": "/machine-workload-identity/access-guides/tctl/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/deployment/aws/", + "destination": "/machine-workload-identity/deployment/aws/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/deployment/azure-devops/", + "destination": "/machine-workload-identity/deployment/azure-devops/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/deployment/azure/", + "destination": "/machine-workload-identity/deployment/azure/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/deployment/bitbucket/", + "destination": "/machine-workload-identity/deployment/bitbucket/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/deployment/circleci/", + "destination": "/machine-workload-identity/deployment/circleci/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/deployment/", + "destination": "/machine-workload-identity/deployment/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/deployment/gcp/", + "destination": "/machine-workload-identity/deployment/gcp/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/deployment/github-actions/", + "destination": "/machine-workload-identity/deployment/github-actions/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/deployment/gitlab/", + "destination": "/machine-workload-identity/deployment/gitlab/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/deployment/jenkins/", + 
"destination": "/machine-workload-identity/deployment/jenkins/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/deployment/kubernetes/", + "destination": "/machine-workload-identity/deployment/kubernetes/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/deployment/linux-tpm/", + "destination": "/machine-workload-identity/deployment/linux-tpm/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/deployment/linux/", + "destination": "/machine-workload-identity/deployment/linux/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/faq/", + "destination": "/machine-workload-identity/faq/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/getting-started/", + "destination": "/machine-workload-identity/getting-started/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/introduction/", + "destination": "/machine-workload-identity/introduction/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/", + "destination": "/machine-workload-identity/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/manifesto/", + "destination": "/machine-workload-identity/manifesto/", + "permanent": true + }, + { + "source": "/enroll-resources/machine-id/troubleshooting/", + "destination": "/machine-workload-identity/troubleshooting/", + "permanent": true + }, + { + "source": "/enroll-resources/workload-identity/aws-oidc-federation/", + "destination": "/machine-workload-identity/workload-identity/aws-oidc-federation/", + "permanent": true + }, + { + "source": "/enroll-resources/workload-identity/aws-roles-anywhere/", + "destination": "/machine-workload-identity/workload-identity/aws-roles-anywhere/", + "permanent": true + }, + { + "source": "/enroll-resources/workload-identity/azure-federated-credentials/", + "destination": "/machine-workload-identity/workload-identity/azure-federated-credentials/", + "permanent": true + 
}, + { + "source": "/enroll-resources/workload-identity/best-practices/", + "destination": "/machine-workload-identity/workload-identity/best-practices/", + "permanent": true + }, + { + "source": "/enroll-resources/workload-identity/federation/", + "destination": "/machine-workload-identity/workload-identity/federation/", + "permanent": true + }, + { + "source": "/enroll-resources/workload-identity/gcp-workload-identity-federation-jwt/", + "destination": "/machine-workload-identity/workload-identity/gcp-workload-identity-federation-jwt/", + "permanent": true + }, + { + "source": "/enroll-resources/workload-identity/getting-started/", + "destination": "/machine-workload-identity/workload-identity/getting-started/", + "permanent": true + }, + { + "source": "/enroll-resources/workload-identity/introduction/", + "destination": "/machine-workload-identity/workload-identity/introduction/", + "permanent": true + }, + { + "source": "/enroll-resources/workload-identity/jwt-svids/", + "destination": "/machine-workload-identity/workload-identity/jwt-svids/", + "permanent": true + }, + { + "source": "/enroll-resources/workload-identity/spiffe/", + "destination": "/machine-workload-identity/workload-identity/spiffe/", + "permanent": true + }, + { + "source": "/enroll-resources/workload-identity/tsh/", + "destination": "/machine-workload-identity/workload-identity/tsh/", + "permanent": true + }, + { + "source": "/enroll-resources/workload-identity/workload-attestation/", + "destination": "/reference/machine-workload-identity/workload-identity/workload-identity-api-and-workload-attestation/", + "permanent": true + }, + { + "source": "/machine-workload-identity/workload-identity/workload-attestation/", + "destination": "/reference/machine-workload-identity/workload-identity/workload-identity-api-and-workload-attestation/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/deployment/bound-keypair/", + "destination": 
"/reference/machine-workload-identity/bound-keypair/getting-started/", + "permanent": true + }, + { + "source": "/enroll-resources/workload-identity/", + "destination": "/machine-workload-identity/workload-identity/" + }, + { + "source": "/admin-guides/access-controls/access-lists/guide/", + "destination": "/identity-governance/access-lists/guide/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/access-request-plugins/ssh-approval-pagerduty/", + "destination": "/identity-governance/access-requests/plugins/pagerduty/", + "permanent": true + }, + { + "source": "/admin-guides/access-controls/access-request-plugins/ssh-approval-slack/", + "destination": "/identity-governance/access-requests/plugins/slack/", + "permanent": true + }, + { + "source": "/machine-workload-identity/mwi-use-cases/", + "destination": "/machine-workload-identity/workload-identity/", + "permanent": true + }, + { + "source": "/machine-workload-identity/mwi-use-cases/mwi-ci-cd/", + "destination": "/machine-workload-identity/use-cases/mwi-ci-cd/", + "permanent": true + }, + { + "source": "/reference/machine-workload-identity/machine-id/gitlab/", + "destination": "/reference/deployment/join-methods/", + "permanent": true + }, + { + "source": "/reference/machine-workload-identity/machine-id/github-actions/", + "destination": "/reference/deployment/join-methods/", + "permanent": true + }, + { + "source": "/reference/machine-workload-identity/machine-id/", + "destination": "/reference/machine-workload-identity/", + "permanent": true + }, + { + "source": "/reference/machine-workload-identity/machine-id/configuration/", + "destination": "/reference/machine-workload-identity/configuration/", + "permanent": true + }, + { + "source": "/reference/machine-workload-identity/machine-id/diagnostics-service/", + "destination": "/reference/machine-workload-identity/diagnostics-service/", + "permanent": true + }, + { + "source": "/reference/machine-workload-identity/machine-id/telemetry/", + 
"destination": "/reference/machine-workload-identity/telemetry/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/machine-workload-identity/machine-id/bound-keypair/admin-guide/",
+    "destination": "/reference/machine-workload-identity/bound-keypair/admin-guide/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/machine-workload-identity/machine-id/bound-keypair/",
+    "destination": "/reference/machine-workload-identity/bound-keypair/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/machine-workload-identity/machine-id/bound-keypair/concepts/",
+    "destination": "/reference/machine-workload-identity/bound-keypair/concepts/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/machine-workload-identity/machine-id/bound-keypair/getting-started/",
+    "destination": "/reference/machine-workload-identity/bound-keypair/getting-started/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/machine-workload-identity/machine-id/bound-keypair/static-keys/",
+    "destination": "/reference/machine-workload-identity/bound-keypair/static-keys/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/machine-workload-identity/machine-id/v16-upgrade-guide/",
+    "destination": "/reference/machine-workload-identity/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/access-graph/",
+    "destination": "/identity-security/access-graph/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/access-graph/identity-activity-center/",
+    "destination": "/identity-security/access-graph/identity-activity-center/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/access-graph/self-hosted-helm/",
+    "destination": "/identity-security/access-graph/self-hosted-helm/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/access-graph/self-hosted/",
+    "destination": "/identity-security/access-graph/self-hosted/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/teleport-policy/crown-jewels/",
+    "destination": "/identity-security/usage/crown-jewels/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/teleport-policy/integrations/aws-sync/",
+    "destination": "/identity-security/integrations/aws-sync/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/teleport-policy/integrations/azure-sync/",
+    "destination": "/identity-security/integrations/azure-sync/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/teleport-policy/integrations/entra-id/",
+    "destination": "/identity-security/integrations/entra-id/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/teleport-policy/integrations/github/",
+    "destination": "/identity-security/integrations/github/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/teleport-policy/integrations/gitlab/",
+    "destination": "/identity-security/integrations/gitlab/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/teleport-policy/integrations/",
+    "destination": "/identity-security/integrations/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/teleport-policy/integrations/netiq/",
+    "destination": "/identity-security/integrations/netiq/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/teleport-policy/integrations/ssh-keys-scan/",
+    "destination": "/identity-security/integrations/ssh-keys-scan/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/teleport-policy/integrations/teleport/",
+    "destination": "/identity-security/integrations/teleport/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/teleport-policy/policy-connections/",
+    "destination": "/identity-security/usage/graph-explorer/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/teleport-policy/policy-how-to-use/",
+    "destination": "/identity-security/usage/",
+    "permanent": true
+  },
+  {
+    "source": "/identity-security/policy-how-to-use/",
+    "destination": "/identity-security/usage/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/teleport-policy/",
+    "destination": "/identity-security/",
+    "permanent": true
+  },
+  {
+    "source": "/identity-security/teleport-policy/",
+    "destination": "/identity-security/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/architecture/proxy-peering/",
+    "destination": "/zero-trust-access/deploy-a-cluster/proxy-peering/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/compliance-frameworks/",
+    "destination": "/zero-trust-access/compliance-frameworks/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/compliance-frameworks/fedramp/",
+    "destination": "/zero-trust-access/compliance-frameworks/fedramp/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/compliance-frameworks/soc2/",
+    "destination": "/zero-trust-access/compliance-frameworks/soc2/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/sso/adfs/",
+    "destination": "/zero-trust-access/sso/integrate-idp/adfs/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/sso/azuread/",
+    "destination": "/zero-trust-access/sso/integrate-idp/entra-id/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/sso/github-sso/",
+    "destination": "/zero-trust-access/sso/integrate-idp/github-sso/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/sso/gitlab/",
+    "destination": "/zero-trust-access/sso/integrate-idp/gitlab/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/sso/google-workspace/",
+    "destination": "/zero-trust-access/sso/integrate-idp/google-workspace/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/sso/keycloak/",
+    "destination": "/zero-trust-access/sso/integrate-idp/keycloak/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/sso/oidc/",
+    "destination": "/zero-trust-access/sso/integrate-idp/oidc/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/sso/okta/",
+    "destination": "/zero-trust-access/sso/integrate-idp/okta/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/sso/one-login/",
+    "destination": "/zero-trust-access/sso/integrate-idp/one-login/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/sso/",
+    "destination": "/zero-trust-access/sso/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/",
+    "destination": "/zero-trust-access/infrastructure-as-code/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/managing-resources/access-list/",
+    "destination": "/zero-trust-access/infrastructure-as-code/managing-resources/access-list/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/managing-resources/agentless-ssh-servers/",
+    "destination": "/zero-trust-access/infrastructure-as-code/managing-resources/agentless-ssh-servers/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/managing-resources/import-existing-resources/",
+    "destination": "/zero-trust-access/infrastructure-as-code/terraform-provider/import-existing-resources/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/managing-resources/login-rules-operator/",
+    "destination": "/zero-trust-access/infrastructure-as-code/managing-resources/login-rules-operator/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/managing-resources/login-rules-terraform/",
+    "destination": "/zero-trust-access/infrastructure-as-code/managing-resources/login-rules-terraform/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/managing-resources/",
+    "destination": "/zero-trust-access/infrastructure-as-code/managing-resources/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/managing-resources/trusted-cluster/",
+    "destination": "/zero-trust-access/infrastructure-as-code/managing-resources/trusted-cluster/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/managing-resources/user-and-role/",
+    "destination": "/zero-trust-access/infrastructure-as-code/managing-resources/user-and-role/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/teleport-operator/secret-lookup/",
+    "destination": "/zero-trust-access/infrastructure-as-code/teleport-operator/secret-lookup/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/teleport-operator/teleport-operator-helm/",
+    "destination": "/zero-trust-access/infrastructure-as-code/teleport-operator/teleport-operator-helm/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/teleport-operator/teleport-operator-standalone/",
+    "destination": "/zero-trust-access/infrastructure-as-code/teleport-operator/teleport-operator-standalone/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/teleport-operator/",
+    "destination": "/zero-trust-access/infrastructure-as-code/teleport-operator/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/terraform-provider/ci-or-cloud/",
+    "destination": "/zero-trust-access/infrastructure-as-code/terraform-provider/ci-or-cloud/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/terraform-provider/dedicated-server/",
+    "destination": "/zero-trust-access/infrastructure-as-code/terraform-provider/dedicated-server/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/terraform-provider/local/",
+    "destination": "/zero-trust-access/infrastructure-as-code/terraform-provider/local/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/terraform-provider/long-lived-credentials/",
+    "destination": "/zero-trust-access/infrastructure-as-code/terraform-provider/long-lived-credentials/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/terraform-provider/spacelift/",
+    "destination": "/zero-trust-access/infrastructure-as-code/terraform-provider/spacelift/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/terraform-provider/terraform-cloud/",
+    "destination": "/zero-trust-access/infrastructure-as-code/terraform-provider/terraform-cloud/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/terraform-provider/",
+    "destination": "/zero-trust-access/infrastructure-as-code/terraform-provider/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/terraform-starter/enroll-resources/",
+    "destination": "/zero-trust-access/infrastructure-as-code/terraform-provider/terraform-getting-started/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/terraform-starter/rbac/",
+    "destination": "/zero-trust-access/infrastructure-as-code/terraform-provider/terraform-getting-started/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/infrastructure-as-code/terraform-starter/",
+    "destination": "/zero-trust-access/infrastructure-as-code/terraform-provider/terraform-getting-started/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/export-audit-events/datadog/",
+    "destination": "/zero-trust-access/export-audit-events/datadog/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/export-audit-events/elastic-stack/",
+    "destination": "/zero-trust-access/export-audit-events/elastic-stack/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/export-audit-events/",
+    "destination": "/zero-trust-access/export-audit-events/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/export-audit-events/fluentd/",
+    "destination": "/zero-trust-access/export-audit-events/fluentd/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/export-audit-events/panther/",
+    "destination": "/zero-trust-access/export-audit-events/panther/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/export-audit-events/splunk/",
+    "destination": "/zero-trust-access/export-audit-events/splunk/",
+    "permanent": true
+  },
+  {
+    "source": "/model-context-preview/",
+    "destination": "/connect-your-client/model-context-protocol/",
+    "permanent": true
+  },
+  {
+    "source": "/upgrading/client-tools-autoupdate/",
+    "destination": "/upgrading/client-tools-managed-updates/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/idps/",
+    "destination": "/identity-governance/idps/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/",
+    "destination": "/zero-trust-access/authentication/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/getting-started/",
+    "destination": "/zero-trust-access/rbac-get-started/role-demo/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/guides/dual-authz/",
+    "destination": "/identity-governance/access-requests/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/guides/",
+    "destination": "/zero-trust-access/authentication/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/guides/hardware-key-support/",
+    "destination": "/zero-trust-access/authentication/hardware-key-support/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/guides/headless/",
+    "destination": "/zero-trust-access/authentication/headless/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/guides/impersonation/",
+    "destination": "/zero-trust-access/authentication/impersonation/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/guides/ip-pinning/",
+    "destination": "/zero-trust-access/authentication/ip-pinning/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/guides/joining-sessions/",
+    "destination": "/zero-trust-access/authentication/joining-sessions/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/guides/mfa-for-admin-actions/",
+    "destination": "/zero-trust-access/authentication/mfa-for-admin-actions/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/guides/passwordless/",
+    "destination": "/zero-trust-access/authentication/passwordless/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/guides/per-session-mfa/",
+    "destination": "/zero-trust-access/authentication/per-session-mfa/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/guides/role-templates/",
+    "destination": "/zero-trust-access/rbac-get-started/role-templates/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/guides/webauthn/",
+    "destination": "/zero-trust-access/management/security/idp-compromise/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/idps/saml-attribute-mapping/",
+    "destination": "/identity-governance/idps/saml-attribute-mapping/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/idps/saml-gcp-workforce-identity-federation/",
+    "destination": "/identity-governance/idps/usage/saml-gcp-workforce-identity-federation/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/idps/saml-grafana/",
+    "destination": "/identity-governance/idps/usage/saml-grafana/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/idps/saml-guide/",
+    "destination": "/identity-governance/idps/saml-guide/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/idps/saml-microsoft-entra-external-id/",
+    "destination": "/identity-governance/idps/usage/saml-microsoft-entra-external-id/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/login-rules/guide/",
+    "destination": "/zero-trust-access/sso/login-rules/guide/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/access-controls/login-rules/",
+    "destination": "/zero-trust-access/sso/login-rules/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/aws-kms/",
+    "destination": "/zero-trust-access/deploy-a-cluster/private-keys/aws-kms/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/",
+    "destination": "/zero-trust-access/deploy-a-cluster/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/deployments/aws-gslb-proxy-peering-ha-deployment/",
+    "destination": "/zero-trust-access/deploy-a-cluster/deployments/aws-gslb-proxy-peering-ha-deployment/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/deployments/aws-ha-autoscale-cluster-terraform/",
+    "destination": "/zero-trust-access/deploy-a-cluster/deployments/aws-ha-autoscale-cluster-terraform/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/deployments/aws-starter-cluster-terraform/",
+    "destination": "/zero-trust-access/deploy-a-cluster/deployments/aws-starter-cluster-terraform/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/deployments/",
+    "destination": "/zero-trust-access/deploy-a-cluster/deployments/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/deployments/gcp/",
+    "destination": "/zero-trust-access/deploy-a-cluster/deployments/gcp/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/gcp-kms/",
+    "destination": "/zero-trust-access/deploy-a-cluster/private-keys/gcp-kms/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/helm-deployments/argocd-helm/",
+    "destination": "/enroll-resources/agents/argocd-helm/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/helm-deployments/aws/",
+    "destination": "/zero-trust-access/deploy-a-cluster/helm-deployments/aws/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/helm-deployments/azure/",
+    "destination": "/zero-trust-access/deploy-a-cluster/helm-deployments/azure/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/helm-deployments/custom/",
+    "destination": "/zero-trust-access/deploy-a-cluster/helm-deployments/custom/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/helm-deployments/digitalocean/",
+    "destination": "/zero-trust-access/deploy-a-cluster/helm-deployments/digitalocean/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/helm-deployments/disaster-recovery/",
+    "destination": "/zero-trust-access/deploy-a-cluster/helm-deployments/disaster-recovery/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/helm-deployments/gcp/",
+    "destination": "/zero-trust-access/deploy-a-cluster/helm-deployments/gcp/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/helm-deployments/",
+    "destination": "/zero-trust-access/deploy-a-cluster/helm-deployments/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/helm-deployments/ibm/",
+    "destination": "/zero-trust-access/deploy-a-cluster/helm-deployments/ibm/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/helm-deployments/kubernetes-cluster/",
+    "destination": "/zero-trust-access/deploy-a-cluster/helm-deployments/kubernetes-cluster/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/helm-deployments/migration-kubernetes-1-25-psp/",
+    "destination": "/zero-trust-access/deploy-a-cluster/helm-deployments/migration-kubernetes-1-25-psp/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/high-availability/",
+    "destination": "/zero-trust-access/deploy-a-cluster/deployments/high-availability/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/hsm/",
+    "destination": "/zero-trust-access/deploy-a-cluster/hsm/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/license/",
+    "destination": "/zero-trust-access/deploy-a-cluster/license/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/multi-region-blueprint/",
+    "destination": "/zero-trust-access/deploy-a-cluster/deployments/multi-region-blueprint/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/deploy-a-cluster/separate-proxy-service-endpoints/",
+    "destination": "/zero-trust-access/deploy-a-cluster/deployments/separate-proxy-service-endpoints/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/admin/",
+    "destination": "/zero-trust-access/management/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/admin/daemon/",
+    "destination": "/zero-trust-access/management/daemon/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/admin/labels/",
+    "destination": "/zero-trust-access/rbac-get-started/labels/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/admin/self-signed-certs/",
+    "destination": "/zero-trust-access/deploy-a-cluster/self-signed-certs/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/admin/troubleshooting/",
+    "destination": "/zero-trust-access/management/diagnostics/troubleshooting/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/admin/trustedclusters/",
+    "destination": "/zero-trust-access/deploy-a-cluster/trustedclusters/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/admin/uninstall-teleport/",
+    "destination": "/installation/uninstall-teleport/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/admin/users/",
+    "destination": "/zero-trust-access/rbac-get-started/users/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/security/client-timeout/",
+    "destination": "/zero-trust-access/management/security/client-timeout/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/security/proxy-protocol/",
+    "destination": "/zero-trust-access/management/security/proxy-protocol/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/security/reduce-blast-radius/",
+    "destination": "/zero-trust-access/management/security/reduce-blast-radius/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/security/restrict-privileges/",
+    "destination": "/zero-trust-access/management/security/restrict-privileges/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/security/revoking-access/",
+    "destination": "/zero-trust-access/management/security/revoking-access/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/security/",
+    "destination": "/zero-trust-access/management/security/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/security/selinux/",
+    "destination": "/zero-trust-access/management/security/selinux/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/diagnostics/",
+    "destination": "/zero-trust-access/management/diagnostics/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/diagnostics/logging/",
+    "destination": "/zero-trust-access/management/diagnostics/logging/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/diagnostics/metrics/",
+    "destination": "/zero-trust-access/management/diagnostics/metrics/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/diagnostics/monitoring/",
+    "destination": "/zero-trust-access/management/diagnostics/monitoring/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/diagnostics/profiles/",
+    "destination": "/zero-trust-access/management/diagnostics/profiles/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/diagnostics/tracing/",
+    "destination": "/zero-trust-access/management/diagnostics/tracing/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/external-audit-storage/",
+    "destination": "/zero-trust-access/management/external-audit-storage/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/guides/aws-iam-identity-center/",
+    "destination": "/identity-governance/integrations/aws-iam-identity-center/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/guides/aws-iam-identity-center/guide/",
+    "destination": "/identity-governance/integrations/aws-iam-identity-center/guide/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/guides/aws-iam-identity-center/migrating-identity-center-from-okta-to-teleport/",
+    "destination": "/identity-governance/integrations/aws-iam-identity-center/migrating-identity-center-from-okta-to-teleport/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/guides/awsoidc-integration-rds/",
+    "destination": "/enroll-resources/application-access/cloud-apis/awsoidc-integration-rds/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/guides/awsoidc-integration/",
+    "destination": "/enroll-resources/application-access/cloud-apis/awsoidc-integration/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/guides/datadog/",
+    "destination": "/zero-trust-access/management/diagnostics/datadog/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/guides/ec2-tags/",
+    "destination": "/enroll-resources/automatic-labels/ec2-tags/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/guides/gcp-tags/",
+    "destination": "/enroll-resources/automatic-labels/gcp-tags/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/guides/github-integration/",
+    "destination": "/enroll-resources/application-access/cloud-apis/github-integration/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/guides/",
+    "destination": "/zero-trust-access/management/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/guides/oracle-tags/",
+    "destination": "/enroll-resources/automatic-labels/oracle-tags/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/guides/scim/sailpoint/",
+    "destination": "/identity-governance/integrations/scim/sailpoint/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/guides/scim/",
+    "destination": "/identity-governance/integrations/scim/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/",
+    "destination": "/zero-trust-access/management/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/operations/backup-restore/",
+    "destination": "/zero-trust-access/deploy-a-cluster/backup-restore/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/operations/ca-rotation/",
+    "destination": "/zero-trust-access/management/security/ca-rotation/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/operations/db-ca-migrations/",
+    "destination": "/zero-trust-access/management/security/db-ca-migrations/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/operations/",
+    "destination": "/zero-trust-access/management/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/operations/proxy-peering/",
+    "destination": "/zero-trust-access/deploy-a-cluster/proxy-peering/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/operations/scaling/",
+    "destination": "/reference/deployment/scaling/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/operations/tls-routing/",
+    "destination": "/zero-trust-access/deploy-a-cluster/tls-routing/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/management/guides/aws-iam-identity-center/using-aws-cli/",
+    "destination": "/identity-governance/integrations/aws-iam-identity-center/using-aws-cli/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/api/access-plugin/",
+    "destination": "/identity-governance/access-requests/plugins/how-to-build/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/api/",
+    "destination": "/zero-trust-access/api/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/",
+    "destination": "/zero-trust-access/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/api/automatically-register-agents/",
+    "destination": "/zero-trust-access/api/automatically-register-agents/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/api/getting-started/",
+    "destination": "/zero-trust-access/api/getting-started/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/api/rbac/",
+    "destination": "/zero-trust-access/api/rbac/",
+    "permanent": true
+  },
+  {
+    "source": "/admin-guides/migrate-plans/",
+    "destination": "/migrate-plans/",
+    "permanent": true
+  },
+  {
+    "source": "/zero-trust-access/access-controls/guides/",
+    "destination": "/zero-trust-access/authentication/",
+    "permanent": true
+  },
+  {
+    "source": "/zero-trust-access/access-controls/",
+    "destination": "/zero-trust-access/",
+    "permanent": true
+  },
+  {
+    "source": "/zero-trust-access/access-controls/getting-started/",
+    "destination": "/zero-trust-access/authentication/",
+    "permanent": true
+  },
+  {
+    "source": "/zero-trust-access/access-controls/guides/dual-authz/",
+    "destination": "/zero-trust-access/authentication/",
+    "permanent": true
+  },
+  {
+    "source": "/zero-trust-access/access-controls/guides/hardware-key-support/",
+    "destination": "/zero-trust-access/authentication/hardware-key-support/",
+    "permanent": true
+  },
+  {
+    "source": "/zero-trust-access/access-controls/guides/headless/",
+    "destination": "/zero-trust-access/authentication/headless/",
+    "permanent": true
+  },
+  {
+    "source": "/zero-trust-access/access-controls/guides/impersonation/",
+    "destination": "/zero-trust-access/authentication/impersonation/",
+    "permanent": true
+  },
+  {
+    "source": "/zero-trust-access/access-controls/guides/ip-pinning/",
+    "destination": "/zero-trust-access/authentication/ip-pinning/",
+    "permanent": true
+  },
+  {
+    "source": "/zero-trust-access/access-controls/guides/joining-sessions/",
+    "destination": "/zero-trust-access/authentication/joining-sessions/",
+    "permanent": true
+  },
+  {
+    "source": "/zero-trust-access/access-controls/guides/mfa-for-admin-actions/",
+    "destination": "/zero-trust-access/authentication/mfa-for-admin-actions/",
+    "permanent": true
+  },
+  {
+    "source": "/zero-trust-access/access-controls/guides/passwordless/",
+    "destination": "/zero-trust-access/authentication/passwordless/",
+    "permanent": true
+  },
+  {
+    "source": "/zero-trust-access/access-controls/guides/per-session-mfa/",
+    "destination": "/zero-trust-access/authentication/per-session-mfa/",
+    "permanent": true
+  },
+  {
+    "source": "/zero-trust-access/access-controls/guides/role-templates/",
+    "destination": "/zero-trust-access/rbac-get-started/role-templates/",
+    "permanent": true
+  },
+  {
+    "source": "/zero-trust-access/access-controls/guides/webauthn/",
+    "destination": "/zero-trust-access/management/security/idp-compromise/",
+    "permanent": true
+  },
+  {
+    "source": "/zero-trust-access/access-controls/login-rules/guide/",
+    "destination": "/zero-trust-access/sso/login-rules/guide/",
+    "permanent": true
+  },
+  {
+    "source": "/zero-trust-access/access-controls/login-rules/",
+    "destination": "/zero-trust-access/sso/login-rules/",
+    "permanent": true
+  },
+  {
+    "source": "/zero-trust-access/management/admin/labels/",
+    "destination": "/zero-trust-access/rbac-get-started/labels/",
+    "permanent": true
+  },
+  {
+    "source": "/zero-trust-access/management/guides/aws-iam-identity-center/",
+    "destination": "/identity-governance/integrations/aws-iam-identity-center/",
+    "permanent": true
+  },
+  {
+    "source": "/zero-trust-access/management/guides/aws-iam-identity-center/guide/",
+    "destination": "/identity-governance/integrations/aws-iam-identity-center/guide/",
+    "permanent": true
+  },
+  {
+    "source": "/zero-trust-access/management/guides/aws-iam-identity-center/migrating-identity-center-from-okta-to-teleport/",
+    "destination": "/identity-governance/integrations/aws-iam-identity-center/migrating-identity-center-from-okta-to-teleport/",
+    "permanent": true
+  },
+  {
+    "source": "/zero-trust-access/management/guides/aws-iam-identity-center/using-aws-cli/",
+    "destination": "/identity-governance/integrations/aws-iam-identity-center/using-aws-cli/",
+    "permanent": true
+  },
+  {
+    "source": "/zero-trust-access/sso/azuread/",
+    "destination": "/zero-trust-access/sso/integrate-idp/entra-id/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/backends/",
+    "destination": "/reference/deployment/backends/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/cloud-faq/",
+    "destination": "/cloud-faq/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/config/",
+    "destination": "/reference/deployment/config/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/identity-security-config/",
+    "destination": "/reference/deployment/identity-security-config/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/join-methods/",
+    "destination": "/reference/deployment/join-methods/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/machine-id/bound-keypair/admin-guide/",
+    "destination": "/reference/machine-workload-identity/bound-keypair/admin-guide/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/machine-id/bound-keypair/",
+    "destination": "/reference/machine-workload-identity/bound-keypair/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/machine-id/bound-keypair/concepts/",
+    "destination": "/reference/machine-workload-identity/bound-keypair/concepts/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/machine-id/bound-keypair/getting-started/",
+    "destination": "/reference/machine-workload-identity/bound-keypair/getting-started/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/machine-id/configuration/",
+    "destination": "/reference/machine-workload-identity/configuration/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/machine-id/diagnostics-service/",
+    "destination": "/reference/machine-workload-identity/diagnostics-service/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/machine-id/github-actions/",
+    "destination": "/reference/deployment/join-methods/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/machine-id/gitlab/",
+    "destination": "/reference/deployment/join-methods/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/machine-id/",
+    "destination": "/reference/machine-workload-identity/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/machine-id/telemetry/",
+    "destination": "/reference/machine-workload-identity/telemetry/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/machine-id/v16-upgrade-guide/",
+    "destination": "/reference/machine-workload-identity/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/managed-updates-v2/",
+    "destination": "/reference/deployment/managed-updates-v2/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/monitoring/audit/",
+    "destination": "/reference/deployment/monitoring/audit/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/monitoring/metrics/",
+    "destination": "/reference/deployment/monitoring/metrics/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/monitoring/",
+    "destination": "/reference/deployment/monitoring/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/monitoring/tracing-service-configuration/",
+    "destination": "/reference/deployment/monitoring/tracing-service-configuration/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/networking/",
+    "destination": "/reference/deployment/networking/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/operator-resources/",
+    "destination": "/reference/infrastructure-as-code/operator-resources/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/operator-resources/resources-teleport-dev-accesslists/",
+    "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-accesslists/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/operator-resources/resources-teleport-dev-appsv3/",
+    "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-appsv3/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/operator-resources/resources-teleport-dev-autoupdateconfigsv1/",
+    "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-autoupdateconfigsv1/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/operator-resources/resources-teleport-dev-autoupdateversionsv1/",
+    "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-autoupdateversionsv1/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/operator-resources/resources-teleport-dev-botsv1/",
+    "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-botsv1/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/operator-resources/resources-teleport-dev-databasesv3/",
+    "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-databasesv3/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/operator-resources/resources-teleport-dev-githubconnectors/",
+    "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-githubconnectors/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/operator-resources/resources-teleport-dev-loginrules/",
+    "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-loginrules/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/operator-resources/resources-teleport-dev-oidcconnectors/",
+    "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-oidcconnectors/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/operator-resources/resources-teleport-dev-oktaimportrules/",
+    "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-oktaimportrules/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/operator-resources/resources-teleport-dev-openssheiceserversv2/",
+    "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-openssheiceserversv2/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/operator-resources/resources-teleport-dev-opensshserversv2/",
+    "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-opensshserversv2/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/operator-resources/resources-teleport-dev-provisiontokens/",
+    "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-provisiontokens/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/operator-resources/resources-teleport-dev-roles/",
+    "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-roles/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/operator-resources/resources-teleport-dev-rolesv6/",
+    "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-rolesv6/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/operator-resources/resources-teleport-dev-rolesv7/",
+    "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-rolesv7/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/operator-resources/resources-teleport-dev-rolesv8/",
+    "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-rolesv8/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/operator-resources/resources-teleport-dev-samlconnectors/",
+    "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-samlconnectors/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/operator-resources/resources-teleport-dev-trustedclustersv2/",
+    "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-trustedclustersv2/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/operator-resources/resources-teleport-dev-users/",
+    "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-users/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/operator-resources/resources-teleport-dev-workloadidentitiesv1/",
+    "destination": "/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-workloadidentitiesv1/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/predicate-language/",
+    "destination": "/reference/access-controls/predicate-language/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/resources/",
+    "destination": "/reference/infrastructure-as-code/teleport-resources/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/signals/",
+    "destination": "/reference/deployment/signals/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/signature-algorithms/",
+    "destination": "/reference/deployment/signature-algorithms/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/terraform-provider-mwi/data-sources/",
+    "destination": "/reference/machine-workload-identity/terraform-provider-mwi/data-sources/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/terraform-provider-mwi/data-sources/kubernetes/",
+    "destination": "/reference/machine-workload-identity/terraform-provider-mwi/data-sources/kubernetes/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/terraform-provider-mwi/ephemeral-resources/",
+    "destination": "/reference/machine-workload-identity/terraform-provider-mwi/ephemeral-resources/",
+    "permanent": true
+  },
+  {
+    "source": "/reference/terraform-provider-mwi/ephemeral-resources/kubernetes/",
+    "destination": "/reference/machine-workload-identity/terraform-provider-mwi/ephemeral-resources/kubernetes/",
+
"permanent": true + }, + { + "source": "/reference/terraform-provider-mwi/", + "destination": "/reference/machine-workload-identity/terraform-provider-mwi/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/access_list/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/access_list/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/access_list_member/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/access_list_member/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/access_monitoring_rule/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/access_monitoring_rule/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/app/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/app/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/auth_preference/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/auth_preference/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/autoupdate_config/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/autoupdate_config/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/autoupdate_version/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/autoupdate_version/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/cluster_maintenance_config/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/cluster_maintenance_config/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/cluster_networking_config/", + "destination": 
"/reference/infrastructure-as-code/terraform-provider/data-sources/cluster_networking_config/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/database/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/database/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/dynamic_windows_desktop/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/dynamic_windows_desktop/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/github_connector/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/github_connector/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/health_check_config/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/health_check_config/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/installer/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/installer/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/login_rule/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/login_rule/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/oidc_connector/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/oidc_connector/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/okta_import_rule/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/okta_import_rule/", + "permanent": true + }, + { + "source": 
"/reference/terraform-provider/data-sources/provision_token/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/provision_token/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/role/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/role/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/saml_connector/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/saml_connector/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/session_recording_config/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/session_recording_config/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/static_host_user/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/static_host_user/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/trusted_cluster/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/trusted_cluster/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/trusted_device/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/trusted_device/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/user/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/user/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/data-sources/workload_identity/", + "destination": "/reference/infrastructure-as-code/terraform-provider/data-sources/workload_identity/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/access_list/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/access_list/", + 
"permanent": true + }, + { + "source": "/reference/terraform-provider/resources/access_list_member/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/access_list_member/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/access_monitoring_rule/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/access_monitoring_rule/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/app/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/app/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/auth_preference/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/auth_preference/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/autoupdate_config/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/autoupdate_config/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/autoupdate_version/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/autoupdate_version/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/bot/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/bot/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/cluster_maintenance_config/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/cluster_maintenance_config/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/cluster_networking_config/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/cluster_networking_config/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/database/", + "destination": 
"/reference/infrastructure-as-code/terraform-provider/resources/database/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/dynamic_windows_desktop/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/dynamic_windows_desktop/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/github_connector/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/github_connector/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/health_check_config/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/health_check_config/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/installer/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/installer/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/login_rule/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/login_rule/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/oidc_connector/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/oidc_connector/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/okta_import_rule/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/okta_import_rule/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/provision_token/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/provision_token/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/role/", + "destination": 
"/reference/infrastructure-as-code/terraform-provider/resources/role/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/saml_connector/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/saml_connector/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/server/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/server/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/session_recording_config/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/session_recording_config/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/static_host_user/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/static_host_user/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/trusted_cluster/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/trusted_cluster/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/trusted_device/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/trusted_device/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/user/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/user/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/resources/workload_identity/", + "destination": "/reference/infrastructure-as-code/terraform-provider/resources/workload_identity/", + "permanent": true + }, + { + "source": "/reference/terraform-provider/", + "destination": "/reference/infrastructure-as-code/terraform-provider/", + "permanent": true + }, + { + "source": "/reference/user-types/", + "destination": "/reference/access-controls/user-types/", + "permanent": true + }, + { + "source": 
"/machine-workload-identity/ai-agents-mwi/", + "destination": "/machine-workload-identity/use-cases/ai-agents-mwi/", + "permanent": true + }, + { + "source": "/machine-workload-identity/hybrid-multi-mwi/", + "destination": "/machine-workload-identity/use-cases/hybrid-multi-mwi/", + "permanent": true + }, + { + "source": "/machine-workload-identity/iac-mwi/", + "destination": "/machine-workload-identity/use-cases/iac-mwi/", + "permanent": true + }, + { + "source": "/machine-workload-identity/mwi-ci-cd/", + "destination": "/machine-workload-identity/use-cases/mwi-ci-cd/", + "permanent": true + }, + { + "source": "/machine-workload-identity/workload-mwi/", + "destination": "/machine-workload-identity/use-cases/workload-mwi/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/", + "destination": "/machine-workload-identity/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/introduction/", + "destination": "/machine-workload-identity/introduction/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/faq/", + "destination": "/machine-workload-identity/faq/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/manifesto/", + "destination": "/machine-workload-identity/manifesto/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/troubleshooting/", + "destination": "/machine-workload-identity/troubleshooting/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/access-guides/", + "destination": "/machine-workload-identity/access-guides/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/access-guides/ansible-awx/", + "destination": "/machine-workload-identity/access-guides/ansible-awx/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/access-guides/ansible/", + "destination": "/machine-workload-identity/access-guides/ansible/", + 
"permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/access-guides/applications/", + "destination": "/machine-workload-identity/access-guides/applications/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/access-guides/argocd/", + "destination": "/machine-workload-identity/access-guides/argocd/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/access-guides/databases/", + "destination": "/machine-workload-identity/access-guides/databases/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/access-guides/kubernetes/", + "destination": "/machine-workload-identity/access-guides/kubernetes/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/access-guides/mcp/", + "destination": "/machine-workload-identity/access-guides/mcp/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/access-guides/ssh/", + "destination": "/machine-workload-identity/access-guides/ssh/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/access-guides/tctl/", + "destination": "/machine-workload-identity/access-guides/tctl/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/deployment/", + "destination": "/machine-workload-identity/deployment/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/deployment/aws/", + "destination": "/machine-workload-identity/deployment/aws/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/deployment/azure-devops/", + "destination": "/machine-workload-identity/deployment/azure-devops/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/deployment/azure/", + "destination": "/machine-workload-identity/deployment/azure/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/deployment/bitbucket/", + "destination": 
"/machine-workload-identity/deployment/bitbucket/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/deployment/circleci/", + "destination": "/machine-workload-identity/deployment/circleci/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/deployment/gcp/", + "destination": "/machine-workload-identity/deployment/gcp/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/deployment/github-actions/", + "destination": "/machine-workload-identity/deployment/github-actions/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/deployment/gitlab/", + "destination": "/machine-workload-identity/deployment/gitlab/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/deployment/jenkins/", + "destination": "/machine-workload-identity/deployment/jenkins/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/deployment/kubernetes-oidc/", + "destination": "/machine-workload-identity/deployment/kubernetes-oidc/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/deployment/kubernetes/", + "destination": "/machine-workload-identity/deployment/kubernetes/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/deployment/linux-tpm/", + "destination": "/machine-workload-identity/deployment/linux-tpm/", + "permanent": true + }, + { + "source": "/machine-workload-identity/machine-id/deployment/linux/", + "destination": "/machine-workload-identity/deployment/linux/", + "permanent": true + }, + { + "source": "/reference/workload-identity/attributes/", + "destination": "/reference/machine-workload-identity/workload-identity/attributes/", + "permanent": true + }, + { + "source": "/reference/workload-identity/configuration-resource-migration/", + "destination": "/reference/machine-workload-identity/workload-identity/configuration-resource-migration/", + 
"permanent": true + }, + { + "source": "/reference/workload-identity/issuer-override/", + "destination": "/reference/machine-workload-identity/workload-identity/issuer-override/", + "permanent": true + }, + { + "source": "/reference/workload-identity/revocations/", + "destination": "/reference/machine-workload-identity/workload-identity/revocations/", + "permanent": true + }, + { + "source": "/reference/workload-identity/sigstore-attestation/", + "destination": "/reference/machine-workload-identity/workload-identity/sigstore-attestation/", + "permanent": true + }, + { + "source": "/reference/workload-identity/workload-identity-api-and-workload-attestation/", + "destination": "/reference/machine-workload-identity/workload-identity/workload-identity-api-and-workload-attestation/", + "permanent": true + }, + { + "source": "/reference/workload-identity/workload-identity-resource/", + "destination": "/reference/machine-workload-identity/workload-identity/workload-identity-resource/", + "permanent": true + }, + { + "source": "/reference/workload-identity/", + "destination": "/reference/machine-workload-identity/workload-identity/", + "permanent": true + }, + { + "source": "/enroll-resources/application-access/cloud-apis/aws-console-without-agent/", + "destination": "/enroll-resources/application-access/cloud-apis/aws-console-roles-anywhere/", + "permanent": true + }, + { + "source": "/zero-trust-access/infrastructure-as-code/terraform-starter/", + "destination": "/zero-trust-access/infrastructure-as-code/terraform-provider/terraform-getting-started/", + "permanent": true + }, + { + "source": "/zero-trust-access/infrastructure-as-code/terraform-starter/enroll-resources/", + "destination": "/zero-trust-access/infrastructure-as-code/terraform-provider/terraform-getting-started/", + "permanent": true + }, + { + "source": "/zero-trust-access/infrastructure-as-code/terraform-starter/rbac/", + "destination": 
"/zero-trust-access/infrastructure-as-code/terraform-provider/terraform-getting-started/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/guides/scim/", + "destination": "/identity-governance/integrations/scim/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/guides/scim/sailpoint/", + "destination": "/identity-governance/integrations/scim/sailpoint/", + "permanent": true + }, + { + "source": "/identity-governance/aws-iam-identity-center/advanced-options/", + "destination": "/identity-governance/integrations/aws-iam-identity-center/advanced-options/", + "permanent": true + }, + { + "source": "/identity-governance/aws-iam-identity-center/", + "destination": "/identity-governance/integrations/aws-iam-identity-center/", + "permanent": true + }, + { + "source": "/identity-governance/aws-iam-identity-center/guide/", + "destination": "/identity-governance/integrations/aws-iam-identity-center/guide/", + "permanent": true + }, + { + "source": "/identity-governance/aws-iam-identity-center/maintenance/", + "destination": "/identity-governance/integrations/aws-iam-identity-center/maintenance/", + "permanent": true + }, + { + "source": "/identity-governance/aws-iam-identity-center/migrating-identity-center-from-okta-to-teleport/", + "destination": "/identity-governance/integrations/aws-iam-identity-center/migrating-identity-center-from-okta-to-teleport/", + "permanent": true + }, + { + "source": "/identity-governance/aws-iam-identity-center/using-aws-cli/", + "destination": "/identity-governance/integrations/aws-iam-identity-center/using-aws-cli/", + "permanent": true + }, + { + "source": "/identity-governance/entra-id/configure-access/", + "destination": "/identity-governance/integrations/entra-id/configure-access/", + "permanent": true + }, + { + "source": "/identity-governance/entra-id/", + "destination": "/identity-governance/integrations/entra-id/", + "permanent": true + }, + { + "source": 
"/identity-governance/entra-id/faq/", + "destination": "/identity-governance/integrations/entra-id/faq/", + "permanent": true + }, + { + "source": "/identity-governance/entra-id/getting-started/", + "destination": "/identity-governance/integrations/entra-id/getting-started/", + "permanent": true + }, + { + "source": "/identity-governance/entra-id/manual-installation/", + "destination": "/identity-governance/integrations/entra-id/setup/manual-installation/", + "permanent": true + }, + { + "source": "/identity-governance/entra-id/terraform/", + "destination": "/identity-governance/integrations/entra-id/setup/terraform/", + "permanent": true + }, + { + "source": "/identity-governance/okta/app-and-group-sync/", + "destination": "/identity-governance/integrations/okta/app-and-group-sync/", + "permanent": true + }, + { + "source": "/identity-governance/okta/guided-sso/", + "destination": "/identity-governance/integrations/okta/guided-sso/", + "permanent": true + }, + { + "source": "/identity-governance/okta/", + "destination": "/identity-governance/integrations/okta/", + "permanent": true + }, + { + "source": "/identity-governance/okta/scim-integration/", + "destination": "/identity-governance/integrations/okta/scim-integration/", + "permanent": true + }, + { + "source": "/identity-governance/okta/user-sync/", + "destination": "/identity-governance/integrations/okta/user-sync/", + "permanent": true + }, + { + "source": "/identity-governance/scim/sailpoint/", + "destination": "/identity-governance/integrations/scim/sailpoint/", + "permanent": true + }, + { + "source": "/identity-governance/scim/", + "destination": "/identity-governance/integrations/scim/", + "permanent": true + }, + { + "source": "/identity-security/how-to-use/", + "destination": "/identity-security/usage/", + "permanent": true + }, + { + "source": "/identity-security/policy-connections/", + "destination": "/identity-security/usage/graph-explorer/", + "permanent": true + }, + { + "source": 
"/identity-security/crown-jewels/", + "destination": "/identity-security/usage/crown-jewels/", + "permanent": true + }, + { + "source": "/linux-demo/", + "destination": "/get-started/deploy-community/", + "permanent": true + }, + { + "source": "/reference/infrastructure-as-code/resources/", + "destination": "/reference/infrastructure-as-code/teleport-resources/", + "permanent": true + }, + { + "source": "/enroll-resources/server-access/guides/recording-proxy-mode/", + "destination": "/reference/architecture/session-recording/", + "permanent": true + }, + { + "source": "/zero-trust-access/deploy-a-cluster/aws-kms/", + "destination": "/zero-trust-access/deploy-a-cluster/private-keys/aws-kms/", + "permanent": true + }, + { + "source": "/zero-trust-access/deploy-a-cluster/gcp-kms/", + "destination": "/zero-trust-access/deploy-a-cluster/private-keys/gcp-kms/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/guides/ec2-tags/", + "destination": "/enroll-resources/automatic-labels/ec2-tags/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/guides/gcp-tags/", + "destination": "/enroll-resources/automatic-labels/gcp-tags/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/guides/oracle-tags/", + "destination": "/enroll-resources/automatic-labels/oracle-tags/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-aws-databases/rds-oracle/", + "destination": "/enroll-resources/database-access/enrollment/aws/rds/rds-oracle/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-aws-databases/rds-proxy-mysql/", + "destination": "/enroll-resources/database-access/enrollment/aws/rds-proxy/rds-proxy-mysql/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-aws-databases/rds-proxy-postgres/", + "destination": "/enroll-resources/database-access/enrollment/aws/rds-proxy/rds-proxy-postgres/", + "permanent": true + }, + { + 
"source": "/enroll-resources/database-access/enroll-aws-databases/rds-proxy-sqlserver/", + "destination": "/enroll-resources/database-access/enrollment/aws/rds-proxy/rds-proxy-sqlserver/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-aws-databases/sql-server-ad/", + "destination": "/enroll-resources/database-access/enrollment/aws/rds/sql-server-ad/", + "permanent": true + }, + { + "source": "/enroll-resources/mcp-access/sse/", + "destination": "/enroll-resources/mcp-access/enrolling-mcp-servers/sse/", + "permanent": true + }, + { + "source": "/enroll-resources/mcp-access/stdio/", + "destination": "/enroll-resources/mcp-access/enrolling-mcp-servers/stdio/", + "permanent": true + }, + { + "source": "/enroll-resources/mcp-access/streamable-http/", + "destination": "/enroll-resources/mcp-access/enrolling-mcp-servers/streamable-http/", + "permanent": true + }, + { + "source": "/identity-governance/integrations/entra-id/manual-installation/", + "destination": "/identity-governance/integrations/entra-id/setup/manual-installation/", + "permanent": true + }, + { + "source": "/identity-governance/integrations/entra-id/terraform/", + "destination": "/identity-governance/integrations/entra-id/setup/terraform/", + "permanent": true + }, + { + "source": "/enroll-resources/application-access/guides/amazon-athena/", + "destination": "/enroll-resources/application-access/protect-apps/amazon-athena/", + "permanent": true + }, + { + "source": "/enroll-resources/application-access/guides/api-access/", + "destination": "/enroll-resources/application-access/protect-apps/api-access/", + "permanent": true + }, + { + "source": "/enroll-resources/application-access/guides/connecting-apps/", + "destination": "/enroll-resources/application-access/protect-apps/connecting-apps/", + "permanent": true + }, + { + "source": "/enroll-resources/application-access/guides/dynamodb/", + "destination": "/enroll-resources/application-access/protect-apps/dynamodb/", + 
"permanent": true + }, + { + "source": "/enroll-resources/application-access/guides/tcp/", + "destination": "/enroll-resources/application-access/protect-apps/tcp/", + "permanent": true + }, + { + "source": "/reference/agent-services/", + "destination": "/reference/", + "permanent": true + }, + { + "source": "/reference/agent-services/application-access/", + "destination": "/enroll-resources/application-access/reference/", + "permanent": true + }, + { + "source": "/reference/agent-services/auto-discovery-reference/", + "destination": "/enroll-resources/auto-discovery/reference/", + "permanent": true + }, + { + "source": "/reference/agent-services/auto-discovery-reference/aws-iam/", + "destination": "/enroll-resources/auto-discovery/reference/aws-iam/", + "permanent": true + }, + { + "source": "/reference/agent-services/auto-discovery-reference/kubernetes-application-discovery/", + "destination": "/enroll-resources/auto-discovery/kubernetes-applications/reference/", + "permanent": true + }, + { + "source": "/reference/agent-services/auto-discovery-reference/labels/", + "destination": "/enroll-resources/auto-discovery/reference/labels/", + "permanent": true + }, + { + "source": "/reference/agent-services/database-access-reference/audit/", + "destination": "/enroll-resources/database-access/reference/audit/", + "permanent": true + }, + { + "source": "/reference/agent-services/database-access-reference/aws/", + "destination": "/enroll-resources/database-access/reference/aws/", + "permanent": true + }, + { + "source": "/reference/agent-services/database-access-reference/cli/", + "destination": "/enroll-resources/database-access/reference/cli/", + "permanent": true + }, + { + "source": "/reference/agent-services/database-access-reference/configuration/", + "destination": "/enroll-resources/database-access/reference/configuration/", + "permanent": true + }, + { + "source": "/reference/agent-services/database-access-reference/", + "destination": 
"/enroll-resources/database-access/reference/", + "permanent": true + }, + { + "source": "/reference/agent-services/database-access-reference/labels/", + "destination": "/enroll-resources/database-access/reference/labels/", + "permanent": true + }, + { + "source": "/reference/agent-services/desktop-access-reference/audit/", + "destination": "/enroll-resources/desktop-access/reference/audit/", + "permanent": true + }, + { + "source": "/reference/agent-services/desktop-access-reference/cli/", + "destination": "/enroll-resources/desktop-access/reference/cli/", + "permanent": true + }, + { + "source": "/reference/agent-services/desktop-access-reference/clipboard/", + "destination": "/enroll-resources/desktop-access/reference/clipboard/", + "permanent": true + }, + { + "source": "/reference/agent-services/desktop-access-reference/configuration/", + "destination": "/enroll-resources/desktop-access/reference/configuration/", + "permanent": true + }, + { + "source": "/reference/agent-services/desktop-access-reference/", + "destination": "/enroll-resources/desktop-access/reference/", + "permanent": true + }, + { + "source": "/reference/agent-services/desktop-access-reference/sessions/", + "destination": "/enroll-resources/desktop-access/reference/sessions/", + "permanent": true + }, + { + "source": "/reference/agent-services/desktop-access-reference/user-creation/", + "destination": "/enroll-resources/desktop-access/reference/user-creation/", + "permanent": true + }, + { + "source": "/reference/agent-services/mcp-access/", + "destination": "/enroll-resources/mcp-access/reference/", + "permanent": true + }, + { + "source": "/reference/agent-services/okta/", + "destination": "/identity-governance/integrations/okta/reference/", + "permanent": true + }, + { + "source": "/enroll-resources/application-access/introduction/", + "destination": "/enroll-resources/application-access/", + "source": "/zero-trust-access/sso/adfs/", + "destination": 
"/zero-trust-access/sso/integrate-idp/adfs/", + "permanent": true + }, + { + "source": "/zero-trust-access/sso/entra-id-oidc/", + "destination": "/zero-trust-access/sso/integrate-idp/entra-id-oidc/", + "permanent": true + }, + { + "source": "/zero-trust-access/sso/entra-id/", + "destination": "/zero-trust-access/sso/integrate-idp/entra-id/", + "permanent": true + }, + { + "source": "/zero-trust-access/sso/github-sso/", + "destination": "/zero-trust-access/sso/integrate-idp/github-sso/", + "permanent": true + }, + { + "source": "/zero-trust-access/sso/gitlab/", + "destination": "/zero-trust-access/sso/integrate-idp/gitlab/", + "permanent": true + }, + { + "source": "/zero-trust-access/sso/google-workspace/", + "destination": "/zero-trust-access/sso/integrate-idp/google-workspace/", + "permanent": true + }, + { + "source": "/zero-trust-access/sso/keycloak/", + "destination": "/zero-trust-access/sso/integrate-idp/keycloak/", + "permanent": true + }, + { + "source": "/zero-trust-access/sso/oidc/", + "destination": "/zero-trust-access/sso/integrate-idp/oidc/", + "permanent": true + }, + { + "source": "/zero-trust-access/sso/okta/", + "destination": "/zero-trust-access/sso/integrate-idp/okta/", + "permanent": true + }, + { + "source": "/zero-trust-access/sso/one-login/", + "destination": "/zero-trust-access/sso/integrate-idp/one-login/", + "permanent": true + }, + { + "source": "/zero-trust-access/authentication/login-rules/guide/", + "destination": "/zero-trust-access/sso/login-rules/guide/", + "permanent": true + }, + { + "source": "/zero-trust-access/authentication/login-rules/", + "destination": "/zero-trust-access/sso/login-rules/", + "permanent": true + }, + { + "source": "/identity-governance/access-request-plugins/", + "destination": "/identity-governance/access-requests/plugins/", + "permanent": true + }, + { + "source": "/identity-governance/access-request-plugins/opsgenie/", + "destination": "/identity-governance/access-requests/plugins/opsgenie/", + 
"permanent": true + }, + { + "source": "/identity-governance/access-request-plugins/ssh-approval-discord/", + "destination": "/identity-governance/access-requests/plugins/discord/", + "permanent": true + }, + { + "source": "/identity-governance/access-request-plugins/ssh-approval-email/", + "destination": "/identity-governance/access-requests/plugins/email/", + "permanent": true + }, + { + "source": "/identity-governance/access-request-plugins/ssh-approval-mattermost/", + "destination": "/identity-governance/access-requests/plugins/mattermost/", + "permanent": true + }, + { + "source": "/identity-governance/access-request-plugins/ssh-approval-msteams/", + "destination": "/identity-governance/access-requests/plugins/msteams/", + "permanent": true + }, + { + "source": "/identity-governance/access-request-plugins/ssh-approval-pagerduty/", + "destination": "/identity-governance/access-requests/plugins/pagerduty/", + "permanent": true + }, + { + "source": "/identity-governance/access-request-plugins/ssh-approval-slack/", + "destination": "/identity-governance/access-requests/plugins/slack/", + "permanent": true + }, + { + "source": "/identity-governance/access-request-plugins/access-plugin/", + "destination": "/identity-governance/access-requests/plugins/how-to-build/", + "permanent": true + }, + { + "source": "/identity-governance/access-request-plugins/datadog-hosted/", + "destination": "/identity-governance/access-requests/plugins/datadog-hosted/", + "permanent": true + }, + { + "source": "/identity-governance/access-request-plugins/notification-routing-rules/", + "destination": "/identity-governance/access-requests/notification-routing-rules/", + "permanent": true + }, + { + "source": "/identity-governance/access-request-plugins/servicenow/", + "destination": "/identity-governance/access-requests/plugins/servicenow/", + "permanent": true + }, + { + "source": "/identity-governance/access-request-plugins/ssh-approval-jira/", + "destination": 
"/identity-governance/access-requests/plugins/jira/", + "permanent": true + }, + { + "source": "/identity-governance/idps/saml-gcp-workforce-identity-federation/", + "destination": "/identity-governance/idps/usage/saml-gcp-workforce-identity-federation/", + "permanent": true + }, + { + "source": "/identity-governance/idps/saml-grafana/", + "destination": "/identity-governance/idps/usage/saml-grafana/", + "permanent": true + }, + { + "source": "/identity-governance/idps/saml-microsoft-entra-external-id/", + "destination": "/identity-governance/idps/usage/saml-microsoft-entra-external-id/", + "permanent": true + }, + { + "source": "/connect-your-client/gui-clients/", + "destination": "/connect-your-client/third-party/gui-clients/", + "permanent": true + }, + { + "source": "/connect-your-client/introduction/", + "destination": "/connect-your-client/teleport-clients/", + "permanent": true + }, + { + "source": "/connect-your-client/notifications/", + "destination": "/connect-your-client/teleport-clients/notifications/", + "permanent": true + }, + { + "source": "/connect-your-client/putty-winscp/", + "destination": "/connect-your-client/third-party/putty-winscp/", + "permanent": true + }, + { + "source": "/connect-your-client/teleport-connect/", + "destination": "/connect-your-client/teleport-clients/teleport-connect/", + "permanent": true + }, + { + "source": "/connect-your-client/tsh/", + "destination": "/connect-your-client/teleport-clients/tsh/", + "permanent": true + }, + { + "source": "/connect-your-client/vnet/", + "destination": "/connect-your-client/teleport-clients/vnet/", + "permanent": true + }, + { + "source": "/connect-your-client/web-ui/", + "destination": "/connect-your-client/teleport-clients/web-ui/", + "permanent": true + }, + { + "source": "/enroll-resources/server-access/guides/jetbrains-sftp/", + "destination": "/connect-your-client/third-party/jetbrains-sftp/", + "permanent": true + }, + { + "source": 
"/enroll-resources/server-access/guides/vscode/", + "destination": "/connect-your-client/third-party/vscode/", + "permanent": true + }, + { + "source": "/zero-trust-access/infrastructure-as-code/managing-resources/import-existing-resources/", + "destination": "/zero-trust-access/infrastructure-as-code/terraform-provider/import-existing-resources/", + "permanent": true + }, + { + "source": "/zero-trust-access/authentication/webauthn/", + "destination": "/zero-trust-access/management/security/idp-compromise/", + "permanent": true + }, + { + "source": "/zero-trust-access/deploy-a-cluster/high-availability/", + "destination": "/zero-trust-access/deploy-a-cluster/deployments/high-availability/", + "permanent": true + }, + { + "source": "/zero-trust-access/deploy-a-cluster/multi-region-blueprint/", + "destination": "/zero-trust-access/deploy-a-cluster/deployments/multi-region-blueprint/", + "permanent": true + }, + { + "source": "/zero-trust-access/deploy-a-cluster/separate-proxy-service-endpoints/", + "destination": "/zero-trust-access/deploy-a-cluster/deployments/separate-proxy-service-endpoints/", + "permanent": true + }, + { + "source": "/zero-trust-access/deploy-a-cluster/helm-deployments/argocd-helm/", + "destination": "/enroll-resources/agents/argocd-helm/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-aws-databases/aws-cassandra-keyspaces/", + "destination": "/enroll-resources/database-access/enrollment/aws/aws-cassandra-keyspaces/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-aws-databases/aws-cross-account/", + "destination": "/enroll-resources/database-access/enrollment/aws/aws-cross-account/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-aws-databases/aws-docdb/", + "destination": "/enroll-resources/database-access/enrollment/aws/aws-docdb/", + "permanent": true + }, + { + "source": 
"/enroll-resources/database-access/enroll-aws-databases/aws-dynamodb/", + "destination": "/enroll-resources/database-access/enrollment/aws/aws-dynamodb/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-aws-databases/aws-memorydb/", + "destination": "/enroll-resources/database-access/enrollment/aws/aws-memorydb/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-aws-databases/aws-opensearch/", + "destination": "/enroll-resources/database-access/enrollment/aws/aws-opensearch/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-aws-databases/elasticache-serverless/", + "destination": "/enroll-resources/database-access/enrollment/aws/elasticache-serverless/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-aws-databases/", + "destination": "/enroll-resources/database-access/enrollment/aws/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-aws-databases/postgres-redshift/", + "destination": "/enroll-resources/database-access/enrollment/aws/postgres-redshift/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-aws-databases/rds-proxy/rds-proxy-mysql/", + "destination": "/enroll-resources/database-access/enrollment/aws/rds-proxy/rds-proxy-mysql/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-aws-databases/rds-proxy/rds-proxy-postgres/", + "destination": "/enroll-resources/database-access/enrollment/aws/rds-proxy/rds-proxy-postgres/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-aws-databases/rds-proxy/rds-proxy-sqlserver/", + "destination": "/enroll-resources/database-access/enrollment/aws/rds-proxy/rds-proxy-sqlserver/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-aws-databases/rds-proxy/", + "destination": 
"/enroll-resources/database-access/enrollment/aws/rds-proxy/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-aws-databases/rds/mysql-postgres-mariadb/", + "destination": "/enroll-resources/database-access/enrollment/aws/rds/mysql-postgres-mariadb/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-aws-databases/rds/rds-oracle/", + "destination": "/enroll-resources/database-access/enrollment/aws/rds/rds-oracle/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-aws-databases/rds/", + "destination": "/enroll-resources/database-access/enrollment/aws/rds/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-aws-databases/rds/sql-server-ad/", + "destination": "/enroll-resources/database-access/enrollment/aws/rds/sql-server-ad/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-aws-databases/redis-aws/", + "destination": "/enroll-resources/database-access/enrollment/aws/redis-aws/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-aws-databases/redshift-serverless/", + "destination": "/enroll-resources/database-access/enrollment/aws/redshift-serverless/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-azure-databases/azure-postgres-mysql/", + "destination": "/enroll-resources/database-access/enrollment/azure/azure-postgres-mysql/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-azure-databases/azure-redis/", + "destination": "/enroll-resources/database-access/enrollment/azure/azure-redis/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-azure-databases/azure-sql-server-ad/", + "destination": "/enroll-resources/database-access/enrollment/azure/azure-sql-server-ad/", + "permanent": true + }, + { + "source": 
"/enroll-resources/database-access/enroll-azure-databases/", + "destination": "/enroll-resources/database-access/enrollment/azure/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-google-cloud-databases/alloydb/", + "destination": "/enroll-resources/database-access/enrollment/google-cloud/alloydb/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-google-cloud-databases/", + "destination": "/enroll-resources/database-access/enrollment/google-cloud/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-google-cloud-databases/mysql-cloudsql/", + "destination": "/enroll-resources/database-access/enrollment/google-cloud/mysql-cloudsql/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-google-cloud-databases/postgres-cloudsql/", + "destination": "/enroll-resources/database-access/enrollment/google-cloud/postgres-cloudsql/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-google-cloud-databases/spanner/", + "destination": "/enroll-resources/database-access/enrollment/google-cloud/spanner/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-managed-databases/", + "destination": "/enroll-resources/database-access/enrollment/managed/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-managed-databases/mongodb-atlas/", + "destination": "/enroll-resources/database-access/enrollment/managed/mongodb-atlas/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-managed-databases/oracle-exadata/", + "destination": "/enroll-resources/database-access/enrollment/managed/oracle-exadata/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-managed-databases/snowflake/", + "destination": "/enroll-resources/database-access/enrollment/managed/snowflake/", + "permanent": true + }, + { + 
"source": "/enroll-resources/database-access/enroll-self-hosted-databases/cassandra-self-hosted/", + "destination": "/enroll-resources/database-access/enrollment/self-hosted/cassandra-self-hosted/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-self-hosted-databases/clickhouse-self-hosted/", + "destination": "/enroll-resources/database-access/enrollment/self-hosted/clickhouse-self-hosted/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-self-hosted-databases/cockroachdb-self-hosted/", + "destination": "/enroll-resources/database-access/enrollment/self-hosted/cockroachdb-self-hosted/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-self-hosted-databases/elastic/", + "destination": "/enroll-resources/database-access/enrollment/self-hosted/elastic/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-self-hosted-databases/", + "destination": "/enroll-resources/database-access/enrollment/self-hosted/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-self-hosted-databases/mongodb-self-hosted/", + "destination": "/enroll-resources/database-access/enrollment/self-hosted/mongodb-self-hosted/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-self-hosted-databases/mysql-self-hosted/", + "destination": "/enroll-resources/database-access/enrollment/self-hosted/mysql-self-hosted/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-self-hosted-databases/oracle-self-hosted/", + "destination": "/enroll-resources/database-access/enrollment/self-hosted/oracle-self-hosted/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-self-hosted-databases/postgres-self-hosted/", + "destination": "/enroll-resources/database-access/enrollment/self-hosted/postgres-self-hosted/", + "permanent": true + }, + { + "source": 
"/enroll-resources/database-access/enroll-self-hosted-databases/redis-cluster/", + "destination": "/enroll-resources/database-access/enrollment/self-hosted/redis-cluster/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-self-hosted-databases/redis/", + "destination": "/enroll-resources/database-access/enrollment/self-hosted/redis/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-self-hosted-databases/sql-server-ad-pkinit/", + "destination": "/enroll-resources/database-access/enrollment/self-hosted/sql-server-ad-pkinit/", + "permanent": true + }, + { + "source": "/enroll-resources/database-access/enroll-self-hosted-databases/vitess/", + "destination": "/enroll-resources/database-access/enrollment/self-hosted/vitess/", + "permanent": true + }, + { + "source": "/enroll-resources/auto-discovery/reference/kubernetes-application-discovery/", + "destination": "/enroll-resources/auto-discovery/kubernetes-applications/reference/", + "permanent": true + }, + { + "source": "/enroll-resources/server-access/introduction/", + "destination": "/enroll-resources/server-access/", + "permanent": true + }, + { + "source": "/enroll-resources/application-access/guides/dynamic-registration/", + "destination": "/enroll-resources/application-access/configuration/dynamic-registration/", + "permanent": true + }, + { + "source": "/enroll-resources/application-access/guides/", + "destination": "/enroll-resources/application-access/", + "permanent": true + }, + { + "source": "/enroll-resources/application-access/guides/ha/", + "destination": "/enroll-resources/application-access/configuration/ha/", + "permanent": true + }, + { + "source": "/enroll-resources/application-access/guides/vnet/", + "destination": "/enroll-resources/application-access/vnet/", + "permanent": true + }, + { + "source": "/enroll-resources/application-access/controls/", + "destination": "/enroll-resources/application-access/configuration/controls/", + 
"permanent": true + }, + { + "source": "/enroll-resources/application-access/dynamic-registration/", + "destination": "/enroll-resources/application-access/configuration/dynamic-registration/", + "permanent": true + }, + { + "source": "/enroll-resources/application-access/ha/", + "destination": "/enroll-resources/application-access/configuration/ha/", + "permanent": true + }, + { + "source": "/enroll-resources/server-access/guides/ansible/", + "destination": "/connect-your-client/third-party/ansible/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/guides/datadog/", + "destination": "/zero-trust-access/management/diagnostics/datadog/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/guides/awsoidc-integration-console/", + "destination": "/enroll-resources/application-access/cloud-apis/awsoidc-integration-console/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/guides/awsoidc-integration-rds/", + "destination": "/enroll-resources/application-access/cloud-apis/awsoidc-integration-rds/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/guides/awsoidc-integration/", + "destination": "/enroll-resources/application-access/cloud-apis/awsoidc-integration/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/guides/github-integration/", + "destination": "/enroll-resources/application-access/cloud-apis/github-integration/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/operations/ca-rotation/", + "destination": "/zero-trust-access/management/security/ca-rotation/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/operations/db-ca-migrations/", + "destination": "/zero-trust-access/management/security/db-ca-migrations/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/admin/users/", + "destination": "/zero-trust-access/rbac-get-started/users/", + "permanent": true + }, + { + "source": 
"/zero-trust-access/management/admin/uninstall-teleport/", + "destination": "/installation/uninstall-teleport/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/admin/self-signed-certs/", + "destination": "/zero-trust-access/deploy-a-cluster/self-signed-certs/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/admin/trustedclusters/", + "destination": "/zero-trust-access/deploy-a-cluster/trustedclusters/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/operations/tls-routing/", + "destination": "/zero-trust-access/deploy-a-cluster/tls-routing/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/admin/", + "destination": "/zero-trust-access/management/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/admin/troubleshooting/", + "destination": "/zero-trust-access/management/diagnostics/troubleshooting/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/operations/backup-restore/", + "destination": "/zero-trust-access/deploy-a-cluster/backup-restore/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/operations/", + "destination": "/zero-trust-access/management/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/operations/proxy-peering/", + "destination": "/zero-trust-access/deploy-a-cluster/proxy-peering/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/operations/scaling/", + "destination": "/reference/deployment/scaling/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/admin/daemon/", + "destination": "/zero-trust-access/management/daemon/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/admin/trusted-cluster-address-lookup/", + "destination": "/zero-trust-access/deploy-a-cluster/trustedclusters/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/guides/", + "destination": 
"/zero-trust-access/management/", + "permanent": true + }, + { + "source": "/zero-trust-access/management/operations/breakglass-access/", + "destination": "/zero-trust-access/deploy-a-cluster/breakglass-access/", + "permanent": true + }, + { + "source": "/connect-your-client/request-access/", + "destination": "/connect-your-client/teleport-clients/request-access/", + "permanent": true } ] } diff --git a/docs/cspell.json b/docs/cspell.json index 781b3e584fcac..e68daef04bc47 100644 --- a/docs/cspell.json +++ b/docs/cspell.json @@ -5,6 +5,8 @@ "AADUSER", "ABAC", "ABCDEFGHIJKL", + "ACCESSTOKEN", + "Acking", "ADFS", "AICPA", "AICPA’s", @@ -15,27 +17,33 @@ "APPDATA", "APPSECRET", "AQAB", + "AROA123456789AEXAMPLE", "AUTHINFO", "AWSARN", "AWSIIDTTL", + "AWX", "Aarch", "Activ", "Addrs", "Afax", "Agentic", "Aqxs", + "Authenticode", "Authy", "BDRVJUSUZ", "BSFD", "BUCKETNAME", "Binm", + "breakglass", "Brosnan", + "CAROOT", "CAcreateserial", "CCDC", "CGNAT", "CHANGEID", "CHANGEME", "CLOUDSDK", + "CMVP", "CREATEDB", "CTAP", "CXXXXXXXXX", @@ -55,9 +63,11 @@ "Dfumu", "Distroless", "Divio's", + "EADDRINUSE", "EBSCSI", "ECCP", "ECMWF", + "EICE", "EKCert", "ERRO", "Elastcsearch", @@ -83,8 +93,10 @@ "GCP's", "GENPASS", "GHES", + "GOAWAY", "GODEBUG", "GOMAXPROCS", + "GOMEMLIMIT", "GSLB", "Gbps", "Ghostunnel", @@ -109,6 +121,7 @@ "Infosec", "Instruqt", "Intelli", + "Intune", "Iqxtr", "Istio", "JCRP", @@ -121,6 +134,7 @@ "Jlbn", "Jpji", "Jqlang", + "JumpCloud", "Kanban", "Keycloak", "Keyspaces", @@ -148,10 +162,13 @@ "Mqgcq", "Multifactor", "Multihost", + "mypassword", "Mzgz", + "NOEXEC", "NOFILE", "NOKEY", "NOPASSWD", + "NRPT", "NVGJ", "Näme", "OCID", @@ -164,6 +181,7 @@ "OpenAI", "Opsgenie", "Orapki", + "PDU", "PFDEBUG", "PFSELFTEST", "PGCLIENTENCODING", @@ -171,6 +189,7 @@ "PGSSLCERT", "PGSSLKEY", "PGSSLROOTCERT", + "PKCE", "PKIX", "POSIX", "POSTALCODE", @@ -182,6 +201,8 @@ "Pbbd", "Pluggable", "Println", + "Pulumi", + "Pyroscope", "Quickstart", "Quicktime's", "REDISCLI", @@ -193,6 
+214,7 @@ "Rekor", "Relogging", "Relogin", + "replayable", "SAMLIDP", "SCIM", "SECURITYADMIN", @@ -201,6 +223,7 @@ "SLAVEOF", "SLES", "SLOWLOG", + "Snowsight", "SPDY", "SPIFFE", "SQLSTATE", @@ -212,8 +235,12 @@ "SUPATH", "SVID", "SVIDs", + "SailPoint", + "Sailpoint", "Secretless", + "setx", "Shockbyte", + "showcerts", "Silverfort's", "Slackbot", "Sllavd", @@ -222,6 +249,7 @@ "Spacelift's", "Sprintf", "Stackdriver", + "Streamable", "Svhk", "Swic", "Swicm", @@ -248,8 +276,10 @@ "Uwhp", "VPCID", "VSVZY", + "Valkey", "Vhka", "Vitess", + "VMSS", "Vybm", "WIMSE", "WWFCX", @@ -293,6 +323,7 @@ "allkeys", "allowdeny", "allowedlogins", + "alloydb", "anonymization", "anotheruser", "apikey", @@ -333,13 +364,16 @@ "awsic", "awskms", "awsoidc", + "awsra", "awsuser", + "awx", "azblob", "azidentity", "azuread", "azuredatabases", "azurerm", "backdoors", + "backdooring", "backoff", "backported", "backporting", @@ -364,6 +398,8 @@ "certutil", "cfpassword", "cfsdf", + "cfssl", + "cfssljson", "cgroups", "cgroupv", "cgroupv2", @@ -397,6 +433,7 @@ "clusterroles", "cmdkey", "cockroachdb", + "codesign", "codingllama", "compat", "compinit", @@ -404,10 +441,11 @@ "cond", "configmap", "configmaps", + "configurator", "connectionupgrade", - "connstring", "connectorname", "connstring", + "contoso", "cprops", "cqlsh", "createkey", @@ -416,6 +454,7 @@ "createrow", "creds", "crond", + "crontabs", "cryptoprocessor", "csrs", "customizability", @@ -429,6 +468,7 @@ "dbaccessdemo", "dbadir", "dbadmin", + "dbas", "dbeaver", "dbgroup", "dbname", @@ -439,6 +479,7 @@ "deletecollection", "demodb", "deregisters", + "desync", "devel", "develnode", "devicetrust", @@ -486,6 +527,7 @@ "enzos", "errcode", "etcdctl", + "evtx", "exadata", "exadatadomain", "examplecontainer", @@ -493,6 +535,7 @@ "examplestorageaccount", "exampletoken", "exampleuser", + "executionrole", "exfiltrated", "exfiltration", "externaladdress", @@ -503,6 +546,7 @@ "fabfc", "fakehost", "fakekey", + "fcontext", "fdpass", "federationmedata", 
"fedmeatadataxml", @@ -523,13 +567,17 @@ "gcloud", "gcpproj", "gecos", + "gencert", "genpkey", "genrsa", + "geoip", + "getenforce", "getent", "getstring", "gitref", "glmhf", "gncm", + "goaway", "googleoidc", "googleusercontent", "goroutines", @@ -542,18 +590,22 @@ "grav", "groupclaim", "gserviceaccount", + "goexit", "gslb", "gsutil", "gworkspace", "hashfile", "hashicorp", "healthcheck", + "healthchecker", "healthchecks", "healthz", "highavailability", "highavailabilitycertmanager", "highavailabilitycertmanageraddcommonname", "hiro", + "homelab", + "homelabs", "hostcert", "hostdb", "hostedzone", @@ -592,6 +644,7 @@ "jsmith", "jsonencode", "jsonpath", + "jumpcloud", "jumphost", "jwks", "jwkset", @@ -618,6 +671,8 @@ "kubernetesdiscovery", "kvno", "lastname", + "langchain", + "langgraph", "ldapsearch", "leaderelection", "leafcluster", @@ -628,11 +683,13 @@ "libgcc", "libm", "libpq", + "lifecycle", "livez", "lmnop", "loadbalancer", "localca", "localinstall", + "localproxy", "loginerrortroubleshooting", "loginrule", "loginrules", @@ -653,6 +710,7 @@ "masterauth", "masteruser", "mattermosttokenfromsecret", + "maxmind", "mcache", "memlock", "memorydb", @@ -667,9 +725,12 @@ "mkstore", "mlock", "mlockall", + "mmdb", + "modelcontextprotocol", "mongodbatlas", "mongosh", "mpghq", + "mpstat", "msapi", "msclmd", "mspan", @@ -678,6 +739,7 @@ "mtls", "multiaccount", "multistep", + "mvcc", "myapp", "myauth", "mybot", @@ -737,9 +799,13 @@ "nologo", "nonprod", "noout", + "noprofile", "noprompt", + "noqa", + "norc", "nosql", "notifempty", + "notionhq", "nowait", "ntauth", "nvme", @@ -774,14 +840,18 @@ "pasteable", "persistentvolume", "persistentvolumeclaim", + "persistentvolumes", "pgaadauth", "pgbouncer", "pgpass", "pguser", + "pipx", "pidof", + "pkce", "pkill", "pkinit", "plugindata", + "policycoreutils", "portforward", "postgresqlselfhosted", "pprof", @@ -853,6 +923,9 @@ "rootfully", "rootlessly", "rtrzn", + "runbook", + "runbooks", + "runcmd", "runcommand", "runscript", "runtimes", 
@@ -867,6 +940,7 @@ "sandboxbd", "sasl", "scrollback", + "sealert", "seccomp", "secretname", "selectnongalleryapp", @@ -874,16 +948,23 @@ "selfhosted", "selfsubjectaccessreviews", "selfsubjectrulesreviews", + "semanage", + "semodule", "sendall", "seqno", + "serverlesscache", "serviceaccount", "serviceaccountname", "serviceaccounts", "serviceacct", "servicecfg", "serviceid", + "serviceusage", "sess", "session", + "sestatus", + "setenforce", + "setroubleshoot", "setspn", "sharded", "signup", @@ -917,6 +998,7 @@ "statefulset", "storageclasses", "storageenabled", + "streamable", "strslice", "structs", "subca", @@ -931,6 +1013,7 @@ "sysdba", "sysvinit", "tadmin", + "taskrole", "tbot", "tbotrole", "tcpip", @@ -951,7 +1034,10 @@ "teleportgithubconnector", "teleporthostname", "teleportinfra", + "teleportmwi", "teleportopensshserverv", + "teleportprovisiontoken", + "teleportprovisiontokenv2", "teleportproxy", "teleportrolesv", "teleportrolev", @@ -968,6 +1054,7 @@ "timechart", "timedatectl", "timekey", + "timesearch", "tlogs", "tlscacerts", "tlscert", @@ -1017,11 +1104,14 @@ "uuidgen", "viewstore", "vkxz", + "vmss", "vmcopy", "vmjm", "vnet", "vnetd", + "vsts", "vtgate", + "Vg5R6vpZffg", "walkthrough", "watcherjob", "webapi", @@ -1042,6 +1132,7 @@ "winserver", "workgroups", "wtmp", + "wtmpdb", "xample", "xauth", "xclip", @@ -1061,10 +1152,14 @@ "zxvf", "zztop" ], - "flagWords": ["hte"], + "flagWords": [ + "hte" + ], "ignorePaths": [ - "**/reference/terraform-provider/**", - "**/reference/operator-resources/**", - "**/includes/reference/code-blocks-no-cspell/**" + "**/includes/access-monitoring-events.mdx", + "**/includes/reference/code-blocks-no-cspell/**", + "**/reference/infrastructure-as-code/operator-resources/**", + "**/reference/infrastructure-as-code/teleport-resources/**", + "**/reference/infrastructure-as-code/terraform-provider/**" ] -} +} \ No newline at end of file diff --git a/docs/img/access-controls/access-lists/add-member.png 
b/docs/img/access-controls/access-lists/add-member.png index 66218c87aff7d..70ea68557da55 100644 Binary files a/docs/img/access-controls/access-lists/add-member.png and b/docs/img/access-controls/access-lists/add-member.png differ diff --git a/docs/img/access-controls/access-lists/create-new-access-list.png b/docs/img/access-controls/access-lists/create-new-access-list.png index 91f8e12b9dbfe..99fb91762dd2e 100644 Binary files a/docs/img/access-controls/access-lists/create-new-access-list.png and b/docs/img/access-controls/access-lists/create-new-access-list.png differ diff --git a/docs/img/access-controls/access-lists/create-test-user.png b/docs/img/access-controls/access-lists/create-test-user.png index 807dac48063e1..164e2987bfb41 100644 Binary files a/docs/img/access-controls/access-lists/create-test-user.png and b/docs/img/access-controls/access-lists/create-test-user.png differ diff --git a/docs/img/access-controls/access-lists/debug-app.png b/docs/img/access-controls/access-lists/debug-app.png index 29b24088e10c6..d5bd2ae3aea8f 100644 Binary files a/docs/img/access-controls/access-lists/debug-app.png and b/docs/img/access-controls/access-lists/debug-app.png differ diff --git a/docs/img/access-controls/access-lists/fill-out-access-list-fields.png b/docs/img/access-controls/access-lists/fill-out-access-list-fields.png index fcd52798aa1e6..bde7e0de5773c 100644 Binary files a/docs/img/access-controls/access-lists/fill-out-access-list-fields.png and b/docs/img/access-controls/access-lists/fill-out-access-list-fields.png differ diff --git a/docs/img/access-controls/access-lists/select-owner.png b/docs/img/access-controls/access-lists/select-owner.png index 87b93e189a42c..1023a9d4e1c54 100644 Binary files a/docs/img/access-controls/access-lists/select-owner.png and b/docs/img/access-controls/access-lists/select-owner.png differ diff --git a/docs/img/access-controls/automatic-reviews/create-automatic-review-rule.png 
b/docs/img/access-controls/automatic-reviews/create-automatic-review-rule.png deleted file mode 100644 index 109d73522615d..0000000000000 Binary files a/docs/img/access-controls/automatic-reviews/create-automatic-review-rule.png and /dev/null differ diff --git a/docs/img/access-controls/automatic-reviews/schedule-editor.png b/docs/img/access-controls/automatic-reviews/schedule-editor.png new file mode 100644 index 0000000000000..5b2ee0def76af Binary files /dev/null and b/docs/img/access-controls/automatic-reviews/schedule-editor.png differ diff --git a/docs/img/access-controls/automatic-reviews/users-edit.png b/docs/img/access-controls/automatic-reviews/users-edit.png index 6f665e17f0bcb..ab78134e03ead 100644 Binary files a/docs/img/access-controls/automatic-reviews/users-edit.png and b/docs/img/access-controls/automatic-reviews/users-edit.png differ diff --git a/docs/img/access-controls/automatic-reviews/view-access-requests.png b/docs/img/access-controls/automatic-reviews/view-access-requests.png deleted file mode 100644 index 4b148b3cce47d..0000000000000 Binary files a/docs/img/access-controls/automatic-reviews/view-access-requests.png and /dev/null differ diff --git a/docs/img/access-controls/device-trust/jamf-setup-1.png b/docs/img/access-controls/device-trust/jamf-setup-1.png index 473c83906e787..9b8475b446f4d 100644 Binary files a/docs/img/access-controls/device-trust/jamf-setup-1.png and b/docs/img/access-controls/device-trust/jamf-setup-1.png differ diff --git a/docs/img/access-controls/device-trust/jamf-setup-2.png b/docs/img/access-controls/device-trust/jamf-setup-2.png index fb376dee8e5ae..9c3d39b86d1e4 100644 Binary files a/docs/img/access-controls/device-trust/jamf-setup-2.png and b/docs/img/access-controls/device-trust/jamf-setup-2.png differ diff --git a/docs/img/access-controls/dual-authz/mattermost-0-enable.png b/docs/img/access-controls/dual-authz/mattermost-0-enable.png index 3d3e7e915a3b6..f311149bfc1d7 100644 Binary files 
a/docs/img/access-controls/dual-authz/mattermost-0-enable.png and b/docs/img/access-controls/dual-authz/mattermost-0-enable.png differ diff --git a/docs/img/access-controls/dual-authz/mattermost-1-bot.png b/docs/img/access-controls/dual-authz/mattermost-1-bot.png index 5c44f42646423..e96f6e7d91eb3 100644 Binary files a/docs/img/access-controls/dual-authz/mattermost-1-bot.png and b/docs/img/access-controls/dual-authz/mattermost-1-bot.png differ diff --git a/docs/img/access-controls/dual-authz/mattermost-2-all-permissions@2x.png b/docs/img/access-controls/dual-authz/mattermost-2-all-permissions@2x.png index 5473af4dfae5d..cb54e1a44e8b6 100644 Binary files a/docs/img/access-controls/dual-authz/mattermost-2-all-permissions@2x.png and b/docs/img/access-controls/dual-authz/mattermost-2-all-permissions@2x.png differ diff --git a/docs/img/access-controls/dual-authz/mattermost-5-request.png b/docs/img/access-controls/dual-authz/mattermost-5-request.png index 0f8c2d22eecae..4b300d62c6c42 100644 Binary files a/docs/img/access-controls/dual-authz/mattermost-5-request.png and b/docs/img/access-controls/dual-authz/mattermost-5-request.png differ diff --git a/docs/img/access-controls/dual-authz/role-new-request.png b/docs/img/access-controls/dual-authz/role-new-request.png index e9248e8705d2e..3087f5b745c92 100644 Binary files a/docs/img/access-controls/dual-authz/role-new-request.png and b/docs/img/access-controls/dual-authz/role-new-request.png differ diff --git a/docs/img/access-controls/hardware-key-support/agent.png b/docs/img/access-controls/hardware-key-support/agent.png index 3bff22c1dbf50..c84195e5a142f 100644 Binary files a/docs/img/access-controls/hardware-key-support/agent.png and b/docs/img/access-controls/hardware-key-support/agent.png differ diff --git a/docs/img/access-controls/idp-graph.png b/docs/img/access-controls/idp-graph.png index c69dad4a64697..29daa969b277f 100644 Binary files a/docs/img/access-controls/idp-graph.png and 
b/docs/img/access-controls/idp-graph.png differ diff --git a/docs/img/access-controls/saml-idp/access-app.png b/docs/img/access-controls/saml-idp/access-app.png deleted file mode 100644 index 28a66b3efa9a7..0000000000000 Binary files a/docs/img/access-controls/saml-idp/access-app.png and /dev/null differ diff --git a/docs/img/access-controls/saml-idp/add-entity-descriptor.png b/docs/img/access-controls/saml-idp/add-entity-descriptor.png deleted file mode 100644 index efb69b0fff918..0000000000000 Binary files a/docs/img/access-controls/saml-idp/add-entity-descriptor.png and /dev/null differ diff --git a/docs/img/access-controls/saml-idp/enroll-saml-app.png b/docs/img/access-controls/saml-idp/enroll-saml-app.png deleted file mode 100644 index f6ddb26df16bf..0000000000000 Binary files a/docs/img/access-controls/saml-idp/enroll-saml-app.png and /dev/null differ diff --git a/docs/img/access-controls/saml-idp/gcp-workforce/gcp-workforce-tile.png b/docs/img/access-controls/saml-idp/gcp-workforce/gcp-workforce-tile.png index b0a0b4ac38502..b793180c0e941 100644 Binary files a/docs/img/access-controls/saml-idp/gcp-workforce/gcp-workforce-tile.png and b/docs/img/access-controls/saml-idp/gcp-workforce/gcp-workforce-tile.png differ diff --git a/docs/img/access-controls/saml-idp/gcp-workforce/generate-command.png b/docs/img/access-controls/saml-idp/gcp-workforce/generate-command.png index 4521de5d16f10..ea9ce06a2a1e6 100644 Binary files a/docs/img/access-controls/saml-idp/gcp-workforce/generate-command.png and b/docs/img/access-controls/saml-idp/gcp-workforce/generate-command.png differ diff --git a/docs/img/access-controls/saml-idp/iamshowcase-protected-page.png b/docs/img/access-controls/saml-idp/iamshowcase-protected-page.png deleted file mode 100644 index 2574ff7b29111..0000000000000 Binary files a/docs/img/access-controls/saml-idp/iamshowcase-protected-page.png and /dev/null differ diff --git a/docs/img/access-controls/saml-idp/ms-entra-external-id/add-new-resource.png 
b/docs/img/access-controls/saml-idp/ms-entra-external-id/add-new-resource.png index 0fd1808aeeff1..652387e620735 100644 Binary files a/docs/img/access-controls/saml-idp/ms-entra-external-id/add-new-resource.png and b/docs/img/access-controls/saml-idp/ms-entra-external-id/add-new-resource.png differ diff --git a/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-add-idp.png b/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-add-idp.png index 8f0e9d777db9c..592af71e1babc 100644 Binary files a/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-add-idp.png and b/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-add-idp.png differ diff --git a/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-add-new-domain.png b/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-add-new-domain.png index 81b8a4d8f0a48..eb7eb55c5cf49 100644 Binary files a/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-add-new-domain.png and b/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-add-new-domain.png differ diff --git a/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-create-group.png b/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-create-group.png index b19423d1fa91f..9ad80cbc22865 100644 Binary files a/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-create-group.png and b/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-create-group.png differ diff --git a/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-create-resource-group.png b/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-create-resource-group.png index cbc67e516efa8..51d27a7a90287 100644 Binary files a/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-create-resource-group.png and b/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-create-resource-group.png differ diff --git 
a/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-invite-user.png b/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-invite-user.png index 011932974c2ef..2f69c18e94538 100644 Binary files a/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-invite-user.png and b/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-invite-user.png differ diff --git a/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-link-subscription.png b/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-link-subscription.png index 3ef0ab1160c89..059d7deab5f90 100644 Binary files a/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-link-subscription.png and b/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-link-subscription.png differ diff --git a/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-resource-group-iam.png b/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-resource-group-iam.png index 043da73a73408..58c79c82a18e0 100644 Binary files a/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-resource-group-iam.png and b/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-resource-group-iam.png differ diff --git a/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-role-assignment.png b/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-role-assignment.png index c3c3a27e4059d..f05e0030776bb 100644 Binary files a/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-role-assignment.png and b/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-role-assignment.png differ diff --git a/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-update-domain.png b/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-update-domain.png index afeab4e9dc252..636c3066aa293 100644 Binary files a/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-update-domain.png and 
b/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-update-domain.png differ diff --git a/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-user-consent.png b/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-user-consent.png index 68237ab22d883..a9c1d6dab09f6 100644 Binary files a/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-user-consent.png and b/docs/img/access-controls/saml-idp/ms-entra-external-id/entra-user-consent.png differ diff --git a/docs/img/access-controls/saml-idp/protected-page-access.png b/docs/img/access-controls/saml-idp/protected-page-access.png deleted file mode 100644 index a7e5682a6053c..0000000000000 Binary files a/docs/img/access-controls/saml-idp/protected-page-access.png and /dev/null differ diff --git a/docs/img/access-controls/saml-idp/rsa-simplesp-protected-page.png b/docs/img/access-controls/saml-idp/rsa-simplesp-protected-page.png new file mode 100644 index 0000000000000..2412258e9ddef Binary files /dev/null and b/docs/img/access-controls/saml-idp/rsa-simplesp-protected-page.png differ diff --git a/docs/img/access-controls/saml-idp/step-1-create-role-app-label.png b/docs/img/access-controls/saml-idp/step-1-create-role-app-label.png new file mode 100644 index 0000000000000..29409c483c6fa Binary files /dev/null and b/docs/img/access-controls/saml-idp/step-1-create-role-app-label.png differ diff --git a/docs/img/access-controls/saml-idp/step-1-create-role-saml-resource.png b/docs/img/access-controls/saml-idp/step-1-create-role-saml-resource.png new file mode 100644 index 0000000000000..eb8397b157521 Binary files /dev/null and b/docs/img/access-controls/saml-idp/step-1-create-role-saml-resource.png differ diff --git a/docs/img/access-controls/saml-idp/step-2-copy-metadata.png b/docs/img/access-controls/saml-idp/step-2-copy-metadata.png new file mode 100644 index 0000000000000..7476e622ef141 Binary files /dev/null and b/docs/img/access-controls/saml-idp/step-2-copy-metadata.png differ diff 
--git a/docs/img/access-controls/saml-idp/step-2-find-saml-discover-tile.png b/docs/img/access-controls/saml-idp/step-2-find-saml-discover-tile.png new file mode 100644 index 0000000000000..e7703fb0791a2 Binary files /dev/null and b/docs/img/access-controls/saml-idp/step-2-find-saml-discover-tile.png differ diff --git a/docs/img/access-controls/saml-idp/step-3-add-service-provider-ed.png b/docs/img/access-controls/saml-idp/step-3-add-service-provider-ed.png new file mode 100644 index 0000000000000..17c4345f96d92 Binary files /dev/null and b/docs/img/access-controls/saml-idp/step-3-add-service-provider-ed.png differ diff --git a/docs/img/access-controls/saml-idp/step-3-add-service-provider.png b/docs/img/access-controls/saml-idp/step-3-add-service-provider.png new file mode 100644 index 0000000000000..36397e5b9578d Binary files /dev/null and b/docs/img/access-controls/saml-idp/step-3-add-service-provider.png differ diff --git a/docs/img/access-controls/saml-idp/step-4-login.png b/docs/img/access-controls/saml-idp/step-4-login.png new file mode 100644 index 0000000000000..c48c8a2579585 Binary files /dev/null and b/docs/img/access-controls/saml-idp/step-4-login.png differ diff --git a/docs/img/access-controls/saml-idp/step-4-service-provider-authenticated-page.png b/docs/img/access-controls/saml-idp/step-4-service-provider-authenticated-page.png new file mode 100644 index 0000000000000..4c9d4494a5bb5 Binary files /dev/null and b/docs/img/access-controls/saml-idp/step-4-service-provider-authenticated-page.png differ diff --git a/docs/img/access-controls/saml-idp/teleport-idp-metadata.png b/docs/img/access-controls/saml-idp/teleport-idp-metadata.png deleted file mode 100644 index 4b02f5d5b0571..0000000000000 Binary files a/docs/img/access-controls/saml-idp/teleport-idp-metadata.png and /dev/null differ diff --git a/docs/img/access-graph/alerts.png b/docs/img/access-graph/alerts.png new file mode 100644 index 0000000000000..808d784e72909 Binary files /dev/null and 
b/docs/img/access-graph/alerts.png differ diff --git a/docs/img/access-graph/aws/eks-audit-settings.png b/docs/img/access-graph/aws/eks-audit-settings.png new file mode 100644 index 0000000000000..2800035fe48c6 Binary files /dev/null and b/docs/img/access-graph/aws/eks-audit-settings.png differ diff --git a/docs/img/access-graph/aws/sns-settings.png b/docs/img/access-graph/aws/sns-settings.png new file mode 100644 index 0000000000000..cc3e27af4c56a Binary files /dev/null and b/docs/img/access-graph/aws/sns-settings.png differ diff --git a/docs/img/access-graph/aws/trail-settings.png b/docs/img/access-graph/aws/trail-settings.png new file mode 100644 index 0000000000000..3b3c1d2a97a5a Binary files /dev/null and b/docs/img/access-graph/aws/trail-settings.png differ diff --git a/docs/img/access-graph/graph-explorer-ui.png b/docs/img/access-graph/graph-explorer-ui.png new file mode 100644 index 0000000000000..2df44cb08f5f9 Binary files /dev/null and b/docs/img/access-graph/graph-explorer-ui.png differ diff --git a/docs/img/access-graph/integrations.png b/docs/img/access-graph/integrations.png deleted file mode 100644 index e12762cbb3b3f..0000000000000 Binary files a/docs/img/access-graph/integrations.png and /dev/null differ diff --git a/docs/img/access-graph/investigate.png b/docs/img/access-graph/investigate.png new file mode 100644 index 0000000000000..88a77d8fcaf4f Binary files /dev/null and b/docs/img/access-graph/investigate.png differ diff --git a/docs/img/access-monitoring/exec_passwd.png b/docs/img/access-monitoring/exec_passwd.png index 07c9b6ee71a92..b42e30b4b43e7 100644 Binary files a/docs/img/access-monitoring/exec_passwd.png and b/docs/img/access-monitoring/exec_passwd.png differ diff --git a/docs/img/add-resources.png b/docs/img/add-resources.png deleted file mode 100644 index ed17f6eef9f3d..0000000000000 Binary files a/docs/img/add-resources.png and /dev/null differ diff --git a/docs/img/adfs-add-provider-trust.png b/docs/img/adfs-add-provider-trust.png 
index 4822b79ed16f8..fa906b9a5012f 100644 Binary files a/docs/img/adfs-add-provider-trust.png and b/docs/img/adfs-add-provider-trust.png differ diff --git a/docs/img/adfs-add-teleport-cert.png b/docs/img/adfs-add-teleport-cert.png index 7b38942ee2d5a..6f3d9e9b32cfa 100644 Binary files a/docs/img/adfs-add-teleport-cert.png and b/docs/img/adfs-add-teleport-cert.png differ diff --git a/docs/img/agent-pool-diagram.png b/docs/img/agent-pool-diagram.png index 291fdbc7cc2f3..07d75d87eb2d6 100644 Binary files a/docs/img/agent-pool-diagram.png and b/docs/img/agent-pool-diagram.png differ diff --git a/docs/img/application-access/architecture.svg b/docs/img/application-access/architecture.svg deleted file mode 100644 index 2b6e7d9f6f722..0000000000000 --- a/docs/img/application-access/architecture.svg +++ /dev/null @@ -1,11336 +0,0 @@ [… 11,336 lines of deleted SVG markup omitted …] \ No newline at end of file diff --git a/docs/img/application-access/attach-security-role.png b/docs/img/application-access/attach-security-role.png deleted file mode 100644 index 67e2e669e4eb2..0000000000000 Binary files a/docs/img/application-access/attach-security-role.png and /dev/null differ diff --git a/docs/img/application-access/aws-trusted-entities@2x.png b/docs/img/application-access/aws-trusted-entities@2x.png deleted file mode 100644 index 72ee9edf49a63..0000000000000 Binary files a/docs/img/application-access/aws-trusted-entities@2x.png and /dev/null differ diff --git a/docs/img/application-access/create-role-example-readonly-1.png b/docs/img/application-access/create-role-example-readonly-1.png deleted file mode 100644 index 8eff57f53ede4..0000000000000 Binary files a/docs/img/application-access/create-role-example-readonly-1.png and /dev/null differ diff --git a/docs/img/application-access/create-role-example-readonly-2.png b/docs/img/application-access/create-role-example-readonly-2.png deleted file mode 100644 index b113ea49c64fb..0000000000000 Binary files a/docs/img/application-access/create-role-example-readonly-2.png and /dev/null differ diff --git a/docs/img/application-access/create-role-example-readonly-3.png b/docs/img/application-access/create-role-example-readonly-3.png deleted file mode 100644 index adda1744eecfb..0000000000000 Binary files a/docs/img/application-access/create-role-example-readonly-3.png and /dev/null differ diff --git a/docs/img/application-access/okta/okta-application-access-request.png b/docs/img/application-access/okta/okta-application-access-request.png deleted file mode 100644 index b7a9a5d63f2dc..0000000000000 Binary files a/docs/img/application-access/okta/okta-application-access-request.png and /dev/null differ diff --git a/docs/img/application-access/okta/okta-user-group-access-request.png b/docs/img/application-access/okta/okta-user-group-access-request.png
deleted file mode 100644 index 8e263659dc73b..0000000000000 Binary files a/docs/img/application-access/okta/okta-user-group-access-request.png and /dev/null differ diff --git a/docs/img/architecture/managed-updates-v2.svg b/docs/img/architecture/managed-updates-v2.svg new file mode 100644 index 0000000000000..db5eb1f0e7266 --- /dev/null +++ b/docs/img/architecture/managed-updates-v2.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/docs/img/architecture/ssh-tunnel-mode@1.2x.png b/docs/img/architecture/ssh-tunnel-mode@1.2x.png deleted file mode 100644 index d2247a64c62f1..0000000000000 Binary files a/docs/img/architecture/ssh-tunnel-mode@1.2x.png and /dev/null differ diff --git a/docs/img/auto-discovery/auto-discovery-hero.jpg b/docs/img/auto-discovery/auto-discovery-hero.jpg new file mode 100644 index 0000000000000..2abd692a0d6a5 Binary files /dev/null and b/docs/img/auto-discovery/auto-discovery-hero.jpg differ diff --git a/docs/img/aws/allow-tags.png b/docs/img/aws/allow-tags.png index 43cbbca289c14..71932b555bd53 100644 Binary files a/docs/img/aws/allow-tags.png and b/docs/img/aws/allow-tags.png differ diff --git a/docs/img/aws/enroll-aws-access-awsoidc.png b/docs/img/aws/enroll-aws-access-awsoidc.png new file mode 100644 index 0000000000000..225c15a7d316c Binary files /dev/null and b/docs/img/aws/enroll-aws-access-awsoidc.png differ diff --git a/docs/img/aws/instance-settings.png b/docs/img/aws/instance-settings.png index bf8d600d40145..4ceadf84f259b 100644 Binary files a/docs/img/aws/instance-settings.png and b/docs/img/aws/instance-settings.png differ diff --git a/docs/img/aws/launch-instance-advanced-options.png b/docs/img/aws/launch-instance-advanced-options.png index c9ba1f5e342d4..a6202286d8394 100644 Binary files a/docs/img/aws/launch-instance-advanced-options.png and b/docs/img/aws/launch-instance-advanced-options.png differ diff --git a/docs/img/azure/add-custom-role@2x.png b/docs/img/azure/add-custom-role@2x.png index 
6d60ce3bbd4bf..bb01f9f494e76 100644 Binary files a/docs/img/azure/add-custom-role@2x.png and b/docs/img/azure/add-custom-role@2x.png differ diff --git a/docs/img/azure/add-role-assignment.png b/docs/img/azure/add-role-assignment.png index a994494106061..8254181825258 100644 Binary files a/docs/img/azure/add-role-assignment.png and b/docs/img/azure/add-role-assignment.png differ diff --git a/docs/img/azure/app-registrations@2x.png b/docs/img/azure/app-registrations@2x.png index 3915b71c6c967..c0e6d19afd7c1 100644 Binary files a/docs/img/azure/app-registrations@2x.png and b/docs/img/azure/app-registrations@2x.png differ diff --git a/docs/img/azure/create-custom-role.png b/docs/img/azure/create-custom-role.png index 09a6cd6143bb0..569bf30ae64f4 100644 Binary files a/docs/img/azure/create-custom-role.png and b/docs/img/azure/create-custom-role.png differ diff --git a/docs/img/azure/managed-identities@2x.png b/docs/img/azure/managed-identities@2x.png index be02c8cf0ee63..b499ad4c11e7d 100644 Binary files a/docs/img/azure/managed-identities@2x.png and b/docs/img/azure/managed-identities@2x.png differ diff --git a/docs/img/azure/read-permission.png b/docs/img/azure/read-permission.png index 73fdba7f50475..34f46d470e7eb 100644 Binary files a/docs/img/azure/read-permission.png and b/docs/img/azure/read-permission.png differ diff --git a/docs/img/azure/registered-app-secrets@2x.png b/docs/img/azure/registered-app-secrets@2x.png index e4b3f80a85209..9028421ac773c 100644 Binary files a/docs/img/azure/registered-app-secrets@2x.png and b/docs/img/azure/registered-app-secrets@2x.png differ diff --git a/docs/img/azuread/azuread-4-turnoffuserassign.png b/docs/img/azuread/azuread-4-turnoffuserassign.png index 99d125715f824..7dc91fe613b49 100644 Binary files a/docs/img/azuread/azuread-4-turnoffuserassign.png and b/docs/img/azuread/azuread-4-turnoffuserassign.png differ diff --git a/docs/img/azuread/azuread-7-entityandreplyurl.png b/docs/img/azuread/azuread-7-entityandreplyurl.png 
index ce78e5bb2849a..9008dcd220dba 100644 Binary files a/docs/img/azuread/azuread-7-entityandreplyurl.png and b/docs/img/azuread/azuread-7-entityandreplyurl.png differ diff --git a/docs/img/azuread/azuread-8a-nameidentifier.png b/docs/img/azuread/azuread-8a-nameidentifier.png index ef28e3d29ecf6..e30482cb47f78 100644 Binary files a/docs/img/azuread/azuread-8a-nameidentifier.png and b/docs/img/azuread/azuread-8a-nameidentifier.png differ diff --git a/docs/img/azuread/azuread-8b-groupclaim.png b/docs/img/azuread/azuread-8b-groupclaim.png index c29256719d233..b225b589a4e4d 100644 Binary files a/docs/img/azuread/azuread-8b-groupclaim.png and b/docs/img/azuread/azuread-8b-groupclaim.png differ diff --git a/docs/img/azuread/azuread-8c-usernameclaim.png b/docs/img/azuread/azuread-8c-usernameclaim.png index 884df14c9eca1..c156ea5abb569 100644 Binary files a/docs/img/azuread/azuread-8c-usernameclaim.png and b/docs/img/azuread/azuread-8c-usernameclaim.png differ diff --git a/docs/img/azuread/azuread-9-fedmeatadataxml.png b/docs/img/azuread/azuread-9-fedmeatadataxml.png index c5bbb977f4b10..6c54e40a0b31a 100644 Binary files a/docs/img/azuread/azuread-9-fedmeatadataxml.png and b/docs/img/azuread/azuread-9-fedmeatadataxml.png differ diff --git a/docs/img/azuread/azuread-nameid.png b/docs/img/azuread/azuread-nameid.png index b2bc772294612..e925f1249b1f4 100644 Binary files a/docs/img/azuread/azuread-nameid.png and b/docs/img/azuread/azuread-nameid.png differ diff --git a/docs/img/cloud/getting-started/add-my-first-resource@2x.png b/docs/img/cloud/getting-started/add-my-first-resource@2x.png deleted file mode 100644 index 9cb0e68c22691..0000000000000 Binary files a/docs/img/cloud/getting-started/add-my-first-resource@2x.png and /dev/null differ diff --git a/docs/img/cloud/getting-started/choose-resource@2x.png b/docs/img/cloud/getting-started/choose-resource@2x.png deleted file mode 100644 index a84e85b18486f..0000000000000 Binary files 
a/docs/img/cloud/getting-started/choose-resource@2x.png and /dev/null differ diff --git a/docs/img/cloud/getting-started/identity-platform.png b/docs/img/cloud/getting-started/identity-platform.png deleted file mode 100644 index b5f33a789fed5..0000000000000 Binary files a/docs/img/cloud/getting-started/identity-platform.png and /dev/null differ diff --git a/docs/img/cloud/getting-started/intro-velocity-resiliency.png b/docs/img/cloud/getting-started/intro-velocity-resiliency.png deleted file mode 100644 index c91c91d1fc650..0000000000000 Binary files a/docs/img/cloud/getting-started/intro-velocity-resiliency.png and /dev/null differ diff --git a/docs/img/cloud/getting-started/paste-script@2x.png b/docs/img/cloud/getting-started/paste-script@2x.png deleted file mode 100644 index 747dab39340e5..0000000000000 Binary files a/docs/img/cloud/getting-started/paste-script@2x.png and /dev/null differ diff --git a/docs/img/cloud/getting-started/session-recordings@2x.png b/docs/img/cloud/getting-started/session-recordings@2x.png deleted file mode 100644 index 2b628c91437f6..0000000000000 Binary files a/docs/img/cloud/getting-started/session-recordings@2x.png and /dev/null differ diff --git a/docs/img/cloud/getting-started/set-up-access@2x.png b/docs/img/cloud/getting-started/set-up-access@2x.png deleted file mode 100644 index cf75d23f4e2df..0000000000000 Binary files a/docs/img/cloud/getting-started/set-up-access@2x.png and /dev/null differ diff --git a/docs/img/cloud/getting-started/successfully-connected@2x.png b/docs/img/cloud/getting-started/successfully-connected@2x.png deleted file mode 100644 index d8564d4bd1507..0000000000000 Binary files a/docs/img/cloud/getting-started/successfully-connected@2x.png and /dev/null differ diff --git a/docs/img/cloud/getting-started/test-connection@2x.png b/docs/img/cloud/getting-started/test-connection@2x.png deleted file mode 100644 index 3059567370d76..0000000000000 Binary files a/docs/img/cloud/getting-started/test-connection@2x.png 
and /dev/null differ diff --git a/docs/img/cloud/notification-complete.png b/docs/img/cloud/notification-complete.png index e31486a61241e..e148893ae3480 100644 Binary files a/docs/img/cloud/notification-complete.png and b/docs/img/cloud/notification-complete.png differ diff --git a/docs/img/cloud/notification-message.png b/docs/img/cloud/notification-message.png index 1f4d69893bd03..f8d5a5fc0c9e7 100644 Binary files a/docs/img/cloud/notification-message.png and b/docs/img/cloud/notification-message.png differ diff --git a/docs/img/cloud/notification-scheduled.png b/docs/img/cloud/notification-scheduled.png index 50e8f37c48d8a..0fcb517841d23 100644 Binary files a/docs/img/cloud/notification-scheduled.png and b/docs/img/cloud/notification-scheduled.png differ diff --git a/docs/img/cloud/recovery-codes.png b/docs/img/cloud/recovery-codes.png index 3c533345d4efb..60e1edd113c87 100644 Binary files a/docs/img/cloud/recovery-codes.png and b/docs/img/cloud/recovery-codes.png differ diff --git a/docs/img/cloud/security-business-contacts.png b/docs/img/cloud/security-business-contacts.png new file mode 100644 index 0000000000000..2f4a448674435 Binary files /dev/null and b/docs/img/cloud/security-business-contacts.png differ diff --git a/docs/img/connect-your-client/model-context-protocol/usage-example-database-access.png b/docs/img/connect-your-client/model-context-protocol/usage-example-database-access.png new file mode 100644 index 0000000000000..29ecdfdc24a0e Binary files /dev/null and b/docs/img/connect-your-client/model-context-protocol/usage-example-database-access.png differ diff --git a/docs/img/connect-your-client/model-context-protocol/usage-example-everything.png b/docs/img/connect-your-client/model-context-protocol/usage-example-everything.png new file mode 100644 index 0000000000000..93d2101e8e08e Binary files /dev/null and b/docs/img/connect-your-client/model-context-protocol/usage-example-everything.png differ diff --git 
a/docs/img/connect-your-client/model-context-protocol/usage-kube-context.png b/docs/img/connect-your-client/model-context-protocol/usage-kube-context.png new file mode 100644 index 0000000000000..3ca08582424e9 Binary files /dev/null and b/docs/img/connect-your-client/model-context-protocol/usage-kube-context.png differ diff --git a/docs/img/connect-your-client/model-context-protocol/usage-kube-pod.png b/docs/img/connect-your-client/model-context-protocol/usage-kube-pod.png new file mode 100644 index 0000000000000..74b30a394c5d9 Binary files /dev/null and b/docs/img/connect-your-client/model-context-protocol/usage-kube-pod.png differ diff --git a/docs/img/connect-your-client/putty-console.png b/docs/img/connect-your-client/putty-console.png index ff83d635004cf..2387eb5c4adf4 100644 Binary files a/docs/img/connect-your-client/putty-console.png and b/docs/img/connect-your-client/putty-console.png differ diff --git a/docs/img/connect-your-client/putty-window.png b/docs/img/connect-your-client/putty-window.png index 39899857ecad9..0e4079ba7fc8f 100644 Binary files a/docs/img/connect-your-client/putty-window.png and b/docs/img/connect-your-client/putty-window.png differ diff --git a/docs/img/connect-your-client/webui-db-sessions/connect-dialog-custom.png b/docs/img/connect-your-client/webui-db-sessions/connect-dialog-custom.png index 1ed7f0e494230..977a6ab3485a2 100644 Binary files a/docs/img/connect-your-client/webui-db-sessions/connect-dialog-custom.png and b/docs/img/connect-your-client/webui-db-sessions/connect-dialog-custom.png differ diff --git a/docs/img/connect-your-client/webui-db-sessions/connect-dialog.png b/docs/img/connect-your-client/webui-db-sessions/connect-dialog.png index f1614f0a2826e..6de33d54f9ffa 100644 Binary files a/docs/img/connect-your-client/webui-db-sessions/connect-dialog.png and b/docs/img/connect-your-client/webui-db-sessions/connect-dialog.png differ diff --git a/docs/img/connect-your-client/webui-db-sessions/resources-connect-dialog.png 
b/docs/img/connect-your-client/webui-db-sessions/resources-connect-dialog.png index 51137d8769469..97bc81c5ef007 100644 Binary files a/docs/img/connect-your-client/webui-db-sessions/resources-connect-dialog.png and b/docs/img/connect-your-client/webui-db-sessions/resources-connect-dialog.png differ diff --git a/docs/img/connect-your-client/webui-db-sessions/session-terminal.png b/docs/img/connect-your-client/webui-db-sessions/session-terminal.png index c062ec00e2ace..6ab4d48a024f7 100644 Binary files a/docs/img/connect-your-client/webui-db-sessions/session-terminal.png and b/docs/img/connect-your-client/webui-db-sessions/session-terminal.png differ diff --git a/docs/img/connect-your-client/winscp-1.png b/docs/img/connect-your-client/winscp-1.png index 79db262681dcd..f1d1160556dd9 100644 Binary files a/docs/img/connect-your-client/winscp-1.png and b/docs/img/connect-your-client/winscp-1.png differ diff --git a/docs/img/connect-your-client/winscp-2.png b/docs/img/connect-your-client/winscp-2.png index a39037c8d2ed5..004efa24468e9 100644 Binary files a/docs/img/connect-your-client/winscp-2.png and b/docs/img/connect-your-client/winscp-2.png differ diff --git a/docs/img/connect-your-client/winscp-3.png b/docs/img/connect-your-client/winscp-3.png index 9f50be49c1868..84433e4a0accc 100644 Binary files a/docs/img/connect-your-client/winscp-3.png and b/docs/img/connect-your-client/winscp-3.png differ diff --git a/docs/img/connect-your-client/winscp-4.png b/docs/img/connect-your-client/winscp-4.png index 47b51b900443f..368e74af480f9 100644 Binary files a/docs/img/connect-your-client/winscp-4.png and b/docs/img/connect-your-client/winscp-4.png differ diff --git a/docs/img/connect-your-client/winscp-5.png b/docs/img/connect-your-client/winscp-5.png index b070dde676047..b3302f98da779 100644 Binary files a/docs/img/connect-your-client/winscp-5.png and b/docs/img/connect-your-client/winscp-5.png differ diff --git a/docs/img/connect-your-client/winscp-6.png 
b/docs/img/connect-your-client/winscp-6.png index aebc66b44bd46..088f645e04bee 100644 Binary files a/docs/img/connect-your-client/winscp-6.png and b/docs/img/connect-your-client/winscp-6.png differ diff --git a/docs/img/database-access/azure-data-studio-connection.png b/docs/img/database-access/azure-data-studio-connection.png index df7e363f75630..698c78530b4d2 100644 Binary files a/docs/img/database-access/azure-data-studio-connection.png and b/docs/img/database-access/azure-data-studio-connection.png differ diff --git a/docs/img/database-access/compass-hostname@2x.png b/docs/img/database-access/compass-hostname@2x.png index 5ba28a4e3f2f3..b9871ee37925e 100644 Binary files a/docs/img/database-access/compass-hostname@2x.png and b/docs/img/database-access/compass-hostname@2x.png differ diff --git a/docs/img/database-access/compass-more-options@2x.png b/docs/img/database-access/compass-more-options@2x.png index 6ab07d6f694de..18b0028bde341 100644 Binary files a/docs/img/database-access/compass-more-options@2x.png and b/docs/img/database-access/compass-more-options@2x.png differ diff --git a/docs/img/database-access/compass-new-connection@2x.png b/docs/img/database-access/compass-new-connection@2x.png index 55e4395d6f6f4..b72737fd3badf 100644 Binary files a/docs/img/database-access/compass-new-connection@2x.png and b/docs/img/database-access/compass-new-connection@2x.png differ diff --git a/docs/img/database-access/dbeaver-add-server.png b/docs/img/database-access/dbeaver-add-server.png index 4bbbd29ade0a5..71984aa7042f1 100644 Binary files a/docs/img/database-access/dbeaver-add-server.png and b/docs/img/database-access/dbeaver-add-server.png differ diff --git a/docs/img/database-access/dbeaver-configure-server.png b/docs/img/database-access/dbeaver-configure-server.png index 9f5a12a66131b..99c9aa1834622 100644 Binary files a/docs/img/database-access/dbeaver-configure-server.png and b/docs/img/database-access/dbeaver-configure-server.png differ diff --git 
a/docs/img/database-access/dbeaver-configure-user.png b/docs/img/database-access/dbeaver-configure-user.png index e195e410e4923..2d3792edd1ee2 100644 Binary files a/docs/img/database-access/dbeaver-configure-user.png and b/docs/img/database-access/dbeaver-configure-user.png differ diff --git a/docs/img/database-access/dbeaver-driver-settings.png b/docs/img/database-access/dbeaver-driver-settings.png index d4fa24e8eb440..79eda89b0ebda 100644 Binary files a/docs/img/database-access/dbeaver-driver-settings.png and b/docs/img/database-access/dbeaver-driver-settings.png differ diff --git a/docs/img/database-access/dbeaver-pg-configure-server.png b/docs/img/database-access/dbeaver-pg-configure-server.png index 12a597709fc09..1670a74a28610 100644 Binary files a/docs/img/database-access/dbeaver-pg-configure-server.png and b/docs/img/database-access/dbeaver-pg-configure-server.png differ diff --git a/docs/img/database-access/dbeaver-select-driver.png b/docs/img/database-access/dbeaver-select-driver.png index b9743875fec7f..5f9de26b58d9c 100644 Binary files a/docs/img/database-access/dbeaver-select-driver.png and b/docs/img/database-access/dbeaver-select-driver.png differ diff --git a/docs/img/database-access/enrollment-wizard/db-connection-test-timeout.png b/docs/img/database-access/enrollment-wizard/db-connection-test-timeout.png index 8673b8f272744..6dff15f0c38d1 100644 Binary files a/docs/img/database-access/enrollment-wizard/db-connection-test-timeout.png and b/docs/img/database-access/enrollment-wizard/db-connection-test-timeout.png differ diff --git a/docs/img/database-access/enrollment-wizard/ecs-clusters-overview@2x.png b/docs/img/database-access/enrollment-wizard/ecs-clusters-overview@2x.png index 248e88eeb5a48..5da39ef2f3dd0 100644 Binary files a/docs/img/database-access/enrollment-wizard/ecs-clusters-overview@2x.png and b/docs/img/database-access/enrollment-wizard/ecs-clusters-overview@2x.png differ diff --git 
a/docs/img/database-access/enrollment-wizard/ecs-deployment-tasks-overview.png b/docs/img/database-access/enrollment-wizard/ecs-deployment-tasks-overview.png index 7fb4d86b6555e..3bc652e592647 100644 Binary files a/docs/img/database-access/enrollment-wizard/ecs-deployment-tasks-overview.png and b/docs/img/database-access/enrollment-wizard/ecs-deployment-tasks-overview.png differ diff --git a/docs/img/database-access/enrollment-wizard/ecs-networking-panel.png b/docs/img/database-access/enrollment-wizard/ecs-networking-panel.png index 12b8d370b3ebc..9779248561aa1 100644 Binary files a/docs/img/database-access/enrollment-wizard/ecs-networking-panel.png and b/docs/img/database-access/enrollment-wizard/ecs-networking-panel.png differ diff --git a/docs/img/database-access/enrollment-wizard/ecs-service-network-config.png b/docs/img/database-access/enrollment-wizard/ecs-service-network-config.png index c2b8404f7a83e..9bfcd96885731 100644 Binary files a/docs/img/database-access/enrollment-wizard/ecs-service-network-config.png and b/docs/img/database-access/enrollment-wizard/ecs-service-network-config.png differ diff --git a/docs/img/database-access/enrollment-wizard/ecs-services-overview@2x.png b/docs/img/database-access/enrollment-wizard/ecs-services-overview@2x.png index bcc16559cf9da..aa9103834e462 100644 Binary files a/docs/img/database-access/enrollment-wizard/ecs-services-overview@2x.png and b/docs/img/database-access/enrollment-wizard/ecs-services-overview@2x.png differ diff --git a/docs/img/database-access/enrollment-wizard/ecs-task-logs@2x.png b/docs/img/database-access/enrollment-wizard/ecs-task-logs@2x.png index a979613535291..fa3628ee44640 100644 Binary files a/docs/img/database-access/enrollment-wizard/ecs-task-logs@2x.png and b/docs/img/database-access/enrollment-wizard/ecs-task-logs@2x.png differ diff --git a/docs/img/database-access/enrollment-wizard/example-network-analyzer-results@2x.png 
b/docs/img/database-access/enrollment-wizard/example-network-analyzer-results@2x.png index 93fa709557eb6..134806b700f96 100644 Binary files a/docs/img/database-access/enrollment-wizard/example-network-analyzer-results@2x.png and b/docs/img/database-access/enrollment-wizard/example-network-analyzer-results@2x.png differ diff --git a/docs/img/database-access/enrollment-wizard/example-outbound-internet-rule.png b/docs/img/database-access/enrollment-wizard/example-outbound-internet-rule.png index 02cf748ccf1b0..0ef6a86e81e87 100644 Binary files a/docs/img/database-access/enrollment-wizard/example-outbound-internet-rule.png and b/docs/img/database-access/enrollment-wizard/example-outbound-internet-rule.png differ diff --git a/docs/img/database-access/enrollment-wizard/example-rds-sg-inbound.png b/docs/img/database-access/enrollment-wizard/example-rds-sg-inbound.png index 84c2cbeca2f92..669b1377f38b0 100644 Binary files a/docs/img/database-access/enrollment-wizard/example-rds-sg-inbound.png and b/docs/img/database-access/enrollment-wizard/example-rds-sg-inbound.png differ diff --git a/docs/img/database-access/enrollment-wizard/example-vpc-routes-overview.png b/docs/img/database-access/enrollment-wizard/example-vpc-routes-overview.png index 2c8c553b83e7f..4042a2a0a480c 100644 Binary files a/docs/img/database-access/enrollment-wizard/example-vpc-routes-overview.png and b/docs/img/database-access/enrollment-wizard/example-vpc-routes-overview.png differ diff --git a/docs/img/database-access/enrollment-wizard/resource-panel-rds.png b/docs/img/database-access/enrollment-wizard/resource-panel-rds.png index d2089f5465b9e..25f0f8092b777 100644 Binary files a/docs/img/database-access/enrollment-wizard/resource-panel-rds.png and b/docs/img/database-access/enrollment-wizard/resource-panel-rds.png differ diff --git a/docs/img/database-access/enrollment-wizard/stopped-ecs-tasks@2x.png b/docs/img/database-access/enrollment-wizard/stopped-ecs-tasks@2x.png index 
1bee0654e141c..b757c07958431 100644 Binary files a/docs/img/database-access/enrollment-wizard/stopped-ecs-tasks@2x.png and b/docs/img/database-access/enrollment-wizard/stopped-ecs-tasks@2x.png differ diff --git a/docs/img/database-access/guides/alloydb/add-user-account.png b/docs/img/database-access/guides/alloydb/add-user-account.png new file mode 100644 index 0000000000000..04bc2b8b6900a Binary files /dev/null and b/docs/img/database-access/guides/alloydb/add-user-account.png differ diff --git a/docs/img/database-access/guides/alloydb/architecture.png b/docs/img/database-access/guides/alloydb/architecture.png new file mode 100644 index 0000000000000..eaf49e4669317 Binary files /dev/null and b/docs/img/database-access/guides/alloydb/architecture.png differ diff --git a/docs/img/database-access/guides/alloydb/connection-uri.png b/docs/img/database-access/guides/alloydb/connection-uri.png new file mode 100644 index 0000000000000..9f21211cd4da2 Binary files /dev/null and b/docs/img/database-access/guides/alloydb/connection-uri.png differ diff --git a/docs/img/database-access/guides/alloydb/create-system-service-account.png b/docs/img/database-access/guides/alloydb/create-system-service-account.png new file mode 100644 index 0000000000000..4d983bd5f9bd4 Binary files /dev/null and b/docs/img/database-access/guides/alloydb/create-system-service-account.png differ diff --git a/docs/img/database-access/guides/alloydb/create-user-service-account.png b/docs/img/database-access/guides/alloydb/create-user-service-account.png new file mode 100644 index 0000000000000..e2fc01790140a Binary files /dev/null and b/docs/img/database-access/guides/alloydb/create-user-service-account.png differ diff --git a/docs/img/database-access/guides/alloydb/flag-iam-authentication-on.png b/docs/img/database-access/guides/alloydb/flag-iam-authentication-on.png new file mode 100644 index 0000000000000..5b7dbc1b32b56 Binary files /dev/null and 
b/docs/img/database-access/guides/alloydb/flag-iam-authentication-on.png differ diff --git a/docs/img/database-access/guides/alloydb/service-account-keys.png b/docs/img/database-access/guides/alloydb/service-account-keys.png new file mode 100644 index 0000000000000..8fa5dbdb34ce1 Binary files /dev/null and b/docs/img/database-access/guides/alloydb/service-account-keys.png differ diff --git a/docs/img/database-access/guides/alloydb/service-account-new-key.png b/docs/img/database-access/guides/alloydb/service-account-new-key.png new file mode 100644 index 0000000000000..d1de449b2621f Binary files /dev/null and b/docs/img/database-access/guides/alloydb/service-account-new-key.png differ diff --git a/docs/img/database-access/guides/alloydb/system-service-account-permissions.png b/docs/img/database-access/guides/alloydb/system-service-account-permissions.png new file mode 100644 index 0000000000000..9692736c0a970 Binary files /dev/null and b/docs/img/database-access/guides/alloydb/system-service-account-permissions.png differ diff --git a/docs/img/database-access/guides/alloydb/user-account-added.png b/docs/img/database-access/guides/alloydb/user-account-added.png new file mode 100644 index 0000000000000..30474ddf6f394 Binary files /dev/null and b/docs/img/database-access/guides/alloydb/user-account-added.png differ diff --git a/docs/img/database-access/guides/alloydb/user-service-account-grant-access.png b/docs/img/database-access/guides/alloydb/user-service-account-grant-access.png new file mode 100644 index 0000000000000..3723d140c9c02 Binary files /dev/null and b/docs/img/database-access/guides/alloydb/user-service-account-grant-access.png differ diff --git a/docs/img/database-access/guides/alloydb/user-service-account-permissions.png b/docs/img/database-access/guides/alloydb/user-service-account-permissions.png new file mode 100644 index 0000000000000..9c0e1353489cd Binary files /dev/null and 
b/docs/img/database-access/guides/alloydb/user-service-account-permissions.png differ diff --git a/docs/img/database-access/guides/alloydb/user-service-account-principals-with-access.png b/docs/img/database-access/guides/alloydb/user-service-account-principals-with-access.png new file mode 100644 index 0000000000000..f1478b2ba56e1 Binary files /dev/null and b/docs/img/database-access/guides/alloydb/user-service-account-principals-with-access.png differ diff --git a/docs/img/database-access/guides/atlas/atlas-add-user@2x.png b/docs/img/database-access/guides/atlas/atlas-add-user@2x.png index 8b29279477cb4..6f6ec9e8bd492 100644 Binary files a/docs/img/database-access/guides/atlas/atlas-add-user@2x.png and b/docs/img/database-access/guides/atlas/atlas-add-user@2x.png differ diff --git a/docs/img/database-access/guides/atlas/atlas-connect-btn@2x.png b/docs/img/database-access/guides/atlas/atlas-connect-btn@2x.png index c371344a2dabb..c9a991c1f683a 100644 Binary files a/docs/img/database-access/guides/atlas/atlas-connect-btn@2x.png and b/docs/img/database-access/guides/atlas/atlas-connect-btn@2x.png differ diff --git a/docs/img/database-access/guides/atlas/atlas-connect@2x.png b/docs/img/database-access/guides/atlas/atlas-connect@2x.png index eeb9b9d8c1458..f4f93a1255c17 100644 Binary files a/docs/img/database-access/guides/atlas/atlas-connect@2x.png and b/docs/img/database-access/guides/atlas/atlas-connect@2x.png differ diff --git a/docs/img/database-access/guides/atlas/atlas-self-managed-x509@2x.png b/docs/img/database-access/guides/atlas/atlas-self-managed-x509@2x.png index 95d9e621d397f..8a7a7a5e4cc60 100644 Binary files a/docs/img/database-access/guides/atlas/atlas-self-managed-x509@2x.png and b/docs/img/database-access/guides/atlas/atlas-self-managed-x509@2x.png differ diff --git a/docs/img/database-access/guides/aws-create-role-add-tags.png b/docs/img/database-access/guides/aws-create-role-add-tags.png index 56f6216337a3b..70fe0be8b75e3 100644 Binary files 
a/docs/img/database-access/guides/aws-create-role-add-tags.png and b/docs/img/database-access/guides/aws-create-role-add-tags.png differ diff --git a/docs/img/database-access/guides/aws-dynamodb_cloud.png b/docs/img/database-access/guides/aws-dynamodb_cloud.png index 015ecfee3997c..a45c86549b196 100644 Binary files a/docs/img/database-access/guides/aws-dynamodb_cloud.png and b/docs/img/database-access/guides/aws-dynamodb_cloud.png differ diff --git a/docs/img/database-access/guides/aws-dynamodb_selfhosted.png b/docs/img/database-access/guides/aws-dynamodb_selfhosted.png index 438b47d716e68..c4f88c7971e9b 100644 Binary files a/docs/img/database-access/guides/aws-dynamodb_selfhosted.png and b/docs/img/database-access/guides/aws-dynamodb_selfhosted.png differ diff --git a/docs/img/database-access/guides/aws-opensearch/01-opensearch_get_started.png b/docs/img/database-access/guides/aws-opensearch/01-opensearch_get_started.png index 110b3f313c548..37c7740d2beaf 100644 Binary files a/docs/img/database-access/guides/aws-opensearch/01-opensearch_get_started.png and b/docs/img/database-access/guides/aws-opensearch/01-opensearch_get_started.png differ diff --git a/docs/img/database-access/guides/aws-opensearch/02-opensearch_mapped_users.png b/docs/img/database-access/guides/aws-opensearch/02-opensearch_mapped_users.png index b35ee4154b51f..169c340e92cff 100644 Binary files a/docs/img/database-access/guides/aws-opensearch/02-opensearch_mapped_users.png and b/docs/img/database-access/guides/aws-opensearch/02-opensearch_mapped_users.png differ diff --git a/docs/img/database-access/guides/aws-opensearch/03-opensearch_iam_role_mapping.png b/docs/img/database-access/guides/aws-opensearch/03-opensearch_iam_role_mapping.png index 1a3c544e12dc8..d1aaa1351f65a 100644 Binary files a/docs/img/database-access/guides/aws-opensearch/03-opensearch_iam_role_mapping.png and b/docs/img/database-access/guides/aws-opensearch/03-opensearch_iam_role_mapping.png differ diff --git 
a/docs/img/database-access/guides/aws-opensearch/opensearch_cloud.png b/docs/img/database-access/guides/aws-opensearch/opensearch_cloud.png index 14d4859717079..35d7335237f5c 100644 Binary files a/docs/img/database-access/guides/aws-opensearch/opensearch_cloud.png and b/docs/img/database-access/guides/aws-opensearch/opensearch_cloud.png differ diff --git a/docs/img/database-access/guides/aws-opensearch/opensearch_selfhosted.png b/docs/img/database-access/guides/aws-opensearch/opensearch_selfhosted.png index 71d674fcc23de..47085f4a96309 100644 Binary files a/docs/img/database-access/guides/aws-opensearch/opensearch_selfhosted.png and b/docs/img/database-access/guides/aws-opensearch/opensearch_selfhosted.png differ diff --git a/docs/img/database-access/guides/aws_memorydb_cloud.png b/docs/img/database-access/guides/aws_memorydb_cloud.png new file mode 100644 index 0000000000000..aed7d1a94b2d4 Binary files /dev/null and b/docs/img/database-access/guides/aws_memorydb_cloud.png differ diff --git a/docs/img/database-access/guides/aws_memorydb_selfhosted.png b/docs/img/database-access/guides/aws_memorydb_selfhosted.png new file mode 100644 index 0000000000000..230312ce1f122 Binary files /dev/null and b/docs/img/database-access/guides/aws_memorydb_selfhosted.png differ diff --git a/docs/img/database-access/guides/azure/create-role-assignment@2x.png b/docs/img/database-access/guides/azure/create-role-assignment@2x.png index 57e9018bae515..76f3439b3aaf9 100644 Binary files a/docs/img/database-access/guides/azure/create-role-assignment@2x.png and b/docs/img/database-access/guides/azure/create-role-assignment@2x.png differ diff --git a/docs/img/database-access/guides/azure/create-role-from-json@2x.png b/docs/img/database-access/guides/azure/create-role-from-json@2x.png index 1091cde4d0379..31350ef18975b 100644 Binary files a/docs/img/database-access/guides/azure/create-role-from-json@2x.png and b/docs/img/database-access/guides/azure/create-role-from-json@2x.png differ diff 
--git a/docs/img/database-access/guides/azure/registered-app@2x.png b/docs/img/database-access/guides/azure/registered-app@2x.png index 6f9bcc0da1a28..e2d331ced8d87 100644 Binary files a/docs/img/database-access/guides/azure/registered-app@2x.png and b/docs/img/database-access/guides/azure/registered-app@2x.png differ diff --git a/docs/img/database-access/guides/azure/set-ad-admin@2x.png b/docs/img/database-access/guides/azure/set-ad-admin@2x.png index ce5447f048386..f222625229111 100644 Binary files a/docs/img/database-access/guides/azure/set-ad-admin@2x.png and b/docs/img/database-access/guides/azure/set-ad-admin@2x.png differ diff --git a/docs/img/database-access/guides/azure_cloud.png b/docs/img/database-access/guides/azure_cloud.png index da12c6a28b883..5b666bb823679 100644 Binary files a/docs/img/database-access/guides/azure_cloud.png and b/docs/img/database-access/guides/azure_cloud.png differ diff --git a/docs/img/database-access/guides/azure_selfhosted.png b/docs/img/database-access/guides/azure_selfhosted.png index 7936d7e69ab45..3a72eb8851b66 100644 Binary files a/docs/img/database-access/guides/azure_selfhosted.png and b/docs/img/database-access/guides/azure_selfhosted.png differ diff --git a/docs/img/database-access/guides/cassandra_cloud.png b/docs/img/database-access/guides/cassandra_cloud.png index 9ecfe223f14e3..f19a357e5f4f7 100644 Binary files a/docs/img/database-access/guides/cassandra_cloud.png and b/docs/img/database-access/guides/cassandra_cloud.png differ diff --git a/docs/img/database-access/guides/cassandra_keyspaces_cloud.png b/docs/img/database-access/guides/cassandra_keyspaces_cloud.png index 45fa3343f29e4..1039d635a8330 100644 Binary files a/docs/img/database-access/guides/cassandra_keyspaces_cloud.png and b/docs/img/database-access/guides/cassandra_keyspaces_cloud.png differ diff --git a/docs/img/database-access/guides/cassandra_keyspaces_selfhosted.png b/docs/img/database-access/guides/cassandra_keyspaces_selfhosted.png index 
9f401b73591a3..5335d5d125400 100644 Binary files a/docs/img/database-access/guides/cassandra_keyspaces_selfhosted.png and b/docs/img/database-access/guides/cassandra_keyspaces_selfhosted.png differ diff --git a/docs/img/database-access/guides/cassandra_selfhosted.png b/docs/img/database-access/guides/cassandra_selfhosted.png index 631e9adf316e2..303b8edb5e736 100644 Binary files a/docs/img/database-access/guides/cassandra_selfhosted.png and b/docs/img/database-access/guides/cassandra_selfhosted.png differ diff --git a/docs/img/database-access/guides/clickhouse_selfhosted_cloud.png b/docs/img/database-access/guides/clickhouse_selfhosted_cloud.png index 9d3db52cc33a3..03cb089085873 100644 Binary files a/docs/img/database-access/guides/clickhouse_selfhosted_cloud.png and b/docs/img/database-access/guides/clickhouse_selfhosted_cloud.png differ diff --git a/docs/img/database-access/guides/clickhouse_selfhosted_selfhosted.png b/docs/img/database-access/guides/clickhouse_selfhosted_selfhosted.png index 0ccb0decaba21..930dceb117518 100644 Binary files a/docs/img/database-access/guides/clickhouse_selfhosted_selfhosted.png and b/docs/img/database-access/guides/clickhouse_selfhosted_selfhosted.png differ diff --git a/docs/img/database-access/guides/cloudsql/add-user-account-mysql@2x.png b/docs/img/database-access/guides/cloudsql/add-user-account-mysql@2x.png index 145137eb615ae..28af88ab14c5a 100644 Binary files a/docs/img/database-access/guides/cloudsql/add-user-account-mysql@2x.png and b/docs/img/database-access/guides/cloudsql/add-user-account-mysql@2x.png differ diff --git a/docs/img/database-access/guides/cloudsql/add-user-account@2x.png b/docs/img/database-access/guides/cloudsql/add-user-account@2x.png index ca4842bb1cc6d..63ea2421d74f7 100644 Binary files a/docs/img/database-access/guides/cloudsql/add-user-account@2x.png and b/docs/img/database-access/guides/cloudsql/add-user-account@2x.png differ diff --git 
a/docs/img/database-access/guides/cloudsql/grant-token-creator@2x.png b/docs/img/database-access/guides/cloudsql/grant-token-creator@2x.png index 157dd73ec923f..ec845fdb898c7 100644 Binary files a/docs/img/database-access/guides/cloudsql/grant-token-creator@2x.png and b/docs/img/database-access/guides/cloudsql/grant-token-creator@2x.png differ diff --git a/docs/img/database-access/guides/cloudsql/iam-existing-instance@2x.png b/docs/img/database-access/guides/cloudsql/iam-existing-instance@2x.png index caf65c3aa6686..c75ebd4918e0e 100644 Binary files a/docs/img/database-access/guides/cloudsql/iam-existing-instance@2x.png and b/docs/img/database-access/guides/cloudsql/iam-existing-instance@2x.png differ diff --git a/docs/img/database-access/guides/cloudsql/iam-new-instance@2x.png b/docs/img/database-access/guides/cloudsql/iam-new-instance@2x.png index 2c53ace12a3cd..1428b44967969 100644 Binary files a/docs/img/database-access/guides/cloudsql/iam-new-instance@2x.png and b/docs/img/database-access/guides/cloudsql/iam-new-instance@2x.png differ diff --git a/docs/img/database-access/guides/cloudsql/instance-id-mysql@2x.png b/docs/img/database-access/guides/cloudsql/instance-id-mysql@2x.png deleted file mode 100644 index 29dac1ace8c84..0000000000000 Binary files a/docs/img/database-access/guides/cloudsql/instance-id-mysql@2x.png and /dev/null differ diff --git a/docs/img/database-access/guides/cloudsql/instance-root-ca@2x.png b/docs/img/database-access/guides/cloudsql/instance-root-ca@2x.png index 302140d95cc9e..fc12d19494c64 100644 Binary files a/docs/img/database-access/guides/cloudsql/instance-root-ca@2x.png and b/docs/img/database-access/guides/cloudsql/instance-root-ca@2x.png differ diff --git a/docs/img/database-access/guides/cloudsql/service-account-db-grant@2x.png b/docs/img/database-access/guides/cloudsql/service-account-db-grant@2x.png index 61cb2072f2659..26eeeb6cfc06d 100644 Binary files 
a/docs/img/database-access/guides/cloudsql/service-account-db-grant@2x.png and b/docs/img/database-access/guides/cloudsql/service-account-db-grant@2x.png differ diff --git a/docs/img/database-access/guides/cloudsql/service-account-db-service@2x.png b/docs/img/database-access/guides/cloudsql/service-account-db-service@2x.png index 76c78999863e0..81e95e7020939 100644 Binary files a/docs/img/database-access/guides/cloudsql/service-account-db-service@2x.png and b/docs/img/database-access/guides/cloudsql/service-account-db-service@2x.png differ diff --git a/docs/img/database-access/guides/cloudsql/service-account-db@2x.png b/docs/img/database-access/guides/cloudsql/service-account-db@2x.png index 1ea0c8b7cb97f..27eb8b97d0a21 100644 Binary files a/docs/img/database-access/guides/cloudsql/service-account-db@2x.png and b/docs/img/database-access/guides/cloudsql/service-account-db@2x.png differ diff --git a/docs/img/database-access/guides/cloudsql/service-account-keys@2x.png b/docs/img/database-access/guides/cloudsql/service-account-keys@2x.png index 81a0e972ba000..c890e2764d42a 100644 Binary files a/docs/img/database-access/guides/cloudsql/service-account-keys@2x.png and b/docs/img/database-access/guides/cloudsql/service-account-keys@2x.png differ diff --git a/docs/img/database-access/guides/cloudsql/service-account-new-key@2x.png b/docs/img/database-access/guides/cloudsql/service-account-new-key@2x.png index 1ada05ac57d80..00fc1645f1f36 100644 Binary files a/docs/img/database-access/guides/cloudsql/service-account-new-key@2x.png and b/docs/img/database-access/guides/cloudsql/service-account-new-key@2x.png differ diff --git a/docs/img/database-access/guides/cloudsql/service-account-permissions-tab@2x.png b/docs/img/database-access/guides/cloudsql/service-account-permissions-tab@2x.png index a87fe72fd3bb8..29699b0f02072 100644 Binary files a/docs/img/database-access/guides/cloudsql/service-account-permissions-tab@2x.png and 
b/docs/img/database-access/guides/cloudsql/service-account-permissions-tab@2x.png differ diff --git a/docs/img/database-access/guides/cloudsql/service-account-sql-admin-grant@2x.png b/docs/img/database-access/guides/cloudsql/service-account-sql-admin-grant@2x.png deleted file mode 100644 index c6861ff72dc84..0000000000000 Binary files a/docs/img/database-access/guides/cloudsql/service-account-sql-admin-grant@2x.png and /dev/null differ diff --git a/docs/img/database-access/guides/cloudsql/user-accounts@2x.png b/docs/img/database-access/guides/cloudsql/user-accounts@2x.png index 791a54db11884..1a6fbf2a9d8a7 100644 Binary files a/docs/img/database-access/guides/cloudsql/user-accounts@2x.png and b/docs/img/database-access/guides/cloudsql/user-accounts@2x.png differ diff --git a/docs/img/database-access/guides/cloudsql_cloud.png b/docs/img/database-access/guides/cloudsql_cloud.png index 303beb5888d59..7c5f34f02d73e 100644 Binary files a/docs/img/database-access/guides/cloudsql_cloud.png and b/docs/img/database-access/guides/cloudsql_cloud.png differ diff --git a/docs/img/database-access/guides/cloudsql_selfhosted.png b/docs/img/database-access/guides/cloudsql_selfhosted.png index 4e95eabd6098e..9ca0e5e04b311 100644 Binary files a/docs/img/database-access/guides/cloudsql_selfhosted.png and b/docs/img/database-access/guides/cloudsql_selfhosted.png differ diff --git a/docs/img/database-access/guides/cockroachdb_cloud.png b/docs/img/database-access/guides/cockroachdb_cloud.png index 49e8a30e3de21..8d9758f6a5d4e 100644 Binary files a/docs/img/database-access/guides/cockroachdb_cloud.png and b/docs/img/database-access/guides/cockroachdb_cloud.png differ diff --git a/docs/img/database-access/guides/cockroachdb_selfhosted.png b/docs/img/database-access/guides/cockroachdb_selfhosted.png index 29df01d017f6c..9ea01ebce1d3f 100644 Binary files a/docs/img/database-access/guides/cockroachdb_selfhosted.png and b/docs/img/database-access/guides/cockroachdb_selfhosted.png differ diff 
--git a/docs/img/database-access/guides/create-redshift-iam-role-policy.png b/docs/img/database-access/guides/create-redshift-iam-role-policy.png deleted file mode 100644 index 6fa5394155d0f..0000000000000 Binary files a/docs/img/database-access/guides/create-redshift-iam-role-policy.png and /dev/null differ diff --git a/docs/img/database-access/guides/docdb-create-role-select-policy.png b/docs/img/database-access/guides/docdb-create-role-select-policy.png index 5b1f617863625..b15bbccc4b2db 100644 Binary files a/docs/img/database-access/guides/docdb-create-role-select-policy.png and b/docs/img/database-access/guides/docdb-create-role-select-policy.png differ diff --git a/docs/img/database-access/guides/elasticache_serverless_cloud.png b/docs/img/database-access/guides/elasticache_serverless_cloud.png new file mode 100644 index 0000000000000..fd8f2d71a9ec4 Binary files /dev/null and b/docs/img/database-access/guides/elasticache_serverless_cloud.png differ diff --git a/docs/img/database-access/guides/elasticache_serverless_selfhosted.png b/docs/img/database-access/guides/elasticache_serverless_selfhosted.png new file mode 100644 index 0000000000000..f3b0ab0eff53d Binary files /dev/null and b/docs/img/database-access/guides/elasticache_serverless_selfhosted.png differ diff --git a/docs/img/database-access/guides/keyspaces/create-role-step1.png b/docs/img/database-access/guides/keyspaces/create-role-step1.png index 9130cf40ed96f..b7e830b7635b7 100644 Binary files a/docs/img/database-access/guides/keyspaces/create-role-step1.png and b/docs/img/database-access/guides/keyspaces/create-role-step1.png differ diff --git a/docs/img/database-access/guides/keyspaces/create-role-step2.png b/docs/img/database-access/guides/keyspaces/create-role-step2.png index 918bc04dd28c7..01c34c2d9f89c 100644 Binary files a/docs/img/database-access/guides/keyspaces/create-role-step2.png and b/docs/img/database-access/guides/keyspaces/create-role-step2.png differ diff --git 
a/docs/img/database-access/guides/keyspaces/create-role-step3.png b/docs/img/database-access/guides/keyspaces/create-role-step3.png index 83d6c715253cc..5d4e36aa9dac0 100644 Binary files a/docs/img/database-access/guides/keyspaces/create-role-step3.png and b/docs/img/database-access/guides/keyspaces/create-role-step3.png differ diff --git a/docs/img/database-access/guides/mongodb_cloud.png b/docs/img/database-access/guides/mongodb_cloud.png index 9cacf4ec9000c..3d46f05430ceb 100644 Binary files a/docs/img/database-access/guides/mongodb_cloud.png and b/docs/img/database-access/guides/mongodb_cloud.png differ diff --git a/docs/img/database-access/guides/mongodb_selfhosted.png b/docs/img/database-access/guides/mongodb_selfhosted.png index dec53f1fd6e42..c43f984eb04f6 100644 Binary files a/docs/img/database-access/guides/mongodb_selfhosted.png and b/docs/img/database-access/guides/mongodb_selfhosted.png differ diff --git a/docs/img/database-access/guides/mongodbatlas_cloud.png b/docs/img/database-access/guides/mongodbatlas_cloud.png index efc7fa40ae48c..7f0e0c11d055e 100644 Binary files a/docs/img/database-access/guides/mongodbatlas_cloud.png and b/docs/img/database-access/guides/mongodbatlas_cloud.png differ diff --git a/docs/img/database-access/guides/mongodbatlas_selfhosted.png b/docs/img/database-access/guides/mongodbatlas_selfhosted.png index 64ff67590a9d8..11d1bf78ecbb6 100644 Binary files a/docs/img/database-access/guides/mongodbatlas_selfhosted.png and b/docs/img/database-access/guides/mongodbatlas_selfhosted.png differ diff --git a/docs/img/database-access/guides/mysql_cloud.png b/docs/img/database-access/guides/mysql_cloud.png index 85d9003a446d4..b3efcef722434 100644 Binary files a/docs/img/database-access/guides/mysql_cloud.png and b/docs/img/database-access/guides/mysql_cloud.png differ diff --git a/docs/img/database-access/guides/mysql_selfhosted.png b/docs/img/database-access/guides/mysql_selfhosted.png index 72e90e0132e1e..1c178ff67b9f0 100644 Binary 
files a/docs/img/database-access/guides/mysql_selfhosted.png and b/docs/img/database-access/guides/mysql_selfhosted.png differ diff --git a/docs/img/database-access/guides/oracle_selfhosted.png b/docs/img/database-access/guides/oracle_selfhosted.png index 01f8315f16285..ae060f9356037 100644 Binary files a/docs/img/database-access/guides/oracle_selfhosted.png and b/docs/img/database-access/guides/oracle_selfhosted.png differ diff --git a/docs/img/database-access/guides/oracle_selfhosted_cloud.png b/docs/img/database-access/guides/oracle_selfhosted_cloud.png index 56c19aa7269ce..b00b54e36aaa1 100644 Binary files a/docs/img/database-access/guides/oracle_selfhosted_cloud.png and b/docs/img/database-access/guides/oracle_selfhosted_cloud.png differ diff --git a/docs/img/database-access/guides/postgresqlselfhosted_cloud.png b/docs/img/database-access/guides/postgresqlselfhosted_cloud.png index aec7a5b9c92e5..af54550326b5b 100644 Binary files a/docs/img/database-access/guides/postgresqlselfhosted_cloud.png and b/docs/img/database-access/guides/postgresqlselfhosted_cloud.png differ diff --git a/docs/img/database-access/guides/postgresqlselfhosted_selfhosted.png b/docs/img/database-access/guides/postgresqlselfhosted_selfhosted.png index 8741ae46ea75c..562d1e7a4f48e 100644 Binary files a/docs/img/database-access/guides/postgresqlselfhosted_selfhosted.png and b/docs/img/database-access/guides/postgresqlselfhosted_selfhosted.png differ diff --git a/docs/img/database-access/guides/rds-proxy_cloud.png b/docs/img/database-access/guides/rds-proxy_cloud.png index 5de6f90f86330..8f7faf22e8510 100644 Binary files a/docs/img/database-access/guides/rds-proxy_cloud.png and b/docs/img/database-access/guides/rds-proxy_cloud.png differ diff --git a/docs/img/database-access/guides/rds-proxy_selfhosted.png b/docs/img/database-access/guides/rds-proxy_selfhosted.png index 4e1b55fa5b973..c6ec61e6cf517 100644 Binary files a/docs/img/database-access/guides/rds-proxy_selfhosted.png and 
b/docs/img/database-access/guides/rds-proxy_selfhosted.png differ diff --git a/docs/img/database-access/guides/rds_cloud.png b/docs/img/database-access/guides/rds_cloud.png index 18ccb249ef8af..665b51430388e 100644 Binary files a/docs/img/database-access/guides/rds_cloud.png and b/docs/img/database-access/guides/rds_cloud.png differ diff --git a/docs/img/database-access/guides/rds_selfhosted.png b/docs/img/database-access/guides/rds_selfhosted.png index 9ef9850c90f38..849f183db7bd2 100644 Binary files a/docs/img/database-access/guides/rds_selfhosted.png and b/docs/img/database-access/guides/rds_selfhosted.png differ diff --git a/docs/img/database-access/guides/redis/redis-aws-iam-user@2x.png b/docs/img/database-access/guides/redis/redis-aws-iam-user@2x.png index 9564707f7875d..74e2523687718 100644 Binary files a/docs/img/database-access/guides/redis/redis-aws-iam-user@2x.png and b/docs/img/database-access/guides/redis/redis-aws-iam-user@2x.png differ diff --git a/docs/img/database-access/guides/redis/redis-aws-managed-user-tag.png b/docs/img/database-access/guides/redis/redis-aws-managed-user-tag.png index ff684ba29985c..23350da68d2f9 100644 Binary files a/docs/img/database-access/guides/redis/redis-aws-managed-user-tag.png and b/docs/img/database-access/guides/redis/redis-aws-managed-user-tag.png differ diff --git a/docs/img/database-access/guides/redis/redisinsight-add-config.png b/docs/img/database-access/guides/redis/redisinsight-add-config.png index da0298b6c6f02..1fed10a42a81e 100644 Binary files a/docs/img/database-access/guides/redis/redisinsight-add-config.png and b/docs/img/database-access/guides/redis/redisinsight-add-config.png differ diff --git a/docs/img/database-access/guides/redis/redisinsight-connected.png b/docs/img/database-access/guides/redis/redisinsight-connected.png index f0f54ce2f1d26..7bca06f986e61 100644 Binary files a/docs/img/database-access/guides/redis/redisinsight-connected.png and 
b/docs/img/database-access/guides/redis/redisinsight-connected.png differ diff --git a/docs/img/database-access/guides/redis/redisinsight-startup.png b/docs/img/database-access/guides/redis/redisinsight-startup.png index 3102842af9d8e..74d31ca5b7d78 100644 Binary files a/docs/img/database-access/guides/redis/redisinsight-startup.png and b/docs/img/database-access/guides/redis/redisinsight-startup.png differ diff --git a/docs/img/database-access/guides/redis/redisinsight-tls-config.png b/docs/img/database-access/guides/redis/redisinsight-tls-config.png index c35add5405589..0b43e51311ee6 100644 Binary files a/docs/img/database-access/guides/redis/redisinsight-tls-config.png and b/docs/img/database-access/guides/redis/redisinsight-tls-config.png differ diff --git a/docs/img/database-access/guides/redis_cloud.png b/docs/img/database-access/guides/redis_cloud.png index b371038fbafb2..ea70a85e2b30d 100644 Binary files a/docs/img/database-access/guides/redis_cloud.png and b/docs/img/database-access/guides/redis_cloud.png differ diff --git a/docs/img/database-access/guides/redis_elasticache_cloud.png b/docs/img/database-access/guides/redis_elasticache_cloud.png index c962bad2051a1..47229a89fd00a 100644 Binary files a/docs/img/database-access/guides/redis_elasticache_cloud.png and b/docs/img/database-access/guides/redis_elasticache_cloud.png differ diff --git a/docs/img/database-access/guides/redis_elasticache_selfhosted.png b/docs/img/database-access/guides/redis_elasticache_selfhosted.png index 15ad6a9147ed2..2144fd8487210 100644 Binary files a/docs/img/database-access/guides/redis_elasticache_selfhosted.png and b/docs/img/database-access/guides/redis_elasticache_selfhosted.png differ diff --git a/docs/img/database-access/guides/redis_selfhosted.png b/docs/img/database-access/guides/redis_selfhosted.png index 851d537b706c6..f522bca2d60e2 100644 Binary files a/docs/img/database-access/guides/redis_selfhosted.png and b/docs/img/database-access/guides/redis_selfhosted.png 
differ diff --git a/docs/img/database-access/guides/rediscluster_cloud.png b/docs/img/database-access/guides/rediscluster_cloud.png index d84aeab95a837..f148f529a8195 100644 Binary files a/docs/img/database-access/guides/rediscluster_cloud.png and b/docs/img/database-access/guides/rediscluster_cloud.png differ diff --git a/docs/img/database-access/guides/rediscluster_selfhosted.png b/docs/img/database-access/guides/rediscluster_selfhosted.png index 8e3f24cac5ab0..62c3518581abc 100644 Binary files a/docs/img/database-access/guides/rediscluster_selfhosted.png and b/docs/img/database-access/guides/rediscluster_selfhosted.png differ diff --git a/docs/img/database-access/guides/redshift_cloud.png b/docs/img/database-access/guides/redshift_cloud.png index 922e616d9190f..e04c8235f7f53 100644 Binary files a/docs/img/database-access/guides/redshift_cloud.png and b/docs/img/database-access/guides/redshift_cloud.png differ diff --git a/docs/img/database-access/guides/redshift_selfhosted.png b/docs/img/database-access/guides/redshift_selfhosted.png index cb0210e773dea..92e7aad1779e2 100644 Binary files a/docs/img/database-access/guides/redshift_selfhosted.png and b/docs/img/database-access/guides/redshift_selfhosted.png differ diff --git a/docs/img/database-access/guides/snowflake/dbeaver-driver.png b/docs/img/database-access/guides/snowflake/dbeaver-driver.png index aad3f911c3e50..d641a553937a8 100644 Binary files a/docs/img/database-access/guides/snowflake/dbeaver-driver.png and b/docs/img/database-access/guides/snowflake/dbeaver-driver.png differ diff --git a/docs/img/database-access/guides/snowflake/dbeaver-main-screen.png b/docs/img/database-access/guides/snowflake/dbeaver-main-screen.png index 2a2a19ed334a9..c33adadff094a 100644 Binary files a/docs/img/database-access/guides/snowflake/dbeaver-main-screen.png and b/docs/img/database-access/guides/snowflake/dbeaver-main-screen.png differ diff --git a/docs/img/database-access/guides/snowflake/dbeaver-main.png 
b/docs/img/database-access/guides/snowflake/dbeaver-main.png index 226244f6d2173..b495997253128 100644 Binary files a/docs/img/database-access/guides/snowflake/dbeaver-main.png and b/docs/img/database-access/guides/snowflake/dbeaver-main.png differ diff --git a/docs/img/database-access/guides/snowflake/dbeaver-select-database.png b/docs/img/database-access/guides/snowflake/dbeaver-select-database.png index f741275f2e78e..1805a1917c7eb 100644 Binary files a/docs/img/database-access/guides/snowflake/dbeaver-select-database.png and b/docs/img/database-access/guides/snowflake/dbeaver-select-database.png differ diff --git a/docs/img/database-access/guides/snowflake/dbeaver-success.png b/docs/img/database-access/guides/snowflake/dbeaver-success.png index 3fbeb3b6cc48b..74d5cd3c757a4 100644 Binary files a/docs/img/database-access/guides/snowflake/dbeaver-success.png and b/docs/img/database-access/guides/snowflake/dbeaver-success.png differ diff --git a/docs/img/database-access/guides/snowflake/jetbrains-add-database.png b/docs/img/database-access/guides/snowflake/jetbrains-add-database.png index 266c7e159f758..4de07b69a2f34 100644 Binary files a/docs/img/database-access/guides/snowflake/jetbrains-add-database.png and b/docs/img/database-access/guides/snowflake/jetbrains-add-database.png differ diff --git a/docs/img/database-access/guides/snowflake/jetbrains-advanced.png b/docs/img/database-access/guides/snowflake/jetbrains-advanced.png index 8074dd7de8295..ea41975c08406 100644 Binary files a/docs/img/database-access/guides/snowflake/jetbrains-advanced.png and b/docs/img/database-access/guides/snowflake/jetbrains-advanced.png differ diff --git a/docs/img/database-access/guides/snowflake/jetbrains-general.png b/docs/img/database-access/guides/snowflake/jetbrains-general.png index 348a96ab7264a..370054df9ce64 100644 Binary files a/docs/img/database-access/guides/snowflake/jetbrains-general.png and b/docs/img/database-access/guides/snowflake/jetbrains-general.png differ diff 
--git a/docs/img/database-access/guides/snowflake/jetbrains-success.png b/docs/img/database-access/guides/snowflake/jetbrains-success.png index 246b0da46ea8c..8cf7a4c66a1a7 100644 Binary files a/docs/img/database-access/guides/snowflake/jetbrains-success.png and b/docs/img/database-access/guides/snowflake/jetbrains-success.png differ diff --git a/docs/img/database-access/guides/snowflake_cloud.png b/docs/img/database-access/guides/snowflake_cloud.png index 3f96e9a4d717b..c127580039f32 100644 Binary files a/docs/img/database-access/guides/snowflake_cloud.png and b/docs/img/database-access/guides/snowflake_cloud.png differ diff --git a/docs/img/database-access/guides/snowflake_selfhosted.png b/docs/img/database-access/guides/snowflake_selfhosted.png index b2c9b4d3b994f..589d51a557dc1 100644 Binary files a/docs/img/database-access/guides/snowflake_selfhosted.png and b/docs/img/database-access/guides/snowflake_selfhosted.png differ diff --git a/docs/img/database-access/guides/spanner/create-spanner-user@2x.png b/docs/img/database-access/guides/spanner/create-spanner-user@2x.png index 59c5e0fd7bb38..083378730e052 100644 Binary files a/docs/img/database-access/guides/spanner/create-spanner-user@2x.png and b/docs/img/database-access/guides/spanner/create-spanner-user@2x.png differ diff --git a/docs/img/database-access/guides/spanner/grant-service-account-access-to-instance@2x.png b/docs/img/database-access/guides/spanner/grant-service-account-access-to-instance@2x.png index 5701ae4eb97f8..c7130a91c1c66 100644 Binary files a/docs/img/database-access/guides/spanner/grant-service-account-access-to-instance@2x.png and b/docs/img/database-access/guides/spanner/grant-service-account-access-to-instance@2x.png differ diff --git a/docs/img/database-access/guides/spanner/grant-token-creator@2x.png b/docs/img/database-access/guides/spanner/grant-token-creator@2x.png index 2ccd33671b1a8..d0d92a37380b3 100644 Binary files 
a/docs/img/database-access/guides/spanner/grant-token-creator@2x.png and b/docs/img/database-access/guides/spanner/grant-token-creator@2x.png differ diff --git a/docs/img/database-access/guides/spanner/select-instance@2x.png b/docs/img/database-access/guides/spanner/select-instance@2x.png index c33d0a2b9ba45..6903991b67418 100644 Binary files a/docs/img/database-access/guides/spanner/select-instance@2x.png and b/docs/img/database-access/guides/spanner/select-instance@2x.png differ diff --git a/docs/img/database-access/guides/spanner/service-account-permissions-tab@2x.png b/docs/img/database-access/guides/spanner/service-account-permissions-tab@2x.png index 582f3ca3b2414..a0df2adcfba91 100644 Binary files a/docs/img/database-access/guides/spanner/service-account-permissions-tab@2x.png and b/docs/img/database-access/guides/spanner/service-account-permissions-tab@2x.png differ diff --git a/docs/img/database-access/guides/spanner_cloud.png b/docs/img/database-access/guides/spanner_cloud.png index 6ca9ad2271f64..5a57aef96653f 100644 Binary files a/docs/img/database-access/guides/spanner_cloud.png and b/docs/img/database-access/guides/spanner_cloud.png differ diff --git a/docs/img/database-access/guides/spanner_selfhosted.png b/docs/img/database-access/guides/spanner_selfhosted.png index 990b7bb77d570..f9f84fd622dc9 100644 Binary files a/docs/img/database-access/guides/spanner_selfhosted.png and b/docs/img/database-access/guides/spanner_selfhosted.png differ diff --git a/docs/img/database-access/guides/sqlserver/azure-attach-managed-identity-vm.png b/docs/img/database-access/guides/sqlserver/azure-attach-managed-identity-vm.png index 83599c0327a2b..3f53156b1f673 100644 Binary files a/docs/img/database-access/guides/sqlserver/azure-attach-managed-identity-vm.png and b/docs/img/database-access/guides/sqlserver/azure-attach-managed-identity-vm.png differ diff --git a/docs/img/database-access/guides/sqlserver/azure-set-ad-admin.png 
b/docs/img/database-access/guides/sqlserver/azure-set-ad-admin.png index 61f532f1649ea..fd49af03b8399 100644 Binary files a/docs/img/database-access/guides/sqlserver/azure-set-ad-admin.png and b/docs/img/database-access/guides/sqlserver/azure-set-ad-admin.png differ diff --git a/docs/img/database-access/guides/sqlserver/azure-user-managed-identity.png b/docs/img/database-access/guides/sqlserver/azure-user-managed-identity.png index 7e381d24e8e40..a211d56f25536 100644 Binary files a/docs/img/database-access/guides/sqlserver/azure-user-managed-identity.png and b/docs/img/database-access/guides/sqlserver/azure-user-managed-identity.png differ diff --git a/docs/img/database-access/guides/sqlserver/cloud-sql-aad.png b/docs/img/database-access/guides/sqlserver/cloud-sql-aad.png index 9d160142f768b..871b9833db03b 100644 Binary files a/docs/img/database-access/guides/sqlserver/cloud-sql-aad.png and b/docs/img/database-access/guides/sqlserver/cloud-sql-aad.png differ diff --git a/docs/img/database-access/guides/sqlserver/datagrip-connection@2x.png b/docs/img/database-access/guides/sqlserver/datagrip-connection@2x.png index aae0ead22285c..27c62a72adcd3 100644 Binary files a/docs/img/database-access/guides/sqlserver/datagrip-connection@2x.png and b/docs/img/database-access/guides/sqlserver/datagrip-connection@2x.png differ diff --git a/docs/img/database-access/guides/sqlserver/dbeaver-connection@2x.png b/docs/img/database-access/guides/sqlserver/dbeaver-connection@2x.png index 9a5bdb7505e43..0e4f300ce95e1 100644 Binary files a/docs/img/database-access/guides/sqlserver/dbeaver-connection@2x.png and b/docs/img/database-access/guides/sqlserver/dbeaver-connection@2x.png differ diff --git a/docs/img/database-access/guides/sqlserver/spn@2x.png b/docs/img/database-access/guides/sqlserver/spn@2x.png index a107fa50ebc1a..6ec0ea4e1b68a 100644 Binary files a/docs/img/database-access/guides/sqlserver/spn@2x.png and b/docs/img/database-access/guides/sqlserver/spn@2x.png differ diff --git 
a/docs/img/database-access/guides/sqlserver/sql-aad.png b/docs/img/database-access/guides/sqlserver/sql-aad.png index accf6498cef37..26362e684e44a 100644 Binary files a/docs/img/database-access/guides/sqlserver/sql-aad.png and b/docs/img/database-access/guides/sqlserver/sql-aad.png differ diff --git a/docs/img/database-access/guides/sqlserver/system-managed-identity.png b/docs/img/database-access/guides/sqlserver/system-managed-identity.png index 77a2e8c7746f3..a2790297d0719 100644 Binary files a/docs/img/database-access/guides/sqlserver/system-managed-identity.png and b/docs/img/database-access/guides/sqlserver/system-managed-identity.png differ diff --git a/docs/img/database-access/guides/vitess_cloud.png b/docs/img/database-access/guides/vitess_cloud.png index b87459cf849b4..57f4ce31ea239 100644 Binary files a/docs/img/database-access/guides/vitess_cloud.png and b/docs/img/database-access/guides/vitess_cloud.png differ diff --git a/docs/img/database-access/guides/vitess_selfhosted.png b/docs/img/database-access/guides/vitess_selfhosted.png index e48622606ed9c..3da1de5e3cd42 100644 Binary files a/docs/img/database-access/guides/vitess_selfhosted.png and b/docs/img/database-access/guides/vitess_selfhosted.png differ diff --git a/docs/img/database-access/iam@2x.png b/docs/img/database-access/iam@2x.png deleted file mode 100644 index 204eab77190e9..0000000000000 Binary files a/docs/img/database-access/iam@2x.png and /dev/null differ diff --git a/docs/img/database-access/msft-sql-server-management-studio-connection.png b/docs/img/database-access/msft-sql-server-management-studio-connection.png index 1606266edd9f9..ef0edf375ce55 100644 Binary files a/docs/img/database-access/msft-sql-server-management-studio-connection.png and b/docs/img/database-access/msft-sql-server-management-studio-connection.png differ diff --git a/docs/img/database-access/pgadmin-connection@2x.png b/docs/img/database-access/pgadmin-connection@2x.png index 0d28e8164922d..8a054fce914ea 100644 
Binary files a/docs/img/database-access/pgadmin-connection@2x.png and b/docs/img/database-access/pgadmin-connection@2x.png differ diff --git a/docs/img/database-access/pgadmin-ssl@2x.png b/docs/img/database-access/pgadmin-ssl@2x.png index f574d94a5b737..ec672a10fae19 100644 Binary files a/docs/img/database-access/pgadmin-ssl@2x.png and b/docs/img/database-access/pgadmin-ssl@2x.png differ diff --git a/docs/img/database-access/spanner-gui/datagrip-choose-no-auth@2x.png b/docs/img/database-access/spanner-gui/datagrip-choose-no-auth@2x.png index ef04361d7ee31..c60663c8f47ef 100644 Binary files a/docs/img/database-access/spanner-gui/datagrip-choose-no-auth@2x.png and b/docs/img/database-access/spanner-gui/datagrip-choose-no-auth@2x.png differ diff --git a/docs/img/database-access/spanner-gui/datagrip-data-source-from-jdbc-url@2x.png b/docs/img/database-access/spanner-gui/datagrip-data-source-from-jdbc-url@2x.png index 1724a0c4c4617..3c32e9879ed43 100644 Binary files a/docs/img/database-access/spanner-gui/datagrip-data-source-from-jdbc-url@2x.png and b/docs/img/database-access/spanner-gui/datagrip-data-source-from-jdbc-url@2x.png differ diff --git a/docs/img/database-access/spanner-gui/dbeaver-connection-from-jdbc-url@2x.png b/docs/img/database-access/spanner-gui/dbeaver-connection-from-jdbc-url@2x.png index 4b97bc5053f48..4039f6c241eee 100644 Binary files a/docs/img/database-access/spanner-gui/dbeaver-connection-from-jdbc-url@2x.png and b/docs/img/database-access/spanner-gui/dbeaver-connection-from-jdbc-url@2x.png differ diff --git a/docs/img/database-access/spanner-gui/dbeaver-copy-spanner-driver@2x.png b/docs/img/database-access/spanner-gui/dbeaver-copy-spanner-driver@2x.png index cf5f2aede6a79..f1e1cd7ccd80d 100644 Binary files a/docs/img/database-access/spanner-gui/dbeaver-copy-spanner-driver@2x.png and b/docs/img/database-access/spanner-gui/dbeaver-copy-spanner-driver@2x.png differ diff --git 
a/docs/img/database-access/spanner-gui/dbeaver-create-spanner-driver@2x.png b/docs/img/database-access/spanner-gui/dbeaver-create-spanner-driver@2x.png index 8fb99f35b0c82..7faab33a57531 100644 Binary files a/docs/img/database-access/spanner-gui/dbeaver-create-spanner-driver@2x.png and b/docs/img/database-access/spanner-gui/dbeaver-create-spanner-driver@2x.png differ diff --git a/docs/img/database-access/spanner-gui/dbeaver-open-driver-manager@2x.png b/docs/img/database-access/spanner-gui/dbeaver-open-driver-manager@2x.png index e792b6ddf0473..8bde5c9263714 100644 Binary files a/docs/img/database-access/spanner-gui/dbeaver-open-driver-manager@2x.png and b/docs/img/database-access/spanner-gui/dbeaver-open-driver-manager@2x.png differ diff --git a/docs/img/deploy-a-cluster/aws-gslb-proxy-peering-ha-deployment.png b/docs/img/deploy-a-cluster/aws-gslb-proxy-peering-ha-deployment.png index e43817f5d6ca4..f4b61ef169931 100644 Binary files a/docs/img/deploy-a-cluster/aws-gslb-proxy-peering-ha-deployment.png and b/docs/img/deploy-a-cluster/aws-gslb-proxy-peering-ha-deployment.png differ diff --git a/docs/img/device-trust/device-trust-shield-dark-failure.png b/docs/img/device-trust/device-trust-shield-dark-failure.png index 6c0a77cd4b906..a4043406747cd 100644 Binary files a/docs/img/device-trust/device-trust-shield-dark-failure.png and b/docs/img/device-trust/device-trust-shield-dark-failure.png differ diff --git a/docs/img/device-trust/device-trust-shield-dark-success.png b/docs/img/device-trust/device-trust-shield-dark-success.png index 631ac80ba2292..4d70b43801e76 100644 Binary files a/docs/img/device-trust/device-trust-shield-dark-success.png and b/docs/img/device-trust/device-trust-shield-dark-success.png differ diff --git a/docs/img/device-trust/device-trust-shield-light-failure.png b/docs/img/device-trust/device-trust-shield-light-failure.png index e3e1030530fd7..5327abc00a64b 100644 Binary files a/docs/img/device-trust/device-trust-shield-light-failure.png and 
b/docs/img/device-trust/device-trust-shield-light-failure.png differ diff --git a/docs/img/device-trust/device-trust-shield-light-success.png b/docs/img/device-trust/device-trust-shield-light-success.png index f3feb4e46a885..81e228829f4ff 100644 Binary files a/docs/img/device-trust/device-trust-shield-light-success.png and b/docs/img/device-trust/device-trust-shield-light-success.png differ diff --git a/docs/img/doc-submodules.png b/docs/img/doc-submodules.png deleted file mode 100644 index 8250180e268d8..0000000000000 Binary files a/docs/img/doc-submodules.png and /dev/null differ diff --git a/docs/img/download_files_connect.png b/docs/img/download_files_connect.png index bfd84203b9cad..c4e45d705d48a 100644 Binary files a/docs/img/download_files_connect.png and b/docs/img/download_files_connect.png differ diff --git a/docs/img/enroll-resources/enroll-resources-hero.png b/docs/img/enroll-resources/enroll-resources-hero.png new file mode 100644 index 0000000000000..f9f30d936d17d Binary files /dev/null and b/docs/img/enroll-resources/enroll-resources-hero.png differ diff --git a/docs/img/enterprise/plugins/datadog/review-access-request.png b/docs/img/enterprise/plugins/datadog/review-access-request.png index 05a2ab77bd67d..7e3ea385ce3f2 100644 Binary files a/docs/img/enterprise/plugins/datadog/review-access-request.png and b/docs/img/enterprise/plugins/datadog/review-access-request.png differ diff --git a/docs/img/enterprise/plugins/datadog/teleport-users.png b/docs/img/enterprise/plugins/datadog/teleport-users.png index 0d99c7289d237..7f93820b7f75b 100644 Binary files a/docs/img/enterprise/plugins/datadog/teleport-users.png and b/docs/img/enterprise/plugins/datadog/teleport-users.png differ diff --git a/docs/img/enterprise/plugins/discord/discord-permissions.png b/docs/img/enterprise/plugins/discord/discord-permissions.png deleted file mode 100644 index 6994faf0f6347..0000000000000 Binary files a/docs/img/enterprise/plugins/discord/discord-permissions.png and 
/dev/null differ diff --git a/docs/img/enterprise/plugins/enroll.png b/docs/img/enterprise/plugins/enroll.png index 958c14122abca..4e9cfae257ab9 100644 Binary files a/docs/img/enterprise/plugins/enroll.png and b/docs/img/enterprise/plugins/enroll.png differ diff --git a/docs/img/enterprise/plugins/integration-plugin.png b/docs/img/enterprise/plugins/integration-plugin.png new file mode 100644 index 0000000000000..b8b693a3d3b0a Binary files /dev/null and b/docs/img/enterprise/plugins/integration-plugin.png differ diff --git a/docs/img/enterprise/plugins/okta/okta-delete-integration.png b/docs/img/enterprise/plugins/okta/okta-delete-integration.png index 15aba79019b24..26dab22492901 100644 Binary files a/docs/img/enterprise/plugins/okta/okta-delete-integration.png and b/docs/img/enterprise/plugins/okta/okta-delete-integration.png differ diff --git a/docs/img/get-started/get-started-audit-log@2x.png b/docs/img/get-started/get-started-audit-log@2x.png new file mode 100644 index 0000000000000..22f7579e4ac10 Binary files /dev/null and b/docs/img/get-started/get-started-audit-log@2x.png differ diff --git a/docs/img/get-started/get-started-enroll-resources@2x.png b/docs/img/get-started/get-started-enroll-resources@2x.png new file mode 100644 index 0000000000000..59d36407e7a1f Binary files /dev/null and b/docs/img/get-started/get-started-enroll-resources@2x.png differ diff --git a/docs/img/headless/approval.png b/docs/img/headless/approval.png index dce71a9a6e8ec..7c84036a642c6 100644 Binary files a/docs/img/headless/approval.png and b/docs/img/headless/approval.png differ diff --git a/docs/img/headless/confirmation.png b/docs/img/headless/confirmation.png index 963d55258f7a1..ece8b2041ea63 100644 Binary files a/docs/img/headless/confirmation.png and b/docs/img/headless/confirmation.png differ diff --git a/docs/img/helm/digitalocean/create-k8s.png b/docs/img/helm/digitalocean/create-k8s.png index 8fb2bb1c0fce2..113e0664405a7 100644 Binary files 
a/docs/img/helm/digitalocean/create-k8s.png and b/docs/img/helm/digitalocean/create-k8s.png differ diff --git a/docs/img/helm/digitalocean/edit-user.png b/docs/img/helm/digitalocean/edit-user.png index 05fb4b09893bd..800590c03fd4b 100644 Binary files a/docs/img/helm/digitalocean/edit-user.png and b/docs/img/helm/digitalocean/edit-user.png differ diff --git a/docs/img/helm/digitalocean/fqdn.png b/docs/img/helm/digitalocean/fqdn.png index 1105fd0baa315..2771e254ac737 100644 Binary files a/docs/img/helm/digitalocean/fqdn.png and b/docs/img/helm/digitalocean/fqdn.png differ diff --git a/docs/img/helm/digitalocean/setup-k8s.png b/docs/img/helm/digitalocean/setup-k8s.png index 38550e0ee74d8..cbfd50832ad41 100644 Binary files a/docs/img/helm/digitalocean/setup-k8s.png and b/docs/img/helm/digitalocean/setup-k8s.png differ diff --git a/docs/img/helm/digitalocean/setup-user.png b/docs/img/helm/digitalocean/setup-user.png index ace6d96343ab5..5b22ab8488ed0 100644 Binary files a/docs/img/helm/digitalocean/setup-user.png and b/docs/img/helm/digitalocean/setup-user.png differ diff --git a/docs/img/helm/digitalocean/update-role.png b/docs/img/helm/digitalocean/update-role.png index 02a7af701276d..a767cf0910036 100644 Binary files a/docs/img/helm/digitalocean/update-role.png and b/docs/img/helm/digitalocean/update-role.png differ diff --git a/docs/img/helm/digitalocean/view-activity.png b/docs/img/helm/digitalocean/view-activity.png index 394b85a221466..eb4e67b05b30f 100644 Binary files a/docs/img/helm/digitalocean/view-activity.png and b/docs/img/helm/digitalocean/view-activity.png differ diff --git a/docs/img/helm/gcp/1-roles@1.5x.png b/docs/img/helm/gcp/1-roles@1.5x.png index 813454629c3fe..b70741e11ec03 100644 Binary files a/docs/img/helm/gcp/1-roles@1.5x.png and b/docs/img/helm/gcp/1-roles@1.5x.png differ diff --git a/docs/img/helm/gcp/10-serviceaccountdetails@1.5x.png b/docs/img/helm/gcp/10-serviceaccountdetails@1.5x.png index 9b0ae83898d29..58569657ed857 100644 Binary files 
a/docs/img/helm/gcp/10-serviceaccountdetails@1.5x.png and b/docs/img/helm/gcp/10-serviceaccountdetails@1.5x.png differ diff --git a/docs/img/helm/gcp/11-createkey.png b/docs/img/helm/gcp/11-createkey.png index 50c36270441f1..4c372578557a4 100644 Binary files a/docs/img/helm/gcp/11-createkey.png and b/docs/img/helm/gcp/11-createkey.png differ diff --git a/docs/img/helm/gcp/12-privatekey@1.5x.png b/docs/img/helm/gcp/12-privatekey@1.5x.png index a3a843d6794de..704ed7c780180 100644 Binary files a/docs/img/helm/gcp/12-privatekey@1.5x.png and b/docs/img/helm/gcp/12-privatekey@1.5x.png differ diff --git a/docs/img/helm/gcp/13-dns-createrole@1.5x.png b/docs/img/helm/gcp/13-dns-createrole@1.5x.png index 284fc2c2d86ff..c36436530979f 100644 Binary files a/docs/img/helm/gcp/13-dns-createrole@1.5x.png and b/docs/img/helm/gcp/13-dns-createrole@1.5x.png differ diff --git a/docs/img/helm/gcp/14-dns-permissions-create@1.5x.png b/docs/img/helm/gcp/14-dns-permissions-create@1.5x.png index 4b64e8fe7bd4e..350d355c1e296 100644 Binary files a/docs/img/helm/gcp/14-dns-permissions-create@1.5x.png and b/docs/img/helm/gcp/14-dns-permissions-create@1.5x.png differ diff --git a/docs/img/helm/gcp/2-createrole@1.5x.png b/docs/img/helm/gcp/2-createrole@1.5x.png index 77bf0b5cbc643..2d1aa0909e5df 100644 Binary files a/docs/img/helm/gcp/2-createrole@1.5x.png and b/docs/img/helm/gcp/2-createrole@1.5x.png differ diff --git a/docs/img/helm/gcp/3-addpermissions@1.5x.png b/docs/img/helm/gcp/3-addpermissions@1.5x.png index 5a6c70a2192f2..a57460dfb99d3 100644 Binary files a/docs/img/helm/gcp/3-addpermissions@1.5x.png and b/docs/img/helm/gcp/3-addpermissions@1.5x.png differ diff --git a/docs/img/helm/gcp/4-storagebucketscreate@1.5x.png b/docs/img/helm/gcp/4-storagebucketscreate@1.5x.png index 5457d1b1a764b..204aa0eae7b85 100644 Binary files a/docs/img/helm/gcp/4-storagebucketscreate@1.5x.png and b/docs/img/helm/gcp/4-storagebucketscreate@1.5x.png differ diff --git a/docs/img/helm/gcp/5-select@1.5x.png 
b/docs/img/helm/gcp/5-select@1.5x.png index 55ae16315c72b..de6542ed21dfa 100644 Binary files a/docs/img/helm/gcp/5-select@1.5x.png and b/docs/img/helm/gcp/5-select@1.5x.png differ diff --git a/docs/img/helm/gcp/6-createrole@1.5x.png b/docs/img/helm/gcp/6-createrole@1.5x.png index e4c3edd0782a6..4ff04b549ec23 100644 Binary files a/docs/img/helm/gcp/6-createrole@1.5x.png and b/docs/img/helm/gcp/6-createrole@1.5x.png differ diff --git a/docs/img/helm/gcp/7-serviceaccounts@1.5x.png b/docs/img/helm/gcp/7-serviceaccounts@1.5x.png index a55b8c38c6e64..4ab0d82559b3a 100644 Binary files a/docs/img/helm/gcp/7-serviceaccounts@1.5x.png and b/docs/img/helm/gcp/7-serviceaccounts@1.5x.png differ diff --git a/docs/img/helm/gcp/8-createserviceaccount@1.5x.png b/docs/img/helm/gcp/8-createserviceaccount@1.5x.png index 6f790f50f2782..0d30f2deda289 100644 Binary files a/docs/img/helm/gcp/8-createserviceaccount@1.5x.png and b/docs/img/helm/gcp/8-createserviceaccount@1.5x.png differ diff --git a/docs/img/helm/gcp/9-addroles@1.5x.png b/docs/img/helm/gcp/9-addroles@1.5x.png index ab7a58c12d5c6..c701ace8f2cc3 100644 Binary files a/docs/img/helm/gcp/9-addroles@1.5x.png and b/docs/img/helm/gcp/9-addroles@1.5x.png differ diff --git a/docs/img/identity-center/ic-app.png b/docs/img/identity-center/ic-app.png index a4d91becf4cc0..d3fef011b748f 100644 Binary files a/docs/img/identity-center/ic-app.png and b/docs/img/identity-center/ic-app.png differ diff --git a/docs/img/identity-center/ic-lists.png b/docs/img/identity-center/ic-lists.png index 05b0bbc721001..bde5657f53aae 100644 Binary files a/docs/img/identity-center/ic-lists.png and b/docs/img/identity-center/ic-lists.png differ diff --git a/docs/img/identity-center/ic-migrate-end.png b/docs/img/identity-center/ic-migrate-end.png index 50c922acf1223..399484a5fce69 100644 Binary files a/docs/img/identity-center/ic-migrate-end.png and b/docs/img/identity-center/ic-migrate-end.png differ diff --git a/docs/img/identity-center/ic-migrate-mid.png 
b/docs/img/identity-center/ic-migrate-mid.png index 72adcd42a4f7b..38d9a59a8b1aa 100644 Binary files a/docs/img/identity-center/ic-migrate-mid.png and b/docs/img/identity-center/ic-migrate-mid.png differ diff --git a/docs/img/identity-center/ic-migrate-start.png b/docs/img/identity-center/ic-migrate-start.png index 561c5074197fa..7e8c0bb9d9d53 100644 Binary files a/docs/img/identity-center/ic-migrate-start.png and b/docs/img/identity-center/ic-migrate-start.png differ diff --git a/docs/img/identity-center/ic-pick-integration.png b/docs/img/identity-center/ic-pick-integration.png index 45407e384c5d3..b154143c2493d 100644 Binary files a/docs/img/identity-center/ic-pick-integration.png and b/docs/img/identity-center/ic-pick-integration.png differ diff --git a/docs/img/identity-center/ic-promote.png b/docs/img/identity-center/ic-promote.png index 7f288a6ebdfe7..5cb823d52a468 100644 Binary files a/docs/img/identity-center/ic-promote.png and b/docs/img/identity-center/ic-promote.png differ diff --git a/docs/img/identity-center/ic-role-assumed.png b/docs/img/identity-center/ic-role-assumed.png index dae226206e02d..d1af3eaece4e2 100644 Binary files a/docs/img/identity-center/ic-role-assumed.png and b/docs/img/identity-center/ic-role-assumed.png differ diff --git a/docs/img/identity-center/ic-saml-login-audit-log.png b/docs/img/identity-center/ic-saml-login-audit-log.png new file mode 100644 index 0000000000000..e995cfc7adb9e Binary files /dev/null and b/docs/img/identity-center/ic-saml-login-audit-log.png differ diff --git a/docs/img/identity-center/ic-select-ps.png b/docs/img/identity-center/ic-select-ps.png index a4400ada18c50..d140707b6498f 100644 Binary files a/docs/img/identity-center/ic-select-ps.png and b/docs/img/identity-center/ic-select-ps.png differ diff --git a/docs/img/identity-center/ic-step1.1.png b/docs/img/identity-center/ic-step1.1.png index 7a4b849841285..d6d9f4a21364e 100644 Binary files a/docs/img/identity-center/ic-step1.1.png and 
b/docs/img/identity-center/ic-step1.1.png differ diff --git a/docs/img/identity-center/ic-step1.2.png b/docs/img/identity-center/ic-step1.2.png index f98c0a02da476..1774d12d64195 100644 Binary files a/docs/img/identity-center/ic-step1.2.png and b/docs/img/identity-center/ic-step1.2.png differ diff --git a/docs/img/identity-center/ic-step2.png b/docs/img/identity-center/ic-step2.png index 1e392097e9125..9ab6e7821e6f0 100644 Binary files a/docs/img/identity-center/ic-step2.png and b/docs/img/identity-center/ic-step2.png differ diff --git a/docs/img/identity-center/ic-step3.png b/docs/img/identity-center/ic-step3.png index 70d871c885928..9a1a9d3f3e84e 100644 Binary files a/docs/img/identity-center/ic-step3.png and b/docs/img/identity-center/ic-step3.png differ diff --git a/docs/img/identity-center/ic-step4.png b/docs/img/identity-center/ic-step4.png index 1cd85509a2eb3..9e33a0f816aab 100644 Binary files a/docs/img/identity-center/ic-step4.png and b/docs/img/identity-center/ic-step4.png differ diff --git a/docs/img/identity-governance/access-lists/alice-review-request.png b/docs/img/identity-governance/access-lists/alice-review-request.png new file mode 100644 index 0000000000000..aa1f152fad719 Binary files /dev/null and b/docs/img/identity-governance/access-lists/alice-review-request.png differ diff --git a/docs/img/identity-governance/access-lists/bob-submit-request.png b/docs/img/identity-governance/access-lists/bob-submit-request.png new file mode 100644 index 0000000000000..98e5c2a2acafb Binary files /dev/null and b/docs/img/identity-governance/access-lists/bob-submit-request.png differ diff --git a/docs/img/identity-governance/access-lists/bob-user-roles.png b/docs/img/identity-governance/access-lists/bob-user-roles.png new file mode 100644 index 0000000000000..502baa78b7406 Binary files /dev/null and b/docs/img/identity-governance/access-lists/bob-user-roles.png differ diff --git a/docs/img/identity-governance/access-lists/ssh-access-list-owners.png 
b/docs/img/identity-governance/access-lists/ssh-access-list-owners.png new file mode 100644 index 0000000000000..0351501de158f Binary files /dev/null and b/docs/img/identity-governance/access-lists/ssh-access-list-owners.png differ diff --git a/docs/img/identity-governance/identity-governance-hero.png b/docs/img/identity-governance/identity-governance-hero.png new file mode 100644 index 0000000000000..47e50d4bcb991 Binary files /dev/null and b/docs/img/identity-governance/identity-governance-hero.png differ diff --git a/docs/img/identity-security/identity-security-hero.png b/docs/img/identity-security/identity-security-hero.png new file mode 100644 index 0000000000000..2a58d1a58df5a Binary files /dev/null and b/docs/img/identity-security/identity-security-hero.png differ diff --git a/docs/img/identity-security/rec-list-with-summary.png b/docs/img/identity-security/rec-list-with-summary.png new file mode 100644 index 0000000000000..1238a431f389b Binary files /dev/null and b/docs/img/identity-security/rec-list-with-summary.png differ diff --git a/docs/img/identity-security/summary-button.png b/docs/img/identity-security/summary-button.png new file mode 100644 index 0000000000000..f9b4c42e139fe Binary files /dev/null and b/docs/img/identity-security/summary-button.png differ diff --git a/docs/img/igs/entraid/entra-api-permission.png b/docs/img/igs/entraid/entra-api-permission.png new file mode 100644 index 0000000000000..a7776373c7107 Binary files /dev/null and b/docs/img/igs/entraid/entra-api-permission.png differ diff --git a/docs/img/igs/entraid/entra-create-enterprise-app.png b/docs/img/igs/entraid/entra-create-enterprise-app.png new file mode 100644 index 0000000000000..b07c31e5ac9cd Binary files /dev/null and b/docs/img/igs/entraid/entra-create-enterprise-app.png differ diff --git a/docs/img/igs/entraid/entra-create-group.png b/docs/img/igs/entraid/entra-create-group.png new file mode 100644 index 0000000000000..22c8a3e8adb8b Binary files /dev/null and 
b/docs/img/igs/entraid/entra-create-group.png differ diff --git a/docs/img/igs/entraid/entra-default-owners.png b/docs/img/igs/entraid/entra-default-owners.png new file mode 100644 index 0000000000000..1a69a390b9fdc Binary files /dev/null and b/docs/img/igs/entraid/entra-default-owners.png differ diff --git a/docs/img/igs/entraid/entra-finish.png b/docs/img/igs/entraid/entra-finish.png new file mode 100644 index 0000000000000..4a6b3dab5decb Binary files /dev/null and b/docs/img/igs/entraid/entra-finish.png differ diff --git a/docs/img/igs/entraid/entra-setup-oidc.png b/docs/img/igs/entraid/entra-setup-oidc.png new file mode 100644 index 0000000000000..8c01330a9f1c5 Binary files /dev/null and b/docs/img/igs/entraid/entra-setup-oidc.png differ diff --git a/docs/img/igs/entraid/entra-sso-config.png b/docs/img/igs/entraid/entra-sso-config.png new file mode 100644 index 0000000000000..a00bf74995fb7 Binary files /dev/null and b/docs/img/igs/entraid/entra-sso-config.png differ diff --git a/docs/img/igs/entraid/entra-teleport-acl-long-term.png b/docs/img/igs/entraid/entra-teleport-acl-long-term.png new file mode 100644 index 0000000000000..70cec8729df30 Binary files /dev/null and b/docs/img/igs/entraid/entra-teleport-acl-long-term.png differ diff --git a/docs/img/igs/entraid/entra-teleport-acl-short-term.png b/docs/img/igs/entraid/entra-teleport-acl-short-term.png new file mode 100644 index 0000000000000..7ebfc81de5e17 Binary files /dev/null and b/docs/img/igs/entraid/entra-teleport-acl-short-term.png differ diff --git a/docs/img/igs/entraid/entra-teleport-grafana-app.png b/docs/img/igs/entraid/entra-teleport-grafana-app.png new file mode 100644 index 0000000000000..6ecef9006653c Binary files /dev/null and b/docs/img/igs/entraid/entra-teleport-grafana-app.png differ diff --git a/docs/img/igs/entraid/entra-teleport-role-mapping.png b/docs/img/igs/entraid/entra-teleport-role-mapping.png new file mode 100644 index 0000000000000..c4cf9d3adffec Binary files /dev/null and 
b/docs/img/igs/entraid/entra-teleport-role-mapping.png differ diff --git a/docs/img/igs/entraid/entra-teleport-role-yaml-editor.png b/docs/img/igs/entraid/entra-teleport-role-yaml-editor.png new file mode 100644 index 0000000000000..2de943e6862dd Binary files /dev/null and b/docs/img/igs/entraid/entra-teleport-role-yaml-editor.png differ diff --git a/docs/img/infrastructure-as-code/terraform-cloud-variables.png b/docs/img/infrastructure-as-code/terraform-cloud-variables.png index cba8e494f4c07..e862ed1599342 100644 Binary files a/docs/img/infrastructure-as-code/terraform-cloud-variables.png and b/docs/img/infrastructure-as-code/terraform-cloud-variables.png differ diff --git a/docs/img/jetbrains-sftp/add-server.png b/docs/img/jetbrains-sftp/add-server.png index c72cfe9329a42..575d744f97ecb 100644 Binary files a/docs/img/jetbrains-sftp/add-server.png and b/docs/img/jetbrains-sftp/add-server.png differ diff --git a/docs/img/jetbrains-sftp/browse-window.png b/docs/img/jetbrains-sftp/browse-window.png index f5ce8cf244f92..8bce4da480219 100644 Binary files a/docs/img/jetbrains-sftp/browse-window.png and b/docs/img/jetbrains-sftp/browse-window.png differ diff --git a/docs/img/jetbrains-sftp/deployment-added.png b/docs/img/jetbrains-sftp/deployment-added.png index 0ebbc5ba0af83..b7a7314b76ece 100644 Binary files a/docs/img/jetbrains-sftp/deployment-added.png and b/docs/img/jetbrains-sftp/deployment-added.png differ diff --git a/docs/img/jetbrains-sftp/deployment-main.png b/docs/img/jetbrains-sftp/deployment-main.png index ddbf44f28e516..5e0abb730b605 100644 Binary files a/docs/img/jetbrains-sftp/deployment-main.png and b/docs/img/jetbrains-sftp/deployment-main.png differ diff --git a/docs/img/jetbrains-sftp/ssh-configurations.png b/docs/img/jetbrains-sftp/ssh-configurations.png index 603d7f6811e6c..05e768fba01f8 100644 Binary files a/docs/img/jetbrains-sftp/ssh-configurations.png and b/docs/img/jetbrains-sftp/ssh-configurations.png differ diff --git 
a/docs/img/jetbrains-sftp/successfully-connected.png b/docs/img/jetbrains-sftp/successfully-connected.png index 44c02523daeb3..c08b572862e5e 100644 Binary files a/docs/img/jetbrains-sftp/successfully-connected.png and b/docs/img/jetbrains-sftp/successfully-connected.png differ diff --git a/docs/img/k8s/mini-diagrams/k8s-to-teleport-mono.png b/docs/img/k8s/mini-diagrams/k8s-to-teleport-mono.png deleted file mode 100644 index 3f226d9dba2fd..0000000000000 Binary files a/docs/img/k8s/mini-diagrams/k8s-to-teleport-mono.png and /dev/null differ diff --git a/docs/img/k8s/mini-diagrams/teleport-in-k8s-mono.png b/docs/img/k8s/mini-diagrams/teleport-in-k8s-mono.png deleted file mode 100644 index e9f8562e2dea0..0000000000000 Binary files a/docs/img/k8s/mini-diagrams/teleport-in-k8s-mono.png and /dev/null differ diff --git a/docs/img/k8s/test-k8s-connection.png b/docs/img/k8s/test-k8s-connection.png deleted file mode 100644 index 8a332d43afec8..0000000000000 Binary files a/docs/img/k8s/test-k8s-connection.png and /dev/null differ diff --git a/docs/img/machine-id/jenkins.png b/docs/img/machine-id/jenkins.png index 63f6292946a45..6623c81638ce4 100644 Binary files a/docs/img/machine-id/jenkins.png and b/docs/img/machine-id/jenkins.png differ diff --git a/docs/img/management/access-list-web-ui.png b/docs/img/management/access-list-web-ui.png index 360d167251056..5adf1549cd970 100644 Binary files a/docs/img/management/access-list-web-ui.png and b/docs/img/management/access-list-web-ui.png differ diff --git a/docs/img/management/datadog-diagram.png b/docs/img/management/datadog-diagram.png index 71cfb3a6f42d2..56466f8d032b1 100644 Binary files a/docs/img/management/datadog-diagram.png and b/docs/img/management/datadog-diagram.png differ diff --git a/docs/img/management/fluentd-diagram.png b/docs/img/management/fluentd-diagram.png index cf525cc5d34dd..b20d07b83db78 100644 Binary files a/docs/img/management/fluentd-diagram.png and b/docs/img/management/fluentd-diagram.png differ diff 
--git a/docs/img/management/panther-ingest.png b/docs/img/management/panther-ingest.png index ee8bdd26e2ab3..3a8da03213738 100644 Binary files a/docs/img/management/panther-ingest.png and b/docs/img/management/panther-ingest.png differ diff --git a/docs/img/management/spacelift/apply-success.png b/docs/img/management/spacelift/apply-success.png index 69b708d3646f9..ba828b681dd43 100644 Binary files a/docs/img/management/spacelift/apply-success.png and b/docs/img/management/spacelift/apply-success.png differ diff --git a/docs/img/management/spacelift/id-file.png b/docs/img/management/spacelift/id-file.png index a420b5a954df0..0341b707be1bb 100644 Binary files a/docs/img/management/spacelift/id-file.png and b/docs/img/management/spacelift/id-file.png differ diff --git a/docs/img/management/spacelift/newstack.png b/docs/img/management/spacelift/newstack.png deleted file mode 100644 index c2a632d41e2ea..0000000000000 Binary files a/docs/img/management/spacelift/newstack.png and /dev/null differ diff --git a/docs/img/management/spacelift/pr-run.png b/docs/img/management/spacelift/pr-run.png index 42bf01d58b17f..692fc56a8d3af 100644 Binary files a/docs/img/management/spacelift/pr-run.png and b/docs/img/management/spacelift/pr-run.png differ diff --git a/docs/img/mcp-access/architecture-demo.svg b/docs/img/mcp-access/architecture-demo.svg new file mode 100644 index 0000000000000..9388f7b5b9092 --- /dev/null +++ b/docs/img/mcp-access/architecture-demo.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/docs/img/mcp-access/architecture-http.svg b/docs/img/mcp-access/architecture-http.svg new file mode 100644 index 0000000000000..710a7f87e671c --- /dev/null +++ b/docs/img/mcp-access/architecture-http.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/docs/img/mcp-access/architecture-sse.svg b/docs/img/mcp-access/architecture-sse.svg new file mode 100644 index 0000000000000..270a45f4392c8 --- /dev/null +++ b/docs/img/mcp-access/architecture-sse.svg @@ -0,0 
+1 @@ + \ No newline at end of file diff --git a/docs/img/mcp-access/architecture-stdio.svg b/docs/img/mcp-access/architecture-stdio.svg new file mode 100644 index 0000000000000..10204ff0f900f --- /dev/null +++ b/docs/img/mcp-access/architecture-stdio.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/docs/img/mcp-access/architecture.svg b/docs/img/mcp-access/architecture.svg new file mode 100644 index 0000000000000..40ad9c7af7c35 --- /dev/null +++ b/docs/img/mcp-access/architecture.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/docs/img/mcp-access/demo-server-claude-desktop.png b/docs/img/mcp-access/demo-server-claude-desktop.png new file mode 100644 index 0000000000000..81af1a4845233 Binary files /dev/null and b/docs/img/mcp-access/demo-server-claude-desktop.png differ diff --git a/docs/img/mcp-access/github-claude.png b/docs/img/mcp-access/github-claude.png new file mode 100644 index 0000000000000..52e0cb2f9a1f1 Binary files /dev/null and b/docs/img/mcp-access/github-claude.png differ diff --git a/docs/img/mcp-access/grafana-claude.png b/docs/img/mcp-access/grafana-claude.png new file mode 100644 index 0000000000000..beb06ff39a5d1 Binary files /dev/null and b/docs/img/mcp-access/grafana-claude.png differ diff --git a/docs/img/mcp-access/grafana-create-sa.png b/docs/img/mcp-access/grafana-create-sa.png new file mode 100644 index 0000000000000..a65b5fa2224bd Binary files /dev/null and b/docs/img/mcp-access/grafana-create-sa.png differ diff --git a/docs/img/mcp-access/notion-access.png b/docs/img/mcp-access/notion-access.png new file mode 100644 index 0000000000000..eed6004f0138e Binary files /dev/null and b/docs/img/mcp-access/notion-access.png differ diff --git a/docs/img/mcp-access/notion-claude.png b/docs/img/mcp-access/notion-claude.png new file mode 100644 index 0000000000000..c3c5ccaabd0b6 Binary files /dev/null and b/docs/img/mcp-access/notion-claude.png differ diff --git a/docs/img/mcp-access/notion-integration.png 
b/docs/img/mcp-access/notion-integration.png new file mode 100644 index 0000000000000..3a4920c96cc9b Binary files /dev/null and b/docs/img/mcp-access/notion-integration.png differ diff --git a/docs/img/mcp-access/troubleshoot-server-disconnected.png b/docs/img/mcp-access/troubleshoot-server-disconnected.png new file mode 100644 index 0000000000000..d3d3e8de0d55e Binary files /dev/null and b/docs/img/mcp-access/troubleshoot-server-disconnected.png differ diff --git a/docs/img/mcp-access/troubleshoot-tsh-binary-enoent.png b/docs/img/mcp-access/troubleshoot-tsh-binary-enoent.png new file mode 100644 index 0000000000000..0d53d97d5c5ba Binary files /dev/null and b/docs/img/mcp-access/troubleshoot-tsh-binary-enoent.png differ diff --git a/docs/img/mcp-access/vault-claude.png b/docs/img/mcp-access/vault-claude.png new file mode 100644 index 0000000000000..ebdb6ba35c1ae Binary files /dev/null and b/docs/img/mcp-access/vault-claude.png differ diff --git a/docs/img/moderated-file-transfer-dialog.png b/docs/img/moderated-file-transfer-dialog.png index 56aca6646d48d..fee90b78325dc 100644 Binary files a/docs/img/moderated-file-transfer-dialog.png and b/docs/img/moderated-file-transfer-dialog.png differ diff --git a/docs/img/motd/teleport-with-updated-MOTD.png b/docs/img/motd/teleport-with-updated-MOTD.png index 08705ebd429a1..dab35ae735337 100644 Binary files a/docs/img/motd/teleport-with-updated-MOTD.png and b/docs/img/motd/teleport-with-updated-MOTD.png differ diff --git a/docs/img/new-ssh-server.png b/docs/img/new-ssh-server.png index 0f3f7b79f280b..a4c66d3f7ec2d 100644 Binary files a/docs/img/new-ssh-server.png and b/docs/img/new-ssh-server.png differ diff --git a/docs/img/notification.png b/docs/img/notification.png index 0b48b94994b4f..75c7c361d3d29 100644 Binary files a/docs/img/notification.png and b/docs/img/notification.png differ diff --git a/docs/img/oracle/oracle-join-dynamic-group@2x.png b/docs/img/oracle/oracle-join-dynamic-group@2x.png index 
e78194f3bec1d..03468bfbc186b 100644 Binary files a/docs/img/oracle/oracle-join-dynamic-group@2x.png and b/docs/img/oracle/oracle-join-dynamic-group@2x.png differ diff --git a/docs/img/oracle/oracle-join-policy@2x.png b/docs/img/oracle/oracle-join-policy@2x.png index 8ff26e37917bc..a3f3d9a4bde53 100644 Binary files a/docs/img/oracle/oracle-join-policy@2x.png and b/docs/img/oracle/oracle-join-policy@2x.png differ diff --git a/docs/img/quickstart/welcome.png b/docs/img/quickstart/welcome.png index a6df65efc9a7d..e61d470617271 100644 Binary files a/docs/img/quickstart/welcome.png and b/docs/img/quickstart/welcome.png differ diff --git a/docs/img/request-access.png b/docs/img/request-access.png index aa577df57c2b5..a5b92eba05576 100644 Binary files a/docs/img/request-access.png and b/docs/img/request-access.png differ diff --git a/docs/img/resource-health-check/database-health-warning-indicator.png b/docs/img/resource-health-check/database-health-warning-indicator.png new file mode 100644 index 0000000000000..173c5c4edaf68 Binary files /dev/null and b/docs/img/resource-health-check/database-health-warning-indicator.png differ diff --git a/docs/img/resource-health-check/kubernetes-health-warning.png b/docs/img/resource-health-check/kubernetes-health-warning.png new file mode 100644 index 0000000000000..816f4a6a04ee8 Binary files /dev/null and b/docs/img/resource-health-check/kubernetes-health-warning.png differ diff --git a/docs/img/sailpoint/plugin-connector.png b/docs/img/sailpoint/plugin-connector.png new file mode 100644 index 0000000000000..3ddf11c69ec02 Binary files /dev/null and b/docs/img/sailpoint/plugin-connector.png differ diff --git a/docs/img/sailpoint/plugin-install.png b/docs/img/sailpoint/plugin-install.png new file mode 100644 index 0000000000000..37fcd6b85abcd Binary files /dev/null and b/docs/img/sailpoint/plugin-install.png differ diff --git a/docs/img/sailpoint/sailpoint-1.png b/docs/img/sailpoint/sailpoint-1.png new file mode 100644 index 
0000000000000..6ecc025e01cdc Binary files /dev/null and b/docs/img/sailpoint/sailpoint-1.png differ diff --git a/docs/img/sailpoint/sailpoint-3.png b/docs/img/sailpoint/sailpoint-3.png new file mode 100644 index 0000000000000..f178d75f8609f Binary files /dev/null and b/docs/img/sailpoint/sailpoint-3.png differ diff --git a/docs/img/sailpoint/sailpoint-4.png b/docs/img/sailpoint/sailpoint-4.png new file mode 100644 index 0000000000000..0f4fb3ab29e51 Binary files /dev/null and b/docs/img/sailpoint/sailpoint-4.png differ diff --git a/docs/img/sailpoint/sailpoint-5.png b/docs/img/sailpoint/sailpoint-5.png new file mode 100644 index 0000000000000..ca185b24f63de Binary files /dev/null and b/docs/img/sailpoint/sailpoint-5.png differ diff --git a/docs/img/sailpoint/sailpoint-6.png b/docs/img/sailpoint/sailpoint-6.png new file mode 100644 index 0000000000000..2343097e6d7a9 Binary files /dev/null and b/docs/img/sailpoint/sailpoint-6.png differ diff --git a/docs/img/sailpoint/sailpoint-7.png b/docs/img/sailpoint/sailpoint-7.png new file mode 100644 index 0000000000000..52a9635e3f748 Binary files /dev/null and b/docs/img/sailpoint/sailpoint-7.png differ diff --git a/docs/img/sailpoint/sailpoint-8.png b/docs/img/sailpoint/sailpoint-8.png new file mode 100644 index 0000000000000..fcc656545c5a0 Binary files /dev/null and b/docs/img/sailpoint/sailpoint-8.png differ diff --git a/docs/img/sailpoint/sailpoint-9.png b/docs/img/sailpoint/sailpoint-9.png new file mode 100644 index 0000000000000..dffce678a119d Binary files /dev/null and b/docs/img/sailpoint/sailpoint-9.png differ diff --git a/docs/img/sailpoint/sailspoint-scim-config.png b/docs/img/sailpoint/sailspoint-scim-config.png new file mode 100644 index 0000000000000..b5eda60c1ac7a Binary files /dev/null and b/docs/img/sailpoint/sailspoint-scim-config.png differ diff --git a/docs/img/server-access/getting-started-diagram.png b/docs/img/server-access/getting-started-diagram.png deleted file mode 100644 index 
e5c3e8f23534d..0000000000000 Binary files a/docs/img/server-access/getting-started-diagram.png and /dev/null differ diff --git a/docs/img/server-access/linux-servers-hero.jpg b/docs/img/server-access/linux-servers-hero.jpg new file mode 100644 index 0000000000000..0f8cbacf273b8 Binary files /dev/null and b/docs/img/server-access/linux-servers-hero.jpg differ diff --git a/docs/img/server-access/openssh-proxy.png b/docs/img/server-access/openssh-proxy.png deleted file mode 100644 index 7513260741b33..0000000000000 Binary files a/docs/img/server-access/openssh-proxy.png and /dev/null differ diff --git a/docs/img/spacelift.png b/docs/img/spacelift.png index d78e17ba3ea96..7549f646d24e8 100644 Binary files a/docs/img/spacelift.png and b/docs/img/spacelift.png differ diff --git a/docs/img/sso/entraid/oidc/entra-id-assign-group.png b/docs/img/sso/entraid/oidc/entra-id-assign-group.png new file mode 100644 index 0000000000000..482174b61be12 Binary files /dev/null and b/docs/img/sso/entraid/oidc/entra-id-assign-group.png differ diff --git a/docs/img/sso/entraid/oidc/entra-id-create-application.png b/docs/img/sso/entraid/oidc/entra-id-create-application.png new file mode 100644 index 0000000000000..9ecb2126b5c13 Binary files /dev/null and b/docs/img/sso/entraid/oidc/entra-id-create-application.png differ diff --git a/docs/img/sso/entraid/oidc/entra-id-create-group.png b/docs/img/sso/entraid/oidc/entra-id-create-group.png new file mode 100644 index 0000000000000..19a10add362f7 Binary files /dev/null and b/docs/img/sso/entraid/oidc/entra-id-create-group.png differ diff --git a/docs/img/sso/entraid/oidc/entra-id-oidc-client-credential.png b/docs/img/sso/entraid/oidc/entra-id-oidc-client-credential.png new file mode 100644 index 0000000000000..4d8d4ba3dee84 Binary files /dev/null and b/docs/img/sso/entraid/oidc/entra-id-oidc-client-credential.png differ diff --git a/docs/img/sso/entraid/oidc/entra-id-oidc-graph-permission.png 
b/docs/img/sso/entraid/oidc/entra-id-oidc-graph-permission.png new file mode 100644 index 0000000000000..78a78cb9b5523 Binary files /dev/null and b/docs/img/sso/entraid/oidc/entra-id-oidc-graph-permission.png differ diff --git a/docs/img/sso/entraid/oidc/entra-id-oidc-group-claim.png b/docs/img/sso/entraid/oidc/entra-id-oidc-group-claim.png new file mode 100644 index 0000000000000..55c374a24fac7 Binary files /dev/null and b/docs/img/sso/entraid/oidc/entra-id-oidc-group-claim.png differ diff --git a/docs/img/sso/entraid/oidc/entra-id-oidc-redirect-uri.png b/docs/img/sso/entraid/oidc/entra-id-oidc-redirect-uri.png new file mode 100644 index 0000000000000..f1c22c27ebbff Binary files /dev/null and b/docs/img/sso/entraid/oidc/entra-id-oidc-redirect-uri.png differ diff --git a/docs/img/sso/keycloak/Export SAML keys.png b/docs/img/sso/keycloak/Export SAML keys.png deleted file mode 100644 index 99aa0da11cb64..0000000000000 Binary files a/docs/img/sso/keycloak/Export SAML keys.png and /dev/null differ diff --git a/docs/img/team-diagram.png b/docs/img/team-diagram.png deleted file mode 100644 index b62255ad30120..0000000000000 Binary files a/docs/img/team-diagram.png and /dev/null differ diff --git a/docs/img/teleport-k8s-pod.png b/docs/img/teleport-k8s-pod.png deleted file mode 100644 index ab9991fe9c8b1..0000000000000 Binary files a/docs/img/teleport-k8s-pod.png and /dev/null differ diff --git a/docs/img/teleport-kubernetes-outside.png b/docs/img/teleport-kubernetes-outside.png deleted file mode 100644 index 826a28263f292..0000000000000 Binary files a/docs/img/teleport-kubernetes-outside.png and /dev/null differ diff --git a/docs/img/teleport-sso/bitbucket@2x.png b/docs/img/teleport-sso/bitbucket@2x.png index e9f8f6d756f5c..93c7f31f65fab 100644 Binary files a/docs/img/teleport-sso/bitbucket@2x.png and b/docs/img/teleport-sso/bitbucket@2x.png differ diff --git a/docs/img/teleport-sso/github@2x.png b/docs/img/teleport-sso/github@2x.png index e3fe956dc751a..f38bc551fc067 
100644 Binary files a/docs/img/teleport-sso/github@2x.png and b/docs/img/teleport-sso/github@2x.png differ diff --git a/docs/img/teleport-sso/google@2x.png b/docs/img/teleport-sso/google@2x.png index be600049e3b2a..aa7cd8d6fde99 100644 Binary files a/docs/img/teleport-sso/google@2x.png and b/docs/img/teleport-sso/google@2x.png differ diff --git a/docs/img/teleport-sso/microsoft@2x.png b/docs/img/teleport-sso/microsoft@2x.png index b13f26d038421..325ce8cfdf2be 100644 Binary files a/docs/img/teleport-sso/microsoft@2x.png and b/docs/img/teleport-sso/microsoft@2x.png differ diff --git a/docs/img/teleport-sso/openId@2x.png b/docs/img/teleport-sso/openId@2x.png index b4a8d5df5ad95..4fc5fac1ea370 100644 Binary files a/docs/img/teleport-sso/openId@2x.png and b/docs/img/teleport-sso/openId@2x.png differ diff --git a/docs/img/upload_files_connect_drag_drop.png b/docs/img/upload_files_connect_drag_drop.png index 2601ea640db29..f5d285f999335 100644 Binary files a/docs/img/upload_files_connect_drag_drop.png and b/docs/img/upload_files_connect_drag_drop.png differ diff --git a/docs/img/use-teleport/connect-kube-access.png b/docs/img/use-teleport/connect-kube-access.png index f510010f2c077..ee1db74123dd9 100644 Binary files a/docs/img/use-teleport/connect-kube-access.png and b/docs/img/use-teleport/connect-kube-access.png differ diff --git a/docs/img/use-teleport/connect-kube-list.png b/docs/img/use-teleport/connect-kube-list.png index 0c148f2fcc47f..f962b501dbd76 100644 Binary files a/docs/img/use-teleport/connect-kube-list.png and b/docs/img/use-teleport/connect-kube-list.png differ diff --git a/docs/img/use-teleport/connect-mcp-http.png b/docs/img/use-teleport/connect-mcp-http.png new file mode 100644 index 0000000000000..d9029ab02d2e9 Binary files /dev/null and b/docs/img/use-teleport/connect-mcp-http.png differ diff --git a/docs/img/vnet/configure-ssh-clients.png b/docs/img/vnet/configure-ssh-clients.png new file mode 100644 index 0000000000000..522856f80bfbb Binary files 
/dev/null and b/docs/img/vnet/configure-ssh-clients.png differ diff --git a/docs/img/vnet/how-it-works.svg b/docs/img/vnet/how-it-works.svg index e37da9d95d0e8..c3f47d4d8d486 100644 --- a/docs/img/vnet/how-it-works.svg +++ b/docs/img/vnet/how-it-works.svg @@ -1 +1 @@ - \ No newline at end of file + \ No newline at end of file diff --git a/docs/img/vnet/ssh-connect.png b/docs/img/vnet/ssh-connect.png new file mode 100644 index 0000000000000..2baeee20e737f Binary files /dev/null and b/docs/img/vnet/ssh-connect.png differ diff --git a/docs/img/vnet/start-vnet.png b/docs/img/vnet/start-vnet.png new file mode 100644 index 0000000000000..81d03e073624b Binary files /dev/null and b/docs/img/vnet/start-vnet.png differ diff --git a/docs/img/vscode/add-host.png b/docs/img/vscode/add-host.png index ccb4fec84f72a..f11c81e59ab91 100644 Binary files a/docs/img/vscode/add-host.png and b/docs/img/vscode/add-host.png differ diff --git a/docs/img/vscode/connected-editor.png b/docs/img/vscode/connected-editor.png index 612022d52a198..9768efbc55d01 100644 Binary files a/docs/img/vscode/connected-editor.png and b/docs/img/vscode/connected-editor.png differ diff --git a/docs/img/vscode/host-added-notification.png b/docs/img/vscode/host-added-notification.png index 205f7b3521c82..42f5e1564b378 100644 Binary files a/docs/img/vscode/host-added-notification.png and b/docs/img/vscode/host-added-notification.png differ diff --git a/docs/img/vscode/select-host-to-connect.png b/docs/img/vscode/select-host-to-connect.png index 20d51869ef4c0..e30a08b2e2011 100644 Binary files a/docs/img/vscode/select-host-to-connect.png and b/docs/img/vscode/select-host-to-connect.png differ diff --git a/docs/img/vscode/settings.png b/docs/img/vscode/settings.png index b62225d710ce1..acf8dbd76cca3 100644 Binary files a/docs/img/vscode/settings.png and b/docs/img/vscode/settings.png differ diff --git a/docs/img/vscode/window-indicator.png b/docs/img/vscode/window-indicator.png index c93df22bea123..083f27bc6a45d 
100644 Binary files a/docs/img/vscode/window-indicator.png and b/docs/img/vscode/window-indicator.png differ diff --git a/docs/img/webui-active-session.png b/docs/img/webui-active-session.png index 223232f949478..9f3c0edd9f754 100644 Binary files a/docs/img/webui-active-session.png and b/docs/img/webui-active-session.png differ diff --git a/docs/img/webui_billing_cycle.png b/docs/img/webui_billing_cycle.png index bfa7b31704107..701a18dacc058 100644 Binary files a/docs/img/webui_billing_cycle.png and b/docs/img/webui_billing_cycle.png differ diff --git a/docs/pages/admin-guides/access-controls/access-controls.mdx b/docs/pages/admin-guides/access-controls/access-controls.mdx deleted file mode 100644 index 2c4efb89c57a9..0000000000000 --- a/docs/pages/admin-guides/access-controls/access-controls.mdx +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: Access Controls -description: How to provide role-based access control (RBAC) for servers, databases, Kubernetes clusters, and other resources in your infrastructure -layout: tocless-doc ---- - -Teleport's role-based access control (RBAC) enables you to set fine-grained -policies for who can perform certain actions against specific resources. For -example, you can allow analytics team members to SSH into a MongoDB read -replica, but not the main database. You can also allow SREs to access a -production server only when using a [trusted hardware -device](./device-trust/guide.mdx), or if approved by someone else from the same -team. 
- - - diff --git a/docs/pages/admin-guides/access-controls/access-lists/access-lists.mdx b/docs/pages/admin-guides/access-controls/access-lists/access-lists.mdx deleted file mode 100644 index 7e8706407b256..0000000000000 --- a/docs/pages/admin-guides/access-controls/access-lists/access-lists.mdx +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: Access Lists -description: Use Access Lists in Teleport -layout: tocless-doc ---- - -Access Lists allow Teleport users to be granted long term access to resources -managed within Teleport. With Access Lists, administrators and Access List -owners can regularly audit and control membership to specific roles and -traits, which then tie easily back into Teleport's existing RBAC system. - -[Getting Started with Access Lists](guide.mdx) - -[Nested Access Lists](nested-access-lists.mdx) - -[Access List Reference](../../../reference/access-controls/access-lists.mdx) diff --git a/docs/pages/admin-guides/access-controls/access-lists/guide.mdx b/docs/pages/admin-guides/access-controls/access-lists/guide.mdx deleted file mode 100644 index 9b2f9811da28c..0000000000000 --- a/docs/pages/admin-guides/access-controls/access-lists/guide.mdx +++ /dev/null @@ -1,84 +0,0 @@ ---- -title: Getting Started with Access Lists -description: Learn how to use Access Lists to manage and audit long lived access to Teleport resources. ---- - -This guide will help you: -- Create an Access List -- Assign a member to it -- Verify permissions granted through the list membership - -## Prerequisites - -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) - -- (!docs/pages/includes/tctl.mdx!) -- A user with the preset `editor` role, which will have permissions to create Access Lists. - -## Step 1/4. Setting up the Application Service on the cluster for testing - -One of the easiest ways to get resources on the cluster for testing is to set up a Teleport Application Service -instance with the debugging application enabled. 
To do this, add the following config to your `teleport.yaml` -configuration: - -```yaml -app_service: - enabled: true - debug_app: true -``` - -And restart Teleport. The "dumper" app should show up in the resource list. - -![Debug app](../../../../img/access-controls/access-lists/debug-app.png) - -## Step 2/4. Create a test user - -We need to create a simple test user that has only the `requester` role, which has no default access -to anything within a cluster. This user will only be used for the purposes of this guide, so you may use -another user if you so choose. If you would rather use your own user, skip to the next step. - -Navigate to the management pane and select "Users." Click on "Create New User" and fill in `test-user` as -the name and select `requester` as the role. - -![Create Test User](../../../../img/access-controls/access-lists/create-test-user.png) - -Click "Save," and then navigate to the provided URL in order to set up the credentials for your test user. -Try logging into the cluster with the test user to verify that no resources show up in the resources page. - -## Step 3/4. Create an Access List - -Next, we'll create a simple Access List that will grant the `access` role to its members. -Login as the administrative user mentioned in the prerequisites. Click on "Add New" in the left pane, and then "Create an Access List." - -![Navigate to create new Access List](../../../../img/access-controls/access-lists/create-new-access-list.png) - -Here, fill in a title, description, and grant the `access` role. Select a date in the future for the next -review date. - -![Fill out Access List Fields](../../../../img/access-controls/access-lists/fill-out-access-list-fields.png) - -Under "List Owners" select `editor` as a required role, then add your administrative user under "Add -Eligible List Owners." By selecting `editor` as a required role, this will ensure that any owner of the list -must have the `editor` role in order to actually manage the list. 
If the user loses this role later, they will -not be able to manage the list, though they will still be reflected as an owner. - -![Select an owner](../../../../img/access-controls/access-lists/select-owner.png) - -Under "Members" select `requester` as a required role, then add your test user to the Access List. Similar to -the owner requirements, this will ensure that any member of the list must have the `requester` role in order to -be granted the access described in this list. If the user loses this role later, they will not be granted the -roles or traits described in the Access List. - -![Add a member](../../../../img/access-controls/access-lists/add-member.png) - -Finally, click "Create Access List" at the bottom of the page. - -## Step 4/4. Login as the test user - -Again, login as the test user. When logging in now, you should now see the dumper application contained within -the cluster, and should be able to interact with it as expected. - -## Next Steps - -- Familiarize yourself with the CLI tooling available for managing Access Lists in the [reference](../../../reference/access-controls/access-lists.mdx). -- Learn how to work with nested Access Lists in the [nested Access Lists guide](./nested-access-lists.mdx). diff --git a/docs/pages/admin-guides/access-controls/access-request-plugins/access-request-plugins.mdx b/docs/pages/admin-guides/access-controls/access-request-plugins/access-request-plugins.mdx deleted file mode 100644 index 8dc6813f74e5a..0000000000000 --- a/docs/pages/admin-guides/access-controls/access-request-plugins/access-request-plugins.mdx +++ /dev/null @@ -1,59 +0,0 @@ ---- -title: Just-in-Time Access Request Plugins -description: "Use Teleport's Access Request plugins to least-privilege access without sacrificing productivity." -layout: tocless-doc ---- - -Teleport Just-in-Time Access Requests allow users to receive temporary elevated -privileges by seeking consent from one or more reviewers, depending on your -configuration. 
- -With Teleport's Access Request plugins, users can manage Access Requests from -within your organization's existing messaging and project management solutions. - -Access Request plugins are self-contained programs that connect to the Teleport -Auth Service's gRPC API to listen for audit events relating to new or updated -Access Requests. After processing an Access Request event, Access Request plugins -interact with a third-party API (e.g., the Slack or PagerDuty APIs). - -## Enrolling Access Request plugins in Teleport Cloud - -(!docs/pages/includes/plugins/enroll.mdx name="Access Request plugins"!) - -The following Access Request plugins are hosted on Teleport Cloud: - -- Discord -- Jira -- Mattermost -- Opsgenie -- PagerDuty -- ServiceNow -- Slack -- Datadog - -## Self-hosting Access Request plugins - -You can host Teleport Access Request plugins yourself. Self-hosted Access -Request plugins are the only way to manage Access Requests through a third-party -communication platform if you are self-hosting Teleport. If you use Teleport -Team or Teleport Enterprise Cloud, you can run self-hosted Access Request -plugins for more control over configuration and architecture. - -Access Request plugins can run within private networks that are isolated from -the Teleport Auth Service. To access the Auth Service API, they connect to the -Proxy Service, which establishes a reverse tunnel for the plugin to access the -Auth Service. - -You can run multiple instances of an Access Request plugin for high availability -by deploying each instance in a separate availability zone. There is no need for -additional configuration or load balancing, as plugins avoid creating duplicate -requests to their third-party APIs. - -Learn how to deploy and configure a plugin for your organization's communication -workflows by reading our setup guides: - -(!docs/pages/includes/access-request-integrations.mdx!) 
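For reference, the connection settings shared by the self-hosted plugins follow a common shape: the plugin is pointed at the Proxy Service (or cloud tenant) address and reads an exported identity file. Below is a minimal sketch of that TOML section, with a hypothetical address and identity path; each plugin's setup guide shows the exact fields for that plugin:

```toml
[teleport]
# Proxy Service or Teleport Enterprise Cloud tenant address. The plugin
# dials this endpoint; the Proxy Service establishes a reverse tunnel
# through which the plugin reaches the Auth Service API.
addr = "teleport.example.com:443"
# Identity file exported for the plugin's Teleport user.
identity = "/var/lib/teleport/plugins/access-plugin/identity"
```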
- -To read more about the architecture of an Access Request plugin, and start -writing your own, read our [Access Request plugin development -guide](../../api/access-plugin.mdx). diff --git a/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-discord.mdx b/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-discord.mdx deleted file mode 100644 index f4cd7ac743bbe..0000000000000 --- a/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-discord.mdx +++ /dev/null @@ -1,378 +0,0 @@ ---- -title: Run the Discord Access Request Plugin -description: How to set up Teleport's Discord plugin for privilege elevation approvals. ---- - -import BotLogo from "/static/avatar_logo.png"; - -This guide will explain how to set up Discord to receive Access Request messages -from Teleport. Teleport's Discord integration notifies individuals and channels of -Access Requests. Users can then approve and deny Access Requests from within -Discord, making it easier to implement security best practices without -compromising productivity. - -![The Discord Access Request plugin](../../../../img/enterprise/plugins/discord.png) - -
-This integration is hosted on Teleport Cloud - -(!docs/pages/includes/plugins/enroll.mdx name="the Discord integration"!) - -
- -## Prerequisites - -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) - -(!docs/pages/includes/machine-id/plugin-prerequisites.mdx!) - -- Admin account on your Discord server. Installing a bot requires at least the - "manage server" permission. -- Either a Linux host or Kubernetes cluster where you will run the Discord plugin. -- (!docs/pages/includes/tctl.mdx!) - -## Step 1/8. Define RBAC resources - -Before you set up the Discord plugin, you will need to enable Role Access Requests -in your Teleport cluster. - -(!/docs/pages/includes/plugins/editor-request-rbac.mdx!) - -## Step 2/8. Install the Teleport Discord plugin - -(!docs/pages/includes/plugins/install-access-request.mdx name="discord"!) - -## Step 3/8. Create a user and role for the plugin - -(!docs/pages/includes/plugins/rbac-with-friendly-name.mdx!) - -(!/docs/pages/includes/plugins/rbac-impersonate.mdx!) - -## Step 4/8. Export the access plugin identity - -Give the plugin access to a Teleport identity file. We recommend using Machine -ID for this in order to produce short-lived identity files that are less -dangerous if exfiltrated, though in demo deployments, you can generate -longer-lived identity files with `tctl`: - - - -(!docs/pages/includes/plugins/tbot-identity.mdx secret="teleport-plugin-discord-identity"!) - - -(!docs/pages/includes/plugins/identity-export.mdx user="access-plugin" secret="teleport-plugin-discord-identity"!) - - - -## Step 5/8. Register a Discord app - -The Access Request plugin for Discord receives Access Request events from the -Teleport Auth Service, formats them into Discord messages, and sends them to the -Discord API to post them in your guild (Discord server). For this to work, -you must register a new app with the Discord API. - -### Create your application - -Visit [https://discord.com/developers/applications](https://discord.com/developers/applications) -to create a new Discord application. Click "New Application" and name the application "Teleport".
- -Set the application icon ([download application icon here](../../../../img/sso/onelogin/teleport.png)). - -### Create the application bot - -Go to the "Bot" tab and choose "Add Bot". You can download our avatar to set as your Bot icon. Uncheck the "Public -Bot" toggle, as this bot should only be used within your Discord servers. -Finally, press "Reset Token", then copy the new token and save it in a safe place. -This token will be used by the Teleport plugin to authenticate against the -Discord API. - -### Install and authorize the application in your Discord server - -Go to the "OAuth2" tab, open the "URL Generator", and check the "bot" and "Send Messages" permissions. - -![Set Discord permissions](../../../../img/enterprise/plugins/discord/discord-permissions.png) - -Copy the generated URL and open it in your browser. Choose to install the application into the -desired Discord server. If the server is not available in the -dropdown list, it means you don't have sufficient rights. Ask a server -administrator to grant you a role with the "manage server" permission. - - -The same application can be installed into multiple Discord servers. To -do so, open the OAuth URL multiple times and choose a different server each time. You -must be an admin on a Discord server to install the app into it. - - -## Step 6/8. Configure the Teleport Discord plugin - -At this point, the Teleport Discord plugin has the credentials it needs to -communicate with your Teleport cluster and the Discord API. In this step, you will -configure the Discord plugin to use these credentials. You will also configure the -plugin to notify the right Discord channels when it receives an Access Request -update. - -### Create a config file - - - -The Teleport Discord plugin uses a config file in TOML format.
Generate a -boilerplate config by running the following command (the plugin will not run -unless the config file is in `/etc/teleport-discord.toml`): - -```code -$ teleport-discord configure | sudo tee /etc/teleport-discord.toml > /dev/null -``` - -This should result in a config file like the one below: - -```toml -(!examples/resources/plugins/teleport-discord.toml!) -``` - - -The Discord Helm chart uses a YAML values file to configure the plugin. -On your local workstation, create a file called `teleport-discord-helm.yaml` -based on the following example: - -```yaml -(!examples/resources/plugins/teleport-discord-helm.yaml!) -``` - - - - -### Edit the config file - -Open the configuration file created for the Teleport Discord plugin and update the following fields: - -**`[teleport]`** - -The Discord plugin uses this section to connect to the Teleport Auth Service. - -(!docs/pages/includes/plugins/config-toml-teleport.mdx!) - -(!docs/pages/includes/plugins/refresh-plugin-identity.mdx!) - -**`[discord]`** - -`token`: Paste the bot token you saved earlier into this field. - -**`[role_to_recipients]`** - -The `role_to_recipients` map configures the channels that the Discord plugin will -notify when a user requests access to a specific role. When the Discord plugin -receives an Access Request from the Auth Service, it will look up the role being -requested and identify the Discord channels to notify. - -Each channel is represented by a numeric ID. Channels can be public, private, or -direct messages between a user and the bot. -To determine the numeric ID of a channel for the bot to notify, follow the instructions below: - - - - - Open Discord in a web browser and navigate to the desired channel. - - The web browser URL should look like: - ``` - https://discord.com/channels// - ``` - - Copy the last part of the URL (everything after the last `/`), which is the channel ID. - - - Open Discord in a web browser and navigate to the desired channel.
- - In the channel list choose "Create invite", type "teleport" in the search field - and invite your Discord Teleport bot. The bot should now appear in the channel - member list. - - The web browser URL should look like: - ``` - https://discord.com/channels// - ``` - - Copy the last part of the URL (everything after the last `/`), which is the channel ID. - - - To retrieve the channel ID of the private discussion between User A and the - Teleport bot, have User A send a direct message to the Teleport bot. This will - open a conversation between the user and the bot. Once the conversation is - initiated, the user can open the discussion page. - - The web browser URL should look like: - ``` - https://discord.com/channels/@me/ - ``` - - Copy the last part of the URL (everything after the last `/`), which is the channel ID. - - - -In the `role_to_recipients` map, each key is the name of a Teleport role. Each -value configures the Discord channel (or channels) to notify. - -The `role_to_recipients` map must also include an entry for `"*"`, which the -plugin looks up if no other entry matches a given role name. In the example -above, requests for roles aside from `dev` will notify the -`security-team` channel. - -Configure the Discord plugin to notify you when a user requests the `editor` role -by adding the following to your `role_to_recipients` config (replace -`YOUR-CHANNEL-ID` with a valid channel ID): - - - - -Here is an example of a `role_to_recipients` map. 
Each value can be a single -string or an array of strings: - -```toml -[role_to_recipients] -"*" = "YOUR-CHANNEL-ID" -"editor" = "YOUR-CHANNEL-ID" -``` - - - -In the Helm chart, the `role_to_recipients` field is called `roleToRecipients` -and uses the following format, where keys are strings and values are arrays of -strings: - -```yaml -roleToRecipients: - "*": ["YOUR-CHANNEL-ID"] - "editor": ["YOUR-CHANNEL-ID"] -``` - - - -The final configuration file should resemble the following: - - - -```toml -(!examples/resources/plugins/teleport-discord.toml!) -``` - - -```yaml -(!examples/resources/plugins/teleport-discord-helm.yaml!) -``` - - - -## Step 7/8. Test your Discord app - -Once Teleport is running, you've created the Discord app, and the plugin is -configured, you can now run the plugin and test the workflow. - - - -Start the plugin: - -```code -$ teleport-discord start -``` - -If everything works fine, the log output should look like this: - -```code -$ teleport-discord start -INFO Starting Teleport Access Discord Plugin (=teleport.version=): discord/app.go:80 -INFO Plugin is ready discord/app.go:101 -``` - - -Start the plugin: - -```code -$ docker run -v :/etc/teleport-discord.toml public.ecr.aws/gravitational/teleport-plugin-discord:(=teleport.version=) start -``` - - -Install the plugin: - -```code -$ helm upgrade --install teleport-plugin-discord teleport/teleport-plugin-discord --values teleport-discord-helm.yaml -``` - -To inspect the plugin's logs, use the following command: - -```code -$ kubectl logs deploy/teleport-plugin-discord -``` - -Debug logs can be enabled by setting `log.severity` to `DEBUG` in -`teleport-discord-helm.yaml` and executing the `helm upgrade ...` command -above again. Then you can restart the plugin with the following command: - -```code -$ kubectl rollout restart deployment teleport-plugin-discord -``` - - - -Create an Access Request and check if the plugin works as expected with the -following steps. 
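For reference, a requester can create such a request from the command line with `tsh`; for example, for the `editor` role configured in Step 1 (the reason text is illustrative):

```code
$ tsh request create --roles=editor --reason="Need to update an auth connector"
```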
- -### Create an Access Request - -(!docs/pages/includes/plugins/create-request.mdx!) - -The channel you configured earlier to review the request should receive a -message from "Teleport" in Discord allowing them to visit a link in the Teleport -Web UI and either approve or deny the request. - -### Resolve the request - -(!docs/pages/includes/plugins/resolve-request.mdx!) - -Once the request is resolved, the Discord bot will update the Access Request -message with ✅ or ❌, depending on whether the request was approved or denied. - - - -When the Discord plugin posts an Access Request notification to a channel, anyone -with access to the channel can view the notification and follow the link. While -users must be authorized via their Teleport roles to review Access Requests, you -should still check the Teleport audit log to ensure that the right users are -reviewing the right requests. - -When auditing Access Request reviews, check for events with the type `Access -Request Reviewed` in the Teleport Web UI. - - - -## Step 8/8. Set up systemd - -This section is only relevant if you are running the Teleport Discord plugin on a -Linux host. - -In production, we recommend starting the Teleport plugin daemon via an init -system like systemd. Here's the recommended Teleport plugin service unit file -for systemd: - -```ini -(!examples/systemd/plugins/teleport-discord.service!) -``` - -Save this as `teleport-discord.service` in either `/usr/lib/systemd/system/` or -another [unit file load -path](https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Unit%20File%20Load%20Path) -supported by systemd. 
- -Enable and start the plugin: - -```code -$ sudo systemctl enable teleport-discord -$ sudo systemctl start teleport-discord -``` - -## Next steps - -- Read our guides to configuring [Resource Access - Requests](../access-requests/resource-requests.mdx) and [Role Access - Requests](../access-requests/role-requests.mdx) so you can get the most out - of your Access Request plugins. diff --git a/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-email.mdx b/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-email.mdx deleted file mode 100644 index 8c0aebcba19b1..0000000000000 --- a/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-email.mdx +++ /dev/null @@ -1,497 +0,0 @@ ---- -title: Access Requests with Email -description: How to set up the Teleport email plugin to notify users when another user requests elevated privileges. ---- - -This guide will explain how to set up Teleport to send Just-in-Time Access -Request notifications to users via email. Since all organizations use email for -at least some of their communications, Teleport's email plugin makes it -straightforward to integrate Access Requests into your existing workflows, -letting you implement security best practices without compromising productivity. - -![The email Access Request plugin](../../../../img/enterprise/plugins/email.png) - -
-This integration is hosted on Teleport Enterprise (Cloud) - -(!docs/pages/includes/plugins/enroll.mdx name="the Email integration"!) - -![Enroll email plugin](../../../../img/enterprise/plugins/email/enroll-email.png) - -Configure and connect the email integration by providing the following configuration -values: - -**`Sender`**: Configures the sender address. - -**`Fallback Recipient`**: Configures the default recipient for Access Request -notifications. - -**`Email Service`**: Selects the desired email service. Note: only `Mailgun` is -supported for Teleport Enterprise (Cloud). - -![Configure Mailgun integration](../../../../img/enterprise/plugins/email/configure-mailgun.png) - -Complete the Mailgun integration by providing the following Mailgun API -configuration values: - -**`Domain`**: Configures the Mailgun sending domain. - -**`Mailgun Private Key`**: Configures the Mailgun API key.
- -## Prerequisites - -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) - -(!docs/pages/includes/machine-id/plugin-prerequisites.mdx!) - -- Access to an SMTP service. The Teleport email plugin supports either Mailgun - or a generic SMTP service that authenticates via username and password. -- Either a Linux host or Kubernetes cluster where you will run the email plugin. - - - -The Teleport plugin needs to use a username and password to authenticate to your -SMTP service. To mitigate the risk of these credentials being leaked, you should -set up a dedicated email account for the Teleport plugin and rotate the password -regularly. - - - -- (!docs/pages/includes/tctl.mdx!) - -## Step 1/7. Define RBAC resources - -Before you set up the email plugin, enable Role Access Requests in your Teleport cluster. - -(!/docs/pages/includes/plugins/editor-request-rbac.mdx!) - -## Step 2/7. Install the Teleport email plugin - -
-Using a local SMTP server? - -If you are using a local SMTP server to test the plugin, you should install the -plugin on your local machine to ensure the plugin can connect to the SMTP server -and perform any necessary DNS lookups to send email. - -Your Teleport cluster does *not* need to perform DNS lookups for your plugin -because the plugin dials out to the Teleport Proxy Service or Teleport Auth Service. - -
- -(!docs/pages/includes/plugins/install-access-request.mdx name="email"!) - -## Step 3/7. Create a user and role for the plugin - -(!docs/pages/includes/plugins/rbac-with-friendly-name.mdx!) - -(!/docs/pages/includes/plugins/rbac-impersonate.mdx!) - -## Step 4/7. Export the access plugin identity - -Give the plugin access to a Teleport identity file. We recommend using Machine -ID for this in order to produce short-lived identity files that are less -dangerous if exfiltrated, though in demo deployments, you can generate -longer-lived identity files with `tctl`: - - - -(!docs/pages/includes/plugins/tbot-identity.mdx secret="teleport-plugin-email-identity"!) - - -(!docs/pages/includes/plugins/identity-export.mdx user="access-plugin" secret="teleport-plugin-email-identity" !) - - - -## Step 5/7. Configure the plugin - -At this point, you have generated credentials that the email plugin will use to -connect to Teleport. You will now configure the plugin to use these credentials -to receive Access Request notifications from Teleport and email them to your -chosen recipients. - -### Create a config file - - - - -The Teleport email plugin uses a configuration file in TOML format. Generate a -boilerplate configuration by running the following command: - -```code -$ teleport-email configure | sudo tee /etc/teleport-email.toml -``` - - - -The email plugin Helm Chart uses a YAML values file to configure the plugin. -On your local workstation, create a file called `teleport-email-helm.yaml` -based on the following example: - -```yaml -(!examples/resources/plugins/teleport-email-helm.yaml!) -``` - - - - -### Edit the configuration file - -Edit the configuration file for your environment. We will show you how to set -each value below. - -### `[teleport]` - -(!docs/pages/includes/plugins/config-toml-teleport.mdx!) - -(!docs/pages/includes/plugins/refresh-plugin-identity.mdx!) 
- -### `[mailgun]` or `[smtp]` - -Provide the credentials for your SMTP service, depending on whether you are using -Mailgun or a generic SMTP service. - - - - -If you are deploying the email plugin on a Linux host: - -1. In the `mailgun` section, assign `domain` to the domain name and subdomain of - your Mailgun account. -1. Assign `mailgun.private_key` to your Mailgun private key. - -If you are deploying the email plugin on Kubernetes: - -1. Write your Mailgun private key to a local file called `mailgun-private-key`. -1. Create a Kubernetes secret from the file: - - ```code - $ kubectl -n teleport create secret generic mailgun-private-key --from-file=mailgun-private-key - ``` - -1. Assign `mailgun.privateKeyFromSecret` to `mailgun-private-key`. - - - - -Assign `host` to the fully qualified domain name of your SMTP service, omitting -the URL scheme and port. (If you're using a local SMTP server for testing, use -`"localhost"` for `host`.) Assign `port` to the port of your SMTP service. - -If you are running the email plugin on a Linux host, fill in `username` and -`password`. - - - -You can also save your password to a separate file and assign `password_file` to -the file's path. The plugin reads the file and uses the file's content as the -password. - - - -If you are deploying the email plugin on Kubernetes: - -1. Write your SMTP service's password to a local file called `smtp-password`. -1. Create a Kubernetes secret from the file: - - ```code - $ kubectl -n teleport create secret generic smtp-password --from-file=smtp-password - ``` - -1. Assign `smtp.passwordFromSecret` to `smtp-password`.
-Disabling TLS for testing - -If you are testing the email plugin against a trusted internal SMTP server where -you would rather not use TLS—e.g., a local SMTP server on your development -machine—you can assign the `starttls_policy` setting to `disabled` (always -disable TLS) or `opportunistic` (disable TLS if the server does not advertise -the `STARTTLS` extension). The default is to always enforce TLS, and you should -leave this setting unassigned unless you know what you are doing and understand -the risks. - -For Kubernetes deployments, `starttls_policy` is called `smtp.starttlsPolicy` in -the Helm values file for the email plugin. - -
- -
-
- -### `[delivery]` - -Assign `sender` to the email address from which you would like the Teleport -plugin to send messages. - -### `[role_to_recipients]` - -The `role_to_recipients` map (`roleToRecipients` for Helm users) configures the -recipients that the email plugin will notify when a user requests access to a -specific role. When the plugin receives an Access Request from the Auth Service, -it will look up the role being requested and identify the recipients to notify. - - - - -Here is an example of a `role_to_recipients` map. Each value can be a single -string or an array of strings: - -```toml -[role_to_recipients] -"*" = ["security@example.com", "executive-team@example.com"] -"dev" = "eng@example.com" -"dba" = "mallory@example.com" -``` - - - - -In the Helm chart, the `role_to_recipients` field is called `roleToRecipients` -and uses the following format, where keys are strings and values are arrays of -strings: - -```yaml -roleToRecipients: - "*": ["security@example.com", "executive-team@example.com"] - "dev": ["eng@example.com"] - "dba": ["mallory@example.com"] -``` - - - - -In the `role_to_recipients` map, each key is the name of a Teleport role. Each -value configures the recipients the plugin will email when it receives an Access -Request for that role. Each string must be an email address. - -The `role_to_recipients` map must also include an entry for `"*"`, which the -plugin looks up if no other entry matches a given role name. In the example -above, requests for roles aside from `dev` and `dba` will notify -`security@example.com` and `executive-team@example.com`. - -
-Suggested reviewers - -Users can suggest reviewers when they create an Access Request, e.g.,: - -```code -$ tsh request create --roles=dbadmin --reviewers=alice@example.com,ivan@example.com -``` - -If an Access Request includes suggested reviewers, the email plugin will add -these to the list of recipients to notify. If a suggested reviewer is an email -address, the plugin will send a message to that recipient in addition to those -configured in `role_to_recipients`. - -
- -Configure the email plugin to notify you when a user requests the `editor` role -by adding the following to your `role_to_recipients` config, replacing -`YOUR_EMAIL_ADDRESS` with the appropriate address: - - - -```toml -[role_to_recipients] -"*" = "YOUR_EMAIL_ADDRESS" -"editor" = "YOUR_EMAIL_ADDRESS" -``` - - -```yaml -roleToRecipients: - "*": "YOUR_EMAIL_ADDRESS" - "editor": "YOUR_EMAIL_ADDRESS" -``` - - - -
-Configuring recipients without role mapping - -If you do not plan to use role-to-recipient mapping, you can configure the -Teleport email plugin to notify a static list of recipients for every Access -Request event by using the `delivery.recipients` field: - - - -```toml -[delivery] -recipients = ["eng@example.com", "dev@example.com"] -``` - -```yaml -delivery: - recipients: ["eng@example.com", "dev@example.com"] -``` - - - -If you use `delivery.recipients`, you must remove the `role_to_recipients` -configuration section. Behind the scenes, `delivery.recipients` assigns the -recipient list to a `role_to_recipients` mapping under the wildcard value `"*"`.
- -Your configuration should resemble the following: - - - - -```toml -# /etc/teleport-email.toml -[teleport] -addr = "example.com:443" -identity = "/var/lib/teleport/plugins/email/identity" -refresh_identity = true - -[mailgun] -domain = "sandbox123abc.mailgun.org" -private_key = "xoxb-fakekey62b0eac53565a38c8cc0316f6" - -# As an alternative, you can use SMTP server credentials: -# -# [smtp] -# host = "smtp.gmail.com" -# port = 587 -# username = "username@gmail.com" -# password = "" -# password_file = "/var/lib/teleport/plugins/email/smtp_password" -# starttls_policy = "mandatory" - -[delivery] -sender = "noreply@example.com" - -[role_to_recipients] -"*" = "eng@example.com" -"editor" = ["admin@example.com", "execs@example.com"] - -[log] -output = "stderr" # Logger output. Could be "stdout", "stderr" or "/var/lib/teleport/email.log" -severity = "INFO" # Logger severity. Could be "INFO", "ERROR", "DEBUG" or "WARN". -``` - - - - -```yaml -# teleport-email-helm.yaml -teleport: - address: "teleport.example.com:443" - identitySecretName: teleport-plugin-email-identity - identitySecretPath: identity - -mailgun: - domain: "sandbox123abc.mailgun.org" - privateKeyFromSecret: "mailgun-private-key" - -# As an alternative, you can use SMTP server credentials: -# -# smtp: -#   host: "smtp.gmail.com" -#   port: 587 -#   username: "username@gmail.com" -#   passwordFromSecret: "smtp-password" -#   starttlsPolicy: "mandatory" - -delivery: - sender: "noreply@example.com" - -roleToRecipients: - "*": "eng@example.com" - "editor": ["admin@example.com", "execs@example.com"] -``` - - - - -## Step 6/7. 
Test the email plugin - -After finishing your configuration, you can now run the plugin and test your -email-based Access Request flow: - - - - -```code -$ teleport-email start -``` - -If everything works as expected, the log output should look like this: - -```code -$ teleport-email start -INFO Starting Teleport Access Email Plugin (): email/app.go:80 -INFO Plugin is ready email/app.go:101 -``` - - -Start the plugin: - -```code -$ docker run -v :/etc/teleport-email.toml public.ecr.aws/gravitational/teleport-plugin-email:(=teleport.version=) start -``` - - -Install the plugin: - -```code -$ helm upgrade --install teleport-plugin-email teleport/teleport-plugin-email --values teleport-email-helm.yaml -``` - -To inspect the plugin's logs, use the following command: - -```code -$ kubectl logs deploy/teleport-plugin-email -``` - -Debug logs can be enabled by setting `log.severity` to `DEBUG` in -`teleport-email-helm.yaml` and executing the `helm upgrade ...` command -above again. Then you can restart the plugin with the following command: - -```code -$ kubectl rollout restart deployment teleport-plugin-email -``` - - - -### Create an Access Request - -(!docs/pages/includes/plugins/create-request.mdx!) - -The recipients you configured earlier should receive notifications of the -request by email. - -### Resolve the request - -(!docs/pages/includes/plugins/resolve-request.mdx!) - -## Step 7/7. Set up systemd - -This section is only relevant if you are running the Teleport email plugin on a -Linux host. - -In production, we recommend starting the Teleport plugin daemon via an init -system like systemd. Here's the recommended Teleport plugin service unit file -for systemd: - -```ini -(!/examples/systemd/plugins/teleport-email.service!) -``` - -Save this as `teleport-email.service` in either `/usr/lib/systemd/system/` or -another [unit file load -path](https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Unit%20File%20Load%20Path) -supported by systemd. 
- -Enable and start the plugin: - -```code -$ sudo systemctl enable teleport-email -$ sudo systemctl start teleport-email -``` - diff --git a/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-jira.mdx b/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-jira.mdx deleted file mode 100644 index 16ada13273ffd..0000000000000 --- a/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-jira.mdx +++ /dev/null @@ -1,434 +0,0 @@ ---- -title: Run the Jira Access Request Plugin -description: How to set up the Teleport Jira plugin to notify users when another user requests elevated privileges. ---- - -This guide explains how to set up the Teleport Access Request plugin for Jira. -Teleport's Jira integration allows you to manage Access Requests as -Jira issues. - -The Teleport Jira plugin synchronizes a Jira project board with the Access -Requests processed by your Teleport cluster. When you change the status of an -Access Request within Teleport, the plugin updates the board. When you -update the status of an Access Request on the board, Jira notifies a -webhook run by the plugin, which modifies the Access Request in Teleport. -
-This integration is hosted on Teleport Cloud - -(!docs/pages/includes/plugins/enroll.mdx name="the Jira integration"!) - -
- -## Prerequisites - -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) - -(!docs/pages/includes/machine-id/plugin-prerequisites.mdx!) - -- A Jira account with permissions to create applications and webhooks. - -- A registered domain name for the Jira webhook. Jira notifies the webhook of - changes in your project board. - -- An environment where you will run the Jira plugin. This is either: - - - A Linux virtual machine with ports `80` and `8081` open, plus a means of - accessing the host (e.g., OpenSSH with an SSH port exposed to your - workstation). - - A Kubernetes cluster deployed via a cloud provider. This guide shows you how - to allow traffic to the Jira plugin via a `LoadBalancer` service, so your - environment must support services of this type. - -- A means of providing TLS credentials for the Jira webhook run by the plugin. - **TLS certificates must not be self-signed.** For example, you can obtain TLS - credentials for the webhook with Let's Encrypt by using an [ACME - client](https://letsencrypt.org/docs/client-options/). - - - If you run the plugin on a Linux server, you must provide TLS credentials to - a directory available to the plugin. - - If you run the plugin on Kubernetes, you must write these credentials to a - secret that the plugin can read. This guide assumes that the name of the - secret is `teleport-plugin-jira-tls`. - -- (!docs/pages/includes/tctl.mdx!) - -## Step 1/7. Define RBAC resources - -### Enable Role Access Requests - -Before you set up the Jira plugin, you need to enable Role Access Requests in -your Teleport cluster. - -(!docs/pages/includes/plugins/editor-request-rbac.mdx!) - -### Create a user and role for the plugin - -(!docs/pages/includes/plugins/rbac-update.mdx!) - -## Step 2/7. Install the Teleport Jira plugin - -Install the Teleport Jira plugin following the instructions below, which depend -on whether you are deploying the plugin on a host (e.g., an EC2 instance) or a -Kubernetes cluster.
- -The Teleport Jira plugin must run on a host or Kubernetes cluster that can -access both Jira and your Teleport Proxy Service (or Teleport Enterprise Cloud -tenant). - -(!docs/pages/includes/plugins/install-access-request.mdx name="jira"!) - -## Step 3/7. Export the access plugin identity - -Give the plugin access to a Teleport identity file. We recommend using Machine -ID for this in order to produce short-lived identity files that are less -susceptible to exfiltration, though in demo deployments, you can generate -longer-lived identity files with `tctl`: - - - -(!docs/pages/includes/plugins/tbot-identity.mdx secret="teleport-plugin-jira-identity"!) - - -(!docs/pages/includes/plugins/identity-export.mdx user="access-plugin" secret="teleport-plugin-jira-identity"!) - - - -## Step 4/7. Set up a Jira project - -In this section, you will create a Jira project that the Teleport plugin can -modify when a Teleport user creates or updates an Access Request. The plugin -then uses the Jira webhook to monitor the state of the board and respond to any -changes in the tickets it creates. - -### Create a project for managing Access Requests - -In Jira, find the top navigation bar and click **Projects** -> **Create -project**. Select **Kanban** for the template, then **Use template**. Click -**Select a company-managed project**. - -You'll see a screen where you can enter a name for your project. In this guide, -we assume that your project is called "Teleport Access Requests", which -receives the key `TAR` by default. - -Make sure "Connect repositories, documents, and more" is unset, then click -**Create project**. - -In the three-dots menu on the upper right of your new board, click **Board -settings**, then click **Columns**. Edit the statuses in your board so it -contains the following four: - -1. Pending -1. Approved -1. Denied -1. Expired - -Create a column with the same name as each status.
The result should be the -following: - -![Jira board setup](../../../../img/enterprise/plugins/jira/board-setup.png) - - - -If your project board does not contain these (and only these) columns, each with -a status of the same name, the Jira Access Request plugin will behave in -unexpected ways. Remove all other columns and statuses. - - - -Click **Back to board** to review your changes. - -### Retrieve your Jira API token - -Obtain an API token that the Access Request plugin uses to make -changes to your Jira project. Click the gear menu at the upper right of the -screen, then click **Atlassian account settings**. Click **Security** > -**Create and manage API tokens** > **Create API token**. - -Choose any label and click **Copy**. Paste the API token into a convenient -location (e.g., a password manager or local text document) so you can use it -later in this guide when you configure the Jira plugin. - -### Set up a Jira webhook - -Now that you have generated an API key that the Teleport Jira plugin uses to -manage your project, enable Jira to notify the Teleport Jira plugin when your -project is updated by creating a webhook. - -Return to Jira. Click the gear menu on the upper right of the screen. Click -**System** > **WebHooks** > **Create a WebHook**. - - - - -Enter "Teleport Access Request Plugin" in the "Name" field. In the "URL" field, -enter the domain name you created for the plugin earlier, plus port `8081`. - - - - -Enter "Teleport Access Request Plugin" in the "Name" field. In the "URL" field, -enter the domain name you created for the plugin earlier, plus port `443`. - - - - -The webhook needs to be notified only when an issue is created, updated, or -deleted. You can leave all the other boxes empty. - -Click **Create**. - -## Step 5/7. Configure the Jira Access Request plugin - -Earlier, you retrieved credentials that the Jira plugin uses to connect to -Teleport and the Jira API. 
You will now configure the plugin to use these -credentials and run the Jira webhook at the address you configured earlier. - -### Create a configuration file - - - -The Teleport Jira plugin uses a configuration file in TOML format. Generate a -boilerplate configuration by running the following command (the plugin will not run -unless the config file is in `/etc/teleport-jira.toml`): - -```code -$ teleport-jira configure | sudo tee /etc/teleport-jira.toml > /dev/null -``` - -This should result in a configuration file like the one below: - -```toml -(!examples/resources/plugins/teleport-jira-cloud.toml!) -``` - - -The Helm chart for the Jira plugin uses a YAML values file to configure the -plugin. On your local workstation, create a file called -`teleport-jira-helm.yaml` based on the following example: - -```yaml -(!examples/resources/plugins/teleport-jira-helm-cloud.yaml!) -``` - - - - -### Edit the configuration file - -Open the configuration file created for the Teleport Jira plugin and update the -following fields: - -**`[teleport]`** - -The Jira plugin uses this section to connect to your Teleport cluster: - -(!docs/pages/includes/plugins/config-toml-teleport.mdx!) - -(!docs/pages/includes/plugins/refresh-plugin-identity.mdx!) - - - - -### `jira` - -**url:** The URL of your Jira tenant, e.g., `https://[your-jira].atlassian.net`. - -**username:** The username you were logged in as when you created your API -token. - -**api_token:** The Jira API token you retrieved earlier. - -**project:** The project key for your project, which in our case is `TAR`. - -You can leave `issue_type` as `Task` or remove the field, as `Task` is the -default. - -### `http` - -The `[http]` setting block describes how the plugin's webhook works. - -**listen_addr** indicates the address that the plugin listens on, and defaults -to `:8081`. If you opened port `8081` on your plugin host as we recommended -earlier in the guide, you can leave this option unset. 
- -**public_addr** is the public address of your webhook. This is the domain name you -added to the DNS A record you created earlier. - -**https_key_file** and **https_cert_file** correspond to the private key and -certificate you obtained before following this guide. Use the following values, -substituting the domain name you created for the -plugin earlier: - -- **https_key_file:** - - ```code - $ /var/teleport-jira/tls/certificates/acme-v02.api.letsencrypt.org-directory//.key - ``` - -- **https_cert_file:** - - ```code - $ /var/teleport-jira/tls/certificates/acme-v02.api.letsencrypt.org-directory//.crt - ``` - - - - -### `jira` - -**url:** The URL of your Jira tenant, e.g., `https://[your-jira].atlassian.net`. - -**username:** The username you were logged in as when you created your API -token. - -**apiToken:** The API token you retrieved earlier. - -**project:** The project key for your project, which in our case is `TAR`. - -You can leave `issueType` as `Task` or remove the field, as `Task` is the -default. - -### `http` - -The `http` setting block describes how the plugin's webhook works. - -**publicAddress:** The public address of your webhook. This is the domain name -you created for your webhook. (We will create a DNS record for this domain name -later.) - -**tlsFromSecret:** The name of a Kubernetes secret containing TLS credentials -for the webhook. Use `teleport-plugin-jira-tls`. - - - - -## Step 6/7.
Run the Jira plugin - -After finishing your configuration, you can now run the plugin and test your -Jira-based Access Request flow: - - - - -Run the following on your Linux host: - -```code -$ sudo teleport-jira start -INFO Starting Teleport Jira Plugin 12.1.1: jira/app.go:112 -INFO Plugin is ready jira/app.go:142 -``` - - - -Install the Helm chart for the Teleport Jira plugin: - -```code -$ helm install teleport-plugin-jira teleport/teleport-plugin-jira \ - --namespace teleport \ - --values values.yaml \ - --version (=teleport.plugin.version=) -``` - -Create a DNS record that associates the webhook's domain name with the address -of the load balancer created by the Jira plugin Helm chart. - -See whether the load balancer has a domain name or IP address: - -```code -$ kubectl -n teleport get services/teleport-plugin-jira -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -teleport-plugin-jira LoadBalancer 10.100.135.75 abc123.us-west-2.elb.amazonaws.com 80:30625/TCP,443:31672/TCP 134m -``` - -If the `EXTERNAL-IP` field has a domain name for the value, create a `CNAME` -record in which the domain name for your webhook points to the domain name of -the load balancer. - -If the `EXTERNAL-IP` field's value is an IP address, create a DNS `A` record -instead. - -You can then generate signed TLS credentials for the Jira plugin, which expects -them to be written to a Kubernetes secret. - - - - -### Check the status of the webhook - -Confirm that the Jira webhook has started serving by sending a GET request to -the `/status` endpoint. 
If the webhook is running, it will return a `200` status -code with no document body: - - - - -```code -$ curl -v https://:8081/status 2>&1 | grep "^< HTTP/2" -< HTTP/2 200 -``` - - - - -```code -$ curl -v https://:443/status 2>&1 | grep "^< HTTP/2" -< HTTP/2 200 -``` - - - - -### Create an Access Request - -Sign in to your cluster as the `myuser` user you created earlier and create an -Access Request: - -(!docs/pages/includes/plugins/create-request.mdx!) - -When you create the request, you will see a new task in the "Pending" column of the Access Requests board: - -![New Access Request](../../../../img/enterprise/plugins/jira/new-request.png) - -### Resolve the request - -Move the card corresponding to your new Access Request to the "Denied" column, -then click the card and navigate to Teleport. You will see that the Access -Request has been denied. - - - -Anyone with access to the Jira project board can modify the status of Access -Requests reflected on the board. You can check the Teleport audit log to ensure -that the right users are reviewing the right requests. - -When auditing Access Request reviews, check for events with the type `Access -Request Reviewed` in the Teleport Web UI. - - - -## Step 7/7. Set up systemd - - - -This step is only applicable if you are running the Teleport Jira plugin on a -Linux machine. - - - -In production, we recommend starting the Teleport plugin daemon via an init -system like systemd. Here's the recommended Teleport plugin service unit file -for systemd: - -```txt -(!examples/systemd/plugins/teleport-jira.service!) -``` - -Save this as `teleport-jira.service` or another [unit file load -path](https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Unit%20File%20Load%20Path) -supported by systemd. 
- -Enable and start the plugin: - -```code -$ sudo systemctl enable teleport-jira -$ sudo systemctl start teleport-jira -``` diff --git a/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-mattermost.mdx b/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-mattermost.mdx deleted file mode 100644 index 9f0e7b24af90c..0000000000000 --- a/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-mattermost.mdx +++ /dev/null @@ -1,362 +0,0 @@ ---- -title: Run the Mattermost Access Request plugin -description: How to set up Teleport's Mattermost plugin for privilege elevation approvals. ---- - -import BotLogo from "/static/avatar_logo.png"; - -This guide explains how to integrate Access Requests with Mattermost, an open -source messaging platform. The Teleport Mattermost plugin notifies individuals of -Access Requests. Users can then approve and deny Access Requests by following the -message link, making it easier to implement security best practices without -compromising productivity. - -![The Mattermost Access Request plugin](../../../../img/enterprise/plugins/mattermost/diagram.png) - -
-This integration is hosted on Teleport Cloud - -(!docs/pages/includes/plugins/enroll.mdx name="the Mattermost integration"!) - -
- -## Prerequisites - -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) - -(!docs/pages/includes/machine-id/plugin-prerequisites.mdx!) - -- A Mattermost account with admin privileges. This plugin has been tested with - Mattermost v7.0.1. -- Either a Linux host or Kubernetes cluster where you will run the Teleport Mattermost plugin. -- (!docs/pages/includes/tctl.mdx!) - -## Step 1/8. Define RBAC resources - -Before you set up the Teleport Mattermost plugin, you need to enable Role Access -Requests in the Teleport Proxy Service or Teleport Auth Service. - -(!/docs/pages/includes/plugins/editor-request-rbac.mdx!) - -## Step 2/8. Install the Teleport Mattermost plugin - - - - -We recommend installing Teleport plugins on the same host as the Teleport Proxy -Service. This is an ideal location as plugins have a low memory footprint, and -require both public internet access and Teleport Auth Service access. - - - - - -Install the Teleport Mattermost plugin on a host that can access both your -Teleport Proxy Service and your Mattermost deployment. - - - - - -(!docs/pages/includes/plugins/install-access-request.mdx name="mattermost"!) - -## Step 3/8. Create a user and role for the plugin - -(!docs/pages/includes/plugins/rbac-with-friendly-name.mdx!) - -(!/docs/pages/includes/plugins/rbac-impersonate.mdx!) - -## Step 4/8. Export the access plugin identity - -Give the plugin access to a Teleport identity file. We recommend using Machine -ID for this in order to produce short-lived identity files that are less -dangerous if exfiltrated, though in demo deployments, you can generate -longer-lived identity files with `tctl`: - - - -(!docs/pages/includes/plugins/tbot-identity.mdx secret="teleport-plugin-mattermost-identity"!) - - -(!docs/pages/includes/plugins/identity-export.mdx user="access-plugin" secret="teleport-plugin-mattermost-identity"!) - - - -## Step 5/8. 
Register a Mattermost bot - -Now that you have generated the credentials your plugin needs to connect to your -Teleport cluster, register your plugin with Mattermost so it can send Access -Request messages to your workspace. - -In Mattermost, click the menu button in the upper left of the UI, then click -System Console → Integrations → Bot Accounts. - -Set "Enable Bot Account Creation" to "true". - -![Enable Mattermost bots](../../../../img/enterprise/plugins/mattermost/mattermost_admin_console_integrations_bot_accounts.png) - -This will allow you to create a new bot account for the Mattermost plugin. - -Go back to your team. In the menu on the upper left of the UI, click -Integrations → Bot Accounts → Add Bot Account. - -Set the "Username", "Display Name", and "Description" fields according to how -you would like the Mattermost plugin bot to appear in your workspace. Set "Role" -to "Member". - -{/* lint ignore absolute-docs-links */} -You can download our avatar to set as your Bot -Icon. - -Set "post:all" to "Enabled". - -![Enable Mattermost Bots](../../../../img/enterprise/plugins/mattermost/mattermost_bot.png) - -Click "Create Bot Account". We will use the resulting OAuth 2.0 token when we -configure the Mattermost plugin. - -## Step 6/8. Configure the Mattermost plugin - -At this point, you have generated credentials that the Mattermost plugin will use -to connect to Teleport and Mattermost. You will now configure the Mattermost -plugin to use these credentials and post messages in the right channels for your -workspace. - - - -The Mattermost plugin uses a configuration file in TOML format. On the host where you -will run the Mattermost plugin, generate a boilerplate configuration by running the -following commands: - -```code -$ teleport-mattermost configure > teleport-mattermost.toml -$ sudo mv teleport-mattermost.toml /etc -``` - - -The Mattermost Helm Chart uses a YAML values file to configure the plugin. 
On -the host where you have Helm installed, create a file called -`teleport-mattermost-helm.yaml` based on the following example: - -```yaml -(!examples/resources/plugins/teleport-mattermost-helm-cloud.yaml!) -``` - - - -Edit the configuration as explained below: - -### `[teleport]` - -(!docs/pages/includes/plugins/config-toml-teleport.mdx!) - -(!docs/pages/includes/plugins/refresh-plugin-identity.mdx!) - -### `[mattermost]` - -**`url`**: Include the scheme (`https://`) and fully qualified domain name of -your Mattermost deployment. - -**`token`**: Find your Mattermost bot's OAuth 2.0 token. To do so, visit -Mattermost. In the menu on the upper left of the UI, go to Integrations → Bot -Accounts. Find the listing for the Teleport plugin and click "Create New Token". -After you save the token, you will see a message with text in the format, -"Access Token: TOKEN". Copy the token and paste it here. - -**`recipients`**: This field configures the channels that the Mattermost plugin -will notify when it receives an Access Request message. The value is an array of -strings, where each element is either: - -- The email address of a Mattermost user to notify via a direct message when the - plugin receives an Access Request event -- A channel name in the format `team/channel`, where `/` separates the name - of the team and the name of the channel - -For example, this configuration will notify `first.last@example.com` and -the `Town Square` channel in the `myteam` team of any Access Request events: - - - - -```toml -recipients = [ - "myteam/Town Square", - "first.last@example.com" -] -``` - - - - -```yaml -recipients: - - "myteam/Town Square" - - first.last@example.com -``` - - - - -You will need to invite your Teleport plugin to any channel you add to the -`recipients` list (aside from direct message channels). 
Visit Mattermost, -navigate to each channel you want to invite the plugin to, and enter `/invite -@teleport` (or the name of the bot you configured) into the message box. - -![Invite the bot](../../../../img/enterprise/plugins/mattermost/add-bot.png) - -
-Suggested reviewers - -Users can also suggest reviewers when they create an Access Request, e.g.,: - -```code -$ tsh request create --roles=dbadmin --reviewers=alice@example.com,ivan@example.com -``` - -If an Access Request includes suggested reviewers, the Mattermost plugin will -add these to the list of channels to notify. If a suggested reviewer is an email -address, the plugin will look up the direct message channel for that address -and post a message in that channel. - -If `recipients` is empty, and the user requesting elevated privileges has not -suggested any reviewers, the plugin will skip forwarding the Access Request to -Mattermost. - -
- -The final configuration should look similar to this: - - - -```toml -# example mattermost configuration TOML file -[teleport] -auth_server = "myinstance.teleport.sh:443" # Teleport Cloud proxy HTTPS address -identity = "/var/lib/teleport/plugins/mattermost/identity" # Identity file path -refresh_identity = true # Refresh identity file on a periodic basis - -[mattermost] -url = "https://mattermost.example.com" # Mattermost Server URL -token = "api-token" # Mattermost Bot OAuth token -recipients = [ - "myteam/general", - "first.last@example.com" -] - -[log] -output = "stderr" # Logger output. Could be "stdout", "stderr" or "/var/lib/teleport/mattermost.log" -severity = "INFO" # Logger severity. Could be "INFO", "ERROR", "DEBUG" or "WARN". - -``` - - -```yaml -(!examples/resources/plugins/teleport-mattermost-helm-cloud.yaml!) -``` - - - -## Step 7/8. Test your Mattermost bot - - - -After modifying your configuration, run the bot with the following command: - -```code -$ teleport-mattermost start -d -``` - -The `-d` flag provides debug information to make sure the bot can connect to -Mattermost, e.g.: - -```text -DEBU Checking Teleport server version mattermost/main.go:234 -DEBU Starting a request watcher... mattermost/main.go:296 -DEBU Starting Mattermost API health check...
mattermost/main.go:186 -DEBU Starting secure HTTPS server on :8081 utils/http.go:146 -DEBU Watcher connected mattermost/main.go:260 -DEBU Mattermost API health check finished ok mattermost/main.go:19 -``` - - - -Run the plugin: - -```code -$ docker run -v :/etc/teleport-mattermost.toml public.ecr.aws/gravitational/teleport-plugin-mattermost:(=teleport.version=) start -``` - - -After modifying your configuration, run the bot with the following command: - -```code -$ helm upgrade --install teleport-plugin-mattermost teleport/teleport-plugin-mattermost --values teleport-mattermost-helm.yaml -``` - -To inspect the plugin's logs, use the following command: - -```code -$ kubectl logs deploy/teleport-plugin-mattermost -``` - -Debug logs can be enabled by setting `log.severity` to `DEBUG` in -`teleport-mattermost-helm.yaml` and executing the `helm upgrade ...` command -above again. Then you can restart the plugin with the following command: - -```code -$ kubectl rollout restart deployment teleport-plugin-mattermost -``` - - - - -### Create an Access Request - -(!docs/pages/includes/plugins/create-request.mdx!) - -The users and channels you configured earlier to review the request should -receive a message from "Teleport" in Mattermost allowing them to visit a link in -the Teleport Web UI and either approve or deny the request. - -### Resolve the request - -(!docs/pages/includes/plugins/resolve-request.mdx!) - - - -When the Mattermost plugin posts an Access Request notification to a channel, -anyone with access to the channel can view the notification and follow the link. -While users must be authorized via their Teleport roles to review Access -Requests, you should still check the Teleport audit log to ensure that the right -users are reviewing the right requests. - -When auditing Access Request reviews, check for events with the type `Access -Request Reviewed` in the Teleport Web UI. - - - -## Step 8/8. 
Set up systemd - -This section is only relevant if you are running the Teleport Mattermost plugin -on a Linux host. - -In production, we recommend starting the Teleport plugin daemon via an init -system like systemd. Here's the recommended Teleport plugin service unit file -for systemd: - -```ini -(!examples/systemd/plugins/teleport-mattermost.service!) -``` - -Save this as `teleport-mattermost.service` in either `/usr/lib/systemd/system/` or -another [unit file load -path](https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Unit%20File%20Load%20Path) -supported by systemd. - -Enable and start the plugin: - -```code -$ sudo systemctl enable teleport-mattermost -$ sudo systemctl start teleport-mattermost -``` - diff --git a/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-msteams.mdx b/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-msteams.mdx deleted file mode 100644 index d53c4539efeb2..0000000000000 --- a/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-msteams.mdx +++ /dev/null @@ -1,564 +0,0 @@ ---- -title: Access Requests with Microsoft Teams -description: How to set up Teleport's Microsoft Teams plugin for privilege elevation approvals. ---- - -This guide will explain how to set up Microsoft Teams to receive Access Request messages -from Teleport. Teleport's Microsoft Teams integration notifies individuals of -Access Requests. Users can then approve and deny Access Requests by following the -message link, making it easier to implement security best practices without -compromising productivity. - -![The Microsoft Teams Access Request plugin](../../../../img/enterprise/plugins/msteams.png) - -
-This integration is hosted on Teleport Enterprise (Cloud) - -(!docs/pages/includes/plugins/enroll.mdx name="the Microsoft Teams integration"!) - -![Create Microsoft Teams Bot](../../../../img/enterprise/plugins/msteams/enroll-bot.png) - -Once enrolled you can download the required `app.zip` file from the integrations status page. - -![Download app.zip](../../../../img/enterprise/plugins/msteams/app-zip.png) - -
- -## Prerequisites - -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) - -(!docs/pages/includes/machine-id/plugin-prerequisites.mdx!) - -- A Microsoft Teams License (Microsoft 365 Business). -- Azure console access in the organization/directory holding the Microsoft Teams - License. -- An Azure resource group in the same directory. This will host resources for - the Microsoft Teams Access Request plugin. You should have enough - permissions to create and edit Azure Bot Services in this resource group. -- Someone with Global Admin rights on Microsoft Entra ID in order to grant - permissions to the plugin. -- Someone with the `Teams administrator` role that can approve installation - requests for Microsoft Teams Apps. -- Either an Azure virtual machine or Kubernetes cluster where you will run the - Teleport Microsoft Teams plugin. - -(!/docs/pages/includes/tctl.mdx!) - -## Step 1/9. Define RBAC resources - -Before you set up the Microsoft Teams plugin, you will need to enable Role Access Requests -in your Teleport cluster. - -(!/docs/pages/includes/plugins/editor-request-rbac.mdx!) - -## Step 2/9. Install the Teleport Microsoft Teams plugin - -Install the Microsoft Teams plugin on your workstation. If you are deploying the -plugin on Kubernetes, you will still need to install the plugin locally in order -to generate an application archive to upload later in this guide. - -(!docs/pages/includes/plugins/install-access-request.mdx name="msteams"!) - -## Step 3/9. Create a user and role for the plugin - -(!docs/pages/includes/plugins/rbac-with-friendly-name.mdx!) - -(!/docs/pages/includes/plugins/rbac-impersonate.mdx!) - -## Step 4/9. Export the access plugin identity - -Give the plugin access to a Teleport identity file. 
We recommend using Machine -ID for this in order to produce short-lived identity files that are less -dangerous if exfiltrated, though in demo deployments, you can generate -longer-lived identity files with `tctl`: - - - -(!docs/pages/includes/plugins/tbot-identity.mdx secret="teleport-plugin-msteams-identity"!) - - -(!docs/pages/includes/plugins/identity-export.mdx user="access-plugin" secret="teleport-plugin-msteams-identity"!) - - - -## Step 5/9. Register an Azure Bot - -The Access Request plugin for Microsoft Teams receives Access Request events from the -Teleport Auth Service, formats them into Microsoft Teams messages, and sends them to the -Microsoft Teams API to post them in your workspace. For this to work, you must register a -new Azure Bot. Azure Bot is a managed service by Microsoft that allows you to -develop bots that interact with users through different channels, including -Microsoft Teams. - -### Register a new Azure bot - -Visit [https://portal.azure.com/#create/Microsoft.AzureBot](https://portal.azure.com/#create/Microsoft.AzureBot) -to create a new bot. Choose the bot handle so you can find the bot later in the Azure console (the bot handle will -not be displayed to the user or used to configure the Microsoft Teams plugin). Also set the Azure subscription, -the resource group, and the bot pricing tier. - -In the "Microsoft App ID" section choose "Single Tenant" and "Create new -Microsoft App ID". - -![Create Azure Bot](../../../../img/enterprise/plugins/msteams/create-azure-bot.png) - -### Connect the bot to Microsoft Teams - -Once the bot is created, open its resource page on the Azure console and -navigate to the "Channels" tab. Click "Microsoft Teams" and add the Microsoft Teams -channel.
- -The result should be as follows: - -![Add Bot Channel](../../../../img/enterprise/plugins/msteams/add-bot-channel.png) - -### Obtain information about your Microsoft App - -On the bot's "Configuration" tab, copy and keep in a safe place the values of -"Microsoft App ID" and "App Tenant ID". Those two UUIDs will be used in the -plugin configuration. - -Click the "Manage" link next to "Microsoft App ID". This will open the app management view. - -![Manage Bot App](../../../../img/enterprise/plugins/msteams/manage-bot-app.png) - -Then, go to the "Certificates & Secrets" section and choose to create a "New client secret". -Use the "Copy" icon to copy the newly created secret and keep it with the -previously recovered App ID and Tenant ID. - -The client secret will be used by the Teleport plugin to authenticate as the bot's app when -searching users and posting messages. - -### Specify the permissions used by the app - -Still in the app management view ("Configuration", then "Manage" the Microsoft App ID), -go to the "API permissions" tab. - -Add the following Microsoft Graph Application permissions: - -| Permission name | Reason | -|---|---| -| `AppCatalog.Read.All` | Used to list Teams Apps and check the app is installed. | -| `User.Read.All` | Used to get notification recipients. | -| `TeamsAppInstallation.ReadWriteSelfForUser.All` | Used to initiate communication with a user that never interacted with the Teams App before. | -| `TeamsAppInstallation.ReadWriteSelfForTeam.All` | Used to discover if the app is installed in the Team. | - -At this point the app declares the required permissions but those have not been granted. - -If you are an admin, click "Grant admin consent for \". If you are not an admin, -contact an admin user to grant the permissions. - -![Specify App Permissions](../../../../img/enterprise/plugins/msteams/specify-app-permissions.png) - -Once permissions have been approved, refresh the page and check the approval status. 
-The result should be as follows: - -![Granted App Permissions](../../../../img/enterprise/plugins/msteams/granted-app-permissions.png) - -## Step 6/9. Configure the Teleport Microsoft Teams plugin - -At this point, the Teleport Microsoft Teams plugin has the credentials it needs to -communicate with your Teleport cluster and Azure APIs, but the app has not been -installed to Microsoft Teams yet. - -In this step, you will configure the Microsoft Teams plugin to use the Azure -credentials and generate the Teams App package that will be used to install the -Microsoft Teams App. You will also configure the plugin to notify -the right Microsoft Teams users when it receives an Access Request update. - -### Generate a configuration file and assets - -Generate a configuration file for the plugin. The instructions depend on whether -you are deploying the plugin on a virtual machine or Kubernetes: - - - - -The Teleport Microsoft Teams plugin uses a config file in TOML format. The -`configure` subcommand generates the directory -`/var/lib/teleport/plugins/msteams/assets` containing the TOML configuration -file and an `app.zip` file that will be used later to add the Teams App into the -organization catalog. - -Run the following command on your virtual machine: - -```code -$ export AZURE_APPID="your-appid" -$ export AZURE_TENANTID="your-tenantid" -$ export AZURE_APPSECRET="your-appsecret" -$ teleport-msteams configure /var/lib/teleport/plugins/msteams/assets --appID "$AZURE_APPID" --tenantID "$AZURE_TENANTID" --appSecret "$AZURE_APPSECRET" -``` - -This should result in a config file like the one below: - -```toml -(!examples/resources/plugins/teleport-msteams.toml!) -``` - -Copy the `/var/lib/teleport/plugins/msteams/assets/app.zip` file to your local -computer. You will have to upload it to Microsoft Teams later. 
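The guide does not prescribe a copy method. One way to copy the archive — a sketch assuming the virtual machine is reachable over SSH, with a placeholder username and host you should replace with your own — is `scp`:

```code
$ scp user@vm-address:/var/lib/teleport/plugins/msteams/assets/app.zip .
```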
- -On the host where you will run the Microsoft Teams plugin, move the file -`/var/lib/teleport/plugins/msteams/assets/teleport-msteams.toml` to -`/etc/teleport-msteams.toml`. You can then edit the copy located in `/etc/`. - - - - -Run the following command on your local machine: - -```code -$ export AZURE_APPID="your-appid" -$ export AZURE_TENANTID="your-tenantid" -$ export AZURE_APPSECRET="your-appsecret" -$ teleport-msteams configure /var/lib/teleport/plugins/msteams/assets --appID "$AZURE_APPID" --tenantID "$AZURE_TENANTID" --appSecret "$AZURE_APPSECRET" -``` - -This command generates an application archive at -`/var/lib/teleport/plugins/msteams/assets/app.zip`. You will upload it to -Microsoft Teams later in this guide. - -Create a file on your workstation called `teleport-msteams-helm.yaml` with the -following content: - -```yaml -(!examples/resources/plugins/teleport-msteams-helm.yaml!) -``` - -You will edit this file in the next section. - - - - - -The `configure` command is not idempotent. It generates a new Microsoft Teams -application UUID with each execution. It is not possible to use an `app.zip` and -a TOML configuration generated by two different executions. - - -### Edit the configuration file - -Edit the configuration file according to the instructions below. - -#### `[teleport]` - -The Microsoft Teams plugin uses this section to connect to your Teleport -cluster. - -(!docs/pages/includes/plugins/config-toml-teleport.mdx!) - -(!docs/pages/includes/plugins/refresh-plugin-identity.mdx!) - -#### `[msapi]`/`msTeams` - - - - -Make sure the `app_id`, `app_secret`, `tenant_id`, and `teams_app_id` fields are -filled in with the correct information, which you obtained earlier in this -guide. - - - -Make sure the `appID`, `tenantID`, and `teamsAppID` fields are filled in with -the correct information, which you obtained earlier in this guide. 
- - - - -#### `[role_to_recipients]` - -The `role_to_recipients` map (`roleToRecipients` for Helm users) configures the -users and channels that the Microsoft Teams plugin will notify when a user -requests access to a specific role. When the Microsoft Teams plugin receives an -Access Request from the Auth Service, it will look up the role being requested -and identify the Microsoft Teams users and channels to notify. - - - - -Here is an example of a `role_to_recipients` map. Each value can be a single -string or an array of strings: - -```toml -[role_to_recipients] -"*" = "alice@example.com" -"dev" = ["alice@example.com", "bob@example.com"] -"dba" = "https://teams.microsoft.com/l/channel/19%3somerandomid%40thread.tacv2/ChannelName?groupId=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx&tenantId=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -``` - - - -In the Helm chart, the `role_to_recipients` field is called `roleToRecipients` -and uses the following format, where keys are strings and values are arrays of -strings: - -```toml -roleToRecipients: - "*": "alice@example.com" - "dev": ["alice@example.com", "bob@example.com"] - "dba": "https://teams.microsoft.com/l/channel/19%3somerandomid%40thread.tacv2/ChannelName?groupId=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx&tenantId=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -``` - - - -In the `role_to_recipients` map, each key is the name of a Teleport role. Each -value configures the Teams user (or users) to notify. Each string must be either -the email address of a Microsoft Teams user or a channel URL. - -You can find the URL of a channel by opening the channel and clicking the button -"Get link to channel": - -![Copy Teams Channel](../../../../img/enterprise/plugins/msteams/copy-teams-channel.png) - -The `role_to_recipients` map must also include an entry for `"*"`, which the -plugin looks up if no other entry matches a given role name. In the example -above, requests for roles aside from `dev` and `dba` will notify `alice@example.com`. - -
-Suggested reviewers
-
-Users can suggest reviewers when they create an Access Request, e.g.:
-
-```code
-$ tsh request create --roles=dbadmin --reviewers=alice@example.com,ivan@example.com
-```
-
-If an Access Request includes suggested reviewers, the Microsoft Teams plugin
-will add them to the list of channels to notify. If a suggested reviewer is an
-email address, the plugin will look up the direct message channel for that
-address and post a message in that channel.
-
- -Configure the Microsoft Teams plugin to notify you when a user requests the `editor` role -by adding the following to your `role_to_recipients` config (replace -`TELEPORT_USERNAME` with the email of the user you assigned the `editor-reviewer` -role earlier): - - - - -```toml -[role_to_recipients] -"*" = "TELEPORT_USERNAME" -"editor" = "TELEPORT_USERNAME" -``` - - - -```toml -roleToRecipients: - "*": "TELEPORT_USERNAME" - "editor": "TELEPORT_USERNAME" -``` - - - -The final configuration file should resemble the following: - - - -```toml -(!examples/resources/plugins/teleport-msteams.toml!) -``` - - -```yaml -(!examples/resources/plugins/teleport-msteams-helm.yaml!) -``` - - - -## Step 7/9. Add and configure the Teams App - -### Upload the Teams App - -Open Microsoft Teams and go to "Apps", "Manage your apps", then in the additional -choices menu choose "Upload an App". - -![Upload Teams App](../../../../img/enterprise/plugins/msteams/upload-teams-app.png) - -If you're a Teams admin, choose "Upload an app to your org's app catalog". -This will allow you to skip the approval step. -If you're not a Microsoft Teams admin, choose "Submit an app to your org". - -Upload the `app.zip` file you generated earlier. - -### Approve the Teams App - -If you are not a Teams admin and chose "Submit an app to your org", -you will have to ask a Teams admin to approve it. - -They can do so by connecting to the -[Teams admin dashboard](https://admin.teams.microsoft.com/policies/manage-apps), -searching "TeleBot", selecting it and choosing "Allow". - -![Upload Teams App](../../../../img/enterprise/plugins/msteams/allowed-teams-app.png) - -### Add the Teams App to a Team - -Once the app is approved it should appear in the "Apps built for your org" section. -Add the newly uploaded app to a team. Open the app, click "Add to a team", -choose the "General" channel of your team and click "Set up a bot". 
- -![Add Teams App](../../../../img/enterprise/plugins/msteams/add-teams-app.png) - -Note: Once an app is added to a team, it can post on all channels. - -## Step 8/9. Test the Teams App - -Once Teleport is running, you've created the Teams App, and the plugin is -configured, you can now run the plugin and test the workflow. - -### Test Microsoft Teams connectivity - -Start the plugin in validation mode: - -```code -$ teleport-msteams validate -``` - -If everything works fine, the log output should look like this: - -```text -teleport-msteams v(=teleport.plugin.version=) go(=teleport.golang=) - - - Checking application xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx status... - - Application found in the team app store (internal ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx) - - User xxxxxx@xxxxxxxxx.xxx found: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx - - Application installation ID for user: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX - - Chat ID for user: 19:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx@unq.gbl.spaces - - Chat web URL: https://teams.microsoft.com/l/chat/19%3Axxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx%40unq.gbl.spaces/0?tenantId=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx - - Hailing the user... - - Message sent, ID: XXXXXXXXXXXXX - -Check your MS Teams! -``` - -The plugin should exit and you should have received two messages through Microsoft Teams. - -![Validate Bot Message](../../../../img/enterprise/plugins/msteams/validate-bot-message.png) - -### Start the MS Teams Plugin - -After you configured and validated the MS Teams plugin, you can now run the -plugin and test the workflow. - - - - -Run the following command to start the Teleport MS Teams plugin. 
The `-d` flag -will provide debug information to ensure that the plugin can connect to MS Teams -and your Teleport cluster: - -```code -$ teleport-msteams start -d -DEBU DEBUG logging enabled msteams/main.go:120 -INFO Starting Teleport MS Teams Plugin (=teleport.plugin.version=): msteams/app.go:74 -DEBU Attempting GET teleport.example.com:443/webapi/find webclient/webclient.go:129 -DEBU Checking Teleport server version msteams/app.go:242 -INFO MS Teams app found in org app store id:292e2881-38ab-7777-8aa7-cefed1404a63 name:TeleBot msteams/app.go:179 -INFO Preloading recipient data... msteams/app.go:185 -INFO Recipient found, chat found chat_id:19:a8c06deb-aa2b-4db5-9c78-96e48f625aef_a36aec2e-f11c-4219-b79a-19UUUU57de70@unq.gbl.spaces kind:user recipient:jeff@example.com msteams/app.go:195 -INFO Recipient data preloaded and cached. msteams/app.go:198 -DEBU Watcher connected watcherjob/watcherjob.go:121 -INFO Plugin is ready msteams/app.go:227 -``` - - - -Run the plugin: - -```code -$ docker run -v :/etc/teleport-msteams.toml public.ecr.aws/gravitational/teleport-plugin-msteams:(=teleport.version=) start -``` - - - -Install the plugin: - -```code -$ helm upgrade --install teleport-plugin-msteams teleport/teleport-plugin-msteams --values teleport-msteams-helm.yaml -``` - -To inspect the plugin's logs, use the following command: - -```code -$ kubectl logs deploy/teleport-plugin-msteams -``` - -Debug logs can be enabled by setting `log.severity` to `DEBUG` in -`teleport-msteams-helm.yaml` and executing the `helm upgrade ...` command -above again. Then you can restart the plugin with the following command: - -```code -$ kubectl rollout restart deployment teleport-plugin-msteams -``` - - - - -### Create an Access Request - -Create an Access Request and check if the plugin works as expected with the -following steps. - -(!docs/pages/includes/plugins/create-request.mdx!) 
- -The user you configured earlier to review the request should receive a direct -message from "TeleBot" in Microsoft Teams allowing them to visit a link in the Teleport -Web UI and either approve or deny the request. - -### Resolve the request - -(!docs/pages/includes/plugins/resolve-request.mdx!) - -Once the request is resolved, the Microsoft Teams bot will update the Access Request message -to reflect its new status. - - - -When the Microsoft Teams plugin posts an Access Request notification to a channel, anyone -with access to the channel can view the notification and follow the link. While -users must be authorized via their Teleport roles to review Access Requests, you -should still check the Teleport audit log to ensure that the right users are -reviewing the right requests. - -When auditing Access Request reviews, check for events with the type `Access -Request Reviewed` in the Teleport Web UI. - - - -## Step 9/9. Set up systemd - -This section is only relevant if you are running the Teleport Microsoft Teams -plugin on a virtual machine. - -In production, we recommend starting the Teleport plugin daemon via an init -system like systemd. Here's the recommended Teleport plugin service unit file -for systemd: - -```ini -(!examples/systemd/plugins/teleport-msteams.service!) -``` - -Save this as `teleport-msteams.service` in either `/usr/lib/systemd/system/` or -another [unit file load -path](https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Unit%20File%20Load%20Path) -supported by systemd. - -Enable and start the plugin: - -```code -$ sudo systemctl enable teleport-msteams -$ sudo systemctl start teleport-msteams -``` - -## Next steps - -- Read our guides to configuring [Resource Access - Requests](../access-requests/resource-requests.mdx) and [Role Access - Requests](../access-requests/role-requests.mdx) so you can get the most out - of your Access Request plugins. 
diff --git a/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-pagerduty.mdx b/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-pagerduty.mdx deleted file mode 100644 index 1e5c2480e79c0..0000000000000 --- a/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-pagerduty.mdx +++ /dev/null @@ -1,629 +0,0 @@ ---- -title: Run the PagerDuty Access Request Plugin -description: How to set up Teleport's PagerDuty plugin for privilege elevation approvals. ---- - -With Teleport's PagerDuty integration, engineers can access the infrastructure -they need to resolve incidents quickly—without longstanding admin permissions -that can become a vector for attacks. - -Teleport's PagerDuty integration allows you to treat Teleport Role Access -Requests as PagerDuty incidents, notify the appropriate on-call team, and -approve or deny the requests via Teleport. You can also configure the plugin to -approve Role Access Requests automatically if the user making the request is on -the on-call team for a service affected by an incident. - -
-This integration is hosted on Teleport Cloud - -(!docs/pages/includes/plugins/enroll.mdx name="the PagerDuty integration"!) - -
- -This guide will explain how to set up Teleport's Access Request plugin for -PagerDuty. - -![PagerDuty plugin architecture](../../../../img/enterprise/plugins/pagerduty/diagram.png) - -## Prerequisites - -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) - -(!docs/pages/includes/machine-id/plugin-prerequisites.mdx!) - -- A PagerDuty account with the "Admin", "Global Admin", or "Account Owner" - roles. These roles are necessary for generating an API token that can list and - look up user profiles. - - You can see your role by visiting your user page in PagerDuty, navigating to - the "Permissions & Teams" tab, and checking the value of the "Base Role" - field. - -- Either a Linux host or Kubernetes cluster where you will run the PagerDuty plugin. - -- (!docs/pages/includes/tctl.mdx!) - -## Step 1/8. Create services - -To demonstrate the PagerDuty plugin, create two services in PagerDuty. For each -service, fill in only the "Name" field and skip all other configuration screens, -leaving options as the defaults: - -- `Teleport Access Request Notifications` -- `My Critical Service` - -We will configure the PagerDuty plugin to create an incident in the `Teleport -Access Request Notifications` service when certain users create an Access -Request. - -For users on the on-call team for `My Critical Service` (in this case, your -PagerDuty user), we will configure the PagerDuty plugin to approve Access -Requests automatically, letting them investigate incidents on the service -quickly. - -## Step 2/8. Define RBAC resources - -The Teleport PagerDuty plugin works by receiving Access Request events from the -Teleport Auth Service and, based on these events, interacting with the PagerDuty -API. - -In this section, we will show you how to configure the PagerDuty plugin by -defining the following RBAC resources: - -- A role called `editor-requester`, which can request the built-in `editor` - role. 
We will configure this role to open a PagerDuty incident whenever a - user requests it, notifying the on-call team for the `Teleport Access Request - Notifications` service. -- A role called `demo-role-requester`, which can request a role called - `demo-role`. We will configure the PagerDuty plugin to auto-approve this - request whenever the user making it is on the on-call team for `My Critical - Service`. -- A user and role called `access-plugin` that the PagerDuty plugin will assume - in order to authenticate to the Teleport Auth Service. This role will have - permissions to approve Access Requests from users on the on-call team for `My - Critical Service` automatically. -- A role called `access-plugin-impersonator` that allows you to generate signed - credentials that the PagerDuty plugin can use to authenticate with your - Teleport cluster. - -### `editor-requester` - -Create a file called `editor-request-rbac.yaml` with the following content, -which defines a role called `editor-reviewer` that can review requests for the -`editor` role, plus an `editor-requester` role that can request this role. - -```yaml -kind: role -version: v5 -metadata: - name: editor-reviewer -spec: - allow: - review_requests: - roles: ['editor'] ---- -kind: role -version: v5 -metadata: - name: editor-requester -spec: - allow: - request: - roles: ['editor'] - thresholds: - - approve: 1 - deny: 1 - annotations: - pagerduty_notify_service: ["Teleport Access Request Notifications"] -``` - -The Teleport Auth Service *annotates* Access Request events with metadata based -on the roles of the Teleport user submitting the Access Request. The PagerDuty -plugin reads these annotations to determine how to respond to a new Access -Request event. 
- -Whenever a user with the `editor-requester` role requests the `editor` role, the -PagerDuty plugin will read the `pagerduty_notify_service` annotation and notify -PagerDuty to open an incident in the specified service, `Teleport Access Request -Notifications`, until someone with the `editor-reviewer` role approves or denies -the request. - -Create the roles you defined: - -```code -$ tctl create -f editor-request-rbac.yaml -role 'editor-reviewer' has been created -role 'editor-requester' has been created -``` - -(!docs/pages/includes/create-role-using-web.mdx!) - -### `demo-role-requester` - -Create a file called `demo-role-requester.yaml` with the following content: - -```yaml -kind: role -version: v5 -metadata: - name: demo-role ---- -kind: role -version: v5 -metadata: - name: demo-role-requester -spec: - allow: - request: - roles: ['demo-role'] - thresholds: - - approve: 1 - deny: 1 - annotations: - pagerduty_services: ["My Critical Service"] -``` - -Users with the `demo-role-requester` role can request the `demo-role` role. When -such a user makes this request, the PagerDuty plugin will read the -`pagerduty_services` annotation. If the user making the request is on the -on-call team for a service listed as a value for the annotation, the plugin will -approve the Access Request automatically. - -In this case, the PagerDuty plugin will approve any requests from users on the -on-call team for `My Critical Service`. - -Create the resources: - -```code -$ tctl create -f demo-role-requester.yaml; -``` - - - -For auto-approval to work, the user creating an Access Request must have a -Teleport username that is also the email address associated with a PagerDuty -account. In this guide, we will add the `demo-role-requester` role to your own -Teleport account—which we assume is also your email address for PagerDuty—so you -can request the `demo-role` role. 
- - - -### `access-plugin` - -Teleport's Access Request plugins authenticate to your Teleport cluster as a -user with permissions to list, read, and update Access Requests. This way, -plugins can retrieve Access Requests from the Teleport Auth Service, present -them to reviewers, and modify them after a review. - -Define a user and role called `access-plugin` by adding the following content to -a file called `access-plugin.yaml`: - -```yaml -kind: role -version: v5 -metadata: - name: access-plugin -spec: - allow: - rules: - - resources: ['access_request'] - verbs: ['list', 'read'] - - resources: ['access_plugin_data'] - verbs: ['update'] - review_requests: - roles: ['demo-role'] - where: 'contains(request.system_annotations["pagerduty_services"], "My Critical Service")' ---- -kind: user -metadata: - name: access-plugin -spec: - roles: ['access-plugin'] -version: v2 -``` - -Notice that the `access-plugin` role includes an `allow.review_requests.roles` -field with `demo-role` as a value. This allows the plugin to review requests for -the `demo-role` role. - -We are also restricting the `access-plugin` role to reviewing only Access -Requests associated with `My Critical Service`. To do so, we have defined a -*predicate expression* in the `review_requests.where` field. This expression -indicates that the plugin *cannot* review requests for `demo-role` unless the -request contains an annotation with the key `pagerduty_services` and the value -`My Critical Service`. - -
-How "where" conditions work - -The `where` field includes a predicate expression that determines whether a -reviewer is allowed to review a specific request. You can include two functions -in a predicate expression: - -|Function|Description|Example| -|---|---|---| -|`equals`|A field is equivalent to a value.|`equals(request.reason, "resolve an incident")` -|`contains`|A list of strings includes a value.|`contains(reviewer.traits["team"], "devops")`| - -When you use the `where` field, you can include the following fields in your -predicate expression: - -|Field|Type|Description| -|---|---|---| -|`reviewer.roles`|`[]string`|A list of the reviewer's Teleport role names| -|`reviewer.traits`|`map[string][]string`|A map of the reviewer's Teleport traits by the name of the trait| -|`request.roles`|`[]string`|A list of the Teleport roles a user is requesting| -|`request.reason`|`string`|The reason attached to the request| -|`request.system_annotations`| `map[string][]string`|A map of annotations for the request by annotation key, e.g., `pagerduty_services`| - -You can combine functions using the following operators: - -|Operator|Format|Description| -|---|---|---| -|`&&`|`function && function`|Evaluates to true if both functions evaluate to true| -|`\|\|`|`function \|\| function`|Evaluates to true if either one or both functions evaluate to true| -|`!`| `!function`|Evaluates to true if the function evaluates to false| - -An example of a function is `equals(request.reason, "resolve an incident")`. To -configure an `allow` condition to match any Access Request that does not include -the reason, "resolve an incident", you could use the function, -`!equals(request.reason, "resolve an incident")`. - -
- -Create the user and role: - -```code -$ tctl create -f access-plugin.yaml -``` - -### `access-plugin-impersonator` - -As with all Teleport users, the Teleport Auth Service authenticates the -`access-plugin` user by issuing short-lived TLS credentials. In this case, we -will need to request the credentials manually by *impersonating* the -`access-plugin` role and user. - -If you are running a self-hosted Teleport Enterprise cluster and are using -`tctl` from the Auth Service host, you will already have impersonation -privileges. - -To grant your user impersonation privileges for `access-plugin`, define a role -called `access-plugin-impersonator` by pasting the following YAML document into -a file called `access-plugin-impersonator.yaml`: - -```yaml -kind: role -version: v5 -metadata: - name: access-plugin-impersonator -spec: - allow: - impersonate: - roles: - - access-plugin - users: - - access-plugin -``` - -Create the `access-plugin-impersonator` role: - -```code -$ tctl create -f access-plugin-impersonator.yaml -``` - -### Add roles to your user - -Later in this guide, your Teleport user will take three actions that require -additional permissions: - -- Generate signed credentials that the PagerDuty plugin will use to connect to - your Teleport Cluster -- Manually review an Access Request for the `editor` role -- Create an Access Request for the `demo-role` role - -To grant these permissions to your user, give your user the `editor-reviewer`, -`access-plugin-impersonator`, and `demo-role-requester` roles we defined -earlier. - -Open your user definition in an editor: - -```code -$ TELEPORT_USER=$(tsh status --format=json | jq -r .active.username) -$ tctl edit users/${TELEPORT_USER?} -``` - -Edit the user to include the roles you just created: - -```diff - roles: - - access - - auditor - - editor -+ - editor-reviewer -+ - access-plugin-impersonator -+ - demo-role-requester -``` - -Apply your changes by saving and closing the file in your editor. 
- -Log out of your Teleport cluster and log in again. You will now be able to -review requests for the `editor` role, request the `demo-role` role, and -generate signed certificates for the `access-plugin` role and user. - -### Create a user who will request access - -Create a user called `myuser` who has the `editor-requester` role. Later in this -guide, you will create an Access Request as this user to test the PagerDuty -plugin: - -```code -$ tctl users add myuser --roles=editor-requester -``` - -`tctl` will print an invitation URL to your terminal. Visit the URL and log in -as `myuser` for the first time, registering credentials as configured for your -Teleport cluster. - -## Step 3/8. Install the Teleport PagerDuty plugin - -(!docs/pages/includes/plugins/install-access-request.mdx name="pagerduty"!) - -## Step 4/8. Export the access plugin identity - -Give the plugin access to a Teleport identity file. We recommend using Machine -ID for this in order to produce short-lived identity files that are less -dangerous if exfiltrated, though in demo deployments, you can generate -longer-lived identity files with `tctl`: - - - -(!docs/pages/includes/plugins/tbot-identity.mdx secret="teleport-plugin-pagerduty-identity"!) - - -(!docs/pages/includes/plugins/identity-export.mdx user="access-plugin" secret="teleport-plugin-pagerduty-identity"!) - - - -## Step 5/8. Set up a PagerDuty API key - -Generate an API key that the PagerDuty plugin will use to create and modify -incidents as well as list users, services, and on-call policies. - -In your PagerDuty dashboard, go to **Integrations → API Access Keys** and click -**Create New API Key**. Add a key description, e.g., "Teleport integration". -Leave "Read-only API Key" unchecked. Copy the key to a file on your local -machine. We'll use the key in the plugin config file later. - -![Create an API -key](../../../../img/enterprise/plugins/pagerduty/pagerduty-integrations.png) - -## Step 6/8. 
Configure the PagerDuty plugin - -At this point, you have generated credentials that the PagerDuty plugin will use -to connect to Teleport and the PagerDuty API. You will now configure the -PagerDuty plugin to use these credentials, plus adjust any settings required for -your environment. - - - -Teleport's PagerDuty plugin has its own configuration file in TOML format. On -the host where you will run the PagerDuty plugin, generate a boilerplate config -by running the following commands: - -```code -$ teleport-pagerduty configure > teleport-pagerduty.toml -$ sudo mv teleport-pagerduty.toml /etc -``` - - -The PagerDuty Helm Chart uses a YAML values file to configure the plugin. On -the host where you have Helm installed, create a file called -`teleport-pagerduty-values.yaml` based on the following example: - -```yaml -teleport: - address: "" # Teleport Auth Service GRPC API address - identitySecretName: "" # Identity secret name - identitySecretPath: "" # Identity secret path - -pagerduty: - apiKey: "" # PagerDuty API Key - userEmail: "" # PagerDuty bot user email (Could be admin email) -``` - - - -
-Saving the configuration to another location - -The PagerDuty plugin expects the configuration to be in -`/etc/teleport-pagerduty.toml`, but you can override this with the `--config` -flag when you run the plugin binary later in this guide. - -
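For example, if you store the configuration somewhere else, you could point the plugin at it explicitly with the `--config` flag (the path is illustrative):

```code
$ teleport-pagerduty start --config /path/to/teleport-pagerduty.toml
```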
- -Edit the configuration file in `/etc/teleport-pagerduty.toml` as explained -below: - -### `[teleport]` - -The PagerDuty plugin uses this section to connect to your Teleport cluster: - -(!docs/pages/includes/plugins/config-toml-teleport.mdx!) - -(!docs/pages/includes/plugins/refresh-plugin-identity.mdx!) - -### `[pagerduty]` - -Assign `api_key` to the PagerDuty API key you generated earlier. - -Assign `user_email` to the email address of a PagerDuty user on the account -associated with your API key. When the PagerDuty plugin creates a new incident, -PagerDuty will display this incident as created by that user. - -
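Putting these together, the `[pagerduty]` section might resemble the following sketch (both values are placeholders):

```toml
[pagerduty]
api_key = "your-pagerduty-api-key"  # API key generated in the previous step
user_email = "bot-user@example.com" # PagerDuty user that incidents appear to come from
```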
-Overriding annotation names
-
-This guide has assumed that the Teleport PagerDuty plugin uses the
-`pagerduty_notify_service` annotation to determine which services to notify of
-new Access Request events and the `pagerduty_services` annotation to configure
-auto-approval.
-
-If you would like to use different names for these annotations in your Teleport
-roles, you can assign the `pagerduty.notify_service` and `pagerduty.services`
-fields.
-
-
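As an illustration, a configuration that reads hypothetical `pd_notify` and `pd_services` annotations instead might include the following (TOML format shown; the annotation names are invented for this example, and the exact key placement should follow your generated configuration file):

```toml
[pagerduty]
notify_service = "pd_notify"  # custom annotation key for incident notifications
services = "pd_services"      # custom annotation key for auto-approval
```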
-
-The final configuration should resemble the following:
-
-
-
-```toml
-(!examples/resources/plugins/teleport-pagerduty-cloud.toml!)
-```
-
-```yaml
-(!examples/resources/plugins/teleport-pagerduty-helm-cloud.yaml!)
-```
-
-
-
-## Step 7/8. Test the PagerDuty plugin
-
-
-
-After you configure the PagerDuty plugin, run the following command to start it.
-The `-d` flag will provide debug information to ensure that the plugin can
-connect to PagerDuty and your Teleport cluster:
-
-```code
-$ teleport-pagerduty start -d
-# DEBU DEBUG logging enabled logrus/exported.go:117
-# INFO Starting Teleport Access PagerDuty extension 0.1.0-dev.1: pagerduty/main.go:124
-# DEBU Checking Teleport server version pagerduty/main.go:226
-# DEBU Starting a request watcher... pagerduty/main.go:288
-# DEBU Starting PagerDuty API health check... pagerduty/main.go:170
-# DEBU Starting secure HTTPS server on :8081 utils/http.go:146
-# DEBU Watcher connected pagerduty/main.go:252
-# DEBU PagerDuty API health check finished ok pagerduty/main.go:176
-# DEBU Setting up the webhook extensions pagerduty/main.go:178
-```
-
-
-Run the plugin:
-
-```code
-$ docker run -v :/etc/teleport-pagerduty.toml public.ecr.aws/gravitational/teleport-plugin-pagerduty:(=teleport.version=) start
-```
-
-
-After modifying your configuration, run the bot with the following command:
-
-```code
-$ helm upgrade --install teleport-plugin-pagerduty teleport/teleport-plugin-pagerduty --values teleport-pagerduty-values.yaml
-```
-
-To inspect the plugin's logs, use the following command:
-
-```code
-$ kubectl logs deploy/teleport-plugin-pagerduty
-```
-
-Debug logs can be enabled by setting `log.severity` to `DEBUG` in
-`teleport-pagerduty-values.yaml` and executing the `helm upgrade ...` command
-
Then you can restart the plugin with the following command: - -```code -$ kubectl rollout restart deployment teleport-plugin-pagerduty -``` - - - - -### Create an Access Request - -As the Teleport user `myuser`, create an Access Request for the `editor` role: - -(!docs/pages/includes/plugins/create-request.mdx!) - -You should see a log resembling the following on your PagerDuty plugin host: - -``` -INFO Successfully created PagerDuty incident pd_incident_id:00000000000000 -pd_service_name:Teleport Access Request Notifications -request_id:00000000-0000-0000-0000-000000000000 request_op:put -request_state:PENDING pagerduty/app.go:366 -``` - -In PagerDuty, you will see a new incident containing information about the -Access Request: - -![PagerDuty dashboard showing an Access -Request](../../../../img/enterprise/plugins/pagerduty/new-access-req-incident.png) - -### Resolve the request - -(!docs/pages/includes/plugins/resolve-request.mdx!) - - - -When the PagerDuty plugin sends a notification, anyone who receives the -notification can follow the enclosed link to an Access Request URL. While users -must be authorized via their Teleport roles to review Access Request, you -should still check the Teleport audit log to ensure that the right users are -reviewing the right requests. - -When auditing Access Request reviews, check for events with the type `Access -Request Reviewed` in the Teleport Web UI. - - - -### Trigger an auto-approval - -As your Teleport user, create an Access Request for the `demo-role` role. 
- -You will see a log similar to the following on your PagerDuty plugin host: - -``` -INFO Successfully submitted a request approval -pd_user_email:myuser@example.com pd_user_name:My User -request_id:00000000-0000-0000-0000-000000000000 request_op:put -request_state:PENDING pagerduty/app.go:511 -``` - -Your Access Request will appear as `APPROVED`: - -```code -$ tsh requests ls -ID User Roles Created (UTC) Status ------------------------------------- ------------------ --------- ------------------- -------- -00000000-0000-0000-0000-000000000000 myuser@example.com demo-role 12 Aug 22 18:30 UTC APPROVED -``` - -## Step 8/8. Set up systemd - -This section is only relevant if you are running the Teleport PagerDuty plugin -on a Linux host. - -In production, we recommend starting the Teleport plugin daemon via an init -system like systemd. Here's the recommended Teleport plugin service unit file -for systemd: - -```ini -(!examples/systemd/plugins/teleport-pagerduty.service!) -``` - -Save this as `teleport-pagerduty.service` in either `/usr/lib/systemd/system/` -or another [unit file load -path](https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Unit%20File%20Load%20Path) -supported by systemd. - -Enable and start the plugin: - -```code -$ sudo systemctl enable teleport-pagerduty -$ sudo systemctl start teleport-pagerduty -``` diff --git a/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-slack.mdx b/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-slack.mdx deleted file mode 100644 index 41bfe9056136a..0000000000000 --- a/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-slack.mdx +++ /dev/null @@ -1,412 +0,0 @@ ---- -title: Run the Slack Access Request Plugin -description: How to set up Teleport's Slack plugin for privilege elevation approvals. ---- - -This guide will explain how to set up Slack to receive Access Request messages -from Teleport. 
Teleport's Slack integration notifies individuals and channels of -Access Requests. Users are then directed to log in to the Teleport cluster where -they can approve and deny Access Requests, making it easier to implement security -best practices without compromising productivity. - -
-This integration is hosted on Teleport Cloud - -(!docs/pages/includes/plugins/enroll.mdx name="the Slack integration"!) - -
- -![The Slack Access Request plugin](../../../../img/enterprise/plugins/slack/diagram.png) - -Here is an example of sending an Access Request via Teleport's Slack plugin: - - - -## Prerequisites - -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) - -(!docs/pages/includes/machine-id/plugin-prerequisites.mdx!) - -- Slack admin privileges to create an app and install it to your workspace. Your - Slack profile must have the "Workspace Owner" or "Workspace Admin" banner - below your profile picture. -- Either a Linux host or Kubernetes cluster where you will run the Slack plugin. - -(!/docs/pages/includes/tctl.mdx!) - -## Step 1/8. Define RBAC resources - -Before you set up the Slack plugin, you will need to enable Role Access Requests -in your Teleport cluster. - -(!/docs/pages/includes/plugins/editor-request-rbac.mdx!) - -## Step 2/8. Install the Teleport Slack plugin - -(!docs/pages/includes/plugins/install-access-request.mdx name="slack"!) - -## Step 3/8. Create a user and role for the plugin - -(!docs/pages/includes/plugins/rbac-with-friendly-name.mdx!) - -(!/docs/pages/includes/plugins/rbac-impersonate.mdx!) - -## Step 4/8. Export the access plugin identity - -Give the plugin access to a Teleport identity file. We recommend using Machine -ID for this in order to produce short-lived identity files that are less -dangerous if exfiltrated, though in demo deployments, you can generate -longer-lived identity files with `tctl`: - - - -(!docs/pages/includes/plugins/tbot-identity.mdx secret="teleport-plugin-slack-identity"!) - - -(!docs/pages/includes/plugins/identity-export.mdx user="access-plugin" secret="teleport-plugin-slack-identity"!) - - - -## Step 5/8. Register a Slack app - -The Access Request plugin for Slack receives Access Request events from the -Teleport Auth Service, formats them into Slack messages, and sends them to the -Slack API to post them in your workspace. For this to work, you must register a -new app with the Slack API. 
- -### Create your app - -Visit [https://api.slack.com/apps](https://api.slack.com/apps) to create a new -Slack app. Click "Create an App", then "From scratch". Fill in the form as shown -below: - -![Create Slack App](../../../../img/enterprise/plugins/slack/Create-a-Slack-App.png) - -The "App Name" should be "Teleport". Click the "Development Slack Workspace" -dropdown and choose the workspace where you would like to see Access Request -messages. - -### Generate an OAuth token with scopes - -Next, configure your application to authenticate to the Slack API. We will do -this by generating an OAuth token that the plugin will present to the Slack API. - -We will restrict the plugin to the narrowest possible permissions by using OAuth -scopes. The Slack plugin needs to post messages to your workspace. It also needs -to read usernames and email addresses in order to direct Access Request -notifications from the Auth Service to the appropriate Teleport users in Slack. - -After creating your app, the Slack website will open a console where you can -specify configuration options. On the sidebar menu under "Features", click -"OAuth & Permissions". - -Scroll to the "Scopes" section and click "Add an OAuth Scope" for each of the -following scopes: - -- `chat:write` -- `incoming-webhook` -- `users:read` -- `users:read.email` - -The result should look like this: - -![API Scopes](../../../../img/enterprise/plugins/slack/api-scopes.png) - -After you have configured scopes for your plugin, scroll back to the top of the -OAuth & Permissions page, find the "OAuth Tokens for Your Workspace" section, -and click "Install to Workspace". You will see a summary of the permissions you -configured for the Slack plugin earlier. - -In "Where should Teleport post?", choose "Slackbot" as the default channel the -plugin will post to. The plugin will post here when sending direct messages. -Later in this guide, we will configure the plugin to post in other channels as -well.
- -After submitting this form, you will see an OAuth token in the "OAuth & -Permissions" tab under "Tokens for Your Workspace": - -![OAuth Tokens](../../../../img/enterprise/plugins/slack/OAuth.png) - -You will use this token later when configuring the Slack plugin. - -## Step 6/8. Configure the Teleport Slack plugin - -At this point, the Teleport Slack plugin has the credentials it needs to -communicate with your Teleport cluster and the Slack API. In this step, you will -configure the Slack plugin to use these credentials. You will also configure the -plugin to notify the right Slack channels when it receives an Access Request -update. - -### Create a configuration file - - - -The Teleport Slack plugin uses a configuration file in TOML format. Generate a -boilerplate configuration by running the following command (the plugin will not run -unless the config file is in `/etc/teleport-slack.toml`): - -```code -$ teleport-slack configure | sudo tee /etc/teleport-slack.toml > /dev/null -``` - -This should result in a configuration file like the one below: - -```toml -(!examples/resources/plugins/teleport-slack.toml!) -``` - - -The Slack Helm Chart uses a YAML values file to configure the plugin. -On your local workstation, create a file called `teleport-slack-helm.yaml` -based on the following example: - -```yaml -(!examples/resources/plugins/teleport-slack-helm.yaml!) -``` - - - - -### Edit the configuration file - -Open the configuration file created for the Teleport Slack plugin and update the following fields: - -**`[teleport]`** - -The Slack plugin uses this section to connect to your Teleport cluster: - -(!docs/pages/includes/plugins/config-toml-teleport.mdx!) - -(!docs/pages/includes/plugins/refresh-plugin-identity.mdx!) - -**`[slack]`** - -`token`: Open [`https://api.slack.com/apps`](https://api.slack.com/apps), find -the Slack app you created earlier, navigate to the "OAuth & Permissions" tab, -copy the "Bot User OAuth Token", and paste it into this field.
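After pasting in the token, the `[slack]` section of `/etc/teleport-slack.toml` should look roughly like the following sketch (the token value here is a placeholder, not a real credential):

```toml
[slack]
# Bot User OAuth Token copied from the "OAuth & Permissions" tab.
# "xoxb-..." is a placeholder; substitute your own token.
token = "xoxb-0000000000-placeholder"
```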
- -**`[role_to_recipients]`** - -The `role_to_recipients` map configures the channels that the Slack plugin will -notify when a user requests access to a specific role. When the Slack plugin -receives an Access Request from the Auth Service, it will look up the role being -requested and identify the Slack channels to notify. - - - -Here is an example of a `role_to_recipients` map. Each value can be a -single string or an array of strings: - -```toml -[role_to_recipients] -"*" = "admin-slack-channel" -"dev" = ["dev-slack-channel", "admin-slack-channel"] -"dba" = "alex@gmail.com" -``` - - -In the Helm chart, the `role_to_recipients` field is called `roleToRecipients` -and uses the following format, where keys are strings and values are arrays of -strings: - -```yaml -roleToRecipients: - "*": ["admin-slack-channel"] - "dev": - - "dev-slack-channel" - - "admin-slack-channel" - "dba": ["alex@gmail.com"] -``` - - - -In the `role_to_recipients` map, each key is the name of a Teleport role. Each -value configures the Slack channel (or channels) to notify. Each string must be -either the name of a Slack channel (including a user's direct message channel) -or the email address of a Slack user. If the recipient is an email address, the -Slack plugin will use that email address to look up a direct message channel. - -The `role_to_recipients` map must also include an entry for `"*"`, which the -plugin looks up if no other entry matches a given role name. In the example -above, requests for roles aside from `dev` and `dba` will notify the -`admin-slack-channel` channel. - -
-Suggested reviewers - -Users can suggest reviewers when they create an Access Request, e.g.: - -```code -$ tsh request create --roles=dbadmin --reviewers=alice@example.com,ivan@example.com -``` - -If an Access Request includes suggested reviewers, the Slack plugin will add -these to the list of channels to notify. If a suggested reviewer is an email -address, the plugin will look up the direct message channel for that -address and post a message in that channel. - -
- -Configure the Slack plugin to notify you when a user requests the `editor` role -by adding the following to your `role_to_recipients` config (replace -`TELEPORT_USERNAME` with the user you assigned the `editor-reviewer` role -earlier): - - - -```toml -[role_to_recipients] -"*" = "access-requests" -"editor" = "TELEPORT_USERNAME" -``` - - -```yaml -roleToRecipients: - "*": "access-requests" - "editor": "TELEPORT_USERNAME" -``` - - - -Either create an `access-requests` channel in your Slack workspace or rename the -value of the `"*"` key to an existing channel. - -### Invite your Slack app - -Once you have configured the channels that the Slack plugin will notify when it -receives an Access Request, you will need to ensure that the plugin can post in -those channels. - -You have already configured the plugin to send direct messages as Slackbot. For -any other channel you mention in your `role_to_recipients` map, you will need -to invite the plugin to that channel. Navigate to each channel and enter `/invite -@teleport` in the message box. - -## Step 7/8. Test your Slack app - -Once Teleport is running, you've created the Slack app, and the plugin is -configured, you can now run the plugin and test the workflow. 
- - - -Start the plugin: - -```code -$ teleport-slack start -``` - -If everything works fine, the log output should look like this: - -```code -$ teleport-slack start -INFO Starting Teleport Access Slack Plugin 7.2.1: slack/app.go:80 -INFO Plugin is ready slack/app.go:101 -``` - - -run the plugin: - -```code -$ docker run -v :/etc/teleport-slack.toml public.ecr.aws/gravitational/teleport-plugin-slack:(=teleport.version=) start -``` - - -Install the plugin: - -```code -$ helm upgrade --install teleport-plugin-slack teleport/teleport-plugin-slack --values teleport-slack-helm.yaml -``` - -To inspect the plugin's logs, use the following command: - -```code -$ kubectl logs deploy/teleport-plugin-slack -``` - -Debug logs can be enabled by setting `log.severity` to `DEBUG` in -`teleport-slack-helm.yaml` and executing the `helm upgrade ...` command -above again. Then you can restart the plugin with the following command: - -```code -$ kubectl rollout restart deployment teleport-plugin-slack -``` - - - -Create an Access Request and check if the plugin works as expected with the -following steps. - -### Create an Access Request - -(!docs/pages/includes/plugins/create-request.mdx!) - -The user you configured earlier to review the request should receive a direct -message from "Teleport" in Slack allowing them to visit a link in the Teleport -Web UI and either approve or deny the request. - -### Resolve the request - -(!docs/pages/includes/plugins/resolve-request.mdx!) - -Once the request is resolved, the Slack bot will add an emoji reaction of ✅ or -❌ to the Slack message for the Access Request, depending on whether the request -was approved or denied. - - - -When the Slack plugin posts an Access Request notification to a channel, anyone -with access to the channel can view the notification and follow the link. 
While -users must be authorized via their Teleport roles to review Access Requests, you -should still check the Teleport audit log to ensure that the right users are -reviewing the right requests. - -When auditing Access Request reviews, check for events with the type `Access -Request Reviewed` in the Teleport Web UI. - - - -## Step 8/8. Set up systemd - -This section is only relevant if you are running the Teleport Slack plugin on a -Linux host. - -In production, we recommend starting the Teleport plugin daemon via an init -system like systemd. Here's the recommended Teleport plugin service unit file -for systemd: - -```ini -(!examples/systemd/plugins/teleport-slack.service!) -``` - -Save this as `teleport-slack.service` in either `/usr/lib/systemd/system/` or -another [unit file load -path](https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Unit%20File%20Load%20Path) -supported by systemd. - -Enable and start the plugin: - -```code -$ sudo systemctl enable teleport-slack -$ sudo systemctl start teleport-slack -``` - -## Next steps - -- Read our guides to configuring [Resource Access - Requests](../access-requests/resource-requests.mdx) and [Role Access - Requests](../access-requests/role-requests.mdx) so you can get the most out - of your Access Request plugins. diff --git a/docs/pages/admin-guides/access-controls/access-requests/access-requests.mdx b/docs/pages/admin-guides/access-controls/access-requests/access-requests.mdx deleted file mode 100644 index a05b91b1c3db7..0000000000000 --- a/docs/pages/admin-guides/access-controls/access-requests/access-requests.mdx +++ /dev/null @@ -1,61 +0,0 @@ ---- -title: Just-in-Time Access Requests -description: Use just-in-time Access Requests to request elevated privileges. -layout: tocless-doc ---- - -Just-in-time Access Requests allow Teleport users to request access to a -resource or role depending on need. The request can then be approved or denied -based on a configurable number of approvers. 
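The number of approvals required before a request is granted is set per role with the `thresholds` field under `allow.request`. This sketch assumes a hypothetical requester role named `incident-responder`; see the Access Request Configuration guide for the full schema:

```yaml
kind: role
version: v7
metadata:
  name: incident-responder
spec:
  allow:
    request:
      roles:
        - access
      thresholds:
        # Require two approvals before the request is granted;
        # a single denial rejects it.
        - approve: 2
          deny: 1
```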
- -You can use Access Requests to implement the principle of least privilege in -your organization, leaving an attacker with no permanent admins to target. Users -receive elevated privileges for a limited period of time. Request approvers can -be configured with limited cluster access so they are not high value targets. - -Access Requests are designed to provide temporary permissions to users. If you -want to grant longstanding permissions to a group of users, with the option to -renew these permissions after a recurring interval (such as three months), -consider [Access Lists](../access-lists/access-lists.mdx). - -## See how Access Requests work - -Access Requests support two main use cases: **Role Access Requests** -and **Resource Access Requests**. - -With Role Access Requests, engineers can request temporary credentials with -elevated roles in order to perform critical system-wide tasks. - -[Get started with Role Access Requests](role-requests.mdx). - -With Resource Access Requests, engineers can easily get access to only the -individual resources they need, when they need it. - -[Get started with Resource Access Requests](resource-requests.mdx). - -## Configure Access Requests - -You can configure all aspects of the Access Request lifecycle in Teleport, -including: - -- When a user must make a request. -- What permissions a user can request. -- How long elevated permissions can last. -- How many users can approve or deny different kinds of requests. - -Read the [Access Request -Configuration](access-request-configuration.mdx) guide for an -overview of the configuration options available for Access Requests. - -## Teleport Community Edition users - -Just-in-time Access Requests are a feature of Teleport Enterprise. Teleport -Community Edition users can get a preview of how Access Requests work by -requesting a role via the Teleport CLI. 
Full Access Request functionality, -including Resource Access Requests and managing Access Requests via the Web UI, is -available in Teleport Enterprise. - -For information on how to use Just-in-time Access Requests with Teleport Community -Edition, see [Teleport Community Access Requests](oss-role-requests.mdx). - - diff --git a/docs/pages/admin-guides/access-controls/access-requests/automatic-reviews.mdx b/docs/pages/admin-guides/access-controls/access-requests/automatic-reviews.mdx deleted file mode 100644 index 186cefe82bd8f..0000000000000 --- a/docs/pages/admin-guides/access-controls/access-requests/automatic-reviews.mdx +++ /dev/null @@ -1,154 +0,0 @@ ---- -title: Configure Automatic Reviews -description: Describes how to configure Access Monitoring Rules for Automatic Reviews. ---- - -Teleport supports automatic reviews of role Access Requests. This -feature enables teams to enforce a zero standing privilege policy, while -still allowing users to receive temporary access without manual approval. - - - Automatic reviews are not currently supported with resource Access Requests. - This functionality will be enabled in a future Teleport release. - - -## How it works - -Automatic reviews are triggered by Access Monitoring Rules. These rules instruct -Teleport to monitor Access Requests and automatically submit a review -when certain conditions (such as requested roles or user traits) are met. - -For example, an Access Monitoring Rule can perform an automatic Access Request -approval when a user with the Teleport trait or IdP attribute `team: demo` -requests access to the `access` role. - -## Prerequisites - -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) - -- This feature requires Teleport Identity Governance. - -## Step 1/4. Create a requester role and user - -In this example, we'll first create: -- A role named `demo-access-request`, which allows requesting access to the -`access` role. -- A user named `demo-access-requester`, assigned the above role.
- -### Create the role - -Create a role configuration file named `demo-role.yaml`: -```yaml -# demo-role.yaml -kind: role -version: v7 -metadata: - name: demo-access-request -spec: - allow: - request: - roles: - - access - search_as_roles: - - access -``` - -Create the role with: -```code -$ tctl create demo-role.yaml -``` - -### Create the user - -Use the following command to create the user and assign the role: -```code -$ tctl users add --roles=demo-access-request demo-access-requester -``` - -Alternatively, you can assign the role after creating the user: - - (!docs/pages/includes/add-role-to-user.mdx role="demo-access-request" user="\`demo-access-requester\`"!) - -## Step 2/4. Assign user traits - -To allow automatic review rules to evaluate the requesting user, assign them -traits via the Teleport Web UI. - -1. Go to **Zero Trust Access** -> **Users** -2. Next to `demo-access-requester`, click **Options** -> **Edit...** -3. Click **Add user trait**, and set: - - Key: `team` - - Value: `demo` -4. Click **Save** -5. Verify that the user has been updated with the desired trait. - -![Edit User Traits](../../../../img/access-controls/automatic-reviews/users-edit.png) - -When adding user traits, you can enter any keys and values. The user trait form -does not support wildcard or regular expressions. - - - Automatic reviews are compatible with SSO users and the attributes provided - by the IdP. - - -## Step 3/4. Create Access Monitoring Rule - -Next, define the automatic review rule via the Teleport Web UI. - -1. Go to **Identity Governance** -> **Access Requests** -> **View Access Monitoring Rules** -2. Click **Create Access Monitoring Rule** -> **Automatic Review Rule** -3. Configure the rule and set: - - **Name of roles to match**: `access` - - **User Traits**: `team: demo` - - **Review decision**: `APPROVED` -4. 
Click **Create Rule** - -![Create Automatic Review Rule Editor](../../../../img/access-controls/automatic-reviews/create-automatic-review-rule.png) - -This Access Monitoring Rule ensures that Access Requests for the `access` -role are automatically reviewed for approval if the Teleport user trait -requirements are satisfied. In this case, any user with the trait `team: demo` -will satisfy the requirement. - -## Step 4/4. Verify automatic review rule - -To verify the new Access Monitoring Rule, create an Access Request via the Teleport -Web UI. -1. Log in as `demo-access-requester` -2. Go to **Access Requests** and click **New Access Request** -3. Change the request type from **Resources** to **Roles** -4. Add the `access` role to the Access Request -5. Click **Proceed to Request**, then **Submit Request** - -![Create Access Request](../../../../img/access-requests/new-role-request.png) - -At this point, the new Access Request should have been created, automatically -reviewed, and transitioned into an `APPROVED` state. Navigate **Back to Listings** -and verify the Access Request status. It might take a second for the review to -process, so you may have to refresh the page. - -![View Access Requests](../../../../img/access-controls/automatic-reviews/view-access-requests.png) - -## Troubleshooting - -### Conflicting automatic review rules - -Automatic review rules can automatically approve or deny Access Requests based -on the selected review decision. If an Access Request meets the conditions for -both an approval rule and a denial rule, the denial rule takes precedence. - -### Resource access requests - -Automatic review rules are not currently supported for resource Access Requests. -These rules will not be applied to any Access Requests that include a resource -other than a role. -
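Access Monitoring Rules created in the Web UI are also ordinary Teleport resources, so you can review or version-control them with `tctl`. The following is a sketch of roughly what a rule like the one in this guide looks like as YAML; exact field names can vary between Teleport versions, so consult the Access Monitoring Rules reference before applying anything like it:

```yaml
kind: access_monitoring_rule
version: v1
metadata:
  name: demo-automatic-approval
spec:
  subjects:
    - access_request
  # Match requests for the `access` role from users with the `team: demo` trait.
  condition: >-
    access_request.spec.roles.contains("access") &&
    user.traits["team"].contains("demo")
  automatic_review:
    integration: builtin
    decision: APPROVED
```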
- -## Next Steps - -- For more configuration options with Access Monitoring Rules, refer to the - [Access Monitoring Rules Reference](../../../reference/access-controls/access-monitoring-rules.mdx). -- For configuration with Teleport Terraform Provider, refer to the - [Terraform Resources Index](../../../reference/terraform-provider/resources/access_monitoring_rule.mdx) -- For configuration options with SSO, refer to the - [Single Sign-On Guides](../sso/sso.mdx) diff --git a/docs/pages/admin-guides/access-controls/access-requests/resource-requests.mdx b/docs/pages/admin-guides/access-controls/access-requests/resource-requests.mdx deleted file mode 100644 index 489d283b8cc13..0000000000000 --- a/docs/pages/admin-guides/access-controls/access-requests/resource-requests.mdx +++ /dev/null @@ -1,684 +0,0 @@ ---- -title: Resource Access Requests -description: Teleport allows users to request access to specific resources from the CLI or UI. Requests can be escalated via ChatOps or anywhere else via our flexible Authorization Workflow API. -tocDepth: 3 ---- - -With Teleport Resource Access Requests, users can request access to specific -resources without needing to know anything about the roles or RBAC controls used -under the hood. -The Access Request API makes it easy to dynamically approve or deny these -requests. - -Just-in-time Access Requests are a feature of Teleport Enterprise. -Teleport Community Edition users can get a preview of how Access Requests work by -requesting a role via the Teleport CLI. Full Access Request functionality, -including Resource Access Requests and an intuitive and searchable UI are -available in Teleport Enterprise. - -## Prerequisites - -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) - -- (!docs/pages/includes/tctl.mdx!) - -## Step 1/6. Grant roles to users - -The built-in `requester` and `reviewer` roles have permissions to, respectively, -open and review Access Requests. 
Grant the `requester` and `reviewer` roles to -existing users, or create new users to test this feature. Make sure the -requester has a valid `login` so that they can view and access SSH nodes. - -For the rest of the guide we will assume that the `requester` role has been -granted to a user named `alice` and the `reviewer` role has been granted to a -user named `bob`. - -1. Assign the `requester` role to a user named `alice`: - - (!docs/pages/includes/add-role-to-user.mdx role="requester" user="\`alice\`"!) - -1. Repeat these steps to assign the `reviewer` role to a user named `bob`. - - - -Consider defining custom roles to limit the scope of a requester or reviewer's -permissions. Read the [Access Request -Configuration](./access-request-configuration.mdx) guide for available options. - - - -## Step 2/6. Search for resources - -First, log in as `alice`. - -```code -$ tsh login --proxy teleport.example.com --user alice -``` - -Notice that `tsh ls` returns an empty list, because `alice` does not have access to any resources by default. -```code -$ tsh ls -Node Name Address Labels ---------- ------- ------ -``` - -Then try searching for all available ssh nodes. - -```code -$ tsh request search --kind node -Name Hostname Labels Resource ID ------------------------------------- ----------- ------------ ------------------------------------------------------ -b1168402-9340-421a-a344-af66a6675738 iot test=test /teleport.example.com/node/b1168402-9340-421a-a344-af66a6675738 -bbb56211-7b54-4f9e-bee9-b68ea156be5f node test=test /teleport.example.com/node/bbb56211-7b54-4f9e-bee9-b68ea156be5f - -To request access to these resources, run -> tsh request create --resource /teleport.example.com/node/b1168402-9340-421a-a344-af66a6675738 --resource /teleport.example.com/node/bbb56211-7b54-4f9e-bee9-b68ea156be5f \ - --reason -``` - -You can search for resources of kind `node`, `kube_cluster`, `db`, `app`, and -`windows_desktop`. 
Teleport also supports searching and requesting access to -resources within Kubernetes clusters. - -
-List of supported Kubernetes resources - -Teleport supports searching and requesting access to the following Kubernetes resources: - - - -Requesting access to a Kubernetes namespace allows you to access all resources -in that namespace, including the supported Kubernetes resources listed below. -However, it prevents you from accessing any resources belonging to another namespace. - - - -- `pod` -- `secret` -- `configmap` -- `namespace` -- `service` -- `serviceaccount` -- `kube_node` -- `persistentvolume` -- `persistentvolumeclaim` -- `deployment` -- `replicaset` -- `statefulset` -- `daemonset` -- `clusterrole` -- `kube_role` -- `clusterrolebinding` -- `rolebinding` -- `cronjob` -- `job` -- `certificatesigningrequest` -- `ingress` -
- -Advanced filters and queries are supported. See our -[filtering reference](../../../reference/cli/cli.mdx) for more information. - -Try narrowing your search to a specific resource you want to access. - -```code -$ tsh request search --kind node --search iot -Name Hostname Labels Resource ID ------------------------------------- ----------- ------------ ------------------------------------------------------ -b1168402-9340-421a-a344-af66a6675738 iot test=test /teleport.example.com/node/b1168402-9340-421a-a344-af66a6675738 - -To request access to these resources, run -> tsh request create --resource /teleport.example.com/node/b1168402-9340-421a-a344-af66a6675738 \ - --reason -``` - -## Step 3/6. Request access to a resource - -Copy the command output by `tsh request search` in the previous step, optionally filling in a request reason. - -```code -$ tsh request create --resource /teleport.example.com/node/bbb56211-7b54-4f9e-bee9-b68ea156be5f \ - --reason "responding to incident 123" -Creating request... -Request ID: f406f5d8-3c2a-428f-8547-a1d091a4ddab -Username: alice -Roles: access -Resources: ["/teleport.example.com/node/bbb56211-7b54-4f9e-bee9-b68ea156be5f"] -Reason: "responding to incident 123" -Reviewers: [none] (suggested) -Status: PENDING - -hint: use 'tsh login --request-id=' to login with an approved request - -Waiting for request approval... - -``` - -The command will automatically wait until the request is approved. - -## Step 4/6. Approve the Access Request - -First, log in as `bob`. - -```code -$ tsh login --proxy teleport.example.com --user bob -``` - -Then list, review, and approve the Access Request. - -```code -$ tsh request ls -ID User Roles Resources Created At (UTC) Status ------------------------------------- ----- ------ --------------------------- ------------------- ------- -f406f5d8-3c2a-428f-8547-a1d091a4ddab alice access ["/teleport.example.... 
[+] 23 Jun 22 18:25 UTC PENDING - -[+] Requested resources truncated, use `tsh request show ` to view the full list - -hint: use 'tsh request show ' for additional details - use 'tsh login --request-id=' to login with an approved request -$ tsh request show f406f5d8-3c2a-428f-8547-a1d091a4ddab -Request ID: f406f5d8-3c2a-428f-8547-a1d091a4ddab -Username: alice -Roles: access -Resources: ["/teleport.example.com/node/bbb56211-7b54-4f9e-bee9-b68ea156be5f"] -Reason: "responding to incident 123" -Reviewers: [none] (suggested) -Status: PENDING - -hint: use 'tsh login --request-id=' to login with an approved request -$ tsh request review --approve f406f5d8-3c2a-428f-8547-a1d091a4ddab -Successfully submitted review. Request state: APPROVED -``` - - -Check out our -[Access Request Integrations](#integrating-with-an-external-tool) -to notify the right people about new Access Requests. - - -## Step 5/6. Access the requested resource - -`alice`'s `tsh request create` command should resolve now that the request has been approved. - -```code -$ tsh request create --resource /teleport.example.com/node/bbb56211-7b54-4f9e-bee9-b68ea156be5f \ - --reason "responding to incident 123" -Creating request... -Request ID: f406f5d8-3c2a-428f-8547-a1d091a4ddab -Username: alice -Roles: access -Resources: ["/teleport.example.com/node/bbb56211-7b54-4f9e-bee9-b68ea156be5f"] -Reason: "responding to incident 123" -Reviewers: [none] (suggested) -Status: PENDING - -hint: use 'tsh login --request-id=' to login with an approved request - -Waiting for request approval... - -Approval received, getting updated certificates... 
- -> Profile URL: https://teleport.example.com - Logged in as: alice - Active requests: f406f5d8-3c2a-428f-8547-a1d091a4ddab - Cluster: teleport.example.com - Roles: access, requester - Logins: alice - Kubernetes: disabled - Allowed Resources: ["/teleport.example.com/node/bbb56211-7b54-4f9e-bee9-b68ea156be5f"] - Valid until: 2022-06-23 22:46:22 -0700 PDT [valid for 11h16m0s] - Extensions: permit-agent-forwarding, permit-port-forwarding, permit-pty -``` - -`alice` can now view and access the node. - -```code -$ tsh ls -Node Name Address Labels ---------- --------- --------- -iot [::]:3022 test=test - -$ tsh ssh alice@iot -iot:~ alice$ -``` - -## Step 6/6. Resume regular access - -While logged in with a Resource Access Request, users will be blocked from access to any other resources. -This is necessary because their certificate now contains an elevated role, -so it is restricted to only allow access to the resources they were specifically approved for. -Use the `tsh request drop` command to "drop" the request and resume regular access. - -```code -$ tsh request drop -``` - -When you drop an Access Request, Teleport issues a new user certificate. The new -certificate retains the expiration of the previous certificate. Once your -session expires and you reauthenticate to Teleport, you receive a user -certificate with your user's originally configured time to live. - -## Next Steps - -### Automatically request access for SSH - -Once you have configured Resource Access Requests, -`tsh ssh` is able to automatically create a Resource Access Request for you when access is denied, -allowing you to skip the `tsh request search` and `tsh request create` steps. - -```code -$ tsh ssh alice@iot -ERROR: access denied to alice connecting to iot on cluster teleport.example.com - -You do not currently have access to alice@iot, attempting to request access. - -Enter request reason: please -Creating request... 
-Request ID: ab43fc70-e893-471b-872e-ae65eb24fd76 -Username: alice -Roles: access -Resources: ["/teleport.example.com/node/bbb56211-7b54-4f9e-bee9-b68ea156be5f"] -Reason: "please" -Reviewers: [none] (suggested) -Status: PENDING - -hint: use 'tsh login --request-id=' to login with an approved request - -Waiting for request approval... - -Approval received, reason="okay" -Getting updated certificates... - -iot:~ alice$ -``` - -### Restrict the resources a user can request access to - -In this guide, we showed you how to enable a user to search for resources to -request access to. To do so, we assigned the user a Teleport role with the -`search_as_roles` field set to the preset `access` role. - -You can impose further restrictions on the resources a user is allowed to -search by assigning `search_as_roles` to a more limited role. Below, we will -show you which permissions you must set to restrict a user's ability to search -for different resources. - -To restrict access to a particular resource using a role similar to the ones -below, edit one of the user's roles so the `search_as_roles` field includes the -role you have created. - -For full details on how to use Teleport roles to configure RBAC, see the -[Access Controls Reference](../../../reference/access-controls/roles.mdx). - -#### `node` - -You can restrict access to searching `node` resources by assigning values to the -`node_labels` field in the `spec.allow` or `spec.deny` fields. The following -role allows access to SSH Service instances with the `env:staging` label. - -```yaml -kind: role -version: v5 -metadata: - name: staging-access -spec: - allow: - node_labels: - env: staging - logins: - - "{{internal.logins}}" - options: - # Only allows the requester to use this role for 1 hour from time of request. 
- max_session_ttl: 1h -``` - -#### `kube_cluster` - -You can restrict access to searching `kube_cluster` resources by assigning -values to the `kubernetes_labels` field in the `spec.allow` or `spec.deny` -fields. - -The following role allows access to Kubernetes clusters with the `env:staging` -label: - -```yaml -kind: role -metadata: - name: kube-access -version: v7 -spec: - allow: - kubernetes_labels: - 'env': 'staging' - kubernetes_resources: - - kind: '*' - namespace: '*' - name: '*' - deny: {} -``` - -#### Kubernetes resources - -You can restrict access to Kubernetes resources by assigning values to the -`kubernetes_resources` field in the `spec.allow` or `spec.deny` fields. - -The following role allows access to Kubernetes pods with the name `nginx` in any -namespace, and all pods in the `dev` namespace: - -```yaml -kind: role -metadata: - name: kube-access -version: v7 -spec: - allow: - kubernetes_labels: - '*':'*' - kubernetes_resources: - - kind: pod - namespace: "*" - name: "nginx*" - - kind: pod - namespace: "dev" - name: "*" - kubernetes_groups: - - viewers - deny: {} -``` - -##### Preventing unintended access to Kubernetes resources - -If you are setting up a Teleport role to enable just-in-time access to -specific Kubernetes resources, you should set the role's `kubernetes_groups` and -`kubernetes_users` to a role that has no access to Kubernetes resources besides -the Kubernetes resources that Teleport is able to restrict access for. - -This is because, if a user requests access to a Kubernetes pod, and the request -is approved, the Teleport Kubernetes Service will use the `kubernetes_groups` -and `kubernetes_users` fields in the role to add impersonation headers to the user's -requests to a Kubernetes API server. Under these conditions, Teleport will be able -to restrict access to all Kubernetes resource kinds mentioned above except for -the desired `pod`.
Teleport is also able to restrict access to namespace-scoped -custom resources, but not cluster-scoped custom resources: CRD resources that -are cluster-scoped will be accessible to the user if the principals in the -`kubernetes_users` and `kubernetes_groups` fields have access to them. - -Requesting access to a Kubernetes namespace allows you to access all resources -in that namespace, but not any other supported resources -in the cluster. - -##### Restrict Access Requests to specific Kubernetes resource kinds - -The `request.kubernetes_resources` field allows you to restrict what kinds of Kubernetes -resources a user can request access to. Configuring this field to any value will disallow -requesting access to the entire Kubernetes cluster. - -If the `request.kubernetes_resources` field is not configured, then a user can request access -to any Kubernetes resource, including the entire Kubernetes cluster. - -The following role allows users to request access to Kubernetes namespaces. -Requests for Kubernetes resources other than `namespace` will not be allowed. - -```yaml -kind: role -metadata: - name: requester-kube-access -version: v7 -spec: - allow: - request: - search_as_roles: - - "kube-access" - kubernetes_resources: - - kind: "namespace" -``` - -The following role allows users to request access only to Kubernetes namespaces and/or pods. - -```yaml -kind: role -metadata: - name: requester-kube-access -version: v7 -spec: - allow: - request: - search_as_roles: - - "kube-access" - kubernetes_resources: - - kind: "namespace" - - kind: "pod" -``` - -The following role allows users to request access to any supported kind of Kubernetes resource.
- -```yaml -kind: role -metadata: - name: requester-kube-access -version: v7 -spec: - allow: - request: - search_as_roles: - - "kube-access" - kubernetes_resources: - - kind: "*" -``` - -See the [Kubernetes Resources](../../../enroll-resources/kubernetes-access/controls.mdx#kubernetes_resources) -section for a list of supported `kind` values. - -The `request.kubernetes_resources` field only restricts which kinds of Kubernetes resource requests are allowed. -To control Kubernetes access to these resources, see the -[Preventing unintended access to Kubernetes resources](#preventing-unintended-access-to-kubernetes-resources) -section. - -#### `db` - -You can restrict access to searching `db` resources by assigning values to the -`db_labels` field in the `spec.allow` or `spec.deny` fields. - -The following role allows access to databases with the `environment:dev` or -`environment:stage` labels: - -```yaml -kind: role -version: v5 -metadata: - name: developer -spec: - allow: - db_labels: - environment: ["dev", "stage"] - - # Database account names this role can connect as. - db_users: ["viewer", "editor"] - db_names: ["*"] -``` - -#### `app` - -You can restrict access to searching `app` resources by assigning values to the -`app_labels` field in the `spec.allow` or `spec.deny` fields. - -The following role allows access to all applications except for those in -`env:prod`: - -```yaml -kind: role -version: v5 -metadata: - name: dev -spec: - allow: - app_labels: - "*": "*" - deny: - app_labels: - env: "prod" -``` - -#### `windows_desktop` - -You can restrict access to searching `windows_desktop` resources by assigning -values to the `windows_desktop_labels` field in the `spec.allow` or `spec.deny` -fields. - -The following role allows access to all Windows desktops with the -`environment:dev` or `environment:stage` labels.
- -```yaml -kind: role -version: v4 -metadata: - name: developer -spec: - allow: - windows_desktop_labels: - environment: ["dev", "stage"] - - windows_desktop_logins: ["{{internal.windows_logins}}"] -``` - -### Request access to Kubernetes resources - -Teleport users can request access to a Kubernetes resource by running the following -command, where `<resource-id>` is the ID of the resource: - -```code -$ tsh request create --resources <resource-id> -``` - -#### Namespace-scoped resources - -For namespaced Kubernetes resources, the `resource-id` is in the following format: - -``` -/TELEPORT_CLUSTER/NAMESPACED_KIND/KUBE_CLUSTER/NAMESPACE/RESOURCE_NAME -``` - -Teleport supports the following namespaced resources: `pod`, `secret`, `configmap`, -`service`, `serviceaccount`, `persistentvolumeclaim`, `deployment`, `replicaset`, -`statefulset`, `daemonset`, `kube_role`, `rolebinding`, `cronjob`, `job`, -`ingress`. - -For example, to request access to a pod called `nginx-1` in the `development` -namespace, run the following command: - -```code -$ tsh request create --resources /teleport.example.com/pod/mycluster/development/nginx-1 -``` - -For the `NAMESPACE` and `RESOURCE_NAME` values, you can match ranges of characters by -supplying a wildcard (`*`) or regular expression. Regular expressions must begin -with `^` and end with `$`. - -For example, to create a request to access all pods in all namespaces that match -the regular expression `/^nginx-[a-z0-9-]+$/`, run the following command: - -```code -$ tsh request create --resources /teleport.example.com/pod/mycluster/*/^nginx-[a-z0-9-]+$ -``` - -#### Cluster-scoped resources - -For Kubernetes cluster-scoped resources, the `resource-id` is in the following format: - -``` -/TELEPORT_CLUSTER/CLUSTER_WIDE_KIND/KUBE_CLUSTER/RESOURCE_NAME -``` - -Teleport supports the following cluster-wide resources: `namespace`, `kube_node`, -`clusterrole`, `clusterrolebinding`, `persistentvolume`, `certificatesigningrequest`.
- -For example, to request access to a namespace called `prod`, run the following command: - -```code -$ tsh request create --resources /teleport.example.com/namespace/mycluster/prod -``` - -For the `RESOURCE_NAME` value, you can match ranges of characters by -supplying a wildcard (`*`) or regular expression. Regular expressions must begin -with `^` and end with `$`. - -For example, to create a request to access all namespaces prefixed with `dev-`, which -match the regular expression `/^dev-[a-z0-9-]+$/`, run the following command: - -```code -$ tsh request create --resources /teleport.example.com/namespace/mycluster/^dev-[a-z0-9-]+$ -``` - -Using cluster-scoped resource requests requires the role under `search_as_roles` to have the correct -permissions for the cluster-scoped resource. This is an example of the required permissions to -request access only to the namespace `prod` (and all resources within it): - -```yaml -kind: role -spec: - allow: - kubernetes_resources: - - kind: namespace - name: prod - verbs: - - '*' -``` - -#### Search for Kubernetes resources - -If a user has no access to a Kubernetes cluster, they can search the list of -resources in the cluster by running the following command, replacing -`<kube-cluster>` with the name of the Kubernetes cluster and -`<namespace>` with the name of a Kubernetes namespace: - -```code -$ tsh request search --kind=pod --kube-cluster=<kube-cluster> \ -[--kube-namespace=<namespace>|--all-kube-namespaces] -Name Namespace Labels Resource ID ------------------ --------- --------- ---------------------------------------------------------- -nginx-deployment-0 default app=nginx /teleport.example.com/pod/local/default/nginx-deployment-0 -nginx-deployment-1 default app=nginx /teleport.example.com/pod/local/default/nginx-deployment-1 - -To request access to these resources, run -> tsh request create --resource /teleport.example.com/pod/local/default/nginx-deployment-0 --resource /teleport.example.com/pod/local/default/nginx-deployment-1 \ - --reason <request reason> -``` - -The list returned includes the
name of the resource, the namespace it is in if -applicable, its labels, and the resource ID. -Resources included in the list are those that match the `kubernetes_resources` -field in the user's `search_as_roles`. The user can then: - -- Request access to the resources by running the command provided by `tsh request -search`. -- Edit the command to request access to a subset of the resources. -- Use a custom request with wildcards or regular expressions. - -`tsh request search --kind=<kind>` works even if the user has no permissions to interact - with the desired Kubernetes cluster, but the user's `search_as_roles` values -must allow access to the cluster. If the user is unsure of the name of the cluster, -they can run the following command to search for it: - -```code -$ tsh request search --kind=kube_cluster -Name Hostname Labels Resource ID ------ -------- ------ ---------------------------------------- -local /teleport.example.com/kube_cluster/local - -``` - -### Integrating with an external tool - -With Teleport's Access Request plugins, users can manage Access Requests from -within your organization's existing messaging and project management solutions. - -(!docs/pages/includes/access-request-integrations.mdx!)
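As a concluding sketch of how the pieces in this guide fit together, the following pair of roles (the names `staging-kube-access` and `staging-kube-requester` are illustrative, not part of any preset) lets a user search for and request access only to pods in Kubernetes clusters labeled `env: staging`. The first role scopes what the user can access once a request is approved; the second is assigned to the user and restricts what can be searched for and requested:

```yaml
# Role assumed after a request is approved: limits access to pods in
# clusters labeled env: staging.
kind: role
metadata:
  name: staging-kube-access
version: v7
spec:
  allow:
    kubernetes_labels:
      'env': 'staging'
    kubernetes_groups:
      - viewers
    kubernetes_resources:
      - kind: pod
        namespace: '*'
        name: '*'
---
# Role assigned to the user: allows searching as the role above and
# restricts Access Requests to the pod kind only.
kind: role
metadata:
  name: staging-kube-requester
version: v7
spec:
  allow:
    request:
      search_as_roles:
        - "staging-kube-access"
      kubernetes_resources:
        - kind: "pod"
```

Assign `staging-kube-requester` to the user; `staging-kube-access` is only assumed for the duration of an approved request.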
- -## Next Steps - -- Learn more about [Access Lists](../access-lists/access-lists.mdx) diff --git a/docs/pages/admin-guides/access-controls/compliance-frameworks/soc2.mdx b/docs/pages/admin-guides/access-controls/compliance-frameworks/soc2.mdx deleted file mode 100644 index fe932d2098ac3..0000000000000 --- a/docs/pages/admin-guides/access-controls/compliance-frameworks/soc2.mdx +++ /dev/null @@ -1,96 +0,0 @@ ---- -title: SOC 2 compliance for SSH, Kubernetes, and Databases -description: How to configure SOC 2-compliant access to SSH, Kubernetes, databases, desktops, and web apps -h1: SOC 2 Compliance for SSH, Kubernetes, Databases, Desktops, and Web Apps ---- - -Teleport is designed to meet SOC 2 requirements for infrastructure access, change management, and system operations. This document provides a high-level overview of how Teleport can help your company become SOC 2 compliant. - - - - SOC 2 compliance features are only available for Teleport Enterprise and - Teleport Enterprise Cloud. - - - -## Achieving SOC 2 Compliance with Teleport -SOC 2, or Service Organization Controls, was developed by the American Institute of CPAs (AICPA) and is based on five trust services criteria: security, availability, processing integrity, confidentiality, and privacy. - -## What Key SOC 2 Controls does Teleport help achieve? -Teleport helps with 4 of the 9 control areas. - -### CC6 Control Activities -Teleport helps with separation of duties using RBAC and restricts access to authorized users. -- Provide role-based access controls (RBAC) using short-lived certificates and your existing identity management service. - -### CC6 Physical and Logical Access Controls -Teleport issues temporary security credentials according to the user's role. - -### CC7 System Operations -Teleport helps audit and monitor access. - -- Audit events and session recordings are securely stored in a vault to prevent tampering.
-- Convert logins, executed commands, deployments, and other events into structured audit logs. -- Monitor, share, and join interactive sessions in real time from the CLI or browser. - -### CC8 Change Management -Teleport helps users elevate their permissions during incidents, while RBAC helps limit the need for approvals. The Teleport Slack integration allows managers to quickly approve temporary SSH Access Requests. - -- Let engineers request elevated permissions on the fly without ever leaving the terminal. -- Approve or deny permission requests with a ChatOps workflow via Slack or other supported platforms. -- Extend and customize the permission elevation workflow with a simple API and an extensible plugin system. - -## What Specific Criteria does Teleport Help Satisfy? - -Below is a table of principles and common points of focus listed by [AICPA's official "Trust Services Criteria" reference document](https://us.aicpa.org/content/dam/aicpa/interestareas/frc/assuranceadvisoryservices/downloadabledocuments/trust-services-criteria-2020.pdf) and how Teleport helps satisfy them. - -Each principle has many "Points of Focus", which apply differently to different products and organizations; talk to an auditor to understand exactly which points of focus apply to your organization. - -| Principle Criteria | Point of Focus | Teleport Features | -| --- | --- | --- | -| CC6.1 - Restricts Logical Access | Logical access to information assets, including hardware, data (at-rest, during processing, or in transmission), software, administrative authorities, mobile devices, output, and offline system components is restricted through the use of access control software and rule sets. | Teleport Enterprise supports robust [Role-based Access Controls (RBAC)](../access-controls.mdx) to:
  • Control which SSH nodes a user can or cannot access.
  • Control cluster level configuration (session recording, configuration, etc.)
  • Control which UNIX logins a user is allowed to use when logging into a server.
| -| CC6.1 - Identifies and Authenticates Users | Persons, infrastructure, and software are identified and authenticated prior to accessing information assets, whether locally or remotely. | Provide role-based access controls (RBAC) using short-lived certificates and your existing identity management service. Connecting locally or remotely is just as easy. | -| CC6.1 - Considers Network Segmentation | Network segmentation permits unrelated portions of the entity's information system to be isolated from each other. | [Teleport enables beyond corp network segmentation](../../management/admin/trustedclusters.mdx)

[Connect to nodes behind Firewalls or create reverse tunnels to a proxy server](../../../faq.mdx) | -| CC6.1 - Manages Points of Access | Points of access by outside entities and the types of data that flow through the points of access are identified, inventoried, and managed. The types of individuals and systems using each point of access are identified, documented, and managed. | [Label Nodes to inventory and create rules](../../management/admin/labels.mdx)

[Create Labels from AWS Tags](../../management/guides/ec2-tags.mdx)

Teleport maintains a live list of all nodes within a cluster. This node list can be queried by users (who see a subset they have access to) and administrators any time. | -| CC6.1 - Restricts Access to Information Assets | Combinations of data classification, separate data structures, port restrictions, access protocol restrictions, user identification, and digital certificates are used to establish access-control rules for information assets. | [Teleport uses Certificates to grant access and create access control rules](../../../core-concepts.mdx) | -| CC6.1 - Manages Identification and Authentication | Identification and authentication requirements are established, documented, and managed for individuals and systems accessing entity information, infrastructure, and software. | Teleport makes setting policies for SSH requirements easy since it works in the cloud and on premise with the same authentication security standards. | -| CC6.1 - Manages Credentials for Infrastructure and Software | New internal and external infrastructure and software are registered, authorized, and documented prior to being granted access credentials and implemented on the network or access point. Credentials are removed and access is disabled when access is no longer required or the infrastructure and software are no longer in use. | [Invite nodes to your cluster with short lived tokens](../../../enroll-resources/agents/join-token.mdx) | -| CC6.1 - Uses Encryption to Protect Data | The entity uses encryption to supplement other measures used to protect data at rest, when such protections are deemed appropriate based on assessed risk. | Teleport Audit logs can use DynamoDB encryption at rest. | -| CC6.1 - Protects Encryption Keys | Processes are in place to protect encryption keys during generation, storage, use, and destruction. | Teleport acts as a Certificate Authority to issue SSH and x509 user certificates that are signed by the CA and are (by default) short-lived. 
SSH host certificates are also signed by the CA and rotated automatically | -| CC6.2 - Controls Access Credentials to Protected Assets | Information asset access credentials are created based on an authorization from the system's asset owner or authorized custodian. | [Request Approval from the command line](../../../reference/cli/tctl.mdx)

[Build Approval Workflows with Access Requests](../access-requests/access-requests.mdx)

[Use Plugins to send approvals to tools like Slack or Jira](../access-requests/access-requests.mdx) | -| CC6.2 - Removes Access to Protected Assets When Appropriate | Processes are in place to remove credential access when an individual no longer requires such access. | [Teleport issues temporary credentials based on an employees role and are revoked upon job change, termination or end of a maintenance window](../access-requests/access-requests.mdx) | -| CC6.2 - Reviews Appropriateness of Access Credentials | The appropriateness of access credentials is reviewed on a periodic basis for unnecessary and inappropriate individuals with credentials. | Teleport maintains a live list of all nodes within a cluster. This node list can be queried by users (who see a subset they have access to) and administrators any time. | -| CC6.3 - Creates or Modifies Access to Protected Information Assets | Processes are in place to create or modify access to protected information assets based on authorization from the asset’s owner. | [Build Approval Workflows with Access Requests](../access-requests/access-requests.mdx) to get authorization from asset owners. | -| CC6.3 - Removes Access to Protected Information Assets | Processes are in place to remove access to protected information assets when an individual no longer requires access. | Teleport uses temporary credentials and can be integrated with your version control system or even your HR system to [revoke access with the Access requests API](../../api/api.mdx) | -| CC6.3 - Uses Role-Based Access Controls | Role-based access control is utilized to support segregation of incompatible functions. 
| [Role based access control ("RBAC") allows Teleport administrators to grant granular access permissions to users.](../access-controls.mdx) | -| CC6.3 - Reviews Access Roles and Rules | The appropriateness of access roles and access rules is reviewed on a periodic basis for unnecessary and inappropriate individuals with access and access rules are modified as appropriate. | Teleport maintains a live list of all nodes within a cluster. This node list can be queried by users (who see a subset they have access to) and administrators any time. | -| CC6.6 - Restricts Access | The types of activities that can occur through a communication channel (for example, FTP site, router port) are restricted. | Teleport makes it easy to restrict access to common ports like 21, 22 and instead have users [tunnel to the server](../../../faq.mdx) using Teleport. [Teleport uses the following default ports.](../../../reference/networking.mdx) | -| CC6.6 - Protects Identification and Authentication Credentials | Identification and authentication credentials are protected during transmission outside system boundaries. | [Yes, Teleport protects credentials outside your network allowing for Zero Trust network architecture](https://goteleport.com/blog/applying-principles-of-zero-trust-to-ssh/) | -| CC6.6 - Requires Additional Authentication or Credentials | Additional authentication information or credentials are required when accessing the system from outside its boundaries. | [Yes, Teleport can manage MFA with TOTP, WebAuthn or U2F Standards or connect to your Identity Provider using SAML, OAUTH or OIDC](../sso/sso.mdx) | -| CC6.6 - Implements Boundary Protection Systems | Boundary protection systems (for example, firewalls, demilitarized zones, and intrusion detection systems) are implemented to protect external access points from attempts and unauthorized access and are monitored to detect such attempts. 
| [Trusted clusters](../../management/admin/trustedclusters.mdx) | -| CC6.7 - Uses Encryption Technologies or Secure Communication Channels to Protect Data | Encryption technologies or secured communication channels are used to protect transmission of data and other communications beyond connectivity access points. | [Teleport has strong encryption including a FedRAMP compliant FIPS mode](./fedramp.mdx#start-teleport-in-fips-mode) | -| CC7.2 - Implements Detection Policies, Procedures, and Tools | Processes are in place to detect changes to software and configuration parameters that may be indicative of unauthorized or malicious software. | [Teleport creates detailed SSH Audit Logs with Metadata](../../../reference/monitoring/audit.mdx)

[Use BPF Session Recording to catch malicious program execution](../../../enroll-resources/server-access/guides/bpf-session-recording.mdx) | -| CC7.2 - Designs Detection Measures | Detection measures are designed to identify anomalies that could result from actual or attempted (1) compromise of physical barriers; (2) unauthorized actions of authorized personnel; (3) use of compromised identification and authentication credentials; (4) unauthorized access from outside the system boundaries; (5) compromise of authorized external parties; and (6) implementation or connection of unauthorized hardware and software. | [Use Enhanced Session Recording to catch malicious program execution, capture TCP connections, and log programs accessing files on the system that they should not be accessing.](../../../enroll-resources/server-access/guides/bpf-session-recording.mdx) | -| CC7.3 - Communicates and Reviews Detected Security Events | Detected security events are communicated to and reviewed by the individuals responsible for the management of the security program and actions are taken, if necessary. | [Use Session recording to replay and review suspicious sessions](../../../reference/architecture/session-recording.mdx). | -| CC7.3 - Develops and Implements Procedures to Analyze Security Incidents | Procedures are in place to analyze security incidents and determine system impact. | [Analyze detailed logs and replay recorded sessions to determine impact. See exactly what files were accessed during an incident.](../../../enroll-resources/server-access/guides/bpf-session-recording.mdx) | -| CC7.4 - Contains Security Incidents | Procedures are in place to contain security incidents that actively threaten entity objectives. | [Use Teleport to quickly revoke access and contain an active incident](../../access-controls/guides/locking.mdx)

[Use Shared Sessions so Multiple On-Call Engineers can collaborate and fight fires together.](../../../connect-your-client/tsh.mdx) | -| CC7.4 - Ends Threats Posed by Security Incidents | Procedures are in place to mitigate the effects of ongoing security incidents. | [Use Teleport to quickly revoke access and contain an active incident](../../access-controls/guides/locking.mdx) | -| CC7.4 - Obtains Understanding of Nature of Incident and Determines Containment Strategy | An understanding of the nature (for example, the method by which the incident occurred and the affected system resources) and severity of the security incident is obtained to determine the appropriate containment strategy, including (1) a determination of the appropriate response time frame, and (2) the determination and execution of the containment approach. | [Use Teleport’s Session Recording and Replay along with logs to understand what actions led to an incident.](../../../reference/monitoring/audit.mdx) | -| CC7.4 - Evaluates the Effectiveness of Incident Response | The design of incident-response activities is evaluated for effectiveness on a periodic basis. | [Use audit logs and session recordings to find pain points in your incident response plan and improve effectiveness](../../../enroll-resources/server-access/guides/bpf-session-recording.mdx). | -| CC7.4 - Periodically Evaluates Incidents | Periodically, management reviews incidents related to security, availability, processing integrity, confidentiality, and privacy and identifies the need for system changes based on incident patterns and root causes. | [Use Session recording and audit logs to find patterns that lead to incidents.](../../../enroll-resources/server-access/guides/bpf-session-recording.mdx) | -| CC7.5 - Determines Root Cause of the Event | The root cause of the event is determined. 
| [Use Session recording and audit logs to find root cause.](../../../enroll-resources/server-access/guides/bpf-session-recording.mdx) | -| CC7.5 - Improves Response and Recovery Procedures | Lessons learned are analyzed and the incident-response plan and recovery procedures are improved. | [Replay Session recordings at your 'after action review' or postmortem meetings](../../../enroll-resources/server-access/guides/bpf-session-recording.mdx) | -| CC8.1 - Authorizes, designs, develops or acquires, configures, documents, tests, approves and implements changes to its infrastructure, data, software, and procedures to meet its objectives. | Manages changes throughout the system life cycle. | Enables a documented software development lifecycle through its integrations with [infrastructure as code tools](../../infrastructure-as-code/infrastructure-as-code.mdx). | -| CC8.1 - Authorizes, designs, develops or acquires, configures, documents, tests, approves and implements changes to its infrastructure, data, software, and procedures to meet its objectives. | Authorizes Changes, Identifies and Evaluates System Changes. | [Infrastructure as code](../../infrastructure-as-code/infrastructure-as-code.mdx) integrations enable you to authorize changes using GitOps platforms. [Access Requests](../access-requests/access-requests.mdx) enable authorization for elevated privileges. | -| CC8.1 - Authorizes, designs, develops or acquires, configures, documents, tests, approves and implements changes to its infrastructure, data, software, and procedures to meet its objectives. | Designs and Develops Changes. | [Infrastructure as code](../../infrastructure-as-code/infrastructure-as-code.mdx) integrations allow you to specify infrastructure access controls as code. | -| CC8.1 - Authorizes, designs, develops or acquires, configures, documents, tests, approves and implements changes to its infrastructure, data, software, and procedures to meet its objectives. | Documents Changes. 
| [Infrastructure as code](../../infrastructure-as-code/infrastructure-as-code.mdx) integrations allow for documenting configuration changes in version control system logs, and a [notification system](../../../connect-your-client/notifications.mdx) allows you to inform end users of system changes. | -| CC8.1 - Authorizes, designs, develops or acquires, configures, documents, tests, approves and implements changes to its infrastructure, data, software, and procedures to meet its objectives. | Tracks System Changes. | [Infrastructure as code](../../infrastructure-as-code/infrastructure-as-code.mdx) integrations and audit logging allow you to track the history of configuration changes and [Access Request](../access-requests/access-requests.mdx) approvals in a Teleport cluster. | -| CC8.1 - Authorizes, designs, develops or acquires, configures, documents, tests, approves and implements changes to its infrastructure, data, software, and procedures to meet its objectives. | Configures Software. | [Infrastructure as code](../../infrastructure-as-code/infrastructure-as-code.mdx) support enables you to track dynamic resource configurations. [Helm support](../../../reference/helm-reference/helm-reference.mdx) allows you to track configurations for Teleport services on Kubernetes. You can also use version control to manage Teleport [YAML configuration files](../../../reference/config.mdx).| -| CC8.1 - Authorizes, designs, develops or acquires, configures, documents, tests, approves and implements changes to its infrastructure, data, software, and procedures to meet its objectives. | Tests System Changes. | Teleport's infrastructure as code support and [gRPC API](../../api/api.mdx) make it possible to set up staging environments and automated tests for changes in a Teleport configuration. 
| -| CC8.1 - Authorizes, designs, develops or acquires, configures, documents, tests, approves and implements changes to its infrastructure, data, software, and procedures to meet its objectives. | Approves System Changes. | [Access Requests](../access-requests/access-requests.mdx) allow approvals to proposed permissions changes, and infrastructure as code support enables you to set up an approval system for configurations. | -| CC8.1 - Authorizes, designs, develops or acquires, configures, documents, tests, approves and implements changes to its infrastructure, data, software, and procedures to meet its objectives. | Deploys System Changes. | [RBAC protections](../../../reference/access-controls/roles.mdx#rbac-for-dynamic-teleport-resources) for API resources allow you to restrict configuration changes to an authorized continuous deployment runner. | -| CC8.1 - Authorizes, designs, develops or acquires, configures, documents, tests, approves and implements changes to its infrastructure, data, software, and procedures to meet its objectives. | Identifies Changes in the Infrastructure, Data, Software, and Procedures Required to Remediate Incidents. | [Infrastructure as code](../../infrastructure-as-code/infrastructure-as-code.mdx) support enables you to revert changes. It is also possible for you to [back up the Auth Service backend](../../management/operations/backup-restore.mdx) on self-hosted clusters.| -| CC8.1 - Authorizes, designs, develops or acquires, configures, documents, tests, approves and implements changes to its infrastructure, data, software, and procedures to meet its objectives. | Creates Baseline Configuration of IT Technology. |[Infrastructure as code](../../infrastructure-as-code/infrastructure-as-code.mdx) support enables you to set up a baseline configuration in a code repository. 
| -| CC8.1 - Authorizes, designs, develops or acquires, configures, documents, tests, approves and implements changes to its infrastructure, data, software, and procedures to meet its objectives. | Provides Changes Necessary in Emergency Situations. | Admins can use [Access Requests](../access-requests/access-requests.mdx) to obtain temporarily elevated permissions in order to provide changes in emergency situations. | -| CC8.1 - Authorizes, designs, develops or acquires, configures, documents, tests, approves and implements changes to its infrastructure, data, software, and procedures to meet its objectives. | Manages Patch Changes. | [Infrastructure as code](../../infrastructure-as-code/infrastructure-as-code.mdx) support enables regular patch changes to configurations. | diff --git a/docs/pages/admin-guides/access-controls/device-trust/guide.mdx b/docs/pages/admin-guides/access-controls/device-trust/guide.mdx deleted file mode 100644 index 15433634493ac..0000000000000 --- a/docs/pages/admin-guides/access-controls/device-trust/guide.mdx +++ /dev/null @@ -1,158 +0,0 @@ ---- -title: Getting Started with Device Trust -description: Get started with Teleport Device Trust -videoBanner: gBQyj_X1LVw ---- - -(!docs/pages/includes/device-trust/support-notice.mdx!) - -Device Trust requires the following two steps to have been configured: - -- Device enforcement mode configured via either a role or a cluster-wide config. -- Trusted device registered and enrolled with Teleport. - -In this guide, you will update an existing user profile to assign the preset `require-trusted-device` -role and then enroll a trusted device into Teleport to access a resource (a test Linux server) -protected with Teleport. - -## Prerequisites - -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) - -(!docs/pages/includes/device-trust/prereqs.mdx!) - -- A user with the `editor` role.
- ```code - $ tsh status - > Profile URL: (=clusterDefaults.clusterName=):443 - Logged in as: (=clusterDefaults.username=) - Cluster: (=clusterDefaults.clusterName=) - Roles: access, auditor, editor - Logins: root, ubuntu, ec2-user - Kubernetes: disabled - Valid until: 2023-08-22 03:30:24 -0400 EDT [valid for 11h52m0s] - Extensions: login-ip, permit-agent-forwarding, permit-port-forwarding, permit-pty, private-key-policy - ``` -- Access to a Linux server (any Linux server you can access via `tsh ssh` will do). - ```code - $ tsh ls - Node Name Address Labels - ---------------- -------------- -------------------------------------- - (=clusterDefaults.nodeIP=) ⟵ Tunnel - - # test connection to (=clusterDefaults.nodeIP=) - $ tsh ssh root@(=clusterDefaults.nodeIP=) - root@(=clusterDefaults.nodeIP=):~# - ``` - -Once the above prerequisites are met, begin with the following step. - -## Step 1/2. Update user profile to enforce Device Trust - -To enforce Device Trust, a user must be assigned a role with Device Trust mode "required". - -For this guide, we will use the preset `require-trusted-device` role to update the current user's profile. - -Open the user resource in your editor so we can update it with the preset `require-trusted-device` role: - -```code -$ tctl edit users/(=clusterDefaults.username=) -``` - -Edit the profile: - -```diff -kind: user -metadata: - id: 1692716146877042322 - name: (=clusterDefaults.username=) -spec: - created_by: - time: "2023-08-14T13:42:22.291972449Z" - expires: "0001-01-01T00:00:00Z" - roles: - - access - - auditor - - editor -+ - require-trusted-device # add this line - status: - is_locked: false - ... -``` - -Update the user by saving and closing the file in your editor. - -Now that the user profile is updated to enforce Device Trust, try to access the test server -again.
-
-```code
-$ tsh logout; tsh login --proxy=(=clusterDefaults.clusterName=) --user=(=clusterDefaults.username=)
-$ tsh ssh root@(=clusterDefaults.nodeIP=)
-ERROR: access denied to root connecting to (=clusterDefaults.nodeIP=):0
-```
-
-As you can verify from the above step, access to the `(=clusterDefaults.nodeIP=)` SSH server,
-which was previously accessible, is now forbidden.
-
-## Step 2/2. Enroll device
-
-To access the `(=clusterDefaults.nodeIP=)` server again, you will have to enroll your device.
-
-Enrolling your device can be done using the `tsh` client:
-
-```code
-$ tsh device enroll --current-device
-Device "(=devicetrust.asset_tag=)"/macOS registered and enrolled
-```
-
-  The `--current-device` flag tells `tsh` to enroll the current device. The user must have the preset `editor`
-  or `device-admin` role to be able to self-enroll their device. For users without the `editor` or
-  `device-admin` roles, a device admin must generate an enrollment token, which can then be
-  used to enroll the device. Learn more about manual device enrollment in the
-  [device management guide](./device-management.mdx#register-a-trusted-device).
-
-Log in again to fetch an updated certificate with the device extensions:
-
-```code
-$ tsh logout; tsh login --proxy=(=clusterDefaults.clusterName=) --user=(=clusterDefaults.username=)
-
-$ tsh status
-> Profile URL: (=clusterDefaults.clusterName=):443
-  Logged in as: (=clusterDefaults.username=)
-  Cluster:      (=clusterDefaults.clusterName=):443
-  Roles:        access, auditor, editor
-  Logins:       root
-  Kubernetes:   enabled
-  Valid until:  2023-08-22 04:06:53 -0400 EDT [valid for 12h0m0s]
-  Extensions:   login-ip, ... teleport-device-asset-tag, teleport-device-credential-id, teleport-device-id
-```
-
-The presence of the `teleport-device-*` extensions shows that the device was successfully authenticated.
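If you also hold a device admin role, you can cross-check the enrollment from the cluster side. This is a sketch using `tctl`; the exact output columns vary by Teleport version:

```code
# List trusted devices registered with the cluster, including enrollment status
$ tctl devices ls
```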
- -Now, let's try to access server (`(=clusterDefaults.nodeIP=)`) again: - -```code -$ tsh ssh root@(=clusterDefaults.nodeIP=) -root@(=clusterDefaults.nodeIP=):~# -``` - -Congratulations! You have successfully configured a Trusted Device and accessed a resource protected with -Device Trust enforcement. - -## Troubleshooting - -(!docs/pages/includes/device-trust/troubleshooting.mdx!) - -## Next steps - -- [Device Management](./device-management.mdx) -- [Enforcing Device Trust](./enforcing-device-trust.mdx) -- [Jamf Pro Integration](./jamf-integration.mdx) -- The role we illustrated in this guide uses the `internal.logins` trait, - which Teleport replaces with values from the Teleport local user - database. For full details on how traits work in Teleport roles, - see the [Access Controls - Reference](../../../reference/access-controls/roles.mdx). - diff --git a/docs/pages/admin-guides/access-controls/getting-started.mdx b/docs/pages/admin-guides/access-controls/getting-started.mdx deleted file mode 100644 index cd37227a1debb..0000000000000 --- a/docs/pages/admin-guides/access-controls/getting-started.mdx +++ /dev/null @@ -1,253 +0,0 @@ ---- -title: Getting Started With Access Controls -description: Get started using Access Controls. ---- - -In Teleport, any local, SSO, or robot user can be assigned one or several roles. -Roles govern access to databases, SSH servers, Kubernetes clusters, Windows -desktops, and web apps. - -We will start with local users and preset roles, assign roles to SSO users, and -wrap up with creating your own role. - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) - -- (!docs/pages/includes/tctl.mdx!) - -(!docs/pages/includes/permission-warning.mdx!) - -## Step 1/3. Add local users with preset roles - -Teleport provides several preset roles: - -(!docs/pages/includes/preset-roles-table.mdx!) 
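Before assigning a preset role, you can inspect its full definition to see exactly what it allows. A quick sketch (requires permission to read role resources):

```code
# Print the YAML definition of the preset editor role
$ tctl get roles/editor
```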
- - - -Invite the local user Alice as cluster `editor`: - -```code -$ tctl users add alice --roles=editor -``` - - -Invite the local user Alice as cluster `editor` and `reviewer`: - -```code -$ tctl users add alice --roles=editor,reviewer -``` - - - - -Once Alice signs up, she will be able to edit cluster configuration. You can list -users and their roles using `tctl users ls`. - - - -```code -$ tctl users ls - -# User Roles -# -------------------- -------------- -# alice editor -``` - - -```code -$ tctl users ls - -# User Roles -# -------------------- -------------- -# alice editor, reviewer -``` - - - - -You can update the user's roles using the `tctl users update` command: - - - -```code -# Once Alice logs back in, she will be able to view audit logs -$ tctl users update alice --set-roles=editor,auditor -``` - - -```code -# Once Alice logs back in, she will be able to view audit logs -$ tctl users update alice --set-roles=editor,reviewer,auditor -``` - - - - -Because Alice has two or more roles, permissions from those roles create a union. She -will be able to act as a system administrator and auditor at the same time. - -## Step 2/3. Map SSO users to roles - -Next, follow the instructions to set up an authentication connector that maps -users within your SSO solution to Teleport roles. - -### Teleport Enterprise - -Create a SAML or OIDC application that Teleport can integrate with, then -create an authentication connector that maps users within your application to -Teleport roles. - - - - -Follow our [SAML Okta Guide](./sso/okta.mdx) to -create a SAML application. - -Save the file below as `okta.yaml` and update the `acs` field. -Any member in Okta group `okta-admin` will assume a built-in role `admin`. 
-
-```yaml
-kind: saml
-version: v2
-metadata:
-  name: okta
-spec:
-  acs: https://tele.example.com/v1/webapi/saml/acs
-  attributes_to_roles:
-    - {name: "groups", value: "okta-admin", roles: ["access"]}
-  entity_descriptor: |
-
-
-
-Follow our [OIDC guides](./sso/oidc.mdx#identity-providers) to
-create an OIDC application.
-
-Copy the YAML below to a file called `oidc.yaml` and edit the information to
-include the details of your OIDC application.
-
-```yaml
-(!examples/resources/oidc-connector.yaml!)
-```
-
-Create the `oidc` resource:
-
-```code
-$ tctl create oidc.yaml
-```
-
-
-
-
-### Teleport Community Edition
-
-Save the file below as `github.yaml` and update the fields. You will need to
-set up a
-[GitHub OAuth 2.0 Connector](https://developer.github.com/apps/building-oauth-apps/creating-an-oauth-app/)
-app. Any member belonging to the GitHub organization `octocats` and on team
-`admin` will be able to assume the built-in role `access`.
-
-```yaml
-kind: github
-version: v3
-metadata:
-  # connector name that will be used with `tsh --auth=github login`
-  name: github
-spec:
-  # client ID of GitHub OAuth app
-  client_id: client-id
-  # client secret of GitHub OAuth app
-  client_secret: client-secret
-  # This name will be shown on UI login screen
-  display: GitHub
-  # Change tele.example.com to your domain name
-  redirect_url: https://tele.example.com:443/v1/webapi/github/callback
-  # Map github teams to teleport roles
-  teams_to_roles:
-    - organization: octocats # GitHub organization name
-      team: admin # GitHub team name within that organization
-      # map github admin team to Teleport's "access" role
-      roles: ["access"]
-```
-
-Create the `github` resource:
-
-```code
-$ tctl create github.yaml
-```
-
-## Step 3/3. Create a custom role
-
-Let's create a custom role for interns. Interns will have access
-to test or staging SSH servers as `readonly` users. We will let them
-view some monitoring web applications and the dev Kubernetes cluster.
-
-Save this role as `interns.yaml`:
-
-```yaml
-kind: role
-version: v7
-metadata:
-  name: interns
-spec:
-  allow:
-    # Logins configures SSH login principals
-    logins: ['readonly']
-    # Assigns users with this role to the built-in Kubernetes group "view"
-    kubernetes_groups: ["view"]
-    # Allow access to SSH nodes, Kubernetes clusters, apps or databases
-    # labeled with "staging" or "test"
-    node_labels:
-      'env': ['staging', 'test']
-    kubernetes_labels:
-      'env': 'dev'
-    kubernetes_resources:
-      - kind: "*"
-        namespace: "*"
-        name: "*"
-        verbs: ["*"]
-    app_labels:
-      'type': ['monitoring']
-  # The deny rules always override allow rules.
-  deny:
-    # Deny access to any Node, database, app or Kubernetes cluster labeled
-    # as prod to any user.
-    node_labels:
-      'env': 'prod'
-    kubernetes_labels:
-      'env': 'prod'
-    kubernetes_resources:
-      - kind: "namespace"
-        name: "prod"
-    db_labels:
-      'env': 'prod'
-    app_labels:
-      'env': 'prod'
-```
-
-Create the role using the `tctl create -f` command:
-
-```code
-$ tctl create -f interns.yaml
-# Get a list of all roles in the system
-$ tctl get roles --format text
-```
-
-(!docs/pages/includes/create-role-using-web.mdx!)
-
-## Next steps
-
-- [Mapping SSO and local users traits with role templates](./guides/role-templates.mdx)
-- [Create certs for CI/CD using impersonation](./guides/impersonation.mdx)
-
diff --git a/docs/pages/admin-guides/access-controls/guides/dual-authz.mdx b/docs/pages/admin-guides/access-controls/guides/dual-authz.mdx
deleted file mode 100644
index 84dc472803abd..0000000000000
--- a/docs/pages/admin-guides/access-controls/guides/dual-authz.mdx
+++ /dev/null
@@ -1,234 +0,0 @@
----
-title: Dual Authorization
-description: Dual Authorization for SSH and Kubernetes.
-videoBanner: b_iqJm_o15I
----
-
-You can set up Teleport to require the approval of multiple team members to perform some critical actions.
-Here are the most common scenarios: - -- Improve the security of your system and prevent one successful phishing attack from compromising your system. -- Satisfy FedRAMP AC-3 Dual authorization control that requires approval of two authorized individuals. - -In this guide, we will set up Teleport's Just-in-Time Access Requests to require -the approval of two team members for a privileged role `elevated-access`. - -The steps below describe how to use Teleport with Mattermost. You can also -[integrate with many other providers](../access-requests/access-requests.mdx). - - - -Dual Authorization requires Teleport Enterprise. - - - -## Prerequisites - -- Mattermost installed. - -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) - - - ```code - $ docker run --name mattermost-preview -d --publish 8065:8065 --add-host dockerhost:127.0.0.1 mattermost/mattermost-preview - ``` - - - -- (!docs/pages/includes/tctl.mdx!) - -## Step 1/2. Set up a Teleport bot - -### Create a bot within Mattermost - -Enable bot account creation in "System Console -> Integrations". - -Toggle `Enable Bot Account Creation`. - -![Enable bots](../../../../img/access-controls/dual-authz/mattermost-0-enable.png) - -Go back to your team settings, navigate to "Integrations -> Bot Accounts". Press "Add Bot Account". - -![Enable bots](../../../../img/access-controls/dual-authz/mattermost-1-bot.png) - -Add the "Post All" permission on the new account. - -![Enable bots](../../../../img/access-controls/dual-authz/mattermost-2-all-permissions@2x.png) - -Create the bot and save the access token. - -### Set up RBAC for the plugin - -(!docs/pages/includes/plugins/rbac-with-friendly-name.mdx!) - -(!/docs/pages/includes/plugins/rbac-impersonate.mdx!) - -### Export the access-plugin identity files - -(!docs/pages/includes/plugins/identity-export.mdx user="access-plugin"!) - -We'll reference the exported file(s) later when configuring the plugin. 
-
-### Install the plugin
-
-(!docs/pages/includes/plugins/install-access-request.mdx name="mattermost"!)
-
-
-
-  Access Request Plugins are available as `amd64` or `arm64` Linux binaries for download.
-  Replace `ARCH` with your required architecture.
-
-  ```code
-  $ curl -L https://cdn.teleport.dev/teleport-access-mattermost-v(=teleport.plugin.version=)-linux--bin.tar.gz
-  $ tar -xzf teleport-access-mattermost-v(=teleport.plugin.version=)-linux--bin.tar.gz
-  $ cd teleport-access-mattermost
-  $ ./install
-  ```
-
-
-  To install from source you need `git` and `go` installed. If you do not have Go
-  installed, visit the Go [downloads page](https://go.dev/dl/).
-
-  ```code
-  $ git clone https://github.com/gravitational/teleport -b branch/v(=teleport.major_version=)
-  $ cd teleport/integrations/access/mattermost
-  $ git checkout v(=teleport.plugin.version=)
-  $ make build/teleport-mattermost
-  ```
-
-
-
-```code
-$ teleport-mattermost configure > /etc/teleport-mattermost.toml
-```
-
-Update the config with the Teleport address, Mattermost URL, and a bot token:
-
-```toml
-(!examples/resources/plugins/teleport-mattermost-cloud.toml!)
-```
-
-## Step 2/2. Configure dual authorization
-
-In this section, we will use an example to show you how to require dual
-authorization for a user to assume a role.
-
-### Require dual authorization for a role
-
-Alice and Ivan are reviewers. They can approve requests to assume the
-`elevated-access` role. Bob is a DevOps engineer and can assume the `elevated-access` role if two members
-with the `dbreviewer` role approve the request.
- -Create the following `elevated-access`, `dbreviewer` and `devops` roles: - -```yaml -kind: role -version: v5 -metadata: - name: dbreviewer -spec: - allow: - review_requests: - roles: ['elevated-access'] ---- -kind: role -version: v5 -metadata: - name: devops -spec: - allow: - request: - roles: ['elevated-access'] - thresholds: - - approve: 2 - deny: 1 ---- -kind: role -version: v5 -metadata: - name: elevated-access -spec: - allow: - logins: ['root'] - node_labels: - 'env': 'prod' - 'type': 'db' -``` - -(!docs/pages/includes/create-role-using-web.mdx!) - -The commands below create the local users Bob, Alice, and Ivan. - -```code -$ tctl users add bob@example.com --roles=devops -$ tctl users add alice@example.com --roles=dbreviewer -$ tctl users add ivan@example.com --roles=dbreviewer -``` - -### Create an Access Request - -Bob does not have a role `elevated-access` assigned to him, but can create an Access Request for this role in the Web UI or CLI: - - - - ![Role-Request](../../../../img/access-controls/dual-authz/role-new-request.png) - ![Request-Success](../../../../img/access-controls/dual-authz/request-success.png) - - - ```code - # Bob has to set valid emails of Alice and Ivan matching in Mattermost. 
- $ tsh request create --roles=elevated-access --reviewers=alice@example.com,ivan@example.com - ``` - - - -The Web UI will notify the admin: - -![Mattermost-Request](../../../../img/access-controls/dual-authz/pending-access-request.png) - -The request can then be reviewed and approved through the Web UI or CLI: - - - - ![Teleport-Approve](../../../../img/access-controls/dual-authz/approve-new-request.png) - - - - ```code - $ tsh request list - - # ID User Roles Created (UTC) Status - # ------------------------------------ ---------- --------------- ------------------- ------ - # 0193496f-268c-727e-b696-600a868429ff test (Bob) elevated-access 21 Nov 24 18:50 UTC PENDING - - $ tsh request review --approve --reason="Need to gain elevated-access for investigation" 0193496f-268c-727e-b696-600a868429ff - # Successfully submitted review. Request state: APPROVED - ``` - - - -If the user has created a request using CLI, the role will be assumed once it has been approved, or they can assume the role using the Web UI. 
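To reduce the chance of a request going unnoticed, the requesting role can also suggest default reviewers. The sketch below extends the `devops` role from this guide with the `suggested_reviewers` field; verify the field against the role reference for your Teleport version:

```yaml
kind: role
version: v5
metadata:
  name: devops
spec:
  allow:
    request:
      roles: ['elevated-access']
      thresholds:
        - approve: 2
          deny: 1
      # Reviewers suggested by default when no --reviewers flag is given
      suggested_reviewers:
        - alice@example.com
        - ivan@example.com
```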
-
-## Troubleshooting
-
-### Certificate errors in self-hosted deployments
-
-You may get certificate errors if Teleport's Auth Service is missing an address in the server certificate:
-
-```txt
-authentication handshake failed: x509: cannot validate certificate for 127.0.0.1 because it doesn't contain any IP SANs
-```
-
-```txt
-x509: certificate is valid for *.teleport.cluster.local, teleport.cluster.local, not example.com
-```
-
-To fix the problem, update the Auth Service with a public address, and restart Teleport:
-
-```yaml
-auth_service:
-  public_addr: ['localhost:3025', 'example.com:3025']
-```
diff --git a/docs/pages/admin-guides/access-controls/guides/guides.mdx b/docs/pages/admin-guides/access-controls/guides/guides.mdx
deleted file mode 100644
index 05c27ca3b84aa..0000000000000
--- a/docs/pages/admin-guides/access-controls/guides/guides.mdx
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Cluster Access and RBAC
-description: How to configure access to specific resources in your infrastructure or your Teleport cluster as a whole.
-layout: tocless-doc
----
-
-Teleport gives you fine-grained control over who can access resources in your
-infrastructure as well as how they can access those resources. Once you have
-deployed a Teleport cluster, configure access controls to achieve the right
-security policies for your organization.
-
-(!docs/pages/includes/access-control-guides.mdx!)
diff --git a/docs/pages/admin-guides/access-controls/guides/headless.mdx b/docs/pages/admin-guides/access-controls/guides/headless.mdx
deleted file mode 100644
index 3500b087800ac..0000000000000
--- a/docs/pages/admin-guides/access-controls/guides/headless.mdx
+++ /dev/null
@@ -1,215 +0,0 @@
----
-title: Headless WebAuthn
-description: Headless WebAuthn
----
-
-Headless WebAuthn provides a secure way to authenticate with WebAuthn from a
-machine without access to a WebAuthn device.
This enables the use of WebAuthn
-in environments that could not otherwise support it.
-For example:
-
-- Logging into Teleport with WebAuthn from a remote dev box
-- Connecting to a Teleport SSH Service from a remote dev box with per-session MFA
-- Performing `tsh scp` from one Teleport SSH Service to another with per-session MFA
-- Logging into Teleport on a machine without a WebAuthn-compatible browser
-
-
-  Headless WebAuthn only supports the following `tsh` commands:
-
-  - `tsh ls`
-  - `tsh ssh`
-  - `tsh scp`
-
-  In the future, Headless WebAuthn will be extended to other `tsh` commands.
-
-
-## Prerequisites
-
-- A Teleport cluster with WebAuthn configured.
-  See the [Harden your Cluster Against IdP Compromises](./webauthn.mdx) guide.
-- WebAuthn hardware device, such as YubiKey.
-- Machines used for Headless WebAuthn activity must have the [Linux](../../../installation.mdx), [macOS](../../../installation.mdx) or [Windows](../../../installation.mdx) `tsh` binary installed.
-- Machines used to approve Headless WebAuthn requests must have a web browser with [WebAuthn support](https://developers.yubico.com/WebAuthn/WebAuthn_Browser_Support/) or the `tsh` binary installed.
-- Optional: Teleport Connect for [seamless Headless WebAuthn approval](#optional-teleport-connect).
-
-## Step 1/3. Configuration
-
-A Teleport cluster capable of WebAuthn is automatically capable of
-Headless WebAuthn without any additional configuration.
-
-Optional: make Headless WebAuthn the default auth connector - -To make Headless WebAuthn the default authentication method for your Teleport -Cluster, add `connector_name: headless` to your cluster configuration. - -Create a `cap.yaml` file or get the existing configuration using -`tctl get cluster_auth_preference`: - -```yaml -kind: cluster_auth_preference -version: v2 -metadata: - name: cluster-auth-preference -spec: - type: local - second_factors: ["webauthn"] - webauthn: - rp_id: example.com - connector_name: headless # headless by default -``` - -Update the configuration: - -```code -$ tctl create -f cap.yaml -# cluster auth preference has been updated -``` -
- -
-Alternative: disable Headless WebAuthn - -Headless WebAuthn is enabled automatically when WebAuthn is configured. If you -want to forbid Headless WebAuthn in your cluster, add `headless: false` to your -configuration. - -Create a `cap.yaml` file or get the existing configuration using -`tctl get cluster_auth_preference`: - -```yaml -kind: cluster_auth_preference -version: v2 -metadata: - name: cluster-auth-preference -spec: - type: local - second_factors: ["webauthn"] - webauthn: - rp_id: example.com - headless: false # disable Headless WebAuthn -``` - -Update the configuration: - -```code -$ tctl create -f cap.yaml -# cluster auth preference has been updated -``` - -
-
-## Step 2/3. Initiate Headless WebAuthn
-
-Run a headless `tsh` command with the `--headless` flag. This will initiate
-headless authentication, printing a URL and a `tsh` command.
-
-```code
-$ tsh ls --headless --proxy=proxy.example.com --user=alice
-# Complete headless authentication in your local web browser:
-#
-# https://proxy.example.com:3080/web/headless/86172f78-af7c-5935-a7c1-ed06b94f17dc
-#
-# or execute this command in your local terminal:
-#
-# tsh headless approve --user=alice --proxy=proxy.example.com 86172f78-af7c-5935-a7c1-ed06b94f17dc
-```
-
-## Step 3/3. Approve Headless WebAuthn
-
-To approve the headless authentication, click or copy+paste the URL printed by
-`tsh` in your local web browser. You will be prompted to approve the login with
-WebAuthn verification. Once approved, your initial `tsh --headless` command
-should continue as if you had logged in locally.
-
-Unlike a standard login session, headless sessions are only available for the
-lifetime of a single `tsh` request.
This means that for each `tsh --headless`
-command, you will need to go through the Headless WebAuthn flow:
-
-### Example: Listing SSH servers
-```code
-$ tsh ls --headless --proxy=proxy.example.com --user=alice
-# Complete headless authentication in your local web browser:
-#
-# https://proxy.example.com:3080/web/headless/86172f78-af7c-5935-a7c1-ed06b94f17dc
-#
-# or execute this command in your local terminal:
-#
-# tsh headless approve --user=alice --proxy=proxy.example.com 86172f78-af7c-5935-a7c1-ed06b94f17dc
-# # User approves through link
-# Node Name Address        Labels
-# --------- -------------- -----------
-# server01  127.0.0.1:3022 arch=x86_64
-```
-
-### Example: Initiating an SSH session
-```code
-$ tsh ssh --headless --proxy=proxy.example.com --user=alice alice@server01
-# Complete headless authentication in your local web browser:
-#
-# https://proxy.example.com:3080/web/headless/864cccd9-2425-46d9-a9f2-636387e66ebf
-#
-# or execute this command in your local terminal:
-#
-# tsh headless approve --user=alice --proxy=proxy.example.com 864cccd9-2425-46d9-a9f2-636387e66ebf
-# # User approves through link and an SSH terminal starts
-alice@server01 $
-```
-
-
-  The `--user` parameter specifies the Teleport user requesting the Headless WebAuthn activity.
-  If neither the `--user` parameter nor the corresponding environment variable is set, the OS user of the machine terminal is used.
-
-  The login username (the `--login` parameter, or the `login` in `login@hostname`) for `tsh ssh` commands is the user
-  to open an SSH session as. If no login username is set, the OS terminal username is used.
-  A Teleport user must have access to that login user on that server, or they will receive
-  an access denied message. A user may still receive an access denied message after their
-  Headless WebAuthn activity is approved, since the same access rights are granted or denied as if running from
-  your local terminal.
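If Headless WebAuthn is not your cluster's default connector, you can also select it explicitly for a single command via the `--auth` flag. This is a sketch; confirm the flag spelling against your `tsh` version:

```code
$ tsh ls --auth=headless --proxy=proxy.example.com --user=alice
```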
-
-## Optional: Teleport Connect
-
-Teleport Connect can also be used to approve Headless WebAuthn logins. Teleport
-Connect will automatically detect the Headless WebAuthn login attempt and allow
-you to approve or cancel the request.
-
-![Headless Confirmation](../../../../img/headless/confirmation.png)
-
-You will be prompted to tap your MFA key to complete the approval process.
-
-![Headless WebAuthn Approval](../../../../img/headless/approval.png)
-
-## Troubleshooting
-
-### "WARN: Failed to lock system memory for headless login: ..."
-
-When using Headless WebAuthn, `tsh` does not write private key and certificate data
-to disk (`~/.tsh`). Instead, `tsh` holds these secrets in memory for the duration of
-the request. Additionally, it will try to lock the process memory to further protect
-the secrets from being stolen by other users on a shared machine.
-
-Below are some of the specific warning messages you may run into and how to fix them:
-
-#### "operation not permitted" OR "cannot allocate memory"
-
-In order to lock the process memory, your OS user must have permission to lock
-the amount of memory needed. Use `ulimit -l` to check your OS user's current limit.
-The exact amount of memory needed may vary from system to system, so we recommend
-updating your ulimit to unlimited, either with `ulimit -l unlimited` or by adding
-the line `* hard memlock unlimited` to your `/etc/security/limits.conf`.
-
-#### "memory locking is not supported on non-linux operating systems"
-
-The `mlockall` syscall is only supported on Linux operating systems. This means
-that on other operating systems, the memory lock attempt will always fail and
-output the warning. We recommend only using Headless WebAuthn on Linux machines
-for the best level of security on shared machines.
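To confirm whether a raised limit took effect in your current shell before re-running the headless command, you can check it directly (Linux; a minimal sketch):

```code
# Print the max locked-memory limit for this shell session.
# A numeric value is in kilobytes; "unlimited" means no cap.
ulimit -l
```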
- -#### Disable mlock - -If the above solutions are not feasible in your environment, you can also disable -the memory locking requirement by setting the `--mlock` flag or `TELEPORT_MLOCK_MODE` -environment variable to `off` or `best_effort`. This is not recommended in production -environments on shared systems where a memory swap attack is possible. diff --git a/docs/pages/admin-guides/access-controls/guides/per-session-mfa.mdx b/docs/pages/admin-guides/access-controls/guides/per-session-mfa.mdx deleted file mode 100644 index d22683be7adcb..0000000000000 --- a/docs/pages/admin-guides/access-controls/guides/per-session-mfa.mdx +++ /dev/null @@ -1,219 +0,0 @@ ---- -title: Per-session MFA -description: Require MFA checks to initiate sessions. -videoBanner: j8Ze7HhjFGw ---- - -Teleport supports requiring additional multi-factor authentication checks -when starting new: - -- SSH connections (a single `tsh ssh` call, Web UI SSH session or Teleport Connect SSH session) -- Kubernetes sessions (a single `kubectl` call) -- Database sessions (a single `tsh db connect` call) -- Application sessions -- Desktop sessions - -This is an advanced security feature that protects users against compromises of -their on-disk Teleport certificates. - - - In addition to per-session MFA, enable login MFA in your SSO provider and/or - for all [local Teleport - users](../../../reference/access-controls/authentication.mdx) - to improve security. - - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) - -- (!docs/pages/includes/tctl.mdx!) -- [WebAuthn configured](webauthn.mdx) on this cluster -- Hardware device for multi-factor authentication, such as YubiKey or SoloKey -- A Web browser with [WebAuthn support]( - https://developers.yubico.com/WebAuthn/WebAuthn_Browser_Support/) (if using - SSH or desktop sessions from the Teleport Web UI). - - - -Teleport FIPS builds disable local users. 
To configure WebAuthn in order to use
-per-session MFA with FIPS builds, provide the following in your `teleport.yaml`:
-
-```yaml
-auth_service:
-  authentication:
-    local_auth: false
-    second_factors: ["webauthn"]
-    webauthn:
-      rp_id: teleport.example.com
-```
-
-
-
-## Configure per-session MFA
-
-Per-session MFA can be enforced cluster-wide or only for some specific roles.
-
-### Cluster-wide
-
-To enforce MFA checks for all roles, edit your cluster authentication
-configuration.
-
-Edit your `cluster_auth_preference` resource:
-
-```code
-$ tctl edit cap
-```
-
-Ensure that the resource contains the following content:
-
-```yaml
-kind: cluster_auth_preference
-metadata:
-  name: cluster-auth-preference
-spec:
-  require_session_mfa: true
-version: v2
-```
-
-Apply your changes by saving and closing the file in your editor.
-
-### Per role
-
-To enforce MFA checks for a specific role, update the role to contain:
-
-```yaml
-kind: role
-version: v7
-metadata:
-  name: example-role-with-mfa
-spec:
-  options:
-    # require per-session MFA for this role
-    require_session_mfa: true
-  allow:
-    ...
-  deny:
-    ...
-```
-
-Role-specific enforcement only applies when accessing resources matching a
-role's `allow` section.
-
-### Roles example
-
-Let's walk through an example of setting up per-session MFA checks for roles.
-
-Jerry is an engineer with access to the company infrastructure. The
-infrastructure is split into development and production environments. Security
-engineer Olga wants to enforce MFA checks for accessing production servers.
-Development servers don't require MFA checks, which reduces friction for engineers.
-
-Olga defines two Teleport roles: `access-dev` and `access-prod`:
-
-```yaml
-# access-dev.yaml
-kind: role
-version: v7
-metadata:
-  name: access-dev
-spec:
-  allow:
-    node_labels:
-      env: dev
-    logins:
-    - jerry
----
-# access-prod.yaml
-kind: role
-version: v7
-metadata:
-  name: access-prod
-spec:
-  options:
-    # require per-session MFA for production access
-    require_session_mfa: true
-  allow:
-    node_labels:
-      env: prod
-    logins:
-    - jerry
-  deny: {}
-```
-
-Olga then assigns both roles to all engineers, including Jerry.
-
-When Jerry logs into node `dev1.example.com` (with label `env: dev` as login `jerry`), nothing
-special happens:
-
-```code
-$ tsh ssh jerry@dev1.example.com
-
-# jerry@dev1.example.com >
-```
-
-But when Jerry logs into node `prod3.example.com` (with label `env: prod` as login `jerry`), he
-gets prompted for an MFA check:
-
-```code
-$ tsh ssh jerry@prod3.example.com
-# Tap any security key
-
-# jerry@prod3.example.com >
-```
-
-
-If you are using `tsh` in a constrained environment, you can tell it to use
-OTP by doing `tsh --mfa-mode=otp ssh prod3.example.com`.
-
-OTP can only be used with per-session MFA when using `tsh` or Teleport Connect to
-establish connections. A hardware MFA key is required for using per-session
-MFA with Teleport's Web UI.
-
-
-If per-session MFA was enabled cluster-wide, Jerry would be prompted for MFA
-even when logging into `dev1.example.com`.
-
-
-
-The Teleport Database Service supports per-connection MFA. When Jerry connects
-to the database `prod-mysql-instance` (with label `env: prod`), he gets prompted
-for an MFA check for each `tsh db connect` or `tsh proxy db` call:
-
-```code
-$ tsh db connect prod-mysql-instance
-# Tap any security key
-
-# Welcome to the MySQL monitor. Commands end with ; or \g.
-# Your MySQL connection id is 10002
-# Server version: 8.0.0-Teleport (Ubuntu)
-#
-# Copyright (c) 2000, 2021, Oracle and/or its affiliates.
-# -# Oracle is a registered trademark of Oracle Corporation and/or its -# affiliates. Other names may be trademarks of their respective -# owners. -# -# Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. -# -# mysql> -``` - - - -## Limitations - -Current limitations for this feature are: - -- For SSH connections besides the Web UI, the `tsh` or Teleport Connect client must be used for per-session MFA. - (The OpenSSH `ssh` client does not work with per-session MFA). -- Only `kubectl` supports per-session WebAuthn authentication for Kubernetes. -- For desktop access, only WebAuthn devices are supported. -- When accessing a - [multi-port](../../../enroll-resources/application-access/guides/tcp.mdx#configuring-access-to-multiple-ports) - TCP application through [VNet](../../../connect-your-client/vnet.mdx), the first connection over - each port triggers an MFA check. - -## Next steps - -- [Require MFA for administrative actions](./mfa-for-admin-actions.mdx) diff --git a/docs/pages/admin-guides/access-controls/guides/webauthn.mdx b/docs/pages/admin-guides/access-controls/guides/webauthn.mdx deleted file mode 100644 index d7683e3249657..0000000000000 --- a/docs/pages/admin-guides/access-controls/guides/webauthn.mdx +++ /dev/null @@ -1,494 +0,0 @@ ---- -title: Harden your Cluster Against IdP Compromises -description: Implement cluster-wide hardening measures. -videoBanner: vQgKkD4ZRDU ---- - -This guide aims to help you fortify your identity infrastructure and mitigate the risks associated with IdP weaknesses. - -An IdP compromise occurs when an attacker gains unauthorized access to your identity management -system, potentially allowing them to impersonate legitimate users, escalate privileges, or access -sensitive information. This can happen through various means, such as exploiting software vulnerabilities, -stealing credentials, or social engineering attacks. 
- -While many organizations have implemented basic security measures like single sign-on (SSO) and multi-factor authentication (MFA), -these alone may not be sufficient to protect against sophisticated attacks targeting your IdP. -Attackers are constantly evolving their techniques, and traditional security measures may have limitations -or vulnerabilities that can be exploited. - -![IdP threat vector tree](../../../../img/access-controls/idp-graph.png) - -To enhance your defense against IdP compromises, we recommend implementing the following comprehensive security measures. - -## Set up cluster-wide WebAuthn -Implement strong, phishing-resistant authentication across -your entire infrastructure using WebAuthn standards. WebAuthn, a W3C standard and part of FIDO2, -enables public-key cryptography for web authentication. Teleport supports WebAuthn as a multi-factor -for logging into Teleport (via tsh login or Web UI) and accessing SSH nodes or Kubernetes clusters. -It's compatible with hardware keys (e.g., YubiKeys, SoloKeys) and biometric authenticators like Touch ID and Windows Hello. - -### Prerequisites - -- A running Teleport cluster or Teleport Cloud, version 16 or later. If you want to get started with Teleport, -[sign up](https://goteleport.com/signup) for a free trial. - -- The `tctl` admin tool and `tsh` client tool. - - Visit [Installation](../../../installation.mdx) for instructions on downloading `tctl` and `tsh`. - -- WebAuthn hardware device, such as YubiKey or SoloKey - -- A Web browser with [WebAuthn support](https://developers.yubico.com/WebAuthn/WebAuthn_Browser_Support/) - -- (!docs/pages/includes/tctl.mdx!) - -### Step 1/3. Enable WebAuthn support - -WebAuthn is disabled by default. 
To enable WebAuthn support, update your -Teleport configuration as below: - - Edit the `cluster_auth_preference` resource: - - ```code - $ tctl edit cap - ``` - - Update the `cluster_auth_preference` definition to include the following content: - - ```yaml - kind: cluster_auth_preference - version: v2 - metadata: - name: cluster-auth-preference - spec: - type: local - # To enable WebAuthn support, include "webauthn" as a second factor method. - second_factors: ["webauthn"] - webauthn: - # Required, replace with proxy web address (example.com, example.teleport.sh). - # rp_id is the public domain of the Teleport Proxy Service, *excluding* protocol - # (https://) and port number. - rp_id: example.com - # Optional, attestation_allowed_cas is an optional allow list - # of certificate authorities. - attestation_allowed_cas: - # Entries can be paths to certificate files: - - "/path/to/allowed_ca.pem" - # Entries can also be inline certificates: - - | - -----BEGIN CERTIFICATE----- - ... - -----END CERTIFICATE----- - # Optional, attestation_denied_cas is an optional deny list - # of certificate authorities. - attestation_denied_cas: - # Entries can be paths to certificate files: - - "/path/to/denied_ca.pem" - # Entries can also be inline certificates: - - | - -----BEGIN CERTIFICATE----- - ... - -----END CERTIFICATE----- - ``` - - Save and exit the file. `tctl` will update the remote definition: - - ```text - cluster auth preference has been updated - ``` - -### `webauthn` fields definitions - -`rp_id` is the public domain of the Teleport Proxy Service, *excluding* protocol - (`https://`) and port number. - -`attestation_allowed_cas` is an optional allow list of certificate authorities -(as local file paths or inline PEM certificate strings) for -[device verification]( -https://developers.yubico.com/WebAuthn/WebAuthn_Developer_Guide/Attestation.html). - -This field allows you to restrict which device models and vendors you trust. 
-Devices outside of the list will be rejected during registration. By default all -devices are allowed. If you must use attestation, consider using -`attestation_denied_cas` to forbid troublesome devices instead. - -`attestation_denied_cas` is an optional deny list of certificate authorities (as -local file paths or inline PEM certificate strings) for [device verification]( -https://developers.yubico.com/WebAuthn/WebAuthn_Developer_Guide/Attestation.html). - -This field allows you to forbid specific device models and vendors, while -allowing all others (provided they clear `attestation_allowed_cas` as well). -Devices within this list will be rejected during registration. By default no -devices are forbidden. - -### Step 2/3. Register WebAuthn devices as a user - -A user can register multiple WebAuthn devices using `tsh`: - -```code -$ tsh mfa add -# Choose device type [TOTP, WEBAUTHN]: webauthn -# Enter device name: desktop yubikey -# Tap any *registered* security key or enter a code from a *registered* OTP device: -# Tap your *new* security key -# MFA device "desktop yubikey" added. -``` - -### Step 3/3. Log in using WebAuthn - -Once a WebAuthn device is registered, the user will be prompted for it on login: - -```code -$ tsh login --proxy=example.teleport.sh -# Enter password for Teleport user codingllama: -# Tap any security key or enter a code from a OTP device: -# > Profile URL: https://example.teleport.sh -# Logged in as: codingllama -# Cluster: example.teleport.sh -# Roles: access, editor, reviewer -# Logins: codingllama -# Kubernetes: enabled -# Valid until: 2021-10-04 23:32:29 -0700 PDT [valid for 12h0m0s] -# Extensions: permit-agent-forwarding, permit-port-forwarding, permit-pty -``` - - - WebAuthn for logging in to Teleport is only required for [local users]( - ../../../reference/access-controls/authentication.mdx#local-no-authentication-connector). - SSO users should configure multi-factor authentication in their SSO provider. 
- - -## Configure per-session MFA -Ensure that multi-factor authentication is required for each session, not just at initial login, -to maintain continuous security. Teleport's per-session MFA enhances security by protecting -against compromised on-disk certificates. It requires additional MFA checks when initiating new SSH, Kubernetes, database, or desktop sessions. - -Teleport supports requiring additional multi-factor authentication checks -when starting new: - -- SSH connections (a single `tsh ssh` call, Web UI SSH session or Teleport Connect SSH session) -- Kubernetes sessions (a single `kubectl` call) -- Database sessions (a single `tsh db connect` call) -- Application sessions -- Desktop sessions - - - In addition to per-session MFA, enable login MFA in your SSO provider and/or - for all [local Teleport - users](../../../reference/access-controls/authentication.mdx) - to improve security. - - -To enforce MFA checks for all roles, edit your cluster authentication -configuration: - -Edit your `cluster_auth_preference` resource: - -```code -$ tctl edit cap -``` - -Ensure that the resource contains the following content: - -```yaml -kind: cluster_auth_preference -metadata: - name: cluster-auth-preference -spec: - require_session_mfa: true -version: v2 -``` - -Save and close the file in your editor to apply your changes. - - -### Per role - -To enforce MFA checks for a specific role, update the role to contain: - -```yaml -kind: role -version: v7 -metadata: - name: example-role-with-mfa -spec: - options: - # require per-session MFA for this role - require_session_mfa: true - allow: - ... - deny: - ... -``` - -## Implement cluster-wide Device Trust -Develop a system to verify and manage trusted devices across your organization, reducing the risk -of unauthorized access from unknown or compromised devices. 
Device Trust adds an extra layer of security by requiring the use of trusted devices -for accessing protected resources, complementing user identity and role enforcement. This can be -configured cluster-wide or via RBAC. Supported resources include apps (role-based only), SSH nodes, -databases, Kubernetes clusters, and first MFA device enrollment. The latter helps prevent auto-provisioning of new users through compromised IdPs. - - - - We do not currently support Machine ID and Device Trust. Requiring Device Trust - cluster-wide or for roles impersonated by Machine ID will prevent credentials - produced by Machine ID from being used to connect to resources. - - As a work-around, configure Device Trust enforcement on a role-by-role basis and - ensure that it is not required for roles that you will impersonate using Machine ID. - - -### Prerequisites - -(!docs/pages/includes/device-trust/prereqs.mdx!) - -The `tctl` tool is used to manage the device inventory. A device admin is -responsible for managing devices, adding new devices to the inventory and -removing devices that are no longer in use. - - - Users with the preset `editor` or `device-admin` role - can register and enroll their device in a single step with the following command: - ```code - $ tsh device enroll --current-device - ``` - - -### Step 1/3. Register a trusted device - -Before you can enroll the device, you need to register it. To register -a device, you first need to determine its serial number. - -Retrieve device serial number with `tsh` (must be run on the device you want to register): -```code -$ tsh device asset-tag -(=devicetrust.asset_tag=) -``` -
-Manually retrieving device serial - - - The serial number is visible under Apple menu -> "About This Mac" -> "Serial number". - - - - Windows and Linux devices can have multiple serial numbers depending on the - configuration made by the manufacturer. - - Teleport will pick the first available value from the following: - - System asset tag - - System serial number - - Baseboard serial number - - To find the value chosen by Teleport, run the following command: - - ```code - $ tsh device asset-tag - (=devicetrust.asset_tag=) - ``` - - -
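
The "first available value" selection described above can be sketched as a simple helper. This is only an illustration of the documented order (system asset tag, then system serial number, then baseboard serial number), not Teleport's actual implementation, and the function name is hypothetical:

```python
def pick_asset_tag(system_asset_tag, system_serial, baseboard_serial):
    # Model of the documented order: the first non-empty value wins.
    for value in (system_asset_tag, system_serial, baseboard_serial):
        if value:
            return value
    return None

# A machine with no asset tag falls through to the system serial number.
print(pick_asset_tag("", "C02ABC123", "BB-42"))  # C02ABC123
```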
- -Replace - -with the serial number of the device you wish to enroll, and - with your operating system. Run the -`tctl devices add` command: - -```code -$ tctl devices add --os='' --asset-tag='' -Device /macos added to the inventory -``` - -Use `tctl` to check that the device has been registered: - -```code -$ tctl devices ls -Asset Tag OS Enroll Status Device ID ------------- ----- ------------- ------------------------------------ -(=devicetrust.asset_tag=) macOS not enrolled (=devicetrust.device_id=) -``` - -### Step 2/3. Create a device enrollment token - -A registered device becomes a trusted device after it goes through the -enrollment ceremony. To enroll the device, a device enrollment token is -necessary. The token is created by a device admin and sent to the person -performing the enrollment off-band (for example, via a corporate chat). - -To create an enrollment token run the command below, where `--asset-tag` is -the serial number of the device we want to enroll: - -```code -$ tctl devices enroll --asset-tag="(=devicetrust.asset_tag=)" -Run the command below on device "(=devicetrust.asset_tag=)" to enroll it: -tsh device enroll --token=(=devicetrust.enroll_token=) -``` - -### Step 3/3. 
Enroll a trusted device - -To perform the enrollment ceremony, using the device specified above, type the -command printed by `tctl devices enroll`: - -```code -$ tsh device enroll --token=(=devicetrust.enroll_token=) -Device "(=devicetrust.asset_tag=)"/macOS enrolled - -$ tsh logout -$ tsh login --proxy=(=clusterDefaults.clusterName=) --user=(=clusterDefaults.username=) # fetch new certificates -Enter password for Teleport user (=clusterDefaults.username=): -Tap any security key -Detected security key tap -> Profile URL: (=clusterDefaults.clusterName=):443 - Logged in as: (=clusterDefaults.username=) - Cluster: (=clusterDefaults.clusterName=) - Roles: access, editor - Logins: (=clusterDefaults.username=) - Kubernetes: enabled - Valid until: 2023-06-23 02:47:05 -0300 -03 [valid for 12h0m0s] - Extensions: teleport-device-asset-tag, teleport-device-credential-id, teleport-device-id -``` -The presence of the `teleport-device-*` extensions shows that the device was -successfully enrolled and authenticated. The device above is now a trusted device. - -### Auto-Enrollment - -Distributing enrollment tokens to many users can be challenging. To address that, -Teleport supports auto-enrollment. When enabled, auto-enrollment automatically -enrolls the user's device in their next Teleport (`tsh`) login. - -For auto-enrollment to work, the following conditions must be met: -- A device must be registered. Registration may be [manual](#implement-cluster-wide-device-trust) or performed using an -integration, like the [Jamf Pro integration](../device-trust/jamf-integration.mdx). -- Auto-enrollment must be enabled in the cluster setting. - -### Step 1/2. Enable auto-enrollment in your cluster settings - -Modify the dynamic config resource using `tctl edit cluster_auth_preference`: - -```diff -kind: cluster_auth_preference -version: v2 -metadata: - name: cluster-auth-preference -spec: - # ... - device_trust: - mode: "required" -+ auto_enroll: true -``` - -### Step 2/2. 
Log out and back in - -Once enabled, users with their device registered in Teleport will have their -device enrolled to Teleport in their next login. - -```code -$ tsh logout -All users logged out. -$ tsh login --proxy=(=clusterDefaults.clusterName=) --user=(=clusterDefaults.username=) -Enter password for Teleport user (=clusterDefaults.username=): -Tap any security key -Detected security key tap -> Profile URL: (=clusterDefaults.clusterName=):443 - Logged in as: (=clusterDefaults.username=) - Cluster: (=clusterDefaults.clusterName=) - Roles: access, editor - Logins: (=clusterDefaults.username=) - Kubernetes: enabled - Valid until: 2023-06-23 02:47:05 -0300 -03 [valid for 12h0m0s] - Extensions: teleport-device-asset-tag, teleport-device-credential-id, teleport-device-id -``` - -The presence of the `teleport-device-*` extensions shows that the device was -successfully enrolled and authenticated. - -## Require MFA for administrative actions - -Add an extra layer of security for sensitive administrative operations by requiring multi-factor authentication -for these high-privilege actions. Teleport enforces additional MFA verification for administrative -actions across all clients (tctl, tsh, Web UI, and Connect). This feature adds an extra security -layer by re-verifying user identity immediately before any admin action, mitigating risks from compromised admin accounts. - -By adopting these advanced security measures, you can create a robust defense against IdP compromises and significantly reduce your organization's attack surface. -In the following sections, we'll dive deeper into each of these recommendations, providing step-by-step guidance on implementation and best practices. - - - When MFA for administrative actions is enabled, user certificates produced - with `tctl auth sign` will no longer be suitable for automation due to the - additional MFA checks. 
-
-  We recommend using Machine ID to issue certificates for automated workflows,
-  which uses role impersonation that is not subject to MFA checks.
-
-  Certificates produced with `tctl auth sign` directly on an Auth Service instance using the super-admin
-  role are not subject to MFA checks to support legacy self-hosted setups.
-
-
-### Prerequisites
-
-- (!docs/pages/includes/tctl.mdx!)
-- [WebAuthn configured](#set-up-cluster-wide-webauthn) on this cluster
-- Multi-factor hardware device, such as YubiKey or SoloKey
-- A Web browser with [WebAuthn support](https://developers.yubico.com/WebAuthn/WebAuthn_Browser_Support/) (if using
-  SSH or desktop sessions from the Teleport Web UI).
-
-MFA for administrative actions is automatically enforced for clusters where WebAuthn is the only form of multi-factor allowed.
-
-
-  In a future major version, Teleport may enforce MFA for administrative actions
-  for a wider range of cluster configurations.
-
-
-Examples of administrative actions include, but are not limited to:
-
-- Resetting or recovering user accounts
-- Inviting new users
-- Updating cluster configuration resources
-- Modifying access management resources
-- Approving Access Requests
-- Generating new join tokens
-- Impersonation
-- Creating new bots for Machine ID
-
-This is an advanced security feature that protects users against compromises of
-their on-disk Teleport certificates.
-
-### Step 1/2. Edit the resource
-
-  Edit the `cluster_auth_preference` resource:
-
-  ```code
-  $ tctl edit cap
-  ```
-
-Update the `cluster_auth_preference` definition to include the following content:
-
-  ```yaml
-  kind: cluster_auth_preference
-  version: v2
-  metadata:
-    name: cluster-auth-preference
-  spec:
-    type: local
-    second_factors: ["webauthn"]
-    webauthn:
-      rp_id: example.com
-  ```
-
-  ### Step 2/2.
Save and exit the file
-
-  The `tctl` command will update the remote definition:
-
-  ```text
-  cluster auth preference has been updated
-  ```
-
-## Next steps
-For additional cluster hardening measures, see:
-
-- [Passwordless Authentication](./passwordless.mdx): Provides passwordless and usernameless authentication.
-- [Locking](./locking.mdx): Lock access to active user sessions or hosts.
-- [Moderated Sessions](./joining-sessions.mdx): Require session auditors and allow fine-grained live session access.
-- [Hardware Key Support](./hardware-key-support.mdx): Enforce the use of hardware-based private keys.
diff --git a/docs/pages/admin-guides/access-controls/idps/idps.mdx b/docs/pages/admin-guides/access-controls/idps/idps.mdx
deleted file mode 100644
index 2c4daa37a4ae8..0000000000000
--- a/docs/pages/admin-guides/access-controls/idps/idps.mdx
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Configure Teleport as an identity provider
-description: How to set up Teleport's identity provider functionality
----
-
-Users can authenticate to both internal and external applications
-through the use of a built-in identity provider in Teleport.
-
-- [SAML Guide](saml-guide.mdx): A guide for setting up an example application to integrate with the SAML identity provider.
-- [SAML Attribute Mapping](saml-attribute-mapping.mdx): A reference on how attribute mapping works in Teleport and how to
-use it to assert custom user attribute names and values in a SAML response.
-- [Use Teleport's SAML Provider to authenticate with Grafana](saml-grafana.mdx): Configure Grafana to authenticate using Teleport identities.
-- [SAML Reference](../../../reference/access-controls/saml-idp.mdx): A reference for Teleport's SAML identity provider.
diff --git a/docs/pages/admin-guides/access-controls/idps/saml-guide.mdx b/docs/pages/admin-guides/access-controls/idps/saml-guide.mdx
deleted file mode 100644
index 6efe5bec03768..0000000000000
--- a/docs/pages/admin-guides/access-controls/idps/saml-guide.mdx
+++ /dev/null
@@ -1,232 +0,0 @@
----
-title: Using Teleport as a SAML identity provider
-description: How to configure and use Teleport as a SAML identity provider.
-h1: Teleport as a SAML identity provider
----
-
-This guide details an example of how to use Teleport as a SAML identity provider
-(IdP). You can set up the Teleport SAML IdP to enable Teleport users to
-authenticate to external services through Teleport.
-
-## Prerequisites
-
-(!docs/pages/includes/commercial-prereqs-tabs.mdx!)
-
-- (!docs/pages/includes/tctl.mdx!)
-- If you're new to SAML, consider reviewing our [SAML Identity Provider
-  Reference](../../../reference/access-controls/saml-idp.mdx) before proceeding.
-- A user with permission to create service provider resources. The preset `editor` role has this permission.
-- A SAML application (also known as a SAML service provider or SP) for testing. For this guide, we'll be using
-[RSA Simple Test Service Provider](https://sptest.iamshowcase.com/) - a free test service that
-lets us test the Teleport SAML IdP. The test service has a protected page, which can be accessed only after a
-user is federated to the site with a valid SAML assertion.
-![iamshowcase protected page](../../../../img/access-controls/saml-idp/iamshowcase-protected-page.png)
-
-## Step 1/3. Add a service provider to Teleport
-
-To add a service provider to Teleport, you must configure the service provider's
-metadata. This can be configured either by providing the **Entity ID** and **ACS URL** values of the
-service provider or by providing its entity descriptor value (also known as a metadata file, which is an XML file).
-
-Below we'll show both configuration options.
-
-First, in the Web UI, under **Access Management**, click the **Enroll New Resource** menu.
-In the search box, enter "saml", which will show the SAML application
-integration tile. Click the tile.
-
-![Enroll SAML app](../../../../img/access-controls/saml-idp/enroll-saml-app.png)
-
-The first configuration step, **Configure Service Provider with Teleport's Identity Provider Metadata**,
-shows the Teleport SAML IdP metadata values. For this guide, you can move to the next step by clicking the **Next** button,
-which takes you to the **Add Service Provider To Teleport** step.
-
-### Option 1: Configure with Entity ID and ACS URL
-
-With this option, the minimum configuration values required to add a service provider are:
-1. **Entity ID:** The SAML metadata value or an endpoint of the service provider.
-1. **ACS URL:** The endpoint where users will be redirected after SAML authentication. The ACS URL
-is also referred to as the SAML SSO URL.
-
-To configure Simple Test Service Provider, the values you need to provide are the following:
-- **App Name:** `iamshowcase`
-- **SP Entity ID / Audience URI:** `iamshowcase`
-- **ACS URL / SP SSO URL:** `https://sptest.iamshowcase.com/acs`
-
-Click the **Finish** button; the `iamshowcase` app is now added to Teleport.
-Reference `tctl` based configuration - -The following `saml_idp_service_provider` spec is a reference for adding Simple Test -Service Provider to Teleport: -```yaml -kind: saml_idp_service_provider -metadata: - # The friendly name of the service provider. This is used to manage the - # service provider as well as in identity provider initiated SSO. - name: iamshowcase -spec: - # entity_id is the metadata value or an endpoint of service provider - # that serves entity descriptor, aka SP metadata. - entity_id: iamshowcase - # acs_url is the endpoint where users will be redirected after - # SAML authentication. - acs_url: https://sptest.iamshowcase.com/acs -version: v1 -``` - -Add the spec to Teleport using `tctl`: - -```code -$ tctl create iamshowcase.yaml -# SAML IdP service provider 'iamshowcase' has been created. -``` -
-
-
-With this configuration method, Teleport first tries to fetch an entity descriptor by
-querying the `entity_id` endpoint. If an entity descriptor is not found at that endpoint,
-Teleport will generate a new entity descriptor with the given `entity_id` and `acs_url` values.
-
-
-### Option 2: Configure with Entity Descriptor file
-
-If the service provider offers an entity descriptor file to download, or you
-need more control over the entity descriptor, this is the recommended way to add a service provider
-to Teleport.
-
-With this option, you provide the service provider's entity descriptor file, which has all the details
-required to configure the service provider metadata.
-
-![add entity descriptor](../../../../img/access-controls/saml-idp/add-entity-descriptor.png)
-
-In the **Add Service Provider To Teleport** page, provide a SAML service provider name (`iamshowcase`).
-Now click the **+ Add Entity Descriptor (optional)** button, which will expand the entity descriptor editor.
-Copy the Simple Test Service Provider metadata file, which is available at the URL `https://sptest.iamshowcase.com/testsp_metadata.xml`,
-and paste it into the entity descriptor editor in the Teleport Web UI.
-
-Click the **Finish** button; the `iamshowcase` app is now added to Teleport.
-Reference `tctl` based configuration
-
-First download the service provider metadata from Simple Test Service Provider as `iamshowcase.xml`:
-
-```code
-$ curl -o iamshowcase.xml https://sptest.iamshowcase.com/testsp_metadata.xml
-```
-
-Using the template below, create a file called `iamshowcase.yaml`. Assign the
-metadata you just downloaded to the `entity_descriptor` field in the
-`saml_idp_service_provider` object:
-
-```yaml
-kind: saml_idp_service_provider
-metadata:
-  # The friendly name of the service provider. This is used to manage the
-  # service provider as well as in identity provider initiated SSO.
-  name: iamshowcase
-spec:
-  # The entity_descriptor is the service provider XML.
-  entity_descriptor: |
-```
-
-If an `entity_descriptor` is provided, its content takes preference over values provided in `entity_id` and `acs_url`.
-
-Teleport only tries to fetch or generate an entity descriptor when a service provider is created for the first time.
-Subsequent updates require an entity descriptor to be present in the service provider spec. As such, when updating
-a service provider, you should first fetch the spec that is stored in Teleport and only then edit the configuration.
-```code
-# get service provider spec
-$ tctl get saml_idp_service_provider/ > service-provider.yml
-```
-
-
-## Step 2/3. Configure the service provider to recognize Teleport's SAML IdP
-
-This step varies from service provider to service provider. Some service providers may ask you to
-provide the Teleport SAML IdP entity ID, SSO URL, and X.509 certificate. Others may ask you to upload
-a Teleport SAML IdP metadata file.
-
-You will find these values in the Teleport Web UI under the
-**Configure Service Provider with Teleport's Identity Provider Metadata** UI, which is the first step
-shown in the SAML app enrollment flow.
-
-![Teleport IdP metadata](../../../../img/access-controls/saml-idp/teleport-idp-metadata.png)
-
-In the case of Simple Test Service Provider, which this guide is based on, the sample app is designed to grant access to its protected page
-for any well-formatted, IdP-federated SAML assertion.
-
-As such, when you click the **Finish** button in the previous step, the protected page of the Simple Test Service Provider is
-already available to access under the resources page.
-
-## Step 3/3. Verify access to the iamshowcase protected page
-
-To verify everything works, navigate to the **Resources** page in the Teleport Web UI.
-
-![Access app](../../../../img/access-controls/saml-idp/access-app.png)
-
-The "iamshowcase" app will now appear under the resources tile. Inside this tile,
-click the **Login** button, which will forward you to the iamshowcase protected page.
-
-![Protected page access](../../../../img/access-controls/saml-idp/protected-page-access.png)
-
-This page shows Teleport user details, along with other attributes such as roles, that are
-federated by the Teleport SAML IdP.
-
-This demonstrates adding a service provider to Teleport with a working IdP-initiated SSO login flow.
-
-## Recommended: Creating a dedicated role to manage service providers
-
-For production, we recommend creating a dedicated role to manage service providers.
-
-To create a dedicated role, first ensure you are logged into Teleport as a user that has
-permissions to read and modify `saml_idp_service_provider` objects.
The default `editor` role
-has access to this already, but in case you are using a more customized configuration,
-create a file called `sp-manager.yaml` with the following contents:
-
-```yaml
-kind: role
-metadata:
-  name: sp-manager
-spec:
-  allow:
-    rules:
-    - resources:
-      - saml_idp_service_provider
-      verbs:
-      - list
-      - create
-      - read
-      - update
-      - delete
-version: v7
-```
-
-Create the role with `tctl`:
-
-```code
-$ tctl create sp-manager.yaml
-role 'sp-manager' has been created
-```
-
-(!docs/pages/includes/create-role-using-web.mdx!)
-
-Next, add the role to your user.
-
-(!docs/pages/includes/add-role-to-user.mdx role="sp-manager"!)
-
-## Next steps
-- Configure [SAML Attribute Mapping](./saml-attribute-mapping.mdx).
diff --git a/docs/pages/admin-guides/access-controls/login-rules/guide.mdx b/docs/pages/admin-guides/access-controls/login-rules/guide.mdx
deleted file mode 100644
index ce7662b381b65..0000000000000
--- a/docs/pages/admin-guides/access-controls/login-rules/guide.mdx
+++ /dev/null
@@ -1,287 +0,0 @@
----
-title: Set Up Login Rules
-description: Set up Login Rules to transform user traits
----
-
-This guide will walk you through the process of writing, testing, and adding the
-first Login Rule to your Teleport cluster.
-
-## Prerequisites
-
-(!docs/pages/includes/commercial-prereqs-tabs.mdx!)
-
-- (!docs/pages/includes/tctl.mdx!)
-
-Before you get started, you’ll need a running Teleport Enterprise or Cloud
-cluster on version `11.3.1` or greater.
-
-Login Rules only operate on SSO logins, so make sure you have
-configured an OIDC, SAML, or GitHub connector before you begin.
-Check the [Single Sign-On](../sso/sso.mdx) docs to learn how to set this up.
-
-## Step 1/5. Configure RBAC
-
-First, ensure you are logged into Teleport as a user that has permissions
-to read and modify `login_rule` resources.
The preset `editor` role -has access to this already, but in case you are using a more customized configuration, -create a role called `loginrule-manager.yaml` with the following contents: - -```yaml -kind: role -metadata: - name: loginrule-manager -spec: - allow: - rules: - - resources: [login_rule] - verbs: [list, create, read, update, delete] -version: v7 -``` - -Create the role with `tctl`: - -```code -$ tctl create loginrule-manager.yaml -role 'loginrule-manager' has been created -``` - -(!docs/pages/includes/create-role-using-web.mdx!) - -(!docs/pages/includes/add-role-to-user.mdx role="loginrule-manager" !) - -## Step 2/5. Draft your Login Rule resource - -The following example will give all users a new `logins` trait set to the value -of their current `username` trait converted to lowercase. -Copy this example rule to a file called `my_rule.yaml` to continue with the -guide. - -```yaml -# my_rule.yaml -kind: login_rule -version: v1 -metadata: - # Each Login Rule must have a unique name within the cluster. - name: my_rule - - # expires is optional and usually should not be set for deployed login - # rules, but it can be useful to set an expiry a short time in the future - # while testing new Login Rules to prevent potentially locking yourself out of - # your Teleport cluster. - # expires: "2023-01-31T00:00:00-00:00" -spec: - # priority orders the evaluation of Login Rules if multiple are present in the - # cluster, lower priorities are evaluated first. - priority: 0 - - # traits_expression is a predicate expression which will be evaluated to - # determine the final traits for each SSO user during login. - # - # This example expression sets the "logins" trait to the incoming "username" - # trait converted to lowercase. - traits_expression: 'external.put("logins", strings.lower(external["username"]))' -``` - -Each Login Rule resource must have either a `traits_map` or `traits_expression` -field. -In this guide we will use an example `traits_expression`. 
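
To make the data flow concrete, here is a rough Python model of what the example expression does to a trait map. This is only an illustration — Teleport evaluates the real expression in its own predicate language, and the helper functions below are hypothetical stand-ins for `external.put` and `strings.lower`:

```python
# Traits are maps from trait names to lists of string values.

def lower_all(values):
    # Models strings.lower applied to a set of trait values:
    # lowercases every value in the set.
    return [v.lower() for v in values]

def put(traits, key, values):
    # Models external.put: returns the trait map with `key` set to `values`,
    # leaving all other traits untouched.
    out = dict(traits)
    out[key] = values
    return out

# Example incoming SSO traits for a user.
incoming = {"username": ["Alice"], "groups": ["devs", "dbs"]}

# external.put("logins", strings.lower(external["username"]))
result = put(incoming, "logins", lower_all(incoming["username"]))
print(result["logins"])  # ['alice']
```

The other traits (`groups` in this sketch) pass through unchanged; only the `logins` trait is added.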
- -The `traits_expression` is a form of script which will be evaluated by your -Teleport cluster at runtime to determine the traits for each SSO user who logs -in. -The expression can access the incoming traits for the user via the `external` -variable. -The `external` variable is a dictionary which maps trait keys to sets of values -for that trait. - -## Step 3/5. Test the Login Rule - -The `tctl login_rule test` command can be used to experiment with new Login -Rules to check their syntax and see exactly how they will operate on example -incoming traits. - -Fetch your user's current traits and store them in `input.json`, then test your -new Login Rule with that input. - -```code -$ tctl get --format json users/ | jq 'first.spec.traits' > input.json -$ tctl login_rule test --resource-file my_rule.yaml input.json -access: -- staging -groups: -- dbs -- devs -logins: -- alice -``` - -This script will catch any syntax errors in your expressions. -Make sure that all expected traits are present in the output. - -## Step 4/5. Create the Login Rule - -Use the following command to create the Login Rule in your cluster: - -```code -$ tctl create my_rule.yaml -``` - -## Step 5/5. Try it out - -As a final step, log out of your cluster, then log in again and make sure your user received the -expected traits and roles. -You can check the traits and roles with the following command: - -```code -$ tctl get --format json users/ | jq '{traits: first.spec.traits, roles: first.spec.roles}' -{ - "traits": { - "access": [ - "staging" - ], - "groups": [ - "dbs", - "devs" - ], - "logins": [ - "alice" - ] - }, - "roles": [ - "access", - "editor", - "auditor" - ] -} -``` - -## Troubleshooting - -The [`tctl sso test`](../../../reference/cli/tctl.mdx) command can be used to -debug SSO logins and see exactly which traits are being sent by your SSO -provider and how they are being mapped by your Login Rules. - -`tctl sso test` expects a connector spec. 
Run the following command to debug -with a connector currently installed in your cluster, replacing - with the name of the SSO connector you registered with -Teleport: - -```code -$ tctl get connector/ --with-secrets | tctl sso test -``` - -## Next steps - -To learn more about the Login Rule expression syntax, check out the -[Login Rule Reference](../../../reference/access-controls/login-rules.mdx) page. - -Learn about the `tctl login_rule test` command by running the help command or -checking the [reference page](../../../reference/cli/tctl.mdx). -```code -$ tctl help login_rule test -``` - -The following `tctl` resource commands are helpful for viewing and modifying the -login rules currently installed in your cluster. - -Command | Description -------- | ----------- -`tctl get login_rules` | Show all Login Rules installed in your cluster. -`tctl get login_rule/` | Get a specific installed Login Rule. -`tctl create login_rule.yaml` | Install a new Login Rule. -`tctl create -f login_rule.yaml` | Overwrite an existing Login Rule. -`tctl rm login_rule/` | Delete a Login Rule. 
- -## Example Login Rules - -### Set a trait to a static list of values defined per group - -```yaml -kind: login_rule -version: v1 -metadata: - name: example -spec: - priority: 0 - traits_expression: | - external.put("allow-env", - choose( - option(external.group.contains("dev"), set("dev", "staging")), - option(external.group.contains("qa"), set("qa", "staging")), - option(external.group.contains("admin"), set("dev", "qa", "staging", "prod")), - option(true, set()), - )) -``` - -### Use only specific traits provided by the OIDC/SAML provider - -To only keep the `groups` and `email` traits, with their original values: - -```yaml -kind: login_rule -version: v1 -metadata: - name: example -spec: - priority: 0 - traits_map: - groups: - - external.groups - email: - - external.email -``` - -### Remove a specific trait - -To remove a specific trait and keep the rest: - -```yaml -kind: login_rule -version: v1 -metadata: - name: example -spec: - priority: 0 - traits_expression: | - external.remove("big-trait") -``` - -### Extend a specific trait with extra values - -```yaml -kind: login_rule -version: v1 -metadata: - name: example -spec: - priority: 0 - traits_expression: | - external.add_values("logins", "ubuntu", "ec2-user") -``` - -### Use the output of one Login Rule in another rule - -```yaml -kind: login_rule -version: v1 -metadata: - name: set_groups -spec: - priority: 0 - traits_expression: | - external.put("groups", - ifelse(external.groups.contains("admins"), - external["groups"].add("superusers"), - external["groups"])) ---- -kind: login_rule -version: v1 -metadata: - name: set_logins -spec: - priority: 1 - traits_expression: | - ifelse(external.groups.contains("superusers"), - external.add_values("logins", "root"), - external) -``` diff --git a/docs/pages/admin-guides/access-controls/login-rules/login-rules.mdx b/docs/pages/admin-guides/access-controls/login-rules/login-rules.mdx deleted file mode 100644 index f179a2a8b4ec9..0000000000000 --- 
a/docs/pages/admin-guides/access-controls/login-rules/login-rules.mdx +++ /dev/null @@ -1,83 +0,0 @@ ---- -title: Login Rules -description: Transform User Traits with Login Rules -layout: tocless-doc ---- - -When users log in to your Teleport cluster with a configured SSO provider, -**Login Rules** can transform the traits provided by your IdP to meet your needs -for configuring access within Teleport. -Login Rules are a feature of Teleport Enterprise. - -Some use cases for Login Rules are: - -- When you need to modify a user trait based on logical rules, like - "users in group `db-admins` should also be added to group `db-users`", - Login Rules provide a powerful expression language to make these changes - without needing to modify claims in your IdP. -- When your IdP provides a large number of traits with many values, all of these - traits will be included in your user's SSH certificates and JWTs, which can - become too large for some third-party applications to handle. Login Rules can - filter out unnecessary traits and keep just the ones you need. -- When you have multiple [Role Templates](../guides/role-templates.mdx) repeating - the same logic to combine and transform external traits, consider using Login - Rules to consolidate the logic to one place and simplify your Roles. - -Login Rules can solve these problems without requiring changes to your -organization's IdP. - -Login Rules use a predicate language to provide maximum flexibility when -configuring your cluster. -This allows you to write simple or complex expressions to define the traits your -users should be granted. 
- -For example, you can convert the value of a `username` trait to lowercase and -conditionally extend the value of a `groups` trait with the following -snippet: -```yaml -traits_map: - username: - - 'strings.lower(external.username)' - groups: - - 'ifelse(external.groups.contains("db-admins"), external.groups.add("db-users"), external.groups)' -``` - -Check out the [Login Rules guide](guide.mdx) for a quick walkthrough -that will show you how to write, test, and add your first Login Rule to your -cluster. See [example Login Rules](guide.mdx) to -learn how to address common use cases. - -When you're ready to take full advantage of Login Rules in your cluster, see the -[Login Rules Reference](../../../reference/access-controls/login-rules.mdx) for details on the expression -language that powers them. - -## FAQ - -### Which users do Login Rules apply to? - -Login Rules apply to all users logging in via OIDC, SAML, or GitHub. -They do not apply to local Teleport users. - -### When are Login Rules evaluated? - -Login Rules are evaluated once during each user login, after receiving the -claims or assertions from your IdP, before mapping claims/assertions to Teleport -roles, and before generating user certificates. -If Login Rules modify any traits used for role mapping, the role mapping will be -affected. - -### Can I define custom helper functions for the predicate language? - -No, but if you have a use case which is not adequately met by the currently -supported helper functions, please talk to support or submit a GitHub issue and -we will consider adding helpers which are generally useful. - -### Can I have multiple Login Rules in a single cluster? - -Yes. -All Login Rules installed in the cluster will first be sorted by priority and -then evaluated in order. -Each subsequent Login Rule will receive the full output of the previous rule as -its input.
-It is strongly recommended to give each Login Rule a unique priority, but ties -will be broken by sorting by the rule name. diff --git a/docs/pages/admin-guides/access-controls/okta/okta.mdx b/docs/pages/admin-guides/access-controls/okta/okta.mdx deleted file mode 100644 index 96e7eabea7aa4..0000000000000 --- a/docs/pages/admin-guides/access-controls/okta/okta.mdx +++ /dev/null @@ -1,113 +0,0 @@ ---- -title: Enroll the Teleport Okta Integration -description: Describes how to set up the Teleport Okta integration in order to grant Teleport users access to resources managed in Okta. ---- - -The Teleport Okta integration can import and grant access to resources from an -Okta organization, such as user profiles, groups, and applications. Teleport can -provision user accounts based on Okta users, Okta applications can be accessed -through Teleport's application access UI, and access to these applications along -with user groups can be managed by Teleport's RBAC along with Access Requests. - -This guide shows you how to set up the Teleport Okta integration. - -## How it works - -Enrolling the Okta integration is a guided process and contains four components: - -1. **Single sign-on (SSO) integration:** Configures Teleport authentication with Okta as an - identity provider. -2. **System for Cross-domain Identity Management (SCIM) integration:** Uses the - [SCIM standard](https://developer.okta.com/docs/concepts/scim/) to - allow nearly instant provisioning of Okta users in Teleport. This is - available only if Teleport Identity Governance is enabled. The SCIM - integration is optional for enabling user, application, and group sync. -3. **User sync:** Imports Okta users as Teleport users via a continuous - reconciliation loop. It is usually slower than SCIM, but more reliable, - and does not require a publicly accessible Teleport cluster. Combined with - SCIM, it provides both speed and reliability. -4.
**Application and group sync:** Imports Okta applications and groups as Teleport - Access Lists. This enables you to manage access to Okta applications and user - groups via Teleport RBAC, including Access Requests. - -The Okta SSO integration is required for all other components of the Okta -integration. User sync is required for application and group sync. - -Teleport manages Okta application group assignments using a declarative state -approach. If an assignment is created via Teleport and subsequently removed via -the Okta UI, Teleport will re-evaluate and potentially overwrite Okta UI changes -to align with the state defined by Teleport's RBAC configuration. - -## Get started - -To start the enrollment, visit the Teleport Web UI and click **Access** and then -**Integrations** on the menu bar at the left of the screen. Then click **Enroll -New Integration** and select the **Okta** tile. - -![Enroll an Access Request plugin](../../../../img/enterprise/plugins/enroll.png) - -This will bring you to the SSO integration configuration screen. Read [Guided -Okta SSO Integration](./guided-sso.mdx) for supplementary instructions as you -work through the guided flow. - -## Guides - -For supplemental information on the guided process, such as instructions on -making changes in Okta, review the documentation in this section: - - - -## Managing integration components - -At any point in the guided flow, you can return to the integration status page -to manage any component of the integration. To open the integration status page -in the Teleport Web UI, navigate to **Access** -> **Integrations** and click -anywhere on the Okta integration row. - -![Okta integration #1](../../../../img/enterprise/plugins/okta/okta-status-page-1.png) - -## Deleting the Okta integration - -You can delete the Okta integration by taking the following steps: - -1. Clean up user accounts created by the Okta integration.
To do so, un-assign - all Okta users from the Okta SAML Application Teleport uses as its identity - provider, and wait for the sync process and/or SCIM provisioning to delete - the corresponding Teleport accounts. Once the Teleport accounts have been - automatically deleted, you can proceed to delete the integration. - -1. Navigate to **Access** -> **Integrations** and click anywhere on the Okta - integration row to visit the integration status page. - -1. Click **Delete Integration**. - - ![Delete Okta integration](../../../../img/enterprise/plugins/okta/okta-delete-integration.png) - -1. Permanently delete the SAML connector created as part of the integration. - Navigate to the **Auth Connectors** page in the Teleport UI and select the - option to delete it. - - The SAML connector created during the enrollment process is ***not*** deleted - when the hosted Okta integration is deleted, and will automatically be re-used - if the Okta integration is re-enrolled. - -1. Clean up Access Lists created by the Okta integration. These are not deleted - when the Okta integration is deleted. To delete Okta-sourced Access Lists, - run the following command: - - ```code - $ tctl plugins cleanup okta - ``` - - If any Access Lists created by the integration are nested members or owners - of Teleport-created Access Lists, cleanup will fail. Any nested Access Lists - need to be unassigned before you can perform a cleanup. - - - If the Okta integration is still active, removing Okta-sourced Access Lists - could revoke Okta access from users in your organization. Exercise caution - when cleaning up Access Lists. - - -Once the integration is deleted, you can manually remove any remaining user -accounts with `tctl`.
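As a sketch of that final cleanup (the user name below is a placeholder), you can list the remaining users and delete them individually:

```code
$ tctl users ls
$ tctl users rm example-user@example.com
```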
diff --git a/docs/pages/admin-guides/access-controls/sso/adfs.mdx b/docs/pages/admin-guides/access-controls/sso/adfs.mdx deleted file mode 100644 index d913d9df86e0f..0000000000000 --- a/docs/pages/admin-guides/access-controls/sso/adfs.mdx +++ /dev/null @@ -1,213 +0,0 @@ ---- -title: SSO with Active Directory Federation Services -description: How to configure Teleport access with Active Directory Federation Services -h1: Single Sign-On with Active Directory Federation Services ---- - -This guide will explain how to configure Active Directory Federation Services -([ADFS](https://en.wikipedia.org/wiki/Active_Directory_Federation_Services)) to be -a single sign-on (SSO) provider to issue login credentials to specific groups -of users. When used in combination with role based access control (RBAC), it -allows Teleport administrators to define policies like: - -- Only members of "DBA" group can SSH into machines running PostgreSQL. -- Developers must never SSH into production servers. - -## Prerequisites - -- ADFS installation with Admin access and users assigned to at least two groups. -- Teleport role with access to maintaining `saml` resources. This is available in the default `editor` role. - -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) - -- (!docs/pages/includes/tctl.mdx!) - -## Step 1/3. Configure ADFS - -You'll need to configure ADFS to export claims about a user (Claims Provider -Trust in ADFS terminology) and you'll need to configure ADFS to trust -Teleport (a Relying Party Trust in ADFS terminology). - -For Claims Provider Trust configuration, open the **AD FS** management window. -Under **Claims Provider Trusts**, right-click on **Active Directory** and -select **Edit Claim Rules**. You'll need to specify at least the following two -incoming claims: `Name ID` and `Group`. - -- `Name ID` should be a mapping of the LDAP Attribute `E-Mail-Addresses` to - `Name ID`. 
- - ![Name ID Configuration](../../../../img/adfs-1.png) - -- A group membership claim should be used to map users to roles (for example to - separate normal users and admins). - - ![Group Configuration](../../../../img/adfs-2.png) - -- If you are using dynamic roles (see below), it may be useful to map the LDAP - Attribute `SAM-Account-Name` to `Windows account name`: - - ![WAN Configuration](../../../../img/adfs-3.png) - -- And create another mapping of `E-Mail-Addresses` to `UPN`: - - ![UPN Configuration](../../../../img/adfs-4.png) - -You'll also need to create a Relying Party Trust. Use the information below to -guide you through the wizard. - -- Create a Relying Party Trust: - ![Add a Claims Provider Trust](../../../../img/adfs-add-provider-trust.png) -- Enter data about the relying party manually. -- Set the display name to something along the lines of `Teleport`. -- Skip the token encryption certificate. -- Select *"Enable support for SAML 2.0 Web SSO protocol"* and set the URL to - `https://teleport.example.com/v1/webapi/saml/acs`, replacing the domain name - with your Teleport proxy URL. -- Set the relying party trust identifier to - `https://teleport.example.com/v1/webapi/saml/acs` as well. -- For the access control policy, select *"Permit everyone"*. - -Once the Relying Party Trust has been created, update the Claim Issuance Policy -for it. Like before, make sure you send at least `Name ID` and `Group` claims to the -relying party (Teleport). If you are using dynamic roles, it may be useful to -map the LDAP Attribute `SAM-Account-Name` to *"Windows account name"* and create -another mapping of `E-Mail-Addresses` to *"UPN"*. - -Lastly, ensure the user you create in Active Directory has an email address -associated with it. To check this, open Server Manager, go to -*"Tools -> Active Directory Users and Computers"*, right-click the user, and -open Properties. Make sure the email address field is filled out. - -## Step 2/3.
Create Teleport roles - -Let's create two Teleport roles: one for administrators and the other for -normal users. You can create them using the `tctl create {file name}` CLI command -or via the Web UI. - -```yaml -# admin-role.yaml -kind: "role" -version: "v3" -metadata: - name: "admin" -spec: - options: - max_session_ttl: "8h0m0s" - allow: - logins: [ root ] - node_labels: - "*": "*" - rules: - - resources: ["*"] - verbs: ["*"] -``` - -```yaml -# user-role.yaml -kind: "role" -version: "v3" -metadata: - name: "dev" -spec: - options: - # regular users can only be guests and their certificates will have a TTL of 1 hour: - max_session_ttl: "1h" - allow: - # only allow login as either ubuntu or the 'windowsaccountname' claim - logins: [ '{{external["http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"]}}', ubuntu ] - node_labels: - "access": "relaxed" -``` - -This role declares: - -- Devs are only allowed to log in to nodes labeled `access: relaxed`. -- Developers can log in as the `ubuntu` user. -- Developers will not be able to see or replay past sessions or - re-configure the Teleport cluster. - -The login -`{{external["http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"]}}` -configures Teleport to look at the -`http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname` -attribute and use that field as an allowed login for each user. Since the name -of the attribute contains characters besides letters, numbers, and underscores, -you must use double quotes (`"`) and square brackets (`[]`) around the name of -the attribute. - -## Step 3/3. 
Create a SAML connector - -Create a SAML connector resource using `tctl`: - -```code -$ tctl sso configure saml --acs https://teleport.example.com/v1/webapi/saml/acs \ - --preset adfs \ - --entity-descriptor https://adfs.example.com/FederationMetadata/2007-06/FederationMetadata.xml \ - --attributes-to-roles http://schemas.xmlsoap.org/claims/Group,teleadmins,editor \ - --attributes-to-roles http://schemas.xmlsoap.org/claims/Group,Users,access \ - > adfs.yaml -``` - -The `acs` field should match the value you set in ADFS earlier and you can -obtain the `entity_descriptor_url` from ADFS under *"ADFS -> Service -> Endpoints -> Metadata"*. - -The `attributes_to_roles` field maps attributes to the Teleport roles you -just created. In our situation, we are mapping the *"Group"* attribute whose full -name is `http://schemas.xmlsoap.org/claims/Group` with a value of *"teleadmins"* -to the *"editor"* role. Groups with the value *"Users"* are mapped to the -*"access"* role. - -You can test this connector before applying it (`cat adfs.yaml | tctl sso test`), -but the authentication process will not complete until you finish the next step. - -Apply the connector: - -```code -$ tctl create -f adfs.yaml -``` - -### Export the signing key - -For the last step, you'll need to export the signing key: - -```code -$ tctl saml export adfs > saml.cer -``` - -Copy `saml.cer` to the ADFS server, open the "Relying Party Trust", and add this -file as one of the signature verification certificates: - -![adfs-add-teleport-cert.png](../../../../img/adfs-add-teleport-cert.png) - -The Web UI will now contain a new button: "Login with MS Active Directory". The CLI is -the same as before: - -```code -$ tsh --proxy=proxy.example.com login -``` - -This command will print the SSO login URL and try to open it -automatically in a browser. - - - Teleport can use multiple SAML connectors.
In this case a connector name - can be passed via `tsh login --auth=connector_name` - - -(!docs/pages/includes/enterprise/samlauthentication.mdx!) - -## Troubleshooting - -(!docs/pages/includes/sso/loginerrortroubleshooting.mdx!) - -## Next steps - -In the Teleport role we illustrated in this guide, `external` traits -are replaced with values from the single sign-on provider that the user -used to authenticate to Teleport. For full details on how traits -work in Teleport roles, see the [Access Controls -Reference](../../../reference/access-controls/roles.mdx). - diff --git a/docs/pages/admin-guides/access-controls/sso/azuread.mdx b/docs/pages/admin-guides/access-controls/sso/azuread.mdx deleted file mode 100644 index 886b18933517a..0000000000000 --- a/docs/pages/admin-guides/access-controls/sso/azuread.mdx +++ /dev/null @@ -1,418 +0,0 @@ ---- -title: Teleport Authentication with Azure Active Directory (AD) -description: How to configure Teleport access with Azure Active Directory. ---- - -This guide will cover how to configure Microsoft Entra ID to issue credentials -to specific groups of users with a SAML Authentication Connector. When used in -combination with role-based access control (RBAC), it allows Teleport -administrators to define policies like: - -- Only members of the "DBA" Microsoft Entra ID group can connect to PostgreSQL - databases. -- Developers must never SSH into production servers. - -The following steps configure an example SAML authentication connector matching -Microsoft Entra ID groups with security roles. You can choose to configure other -options. - -## Prerequisites - -Before you get started, you’ll need: - -- A Microsoft Entra ID admin account with access to creating non-gallery - applications (P2 License). -- To register one or more users in the directory. -- To create at least two security groups in Microsoft Entra ID and assign one or - more users to each group. -- A Teleport role with access to maintaining `saml` resources. 
This is available - in the default `editor` role. - -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) - -- (!docs/pages/includes/tctl.mdx!) - -## Step 1/3. Configure Microsoft Entra ID - -### Create an enterprise application - -1. Select **Azure AD -> Enterprise Applications** - - ![Select Enterprise Applications From Manage](../../../../img/azuread/azuread-1-home.png) - -1. Select **New application** - - ![Select New Applications From Manage](../../../../img/azuread/azuread-2-newapp.png) - -1. Select **Create your own application**, enter the application name (for example, - Teleport), and select **Integrate any other application you don't find in - the gallery (Non-gallery)**. - - ![Select Non-gallery application](../../../../img/azuread/azuread-3-createnongalleryapp.png) - -1. Select **Properties** under **Manage** and set **Assignment required?** to **No** - - ![Turn off user assignment](../../../../img/azuread/azuread-4-turnoffuserassign.png) - - Click **Save** before proceeding to the next step. - -### Configure SAML - -1. Select **Single sign-on** under **Manage** and choose **SAML** - - ![Select SAML](../../../../img/azuread/azuread-5-selectsaml.png) - -1. Edit the **Basic SAML Configuration** - - ![Edit Basic SAML Configuration](../../../../img/azuread/azuread-6-editbasicsaml.png) - -1. Enter the URL for your Teleport Proxy Service host in the **Entity ID** and - **Reply URL** fields, for example: - - ```code - https://mytenant.teleport.sh:443/v1/webapi/saml/acs/ad - ``` - - ![Put in Entity ID and Reply URL](../../../../img/azuread/azuread-7-entityandreplyurl.png) - - Click **Save** before proceeding to the next step. - -1. In **SAML Certificates** section, copy the **App Federation - Metadata URL** link and save it for use in our Teleport connector configuration: - - ![Download Federation Metadata XML](../../../../img/azuread/azuread-9-fedmeatadataxml.png) - -### Edit attributes & claims - -1. 
Click on **Unique User Identifier (Name ID)** under **Required claim**. - -1. Change the "name identifier format" to **Default**. Make sure the source - attribute is `user.userprincipalname`. - - ![Confirm Name Identifier](../../../../img/azuread/azuread-8a-nameidentifier.png) - -1. Add a group claim to make user security groups available to the connector: - - ![Put in Security group claim](../../../../img/azuread/azuread-8b-groupclaim.png) - -1. (optional) Add a claim that transforms the format of the Azure AD username to lowercase, in order to use it inside - Teleport roles as the `{{external.username}}` property. - - Set the Source to "Transformation". In the new panel: - - - Set the Transformation value to "Extract()" - - - Set the Attribute name to `user.userprincipalname`. - - - Set the Value to `@`. - - - Click "Add Transformation" and set the Transformation to `ToLowercase()`. - - ![Add a transformed username](../../../../img/azuread/azuread-8c-usernameclaim.png) - -## Step 2/3. Create a SAML connector - -Now, create a SAML connector resource using `tctl`. - -```code -$ tctl sso configure saml --preset ad \ ---entity-descriptor https://login.microsoftonline.com/ff882432.../federationmetadata/2007-06/federationmetadata.xml\?appid\=b8d06e01... \ ---attributes-to-roles http://schemas.microsoft.com/ws/2008/06/identity/claims/groups,41c94563...,dev \ ---attributes-to-roles http://schemas.microsoft.com/ws/2008/06/identity/claims/groups,8adac502...,access > azure-connector.yaml -``` - -In the example above: - -- `--entity-descriptor` specifies the app federation metadata URL saved in the - previous step. -- Each `--attributes-to-roles` specifies the name of the schema definition for - groups, the UUID of an AD group, and the Teleport role that members of the group - will be assigned.
- -The file `azure-connector.yaml` should now resemble the following: - -```yaml -kind: saml -metadata: - name: ad -spec: - acs: https://mytenant.teleport.sh/v1/webapi/saml/acs/ad - attributes_to_roles: - - name: http://schemas.microsoft.com/ws/2008/06/identity/claims/groups - roles: - - dev - value: 41c94563... - - name: http://schemas.microsoft.com/ws/2008/06/identity/claims/groups - roles: - - access - value: 8adac502... - audience: https://mytenant.teleport.sh/v1/webapi/saml/acs/ad - cert: "" - display: Microsoft - entity_descriptor: "" - entity_descriptor_url: https://login.microsoftonline.com/ff882432.../federationmetadata/2007-06/federationmetadata.xml?appid=b8d06e01... - issuer: "" - service_provider_issuer: https://mytenant.teleport.sh/v1/webapi/saml/acs/ad - sso: "" -version: v2 - -``` - -With the connector in place on the cluster, you can test it with `tctl`: - -```code -$ cat azure-connector.yaml | tctl sso test -``` - -Your browser should open and log you in to the Teleport cluster using your -Azure AD credentials. If there are any problems, the CLI output will help you -debug the connector configuration. - - -To create the connector using the `tctl` tool, run the following command: - -```code -$ tctl create -f azure-connector.yaml -``` - -## Step 3/3. Create a new Teleport role - -Create a Teleport role resource that will use external username data from the -Azure AD connector to determine which Linux logins to allow on a host. - -Create a file called `dev.yaml` with the following content: - -```yaml -kind: role -version: v5 -metadata: - name: dev -spec: - options: - max_session_ttl: 24h - allow: - # only allow login as either ubuntu or the 'windowsaccountname' claim - logins: [ '{{external["http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"]}}', ubuntu ] - node_labels: - access: relaxed -``` - -Users with the `dev` role are only allowed to log in to nodes with the `access: -relaxed` Teleport label. 
They can log in as either `ubuntu` or a username that -is passed in from the Azure AD connector using the `windowsaccountname` -attribute. - -The login -`{{external["http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"]}}` -configures Teleport to look at the -`http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname` -attribute and use that field as an allowed login for each user. Since the name -of the attribute contains characters besides letters, numbers, and underscores, -you must use double quotes (`"`) and square brackets (`[]`) around the name of -the attribute. - -Create the role: - -```code -$ tctl create dev.yaml -``` - -(!docs/pages/includes/create-role-using-web.mdx!) - -(!docs/pages/includes/enterprise/samlauthentication.mdx!) - -## Token encryption (Optional) - -Azure AD's SAML token encryption encrypts the SAML assertions sent to Teleport -during SSO redirect. - -Token encryption is an Azure Active Directory premium feature and requires a -separate license. Since traffic between Azure AD and the Teleport Proxy Service -already uses HTTPS, token encryption is optional. To determine whether you -should enable token encryption, read the Azure AD -[documentation](https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/howto-saml-token-encryption). - -### Set up Teleport token encryption - -Start with generating a public/private key and a certificate. You will set up -the public certificate with Azure AD and the private key with Teleport. - -```code -$ openssl req -nodes -new -x509 -keyout server.key -out server.cer -``` - -If you are modifying the existing connector, open it in your editor: - -```code -$ tctl edit saml -``` - -You will notice that Teleport has generated a `signing_key_pair`. This key pair -is used to sign responses. 
- -```yaml -kind: saml -metadata: - name: ad -spec: - acs: https://mytenant.teleport.sh/v1/webapi/saml/acs/azure-saml - attributes_to_roles: - - name: http://schemas.microsoft.com/ws/2008/06/identity/claims/groups - roles: - - editor - - access - - auditor - value: '*' - audience: https://mytenant.teleport.sh/v1/webapi/saml/acs/azure-saml - cert: "" - display: Microsoft - entity_descriptor: - entity_descriptor_url: https://login.microsoftonline.com/ff882432.../federationmetadata/2007-06/federationmetadata.xml?appid=b8d06e01... - issuer: https://sts.windows.net/your-id-here/ - service_provider_issuer: https://mytenant.teleport.sh/v1/webapi/saml/acs/azure-saml - signing_key_pair: - cert: | - -----BEGIN CERTIFICATE----- - ... - -----END CERTIFICATE----- - private_key: | - -----BEGIN RSA PRIVATE KEY----- - ... - -----END RSA PRIVATE KEY----- - sso: https://login.microsoftonline.com/your-id-here/saml2 -version: v2 -``` - -Add `assertion_key_pair` using the data from `server.key` and `server.cer`. - -```yaml -kind: saml -metadata: - name: azure-saml -spec: - acs: https://mytenant.teleport.sh/v1/webapi/saml/acs/azure-saml - attributes_to_roles: - - name: http://schemas.microsoft.com/ws/2008/06/identity/claims/groups - roles: - - editor - - access - - auditor - value: '*' - audience: https://mytenant.teleport.sh/v1/webapi/saml/acs/azure-saml - cert: "" - display: Microsoft - entity_descriptor: - entity_descriptor_url: https://login.microsoftonline.com/ff882432.../federationmetadata/2007-06/federationmetadata.xml?appid=b8d06e01... - issuer: https://sts.windows.net/your-id-here/ - service_provider_issuer: https://mytenant.teleport.sh/v1/webapi/saml/acs/azure-saml - signing_key_pair: - cert: | - -----BEGIN CERTIFICATE----- - ... - -----END CERTIFICATE----- - private_key: | - -----BEGIN RSA PRIVATE KEY----- - ... 
- -----END RSA PRIVATE KEY----- - sso: https://login.microsoftonline.com/your-id-here/saml2 -version: v2 -``` - - - Make sure to have the same indentation for all lines of the certificate and key, otherwise - Teleport will not parse the YAML file. - - -After your edits, the file will look like this: - -```yaml -kind: saml -metadata: - name: azure-saml -spec: - acs: https://mytenant.teleport.sh/v1/webapi/saml/acs/azure-saml - attributes_to_roles: - - name: http://schemas.microsoft.com/ws/2008/06/identity/claims/groups - roles: - - editor - - access - - auditor - value: '*' - audience: https://mytenant.teleport.sh/v1/webapi/saml/acs/azure-saml - cert: "" - display: Microsoft - entity_descriptor: - entity_descriptor_url: https://login.microsoftonline.com/ff882432.../federationmetadata/2007-06/federationmetadata.xml?appid=b8d06e01... - issuer: https://sts.windows.net/your-id-here/ - service_provider_issuer: https://mytenant.teleport.sh/v1/webapi/saml/acs/azure-saml - assertion_key_pair: - cert: | - -----BEGIN CERTIFICATE----- - New CERT - -----END CERTIFICATE----- - private_key: | - -----BEGIN RSA PRIVATE KEY----- - New private key - -----END RSA PRIVATE KEY----- - signing_key_pair: - cert: | - -----BEGIN CERTIFICATE----- - ... - -----END CERTIFICATE----- - private_key: | - -----BEGIN RSA PRIVATE KEY----- - ... - -----END RSA PRIVATE KEY----- - sso: https://login.microsoftonline.com/your-id-here/saml2 -version: v2 -``` - -Update the connector by saving and closing the file in your editor. - -### Activate token encryption - -- Navigate to **Token Encryption**: - -![Navigate to token encryption](../../../../img/azuread/azuread-token-encryption-0.png) - -- Import certificate - -![Import certificate](../../../../img/azuread/azuread-token-encryption-1-import.png) - -- Activate it - -![Activate certificate](../../../../img/azuread/azuread-token-encryption-2-activate.png) - -If the SSO login with this connector is successful, the encryption works. 
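Before importing the certificate into Azure AD, you can optionally sanity-check the file you generated earlier. This step is not required by Teleport or Azure AD; `-subject` and `-enddate` are standard `openssl x509` options for printing the certificate's subject and expiry:

```code
$ openssl x509 -in server.cer -noout -subject -enddate
```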
- -## Troubleshooting - -(!docs/pages/includes/sso/loginerrortroubleshooting.mdx!) - -### Failed to process SAML callback - -If you encounter a "Failed to process SAML callback" error, take a look at the audit log. - -```text -Special characters are not allowed in resource names. Use a name composed only from -alphanumeric characters, hyphens, and dots: /web/users/ops_example.com#EXT#@opsexample.onmicrosoft.com/params -``` - -The error above is caused by a Name ID format that is not compatible with Teleport's naming conventions. - -Change the Name ID format to use email instead: - -![Change NameID format to use email](../../../../img/azuread/azuread-nameid.png) - -## Further reading - -- [Teleport Configuration Resources Reference](../../../reference/resources.mdx) -- In the Teleport role we illustrated in this guide, `external` traits - are replaced with values from the single sign-on provider that the - user used to authenticate to Teleport. For full details on how traits - work in Teleport roles, see the [Access Controls - Reference](../../../reference/access-controls/roles.mdx). - diff --git a/docs/pages/admin-guides/access-controls/sso/gitlab.mdx b/docs/pages/admin-guides/access-controls/sso/gitlab.mdx deleted file mode 100644 index d4a43f9160067..0000000000000 --- a/docs/pages/admin-guides/access-controls/sso/gitlab.mdx +++ /dev/null @@ -1,229 +0,0 @@ ---- -title: Authentication With GitLab as an SSO provider -description: How to configure Teleport access using GitLab for SSO -h1: Teleport SSO Authentication with GitLab ---- - -This guide will cover how to configure [GitLab](https://www.gitlab.com/) to issue -credentials to specific groups of users. When used in combination with role -based access control (RBAC), it allows administrators to define policies -like: - -- Only members of the "DBA" group can access PostgreSQL databases. 
-- Only members of "ProductionKubernetes" can access production Kubernetes clusters.
-- Developers must never SSH into production servers.
-
-## Prerequisites
-
-- At least two groups in GitLab with users assigned. In our examples below,
-  we'll assume a group named `company` with two subgroups, `admin` and `dev`.
-- Teleport role with permission to maintain `oidc` resources. This is available
-  in the default `editor` role.
-
-(!docs/pages/includes/commercial-prereqs-tabs.mdx!)
-
-- (!docs/pages/includes/tctl.mdx!)
-
-## Configure GitLab
-
-You should have at least one group configured in GitLab to map to Teleport roles.
-In this example we use the names `gitlab-dev` and `gitlab-admin`. Assign users
-to each of these groups.
-
-1. Create an application in one of your groups (**Group overview** ->
-   **Settings** -> **Applications**) that will allow using GitLab as an OAuth
-   provider to Teleport.
-
-   Settings:
-
-   - Redirect URL: `https:///v1/webapi/oidc/callback`.
-   - Check `Confidential`, `openid`, `profile`, and `email`.
-
-   ![Create App](../../../../img/sso/gitlab/gitlab-oidc-0.png)
-
-1. Collect the `Application ID` and `Secret` in the Application. These will be
-   used in the Teleport OIDC auth connector:
-
-   ![Collection Information](../../../../img/sso/gitlab/gitlab-oidc-1.png)
-
-1. Confirm the GitLab issuer address.
-
-   For GitLab.com, the issuer address is `https://gitlab.com`. This allows
-   Teleport to access the OpenID configuration at
-   `https://gitlab.com/.well-known/openid-configuration`. If you are
-   self-hosting, the issuer address is the address of your GitLab instance.
-
-## Configure Teleport
-
-### Create an OIDC Connector
-
-Create an OIDC connector resource using `tctl`.
-
-On your workstation, create a file called `client-secret.txt` consisting only of
-your client secret.
-
-
-
-Replace the application ID and secret with the values from GitLab:
-
-```code
-$ tctl sso configure oidc --preset gitlab \
---id \
---secret $( cat client-secret.txt) \
---claims-to-roles groups,company/admin,admin \
---claims-to-roles groups,company/dev,dev > oidc.yaml
-```
-
-
-
-Replace the application ID and secret with the values from GitLab, and replace
-`https://gitlab.company.com` with the address of your self-hosted GitLab instance:
-
-```code
-$ tctl sso configure oidc --preset gitlab \
---id \
---issuer-url https://gitlab.company.com \
---secret $( cat client-secret.txt) \
---claims-to-roles groups,company/admin,admin \
---claims-to-roles groups,company/dev,dev > oidc.yaml
-```
-
-
-
-
-This example maps the two subgroups `admin` and `dev` of the parent group
-`company` to the `admin` and `dev` roles in Teleport, and creates the `oidc.yaml` file:
-
-```yaml
-kind: oidc
-metadata:
-  name: gitlab
-spec:
-  claims_to_roles:
-  - claim: groups
-    roles:
-    - admin
-    value: company/admin
-  - claim: groups
-    roles:
-    - dev
-    value: company/dev
-  client_id:
-  client_secret:
-  display: GitLab
-  issuer_url: https://gitlab.com
-  prompt: none
-  redirect_url: https://teleport.example.com:443/v1/webapi/oidc/callback
-version: v3
-
-```
-
-Test the connector resource by piping the file to `tctl sso test`:
-
-```code
-$ cat oidc.yaml | tctl sso test
-```
-
-After authorizing the application in GitLab you should get a
-**Login Successful** message in your web browser. Otherwise, consult the output
-of the command to diagnose.
-
-Create the connector using the `tctl` tool:
-
-```code
-$ tctl create -f oidc.yaml
-```
-
-## Create Teleport Roles
-
-We are going to create two roles: a privileged `admin` role that can log in as
-root and administer the cluster, and a non-privileged `dev` role.
-
-```yaml
-kind: role
-version: v5
-metadata:
-  name: admin
-spec:
-  options:
-    max_session_ttl: 24h
-  allow:
-    logins: [root]
-    node_labels:
-      "*": "*"
-    rules:
-    - resources: ["*"]
-      verbs: ["*"]
-```
-
-The developer role:
-
-```yaml
-kind: role
-version: v5
-metadata:
-  name: dev
-spec:
-  options:
-    max_session_ttl: 24h
-  allow:
-    logins: [ "{{email.local(external.email)}}", ubuntu ]
-    node_labels:
-      access: relaxed
-```
-
-- Devs are only allowed to log in to nodes labeled with the `access: relaxed` label.
-- Developers can log in as the `ubuntu` user.
-- Notice the `{{email.local(external.email)}}` login. It configures Teleport to
-  look at the *"email"* GitLab claim and use that field as an allowed login for
-  each user. The `email.local(external.trait)` function removes the `@domain`
-  and preserves the username prefix. For full details on how variable expansion
-  works in Teleport roles, see the [Access Controls
-  Reference](../../../reference/access-controls/roles.mdx).
-- Developers also do not have any "allow rules", i.e. they will not be able to
-  see/replay past sessions or re-configure the Teleport cluster.
-
-Create both roles on the Auth Service:
-
-```code
-$ tctl create -f admin.yaml
-$ tctl create -f dev.yaml
-```
-
-(!docs/pages/includes/create-role-using-web.mdx!)
-
-## Enable default OIDC authentication
-
-(!docs/pages/includes/enterprise/oidcauthentication.mdx!)
-
-The Web UI will now contain a new button: "Login with GitLab". The CLI is
-the same as before:
-
-```code
-$ tsh --proxy=teleport.example.com login
-```
-
-This command will print the SSO login URL (and will try to open it
-automatically in a browser).
-
-
-  Teleport can use multiple OIDC/SAML connectors. In this case a connector name
-  can be passed via `tsh login --auth=connector_name`.
-
-
-
-  Teleport only supports relying party initiated flows for OpenID Connect. This
-  means you cannot initiate login from your identity provider; you have to
-  initiate login from either the Teleport Web UI or CLI.
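As an aside, the `email.local` transform used in the developer role above behaves like stripping everything from the first `@` onward. A minimal shell sketch with a made-up address shows the login it would produce:

```shell
# Approximate what email.local does: keep only the part before the '@'.
email="alice@example.com"  # hypothetical value of the external.email trait
echo "${email%%@*}"
```

For `alice@example.com`, the resulting allowed login is `alice`.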
- - -## Troubleshooting - -(!docs/pages/includes/sso/loginerrortroubleshooting.mdx!) - diff --git a/docs/pages/admin-guides/access-controls/sso/oidc.mdx b/docs/pages/admin-guides/access-controls/sso/oidc.mdx deleted file mode 100644 index 84fd249c503dd..0000000000000 --- a/docs/pages/admin-guides/access-controls/sso/oidc.mdx +++ /dev/null @@ -1,297 +0,0 @@ ---- -title: OAuth2 and OIDC authentication -description: How to configure Teleport access with OAuth2 or OpenID connect (OIDC) ---- - -This guide will explain how to configure an SSO provider using [OpenID -Connect](http://openid.net/connect/) (also known as OIDC) to issue Teleport -credentials to specific groups of users. When used in combination with role-based -access control (RBAC), OIDC allows Teleport administrators to define -policies like: - -- Only members of the "DBA" group can connect to PostgreSQL databases. -- Developers must never SSH into production servers. - -## Prerequisites - -- Admin access to the SSO/IdP being integrated with users assigned to groups/roles. -- Teleport role with permission to maintain `oidc` resources. This permission is available - in the default `editor` role. - -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) - -- (!docs/pages/includes/tctl.mdx!) - -## Identity Providers - -Register Teleport with the external identity provider you will be using and -obtain your `client_id` and `client_secret`. This information should be -documented on the identity providers website. Here are a few links: - -- [Auth0 Client Configuration](https://auth0.com/docs/applications) -- [Google Identity Platform](https://developers.google.com/identity/protocols/OpenIDConnect) -- [Keycloak Client Registration](https://www.keycloak.org/docs/latest/securing_apps/index.html#\_client_registration) - - -For Google Workspace, see [Teleport Authentication with Google Workspace](google-workspace.mdx) - - -Save the relevant information from your identity provider. 
To make following -this guide easier, you can add the Client ID here and it will be included in the -example commands below: - -Client ID: - -## OIDC Redirect URL - -OIDC relies on HTTP redirects to return control back to Teleport after -authentication is complete. The redirect URL must be selected by a Teleport -administrator in advance. - -The redirect URL for OIDC authentication in Teleport is -`/v1/webapi/oidc/callback`. -Replace with your Teleport Cloud tenant or -Proxy Service address. - -## OIDC connector configuration - -The next step is to add an OIDC connector to Teleport. The connectors are -created, tested, and added or removed using `tctl` [resource -commands](../../../reference/resources.mdx) or the Teleport Web UI. - -On your workstation, create a file called `client-secret.txt` consisting only of -your client secret. - -To create a new connector, use `tctl sso configure`. The following example creates a -connector resource file in YAML format named `oidc-connector.yaml`: - -```code -$ tctl sso configure oidc --name \ - --issuer-url \ - --id \ - --secret $(cat client-secret.txt) \ - --claims-to-roles ,,access \ - --claims-to-roles ,,editor > oidc-connector.yaml -``` - -- `--name`: Usually the name of the IdP, this is how the connector will be - identified in Teleport. -- `--issuer-url`: This is the base path to the IdP's OIDC configuration endpoint, - excluding `.well-known/openid-configuration`. If, for example, the endpoint - is `https://example.com/.well-known/openid-configuration`, you would use - `https://example.com`. -- `--id`: The client ID as defined in the IdP. Depending on your identity - provider this may be something you can define (for example, `teleport`), or may be an - assigned string. -- `--secret`: The client token/secret provided by the IdP to authorize this client. -- `--claims-to-roles`: A mapping of OIDC claims/values to be associated with - Teleport roles. 
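Since `--issuer-url` must exclude the `.well-known/openid-configuration` suffix, it can help to confirm exactly which discovery URL corresponds to the value you pass. A minimal shell sketch, assuming a hypothetical issuer of `https://example.com`:

```shell
# Derive the OIDC discovery URL from an issuer URL.
# The issuer value here is a placeholder; substitute your IdP's issuer.
issuer="https://example.com"
discovery="${issuer%/}/.well-known/openid-configuration"
echo "$discovery"
```

Requesting that URL (for example with `curl`) should return a JSON document whose `issuer` field matches the value you pass to `--issuer-url`; a mismatch usually means the issuer URL is wrong.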
- -For more information on these and all available flags, see the [tctl sso configure -oidc](../../../reference/cli/tctl.mdx) section of the Teleport CLI -Reference page. - -The file created should look like the example below. This connector requests -the scope `` from the identity provider, then maps the value to -either the `access` or the `editor` role depending on the value returned for -that key within the claims: - -```yaml -(!examples/resources/oidc-connector.yaml!) -``` - -
-Practical Example: Keycloak -The following example was generated using Keycloak as the identity provider. -Keycloak is being served at `keycloak.example.com`, and the Teleport Proxy -Service is listening at `teleport.example.com`. In Keycloak, the client is -named `teleport`. Under the `teleport-dedicated` client scope, we've added -the "Group Membership" mapper: - -```yaml -kind: oidc -metadata: - name: keycloak -spec: - claims_to_roles: - - claim: groups - roles: - - access - value: /users - - claim: groups - roles: - - editor - value: /admins - client_id: teleport - client_secret: abc123... - issuer_url: https://keycloak.example.com/realms/master - redirect_url: https://teleport.example.com:443/v1/webapi/oidc/callback -version: v3 - -``` - -
-
-Before applying the connector to your cluster, you can test that it's configured
-correctly:
-
-```code
-$ cat oidc-connector.yaml | tctl sso test
-```
-
-This should open up your web browser and attempt to log you in to the Teleport
-cluster through your IdP. If it fails, review the output of this command for
-troubleshooting details.
-
-
-The "[OIDC] Claims" section of the CLI output provides all the details of your
-user provided by the IdP. This is a good starting point while troubleshooting
-errors like `Failed to calculate user attributes.`
-
-
-After your tests are successful, create the connector:
-
-```code
-$ tctl create -f oidc-connector.yaml
-```
-
-### Optional: ACR Values
-
-Teleport supports sending Authentication Context Class Reference (ACR) values
-when obtaining an authorization code from an OIDC provider. By default, ACR
-values are not set. However, if the `acr_values` field is set, Teleport expects
-to receive the same value in the `acr` claim, otherwise it will consider the
-callback invalid.
-
-In addition, Teleport supports OIDC provider-specific ACR value processing,
-which can be enabled by setting the `provider` field in OIDC configuration. At
-the moment, the only built-in support is for NetIQ.
-
-An example of using ACR values and provider-specific processing is below:
-
-```yaml
-# example connector which uses ACR values
-kind: oidc
-version: v2
-metadata:
-  name: "oidc-connector"
-spec:
-  issuer_url: "https://oidc.example.com"
-  client_id: "xxxxxxxxxxxxxxxxxxxxxxx.example.com"
-  client_secret: "zzzzzzzzzzzzzzzzzzzzzzzz"
-  redirect_url: "https://mytenant.teleport.sh/v1/webapi/oidc/callback"
-  display: "Login with Example"
-  acr_values: "foo/bar"
-  provider: netiq
-  scope: [ "group" ]
-  claims_to_roles:
-  - claim: "group"
-    value: "editor"
-    roles: [ "editor" ]
-  - claim: "group"
-    value: "user"
-    roles: [ "access" ]
-```
-
-### Optional: Max age
-
-The `max_age` field controls the maximum age of users' sessions before they will
-be forced to reauthenticate. By default `max_age` is unset, meaning once a user
-authenticates using OIDC they will not have to reauthenticate unless the
-configured OIDC provider forces them to. This can be set to a duration of time
-to force users to reauthenticate more often. If `max_age` is set to zero
-seconds, users will be forced to reauthenticate with their OIDC provider every
-time they authenticate with Teleport.
-
-Note that the specified duration must be in whole seconds. `24h` works because
-that's the same as `86400s`, but `60s500ms` would not be allowed as that is 60.5
-seconds.
-
-```yaml
-# Extra parts of OIDC yaml have been removed.
-spec:
-  max_age: 24h
-```
-
-Note that not all OIDC providers support setting `max_age`. Google and GitLab are
-both known not to support it and authentication with those providers will not work
-when the `max_age` field is set.
-
-### Optional: Prompt
-
-Set the Authorization Server prompt for the End-User for reauthentication and consent
-per the OIDC protocol. If no `prompt` value is set, Teleport uses `select_account` as
-the default.
-
-```yaml
-# Extra parts of OIDC yaml have been removed.
-spec:
-  # Valid values as defined in https://openid.net/specs/openid-connect-core-1_0.html#AuthRequest
-  # none: The Authorization Server must not display any authentication or consent user interface pages.
-  # select_account: The Authorization Server should prompt the End-User to select a user account.
-  # login: The Authorization Server should prompt the End-User for reauthentication.
-  # consent: The Authorization Server should prompt the End-User for consent before returning information to the Client.
-  prompt: 'login'
-```
-
-### Optional: Redirect URL and Timeout
-
-The redirect URL must be accessible to all users. You can also set an optional
-redirect timeout.
-
-```yaml
-# Extra parts of OIDC yaml have been removed.
-spec:
-  redirect_url: https://.example.com:3080/v1/webapi/oidc/callback
-  # Optional Redirect Timeout.
-  # redirect_timeout: 90s
-```
-
-### Optional: Disable email verification
-
-By default, Teleport validates the `email_verified` claim, and users who
-attempt to sign in without a verified email address are prevented from doing so:
-
-```text
-ERROR: SSO flow failed.
-identity provider callback failed with error: OIDC provider did not verify email.
-  email not verified by OIDC provider
-```
-
-For testing and other purposes, you can opt out of this behavior by enabling
-`allow_unverified_email` in your OIDC connector. This option weakens the overall
-security of the system, so we do not recommend enabling it.
-
-```yaml
-kind: oidc
-version: v2
-metadata:
-  name: connector
-spec:
-  allow_unverified_email: true
-```
-
-### Optional: Specify a claim to use as the username
-
-By default, Teleport will use the user's email as their Teleport username.
-
-You can define a `username_claim` to specify the claim that should be used as
-the username instead:
-
-```yaml
-kind: oidc
-version: v2
-metadata:
-  name: connector
-spec:
-  # Use the `preferred_username` claim as the user's Teleport username.
- username_claim: preferred_username -``` - -## Enable default OIDC authentication - -(!docs/pages/includes/enterprise/oidcauthentication.mdx!) - -## Troubleshooting - -(!docs/pages/includes/sso/loginerrortroubleshooting.mdx!) diff --git a/docs/pages/admin-guides/access-controls/sso/okta.mdx b/docs/pages/admin-guides/access-controls/sso/okta.mdx deleted file mode 100644 index 80551cb62e8ef..0000000000000 --- a/docs/pages/admin-guides/access-controls/sso/okta.mdx +++ /dev/null @@ -1,335 +0,0 @@ ---- -title: Authentication With Okta as an SSO Provider -description: How to configure Teleport access using Okta for SSO -videoBanner: SM4Am-i8cj4 ---- - -This guide covers how to configure [Okta](https://www.okta.com/) to provide -single sign on (SSO) identities to Teleport Enterprise and Teleport Enterprise -Cloud. When used in combination with role-based access control (RBAC), this allows -Teleport administrators to define policies like: - -- Only members of the "DBA" group can access PostgreSQL databases. -- Developers must never SSH into production servers. -- Members of the HR group can view audit logs but not production environments. - -
-Automated SSO connection with Okta integration
-
-In Teleport Enterprise Cloud and Self-Hosted Teleport Enterprise, Teleport can
-automatically configure an SSO connector for you as part of [enrolling the
-hosted Okta integration](../okta/okta.mdx).
-
-You can enroll the Okta integration from the Teleport Web UI.
-
-Visit the Teleport Web UI, click **Add New** in the left sidebar and click **Integration**:
-
-![Enroll an Access Request plugin](../../../../img/enterprise/plugins/enroll.png)
-
-On the "Select Integration Type" menu, click the Okta tile. Teleport will then
-guide you through configuring the Okta integration.
-
-
-## Prerequisites
-
-- An Okta account with admin access. Your account must include users and at
-  least two groups. If you don't already have Okta groups you want to assign to
-  Teleport roles, don't worry, we'll create example groups below.
-
-(!docs/pages/includes/commercial-prereqs-tabs.mdx!)
-
-- A Teleport role with access to edit and maintain `saml` resources. This is
-  available in the default `editor` role.
-- (!docs/pages/includes/tctl-tsh-prerequisite.mdx!)
-
-## Step 1/4. Create & assign groups
-
-Okta indicates a user's group membership as a SAML assertion in the data it
-provides to Teleport. We will configure Teleport to assign the "dev" role to
-members of the `okta-dev` Okta group, and the preset "editor" role to members of
-the `okta-admin` group.
-
-If you already have Okta groups you want to assign to "dev" and "editor" roles in
-Teleport, you can skip to the [next step](#step-24-configure-okta).
-
-### Create Groups
-
-Create two groups: "okta-dev" and "okta-admin". In the Okta dashboard go to the
-navigation bar and click **Directory** -> **Groups**, then **Add group**:
-
-![Create Group Devs](../../../../img/sso/okta/okta-saml-2.1@2x.png)
-
-Repeat for the admin group:
-
-![Create Group Devs](../../../../img/sso/okta/okta-saml-2.2.png)
-
-## Step 2/4. Configure Okta
-
-In this section we will create an application in the Okta dashboard to allow our
-Teleport cluster to access Okta as an identity provider (IdP). We'll also locate
-the address that Okta uses to provide its IdP metadata to Teleport.
-
-### Create Okta SAML 2.0 App
-
-From the main navigation menu, select **Applications** -> **Applications**, and click
-**Create App Integration**. Select SAML 2.0, then click **Next**.
-
-![Create APP](../../../../img/sso/okta/okta-saml-1.png)
-
-On the next screen (**General Settings**), provide a name and optional logo for
-your new app, then click **Next**. This will bring you to the **Configure SAML** section.
-
-### Configure the App
-
-Provide the following values to their respective fields:
-
-#### General
-
-Replace with your Teleport Proxy Service or Enterprise
-Cloud account domain and port (`443` by default).
-
-- Single sign on URL:
-
-  ```
-  https:///v1/webapi/saml/acs/okta
-  ```
-
-- Audience URI (SP Entity ID):
-
-  ```
-  https:///v1/webapi/saml/acs/okta
-  ```
-
-- Name ID format: `EmailAddress`
-
-- Application username: `Okta username`
-
-#### Attribute Statements
-
-- Name: `username` | Name format: `Unspecified` | Value: `user.login`
-
-#### Group Attribute Statements
-
-We will map our Okta groups to SAML attribute statements (special signed metadata
-exposed via a SAML XML response), so that Teleport can discover a user's group
-membership and assign matching roles.
-
-- Name: `groups` | Name format: `Unspecified`
-- Filter: `Matches regex` | `.*`
-
-The configuration page should now look like this:
-
-![Configure APP](../../../../img/sso/okta/setup-redirection.png)
-
-The "Matches regex" filter requires the literal string `.*` in order to match all
-content from the group attribute statement.
-
-Notice that we have set "NameID" to the email format and mapped the groups with
-a wildcard regex in the Group Attribute statements. We have also set the "Audience"
-and SSO URLs to the same value. This is so Teleport can read and use Okta users'
-email addresses to create their usernames in Teleport, instead of relying on additional
-name fields.
-
-Once you've filled the required fields, click **Next**, then finish the app creation wizard.
-
-### Group assignment
-
-From the **Assignments** tab of the new application page, click **Assign**. Assign the user groups
-which can access the app. Users who are members of those groups will have SSO access to
-Teleport once the Auth Connector is configured.
- -![Configure APP](../../../../img/sso/okta/okta-saml-3.1.png) - -### Save IdP metadata path - -Okta provides an IdP metadata block, used by clients to identify and verify it -as a trusted source of user identity. - -Since Okta serves this content over HTTPS we can configure Teleport to use this -path instead of a local copy, which can go stale. - -From the app's **Sign On** tab, scroll down to **SAML Signing Certificates**. -Click **Actions** for the SHA-2 entry, then "View IdP metadata": - -![View Okta IdP Metadata](../../../../img/sso/okta/okta-copy-metadata.png) - -Copy the URL to the metadata file for use in our Teleport configuration. - - -You can also right click on the "View IdP metadata" link and select -"Copy Link" or "Copy Link Address". - - -## Step 3/4. Create a SAML connector - -Define an Okta SAML connector using `tctl`. Update this example command with -the path to your metadata file, and edit the `--attributes-to-roles` values for -custom group assignment to roles. See [tctl sso configure -saml](../../../reference/cli/tctl.mdx) for a full reference of -flags for this command: - -```code -$ tctl sso configure saml --preset=okta \ ---entity-descriptor "https://example.okta.com/app/000000/sso/saml/metadata" \ ---attributes-to-roles=groups,okta-admin,editor \ ---attributes-to-roles=groups,okta-dev,dev > okta-connector.yaml -``` - -The contents of `okta-connector.yaml` should resemble the following: - -```yaml -kind: saml -metadata: - name: okta -spec: - acs: https://teleport.example.com:443/v1/webapi/saml/acs/okta - attributes_to_roles: - - name: groups - roles: - - editor - value: okta-admin - - name: groups - roles: - - dev - value: okta-dev - audience: https://teleport.example.com:443/v1/webapi/saml/acs/okta - cert: "" - display: "Okta" - entity_descriptor: "" - entity_descriptor_url: https://example.okta.com/app/000000/sso/saml/metadata - issuer: "" - service_provider_issuer: https://teleport.example.com:443/v1/webapi/saml/acs/okta - sso: "" 
-version: v2 -``` - -The `attributes_to_roles` field in the connector resource maps key/value-like attributes of -the assertion from Okta into a list of Teleport roles to apply to the session. - -(!docs/pages/includes/sso/idp-initiated.mdx!) - -You can test the connector before applying it to your cluster. This is strongly -encouraged to avoid interruption to active clusters: - -```code -$ cat okta-connector.yaml | tctl sso test -If browser window does not open automatically, open it by clicking on the link: - http://127.0.0.1:52519/0222b1ca... -Success! Logged in as: alice@example.com --------------------------------------------------------------------------------- -Authentication details: - roles: - - editor - - dev - traits: - groups: - - Everyone - - okta-admin - - okta-dev - username: - - alice@example.com - username: alice@example.com --------------------------------------------------------------------------------- -[SAML] Attributes to roles: -- name: groups - roles: - - editor - value: okta-admin -- name: groups - roles: - - dev - value: okta-dev - --------------------------------------------------------------------------------- -[SAML] Attributes statements: -groups: -- Everyone -- okta-admin -- okta-dev -username: -- alice@example.com - --------------------------------------------------------------------------------- -For more details repeat the command with --debug flag. -``` - -Create the connector using `tctl`: - -```code -$ tctl create okta-connector.yaml -``` - -(!docs/pages/includes/enterprise/samlauthentication.mdx!) - -## Step 4/4. Create a developer Teleport role - -Now let's create the Teleport role that we will assign to members of the -`okta-dev` group. Create the local file `dev.yaml` with the content below. 
-
-```yaml
-kind: role
-version: v5
-metadata:
-  name: dev
-spec:
-  options:
-    max_session_ttl: 24h
-  allow:
-    logins: [ "{{email.local(external.username)}}", ubuntu ]
-    node_labels:
-      access: relaxed
-```
-
-Members of this role are assigned several attributes:
-
-- They are only allowed to log in to nodes labeled with the `access: relaxed` label.
-- They can log in as the `ubuntu` user or a user matching their email prefix.
-- They do not have any "allow rules", i.e. they will not be able to
-  see/replay past sessions or re-configure the Teleport cluster.
-
-Notice the `{{email.local(external.username)}}` login. It configures Teleport to
-look at the *"username"* Okta claim and use that field as an allowed login for
-each user. This example uses email as the username format. The
-`email.local(external.username)` function call will remove the `@domain` and
-leave the username prefix. For full details on how variable expansion works in
-Teleport roles, see the [Access Controls
-Reference](../../../reference/access-controls/roles.mdx).
-
-Use `tctl` to create this role in the Teleport Auth Service:
-
-```code
-$ tctl create dev.yaml
-```
-
-We don't need to repeat this process for the "editor" role because this is a
-preset role that is available by default in all Teleport clusters.
-
-(!docs/pages/includes/create-role-using-web.mdx!)
-
-## Testing
-
-The Web UI now contains a new "Okta" button at the login screen. To
-authenticate via the `tsh` CLI, specify the Proxy Service address and `tsh` will
-automatically use the default authentication type:
-
-```code
-$ tsh login --proxy=proxy.example.com
-```
-
-This command prints the SSO login URL (and will try to open it automatically
-in a browser).
-
-
-Teleport can use multiple SAML connectors. In this case a connector name
-can be passed via the `--auth` flag.
For the connector we created above: - -```code -$ tsh login --proxy=proxy.example.com --auth=okta -``` - - - -## Troubleshooting - -(!docs/pages/includes/sso/loginerrortroubleshooting.mdx!) diff --git a/docs/pages/admin-guides/access-controls/sso/one-login.mdx b/docs/pages/admin-guides/access-controls/sso/one-login.mdx deleted file mode 100644 index 8e9bf0355f182..0000000000000 --- a/docs/pages/admin-guides/access-controls/sso/one-login.mdx +++ /dev/null @@ -1,176 +0,0 @@ ---- -title: Teleport Authentication with OneLogin as an SSO Provider -description: How to configure Teleport access using OneLogin as an SSO provider -h1: Teleport Authentication with OneLogin ---- - -import SquareIcon from "@version/docs/img/sso/onelogin/teleport.png" -import RectangleIcon from "@version/docs/img/sso/onelogin/teleportlogo@2x.png" - -This guide will explain how to configure [OneLogin](https://www.onelogin.com/) -to issue Teleport credentials to specific groups of users. When used in -combination with role based access control (RBAC) it allows SSH administrators -to define policies like: - -- Only members of "DBA" group can connect to PostgreSQL databases. -- Developers must never SSH into production servers. -- ... and many others. - -## Prerequisites - -- A OneLogin account with admin access, and users assigned to at least two groups. -- Teleport role with access to maintaining `saml` resources. This is available - in the default `editor` role. - -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) - -- (!docs/pages/includes/tctl.mdx!) - -## Step 1/3. Create Teleport application in OneLogin - -1. In the OneLogin control panel's main menu navigate to **Applications** -> - **Add App**. Using the search box, select "SAML Custom Connector (SP - Shibboleth)": - - ![SAML Custom Connector (SP Shibboleth)](../../../../img/sso/onelogin/onelogin-saml-1.png) - -1. 
Define the new application:
-
-   ![SAML Config](../../../../img/sso/onelogin/onelogin-saml-1a.png)
-
-   You can find Teleport icons to upload from the following links:
-
-   - Square Icon
-   - Rectangular Icon
-
-1. From the application's **Configuration** page, set the following values:
-
-
-   Set
-   here with your Teleport Proxy Service address and port, or Teleport Enterprise
-   Cloud tenant (e.g. `company.teleport.sh:443`) to fill out the values below.
-
-
-   - **Login URL**:
-
-     `https://``/web/login`
-
-   - **ACS (Consumer) URL**, **SAML Recipient**, **ACS (Consumer) URL Validator**, & **Audience**:
-
-     `https://``/v1/webapi/saml/acs/onelogin`
-
-   ![Configure SAML](../../../../img/sso/onelogin/onelogin-saml-2.png)
-
-1. Teleport needs to assign groups to users. From the **Parameters** page,
-   configure the application with some parameters exposed as SAML attribute
-   statements:
-
-   ![New Field](../../../../img/sso/onelogin/onelogin-saml-3.png)
-
-   ![New Field Group](../../../../img/sso/onelogin/onelogin-saml-4.png)
-
-   Make sure to check the `Include in SAML assertion` checkbox.
-
-1. Add users to the application:
-
-   ![Add User](../../../../img/sso/onelogin/onelogin-saml-5.png)
-
-1. Obtain SAML metadata for your authentication connector. Once the application
-   is set up, navigate to the **More Actions** menu and find the **SAML
-   Metadata** option:
-
-   ![Download XML](../../../../img/sso/onelogin/saml-download.png)
-
-   You can either left-click the option and download the XML document as a local
-   file or right-click the option and copy the link address. The Teleport Auth
-   Service either reads the provided document or queries the address to obtain
-   SAML metadata. We recommend copying the address so the Auth Service can use
-   the most up-to-date information.
-
-## Step 2/3. Create a SAML connector
-
-Create a SAML connector using `tctl`. Update to the URL
-of the XML document that you copied in the previous step.
If you downloaded the
-XML document instead, use the path to the XML metadata file:
-
-```code
-$ tctl sso configure saml --preset onelogin \
---entity-descriptor \
---attributes-to-roles groups,admin,editor \
---attributes-to-roles groups,dev,access > onelogin.yaml
-```
-
-This will create `onelogin.yaml`, describing the connector resource:
-
-```yaml
-(!examples/resources/onelogin-connector.yaml!)
-```
-
-Test the newly created configuration:
-
-```code
-$ cat onelogin.yaml | tctl sso test
-```
-
-`tctl sso test` will open the browser and attempt to authenticate with OneLogin.
-If it succeeds, the output will print what SAML attributes are received and mapped
-to Teleport roles. If the test fails, the output will help you troubleshoot your
-configuration.
-
-Create the connector using the `tctl` tool:
-
-```code
-$ tctl create -f onelogin.yaml
-```
-
-(!docs/pages/includes/enterprise/samlauthentication.mdx!)
-
-## Step 3/3. Create a new Teleport role
-
-We are going to create a new role that'll use external username data from OneLogin
-to map to a host Linux login.
-
-In the role described below, Devs are only allowed to log in to nodes labeled with
-the `access: relaxed` Teleport label. Developers can log in as either `ubuntu` or a
-username that arrives in their assertions. Developers also do not have any
-rules needed to obtain admin access to Teleport.
-
-```yaml
-# dev.yaml
-kind: role
-version: v5
-metadata:
-  name: dev
-spec:
-  options:
-    max_session_ttl: "24h"
-  allow:
-    logins: [ "{{external.username}}", ubuntu ]
-    node_labels:
-      access: relaxed
-```
-
-**Notice:** Replace `ubuntu` with a Linux login available on your servers!
-
-Create the role:
-
-```code
-$ tctl create -f dev.yaml
-```
-
-(!docs/pages/includes/create-role-using-web.mdx!)
-
-## Troubleshooting
-
-(!docs/pages/includes/sso/loginerrortroubleshooting.mdx!)
- -## Next steps - -In the Teleport role we illustrated in this guide, `external` traits -are replaced with values from the single sign-on provider that the user -used to authenticate to Teleport. For full details on how traits -work in Teleport roles, see the [Access Controls -Reference](../../../reference/access-controls/roles.mdx). - diff --git a/docs/pages/admin-guides/access-controls/sso/sso.mdx b/docs/pages/admin-guides/access-controls/sso/sso.mdx deleted file mode 100644 index d5f2382e21d6b..0000000000000 --- a/docs/pages/admin-guides/access-controls/sso/sso.mdx +++ /dev/null @@ -1,669 +0,0 @@ ---- -title: Configure Single Sign-On -description: How to set up single sign-on (SSO) for SSH using Teleport ---- - -Teleport users can log in to servers, Kubernetes clusters, databases, web -applications, and Windows desktops through their organization's Single Sign-On -(SSO) provider. - - - -## How Teleport uses SSO - -You can register your Teleport cluster as an application with your SSO provider. -When a user signs in to Teleport, your SSO provider will execute its own -authentication flow, then send an HTTP request to your Teleport cluster to -indicate that authentication has completed. - -Teleport authenticates users to your infrastructure by issuing short-lived -certificates. After a user completes an SSO authentication flow, Teleport issues -a short-lived certificate to the user. Teleport also creates a temporary user on -the Auth Service backend. - -### Temporary `user` resources - -After a user completes an SSO authentication flow, Teleport creates a temporary -`user` resource for the user. - -When a user signs in to Teleport with `tsh login`, they can configure the TTL of -the `user` Teleport creates. Teleport enforces a limit of 30 hours (the default -is 12 hours). - -In the Teleport audit log, you will see an event of type `user.create` with -information about the temporary user. - -
-How can I inspect a temporary user resource? - -You can inspect a temporary `user` resource created via your SSO integration -by using the `tctl` command: - -```code -# Log in to your cluster with tsh so you can use tctl remotely -$ tsh login --proxy=example.teleport.sh -$ tctl get users/ -``` - -Here is an example of a temporary `user` resource created when the GitHub user -`myuser` signed in to GitHub to authenticate to Teleport. This resource -expires 12 hours after creation. The `created_by` field indicates that the -resource was created by Teleport's GitHub SSO integration: - -```yaml -kind: user -metadata: - expires: "2022-06-15T04:02:34.586688054Z" - id: 0000000000000000000 - name: myuser -spec: - created_by: - connector: - id: github - identity: myuser - type: github - time: "2022-06-14T16:02:34.586688441Z" - user: - name: system - expires: "0001-01-01T00:00:00Z" - github_identities: - - connector_id: github - username: myuser - roles: - - editor - - access - - auditor - status: - is_locked: false - lock_expires: "0001-01-01T00:00:00Z" - locked_time: "0001-01-01T00:00:00Z" - traits: - github_teams: - - my-team - kubernetes_groups: null - kubernetes_users: null - logins: - - root -version: v2 -``` - -
- -### Certificates for SSO users - -Along with creating a temporary user, Teleport issues SSH and X.509 certificates -to a successfully authenticated SSO user's machine. This enables SSO users to -authenticate to your cluster without Teleport needing to create a permanent -record of them. - -In the X.509 certificate, for example, the `Subject` field contains the same -information defined in the temporary `user` resource. This enables Teleport to -enforce RBAC rules for the authenticated user when they access resources in your -cluster. - -This is a `Subject` field for a certificate that Teleport issued for the GitHub -user `myuser`, who signed in to a Teleport cluster via the GitHub SSO -integration: - -``` -Subject: L=myuser/street=teleport.example.com/postalCode={"github_teams":["my-team"],"kubernetes_groups":null,"kubernetes_users":null,"logins":["root"]}, O=access, O=editor, O=auditor, CN=myuser/1.3.9999.1.7=teleport.example.com -``` - -The user belongs to the GitHub team `my-team`, which this Teleport cluster maps -to the `access`, `editor`, and `auditor` roles in Teleport. (Read the guide for -your SSO provider to determine how to configure role mapping.) - -
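As an illustration of where this information lives, the following sketch parses the role names (the `O=` organization fields) out of a condensed version of the example `Subject` line shown above. This is simple string parsing for demonstration only; it is not how Teleport consumes certificates:

```go
package main

import (
	"fmt"
	"strings"
)

// rolesFromSubject pulls the O= (Organization) values out of a rendered
// subject line; Teleport encodes the user's roles in those fields.
// Illustrative string parsing only, not certificate handling.
func rolesFromSubject(subject string) []string {
	var roles []string
	for _, field := range strings.Split(subject, ", ") {
		if v, ok := strings.CutPrefix(field, "O="); ok {
			roles = append(roles, v)
		}
	}
	return roles
}

func main() {
	// A condensed form of the example Subject field above.
	subject := `L=myuser/street=teleport.example.com, O=access, O=editor, O=auditor, CN=myuser`
	fmt.Println(rolesFromSubject(subject)) // [access editor auditor]
}
```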
-Inspecting your certificate subject - -To inspect the contents of an X.509 certificate issued for your user after you -sign in to Teleport via SSO, run the following commands: - -```code -$ TELEPORT_CLUSTER= -$ SSO_USER= -$ openssl x509 -text -in ~/.tsh/keys/${TELEPORT_CLUSTER}/${SSO_USER}-x509.pem | grep "Subject:" -``` - -You can inspect an SSH certificate issued for your Teleport user with the -following command: - -```code -$ ssh-keygen -L -f ~/.tsh/keys/${TELEPORT_CLUSTER}/${SSO_USER}-ssh/${TELEPORT_CLUSTER}-cert.pub -``` - -
 - -### Multiple SSO providers - -Since Teleport creates temporary users and issues short-lived certificates when -a user authenticates via SSO, it is straightforward to integrate Teleport with -multiple SSO providers. Besides the temporary `user` resource, no persistent -backend data in Teleport is tied to a user's account with the SSO provider. - -This also means that if one SSO provider becomes unavailable, the end user only -needs to choose another SSO provider when signing in to Teleport. While the -user may be locked out of their account with the first SSO provider, signing in -via the second provider is sufficient for Teleport to issue a new certificate -and grant the user access to your infrastructure. - -Note that if the username of an SSO user already belongs to a user registered -locally with the Auth Service (i.e., created via `tctl users add`), the SSO -login will fail. - -## Logging in via SSO - -Users can log in to Teleport via your SSO provider by executing a command -similar to the following, using the `--auth` flag to specify the provider: - -```code -# This command will automatically open the default web browser and take a user -# through the login process with an SSO provider -$ tsh login --proxy=proxy.example.com --auth=github -``` - -The command opens a browser window and prints a URL in the terminal that the -user can visit to complete their SSO flow: - -```text -If browser window does not open automatically, open it by clicking on the link: -http://127.0.0.1:45235/055a310a-1099-43ea-8cf6-ffc41d88ad1f -``` - -Teleport will wait for up to 3 minutes for a user to authenticate. If -authentication succeeds, Teleport will retrieve SSH and X.509 certificates and -store them in the `~/.tsh/keys/` directory. The tool will also -add the SSH certificate to an SSH agent, if one is running.
 - -### Changing Callback Address - -The callback address can be changed if calling back to a remote machine -instead of the local machine is required: - -```code -# --bind-addr sets the host and port tsh will listen on, and --callback changes -# what link is displayed to the user -$ tsh login --proxy=proxy.example.com --auth=github --bind-addr=localhost:1234 --callback https://remote.machine:1234 -``` - -For this to work, the hostname or CIDR range of the remote machine that will be used for -the callback needs to be allowed via your auth connector's `client_redirect_settings`: - -```yaml -kind: oidc -metadata: - name: example-connector -spec: - client_redirect_settings: - # a list of hostnames allowed for HTTPS client redirect URLs - # can be a regex pattern - allowed_https_hostnames: - - remote.machine - - '*.app.github.dev' - - '^\d+-[a-zA-Z0-9]+\.foo.internal$' - # a list of CIDRs allowed for HTTP or HTTPS client redirect URLs - insecure_allowed_cidr_ranges: - - '192.168.1.0/24' - - '2001:db8::/96' -``` - -## Configuring SSO for login - -Teleport works with SSO providers by relying on the concept of an -**authentication connector**. An authentication connector is a configuration -resource that controls how SSO users log in to Teleport—and which Teleport roles -they will assume once they do. - -This means that you can apply fine-grained RBAC policies to your Teleport -cluster without needing to change the solution you use for on- and offboarding -users. - -### Supported connectors - -The following authentication connectors are supported: - - - - -|Type|Description| -|---|---| -|None|If no authentication connector is created, Teleport will use local authentication based on user information stored
in the Auth Service backend. You can manage user data via the web UI Users page and the `tctl users` command. | -|`saml`| The SAML connector type uses the [SAML protocol](https://en.wikipedia.org/wiki/Security_Assertion_Markup_Language) to authenticate users and query their group membership.| -|`oidc`| The OIDC connector type uses the [OpenID Connect protocol](https://en.wikipedia.org/wiki/OpenID_Connect) to authenticate users
and query their group membership.| -|`github`| The GitHub connector uses GitHub SSO to authenticate users and query their group membership.| - -
 - - -|Type|Description| -|---|---| -|None|If no authentication connector is created, Teleport will use local authentication based on user information stored in the Auth Service backend. You can manage user data via the web UI Users page and the `tctl users` command. | -|`github`| The GitHub connector uses GitHub SSO to authenticate users and query their group membership.| - - - -
- -### Creating an authentication connector - -Before you can create an authentication connector, you must enable -authentication via that connector's protocol. - -To set the default authentication type as `saml`, `oidc`, or `github`, -create a `cluster_auth_preference` resource. - -Create a file called `cap.yaml`: - ```yaml - kind: cluster_auth_preference - metadata: - name: cluster-auth-preference - spec: - # Set as saml, oidc, or github - type: saml|oidc|github - version: v2 - ``` - -Create the resource: - - ```code - # Log in to your cluster with tsh so you can run tctl commands. - $ tsh login --proxy=example.teleport.sh --user=myuser - $ tctl create -f cap.yaml - ``` - - -Next, define an authentication connector. Create a file called `connector.yaml` -based on one of the following examples. Teleport Community Edition only supports -GitHub as an SSO option. - - - - -```yaml -(!/examples/resources/saml-connector.yaml!) -``` - -(!docs/pages/includes/sso/idp-initiated.mdx!) - -(!docs/pages/includes/sso/saml-slo.mdx!) - -You may use `entity_descriptor_url` in lieu of `entity_descriptor` to fetch -the entity descriptor from your IDP. - -We recommend "pinning" the entity descriptor by including the XML rather than -fetching from a URL. - - - - -```yaml -(!/examples/resources/onelogin-connector.yaml!) -``` - -You may use `entity_descriptor_url`, in lieu of `entity_descriptor`, to fetch -the entity descriptor from your IDP. - -We recommend "pinning" the entity descriptor by including the XML rather than -fetching from a URL. - - - - -```yaml -(!/examples/resources/oidc-connector.yaml!) -``` - - - - -```yaml -(!/examples/resources/gworkspace-connector-inline.yaml!) -``` - - - - -```yaml -(!/examples/resources/adfs-connector.yaml!) -``` - -You may use `entity_descriptor_url`, in lieu of `entity_descriptor`, to fetch -the entity descriptor from your IDP. - -We recommend "pinning" the entity descriptor by including the XML rather than -fetching from a URL. 
- - - - -```yaml -(!/examples/resources/saml-connector.yaml!) -``` - -(!docs/pages/includes/sso/idp-initiated.mdx!) - -(!docs/pages/includes/sso/saml-slo.mdx!) - -You may use `entity_descriptor_url`, in lieu of `entity_descriptor`, to fetch -the entity descriptor from your IDP. - -We recommend "pinning" the entity descriptor by including the XML rather than -fetching from a URL. - - - - -```yaml -(!/examples/resources/github.yaml!) -``` - - - - -Create the connector: - -```code -$ tctl create -f connector.yaml -``` - -### User logins - -Often it is required to restrict SSO users to their unique UNIX logins when they -connect to Teleport Nodes. To support this: - -- Use the SSO provider to create a field called `unix_login` (you can use another name). -- Make sure the `unix_login` field is exposed as a claim via SAML/OIDC. -- Update a Teleport role to include the `{{external.unix_login}}` variable in the list of allowed logins: - -```yaml -kind: role -version: v5 -metadata: - name: sso_user -spec: - allow: - logins: - - '{{external.unix_login}}' - node_labels: - '*': '*' -``` - -### Provider-Specific Workarounds - -Certain SSO providers may require or benefit from changes to Teleport's SSO -flow. These provider-specific changes can be enabled by setting the -`spec.provider` property of the connector definition to one of the following -values to match your identity provider: - -- `adfs` (SAML): Required for compatibility with Active Directory (ADFS); refer - to the full [ADFS guide](adfs.mdx) for details. -- `netiq` (OIDC): Used to enable NetIQ-specific ACR value processing; refer to - the [OIDC guide](oidc.mdx) for details. -- `ping` (SAML and OIDC): Required for compatibility with Ping Identity (including - PingOne and PingFederate). -- `okta` (OIDC): Required when using Okta as an OIDC provider. - -At this time, the `spec.provider` field should not be set for any other identity providers. 
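For instance, a SAML connector intended for ADFS would set the field as follows. This is a minimal, illustrative fragment; see the ADFS guide for a complete connector:

```yaml
kind: saml
version: v2
metadata:
  name: adfs
spec:
  # Enables ADFS-specific compatibility behavior in Teleport's SAML flow.
  provider: adfs
  # ... the rest of the connector spec (acs, entity_descriptor, etc.)
```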
 - -## Configuring SSO for MFA checks - -Teleport administrators can configure Teleport to delegate MFA checks to an -SSO provider as an alternative to registering MFA devices directly with the Teleport cluster. -This allows Teleport users to use MFA devices and custom flows configured in the SSO provider -to carry out privileged actions in Teleport, such as: - -- [Per-session MFA](../guides/per-session-mfa.mdx) -- [Moderated sessions](../guides/joining-sessions.mdx) -- [Admin actions](../guides/mfa-for-admin-actions.mdx) - -Administrators may want to consider enabling this feature in order to: - -- Route all authentication (login and MFA) through the IDP, reducing administrative overhead -- Build custom MFA flows, such as prompting for two distinct devices in a single MFA check -- Integrate with non-WebAuthn devices supported directly by your IDP - - - SSO MFA is an enterprise feature. Only OIDC and SAML auth connectors are supported. - - -### Configure the IDP App / Client - -Unlike SAML/OIDC login, there is no standardized MFA flow, so each IDP may -offer zero, one, or more ways to perform MFA checks. - -Generally, these offerings will fall under one of the following cases: - -1. Use a separate IDP app for MFA: - -You can create a separate IDP app with a custom MFA flow. For example, with -Auth0 (OIDC), you can create a separate app with a custom [Auth0 Action](https://auth0.com/docs/customize/actions) -which prompts for MFA for an active OIDC session. - -2. Use the same IDP app for MFA: - -Some IDPs provide a way to fork into different flows using the same IDP app. -For example, with Okta (OIDC), you can provide `acr_values: ["phr"]` to -[enforce phishing resistant authentication](https://developer.okta.com/docs/guides/step-up-authentication/main/#predefined-parameter-values). - -For a simpler approach, you could use the same IDP app for both login and MFA -with no adjustments.
For Teleport MFA checks, the user will be required to -log in again through the IDP with username, password, and MFA if required. - - - While the customizability of SSO MFA presents multiple secure options previously - unavailable to administrators, it also presents the possibility of insecure - misconfigurations. Therefore, we strongly advise administrators to incorporate - strict, phishing-resistant checks with WebAuthn, Device Trust, or some similar - security features into their custom SSO MFA flow. - - -### Updating your authentication connector to enable MFA checks - -Take the authentication connector file `connector.yaml` created in [Configuring SSO for login](#configuring-sso-for-login) -and add MFA settings. - - - - -```yaml -(!examples/resources/oidc-connector-mfa.yaml!) -``` - - - - -```yaml -(!examples/resources/saml-connector-mfa.yaml!) -``` - -You may use `entity_descriptor_url` in lieu of `entity_descriptor` to fetch -the entity descriptor from your IDP. - -We recommend "pinning" the entity descriptor by including the XML rather than -fetching from a URL. - - - - -Update the connector: - -```code -$ tctl create -f connector.yaml -``` - -### Allowing SSO as an MFA method in your cluster - -Before you can use the SSO MFA flow we created above, you need to enable SSO for -multi-factor authentication in your cluster settings. Modify the dynamic config -resource using the following command: - -```code -$ tctl edit cluster_auth_preference -``` - -Make the following change: - -```diff -kind: cluster_auth_preference -version: v2 -metadata: - name: cluster-auth-preference -spec: - # ... - second_factors: - - webauthn -+ - sso -``` - -## Working with an external email identity - -Along with sending groups, an SSO provider will also provide a user's email address. -In many organizations, the username that a person uses to log in to a system is the -same as the first part of their email address, the "local" part.
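Extracting that local part is a simple string operation: take everything before the `@`. The sketch below illustrates the computation (an illustration only, not Teleport's implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// emailLocal returns the part of an email address before the "@".
// If the input contains no "@", it is returned unchanged.
// Illustrative sketch, not Teleport's actual code.
func emailLocal(addr string) string {
	local, _, found := strings.Cut(addr, "@")
	if !found {
		return addr
	}
	return local
}

func main() {
	fmt.Println(emailLocal("dave.smith@example.com")) // dave.smith
}
```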
- -For example, `dave.smith@example.com` might log in with the username `dave.smith`. -Teleport provides an easy way to extract the first part of an email address so -it can be used as a username. This is the `{{email.local}}` function. - -If the email claim from the identity provider (which can be accessed via -`{{external.email}}`) is sent and contains an email address, you can extract the -"local" part of the email address before the @ sign like this: -`{{email.local(external.email)}}` - -Here's how this looks in a Teleport role: - -```yaml -kind: role -version: v5 -metadata: - name: sso_user -spec: - allow: - logins: - # Extracts the local part of dave.smith@acme.com, so the login will - # now support dave.smith. - - '{{email.local(external.email)}}' - node_labels: - '*': '*' -``` - -## Working with multiple SSO providers - -Teleport can also support multiple connectors. For example, a Teleport -administrator can define and create multiple connector resources using -`tctl create` as shown above. - -To see all configured connectors, execute this command on the Auth Service: - -```code -$ tctl get connectors -``` - -To delete/update connectors, use the usual `tctl rm` and `tctl create` commands -as described in the [Resources Reference](../../../reference/resources.mdx). 
 - -If multiple authentication connectors exist, clients must supply a -connector name to `tsh login` via the `--auth` argument: - -```code -# use "okta" SAML connector: -$ tsh --proxy=proxy.example.com login --auth=okta - -# use local Teleport user DB: -$ tsh --proxy=proxy.example.com login --auth=local --user=admin -``` - -Refer to the following guides to configure authentication connectors of both -SAML and OIDC types: - -- [SSH Authentication with Okta](okta.mdx) -- [SSH Authentication with OneLogin](one-login.mdx) -- [SSH Authentication with ADFS](adfs.mdx) -- [SSH Authentication with OAuth2 / OpenID Connect](oidc.mdx) - -## SSO customization - -Use the `display` field in an authentication connector to control the appearance -of SSO buttons in the Teleport Web UI. - -| Provider | YAML | Example | -| - | - | - | -| GitHub | `display: GitHub` | ![github](../../../../img/teleport-sso/github@2x.png) | -| Microsoft | `display: Microsoft` | ![microsoft](../../../../img/teleport-sso/microsoft@2x.png) | -| Google | `display: Google` | ![google](../../../../img/teleport-sso/google@2x.png) | -| BitBucket | `display: Bitbucket` | ![bitbucket](../../../../img/teleport-sso/bitbucket@2x.png) | -| OpenID | `display: Okta` | ![Okta](../../../../img/teleport-sso/openId@2x.png) | - -## Troubleshooting - -Troubleshooting SSO configuration can be challenging. Usually a Teleport administrator -must be able to: - - - -- Ensure that HTTP/TLS certificates are configured properly for both the Teleport - Proxy Service and the SSO provider. - - - -- See which SAML/OIDC claims and values are exported and passed - by the SSO provider to Teleport. -- See how Teleport maps the received claims to role mappings as defined - in the connector. - -If something is not working, we recommend that you: - -- Double-check the hostnames, tokens, and TCP ports in the connector definition.
 - -### Using the Web UI - -If you get "access denied" or other login errors, the number one place to check is the Audit -Log. You can access it in the **Activity** tab of the Teleport Web UI. - -![Audit Log Entry for SSO Login error](../../../../img/sso/teleportauditlogssofailed.png) - -Example of a user being denied because the role `clusteradmin` wasn't set up: - -```json -{ - "code": "T1001W", - "error": "role clusteradmin is not found", - "event": "user.login", - "message": "Failed to calculate user attributes.\n\trole clusteradmin is not found", - "method": "oidc", - "success": false, - "time": "2024-11-07T15:41:25.584Z", - "uid": "71e46f17-d611-48bb-bf5e-effd90016c13" -} -``` - -### Teleport does not show the expected Nodes - -(!docs/pages/includes/node-logins.mdx!) - -When configuring SSO, ensure that the identity provider is populating each user's -traits correctly. For a user to see a Node in Teleport, the result of populating a -template variable in a role's `allow.logins` must match at least one of a user's -`traits.logins`. - -In the following example, a user will have the usernames `ubuntu` and `debian`, plus -usernames from the SSO trait `logins`, for Nodes that have an `env: dev` label. If the -SSO trait `logins` contains `bob`, then the usernames would include `ubuntu`, `debian`, and `bob`. - -```yaml -kind: role -metadata: - name: example-role -spec: - allow: - logins: ['{{external.logins}}', ubuntu, debian] - node_labels: - 'env': 'dev' -version: v5 -``` - -## Next steps - -The roles we illustrated in this guide use `external` traits, -which Teleport replaces with values from the single sign-on provider that the -user used to authenticate with Teleport. For full details on how variable -expansion works in Teleport roles, see the [Access Controls -Reference](../../../reference/access-controls/roles.mdx).
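The substitution semantics described above can be sketched in a few lines of Go. This is a simplified illustration with hypothetical helper names, not Teleport's implementation:

```go
package main

import (
	"fmt"
	"regexp"
)

// traitPattern matches template variables such as {{external.logins}}.
var traitPattern = regexp.MustCompile(`^\{\{external\.([a-zA-Z_]+)\}\}$`)

// expandLogins resolves each entry in a role's allow.logins list:
// literal entries pass through unchanged, and {{external.xxx}} entries
// expand to every value of that trait. Simplified illustration only.
func expandLogins(logins []string, traits map[string][]string) []string {
	var out []string
	for _, l := range logins {
		if m := traitPattern.FindStringSubmatch(l); m != nil {
			out = append(out, traits[m[1]]...)
			continue
		}
		out = append(out, l)
	}
	return out
}

func main() {
	// The example-role above, for an SSO user whose logins trait is ["bob"].
	got := expandLogins(
		[]string{"{{external.logins}}", "ubuntu", "debian"},
		map[string][]string{"logins": {"bob"}},
	)
	fmt.Println(got) // [bob ubuntu debian]
}
```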
diff --git a/docs/pages/admin-guides/admin-guides.mdx b/docs/pages/admin-guides/admin-guides.mdx deleted file mode 100644 index 6bf703cb6bbc2..0000000000000 --- a/docs/pages/admin-guides/admin-guides.mdx +++ /dev/null @@ -1,6 +0,0 @@ ---- -title: Teleport Admin Guides -description: Provides step-by-step instructions for completing administrative tasks in Teleport. ---- - - diff --git a/docs/pages/admin-guides/api/access-plugin.mdx b/docs/pages/admin-guides/api/access-plugin.mdx deleted file mode 100644 index 3178ab6adb30c..0000000000000 --- a/docs/pages/admin-guides/api/access-plugin.mdx +++ /dev/null @@ -1,472 +0,0 @@ ---- -title: How to Build an Access Request Plugin -description: Manage Access Requests using custom workflows with the Teleport API ---- - -With Teleport [Access Requests](../access-controls/access-requests/access-requests.mdx), you can -assign Teleport users to less privileged roles by default and allow them to -temporarily escalate their privileges. Reviewers can grant or deny Access -Requests within your organization's existing communication workflows (e.g., -Slack, email, and PagerDuty) using [Access Request -plugins](../access-controls/access-request-plugins/access-request-plugins.mdx). - -You can use Teleport's API client library to build an Access Request plugin that -integrates with your organization's unique workflows. - -In this guide, we will explore a number of Teleport's API client libraries by -showing you how to write a plugin that lets you manage Access Requests via -Google Sheets. The plugin lists new Access Requests in a Google Sheets -spreadsheet, with links to allow or deny each request. - -![The result of the plugin](../../../img/api/google-sheets.png) - - - -The plugin we will build in this guide is intended as a learning tool. **Do not -connect it to your production Teleport cluster.** Use a demo cluster instead. - - - -## Prerequisites - -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) 
- -- Go version (=teleport.golang=)+ installed on your workstation. See the [Go - download page](https://go.dev/dl/). You do not need to be familiar with Go to - complete this guide, though Go knowledge is required if you want to build your - own Access Request plugin. - -You will need the following in order to set up the demo plugin, which requires -authenticating to the Google Sheets API: - -- A Google Cloud project with permissions to create service accounts. -- A Google account that you will use to create a Google Sheets spreadsheet. We - will grant permissions to edit the spreadsheet to the service account used for - the plugin. - - - -Even if you do not plan to set up the demo project, you can follow this guide to see -which libraries, types, and functions you can use to develop an Access Request -plugin. - -The demo is a minimal working example, and you can see fully fledged plugins in -the -[`gravitational/teleport`](https://github.com/gravitational/teleport/tree/v(=teleport.version=)/integrations/access). -repository on GitHub. - - - -## Step 1/5. Set up your Go project - -Download the source code for our minimal Access Request plugin: - -```code -$ git clone https://github.com/gravitational/teleport -b branch/v(=teleport.major_version=) -$ cd teleport/examples/access-plugin-minimal -``` - -For the rest of this guide, we will show you how to set up this plugin and -explore the way the plugin uses Teleport's API to integrate Access Requests with -a particular workflow. - -## Step 2/5. Set up the Google Sheets API - -Access Request plugins typically communicate with two APIs. They receive Access -Request events from the Teleport Auth Service's gRPC API, and use the data to -interact with the API of your chosen messaging or collaboration tool. - -In this section, we will enable the Google Sheets API, create a Google Cloud -service account for the plugin, and use the service account to authenticate the -plugin to Google Sheets. 
- -### Enable the Google Sheets API - -Enable the Google Sheets API by visiting the following Google Cloud console URL: - -https://console.cloud.google.com/apis/enableflow?apiid=sheets.googleapis.com - -Ensure that your Google Cloud project is the one you intend to use. - -Click **Next** > **Enable**. - -### Create a Google Cloud service account for the plugin - -Visit the following Google Cloud console URL: - -https://console.cloud.google.com/iam-admin/serviceaccounts - -Click **Create Service Account**. - -For **Service account name**, enter "Teleport Google Sheets Plugin". Google -Cloud will populate the **Service account ID** field for you. - -Click **Create and Continue**. When prompted to grant roles to the service -account, click **Continue** again. We will create our service account without -roles. Skip the step to grant users access to the service account, clicking -**Done**. - -The console will take you to the **Service accounts** view. Click the name of -the service account you just created, then click the **Keys** tab. Click **Add -Key**, then **Create new key**. Leave the **Key type** as "JSON" and click -**Create**. - -Save your Google Cloud credentials file as `credentials.json` in your Go project -directory. - -Your plugin will use this JSON file to authenticate to Google Sheets. - -### Create a Google Sheets spreadsheet - -Visit the following URL and make sure you are authenticated as the correct user: - -https://sheets.new - -Name your spreadsheet. - -Give the plugin access to the spreadsheet by clicking **Share**. In the **Add -people and groups** field, enter -`teleport-google-sheets-plugin@PROJECT_NAME.iam.gserviceaccount.com`, replacing -`PROJECT_NAME` with the name of your project. Make sure that the service account -has "Editor" permissions. Click **Share**, then **Share anyway** when prompted -with a warning. - -By authenticating to Google Sheets with the service account you created, the -plugin will have access to modify your spreadsheet. 
- -Next, ensure that the following is true within your spreadsheet: - -- There is only one sheet -- The sheet includes the following columns: - -|ID|Created|User|Roles|Status|Link| -|---|---|---|---|---|---| - -After we write our Access Request plugin, it will populate the spreadsheet with -data automatically. - -## Step 3/5. Set up Teleport RBAC - -In this section, we will set up Teleport roles that enable creating and -reviewing Access Requests, plus another Teleport role that can generate -credentials for your Access Request plugin to authenticate to Teleport. - -### Create a user and role for the plugin - -(!docs/pages/includes/plugins/rbac-with-friendly-name.mdx!) - -(!/docs/pages/includes/plugins/rbac-impersonate.mdx!) - -### Export the access plugin identity - -You will use the `tctl auth sign` command to request the credentials that the -`access-plugin` needs to connect to your Teleport cluster. - -The following `tctl auth sign` command impersonates the `access-plugin` user, -generates signed credentials, and writes an identity file to the local -directory: - -```code -$ tctl auth sign --user=access-plugin --out=auth.pem -``` - -Teleport's Access Request plugins listen for new and updated Access Requests by -connecting to the Teleport Auth Service's gRPC endpoint over TLS. - -The identity file, `auth.pem`, includes both TLS and SSH credentials. Your -Access Request plugin uses the SSH credentials to connect to the Proxy Service, -which establishes a reverse tunnel connection to the Auth Service. The plugin -uses this reverse tunnel, along with your TLS credentials, to connect to the -Auth Service's gRPC endpoint. - -You will refer to this file later when configuring the plugin. - -### Set up Role Access Requests - -In this guide, we will use our plugin to manage Role Access Requests. For this -to work, we will set up Role Access Requests in your cluster. - -(!/docs/pages/includes/plugins/editor-request-rbac.mdx!) - -## Step 4/5. 
Write the Access Request plugin - -In this step, we will walk you through the structure of the Access Request -plugin in `examples/access-plugin-minimal/main.go`. You can use the example here -to write your own Access Request plugin. - -### Imports - -Here are the packages our Access Request plugin will import from Go's standard -library: - -|Package|Description| -|---|---| -|`context`|Includes the `context.Context` type. `context.Context` is an abstraction for controlling long-running routines, such as connections to external services, that might fail or time out. Programs can cancel contexts or assign them timeouts and metadata. | -|`errors`|Working with errors.| -|`fmt`|Formatting data for printing, strings, or errors.| -|`strings`|Manipulating strings.| - -The plugin imports the following third-party code: - -|Package|Description| -|---|---| -|`github.com/gravitational/teleport/api/client`|A library for authenticating to the Auth Service's gRPC API and making requests.| -|`github.com/gravitational/teleport/api/types`|Types used in the Auth Service API, e.g., Access Requests.| -|`github.com/gravitational/trace`|Presenting errors with more useful detail than the standard library provides.| -|`google.golang.org/api/option`|Settings for configuring Google API clients.| -|`google.golang.org/api/sheets/v4`|The Google Sheets API client library, aliased as `sheets` in our program.| -|`google.golang.org/grpc`|The gRPC client and server library.| - -### Configuration - -First, we declare two constants that you need to configure for your environment: - -```go -(!examples/access-plugin-minimal/config.go!) -``` - -`proxyAddr` indicates the hostname and port of your Teleport Proxy Service or -Teleport Enterprise Cloud tenant. Assign it to the address of your own Proxy -Service, e.g., `mytenant.teleport.sh:443`. - -Assign `spreadSheetID` to the ID of the spreadsheet you created earlier. To find -the spreadsheet ID, visit your spreadsheet in Google Drive. 
The ID will be in -the URL path segment called `SPREADSHEET_ID` below: - -```text -https://docs.google.com/spreadsheets/d/SPREADSHEET_ID/edit#gid=0 -``` - -### The `AccessRequestPlugin` type - -The `plugin.go` file declares types that we will use to organize our Access -Request plugin code: - -```go -(!examples/access-plugin-minimal/plugin.go!) -``` - -The `AccessRequestPlugin` type represents a generic Access Request plugin, and -you can use this type to build your own plugin. It contains a Teleport API -client and an `EventHandler`: any Go type that implements a `HandleEvent` -method. - -In our case, the type that implements `HandleEvent` is `googleSheetsClient`, a -struct type that contains an API client for Google Sheets. - -### Prepare row data - -Whether creating a new row of the spreadsheet or updating an existing one, we -need a way to extract data from an Access Request in order to provide it to -Google Sheets. We achieve this with the `makeRowData` method: - -```go -(!examples/access-plugin-minimal/makerowdata.go!) -``` - -The `sheets.RowData` type makes extensive use of pointers to strings, so we -introduce a utility function called `stringPtr` that returns the pointer to the -provided string. This makes it easier to assign the values of cells in the -`sheets.RowData` using chains of function calls. - -`makeRowData` is a method of the `googleSheetsClient` type. (The `*` before -`googleSheetsClient` indicates that the method receives a *pointer* to a -`googleSheetsClient`.) It takes a `types.AccessRequest`, which Teleport's API -library uses to represent the fields within an Access Request. - -The Google Sheets client library defines a `sheets.RowData` type that we -include in requests to update a spreadsheet. This function converts a -`types.AccessRequest` into a `*sheets.RowData` (another pointer). - -Access Requests have one of four states: approved, denied, pending, and none.
-We obtain the request states from Teleport's `types` library and map them to -strings in the `requestStates` map. - -When extracting the data, we use the `types.AccessRequest.GetName()` method to -retrieve the ID of the Access Request as a string we can include in the -spreadsheet. - -Users can review an Access Request by visiting a URL within the Teleport Web UI -that corresponds to the request's ID. `makeRowData` assembles a `=HYPERLINK` -formula that we can insert into the spreadsheet as a link to this URL. - -### Create a row - -The following function submits a request to the Google Sheets API to create a -new row based on an incoming Access Request, using the data returned by -`makeRowData`. It returns an error if the attempt to create a row failed: - -```go -(!examples/access-plugin-minimal/createrow.go!) -``` - -`createRow` assembles a `sheets.BatchUpdateSpreadsheetRequest` and sends it to -the Google Sheets API using `g.sheetsClient.BatchUpdate()`, returning errors -encountered while sending the request. - -We log unexpected HTTP status codes without returning an error since these may -be transient server-side issues. A production Access Request plugin would handle -these situations in a more sophisticated way, e.g., storing the request so it -can retry it later. - -### Update a row - -The code for updating a row is similar to the code for creating a new row: - -```go -(!examples/access-plugin-minimal/updaterow.go!) -``` - -The only difference between `updateRow` and `createRow` is that we send a -`&sheets.UpdateCellsRequest` instead of a `&sheets.AppendCellsRequest`. This -function takes the number of a row within the spreadsheet to update and sends a -request to update that row with information from the provided Access Request. 
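The state mapping and `=HYPERLINK` formula assembly described above can be sketched in isolation. The following is a minimal, self-contained illustration: the `requestState` type, its constants, and the Web UI URL path are illustrative stand-ins, not the actual identifiers from Teleport's `types` package.

```go
package main

import "fmt"

// requestState is a stand-in for the request states defined in Teleport's
// types library.
type requestState int

const (
	statePending requestState = iota
	stateApproved
	stateDenied
	stateNone
)

// requestStates maps request states to the strings written to the sheet.
var requestStates = map[requestState]string{
	statePending:  "PENDING",
	stateApproved: "APPROVED",
	stateDenied:   "DENIED",
	stateNone:     "NONE",
}

// viewRequestFormula assembles a =HYPERLINK formula pointing at an Access
// Request's page in the Teleport Web UI. The URL path is an assumption for
// illustration; the real plugin builds it from the request's ID.
func viewRequestFormula(proxyAddr, requestID string) string {
	return fmt.Sprintf(
		`=HYPERLINK("https://%v/web/requests/%v", "View Access Request")`,
		proxyAddr, requestID,
	)
}

func main() {
	fmt.Println(requestStates[statePending])
	fmt.Println(viewRequestFormula("mytenant.teleport.sh:443", "REQUEST_ID"))
}
```

The real `makeRowData` performs the same two conversions before packing the resulting strings into a `*sheets.RowData`.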
- -### Determine where to update the spreadsheet - -When our program receives an event that updates an Access Request, it needs a -way to look up the row in the spreadsheet that corresponds to the Access Request -so it can update the row: - -```go -(!examples/access-plugin-minimal/updatespreadsheet.go!) -``` - -`updateSpreadSheet` takes a `types.AccessRequest`, gets the latest data from -your spreadsheet, determines which row to update, and calls `updateRow` -accordingly. It uses linear search to look up the first column within each row -of the sheet and check whether that column matches the ID of the Access Request. -It then calls `updateRow` with the Access Request and the row's number. - -### Handle incoming Access Requests - -The plugin calls a handler function when it receives an event. To set this up, -we use the `Run` method of our generic `AccessRequestPlugin` type, which -contains the main loop of the plugin: - -```go -(!examples/access-plugin-minimal/watcherjob.go!) -``` - -As we described above, the `AccessRequestPlugin` type's `EventHandler` field is -assigned to an interface with a `HandleEvent` method. In this case, the -implementation is `*googleSheetsClient.HandleEvent`. This method checks whether -an Access Request is in a pending state, i.e., whether the request is new. If -so, it calls `createRow`. If not, it calls `updateSpreadsheet`. - -The Teleport API client type, `client.Client`, has a `NewWatcher` method that -listens for new audit events from the Auth Service API via a gRPC stream. The -second parameter of the method indicates the type of audit event to watch for, -in this case, events having to do with Access Requests. - -The result of `NewWatcher`, a `types.Watcher`, enables `Run` to respond to new -audit events by calling the `Events` method. This returns a Go **channel**, a -runtime abstraction that allows concurrent routines to communicate. Another -channel, returned by `Done`, indicates when the watcher has finished. 
- -In a `for` loop, the `Run` method receives from either the `Done` channel or -`Events` channel, whichever is ready to send first. If it receives from the -`Events` channel, it calls the `HandleEvent` method to process the event. - -### Initialize the API clients - -Now we have all the code we need to use the Teleport and Google Sheets API -clients to listen for Access Request events and use them to maintain a -spreadsheet. The final step is to start our program by initializing the API -clients: - -```go -(!examples/access-plugin-minimal/main.go!) -``` - -The `main` function, the entrypoint to our program, initializes an -`AccessRequestPlugin` and `googleSheetsClient` and uses them to run the plugin. - -The function creates a Google Sheets API client by loading the credentials file -you downloaded earlier at the relative path `credentials.json`. - -`client` is Teleport's library for setting up an API client. Our plugin does so -by calling `client.LoadIdentityFile` to obtain a `client.Credentials`. It then -uses the `client.Credentials` to call `client.New`, which connects to the -Teleport Proxy Service specified in the `Addrs` field using the provided -identity file. - -In this example, we are passing the `grpc.WithReturnConnectionError()` function -call to `client.New`, which instructs the gRPC client to return more detailed -connection errors. - - - -This program does not validate your credentials or Teleport cluster address. -Make sure that: - -- The identity file you exported earlier does not have an expired TTL -- The value you supplied for the `proxyAddr` constant includes both the host - **and** the web port of your Teleport Proxy Service, e.g., - `mytenant.teleport.sh:443` - - - -## Step 5/5. Test your plugin - -Run the plugin to forward Access Requests from your Teleport cluster to Google -Sheets.
Execute the following command from within -`examples/access-plugin-minimal`: - -```code -$ go run teleport-sheets -``` - -Now that the plugin is running, create an Access Request: - -(!docs/pages/includes/plugins/create-request.mdx!) - -You should see the new Access Request in your spreadsheet with the `PENDING` -state. - -In your spreadsheet, click "View Access Request" next to your new request. Sign -in to the Teleport Web UI as your original user. When you submit your review -(e.g., denying the request), the new status appears within the spreadsheet. - - - -Access Request plugins must not enable reviewing Access Requests via the plugin, -and must always refer a reviewer to the Teleport Web UI to complete the review. -Otherwise, an unauthorized party could spoof traffic to the plugin and escalate -privileges. - - - -## Next steps - -In this guide, we showed you how to set up an Access Request plugin using -Teleport's API client libraries. To go beyond the minimal plugin we demonstrate -in this guide, you can use the Teleport API to set up more sophisticated -workflows that take full advantage of your communication and project management -tools. - -### Manage state - -While the plugin we developed in this guide is stateless, updating Access -Request information by searching all rows of a spreadsheet, real-world Access -Request plugins typically need to manage state. You can use the -[`plugindata`](https://pkg.go.dev/github.com/gravitational/teleport/api/types#PluginData) -package to make it easier for your Access Request plugin to do this. - -### Consult the examples - -Explore the -[`gravitational/teleport`](https://github.com/gravitational/teleport/tree/v(=teleport.version=)/integrations/access) -repository on GitHub for examples of plugins developed at Teleport. You can see -how these plugins use the packages we discuss in this guide, as well as how they -add more complete functionality like configuration validation and state -management.
- -### Provision the plugin with short-lived credentials - -In this example, we used the `tctl auth sign` command to fetch credentials for -the plugin. For production usage, we recommend provisioning short-lived -credentials via Machine ID, which reduces the risk of these credentials being -stolen. View our [Machine ID documentation](../../enroll-resources/machine-id/introduction.mdx) to -learn more. - diff --git a/docs/pages/admin-guides/api/getting-started.mdx b/docs/pages/admin-guides/api/getting-started.mdx deleted file mode 100644 index 34d8438868ec7..0000000000000 --- a/docs/pages/admin-guides/api/getting-started.mdx +++ /dev/null @@ -1,130 +0,0 @@ ---- -title: API Getting Started Guide -description: Get started working with the Teleport API programmatically using Go. ---- - -In this getting started guide, we will use the Teleport API Go client to connect -to a Teleport Auth Service. - -Here are the steps we'll walk through: - -- Create an API user using a simple role-based authentication method. -- Generate credentials for that user. -- Create and connect a Go client to interact with Teleport's API. - -## Prerequisites - -- Install [Go](https://golang.org/doc/install) (=teleport.golang=)+ and a Go development environment. - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) - -- (!docs/pages/includes/tctl.mdx!) - -## Step 1/3. Create a user - -(!docs/pages/includes/permission-warning.mdx!) - - - Read [API authorization](../../reference/architecture/api-architecture.mdx) to learn more about defining custom roles for your API client. - - -Create a user `api-admin` with the built-in role `editor`: - -```code -$ tctl users add api-admin --roles=editor -``` - -## Step 2/3. Generate client credentials - -Log in as the newly created user with `tsh`.
- -```code -# generate tsh profile -$ tsh login --user=api-admin --proxy=tele.example.com -``` - -The [Profile Credentials loader](https://pkg.go.dev/github.com/gravitational/teleport/api/client#LoadProfile) -will automatically retrieve Credentials from the current profile in the next step. - -## Step 3/3. Create a Go project - -Set up a new [Go module](https://golang.org/doc/tutorial/create-module) and import the `client` package: - -```code -$ mkdir client-demo && cd client-demo -$ go mod init client-demo -$ go get github.com/gravitational/teleport/api/client -``` - - -To ensure compatibility, you should use a version of Teleport's API library that matches -the major version of Teleport running in your cluster. - -To find the pseudoversion appropriate for a go.mod file for a specific git tag, -run the following command from the `teleport` repository: - -```code -$ go list -f '{{.Version}}' -m "github.com/gravitational/teleport/api@$(git rev-parse v12.1.0)" -v0.0.0-20230307032901-49a6de744a3a -``` - - -Create a file called `main.go`, modifying the `Addrs` strings as needed: - -```go -package main - -import ( - "context" - "log" - - "github.com/gravitational/teleport/api/client" -) - -func main() { - ctx := context.Background() - - clt, err := client.New(ctx, client.Config{ - Addrs: []string{ - // Teleport Cloud customers should use .teleport.sh - "tele.example.com:443", - "tele.example.com:3025", - "tele.example.com:3024", - "tele.example.com:3080", - }, - Credentials: []client.Credentials{ - client.LoadProfile("", ""), - }, - }) - - if err != nil { - log.Fatalf("failed to create client: %v", err) - } - - defer clt.Close() - resp, err := clt.Ping(ctx) - if err != nil { - log.Fatalf("failed to ping server: %v", err) - } - - log.Printf("Example success!") - log.Printf("Example server response: %s", resp) - log.Printf("Server version: %s", resp.ServerVersion) -} -``` - -Now you can run the program and connect the client to the Teleport Auth Service to fetch the 
server version. - -```code -$ go run main.go -``` - -## Next steps - -- Learn about [pkg.go.dev](https://pkg.go.dev/github.com/gravitational/teleport/api/client) -- Learn how to use [the client](https://pkg.go.dev/github.com/gravitational/teleport/api/client#Client) -- Learn how to [work with credentials](https://pkg.go.dev/github.com/gravitational/teleport/api/client#Credentials) -- Read about Teleport [API architecture](../../reference/architecture/api-architecture.mdx) for an in-depth overview of the API and API clients. -- Read [API authorization](../../reference/architecture/api-architecture.mdx) to learn more about defining custom roles for your API client. -- Review the `client` [pkg.go reference documentation](https://pkg.go.dev/github.com/gravitational/teleport/api/client) for more information about working with the Teleport API programmatically. -- Familiarize yourself with the [admin manual](../management/admin/admin.mdx) to make the best use of the API. diff --git a/docs/pages/admin-guides/api/rbac.mdx b/docs/pages/admin-guides/api/rbac.mdx deleted file mode 100644 index 63c6eb35ad3db..0000000000000 --- a/docs/pages/admin-guides/api/rbac.mdx +++ /dev/null @@ -1,987 +0,0 @@ ---- -title: Generate Teleport Roles from an External RBAC System -description: Use Teleport's API to automatically generate Teleport roles based on third-party RBAC policies ---- - -You can use the Teleport gRPC API to generate roles automatically based on an -external role-based access control (RBAC) system, such as GitHub or AWS Identity -and Access Management. - -This is especially useful for: - -- Setting up a new Teleport cluster, since you can preserve your existing - authorization levels or categories while letting Teleport handle access - control. -- Ensuring that your Teleport cluster stays up to date with the RBAC systems of - the infrastructure it manages access to. 
This way, Teleport roles do not - unexpectedly gain or lose permissions if your teams reconfigure your external - RBAC systems. - -In this guide, we will build a small demo application to show you how to -generate Teleport roles using Teleport's API client library. - - - -The program we will build in this guide is intended as a learning tool. **Do not -connect it to your production Teleport cluster.** Use a demo cluster instead. - - - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) - -- Go version (=teleport.golang=) or above installed on your workstation. See the - [Go download page](https://go.dev/dl/). You will not need to be familiar with - Go to complete this guide, though Go knowledge is required if you want to - build a production-ready Teleport client application. - -In a production scenario, you will already have a third-party RBAC solution to -use as a basis for generating Teleport roles. In this guide, we will simulate -this by deploying a local Kubernetes cluster using `minikube` and setting up -some RBAC resources. We will then use this Kubernetes cluster to generate -Teleport roles. 
- -To run the local demo environment, ensure that you have the following tools -installed on your workstation: - -| Tool | Purpose | Installation link | -|----------|----------------------------------|---------------------------------------------------------------| -| minikube | Local Kubernetes deployment tool | [Install minikube](https://minikube.sigs.k8s.io/docs/start/) | -| Helm | Kubernetes package manager | [Install Helm](https://helm.sh/docs/intro/install/) | -| kubectl | Kubernetes admin CLI | [Install kubectl](https://kubernetes.io/docs/tasks/tools/) | -| Docker | Required minikube driver | [Get Started With Docker](https://www.docker.com/get-started) | - - -Even if you do not plan to set up the demo project, you can follow this guide to -see which libraries, types, and functions you can use to automatically generate -Teleport roles based on an external RBAC system. - - - -- (!docs/pages/includes/tctl.mdx!) - -## Step 1/4. Set up your Kubernetes cluster - -In this step, we will launch a local Kubernetes cluster and set up role-based -access controls within it. We will then use this Kubernetes cluster as a basis -for generating Teleport roles. - -### Start minikube - -Start minikube with the Docker driver, which boots a local Kubernetes cluster on -a single Docker container: - -```code -$ minikube start --driver=docker -``` - -This command should start a local Kubernetes cluster and set your context (i.e., -the Kubernetes cluster you are currently interacting with) to `minikube`. To -verify this, run the following command: - -```code -$ kubectl config current-context -minikube -``` - -### Define demo Kubernetes RBAC resources - -Next, we will set up RBAC resources in your local `minikube` cluster to use as a -basis for generating Teleport roles. - -In Kubernetes, you can divide a cluster into logically isolated **namespaces**. -A **role** defines a set of permissions for manipulating resources in a specific -namespace. 
A **cluster role** is a role that applies to all namespaces in a -cluster. You can use a **role binding** or **cluster role binding** to attach a -role or cluster role to Kubernetes users and groups. - -Define a Kubernetes role and role binding that allows users in the -`app-developer` group to read and list pods in the `app` namespace. Add the -following to a file called `pod-reader.yaml`: - -```yaml -apiVersion: v1 -kind: Namespace -metadata: - name: app ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: Role -metadata: - namespace: app - name: pod-reader -rules: -- apiGroups: [""] - resources: ["pods"] - verbs: ["get", "list"] ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding -metadata: - name: read-pods - namespace: app - annotations: - 'create-teleport-role': 'true' -subjects: -- kind: Group - name: app-developer - apiGroup: rbac.authorization.k8s.io -roleRef: - kind: Role - name: pod-reader - apiGroup: rbac.authorization.k8s.io -``` - -Create the resources: - -```code -$ kubectl apply -f pod-reader.yaml -namespace/app created -role.rbac.authorization.k8s.io/pod-reader created -rolebinding.rbac.authorization.k8s.io/read-pods created -``` - -(!docs/pages/includes/create-role-using-web.mdx!) - -Next, define a cluster role and cluster role binding that allow users in the -`ops` group to read, create, and execute commands on pods in all namespaces. 
Add -the following to a file called `pod-ops.yaml`: - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: pod-ops -rules: -- apiGroups: [""] - resources: ["pods"] - verbs: ["get", "watch", "list", "create", "exec", "logs"] ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: pod-ops - annotations: - 'create-teleport-role': 'true' -subjects: -- kind: Group - name: ops - apiGroup: rbac.authorization.k8s.io -roleRef: - kind: ClusterRole - name: pod-ops - apiGroup: rbac.authorization.k8s.io -``` - -Create the resources: - -```code -$ kubectl apply -f pod-ops.yaml -clusterrole.rbac.authorization.k8s.io/pod-ops created -clusterrolebinding.rbac.authorization.k8s.io/pod-ops created -``` - -Later in this guide, we will show you how to automatically generate Teleport -roles based on the Kubernetes RBAC resources you created. - -### Define RBAC resources for the client application - -Next, ensure that your API client can read the RBAC resources you created. -Create a file called `rbac-sync.yaml` with the following content: - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: rbac-sync -rules: -- apiGroups: ["rbac.authorization.k8s.io"] - resources: ["roles", "clusterroles", "rolebindings", "clusterrolebindings"] - verbs: ["get", "list"] ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: rbac-sync -subjects: -- kind: User - name: sync-kubernetes-rbac - apiGroup: rbac.authorization.k8s.io -roleRef: - kind: ClusterRole - name: rbac-sync - apiGroup: rbac.authorization.k8s.io -``` - -Apply the changes: - -```code -$ kubectl apply -f rbac-sync.yaml -clusterrole.rbac.authorization.k8s.io/rbac-sync created -clusterrolebinding.rbac.authorization.k8s.io/rbac-sync created -``` - -## Step 2/4. Set up Teleport - -In this step, you will configure Teleport to enable your API client application -to interact with your Kubernetes cluster. 
- -### Create a user and role for the client application - -Give the client application a Teleport user and role that can retrieve -information about a Kubernetes cluster that is registered with Teleport, -authenticate to the cluster, and create or update Teleport roles. - -Create a file called `sync-kubernetes-rbac.yaml` with the following content: - -```yaml -kind: role -version: v7 -metadata: - name: sync-kubernetes-rbac -spec: - allow: - kubernetes_labels: - '*': '*' - kubernetes_users: - - sync-kubernetes-rbac - kubernetes_resources: - - kind: pod - name: '*' - namespace: '*' - rules: - - resources: ['kubernetes_cluster'] - verbs: ['read'] - - resources: ['role'] - verbs: ['create', 'update'] ---- -kind: user -metadata: - name: sync-kubernetes-rbac -spec: - roles: ['sync-kubernetes-rbac'] -version: v2 -``` - -Create the user and role: - -```code -$ tctl create -f sync-kubernetes-rbac.yaml -role 'sync-kubernetes-rbac' has been created -user "sync-kubernetes-rbac" has been created -``` - -### Enable impersonation of the client application - -As with all Teleport users, the Teleport Auth Service authenticates the -`sync-kubernetes-rbac` user by issuing short-lived TLS credentials. In this -case, we will request the credentials manually by *impersonating* the -`sync-kubernetes-rbac` role and user. - -If you are running a self-hosted Teleport Enterprise deployment and are using -`tctl` from the Auth Service host, you will already have impersonation -privileges. 
- -To grant your user impersonation privileges for `sync-kubernetes-rbac`, define a role -called `sync-kubernetes-rbac-impersonator` by pasting the following YAML document into -a file called `sync-kubernetes-rbac-impersonator.yaml`: - -```yaml -kind: role -version: v5 -metadata: - name: sync-kubernetes-rbac-impersonator -spec: - allow: - impersonate: - roles: - - sync-kubernetes-rbac - users: - - sync-kubernetes-rbac -``` - -Create the `sync-kubernetes-rbac-impersonator` role: - -```code -$ tctl create -f sync-kubernetes-rbac-impersonator.yaml -``` - -(!docs/pages/includes/add-role-to-user.mdx role="sync-kubernetes-rbac-impersonator"!) - -You will now be able to generate signed certificates for the `sync-kubernetes-rbac` -role and user. - -### Install the Teleport Kubernetes Service - -We will enable your client application to communicate with your Kubernetes -cluster via the Teleport Kubernetes Service, which forwards requests after -authorizing them. While this step is not strictly necessary with a local -`minikube` cluster, it demonstrates one way to use Teleport to securely access -your external RBAC system's API. - -(!docs/pages/includes/kubernetes-access/helm/helm-repo-add.mdx!) - -Request a token that the Kubernetes Service will use to join your Teleport -cluster: - -```code -$ tctl tokens add --type=kube,app,discovery --format=text -``` - -Copy this token so you can use it when running the Teleport Kubernetes Service. - -Ensure that you are connected to the right Kubernetes cluster (logging into -Teleport earlier will have changed your Kubernetes context): - -```code -$ kubectl config use-context minikube -Switched to context "minikube". 
-``` - -Install the Teleport Kubernetes Service in your cluster, assigning to the host **and port** of your Teleport Proxy Service -(e.g., `mytenant.teleport.sh:443`) and to the token you -requested earlier: - -```code -$ helm install teleport-agent teleport/teleport-kube-agent \ - --set kubeClusterName=minikube \ - --set roles="kube\,app\,discovery" \ - --set proxyAddr= \ - --set authToken= \ - --create-namespace \ - --namespace=teleport-agent \ - --set labels.environment=demo \ - --version (=teleport.version=) -``` - -After a few seconds, verify that you have deployed the Teleport Kubernetes -Service by running the following command: - -```code -$ kubectl -n teleport-agent get pods -``` - -This should show that the Kubernetes Service is running: - -```text -NAME READY STATUS RESTARTS AGE -teleport-agent-0 1/1 Running 0 22s -``` - -`tsh` should indicate that the cluster has registered with Teleport: - -```code -$ tsh kube ls -Kube Cluster Name Labels Selected ------------------ ---------------- -------- -minikube environment=demo -``` - -## Step 3/4. Write the client application - -At this point, we have set up an external RBAC system to use for generating -Teleport roles and configured Teleport to allow our API client to interact with -our Kubernetes cluster and Teleport cluster. In this step, we will write our -client application. - -### Set up your Go project - -Download the source code for the API client application: - -```code -$ git clone --depth=1 https://github.com/gravitational/teleport -b branch/v(=teleport.major_version=) -$ cd teleport/examples/api-sync-roles -``` - -For the rest of this guide, we will show you how to set up the client -application and explore the ways it uses Teleport's API to automatically -generate Teleport roles. - -### Export identity files for the client application - -The `sync-kubernetes-rbac` user needs signed credentials in order to connect to -your Teleport cluster as well as your Kubernetes cluster. 
You will use the `tctl -auth sign` command to request these credentials for your API client. - -#### Connecting to your Teleport cluster - -The following `tctl auth sign` command impersonates the `sync-kubernetes-rbac` -user, generates signed credentials, and writes an identity file to the local -directory: - -```code -$ tctl auth sign --user=sync-kubernetes-rbac --out=auth.pem -``` - -The identity file, `auth.pem`, includes both TLS and SSH credentials. Your -client application uses the SSH credentials to connect to the Proxy Service, -which establishes a reverse tunnel connection to the Auth Service. The client -application uses this reverse tunnel, along with your TLS credentials, to -connect to the Auth Service's gRPC endpoint. - -#### Connecting to the Kubernetes cluster - -You will also need to give the client application a way to authenticate to your -Kubernetes cluster. To do this, use Teleport's certificate authority to sign -credentials for the `sync-kubernetes-rbac` user. Your API client will present -these credentials to authenticate to the Teleport Kubernetes Service, which will -proxy requests to the Kubernetes cluster. - -Run the following command, ensuring that includes -the host and port of your Proxy Service: - -```code -$ tctl auth sign --user=sync-kubernetes-rbac \ - --kube-cluster-name=minikube \ - --format=kubernetes \ - --proxy=https:// \ - --out=kubeconfig -``` - -### Imports - -In the `api-sync-roles` directory, open `main.go`, which contains the API client -program we demonstrate in this guide. - -Here are the packages our client application imports from Go's standard library: - -|Package|Description| -|---|---| -| `context`|Includes the `context.Context` type. `context.Context` is an abstraction for controlling long-running routines, such as connections to external services, that might fail or time out. 
Programs can cancel contexts or assign them timeouts and metadata.| -|`fmt`|Formatting data for printing, strings, or errors.| -|`io`|Dealing with I/O operations, e.g., reading files or network sockets.| -|`os`|Interacting with the operating system, e.g., to open files.| -|`time`|Dealing with time. We will use this to define a timeout for connecting to the Auth Service along with a ticker for executing our discovery logic in a loop.| - -The client imports the following third-party code: - -|Package|Description| -|---|---| -|`github.com/gravitational/teleport/api/client`|A library for authenticating to the Auth Service's gRPC API and making requests, aliased as `teleport`.| -|`github.com/gravitational/teleport/api/types`|Types used in the Auth Service API, e.g., Application Service records.| -|`github.com/gravitational/trace`|Presenting errors with more useful detail than the standard library provides.| -|`google.golang.org/grpc`|The gRPC client and server library.| -|`k8s.io/api/rbac/v1`|The Kubernetes RBAC API client library.| -|`k8s.io/apimachinery/pkg/apis/meta/v1`|Code common to Kubernetes' API client libraries.| -|`k8s.io/client-go/kubernetes`|Setting up a general-purpose Kubernetes client.| -|`k8s.io/client-go/kubernetes/typed/rbac/v1`|Types for the Kubernetes RBAC API.| -|`k8s.io/client-go/tools/clientcmd`|Another general-purpose Kubernetes client library.| - -### Constants - -The program defines constants in a visible location so that, later on, it is easier to -make them configurable outside the program: - -```go -const ( - proxyAddr string = "" - initTimeout = time.Duration(30) * time.Second - identityFilePath string = "auth.pem" - kubeconfigPath string = "kubeconfig" - clusterName string = "minikube" - roleAnnotationKey string = "create-teleport-role" -) -``` - -We will use these constants later in the program.
They define some values we may -want to change later, including: - -|Constant|Description| -|---|---| -|`proxyAddr`|The host and port of the Teleport Proxy Service, e.g., `mytenant.teleport.sh:443`, which we will use to connect the client to your cluster. **Assign this to your own Proxy Service's host and port:** | -|`initTimeout`|The timeout for connecting to the Teleport cluster. We have defined this as 30 seconds.| -|`identityFilePath`|The path to the Teleport identity file you created earlier.| -|`clusterName`|The name of the Kubernetes cluster you will fetch RBAC resources from. In this guide, the cluster's name is `minikube`.| -|`roleAnnotationKey`|In Kubernetes, annotations are arbitrary key/value pairs that you can add to resources. The role and cluster role bindings we created earlier have the annotation key we specify here so our client application can fetch them.| - -### Initializing a Kubernetes RBAC client - -To contact the Kubernetes API, we will need to set up an HTTP client. The client -authenticates to the API using mutual TLS, loading a client certificate, -certificate authority, and private key from the file at `kubeconfigPath`. -Earlier in the guide, we requested this from the Teleport Auth Service. 
- -The program sets up a Kubernetes API client with the `getRBACClient` function: - -```go -func getRBACClient() (v1.RbacV1Interface, error) { - f, err := os.Open(kubeconfigPath) - if err != nil { - return nil, trace.Wrap(err) - } - - kc, err := io.ReadAll(f) - if err != nil { - return nil, trace.Wrap(err) - } - n, err := clientcmd.RESTConfigFromKubeConfig(kc) - if err != nil { - return nil, trace.Wrap(err) - } - - c, err := kubernetes.NewForConfig(n) - if err != nil { - return nil, trace.Wrap(err) - } - - return c.RbacV1(), nil -} -``` - -`getRBACClient` opens and reads the Kubernetes credentials file at -`kubeconfigPath`, then uses the file to set up a Kubernetes API client -configuration (`clientcmd.RESTConfigFromKubeConfig(kc)`) and, with that, an HTTP -client (`kubernetes.NewForConfig(n)`). - -Finally, it returns an interface to the Kubernetes API dedicated to role-based -access controls, which the rest of the program uses to interact with your -Kubernetes cluster. - -### Creating a Teleport role from a Kubernetes cluster role binding - -The `createTeleportRoleFromClusterRoleBinding` function creates a Teleport role -from a Kubernetes cluster role binding by populating fields in the former based -on fields in the latter: - -```go -func createTeleportRoleFromClusterRoleBinding(teleport *client.Client, k types.KubeCluster, r rbacv1.ClusterRoleBinding) error { - if e, ok := r.Annotations[roleAnnotationKey]; !ok || e != "true" { - return nil - } - - role := types.RoleV6{} - role.SetMetadata(types.Metadata{ - Name: k.GetName() + "-" + r.RoleRef.Name + "-" + "cluster", - }) - - b := k.GetStaticLabels() - labels := make(types.Labels) - for k, v := range b { - labels[k] = []string{v} - } - role.SetKubernetesLabels(types.Allow, labels) - role.SetKubeResources(types.Allow, []types.KubernetesResource{ - types.KubernetesResource{ - Kind: "pod", - Namespace: "*", - Name: "*", - }, - }) - - var g []string - var u []string - for _, s := range r.Subjects { - if s.Kind == "User"
|| s.Kind == "ServiceAccount" { - u = append(u, s.Name) - continue - } - if s.Kind == "Group" { - g = append(g, s.Name) - continue - } - } - role.SetKubeGroups(types.Allow, g) - role.SetKubeUsers(types.Allow, u) - if err := teleport.UpsertRole( - context.Background(), - &role, - ); err != nil { - return trace.Wrap(err) - } - fmt.Println("Upserted Teleport role:", role.GetName()) - return nil -} -``` - -To avoid unexpected behavior, this function ignores Kubernetes-managed roles and -roles for internal systems like the Teleport Kubernetes Service: it checks the -cluster role binding's metadata for an annotation with a specific key, -`roleAnnotationKey`, and ignores any resource where this key is not set to -`"true"`. - -We also want a quick way to identify roles we created with this program. To do -so, this function names each role it generates from a cluster role binding -using the following attributes: - -- The Kubernetes cluster name -- The name of the Kubernetes role referenced by the binding -- The suffix `-cluster` - -In our demo application, this function will create a Teleport role called -`minikube-pod-ops-cluster`. 
- -The rest of the function assigns fields to a `types.RoleV6`, the Teleport API -client's role type, based on the cluster role binding: - -|Role field|Purpose|How we assign it| -|---|---|---| -|`allow.kubernetes_labels`|Labels for Teleport-registered Kubernetes clusters that a user with this role is allowed to access.|Based on the Teleport-registered Kubernetes cluster that the cluster role binding belongs to.| -|`allow.kubernetes_resources`|Kubernetes pods in specific namespaces that a user with this role is allowed to access.|Allow access to all namespaces, since cluster role bindings are not restricted by namespace.| -|`allow.kubernetes_users` and `allow.kubernetes_groups`|The Kubernetes groups and users that a Teleport user with this role will assume when interacting with the Kubernetes cluster.|Supply the names of any users or groups connected to the cluster role binding.| - -### Creating a Teleport role from a Kubernetes role binding - -As with cluster role bindings, this program will also create Teleport roles -based on Kubernetes role bindings: - -```go -func createTeleportRoleFromRoleBinding(teleport *client.Client, k types.KubeCluster, r rbacv1.RoleBinding) error { - if e, ok := r.Annotations[roleAnnotationKey]; !ok || e != "true" { - return nil - } - - role := types.RoleV6{} - role.SetMetadata(types.Metadata{ - Name: k.GetName() + "-" + r.RoleRef.Name + "-" + r.Namespace, - }) - - b := k.GetStaticLabels() - labels := make(types.Labels) - for k, v := range b { - labels[k] = []string{v} - } - role.SetKubernetesLabels(types.Allow, labels) - role.SetKubeResources(types.Allow, []types.KubernetesResource{ - types.KubernetesResource{ - Kind: "pod", - Namespace: r.Namespace, - Name: "*", - }, - }) - var g []string - var u []string - for _, s := range r.Subjects { - if s.Kind == "User" || s.Kind == "ServiceAccount" { - u = append(u, s.Name) - continue - } - if s.Kind == "Group" { - g = append(g, s.Name) - continue - } - } - role.SetKubeGroups(types.Allow, 
g) - role.SetKubeUsers(types.Allow, u) - - if err := teleport.UpsertRole( - context.Background(), - &role, - ); err != nil { - return trace.Wrap(err) - } - fmt.Println("Upserted Teleport role:", role.GetName()) - return nil -} -``` - -While the overall behavior of this function is the same as -`createTeleportRoleFromClusterRoleBinding`, Kubernetes role bindings require -some differences in how we assign fields to Teleport roles: - -- When setting the name of the role, we use the role binding's namespace as the - suffix, rather than `-cluster`, to indicate the namespace that this role - applies to. -- In the role's `kubernetes_resources` field, the value has the same namespace - as the role binding, rather than applying to all namespaces. - -### Creating Teleport roles based on Kubernetes resources - -Now that we have functions to create Teleport roles based on individual -Kubernetes RBAC resources, we can fetch all RBAC resources from our Kubernetes -cluster and call these functions: - -```go -func createTeleportRolesForKubeCluster(teleport *client.Client, k types.KubeCluster) error { - rbac, err := getRBACClient() - if err != nil { - return trace.Wrap(err) - } - - crb, err := rbac.ClusterRoleBindings().List( - context.Background(), - metav1.ListOptions{}, - ) - if err != nil { - return trace.Wrap(err) - } - - for _, i := range crb.Items { - if err := createTeleportRoleFromClusterRoleBinding(teleport, k, i); err != nil { - return trace.Wrap(err) - } - } - - rb, err := rbac.RoleBindings("").List( - context.Background(), - metav1.ListOptions{}, - ) - if err != nil { - return trace.Wrap(err) - } - - for _, i := range rb.Items { - if err := createTeleportRoleFromRoleBinding(teleport, k, i); err != nil { - return trace.Wrap(err) - } - } - return nil -} -``` - -`createTeleportRolesForKubeCluster` takes a Teleport client and a -Teleport-registered Kubernetes cluster. It calls the `getRBACClient` function we -defined earlier to set up a client for the Kubernetes cluster. 
It then: - -- Lists Kubernetes cluster role bindings and creates a Teleport role for each one. -- Lists Kubernetes role bindings and creates a Teleport role for each one. - -### Initializing clients and starting the application - -The functions we declared earlier require a Teleport API client and a -Teleport-registered Kubernetes cluster, and we initialize these in the -entry point of the program, the `main` function: - -```go -func main() { - ctx, cancel := context.WithTimeout(context.Background(), initTimeout) - defer cancel() - creds := client.LoadIdentityFile(identityFilePath) - - teleport, err := client.New(ctx, client.Config{ - Addrs: []string{proxyAddr}, - Credentials: []client.Credentials{creds}, - }) - if err != nil { - panic(err) - } - fmt.Println("Connected to Teleport") - - ks, err := teleport.GetKubernetesServers(context.Background()) - if err != nil { - panic(err) - } - for _, k := range ks { - if k.GetCluster().GetName() != clusterName { - continue - } - fmt.Println("Retrieved Kubernetes cluster", clusterName) - - if err := createTeleportRolesForKubeCluster(teleport, k.GetCluster()); err != nil { - panic(err) - } - fmt.Println("Created roles for Kubernetes cluster", clusterName) - return - } - panic("Unable to locate a Kubernetes Service instance for " + clusterName) -} -``` - -`client` is Teleport's library for setting up an API client. Our program -initializes a Teleport client by calling `client.LoadIdentityFile` to obtain a -`client.Credentials`. It then uses the `client.Credentials` to call -`client.New`, which connects to the Teleport Proxy Service specified in the -`Addrs` field using the provided identity file. - - - -This program does not validate your credentials or Teleport cluster address. 
-Make sure that: - -- The identity file you exported earlier does not have an expired TTL -- The value you supplied to the `Addrs` field in `client.Config` includes both - the host **and** the web port of your Teleport Proxy Service, e.g., - `mytenant.teleport.sh:443` - - - -After initializing a Teleport client, the `main` function fetches all Kubernetes -servers registered with Teleport (`teleport.GetKubernetesServers`) and checks if -there is a registered Kubernetes cluster that matches the one you specified. - -If a matching Kubernetes cluster exists, the code calls the -`createTeleportRolesForKubeCluster` function we defined earlier. If not, the -program prints an error message and a stack trace by calling Go's built-in -`panic` function. - -## Step 4/4. Test your client application - -To test the client application, start it up from within its project directory: - -```code -$ go run main.go -``` - -You should see the following output: - -```text -Connected to Teleport -Retrieved Kubernetes cluster minikube -Upserted Teleport role: minikube-pod-ops-cluster -Upserted Teleport role: minikube-pod-reader-app -Created roles for Kubernetes cluster minikube -``` - -Examine the new `minikube-pod-ops-cluster` role by running the command below: - -```code -$ tctl get roles/minikube-pod-ops-cluster -``` - -You should see output similar to the following: - -```yaml -kind: role -metadata: - id: 1678732494974032643 - name: minikube-pod-ops-cluster -spec: - allow: - kubernetes_groups: - - ops - kubernetes_labels: - environment: demo - kubernetes_resources: - - kind: pod - name: '*' - namespace: '*' - deny: {} - options: - cert_format: standard - create_host_user: false - desktop_clipboard: true - desktop_directory_sharing: true - enhanced_recording: - - command - - network - forward_agent: false - idp: - saml: - enabled: true - max_session_ttl: 30h0m0s - pin_source_ip: false - ssh_port_forwarding: - remote: - enabled: true - local: - enabled: true - record_session: - 
default: best_effort - desktop: true - ssh_file_copy: true -version: v7 -``` - -Compare this with the `minikube-pod-reader-app` role, which you can retrieve -with the following command: - -```code -$ tctl get roles/minikube-pod-reader-app -``` - -Here is the role we created: - -```yaml -kind: role -metadata: - id: 1678732495284493075 - name: minikube-pod-reader-app -spec: - allow: - kubernetes_groups: - - app-developer - kubernetes_labels: - environment: demo - kubernetes_resources: - - kind: pod - name: '*' - namespace: app - deny: {} - options: - cert_format: standard - create_host_user: false - desktop_clipboard: true - desktop_directory_sharing: true - enhanced_recording: - - command - - network - forward_agent: false - idp: - saml: - enabled: true - max_session_ttl: 30h0m0s - pin_source_ip: false - ssh_port_forwarding: - remote: - enabled: true - local: - enabled: true - record_session: - default: best_effort - desktop: true - ssh_file_copy: true -version: v7 -``` - -Since role bindings are namespaced, this role only allows access to pods in the -`app` namespace, where this role binding was applied. The Kubernetes Service -forwards traffic from users with this role using the `app-developer` Kubernetes -group. - -## Next steps - -We have implemented a Teleport API client that generates Teleport roles based on -the Kubernetes RBAC system. You can use Teleport's API to build similar -applications that interact with other RBAC systems, such as GitHub teams or -groups within your database management system. - -Here are some starting points for building out your client application. - -### Learn more about Teleport roles - -To write an effective client application that generates Teleport roles from an -external RBAC solution, you should understand the role fields that apply to -infrastructure resources you want to manage access to. 
- -See the links below for guides to fields related to different infrastructure -resources: - -- [Servers](../../enroll-resources/server-access/rbac.mdx) -- [Databases](../../enroll-resources/database-access/rbac.mdx) -- [Kubernetes clusters](../../enroll-resources/kubernetes-access/controls.mdx) -- [Windows Desktops](../../enroll-resources/desktop-access/rbac.mdx) -- [Applications](../../enroll-resources/application-access/controls.mdx) - -For general guidance, read our [Access Controls -Reference](../../reference/access-controls/roles.mdx). - -### Register your cloud provider with Teleport - -You can protect cloud provider APIs with Teleport and instruct your API client -applications to connect to these APIs via the Teleport Application Service. -Using Teleport-protected cloud provider APIs, you can generate Teleport roles -based on your cloud provider's RBAC solution. - -Read our guides for how to set up the Teleport Application Service for cloud -provider APIs: - -- [AWS](../../enroll-resources/application-access/cloud-apis/aws-console.mdx) -- [Google Cloud](../../enroll-resources/application-access/cloud-apis/google-cloud.mdx) -- [Azure](../../enroll-resources/application-access/cloud-apis/azure.mdx) - -### Consult examples - -The Teleport code repository contains [examples of production-ready Teleport API -clients](https://github.com/gravitational/teleport/tree/v(=teleport.version=)/examples/). -While we currently do not maintain plugins that generate Teleport -roles, you can use these examples to see how to implement configuration -parsing, retries, and other tasks. - -### Provision the client application with short-lived credentials - -In this example, we used the `tctl auth sign` command to fetch credentials for -the program you wrote. For production usage, we recommend provisioning -short-lived credentials via Machine ID, which reduces the risk of these -credentials being stolen. 
View our [Machine ID -documentation](../../enroll-resources/machine-id/introduction.mdx) to learn more. diff --git a/docs/pages/admin-guides/deploy-a-cluster/access-graph/access-graph.mdx b/docs/pages/admin-guides/deploy-a-cluster/access-graph/access-graph.mdx deleted file mode 100644 index ec7ccc5ce6a1d..0000000000000 --- a/docs/pages/admin-guides/deploy-a-cluster/access-graph/access-graph.mdx +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: "Self-Hosting Teleport Access Graph" -description: Explains how to deploy Access Graph alongside a self-hosted Teleport cluster. ---- - -If you run a self-hosted Teleport cluster, using Access Graph (part of -Teleport Identity Security) requires running the Access Graph Service on your own -infrastructure. The following guides show you how to deploy the Access Graph -Service. - - diff --git a/docs/pages/admin-guides/deploy-a-cluster/access-graph/self-hosted.mdx b/docs/pages/admin-guides/deploy-a-cluster/access-graph/self-hosted.mdx deleted file mode 100644 index 19a784b3c4576..0000000000000 --- a/docs/pages/admin-guides/deploy-a-cluster/access-graph/self-hosted.mdx +++ /dev/null @@ -1,138 +0,0 @@ ---- -title: Run Teleport Identity Security on Self-Hosted Clusters -description: Describes how to deploy Access Graph on self-hosted clusters. ---- - -Identity Security with Access Graph on a self-hosted Teleport cluster requires setting up -Access Graph, a dedicated service which uses PostgreSQL as its backing storage and communicates -with Auth Service and Proxy Service to collect information about resources and access. - -This guide will help you set up the service and enable Access Graph in your Teleport cluster. - -## Prerequisites - -- A running Teleport Enterprise cluster v14.3.6 or later. -- An updated `license.pem` with Identity Security enabled. -- Docker version v(=docker.version=) or later. -- A PostgreSQL database server v14 or later. 
- - Access Graph needs a dedicated [database](https://www.postgresql.org/docs/current/sql-createdatabase.html) to store its data. - The database user that Access Graph connects with must own this database, or have similarly broad permissions: - at least the `CREATE TABLE` privilege on the `public` schema, and the `CREATE SCHEMA` privilege. - - Amazon RDS for PostgreSQL is supported. -- A TLS certificate for the Access Graph service - - The TLS certificate must be issued for "server authentication" key usage, - and must list the IP or DNS name of the Access Graph service in an X.509 v3 `subjectAltName` extension. - - Starting from version 1.20.4 of the Access Graph service, the container runs as a non-root user by default. - Make sure the certificate files are readable by the user running the container. You can set correct permissions with the following command: - ```code - $ sudo chown 65532 /etc/access_graph/tls.key - ``` -- The node running the Access Graph service must be reachable from the Teleport Auth Service and Proxy Service. - - - The deployment with Docker is suitable for testing and development purposes. For production deployments, - consider using the Access Graph Helm chart to deploy this service on Kubernetes. - Refer to [Helm chart for Access Graph](self-hosted-helm.mdx) for instructions. - - -## Step 1/3. Set up Access Graph - -You will need a copy of your Teleport cluster's host certificate authority (CA) on the machine that hosts the Access Graph service. -The service requires incoming connections to be authenticated via host certificates that the host CA issues to the Auth Service and Proxy Service. 
- -The host CA can be retrieved and saved into a file in one of the following ways: - - - -```code -$ sudo mkdir /etc/access_graph -$ curl -s 'https:///webapi/auth/export?type=tls-host' | sudo tee /etc/access_graph/teleport_host_ca.pem -``` - - - -```code -$ sudo mkdir /etc/access_graph -$ tsh login --proxy= -$ tctl get cert_authorities --format=json \ - | jq -r '.[] | select(.spec.type == "host") | .spec.active_keys.tls[].cert' \ - | base64 -d | sudo tee /etc/access_graph/teleport_host_ca.pem -``` - - - -Then, on the same machine, create a configuration file for the Access Graph service, similar to this: - -```yaml -backend: - postgres: - # This uses the PostgreSQL connection URI format, see https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING-URIS - # A stricter `sslmode` value is strongly recommended, - # e.g. `sslmode=verify-full&sslrootcert=/etc/access_graph/my_postgres_ca.crt`. - # For a full reference on possible parameters see https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS - connection: postgres://access_graph_user:my_password@db.example.com:5432/access_graph_db?sslmode=require - - # When running on Amazon RDS, IAM auth via credentials set in the environment can be used as follows: - # iam: - # aws_region: us-west-2 - -# IP address (optional) and port for the Access Graph service to listen to. -# This is the default value. This key can be omitted to listen on port 50051 on all interfaces. -address: ":50051" - -tls: - # File paths of PEM-encoded TLS certificate and private key for the Access Graph server. - cert: /etc/access_graph/tls.crt - key: /etc/access_graph/tls.key - -# This lists the file paths for host CAs of Teleport clusters that are allowed to register with this Access Graph service. -# Several paths can be included to allow several Teleport clusters to connect to the Access Graph service. 
-registration_cas: - - /etc/access_graph/teleport_host_ca.pem # A full path to the file containing the Teleport cluster's host CA certificate. -``` - -Finally, start the Access Graph service using Docker as follows: - -```console -$ docker run -p 50051:50051 -v :/app/config.yaml -v /etc/access_graph:/etc/access_graph public.ecr.aws/gravitational/access-graph:(=access_graph.version=) -``` - -## Step 2/3. Update the Teleport Auth Service configuration - -In the YAML config for the Auth Service, add a new top-level section for Access Graph configuration. - -```yaml -access_graph: - enabled: true - # host:port where the Access Graph service is listening - endpoint: access-graph.example.com:50051 - # Specify a trusted CA we expect the Access Graph server certificate to be signed by. - # If not specified, the system trust store will be used. - ca: /etc/access_graph_ca.pem -``` - -Then, restart Auth Service instances, followed by Proxy Service instances. - -## Step 3/3. View the Access Graph in the Web UI - -You can find Access Graph in the "Access Management" tab in the Web UI. -![Access Management menu item](../../../../img/access-graph/menu-item.png) - -To access the interface, your user must have a role that allows `list` and `read` verbs on the `access_graph` resource, e.g.: - -```yaml -kind: role -version: v7 -metadata: - name: my-role -spec: - allow: - rules: - - resources: - - access_graph - verbs: - - list - - read -``` - -The preset `editor` role has the required permissions by default. diff --git a/docs/pages/admin-guides/deploy-a-cluster/deploy-a-cluster.mdx b/docs/pages/admin-guides/deploy-a-cluster/deploy-a-cluster.mdx deleted file mode 100644 index b9bff221670f9..0000000000000 --- a/docs/pages/admin-guides/deploy-a-cluster/deploy-a-cluster.mdx +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: Self-Hosting Teleport -description: "Guides to running a self-hosted Teleport cluster in production." 
---- - -These guides show you how to run a self-hosted Teleport Enterprise cluster in -production. - -## Dedicated account dashboard - -Teleport Enterprise subscriptions include a dedicated account dashboard at their preferred -subdomain of [teleport.sh](https://teleport.sh). The dedicated account dashboard provides -subscription administrators access to the license file, support links, and Teleport Enterprise binary downloads. - -## Guides to self-hosting Teleport - - diff --git a/docs/pages/admin-guides/deploy-a-cluster/deployments/gcp.mdx b/docs/pages/admin-guides/deploy-a-cluster/deployments/gcp.mdx deleted file mode 100644 index d106107bdbd7d..0000000000000 --- a/docs/pages/admin-guides/deploy-a-cluster/deployments/gcp.mdx +++ /dev/null @@ -1,352 +0,0 @@ ---- -title: Running Teleport on GCP -description: How to install and configure Teleport on GCP ---- - -We've created this guide to give customers an overview of how to deploy a -self-hosted Teleport cluster on [Google Cloud](https://cloud.google.com/gcp/) -(GCP). This guide provides a high-level introduction to setting up and running -Teleport in production. - -We have split this guide into: - -- [GCP Teleport Introduction](#gcp-teleport-introduction) -- [GCP Quickstart](#gcp-quickstart) - -(!docs/pages/includes/cloud/call-to-action.mdx!) - -## GCP Teleport Introduction - -This guide will cover how to set up, configure, and run Teleport on GCP. 
- -The following GCP Services are required to run Teleport in high availability mode: - -- [Compute Engine: VM Instances with Instance Groups](#compute-engine-vm-instances-with-instance-groups) -- [Compute Engine: Health Checks](#compute-engine-health-checks) -- [Storage: Cloud Firestore](#storage-cloud-firestore) -- [Storage: Google Cloud Storage](#storage-google-cloud-storage) -- [Network Services: Load Balancing](#network-services-load-balancing) -- [Network Services: Cloud DNS](#network-services-cloud-dns) - -Other things needed: - -- [SSL Certificate](https://cloud.google.com/load-balancing/docs/ssl-certificates) - -Optional: - -- Management Tools: Cloud Deployment Manager -- Logging: Stackdriver - -We recommend setting up Teleport in high availability mode. -In high availability mode Firestore is used for cluster state and audit logs, -and Google Cloud Storage is used for session recordings. - -![GCP Intro Image](../../../../img/gcp/gcp-teleport.svg) - -Throughout this guide, we'll make use of the following placeholder variables. -Please replace them with values appropriate for your environment. - -| Name | Example | Description | -| -------- | ----------- | ----------- | -| `Example_GCP_PROJECT` | teleport-project | Your GCP project ID | -| `Example_GCP_CREDENTIALS` | /var/lib/teleport/google.json | Path to service account credentials | -| `Example_FIRESTORE_CLUSTER_STATE` | teleport-cluster-state | Name of the Firestore collection for Teleport cluster state | -| `Example_FIRESTORE_AUDIT_LOGS` | teleport-audit-logs | Name of the Firestore collection for Teleport audit logs | -| `Example_BUCKET_NAME` | teleport-session-recordings | Name of the GCS bucket for session recording storage | - -### Compute Engine: VM Instances with Instance Groups - -We recommend using `n1-standard-2` instances in production. It's best to separate -Teleport Proxy Service and Auth Service instances using instance groups for each. 
- -### Compute Engine: Health Checks - -GCP relies heavily on [Health Checks](https://cloud.google.com/load-balancing/docs/health-checks), -which is helpful when adding new instances to an instance group. - -To enable health checks in Teleport, start with `teleport start --diag-addr=0.0.0.0:3000`. -See [Admin Guide: Troubleshooting](../../management/admin/troubleshooting.mdx) for more information. - -### Storage: Cloud Firestore - -The [Firestore](https://cloud.google.com/firestore/) backend uses real-time -updates to keep individual Auth Service instances in sync, and requires Firestore configured -in native mode. - -To configure Teleport to store audit events in Firestore, add the following to -the `teleport` section of your Auth Service's config file (by default it's `/etc/teleport.yaml`): - -```yaml -teleport: - storage: - type: firestore - collection_name: Example_FIRESTORE_CLUSTER_STATE - project_id: Example_GCP_PROJECT - credentials_path: Example_GCP_CREDENTIALS - audit_events_uri: [ 'firestore://Example_FIRESTORE_AUDIT_LOGS?projectID=Example_GCP_PROJECT&credentialsPath=Example_GCP_CREDENTIALS' ] -``` - - -Be careful to ensure that `Example_FIRESTORE_CLUSTER_STATE` and `Example_FIRESTORE_AUDIT_LOGS` -refer to *different* Firestore collections. The schema is different for each, and using the same -collection for both types of data will result in errors. - - -### Storage: Google Cloud Storage - -The Google Cloud Storage backend is used for Teleport session recordings. -Teleport will try to create the bucket on startup if it doesn't already exist. -If you prefer, you can create the bucket ahead of time. In this case, Teleport -does not need permissions to create buckets. - -When creating the bucket, we recommend setting it up as `Dual-region` with -the `Standard` storage class. Provide access using a `Uniform` access control -with a Google-managed key. - -When setting up `audit_sessions_uri`, use the `gs://` prefix. - -```yaml -storage: - ... 
- audit_sessions_uri: 'gs://Example_BUCKET_NAME?projectID=Example_GCP_PROJECT&credentialsPath=Example_GCP_CREDENTIALS' - ... -``` - -### Network Services: Load Balancing - -Load Balancing is required for Proxy and SSH traffic. Use `TCP Load Balancing` as -Teleport requires custom ports for SSH and web traffic. - -### Network Services: Cloud DNS - -Cloud DNS is used to set up the public URL of the Teleport Proxy. - -### Access: Service accounts - -The Teleport Auth Service will need to read and write to Firestore and -Google Cloud Storage. For this, you will need a service account with the -correct permissions. - -If you want Teleport to be able to create its own GCS bucket, you'll need to -create a role allowing the `storage.buckets.create` permission. You can skip -this step if you choose to create the bucket before installing Teleport. - -To create this role, start by defining the role in a YAML file: - -```yaml -# teleport_auth_role.yaml -title: teleport_auth_role -description: 'Teleport permissions for GCP' -stage: ALPHA -includedPermissions: -# Allow Teleport to create the GCS bucket for session -# recordings if it doesn't already exist. -- storage.buckets.create -``` - -Create the role using this file: - -```code -$ gcloud iam roles create teleport_auth_role \ - --project Example_GCP_PROJECT \ - --file teleport_auth_role.yaml \ - --format yaml -``` - -Note the `name` field in the output, which is the fully qualified name for the -custom role and must be used in later steps. - -```code -$ export IAM_ROLE= -``` - -If you don't already have a GCP service account for your Teleport Auth Service, -you can create one with the following command; otherwise, use your existing -service account. 
- -```code -$ gcloud iam service-accounts create teleport-auth-server \ - --description="Service account for Teleport Auth Service" \ - --display-name="Teleport Auth Service" \ - --format=yaml -``` - -Note the `email` field in the output; this must be used as the identifier for -the service account. - -```code -$ export SERVICE_ACCOUNT= -``` - -Lastly, bind the required IAM roles to your newly created service account. - -```code -# our custom IAM role allows Teleport to create the GCS -# bucket for session recordings if it doesn't already exist -$ gcloud projects add-iam-policy-binding Example_GCP_PROJECT \ - --member=serviceAccount:$SERVICE_ACCOUNT \ - --role=$IAM_ROLE - -# datastore.owner grants the required Firestore access -$ gcloud projects add-iam-policy-binding Example_GCP_PROJECT \ - --member=serviceAccount:$SERVICE_ACCOUNT \ - --role=roles/datastore.owner - -# storage.objectAdmin is needed to read/write/delete storage objects -$ gcloud projects add-iam-policy-binding Example_GCP_PROJECT \ - --member=serviceAccount:$SERVICE_ACCOUNT \ - --role=roles/storage.objectAdmin -``` - -**Download JSON Service Key** - -The credentials for this service account should be exported in JSON format -and provided to Teleport throughout the remainder of this guide. - -![GCP Service Key](../../../../img/gcp/gcp-service-key.png) - -## GCP Quickstart - -### 1. Create Resources - -We recommend starting by creating the resources with an infrastructure automation -tool such as [Cloud Deployment Manager](https://cloud.google.com/deployment-manager/) or Terraform. - -### 2. Install & Configure Teleport - -Follow the installation instructions on our [installation page](../../../installation.mdx). - -We recommend configuring Teleport using the steps below: - - - -**1. Configure Teleport Auth Service** using the example `teleport.yaml` below, and start it -using [systemd](../../management/admin/daemon.mdx). 
The DEB/RPM installations will -automatically include the `systemd` configuration. - -```yaml -# -# Sample Teleport configuration teleport.yaml file for Auth Service -# -teleport: - nodename: teleport-auth-server - data_dir: /var/lib/teleport - pid_file: /run/teleport.pid - log: - output: stderr - severity: DEBUG - storage: - type: firestore - collection_name: Example_FIRESTORE_CLUSTER_STATE - # Credentials: Path to google service account file, used for Firestore and Google Storage. - credentials_path: Example_GCP_CREDENTIALS - project_id: Example_GCP_PROJECT - audit_events_uri: 'firestore://Example_FIRESTORE_AUDIT_LOGS?projectID=Example_GCP_PROJECT&credentialsPath=Example_GCP_CREDENTIALS' - audit_sessions_uri: 'gs://Example_BUCKET_NAME?projectID=Example_GCP_PROJECT&credentialsPath=Example_GCP_CREDENTIALS' -auth_service: - enabled: true - tokens: - - "proxy:(= presets.tokens.first =)" - - "node:(= presets.tokens.second =)" -proxy_service: - enabled: false -ssh_service: - enabled: false -``` - - -**1. Configure Teleport Auth Service** using the below example `teleport.yaml`, and start it -using [systemd](../../management/admin/daemon.mdx). The DEB/RPM installations will -automatically include the `systemd` configuration. - -```yaml -# -# Sample Teleport configuration teleport.yaml file for Auth Service -# -teleport: - nodename: teleport-auth-server - data_dir: /var/lib/teleport - pid_file: /run/teleport.pid - log: - output: stderr - severity: DEBUG - storage: - type: firestore - collection_name: Example_FIRESTORE_CLUSTER_STATE - # Credentials: Path to google service account file, used for Firestore and Google Storage. 
- credentials_path: Example_GCP_CREDENTIALS - project_id: Example_GCP_PROJECT - audit_events_uri: 'firestore://Example_FIRESTORE_AUDIT_LOGS?projectID=Example_GCP_PROJECT&credentialsPath=Example_GCP_CREDENTIALS' - audit_sessions_uri: 'gs://Example_BUCKET_NAME?projectID=Example_GCP_PROJECT&credentialsPath=Example_GCP_CREDENTIALS' -auth_service: - enabled: true - license_file: /var/lib/teleport/license.pem - tokens: - - "proxy:(= presets.tokens.first =)" - - "node:(= presets.tokens.second =)" -proxy_service: - enabled: false -ssh_service: - enabled: false -``` - -(!docs/pages/includes/enterprise/obtainlicense.mdx!) - -Save your license file on the Auth Service instances at the path, -`/var/lib/teleport/license.pem`. - - - -**2. Set up Proxy** - -Save the following configuration file as `/etc/teleport.yaml` on the Proxy Server: - -```yaml -# enable multiplexing all traffic on TCP port 443 -version: v3 -teleport: - auth_token: (= presets.tokens.first =) - # We recommend using a TCP load balancer pointed to the auth servers when - # setting up in High Availability mode. - auth_server: auth.example.com:3025 -# enable proxy service, disable auth and ssh -ssh_service: - enabled: false -auth_service: - enabled: false -proxy_service: - enabled: true - web_listen_addr: 0.0.0.0:443 - public_addr: teleport.example.com:443 - # automatically get an ACME certificate for teleport.example.com (works for a single proxy) - acme: - enabled: true - email: example@email.com -``` - -**3. Set up Teleport Nodes** - -Save the following configuration file as `/etc/teleport.yaml` on the Node: - -```yaml -version: v3 -teleport: - auth_token: (= presets.tokens.second =) - # Teleport Agents can be joined to the cluster via the Proxy Service's - # public address. This will establish a reverse tunnel between the Proxy - # Service and the agent that is used for all traffic. 
- proxy_server: teleport.example.com:443 -# enable the SSH Service and disable the Auth and Proxy Services -ssh_service: - enabled: true -auth_service: - enabled: false -proxy_service: - enabled: false -``` - -**4. Add Users** - -Follow our [Local Users](../../management/admin/users.mdx) guide or integrate -with [Google Workspace](../../access-controls/sso/google-workspace.mdx) to -provide SSO access. diff --git a/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/aws.mdx b/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/aws.mdx deleted file mode 100644 index fe05757a6d763..0000000000000 --- a/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/aws.mdx +++ /dev/null @@ -1,681 +0,0 @@ ---- -title: Running an HA Teleport cluster using AWS, EKS, and Helm -description: Install and configure an HA Teleport cluster using an AWS EKS cluster ---- - -In this guide, we'll use Teleport Helm charts to set up a high-availability Teleport cluster that runs on AWS EKS. - - -If you are already running Teleport on another platform, you can use your -existing Teleport deployment to access your Kubernetes cluster. [Follow our -guide](../../../enroll-resources/kubernetes-access/getting-started.mdx) to connect your Kubernetes -cluster to Teleport. - - -(!docs/pages/includes/cloud/call-to-action.mdx!) - -## Prerequisites - -(!docs/pages/includes/kubernetes-access/helm/teleport-cluster-prereqs.mdx!) - -### Choose a Kubernetes namespace and Helm release name - - - Before starting, setting your Kubernetes namespace and Helm release name here will - enable easier copy/pasting of commands for installation. - - If you don't know what to put here, use `teleport` for both values. - - Namespace: - - Release name: - - -## Step 1/7. Install Helm - -(!docs/pages/includes/kubernetes-access/helm/teleport-cluster-install.mdx!) - -## Step 2/7. Add the Teleport Helm chart repository - -(!docs/pages/includes/kubernetes-access/helm/helm-repo-add.mdx!) - -## Step 3/7. 
Set up AWS IAM configuration - -For Teleport to be able to manage the DynamoDB tables, indexes, and the S3 -storage bucket it needs, you'll need to configure AWS IAM policies to allow -access. - - - Add these IAM policies to your AWS account and then grant them to the role associated with your EKS node group(s). - - -### DynamoDB IAM policy - -(!docs/pages/includes/dynamodb-iam-policy.mdx!) - -### S3 IAM policy - -(!docs/pages/includes/s3-iam-policy.mdx!) - -## Step 4/7. Configure TLS certificates for Teleport - -We now need to configure TLS certificates for Teleport to secure its -communications and allow external clients to connect. - -Depending on the approach you use for provisioning TLS certificates, the `teleport-cluster` chart can -deploy either a Kubernetes `LoadBalancer` or Kubernetes `Ingress` to handle incoming connections to -the Teleport Proxy Service. - -### Determining an approach - -There are three supported options when using AWS. You must choose only one of -these options: - -| Approach | AWS Load Balancer Type | Kubernetes Traffic Destination | Can use an existing AWS LB? | Caveats | -| - | - | - | - | - | -| [Using `cert-manager`](#using-cert-manager) | Network Load Balancer (NLB) | `LoadBalancer` | No | Requires a Route 53 domain and an `Issuer` configured with IAM permissions to change DNS records for your domain | -| [Using AWS Certificate Manager](#using-aws-certificate-manager) | Application Load Balancer (ALB) | `Ingress` | Yes | Requires a working instance of the AWS Load Balancer controller installed in your Kubernetes cluster | -| [Using your own TLS credentials](#using-your-own-tls-credentials) | Network Load Balancer (NLB) | `LoadBalancer` | No | Requires you to independently manage the maintenance, renewal, and trust of the TLS certificates securing Teleport's web listener | - -#### Using `cert-manager` - -You can use `cert-manager` to provision and automatically renew TLS credentials -by completing ACME challenges via Let's Encrypt.
- -You can also use `cert-manager` with AWS Private Certificate Authority (PCA) in EKS using the -`aws-privateca-issuer` plugin. - -This method uses a Kubernetes `LoadBalancer`, which will provision an underlying AWS Network Load -Balancer (NLB) to handle incoming traffic. - -#### Using AWS Certificate Manager - -You can use AWS Certificate Manager to handle TLS termination with AWS-managed certificates. - -This method uses a Kubernetes `Ingress`, which can provision an underlying AWS Application Load -Balancer (ALB) to handle incoming traffic if one does not already exist. It also requires the -installation and setup of the AWS Load Balancer controller. - -You should be aware of these potential limitations and differences when using Layer 7 load balancers with Teleport: - -- Connecting to Kubernetes clusters at the command line requires the use of the `tsh proxy kube` or - `tsh kubectl` commands. It is not - possible to connect `kubectl` directly to Teleport listeners without the use of `tsh` as a proxy client - in this mode. -- Connecting to databases at the command line requires the use of the `tsh proxy db` or `tsh db connect` - commands. It is not possible to connect database clients directly to Teleport listeners without the use of `tsh` - as a proxy client in this mode. -- The reason for both of these requirements is that Teleport uses X.509 certificates for authentication, which requires - that it terminate all inbound TLS traffic itself on the Teleport proxy. This is not directly possible when using - a Layer 7 load balancer, so the `tsh` client implements this flow itself - [using ALPN connection upgrades](../../../reference/architecture/tls-routing.mdx). - - - Using ACM with an ALB also requires that your cluster has a fully functional installation of the AWS Load Balancer - controller with required IAM permissions. This guide provides more details below.
- - -#### Using your own TLS credentials - -With this approach, you are responsible for determining how to obtain a TLS -certificate and private key for your Teleport cluster, and for renewing your -credentials periodically. Use this approach if you would like to use a trusted -internal certificate authority instead of Let's Encrypt or AWS Certificate -Manager. This method uses a Kubernetes `LoadBalancer` and will provision an -underlying AWS NLB. - -### Steps to follow - -Once you have chosen an approach based on the details above, select the correct tab below for instructions. - - - - -In this example, we are using multiple pods to create a High Availability -Teleport cluster. As such, we will be using `cert-manager` to centrally -provision TLS certificates using Let's Encrypt. These certificates will be -mounted into each Teleport pod, and automatically renewed and kept up to date by -`cert-manager`. - -If you are planning to use `cert-manager`, you will need to add one IAM policy to your cluster to enable it -to update Route53 records. - -### Route53 IAM policy - -This policy allows `cert-manager` to use DNS01 Let's Encrypt challenges to provision TLS certificates for your Teleport cluster. 
- -You'll need to replace these values in the policy example below: - -| Placeholder value | Replace with | -| - | - | -| | Route 53 hosted zone ID for the domain hosting your Teleport cluster | - -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Action": "route53:GetChange", - "Resource": "arn:aws:route53:::change/*" - }, - { - "Effect": "Allow", - "Action": [ - "route53:ChangeResourceRecordSets", - "route53:ListResourceRecordSets" - ], - "Resource": "arn:aws:route53:::hostedzone/" - } - ] -} -``` - -### Installing cert-manager - -If you do not have `cert-manager` already configured in the Kubernetes cluster where you are installing Teleport, -you should add the Jetstack Helm chart repository which hosts the `cert-manager` chart, and install the chart: - -```code -$ helm repo add jetstack https://charts.jetstack.io -$ helm repo update -$ helm install cert-manager jetstack/cert-manager \ ---create-namespace \ ---namespace cert-manager \ ---set installCRDs=true \ ---set global.leaderElection.namespace=cert-manager \ ---set extraArgs="{--issuer-ambient-credentials}" # required to automount ambient AWS credentials when using an Issuer -``` - -Once `cert-manager` is installed, you should create and add an `Issuer`. 
- -You'll need to replace these values in the `Issuer` example below: - -| Placeholder value | Replace with | -| - | - | -| | An email address to receive communications from Let's Encrypt | -| | The name of the Route 53 domain hosting your Teleport cluster | -| | AWS region where the cluster is running | -| | Route 53 hosted zone ID for the domain hosting your Teleport cluster | - -```yaml -cat << EOF > aws-issuer.yaml -apiVersion: cert-manager.io/v1 -kind: Issuer -metadata: - name: letsencrypt-production - namespace: teleport -spec: - acme: - email: - server: https://acme-v02.api.letsencrypt.org/directory - privateKeySecretRef: - name: letsencrypt-production - solvers: - - selector: - dnsZones: - - "" - dns01: - route53: - region: - hostedZoneID: -EOF -``` - -After you have created the `Issuer` and updated the values, add it to your cluster using `kubectl`: - -```code -$ kubectl create namespace -$ kubectl label namespace teleport 'pod-security.kubernetes.io/enforce=baseline' -$ kubectl --namespace create -f aws-issuer.yaml -``` - - - -In this step, you will configure Teleport to use AWS Certificate Manager (ACM) -to provision your Teleport instances with TLS credentials. - - - You must either follow the [AWS-maintained documentation on installing the AWS Load Balancer Controller](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html) - or already have a working installation of the AWS LB controller before continuing with these instructions. Failure to do this will result in an unusable Teleport cluster. 
- - Assuming you follow the AWS guide linked above, you can check whether the AWS LB controller is running in your cluster by looking - for pods with the `app.kubernetes.io/name=aws-load-balancer-controller` label: - - ```code - $ kubectl get pods -A -l app.kubernetes.io/name=aws-load-balancer-controller - NAMESPACE NAME READY STATUS RESTARTS AGE - kube-system aws-load-balancer-controller-655f647b95-5vz56 1/1 Running 0 109d - kube-system aws-load-balancer-controller-655f647b95-b4brx 1/1 Running 0 109d - ``` - - You can also check whether `alb` is registered as an `IngressClass` in your cluster: - - ```code - $ kubectl get ingressclass - NAME CONTROLLER PARAMETERS AGE - alb ingress.k8s.aws/alb 109d - ``` - - -To use ACM to handle TLS, we will add annotations to the chart values in the section below specifying -the ACM certificate ARN to use, the port it should be served on, and other ALB configuration -parameters. - -Replace -with your actual ACM certificate ARN. - - - - -You can configure the `teleport-cluster` Helm chart to secure the Teleport Web -UI using existing TLS credentials within a Kubernetes secret. - -Use the following command to create your secret: - -```code -$ kubectl -n create secret tls my-tls-secret --cert=/path/to/cert/file --key=/path/to/key/file -``` - -Edit your `aws-values.yaml` file (created below) to refer to the name of your secret: - -```yaml - tls: - existingSecretName: my-tls-secret -``` - - - - -## Step 5/7. Set values to configure the cluster - -If you run Teleport Enterprise, you will need to create a secret that contains -your Teleport license information before you can install Teleport in your -Kubernetes cluster. - -(!docs/pages/includes/enterprise/obtainlicense.mdx!) - -Create a secret from your license file. Teleport will automatically discover -this secret as long as your file is named `license.pem`.
- -```code -$ kubectl -n create secret generic license --from-file=license.pem -``` - -Next, configure the `teleport-cluster` Helm chart to use the `aws` mode. Create -a file called `aws-values.yaml` and write the values you've chosen above to it: - - - -```yaml -chartMode: aws -clusterName: # Name of your cluster. Use the FQDN you intend to configure in DNS below. -proxyListenerMode: multiplex -aws: - region: # AWS region - backendTable: # DynamoDB table to use for the Teleport backend - auditLogTable: # DynamoDB table to use for the Teleport audit log (must be different to the backend table) - auditLogMirrorOnStdout: false # Whether to mirror audit log entries to stdout in JSON format (useful for external log collectors) - sessionRecordingBucket: # S3 bucket to use for Teleport session recordings - backups: true # Whether or not to turn on DynamoDB backups - dynamoAutoScaling: false # Whether Teleport should configure DynamoDB's autoscaling. -highAvailability: - replicaCount: 2 # Number of replicas to configure - certManager: - enabled: true # Enable cert-manager support to get TLS certificates - issuerName: letsencrypt-production # Name of the cert-manager Issuer to use (as configured above) -# Indicate that this is a Teleport Enterprise deployment. Set to false for -# Teleport Community Edition. -enterprise: true -# If you are running Kubernetes 1.23 or above, disable PodSecurityPolicies -podSecurityPolicy: - enabled: false -``` - -If using an AWS PCA with cert-manager, you will need to -[ensure you set](../../../reference/helm-reference/teleport-cluster.mdx) -`highAvailability.certManager.addCommonName: true` in your values file. 
You will also need to get the certificate authority -certificate for the CA (`aws acm-pca get-certificate-authority-certificate --certificate-authority-arn `), -upload the full certificate chain to a secret, and -[reference the secret](../../../reference/helm-reference/teleport-cluster.mdx) -with `tls.existingCASecretName` in the values file. - - - -```yaml -chartMode: aws -clusterName: # Name of your cluster. Use the FQDN you intend to configure in DNS below. -proxyListenerMode: multiplex -service: - type: ClusterIP -aws: - region: # AWS region - backendTable: # DynamoDB table to use for the Teleport backend - auditLogTable: # DynamoDB table to use for the Teleport audit log (must be different to the backend table) - auditLogMirrorOnStdout: false # Whether to mirror audit log entries to stdout in JSON format (useful for external log collectors) - sessionRecordingBucket: # S3 bucket to use for Teleport session recordings - backups: true # Whether or not to turn on DynamoDB backups - dynamoAutoScaling: false # Whether Teleport should configure DynamoDB's autoscaling. -highAvailability: - replicaCount: 2 # Number of replicas to configure -# Indicate that this is a Teleport Enterprise deployment. Set to false for -# Teleport Community Edition. 
-enterprise: true -ingress: - enabled: true - spec: - ingressClassName: alb -annotations: - ingress: - alb.ingress.kubernetes.io/target-type: ip - alb.ingress.kubernetes.io/backend-protocol: HTTPS - alb.ingress.kubernetes.io/scheme: internet-facing - alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=350 - alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS - alb.ingress.kubernetes.io/success-codes: 200,301,302 - # Replace with your AWS certificate ARN - alb.ingress.kubernetes.io/certificate-arn: "" -# If you are running Kubernetes 1.23 or above, disable PodSecurityPolicies -podSecurityPolicy: - enabled: false -``` - -To use an internal AWS Application Load Balancer (as opposed to an internet-facing ALB), you should -edit the `alb.ingress.kubernetes.io/scheme` annotation: - - ```yaml - alb.ingress.kubernetes.io/scheme: internal - ``` - -To automatically redirect HTTP requests on port 80 to HTTPS requests on port 443, you -can also optionally provide these two values under `annotations.ingress`: - - ```yaml - alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]' - alb.ingress.kubernetes.io/ssl-redirect: '443' - ``` - - - - -Install the chart with the values from your `aws-values.yaml` file using this command: - -```code -$ helm install teleport/teleport-cluster \ - --create-namespace \ - --namespace \ - -f aws-values.yaml -``` - - - You cannot change the `clusterName` after the cluster is configured, so make sure you choose wisely. You should use the fully-qualified domain name that you'll use for external access to your Teleport cluster. 
- - -Once the chart is installed, you can use `kubectl` commands to view the deployment (example using `cert-manager`): - -```code -$ kubectl --namespace get all - -NAME READY STATUS RESTARTS AGE -pod/teleport-auth-57989d4cbd-4q2ds 1/1 Running 0 22h -pod/teleport-auth-57989d4cbd-rtrzn 1/1 Running 0 22h -pod/teleport-proxy-c6bf55cfc-w96d2 1/1 Running 0 22h -pod/teleport-proxy-c6bf55cfc-z256w 1/1 Running 0 22h - -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -service/teleport LoadBalancer 10.40.11.180 xxxxx.elb.us-east-1.amazonaws.com 443:30258/TCP 22h -service/teleport-auth ClusterIP 10.40.8.251 3025/TCP,3026/TCP 22h -service/teleport-auth-v11 ClusterIP None 22h -service/teleport-auth-v12 ClusterIP None 22h - -NAME READY UP-TO-DATE AVAILABLE AGE -deployment.apps/teleport-auth 2/2 2 2 22h -deployment.apps/teleport-proxy 2/2 2 2 22h - -NAME DESIRED CURRENT READY AGE -replicaset.apps/teleport-auth-57989d4cbd 2 2 2 22h -replicaset.apps/teleport-proxy-c6bf55cfc 2 2 2 22h -``` - -## Step 6/7. Set up DNS - -You'll need to set up a DNS `A` record for `teleport.example.com`. In our example, this record is an alias to an ELB. - -
-Enrolling applications with Teleport? - -(!docs/pages/includes/dns-app-access.mdx!) - -
- -Here's how to do this in a hosted zone with Amazon Route 53: - - - - -```code -# Change these parameters if you altered them above -$ NAMESPACE='' -$ RELEASE_NAME='' - -# DNS settings (change as necessary) -$ MYZONE_DNS='' -$ MYDNS='' -$ MY_CLUSTER_REGION='' - -# Find AWS Zone ID and ELB Zone ID -$ MYZONE="$(aws route53 list-hosted-zones-by-name --dns-name="${MYZONE_DNS?}" | jq -r '.HostedZones[0].Id' | sed s_/hostedzone/__)" -$ MYELB="$(kubectl --namespace "${NAMESPACE?}" get "service/${RELEASE_NAME?}-proxy" -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')" -$ MYELB_NAME="${MYELB%%-*}" -$ MYELB_ZONE="$(aws elbv2 describe-load-balancers --region "${MY_CLUSTER_REGION?}" --names "${MYELB_NAME?}" | jq -r '.LoadBalancers[0].CanonicalHostedZoneId')" - -# Create a JSON file changeset for AWS. -$ jq -n --arg dns "${MYDNS?}" --arg elb "${MYELB?}" --arg elbz "${MYELB_ZONE?}" \ - '{ - "Comment": "Create records", - "Changes": [ - { - "Action": "CREATE", - "ResourceRecordSet": { - "Name": $dns, - "Type": "A", - "AliasTarget": { - "HostedZoneId": $elbz, - "DNSName": ("dualstack." + $elb), - "EvaluateTargetHealth": false - } - } - }, - { - "Action": "CREATE", - "ResourceRecordSet": { - "Name": ("*." + $dns), - "Type": "A", - "AliasTarget": { - "HostedZoneId": $elbz, - "DNSName": ("dualstack." + $elb), - "EvaluateTargetHealth": false - } - } - } - ] - }' > myrecords.json - -# Review records before applying. 
-$ cat myrecords.json | jq -# Apply the records and capture change id -$ CHANGEID="$(aws route53 change-resource-record-sets --hosted-zone-id "${MYZONE?}" --change-batch file://myrecords.json | jq -r '.ChangeInfo.Id')" - -# Verify that change has been applied -$ aws route53 get-change --id "${CHANGEID?}" | jq '.ChangeInfo.Status' -# "INSYNC" -``` - - - - -```code -# Change these parameters if you altered them above -$ NAMESPACE='' -$ RELEASE_NAME='' - -# DNS settings (change as necessary) -$ MYZONE_DNS='' -$ MYDNS='' -$ MY_CLUSTER_REGION='' - -# Find AWS Zone ID and Ingress Controller ALB Zone ID -$ MYZONE="$(aws route53 list-hosted-zones-by-name --dns-name="${MYZONE_DNS?}" | jq -r '.HostedZones[0].Id' | sed s_/hostedzone/__)" -$ MYELB="$(kubectl --namespace "${NAMESPACE?}" get "ingress/${RELEASE_NAME?}-proxy" -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')" -$ MYELB_ROOT="${MYELB%%.*}" -$ MYELB_NAME="${MYELB_ROOT%-*}" -$ MYELB_ZONE="$(aws elbv2 describe-load-balancers --region "${MY_CLUSTER_REGION?}" --names "${MYELB_NAME?}" | jq -r '.LoadBalancers[0].CanonicalHostedZoneId')" - -# Create a JSON file changeset for AWS. -$ jq -n --arg dns "${MYDNS?}" --arg elb "${MYELB?}" --arg elbz "${MYELB_ZONE?}" \ - '{ - "Comment": "Create records", - "Changes": [ - { - "Action": "CREATE", - "ResourceRecordSet": { - "Name": $dns, - "Type": "A", - "AliasTarget": { - "HostedZoneId": $elbz, - "DNSName": ("dualstack." + $elb), - "EvaluateTargetHealth": false - } - } - }, - { - "Action": "CREATE", - "ResourceRecordSet": { - "Name": ("*." + $dns), - "Type": "A", - "AliasTarget": { - "HostedZoneId": $elbz, - "DNSName": ("dualstack." + $elb), - "EvaluateTargetHealth": false - } - } - } - ] - }' > myrecords.json - -# Review records before applying. 
-$ cat myrecords.json | jq -# Apply the records and capture change id -$ CHANGEID="$(aws route53 change-resource-record-sets --hosted-zone-id "${MYZONE?}" --change-batch file://myrecords.json | jq -r '.ChangeInfo.Id')" - -# Verify that change has been applied -$ aws route53 get-change --id "${CHANGEID?}" | jq '.ChangeInfo.Status' -# "INSYNC" -``` - - - - -## Step 7/7. Create a Teleport user - -Create a user to be able to log into Teleport. This needs to be done on the Teleport Auth Service, -so we can run the command using `kubectl`: - - - -```code -$ kubectl --namespace exec deploy/-auth -- tctl users add test --roles=access,editor - -User "test" has been created but requires a password. Share this URL with the user to complete user setup, link is valid for 1h: -https://teleport.example.com:443/web/invite/91cfbd08bc89122275006e48b516cc68 - -NOTE: Make sure teleport.example.com:443 points at a Teleport proxy that users can access. -``` - - -```code -$ kubectl --namespace exec deploy/-auth -- tctl users add test --roles=access,editor,reviewer - -User "test" has been created but requires a password. Share this URL with the user to complete user setup, link is valid for 1h: -https://teleport.example.com:443/web/invite/91cfbd08bc89122275006e48b516cc68 - -NOTE: Make sure teleport.example.com:443 points at a Teleport proxy that users can access. -``` - - - -Load the user creation link to create a password and set up multi-factor authentication for the Teleport user via the web UI. - -### High Availability - -In this guide, we have configured two replicas. This can be changed after cluster creation by altering the `highAvailability.replicaCount` -value [using `helm upgrade` as detailed below](#upgrading-the-cluster-after-deployment). - -### Upgrading the cluster after deployment - -To make changes to your Teleport cluster after deployment, you can use `helm upgrade`. 
- -Helm defaults to using the latest version of the chart available in the repo, which will also correspond to the latest -version of Teleport. You can make sure that the repo is up to date by running `helm repo update`. - -Here's an example where we set the chart to use 2 replicas: - -Edit your `aws-values.yaml` file from above and make the appropriate changes: - -```yaml -highAvailability: - replicaCount: 2 -``` - -Upgrade the deployment with the values from your `aws-values.yaml` file using this command: - -```code -$ helm upgrade teleport/teleport-cluster \ - --namespace \ - -f aws-values.yaml -``` - - - To change `chartMode`, `clusterName`, or any `aws` settings, you must first uninstall the existing chart and then install a new version with the appropriate values. - - -Then perform a cluster upgrade with the new values: - -```code -$ helm upgrade teleport/teleport-cluster \ - --namespace \ - -f aws-values.yaml -``` - -## Uninstalling Teleport - -To uninstall the `teleport-cluster` chart, use `helm uninstall `. For example: - -```code -$ helm --namespace uninstall -``` - -### Uninstalling cert-manager - -If you want to remove the `cert-manager` installation later, you can use this command: - -```code -$ helm --namespace cert-manager uninstall cert-manager -``` - -## Troubleshooting - -### AWS quotas - -(!docs/pages/includes/aws-quotas.mdx!) - -## Next steps - -Now that you have deployed a Teleport cluster, read the [Manage -Access](../../access-controls/access-controls.mdx) section to get started enrolling -users and setting up RBAC. - -See the [high availability section of our Helm chart reference](../../../reference/helm-reference/teleport-cluster.mdx) for more details on high availability. - -Read the [`cert-manager` documentation](https://cert-manager.io/docs/). 
- diff --git a/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/azure.mdx b/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/azure.mdx deleted file mode 100644 index 7cb1e1721f90e..0000000000000 --- a/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/azure.mdx +++ /dev/null @@ -1,355 +0,0 @@ ---- -title: Running an HA Teleport cluster using Microsoft Azure, AKS, and Helm -description: Install and configure an HA Teleport cluster using a Microsoft Azure AKS cluster. ---- - -In this guide, we'll go through how to set up a High Availability Teleport -cluster with multiple replicas in Kubernetes using Teleport Helm charts and -Microsoft Azure managed services (Kubernetes Services, Database for PostgreSQL, -Blob Storage). - - -If you are already running Teleport on another platform, you can use your -existing Teleport deployment to access your Kubernetes cluster. [Follow our -guide](../../../enroll-resources/kubernetes-access/getting-started.mdx) to connect your Kubernetes -cluster to Teleport. - - -(!docs/pages/includes/cloud/call-to-action.mdx!) - -## Prerequisites - -(!docs/pages/includes/kubernetes-access/helm/teleport-cluster-prereqs.mdx!) - -In addition, you will need `azure-cli` 2.51 or later to follow along with these -instructions. Reference the Azure docs on [how to install the Azure -CLI](https://learn.microsoft.com/cli/azure/install-azure-cli). - -After installing it, make sure you are logged in by typing `az login`. This -guide assumes that your user has permissions to create Azure Database for -PostgreSQL instances, Azure Blob Storage accounts, and Managed Identities, and -has the ability to add role assignments for those. You will also need an Azure -DNS zone, and access to an AKS cluster with cert-manager -[installed](https://cert-manager.io/docs/installation/) and configured to [issue -certificates for said Azure DNS -zone](https://cert-manager.io/docs/configuration/acme/dns01/azuredns/).
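If your cluster does not yet have an issuer for your Azure DNS zone, a `ClusterIssuer` might look like the following sketch. This is an illustration only: it assumes cert-manager is set up for workload identity, and the email address, subscription ID, DNS resource group, zone name, and managed identity client ID are all placeholder values you must replace.

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    # Placeholder contact address for Let's Encrypt expiry notifications
    email: user@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-production
    solvers:
      - dns01:
          azureDNS:
            # Placeholder values: use your own subscription, DNS resource
            # group, and hosted zone
            subscriptionID: 00000000-0000-0000-0000-000000000000
            resourceGroupName: dns-rg
            hostedZoneName: example.com
            environment: AzurePublicCloud
            managedIdentity:
              clientID: 11111111-1111-1111-1111-111111111111
```

See the cert-manager Azure DNS documentation linked above for the full set of solver options.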
- -In this guide we'll use [workload -identity](https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview) -to authenticate Teleport to PostgreSQL and Blob Storage, so you'll need to -[enable workload identity and the OIDC issuer in your AKS -cluster](https://learn.microsoft.com/en-us/azure/aks/workload-identity-deploy-cluster#update-an-existing-aks-cluster) -if they're not enabled already: - -```code -$ az aks update --resource-group --name --enable-oidc-issuer --enable-workload-identity -``` - -## Step 1/5. Add the Teleport Helm chart repository - -(!docs/pages/includes/kubernetes-access/helm/helm-repo-add.mdx!) - -## Step 2/5. Set up PostgreSQL and Blob Storage - -For convenience, we'll create all the resources necessary in a brand new -resource group; if you want to use an existing one, you can skip this step. -Assign to your Azure region: - -```code -$ az group create --name --location -``` - -We're going to need a Managed Identity for Teleport to use these services. - -```code -$ az identity create --resource-group --name teleport-id -``` - -The recommended HA deployment of Teleport on Azure stores the cluster state and -the audit log entries in a PostgreSQL instance. In this guide we'll create a -publicly accessible one, but you can restrict it to your AKS cluster's IP -address, or you can create it attached to the same virtual network that the -cluster is using, instead. - -Depending on your region, you might be able to use `ZoneRedundant` high -availability, or you might have to use `SameZone` high availability. 
- -```code -$ az postgres flexible-server create --resource-group --name \ - --active-directory-auth Enabled --password-auth Disabled \ - --version 15 --high-availability SameZone --public-access All -$ az postgres flexible-server parameter set --resource-group --name \ - --name wal_level --value logical -$ az postgres flexible-server restart --resource-group --name -$ az postgres flexible-server ad-admin create --resource-group --server-name \ - --display-name --type ServicePrincipal \ - --object-id "$(az identity show --resource-group --name teleport-id --query principalId -o tsv)" -``` - -Teleport will store session recordings in a Blob Storage account. Optionally, -access can be restricted to just the AKS outbound address, or the account can be -made part of the virtual network that the AKS cluster is using. - -```code -$ az storage account create --resource-group --name "teleport-blob" \ - --allow-blob-public-access false -$ az role assignment create --role "Storage Blob Data Owner" --assignee-principal-type ServicePrincipal \ - --assignee-object-id "$(az identity show --resource-group --name teleport-id --query principalId -o tsv)" \ - --scope "$(az storage account show --resource-group --name --query id -o tsv)" -``` - -We'll use Workload Identity to authenticate to those services, so we'll add -federated credentials for the Teleport service account used by the Auth Service. - -```code -$ az identity federated-credential create --resource-group --identity-name teleport-id \ - --name aks --audience api://AzureADTokenExchange \ - --subject system:serviceaccount:: \ - --issuer "$(az aks show --resource-group --name --query oidcIssuerProfile.issuerUrl -o tsv)" -``` - -## Step 3/5. Set values to configure the cluster - -
-License Secret - -Before you can install Teleport Enterprise in your Kubernetes cluster, you will need to -create a secret that contains your Teleport license information. - -(!docs/pages/includes/enterprise/obtainlicense.mdx!) - -Create a secret from your license file. Teleport will automatically discover -this secret as long as your file is named `license.pem`. - -```code -$ kubectl create namespace -$ kubectl -n create secret generic license --from-file=license.pem -``` - -
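Before moving on, you can confirm that the license secret exists. This is a sketch; `<namespace>` stands in for the namespace you created above:

```code
$ kubectl -n <namespace> get secret license
NAME      TYPE     DATA   AGE
license   Opaque   1      10s
```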
- -Now we'll configure the `teleport-cluster` Helm chart to use the `azure` mode. - -First get the client ID for the `teleport-id` identity: -```code -$ az identity show --resource-group --name teleport-id --query clientId -o tsv - -``` - -Then create a file called `azure-values.yaml` containing the values you've selected above: - - - - -```yaml -chartMode: azure -# Name of your cluster. Use the FQDN you intend to configure in DNS later -clusterName: teleport.example.com -azure: - databaseHost: ".postgres.database.azure.com" - databaseUser: "" - sessionRecordingStorageAccount: ".blob.core.windows.net" - # Whether to mirror audit log entries to stdout in JSON format (useful for external log collectors) - auditLogMirrorOnStdout: false - clientID: "" -highAvailability: - # Number of replicas to configure - replicaCount: 2 - certManager: - # Enable cert-manager support to get TLS certificates - enabled: true - # Name of the cert-manager GlobalIssuer or Issuer to use - issuerName: letsencrypt-production - issuerKind: ClusterIssuer -# If you are running Kubernetes 1.23 or above, disable PodSecurityPolicies -podSecurityPolicy: - enabled: false -``` - - - - -```yaml -chartMode: azure -# Name of your cluster. 
Use the FQDN you intend to configure in DNS later -clusterName: teleport.example.com -azure: - databaseHost: ".postgres.database.azure.com" - databaseUser: "" - sessionRecordingStorageAccount: ".blob.core.windows.net" - # Whether to mirror audit log entries to stdout in JSON format (useful for external log collectors) - auditLogMirrorOnStdout: false - clientID: "" -highAvailability: - # Number of replicas to configure - replicaCount: 2 - certManager: - # Enable cert-manager support to get TLS certificates - enabled: true - # Name of the cert-manager GlobalIssuer or Issuer to use - issuerName: letsencrypt-production - issuerKind: ClusterIssuer -# If you are running Kubernetes 1.23 or above, disable PodSecurityPolicies -podSecurityPolicy: - enabled: false -# Indicate that this is a Teleport Enterprise deployment -enterprise: true -``` - - - - -Install the chart with the values from your `azure-values.yaml` file using this command: - -```code -$ helm install teleport/teleport-cluster \ - --create-namespace --namespace \ - --values azure-values.yaml -``` - - - You cannot change the `clusterName` after the cluster is configured, so make sure you choose wisely. We recommend using the fully-qualified domain name that you'll use for external access to your Teleport cluster. 
- - -Once the chart is installed, you can use `kubectl` commands to view the deployment: - -```code -$ kubectl --namespace get all - -NAME READY STATUS RESTARTS AGE -pod/teleport-auth-57989d4cb-4q2ds 1/1 Running 0 22h -pod/teleport-auth-57989d4cb-rtrzn 1/1 Running 0 22h -pod/teleport-proxy-c6bf55cfc-w96d2 1/1 Running 0 22h -pod/teleport-proxy-c6bf55cfc-z256w 1/1 Running 0 22h - -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -service/teleport LoadBalancer 10.40.11.180 34.138.177.11 443:30258/TCP,3023:31802/TCP,3026:32182/TCP,3024:30101/TCP,3036:30302/TCP 22h -service/teleport-auth ClusterIP 10.40.8.251 3025/TCP,3026/TCP 22h -service/teleport-auth-v13 ClusterIP None 22h -service/teleport-auth-v14 ClusterIP None 22h - -NAME READY UP-TO-DATE AVAILABLE AGE -deployment.apps/teleport-auth 2/2 2 2 22h -deployment.apps/teleport-proxy 2/2 2 2 22h - -NAME DESIRED CURRENT READY AGE -replicaset.apps/teleport-auth-57989d4cb 2 2 2 22h -replicaset.apps/teleport-proxy-c6bf55cfc 2 2 2 22h -``` - -## Step 4/5. Set up DNS - -We'll now set up DNS `A` records for `teleport.example.com` and -`*.teleport.example.com`, using your Azure DNS zone rooted at `example.com`. If -you're using a different DNS hosting service, follow their instructions instead. - -```code -$ PREFIX=teleport -$ ZONE=example.com -$ ZONE_RG=dns-rg -$ external_ip="$(kubectl --namespace get service/ -o jsonpath='{.status.loadBalancer.ingress[*].ip}')" -$ az network dns record-set a add-record --resource-group ${ZONE_RG} --zone-name ${ZONE} --record-set-name ${PREFIX} --ipv4-address "${external_ip}" -$ az network dns record-set a add-record --resource-group ${ZONE_RG} --zone-name ${ZONE} --record-set-name "*.${PREFIX}" --ipv4-address "${external_ip}" -``` - -## Step 5/5. Create a Teleport user - -Create a user to be able to log into Teleport. 
This needs to be done on the Teleport Auth Service, -so we can run the command using `kubectl`: - - - - -```code -$ kubectl --namespace exec deploy/-auth -- tctl users add test --roles=access,editor - -User "test" has been created but requires a password. Share this URL with the user to complete user setup, link is valid for 1h: -https://teleport.example.com:443/web/invite/91cfbd08bc89122275006e48b516cc68 - -NOTE: Make sure teleport.example.com:443 points at a Teleport proxy that users can access. -``` - - - - -```code -$ kubectl --namespace exec deploy/-auth -- tctl users add test --roles=access,editor,reviewer - -User "test" has been created but requires a password. Share this URL with the user to complete user setup, link is valid for 1h: -https://teleport.example.com:443/web/invite/91cfbd08bc89122275006e48b516cc68 - -NOTE: Make sure teleport.example.com:443 points at a Teleport proxy that users can access. -``` - - - - -Load the user creation link to create a password and set up multi-factor authentication for the Teleport user via the web UI. - -### High Availability - -In this guide, we have configured 2 replicas. This can be changed after cluster creation by altering the `highAvailability.replicaCount` -value [using `helm upgrade` as detailed below](#upgrading-the-cluster-after-deployment). - -## Upgrading the cluster after deployment - -To make changes to your Teleport cluster after deployment, you can use `helm upgrade`. - -Helm defaults to using the latest version of the chart available in the repo, which will also correspond to the latest -version of Teleport. You can make sure that the repo is up to date by running `helm repo update`. 
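To see which chart versions are available after updating the repo, you can search it. This is a quick check, assuming the repository was added under the alias `teleport` as in Step 1:

```code
$ helm repo update
$ helm search repo teleport/teleport-cluster --versions | head -5
```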
- -If you want to use a different version of Teleport, pass the `--version` argument to Helm: - -```code -$ helm upgrade --version 14.0.0 \ - teleport/teleport-cluster \ - --namespace \ - -f azure-values.yaml -``` - -Here's an example where we set the chart to use 3 replicas: - - - - Edit your `azure-values.yaml` file from above and make the appropriate changes. - - Upgrade the deployment with the values from your `azure-values.yaml` file using this command: - - ```code - $ helm upgrade teleport/teleport-cluster \ - --namespace \ - -f azure-values.yaml - ``` - - - - Run this command, editing your command line parameters as appropriate: - - ```code - $ helm upgrade teleport/teleport-cluster \ - --namespace \ - --set highAvailability.replicaCount=3 - ``` - - - - - To change `chartMode`, `clusterName` or any `azure` settings, you must first uninstall the existing chart and then install - a new version with the appropriate values. - - -## Uninstalling Teleport - -To uninstall the `teleport-cluster` chart, use `helm uninstall`. For example: - -```code -$ helm --namespace uninstall -``` - -## Next steps - -Now that you have deployed a Teleport cluster, read the [Manage -Access](../../access-controls/access-controls.mdx) section to get started enrolling -users and setting up RBAC. - -See the [high availability section of our Helm chart reference](../../../reference/helm-reference/teleport-cluster.mdx) for more details on high availability. diff --git a/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/gcp.mdx b/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/gcp.mdx deleted file mode 100644 index a2fa03f6d4731..0000000000000 --- a/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/gcp.mdx +++ /dev/null @@ -1,489 +0,0 @@ ---- -title: Running an HA Teleport cluster using GCP, GKE, and Helm -description: Install and configure an HA Teleport cluster using a Google Cloud GKE cluster. 
---- - -In this guide, we'll go through how to set up a High Availability Teleport cluster with multiple replicas in Kubernetes -using Teleport Helm charts and Google Cloud Platform products (Firestore and Google Cloud Storage). - -If you are already running Teleport on another platform, you can use your -existing Teleport deployment to access your Kubernetes cluster. [Follow our -guide](../../../enroll-resources/kubernetes-access/getting-started.mdx) to connect your Kubernetes -cluster to Teleport. - -(!docs/pages/includes/cloud/call-to-action.mdx!) - -## Prerequisites - -(!docs/pages/includes/kubernetes-access/helm/teleport-cluster-prereqs.mdx!) - -## Step 1/6. Add the Teleport Helm chart repository - -(!docs/pages/includes/kubernetes-access/helm/helm-repo-add.mdx!) - - -> - The steps below apply to Google Cloud Google Kubernetes Engine (GKE) Standard deployments. - - -## Step 2/6. Google Cloud IAM configuration - -For Teleport to be able to create the Firestore collections, indexes, and the Google Cloud Storage bucket it needs, -you'll need to configure a Google Cloud service account with permissions to use these services. - -### Create an IAM role granting the `storage.buckets.create` permission - -Go to the "Roles" section of Google Cloud IAM & Admin. - -1. Click the "Create Role" button at the top. - - ![Roles section](../../../../img/helm/gcp/1-roles@1.5x.png) - -2. Fill in the details of a "Storage Bucket Creator" role (we suggest using the name `storage-bucket-creator-role`) - - ![Create role](../../../../img/helm/gcp/2-createrole@1.5x.png) - -3. Click the "Add Permissions" button. - - ![Storage bucket creator role](../../../../img/helm/gcp/3-addpermissions@1.5x.png) - -4. Use the "Filter" box to enter `storage.buckets.create` and select it in the list. - - ![Filter the list](../../../../img/helm/gcp/4-storagebucketscreate@1.5x.png) - -5. Check the `storage.buckets.create` permission in the list and click the "Add" button to add it to the role. 
- - ![Select storage.buckets.create](../../../../img/helm/gcp/5-select@1.5x.png) - -6. Once all these settings are entered successfully, click the "Create" button. - - ![Create role](../../../../img/helm/gcp/6-createrole@1.5x.png) - -### Create an IAM role granting Cloud DNS permissions - -Go to the "Roles" section of Google Cloud IAM & Admin. - -1. Click the "Create Role" button at the top. - - ![Roles section](../../../../img/helm/gcp/1-roles@1.5x.png) - -2. Fill in the details of a "DNS Updater" role (we suggest using the name `dns-updater-role`) - - ![Create role](../../../../img/helm/gcp/13-dns-createrole@1.5x.png) - -3. Click the "Add Permissions" button. - - ![DNS updater role](../../../../img/helm/gcp/3-addpermissions@1.5x.png) - -4. Use the "Filter" box to find each of the following permissions in the list - and add it. You can type things like `dns.resourceRecordSets.*` to quickly - filter the list. - - ```console - dns.resourceRecordSets.create - dns.resourceRecordSets.delete - dns.resourceRecordSets.list - dns.resourceRecordSets.update - dns.changes.create - dns.changes.get - dns.changes.list - dns.managedZones.list - ``` - -5. Once all these settings are entered successfully, click the "Create" button. - - ![Add DNS permissions](../../../../img/helm/gcp/14-dns-permissions-create@1.5x.png) - -### Create a service account for the Teleport Helm chart - - - If you already have a JSON private key for an appropriately-provisioned service account that you wish to use, you can skip this - creation process and go to the ["Create the Kubernetes secret containing the JSON private key for the service account"](#create-the-kubernetes-secret-containing-the-json-private-key-for-the-service-account) - section below. - - -Go to the "Service Accounts" section of Google Cloud IAM & Admin. - -1. Click the "Create Service Account" button at the top. - - ![Create service account](../../../../img/helm/gcp/7-serviceaccounts@1.5x.png) - -2. 
Enter details for the service account (we recommend using the name `teleport-helm`) and click the "Create" button. - - ![Enter service account details](../../../../img/helm/gcp/8-createserviceaccount@1.5x.png) - -3. In the "Grant this service account access to project" section, add these four roles: - -| Role | Purpose | -| - | - | -| storage-bucket-creator-role | Role you just created allowing creation of storage buckets | -| dns-updater-role | Role you just created allowing updates to Cloud DNS records | -| Cloud Datastore Owner | Grants permissions to create Cloud Datastore collections | -| Storage Object Admin | Allows read/write/delete of Google Cloud storage objects | - -![Add roles](../../../../img/helm/gcp/9-addroles@1.5x.png) - -4. Click the "continue" button to save these settings, then click the "create" button to create the service account. - -### Generate an access key for the service account - -Go back to the "Service Accounts" view in Google Cloud IAM & Admin. - -1. Click on the `teleport-helm` service account that you just created. - - ![Click on the service account](../../../../img/helm/gcp/10-serviceaccountdetails@1.5x.png) - -2. Click the "Keys" tab at the top and click "Add Key". Choose "JSON" and click "Create". - - ![Create JSON key](../../../../img/helm/gcp/11-createkey.png) - -3. The JSON private key will be downloaded to your computer. Take note of the filename (`bens-demos-24150b1a0a7f.json` in this example) - as you will need it shortly. - - ![Private key saved](../../../../img/helm/gcp/12-privatekey@1.5x.png) - -#### Create the Kubernetes secret containing the JSON private key for the service account - -Find the path where the JSON private key was just saved (likely your browser's default "Downloads" directory). 
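If you prefer the CLI to the console, the `teleport-helm` service account and its JSON key can also be created with `gcloud`. This is a sketch rather than the guide's canonical method: `gcpproj-123456` is a placeholder project ID, and you would repeat the policy binding for each of the four roles listed above:

```code
# Create the service account (placeholder project ID: gcpproj-123456)
$ gcloud iam service-accounts create teleport-helm --display-name="teleport-helm"
# Bind one of the roles; repeat for the other three roles in the table above
$ gcloud projects add-iam-policy-binding gcpproj-123456 \
    --member="serviceAccount:teleport-helm@gcpproj-123456.iam.gserviceaccount.com" \
    --role="projects/gcpproj-123456/roles/storage-bucket-creator-role"
# Download a JSON private key for the service account
$ gcloud iam service-accounts keys create gcp-credentials.json \
    --iam-account="teleport-helm@gcpproj-123456.iam.gserviceaccount.com"
```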
- -Use `kubectl` to create the `teleport` namespace, set its security policy, and -create the secret using the path to the JSON private key: - -```code -$ kubectl create namespace teleport -namespace/teleport created -$ kubectl label namespace teleport 'pod-security.kubernetes.io/enforce=baseline' -namespace/teleport labeled -$ kubectl --namespace teleport create secret generic teleport-gcp-credentials --from-file=gcp-credentials.json=/path/to/downloads/bens-demos-24150b1a0a7f.json -secret/teleport-gcp-credentials created -``` - - - If you installed the Teleport chart into a specific namespace, the `teleport-gcp-credentials` secret you create must also be added to the same namespace. - - - - The default name configured for the secret is `teleport-gcp-credentials`. - - If you already have a secret created, you can skip this creation process and specify the name of the secret using `gcp.credentialSecretName`. - - The credentials file stored in any secret used must have the key name `gcp-credentials.json`. - - -## Step 3/6. Install and configure cert-manager - -Reference the [cert-manager docs](https://cert-manager.io/docs/). - -In this example, we are using multiple pods to create a High Availability Teleport cluster. As such, we will be using -`cert-manager` to centrally provision TLS certificates using Let's Encrypt. These certificates will be mounted into each -Teleport pod, and automatically renewed and kept up to date by `cert-manager`. 
- -If you do not have `cert-manager` already configured in the Kubernetes cluster where you are installing Teleport, -you should add the Jetstack Helm chart repository which hosts the `cert-manager` chart, and install the chart: - -```code -$ helm repo add jetstack https://charts.jetstack.io -$ helm repo update -$ helm install cert-manager jetstack/cert-manager \ ---create-namespace \ ---namespace cert-manager \ ---set global.leaderElection.namespace=cert-manager \ ---set installCRDs=true -``` - -Once `cert-manager` is installed, you should create and add an `Issuer`. - -You'll need to replace these values in the `Issuer` example below: - -| Placeholder value | Replace with | -| - | - | -| `email@address.com` | An email address to receive communications from Let's Encrypt | -| `example.com` | The name of the Cloud DNS domain hosting your Teleport cluster | -| `gcp-project-id` | GCP project ID where the Cloud DNS domain is registered | - -```yaml -cat << EOF > gcp-issuer.yaml -apiVersion: cert-manager.io/v1 -kind: Issuer -metadata: - name: letsencrypt-production - namespace: teleport -spec: - acme: - email: email@address.com # Change this - server: https://acme-v02.api.letsencrypt.org/directory - privateKeySecretRef: - name: letsencrypt-production - solvers: - - selector: - dnsZones: - - "example.com" # Change this - dns01: - cloudDNS: - project: gcp-project-id # Change this - serviceAccountSecretRef: - name: teleport-gcp-credentials - key: gcp-credentials.json -EOF -``` - - - The secret name under `serviceAccountSecretRef` here defaults to `teleport-gcp-credentials`. - - If you have changed `gcp.credentialSecretName` in your chart values, you must also make sure it matches here. - - -After you have created the `Issuer` and updated the values, add it to your cluster using `kubectl`: - -```code -$ kubectl --namespace teleport create -f gcp-issuer.yaml -``` - -## Step 4/6. Set values to configure the cluster - -
-License Secret - -Before you can install Teleport Enterprise in your Kubernetes cluster, you will need to -create a secret that contains your Teleport license information. - -(!docs/pages/includes/enterprise/obtainlicense.mdx!) - -Create a secret from your license file. Teleport will automatically discover -this secret as long as your file is named `license.pem`. - -```code -$ kubectl -n teleport create secret generic license --from-file=license.pem -``` - -
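Before installing the chart, you can confirm that the license secret exists (assuming the `teleport` namespace used throughout this guide; the `AGE` value is illustrative):

```code
$ kubectl -n teleport get secret license
NAME      TYPE     DATA   AGE
license   Opaque   1      10s
```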
- - - If you are installing Teleport in a brand new GCP project, make sure you have enabled the - [Cloud Firestore API](https://console.cloud.google.com/apis/api/firestore.googleapis.com/overview) - and created a - [Firestore Database](https://console.cloud.google.com/firestore/welcome) - in your project before continuing. - - -Next, configure the `teleport-cluster` Helm chart to use the `gcp` mode. Create a -file called `gcp-values.yaml` file and write the values you've chosen above to -it: - - - - -```yaml -chartMode: gcp -clusterName: teleport.example.com # Name of your cluster. Use the FQDN you intend to configure in DNS below -gcp: - projectId: gcpproj-123456 # Google Cloud project ID - backendTable: teleport-helm-backend # Firestore collection to use for the Teleport backend - auditLogTable: teleport-helm-events # Firestore collection to use for the Teleport audit log (must be different to the backend collection) - auditLogMirrorOnStdout: false # Whether to mirror audit log entries to stdout in JSON format (useful for external log collectors) - sessionRecordingBucket: teleport-helm-sessions # Google Cloud Storage bucket to use for Teleport session recordings -highAvailability: - replicaCount: 2 # Number of replicas to configure - certManager: - enabled: true # Enable cert-manager support to get TLS certificates - issuerName: letsencrypt-production # Name of the cert-manager Issuer to use (as configured above) - -# If you are running Kubernetes 1.23 or above, disable PodSecurityPolicies -podSecurityPolicy: - enabled: false -``` - - - - -```yaml -chartMode: gcp -clusterName: teleport.example.com # Name of your cluster. 
Use the FQDN you intend to configure in DNS below -gcp: - projectId: gcpproj-123456 # Google Cloud project ID - backendTable: teleport-helm-backend # Firestore collection to use for the Teleport backend - auditLogTable: teleport-helm-events # Firestore collection to use for the Teleport audit log (must be different to the backend collection) - auditLogMirrorOnStdout: false # Whether to mirror audit log entries to stdout in JSON format (useful for external log collectors) - sessionRecordingBucket: teleport-helm-sessions # Google Cloud Storage bucket to use for Teleport session recordings -highAvailability: - replicaCount: 2 # Number of replicas to configure - certManager: - enabled: true # Enable cert-manager support to get TLS certificates - issuerName: letsencrypt-production # Name of the cert-manager Issuer to use (as configured above) -enterprise: true # Indicate that this is a Teleport Enterprise deployment -``` - - - - - -Install the chart with the values from your `gcp-values.yaml` file using this command: - -```code -$ helm install teleport teleport/teleport-cluster \ - --create-namespace \ - --namespace teleport \ - -f gcp-values.yaml -``` - - - You cannot change the `clusterName` after the cluster is configured, so make sure you choose wisely. We recommend using the fully-qualified domain name that you'll use for external access to your Teleport cluster. 
- - -Once the chart is installed, you can use `kubectl` commands to view the deployment: - -```code -$ kubectl --namespace teleport get all - -NAME READY STATUS RESTARTS AGE -pod/teleport-auth-57989d4cbd-4q2ds 1/1 Running 0 22h -pod/teleport-auth-57989d4cbd-rtrzn 1/1 Running 0 22h -pod/teleport-proxy-c6bf55cfc-w96d2 1/1 Running 0 22h -pod/teleport-proxy-c6bf55cfc-z256w 1/1 Running 0 22h - -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -service/teleport LoadBalancer 10.40.11.180 34.138.177.11 443:30258/TCP,3023:31802/TCP,3026:32182/TCP,3024:30101/TCP,3036:30302/TCP 22h -service/teleport-auth ClusterIP 10.40.8.251 3025/TCP,3026/TCP 22h -service/teleport-auth-v11 ClusterIP None 22h -service/teleport-auth-v12 ClusterIP None 22h - -NAME READY UP-TO-DATE AVAILABLE AGE -deployment.apps/teleport-auth 2/2 2 2 22h -deployment.apps/teleport-proxy 2/2 2 2 22h - -NAME DESIRED CURRENT READY AGE -replicaset.apps/teleport-auth-57989d4cbd 2 2 2 22h -replicaset.apps/teleport-proxy-c6bf55cfc 2 2 2 22h -``` - -## Step 5/6. Set up DNS - -You'll need to set up a DNS `A` record for `teleport.example.com`. - -
-Enrolling applications with Teleport? - -(!docs/pages/includes/dns-app-access.mdx!) - -
- -Here's how to do this using Google Cloud DNS: - -```code -# Change these parameters if you altered them above -$ NAMESPACE=teleport -$ RELEASE_NAME=teleport - -$ MYIP=$(kubectl --namespace ${NAMESPACE?} get service/${RELEASE_NAME?} -o jsonpath='{.status.loadBalancer.ingress[*].ip}') -$ MYZONE="myzone" -$ MYDNS="teleport.example.com" - -$ gcloud dns record-sets transaction start --zone="${MYZONE?}" -$ gcloud dns record-sets transaction add ${MYIP?} --name="${MYDNS?}" --ttl="300" --type="A" --zone="${MYZONE?}" -$ gcloud dns record-sets transaction add ${MYIP?} --name="*.${MYDNS?}" --ttl="300" --type="A" --zone="${MYZONE?}" -$ gcloud dns record-sets transaction describe --zone="${MYZONE?}" -$ gcloud dns record-sets transaction execute --zone="${MYZONE?}" -``` - -## Step 6/6. Create a Teleport user - -Create a user to be able to log into Teleport. This needs to be done on the Teleport Auth Service, -so we can run the command using `kubectl`: - - - -```code -$ kubectl --namespace teleport exec deploy/teleport-auth -- tctl users add test --roles=access,editor - -User "test" has been created but requires a password. Share this URL with the user to complete user setup, link is valid for 1h: -https://teleport.example.com:443/web/invite/91cfbd08bc89122275006e48b516cc68 - -NOTE: Make sure teleport.example.com:443 points at a Teleport proxy that users can access. -``` - - -```code -$ kubectl --namespace teleport exec deploy/teleport-auth -- tctl users add test --roles=access,editor,reviewer - -User "test" has been created but requires a password. Share this URL with the user to complete user setup, link is valid for 1h: -https://teleport.example.com:443/web/invite/91cfbd08bc89122275006e48b516cc68 - -NOTE: Make sure teleport.example.com:443 points at a Teleport proxy that users can access. -``` - - - - -Load the user creation link to create a password and set up multi-factor authentication for the Teleport user via the web UI. 
- -### High Availability - -In this guide, we have configured 2 replicas. This can be changed after cluster creation by altering the `highAvailability.replicaCount` -value [using `helm upgrade` as detailed below](#upgrading-the-cluster-after-deployment). - -## Upgrading the cluster after deployment - -To make changes to your Teleport cluster after deployment, you can use `helm upgrade`. - -Helm defaults to using the latest version of the chart available in the repo, which will also correspond to the latest -version of Teleport. You can make sure that the repo is up to date by running `helm repo update`. - -If you want to use a different version of Teleport, set the [`teleportVersionOverride`](../../../reference/helm-reference/teleport-cluster.mdx) value. - -Here's an example where we set the chart to use 3 replicas: - - - - Edit your `gcp-values.yaml` file from above and make the appropriate changes. - - Upgrade the deployment with the values from your `gcp-values.yaml` file using this command: - - ```code - $ helm upgrade teleport teleport/teleport-cluster \ - --namespace teleport \ - -f gcp-values.yaml - ``` - - - - Run this command, editing your command line parameters as appropriate: - - ```code - $ helm upgrade teleport teleport/teleport-cluster \ - --namespace teleport \ - --set highAvailability.replicaCount=3 - ``` - - - - - To change `chartMode`, `clusterName` or any `gcp` settings, you must first uninstall the existing chart and then install - a new version with the appropriate values. - - -## Uninstalling Teleport - -To uninstall the `teleport-cluster` chart, use `helm uninstall `. 
For example: - -```code -$ helm --namespace teleport uninstall teleport -``` - -### Uninstalling cert-manager - -If you want to remove the `cert-manager` installation later, you can use this command: - -```code -$ helm --namespace cert-manager uninstall cert-manager -``` - -## Next steps - -Now that you have deployed a Teleport cluster, read the [Manage -Access](../../access-controls/access-controls.mdx) section to get started enrolling -users and setting up RBAC. - -See the [high availability section of our Helm chart reference](../../../reference/helm-reference/teleport-cluster.mdx) for more details on high availability. - diff --git a/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/helm-deployments.mdx b/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/helm-deployments.mdx deleted file mode 100644 index adc9f5791ec92..0000000000000 --- a/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/helm-deployments.mdx +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: Guides for running Teleport using Helm -description: How to install and configure Teleport in Kubernetes using Helm -layout: tocless-doc ---- - -The `teleport-cluster` Helm chart enables you to deploy and manage a self-hosted, -high-availability Teleport cluster. This chart launches the Teleport Auth -Service, Teleport Proxy Service, and the Kubernetes infrastructure required to -support these services. The guides in this section show you how to use the -`teleport-cluster` Helm chart in your environment. - -You do not need to deploy the Auth Service and Proxy Service on Kubernetes in -order to protect a Kubernetes cluster with Teleport, and it is possible to -enroll a Kubernetes cluster on Teleport Cloud or by running the Teleport -Kubernetes Service on a Linux server. For instructions on enrolling a Kubernetes -cluster with Teleport, read the [Kubernetes -Access](../../../enroll-resources/kubernetes-access/introduction.mdx) documentation.
- -## Helm deployment guides - -These guides show you how to set up a full self-hosted Teleport deployment using -our `teleport-cluster` Helm chart. - -- [Deploy Teleport on Kubernetes](kubernetes-cluster.mdx): Run a Teleport cluster in a Kubernetes cluster using - the default configuration. This deployment is a great starting point to try a self-hosted - Teleport with minimal resources. -- [HA AWS Teleport Cluster](aws.mdx): Running an HA Teleport cluster in Kubernetes using an AWS EKS Cluster -- [HA Azure Teleport Cluster](azure.mdx): Running an HA Teleport cluster in Kubernetes using an Azure AKS Cluster -- [HA GCP Teleport Cluster](gcp.mdx): Running an HA Teleport cluster in Kubernetes using a Google Cloud GKE Cluster -- [IBM Teleport Cluster](ibm.mdx): Running a replicated Teleport cluster in Kubernetes using an IBM Cloud IKS Cluster -- [DigitalOcean Kubernetes Cluster](digitalocean.mdx): - Running Teleport on DigitalOcean Kubernetes. -- [Custom Teleport config](custom.mdx): Running a Teleport cluster in Kubernetes with a custom Teleport config - -## Migration Guides - -- [Kubernetes 1.25 and PSP removal](migration-kubernetes-1-25-psp.mdx) diff --git a/docs/pages/admin-guides/deploy-a-cluster/linux-demo.mdx b/docs/pages/admin-guides/deploy-a-cluster/linux-demo.mdx deleted file mode 100644 index caaa50c913e94..0000000000000 --- a/docs/pages/admin-guides/deploy-a-cluster/linux-demo.mdx +++ /dev/null @@ -1,214 +0,0 @@ ---- -title: Run a Self-Hosted Demo Cluster -description: This tutorial will guide you through the steps needed to install and run Teleport on a Linux server -videoBanner: O0YOPpWsn8M -tocDepth: 3 ---- - -See how a self-hosted Teleport deployment works by completing the tutorial -below. This shows you how to spin up a single-instance Teleport cluster on a -Linux server using Teleport Community Edition. Once you deploy the cluster, you -can configure RBAC, register resources, and protect your small-scale demo -environments or home lab.
- -You can also get started right away with a production-ready Teleport cluster by -signing up for a [free trial of Teleport Enterprise -Cloud](https://goteleport.com/signup/). - -![Architecture of the setup you will complete in this -guide](../../../img/linux-server-diagram.png) - -We will run the following Teleport services: - -- **Teleport Auth Service:** The certificate authority for your cluster. It - issues certificates and conducts authentication challenges. The Auth Service - is typically inaccessible outside your private network. -- **Teleport Proxy Service:** The cluster frontend, which handles user requests, - forwards user credentials to the Auth Service, and communicates with Teleport - instances that enable access to specific resources in your infrastructure. -- **Teleport SSH Service:** An SSH server implementation that takes advantage of - Teleport's short-lived certificates, sophisticated RBAC, session recording, - and other features. - -## Prerequisites - -You will need the following to deploy a demo Teleport cluster. If your -environment doesn't meet the prerequisites, you can get started with Teleport by -signing up for a [free trial of Teleport Enterprise -Cloud](https://goteleport.com/signup/). - -If you want to get a feel for Teleport commands and capabilities without setting -up any infrastructure, take a look at the browser-based [Teleport -Labs](https://goteleport.com/labs). - -- A Linux host with only port `443` open to ingress traffic. You must be able - to install and run software on the host. Either configure access to the host - via SSH for the initial setup (and open an SSH port in addition to port `443`) - or enter the commands in this guide into an Amazon EC2 [user data - script](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html), - Google Compute Engine [startup - script](https://cloud.google.com/compute/docs/instances/startup-scripts), - or similar.
- - For a quick demo environment you can use to follow this guide, consider - installing our DigitalOcean 1-Click droplet. View the installation page on - [DigitalOcean - Marketplace](https://marketplace.digitalocean.com/apps/teleport). Once your - droplet is ready, SSH into the droplet and follow the configuration wizard. - -- A multi-factor authenticator app such as [Authy](https://authy.com/download/), - [Google Authenticator](https://www.google.com/landing/2step/), or - [1Password](https://support.1password.com/one-time-passwords/). - -You must also have **one** of the following: -- A registered domain name. -- An authoritative DNS nameserver managed by your organization, plus an existing - certificate authority. If using this approach, ensure that your browser is - configured to use your organization's nameserver. - -## Step 1/4. Configure DNS - -Teleport uses TLS to provide secure access to its Proxy Service and Auth -Service, and this requires a domain name that clients can use to verify -Teleport's certificate. Set up two DNS `A` records, each pointing to the IP -address of your Linux host. Assuming `teleport.example.com` is your domain name, -set up records for: - -|Domain|Reason| -|---|---| -|`teleport.example.com`|Traffic to the Proxy Service from users and services.| -|`*.teleport.example.com`|Traffic to web applications registered with Teleport. Teleport issues a subdomain of your cluster's domain name to each application.| - -## Step 2/4. Set up Teleport on your Linux host - -In this step, you will log into your Linux host, download the Teleport binary, -generate a Teleport configuration file, and start the Teleport Auth Service, -Proxy Service, and SSH Service on the host. 
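Before installing anything, you can confirm that the DNS records from Step 1 resolve to your Linux host. This is a quick sanity check; `192.0.2.10` stands in for your host's public IP address, and `app.teleport.example.com` is just one example of a wildcard subdomain:

```code
$ dig +short teleport.example.com
192.0.2.10
$ dig +short app.teleport.example.com
192.0.2.10
```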
- -### Install Teleport - -On your Linux host, run the following command to install the Teleport binary: - -```code -$ curl (=teleport.teleport_install_script_url=) | bash -s (=teleport.version=) -``` - -### Configure Teleport - -Generate a configuration file for Teleport using the `teleport configure` command. -This command requires information about a TLS certificate and private key. - -(!docs/pages/includes/tls-certificate-setup.mdx!) - -### Start Teleport - -(!docs/pages/includes/start-teleport.mdx!) - -Access the Teleport Web UI via HTTPS at the domain you created earlier at - and accept the terms of using Teleport -Community Edition. - -You should see a welcome screen similar to -the following: - -![Teleport Welcome Screen](../../../img/quickstart/welcome.png) - -## Step 3/4. Create a Teleport user and set up multi-factor authentication - -In this step, we'll create a new Teleport user, `teleport-admin`, which is -allowed to log into SSH hosts as any of the principals `root`, `ubuntu`, or -`ec2-user`. - -On your Linux host, run the following command: - -```code -# tctl is an administrative tool that is used to configure Teleport's auth service. -$ sudo tctl users add teleport-admin --roles=editor,access --logins=root,ubuntu,ec2-user -``` - -The command prints a message similar to the following: - -```text -User "teleport-admin" has been created but requires a password. Share this URL with the user to complete user setup, link is valid for 1h: -https://teleport.example.com:443/web/invite/123abc456def789ghi123abc456def78 - -NOTE: Make sure teleport.example.com:443 points at a Teleport proxy which users can access. -``` - -Visit the provided URL in order to create your Teleport user. - - - - The users that you specify in the `logins` flag (e.g., `root`, `ubuntu` and - `ec2-user` in our examples) must exist on your Linux host. Otherwise, you - will get authentication errors later in this tutorial. 
- - If a user does not already exist, you can create it with `adduser ` or - use [host user creation](../../enroll-resources/server-access/guides/host-user-creation.mdx). - - If you do not have the permission to create new users on the Linux host, run - `tctl users add teleport $(whoami)` to explicitly allow Teleport to - authenticate as the user that you have currently logged in as. - - - -Teleport enforces the use of multi-factor authentication by default. It supports -one-time passwords (OTP) and multi-factor authenticators (WebAuthn). In this -guide, you will need to enroll an OTP authenticator application using the QR -code on the Teleport welcome screen. - -
-Logging in via the CLI - -In addition to Teleport's Web UI, you can access resources in your -infrastructure via the `tsh` client tool. - -Install `tsh` on your local workstation: - -(!docs/pages/includes/install-tsh.mdx!) - -Log in to receive short-lived certificates from Teleport. Replace - with your Teleport cluster's public address -as configured above: - -```code -$ tsh login --proxy= --user=teleport-admin -> Profile URL: https://teleport.example.com:443 - Logged in as: teleport-admin - Cluster: teleport.example.com - Roles: access, editor - Logins: root, ubuntu, ec2-user - Kubernetes: enabled - Valid until: 2022-04-26 03:04:46 -0400 EDT [valid for 12h0m0s] - Extensions: permit-agent-forwarding, permit-port-forwarding, permit-pty -``` - -
- -## Step 4/4. Enroll your infrastructure - -Once you finish setting up your user, you will see your SSH server in the -Teleport Web UI: - -![Teleport Web UI showing an SSH server](../../../img/new-ssh-server.png) - -With Teleport, you can protect all of the resources in your infrastructure -behind a single identity-aware access proxy, including servers, databases, -applications, Kubernetes clusters, Windows desktops, and cloud provider APIs. - -To enroll a resource with Teleport, visit the Web UI and click **Enroll New -Resource**. The Web UI will show you the steps you can take to enroll your new -resource. - -## Next step: deploy Teleport Agents - -Teleport **Agents** proxy traffic to infrastructure resources like servers, -databases, Kubernetes clusters, cloud provider APIs, and Windows desktops. - -Step 4 showed you how to install agents manually, and you can also launch agents -and enroll resources with them using infrastructure-as-code tools. For example, -you can use Terraform to declare a pool of Teleport Agents and configure them to -proxy your infrastructure. Read [Protect Infrastructure with Teleport](../../enroll-resources/agents/agents.mdx) to get started. diff --git a/docs/pages/admin-guides/infrastructure-as-code/infrastructure-as-code.mdx b/docs/pages/admin-guides/infrastructure-as-code/infrastructure-as-code.mdx deleted file mode 100644 index d4f35a4c5b572..0000000000000 --- a/docs/pages/admin-guides/infrastructure-as-code/infrastructure-as-code.mdx +++ /dev/null @@ -1,271 +0,0 @@ ---- -title: Infrastructure as Code -h1: Manage Teleport Resources with Infrastructure as Code -description: An introduction to Teleport's dynamic resources, which make it possible to apply settings to remote clusters using infrastructure as code. -tocDepth: 3 ---- - -This section explains how to manage Teleport's **dynamic resources**, which make -it possible to adjust the behavior of your Teleport cluster as your -infrastructure changes. 
The design of dynamic resources enables you to manage them using infrastructure as code tools, including Terraform, Helm, and the Teleport `tctl` client tool.

## What is a dynamic resource?

There are two ways to configure a Teleport cluster:

- **Static configuration file:** At startup, a Teleport process reads a configuration file from the local filesystem (the default path is `/etc/teleport.yaml`). Static configuration settings control aspects of a cluster that are not expected to change frequently, like the ports that services listen on.
- **Dynamic resources:** Dynamic resources control aspects of your cluster that are likely to change over time, such as roles, local users, and registered infrastructure resources.

This approach makes it possible to incrementally adjust your Teleport configuration without restarting Teleport instances.

![Architecture of dynamic resources](../../../img/dynamic-resources.png)

A cluster is composed of different objects (i.e., resources), and there are three common operations that can be performed on them: `get`, `create`, and `remove`.

Every resource in Teleport has three required fields:

- `kind`: The type of resource
- `name`: A required field in the `metadata` to uniquely identify the resource
- `version`: The version of the resource format

All other fields are specific to a resource.

While Teleport Enterprise Cloud does not expose the static configuration file to operators, it does use a static configuration file for certain settings. Read how Teleport [reconciles static and dynamic resources](#reconciling-the-configuration-file-with-dynamic-resources) to understand how to see the values of static configuration settings that also appear in dynamic resources.

When examining a dynamic resource, note that some of the fields you will see are used only internally and are not meant to be changed. Others are reserved for future use.
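As a minimal illustration of the three required fields, here is a skeleton resource document (the `spec` contents vary by resource kind; this sketch is not a usable resource on its own):

```yaml
kind: role          # the type of resource
version: v7         # the version of the resource format
metadata:
  name: developer   # uniquely identifies the resource
spec: {}            # all remaining fields are specific to the resource kind
```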
- -## Managing dynamic resources - -Teleport provides three methods for applying dynamic resources: the `tctl` -client tool, Teleport Terraform provider, and Kubernetes Operator. - -All three methods connect to the Teleport Auth Service's gRPC endpoint in order -to manipulate cluster resources stored on the Auth Service backend. The design -of Teleport's configuration interface makes it well suited for -infrastructure-as-code and GitOps approaches. - -You can get started with `tctl`, the Terraform Provider, and the Kubernetes -Operator by following: -- the ["Managing Users and Roles with IaC" guide](managing-resources/user-and-role.mdx) -- the ["Creating Access Lists with IaC" guide](managing-resources/access-list.mdx) -- the ["Registering Agentless OpenSSH Servers with IaC" guide](managing-resources/agentless-ssh-servers.mdx) - -For more information on Teleport roles, including the `internal.logins` -trait we use in these example roles, see the [Access -Controls Reference](../../reference/access-controls/roles.mdx). - -### YAML documents with `tctl` - -You can define resources as YAML documents and apply them using the `tctl` -client tool. Here is an example of a `role` resource that allows access to -servers with the label `env:test`: - -```yaml -kind: role -version: v7 -metadata: - name: developer -spec: - allow: - logins: ['ubuntu', 'debian', '{{internal.logins}}'] - node_labels: - 'env': 'test' -``` - -Since `tctl` works from the local filesystem, you can write commands that apply -all configuration documents in a directory tree. See the [CLI -reference](../../reference/cli/tctl.mdx) for more information on `tctl`. - -### Teleport Terraform provider - -Teleport's Terraform provider lets you manage your Teleport resources within the -same infrastructure-as-code source as the rest of your infrastructure. There is -a Terraform resource for each Teleport configuration resource. 
For example: - -```hcl -resource "teleport_role" "developer" { - version = "v7" - metadata = { - name = "developer" - } - - spec = { - allow = { - logins = ["ubuntu", "debian", "{{internal.logins}}"] - - node_labels = { - key = ["env"] - value = ["test"] - } - } - } -} -``` - -[Get started with the Terraform -provider](terraform-provider/terraform-provider.mdx). - -### Teleport Kubernetes Operator - -The Teleport Kubernetes Operator lets you apply Teleport resources as Kubernetes -resources so you can manage your Teleport settings alongside the rest of your -Kubernetes infrastructure. Here is an example of a `TeleportRoleV7` resource, -which is equivalent to the two roles shown above: - -```yaml -apiVersion: resources.teleport.dev/v1 -kind: TeleportRoleV7 -metadata: - name: developer -spec: - allow: - logins: ['ubuntu', 'debian', '{{internal.logins}}'] - node_labels: - 'env': 'test' -``` - -[Get started with the Kubernetes Operator](teleport-operator/teleport-operator.mdx). - -## Reconciling the configuration file with dynamic resources - -Some dynamic resources assign the same settings as fields within -Teleport's static configuration file. For these fields, the Teleport Auth -Service reconciles static and dynamic configurations on startup and when you -create or remove a Teleport resource. 
- -### Configuration resources that apply to static configuration fields - -There are four dynamic resources that share fields with the -static configuration file: - -- `session_recording_config` -- `cluster_auth_preference` -- `cluster_networking_config` -- `ui_config` - -#### `session_recording_config` - -|Dynamic resource field|Static configuration field| -|---|---| -|`spec.mode`|`auth_service.session_recording`| -|`spec.proxy_checks_host_keys`|`auth_service.proxy_checks_host_keys`| - -#### `cluster_auth_preference` - -|Dynamic resource field|Static configuration field| -|---|---| -|`spec.type`|`auth_service.authentication.type`| -|`spec.second_factor`|`auth_service.authentication.second_factor`| -|`spec.second_factors`|`auth_service.authentication.second_factors`| -|`spec.connector_name`|`auth_service.authentication.connector_name` -|`spec.u2f`|`auth_service.authentication.u2f`| -|`spec.disconnect_expired_cert`|`auth_service.disconnect_expired_cert`| -|`spec.allow_local_auth`|`auth_service.authentication.local_auth`| -|`spec.message_of_the_day`|`auth_service.message_of_the_day`| -|`spec.locking_mode`|`auth_service.authentication.locking_mode`| -|`spec.webauthn`|`auth_service.authentication.webauthn`| -|`spec.require_session_mfa`|`auth_service.authentication.require_session_mfa`| -|`spec.allow_passwordless`|`auth_service.authentication.passwordless`| -|`spec.device_trust`|`auth_service.authentication.device_trust`| -|`spec.idp`|`proxy_service.idp`| -|`spec.allow_headless`|`auth_service.authentication.headless`| - -#### `cluster_networking_config` - -|Dynamic resource field|Static configuration field| -|---|---| -|`spec.client_idle_timeout`|`auth_service.client_idle_timeout`| -|`spec.keep_alive_interval`|`auth_service.keep_alive_interval`| -|`spec.keep_alive_count_max`|`auth_service.keep_alive_count_max`| -|`spec.session_control_timeout`|`auth_service.session_control_timeout`| -|`spec.idle_timeout_message`|`auth_service.client_idle_timeout_message`| 
-|`spec.web_idle_timeout`|`auth_service.web_idle_timeout`| -|`spec.proxy_listener_mode`|`auth_service.proxy_listener_mode`| -|`spec.routing_strategy`|`auth_service.routing_strategy`| -|`spec.tunnel_strategy`|`auth_service.tunnel_strategy`| -|`spec.proxy_ping_interval`|`auth_service.proxy_ping_interval`| -|`spec.case_insensitive_routing`|`auth_service.case_insensitive_routing`| - -#### `ui_config` - -| Dynamic resource field | Static configuration field | -| ----------------------- | ----------------------------------- | -| `spec.scrollback_lines` | `proxy_service.ui.scrollback_lines` | -| `spec.show_resources` | `proxy_service.ui.show_resources` | - -### Origin labels - -The Teleport Auth Service applies the `teleport.dev/origin` label to -configuration resources to indicate whether they originated from the static -configuration file, a dynamic configuration resource, or the default value. - -Here are possible values of the `teleport.dev/origin` label: - -- `defaults` -- `config-file` -- `dynamic` -- `terraform` -- `kubernetes` - -When the Auth Service starts up, it looks up the values of static configuration -fields that correspond to fields in dynamic configuration resources. If any of -these have values, it creates the corresponding dynamic configuration resources -and stores them in its backend. - -For any static configuration fields without a value, the Auth Service checks -whether the backend contains the corresponding dynamic configuration resource. -If not, it creates one with default values and the -`teleport.dev/origin=defaults` label. - -If you attempt to create a dynamic configuration resource after the Auth Service -has already loaded the configuration from a static configuration file, the Auth -Service will return an error. - -If you remove a dynamic configuration resource, the Auth Service will restore -its configuration fields to the default values and add the -`teleport.dev/origin=defaults` label. 
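For instance, on a cluster where networking settings were never customized, retrieving the dynamic resource with `tctl get cluster_networking_config` would return a document carrying the `defaults` origin label. The following is an illustrative sketch; exact fields and values vary by Teleport version:

```yaml
kind: cluster_networking_config
version: v2
metadata:
  name: cluster-networking-config
  labels:
    teleport.dev/origin: defaults   # values came from built-in defaults
spec:
  keep_alive_interval: 5m0s
  keep_alive_count_max: 3
```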
- - - -Cloud-hosted Teleport deployments use configuration files, but these are not -available for operators to modify. Users of Teleport Enterprise Cloud may see -configuration resources with the `teleport.dev/origin=config-file` label. - - - -## Further reading - -### Configuration references - -- For a comprehensive reference of Teleport's static configuration options, read - the [Configuration Reference](../../reference/config.mdx). -- To see the dynamic configuration resources available to apply, read the - [Configuration Resource Reference](../../reference/resources.mdx). There are also - dedicated configuration resource references for - [applications](../../reference/agent-services/application-access.mdx) and - [databases](../../reference/agent-services/database-access-reference/configuration.mdx). - -### Other ways to use the Teleport API - -The Teleport Kubernetes Operator, Terraform provider, and `tctl` are all clients -of the Teleport Auth Service's gRPC API. To build your own API client to extend -Teleport for your organization's needs, read our [API -guides](../api/api.mdx). diff --git a/docs/pages/admin-guides/infrastructure-as-code/terraform-provider/terraform-provider.mdx b/docs/pages/admin-guides/infrastructure-as-code/terraform-provider/terraform-provider.mdx deleted file mode 100644 index 5a7a41505ae19..0000000000000 --- a/docs/pages/admin-guides/infrastructure-as-code/terraform-provider/terraform-provider.mdx +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: Configuring Teleport with Terraform -description: How to manage dynamic resources using the Teleport Terraform provider. -videoBanner: YgNHD4SS8dg ---- - -The Teleport Terraform provider allows Teleport administrators to use Terraform to configure Teleport via -dynamic resources. - -## Setup - -For instructions on managing users and roles via Terraform, read -the ["Managing users and roles with IaC" guide](../managing-resources/user-and-role.mdx). 
The provider must obtain an identity to connect to Teleport. The method for obtaining one depends on where the Terraform code is executed. Pick the guide that matches your setup:

| Guide | Use case | How it works |
|---|---|---|
| [Run the Teleport Terraform provider locally](local.mdx) | You are getting started with the Teleport Terraform provider and managing Teleport resources with IaC. | You use local credentials to create a temporary bot, obtain short-lived credentials, and store them in environment variables. |
| [Run the Teleport Terraform provider on Terraform Cloud](terraform-cloud.mdx) | You're running on HCP Terraform (Terraform Cloud) or self-hosted Terraform Enterprise. | Terraform Cloud Workload Identity issues a proof of identity and the Teleport Terraform provider uses it to authenticate. |
| [Run the Teleport Terraform provider in CI or a cloud VM](ci-or-cloud.mdx) | You already have a working Terraform module configuring Teleport and want to run it in CI to benefit from the review and audit capabilities of your version control system (e.g., Git). | You use a proof provided by your runtime (CI engine, cloud provider) to prove your identity and join using Machine ID. |
| [Run the Teleport Terraform provider on Spacelift](spacelift.mdx) | You already have a working Terraform module configuring Teleport and want to run it on the Spacelift platform. | You use a proof provided by Spacelift to prove your identity and join using Machine ID. |
| [Run the Teleport Terraform provider from a server](dedicated-server.mdx) | You have working Terraform code and want to run it on a dedicated, long-lived server, such as a bastion or a task runner. | You set up a Machine ID daemon (`tbot`) that obtains and refreshes credentials for the Terraform provider. |
| [Run the Teleport Terraform provider with long-lived credentials](long-lived-credentials.mdx) | This method is discouraged because it is less secure than the others. Use it only when none of the other methods work in your case (e.g., short-lived CI environments that don't have a dedicated Teleport join method). | You sign one long-lived certificate that allows the Terraform provider to connect to Teleport. |

## Resource guides

Once you have a functional Teleport Terraform provider, you will want to configure your resources with it.

The list of supported resources and their fields is available in the [Terraform reference](../../../reference/terraform-provider/terraform-provider.mdx).

Some resources have dedicated infrastructure-as-code (IaC) step-by-step guides:

- [Managing Users and Roles with IaC](../managing-resources/user-and-role.mdx)
- [Creating Access Lists with IaC](../managing-resources/access-list.mdx)
- [Registering Agentless OpenSSH Servers with IaC](../managing-resources/agentless-ssh-servers.mdx)

Finally, you can [import your existing resources into Terraform](../managing-resources/import-existing-resources.mdx).
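As a recap of the pieces discussed above, a minimal Terraform configuration that declares the provider and manages a single role might look like the following sketch. The proxy address is a placeholder, and the provider is assumed to pick up credentials from the environment (for example, via `tctl terraform env`):

```hcl
terraform {
  required_providers {
    teleport = {
      source  = "terraform.releases.teleport.dev/gravitational/teleport"
      version = "~> (=teleport.major_version=).0"
    }
  }
}

provider "teleport" {
  # Placeholder: replace with your Teleport Proxy Service host:port.
  addr = "example.teleport.sh:443"
}

# A role granting SSH access to servers labeled env:test.
resource "teleport_role" "developer" {
  version = "v7"
  metadata = {
    name = "developer"
  }

  spec = {
    allow = {
      logins = ["ubuntu", "debian", "{{internal.logins}}"]

      node_labels = {
        key   = ["env"]
        value = ["test"]
      }
    }
  }
}
```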
diff --git a/docs/pages/admin-guides/infrastructure-as-code/terraform-starter/enroll-resources.mdx b/docs/pages/admin-guides/infrastructure-as-code/terraform-starter/enroll-resources.mdx deleted file mode 100644 index f3cebf7b8c3fd..0000000000000 --- a/docs/pages/admin-guides/infrastructure-as-code/terraform-starter/enroll-resources.mdx +++ /dev/null @@ -1,629 +0,0 @@ ---- -title: "Part 1: Enroll Infrastructure with Terraform" -h1: "Terraform Starter: Enroll Infrastructure" -description: Explains how to deploy a pool of Teleport Agents so you can apply dynamic resources to enroll your infrastructure with Teleport. ---- - -*This guide is Part One of the Teleport Terraform starter guide. Read the -[overview](terraform-starter.mdx) for the scope and purpose of the Terraform -starter guide.* - -This guide shows you how to use Terraform to enroll infrastructure resources -with Teleport. You will: - -- Deploy a pool of Teleport Agents running on virtual machines. -- Label resources enrolled by the Agents with `env:dev` and `env:prod` so that, - in [Part Two](rbac.mdx), you can configure Teleport roles to enable access to - these resources. - -## How it works - -An Agent is a Teleport instance configured to run one or more Teleport services -in order to proxy infrastructure resources. For a brief architectural overview -of how Agents run in a Teleport cluster, read the [Introduction to Teleport -Agents](../../../enroll-resources/agents/agents.mdx). - -There are several methods you can use to join a Teleport Agent to your cluster, -which we discuss in the [Joining Services to your -Cluster](../../../enroll-resources/agents/agents.mdx) guide. In this guide, we will use -the **join token** method, where the operator stores a secure token on the Auth -Service, and an Agent presents the token in order to join a cluster. 
- -No matter which join method you use, it will involve the following Terraform -resources: - -- Compute instances to run Teleport services -- A join token for each compute instance in the Agent pool - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx version="16.2.0"!) - - - -We recommend following this guide on a fresh Teleport demo cluster so you can -see how an Agent pool works. After you are familiar with the setup, apply the -lessons from this guide to protect your infrastructure. You can get started with -a demo cluster using: -- A demo deployment on a [Linux server](../../../admin-guides/deploy-a-cluster/linux-demo.mdx) -- A [Teleport Enterprise (Cloud) trial](https://goteleport.com/signup) - - - -- An AWS, Google Cloud, or Azure account with permissions to create virtual - machine instances. - -- Cloud infrastructure that enables virtual machine instances to connect to the - Teleport Proxy Service. For example: - - An AWS subnet with a public NAT gateway or NAT instance - - Google Cloud NAT - - Azure NAT Gateway - - In minimum-security demo clusters, you can also configure the VM instances you - deploy to have public IP addresses. - -- Terraform v(=terraform.version=) or higher. - -- (!docs/pages/includes/tctl.mdx!) - -## Step 1/3. Import the Terraform module - -In this step, you will download Terraform modules that show you how to get -started enrolling Teleport resources. These modules are minimal examples of how -Teleport Terraform resources work together to enable you to manage Teleport -Agents. - -After finishing this guide and becoming familiar with the setup, you should -modify your Terraform configuration to accommodate your infrastructure in -production. - -1. Navigate to your Terraform project directory. - -1. Fetch the Teleport code repository and copy the example Terraform - configuration for this project into your current working directory. 
The - following commands copy the appropriate child module for your cloud provider - into a subdirectory called `cloud` and HCL configurations for Teleport - resources into a subdirectory called `teleport`: - - - - - ```code - $ git clone --depth=1 https://github.com/gravitational/teleport teleport-clone --branch=branch/v(=teleport.major_version=) - $ cp -R teleport-clone/examples/terraform-starter/agent-installation teleport - $ cp -R teleport-clone/examples/terraform-starter/aws cloud - $ rm -rf teleport-clone - ``` - - - - - ```code - $ git clone --depth=1 https://github.com/gravitational/teleport teleport-clone --branch=branch/v(=teleport.major_version=) - $ cp -R teleport-clone/examples/terraform-starter/agent-installation teleport - $ cp -R teleport-clone/examples/terraform-starter/gcp cloud - $ rm -rf teleport-clone - ``` - - - - - ```code - $ git clone --depth=1 https://github.com/gravitational/teleport teleport-clone --branch=branch/v(=teleport.major_version=) - $ cp -R teleport-clone/examples/terraform-starter/agent-installation teleport - $ cp -R teleport-clone/examples/terraform-starter/azure cloud - $ rm -rf teleport-clone - ``` - - - - -1. 
Create a file called `agent.tf` with the following content, which configures - the child modules you downloaded in the previous step: - - - - - ```hcl - module "agent_installation_dev" { - source = "./teleport" - agent_count = 1 - agent_labels = { - env = "dev" - } - proxy_service_address = "teleport.example.com:443" - teleport_edition = "cloud" - teleport_version = "(=teleport.version=)" - } - - module "agent_installation_prod" { - source = "./teleport" - agent_count = 1 - agent_labels = { - env = "prod" - } - proxy_service_address = "teleport.example.com:443" - teleport_edition = "cloud" - teleport_version = "(=teleport.version=)" - } - - module "agent_deployment" { - region = "" - source = "./cloud" - subnet_id = "" - userdata_scripts = concat( - module.agent_installation_dev.userdata_scripts, - module.agent_installation_prod.userdata_scripts - ) - } - ``` - - - - - ```hcl - module "agent_installation_dev" { - source = "./teleport" - agent_count = 1 - agent_labels = { - env = "dev" - } - proxy_service_address = "teleport.example.com:443" - teleport_edition = "cloud" - teleport_version = "(=teleport.version=)" - } - - module "agent_installation_prod" { - source = "./teleport" - agent_count = 1 - agent_labels = { - env = "prod" - } - proxy_service_address = "teleport.example.com:443" - teleport_edition = "cloud" - teleport_version = "(=teleport.version=)" - } - - module "agent_deployment" { - gcp_zone = "us-east1-b" - google_project = "" - source = "./cloud" - subnet_id = "" - userdata_scripts = concat( - module.agent_installation_dev.userdata_scripts, - module.agent_installation_prod.userdata_scripts - ) - } - ``` - - - - - ```hcl - module "agent_installation_dev" { - source = "./teleport" - agent_count = 1 - agent_labels = { - env = "dev" - } - proxy_service_address = "teleport.example.com:443" - teleport_edition = "cloud" - teleport_version = "(=teleport.version=)" - } - - module "agent_installation_prod" { - source = "./teleport" - agent_count = 1 - 
agent_labels = { - env = "prod" - } - proxy_service_address = "teleport.example.com:443" - teleport_edition = "cloud" - teleport_version = "(=teleport.version=)" - } - - module "agent_deployment" { - azure_resource_group = "" - public_key_path = "" - region = "East US" - source = "./cloud" - subnet_id = "" - userdata_scripts = concat( - module.agent_installation_dev.userdata_scripts, - module.agent_installation_prod.userdata_scripts - ) - } - ``` - - - - -Each of the `agent_installation_*` module blocks produces a number of -installation scripts equal to the `agent_count` input. Each installation script -runs the Teleport SSH Service with a Teleport join token, labeling the Agent -with the key/value pairs specified in `agent_labels`. This configuration passes -all installation scripts to the `agent_deployment` module in order to run them -on virtual machines, launching one VM per script. - -As you scale your Teleport usage, you can increase this count to ease the load -on each Agent. - -Edit the `agent_installation_dev` and `agent_installation_prod` blocks in -`agent.tf` as follows: - -1. Assign `proxy_service_address` to the host and HTTPS port of your Teleport - Proxy Service, e.g., `mytenant.teleport.sh:443`. - - - - Make sure to include the port. - - - -1. Make sure `teleport_edition` matches your Teleport edition. Assign this to - `oss`, `cloud`, or `enterprise`. The default is `oss`. - -1. If needed, change the value of `teleport_version` to the version of Teleport - you want to run on your Agents. It must be either the same major version as - your Teleport cluster or one major version behind. - -Edit the `module "agent_deployment"` block in `agent.tf` as follows: - -1. 
If you are deploying your instance in a minimum-security demo environment and - do not have a NAT gateway, NAT instance, or other method for connecting your - instances to the Teleport Proxy Service, modify the `module` block to - associate a public IP address with each Agent instance: - - ```hcl - insecure_direct_access=true - ``` - -1. Assign the remaining input variables depending on your cloud provider: - - - - - 1. Assign `region` to the AWS region where you plan to deploy Teleport Agents, - such as `us-east-1`. - 1. For `subnet_id`, include the ID of the subnet where you will deploy Teleport - Agents. - - - - - 1. Assign `google_project` to the name of your Google Cloud project and - `gcp_zone` to the zone where you will deploy Agents, such as `us-east1-b`. - - 1. For `subnet_id`, include the name or URI of the Google Cloud subnet where you - will deploy the Teleport Agents. - - The subnet URI has the format: - - ``` - projects/PROJECT_NAME/regions/REGION/subnetworks/SUBNET_NAME - ``` - - - - - 1. Assign `azure_resource_group` to the name of the Azure resource group where - you are deploying Teleport Agents. - - 1. The module uses `public_key_path` to pass validation, as Azure VMs must - include an RSA public key with at least 2048 bits. Once the module deploys - the VMs, a cloud-init script removes the public key and disables OpenSSH. Set - this input to the path to a valid public SSH key. - - 1. Assign `region` to the Azure region where you plan to deploy Teleport Agents, - such as `East US`. - - 1. For `subnet_id`, include the ID of the subnet where you will deploy Teleport - Agents. Use the following format: - - ```text - /subscriptions/SUBSCRIPTION/resourceGroups/RESOURCE_GROUP/providers/Microsoft.Network/virtualNetworks/NETWORK_NAME/subnets/SUBNET_NAME - ``` - - - - -## Step 2/3. Add provider configurations - -In this step, you will configure the `terraform-starter` module for your -Teleport cluster and cloud provider. 
- -In your Terraform project directory, ensure that the file called `provider.tf` -includes the following content: - - - - -```hcl -terraform { - required_providers { - aws = { - source = "hashicorp/aws" - version = "~> 4.0" - } - - teleport = { - source = "terraform.releases.teleport.dev/gravitational/teleport" - version = "~> (=teleport.major_version=).0" - } - } -} - -provider "aws" { - region = AWS_REGION -} - -provider "teleport" { - # Update addr to point to your Teleport Enterprise (Cloud) tenant URL's host:port - addr = PROXY_SERVICE_ADDRESS -} -``` - -Replace the following placeholders: - -| Placeholder | Value | -|-----------------------|------------------------------------------------------------------------------| -| AWS_REGION | The AWS region where you will deploy Agents, e.g., `us-east-2` | -| PROXY_SERVICE_ADDRESS | The host and port of the Teleport Proxy Service, e.g., `example.teleport.sh:443` | - - - -```hcl -terraform { - required_providers { - google = { - source = "hashicorp/google" - version = "~> 5.5.0" - } - - teleport = { - source = "terraform.releases.teleport.dev/gravitational/teleport" - version = "~> (=teleport.major_version=).0" - } - } -} - -provider "google" { - project = GOOGLE_CLOUD_PROJECT - region = GOOGLE_CLOUD_REGION -} - -provider "teleport" { - # Update addr to point to your Teleport Enterprise (Cloud) tenant URL's host:port - addr = PROXY_SERVICE_ADDRESS -} -``` - -Replace the following placeholders: - -| Placeholder | Value | -|-----------------------|------------------------------------------------------------------------------| -| GOOGLE_CLOUD_PROJECT| Google Cloud project where you will deploy Agents. | -| GOOGLE_CLOUD_REGION | Google Cloud region where you will deploy Agents. 
|
| PROXY_SERVICE_ADDRESS | The host and port of the Teleport Proxy Service, e.g., `example.teleport.sh:443` |

```hcl
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      version = "~> 3.0.0"
    }

    teleport = {
      source = "terraform.releases.teleport.dev/gravitational/teleport"
      version = "~> (=teleport.major_version=).0"
    }
  }
}

provider "teleport" {
  # Update addr to point to your Teleport Cloud tenant URL's host:port
  addr = PROXY_SERVICE_ADDRESS
}

provider "azurerm" {
  features {}
}
```

Replace the following placeholders:

|Placeholder|Value|
|---|---|
| PROXY_SERVICE_ADDRESS | The host and port of the Teleport Proxy Service, e.g., `example.teleport.sh:443` |

## Step 3/3. Verify the deployment

In this step, you will create a Teleport bot to apply your Terraform configuration. The bot will exist for one hour and will be granted the default `terraform-provider` role, which can edit every resource the Terraform provider supports.

1. Navigate to your Terraform project directory and run the following command. The `eval` command assigns environment variables in your shell to credentials for the Teleport Terraform provider:

   ```code
   $ eval "$(tctl terraform env)"
   🔑 Detecting if MFA is required
   This is an admin-level action and requires MFA to complete
   Tap any security key
   Detected security key tap
   ⚙️ Creating temporary bot "tctl-terraform-env-82ab1a2e" and its token
   🤖 Using the temporary bot to obtain certificates
   🚀 Certificates obtained, you can now use Terraform in this terminal for 1h0m0s
   ```

1. Make sure your cloud provider credentials are available to Terraform using the standard approach for your organization.

1. Apply the Terraform configuration:

   ```code
   $ terraform init
   $ terraform apply
   ```

1. Once the `apply` command completes, run the following command to verify that your Agents have deployed successfully.
This command, which assumes that the - Agents have the `Node` role, lists all Teleport SSH Service instances with - the `role=agent-pool` label: - - ```code - $ tsh ls role=agent-pool - Node Name Address Labels - -------------------------- ---------- --------------- - ip-10-1-1-187.ec2.internal ⟵ Tunnel role=agent-pool - ip-10-1-1-24.ec2.internal ⟵ Tunnel role=agent-pool - ``` - -## Further reading: How the module works - -In this section, we explain the resources configured in the -`terraform-starter` module. - -### Join token - -The `terraform-starter` module deploys one virtual machine instance for each -Teleport Agent. Each Agent joins the cluster using a token. We create each token -using the `teleport_provision_token` Terraform resource, specifying the token's -value with a `random_string` resource: - -```hcl -(!examples/terraform-starter/agent-installation/token.tf!) -``` - -When we apply the `teleport_provision_token` resources, the Teleport Terraform -provider creates them on the Teleport Auth Service backend. - -### User data script - -Each Teleport Agent deployed by the `terraform-starter` module loads a user -data script that creates a Teleport configuration file for the Agent: - -```text -(!examples/terraform-starter/agent-installation/userdata!) -``` - -The configuration adds the `role: agent-pool` label to the Teleport SSH Service -on each instance. This makes it easier to access hosts in the Agent pool later. -It also adds the labels you configured using the `agent_labels` input of the -module. - -The script makes Teleport the only option for accessing Agent instances by -disabling OpenSSH on startup and deleting any authorized public keys. 
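-
-To make the join-token pattern above concrete, the following is a minimal,
-hypothetical sketch of a token resource paired with a `random_string` value. It
-is illustrative only and is not the module's actual `token.tf`; the attribute
-layout follows the Teleport Terraform provider's `teleport_provision_token`
-schema, and the one-hour expiry is an assumption:
-
-```hcl
-# Hypothetical sketch; the terraform-starter module's own token.tf is the
-# authoritative version.
-resource "random_string" "token" {
-  length  = 32
-  special = false
-}
-
-resource "teleport_provision_token" "agent" {
-  version = "v2"
-  metadata = {
-    # The random string becomes the secret token value that Agents present
-    # when joining the cluster.
-    name    = random_string.token.result
-    expires = timeadd(timestamp(), "1h") # Assumed TTL for illustration.
-  }
-  spec = {
-    # The Node role lets the joining instance run the Teleport SSH Service.
-    roles = ["Node"]
-  }
-}
-```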
- -### Virtual machine instances - -Each cloud-specific child module of `terraform-starter` declares resources to -deploy a virtual machine instance on your cloud provider: - - - - -`ec2-instance.tf` declares a data source for an Amazon Linux 2023 machine image -and uses it to launch EC2 instances that run Teleport Agents with the -`teleport_provision_token` resource: - -```hcl -(!examples/terraform-starter/aws/ec2-instance.tf!) -``` - - - - -`gcp-instance.tf` declares Google Compute Engine instances that use the -`teleport_provision_token` to run Teleport Agents: - -```hcl -(!examples/terraform-starter/gcp/gcp-instance.tf!) -``` - - - - -`azure-instance.tf` declares an Azure virtual machine resource to run Teleport -Agents using the `teleport_provision_token` resource, plus the required network -interface for each instance. - -Note that while Azure VM instances require a user account, this module declares -a temporary one to pass validation, but uses Teleport to enable access to the -instances: - -```hcl -(!examples/terraform-starter/azure/azure-instance.tf!) -``` - - - - -## Next steps: More options for enrolling resources - -In Part One of the Terraform starter guide, we showed you how to use Terraform -to deploy a pool of Teleport Agents in order to enroll infrastructure resources -with Teleport. While the guide showed you how to enroll resources dynamically, -by declaring Terraform resources for each infrastructure resource you want to -enroll, you can protect more of your infrastructure with Teleport by: - -- Configuring Auto-Discovery -- Configuring resource enrollment - -### Configure Auto-Discovery - -For a more scalable approach to enrolling resources than the one shown in this -guide, configure the Teleport Discovery Service to automatically detect -resources in your infrastructure and enroll them with the Teleport Auth Service. - -To configure the Teleport Discovery Service: - -1. 
Edit the userdata script run by the Agent instances managed in the Terraform
-   starter module. Follow the [Auto-Discovery
-   guides](../../../enroll-resources/auto-discovery/auto-discovery.mdx)
-   to configure the Discovery Service and enable your Agents to proxy the
-   resources that the service enrolls.
-1. Add the `Discovery` role to the join token resource you created earlier. In
-   this guide, the join token only has the `Node` role.
-1. Add roles to the join token resource that correspond to the Agent services
-   you want to use to proxy discovered resources. The roles to add depend on
-   the resources you want to automatically enroll; consult the [Auto-Discovery
-   guides](../../../enroll-resources/auto-discovery/auto-discovery.mdx).
-
-### Enroll resources manually
-
-You can also enroll resources manually, instructing Agents to proxy specific
-endpoints in your infrastructure. For information about manual enrollment, read
-the documentation section for each kind of resource you would like to enroll:
-
-- [Databases](../../../enroll-resources/database-access/database-access.mdx)
-- [Windows
-  desktops](../../../enroll-resources/desktop-access/desktop-access.mdx)
-- [Kubernetes
-  clusters](../../../enroll-resources/kubernetes-access/kubernetes-access.mdx)
-- [Linux servers](../../../enroll-resources/server-access/server-access.mdx)
-- [Web applications and cloud provider
-  APIs](../../../enroll-resources/application-access/application-access.mdx)
-
-Once you are familiar with the process of enrolling a resource manually, you can
-edit your Terraform module to:
-
-1. **Add token roles:** The token resource you created has only the `Node` role,
-   and you can add roles to authorize your Agents to proxy additional kinds of
-   resources. Consult a guide to enrolling resources manually to determine the
-   role to add to the token.
-1. **Change the userdata script** to enable additional Agent services and
-   configure additional infrastructure resources for your Agents to proxy. 
-1. **Deploy dynamic resources:** Consult the [Terraform provider - reference](../../../reference/terraform-provider/terraform-provider.mdx) for - Terraform resources that you can apply in order to enroll dynamic resources - in your infrastructure. - diff --git a/docs/pages/admin-guides/infrastructure-as-code/terraform-starter/rbac.mdx b/docs/pages/admin-guides/infrastructure-as-code/terraform-starter/rbac.mdx deleted file mode 100644 index 33fc3db56af9f..0000000000000 --- a/docs/pages/admin-guides/infrastructure-as-code/terraform-starter/rbac.mdx +++ /dev/null @@ -1,566 +0,0 @@ ---- -title: "Part 2: Configure Teleport RBAC with Terraform" -h1: "Terraform Starter: Configure RBAC" -description: Explains how to manage Teleport roles and authentication connectors with Terraform so you can implement the principle of least privilege in your infrastructure. ---- - -*This guide is Part Two of the Teleport Terraform starter guide. Read the -[overview](terraform-starter.mdx) for the scope and purpose of the Terraform -starter guide.* - -In [Part One](enroll-resources.mdx) of this series, we showed you how to use -Terraform to deploy Teleport Agents in order to enroll infrastructure resources -with your Teleport cluster. While configuring Agents, you labeled them based on -their environment, with some falling under `dev` and others under `prod`. - -In this guide, you will configure your Teleport cluster to manage access to -resources with the `dev` and `prod` labels in order to implement the principle -of least privilege. - -## How it works - -This guide shows you how to create: - -- A role that can access `prod` resources. -- A role that can access `dev` resources and request access to `prod` resources. -- An authentication connector that allows users to sign into your organization's - identity provider and automatically gain access to `dev` resources. 
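-
-The role relationships described above can be sketched with the provider's
-`teleport_role` resource. The example below is hypothetical and simplified (the
-role names, label values, and attribute layout are assumptions); the starter
-module's `env_role` child module generates the real roles:
-
-```hcl
-# Hypothetical sketch; not the configuration the starter module applies.
-resource "teleport_role" "dev_access" {
-  version = "v7"
-  metadata = {
-    name = "dev_access"
-  }
-  spec = {
-    allow = {
-      # Standing access is limited to dev-labeled servers.
-      node_labels = {
-        env = ["dev"]
-      }
-      # prod access is available only through an Access Request,
-      # so there are no standing prod credentials to compromise.
-      request = {
-        roles = ["prod_access"]
-      }
-    }
-  }
-}
-```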
- -In this setup, the only way to access `prod` resources is with an Access -Request, meaning that there are no standing credentials for accessing `prod` -resources that an attacker can compromise. - -## Prerequisites - -This guide assumes that you have completed [Part 1: Enroll Infrastructure with -Terraform](./enroll-resources.mdx). - -(!docs/pages/includes/edition-prereqs-tabs.mdx version="16.2.0"!) - -- Resources enrolled with Teleport that include the `dev` and `prod` labels. We - show you how to enroll these resources using Terraform in Part One. -- An identity provider that supports OIDC or SAML. You should have either: - - The ability to modify SAML attributes or OIDC claims in your organization. - - Pre-existing groups of users that you want to map to two levels of access: - the ability to connect to `dev` resources; and the ability to review Access - Requests for `prod` access. - - - -We recommend following this guide on the same Teleport demo cluster you used for -Part One. After you are familiar with the setup, you can apply the lessons from -this guide to manage RBAC with Terraform. - - - -- Terraform v(=terraform.version=) or higher. -- (!docs/pages/includes/tctl.mdx!) -- To help with troubleshooting, we recommend completing the setup steps in this - guide with a local user that has the preset `editor` and `auditor` roles. In - production, you can apply the lessons in this guide using a less privileged - user. - -## Step 1/4. Import Terraform modules - -In this step, you will download Terraform modules that show you how to get -started managing Teleport RBAC. These modules are minimal examples of how -Teleport Terraform resources work together to enable you to manage Teleport -roles and authentication connectors. - -After finishing this guide and becoming familiar with the setup, you should -modify your Terraform configuration to accommodate your infrastructure in -production. - -1. 
Navigate to the directory where you organized files for your root Terraform - module. - -1. Fetch the Teleport code repository and copy the example Terraform - configuration for this project into your current working directory. - - Since you will enable users to authenticate to Teleport through your - organization's identity provider (IdP), the modules depend on whether your - organization uses OIDC or SAML to communicate with services: - - - - - ```code - $ git clone --depth=1 https://github.com/gravitational/teleport teleport-clone --branch=branch/v(=teleport.major_version=) - $ cp -R teleport-clone/examples/terraform-starter/env_role env_role - $ cp -R teleport-clone/examples/terraform-starter/oidc oidc - $ rm -rf teleport-clone - ``` - - - - ```code - $ git clone --depth=1 https://github.com/gravitational/teleport teleport-clone --branch=branch/v(=teleport.major_version=) - $ cp -R teleport-clone/examples/terraform-starter/env_role env_role - $ cp -R teleport-clone/examples/terraform-starter/saml saml - $ rm -rf teleport-clone - ``` - - - - - Your project directory will include two new modules: - - - - - |Name|Description| - |---|---| - |`env_role`|A module for a Teleport role that grants access to resources with a specific `env` label.| - |`oidc`|Teleport resources to configure an OIDC authentication connector and require that users authenticate with it.| - - - - - |Name|Description| - |---|---| - |`env_role`|A module for a Teleport role that grants access to resources with a specific `env` label.| - |`saml`|Teleport resources to configure a SAML authentication connector and require that users authenticate with it.| - - - - -1. 
Create a file called `rbac.tf` that includes the following `module` blocks: - - - - - ```hcl - module "oidc" { - source = "./oidc" - oidc_claims_to_roles = [] - oidc_client_id = "" - oidc_connector_name = "Log in with OIDC" - oidc_redirect_url = "" - oidc_secret = "" - teleport_domain = "" - } - - module "prod_role" { - source = "./env_role" - env_label = "prod" - principals = {} - request_roles = [] - } - - module "dev_role" { - source = "./env_role" - env_label = "dev" - principals = {} - request_roles = [module.prod_role.role_name] - } - ``` - - - - ```hcl - module "saml" { - source = "./saml" - saml_connector_name = "Log in with SAML" - saml_attributes_to_roles = [] - saml_acs = "" - saml_entity_descriptor = "" - teleport_domain = "" - } - - module "prod_role" { - source = "./env_role" - env_label = "prod" - principals = {} - request_roles = [] - } - - module "dev_role" { - source = "./env_role" - env_label = "dev" - principals = {} - request_roles = [module.prod_role.role_name] - } - ``` - - - -Next, we will show you how to configure the two child modules, and walk you -through the Terraform resources that they apply. - -## Step 2/4. Configure role principals - -Together, the `prod_role` and `dev_role` modules you declared in Step 1 create -three Teleport roles: - -|Role|Description| -|---|---| -|`prod_access`|Allows access to infrastructure resources with the `env:prod` label.| -|`dev_access`|Allows access to infrastructure resources with the `env:dev` label, and Access Requests for the `prod_access` role.| -|`prod_reviewer`|Allows reviews of Access Requests for the `prod_access` role.| - -When Teleport users connect to resources in your infrastructure, they assume a -principal, such as an operating system login or Kubernetes user, in order to -interact with those resources. In this step, you will configure the `prod_role` -and `dev_role` modules to grant access to principals in your infrastructure. 
-
-In `rbac.tf`, edit the `prod_role` and `dev_role` blocks so that the
-`principals` field contains a mapping, similar to the example below. Use the
-list of keys below the example to configure the principals.
-
-```hcl
-module "prod_role" {
-  principals = {
-    KEY = "value"
-  }
-  // ...
-}
-
-// ...
-```
-
-|Key|Description|
-|---|---|
-|`aws_role_arns`|AWS role ARNs the user can access when authenticating to an AWS API.|
-|`azure_identities`|Azure identities the user can access when authenticating to an Azure API.|
-|`db_names`|Names of databases the user can access within a database server.|
-|`db_roles`|Roles the user can access on a database when they authenticate to a database server.|
-|`db_users`|Users the user can access on a database when they authenticate to a database server.|
-|`gcp_service_accounts`|Google Cloud service accounts the user can access when authenticating to a Google Cloud API.|
-|`kubernetes_groups`|Kubernetes groups the Teleport Kubernetes Service can impersonate when proxying requests from the user.|
-|`kubernetes_users`|Kubernetes users the Teleport Kubernetes Service can impersonate when proxying requests from the user.|
-|`logins`|Operating system logins the user can access when authenticating to a Linux server.|
-|`windows_desktop_logins`|Operating system logins the user can access when authenticating to a Windows desktop.|
-
-For example, the following configuration allows users with the `dev_access` role
-to assume the `dev` login on Linux hosts and the `developers` group on
-Kubernetes. `prod` users have more privileges and can assume the `root` login
-and `system:masters` Kubernetes group:
-
-```hcl
-module "dev_role" {
-  principals = {
-    logins = ["dev"]
-    kubernetes_groups = ["developers"]
-  }
-  // ...
-}
-
-module "prod_role" {
-  principals = {
-    logins = ["root"]
-    kubernetes_groups = ["system:masters"]
-  }
-  // ...
-}
-```
-
-## Step 3/4. 
[Optional] Configure the single sign-on connector
-
-In this step, you will configure your Terraform module to enable authentication
-through your organization's IdP. Configure the `saml` or `oidc` module you
-declared in Step 1 by following the instructions.
-
-
-You can skip this step for now if you want to assign the `dev_access` and
-`prod_access` roles to local Teleport users instead of single sign-on users. To
-do so, you can:
-
-- Import existing `teleport_user` resources and modify them to include the
-  `dev_access` and `prod_access` roles (see the
-  [documentation](../managing-resources/import-existing-resources.mdx)).
-- Create a new `teleport_user` resource that includes the roles
-  ([documentation](../managing-resources/user-and-role.mdx)).
-
-If you plan to skip this step, make sure to remove the `module "saml"` or
-`module "oidc"` block from your Terraform configuration.
-
-
-
-1. Register your Teleport cluster with your IdP as a relying party. The
-   instructions depend on your IdP.
-
-   The following guides show you how to set up your IdP to support the SAML or
-   OIDC authentication connector. **Read only the linked section**, since these
-   guides assume you are using `tctl` instead of Terraform to manage
-   authentication connectors:
-
-   - [AD FS](../../../admin-guides/access-controls/sso/adfs.mdx#step-13-configure-adfs)
-   - [GitLab](../../../admin-guides/access-controls/sso/gitlab.mdx#configure-gitlab)
-   - [Google
-     Workspace](../../../admin-guides/access-controls/sso/google-workspace.mdx#step-14-configure-google-workspace)
-   - [OneLogin](../../../admin-guides/access-controls/sso/one-login.mdx#step-13-create-teleport-application-in-onelogin)
-   - [Azure
-     AD](../../../admin-guides/access-controls/sso/azuread.mdx#step-13-configure-microsoft-entra-id)
-   - [Okta](../../../admin-guides/access-controls/sso/okta.mdx#step-24-configure-okta)
-
-1. 
Configure the redirect URL (for OIDC) or assertion consumer service (for SAML):
-
-
-
-   Set `oidc_redirect_url` to
-   `https://example.teleport.sh:443/v1/webapi/oidc/callback`, replacing
-   `example.teleport.sh` with the domain name of your Teleport cluster.
-
-   Ensure that `oidc_redirect_url` matches the URL you configured with
-   your IdP when registering your Teleport cluster as a relying party.
-
-
-
-   Set `saml_acs` to `https://example.teleport.sh:443/v1/webapi/saml/acs`,
-   replacing `example.teleport.sh` with the domain name of your Teleport
-   cluster.
-
-   Ensure that `saml_acs` matches the URL you configured with your IdP when
-   registering your Teleport cluster as a relying party.
-
-
-
-1. After you register Teleport as a relying party, your identity provider will
-   print information that you will use to configure the authentication
-   connector. Fill in the information depending on your provider type:
-
-
-   Fill in the `oidc_client_id` and `oidc_secret` with the client ID and secret
-   returned by the IdP.
-
-
-   Assign `saml_entity_descriptor` to the contents of the XML document that
-   contains the SAML entity descriptor for the IdP.
-
-
-
-1. Assign `teleport_domain` to the domain name of your Teleport Proxy Service,
-   with no scheme or path, e.g., `example.teleport.sh`. The child module uses
-   this to configure WebAuthn for local users. This way, you can authenticate as
-   a local user as a fallback if you need to troubleshoot your single sign-on
-   authentication connector.
-
-1. Configure role mapping for your authentication connector. 
When a user - authenticates to Teleport through your organization's IdP, Teleport assigns - roles to the user based on your connector's role mapping configuration: - - - - - In this example, users with a `group` claim with the `developers` value - receive the `dev_access` role, while users with a `group` claim with the - value `admins` receive the `prod_reviewer` role: - - ```hcl - oidc_claims_to_roles = [ - { - claim = "group" - value = "developers" - roles = [ - module.dev_role.role_name - ] - }, - { - claim = "group" - value = "admins" - roles = module.dev_role.reviewer_role_names - } - ] - ``` - - Edit the `claim` value for each item in `oidc_claims_to_roles` to match the - name of an OIDC claim you have configured on your IdP. - - - - - In this example, users with a `group` attribute with the `developers` value - receive the `dev_access` role, while users with a `group` attribute with the - value `admins` receive the `prod_reviewer` role: - - ```hcl - saml_attributes_to_roles = [ - { - name = "group" - value = "developers" - roles = [ - module.dev_role.role_name - ] - }, - { - name = "group" - value = "admins" - roles = module.dev_role.reviewer_role_names - } - ] - ``` - - - - -## Step 4/4. Apply and verify changes - -In this step, you will create a Teleport bot to apply your Terraform -configuration. The bot will exist for one hour and will be granted the default -`terraform-provider` role that can edit every resource the TF provider supports. - -1. Navigate to your Terraform project directory and run the following command. 
-
-   The `eval` command assigns environment variables in your shell to credentials
-   for the Teleport Terraform provider:
-
-   ```code
-   $ eval "$(tctl terraform env)"
-   🔑 Detecting if MFA is required
-   This is an admin-level action and requires MFA to complete
-   Tap any security key
-   Detected security key tap
-   ⚙️ Creating temporary bot "tctl-terraform-env-82ab1a2e" and its token
-   🤖 Using the temporary bot to obtain certificates
-   🚀 Certificates obtained, you can now use Terraform in this terminal for 1h0m0s
-   ```
-
-1. Make sure your cloud provider credentials are available to Terraform using
-   the standard approach for your organization.
-
-1. Apply the Terraform configuration:
-
-   ```code
-   $ terraform init
-   $ terraform apply
-   ```
-
-1. Open the Teleport Web UI in a browser and sign in to Teleport as a user on
-   your IdP with the `groups` trait assigned to the value that you mapped to the
-   role in your authentication connector. Your user should have the `dev_access`
-   role.
-
-
-   If you receive errors logging in using your authentication connector, log in
-   as a local user with permissions to view the Teleport audit log. This
-   permission is available in the preset `auditor` role. Check for error
-   messages in events related to the "SSO Login" type.
-
-
-
-1. Request access to the `prod_access` role through the Web UI. Visit the
-   "Access Requests" tab and click "New Request".
-
-1. Sign out of the Web UI and, as a user in a group that you mapped to the
-   `prod_access` role, sign in. In the "Access Requests" tab, you should be able
-   to see and resolve the Access Request you created.
-
-## Further reading: How the module works
-
-This section describes the resources managed by the `env_role`, `saml`, and
-`oidc` child modules. We encourage you to copy and customize these
-configurations in order to refine your settings and choose the best reusable
-interface for your environment. 
- -### The `env_access` role - -The `env_role` child module creates Teleport roles with the ability to access -Teleport-protected resources with the `env` label: - -```hcl -(!examples/terraform-starter/env_role/env_role.tf!) -``` - -The role hardcodes an `allow` rule with the ability to access applications, -databases, Linux servers, Kubernetes clusters, and Windows desktops with the -user-configured `env` label. - -Since we cannot predict which principals are available in your infrastructure, -this role leaves the `aws_role_arns`, `logins`, and other principal-related role -attributes for the user to configure. - -The role also configures an `allow` rule that enables users to request access -for the roles configured in the `request_roles` input variable. - -An `output` prints the name of the role to allow us to create a dependency -relationship between this role and an authentication connector. - -### The `env_access_reviewer` role - -If `var.request_roles` in the `env_access` role is nonempty, the `env_role` -module creates a role that can review those roles. This is a separate role to -make permissions more composable: - -```hcl -(!examples/terraform-starter/env_role/reviewer.tf!) -``` - -As with the `env_access` role, there is an output to print the name of the -`env_access_reviewer` role to create a dependency relationship with the -authentication connector. - -### Configuring an authentication connector - -The authentication connector resources are minimal. Beyond providing the -attributes necessary to send and receive Teleport OIDC and SAML messages, the -connectors configure role mappings based on user-provided values: - - - - -```hcl -(!examples/terraform-starter/oidc/oidc_connector.tf!) -``` - - - -```hcl -(!examples/terraform-starter/saml/saml.tf!) 
-``` - - - - -Since the role mappings inputs are composite data types, we add a complex type -definition when declaring the input variables for the `oidc` and `saml` child -modules: - - - - -```hcl -(!examples/terraform-starter/oidc/inputs.tf!) -``` - - - -```hcl -(!examples/terraform-starter/saml/inputs.tf!) -``` - - - -For each authentication connector, we declare a cluster authentication -preference that enables the connector. The cluster authentication preference -enables local user login with WebAuthn as a secure fallback in case you need to -troubleshoot the single sign-on provider. - - - - -```hcl -(!examples/terraform-starter/oidc/auth_pref.tf!) -``` - - - -```hcl -(!examples/terraform-starter/saml/auth_pref.tf!) -``` - - - -## Next steps - -Now that you have configured RBAC in your Terraform demo cluster, fine-tune your -setup by reading the comprehensive [Terraform provider -reference](../../../reference/terraform-provider/terraform-provider.mdx). diff --git a/docs/pages/admin-guides/infrastructure-as-code/terraform-starter/terraform-starter.mdx b/docs/pages/admin-guides/infrastructure-as-code/terraform-starter/terraform-starter.mdx deleted file mode 100644 index c82835f95fbd1..0000000000000 --- a/docs/pages/admin-guides/infrastructure-as-code/terraform-starter/terraform-starter.mdx +++ /dev/null @@ -1,35 +0,0 @@ ---- -title: "Terraform Starter Setup" -description: Provides an example to help you get started managing dynamic resources in a Teleport cluster using Terraform. ---- - -The Terraform starter guide provides an example of a Terraform module that -manages Teleport resources in production. The guide helps you to understand the -Teleport resources to manage with Terraform in order to accomplish common -Teleport setup tasks. You can use the example module as a starting point for -managing a complete set of Teleport cluster resources. 
- -The guides in the Terraform starter module assume that you have [a working Terraform provider setup](../terraform-provider/terraform-provider.mdx) on your workstation. - -## Part One: Enroll resources - -In Part One of the Terraform starter module, we show you how to enroll resources -such as Linux servers, databases, and Kubernetes clusters by deploying a pool of -Teleport Agents on virtual machine instances. You can then declare dynamic -infrastructure resources with Terraform or change the configuration file -provided to each Agent. - -[Read Part One](enroll-resources.mdx). - -## Part Two: Configure RBAC - -Part Two of the Terraform starter module shows you how to configure Teleport -role-based access controls to provide different levels of access to the -resources you enrolled in Part One. It also configures Access Requests, -available in Teleport Identity Governance, so that users authenticate with less privileged -roles by default but can request access to more privileged roles. An -authentication connector lets users authenticate to Teleport using a Single -Sign-On provider. - -[Read Part Two](rbac.mdx). - diff --git a/docs/pages/admin-guides/management/admin/admin.mdx b/docs/pages/admin-guides/management/admin/admin.mdx deleted file mode 100644 index 4e11195231369..0000000000000 --- a/docs/pages/admin-guides/management/admin/admin.mdx +++ /dev/null @@ -1,30 +0,0 @@ ---- -title: Cluster Administration Guides -description: Teleport Cluster Administration Guides. -layout: tocless-doc ---- - -The guides in this section show you the fundamentals of setting up and running a -Teleport cluster. You will learn how to run the `teleport` daemon, manage users -and resources, and troubleshoot any issues that arise. - -If you already understand how to set up a Teleport cluster, consult the -[Operations](../operations/operations.mdx) section so you can start conducting periodic -cluster maintenance tasks. 
- -## Run Teleport - -- [Teleport Daemon](daemon.mdx): Set up Teleport as a daemon on Linux with systemd. -- [Run Teleport with Self-Signed Certificates](self-signed-certs.mdx): Set up Teleport in a local -environment without configuring TLS certificates. - -## Manage users and resources - -- [Trusted Clusters](trustedclusters.mdx): Connect multiple Teleport clusters using trusted clusters. -- [Labels](labels.mdx): Manage resource metadata with labels. -- [Local Users](users.mdx): Manage local user accounts. - -## Troubleshoot issues - -- [Troubleshooting](troubleshooting.mdx): Collect metrics and diagnostic information from Teleport. -- [Uninstall Teleport](uninstall-teleport.mdx): Uninstall Teleport from your system. diff --git a/docs/pages/admin-guides/management/admin/labels.mdx b/docs/pages/admin-guides/management/admin/labels.mdx deleted file mode 100644 index 7cb0750dccba7..0000000000000 --- a/docs/pages/admin-guides/management/admin/labels.mdx +++ /dev/null @@ -1,384 +0,0 @@ ---- -title: Add Labels to Resources -description: How to assign static and command-based dynamic labels to Teleport resources. ---- - -Teleport allows you to add arbitrary key-value pairs to applications, servers, -databases, and other resources in your cluster. You can use labels to do -the following: - -- Filter the resources returned when running `tctl` and `tsh` commands. -- Define roles that limit the resources Teleport users can access. - -This guide demonstrates how to add labels to enrolled server resources. -However, you can follow similar steps to add labels to other types of resources. - -## How it works - -The labels you assign to resources can be **static labels**, **dynamic labels**, or **resource-based labels**. - -- Static labels are hardcoded in the Teleport configuration file and don't change -while the `teleport` process is running. For example, you might use a static label to identify -the resources in a `staging` or `production` environment. 
-- Dynamic labels—also known as **commands-based labels**—allow you to generate labels at runtime. -With a dynamic label, the `teleport` process executes an external command on its host at -a configurable frequency and the output of the command becomes the label value. -- Resource-based labels allow you to add labels to an instance without restarting the `teleport` -process or editing the configuration file. - -You can add multiple static, dynamic, and resource-based labels for the same resource. -However, you can't add static labels that use the same key with different values or use -a static label to define multiple potential values. - -Dynamic labels are especially useful for decoupling a label value from the Teleport configuration. -For example, if you start Teleport on an Amazon EC2 instance, you can use a dynamic label to set the -`region` value based on the result from a command sent to the EC2 instance -[metadata API](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-categories.html). -The dynamic label enables you to use the same configuration for each server in an Amazon Machine Image -but filter and limit access to the servers based on their AWS region. - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) - -- A Linux host where you will run a Teleport Agent. This guide shows you how to - apply labels to an instance of the Teleport SSH Service. You can use the - technique shown in the guide to label any Teleport-protected resource. - -(!docs/pages/includes/tctl.mdx!) - -## Step 1/2. Install Teleport - -1. Select a Linux server where you will run a Teleport Agent. - -1. (!docs/pages/includes/install-linux.mdx!) - -1. Generate an invitation token for the host. - - The invitation token is required for the local computer to join the Teleport cluster. 
-   The following example generates a new token that is valid for five minutes and can be used
-   to enroll a server:
-
-   ```code
-   $ tctl tokens add --ttl=5m --type=node
-   # The invite token: (=presets.tokens.first=)
-   ```
-
-1. List all generated non-expired tokens by running the following command:
-
-   ```code
-   $ tctl tokens ls
-   # Token                            Type        Labels Expiry Time (UTC)
-   # -------------------------------- ----------- ------ -------------------------------
-   # (=presets.tokens.first=)         Node,Db,App        10 Aug 23 19:49 UTC (4m11s)
-   ```
-
-1. Write the join token to a file on the host at `/tmp/token`:
-
-   ```code
-   $ echo (=presets.tokens.first=) | sudo tee /tmp/token
-   ```
-
-1. On the host where you plan to run the Agent, generate a configuration file
-   that enables the Teleport SSH Service. Replace `example.teleport.sh:443` with
-   the host and port of your Teleport Proxy Service or Teleport Enterprise
-   (Cloud) account:
-
-   ```code
-   $ sudo teleport configure \
-     --token="/tmp/token" \
-     --roles=node \
-     --proxy=example.teleport.sh:443 \
-     -o file
-   ```
-
-## Step 2/2. Apply labels
-
-Follow any or all of the sections below to add different types of labels to
-your resource.
-
-### Apply a static label
-
-You can configure static labels by editing the Teleport configuration file, then
-starting Teleport.
-
-To add a static label:
-
-1. Open the Teleport configuration file, `/etc/teleport.yaml`, in an editor on the computer
-where you installed the Teleport Agent.
-
-1. Locate the `labels` configuration under the `ssh_service` section.
-
-1. Add the static label key and value.
-   For example, add `environment` as the label key and `dev` as the value:
-
-   ```yaml
-   ssh_service:
-     enabled: true
-     labels:
-       environment: dev
-   ```
-
-   The preceding example illustrates a simple value setting. However, you can also use static labels
-   to define more complex string values that include white space or punctuation marks.
-   
- For example: - - ```yaml - ssh_service: - enabled: true - labels: - location: San Francisco Bldg 301 4th floor - ``` - -1. Save your changes and close the file. - -1. Start Teleport on the Linux host: - - (!docs/pages/includes/start-teleport.mdx!) - -1. Verify that you have added the label by running the following command -on your local computer. - - ```code - $ tsh ls --query 'labels["environment"]=="dev"' - ``` - - You should see output similar to the following: - - ```code - Node Name Address Labels - ---------------- ---------- ------------------------------------------ - ip-192-168-13-57 ⟵ Tunnel environment=dev,hostname=ip-192-168-13-57 - ``` - - **Checking the status of your server** - - If you don't see your server listed when you query for the label you added, you should verify that the - SSH Service is running on the server. Check the log for the server to verify that there are - messages similar to the following: - - ```text - 2023-08-07T22:22:21Z INFO [NODE:1] Service is starting in tunnel mode. pid:149932.1 service/service.go:2630 - 2023-08-07T22:22:21Z INFO [UPLOAD:1] starting upload completer service pid:149932.1 service/service.go:2723 - 2023-08-07T22:22:21Z INFO [PROC:1] The new service has started successfully. Starting syncing rotation status... - ``` - - **Checking your user profile** - - If the SSH Service is running on the server, verify that your current Teleport user has a login on the local host.
- You can check the status of your user account by running the following command: - - ```code - $ tsh status - ``` - - You should see output similar to the following with at least one login listed for your current user: - - ```code - > Profile URL: https://ajuba-aws.teledocs.click:443 - Logged in as: teleport-admin - Cluster: teleport-aws.example.com - Roles: access, editor - Logins: root, ubuntu, ec2-user - Kubernetes: enabled - Valid until: 2023-08-08 10:08:46 +0000 UTC [valid for 10h36m0s] - Extensions: login-ip, permit-agent-forwarding, permit-port-forwarding, permit-pty, private-key-policy - ``` - - If no valid logins have been assigned to the user, you should update your current user profile to include - at least one valid login. - - You can add logins to a user by running a command similar to the following: - - ```code - $ tctl users update myuser --set-logins=root - ``` - - This example adds the `root` login to the `myuser` Teleport user. - For more information about managing logins for Teleport users, see [Local Users](./users.mdx). - -**Using hidden static labels** - -If you want to use labels for role-based access control but don't want to -display the labels in command output or the Teleport Web UI, you can define them -in a hidden namespace by prefixing the label key with `teleport.hidden/`. -For example: - - -```yaml -ssh_service: - enabled: true - labels: - teleport.hidden/team-id: ai-lab-01 -``` - -### Apply dynamic labels using commands - -As with static labels, you can apply dynamic labels by editing the -Teleport configuration file, then restarting the Teleport service on your server. - -To add a command to generate a dynamic label: - -1. Stop the Teleport service running on your server. - -1. Open the Teleport configuration file—by default, `/etc/teleport.yaml`—in a text editor. - -1. Locate the `commands` configuration under the `ssh_service` section. - -1. 
Add a `command` array that runs the `uname` command with the `-p` flag every hour to return the -processor architecture of the host server. - - For example, add the `name`, `command`, and `period` fields as follows: - - ```yaml - ... - ssh_service: - enabled: true - labels: - teleport.internal/resource-id: 1f2cdcc5-cde3-41fa-b390-bc872087821a - environment: dev - commands: - - name: hostname - command: [hostname] - period: 1m0s - - name: arch - command: [uname, -p] - period: 1h0m0s - ``` - - In the `command` setting, the first element is a valid executable. - Each subsequent element is an argument. - The following syntax is valid: - - ```yaml - command: ["/bin/uname", "-m"] - ``` - - The following syntax is not valid: - - ```yaml - command: ["/bin/uname -m"] - ``` - - For more complex commands, you can use single (') and double (") - quotation marks interchangeably to create nested expressions. - For example: - - ```yaml - command: ["/bin/sh", "-c", "uname -a | egrep -o '[0-9]+\\.[0-9]+\\.[0-9]+'"] - ``` - - In configuring commands, keep the following in mind: - - - The executable must be discoverable in the `$PATH` or specified using an absolute path. - - You must set the executable permission bit on any file you use as a command. - - Shell scripts must have a [shebang line](https://en.wikipedia.org/wiki/Shebang_\(Unix\)). - - The `period` setting specifies how frequently Teleport executes each command. - In this example, the `uname -p` command is executed every hour (1h), zero minutes (0m), - and zero seconds (0s). This value can't be less than one minute. - -1. Save your changes and close the file. - -1. Start Teleport with the invitation token you saved to `/tmp/token`: - - ```code - $ sudo teleport start --token=/tmp/token - ``` - -1. Verify that you have added the label by running the following command on your local -computer. Your Teleport user must be authorized to access the server.
- - ```code - $ tsh ls - ``` - - You should see output similar to the following with both the `arch` and `environment` labels displayed: - - ```code - Node Name Address Labels - ---------------- -------------- ------------------------------------------------------ - ip-192-168-13-57 ⟵ Tunnel arch=x86_64,environment=dev,hostname=ip-192-168-13-57 - ``` - -### Apply resource-based labels - -Applying resource-based labels is only supported for servers. - -You can apply resource-based labels to a Teleport instance by creating a `server_info` -resource for the instance. For a server with name `<node-name>`, its matching `server_info` -resource should be named `si-<node-name>`. - -To add resource-based labels: - -1. Run `tctl get node/NODE_NAME` to get the name of the node resource to apply labels to. - You should get output similar to the following: - - ```yaml - kind: node - metadata: - expires: "2024-01-12T00:41:17.355013266Z" - id: <id> - name: <node-name> - revision: <revision> - spec: - # ... - ``` - - Save the value of `metadata.name` for the next step. - -1. Create the file `server_info.yaml` and paste the following into it: - - ```yaml - # server_info.yaml - kind: server_info - metadata: - name: si-<node-name> - spec: - new_labels: - "foo": "bar" - ``` - - Replace `<node-name>` with the resource name you saved in the previous step. - Run the following to create the `server_info` resource: - - ```code - $ tctl create server_info.yaml - ``` - -1. Verify that you have added the label by running the following command on your local -computer. Your Teleport user must be authorized to access the server. Teleport -applies labels from `server_info` resources gradually to prevent strain on the Auth Service in -larger clusters, so it may take several minutes for the new labels to appear.
- - ```code - $ tsh ls - ``` - - You should see output similar to the following with the `dynamic/foo` label displayed: - - ```code - Node Name Address Labels - ---------------- -------------- ------------------------------------------------------ - ip-192-168-13-57 ⟵ Tunnel dynamic/foo=bar,hostname=ip-192-168-13-57 - ``` - - - All resource-based labels created with `tctl` will have the - `dynamic/` prefix. This prefix prevents the label from being used in a - role's deny rules. - - -To update resource-based labels, recreate the `server_info` resource with updated labels. - -## Next steps - -After you have labeled your resources, you can use the labels when running -`tsh` and `tctl` commands to filter the resources that the commands return. -For more information, see [Resource filtering](../../../reference/predicate-language.mdx). - -You can also use labels to limit the access that users in different roles have to -specific classes of resources. For more information, see -[Teleport Role Templates](../../access-controls/guides/role-templates.mdx). - diff --git a/docs/pages/admin-guides/management/admin/self-signed-certs.mdx b/docs/pages/admin-guides/management/admin/self-signed-certs.mdx deleted file mode 100644 index 1aa8ff6889b0e..0000000000000 --- a/docs/pages/admin-guides/management/admin/self-signed-certs.mdx +++ /dev/null @@ -1,210 +0,0 @@ ---- -title: Running Teleport with Self-Signed Certificates -description: This guide shows you how to run Teleport using self-signed certificates, which is helpful for testing or demo environments. ---- - -This guide explains how to evaluate Teleport in a -non-production environment without having to configure TLS certificates. -We will show you how to run Teleport with self-signed certificates and address -problems caused by this configuration. - -The Teleport Proxy Service authenticates itself to clients via TLS X.509 certificates.
-If certificates are not configured for the Proxy Service then it uses self-signed certificates, -which clients will not trust by default. -When visiting the cluster's Web UI, the certificate presented will not be trusted by your browser. In this case, -you will likely see a page warning you that the website is not trusted. - -Additionally, self-signed certificates can prevent `teleport`, `tsh`, and `tctl` from connecting -to the Proxy Service. - - - **DO NOT USE SELF-SIGNED CERTIFICATES IN PRODUCTION** - - Configuring your cluster to trust self-signed certificates - makes it easier for attackers to intercept communications - between the Proxy Service and clients, since there is no - way to verify the authenticity of the certificates. - It is therefore important to properly configure certificates - when using Teleport in a production environment. - - -## Prerequisites - - - - -- A running Teleport cluster. For details on how to set this up, see our - [Getting Started](../../deploy-a-cluster/linux-demo.mdx) guide (skip TLS certificate setup). - -- A Teleport Proxy Service which does not have certificates or ACME automatic certificates configured. -For example, this Teleport Proxy Service configuration would use self-signed certs: - - ```yaml - proxy_service: - enabled: true - # TLS certificate for the HTTPS connection. - https_keypairs: [] - # Get an automatic certificate from letsencrypt.org using ACME. - acme: {} - ``` - -- (!docs/pages/includes/tctl-tsh-prerequisite.mdx!) - - See [Installation](../../../installation.mdx) for details. - - - - -- A running Teleport cluster. For details on how to set this up, see our Enterprise - [Getting Started](../../deploy-a-cluster/deploy-a-cluster.mdx) guide. - -- A Teleport Proxy Service which does not have certificates or ACME automatic certificates configured. 
-For example, this Teleport Proxy Service configuration would use self-signed certs: - - ```yaml - proxy_service: - enabled: true - # TLS certificate for the HTTPS connection. - https_keypairs: [] - # Get an automatic certificate from letsencrypt.org using ACME. - acme: {} - ``` - -- (!docs/pages/includes/tctl-tsh-prerequisite.mdx!) - - - - -## How to use self-signed certs with Teleport binaries and clients - -### `teleport` - -When running a Teleport service with `teleport`, if the service you are starting is configured to point to -the Teleport Proxy Service endpoint and the Proxy Service is using self-signed certificates, then `teleport` will need -to be run with the `--insecure` flag to disable verification of the -Proxy Service TLS certificate. This is the case when: -- The Teleport config file `proxy_server` setting is set to the Proxy Service endpoint: - - `proxy_server: "tele.example.com:443"` or - - `proxy_server: "tele.example.com:3080"` -- Teleport is started with the `--auth-server` flag pointing to the Proxy Service endpoint: - - `teleport [app | db] start --auth-server=tele.example.com:443` or - - `teleport [app | db] start --auth-server=tele.example.com:3080` - -Instructions for disabling TLS certificate verification depend on how you are -running Teleport: via the `teleport` CLI, using a Helm chart, or via systemd: - - - - When running `teleport` from the command line, pass the `--insecure` flag to disable - TLS certificate validation. For example: - ```sh - $ sudo teleport start -c /etc/teleport.yaml --insecure - $ sudo teleport app start -c /etc/teleport.yaml --insecure - $ sudo teleport db start -c /etc/teleport.yaml --insecure - ``` - Without the `--insecure` flag, you will see an error message that looks like - `x509: “tele.example.com” certificate is not trusted`. - - - - If you are using the `teleport-cluster` Helm chart, set - [extraArgs](../../../reference/helm-reference/teleport-cluster.mdx) - to include the extra argument: `--insecure`. 
- - Here is an example of the field within a values file: - - ```yaml - extraArgs: - - "--insecure" - ``` - - When using the `--set` flag, use the following syntax: - - - ```text - --set "extraArgs={--insecure}" - ``` - - If you are using the `teleport-kube-agent` chart, set the - [insecureSkipProxyTLSVerify](../../../reference/helm-reference/teleport-kube-agent.mdx) - flag to `true`. - - In a values file, this would appear as follows: - - ```yaml - insecureSkipProxyTLSVerify: true - ``` - - When using the `--set` flag, use the following syntax: - - ```text - --set insecureSkipProxyTLSVerify=true - ``` - - - - Locate the `systemd` unit file for Teleport (called teleport.service) by running the following command: - ```sh - $ systemctl status teleport - ``` - You will see output similar to the following, including the file path (`/lib/systemd/system/teleport.service`) that contains the unit file for the systemd configuration being applied: - - ```code - ● teleport.service - Teleport Service - Loaded: loaded (/lib/systemd/system/teleport.service; disabled; vendor preset: enabled) - Active: inactive (dead) - ``` - - Edit the Teleport unit file to include `--insecure` in the `ExecStart` line, for example: - ```text - ExecStart=/usr/local/bin/teleport start --pid-file=/run/teleport.pid --insecure - ``` - - After saving the unit file, you will need to reload the daemon for your changes to take effect: - ```sh - $ sudo systemctl daemon-reload - $ sudo systemctl restart teleport.service - ``` - - - - -### `tctl` -When running `tctl` remotely via the Teleport Proxy Service, if the Proxy Service is using self-signed -certificates, then `tctl` will not trust the certificate from the Proxy Service. -To disable certificate verification use the `--insecure` flag when running `tctl` commands. 
- -`tctl` will determine how to connect to the Auth Service in a few ways: -- loading configuration from a local profile after logging in with `tsh` -- loading from a config file passed as an argument: `tctl -c /etc/teleport.yaml` -- passing the `--auth-server` flag directly, as in: - - `tctl --auth-server=tele.example.com:443` or - - `tctl --auth-server=tele.example.com:3080` - -If any of these methods connects through the Teleport Proxy Service and the Proxy Service is using -self-signed certificates, `tctl` reports an error about untrusted or invalid certificates -unless you also pass the `--insecure` flag. - -For example: `tctl status --insecure` - -### `tsh` -When running `tsh`, you must specify the Teleport Proxy Service address for `tsh` to connect to. -If the Proxy Service is using self-signed certificates, `tsh` will not trust its -certificate, so you must pass the `--insecure` flag. - -For example: `tsh login --proxy=tele.example.com:443 --user=alice --insecure` - -### Teleport Connect - -Teleport Connect lets you [skip TLS certificate verification with the `--insecure` -flag](../../../connect-your-client/teleport-connect.mdx).
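To see why clients refuse a self-signed certificate, you can reproduce the failing verification locally with OpenSSL. This is a self-contained sketch; the `tele.example.com` subject and the `/tmp` paths are illustrative stand-ins for the certificate the Proxy Service generates when none is configured:

```code
# Generate a throwaway self-signed certificate, as a stand-in for the
# certificate the Proxy Service presents when none is configured.
$ openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=tele.example.com" \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem

# Verification fails because no trusted CA signed the certificate.
$ openssl verify /tmp/demo-cert.pem
```

`teleport`, `tctl`, and `tsh` perform the same kind of certificate verification when connecting to the Proxy Service; the `--insecure` flag tells them to skip it.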
- -## Further reading - -- [Configuring Teleport TLS Certs](../../deploy-a-cluster/linux-demo.mdx#configure-teleport) -- [Run Teleport as a systemd Daemon](./daemon.mdx) -- [Teleport Proxy Service](../../../reference/architecture/proxy.mdx) -- [Teleport Authentication](../../../reference/architecture/authentication.mdx) diff --git a/docs/pages/admin-guides/management/admin/troubleshooting.mdx b/docs/pages/admin-guides/management/admin/troubleshooting.mdx deleted file mode 100644 index 44f3c31ea542e..0000000000000 --- a/docs/pages/admin-guides/management/admin/troubleshooting.mdx +++ /dev/null @@ -1,231 +0,0 @@ ---- -title: Troubleshooting -description: Troubleshooting and Collecting Metrics of Teleport Processes ---- - -In this guide, we will explain how to address issues or unexpected behavior in your -Teleport cluster. - -You can use these steps to get more visibility into the `teleport` process so -you can troubleshoot the Auth Service, Proxy Service, and Teleport Agent -services such as the Application Service and Database Service. - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) - -- (!docs/pages/includes/tctl.mdx!) - -## Step 1/3. Enable verbose logging - -To change log levels in Teleport, you can use either of the following methods: - -- Debug Service: Allows on-the-fly log level adjustments without restarting the - instance, which is ideal for troubleshooting sessions. -- Updating configuration: Involves updating the Teleport configuration file and - restarting the instance. - - - -The Teleport Debug Service allows administrators to dynamically manage log levels -without restarting the instance. The service, enabled by default, ensures -local-only access and must be consumed from inside the same instance. - -To change the instance log level use the `teleport debug set-log-level` command: - -```code -$ teleport debug set-log-level DEBUG -Changed log level from "INFO" to "DEBUG". 
- -$ kubectl -n teleport exec my-pod -- teleport debug set-log-level DEBUG -Changed log level from "INFO" to "DEBUG". -``` - -If you're unsure of the current level, you can retrieve it with -`teleport debug get-log-level`. - -After troubleshooting, remember to set the log level back to avoid generating -unnecessary logs. - -(!docs/pages/includes/diagnostics/teleport-debug-config.mdx!) - - - -To diagnose problems, you can configure the `teleport` process to run with -verbose logging enabled by passing it the `-d` flag. `teleport` will write logs -to stderr. - -Alternatively, you can set the log level from the Teleport configuration file: - -```yaml -teleport: - log: - severity: DEBUG -``` - -Restart the `teleport` process to apply the modified log level. Logs will resemble -the following (these logs were printed while joining a server to a cluster, then -terminating the `teleport` process on the server): - - - -Debug logs include the file and line number of the code that emitted the log, so -you can investigate (or report) what a `teleport` process was doing before it ran into -problems. Here's an example: - -``` -DEBU [NODE:PROX] Agent connected to proxy: [aee1241f-0f6f-460e-8149-23c38709e46d.tele.example.com aee1241f-0f6f-460e-8149-23c38709e46d teleport-proxy-us-west-2-6db8db844c-ftmg9.tele.example.com teleport-proxy-us-west-2-6db8db844c-ftmg9 localhost 127.0.0.1 ::1 tele.example.com 100.92.90.42 remote.kube.proxy.teleport.cluster.local]. leaseID:4 target:tele.example.com:11106 reversetunnel/agent.go:414 -DEBU [NODE:PROX] Changing state connecting -> connected. leaseID:4 target:tele.example.com:11106 reversetunnel/agent.go:210 -DEBU [NODE:PROX] Discovery request channel opened: teleport-discovery. leaseID:4 target:tele.example.com:11106 reversetunnel/agent.go:526 -DEBU [NODE:PROX] handleDiscovery requests channel. leaseID:4 target:tele.example.com:11106 reversetunnel/agent.go:544 -DEBU [NODE:PROX] Pool is closing agent.
leaseID:2 target:tele.example.com:11106 reversetunnel/agentpool.go:238 -DEBU [NODE:PROX] Pool is closing agent. leaseID:3 target:tele.example.com:11106 reversetunnel/agentpool.go:238 -``` - - - It is not recommended to run Teleport in production with verbose logging as it - generates a substantial amount of data. - - - -## Step 2/3. Generate a debug dump - -The `teleport` binary is a Go program. Go programs assign work to CPU threads -using an abstraction called a **goroutine**. You can get a goroutine dump of a -running `teleport` process by sending it a `USR1` signal. - -This is especially useful for troubleshooting a `teleport` process that appears -stuck, since you can see where a goroutine is blocked and why. For example, -goroutines often communicate using **channels**, and a goroutine dump indicates -whether a goroutine is waiting to send or receive on a channel. - -To generate a goroutine dump, send a `USR1` signal to a `teleport` process: - -```code -$ kill -USR1 $(pidof teleport) -``` - -Teleport will print the debug information to `stderr`. Here is what you will see in -the logs: - -```txt -INFO [PROC:1] Got signal "user defined signal 1", logging diagnostic info to stderr. service/signals.go:99 -Runtime stats -goroutines: 64 -OS threads: 10 -GOMAXPROCS: 2 -num CPU: 2 -... -goroutines: 84 -... -Goroutines -goroutine 1 [running]: -runtime/pprof.writeGoroutineStacks(0x3c2ffc0, 0xc0001a8010, 0xc001011a38, 0x4bcfb3) - /usr/local/go/src/runtime/pprof/pprof.go:693 +0x9f -... -``` - - - -You can print a goroutine dump without enabling verbose logging. - - - -## Step 3/3. Ask for help - -Once you have collected verbose logs and a goroutine dump from your `teleport` -binary, you can use this information to get help from the Teleport community and -Support team. - -### Collect your Teleport version - -Determine the version of the `teleport` process you are investigating.
- -```code -$ teleport version -Teleport v8.3.7 git:v8.3.7-0-ga8d066935 go1.17.3 -``` - -You can also collect the versions of the Teleport Auth Service, Proxy -Service, and client tools to rule out version compatibility issues. - -To see the version of the Auth Service and Proxy Service, run the following -command: - -```code -$ tctl status -Cluster mytenant.teleport.sh -Version (=cloud.version=) -Host CA never updated -User CA never updated -Jwt CA never updated -CA pin (=presets.ca_pin=) -``` - -Get the versions of your client tools: - -```code -$ tctl version -Teleport v9.0.4 git: go1.18 -$ tsh version -Teleport v9.0.4 git: go1.18 -``` - -### Pose your question - - - -If you have a question or need assistance please submit a request -through the [Teleport support portal](https://support.goteleport.com). - - - -If you need help, please ask on our [community forum](https://github.com/gravitational/teleport/discussions). You can also open an [issue on GitHub](https://github.com/gravitational/teleport/issues). - -For more information about Enterprise features reach out to [the Teleport sales team](https://goteleport.com/signup/enterprise/). -You can also sign up for a [free trial](https://goteleport.com/signup) of Teleport Enterprise. - - - -## Further reading - -This guide showed how to investigate issues with the `teleport` process. To see -how you can monitor more general health and performance data from your Teleport -cluster, read our [Teleport Diagnostics](../diagnostics/monitoring.mdx) guides. - -For additional sources of Teleport support, please see the -[Teleport Support and Education Center](https://goteleport.com/support/). - -## Common Issues - -### `teleport.cluster.local` - -It is common to see references to `teleport.cluster.local` within logs and -errors in Teleport. This is a special value that is used within Teleport for two -purposes and seeing it within your logs is not necessarily an indication that -anything is incorrect. 
- -Firstly, Teleport uses this value within certificates (as a DNS Subject -Alternative Name) issued to the Auth Service and Proxy Service. Teleport clients can -then use this value to validate the service's certificates during the TLS -handshake regardless of the service address as long as the client already has a -copy of the cluster's certificate authorities. This is important as there are -often multiple different ways that a client can connect to the Auth Service and -these are not always via the same address. - -Secondly, this value is used by clients as part of the URL when making gRPC or -HTTP requests to the Teleport API. This is because the Teleport API client uses -special logic to open the connection to the Auth Service to make the request, -rather than connecting to a single address as a typical client may do. This -special logic is necessary for the client to be able to support connecting to a -list of Auth Service instances or to be able to connect to the Auth Service through a -tunnel via the Proxy Service. This means that `teleport.cluster.local` appears -in log messages that show the URL of a request made to the Auth Service, and -does not explicitly indicate that something is misconfigured. - -### `ssh: overflow reading version string` and/or `502: Bad Gateway` errors - -(!docs/pages/includes/tls-multiplexing-warnings.mdx!) - diff --git a/docs/pages/admin-guides/management/admin/trusted-cluster-address-lookup.mdx b/docs/pages/admin-guides/management/admin/trusted-cluster-address-lookup.mdx deleted file mode 100644 index 02a022ad5133c..0000000000000 --- a/docs/pages/admin-guides/management/admin/trusted-cluster-address-lookup.mdx +++ /dev/null @@ -1,35 +0,0 @@ ---- -title: Commands to look up cluster addresses -description: Suggests command-line tools and scripts to look up cluster addresses. 
--- - -If you aren't sure what values to use for cluster settings such as the `tunnel_addr` -or `web_proxy_addr` in resource configuration files, you can often look up the information -using command-line tools that parse and extract machine-readable data from JSON files. -One of the most common of these tools is `jq`. -You can download `jq` for most operating systems from the -[jqlang](https://jqlang.github.io/jq/download/) website. - -After you download the program, you can run commands that use `jq` to look up -cluster addresses. - -To get cluster addresses: - -1. Set the `PROXY` environment variable to your Teleport cluster domain, -replacing `teleport.example.com` with your own domain: - - ```code - $ PROXY=teleport.example.com - ``` - -1. Extract the `tunnel_addr` for your cluster by running the following command: - - ```code - $ curl https://$PROXY/webapi/ping | jq 'if .proxy.tls_routing_enabled == true then .proxy.ssh.public_addr else .proxy.ssh.ssh_tunnel_public_addr end' - ``` - -1. Extract the `web_proxy_addr` for your cluster by running the following command: - - ```code - $ curl https://$PROXY/webapi/ping | jq .proxy.ssh.public_addr - ``` \ No newline at end of file diff --git a/docs/pages/admin-guides/management/admin/uninstall-teleport.mdx b/docs/pages/admin-guides/management/admin/uninstall-teleport.mdx deleted file mode 100644 index 3b2191e04cfdd..0000000000000 --- a/docs/pages/admin-guides/management/admin/uninstall-teleport.mdx +++ /dev/null @@ -1,298 +0,0 @@ ---- -title: Uninstall Teleport -description: How to remove Teleport from your system ---- - -This guide explains how to uninstall Teleport completely, including binaries, configurations, and data. - -## Prerequisites - -- A system with Teleport installed. - - -These instructions only apply to non-containerized installations of Teleport.
- -If you are running Teleport in Kubernetes, you should uninstall the Helm chart release instead: - -```code -# Example: uninstall the Helm release named 'teleport-kube-agent' in the 'teleport' namespace -$ helm uninstall --namespace teleport teleport-kube-agent -``` - -If you are running Teleport in Docker, you should stop the Teleport Docker container: - -```code -# Example: Stop the Docker container named 'teleport' -$ docker stop teleport -``` - - -## Step 1/3. Stop any running Teleport processes - - - - Instruct `systemd` to stop the Teleport process, and disable it from automatically starting: - - ```code - $ sudo systemctl stop teleport - $ sudo systemctl disable teleport - ``` - - If these `systemd` commands do not work, you can "kill" all the running Teleport processes instead: - - ```code - $ sudo killall teleport - ``` - - - - Instruct `launchd` to stop the Teleport process, and disable it from automatically starting: - - ```code - $ sudo launchctl unload -w /Library/LaunchDaemons/com.goteleport.teleport.plist - $ sudo rm -f /Library/LaunchDaemons/com.goteleport.teleport.plist - ``` - - If these commands do not work, you can "kill" all the running Teleport processes instead: - - ```code - $ sudo killall teleport - ``` - - - - - There are currently no long-running Teleport processes on Windows machines. - - - - -## Step 2/3. Remove Teleport binaries - -Follow the steps for your operating system to remove Teleport binaries. - -### Linux - -Follow the instructions for your Linux distribution: - - - - - Uninstall the Teleport binary using APT: - - ```code - $ sudo apt-get -y remove teleport-ent - ``` - - For Teleport Community Edition, use the following command: - - ```code - $ sudo apt-get -y remove teleport - ``` - - Uninstall the Teleport APT repo: - - ```code - $ sudo rm -f /etc/apt/sources.list.d/teleport.list - ``` - - - If the commands above do not work, you may have installed Teleport using a standalone DEB package. 
Remove it with: - - ```code - # Use "teleport" instead of "teleport-ent" for Teleport Community Edition - $ sudo dpkg -r teleport-ent - ``` - - - - - - Uninstall the Teleport binary using YUM: - - ```code - # Change the package name to "teleport" for Teleport Community Edition - $ sudo yum -y remove teleport-ent - # Optional: Use DNF on newer distributions - # $ sudo dnf -y remove teleport-ent - ``` - - Uninstall the Teleport YUM repo: - - ```code - $ sudo rm -f /etc/yum.repos.d/teleport.repo - ``` - - - If the commands above do not work, you may have installed Teleport using a standalone RPM package. Remove it with: - - ```code - # Use "teleport" for Teleport Community Edition - $ sudo rpm -e teleport-ent - ``` - - - - - - Uninstall the Teleport binary using zypper: - - ```code - # Change the package name to "teleport" for Teleport Community Edition - $ sudo zypper --non-interactive remove teleport-ent - ``` - - Uninstall the Teleport zypper repo: - - ```code - $ sudo zypper removerepo teleport - ``` - - - - - - These are the default paths to the Teleport binaries. If you have changed these from the defaults on your system, substitute those paths here. - You can use `dirname $(which teleport)` to look this up automatically. - - - Remove the Teleport binaries from the machine: - - ```code - $ sudo rm -f /usr/local/bin/tbot - $ sudo rm -f /usr/local/bin/tctl - $ sudo rm -f /usr/local/bin/teleport - $ sudo rm -f /usr/local/bin/tsh - $ sudo rm -f /usr/local/bin/fdpass-teleport - ``` - - - - -### macOS - - - These are the default paths to the Teleport binaries. If you have changed these from the defaults on your system, substitute those paths here. - You can use `dirname $(which teleport)` to look this up automatically.
- - - Remove the Teleport binaries and links to Teleport software from the machine: - - ```code - $ sudo rm -f /usr/local/bin/tbot - $ sudo rm -f /usr/local/bin/tctl - $ sudo rm -f /usr/local/bin/teleport - $ sudo rm -f /usr/local/bin/tsh - $ sudo rm -f /usr/local/bin/fdpass-teleport - ``` - - - - You may have Teleport software in the `/Applications` folder if you: - - Installed from a macOS tarball for v17+ that includes `tsh.app` and `tctl.app` - - Installed the macOS `tsh` client-only package for v16 or older versions. - - Installed Teleport Connect for macOS - - You can remove those with these commands: - - ```code - $ sudo rm -rf /Applications/tsh.app - $ sudo rm -rf /Applications/tctl.app - $ sudo rm -rf /Applications/Teleport\ Connect.app - ``` - - -### Windows - - Remove the `tsh.exe` and `tctl.exe` binaries from the machine: - - ```code - $ del C:\Path\To\tsh.exe - $ del C:\Path\To\tctl.exe - ``` -(!docs/pages/includes/uninstall-teleport-connect-windows.mdx!) - -(!docs/pages/includes/uninstall-windows-auth.mdx!) - -## Step 3/3. Remove Teleport data and configuration files - - - - - These are the default paths to the Teleport config files and data directory. - If you have changed these from the defaults on your system, substitute those paths here. - - - Remove the Teleport config file: - - ```code - $ sudo rm -f /etc/teleport.yaml - # Optional: Also remove the Machine ID config file, if you used it - # $ sudo rm -f /etc/tbot.yaml - ``` - - Remove the Teleport data directory: - - ```code - $ sudo rm -rf /var/lib/teleport - ``` - - Optionally, also remove the global config file and local user data directory for `tsh`: - - ```code - $ sudo rm -f /etc/tsh.yaml - $ rm -rf ~/.tsh - ``` - - - - These are the default paths to the Teleport config files and data directory. - If you have changed these from the defaults on your system, substitute those paths here. 
- - - Remove the Teleport config file: - - ```code - $ sudo rm -f /etc/teleport.yaml - # Optional: Also remove the Machine ID config file, if you used it - # $ sudo rm -f /etc/tbot.yaml - ``` - - Remove the Teleport data directory: - - ```code - $ sudo rm -rf /var/lib/teleport - ``` - - Optionally, also remove: - - the global config file and local user data directory for `tsh` - - the local user data directory for Teleport Connect - - ```code - # tsh - $ sudo rm -f /etc/tsh.yaml - $ rm -rf ~/.tsh - # Teleport Connect - $ rm -rf ~/Library/Application\ Support/Teleport\ Connect - ``` - - - - Remove the local user data directory for `tsh`: - - ```code - $ rmdir /s /q %USERPROFILE%\.tsh - ``` - - Optionally, also remove the local user data directory for Teleport Connect: - - ```code - $ rmdir /s /q "%APPDATA%\Teleport Connect" - ``` - - - - -Teleport is now removed from your system. - -Any Teleport services will stop appearing in your Teleport Web UI or the output of `tsh ls` once their last heartbeat has timed out. This usually occurs within 10-15 minutes of stopping the Teleport process. diff --git a/docs/pages/admin-guides/management/admin/users.mdx b/docs/pages/admin-guides/management/admin/users.mdx deleted file mode 100644 index 0ae4068bcc5e1..0000000000000 --- a/docs/pages/admin-guides/management/admin/users.mdx +++ /dev/null @@ -1,138 +0,0 @@ ---- -title: Local Users -description: Learn how to manage local users in Teleport. Local users are stored on the Auth Service instead of a third-party identity provider. ---- - -In Teleport, **local users** are users managed directly via Teleport, rather -than a third-party identity provider. All local users are stored in Teleport's -cluster state backend, which contains the user's name, their roles and traits, -and a bcrypt password hash. 
- -This guide shows you how to: - -- [Add local users](./users.mdx#adding-local-users) -- [Edit existing users](./users.mdx#editing-users) -- [Delete users](./users.mdx#deleting-users) - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) - -- (!docs/pages/includes/tctl.mdx!) - -## Adding local users - -A user identity in Teleport exists in the scope of a cluster. -A Teleport administrator creates Teleport user accounts and maps them to the roles they can use. - -Let's look at this table: - -| Teleport User | Allowed OS Logins | Description | -| - | - | - | -| `joe` | `joe`, `root` | Teleport user `joe` can log in to member Nodes as user `joe` or `root` on the OS. | -| `bob` | `bob` | Teleport user `bob` can log in to member Nodes only as OS user `bob`. | -| `kim` | | If no OS login is specified, it defaults to the same name as the Teleport user, `kim`. | - -Let's add a new user to Teleport using the `tctl` tool: - - - -```code -$ tctl users add joe --logins=joe,root --roles=access,editor -``` - - -```code -$ tctl users add joe --logins=joe,root --roles=access,editor,reviewer -``` - - - - -Teleport generates an auto-expiring token (with a TTL of one hour) and prints -the token URL, which must be used before the TTL expires. - -```code -User "joe" has been created but requires a password. Share this URL with the user to complete user setup, link is valid for 1h: -https://<proxy_host>:443/web/invite/<token> - -NOTE: Make sure <proxy_host>:443 points at a Teleport proxy which users can access. -``` - -The user completes registration by visiting this URL in their web browser, -picking a password, and configuring multi-factor authentication. If the -credentials are correct, the Teleport Auth Service generates and signs a new -certificate, and the client stores this key and will use it for subsequent -logins. - -The key will automatically expire after 12 hours by default, after which -the user will need to log back in with their credentials.
This TTL can be -configured to a different value. - -Once authenticated, the account will become visible via `tctl`: - -```code -$ tctl users ls - -# User Allowed Logins -# ---- -------------- -# admin admin,root -# kim kim -# joe joe,root -``` - -## Editing users - -Admins can edit user entries via `tctl`. - -For example, to see the full list of user records, an administrator can execute: - -```code -$ tctl get users -``` - -To edit the user `joe`, run the following command: - -```code -$ tctl edit user/joe -``` - -Make your changes, then save and close the file in your editor to apply them. - -## Deleting users - -Admins can delete a local user via `tctl`: - -```code -$ tctl users rm joe -``` - -## Next steps - - - - -In addition to users, you can use `tctl` to manage roles and other dynamic -resources. See our [Teleport Resources Reference](../../../reference/resources.mdx). - -For all available `tctl` commands and flags, see our [CLI Reference](../../../reference/cli/tctl.mdx). - -You can also configure Teleport so that users can log in using an SSO provider. -For more information, see: - -- [Single Sign-On](../../access-controls/sso/sso.mdx) - - - - -In addition to users, you can use `tctl` to manage roles and other dynamic -resources. See our [Teleport Resources Reference](../../../reference/resources.mdx). - -For all available `tctl` commands and flags, see our -[CLI Reference](../../../reference/cli/tctl.mdx). - -You can also configure Teleport so that users can log in using GitHub. For more -information, see [GitHub SSO](../../access-controls/sso/github-sso.mdx). 
- - - diff --git a/docs/pages/admin-guides/management/diagnostics/metrics.mdx b/docs/pages/admin-guides/management/diagnostics/metrics.mdx deleted file mode 100644 index df6ced5db6cb9..0000000000000 --- a/docs/pages/admin-guides/management/diagnostics/metrics.mdx +++ /dev/null @@ -1,205 +0,0 @@ ---- -title: Key Metrics for Self-Hosted Clusters -description: Describes important metrics to monitor if you are self-hosting Teleport. -tocDepth: 3 ---- - -This guide explains the metrics you should use to get started monitoring your -self-hosted Teleport cluster, focusing on metrics reported by the Auth Service -and Proxy Service. If you use Teleport Enterprise (Cloud), the Teleport team -monitors and responds to these metrics for you. - -For a reference of all available metrics, see the [Teleport Metrics -Reference](../../../reference/monitoring/metrics.mdx). - -This guide assumes that you already monitor compute resources on all instances -that run the Teleport Auth Service and Proxy Service (e.g., CPU, memory, disk, -bandwidth, and open file descriptors). - -## Enabling metrics - -(!docs/pages/includes/diagnostics/diag-addr-prereqs-tabs.mdx!) - -This will enable the `http://127.0.0.1:3000/metrics` endpoint, which serves the -metrics that Teleport tracks. It is compatible with [Prometheus](https://prometheus.io/) collectors. - -## Backend operations - -A Teleport cluster cannot function if the Auth Service does not have a healthy -cluster state backend. You need to track the ability of the Auth Service to read -from and write to its backend. - -The Auth Service can connect to [several possible -backends](../../../reference/backends.mdx). In addition to Teleport backend -metrics, you should set up monitoring for your backend of choice so that, if -these metrics show problematic values, you can correlate them with metrics on -your backend infrastructure. 
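The `/metrics` endpoint described above is a standard Prometheus scrape target. As a minimal sketch, assuming the default diagnostics address of `127.0.0.1:3000`, a Prometheus scrape configuration for the metrics in this guide might look like:

```yaml
scrape_configs:
  - job_name: teleport
    # Diagnostics endpoint enabled via the diagnostics address setting;
    # metrics are served at the default /metrics path.
    static_configs:
      - targets: ["127.0.0.1:3000"]
```

Adjust the target address if you serve diagnostics on a different host or port.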
- -### Backend operation throughput and availability - -On each backend operation, the Auth Service increments a metric. Backend -operation metrics have the following format: - -```text -teleport_backend_<operation>[_failed]_total -``` - -If an operation results in an error, the Auth Service adds the `_failed` segment -to the metric name. For example, successfully creating a record increments the -`teleport_backend_write_requests_total` metric. If the create operation fails, -the Auth Service increments `teleport_backend_write_requests_failed_total` -instead. - -The following backend operation metrics are available: - -|Operation|Incremented metric name| -|---|---| -|Create an item|`write_requests`| -|Modify an item, creating it if it does not exist|`write_requests`| -|Update an item|`write_requests`| -|Conditionally update an item if versions match|`write_requests`| -|List a range of items|`batch_read_requests`| -|Get a single item|`read_requests`| -|Compare and swap items|`write_requests`| -|Delete an item|`write_requests`| -|Conditionally delete an item if versions match|`write_requests`| -|Write a batch of updates atomically, failing the write if any update fails|Both `write_requests` and `atomic_write_requests`| -|Delete a range of items|`batch_write_requests`| -|Update the keepalive status of an item|`write_requests`| - -During failed backend writes, a Teleport process also increments the -`teleport_backend_write_requests_failed_precondition_total` metric if the cause of the -failure is expected. For example, the metric increments during a create -operation if a record already exists, during an update or delete operation if -the record is not found, and during an atomic write if the resource was modified -concurrently. All of these conditions can hold in a well-functioning Teleport -cluster.
- -`teleport_backend_write_requests_failed_total` increments whenever -`teleport_backend_write_requests_failed_precondition_total` increments, and you can use -the latter to distinguish potentially expected write failures from unexpected, problematic -ones. - -You can use backend operation metrics to define an availability formula, i.e., -the percentage of reads or writes that succeeded. For example, in Prometheus, -you can define a query similar to the following. This takes the percentage of -write requests that failed for unexpected reasons and subtracts it from 1 to get -a percentage of successful writes: - -``` -1 - ( -  sum(rate(teleport_backend_write_requests_failed_total[5m])) -  - sum(rate(teleport_backend_write_requests_failed_precondition_total[5m])) -) / sum(rate(teleport_backend_write_requests_total[5m])) -``` - -If your backend begins to appear unavailable, you can investigate your backend -infrastructure. - -### Backend operation performance - -To help you track backend operation performance, the Auth Service also exposes -Prometheus [histogram metrics](https://prometheus.io/docs/practices/histograms/) -for read and write operations: - -- `teleport_backend_read_seconds_bucket` -- `teleport_backend_write_seconds_bucket` -- `teleport_backend_batch_write_seconds_bucket` -- `teleport_backend_batch_read_seconds_bucket` -- `teleport_backend_atomic_write_seconds_bucket` - -The backend throughput metrics discussed in the previous section map onto -latency metrics. Whenever the Auth Service increments one of the throughput -metrics, it reports one of the corresponding latency metrics. See the table -below for which throughput metrics map to which latency metrics. Each metric -name excludes the standard prefixes and suffixes.
- -|Throughput|Latency| -|---|---| -|`read_requests`|`read_seconds_bucket`| -|`write_requests`|`write_seconds_bucket`| -|`batch_read_requests`|`batch_read_seconds_bucket`| -|`batch_write_requests`|`batch_write_seconds_bucket`| -|`atomic_write_requests`|`atomic_write_seconds_bucket`| - -## Agents and connected resources - -To enable users to access most infrastructure with Teleport, you must join a -[Teleport Agent](../../../enroll-resources/agents/agents.mdx) to your Teleport -cluster and configure it to proxy your infrastructure. In a typical setup, an -Agent establishes an SSH reverse tunnel with the Proxy Service. User traffic to -Teleport-protected resources flows through the Proxy Service, an Agent, and -finally the infrastructure resource the Agent proxies. Return traffic from the -resource takes this path in reverse. - -### Number of connected resources by type - -Teleport-connected resources periodically send heartbeat (keepalive) messages to -the Auth Service. The Auth Service uses these heartbeats to track the number of -Teleport-protected resources by type with the `teleport_connected_resources` -metric. - -The Auth Service tracks this metric for the following resources: - -- SSH servers -- Kubernetes clusters -- Applications -- Databases -- Teleport Database Service instances -- Windows desktops - -You can use this metric to: -- Compare the number of resources that are protected by Teleport with those that - are not so you can plan your Teleport rollout, e.g., by configuring [Auto - Discovery](../../../enroll-resources/auto-discovery/auto-discovery.mdx). -- Correlate changes in Teleport usage with resource utilization on Auth Service - and Proxy Service compute instances to determine scaling needs.
- -You can include this query in your Grafana configuration to break this metric -down by resource type: - -```text -sum(teleport_connected_resources) by (type) -``` - -### Reverse tunnels by type - -Every Teleport service that starts up establishes an SSH reverse tunnel to the -Proxy Service. (Self-hosted clusters can configure Agent services to connect to -the Auth Service directly without establishing a reverse tunnel.) The Proxy -Service tracks the number of reverse tunnels using the metric, -`teleport_reverse_tunnels_connected`. - -With an improperly scaled Proxy Service pool, the Proxy Service can become a -bottleneck for traffic to Teleport-protected resources. If Proxy Service -instances display heavy utilization of compute resources while the number of -connected infrastructure resources is high, you can consider scaling out your -Proxy Service pool and using [Proxy Peering](../operations/proxy-peering.mdx). - -Use the following Grafana query to track the maximum number of reverse tunnels -by type over a given interval: - -```text -max(teleport_reverse_tunnels_connected) by (type) -``` - -## Teleport instance versions - -At regular intervals (around 7 seconds with jitter), the Auth Service refreshes -its count of registered Teleport instances, including Agents and Teleport -processes that run the Auth Service and Proxy Service. You can measure this -count with the metric, `teleport_registered_servers`. To get the number of -registered instances by version, you can use this query in Grafana: - -```text -sum by (version)(teleport_registered_servers) -``` - -You can use this metric to tell how many of your registered Teleport instances -are behind the version of the Auth Service and Proxy Service, which can help you -identify any that are at risk of violating the Teleport [version compatibility -guarantees](../../../upgrading/overview.mdx). - -We strongly encourage self-hosted Teleport users to enroll their Agents in -automatic updates.
You can track the count of Teleport Agents that are not -enrolled in automatic updates using the metric, `teleport_enrolled_in_upgrades`. -[Read the documentation](../../../upgrading/agent-managed-updates.mdx) for how -to enroll Agents in automatic updates. - diff --git a/docs/pages/admin-guides/management/diagnostics/monitoring.mdx b/docs/pages/admin-guides/management/diagnostics/monitoring.mdx deleted file mode 100644 index 9f885f6974c41..0000000000000 --- a/docs/pages/admin-guides/management/diagnostics/monitoring.mdx +++ /dev/null @@ -1,62 +0,0 @@ ---- -title: Health Monitoring -description: Monitoring health and readiness. ---- - -Teleport provides health checking mechanisms in order to verify that it -is healthy and ready to serve traffic. These can be used by things like -Kubernetes probes to monitor the health of a Teleport process. - -## Enable health monitoring - -(!docs/pages/includes/diagnostics/diag-addr-prereqs-tabs.mdx!) - -Now you can collect monitoring information from several endpoints. - -## `/healthz` - -The `http://127.0.0.1:3000/healthz` endpoint responds with a body of -`{"status":"ok"}` and an HTTP 200 OK status code if the process is running. - -This is a simple check, suitable for determining if the Teleport process is -still running. - -## `/readyz` - -The `http://127.0.0.1:3000/readyz` endpoint is similar to `/healthz`, but its -response includes information about the state of the process. - -The response body is a JSON object of the form: - -``` -{ "status": "a status message here"} -``` - -### `/readyz` and heartbeats - -If a Teleport component fails to execute its heartbeat procedure, it will enter -a degraded state. Teleport will begin recovering from this state when a -heartbeat completes successfully. - -The first successful heartbeat will transition Teleport into a recovering state. - -A second consecutive successful heartbeat will cause Teleport to transition to -the OK state. 
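Because `/healthz` reflects only process liveness while `/readyz` reflects heartbeat state, the two endpoints map naturally onto Kubernetes liveness and readiness probes. A minimal sketch, assuming the diagnostics endpoint is enabled on its default port `3000`:

```yaml
# Probe settings here are illustrative; tune periods and thresholds for your environment.
livenessProbe:
  httpGet:
    path: /healthz
    port: 3000
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /readyz
    port: 3000
  periodSeconds: 10
  # Heartbeat recovery can take over a minute, so allow several
  # consecutive failures before marking the pod unready.
  failureThreshold: 12
```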
- -Teleport heartbeats run approximately every 60 seconds when healthy, and failed -heartbeats are retried approximately every 5 seconds. This means that depending -on the timing of heartbeats, it can take 60-70 seconds after connectivity is -restored for `/readyz` to start reporting healthy again. - -### Status codes - -The status code of the response can be one of: - -- HTTP 200 OK: Teleport is operating normally -- HTTP 503 Service Unavailable: Teleport has encountered a connection error and - is running in a degraded state. This happens when a Teleport heartbeat fails. -- HTTP 400 Bad Request: Teleport is either entering its initial startup phase or - has begun recovering from a degraded state. - -The same state information is also available via the `process_state` metric -under the `/metrics` endpoint. diff --git a/docs/pages/admin-guides/management/export-audit-events/datadog.mdx b/docs/pages/admin-guides/management/export-audit-events/datadog.mdx deleted file mode 100644 index 097cf861a5a07..0000000000000 --- a/docs/pages/admin-guides/management/export-audit-events/datadog.mdx +++ /dev/null @@ -1,210 +0,0 @@ ---- -title: Export Teleport Audit Events with Datadog -description: How to configure the Teleport Event Handler plugin and Fluentd to send audit logs to Datadog ---- - -Datadog is a SaaS monitoring and security platform. In this guide, we'll explain -how to forward Teleport audit events to Datadog using Fluentd.
- -## How it works - -The Teleport Event Handler authenticates to the Teleport Auth Service to receive -audit events over a gRPC stream, then sends those events to Fluentd as JSON -payloads over a secure channel established via mutual TLS: - -![Architecture of the setup shown in this guide](../../../../img/management/datadog-diagram.png) - -Since the Datadog Agent can only receive logs from remote sources as -JSON-encoded bytes over a [TCP or UDP -connection](https://docs.datadoghq.com/agent/logs/?tab=tailfiles#custom-log-collection), -the Teleport Event Handler needs to send its HTTPS payloads without using the -Datadog Agent. Fluentd handles authentication to the Datadog API. - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) - -(!docs/pages/includes/machine-id/plugin-prerequisites.mdx!) - -- A [Datadog](https://www.datadoghq.com/) account. -- A server, virtual machine, Kubernetes cluster, or Docker environment to run the - Event Handler. The instructions below assume a local Docker container for testing. -- Fluentd version v(=fluentd.version=) or greater. The Teleport Event Handler - will create a new `fluent.conf` file you can integrate into an existing Fluentd - system, or use with a fresh setup. - -The instructions below demonstrate a local test of the Event Handler plugin on -your workstation. You will need to adjust paths, ports, and domains for other -environments. - -- (!docs/pages/includes/tctl.mdx!) - -## Step 1/6. Install the Event Handler plugin - -The Teleport Event Handler runs alongside the Fluentd forwarder, receives events -from Teleport's events API, and forwards them to Fluentd. - -(!docs/pages/includes/install-event-handler.mdx!) - -## Step 2/6. Generate a plugin configuration - -(!docs/pages/includes/configure-event-handler.mdx!) - -## Step 3/6. Create a user and role for reading audit events - -(!docs/pages/includes/plugins/event-handler-role-user.mdx!) - -## Step 4/6. 
Create teleport-event-handler credentials - -The Teleport Event Handler needs credentials to authenticate to the Teleport -Auth Service. In this section, you will give the Event Handler access to these -credentials. - -### Enable issuing of credentials for the Event Handler role - - - -(!docs/pages/includes/plugins/rbac-impersonate-event-handler-machine-id.mdx!) - - -(!docs/pages/includes/plugins/rbac-impersonate-event-handler.mdx!) - - - -### Export an identity file for the Event Handler plugin user - -Give the plugin access to a Teleport identity file. We recommend using Machine -ID for this in order to produce short-lived identity files that are less -dangerous if exfiltrated, though in demo deployments, you can generate -longer-lived identity files with `tctl`: - - - -(!docs/pages/includes/plugins/tbot-identity.mdx secret="teleport-event-handler-identity"!) - - -(!docs/pages/includes/plugins/identity-export.mdx user="teleport-event-handler" secret="teleport-event-handler-identity"!) - - - -## Step 5/6. Install the Fluentd output plugin for Datadog - -In order for Fluentd to communicate with Datadog, it requires the [Fluentd output -plugin for Datadog](https://github.com/DataDog/fluent-plugin-datadog). Install -the plugin on your Fluentd host using either `gem` or the `td-agent`, if installed: - -```code -# Using Gem -$ gem install fluent-plugin-datadog - -# Using td-agent -$ /usr/sbin/td-agent-gem install fluent-plugin-datadog -``` - - - -If you're running Fluentd in a local Docker container for testing, you can adjust -the entrypoint to an interactive shell as the root user, so you can install the -plugin before starting Fluentd: - -```code -$ docker run -u $(id -u root):$(id -g root) -p 8888:8888 -v $(pwd):/keys -v \ -$(pwd)/fluent.conf:/fluentd/etc/fluent.conf --entrypoint=/bin/sh -i --tty fluent/fluentd:edge -# From the container shell: -$ gem install fluent-plugin-datadog -$ fluentd -c /fluentd/etc/fluent.conf -``` - - - -### Configure Fluentd - -1. 
Visit Datadog and generate an API key for Fluentd by following the [Datadog - documentation](https://docs.datadoghq.com/account_management/api-app-keys). - -1. Copy the API key and use it to add a new `<match>` block to `fluent.conf`: - - ```ini - <match **> - @type datadog - @id awesome_agent - api_key (=presets.tokens.first=) - - host http-intake.logs.us5.datadoghq.com - - # Optional parameters - dd_source teleport - </match> - ``` - -1. Edit your configuration as follows: - - - Add your API key to the `api_key` field. - - Adjust the `host` value to match your Datadog site. See the Datadog [Log - Collection and - Integrations](https://docs.datadoghq.com/logs/log_collection/?tab=host) - guide to determine the correct value. - - `dd_source` is an optional field you can use to filter these logs in the - Datadog UI. - - Adjust `ca_path`, `cert_path` and `private_key_path` to point to the - credential files generated earlier. If you're testing locally, the Docker - command above already mounted the current working directory to `keys/` in - the container. - -1. Restart Fluentd after saving the changes to `fluent.conf`. - -## Step 6/6. Run the Teleport Event Handler plugin - -In this section, you will modify the Event Handler configuration you generated -and run the Event Handler to test your configuration. - -### Configure the Event Handler - -In this section, you will configure the Teleport Event Handler for your -environment. - -(!docs/pages/includes/plugins/finish-event-handler-config.mdx!) - -Next, modify the configuration file as follows: - -(!docs/pages/includes/plugins/config-toml-teleport.mdx!) - -(!docs/pages/includes/plugins/machine-id-exporter-config.mdx!) - -### Start the Teleport Event Handler - -(!docs/pages/includes/start-event-handler.mdx!)
- -The Logs view in Datadog should now report your Teleport cluster events: - -![Datadog Logs](../../../../img/management/datadog-logs.png) - -## Troubleshooting connection issues - -If the Teleport Event Handler is displaying error logs while connecting to your -Teleport Cluster, ensure that: - -- The certificate the Teleport Event Handler is using to connect to your - Teleport cluster is not past its expiration date. This is the value of the - `--ttl` flag in the `tctl auth sign` command, which is 12 hours by default. -- Ensure that in your Teleport Event Handler configuration file - (`teleport-event-handler.toml`), you have provided the correct host *and* port - for the Teleport Proxy Service. - -## Next steps - -- Read more about -[impersonation](../../access-controls/guides/impersonation.mdx) -here. -- While this guide uses the `tctl auth sign` command to issue credentials for the -Teleport Event Handler, production clusters should use Machine ID for safer, -more reliable renewals. Read [our guide](../../../enroll-resources/machine-id/getting-started.mdx) -to getting started with Machine ID. -- To see all of the options you can set in the values file for the -`teleport-plugin-event-handler` Helm chart, consult our [reference -guide](../../../reference/helm-reference/teleport-plugin-event-handler.mdx). -- Review the Fluentd output plugin for Datadog [README -file](https://github.com/DataDog/fluent-plugin-datadog/blob/master/README.md) -to learn how to customize the log format entering Datadog. 
diff --git a/docs/pages/admin-guides/management/guides/aws-iam-identity-center/guide.mdx b/docs/pages/admin-guides/management/guides/aws-iam-identity-center/guide.mdx deleted file mode 100644 index 7d3774b3aa6b0..0000000000000 --- a/docs/pages/admin-guides/management/guides/aws-iam-identity-center/guide.mdx +++ /dev/null @@ -1,323 +0,0 @@ ---- -title: Getting Started with AWS IAM Identity Center integration -description: Explains how to set up and use Teleport AWS IAM Identity Center integration. ---- - -Teleport's integration with [AWS IAM Identity Center](https://aws.amazon.com/iam/identity-center/) -allows you to organize and manage your users' short- and long-term access to AWS -accounts and their permissions. - -## Prerequisites - -- Teleport Enterprise or Teleport Enterprise Cloud cluster version 17.0 or higher. -- Administrative access to AWS IAM Identity Center. - -Note that Identity Center integration requires using Teleport as an external -identity source. - -As such, we recommend ensuring that all Identity Center users have access to -your Teleport cluster before turning the integration on to avoid access -interruption. If your Identity Center already uses an external identity source, -you can configure the corresponding [SSO connector](../../../access-controls/sso/sso.mdx) -in Teleport or, if you're using Okta, enable the -[Okta integration](../../../access-controls/okta/okta.mdx). - -## Step 1/6. Configure AWS integration - -Teleport provides a guided web UI-based configuration flow for the Identity -Center integration. To get started, navigate to the "Add new integration" page -in your Teleport cluster control panel and select "AWS Identity Center".
- -![Pick Identity Center integration](../../../../../img/identity-center/ic-pick-integration.png) - -During this step, you will set up Teleport as an OIDC identity provider for -your AWS account and create an AWS role with the permissions required for the -integration to function, such as fetching Identity Center accounts, users, -groups, permission set assignments, and so on. - -
-Full list of IAM permissions required by Identity Center integration -``` -// ListAccounts -organizations:ListAccounts -organizations:ListAccountsForParent - -// ListGroupsAndMembers -identitystore:ListUsers -identitystore:ListGroups -identitystore:ListGroupMemberships - -// ListPermissionSetsAndAssignments -sso:DescribeInstance -sso:DescribePermissionSet -sso:ListPermissionSets -sso:ListAccountAssignmentsForPrincipal -sso:ListPermissionSetsProvisionedToAccount - -// CreateAndDeleteAccountAssignment -sso:CreateAccountAssignment -sso:DescribeAccountAssignmentCreationStatus -sso:DeleteAccountAssignment -sso:DescribeAccountAssignmentDeletionStatus -iam:AttachRolePolicy -iam:CreateRole -iam:GetRole -iam:ListAttachedRolePolicies -iam:ListRolePolicies - -// AllowAccountAssignmentOnOwner -iam:GetSAMLProvider - -// ListProvisionedRoles -iam:ListRoles -``` -
- -![Configure AWS integration](../../../../../img/identity-center/ic-step1.1.png) - -Enter the required information, such as the Identity Center region, ARN, and integration -name, and execute the generated command in the Cloud Shell. - -After the script has run, fill out the ARN for the role created by the script. - -![Run script for AWS integration](../../../../../img/identity-center/ic-step1.2.png) - -## Step 2/6. Preview AWS resources - -In the next step, you are presented with the list of AWS accounts, groups, and -permission sets that Teleport was able to find in your Identity Center. - -![Preview AWS resources](../../../../../img/identity-center/ic-step2.png) - -Pick the default owners that should be assigned to the Access Lists in Teleport. -These resources will be imported into Teleport once the plugin is installed. - -## Step 3/6. Configure identity source - - -After this step, Teleport will become your Identity Center's identity provider. - -To avoid access interruptions, we recommend making sure that all existing -Identity Center users have access to your Teleport cluster by, for example, using -the same [IdP](../../../access-controls/sso/sso.mdx) as your current Identity Center -external identity source. - - -Follow the instructions to change your Identity Center's identity source to -Teleport. - -![Configure identity source](../../../../../img/identity-center/ic-step3.png) - -## Step 4/6. Enable SCIM - -The next step is to enable the SCIM endpoint in your Identity Center to -allow Teleport to push user and group changes. - -![Enable SCIM](../../../../../img/identity-center/ic-step4.png) - -Make sure to test the SCIM connection after enabling it. - -## Step 5/6. Verify the integration - -Navigate to the Access Lists view page in your cluster and make sure that all -your Identity Center groups have been imported. - - -It may take a few minutes for the initial sync to complete.
- - -![Access Lists view](../../../../../img/identity-center/ic-lists.png) - -Imported Access Lists should show the same members as their corresponding -Identity Center groups. - -## Step 6/6. Connect to AWS - -Once the integration is up and running, you will see an application named -`aws-identity-center` among your resources: - -![Connect to AWS SSO portal](../../../../../img/identity-center/ic-app.png) - -Clicking the "Log In" button for this app takes you to your Identity Center -SSO start page, which you can use to pick a role and connect to your AWS account -as usual. - -## Usage scenarios - -Let's take a look at some common usage scenarios enabled by the Identity Center -integration. - -### Managing Account Assignments with Access Lists - -Teleport creates an Access List for each group imported from the Identity Center -instance, with group members becoming Access List members. Default Access List -owners are configured during the initial integration enrollment flow and can be -adjusted as necessary after the initial sync completes. - -Each imported Access List is automatically assigned a role (or a set of roles) -that grants all members of that list access to all of the Account Assignments -assigned to the corresponding AWS Identity Center group during the integration -setup. - -These Teleport-generated roles each represent a single Account Assignment, and -are named using the `<PermissionSet>-on-<AccountName>-<AccountID>` convention -(e.g. `AdministratorAccess-on-MyAccount-012345678`). - - -These roles are considered system roles, and any edits or updates to them will -be automatically reverted. - - -To give a user the permissions granted by an existing Identity Center synced -Access List, an owner can add that user as a member, which causes Teleport to add -the user to its corresponding Identity Center group. - - -Currently all existing Teleport users are synced to Identity Center. Label-based -user filtering will be supported in a later release.
- - -Removing a member from an Identity Center synced Access List removes them -from the corresponding Identity Center group, effectively revoking privileges. - -In addition to membership changes, Teleport propagates changes in Access List -grants back to Identity Center as well. For example, imagine an Access List with -the roles `AdminAccess-on-my-account` and `ReadOnlyAccess-on-my-account`. If the -Access List owner removes the `AdminAccess-on-my-account` role from the Access List, -that change will be propagated back to AWS and the corresponding Identity Center -group will have its assignments updated to remove the `AdminAccess` Permission Set. - -### Just-in-time Access with Resource Access Requests - -Teleport represents the imported AWS accounts as apps in the Teleport Resource -View, with the permission sets available for each account bundled up inside the -app. AWS accounts are treated the same as any other Teleport-managed resource, -so users can see what AWS permission sets they are allowed to request just by -checking "Show requestable resources" in the resource view. - -Users can then choose the specific Account Assignments they want access to by -selecting from the Permission Sets available to each AWS Account. Users -can mix Permission Sets from multiple AWS Accounts, and even include other -Teleport-managed resources if necessary. - -![Selecting Identity Center resources](../../../../../img/identity-center/ic-select-ps.png) - -Once the user has selected their desired Account Assignments, the Access Request -submission and review process is the same as for any other Teleport-managed -resource. Assuming the Access Request is approved, Teleport will create the -appropriate AWS Account Assignments in Identity Center to grant the requested -access. These AWS Account Assignments will automatically be deleted when the -Access Request expires.
- -The user can access their temporary AWS Accounts and Roles from within Teleport -by assuming the Access Request roles. - -![Assumed Role granting Identity Center Account Assignments](../../../../../img/identity-center/ic-role-assumed.png) - - -The AWS Account Assignments will exist for the lifetime of the Access Request, -regardless of when the user assumes the associated role(s). - - -### Just-in-time access with role Access Requests - -The Identity Center integration allows Teleport users to submit Access Requests for short-term privilege elevation. - -When an Access Request for a role granting Identity Center privileges is -approved, Teleport creates an individual assignment for that user in the -Identity Center. The assignment is deleted when the Access Request expires. - -### Long-term access with Access Requests - -If a user requests access to Account Assignments that can also be granted via an -existing Access List, Teleport will offer the reviewer the option of *promoting* -the Access Request to long-term access. - -![Promoting Access Request](../../../../../img/identity-center/ic-promote.png) - -When an Access Request is promoted to long-term access, the requesting user is -added to the targeted Access List. This membership change is propagated to the -corresponding Identity Center group, and the user is then granted their requested -Account Assignments via group membership. 
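Pending requests can also be listed and resolved from the command line. As a hedged sketch (the request ID shown is a placeholder), a reviewer could run:

```console
# List pending Access Requests.
$ tctl requests ls

# Approve a specific request by its ID.
$ tctl requests approve 28a2fab2-example-request-id
```

Promotion to an Access List, as shown above, is offered during review in the Web UI.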
### Creating custom Identity Center roles

You can craft your own roles that bind Identity Center accounts to permission
sets, for example:

```yaml
kind: role
version: v7
metadata:
  name: aws-dev-access
spec:
  allow:
    account_assignments:
      - account: "" # AWS identity center account ID
        # permission set ARN of AdministratorAccess
        permission_set: arn:aws:sso:::permissionSet/ssoins-1234/ps-5678
      - account: ""
        # permission set ARN of ReadOnlyAccess
        permission_set: arn:aws:sso:::permissionSet/ssoins-1234/ps-8765
```

These roles can be assigned to users and Access Lists, or requested by users
using the Access Request flow described above.

## FAQ

### Which Access Lists are synced to Identity Center?

Teleport syncs to Identity Center all Access Lists whose role grants include
AWS account and permission set rules.

### How does it work with nested Access Lists?

Identity Center does not support nested groups. As such, Teleport recursively
flattens any [nested Access Lists](../../../access-controls/access-lists/nested-access-lists.mdx)
into a single Identity Center group containing all members reachable from the
top-level Access List.

The flattened Identity Center group will be kept updated as members are added to
or removed from nested Access Lists.

### How do I uninstall the integration?

Before fully removing the integration, make sure to change the
identity source in your Identity Center instance.

Deleting the integration automatically removes all Teleport resources it used to manage its state, including:

- Teleport roles created for AWS Identity Center account assignments.
- Access Lists imported from AWS Identity Center groups.

However, user-created resources remain intact, including:

- Access Lists you created.
- Roles you manually configured for account assignments.
If an Access List grants permissions to a now-deleted integration role, or if a user
has a deleted role assigned directly, you must manually remove those references.

If you decide not to switch to Teleport, you can delete the Identity Center
integration by navigating to your cluster's Integrations list and deleting the
integration named `AWS IAM Identity Center`. The AWS OIDC integration that was
created during the first enrollment step will be automatically removed as well
once the plugin is uninstalled.

To clean up AWS resources created for the integration, remove the Identity
Provider and its role from your AWS IAM console as well.

## Next steps

- Take a deeper dive into fundamental Teleport concepts used in the Identity Center
  integration, such as [RBAC](../../../access-controls/guides/guides.mdx),
  [JIT Access Requests](../../../access-controls/access-requests/access-requests.mdx),
  [Access Lists](../../../access-controls/access-lists/access-lists.mdx),
  and the [Teleport SAML IdP](../../../access-controls/idps/idps.mdx).
- Learn how to enable the [Okta integration](../../../access-controls/okta/okta.mdx)
  to sync apps, users, and groups from Okta in conjunction with the Identity Center
  integration.

diff --git a/docs/pages/admin-guides/management/guides/aws-iam-identity-center/migrating-identity-center-from-okta-to-teleport.mdx b/docs/pages/admin-guides/management/guides/aws-iam-identity-center/migrating-identity-center-from-okta-to-teleport.mdx
deleted file mode 100644
index a5c4c3e69c96d..0000000000000
--- a/docs/pages/admin-guides/management/guides/aws-iam-identity-center/migrating-identity-center-from-okta-to-teleport.mdx
+++ /dev/null
@@ -1,334 +0,0 @@
---
title: Migrating AWS IAM Identity Center from Okta to Teleport
description: Explains how to migrate an Identity Center instance from Okta control to Teleport control.
tocDepth: 3
---

By default, the Teleport AWS IAM Identity Center integration:

- Controls user and group provisioning into Identity Center,
- Controls AWS account assignment provisioning into Identity Center, and
- Authenticates user access to the AWS console via the Teleport SAML IdP.

This is fine for most deployments, but can cause difficulties when integrating Teleport with
an existing Okta-managed AWS IAM Identity Center configuration.

This guide introduces a hybrid setup and a migration path to integrate Teleport into
such an existing Okta-managed AWS IAM Identity Center configuration, where Okta
continues working as the SSO provider for the AWS console while Teleport manages user,
group, and AWS account assignment provisioning.

## Migration Path

### Starting Point

This is our starting configuration: Okta as the sole Identity Source for Identity Center.

![Migration Starting Point](../../../../../img/identity-center/ic-migrate-start.png)

### Midway

This is the point where you can start migrating control of users and groups from
Okta to Teleport. At this stage:

- Okta provides SSO login for Identity Center.
- Okta manages a subset of the Identity Center group membership (selected via Push Groups).
- Okta and Teleport both control user provisioning via SCIM.
- Teleport manages a second subset of Identity Center groups (selected via group filters during plugin installation).
- Teleport controls Identity Center group account assignments for the Identity Center groups under its control.
- Teleport controls direct Identity Center user account assignments for the Identity Center users under its control.

![Migration Mid Point](../../../../../img/identity-center/ic-migrate-mid.png)

### Ending point

Once Teleport is in control of all the users and groups you want to provision
into Identity Center, the Okta Provisioning and Push Groups functions can be
retired, leaving Okta only providing SSO login.
![Migration End Point](../../../../../img/identity-center/ic-migrate-end.png)

## Prerequisites

- A working Teleport cluster.
- An AWS role configured as per the [AWS IAM Identity Center guide](./guide.mdx#step-16-configure-aws-integration)
  for the integration to use.
- AWS credentials configured for the Teleport auth process to pick up and use
  (e.g. as environment variables, system profiles, etc.).
- An Okta API token, with the following privileges:
  - View users and their details.
  - View groups and their details.
  - View applications and their details.
- The AWS IAM Identity Center ARN, AWS region, SCIM base address, and SCIM bearer token.

## Step 1/7. Install the Okta SAML connector

Install the Okta SAML connector into Teleport as per the Teleport
[Okta as an SSO provider](../../../access-controls/sso/okta.mdx) guide.

For the integration to function properly, both AWS IAM Identity Center and Teleport must view the
same user set, which can be achieved by using the same Okta SAML application for both Identity
Center and the Teleport SAML connector.

If this is not possible, the most flexible approach we've found is having an
Okta Group for those users that should have Identity Center access, and then assigning
both the Okta Identity Center App and the Okta SAML App used for the Teleport SAML
connector to that group. This ensures that the same user set is visible across
both applications.

You will need the Teleport SAML Connector Name and Okta SAML App ID in the next
step.

## Step 2/7. Install the Teleport Okta integration

We will be using a very limited subset of the Teleport Okta integration in this
deployment, disabling all features except periodic user synchronization.
This -configuration is not currently supported by the normal installation UI, so we -will have to use `tctl` to install it: - -```console -$ tctl plugins install okta \ - --org ${OKTA_ORG_URL} \ - --saml-connector ${TELEPORT_SAML_CONNECTOR_NAME} \ - --app-id ${OKTA_SAML_APP_ID} \ - --api-token ${OKTA_API_TOKEN} \ - --no-scim \ - --no-accesslist-sync \ - --no-appgroup-sync -``` - -This will install the Okta integration and start the user sync service with a -configuration that: -- Imports Okta users assigned to the Okta App `${OKTA_SAML_APP_ID}`, and keeps - them synced with the upstream Okta organization. -- Does *not* expose a SCIM service. -- Does *not* attempt to sync or manage any other resources from Okta. - -You can monitor the state of the Okta integration in the Teleport Integrations UI. - -## Step 3/7. Wait for user sync - -To make sure everything is working, wait until the first Okta to Teleport user -sync has occurred. You can verify this by either - - Refreshing the user page and finding your Okta users, or - - Checking the Okta integration status page. - -Once your Okta users are imported into Teleport, you can progress to the next -step. - -## Step 4/7. Install the Teleport AWS IAM Identity Center integration - -Again, we need to install the plugin using `tctl`. - -```console -$ tctl plugins install awsic \ - --instance-arn ${IDENTITY_CENTER_INSTANCE_ARN} \ - --instance-region ${IDENTITY_CENTER_INSTANCE_REGION} \ - --use-system-credentials \ - --assume-role-arn ${AWS_IAM_ROLE_ARN} \ - --scim-url ${IDENTITY_CENTER_SCIM_BASE_URL} \ - --scim-token ${IDENTITY_CENTER_SCIM_BEARER_TOKEN} \ - --access-list-default-owner ${TELEPORT_ACCESS_LIST_DEFAULT_OWNER} \ - --user-origin okta \ - --account-name ${ACCOUNT_NAME_ALLOW_FILTER} \ - --group-name ${GROUP_NAME_ALLOW_FILTER} -``` - -This will install the Teleport AWS IAM Identity Center integration with a -Teleport configuration that: -- Controls the AWS IAM Identity Center instance indicated by `--instance-arn`. 
- Uses the system AWS credentials to authenticate with AWS (from `--use-system-credentials`)
  and assumes the IAM role indicated by `--assume-role-arn`.
- Manages account assignments for all AWS accounts that match the `${ACCOUNT_NAME_ALLOW_FILTER}`.
- Provisions all users imported from Okta into AWS IAM Identity Center (from the `--user-origin okta` flag).
- Imports only the groups matching `${GROUP_NAME_ALLOW_FILTER}` into Teleport as Access
  Lists, with `${TELEPORT_ACCESS_LIST_DEFAULT_OWNER}` as the owner.

Note that the `tctl` installer currently only supports installations using
system-level AWS credentials with `--use-system-credentials`.

Using system-level credentials is also the recommended way to provide AWS credentials when
configuring the integration in a self-hosted Teleport Enterprise deployment.

You can change the AWS account, group, and user filters later by following the
instructions in [Step 6](#step-67-expanding-teleport-integration-scope).

During the installation process, Teleport will import all of the Identity Center
groups that match its allow list (or all of them, if no allow list is defined)
and create matching Access Lists, preserving the group membership and account
assignments.

Individual user account assignments will ***not*** be preserved during import. You
will need to ensure these are preserved manually, or converted to group assignments,
prior to installation.

### Group import control

The Group import allow list is controlled by the `--group-name` option. You can
specify multiple filters, and a Group will be imported if it matches _any_ of the
supplied filters. Filters can be literal names, globbed names, or Go-compatible
regular expressions. To treat a filter as a regular expression,
enclose it in a leading `^` and trailing `$`.
Example filters:

- `administrators`: The literal "administrators" group
- `site-*`: Any group with the prefix `site-`
- `^(?:[^a]|a[^w]|aw[^s]|aws[^\-]).*$`: Any group that does ***not*** have the prefix `aws-`

Ensure that there is no overlap between the groups imported to Teleport and the
groups you want Okta to maintain control over.

Avoid creating an Access List with the same name as a Push Group managed by Okta.
Teleport will attempt to adopt the group, and may change the group membership.
Deleting the Teleport Access List and forcing a re-push from Okta should restore
access.

### User provisioning control

Your Teleport cluster may have a mix of local Teleport users (e.g. a local Admin
user) and users imported from Okta. By default, Teleport will try to provision
_all_ Teleport users into Identity Center. You can control which users are
provisioned by the Identity Center integration with the `--user-origin` and
`--user-filter` arguments. In the example above, the `--user-origin okta` flag
restricts Teleport to provisioning only users that are synced from Okta,
excluding all local Teleport users.

### AWS account import control

By default, Teleport will take control of account assignments for all AWS Accounts
managed by Identity Center. You can create an allow-list of AWS Accounts to
import with the `--account-name` and `--account-id` install options.

The `--account-name` filters work like the `--group-name` filters above. The
`--account-id` filters specify a literal AWS Account ID.

Teleport will not create or delete account assignments on AWS accounts outside
of its allow-list.

## Step 5/7. Migrate AWS account assignments

We're now at the midway point, and ready to migrate account assignments from the
Okta-managed groups into new Teleport-managed Access Lists.
To migrate groups,
create a new Access List in Teleport (taking care not to use the same name as
the existing Okta-managed Group) and create the appropriate memberships and
account assignments.

Account assignments can be created on an Access List by assigning it the Account
Assignment roles created by the Identity Center integration, by assigning it a
custom Teleport role that specifies a particular combination of access, or by a
combination of both.

For more information, see the [Identity Center integration guide](./guide.mdx).

## Step 6/7. Expanding Teleport integration scope

Once you are satisfied with the way Teleport is handling the initial set of
imported AWS resources, you can expand the scope of the Identity Center
integration by editing the plugin import filters.

### Edit plugin spec with `tctl edit`

This currently involves manually editing the Identity Center integration's plugin
resource using `tctl`, which is a dangerous operation. Please ensure you take a
backup of the plugin resource so you can roll back if necessary.

A guided editing workflow is currently under development.

You can expand the scope of the Teleport Identity Center integration by editing
the integration's plugin resource with `tctl`.

```console
$ tctl edit plugins/aws-identity-center
```

The plugin resource is a YAML document that looks something like this:

```yaml
kind: plugin
version: v1
metadata:
  labels:
    teleport.dev/hosted-plugin: "true"
  name: aws-identity-center
spec:
  Settings:
    aws_ic:
      # Account import filters. An absent or empty list of filters implies "manage all AWS accounts"
      aws_accounts_filters:
        - Include:
            id: "637423191929"
        - Include:
            id: "730335414865"
        - Include:
            id: "058264527036"
        - Include:
            name_regex: ^Staging-.*$

      # User provisioning filters.
      # An absent or empty list of filters implies "provision all users to AWS"
      user_sync_filters:
        - labels:
            teleport.dev/origin: okta

      # Group import filters. See notes below.
      group_sync_filters:
        - Include:
            name_regex: '^Group #00\d+$'

      access_list_default_owners:
        - admin
      arn: arn:aws:sso:::instance/ssoins-722326ecc902a06a
      credentials_source: 2
      integration_name: aws-identity-center
      provisioning_spec:
        base_url: https://scim.us-east-1.amazonaws.com/f3v9c6bc2ca-b104-4571-b669-f2eba522efe8/scim/v2
        region: us-east-1
```

You can add or remove filters in the various filter sets. Once you save and quit
the editor, `tctl` will replace the existing resource with your updated version.
This will automatically restart the Identity Center integration with the new
filters.

As of Teleport 17.3, the `group_sync_filters` are only applied during the initial
import at installation time, and there is no user-accessible method of re-triggering
the import. The only way to apply new group filters is to uninstall and re-install
the plugin.

A later release of Teleport will re-trigger a group import when it detects a
change in the group import filters.

### Re-install plugin

A second alternative is to [uninstall](#deleting-the-aws-iam-identity-center-integration)
and re-install the integration with the new parameters. Be warned that this may
result in user access disruptions while the integration is offline.

## Step 7/7. Retire Okta group provisioning

Once you are satisfied that an AWS IAM Identity Center group has been migrated to
Teleport control, you can remove the corresponding Push Group from the Okta
Identity Center integration.

## Deleting the AWS IAM Identity Center integration

Deleting the integration automatically removes all Teleport resources it used to manage its state.
The impact of plugin deletion and general considerations are explained in the
[AWS IAM Identity Center guide](./guide.mdx#how-do-i-uninstall-the-integration).

Delete the AWS IAM Identity Center plugin with `tctl`:

```console
$ tctl plugins delete aws-identity-center
```

diff --git a/docs/pages/admin-guides/management/guides/datadog.mdx b/docs/pages/admin-guides/management/guides/datadog.mdx
deleted file mode 100644
index b6ad1e0a0d74e..0000000000000
--- a/docs/pages/admin-guides/management/guides/datadog.mdx
+++ /dev/null
@@ -1,25 +0,0 @@
---
title: Datadog Integration
description: How to export Teleport metrics and logs to Datadog
h1: Export Teleport metrics and logs to Datadog
---

Datadog has an [officially maintained integration](https://docs.datadoghq.com/integrations/teleport/) for Teleport.

The integration exports both metrics and logs to Datadog. The metrics are Datadog-native,
not external metrics, which means they incur no extra usage charges to your Datadog account.
The integration also includes a dashboard for monitoring your Teleport cluster.

There is no installation required to use the integration, only changes to the Datadog
Agent configuration. Please see the Datadog documentation linked above for how to activate
the Teleport integration in Datadog.

The Teleport Datadog integration includes log processing rules for
text-formatted logs printed by an Auth Service instance. If you self-host
Teleport, you can configure the Datadog Agent to tail these logs on an Auth
Service host.

On Teleport Enterprise (Cloud), you must use the Teleport Event Handler to
subscribe to audit logs from the Auth Service and forward them to Datadog using
Fluentd. Read the [Teleport documentation](../export-audit-events/datadog.mdx)
for how to configure audit log exporting using Datadog and the Event Handler.
diff --git a/docs/pages/admin-guides/management/guides/guides.mdx b/docs/pages/admin-guides/management/guides/guides.mdx deleted file mode 100644 index 16d5acb1ad82c..0000000000000 --- a/docs/pages/admin-guides/management/guides/guides.mdx +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Integrations -description: Miscellaneous guides for integrating Teleport with third-party tools. -layout: tocless-doc ---- - -You can integrate Teleport with third-party tools in order to complete various -tasks in your cluster. These guides describe Teleport integrations that are not -documented elsewhere: - - diff --git a/docs/pages/admin-guides/management/operations/operations.mdx b/docs/pages/admin-guides/management/operations/operations.mdx deleted file mode 100644 index 63e025209bcba..0000000000000 --- a/docs/pages/admin-guides/management/operations/operations.mdx +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: Operations -description: Teleport Operations - Scaling and High-Availability. -layout: tocless-doc ---- - -The guides in this section show you how to carry out common administration tasks -on an already running Teleport cluster. - -For guides on the fundamentals of setting up your cluster, you should consult -the [Cluster Administration Guides](../admin/admin.mdx) section. - -- [Scaling](scaling.mdx): How to configure Teleport for large-scale deployments. -- [Backup and Restore](backup-restore.mdx): Backing up and restoring the cluster. -- [CA Rotation](ca-rotation.mdx): Rotating Teleport certificate authorities. -- [TLS Routing Migration](tls-routing.mdx): Migrating your Teleport cluster to single-port TLS routing mode. -- [Proxy Peering Migration](proxy-peering.mdx): Migrating your Teleport cluster to Proxy Peering mode. -- [Database CA Migrations](db-ca-migrations.mdx): Completing Teleport's Database CA migrations. 
diff --git a/docs/pages/admin-guides/management/operations/proxy-peering.mdx b/docs/pages/admin-guides/management/operations/proxy-peering.mdx deleted file mode 100644 index 33fbdd4f56655..0000000000000 --- a/docs/pages/admin-guides/management/operations/proxy-peering.mdx +++ /dev/null @@ -1,47 +0,0 @@ ---- -title: Proxy Peering Migration -description: How to upgrade an existing Teleport cluster to Proxy Peering mode. ---- - -This guide shows you how to migrate your Teleport -cluster to use Proxy Peering, which enables you to scale -your Proxy Service instances horizontally by reducing the -number of connections created between Teleport Proxy instances -and Teleport services like the Database Service and Application Service. - -## Prerequisites - -An existing self-hosted Teleport Enterprise cluster. See the documentation on [self-hosting Teleport](../../deploy-a-cluster/deploy-a-cluster.mdx) to get started. - -Teleport Proxy Service instances must be able to reach each other over the network on port -`3021` by default. Ensure there are no firewall policies that would block communication -between instances. - -## Step 1/3. Enable Proxy Peering - -Update your cluster's Auth Service configuration to set the tunnel strategy type -to `proxy_peering`. - -```yaml -auth_service: - tunnel_strategy: - type: proxy_peering - agent_connection_count: 1 -``` - -This setting will indicate to agents that they are only required to connect to 1 -Teleport Proxy instance as specified by the `agent_connection_count` field. - -For high availability, an `agent_connection_count` greater than 1 can be configured. -This ensures an agent is still reachable if one of the Proxy Service instances it is connected to is not available. - -## Step 2/3. Restart the Auth Service - -Restart all Teleport Auth Service instances running in the cluster to apply the -new Auth Service configuration. - -## Step 3/3. 
Restart the Proxy Services - -Restart all Teleport Proxy Service instances running in the cluster in order to -start the services required for Proxy Peering. - diff --git a/docs/pages/admin-guides/management/security/revoking-access.mdx b/docs/pages/admin-guides/management/security/revoking-access.mdx deleted file mode 100644 index 6a80cc471ef2c..0000000000000 --- a/docs/pages/admin-guides/management/security/revoking-access.mdx +++ /dev/null @@ -1,32 +0,0 @@ ---- -title: Revoking Access -description: Learn how to revoke access before Teleport certificates expire ---- - -Teleport's approach to using short-lived certificates for all infrastructure -access means that it can generate large numbers of certificates every day. For -this reason, Teleport does not support traditional certificate revocation. - -There are two options available for revoking access: CA rotations and Teleport locks. - -## CA rotations - -To generate a new certificate authority and invalidate user certificates issued -by the current CA, run `tctl auth rotate --type=user`. This process will require -that the newly generated CA certificate is uploaded to your entire fleet of -OpenSSH servers. This can be a disruptive change, especially in environments -that lack automation, so proceed with caution. - -See the [CA rotations guide](../operations/ca-rotation.mdx) for more details on -how to execute the procedure. - -## Locks - -Teleport locks allow you to permanently or temporarily revoke access to a number -of different "targets". Supported lock targets include: specific users, roles, -servers, desktops, or MFA devices. After you create a lock, all existing -sessions where the lock applies are terminated and new sessions are rejected -while the lock remains in force. - -For more information, read our -[Session and Identity Locking Guide](../../access-controls/guides/locking.mdx). 
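As a quick illustration of a temporary lock (the username, message, and TTL below are hypothetical), you can use `tctl`:

```console
# Lock the user "alice" for ten hours; her existing sessions are
# terminated and new ones are rejected while the lock is in force.
$ tctl lock --user=alice --message="Account under review" --ttl=10h
```

Omitting `--ttl` makes the lock permanent until it is explicitly removed.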
diff --git a/docs/pages/admin-guides/teleport-policy/integrations/aws-sync.mdx b/docs/pages/admin-guides/teleport-policy/integrations/aws-sync.mdx
deleted file mode 100644
index c55ac3a6028e1..0000000000000
--- a/docs/pages/admin-guides/teleport-policy/integrations/aws-sync.mdx
+++ /dev/null
@@ -1,194 +0,0 @@
---
title: Discover AWS Access Patterns with Teleport Identity Security
description: Describes how to import and visualize AWS accounts access patterns using Identity Security and Access Graph.
---

Identity Security streamlines and centralizes access management across your entire
infrastructure. You can view access relationships in seconds, with a unified,
up-to-date view of the relationships and policies between all users, groups, and
computing resources.

Identity Security with Access Graph offers insights into access patterns within your AWS account. By scanning IAM
permissions, users, groups, resources, and identities, it provides a visual representation and aids in
enhancing the permission model within your AWS environment. This functionality enables you to address queries such as:

- What resources are accessible to AWS users and roles?
- Which resources can be reached via identities associated with EC2 instances?
- What AWS resources can Teleport users access when connecting to EC2 nodes?

Using the Access Graph to analyze IAM permissions within an AWS account requires
setting up the Access Graph service, a Discovery Service, and an integration with
your AWS account.

(!docs/pages/includes/policy/access-graph.mdx!)

## How it works

Access Graph discovers AWS access patterns by synchronizing various AWS resources,
including IAM Policies, Groups, Users, User Groups, EC2 instances, EKS clusters, and RDS databases.
These resources are then visualized using the graph representation detailed in the
[Identity Security usage page](../policy-how-to-use.mdx).
The importing process involves two primary steps:

### Polling Cloud APIs

The Teleport Discovery Service continuously scans the configured AWS accounts.
At intervals of 15 minutes, it retrieves the following resources from your AWS account:

- Users
- Groups
- User Groups
- IAM Roles
- IAM Policies
- EC2 Instances
- EKS Clusters
- RDS Databases
- S3 Buckets

Once all the necessary resources are fetched, the Teleport Discovery Service pushes them to the
Access Graph, ensuring that it remains updated with the latest information from your AWS environment.

### Importing resources

Identity Security's Access Graph feature analyzes the IAM policies, identities,
and resources retrieved from your AWS account and builds a graphical
representation of them.

## Prerequisites

- A running Teleport Enterprise cluster v14.3.9/v15.2.0 or later.
- Identity Security enabled for your account.
- For self-hosted clusters:
  - Ensure that an up-to-date `license.pem` is used in the Auth Service configuration.
  - A running Access Graph node v1.17.0 or later.
    Check the [Identity Security page](../teleport-policy.mdx) for details on
    how to set up Access Graph.
  - The node running the Access Graph service must be reachable from the Teleport Auth Service.

## Step 1/2. Configure Discovery Service (Self-hosted only)

If you have a cloud-hosted Teleport Enterprise cluster, you can disregard
this step, as cloud-hosted Teleport Enterprise already operates a properly configured
Discovery Service within your cluster.

To activate the Teleport Discovery Service, add a top-level `discovery_service` section to the `teleport.yaml`
config used by the Auth Service. This service monitors dynamic `discovery_config` resources that are set up
with a matching `discovery_group`.
```yaml
discovery_service:
  enabled: true
  discovery_group:
```

Note that if you already operate a Discovery Service within your cluster,
it's possible to reuse it as long as the following requirements are met:

- In Step 2, you match the `discovery_group` with the existing Discovery Service's
  `discovery_group`.
- The Access Graph service is reachable from the machine where the Discovery Service runs.

## Step 2/2. Set up Access Graph AWS Sync

To initiate the setup wizard for configuring AWS Sync, access the Teleport UI,
click the Policy sidebar button, and then click Integrations.

Click the "Setup new integration" button, and then select "AWS". You'll be prompted
to create a new Teleport AWS integration if you haven't configured one already.
Alternatively, you can opt for a previously established integration.

Upon selecting or creating the integration, you'll be instructed to execute a
bash script within your AWS Cloud Shell to configure the necessary permissions.
-List of IAM Policies required for AWS Sync - -The policy is designed with a set of read-only actions, enabling Teleport -to access and retrieve information from resources within your AWS Account. - -The IAM policy includes the following directives: - - -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Action": [ - "ec2:DescribeInstances", - "ec2:DescribeImages", - "ec2:DescribeTags", - "ec2:DescribeSnapshots", - "ec2:DescribeKeyPairs", - - "eks:ListClusters", - "eks:DescribeCluster", - "eks:ListAccessEntries", - "eks:ListAccessPolicies", - "eks:ListAssociatedAccessPolicies", - "eks:DescribeAccessEntry", - - "rds:DescribeDBInstances", - "rds:DescribeDBClusters", - "rds:ListTagsForResource", - "rds:DescribeDBProxies", - - "dynamodb:ListTables", - "dynamodb:DescribeTable", - - "redshift:DescribeClusters", - "redshift:Describe*", - - "s3:ListAllMyBuckets", - "s3:GetBucketPolicy", - "s3:ListBucket", - "s3:GetBucketLocation", - "s3:GetBucketTagging", - "s3:GetBucketPolicyStatus", - "s3:GetBucketAcl", - - "iam:ListUsers", - "iam:GetUser", - "iam:ListRoles", - "iam:ListGroups", - "iam:ListPolicies", - "iam:ListGroupsForUser", - "iam:ListInstanceProfiles", - "iam:ListUserPolicies", - "iam:GetUserPolicy", - "iam:ListAttachedUserPolicies", - "iam:ListGroupPolicies", - "iam:GetGroupPolicy", - "iam:ListAttachedGroupPolicies", - "iam:GetPolicy", - "iam:GetPolicyVersion", - "iam:ListRolePolicies", - "iam:ListAttachedRolePolicies", - "iam:GetRolePolicy", - "iam:ListSAMLProviders", - "iam:GetSAMLProvider", - "iam:ListOpenIDConnectProviders", - "iam:GetOpenIDConnectProvider" - ], - "Resource": "*" - } - ] -} - -``` -
- - -Once the IAM Policy has been successfully linked to the IAM role -utilized by Teleport, you'll be prompted to specify the regions from -which Teleport should import resources. This selection solely pertains -to regional resources and does not impact global resources such as S3 -Buckets, IAM Policies, or IAM Users. - -If you're operating a self-hosted cluster, you'll additionally need to -provide input for the `discovery_group` configured during Step 1. - diff --git a/docs/pages/admin-guides/teleport-policy/integrations/entra-id.mdx b/docs/pages/admin-guides/teleport-policy/integrations/entra-id.mdx deleted file mode 100644 index 432764999dda4..0000000000000 --- a/docs/pages/admin-guides/teleport-policy/integrations/entra-id.mdx +++ /dev/null @@ -1,533 +0,0 @@ ---- -title: Analyze Entra ID policies with Teleport Identity Security -description: Describes how to import and visualize Entra ID policies using Identity Security and Graph Explorer. ---- - -The Microsoft Entra ID integration in Teleport Identity Governance synchronizes your Entra ID directory into your Teleport cluster, -and offers insights into relationships in your Entra ID directory. -Additionally, when Entra ID is used as an SSO identity provider, Identity Security visualizes -SSO grants across your services. - - -SSO grant analysis is currently only supported in situations where Entra ID acts as the identity provider, -and AWS accounts are set up as relying parties using AWS IAM role federation. - -Support for additional relying parties will be added in the future. - - -## How it works - -Teleport continuously scans the connected Entra ID directory. -At intervals of 5 minutes, it retrieves the following resources from your Entra ID directory: - -- Users -- Groups -- Users' memberships in groups -- Enterprise applications - -Entra ID users and groups are imported into Teleport as users and Access Lists respectively. 
-Once all the necessary resources are fetched, Teleport pushes them to the -Graph Explorer, ensuring that it remains updated with the latest information. -These resources are then visualized using the graph representation detailed in the -[Identity Security usage page](../policy-how-to-use.mdx). - -## Prerequisites - -- A running Teleport Enterprise cluster v15.4.2/v16.0.0 or later. -- Teleport Identity Governance and Identity Security enabled for your account. -- For self-hosted clusters: - - Ensure that an up-to-date `license.pem` is used in the Auth Service configuration. - - A running Graph Explorer node v1.21.3 or later. -Check the [Identity Security page](../teleport-policy.mdx) for details on -how to set up Graph Explorer. - - The node running the Graph Explorer service must be reachable from the Teleport Auth Service. -- Your user must have privileged administrator permissions in the Azure account -- For OIDC setup, the Teleport cluster must be publicly accessible from the internet. -- For air gapped clusters, `tctl` must be v16.4.7 or later. - -(!docs/pages/includes/policy/access-graph.mdx!) - -## Step 1/3. Choose a setup method - -To begin onboarding, select your preferred setup method. Teleport offers various methods based on your cluster -configuration and user requirements. - -### Automatic setup with Teleport as an OIDC Provider for Entra ID - - -This method is recommended and is required if you are a Teleport Enterprise (Cloud) customer. - - -This method is suitable for Teleport clusters that are publicly accessible and lack Azure credentials on Auth -Service nodes or pods. - -In this setup, Teleport is configured as an OpenID Connect (OIDC) identity provider, establishing a trusted -connection with an Entra ID application created during setup. This trust allows Teleport to authenticate using -the Entra ID application, accessing permissions tied to it without requiring additional credentials or managed -identities. 
-
-**Requirements:**
-- Direct bidirectional connectivity between Teleport and Azure is necessary for Azure to validate the OIDC
-tokens issued by Teleport.
-
-### Automatic setup with system credentials for Entra ID authentication
-
-Designed for air-gapped Teleport clusters that are not publicly accessible, this setup accommodates environments
-where Azure cannot validate OIDC tokens issued by Teleport.
-
-Instead, Teleport relies on Azure credentials available on the VMs where the Teleport Auth Service is running.
-These credentials must have the following Entra ID permissions:
-
-- `Application.Read.All`
-- `Directory.Read.All`
-- `Policy.Read.All`
-
-**Requirements:**
-- Unidirectional connectivity from Teleport to Azure infrastructure.
-
-### Manual setup
-
-This method describes how to manually configure the Entra ID integration without relying on automated scripts
-to set up the Entra ID application.
-
-It achieves the same result as the
-[**automatic setup with Teleport as an OIDC provider for Entra ID**](./entra-id.mdx#automatic-setup-with-teleport-as-an-oidc-provider-for-entra-id)
-and the [**automatic setup with system credentials**](./entra-id.mdx#automatic-setup-with-system-credentials-for-entra-id-authentication),
-but with one limitation: it cannot enable the [Identity Security](../teleport-policy.mdx) integration.
-
-## Step 2/3. Configure the Entra ID integration
-
-
-
-
-
-### Start integration onboarding
-
-To start the onboarding process, access the Teleport Web UI, select "Add New" in the left-hand pane, choose "Integration" and then "Microsoft Entra ID".
-
-![Integration selection screen](../../../../img/access-graph/entra-id/integrations-page.png)
-
-In the onboarding wizard, choose a Teleport user that will be assigned as the default owner of Access Lists that are created for your Entra groups, and click "Next".
- -![First step of the Entra ID integration onboarding](../../../../img/access-graph/entra-id/integration-wizard-step-1.png) - -### Grant permissions in Azure and finish onboarding - -The wizard will now provide you with a script that will set up the necessary permissions in Azure. - -![Second step of the Entra ID integration onboarding](../../../../img/access-graph/entra-id/integration-wizard-step-2.png) - -Open Azure Cloud Shell by navigating to shell.azure.com, -or by clicking the Cloud Shell icon in the Azure Portal. - -![Location of the Cloud Shell button in the Azure Portal](../../../../img/access-graph/entra-id/azure-cloud-shell-button.png) - -Make sure to use the Bash version of Cloud Shell. -Once a Cloud Shell instance opens, paste the generated command. -The command sets up your Teleport cluster as an enterprise application in the Entra ID directory, -and grants Teleport read-only permissions to read your directory's data (such as users and groups in the directory). - -Once the script is done setting up the necessary permissions, -it prints out the data required to finish the integration onboarding. - -![Output of the Entra ID onboarding script](../../../../img/access-graph/entra-id/onboarding-script-output.png) - -Back in the Teleport Web UI, fill out the required data and click "Finish". - -![Second step of the Entra ID integration onboarding with required fields filled in](../../../../img/access-graph/entra-id/integration-wizard-step-2-filled.png) - - - - - -### Assign permissions to the Azure identity of your Auth Service VMs - -To set up the Azure Identity with the necessary permissions: - -- `Application.Read.All` -- `Directory.Read.All` -- `Policy.Read.All` - -Go to your Azure Dashboard, find the identities linked to your Teleport Auth Service VMs, -and copy the `Object (principal) ID`. Paste this value into ``. 
- -After obtaining the Principal ID, open the [Azure Cloud Shell](https://portal.azure.com/#cloudshell/) -in PowerShell mode and run the following script to assign the required permissions to ``. - -
-Assign required permissions to Azure Identity - -```powershell - -# Connect to Microsoft Graph with the required scopes for directory and app role assignment permissions. -Connect-MgGraph -Scopes 'Directory.ReadWrite.All', 'AppRoleAssignment.ReadWrite.All' - -# Retrieve the managed identity's service principal object using its unique principal ID (UUID). -$managedIdentity = Get-MgServicePrincipal -ServicePrincipalId '' - -# Set the Microsoft Graph enterprise application object. -# This is a service principal object representing Microsoft Graph in Azure AD with a specific app ID. -$graphSPN = Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'" - -# Define the permission scopes that we want to assign to the managed identity. -# These are Microsoft Graph API permissions required by the managed identity. -$permissions = @( - "Application.Read.All" # Permission to read applications in the directory - "Directory.Read.All" # Permission to read directory data - "Policy.Read.All" # Permission to read policies within the directory -) - -# Filter and find the app roles in the Microsoft Graph service principal that match the defined permissions. -# Only include roles where "AllowedMemberTypes" includes "Application" (suitable for managed identities). -$appRoles = $graphSPN.AppRoles | - Where-Object Value -in $permissions | - Where-Object AllowedMemberTypes -contains "Application" - -# Iterate over each app role to assign it to the managed identity. -foreach ($appRole in $appRoles) { - # Define the parameters for the role assignment, including the managed identity's principal ID, - # the Microsoft Graph service principal's resource ID, and the specific app role ID. 
- $bodyParam = @{ - PrincipalId = $managedIdentity.Id # The ID of the managed identity (service principal) - ResourceId = $graphSPN.Id # The ID of the Microsoft Graph service principal - AppRoleId = $appRole.Id # The ID of the app role being assigned - } - - # Create a new app role assignment for the managed identity, granting it the specified permissions. - New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $managedIdentity.Id -BodyParameter $bodyParam -} - -``` - -
-
-Your identity principal `` now has the necessary permissions to list Applications,
-Directories, and Policies.
-
-
-### Set up Entra ID and Teleport resources
-
-The Teleport `tctl` command provides an interactive guide to set up and configure the Entra ID integration for air-gapped clusters.
-
-To use it, ensure you have `tctl` version v16.4.7 or later and select a default list of Access List owners.
-These specified Teleport users will become the owners of Access Lists imported by the Entra ID integration.
-`` must be an existing Teleport user.
-To specify multiple Access List owners, repeat the flag for each user, e.g., `--default-owner=owner1 --default-owner=owner2`.
-
-You'll also need to provide the Teleport Auth Service address as ``.
-For clusters running in multiplex mode, this address is the same as your proxy address.
-
-If your Teleport license does not include [Identity Security](../teleport-policy.mdx), or you want to disable the Graph Explorer integration, add the `--no-access-graph` flag.
-
-```code
-$ tctl plugins install entraid \
-  --default-owner= \
-  --default-owner=someOtherOwner@teleport.sh \
-  --use-system-credentials \
-  --auth-server 
-```
-
-Follow the interactive instructions provided by the `tctl plugins install entraid` guide to complete the installation and configuration of the Entra ID plugin.
- - - -### Assign permissions to the Azure identity of your Auth Service VMs - -This step configures the Azure Identity on your Auth Service machine with the required Entra ID permissions. - - -Follow this step only if you want to use system-available credentials to authenticate Teleport with Entra ID. -If you intend to use Teleport as an OIDC provider for Entra ID, you can skip this step. - - - -- `Application.Read.All` -- `Directory.Read.All` -- `Policy.Read.All` - -Go to your Azure Dashboard, find the identities linked to your Teleport Auth Service VMs, -and copy the `Object (principal) ID`. Paste this value into ``. - -After obtaining the Principal ID, open the [Azure Cloud Shell](https://portal.azure.com/#cloudshell/) -in PowerShell mode and run the following script to assign the required permissions to ``. - -
-Assign required permissions to Azure Identity - -```powershell - -# Connect to Microsoft Graph with the required scopes for directory and app role assignment permissions. -Connect-MgGraph -Scopes 'Directory.ReadWrite.All', 'AppRoleAssignment.ReadWrite.All' - -# Retrieve the managed identity's service principal object using its unique principal ID (UUID). -$managedIdentity = Get-MgServicePrincipal -ServicePrincipalId '' - -# Set the Microsoft Graph enterprise application object. -# This is a service principal object representing Microsoft Graph in Azure AD with a specific app ID. -$graphSPN = Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'" - -# Define the permission scopes that we want to assign to the managed identity. -# These are Microsoft Graph API permissions required by the managed identity. -$permissions = @( - "Application.Read.All" # Permission to read applications in the directory - "Directory.Read.All" # Permission to read directory data - "Policy.Read.All" # Permission to read policies within the directory -) - -# Filter and find the app roles in the Microsoft Graph service principal that match the defined permissions. -# Only include roles where "AllowedMemberTypes" includes "Application" (suitable for managed identities). -$appRoles = $graphSPN.AppRoles | - Where-Object Value -in $permissions | - Where-Object AllowedMemberTypes -contains "Application" - -# Iterate over each app role to assign it to the managed identity. -foreach ($appRole in $appRoles) { - # Define the parameters for the role assignment, including the managed identity's principal ID, - # the Microsoft Graph service principal's resource ID, and the specific app role ID. 
- $bodyParam = @{ - PrincipalId = $managedIdentity.Id # The ID of the managed identity (service principal) - ResourceId = $graphSPN.Id # The ID of the Microsoft Graph service principal - AppRoleId = $appRole.Id # The ID of the app role being assigned - } - - # Create a new app role assignment for the managed identity, granting it the specified permissions. - New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $managedIdentity.Id -BodyParameter $bodyParam -} - -``` - -
- -Your identity principal `` now has the necessary permissions to list Applications, -Directories, and Policies. - - -### Set up an Entra ID application - -In this step, you will manually configure an Entra ID Enterprise Application to be used by the Teleport Auth Connector. - -We provide a PowerShell script that creates the specified application, assigns the token signing request, and sets up the necessary SAML parameters. - -To proceed, you need to define the following parameters: - -- ``: The Entra ID Application name, typically set to `Teleport your.cluster.address`. -- ``: Your Teleport Proxy address. -- ``: The Teleport Auth Connector name, usually set to `entra-id`. - -Once these parameters are defined, open the [Azure Cloud Shell](https://portal.azure.com/#cloudshell/) in -PowerShell mode, or use the session created in the previous step. - -
-Create the Entra ID application - -```powershell -# Connect to Microsoft Graph with required scopes for application creation and app role assignment permissions. -Connect-MgGraph -Scopes "Application.ReadWrite.All", "AppRoleAssignment.ReadWrite.All" - -# Import the Microsoft Graph module for managing applications. -Import-Module Microsoft.Graph.Applications - -# Define application parameters, including the display name. -$params = @{ - displayName = '' # Set the display name of the new application. -} - -# Set the SAML application template ID. -# This ID corresponds to a non-gallery SAML application template. -$applicationTemplateId = "8adf8e6e-67b2-4cf2-a259-e3dc5476c621" - -# Instantiate the application template to create a new application and its service principal. -$app = Invoke-MgInstantiateApplicationTemplate -ApplicationTemplateId $applicationTemplateId -BodyParameter $params - -# Extract the Application ID, Object ID, and Service Principal ID of the newly created application. -$appId = $app.Application.AppId # The unique identifier for the application (client ID). -$objectId = $app.Application.Id # The unique object ID for the application in Azure AD. -$servicePrincipal = $app.ServicePrincipal.Id # The unique object ID for the service principal. - -# Define parameters for the token signing certificate used for SAML. -$principalTokenSigningCertificateParams = @{ - displayName = "CN=azure-sso" # Common Name (CN) for the SAML token signing certificate. -} - -# Add a token signing certificate to the service principal, which is required for SAML authentication. -$cert = Add-MgServicePrincipalTokenSigningCertificate -ServicePrincipalId $servicePrincipal -BodyParameter $principalTokenSigningCertificateParams - -# Extract the thumbprint of the certificate, which will be used to configure SAML. -$thumbprint = $cert.Thumbprint - -# Set additional SAML-specific properties for the service principal. 
-$updateServicePrincipalParams = @{ - preferredSingleSignOnMode = "saml" # Set SAML as the single sign-on mode. - preferredTokenSigningKeyThumbprint = $thumbprint # Use the thumbprint of the added certificate for token signing. - appRoleAssignmentRequired = $false # Allow app access without explicit app role assignments. -} - -# Update the service principal with the SAML configuration. -Update-MgServicePrincipal -ServicePrincipalId $servicePrincipal -BodyParameter $updateServicePrincipalParams - -# Define the URL for the proxy (Teleport Auth Service address). -# This URL will be used as the Redirect URI and Identifier URI. -$proxyURL = 'https://'.TrimEnd("/").TrimEnd(":443") # Remove the default 443 port for standard formatting. -$acsURL = $proxyURL+'/v1/webapi/saml/acs/' - -# Define web properties, including the redirect URI for SAML authentication. -$web = @{ - redirectUris = @($acsURL) # Set the application's redirect URI. -} - -# Update the application with the web properties and identifier URI. -# This enables SAML-based authentication and includes security group claims. -Update-MgApplication -ApplicationId $objectId -Web $web -IdentifierUris @($acsURL) -# Define optional claims for the application to include group membership claims. -$optionalClaims = [Microsoft.Graph.PowerShell.Models.MicrosoftGraphOptionalClaims]::DeserializeFromDictionary(@{ - AccessToken = @( - @{ Name = 'groups' } - ) - IdToken = @( - @{ Name = 'groups' } - ) - Saml2Token = @( - @{ Name = 'groups' } - ) -}) - -Update-MgApplication -ApplicationId $objectId -GroupMembershipClaims "SecurityGroup" -OptionalClaims $optionalClaims - - -# Retrieve the tenant ID for display purposes. -$tenant = Get-AzTenant - -# Output the Application ID, Tenant ID, and additional information for reference. 
-Write-Output "-------------------------------------------------------" "Copy and paste the following details:" "Application ID (Client ID): $appId" "Tenant ID: $tenant" "-------------------------------------------------------" - -``` - -
- -If your cluster is publicly accessible from the internet and you prefer or need to use OIDC rather -than Auth Service system credentials, you can configure Teleport as an OIDC provider for the Entra -ID application. If you have already assigned the necessary permissions to your Auth Service's Azure -Identity, you may skip the following section. - -To configure Federated credentials for your application, run the following script in the same -Azure Cloud Shell terminal used previously. - -
-Create Federated Credentials for Entra ID Application" - -```powershell - -# Define the subject for the federated identity credential. This is a constant defined in Teleport. -$subject = "teleport-azure" - -# Define the accepted audiences for the credential. It's a constant value. -$audiences = @("api://AzureADTokenExchange") - -# Set the issuer to the Teleport cluster proxy URL. -$issuer = $proxyURL - -# Define a unique name for the federated identity credential. This name is used for identification within the application. -$name = "teleport-oidc" - -# Create a new federated identity credential for the application in Microsoft Graph. -$credential = New-MgApplicationFederatedIdentityCredential -ApplicationId $objectId -Subject $subject -Audiences $audiences -Issuer $issuer -Name $name - - -# Configure the required permissions for the application. - -# Retrieve the managed identity's service principal object using its unique principal ID (UUID). -$managedIdentity = Get-MgServicePrincipal -ServicePrincipalId $servicePrincipal - - -# Set the Microsoft Graph enterprise application object. -# This is a service principal object representing Microsoft Graph in Azure AD with a specific app ID. -$graphSPN = Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'" - -# Define the permission scopes that we want to assign to the managed identity. -# These are Microsoft Graph API permissions required by the managed identity. -$permissions = @( - "Application.Read.All" # Permission to read applications in the directory - "Directory.Read.All" # Permission to read directory data - "Policy.Read.All" # Permission to read policies within the directory -) - -# Filter and find the app roles in the Microsoft Graph service principal that match the defined permissions. -# Only include roles where "AllowedMemberTypes" includes "Application" (suitable for managed identities). 
-$appRoles = $graphSPN.AppRoles | - Where-Object Value -in $permissions | - Where-Object AllowedMemberTypes -contains "Application" - -# Iterate over each app role to assign it to the managed identity. -foreach ($appRole in $appRoles) { - # Define the parameters for the role assignment, including the managed identity's principal ID, - # the Microsoft Graph service principal's resource ID, and the specific app role ID. - $bodyParam = @{ - PrincipalId = $managedIdentity.Id # The ID of the managed identity (service principal) - ResourceId = $graphSPN.Id # The ID of the Microsoft Graph service principal - AppRoleId = $appRole.Id # The ID of the app role being assigned - } - - # Create a new app role assignment for the managed identity, granting it the specified permissions. - New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $managedIdentity.Id -BodyParameter $bodyParam -} - -``` - -
- -### Set up Teleport resources - -The Teleport `tctl` command provides an interactive guide to set up and configure Entra ID integration for air-gapped clusters. - -To use it, ensure you have `tctl` version v16.4.7 or later and select a default list of Access List owners. -These specified Teleport users will become the owners of Access Lists imported by the Entra ID integration. -`` must be an existing Teleport user. -If you prefer multiple Access List owners, repeat the flag with each user, e.g., `--default-owner=owner1 --default-owner=owner2`. - -You'll also need to provide the Teleport Auth Service address as ``. -For clusters running in multiplex mode, this address will be the same as your proxy address. - -If you chose to use Teleport as the OIDC provider for Entra ID in the previous step, remove the `--use-system-credentials` -flag from the command below. - - -Currently, when using manual mode, it is not possible to operate without the `--no-access-graph` flag. - - -```code - -$ tctl plugins install entraid \ - --default-owner= \ - --default-owner=someOtherOwner@teleport.sh \ - --auth-connector-name="" \ - --use-system-credentials \ - --no-access-graph \ - --manual-setup \ - --auth-server - -``` - -Follow the detailed instructions provided by the `tctl plugins install entraid` guide to install and configure the Entra ID plugin. -
-
- -## Step 3/3. Analyze Entra ID directory in Teleport Graph Explorer - -Shortly after the integration onboarding is finished, -your Entra ID directory will be imported into your Teleport cluster and Graph Explorer. - -You can find Entra ID users and groups in the Graph Explorer UI. If you have Entra ID SSO set up for your AWS accounts, -and the AWS accounts have been connected to Teleport, -Graph Explorer will also show access to AWS resources granted to Entra ID identities. - -In the following example, Bob is assigned to group `AWS-Engineers` in Entra ID. -This allows him to use SSO to assume the AWS IAM role `Engineers`, -which in turn allows Bob to access two S3 buckets. - -![Example of an Entra ID user's access paths](../../../../img/access-graph/entra-id/entra-sso-path.png) diff --git a/docs/pages/admin-guides/teleport-policy/integrations/gitlab.mdx b/docs/pages/admin-guides/teleport-policy/integrations/gitlab.mdx deleted file mode 100644 index 8e995f87a526e..0000000000000 --- a/docs/pages/admin-guides/teleport-policy/integrations/gitlab.mdx +++ /dev/null @@ -1,146 +0,0 @@ ---- -title: Discover GitLab Access Patterns with Teleport Identity Security -description: Describes how to synchronize GitLab access patterns using Identity Security and Access Graph. ---- - -Gain insights into access patterns within your GitLab account using Identity Security with Access Graph. By scanning all -permissions, users, groups, and projects, it provides a visual representation to help enhance the permission model within -your GitLab environment. This functionality enables you to answer queries such as: - -- What projects are accessible to users? -- Which users have write permissions to projects? - -(!docs/pages/includes/policy/access-graph.mdx!) - -## How it works - -Access Graph synchronizes various GitLab resources, including users, projects and groups. 
-These resources are then visualized using the graph representation detailed in the
-[Access Graph page](../teleport-policy.mdx).
-
-The importing process involves two primary steps:
-
-### Polling GitLab APIs
-
-The Teleport cluster continuously scans the configured GitLab accounts and retrieves the following resources:
-
-- Users
-- Groups
-- Projects
-- Group memberships
-- Project memberships
-
-Once all the necessary resources are fetched, Teleport pushes them to the
-Access Graph, ensuring that it remains updated with the latest information from your GitLab instance.
-
-### Importing resources
-
-Identity Security's Access Graph feature analyzes the imported resources and their relationships, and builds a
-graph representation of them.
-
-## Prerequisites
-
-- A running Teleport Enterprise cluster v14.3.20/v15.3.1/v16.0.0 or later.
-- Identity Security enabled for your account.
-- A GitLab instance running GitLab v9.0 or later.
-- For self-hosted clusters:
-  - Ensure that an up-to-date `license.pem` is used in the Auth Service configuration.
-  - A running Access Graph node v1.21.4 or later.
-Check the [Identity Security page](../teleport-policy.mdx) for details on
-how to set up Access Graph.
-  - The node running the Access Graph service must be reachable from the Teleport Auth Service.
-
-## Step 1/3. Create GitLab token
-
-To set up the GitLab integration, you'll need to create a GitLab token with the following scope:
-
-- `read_api`
-
-Navigate to your GitLab instance, access the User Settings, and select the Access Tokens option.
-Create a new token with the `read_api` scope and copy the generated token. For more information, refer to the
-[GitLab documentation](https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html).
-
-The importer will use this token to fetch the necessary resources from your GitLab instance.
-
-
-
-  The GitLab importer will only fetch resources that the token has access to.
Ensure that the user associated with - the token has the necessary permissions to access/view the resources you want to import. - - If you're using a GitLab.com account, the importer will only fetch the users who are part of the organization or - have access to the projects that the token can access. - - If you're using a self-hosted GitLab instance, the importer will fetch all resources that the token has access to, - including all users who are part of the instance. - - - -The token will be used in the next step to configure the GitLab Sync integration. - -## Step 2/3. Set up Access Graph GitLab Sync - -To initiate the setup wizard for configuring Gitlab Sync, access the Teleport UI, -click the Policy sidebar button, and then click Integrations. - -Click the "Setup new integration" button, and then select "Gitlab". You'll be prompted -to create a new Teleport Gitlab integration if you haven't configured one already. -Alternatively, you can opt for a previously established integration. - -You'll be prompted to provide the GitLab token created in Step 1 and the GitLab instance domain. -Once the token is successfully validated, you'll be able to see the resources imported in Access Graph. - -## Step 3/3. View GitLab resources in Access Graph - -After the GitLab resources are imported, you can view them in the Access Graph page. -The graph representation will show the relationships between users, groups, and projects within your GitLab instance. - -Users can have permissions to access a Group or Project. When a user has access to a Group, they inherit permissions -to all projects and sub-projects within that group. - -You can view the permissions granted to users, groups, and projects by clicking on the respective nodes in the graph. - -For example, to view the permissions granted to a user, click on the user node and select `View Access` from the context menu. -This will display the permissions granted to the user and the resources they have access to. 
-
-You can also run queries to fetch specific information from the Access Graph, such as:
-
-### Fetch All Projects Accessible to a User
-
-The following query fetches all projects accessible to a given user:
-
-```sql
-SELECT * FROM access_path WHERE "identity" = '' AND source='Gitlab'
-```
-
-Additionally, you can filter projects by the user's access level, `owner`, `maintainer`, `developer`, `guest`, or `reporter`,
-by running the following query:
-
-```sql
-SELECT * FROM access_path WHERE "identity" = '' AND source='Gitlab' AND action='owner'
-```
-
-Change the `action` parameter to `maintainer`, `developer`, `guest`, or `reporter` to fetch the respective projects.
-
-### Fetch All Users with Write Access to a Project
-
-The following query fetches all users with read/write access to a given project:
-
-```sql
-SELECT * FROM access_path WHERE "resource" = '' AND source='Gitlab'
-```
-
-## Troubleshooting
-
-After setting up the GitLab integration, you can monitor the import process status on the Access Graph's Integrations page.
-If the import fails, an error message will help identify the issue.
-
-You can also check whether the import process is currently running or has completed successfully by viewing the status.
-
-If you encounter any `Unauthorized` errors, ensure that the GitLab token has the necessary permissions to access the resources
-and that the token is valid. If the token has expired, you'll need to create a new token and update the integration settings.
-
-If you encounter any other issues, please ensure that the Teleport cluster can reach the GitLab instance and that the
-GitLab APIs are accessible.
-
-If you're still facing issues, please inspect the error log on the Access Graph's Integrations page for more details.
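Filters from the queries above can also be combined. Assuming the same `access_path` schema, the following sketch narrows the project query to users holding a specific access level (here `maintainer`; fill in the project name between the quotes):

```sql
SELECT * FROM access_path WHERE "resource" = '' AND source='Gitlab' AND action='maintainer'
```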
- diff --git a/docs/pages/admin-guides/teleport-policy/integrations/integrations.mdx b/docs/pages/admin-guides/teleport-policy/integrations/integrations.mdx deleted file mode 100644 index cb3c9e50e0e58..0000000000000 --- a/docs/pages/admin-guides/teleport-policy/integrations/integrations.mdx +++ /dev/null @@ -1,57 +0,0 @@ ---- -title: Teleport Identity Security Integrations -description: Integrations in Access Graph with Identity Security. ---- - -Teleport can integrate with identity providers (IdPs) like Okta and AWS OIDC -which can then be used with Access Graph, providing a comprehensive, -interactive view of how users, roles, and resources are interconnected, -enabling administrators to better understand and control access policies. - -Read the following guides for information on using Teleport Identity Security Access Graph to -visualize role-based access controls from third-party services: - - - -## Viewing available integrations - -The Integrations page shows integrations that can be enabled or are already -enabled in Access Graph. - -![Integrations](../../../../img/access-graph/integrations.png) - -Resources imported into Teleport through Teleport-enabled integrations are -automatically imported into Identity Security without any additional -configuration. - -To access the interface, your user must have a role that allows `list` and `read` verbs on the `access_graph` resource, e.g.: - -```yaml -kind: role -version: v7 -metadata: - name: my-role -spec: - allow: - rules: - - resources: - - access_graph - verbs: - - list - - read -``` - -The preset `editor` role has the required permissions by default. - -## Set up a new integration - -On the left sidebar, click **Policy**. Click the connection icon labeled Integrations: -![Connection view](../../../../img/access-graph/connection_view.png) -Select the "Set up new integration" button. 
- -Teleport can also import and grant access to resources from Okta organizations, -such as user profiles, groups, and applications. You can view connection data in -Access Graph. Enroll the Teleport [Okta -integration](../../access-controls/okta/okta.mdx) in your -cluster. - diff --git a/docs/pages/admin-guides/teleport-policy/policy-connections.mdx b/docs/pages/admin-guides/teleport-policy/policy-connections.mdx deleted file mode 100644 index 0f321977cbf00..0000000000000 --- a/docs/pages/admin-guides/teleport-policy/policy-connections.mdx +++ /dev/null @@ -1,103 +0,0 @@ ---- -title: Teleport Identity Security Connections -description: Connections in Access Graph with Identity Security. ---- - -Identity Security with Access Graph shows the relationships between users, roles, and resources. -It does this by showing paths between nodes; a path represents a relationship between them. -Paths always connect nodes in the following order: - -## Connecting to resources - -1. Users -1. User Groups -1. Actions -1. Resource Groups -1. Resources - -Graph paths can be divided into two categories: - -1. Allow paths - -![Allow Path](../../../img/access-graph/allow-path.png) - -Allow paths connect identities to resources. They show what an identity can access and what actions it can perform. - -2. Deny paths - -![Deny Path](../../../img/access-graph/deny-path.png) - -Deny paths connect identities to resources. They show what an identity cannot access and what actions it cannot perform. Deny paths take precedence over allow paths. - -## How resources and identities are represented - -Access Graph imports all resources and identities from Teleport and keeps them up to date, so every time you make a change -to your Teleport resources, the Graph reflects those changes. - -### Identities - -Users are created from Teleport Users. -Local users are imported as soon as they are created. -External users (created from authentication connectors for GitHub, SAML, etc.) 
are imported when they log in for the first time. - -### User Groups - -User Groups are created from Teleport roles and Access Requests. Roles create User Groups where the members -are the users that have that role. Access requests create a temporary User Group where the members are the users that -got the access through the accepted Access Request. - -### Actions - -Actions are created from Teleport roles. Actions can be divided into three -categories: - -1. Allow Actions - -Allow Actions are created from Teleport roles. Allow Actions are the things that users can do. -For example, a user can SSH into a node. - -2. Deny Actions - -Deny Actions are created from Teleport roles. Deny Actions are the things that users cannot do. -For example, a user cannot SSH into a node. Deny Actions take precedence over Allow Actions. - -3. Temporary Actions - -Temporary Actions are created when a user is granted temporary access to a resource. -They are automatically deleted when the user's access expires. The temporary actions -can be identified by having `Temporary: true` property. - -#### Resource Groups - -Resource Groups are created from Teleport roles. - -### Database Access Controls - -Teleport supports [object-level permissions](../../enroll-resources/database-access/rbac.mdx#executing-database-object-permission-rules) for select database protocols. - -The database objects-level access information is automatically synchronized to Identity Security, making it possible to see who has particular levels of access to the different parts of the database. - -When you inspect a particular user's access, the Teleport Access Graph will automatically display the database objects that the user can access. - -![Overview of access including individual database objects](../../../img/access-graph/dac/overview.png) - -To see more details about a specific database object, simply select it. 
- -![Details of an individual database object](../../../img/access-graph/dac/db-object-details.png) - -In the graph, database objects are connected by multiple edges: - -1. There is exactly one edge connecting the object to its parent database resource. This edge has a "contains" label. - -![Database object and parent database resource](../../../img/access-graph/dac/db-object-contains-relation.png) - -2. At least one edge shows the permissions associated with the object, such as `INSERT, SELECT, UPDATE`. If multiple roles grant permissions to the same object, additional edges of this type may be present. The permissions are presented as edge labels. - -![Specific object permissions](../../../img/access-graph/dac/db-object-permissions-label.png) - -#### Resources - -Resources are created from Teleport resources like nodes, databases, and Kubernetes clusters. - -## Next steps -- Uncover [privileges, permissions, and construct SQL queries](./policy-how-to-use.mdx) in Identity Security. diff --git a/docs/pages/admin-guides/teleport-policy/policy-how-to-use.mdx b/docs/pages/admin-guides/teleport-policy/policy-how-to-use.mdx deleted file mode 100644 index bbd5711fabda7..0000000000000 --- a/docs/pages/admin-guides/teleport-policy/policy-how-to-use.mdx +++ /dev/null @@ -1,155 +0,0 @@ ---- -title: How to use Teleport Identity Security -description: Using Access Graph with Identity Security. ---- - -Identity Security provides a framework for visualizing and managing access controls across an organization’s infrastructure. - -This interface enables administrators to efficiently identify and address potential security risks, -such as overly broad permissions or conflicting roles, ensuring that access is granted -on principles of least privilege. - -## How to use Identity Security - -The Identity Security with Access Graph feature can help you answer questions like: - -- Who can access a specific resource? 
- -Determine who has access to resources and understand the pathways that grant access: - -![Show Access Path Resource](../../../img/access-graph/show-access-path-resource.gif) - -- What resources can a specific user access? - -At a glance, you can view all the resources a user can access: - -![Show Access Path](../../../img/access-graph/show-access-path.gif) - -## Dashboard: Standing Privileges - -The Identity Security with Access Graph dashboard provides a high-level overview of standing privileges across -your infrastructure. Standing privileges are the number of resources that an identity can access -without creating an Access Request. Details about standing privileges can be found by clicking on the -user or bot. - -![Standing Privileges](../../../img/access-graph/standing-privileges.png) - -## Graph View - -Graph view is the main view that shows the connections between identities and resources. - -By default, an aggregated view of access paths grouped by identity is shown. - -## Search - -To search for a graph node, use the search bar at the top of the page or press `/`. - -This will bring up the global search, where you can search for nodes, pages and access paths. - -![Search](../../../img/access-graph/search.png) - -Clicking on a node will open a drawer with more details about that node. - -![Node Information](../../../img/access-graph/node-information.png) - -Viewing a node's access path will open up the graph explorer view with the selected node. - -![Access Path](../../../img/access-graph/access-path.png) - -## Access path view - -A node's access path shows all the resources that the node can access. - -When viewing an access path, you can filter down the graph to show only the nodes that you are interested in. - -For example, if you are viewing a user's access path and want to see what access is given to that user through a certain role, you can right-click the role and select "Add to search". 
- -![Add to search](../../../img/access-graph/add-to-search.png) - -This will narrow down the graph to show you only the access paths that include that role. - -![Filtered access path](../../../img/access-graph/filtered-access-path.png) - -## Graph nodes - -Access Graph divides your infrastructure into six main components: - -1. Identities - -![Identity Node](../../../img/access-graph/identity-node.png) - -Identities are the actors that can access your infrastructure. They can be employees, contractors, machines, or bots. - -The number on the right-hand side shows **standing privileges**. The standing privileges metric indicates the number of -resources that an identity can access without creating an Access Request. - -2. User Groups - -![Identity Group Node](../../../img/access-graph/identity-group-node.png) - -User Groups are collections of identities. They can be used to organize users -based on their role or team, and they can be nested. - -3. Actions - -![Action Node](../../../img/access-graph/allow-action-node.png) - -Actions are the things that identities can or cannot do. Actions are related to resources. -For example, a user can SSH into a node. - -4. Deny Actions - -![Deny Action Node](../../../img/access-graph/deny-action-node.png) - -Deny Actions are the things that identities cannot do. Deny Actions are related to resources. -For example, a user cannot SSH into a node. - -5. Resource Groups - -![Resource Group Node](../../../img/access-graph/resource-group-node.png) - -Resource Groups are collections of resources. They can be used to organize resources based on their role or team. - -The number on the right-hand side shows the number of resources that a resource group contains. - -6. Resources - -![Resource Node](../../../img/access-graph/resource-node.png) - -Resources are the things that users can or cannot access. They can be servers, databases, or Kubernetes clusters. 
- -## SQL Editor - -Access Graph lets you write SQL-like queries to explore the graph. - -![SQL Editor](../../../img/access-graph/sql-editor.png) - -The query language lets you create different views of the graph. For example: - -Show only allowed paths: - -```sql -SELECT * FROM access_path WHERE kind = 'ALLOWED'; -``` - -Show only denied paths: -```sql -SELECT * FROM access_path WHERE kind = 'DENIED'; -``` - -Show all access paths for a user: -```sql -SELECT * FROM access_path WHERE identity = 'bob'; -``` - -Show all access paths for a user AND a resource: -```sql -SELECT * FROM access_path WHERE identity = 'bob' AND resource = 'postgres'; -``` - -Show all access paths for resources with specific labels: -```sql -SELECT * FROM access_path WHERE resource_labels @> '{"key": "value"}'; -``` - -You can view more SQL examples in the editor. diff --git a/docs/pages/admin-guides/teleport-policy/teleport-policy.mdx b/docs/pages/admin-guides/teleport-policy/teleport-policy.mdx deleted file mode 100644 index c2fade47f7eb3..0000000000000 --- a/docs/pages/admin-guides/teleport-policy/teleport-policy.mdx +++ /dev/null @@ -1,30 +0,0 @@ ---- -title: Teleport Identity Security -description: A reference for Access Graph with Identity Security. ---- - -Teleport Identity Security unifies management of access policies across your infrastructure. -It hardens your access controls and visually shows up-to-date relationships and policies of all users, groups, and computing resources. -It can help you answer questions like: - -- What resources can a specific user access? -- What users can access a specific resource? -- What are the relationships between users, roles, and resources? - -## Getting started with Identity Security - -Identity Security is a separately licensed product and is available to Teleport Enterprise customers. -Access Graph is a major capability of Identity Security that visually shows the relationships of -policies of users, groups, and computing resources. 
- -To verify the availability of the Access Graph, ensure that the Policy icon is present in the navigation sidebar. - - -Note: For managed Enterprise customers, Identity Security is enabled by default. -If you are a self-hosted Teleport customer, you will need to [deploy the Access Graph Service](../deploy-a-cluster/access-graph/access-graph.mdx) and ensure you have an updated -`license.pem` with Identity Security enabled to use it. - - -## Identity Security guides - - diff --git a/docs/pages/changelog.mdx b/docs/pages/changelog.mdx index 3676469d022ac..6da111faf54d0 100644 --- a/docs/pages/changelog.mdx +++ b/docs/pages/changelog.mdx @@ -1,7 +1,13 @@ --- title: Teleport Changelog description: The Changelog provides a comprehensive description of the changes introduced by each Teleport release. +tags: + - reference + - platform-wide --- + {/*lint disable absolute-docs-links*/} +The Teleport changelog lists changes introduced by each version of Teleport. + (!CHANGELOG.md!) diff --git a/docs/pages/reference/cloud-faq.mdx b/docs/pages/cloud-faq.mdx similarity index 75% rename from docs/pages/reference/cloud-faq.mdx rename to docs/pages/cloud-faq.mdx index 16197d3b97c7b..7c293df0b45ed 100644 --- a/docs/pages/reference/cloud-faq.mdx +++ b/docs/pages/cloud-faq.mdx @@ -1,12 +1,14 @@ --- title: Teleport Enterprise Cloud FAQ description: Teleport cloud frequently asked questions. -tocDepth: 3 +tags: + - faq + - platform-wide --- This page provides answers to frequently asked questions about Teleport Enterprise (Cloud). For a list of frequently asked questions about Teleport in -general, see [Frequently Asked Questions](../faq.mdx). +general, see [Frequently Asked Questions](faq.mdx). ## Billing and usage @@ -26,7 +28,7 @@ If you plan to use S3 and DynamoDB as storage backends, we can provide data for ### How long will Teleport Enterprise Cloud retain my data? -See our documentation on [data retention](architecture/teleport-cloud-architecture.mdx). 
+See our documentation on [data retention](reference/architecture/teleport-cloud-architecture.mdx). ### Is an independent security audit available? @@ -48,30 +50,39 @@ for customer data including session recordings and user records. ### Can I get a list of IP addresses that my infrastructure will need to allow connections to? -See the [Public IP Address Allowlist](../ips.mdx) for the list of IP addresses used for inbound connections to Teleport Enterprise Cloud. +See the [Public IP Address Allowlist](ips.mdx) for the list of IP addresses used for inbound connections to Teleport Enterprise Cloud. ### Can I configure a list of IP addresses which are allowed to connect to Teleport Enterprise Cloud? We do not have plans to offer support for inbound connection IP allowlists. -We believe mTLS with strong user and [device identity](../admin-guides/access-controls/device-trust/guide.mdx) provides the best available +We believe mTLS with strong user and [device identity](identity-governance/device-trust/guide.mdx) provides the best available security for client authentication. For customers that require IP-based control for compliance purposes, we do support IP Pinning. For more information see `pin_source_ip` in our [Teleport -Access Controls Reference](access-controls/roles.mdx). +Access Controls Reference](reference/access-controls/roles.mdx). ### Are internal connections encrypted / authenticated? Teleport components communicate with themselves using mTLS, with a separate certificate authority for each tenant. Connections to AWS services, such as DynamoDB and S3, are established using encryption provided by AWS, both at rest and in transit. Each tenant has its own credentials that isolate it to interacting with only its own data. + +### Is Teleport Enterprise (cloud-hosted) PCI compliant? + +Teleport Enterprise (cloud-hosted) is a PCI DSS (Payment Card Industry Data Security Standard) Level 1 compliant service provider. 
This means we have met the highest standards for data security set by the PCI Security Standards Council. Our compliance is verified through an annual assessment conducted by a Qualified Security Assessor (QSA). Our most recent Attestation of Compliance can be viewed at [trust.goteleport.com](https://trust.goteleport.com). + +Teleport Enterprise (cloud-hosted) can be used to meet multiple security requirements for PCI environments and provides customers details on how to secure their Cardholder Data Environment when using Teleport Enterprise (cloud-hosted). This includes a shared responsibilities matrix that outlines Teleport's, the customer's, and shared responsibilities. + +Additionally, Teleport takes reasonable steps to identify and remove sensitive data that could be inadvertently stored or transmitted on systems it controls. Any cardholder data received from a customer is considered an "unintended channel." + ## Connecting resources ### How do I add resources to Teleport Enterprise Cloud? You can connect servers, Kubernetes clusters, databases, desktops, and applications using [reverse -tunnels](../enroll-resources/agents/agents.mdx). +tunnels](enroll-resources/agents/agents.mdx). There is no need to open any ports on your infrastructure for inbound traffic. @@ -85,7 +96,7 @@ If you plan on connecting more than 10,000 nodes or agents, please contact your ### Are dynamic node tokens available? After [connecting](#how-can-i-access-the-tctl-admin-tool) `tctl` to Teleport Enterprise Cloud, users can generate -[dynamic tokens](../enroll-resources/agents/join-token.mdx): +[dynamic tokens](enroll-resources/agents/join-token.mdx): ```code $ tctl nodes add --ttl=5m --roles=node,proxy --token=$(uuid) @@ -95,7 +106,7 @@ $ tctl nodes add --ttl=5m --roles=node,proxy --token=$(uuid) ### How can I access the `tctl` admin tool? -Find the appropriate download at [Installation](../installation.mdx). 
+Find the appropriate download at [Installation](installation/installation.mdx). After downloading the tools, first log in to your cluster using `tsh`, then use `tctl` remotely: @@ -117,15 +128,15 @@ $ tctl tokens add --type=node ### Is there a way to forward Teleport Enterprise Cloud audit events to my company's internal Security Information and Event Management (SIEM)? -Yes. We recommend Teleport's [event handler plugin](../admin-guides/management/export-audit-events/fluentd.mdx) to export Teleport Enterprise Cloud audit events. +Yes. We recommend Teleport's [event handler plugin](zero-trust-access/export-audit-events/fluentd.mdx) to export Teleport Enterprise Cloud audit events. ### Is it possible to store audit logs and session recordings in my own S3 bucket? -Yes, you can configure [External Audit Storage](../admin-guides/management/external-audit-storage.mdx). +Yes, you can configure [External Audit Storage](zero-trust-access/management/external-audit-storage.mdx). -### Is it possible to enable proxy recording mode? +### Is it possible to enable recording proxy mode? -Proxy recording mode is disabled for cloud customers +Recording proxy mode is disabled for Teleport Cloud customers. ### Is there a way to download session recordings for easy playback? @@ -138,12 +149,12 @@ The ability to download recordings for offline viewing will be available in a fu If you have Teleport Agents connected to a Teleport Enterprise Cloud cluster that are more than one major version behind, you might experience compatibility issues unless your Teleport Agents are enrolled in automatic updates. See the [Upgrading -Overview](../upgrading/overview.mdx) for more information. +Overview](upgrading/overview.mdx) for more information. To get version information for your Teleport Agents, see [How can I find version information on connected agents?](#how-can-i-find-version-information-on-connected-agents). 
-If you want more details about cluster updates, see [Cloud Cluster Updates](../upgrading/cloud-cluster-updates.mdx). +If you want more details about cluster updates, see [Cloud Cluster Updates](upgrading/cloud-cluster-updates.mdx). For more information about automatic updates and compatibility issues, contact [Teleport support](https://goteleport.com/support/). @@ -162,7 +173,7 @@ connected agents?](#how-can-i-find-version-information-on-connected-agents) for ### Are updates times configurable for Teleport Enterprise Cloud? -Yes, see [Cloud Cluster Updates](../upgrading/cloud-cluster-updates.mdx#maintenance-windows) for further instruction. +Yes, see [Cloud Cluster Updates](upgrading/cloud-cluster-updates.mdx#maintenance-windows) for further instruction. ### When are agents automatically updated? @@ -174,7 +185,7 @@ update the software when new versions are found. If you enroll in automatic agent updates, Teleport Agents are automatically updated after your Teleport cluster is updated during your scheduled maintenance period. For more information, read the [Automatic Agent -Updates](../upgrading/agent-managed-updates.mdx) guide. +Updates](upgrading/agent-managed-updates/agent-managed-updates.mdx) guide. ### How can I find version information on connected agents? @@ -236,7 +247,7 @@ than through a port allocated to that service. In this case, you can see that TLS routing is enabled, and that the Proxy Service's public web address (`ssh.public_addr`) is `example.teleport.sh:443`. -Read more in our [TLS Routing](architecture/tls-routing.mdx) guide. +Read more in our [TLS Routing](reference/architecture/tls-routing.mdx) guide. ### How does Teleport manage web certificates? Can I upload my own? @@ -247,7 +258,7 @@ certificate or use a custom domain name. ### Where does Teleport Enterprise Cloud run? Teleport Cloud runs on Amazon Web Services (AWS). 
We run proxies in a variety -of regions all over the world, and allow customers to [select the region](architecture/teleport-cloud-architecture.mdx) where the data is stored. +of regions all over the world, and allow customers to [select the region](reference/architecture/teleport-cloud-architecture.mdx) where the data is stored. ### Are you using AWS-managed encryption keys, or CMKs via KMS? @@ -257,7 +268,7 @@ We use AWS-managed keys. Currently there is no option to provide your own key. It's a Teleport-managed S3 bucket with AWS-managed keys by default. -Configuring [External Audit Storage](../admin-guides/management/external-audit-storage.mdx) will allow +Configuring [External Audit Storage](zero-trust-access/management/external-audit-storage.mdx) will allow you to use your own S3 bucket. ### Is IPv6 Supported for connections to Teleport Enterprise Cloud? @@ -312,8 +323,41 @@ When you sign up for a Teleport Enterprise (Cloud) account and set up your first user within the account, the Teleport Web UI displays a set of recovery codes: ![Web UI view showing Teleport recovery -codes](../../img/cloud/recovery-codes.png) +codes](../img/cloud/recovery-codes.png) Save the recovery codes into a safe location, such as your organization's password manager. You can use these codes to reset your account if you lose your password or multi-factor authentication credentials. + +### How do I update my security and business contacts? + +Teleport Cloud users can configure up to 3 security contacts and 3 business contacts for their cluster. It's important to +keep these up to date so that we always know who to notify of important updates and alerts. + +To do this, log in to your Teleport Cloud account with the email address of the user that created the initial Cloud tenant. Open +the user dropdown menu on the top right of the navigation bar, and select "Help & Support," then scroll down until you see the contacts sections. 
+Once you add a contact, they will receive an invitation email which they must accept within 14 days. + +![Web UI view showing security and business contacts options](../img/cloud/security-business-contacts.png) + +If you don't see the contacts lists on the Help & Support page, ensure that your user has the required permissions to see and update contacts. You +can also create a separate role that grants these permissions and assign it to your user and any other desired users: + +```yaml +version: v6 +kind: role +metadata: + description: Edit Contacts + name: contact-editor +spec: + allow: + rules: + - resources: + - contact + verbs: + - list + - create + - read + - update + - delete +``` diff --git a/docs/pages/connect-your-client/connect-your-client.mdx b/docs/pages/connect-your-client/connect-your-client.mdx index 6149a7098d5a1..0a710a2e8784e 100644 --- a/docs/pages/connect-your-client/connect-your-client.mdx +++ b/docs/pages/connect-your-client/connect-your-client.mdx @@ -1,6 +1,37 @@ --- title: Teleport User Guides +sidebar_label: User Guides description: Provides instructions to help users connect to infrastructure resources with Teleport. +tags: + - platform-wide --- - +The guides in this section explain how to access infrastructure resources +protected by Teleport. + +## Teleport clients + +Teleport Connect is a graphical interface that allows you to access protected +resources, often without needing a third-party client tool. For a similar +experience within your browser, you can use the Teleport Web UI. + +Teleport provides a command-line tool, `tsh`, that allows you to access +Teleport-protected resources using terminal-based workflows the same as a +protected resource's native tooling. + +Read the [Introduction to Teleport +Clients](./teleport-clients/teleport-clients.mdx). + +## MCP clients + +You can configure MCP clients like Cursor and Claude Desktop to securely access +Teleport-protected resources. 
+ +Read [MCP Clients](./model-context-protocol/model-context-protocol.mdx). + +## Third-party clients + +You can use Teleport-issued credentials to give your third-party client tools, +including graphical clients and IDEs, access to Teleport-protected resources. +Read [Third-Party Clients](./third-party/third-party.mdx). + diff --git a/docs/pages/connect-your-client/gui-clients.mdx b/docs/pages/connect-your-client/gui-clients.mdx deleted file mode 100644 index efaf9cf27332b..0000000000000 --- a/docs/pages/connect-your-client/gui-clients.mdx +++ /dev/null @@ -1,656 +0,0 @@ ---- -title: Database Access GUI Clients -description: How to configure graphical database clients for Teleport database access. ---- - -This guide describes how to configure popular graphical database clients to -work with Teleport. - -## Setting up your Teleport environment - -### Prerequisites - -- A running Teleport cluster. If you want to get started with Teleport, [sign - up](https://goteleport.com/signup) for a free trial or [set up a demo - environment](../admin-guides/deploy-a-cluster/linux-demo.mdx). - -- The `tsh` client tool. Visit [Installation](../installation.mdx) for instructions on downloading - `tsh`. See the [Using Teleport Connect](./teleport-connect.mdx) guide for a graphical desktop client - that includes `tsh`. - -- To check that you can connect to your Teleport cluster, sign in with `tsh login`. For example: - - ```code - $ tsh login --proxy=teleport.example.com --user=myuser@example.com - ``` - -- The Teleport Database Service configured to access a database. See one of our - [guides](../enroll-resources/database-access/guides/guides.mdx) for how to set up the Teleport - Database Service for your database. - -### Get connection information - -Get information about the host and port where your database is available so you -can configure your GUI client to access the database. 
- -#### Using a local proxy server - -Use the `tsh proxy db` command to start a local TLS proxy that your GUI database -client will connect to. This command requires that you use Teleport -Enterprise (Cloud) or, if self-hosting Teleport, have enabled [TLS -routing](../admin-guides/management/operations/tls-routing.mdx) mode. - -Run a command similar to the following: - -```code -$ tsh proxy db -Started DB proxy on 127.0.0.1:61740 - -Use following credentials to connect to the proxy: - ca_file=/Users/r0mant/.tsh/keys/root.gravitational.io/certs.pem - cert_file=/Users/r0mant/.tsh/keys/root.gravitational.io/alice-db/root/-x509.pem - key_file=/Users/r0mant/.tsh/keys/root.gravitational.io/alice -``` - -Use the displayed local proxy host/port and credentials paths when configuring -your GUI client below. When entering the hostname, use `localhost` rather than -`127.0.0.1`. - -Starting the local database proxy with the `--tunnel` flag will create an -authenticated tunnel that you can use to connect to your database instances. -You won't need to configure any credentials when connecting to this tunnel. - -Here is an example of how to start the proxy: - -```code -# Start the local proxy. -$ tsh proxy db --tunnel -Started authenticated tunnel for the database "" in cluster "" on 127.0.0.1:62652. -``` - -You can optionally specify the database name and the user to use by default -when connecting to the database: - -```code -$ tsh proxy db --db-user=my-database-user --db-name=my-schema --tunnel -``` - -Now, you can connect to the address the proxy command returns. In our example it -is `127.0.0.1:62652`. - -#### Using a remote host and port - -If you are self-hosting Teleport and not using [TLS -routing](../admin-guides/management/operations/tls-routing.mdx), run the -following command to see the database connection information: - -```code -# View configuration for the database you're logged in to. 
-$ tsh db config -# View configuration for the specific database when you're logged into multiple. -$ tsh db config example -``` - -It will display the path to your locally cached certificate and key files: - -``` -Name: example -Host: teleport.example.com -Port: 3080 -User: postgres -Database: postgres -CA: /Users/alice/.tsh/keys/teleport.example.com/certs.pem -Cert: /Users/alice/.tsh/keys/teleport.example.com/alice-db/root/example-x509.pem -Key: /Users/alice/.tsh/keys/teleport.example.com/alice -``` - -The displayed `CA`, `Cert`, and `Key` files are used to connect through pgAdmin -4, MySQL Workbench, and other graphical database clients that support mutual -TLS authentication. - -## MongoDB Compass - -[Compass](https://www.mongodb.com/products/compass) is the official MongoDB -graphical client. - -On the "New Connection" panel, click on "Fill in connection fields individually". - -![MongoDB Compass new connection](../../img/database-access/compass-new-connection@2x.png) - -On the "Hostname" tab, enter the hostname and port of the proxy you will use to -access the database (see -[Get connection information](./gui-clients.mdx#get-connection-information)). -Leave "Authentication" as None. - -![MongoDB Compass hostname](../../img/database-access/compass-hostname@2x.png) - -On the "More Options" tab, set SSL to "Client and Server Validation" and set the -CA as well as the client key and certificate. Note that a CA path must be -provided and be able to validate the certificate presented by your Teleport -Proxy Service's web endpoint. - -![MongoDB Compass more options](../../img/database-access/compass-more-options@2x.png) - -The following fields in the More Options tab must correspond to paths printed by -the `tsh proxy db` command you ran earlier: - -|Field|Path| -|---|---| -|Certificate Authority|`ca_file`| -|Client Certificate|`cert_file`| -|Client Private Key|`key_file`| - -Click on the "Connect" button. 
- -## MySQL DBeaver - -Right-click in the "Database Navigator" menu in the main view and select Create > Connection: - -![DBeaver Add Server](../../img/database-access/dbeaver-add-server.png) - -In the search bar of the "Connect to a database" window that opens up, type "mysql", select the MySQL driver, and click "Next": - -![DBeaver Select Driver](../../img/database-access/dbeaver-select-driver.png) - -In the newly-opened "Connection Settings" tab, use the Host as `localhost` and -Port as the one returned by the proxy command (`62652` in the example above): - -![DBeaver Select Configure Server](../../img/database-access/dbeaver-configure-server.png) - -In that same tab, set the username to match the one that you are connecting to -using Teleport and uncheck the "Save password locally" box: - -![DBeaver Select Configure User](../../img/database-access/dbeaver-configure-user.png) - -Click the "Edit Driver Settings" button on the "Main" tab, check the "No -Authentication" box, and click "Ok" to save: - -![DBeaver Driver Settings](../../img/database-access/dbeaver-driver-settings.png) - -Once you are back in the "Connection Settings" window, click "Ok" to finish and -DBeaver should connect to the remote MySQL server automatically. - -## MySQL Workbench - -[MySQL Workbench](https://www.mysql.com/products/workbench/) is a GUI -application that provides comprehensive MySQL administration and SQL development -tools. 
- -In the MySQL Workbench "Setup New Connection" dialog, fill out "Connection -Name", "Hostname", "Port", and "Username": - -![MySQL Workbench -Parameters](../../img/database-access/workbench-parameters@2x.png) - -In the "SSL" tab, set "Use SSL" to `Require and Verify Identity` and enter the -paths to your CA, certificate, and private key files (see -[Get connection information](./gui-clients.mdx#get-connection-information)): - -![MySQL Workbench SSL](../../img/database-access/workbench-ssl@2x.png) - -The following fields in the SSL tab must correspond to paths printed by the `tsh -proxy db` command you ran earlier: - -|Field|Path| -|---|---| -|SSL Key File|`key_file`| -|SSL CERT File|`cert_file`| -|SSL CA File|`ca_file`| - -Optionally, click "Test Connection" to verify connectivity: - -![MySQL Workbench Test](../../img/database-access/workbench-test@2x.png) - -Save the connection and connect to the database. - -## NoSQL Workbench - -From the NoSQL Workbench launch screen, click **Launch** next to **Amazon DynamoDB**. -From the left-side menu select **Operation builder**, then **+ Add connection**. -Choose the **DynamoDB local** tab, and point to your proxy's endpoint. This is -`localhost:62652` in the example above. (See -[Get connection information](./gui-clients.mdx#get-connection-information) for -more information.) - -![DynamoDB local connection in NoSQL Workbench](../../img/database-access/nosql-workbench-connection.png) - -## SQL Server with Azure Data Studio - -In Azure Data Studio create a connection using your proxy's endpoint. This is -`localhost,62652` in the example above. On a Windows machine, using an address in - the format `127.0.0.1,62652` could be required instead of `localhost`. (See -[Get connection information](./gui-clients.mdx#get-connection-information) for -more information.) 
- -Create a connection with Microsoft SQL Server with these settings: - -|Connection Detail|Value| -|---|---| -|Server|`host`,`port` of proxy endpoint| -|Authentication Type|SQL Login| -|Password|empty| -|Encrypt|`False`| - -Example: -![Azure Data Studio connection options](../../img/database-access/azure-data-studio-connection.png) - -Click **Connect** to connect. - -## PostgreSQL DBeaver - -To connect to your PostgreSQL instance, use the authenticated proxy address. -This is `127.0.0.1:62652` in the example above (see the “Authenticated Proxy” -section on [Get connection information](./gui-clients.mdx#get-connection-information) -for more information). - -Use the "Database native" authentication with an empty password: - -![DBeaver Postgres Configure -Server](../../img/database-access/dbeaver-pg-configure-server.png) - -Clicking on "Test connection" should return a connection success message. Then, -click on "Finish" to save the configuration. - -## PostgreSQL pgAdmin 4 - -[pgAdmin 4](https://www.pgadmin.org/) is a popular graphical client for -PostgreSQL servers. 
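DBeaver and pgAdmin both fill in the same underlying libpq parameters. For PostgreSQL tools that accept a single connection string instead of separate fields, the authenticated-proxy settings can be written in libpq keyword/value form; a minimal sketch, assuming the host, port, and user from the example above (the tunnel handles TLS and credentials, so no password is needed):

```python
def libpq_conninfo(host: str, port: int, user: str, dbname: str) -> str:
    """Build a libpq keyword/value connection string for the authenticated
    local proxy started by `tsh proxy db --tunnel`."""
    return f"host={host} port={port} user={user} dbname={dbname}"

print(libpq_conninfo("127.0.0.1", 62652, "alice", "postgres"))
```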
- -To configure a new connection, right-click on "Servers" in the main browser view -and create a new server: - -![pgAdmin Add Server](../../img/database-access/pgadmin-add-server@2x.png) - -In the "General" tab of the new server dialog, enter the server connection name: - -![pgAdmin General](../../img/database-access/pgadmin-general@2x.png) - -In the "Connection" tab, fill in the hostname, port, user and database name from -the configuration above: - -![pgAdmin Connection](../../img/database-access/pgadmin-connection@2x.png) - -In the "SSL" tab, set "SSL Mode" to `Verify-Full` and fill in paths for client -certificate, key and root certificate from the configuration above: - -![pgAdmin SSL](../../img/database-access/pgadmin-ssl@2x.png) - -The following fields in the SSL tab must correspond to paths printed by the `tsh -proxy db` command you ran earlier: - -|Field|Path| -|---|---| -|Client certificate|`cert_file`| -|Client certificate key|`key_file`| -|Root certificate|`ca_file`| - -Click "Save", and pgAdmin should immediately connect. If pgAdmin prompts you -for a password, leave the password field empty and click OK. - -## Microsoft SQL Server Management Studio - -In Microsoft SQL Server Management Studio, connect to a database engine using -your proxy's endpoint. This is `localhost,62652` in the example above. On a Windows machine, -using the address `127.0.0.1,62652` instead of `localhost` could be required. (See [Get connection information](./gui-clients.mdx#get-connection-information) for -more information.) - -Create a connection with Microsoft SQL Server with these settings: - -|Connection Detail|Value| -|---|---| -|Server type|Database Engine| -|Server name|`host`,`port` of proxy endpoint| -|Authentication|SQL Server Authentication| -|Password|empty| -|Encryption|do not enable| - -Example: -![Microsoft SQL Server Management Studio connection options](../../img/database-access/msft-sql-server-management-studio-connection.png) - -Click Connect to connect. 
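Note the comma in the server name: SQL Server tools such as SSMS and Azure Data Studio separate host and port with `,` rather than the more common `:`. A trivial hypothetical helper makes the convention explicit:

```python
def mssql_server_name(host: str, port: int) -> str:
    """Format a SQL Server endpoint: clients expect "host,port",
    not "host:port"."""
    return f"{host},{port}"

# The proxy endpoint from the example above.
print(mssql_server_name("localhost", 62652))  # localhost,62652
```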
- -## Redis Insight - - - Teleport's Redis Insight integration only supports Redis standalone instances. - - -After opening Redis Insight click `ADD REDIS DATABASE`. - -![Redis Insight Startup Screen](../../img/database-access/guides/redis/redisinsight-startup.png) - -Now start a local proxy to your Redis instance: - -`tsh proxy db --db-user=alice redis-db-name`. - -Click `Add Database Manually`. Use `127.0.0.1` as the `Host`. Use the port printed by -the `tsh` command you ran in [Get connection information](#get-connection-information). - -Provide your Redis username as `Username` and password as `Password`. - -![Redis Insight Configuration](../../img/database-access/guides/redis/redisinsight-add-config.png) - -Next, check the `Use TLS` and `Verify TLS Certificates` boxes. Copy the contents -of the files at the paths returned by the `tsh proxy db` command you ran earlier -and paste them into their corresponding fields. See the table below for the -Redis Insight fields that correspond to each path: - -|Field|Path| -|---|---| -|CA Certificate|`ca_file`| -|Client Certificate|`cert_file`| -|Private Key|`key_file`| - -Click `Add Redis Database`. - -![Redis Insight TLS Configuration](../../img/database-access/guides/redis/redisinsight-tls-config.png) - -Congratulations! You have just connected to your Redis instance. - -![Redis Insight Connected](../../img/database-access/guides/redis/redisinsight-connected.png) - -## Snowflake: DBeaver - -The Snowflake integration works only in the authenticated proxy mode. 
Start a local proxy for connections to your Snowflake database by using the command below: -``` -tsh proxy db --tunnel --port 2000 snowflake -``` - -Add a new database by clicking the "add" icon in the top-left corner: - -![DBeaver Main Screen](../../img/database-access/guides/snowflake/dbeaver-main-screen.png) - -In the search bar of the "Connect to a database" window that opens up, type "snowflake", select the Snowflake driver, and click "Next": - -![DBeaver Select Database](../../img/database-access/guides/snowflake/dbeaver-select-database.png) - -Set "Host" to `localhost` and "Port" to the port returned by the `tsh proxy db` command you ran earlier (`2000` in the example above). -In the "Authentication" section set the "Username" to match the database username passed to Teleport with `--db-user` -and enter any value (e.g., "teleport") in the "Password" field (the value of - "Password" will be ignored when establishing a connection but is required by DBeaver to register your database): - -![DBeaver Main](../../img/database-access/guides/snowflake/dbeaver-main.png) - -Next, click the "Driver properties" tab and set "account" to any value (e.g., "teleport"; as with "Password", the value of - "account" will be ignored when establishing a connection but is required by DBeaver to register your database). In "User properties", set "ssl" to `off`: - -![DBeaver Driver](../../img/database-access/guides/snowflake/dbeaver-driver.png) - -Teleport ignores the provided password and account name because internally it uses values from the Database Agent configuration. -Setting "ssl" to `off` only disables encryption on your local machine. The connection to Snowflake is encrypted by Teleport. - -Now you can click on "Test Connection..." in the bottom-left corner: - -![DBeaver Success](../../img/database-access/guides/snowflake/dbeaver-success.png) - -Congratulations! You have just connected to your Snowflake instance. - -## Snowflake: JetBrains (IntelliJ, Goland, DataGrip, PyCharm, etc.) 
- -The Snowflake integration works only in the authenticated proxy mode. Start a local proxy for connections to your Snowflake database by using the command below: -``` -tsh proxy db --tunnel --port 2000 snowflake -``` - -In "Database Explorer" click the "add" button, pick "Data Source", and then pick "Snowflake": - -![JetBrains Add Database](../../img/database-access/guides/snowflake/jetbrains-add-database.png) - -Next, set "Host" to `localhost` and "Port" to the port returned by the `tsh proxy db` command you ran earlier (`2000` in the example above). -Set the "Username" to match the one that you are assuming when you connect to Snowflake - via Teleport and enter any value (e.g., "teleport") in the "Password" field (the value of - "Password" will be ignored but is required to create a data source in your IDE): - -![JetBrains General](../../img/database-access/guides/snowflake/jetbrains-general.png) - -Switch to the "Advanced" tab, set any value (e.g., "teleport") for "account", and add a new record named `ssl` with value `off` (as with "Password", "account" is ignored while establishing the connection but required by your IDE): - -![JetBrains Advanced](../../img/database-access/guides/snowflake/jetbrains-advanced.png) - -Teleport ignores the provided password and the account name as internally it uses values from the Database Agent configuration. -Setting "SSL" to `off` only disables encryption on your local machine. The connection to Snowflake is encrypted by Teleport. - -Now you can click "Test Connection" to check your configuration. - -![JetBrains Success](../../img/database-access/guides/snowflake/jetbrains-success.png) - -Congratulations! You have just connected to your Snowflake instance. - -## SQL Server DataGrip - -In the DataGrip connection configuration menu, use your proxy's endpoint. This -is `localhost:4242` in the example below. (See -[Get connection information](./gui-clients.mdx#get-connection-information) for -more information.) 
- -Select the "User & Password" authentication option and keep the "Password" field -empty: - -![DataGrip connection options](../../img/database-access/guides/sqlserver/datagrip-connection@2x.png) - -Click "OK" to connect. - -## SQL Server DBeaver - -In the DBeaver connection configuration menu, use your proxy's endpoint. This is -`localhost:62652` in the example above. (See -[Get connection information](./gui-clients.mdx#get-connection-information) for -more information.) - -Use the SQL Server Authentication option and keep the Password field empty: - -![DBeaver connection options](../../img/database-access/guides/sqlserver/dbeaver-connection@2x.png) - -Click OK to connect. - -## Cloud Spanner DataGrip - -(!docs/pages/includes/database-access/gui-clients/spanner-local-proxy.mdx!) - -From the DataGrip menu, click "File > New > Data Source from URL", then copy and -paste the JDBC URL that was output by `tsh proxy db`: - -![Create DataGrip Spanner Data Source From JDBC URL](../../img/database-access/spanner-gui/datagrip-data-source-from-jdbc-url@2x.png) - -The "Google Cloud Spanner" driver should be automatically selected. -Click "Ok". - -DataGrip does not need GCP credentials - those are already provided by Teleport. -On the "General" tab of the new data source, select the "Authentication" -dropdown setting and choose "No auth": - -![Choose No Auth For DataGrip Spanner Data Source](../../img/database-access/spanner-gui/datagrip-choose-no-auth@2x.png) - -Click "Test connection" to ensure the connection is configured correctly, then -click "Ok" to create the data source. - -(!docs/pages/includes/database-access/gui-clients/spanner-reuse-port-note.mdx!) - -## Cloud Spanner DBeaver - -(!docs/pages/includes/database-access/gui-clients/spanner-local-proxy.mdx!) 
- -From the menu, click "Database > Driver Manager": - -![Open DBeaver Driver Manager](../../img/database-access/spanner-gui/dbeaver-open-driver-manager@2x.png) - -Search for the "Google Cloud Spanner" driver, select it, and click the "Copy" -button to make a custom driver configuration: - -![Copy DBeaver Google Spanner Driver](../../img/database-access/spanner-gui/dbeaver-copy-spanner-driver@2x.png) - -Give the custom driver a name, e.g. "Teleport Spanner", then set "URL Template" -to this pattern string: - -```code -jdbc:cloudspanner://127.0.0.1:{port}/projects/{server}/instances/{host}/databases/{database};usePlainText=true -``` - -![Create Custom DBeaver Google Spanner Driver](../../img/database-access/spanner-gui/dbeaver-create-spanner-driver@2x.png) - -Click "Ok", then click "Close" - -From the menu, click "Database > New Connection from JDBC URL": - -Now copy the JDBC URL that was output by `tsh proxy db` and paste it: - -![Create DBeaver Spanner Connection From JDBC URL](../../img/database-access/spanner-gui/dbeaver-connection-from-jdbc-url@2x.png) - -Click "Proceed", then click "Finish". - -(!docs/pages/includes/database-access/gui-clients/spanner-reuse-port-note.mdx!) - -## Oracle graphical clients - -The Oracle integration works only in the authenticated proxy mode. Start a local proxy for connections to your Oracle database by using the command below: - -``` -> tsh proxy db --tunnel --port 11555 --db-user= --db-name= oracle - -Started authenticated tunnel for the Oracle database "oracle" in cluster "teleport.example.com" on 127.0.0.1:11555. -``` - - -The command above uses the local port 11555, but you can choose any available port. Leaving `--port` empty will cause `tsh` to pick a random one. - - -The local proxy supports TCP and TCPS modes. Different clients prefer different modes. 
- -TCP: -- requires no username or password -- generally easier to configure - -TCPS: -- requires no username or password -- depends on automatically created wallet -- uses JDBC URL for configuration - - -Teleport versions earlier than 17.2.0 support only a limited range of clients and only offer TCPS mode. `tsh` will automatically detect this situation and warn the user. We recommend updating to the latest version of Teleport to access full client support and additional connection options. - - -### Oracle SQL Developer (standalone) - -In "Connections" click the "+" button for a new database connection: - -![Oracle SQL Developer Add Database Connection](../../img/database-access/guides/oracle/sql-developer-standalone-add-database.png) - -Next, set the name and username from the `--db-user` parameter. Set connection type to "Custom JDBC" and -set the "Custom JDBC URL" from the `tsh proxy db` command. - -![Oracle SQL Developer](../../img/database-access/guides/oracle/sql-developer-standalone-conn-details-tcps.png) - -Now you can click "Test" to check your configuration. - -![Oracle SQL Developer Success](../../img/database-access/guides/oracle/sql-developer-standalone-success.png) - -### Oracle SQL Developer (VS Code extension) - -Install the extension from [VS Code Marketplace](https://marketplace.visualstudio.com/items?itemName=Oracle.sql-developer). - -Both TCP and TCPS modes can be used. - - - - -Open the extension toolbar and click on "Create Connection" button. 
- -![SQL Developer (VS Code) Start](../../img/database-access/guides/oracle/sql-developer-vscode-start@2x.png) - -Enter the following connection details: - -| Field | Value | -|-----------------|------------------------| -| Connection name | Choose unique name | -| User name | `/` | -| Password | `/` | -| Save Password | Mark checkbox | -| Connection type | Basic | -| Host name | `localhost` | -| Port number | `--port` flag value | -| Type | Service Name | -| Service name | `--db-name` flag value | - -Test and create the connection. - -![SQL Developer (VS Code) Connection Details (basic)](../../img/database-access/guides/oracle/sql-developer-vscode-conn-details-basic@2x.png) - -The new connection should appear on the list. - -![SQL Developer (VS Code) Connected (basic)](../../img/database-access/guides/oracle/sql-developer-vscode-connected-basic@2x.png) - - - - - -Open the extension toolbar and click on "Create Connection" button. - -![SQL Developer (VS Code) Start](../../img/database-access/guides/oracle/sql-developer-vscode-start@2x.png) - -Enter the following connection details: - -| Field | Value | -|------------------|---------------------------------| -| Connection name | (choose per your preference) | -| User name | `/` | -| Password | `/` | -| Save Password | Mark the checkbox | -| Connection type | "Custom JDBC" | -| Custom JDBC URL | Copy from `tsh proxy db` output | - -Test and create the connection. - -![SQL Developer (VS Code) Connection Details (JDBC)](../../img/database-access/guides/oracle/sql-developer-vscode-conn-details-jdbc@2x.png) - -The new connection should appear on the list. - -![SQL Developer (VS Code) Connected (JDBC)](../../img/database-access/guides/oracle/sql-developer-vscode-connected-jdbc@2x.png) - - - - - -### Toad - -Add new login record in the logins dialog. 
- -![Toad Add Login Record](../../img/database-access/guides/oracle/toad-add-login-record@2x.png) - -Enter the connection details in "Direct" tab: - -| Field | Value | -|-----------------|------------------------------| -| Host name | `127.0.0.1` | -| Port number | `--port` flag value | -| Service name | `--db-name` flag value | -| User name | `EXTERNAL` | -| Password | (leave empty) | -| Connection name | (choose per your preference) | - -Test the connection to verify the setup. - -![Toad Add Login Tested](../../img/database-access/guides/oracle/toad-add-login-tested@2x.png) - -The newly added login should appear on the login list. - -![Toad Login List](../../img/database-access/guides/oracle/toad-login-list@2x.png) - - -You can also configure Toad to use an external Oracle client. Both native and external clients are supported. - - -### DBeaver - -Click on the "New Database Connection" button. - -![DBeaver new connection button](../../img/database-access/guides/oracle/dbeaver-new-connection@2x.png) - -Select "Oracle" from the driver list. You may use the search toolbar to narrow down the list. -![DBeaver connect to a database](../../img/database-access/guides/oracle/dbeaver-connect-to-a-database@2x.png) - -Choose "Custom" connection type and paste the JDBC connection string printed by `tsh proxy db`. -![DBeaver JDBC details](../../img/database-access/guides/oracle/dbeaver-jdbc-details@2x.png) - -Test the connection to verify the setup. Finalize by clicking "Finish". - diff --git a/docs/pages/connect-your-client/introduction.mdx b/docs/pages/connect-your-client/introduction.mdx deleted file mode 100644 index 27a44c6835074..0000000000000 --- a/docs/pages/connect-your-client/introduction.mdx +++ /dev/null @@ -1,348 +0,0 @@ ---- -title: Introduction to Teleport Clients -description: The basics of connecting to resources with Teleport ---- - -This guide covers the basics of authenticating to Teleport and -accessing resources. 
It's written for end-users of resources protected by -Teleport, and includes links to more detailed documentation at the end. - -## Client tools - -### tsh - -`tsh` lets you authenticate to Teleport and list and connect to resources. After -[downloading](https://goteleport.com/download/) and installing `tsh`, sign in to -your Teleport cluster: - - - - -Authenticate to Teleport as a local user with `tsh login` by providing your Teleport username and -the domain name of your Teleport cluster: - -```code -$ tsh login --proxy= --user= -Enter password for Teleport user alice: -Tap any security key -> Profile URL: https://teleport.example.com:443 - Logged in as: alice - Cluster: example.com - Roles: access, requester - Logins: ubuntu, ec2-user - Kubernetes: enabled - Valid until: 2022-11-01 22:37:05 -0500 CDT [valid for 12h0m0s] - Extensions: permit-agent-forwarding, permit-port-forwarding, permit-pty, private-key-policy -``` - - - - -Authenticate to Teleport as a single sign-on (SSO) user by running `tsh login` -and providing the name of your -authentication connector, if implemented by your administrators: - -```code -$ tsh login --proxy= --auth= -If browser window does not open automatically, open it by clicking on the link: - http://127.0.0.1:49927/1d80e257-ec61-4ed2-9403-784f8d35b2fe -> Profile URL: https://teleport.example.com:443 - Logged in as: user@example.com - Cluster: example.com - Roles: access, requester - Logins: ubuntu, ec2-user - Kubernetes: enabled - Valid until: 2022-11-01 22:37:05 -0500 CDT [valid for 12h0m0s] - Extensions: permit-agent-forwarding, permit-port-forwarding, permit-pty, private-key-policy -``` - -Depending on how Teleport was configured for your network, you may not need -the additional `--auth` flag. Your administrator should provide the details -required for your particular use case. 
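The `Valid until` line in the login summary tells you when your certificate expires. As an illustrative sketch (the line format here is an assumption based on the example output above, not a stable interface), you can parse it to compute how much session time remains:

```python
from datetime import datetime

def parse_valid_until(line: str) -> datetime:
    """Extract the expiry timestamp from a `tsh login` summary line such as:
    "  Valid until: 2022-11-01 22:37:05 -0500 CDT [valid for 12h0m0s]"

    The trailing zone name and bracketed duration are dropped; the numeric
    UTC offset is enough to build a timezone-aware datetime.
    """
    value = line.split("Valid until:", 1)[1].strip()
    date_part, time_part, offset = value.split()[:3]
    return datetime.strptime(f"{date_part} {time_part} {offset}", "%Y-%m-%d %H:%M:%S %z")

expiry = parse_valid_until("  Valid until: 2022-11-01 22:37:05 -0500 CDT [valid for 12h0m0s]")
print(expiry.isoformat())
```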
- - - -### Teleport Connect - -The Teleport Connect app provides all the same access to resources as `tsh` in -a friendly graphical user interface. After [downloading](https://goteleport.com/download/) -and installing Teleport Connect, you can log in and initiate sessions for -server and database access within a single window. - -1. Click **CONNECT** to connect to the Teleport cluster: - - ![A new instance of Teleport Connect](../../img/teleport-connect-init.png) - -1. Provide the address of your Teleport Cluster (e.g. - `https://example.teleport.sh`) and click **NEXT**. - -1. Teleport Connect will ask you for your username, password, and MFA. - - If Teleport is integrated with an external Identity Provider (**IdP**), you might be - prompted to authenticate with that service in a browser window. - -1. Browse and connect to all the resources your user is permitted to access: - - ![An example of Teleport Connect, populated with instances](../../img/use-teleport/connect-ui-overview@2x.png) - -### Web UI - -Teleport provides a web interface for users to interact with Teleport, e.g., by -accessing resources or creating Access Requests. This is usually found at the -same URL used to connect to Teleport (e.g. `https://example.teleport.sh`), -but you should confirm the Web UI URL with the team that manages your Teleport -deployment. - -The Web UI provides similar access to resources as Teleport Connect, and -additional access to Request and Activity logs for users with the right -permissions. 
- -## Protocols - -### Server access (`ssh`) - - - - -`tsh ls` lists the servers you have access to through Teleport: - -```code -$ tsh ls -Node Name Address Labels -------------------- --------------- ---------------------------- -server1.example.com 192.0.2.24:3022 access=servers,hostname=server1 -server2.example.com 192.0.2.32:3022 access=servers,hostname=server2 -``` - -To connect to a server: - -```code -$ tsh ssh ubuntu@server1.example.com -ubuntu@server1:~$ -``` - - - - -Under the **Servers** tab, click **CONNECT** next to the server you want to -access. Select or type in a username available to your Teleport user. - -In a new tab, Teleport Connect will initiate the connection and provide a -terminal environment. - - - - -From the **Servers** menu, the Teleport Web UI will list all servers your user -has permission to access. The **CONNECT** button will open a new tab with a -terminal emulator to provide access to that server. - - - - -### Kubernetes access (`kubectl`) - - - - -To see the Kubernetes clusters that you can access via Teleport, run the -following command: - -```code -$ tsh kube ls - Kube Cluster Name Labels Selected - ----------------- --------------------------- -------- - mycluster env=dev -``` - -To log in to the cluster, run the following command, changing `mycluster` to the -name of a Kubernetes cluster as it was listed in `tsh kube ls`: - -```code -$ tsh kube login mycluster -Logged into kubernetes cluster "mycluster". Try 'kubectl version' to test the -connection. -``` - -When you log in to your Teleport cluster via `tsh kube login`, `tsh` updates -your kubeconfig to point to your chosen Kubernetes cluster. You can then run -`kubectl` commands against your cluster. - - - - -After logging in to a cluster in Teleport Connect, click the **Kubes** tab to -see a list of Kubernetes clusters you can access. - -![Clusters you can access](../../img/use-teleport/connect-kube-list.png) - -Next to your chosen cluster, click **Connect**. 
Teleport Connect will -open a terminal in a new tab and authenticate to the cluster. You can then run -`kubectl` commands to interact with the cluster. - -![Connect to a cluster](../../img/use-teleport/connect-kube-access.png) - - - -In the Teleport Web UI, click the **Kubernetes** tab. You will see a list of -Kubernetes clusters your Teleport user is authorized to connect to. - -![Available Kubernetes clusters](../../img/use-teleport/kubernetes-clusters.png) - -Find your chosen cluster and click **Connect**. You will see a modal window that -lists the commands you can execute in your terminal to connect to the cluster -via Teleport. - -![Connect to a Kubernetes -cluster](../../img/use-teleport/kubernetes-login.png) - - - - -### Transferring files - - - - -`tsh scp` will let you transfer files to servers behind Teleport: - -```code -$ tsh scp some-file.ext server.example.com: -some-file.ext 7% |███████ | (25/342 MB, 2.9 MB/s) [9s:1m48s] -``` - - - -Teleport Connect allows you to transfer files to a remote server and runs on macOS, Linux and Windows. 
You can use the arrows in the top-left corner of an active SSH session to upload and download files: - -![Download with connect](../../img/download_files_connect.png) - -You can upload files from your system using drag and drop: - -![Drag and drop in Connect](../../img/upload_files_connect_drag_drop.png) - - - - -You can use the arrows in the top-left corner of the Web UI to download files: - -![Arrows in Web UI](../../img/download_files_webui.png) - -Or you can upload using drag and drop: - -![Upload in Web UI](../../img/upload_files_webui.png) - - - - -### Database access - - - - -`tsh db ls` will list the databases available to your user: - -```code -$ tsh db ls -Name Description Allowed Users Labels Connect ------------------------------------------------ ----------- --------------- ------- ------------------------- -myelastic [alice elastic] env=dev -mysql-server1 (user: alice, db: teleport_example) [alice elastic] env=dev tsh db connect mysql-server1 -``` - -To connect to a database server through `tsh`, you'll need a local client for -that database. For example, to connect to a MySQL or MariaDB database, you'll -need the [MySQL CLI](https://dev.mysql.com/doc/refman/8.0/en/mysql.html) client: - -```code -$ tsh db connect --db-user=alice --db-name=teleport_example mysql-server1 -``` - -In this example, `teleport_example` is a pre-existing database on the MySQL server: - -```sql -SHOW DATABASES; -+--------------------+ -| Database | -+--------------------+ -| information_schema | -| mysql | -| performance_schema | -| teleport_example | -+--------------------+ -4 rows in set (0.09 sec) -``` - -For other connection types, or to connect a graphical database client, you can -create a local tunnel to point your software to: - -```code -$ tsh proxy db myelastic --db-user=alice --tunnel -Started authenticated tunnel for the Elasticsearch database "myelastic" in cluster "example.com" on 192.0.2.58:52669. 
- -Use one of the following commands to connect to the database: - - * interactive SQL connection: - - $ elasticsearch-sql-cli http://localhost:52669/ - - * run single request with curl: - - $ curl http://localhost:52669/ - -Database certificate renewed: valid until 2022-11-02T06:11:50Z [valid for 9h55m0s] -``` - - - - -In the **Databases** tab, select the database you want to access and click -**CONNECT**. Select or type in a username available to your Teleport user. - -Teleport Connect will open a tunneled connection to the database. You can copy -the port number and connect with a GUI app at `localhost`, or click **RUN** to -initiate a CLI connection inside Teleport Connect: - -![Teleport Connect database connection](../../img/teleport-connect-mysql.png) - - - - -The Teleport Web UI cannot provide direct connections to databases, but it will -list those that are accessible to your user under **Databases** and provide -`tsh` commands to connect from your local terminal environment. - - - - -### Desktop access - -Desktop access is available through the Teleport Web UI. - -1. In your browser, navigate to your Teleport cluster (for example, -`https://example.teleport.sh`). -1. From the menu on the right, select **Desktops**. -1. Next to the desktop you want to access, click **CONNECT**. Select -or type in a username available to your Teleport user. -1. Teleport will open a new browser tab or window and begin the RDP session. -Note that you may need to wait a moment for Teleport to log you in as the specified user. - -## Next steps - -This guide covers the basic installation and access of Teleport for end users, -but the rest of the Connect your Client section provides more detailed information. - -- `tsh` is the CLI tool for accessing resources through Teleport. With it, -you can authenticate to Teleport, list available services, and connect to them -either directly or through proxy tunnels. - - Learn more about it from [Using the tsh Command Line Tool](./tsh.mdx). 
- -- Teleport Connect is a graphic utility for connecting to resources through - Teleport. You use it to connect to servers, databases, and Kubernetes - clusters. See [Using Teleport Connect](./teleport-connect.mdx). - -- [Access Teleport-protected databases with GUI clients](./gui-clients.mdx): - Details how to connect many popular database GUI clients through Teleport. diff --git a/docs/pages/connect-your-client/model-context-protocol/database-access.mdx b/docs/pages/connect-your-client/model-context-protocol/database-access.mdx new file mode 100644 index 0000000000000..89ac3802c71f0 --- /dev/null +++ b/docs/pages/connect-your-client/model-context-protocol/database-access.mdx @@ -0,0 +1,207 @@ +--- +title: Access Teleport Databases over MCP +sidebar_label: Databases +description: How to configure MCP clients to use Teleport databases as MCP servers. +tags: + - mcp + - infrastructure-identity + - zero-trust + - how-to +--- + +This guide explains how to connect to your **PostgreSQL** Teleport databases with MCP clients. + +{/* lint ignore page-structure remark-lint */} + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx clients="\`tsh\` client"!) + +- Teleport Database Service with a PostgreSQL database enrolled. See our [guides](../../enroll-resources/database-access/database-access.mdx) + for options on how to enroll PostgreSQL databases with Teleport, such as + the [AWS RDS PostgreSQL](../../enroll-resources/database-access/enrollment/aws/rds/mysql-postgres-mariadb.mdx) + and [self-hosted PostgreSQL](../../enroll-resources/database-access/enrollment/self-hosted/postgres-self-hosted.mdx) guides. + + +Since language models can execute any query on your database, we advise creating +a database user with only the permissions you want the models to have. Setting +up a user with read-only permissions will help prevent accidental changes to +your database. 
+ +Here's an example of how to create a PostgreSQL user with read-only access to +the database: + +```sql +CREATE ROLE mcp_read_only WITH LOGIN; +GRANT CONNECT ON DATABASE my_database TO mcp_read_only; +GRANT USAGE ON SCHEMA public TO mcp_read_only; +GRANT SELECT ON my_table TO mcp_read_only; -- To grant read access to a single table. +GRANT SELECT ON ALL TABLES IN SCHEMA public TO mcp_read_only; -- To grant read access to all tables in the "public" schema. + +GRANT rds_iam TO mcp_read_only; -- If connecting to a PostgreSQL RDS database. +``` + +Remember that you will also need sufficient permissions on your Teleport user to +access the database using this user. + +You can also set up fine-grained permissions for use with MCP using [Auto-user +provisioning](../../enroll-resources/database-access/auto-user-provisioning/postgres.mdx) +or [Database Access Controls](../../enroll-resources/database-access/rbac.mdx). + + +## Step 1/2. Configure MCP clients + +First, sign in to your Teleport cluster using `tsh login`: + +```code +$ tsh login --proxy= --user=myuser@example.com +``` + +Only databases that you have access to can be exposed as MCP servers. You can +retrieve the list of databases and their connection options using `tsh db ls`: + +```code +$ tsh db ls +# Name Description Allowed Users Labels +# --------------- -------------------- ------------------ ------- +# postgres-dev Development database [* postgres] env=dev +# postgres-prod Production database [mcp_read_only] env=prod +``` + +Each database you want accessible through MCP must be configured with your MCP +client individually. This is done using the `tsh mcp db config` command. Like +`tsh db connect`, this command requires you to select the database user and +database name that the MCP connections will use. + +This command can either generate a configuration file (using the `mcpServers` +format) for manual MCP client updates or automatically update the MCP client +configuration. 
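To make the shape of that configuration concrete, here is a sketch that assembles an `mcpServers` entry by hand. The cluster name, database, and the `teleport-databases` server name follow the examples in this guide; in practice, prefer `tsh mcp db config`, which generates this for you:

```python
import json

def mcp_server_entry(cluster: str, database: str, db_user: str, db_name: str) -> dict:
    """Build an `mcpServers` entry that launches Teleport's STDIO MCP server
    for a single database, mirroring the shape `tsh mcp db config` emits."""
    uri = f"teleport://clusters/{cluster}/databases/{database}?dbName={db_name}&dbUser={db_user}"
    return {
        "mcpServers": {
            "teleport-databases": {
                "command": "tsh",
                "args": ["mcp", "db", "start", uri],
            }
        }
    }

entry = mcp_server_entry("example.teleport.sh", "postgres-dev", "postgres", "employees")
print(json.dumps(entry, indent=2))
```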
+
+
+`tsh` can automatically update the Claude Desktop MCP configuration file to
+include Teleport's configuration:
+
+```code
+$ tsh mcp db config --db-user=postgres --db-name=employees --client-config=claude postgres-dev
+Added database "postgres-dev" on the client configuration at:
+~/Library/Application Support/Claude/claude_desktop_config.json
+
+Teleport database access MCP server is named "teleport-databases" in this configuration.
+
+You may need to restart your client to reload these new configurations.
+```
+
+You can also provide a custom path for your Claude Desktop MCPs configuration:
+```code
+$ tsh mcp db config --db-user=postgres --db-name=employees --client-config=/path/to/config.json postgres-dev
+```
+
+After updating the configuration, you need to restart the Claude Desktop app
+before using the newly added MCPs.
+
+
+
+`tsh` can automatically update the Global Cursor MCP servers to include
+Teleport's configuration:
+
+```code
+$ tsh mcp db config --db-user=postgres --db-name=employees --client-config=cursor postgres-dev
+Added database "postgres-dev" on the client configuration at:
+/your/home/path/.cursor/mcp.json
+
+Teleport database access MCP server is named "teleport-databases" in this configuration.
+
+You may need to restart your client to reload these new configurations.
+```
+
+You can also update a Cursor project's MCP servers by providing the path to the
+file:
+```code
+$ tsh mcp db config --db-user=postgres --db-name=employees --client-config=/path/to/project/.cursor/mcp.json postgres-dev
+```
+
+
+
+Currently, `tsh` only supports generating the `mcpServers` format and some
+client-specific formats. Running the config command without any specific options
+will output the configuration used to start Teleport's STDIO MCP server. You can
+use this as a base and modify it to suit your MCP client needs.
+ +```code +$ tsh mcp db config --db-user=postgres --db-name=employees postgres-dev +Here is a sample JSON configuration for launching Teleport MCP servers: +{ + "mcpServers": { + "teleport-databases": { + "command": "tsh", + "args": [ + "mcp", + "db", + "start", + "teleport://clusters//databases/postgres-dev?dbName=employees&dbUser=postgres" + ] + } + } +} + +If you already have an entry for "teleport-databases" server, add the following database resource URI to the command arguments list: +teleport://clusters//databases/postgres-dev?dbName=employees&dbUser=postgres +``` + + + +## Step 2/2. Access Teleport-protected resources over MCP + +After configuring your MCP client, the following tools will be available: + +- `teleport_list_databases`: Lists database resources accessible with Teleport tools. +- `teleport_postgres_query` (PostgreSQL databases only): Executes SQL queries on a specified database through Teleport. + +Additionally, the served databases are available as MCP resources. You can +attach them to the conversation to provide extra context. Behavior may vary +among MCP clients. + +You can now use these tools to execute queries on your databases. Here is an +example of testing the connection and retrieving the database version using +Claude Desktop: + +![Database access usage example](../../../img/connect-your-client/model-context-protocol/usage-example-database-access.png) + +## Troubleshooting + +### Empty list of databases or missing `teleport_postgres_query` tool + +This occurs if the MCP server command (`tsh mcp db start`) has an invalid list +of databases or options. Make sure the database URI list is correct. You can +generate it using the `tsh mcp db config` command. + +In this case, you'll still be able to see and call `teleport_list_databases`, +which will provide instructions on how to proceed. + +### Expired `tsh` session + +There must be a valid `tsh` session during the MCP server startup, or it won't +start. 
+ +If your session expires while the MCP server is running, the next tool calls +will fail. You need to run `tsh login` again and retry the failed requests. In +such cases, you don't have to restart the MCP client or the MCP server. + +### Access denied errors + +If, while executing queries, you receive access denied responses, verify that +your user has permission to access the configured database. You can confirm by +running `tsh db connect` before starting your MCP server. + +For example, if you configured your database with: + +```code +$ tsh mcp db config --db-user=postgres --db-name=employees postgres-dev +``` + +You must be able to test your access by running: + +```code +$ tsh db connect --db-user=postgres --db-name=employees postgres-dev +``` diff --git a/docs/pages/connect-your-client/model-context-protocol/kube-access.mdx b/docs/pages/connect-your-client/model-context-protocol/kube-access.mdx new file mode 100644 index 0000000000000..aef2bea72a35e --- /dev/null +++ b/docs/pages/connect-your-client/model-context-protocol/kube-access.mdx @@ -0,0 +1,135 @@ +--- +title: Access Teleport Kubernetes clusters over MCP +sidebar_label: Kubernetes Clusters +description: How to configure MCP clients to use Teleport Kubernetes clusters as MCP servers. +tags: + - mcp + - infrastructure-identity + - zero-trust + - how-to +--- + +This guide explains how to connect to Teleport Kubernetes Clusters with MCP clients. + +{/* lint ignore page-structure remark-lint */} + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx clients="\`tsh\` client"!) +- Kubernetes Clusters enrolled with Teleport. See our [guides](../../enroll-resources/kubernetes-access/getting-started.mdx). + +## Step 1/2. 
Configure MCP clients
+
+First, sign in to your Teleport cluster using `tsh login`:
+
+```code
+$ tsh login --proxy= --user=myuser@example.com
+```
+
+To list Kubernetes clusters available for you to access:
+```code
+$ tsh kube ls
+Kube Cluster Name Labels Selected
+----------------- ------- --------
+minikube env=dev *
+```
+
+Now log in to your Kubernetes cluster, replacing with your Kubernetes cluster name:
+```code
+$ tsh kube login 
+Logged into Kubernetes cluster "minikube". Try 'kubectl version' to test the connection.
+```
+This command also updates your default Kubernetes config.
+
+Next, configure your MCP clients to use the
+[`kubernetes-mcp-server`](https://github.com/containers/kubernetes-mcp-server)
+MCP server.
+
+
+
+Open your `claude_desktop_config.json` and add the MCP server to the list of
+mcpServers:
+
+```json
+{
+  "mcpServers": {
+    "kubernetes": {
+      "command": "npx",
+      "args": [
+        "-y",
+        "kubernetes-mcp-server@latest"
+      ]
+    }
+  }
+}
+```
+
+
+
+You can install the extension by editing the `mcp.json` file:
+```json
+{
+  "mcpServers": {
+    "kubernetes-mcp-server": {
+      "command": "npx",
+      "args": ["-y", "kubernetes-mcp-server@latest"]
+    }
+  }
+}
+```
+
+
+
+
+You can install the extension by running the following command:
+```code
+# For VS Code
+code --add-mcp '{"name":"kubernetes","command":"npx","args":["kubernetes-mcp-server@latest"]}'
+# For VS Code Insiders
+code-insiders --add-mcp '{"name":"kubernetes","command":"npx","args":["kubernetes-mcp-server@latest"]}'
+```
+
+
+
+## Step 2/2. Access Teleport-protected resources over MCP
+
+After configuring your MCP client, you will find Kubernetes and Helm tools from
+`kubernetes-mcp-server`.
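The MCP server operates against whatever kubeconfig context `tsh kube login` selected. If the tools appear to target the wrong cluster, the first thing to check is the active context; the logic amounts to the following Python sketch (the kubeconfig structure shown is the standard layout, but the sample context name is made up for this example):

```python
def active_context(kubeconfig: dict) -> str:
    # Return the context name that tools reading this kubeconfig (such as
    # kubernetes-mcp-server) would act on, failing loudly if the value is stale.
    name = kubeconfig.get("current-context", "")
    known = {c["name"] for c in kubeconfig.get("contexts", [])}
    if name not in known:
        raise ValueError(f"current-context {name!r} is not defined in this kubeconfig")
    return name

# Illustrative fragment of a kubeconfig after `tsh kube login`;
# the context name here is hypothetical.
cfg = {
    "current-context": "teleport.example.com-minikube",
    "contexts": [{"name": "teleport.example.com-minikube",
                  "context": {"cluster": "teleport.example.com"}}],
}
print(active_context(cfg))  # teleport.example.com-minikube
```

Running `kubectl config current-context` performs the same check against your default kubeconfig.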
+ +You can now use these tools to interact with your Kubernetes clusters via +Teleport in your MCP clients: + +![Kube usage context](../../../img/connect-your-client/model-context-protocol/usage-kube-context.png) +![Kube usage pod](../../../img/connect-your-client/model-context-protocol/usage-kube-pod.png) + +## Teleport behind TLS-terminating load balancers + +If your Teleport cluster is behind a TLS-terminating load balancer or reverse +proxy, you can start a local proxy with `tsh`: +```code +$ tsh proxy kube -p 8888 +``` + +Copy the `KUBECONFIG` path from the output of the command, and add it with the +`--kubeconfig` flag in your MCP client configuration. For example: +```json +{ + "mcpServers": { + "kubernetes-mcp-server": { + "command": "npx", + "args": ["-y", "kubernetes-mcp-server@latest", "--kubeconfig", "/path/to/your/tsh/localproxy-8888-kubeconfig"] + } + } +} +``` + +Alternatively, you can use [Teleport Connect](../teleport-clients/teleport-connect.mdx) to run +the local proxy to your Kubernetes cluster. You can find the `KUBECONFIG` path +from the terminal in Teleport Connect: +```code +$ echo $KUBECONFIG +/path/to/your/minikube-kubeconfig +``` diff --git a/docs/pages/connect-your-client/model-context-protocol/mcp-access.mdx b/docs/pages/connect-your-client/model-context-protocol/mcp-access.mdx new file mode 100644 index 0000000000000..3f46427a4e886 --- /dev/null +++ b/docs/pages/connect-your-client/model-context-protocol/mcp-access.mdx @@ -0,0 +1,199 @@ +--- +title: Access MCP Servers with Teleport +sidebar_label: MCP Servers +sidebar_position: 1 +description: How to configure MCP clients to use MCP servers served by Teleport. +tags: + - mcp + - how-to + - zero-trust + - infrastructure-identity +--- + +This guide explains how to configure MCP clients to access MCP servers served by +Teleport. 
+ +{/* lint ignore page-structure remark-lint */} + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport (v18.1.0 or higher)" clients="\`tsh\` client"!) + +- Teleport MCP Access configured. + +
+ Configure MCP Access + + - [Quick-start with demo MCP server](../../enroll-resources/mcp-access/getting-started.mdx). + - [MCP Access with Stdio MCP Server](../../enroll-resources/mcp-access/enrolling-mcp-servers/stdio.mdx) +
+
+## Step 1/2. Configure MCP clients
+
+First, sign in to your Teleport cluster using `tsh login`, assigning
+ to the web address of the Teleport Proxy
+Service in your cluster:
+
+```code
+$ tsh login --proxy= --user=myuser@example.com
+```
+
+You can now list the MCP servers available to you:
+
+```code
+$ tsh mcp ls
+Name Description Type Labels
+-------------- ------------------------------------------ ----- --------------------
+fs Filesystem MCP Server stdio env=prod
+mcp-everything This MCP server attempts to exercise al... stdio env=dev,sandbox=true
+```
+
+The MCP client configuration can be created using the `tsh mcp config` command.
+You can choose which servers to configure by using the `--labels` flag to filter
+by labels or by specifying `--all`, which will configure all MCP servers you have
+access to.
+
+This command can either generate a configuration file (using the `mcpServers`
+format) for manual MCP client updates or automatically update the MCP client
+configuration.
+
+
+
+`tsh` can automatically update the Claude Desktop MCP configuration file to
+include Teleport's configuration:
+
+```code
+$ tsh mcp config --all --client-config=claude
+Found MCP servers:
+fs
+mcp-everything
+
+Updated client configuration at:
+~/Library/Application Support/Claude/claude_desktop_config.json
+
+Teleport MCP servers will be prefixed with "teleport-mcp-" in this
+configuration.
+
+You may need to restart your client to reload these new configurations. If you
+encounter a "disconnected" error when tsh session expires, you may also need to
+restart your client after logging in a new tsh session.
+```
+
+You can also provide a custom path for your Claude Desktop MCPs configuration:
+```code
+$ tsh mcp config --all --client-config=/path/to/config.json
+```
+
+After updating the configuration, you need to restart the Claude Desktop app
+before using the newly added MCPs.
+
+
+`tsh` can automatically update the Global Cursor MCP servers to include
+Teleport's configuration:
+
+```code
+$ tsh mcp config --all --client-config=cursor
+Found MCP servers:
+fs
+mcp-everything
+
+Updated client configuration at:
+/your/home/path/.cursor/mcp.json
+
+Teleport MCP servers will be prefixed with "teleport-mcp-" in this
+configuration.
+
+You may need to restart your client to reload these new configurations. If you
+encounter a "disconnected" error when tsh session expires, you may also need to
+restart your client after logging in a new tsh session.
+```
+
+You can also update a Cursor project's MCP servers by providing the path to the
+file:
+```code
+$ tsh mcp config --all --client-config=/path/to/project/.cursor/mcp.json
+```
+
+
+
+Currently, `tsh` only supports generating the `mcpServers` format and some
+client-specific formats. Running the config command without any specific options
+will output the configuration used to start Teleport's STDIO MCP server. You can
+use this as a base and modify it to suit your MCP client needs.
+
+```code
+$ tsh mcp config --all
+Found MCP servers:
+fs
+mcp-everything
+
+Here is a sample JSON configuration for launching Teleport MCP servers:
+{
+  "mcpServers": {
+    "teleport-mcp-fs": {
+      "command": "tsh",
+      "args": ["mcp", "connect", "fs"]
+    },
+    "teleport-mcp-mcp-everything": {
+      "command": "tsh",
+      "args": ["mcp", "connect", "mcp-everything"]
+    }
+  }
+}
+```
+
+
+
+## Step 2/2. Use Teleport-protected MCP servers
+
+After configuring your MCP client, the MCP tools and resources should be
+available.
+
+You can now use the MCP servers as usual.
Here is an example of using the +`mcp-everything` server through Teleport with Claude Desktop: + +![MCP everything usage example](../../../img/connect-your-client/model-context-protocol/usage-example-everything.png) + +## Troubleshooting + +### Server is running but it has an empty list of tools + +Besides accessing the MCP servers, you also need permissions for the MCP tools +they provide. You can see which tools are available for you by running +`tsh mcp ls -v`. + +If you're missing tool permissions, reach out to your Teleport administrator to +have them properly configured. + +### Expired `tsh` session + +There must be a valid `tsh` session during the MCP server startup, or it won't +start. + +If your session expires while the MCP server is running, the next tool calls +will fail. You need to run `tsh login` again and retry the failed requests. In +such cases, you don't have to restart the MCP client or the MCP server. + +### `tsh mcp` commands not found + +If you're encountering an error like this when running the `tsh mcp` commands +family: + +``` +ERROR: expected command but got "mcp" +``` + +This is caused by using an outdated version of `tsh`. The earliest version that +includes the MCP commands is `18.1.0`, so you need to make sure you’re running +at least this version. You can check your `tsh` version by running +`tsh version`. + +### `tsh mcp ls` returns an empty list of servers + +Make sure your Teleport cluster has MCP access resources available and the +correct permissions to reach them. [See our guides for enrollment instructions.](../../enroll-resources/mcp-access/mcp-access.mdx) + +### `tsh` path errors in your MCP clients + +(!docs/pages/includes/mcp-access/troubleshoot-tsh-binary-enoent.mdx!) 
diff --git a/docs/pages/connect-your-client/model-context-protocol/model-context-protocol.mdx b/docs/pages/connect-your-client/model-context-protocol/model-context-protocol.mdx new file mode 100644 index 0000000000000..02e4799204d24 --- /dev/null +++ b/docs/pages/connect-your-client/model-context-protocol/model-context-protocol.mdx @@ -0,0 +1,15 @@ +--- +title: "Securing Agentic AI with Teleport Zero Trust Access" +sidebar_label: MCP +description: "Access control and audit between LLMs and data sources or MCP servers." +tags: + - conceptual + - zero-trust + - ai +--- + +Secure Large Language Model (LLM) interactions with infrastructure using Teleport's secure +[Model Context Protocol](https://goteleport.com/use-cases/secure-model-context-protocol/) (MCP), +delivering fine-grained access control and audit between LLMs and data sources or MCP servers. + + diff --git a/docs/pages/connect-your-client/notifications.mdx b/docs/pages/connect-your-client/teleport-clients/notifications.mdx similarity index 92% rename from docs/pages/connect-your-client/notifications.mdx rename to docs/pages/connect-your-client/teleport-clients/notifications.mdx index 7889638309534..a3a353c362ab2 100644 --- a/docs/pages/connect-your-client/notifications.mdx +++ b/docs/pages/connect-your-client/teleport-clients/notifications.mdx @@ -1,6 +1,10 @@ --- title: Notifications description: Provides a detailed breakdown of Teleport's notification system. +tags: + - conceptual + - platform-wide + - resiliency --- Teleport's notification system allows users to be notified of various events, updates, and warnings in real time. @@ -16,7 +20,7 @@ You can mark the notification as read to acknowledge it, or hide it to have it n Some notifications may include quick action buttons which allow you perform actions directly from the notification, such as assuming granted roles from an approved Access Request notification. 
-![Notification in the WebUI](../../img/notification.png)
+![Notification in the WebUI](../../../img/notification.png)
 
 ## Creating and managing notifications
 
@@ -47,4 +51,4 @@ ID Created Expires Tit
 $ tctl notifications rm 3b8eb3d6-da9a-5353-aece-cbc885ecbf73
 ```
 
-For more detailed information on this family of commands, please refer to the [CLI Reference](../reference/cli/tctl.mdx#tctl-notifications-create).
+For more detailed information on this family of commands, please refer to the [CLI Reference](../../reference/cli/tctl.mdx#tctl-notifications-create).
diff --git a/docs/pages/connect-your-client/teleport-clients/request-access.mdx b/docs/pages/connect-your-client/teleport-clients/request-access.mdx
new file mode 100644
index 0000000000000..d9e9092274475
--- /dev/null
+++ b/docs/pages/connect-your-client/teleport-clients/request-access.mdx
@@ -0,0 +1,291 @@
+---
+title: Requesting Access to Roles and Resources
+description: Explains how to request access to resources that your Teleport user does not have permissions to access.
+sidebar_label: Access Requests
+tags:
+ - access-requests
+ - identity-governance
+ - privileged-access
+ - conceptual
+---
+
+If you cannot access a Teleport role or resource with your current Teleport
+permissions, you can create an Access Request to obtain elevated permissions for
+a limited time.
+
+This guide explains how to use `tsh` to create Teleport Access Requests.
+
+
+
+In Teleport Enterprise accounts with Identity Governance enabled, Access
+Requests can also be created in the Teleport Web UI and Teleport Connect.
+In the Teleport Web UI, navigate from the left sidebar to **Identity Governance >
+Access Requests > New Request**. In Teleport Connect, either open a new tab
+with cluster resources by clicking on the new tab button or click the **Access
+Requests** button in the top right.
+ + + +## Requesting access to roles + +You can request access to one or more Teleport roles using the `tsh request +create` command and the `--roles` flag, e.g.: + +```code +$ tsh request create --roles "dev,analyst" +``` + +The command above creates an Access Request for the `dev` and `analyst` roles. + +## Requesting access to Teleport cluster resources + +In addition to Teleport roles, you can request access to individual Teleport +resources, including resources within a Teleport-protected Kubernetes cluster. + +To request access to a Teleport-protected infrastructure resource, you can +search for possible resources, retrieve the ID of a resource to request access +to, then create an Access Request for that resource ID. + + + +In the Teleport Web UI, you can view the resources you can request access to by +navigating to **Resources** in the sidebar. + + + +1. Search for a resource to request access to: + + ```code + $ tsh request search --kind= + ``` + + Substitute `--kind` with the kind of Teleport resource you would like to + request access to. This can be one of the following: + + - `node` + - `kube_cluster` + - `kube_resource` + - `db` + - `app` + - `windows_desktop` + +1. Determine the ID of the resource to request access to. Resource IDs have the + following format: + + ``` + /CLUSTER_ADDRESS/TYPE/RESOURCE_NAME + ``` + + For example, assume that you have found the following Teleport-protected + server to request access to: + + ```code + $ tsh request search --kind=node + Name Hostname Labels Resource ID + ------------------------------------ -------- ------ ---------------------------------------------- + 00000000-0000-0000-0000-000000000000 myhost /teleport.example.com/node/00000000-0000-00... + ``` + + While the resource ID on the right is clipped, you can use the format above + to infer that it is the following: + + ``` + /teleport.example.com/node/00000000-0000-0000-0000-000000000000 + ``` + +1. 
Request access to the resource by running the following command, inputting
+   the resource ID you found in the previous step:
+
+   ```code
+   $ tsh request create --resource RESOURCE_ID
+   ```
+
+## Requesting access to Kubernetes resources
+
+As with Teleport cluster resources, users can request access to Kubernetes
+resources by running the following command, where  is
+the ID of the resource:
+
+```code
+$ tsh request create 
+```
+
+### Namespace-scoped resources
+
+For Kubernetes namespaced resources, the `resource-id` is in the following format:
+
+```
+/TELEPORT_CLUSTER/kube:ns:NAMESPACED_PLURAL_KIND[.API_GROUP]/KUBE_CLUSTER/NAMESPACE/RESOURCE_NAME
+```
+
+The `API_GROUP` part is only required for resources that are not part of the
+core Kubernetes API group.
+
+
+You can run this command to list the namespaced Kubernetes resources,
+including their API groups:
+
+```code
+$ kubectl api-resources --namespaced=true -o name --sort-by=name
+```
+
+
+For example, to request access to a pod called `nginx-1` in the `dev`
+namespace, run the following command:
+
+```code
+$ tsh request create --resource /teleport.example.com/kube:ns:pods/mycluster/dev/nginx-1
+```
+
+And to request access to a deployment called `website` in the `prod`
+namespace, run the following command:
+
+```code
+$ tsh request create --resource /teleport.example.com/kube:ns:deployments.apps/mycluster/prod/website
+```
+
+For the `NAMESPACED_PLURAL_KIND` value, you can use `*` to match all.
+For the `API_GROUP`, `NAMESPACE`, and `RESOURCE_NAME`
+values, you can match ranges of characters by supplying a wildcard (`*`) or
+regular expression. Regular expressions must begin with `^` and end with `$`.
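The matching rules above can be summarized in a few lines of Python; this is a sketch of the documented behavior for a single resource-ID segment, not Teleport's actual implementation:

```python
import re

def segment_matches(pattern: str, value: str) -> bool:
    # How a NAMESPACE or RESOURCE_NAME segment in a resource ID is matched,
    # per the rules above: "*" matches anything, a pattern wrapped in "^...$"
    # is treated as a regular expression, and anything else must match exactly.
    if pattern == "*":
        return True
    if pattern.startswith("^") and pattern.endswith("$"):
        return re.fullmatch(pattern, value) is not None
    return pattern == value

print(segment_matches("*", "dev"))                       # True
print(segment_matches("^nginx-[a-z0-9-]+$", "nginx-1"))  # True
print(segment_matches("pods", "deployments"))            # False
```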
+
+For example, to create a request to access all pods in all namespaces that
+match the regular expression `/^nginx-[a-z0-9-]+$/`, run the following command:
+
+```code
+$ tsh request create --resource '/teleport.example.com/kube:ns:pods/mycluster/*/^nginx-[a-z0-9-]+$'
+```
+
+### Cluster-scoped resources
+
+For Kubernetes cluster-scoped resources, the `resource-id` is in the following
+format:
+
+```
+/TELEPORT_CLUSTER/kube:cw:CLUSTER_WIDE_PLURAL_KIND[.API_GROUP]/KUBE_CLUSTER/RESOURCE_NAME
+```
+
+The `API_GROUP` part is only required for resources that are not part of the
+core Kubernetes API group.
+
+
+You can run this command to list the cluster-wide Kubernetes resources,
+including their API groups:
+
+```code
+$ kubectl api-resources --namespaced=false -o name --sort-by=name
+```
+
+
+For example, to request access to a namespace called `prod`, run the
+following command:
+
+```code
+$ tsh request create --resource /teleport.example.com/kube:cw:namespaces/mycluster/prod
+```
+
+Note that this will only grant access to the namespace resource itself, not to
+any namespaced resources within it.
+
+For the `CLUSTER_WIDE_PLURAL_KIND` value, you can use `*` to match all.
+For the `API_GROUP` and `RESOURCE_NAME` values, you
+can match ranges of characters by supplying a wildcard (`*`) or regular
+expression. Regular expressions must begin with `^` and end with `$`.
+
+For example, to create a request to access all namespaces prefixed with `dev`,
+matching the regular expression `/^dev-[a-z0-9-]+$/`, run the following command:
+
+```code
+$ tsh request create --resource '/teleport.example.com/kube:cw:namespaces/mycluster/^dev-[a-z0-9-]+$'
+```
+
+### Wildcard requests
+
+You can request access to a namespace and all resources within it, as well as to all cluster-wide resources. To do so, you can use wildcards in the resource ID.
+
+For cluster-wide resources, the `kind` format as described above is
+`kube:cw:CLUSTER_WIDE_PLURAL_KIND[.API_GROUP]`.
+
+To request access to all cluster-wide resources, you can request access to
+`kube:cw:*.*`, where the first `*` represents the kind and the second `*`
+represents the API group.
+
+For namespaced resources, the `kind` format is
+`kube:ns:NAMESPACED_PLURAL_KIND[.API_GROUP]`.
+To request access to all namespaced resources within a specific namespace, you
+can request access to `kube:ns:*.*`, where the first `*` represents the kind
+and the second `*` represents the API group.
+
+For example, to request access to the `prod` namespace (cluster-wide) and all
+resources within it (i.e., namespaced), you can run the following command:
+
+```code
+$ tsh request create \
+  --resource /teleport.example.com/kube:cw:namespaces/mycluster/prod \
+  --resource '/teleport.example.com/kube:ns:*.*/mycluster/prod/*'
+```
+
+Similarly, to request access to everything within a cluster, you can use the
+following command:
+
+```code
+$ tsh request create \
+  --resource '/teleport.example.com/kube:cw:*.*/mycluster/*' \
+  --resource '/teleport.example.com/kube:ns:*.*/mycluster/*/*'
+```
+
+which is equivalent to:
+
+```code
+$ tsh request create \
+  --resource /teleport.example.com/kube_cluster/mycluster
+```
+
+### Search for Kubernetes resources
+
+If a user has no access to a Kubernetes cluster, they can search the list of
+resources in the cluster by running the following command, replacing
+ with the name of the Kubernetes cluster and
+ with the name of a Kubernetes namespace:
+
+```code
+$ tsh request search \
+  --kind=kube_resource --kube-kind=pods --kube-api-group='' \
+  --kube-cluster= \
+  [--kube-namespace=|--all-kube-namespaces]
+Name Namespace Labels Resource ID
+----------------- --------- --------- ----------------------------------------------------------
+nginx-deployment-0 default app=nginx /teleport.example.com/kube:ns:pods/local/default/nginx-deployment-0
+nginx-deployment-1 default app=nginx /teleport.example.com/kube:ns:pods/local/default/nginx-deployment-1
+
+To
request access to these resources, run +> tsh request create \ + --resource /teleport.example.com/kube:ns:pods/local/default/nginx-deployment-0 \ + --resource /teleport.example.com/kube:ns:pods/local/default/nginx-deployment-1 \ + --reason +``` + +The list returned includes the name of the resource, the namespace it is in if +applicable, its labels, and the resource ID. +Resources included in the list are those that match the `kubernetes_resources` +field in the user's `search_as_roles`. The user can then: + +- Request access to the resources by running the command provided by the `tsh request +search` command. +- Edit the command to request access to a subset of the resources. +- Use a custom request with wildcards or regular expressions. + +`tsh request search --kind=kube_resource --kube-kind= +--kube-api-group=` works even if the user has no permissions to +interact with the desired Kubernetes cluster, but the user's `search_as_roles` +values must allow access to the cluster. If the user is unsure of the name of +the cluster, they can run the following command to search it: + +```code +$ tsh request search --kind=kube_cluster +Name Hostname Labels Resource ID +----- -------- ------ ---------------------------------------- +local /teleport.example.com/kube_cluster/local + +``` diff --git a/docs/pages/connect-your-client/teleport-clients/teleport-clients.mdx b/docs/pages/connect-your-client/teleport-clients/teleport-clients.mdx new file mode 100644 index 0000000000000..311cb090b1f4c --- /dev/null +++ b/docs/pages/connect-your-client/teleport-clients/teleport-clients.mdx @@ -0,0 +1,359 @@ +--- +title: Introduction to Teleport Clients +sidebar_label: Teleport Clients +description: The basics of connecting to resources with Teleport +tags: + - conceptual + - platform-wide +--- + +This guide covers the basics of authenticating to Teleport and +accessing resources. 
It's written for end-users of resources protected by +Teleport, and includes links to more detailed documentation at the end. + +## Client tools + +### tsh + +`tsh` lets you authenticate to Teleport and list and connect to resources. After +[downloading](https://goteleport.com/download/) and installing `tsh`, sign in to +your Teleport cluster: + + + + +Authenticate to Teleport as a local user with `tsh login` by assigning to your Teleport username and +to the domain name of your Teleport cluster: + +```code +$ tsh login --proxy= --user= +Enter password for Teleport user alice: +Tap any security key +> Profile URL: https://teleport.example.com:443 + Logged in as: alice + Cluster: example.com + Roles: access, requester + Logins: ubuntu, ec2-user + Kubernetes: enabled + Valid until: 2022-11-01 22:37:05 -0500 CDT [valid for 12h0m0s] + Extensions: permit-agent-forwarding, permit-port-forwarding, permit-pty, private-key-policy +``` + + + + +Authenticate to Teleport as a single sign-on (SSO) user by running `tsh login` +and assigning to the name of your +authentication connector, if implemented by your administrators: + +```code +$ tsh login --proxy= --auth= +If browser window does not open automatically, open it by clicking on the link: + http://127.0.0.1:49927/1d80e257-ec61-4ed2-9403-784f8d35b2fe +> Profile URL: https://teleport.example.com:443 + Logged in as: user@example.com + Cluster: example.com + Roles: access, requester + Logins: ubuntu, ec2-user + Kubernetes: enabled + Valid until: 2022-11-01 22:37:05 -0500 CDT [valid for 12h0m0s] + Extensions: permit-agent-forwarding, permit-port-forwarding, permit-pty, private-key-policy +``` + +Depending on how Teleport was configured for your network, you may not need +the additional flags `--auth`. Your administrator should provide the details +required for your particular use case. 
+
+
+
+
+### Teleport Connect
+
+The Teleport Connect app provides all the same access to resources as `tsh` in
+a friendly graphical user interface. After [downloading](https://goteleport.com/download/)
+and installing Teleport Connect, you can log in and initiate sessions for
+server and database access within a single window.
+
+1. Click **CONNECT** to connect to the Teleport cluster:
+
+   ![A new instance of Teleport Connect](../../../img/teleport-connect-init.png)
+
+1. Provide the address of your Teleport cluster (e.g.
+   `https://example.teleport.sh`) and click **NEXT**.
+
+1. Teleport Connect will ask you for your username, password, and MFA.
+
+   If Teleport is integrated with an external Identity Provider (**IdP**), you might be
+   prompted to authenticate with that service in a browser window.
+
+1. Browse and connect to all the resources your user is permitted to access:
+
+   ![An example of Teleport Connect, populated with instances](../../../img/use-teleport/connect-ui-overview@2x.png)
+
+### Web UI
+
+Teleport provides a web interface for users to interact with Teleport, e.g., by
+accessing resources or creating Access Requests. This is usually found at the
+same URL you use to connect to Teleport (e.g. `https://example.teleport.sh`),
+but you should confirm the Web UI URL with the team that manages your Teleport
+deployment.
+
+The Web UI provides similar access to resources as Teleport Connect, and
+additional access to Request and Activity logs for users with the right
+permissions.
+
+## Connecting to resources
+
+This section provides an overview of some ways you can use Teleport client tools to connect to resources in your infrastructure.
+ +### Server access (`ssh`) + + + + +`tsh ls` lists the servers you have access to through Teleport: + +```code +$ tsh ls +Node Name Address Labels +------------------- --------------- ---------------------------- +server1.example.com 192.0.2.24:3022 access=servers,hostname=server1 +server2.example.com 192.0.2.32:3022 access=servers,hostname=server2 +``` + +To connect to a server: + +```code +$ tsh ssh ubuntu@server1.example.com +ubuntu@server1:~$ +``` + + + + +Under the **Servers** tab, click **CONNECT** next to the server you want to +access. Select or type in a username available to your Teleport user. + +In a new tab, Teleport Connect will initiate the connection and provide a +terminal environment. + + + + +From the **Servers** menu, the Teleport Web UI will list all servers your user +has permission to access. The **CONNECT** button will open a new tab with a +terminal emulator to provide access to that server. + + + + +### Kubernetes access (`kubectl`) + + + + +To see the Kubernetes clusters that you can access via Teleport, run the +following command: + +```code +$ tsh kube ls + Kube Cluster Name Labels Selected + ----------------- --------------------------- -------- + mycluster env=dev +``` + +To log in to the cluster, run the following command, changing `mycluster` to the +name of a Kubernetes cluster as it was listed in `tsh kube ls`: + +```code +$ tsh kube login mycluster +Logged into kubernetes cluster "mycluster". Try 'kubectl version' to test the +connection. +``` + +When you log in to your Teleport cluster via `tsh kube login`, `tsh` updates +your kubeconfig to point to your chosen Kubernetes cluster. You can then run +`kubectl` commands against your cluster. + + + + +After logging in to a cluster in Teleport Connect, click the **Kubes** tab to +see a list of Kubernetes clusters you can access. + +![Clusters you can access](../../../img/use-teleport/connect-kube-list.png) + +Next to your chosen cluster, click **Connect**. 
Teleport Connect will +open a terminal in a new tab and authenticate to the cluster. You can then run +`kubectl` commands to interact with the cluster. + +![Connect to a cluster](../../../img/use-teleport/connect-kube-access.png) + + + +In the Teleport Web UI, click the **Kubernetes** tab. You will see a list of +Kubernetes clusters your Teleport user is authorized to connect to. + +![Available Kubernetes clusters](../../../img/use-teleport/kubernetes-clusters.png) + +Find your chosen cluster and click **Connect**. You will see a modal window that +lists the commands you can execute in your terminal to connect to the cluster +via Teleport. + +![Connect to a Kubernetes +cluster](../../../img/use-teleport/kubernetes-login.png) + + + + +### Transferring files + + + + +`tsh scp` will let you transfer files to servers behind Teleport: + +```code +$ tsh scp some-file.ext server.example.com: +some-file.ext 7% |███████ | (25/342 MB, 2.9 MB/s) [9s:1m48s] +``` + + + +Teleport Connect allows you to transfer files to a remote server and runs on macOS, Linux and Windows. 
You can use the arrows in the top-left corner of an active SSH session to upload and download files:
+
+![Download with connect](../../../img/download_files_connect.png)
+
+You can upload files using drag and drop:
+
+![Drag and drop in Connect](../../../img/upload_files_connect_drag_drop.png)
+
+
+
+
+You can use the arrows in the top-left corner of the Web UI to download files:
+
+![Arrows in Web UI](../../../img/download_files_webui.png)
+
+Or you can upload using drag and drop:
+
+![Upload in Web UI](../../../img/upload_files_webui.png)
+
+
+
+
+### Database access
+
+
+
+
+`tsh db ls` will list the databases available to your user:
+
+```code
+$ tsh db ls
+Name                                              Description Allowed Users   Labels  Connect
+------------------------------------------------- ----------- --------------- ------- -------------------------
+myelastic                                                     [alice elastic] env=dev
+mysql-server1 (user: alice, db: teleport_example)             [alice elastic] env=dev tsh db connect mysql-server1
+```
+
+To connect to a database server through `tsh`, you'll need a local client for
+that database. For example, to connect to a MySQL or MariaDB database, you'll
+need the [MySQL CLI](https://dev.mysql.com/doc/refman/8.0/en/mysql.html) client:
+
+```code
+$ tsh db connect --db-user=alice --db-name=teleport_example mysql-server1
+```
+
+In this example, `teleport_example` is a pre-existing database on the MySQL server:
+
+```sql
+SHOW DATABASES;
++--------------------+
+| Database           |
++--------------------+
+| information_schema |
+| mysql              |
+| performance_schema |
+| teleport_example   |
++--------------------+
+4 rows in set (0.09 sec)
+```
+
+For other connection types, or to connect a graphical database client, you can
+create a local tunnel to point your software to:
+
+```code
+$ tsh proxy db myelastic --db-user=alice --tunnel
+Started authenticated tunnel for the Elasticsearch database "myelastic" in cluster "example.com" on 192.0.2.58:52669.
+ +Use one of the following commands to connect to the database: + + * interactive SQL connection: + + $ elasticsearch-sql-cli http://localhost:52669/ + + * run single request with curl: + + $ curl http://localhost:52669/ + +Database certificate renewed: valid until 2022-11-02T06:11:50Z [valid for 9h55m0s] +``` + + + + +In the **Databases** tab, select the database you want to access and click +**CONNECT**. Select or type in a username available to your Teleport user. + +Teleport Connect will open a tunneled connection to the database. You can copy +the port number and connect with a GUI app at `localhost`, or click **RUN** to +initiate a CLI connection inside Teleport Connect: + +![Teleport Connect database connection](../../../img/teleport-connect-mysql.png) + + + + +The Teleport Web UI will show all databases a user has access to and only offers +browser-based sessions for a select number of databases. For databases +that can't be accessed from the browser, the Web UI will show the `tsh` commands +that are needed to connect from a developer workstation. + +For more details on accessing databases directly from the browser, refer to +[Starting a database session](./web-ui.mdx#starting-a-database-session) in +the Teleport Web UI. + + + + +### Desktop access + +Desktop access is available through the Teleport Web UI. + +1. In your browser, navigate to your Teleport cluster (for example, +`https://example.teleport.sh`). +1. From the menu on the right, select **Desktops**. +1. Next to the desktop you want to access, click **CONNECT**. Select +or type in a username available to your Teleport user. +1. Teleport will open a new browser tab or window and begin the RDP session. +Note that you may need to wait a moment for Teleport to log you in as the specified user. + +## Next steps + +This guide covers the basic installation and access of Teleport for end users, +but the rest of the Connect your Client section provides more detailed information. 
+
+- `tsh` is the CLI tool for accessing resources through Teleport. With it,
+you can authenticate to Teleport, list available services, and connect to them
+either directly or through proxy tunnels.
+
+  Learn more about it in [Using the tsh Command Line Tool](./tsh.mdx).
+
+- Teleport Connect is a graphical utility for connecting to resources through
+  Teleport. You use it to connect to servers, databases, and Kubernetes
+  clusters. See [Using Teleport Connect](./teleport-connect.mdx).
+
+- [Access Teleport-protected databases with GUI clients](../third-party/gui-clients.mdx):
+  Details how to connect many popular database GUI clients through Teleport.
diff --git a/docs/pages/connect-your-client/teleport-connect.mdx b/docs/pages/connect-your-client/teleport-clients/teleport-connect.mdx
similarity index 76%
rename from docs/pages/connect-your-client/teleport-connect.mdx
rename to docs/pages/connect-your-client/teleport-clients/teleport-connect.mdx
index 27c14263f8b36..cad0e6ad61c62 100644
--- a/docs/pages/connect-your-client/teleport-connect.mdx
+++ b/docs/pages/connect-your-client/teleport-clients/teleport-connect.mdx
@@ -1,12 +1,17 @@
---
title: Using Teleport Connect
+sidebar_label: Teleport Connect
+sidebar_position: 3
description: Using Teleport Connect
+tags:
+  - conceptual
+  - platform-wide
---

-Teleport Connect provides easy and secure access to SSH servers, databases, applications, Windows desktops, and
-Kubernetes clusters.
+Teleport Connect provides easy and secure access to SSH servers, databases,
+applications, MCP servers, Windows desktops, and Kubernetes clusters.

-![resources tab in Teleport Connect](../../img/use-teleport/connect-cluster@2x.png)
+![resources tab in Teleport Connect](../../../img/use-teleport/connect-cluster@2x.png)

## Installation & upgrade

@@ -17,7 +22,7 @@ version.

Teleport Connect supports macOS, Linux, and Windows.

Double-click the downloaded `.dmg` file and drag the Teleport Connect icon to the Applications folder.
-To upgrade Teleport Connect to a newer version, drag the new version to the Applications folder. +To manually upgrade Teleport Connect to a newer version, drag the new version to the Applications folder. Download the DEB (Debian-based distros) or RPM (RHEL-based distros) package and install it using @@ -56,9 +61,90 @@ automatically handle the migration to a per-machine installation. +### Managed updates + +Teleport Connect supports [Teleport Client Tool Managed Updates](../../upgrading/client-tools-managed-updates.mdx). +When enabled in your cluster, the app checks for available updates at login, +downloads them automatically, and prompts you to restart. On Windows and Linux, +you may be asked to provide administrator credentials to complete the installation. + +You can also check for updates manually via "Check for Updates…" in the +additional actions menu. + + + To use managed updates on Linux, make sure the app was installed with a package + manager. The `.tar.gz` distribution is not compatible with managed updates. + + +#### Multiple clusters + +If you are connected to multiple clusters, Teleport Connect selects the highest client +version that is compatible across all clusters according to [Teleport component compatibility rules](../../upgrading/overview.mdx#component-compatibility). +If clusters require incompatible versions, or if you want a different cluster +to manage updates, you can manually choose the cluster in the +"Check for Updates…" dialog, available in the additional actions menu. + + + Multi-cluster support differs between Teleport Connect and the tsh/tctl CLI tools. + The CLI tools can switch versions on the fly to match the one required by each cluster. + Teleport Connect, as an installed desktop application, can only run a single version at a time. + + +#### Managed updates configuration + +Like the CLI tools, Teleport Connect respects the `TELEPORT_CDN_BASE_URL` and +`TELEPORT_TOOLS_VERSION` environment variables. 
+
+`TELEPORT_CDN_BASE_URL` lets you use custom builds or mirror the CDN in a private
+network (for example `https://example.com`).
+
+`TELEPORT_TOOLS_VERSION` controls client tool updates:
+- Set to `off` to completely disable managed updates.
+- Set to a specific version (e.g. `18.0.1`) to override the cluster-provided version
+(for example, to work around a known issue).
+
+To use an environment variable with Teleport Connect, open your terminal and launch
+the app from there, providing the variable.
+It will apply only for that session, so you can test settings or override
+cluster-provided updates without affecting your system-wide configuration.
+For a permanent setup, follow the instructions below:
+
+
+
+  To set the variable for your current login session, open the Terminal and type:
+  ```code
+  $ launchctl setenv TELEPORT_TOOLS_VERSION X.Y.Z
+  ```
+  Then run Teleport Connect as usual. This setting persists until you log out.
+
+
+  To set the variable permanently for your user account, open the Command Prompt and type:
+  ```code
+  $ setx TELEPORT_TOOLS_VERSION X.Y.Z
+  ```
+  Then run Teleport Connect as usual. To clear it, use:
+  ```code
+  $ setx TELEPORT_TOOLS_VERSION ""
+  ```
+
+
+  To set the variable permanently for the app, prepend the environment variable to
+  the `Exec=` line in the `/usr/share/applications/teleport-connect.desktop` file:
+  ```text
+  Exec=env TELEPORT_TOOLS_VERSION=X.Y.Z "/opt/Teleport Connect/teleport-connect" %U
+  ```
+
+
+
## User interface

-![user interface of Teleport Connect](../../img/use-teleport/connect-ui-overview@2x.png)
+![user interface of Teleport Connect](../../../img/use-teleport/connect-ui-overview@2x.png)

The top bar of Teleport Connect consists of:

@@ -76,6 +162,15 @@ The top bar of Teleport Connect consists of:
The **status bar** at the bottom displays **cluster breadcrumbs** in the bottom
left, indicating which cluster the current tab is bound to, and the **Share
Feedback** button in the bottom right.
+### Background mode +Teleport Connect can run in the background (via the menu bar or system tray), +so you can continue using VNet and connections to databases, Kubernetes clusters, +and apps without keeping the main window open. + +The first time you close the window, the app will ask you whether to run in background mode +or quit when closing the window. +To change this behavior later, see [the `runInBackground` configuration option](#configuration). + ## Connecting to an SSH server 1. Open a tab with cluster resources by clicking on the plus symbol at the right end of the tab bar. @@ -87,6 +182,9 @@ A new tab will open with a shell session on the chosen server. Alternatively, you can look for the server in the search bar and press `Enter` to connect to it. +If you'd prefer to connect to SSH servers with a third-party SSH client or your +editor's Remote Development feature, read the [VNet guide](./vnet.mdx) to learn how. + ## Opening a local terminal To open a terminal with a local shell session, either select "Open new terminal" from the additional @@ -97,7 +195,7 @@ this by setting the environment variables `TELEPORT_PROXY` and `TELEPORT_CLUSTER Additionally, Teleport Connect prepends the `PATH` environment variable in the session with the directory containing the tsh binary, even if [tsh is not globally available](#using-tsh-outside-of-teleport-connect). -When using [trusted clusters](../admin-guides/management/admin/trustedclusters.mdx), the cluster selector allows +When using [trusted clusters](../../zero-trust-access/deploy-a-cluster/trustedclusters.mdx), the cluster selector allows you to determine which cluster the shell session will be bound to. The selected cluster will be reflected in both the tab title and the status bar. @@ -108,7 +206,7 @@ To switch to a different shell, right-click on a terminal tab (either a local te From the menu, choose a predefined shell (on Windows) or provide the path to a custom shell. 
The selected shell will be used only for that tab unless you choose to set it as the default. -![Changing the terminal shell in Teleport Connect on Windows](../../img/use-teleport/connect-shell-switcher@2x.png) +![Changing the terminal shell in Teleport Connect on Windows](../../../img/use-teleport/connect-shell-switcher@2x.png) The configuration is stored under `terminal.shell` and `terminal.customShell` config properties. For more details, refer to the [Configuration](#configuration) section. @@ -145,8 +243,11 @@ A new tab will open with a new connection established between your device and th Alternatively, you can look for the database in the search bar and press `Enter` to connect to it. -This connection will remain active until you click the Close Connection button or close Teleport -Connect. The port number will persist between app restarts—you can set up your favorite client +The connection is active while Teleport Connect is running. +If [the background mode](#background-mode) is on, it remains active even after you close the window. +To end the connection, click Close Connection or quit the app. + +The port number will persist between app restarts—you can set up your client without worrying about the port suddenly changing. ### With a GUI client @@ -178,7 +279,7 @@ During a remote session, you can share your clipboard and a local directory, if Teleport Connect supports launching applications in the browser, as well as creating authenticated tunnels for web and TCP applications. -When it comes to [cloud APIs secured with Teleport](../enroll-resources/application-access/cloud-apis/cloud-apis.mdx), +When it comes to [cloud APIs secured with Teleport](../../enroll-resources/application-access/cloud-apis/cloud-apis.mdx), Teleport Connect supports launching the AWS console in the browser, but other CLI applications can be used only through tsh in [a local terminal tab](#opening-a-local-terminal). @@ -235,14 +336,38 @@ press `Enter` to set up a connection to it. 
A new tab will open with a new connection established between your device and the application.

-This connection will remain active until you click the Close Connection button or close Teleport
-Connect. The port number will persist between app restarts—you can set up your favorite client
+The connection is active while Teleport Connect is running.
+If [the background mode](#background-mode) is on, it remains active even after you close the window.
+To end the connection, click Close Connection or quit the app.
+
+The port number will persist between app restarts—you can set up your client
without worrying about the port suddenly changing.

The application connection tab shows an example command that can be used to query the application.
Requests sent to the displayed address will be proxied through an authenticated tunnel to the
application.

+## Connecting to an MCP server
+
+Teleport Connect can set up an authenticated tunnel on a random port on your
+device to an MCP server that uses the streamable-HTTP transport. All connections made to
+that tunnel will be authenticated with your credentials.
+
+1. Open a tab with cluster resources by clicking on the plus symbol at the end of the tab bar. You
+   can also press `Ctrl+Shift+T/Cmd+T` to achieve the same result.
+1. Click the Connect button of the MCP server resource.
+
+A new tab will open with a new connection established between your device and the MCP server.
+
+![MCP connection](../../../img/use-teleport/connect-mcp-http.png)
+
+The connection is active while Teleport Connect is running.
+If [the background mode](#background-mode) is on, it remains active even after you close the window.
+To end the connection, click Close Connection or quit the app.
+
+The port number will persist between app restarts—you can set up your client
+without worrying about the port suddenly changing.
+
## Connecting to multiple clusters

Teleport Connect allows you to log in to multiple clusters at the same time.
After logging in to @@ -283,19 +408,19 @@ the lifecycle of the agent. {/* The permission to read and update users is needed during setup but not for regular usage, that's why it's not listed in the partial. */} - Permissions to read and update user objects in the backend (verbs `read` and `update` for [the - `user` resource](../reference/access-controls/roles.mdx)). + `user` resource](../../reference/access-controls/roles.mdx)). - Permissions to read, update, and create roles in the backend (verbs `read`, `update`, and `create` - for [the `role` resource](../reference/access-controls/roles.mdx)). + for [the `role` resource](../../reference/access-controls/roles.mdx)). The agent runs as the current system user, not as root. Some features are thus not available, such -as logging in as other system users or [host user creation](../enroll-resources/server-access/guides/host-user-creation.mdx). +as logging in as other system users or [host user creation](../../enroll-resources/server-access/guides/host-user-creation.mdx). ### Setup and usage To begin the setup, click on the laptop icon in the top left and select "Connect My Computer". The new tab will guide you through an interaction-free setup. Click "Connect" to start the setup. -![Connect My Computer in the top bar](../../img/use-teleport/connect-my-computer-top-bar@2x.png) +![Connect My Computer in the top bar](../../../img/use-teleport/connect-my-computer-top-bar@2x.png) The setup creates a new role in the cluster which grants access to your device as the current system user. The role is then added to your user object. @@ -315,18 +440,20 @@ Next, the setup downloads a Teleport Agent for your platform and runs `teleport pointed at the current cluster. Once that is done, Connect My Computer starts the agent and waits for it to show up in the cluster as an SSH node. 
-![Connect My Computer setup](../../img/use-teleport/connect-my-computer-setup@2x.png) +![Connect My Computer setup](../../../img/use-teleport/connect-my-computer-setup@2x.png) After the agent joins the cluster, the tab transitions to showing the status of the agent. From here, you can connect to the node made available by the agent, stop and start the agent, as well as completely remove it. Manually logging out of the cluster will remove the agent as well. -![Connect My Computer status](../../img/use-teleport/connect-my-computer-status@2x.png) +![Connect My Computer status](../../../img/use-teleport/connect-my-computer-status@2x.png) + +Your computer will be shared while Teleport Connect is running. +If [the background mode](#background-mode) is on, it remains shared even after you close the window. +To stop sharing, quit the app or stop the agent through the Connect My Computer tab. -Your computer will be shared while Teleport Connect is open. To stop sharing, close Teleport Connect -or stop the agent through the Connect My Computer tab. Sharing will resume on app restart, unless -you stop the agent before exiting. The agent stops immediately if Teleport Connect unexpectedly -crashes. +Sharing will resume on app restart, unless you stop the agent before exiting. +The agent stops immediately if Teleport Connect unexpectedly crashes. ### Agent maintenance @@ -400,7 +527,7 @@ INFO [AUTH] Attempting registration via proxy server. auth/register.go:279 ERRO [PROC:1] Can not join the cluster as node, the token expired or not found. Regenerate the token and try again. pid:54364.1 service/connect.go:106 ``` -During the setup, Connect My Computer creates [a join token](../enroll-resources/agents/join-token.mdx) +During the setup, Connect My Computer creates [a join token](../../enroll-resources/agents/join-token.mdx) that is valid for up to five minutes. 
If the logs say that the token has expired, it most likely means that the initial attempt to join the cluster has failed and you started another one after more than five minutes. @@ -419,7 +546,7 @@ the agent was successfully configured. If you wish to prevent cluster users from using Connect My Computer, make sure they don't have permissions to create new join tokens. This is controlled by the `create` verb for the `token` resource. Either deny this permission explicitly or do not grant it in the first place. See [Access -Controls Reference Documentation](../reference/access-controls/roles.mdx) for more +Controls Reference Documentation](../../reference/access-controls/roles.mdx) for more details. Denying this permission will hide the icon in the top bar. Users who already set up agents will still be able to manage those agents, even after the denying @@ -428,7 +555,7 @@ a user set up an agent and then lock the user out of removing the agent. To instantly revoke access to agents that have already joined the cluster, look for nodes labeled with the `teleport.dev/connect-my-computer/owner` label and then [place -locks](../admin-guides/access-controls/guides/locking.mdx) on those nodes. +locks](../../identity-governance/locking.mdx) on those nodes. ```code $ tctl nodes ls -v --query='labels["teleport.dev/connect-my-computer/owner"] != ""' @@ -440,11 +567,17 @@ $ tctl lock --server-id=SERVER_UUID --message="Using Connect My Computer is forb Teleport Connect ships with its own bundled version of tsh. Teleport Connect will always use this version of tsh for any actions performed within the app. -Teleport Connect makes tsh available to use in your terminal of choice as well. Please note that at -the moment tsh and Teleport Connect operate on different sets of profiles, as Teleport Connect sets -a custom home location through [the `TELEPORT_HOME` environment -variable](../reference/cli/tsh.mdx#tsh-environment-variables). 
For example, logging in to a new cluster -through tsh will not make that cluster show up in Teleport Connect. +The bundled tsh tool is also available to use directly in your terminal. By default, Teleport Connect and tsh share +the same home directory, so any profiles you create or update in one tool are immediately reflected in the other. +If you prefer a different directory, you can configure it via the `tshHome` property in [the configuration](#configuration). + +Teleport Connect actively monitors the tsh directory, so updates made with tsh (such as logging in, logging out, +or adding new clusters) will automatically appear in the app. + + +Older versions of Teleport Connect used a separate set of profiles stored in an internal tsh directory. +On the first launch, the app will automatically migrate your profiles to the new location. + @@ -468,7 +601,7 @@ installation directory to the `Path` user environment variable. Teleport Connect supports authenticating with hardware-based keys. Keys are generated and stored directly on a hardware device, providing greater security than storing -them on a file system. For more details, see [Hardware Key Support guide](../admin-guides/access-controls/guides/hardware-key-support.mdx). +them on a file system. For more details, see [Hardware Key Support guide](../../zero-trust-access/authentication/hardware-key-support.mdx). Hardware key support requires users to use a PIV-compatible hardware key. @@ -478,7 +611,7 @@ them on a file system. For more details, see [Hardware Key Support guide](../adm To log in with a hardware key, your role or cluster configuration must enable it. 
Once enforced, Teleport Connect will require you to keep the hardware key plugged in and may also prompt for a tap and/or PIV PIN: -![Logging in with a hardware key](../../img/use-teleport/connect-hardware-keys@2x.png) +![Logging in with a hardware key](../../../img/use-teleport/connect-hardware-keys@2x.png) When logging in for the first time, you’ll be prompted to log in again immediately. @@ -504,6 +637,8 @@ Below is the list of the supported config properties. |-------------------------------|----------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------| | `theme` | `system` | Color theme for the app. Available modes: `light`, `dark`, `system`. | | `skipVersionCheck` | `false` | Skips the version check and hides the version compatibility warning when logging in to a cluster. | +| `runInBackground` | `true` on macOS/Windows, `false` on Linux | Keeps the app running in the menu bar/system tray even when the main window is closed. On Linux, displaying the system tray icon may require installing shell extensions. | +| `tshHome` | `~/.tsh` | Home location for tsh configuration and data. | | `terminal.fontFamily` | `Menlo, Monaco, monospace` on macOS `Consolas, monospace` on Windows `'Droid Sans Mono', monospace` on Linux | Font family for the terminal. | | `terminal.fontSize` | `15` | Font size for the terminal. | | `terminal.windowsBackend` | `auto` | `auto` uses modern [ConPTY](https://devblogs.microsoft.com/commandline/windows-command-line-introducing-the-windows-pseudo-console-conpty/) system if available, which requires Windows 10 (19H1) or above. Set to `winpty` to use winpty even if ConPTY is available. | @@ -525,7 +660,7 @@ Below is the list of the supported config properties. | `keymap.openClusters` | `Command+E` on macOS `Ctrl+Shift+E` on Windows/Linux | Shortcut to open the cluster selector. 
| | `keymap.openProfiles` | `Command+I` on macOS `Ctrl+Shift+I` on Windows/Linux | Shortcut to open the profile selector. | | `keymap.openSearchBar` | `Command+K` on macOS `Ctrl+Shift+K` on Windows/Linux | Shortcut to open the search bar. | -| `headless.skipConfirm` | `false` | Skips the confirmation prompt for Headless WebAuthn approval and instead prompts for WebAuthn immediately. | +| `headless.skipConfirm` | `false` | Skips the confirmation prompt for Headless Authentication approval and instead prompts for MFA immediately. | | `ssh.noResume` | `false` | Disables SSH connection resumption. | | `ssh.forwardAgent` | `false` | Enables agent forwarding. | | `sshAgent.addKeysToAgent` | `auto` | Controls how keys are added to a local SSH agent. "auto" adds the keys only if the agent supports SSH certificates, "no" never attempts to add them, "yes" always attempts to add them, "only" always attempts to add the keys to the agent but it does not save them on disk. | @@ -556,6 +691,20 @@ Available key codes: - `,`, `.`, `/`, `\`, `` ` ``, `-`, `=`, `;`, `'`, `[`, `]` - `Space`, `Tab`, `CapsLock`, `NumLock`, `ScrollLock`, `Backspace`, `Delete`, `Insert`, `Enter`, `Up`, `Down`, `Left`, `Right`, `Home`, `End`, `PageUp`, `PageDown`, `Escape`, `IntlBackslash` +### `tsh ssh` environment variables + +Under the hood, Teleport Connect uses `tsh ssh` to connect to SSH servers. As a result, +Teleport Connect will respect many [tsh environment variables](../../reference/cli/tsh.mdx#tsh-environment-variables) +related to `tsh ssh`. This can make it easier to share common settings between `tsh` and Teleport Connect. + +Below is a list of environment variables supported by Teleport Connect for SSH connections: + +| Property | Default | Allowed Value(s) | Description | +|----------|---------|------------------|-------------| +| `TELEPORT_MFA_MODE` | `auto` | `auto`, `cross-platform`, `platform`, `otp`, or `sso` | Preferred mode for MFA and Passwordless assertions. 
| +| `TELEPORT_ADD_KEYS_TO_AGENT` | `auto` | `auto`, `no`, `yes`, or `only` | Controls how keys are added to a local SSH agent. "auto" adds the keys only if the agent supports SSH certificates, "no" never attempts to add them, "yes" always attempts to add them, "only" always attempts to add the keys to the agent but it does not save them on disk. | +| `TELEPORT_NO_RESUME` | `false` | `true` or `false` | Disables SSH connection resumption. | + ## Telemetry (!docs/pages/includes/teleport-connect-telemetry.mdx!) @@ -582,27 +731,30 @@ To reset the state related to a particular cluster: 1. Close Teleport Connect. 1. Open Teleport Connect, then log back in to the cluster. -To completely wipe all app state: +To completely wipe all app state, close Teleport Connect and run the following commands: -1. Close Teleport Connect. -1. Remove the internal `tsh` folder and the `app_state.json` file to log out of all clusters and - clear all remembered tabs and connections. +{/* TODO: DELETE IN 20.0.0: */} +{/* Remove cleaning up the internal Teleport Connect/tsh directory from the commands. */} +{/* Also remove the tsh home migration in Connect's main.ts */} ```code $ rm -rf ~/Library/Application\ Support/Teleport\ Connect/{tsh,app_state.json} +$ rm -rf ~/.tsh ``` ```code $ rm -rf ~/.config/Teleport\ Connect/{tsh,app_state.json} +$ rm -rf ~/.tsh ``` ```code -$ rmdir /s /q C:\Users\%UserName%\AppData\Roaming\"Teleport Connect"\tsh $ del C:\Users\%UserName%\AppData\Roaming\"Teleport Connect"\app_state.json +$ rmdir /s /q C:\Users\%UserName%\AppData\Roaming\"Teleport Connect"\tsh +$ rmdir /s /q C:\Users\%UserName%\.tsh ``` @@ -661,7 +813,7 @@ see them only after you restart the app. You can open Teleport Connect in insecure mode, which skips TLS certificate verification when talking to a Teleport Proxy Service. This is useful in [test environments with self-signed -certificates](../admin-guides/management/admin/self-signed-certs.mdx) or for demo purposes. 
We do not recommend +certificates](../../zero-trust-access/deploy-a-cluster/self-signed-certs.mdx) or for demo purposes. We do not recommend using this mode in production. @@ -699,23 +851,35 @@ Remove Teleport Connect for macOS from the Applications directory with this comm $ sudo rm -rf /Applications/Teleport\ Connect.app ``` -To remove the local user data directory: +To remove the local user data directory which holds app configuration, state, and logs: ```code $ rm -rf ~/Library/Application\ Support/Teleport\ Connect ``` +To remove the tsh directory which holds cluster credentials (note: this directory is also used by the tsh tool): + +```code +$ rm -rf ~/.tsh +``` + (!docs/pages/includes/uninstall-teleport-connect-windows.mdx!) -To remove the local user data directory: +To remove the local user data directory which holds app configuration, state, and logs: ```powershell $ rmdir /s /q "%APPDATA%\Teleport Connect" ``` +To remove the tsh directory which holds cluster credentials (note: this directory is also used by the tsh tool): + +```code +$ rmdir /s /q C:\Users\%UserName%\.tsh +``` + @@ -731,6 +895,12 @@ For RPM installations uninstall Teleport Connect using YUM: $ sudo yum remove teleport-connect ``` +To remove the tsh directory which holds cluster credentials (note: this directory is also used by the tsh tool): + +```code +$ rm -rf ~/.tsh +``` + Installs based on a tarball should remove the `teleport-connect` directory and any copied/linked executables. 
diff --git a/docs/pages/connect-your-client/teleport-clients/tsh.mdx b/docs/pages/connect-your-client/teleport-clients/tsh.mdx new file mode 100644 index 0000000000000..cddc2727d080f --- /dev/null +++ b/docs/pages/connect-your-client/teleport-clients/tsh.mdx @@ -0,0 +1,1196 @@ +--- +title: Using the tsh Command Line Tool +sidebar_label: tsh +sidebar_position: 1 +description: This reference shows you how to use Teleport's tsh tool to authenticate to a cluster, explore your infrastructure, and connect to a resource. +tags: + - conceptual + - zero-trust +--- + +This guide shows you how to use the Teleport client tool `tsh` to connect to +infrastructure resources in your cluster. + +You will learn how to: + +- List, access, and interact with Teleport-connected resources. +- Share interactive shell sessions with colleagues or join someone else's + session. +- List and replay recorded interactive sessions. + +In addition to this document, you can type `tsh` into your terminal for +the CLI reference, or explore the [`tsh` CLI +Reference](../../reference/cli/tsh.mdx#tsh-login) in the documentation. + +{/*TODO: update this to the Access Request user guide when this is available*/} + +You can also use `tsh` to manage Access Requests. For instructions, see +[Access Requests](../../identity-governance/access-requests/access-requests.mdx). + +## Installing tsh + +Follow the instructions below to install the `tsh` binary. + +1. Determine the version of `tsh` to install. We recommend installing the same + major version as the version used in your Teleport cluster. Either: + + - In the Web UI, select your username in the upper right, then click **Help & + Support**. You will see the version of your Teleport cluster under + **Cluster Information**. + + - Use `curl` and `jq`. Replace + with your Proxy Service address (e.g. `mytenant.teleport.sh` for Teleport + Enterprise Cloud): + + ```code + $ curl https:///webapi/find | jq '.server_version' + "(=teleport.version=)" + ``` + +1. 
Install a package that includes `tsh`: + + (!docs/pages/includes/install-tsh.mdx!) + +## Basic usage + +`tsh` allows you to view infrastructure resources that you can access in +Teleport and connect to those resources. This section shows you the main +workflow for using `tsh` to access an infrastructure resource. + +### Log in to Teleport + +Log in to your Teleport cluster, assigning +to the domain name of the Teleport Proxy Service in your cluster and + to your Teleport username: + +```code +$ tsh login --proxy= --user= +``` + +This command retrieves the user's certificates and saves them into +`~/.tsh/`. + +### List resources that you can access + +In a Teleport cluster, all Teleport Agents periodically ping the cluster's Auth +Service and update their status. This allows Teleport users to see which +Teleport-protected resources are online. + +This command lists all connected servers in the cluster that you have permission +to access: + +```code +$ tsh ls + +# Node Name Address Labels +# --------- ------- ------ +# turing ⟵ Tunnel os=linux +# graviton 10.1.0.7:3022 os=osx +``` + +You can use `tsh` commands to list other kinds of resources. For more +information, see the `tsh` reference entries for the following resource types: + +|Resource|Command| +|---|---| +|[Applications](../../reference/cli/tsh.mdx#tsh-apps-ls)|`tsh apps ls`| +|[Databases](../../enroll-resources/database-access/reference/cli.mdx#tsh-db-ls)|`tsh db ls`| +|[Kubernetes clusters](../../reference/cli/tsh.mdx#tsh-kube-ls)|`tsh kube ls`| +|[Servers](../../reference/cli/tsh.mdx#tsh-ls)|`tsh ls`| + +You can list Windows desktops in Teleport Connect and the Teleport Web +UI. + +`tsh ls` commands can apply a filter based on the resource's +labels. For example, to only show servers with the `os` label set to `osx`, +you can run the following command: + +```code +$ tsh ls os=osx + +# Node Name Address Labels +# --------- ------- ------ +# graviton 10.1.0.7:3022 os=osx +``` + +
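In addition to `name=value` arguments, `tsh ls` accepts a predicate expression through the `--query` flag, using the same predicate language referenced later in this guide. The label names and values in this sketch are examples:

```code
$ tsh ls --query='labels.os == "osx" && labels.env == "staging"'
```

This form is useful when a filter needs boolean logic that plain `name=value` arguments cannot express.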
+Not seeing resources? + +(!docs/pages/includes/node-logins.mdx!) + +
+ +### Connect to a resource + +Once you have determined a resource to connect to, the next step is to access +the resource with `tsh`, which handles authentication to your Teleport cluster +and routing traffic to and from local clients. + +You can only connect to a Windows desktop using the Teleport Web UI or Teleport +Connect. + +Select a resource type for instructions on connecting to the resource with +`tsh`: + + + + +Run the `tsh ssh` command to connect to a server, specifying the login to assume +on the server you are connecting to. The following command connects to the +server `mynode` as user `root`: + +```code +$ tsh ssh root@mynode +``` + +`tsh ssh` takes the same arguments and flags as the OpenSSH client. For more +information on connecting to servers, see [Connecting to SSH +servers](#connecting-to-ssh-servers). + + + + +You can access web applications registered with Teleport through Teleport +Connect and the Teleport Web UI. + +You can also use `tsh` to start a local proxy server and connect to your +application with your client of choice. The local proxy manages Teleport-issued +certificates automatically. This example connects to the app `myapp`: + +```code +$ tsh proxy app myapp +``` + +You can avoid the need to manually start the local proxy for application clients +by using VNet. See [Using VNet](./vnet.mdx) for instructions. 
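If a client needs the local proxy to listen on a predictable port, you can pin it. The port number, app name, and request path below are illustrative assumptions rather than defaults:

```code
# Start the local proxy for the app "myapp" on local port 8080:
$ tsh proxy app myapp --port 8080

# In another terminal, point your client at the local proxy:
$ curl http://localhost:8080/
```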
+ +For APIs protected with Teleport, use `tsh apps login` to receive a certificate +authorized to access the application, which, in this example, is `grafana`: + + ```code + $ tsh apps login grafana + ``` + +You can then access the application with a command like the following: + +```code +$ curl \ + --cert /Users/alice/.tsh/keys/teleport.example.com/alice-app/cluster-name/grafana-x509.pem \ + --key /Users/alice/.tsh/keys/teleport.example.com/alice \ + https://grafana.teleport.example.com:3080 +``` + +Since the local proxy manages Teleport-issued application certificates +automatically, we recommend that you use it instead of `tsh apps login` unless +it is necessary to manually reference TLS credentials. + +For cloud APIs protected by Teleport, you can use the following `tsh` commands +to execute a single API client command or start a local proxy that API client +applications can proxy traffic through: + +|Cloud API|Single command|Local proxy| +|---|---|---| +|AWS|`tsh aws`|[tsh proxy aws](../../reference/cli/tsh.mdx#tsh-proxy-aws)| +|Azure|[tsh az](../../enroll-resources/application-access/reference.mdx#tsh-az)|[tsh proxy azure](../../reference/cli/tsh.mdx#tsh-proxy-azure)| +|Google Cloud|[tsh gcloud](../../reference/cli/tsh.mdx#tsh-gcloud)|[tsh proxy gcloud](../../reference/cli/tsh.mdx#tsh-proxy-gcloud)| + + + + +To receive a certificate from Teleport signed for your database user and connect +to the database, run the following command, assuming `mydb` is the name of the +database registered with Teleport and `my-database-user` is the name of your +database user: + +```code +$ tsh db connect --db-user=my-database-user mydb +``` + +Once you have finished your database session, you can remove the certificate for +the database by running the following command: + +```code +$ tsh db logout +``` + +Some databases require `tsh` to start a local proxy server to forward traffic +from your workstation.
`tsh db connect` starts the local proxy if it needs to, +but you can also start the proxy yourself. This is useful for running graphical +database clients. For example, you can start a local proxy with an authenticated +tunnel using the following command: + +```code +$ tsh proxy db --tunnel mydb +``` + +For more information, see [GUI Clients](../third-party/gui-clients.mdx). + + + + +To access a Teleport-connected Kubernetes cluster, run the following command to +update your kubeconfig with a certificate signed by Teleport. The following +command logs in to the cluster mycluster: + +```code +$ tsh kube login mycluster +``` + +Once you have logged into the cluster, run `tsh kubectl` to execute `kubectl` +commands. Teleport can allow or deny access to specific Kubernetes cluster +resources. `tsh kubectl` detects whether the command has failed due to +insufficient permissions and, if so, submits an Access Request for the target +Kubernetes resource. + +For example, the following command executes the `date` command in pod `my-pod` +and submits an Access Request if the user does not have permissions to access +that pod: + +```code +$ tsh kubectl exec my-pod -- date +``` + + + + +## Logging in + +To retrieve a user's certificate, execute: + + + + +```code +# Full form: +$ tsh login --proxy=proxy_host: + +# Using default ports: +$ tsh login --proxy=work.example.com + +# Using custom HTTPS port: +$ tsh login --proxy=work.example.com:5000 +``` + + + + +```code +# Full form: +$ tsh login --proxy=proxy_host: + +$ tsh login --proxy=mytenant.teleport.sh +``` + + + + + +[CLI Docs - tsh login](../../reference/cli/tsh.mdx#tsh-login) + +| Port | Description | +| - | - | +| https_proxy_port | the HTTPS port the proxy host is listening to (defaults to `443` and `3080`). | + +The login command retrieves a user's certificate and stores it in `~/.tsh` +directory as well as in the [ssh agent](https://en.wikipedia.org/wiki/Ssh-agent) if there is one running. 
+ +This allows you to authenticate just once, maybe at the beginning of the day. Subsequent `tsh ssh` commands will run without asking for credentials until the temporary certificate expires. By default, Teleport issues user certificates with a time to live (TTL) of 12 hours. + + + It is recommended to always use [`tsh login`](../../reference/cli/tsh.mdx#tsh-login) before using any other `tsh` commands. This allows users to omit the `--proxy` flag in subsequent `tsh` commands. For example, `tsh ssh user@host` will work. + + +A Teleport cluster can be configured for multiple user identity sources. For example, a cluster may have a local user called `admin` while regular users should [authenticate via GitHub](../../zero-trust-access/sso/integrate-idp/github-sso.mdx). In this case, you have to pass the `--auth` flag to `tsh login` to specify which identity source to use: + + + + +```code +# Log in using the local Teleport 'admin' user: +$ tsh --proxy=proxy.example.com --auth=local --user=admin login + +# Log in using GitHub as an SSO provider, assuming the GitHub connector is called "github" +$ tsh --proxy=proxy.example.com --auth=github login + +# Don't open the system default browser when logging in +$ tsh login --proxy=work.example.com --browser=none +``` + + + + +```code +# Log in using the local Teleport 'admin' user: +$ tsh --proxy=mytenant.teleport.sh --auth=local --user=admin login + +# Log in using GitHub as an SSO provider, assuming the GitHub connector is called "github" +$ tsh --proxy=mytenant.teleport.sh --auth=github login +``` + + + + + +When using an external identity provider to log in, `tsh` will need to open a +web browser to complete the authentication flow. By default, `tsh` will use your +system's default browser.
If you wish to suppress this behavior, you can use the +`--browser=none` flag: + + + + +```code +# Don't open the system default browser when logging in +$ tsh login --proxy=mytenant.teleport.sh --browser=none +``` + + + + + +In this situation, a link will be printed on the screen. You can copy and paste this link into +a browser of your choice to continue the login flow. + +[CLI Docs - tsh login](../../reference/cli/tsh.mdx#tsh-login) + +### Inspecting an SSH certificate + +To inspect the SSH certificates in `~/.tsh`, a user may execute the following +command: + + + + +```code +$ tsh status + +# > Profile URL: https://proxy.example.com:3080 +# Logged in as: johndoe +# Cluster: proxy.example.com +# Roles: access, auditor, editor +# Logins: root, admin, guest +# Kubernetes: enabled +# Valid until: 2017-04-25 15:02:30 -0700 PDT [valid for 1h0m0s] +# Extensions: permit-agent-forwarding, permit-port-forwarding, permit-pty +``` + + + + +```code +$ tsh status + +# > Profile URL: https://mytenant.teleport.sh:443 +# Logged in as: johndoe +# Cluster: mytenant.teleport.sh +# Roles: access, editor, auditor +# Logins: root, admin, guest +# Kubernetes: enabled +# Valid until: 2017-04-25 15:02:30 -0700 PDT [valid for 1h0m0s] +# Extensions: permit-agent-forwarding, permit-port-forwarding, permit-pty +``` + + + + + +[CLI Docs - tsh status](../../reference/cli/tsh.mdx#tsh-status) + +### Identity files + +[`tsh login`](../../reference/cli/tsh.mdx#tsh-login) can also save the user certificate into a +file: + + + + +```code +# Authenticate the user against proxy.example.com and save the user +# certificate to a file named joe +$ tsh login --proxy=proxy.example.com --out=joe + +# Use the identity file joe to log in to the server 'db' +$ tsh ssh --proxy=proxy.example.com -i joe joe@db +``` + + + + +```code +# Authenticate the user against mytenant.teleport.sh and save the user +# certificate to a file named joe +$ tsh login --proxy=mytenant.teleport.sh --out=joe + +# Use the identity file joe to log in to the server 'db' +$ tsh ssh 
--proxy=mytenant.teleport.sh -i joe joe@db +``` + + + + + +By default, the `--out` flag will create an identity file suitable for `tsh -i`. +If compatibility with OpenSSH is needed, `--format=openssh` must be specified. +In this case, the identity will be saved into two files, `joe` and +`joe-cert.pub`: + + + + +```code +$ tsh login --proxy=proxy.example.com --out=joe --format=openssh +$ ls -lh + +# total 8.0K +# -rw------- 1 joe staff 1.7K Aug 10 16:16 joe +# -rw------- 1 joe staff 1.5K Aug 10 16:16 joe-cert.pub +``` + + + + +```code +$ tsh login --proxy=mytenant.teleport.sh --out=joe --format=openssh +$ ls -lh + +# total 8.0K +# -rw------- 1 joe staff 1.7K Aug 10 16:16 joe +# -rw------- 1 joe staff 1.5K Aug 10 16:16 joe-cert.pub +``` + + + + + +### SSH certificates for automation + +Regular users of Teleport must request an auto-expiring SSH certificate, usually +every day. This doesn't work for non-interactive scripts, like cron jobs or a +CI/CD pipeline. + +The most secure way to generate certificates for automation purposes is to use +[Machine & Workload Identity](../../machine-workload-identity/machine-workload-identity.mdx). +This ensures that your automation is taking advantage of the security properties +of short-lived credentials. + +If Machine & Workload Identity does not support your preferred CI/CD platform, +you can create a local user for use in automation and request a long-lived +certificate for that user. + +In this example, we're creating a certificate with a TTL of one hour for the +`jenkins` user and storing it in a `jenkins.pem` file, which can be later used with +`-i` (identity) flag for `tsh`. + + + + +```code +# Log in to your cluster with tsh so you can use tctl from your local machine. +# You can also run tctl on your Auth Service host without running "tsh login" +# first. 
+$ tsh login --proxy=teleport.example.com --user=myuser +$ tctl auth sign --ttl=1h --user=jenkins --out=jenkins.pem +``` + + + + +```code +# Log in to your Teleport Cloud cluster so you can use tctl locally. +$ tsh login --proxy=myinstance.teleport.sh --user=email@example.com +$ tctl auth sign --ttl=1h --user=jenkins --out=jenkins.pem +``` + + + + + +[CLI Docs - tctl auth sign](../../reference/cli/tctl.mdx#tctl-auth-sign) + +Now `jenkins.pem` can be copied to the Jenkins server and passed to the `-i` +(identity file) flag of `tsh`. + +`tctl auth sign` is an admin's equivalent of `tsh login --out` and allows for +unrestricted certificate TTL values. + +## Connecting to SSH servers + +This section provides detailed information about using `tsh` to connect to SSH +servers registered with your Teleport cluster. + +### User identities + +A user identity in Teleport exists in the scope of a cluster. The member nodes +of a cluster may have multiple OS users on them. A Teleport administrator +assigns allowed logins to every Teleport user account. + +When logging into a remote node, you will have to specify both the Teleport +login and the OS login. A Teleport identity will have to be passed via the +`--user` flag while the OS login will be passed as `login@host` using syntax +compatible with the traditional `ssh` command. + +The following command authenticates against the + cluster and logs into the server `node` as +`root`: + +```code +$ tsh ssh --proxy= --user=joe root@node +``` + +### Proxy ports + +By default, the Teleport Proxy Service listens on port `3080`. + +If a Teleport Proxy Service instance is configured to listen on non-default +ports, they must be specified via `--proxy` flag as shown: + +```code +$ tsh --proxy=proxy.example.com:5000 +``` + +This `tsh` command will use port `5000` of the Proxy Service. + +### Port forwarding + +`tsh ssh` supports the OpenSSH `-L` flag which forwards incoming +connections from localhost to the specified remote host:port. 
The syntax of `-L` +flag is as follows, where "bind_ip" defaults to `127.0.0.1`: + +```code +$ -L [bind_ip]:listen_port:remote_host:remote_port +``` + +Example: + +```code +$ tsh ssh -L 5000:web.remote:80 node +``` + +This will connect to remote server `node` via the Proxy Service, then open a +listening socket on `localhost:5000`. Finally, it will forward all incoming +connections to `web.remote:80` via this SSH tunnel. + +It is often convenient to establish port forwarding, execute a local command +which uses the connection, and then disconnect. You can do this with the `--local` +flag. + +Example: + +```code +$ tsh ssh -L 5000:google.com:80 --local node curl http://localhost:5000 +``` + +This command: + +- Connects to `node`. +- Binds the local port `5000` to port `80` on `google.com`. +- Executes `curl` command locally, which results in `curl` hitting `google.com:80` via `node`. + +### SSH agent support + +If there is an [ssh agent](https://en.wikipedia.org/wiki/Ssh-agent) running, +`tsh login` will store the user certificate in the agent. This can be verified +via: + +```code +$ ssh-add -L +``` + +The SSH agent can be used to feed the certificate to other SSH clients, for example +to OpenSSH (`ssh`). + +If you wish to disable SSH agent integration, pass `--no-use-local-ssh-agent` +to `tsh`. You can also set the `TELEPORT_USE_LOCAL_SSH_AGENT` environment +variable to `false` in your shell profile to make this permanent. + + +### SSH jump host + +While implementing `ProxyJump` for Teleport, we have extended the feature to `tsh`. + + + + +```code +$ tsh ssh -J proxy.example.com telenode +``` + + + + +```code +$ tsh ssh -J mytenant.teleport.sh telenode +``` + + + + + +Known limitations: + +- Only one jump host is supported (`-J` supports chaining that Teleport does not utilize) and `tsh` will return with error in the case of two jump hosts, i.e. `-J proxy-1.example.com,proxy-2.example.com` will not work. 
+- When `tsh ssh -J user@proxy` is used, it overrides the SSH proxy defined in the tsh profile, and port forwarding is used instead of the existing Teleport proxy subsystem. + +### Resolving server names + +`tsh` supports multiple methods to resolve remote Node names. + +- **Traditional**: by IP address or via DNS. +- **Nodename setting**: the `teleport` daemon supports the `nodename` flag, which allows Teleport administrators to assign alternative Node names. +- **Labels**: you can address a Node by a `name=value` pair. + +If we have two Nodes, one with the `os:linux` label and one with the `os:osx` label, we +can log in to the OSX Node with: + +```code +$ tsh ssh os=osx +``` + +This only works if there is only one remote node with the `os:osx` label, but +you can still execute commands via SSH on multiple Nodes using labels as a +selector. This command will update all system packages on machines that run +Ubuntu: + +```code +$ tsh ssh os=ubuntu apt-get update -y +``` + +### Short-lived sessions + +The default TTL of a Teleport user certificate is 12 hours. This can be modified +at login with the `--ttl` flag. This command logs you into the cluster with a +very short-lived (1 minute) temporary certificate: + +```code +$ tsh --ttl=1 login +``` + +You will be logged out after one minute, but if you want to log out immediately, +you can always run: + +```code +$ tsh logout +``` + +### Connecting to SSH clusters behind firewalls + +Teleport supports creating clusters of servers located behind firewalls +**without any open listening TCP ports**. This works by creating reverse SSH +tunnels from behind-firewall environments into a Teleport Proxy Service you have access to. + +To learn more about setting up a trust relationship between clusters behind firewalls, see +[Configure Trusted Clusters](../../zero-trust-access/deploy-a-cluster/trustedclusters.mdx). + + + Trusted clusters are only available for self-hosted Teleport clusters. 
+ + +Assuming the Teleport Proxy Service called `work` is configured with a few trusted +clusters, you can use the `tsh clusters` command to see a list of all the trusted clusters on the server: + +```code +$ tsh --proxy=work clusters + +# Cluster Name Status +# ------------ ------ +# staging online +# production offline +``` + +[CLI Docs - tsh clusters](../../reference/cli/tsh.mdx#tsh-clusters) + +Now you can use the `--cluster` flag with any `tsh` command. For example, to list SSH nodes that are members of the `production` cluster, simply run: + +```code +$ tsh --proxy=work ls --cluster=production + +# Node Name Node ID Address Labels +# --------- ------- ------- ------ +# db-1 xxxxxxxxx 10.0.20.31:3022 kernel:4.4 +# db-2 xxxxxxxxx 10.0.20.41:3022 kernel:4.2 +``` + +Similarly, if you want to SSH into `db-1` inside the `production` cluster: + +```code +$ tsh --proxy=work ssh --cluster=production db-1 +``` + +This is possible even if Nodes in the `production` cluster are located behind a +firewall without open ports. This works because the `production` cluster +establishes a reverse SSH tunnel back into the Proxy Service called `work`, and +this tunnel is used to establish inbound SSH connections. + +### X11 forwarding + +In order to run graphical programs within an SSH session, such as an IDE like +Visual Studio Code, you'll need to request X11 forwarding for the session with +the `-X` flag. + +```code +$ tsh ssh -X node01 +``` + +X11 forwarding provides the server with secure access to your local X Server +so that it can communicate directly with your local display and I/O devices. + + + The `-Y` flag can be used to start Trusted X11 forwarding. This is needed + in order to enable more "unsafe" features, such as running clipboard or + screenshot utilities like `xclip`. However, it provides the server with + unmitigated access to your local X Server and puts your local machine at + risk of X11 attacks, so it should only be used with extreme caution. 
+ + +In order to use X11 forwarding, you'll need to enable it on the Teleport Node. +You'll also need to ensure that your user has the `permit_x11_forwarding` role option: + +```code +$ tsh status +> Profile URL: https://proxy.example.com:3080 + Logged in as: dev + ... + Extensions: permit-X11-forwarding +``` + + +## Interacting with SSH servers + +In this section, you will find details on interacting with SSH servers with +`tsh`. + +### Interactive shell + +To launch an interactive shell on a remote Node or to execute a command, use +`tsh ssh`. + +`tsh` tries to mimic the `ssh` experience as much as possible, so it supports +the most popular `ssh` flags like `-p`, `-l` or `-L`. For example, if you have +the following alias defined in your `~/.bashrc`: `alias ssh="tsh ssh"` then you +can continue using familiar SSH syntax: + + + + +```code +# Have this alias configured, perhaps via ~/.bashrc +$ alias ssh="/usr/local/bin/tsh ssh" + +# Log in to a cluster and retrieve your SSH certificate: +$ tsh --proxy=proxy.example.com login + +# These commands execute `tsh ssh` under the hood: +$ ssh user@node +$ ssh -p 6122 user@node ls +$ ssh -o ForwardAgent=yes user@node +$ ssh -o AddKeysToAgent=yes user@node +``` + + + + +```code +# Have this alias configured, perhaps via ~/.bashrc +$ alias ssh="/usr/local/bin/tsh ssh" + +# Log in to a cluster and retrieve your SSH certificate: +$ tsh --proxy=mytenant.teleport.sh login + +# These commands execute `tsh ssh` under the hood: +$ ssh user@node +$ ssh -p 6122 user@node ls +$ ssh -o ForwardAgent=yes user@node +$ ssh -o AddKeysToAgent=yes user@node +``` + + + + +### Copying files + +To securely copy files to and from cluster Nodes, use the `tsh scp` command.
It +is designed to mimic OpenSSH's `scp` command as much as possible: + +```code +$ tsh scp example.txt root@node:/path/to/dest +``` + +Again, you may want to create a bash alias like `alias scp="tsh --proxy=work +scp"` and use the familiar syntax: + +```code +$ scp -P 61122 -r files root@node:/path/to/dest +``` + +Teleport supports both the SCP and SFTP protocols. +OpenSSH `scp` or `sftp` commands can both be used in place of `tsh scp` +if desired. + +### Sharing sessions + +Suppose you are trying to troubleshoot a problem on a remote server. Sometimes +it makes sense to ask another team member for help. Traditionally, this could be +done by letting them know which host you're on, having them SSH in, start a +terminal multiplexer like `screen`, and join a session there. + +Teleport makes this more convenient. Let's log in to a server named `luna` +and ask Teleport for our current session status: + +```code +$ tsh ssh luna +# on host luna +$ teleport status + +# User ID : joe, logged in as joe from 10.0.10.1 43026 3022 +# Session ID : 7645d523-60cb-436d-b732-99c5df14b7c4 +# Session URL: https://work:3080/web/sessions/7645d523-60cb-436d-b732-99c5df14b7c4 +``` + +Now you can invite another user in the `work` cluster to join your session. You can share the +URL for access through a web browser, or you can share the session ID, and the +other user can join you through their terminal by typing: + +```code +$ tsh join +``` + + + Joining sessions requires special permissions that need to be set up by your cluster administrator. + Refer them to the [Moderated Sessions guide](../../zero-trust-access/authentication/joining-sessions.mdx) for more information on configuring join permissions. + + +You can also list active sessions with the `tsh sessions ls` command. + + + Joining sessions is not supported in recording proxy mode (where `session_recording` is set to `proxy`). + + +## Proxying Git commands + +(!docs/pages/connect-your-client/includes/tsh-git.mdx!) 
+ +## Debug logs + +Adding the `--debug` flag to a command or setting the `TELEPORT_DEBUG` env var to `1` makes tsh +print debug logs to standard output. + +### Unified logging system on macOS + +On macOS, the `--os-log` flag can be used instead of `--debug` to send debug logs to [the unified +logging system](https://support.apple.com/en-gb/guide/console/welcome/mac). This behavior can also be controlled through the `TELEPORT_OS_LOG` env var. + +To stream logs in a separate shell session: + +```code +$ log stream --predicate 'subsystem CONTAINS "tsh"' --style syslog --level debug +``` + +To dump logs captured so far to a file: + +```code +$ log show --predicate 'subsystem CONTAINS "tsh"' --style syslog --info --debug > tsh.log +``` + +The logs can also be inspected in [the Console +app](https://support.apple.com/en-gb/guide/console/cnsl1012/1.1/mac/15.0). Info and debug logs are +not shown by default, so make sure to select "Include Info Messages" and "Include Debug Messages" +from the Action menu. + +## Examining recorded sessions + +You can use `tsh` to examine sessions that users have completed in resources +protected by Teleport. This section explains how to list and play Teleport +session recordings with `tsh`. + +To view the recording, select **Audit** in the Teleport Web UI, then click **Session Recordings** in the menu. 
+ +### Listing recordings + +Run the following command to review recorded sessions: + +```code +$ tsh recordings ls +ID Type Participants Hostname Timestamp +------------------------------------ ---- ------------ -------- ------------------- +b0a04442-70dc-4be8-9308-7b7901d2d600 ssh jeff dev Nov 26 16:36:16 UTC +c0a02222-70dc-4be8-9308-7b7901d2d600 kube alice Nov 26 20:36:16 UTC +d0a04442-70dc-4be8-9308-7b7901d2d600 ssh navin test Nov 26 16:36:16 UTC +``` + +### Playing recordings + +To play a session recording, run the `tsh play` command with the ID of a session +as returned by `tsh recordings ls`: + +```code +$ tsh play c0a02222-70dc-4be8-9308-7b7901d2d600 +``` + +You can also run `tsh play` with the path to a TAR file that contains a session +recording: + +```code +$ tsh play ./my-recording.tar +``` + +To retrieve a TAR file containing a session recording, you must have access to +the session recording backend. This requires either a self-hosted Teleport +cluster or [external audit +storage](../../zero-trust-access/management/external-audit-storage.mdx). + +The `tsh play` command can print recordings in several formats, depending on the +kind of resource the recorded session interacts with. To choose a format, use +the `--format` flag of `tsh play`: + +| `--format` value | Supported resources | Description | +|------------------|---------------------|-------------| +| `pty` (default) | Servers, Kubernetes clusters | `tsh` opens a pseudo-terminal to play each command executed in the session. | +| `text` | Servers, Kubernetes clusters | `tsh` dumps the entire recording directly to standard out. Timing data is ignored. | +| `json` | Servers, Kubernetes clusters, applications, databases | `tsh` prints a JSON-serialized list of audit events, separated by newlines. | +| `yaml` | Servers, Kubernetes clusters, applications, databases | `tsh` prints a YAML-serialized list of audit events, separated by `---` characters. 
| + +The playback speed can be customized with the `--speed` flag, which must be +one of `0.5x`, `1x`, `2x`, `4x`, or `8x`. + +```code +$ tsh play --speed=8x UUID +``` + +Another way to speed up playback is to skip idle time in the recording with the +`--skip-idle-time` flag. When enabled, `tsh` will respect the configured playback +speed during active sections of the recording, but it will skip over larger periods +of inactivity. + +## tsh configuration files + +You can use a configuration file to control the behavior of `tsh`. The scope of +the configuration file depends on its location: + +- `/etc/tsh.yaml` is the default location for global, shared configuration + settings. You can override the location with the `TELEPORT_GLOBAL_TSH_CONFIG` + environment variable. +- `$TELEPORT_HOME/config/config.yaml` is the default location for user-specific + configuration settings. The default location for `TELEPORT_HOME` is `~/.tsh`. + +`tsh` merges the settings from both configuration file locations, with the user +configuration settings taking precedence. + +### Extra proxy headers + +The `tsh` configuration file enables you to specify HTTP headers to be +included in requests to Teleport Proxy Servers with addresses matching +the `proxy` field. + +```yaml +add_headers: + - proxy: "*.example.com" # matching proxies will have headers included + headers: # headers are pairs to include in the http headers + foo: bar # Key/Value to be included in the http request +``` + +For example, adding HTTP headers can be useful if an intermediate HTTP proxy is +in place that requires setting an authentication token: + +```yaml +add_headers: + - proxy: "*.infra.corp.xyz" + headers: + "Authorization": "Bearer tokentokentoken" +``` + +### Aliases + +You can configure `tsh` to define aliases, custom commands and command-specific +flag defaults. Using aliases, you can run frequently used `tsh` commands more +easily.
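Some of the alias examples below shell out through `bash -c`, where the arguments that follow the command string bind to `$0`, `$1`, and so on; that binding is how alias arguments reach your command. This plain-bash sketch requires no `tsh`:

```code
# Arguments after the command string bind to $0, $1, ...
bash -c 'echo first=$0 second=$1' alpha beta
# prints: first=alpha second=beta
```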
+ +Aliases allow you to define custom commands or change the default flag values +for existing commands using the following syntax in `tsh` configuration files: + +```yaml +aliases: + "": "" +``` + +The `` can only be a top-level subcommand. In other words, you can define a `tsh mycommand` alias but not `tsh my command`. + +New command `tsh l`: + +```yaml +aliases: + "l": "tsh login --auth=okta" +``` + +Make `tsh status` use JSON as a default format: + +```yaml +aliases: + "status": "tsh status --format=json" +``` + +The alias can use an arbitrary number of arguments. If an argument variable `$N` is referenced, `tsh` will check that at least `N+1` arguments were given to the alias invocation. All arguments that were given but not referenced in the alias definition will be appended at the end. + +Define a custom command using `bash`. The `$0` and `$1` variables will be substituted with command arguments. + +```yaml +aliases: + "connect": "bash -c 'tsh login $0 && tsh ssh $1'" +``` + +Define a custom login command where first argument specifies `--auth` option. + +```yaml +aliases: + "ap": "tsh login --auth=$0 --proxy=teleport.example.com" +``` + +Given the configuration: + +```yaml +aliases: + "example": "bash -c 'echo first=$0 $0-$1 $3'" +``` + +`tsh example 0 1 unused-2 3 unused-4` will expand to `bash -c 'echo first=0 0-1 3 unused-2 unused-4'`. + +An alias definition can also reference the `$TSH` variable. If you use the +`$TSH` variable in an alias, `tsh` expands the variable to the absolute path of +the current `tsh` executable. This behavior can be useful if there are multiple +`tsh` versions installed, or the version you're currently using is not in the +`PATH`: + +```yaml +aliases: + "status": "$TSH status --format=json" +``` + +To troubleshoot aliases, set the `TELEPORT_DEBUG=1` environment variable. 
This will cause detailed logs to be printed to standard error: + +```code +$ TELEPORT_DEBUG=1 tsh status +DEBU [TSH] Self re-exec command: tsh [status --format=json]. tsh/aliases.go:203 +... +``` + +### Proxy templates + +With proxy templates, `tsh` dynamically determines the address of the Teleport +Proxy Service to connect to based on the address of the destination host in your +`tsh ssh` or `tsh proxy ssh` command: + +```yaml +proxy_templates: + +# Regular expression that the host server address `%h:%p` is matched against. +# The "replace rules" below can reference capturing groups from this regular +# expression (`$1`, `$2`, etc.). +- template: '^(\w+)\.(\w+):([0-9]+)$' # .: + + # Optional web proxy address to use for proxy jump (`--jumphost`, `-J`). + # + # Proxy Jump can be used to reduce latency in regionally distributed trusted + # clusters by connecting to a leaf node through the leaf proxy instead of the + # root proxy. + proxy: "$2.eu.example.com:443" + + # Optional cluster name to connect to (`--cluster`). + # + # Cluster can be used to connect to leaf nodes from the root proxy without + # first logging in to the leaf cluster. This may be useful in cases where + # proxy jump is not applicable, such as when the leaf clusters do not have + # their own public proxies. + cluster: "$2" + + # Optional host server address to connect to (`%h:%p`). + # + # Port defaults to 3022 if not explicitly provided with `--port`. + # If provided, it will take precedence over host resolution via + # query or search. + host: "$1:$3" + + # Optional predicate expression to resolve the target host with. + # + # Query by predicate expression similar to tsh ls --query. + # Has priority over search but will be ignored if a host is provided. + query: "labels.env == $1" + + # Optional fuzzy search terms to resolve the target host with. + # + # Search by a list of comma separated keywords similar to tsh ls --search. + # Only applied if host and search are not provided. 
+
+  search: "$1"
+
+# Multiple templates can be provided. They are evaluated in order and the first
+# match takes effect.
+- template: ...
+```
+
+In the configuration above, `query` accepts a predicate expression. It has
+priority over search, but is ignored if a host is provided. See the
+[predicate language
+documentation](../../reference/access-controls/predicate-language.mdx#resource-filtering) for
+predicate expression examples.
+
+`tsh -J {{proxy}} ssh` and `tsh -J {{proxy}} proxy ssh` will attempt to match the
+host server address `%h:%p` against the configured templates. For each replace rule set,
+the corresponding CLI value is set.
+
+If leaf certificates are required to connect to the node, `tsh` automatically
+retrieves leaf certificates from the root cluster:
+
+```code
+$ tsh ssh -J {{proxy}} node1.leaf1
+# becomes
+$ tsh ssh -J leaf1.eu.example.com:443 --cluster leaf1 node1
+```
+
+If no template matches, an error is returned:
+
+```code
+$ tsh ssh -J {{proxy}} node1.none.example.com
+ERROR: proxy jump contains {{proxy}} variable but did not match any of the templates in tsh config
+```
+
+If you don't explicitly provide the proxy variable `-J {{proxy}}`, `tsh` still
+attempts to match a template, but won't fail if there isn't a match.
+Additionally, `tsh` won't replace the `proxy` value if it's explicitly set by
+the client:
+
+```code
+$ tsh ssh -J leaf2.us.example.com:443 node1.leaf2
+# becomes
+$ tsh ssh -J leaf2.us.example.com:443 --cluster leaf2 node1
+```
+
+Proxy templates can also be used with OpenSSH by setting the `ProxyCommand`
+in `~/.ssh/config` to use `tsh proxy ssh`:
+
+```txt
+Host *.example.com
+    Port 3022
+    ProxyCommand tsh proxy ssh -J {{proxy}} %r@%h:%p
+```
+
+As a result, you can use `tsh ssh` and `ssh` interchangeably.
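To see how the capture groups in a template feed the replace rules, you can emulate the substitution from the example template with `sed`. This is a standalone illustration of the regex mechanics, not a command `tsh` runs; the host `node1.leaf1:3022` is hypothetical:

```code
$ echo 'node1.leaf1:3022' | sed -E 's/^([[:alnum:]]+)\.([[:alnum:]]+):([0-9]+)$/--proxy \2.eu.example.com:443 --cluster \2 \1:\3/'
--proxy leaf1.eu.example.com:443 --cluster leaf1 node1:3022
```

Here `$1` (`node1`) becomes the host, while `$2` (`leaf1`) selects both the jump proxy and the cluster, mirroring the `proxy`, `cluster`, and `host` replace rules in the example configuration.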
+ +```code +$ tsh ssh node1.leaf1 +# is equivalent to +$ ssh node1.leaf1 +``` + +## Uninstalling tsh + +To remove `tsh` and associated user data see +[Uninstalling Teleport](../../installation/uninstall-teleport.mdx). + +## Further reading + +Read the [`tsh` CLI Reference](../../reference/cli/tsh.mdx) for all `tsh` commands +and their options. diff --git a/docs/pages/connect-your-client/teleport-clients/vnet.mdx b/docs/pages/connect-your-client/teleport-clients/vnet.mdx new file mode 100644 index 0000000000000..475ec0a465818 --- /dev/null +++ b/docs/pages/connect-your-client/teleport-clients/vnet.mdx @@ -0,0 +1,361 @@ +--- +title: Using VNet +sidebar_label: VNet +description: Using VNet +tags: + - how-to + - zero-trust + - infrastructure-identity +--- + +This guide explains how to use VNet to connect to TCP applications and SSH +servers available through Teleport. + +## How it works + +VNet automatically proxies connections from your computer to TCP apps and SSH +servers available through Teleport. +A program on your device can securely connect to resources protected +by Teleport without having to know about Teleport authentication details. +Underneath, VNet authenticates the connection with your Teleport credentials and +securely tunnels the connection. +This is all done client-side – VNet sets up a local DNS name server that +intercepts DNS requests for your Teleport resources and responds with a virtual IP +address managed by VNet that will handle the connection. + +VNet's SSH support enables third-party SSH clients to connect to Teleport SSH +servers with minimal configuration required, while still offering Teleport +access controls and features like [Per-session MFA](../../zero-trust-access/authentication/per-session-mfa.mdx) +and [Hardware Key Support](../../zero-trust-access/authentication/hardware-key-support.mdx). 
+ +![Diagram showing VNet architecture](../../../img/vnet/how-it-works.svg) + +VNet delivers an experience like a VPN through this local virtual network, +while maintaining all of Teleport's identity verification and zero trust +features that traditional VPNs cannot provide. + +VNet is available on macOS and Windows in Teleport Connect and tsh, with plans +for Linux support in a future version. + + +VNet's VPN-like experience for app access means that any software running on +the client machine can access Teleport apps at local DNS or IP addresses. + +**Avoid running VNet on shared or multi-user machines.** +If multiple OS users share the same machine, any user could access Teleport TCP +apps at their local VNet DNS or IP address. + +**Protect HTTP services behind VNet.** +Untrusted websites can potentially use DNS rebinding attacks to bypass the +browser’s Same-Origin Policy and issue plain HTTP requests to VNet IP addresses. +If your Teleport cluster contains TCP apps serving plain HTTP APIs, it is +strongly recommended to either avoid VNet or implement one or more of the +following mitigations for DNS rebinding attacks: +- upgrade these APIs to HTTPS or another protocol +- enforce a Host header allowlist at the HTTP server +- block browser access to HTTP websites + + +## Prerequisites + + + +- A client machine running macOS Ventura (13.0) or higher. +- [Teleport Connect](teleport-connect.mdx), version 16.0.0 or higher. + + +- A client machine running Windows 10 or higher. +- [Teleport Connect](teleport-connect.mdx), version 17.3.0 or higher. + + + +## Step 1/3. Start VNet + +Open Teleport Connect and log in to your cluster. +See [Using Teleport Connect](./teleport-connect.mdx) if you haven't used the +Teleport Connect app before. + +Open the **connection list** in the top left and click the icon to start VNet. +Or, skip this step and VNet will start automatically when you click "Connect" +on a TCP app or "Connect with VNet" on an SSH server. 
+
+![VNet shown in connection list](../../../img/vnet/start-vnet.png)
+
+After VNet has been started once, it will automatically start every time
+Teleport Connect is opened, unless you stop VNet before closing Teleport
+Connect.
+
+First launch on macOS +During the first launch, macOS will prompt you to enable a background item for tsh.app. VNet needs +this background item in order to configure DNS on your device. To enable the background item, either +interact with the system notification or go to System Settings > General > Login Items and look for +tsh.app under "Allow in the Background". + +![VNet starting up](../../../img/use-teleport/vnet-starting@2x.png) +
+ +## Step 2/3. Connect to a TCP app + +Find the TCP app you want to connect to. +TCP apps have `tcp://` as the protocol in their address. + +![Resource list in Teleport Connect with a TCP app hovered over](../../../img/use-teleport/vnet-resources-list@2x.png) + +Click "Connect" next to the TCP app. +This will start VNet if it's not already running, and then copy the app's +address to your clipboard. +You can now connect to the application using the application client you would +normally use to connect to it. + +```code +$ psql postgres://postgres@tcp-app.teleport.example.com/postgres +``` + +As long as VNet is running in the background, clicking "Connect" next to each +app is not necessary. +You can directly connect to all of your TCP apps without any actions in +Teleport Connect. + + +Unless the application specifies [multiple +ports](../../enroll-resources/application-access/protect-apps/tcp.mdx#configuring-access-to-multiple-ports), +VNet proxies connections over any port used by the application client. For multi-port apps, the port +number must match one of the target ports of the app. To see a list of target ports, click the +three dot menu next to an application in Teleport Connect or execute `tsh apps ls`. + +If [per-session MFA](../../zero-trust-access/authentication/per-session-mfa.mdx) is enabled, the +first connection over each port triggers an MFA check. + + +## Step 3/3. Connect to an SSH server + +Find the SSH server you want to connect to, open the menu next to the "Connect" +dropdown, and click "Connect with VNet". +This will start VNet if it's not already running, and then copy the VNet +address for the server to your clipboard. + +![SSH server in Teleport Connect with "Connect with VNet" menu open](../../../img/vnet/ssh-connect.png) + +There is a one-time configuration step required before SSH clients will be able +to connect to Teleport SSH servers through VNet. 
+When you click "Connect with VNet" on an SSH server, Teleport Connect will
+automatically check if this configuration is present and walk you through it if
+necessary.
+
+![SSH client configuration modal in Teleport Connect](../../../img/vnet/configure-ssh-clients.png)
+
+Once the configuration step is complete, any OpenSSH-compatible client that
+reads configuration options from `~/.ssh/config` should be able to connect to
+Teleport SSH servers.
+Try connecting with the standard `ssh` client or the Remote Development feature
+in editors like Visual Studio Code or Zed:
+
+```code
+$ ssh <user>@<host>.<cluster>
+```
+
+As long as VNet is running in the background, clicking "Connect with VNet" next
+to each SSH server is not necessary; you can directly connect to all of your
+Teleport SSH servers without any actions in Teleport Connect.
+
+## `tsh` support
+
+VNet is also available in `tsh` without running Teleport Connect.
+To use it, log in and then run `tsh vnet`:
+
+```code
+$ tsh login --proxy=teleport.example.com
+$ tsh vnet
+```
+
+While `tsh` support is available, Teleport Connect is the preferred application
+for running VNet.
+Teleport Connect offers better visibility for MFA prompts and cluster logins, and
+automatically runs diagnostics that are useful for troubleshooting.
+
+## Troubleshooting
+
+### Conflicting IPv4 ranges
+
+On the client computer, VNet uses IPv4 addresses from the CGNAT IP range `100.64.0.0/10` by
+default, and needs to configure addresses and routes for this range.
+This can conflict with other VPN-like applications; notably, Tailscale also uses
+this range.
+
+If you are experiencing connectivity problems with VNet, check if you are
+running Tailscale or another VPN client, and try disabling it to see if the
+issue persists.
+To avoid the conflict and run VNet alongside Tailscale or another VPN client,
+you can configure VNet to use a different IPv4 range; see our VNet configuration
+[guide](../../enroll-resources/application-access/vnet.mdx#configuring-ipv4-cidr-range).
+
+### Connecting to the app without VNet
+
+Sometimes connectivity issues are not related to VNet, and you can narrow that down by trying to
+connect to your app without VNet. Make sure your app appears in the Connect resources view, or the
+output of `tsh apps ls`. Turn off VNet and try creating a local proxy to your app (with debug
+logging enabled) with `tsh proxy app -d <app-name>`.
+
+### Timeouts when trying to reach a Teleport cluster
+
+If VNet doesn't have a chance to clean up before stopping, such as during a sudden device shutdown,
+it may leave leftover DNS configuration files in `/etc/resolver`. Those files tell your computer to
+talk to a DNS server operated by VNet when connecting to your cluster. But since VNet is no longer
+running, there's no DNS server to answer those calls.
+
+To clean up those files, simply start VNet again. Alternatively, you can remove the leftover files
+manually.
+
+### Verifying that VNet receives DNS queries
+
+Open Teleport Connect. From the Connections panel in the top left, select VNet. Make sure VNet is
+running, then select "Open Diag Report". Note the IPv6 prefix and the IPv4 CIDR range used by VNet.
+
+Send a query for a TCP app available in your cluster, replacing `tcp-app.teleport.example.com` with
+the name of your app:
+
+```code
+$ dscacheutil -q host -a name tcp-app.teleport.example.com
+name: tcp-app.teleport.example.com
+ipv6_address: fd60:67ec:4325::647a:547d
+
+name: tcp-app.teleport.example.com
+ip_address: 100.68.51.151
+```
+
+```code
+# In PowerShell.
+$ Resolve-DnsName tcp-app.teleport.example.com
+
+Name                           Type   TTL   Section    IPAddress
+----                           ----   ---   -------    ---------
+tcp-app.teleport.example.com   AAAA   10    Answer     fd60:67ec:4325::647a:547d
+tcp-app.teleport.example.com   A      10    Answer     100.68.51.151
+```
+
+The returned addresses should belong to ranges listed in the VNet diag report.
+
+Querying for anything other than an address of a TCP app should return the address belonging to the
+Proxy Service. Using macOS as an example:
+
+```code
+$ dscacheutil -q host -a name dashboard.teleport.example.com
+name: dashboard.teleport.example.com
+ipv6_address: 2606:2800:21f:cb07:6820:80da:af6b:8b2c
+
+name: dashboard.teleport.example.com
+ip_address: 93.184.215.14
+```
+
+Querying for any of those hostnames should result in some output being emitted in the debug logs of
+VNet (see [Submitting an issue](#submitting-an-issue) on how to enable debug logs).
+
+### Submitting an issue
+
+When [submitting an
+issue](https://github.com/gravitational/teleport/issues/new?assignees=&labels=bug,vnet&template=bug_report.md),
+make sure to include a VNet diag report and debug logs from VNet and Teleport Connect.
+
+To save a diag report to a file, open Teleport Connect. From the Connections panel in the top left,
+select VNet, then "Open Diag Report". In the new tab that opens with the report, click the "Save
+Report to File" icon.
+
+To collect VNet and Teleport Connect logs, use the instructions below:
+
+To enable debug logs in VNet, first stop Teleport Connect and then run the following command.
It +enables debug logs just for the next invocation of VNet: + +```code +$ sudo launchctl debug system/com.gravitational.teleport.tsh.vnetd --environment TELEPORT_DEBUG=1 +``` + +Next, start capturing logs from VNet into a file: + +```code +$ log stream --predicate 'subsystem ENDSWITH ".vnetd"' --style syslog --level debug > vnet.log +``` + +Then start Teleport Connect using the following command to enable debug logs for Teleport Connect: + +```code +$ open -a "Teleport Connect" --args --connect-debug +``` + +Next, attempt to reproduce the issue with VNet. + +To gather logs from Teleport Connect, from the app menu select Help → Open Logs Directory which +opens `~/Library/Application Support/Teleport Connect/logs` in Finder. Attach all files together +with `vnet.log` produced in the earlier step. + +{/* TODO: DELETE IN 21.0.0 */} +Before version 18.0.0, VNet logs were saved in `/var/log/vnet.log`. + +If the error is related to Teleport Connect not being able to start VNet or issues with code +signing, searching through `/var/log/com.apple.xpc.launchd/launchd.log` for `tsh` soon after +attempting to start VNet might also bring up relevant information: + +```code +$ grep tsh /var/log/com.apple.xpc.launchd/launchd.log +``` + + +To enable debug logs in VNet, first stop Teleport Connect. Then in the Start menu look for Command +Prompt and from the right click menu select Run as administrator. The following command enables +debug logs in VNet and immediately closes the admin command prompt to prevent you from starting +Teleport Connect as an admin by mistake. + +```code +$ reg.exe ADD HKLM\SYSTEM\CurrentControlSet\Services\TeleportVNet /v Environment /t REG_MULTI_SZ /d TELEPORT_DEBUG=1 /f && exit +``` + +Next, from the Start menu open the Run app. Execute the following to start Teleport Connect with +debug logs enabled: + +```code +$ "%PROGRAMFILES%\Teleport Connect\Teleport Connect.exe" --connect-debug +``` + +Next, attempt to reproduce the issue with VNet. 
+ +Once that's done, execute the following command from the administrator Command Prompt to disable +debug logs in VNet: + +```code +$ reg.exe DELETE HKLM\SYSTEM\CurrentControlSet\Services\TeleportVNet /v Environment /f +``` + +The last step is collecting the logs. Let's start with the VNet logs. From the Start menu, open Event Viewer. +From the sidebar on the left, select Event Viewer (Local) → Applications and Services Logs → +Teleport. From the sidebar on the right, select "Save All Events As…". Save the logs as .evtx file. +If Event Viewer asks about Display Information, choose "No display information". + +To gather logs from Teleport Connect, press `Alt` while in the app, then select Help → Open Logs +Directory. This opens `C:\Users\%UserName%\AppData\Roaming\Teleport Connect\logs`. Attach all files +together with the .evtx file from the previous step. + +Outside of submitting an issue, VNet logs can be quickly saved to a file with the following +PowerShell command. However, when submitting an issue please attach the .evtx file instead. + +```code +$ Get-WinEvent -LogName Teleport -FilterXPath "*[System[Provider[@Name='vnet']]]" -Oldest | Format-Table -Property TimeCreated,LevelDisplayName,Message -Wrap | Out-File vnet.log +``` + +{/* TODO: DELETE IN 21.0.0 */} +Before version 18.0.0, VNet logs were saved in `C:\Program Files\Teleport Connect\resources\bin\logs.txt`. + + + +## Next steps + +- Read our VNet configuration [guide](../../enroll-resources/application-access/vnet.mdx) + to learn how to configure VNet access to your applications. +- Read [RFD 163](https://github.com/gravitational/teleport/blob/master/rfd/0163-vnet.md) to learn how VNet works on a technical level. +- Read [RFD 207](https://github.com/gravitational/teleport/blob/master/rfd/0207-vnet-ssh.md) to learn how VNet SSH access works. 
diff --git a/docs/pages/connect-your-client/web-ui.mdx b/docs/pages/connect-your-client/teleport-clients/web-ui.mdx similarity index 78% rename from docs/pages/connect-your-client/web-ui.mdx rename to docs/pages/connect-your-client/teleport-clients/web-ui.mdx index f122d81205f64..12c3899639bd2 100644 --- a/docs/pages/connect-your-client/web-ui.mdx +++ b/docs/pages/connect-your-client/teleport-clients/web-ui.mdx @@ -1,6 +1,11 @@ --- title: Using the Web UI +sidebar_label: Web UI +sidebar_position: 2 description: Using the Teleport Web UI +tags: + - conceptual + - platform-wide --- The Teleport Web UI is a web-based visual interface from which you can access resources, view active sessions and recordings, create and review Access Requests, @@ -22,11 +27,11 @@ From the active sessions list, click **Join** and select a participant mode to j - **As a Moderator** with permission to watch, pause, or terminate the session. You can view output and forcefully terminate or pause the session at any time, but can't send input. Moderated sessions are an enterprise-only feature. -![joining an active session from the Web UI](../../img/webui-active-session.png) +![joining an active session from the Web UI](../../../img/webui-active-session.png) You must have the `join_sessions` allow policy in a role you've been assigned to join sessions in any participant mode. For information about how to configure the `join_sessions` allow policy and participant modes for a role, see -[Configure an allow policy](../admin-guides/access-controls/guides/joining-sessions.mdx). +[Configure an allow policy](../../zero-trust-access/authentication/joining-sessions.mdx). ## Idle timeout @@ -65,29 +70,29 @@ cluster networking configuration has been updated ## Starting a database session Starting from version `17.1`, users can establish database sessions using the -Teleport Web UI. Currently, it is supported in PostgreSQL databases. +Teleport Web UI. 
Support for database sessions in the Web UI was later expanded to include CockroachDB in `18.1.5` and MySQL/MariaDB databases in `18.2.0`. To start a new session, locate your database in the resources list and click "Connect". -![Resources list](../../img/connect-your-client/webui-db-sessions/resources-list.png) +![Resources list](../../../img/connect-your-client/webui-db-sessions/resources-list.png) For supported databases, the dialog will present the option to start the session in your browser. -![Resources list database connect dialog](../../img/connect-your-client/webui-db-sessions/resources-connect-dialog.png) +![Resources list database connect dialog](../../../img/connect-your-client/webui-db-sessions/resources-connect-dialog.png) After clicking on the "Connect in the browser" button, a new tab will open with a form. Teleport will pre-fill this form based on your permissions, but you can adjust the options as needed. -![New database session connect dialog](../../img/connect-your-client/webui-db-sessions/connect-dialog.png) +![New database session connect dialog](../../../img/connect-your-client/webui-db-sessions/connect-dialog.png) If your user has wildcard permissions (*), you can type custom values into the form fields. This allows flexibility in selecting specific databases or credentials. -![New database session connect dialog with custom values](../../img/connect-your-client/webui-db-sessions/connect-dialog-custom.png) +![New database session connect dialog with custom values](../../../img/connect-your-client/webui-db-sessions/connect-dialog-custom.png) Once you've filled in the session details, click the "Connect" button. Your session will start, and a terminal interface will appear in the browser. @@ -96,17 +101,17 @@ The browser-based terminal allows you to execute queries and interact with your database. Follow the on-screen instructions to see available commands and limitations. 
-![Database session terminal](../../img/connect-your-client/webui-db-sessions/session-terminal.png)
+![Database session terminal](../../../img/connect-your-client/webui-db-sessions/session-terminal.png)
 
 While the terminal interface provided in the Teleport Web UI is designed to
 resemble popular database CLIs such as `psql`, it is a custom implementation
 with some differences and limitations:
 
-  - **Feature Set:** Not all features available in `psql` are implemented.
+  - **Feature Set:** Not all features available in popular CLI tools are implemented.
     For instance, scripting capabilities, query cancellation, or informational
-    commands like `\d` or `\dt` are currently unsupported.
+    commands like `\d` or `\dt` from `psql` are currently not supported.
   - **Error Handling:** Error messages and diagnostics might differ from what
-    users are accustomed to in `psql`.
+    users are accustomed to.
 
 These distinctions are designed to maintain a lightweight and secure interface
 directly in your browser. For more complex operations, you may prefer
diff --git a/docs/pages/connect-your-client/third-party/ansible.mdx b/docs/pages/connect-your-client/third-party/ansible.mdx
new file mode 100644
index 0000000000000..c17056a023c83
--- /dev/null
+++ b/docs/pages/connect-your-client/third-party/ansible.mdx
@@ -0,0 +1,173 @@
+---
+title: Ansible
+description: Using Teleport with Ansible
+tags:
+  - how-to
+  - zero-trust
+  - infrastructure-identity
+---
+
+Ansible uses the OpenSSH client by default. Teleport supports the SSH protocol
+and can act as an SSH jump host.
+
+In this guide, we will configure the OpenSSH client to work with the Teleport
+Proxy Service and run a sample Ansible playbook.
+
+## How it works
+
+In the setup we describe in this guide, you generate an OpenSSH configuration
+that uses a Teleport-issued SSH certificate to connect to Teleport-protected
+servers. You then provide this OpenSSH configuration to your Ansible host, along
+with an inventory of Teleport-protected servers.
Ansible then uses the OpenSSH
+configuration to present the Teleport-issued credentials in order to manage your
+servers.
+
+## Prerequisites
+
+(!docs/pages/includes/edition-prereqs-tabs.mdx!)
+
+- The OpenSSH `ssh` client tool
+- `ansible` >= (=ansible.min_version=)
+- Optionally, the `jq` tool to process JSON output
+
+## Step 1/3. Log in and configure SSH
+
+Log in to Teleport with `tsh`:
+
+```code
+$ tsh login --proxy=<proxy-address>
+```
+
+Generate an OpenSSH configuration using the `tsh config` command:
+
+```code
+$ tsh config > ssh.cfg
+```
+
+You can edit the matching patterns used in `ssh.cfg` if something
+is not working out of the box.
+
+## Step 2/3. Configure Ansible
+
+Create a folder `ansible` where we will collect all generated files:
+
+```code
+$ mkdir -p ansible
+# Copy the openssh configuration from the previous step to the ansible dir
+$ cp ssh.cfg ansible/
+$ cd ansible
+```
+
+Create a file `ansible.cfg`:
+
+```
+[defaults]
+host_key_checking = True
+inventory=./hosts
+remote_tmp=/tmp
+
+[ssh_connection]
+scp_if_ssh = True
+ssh_args = -F ./ssh.cfg
+```
+
+You can create an inventory file `hosts` manually or use the script below to generate it from your environment. Set your
+cluster name (e.g. `teleport.example.com` or in the form `mytenant.teleport.sh` for Teleport Enterprise Cloud)
+and this script will generate the host names to match the OpenSSH configuration:
+
+```code
+$ tsh ls --format=json | jq -r '.[].spec.hostname + ".<cluster-name>"' > hosts
+```
+
+## Step 3/3. Run a playbook
+
+Finally, let's create a simple Ansible playbook, `playbook.yaml`.
+
+The playbook below runs `hostname` on all hosts.
Make sure to set the `remote_user` parameter
+to a valid SSH username that works with the target host and is allowed by Teleport:
+
+```yaml
+- hosts: all
+  remote_user: ubuntu
+  tasks:
+    - name: "hostname"
+      command: "hostname"
+```
+
+From the folder `ansible`, run the Ansible playbook:
+
+```code
+$ ansible-playbook playbook.yaml
+
+# PLAY [all] *****************************************************************************************************************************************
+# TASK [Gathering Facts] *****************************************************************************************************************************
+#
+# ok: [terminal]
+#
+# TASK [hostname] ************************************************************************************************************************************
+# changed: [terminal]
+#
+# PLAY RECAP *****************************************************************************************************************************************
+# terminal : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
+```
+
+You are all set: you are now using short-lived SSH certificates, and Teleport can
+record all Ansible commands in the audit log.
+
+## Troubleshooting
+
+In cases where Ansible cannot connect, you may see an error like this:
+
+```txt
+example.host | UNREACHABLE! => {
+    "changed": false,
+    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname example.host: Name or service not known",
+    "unreachable": true
+}
+```
+
+You can examine and tweak the patterns matching the inventory hosts in `ssh.cfg`.
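For reference, `tsh config` generates `Host` blocks along these lines. This is a simplified sketch rather than exact output, which varies by `tsh` version; `teleport.example.com` stands in for your cluster name:

```txt
# Hosts in the cluster are reached through the Teleport Proxy Service.
Host *.teleport.example.com
    Port 3022
    UserKnownHostsFile "~/.tsh/known_hosts"
    ProxyCommand tsh proxy ssh --cluster=teleport.example.com --proxy=teleport.example.com:443 %r@%h:%p
```

If a host in your inventory doesn't match any `Host` pattern, OpenSSH falls back to a direct connection and name resolution fails with an error like the one above.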
+
+Try the SSH connection using `ssh.cfg` with verbose mode to inspect the error:
+
+```code
+$ ssh -vvv -F ./ssh.cfg root@example.host
+```
+
+If `ssh` works, try running the playbook with verbose mode on:
+
+```code
+$ ansible-playbook -vvvv playbook.yaml
+```
+
+Note that Teleport's internal hostname matching is case-sensitive by default, so
+hostnames that contain uppercase characters (like `MYHOSTNAME`) can also lead to
+this error.
+
+If this is the case, you can work around it by requesting that your Teleport
+administrator enable case-insensitive routing at the cluster level. Admins can
+enable case-insensitive routing using one of the following methods:
+
+Edit your `/etc/teleport.yaml` config file on all servers running the Teleport `auth_service`, then restart Teleport on each:
+
+```yaml
+auth_service:
+  case_insensitive_routing: true
+```
+
+Run `tctl edit cluster_networking_config` to add the following specification, then save and exit:
+
+```yaml
+spec:
+  case_insensitive_routing: true
+```
+
diff --git a/docs/pages/connect-your-client/third-party/gui-clients.mdx b/docs/pages/connect-your-client/third-party/gui-clients.mdx
new file mode 100644
index 0000000000000..7d2583131ac13
--- /dev/null
+++ b/docs/pages/connect-your-client/third-party/gui-clients.mdx
@@ -0,0 +1,699 @@
+---
+title: Database GUI Clients
+description: How to configure graphical database clients for Teleport database access.
+tags:
+  - how-to
+  - zero-trust
+  - infrastructure-identity
+---
+
+This guide describes how to configure popular graphical database clients to
+work with Teleport.
+
+If you are using Teleport Connect to access your database, you can follow the
+instructions in the app to connect your GUI client. See [Using Teleport
+Connect](../teleport-clients/teleport-connect.mdx#connecting-to-a-database).
+
+## Prerequisites
+
+- A running Teleport cluster.
If you want to get started with Teleport, [sign + up](https://goteleport.com/signup) for a free trial or [set up a demo + environment](../../get-started/deploy-community.mdx). + +- The `tsh` client tool. Visit [Installation](../../installation/installation.mdx) for instructions on downloading + `tsh`. See the [Using Teleport Connect](../teleport-clients/teleport-connect.mdx) guide for a graphical desktop client + that includes `tsh`. + +- To check that you can connect to your Teleport cluster, sign in with `tsh login`. For example: + + ```code + $ tsh login --proxy=teleport.example.com --user=myuser@example.com + ``` + +- The Teleport Database Service configured to access a database. See one of our + [guides](../../enroll-resources/database-access/guides/guides.mdx) for how to set up the Teleport + Database Service for your database. + +## How GUI clients access Teleport-protected databases + +In a typical setup, clients accessing a database protected by Teleport send +traffic to the Teleport Proxy Service in the database's native protocol, and the +Proxy Service forwards the traffic to and from the protected database. End users +use TLS certificates to authenticate with Teleport-protected databases. + +GUI clients need to present a certificate to the Teleport Database Service and +check the database server certificate issued by a Teleport-protected database. +There are three ways to do this: + +- **Authenticated tunnel (recommended):** `tsh` or Teleport Connect starts a + local proxy. A database GUI client establishes an unauthenticated connection + with the local proxy. Before forwarding the connection to the + Teleport-protected database, the local proxy automatically secures the + connection with the Teleport user's client certificate. +- **Unauthenticated local proxy:** `tsh` starts a local proxy that routes + connections to the Teleport-protected database, but the proxy itself doesn't + handle authentication. 
The user configures a database GUI client to provide
+  the user certificate, private key, and CA certificate at paths printed by
+  `tsh`. (Teleport Connect can start an authenticated tunnel but not an
+  unauthenticated local proxy.)
+- **Using a remote host and port:** For self-hosted clusters with TLS
+  multiplexing disabled on the Teleport Proxy Service, you can configure GUI
+  clients to communicate with a port on the Proxy Service reserved for traffic
+  in the protected database's protocol. We recommend TLS multiplexing for a
+  typical Teleport cluster, and it is enabled by default on Teleport Cloud.
+
+To access a Teleport-protected database with a GUI client, you will need to
+retrieve connection information to pass to your client.
+
+Determine the database to access by listing the databases you can connect to:
+
+```code
+$ tsh db ls
+# Name
+# -------------------
+# database-name
+```
+
+Replace `database-name` in the commands below with the name of the database you
+want to connect to.
+
+### Authenticated tunnel
+
+To start the local authenticated tunnel, run the following command:
+
+```code
+$ tsh proxy db --tunnel database-name
+Started authenticated tunnel for the database "database-name" in cluster "teleport.example.com" on 127.0.0.1:62652.
+```
+
+Starting the local database proxy with the `--tunnel` flag will create an
+authenticated tunnel that you can use to connect to your database instances.
+
+You can optionally specify the database name and the user to use by default
+when connecting to the database:
+
+```code
+$ tsh proxy db --db-user=my-database-user --db-name=my-schema --tunnel database-name
+```
+
+Now, you can connect to the address the proxy command returns. In our example it
+is `127.0.0.1:62652`.
+
+### Local proxy server
+
+Use the `tsh proxy db` command to start a local TLS proxy that your GUI database
+client will connect to.
+
+Run a command similar to the following:
+
+```code
+$ tsh proxy db database-name
+Started DB proxy on 127.0.0.1:61740
+
+Use following credentials to connect to the proxy:
+  ca_file=/Users/r0mant/.tsh/keys/root.gravitational.io/certs.pem
+  cert_file=/Users/r0mant/.tsh/keys/root.gravitational.io/alice-db/root/database-name-x509.pem
+  key_file=/Users/r0mant/.tsh/keys/root.gravitational.io/alice
+```
+
+Use the displayed local proxy host/port and credentials paths when configuring
+your GUI client below. When entering the hostname, use `localhost` rather than
+`127.0.0.1`.
+
+### Remote host and port
+
+If you are self-hosting Teleport and not using [TLS
+routing](../../zero-trust-access/deploy-a-cluster/tls-routing.mdx), run the
+following command to see the database connection information:
+
+```code
+# View configuration for the database you're logged in to.
+$ tsh db config
+# View configuration for a specific database when you're logged in to multiple.
+$ tsh db config example
+```
+
+It will display the path to your locally cached certificate and key files:
+
+```
+Name:     example
+Host:     teleport.example.com
+Port:     3080
+User:     postgres
+Database: postgres
+CA:       /Users/alice/.tsh/keys/teleport.example.com/certs.pem
+Cert:     /Users/alice/.tsh/keys/teleport.example.com/alice-db/root/example-x509.pem
+Key:      /Users/alice/.tsh/keys/teleport.example.com/alice
+```
+
+The displayed `CA`, `Cert`, and `Key` files are used to connect through pgAdmin
+4, MySQL Workbench, and other graphical database clients that support mutual
+TLS authentication.
+
+## MongoDB Compass
+
+[Compass](https://www.mongodb.com/products/compass) is the official MongoDB
+graphical client.
+
+On the "New Connection" panel, click on "Fill in connection fields individually".
+ +![MongoDB Compass new connection](../../../img/database-access/compass-new-connection@2x.png) + +On the "Hostname" tab, enter the hostname and port of the proxy you will use to +access the database (see +[Get connection information](./gui-clients.mdx#how-gui-clients-access-teleport-protected-databases)). +Leave "Authentication" as None. + +![MongoDB Compass hostname](../../../img/database-access/compass-hostname@2x.png) + +On the "More Options" tab, set SSL to "Client and Server Validation" and set the +CA as well as the client key and certificate. Note that a CA path must be +provided and be able to validate the certificate presented by your Teleport +Proxy Service's web endpoint. + +![MongoDB Compass more options](../../../img/database-access/compass-more-options@2x.png) + +The following fields in the More Options tab must correspond to paths printed by +the `tsh proxy db` command you ran earlier: + +|Field|Path| +|---|---| +|Certificate Authority|`ca_file`| +|Client Certificate|`cert_file`| +|Client Private Key|`key_file`| + +Click on the "Connect" button. 
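+
+If your version of Compass asks for a single connection string instead of
+individual fields, the same settings can be expressed as a MongoDB URI. Note
+that the standard `tlsCertificateKeyFile` option expects the client certificate
+and private key concatenated into one PEM file, while `tsh` prints them as
+separate `cert_file` and `key_file` paths. An illustrative sketch, in which the
+port and the combined PEM path are placeholders rather than values `tsh`
+prints:
+
+```code
+mongodb://localhost:61740/?tls=true&tlsCAFile=/Users/r0mant/.tsh/keys/root.gravitational.io/certs.pem&tlsCertificateKeyFile=/path/to/combined.pem
+```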
+
+## MySQL DBeaver
+
+Right-click in the "Database Navigator" menu in the main view and select Create > Connection:
+
+![DBeaver Add Server](../../../img/database-access/dbeaver-add-server.png)
+
+In the search bar of the "Connect to a database" window that opens up, type "mysql", select the MySQL driver, and click "Next":
+
+![DBeaver Select Driver](../../../img/database-access/dbeaver-select-driver.png)
+
+In the newly-opened "Connection Settings" tab, set the Host to `localhost` and
+the Port to the one returned by the proxy command (`62652` in the example above):
+
+![DBeaver Select Configure Server](../../../img/database-access/dbeaver-configure-server.png)
+
+In that same tab, set the username to match the one you use to connect via
+Teleport and uncheck the "Save password locally" box:
+
+![DBeaver Select Configure User](../../../img/database-access/dbeaver-configure-user.png)
+
+Click the "Edit Driver Settings" button on the "Main" tab, check the "No
+Authentication" box, and click "Ok" to save:
+
+![DBeaver Driver Settings](../../../img/database-access/dbeaver-driver-settings.png)
+
+Once you are back in the "Connection Settings" window, click "Ok" to finish and
+DBeaver should connect to the remote MySQL server automatically.
+
+## MySQL Workbench
+
+[MySQL Workbench](https://www.mysql.com/products/workbench/) is a GUI
+application that provides comprehensive MySQL administration and SQL development
+tools.
+
+In the MySQL Workbench "Setup New Connection" dialog, fill out "Connection
+Name", "Hostname", "Port", and "Username":
+
+![MySQL Workbench
+Parameters](../../../img/database-access/workbench-parameters@2x.png)
+
+In the "SSL" tab, set "Use SSL" to `Require and Verify Identity` and enter the
+paths to your CA, certificate, and private key files (see
+[Get connection information](./gui-clients.mdx#how-gui-clients-access-teleport-protected-databases)):
+
+![MySQL Workbench SSL](../../../img/database-access/workbench-ssl@2x.png)
+
+The following fields in the SSL tab must correspond to paths printed by the `tsh
+proxy db` command you ran earlier:
+
+|Field|Path|
+|---|---|
+|SSL Key File|`key_file`|
+|SSL CERT File|`cert_file`|
+|SSL CA File|`ca_file`|
+
+Optionally, click "Test Connection" to verify connectivity:
+
+![MySQL Workbench Test](../../../img/database-access/workbench-test@2x.png)
+
+Save the connection and connect to the database.
+
+## NoSQL Workbench
+
+From the NoSQL Workbench launch screen, click **Launch** next to **Amazon DynamoDB**.
+From the left-side menu select **Operation builder**, then **+ Add connection**.
+Choose the **DynamoDB local** tab, and point to your proxy's endpoint. This is
+`localhost:62652` in the example above. (See
+[Get connection information](./gui-clients.mdx#how-gui-clients-access-teleport-protected-databases) for
+more information.)
+
+![DynamoDB local connection in NoSQL Workbench](../../../img/database-access/nosql-workbench-connection.png)
+
+## SQL Server with Azure Data Studio
+
+In Azure Data Studio, create a connection using your proxy's endpoint. This is
+`localhost,62652` in the example above. On a Windows machine, you may need to
+use an address in the format `127.0.0.1,62652` instead of `localhost`. (See
+[Get connection information](./gui-clients.mdx#how-gui-clients-access-teleport-protected-databases) for
+more information.)
+
+Create a connection with Microsoft SQL Server with these settings:
+
+|Connection Detail|Value|
+|---|---|
+|Server|`host`,`port` of proxy endpoint|
+|Authentication Type|SQL Login|
+|Password|empty|
+|Encrypt|`False`|
+
+Example:
+![Azure Data Studio connection options](../../../img/database-access/azure-data-studio-connection.png)
+
+Click **Connect** to connect.
+
+## PostgreSQL DBeaver
+
+To connect to your PostgreSQL instance, use the authenticated proxy address.
+This is `127.0.0.1:62652` in the example above (see the "Authenticated tunnel"
+section in [Get connection information](./gui-clients.mdx#how-gui-clients-access-teleport-protected-databases)
+for more information).
+
+Use the "Database native" authentication with an empty password:
+
+![DBeaver Postgres Configure
+Server](../../../img/database-access/dbeaver-pg-configure-server.png)
+
+Clicking on "Test connection" should return a connection success message. Then,
+click on "Finish" to save the configuration.
+
+## PostgreSQL pgAdmin 4
+
+[pgAdmin 4](https://www.pgadmin.org/) is a popular graphical client for
+PostgreSQL servers.
+
+To configure a new connection, right-click on "Servers" in the main browser view
+and create a new server:
+
+![pgAdmin Add Server](../../../img/database-access/pgadmin-add-server@2x.png)
+
+In the "General" tab of the new server dialog, enter the server connection name:
+
+![pgAdmin General](../../../img/database-access/pgadmin-general@2x.png)
+
+In the "Connection" tab, fill in the hostname, port, user, and database name from
+the configuration above:
+
+![pgAdmin Connection](../../../img/database-access/pgadmin-connection@2x.png)
+
+In the "SSL" tab, set "SSL Mode" to `Verify-Full` and fill in paths for the client
+certificate, key, and root certificate from the configuration above:
+
+![pgAdmin SSL](../../../img/database-access/pgadmin-ssl@2x.png)
+
+The following fields in the SSL tab must correspond to paths printed by the `tsh
+proxy db` command you ran earlier:
+
+|Field|Path|
+|---|---|
+|Client certificate|`cert_file`|
+|Client certificate key|`key_file`|
+|Root certificate|`ca_file`|
+
+Click "Save", and pgAdmin should immediately connect. If pgAdmin prompts you
+for a password, leave the password field empty and click OK.
+
+## Microsoft SQL Server Management Studio
+
+In Microsoft SQL Server Management Studio, connect to a database engine using
+your proxy's endpoint. This is `localhost,62652` in the example above. On a
+Windows machine, you may need to use `127.0.0.1,62652` instead of `localhost`.
+(See [Get connection information](./gui-clients.mdx#how-gui-clients-access-teleport-protected-databases) for
+more information.)
+
+Create a connection with Microsoft SQL Server with these settings:
+
+|Connection Detail|Value|
+|---|---|
+|Server type|Database Engine|
+|Server name|`host`,`port` of proxy endpoint|
+|Authentication|SQL Server Authentication|
+|Password|empty|
+|Encryption|do not enable|
+
+Example:
+![Microsoft SQL Server Management Studio connection options](../../../img/database-access/msft-sql-server-management-studio-connection.png)
+
+Click Connect to connect.
+
+## Redis Insight
+
+
+  Teleport's Redis Insight integration only supports Redis standalone instances.
+
+
+After opening Redis Insight click `ADD REDIS DATABASE`.
+
+![Redis Insight Startup Screen](../../../img/database-access/guides/redis/redisinsight-startup.png)
+
+Now start a local proxy to your Redis instance:
+
+```code
+$ tsh proxy db --db-user=alice redis-db-name
+```
+
+Click `Add Database Manually`. Use `127.0.0.1` as the `Host`. Use the port printed by
+the `tsh` command you ran in [Get connection information](./gui-clients.mdx#how-gui-clients-access-teleport-protected-databases).
+
+Provide your Redis username as `Username` and password as `Password`.
+
+![Redis Insight Configuration](../../../img/database-access/guides/redis/redisinsight-add-config.png)
+
+Next, check the `Use TLS` and `Verify TLS Certificates` boxes. Copy the contents
+of the files at the paths returned by the `tsh proxy db` command you ran earlier
+and paste them into their corresponding fields. See the table below for the
+Redis Insight fields that correspond to each path:
+
+|Field|Path|
+|---|---|
+|CA Certificate|`ca_file`|
+|Client Certificate|`cert_file`|
+|Private Key|`key_file`|
+
+Click `Add Redis Database`.
+
+![Redis Insight TLS Configuration](../../../img/database-access/guides/redis/redisinsight-tls-config.png)
+
+Congratulations! You have just connected to your Redis instance.
+
+![Redis Insight Connected](../../../img/database-access/guides/redis/redisinsight-connected.png)
+
+## Snowflake: DBeaver
+
+The Snowflake integration works only in the authenticated proxy mode. Start a local proxy for connections to your Snowflake database by using the command below:
+
+```code
+$ tsh proxy db --tunnel --port 2000 snowflake
+```
+
+Add a new database by clicking the "add" icon in the top-left corner:
+
+![DBeaver Main Screen](../../../img/database-access/guides/snowflake/dbeaver-main-screen.png)
+
+In the search bar of the "Connect to a database" window that opens up, type "snowflake", select the Snowflake driver, and click "Next":
+
+![DBeaver Select Database](../../../img/database-access/guides/snowflake/dbeaver-select-database.png)
+
+Set "Host" to `localhost` and "Port" to the port returned by the `tsh proxy db` command you ran earlier (`2000` in the example above).
+In the "Authentication" section set the "Username" to match the database username passed to Teleport with `--db-user`
+and enter any value (e.g., "teleport") in the "Password" field (the value of
+"Password" will be ignored when establishing a connection but is required by DBeaver to register your database):
+
+![DBeaver Main](../../../img/database-access/guides/snowflake/dbeaver-main.png)
+
+Next, click the "Driver properties" tab and set "account" to any value (e.g., "teleport"; as with "Password", the value of
+"account" will be ignored when establishing a connection but is required by DBeaver to register your database). In "User properties", set "ssl" to `off`:
+
+![DBeaver Driver](../../../img/database-access/guides/snowflake/dbeaver-driver.png)
+
+Teleport ignores the provided password and account name; internally it uses values from the Database Agent configuration.
+Setting "ssl" to `off` only disables encryption on your local machine. The connection to Snowflake is encrypted by Teleport.
+
+Now you can click on "Test Connection..." 
in the bottom-left corner:
+
+![DBeaver Success](../../../img/database-access/guides/snowflake/dbeaver-success.png)
+
+Congratulations! You have just connected to your Snowflake instance.
+
+## Snowflake: JetBrains (IntelliJ, GoLand, DataGrip, PyCharm, etc.)
+
+The Snowflake integration works only in the authenticated proxy mode. Start a local proxy for connections to your Snowflake database by using the command below:
+
+```code
+$ tsh proxy db --tunnel --port 2000 snowflake
+```
+
+In "Database Explorer" click the "add" button, pick "Data Source", and then pick "Snowflake":
+
+![JetBrains Add Database](../../../img/database-access/guides/snowflake/jetbrains-add-database.png)
+
+Next, set "Host" to `localhost` and "Port" to the port returned by the `tsh proxy db` command you ran earlier (`2000` in the example above).
+Set the "Username" to match the one that you are assuming when you connect to Snowflake
+via Teleport and enter any value (e.g., "teleport") in the "Password" field (the value of
+"Password" will be ignored but is required to create a data source in your IDE):
+
+![JetBrains General](../../../img/database-access/guides/snowflake/jetbrains-general.png)
+
+Switch to the "Advanced" tab, set any value (e.g., "teleport") for "account", and add a new record named `ssl` with value `off` (as with "Password", "account" is ignored while establishing the connection but required by your IDE):
+
+![JetBrains Advanced](../../../img/database-access/guides/snowflake/jetbrains-advanced.png)
+
+Teleport ignores the provided password and account name; internally it uses values from the Database Agent configuration.
+Setting "SSL" to `off` only disables encryption on your local machine. The connection to Snowflake is encrypted by Teleport.
+
+Now you can click "Test Connection" to check your configuration.
+
+![JetBrains Success](../../../img/database-access/guides/snowflake/jetbrains-success.png)
+
+Congratulations! You have just connected to your Snowflake instance.
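+
+Other JDBC-based tools can point at the same authenticated tunnel. A rough
+sketch of the connection URL, assuming the tunnel from the example above and an
+illustrative database user of `alice` (as in the GUI flows, Teleport ignores
+the `account` value and `ssl=off` only affects the local hop):
+
+```code
+jdbc:snowflake://localhost:2000/?account=teleport&user=alice&ssl=off
+```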
+ +## SQL Server DataGrip + +In the DataGrip connection configuration menu, use your proxy's endpoint. This +is `localhost:4242` in the example below. (See +[Get connection information](./gui-clients.mdx#how-gui-clients-access-teleport-protected-databases) for +more information.) + +Select the "User & Password" authentication option and keep the "Password" field +empty: + +![DataGrip connection options](../../../img/database-access/guides/sqlserver/datagrip-connection@2x.png) + +Click "OK" to connect. + +## SQL Server DBeaver + +In the DBeaver connection configuration menu, use your proxy's endpoint. This is +`localhost:62652` in the example above. (See +[Get connection information](./gui-clients.mdx#how-gui-clients-access-teleport-protected-databases) for +more information.) + +Use the SQL Server Authentication option and keep the Password field empty: + +![DBeaver connection options](../../../img/database-access/guides/sqlserver/dbeaver-connection@2x.png) + +Click OK to connect. + +## Cloud Spanner DataGrip + +(!docs/pages/includes/database-access/gui-clients/spanner-local-proxy.mdx!) + +From the DataGrip menu, click "File > New > Data Source from URL", then copy and +paste the JDBC URL that was output by `tsh proxy db`: + +![Create DataGrip Spanner Data Source From JDBC URL](../../../img/database-access/spanner-gui/datagrip-data-source-from-jdbc-url@2x.png) + +The "Google Cloud Spanner" driver should be automatically selected. +Click "Ok". + +DataGrip does not need GCP credentials - those are already provided by Teleport. +On the "General" tab of the new data source, select the "Authentication" +dropdown setting and choose "No auth": + +![Choose No Auth For DataGrip Spanner Data Source](../../../img/database-access/spanner-gui/datagrip-choose-no-auth@2x.png) + +Click "Test connection" to ensure the connection is configured correctly, then +click "Ok" to create the data source. + +(!docs/pages/includes/database-access/gui-clients/spanner-reuse-port-note.mdx!) 
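+
+For reference, the URL printed by `tsh proxy db` uses the `jdbc:cloudspanner://`
+scheme and ends in `usePlainText=true`, since only the local hop to the proxy
+is unencrypted. The project, instance, and database names below are
+placeholders; your output will contain your own values and local port:
+
+```code
+jdbc:cloudspanner://127.0.0.1:62652/projects/my-project/instances/my-instance/databases/my-db;usePlainText=true
+```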
+
+## Cloud Spanner DBeaver
+
+(!docs/pages/includes/database-access/gui-clients/spanner-local-proxy.mdx!)
+
+From the menu, click "Database > Driver Manager":
+
+![Open DBeaver Driver Manager](../../../img/database-access/spanner-gui/dbeaver-open-driver-manager@2x.png)
+
+Search for the "Google Cloud Spanner" driver, select it, and click the "Copy"
+button to make a custom driver configuration:
+
+![Copy DBeaver Google Spanner Driver](../../../img/database-access/spanner-gui/dbeaver-copy-spanner-driver@2x.png)
+
+Give the custom driver a name, e.g. "Teleport Spanner", then set "URL Template"
+to this pattern string:
+
+```code
+jdbc:cloudspanner://127.0.0.1:{port}/projects/{server}/instances/{host}/databases/{database};usePlainText=true
+```
+
+![Create Custom DBeaver Google Spanner Driver](../../../img/database-access/spanner-gui/dbeaver-create-spanner-driver@2x.png)
+
+Click "Ok", then click "Close".
+
+From the menu, click "Database > New Connection from JDBC URL".
+
+Copy the JDBC URL that was output by `tsh proxy db` and paste it:
+
+![Create DBeaver Spanner Connection From JDBC URL](../../../img/database-access/spanner-gui/dbeaver-connection-from-jdbc-url@2x.png)
+
+Click "Proceed", then click "Finish".
+
+(!docs/pages/includes/database-access/gui-clients/spanner-reuse-port-note.mdx!)
+
+## Oracle graphical clients
+
+The Oracle integration works only in the authenticated proxy mode. Start a local proxy for connections to your Oracle database by using the command below:
+
+```code
+$ tsh proxy db --tunnel --port 11555 --db-user= --db-name= oracle
+
+Started authenticated tunnel for the Oracle database "oracle" in cluster "teleport.example.com" on 127.0.0.1:11555.
+```
+
+
+The command above uses the local port 11555, but you can choose any available port. Leaving `--port` empty will cause `tsh` to pick a random one.
+
+
+The local proxy supports TCP and TCPS modes. Different clients prefer different modes.
+ +TCP: +- requires no username or password +- generally easier to configure + +TCPS: +- requires no username or password +- depends on automatically created wallet +- uses JDBC URL for configuration + + +Teleport versions earlier than 17.2.0 support only a limited range of clients and only offer TCPS mode. `tsh` will automatically detect this situation and warn the user. We recommend updating to the latest version of Teleport to access full client support and additional connection options. + + +### Oracle SQL Developer (standalone) + +In "Connections" click the "+" button for a new database connection: + +![Oracle SQL Developer Add Database Connection](../../../img/database-access/guides/oracle/sql-developer-standalone-add-database.png) + +Next, set the name and username from the `--db-user` parameter. Set connection type to "Custom JDBC" and +set the "Custom JDBC URL" from the `tsh proxy db` command. + +![Oracle SQL Developer](../../../img/database-access/guides/oracle/sql-developer-standalone-conn-details-tcps.png) + +Now you can click "Test" to check your configuration. + +![Oracle SQL Developer Success](../../../img/database-access/guides/oracle/sql-developer-standalone-success.png) + +### Oracle SQL Developer (VS Code extension) + +Install the extension from [VS Code Marketplace](https://marketplace.visualstudio.com/items?itemName=Oracle.sql-developer). + +Both TCP and TCPS modes can be used. + + + + +Open the extension toolbar and click on "Create Connection" button. 
+ +![SQL Developer (VS Code) Start](../../../img/database-access/guides/oracle/sql-developer-vscode-start@2x.png) + +Enter the following connection details: + +| Field | Value | +|-----------------|------------------------| +| Connection name | Choose unique name | +| User name | `/` | +| Password | `/` | +| Save Password | Mark checkbox | +| Connection type | Basic | +| Host name | `localhost` | +| Port number | `--port` flag value | +| Type | Service Name | +| Service name | `--db-name` flag value | + +Test and create the connection. + +![SQL Developer (VS Code) Connection Details (basic)](../../../img/database-access/guides/oracle/sql-developer-vscode-conn-details-basic@2x.png) + +The new connection should appear on the list. + +![SQL Developer (VS Code) Connected (basic)](../../../img/database-access/guides/oracle/sql-developer-vscode-connected-basic@2x.png) + + + + + +Open the extension toolbar and click on "Create Connection" button. + +![SQL Developer (VS Code) Start](../../../img/database-access/guides/oracle/sql-developer-vscode-start@2x.png) + +Enter the following connection details: + +| Field | Value | +|------------------|---------------------------------| +| Connection name | (choose per your preference) | +| User name | `/` | +| Password | `/` | +| Save Password | Mark the checkbox | +| Connection type | "Custom JDBC" | +| Custom JDBC URL | Copy from `tsh proxy db` output | + +Test and create the connection. + +![SQL Developer (VS Code) Connection Details (JDBC)](../../../img/database-access/guides/oracle/sql-developer-vscode-conn-details-jdbc@2x.png) + +The new connection should appear on the list. + +![SQL Developer (VS Code) Connected (JDBC)](../../../img/database-access/guides/oracle/sql-developer-vscode-connected-jdbc@2x.png) + + + + + +### Toad + +Add new login record in the logins dialog. 
+
+![Toad Add Login Record](../../../img/database-access/guides/oracle/toad-add-login-record@2x.png)
+
+Enter the connection details in the "Direct" tab:
+
+| Field           | Value                        |
+|-----------------|------------------------------|
+| Host name       | `127.0.0.1`                  |
+| Port number     | `--port` flag value          |
+| Service name    | `--db-name` flag value       |
+| User name       | `EXTERNAL`                   |
+| Password        | (leave empty)                |
+| Connection name | (choose per your preference) |
+
+Test the connection to verify the setup.
+
+![Toad Add Login Tested](../../../img/database-access/guides/oracle/toad-add-login-tested@2x.png)
+
+The newly added login should appear on the login list.
+
+![Toad Login List](../../../img/database-access/guides/oracle/toad-login-list@2x.png)
+
+
+You can also configure Toad to use an external Oracle client. Both native and external clients are supported.
+
+
+### DBeaver
+
+Click on the "New Database Connection" button.
+
+![DBeaver new connection button](../../../img/database-access/guides/oracle/dbeaver-new-connection@2x.png)
+
+Select "Oracle" from the driver list. You may use the search toolbar to narrow down the list.
+
+![DBeaver connect to a database](../../../img/database-access/guides/oracle/dbeaver-connect-to-a-database@2x.png)
+
+Choose "Custom" connection type and paste the JDBC connection string printed by `tsh proxy db`.
+
+![DBeaver JDBC details](../../../img/database-access/guides/oracle/dbeaver-jdbc-details@2x.png)
+
+Test the connection to verify the setup. Finalize by clicking "Finish".
+ diff --git a/docs/pages/enroll-resources/server-access/guides/jetbrains-sftp.mdx b/docs/pages/connect-your-client/third-party/jetbrains-sftp.mdx similarity index 77% rename from docs/pages/enroll-resources/server-access/guides/jetbrains-sftp.mdx rename to docs/pages/connect-your-client/third-party/jetbrains-sftp.mdx index 922ed0b06c3de..507fef80cbd0b 100644 --- a/docs/pages/enroll-resources/server-access/guides/jetbrains-sftp.mdx +++ b/docs/pages/connect-your-client/third-party/jetbrains-sftp.mdx @@ -1,7 +1,10 @@ --- title: JetBrains SFTP description: How to use a JetBrains IDE to manipulate files on a remote host with Teleport -h1: SFTP with JetBrains IDE +tags: + - how-to + - zero-trust + - infrastructure-identity --- JetBrain's IDEs, like PyCharm, GoLand, and IntelliJ, allow browsing, copying, and editing files on a remote server @@ -10,13 +13,20 @@ machine without using a third-party client. This guide explains how to use Teleport and a JetBrains IDE to access files with SFTP. +## How it works + +JetBrains IDEs can use the local SSH client to access a remote server. You can +use Teleport to generate a configuration for your local SSH client that +instructs the client to connect to a Teleport-protected Linux server using a +Teleport-issued OpenSSH certificate. + ## Prerequisites -(!docs/pages/includes/edition-prereqs-tabs-not-admin.mdx!) +(!docs/pages/includes/edition-prereqs-tabs.mdx clients="\`tsh\` client"!) - JetBrains IDE like PyCharm, IntelliJ, GoLand etc. See [Products](https://www.jetbrains.com/products/#type=ide) for a full list of JetBrains IDEs. - One or more Teleport SSH Service instances. If you have not yet done this, - read the [getting started guide](../getting-started.mdx) to + read the [getting started guide](../../enroll-resources/server-access/getting-started.mdx) to learn how. ## Step 1/3. 
First-time setup @@ -67,7 +77,7 @@ $ ssh user@[server name].[cluster name] Include the port number for OpenSSH servers, by default `22`, or you can experience an error. - See the [OpenSSH guide](../openssh/openssh.mdx) for more information. + See the [OpenSSH guide](../../enroll-resources/server-access/openssh/openssh.mdx) for more information. Example connecting to a OpenSSH server: ```code @@ -79,19 +89,19 @@ $ ssh user@[server name].[cluster name] After opening your IDE go to `Tools` -> `Deployment` -> `Browse Remote Host`. -![Deployment](../../../../img/jetbrains-sftp/deployment-main.png) +![Deployment](../../../img/jetbrains-sftp/deployment-main.png) Then click the plus sign in the top-left corner to add a new server. -![Add server](../../../../img/jetbrains-sftp/add-server.png) +![Add server](../../../img/jetbrains-sftp/add-server.png) Enter a name for your new server. -![New Deployment](../../../../img/jetbrains-sftp/deployment-added.png) +![New Deployment](../../../img/jetbrains-sftp/deployment-added.png) Click the three dots next to `SSH configuration` as in the picture above. -![SSH Configuration](../../../../img/jetbrains-sftp/ssh-configurations.png) +![SSH Configuration](../../../img/jetbrains-sftp/ssh-configurations.png) Create a new configuration by clicking the plus sign on the top left and providing: @@ -102,13 +112,13 @@ Create a new configuration by clicking the plus sign on the top left and providi As an `Authentication type` pick `OpenSSH config and authentication agent`. Next, you can click `Test Connection`. -![Successfully Connected](../../../../img/jetbrains-sftp/successfully-connected.png) +![Successfully Connected](../../../img/jetbrains-sftp/successfully-connected.png) ## Step 3/3. Browse a remote host After closing the SSH configuration window, you should see `Remote Host` menu in your IDE. 
-![Browse window](../../../../img/jetbrains-sftp/browse-window.png) +![Browse window](../../../img/jetbrains-sftp/browse-window.png) Teleport's certificates expire fairly quickly, after which SSH @@ -129,7 +139,7 @@ After closing the SSH configuration window, you should see `Remote Host` menu in ### Using OpenSSH clients This guide makes use of `tsh config`; refer to the -[dedicated guide](../openssh/openssh.mdx) for additional information. +[dedicated guide](../../enroll-resources/server-access/openssh/openssh.mdx) for additional information. ## Further reading - [JetBrains - Create a remote server configuration](https://www.jetbrains.com/help/idea/creating-a-remote-server-configuration.html#overload) diff --git a/docs/pages/connect-your-client/putty-winscp.mdx b/docs/pages/connect-your-client/third-party/putty-winscp.mdx similarity index 93% rename from docs/pages/connect-your-client/putty-winscp.mdx rename to docs/pages/connect-your-client/third-party/putty-winscp.mdx index c5621639181ab..dffcf04a72644 100644 --- a/docs/pages/connect-your-client/putty-winscp.mdx +++ b/docs/pages/connect-your-client/third-party/putty-winscp.mdx @@ -1,10 +1,16 @@ --- title: Using PuTTY and WinSCP with Teleport +sidebar_label: PuTTY and WinSCP description: This reference shows you how to use PuTTY to connect to SSH nodes and WinSCP to transfer files through Teleport +tags: + - conceptual + - platform-wide + - zero-trust + - infrastructure-identity --- This guide will show you how to use the Teleport client tool `tsh` to add saved sessions for use -with [PuTTY](https://www.putty.org/), and then how to use PuTTY as a client to connect to SSH nodes. +with [PuTTY](https://putty.software/), and then how to use PuTTY as a client to connect to SSH nodes. It will also show you how to optionally use these saved sessions with [WinSCP](https://winscp.net) to transfer files from SSH nodes using SFTP. @@ -19,7 +25,8 @@ You will learn how to: - A client machine running Windows 10 or higher. 
You can only use `tsh` to save PuTTY sessions on Windows. -- The Teleport `tsh.exe` client, version 14.0.3 or higher. To download the `tsh.exe` client, run the following command: +- The Teleport `tsh.exe` client. To download the `tsh.exe` client, run the + following command: ```code $ curl.exe -O https://cdn.teleport.dev/teleport-v(=teleport.version=)-windows-amd64-bin.zip @@ -79,7 +86,7 @@ Added PuTTY session for ubuntu@ip-172-31-30-140 [proxy:teleport.example.com] If you don't provide a login to this command, your local Windows username is used instead. If you are adding a session for a registered OpenSSH node within your cluster (added with -[`teleport join openssh`](../enroll-resources/server-access/openssh/openssh-agentless.mdx)), you must specify the `sshd` port +[`teleport join openssh`](../../enroll-resources/server-access/openssh/openssh-agentless.mdx)), you must specify the `sshd` port (usually 22) when adding a session with `tsh puttyconfig`: ```code @@ -131,11 +138,11 @@ If you don't provide a login to this command, your local Windows username is use 1. Start PuTTY to see the saved sessions available for your cluster. -![Main PuTTY window](../../img/connect-your-client/putty-window.png) +![Main PuTTY window](../../../img/connect-your-client/putty-window.png) 2. Double-click a session to connect to the host through Teleport. -![PuTTY console](../../img/connect-your-client/putty-console.png) +![PuTTY console](../../../img/connect-your-client/putty-console.png) After you connect to the host, Teleport generates an audit log entry for the session's start, and appears in the list of "Active Sessions" within Teleport. @@ -188,18 +195,18 @@ transfer files to and from it. If you don't see the Site Manager "Login" dialog appear with a list of sessions to connect to when WinSCP starts, click the **Tabs** menu, choose **Sites**, then **Site Manager...** to show it. 
-![WinSCP Site Manager window](../../img/connect-your-client/winscp-1.png) +![WinSCP Site Manager window](../../../img/connect-your-client/winscp-1.png) 2. Click the **Tools** button at the bottom left, and choose **Import Sites**. -![Click 'Tools', then choose 'Import Sites...'](../../img/connect-your-client/winscp-2.png) +![Click 'Tools', then choose 'Import Sites...'](../../../img/connect-your-client/winscp-2.png) 3. Check the box next to any saved PuTTY sessions that you wish to import into WinSCP for use, then click the "OK" button. If you don't see sessions matching the hosts that you want to connect to, close this box and run `tsh puttyconfig @` from a terminal [as described above](#summary) to add the sessions, then repeat this step. -![Choose PuTTY sessions to import and click OK](../../img/connect-your-client/winscp-3.png) +![Choose PuTTY sessions to import and click OK](../../../img/connect-your-client/winscp-3.png) 4. To tell WinSCP it should trust and load saved Host CAs from PuTTY, click **Tools** again at the bottom left, then choose **Preferences...** @@ -208,17 +215,17 @@ then choose **Preferences...** You can skip steps 4 and 5 if you've completed the process as this user on this PC before. -![Click 'Tools', then choose 'Preferences...'](../../img/connect-your-client/winscp-4.png) +![Click 'Tools', then choose 'Preferences...'](../../../img/connect-your-client/winscp-4.png) 5. Click the **Security** section at the left, then check the **Load authorities from PuTTY** checkbox under the *Trusted host certification authorities* section and click **OK** to exit. -![Click 'Security', Check 'Load authorities from PuTTY' then click OK](../../img/connect-your-client/winscp-5.png) +![Click 'Security', Check 'Load authorities from PuTTY' then click OK](../../../img/connect-your-client/winscp-5.png) 6. Choose the host to connect to from the list at the left-hand side and click **Login**. 
You can also start the session by double clicking on its name if you like. -![Choose the host from the list and click Login](../../img/connect-your-client/winscp-6.png) +![Choose the host from the list and click Login](../../../img/connect-your-client/winscp-6.png) Uploading or downloading files using WinSCP through Teleport will generate audit events. @@ -269,7 +276,7 @@ Yes, WinSCP version 6.2 and higher support validation using SSH host certificate No, PuTTY calls `tsh proxy ssh` which uses the default authentication method configured for the Teleport cluster. -For more information about Teleport authentication, see [Authentication options](../reference/access-controls/authentication.mdx). +For more information about Teleport authentication, see [Authentication options](../../reference/access-controls/authentication.mdx). Advanced users can use the Registry Editor to modify the PuTTY proxy command themselves under the `ProxyTelnetCommand` key. Note that if you re-run `tsh puttyconfig` for the given hostname, this command is overwritten. @@ -359,9 +366,9 @@ If this error appears during normal day-to-day operation, this is a bug and shou ## Uninstalling tsh To remove `tsh` and associated user data see -[Uninstalling Teleport](../admin-guides/management/admin/uninstall-teleport.mdx). +[Uninstalling Teleport](../../installation/uninstall-teleport.mdx). ## Further reading -- [CLI Reference](../reference/cli/tsh.mdx#tsh-puttyconfig). +- [CLI Reference](../../reference/cli/tsh.mdx#tsh-puttyconfig). 
diff --git a/docs/pages/connect-your-client/third-party/third-party.mdx b/docs/pages/connect-your-client/third-party/third-party.mdx new file mode 100644 index 0000000000000..0bd569fc0fd3e --- /dev/null +++ b/docs/pages/connect-your-client/third-party/third-party.mdx @@ -0,0 +1,12 @@ +--- +title: Connect Third-Party Clients with Teleport +sidebar_label: Third-Party Tooling +description: Explains how to use the third-party client tools you are already familiar with to access infrastructure protected by Teleport. +--- + +Teleport works with the tooling that you already use to access infrastructure. +The guides in this section show you how to use Teleport-issued credentials to +access protected resources with third-party clients, including: + + + diff --git a/docs/pages/enroll-resources/server-access/guides/vscode.mdx b/docs/pages/connect-your-client/third-party/vscode.mdx similarity index 81% rename from docs/pages/enroll-resources/server-access/guides/vscode.mdx rename to docs/pages/connect-your-client/third-party/vscode.mdx index 50a8ad43a4e3d..46fc0e60c9351 100644 --- a/docs/pages/enroll-resources/server-access/guides/vscode.mdx +++ b/docs/pages/connect-your-client/third-party/vscode.mdx @@ -1,21 +1,31 @@ --- title: Visual Studio Code description: How to use Visual Studio Code's Remote Development plugin with Teleport -h1: Remote Development With Visual Studio Code +tags: + - how-to + - zero-trust + - infrastructure-identity --- This guide explains how to use Teleport and Visual Studio Code's remote SSH extension. +## How it works + +The Visual Studio Code Remote - SSH extension uses the local SSH client to +access a remote server. You can use Teleport to generate a configuration for +your local SSH client that instructs the client to connect to a +Teleport-protected Linux server using a Teleport-issued OpenSSH certificate. + ## Prerequisites -(!docs/pages/includes/edition-prereqs-tabs-not-admin.mdx!) 
+(!docs/pages/includes/edition-prereqs-tabs.mdx clients="\`tsh\` client"!) - OpenSSH client. - Visual Studio Code with the [Remote - SSH extension](https://code.visualstudio.com/docs/remote/ssh#_system-requirements) for the Remote - SSH extension. - One or more Teleport Agents running the Teleport SSH Service. If you have not yet done this, read the - [getting started guide](../getting-started.mdx) to learn how. + [getting started guide](../../enroll-resources/server-access/getting-started.mdx) to learn how. Linux and macOS clients should rely on their operating system-provided OpenSSH @@ -92,13 +102,13 @@ When you see this error, re-run `tsh login` to refresh your local certificate. Install the [Remote - SSH extension][remote-ssh] in your local VS Code instance. A new "Window Indicator" (icon with two arrows) should appear in the bottom left of your VS Code window. -![Window Indicator in bottom left corner of VS Code](../../../../img/vscode/window-indicator.png) +![Window Indicator in bottom left corner of VS Code](../../../img/vscode/window-indicator.png) Prior to connecting with a host, set the `Remote.SSH: Use Local Server` setting to false in the extension setting. You can search for `@ext:ms-vscode-remote.remote-ssh ` to find the plugin-specific settings. -![Remote SSH Extension VS Code Settings](../../../../img/vscode/settings.png) +![Remote SSH Extension VS Code Settings](../../../img/vscode/settings.png) To connect, click on the icon with two arrows and select "Connect to Host...". Select "+ Add New SSH Host..." @@ -109,7 +119,7 @@ For each host you wish to remotely develop on, add an entry like the following: alice@node000.foo.example.com ``` -![Input box to add new Node](../../../../img/vscode/add-host.png) +![Input box to add new Node](../../../img/vscode/add-host.png) When prompted to choose which SSH Configuration file to update select the one we generated during Step 1. @@ -121,18 +131,18 @@ Start a Remote Development session by either: 1. 
Clicking "Connect" on the notification that opens after adding a new host. -![Notification of "Host added" that has connect button](../../../../img/vscode/host-added-notification.png) +![Notification of "Host added" that has connect button](../../../img/vscode/host-added-notification.png) 2. Clicking on the Window Indicator again and selecting "Connect to Host". You should see the host you just added and any others in your Configuration file in the drop down. -![Connecting to a Teleport host in VS Code](../../../../img/vscode/select-host-to-connect.png) +![Connecting to a Teleport host in VS Code](../../../img/vscode/select-host-to-connect.png) On first connect, you'll be prompted to configure the remote OS. Select the proper platform and VS Code will install its server-side component. When it completes, you should be left with a working editor: -![VS Code connected to a Teleport Node](../../../../img/vscode/connected-editor.png) +![VS Code connected to a Teleport Node](../../../img/vscode/connected-editor.png) The Window Indicator in the bottom left highlights the currently connected remote host. @@ -142,14 +152,14 @@ The Window Indicator in the bottom left highlights the currently connected remot It's possible to remotely develop on any OpenSSH host joined to a Teleport cluster so long as its host OS is supported by VS Code. Refer to the -[OpenSSH guide](../openssh/openssh.mdx) to configure the remote host to authenticate via +[OpenSSH guide](../../enroll-resources/server-access/openssh/openssh.mdx) to configure the remote host to authenticate via Teleport certificates, after which the procedure outlined above can be used to connect to the host in VS Code. ### Using OpenSSH clients This guide makes use of `tsh config`; refer to the -[dedicated guide](../openssh/openssh.mdx) for additional information. +[dedicated guide](../../enroll-resources/server-access/openssh/openssh.mdx) for additional information. 
## Further reading - [VS Code Remote Development](https://code.visualstudio.com/docs/remote/remote-overview) diff --git a/docs/pages/connect-your-client/tsh.mdx b/docs/pages/connect-your-client/tsh.mdx deleted file mode 100644 index 54035be7f35c6..0000000000000 --- a/docs/pages/connect-your-client/tsh.mdx +++ /dev/null @@ -1,1167 +0,0 @@ ---- -title: Using the tsh Command Line Tool -description: This reference shows you how to use Teleport's tsh tool to authenticate to a cluster, explore your infrastructure, and connect to a resource. ---- - -This guide will show you how to use the Teleport client tool, `tsh`. - -You will learn how to: - -- Log in to an interactive shell on remote cluster nodes. -- Copy files to and from cluster nodes. -- Connect to SSH clusters behind firewalls without any open ports using SSH - reverse tunnels. -- Explore a cluster and execute commands on specific nodes in the cluster. -- Share interactive shell sessions with colleagues or join someone else's session. -- List and replay recorded interactive sessions. - -In addition to this document, you can always simply type `tsh` into your -terminal for the CLI reference. - -## Introduction - -For the impatient, here's an example of how a user would typically use -[`tsh`](../reference/cli/tsh.mdx): - - - - -```code -# Log into a Teleport cluster. This command retrieves the user's certificates -# and saves them into ~/.tsh/teleport.example.com -$ tsh login --proxy=teleport.example.com - -# SSH into a Node as usual -$ tsh ssh user@node - -# `tsh ssh` takes the same arguments as the OpenSSH client: -$ tsh ssh -o ForwardAgent=yes user@node -$ tsh ssh -o AddKeysToAgent=yes user@node - -# You can even create a convenient symlink: -$ ln -s /path/to/tsh /path/to/ssh - -# ... and now your 'ssh' command is calling Teleport's `tsh ssh` -$ ssh user@host - -# This command removes SSH certificates from a user's machine: -$ tsh logout -``` - - - - -```code -# Login into a Teleport cluster. 
This command retrieves the user's certificates -# and saves them into ~/.tsh/mytenant.teleport.sh -$ tsh login --proxy=mytenant.teleport.sh - -# SSH into a Node as usual -$ tsh ssh user@node - -# `tsh ssh` takes the same arguments as the OpenSSH client: -$ tsh ssh -o ForwardAgent=yes user@node -$ tsh ssh -o AddKeysToAgent=yes user@node - -# You can even create a convenient symlink: -$ ln -s /path/to/tsh /path/to/ssh - -# ... and now your 'ssh' command is calling Teleport's `tsh ssh` -$ ssh user@host - -# This command removes SSH certificates from a user's machine: -$ tsh logout -``` - - - - - -In other words, Teleport was designed to be fully compatible with existing -SSH-based workflows and does not require users to learn anything new, other than -to call [`tsh login`](../reference/cli/tsh.mdx#tsh-login) in the beginning. - -## Installing tsh - -Follow the instructions below to install the `tsh` binary. - -We recommend installing `tsh` of the same major version as the version used in -your Teleport cluster. - -To find the version number, either: - -- In the Web UI, select your username in the upper right, then click - **Help & Support**. You will see the version of your Teleport - cluster under **CLUSTER INFORMATION**. - -- Use `curl` and `jq`. Replace - with your Proxy Service address (e.g. `mytenant.teleport.sh` for Teleport - Enterprise Cloud): - - ```code - $ curl https:///webapi/find | jq '.server_version' - "(=teleport.version=)" - ``` - -(!docs/pages/includes/install-tsh.mdx!) - -## User identities - -A user identity in Teleport exists in the scope of a cluster. The member nodes -of a cluster may have multiple OS users on them. A Teleport administrator -assigns allowed logins to every Teleport user account. - -When logging into a remote node, you will have to specify both the Teleport -login and the OS login. 
A Teleport identity will have to be passed via the -`--user` flag while the OS login will be passed as `login@host` using syntax -compatible with the traditional `ssh` command. - - - - -```code -# Authenticate against the "work" cluster as joe and then -# log into the node as root: -$ tsh ssh --proxy=work.example.com --user=joe root@node -``` - - - - -```code -# Authenticate against the "work" cluster as joe and then -# log into the node as root: -$ tsh ssh --proxy=mytenant.teleport.sh --user=joe root@node -``` - - - - - -[CLI Docs - tsh ssh](../reference/cli/tsh.mdx#tsh-ssh) - -## Logging in - -To retrieve a user's certificate, execute: - - - - -```code -# Full form: -$ tsh login --proxy=proxy_host: - -# Using default ports: -$ tsh login --proxy=work.example.com - -# Using custom HTTPS port: -$ tsh login --proxy=work.example.com:5000 -``` - - - - -```code -# Full form: -$ tsh login --proxy=proxy_host: - -$ tsh login --proxy=mytenant.teleport.sh -``` - - - - - -[CLI Docs - tsh login](../reference/cli/tsh.mdx#tsh-login) - -| Port | Description | -| - | - | -| https_proxy_port | the HTTPS port the proxy host is listening to (defaults to `443` and `3080`). | - -The login command retrieves a user's certificate and stores it in `~/.tsh` -directory as well as in the [ssh agent](https://en.wikipedia.org/wiki/Ssh-agent) if there is one running. - -This allows you to authenticate just once, maybe at the beginning of the day. Subsequent `tsh ssh` commands will run without asking for credentials until the temporary certificate expires. By default, Teleport issues user certificates with a time to live (TTL) of 12 hours. - - - It is recommended to always use [`tsh login`](../reference/cli/tsh.mdx#tsh-login) before using any other `tsh` commands. This allows users to omit `--proxy` flag in subsequent tsh commands. For example `tsh ssh user@host` will work. - - -A Teleport cluster can be configured for multiple user identity sources. 
For example, a cluster may have a local user called `admin` while regular users should [authenticate via GitHub](../admin-guides/access-controls/sso/github-sso.mdx). In this case, you have to pass `--auth` flag to `tsh login` to specify which identity storage to use: - - - - -```code -# Log in using the local Teleport 'admin' user: -$ tsh --proxy=proxy.example.com --auth=local --user=admin login - -# Log in using GitHub as an SSO provider, assuming the GitHub connector is called "github" -$ tsh --proxy=proxy.example.com --auth=github login -``` - - - - -```code -# Log in using the local Teleport 'admin' user: -$ tsh --proxy=mytenant.teleport.sh --auth=local --user=admin login - -# Log in using GitHub as an SSO provider, assuming the GitHub connector is called "github" -$ tsh --proxy=mytenant.teleport.sh --auth=github login -``` - - - - - -When using an external identity provider to log in, `tsh` will need to open a -web browser to complete the authentication flow. By default, `tsh` will use your -system's default browser. If you wish to suppress this behavior, you can use the -`--browser=none` flag: - - - - -```code -# Don't open the system default browser when logging in -$ tsh login --proxy=work.example.com --browser=none -``` - - - - -```code -# Don't open the system default browser when logging in -$ tsh login --proxy=mytenant.teleport.sh --browser=none -``` - - - - - -In this situation, a link will be printed on the screen. You can copy and paste this link into -a browser of your choice to continue the login flow. 
- -[CLI Docs - tsh login](../reference/cli/tsh.mdx#tsh-login) - -### Inspecting an SSH certificate - -To inspect the SSH certificates in `~/.tsh`, a user may execute the following -command: - - - - -```code -$ tsh status - -# > Profile URL: https://proxy.example.com:3080 -# Logged in as: johndoe -# Cluster: proxy.example.com -# Roles: access, auditor, editor -# Logins: root, admin, guest -# Kubernetes: enabled -# Valid until: 2017-04-25 15:02:30 -0700 PDT [valid for 1h0m0s] -# Extensions: permit-agent-forwarding, permit-port-forwarding, permit-pty -``` - - - - -```code -$ tsh status - -# > Profile URL: https://mytenant.teleport.sh:443 -# Logged in as: johndoe -# Cluster: mytenant.teleport.sh -# Roles: access, editor, auditor -# Logins: root, admin, guest -# Kubernetes: enabled -# Valid until: 2017-04-25 15:02:30 -0700 PDT [valid for 1h0m0s] -# Extensions: permit-agent-forwarding, permit-port-forwarding, permit-pty -``` - - - - - -[CLI Docs - tsh status](../reference/cli/tsh.mdx#tsh-status) - -### SSH agent support - -If there is an [ssh agent](https://en.wikipedia.org/wiki/Ssh-agent) running, -`tsh login` will store the user certificate in the agent. This can be verified -via: - -```code -$ ssh-add -L -``` - -The SSH agent can be used to feed the certificate to other SSH clients, for example -to OpenSSH (`ssh`). - -If you wish to disable SSH agent integration, pass `--no-use-local-ssh-agent` -to `tsh`. You can also set the `TELEPORT_USE_LOCAL_SSH_AGENT` environment -variable to `false` in your shell profile to make this permanent. 
- -### Identity files - -[`tsh login`](../reference/cli/tsh.mdx#tsh-login) can also save the user certificate into a -file: - - - - -```code -# Authenticate the user against proxy.example.com and save the user -# certificate to joe.pem -$ tsh login --proxy=proxy.example.com --out=joe - -# Use joe.pem to log in to the server 'db' -$ tsh ssh --proxy=proxy.example.com -i joe joe@db -``` - - - - -```code -# Authenticate the user against mytenant.teleport.sh and save the user -# certificate to joe.pem -$ tsh login --proxy=mytenant.teleport.sh --out=joe - -# Use joe.pem to log in to the server 'db' -$ tsh ssh --proxy=mytenant.teleport.sh -i joe joe@db -``` - - - - - -By default, the `--out` flag will create an identity file suitable for `tsh -i`. -If compatibility with OpenSSH is needed, `--format=openssh` must be specified. -In this case, the identity will be saved into two files, `joe` and -`joe-cert.pub`: - - - - -```code -$ tsh login --proxy=proxy.example.com --out=joe --format=openssh -$ ls -lh - -# total 8.0K -# -rw------- 1 joe staff 1.7K Aug 10 16:16 joe -# -rw------- 1 joe staff 1.5K Aug 10 16:16 joe-cert.pub -``` - - - - -```code -$ tsh login --proxy=mytenant.teleport.sh --out=joe --format=openssh -$ ls -lh - -# total 8.0K -# -rw------- 1 joe staff 1.7K Aug 10 16:16 joe -# -rw------- 1 joe staff 1.5K Aug 10 16:16 joe-cert.pub -``` - - - - - -### SSH certificates for automation - -Regular users of Teleport must request an auto-expiring SSH certificate, usually -every day. This doesn't work for non-interactive scripts, like cron jobs or a -CI/CD pipeline. - -The most secure way to generate certificates for automation purposes is to use -[Machine ID](../enroll-resources/machine-id/introduction.mdx). This ensures that your automation -is taking advantage of the security properties of short-lived credentials. 
- -If Machine ID does not support your preferred CI/CD platform, you can create a -local user for use in automation and request a long-lived certificate for that -user. - -In this example, we're creating a certificate with a TTL of one hour for the -`jenkins` user and storing it in a `jenkins.pem` file, which can be later used with -`-i` (identity) flag for `tsh`. - - - - -```code -# Log in to your cluster with tsh so you can use tctl from your local machine. -# You can also run tctl on your Auth Service host without running "tsh login" -# first. -$ tsh login --proxy=teleport.example.com --user=myuser -$ tctl auth sign --ttl=1h --user=jenkins --out=jenkins.pem -``` - - - - -```code -# Log in to your Teleport Cloud cluster so you can use tctl locally. -$ tsh login --proxy=myinstance.teleport.sh --user=email@example.com -$ tctl auth sign --ttl=1h --user=jenkins --out=jenkins.pem -``` - - - - - -[CLI Docs - tctl auth sign](../reference/cli/tctl.mdx#tctl-auth-sign) - -Now `jenkins.pem` can be copied to the Jenkins server and passed to the `-i` -(identity file) flag of `tsh`. - -`tctl auth sign` is an admin's equivalent of `tsh login --out` and allows for -unrestricted certificate TTL values. - -## Exploring the cluster - -In a Teleport cluster, all Nodes periodically ping the cluster's Auth Service -and update their status. This allows Teleport users to see which Nodes are -online with the `tsh ls` command: - -```code -# This command lists all Nodes in the cluster you logged into via "tsh login": -$ tsh ls - -# Node Name Address Labels -# --------- ------- ------ -# turing ⟵ Tunnel os=linux -# graviton 10.1.0.7:3022 os=osx -``` - -[CLI Docs - tsh ls](../reference/cli/tsh.mdx#tsh-ls) - -`tsh ls` can apply a filter based on the node labels. - -```code -# Only show Nodes with os label set to 'osx': -$ tsh ls os=osx - -# Nodename Address Labels -# --------- ------- ------ -# graviton 10.1.0.7:3022 os=osx -``` - -[CLI Docs -tsh ls](../reference/cli/tsh.mdx#tsh-ls) - -
-Not seeing Nodes? - -(!docs/pages/includes/node-logins.mdx!) - -
- -## Interactive shell - -To launch an interactive shell on a remote Node or to execute a command, use -`tsh ssh`. - -`tsh` tries to mimic the `ssh` experience as much as possible, so it supports -the most popular `ssh` flags like `-p`, `-l` or `-L`. For example, if you have -the following alias defined in your `~/.bashrc`: `alias ssh="tsh ssh"` then you -can continue using familiar SSH syntax: - - - - -```code -# Have this alias configured, perhaps via ~/.bashrc -$ alias ssh="/usr/local/bin/tsh ssh" - -# Log in to a cluster and retrieve your SSH certificate: -$ tsh --proxy=proxy.example.com login - -# These commands execute `tsh ssh` under the hood: -$ ssh user@node -$ ssh -p 6122 user@node ls -$ ssh -o ForwardAgent=yes user@node -$ ssh -o AddKeysToAgent=yes user@node -``` - - - - -```code -# Have this alias configured, perhaps via ~/.bashrc -$ alias ssh="/usr/local/bin/tsh ssh" - -# Log in to a cluster and retrieve your SSH certificate: -$ tsh --proxy=mytenant.teleport.sh login - -# These commands execute `tsh ssh` under the hood: -$ ssh user@node -$ ssh -p 6122 user@node ls -$ ssh -o ForwardAgent=yes user@node -$ ssh -o AddKeysToAgent=yes user@node -``` - - - - - -### Proxy ports - -By default, the Teleport Proxy Service listens on port `3080`. - -If a Teleport Proxy Service instance is configured to listen on non-default -ports, they must be specified via the `--proxy` flag as shown: - -```code -$ tsh --proxy=proxy.example.com:5000 -``` - -This `tsh` command will use port `5000` of the Proxy Service. - - - - - -### Port forwarding - -`tsh ssh` supports the OpenSSH `-L` flag which forwards incoming -connections from localhost to the specified remote host:port. 
The syntax of the `-L` -flag is as follows, where "bind_ip" defaults to `127.0.0.1`: - -```code -$ -L [bind_ip]:listen_port:remote_host:remote_port -``` - -Example: - -```code -$ tsh ssh -L 5000:web.remote:80 node -``` - -This will connect to remote server `node` via the Proxy Service, then open a -listening socket on `localhost:5000`. Finally, it will forward all incoming -connections to `web.remote:80` via this SSH tunnel. - -It is often convenient to establish port forwarding, execute a local command -which uses the connection, and then disconnect. You can do this with the `--local` -flag. - -Example: - -```code -$ tsh ssh -L 5000:google.com:80 --local node curl http://localhost:5000 -``` - -This command: - -- Connects to `node`. -- Binds the local port `5000` to port `80` on `google.com`. -- Executes the `curl` command locally, which results in `curl` hitting `google.com:80` via `node`. - -### SSH jump host - -While implementing `ProxyJump` for Teleport, we have extended the feature to `tsh`. - - - - -```code -$ tsh ssh -J proxy.example.com telenode -``` - - - - -```code -$ tsh ssh -J mytenant.teleport.sh telenode -``` - - - - - -Known limitations: - -- Only one jump host is supported (`-J` supports chaining that Teleport does not utilize) and `tsh` will return an error in the case of two jump hosts, i.e. `-J proxy-1.example.com,proxy-2.example.com` will not work. -- When `tsh ssh -J user@proxy` is used, it overrides the SSH proxy defined in the tsh profile, and port forwarding is used instead of the existing Teleport proxy subsystem. - -### Resolving Node names - -`tsh` supports multiple methods to resolve remote Node names. - -- **Traditional**: by IP address or via DNS. -- **Nodename setting**: the `teleport` daemon supports the `nodename` flag, which allows Teleport administrators to assign alternative Node names. -- **Labels**: you can address a Node by a `name=value` pair. 
- -If we have two Nodes, one with the `os:linux` label and one with the `os:osx` label, we -can log in to the OSX Node with: - -```code -$ tsh ssh os=osx -``` - -This only works if there is only one remote Node with the `os:osx` label, but -you can still execute commands via SSH on multiple Nodes using labels as a -selector. This command will update all system packages on machines that run -Linux: - -```code -$ tsh ssh os=ubuntu apt-get update -y -``` - -### Short-lived sessions - -The default TTL of a Teleport user certificate is 12 hours. This can be modified -at login with the `--ttl` flag. This command logs you into the cluster with a -very short-lived (1 minute) temporary certificate: - -```code -$ tsh --ttl=1 login -``` - -You will be logged out after one minute, but if you want to log out immediately, -you can always run: - -```code -$ tsh logout -``` - -## Copying files - -To securely copy files to and from cluster Nodes, use the `tsh scp` command. It -is designed to mimic OpenSSH's `scp` command as much as possible: - -```code -$ tsh scp example.txt root@node:/path/to/dest -``` - -Again, you may want to create a bash alias like `alias scp="tsh --proxy=work -scp"` and use the familiar syntax: - -```code -$ scp -P 61122 -r files root@node:/path/to/dest -``` - -Teleport supports both the SCP and SFTP protocols. -OpenSSH `scp` or `sftp` commands can both be used in place of `tsh scp` -if desired. - -## Sharing sessions - -Suppose you are trying to troubleshoot a problem on a remote server. Sometimes -it makes sense to ask another team member for help. Traditionally, this could be -done by letting them know which host you're on, having them SSH in, starting a -terminal multiplexer like `screen`, and joining a session there. - -Teleport makes this more convenient. 
Let's log in to a server named `luna` -and ask Teleport for our current session status: - -```code -$ tsh ssh luna -# on host luna -$ teleport status - -# User ID : joe, logged in as joe from 10.0.10.1 43026 3022 -# Session ID : 7645d523-60cb-436d-b732-99c5df14b7c4 -Session URL: https://work:3080/web/sessions/7645d523-60cb-436d-b732-99c5df14b7c4 -``` - -Now you can invite another user account to the `work` cluster. You can share the -URL for access through a web browser, or you can share the session ID, and the -other user can join you through their terminal by typing: - -```code -$ tsh join -``` - - - Joining sessions requires special permissions that need to be set up by your cluster administrator. - Refer them to the [Moderated Sessions guide](../admin-guides/access-controls/guides/joining-sessions.mdx) for more information on configuring join permissions. - - -You can also list active sessions with the `tsh sessions ls` command. - - - Joining sessions is not supported in recording proxy mode (where `session_recording` is set to `proxy`). - - -## Connecting to SSH clusters behind firewalls - -Teleport supports creating clusters of servers located behind firewalls -**without any open listening TCP ports**. This works by creating reverse SSH -tunnels from behind-firewall environments into a Teleport Proxy Service you have access to. - -To learn more about setting up a trust relationship between clusters behind firewalls, see -[Configure Trusted Clusters](../admin-guides/management/admin/trustedclusters.mdx). - - - Trusted clusters are only available for self-hosted Teleport clusters. 
- - -Assuming the Teleport Proxy Server called `work` is configured with a few trusted -clusters, you can use the `tsh clusters` command to see a list of all the trusted clusters on the server: - -```code -$ tsh --proxy=work clusters - -# Cluster Name Status -# ------------ ------ -# staging online -# production offline -``` - -[CLI Docs - tsh clusters](../reference/cli/tsh.mdx#tsh-clusters) - -Now you can use the `--cluster` flag with any `tsh` command. For example, to list SSH nodes that are members of the `production` cluster, simply run: - -```code -$ tsh --proxy=work ls --cluster=production - -# Node Name Node ID Address Labels -# --------- ------- ------- ------ -# db-1 xxxxxxxxx 10.0.20.31:3022 kernel:4.4 -# db-2 xxxxxxxxx 10.0.20.41:3022 kernel:4.2 -``` - -Similarly, if you want to SSH into `db-1` inside the `production` cluster: - -```code -$ tsh --proxy=work ssh --cluster=production db-1 -``` - -This is possible even if Nodes in the `production` cluster are located behind a -firewall without open ports. This works because the `production` cluster -establishes a reverse SSH tunnel back into the Proxy Service called `work`, and -this tunnel is used to establish inbound SSH connections. - -## X11 forwarding - -In order to run graphical programs within an SSH session, such as an IDE like -Visual Studio Code, you'll need to request X11 forwarding for the session with -the `-X` flag. - -```code -$ tsh ssh -X node01 -``` - -X11 forwarding provides the server with secure access to your local X Server -so that it can communicate directly with your local display and I/O devices. - - - The `-Y` flag can be used to start Trusted X11 forwarding. This is needed - in order to enable more "unsafe" features, such as running clipboard or - screenshot utilities like `xclip`. However, it provides the server with - unmitigated access to your local X Server and puts your local machine at - risk of X11 attacks, so it should only be used with extreme caution. 
- - -In order to use X11 forwarding, you'll need to enable it on the Teleport Node. -You'll also need to ensure that your user has the `permit_x11_forwarding` role option: - -```code -$ tsh status -> Profile URL: https://proxy.example.com:3080 - Logged in as: dev - ... - Extensions: permit-X11-forwarding -``` - -## Proxying Git commands - -(!docs/pages/connect-your-client/includes/tsh-git.mdx!) - -## Custom aliases and defaults - -You can configure `tsh` to define aliases, custom commands and command-specific flag defaults. Using aliases, you can run frequently used `tsh` commands more easily. - -Aliases are defined in configuration files using the following syntax: - -```yaml -aliases: - "<alias>": "<command>" -``` - -The `<alias>` can only be a top-level subcommand. In other words, you can define a `tsh mycommand` alias but not `tsh my command`. - -`tsh` loads two kinds of configuration files: - -- global: set via the `$TELEPORT_GLOBAL_TSH_CONFIG` env var; if not provided, it defaults to `/etc/tsh.yaml` on non-Windows operating systems. -- user-specific: `$TELEPORT_HOME/config/config.yaml`, which by default resolves to `~/.tsh/config/config.yaml`. - -`tsh` merges the user-specific config with the global config. In case of conflicts (i.e. same alias defined in both files), the user-specific config has higher priority. - -In either of those files you can define an alias such as: - -```yaml -aliases: - "l": "tsh login --auth=okta" -``` - -From now on, `tsh l` will resolve to `tsh login --auth=okta`. - -You can also change the defaults for regular `tsh` commands: - -```yaml -aliases: - "status": "tsh status --format=json" -``` - -Calling external programs other than `tsh` is also possible: - -```yaml -aliases: - "connect": "bash -c 'tsh login $0 && tsh ssh $1'" -``` - -The example above demonstrates the usage of variables `$0` and `$1`. They represent arguments provided to the alias. With the definition above, `tsh connect foo bar` resolves to `bash -c 'tsh login foo && tsh ssh bar'`. 
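Because aliases like `connect` delegate to `bash -c`, the `$0`/`$1` convention is just Bash's positional-parameter rule: arguments after the command string are assigned to `$0`, `$1`, and so on. You can preview an expansion locally without involving `tsh` at all; this stand-in uses `echo` in place of the real commands:

```code
# bash -c assigns the first trailing argument to $0 and the second to $1,
# mirroring how the "connect" alias consumes its arguments.
bash -c 'echo "tsh login $0 && tsh ssh $1"' foo bar
# prints: tsh login foo && tsh ssh bar
```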
- -The alias can use as many arguments as needed. If the alias is invoked with too few arguments, `tsh` will report an error. Conversely, providing additional arguments is *not* an error. `tsh` will append any additional arguments to the end of an alias definition. - -Given the configuration: - -```yaml -aliases: - "example": "bash -c 'echo first=$0 $0-$1 $3'" -``` - -`tsh example 0 1 unused-2 3 unused-4` will expand to `bash -c 'echo first=0 0-1 3 unused-2 unused-4'`. - -You can also add the `$TSH` variable to an alias definition. When invoking the alias, `tsh` will expand this to the absolute path to current `tsh` executable. This can be useful if there are multiple `tsh` versions installed, or the currently used version is not in `PATH`. - -```yaml -aliases: - "status": "$TSH status --format=json" -``` - -The alias substitution happens before the command line flags are fully parsed. This means that it is not affected by the `--debug` flag. To troubleshoot your aliases, set the `TELEPORT_DEBUG=1` environment variable instead. This will cause the `tsh` logs to be printed to the console: - -```code -$ TELEPORT_DEBUG=1 tsh status -DEBU [TSH] Self re-exec command: tsh [status --format=json]. tsh/aliases.go:203 -... -``` - -## Debug logs - -Adding the `--debug` flag to a command or setting the `TELEPORT_DEBUG` env var to `1` makes tsh -print debug logs to standard output. - -### Unified logging system on macOS - -On macOS, the `--os-log` flag can be used instead of `--debug` to send debug logs to [the unified -logging system](https://support.apple.com/en-gb/guide/console/welcome/mac). This behavior can also be controlled through the `TELEPORT_OS_LOG` env var. 
- -To stream logs in a separate shell session: - -```code -$ log stream --predicate 'subsystem CONTAINS "tsh"' --style syslog --level debug -``` - -To dump logs captured so far to a file: - -```code -$ log show --predicate 'subsystem CONTAINS "tsh"' --style syslog --info --debug > tsh.log -``` - -The logs can also be inspected in [the Console -app](https://support.apple.com/en-gb/guide/console/cnsl1012/1.1/mac/15.0). Info and debug logs are -not shown by default, so make sure to select "Include Info Messages" and "Include Debug Messages" -from the Action menu. - -## Examining recorded sessions - -You can use `tsh` to examine sessions that users have completed in resources -protected by Teleport. This section explains how to list and play Teleport -session recordings with `tsh`. - -Note that you can also play session recordings in the Teleport Web UI. To do so, -navigate to the **Access Management** tab on the top sidebar and view the -**Session Recordings** tab on the left sidebar. - -### Listing recordings - -Run the following command to review recorded sessions: - -```code -$ tsh recordings ls -ID Type Participants Hostname Timestamp ------------------------------------- ---- ------------ -------- ------------------- -b0a04442-70dc-4be8-9308-7b7901d2d600 ssh jeff dev Nov 26 16:36:16 UTC -c0a02222-70dc-4be8-9308-7b7901d2d600 kube alice Nov 26 20:36:16 UTC -d0a04442-70dc-4be8-9308-7b7901d2d600 ssh navin test Nov 26 16:36:16 UTC -``` - -### Playing recordings - -To play a session recording, run the `tsh play` command with the ID of a session -as returned by `tsh recordings ls`: - -```code -$ tsh play c0a02222-70dc-4be8-9308-7b7901d2d600 -``` - -You can also run `tsh play` with the path to a TAR file that contains a session -recording: - -```code -$ tsh play ./my-recording.tar -``` - -To retrieve a TAR file containing a session recording, you must have access to -the session recording backend. 
This requires either a self-hosted Teleport -cluster or [external audit -storage](../admin-guides/management/external-audit-storage.mdx). - -The `tsh play` command can print recordings in several formats, depending on the -kind of resource the recorded session interacts with. To choose a format, use -the `--format` flag of `tsh play`: - -| `--format` value | Supported resources | Description | -|------------------|---------------------|-------------| -| `pty` (default) | Servers, Kubernetes clusters | `tsh` opens a pseudo-terminal to play each command executed in the session. | -| `text` | Servers, Kubernetes clusters | `tsh` dumps the entire recording directly to standard out. Timing data is ignored. | -| `json` | Servers, Kubernetes clusters, applications, databases | `tsh` prints a JSON-serialized list of audit events, separated by newlines. | -| `yaml` | Servers, Kubernetes clusters, applications, databases | `tsh` prints a YAML-serialized list of audit events, separated by `---` characters. | - -The playback speed can be customized with the `--speed` flag, which must be -one of `0.5x`, `1x`, `2x`, `4x`, or `8x`. - -```code -tsh play --speed=8x UUID -``` - -Another way to speed up playback is to skip idle time in the recording with the -`--skip-idle-time` flag. When enabled, tsh will respect the configured playback -speed during active sections of the recording, but it will skip over larger periods -of inactivity. - -## tsh configuration files - -You can use a configuration file to control the behavior of `tsh`. The scope of -the configuration file depends on its location: - -- `/etc/tsh.yaml` is the default location for global, shared configuration - settings. You can override the location with the `TELEPORT_GLOBAL_TSH_CONFIG` - environment variable. -- `$TELEPORT_HOME/config/config.yaml` is the default location for user-specific - configuration settings. The default location for `TELEPORT_HOME` is `~/.tsh`. 
- -`tsh` merges the settings from both configuration file locations, with the user -configuration settings taking precedence. - -### Extra proxy headers - -The `tsh` configuration file enables you to specify HTTP headers to be -included in requests to Teleport Proxy Servers with addresses matching -the `proxy` field. - -```yaml -add_headers: - - proxy: "*.example.com" # matching proxies will have headers included - headers: # headers are pairs to include in the http headers - foo: bar # Key/Value to be included in the http request -``` - -For example, adding HTTP headers can be useful if an intermediate HTTP proxy is -in place that requires setting an authentication token: - -```yaml -add_headers: - - proxy: "*.infra.corp.xyz" - headers: - "Authorization": "Bearer tokentokentoken" -``` - -### Aliases - -Aliases allow you to define custom commands or change the default flag values for existing commands using the following syntax: - -```yaml -aliases: - "<alias>": "<command>" -``` - -The `<alias>` can only be a top-level subcommand. In other words, you can define a `tsh mycommand` alias but not `tsh my command`. - -New command `tsh l`: - -```yaml -aliases: - "l": "tsh login --auth=okta" -``` - -Make `tsh status` use JSON as a default format: - -```yaml -aliases: - "status": "tsh status --format=json" -``` - -The alias can use an arbitrary number of arguments. If an argument variable `$N` is referenced, `tsh` will check that at least `N+1` arguments were given to the alias invocation. All arguments that were given but not referenced in the alias definition will be appended at the end. - -Define a custom command using `bash`. The `$0` and `$1` variables will be substituted with command arguments. - -```yaml -aliases: - "connect": "bash -c 'tsh login $0 && tsh ssh $1'" -``` - -Define a custom login command where the first argument specifies the `--auth` option. 
- -```yaml -aliases: - "ap": "tsh login --auth=$0 --proxy=teleport.example.com" -``` - -Given the configuration: - -```yaml -aliases: - "example": "bash -c 'echo first=$0 $0-$1 $3'" -``` - -`tsh example 0 1 unused-2 3 unused-4` will expand to `bash -c 'echo first=0 0-1 3 unused-2 unused-4'`. - -An alias definition can also reference the `$TSH` variable. If you use the -`$TSH` variable in an alias, `tsh` expands the variable to the absolute path of -the current `tsh` executable. This behavior can be useful if there are multiple -`tsh` versions installed, or the version you're currently using is not in the -`PATH`: - -```yaml -aliases: - "status": "$TSH status --format=json" -``` - -To troubleshoot aliases, set the `TELEPORT_DEBUG=1` environment variable. This will cause detailed logs to be printed to standard error: - -```code -$ TELEPORT_DEBUG=1 tsh status -DEBU [TSH] Self re-exec command: tsh [status --format=json]. tsh/aliases.go:203 -... -``` - -### Proxy templates - -With proxy templates, `tsh` dynamically determines the address of the Teleport -Proxy Service to connect to based on the address of the destination host in your -`tsh ssh` or `tsh proxy ssh` command: - -```yaml -proxy_templates: - -# Regular expression that the host server address `%h:%p` is matched against. -# The "replace rules" below can reference capturing groups from this regular -# expression (`$1`, `$2`, etc.). -- template: '^(\w+)\.(\w+):([0-9]+)$' # .: - - # Optional web proxy address to use for proxy jump (`--jumphost`, `-J`). - # - # Proxy Jump can be used to reduce latency in regionally distributed trusted - # clusters by connecting to a leaf node through the leaf proxy instead of the - # root proxy. - proxy: "$2.eu.example.com:443" - - # Optional cluster name to connect to (`--cluster`). - # - # Cluster can be used to connect to leaf nodes from the root proxy without - # first logging in to the leaf cluster. 
This may be useful in cases where - # proxy jump is not applicable, such as when the leaf clusters do not have - # their own public proxies. - cluster: "$2" - - # Optional host server address to connect to (`%h:%p`). - # - # Port defaults to 3022 if not explicitly provided with `--port`. - # If provided, it will take precedence over host resolution via - # query or search. - host: "$1:$3" - - # Optional predicate expression to resolve the target host with. - # - # Query by predicate expression similar to tsh ls --query. - # Has priority over search but will be ignored if a host is provided. - query: "labels.env == $1" - - # Optional fuzzy search terms to resolve the target host with. - # - # Search by a list of comma separated keywords similar to tsh ls --search. - # Only applied if host and query are not provided. - search: "$1" - -# Multiple templates can be provided. They are evaluated in order and the first -# match takes effect. -- template: ... -``` - -In the configuration above, `query` accepts a predicate expression. It has -priority over search but will be ignored if a host is provided. See the -[predicate language -documentation](../reference/predicate-language.mdx#resource-filtering) for -predicate expression examples. - -`tsh -J {{proxy}} ssh` and `tsh -J {{proxy}} proxy ssh` will attempt to match the -host server address `%h:%p` with the configured templates. For each replace rule that is set, -the corresponding CLI value is set. - -If leaf certificates are required to connect to the node, `tsh` automatically -retrieves leaf certificates from the root cluster: - -```code -$ tsh ssh -J {{proxy}} node1.leaf1 -# becomes -$ tsh ssh -J leaf1.eu.example.com:443 --cluster leaf1 node1 -``` - -If no template matches, an error is returned. 
- -```code -$ tsh ssh -J {{proxy}} node1.none.example.com -ERROR: proxy jump contains {{proxy}} variable but did not match any of the templates in tsh config -``` - -If you don't explicitly provide the proxy variable `-J {{proxy}}`, `tsh` still -attempts to match a template, but won't fail if there isn't a match. -Additionally, `tsh` won't replace the `proxy` value if it's explicitly set by -the client: - -```code -$ tsh ssh -J leaf2.us.example.com:443 node1.leaf2 -# becomes -$ tsh ssh -J leaf2.us.example.com:443 --cluster leaf2 node1 -``` - -Proxy Templates can also be used with OpenSSH by setting the `ProxyCommand` -in `~/.ssh/config` to use `tsh proxy ssh`. - -```txt -Host *.example.com - Port 3022 - ProxyCommand tsh proxy ssh -J {{proxy}} %r@%h:%p -``` - -As a result, you can use `tsh ssh` and `ssh` interchangeably. - -```code -$ tsh ssh node1.leaf1 -# is equivalent to -$ ssh node1.leaf1 -``` - -## Uninstalling tsh - -To remove `tsh` and associated user data see -[Uninstalling Teleport](../admin-guides/management/admin/uninstall-teleport.mdx). - -## Further reading - -Read the [`tsh` CLI Reference](../reference/cli/tsh.mdx) for all `tsh` commands -and their options. diff --git a/docs/pages/connect-your-client/vnet.mdx b/docs/pages/connect-your-client/vnet.mdx deleted file mode 100644 index a11ba1566dba6..0000000000000 --- a/docs/pages/connect-your-client/vnet.mdx +++ /dev/null @@ -1,234 +0,0 @@ ---- -title: Using VNet -description: Using VNet ---- - -This guide explains how to use VNet to connect to TCP applications available through Teleport. - -## How it works - -VNet automatically proxies connections from your computer to TCP apps available -through Teleport. -A program on your device can securely connect to internal applications protected -by Teleport without having to know about Teleport authentication details. -Underneath, VNet authenticates the connection with your Teleport credentials and -securely tunnels the TCP connection to your application. 
-This is all done client-side – VNet sets up a local DNS name server that -intercepts DNS requests for your internal apps and responds with a virtual IP -address managed by VNet that will forward the connection to your application. - -![Diagram showing VNet architecture](../../img/vnet/how-it-works.svg) - -VNet is available on macOS and Windows in Teleport Connect and tsh, with plans -for Linux support in a future version. - -## Prerequisites - - - -- A client machine running macOS Ventura (13.0) or higher. -- [Teleport Connect](teleport-connect.mdx), version 16.0.0 or higher. - - -- A client machine running Windows 10 or higher. -- [Teleport Connect](teleport-connect.mdx), version 17.3.0 or higher. - - - -## Step 1/3. Start Teleport Connect - -Open Teleport Connect and log in to the cluster. Find the TCP app you want to connect to. TCP apps -have `tcp://` as the protocol in their addresses. - -![Resource list in Teleport Connect with a TCP hovered over](../../img/use-teleport/vnet-resources-list@2x.png) - -## Step 2/3. Start VNet - -Click "Connect" next to the TCP app. This starts VNet if it's not already running. Alternatively, -you can start VNet through the connection list in the top left. - -
-First launch on macOS -During the first launch, macOS will prompt you to enable a background item for tsh.app. VNet needs -this background item in order to configure DNS on your device. To enable the background item, either -interact with the system notification or go to System Settings > General > Login Items and look for -tsh.app under "Allow in the Background". - -![VNet starting up](../../img/use-teleport/vnet-starting@2x.png) -
- -## Step 3/3. Connect - -Once VNet is running, you can connect to the application using the application client you would -normally use to connect to it. - -```code -$ psql postgres://postgres@tcp-app.teleport.example.com/postgres -``` - - -Unless the application specifies [multiple -ports](../enroll-resources/application-access/guides/tcp.mdx#configuring-access-to-multiple-ports), -VNet proxies connections over any port used by the application client. For multi-port apps, the port -number must match one of the target ports of the app. To see a list of target ports, click the -three dot menu next to an application in Teleport Connect or execute `tsh apps ls`. - -If [per-session MFA](../admin-guides/access-controls/guides/per-session-mfa.mdx) is enabled, the -first connection over each port triggers an MFA check. - - -VNet will automatically start on the next Teleport Connect launch unless you stop it -before closing Teleport Connect. - -## `tsh` support - -VNet is available in `tsh` as well. Using it involves logging into the cluster and executing the -command `tsh vnet`. - -```code -$ tsh login --proxy=teleport.example.com -$ tsh vnet -``` - -## Troubleshooting - -### Conflicting IPv4 ranges - -On the client computer, VNet uses IPv4 addresses from the CGNAT IP range `100.64.0.0/10` by -default, and needs to configure addresses and routes for this range. -This can conflict with other VPN-like applications; notably, Tailscale also uses -this range. - -If you are experiencing connectivity problems with VNet, check if you are -running Tailscale or another VPN client, and try disabling it to see if the -issue persists. -To avoid the conflict and run VNet alongside Tailscale or another VPN client, you -can configure VNet to use a different IPv4 range; see our VNet configuration -[guide](../enroll-resources/application-access/guides/vnet.mdx#configuring-ipv4-cidr-range). 
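-For reference, the IPv4 range is changed through a cluster-level `vnet_config` resource. A minimal sketch of such a resource is shown below; the CIDR value is only an example, and any private range that does not overlap with your VPN software can be used: - -```yaml -# Hypothetical example of a vnet_config resource overriding the default -# CGNAT range; apply it with `tctl create -f` on the cluster. -kind: vnet_config -version: v1 -metadata: - name: vnet-config -spec: - # Example non-CGNAT private range; pick one free in your environment. - ipv4_cidr_range: "10.100.0.0/16" -``` 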
- -### Connecting to the app without VNet - -Sometimes connectivity issues are not related to VNet, and you can narrow that down by trying to -connect to your app without VNet. Make sure your app appears in the Connect resources view, or the -output of `tsh apps ls`. Turn off VNet and try creating a local proxy to your app (with debug -logging enabled) with `tsh proxy app -d <app-name>`. - -### Timeouts when trying to reach a Teleport cluster - -If VNet doesn't have a chance to clean up before stopping, such as during sudden device shut down, -it may leave leftover DNS configuration files in `/etc/resolver`. Those files tell your computer to -talk to a DNS server operated by VNet when connecting to your cluster. But since VNet is no longer -running, there's no DNS server to answer those calls. - -To clean up those files, simply start VNet again. Alternatively, you can remove the leftover files -manually. - -### Verifying that VNet receives DNS queries - -Start VNet with `tsh vnet -d`. Look at `/var/log/vnet.log` and note the IPv6 and IPv4 CIDR range used by VNet. - -```code -From tsh vnet -d: -INFO [VNET] Running Teleport VNet. ipv6_prefix:fd60:67ec:4325:: vnet/vnet.go:317 - -From /var/log/vnet.log: -INFO Setting an IP route for the VNet. netmask:100.64.0.0/10 vnet/osconfig_darwin.go:47 -``` - -Send a query for a TCP app available in your cluster, replacing `tcp-app.teleport.example.com` with the name of your app: - -```code -$ dscacheutil -q host -a name tcp-app.teleport.example.com -name: tcp-app.teleport.example.com -ipv6_address: fd60:67ec:4325::647a:547d - -name: tcp-app.teleport.example.com -ip_address: 100.68.51.151 -``` - -The addresses reported by `dscacheutil` should belong to the ranges reported by VNet above. - -Querying for anything other than an address of a TCP app should return the address belonging to the -Proxy Service. 
- -```code -$ dscacheutil -q host -a name dashboard.teleport.example.com -name: dashboard.teleport.example.com -ipv6_address: 2606:2800:21f:cb07:6820:80da:af6b:8b2c - -name: dashboard.teleport.example.com -ip_address: 93.184.215.14 -``` - -Querying for both addresses should result in some output being emitted by `tsh vnet -d`. - -### Submitting an issue - -When [submitting an -issue](https://github.com/gravitational/teleport/issues/new?assignees=&labels=bug,vnet&template=bug_report.md), -make sure to include VNet logs as well as [Teleport Connect -logs](teleport-connect.mdx#submitting-an-issue). - -You can collect VNet logs using the instructions below: - - - -Logs from the VNet daemon are sent to [the unified logging system](https://support.apple.com/en-gb/guide/console/welcome/mac). - -To stream logs: - -```code -$ log stream --predicate 'subsystem ENDSWITH ".vnetd"' --style syslog --level info -``` - -To dump logs captured so far to a file: - -```code -$ log show --predicate 'subsystem ENDSWITH ".vnetd"' --style syslog --info > vnet.log -``` - -The logs can also be inspected in [the Console -app](https://support.apple.com/en-gb/guide/console/cnsl1012/1.1/mac/15.0). Info logs are not shown -by default, so make sure to select "Include Info Messages" from the Action menu. - -At the moment it's not possible to enable debug logs in the VNet daemon. - -{/* TODO: DELETE IN 21.0.0 */} -Before version 18.0.0, VNet logs were saved in `/var/log/vnet.log`. - -If the error is related to Teleport Connect not being able to start VNet or issues with code -signing, searching through `/var/log/com.apple.xpc.launchd/launchd.log` for `tsh` soon after -attempting to start VNet might also bring up relevant information: - -```code -$ grep tsh /var/log/com.apple.xpc.launchd/launchd.log -``` - - -Logs are saved to a custom log in Event Log called Teleport. 
To browse them, open [Event -Viewer](https://learn.microsoft.com/en-us/shows/inside/event-viewer), select "Applications and -Services Logs" in the sidebar on the left and choose "Teleport". - -To save them to a file, select "Save All Events As…" from the sidebar on the right. - -Alternatively, you can save them to a file with a PowerShell command: - -```code -$ Get-WinEvent -LogName Teleport -FilterXPath "*[System[Provider[@Name='vnet']]]" -Oldest | Format-Table -Property TimeCreated,LevelDisplayName,Message -Wrap | Out-File vnet.log -``` - -To enable debug logs, search for "Edit the system environment variables" in the Start Menu. Select -"Environment Variables…" and then add a new _system_ variable with the name `TELEPORT_DEBUG` and the -value set to `1`, then restart VNet. - -{/* TODO: DELETE IN 21.0.0 */} -Before version 18.0.0, VNet logs were saved in `C:\Program Files\Teleport Connect\resources\bin\logs.txt`. - - - -## Next steps - -- Read our VNet configuration [guide](../enroll-resources/application-access/guides/vnet.mdx) - to learn how to configure VNet access to your applications. -- Read [RFD 163](https://github.com/gravitational/teleport/blob/master/rfd/0163-vnet.md) to learn how VNet works on a technical level. diff --git a/docs/pages/core-concepts.mdx b/docs/pages/core-concepts.mdx index 591997f3cb574..358e1e104c834 100644 --- a/docs/pages/core-concepts.mdx +++ b/docs/pages/core-concepts.mdx @@ -1,6 +1,9 @@ --- title: Teleport Core Concepts description: Learn the key components that make up Teleport. +tags: + - conceptual + - platform-wide --- Here are the core concepts that describe a Teleport deployment. You will see @@ -17,7 +20,7 @@ within your infrastructure, such as Kubernetes clusters and Windows desktops. A minimal Teleport cluster consists of the **Teleport Auth Service** and **Teleport Proxy Service**. 
In a demo environment, you can run these two services from a single `teleport` process on a [Linux -host](admin-guides/deploy-a-cluster/linux-demo.mdx). +host](./get-started/deploy-community.mdx). ### Teleport Auth Service @@ -29,9 +32,10 @@ issues certificates to clients and maintains an audit log. The Auth Service is the only component of a cluster that has to be connected to a backend, which it uses to store cluster state and the certificate authorities' private keys. **Teleport services** are stateless and interact with the Auth -Service via a gRPC API. +Service via a gRPC API. You can run multiple Auth Service instances in a cluster for high availability. -You can run multiple Auth Service instances in a cluster for high availability. +As part of the Teleport control plane, the Auth Service is a core system that manages identity, access, and trust within your cluster. +If you are a Teleport Cloud customer, this service is fully managed for you. Read our guides to how [authorization](reference/architecture/authorization.mdx) and [authentication](reference/architecture/authentication.mdx) work in Teleport. @@ -46,61 +50,86 @@ Services**, which can run in private networks. This means that, in the Proxy Service's minimal configuration, you can expose only port `443` to the internet and run the rest of your infrastructure in private networks. -You can also configure clients to bypass Proxy Service instances and connect to -resources with Teleport-issued certificates directly. +As a key component of the Teleport control plane, the Proxy Service handles secure routing and session recording +for all incoming connections. For Teleport Cloud users, the Proxy Service is fully managed and hosted by Teleport. -Read our guide to [how the Teleport Proxy Service -works](reference/architecture/proxy.mdx). +Read our guide to [how the Teleport Proxy Service works](reference/architecture/proxy.mdx). 
## Teleport services - -A **Teleport service** manages access to resources in your infrastructure, such -as Kubernetes clusters, Windows desktops, internal web applications, and -databases. +as Kubernetes clusters, Windows desktops, internal web applications, etc. - -A single running `teleport` process can run one or more **Teleport services**, -depending on the user's configuration. Read about all subcommands of `teleport` -in our [CLI Reference](./reference/cli/teleport.mdx). +Each Teleport process can run one or more services. Different services enable different Teleport functions. For example, the SSH Service provides SSH access +to a resource, and the Database Service proxies connections to databases. Read about all subcommands of `teleport` in our [CLI Reference](./reference/cli/teleport.mdx). + +### Teleport SSH Service + +An SSH server implementation that allows users to execute commands on remote +machines while taking advantage of Teleport's built-in access controls, +auditing, and session recording. The SSH service is enabled by default. + +Read more about the [Teleport SSH Service](./enroll-resources/server-access/server-access.mdx). + +### Teleport Kubernetes Service + +Proxies HTTP traffic to the Kubernetes API server. Use it for access control and session tracking for Kubernetes. + +Read more about the [Teleport Kubernetes +Service](./enroll-resources/kubernetes-access/introduction.mdx). ### Teleport Application Service Proxies HTTP and TCP traffic to user-configured endpoints, e.g., internal web -applications or the AWS Console. +applications or the AWS Console. Protect your internal tools and dashboards. Read more about the [Teleport Application -Service](./enroll-resources/application-access/introduction.mdx). +Service](./enroll-resources/application-access/application-access.mdx). ### Teleport Database Service Proxies TCP traffic in the native protocols of popular databases, including -PostgreSQL and MySQL. 
Use to enforce role-based access to databases with full audit logging. Read more about the [Teleport Database Service](./enroll-resources/database-access/database-access.mdx). +### Teleport Discovery Service + +The Teleport Discovery Service automates the process of finding and enrolling cloud resources such as AWS EC2 instances and Azure VMs into a +Teleport cluster. It continuously scans supported cloud environments and adds matching resources based on configured rules, +reducing the need for manual node registration. + +Read more about the [Teleport Discovery Service](./enroll-resources/auto-discovery/auto-discovery.mdx). + ### Teleport Desktop Service -Proxies Remote Desktop Protocol traffic to Windows desktops. +Proxies Remote Desktop Protocol (RDP) traffic to Windows desktops. Enables access control and session recording for Windows systems. Read more about the [Teleport Desktop Service](./enroll-resources/desktop-access/introduction.mdx). -### Teleport Kubernetes Service +### Teleport Jamf Service -Proxies HTTP traffic to the Kubernetes API server. +The Jamf Service integrates with Jamf to enroll trusted devices for macOS endpoints. +Before a user is granted access, Teleport checks the device's compliance status with Jamf. +This service is ideal for organizations that need to verify macOS device posture as part of +their access control strategy, ensuring only trusted devices are allowed to connect. +[Read more about the Jamf Service](./identity-governance/device-trust/jamf-integration.mdx). -Read more about the [Teleport Kubernetes -Service](./enroll-resources/kubernetes-access/introduction.mdx) +### Teleport Debug Service -### Teleport SSH Service +Provides internal diagnostics, exposing metrics, performance profiling tools, and debugging endpoints. +It is designed for measuring performance, troubleshooting scenarios, and enabling operators to change the logging level without restarting Teleport. 
+This can be helpful for analyzing bottlenecks, or diagnosing unexpected behavior in a controlled environment. The debug service is exposed locally by default. +[Read more about the Debug Service](zero-trust-access/management/diagnostics/troubleshooting.mdx#step-13-enable-verbose-logging). -An SSH server implementation that allows users to execute commands on remote -machines while taking advantage of Teleport's built-in access controls, -auditing, and session recording. +## Agent -Read more about the [Teleport SSH Service](./enroll-resources/server-access/introduction.mdx). +A Teleport instance that runs one or more services to provide access to infrastructure resources. +It can be hosted separately from the resources it manages, but all agents must run within the same network as their target resources. -### Machine ID +## Machine & Workload Identity Allows machines and services—called bot users—to communicate securely with resources in your infrastructure by automatically provisioning and renewing @@ -110,17 +139,15 @@ Bot users can connect to resources in your infrastructure without relying on static credentials (e.g., certificates and private keys) that become more vulnerable to attacks the longer they remain in use. -Unlike other **Teleport services**, Machine ID runs via the `tbot` binary, -rather than the `teleport` binary. +Unlike other **Teleport services**, Machine & Workload Identity runs via the +`tbot` binary, rather than the `teleport` binary. -Read more in our [Machine ID guide](./enroll-resources/machine-id/introduction.mdx). +### `tbot` -### Agent +`tbot` is a lightweight agent designed to provision short-lived credentials to workloads and automation systems like CI/CD pipelines. +It enables secure machine and workload authentication without relying on long-lived secrets. This makes it ideal for infrastructure automation and Zero Trust deployments. -An instance of a **Teleport service** is called an **agent**. 
For all Teleport -services besides the **Teleport SSH Service**, an agent can enable access to -multiple resources, and can run on a separate host from the resources it enables -access to. All agents must run in the same network as their target resources. +Read more in our [Machine & Workload Identity documentation](./machine-workload-identity/machine-workload-identity.mdx). ## Teleport editions @@ -143,7 +170,7 @@ and **Proxy Service**, including upgrades and certificate management. Each customer account, known as a **Teleport Enterprise Cloud tenant**, has its own subdomain of `.teleport.sh`, e.g., `mytenant.teleport.sh`. -Read more in our [Teleport Enterprise (Cloud) getting started guide](./get-started.mdx). +Read more in our [Teleport Enterprise (Cloud) getting started guide](./get-started/deploy-cloud.mdx). ### Teleport Enterprise @@ -153,7 +180,7 @@ advanced security needs, such as support for Federal Information Processing Standards (FIPS) and a hardware security module (HSM). Teleport Enterprise includes a support agreement with Teleport. -[Read the documentation](admin-guides/deploy-a-cluster/deploy-a-cluster.mdx) on +[Read the documentation](zero-trust-access/deploy-a-cluster/deploy-a-cluster.mdx) on self-hosting Teleport. ### Teleport Community Edition @@ -167,7 +194,7 @@ A **configuration resource** is a document stored on the **Teleport Auth Service** backend that specifies settings for your **Teleport cluster**. Examples include **roles**, **local users**, and **authentication connectors** -Read more in our [resource reference](./reference/resources.mdx). +Read more in our [resource reference](reference/infrastructure-as-code/teleport-resources/teleport-resources.mdx). ### Role @@ -176,7 +203,7 @@ privileges within a cluster. Teleport's role-based access control (RBAC) is restrictive by default, and a user needs explicit permissions before they can access a resource or perform management tasks on a cluster. 
-Read our [guide to Teleport roles](admin-guides/access-controls/guides/role-templates.mdx). +Read our [guide to Teleport roles](zero-trust-access/rbac-get-started/role-templates.mdx). ### Teleport users @@ -196,7 +223,7 @@ subject of the certificate—including its username and Teleport roles—to authorize the user. Read more about [local users](reference/access-controls/authentication.mdx) and how [SSO -authentication works in Teleport](admin-guides/access-controls/sso/sso.mdx). +authentication works in Teleport](zero-trust-access/sso/sso.mdx). ### Authentication connector @@ -216,6 +243,36 @@ users, roles, and resources, but the trust relationship allows users with certai in the root cluster to be mapped to roles and permissions defined in the leaf cluster. For more information about how to configure a trust relationship between clusters, -see [Configure Trusted Clusters](admin-guides/management/admin/trustedclusters.mdx). -For an overview of the architecture used in a trusted cluster relationship, see -[Trusted Cluster Architecture](reference/architecture/trustedclusters.mdx). +see [Configure Trusted Clusters](zero-trust-access/deploy-a-cluster/trustedclusters.mdx). For an overview of the architecture used in a trusted cluster relationship, see [Trusted Cluster Architecture](reference/architecture/trustedclusters.mdx). + +## Teleport clients + +A **Teleport client** connects to a Teleport cluster to authenticate, request access, or interact with resources such as servers, databases, Kubernetes clusters, desktops, or internal web apps. Clients use short-lived certificates and Teleport’s Role-Based Access Control (RBAC) to ensure secure, auditable access. All clients rely on the Teleport Auth Service to verify identity and issue credentials, and they may connect through the Proxy Service when accessing resources behind firewalls or in private networks. 
+ +### `tsh` + +`tsh` is the command-line tool used by Teleport users to authenticate, connect to infrastructure resources, and manage active sessions. +It supports SSH, Kubernetes, databases, and applications. This tool is commonly used by engineers and operators who prefer terminal-based workflows. + +### `tctl` + +`tctl` is the administrative command-line interface for managing a Teleport cluster. It allows administrators to configure roles, manage users, and work with resources such as tokens, trusted clusters, and audit logs. +It's typically used during initial cluster setup and for ongoing administrative tasks. + +### Teleport Connect + +**Teleport Connect** is a graphical desktop application that provides an intuitive interface for accessing infrastructure resources such as +servers, databases, applications, and desktops. It's ideal for engineers who prefer GUI-based access for their day-to-day tasks, offering the same secure, certificate-based access as other Teleport clients. + +### Teleport Kubernetes Operator + +The Teleport Kubernetes Operator is a controller that runs in your Kubernetes cluster to automate the configuration of Teleport resources, +such as users, roles, and Access Requests, using Kubernetes custom resources. + +### Teleport Terraform Provider + +The Teleport Terraform Provider allows you to manage Teleport resources such as users, roles, and connectors using Terraform. +It enables infrastructure-as-code workflows for configuring access and automating resource provisioning in Teleport. + +Further reading: +- [Introduction to Teleport clients](./connect-your-client/connect-your-client.mdx) Authenticate to Teleport and access protected resources. Designed for end users with links to additional documentation. 
diff --git a/docs/pages/enroll-resources/agents/add-service-to-agent.mdx b/docs/pages/enroll-resources/agents/add-service-to-agent.mdx index 71a674f26f3a8..82f7c52bbf61e 100644 --- a/docs/pages/enroll-resources/agents/add-service-to-agent.mdx +++ b/docs/pages/enroll-resources/agents/add-service-to-agent.mdx @@ -1,6 +1,11 @@ --- title: Enable a New Service on an Agent +sidebar_label: Enabling a New Service description: Explains how to edit the services that are running on a Teleport Agent. +tags: + - how-to + - zero-trust + - infrastructure-identity --- A single Teleport Agent can run multiple services. This guide shows you how to @@ -249,18 +254,18 @@ While this guide shows you how to create a token using `tctl`, you can also manage tokens using the Teleport Terraform provider or Kubernetes operator. See the following documentation for information on the token resource: - [Terraform - provider](../../reference/terraform-provider/resources/provision_token.mdx) + provider](../../reference/infrastructure-as-code/terraform-provider/resources/provision_token.mdx) - [Kubernetes - operator](../../reference/operator-resources/resources-teleport-dev-provisiontokens.mdx) + operator](../../reference/infrastructure-as-code/operator-resources/resources-teleport-dev-provisiontokens.mdx) You can set up a system to automate the process of assigning join tokens to agents, ensuring that all Teleport services you run have the correct join token permissions. Here are examples in the documentation: - [Enroll Infrastructure with - Terraform](../../admin-guides/infrastructure-as-code/terraform-starter/enroll-resources.mdx): + Terraform](../../zero-trust-access/infrastructure-as-code/terraform-provider/terraform-getting-started.mdx): Using Terraform to launch Teleport Agent instances that depend on join token resources. 
- [Automatically Register Resources with - Teleport](../../admin-guides/api/automatically-register-agents.mdx): Writing a + Teleport](../../zero-trust-access/api/automatically-register-agents.mdx): Writing a compute platform client that automatically creates join tokens and launches Teleport Agents with them. diff --git a/docs/pages/enroll-resources/agents/agents.mdx b/docs/pages/enroll-resources/agents/agents.mdx index 0026eed4c4a08..4bd3d73f28f47 100644 --- a/docs/pages/enroll-resources/agents/agents.mdx +++ b/docs/pages/enroll-resources/agents/agents.mdx @@ -1,6 +1,11 @@ --- title: Joining Teleport Agents description: Deploy Agents to enroll resources in your infrastructure with Teleport. You can run multiple Teleport services per Agent. +tags: + - how-to + - zero-trust + - infrastructure-identity +sidebar_position: 8 --- You can use Teleport to protect infrastructure resources like servers and @@ -17,7 +22,7 @@ the services that run on it. ![Diagram showing the architecture of an Agent pool](../../../img/agent-pool-diagram.png) Read our guide for how to use Terraform to [deploy a pool of -Agents](../../admin-guides/infrastructure-as-code/terraform-starter/enroll-resources.mdx). +Agents](../../zero-trust-access/infrastructure-as-code/terraform-provider/terraform-getting-started.mdx). For more information on the architecture of Teleport Agents, read [Teleport Agent Architecture](../../reference/architecture/agents.mdx). 
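Because one agent can run several services, a single configuration file enables them side by side. Here is a minimal sketch of such an agent configuration file (the proxy address, token path, and app details are placeholders, not values from your cluster):

```yaml
version: v3
teleport:
  # Placeholder proxy address and token path; replace with your own.
  proxy_server: teleport.example.com:443
  join_params:
    method: token
    token_name: /var/lib/teleport/token
# Two services enabled on the same agent:
ssh_service:
  enabled: true
app_service:
  enabled: true
  apps:
    - name: example-app
      uri: http://localhost:8080
```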
diff --git a/docs/pages/enroll-resources/agents/argocd-helm.mdx b/docs/pages/enroll-resources/agents/argocd-helm.mdx new file mode 100644 index 0000000000000..a6ae876b82011 --- /dev/null +++ b/docs/pages/enroll-resources/agents/argocd-helm.mdx @@ -0,0 +1,177 @@ +--- +title: Running Teleport Agents using Helm via ArgoCD +sidebar_label: Argo CD +description: How to install and configure the Teleport Kubernetes agent using Helm and ArgoCD +tags: + - ci-cd + - platform-wide + - how-to +--- + +Teleport can provide secure, unified access to your Kubernetes clusters. This +guide will show you how to deploy the Teleport Kubernetes agent on a Kubernetes cluster using Helm +and ArgoCD. + +## How it works + +Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes. It is used to orchestrate large deployments and to keep Kubernetes resources from drifting from their desired state. + +Teleport has [an official Helm chart (`teleport-kube-agent`)](../../reference/helm-reference/teleport-kube-agent.mdx) that deploys a Teleport Agent in a Kubernetes cluster. The agent can be configured to run several services, but by default it runs the `kubernetes_service` to provide access to the Kubernetes API via Teleport. + +This guide leverages ArgoCD's native Helm support to deploy the Teleport Agent using the `teleport-kube-agent` Helm chart. + +## Prerequisites + +- An existing Kubernetes cluster you wish to provide access to via Teleport. +- (!docs/pages/includes/tctl.mdx!) +- An existing ArgoCD instance (version 2.10 or greater) that can deploy to the + above Kubernetes cluster. +- The `tsh` client tool v(=teleport.version=)+ installed on your workstation. + You can download this from our [installation page](../../installation/installation.mdx). + +## Step 1/3. Generate a join token + +Teleport agents use a join token to obtain certificates and connect to Teleport. See the [joining docs](../../reference/deployment/join-methods.mdx) for more information. 
+The token is only used for the initial join; the Teleport Kube agent will store its +certificates in Kubernetes and won't need a token to join again in the future. +In this section, we will create a token for the agent to join the Teleport cluster. +```code +$ tctl tokens add --type=kube,app --ttl=5m +``` + +You can specify the following token types: + +(!docs/pages/includes/token-types.mdx!) + +See the `teleport-kube-agent` [chart +reference](../../reference/helm-reference/teleport-kube-agent.mdx#roles) for the +roles and token types that the chart supports. + +## Step 2/3. Configure and deploy the `teleport-kube-agent` Helm chart via ArgoCD + +1. Create a namespace for Teleport and configure its Pod Security Admission, + which enforces security standards on pods in the namespace: + + ```code + $ kubectl create namespace teleport + namespace/teleport created + + $ kubectl label namespace teleport 'pod-security.kubernetes.io/enforce=baseline' + namespace/teleport labeled + ``` + +2. Create a new ArgoCD application using the following as a template. + +```yaml +project: default +source: + repoURL: 'https://charts.releases.teleport.dev' + targetRevision: (=teleport.version=) + helm: + values: |- + roles: kube,app + authToken: $YOUR_AUTH_TOKEN + proxyAddr: $YOUR_PROXY_ADDRESS + kubeClusterName: $YOUR_KUBE_CLUSTER_NAME + + highAvailability: + replicaCount: 2 + podDisruptionBudget: + enabled: true + minAvailable: 1 + chart: teleport-kube-agent +destination: + server: 'https://kubernetes.default.svc' + namespace: teleport +# This section is used to allow the teleport-kube-agent-updater to update the agent +# without ArgoCD reverting the update. +ignoreDifferences: + - group: apps + kind: StatefulSet + name: $YOUR_APPLICATION_NAME + namespace: teleport + jqPathExpressions: + - '.spec.template.spec.containers[] | select(.name == "teleport").image' +``` + +3. 
Sync your changes to apply the configuration using the following command: +```code +$ argocd app sync $YOUR_APPLICATION_NAME +``` +4. To verify setup, navigate to the 'Resources' page in your Teleport + cluster to confirm the Kubernetes cluster is registered. + +## Step 3/3. Manage access to your new resource + +In this step, we'll create a Teleport role called `kube-access` +that allows users to send requests to any Teleport-protected Kubernetes +cluster as a member of the `viewers` group. The Teleport Kubernetes Service +will impersonate the `viewers` group when proxying requests from those users. + +1. Create a file called `kube-access.yaml` with the following content: + + ```yaml + kind: role + metadata: + name: kube-access + version: v7 + spec: + allow: + kubernetes_labels: + '*': '*' + kubernetes_resources: + - kind: '*' + namespace: '*' + name: '*' + verbs: ['*'] + kubernetes_groups: + - viewers + deny: {} + ``` + +1. Apply your changes: + + ```code + $ tctl create -f kube-access.yaml + ``` + + (!docs/pages/includes/create-role-using-web.mdx!) + +1. (!docs/pages/includes/add-role-to-user.mdx role="kube-access"!) + +While you have authorized the `kube-access` role to access Kubernetes clusters +as a member of the `viewers` group, this group does not yet have permissions +within its Kubernetes cluster. To assign these permissions, create a Kubernetes +`RoleBinding` or `ClusterRoleBinding` that grants permission to the `viewers` +group. + +1. 
Create a file called `viewers-bind.yaml` with the following contents: + + ```yaml + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: viewers-crb + subjects: + - kind: Group + # Bind the group "viewers" to the kubernetes_groups assigned in the "kube-access" role + name: viewers + apiGroup: rbac.authorization.k8s.io + roleRef: + kind: ClusterRole + # "view" is a default ClusterRole that grants read-only access to resources + # See: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles + name: view + apiGroup: rbac.authorization.k8s.io + ``` + +1. Apply the `ClusterRoleBinding` with `kubectl`: + + ```code + $ kubectl apply -f viewers-bind.yaml + ``` + +Your Teleport user now has permissions to assume membership in the `viewers` +group when accessing your Kubernetes cluster, and the `viewers` group now has +permissions to view resources in the cluster. + diff --git a/docs/pages/enroll-resources/agents/aws-ec2.mdx b/docs/pages/enroll-resources/agents/aws-ec2.mdx index a3e795c535404..4414ef5f58a6e 100644 --- a/docs/pages/enroll-resources/agents/aws-ec2.mdx +++ b/docs/pages/enroll-resources/agents/aws-ec2.mdx @@ -1,6 +1,12 @@ --- title: Joining Services via AWS EC2 Identity Document +sidebar_label: EC2 Identity Document description: Use the EC2 join method to add services to your Teleport cluster on AWS +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws --- This guide explains how to use the **EC2 join method** to configure Teleport @@ -37,7 +43,7 @@ Teleport processes joining the cluster. ## Prerequisites -(!docs/pages/includes/self-hosted-prereqs-tabs.mdx!) +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="self-hosted Teleport"!) - (!docs/pages/includes/tctl.mdx!) 
- An AWS EC2 instance to host a Teleport process, with the Teleport binary diff --git a/docs/pages/enroll-resources/agents/aws-iam.mdx b/docs/pages/enroll-resources/agents/aws-iam.mdx index aaa6aac84b6a1..6611ef9cf6338 100644 --- a/docs/pages/enroll-resources/agents/aws-iam.mdx +++ b/docs/pages/enroll-resources/agents/aws-iam.mdx @@ -1,6 +1,12 @@ --- title: Joining Services via AWS IAM Role +sidebar_label: IAM Role description: Use the IAM join method to add services to your Teleport cluster on AWS +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws --- This guide explains how to use the **IAM join method** to configure Teleport @@ -36,9 +42,6 @@ pre-signed `sts:GetCallerIdentity` request to the Teleport Auth Service. The service's identity must match an allow rule configured in your AWS service joining token. -Support for joining a cluster with the Proxy Service behind a layer 7 load -balancer or reverse proxy is available in Teleport 13.0+. - ## Prerequisites (!docs/pages/includes/edition-prereqs-tabs.mdx!) diff --git a/docs/pages/enroll-resources/agents/azure.mdx b/docs/pages/enroll-resources/agents/azure.mdx index c2c91234dba3f..ce9bdace79079 100644 --- a/docs/pages/enroll-resources/agents/azure.mdx +++ b/docs/pages/enroll-resources/agents/azure.mdx @@ -1,6 +1,12 @@ --- title: Joining Services via Azure Managed Identity +sidebar_label: Azure Managed Identity description: Use the Azure join method to join Teleport services to your Teleport cluster on Azure +tags: + - how-to + - zero-trust + - infrastructure-identity + - azure --- This guide will explain how to use the **Azure join method** to configure @@ -8,12 +14,18 @@ Teleport instances to join your Teleport cluster without sharing any secrets when they are running in an Azure Virtual Machine. The Azure join method is available to any Teleport process running in an -Azure Virtual Machine. 
Support for joining a cluster with the Proxy Service -behind a layer 7 load balancer or reverse proxy is available in Teleport 13.0+. +Azure Virtual Machine. For other methods of joining a Teleport process to a cluster, see [Joining Teleport Services to a Cluster](agents.mdx). +## How it works + +Under the hood, Teleport processes prove that they are running in your Azure +subscription by sending a signed attested data document and access token to the +Teleport Auth Service. The VM's identity must match an allow rule configured in +your Azure joining token. + ## Prerequisites (!docs/pages/includes/edition-prereqs-tabs.mdx!) @@ -21,24 +33,61 @@ Teleport Services to a Cluster](agents.mdx). - An Azure Virtual Machine running Linux with the Teleport binary installed. The Virtual Machine must have a [Managed Identity](https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview) - assigned to it with permission to read virtual machine info. + assigned to it. - (!docs/pages/includes/tctl.mdx!) ## Step 1/5. Set up a Managed Identity Every virtual machine hosting a Teleport process using the Azure method to join -your Teleport cluster needs a Managed Identity assigned to it. The identity -requires the `Microsoft.Compute/virtualMachines/read` permission so Teleport can -look up the virtual machine. No other permissions are required. +your Teleport cluster needs a Managed Identity assigned to it. -(!docs/pages/includes/server-access/azure-join-managed-identity.mdx!) + + -## Step 2/5. Create the Azure joining token +To set up a Managed Identity: +1. 
Navigate to [Virtual machines view](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Compute%2FVirtualMachines) +if you're hosting Teleport on an Azure VM, +or navigate to [Virtual machine scale sets view](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Compute%2FVirtualMachineScaleSets) +if you're hosting Teleport on an Azure VMSS. +2. Select the VM or VMSS hosting your Teleport Service. +3. In the right-side panel, click the **Security/Identity** tab. +4. Under the **Identity** section, select the **System assigned** tab. +5. Toggle the **Status** switch to **On**. +6. Click **Save**. + +If you're using VMSS and it is configured with manual upgrade mode, you must +update the VM instances for the identity changes to take effect: +- Click the **Instances** tab in the right panel. +- Select the VM instances to update. +- Click **Restart**. + + + + +To attach a system-assigned identity to a regular VM, run: + +```code +$ az vm identity assign --resource-group --name +``` + +To attach a system-assigned identity to an Azure VMSS, run: -Under the hood, Teleport processes will prove that they are running in your -Azure subscription by sending a signed attested data document and access token -to the Teleport Auth Service. The VM's identity must match an allow rule -configured in your Azure joining token. +```code +$ az vmss identity assign --resource-group --name +``` + +If you're using VMSS and it is configured with manual upgrade mode, you must +update the VM instances for the identity changes to take effect. Run the following +command to propagate the identity change: + +```code +$ az vmss update-instances --resource-group --name --instance-ids * +``` + + + + +## Step 2/5. Create the Azure joining token Create the following `token.yaml` with an `allow` rule specifying your Azure subscription and the resource group that your VM's identity must match. 
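As an illustrative sketch of what that `token.yaml` can look like (the subscription ID and resource group below are placeholders; check the provision token reference for the full set of fields):

```yaml
kind: token
version: v2
metadata:
  name: azure-token
spec:
  roles: [Node]
  join_method: azure
  azure:
    allow:
      # Placeholder subscription ID and resource group; replace with your own.
      - subscription: "11111111-1111-1111-1111-111111111111"
        resource_groups: ["example-resource-group"]
```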
@@ -77,7 +126,8 @@ teleport: token_name: azure-token method: azure azure: - # client_id is the client ID of the managed identity created in Step 1. + # client_id is the client ID of a user-assigned managed identity. + # Omit this value when using a system-assigned managed identity. client_id: 11111111-1111-1111-1111-111111111111 proxy_server: teleport.example.com:443 ssh_service: diff --git a/docs/pages/enroll-resources/agents/gcp.mdx b/docs/pages/enroll-resources/agents/gcp.mdx index 21ab337506e95..e90b2a9834354 100644 --- a/docs/pages/enroll-resources/agents/gcp.mdx +++ b/docs/pages/enroll-resources/agents/gcp.mdx @@ -1,18 +1,30 @@ --- title: Join Services with GCP +sidebar_label: Google Cloud description: Use the GCP join method to add services to your Teleport cluster. +tags: + - how-to + - zero-trust + - infrastructure-identity + - google-cloud --- This guide will explain how to use the **GCP join method** to configure Teleport processes to join your Teleport cluster without sharing any secrets when they are running in a GCP VM. +## How it works + The GCP join method is available to any Teleport process running on a GCP VM. The VM must have a [service account](https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances) assigned to it (the default service account is fine). No IAM roles are required on the Teleport process joining the cluster. +Under the hood, services prove that they are running in your GCP project by +sending a signed ID token which matches an allow rule configured in your GCP +joining token. + ## Prerequisites (!docs/pages/includes/edition-prereqs-tabs.mdx!) @@ -26,10 +38,6 @@ on the Teleport process joining the cluster. Configure your Teleport Auth Service with a special dynamic token which will allow services from your GCP projects to join your Teleport cluster. 
-Under the hood, services will prove that they are running in your GCP project -by sending a signed ID token which matches an allow rule configured in your GCP -joining token. - Create the following `token.yaml` file with a `gcp.allow` rule specifying your GCP project ID(s), service account(s), and location(s) in which your GCP instances will run: diff --git a/docs/pages/enroll-resources/agents/join-token.mdx b/docs/pages/enroll-resources/agents/join-token.mdx index c2fc4c43d9d6d..b5572a31f8104 100644 --- a/docs/pages/enroll-resources/agents/join-token.mdx +++ b/docs/pages/enroll-resources/agents/join-token.mdx @@ -1,8 +1,15 @@ --- title: Join Services with a Secure Token +sidebar_label: Secure Token description: This guide shows you how to join a Teleport instance to your cluster using a join token in order to proxy access to resources in your infrastructure. +tags: + - how-to + - zero-trust + - infrastructure-identity --- +{/* lint disable page-structure remark-lint */} + In this guide, we will show you how to register a Teleport process running one or more services to your cluster by presenting a **join token**. @@ -24,8 +31,7 @@ relationship with the Teleport cluster. accessing resources in your infrastructure.
- tip" title="Running multiple Proxy Service instances behind - load balancer" > + Running multiple Proxy Service instances behind a load balancer The join token method works if a cluster includes a single Proxy Service instance as well as multiple Proxy Service instances behind a load balancer @@ -100,13 +106,14 @@ the `tctl tokens add` output. Here are all the values we support for `--type` flag when creating a join token: -(!docs/pages/includes/token-types.mdx) +(!docs/pages/includes/token-types.mdx!)
Administrators can generate tokens as they are needed. A Teleport process can use a token multiple times until its time to live (TTL) expires, with the -exception of tokens with the `bot` type, which are used by Machine ID. +exception of tokens with the `bot` type, which are used by Machine & Workload +Identity. To list all of the tokens you have generated, run the following command: @@ -120,7 +127,7 @@ Token Type Labels Expiry Time (UTC)
An insecure alternative: static tokens - + Use short-lived tokens instead of long-lived static tokens. Static tokens are easier to steal, guess, and leak. @@ -329,4 +336,4 @@ $ tctl tokens rm ## Next steps - If you have workloads split across different networks or clouds, we recommend - setting up trusted clusters. Read how to get started in [Configure Trusted Clusters](../../admin-guides/management/admin/trustedclusters.mdx). + setting up trusted clusters. Read how to get started in [Configure Trusted Clusters](../../zero-trust-access/deploy-a-cluster/trustedclusters.mdx). diff --git a/docs/pages/enroll-resources/agents/kubernetes.mdx b/docs/pages/enroll-resources/agents/kubernetes.mdx index 345f41d5f5aa2..45ae40db5b6ef 100644 --- a/docs/pages/enroll-resources/agents/kubernetes.mdx +++ b/docs/pages/enroll-resources/agents/kubernetes.mdx @@ -1,6 +1,11 @@ --- title: Joining Services via Kubernetes ServiceAccount Token +sidebar_label: Kubernetes Token description: Use Kubernetes ServiceAccount tokens to join services running in the same Kubernetes cluster as the Auth Service. +tags: + - how-to + - zero-trust + - infrastructure-identity --- This guide will explain how to use the **Kubernetes join method** to configure @@ -29,7 +34,7 @@ as the Auth Service. ## Prerequisites - A running Teleport cluster in Kubernetes. For details on how to set this up, - see [Guides for running Teleport using Helm](../../admin-guides/deploy-a-cluster/helm-deployments/helm-deployments.mdx). + see [Guides for running Teleport using Helm](../../zero-trust-access/deploy-a-cluster/helm-deployments/helm-deployments.mdx). - Editor access to the Kubernetes cluster running the Teleport cluster. You must be able to create Namespaces and Deployments. 
- A Teleport user with `access` role, or any other role that allows access to @@ -243,6 +248,6 @@ namespace "teleport-agent" deleted {/* vale messaging.protocol-products = NO */} - The possible values for `teleport-kube-agent` chart are documented [in its reference](../../reference/helm-reference/teleport-kube-agent.mdx). -- See [Application Access Guides](../application-access/guides/guides.mdx) +- See [Application Access guides](../application-access/application-access.mdx) - See [Database Access Guides](../database-access/guides/guides.mdx) {/* vale messaging.protocol-products = YES */} diff --git a/docs/pages/enroll-resources/agents/oracle.mdx b/docs/pages/enroll-resources/agents/oracle.mdx index d20b1d9c120a4..204111a7a1d50 100644 --- a/docs/pages/enroll-resources/agents/oracle.mdx +++ b/docs/pages/enroll-resources/agents/oracle.mdx @@ -1,18 +1,38 @@ --- title: Join Services with Oracle Cloud +sidebar_label: Oracle Cloud description: Use the Oracle join method to add services to your Teleport cluster. +tags: + - how-to + - zero-trust + - infrastructure-identity --- This guide will explain how to use the **Oracle join method** to configure Teleport processes to join your Teleport cluster without sharing any secrets when they are running in an Oracle Cloud Infrastructure (OCI) Compute instance. +## How it works + The Oracle join method is available to any Teleport process running on an OCI Compute instance. +Under the hood, services prove that they are running in your OCI tenant by +sending their signed OCI instance identity certificate to the Auth Service, +along with a cryptographic signature proving ownership of the instance's +private key. +The Auth Service verifies the instance identity certificate was signed by +Oracle's root CAs and that the signature is valid. +The certificate verifies the instance's tenancy, compartment, and instance ID +and matches them against rules configured in an Oracle joining token. 
+ +Instances running a version older than Teleport v18.4.0 do not send the +instance identity certificate; instead, they send a presigned +self-authentication request to the OCI API for the Auth Service to execute. + ## Prerequisites -(!docs/pages/includes/edition-prereqs-tabs.mdx version="17.3.0"!) +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport (v17.3.0 or higher)"!) - An OCI Compute instance to host a Teleport service. - (!docs/pages/includes/tctl.mdx!) @@ -22,13 +42,11 @@ OCI Compute instance. Configure your Teleport Auth Service with a special dynamic token which will allow services from your OCI tenants to join your Teleport cluster. -Under the hood, services will prove that they are running in your OCI -tenant by sending a presigned self-authentication request to the OCI API -for the Auth Service to execute. - Create the following `token.yaml` file with an `oracle.allow` rule specifying the Oracle tenant(s), compartment(s), and region(s) in which your OCI -Compute instances will run: +Compute instances will run. +In Teleport v18.4.0 or higher, you can also specify exact compute instance OCIDs +that will be allowed. (!docs/pages/includes/provision-token/oracle-spec.mdx!) @@ -40,14 +58,20 @@ $ tctl create token.yaml ## Step 2/5. Configure permissions -Every OCI Compute instance needs permission to authenticate itself with the -Oracle Cloud API so the presigned request can succeed. +On Teleport versions older than v18.4.0, every OCI Compute instance needs IAM +permissions to authenticate itself with the Oracle Cloud API. +Newer Teleport versions do not require any permissions. +The permissions are only required if either the Auth Service or the joining +instance is running a Teleport version older than v18.4.0. + + 
+Configuring OCI IAM Permissions ### Create a dynamic group In the OCI console, navigate to [Identity/Domains](https://cloud.oracle.com/identity/domains). -Select your domain, then select **Dynamic groups**. Click **Create dynamic +Select the currently active domain, then select **Dynamic groups**. Click **Create dynamic group**. Create a group with the following matching rule, assigning to the OCID of the compartment your instance is in: @@ -78,6 +102,8 @@ Allow dynamic-group ''/'join-teleport' to inspect ![Create policy](../../../img/oracle/oracle-join-policy@2x.png) +
+ ## Step 3/5. Install Teleport Install Teleport on your OCI Compute instance. @@ -116,3 +142,11 @@ proxy_service: Once you have started Teleport, confirm that your service is able to connect to and join your cluster. + +## Troubleshooting + +### Join fails with error `oci api error: Not Found` + +This error likely means that the instance does not have the required permissions. +Check if the identity domain you created the dynamic group in has the Compute +service active; if it doesn't, try creating the group in a domain that does. diff --git a/docs/pages/enroll-resources/application-access/application-access.mdx b/docs/pages/enroll-resources/application-access/application-access.mdx index 203fe2dc590ad..4ce3991e3bac6 100644 --- a/docs/pages/enroll-resources/application-access/application-access.mdx +++ b/docs/pages/enroll-resources/application-access/application-access.mdx @@ -1,6 +1,115 @@ --- title: Applications -description: Guides to using Teleport to protect web applications, cloud provider APIs, and more. +description: Protect applications with Teleport by deploying the Teleport Application Service. +template: doc-page +tags: + - zero-trust + - infrastructure-identity --- - +import DocHero from "@site/src/components/Pages/Landing/DocHero"; +import UseCasesList from "@site/src/components/Pages/Landing/UseCasesList"; +import Resources from "@site/src/components/Pages/Homepage/Resources"; + +import userCheckSvg from "@site/src/components/Icon/svg/user-check.svg"; +import slidersHorizontalSvg from "@site/src/components/Icon/svg/sliders-horizontal.svg"; +import graphSvg from "@site/src/components/Icon/svg/graph.svg"; +import questionSvg from "@site/src/components/Icon/svg/question2.svg"; + + +Teleport provides secure access to applications and cloud provider APIs with RBAC, audit logging, and just-in-time access. 
+ +The Teleport Application Service enables secure connections to private networks, supports Teleport JWTs for authentication, and provides guidance on app enrollment and access control configuration. + +Note that it is also possible to secure applications with Teleport Identity +Governance by setting up Teleport as an IdP. See [Using the Teleport SAML +IdP](../../identity-governance/idps/usage/usage.mdx). + + + + + diff --git a/docs/pages/enroll-resources/application-access/cloud-apis/aws-console-roles-anywhere.mdx b/docs/pages/enroll-resources/application-access/cloud-apis/aws-console-roles-anywhere.mdx new file mode 100644 index 0000000000000..6c8d4eb5e6ee8 --- /dev/null +++ b/docs/pages/enroll-resources/application-access/cloud-apis/aws-console-roles-anywhere.mdx @@ -0,0 +1,366 @@ +--- +title: AWS Console and CLI Access with Roles Anywhere +sidebar_label: AWS (via Roles Anywhere) +description: How to use AWS IAM Roles Anywhere with Teleport for AWS Console and CLI access +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws +--- + +Teleport integrates with AWS IAM Roles Anywhere to provide AWS Console and CLI access. +This allows you to take advantage of Teleport role-based access controls, just-in-time Access Requests, and other Teleport Zero Trust Access and Identity Governance capabilities to manage access to your AWS infrastructure. + +If you're already using AWS IAM Roles Anywhere and looking to protect access to your AWS environment with Teleport, while ensuring full compatibility with all AWS API-based tooling (such as CLI tools or Terraform), this guide provides a recommended way to do that. +If you're looking to provide AWS CLI access to your users with audit capture by going through the Teleport Proxy Service, or if Roles Anywhere is not adopted at your organization, take a look at the guide for [agent-based AWS access](aws-console.mdx) instead. 
+ +This guide will explain how to: + +- Configure AWS Console and CLI access with Teleport using Roles Anywhere +- Access the AWS Management Console +- Access the AWS Command Line Interface (CLI) +- Access applications using AWS SDKs + +## How it works + +Teleport uses AWS IAM Roles Anywhere to issue temporary credentials for assuming target IAM roles. + +Access is managed through Teleport's RBAC policies, ensuring credentials are only generated for authorized users and roles. +No additional service is required: the integration runs in the control plane (the Proxy and Auth Services). + +For web console access, you can navigate to the resources page in the Teleport Web UI and click the AWS application, which is named after the Roles Anywhere profile. + +For CLI and SDK-based access, you must use `tsh` to obtain AWS credentials. + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx!) + +- (!docs/pages/includes/tctl.mdx!) +- Permissions in AWS to: + - create an IAM Roles Anywhere trust anchor + - create IAM Roles Anywhere profiles + - create IAM roles +- `aws` command line interface (CLI) tool in PATH. Read the AWS documentation to + [install or update the latest version of the AWS + CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html). + +## Step 1/4. Configure AWS IAM Roles Anywhere integration + +In this section, you will configure AWS IAM resources to allow Teleport to issue AWS credentials. + +Navigate to the Teleport Web UI, click **Enroll New Resource** on the Resources listing page, and follow the guide after clicking the **AWS CLI/Console Access** tile. + + 
+ Manually configure the integration + + You will create the following resources in AWS: + + | Name | Resource | Function | + |--------------------------------------|---------------------------------|-------------------------------------------------------------------------------------------| + | `TeleportRolesAnywhereIntegrationCA` | IAM Roles Anywhere trust anchor | Allow access from Teleport into AWS. | + | `TeleportRolesAnywhereProfileSync` | IAM role | Allows Teleport to iterate over AWS Roles Anywhere profiles and import them as resources. | + | `TeleportRolesAnywhereProfileSync` | IAM Roles Anywhere profile | Allows access to `TeleportRolesAnywhereProfileSync` IAM role. | + + ### Create an IAM Roles Anywhere Trust Anchor + + First, you will create an IAM Roles Anywhere Trust Anchor which trusts the Teleport's AWS Roles Anywhere CA. + + 1. Obtain the Teleport certificate: + ```code + $ tctl auth export --type awsra + -----BEGIN CERTIFICATE----- + MIIDqjCCApKgAwIBAgIQMIK8/WiQ/rUOrjlmB0IHVTANBgkqhkiG9w0BAQsFADBv + ... + -----END CERTIFICATE----- + ``` + 1. Navigate to [Create a trust anchor](https://console.aws.amazon.com/rolesanywhere/home/trust-anchors/create) page. + Name the trust anchor `TeleportRolesAnywhereIntegrationCA` and add the Teleport's `awsra` CA certificate you obtained in the previous step as an External certificate bundle. + 1. Copy the Trust Anchor ARN + + ### Set up Profile Sync + + Next, you will create the required AWS resources to enable the profile sync. + + 1. Navigate to [Create Role](https://console.aws.amazon.com/iam/home?#/roles/create) and create a new IAM role called `TeleportRolesAnywhereProfileSync`. + Trust policy must include the Roles Anywhere service principal: + ```json + { + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "rolesanywhere.amazonaws.com" + }, + "Action": [ + "sts:AssumeRole", + "sts:SetSourceIdentity", + "sts:TagSession" + ] + } + ] + } + ``` + 1. 
Copy the IAM role ARN + 1. Navigate to the [TeleportRolesAnywhereProfileSync Role](https://console.aws.amazon.com/iam/home?#/roles/details/TeleportRolesAnywhereProfileSync) and create a new inline policy: + ```json + { + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "RolesAnywhereProfileSync", + "Effect": "Allow", + "Action": [ + "rolesanywhere:ListProfiles", + "rolesanywhere:ListTagsForResource", + "rolesanywhere:ImportCrl", + "iam:GetRole" + ], + "Resource": [ + "*" + ] + } + ] + } + ``` + + 1. Now, navigate to the [Create a Profile](https://console.aws.amazon.com/rolesanywhere/home/profiles/create) page and name it `TeleportRolesAnywhereProfileSync`. + Add the role created in the last step (`TeleportRolesAnywhereProfileSync`) and create the Profile. + 1. Copy the Profile ARN + + ### Create a Teleport AWS IAM Roles Anywhere integration + Now that the required AWS resources are created, you can create a Teleport AWS IAM Roles Anywhere integration. + + 1. Write the following contents to a file called `roles-anywhere-integration.yaml`: + ```yaml + kind: integration + sub_kind: aws-ra + version: v1 + metadata: + name: + spec: + aws_ra: + trust_anchor_arn: + profile_sync_config: + enabled: true + profile_arn: + role_arn: + ``` + + 1. Create the integration with the following command: + ```code + $ tctl create -f roles-anywhere-integration.yaml + ``` + + Teleport will now start syncing AWS IAM Roles Anywhere profiles as AWS applications, every 5 minutes. +
+ +## Step 2/4. Create an AWS IAM Roles Anywhere profile and assign IAM roles + +Now, you will create an example profile and role so that you can test the integration. + +If you are already leveraging AWS IAM Roles Anywhere profiles, you can skip this step. + +1. Navigate to [Create role](https://console.aws.amazon.com/iam/home?#/roles/create) and create a new IAM role called `ExampleReadOnlyAccess`. + The trust policy must include the Roles Anywhere service principal: + ```json + { + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "rolesanywhere.amazonaws.com" + }, + "Action": [ + "sts:AssumeRole", + "sts:SetSourceIdentity", + "sts:TagSession" + ] + } + ] + } + ``` + + Add the AWS-managed `ReadOnlyAccess` policy to the role. + Copy the role ARN. + +1. Now, navigate to the [Create a Profile](https://console.aws.amazon.com/rolesanywhere/home/profiles/create) page and name it `ExampleReadOnlyAccess`. + Add the role created in the last step (`ExampleReadOnlyAccess`) and create the Profile. + + Copy the Profile ARN. + +## Step 3/4. Set up access +Now you need to grant your users access to the imported AWS profiles/roles. +You will create a Teleport role that grants access to the `ExampleReadOnlyAccess` Profile and its associated IAM role. + + + + Open your Teleport cluster Web UI, select "Zero Trust Access" and then "Roles". + + Click the "Create New Role" button and set the role name to `aws-ro-access`. + Next, add an Application Access entry in the Resources section with the following: + - Labels: `'teleport.dev/aws-roles-anywhere-profile-arn': ''` + - AWS Role ARNs: + + Proceed to the next steps and create the role. + + + + 1. Create a file named `example-read-only-access-role.yaml` with the following contents: + ```yaml + kind: role + version: v8 + metadata: + name: aws-ro-access + spec: + allow: + app_labels: + 'teleport.dev/aws-roles-anywhere-profile-arn': '' + aws_role_arns: + - + ``` + + 1. 
Create the role with the following command: + ```code + $ tctl create -f example-read-only-access-role.yaml + ``` + + + +Assign the role to the users you want to grant access to. You can also assign it to an [Access List](../../../identity-governance/access-lists/guide.mdx) or use it in conjunction with [Access Requests](../../../identity-governance/access-requests/access-requests.mdx). + +## Step 4/4. Access AWS resources + +Now that you have configured the integration and set up the role, users can access AWS resources through Teleport. + +### Access AWS Management Console + +1. Visit the home page of the Teleport Web UI and click **Resources**. + If the integration found profiles as expected, the Web UI will display their names. + If you don't see any profiles, see the [Troubleshooting](#troubleshooting) section. + +1. Click the **Launch** button for the AWS Console application, then click on the role you would like to assume when signing in to the AWS Console: + + ![IAM role selector](../../../../img/application-access/iam-role-selector.png) + +1. You will be redirected to the AWS Management Console, signed in with the selected role. + You should see your Teleport user name as a federated login assigned to `ExampleReadOnlyAccess` in the top-right corner of the AWS Console: + + ![Federated login](../../../../img/application-access/federated-login@2x.png) + +### Access AWS using AWS CLI or other AWS SDK-based tools + +#### Obtain the credentials +On your desktop, log into the AWS App Profile that was synced: + +```code +$ tsh apps login --aws-role arn:aws:iam::123456789012:role/ExampleReadOnlyAccess example-read-only-access +Logged into AWS app "example-read-only-access". 
+ +Your IAM role: + arn:aws:iam::123456789012:role/ExampleReadOnlyAccess + +Example AWS CLI commands: + aws --profile example-read-only-access s3 ls + AWS_PROFILE=example-read-only-access aws s3 ls +``` + +#### Access AWS using AWS CLI +You can now access your AWS resources from the command line using the `aws` CLI. + +You need to pass the `--profile` flag or export the `AWS_PROFILE` environment variable with the name of the profile you created earlier. + +```code +$ aws --profile example-read-only-access s3 ls +... +``` + +#### Access AWS from Terraform +Using Terraform requires you to set the profile in the provider. +```hcl +provider "aws" { + profile = "example-read-only-access" + // ... +} +``` + +Setting the `AWS_PROFILE` environment variable is also an option. + +## Troubleshooting + +Read this section if you run into issues while following this guide. + +### Missing profiles in the Resources page +If you don't see the expected AWS profiles in the Teleport Web UI, check the following: + +1. Ensure the profile sync configuration is enabled and the `profile_name_filters` are set accordingly (set the filter to `*` if you want to import all profiles): + + ```code + $ tctl get integration/ + ... + spec: + aws_ra: + profile_sync_config: + enabled: true + profile_arn: arn:aws:rolesanywhere:eu-west-2:123456789012:profile/5c659b8f-7ca3-48ef-a1aa-14c9c93506ee + profile_name_filters: + - MarcoRA-* + role_arn: arn:aws:iam::123456789012:role/TeleportRolesAnywhereProfileSync + trust_anchor_arn: arn:aws:rolesanywhere:eu-west-2:123456789012:trust-anchor/69a0c3f8-3157-49b2-85dd-75bdef828a68 + ``` + +1. You can get the integration status summary by running the following command: + + ```code + $ tctl get integration/ + ... + status: + aws_ra: + last_profile_sync: + end_time: "2025-07-23T15:46:27.475629Z" + error_message: "" + start_time: "2025-07-23T15:46:26.334536Z" + status: SUCCESS + synced_profiles: 4 + ``` + + Check the `error_message` field for details if the sync status is not `SUCCESS`. + +1. 
Ensure that your Teleport role allows you access to all the expected profiles. + AWS app resources have a specific label that you can use to control access: `teleport.dev/aws-roles-anywhere-profile-arn`. + Set it to `*` to allow access to all profiles, like so: + + ```yaml + kind: role + spec: + allow: + app_labels: + 'teleport.dev/aws-roles-anywhere-profile-arn': '*' + ``` + +## Next steps + +Now that you know how to set up Teleport to protect access to the AWS Management Console and APIs, you can tailor your setup to the needs of your organization. + +### Refine your AWS IAM role mapping + +The `aws_role_arns` field supports template variables so it can be populated dynamically when a user authenticates to Teleport. + +For example, you can configure your identity provider to define a SAML attribute or OIDC claim called `aws_role_arns`, then use this field to list each user's permitted AWS role ARNs on your IdP. +If you define a Teleport role that references the `{{external.aws_role_arns}}` variable, the Auth Service will fill in the user's permitted ARNs based on data from the IdP: + +```yaml + aws_role_arns: + - '{{external.aws_role_arns}}' +``` + +See the [Access Controls Reference](../../../reference/access-controls/roles.mdx) for all of the variables and functions you can use in the `aws_role_arns` field. + +### Create Roles to grant access to a specific profile and IAM roles + +You can create a separate Teleport role for each profile, granting access to that profile and its associated IAM roles. + +See [Role Access Requests](../../../identity-governance/access-requests/role-requests.mdx) to learn more about creating roles and how to request access to them using Access Requests. 
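The per-profile pattern described above can be sketched as follows. The role name, profile ARN, and IAM role ARN are illustrative examples only, not values from this guide:

```yaml
kind: role
version: v8
metadata:
  name: aws-billing-profile-access  # example role name
spec:
  allow:
    app_labels:
      # Match only the app imported from one specific Roles Anywhere profile.
      'teleport.dev/aws-roles-anywhere-profile-arn': 'arn:aws:rolesanywhere:us-east-1:123456789012:profile/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'
    aws_role_arns:
      # Only the IAM roles listed here can be assumed through this profile.
      - arn:aws:iam::123456789012:role/BillingReadOnly
```

Assigning one such role per profile lets you scope Access Requests to a single profile and its IAM roles.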
diff --git a/docs/pages/enroll-resources/application-access/cloud-apis/aws-console.mdx b/docs/pages/enroll-resources/application-access/cloud-apis/aws-console.mdx index 88eec03347ea5..9d8bdb92cf8ea 100644 --- a/docs/pages/enroll-resources/application-access/cloud-apis/aws-console.mdx +++ b/docs/pages/enroll-resources/application-access/cloud-apis/aws-console.mdx @@ -1,8 +1,19 @@ --- -title: Access AWS With Teleport Application Access +title: Access AWS with the Teleport Application Service +sidebar_label: AWS (via Teleport Agent) description: How to access AWS with Teleport application access. +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws --- + +If you're using AWS IAM Roles Anywhere, see the [corresponding guide](aws-console-roles-anywhere.mdx) that provides +the recommended way to manage access to AWS Console and CLI-based tooling. + + You can protect the AWS Management Console and AWS APIs with Teleport. This makes it possible to manage access to AWS infrastructure with Teleport features like Access Requests, Access Lists, and identity locking. You can also configure @@ -829,7 +840,7 @@ for all of the variables and functions you can use in the `aws_role_arns` field. You can deploy a pool of Teleport Agents to run the Teleport Application Service, then enroll an AWS application in your Teleport cluster as a dynamic resource. Read more about [dynamically registering -applications](../guides/dynamic-registration.mdx). +applications](../configuration/dynamic-registration.mdx). 
### Choose an alternative agent join method diff --git a/docs/pages/enroll-resources/application-access/cloud-apis/awsoidc-integration-console.mdx b/docs/pages/enroll-resources/application-access/cloud-apis/awsoidc-integration-console.mdx new file mode 100644 index 0000000000000..c1adcd6666f30 --- /dev/null +++ b/docs/pages/enroll-resources/application-access/cloud-apis/awsoidc-integration-console.mdx @@ -0,0 +1,204 @@ +--- +title: Protect AWS CLI and Console Access with Teleport +description: Access AWS CLI and Console using AWS OIDC integration. +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws +--- + +Teleport integrates with AWS IAM as an OpenID Connect identity provider to provide AWS CLI and Console access. +This allows you to take advantage of Teleport role-based access controls, just-in-time Access Requests, and other Teleport Zero Trust Access and Identity Governance capabilities to manage access to your AWS infrastructure. + + +If you're using AWS IAM Roles Anywhere, see the [corresponding guide](../../../enroll-resources/application-access/cloud-apis/aws-console-roles-anywhere.mdx) that provides +the recommended way to manage access to AWS Console and CLI-based tooling. + +If you want to provide your users with audited AWS CLI access through Teleport but don't have a publicly reachable cluster, or your organization has not adopted Roles Anywhere, see the guide for [agent-based AWS access](../../../enroll-resources/application-access/cloud-apis/aws-console.mdx) instead. + + +You can set up access using the "AWS CLI/Console Access via AWS OIDC IdP" enrollment wizard from the Teleport Web UI, or you can follow this guide to set it up manually. + +## How it works + +Setting up access to AWS CLI and Console requires an [AWS OIDC integration](./awsoidc-integration.mdx). +The AWS OIDC integration creates and configures an AWS IAM OpenID Connect Identity Provider (OIDC IdP) and an AWS IAM role that your Teleport cluster can assume.
+ +The enrollment wizard adds permissions to the AWS OIDC integration IAM role so that users can assume other existing IAM Roles. + +## Prerequisites + +- A running Teleport cluster. +- A Teleport user with the preset `editor` role. +- An AWS account and permissions to create IAM Identity Providers and roles. + +## Step 1/6. Create an AWS OIDC Integration + +The enrollment wizard is available in the "Add New Resource" panel of the Teleport Web UI: + +![Screenshot of AWS CLI/Console Access via AWS OIDC IdP enrollment wizard tile](../../../../img/aws/enroll-aws-access-awsoidc.png) + +The Teleport Web UI walks you through the steps to set up an AWS OIDC integration (if you don't have one already), configure IAM permissions, and set up access to AWS IAM Roles. + +Set the field to the IAM Role name created by the AWS OIDC integration. + +## Step 2/6. Allow the Integration to assume other IAM Roles + +The wizard instructs you to run a script, which will configure the necessary permissions for the AWS OIDC integration IAM role to assume other IAM Roles. + +The script adds the following Inline Policy to the integration's IAM Role: +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": "sts:AssumeRole", + "Resource": "*", + "Condition": { + "StringEquals": { + "iam:ResourceTag/teleport.dev/integration": "true" + } + } + } + ] +} +``` + +## Step 3/6. Add the Trust Relationship to all IAM Roles + +Only AWS IAM Roles with the tag `teleport.dev/integration=true` and whose Trust Relationship allows the integration role to assume them can be used by Teleport users to access the AWS Console and CLI. + +The Trust Relationship of all the target IAM Roles must include the following statement: + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "AWS": "arn:aws:iam:::role/" + }, + "Action": "sts:AssumeRole" + } + ] +} +``` + +## Step 4/6. 
Create AWS CLI/Console App in Teleport + +The enrollment wizard creates an AWS App resource in Teleport that allows users to access the AWS Console and CLI. +You can also create the resource manually using `tctl`: + + 1. Set the to the public address of your Teleport cluster (e.g., `teleport.example.com`). + 1. Set the with a randomly generated UUID. + 1. Write the following contents to a file called `_app_access.yaml`: + ```yaml + kind: app_server + metadata: + name: + spec: + app: + kind: app + metadata: + labels: + aws_account_id: "" + name: + spec: + cloud: AWS + uri: https://console.aws.amazon.com/ + integration: + public_addr: . + version: v3 + host_id: + version: v3 + ``` + + 1. Create the AWS App with the following command: + ```code + $ tctl create -f _app_access.yaml + ``` + +You can add other labels in `spec.app.metadata.labels` to further customize access controls. + +## Step 5/6. Set up access + +The next step will ask you to enter any AWS Role ARNs you would like to access using Teleport. + +This will add the AWS Role ARNs to your Teleport user profile, allowing you to assume them when accessing the AWS App. + + + This step is not applicable for SSO Users. + In this case, you should map the AWS Role ARNs through your Identity Provider, using [role templates](../../../zero-trust-access/rbac-get-started/role-templates.mdx#sso-users). + + +To provide access to other users, you must create a Teleport Role that: +- includes the list of AWS Role ARNs in the `aws_role_arns` field +- includes the appropriate `app_labels` matcher to allow access to the AWS App created in the previous step (i.e., `aws_account_id: `) + +As an example, if you want to provide access to the AWS Role ``: + + 1. Write the following contents to a file called `aws_access_.yaml`: + ```yaml + kind: role + metadata: + name: aws_access_ + spec: + allow: + app_labels: + aws_account_id: "" + aws_role_arns: + - arn:aws:iam:::role/ + version: v8 + ``` + + 1. 
Create the role with the following command: + ```code + $ tctl create -f aws_access_.yaml + ``` + +Assign this role to users and they will be able to assume the AWS IAM Role using the CLI or the Management Console. + +## Step 6/6. Access AWS Console and CLI + +You can now access the AWS Console using the Teleport Web UI by clicking on the AWS App created in the previous step. + +## Troubleshooting + +### AWS App is not listed or IAM Roles list is empty + +Your user must satisfy both RBAC requirements to access the AWS Console: +- App Labels (`app_labels`) must match the labels defined in the AWS App +- AWS Role ARNs (`aws_role_arns`) must include at least one of the allowed IAM Roles + +### Failure when trying to access a specific IAM Role + +This can happen when the target IAM Role (the role that you chose when accessing the AWS App) does not meet the requirements mentioned above: +- it must have the tag `teleport.dev/integration=true` +- its Trust Relationship must allow the AWS OIDC integration IAM Role to assume it + +See Step 3/6 for more details. + +## Next steps + +Now that you know how to set up Teleport to protect access to the AWS Management Console and APIs, you can tailor your setup to the needs of your organization. + +### Refine your AWS IAM role mapping + +The `aws_role_arns` field supports template variables so it can be populated dynamically when a user authenticates to Teleport. + +For example, you can configure your identity provider to define a SAML attribute or OIDC claim called `aws_role_arns`, then use this field to list each user's permitted AWS role ARNs on your IdP. 
+If you define a Teleport role to mention the `{{external.aws_role_arns}}` variable, the Auth Service will fill in the user's permitted ARNs based on data from the IdP: + +```yaml + aws_role_arns: + - {{external.aws_role_arns}} +``` + +### Create Teleport Roles to grant access to a specific profile and IAM roles + +You can create multiple Teleport roles to grant access to the profiles and IAM roles for each Profile. + +See [Role Access Requests](../../../identity-governance/access-requests/role-requests.mdx) to learn more about creating Roles and how to access them using Access Requests. diff --git a/docs/pages/admin-guides/management/guides/awsoidc-integration-rds.mdx b/docs/pages/enroll-resources/application-access/cloud-apis/awsoidc-integration-rds.mdx similarity index 99% rename from docs/pages/admin-guides/management/guides/awsoidc-integration-rds.mdx rename to docs/pages/enroll-resources/application-access/cloud-apis/awsoidc-integration-rds.mdx index 1cee52d9333f6..fe31bd43f87f7 100644 --- a/docs/pages/admin-guides/management/guides/awsoidc-integration-rds.mdx +++ b/docs/pages/enroll-resources/application-access/cloud-apis/awsoidc-integration-rds.mdx @@ -1,7 +1,11 @@ --- title: AWS RDS Enrollment Wizard description: Enroll AWS RDS databases with your Teleport cluster using an enrollment wizard. -tocDepth: 3 +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws --- The AWS RDS enrollment wizard is available in your Teleport cluster Web UI as @@ -82,7 +86,7 @@ Teleport users to connect to the database(s) it proxies. This is a serverless HA approach to deploying the Teleport Database Service. It does not require any allowed inbound traffic from the internet and supports -[automatic agent upgrades](../../../upgrading/agent-managed-updates.mdx). +[automatic agent upgrades](../../../upgrading/agent-managed-updates/agent-managed-updates.mdx). 
The Teleport Database Service instances are configured to use [dynamic database registration](../../../enroll-resources/database-access/guides/dynamic-registration.mdx) diff --git a/docs/pages/admin-guides/management/guides/awsoidc-integration.mdx b/docs/pages/enroll-resources/application-access/cloud-apis/awsoidc-integration.mdx similarity index 85% rename from docs/pages/admin-guides/management/guides/awsoidc-integration.mdx rename to docs/pages/enroll-resources/application-access/cloud-apis/awsoidc-integration.mdx index 37571f4dbd3f6..b07944614a327 100644 --- a/docs/pages/admin-guides/management/guides/awsoidc-integration.mdx +++ b/docs/pages/enroll-resources/application-access/cloud-apis/awsoidc-integration.mdx @@ -1,20 +1,25 @@ --- title: AWS OIDC Integration description: How to connect your AWS account with Teleport and provide access to AWS resources. +tags: + - how-to + - platform-wide + - aws --- This guide explains how to set up the Teleport AWS OIDC integration. With the AWS OIDC integration you will no longer need to deploy Teleport Agents in AWS manually for most use cases. The following features use an AWS OIDC integration to interact with AWS: -- [External Audit Storage](../external-audit-storage.mdx) +- [External Audit Storage](../../../zero-trust-access/management/external-audit-storage.mdx) - [RDS Enrollment](./awsoidc-integration-rds.mdx) - EC2 Enrollment -- [Access Graph AWS Sync](../../teleport-policy/integrations/aws-sync.mdx) +- [Access Graph AWS Sync](../../../identity-security/integrations/aws-sync.mdx) +- [Protect AWS CLI and Console access with Teleport](./awsoidc-integration-console.mdx) It targets users who would prefer a more manual approach or to manage the integration with Infrastructure as Code tools. -As an alternative to this guide, you can use the Teleport Web UI (Access Management / Enroll New Integration). +As an alternative to this guide, you can use the Teleport Web UI. 
In the left-hand pane, click **Add New** -> **Integration**. ## How it works Teleport is added as an [OpenID Connect identity provider](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html) to establish trust with your AWS account and assume a configured IAM role in order to access AWS resources. @@ -26,7 +31,7 @@ $ curl https:///.well-known/openid-configurat The integration requires no extra configuration or services to run. Initially, no policy is added to the IAM role, but users are asked to add them the first time they are trying to use a given feature. -For example, when setting up [External Audit Storage](../external-audit-storage.mdx), you will be asked to add the required policies to this IAM role. +For example, when setting up [External Audit Storage](../../../zero-trust-access/management/external-audit-storage.mdx), you will be asked to add the required policies to this IAM role. AWS resources created by the integration are tagged so that you can search and export them using the [AWS Resource Groups / Tag Editor](https://console.aws.amazon.com/resource-groups/tag-editor/find-resources). The following tags are applied: @@ -41,6 +46,7 @@ teleport.dev/integration - A running Teleport cluster. - AWS Account with permissions to create IAM Identity Providers and roles + ## Step 1/4. Configure RBAC To configure the integration you will need the following allow rules in one of your Teleport roles. @@ -85,7 +91,7 @@ teleport.dev/integration ## Step 3/4. Create IAM role An IAM role must be created to assign the required policies to the integration . -This IAM role is created without any policy, as those are added depending on the feature you would like to use, for example when setting up [Access Graph AWS Sync](../../teleport-policy/integrations/aws-sync.mdx). 
+This IAM role is created without any policy, as those are added depending on the feature you would like to use, for example when setting up [Access Graph AWS Sync](../../../identity-security/integrations/aws-sync.mdx). However, it must be configured to allow the Identity Provider to assume it. To achieve this, add the following Trust Relationship: ```json @@ -140,5 +146,5 @@ After the set up is complete, you can now use the "Enroll New Resource" flow in ## Next steps Now that you have an integration, you can use the following features: -- [Access Graph AWS Sync](../../teleport-policy/integrations/aws-sync.mdx) -- [External Audit Storage](../external-audit-storage.mdx) +- [Access Graph AWS Sync](../../../identity-security/integrations/aws-sync.mdx) +- [External Audit Storage](../../../zero-trust-access/management/external-audit-storage.mdx) diff --git a/docs/pages/enroll-resources/application-access/cloud-apis/azure-aks-workload-id.mdx b/docs/pages/enroll-resources/application-access/cloud-apis/azure-aks-workload-id.mdx index 46182e246f83c..880db7c9501fe 100644 --- a/docs/pages/enroll-resources/application-access/cloud-apis/azure-aks-workload-id.mdx +++ b/docs/pages/enroll-resources/application-access/cloud-apis/azure-aks-workload-id.mdx @@ -1,6 +1,12 @@ --- title: "Protect Azure CLIs with Teleport Application Access on AKS" +sidebar_label: Azure (using AKS) description: How to enable secure access to Azure CLIs on Azure Kubernetes Service with Workload Identity. +tags: + - how-to + - zero-trust + - infrastructure-identity + - azure --- (!docs/pages/includes/application-access/azure-intro.mdx!) @@ -18,7 +24,7 @@ In this guide, you will: ## Prerequisites -(!docs/pages/includes/edition-prereqs-tabs.mdx version="15.2.4"!) +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport (v15.2.4 or higher)"!) - An Azure Kubernetes Service (AKS) cluster and admin permissions to manage the cluster. 
- The ability to manage user-assigned Azure managed identities, role policies, @@ -226,8 +232,8 @@ teleport-azure-access-agent-0 1/1 Running 0 99s your Teleport users can only manage Azure resources temporarily, with no longstanding admin roles for attackers to hijack. View our documentation on [Role Access - Requests](../../../admin-guides/access-controls/access-requests/role-requests.mdx) and - [Access Request plugins](../../../admin-guides/access-controls/access-request-plugins/access-request-plugins.mdx). + Requests](../../../identity-governance/access-requests/role-requests.mdx) and + [Access Request plugins](../../../identity-governance/access-requests/plugins/plugins.mdx). - Consult the Azure documentation for information about [Azure managed identities](https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview) and how to [manage user-assigned managed diff --git a/docs/pages/enroll-resources/application-access/cloud-apis/azure.mdx b/docs/pages/enroll-resources/application-access/cloud-apis/azure.mdx index 3218ce6b2680b..356f410dce4b9 100644 --- a/docs/pages/enroll-resources/application-access/cloud-apis/azure.mdx +++ b/docs/pages/enroll-resources/application-access/cloud-apis/azure.mdx @@ -1,6 +1,13 @@ --- -title: "Protect Azure CLIs with Teleport Application Access" +title: Protect Azure CLIs and APIs with Teleport +sidebar_label: Azure description: How to enable secure access to Azure CLIs. +tags: + - teleport-as-idp + - how-to + - zero-trust + - infrastructure-identity + - azure --- (!docs/pages/includes/application-access/azure-intro.mdx!) @@ -8,13 +15,22 @@ description: How to enable secure access to Azure CLIs. In this guide, you will: 1. Create an Azure managed identity for user access and attach it your VM. -1. Deploy a Teleport Application Service with an Azure app in your Teleport cluster. +1. Deploy the Teleport Application Service with an Azure app in your Teleport cluster. 1. 
Assume the managed identity and run `az` commands via `tsh`. ## How it works (!docs/pages/includes/application-access/azure-how-it-works.mdx deployment="on an Azure VM" credential="managed identities"!) +Alternatively, you can use the [Teleport SAML IdP](../../../identity-governance/idps/usage/saml-microsoft-entra-external-id.mdx)-based +integration to manage access to Azure Portal and CLI applications. In this +approach, users [interactively sign in]( +https://learn.microsoft.com/en-us/cli/azure/authenticate-azure-cli-interactively) +to Azure portal and CLI tools, and Azure services tag audit logs on behalf of +Teleport user account, providing more visibility into how Teleport users are +interacting with Azure. It is also the only supported method to manage access to +the Azure portal. + ## Prerequisites (!docs/pages/includes/edition-prereqs-tabs.mdx!) @@ -223,8 +239,8 @@ Application Service host. your Teleport users can only manage Azure resources temporarily, with no longstanding admin roles for attackers to hijack. View our documentation on [Role Access - Requests](../../../admin-guides/access-controls/access-requests/role-requests.mdx) and - [Access Request plugins](../../../admin-guides/access-controls/access-request-plugins/access-request-plugins.mdx). + Requests](../../../identity-governance/access-requests/role-requests.mdx) and + [Access Request plugins](../../../identity-governance/access-requests/plugins/plugins.mdx). 
- Consult the Azure documentation for information about [Azure managed identities](https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview) and how to [manage user-assigned managed diff --git a/docs/pages/enroll-resources/application-access/cloud-apis/cloud-apis.mdx b/docs/pages/enroll-resources/application-access/cloud-apis/cloud-apis.mdx index fac714a736b3d..5caf17eb714fb 100644 --- a/docs/pages/enroll-resources/application-access/cloud-apis/cloud-apis.mdx +++ b/docs/pages/enroll-resources/application-access/cloud-apis/cloud-apis.mdx @@ -1,6 +1,11 @@ --- title: "Securing Access to Cloud APIs" +sidebar_position: 2 +sidebar_label: Cloud APIs description: "How to use Teleport to achieve secure access while managing your cloud-based infrastructure." +tags: + - zero-trust + - infrastructure-identity --- You can use Teleport to provide secure access to your cloud provider's APIs. @@ -15,8 +20,10 @@ longstanding admin accounts to target. Learn how to protect your cloud provider APIs with Teleport: -- [AWS (console and CLI applications)](aws-console.mdx) +- [AWS Console and CLI access](aws-console-roles-anywhere.mdx) +- [AWS Console and CLI access via Teleport Agent](aws-console.mdx) - [Azure CLI applications](azure.mdx) - [Azure CLI applications (AKS with Workload ID deployment)](azure-aks-workload-id.mdx) - [Google Cloud CLI applications](google-cloud.mdx) -- [GCP Web Console Access with Workforce Identity Federation and Teleport SAML IdP](../../../admin-guides/access-controls/idps/saml-gcp-workforce-identity-federation.mdx) +- [GCP Web Console Access with Workforce Identity Federation and Teleport SAML IdP](../../../identity-governance/idps/usage/saml-gcp-workforce-identity-federation.mdx) + diff --git a/docs/pages/admin-guides/management/guides/github-integration.mdx b/docs/pages/enroll-resources/application-access/cloud-apis/github-integration.mdx similarity index 89% rename from 
docs/pages/admin-guides/management/guides/github-integration.mdx rename to docs/pages/enroll-resources/application-access/cloud-apis/github-integration.mdx index 8bf3d60bb92d4..5892be5b1bd18 100644 --- a/docs/pages/admin-guides/management/guides/github-integration.mdx +++ b/docs/pages/enroll-resources/application-access/cloud-apis/github-integration.mdx @@ -1,6 +1,10 @@ --- title: Proxy Git Commands with Teleport for GitHub description: How to use Teleport's short-lived SSH certificates with the GitHub Certificate Authority. +tags: + - how-to + - zero-trust + - infrastructure-identity --- Teleport can proxy Git commands and use short-lived SSH certificates to @@ -37,9 +41,14 @@ continue to access GitHub through their browsers. ## Prerequisites -(!docs/pages/includes/commercial-prereqs-tabs.mdx version="17.2"!) -- Access to GitHub Enterprise and permissions to modify GitHub's SSH certificate - authorities and configure OAuth applications. +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise (v17.2 or higher)"!) +- Access to GitHub Enterprise Cloud and permissions to modify GitHub's SSH + certificate authorities and configure OAuth applications. + + GitHub integration requires a GitHub Enterprise plan and is currently + supported for **GitHub Enterprise Cloud**. Support for **GitHub Enterprise + Server (self-hosted)** is not available as of the current release. + - (!docs/pages/includes/tctl.mdx!) 
- Git locally installed diff --git a/docs/pages/enroll-resources/application-access/cloud-apis/google-cloud.mdx b/docs/pages/enroll-resources/application-access/cloud-apis/google-cloud.mdx index 32789636e0665..72323a6c1f098 100644 --- a/docs/pages/enroll-resources/application-access/cloud-apis/google-cloud.mdx +++ b/docs/pages/enroll-resources/application-access/cloud-apis/google-cloud.mdx @@ -1,6 +1,12 @@ --- title: "Protect Google Cloud API Access with Teleport" +sidebar_label: Google Cloud description: How to enable secure access to Google Cloud APIs. +tags: + - how-to + - zero-trust + - infrastructure-identity + - google-cloud --- You can use Teleport to manage access to CLI tools that interact with Google @@ -643,8 +649,8 @@ command. ensure that your Teleport users can only manage Google Cloud resources temporarily, with no longstanding admin roles for attackers to hijack. View our documentation on [Role Access - Requests](../../../admin-guides/access-controls/access-requests/role-requests.mdx) and [Access - Request plugins](../../../admin-guides/access-controls/access-request-plugins/access-request-plugins.mdx). + Requests](../../../identity-governance/access-requests/role-requests.mdx) and [Access + Request plugins](../../../identity-governance/access-requests/plugins/plugins.mdx). - You can proxy any `gcloud` or `gsutil` command via Teleport. 
For a full reference of commands, view the Google Cloud documentation for [`gcloud`](https://cloud.google.com/sdk/gcloud/reference) and diff --git a/docs/pages/enroll-resources/application-access/configuration/configuration.mdx b/docs/pages/enroll-resources/application-access/configuration/configuration.mdx new file mode 100644 index 0000000000000..903966f95d408 --- /dev/null +++ b/docs/pages/enroll-resources/application-access/configuration/configuration.mdx @@ -0,0 +1,11 @@ +--- +title: Teleport Application Service Configuration +sidebar_label: Configuration Guides +description: Provides instructions for configuring the Teleport Application Service +--- + +The guides in this section show you how to configure the Teleport Application +Service, which proxies traffic to and from Teleport-protected applications. + + + diff --git a/docs/pages/enroll-resources/application-access/controls.mdx b/docs/pages/enroll-resources/application-access/configuration/controls.mdx similarity index 80% rename from docs/pages/enroll-resources/application-access/controls.mdx rename to docs/pages/enroll-resources/application-access/configuration/controls.mdx index fcf9e0142762d..5f966621324da 100644 --- a/docs/pages/enroll-resources/application-access/controls.mdx +++ b/docs/pages/enroll-resources/application-access/configuration/controls.mdx @@ -1,6 +1,11 @@ --- title: Application Access Role-Based Access Control +sidebar_label: Access Controls description: Role-Based Access Control (RBAC) for Teleport application access. +tags: + - conceptual + - zero-trust + - infrastructure-identity --- This article describes access control concepts particularly relevant to the @@ -127,24 +132,23 @@ This command uses the `--set-azure-identities` flag to add Azure identities to a user. The value of this flag is a comma-separated list of Azure identity URIs. 
See our [Azure -CLI](./cloud-apis/azure.mdx#step-34-enable-your-user-to-access-azure-clis) guide +CLI](../cloud-apis/azure.mdx#step-34-enable-your-user-to-access-azure-clis) guide for more information on enabling access to Azure managed identities. ## Next steps -- View access controls [Getting Started](../../admin-guides/access-controls/getting-started.mdx) - and other available [guides](../../admin-guides/access-controls/guides/guides.mdx). +- View the access controls [Getting Started](../../../zero-trust-access/rbac-get-started/role-demo.mdx) guide + and other available + [guides](../../../zero-trust-access/authentication/authentication.mdx). - For full details on how Teleport populates the `internal` and `external` traits we illustrated in this guide, see the [Access - Controls Reference](../../reference/access-controls/roles.mdx). -- View access controls [Getting Started](../../admin-guides/access-controls/getting-started.mdx) - and other available [guides](../../admin-guides/access-controls/guides/guides.mdx). -- Learn about using [JWT tokens](./jwt/introduction.mdx) to implement access + Controls Reference](../../../reference/access-controls/roles.mdx). +- Learn about using [JWT tokens](../jwt/introduction.mdx) to implement access controls in your application. 
- Integrate with your identity provider: - - [OIDC](../../admin-guides/access-controls/sso/oidc.mdx) - - [ADFS](../../admin-guides/access-controls/sso/adfs.mdx) - - [Microsoft Entra ID](../../admin-guides/access-controls/sso/azuread.mdx) - - [Google Workspace](../../admin-guides/access-controls/sso/google-workspace.mdx) - - [Onelogin](../../admin-guides/access-controls/sso/one-login.mdx) - - [Okta](../../admin-guides/access-controls/sso/okta.mdx) + - [OIDC](../../../zero-trust-access/sso/integrate-idp/oidc.mdx) + - [ADFS](../../../zero-trust-access/sso/integrate-idp/adfs.mdx) + - [Microsoft Entra ID](../../../zero-trust-access/sso/integrate-idp/entra-id.mdx) + - [Google Workspace](../../../zero-trust-access/sso/integrate-idp/google-workspace.mdx) + - [OneLogin](../../../zero-trust-access/sso/integrate-idp/one-login.mdx) + - [Okta](../../../zero-trust-access/sso/integrate-idp/okta.mdx) diff --git a/docs/pages/enroll-resources/application-access/configuration/dynamic-registration.mdx b/docs/pages/enroll-resources/application-access/configuration/dynamic-registration.mdx new file mode 100644 index 0000000000000..4c224249f91da --- /dev/null +++ b/docs/pages/enroll-resources/application-access/configuration/dynamic-registration.mdx @@ -0,0 +1,121 @@ +--- +title: Dynamic App Registration +sidebar_label: Dynamic Registration +description: Register/unregister apps without restarting Teleport. +tags: + - conceptual + - zero-trust + - infrastructure-identity +--- + +Dynamic app registration allows Teleport administrators to register new apps (or +update/unregister existing ones) without having to update the static +configuration files read by Teleport Application Service instances. + +Application Service instances periodically query the Teleport Auth Service for +`app` resources, each of which includes the information that the Application +Service needs to proxy an application. + +Dynamic registration is useful for managing pools of Application Service +instances.
And behind the scenes, the Teleport Discovery Service uses dynamic +registration to [register Kubernetes +applications](../../auto-discovery/kubernetes-applications/kubernetes-applications.mdx). + +## Required permissions + +(!docs/pages/includes/application-access/dynamic-app-permissions.mdx!) + +## Enabling dynamic registration + +(!docs/pages/includes/application-access/dynamic-app-config.mdx!) + +## Creating an app resource + +Configure Teleport to proxy an application dynamically by creating an `app` +resource. The following example configures Teleport to proxy the application +called `example` at `localhost:4321`, making it available at the public address +`test.example.com`: + +```yaml +kind: app +version: v3 +metadata: + name: example + description: "Example app" + labels: + env: test +spec: + uri: http://localhost:4321 + public_addr: test.example.com +``` + +See the full app resource spec [reference](../reference.mdx). + +The user creating the dynamic registration needs to have a role with access to the +application labels and the `app` resource. In this example role the user can only +create and maintain application services labeled `env: test`. + +```yaml +kind: role +metadata: + name: dynamicappregexample +spec: + allow: + app_labels: + env: test + rules: + - resources: + - app + verbs: + - list + - create + - read + - update + - delete +version: v5 +``` + +To create an application resource, run: + + + + +```code +# Log in to your cluster with tsh so you can use tctl from your local machine. +# You can also run tctl on your Auth Service host without running "tsh login" +# first. +$ tsh login --proxy=teleport.example.com --user=myuser +$ tctl create app.yaml +``` + + + + +```code +# Log in to your Teleport cluster so you can use tctl remotely. 
+$ tsh login --proxy=mytenant.teleport.sh --user=myuser +$ tctl create app.yaml +``` + + + + + +After the resource has been created, it will appear among the list of available +apps (in `tsh apps ls` or UI) as long as at least one Application Service +instance picks it up according to its label selectors. + +To update an existing application resource, run: + +```code +$ tctl create -f app.yaml +``` + +If the updated resource's labels no longer match a particular app agent, it +will unregister and stop proxying it. + +To delete an application resource, run: + +```code +$ tctl rm app/example +``` diff --git a/docs/pages/enroll-resources/application-access/guides/ha.mdx b/docs/pages/enroll-resources/application-access/configuration/ha.mdx similarity index 96% rename from docs/pages/enroll-resources/application-access/guides/ha.mdx rename to docs/pages/enroll-resources/application-access/configuration/ha.mdx index 407ffa6e9c92b..b7b4f2065825f 100644 --- a/docs/pages/enroll-resources/application-access/guides/ha.mdx +++ b/docs/pages/enroll-resources/application-access/configuration/ha.mdx @@ -1,6 +1,11 @@ --- title: Application Access High Availability (HA) +sidebar_label: High Availability description: How to configure Teleport application access in a Highly Available (HA) configuration. 
+tags: + - conceptual + - zero-trust + - infrastructure-identity --- You can deploy the Application Service in a Highly Available (HA) configuration diff --git a/docs/pages/enroll-resources/application-access/getting-started.mdx b/docs/pages/enroll-resources/application-access/getting-started.mdx index 3fe1756f69c93..2c1152e3ad510 100644 --- a/docs/pages/enroll-resources/application-access/getting-started.mdx +++ b/docs/pages/enroll-resources/application-access/getting-started.mdx @@ -1,7 +1,12 @@ --- title: Protect a Web Application with Teleport +sidebar_label: Getting Started description: Provides instructions to set up the Teleport Application Service and enable secure access to a web application. videoBanner: cvW4b96aPL0 +tags: + - how-to + - zero-trust + - infrastructure-identity --- This tutorial demonstrates how to configure secure access to an application @@ -138,7 +143,7 @@ $ helm install example-grafana grafana/grafana \ -Select a Teleport edition, then follow the [Installation](../../installation.mdx) instructions +Select a Teleport edition, then follow the [Installation](../../installation/installation.mdx) instructions for your environment. To install on Linux: @@ -238,8 +243,8 @@ You are prompted to sign in if you haven't already been authenticated. Learn more about protecting applications with Teleport in the following topics: -- [Connecting applications](./guides/connecting-apps.mdx). +- [Connecting applications](protect-apps/connecting-apps.mdx). - Integrating with [JWT tokens](./jwt/introduction.mdx). -- Accessing applications with [RESTful APIs](./guides/api-access.mdx). +- Accessing applications with [RESTful APIs](protect-apps/api-access.mdx). -- Setting configuration options AND running CLI commands in the [Application Service reference](../../reference/agent-services/application-access.mdx). +- Setting configuration options and running CLI commands in the [Application Service reference](reference.mdx).
- Using the Let's Encrypt [ACME protocol](https://letsencrypt.org/how-it-works/). diff --git a/docs/pages/enroll-resources/application-access/guides/dynamic-registration.mdx b/docs/pages/enroll-resources/application-access/guides/dynamic-registration.mdx deleted file mode 100644 index 178621494f739..0000000000000 --- a/docs/pages/enroll-resources/application-access/guides/dynamic-registration.mdx +++ /dev/null @@ -1,149 +0,0 @@ ---- -title: Dynamic App Registration -description: Register/unregister apps without restarting Teleport. ---- - -Dynamic app registration allows Teleport administrators to register new apps (or -update/unregister existing ones) without having to update the static -configuration files read by Teleport Application Service instances. - -Application Service instances periodically query the Teleport Auth Service for -`app` resources, each of which includes the information that the Application -Service needs to proxy an application. - -Dynamic registration is useful for [managing pools of Application Service -instances](../../../admin-guides/infrastructure-as-code/terraform-starter/enroll-resources.mdx). And behind the scenes, the -Teleport Discovery Service uses dynamic registration to [register Kubernetes -applications](../../auto-discovery/kubernetes-applications/kubernetes-applications.mdx). - -## Required permissions - -In order to interact with dynamically registered applications, a user must have -a Teleport role with permissions to manage `app` resources. 
- -In the following example, a role allows a user to perform all possible -operations against `app` resources: - -```yaml -allow: - rules: - - resources: - - app - verbs: [list, create, read, update, delete] -``` - -## Enabling dynamic registration - -To enable dynamic registration, include a `resources` section in your Application -Service configuration with a list of resource label selectors you'd like this -service to monitor for registering: - -```yaml -app_service: - enabled: true - resources: - - labels: - "*": "*" -``` - -You can use a wildcard selector to register all dynamic app resources in the cluster -on the Application Service or provide a specific set of labels for a subset: - -```yaml -resources: -- labels: - "env": "prod" -- labels: - "env": "test" -``` - -## Creating an app resource - -Configure Teleport to proxy an application dynamically by creating an `app` -resource. The following example configures Teleport to proxy the application -called `example` at `localhost:4321`, making it available at the public address -`test.example.com`: - -```yaml -kind: app -version: v3 -metadata: - name: example - description: "Example app" - labels: - env: test -spec: - uri: http://localhost:4321 - public_addr: test.example.com -``` - -See the full app resource spec [reference](../../../reference/agent-services/application-access.mdx). - -The user creating the dynamic registration needs to have a role with access to the -application labels and the `app` resource. In this example role the user can only -create and maintain application services labeled `env: test`. - -```yaml -kind: role -metadata: - name: dynamicappregexample -spec: - allow: - app_labels: - env: test - rules: - - resources: - - app - verbs: - - list - - create - - read - - update - - delete -version: v5 -``` - -To create an application resource, run: - - - - -```code -# Log in to your cluster with tsh so you can use tctl from your local machine. 
-# You can also run tctl on your Auth Service host without running "tsh login" -# first. -$ tsh login --proxy=teleport.example.com --user=myuser -$ tctl create app.yaml -``` - - - - -```code -# Log in to your Teleport cluster so you can use tctl remotely. -$ tsh login --proxy=mytenant.teleport.sh --user=myuser -$ tctl create app.yaml -``` - - - - - -After the resource has been created, it will appear among the list of available -apps (in `tsh apps ls` or UI) as long as at least one Application Service -instance picks it up according to its label selectors. - -To update an existing application resource, run: - -```code -$ tctl create -f app.yaml -``` - -If the updated resource's labels no longer match a particular app agent, it -will unregister and stop proxying it. - -To delete an application resource, run: - -```code -$ tctl rm app/example -``` diff --git a/docs/pages/enroll-resources/application-access/guides/guides.mdx b/docs/pages/enroll-resources/application-access/guides/guides.mdx deleted file mode 100644 index ecade5b582e83..0000000000000 --- a/docs/pages/enroll-resources/application-access/guides/guides.mdx +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Application Access Guides -description: Guides for configuring Teleport application access. -layout: tocless-doc ---- - -These guides explain how to use the Teleport Application Service, which allows -your teams to connect to applications within private networks with fine-grained -RBAC and audit logging. - -Manage access to internal applications: - -- [Web App Access](connecting-apps.mdx): How to access web apps with Teleport. -- [TCP App Access](tcp.mdx): How to access plain TCP apps with Teleport. -- [VNet](vnet.mdx): How to configure VNet to support applications with custom public addresses. -- [API Access](api-access.mdx): How to access REST APIs with Teleport. -- [Dynamic Registration](dynamic-registration.mdx): Register/unregister apps without restarting Teleport. 
-- [Amazon Athena Access](amazon-athena.mdx): How to access Amazon Athena with Teleport. -- [Amazon DynamoDB Access](dynamodb.mdx): How to access Amazon DynamoDB as an application. -- [Application Service HA](ha.mdx): How to configure the Teleport Application Service for high availability. diff --git a/docs/pages/enroll-resources/application-access/guides/tcp.mdx b/docs/pages/enroll-resources/application-access/guides/tcp.mdx deleted file mode 100644 index 4bfc4171bc66f..0000000000000 --- a/docs/pages/enroll-resources/application-access/guides/tcp.mdx +++ /dev/null @@ -1,203 +0,0 @@ ---- -title: TCP Application Access -description: How to configure Teleport for accessing plain TCP apps ---- - -Teleport can provide access to any TCP-based application. This allows users to -connect to applications which Teleport doesn't natively support such as SMTP -servers or databases not yet natively supported by the Teleport Database -Service. - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) - -- (!docs/pages/includes/tctl.mdx!) -- TCP application to connect to. In this guide we'll use a PostgreSQL running - in Docker as an example. You can also use any TCP-based application you may - already have. -- Host where you will run the Teleport Application Service. - -We will assume your Teleport cluster is accessible at `teleport.example.com` -and `*.teleport.example.com`. You can substitute the address of your Teleport -Proxy Service. (For Teleport Cloud customers, this will be similar to -`mytenant.teleport.sh`.) - - -(!docs/pages/includes/dns-app-access.mdx!) - - -## Step 1/4. Start PostgreSQL container - -Skip this step if you already have an application you'd like to connect to. - -Start a PostgreSQL server in a Docker container: - -```code -$ docker run --name postgres -p 5432:5432 -e POSTGRES_PASSWORD= -d postgres -``` - -## Step 2/4. Start Teleport Application Service - -Teleport Application Service requires a valid auth token to join the cluster. 
- - - -To generate one, run the following command on your Auth Service node: - -```code -$ tctl tokens add --type=app -``` - -Next, create a Teleport user with the `access` role that will allow it to -connect to cluster applications: - -```code -$ tctl users add --roles=access alice -``` - - - -To generate one, log into your Cloud tenant and run the following command: - -```code -$ tsh login --proxy=mytenant.teleport.sh -$ tctl tokens add --type=app -``` - - - - -Save the generated token in `/tmp/token` on the node where Application Service -will run. - -Now, install Teleport on the Application Service node. It must be able to reach -both your Teleport Proxy and the TCP application it's going to proxy. - -(!docs/pages/includes/install-linux.mdx!) - -Create the Application Service configuration file `/etc/teleport.yaml` with -the following contents: - -```yaml -version: v3 -teleport: - auth_token: "/tmp/token" - proxy_server: teleport.example.com:3080 -auth_service: - enabled: false -ssh_service: - enabled: false -proxy_service: - enabled: false -app_service: - enabled: true - apps: - - name: "tcp-app" - uri: tcp://localhost:5432 -``` - -Note that the URI scheme must be `tcp://` in order for Teleport to recognize -this as a TCP application. - -(!docs/pages/includes/start-teleport.mdx!) - -## Step 3/4. Start app proxy - -Log into your Teleport cluster and view available applications: - -```code -$ tsh login --proxy=teleport.example.com -$ tsh apps ls -Application Description Type Public Address Labels ------------ ------------- ---- -------------------------------- ----------- -tcp-app TCP tcp-app.root.gravitational.io -``` - -Your TCP application should show up and be denoted with a `TCP` type. - -Now log into the application: - -```code -$ tsh apps login tcp-app -Logged into TCP app tcp-app. Start the local TCP proxy for it: - - tsh proxy app tcp-app - -Then connect to the application through this proxy. 
-``` - -Next, start a local proxy for it: - -```code -$ tsh proxy app tcp-app -Proxying connections to tcp-app on 127.0.0.1:55868 -``` - -The `tsh proxy app` command will set up a listener that will proxy all connections to -the target application. - -## Step 4/4. Connect - -Once the local proxy is running, you can connect to the application using the -application client you would normally use to connect to it: - -```code -$ psql postgres://postgres@localhost:55868/postgres -``` - -## Next steps - -### Configuring access to multiple ports - -By default, the Application Service proxies connections to the `uri` field from the application -specification. However, Teleport can enable access to multiple ports of a TCP application. An -application specification in this case needs to have no port number in the `uri` field and a new -field called `tcp_ports` with a list of ports. - -For example, let's take tcp-app from the steps above and add access to port 8080 and port range -31276-32300. The Application Service definition should look like this: - -```yaml -app_service: - enabled: true - apps: - - name: "tcp-app" - uri: tcp://localhost # No port in the URI - tcp_ports: - - port: 5432 # PostgreSQL - - port: 8080 # HTTP server - - port: 31276 - end_port: 32300 # Inclusive end of range -``` - -To access the app, [start VNet](../../../connect-your-client/vnet.mdx) and point an application -client towards the target port: - -```code -$ curl -I http://tcp-app.teleport.example.com:8080 -HTTP/1.1 200 OK - -$ psql postgres://postgres@tcp-app.teleport.example.com:5432/postgres -``` - - -There is no RBAC for TCP ports – a user that has access to an application can connect to any port in -the specification. We strongly recommend specifying only the necessary ports instead of defining a -wide port range that happens to include ports that are meant to be available. - - -{/* TODO: DELETE IN 19.0.0. At this point all compatible servers and clients are going -to support multiple ports. 
*/} - -Support for multiple ports is available in Teleport v17.1+. Connections from Teleport clients that -do not support multiple ports are routed to the first port from the application specification. An -Application Service that does not support multiple ports will not be able to handle traffic to a -multi-port application if it receives such application through [dynamic -registration](./dynamic-registration.mdx) from an Auth Service. - -### Further reading - -- Learn about [access controls](../controls.mdx) for applications. -- Learn how to [connect to TCP apps with VNet](../../../connect-your-client/vnet.mdx) and - [configure VNet for custom `public_addr`](vnet.mdx). diff --git a/docs/pages/enroll-resources/application-access/guides/vnet.mdx b/docs/pages/enroll-resources/application-access/guides/vnet.mdx deleted file mode 100644 index 6510f1bf8391e..0000000000000 --- a/docs/pages/enroll-resources/application-access/guides/vnet.mdx +++ /dev/null @@ -1,170 +0,0 @@ ---- -title: VNet -description: How to configure custom DNS zones for VNet ---- - -VNet automatically proxies connections made to TCP applications available under the public address -of a Proxy Service. This guide explains how to configure VNet to support apps with [custom public -addresses](connecting-apps.mdx#customize-public-address). - -## How it works - -Let's assume that a user has logged in to a cluster through a Proxy Service available at -`teleport.example.com`. There's a leaf cluster associated with that cluster. It has its own Proxy -Service available at `leaf.example.com`. Once started, VNet captures DNS queries for both of those -domains and their subdomains. - -Type A and AAAA queries are matched against `public_addr` of applications registered in both -clusters. If there's a match and the application is registered as a TCP application, VNet responds -with a virtual IP address over which the connection will be proxied to the app. 
In any other -case, the query is forwarded to the default DNS name server used by the OS. - -If you want VNet to forward connections to an app that has a custom `public_addr` set, you need -to first update the VNet config in the Auth Service to include a matching DNS zone. - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx version="16.0.0"!) - -- (!docs/pages/includes/tctl.mdx!) -- A TCP application connected to the cluster. -- A domain name under your control. - -{/* vale messaging.protocol-products = NO */} -In this guide, we'll use the example app from [TCP Application Access guide](tcp.mdx) and make it -available through VNet at with - as the custom DNS zone. -{/* vale messaging.protocol-products = YES */} - -## Step 1/3. Configure custom DNS zone - -Edit the VNet config: - -```code -$ tctl edit vnet_config -``` - -Add a custom DNS zone. In our case the `public_addr` of the app is going to be -`tcp-app.company.test`, so we're going to set `company.test` as `suffix`. The `spec` section of the -config should include the `custom_dns_zones` field: - -```diff -kind: vnet_config -metadata: - name: vnet-config - # … -spec: -+ custom_dns_zones: -+ - suffix: - # … -version: v1 -``` - - -`suffix` doesn't have to point to a domain that's exactly one level above the `public_addr` of an -app. Any level of nesting works. For example, you can have an app under `tcp-app.foo.bar.qux.test` -and the suffix can be set to `bar.qux.test`. - - -## Step 2/3. Set `public_addr` of the app - -Set `public_addr` field of the application in the Application Service configuration file -`/etc/teleport.yaml` and restart the `teleport` daemon. - -```diff -version: v3 -# … -app_service: - # … - apps: - - name: "tcp-app" - uri: tcp://localhost:5432 -+ public_addr: "" -``` - -## Step 3/3. 
Connect - -Once you [start VNet](../../../connect-your-client/vnet.mdx), you should be able to connect to the -application over the custom `public_addr` using the application client you would normally use to -connect to it. You might need to restart VNet if it was already running while you were making -changes to the cluster. - -```code -$ psql postgres://postgres@tcp-app.company.test/postgres -``` - -## Next steps - -### Configuring IPv4 CIDR range - -Each cluster has a configurable IPv4 CIDR range which VNet uses when assigning IP addresses to -applications from that cluster. Root and leaf clusters can use different ranges. The default is -`100.64.0.0/10` and it can be changed by setting the `ipv4_cidr_range` field of the VNet config. - -Edit the VNet config: - -```code -$ tctl edit vnet_config -``` - -Edit the `ipv4_cidr_range` field in the `spec` section of the config: - -```diff -kind: vnet_config -metadata: - name: vnet-config - # … -spec: -+ ipv4_cidr_range: "100.64.0.0/10" - # … -version: v1 -``` - - -When starting, VNet needs to assign an IPv4 address for its virtual network device. To pick an -address, [VNet arbitrarily chooses a root -cluster](https://github.com/gravitational/teleport/issues/43343) that the user is logged in to and -picks an address from the range used by that cluster. If your cluster uses a custom range, but your -users are logged in to other clusters that are not under your control, this might cause VNet to pick -an address for the TUN device from a range offered by one of those clusters. - - -### Configuring leaf cluster apps - -To make a [leaf cluster](../../../admin-guides/management/admin/trustedclusters.mdx) app accessible over a custom -`public_addr`, you need to follow the same steps while being logged in directly to the leaf cluster. - -```code -$ tsh login --proxy=leaf.example.com --user=email@example.com -``` - -### Accessing web apps through VNet - -VNet does not officially support [web apps](connecting-apps.mdx) yet. 
However, technically any web -app is [a TCP app](tcp.mdx), so they can be made available over VNet as well. You'll need to change -`uri` of your application in the Application Service configuration file to use `tcp://` instead of -`http://`. There's also a couple of caveats: - -- The Teleport Web UI uses [HSTS](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security). - If the application is going to be served from a subdomain of a Proxy Service, it means that the - application will not be accessible in browsers over plain HTTP, even with VNet running. It's - possible to work around this by setting a custom `public_addr` as explained above in this guide. -- If the application needs to be accessible over HTTPS, it must handle TLS connections and return [a valid - cert](connecting-apps.mdx#step-23-optional-configure-tls-and-dns-for-your-web-applications) for the domain it is served on. -- [JWT Token](../jwt/introduction.mdx), [redirect](connecting-apps.mdx#rewrite-redirect) and - [header rewrites](connecting-apps.mdx#headers-passthrough) are not available for TCP apps. -- Teleport records the start and the end of a session for TCP apps in the audit log, but [session - chunks](../../../reference/architecture/session-recording.mdx) are not captured. - -When accessing an HTTP API through VNet, the same caveats apply as above, with one main exception. -Since API clients don't need to respect HSTS, the API itself does not need to be served over HTTPS. - -The important thing to understand is that VNet doesn't do anything extra with a connection, other -than passing it through a Teleport Proxy Service. Which application layer protocol is going to be -used depends solely on the app itself and its clients. - -### Further reading - -- Read our VNet usage [guide](../../../connect-your-client/vnet.mdx) for end-users - accessing your applications with VNet. 
-- Read [RFD 163](https://github.com/gravitational/teleport/blob/master/rfd/0163-vnet.md) to learn how VNet works on a technical level. diff --git a/docs/pages/enroll-resources/application-access/introduction.mdx b/docs/pages/enroll-resources/application-access/introduction.mdx deleted file mode 100644 index bdfabb5fd0963..0000000000000 --- a/docs/pages/enroll-resources/application-access/introduction.mdx +++ /dev/null @@ -1,72 +0,0 @@ ---- -title: Introduction to Enrolling Applications -description: How to set up Teleport to protect applications and cloud provider APIs ---- - -Teleport can provide secure access to applications and cloud provider APIs. - -Examples include: - -- The AWS and GCP management consoles. -- The `aws`, `gcloud`, `gsutil`, and `az` CLIs. -- Applications configured for single sign-on through Okta. -- Internal control panels. -- Tools, such as wikis, that are available only when connected to a VPN. -- Infrastructure dashboards, such as Kubernetes or Grafana. -- Developer tools, such as Jenkins, GitLab, or Opsgenie. - -![Application access architecture](../../../img/application-access/architecture.svg) - -If you are running applications on Kubernetes, you can [enroll them in your -Teleport cluster automatically](../auto-discovery/kubernetes-applications/kubernetes-applications.mdx). - -Teleport protects applications through the Teleport Application Service, which -is a Teleport Agent service. For more information on agent services, read -[Teleport Agent Architecture](../../reference/architecture/agents.mdx). You can also learn -how to deploy a [pool of Teleport Agents](../agents/agents.mdx) to run -multiple agent services. - -## Getting started - -Learn how to register an application with Teleport in our [getting started -guide](./getting-started.mdx). - -## Cloud provider APIs - -You can use Teleport to provide secure access to your cloud provider's APIs. 
-This means that you can prevent unauthorized usage of management consoles and -CLI tools with the same RBAC system you use to protect your infrastructure. - -- [AWS Console and CLI Applications](./cloud-apis/aws-console.mdx): How to access - AWS Management Console, AWS CLI, and AWS SDKs with Teleport. -- [Google Cloud Web Console](../../admin-guides/access-controls/idps/saml-gcp-workforce-identity-federation.mdx): How to access the Google Cloud web console with Teleport. -- [Google Cloud CLI Applications](./cloud-apis/google-cloud.mdx): How to access - Google Cloud CLI applications and SDKs with Teleport. -- [Azure CLI Applications](./cloud-apis/azure.mdx): How to access Azure CLI - applications and SDKs with Teleport. - -## Internal applications - -You can use Teleport to enable secure access to internal applications. For -example, a load balancer might display network telemetry through a control panel -but might lack the ability to authenticate with and be accessed by resources -outside your private network. - -Teleport lets team members access these resources securely, even outside a -private network, with no shared secrets. - -These guides explain how to protect internal applications with Teleport: - -- [Web App Access](./guides/connecting-apps.mdx): How to access web apps with Teleport. -- [TCP App Access](./guides/tcp.mdx): How to access plain TCP apps with Teleport. -- [API Access](./guides/api-access.mdx): How to access REST APIs with Teleport. -- [Dynamic Registration](./guides/dynamic-registration.mdx): Register/unregister apps without restarting Teleport. -- [Interactive Lab](https://goteleport.com/labs/): Try Teleport using our guided Teleport application access lab. - -## Teleport-signed JSON Web Tokens - -These guides explain how web apps registered with Teleport can use -Teleport-signed JSON web tokens to implement authentication and authorization. - -- [Introduction](./jwt/introduction.mdx): Introduction to JWT tokens with application access. 
-- [Elasticsearch](./jwt/elasticsearch.mdx): How to use JWT authentication with Elasticsearch. diff --git a/docs/pages/enroll-resources/application-access/jwt/elasticsearch.mdx b/docs/pages/enroll-resources/application-access/jwt/elasticsearch.mdx index bbc437b919610..1cedfa26a5fa7 100644 --- a/docs/pages/enroll-resources/application-access/jwt/elasticsearch.mdx +++ b/docs/pages/enroll-resources/application-access/jwt/elasticsearch.mdx @@ -1,8 +1,15 @@ --- -title: Using JWT authentication with Elasticsearch -description: How to use JWT authentication with Elasticsearch +title: Using JWT Authentication with Elasticsearch +sidebar_label: Elasticsearch +description: How to use JWT Authentication with Elasticsearch +tags: + - how-to + - zero-trust + - infrastructure-identity --- +{/* lint disable page-structure remark-lint */} + This guide will help you configure Elasticsearch [JWT authentication](https://www.elastic.co/guide/en/elasticsearch/reference/current/jwt-realm.html) with Teleport. @@ -11,7 +18,7 @@ with Teleport. (!docs/pages/includes/edition-prereqs-tabs.mdx!) - (!docs/pages/includes/tctl.mdx!) -- Running [Application Service](../guides/connecting-apps.mdx). +- Running [Application Service](../protect-apps/connecting-apps.mdx). - Elasticsearch cluster version >= `8.2.0`. ## Step 1/3. Enable a JWT realm in Elasticsearch @@ -20,7 +27,7 @@ Update your Elasticsearch configuration file, `elasticsearch.yaml`, to enable a JWT realm: ```yaml -xpack.security.authc.realms.jwt.jwt1: +xpack.security.authc.realms.jwt.teleport: order: 1 client_authentication.type: none pkc_jwkset_path: https://proxy.example.com/.well-known/jwks.json @@ -30,6 +37,18 @@ xpack.security.authc.realms.jwt.jwt1: allowed_audiences: ["https://elasticsearch.example.com:9200"] ``` + +For **hosted deployments** in Elastic Cloud, you can configure the same settings +by editing your deployment. 
+ +Go to "Manage user settings and extensions" for "Elasticsearch", then add the +configuration snippet above under "User Settings". You may have to use an order +number greater than 1. + +Please note that Elastic Cloud **Serverless** projects do not support connecting +external realms, and therefore cannot be configured with JWT authentication. + + Let's take a closer look at the parameters and their values: - Set `client_authentication.type` to `none`, otherwise Elasticsearch requires @@ -40,7 +59,7 @@ Let's take a closer look at the parameters and their values: it instead of using a URL. - Set `claims.principal` and `claims.groups` to `sub` and `roles` respectively. These are the claims Teleport uses to pass user and role information in JWT - tokens. Keep in mind that **users and roles must exist** in Elasticsearch. + tokens. - Set `allowed_issuer` to the name of your Teleport cluster. - Set `allowed_audiences` to the URL which Teleport Application Service will use to connect to Elasticsearch. @@ -48,14 +67,33 @@ Note that when using JWT authentication, you cannot map user roles using the standard Elasticsearch `role_mapping.yml` file. Instead, you need to set the - role mapping using the API. See [JWT realm authorization](https://www.elastic.co/guide/en/elasticsearch/reference/current/jwt-realm.html#jwt-authorization) + role mapping using the API. + + For example, you can create the following mapping to map Teleport's `access` role + to the `superuser` role in Elasticsearch: + ``` + PUT /_security/role_mapping/teleport_access + { + "roles": [ "superuser" ], + "rules": { + "all": [ + { "field": { "realm.name": "teleport" } }, + { "field": { "groups": "access" } } + ] + }, + "enabled": true + } + ``` + + See [JWT realm + authorization](https://www.elastic.co/guide/en/elasticsearch/reference/current/jwt-realm.html#jwt-authorization) + for details. ## Step 2/3.
Register an Elasticsearch application in Teleport -In your Teleport App Service configuration file, `teleport.yaml`, register an -entry for Elasticsearch: +In your Teleport Application Service configuration file, `teleport.yaml`, +register an entry for Elasticsearch: ```yaml app_service: @@ -68,10 +106,6 @@ app_service: - "Authorization: Bearer {{internal.jwt}}" ``` - - You can also use [dynamic registration](../guides/dynamic-registration.mdx). - - Elasticsearch requires a JWT token to be passed inside the `Authorization` header. The header rewrite configuration above will replace the `{{internal.jwt}}` template variable with a Teleport-signed JWT token in each request. @@ -102,12 +136,13 @@ $ curl \ --cacert ~/.tsh/keys/teleport.example.com/cas/root.pem \ --cert ~/.tsh/keys/teleport.example.com/alice-app/example-cluster/elastic-x509.pem \ --key ~/.tsh/keys/teleport.example.com/alice \ - https://elastic.teleport.example.com/_security/user | jq + https://elastic.teleport.example.com/_security/_authenticate | jq ``` ## Next steps - Get more information about integrating with [Teleport JWT tokens](./introduction.mdx). -- Learn more about [accessing APIs](../guides/api-access.mdx) with the Teleport +- See the [dynamic registration](../configuration/dynamic-registration.mdx) guide. +- Learn more about [accessing APIs](../protect-apps/api-access.mdx) with the Teleport Application Service. -- Take a look at application-related [Access Controls](../controls.mdx). +- Take a look at application-related [Access Controls](../configuration/controls.mdx). 
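The `{{internal.jwt}}` header rewrite described above is plain template substitution: on each proxied request, the Application Service replaces the variable with a freshly signed JWT before forwarding the request to Elasticsearch. As an illustrative sketch only (not Teleport's actual implementation), the expansion behaves like this:

```python
def rewrite_headers(header_templates, jwt):
    """Expand the {{internal.jwt}} template variable in configured headers.

    Illustrative only: models the per-request substitution the Teleport
    Application Service performs for rewrite.headers entries.
    """
    return [h.replace("{{internal.jwt}}", jwt) for h in header_templates]


templates = ["Authorization: Bearer {{internal.jwt}}"]
print(rewrite_headers(templates, "<teleport-signed-jwt>"))
# → ['Authorization: Bearer <teleport-signed-jwt>']
```

Elasticsearch then authenticates the request against the JWT realm configured in Step 1, using the claims carried in that token.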
diff --git a/docs/pages/enroll-resources/application-access/jwt/grafana.mdx b/docs/pages/enroll-resources/application-access/jwt/grafana.mdx new file mode 100644 index 0000000000000..e1dbaaca4c82d --- /dev/null +++ b/docs/pages/enroll-resources/application-access/jwt/grafana.mdx @@ -0,0 +1,133 @@ +--- +title: Using JWT Authentication with Grafana +sidebar_label: Grafana +description: How to use JWT Authentication with Grafana +tags: + - how-to + - zero-trust + - infrastructure-identity +--- + +This guide will help you configure Grafana [JWT +authentication](https://grafana.com/docs/grafana/latest/setup-grafana/configure-access/configure-authentication/jwt/) +with Teleport. + +## How it works + +Teleport issues short-lived JWTs and injects them into each proxied request to +Grafana. Grafana is configured to trust Teleport’s JWT signer, allowing it to +verify the user’s identity and retrieve role information from the +Teleport-signed token. + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx!) + +- (!docs/pages/includes/tctl.mdx!) +- Running [Application Service](../protect-apps/connecting-apps.mdx). +- Access to the main configuration file of your Grafana instance + +## Step 1/3. Configure JWT authentication in Grafana + +(!docs/pages/includes/application-access/jwt-grafana-config.mdx!) + +## Step 2/3. Register a Grafana application in Teleport + +You can register the Grafana application in Teleport by defining it in your +Teleport Application Service configuration, or by using dynamic registration +with `tctl` or Terraform. Assign to the domain of your Grafana +instance: + + + +Add an application entry in your Teleport Application Service configuration +file, `teleport.yaml`: +```yaml +app_service: + enabled: true + apps: + - name: "grafana" + uri: + rewrite: + headers: + - "Authorization: Bearer {{internal.jwt}}" +``` +Restart the Teleport Application Service.
+ + + +Create an `app` resource definition file named `app-grafana.yaml`: +```yaml +# app-grafana.yaml +kind: app +version: v3 +metadata: + name: grafana +spec: + uri: + rewrite: + headers: + - name: "Authorization" + value: "Bearer {{internal.jwt}}" +``` + +Create the `app` resource with: +```code +$ tctl create -f app-grafana.yaml +``` + + + +Create a `teleport_app` resource in terraform: +```hcl +resource "teleport_app" "grafana" { + version = "v3" + metadata = { + name = "grafana" + labels = { + "teleport.dev/origin" = "dynamic" + } + } + + spec = { + uri = "" + rewrite = { + headers = [{ + name = "Authorization" + value = "Bearer {{internal.jwt}}" + }] + } + } +} +``` +Apply the configuration: +```code +$ terraform apply +``` + + + + + +The header rewrite configuration above will replace the `{{internal.jwt}}` +template variable with a Teleport-signed JWT token in each request. + +## Step 3/3. Connect to Grafana + +Log in to your Teleport cluster in your browser at https://. + +In the **Resources** tab, locate the `grafana` application and click **Launch**. + +Grafana will open and you should be automatically logged in as your Teleport +user. You can verify this by clicking your profile icon in the upper-right +corner. + +## Next steps + +- Get more information about integrating with [Teleport JWT tokens](./introduction.mdx). +- See the [dynamic registration](../configuration/dynamic-registration.mdx) guide. +- Learn more about [accessing APIs](../protect-apps/api-access.mdx) with the Teleport + Application Service. +- Take a look at application-related [Access Controls](../configuration/controls.mdx). 
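All three registration methods above express the same `app` resource. Because JSON is a subset of YAML, the spec can also be generated programmatically; the sketch below (a hypothetical helper, not part of Teleport) builds a structure equivalent to `app-grafana.yaml`:

```python
import json


def grafana_app_resource(grafana_uri):
    """Build an "app" resource equivalent to app-grafana.yaml (illustrative)."""
    return {
        "kind": "app",
        "version": "v3",
        "metadata": {"name": "grafana"},
        "spec": {
            "uri": grafana_uri,
            "rewrite": {
                "headers": [
                    # Inject the Teleport-signed JWT on every request.
                    {"name": "Authorization", "value": "Bearer {{internal.jwt}}"},
                ]
            },
        },
    }


print(json.dumps(grafana_app_resource("http://localhost:3000"), indent=2))
```

The resulting document can be saved to a file and passed to `tctl create -f` in the same way as the YAML version.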
diff --git a/docs/pages/enroll-resources/application-access/jwt/introduction.mdx b/docs/pages/enroll-resources/application-access/jwt/introduction.mdx index f6485908584ad..e808247550b13 100644 --- a/docs/pages/enroll-resources/application-access/jwt/introduction.mdx +++ b/docs/pages/enroll-resources/application-access/jwt/introduction.mdx @@ -1,86 +1,18 @@ --- title: Use JWT Tokens With Application Access +sidebar_label: JWT Tokens description: How to use JWT tokens for authentication with Teleport application access. +tags: + - conceptual + - zero-trust + - infrastructure-identity --- -Teleport sends a JWT token signed with Teleport's authority with each request -to a target application in a `Teleport-Jwt-Assertion` header. - -You can use the JWT token to get information about the authenticated Teleport -user, its roles, and its traits. This allows you to: - -- Map Teleport identity/roles/traits onto the identity/roles/traits of your web application. -- Trust Teleport identity to automatically sign in users into your application. - -## Introduction to JWTs - -JSON Web Token (JWT) is an open standard that defines a secure way to transfer -information between parties as a JSON Object. - -For an in-depth explanation please visit [https://jwt.io/introduction/](https://jwt.io/introduction/). - -Teleport JWTs include three sections: - -- Header -- Payload -- Signature - -### Header - -*Example Header* - -```json -{ - "alg": "RS256", - "typ": "JWT" -} -``` - -### Payload - -*Example Payload* - -```json -{ - "aud": [ - "http://127.0.0.1:34679" - ], - "iss": "aws", - "nbf": 1603835795, - "sub": "alice", - // Teleport user name. - "username": "alice" - // Teleport user roles. - "roles": [ - "admin" - ], - // Teleport user traits. - "traits": { - "logins": [ - "root", - "ubuntu", - "ec2-user" - ] - }, - // Teleport identity expiration. - "exp": 1603943800, -} -``` - -The JWT will be sent with the header: `Teleport-Jwt-Assertion`. 
- -*Example Teleport JWT Assertion* - -```json -eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOlsiaHR0cDovLzEyNy4wLjAuMTozNDY3OSJdLCJleHAiOjE2MDM5NDM4MDAsImlzcyI6ImF3cyIsIm5iZiI6MTYwMzgzNTc5NSwicm9sZXMiOlsiYWRtaW4iXSwic3ViIjoiYmVuYXJlbnQiLCJ1c2VybmFtZSI6ImJlbmFyZW50In0.PZGUyFfhEWl22EDniWRLmKAjb3fL0D4cTmkxEfb-Q30hVMzVhka5WB8AUsPsLPVhTzsQ6Nkk1DnXHdz6oxrqDDfumuRrDnpJpjiXj_l0D3bExrchN61enzBHxSD13VkRIqP1V6l4i8yt8kXDIBWc-QejLTodA_GtczkDfnnpuAfaxIbD7jEwF27KI4kZu7uES9LMu2iCLdV9ZqarA-6HeDhXPA37OJ3P6eVQzYpgaOBYro5brEiVpuJLr1yA0gncmR4FqmhCpCj-KmHi2vmjmJAuuHId6HZoEZJjC9IAsNlrSA4GHH9j82o7FF1F4J2s38bRy3wZv46MT8X8-QBSpg -``` +(!docs/pages/includes/application-access/jwt-intro.mdx!) ## Inject JWT -You can inject a JWT token into any header using [headers passthrough](../guides/connecting-apps.mdx#headers-passthrough) -configuration and the `{{internal.jwt}}` template variable. This variable will -be replaced with JWT token signed by Teleport JWT CA containing user identity -information like described above. +(!docs/pages/includes/application-access/jwt-inject.mdx!) For example: @@ -95,23 +27,7 @@ For example: ## Validate JWT -Teleport provides a JSON Web Key Set (`jwks`) endpoint to verify that the JWT -can be trusted. This endpoint is `https://[cluster-name]:3080/.well-known/jwks.json`: - -*Example jwks.json* - -```json -{ - "keys": [ - { - "kty": "RSA", - "n": "xk-0VSVZY76QGqeN9TD-FJp32s8jZrpsalnRoFwlZ_JwPbbd5-_bPKcz8o2tv1eJS0Ll6ePxRCyK68Jz2UC4V4RiYaqJCRq_qVpDQMB1sQ7p9M-8qvT82FJ-Rv-W4RNe3xRmBSFDYdXaFm51Uk8OIYfv-oZ0kGptKpkNY390aJOzjHPH2MqSvhk9Xn8GwM8kEbpSllavdJCRPCeNVGJXiSCsWrOA_wsv_jqBP6g3UOA9GnI8R6HR14OxV3C184vb3NxIqxtrW0C4W6UtSbMDcKcNCgajq2l56pHO8In5GoPCrHqlo379LE5QqpXeeHj8uqcjeGdxXTuPrRq1AuBpvQ", - "e": "AQAB", - "alg": "RS256" - } - ] -} -``` +(!docs/pages/includes/application-access/jwt-validate.mdx!) 
See the example Go program used to validate Teleport's JWT tokens on our [GitHub](https://github.com/gravitational/teleport/blob/v(=teleport.version=)/examples/jwt/verify-jwt.go). @@ -123,3 +39,8 @@ Many existing web applications and APIs support JWT authentication. The following guides are currently available showing how to configure it: - [ElasticSearch](./elasticsearch.mdx) +- [Grafana](./grafana.mdx) + +## Troubleshooting + +(!docs/pages/includes/application-access/jwt-configure-claims.mdx!) diff --git a/docs/pages/enroll-resources/application-access/jwt/jwt.mdx b/docs/pages/enroll-resources/application-access/jwt/jwt.mdx index aab990bf03f89..3b8fae471dd47 100644 --- a/docs/pages/enroll-resources/application-access/jwt/jwt.mdx +++ b/docs/pages/enroll-resources/application-access/jwt/jwt.mdx @@ -1,7 +1,12 @@ --- title: Application Access JWT Authentication +sidebar_label: JWT Applications +sidebar_position: 4 description: Guides for using Teleport application access JWT authentication. -layout: tocless-doc +template: "no-toc" +tags: + - zero-trust + - infrastructure-identity --- These guides explain how web apps behind the Teleport Application Service can @@ -10,3 +15,4 @@ authorization. - [Introduction](introduction.mdx): Introduction to JWT tokens with application access. - [Elasticsearch](elasticsearch.mdx): How to use JWT authentication with Elasticsearch. +- [Grafana](grafana.mdx): How to use JWT authentication with Grafana. 
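Production code must verify the token's signature against the cluster's JWKS endpoint, as described in the Validate JWT section above. For quick inspection during development, though, the payload segment of a Teleport JWT is just unpadded base64url-encoded JSON. A minimal, hypothetical sketch (no signature verification):

```python
import base64
import json


def decode_jwt_payload(token):
    """Decode (but do NOT verify) the payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    # JWT segments use unpadded base64url; restore padding before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))


def b64url(obj):
    """Encode a dict as an unpadded base64url JWT segment (for the demo token)."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()


# Build a toy token with Teleport-style claims for demonstration.
token = ".".join([
    b64url({"alg": "RS256", "typ": "JWT"}),
    b64url({"username": "alice", "roles": ["access"]}),
    "signature",
])
claims = decode_jwt_payload(token)
print(claims["username"], claims["roles"])  # alice ['access']
```

This is only for looking at claims such as `username`, `roles`, and `traits`; trusting a token requires checking its signature, issuer, audience, and expiration.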
diff --git a/docs/pages/enroll-resources/application-access/guides/amazon-athena.mdx b/docs/pages/enroll-resources/application-access/protect-apps/amazon-athena.mdx similarity index 98% rename from docs/pages/enroll-resources/application-access/guides/amazon-athena.mdx rename to docs/pages/enroll-resources/application-access/protect-apps/amazon-athena.mdx index 58a7245ebb7c0..277f834445803 100644 --- a/docs/pages/enroll-resources/application-access/guides/amazon-athena.mdx +++ b/docs/pages/enroll-resources/application-access/protect-apps/amazon-athena.mdx @@ -1,8 +1,16 @@ --- title: Amazon Athena Access +sidebar_label: Amazon Athena description: How to access Amazon Athena with Teleport +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws --- +{/* lint disable page-structure remark-lint */} + You can set up secure access to Amazon Athena using Teleport's support for the [AWS CLI and Console](../cloud-apis/aws-console.mdx). diff --git a/docs/pages/enroll-resources/application-access/guides/api-access.mdx b/docs/pages/enroll-resources/application-access/protect-apps/api-access.mdx similarity index 98% rename from docs/pages/enroll-resources/application-access/guides/api-access.mdx rename to docs/pages/enroll-resources/application-access/protect-apps/api-access.mdx index 3265a88febc8c..17299076bb564 100644 --- a/docs/pages/enroll-resources/application-access/guides/api-access.mdx +++ b/docs/pages/enroll-resources/application-access/protect-apps/api-access.mdx @@ -1,6 +1,11 @@ --- title: Access REST APIs With Teleport Application Access +sidebar_label: REST APIs description: How to access REST APIs with Teleport application access. 
+tags: + - how-to + - zero-trust + - infrastructure-identity --- The Teleport Application Service can be used to access applications' (REST or diff --git a/docs/pages/enroll-resources/application-access/guides/connecting-apps.mdx b/docs/pages/enroll-resources/application-access/protect-apps/connecting-apps.mdx similarity index 93% rename from docs/pages/enroll-resources/application-access/guides/connecting-apps.mdx rename to docs/pages/enroll-resources/application-access/protect-apps/connecting-apps.mdx index 0146edf094867..c708627239a3c 100644 --- a/docs/pages/enroll-resources/application-access/guides/connecting-apps.mdx +++ b/docs/pages/enroll-resources/application-access/protect-apps/connecting-apps.mdx @@ -1,6 +1,11 @@ --- title: Web Application Access +sidebar_label: Web Applications description: In this getting started guide, learn how to connect an application to your Teleport cluster by running the Teleport Application Service. +tags: + - how-to + - zero-trust + - infrastructure-identity --- This guide shows you how to enroll a web application with your Teleport cluster @@ -14,7 +19,7 @@ Application Service, which uses a join token to establish trust with the Teleport Auth Service. Users visit Teleport-protected web applications through the Teleport Web UI. The Teleport Proxy Service routes browser traffic to the Teleport Application Service, which forwards HTTP requests to and from target -applications. +applications. ## Prerequisites @@ -35,7 +40,7 @@ target application, then deploy a Teleport Agent to run the service. ### Generate a token A join token is required to authorize a Teleport Application Service to -join the cluster. +join the cluster. 1. Generate a short-lived join token. Make sure to change `app-name` to the name of your application and `app-uri` to the application's domain name and port: @@ -154,6 +159,7 @@ header or move this page. 
*/} An application name should make a valid sub-domain (\<=63 characters, no spaces, only `a-z 0-9 -` allowed). +It should also be unique across root and leaf clusters if you are deploying in a trusted clusters environment. After Teleport is running, users can access the app at `app-name.proxy_public_addr.com` e.g. `grafana.teleport.example.com`. You can also override `public_addr` e.g @@ -192,6 +198,7 @@ address. To override the public address, specify the `public_addr` field: ```yaml - name: "jira" uri: "https://localhost:8001" + # The public address must be a unique DNS name and not conflict with the Teleport cluster's public addresses. public_addr: "jira.example.com" ``` @@ -321,7 +328,7 @@ rewritten: - Any header matching the pattern `X-Forwarded-*` Rewritten header values support the same templating variables as -[role templates](../../../admin-guides/access-controls/guides/role-templates.mdx). In the +[role templates](../../../zero-trust-access/rbac-get-started/role-templates.mdx). In the example above, `X-Internal-Trait` header will be populated with the value of internal user trait `logins` and `X-External-Trait` header will get the value of the user's external `env` trait coming from the identity provider. @@ -336,29 +343,7 @@ Controls Reference](../../../reference/access-controls/roles.mdx). ### Configuring the JWT token -By default, Teleport includes a user's roles and traits in the JWT -generated for application access. If your application doesn't care -about these values, or you are encountering an error due to exceeding -the size limit of HTTP headers, you can configure Teleport to omit -this information from the token. - -```yaml -- name: "dashboard" - uri: https://localhost:4321 - public_addr: dashboard.example.com - rewrite: - # Specify whether to include roles or traits in the JWT. 
- # Options: - # - roles-and-traits: include both roles and traits - # - roles: include only roles - # - traits: include only traits - # - none: exclude both roles and traits from the JWT token - # Default: roles-and-traits - jwt_claims: roles-and-traits - headers: - # Inject header with Teleport-signed JWT token. - - "Authorization: Bearer {{internal.jwt}}" -``` +(!docs/pages/includes/application-access/jwt-configure-claims.mdx!) ### Backends-for-Frontends support @@ -461,4 +446,4 @@ do so by hitting the `/teleport-logout` endpoint: ## Next steps - Learn how to [configure web apps as TCP apps to access them through - VNet](vnet.mdx#accessing-web-apps-through-vnet). + VNet](../vnet.mdx#accessing-web-apps-through-vnet). diff --git a/docs/pages/enroll-resources/application-access/guides/dynamodb.mdx b/docs/pages/enroll-resources/application-access/protect-apps/dynamodb.mdx similarity index 96% rename from docs/pages/enroll-resources/application-access/guides/dynamodb.mdx rename to docs/pages/enroll-resources/application-access/protect-apps/dynamodb.mdx index ea9bb72f6a801..1e1cbac34d1b8 100644 --- a/docs/pages/enroll-resources/application-access/guides/dynamodb.mdx +++ b/docs/pages/enroll-resources/application-access/protect-apps/dynamodb.mdx @@ -1,6 +1,12 @@ --- title: Amazon DynamoDB using the Teleport Application Service +sidebar_label: Amazon DynamoDB description: How to access Amazon DynamoDB through the Teleport Application Service +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws --- You can configure the Teleport Application Service to enable secure access to @@ -28,7 +34,7 @@ The Teleport Application Service enables secure access to DynamoDB via its [integration](../cloud-apis/aws-console.mdx) with the AWS management console and API. 
This is an alternative to accessing DynamoDB through the Teleport Database service, as described in our [Protect Amazon DynamoDB with -Teleport](../../database-access/enroll-aws-databases/aws-dynamodb.mdx) guide. +Teleport](../../database-access/enrollment/aws/aws-dynamodb.mdx) guide. The Application Service's integration with AWS is not designed specifically for diff --git a/docs/pages/enroll-resources/application-access/protect-apps/protect-apps.mdx b/docs/pages/enroll-resources/application-access/protect-apps/protect-apps.mdx new file mode 100644 index 0000000000000..a2f5267fca4d5 --- /dev/null +++ b/docs/pages/enroll-resources/application-access/protect-apps/protect-apps.mdx @@ -0,0 +1,11 @@ +--- +title: Protecting Applications with Teleport +sidebar_label: Internal Applications +sidebar_position: 3 +description: Provides step-by-step instructions for protecting different kinds of applications with Teleport. +--- + +The guides in this section show you how to enroll specific kinds of applications +with your Teleport cluster, including the following: + + diff --git a/docs/pages/enroll-resources/application-access/protect-apps/tcp.mdx b/docs/pages/enroll-resources/application-access/protect-apps/tcp.mdx new file mode 100644 index 0000000000000..b40c8b4f4da74 --- /dev/null +++ b/docs/pages/enroll-resources/application-access/protect-apps/tcp.mdx @@ -0,0 +1,231 @@ +--- +title: TCP Application Access +sidebar_label: TCP Applications +description: How to configure Teleport for accessing plain TCP apps +tags: + - how-to + - zero-trust + - infrastructure-identity +--- + +Teleport can provide access to any TCP-based application. This allows users to +connect to applications which Teleport doesn't natively support, such as SMTP +servers or databases not yet natively supported by the Teleport Database +Service. + +## How it works + +A Teleport administrator configures the Teleport Application Service to proxy a +TCP application.
The end user starts a local proxy that authenticates to the +Teleport Proxy Service using mutual TLS. The Proxy Service forwards traffic from +the local proxy to and from the Teleport Application Service, which in turn +proxies traffic to and from a Teleport-protected TCP application. + +As with any Teleport-protected resource, the TCP application needs to be in the +same network as the agent (the Teleport Application Service), but the agent and +the Teleport-protected resource can run in a separate network from the Teleport +Proxy Service. + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx!) + +- (!docs/pages/includes/tctl.mdx!) +- A TCP application to connect to. In this guide we'll use a PostgreSQL server running + in Docker as an example. You can also use any TCP-based application you may + already have. +- A host where you will run the Teleport Application Service. + +We will assume your Teleport cluster is accessible at `teleport.example.com`. +You can substitute the address of your Teleport Proxy Service. (For Teleport +Cloud customers, this will be similar to `example.teleport.sh`.) + +## Step 1/4. Start PostgreSQL container + +Skip this step if you already have an application you'd like to connect to. + +Start a PostgreSQL server in a Docker container: + +```code +$ docker run --name postgres -p 5432:5432 -e POSTGRES_PASSWORD= -d postgres +``` + +## Step 2/4. Start Teleport Application Service + +The Teleport Application Service requires a valid auth token to join the cluster.
+ + + +To generate one, run the following command on your Auth Service node: + +```code +$ tctl tokens add --type=app +``` + +Next, create a Teleport user with the `access` role that will allow it to +connect to cluster applications: + +```code +$ tctl users add --roles=access alice +``` + + + +To generate one, log into your Cloud tenant and run the following command: + +```code +$ tsh login --proxy=mytenant.teleport.sh +$ tctl tokens add --type=app +``` + + + + +Save the generated token in `/tmp/token` on the node where Application Service +will run. + +Now, install Teleport on the Application Service node. It must be able to reach +both your Teleport Proxy and the TCP application it's going to proxy. + +(!docs/pages/includes/install-linux.mdx!) + +Create the Application Service configuration file `/etc/teleport.yaml` with +the following contents: + +```yaml +version: v3 +teleport: + auth_token: "/tmp/token" + proxy_server: teleport.example.com:3080 +auth_service: + enabled: false +ssh_service: + enabled: false +proxy_service: + enabled: false +app_service: + enabled: true + apps: + - name: "tcp-app" + uri: tcp://localhost:5432 +``` + +Note that the URI scheme must be `tcp://` in order for Teleport to recognize +this as a TCP application. + +(!docs/pages/includes/start-teleport.mdx!) + +## Step 3/4. Start app proxy + +Log into your Teleport cluster and view available applications: + +```code +$ tsh login --proxy=teleport.example.com +$ tsh apps ls +Application Description Type Public Address Labels +----------- ------------- ---- -------------------------------- ----------- +tcp-app TCP tcp-app.teleport.example.com +``` + +Your TCP application should show up and be denoted with a `TCP` type. + +Now log into the application: + +```code +$ tsh apps login tcp-app +Logged into TCP app tcp-app. Start the local TCP proxy for it: + + tsh proxy app tcp-app + +Then connect to the application through this proxy. 
+``` + +Next, start a local proxy for it: + +```code +$ tsh proxy app tcp-app +Proxying connections to tcp-app on 127.0.0.1:55868 +``` + +The `tsh proxy app` command will set up a listener that will proxy all connections to +the target application. + +## Step 4/4. Connect + +Once the local proxy is running, you can connect to the application using the +application client you would normally use to connect to it: + +```code +$ psql postgres://postgres@localhost:55868/postgres +``` + +## Next steps + +### Configuring access to multiple ports + +By default, the Application Service proxies connections to the `uri` field from the application +specification. However, Teleport can enable access to multiple ports of a TCP application. An +application specification in this case needs to have no port number in the `uri` field and a new +field called `tcp_ports` with a list of ports. + +For example, let's take tcp-app from the steps above and add access to port 8080 and port range +31276-32300. The Application Service definition should look like this: + +```yaml +app_service: + enabled: true + apps: + - name: "tcp-app" + uri: tcp://localhost # No port in the URI + tcp_ports: + - port: 5432 # PostgreSQL + - port: 8080 # HTTP server + - port: 31276 + end_port: 32300 # Inclusive end of range +``` + +To access the app, [start VNet](../../../connect-your-client/teleport-clients/vnet.mdx) and point an application +client towards the target port: + +```code +# Use 8080 for the HTTP server. +$ curl -I http://tcp-app.teleport.example.com:8080 +HTTP/1.1 200 OK + +# Use 5432 for postgres. +$ psql postgres://postgres@tcp-app.teleport.example.com:5432/postgres +``` + +Without VNet, use `tsh proxy app` and specify the target port after the listening port. All +connections made to the proxy will be routed to that target port. + +```code +# Use 12345 as the listening port and 8080 as the target port. 
+$ tsh proxy app tcp-app --port 12345:8080 +Proxying connections to tcp-app:8080 on 127.0.0.1:12345 + +# Use a random listening port and 5432 as the target port. +$ tsh proxy app tcp-app --port 0:5432 +Proxying connections to tcp-app:5432 on 127.0.0.1:65060 +``` + + +There is no RBAC for TCP ports – a user that has access to an application can connect to any port in +the specification. We strongly recommend specifying only the necessary ports instead of defining a +wide port range that happens to include ports that are meant to be available. + + +{/* TODO: DELETE IN 19.0.0. At this point all compatible servers and clients are going +to support multiple ports. */} + +Support for multiple ports is available in Teleport v17.1+. Connections from Teleport clients that +do not support multiple ports are routed to the first port from the application specification. An +Application Service that does not support multiple ports will not be able to handle traffic to a +multi-port application if it receives such application through [dynamic +registration](../configuration/dynamic-registration.mdx) from an Auth Service. + +### Further reading + +- Learn about [access controls](../configuration/controls.mdx) for applications. +- Learn how to [connect to TCP apps with VNet](../../../connect-your-client/teleport-clients/vnet.mdx) and + [configure VNet for custom `public_addr`](../vnet.mdx). diff --git a/docs/pages/enroll-resources/application-access/reference.mdx b/docs/pages/enroll-resources/application-access/reference.mdx new file mode 100644 index 0000000000000..08e3e11a27635 --- /dev/null +++ b/docs/pages/enroll-resources/application-access/reference.mdx @@ -0,0 +1,238 @@ +--- +title: Teleport Application Service Reference +sidebar_label: Reference +description: Configuration and CLI reference documentation for Teleport application access. 
+sidebar_position: 8 +tags: + - reference + - zero-trust + - infrastructure-identity +--- + +This guide describes interfaces and options for interacting with the Teleport +Application Service, including the static configuration file for the `teleport` +binary, the dynamic `app` resource, and `tsh apps` commands. + +## Configuration + +(!docs/pages/includes/backup-warning.mdx!) + +The following snippet shows the full YAML configuration of an Application Service +appearing in the `teleport.yaml` configuration file: + +```yaml +app_service: + # Enables application proxy service. + enabled: true + # Teleport provides a small debug app called "dumper" that can be used + # to make sure application access is working correctly. It outputs JWTs, + # so it can be useful when extending your application. + debug_app: true + # Matchers for application resources created with "tctl create" command. + resources: + - labels: + "*": "*" + # This section contains definitions of all applications proxied by this + # service. It can contain multiple items. + apps: + # Name of the application. Used for identification purposes. + - name: "grafana" + # Free-form application description. + description: "This is an internal Grafana instance" + # URI and port the application is available at. + uri: "http://localhost:3000" + # Optional application public address to override. + public_addr: "grafana.teleport.example.com" + # Rewrites section. + rewrite: + # Specify whether to include roles or traits in the JWT. + # Options: + # - roles-and-traits: include both roles and traits + # - roles: include only roles + # - traits: include only traits + # - none: exclude both roles and traits from the JWT token + # Default: roles-and-traits + jwt_claims: roles-and-traits + # Rewrite the "Location" header on redirect responses replacing the + # host with the public address of this application. + redirect: + - "grafana.internal.dev" + # Headers passthrough configuration. 
+ headers: + - "X-Custom-Header: example" + - "X-External-Trait: {{external.env}}" + # Disable application certificate validation. + insecure_skip_verify: true + # Optional static labels to assign to the app. Used in RBAC. + labels: + env: "prod" + # Optional dynamic labels to assign to the app. Used in RBAC. + commands: + - name: "hostname" + command: ["hostname"] + period: 1m0s + # Optional AWS-specific configurations. + aws: + # External ID used when assuming AWS roles for this application. + external_id: "example-external-id" + - name: "azure-cli" + # Optional: For access to cloud provider APIs, specify the cloud provider. + # Allowed values are "AWS", "Azure", and "GCP". + cloud: "Azure" +``` + +For full details on configuring Teleport roles, including how Teleport +populates the `external` traits, see the [Access Controls +Reference](../../reference/access-controls/roles.mdx). + +## Application resource + +Full YAML spec of application resources managed by `tctl` resource commands: + +```yaml +kind: app +version: v3 +metadata: + # Application name. + name: example + # Application description. + description: "Example application" + # Application static labels. + labels: + env: local +spec: + # URI and port application is available at. + uri: http://localhost:4321 + # Optional application public address. + public_addr: test.example.com + # Disable application certificate validation. + insecure_skip_verify: true + # Rewrites configuration. + rewrite: + # Rewrite the "Location" header on redirect responses replacing the + # host with the public address of this application. + redirect: + - "grafana.internal.dev" + # Headers passthrough configuration. + headers: + - name: "X-Custom-Header" + value: "example" + - name: "X-External-Trait" + value: "{{external.env}}" + # Optional dynamic labels. 
+ dynamic_labels: + - name: "hostname" + command: ["hostname"] + period: 1m0s +``` + +You can create a new `app` resource by running the following commands, which +assume that you have created a YAML file called `app.yaml` with your configuration: + + + + +```code +# Log in to your cluster with tsh so you can use tctl from your local machine. +# You can also run tctl on your Auth Service host without running "tsh login" +# first. +$ tsh login --proxy=teleport.example.com --user=myuser +# Create the resource +$ tctl create -f app.yaml +``` + + + + +```code +# Log in to your cluster with tsh so you can use tctl from your local machine. +$ tsh login --proxy=mytenant.teleport.sh --user=myuser +# Create the resource. +$ tctl create -f app.yaml +``` + + + + + +## CLI + +This section shows CLI commands relevant for application access. + +### tsh apps ls + +Lists available applications. + +```code +$ tsh apps ls +``` + +### tsh apps login + +Retrieves short-lived X.509 certificate for CLI application access. + +```code +$ tsh apps login grafana +``` + +| Flag | Description | +| - | - | +| `--aws-role` | For AWS CLI access, the role ARN or role name of an AWS IAM role. | +| `--azure-identity` | For Azure CLI access, the name or URI of an Azure managed identity to use for accessing the Azure CLI. | + +### tsh apps logout + +Removes CLI application access certificate. + +```code +# Log out of a particular app. +$ tsh apps logout grafana + +# Log out of all apps. +$ tsh apps logout +``` + +### tsh apps config + +Prints application connection information. + +```code +# Print app information in a table form. +$ tsh apps config + +# Print information for a particular app. +$ tsh apps config grafana + +# Print an example curl command. +$ tsh apps config --format=curl + +# Construct a curl command. 
+$ curl $(tsh apps config --format=uri) \
+  --cacert $(tsh apps config --format=ca) \
+  --cert $(tsh apps config --format=cert) \
+  --key $(tsh apps config --format=key)
+```
+
+| Flag | Description |
+| - | - |
+| `--format` | Optional print format, one of: `uri` to print app address, `ca` to print CA cert path, `cert` to print cert path, `key` to print key path, `curl` to print an example curl command.|
+
+### tsh az
+
+Run an Azure CLI command via the Teleport Application Service:
+
+```code
+$ tsh az <command>
+```
+
+`<command>`: A valid command within the `az` CLI, including arguments and flags.
+See the [Azure
+documentation](https://learn.microsoft.com/en-us/cli/azure/reference-index?view=azure-cli-latest)
+for the full list of `az` CLI commands.
+
+To run this command, one of the user's roles must include the
+`spec.allow.azure_identities` field with one of the identities used by the
+Application Service. To learn how to set up secure access to Azure via Teleport,
+read [Protect the Azure CLI with Teleport Application
+Access](../../enroll-resources/application-access/cloud-apis/azure.mdx).
+
diff --git a/docs/pages/enroll-resources/application-access/troubleshooting-apps.mdx b/docs/pages/enroll-resources/application-access/troubleshooting-apps.mdx
index 1407f5d75b584..784413b44681b 100644
--- a/docs/pages/enroll-resources/application-access/troubleshooting-apps.mdx
+++ b/docs/pages/enroll-resources/application-access/troubleshooting-apps.mdx
@@ -1,6 +1,11 @@
 ---
 title: Troubleshooting Application Access
+sidebar_label: Troubleshooting
 description: Describes common issues and solutions for access to applications protected by Teleport.
+tags:
+  - how-to
+  - zero-trust
+  - infrastructure-identity
 ---
 
 This section describes common issues that you might encounter in managing access to applications
@@ -197,7 +202,7 @@ likely to exceed the limit.
{/* vale messaging.protocol-products = NO */} This configuration is available under the `jwt_claims` property of the application's `rewrite` configuration. See -[Web Application Access](./guides/connecting-apps.mdx#configuring-the-jwt-token) +[Web Application Access](protect-apps/connecting-apps.mdx#configuring-the-jwt-token) for details. {/* vale messaging.protocol-products = YES */} diff --git a/docs/pages/enroll-resources/application-access/vnet.mdx b/docs/pages/enroll-resources/application-access/vnet.mdx new file mode 100644 index 0000000000000..9b9ccaa8b94ad --- /dev/null +++ b/docs/pages/enroll-resources/application-access/vnet.mdx @@ -0,0 +1,185 @@ +--- +title: VNet +description: How to configure custom DNS zones for VNet +sidebar_position: 5 +tags: + - how-to + - zero-trust + - infrastructure-identity +--- + +VNet automatically proxies connections made to TCP applications available under the public address +of a Proxy Service. This guide explains how to configure VNet to support apps with [custom public +addresses](protect-apps/connecting-apps.mdx#customize-public-address). + +## How it works + +Let's assume that a user has logged in to a cluster through a Proxy Service available at +`teleport.example.com`. There's a leaf cluster associated with that cluster. It has its own Proxy +Service available at `leaf.example.com`. Once started, VNet captures DNS queries for both of those +domains and their subdomains. + +Type A and AAAA queries are matched against `public_addr` of applications registered in both +clusters. If there's a match and the application is registered as a TCP application, VNet responds +with a virtual IP address over which the connection will be proxied to the app. In any other +case, the query is forwarded to the default DNS name server used by the OS. + +If you want VNet to forward connections to an app that has a custom `public_addr` set, you need +to first update the VNet config in the Auth Service to include a matching DNS zone. 
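The zone-matching behavior described above can be sketched in a few lines. This is a simplified, standalone illustration of the matching rule (not Teleport's actual implementation): a query name falls under a configured zone when it equals the suffix or ends with it at a label boundary, regardless of how many subdomain levels sit above the suffix.

```python
def matches_zone(query: str, zones: list[str]) -> bool:
    """Return True if a DNS query name falls under one of the zone suffixes.

    Matching happens at label boundaries: "tcp-app.company.test" matches the
    zone "company.test", but "notcompany.test" does not.
    """
    name = query.rstrip(".").lower()
    for zone in zones:
        suffix = zone.rstrip(".").lower()
        if name == suffix or name.endswith("." + suffix):
            return True
    return False


zones = ["company.test"]
print(matches_zone("tcp-app.company.test", zones))          # True
print(matches_zone("tcp-app.foo.bar.company.test", zones))  # True: any nesting level
print(matches_zone("notcompany.test", zones))               # False: not a label boundary
```

Queries that match no configured zone are the ones VNet forwards to the default OS name server.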
+
+## Prerequisites
+
+(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport (v16.0.0 or higher)"!)
+
+- (!docs/pages/includes/tctl.mdx!)
+- A TCP application connected to the cluster.
+- A domain name under your control.
+
+{/* vale messaging.protocol-products = NO */}
+In this guide, we'll use the example app from the [TCP Application Access guide](protect-apps/tcp.mdx) and make it
+available through VNet at `tcp-app.company.test` with
+`company.test` as the custom DNS zone.
+{/* vale messaging.protocol-products = YES */}
+
+## Step 1/3. Configure custom DNS zone
+
+Edit the VNet config:
+
+```code
+$ tctl edit vnet_config
+```
+
+Add a custom DNS zone. In our case, the `public_addr` of the app is going to be
+`tcp-app.company.test`, so we're going to set `company.test` as `suffix`. The `spec` section of the
+config should include the `custom_dns_zones` field:
+
+```diff
+kind: vnet_config
+metadata:
+  name: vnet-config
+  # …
+spec:
++ custom_dns_zones:
++ - suffix: "company.test"
+  # …
+version: v1
+```
+
+
+`suffix` doesn't have to point to a domain that's exactly one level above the `public_addr` of an
+app. Any level of nesting works. For example, you can have an app under `tcp-app.foo.bar.qux.test`
+and the suffix can be set to `bar.qux.test`.
+
+
+## Step 2/3. Set `public_addr` of the app
+
+Set the `public_addr` field of the application in the Application Service configuration file
+`/etc/teleport.yaml` and restart the `teleport` daemon.
+
+```diff
+version: v3
+# …
+app_service:
+  # …
+  apps:
+  - name: "tcp-app"
+    uri: tcp://localhost:5432
++   public_addr: "tcp-app.company.test"
+```
+
+## Step 3/3. Connect
+
+Once you [start VNet](../../connect-your-client/teleport-clients/vnet.mdx), you should be able to connect to the
+application over the custom `public_addr` using the application client you would normally use to
+connect to it. You might need to restart VNet if it was already running while you were making
+changes to the cluster.
+ +```code +$ psql postgres://postgres@tcp-app.company.test/postgres +``` + +## Next steps + +### Configuring IPv4 CIDR range + +Each cluster has a configurable IPv4 CIDR range which VNet uses when assigning IP addresses to +applications from that cluster. Root and leaf clusters can use different ranges. The default is +`100.64.0.0/10` and it can be changed by setting the `ipv4_cidr_range` field of the VNet config. + +Edit the VNet config: + +```code +$ tctl edit vnet_config +``` + +Edit the `ipv4_cidr_range` field in the `spec` section of the config: + +```diff +kind: vnet_config +metadata: + name: vnet-config + # … +spec: ++ ipv4_cidr_range: "100.64.0.0/10" + # … +version: v1 +``` + + +When starting, VNet needs to assign an IPv4 address for its virtual network device. To pick an +address, [VNet arbitrarily chooses a root +cluster](https://github.com/gravitational/teleport/issues/43343) that the user is logged in to and +picks an address from the range used by that cluster. If your cluster uses a custom range, but your +users are logged in to other clusters that are not under your control, this might cause VNet to pick +an address for the TUN device from a range offered by one of those clusters. + + +### Configuring leaf cluster apps + +To make a [leaf cluster](../../zero-trust-access/deploy-a-cluster/trustedclusters.mdx) app accessible over a custom +`public_addr`, you need to follow the same steps while being logged in directly to the leaf cluster. + +```code +$ tsh login --proxy=leaf.example.com --user=email@example.com +``` + +### Accessing web apps through VNet + +VNet does not officially support [web apps](protect-apps/connecting-apps.mdx) yet. +However, since all web apps are served over TCP, it's possible to convert a web +app to [a TCP app](protect-apps/tcp.mdx) to make it available via VNet. +You'll need to change the `uri` of the application to use `tcp://` instead of `https://`. + +Exposing plain HTTP web apps or APIs via VNet is not recommended. 
+Untrusted websites can potentially use DNS rebinding attacks to bypass the
+browser’s Same-Origin Policy and issue plain HTTP requests to VNet IP addresses.
+It is strongly recommended to either avoid VNet for plain HTTP access or
+implement one or more of the following mitigations for DNS rebinding attacks:
+- upgrade these APIs to HTTPS or another protocol
+- enforce a Host header allowlist at the HTTP server
+- block browser access to HTTP websites
+
+There are a few more caveats when converting a Teleport web app to a TCP app:
+
+- The Teleport Web UI uses [HSTS](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security).
+  If the application is going to be served from a subdomain of a Proxy Service,
+  it must use HTTPS; it will not be accessible in browsers over plain HTTP.
+  It's possible to work around this by setting a custom `public_addr`, as explained
+  above in this guide, to an address that is not a subdomain of the proxy address.
+- HTTPS applications must handle their own TLS connections and have
+  a valid certificate for the app `public_addr`.
+- [JWT Tokens](jwt/introduction.mdx), [redirects](protect-apps/connecting-apps.mdx#rewrite-redirect), and
+  [header rewrites](protect-apps/connecting-apps.mdx#headers-passthrough) are not available for TCP apps.
+- Teleport records the start and the end of a session for TCP apps in the audit log, but [session
+  chunks](../../reference/architecture/session-recording.mdx) are not captured.
+
+The important thing to understand is that VNet doesn't do anything extra with a
+TCP connection; it tunnels it directly to the target application's `uri`.
+The application-layer protocol is determined solely by the app itself and its
+clients.
+
+### Further reading
+
+- Read our VNet usage [guide](../../connect-your-client/teleport-clients/vnet.mdx) for end-users
+  accessing your applications with VNet.
+- Read [RFD 163](https://github.com/gravitational/teleport/blob/master/rfd/0163-vnet.md) to learn how VNet works on a technical level. diff --git a/docs/pages/enroll-resources/auto-discovery/auto-discovery.mdx b/docs/pages/enroll-resources/auto-discovery/auto-discovery.mdx index 46f3c5ca15c6a..6ec1b23e7ef7f 100644 --- a/docs/pages/enroll-resources/auto-discovery/auto-discovery.mdx +++ b/docs/pages/enroll-resources/auto-discovery/auto-discovery.mdx @@ -1,20 +1,125 @@ --- -title: Teleport Auto-Discovery -description: "Learn how to use the Teleport Discovery Service, which automatically enrolls resources by query APIs" +title: Auto-Discovery +description: "Automatically detect and enroll resources in your infrastructure" +template: doc-page +tags: + - zero-trust + - infrastructure-identity +sidebar_position: 1 --- -The Teleport Discovery Service automatically detects resources in your -infrastructure and enrolls them in your Teleport cluster. When you deploy -servers, databases, and Kubernetes clusters, Teleport enables secure access to -these resources with no further configuration. This lets you decouple the need -to protect your infrastructure resources from the work of deploying and managing -them. +import DocHero from "@site/src/components/Pages/Landing/DocHero"; +import EnrollmentMethods, { Method } from "@site/src/components/Pages/Landing/EnrollmentMethods/EnrollmentMethods"; -The Discovery Service runs on [Teleport Agents](../agents/agents.mdx). It -periodically queries cloud provider APIs to list resources in your -infrastructure. It then reconciles these resources with Teleport resources -registered on the Auth Service backend. 
+import autoDiscoveryImg from "@version/docs/img/auto-discovery/auto-discovery-hero.jpg"; +import databaseAccessImg from "@site/src/components/Icon/teleport-svg/database-access.svg"; +import kubernetesClustersSvg from "@site/src/components/Icon/teleport-svg/kubernetes-clusters.svg"; +import linuxServersSvg from "@site/src/components/Icon/teleport-svg/linux-servers.svg"; +import kubernetesSvg from "@site/src/components/Icon/svg/kubernetes2.svg"; -Set up Teleport auto-discovery for resources in your infrastructure: + +Streamline infrastructure security with the Teleport Discovery Service. When you deploy servers, databases, and Kubernetes clusters, the service enables secure access to them with no further configuration needed. - +[Teleport Agents](../agents/agents.mdx) power the Discovery Service, automatically syncing resources with the Auth Service. + + + + + Automatically discover and register databases in your cluster. + + + Connect to your cloud provider to automatically find and enroll servers. + + + Enable automatic detection and enrollment of cloud-hosted Kubernetes clusters. + + + Teleport can automatically detect applications running in your Kubernetes clusters and register them with Teleport for secure access. + + diff --git a/docs/pages/enroll-resources/auto-discovery/databases/aws.mdx b/docs/pages/enroll-resources/auto-discovery/databases/aws.mdx index f016509abb173..c3fefec31a1e5 100644 --- a/docs/pages/enroll-resources/auto-discovery/databases/aws.mdx +++ b/docs/pages/enroll-resources/auto-discovery/databases/aws.mdx @@ -1,6 +1,12 @@ --- title: AWS Database Discovery +sidebar_label: AWS description: How to configure Teleport to auto-discover AWS databases. +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws --- Teleport can be configured to discover AWS-hosted databases automatically and @@ -251,15 +257,14 @@ resources in your Teleport cluster. 
In this case, it will match auto-discovered AWS databases in the region from any AWS account (`"*"` is a wildcard and it can be used as a label key and/or value). -You can make it match more specific databases by adjusting the label selectors. - +(!docs/pages/includes/discovery/db-label-warning.mdx!) + The AWS tags attached to AWS databases are imported as Teleport `db` labels in addition to some other identifying metadata. Refer to -[Database Labels Reference](../../../reference/agent-services/database-access-reference/labels.mdx) +[Database Labels Reference](../../database-access/reference/labels.mdx) for more information about available database labels. - ### Generate a join token @@ -292,7 +297,7 @@ Additional Teleport RBAC configuration and possibly IAM configuration may also be required to connect to the discovered databases via Teleport. Refer to the appropriate guide in -[Enroll AWS Databases](../../database-access/enroll-aws-databases/enroll-aws-databases.mdx) +[Enroll AWS Databases](../../database-access/enrollment/aws/aws.mdx) for information about database user provisioning and configuration. @@ -300,9 +305,9 @@ for information about database user provisioning and configuration. - Learn about [Dynamic Registration](../../database-access/guides/dynamic-registration.mdx) by the Teleport Database Service. - Get started by [connecting](../../database-access/guides/guides.mdx) your database. -- Connect AWS databases in [external AWS accounts](../../database-access/enroll-aws-databases/aws-cross-account.mdx). +- Connect AWS databases in [external AWS accounts](../../database-access/enrollment/aws/aws-cross-account.mdx). - Refer to the appropriate guide in -[Enroll AWS Databases](../../database-access/enroll-aws-databases/enroll-aws-databases.mdx) +[Enroll AWS Databases](../../database-access/enrollment/aws/aws.mdx) for information about database user provisioning and configuration. 
## Troubleshooting diff --git a/docs/pages/enroll-resources/auto-discovery/databases/databases.mdx b/docs/pages/enroll-resources/auto-discovery/databases/databases.mdx index 1b1e0e6e62e14..a8d0b5226d602 100644 --- a/docs/pages/enroll-resources/auto-discovery/databases/databases.mdx +++ b/docs/pages/enroll-resources/auto-discovery/databases/databases.mdx @@ -1,6 +1,11 @@ --- title: Database Discovery +sidebar_label: Databases description: Detailed guides for configuring database discovery. +tags: + - conceptual + - zero-trust + - infrastructure-identity --- Teleport can be configured to discover databases automatically and register @@ -9,7 +14,7 @@ them with your Teleport cluster. ## Supported clouds - [AWS](aws.mdx): Discovery for AWS databases. -- [Azure](../../database-access/enroll-azure-databases/enroll-azure-databases.mdx): Discovery for Azure databases. +- [Azure](../../database-access/enrollment/azure/azure.mdx): Discovery for Azure databases. {/* TODO(gavin): Add an Azure discovery guide and permission reference */} ## Architecture overview @@ -87,7 +92,8 @@ discovery_service: # 'rdsproxy' - Amazon RDS Proxy databases. # 'redshift' - Amazon Redshift databases. # 'redshift-serverless' - Amazon Redshift Serverless databases. - # 'elasticache' - Amazon ElastiCache Redis databases. + # 'elasticache' - Amazon ElastiCache Redis and Valkey databases. + # 'elasticache-serverless' - Amazon ElastiCache Serverless Redis or Valkey databases. # 'memorydb' - Amazon MemoryDB Redis databases. # 'opensearch' - Amazon OpenSearch Redis databases. # 'docdb' - Amazon DocumentDB databases. @@ -146,7 +152,7 @@ Here's how it works in detail: For more information about Discovery Service configuration, refer to [one of the guides above](#supported-clouds) or the -[Discovery Service Config File Reference](../../../reference/config.mdx). +[Discovery Service Config File Reference](../../../reference/deployment/config.mdx). 
## How the Database Service works @@ -186,6 +192,10 @@ db_service: "teleport.dev/cloud": "AWS" # Specific to AWS. This is the AWS account ID. "account-id": "123456789012" + # Use specific label selectors so each Database Service only matches + # databases it can actually reach. + "env": "staging" + "vpc-id": "vpc-id-123" aws: # This is an optional AWS role ARN to assume when forwarding connections # to AWS databases. @@ -212,10 +222,13 @@ Here's how it works in detail: 1. **Label matching:** The Database Service monitors `db` resources and the first label selector in `db_service.resources[].labels` that matches a `db` will be used. If no selector matches, then the `db` resource is ignored. -1. **Permissions:** The Database Service will assume the identity configured in + + (!docs/pages/includes/discovery/db-label-warning.mdx!) + +2. **Permissions:** The Database Service will assume the identity configured in the label selector that matched. If the selector does not specify an identity, then the Database Service will use its own identity. -1. **Connection:** The Database Service, acting as either its own identity or +3. **Connection:** The Database Service, acting as either its own identity or the identity it assumed, will retrieve credentials to authenticate to the database and use those credentials to act as a proxy for an authorized Teleport user. diff --git a/docs/pages/enroll-resources/auto-discovery/kubernetes-applications/get-started.mdx b/docs/pages/enroll-resources/auto-discovery/kubernetes-applications/get-started.mdx index 9cd71ba61a1c1..5060a718873f0 100644 --- a/docs/pages/enroll-resources/auto-discovery/kubernetes-applications/get-started.mdx +++ b/docs/pages/enroll-resources/auto-discovery/kubernetes-applications/get-started.mdx @@ -1,6 +1,11 @@ --- title: Get Started with Kubernetes Application Discovery +sidebar_label: Get Started description: Detailed guide for configuring Kubernetes Application Discovery. 
+tags: + - get-started + - zero-trust + - infrastructure-identity --- Teleport can automatically detect applications running in your Kubernetes diff --git a/docs/pages/enroll-resources/auto-discovery/kubernetes-applications/kubernetes-applications.mdx b/docs/pages/enroll-resources/auto-discovery/kubernetes-applications/kubernetes-applications.mdx index 13c1d8339bdff..f613f68fccb02 100644 --- a/docs/pages/enroll-resources/auto-discovery/kubernetes-applications/kubernetes-applications.mdx +++ b/docs/pages/enroll-resources/auto-discovery/kubernetes-applications/kubernetes-applications.mdx @@ -1,6 +1,10 @@ --- title: "Enroll Kubernetes Services as Teleport Applications" +sidebar_label: Kubernetes Services description: "Teleport can automatically detect applications running in your Kubernetes clusters and register them with Teleport for secure access." +tags: + - zero-trust + - infrastructure-identity --- Teleport can automatically detect applications running in your Kubernetes @@ -20,6 +24,6 @@ traffic to them. application discovery with the `teleport-kube-agent` Helm chart. - [Architecture](../../../reference/architecture/kubernetes-applications-architecture.mdx): Learn how automatic application discovery works. -- [Reference](../../../reference/agent-services/auto-discovery-reference/kubernetes-application-discovery.mdx): Consult this guide +- [Reference](reference.mdx): Consult this guide for options and Kubernetes annotations you can use to configure automatic Kubernetes application discovery. 
diff --git a/docs/pages/enroll-resources/auto-discovery/kubernetes-applications/reference.mdx b/docs/pages/enroll-resources/auto-discovery/kubernetes-applications/reference.mdx
new file mode 100644
index 0000000000000..f8c79637a660e
--- /dev/null
+++ b/docs/pages/enroll-resources/auto-discovery/kubernetes-applications/reference.mdx
@@ -0,0 +1,171 @@
+---
+title: Kubernetes Application Discovery Reference
+sidebar_label: Configuration Reference
+description: This guide is a comprehensive reference of configuration options for automatically enrolling Kubernetes applications with Teleport.
+tags:
+  - reference
+  - zero-trust
+  - infrastructure-identity
+---
+
+Kubernetes Application Discovery involves the Teleport Discovery Service,
+Teleport Application Service, and annotations on Kubernetes services. This guide
+shows you how to configure each of these to manage Kubernetes Application
+Discovery in your Kubernetes cluster.
+
+## Configuring Teleport Agent Helm chart
+
+You can configure the scope of service discovery by setting the `kubernetesDiscovery` value of the chart. For more information,
+see the [Helm chart documentation](../../../reference/helm-reference/teleport-kube-agent.mdx).
+
+`values.yaml` example:
+
+```yaml
+kubernetesDiscovery:
+- types: ["app"]
+  namespaces: [ "toronto", "porto" ]
+  labels:
+    env: staging
+- types: ["app"]
+  namespaces: [ "seattle", "oakland" ]
+  labels:
+    env: testing
+```
+
+## Configuring Kubernetes Apps Discovery manually
+
+While the `teleport-kube-agent` Helm chart will set up configuration for you
+automatically, you can also configure the required services manually. To do so,
+adjust the configuration files for the Teleport Application Service and Teleport
+Discovery Service, then restart the agents running these services.
+
+Configuration for the Discovery Service is controlled by the `kubernetes` field.
+For example:
+
+(!docs/pages/includes/discovery/discovery-group.mdx!)
+
+```yaml
+# This section configures the Discovery Service
+discovery_service:
+  enabled: true
+  discovery_group: main-cluster
+  kubernetes:
+  - types: ["app"]
+    namespaces: [ "toronto", "porto" ]
+    labels:
+      env: staging
+  - types: ["app"]
+    namespaces: [ "seattle", "oakland" ]
+    labels:
+      env: testing
+```
+
+Configuration for the Application Service is controlled by the `resources` field.
+For example:
+
+```yaml
+app_service:
+  enabled: true
+  resources:
+  - labels:
+      "teleport.dev/kubernetes-cluster": "main-cluster"
+      "teleport.dev/origin": "discovery-kubernetes"
+```
+
+The `teleport.dev/kubernetes-cluster` label should match the value of the `discovery_group` field in the Discovery Service config.
+
+For more information, see the [`discovery_service`](../../../reference/deployment/config.mdx) and [`app_service`](../../../reference/deployment/config.mdx) configuration references.
+
+## Annotations
+
+Kubernetes annotations on services can be used to fine-tune the transformation of services into apps.
+All annotations are optional - they override the default behavior, but they are not required for a service to be imported.
+
+### `teleport.dev/discovery-type`
+
+Controls what type this service is considered to be. If the annotation is missing,
+all services are considered to be of the "app" type by default. If matchers in the Discovery Service
+config match the service type, the service is imported. Currently the only supported value is
+`app`, which means a Teleport application is imported from this service. Support for importing databases is planned for the future.
+
+### `teleport.dev/protocol`
+
+Controls the protocol for the URI of the Teleport app we create. If the annotation is not set,
+heuristics are used to try to determine the protocol of an exposed port.
+If no heuristic succeeds, the port is skipped.
For an app to be imported with the `tcp` protocol, the
+service must have the explicit annotation `teleport.dev/protocol: "tcp"`.
+
+### `teleport.dev/port`
+
+Controls the preferred port for the Kubernetes service; only this port is used even if the service
+has multiple exposed ports. Its value should be one of the exposed service ports; otherwise, the app is not imported.
+The value can match either the numeric value or the name of a port defined on the service.
+
+### `teleport.dev/name`
+
+Controls the resulting app name. If present, it overrides the default app name pattern
+`$SERVICE_NAME-$NAMESPACE-$KUBE_CLUSTER_NAME`. If multiple ports are exposed, the resulting apps have port names added
+as a suffix to the annotation value, as `$APP_NAME-$PORT1_NAME`, `$APP_NAME-$PORT2_NAME`, etc., where `$APP_NAME` is the name
+set by the annotation.
+
+### `teleport.dev/description`
+
+Overrides the default description of the discovered app.
+
+### `teleport.dev/insecure-skip-verify`
+
+Controls whether TLS certificate verification should be skipped for this app.
+If present and set to `true`, TLS certificate verification will be skipped.
+
+```yaml
+annotations:
+  teleport.dev/insecure-skip-verify: "true"
+```
+
+### `teleport.dev/ignore`
+
+Controls whether this service should be ignored by the Discovery Service.
+This annotation is useful when you want to exclude a service from being imported as an app
+when it matches the Discovery Service config. For example, you may want to exclude a service
+that shares the same labels as other services that you want to import as apps.
+
+```yaml
+annotations:
+  teleport.dev/ignore: "true"
+```
+
+### `teleport.dev/app-rewrite`
+
+Controls the rewrite configuration for the Teleport app, if needed. It should
+contain the full rewrite configuration in YAML format, the same as one would use when configuring an app with the dynamic registration syntax (see the [documentation](../../../enroll-resources/application-access/protect-apps/connecting-apps.mdx)).
+ +```yaml +annotations: + teleport.dev/app-rewrite: | + redirect: + - "localhost" + - "jenkins.internal.dev" + headers: + - name: "X-Custom-Header" + value: "example" + - name: "Authorization" + value: "Bearer {{internal.jwt}}" +``` + +### `teleport.dev/public-addr` + +Controls the public address for the Teleport app we create if needed. + +```yaml +annotations: + teleport.dev/public-addr: "custom.teleport.dev" +``` + +### `teleport.dev/path` + +The path is appended to the URI generated for the Teleport app for cases where +an application is served on a sub-path of an HTTP service. + +```yaml +annotations: + teleport.dev/path: "foo/bar" +``` diff --git a/docs/pages/enroll-resources/auto-discovery/kubernetes/aws.mdx b/docs/pages/enroll-resources/auto-discovery/kubernetes/aws.mdx index 93db37bf55cd5..16ff99eef04b6 100644 --- a/docs/pages/enroll-resources/auto-discovery/kubernetes/aws.mdx +++ b/docs/pages/enroll-resources/auto-discovery/kubernetes/aws.mdx @@ -1,6 +1,12 @@ --- title: Teleport EKS Auto-Discovery +sidebar_label: Elastic Kubernetes Service description: How to configure auto-discovery of AWS EKS clusters in Teleport. +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws --- EKS Auto-Discovery can automatically @@ -269,6 +275,13 @@ and create the `ClusterRole` and `ClusterRoleBinding` resources during cluster p ## Step 3/3. Configure Teleport to discover EKS clusters +### Install Teleport + +Install Teleport on the host you are using to run the Kubernetes Service and +Discovery Service: + +(!docs/pages/includes/install-linux.mdx!) 
+ ### Get a join token Teleport EKS Auto-Discovery requires a valid Teleport auth token for the Discovery and @@ -277,11 +290,15 @@ command against your Teleport Auth Service and save it in `/tmp/token` on the machine that will run Kubernetes Discovery: ```code -$ tctl tokens add --type=discovery,kube +$ tctl tokens add --type=discovery,kube --format=text +(=presets.tokens.first=) ``` ### Configure the Teleport Kubernetes and Discovery Services +On the host running the Kubernetes Service and Discovery Service, create a +Teleport configuration file with the following content at `/etc/teleport.yaml`: + (!docs/pages/includes/discovery/discovery-group.mdx!) Enabling EKS Auto-Discovery requires that the `discovery_service.aws` section diff --git a/docs/pages/enroll-resources/auto-discovery/kubernetes/azure.mdx b/docs/pages/enroll-resources/auto-discovery/kubernetes/azure.mdx index 5cdc53acfbadd..49ab2c4e7c014 100644 --- a/docs/pages/enroll-resources/auto-discovery/kubernetes/azure.mdx +++ b/docs/pages/enroll-resources/auto-discovery/kubernetes/azure.mdx @@ -1,6 +1,12 @@ --- title: Teleport AKS Auto-Discovery +sidebar_label: Azure Kubernetes Service description: Auto-Discovery of AKS clusters in Azure cloud. +tags: + - how-to + - zero-trust + - infrastructure-identity + - azure --- AKS Auto-Discovery can automatically @@ -211,15 +217,30 @@ associated with Teleport identity. ## Step 2/2. Configure Teleport to discover AKS clusters +### Install Teleport + +Install Teleport on the host you are using to run the Kubernetes Service and +Discovery Service: + +(!docs/pages/includes/install-linux.mdx!) + +### Get a join token + Teleport AKS Auto-Discovery requires a valid auth token for the Discovery and Kubernetes services to join the cluster. 
Generate one by running the following command against your Teleport Auth Service and save it in `/tmp/token` on the machine that will run Kubernetes Discovery: ```code -$ tctl tokens add --type=discovery,kube +$ tctl tokens add --type=discovery,kube --format=text +(=presets.tokens.first=) ``` +### Configure the Kubernetes Service and Discovery Service + +On the host running the Kubernetes Service and Discovery Service, create a +Teleport configuration file with the following content at `/etc/teleport.yaml`: + Enabling AKS Auto-Discovery requires that the `discovery_service.azure` section include at least one entry and that `discovery_service.azure.types` include `aks`. It also requires configuring the `kubernetes_service.resources.tags` to use the same @@ -262,6 +283,8 @@ kubernetes_service: Once you have added this configuration, start Teleport. AKS clusters matching the tags and regions specified in the Azure section will be added to the Teleport cluster automatically. +(!docs/pages/includes/start-teleport.mdx service="the Kubernetes and Discovery Services"!) + ## Troubleshooting (!docs/pages/includes/discovery/discovery-service-troubleshooting.mdx resourceKind="Kubernetes cluster" tctlResource="kube_cluster" !) diff --git a/docs/pages/enroll-resources/auto-discovery/kubernetes/google-cloud.mdx b/docs/pages/enroll-resources/auto-discovery/kubernetes/google-cloud.mdx index bbe985f34abe2..5c86acbd870c4 100644 --- a/docs/pages/enroll-resources/auto-discovery/kubernetes/google-cloud.mdx +++ b/docs/pages/enroll-resources/auto-discovery/kubernetes/google-cloud.mdx @@ -1,6 +1,12 @@ --- title: Teleport GKE Auto-Discovery +sidebar_label: Google Kubernetes Engine description: How to configure auto-discovery of Google Kubernetes Engine clusters in Teleport. 
+tags: + - how-to + - zero-trust + - infrastructure-identity + - google-cloud --- The Teleport Discovery Service can automatically register your Google Kubernetes diff --git a/docs/pages/enroll-resources/auto-discovery/kubernetes/kubernetes.mdx b/docs/pages/enroll-resources/auto-discovery/kubernetes/kubernetes.mdx index 11dec8351b1d6..0bedb0be774bd 100644 --- a/docs/pages/enroll-resources/auto-discovery/kubernetes/kubernetes.mdx +++ b/docs/pages/enroll-resources/auto-discovery/kubernetes/kubernetes.mdx @@ -1,6 +1,11 @@ --- -title: Kubernetes Clusters Discovery +title: Kubernetes Cluster Discovery +sidebar_label: Kubernetes Clusters description: Detailed guides for configuring Kubernetes Clusters Discovery. +tags: + - conceptual + - zero-trust + - infrastructure-identity --- Kubernetes Clusters Discovery allows Kubernetes clusters diff --git a/docs/pages/reference/agent-services/auto-discovery-reference/aws-iam.mdx b/docs/pages/enroll-resources/auto-discovery/reference/aws-iam.mdx similarity index 83% rename from docs/pages/reference/agent-services/auto-discovery-reference/aws-iam.mdx rename to docs/pages/enroll-resources/auto-discovery/reference/aws-iam.mdx index b92cb31801650..463fc439e6eb0 100644 --- a/docs/pages/reference/agent-services/auto-discovery-reference/aws-iam.mdx +++ b/docs/pages/enroll-resources/auto-discovery/reference/aws-iam.mdx @@ -1,7 +1,11 @@ --- title: Discovery Service AWS IAM Reference +sidebar_label: Discovery Service AWS IAM description: AWS IAM permissions for the Teleport Discovery Service. -tocDepth: 3 +tags: + - reference + - zero-trust + - infrastructure-identity --- The Teleport Discovery Service requires AWS IAM permissions to discover AWS @@ -30,10 +34,14 @@ type of AWS resource. (!docs/pages/includes/discovery/reference/aws-iam/dynamodb.mdx!) -### ElastiCache for Redis +### ElastiCache for Redis and Valkey (!docs/pages/includes/discovery/reference/aws-iam/elasticache.mdx!) 
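+The include above lists the authoritative permissions. For orientation, the
+Discovery Service's ElastiCache policy takes a read-only shape along the
+following lines; treat the exact action list here as an assumption and defer
+to the reference above:
+
+```json
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Action": [
+        "elasticache:DescribeCacheClusters",
+        "elasticache:DescribeReplicationGroups",
+        "elasticache:ListTagsForResource"
+      ],
+      "Resource": "*"
+    }
+  ]
+}
+```
+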
+### ElastiCache Serverless for Redis and Valkey
+
+(!docs/pages/includes/discovery/reference/aws-iam/elasticache-serverless.mdx!)
+
 ### Keyspaces
 
 (!docs/pages/includes/discovery/reference/aws-iam/keyspaces.mdx!)
diff --git a/docs/pages/enroll-resources/auto-discovery/reference/labels.mdx b/docs/pages/enroll-resources/auto-discovery/reference/labels.mdx
new file mode 100644
index 0000000000000..255349fc1b96d
--- /dev/null
+++ b/docs/pages/enroll-resources/auto-discovery/reference/labels.mdx
@@ -0,0 +1,139 @@
+---
+title: Labels
+sidebar_label: Labels
+description: Learn about resource labels applied during auto-discovery
+---
+
+Cloud resources enrolled in a Teleport cluster during auto-discovery, such as
+AWS EC2 instances, EKS clusters, RDS databases, and similar resources in Azure
+and Google Cloud, receive a set of default labels that can then be used in
+RBAC.
+
+## AWS
+
+### EC2 instances
+
+See the AWS EC2 auto-discovery [guide](../../../enroll-resources/auto-discovery/servers/ec2-discovery/ec2-discovery.mdx).
+
+| Label | Description |
+|----------------------------|--------------------------------------------------|
+| `teleport.dev/account-id`  | AWS account ID where the EC2 instance is running |
+| `teleport.dev/instance-id` | AWS EC2 instance ID                              |
+
+### Databases
+
+See the AWS Databases auto-discovery [guide](../../../enroll-resources/auto-discovery/databases/aws.mdx).
+
+| Label | Description |
+|------------------------------------------------|---------------------------------------------------------------------------------------------------------|
+| `account-id` | ID of the AWS account the resource resides in. |
+| `endpoint-type` | Type of the endpoint. See [`endpoint-type`](../../database-access/reference/labels.mdx#endpoint-type) for more details. |
+| `engine-version` | Database engine version, if available. |
+| `engine` | Amazon RDS: engine type of the RDS instance. Amazon RDS Proxy: engine family of the proxy. |
+| `namespace` | Amazon Redshift Serverless namespace name. |
+| `region` | AWS region. |
+| `vpc-id` | ID of the Amazon VPC the resource resides in, if available. |
+| `workgroup` | Amazon Redshift Serverless workgroup name. |
+| `teleport.dev/cloud` | Always `AWS`. |
+| `teleport.dev/discovery-type` | Specifies the type of resource matched by the Teleport Discovery Service, e.g. "rds", "redshift", etc. |
+| `teleport.dev/origin` | Always `cloud`. |
+| `teleport.internal/discovered-name` | Original Database name. |
+| `teleport.internal/discovery-config-name` | Name of the discovery config. Absent when using matchers defined in Discovery Service configuration. |
+| `teleport.internal/discovery-group-name` | Name of the discovery group present in the Discovery Service configuration. |
+| `teleport.internal/discovery-integration-name` | Integration name used to fetch the Database. Absent when using ambient credentials. |
+
+### Kubernetes clusters
+
+See the AWS EKS auto-discovery [guide](../../../enroll-resources/auto-discovery/kubernetes/aws.mdx).
+
+| Label | Description |
+|------------------------------------------------|------------------------------------------------------------------------------------------------------|
+| `account-id` | ID of the AWS account the resource resides in. |
+| `region` | AWS region. |
+| `teleport.dev/cloud` | Always `AWS`. |
+| `teleport.dev/discovery-type` | Always `eks`. |
+| `teleport.dev/origin` | Always `cloud`. |
+| `teleport.internal/aws-arn` | Contains the AWS ARN for the resource. |
+| `teleport.internal/discovered-name` | Original EKS Cluster name. |
+| `teleport.internal/discovery-config-name` | Name of the discovery config. Absent when using matchers defined in Discovery Service configuration. |
+| `teleport.internal/discovery-group-name` | Name of the discovery group present in the Discovery Service configuration. |
+| `teleport.internal/discovery-integration-name` | Integration name used to fetch the Kubernetes cluster. Absent when using ambient credentials. |
+
+## Azure
+
+### VMs
+
+See the Azure VM auto-discovery [guide](../../../enroll-resources/auto-discovery/servers/azure-discovery.mdx).
+
+| Label | Description |
+|-------------------------------------|-----------------------------------------------|
+| `teleport.internal/region` | Azure region where the VM is running |
+| `teleport.internal/resource-group` | Azure resource group the VM belongs to |
+| `teleport.internal/subscription-id` | Azure subscription ID where the VM is running |
+| `teleport.internal/vm-id` | Azure VM ID |
+
+### Databases
+
+See the Azure Databases auto-discovery [guide](../../database-access/enrollment/azure/azure.mdx).
+
+| Label | Description |
+|-------------------------------------------|---------------------------------------------------------------------------------------------------------------|
+| `endpoint-type` | For Azure Redis Enterprise, one of `EnterpriseCluster`, `OSSCluster`. |
+| `engine-version` | Database engine version, if available. |
+| `engine` | Resource type of the resource ID. |
+| `region` | Azure location. |
+| `replication-role` | The replication role of an Azure DB Flexible server, e.g. "Source" or "Replica". |
+| `resource-group` | Azure resource group. |
+| `source-server` | The source server for replica Azure DB Flexible servers. This is the source (primary) database resource name. |
+| `subscription-id` | Azure subscription ID. |
+| `teleport.dev/cloud` | Always `Azure`. |
+| `teleport.dev/discovery-type` | Specifies the type of resource matched by the Teleport Discovery Service, e.g. "mysql", "postgres", etc. |
+| `teleport.dev/origin` | Always `cloud`. |
+| `teleport.internal/discovered-name` | Original Database name. |
+| `teleport.internal/discovery-config-name` | Name of the discovery config. Absent when using matchers defined in Discovery Service configuration. |
+| `teleport.internal/discovery-group-name` | Name of the discovery group present in the Discovery Service configuration. |
+
+### Kubernetes clusters
+
+See the Azure AKS auto-discovery [guide](../../../enroll-resources/auto-discovery/kubernetes/azure.mdx).
+
+| Label | Description |
+|-------------------------------------------|------------------------------------------------------------------------------------------------------|
+| `region` | Azure location. |
+| `resource-group` | Azure resource group. |
+| `subscription-id` | Azure subscription ID. |
+| `teleport.dev/cloud` | Always `Azure`. |
+| `teleport.dev/discovery-type` | Always `aks`. |
+| `teleport.dev/origin` | Always `cloud`. |
+| `teleport.internal/discovered-name` | Original AKS Cluster name. |
+| `teleport.internal/discovery-config-name` | Name of the discovery config. Absent when using matchers defined in Discovery Service configuration. |
+| `teleport.internal/discovery-group-name` | Name of the discovery group present in the Discovery Service configuration. |
+
+## Google Cloud
+
+### VMs
+
+See the GCP VM auto-discovery [guide](../../../enroll-resources/auto-discovery/servers/gcp-discovery.mdx).
+
+| Label | Description |
+|--------------------------------|-------------------------------------|
+| `teleport.dev/project-id` | GCP project ID the VM is running in |
+| `teleport.internal/name` | GCP VM name |
+| `teleport.internal/project-id` | GCP project ID the VM is running in |
+| `teleport.internal/zone` | GCP zone where the VM is running |
+
+### Kubernetes clusters
+
+See the GKE auto-discovery [guide](../../../enroll-resources/auto-discovery/kubernetes/google-cloud.mdx).
+
+| Label | Description |
+|-------------------------------------------|------------------------------------------------------------------------------------------------------|
+| `location` | GCP location where the GKE cluster is running. |
+| `project-id` | GCP project ID where the GKE cluster is running. |
+| `teleport.dev/cloud` | Always `GCP`. |
+| `teleport.dev/discovery-type` | Always `gke`. |
+| `teleport.dev/origin` | Always `cloud`. |
+| `teleport.internal/discovered-name` | Original GKE Cluster name. |
+| `teleport.internal/discovery-config-name` | Name of the discovery config. Absent when using matchers defined in Discovery Service configuration. |
+| `teleport.internal/discovery-group-name` | Name of the discovery group present in the Discovery Service configuration. |
diff --git a/docs/pages/enroll-resources/auto-discovery/reference/reference.mdx b/docs/pages/enroll-resources/auto-discovery/reference/reference.mdx
new file mode 100644
index 0000000000000..536b44ea2f15a
--- /dev/null
+++ b/docs/pages/enroll-resources/auto-discovery/reference/reference.mdx
@@ -0,0 +1,11 @@
+---
+title: Auto-Discovery Reference
+sidebar_label: Reference
+description: Configuration reference for the Teleport Discovery Service.
+sidebar_position: 5
+tags:
+  - zero-trust
+  - infrastructure-identity
+---
+
+
diff --git a/docs/pages/enroll-resources/auto-discovery/servers/azure-discovery.mdx b/docs/pages/enroll-resources/auto-discovery/servers/azure-discovery.mdx
index c83203f2073f0..fca90b846138f 100644
--- a/docs/pages/enroll-resources/auto-discovery/servers/azure-discovery.mdx
+++ b/docs/pages/enroll-resources/auto-discovery/servers/azure-discovery.mdx
@@ -1,8 +1,19 @@
 ---
 title: Automatically Discover Azure Virtual Machines
+sidebar_label: Azure Virtual Machines
 description: How to configure Teleport to automatically enroll Azure virtual machines.
+tags: + - how-to + - zero-trust + - infrastructure-identity + - azure --- +This guide shows you how to set up automatic server discovery for Azure virtual +machines. + +## How it works + The Teleport Discovery Service can connect to Azure and automatically discover and enroll virtual machines matching configured labels. It will then execute a script on these discovered instances that will install Teleport, @@ -19,7 +30,7 @@ Ubuntu/Debian/RHEL if making use of the default Teleport install script. (For other Linux distributions, you can install Teleport manually.) - (!docs/pages/includes/tctl.mdx!) -## Step 1/6. Create an Azure invite token +## Step 1/5. Create an Azure invite token When discovering Azure virtual machines, Teleport makes use of Azure invite tokens for authenticating joining SSH Service instances. @@ -56,7 +67,7 @@ Add the token to the Teleport cluster with: $ tctl create -f token.yaml ``` -## Step 2/6. Configure IAM permissions for Teleport +## Step 2/5. Configure IAM permissions for Teleport The Teleport Discovery Service needs Azure IAM permissions to discover and register Azure virtual machines. @@ -68,10 +79,10 @@ resources: - The Discovery Service can run on an Azure VM with attached managed identity. This is the recommended way of deploying the Discovery Service in production since it eliminates the need to manage Azure credentials. -- The Discovery Service can be registered as an Azure AD application (via AD's "App - registrations") and configured with its credentials. This is only recommended - for development and testing purposes since it requires Azure credentials to - be present in the Discovery Service's environment. +- The Discovery Service can be registered as a Microsoft Entra ID application + and configured with its credentials. This is only recommended for development + and testing purposes since it requires Azure credentials to be present in the + Discovery Service's environment. 
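+When using the managed identity approach, you can attach a system-assigned
+identity to the VM that runs the Discovery Service with the Azure CLI. This is
+an illustrative sketch; the resource group and VM names are placeholders:
+
+```code
+$ az vm identity assign --resource-group <resource-group> --name <discovery-vm>
+```
+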
@@ -99,14 +110,14 @@ resources: - Registering the Discovery Service as Azure AD application is suitable for - test and development scenarios, or if your Discovery Service does not run on - an Azure VM. For production scenarios prefer to use the managed identity - approach. + Registering the Discovery Service as a Microsoft Entra ID application is + suitable for test and development scenarios, or if your Discovery Service + does not run on an Azure VM. For production scenarios prefer to use the + managed identity approach. Go to the [App registrations](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) - page of your Azure Active Directory and click on *New registration*: + page in Microsoft Entra ID and click on *New registration*: ![App registrations](../../../../img/azure/app-registrations@2x.png) @@ -192,7 +203,7 @@ and replace the subscription in `assignableScopes` with your own subscription id (!docs/pages/includes/server-access/azure-assign-service-principal.mdx!) -## Step 3/6. Set up managed identities for discovered nodes +## Step 3/5. Set up managed identities for discovered nodes Every Azure VM to be discovered must have a managed identity assigned to it with at least the `Microsoft.Compute/virtualMachines/read` permission. @@ -202,7 +213,7 @@ with at least the `Microsoft.Compute/virtualMachines/read` permission. If the VMs to be discovered have more than one managed identity assigned to them, save the client ID of the identity you just created for step 5. -## Step 4/6. Install the Teleport Discovery Service +## Step 4/5. Install the Teleport Discovery Service @@ -215,7 +226,7 @@ Install Teleport on the virtual machine that will run the Discovery Service: (!docs/pages/includes/install-linux.mdx!) -## Step 5/6. Configure Teleport to discover Azure instances +## Step 5/5. 
Configure Teleport to discover Azure instances If you are running the Discovery Service on its own host, the service requires a valid invite token to connect to the cluster. Generate one by running the @@ -270,12 +281,13 @@ discovery_service: specifically the regions and tags you want to associate with the Discovery Service. -## Step 6/6. [Optional] Customize the default installer script +## Auto-discovery labels + +(!docs/pages/includes/auto-discovery/auto-discovery-labels.mdx!) -(!docs/pages/includes/server-access/custom-installer.mdx cloud="Azure" matcher="azure" matchTypes="[\"vm\"]"!) +## Advanced configuration -If `client_id` is set in the Discovery Service config, custom installers will -also have the `{{ .AzureClientID }}` templating option. +(!docs/pages/includes/auto-discovery/azure-vm-advanced-config.mdx!) ## Troubleshooting @@ -304,4 +316,4 @@ logs can be found on the targeted VM at - Read [Joining Nodes via Azure Managed Identity](../../agents/azure.mdx) for more information on Azure tokens. - Full documentation on Azure discovery configuration can be found through the [ - config file reference documentation](../../../reference/config.mdx). + config file reference documentation](../../../reference/deployment/config.mdx). diff --git a/docs/pages/enroll-resources/auto-discovery/servers/ec2-discovery.mdx b/docs/pages/enroll-resources/auto-discovery/servers/ec2-discovery.mdx deleted file mode 100644 index 837d692eba8d5..0000000000000 --- a/docs/pages/enroll-resources/auto-discovery/servers/ec2-discovery.mdx +++ /dev/null @@ -1,365 +0,0 @@ ---- -title: Server Auto-Discovery for Amazon EC2 -description: How to configure Teleport to automatically enroll EC2 instances. ---- - -This guide shows you how to configure Teleport to automatically enroll EC2 -instances in your cluster. 
- -## How it works - -In the setup we describe in this guide, the Teleport Discovery Service connects -to Amazon EC2 and reconciles the servers enrolled on the Auth Service backend -with servers it lists from the EC2 API. If an EC2 instance matches a configured -label and is not enrolled in your cluster, the Discovery Service executes a -script on these discovered instances using AWS Systems Manager that installs -Teleport, starts it and joins the cluster using the [IAM join method]( -../../agents/aws-iam.mdx). - -The Teleport Discovery Service uses an IAM invite token with a long time-to-live -(TTL), so that new instances can be discovered and added to the Teleport cluster -for the lifetime of the token. - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) - -- AWS account with EC2 instances and permissions to create and attach IAM -policies. -- EC2 instances running Ubuntu/Debian/RHEL/Amazon Linux 2/Amazon Linux 2023 and SSM agent version 3.1 or greater if making use of the -default Teleport install script. (For other Linux distributions, you can -install Teleport manually.) -- (!docs/pages/includes/tctl.mdx!) - -## Step 1/7. Create an EC2 invite token - -When discovering EC2 instances, Teleport makes use of IAM invite tokens for -authenticating joining Nodes. - -Create a file called `token.yaml`: - -```yaml -# token.yaml -kind: token -version: v2 -metadata: - # the token name is not a secret because instances must prove that they are - # running in your AWS account to use this token - name: aws-discovery-iam-token -spec: - # use the minimal set of roles required (e.g. Node, App, Kube, DB, WindowsDesktop) - roles: [Node] - - # set the join method allowed for this token - join_method: iam - - allow: - # specify the AWS account which Nodes may join from - - aws_account: "123456789" -``` - -Assign the `aws_account` field to your AWS account number. 
-Add the token to the Teleport cluster with: - -```code -$ tctl create -f token.yaml -``` - -## Step 2/7. Define an IAM policy - -The `teleport discovery bootstrap` command will automate the process of -defining and implementing IAM policies required to make auto-discovery work. It -requires only a single pre-defined policy, attached to the EC2 instance running -the command: - -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Action": [ - "iam:GetPolicy", - "iam:TagPolicy", - "iam:ListPolicyVersions", - "iam:CreatePolicyVersion", - "iam:CreatePolicy", - "ssm:CreateDocument", - "iam:DeletePolicyVersion", - "iam:AttachRolePolicy", - "iam:PutRolePermissionsBoundary" - ], - "Resource": "*" - } - ] -} -``` - -Create this policy and apply it to the Node (EC2 instance) that will run the Discovery Service. - -## Step 3/7. Install Teleport on the Discovery Node - - - -If you plan on running the Discovery Service on the same Node already running -another Teleport service (Auth or Proxy, for example), you can skip this step. - - - -Install Teleport on the EC2 instance that will run the Discovery Service: - -(!docs/pages/includes/install-linux.mdx!) - -## Step 4/7. Configure Teleport to discover EC2 instances - -If you are running the Discovery Service on its own host, the service requires a -valid invite token to connect to the cluster. Generate one by running the -following command against your Teleport Auth Service: - -```code -$ tctl tokens add --type=discovery -``` - -Save the generated token in `/tmp/token` on the Node (EC2 instance) that will -run the Discovery Service. - -In order to enable EC2 instance discovery the `discovery_service.aws` section -of `teleport.yaml` must include at least one entry: - -(!docs/pages/includes/discovery/discovery-group.mdx!) 
- -```yaml -version: v3 -teleport: - join_params: - token_name: "/tmp/token" - method: token - proxy_server: "" -auth_service: - enabled: false -proxy_service: - enabled: false -ssh_service: - enabled: false -discovery_service: - enabled: true - discovery_group: "aws-prod" - aws: - - types: ["ec2"] - regions: ["us-east-1","us-west-1"] - install: - join_params: - token_name: aws-discovery-iam-token - method: iam - tags: - "env": "prod" # Match EC2 instances where tag:env=prod -``` - -- Edit the `teleport.auth_servers` key to match your Auth Service or Proxy Service's URI - and port. -- Adjust the keys under `discovery_service.aws` to match your EC2 environment, - specifically the regions and tags you want to associate with the Discovery - Service. - -## Step 5/7. Bootstrap the Discovery Service AWS configuration - -On the same Node as above, run `teleport discovery bootstrap`. This command -will generate and display the additional IAM policies and AWS Systems Manager (SSM) documents -required to enable the Discovery Service: - -```code -$ sudo teleport discovery bootstrap -Reading configuration at "/etc/teleport.yaml"... - -AWS -1. Create IAM Policy "TeleportEC2Discovery": -{ - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Action": [ - "ec2:DescribeInstances", - "ssm:DescribeInstanceInformation", - "ssm:GetCommandInvocation", - "ssm:ListCommandInvocations", - "ssm:SendCommand" - ], - "Resource": [ - "*" - ] - } - ] -} - -2. Create IAM Policy "TeleportEC2DiscoveryBoundary": -{ - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Action": [ - "ec2:DescribeInstances", - "ssm:DescribeInstanceInformation", - "ssm:GetCommandInvocation", - "ssm:ListCommandInvocations", - "ssm:SendCommand" - ], - "Resource": [ - "*" - ] - } - ] -} - -3. 
Create SSM Document "TeleportDiscoveryInstaller": - -schemaVersion: '2.2' -description: aws:runShellScript -parameters: - token: - type: String - description: "(Required) The Teleport invite token to use when joining the cluster." - scriptName: - type: String - description: "(Required) The Teleport installer script to use when joining the cluster." -mainSteps: -- action: aws:downloadContent - name: downloadContent - inputs: - sourceType: "HTTP" - destinationPath: "/tmp/installTeleport.sh" - sourceInfo: - url: "https:///webapi/scripts/installer/{{ scriptName }}" -- action: aws:runShellScript - name: runShellScript - inputs: - timeoutSeconds: '300' - runCommand: - - /bin/sh /tmp/installTeleport.sh "{{ token }}" - -4. Attach IAM policies to "yourUser-discovery-role". - -Confirm? [y/N]: y -``` - -Review the policies and confirm: - -```code -Confirm? [y/N]: y -✅[AWS] Create IAM Policy "TeleportEC2Discovery"... done. -✅[AWS] Create IAM Policy "TeleportEC2DiscoveryBoundary"... done. -✅[AWS] Create IAM SSM Document "TeleportDiscoveryInstaller"... done. -✅[AWS] Attach IAM policies to "alex-discovery-role"... done. -``` - -All EC2 instances that are to be added to the Teleport cluster by the -Discovery Service must include the `AmazonSSMManagedInstanceCore` IAM policy -in order to receive commands from the Discovery Service. 
- -This policy includes the following permissions: - -```js -{ - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Action": [ - "ssm:DescribeAssociation", - "ssm:GetDeployablePatchSnapshotForInstance", - "ssm:GetDocument", - "ssm:DescribeDocument", - "ssm:GetManifest", - "ssm:GetParameter", - "ssm:GetParameters", - "ssm:ListAssociations", - "ssm:ListInstanceAssociations", - "ssm:PutInventory", - "ssm:PutComplianceItems", - "ssm:PutConfigurePackageResult", - "ssm:UpdateAssociationStatus", - "ssm:UpdateInstanceAssociationStatus", - "ssm:UpdateInstanceInformation" - ], - "Resource": "*" - }, - { - "Effect": "Allow", - "Action": [ - "ssmmessages:CreateControlChannel", - "ssmmessages:CreateDataChannel", - "ssmmessages:OpenControlChannel", - "ssmmessages:OpenDataChannel" - ], - "Resource": "*" - }, - { - "Effect": "Allow", - "Action": [ - "ec2messages:AcknowledgeMessage", - "ec2messages:DeleteMessage", - "ec2messages:FailMessage", - "ec2messages:GetEndpoint", - "ec2messages:GetMessages", - "ec2messages:SendReply" - ], - "Resource": "*" - } - ] -} -``` - -## Step 6/7. [Optional] Customize the default installer script - -(!docs/pages/includes/server-access/custom-installer.mdx cloud="AWS" matcher="aws" matchTypes="[\"ec2\"]"!) - -## Step 7/7. Start Teleport - -(!docs/pages/includes/aws-credentials.mdx service="the Discovery Service"!) - -(!docs/pages/includes/start-teleport.mdx service="the Discovery Service"!) - -Once you have started the Discovery Service, EC2 instances matching the tags you -specified earlier will begin to be added to the Teleport cluster automatically. - -## Troubleshooting - -If Installs are showing failed or instances are failing to appear check the -Command history in AWS System Manager -> Node Management -> Run Command. -Select the instance-id of the Target to review Errors. 
- -### `cannot unmarshal object into Go struct field` - -If you encounter an error similar to the following: - -```text -invalid format in plugin properties map[destinationPath:/tmp/installTeleport.sh sourceInfo:map[url:[https://example.teleport.sh:443/webapi/scripts/installer/preprod-installer](https://example.teleport.sh/webapi/scripts/installer/preprod-installer)] sourceType:HTTP]; -error json: cannot unmarshal object into Go struct field DownloadContentPlugin.sourceInfo of type string -``` - -It is likely that you're running an older SSM agent version. Upgrade to SSM agent version 3.1 or greater to resolve. - -### `InvalidInstanceId: Instances [[i-123]] not in a valid state for account 456` - -The following problems can cause this error: -- The Discovery Service doesn't have permission to access the managed node. -- AWS Systems Manager Agent (SSM Agent) isn't running. Verify that SSM Agent is running. -- SSM Agent isn't registered with the SSM endpoint. Try reinstalling SSM Agent. -- The discovered instance does not have permission to receive SSM - commands, verify the instance includes the AmazonSSMManagedInstanceCore IAM policy. - -See SSM RunCommand error codes and troubleshooting information in AWS documentation for more details: -- https://docs.aws.amazon.com/systems-manager/latest/userguide/troubleshooting-managed-instances.html -- https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_SendCommand.html#API_SendCommand_Errors - -## Next steps - -- Read [Joining Nodes via AWS IAM - Role](../../agents/aws-iam.mdx) -for more information on IAM Invite Tokens. -- Information on IAM best practices on EC2 instances managed by Systems -Manager can be found for in the [AWS Cloud Operations & Migrations Blog -](https://aws.amazon.com/blogs/mt/applying-managed-instance-policy-best-practices/). -- Full documentation on EC2 discovery configuration can be found through the [ -config file reference documentation](../../../reference/config.mdx). 
diff --git a/docs/pages/enroll-resources/auto-discovery/servers/ec2-discovery/ec2-discovery-guided.mdx b/docs/pages/enroll-resources/auto-discovery/servers/ec2-discovery/ec2-discovery-guided.mdx
new file mode 100644
index 0000000000000..8550fe4236f74
--- /dev/null
+++ b/docs/pages/enroll-resources/auto-discovery/servers/ec2-discovery/ec2-discovery-guided.mdx
@@ -0,0 +1,255 @@
+---
+title: Guided EC2 Auto-Discovery Configuration
+sidebar_label: Guided
+description: How to configure Teleport EC2 auto-discovery with permissions configured by Teleport
+tags:
+  - how-to
+  - zero-trust
+  - infrastructure-identity
+  - aws
+---
+
+This guide shows you how to configure Teleport to automatically enroll EC2
+instances in your cluster, with permissions configured by Teleport.
+
+## How it works
+
+The Teleport Discovery Service runs on an EC2 instance and queries the AWS API
+to list instances in your AWS account. For any new EC2 instance that you deploy,
+the Discovery Service uses AWS Systems Manager to install Teleport on the
+instance and join it to the cluster as a Teleport-protected server. `teleport`
+commands allow you to create IAM policies that enable the Discovery Service to
+enroll EC2 instances as servers in your Teleport cluster.
+
+To manually configure IAM policies and SSM documents instead, read [Manual EC2
+Auto-Discovery Configuration](ec2-discovery-manual.mdx).
+
+## Prerequisites
+
+(!docs/pages/includes/edition-prereqs-tabs.mdx!)
+
+- AWS account with EC2 instances and permissions to create and attach IAM
+policies.
+- EC2 instances running Ubuntu/Debian/RHEL/Amazon Linux 2/Amazon Linux 2023,
+with SSM agent version 3.1 or greater, if making use of the default Teleport
+install script. (For other Linux distributions, you can install Teleport
+manually.)
+- (!docs/pages/includes/tctl.mdx!)
+ +All EC2 instances that are to be added to the Teleport cluster by the Discovery +Service must include the `AmazonSSMManagedInstanceCore` IAM policy in order to +receive commands from the Discovery Service. For a list of permissions included +in the policy, see the [AWS +documentation](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonSSMManagedInstanceCore.html). + +## Step 1/6. Create an EC2 invite token + +(!docs/pages/includes/auto-discovery/ec2-invite-token.mdx!) + +## Step 2/6. Configure IAM policies + +The `teleport discovery bootstrap` command will automate the process of +defining and implementing IAM policies required to make auto-discovery work. It +requires only a single pre-defined policy, attached to the EC2 instance running +the command: + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "iam:GetPolicy", + "iam:TagPolicy", + "iam:ListPolicyVersions", + "iam:CreatePolicyVersion", + "iam:CreatePolicy", + "iam:GetRole", + "ssm:CreateDocument", + "iam:DeletePolicyVersion", + "iam:AttachRolePolicy", + "iam:PutRolePermissionsBoundary" + ], + "Resource": "*" + } + ] +} +``` + +Create this policy and apply it to the Node (EC2 instance) that will run the Discovery Service. + +## Step 3/6. Install Teleport on the Discovery Node + + + +If you plan on running the Discovery Service on the same Node already running +another Teleport service (Auth or Proxy, for example), you can skip this step. + + + +Install Teleport on the EC2 instance that will run the Discovery Service: + +(!docs/pages/includes/install-linux.mdx!) + +## Step 4/6. Configure Teleport to discover EC2 instances + +(!docs/pages/includes/auto-discovery/ec2-config.mdx!) + +## Step 5/6. Bootstrap the Discovery Service AWS configuration + +On the same Node as above, run `teleport discovery bootstrap`. 
This command +will generate and display the additional IAM policies and AWS Systems Manager (SSM) documents +required to enable the Discovery Service: + +```code +$ sudo teleport discovery bootstrap +Reading configuration at "/etc/teleport.yaml"... +# ... +``` + +This will add the following additional permissions to the Discovery Service's +role: + +- `account:ListRegions` +- `ec2:DescribeInstances` +- `ssm:DescribeInstanceInformation` +- `ssm:GetCommandInvocation` +- `ssm:ListCommandInvocations` +- `ssm:SendCommand` + + +Review the policies and confirm: + +```code +# ... +Confirm? [y/N]: y +✅[AWS] Create IAM Policy "TeleportEC2Discovery"... done. +✅[AWS] Create IAM Policy "TeleportEC2DiscoveryBoundary"... done. +✅[AWS] Create IAM SSM Document "AWS-RunShellScript"... done. +✅[AWS] Attach IAM policies to "alex-discovery-role"... done. +``` + +## Step 6/6. Start Teleport + +(!docs/pages/includes/aws-credentials.mdx service="the Discovery Service"!) + +(!docs/pages/includes/start-teleport.mdx service="the Discovery Service"!) + +Once you have started the Discovery Service, EC2 instances matching the tags you +specified earlier will begin to be added to the Teleport cluster automatically. + +## Discovering instances in multiple AWS accounts + +To discover EC2 instances in AWS accounts other than the account your Teleport +Discovery Service is running in, Teleport must have permission to assume an IAM +role in each of those accounts. The following steps assume you have finished the +main EC2 discovery guide above; repeat them for each AWS account you want +to discover instances from. + +### Step 1/5. Update EC2 invite token + +(!docs/pages/includes/auto-discovery/ec2-cross-account-token.mdx!) + +### Step 2/5. Configure IAM permissions + +In the destination account, create a new role and note its ARN.
Create the +following IAM policy and attach it to the new role: + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "iam:GetPolicy", + "iam:TagPolicy", + "iam:ListPolicyVersions", + "iam:CreatePolicyVersion", + "iam:CreatePolicy", + "iam:GetRole", + "ssm:CreateDocument", + "iam:DeletePolicyVersion", + "iam:AttachRolePolicy", + "iam:PutRolePermissionsBoundary" + ], + "Resource": "*" + } + ] +} +``` + +Edit the trust policy of the new role to allow the Discovery Service to assume +it, adding the ARN of the IAM role or user that your Discovery Service uses as the principal: + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "AWS": "" + }, + "Action": "sts:AssumeRole" + } + ] +} +``` + +Create the following policy in the Discovery Service's account and attach it +to the Discovery Service's role, adding the ARN of the role you created in the destination account: + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": "sts:AssumeRole", + "Resource": "" + } + ] +} +``` + +### Step 3/5. Update Teleport configuration + +(!docs/pages/includes/auto-discovery/ec2-cross-account-config.mdx!) + +### Step 4/5. Bootstrap the Discovery Service AWS configuration + +When all of your accounts are ready, run `teleport discovery bootstrap` again +to generate the remaining IAM policies and SSM documents. For each +distinct `assume_role_arn`/`external_id`, Teleport will assume that role and +attach the new policies to it (unless overridden by `--attach-to-user` or `--attach-to-role`). + +```code +$ sudo teleport discovery bootstrap +``` + +### Step 5/5. Restart Teleport + +Restart the Teleport service to start discovering new instances: + +```code +$ sudo systemctl restart teleport +``` + +You can check the status of the Discovery Service with `systemctl status teleport` +and view its logs with `journalctl -fu teleport`. + +## Auto-discovery labels + +(!docs/pages/includes/auto-discovery/auto-discovery-labels.mdx!)
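+The cross-account steps above boil down to one AWS matcher per account in which
+the Discovery Service assumes a role. The following sketch is illustrative
+only — the `assume_role_arn`, `external_id`, region, tags, and token name are
+placeholder values, not a definitive configuration:
+
+```yaml
+discovery_service:
+  enabled: true
+  aws:
+    # One matcher entry per destination account/role.
+    - types: ["ec2"]
+      regions: ["us-east-1"]
+      # Role in the destination account that the Discovery Service assumes.
+      assume_role_arn: "arn:aws:iam::123456789012:role/example-discovery-role"
+      external_id: "example-external-id"
+      tags:
+        "env": "prod"
+      install:
+        join_params:
+          token_name: "aws-discovery-token"
+```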
+ +## Advanced configuration + +(!docs/pages/includes/auto-discovery/aws-ec2-advanced-config.mdx!) + +## Troubleshooting + +(!docs/pages/includes/auto-discovery/ec2-troubleshooting.mdx!) + +## Next steps + +(!docs/pages/includes/auto-discovery/ec2-next-steps.mdx!) diff --git a/docs/pages/enroll-resources/auto-discovery/servers/ec2-discovery/ec2-discovery-manual.mdx b/docs/pages/enroll-resources/auto-discovery/servers/ec2-discovery/ec2-discovery-manual.mdx new file mode 100644 index 0000000000000..8d7e4a24b3ef7 --- /dev/null +++ b/docs/pages/enroll-resources/auto-discovery/servers/ec2-discovery/ec2-discovery-manual.mdx @@ -0,0 +1,238 @@ +--- +title: Manual EC2 Auto-Discovery Configuration +sidebar_label: Manual +description: How to configure Teleport EC2 auto-discovery with manually configured permissions +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws +--- + +This guide shows you how to configure Teleport to automatically enroll EC2 +instances in your cluster, with permissions configured manually. + +## How it works + +The Teleport Discovery Service runs on an EC2 instance and queries the AWS API +to list instances in your AWS account. For any new EC2 instance that you deploy, +the Discovery Service uses AWS Systems Manager (SSM) to install Teleport on the +instance and join it to the cluster as a Teleport-protected server. In addition +to deploying the Teleport Discovery Service, the procedure shown in this guide +includes manually editing IAM policies and SSM documents. + +To automatically bootstrap IAM policies instead, read [Guided EC2 Auto-Discovery +Configuration](ec2-discovery-guided.mdx). + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx!) + +- AWS account with EC2 instances and permissions to create and attach IAM +policies. +- EC2 instances running Ubuntu/Debian/RHEL/Amazon Linux 2/Amazon Linux 2023 and SSM agent version 3.1 or greater if making use of the +default Teleport install script. 
(For other Linux distributions, you can +install Teleport manually.) +- (!docs/pages/includes/tctl.mdx!) + +All EC2 instances that are to be added to the Teleport cluster by the Discovery +Service must include the `AmazonSSMManagedInstanceCore` IAM policy in order to +receive commands from the Discovery Service. For a list of permissions included +in the policy, see the [AWS +documentation](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonSSMManagedInstanceCore.html). + +## Step 1/5. Create an EC2 invite token + +(!docs/pages/includes/auto-discovery/ec2-invite-token.mdx!) + +## Step 2/5. Configure IAM policies + +Create the following policy and attach it to the EC2 instance that will run +the Discovery Service: + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "account:ListRegions", + "ec2:DescribeInstances", + "ssm:DescribeInstanceInformation", + "ssm:GetCommandInvocation", + "ssm:ListCommandInvocations", + "ssm:SendCommand" + ], + "Resource": [ + "*" + ] + } + ] +} +``` + +## Step 3/5. Install Teleport on the Discovery Node + + + +If you plan on running the Discovery Service on the same Node already running +another Teleport service (Auth or Proxy, for example), you can skip this step. + + + +Install Teleport on the EC2 instance that will run the Discovery Service: + +(!docs/pages/includes/install-linux.mdx!) + +## Step 4/5. Configure Teleport to discover EC2 instances + +(!docs/pages/includes/auto-discovery/ec2-config.mdx!) + +## Step 5/5. Start Teleport + +(!docs/pages/includes/aws-credentials.mdx service="the Discovery Service"!) + +(!docs/pages/includes/start-teleport.mdx service="the Discovery Service"!) + +Once you have started the Discovery Service, EC2 instances matching the tags you +specified earlier will begin to be added to the Teleport cluster automatically. 
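+For reference, the matcher you defined in Step 4/5 follows this general
+shape — the region, tags, and token name below are placeholder values, not a
+definitive configuration:
+
+```yaml
+discovery_service:
+  enabled: true
+  aws:
+    # Poll EC2 in these regions for instances whose tags match.
+    - types: ["ec2"]
+      regions: ["us-west-2"]
+      tags:
+        "teleport": "yes"
+      install:
+        join_params:
+          # Invite token created in Step 1/5.
+          token_name: "aws-discovery-token"
+```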
+ +## Discovering instances in multiple AWS accounts + +To discover EC2 instances in AWS accounts other than the account your Teleport +Discovery Service is running in, Teleport must have permission to assume an IAM +role in each of those accounts. The following steps assume you have finished the +main EC2 discovery guide above; repeat them for each AWS account you want +to discover instances from. + +### Step 1/5. Update EC2 invite token + +(!docs/pages/includes/auto-discovery/ec2-cross-account-token.mdx!) + +### Step 2/5. Configure IAM permissions + +In the destination account, create a new role and note its ARN. Create the +following IAM policy and attach it to the new role: + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "account:ListRegions", + "ec2:DescribeInstances", + "ssm:DescribeInstanceInformation", + "ssm:GetCommandInvocation", + "ssm:ListCommandInvocations", + "ssm:SendCommand" + ], + "Resource": [ + "*" + ] + } + ] +} +``` + +Edit the trust policy of the new role to allow the Discovery Service +to assume it, adding the ARN of the IAM role or user that your Discovery Service uses as the principal: + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "AWS": "" + }, + "Action": "sts:AssumeRole" + } + ] +} +``` + +Create the following policy in the Discovery Service's account and attach it +to the Discovery Service's role, adding the ARN of the role you created in the destination account: + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": "sts:AssumeRole", + "Resource": "" + } + ] +} +``` + +### Step 3/5. Create SSM Documents + +In each region you plan to discover instances in, create the following SSM document +in AWS Systems Manager and name it `TeleportDiscoveryInstaller`: + +```yaml +schemaVersion: '2.2' +description: aws:runShellScript +parameters: + token: + type: String + description: "(Required) The Teleport invite token to use when joining the cluster."
+ scriptName: + type: String + description: "(Required) The Teleport installer script to use when joining the cluster." + env: + type: String + description: "Environment variables exported to the script. Format 'ENV=var FOO=bar'" + default: "X=$X" +mainSteps: +- action: aws:downloadContent + name: downloadContent + inputs: + sourceType: "HTTP" + destinationPath: "/tmp/installTeleport.sh" + sourceInfo: + url: "https:///webapi/scripts/installer/{{ scriptName }}" +- action: aws:runShellScript + name: runShellScript + inputs: + timeoutSeconds: '300' + runCommand: + - export {{ env }}; /bin/sh /tmp/installTeleport.sh "{{ token }}" +``` + +### Step 4/5. Update Teleport configuration + +(!docs/pages/includes/auto-discovery/ec2-cross-account-config.mdx!) + +### Step 5/5. Restart Teleport + +Restart the Teleport service to start discovering new instances: + +```code +$ sudo systemctl restart teleport +``` + +You can check the status of the Discovery Service with `systemctl status teleport` +and view its logs with `journalctl -fu teleport`. + +## Auto-discovery labels + +(!docs/pages/includes/auto-discovery/auto-discovery-labels.mdx!) + +## Advanced configuration + +(!docs/pages/includes/auto-discovery/aws-ec2-advanced-config.mdx!) + +## Troubleshooting + +(!docs/pages/includes/auto-discovery/ec2-troubleshooting.mdx!) + +## Next steps + +(!docs/pages/includes/auto-discovery/ec2-next-steps.mdx!) diff --git a/docs/pages/enroll-resources/auto-discovery/servers/ec2-discovery/ec2-discovery.mdx b/docs/pages/enroll-resources/auto-discovery/servers/ec2-discovery/ec2-discovery.mdx new file mode 100644 index 0000000000000..2f0c76b36f2dc --- /dev/null +++ b/docs/pages/enroll-resources/auto-discovery/servers/ec2-discovery/ec2-discovery.mdx @@ -0,0 +1,40 @@ +--- +title: Server Auto-Discovery for Amazon EC2 +sidebar_label: Amazon EC2 +description: How to configure Teleport to automatically enroll EC2 instances.
+tags: + - how-to + - zero-trust + - infrastructure-identity + - aws +--- + +This guide shows you how to configure Teleport to automatically enroll EC2 +instances in your cluster. + +## How it works + +In the setup we describe in this guide, the Teleport Discovery Service connects +to Amazon EC2 and reconciles the servers enrolled on the Auth Service backend +with servers it lists from the EC2 API. If an EC2 instance matches a configured +label and is not yet enrolled in your cluster, the Discovery Service uses AWS +Systems Manager to execute a script on the instance that installs Teleport, +starts it, and joins it to the cluster using the [IAM join method]( +../../../agents/aws-iam.mdx). + +The Teleport Discovery Service uses an IAM invite token with a long time-to-live +(TTL), so that new instances can be discovered and added to the Teleport cluster +for the lifetime of the token. + +## Choosing guided or manual EC2 auto-discovery configuration + +In the guided EC2 auto-discovery configuration process, Teleport generates +its own policies and SSM documents for the Discovery Service to use. + +If you want more control over the policies and SSM documents used, +manual configuration may be a better fit. + +## Guides + +- [Guided EC2 Auto-Discovery Configuration](ec2-discovery-guided.mdx) +- [Manual EC2 Auto-Discovery Configuration](ec2-discovery-manual.mdx) diff --git a/docs/pages/enroll-resources/auto-discovery/servers/gcp-discovery.mdx b/docs/pages/enroll-resources/auto-discovery/servers/gcp-discovery.mdx index 8d795275c0729..256b1329cf6ac 100644 --- a/docs/pages/enroll-resources/auto-discovery/servers/gcp-discovery.mdx +++ b/docs/pages/enroll-resources/auto-discovery/servers/gcp-discovery.mdx @@ -1,8 +1,19 @@ --- title: Automatically Discover GCP Compute Instances +sidebar_label: Google Compute Engine description: How to configure Teleport to automatically enroll GCP compute instances.
+tags: + - how-to + - zero-trust + - infrastructure-identity + - google-cloud --- +This guide shows you how to set up automatic server discovery for Google Compute +Engine virtual machines. + +## How it works + The Teleport Discovery Service can connect to GCP and automatically discover and enroll GCP Compute Engine instances matching configured labels. It will then execute a script on these discovered instances that will install Teleport, @@ -18,7 +29,7 @@ Ubuntu/Debian/RHEL if making use of the default Teleport install script. (For other Linux distributions, you can install Teleport manually.) - (!docs/pages/includes/tctl.mdx!) -## Step 1/6. Create a GCP invite token +## Step 1/5. Create a GCP invite token When discovering GCP compute instances, Teleport makes use of GCP invite tokens for authenticating joining SSH Service instances. @@ -59,7 +70,7 @@ Add the token to the Teleport cluster with: $ tctl create -f token.yaml ``` -## Step 2/6. Configure IAM permissions for Teleport +## Step 2/5. Configure IAM permissions for Teleport Create a service account that will give Teleport IAM permissions needed to discover instances. @@ -150,7 +161,7 @@ discover instances. -## Step 3/6. Configure instances to be discovered +## Step 3/5. Configure instances to be discovered Ensure that each instance to be discovered has a service account assigned to it. No permissions are required on the service account. To check if an instance @@ -204,7 +215,7 @@ the host keys to the guest attributes below:
-## Step 4/6. Install the Teleport Discovery Service +## Step 4/5. Install the Teleport Discovery Service @@ -217,7 +228,7 @@ Install Teleport on the virtual machine that will run the Discovery Service. (!docs/pages/includes/install-linux.mdx!) -## Step 5/6. Configure Teleport to discover GCP compute instances +## Step 5/5. Configure Teleport to discover GCP compute instances If you are running the Discovery Service on its own host, the service requires a valid invite token to connect to the cluster. Generate one by running the @@ -285,13 +296,17 @@ instance and assign the service account to that instance. Refer to [Set Up Application Default Credentials](https://cloud.google.com/docs/authentication/provide-credentials-adc) for details on alternate methods. -## Step 6/6. [Optional] Customize the default installer script +## Auto-discovery labels + +(!docs/pages/includes/auto-discovery/auto-discovery-labels.mdx!) + +## Advanced configuration -(!docs/pages/includes/server-access/custom-installer.mdx cloud="GCP" matcher="gcp" matchTypes="[\"gce\"]"!) +(!docs/pages/includes/auto-discovery/gcp-vm-advanced-config.mdx!) ## Next steps - Read [Joining Services via GCP](../../agents/gcp.mdx) for more information on GCP tokens. - Full documentation on GCP discovery configuration can be found through the [ - config file reference documentation](../../../reference/config.mdx). + config file reference documentation](../../../reference/deployment/config.mdx). diff --git a/docs/pages/enroll-resources/auto-discovery/servers/servers.mdx b/docs/pages/enroll-resources/auto-discovery/servers/servers.mdx index 331471111a59f..c4ac9ccc5e2fa 100644 --- a/docs/pages/enroll-resources/auto-discovery/servers/servers.mdx +++ b/docs/pages/enroll-resources/auto-discovery/servers/servers.mdx @@ -1,6 +1,10 @@ --- title: Server Auto-Discovery +sidebar_label: Linux Servers description: You can set up the Teleport Discovery Service to automatically enroll servers in your infrastructure. 
+tags: + - zero-trust + - infrastructure-identity --- The Teleport Discovery Service can connect to your cloud provider and @@ -10,6 +14,6 @@ Teleport, start it and join the cluster. Learn how to set up auto-discovery for servers in your cloud: -- [Amazon EC2](ec2-discovery.mdx) +- [Amazon EC2](ec2-discovery/ec2-discovery.mdx) - [Google Compute Engine](gcp-discovery.mdx) - [Azure Virtual Machines](azure-discovery.mdx) diff --git a/docs/pages/enroll-resources/automatic-labels/automatic-labels.mdx b/docs/pages/enroll-resources/automatic-labels/automatic-labels.mdx new file mode 100644 index 0000000000000..aef1f0a7f5929 --- /dev/null +++ b/docs/pages/enroll-resources/automatic-labels/automatic-labels.mdx @@ -0,0 +1,13 @@ +--- +title: Automatically Label Teleport Agents +sidebar_label: Automatic Labels +sidebar_position: 9 +description: Provides information on labeling Teleport Agents automatically by integrating with your cloud provider. +--- + +When running on a virtual machine in your cloud provider, Teleport will +automatically detect and import the cloud provider's tags as Teleport labels for +protected resources. 
Read the following guides for information on configuring +automatic labeling: + + diff --git a/docs/pages/admin-guides/management/guides/ec2-tags.mdx b/docs/pages/enroll-resources/automatic-labels/ec2-tags.mdx similarity index 89% rename from docs/pages/admin-guides/management/guides/ec2-tags.mdx rename to docs/pages/enroll-resources/automatic-labels/ec2-tags.mdx index 9baa413faf673..0cde0731fb165 100644 --- a/docs/pages/admin-guides/management/guides/ec2-tags.mdx +++ b/docs/pages/enroll-resources/automatic-labels/ec2-tags.mdx @@ -1,7 +1,12 @@ --- title: EC2 Tags as Teleport Node Labels +sidebar_label: EC2 description: How to set up Teleport Node labels based on EC2 tags -h1: Sync EC2 Tags and Teleport Node Labels +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws --- When running on an AWS EC2 instance, Teleport will automatically detect and import EC2 tags as @@ -28,7 +33,7 @@ fakehost.example.com 127.0.0.1:3022 env=example,hostname=ip-172-31-53-70,aws/Nam (!docs/pages/includes/edition-prereqs-tabs.mdx!) - One Teleport Agent running on an Amazon EC2 instance. See - [our guides](../../../enroll-resources/agents/agents.mdx) for how to set up Teleport Agents. + [our guides](../agents/agents.mdx) for how to set up Teleport Agents. ## Enable tags in instance metadata @@ -50,17 +55,17 @@ To launch a new instance with instance metadata tags enabled: 1. Ensure that `Metadata accessible` is not disabled. 1. Enable `Allow tags in metadata`. - ![Advanced Options](../../../../img/aws/launch-instance-advanced-options.png) + ![Advanced Options](../../../img/aws/launch-instance-advanced-options.png) To modify an existing instance to enable instance metadata tags: 1. From the instance summary, go to `Actions > Instance Settings > Allow tags in instance metadata`. - ![Instance Settings](../../../../img/aws/instance-settings.png) + ![Instance Settings](../../../img/aws/instance-settings.png) 1. Enable `Allow`. 
- ![Allow Tags](../../../../img/aws/allow-tags.png) + ![Allow Tags](../../../img/aws/allow-tags.png) ### AWS CLI diff --git a/docs/pages/admin-guides/management/guides/gcp-tags.mdx b/docs/pages/enroll-resources/automatic-labels/gcp-tags.mdx similarity index 95% rename from docs/pages/admin-guides/management/guides/gcp-tags.mdx rename to docs/pages/enroll-resources/automatic-labels/gcp-tags.mdx index 2461d21469a47..ac0fdf2e4107d 100644 --- a/docs/pages/admin-guides/management/guides/gcp-tags.mdx +++ b/docs/pages/enroll-resources/automatic-labels/gcp-tags.mdx @@ -1,7 +1,12 @@ --- title: GCP Tags and Labels as Teleport Agent Labels +sidebar_label: Google Cloud description: How to set up Teleport Agent labels based on GCP tags and labels -h1: Sync GCP Tags/Labels and Teleport Agent labels +tags: + - conceptual + - zero-trust + - infrastructure-identity + - google-cloud --- When running on a Google Compute Engine instance, Teleport will automatically detect and import GCP @@ -36,7 +41,7 @@ fakehost.example.com 127.0.0.1:3022 gcp/label/testing=yes,gcp/tag/environment=st (!docs/pages/includes/edition-prereqs-tabs.mdx!) - One Teleport Agent running on a GCP Compute instance. See - [our guides](../../../enroll-resources/agents/agents.mdx) for how to set up Teleport Agents. + [our guides](../agents/agents.mdx) for how to set up Teleport Agents.
## Configure service account on instances with Teleport nodes diff --git a/docs/pages/admin-guides/management/guides/oracle-tags.mdx b/docs/pages/enroll-resources/automatic-labels/oracle-tags.mdx similarity index 93% rename from docs/pages/admin-guides/management/guides/oracle-tags.mdx rename to docs/pages/enroll-resources/automatic-labels/oracle-tags.mdx index 48263c1c57545..7d734d9ef856f 100644 --- a/docs/pages/admin-guides/management/guides/oracle-tags.mdx +++ b/docs/pages/enroll-resources/automatic-labels/oracle-tags.mdx @@ -1,7 +1,11 @@ --- title: Oracle Cloud Tags as Teleport Agent Labels +sidebar_label: Oracle description: How to set up Teleport Agent labels based on Oracle Cloud tags -h1: Sync Oracle Cloud tags and Teleport Agent labels +tags: + - conceptual + - zero-trust + - infrastructure-identity --- When running on an Oracle Cloud (OCI) Compute instance, Teleport will @@ -28,4 +32,4 @@ fakehost.example.com 127.0.0.1:3022 oracle/testing=yes,oracle/definedTagNamespac For services that manage multiple resources (such as the Database Service), each resource will receive the same tags from Oracle. - \ No newline at end of file + diff --git a/docs/pages/enroll-resources/database-access/auto-user-provisioning/auto-user-provisioning.mdx b/docs/pages/enroll-resources/database-access/auto-user-provisioning/auto-user-provisioning.mdx index cfe99bfb4c339..f0eb7132b7e0c 100644 --- a/docs/pages/enroll-resources/database-access/auto-user-provisioning/auto-user-provisioning.mdx +++ b/docs/pages/enroll-resources/database-access/auto-user-provisioning/auto-user-provisioning.mdx @@ -1,6 +1,10 @@ --- title: Database Automatic User Provisioning +sidebar_label: Auto User Provisioning description: Configure automatic user provisioning for databases. +tags: + - zero-trust + - infrastructure-identity --- (!docs/pages/includes/database-access/auto-user-provisioning/intro.mdx!) 
diff --git a/docs/pages/enroll-resources/database-access/auto-user-provisioning/aws-redshift.mdx b/docs/pages/enroll-resources/database-access/auto-user-provisioning/aws-redshift.mdx index 65d4a9dc29a7d..ab31bf41790dc 100644 --- a/docs/pages/enroll-resources/database-access/auto-user-provisioning/aws-redshift.mdx +++ b/docs/pages/enroll-resources/database-access/auto-user-provisioning/aws-redshift.mdx @@ -1,14 +1,22 @@ --- title: Amazon Redshift Automatic User Provisioning +sidebar_label: Amazon Redshift description: Configure automatic user provisioning for Amazon Redshift. +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws --- +{/* lint disable page-structure remark-lint */} + (!docs/pages/includes/database-access/auto-user-provisioning/intro.mdx!) ## Prerequisites -- Teleport cluster v14.1.3 or higher with a configured [Amazon - Redshift](../enroll-aws-databases/postgres-redshift.mdx) database. +- Teleport cluster with a configured [Amazon + Redshift](../enrollment/aws/postgres-redshift.mdx) database. - Ability to connect to and create user accounts in the target database. @@ -50,6 +58,8 @@ Users created within the database will: (!docs/pages/includes/database-access/auto-user-provisioning/connect.mdx gui="pgAdmin"!) +(!docs/pages/includes/database-access/db-access-webui-ad.mdx dbType="PostgreSQL"!) + ## Troubleshooting ### Use your mapped remote username error @@ -62,6 +72,6 @@ Users created within the database will: ## Next steps -- Connect using your [GUI database client](../../../connect-your-client/gui-clients.mdx). -- Learn about [role templating](../../../admin-guides/access-controls/guides/role-templates.mdx). +- Connect using your [GUI database client](../../../connect-your-client/third-party/gui-clients.mdx). +- Learn about [role templating](../../../zero-trust-access/rbac-get-started/role-templates.mdx). 
- Read automatic user provisioning [RFD](https://github.com/gravitational/teleport/blob/master/rfd/0113-automatic-database-users.md). diff --git a/docs/pages/enroll-resources/database-access/auto-user-provisioning/mariadb.mdx b/docs/pages/enroll-resources/database-access/auto-user-provisioning/mariadb.mdx index 7d9e09c95ea0f..7949d247c0436 100644 --- a/docs/pages/enroll-resources/database-access/auto-user-provisioning/mariadb.mdx +++ b/docs/pages/enroll-resources/database-access/auto-user-provisioning/mariadb.mdx @@ -1,14 +1,21 @@ --- title: MariaDB Automatic User Provisioning +sidebar_label: MariaDB description: Configure automatic user provisioning for MariaDB. +tags: + - how-to + - zero-trust + - infrastructure-identity --- +{/* lint disable page-structure remark-lint */} + (!docs/pages/includes/database-access/auto-user-provisioning/intro.mdx!) ## Prerequisites -- Teleport cluster v14.1.3 or higher with a configured [self-hosted - MariaDB](../enroll-self-hosted-databases/mysql-self-hosted.mdx) or [RDS MariaDB](../enroll-aws-databases/rds.mdx) +- Teleport cluster with a configured [self-hosted + MariaDB](../enrollment/self-hosted/mysql-self-hosted.mdx) or [RDS MariaDB](../enrollment/aws/rds/mysql-postgres-mariadb.mdx) database. - Ability to connect to and create user accounts in the target database. @@ -29,8 +36,8 @@ The admin user must have privileges within the database to create users and grant them privileges. The admin user must also have privileges to monitor user processes and role assignments. -In addition, a schema is required for the admin user to log into by default. -This schema is also used to store custom user attributes and stored procedures. +In addition, a database is required for the admin user to log into by default. +This database is also used to store custom user attributes and stored procedures. 
@@ -47,9 +54,7 @@ CREATE DATABASE IF NOT EXISTS `teleport`; GRANT ALL ON `teleport`.* TO 'teleport-admin' WITH GRANT OPTION; ``` -Note that Teleport uses `teleport` as the name of the default schema but the -name is configurable in the Teleport database definition. Replace the database -name in the last two lines if you wish to use another database name. +(!docs/pages/includes/database-access/auto-user-provisioning/mysql-default-database-note.mdx!) In order for the admin user to grant a role to a database user, they must be @@ -83,9 +88,7 @@ CREATE DATABASE IF NOT EXISTS `teleport`; GRANT ALL ON `teleport`.* TO 'teleport-admin' WITH GRANT OPTION; ``` -Note that Teleport uses `teleport` as the name of the default schema but the -name is configurable in the Teleport database definition. Replace the database -name in the last two lines if you wish to use another database name. +(!docs/pages/includes/database-access/auto-user-provisioning/mysql-default-database-note.mdx!) In order for the admin user to grant a role to a database user, they must be @@ -154,6 +157,8 @@ database queries in the Teleport Audit Logs, when the Teleport username is over (!docs/pages/includes/database-access/auto-user-provisioning/connect.mdx gui="MySQL Workbench"!) +(!docs/pages/includes/database-access/db-access-webui-ad.mdx dbType="MariaDB"!) + ## Troubleshooting ### Use your mapped remote username error @@ -162,6 +167,6 @@ database queries in the Teleport Audit Logs, when the Teleport username is over ## Next steps -- Connect using your [GUI database client](../../../connect-your-client/gui-clients.mdx). -- Learn about [role templating](../../../admin-guides/access-controls/guides/role-templates.mdx). +- Connect using your [GUI database client](../../../connect-your-client/third-party/gui-clients.mdx). +- Learn about [role templating](../../../zero-trust-access/rbac-get-started/role-templates.mdx). 
- Read automatic user provisioning [RFD](https://github.com/gravitational/teleport/blob/master/rfd/0113-automatic-database-users.md). diff --git a/docs/pages/enroll-resources/database-access/auto-user-provisioning/mongodb.mdx b/docs/pages/enroll-resources/database-access/auto-user-provisioning/mongodb.mdx index 02c1f6bc839b6..8b612291733ef 100644 --- a/docs/pages/enroll-resources/database-access/auto-user-provisioning/mongodb.mdx +++ b/docs/pages/enroll-resources/database-access/auto-user-provisioning/mongodb.mdx @@ -1,15 +1,22 @@ --- title: MongoDB Automatic User Provisioning +sidebar_label: MongoDB description: Configure automatic user provisioning for MongoDB. +tags: + - how-to + - zero-trust + - infrastructure-identity --- +{/* lint disable page-structure remark-lint */} + (!docs/pages/includes/database-access/auto-user-provisioning/intro.mdx!) ## Prerequisites -- Teleport cluster v14.3 or above. +- A Teleport cluster. - A self-hosted MongoDB database enrolled with your Teleport cluster. Follow - the [Teleport documentation](../enroll-self-hosted-databases/mongodb-self-hosted.mdx) to learn how + the [Teleport documentation](../enrollment/self-hosted/mongodb-self-hosted.mdx) to learn how to enroll your database. Your MongoDB database must have Role-Based Access Control (RBAC) enabled by setting @@ -133,6 +140,6 @@ Users created within the database will: ## Next steps - Learn more about MongoDB [built-in roles](https://www.mongodb.com/docs/manual/reference/built-in-roles/) and [User-Defined Roles](https://www.mongodb.com/docs/manual/core/security-user-defined-roles/). -- Connect using your [GUI database client](../../../connect-your-client/gui-clients.mdx). -- Learn about [role templating](../../../admin-guides/access-controls/guides/role-templates.mdx). +- Connect using your [GUI database client](../../../connect-your-client/third-party/gui-clients.mdx). +- Learn about [role templating](../../../zero-trust-access/rbac-get-started/role-templates.mdx). 
- Read automatic user provisioning [RFD](https://github.com/gravitational/teleport/blob/master/rfd/0113-automatic-database-users.md). diff --git a/docs/pages/enroll-resources/database-access/auto-user-provisioning/mysql.mdx b/docs/pages/enroll-resources/database-access/auto-user-provisioning/mysql.mdx index cd357833cac52..774c5dfae6d0d 100644 --- a/docs/pages/enroll-resources/database-access/auto-user-provisioning/mysql.mdx +++ b/docs/pages/enroll-resources/database-access/auto-user-provisioning/mysql.mdx @@ -1,6 +1,11 @@ --- title: MySQL Automatic User Provisioning +sidebar_label: MySQL description: Configure automatic user provisioning for MySQL. +tags: + - how-to + - zero-trust + - infrastructure-identity --- Teleport can automatically create an account in your MySQL database when a @@ -28,8 +33,8 @@ stripping its privileges. ## Prerequisites -- Teleport cluster v14.1 or higher with a configured [self-hosted - MySQL](../enroll-self-hosted-databases/mysql-self-hosted.mdx) or [RDS MySQL](../enroll-aws-databases/rds.mdx) +- Teleport cluster with a configured [self-hosted + MySQL](../enrollment/self-hosted/mysql-self-hosted.mdx) or [RDS MySQL](../enrollment/aws/rds/mysql-postgres-mariadb.mdx) database. - Ability to connect to and create user accounts in the target database. - Automatic user provisioning is not compatible with MySQL versions lower than @@ -49,8 +54,8 @@ The admin user must have privileges within the database to create users and grant them privileges. The admin user must also have privileges to monitor user processes and role assignments. -In addition, a schema is required for the admin user to log into by default. -Stored procedures are also created and executed from this schema. +In addition, a database is required for the admin user to log into by default. +Stored procedures are also created and executed from this database. 
@@ -64,6 +69,9 @@ GRANT PROCESS, ROLE_ADMIN, CREATE USER ON *.* TO 'teleport-admin' ; CREATE DATABASE IF NOT EXISTS `teleport`; GRANT ALTER ROUTINE, CREATE ROUTINE, EXECUTE ON `teleport`.* TO 'teleport-admin' ; ``` + +(!docs/pages/includes/database-access/auto-user-provisioning/mysql-default-database-note.mdx!) + @@ -76,13 +84,16 @@ GRANT PROCESS, ROLE_ADMIN, CREATE USER ON *.* TO 'teleport-admin' ; CREATE DATABASE IF NOT EXISTS `teleport`; GRANT ALTER ROUTINE, CREATE ROUTINE, EXECUTE ON `teleport`.* TO 'teleport-admin' ; ``` + +(!docs/pages/includes/database-access/auto-user-provisioning/mysql-default-database-note.mdx!) + Users created by Teleport will be assigned the `teleport-auto-user` role in the database, which will be created automatically if it doesn't exist. -(!docs/pages/includes/database-access/auto-user-provisioning/db-definition.mdx protocol="mysql" uri="localhost:3306" !) +(!docs/pages/includes/database-access/auto-user-provisioning/db-definition-default-dbname.mdx protocol="mysql" uri="localhost:3306" !) ## Step 2/3. Configure a Teleport role @@ -122,6 +133,8 @@ database queries in the Teleport Audit Logs, when the Teleport username is over (!docs/pages/includes/database-access/auto-user-provisioning/connect.mdx gui="MySQL Workbench"!) +(!docs/pages/includes/database-access/db-access-webui-ad.mdx dbType="MySQL"!) + ## Troubleshooting ### Access denied to database error @@ -154,6 +167,6 @@ endpoints. Please use auto-user provisioning on the primary endpoints. ## Next steps -- Connect using your [GUI database client](../../../connect-your-client/gui-clients.mdx). -- Learn about [role templating](../../../admin-guides/access-controls/guides/role-templates.mdx). +- Connect using your [GUI database client](../../../connect-your-client/third-party/gui-clients.mdx). +- Learn about [role templating](../../../zero-trust-access/rbac-get-started/role-templates.mdx). 
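The MySQL hunks above grant the admin user global `PROCESS, ROLE_ADMIN, CREATE USER` privileges plus routine privileges on a dedicated `teleport` database that serves as its default login database. A sketch that assembles that bootstrap SQL for a parameterized admin user (feeding the output to the `mysql` client is an assumption; the statements themselves are taken from the guide):

```shell
ADMIN_USER='teleport-admin'
ADMIN_DB='teleport'
# Assemble the bootstrap statements shown in the guide. In a real session
# you would pipe this into the mysql client as a privileged user, e.g.:
#   printf '%s\n' "$BOOTSTRAP" | mysql -u root -p
BOOTSTRAP=$(cat <<EOF
GRANT PROCESS, ROLE_ADMIN, CREATE USER ON *.* TO '${ADMIN_USER}';
CREATE DATABASE IF NOT EXISTS \`${ADMIN_DB}\`;
GRANT ALTER ROUTINE, CREATE ROUTINE, EXECUTE ON \`${ADMIN_DB}\`.* TO '${ADMIN_USER}';
EOF
)
printf '%s\n' "$BOOTSTRAP"
```

The dedicated database matters because Teleport's stored procedures are created and executed there, per the wording change in this hunk.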
- Read automatic user provisioning [RFD](https://github.com/gravitational/teleport/blob/master/rfd/0113-automatic-database-users.md). diff --git a/docs/pages/enroll-resources/database-access/auto-user-provisioning/postgres.mdx b/docs/pages/enroll-resources/database-access/auto-user-provisioning/postgres.mdx index 46f44bf3306ca..dc2386b1a666f 100644 --- a/docs/pages/enroll-resources/database-access/auto-user-provisioning/postgres.mdx +++ b/docs/pages/enroll-resources/database-access/auto-user-provisioning/postgres.mdx @@ -1,15 +1,22 @@ --- title: PostgreSQL Automatic User Provisioning +sidebar_label: PostgreSQL description: Configure automatic user provisioning for PostgreSQL. +tags: + - how-to + - zero-trust + - infrastructure-identity --- +{/* lint disable page-structure remark-lint */} + (!docs/pages/includes/database-access/auto-user-provisioning/intro.mdx!) ## Prerequisites - Teleport cluster with a configured [self-hosted - PostgreSQL](../enroll-self-hosted-databases/postgres-self-hosted.mdx) or [RDS - PostgreSQL](../enroll-aws-databases/rds.mdx) database. To configure + PostgreSQL](../enrollment/self-hosted/postgres-self-hosted.mdx) or [RDS + PostgreSQL](../enrollment/aws/rds/mysql-postgres-mariadb.mdx) database. To configure permissions for database objects like tables, your cluster must be on version v15.2 or above. - Ability to connect to and create user accounts in the target database. @@ -77,7 +84,7 @@ hostssl all all ::/0 cert hostssl all all 0.0.0.0/0 cert ``` -Refer to the [self-hosted PostgreSQL guide](../enroll-self-hosted-databases/postgres-self-hosted.mdx#step-35-configure-your-postgresql-server) +Refer to the [self-hosted PostgreSQL guide](../enrollment/self-hosted/postgres-self-hosted.mdx#step-35-configure-your-postgresql-server) to ensure that your configuration is correct. @@ -189,6 +196,8 @@ Users created within the database will: (!docs/pages/includes/database-access/auto-user-provisioning/connect.mdx gui="pgAdmin"!) 
+(!docs/pages/includes/database-access/db-access-webui-ad.mdx dbType="PostgreSQL"!) + ## Troubleshooting ### User does not have CONNECT privilege error @@ -292,9 +301,9 @@ admin user through Teleport. ## Next steps - Connect using your [GUI database - client](../../../connect-your-client/gui-clients.mdx). + client](../../../connect-your-client/third-party/gui-clients.mdx). - Learn about [role - templating](../../../admin-guides/access-controls/guides/role-templates.mdx). + templating](../../../zero-trust-access/rbac-get-started/role-templates.mdx). - Read automatic user provisioning [RFD](https://github.com/gravitational/teleport/blob/master/rfd/0113-automatic-database-users.md). - Read database permission management [RFD](https://github.com/gravitational/teleport/blob/master/rfd/0151-database-permission-management.md). - The `internal.db_roles` traits we illustrated in this guide diff --git a/docs/pages/enroll-resources/database-access/database-access.mdx b/docs/pages/enroll-resources/database-access/database-access.mdx index ae59ffb853911..4cba98dba409d 100644 --- a/docs/pages/enroll-resources/database-access/database-access.mdx +++ b/docs/pages/enroll-resources/database-access/database-access.mdx @@ -1,6 +1,9 @@ --- title: Databases description: Teleport database access introduction, demo and resources. +tags: + - zero-trust + - infrastructure-identity --- Teleport can provide secure connections to your databases while improving both @@ -11,7 +14,7 @@ Some of the things you can do with database access: - Enable users to retrieve short-lived database certificates using a Single Sign-On flow, thus maintaining their organization-wide identity. - Configure role-based access controls for databases and implement custom - [Access Request](../../admin-guides/access-controls/access-requests/access-requests.mdx) workflows. + [Access Request](../../identity-governance/access-requests/access-requests.mdx) workflows. - Capture database activity in the Teleport audit log. 
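The PostgreSQL hunk above carries over the `pg_hba.conf` client-certificate rules (`hostssl ... cert`). A hedged sketch of appending those rules idempotently, using a scratch file in place of the real server configuration:

```shell
# Append the client-cert auth rules from the guide to a pg_hba.conf copy.
# PG_HBA points at a temporary file here; on a real server the file lives
# in the PostgreSQL data directory and the server must be reloaded after.
PG_HBA="$(mktemp)"
for rule in \
  'hostssl all all ::/0 cert' \
  'hostssl all all 0.0.0.0/0 cert'
do
  # -qxF: only append if the exact line is not already present.
  grep -qxF "$rule" "$PG_HBA" || printf '%s\n' "$rule" >> "$PG_HBA"
done
cat "$PG_HBA"
```

Running the loop twice leaves the file unchanged, which makes the snippet safe to re-run from configuration management.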
Teleport protects databases through the Teleport Database Service, which is a @@ -22,4 +25,25 @@ agent services. ![Architecture diagram for enrolling databases with Teleport](../../../img/database-access/architecture.svg) - +## Getting started + +- [Getting Started Guide](./getting-started.mdx): Getting started with Teleport database access and AWS Aurora PostgreSQL. + +## Guides + +- [Enroll AWS Databases (section)](./enrollment/aws/): Provides instructions on protecting databases in your AWS-managed infrastructure with Teleport. +- [Enroll Azure Databases (section)](./enrollment/azure/): Provides instructions on protecting databases in your Azure-managed infrastructure with Teleport. +- [Enroll Google Cloud Databases (section)](./enrollment/google-cloud/): Provides instructions on protecting databases in your Google Cloud-managed infrastructure with Teleport. +- [Enroll Cloud-Hosted Database Platforms (section)](./enrollment/managed/): Provides instructions on protecting managed databases in your infrastructure with Teleport. +- [Enroll Self-Hosted Databases (section)](./enrollment/self-hosted/): Provides instructions on protecting self-hosted databases in your infrastructure with Teleport. +- [Database Automatic User Provisioning (section)](./auto-user-provisioning/): Configure automatic user provisioning for databases. + +## Configuration & management + +- [Access Controls](./rbac.mdx): Role-based access control (RBAC) for Teleport database access. +- [Using the Teleport Database Service (section)](./guides/): Guides covering the different ways to run the Teleport Database Service. + +## Troubleshooting & support + +- [FAQ](./faq.mdx): Frequently asked questions about enrolling databases with Teleport. +- [Troubleshooting](./troubleshooting.mdx): Common issues and resolutions for enrolling databases with Teleport. 
diff --git a/docs/pages/enroll-resources/database-access/enroll-aws-databases/enroll-aws-databases.mdx b/docs/pages/enroll-resources/database-access/enroll-aws-databases/enroll-aws-databases.mdx deleted file mode 100644 index 595f4176c0114..0000000000000 --- a/docs/pages/enroll-resources/database-access/enroll-aws-databases/enroll-aws-databases.mdx +++ /dev/null @@ -1,32 +0,0 @@ ---- -title: Enroll AWS Databases -description: "Provides instructions on protecting databases in your AWS-managed infrastructure with Teleport." ---- - -The guides in this section show you how to protect AWS-managed databases with -Teleport. - -You can configure Teleport to discover databases in your AWS account and enroll -them with your cluster automatically. Read more about setting up -[Database Auto-Discovery](../../auto-discovery/databases/databases.mdx). - -It is also possible to protect databases across your AWS accounts. Read the -instructions in [AWS Cross-Account Database -Access](aws-cross-account.mdx). 
- -Read the following guides for how to protect a specific AWS-managed database -with Teleport: - -- [Amazon DocumentDB](aws-docdb.mdx) -- [Amazon DynamoDB](aws-dynamodb.mdx) -- [Amazon ElastiCache and MemoryDB for Redis](redis-aws.mdx) -- [Amazon Keyspaces (Apache Cassandra)](aws-cassandra-keyspaces.mdx) -- [Amazon OpenSearch](aws-opensearch.mdx) -- [Amazon RDS Oracle](rds-oracle.mdx) -- [Amazon RDS Proxy MySQL](rds-proxy-mysql.mdx) -- [Amazon RDS Proxy for Microsoft SQL Server](rds-proxy-sqlserver.mdx) -- [Amazon RDS Proxy for PostgreSQL](rds-proxy-postgres.mdx) -- [Amazon RDS and Aurora](rds.mdx) -- [Amazon RDS for SQL Server](sql-server-ad.mdx) -- [Amazon Redshift Serverless](redshift-serverless.mdx) -- [Amazon Redshift](postgres-redshift.mdx) diff --git a/docs/pages/enroll-resources/database-access/enroll-aws-databases/postgres-redshift.mdx b/docs/pages/enroll-resources/database-access/enroll-aws-databases/postgres-redshift.mdx deleted file mode 100644 index a69c5b3e1da90..0000000000000 --- a/docs/pages/enroll-resources/database-access/enroll-aws-databases/postgres-redshift.mdx +++ /dev/null @@ -1,233 +0,0 @@ ---- -title: Database Access with Redshift on AWS -description: How to configure Teleport database access with Amazon Redshift PostgreSQL. -videoBanner: UFhT52d5bYg ---- - -(!docs/pages/includes/database-access/db-introduction.mdx dbType="Amazon Redshift" dbConfigure="with IAM authentication"!) - -## How it works - -(!docs/pages/includes/database-access/how-it-works/iam.mdx db="Amazon Redshift" cloud="AWS"!) - - - -![Enroll Redshift with a self-hosted Teleport cluster](../../../../img/database-access/guides/redshift_selfhosted.png) - - -![Enroll Redshift with a cloud-hosted Teleport cluster](../../../../img/database-access/guides/redshift_cloud.png) - - - - -(!docs/pages/includes/database-access/auto-discovery-tip.mdx dbType="Amazon Redshift cluster" providerType="AWS"!) - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) 
- -- AWS account with a Redshift cluster and permissions to create and attach IAM - policies. -- Command-line client `psql` installed and added to your system's `PATH` environment variable. -- A host, e.g., an EC2 instance, where you will run the Teleport Database - Service. -- (!docs/pages/includes/tctl.mdx!) - -## Step 1/6. Create a Teleport user - -(!docs/pages/includes/database-access/create-user.mdx!) - -## Step 2/6. Create a Database Service configuration - -(!docs/pages/includes/tctl-token.mdx serviceName="Database" tokenType="db" tokenFile="/tmp/token"!) - -(!docs/pages/includes/database-access/alternative-methods-join.mdx!) - -(!docs/pages/includes/install-linux.mdx!) - -On the node that is running the Database Service, create a configuration file. -Assign `CLUSTER_URI` to the domain name and port of the cluster: - - - - -```code -$ sudo teleport db configure create \ - -o file \ - --name="redshift-postgres" \ - --proxy=teleport.example.com:443 \ - --protocol=postgres \ - --token=/tmp/token \ - --uri=${CLUSTER_URI?} -``` - - - - -```code -$ sudo teleport db configure create \ - -o file \ - --name="redshift-postgres" \ - --proxy=mytenant.teleport.sh:443 \ - --protocol=postgres \ - --token=/tmp/token \ - --uri=${CLUSTER_URI?} -``` - - - - - -The command will generate a Database Service configuration to proxy your AWS -Redshift cluster place it at the `/etc/teleport.yaml` location. - -## Step 3/6. Create an IAM Role for user access (optional) - -Redshift supports two methods of IAM authentication, both of which Teleport -also supports. - -First is IAM authentication as a database user. In this method, the Teleport -Database Service generates a temporary IAM authentication token for a database -user that already exists in the Redshift database. If you choose to use this -method, you can skip this step as no additional IAM roles are required. - -The second method is to authenticate as an IAM role. 
In this case, the Teleport -Database Service assumes an IAM role to authenticate with Redshift. Redshift -then maps that IAM role to a database username and creates that database user -if it doesn't already exist in the database. - -If you choose the second method, create the AWS IAM role to provide user access -to the Redshift database. Go to IAM -> Access Management -> -[Roles](https://console.aws.amazon.com/iamv2/home#/roles), and click "Create Role". - -![Create Role Step 1](../../../../img/database-access/guides/keyspaces/create-role-step1.png) - -Skip "Add permissions" for now, enter a role name, and press "Create role". - -Once the role is created, find the role, and add the following inline policy to -the IAM role: -![Create Role Step 1](../../../../img/database-access/guides/create-redshift-iam-role-policy.png) - -Or in JSON format: -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Action": "redshift:GetClusterCredentialsWithIAM", - "Resource": "arn:aws:redshift:us-east-1:123456789012:dbname:my-redshift/*" - } - ] -} -``` - -Replace `123456789012` and `my-redshift` with your AWS account ID and your -Redshift database's cluster ID. - -## Step 4/6. Configure IAM permissions for the Database Service - -(!docs/pages/includes/database-access/create-iam-role-step-description.mdx accessFor="Redshift databases" !) - -### Create an IAM role for Teleport - -(!docs/pages/includes/aws-credentials.mdx service="the Database Service"!) - -### Grant permissions - -Attach the following AWS IAM permissions to the Database Service IAM role: - -(!docs/pages/includes/database-access/reference/aws-iam/redshift/access-policy.mdx dbUserRole="example-iam-role" !) - -## Step 5/6. Start the Database Service - -(!docs/pages/includes/start-teleport.mdx service="the Database Service"!) - -The Database Service will proxy the Amazon Redshift cluster with the ID you -specified earlier. 
Keep in mind that AWS IAM changes may not propagate -immediately and can take a few minutes to come into effect. - -## Step 6/6. Connect - - - - -Once the Database Service has started and joined the cluster, log in to see the -registered databases. Replace `--proxy` with the address of your Teleport Proxy -Service. - -```code -$ tsh login --proxy=teleport.example.com --user=alice -$ tsh db ls -# Name Description Labels -# ----------- ------------------------------ -------- -# my-redshift ... -``` - - - - -Once the Database Service has started and joined the cluster, log in to see the -registered databases. Replace `--proxy` with the address of your Teleport Cloud -tenant. - -```code -$ tsh login --proxy=mytenant.teleport.sh --user=alice -$ tsh db ls -# Name Description Labels -# ----------- ------------------------------ -------- -# my-redshift ... -``` - - - - - -To retrieve credentials for a database and connect to it: - - - -```code -$ tsh db connect --db-user=alice --db-name=dev my-redshift -``` - - - Teleport does not currently use the auto-create option when generating - tokens for Redshift databases. Users must exist in the database. - - - - - -```code -$ tsh db connect --db-user=role/example-iam-role --db-name=dev my-redshift -``` - - - -(!docs/pages/includes/database-access/pg-access-webui.mdx!) - -To log out of the database and remove credentials: - -```code -$ tsh db logout my-redshift -``` - -## Troubleshooting - -(!docs/pages/includes/database-access/aws-troubleshooting.mdx!) - -(!docs/pages/includes/database-access/aws-troubleshooting-max-policy-size.mdx!) - -(!docs/pages/includes/database-access/pg-cancel-request-limitation.mdx PIDQuery="SELECT pid,starttime,duration,trim(user_name) AS user,trim(query) AS query FROM stv_recents WHERE status = 'Running';"!) - -(!docs/pages/includes/database-access/psql-ssl-syscall-error.mdx!) 
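The inline IAM policy in the removed Redshift guide scopes `redshift:GetClusterCredentialsWithIAM` to a `dbname` ARN assembled from region, account ID, and cluster ID. A sketch of composing that ARN and policy document, using the guide's placeholder values:

```shell
# Placeholder values from the guide; substitute your own region, account
# ID, and Redshift cluster ID.
AWS_REGION='us-east-1'
AWS_ACCOUNT_ID='123456789012'
CLUSTER_ID='my-redshift'
RESOURCE_ARN="arn:aws:redshift:${AWS_REGION}:${AWS_ACCOUNT_ID}:dbname:${CLUSTER_ID}/*"

# Render the policy JSON. Attaching it (e.g. via `aws iam put-role-policy`)
# is left to a real session; this sketch only composes the document.
POLICY=$(cat <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "redshift:GetClusterCredentialsWithIAM",
      "Resource": "${RESOURCE_ARN}"
    }
  ]
}
EOF
)
printf '%s\n' "$POLICY"
```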
- -## Next steps - -- Learn more about [using IAM authentication to generate database user - credentials](https://docs.aws.amazon.com/redshift/latest/mgmt/generating-user-credentials.html) for Amazon Redshift. -- Learn how to [restrict access](../rbac.mdx) to certain users and databases. -- View the [High Availability (HA)](../guides/ha.mdx) guide. -- Take a look at the YAML configuration [reference](../../../reference/agent-services/database-access-reference/configuration.mdx). - diff --git a/docs/pages/enroll-resources/database-access/enroll-aws-databases/rds-proxy-mysql.mdx b/docs/pages/enroll-resources/database-access/enroll-aws-databases/rds-proxy-mysql.mdx deleted file mode 100644 index f1c4811410d89..0000000000000 --- a/docs/pages/enroll-resources/database-access/enroll-aws-databases/rds-proxy-mysql.mdx +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: Database Access with AWS RDS Proxy for MariaDB/MySQL -description: How to configure Teleport database access with AWS RDS Proxy for MariaDB/MySQL. ---- - -(!docs/pages/includes/database-access/db-introduction.mdx dbType="Amazon RDS proxy for MariaDB or MySQL" dbConfigure="with IAM authentication"!) - -(!docs/pages/includes/database-access/rds-proxy.mdx protocol="mysql" !) diff --git a/docs/pages/enroll-resources/database-access/enroll-aws-databases/rds-proxy-postgres.mdx b/docs/pages/enroll-resources/database-access/enroll-aws-databases/rds-proxy-postgres.mdx deleted file mode 100644 index 4199e4b3c7193..0000000000000 --- a/docs/pages/enroll-resources/database-access/enroll-aws-databases/rds-proxy-postgres.mdx +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: Database Access with AWS RDS Proxy for PostgreSQL -description: How to configure Teleport database access with AWS RDS Proxy for PostgreSQL ---- - -(!docs/pages/includes/database-access/db-introduction.mdx dbType="Amazon RDS proxy for PostgreSQL" dbConfigure="with IAM authentication"!) - -(!docs/pages/includes/database-access/rds-proxy.mdx protocol="postgres"!) 
diff --git a/docs/pages/enroll-resources/database-access/enroll-aws-databases/rds-proxy-sqlserver.mdx b/docs/pages/enroll-resources/database-access/enroll-aws-databases/rds-proxy-sqlserver.mdx deleted file mode 100644 index ea03c13e01e28..0000000000000 --- a/docs/pages/enroll-resources/database-access/enroll-aws-databases/rds-proxy-sqlserver.mdx +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: Database Access with AWS RDS Proxy for Microsoft SQL Server -description: How to configure Teleport database access with AWS RDS Proxy for Microsoft SQL Server ---- - -(!docs/pages/includes/database-access/db-introduction.mdx dbType="Amazon RDS proxy for Microsoft SQL Server" dbConfigure="with IAM authentication"!) - -(!docs/pages/includes/database-access/rds-proxy.mdx protocol="sqlserver" !) diff --git a/docs/pages/enroll-resources/database-access/enroll-aws-databases/rds.mdx b/docs/pages/enroll-resources/database-access/enroll-aws-databases/rds.mdx deleted file mode 100644 index e3fa7c345c1bc..0000000000000 --- a/docs/pages/enroll-resources/database-access/enroll-aws-databases/rds.mdx +++ /dev/null @@ -1,341 +0,0 @@ ---- -title: Database Access with AWS RDS and Aurora -h1: Database Access with AWS RDS and Aurora for PostgreSQL, MySQL and MariaDB -description: How to configure Teleport database access with AWS RDS and Aurora for PostgreSQL, MySQL and MariaDB. ---- - -(!docs/pages/includes/database-access/db-introduction.mdx dbType="Amazon RDS or Aurora" dbConfigure="with IAM authentication"!) - -## How it works - -(!docs/pages/includes/database-access/how-it-works/iam.mdx db="RDS" cloud="AWS"!) - - - -![Teleport Architecture RDS Self-Hosted](../../../../img/database-access/guides/rds_selfhosted.png) - - -![Teleport Architecture RDS Cloud-Hosted](../../../../img/database-access/guides/rds_cloud.png) - - - - - - -The following products are not compatible with Teleport as they don't support -IAM authentication: - - - Aurora Serverless v1. - - RDS MariaDB versions lower than 10.6. 
- -We recommend upgrading Aurora Serverless v1 to [Aurora Serverless -v2](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.html), -which supports IAM authentication. - - - -(!docs/pages/includes/database-access/auto-discovery-tip.mdx dbType="RDS" providerType="AWS" !) - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) - -- AWS account with RDS and Aurora databases and permissions to create and attach - IAM policies. - - Your RDS and Aurora databases must have password and IAM authentication - enabled. - - If IAM authentication is not enabled on the target RDS and Aurora databases, - the Database Service will attempt to enable IAM authentication by modifying - them using respective APIs. - -- A Linux host or Amazon Elastic Kubernetes Service cluster where you will run - the Teleport Database Service, which proxies connections to your RDS - databases. -- (!docs/pages/includes/tctl.mdx!) - -If you plan to run the Teleport Database Service on Kubernetes, you will need -the following: - - - The `aws` CLI in your PATH. Install it by following the [AWS - documentation](https://aws.amazon.com/cli/). - - - An IAM OIDC provider running in your Kubernetes cluster. See the [AWS - documentation](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html) - for how to create an IAM OIDC provider. - - To check whether you have an IAM OIDC provider running in your cluster, run - the following `aws` command, assigning to the - region where your EKS cluster is running and to - the name of your Kubernetes cluster: - - ```code - $ aws --region= eks describe-cluster --name --query "cluster.identity.oidc.issuer" --output text - ``` - - If you have an IAM OIDC provider associated with your cluster, this command - will print its ID. - - - The [`jq` CLI tool](https://jqlang.github.io/jq/), which we use to process - JSON data in this guide. - -## Step 1/6. 
Create a Teleport user - -(!docs/pages/includes/database-access/create-user.mdx!) - -## Step 2/6. Create a Database Service configuration - -In this section, you will configure the Teleport Database Service. To do so, you -will: - -- Create a join token for the service to demonstrate trust with your Teleport - cluster. -- Set up your package manager so you can install and run the Database Service. -- Generate a configuration for the Database Service. - -### Create a join token - -Establish trust between the Teleport Database Service and your Teleport cluster -by creating a join token. - -Generate a join token by running the following command on your workstation: - -```code -$ tctl tokens add --type=db -``` - -The next step depends on how you plan to run the Teleport Database Service: - - - - -Save the token in a file called `/tmp/token` on the host that will run the -Database Service. - - - - -Later in this guide, you will use this join token when configuring the Teleport -Database Service. - - - - -(!docs/pages/includes/database-access/alternative-methods-join.mdx!) - -### Prepare your environment - -Next, get your environment ready to run the Teleport Database Service: - - - - -(!docs/pages/includes/install-linux.mdx!) - -Provide the following information and then generate a configuration file for the -Teleport Database Service: -- The host **and port** of your Teleport -Proxy Service or cloud-hosted Teleport Enterprise site -- The protocol of the database you want to proxy, either -`mysql` or `postgres` -- The endpoint **and port** of the database - the -cluster endpoint for Aurora or the instance endpoint for an RDS instance, e.g. 
-`myrds.us-east-1.rds.amazonaws.com:5432` - -```code -$ sudo teleport db configure create \ - -o file \ - --name=rds-example \ - --proxy= \ - --protocol= \ - --uri= \ - --labels=env=dev \ - --token=/tmp/token -``` - -The command will generate a Teleport Database Service configuration file and -place it at the `/etc/teleport.yaml` location. - - - - -(!docs/pages/includes/kubernetes-access/helm/helm-repo-add.mdx!) - - - - -## Step 3/6. Create IAM policies for Teleport - -(!docs/pages/includes/database-access/create-iam-role-step-description.mdx accessFor="RDS instances and Aurora clusters" !) - -### Create an IAM role for Teleport - -(!docs/pages/includes/aws-credentials.mdx service="the Database Service"!) - -### Grant permissions - -Attach the following AWS IAM permissions to the Database Service IAM role: - -(!docs/pages/includes/database-access/reference/aws-iam/rds/access-policy.mdx!) - -## Step 4/6. Start the Database Service - -Start the Teleport Database Service in your environment: - - - - -(!docs/pages/includes/start-teleport.mdx service="the Database Service"!) - - - - -Retrieve the join token you created earlier in this guide by running the -following command and copying a token with the `Db` type: - -```code -$ tctl tokens ls -Token Type Labels Expiry Time (UTC) --------------------------------- ---- ------ ---------------------------- -(=presets.tokens.first=) Db 14 Jun 23 21:21 UTC (20m15s) -``` - -Create a Helm values file called `values.yaml`, assigning -to the value of the join token you retrieved above, to the host **and port** of your Teleport -Proxy Service, and to the host **and port** of your -RDS database (e.g., `myrds.us-east-1.rds.amazonaws.com:5432`). Assign to your AWS account ID. 
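Filling in the placeholders of the `teleport db configure create` invocation above, here for an example Aurora PostgreSQL endpoint (the proxy address and database endpoint are illustrative values, not real hosts):

```shell
# Illustrative values only; replace with your Proxy Service address,
# database protocol, and RDS/Aurora endpoint.
PROXY='teleport.example.com:443'
PROTOCOL='postgres'
DB_URI='myrds.us-east-1.rds.amazonaws.com:5432'

# Compose the full command. It is echoed rather than executed in this
# sketch, so it can be reviewed before running with sudo.
CMD="sudo teleport db configure create -o file --name=rds-example --proxy=${PROXY} --protocol=${PROTOCOL} --uri=${DB_URI} --labels=env=dev --token=/tmp/token"
printf '%s\n' "$CMD"
```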
Set `enterprise` to false if you -are using Teleport Community Edition: - -```yaml -authToken: -proxyAddr: -roles: db -enterprise: true -databases: -- name: example - uri: "" - protocol: - static_labels: - env: dev -annotations: - serviceAccount: - eks.amazonaws.com/role-arn: arn:aws:iam:::role/teleport-rds-role -``` - -Get the version of Teleport to install. If you have automatic agent updates enabled in your cluster, query the latest Teleport version that is compatible with the updater: - -```code -$ TELEPORT_VERSION="$(curl https:///v1/webapi/automaticupgrades/channel/default/version | sed 's/v//')" -``` - -Otherwise, get the version of your Teleport cluster: - -```code -$ TELEPORT_VERSION="$(curl https:///v1/webapi/ping | jq -r '.server_version')" -``` - - -Install the Helm chart for Teleport Agent services, `teleport-kube-agent`: - -```code -$ helm -n teleport-agent install teleport-kube-agent teleport/teleport-kube-agent \ - --values values.yaml --create-namespace --version $TELEPORT_VERSION -``` - -Make sure that the Teleport Agent pod is running. You should see one -`teleport-kube-agent` pod with a single ready container: - -```code -$ kubectl -n teleport-agent get pods -NAME READY STATUS RESTARTS AGE -teleport-kube-agent-0 1/1 Running 0 32s -``` - - - - -## Step 5/6. Create a database IAM user - -Database users must allow IAM authentication in order to be used with Database -Access for RDS. See below how to enable it for the user `alice` on your database -engine. In the next step, we will authenticate to the database as the `alice` -user via the user's Teleport account. 
- - - - PostgreSQL users must have a `rds_iam` role: - - ```sql - CREATE USER alice; - GRANT rds_iam TO alice; - ``` - - - MySQL and MariaDB users must have the RDS authentication plugin enabled: - - ```sql - CREATE USER alice IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS'; - ``` - - Created user may not have access to anything by default so let's grant it - some permissions: - - ```sql - GRANT ALL ON `%`.* TO 'alice'@'%'; - ``` - - - -See [Creating a database account using IAM authentication](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.IAMDBAuth.DBAccounts.html) -for more information. - -## Step 6/6. Connect - -Once the Database Service has started and joined the cluster, log in as the -`alice` user you created earlier to see the registered databases: - -```code -$ tsh login --proxy= --user=alice -$ tsh db ls -# Name Description Labels -# ----------- ----------- -------- -# rds-example env=dev -``` - -Retrieve credentials for the database and connect to it as the `alice` user: - -```code -$ tsh db connect --db-user=alice --db-name=postgres rds-example -``` - -(!docs/pages/includes/database-access/pg-access-webui.mdx!) - - - The appropriate database command-line client (`psql`, `mysql`, `mariadb`) should be - available in `PATH` in order to be able to connect. - - -Log out of the database and remove credentials: - -```code -$ tsh db logout rds-example -``` - -## Troubleshooting - -(!docs/pages/includes/database-access/aws-troubleshooting.mdx!) - -(!docs/pages/includes/database-access/aws-troubleshooting-max-policy-size.mdx!) - -(!docs/pages/includes/database-access/pg-cancel-request-limitation.mdx!) - -(!docs/pages/includes/database-access/psql-ssl-syscall-error.mdx!) - -## Next steps - -(!docs/pages/includes/database-access/guides-next-steps.mdx!) -- Set up [automatic database user provisioning](../auto-user-provisioning/auto-user-provisioning.mdx). 
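The per-engine user statements in the removed RDS guide can be bundled into a small bootstrap snippet; this sketch covers the PostgreSQL flavor (piping the output to `psql` is an assumption, and the two statements are taken verbatim from the guide):

```shell
# Database user to enable for IAM authentication, as in the guide.
DB_USER='alice'
# In a real session: printf '%s\n' "$IAM_SQL" | psql -h <endpoint> -U <admin>
IAM_SQL=$(cat <<EOF
CREATE USER ${DB_USER};
GRANT rds_iam TO ${DB_USER};
EOF
)
printf '%s\n' "$IAM_SQL"
```

Granting `rds_iam` is what allows the Database Service to authenticate as this user with a short-lived IAM token instead of a password.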
- diff --git a/docs/pages/enroll-resources/database-access/enroll-aws-databases/redis-aws.mdx b/docs/pages/enroll-resources/database-access/enroll-aws-databases/redis-aws.mdx deleted file mode 100644 index 7a515fb40b7df..0000000000000 --- a/docs/pages/enroll-resources/database-access/enroll-aws-databases/redis-aws.mdx +++ /dev/null @@ -1,356 +0,0 @@ ---- -title: Database Access with Amazon ElastiCache and Amazon MemoryDB for Redis -description: How to configure Teleport database access with Amazon ElastiCache and Amazon MemoryDB for Redis. ---- - -(!docs/pages/includes/database-access/db-introduction.mdx dbType="Amazon ElastiCache or MemoryDB for Redis" dbConfigure="with IAM authentication"!) - -## How it works - -The Teleport Database Service proxies traffic from users to Amazon ElastiCache or -MemoryDB for Redis. Authentication between the Database Service and the -AWS-hosted Redis database can take one of two forms: - -- **IAM authentication (preferred):** The Teleport Database Service connects to - the database using a short-lived AWS IAM authentication token. AWS IAM - authentication is available for ElastiCache and MemoryDB with Redis version - 7.0 or above. -- **Managing users:** The Teleport Database Service manages users in a Redis - access control list, rotates their passwords every 15 minutes, and saves these - passwords in AWS Secrets Manager. The Database Service automatically sends an - `AUTH` command with the saved password when connecting the client to the Redis - server. - - - -![Enroll RDS with a Self-Hosted Teleport Cluster](../../../../img/database-access/guides/redis_elasticache_selfhosted.png) - - -![Enroll RDS with a Cloud-Hosted Teleport Cluster](../../../../img/database-access/guides/redis_elasticache_cloud.png) - - - - -(!docs/pages/includes/database-access/auto-discovery-tip.mdx dbType="Amazon Elasticache or MemoryDB cluster" providerType="AWS"!) - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) 
- -- AWS account with at least one ElastiCache or MemoryDB for Redis cluster. - **In-transit encryption via (TLS) must be enabled**. -- Permissions to create and attach IAM policies. -- `redis-cli` version `6.2` or newer installed and added to your system's `PATH` environment variable. -- A host, e.g., an EC2 instance, where you will run the Teleport Database - Service. -- [Redis ACL](https://redis.io/docs/manual/security/acl/) enabled for your - ElastiCache or MemoryDB for Redis cluster. -- (!docs/pages/includes/database-access/aws-auto-discovery-prerequisite.mdx!) -- (!docs/pages/includes/tctl.mdx!) - -## Step 1/6. Create a Teleport user - -(!docs/pages/includes/database-access/create-user.mdx!) - -## Step 2/6. Create a Database Service configuration - -(!docs/pages/includes/tctl-token.mdx serviceName="Database" tokenType="db" tokenFile="/tmp/token"!) - -(!docs/pages/includes/database-access/alternative-methods-join.mdx!) - -(!docs/pages/includes/install-linux.mdx!) - -Create the Database Service configuration: - - - - - Change `example.teleport.sh:443` to the host and port of your Teleport Proxy - Service. Set `ELASTICACHE_URI` to the domain name and port of your ElastiCache - database: - - ```code - $ ELASTICACHE_URI="" - $ sudo teleport db configure create \ - -o file \ - --name="elasticache" \ - --proxy=example.teleport.sh:443 \ - --protocol="redis" \ - --uri=${ELASTICACHE_URI?} \ - --token=/tmp/token - ``` - - - - Change `example.teleport.sh:443` to the host and port of your Teleport Proxy - Service. Set `MEMORYDB_URI` to the domain name and port of your ElastiCache - database: - - ```code - $ MEMORYDB_URI="" - $ sudo teleport db configure create \ - -o file \ - --name="memorydb" \ - --proxy=example.teleport.sh:443 \ - --protocol="redis" \ \ - --uri=${MEMORYDB_URI} \ - --token=/tmp/token - ``` - - - -The command will generate a Database Service configuration and place it at the -`/etc/teleport.yaml` location. - -## Step 3/6. 
Create an IAM role for Teleport - -(!docs/pages/includes/database-access/create-iam-role-step-description.mdx accessFor="ElastiCache or MemoryDB databases" !) - -### Create an IAM role for Teleport - -(!docs/pages/includes/aws-credentials.mdx service="the Database Service"!) - -### Grant permissions - -Attach the following AWS IAM permissions to the Database Service IAM role: - - - - -(!docs/pages/includes/database-access/reference/aws-iam/elasticache/access-policy.mdx!) - - - - -(!docs/pages/includes/database-access/reference/aws-iam/memorydb/access-policy.mdx!) - - - - -## Step 4/6. Start the Database Service - -(!docs/pages/includes/start-teleport.mdx service="the Database Service"!) - -## Step 5/6. Configure authentication for ElastiCache or MemoryDB users - -Configure authentication for your AWS-hosted Redis database. The steps to follow -depend on whether you want to enable the Teleport Database Service to use IAM -authentication with ElastiCache, IAM authentication with MemoryDB, or -authentication based on managing passwords via AWS Secrets Manager: - - - - -To enable Redis ACL, please see [Authenticating users with Role-Based Access -Control for -ElastiCache](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Clusters.RBAC.html). - -Some additional limitations apply when using IAM authentication - for more -information, see: -[ElastiCache Auth IAM Limits](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/auth-iam.html#auth-iam-limits). - -There are a few requirements for configuring an ElastiCache IAM-enabled user: -- the user must have identical username and user id properties. -- the user must have authentication mode set to "IAM". -- the user must be attached to an ElastiCache user group. - -Create an ElastiCache IAM-enabled user. 
-The following example creates an ElastiCache user with the access string -`on ~* +@all` that represents an active user with access to all available keys -and commands: -```code -$ aws elasticache create-user \ - --user-name iam-user-01 \ - --user-id iam-user-01 \ - --authentication-mode Type=iam \ - --engine redis \ - --access-string "on ~* +@all" -``` - - -You may prefer a less permissive access string for your ElastiCache users. -For more information about ElastiCache access strings, please see: -[ElastiCache Cluster RBAC Access String](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Clusters.RBAC.html#Access-string). - - -Create an ElastiCache user group and attach it to your ElastiCache replication -group: -```code -$ aws elasticache create-user-group \ - --user-group-id iam-user-group-01 \ - --engine redis \ - --user-ids default iam-user-01 -$ aws elasticache modify-replication-group \ - --replication-group-id replication-group-01 \ - --user-group-ids-to-add iam-user-group-01 -``` - -Once the ElastiCache user has been created, verify that the user is configured -to satisfy the requirements for IAM authentication: - -![ElastiCache IAM-enabled User](../../../../img/database-access/guides/redis/redis-aws-iam-user@2x.png) - - - - - -It is highly recommended to use a different ACL than the preset `open-access` -ACL which allows all access using the `default` user. 
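A quick audit for clusters still on the permissive preset can be sketched as follows. The sample records only loosely mimic `aws memorydb describe-clusters` output; verify the field names against the real API before relying on them:

```python
# Sketch: flag MemoryDB clusters whose attached ACL is the permissive
# preset "open-access". Sample data mimics (loosely) the shape of
# `aws memorydb describe-clusters` output -- field names are assumptions.

def clusters_using_open_access(clusters):
    """Return names of clusters still attached to the 'open-access' ACL."""
    return [c["Name"] for c in clusters if c.get("ACLName") == "open-access"]

sample = [
    {"Name": "my-memorydb", "ACLName": "open-access"},
    {"Name": "prod-cache", "ACLName": "my-acl"},
]

print(clusters_using_open_access(sample))  # ['my-memorydb']
```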
- -If you do not have another MemoryDB ACL yet, create one: -```code -$ aws memorydb create-acl --acl-name my-acl -``` - -Make sure the ACL is attached to your MemoryDB cluster: -```code -$ aws memorydb update-cluster --cluster-name my-memorydb --acl-name my-acl -``` - -Now create an MemoryDB IAM-enabled user: -```code -$ aws memorydb create-user \ - --user-name iam-user-01 \ - --authentication-mode Type=iam \ - --access-string "on ~* +@all" -``` - - -The above example creates a MemoryDB user with the access string `on ~* +@all` -that represents an active user with access to all available keys and commands. - -You may prefer a less permissive access string for your MemoryDB users. For -more information about access strings, please see: [Specifying Permissions -Using an Access -String](https://docs.aws.amazon.com/memorydb/latest/devguide/clusters.acls.html#access-string). - - -Then add this user to the ACL attached to your MemoryDB cluster: -```code -$ aws memorydb update-acl --user-names-to-add iam-user-01 --acl-name my-acl -``` - - - - -To enable Redis ACL, please see [Authenticating users with Role-Based Access -Control for -ElastiCache](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Clusters.RBAC.html) -and [Authenticating users with Access Control Lists for -MemoryDB](https://docs.aws.amazon.com/memorydb/latest/devguide/clusters.acls.html). - -Once an ElastiCache or MemoryDB user is created with the desired access, add an -AWS resource tag `teleport.dev/managed` with the value `true` to this user: - -![Managed User Tag](../../../../img/database-access/guides/redis/redis-aws-managed-user-tag.png) - -The Database Service will automatically discover this user if it is associated -with a registered database. Keep in mind that it may take the Database Service -some time (up to 20 minutes) to discover this user once the tag is added. - - - - -If you choose not to use the above options, Teleport will not automatically -authenticate with the Redis server. 
- -You can either set up a "no password" configuration for your ElastiCache or -MemoryDB user, or manually enter an `AUTH` command with the password you have -configured after a successful client connection. However, it is strongly -advised to use one of the first two options or a strong password for better -security. - -## Step 6/6. Connect - -Once the Database Service has started and joined the cluster, log in to see the -registered databases: - - - -```code -$ tsh login --proxy=teleport.example.com --user=alice -$ tsh db ls -# Name Description Labels -# --------------------------- --------------------------------------------------------- -------- -# my-cluster-mode-elasticache ... -# my-elasticache ... -# my-elasticache-reader ... -# my-memorydb ... -``` - - - - -```code -$ tsh login --proxy=mytenant.teleport.sh --user=alice -$ tsh db ls -# Name Description Labels -# --------------------------- --------------------------------------------------------- -------- -# my-cluster-mode-elasticache ... -# my-elasticache ... -# my-elasticache-reader ... -# my-memorydb ... -``` - - - - - -To retrieve credentials for a database and connect to it: - -```code -$ tsh db connect --db-user=my-database-user my-elasticache -``` - -If flag `--db-user` is not provided, Teleport logs in as the `default` user. - -Now, depending on the authentication configurations, you may need to send an -`AUTH` command to authenticate with the Redis server: - - - - The Database Service automatically authenticates Teleport-managed and - IAM-enabled users with the Redis server. No `AUTH` command is required - after successful connection. - - If you are connecting as a user that is not managed by Teleport and is not - IAM-enabled, the connection normally starts as the `default` user. 
- Now you can authenticate the database user with its password: - - ``` - AUTH my-database-user - ``` - - - - Now you can authenticate with the shared AUTH token: - - ``` - AUTH - ``` - - - - For Redis deployments without the ACL system or legacy `requirepass` - directive enabled, no `AUTH` command is required. - - - - -To log out of the database and remove credentials: - -```code -# Remove credentials for a particular database instance. -$ tsh db logout my-elasticache -# Remove credentials for all database instances. -$ tsh db logout -``` - -## Troubleshooting - -(!docs/pages/includes/database-access/aws-troubleshooting.mdx!) - -## Next steps - -(!docs/pages/includes/database-access/guides-next-steps.mdx!) - diff --git a/docs/pages/enroll-resources/database-access/enroll-azure-databases/enroll-azure-databases.mdx b/docs/pages/enroll-resources/database-access/enroll-azure-databases/enroll-azure-databases.mdx deleted file mode 100644 index 13afa45e52f94..0000000000000 --- a/docs/pages/enroll-resources/database-access/enroll-azure-databases/enroll-azure-databases.mdx +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Enroll Azure Databases -description: "Provides instructions on protecting databases in your Azure-managed infrastructure with Teleport." ---- - -You can protect Azure-managed databases with Teleport. 
Learn how to enroll the -following databases: - -- [Azure SQL Server](azure-sql-server-ad.mdx) -- [Azure Database for PostgreSQL or MySQL Server](azure-postgres-mysql.mdx) -- [Azure Redis](azure-redis.mdx) diff --git a/docs/pages/enroll-resources/database-access/enroll-google-cloud-databases/enroll-google-cloud-databases.mdx b/docs/pages/enroll-resources/database-access/enroll-google-cloud-databases/enroll-google-cloud-databases.mdx deleted file mode 100644 index f46814237b8dc..0000000000000 --- a/docs/pages/enroll-resources/database-access/enroll-google-cloud-databases/enroll-google-cloud-databases.mdx +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Enroll Google Cloud Databases -description: "Provides instructions on protecting databases in your Google Cloud-managed infrastructure with Teleport." ---- - -You can protect databases hosted on Google Cloud with Teleport. Read the -following guides for instructions on enrolling a specific database: - -- [PostgreSQL on Google Cloud SQL](postgres-cloudsql.mdx) -- [MySQL on Google Cloud SQL](mysql-cloudsql.mdx) -- [Cloud Spanner](spanner.mdx) diff --git a/docs/pages/enroll-resources/database-access/enroll-managed-databases/enroll-managed-databases.mdx b/docs/pages/enroll-resources/database-access/enroll-managed-databases/enroll-managed-databases.mdx deleted file mode 100644 index 0987c878f5504..0000000000000 --- a/docs/pages/enroll-resources/database-access/enroll-managed-databases/enroll-managed-databases.mdx +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Enroll Cloud-Hosted Database Platforms -description: "Provides instructions on protecting managed databases in your infrastructure with Teleport." ---- - -Teleport can protect databases that are managed as a dedicated cloud platform. 
-Learn how to enroll the following databases in your Teleport cluster: - -- [MongoDB Atlas](mongodb-atlas.mdx) -- [Oracle Exadata](oracle-exadata.mdx) -- [Snowflake](snowflake.mdx) diff --git a/docs/pages/enroll-resources/database-access/enroll-managed-databases/snowflake.mdx b/docs/pages/enroll-resources/database-access/enroll-managed-databases/snowflake.mdx deleted file mode 100644 index 6d760138869b5..0000000000000 --- a/docs/pages/enroll-resources/database-access/enroll-managed-databases/snowflake.mdx +++ /dev/null @@ -1,149 +0,0 @@ ---- -title: Database Access with Snowflake -description: How to configure Teleport database access with Snowflake. ---- - -(!docs/pages/includes/database-access/db-introduction.mdx dbType="Snowflake" dbConfigure="with key pair authentication"!) - -## How it works - -The Teleport Database Service communicates with Snowflake using HTTP messages -that contain JSON web tokens signed by the Teleport certificate authority for -database clients. Snowflake is configured to trust the Teleport database client -CA. When a user connects to Snowflake via Teleport, the Database Service -forwards the user's requests to Snowflake as Teleport-authenticated messages. - - - - ![Enroll Snowflake with a self-hosted Teleport cluster](../../../../img/database-access/guides/snowflake_selfhosted.png) - - - ![Enroll Snowflake with a cloud-hosted Teleport cluster](../../../../img/database-access/guides/snowflake_cloud.png) - - - - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) - -- Snowflake account with `SECURITYADMIN` role or higher. - -- `snowsql` installed and added to your system's `PATH` environment variable. - -- A host where you will run the Teleport Database Service. - - See [Installation](../../../installation.mdx) for details. - -- (!docs/pages/includes/tctl.mdx!) - -## Step 1/5. Set up the Teleport Database Service - -(!docs/pages/includes/tctl-token.mdx serviceName="Database" tokenType="db" tokenFile="/tmp/token"!) 
- -Install and configure Teleport where you will run the Teleport Database Service: - - - - -(!docs/pages/includes/install-linux.mdx!) - -(!docs/pages/includes/database-access/db-configure-start.mdx dbName="example-snowflake" dbProtocol="snowflake" databaseAddress="abc12345.snowflakecomputing.com" !) - - - - Teleport provides Helm charts for installing the Teleport Database Service in Kubernetes Clusters. - - (!docs/pages/includes/kubernetes-access/helm/helm-repo-add.mdx!) - - (!docs/pages/includes/database-access/db-helm-install.mdx dbName="example-snowflake" dbProtocol="snowflake" databaseAddress="abc12345.snowflakecomputing.com" !) - - - -(!docs/pages/includes/database-access/multiple-instances-tip.mdx !) - -## Step 2/5. Create a Teleport user - -(!docs/pages/includes/database-access/create-user.mdx!) - -## Step 3/5. Export a public key - -Use the `tctl auth sign` command below to export a public key for your Snowflake user: - -```code -$ tctl auth sign --format=snowflake --out=server -``` - -The command will create a `server.pub` file with Teleport's public key. Teleport will use the corresponding private key to -generate a JWT (JSON Web Token) that will be used to authenticate to Snowflake. - - -## Step 4/5. Add the public key to your Snowflake user - -Use the public key you generated earlier to enable key pair authentication. - -Log in to your Snowflake instance and execute the SQL statement below: - -```sql -alter user alice set rsa_public_key='MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAv3dHYw4LJCcZzdbhb3hV...LwIDAQAB'; -``` - -In this statement, `alice` is the name of the Snowflake user and the `rsa_public_key` is the key generated earlier without -the PEM header/footer (first and the last line). 
- -You can use the `describe user` command to verify the user's public key: - -```sql -desc user alice; -``` - -See the [Snowflake documentation](https://docs.snowflake.com/en/user-guide/key-pair-auth.html#step-4-assign-the-public-key-to-a-snowflake-user) -for more details. - -## Step 5/5. Connect - -Log in to your Teleport cluster and see the available databases: - - - - ```code - $ tsh login --proxy=teleport.example.com --user=alice - $ tsh db ls - # Name Description Labels - # ----------------- ------------------- -------- - # example-snowflake Example Snowflake ❄ env=dev - ``` - - - ```code - $ tsh login --proxy=mytenant.teleport.sh --user=alice - $ tsh db ls - # Name Description Labels - # ----------------- ------------------- -------- - # example-snowflake Example Snowflake ❄ env=dev - ``` - - - -To retrieve credentials for a database and connect to it: - -```code -$ tsh db connect --db-user=alice --db-name=SNOWFLAKE_SAMPLE_DATA example-snowflake -``` - -The `snowsql` command-line client should be available in the system `PATH` in order to be -able to connect. - -To log out of the database and remove credentials: - -```code -# Remove credentials for a particular database instance. -$ tsh db logout example-snowflake -# Remove credentials for all database instances. -$ tsh db logout -``` - -## Next steps - -(!docs/pages/includes/database-access/guides-next-steps.mdx!) - diff --git a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/enroll-self-hosted-databases.mdx b/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/enroll-self-hosted-databases.mdx deleted file mode 100644 index bdbecd8478fc4..0000000000000 --- a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/enroll-self-hosted-databases.mdx +++ /dev/null @@ -1,22 +0,0 @@ ---- -title: Enroll Self-Hosted Databases -description: "Provides instructions on protecting self-hosted databases in your infrastructure with Teleport." 
---- - -You can protect self-hosted databases with Teleport. Learn how to enroll your -database in your Teleport cluster with the following guides: - -- [Cassandra and - ScyllaDB](cassandra-self-hosted.mdx) -- [ClickHouse](clickhouse-self-hosted.mdx) -- [CockroachDB](cockroachdb-self-hosted.mdx) -- [Elastic](elastic.mdx) -- [MongoDB](mongodb-self-hosted.mdx) -- [MySQL](mysql-self-hosted.mdx) -- [Oracle](oracle-self-hosted.mdx) -- [PostgreSQL](postgres-self-hosted.mdx) -- [Redis Cluster](redis-cluster.mdx) -- [Redis](redis.mdx) -- [SQL Server with PKINIT - authentication](sql-server-ad-pkinit.mdx) -- [Vitess](vitess.mdx) diff --git a/docs/pages/enroll-resources/database-access/enroll-aws-databases/aws-cassandra-keyspaces.mdx b/docs/pages/enroll-resources/database-access/enrollment/aws/aws-cassandra-keyspaces.mdx similarity index 91% rename from docs/pages/enroll-resources/database-access/enroll-aws-databases/aws-cassandra-keyspaces.mdx rename to docs/pages/enroll-resources/database-access/enrollment/aws/aws-cassandra-keyspaces.mdx index 312f211c2d1ae..ca6164a1252e0 100644 --- a/docs/pages/enroll-resources/database-access/enroll-aws-databases/aws-cassandra-keyspaces.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/aws/aws-cassandra-keyspaces.mdx @@ -1,6 +1,12 @@ --- title: Database Access with Amazon Keyspaces (Apache Cassandra) +sidebar_label: Keyspaces (Apache Cassandra) description: How to configure Teleport database access with Amazon Keyspaces (Apache Cassandra) +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws --- (!docs/pages/includes/database-access/db-introduction.mdx dbType="Amazon Keyspaces (Apache Cassandra)" dbConfigure="with IAM authentication"!) 
@@ -11,10 +17,10 @@ description: How to configure Teleport database access with Amazon Keyspaces (Ap -![Enroll Redis with a Self-Hosted Teleport Cluster](../../../../img/database-access/guides/cassandra_keyspaces_selfhosted.png) +![Enroll Redis with a Self-Hosted Teleport Cluster](../../../../../img/database-access/guides/cassandra_keyspaces_selfhosted.png) -![Enroll Redis with a Cloud-Hosted Teleport Cluster](../../../../img/database-access/guides/cassandra_keyspaces_cloud.png) +![Enroll Redis with a Cloud-Hosted Teleport Cluster](../../../../../img/database-access/guides/cassandra_keyspaces_cloud.png) @@ -90,7 +96,7 @@ Create an AWS IAM Role that will be used as your Keyspaces user. Go to the IAM -> Access Management -> [Roles](https://console.aws.amazon.com/iamv2/home#/roles). Press Create Role. -![Create Role Step 1](../../../../img/database-access/guides/keyspaces/create-role-step1.png) +![Create Role Step 1](../../../../../img/database-access/guides/keyspaces/create-role-step1.png) AWS provides the `AmazonKeyspacesReadOnlyAccess` and `AmazonKeyspacesFullAccess` IAM policies that you can incorporate into your Keyspaces user's role. You can choose `AmazonKeyspacesReadOnlyAccess` for read-only access to Amazon Keyspaces or `AmazonKeyspacesFullAccess` for full access. @@ -101,9 +107,9 @@ You can choose `AmazonKeyspacesReadOnlyAccess` for read-only access to Amazon Ke You can also create your own custom Amazon Keyspaces Permissions Policies: [Amazon Keyspaces identity-based policy examples](https://docs.aws.amazon.com/keyspaces/latest/devguide/security_iam_id-based-policy-examples.html). -![Create Role Step 1](../../../../img/database-access/guides/keyspaces/create-role-step2.png) +![Create Role Step 1](../../../../../img/database-access/guides/keyspaces/create-role-step2.png) Enter a role name and press "Create role". 
-![Create Role Step 1](../../../../img/database-access/guides/keyspaces/create-role-step3.png) +![Create Role Step 1](../../../../../img/database-access/guides/keyspaces/create-role-step3.png) ## Step 4/5. Give Teleport permissions to assume roles diff --git a/docs/pages/enroll-resources/database-access/enroll-aws-databases/aws-cross-account.mdx b/docs/pages/enroll-resources/database-access/enrollment/aws/aws-cross-account.mdx similarity index 95% rename from docs/pages/enroll-resources/database-access/enroll-aws-databases/aws-cross-account.mdx rename to docs/pages/enroll-resources/database-access/enrollment/aws/aws-cross-account.mdx index 8675ba1f76589..fa72de4e556bc 100644 --- a/docs/pages/enroll-resources/database-access/enroll-aws-databases/aws-cross-account.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/aws/aws-cross-account.mdx @@ -1,6 +1,13 @@ --- title: AWS Cross-Account Database Access +sidebar_label: Cross-Account +sidebar_position: 1 description: How to connect AWS databases in external AWS accounts to Teleport. +tags: + - conceptual + - zero-trust + - infrastructure-identity + - aws --- You can deploy the Teleport Database Service with AWS IAM credentials in one @@ -31,7 +38,7 @@ Teleport Database Service to connect to the databases. This guide does not cover AWS network configuration, because it depends on your specific AWS network setup and the kind(s) of AWS databases you wish to connect to Teleport. For more information, see [how to connect your -database](enroll-aws-databases.mdx). +database](aws.mdx).
## Teleport configuration @@ -60,7 +67,7 @@ Modify your Teleport Database Service configuration file to assume an external AWS IAM role when it is discovering AWS databases. ```yaml -# This example configuration will discover AWS RDS databases in us-west-1 +# This example configuration will discover Amazon RDS databases in us-west-1 # within AWS account `222222222222` by assuming the external AWS IAM role # "example-role". db_service: @@ -137,7 +144,7 @@ Save the configuration to a file like `database.yaml` and create it with `tctl`: $ tctl create database.yaml ``` For more information about database registration using dynamic database -resources, see: [Dynamic Registration](../guides/dynamic-registration.mdx). +resources, see: [Dynamic Registration](../../guides/dynamic-registration.mdx). @@ -227,4 +234,4 @@ role, then the trust policy might look like: ## Next steps -- Get started by [connecting](../guides/guides.mdx) your database. +- Get started by [connecting](../../guides/guides.mdx) your database. diff --git a/docs/pages/enroll-resources/database-access/enroll-aws-databases/aws-docdb.mdx b/docs/pages/enroll-resources/database-access/enrollment/aws/aws-docdb.mdx similarity index 92% rename from docs/pages/enroll-resources/database-access/enroll-aws-databases/aws-docdb.mdx rename to docs/pages/enroll-resources/database-access/enrollment/aws/aws-docdb.mdx index 2c921395bb4c1..1953bdd94391c 100644 --- a/docs/pages/enroll-resources/database-access/enroll-aws-databases/aws-docdb.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/aws/aws-docdb.mdx @@ -1,6 +1,12 @@ --- title: Database Access with Amazon DocumentDB +sidebar_label: DocumentDB description: How to access Amazon DocumentDB with Teleport database access +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws --- (!docs/pages/includes/database-access/db-introduction.mdx dbType="Amazon DocumentDB" dbConfigure="with IAM authentication"!) 
@@ -11,10 +17,10 @@ description: How to access Amazon DocumentDB with Teleport database access -![Teleport Architecture DocumentDB Access Self-Hosted](../../../../img/database-access/guides/docdb_selfhosted.svg) +![Teleport Architecture DocumentDB Access Self-Hosted](../../../../../img/database-access/guides/docdb_selfhosted.svg) -![Teleport Architecture DocumentDB Cloud](../../../../img/database-access/guides/docdb_cloud.svg) +![Teleport Architecture DocumentDB Cloud](../../../../../img/database-access/guides/docdb_cloud.svg) @@ -113,12 +119,12 @@ page](https://console.aws.amazon.com/iamv2/home#/roles) of the AWS Console, then press "Create Role". Under **Trusted entity type** select "AWS service". Under **Use case** select "EC2" or the intended use case, then click **Next**. -![Create Role Step 1](../../../../img/database-access/guides/dynamodb-create-ec2-role.png) +![Create Role Step 1](../../../../../img/database-access/guides/dynamodb-create-ec2-role.png) On the "Add Permissions" page, find and select the `TeleportDatabaseAccessDocumentDB` policy that is created in the previous step. -![Create Role Step 2](../../../../img/database-access/guides/docdb-create-role-select-policy.png) +![Create Role Step 2](../../../../../img/database-access/guides/docdb-create-role-select-policy.png) Click "Next" and give the role a name. In this guide, we will use the example name `TeleportDatabaseService` for this role. Once you have chosen a name, @@ -143,7 +149,7 @@ Navigate back to the Roles page on the AWS Web Console and create a new role. Select the "AWS account" option, which creates a default trust policy to allow other entities in this account to assume this role: -![Create Role Step 1](../../../../img/database-access/guides/dynamodb-create-role-1.png) +![Create Role Step 1](../../../../../img/database-access/guides/dynamodb-create-role-1.png) Skip the "Add Permissions" page by clicking "Next", and give the role a name. 
In this guide, we will use the example "teleport-docdb-user" for this role. @@ -151,7 +157,7 @@ In this guide, we will use the example "teleport-docdb-user" for this role. Now click **Add new tag** at Step 3, use `TeleportDatabaseService` for the key and `Allowed` for the value. Then click **Create Role** to complete the process. -![Create Role Step 3](../../../../img/database-access/guides/aws-create-role-add-tags.png) +![Create Role Step 3](../../../../../img/database-access/guides/aws-create-role-add-tags.png) ### Create a DocumentDB user diff --git a/docs/pages/enroll-resources/database-access/enroll-aws-databases/aws-dynamodb.mdx b/docs/pages/enroll-resources/database-access/enrollment/aws/aws-dynamodb.mdx similarity index 91% rename from docs/pages/enroll-resources/database-access/enroll-aws-databases/aws-dynamodb.mdx rename to docs/pages/enroll-resources/database-access/enrollment/aws/aws-dynamodb.mdx index 7d46c867a6d4d..57529284a2cf5 100644 --- a/docs/pages/enroll-resources/database-access/enroll-aws-databases/aws-dynamodb.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/aws/aws-dynamodb.mdx @@ -1,6 +1,12 @@ --- title: Database Access with Amazon DynamoDB +sidebar_label: DynamoDB description: How to access Amazon DynamoDB with Teleport database access +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws --- (!docs/pages/includes/database-access/db-introduction.mdx dbType="Amazon DynamoDB" dbConfigure="with IAM authentication"!) @@ -16,10 +22,10 @@ request with credentials from AWS, then forwards it to the DynamoDB API. 
-![DynamoDB Self-Hosted](../../../../img/database-access/guides/aws-dynamodb_selfhosted.png) +![DynamoDB Self-Hosted](../../../../../img/database-access/guides/aws-dynamodb_selfhosted.png) -![DynamoDB Cloud](../../../../img/database-access/guides/aws-dynamodb_cloud.png) +![DynamoDB Cloud](../../../../../img/database-access/guides/aws-dynamodb_cloud.png) @@ -56,7 +62,7 @@ Visit the [IAM > Roles page](https://console.aws.amazon.com/iamv2/home#/roles) o the AWS Console, then press "Create Role". Under **Trusted entity type** select "AWS service". Under **Use case** select "EC2", then click **Next**. -![Create Role to Identify EC2 Instance](../../../../img/database-access/guides/dynamodb-create-ec2-role.png) +![Create Role to Identify EC2 Instance](../../../../../img/database-access/guides/dynamodb-create-ec2-role.png) On the "Add Permissions" page, you can simply click **Next** since this role does not require any permissions. In this guide, we will use the example name `TeleportDatabaseService` for this role. Once you have chosen a name, click **Create Role** to complete the process. @@ -66,11 +72,11 @@ Navigate back to the Roles page and create a new role. Select the "AWS account" option, which creates a default trust policy to allow other entities in this account to assume this role: -![Create Role Step 1](../../../../img/database-access/guides/dynamodb-create-role-1.png) +![Create Role Step 1](../../../../../img/database-access/guides/dynamodb-create-role-1.png) Click **Next**. Find the AWS-managed policy `AmazonDynamoDBFullAccess` and then select the policy: -![Create Role Step 2](../../../../img/database-access/guides/dynamodb-create-role-2.png) +![Create Role Step 2](../../../../../img/database-access/guides/dynamodb-create-role-2.png) The `AmazonDynamoDBFullAccess` policy may grant more permissions than desired. 
@@ -219,7 +225,7 @@ $ aws dynamodb list-tables --endpoint-url=http://localhost:8000 ``` {/* vale messaging.protocol-products = NO */} -You can also connect to this database from the AWS NoSQL Workbench, as documented in our [Database Access GUI Clients](../../../connect-your-client/gui-clients.mdx#nosql-workbench) guide. +You can also connect to this database from the AWS NoSQL Workbench, as documented in our [Database Access GUI Clients](../../../../connect-your-client/third-party/gui-clients.mdx#nosql-workbench) guide. {/* vale messaging.protocol-products = YES */} You can also use this tunnel for programmatic access. The example below uses the `boto3` SDK from AWS: @@ -238,7 +244,7 @@ Type "help", "copyright", "credits" or "license" for more information. ## Next Steps -- See [Dynamic Database Registration](../guides/dynamic-registration.mdx) to +- See [Dynamic Database Registration](../../guides/dynamic-registration.mdx) to learn how to use resource labels to keep Teleport up to date with accessible databases in your infrastructure. diff --git a/docs/pages/enroll-resources/database-access/enrollment/aws/aws-memorydb.mdx b/docs/pages/enroll-resources/database-access/enrollment/aws/aws-memorydb.mdx new file mode 100644 index 0000000000000..41e2d5f41800a --- /dev/null +++ b/docs/pages/enroll-resources/database-access/enrollment/aws/aws-memorydb.mdx @@ -0,0 +1,212 @@ +--- +title: Database Access with Amazon MemoryDB +sidebar_label: MemoryDB +description: How to configure Teleport database access with Amazon MemoryDB +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws +--- + +(!docs/pages/includes/database-access/db-introduction.mdx dbType="Amazon MemoryDB for Redis and Valkey" dbConfigure="with IAM authentication"!) + +## How it works + +(!docs/pages/includes/database-access/aws-redis-how-it-works.mdx dbType="Amazon MemoryDB"!) 
+ + +![Enroll MemoryDB with a Self-Hosted Teleport Cluster](../../../../../img/database-access/guides/aws_memorydb_selfhosted.png) + +![Enroll MemoryDB with a Cloud-Hosted Teleport Cluster](../../../../../img/database-access/guides/aws_memorydb_cloud.png) + + + +(!docs/pages/includes/database-access/auto-discovery-tip.mdx dbType="Amazon MemoryDB" providerType="AWS"!) + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx!) + +- AWS account with a MemoryDB for Redis or Valkey cluster. + **In-transit encryption (TLS) must be enabled**. +- Permissions to create and attach IAM policies. +- `redis-cli` version `6.2` or newer installed and added to your system's `PATH` environment variable. +- A host, e.g., an EC2 instance, where you will run the Teleport Database + Service. +- [ACL](https://docs.aws.amazon.com/memorydb/latest/devguide/clusters.acls.html) enabled for your + MemoryDB cluster. +- (!docs/pages/includes/database-access/aws-auto-discovery-prerequisite.mdx!) +- (!docs/pages/includes/tctl.mdx!) + +## Step 1/6. Create a Teleport user + +(!docs/pages/includes/database-access/create-user.mdx!) + +## Step 2/6. Create a Database Service configuration + +(!docs/pages/includes/tctl-token.mdx serviceName="Database" tokenType="db" tokenFile="/tmp/token"!) + +(!docs/pages/includes/database-access/alternative-methods-join.mdx!) + +(!docs/pages/includes/install-linux.mdx!) + +Create the Database Service configuration: +```code +$ MEMORYDB_URI="" +$ sudo teleport db configure create \ + -o file \ + --name="memorydb" \ + --proxy=example.teleport.sh:443 \ + --protocol="redis" \ + --uri=${MEMORYDB_URI} \ + --token=/tmp/token +``` + +Change `example.teleport.sh:443` to the host and port of your Teleport Proxy +Service. Set `MEMORYDB_URI` to the domain name and port of your MemoryDB +database. + +The command will generate a Database Service configuration and place it at the +`/etc/teleport.yaml` location. + +## Step 3/6.
Create an IAM role for Teleport
+
+(!docs/pages/includes/database-access/create-iam-role-step-description.mdx accessFor="MemoryDB databases" !)
+
+### Create an IAM role for Teleport
+
+(!docs/pages/includes/aws-credentials.mdx service="the Database Service"!)
+
+### Grant permissions
+
+(!docs/pages/includes/database-access/reference/aws-iam/memorydb/access-policy.mdx!)
+
+## Step 4/6. Start the Database Service
+
+(!docs/pages/includes/start-teleport.mdx service="the Database Service"!)
+
+## Step 5/6. Configure authentication for MemoryDB users
+
+Configure authentication for your MemoryDB database. The steps to follow
+depend on whether you want to enable the Teleport Database Service to use IAM
+authentication with MemoryDB, or authentication based on managing passwords
+via AWS Secrets Manager:
+
+
+
+It is highly recommended to use a different ACL than the preset `open-access`
+ACL, which allows all access using the `default` user.
+
+If you do not have another MemoryDB ACL yet, create one:
+```code
+$ aws memorydb create-acl --acl-name my-acl
+```
+
+Make sure the ACL is attached to your MemoryDB cluster:
+```code
+$ aws memorydb update-cluster --cluster-name my-memorydb --acl-name my-acl
+```
+
+Now create a MemoryDB IAM-enabled user:
+```code
+$ aws memorydb create-user \
+  --user-name iam-user-01 \
+  --authentication-mode Type=iam \
+  --access-string "on ~* +@all"
+```
+
+
+The above example creates a MemoryDB user with the access string `on ~* +@all`
+that represents an active user with access to all available keys and commands.
+
+You may prefer a less permissive access string for your MemoryDB users. For
+more information about access strings, please see: [Specifying Permissions
+Using an Access
+String](https://docs.aws.amazon.com/memorydb/latest/devguide/clusters.acls.html#access-string). 
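As an aside, access strings follow the Redis ACL grammar: an `on`/`off` flag, key patterns introduced by `~` (or `%`), and command or category rules introduced by `+` or `-`. The small Python sketch below (illustrative only; not part of Teleport or the AWS CLI) shows how such a string decomposes:

```python
def parse_access_string(access_string):
    """Decompose a Redis-style ACL access string (illustrative sketch)."""
    state, key_patterns, command_rules = None, [], []
    for token in access_string.split():
        if token in ("on", "off"):
            state = token  # whether the user is enabled
        elif token.startswith(("~", "%")):
            key_patterns.append(token)  # key patterns, e.g. ~* or ~app:*
        elif token.startswith(("+", "-")):
            command_rules.append(token)  # command/category rules, e.g. +@all
    return {"enabled": state == "on", "keys": key_patterns, "commands": command_rules}

print(parse_access_string("on ~* +@all"))
# -> {'enabled': True, 'keys': ['~*'], 'commands': ['+@all']}
```

For example, `on ~app:* +@read` describes an enabled user limited to keys under `app:` and read-only commands.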
+
+
+Then add this user to the ACL attached to your MemoryDB cluster:
+```code
+$ aws memorydb update-acl --user-names-to-add iam-user-01 --acl-name my-acl
+```
+
+
+
+
+To enable ACLs, please see [Authenticating users with Access Control Lists for
+MemoryDB](https://docs.aws.amazon.com/memorydb/latest/devguide/clusters.acls.html).
+
+Once a MemoryDB user is created with the desired access, add an AWS resource
+tag `teleport.dev/managed` with the value `true` to this user:
+
+![Managed User Tag](../../../../../img/database-access/guides/redis/redis-aws-managed-user-tag.png)
+
+The Database Service will automatically discover this user if it is associated
+with a registered database. Keep in mind that it may take the Database Service
+some time (up to 20 minutes) to discover this user once the tag is added.
+
+
+
+
+(!docs/pages/includes/database-access/aws-redis-no-auth.mdx dbType="MemoryDB"!)
+
+## Step 6/6. Connect
+
+Once the Database Service has started and joined the cluster, log in to see the
+registered databases:
+
+
+
+```code
+$ tsh login --proxy=teleport.example.com --user=alice
+$ tsh db ls
+# Name        Description Labels
+# ----------- ----------- --------
+# my-memorydb             ...
+```
+
+
+
+```code
+$ tsh login --proxy=mytenant.teleport.sh --user=alice
+$ tsh db ls
+# Name        Description Labels
+# ----------- ----------- --------
+# my-memorydb             ...
+```
+
+
+
+
+To retrieve credentials for a database and connect to it:
+
+```code
+$ tsh db connect --db-user=my-database-user my-memorydb
+```
+
+(!docs/pages/includes/database-access/aws-redis-tsh-db-user-auth.mdx!)
+
+To log out of the database and remove credentials:
+
+```code
+# Remove credentials for a particular database instance.
+$ tsh db logout my-memorydb
+# Remove credentials for all database instances.
+$ tsh db logout
+```
+
+## Troubleshooting
+
+(!docs/pages/includes/database-access/aws-troubleshooting.mdx!)
+
+## Next steps
+
+(!docs/pages/includes/database-access/guides-next-steps.mdx!)
+ diff --git a/docs/pages/enroll-resources/database-access/enroll-aws-databases/aws-opensearch.mdx b/docs/pages/enroll-resources/database-access/enrollment/aws/aws-opensearch.mdx similarity index 90% rename from docs/pages/enroll-resources/database-access/enroll-aws-databases/aws-opensearch.mdx rename to docs/pages/enroll-resources/database-access/enrollment/aws/aws-opensearch.mdx index 54afd5e0efd52..b5e6062363074 100644 --- a/docs/pages/enroll-resources/database-access/enroll-aws-databases/aws-opensearch.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/aws/aws-opensearch.mdx @@ -1,6 +1,12 @@ --- title: Database Access with Amazon OpenSearch +sidebar_label: OpenSearch description: How to access Amazon OpenSearch with Teleport database access +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws --- (!docs/pages/includes/database-access/db-introduction.mdx dbType="Amazon OpenSearch" dbConfigure="via REST API with IAM Authentication"!) @@ -17,10 +23,10 @@ requests with AWS credentials, and forwards them to the OpenSearch API. -![OpenSearch Self-Hosted](../../../../img/database-access/guides/aws-opensearch/opensearch_selfhosted.png) +![OpenSearch Self-Hosted](../../../../../img/database-access/guides/aws-opensearch/opensearch_selfhosted.png) -![OpenSearch Cloud](../../../../img/database-access/guides/aws-opensearch/opensearch_cloud.png) +![OpenSearch Cloud](../../../../../img/database-access/guides/aws-opensearch/opensearch_cloud.png) @@ -48,6 +54,13 @@ and uses an EC2 instance to serve the Teleport Database Service. The level of access provided may not suit your needs, or may not fit your organization's access conventions. You should adjust the AWS IAM permissions to fit your needs. + +To access the OpenSearch Dashboard deployed within private VPC subnets using +Teleport, you can enroll the Dashboard as a [Web +application](../../../application-access/protect-apps/connecting-apps.mdx) +in Teleport. + + ## Step 1/4. 
Create IAM roles for OpenSearch Managed Cluster access The setup described in this guide requires two IAM roles: @@ -62,7 +75,7 @@ Visit the [IAM > Roles page](https://console.aws.amazon.com/iamv2/home#/roles) o the AWS Console, then press "Create Role". Under **Trusted entity type** select "AWS service". Under **Use case** select "EC2", then click **Next**. -![Create Role to Identify EC2 Instance](../../../../img/database-access/guides/aws-opensearch/create-ec2-role.png) +![Create Role to Identify EC2 Instance](../../../../../img/database-access/guides/aws-opensearch/create-ec2-role.png) On the "Add Permissions" page, you can simply click **Next** since this role does not require any permissions. In this guide, we will use the example name @@ -75,7 +88,7 @@ Navigate back to the Roles page and create a new role. Select the "AWS account" option, which creates a default trust policy to allow other entities in this account to assume this role: -![Create Role Step 1](../../../../img/database-access/guides/aws-opensearch/create-role-1.png) +![Create Role Step 1](../../../../../img/database-access/guides/aws-opensearch/create-role-1.png) Click **Next**. On the next page, enter a role name. In this guide we'll use the example name `ExampleTeleportOpenSearchRole` for this role. @@ -112,18 +125,18 @@ where the IAM role or user is mapped to the OpenSearch role. In order to configure Role Mapping log into OpenSearch Domain Dashboard using the master user and go to the `Security` settings: -![Select Get Started](../../../../img/database-access/guides/aws-opensearch/01-opensearch_get_started.png) +![Select Get Started](../../../../../img/database-access/guides/aws-opensearch/01-opensearch_get_started.png) Create a new role with least privilege permissions, or select an existing one. For the purpose of this example the `readall` OpenSearch role will be used. 
Select the OpenSearch role and go to the `Mapped users` tab: -![Mapped User](../../../../img/database-access/guides/aws-opensearch/02-opensearch_mapped_users.png) +![Mapped User](../../../../../img/database-access/guides/aws-opensearch/02-opensearch_mapped_users.png) Add mapping between the OpenSearch role and AWS IAM `ExampleTeleportOpenSearchRole` role created in the previous step. -![IAM Role mapping](../../../../img/database-access/guides/aws-opensearch/03-opensearch_iam_role_mapping.png) +![IAM Role mapping](../../../../../img/database-access/guides/aws-opensearch/03-opensearch_iam_role_mapping.png) Finally, click the **Map** button to apply the settings. diff --git a/docs/pages/enroll-resources/database-access/enrollment/aws/aws.mdx b/docs/pages/enroll-resources/database-access/enrollment/aws/aws.mdx new file mode 100644 index 0000000000000..de1ad878ebe3d --- /dev/null +++ b/docs/pages/enroll-resources/database-access/enrollment/aws/aws.mdx @@ -0,0 +1,39 @@ +--- +title: Enroll AWS Databases +sidebar_label: AWS +description: "Provides instructions on protecting databases in your AWS-managed infrastructure with Teleport." +tags: + - zero-trust + - infrastructure-identity + - aws +--- + +The guides in this section show you how to protect AWS-managed databases with +Teleport. + +You can configure Teleport to discover databases in your AWS account and enroll +them with your cluster automatically. Read more about setting up +[Database Auto-Discovery](../../../auto-discovery/databases/databases.mdx). + +It is also possible to protect databases across your AWS accounts. Read the +instructions in [AWS Cross-Account Database +Access](aws-cross-account.mdx). 
+
+Read the following guides to learn how to protect a specific AWS-managed
+database with Teleport:
+
+- [Amazon DocumentDB](aws-docdb.mdx)
+- [Amazon DynamoDB](aws-dynamodb.mdx)
+- [Amazon ElastiCache for Redis and Valkey](redis-aws.mdx)
+- [Amazon ElastiCache Serverless for Redis and Valkey](elasticache-serverless.mdx)
+- [Amazon MemoryDB](aws-memorydb.mdx)
+- [Amazon Keyspaces (Apache Cassandra)](aws-cassandra-keyspaces.mdx)
+- [Amazon OpenSearch](aws-opensearch.mdx)
+- [Amazon RDS Oracle](rds/rds-oracle.mdx)
+- [Amazon RDS Proxy MySQL](rds-proxy/rds-proxy-mysql.mdx)
+- [Amazon RDS Proxy for Microsoft SQL Server](rds-proxy/rds-proxy-sqlserver.mdx)
+- [Amazon RDS Proxy for PostgreSQL](rds-proxy/rds-proxy-postgres.mdx)
+- [Amazon RDS and Aurora](rds/mysql-postgres-mariadb.mdx)
+- [Amazon RDS for SQL Server](rds/sql-server-ad.mdx)
+- [Amazon Redshift Serverless](redshift-serverless.mdx)
+- [Amazon Redshift](postgres-redshift.mdx)
diff --git a/docs/pages/enroll-resources/database-access/enrollment/aws/elasticache-serverless.mdx b/docs/pages/enroll-resources/database-access/enrollment/aws/elasticache-serverless.mdx
new file mode 100644
index 0000000000000..e9aad78f9bfae
--- /dev/null
+++ b/docs/pages/enroll-resources/database-access/enrollment/aws/elasticache-serverless.mdx
@@ -0,0 +1,208 @@
+---
+title: Database Access with Amazon ElastiCache Serverless for Redis and Valkey
+sidebar_label: ElastiCache Serverless
+description: How to configure Teleport database access with Amazon ElastiCache Serverless for Redis and Valkey.
+tags:
+  - how-to
+  - zero-trust
+---
+
+(!docs/pages/includes/database-access/db-introduction.mdx dbType="Amazon ElastiCache Serverless" dbConfigure="with IAM authentication"!)
+
+## How it works
+
+The Teleport Database Service connects on the user's behalf using IAM authentication and proxies traffic from users to Amazon ElastiCache Serverless.
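Conceptually, the IAM authentication "token" presented in place of a Redis password is a SigV4-presigned `connect` request for the cache and user. The stdlib-only Python sketch below illustrates that shape. It is a simplified illustration: the exact canonical request AWS expects, and details such as the signed `host` value and empty payload hash, are assumptions here, and real integrations should use an AWS SDK signer.

```python
import datetime
import hashlib
import hmac
import urllib.parse


def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()


def presign_connect(cache_name, user_id, region, access_key, secret_key, now=None):
    """Build a SigV4-presigned 'connect' request: the rough shape of an IAM auth token."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/elasticache/aws4_request"
    params = {
        "Action": "connect",
        "User": user_id,
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": "900",
        "X-Amz-SignedHeaders": "host",
    }
    query = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items())
    )
    # Canonical request: method, path, query, canonical headers, signed headers, payload hash.
    canonical = "\n".join(
        ["GET", "/", query, f"host:{cache_name}\n", "host", hashlib.sha256(b"").hexdigest()]
    )
    to_sign = "\n".join(
        ["AWS4-HMAC-SHA256", amz_date, scope, hashlib.sha256(canonical.encode()).hexdigest()]
    )
    # Standard SigV4 key derivation: date -> region -> service -> "aws4_request".
    key = _hmac(
        _hmac(_hmac(_hmac(b"AWS4" + secret_key.encode(), datestamp), region), "elasticache"),
        "aws4_request",
    )
    signature = hmac.new(key, to_sign.encode(), hashlib.sha256).hexdigest()
    # The token is the presigned URL with the scheme stripped.
    return f"{cache_name}/?{query}&X-Amz-Signature={signature}"
```

The resulting string (host, query parameters, and `X-Amz-Signature`) is what would be passed as the Redis password, and it expires after the `X-Amz-Expires` window.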
+
+
+
+![Enroll ElastiCache Serverless with a Self-Hosted Teleport Cluster](../../../../../img/database-access/guides/elasticache_serverless_selfhosted.png)
+
+
+![Enroll ElastiCache Serverless with a Cloud-Hosted Teleport Cluster](../../../../../img/database-access/guides/elasticache_serverless_cloud.png)
+
+
+
+
+(!docs/pages/includes/database-access/auto-discovery-tip.mdx dbType="Amazon ElastiCache Serverless cache" providerType="AWS"!)
+
+## Prerequisites
+
+(!docs/pages/includes/edition-prereqs-tabs.mdx!)
+
+- AWS account with an ElastiCache Serverless for Redis or Valkey cache.
+- Permissions to create and attach IAM policies.
+- `redis-cli` version `6.2` or newer installed and added to your system's `PATH` environment variable.
+- A host, e.g., an EC2 instance, where you will run the Teleport Database
+  Service.
+- (!docs/pages/includes/database-access/aws-auto-discovery-prerequisite.mdx!)
+- (!docs/pages/includes/tctl.mdx!)
+
+## Step 1/6. Create a Teleport user
+
+(!docs/pages/includes/database-access/create-user.mdx!)
+
+## Step 2/6. Create a Database Service configuration
+
+(!docs/pages/includes/tctl-token.mdx serviceName="Database" tokenType="db" tokenFile="/tmp/token"!)
+
+(!docs/pages/includes/database-access/alternative-methods-join.mdx!)
+
+(!docs/pages/includes/install-linux.mdx!)
+
+Create the Database Service configuration:
+```code
+$ ELASTICACHE_SERVERLESS_URI=""
+$ sudo teleport db configure create \
+  -o file \
+  --name="elasticache-serverless" \
+  --proxy=example.teleport.sh:443 \
+  --protocol="redis" \
+  --uri=${ELASTICACHE_SERVERLESS_URI?} \
+  --token=/tmp/token
+```
+
+Change `example.teleport.sh:443` to the host and port of your Teleport Proxy
+Service. Set `ELASTICACHE_SERVERLESS_URI` to the domain name and port of your ElastiCache
+database.
+
+The command will generate a Database Service configuration and place it at the
+`/etc/teleport.yaml` location.
+
+## Step 3/6. 
Create an IAM role for Teleport
+
+(!docs/pages/includes/database-access/create-iam-role-step-description.mdx accessFor="ElastiCache Serverless caches" !)
+
+### Create an IAM role for Teleport
+
+(!docs/pages/includes/aws-credentials.mdx service="the Database Service"!)
+
+### Grant permissions
+
+(!docs/pages/includes/database-access/reference/aws-iam/elasticache-serverless/access-policy.mdx!)
+
+## Step 4/6. Start the Database Service
+
+(!docs/pages/includes/start-teleport.mdx service="the Database Service"!)
+
+## Step 5/6. Configure authentication for ElastiCache users
+
+Configure authentication for your ElastiCache Serverless database.
+
+To enable RBAC, please see [Authenticating users with Role-Based Access
+Control for
+ElastiCache](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/Clusters.RBAC.html).
+
+
+ElastiCache Serverless Redis OSS will automatically create a default user with no password.
+We strongly recommend that you take steps to disable the default user.
+The automatically created default user cannot be modified or deleted, and you cannot remove the default user name from an ElastiCache Redis OSS user group.
+To disable the default user:
+1. Create a new user with the user name "default" and a unique (i.e., not "default") user ID.
+2. *Replace* the default user in your ElastiCache user group with the new default user.
+
+We recommend setting a strong password on your customized default user (IAM authentication is not possible for the default user) and using an access string that is not permissive.
+For example, the access string `off -@all` will disable logins and deny all Redis privileges to the default user.
+
+See [Applying RBAC](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/Clusters.RBAC.html#rbac-using) for more information. 
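The replacement procedure above can be sanity-checked programmatically. The sketch below models user records roughly the way `aws elasticache describe-users` reports them; the field names `UserName`, `UserId`, and `AccessString` are assumptions based on that output, and the sample data is made up:

```python
def default_user_locked_down(users):
    """Check that the group's 'default'-named user is a custom, disabled one."""
    for user in users:
        if user["UserName"] == "default":
            is_custom = user["UserId"] != "default"               # step 1: unique user ID
            is_disabled = user["AccessString"].startswith("off")  # e.g. "off -@all"
            return is_custom and is_disabled
    return False  # no default-named user found at all


users = [
    {"UserName": "default", "UserId": "locked-default", "AccessString": "off -@all"},
    {"UserName": "iam-user-01", "UserId": "iam-user-01", "AccessString": "on ~* +@all"},
]
print(default_user_locked_down(users))  # -> True
```

In practice you would feed this the parsed JSON for the user group attached to your cache, rather than a hand-written list.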
+ + +Some additional limitations apply when using IAM authentication - for more +information, see: +[ElastiCache Auth IAM Limits](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/auth-iam.html#auth-iam-limits). + +There are a few requirements for configuring an ElastiCache IAM-enabled user: +- the user must have identical username and user id properties. +- the user must have authentication mode set to "IAM". +- the user must be attached to an ElastiCache user group. + +Create an ElastiCache IAM-enabled user. +The following example creates an ElastiCache user with the access string +`on ~* +@all` that represents an active user with access to all available keys +and commands: +```code +$ aws elasticache create-user \ + --user-name iam-user-01 \ + --user-id iam-user-01 \ + --authentication-mode Type=iam \ + --engine redis \ + --access-string "on ~* +@all" +``` + + +You may prefer a less permissive access string for your ElastiCache users. +For more information about ElastiCache access strings, please see: +[ElastiCache Cluster RBAC Access String](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Clusters.RBAC.html#Access-string). + + +Create an ElastiCache user group and attach it to your ElastiCache Serverless cache: +```code +$ aws elasticache create-user-group \ + --user-group-id iam-user-group-01 \ + --engine redis \ + --user-ids default iam-user-01 +$ aws elasticache modify-serverless-cache \ + --serverless-cache-name serverless-cache-1 \ + --user-group-id "iam-user-group-01" +``` + +Once the ElastiCache user has been created, verify that the user is configured +to satisfy the requirements for IAM authentication: + +![ElastiCache IAM-enabled User](../../../../../img/database-access/guides/redis/redis-aws-iam-user@2x.png) + +(!docs/pages/includes/database-access/aws-redis-no-auth.mdx dbType="ElastiCache Serverless"!) + +## Step 6/6. 
Connect + +Once the Database Service has started and joined the cluster, log in to see the +registered databases: + + + +```code +$ tsh login --proxy=teleport.example.com --user=alice +$ tsh db ls +Name Description Allowed Users Labels +------------------- ----------- ------------- ----------------------- +my-serverless-cache [*] account_id=123456789012 +``` + + + + +```code +$ tsh login --proxy=mytenant.teleport.sh --user=alice +$ tsh db ls +Name Description Allowed Users Labels +------------------- ----------- ------------- ----------------------- +my-serverless-cache [*] account_id=123456789012 +``` + + + + + +To retrieve credentials for a database and connect to it: + +```code +$ tsh db connect --db-user=my-database-user my-serverless-cache +``` + +(!docs/pages/includes/database-access/aws-redis-tsh-db-user-auth.mdx!) + +To log out of the database and remove credentials: + +```code +# Remove credentials for a particular database instance. +$ tsh db logout my-serverless-cache +# Remove credentials for all database instances. +$ tsh db logout +``` + +## Troubleshooting + +(!docs/pages/includes/database-access/aws-troubleshooting.mdx!) + +## Next steps + +(!docs/pages/includes/database-access/guides-next-steps.mdx!) diff --git a/docs/pages/enroll-resources/database-access/enrollment/aws/postgres-redshift.mdx b/docs/pages/enroll-resources/database-access/enrollment/aws/postgres-redshift.mdx new file mode 100644 index 0000000000000..4af888b101b6a --- /dev/null +++ b/docs/pages/enroll-resources/database-access/enrollment/aws/postgres-redshift.mdx @@ -0,0 +1,208 @@ +--- +title: Database Access with Amazon Redshift +sidebar_label: Redshift +description: How to configure Teleport database access with Amazon Redshift. +videoBanner: UFhT52d5bYg +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws +--- + +(!docs/pages/includes/database-access/db-introduction.mdx dbType="Amazon Redshift" dbConfigure="with IAM authentication"!) 
+
+## How it works
+
+(!docs/pages/includes/database-access/how-it-works/iam.mdx db="Amazon Redshift" cloud="AWS"!)
+
+
+
+![Enroll Redshift with a self-hosted Teleport cluster](../../../../../img/database-access/guides/redshift_selfhosted.png)
+
+
+![Enroll Redshift with a cloud-hosted Teleport cluster](../../../../../img/database-access/guides/redshift_cloud.png)
+
+
+
+
+(!docs/pages/includes/database-access/auto-discovery-tip.mdx dbType="Amazon Redshift cluster" providerType="AWS"!)
+
+## Prerequisites
+
+(!docs/pages/includes/edition-prereqs-tabs.mdx!)
+
+- AWS account with a Redshift cluster and permissions to create and attach IAM
+  policies.
+- Command-line client `psql` installed and added to your system's `PATH` environment variable.
+- A host, e.g., an EC2 instance, where you will run the Teleport Database
+  Service.
+- (!docs/pages/includes/tctl.mdx!)
+
+## Step 1/5. Create a Teleport user
+
+(!docs/pages/includes/database-access/create-user.mdx!)
+
+## Step 2/5. Create a Database Service configuration
+
+(!docs/pages/includes/tctl-token.mdx serviceName="Database" tokenType="db" tokenFile="/tmp/token"!)
+
+(!docs/pages/includes/database-access/alternative-methods-join.mdx!)
+
+(!docs/pages/includes/install-linux.mdx!)
+
+On the node that is running the Database Service, create a configuration file.
+Assign `CLUSTER_URI` to the domain name and port of the cluster:
+
+
+
+```code
+$ sudo teleport db configure create \
+  -o file \
+  --name="redshift-postgres" \
+  --proxy=teleport.example.com:443 \
+  --protocol=postgres \
+  --token=/tmp/token \
+  --uri=${CLUSTER_URI?}
+```
+
+
+
+```code
+$ sudo teleport db configure create \
+  -o file \
+  --name="redshift-postgres" \
+  --proxy=mytenant.teleport.sh:443 \
+  --protocol=postgres \
+  --token=/tmp/token \
+  --uri=${CLUSTER_URI?}
+```
+
+
+
+
+The command will generate a Database Service configuration to proxy your AWS
+Redshift cluster and place it at the `/etc/teleport.yaml` location.
+
+## Step 3/5. 
Configure IAM permissions for the Database Service + +(!docs/pages/includes/database-access/create-iam-role-step-description.mdx accessFor="Redshift databases" !) + +### Create an IAM role for Teleport + +(!docs/pages/includes/aws-credentials.mdx service="the Database Service"!) + +### Grant permissions + +Attach the following AWS IAM permissions to the Database Service IAM role: + +(!docs/pages/includes/database-access/reference/aws-iam/redshift/access-policy.mdx dbUserRole="redshift-user-role" !) + +## Step 4/5. Start the Database Service + +(!docs/pages/includes/start-teleport.mdx service="the Database Service"!) + +The Database Service will proxy the Amazon Redshift cluster with the ID you +specified earlier. Keep in mind that AWS IAM changes may not propagate +immediately and can take a few minutes to come into effect. + +## Step 5/5. Connect + + + + +Once the Database Service has started and joined the cluster, log in to see the +registered databases. Replace `--proxy` with the address of your Teleport Proxy +Service. + +```code +$ tsh login --proxy=teleport.example.com --user=alice +$ tsh db ls +# Name Description Labels +# ----------- ------------------------------ -------- +# my-redshift ... +``` + + + + +Once the Database Service has started and joined the cluster, log in to see the +registered databases. Replace `--proxy` with the address of your Teleport Cloud +tenant. + +```code +$ tsh login --proxy=mytenant.teleport.sh --user=alice +$ tsh db ls +# Name Description Labels +# ----------- ------------------------------ -------- +# my-redshift ... +``` + + + + + +To retrieve credentials for a database and connect to it: + +```code +$ tsh db connect --db-user=alice --db-name=dev my-redshift +``` + + + Teleport does not currently use the auto-create option when generating + tokens for Redshift databases. Users must exist in the database. + + +(!docs/pages/includes/database-access/db-access-webui-ad.mdx dbType="PostgreSQL"!) 
+
+To log out of the database and remove credentials:
+
+```code
+$ tsh db logout my-redshift
+```
+
+## Authenticate to Redshift as an IAM role
+
+Amazon Redshift supports two methods of IAM-based authentication, and Teleport
+is compatible with both.
+
+**First method: Authenticate as a Database User (Default)**
+
+In this method, the Teleport Database Service generates a temporary IAM
+authentication token for an existing database user in Redshift. This user must
+already exist in the Redshift database. Teleport uses this method by default.
+
+**Second method: Authenticate as an IAM Role**
+
+In this alternative method, the Teleport Database Service assumes an AWS IAM
+role to authenticate to Redshift. Redshift maps the IAM role to a database user
+and automatically creates that user if it doesn't already exist. If you use this
+method, you must first create an AWS IAM role that grants access to the Redshift
+database.
+
+(!docs/pages/includes/database-access/reference/aws-iam/redshift/role-as-user-policy.mdx dbUserRole="redshift-user-role"!)
+
+## Troubleshooting
+
+(!docs/pages/includes/database-access/aws-troubleshooting.mdx!)
+
+(!docs/pages/includes/database-access/aws-troubleshooting-max-policy-size.mdx!)
+
+(!docs/pages/includes/database-access/pg-cancel-request-limitation.mdx PIDQuery="SELECT pid,starttime,duration,trim(user_name) AS user,trim(query) AS query FROM stv_recents WHERE status = 'Running';"!)
+
+(!docs/pages/includes/database-access/psql-ssl-syscall-error.mdx!)
+
+## Next steps
+
+- Learn more about [using IAM authentication to generate database user
+  credentials](https://docs.aws.amazon.com/redshift/latest/mgmt/generating-user-credentials.html) for Amazon Redshift.
+- Learn how to [restrict access](../../rbac.mdx) to certain users and databases.
+- View the [High Availability (HA)](../../guides/ha.mdx) guide. 
+- Take a look at the YAML configuration [reference](../../reference/configuration.mdx). + diff --git a/docs/pages/enroll-resources/database-access/enrollment/aws/rds-proxy/rds-proxy-mysql.mdx b/docs/pages/enroll-resources/database-access/enrollment/aws/rds-proxy/rds-proxy-mysql.mdx new file mode 100644 index 0000000000000..6dc2c2f7df71c --- /dev/null +++ b/docs/pages/enroll-resources/database-access/enrollment/aws/rds-proxy/rds-proxy-mysql.mdx @@ -0,0 +1,14 @@ +--- +title: Database Access with Amazon RDS Proxy for MariaDB/MySQL +sidebar_label: MariaDB/MySQL +description: How to configure Teleport database access with Amazon RDS Proxy for MariaDB/MySQL. +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws +--- + +(!docs/pages/includes/database-access/db-introduction.mdx dbType="Amazon RDS proxy for MariaDB or MySQL" dbConfigure="with IAM authentication"!) + +(!docs/pages/includes/database-access/rds-proxy.mdx protocol="mysql" !) diff --git a/docs/pages/enroll-resources/database-access/enrollment/aws/rds-proxy/rds-proxy-postgres.mdx b/docs/pages/enroll-resources/database-access/enrollment/aws/rds-proxy/rds-proxy-postgres.mdx new file mode 100644 index 0000000000000..51f8d2dc56be2 --- /dev/null +++ b/docs/pages/enroll-resources/database-access/enrollment/aws/rds-proxy/rds-proxy-postgres.mdx @@ -0,0 +1,14 @@ +--- +title: Database Access with Amazon RDS Proxy for PostgreSQL +sidebar_label: PostgreSQL +description: How to configure Teleport database access with Amazon RDS Proxy for PostgreSQL +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws +--- + +(!docs/pages/includes/database-access/db-introduction.mdx dbType="Amazon RDS proxy for PostgreSQL" dbConfigure="with IAM authentication"!) + +(!docs/pages/includes/database-access/rds-proxy.mdx protocol="postgres"!) 
diff --git a/docs/pages/enroll-resources/database-access/enrollment/aws/rds-proxy/rds-proxy-sqlserver.mdx b/docs/pages/enroll-resources/database-access/enrollment/aws/rds-proxy/rds-proxy-sqlserver.mdx new file mode 100644 index 0000000000000..36d4779ee193a --- /dev/null +++ b/docs/pages/enroll-resources/database-access/enrollment/aws/rds-proxy/rds-proxy-sqlserver.mdx @@ -0,0 +1,14 @@ +--- +title: Database Access with Amazon RDS Proxy for SQL Server +sidebar_label: SQL Server +description: How to configure Teleport database access with Amazon RDS Proxy for SQL Server +tags: + - how-to + - zero-trust + - aws + - infrastructure-identity +--- + +(!docs/pages/includes/database-access/db-introduction.mdx dbType="Amazon RDS proxy for Microsoft SQL Server" dbConfigure="with IAM authentication"!) + +(!docs/pages/includes/database-access/rds-proxy.mdx protocol="sqlserver" !) diff --git a/docs/pages/enroll-resources/database-access/enrollment/aws/rds-proxy/rds-proxy.mdx b/docs/pages/enroll-resources/database-access/enrollment/aws/rds-proxy/rds-proxy.mdx new file mode 100644 index 0000000000000..f7c82b2f4e2ef --- /dev/null +++ b/docs/pages/enroll-resources/database-access/enrollment/aws/rds-proxy/rds-proxy.mdx @@ -0,0 +1,11 @@ +--- +title: Protecting Amazon RDS Proxy +sidebar_label: RDS Proxy +description: Provides guidance on enrolling Amazon RDS Proxy databases with Teleport. +--- + +The guides in this section show you how to enroll Amazon RDS Proxy databases +with Teleport. 
For RDS databases, see [Protecting Amazon RDS +Databases](../rds/rds.mdx): + + diff --git a/docs/pages/enroll-resources/database-access/enrollment/aws/rds/mysql-postgres-mariadb.mdx b/docs/pages/enroll-resources/database-access/enrollment/aws/rds/mysql-postgres-mariadb.mdx new file mode 100644 index 0000000000000..800c19948a1a1 --- /dev/null +++ b/docs/pages/enroll-resources/database-access/enrollment/aws/rds/mysql-postgres-mariadb.mdx @@ -0,0 +1,335 @@ +--- +title: Database Access with Amazon RDS and Aurora for PostgreSQL/MySQL/MariaDB +sidebar_label: PostgreSQL/MySQL/MariaDB +description: How to configure Teleport database access with Amazon RDS and Aurora for PostgreSQL, MySQL and MariaDB. +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws +--- + +(!docs/pages/includes/database-access/db-introduction.mdx dbType="Amazon RDS or Aurora" dbConfigure="with IAM authentication"!) + +## How it works + +(!docs/pages/includes/database-access/how-it-works/iam.mdx db="RDS" cloud="AWS"!) + + + +![Teleport Architecture RDS Self-Hosted](../../../../../../img/database-access/guides/rds_selfhosted.png) + + +![Teleport Architecture RDS Cloud-Hosted](../../../../../../img/database-access/guides/rds_cloud.png) + + + + + + +The following products are not compatible with Teleport as they don't support +IAM authentication: + + - Aurora Serverless v1. + - RDS MariaDB versions lower than 10.6. + +We recommend upgrading Aurora Serverless v1 to [Aurora Serverless +v2](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.html), +which supports IAM authentication. + + + +(!docs/pages/includes/database-access/auto-discovery-tip.mdx dbType="RDS" providerType="AWS" !) + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx!) + +- AWS account with RDS and Aurora databases and permissions to create and attach + IAM policies. + + Your RDS and Aurora databases must have password and IAM authentication + enabled. 
+ + If IAM authentication is not enabled on the target RDS and Aurora databases, + the Database Service will attempt to enable IAM authentication by modifying + them using respective APIs. + +- A Linux host or Amazon Elastic Kubernetes Service cluster where you will run + the Teleport Database Service, which proxies connections to your RDS + databases. +- (!docs/pages/includes/tctl.mdx!) + +If you plan to run the Teleport Database Service on Kubernetes, you will need +the following: + + - The `aws` CLI in your PATH. Install it by following the [AWS + documentation](https://aws.amazon.com/cli/). + + - An IAM OIDC provider running in your Kubernetes cluster. See the [AWS + documentation](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html) + for how to create an IAM OIDC provider. + + To check whether you have an IAM OIDC provider running in your cluster, run + the following `aws` command, assigning to the + region where your EKS cluster is running and to + the name of your Kubernetes cluster: + + ```code + $ aws --region= eks describe-cluster --name --query "cluster.identity.oidc.issuer" --output text + ``` + + If you have an IAM OIDC provider associated with your cluster, this command + will print its ID. + + - The [`jq` CLI tool](https://jqlang.github.io/jq/), which we use to process + JSON data in this guide. + +## Step 1/6. Create a Teleport user + +(!docs/pages/includes/database-access/create-user.mdx!) + +## Step 2/6. Create a Database Service configuration + +In this section, you will configure the Teleport Database Service. To do so, you +will: + +- Create a join token for the service to demonstrate trust with your Teleport + cluster. +- Set up your package manager so you can install and run the Database Service. +- Generate a configuration for the Database Service. + +### Create a join token + +Establish trust between the Teleport Database Service and your Teleport cluster +by creating a join token. 
+ +Generate a join token by running the following command on your workstation: + +```code +$ tctl tokens add --type=db +``` + +The next step depends on how you plan to run the Teleport Database Service: + + + + +Save the token in a file called `/tmp/token` on the host that will run the +Database Service. + + + + +Later in this guide, you will use this join token when configuring the Teleport +Database Service. + + + + +(!docs/pages/includes/database-access/alternative-methods-join.mdx!) + +### Prepare your environment + +Next, get your environment ready to run the Teleport Database Service: + + + + +(!docs/pages/includes/install-linux.mdx!) + +Provide the following information and then generate a configuration file for the +Teleport Database Service: +- The host **and port** of your Teleport +Proxy Service or cloud-hosted Teleport Enterprise site +- The protocol of the database you want to proxy, either +`mysql` or `postgres` +- The endpoint **and port** of the database - the +cluster endpoint for Aurora or the instance endpoint for an RDS instance, e.g. +`myrds.us-east-1.rds.amazonaws.com:5432` + +```code +$ sudo teleport db configure create \ + -o file \ + --name=rds-example \ + --proxy= \ + --protocol= \ + --uri= \ + --labels=env=dev \ + --token=/tmp/token +``` + +The command will generate a Teleport Database Service configuration file and +place it at the `/etc/teleport.yaml` location. + + + + +(!docs/pages/includes/kubernetes-access/helm/helm-repo-add.mdx!) + + + + +## Step 3/6. Create IAM policies for Teleport + +(!docs/pages/includes/database-access/create-iam-role-step-description.mdx accessFor="RDS instances and Aurora clusters" !) + +### Create an IAM role for Teleport + +(!docs/pages/includes/aws-credentials.mdx service="the Database Service"!) + +### Grant permissions + +Attach the following AWS IAM permissions to the Database Service IAM role: + +(!docs/pages/includes/database-access/reference/aws-iam/rds/access-policy.mdx!) + +## Step 4/6. 
Start the Database Service + +Start the Teleport Database Service in your environment: + + + + +(!docs/pages/includes/start-teleport.mdx service="the Database Service"!) + + + + +Retrieve the join token you created earlier in this guide by running the +following command and copying a token with the `Db` type: + +```code +$ tctl tokens ls +Token Type Labels Expiry Time (UTC) +-------------------------------- ---- ------ ---------------------------- +(=presets.tokens.first=) Db 14 Jun 23 21:21 UTC (20m15s) +``` + +Create a Helm values file called `values.yaml`, assigning +to the value of the join token you retrieved above, to the host **and port** of your Teleport +Proxy Service, and to the host **and port** of your +RDS database (e.g., `myrds.us-east-1.rds.amazonaws.com:5432`). Assign to your AWS account ID. Set `enterprise` to false if you +are using Teleport Community Edition: + +```yaml +authToken: +proxyAddr: +roles: db +enterprise: true +databases: +- name: example + uri: "" + protocol: + static_labels: + env: dev +annotations: + serviceAccount: + eks.amazonaws.com/role-arn: arn:aws:iam:::role/teleport-rds-role +``` + +Get the version of Teleport to install. If you have automatic agent updates enabled in your cluster, query the latest Teleport version that is compatible with the updater: + +```code +$ TELEPORT_VERSION="$(curl https:///v1/webapi/automaticupgrades/channel/default/version | sed 's/v//')" +``` + +Otherwise, get the version of your Teleport cluster: + +```code +$ TELEPORT_VERSION="$(curl https:///v1/webapi/ping | jq -r '.server_version')" +``` + + +Install the Helm chart for Teleport Agent services, `teleport-kube-agent`: + +```code +$ helm -n teleport-agent install teleport-kube-agent teleport/teleport-kube-agent \ + --values values.yaml --create-namespace --version $TELEPORT_VERSION +``` + +Make sure that the Teleport Agent pod is running. 
You should see one +`teleport-kube-agent` pod with a single ready container: + +```code +$ kubectl -n teleport-agent get pods +NAME READY STATUS RESTARTS AGE +teleport-kube-agent-0 1/1 Running 0 32s +``` + + + + +## Step 5/6. Create a database IAM user + +Database users must allow IAM authentication in order to be used with Database +Access for RDS. See below for how to enable it for the user `alice` on your +database engine. In the next step, we will authenticate to the database as the +`alice` user via the user's Teleport account. + + + + PostgreSQL users must have a `rds_iam` role: + + ```sql + CREATE USER alice; + GRANT rds_iam TO alice; + ``` + + + MySQL and MariaDB users must have the RDS authentication plugin enabled: + + ```sql + CREATE USER alice IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS'; + ``` + + The newly created user may not have access to anything by default, so grant + it some permissions: + + ```sql + GRANT ALL ON `%`.* TO 'alice'@'%'; + ``` + + + +See [Creating a database account using IAM authentication](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.IAMDBAuth.DBAccounts.html) +for more information. + +## Step 6/6. Connect + +Once the Database Service has started and joined the cluster, log in as the +`alice` user you created earlier to see the registered databases: + +```code +$ tsh login --proxy= --user=alice +$ tsh db ls +# Name Description Labels +# ----------- ----------- -------- +# rds-example env=dev +``` + +(!docs/pages/includes/database-access/pg-mysql-connect.mdx dbUser="alice" dbService="rds-example"!) + +Log out of the database and remove credentials: + +```code +$ tsh db logout rds-example +``` + +## Troubleshooting + +(!docs/pages/includes/database-access/aws-troubleshooting.mdx!) + +(!docs/pages/includes/database-access/aws-troubleshooting-max-policy-size.mdx!) + +(!docs/pages/includes/database-access/pg-cancel-request-limitation.mdx!) + +(!docs/pages/includes/database-access/psql-ssl-syscall-error.mdx!)
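+
+If Teleport reports authentication errors, it can help to verify the
+database's IAM configuration outside of Teleport. The following is a sketch:
+the endpoint, port, database user, and region are placeholders, and
+`rds-ca-bundle.pem` is assumed to be a previously downloaded RDS CA bundle.
+It generates a short-lived IAM authentication token and connects with `psql`
+directly:
+
+```code
+$ TOKEN="$(aws rds generate-db-auth-token \
+    --hostname myrds.us-east-1.rds.amazonaws.com \
+    --port 5432 \
+    --username alice \
+    --region us-east-1)"
+$ PGPASSWORD="$TOKEN" psql "host=myrds.us-east-1.rds.amazonaws.com port=5432 user=alice dbname=postgres sslmode=verify-full sslrootcert=rds-ca-bundle.pem"
+```
+
+If this direct connection succeeds, the database side is configured correctly
+and the problem is more likely with the Database Service IAM role or network
+path.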
+ +## Next steps + +(!docs/pages/includes/database-access/guides-next-steps.mdx!) +- Set up [automatic database user provisioning](../../../auto-user-provisioning/auto-user-provisioning.mdx). + diff --git a/docs/pages/enroll-resources/database-access/enroll-aws-databases/rds-oracle.mdx b/docs/pages/enroll-resources/database-access/enrollment/aws/rds/rds-oracle.mdx similarity index 83% rename from docs/pages/enroll-resources/database-access/enroll-aws-databases/rds-oracle.mdx rename to docs/pages/enroll-resources/database-access/enrollment/aws/rds/rds-oracle.mdx index 6bcdb8591d00e..c04d9732927dd 100644 --- a/docs/pages/enroll-resources/database-access/enroll-aws-databases/rds-oracle.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/aws/rds/rds-oracle.mdx @@ -1,6 +1,12 @@ --- -title: Database Access with AWS RDS Oracle with Kerberos Authentication -description: How to configure Teleport Database Access with AWS RDS Oracle with Kerberos authentication. +title: Database Access with Amazon RDS Oracle with Kerberos Authentication +sidebar_label: Oracle +description: How to enroll Amazon RDS Oracle in your Teleport cluster with Kerberos authentication. +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws --- (!docs/pages/includes/database-access/db-introduction.mdx dbType="Amazon RDS for Oracle" dbConfigure="with Kerberos authentication"!) @@ -16,7 +22,7 @@ Database Service forwards user traffic to the database. ## Prerequisites -(!docs/pages/includes/edition-prereqs-tabs.mdx!) +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport (v17.4.0 or higher)"!) - An Amazon RDS for Oracle database instance. - An AWS Directory Service Managed Microsoft AD. @@ -29,8 +35,8 @@ Database Service forwards user traffic to the database. Before configuring Teleport, ensure your Oracle RDS instance has Kerberos authentication and TLS properly configured: -1. 
Follow the [AWS RDS Oracle Kerberos Setup guide](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/oracle-kerberos-setting-up.html) to enable Kerberos authentication on your instance. -2. Enable TLS on your Oracle RDS instance by following the [AWS RDS Oracle SSL Setup documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.SSL.html). Ensure `SQLNET.SSL_VERSION` is set to `1.2` for optimal security. Make note of the SSL port choice; in the rest of the guide we will assume it is 2484. Also ensure `SQLNET.CIPHER_SUITE` parameter includes a supported value, for example `TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`. +1. Follow the [Amazon RDS Oracle Kerberos Setup guide](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/oracle-kerberos-setting-up.html) to enable Kerberos authentication on your instance. +2. Enable TLS on your Oracle RDS instance by following the [Amazon RDS Oracle SSL Setup documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.SSL.html). Ensure `SQLNET.SSL_VERSION` is set to `1.2` for optimal security. Make note of the SSL port choice; in the rest of the guide we will assume it is 2484. Also ensure `SQLNET.CIPHER_SUITE` parameter includes a supported value, for example `TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`. Verify connectivity between your Teleport Database Service host and the Oracle RDS instance before proceeding. @@ -216,7 +222,7 @@ be merged into the same `teleport.keytab` file. To check if the user has any SPNs assigned, go to the user's page in AWS Console and locate the "Account settings - optional" section. 
- ![AWS AD Set SPN](../../../../img/database-access/guides/aws_ad_set_spn.png) + ![AWS AD Set SPN](../../../../../../img/database-access/guides/aws_ad_set_spn.png) Alternatively, run the following command on the Windows machine joined to your Active Directory domain: @@ -385,7 +391,7 @@ Other clients can use: - a custom JDBC connection string: 'jdbc:oracle:thin:@tcps://localhost:12345/ORCL?TNS_ADMIN=/home/alice/.tsh/keys/teleport.example.com/alice-db/teleport.example.com/oracle-wallet' ``` -This method also enables use of various graphical clients, as explained in [Oracle graphical clients](../../../connect-your-client/gui-clients.mdx#oracle-graphical-clients) section. +This method also enables use of various graphical clients, as explained in [Oracle graphical clients](../../../../../connect-your-client/third-party/gui-clients.mdx#oracle-graphical-clients) section. ## Next steps @@ -393,24 +399,7 @@ This method also enables use of various graphical clients, as explained in [Orac ## Troubleshooting -### Connection hangs or is refused - -The Teleport Database Service needs connectivity to your database endpoints. That may require enabling inbound traffic on the database from the Database Service on the same VPC or routing rules from another VPC. Verify the connection using `SQLcl` or another database client. - -```code -$ sql -L test/test@oracle-instance.aabbccddeegg.eu-central-1.rds.amazonaws.com:2484 -``` - -### Teleport can reach RDS Oracle instance, but TLS negotiation fails (handshake failure) - -Ensure that `SQLNET.SSL_VERSION` parameter enables `TLS 1.2` version. TLS 1.0 is rejected by Teleport due to known weaknesses. See [AWS RDS Oracle SSL Setup documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.SSL.html) for more information. 
- -Teleport also rejects the following weak cipher suites: -- `SSL_RSA_WITH_AES_256_CBC_SHA` -- `SSL_RSA_WITH_AES_256_CBC_SHA256` -- `SSL_RSA_WITH_AES_256_GCM_SHA384` - -Ensure that `SQLNET.CIPHER_SUITE` parameter contains at least one supported cipher suite. We recommend using `TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384` cipher suite. Note that this parameter can contain multiple suites separated by a comma, so the compatible cipher suite can always be appended to the list. +(!docs/pages/includes/database-access/oracle-troubleshooting.mdx!) ### Username not recognized @@ -423,8 +412,8 @@ Installation-specific configuration variations may lead to different values, how ## Further reading -- [AWS RDS Oracle Kerberos Setup](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/oracle-kerberos-setting-up.html). -- [AWS RDS Oracle SSL Setup](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.SSL.html). -- [AWS RDS Oracle auditing](https://aws.amazon.com/blogs/database/part-1-security-auditing-in-amazon-rds-for-oracle/). +- [Amazon RDS Oracle Kerberos Setup](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/oracle-kerberos-setting-up.html). +- [Amazon RDS Oracle SSL Setup](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.SSL.html). +- [Amazon RDS Oracle auditing](https://aws.amazon.com/blogs/database/part-1-security-auditing-in-amazon-rds-for-oracle/). - [Manually join a Linux instance](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/join_linux_instance.html) in the AWS documentation. - [Introduction to `adutil`](https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-ad-auth-adutil-introduction) in the Microsoft documentation. 
diff --git a/docs/pages/enroll-resources/database-access/enrollment/aws/rds/rds.mdx b/docs/pages/enroll-resources/database-access/enrollment/aws/rds/rds.mdx new file mode 100644 index 0000000000000..3100eab047c53 --- /dev/null +++ b/docs/pages/enroll-resources/database-access/enrollment/aws/rds/rds.mdx @@ -0,0 +1,11 @@ +--- +title: Protecting Amazon RDS and Aurora Databases +sidebar_label: RDS and Aurora +description: Guides for protecting Amazon RDS databases with Teleport +--- + +The guides in this section show you how to protect Amazon RDS and Aurora +databases with Teleport. For RDS Proxy databases, see [Protecting Amazon RDS +Proxy Databases](../rds-proxy/rds-proxy.mdx): + + diff --git a/docs/pages/enroll-resources/database-access/enroll-aws-databases/sql-server-ad.mdx b/docs/pages/enroll-resources/database-access/enrollment/aws/rds/sql-server-ad.mdx similarity index 95% rename from docs/pages/enroll-resources/database-access/enroll-aws-databases/sql-server-ad.mdx rename to docs/pages/enroll-resources/database-access/enrollment/aws/rds/sql-server-ad.mdx index 402a6cbdbf84b..b90897eca1248 100644 --- a/docs/pages/enroll-resources/database-access/enroll-aws-databases/sql-server-ad.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/aws/rds/sql-server-ad.mdx @@ -1,7 +1,13 @@ --- -title: Database Access with Microsoft SQL Server with Active Directory authentication -description: How to configure Teleport database access with Microsoft SQL Server with Active Directory authentication. +title: Database Access with Amazon RDS for Microsoft SQL Server with Active Directory Authentication +sidebar_label: SQL Server +description: How to configure Teleport database access with RDS for SQL Server with Active Directory authentication. 
videoBanner: k2wz79XCexY +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws --- (!docs/pages/includes/database-access/db-introduction.mdx dbType="Microsoft SQL Server" dbConfigure="with Active Directory authentication"!) @@ -20,10 +26,10 @@ Database Service forwards user traffic to the database. -![Database access with SQL Server and AD authentication](../../../../img/database-access/sql-server-ad-1.png) +![Database access with SQL Server and AD authentication](../../../../../../img/database-access/sql-server-ad-1.png) -![Database access with SQL Server and AD authentication](../../../../img/database-access/sql-server-ad-2.png) +![Database access with SQL Server and AD authentication](../../../../../../img/database-access/sql-server-ad-2.png) @@ -248,7 +254,7 @@ endpoint. --ad-spn=MSSQLSvc/sqlserver.example.com:1433 \ --labels=env=dev ``` - + @@ -287,7 +293,7 @@ Provide Active Directory parameters: You can use `ldapsearch` command to see the SPNs registered for your SQL Server. Typically, they take a form of `MSSQLSvc/.:`. -For example, an AWS RDS SQL Server named `sqlserver` and joined to an AWS managed +For example, an Amazon RDS SQL Server named `sqlserver` and joined to an AWS managed Active Directory domain `EXAMPLE.COM` will have the following SPNs registered: ```code @@ -305,7 +311,7 @@ Alternatively, you can look SPNs up in the Attribute Editor of the Active Direct Users and Computers dialog on your AD-joined Windows machine. The RDS SQL Server object typically resides under the AWS Reserved / RDS path: -![SPN](../../../../img/database-access/guides/sqlserver/spn@2x.png) +![SPN](../../../../../../img/database-access/guides/sqlserver/spn@2x.png) If you don't see Attribute Editor tab, make sure that "View > Advanced Features" @@ -402,7 +408,7 @@ following: + tls: + # Point it to your Database CA PEM certificate. 
+      ca_cert_file: "rdsca.pem" -+     # If your database certificate has an empty CN filed, you must change ++     # If your database certificate has an empty CN field, you must change +     # the TLS mode to only verify the CA. +     mode: verify-ca ``` @@ -419,4 +425,3 @@ skipping TLS verification in production environments. - [Manually join a Linux instance](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/join_linux_instance.html) in the AWS documentation. - [Introduction to `adutil`](https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-ad-auth-adutil-introduction) in the Microsoft documentation. - diff --git a/docs/pages/enroll-resources/database-access/enrollment/aws/redis-aws.mdx b/docs/pages/enroll-resources/database-access/enrollment/aws/redis-aws.mdx new file mode 100644 index 0000000000000..dbfd0a121a7a4 --- /dev/null +++ b/docs/pages/enroll-resources/database-access/enrollment/aws/redis-aws.mdx @@ -0,0 +1,230 @@ +--- +title: Database Access with Amazon ElastiCache for Redis and Valkey +sidebar_label: ElastiCache +description: How to configure Teleport database access with Amazon ElastiCache for Redis and Valkey. +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws +--- + +(!docs/pages/includes/database-access/db-introduction.mdx dbType="Amazon ElastiCache" dbConfigure="with IAM authentication"!) + +## How it works + +(!docs/pages/includes/database-access/aws-redis-how-it-works.mdx dbType="Amazon ElastiCache"!) + + + +![Enroll ElastiCache with a Self-Hosted Teleport Cluster](../../../../../img/database-access/guides/redis_elasticache_selfhosted.png) + + +![Enroll ElastiCache with a Cloud-Hosted Teleport Cluster](../../../../../img/database-access/guides/redis_elasticache_cloud.png) + + + +(!docs/pages/includes/database-access/auto-discovery-tip.mdx dbType="Amazon ElastiCache cluster" providerType="AWS"!) + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx!)
+ +- AWS account with an ElastiCache for Redis or Valkey cluster. +  **In-transit encryption (TLS) must be enabled**. +- Permissions to create and attach IAM policies. +- `redis-cli` version `6.2` or newer installed and added to your system's `PATH` environment variable. +- A host, e.g., an EC2 instance, where you will run the Teleport Database +  Service. +- [ACL](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/Clusters.RBAC.html) enabled for your +  ElastiCache cluster. +- (!docs/pages/includes/database-access/aws-auto-discovery-prerequisite.mdx!) +- (!docs/pages/includes/tctl.mdx!) + +## Step 1/6. Create a Teleport user + +(!docs/pages/includes/database-access/create-user.mdx!) + +## Step 2/6. Create a Database Service configuration + +(!docs/pages/includes/tctl-token.mdx serviceName="Database" tokenType="db" tokenFile="/tmp/token"!) + +(!docs/pages/includes/database-access/alternative-methods-join.mdx!) + +(!docs/pages/includes/install-linux.mdx!) + +Create the Database Service configuration: +```code +$ ELASTICACHE_URI="" +$ sudo teleport db configure create \ +   -o file \ +   --name="elasticache" \ +   --proxy=example.teleport.sh:443 \ +   --protocol="redis" \ +   --uri=${ELASTICACHE_URI?} \ +   --token=/tmp/token +``` + +Change `example.teleport.sh:443` to the host and port of your Teleport Proxy +Service. Set `ELASTICACHE_URI` to the domain name and port of your ElastiCache +database. + +The command will generate a Database Service configuration and place it at +`/etc/teleport.yaml`. + +## Step 3/6. Create IAM policies for Teleport + +(!docs/pages/includes/database-access/create-iam-role-step-description.mdx accessFor="ElastiCache databases" !) + +### Create an IAM role for Teleport + +(!docs/pages/includes/aws-credentials.mdx service="the Database Service"!) + +### Grant permissions + +(!docs/pages/includes/database-access/reference/aws-iam/elasticache/access-policy.mdx!) + +## Step 4/6. 
Start the Database Service + +(!docs/pages/includes/start-teleport.mdx service="the Database Service"!) + +## Step 5/6. Configure authentication for ElastiCache users + +Configure authentication for your ElastiCache database. The steps to follow +depend on whether you want to enable the Teleport Database Service to use IAM +authentication with ElastiCache, or authentication based on managing passwords +via AWS Secrets Manager: + + + + +To enable ACL, please see [Authenticating users with Role-Based Access +Control for +ElastiCache](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/Clusters.RBAC.html). + +Some additional limitations apply when using IAM authentication. For more +information, see +[ElastiCache Auth IAM Limits](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/auth-iam.html#auth-iam-limits). + +There are a few requirements for configuring an ElastiCache IAM-enabled user: +- The user must have identical username and user ID properties. +- The user must have its authentication mode set to "IAM". +- The user must be attached to an ElastiCache user group. + +Create an ElastiCache IAM-enabled user. +The following example creates an ElastiCache user with the access string +`on ~* +@all` that represents an active user with access to all available keys +and commands: +```code +$ aws elasticache create-user \ +  --user-name iam-user-01 \ +  --user-id iam-user-01 \ +  --authentication-mode Type=iam \ +  --engine redis \ +  --access-string "on ~* +@all" +``` + + +You may prefer a less permissive access string for your ElastiCache users. +For more information about ElastiCache access strings, please see +[ElastiCache Cluster RBAC Access String](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Clusters.RBAC.html#Access-string).
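+
+Before attaching the user to a user group, you can optionally confirm that it
+was created with IAM authentication enabled. The following is a sketch using
+the `iam-user-01` user from the example above; the reported authentication
+type should be `iam`:
+
+```code
+$ aws elasticache describe-users --user-id iam-user-01 \
+  --query "Users[0].Authentication"
+```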
+ + +Create an ElastiCache user group and attach it to your ElastiCache replication +group: +```code +$ aws elasticache create-user-group \ + --user-group-id iam-user-group-01 \ + --engine redis \ + --user-ids default iam-user-01 +$ aws elasticache modify-replication-group \ + --replication-group-id replication-group-01 \ + --user-group-ids-to-add iam-user-group-01 +``` + +Once the ElastiCache user has been created, verify that the user is configured +to satisfy the requirements for IAM authentication: + +![ElastiCache IAM-enabled User](../../../../../img/database-access/guides/redis/redis-aws-iam-user@2x.png) + + + + + +To enable ACL, please see [Authenticating users with Role-Based Access +Control for +ElastiCache](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/Clusters.RBAC.html). + +Once an ElastiCache user is created with the desired access, add an AWS resource +tag `teleport.dev/managed` with the value `true` to this user: + +![Managed User Tag](../../../../../img/database-access/guides/redis/redis-aws-managed-user-tag.png) + +The Database Service will automatically discover this user if it is associated +with a registered database. Keep in mind that it may take the Database Service +some time (up to 20 minutes) to discover this user once the tag is added. + + + + +(!docs/pages/includes/database-access/aws-redis-no-auth.mdx dbType="ElastiCache"!) + +## Step 6/6. Connect + +Once the Database Service has started and joined the cluster, log in to see the +registered databases: + + + +```code +$ tsh login --proxy=teleport.example.com --user=alice +$ tsh db ls +# Name Description Labels +# --------------------------- ----------- -------- +# my-cluster-mode-elasticache ... +# my-elasticache ... +# my-elasticache-reader ... +``` + + + + +```code +$ tsh login --proxy=mytenant.teleport.sh --user=alice +$ tsh db ls +# Name Description Labels +# --------------------------- ----------- -------- +# my-cluster-mode-elasticache ... +# my-elasticache ... 
+# my-elasticache-reader ... +``` + + + + + +To retrieve credentials for a database and connect to it: + +```code +$ tsh db connect --db-user=my-database-user my-elasticache +``` + +(!docs/pages/includes/database-access/aws-redis-tsh-db-user-auth.mdx!) + +To log out of the database and remove credentials: + +```code +# Remove credentials for a particular database instance. +$ tsh db logout my-elasticache +# Remove credentials for all database instances. +$ tsh db logout +``` + +## Troubleshooting + +(!docs/pages/includes/database-access/aws-troubleshooting.mdx!) + +## Next steps + +(!docs/pages/includes/database-access/guides-next-steps.mdx!) + diff --git a/docs/pages/enroll-resources/database-access/enroll-aws-databases/redshift-serverless.mdx b/docs/pages/enroll-resources/database-access/enrollment/aws/redshift-serverless.mdx similarity index 92% rename from docs/pages/enroll-resources/database-access/enroll-aws-databases/redshift-serverless.mdx rename to docs/pages/enroll-resources/database-access/enrollment/aws/redshift-serverless.mdx index 9e76bc6703b07..1f13068bab298 100644 --- a/docs/pages/enroll-resources/database-access/enroll-aws-databases/redshift-serverless.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/aws/redshift-serverless.mdx @@ -1,6 +1,12 @@ --- title: Database Access with Amazon Redshift Serverless +sidebar_label: Redshift Serverless description: How to configure Teleport database access with Amazon Redshift Serverless. +tags: + - how-to + - zero-trust + - infrastructure-identity + - aws --- (!docs/pages/includes/database-access/db-introduction.mdx dbType="Amazon Redshift Serverless" dbConfigure="with IAM authentication"!) 
@@ -16,10 +22,10 @@ This guide will help you to: -![Enroll Redshift with a Self-Hosted Teleport Cluster](../../../../img/database-access/guides/redshift_selfhosted_serverless.png) +![Enroll Redshift with a Self-Hosted Teleport Cluster](../../../../../img/database-access/guides/redshift_selfhosted_serverless.png) -![Enroll Redshift with a Cloud-Hosted Teleport Cluster](../../../../img/database-access/guides/redshift_cloud_serverless.png) +![Enroll Redshift with a Cloud-Hosted Teleport Cluster](../../../../../img/database-access/guides/redshift_cloud_serverless.png) @@ -203,9 +209,9 @@ Service or cloud tenant: ```code $ tsh login --proxy=mytenant.teleport.sh --user=alice $ tsh db ls -Name Description Labels ------------ ------------------------------ -------- -my-redshift ... +Name Description Labels +----------- ----------- ------ +my-redshift ... ``` To connect to the Redshift Serverless instance: @@ -221,7 +227,7 @@ Type "help" for help. dev=> ``` -(!docs/pages/includes/database-access/pg-access-webui.mdx!) +(!docs/pages/includes/database-access/db-access-webui-ad.mdx dbType="PostgreSQL"!) To log out of the database and remove credentials: @@ -259,7 +265,7 @@ prior to logging in as this new IAM role to avoid or resolve user permission iss - Learn more about [using IAM authentication to generate database user credentials](https://docs.aws.amazon.com/redshift/latest/mgmt/generating-user-credentials.html) for Amazon Redshift. -- Learn how to [restrict access](../rbac.mdx) to certain users and databases. -- View the [High Availability (HA)](../guides/ha.mdx) guide. -- Take a look at the YAML configuration [reference](../../../reference/agent-services/database-access-reference/configuration.mdx). +- Learn how to [restrict access](../../rbac.mdx) to certain users and databases. +- View the [High Availability (HA)](../../guides/ha.mdx) guide. +- Take a look at the YAML configuration [reference](../../reference/configuration.mdx). 
diff --git a/docs/pages/enroll-resources/database-access/enroll-azure-databases/azure-postgres-mysql.mdx b/docs/pages/enroll-resources/database-access/enrollment/azure/azure-postgres-mysql.mdx similarity index 87% rename from docs/pages/enroll-resources/database-access/enroll-azure-databases/azure-postgres-mysql.mdx rename to docs/pages/enroll-resources/database-access/enrollment/azure/azure-postgres-mysql.mdx index 427a3712fc406..c80d19e1e69d3 100644 --- a/docs/pages/enroll-resources/database-access/enroll-azure-databases/azure-postgres-mysql.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/azure/azure-postgres-mysql.mdx @@ -1,6 +1,12 @@ --- title: Database Access with Azure PostgreSQL and MySQL +sidebar_label: PostgreSQL/MySQL description: How to configure Teleport database access with Azure Database for PostgreSQL and MySQL. +tags: + - how-to + - zero-trust + - infrastructure-identity + - azure --- (!docs/pages/includes/database-access/db-introduction.mdx dbType="Azure PostgreSQL or MySQL" dbConfigure="with Microsoft Entra ID-based authentication"!) @@ -18,10 +24,10 @@ database. -![Enrolling Azure PostgreSQL/MySQL with a self-hosted Teleport cluster](../../../../img/database-access/guides/azure_selfhosted.png) +![Enrolling Azure PostgreSQL/MySQL with a self-hosted Teleport cluster](../../../../../img/database-access/guides/azure_selfhosted.png) -![Enrolling Azure PostgreSQL/MySQL with a cloud-hosted Teleport cluster](../../../../img/database-access/guides/azure_cloud.png) +![Enrolling Azure PostgreSQL/MySQL with a cloud-hosted Teleport cluster](../../../../../img/database-access/guides/azure_cloud.png) @@ -30,7 +36,7 @@ database. (!docs/pages/includes/edition-prereqs-tabs.mdx!) - Deployed Azure Database for PostgreSQL or MySQL server. -- Azure Active Directory administrative privileges. +- Microsoft Entra ID administrative privileges. - A host, e.g., an Azure VM instance, where you will run the Teleport Database Service. 
- (!docs/pages/includes/tctl.mdx!) @@ -39,7 +45,7 @@ database. ## Step 1/5. Configure Azure service principal To authenticate with PostgreSQL or MySQL databases, Teleport Database Service -needs to obtain access tokens from Azure AD. +needs to obtain access tokens from Microsoft Entra ID. (!docs/pages/includes/database-access/azure-configure-service-principal.mdx!) @@ -94,11 +100,11 @@ more information. Go to the [Subscriptions](https://portal.azure.com/#view/Microsoft_Azure_Billing/SubscriptionsBlade) page and select a subscription. Click on *Access control (IAM)* in the subscription and select *Add > Add custom role*: -![IAM custom role](../../../../img/azure/add-custom-role@2x.png) +![IAM custom role](../../../../../img/azure/add-custom-role@2x.png) In the custom role creation page, click the *JSON* tab and click *Edit*, then paste the JSON example and replace the subscription in "assignableScopes" with your own subscription id: -![Create JSON role](../../../../img/database-access/guides/azure/create-role-from-json@2x.png) +![Create JSON role](../../../../../img/database-access/guides/azure/create-role-from-json@2x.png) ### Create a role assignment for the Teleport Database Service principal @@ -107,42 +113,42 @@ and replace the subscription in "assignableScopes" with your own subscription id ## Step 3/5. Create Azure database users To let Teleport connect to your Azure database authenticating as a service -principal, you need to create Azure AD users authenticated by that principal in the database. +principal, you need to create Entra ID users authenticated by that principal in the database. -### Assign Azure AD administrator +### Assign Entra ID administrator -Only the Azure AD administrator for the database can connect to it and create -Azure AD users. +Only the Entra ID administrator for the database can connect to it and create +Entra ID users. 
Go to your database's **Authentication** page and set the AD admin using the edit button: -![Set AD admin](../../../../img/database-access/guides/azure/set-ad-admin.png) +![Set AD admin](../../../../../img/database-access/guides/azure/set-ad-admin.png) Go to your database's **Authentication** page and set the AD -admin by selecting **+ Add Azure AD Admins**: +admin by selecting **+ Add Entra ID Admins**: -![Set AD admin](../../../../img/database-access/guides/azure/set-ad-admin-postgres.png) +![Set AD admin](../../../../../img/database-access/guides/azure/set-ad-admin-postgres.png) Go to your database's *Active Directory admin* page and set the AD admin using the *Set admin* button: -![Set AD admin](../../../../img/database-access/guides/azure/set-ad-admin@2x.png) +![Set AD admin](../../../../../img/database-access/guides/azure/set-ad-admin@2x.png) - -Only one Azure user (or group) can be set as an Azure AD admin for the database. -If the Azure AD admin is removed from the server, all Azure AD logins will be disabled for the server. -Adding a new Azure AD admin from the same tenant will re-enable Azure AD logins. -Refer to [Use Azure Active Directory for authenticating with PostgreSQL](https://docs.microsoft.com/en-us/azure/postgresql/single-server/concepts-azure-ad-authentication) + +Only one Azure user (or group) can be set as an Entra ID admin for the database. +If the Entra ID admin is removed from the server, all Entra ID logins will be disabled for the server. +Adding a new Entra ID admin from the same tenant will re-enable Entra ID logins. +Refer to [Use Microsoft Entra ID for authenticating with PostgreSQL](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/security-entra-concepts) for more information. 
@@ -407,31 +413,7 @@ $ tsh db ls -To retrieve credentials for a database and connect to it: - - - - -```code -$ tsh db connect --db-user=teleport azure-db -``` - - - - -```code -$ tsh db connect --db-user=teleport --db-name=postgres azure-db -``` - - - - -(!docs/pages/includes/database-access/pg-access-webui.mdx!) - - -The appropriate database command-line client (`psql`, `mysql`) should be -available in the `PATH` of the machine you're running `tsh db connect` from. - +(!docs/pages/includes/database-access/pg-mysql-connect.mdx dbUser="teleport" dbService="azure-db"!) To log out of the database and remove credentials: diff --git a/docs/pages/enroll-resources/database-access/enroll-azure-databases/azure-redis.mdx b/docs/pages/enroll-resources/database-access/enrollment/azure/azure-redis.mdx similarity index 93% rename from docs/pages/enroll-resources/database-access/enroll-azure-databases/azure-redis.mdx rename to docs/pages/enroll-resources/database-access/enrollment/azure/azure-redis.mdx index f6c93bc18edf2..899380b6b04ee 100644 --- a/docs/pages/enroll-resources/database-access/enroll-azure-databases/azure-redis.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/azure/azure-redis.mdx @@ -1,6 +1,12 @@ --- title: Database Access with Azure Cache for Redis +sidebar_label: Azure Cache for Redis description: How to configure Teleport database access with Azure Cache for Redis +tags: + - how-to + - zero-trust + - infrastructure-identity + - azure --- (!docs/pages/includes/database-access/db-introduction.mdx dbType="Azure Cache for Redis" dbConfigure="with Microsoft Entra ID-based authentication"!) @@ -9,16 +15,16 @@ description: How to configure Teleport database access with Azure Cache for Redi The Teleport Database Service proxies traffic between Teleport users and Azure Cache for Redis. 
When a user connects to the database via Teleport, the Database -Service obtains an access token from Microsoft Entra ID (formerly Azure AD) and -authenticates to Azure as a principal with permissions to manage the database. +Service obtains an access token from Microsoft Entra ID and authenticates to +Azure as a principal with permissions to manage the database. -![Enroll Azure Cache for Redis with a Self-Hosted Teleport Cluster](../../../../img/database-access/guides/azure/redis-self-hosted.png) +![Enroll Azure Cache for Redis with a Self-Hosted Teleport Cluster](../../../../../img/database-access/guides/azure/redis-self-hosted.png) -![Enroll Azure Cache for Redis with a Cloud-Hosted Teleport Cluster](../../../../img/database-access/guides/azure/redis-cloud.png) +![Enroll Azure Cache for Redis with a Cloud-Hosted Teleport Cluster](../../../../../img/database-access/guides/azure/redis-cloud.png) @@ -139,11 +145,11 @@ you want to further limit the `assignableScopes`, you can use a resource group Now go to the [Subscriptions](https://portal.azure.com/#view/Microsoft_Azure_Billing/SubscriptionsBlade) page and select a subscription. 
Click on *Access control (IAM)* in the subscription and select *Add > Add custom role*: -![IAM custom role](../../../../img/azure/add-custom-role@2x.png) +![IAM custom role](../../../../../img/azure/add-custom-role@2x.png) In the custom role creation page, click the *JSON* tab and click *Edit*, then paste the JSON example and replace the subscription in `assignableScopes` with your own subscription id: -![Create JSON role](../../../../img/database-access/guides/azure/redis-create-role-from-json.png) +![Create JSON role](../../../../../img/database-access/guides/azure/redis-create-role-from-json.png) ### Create a role assignment for the Teleport Database Service principal diff --git a/docs/pages/enroll-resources/database-access/enroll-azure-databases/azure-sql-server-ad.mdx b/docs/pages/enroll-resources/database-access/enrollment/azure/azure-sql-server-ad.mdx similarity index 86% rename from docs/pages/enroll-resources/database-access/enroll-azure-databases/azure-sql-server-ad.mdx rename to docs/pages/enroll-resources/database-access/enrollment/azure/azure-sql-server-ad.mdx index 4e5927768e31e..8850fda0786cf 100644 --- a/docs/pages/enroll-resources/database-access/enroll-azure-databases/azure-sql-server-ad.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/azure/azure-sql-server-ad.mdx @@ -1,6 +1,12 @@ --- -title: Database Access with SQL Server on Azure +title: Database Access with Azure SQL Server +sidebar_label: SQL Server description: How to configure Teleport database access with Azure SQL Server using Microsoft Entra authentication. +tags: + - how-to + - zero-trust + - infrastructure-identity + - azure --- (!docs/pages/includes/database-access/db-introduction.mdx dbType="Azure SQL Server" dbConfigure="with Microsoft Entra ID-based authentication"!) 
@@ -8,18 +14,18 @@ description: How to configure Teleport database access with Azure SQL Server usi ## How it works The Teleport Database Service runs on an Azure virtual machine with an attached -Azure identity with permissions to retrieve authentication tokens from -Microsoft Entra ID. When a user connects to SQL Server with Teleport, the Teleport -Database service authenticates with Azure AD, then uses an authentication token -to connect to SQL Server. The Database Service then forwards user traffic to the -database. +Azure identity with permissions to retrieve authentication tokens from Microsoft +Entra ID. When a user connects to SQL Server with Teleport, the Teleport +Database service authenticates with Microsoft Entra ID, then uses an +authentication token to connect to SQL Server. The Database Service then +forwards user traffic to the database. -![Access Azure SQL Server Microsoft Entra Self-Hosted](../../../../img/database-access/guides/sqlserver/sql-aad.png) +![Access Azure SQL Server Microsoft Entra Self-Hosted](../../../../../img/database-access/guides/sqlserver/sql-aad.png) -![Access Azure SQL Server Microsoft Entra Cloud](../../../../img/database-access/guides/sqlserver/cloud-sql-aad.png) +![Access Azure SQL Server Microsoft Entra Cloud](../../../../../img/database-access/guides/sqlserver/cloud-sql-aad.png) @@ -49,7 +55,7 @@ Select **Microsoft Entra ID** under "Settings" in the left-hand column. Select **Set Admin**, and choose an account that will be added as an admin login to SQL Server. -![Azure SQL Server Microsoft Entra admin page](../../../../img/database-access/guides/sqlserver/azure-set-ad-admin.png) +![Azure SQL Server Microsoft Entra admin page](../../../../../img/database-access/guides/sqlserver/azure-set-ad-admin.png) ## Step 3/8. Configure IAM permissions for Teleport @@ -104,13 +110,13 @@ page and select a subscription. 
Click on **Access control (IAM)** in the subscription and select **Add** > **Add custom role**: -![IAM custom role](../../../../img/azure/add-custom-role@2x.png) +![IAM custom role](../../../../../img/azure/add-custom-role@2x.png) In the custom role creation page, click the **JSON** tab and click **Edit**, then paste the JSON example and replace the subscription in `assignableScopes` with your own subscription id: -![Create JSON role](../../../../img/database-access/guides/sqlserver/create-role-from-json.png) +![Create JSON role](../../../../../img/database-access/guides/sqlserver/create-role-from-json.png) ## Step 4/8. Configure virtual machine identities @@ -118,7 +124,7 @@ In the Teleport Database Service virtual machine's **Identity** section, enable the system assigned identity. This is used by Teleport to access Azure APIs. -![System assigned identity page](../../../../img/database-access/guides/sqlserver/system-managed-identity.png) +![System assigned identity page](../../../../../img/database-access/guides/sqlserver/system-managed-identity.png) To grant Teleport permissions, the custom role you created must be assigned to the virtual machine system assigned identity. On the same page, click on the **Azure @@ -134,8 +140,8 @@ for more information about Azure scopes and creating role assignments. ### Login identities -The Teleport Database Service needs access tokens from Azure AD to authenticate with -SQL Server databases. +The Teleport Database Service needs access tokens from Microsoft Entra ID to +authenticate with SQL Server databases. It uses the managed identities attached to its Virtual Machine to fetch the authentication token. @@ -144,19 +150,19 @@ To create a new user-assigned managed identity, go to the **Managed Identities** page in your [Azure Portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.ManagedIdentity%2FuserAssignedIdentities) and click on *Create*. 
Choose a name and resource group for it and create: -![Azure Create user managed identity page](../../../../img/database-access/guides/sqlserver/azure-user-managed-identity.png) +![Azure Create user managed identity page](../../../../../img/database-access/guides/sqlserver/azure-user-managed-identity.png) Next, go to the **Teleport Database Service virtual machine instance**, **Identity** section, select **User assigned**, and add the identity we just created: -![Azure Virtual machine user managed identities page](../../../../img/database-access/guides/sqlserver/azure-attach-managed-identity-vm.png) +![Azure Virtual machine user managed identities page](../../../../../img/database-access/guides/sqlserver/azure-attach-managed-identity-vm.png) ## Step 5/8. Enable managed identities login on SQL Server -Azure AD SQL Server integration uses database-level authentication (contained -users), meaning we must create a user for our identities on each database we -want to access. +The Microsoft Entra ID SQL Server integration uses database-level authentication +(contained users), meaning we must create a user for our identities on each +database we want to access. To create contained users for the identities, connect to your SQL server using its Activity Directory Admin and execute the query: diff --git a/docs/pages/enroll-resources/database-access/enrollment/azure/azure.mdx b/docs/pages/enroll-resources/database-access/enrollment/azure/azure.mdx new file mode 100644 index 0000000000000..2d7df2349ac5f --- /dev/null +++ b/docs/pages/enroll-resources/database-access/enrollment/azure/azure.mdx @@ -0,0 +1,16 @@ +--- +title: Enroll Azure Databases +sidebar_label: Azure +description: "Provides instructions on protecting databases in your Azure-managed infrastructure with Teleport." +tags: + - zero-trust + - infrastructure-identity + - azure +--- + +You can protect Azure-managed databases with Teleport. 
Learn how to enroll the +following databases: + +- [Azure SQL Server](azure-sql-server-ad.mdx) +- [Azure Database for PostgreSQL or MySQL Server](azure-postgres-mysql.mdx) +- [Azure Redis](azure-redis.mdx) diff --git a/docs/pages/enroll-resources/database-access/enrollment/enrollment.mdx b/docs/pages/enroll-resources/database-access/enrollment/enrollment.mdx new file mode 100644 index 0000000000000..93f69c0c131db --- /dev/null +++ b/docs/pages/enroll-resources/database-access/enrollment/enrollment.mdx @@ -0,0 +1,13 @@ +--- +title: Database Enrollment Guides +sidebar_label: Enrollment Guides +sidebar_position: 2 +description: Provides instructions on enrolling databases with your Teleport cluster for secure access, authentication, authorization, and audit. +--- + +The guides in this section show you how to enroll your database with Teleport +for secure access, authentication, authorization, and audit. + +Teleport supports the following kinds of databases: + + diff --git a/docs/pages/enroll-resources/database-access/enrollment/google-cloud/alloydb.mdx b/docs/pages/enroll-resources/database-access/enrollment/google-cloud/alloydb.mdx new file mode 100644 index 0000000000000..4f9b51410d2de --- /dev/null +++ b/docs/pages/enroll-resources/database-access/enrollment/google-cloud/alloydb.mdx @@ -0,0 +1,417 @@ +--- +title: Database Access with AlloyDB +sidebar_label: AlloyDB +description: How to configure Teleport database access with AlloyDB. +tags: + - how-to + - zero-trust + - infrastructure-identity + - google-cloud +--- + +(!docs/pages/includes/database-access/db-introduction.mdx dbType="AlloyDB" dbConfigure="with a service account"!) + +## How it works + +(!docs/pages/includes/database-access/how-it-works/iam.mdx db="AlloyDB" cloud="Google Cloud"!) + +![Teleport Architecture for AlloyDB Access](../../../../../img/database-access/guides/alloydb/architecture.png) + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx!) 
+
+- Google Cloud account with an AlloyDB cluster and instance deployed. Ensure that your instance is configured to use [IAM database authentication](https://cloud.google.com/alloydb/docs/database-users/manage-iam-auth).
+- Command-line client `psql` installed and added to your system's `PATH` environment variable.
+- A host, e.g., a Compute Engine instance, where you will run the Teleport Database Service.
+- (!docs/pages/includes/tctl.mdx!)
+
+## Step 1/5: Configure GCP IAM
+### IAM setup: roles for the database user and database service
+
+To grant Teleport access to your AlloyDB instances, you need to create two service accounts:
+- `teleport-db-service`: for the Teleport Database Service to access AlloyDB metadata.
+- `alloydb-user`: for end-users to authenticate to the database.
+
+### Create a service account for the Teleport Database Service
+
+A GCP service account will be used by the Teleport Database Service to create
+ephemeral access tokens for *other* GCP service accounts when it's acting on
+behalf of authorized Teleport users.
+
+Go to the [Service Accounts](https://console.cloud.google.com/iam-admin/serviceaccounts)
+page and create a service account:
+
+![Create System Service Account](../../../../../img/database-access/guides/alloydb/create-system-service-account.png)
+
+The Teleport Database Service needs permissions to call Google Cloud APIs to fetch
+database connection information and generate client certificates.
+
+Assign the predefined
+[`roles/alloydb.client` (Cloud AlloyDB Client)](https://cloud.google.com/alloydb/docs/reference/iam-roles-permissions)
+role to the `teleport-db-service` service account. This role grants the
+necessary permissions.
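+
+If you prefer the command line, you can grant the same role with `gcloud`. The
+following sketch assumes the `gcloud` CLI is installed and authenticated;
+replace `PROJECT_ID` with your Google Cloud project ID:
+
+```code
+$ gcloud projects add-iam-policy-binding PROJECT_ID \
+  --member="serviceAccount:teleport-db-service@PROJECT_ID.iam.gserviceaccount.com" \
+  --role="roles/alloydb.client"
+```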
+ +![Grant permissions to user Service Account](../../../../../img/database-access/guides/alloydb/system-service-account-permissions.png) + +### Create a service account for a database user + + + If you already have a standard GCP service account for database access, you can use it instead of creating a new one. Ensure it has the required permissions listed below. + + +Teleport uses service accounts to connect to AlloyDB databases. + +Go to the IAM & Admin [Service Accounts](https://console.cloud.google.com/iam-admin/serviceaccounts) +page and create a new service account named `alloydb-user`: + +![Create User Service Account](../../../../../img/database-access/guides/alloydb/create-user-service-account.png) + +Click "Create and continue". + +Assign the following [predefined roles](https://cloud.google.com/alloydb/docs/reference/iam-roles-permissions) to the `alloydb-user` service account: + +* Cloud AlloyDB Database User (`roles/alloydb.databaseUser`) +* Cloud AlloyDB Client (`roles/alloydb.client`) +* [Service Usage Consumer (`roles/serviceusage.serviceUsageConsumer`)](https://cloud.google.com/service-usage/docs/access-control#serviceusage.serviceUsageConsumer) + +![Grant permissions to user Service Account](../../../../../img/database-access/guides/alloydb/user-service-account-permissions.png) + +### Grant access to the service account + +The Teleport Database Service must be able to impersonate this service account. +Navigate to the `alloydb-user` service account overview page and select the +"Principals with Access" tab: + +![Select Principals with Access Tab](../../../../../img/database-access/guides/alloydb/user-service-account-principals-with-access.png) + +Click "Grant Access" and add the `teleport-db-service` principal ID. 
+Select the "Service Account Token Creator" role and save the change: + +![Grant Service Account Token Creator to Database Service](../../../../../img/database-access/guides/alloydb/user-service-account-grant-access.png) + + +## Step 2/5: Database configuration + +
+ Enabling IAM Authentication
+
+Teleport uses [IAM database authentication](https://cloud.google.com/alloydb/docs/database-users/manage-iam-auth)
+with AlloyDB instances.
+
+Ensure that your instance is configured to use IAM authentication. Navigate to your instance settings and confirm
+that the `alloydb.iam_authentication` flag is present under the Advanced Configuration Options section.
+
+![Enable IAM Authentication](../../../../../img/database-access/guides/alloydb/flag-iam-authentication-on.png)
+
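+
+You can also set the flag from the command line. This is a sketch and assumes a
+recent `gcloud` CLI; `INSTANCE`, `CLUSTER`, and `REGION` are placeholders for
+your instance name, cluster name, and region:
+
+```code
+$ gcloud alloydb instances update INSTANCE \
+  --cluster=CLUSTER \
+  --region=REGION \
+  --database-flags=alloydb.iam_authentication=on
+```
+
+Note that `--database-flags` replaces the full set of flags on the instance, so
+include any other flags you rely on in the same list.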
+ +### Create a database user + + + If your AlloyDB instance already has an IAM user configured for your designated service account, you can skip this step. + + +Go to the Users page of your AlloyDB instance and add a new user +account. In the sidebar, choose "Cloud IAM" authentication type and add the +`alloydb-user` service account that you created earlier. + +![Add AlloyDB User Account](../../../../../img/database-access/guides/alloydb/add-user-account.png) + +Press "Add" and your Users table should look similar to this: + +![AlloyDB User Accounts Table](../../../../../img/database-access/guides/alloydb/user-account-added.png) + +## Step 3/5: Create a host for the Database Service + + + If you already have a host running the Teleport Database Service, you can skip this step. Just ensure that the host is configured with the `teleport-db-service` service account's credentials, either by attaching the service account (for GCE) or through workload identity. + + +Create a Google Compute Engine (GCE) instance where you will run the Teleport Database Service. + +When creating the instance, in the "Security" section, attach the `teleport-db-service` service account you created earlier. This allows the Teleport Database Service to authenticate with Google Cloud APIs. + +
+ Attaching a service account to an existing GCE instance
+
+
+
+If you have an existing GCE instance, you can attach the service account through the Google Cloud Console.
+
+1. Navigate to the [VM instances](https://console.cloud.google.com/compute/instances) page and open your instance.
+2. Stop the instance. Wait for it to fully stop.
+3. Edit the instance details.
+4. Find the **Service account** dropdown in the **Identity and API access** section.
+5. Select the `teleport-db-service` service account.
+6. Save the changes and restart the instance.
+
+
+
+
+If you have an existing GCE instance, you can attach the service account using the `gcloud` command-line tool.
+
+In the following commands, replace these placeholders:
+- INSTANCE_NAME: the instance name
+- ZONE: the instance zone
+- PROJECT_ID: the GCP project ID
+
+First, stop the instance:
+```code
+$ gcloud compute instances stop INSTANCE_NAME --zone=ZONE
+```
+
+Then, set the service account:
+```code
+$ gcloud compute instances set-service-account INSTANCE_NAME \
+  --service-account=teleport-db-service@PROJECT_ID.iam.gserviceaccount.com \
+  --zone=ZONE
+```
+
+Restart the instance:
+```code
+$ gcloud compute instances start INSTANCE_NAME --zone=ZONE
+```
+
+Verify the instance is now running with the specified service account:
+```code
+$ gcloud compute instances describe INSTANCE_NAME --zone=ZONE \
+  --format="yaml(status,serviceAccounts)"
+```
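+
+To confirm from inside the VM that the expected service account is attached,
+you can also query the GCE metadata server, which exposes the attached
+account's email at a well-known endpoint:
+
+```code
+$ curl -H "Metadata-Flavor: Google" \
+  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email"
+```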
+ +If you are running the Teleport Database Service on a different host, you will need to provide credentials to the service. We recommend using [workload identity](https://cloud.google.com/iam/docs/workload-identity-federation). + +
+ Using service account keys (insecure)
+
+Alternatively, go to that service account's Keys tab and create a new key.
+
+Make sure to choose JSON format.
+
+Save the file. Set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to
+point to the JSON credentials file you downloaded earlier. For example, if you
+use `systemd` to start `teleport`, then you should edit the service's
+`EnvironmentFile` to include the env var:
+```code
+$ echo 'GOOGLE_APPLICATION_CREDENTIALS=/path/to/credentials.json' | sudo tee -a /etc/default/teleport
+```
+
+
+A service account key can be a security risk; we only describe using a key in
+this guide for simplicity.
+We do not recommend using service account keys in production.
+See [authentication](https://cloud.google.com/docs/authentication#service-accounts)
+in the Google Cloud documentation for more information about service account
+authentication methods.
+
+
+## Step 4/5: Configure Teleport
+### Install the Teleport Database Service
+
+(!docs/pages/includes/install-linux.mdx!)
+
+### Create a join token
+
+(!docs/pages/includes/tctl-token.mdx serviceName="Database" tokenType="db" tokenFile="/tmp/token"!)
+
+### Configure and start the Database Service
+
+In the command below, replace `PROXY_ADDRESS` with the
+host and port of your Teleport Proxy Service or Enterprise Cloud site, and
+replace `CONNECTION_URI` with your AlloyDB connection URI.
+
+The connection URI has the format `projects/PROJECT/locations/REGION/clusters/CLUSTER/instances/INSTANCE`.
+You can copy it from the AlloyDB instance details page in the Google Cloud
+console.
+
+![AlloyDB Connection URI](../../../../../img/database-access/guides/alloydb/connection-uri.png)
+
+Run the command as follows. Make sure to include the mandatory `alloydb://` prefix in the specified URI.
+
+```code
+$ sudo teleport db configure create \
+   -o file \
+   --name=alloydb \
+   --protocol=postgres \
+   --labels=env=dev \
+   --token=/tmp/token \
+   --proxy=PROXY_ADDRESS \
+   --uri=alloydb://CONNECTION_URI
+```
+
+This command will generate a Teleport Database Service configuration file and
+save it to `/etc/teleport.yaml`.
+
+
+By default, Teleport uses the [_private_](https://cloud.google.com/alloydb/docs/about-private-services-access) AlloyDB endpoint. To change this to either [public](https://cloud.google.com/alloydb/docs/connect-public-ip) or [PSC](https://cloud.google.com/alloydb/docs/about-private-service-connect) endpoints, update the `endpoint_type` field:
+
+```yaml
+db_service:
+  resources:
+  - name: alloydb
+    protocol: postgres
+    uri: alloydb://projects/PROJECT/locations/REGION/clusters/CLUSTER/instances/INSTANCE
+    gcp:
+      alloydb:
+        # one of: private | public | psc (default: private)
+        endpoint_type: private
+    static_labels:
+      env: dev
+```
+
+
+Dynamic resource
+
+As an alternative to configuring the database in `teleport.yaml`, you can create a dynamic database resource. This allows you to add or update databases without restarting the Database Service.
+
+Create a file named `alloydb.yaml` with the following content:
+
+```yaml
+kind: db
+version: v3
+metadata:
+  name: alloydb-dynamic
+  labels:
+    env: dev
+spec:
+  protocol: "postgres"
+  uri: "alloydb://CONNECTION_URI"
+  gcp:
+    alloydb:
+      # one of: private | public | psc (default: private)
+      endpoint_type: private
+```
+
+Replace `CONNECTION_URI` with your AlloyDB connection URI.
+
+Create the resource:
+
+```code
+$ tctl create -f alloydb.yaml
+```
+
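+
+To confirm that the dynamic resource was registered, you can fetch it back by
+name (using the `alloydb-dynamic` name from the example above):
+
+```code
+$ tctl get db/alloydb-dynamic
+```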
+
+Finally, start the Teleport Database Service:
+
+(!docs/pages/includes/start-teleport.mdx service="the Teleport Database Service"!)
+
+## Step 5/5: Connect to your database
+### Grant access to the database
+
+
+ The following commands create a new Teleport user and role. If you have an existing Teleport user and a role that grants access to resources with the `env: dev` label, you can skip these steps.
+
+
+(!docs/pages/includes/database-access/create-user.mdx!)
+
+### Connect
+
+Once the Database Service has joined the cluster, log in to see the available
+databases:
+
+```code
+$ tsh login --proxy=teleport.example.com --user=alice
+$ tsh db ls
+# Name    Description Labels
+# ------- ----------- -------
+# alloydb GCP AlloyDB env=dev
+```
+
+
+You will only be able to see databases that your Teleport role has
+access to. See our [RBAC](../../rbac.mdx) guide for more details.
+
+
+When connecting to the database, use the name of the database's service account
+that you added as an IAM database user earlier,
+minus the `.gserviceaccount.com` suffix. The database user name is shown on
+the Users page of your AlloyDB instance.
+
+In the command below, replace `PROJECT_ID` with your Google Cloud
+project ID. Retrieve credentials for the `alloydb` example database and connect
+to it:
+
+```code
+$ tsh db connect --db-user=alloydb-user@PROJECT_ID.iam --db-name=postgres alloydb
+```
+
+
+ Starting from version `17.1`, you can [access your PostgreSQL databases using the Web UI](../../../../connect-your-client/teleport-clients/web-ui.mdx#starting-a-database-session).
+
+
+To log out of the database and remove credentials:
+
+```code
+# Remove credentials for a particular database instance:
+$ tsh db logout alloydb
+# Or remove credentials for all databases:
+$ tsh db logout
+```
+
+## Optional: least-privilege access
+
+When possible, enforce least privilege by defining custom IAM roles that grant only the required permissions.
+ +### Custom role for the Teleport Database Service + +The Teleport Database Service, running as the `teleport-db-service` service account, needs permissions to access the AlloyDB instance. + +Create a custom role with the following permissions: + +```ini +# Used to generate client certificate +alloydb.clusters.generateClientCertificate +# Used to fetch connection information +alloydb.instances.connect +``` + +For impersonating the `alloydb-user` service account, the built-in "Service Account Token Creator" IAM role +is broader than necessary. To restrict permissions for that service account, create a custom role +that includes only: + +```ini +iam.serviceAccounts.getAccessToken +``` + +### Custom role for the database user + +The `alloydb-user` service account used for database access requires permissions to connect +to the instance and authenticate as a database user. Create a custom role with: + +```ini +alloydb.instances.connect +alloydb.users.login +serviceusage.services.use +``` + +## Troubleshooting + +(!docs/pages/includes/database-access/gcp-troubleshooting.mdx!) + +(!docs/pages/includes/database-access/pg-cancel-request-limitation.mdx!) + +(!docs/pages/includes/database-access/psql-ssl-syscall-error.mdx!) + +## Next steps + +(!docs/pages/includes/database-access/guides-next-steps.mdx!) + +{/* lint ignore list-item-spacing remark-lint */} +- Learn more about [managing IAM authentication](https://cloud.google.com/alloydb/docs/database-users/manage-iam-auth) for AlloyDB. + +{/* lint ignore list-item-spacing remark-lint */} +- Learn more about [authenticating as a service + account](https://cloud.google.com/docs/authentication#service-accounts) in + Google Cloud. + +{/* lint ignore list-item-spacing remark-lint */} +- Learn more about AlloyDB [Auth Proxy](https://cloud.google.com/alloydb/docs/auth-proxy/connect#required-iam-permissions). 
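+
+The custom roles listed in the least-privilege section can also be created with
+`gcloud`. This is a sketch; `PROJECT_ID` and the role IDs are placeholder names
+you can change:
+
+```code
+$ gcloud iam roles create teleport_db_service \
+  --project=PROJECT_ID \
+  --title="Teleport Database Service" \
+  --permissions=alloydb.clusters.generateClientCertificate,alloydb.instances.connect
+$ gcloud iam roles create alloydb_db_user \
+  --project=PROJECT_ID \
+  --title="AlloyDB Database User" \
+  --permissions=alloydb.instances.connect,alloydb.users.login,serviceusage.services.use
+```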
diff --git a/docs/pages/enroll-resources/database-access/enrollment/google-cloud/google-cloud.mdx b/docs/pages/enroll-resources/database-access/enrollment/google-cloud/google-cloud.mdx new file mode 100644 index 0000000000000..489ef18556aaa --- /dev/null +++ b/docs/pages/enroll-resources/database-access/enrollment/google-cloud/google-cloud.mdx @@ -0,0 +1,17 @@ +--- +title: Enroll Google Cloud Databases +sidebar_label: Google Cloud +description: "Provides instructions on protecting databases in your Google Cloud-managed infrastructure with Teleport." +tags: + - zero-trust + - infrastructure-identity + - google-cloud +--- + +You can protect databases hosted on Google Cloud with Teleport. Read the +following guides for instructions on enrolling a specific database: + +- [PostgreSQL on Google Cloud SQL](postgres-cloudsql.mdx) +- [MySQL on Google Cloud SQL](mysql-cloudsql.mdx) +- [Cloud Spanner](spanner.mdx) +- [AlloyDB](alloydb.mdx) diff --git a/docs/pages/enroll-resources/database-access/enroll-google-cloud-databases/mysql-cloudsql.mdx b/docs/pages/enroll-resources/database-access/enrollment/google-cloud/mysql-cloudsql.mdx similarity index 94% rename from docs/pages/enroll-resources/database-access/enroll-google-cloud-databases/mysql-cloudsql.mdx rename to docs/pages/enroll-resources/database-access/enrollment/google-cloud/mysql-cloudsql.mdx index ee6302d62f2a8..29ca7859d251d 100644 --- a/docs/pages/enroll-resources/database-access/enroll-google-cloud-databases/mysql-cloudsql.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/google-cloud/mysql-cloudsql.mdx @@ -1,6 +1,12 @@ --- title: Database Access with Cloud SQL for MySQL +sidebar_label: MySQL description: How to configure Teleport database access with Cloud SQL for MySQL. +tags: + - how-to + - zero-trust + - infrastructure-identity + - google-cloud --- (!docs/pages/includes/database-access/db-introduction.mdx dbType="MySQL on Google Cloud SQL" dbConfigure="with a service account"!) 
@@ -11,10 +17,10 @@ description: How to configure Teleport database access with Cloud SQL for MySQL. -![Self-Hosted Teleport Architecture for Cloud SQL Access](../../../../img/database-access/guides/cloudsql_selfhosted.png) +![Self-Hosted Teleport Architecture for Cloud SQL Access](../../../../../img/database-access/guides/cloudsql_selfhosted.png) -![Cloud-Hosted Teleport Architecture for Cloud SQL Access](../../../../img/database-access/guides/cloudsql_cloud.png) +![Cloud-Hosted Teleport Architecture for Cloud SQL Access](../../../../../img/database-access/guides/cloudsql_cloud.png) @@ -108,7 +114,7 @@ account. In the sidebar, choose "Cloud IAM" authentication type and add the "cloudsql-user" service account you created in [the second step](#step-29-create-a-service-account-for-a-database-user): -![Add Cloud SQL User Account](../../../../img/database-access/guides/cloudsql/add-user-account-mysql@2x.png) +![Add Cloud SQL User Account](../../../../../img/database-access/guides/cloudsql/add-user-account-mysql@2x.png) Press "Add". See [Creating and managing IAM users](https://cloud.google.com/sql/docs/mysql/add-manage-iam-users) in Google @@ -179,7 +185,7 @@ $ tsh db ls type="note" > You will only be able to see databases that your Teleport role has -access to. See our [RBAC](../rbac.mdx) guide for more details. +access to. See our [RBAC](../../rbac.mdx) guide for more details.
When connecting to the database, use either the database user name or the @@ -195,6 +201,8 @@ $ tsh db connect --db-user=cloudsql-user --db-name=mysql cloudsql $ tsh db connect --db-user=cloudsql-user@.iam.gserviceaccount.com --db-name=mysql cloudsql ``` +(!docs/pages/includes/database-access/db-access-webui-ad.mdx dbType="MySQL"!) + To log out of the database and remove credentials: ```code diff --git a/docs/pages/enroll-resources/database-access/enroll-google-cloud-databases/postgres-cloudsql.mdx b/docs/pages/enroll-resources/database-access/enrollment/google-cloud/postgres-cloudsql.mdx similarity index 90% rename from docs/pages/enroll-resources/database-access/enroll-google-cloud-databases/postgres-cloudsql.mdx rename to docs/pages/enroll-resources/database-access/enrollment/google-cloud/postgres-cloudsql.mdx index 172ca2d2a6ee4..918fe46d6ede6 100644 --- a/docs/pages/enroll-resources/database-access/enroll-google-cloud-databases/postgres-cloudsql.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/google-cloud/postgres-cloudsql.mdx @@ -1,7 +1,13 @@ --- title: Database Access with Cloud SQL for PostgreSQL +sidebar_label: PostgreSQL description: How to configure Teleport database access with Cloud SQL for PostgreSQL. videoBanner: br9LZ3ZXqCk +tags: + - how-to + - zero-trust + - infrastructure-identity + - google-cloud --- (!docs/pages/includes/database-access/db-introduction.mdx dbType="PostgreSQL on Google Cloud SQL" dbConfigure="with a service account"!) 
@@ -12,10 +18,10 @@ videoBanner: br9LZ3ZXqCk -![Self-Hosted Teleport Architecture for Cloud SQL Access](../../../../img/database-access/guides/cloudsql_selfhosted.png) +![Self-Hosted Teleport Architecture for Cloud SQL Access](../../../../../img/database-access/guides/cloudsql_selfhosted.png) -![Cloud-Hosted Teleport Architecture for Cloud SQL Access](../../../../img/database-access/guides/cloudsql_cloud.png) +![Cloud-Hosted Teleport Architecture for Cloud SQL Access](../../../../../img/database-access/guides/cloudsql_cloud.png) @@ -61,11 +67,11 @@ account. In the sidebar, choose "Cloud IAM" authentication type and add the "cloudsql-user" service account that you created in [the second step](#step-29-create-a-service-account-for-a-database-user): -![Add Cloud SQL User Account](../../../../img/database-access/guides/cloudsql/add-user-account@2x.png) +![Add Cloud SQL User Account](../../../../../img/database-access/guides/cloudsql/add-user-account@2x.png) Press "Add" and your Users table should look similar to this: -![Cloud SQL User Accounts Table](../../../../img/database-access/guides/cloudsql/user-accounts@2x.png) +![Cloud SQL User Accounts Table](../../../../../img/database-access/guides/cloudsql/user-accounts@2x.png) See [Creating and managing IAM users](https://cloud.google.com/sql/docs/postgres/create-manage-iam-users) in Google Cloud documentation for more info. @@ -135,7 +141,7 @@ $ tsh db ls type="note" > You will only be able to see databases that your Teleport role has -access to. See our [RBAC](../rbac.mdx) guide for more details. +access to. See our [RBAC](../../rbac.mdx) guide for more details.
When connecting to the database, use the name of the database's service account @@ -150,7 +156,7 @@ assigning to your Google Cloud project ID: $ tsh db connect --db-user=cloudsql-user@.iam --db-name=postgres cloudsql ``` -(!docs/pages/includes/database-access/pg-access-webui.mdx!) +(!docs/pages/includes/database-access/db-access-webui-ad.mdx dbType="PostgreSQL"!) To log out of the database and remove credentials: diff --git a/docs/pages/enroll-resources/database-access/enroll-google-cloud-databases/spanner.mdx b/docs/pages/enroll-resources/database-access/enrollment/google-cloud/spanner.mdx similarity index 87% rename from docs/pages/enroll-resources/database-access/enroll-google-cloud-databases/spanner.mdx rename to docs/pages/enroll-resources/database-access/enrollment/google-cloud/spanner.mdx index 81d222a0c52a3..8421cac87cac9 100644 --- a/docs/pages/enroll-resources/database-access/enroll-google-cloud-databases/spanner.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/google-cloud/spanner.mdx @@ -1,6 +1,12 @@ --- title: Database Access with Cloud Spanner +sidebar_label: Spanner description: How to configure Teleport database access with GCP's Cloud Spanner. +tags: + - how-to + - zero-trust + - infrastructure-identity + - google-cloud --- (!docs/pages/includes/database-access/db-introduction.mdx dbType="Cloud Spanner" dbConfigure="with a service account"!) @@ -12,11 +18,11 @@ description: How to configure Teleport database access with GCP's Cloud Spanner. 
-![Self-Hosted Teleport Architecture for Cloud Spanner Access](../../../../img/database-access/guides/spanner_selfhosted.png) +![Self-Hosted Teleport Architecture for Cloud Spanner Access](../../../../../img/database-access/guides/spanner_selfhosted.png) -![Cloud-Hosted Teleport Architecture for Cloud Spanner Access](../../../../img/database-access/guides/spanner_cloud.png) +![Cloud-Hosted Teleport Architecture for Cloud Spanner Access](../../../../../img/database-access/guides/spanner_cloud.png) @@ -52,7 +58,7 @@ Teleport users, but for this guide we will just create one. Go to the IAM & Admin [Service Accounts](https://console.cloud.google.com/iam-admin/serviceaccounts) page and create a new service account named "spanner-user": -![Create Service Account](../../../../img/database-access/guides/spanner/create-spanner-user@2x.png) +![Create Service Account](../../../../../img/database-access/guides/spanner/create-spanner-user@2x.png) Ignore the optional steps - just click "Done". Rather than granting access at the project level, we will grant this service @@ -64,12 +70,12 @@ Navigate to the [Spanner instance overview page](https://console.cloud.google.com/spanner/instances) and check the box of your Spanner instance, then click "Permissions". -![Open Cloud Spanner Instance Permissions](../../../../img/database-access/guides/spanner/select-instance@2x.png) +![Open Cloud Spanner Instance Permissions](../../../../../img/database-access/guides/spanner/select-instance@2x.png) In the permissions blade, click "Add Principal" then add the "spanner-user" service account as a principal and assign it the "Cloud Spanner Database User" role: -![Grant Cloud Spanner Database User to Service Account](../../../../img/database-access/guides/spanner/grant-service-account-access-to-instance@2x.png) +![Grant Cloud Spanner Database User to Service Account](../../../../../img/database-access/guides/spanner/grant-service-account-access-to-instance@2x.png) Click "Save". 
@@ -87,12 +93,12 @@ The Teleport Database Service must be able to impersonate this service account. Navigate to the "spanner-user" service account overview page and select the "permissions" tab: -![Select Service Account Permissions Tab](../../../../img/database-access/guides/spanner/service-account-permissions-tab@2x.png) +![Select Service Account Permissions Tab](../../../../../img/database-access/guides/spanner/service-account-permissions-tab@2x.png) Click "Grant Access" and add the "teleport-db-service" principal ID. Select the "Service Account Token Creator" role and save the change: -![Grant Service Account Token Creator to Database Service](../../../../img/database-access/guides/spanner/grant-token-creator@2x.png) +![Grant Service Account Token Creator to Database Service](../../../../../img/database-access/guides/spanner/grant-token-creator@2x.png) The "Service Account Token Creator" IAM role includes more permissions than @@ -181,7 +187,7 @@ spanner-example GCP Cloud Spanner [*] env=dev type="note" > You will only be able to see databases that your Teleport role has -access to. See our [RBAC](../rbac.mdx) guide for more details. +access to. See our [RBAC](../../rbac.mdx) guide for more details. When connecting to the database, use the name of the service account @@ -212,7 +218,7 @@ $ tsh db logout (!docs/pages/includes/database-access/guides-next-steps.mdx!) -- Learn how to [connect with a GUI client](../../../connect-your-client/gui-clients.mdx#cloud-spanner-datagrip). +- Learn how to [connect with a GUI client](../../../../connect-your-client/third-party/gui-clients.mdx#cloud-spanner-datagrip). - Learn more about [authenticating as a service account](https://cloud.google.com/docs/authentication#service-accounts) in Google Cloud. 
diff --git a/docs/pages/enroll-resources/database-access/enrollment/managed/managed.mdx b/docs/pages/enroll-resources/database-access/enrollment/managed/managed.mdx new file mode 100644 index 0000000000000..c317912d8c545 --- /dev/null +++ b/docs/pages/enroll-resources/database-access/enrollment/managed/managed.mdx @@ -0,0 +1,15 @@ +--- +title: Enroll Managed Databases +sidebar_label: Managed Databases +description: "Provides instructions on protecting managed databases in your infrastructure with Teleport." +tags: + - zero-trust + - infrastructure-identity +--- + +Teleport can protect databases offered as dedicated managed cloud platforms. +Learn how to enroll the following databases in your Teleport cluster: + +- [MongoDB Atlas](mongodb-atlas.mdx) +- [Oracle Exadata](oracle-exadata.mdx) +- [Snowflake](snowflake.mdx) diff --git a/docs/pages/enroll-resources/database-access/enroll-managed-databases/mongodb-atlas.mdx b/docs/pages/enroll-resources/database-access/enrollment/managed/mongodb-atlas.mdx similarity index 92% rename from docs/pages/enroll-resources/database-access/enroll-managed-databases/mongodb-atlas.mdx rename to docs/pages/enroll-resources/database-access/enrollment/managed/mongodb-atlas.mdx index b8706cba57d77..c5b50e05a11ea 100644 --- a/docs/pages/enroll-resources/database-access/enroll-managed-databases/mongodb-atlas.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/managed/mongodb-atlas.mdx @@ -1,7 +1,12 @@ --- title: Database Access with MongoDB Atlas +sidebar_label: MongoDB Atlas description: How to configure Teleport database access with MongoDB Atlas. videoBanner: mu_ZKTjnFJ8 +tags: + - how-to + - zero-trust + - infrastructure-identity --- (!docs/pages/includes/database-access/db-introduction.mdx dbType="MongoDB Atlas" dbConfigure="with either mutual TLS or AWS IAM authentication"!) 
@@ -23,10 +28,10 @@ or AWS IAM: -![Enroll MongoDB with a self-hosted Teleport cluster](../../../../img/database-access/guides/mongodbatlas_selfhosted.png) +![Enroll MongoDB with a self-hosted Teleport cluster](../../../../../img/database-access/guides/mongodbatlas_selfhosted.png) -![Enroll MongoDB with a cloud-hosted Teleport cluster](../../../../img/database-access/guides/mongodbatlas_cloud.png) +![Enroll MongoDB with a cloud-hosted Teleport cluster](../../../../../img/database-access/guides/mongodbatlas_cloud.png) @@ -101,7 +106,7 @@ db_service: (!docs/pages/includes/start-teleport.mdx service="the Teleport Database Service"!) -See the full [YAML reference](../../../reference/agent-services/database-access-reference/configuration.mdx) for details. +See the full [YAML reference](../../reference/configuration.mdx) for details.
@@ -112,12 +117,12 @@ See below for details on how to configure the Teleport Database Service. You will need to provide your Atlas cluster's connection endpoint for the `db_service.databases[*].uri` configuration option or `--uri` CLI flag. You can find this via the Connect dialog on the Database Deployments overview page: -![Connect](../../../../img/database-access/guides/atlas/atlas-connect-btn@2x.png) +![Connect](../../../../../img/database-access/guides/atlas/atlas-connect-btn@2x.png) Go through the "Setup connection security" step and select "Connect with the MongoDB shell" to view the connection string: -![Connection string](../../../../img/database-access/guides/atlas/atlas-connect@2x.png) +![Connection string](../../../../../img/database-access/guides/atlas/atlas-connect@2x.png) Use only the scheme and hostname parts of the connection string in the URI: @@ -154,7 +159,7 @@ You can discard the other `mongo.crt` file. Go to the Security / Advanced configuration section of your Atlas cluster and toggle "Self-managed X.509 Authentication" on: -![X.509](../../../../img/database-access/guides/atlas/atlas-self-managed-x509@2x.png) +![X.509](../../../../../img/database-access/guides/atlas/atlas-self-managed-x509@2x.png) Paste the contents of `mongo.cas` file in the Certificate Authority edit box and click Save. @@ -166,7 +171,7 @@ On the Security / Database Access page add a new database user with Certificate authentication method: {/*vale messaging.protocol-products = YES*/} -![Add user](../../../../img/database-access/guides/atlas/atlas-add-user@2x.png) +![Add user](../../../../../img/database-access/guides/atlas/atlas-add-user@2x.png) Make sure to specify the user as `CN=` as shown above since MongoDB treats the entire certificate subject as a username. When connecting to a @@ -224,7 +229,7 @@ User Privileges** section, give the user sufficient privileges to access the desired database data. 
{/*vale messaging.protocol-products = YES*/} -![Create AWS IAM database user](../../../../img/database-access/guides/atlas/atlas-add-aws-iam-user.png) +![Create AWS IAM database user](../../../../../img/database-access/guides/atlas/atlas-add-aws-iam-user.png) Please note that Teleport does not support authentication using AWS IAM users; it exclusively supports authentication using AWS IAM roles. diff --git a/docs/pages/enroll-resources/database-access/enroll-managed-databases/oracle-exadata.mdx b/docs/pages/enroll-resources/database-access/enrollment/managed/oracle-exadata.mdx similarity index 96% rename from docs/pages/enroll-resources/database-access/enroll-managed-databases/oracle-exadata.mdx rename to docs/pages/enroll-resources/database-access/enrollment/managed/oracle-exadata.mdx index a5355bb632861..28f8a1ff1779c 100644 --- a/docs/pages/enroll-resources/database-access/enroll-managed-databases/oracle-exadata.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/managed/oracle-exadata.mdx @@ -1,6 +1,11 @@ --- title: Database Access with Oracle Exadata +sidebar_label: Oracle Exadata description: How to configure Teleport database access with Oracle Exadata. +tags: + - how-to + - zero-trust + - infrastructure-identity --- (!docs/pages/includes/database-access/db-introduction.mdx dbType="Oracle Exadata" dbConfigure="with mTLS authentication"!) @@ -11,16 +16,16 @@ description: How to configure Teleport database access with Oracle Exadata. 
-![Enroll Oracle with a self-hosted Teleport cluster](../../../../img/database-access/guides/oracle_selfhosted.png) +![Enroll Oracle with a self-hosted Teleport cluster](../../../../../img/database-access/guides/oracle_selfhosted.png) -![Enroll Oracle with a cloud-hosted Teleport cluster](../../../../img/database-access/guides/oracle_selfhosted_cloud.png) +![Enroll Oracle with a cloud-hosted Teleport cluster](../../../../../img/database-access/guides/oracle_selfhosted_cloud.png) ## Prerequisites -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!) - Oracle Exadata server instance 19c or later. - The `sqlcl` [Oracle client](https://www.oracle.com/pl/database/sqldeveloper/technologies/sqlcl/) installed and added to your system's `PATH` environment variable or any GUI client that supports JDBC Oracle thin client. @@ -171,7 +176,7 @@ Install and configure Teleport where you will run the Teleport Database Service: -(!docs/pages/includes/install-linux-enterprise.mdx!) +(!docs/pages/includes/install-linux.mdx!) On the host where you will run the Teleport Database Service, start Teleport with the appropriate configuration. @@ -389,6 +394,14 @@ $ tsh db logout oracle $ tsh db logout ``` +## (Optional) Configure additional hostnames + +(!docs/pages/includes/database-access/oracle-multihost.mdx!) + +## Troubleshooting + +(!docs/pages/includes/database-access/oracle-troubleshooting.mdx!) + ## Next steps (!docs/pages/includes/database-access/guides-next-steps.mdx!) 
diff --git a/docs/pages/enroll-resources/database-access/enrollment/managed/snowflake.mdx b/docs/pages/enroll-resources/database-access/enrollment/managed/snowflake.mdx new file mode 100644 index 0000000000000..fd6598765c4e0 --- /dev/null +++ b/docs/pages/enroll-resources/database-access/enrollment/managed/snowflake.mdx @@ -0,0 +1,194 @@ +--- +title: Database Access with Snowflake +sidebar_label: Snowflake +description: How to configure Teleport database access with Snowflake. +tags: + - how-to + - zero-trust + - infrastructure-identity +--- + +(!docs/pages/includes/database-access/db-introduction.mdx dbType="Snowflake" dbConfigure="with key pair authentication"!) + +## How it works + +The Teleport Database Service communicates with Snowflake using HTTP messages +that contain JSON web tokens signed by the Teleport certificate authority for +database clients. Snowflake is configured to trust the Teleport database client +CA. When a user connects to Snowflake via Teleport, the Database Service +forwards the user's requests to Snowflake as Teleport-authenticated messages. + + + + ![Enroll Snowflake with a self-hosted Teleport cluster](../../../../../img/database-access/guides/snowflake_selfhosted.png) + + + ![Enroll Snowflake with a cloud-hosted Teleport cluster](../../../../../img/database-access/guides/snowflake_cloud.png) + + + + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx!) + +- Snowflake account with `SECURITYADMIN` role or higher. + +- `snowsql` installed and added to your system's `PATH` environment variable. + +- A host where you will run the Teleport Database Service. + + See [Installation](../../../../installation/installation.mdx) for details. + +- (!docs/pages/includes/tctl.mdx!) + +## Step 1/5. Set up the Teleport Database Service + +(!docs/pages/includes/tctl-token.mdx serviceName="Database" tokenType="db" tokenFile="/tmp/token"!) 
+ +Install and configure Teleport where you will run the Teleport Database Service: + + + + +(!docs/pages/includes/install-linux.mdx!) + +(!docs/pages/includes/database-access/db-configure-start.mdx dbName="example-snowflake" dbProtocol="snowflake" databaseAddress="abc12345.snowflakecomputing.com" !) + + + + Teleport provides Helm charts for installing the Teleport Database Service in Kubernetes clusters. + + (!docs/pages/includes/kubernetes-access/helm/helm-repo-add.mdx!) + + (!docs/pages/includes/database-access/db-helm-install.mdx dbName="example-snowflake" dbProtocol="snowflake" databaseAddress="abc12345.snowflakecomputing.com" !) + + + +(!docs/pages/includes/database-access/multiple-instances-tip.mdx !) + +## Step 2/5. Create a Teleport user + +(!docs/pages/includes/database-access/create-user.mdx!) + +## Step 3/5. Export a public key + +Use the `tctl auth sign` command below to export a public key for your Snowflake user: + +```code +$ tctl auth sign --format=snowflake --out=server +``` + +The command creates a `server.pub` file containing Teleport's public key. Teleport uses the corresponding private key to +generate JWTs (JSON Web Tokens) that authenticate to Snowflake. + + +## Step 4/5. Add the public key to your Snowflake user + +Use the public key you generated earlier to enable key pair authentication. + +Log in to your Snowflake instance and execute the SQL statement below: + +```sql +alter user alice set rsa_public_key='MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAv3dHYw4LJCcZzdbhb3hV...LwIDAQAB'; +``` + +In this statement, `alice` is the name of the Snowflake user, and `rsa_public_key` is the public key generated earlier without +the PEM header and footer (the first and last lines). 
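If you want to sanity-check the key material before pasting it into Snowflake, the transformation above can be sketched with `openssl`. This sketch generates a throwaway key pair for illustration; in practice you would apply the same commands to the `server.pub` file that `tctl auth sign` produced:

```code
# Throwaway RSA key pair purely for illustration; with Teleport, use the
# server.pub file from `tctl auth sign --format=snowflake` instead.
openssl genrsa -out demo.key 2048 2>/dev/null
openssl rsa -in demo.key -pubout -out demo.pub 2>/dev/null

# The value for rsa_public_key: the PEM body with the header, footer,
# and line breaks removed. It should start with "MII".
RSA_PUBLIC_KEY="$(grep -v '^-----' demo.pub | tr -d '\n')"
echo "$RSA_PUBLIC_KEY"

# SHA-256 fingerprint of the public key; Snowflake reports this in the
# RSA_PUBLIC_KEY_FP property, so you can compare the two values.
openssl rsa -pubin -in demo.pub -outform DER 2>/dev/null \
  | openssl dgst -sha256 -binary | openssl enc -base64
```

Snowflake stores the fingerprint with a `SHA256:` prefix; the base64 digest after that prefix should match the output of the last command.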
+ +You can use the `describe user` command to verify the user's public key: + +```sql +desc user alice; +``` + +See the [Snowflake documentation](https://docs.snowflake.com/en/user-guide/key-pair-auth.html#step-4-assign-the-public-key-to-a-snowflake-user) +for more details. + +## Step 5/5. Connect + +Log in to your Teleport cluster and see the available databases: + + + + + ```code + $ tsh login --proxy=teleport.example.com --user=alice + $ tsh db ls + # Name Description Labels + # ----------------- ------------------- -------- + # example-snowflake Example Snowflake ❄ env=dev + ``` + + + ```code + $ tsh login --proxy=mytenant.teleport.sh --user=alice + $ tsh db ls + # Name Description Labels + # ----------------- ------------------- -------- + # example-snowflake Example Snowflake ❄ env=dev + ``` + + + +To retrieve credentials for a database and connect to it: + +```code +$ tsh db connect --db-user=alice --db-name=SNOWFLAKE_SAMPLE_DATA example-snowflake +``` + +The `snowsql` command-line client should be available in the system `PATH` in order to be +able to connect. + +To log out of the database and remove credentials: + +```code +# Remove credentials for a particular database instance. +$ tsh db logout example-snowflake +# Remove credentials for all database instances. +$ tsh db logout +``` + +## Access Snowsight via Teleport + +The Teleport Database Service provides CLI and programmatic access to Snowflake +databases, but it does not provide direct access to the Snowsight web interface. +To enable Snowsight access, you can configure it as a SAML application in +Teleport, allowing Teleport to act as the Identity Provider (IdP) for Snowsight. + + +When Teleport is configured as the IdP for Snowsight, it only handles SAML +authentication. Snowsight activity is not audited or recorded through Teleport. 
+ + +Follow [Using Teleport as a SAML identity +provider](../../../../identity-governance/idps/saml-guide.mdx) and [Configuring +Snowflake to use federated +authentication](https://docs.snowflake.com/en/user-guide/admin-security-fed-auth-security-integration) +to set up Teleport as an IdP. + +Here is an example of the security integration to be created in your Snowflake database: +```sql +CREATE SECURITY INTEGRATION teleport_idp + TYPE = saml2 + ENABLED = true + SAML2_ISSUER = 'https://teleport.example.com/enterprise/saml-idp/metadata' + SAML2_SSO_URL = 'https://teleport.example.com/enterprise/saml-idp/sso' + SAML2_PROVIDER = 'custom' + SAML2_X509_CERT = 'MII...' + SAML2_ENABLE_SP_INITIATED = true + SAML2_SP_INITIATED_LOGIN_PAGE_LABEL = 'Teleport Login' +``` + +Replace the URLs and X.509 certificate with the values generated during the +enrollment flow in the Teleport Web UI. After creating the integration, describe +the integration to obtain the Snowflake URLs that must be configured in +Teleport. + +By default, Teleport passes your Teleport username as the Snowsight account +name. For custom mappings, see [SAML IdP Attribute +Mapping](../../../../identity-governance/idps/saml-attribute-mapping.mdx). + +## Next steps + +(!docs/pages/includes/database-access/guides-next-steps.mdx!) 
diff --git a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/cassandra-self-hosted.mdx b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/cassandra-self-hosted.mdx similarity index 96% rename from docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/cassandra-self-hosted.mdx rename to docs/pages/enroll-resources/database-access/enrollment/self-hosted/cassandra-self-hosted.mdx index 33109fbe865be..8a78dd0e1368c 100644 --- a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/cassandra-self-hosted.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/cassandra-self-hosted.mdx @@ -1,6 +1,11 @@ --- title: Database Access with Cassandra and ScyllaDB +sidebar_label: Cassandra/ScyllaDB description: How to configure Teleport database access with Cassandra and ScyllaDB. +tags: + - how-to + - zero-trust + - infrastructure-identity --- (!docs/pages/includes/database-access/self-hosted-introduction.mdx dbType="Cassandra or ScyllaDB"!) 
@@ -11,10 +16,10 @@ description: How to configure Teleport database access with Cassandra and Scylla -![Enroll Cassandra with a Self-Hosted Teleport Cluster](../../../../img/database-access/guides/cassandra_selfhosted.png) +![Enroll Cassandra with a Self-Hosted Teleport Cluster](../../../../../img/database-access/guides/cassandra_selfhosted.png) -![Enroll Cassandra with a Cloud-Hosted Teleport Cluster](../../../../img/database-access/guides/cassandra_cloud.png) +![Enroll Cassandra with a Cloud-Hosted Teleport Cluster](../../../../../img/database-access/guides/cassandra_cloud.png) diff --git a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/clickhouse-self-hosted.mdx b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/clickhouse-self-hosted.mdx similarity index 97% rename from docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/clickhouse-self-hosted.mdx rename to docs/pages/enroll-resources/database-access/enrollment/self-hosted/clickhouse-self-hosted.mdx index a69e23b764261..a770542c3c189 100644 --- a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/clickhouse-self-hosted.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/clickhouse-self-hosted.mdx @@ -1,6 +1,11 @@ --- title: Database Access with ClickHouse +sidebar_label: ClickHouse description: How to configure Teleport database access with ClickHouse. +tags: + - how-to + - zero-trust + - infrastructure-identity --- The Teleport Clickhouse integration allows you to enroll ClickHouse databases @@ -27,10 +32,10 @@ include audit logs for database query activity. 
-![Enroll ClickHouse with a self-hosted Teleport cluster](../../../../img/database-access/guides/clickhouse_selfhosted_selfhosted.png) +![Enroll ClickHouse with a self-hosted Teleport cluster](../../../../../img/database-access/guides/clickhouse_selfhosted_selfhosted.png) -![Enroll ClickHouse with a cloud-hosted Teleport cluster](../../../../img/database-access/guides/clickhouse_selfhosted_cloud.png) +![Enroll ClickHouse with a cloud-hosted Teleport cluster](../../../../../img/database-access/guides/clickhouse_selfhosted_cloud.png) diff --git a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/cockroachdb-self-hosted.mdx b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/cockroachdb-self-hosted.mdx similarity index 96% rename from docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/cockroachdb-self-hosted.mdx rename to docs/pages/enroll-resources/database-access/enrollment/self-hosted/cockroachdb-self-hosted.mdx index f458944ba330c..d295767076f00 100644 --- a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/cockroachdb-self-hosted.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/cockroachdb-self-hosted.mdx @@ -1,6 +1,11 @@ --- title: Database Access with Self-Hosted CockroachDB +sidebar_label: CockroachDB description: How to configure Teleport database access with self-hosted CockroachDB. +tags: + - how-to + - zero-trust + - infrastructure-identity --- (!docs/pages/includes/database-access/self-hosted-introduction.mdx dbType="CockroachDB"!) 
@@ -11,10 +16,10 @@ description: How to configure Teleport database access with self-hosted Cockroac -![Enrolling a CockroachDB instance with a self-hosted Teleport cluster](../../../../img/database-access/guides/cockroachdb_selfhosted.png) +![Enrolling a CockroachDB instance with a self-hosted Teleport cluster](../../../../../img/database-access/guides/cockroachdb_selfhosted.png) -![Enrolling a CockroachDB instance with a cloud-hosted Teleport cluster](../../../../img/database-access/guides/cockroachdb_cloud.png) +![Enrolling a CockroachDB instance with a cloud-hosted Teleport cluster](../../../../../img/database-access/guides/cockroachdb_cloud.png) @@ -251,6 +256,8 @@ $ tsh db connect --db-user=alice roach in order to be able to connect.
+(!docs/pages/includes/database-access/db-access-webui-ad.mdx dbType="CockroachDB"!) + To log out of the database and remove credentials: ```code diff --git a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/elastic.mdx b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/elastic.mdx similarity index 94% rename from docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/elastic.mdx rename to docs/pages/enroll-resources/database-access/enrollment/self-hosted/elastic.mdx index a9657ca408cc9..67ebce48c9eaf 100644 --- a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/elastic.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/elastic.mdx @@ -1,6 +1,11 @@ --- title: Database Access with Elasticsearch +sidebar_label: Elasticsearch description: How to configure Teleport database access with Elasticsearch. +tags: + - how-to + - zero-trust + - infrastructure-identity --- (!docs/pages/includes/database-access/self-hosted-introduction.mdx dbType="Elasticsearch"!) @@ -17,7 +22,7 @@ description: How to configure Teleport database access with Elasticsearch. - A host where you will run the Teleport Database Service. - See [Installation](../../../installation.mdx) for details. + See [Installation](../../../../installation/installation.mdx) for details. - Optional: a certificate authority that issues certificates for your self-hosted database. @@ -76,7 +81,7 @@ $ curl -u elastic:your_elasticsearch_password -X POST "https://elasticsearch.exa
Role Mapping with wildcards -In a scenario where Teleport is using [single sign-on](../../../admin-guides/access-controls/sso/sso.mdx) you may want to define a mapping for all users to a role: +In a scenario where Teleport is using [single sign-on](../../../../zero-trust-access/sso/sso.mdx) you may want to define a mapping for all users to a role: ```code $ curl -u elastic:your_elasticsearch_password -X POST "https://elasticsearch.example.com:9200/_security/role_mapping/mapping1?pretty" -H 'Content-Type: application/json' -d' @@ -183,7 +188,7 @@ Use one of the following commands to connect to the database: Note the assigned port, and provide it to your GUI client: -![ElasticVue](../../../../img/database-access/guides/elasticvue.png) +![ElasticVue](../../../../../img/database-access/guides/elasticvue.png) ## Next steps diff --git a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/mongodb-self-hosted.mdx b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/mongodb-self-hosted.mdx similarity index 97% rename from docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/mongodb-self-hosted.mdx rename to docs/pages/enroll-resources/database-access/enrollment/self-hosted/mongodb-self-hosted.mdx index 98172a9ae3ab3..55a199d05af64 100644 --- a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/mongodb-self-hosted.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/mongodb-self-hosted.mdx @@ -1,7 +1,12 @@ --- title: Database Access with Self-Hosted MongoDB +sidebar_label: MongoDB description: How to configure Teleport database access with self-hosted MongoDB. videoBanner: 6lgVObxoLkc +tags: + - how-to + - zero-trust + - infrastructure-identity --- (!docs/pages/includes/database-access/self-hosted-introduction.mdx dbType="MongoDB"!) 
@@ -12,10 +17,10 @@ videoBanner: 6lgVObxoLkc -![Enroll MongoDB with a Self-Hosted Teleport Cluster](../../../../img/database-access/guides/mongodb_selfhosted.png) +![Enroll MongoDB with a Self-Hosted Teleport Cluster](../../../../../img/database-access/guides/mongodb_selfhosted.png) -![Enroll MongoDB with a Cloud-Hosted Teleport Cluster](../../../../img/database-access/guides/mongodb_cloud.png) +![Enroll MongoDB with a Cloud-Hosted Teleport Cluster](../../../../../img/database-access/guides/mongodb_cloud.png) diff --git a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/mysql-self-hosted.mdx b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/mysql-self-hosted.mdx similarity index 92% rename from docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/mysql-self-hosted.mdx rename to docs/pages/enroll-resources/database-access/enrollment/self-hosted/mysql-self-hosted.mdx index 28dec6dd7c06c..7a362a83bff0b 100644 --- a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/mysql-self-hosted.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/mysql-self-hosted.mdx @@ -1,6 +1,11 @@ --- -title: Database Access with Self-Hosted MySQL/MariaDB +title: Database Access with Self-Hosted MySQL or MariaDB +sidebar_label: MySQL/MariaDB description: How to configure Teleport database access with self-hosted MySQL/MariaDB. +tags: + - how-to + - zero-trust + - infrastructure-identity --- (!docs/pages/includes/database-access/self-hosted-introduction.mdx dbType="MySQL or MariaDB"!) 
@@ -11,10 +16,10 @@ description: How to configure Teleport database access with self-hosted MySQL/Ma -![Enroll MySQL with a Self-Hosted Teleport Cluster](../../../../img/database-access/guides/mysql_selfhosted.png) +![Enroll MySQL with a Self-Hosted Teleport Cluster](../../../../../img/database-access/guides/mysql_selfhosted.png) -![Enroll MySQL with a Cloud-Hosted Teleport Cluster](../../../../img/database-access/guides/mysql_cloud.png) +![Enroll MySQL with a Cloud-Hosted Teleport Cluster](../../../../../img/database-access/guides/mysql_cloud.png) @@ -180,12 +185,12 @@ $ tsh db ls Note that you will only be able to see databases your role has access to. See -the [RBAC](../rbac.mdx) guide for more details. +the [RBAC](../../rbac.mdx) guide for more details. To retrieve credentials for a database and connect to it: ```code -$ tsh db connect --db-user=root --db-name=mysql example-mysql +$ tsh db connect --db-user=alice --db-name=mysql example-mysql ``` @@ -193,6 +198,8 @@ $ tsh db connect --db-user=root --db-name=mysql example-mysql able to connect. `mariadb` is a default command-line client for MySQL and MariaDB. +(!docs/pages/includes/database-access/db-access-webui-ad.mdx dbType="MySQL or MariaDB"!) 
+ To log out of the database and remove credentials: ```code diff --git a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/oracle-self-hosted.mdx b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/oracle-self-hosted.mdx similarity index 94% rename from docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/oracle-self-hosted.mdx rename to docs/pages/enroll-resources/database-access/enrollment/self-hosted/oracle-self-hosted.mdx index dfcc91c0c6949..18a5d86852586 100644 --- a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/oracle-self-hosted.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/oracle-self-hosted.mdx @@ -1,6 +1,11 @@ --- title: Database Access with Oracle +sidebar_label: Oracle description: How to configure Teleport database access with Oracle. +tags: + - how-to + - zero-trust + - infrastructure-identity --- (!docs/pages/includes/database-access/self-hosted-introduction.mdx dbType="Oracle"!) @@ -11,17 +16,17 @@ description: How to configure Teleport database access with Oracle. -![Enroll Oracle with a Self-Hosted Teleport Cluster](../../../../img/database-access/guides/oracle_selfhosted.png) +![Enroll Oracle with a Self-Hosted Teleport Cluster](../../../../../img/database-access/guides/oracle_selfhosted.png) -![Enroll Oracle with a Cloud-Hosted Teleport Cluster](../../../../img/database-access/guides/oracle_selfhosted_cloud.png) +![Enroll Oracle with a Cloud-Hosted Teleport Cluster](../../../../../img/database-access/guides/oracle_selfhosted_cloud.png) ## Prerequisites -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!) - Self-hosted Oracle server instance 18c or later. 
- The `sqlcl` [Oracle client](https://www.oracle.com/pl/database/sqldeveloper/technologies/sqlcl/) installed and added to your system's `PATH` environment variable or any GUI client that supports JDBC Oracle thin client. @@ -34,7 +39,7 @@ description: How to configure Teleport database access with Oracle. -To modify an existing user to provide access to the Database Service, see [Database Access Controls](../../database-access/rbac.mdx) +To modify an existing user to provide access to the Database Service, see [Database Access Controls](../../rbac.mdx) @@ -59,7 +64,7 @@ $ tctl users add \ For more detailed information about database access controls and how to restrict -access see [RBAC](../../database-access/rbac.mdx) documentation. +access see [RBAC](../../rbac.mdx) documentation. ## Step 2/6. Create a certificate/key pair and Teleport Oracle Wallet @@ -154,7 +159,7 @@ Install and configure Teleport where you will run the Teleport Database Service: -(!docs/pages/includes/install-linux-enterprise.mdx!) +(!docs/pages/includes/install-linux.mdx!) (!docs/pages/includes/database-access/self-hosted-config-start.mdx dbName="oracle" dbProtocol="oracle" databaseAddress="oracle.example.com:2484" dbName="oracle" !) @@ -258,6 +263,14 @@ $ tsh db logout oracle $ tsh db logout ``` +## (Optional) Configure additional hostnames + +(!docs/pages/includes/database-access/oracle-multihost.mdx!) + +## Troubleshooting + +(!docs/pages/includes/database-access/oracle-troubleshooting.mdx!) + ## Next steps (!docs/pages/includes/database-access/guides-next-steps.mdx!) 
diff --git a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/postgres-self-hosted.mdx b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/postgres-self-hosted.mdx similarity index 91% rename from docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/postgres-self-hosted.mdx rename to docs/pages/enroll-resources/database-access/enrollment/self-hosted/postgres-self-hosted.mdx index f334094884305..672c4c92f0730 100644 --- a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/postgres-self-hosted.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/postgres-self-hosted.mdx @@ -1,6 +1,11 @@ --- title: Database Access with Self-Hosted PostgreSQL +sidebar_label: PostgreSQL description: How to configure Teleport database access with self-hosted PostgreSQL. +tags: + - how-to + - zero-trust + - infrastructure-identity --- (!docs/pages/includes/database-access/self-hosted-introduction.mdx dbType="PostgreSQL"!) @@ -11,10 +16,10 @@ description: How to configure Teleport database access with self-hosted PostgreS -![Enroll PostgreSQL with a Self-Hosted Teleport Cluster](../../../../img/database-access/guides/postgresqlselfhosted_selfhosted.png) +![Enroll PostgreSQL with a Self-Hosted Teleport Cluster](../../../../../img/database-access/guides/postgresqlselfhosted_selfhosted.png) -![Enroll PostgreSQL with a Cloud-Hosted Teleport Cluster](../../../../img/database-access/guides/postgresqlselfhosted_cloud.png) +![Enroll PostgreSQL with a Cloud-Hosted Teleport Cluster](../../../../../img/database-access/guides/postgresqlselfhosted_cloud.png) @@ -134,7 +139,7 @@ $ tsh db ls Note that you will only be able to see databases your role has access to. See -[RBAC](../rbac.mdx) section for more details. +[RBAC](../../rbac.mdx) section for more details. 
To retrieve credentials for a database and connect to it: @@ -142,7 +147,7 @@ To retrieve credentials for a database and connect to it: $ tsh db connect --db-user=postgres --db-name=postgres example-postgres ``` -(!docs/pages/includes/database-access/pg-access-webui.mdx!) +(!docs/pages/includes/database-access/db-access-webui-ad.mdx dbType="PostgreSQL"!) To log out of the database and remove credentials: @@ -161,6 +166,6 @@ $ tsh db logout ## Next steps -- Set up [automatic database user provisioning](../auto-user-provisioning/postgres.mdx). +- Set up [automatic database user provisioning](../../auto-user-provisioning/postgres.mdx). (!docs/pages/includes/database-access/guides-next-steps.mdx!) diff --git a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/redis-cluster.mdx b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/redis-cluster.mdx similarity index 97% rename from docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/redis-cluster.mdx rename to docs/pages/enroll-resources/database-access/enrollment/self-hosted/redis-cluster.mdx index 4b243746d97c0..ab1ab28e11e54 100644 --- a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/redis-cluster.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/redis-cluster.mdx @@ -1,6 +1,11 @@ --- title: Database Access with Redis Cluster +sidebar_label: Redis Cluster description: How to configure Teleport database access with Redis Cluster. 
+tags: + - how-to + - zero-trust + - infrastructure-identity --- {/* vale messaging.protocol-products = NO */} @@ -15,10 +20,10 @@ If you want to configure Redis Standalone, please read [Database Access with Red -![Enroll Redis Cluster with a Self-Hosted Teleport Cluster](../../../../img/database-access/guides/rediscluster_selfhosted.png) +![Enroll Redis Cluster with a Self-Hosted Teleport Cluster](../../../../../img/database-access/guides/rediscluster_selfhosted.png) -![Enroll Redis Cluster with a Cloud-Hosted Teleport Cluster](../../../../img/database-access/guides/rediscluster_cloud.png) +![Enroll Redis Cluster with a Cloud-Hosted Teleport Cluster](../../../../../img/database-access/guides/rediscluster_cloud.png) diff --git a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/redis.mdx b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/redis.mdx similarity index 93% rename from docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/redis.mdx rename to docs/pages/enroll-resources/database-access/enrollment/self-hosted/redis.mdx index 257dec7d7f58c..e9833c11236a0 100644 --- a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/redis.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/redis.mdx @@ -1,6 +1,11 @@ --- title: Database Access with Redis +sidebar_label: Redis description: How to configure Teleport database access with Redis. 
+tags: + - how-to + - zero-trust + - infrastructure-identity --- {/* vale messaging.protocol-products = NO */} @@ -15,10 +20,10 @@ If you want to configure Redis Cluster, please read [Database Access with Redis -![Enroll Redis with a Self-Hosted Teleport Cluster](../../../../img/database-access/guides/redis_selfhosted.png) +![Enroll Redis with a Self-Hosted Teleport Cluster](../../../../../img/database-access/guides/redis_selfhosted.png) -![Enroll Redis with a Cloud-Hosted Teleport Cluster](../../../../img/database-access/guides/redis_cloud.png) +![Enroll Redis with a Cloud-Hosted Teleport Cluster](../../../../../img/database-access/guides/redis_cloud.png) @@ -37,7 +42,7 @@ If you want to configure Redis Cluster, please read [Database Access with Redis - A host where you will run the Teleport Database Service. - See [Installation](../../../installation.mdx) for details. + See [Installation](../../../../installation/installation.mdx) for details. - Optional: a certificate authority that issues certificates for your self-hosted database. @@ -121,4 +126,3 @@ returns the `ERR Teleport: not supported by Teleport` error. ## Next steps (!docs/pages/includes/database-access/guides-next-steps.mdx!) - diff --git a/docs/pages/enroll-resources/database-access/enrollment/self-hosted/self-hosted.mdx b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/self-hosted.mdx new file mode 100644 index 0000000000000..ed48343eff95b --- /dev/null +++ b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/self-hosted.mdx @@ -0,0 +1,26 @@ +--- +title: Enroll Self-Hosted Databases +sidebar_label: Self-Hosted Databases +description: "Provides instructions on protecting self-hosted databases in your infrastructure with Teleport." +tags: + - zero-trust + - infrastructure-identity +--- + +You can protect self-hosted databases with Teleport. 
Learn how to enroll your +database in your Teleport cluster with the following guides: + +- [Cassandra and + ScyllaDB](cassandra-self-hosted.mdx) +- [ClickHouse](clickhouse-self-hosted.mdx) +- [CockroachDB](cockroachdb-self-hosted.mdx) +- [Elastic](elastic.mdx) +- [MongoDB](mongodb-self-hosted.mdx) +- [MySQL](mysql-self-hosted.mdx) +- [Oracle](oracle-self-hosted.mdx) +- [PostgreSQL](postgres-self-hosted.mdx) +- [Redis Cluster](redis-cluster.mdx) +- [Redis](redis.mdx) +- [SQL Server with PKINIT + authentication](sql-server-ad-pkinit.mdx) +- [Vitess](vitess.mdx) diff --git a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/sql-server-ad-pkinit.mdx b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/sql-server-ad-pkinit.mdx similarity index 94% rename from docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/sql-server-ad-pkinit.mdx rename to docs/pages/enroll-resources/database-access/enrollment/self-hosted/sql-server-ad-pkinit.mdx index a49beca8985e3..c557920555d61 100644 --- a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/sql-server-ad-pkinit.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/sql-server-ad-pkinit.mdx @@ -1,6 +1,11 @@ --- title: Microsoft SQL Server access with PKINIT authentication +sidebar_label: SQL Server description: How to configure Microsoft SQL Server access with Active Directory PKINIT authentication. +tags: + - how-to + - zero-trust + - infrastructure-identity --- (!docs/pages/includes/database-access/db-introduction.mdx dbType="Microsoft SQL Server" dbConfigure="with PKINIT authentication"!) @@ -17,10 +22,10 @@ Teleport Database Service forwards user traffic to the database. 
-![Database access with SQL Server and AD authentication](../../../../img/database-access/sql-server-ad-1.png) +![Database access with SQL Server and AD authentication](../../../../../img/database-access/sql-server-ad-1.png) -![Database access with SQL Server and AD authentication](../../../../img/database-access/sql-server-ad-2.png) +![Database access with SQL Server and AD authentication](../../../../../img/database-access/sql-server-ad-2.png) @@ -86,6 +91,13 @@ You will need to repeat these steps if you rotate Teleport's database certificat $ tctl auth crl --type=db_client > db-ca.crl ``` + If you use HSM, there are multiple CRLs that have to be exported. + This can be done by running: + + ```code + $ tctl auth crl --type=db_client --out= + ``` + 1. Transfer the `db-ca.cer` and `db-ca.crl` files to a Windows machine where you can manage your group policy. ### Create a GPO and import the Teleport CA @@ -119,7 +131,7 @@ You will need to repeat these steps if you rotate Teleport's database certificat 1. Click through the wizard, selecting your CA file (`db-ca.cer`). - ![Import Teleport CA](../../../../img/desktop-access/ca.png) + ![Import Teleport CA](../../../../../img/desktop-access/ca.png) ### Enable smart card service @@ -134,7 +146,7 @@ Teleport performs certificate-based authentication by emulating a smart card. 1. Double click on `Smart Card`, select `Define this policy setting` and switch to `Automatic` then click `OK`. - ![Enable Smartcard](../../../../img/desktop-access/smartcard.png) + ![Enable Smartcard](../../../../../img/desktop-access/smartcard.png) You will be modifying GPOs, and sometimes GPO modifications can take some time @@ -169,6 +181,12 @@ Teleport CRL to your Active Directory domain (using the path to the exported certutil -dspublish -f TeleportDB ``` +If you use HSM, run commands printed at the end of the CRL export. 
They will be in the form of: + +```powershell +certutil -dspublish -f TeleportDB +``` + To avoid waiting until the certificate propagates, you can force the CA retrieval from LDAP after importing the CA and CRL with the command: @@ -384,7 +402,7 @@ db_service: + tls: + # Point it to your Database CA PEM certificate. + ca_cert_file: "rdsca.pem" -+ # If your database certificate has an empty CN filed, you must change ++ # If your database certificate has an empty CN field, you must change + # the TLS mode to only verify the CA. + mode: verify-ca ad: @@ -402,4 +420,3 @@ skipping TLS verification in production environments. ## Further reading - [Kerberos PKINIT authentication](https://web.mit.edu/kerberos/krb5-1.13/doc/admin/pkinit.html). - diff --git a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/vitess.mdx b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/vitess.mdx similarity index 96% rename from docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/vitess.mdx rename to docs/pages/enroll-resources/database-access/enrollment/self-hosted/vitess.mdx index 21e4df1440e61..ff2db3208532f 100644 --- a/docs/pages/enroll-resources/database-access/enroll-self-hosted-databases/vitess.mdx +++ b/docs/pages/enroll-resources/database-access/enrollment/self-hosted/vitess.mdx @@ -1,6 +1,11 @@ --- title: Database Access with Vitess (MySQL protocol) +sidebar_label: Vitess (MySQL protocol) description: How to configure Teleport database access for Vitess (MySQL protocol) +tags: + - how-to + - zero-trust + - infrastructure-identity --- (!docs/pages/includes/database-access/self-hosted-introduction.mdx dbType="Vitess (MySQL)"!) 
@@ -11,10 +16,10 @@ description: How to configure Teleport database access for Vitess (MySQL protoco -![Enroll Vitess with a self-hosted Teleport cluster](../../../../img/database-access/guides/vitess_selfhosted.png) +![Enroll Vitess with a self-hosted Teleport cluster](../../../../../img/database-access/guides/vitess_selfhosted.png) -![Enroll Vitess with a cloud-hosted Teleport cluster](../../../../img/database-access/guides/vitess_cloud.png) +![Enroll Vitess with a cloud-hosted Teleport cluster](../../../../../img/database-access/guides/vitess_cloud.png) @@ -184,7 +189,7 @@ $ tsh db ls Note that you will only be able to see databases your role has access to. See -the [RBAC](../rbac.mdx) guide for more details. +the [RBAC](../../rbac.mdx) guide for more details. To retrieve credentials for a database and connect to it: diff --git a/docs/pages/enroll-resources/database-access/faq.mdx b/docs/pages/enroll-resources/database-access/faq.mdx index 7fcbcb5bcf44d..fc1acefffa05e 100644 --- a/docs/pages/enroll-resources/database-access/faq.mdx +++ b/docs/pages/enroll-resources/database-access/faq.mdx @@ -1,6 +1,11 @@ --- title: Database Access FAQ +sidebar_label: FAQ description: Frequently asked questions about Teleport database access. +tags: + - faq + - zero-trust + - infrastructure-identity --- This page provides the answers to common questions about enrolling databases @@ -22,7 +27,7 @@ The Teleport Database Service currently supports the following protocols: - Oracle - OpenSearch - PostgreSQL -- Redis +- Redis and Valkey - Snowflake For PostgreSQL, Oracle and MySQL, the following Cloud-hosted versions are supported in addition to self-hosted deployments: @@ -57,7 +62,7 @@ The following PostgreSQL protocol features aren't currently supported: When configuring the Teleport Proxy Service, administrators can set the `postgres_public_addr` and `mysql_public_addr` configuration fields to public addresses over which respective database clients should connect. 
See -[Proxy Configuration](../../reference/agent-services/database-access-reference/configuration.mdx) for +[Proxy Configuration](reference/configuration.mdx) for more details. This is useful when the Teleport Web UI is running behind an L7 load balancer @@ -84,7 +89,7 @@ should work. Standard command-line clients such as `psql`, `mysql`, `mongo` or `mongosh` are supported. There are also instructions for configuring select -[graphical clients](../../connect-your-client/gui-clients.mdx). +[graphical clients](../../connect-your-client/third-party/gui-clients.mdx). ## When will you support X database? @@ -98,14 +103,14 @@ to register your interest. ## Can I provide a custom CA certificate? Yes, you can pass custom CA certificate by using a -[configuration file](../../reference/agent-services/database-access-reference/configuration.mdx) +[configuration file](reference/configuration.mdx) (look at `ca_cert_file`). ## Can I provide a custom DNS name for Teleport generated CA? Yes, use `server_name` under the `tls` section in your Teleport configuration file. Please look on our reference -[configuration file](../../reference/agent-services/database-access-reference/configuration.mdx) +[configuration file](reference/configuration.mdx) for more details. ## Can I disable CA verification when connecting to a database? @@ -115,7 +120,7 @@ person-in-the-middle attacks and makes sure that you are connected to the database that you intended to. Teleport also allows you to edit your -[configuration file](../../reference/agent-services/database-access-reference/configuration.mdx) +[configuration file](reference/configuration.mdx) to provide a custom CA certificate (`ca_cert_file`) or custom DNS name (`server_name`), which is more secure. @@ -123,7 +128,7 @@ If none of the above options work for you and you still want to disable the CA check, you can use `mode` under the `tls` option in the Teleport configuration file. 
For more details please refer to the reference -[configuration file](../../reference/agent-services/database-access-reference/configuration.mdx). +[configuration file](reference/configuration.mdx). ## Can I disable read-only and custom endpoints from auto-discovered databases? @@ -142,5 +147,5 @@ match: - "instance" ``` -See [labels reference](../../reference/agent-services/database-access-reference/labels.mdx) for a full list of Teleport +See [labels reference](reference/labels.mdx) for a full list of Teleport generated labels and values. diff --git a/docs/pages/enroll-resources/database-access/getting-started.mdx b/docs/pages/enroll-resources/database-access/getting-started.mdx index 09f6b97d260f9..d5ddbbbfd2220 100644 --- a/docs/pages/enroll-resources/database-access/getting-started.mdx +++ b/docs/pages/enroll-resources/database-access/getting-started.mdx @@ -1,6 +1,11 @@ --- title: Database Access Getting Started Guide +sidebar_label: Getting Started description: Getting started with Teleport database access and AWS Aurora PostgreSQL. +tags: + - get-started + - zero-trust + - infrastructure-identity --- (!docs/pages/includes/database-access/db-introduction.mdx dbType="PostgreSQL Amazon Aurora" dbConfigure="with IAM authentication"!) @@ -246,7 +251,7 @@ For the next steps, dive deeper into the topics relevant to your Database Access use-case, for example: - Check out configuration [guides](guides/guides.mdx). -- Learn how to configure [GUI clients](../../connect-your-client/gui-clients.mdx). +- Learn how to configure [GUI clients](../../connect-your-client/third-party/gui-clients.mdx). - Learn about database access [role-based access control](./rbac.mdx). - See [frequently asked questions](./faq.mdx). 
diff --git a/docs/pages/enroll-resources/database-access/guides/dynamic-registration.mdx b/docs/pages/enroll-resources/database-access/guides/dynamic-registration.mdx index bf9e5adbcec99..6df298b07ed33 100644 --- a/docs/pages/enroll-resources/database-access/guides/dynamic-registration.mdx +++ b/docs/pages/enroll-resources/database-access/guides/dynamic-registration.mdx @@ -1,6 +1,11 @@ --- title: Dynamic Database Registration +sidebar_label: Dynamic Registration description: Register/unregister databases without restarting Teleport. +tags: + - conceptual + - zero-trust + - infrastructure-identity --- Dynamic database registration allows Teleport administrators to register new @@ -97,7 +102,7 @@ spec: version: v5 ``` -See the full database resource spec [reference](../../../reference/agent-services/database-access-reference/configuration.mdx). +See the full database resource spec [reference](../reference/configuration.mdx). To create a database resource, run: @@ -128,10 +133,10 @@ $ tctl rm db/example Aside from `tctl`, dynamic resources can also be added by: - [Auto-Discovery](../../auto-discovery/databases/databases.mdx) -- [Terraform Provider](../../../admin-guides/infrastructure-as-code/terraform-provider/terraform-provider.mdx) -- [Kubernetes Operator](../../../admin-guides/infrastructure-as-code/teleport-operator/teleport-operator.mdx) -- [Teleport API](../../../admin-guides/api/api.mdx) +- [Terraform Provider](../../../zero-trust-access/infrastructure-as-code/terraform-provider/terraform-provider.mdx) +- [Kubernetes Operator](../../../zero-trust-access/infrastructure-as-code/teleport-operator/teleport-operator.mdx) +- [Teleport API](../../../zero-trust-access/api/api.mdx) -See [Using Dynamic Resources](../../../admin-guides/infrastructure-as-code/infrastructure-as-code.mdx) to learn +See [Using Dynamic Resources](../../../zero-trust-access/infrastructure-as-code/infrastructure-as-code.mdx) to learn more about managing Teleport's dynamic resources in general. 
diff --git a/docs/pages/enroll-resources/database-access/guides/guides.mdx b/docs/pages/enroll-resources/database-access/guides/guides.mdx
index 4bd860491828c..ddbd22346cacf 100644
--- a/docs/pages/enroll-resources/database-access/guides/guides.mdx
+++ b/docs/pages/enroll-resources/database-access/guides/guides.mdx
@@ -1,7 +1,11 @@
---
title: Using the Teleport Database Service
+sidebar_label: Configuration Guides
description: Guides to possibilities for running the Teleport Database Service.
-layout: tocless-doc
+template: "no-toc"
+tags:
+ - zero-trust
+ - infrastructure-identity
---

The Teleport Database Service proxies connections to databases protected by
@@ -13,6 +17,7 @@ enrolling databases:
 databases.
- [Dynamic Registration](dynamic-registration.mdx): Learn how to enroll
 databases without re-deploying the Teleport Database Service.
+- [Health Checks](health-checks.mdx): Learn how to configure Teleport database health checks and view health information.

The Teleport Database Service is one service that you can run on a
Teleport **agent.** Read the [Teleport Agents](../../agents/agents.mdx)
diff --git a/docs/pages/enroll-resources/database-access/guides/ha.mdx b/docs/pages/enroll-resources/database-access/guides/ha.mdx
index 6493d054bc87c..487ff8226adc1 100644
--- a/docs/pages/enroll-resources/database-access/guides/ha.mdx
+++ b/docs/pages/enroll-resources/database-access/guides/ha.mdx
@@ -1,6 +1,11 @@
---
title: Database Access High Availability (HA)
+sidebar_label: High Availability
description: How to configure Teleport database access in a Highly Available (HA) configuration.
+tags: + - conceptual + - zero-trust + - infrastructure-identity --- You can deploy the Teleport Database Service in a Highly Available (HA) diff --git a/docs/pages/enroll-resources/database-access/guides/health-checks.mdx b/docs/pages/enroll-resources/database-access/guides/health-checks.mdx new file mode 100644 index 0000000000000..586559a508d08 --- /dev/null +++ b/docs/pages/enroll-resources/database-access/guides/health-checks.mdx @@ -0,0 +1,266 @@ +--- +title: Teleport Database Health Checks +sidebar_label: Health Checks +description: How to configure Teleport database health checks and view target health information. +tags: +- conceptual +- zero-trust +- infrastructure-identity +--- + +This documentation explains how to manage database health checks. +Available in Teleport 18, database health checks are used to monitor connectivity from Teleport Database Service instances to databases enrolled in a Teleport cluster. + +Why monitor database connectivity with health checks? + +- **Observability**: Discover networking problems before a user attempts to connect to a database. Unhealthy databases are highlighted in the Teleport web UI and can be discovered programmatically through your Teleport cluster's API. +- **High Availability**: Multiple Teleport Database Service instances can be deployed to proxy connections to the same enrolled databases. Teleport prioritizes routing user connections through Database Service instances that can reach the database endpoint over those that cannot. + +## How it works + +When a user connects to a database through Teleport, the connection is routed via a Teleport Database Service. +The Database Service must be able to reach the database's endpoint to proxy the connection. + +Teleport Database Service instances will perform regular health checks for databases that match a global `health_check_config` resource's label selectors. 
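The TCP probe described here amounts to a timed connection attempt against the database endpoint. A minimal sketch of such a check (illustrative only; this is not Teleport's implementation):

```python
import socket


def tcp_health_check(host: str, port: int, timeout: float = 5.0) -> bool:
    """Attempt a TCP connection to the endpoint and report whether it succeeded.

    A successful connect within the timeout counts as a passing check;
    any dial error (refused, unreachable, timed out) counts as failing.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A real checker would also apply the interval, timeout, and threshold settings that `health_check_config` provides, and report the result in the service's heartbeat.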
+If no `health_check_config` matches a database, then health checks will not be enabled for that database.
+
+To perform a health check, Teleport Database Service instances will attempt to establish a TCP connection to the database's endpoint and report whether the connection succeeded.
+
+
+Only TCP health checks are available at this time.
+
+
+When a database is registered, it will initially have an "unknown" health status.
+If health checks are disabled, then the health status will remain "unknown".
+If health checks are enabled, then the first health check result will determine the database's health status: either "healthy" or "unhealthy".
+
+After the first health check for a database, its health status will only change once the number of consecutive passing or failing health checks reaches a configurable threshold.
+
+## Configuration
+
+Teleport's `health_check_config` resource determines which databases will have health checks enabled and what settings will be used for those checks:
+
+```yaml
+kind: health_check_config
+version: v1
+metadata:
+  name: example
+  description: Example health check configuration
+spec:
+  # interval is the time between each health check. Default 30s.
+  interval: 30s
+  # timeout is the health check connection establishment timeout. Default 5s.
+  timeout: 5s
+  # healthy_threshold is the number of consecutive passing health checks
+  # after which a target's health status becomes "healthy". Default 2.
+  healthy_threshold: 2
+  # unhealthy_threshold is the number of consecutive failing health checks
+  # after which a target's health status becomes "unhealthy". Default 1.
+  unhealthy_threshold: 1
+  # match is used to select databases that these settings apply to.
+  # Databases are matched by label selectors and at least one label selector
+  # must be set.
+  # If multiple `health_check_config` resources match the same database, then
+  # the matching health check configs are sorted by name and only the first
+  # config applies.
+ match: + # db_labels matches database labels. An empty value is ignored. + # If db_labels_expression is also set, then the match result is the logical + # AND of both. + db_labels: + - name: env + values: + - dev + - staging + # db_labels_expression is a label predicate expression to match databases. + # An empty value is ignored. + # If db_labels is also set, then the match result is the logical AND of both. + db_labels_expression: 'labels["owner"] == "database-team"' +``` + +There can be multiple `health_check_config` in a cluster and each config can provide different settings for different sets of databases. +If more than one `health_check_config` matches the same database, then the matching health check configs are sorted, in ascending order by name, and only the first config applies (e.g., the name "00-my-config" has greater precedence than "10-my-config"). + +### Managing configuration + +Use `tctl` to manage `health_check_config` resources in your cluster. + +A preset default `health_check_config` is created starting in Teleport 18. +The default config enables TCP health checks for all enrolled databases. 
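The threshold behavior configured by `healthy_threshold` and `unhealthy_threshold` can be sketched as a small state machine (an illustration of the documented rules, not Teleport's code):

```python
class TargetHealth:
    """Sketch of the documented status transitions for one database target."""

    def __init__(self, healthy_threshold: int = 2, unhealthy_threshold: int = 1):
        self.healthy_threshold = healthy_threshold
        self.unhealthy_threshold = unhealthy_threshold
        self.status = "unknown"  # status before the first check completes
        self.last_passed = None  # result kind of the current streak
        self.streak = 0          # consecutive checks with the same result

    def record(self, passed: bool) -> str:
        # Track the run of consecutive passing or failing checks.
        if passed == self.last_passed:
            self.streak += 1
        else:
            self.last_passed = passed
            self.streak = 1
        if self.status == "unknown":
            # The first result alone determines the initial status.
            self.status = "healthy" if passed else "unhealthy"
        elif passed and self.streak >= self.healthy_threshold:
            self.status = "healthy"
        elif not passed and self.streak >= self.unhealthy_threshold:
            self.status = "unhealthy"
        return self.status
```

With the defaults shown above, a single failure flips a healthy target to "unhealthy", while two consecutive passes are needed to recover.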
+
+Try viewing the default configuration with `tctl`:
+
+```code
+$ tctl get health_check_config/default
+kind: health_check_config
+metadata:
+  description: Enables all health checks by default
+  labels:
+    teleport.internal/resource-type: preset
+  name: default
+  namespace: default
+spec:
+  match:
+    db_labels:
+    - name: '*'
+      values:
+      - '*'
+version: v1
+```
+
+You can create your own health check config with `tctl create`:
+
+```code
+$ tctl create health_check_config.yaml
+```
+
+Or update an existing config interactively with `tctl edit`:
+
+```code
+$ tctl edit health_check_config/example
+```
+
+To delete a health check config, run:
+
+```code
+$ tctl rm health_check_config/example
+```
+
+When a config is created, updated, or deleted, all Teleport Database Service instances will reevaluate which health check config matches and applies for each of the registered databases they proxy.
+
+## Target health
+
+Target health information is reported by Teleport Database Service instances in `db_server` heartbeat objects for each database that they proxy.
+This information includes a health status field.
+There are three health statuses:
+
+- "unknown": Health checks for this database are disabled or still initializing.
+- "healthy": The Teleport service is able to reach the database's endpoint.
+- "unhealthy": The Teleport service is not able to reach the database's endpoint.
+
+
+Teleport Database Service instances running a version of Teleport that does not support health checks will not report target health information for the databases that they proxy.
+
+
+You can use `tctl` to view target health information in `db_server.status.target_health`, for example:
+
+```code
+$ tctl get db_server/example-postgres-db | yq -y .status
+target_health:
+  # address is the database address.
+  address: "localhost:5432"
+  # message is additional information meant for a user.
+  message: "1 health check failed"
+  # protocol is the health check connection protocol, such as "TCP".
+  protocol: "TCP"
+  # status is the health status, one of "unknown", "healthy", or "unhealthy".
+  status: "unhealthy"
+  # transition_reason is a unique reason for the last transition: one of "initialized", "disabled", "threshold_reached", or "internal_error".
+  transition_reason: "threshold_reached"
+  # transition_timestamp is the time that the last status transition occurred.
+  transition_timestamp: "2025-06-09T22:40:24.147753Z"
+  # transition_error shows the health check error observed when the transition to "unhealthy" happened.
+  transition_error: "dial tcp 127.0.0.1:5432: connect: connection refused"
+```
+
+## Troubleshooting
+
+The Teleport web UI will highlight unhealthy databases.
+You can click on a highlighted database to view health check failure details or other warnings:
+
+![Health warning indicator in web UI](../../../../img/resource-health-check/database-health-warning-indicator.png)
+
+In this AWS RDS database example, the health check error is a connection dial timeout, which is commonly caused by AWS security groups blocking connections to the database.
+
+You can use `tctl` to view more information about the affected Teleport Database Service.
+For example, the affected Teleport Database Service instances in the screenshot were deployed using the [Teleport AWS RDS enrollment wizard](../../application-access/cloud-apis/awsoidc-integration-rds.mdx), which is indicated by the `teleport.dev/awsoidc-agent` label: + +```code +$ tctl get db_service/22c8ecba-69fc-4c15-b94e-e8815236b9f0 +kind: db_service +metadata: + expires: "2025-06-29T03:55:52Z" + labels: + teleport.dev/awsoidc-agent: "true" + name: 22c8ecba-69fc-4c15-b94e-e8815236b9f0 + revision: 089129ef-52e2-4a8f-8451-c1e91879c14e +spec: + hostname: ip-192-168-0-107.ca-central-1.compute.internal + resources: + - aws: {} + labels: + account-id: "111111111111" + region: ca-central-1 + vpc-id: vpc-abc123abc123abc12 +version: v1 +``` + +Refer to the [Database Service troubleshooting guide](../troubleshooting.mdx) for general troubleshooting steps. + +## FAQ + +### Why are MySQL health checks disabled? + +MySQL tracks connection errors for each remote host that tries to connect to the database. +Each TCP health check is counted as a connection error and eventually MySQL will +block a host when `sum_connect_errors >= max_connect_errors`. +As a result, TCP health checks eventually cause MySQL databases to block Teleport. + +To re-enable TCP health checks: +- set `max_connect_errors` to its maximum value on the database to effectively disable the automatic host blocking. +- set the environment variable `TELEPORT_ENABLE_MYSQL_DB_HEALTH_CHECKS=1` on your Teleport Database Service instance(s). + + +[MySQL documentation](https://dev.mysql.com/doc/refman/8.4/en/host-cache.html#:~:text=The%20value%20of%20the%20max_connect_errors,host%20from%20further%20connection%20requests) notes: +> The value of the max_connect_errors system variable determines how many successive interrupted connection requests the server permits before blocking a host. 
After max_connect_errors failed requests without a successful connection, the server assumes that something is wrong (for example, that someone is trying to break in), and blocks the host from further connection requests. + +Setting `max_connect_errors` to its maximum value will effectively disable MySQL's host blocking feature. +See https://dev.mysql.com/doc/refman/8.4/en/server-system-variables.html#sysvar_max_connect_errors + + +### How to disable database health checks? + +You can disable health checks for databases by updating your cluster's `health_check_config` resources to only match specific databases. + +For example, update the default preset configuration to match only databases with the label `"healthcheck": "enabled"`: + +```code +$ tctl create < @@ -64,8 +69,8 @@ spec: # Note, this is not the same as the "name" field in "db_service", this is # the database names within a particular database instance. # - # Also note, this setting has effect only for PostgreSQL. It does not - # currently have any effect on MySQL databases/schemas. + # Also note, database names are only enforced for PostgreSQL, MongoDB, and + # Cloud Spanner databases. db_names: ["main", "metrics", "postgres"] ``` @@ -112,6 +117,9 @@ disconnects from the current one and establishes a new connection. During a PostgreSQL connection attempt, `db_names` field is checked against the name of the logical database that the user is connecting to. +Similar to PostgreSQL, Teleport also enforces `db_names` for MongoDB and Cloud +Spanner databases. + In MySQL a logical "database" and a "schema" are synonyms for each other, and the scope of permissions a user has once connected is determined by the permission grants set on the account within the database. As such, `db_names` role field @@ -122,7 +130,7 @@ is not currently enforced on MySQL connection attempts. Similar to other role fields, `db_*` fields support templating variables. 
The `external.xyz` traits are replaced with values from external [single -sign-on](../../admin-guides/access-controls/sso/sso.mdx) providers. For OIDC, they will be +sign-on](../../zero-trust-access/sso/sso.mdx) providers. For OIDC, they will be replaced with the value of an "xyz" claim. For SAML, they are replaced with an "xyz" assertion value. diff --git a/docs/pages/enroll-resources/database-access/reference/audit.mdx b/docs/pages/enroll-resources/database-access/reference/audit.mdx new file mode 100644 index 0000000000000..04c74780365de --- /dev/null +++ b/docs/pages/enroll-resources/database-access/reference/audit.mdx @@ -0,0 +1,163 @@ +--- +title: Database Access Audit Events Reference +sidebar_label: Audit Events +description: Audit events reference for Teleport database access. +tags: + - reference + - zero-trust + - infrastructure-identity +--- + + +(!docs/pages/includes/database-access/db-audit-events.mdx!) + +## db.session.start (TDB00I/W) + +Emitted when a client successfully connects to a database, or when a connection +attempt fails due to access denied. + +Successful connection event: + +```json +{ + "cluster_name": "root", // Teleport cluster name. + "code": "TDB00I", // Event code. + "db_name": "test", // Database/schema name. + "db_protocol": "postgres", // Database protocol. + "db_service": "local", // Database service name. + "db_uri": "localhost:5432", // Database server endpoint. + "db_user": "postgres", // Database account name. + "ei": 0, // Event index within the session. + "event": "db.session.start", // Event name. + "namespace": "default", // Event namespace, always "default". + "server_id": "05ff66c9-a948-42f4-af0e-a1b6ba62561e", // Database Service host ID. + "sid": "63b6fa11-cd44-477b-911a-602b75ab13b5", // Unique database session ID. + "success": true, // Indicates successful connection. + "time": "2021-04-27T23:00:26.014Z", // Event timestamp. + "uid": "eac5b6c8-384a-4471-9559-e135834b1ab0", // Unique event ID. 
+ "user": "alice" // Teleport user name. +} +``` + +Access denied event: + +```json +{ + "cluster_name": "root", // Teleport cluster name. + "code": "TDB00W", // Event code. + "db_name": "test", // Database/schema name user attempted to connect to. + "db_protocol": "postgres", // Database protocol. + "db_service": "local", // Database service name. + "db_uri": "localhost:5432", // Database server endpoint. + "db_user": "superuser", // Database account name user attempted to log in as. + "ei": 0, // Event index within the session. + "error": "access to database denied", // Connection error. + "event": "db.session.start", // Event name. + "message": "access to database denied", // Detailed error message. + "namespace": "default", // Event namespace, always "default". + "server_id": "05ff66c9-a948-42f4-af0e-a1b6ba62561e", // Database Service host ID. + "sid": "d18388e5-cc7c-4624-b22b-d36db60d0c50", // Unique database session ID. + "success": false, // Indicates unsuccessful connection. + "time": "2021-04-27T23:03:05.226Z", // Event timestamp. + "uid": "507fe008-99a4-4247-8603-6ba03408d047", // Unique event ID. + "user": "alice" // Teleport user name. +} +``` + +## db.session.end (TDB01I) + +Emitted when a client disconnects from the database. + +```json +{ + "cluster_name": "root", // Teleport cluster name. + "code": "TDB01I", // Event code. + "db_name": "test", // Database/schema name. + "db_protocol": "postgres", // Database protocol. + "db_service": "local", // Database service name. + "db_uri": "localhost:5432", // Database server endpoint. + "db_user": "postgres", // Database account name. + "ei": 3, // Event index within the session. + "event": "db.session.end", // Event name. + "sid": "63b6fa11-cd44-477b-911a-602b75ab13b5", // Unique database session ID. + "time": "2021-04-27T23:00:30.046Z", // Event timestamp. + "uid": "a626b22d-bbd0-40ef-9896-b7ff365664b0", // Unique event ID. + "user": "alice" // Teleport user name. 
+} +``` + +## db.session.query (TDB02I) + +Emitted when a client executes a SQL query. + +```json +{ + "cluster_name": "root", // Teleport cluster name. + "code": "TDB02I", // Event code. + "db_name": "test", // Database/schema name. + "db_protocol": "postgres", // Database protocol. + "db_query": "INSERT INTO public.test (id,\"timestamp\",json)\n\tVALUES ($1,$2,$3)", // Query text. + "db_query_parameters": [ // Query parameters (for prepared statements). + "test-id", + "2022-04-02 17:50:20-07", + "{\"k\": \"v\"}" + ], + "db_service": "local", // Database service name. + "db_uri": "localhost:5432", // Database server endpoint. + "db_user": "postgres", // Database account name. + "ei": 29, // Event index within the session. + "event": "db.session.query", // Event name. + "sid": "691e6f70-3c31-4412-90aa-fe0558abb212", // Unique database session ID. + "time": "2021-04-27T23:04:57.395Z", // Event timestamp. + "uid": "9f7b4179-b9cf-4302-bb7c-1408e404823f", // Unique event ID. + "user": "alice" // Teleport user name. +} +``` + +## db.session.spanner.rpc (TSPN001I/W) + +Emitted when a client executes a remote procedure call (RPC), or when an RPC +execution attempt fails due to access denied. + +```json +{ + "args": { // RPC arguments (specific to the "procedure" below). + "query_options": {}, + "request_options": {}, + "seqno": 1, + "session": "projects/project-id/instances/instance-id/databases/dev-db/sessions/ABCDEF1234567890", + "sql": "select * from TestTable", + "transaction": { + "Selector": { + "SingleUse": { + "Mode": { + "ReadOnly": { + "TimestampBound": { + "Strong": true + }, + "return_read_timestamp": true + } + } + } + } + } + }, + "cluster_name": "root", // Teleport cluster name. + "code": "TSPN001I", // Event code. + "db_name": "dev-db", // Database name. + "db_origin": "dynamic", // Teleport database service config origin. + "db_protocol": "spanner", // Database protocol. + "db_service": "teleport-spanner", // Database service name. 
+ "db_type": "spanner", // Database type. + "db_uri": "spanner.googleapis.com:443", // Database service endpoint. + "db_user": "some-user", // Database account name, (a GCP IAM service account name without its @.iam.gserviceaccount.com suffix). + "ei": 29, // Event index within the session. + "event": "db.session.spanner.rpc", // Event name. + "procedure": "ExecuteStreamingSql", // Name of the remote procedure call (RPC). + "sid": "406b9883-0e16-42f2-9d0b-b3bd956f9cd4", // Unique database session ID. + "success": true, // The RPC was allowed by Teleport RBAC. + "time": "2024-03-13T00:02:44.739Z", // Event timestamp. + "uid": "e0625e79-9399-4ea3-aa8b-dba1eb98658d", // Unique event ID. + "user": "alice@example.com" // Teleport user name. +} +``` diff --git a/docs/pages/enroll-resources/database-access/reference/aws.mdx b/docs/pages/enroll-resources/database-access/reference/aws.mdx new file mode 100644 index 0000000000000..f0e031013b097 --- /dev/null +++ b/docs/pages/enroll-resources/database-access/reference/aws.mdx @@ -0,0 +1,114 @@ +--- +title: Database Access AWS IAM Reference +sidebar_label: AWS IAM Policies +description: AWS IAM policies for Teleport database access. +tags: + - reference + - zero-trust + - infrastructure-identity +--- + +The Teleport Database Service requires IAM permissions for various tasks +depending on the database type and setup, such as discovering endpoints and +metadata of the database servers, generating IAM authentication tokens, and +assuming IAM roles. + +You can generate IAM permissions with the [`teleport db configure aws +print-iam`](cli.mdx) +command. For example, the following command would generate and print the IAM +policies: + +```code +$ teleport db configure aws print-iam --types rds,redshift --role teleport-db-service-role +``` + +To learn more about IAM permissions for a specific type of database, refer to +the related section below. 
+ +## DocumentDB + +(!docs/pages/includes/database-access/reference/aws-iam/documentdb/access-policy.mdx dbUserRole="documentdb-user-role"!) + +### IAM role as a DocumentDB database user + +(!docs/pages/includes/database-access/reference/aws-iam/documentdb/role-as-user-policy.mdx!) + +(!docs/pages/includes/database-access/iam_role_trust_relationship.mdx role1="teleport-db-service-role" role2="documentdb-user-role"!) + +## DynamoDB + +(!docs/pages/includes/database-access/reference/aws-iam/dynamodb/access-policy.mdx dbUserRole="dynamodb-user-role"!) + +### IAM role as a DynamoDB database user + +(!docs/pages/includes/database-access/reference/aws-iam/dynamodb/role-as-user-policy.mdx!) + +(!docs/pages/includes/database-access/iam_role_trust_relationship.mdx role1="teleport-db-service-role" role2="dynamodb-user-role"!) + +## ElastiCache for Redis and Valkey + +(!docs/pages/includes/database-access/reference/aws-iam/elasticache/access-policy.mdx!) + +### ElastiCache managed users + +(!docs/pages/includes/database-access/reference/aws-iam/redis/auto-password-access-policy.mdx dbType="ElastiCache" permissionType="elasticache" updateUserPermission="ModifyUser" listTagsPermission="ListTagsForResource"!) + +## ElastiCache Serverless for Redis and Valkey + +(!docs/pages/includes/database-access/reference/aws-iam/elasticache-serverless/access-policy.mdx!) + +## Keyspaces + +(!docs/pages/includes/database-access/reference/aws-iam/keyspaces/access-policy.mdx dbUserRole="keyspaces-user-role"!) + +### IAM role as a Keyspaces database user + +(!docs/pages/includes/database-access/reference/aws-iam/keyspaces/role-as-user-policy.mdx!) + +(!docs/pages/includes/database-access/iam_role_trust_relationship.mdx role1="teleport-db-service-role" role2="keyspaces-user-role"!) + +## MemoryDB + +(!docs/pages/includes/database-access/reference/aws-iam/memorydb/access-policy.mdx!) 
+ +### MemoryDB managed users + +(!docs/pages/includes/database-access/reference/aws-iam/redis/auto-password-access-policy.mdx dbType="MemoryDB" permissionType="memorydb" updateUserPermission="UpdateUser" listTagsPermission="ListTags"!) + +## OpenSearch + +(!docs/pages/includes/database-access/reference/aws-iam/opensearch/access-policy.mdx dbUserRole="opensearch-user-role"!) + +### IAM role as an OpenSearch database user + +(!docs/pages/includes/database-access/reference/aws-iam/opensearch/role-as-user-policy.mdx!) + +(!docs/pages/includes/database-access/iam_role_trust_relationship.mdx role1="teleport-db-service-role" role2="opensearch-user-role"!) + +## RDS + +(!docs/pages/includes/database-access/reference/aws-iam/rds/access-policy.mdx!) + +## RDS Proxy + +(!docs/pages/includes/database-access/reference/aws-iam/rds-proxy/access-policy.mdx!) + +## Redshift + +(!docs/pages/includes/database-access/reference/aws-iam/redshift/access-policy.mdx dbUserRole="redshift-user-role"!) + +### IAM role as a Redshift database user + +(!docs/pages/includes/database-access/reference/aws-iam/redshift/role-as-user-policy.mdx dbUserRole="redshift-user-role"!) + +(!docs/pages/includes/database-access/iam_role_trust_relationship.mdx role1="teleport-db-service-role" role2="redshift-user-role"!) + +## Redshift Serverless + +(!docs/pages/includes/database-access/reference/aws-iam/redshift-serverless/access-policy.mdx dbUserRole="redshift-serverless-user-role"!) + +### IAM role as a Redshift Serverless database user + +(!docs/pages/includes/database-access/reference/aws-iam/redshift-serverless/role-as-user-policy.mdx dbUserRole="redshift-serverless-user-role" !) + +(!docs/pages/includes/database-access/iam_role_trust_relationship.mdx role1="teleport-db-service-role" role2="redshift-serverless-user-role"!) 
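+
+The `iam_role_trust_relationship` snippets referenced above follow the
+standard IAM trust policy shape. As a hypothetical sketch, allowing the
+Database Service role to assume a database user role looks roughly like this
+(the account ID `123456789012` is a placeholder):
+
+```json
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Principal": {
+        "AWS": "arn:aws:iam::123456789012:role/teleport-db-service-role"
+      },
+      "Action": "sts:AssumeRole"
+    }
+  ]
+}
+```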
diff --git a/docs/pages/enroll-resources/database-access/reference/cli.mdx b/docs/pages/enroll-resources/database-access/reference/cli.mdx
new file mode 100644
index 0000000000000..c1bffc276ff23
--- /dev/null
+++ b/docs/pages/enroll-resources/database-access/reference/cli.mdx
@@ -0,0 +1,437 @@
+---
+title: Database Access CLI Reference
+sidebar_label: CLI Commands
+description: CLI reference for Teleport database access.
+tags:
+  - reference
+  - zero-trust
+  - infrastructure-identity
+---
+
+This reference shows you how to run common commands for managing the Teleport
+Database Service, including:
+
+- The `teleport` daemon command, which is executed on the host where you
+  will run the Teleport Database Service.
+
+- The `tctl` administration tool, which you use to manage `db` resources that
+  represent databases known to your Teleport cluster.
+
+  (!docs/pages/includes/tctl.mdx!)
+
+- The `tsh` client tool, which end-users run in order to access databases in
+  your cluster.
+
+## teleport db start
+
+Starts the Teleport Database Service.
+
+
+
+```code
+$ teleport db start \
+  --token=/path/to/token \
+  --auth-server=proxy.example.com:443 \
+  --name=example \
+  --protocol=postgres \
+  --uri=postgres.example.com:5432
+```
+
+
+
+```code
+$ teleport db start \
+  --token=/path/to/token \
+  --auth-server=mytenant.teleport.sh:443 \
+  --name=example \
+  --protocol=postgres \
+  --uri=postgres.mytenant.teleport.sh:5432
+```
+
+
+
+
+| Flag | Description |
+| - | - |
+| `-d/--debug` | Enable verbose logging to stderr. |
+| `--pid-file` | Full path to the PID file. By default no PID file will be created. |
+| `--auth-server` | Address of the Teleport Proxy Service. |
+| `--token` | Invitation token to register with the Auth Service. |
+| `--ca-pin` | CA pin to validate the Auth Service. |
+| `-c/--config` | Path to a configuration file (default `/etc/teleport.yaml`). |
+| `--labels` | Comma-separated list of labels for this node, for example `env=dev,app=web`. |
+| `--fips` | Start Teleport in FedRAMP/FIPS mode. |
+| `--name` | Name of the proxied database. |
+| `--description` | Description of the proxied database. |
+| `--protocol` | Proxied database protocol. Supported protocols are `postgres` and `mysql`. |
+| `--uri` | Address the proxied database is reachable at. |
+| `--ca-cert` | Database CA certificate path. |
+| `--aws-region` | (Only for RDS, Aurora or Redshift) AWS region the RDS, Aurora, or Redshift database instance is running in. |
+| `--aws-redshift-cluster-id` | (Only for Redshift) Redshift database cluster identifier. |
+| `--gcp-project-id` | (Only for Cloud SQL) GCP Cloud SQL project identifier. |
+| `--gcp-instance-id` | (Only for Cloud SQL) GCP Cloud SQL instance identifier. |
+
+## teleport db configure create
+
+Creates a sample Database Service configuration.
+
+
+
+```code
+$ teleport db configure create --rds-discovery=us-west-1 --rds-discovery=us-west-2
+$ teleport db configure create \
+  --token=/tmp/token \
+  --proxy=proxy.example.com:443 \
+  --name=example \
+  --protocol=postgres \
+  --uri=postgres://postgres.example.com:5432 \
+  --labels=env=prod
+```
+
+
+
+```code
+$ teleport db configure create --rds-discovery=us-west-1 --rds-discovery=us-west-2
+$ teleport db configure create \
+  --token=/tmp/token \
+  --proxy=mytenant.teleport.sh:443 \
+  --name=example \
+  --protocol=postgres \
+  --uri=postgres://postgres.mytenant.teleport.sh:5432 \
+  --labels=env=prod
+```
+
+
+
+
+| Flag | Description |
+| - | - |
+| `--proxy` | Teleport Proxy Service address to connect to. Default: `0.0.0.0:3080`. |
+| `--token` | Invitation token to register with the Auth Service. Default: none. |
+| `--rds-discovery` | List of AWS regions in which the agent will discover RDS/Aurora instances. |
+| `--rdsproxy-discovery` | List of AWS regions in which the agent will discover RDS Proxies. |
+| `--redshift-discovery` | List of AWS regions in which the agent will discover Redshift instances. |
+| `--redshift-serverless-discovery` | List of AWS regions in which the agent will discover Redshift Serverless instances. |
+| `--elasticache-discovery` | List of AWS regions in which the agent will discover ElastiCache Redis and Valkey clusters. |
+| `--elasticache-serverless-discovery` | List of AWS regions in which the agent will discover ElastiCache Serverless Valkey or Redis caches. |
+| `--aws-tags` | (Only for AWS discoveries) Comma-separated list of AWS resource tags to match, for example `env=dev,dept=it`. |
+| `--memorydb-discovery` | List of AWS regions in which the agent will discover MemoryDB clusters. |
+| `--azure-mysql-discovery` | List of Azure regions in which the agent will discover MySQL servers. |
+| `--azure-postgres-discovery` | List of Azure regions in which the agent will discover Postgres servers. |
+| `--azure-redis-discovery` | List of Azure regions in which the agent will discover Azure Cache For Redis servers. |
+| `--azure-subscription` | List of Azure subscription IDs for Azure discoveries. Default is "*". |
+| `--azure-resource-group` | List of Azure resource groups for Azure discoveries. Default is "*". |
+| `--azure-tags` | (Only for Azure discoveries) Comma-separated list of Azure resource tags to match, for example `env=dev,dept=it`. |
+| `--ca-pin` | CA pin to validate the Auth Service (can be repeated for multiple pins). |
+| `--name` | Name of the proxied database. |
+| `--protocol` | Proxied database protocol. Refer to the [configuration](./configuration.mdx#database-service-configuration) reference for supported values. |
+| `--uri` | Address the proxied database is reachable at. 
| +| `--labels` | Comma-separated list of labels for the database, for example env=dev,dept=it | +| `-o/--output` | Write to stdout with `--output=stdout`, the default config file with `--output=file`, or a custom path with `--output=file:///path` | +| `--dynamic-resources-labels` | Comma-separated list(s) of labels to match dynamic resources, for example env=dev,dept=it. Required to enable dynamic resources matching. | + +## teleport db configure bootstrap + +Bootstrap the necessary configuration for the Database Service. It reads the +provided configuration to determine what will be bootstrapped. + +```code +$ teleport db configure bootstrap -c /etc/teleport.yaml --attach-to-user TeleportUser +$ teleport db configure bootstrap -c /etc/teleport.yaml --attach-to-role TeleportRole +$ teleport db configure bootstrap -c /etc/teleport.yaml --manual +``` + +| Flag | Description | +| - | - | +| `-c/--config` | Path to a configuration file. Default: `/etc/teleport.yaml`. | +| `--manual` | When executed in "manual" mode, this command will print the instructions to complete the configuration instead of applying them directly. | +| `--policy-name` | Name of the Teleport Database Service policy. Default: `DatabaseAccess` | +| `--confirm` | Do not prompt the user and auto-confirm all actions. | +| `--attach-to-role` | Role name to attach the policy to. Mutually exclusive with `--attach-to-user`. If none of the attach-to flags is provided, the command will try to attach the policy to the current user/role based on the credentials. | +| `--attach-to-user` | User name to attach the policy to. Mutually exclusive with `--attach-to-role`. If none of the attach-to flags is provided, the command will try to attach the policy to the current user/role based on the credentials. | + +## teleport db configure aws print-iam + +Print the necessary IAM permissions required for the Database Service based on +provided database types. 
+
+```code
+$ teleport db configure aws print-iam --types rds
+$ teleport db configure aws print-iam --types rds,redshift --role my-db-service-role
+$ teleport db configure aws print-iam --types redshift-serverless --assumes-roles my-access-role --policy
+```
+
+| Flag | Description |
+| - | - |
+| `-r/--types` | Comma-separated list of database types to include in the policy. Any of `rds`, `rdsproxy`, `redshift`, `redshift-serverless`, `elasticache`, `elasticache-serverless`, `memorydb`, `keyspace`, `dynamodb`, `opensearch`. |
+| `--role` | IAM role name to attach the policy to. Mutually exclusive with `--user`. |
+| `--user` | IAM user name to attach the policy to. Mutually exclusive with `--role`. |
+| `--[no-]policy` | Only print an IAM policy document. |
+| `--[no-]boundary` | Only print an IAM boundary policy document. |
+| `--assumes-roles` | Comma-separated list of additional IAM roles that the IAM identity should be able to assume. Each role can be either an IAM role ARN or the name of a role in the identity's account. |
+
+## tctl auth sign
+
+When invoked with a `--format=db` (or `--format=mongodb` for MongoDB) flag,
+produces a CA certificate, a client certificate, and a private key file used for
+configuring the Database Service with self-hosted database instances.
+
+  For database formats, `tctl` must be run on an Auth Service host or the remote
+  user must be able to impersonate the built-in `Db` role and user. See the
+  [impersonation guide](../../../zero-trust-access/authentication/impersonation.mdx)
+  for details on how to allow impersonation.
+
+```code
+$ tctl auth sign --format=db --host=db.example.com --out=db --ttl=2190h
+$ tctl auth sign --format=db --host=host1,localhost,127.0.0.1 --out=db --ttl=2190h
+```
+
+In this example, `db.example.com` is the hostname where the Teleport Database
+Service can reach the database server. The second example assumes a
+database running on the same host as Teleport.
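+
+As a sketch of where the generated files end up, a self-hosted PostgreSQL
+server can present the Teleport-signed certificate and trust the Teleport CA.
+The paths below are illustrative; with `--out=db`, the command produces
+`db.crt`, `db.key`, and `db.cas`:
+
+```conf
+# postgresql.conf (illustrative paths): present the signed certificate and
+# key, and use the Teleport CA file for verifying connecting clients.
+ssl = on
+ssl_cert_file = '/var/lib/postgresql/db.crt'
+ssl_key_file = '/var/lib/postgresql/db.key'
+ssl_ca_file = '/var/lib/postgresql/db.cas'
+```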
+ +| Flag | Description | +| - | - | +| `--format` | When given value `db`, produces secrets in database compatible format. Use `mongodb` when generating MongoDB secrets. | +| `--host` | Comma-separated SANs to encode in the certificate. Must contain the hostname Teleport will use to connect to the database. | +| `--out` | Name prefix for output files. | +| `--ttl` | Certificate validity period. | + +
+
+Setting up RBAC for signing database certificates
+
+The `tctl` user must have permissions to impersonate the Teleport Database
+Service role, `Db`, in order to generate a signed database certificate. To add
+these impersonation privileges to your Teleport user, run the following
+commands.
+
+First, define a role that can impersonate the `Db` user. Add the following
+content to a file called `db-impersonator.yaml`:
+
+```yaml
+kind: role
+version: v5
+metadata:
+  name: db-impersonator
+spec:
+  allow:
+    impersonate:
+      users: ['Db']
+      roles: ['Db']
+```
+
+Create the role:
+
+```code
+$ tctl create -f db-impersonator.yaml
+```
+
+(!docs/pages/includes/create-role-using-web.mdx!)
+
+Open your Teleport user's dynamic configuration resource in your editor so you
+can add the `db-impersonator` role:
+
+```code
+$ TELEPORT_USER=
+$ tctl edit user/${TELEPORT_USER?}
+```
+
+Add the `db-impersonator` role:
+
+```diff
+spec:
+  roles:
+  - access
+  - auditor
+  - editor
++ - db-impersonator
+ status:
+  is_locked: false
+```
+
+Update your user by saving and closing the file in your editor.
+
+Log out of your Teleport cluster and log in again. You will now be able to run
+`tctl auth sign` for database-specific certificate formats.
+
+
+
+(!docs/pages/includes/database-access/ttl-note.mdx!)
+
+## tctl db ls
+
+Administrative command to list all databases registered with the cluster.
+
+```code
+$ tctl db ls
+$ tctl db ls --format=yaml
+```
+
+| Flag | Description |
+| - | - |
+| `--format` | Output format, one of `text`, `yaml` or `json`. Defaults to `text`. |
+
+## tctl get db
+
+Prints the list of all configured database resources.
+
+| Flag | Description |
+| - | - |
+| `--format` | Output format, one of `text`, `yaml` or `json`. Defaults to `yaml`. |
+
+## tctl get db/database-resource-name
+
+Prints details about the `database-resource-name` database resource.
+
+| Flag | Description |
+| - | - |
+| `--format` | Output format, one of `text`, `yaml` or `json`. Defaults to `yaml`. |
+
+## tctl rm db/database-resource-name
+
+Removes the database resource called `database-resource-name`.
+
+## tsh db ls
+
+Lists the databases available to the user based on
+[RBAC](../../../enroll-resources/database-access/rbac.mdx) and their connection
+information.
+
+```code
+# List all databases.
+$ tsh db ls
+# Search databases with keywords.
+$ tsh db ls --search foo,bar
+# Filter databases with labels.
+$ tsh db ls key1=value1,key2=value2
+# List databases from all clusters with extra fields.
+$ tsh db ls --all -v
+# Get database names using "jq".
+$ tsh db ls --format json | jq -r '.[].metadata.name'
+```
+
+| Flag | Description |
+| - | - |
+| `--search` | Comma-separated list of search keywords or phrases enclosed in quotations (e.g. `--search=foo,bar,"some phrase"`). |
+| `--query` | Query by predicate language enclosed in single quotes (e.g. `--query='labels["key1"] == "value1" && labels["key2"] != "value2"'`). |
+| `--format` | Format output (`text`, `json`, `yaml`). |
+
+## tsh db login
+
+Retrieves database credentials.
+
+```code
+$ tsh db login example
+$ tsh db login --db-user=postgres --db-name=postgres example
+```
+
+| Flag | Description |
+| - | - |
+| `--db-user` | The database user to log in as. 
| +| `--db-name` | The database name to log in to. | +| `--db-roles` | Comma-separated list of database roles to use for auto-provisioned user. If not provided, all database roles will be assigned. | + +(!docs/pages/includes/db-user-name-flags.mdx!) + +## tsh db logout + +Removes database credentials. + +```code +$ tsh db logout example +$ tsh db logout +``` + +## tsh db connect + +Connect to a database using its CLI client. + +```code +# Short syntax when only logged into a single database. +$ tsh db connect +# Specify database service to connect to explicitly. +$ tsh db connect example +# Provide database user and name to connect to. +$ tsh db connect --db-user=alice --db-name=db example +# Select a subset of allowed database roles. +$ tsh db connect --db-user=alice --db-name=db --db-roles reader example +``` + + + Respective database CLI clients (`psql`, `mysql`, `mongo` or `mongosh`) should be + available in PATH. + + +| Flag | Description | +| - | - | +| `--db-user` | The database user to log in as. | +| `--db-name` | The database name to log in to. | +| `--db-roles` | Comma-separated list of database roles to use for auto-provisioned user. If not provided, all database roles will be assigned. | + +(!docs/pages/includes/db-user-name-flags.mdx!) + +## tsh db exec + +Execute database commands on target database services. +```code +# Search databases with labels. +$ tsh db exec "source my_script.sql" --db-user mysql --labels key1=value1,key2=value2 +# Search databases with keywords. +$ tsh db exec "select 1" --db-user mysql --db-name mysql --search foo,bar +# Execute a command on specified target databases without confirmation. +$ tsh db exec "select @@hostname" --db-user mysql --dbs mydb1,mydb2,mydb3 --no-confirm +# Run commands in parallel, and save outputs to files. +$ tsh db exec "select 1" --db-user mysql --labels env=dev --parallel=5 --output-dir=exec-outputs +``` + + + Currently only PostgreSQL and MySQL databases are supported. 
Respective
+  database CLI clients (`psql`, `mysql`) should be available in PATH.
+
+
+{/* vale messaging.capitalization = NO */}
+
+| Flag | Description |
+| - | - |
+| `--db-user` | The database user to log in as. |
+| `--db-name` | The database name to log in to. |
+| `--db-roles` | Comma-separated list of database roles to use for the auto-provisioned user. |
+| `--dbs` | Comma-separated list of target database services. Mutually exclusive with `--search` or `--labels`. |
+| `--search` | Comma-separated list of search keywords or phrases enclosed in quotations (e.g. `--search=foo,bar,"some phrase"`). |
+| `--labels` | Comma-separated list of labels to filter databases by (e.g. `key1=value1,key2=value2`). |
+| `--output-dir` | Directory to store command output per target database service. A summary is saved as "summary.json". |
+| `--[no-]confirm` | Confirm selected database services before executing the command. |
+
+{/* vale messaging.capitalization = YES */}
+
+## tsh db env
+
+Outputs environment variables for a particular database.
+
+```code
+$ tsh db env
+$ tsh db env example
+$ eval $(tsh db env)
+```
+
+## tsh db config
+
+Prints database connection information. Useful when configuring GUI clients.
+
+```code
+$ tsh db config
+$ tsh db config example
+$ tsh db config --format=cmd example
+```
+
+| Flag | Description |
+| - | - |
+| `--format` | Output format: `text` is default, `cmd` to print the native database client connect command. |
diff --git a/docs/pages/enroll-resources/database-access/reference/configuration.mdx b/docs/pages/enroll-resources/database-access/reference/configuration.mdx
new file mode 100644
index 0000000000000..3e8976fa00b48
--- /dev/null
+++ b/docs/pages/enroll-resources/database-access/reference/configuration.mdx
@@ -0,0 +1,130 @@
+---
+title: Database Access Configuration Reference
+sidebar_label: Configuration
+description: Configuration reference for Teleport database access.
+tags:
+  - reference
+  - zero-trust
+  - infrastructure-identity
+---
+
+This guide explains configuration options for the Teleport Database Service,
+which proxies user traffic between Teleport users and protected databases.
+
+## Database service configuration
+
+The following snippet shows the full YAML configuration of the Database Service
+as it appears in the `teleport.yaml` configuration file:
+
+```yaml
+(!docs/pages/includes/config-reference/database-config.yaml!)
+```
+
+## Proxy configuration
+
+
+
+The following Proxy Service configuration is relevant for database access:
+
+
+The `--insecure-no-tls` `tsh` flag is only supported for MySQL/MariaDB and PostgreSQL
+connections using a unique port, specified with `mysql_public_addr` or `postgres_public_addr`.
+
+
+```yaml
+proxy_service:
+  enabled: true
+  # Database proxy is listening on the regular web proxy port.
+  web_listen_addr: "0.0.0.0:443"
+  # MySQL proxy is listening on a separate port and needs to be enabled
+  # on the proxy server.
+  mysql_listen_addr: "0.0.0.0:3036"
+  # MySQL Server version allows you to overwrite the default Teleport Proxy Service MySQL version (8.0.0-Teleport)
+  # Note that if the MySQL client connection is using TLS Routing the dynamic MySQL Server Version takes
+  # precedence over the mysql_server_version proxy settings.
+  # mysql_server_version: "8.0.4"
+  # Postgres proxy listening address. If provided, the proxy will use a separate listener
+  # instead of multiplexing the Postgres protocol on web_listen_addr.
+  # postgres_listen_addr: "0.0.0.0:5432"
+  # Mongo proxy listening address. If provided, the proxy will use a separate listener
+  # instead of multiplexing the Mongo protocol on web_listen_addr.
+  # mongo_listen_addr: "0.0.0.0:27017"
+  # By default database clients will be connecting to the Proxy over this
+  # hostname. To override public address for specific database protocols
+  # use postgres_public_addr and mysql_public_addr.
+ public_addr: "teleport.example.com:443" + # Address advertised to MySQL clients. If not set, public_addr is used. + mysql_public_addr: "mysql.teleport.example.com:3306" + # Address advertised to PostgreSQL clients. If not set, public_addr is used. + postgres_public_addr: "postgres.teleport.example.com:443" + # Address advertised to Mongo clients. If not set, public_addr is used. + mongo_public_addr: "mongo.teleport.example.com:443" +``` + + + + +Teleport Enterprise Cloud automatically configures the Teleport Proxy Service +with the following settings that are relevant to database access. This reference +configuration uses `mytenant.teleport.sh` in place of your Teleport Enterprise +Cloud tenant address: + +```yaml +proxy_service: + enabled: true + # Database proxy is listening on the regular web proxy port. + web_listen_addr: "0.0.0.0:3080" + # MySQL proxy is listening on a separate port. + mysql_listen_addr: "0.0.0.0:3036" + # Database clients will connect to the Proxy Service over this hostname. + public_addr: "mytenant.teleport.sh:443" + # Address advertised to MySQL clients. + mysql_public_addr: "mytenant.teleport.sh:3036" + # Address advertised to PostgreSQL clients. + postgres_public_addr: "mytenant.teleport.sh:443" + # Address advertised to Mongo clients. If not set, public_addr is used. + mongo_public_addr: "mytenant.teleport.sh:443" +``` + + + + +## Database resource + +The full YAML spec of database resources managed by `tctl` resource commands: + +(!docs/pages/includes/reference/resources/db.mdx!) + +You can create a new `db` resource by running the following commands, which +assume that you have created a YAML file called `db.yaml` with your configuration: + + + + +```code +# Log in to your cluster with tsh so you can use tctl from your local machine. +# You can also run tctl on your Auth Service host without running "tsh login" +# first.
+$ tsh login --proxy=teleport.example.com --user=myuser +# Create the resource +$ tctl create -f db.yaml +``` + + + + +```code +# Log in to your Teleport cluster so you can use tctl from your local machine. +$ tsh login --proxy=mytenant.teleport.sh --user=myuser +# Create the resource +$ tctl create -f db.yaml +``` + + + + + diff --git a/docs/pages/enroll-resources/database-access/reference/labels.mdx b/docs/pages/enroll-resources/database-access/reference/labels.mdx new file mode 100644 index 0000000000000..96831240a3cbf --- /dev/null +++ b/docs/pages/enroll-resources/database-access/reference/labels.mdx @@ -0,0 +1,84 @@ +--- +title: Database Labels Reference +sidebar_label: Labels +description: Database labels reference for Teleport database access. +tags: + - reference + - zero-trust + - infrastructure-identity +--- + +Teleport assigns system-defined labels to protected databases. This guide +describes the system-defined labels and how Teleport uses them. + +## Origin + +All registered databases have a predefined `teleport.dev/origin` label with one +of the following values: + +| Label Value | Description | +| - | - | +| `cloud` | Database resources created by auto-discovery. | +| `config` | Database resources manually defined in the `database_service.databases` section of `teleport.yaml`. | +| `dynamic` | Database resources created through [dynamic registration](../../../enroll-resources/database-access/guides/dynamic-registration.mdx), such as with the `tctl create` command. | + +## Auto-discovery + +The labels of auto-discovered databases primarily come from the tags that are +assigned to the original cloud resources, such as the resource tags of an +Amazon RDS instance. + +The following tags will override Teleport's default behavior if assigned to the +original cloud resources: + +| Tag name | Description | +| - | - | +| `TeleportDatabaseName` | Overrides the name of the discovered database.
| +| `teleport.dev/database_name` | (AWS only, legacy) Overrides the name of the discovered database. `TeleportDatabaseName` is preferred. | +| `teleport.dev/db-admin` | (AWS only) Specifies the name of the admin user for Automatic User Provisioning. | +| `teleport.dev/db-admin-default-database` | (AWS only) Overrides the default database the admin user logs into for Automatic User Provisioning. | + +Additionally, Teleport will generate certain labels derived from the cloud +resource attributes: + +| Label name | Description | +| - | - | +| `account-id` | ID of the AWS account the resource resides in. | +| `endpoint-type` | Type of the endpoint. See section below for more details. | +| `engine` | Amazon RDS: engine type of the RDS instance or Aurora cluster.
Amazon RDS Proxy: engine family of the proxy.
Azure-hosted databases: resource type of the resource ID. | +| `engine-version` | Database engine version, if available. | +| `namespace` | Amazon Redshift Serverless namespace name. | +| `region` | AWS region or Azure location. | +| `replication-role` | The replication role of an Azure DB Flexible server. | +| `source-server` | The source server of an Azure DB Flexible server replica. | +| `vpc-id` | ID of the Amazon VPC the resource resides in. Available for RDS, ElastiCache and MemoryDB databases. | +| `workgroup` | Amazon Redshift Serverless workgroup name. | +| `teleport.dev/discovery-type` | Specifies the type of resource matched by the Teleport Discovery Service, e.g. "rds", "redshift", etc. | + +### `endpoint-type` + +The following values are used to indicate the type of the database endpoint: + +| Database Type | Values | +| - | - | +| Amazon RDS instance | `instance` | +| Amazon RDS Aurora cluster | one of `primary`, `reader`, `custom` | +| Amazon RDS Proxy | one of `READ_WRITE`, `READ_ONLY` (custom endpoints only) | +| Amazon Redshift Serverless | one of `workgroup`, `vpc-endpoint` | +| Amazon ElastiCache | one of `configuration`, `primary`, `reader`, `node` | +| Amazon MemoryDB | one of `cluster`, `node` | +| Amazon OpenSearch | one of `default`, `custom`, `vpc` | +| Azure Redis Enterprise | one of `EnterpriseCluster`, `OSSCluster` | + +## Manual and dynamic registration + +Static and dynamic labels can be specified in the `labels` and +`dynamic_labels` fields of the database definition, respectively. See +[Configuration](./configuration.mdx) for reference. + +## Database Service on Amazon EC2 + +All registered databases can inherit the labels converted from the tags of the +EC2 instance running the Teleport Database Service. Labels created this way +will have the `aws/` prefix. See [Sync EC2 +Tags](../../../enroll-resources/automatic-labels/ec2-tags.mdx) for more details.
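The `teleport.dev/origin` label can be used like any other label in role
`db_labels` matchers. A hedged sketch of a role that grants access only to
auto-discovered databases (the role name, database user, and database names
are illustrative):

```yaml
kind: role
version: v7
metadata:
  name: cloud-db-access
spec:
  allow:
    # Match only databases registered by auto-discovery.
    db_labels:
      teleport.dev/origin: "cloud"
    db_users: ["reader"]
    db_names: ["*"]
```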
diff --git a/docs/pages/enroll-resources/database-access/reference/reference.mdx b/docs/pages/enroll-resources/database-access/reference/reference.mdx new file mode 100644 index 0000000000000..a918dbc14725f --- /dev/null +++ b/docs/pages/enroll-resources/database-access/reference/reference.mdx @@ -0,0 +1,15 @@ +--- +title: Database Access Reference +sidebar_label: Reference +sidebar_position: 8 +description: Configuration and CLI reference for the Teleport Database Service. +tags: + - zero-trust + - infrastructure-identity +--- + +- [Configuration](configuration.mdx) +- [CLI](cli.mdx) +- [Audit Events](audit.mdx) +- [AWS IAM](aws.mdx) +- [Database Labels](labels.mdx) diff --git a/docs/pages/enroll-resources/database-access/troubleshooting.mdx b/docs/pages/enroll-resources/database-access/troubleshooting.mdx index 4aadd877c5808..908433e654fa5 100644 --- a/docs/pages/enroll-resources/database-access/troubleshooting.mdx +++ b/docs/pages/enroll-resources/database-access/troubleshooting.mdx @@ -1,12 +1,23 @@ --- title: Troubleshooting Database Access -description: Common issues and resolutions for Teleport's Database Access +sidebar_label: Troubleshooting +description: Common issues and resolutions for protecting databases with Teleport. +tags: + - how-to + - zero-trust + - infrastructure-identity --- Common issues and resolution steps. ## Connection attempts fail +### Timeout errors + +Attempts to connect to the database fail with a message similar to **"dial tcp ... i/o timeout"**. + +(!docs/pages/includes/database-access/connection-timeout-troubleshooting.mdx!) + ### Certificate expired or is not yet valid Attempts to connect to the database fail, and the error message returned is @@ -35,16 +46,16 @@ Service can reach the PostgreSQL server. Each database uses a different format. 
You can check your database guide for more details and examples: -- [PostgreSQL](./enroll-self-hosted-databases/postgres-self-hosted.mdx#step-25-create-a-certificatekey-pair) -- [MySQL/MariaDB](./enroll-self-hosted-databases/mysql-self-hosted.mdx#step-24-create-a-certificatekey-pair) -- [MongoDB](./enroll-self-hosted-databases/mongodb-self-hosted.mdx#set-up-mutual-tls) -- [CockroachDB](./enroll-self-hosted-databases/cockroachdb-self-hosted.mdx#set-up-mutual-tls) -- [Redis](./enroll-self-hosted-databases/redis.mdx#step-45-set-up-mutual-tls) -- [Redis Cluster](./enroll-self-hosted-databases/redis-cluster.mdx#step-46-set-up-mutual-tls) +- [PostgreSQL](enrollment/self-hosted/postgres-self-hosted.mdx#step-25-create-a-certificatekey-pair) +- [MySQL/MariaDB](enrollment/self-hosted/mysql-self-hosted.mdx#step-24-create-a-certificatekey-pair) +- [MongoDB](enrollment/self-hosted/mongodb-self-hosted.mdx#set-up-mutual-tls) +- [CockroachDB](enrollment/self-hosted/cockroachdb-self-hosted.mdx#set-up-mutual-tls) +- [Redis](enrollment/self-hosted/redis.mdx#step-45-set-up-mutual-tls) +- [Redis Cluster](enrollment/self-hosted/redis-cluster.mdx#step-46-set-up-mutual-tls) After the new certificate is issued, update your database to make it take effect. -## Access to db denied +### Access to db denied Attempts to connect to the database fail with an error message similar to: **"access to db denied"**. @@ -146,17 +157,17 @@ This example is intentionally simple; we could have configured Alice's permissio For more detailed information about database access controls and how to restrict access, see the [RBAC](../database-access/rbac.mdx) documentation. -## Connection to MySQL database results in "Unknown system variable 'query_cache_size'" error +### Connection to MySQL database results in "Unknown system variable 'query_cache_size'" error When TLS Routing is disabled (the default), the Teleport Proxy Service returns `8.0.0-Teleport` as the MySQL server version.
In some cases, like connecting with a GUI Client, this can result in obtaining an `Unknown system variable 'query_cache_size'` error that indicates that MySQL capabilities were not properly negotiated between the MySQL client and server. One way to solve this issue is to [use the TLS Routing -feature](../../admin-guides/management/operations/tls-routing.mdx), where the Teleport Proxy +feature](../../zero-trust-access/deploy-a-cluster/tls-routing.mdx), where the Teleport Proxy Service propagates the correct MySQL server version via TLS Routing extensions. If migration to TLS Routing is not possible, another way to bypass this error is to use the [Teleport local proxy -command](../../connect-your-client/gui-clients.mdx#get-connection-information), +command](../../connect-your-client/third-party/gui-clients.mdx#how-gui-clients-access-teleport-protected-databases), which allows you to establish a TLS Routing connection to the Teleport Proxy Service even if TLS Routing was not enabled on the Teleport cluster. diff --git a/docs/pages/enroll-resources/desktop-access/active-directory.mdx b/docs/pages/enroll-resources/desktop-access/active-directory.mdx index 113882b977ea8..b0bb96efb3454 100644 --- a/docs/pages/enroll-resources/desktop-access/active-directory.mdx +++ b/docs/pages/enroll-resources/desktop-access/active-directory.mdx @@ -1,8 +1,12 @@ --- -title: Configure access for Active Directory manually +title: Configure Access for Active Directory Manually +sidebar_label: Manual Registration description: Explains how to manually connect Teleport to an Active Directory domain. videoBanner: YvMqgcq0MTQ -tocDepth: 3 +tags: + - how-to + - zero-trust + - infrastructure-identity --- This guide demonstrates how to connect an Active Directory domain and how to log @@ -17,33 +21,15 @@ Service for Microsoft Entra ID. 
To complete the steps in this guide, verify your environment meets the following requirements: -- Access to a running Teleport cluster, `tctl` admin tool, and `tsh` client tool - version >= (=teleport.version=). - - You can verify the tools you have installed by running the following commands: - - ```code - $ tctl version - # Teleport Enterprise v(=teleport.version=) go(=teleport.golang=) - - $ tsh version - # Teleport v(=teleport.version=) go(=teleport.golang=) - ``` - - You can download these tools by following the appropriate [Installation - instructions](../../installation.mdx#linux) for the Teleport - edition you use. +(!docs/pages/includes/edition-prereqs-tabs.mdx!) - A Linux server to run the Teleport Windows Desktop Service. You can use an existing server that runs the Teleport Agent for other resources. - - An Active Directory domain that is configured for LDAPS. Because Teleport requires an encrypted LDAP connection, you should verify that your domain uses Active Directory Certificate Services or a non-Microsoft certification authority (CA) to issue LDAPS certificates. - - Administrative access to a domain controller. - - (!docs/pages/includes/tctl.mdx!) ## Option 1: Automated configuration @@ -599,7 +585,8 @@ To configure Teleport to protect access to Windows desktops: windows_desktop_service: enabled: true ldap: - # Port must be included for the addr. + # Address (including port) of an LDAP server. + # Can be omitted if `locate_server` is enabled. # LDAPS port is 636 by default (example.com:636) addr: "$LDAP_SERVER_ADDRESS" domain: "$LDAP_DOMAIN_NAME" @@ -618,7 +605,48 @@ To configure Teleport to protect access to Windows desktops: ``` For a detailed description of the configuration fields, see - [Desktop Configuration Reference](../../reference/agent-services/desktop-access-reference/configuration.mdx). + [Desktop Configuration Reference](reference/configuration.mdx). + +1. Configure LDAP server selection. 
+ + To find desktops to connect to in Active Directory, Teleport must be given the + address of an LDAP server. There are two ways to specify the LDAP server: provide + a single LDAP server address in the `addr` field, or enable `locate_server` to + automatically discover an LDAP server from DNS SRV records. + + If you opt to use `addr`, the address should point to a highly available + endpoint to avoid downtime, and should include the LDAPS port, which is + typically 636. + +
+ Example of using the addr field + ```yaml + windows_desktop_service: + enabled: true + ldap: + addr: ldap.example.com:636 + domain: example.com + ``` +
+ + Alternatively, you can enable `locate_server` to automatically discover an + LDAP server from the DNS SRV records of your specified `domain`. This is useful + if you have multiple LDAP servers for high availability and load balancing. + Optionally, you can specify a `site` to limit the discovery to a specific + Active Directory site. + +
+ Example of using locate_server + ```yaml + windows_desktop_service: + enabled: true + ldap: + locate_server: + enabled: true + site: "my-site" # optional + domain: example.com + ``` +
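   Before enabling `locate_server`, you can verify that your domain publishes
   the SRV records that discovery relies on. Active Directory domain
   controllers register LDAP servers under `_ldap._tcp.<domain>` (and under
   `_ldap._tcp.<site>._sites.<domain>` for site-scoped lookups); the domain and
   site names below are illustrative:

   ```code
   # List the LDAP servers advertised for the domain.
   $ dig +short SRV _ldap._tcp.example.com
   # Site-scoped lookup, if you plan to set the site option.
   $ dig +short SRV _ldap._tcp.my-site._sites.example.com
   ```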
1. (!docs/pages/includes/start-teleport.mdx service="the Teleport Desktop Service"!) @@ -684,7 +712,7 @@ To connect to a Windows desktop: In Active Directory environments, Teleport can be configured to discover hosts via LDAP. LDAP discovery is enabled by adding one or more discovery configs to the `discovery_configs` field in the Teleport Windows Desktop Service -configuration. You can set `base_dn` to a wildcard `'*` to search from the root +configuration. You can set `base_dn` to a wildcard `'*'` to search from the root of the domain, or you can specify an alternate base distinguished name to search from. @@ -813,34 +841,79 @@ In order to perform NLA, Teleport's `windows_desktop_service` needs to be able to make an outbound Kerberos connection to a key distribution center (KDC). This is most commonly performed on TCP port 88. -By default, Teleport will assume that a KDC is available on the same host that -is specified in the `ldap` configuration's `addr` field, using the default -Kerberos port. +Teleport determines the address of the KDC by checking the following (in order of priority): + +1. If `kdc_address` is set, Teleport will use this address. +1. As of version 18.3.1, if `locate_server` is enabled and `kdc_address` is + not set, Teleport will attempt to discover the KDC address via DNS SRV records. +1. If neither `locate_server` nor `kdc_address` is specified, Teleport will + assume that a KDC is available on the same host that is specified in the + `ldap` configuration's `addr` field. -For example, with the following configuration, Teleport will attempt to perform -NLA against `example.com:88`. +For example, you can set the KDC address by specifying the `kdc_address` +in your Teleport configuration file.
```yaml windows_desktop_service: enabled: true - ldap: - addr: example.com:636 + kdc_address: kdc.example.com # defaults to port 88 if unspecified ``` -Alternatively, you can override the KDC address by specifying the `kdc_address` -in your Teleport configuration file. +If you're using the `locate_server` option, Teleport will perform a DNS +SRV lookup on the provided `domain` (e.g. `_kerberos._tcp.my-site._sites.example.com`). +NLA will be performed against the highest-priority reachable KDC. + +```yaml +windows_desktop_service: + enabled: true + + ldap: + locate_server: + enabled: true + site: "my-site" # optional + domain: example.com +``` + +If `locate_server` is disabled and `kdc_address` isn't specified, Teleport will +attempt to perform NLA against `example.com:88` with the following configuration: ```yaml windows_desktop_service: enabled: true - kdc_address: kdc.example.com # defaults to port 88 if unspecified + + ldap: + addr: example.com:636 ``` To enable NLA, set the `TELEPORT_ENABLE_RDP_NLA` environment variable to `yes` -on any hosts running the Teleport's `windows_desktop_service`. Note that NLA is -only supported when connecting to hosts that are part of an Active Directory -domain. Teleport will not perform NLA when connecting to hosts as a local -Windows user. +on any hosts running Teleport's `windows_desktop_service`. The process for +setting an environment variable varies depending on the environment in which you +are running Teleport. + +If you're running Teleport in Kubernetes, you'll need to +[edit the Pod configuration](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/).
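As a sketch, the relevant Pod spec fragment might look like the following (the
container name and image are illustrative, not prescriptive):

```yaml
# Pod spec fragment setting the NLA environment variable.
spec:
  containers:
    - name: teleport
      image: public.ecr.aws/gravitational/teleport-distroless:(=teleport.version=)
      env:
        - name: TELEPORT_ENABLE_RDP_NLA
          value: "yes"
```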
+ +If you're running Teleport as a systemd service, then you'll want to create a +systemd override using `systemctl edit teleport`: + +``` +$ sudo systemctl edit teleport +``` + +This will open a text editor where you can insert an override for the environment: + +``` +### Editing /etc/systemd/system/teleport.service.d/override.conf +### Anything between here and the comment below will become the contents of the drop-in file + +[Service] +Environment="TELEPORT_ENABLE_RDP_NLA=yes" +``` + +Note that NLA is only supported when connecting to hosts that are part of an +Active Directory domain. Teleport will not perform NLA when connecting to hosts +as a local Windows user. + +NLA is not supported when Teleport runs in FIPS mode. ### Computer Name diff --git a/docs/pages/enroll-resources/desktop-access/desktop-access.mdx b/docs/pages/enroll-resources/desktop-access/desktop-access.mdx index 71eb5a11e1cb6..a906b7c857cc5 100644 --- a/docs/pages/enroll-resources/desktop-access/desktop-access.mdx +++ b/docs/pages/enroll-resources/desktop-access/desktop-access.mdx @@ -1,6 +1,107 @@ --- title: Windows Desktops -description: Guides to protecting Windows desktops with Teleport +description: Protect Windows Resources with Teleport's passwordless access and other features. 
+sidebar_position: 7 +template: doc-page +tags: + - zero-trust + - infrastructure-identity --- - +import DocHero from "@site/src/components/Pages/Landing/DocHero"; +import Resources from "@site/src/components/Pages/Homepage/Resources"; +import UseCasesList from "@site/src/components/Pages/Landing/UseCasesList"; + +import userCheckSvg from "@site/src/components/Icon/svg/user-check.svg"; +import questionSvg from "@site/src/components/Icon/svg/question2.svg"; +import slidersHorizontalSvg from "@site/src/components/Icon/svg/sliders-horizontal.svg"; +import keyholeSvg from "@site/src/components/Icon/svg/keyhole.svg"; +import activeDirectorySvg from "@site/src/components/Icon/svg/active-directory.svg"; +import windowsSvg from "@site/src/components/Icon/svg/windows2.svg"; +import windowsDesktopsSvg from "@site/src/components/Icon/teleport-svg/windows-desktops.svg"; +import folderNotchOpenSvg from "@site/src/components/Icon/svg/folder-notch-open.svg"; + + +Provide secure, passwordless access to Microsoft Windows desktops and servers, backed by short-lived certificates, role-based access controls, clipboard and directory sharing, session recordings, and detailed audit logs. + +{/*vale messaging.protocol-products = NO*/} + +{/*vale messaging.protocol-products = YES*/} + diff --git a/docs/pages/enroll-resources/desktop-access/directory-sharing.mdx b/docs/pages/enroll-resources/desktop-access/directory-sharing.mdx index a07d985740f49..5633108c3c61f 100644 --- a/docs/pages/enroll-resources/desktop-access/directory-sharing.mdx +++ b/docs/pages/enroll-resources/desktop-access/directory-sharing.mdx @@ -1,6 +1,10 @@ --- title: Directory Sharing description: Teleport desktop Directory Sharing lets you easily send files to a remote desktop. 
+tags: + - how-to + - zero-trust + - infrastructure-identity +--- + +Directory Sharing is a Teleport feature that makes it easy to move files @@ -186,11 +190,11 @@ $ tctl create -f role.yaml Directory Sharing is a powerful tool for editing files on remote desktops, and you'll want to make sure you have a comprehensive audit trail so you can conduct a post-incident retrospective or investigate unintended usage. Learn how to set -up [session recording for desktop access](../../reference/agent-services/desktop-access-reference/sessions.mdx). +up [session recording for desktop access](reference/sessions.mdx). Aside from Directory Sharing, the Teleport Desktop Service also enables you to share the contents of your clipboard with a remote desktop. Learn how to use -[Clipboard Sharing](../../reference/agent-services/desktop-access-reference/clipboard.mdx). +[Clipboard Sharing](reference/clipboard.mdx). ### How Directory Sharing works @@ -239,4 +243,4 @@ Teleport Connect establishes a secure mTLS connection to the Teleport Proxy Serv which forwards traffic to and from the relevant Teleport Desktop Service instance. Just like the Web UI, Teleport Connect uses the Teleport Desktop Protocol (TDP) -to communicate with the Teleport Desktop Service, which then connects to the remote desktop via RDP. \ No newline at end of file +to communicate with the Teleport Desktop Service, which then connects to the remote desktop via RDP. diff --git a/docs/pages/enroll-resources/desktop-access/dynamic-registration.mdx b/docs/pages/enroll-resources/desktop-access/dynamic-registration.mdx index b1a9adcc12a38..2611133726a2b 100644 --- a/docs/pages/enroll-resources/desktop-access/dynamic-registration.mdx +++ b/docs/pages/enroll-resources/desktop-access/dynamic-registration.mdx @@ -1,6 +1,11 @@ --- title: Dynamic Windows Desktop Registration +sidebar_label: Dynamic Registration description: Register/unregister Windows desktops without restarting Teleport.
+tags: + - conceptual + - zero-trust + - infrastructure-identity --- Dynamic Windows desktop registration allows Teleport administrators to register diff --git a/docs/pages/enroll-resources/desktop-access/getting-started.mdx b/docs/pages/enroll-resources/desktop-access/getting-started.mdx index 7b2e20764115a..b20f09c298ce4 100644 --- a/docs/pages/enroll-resources/desktop-access/getting-started.mdx +++ b/docs/pages/enroll-resources/desktop-access/getting-started.mdx @@ -1,7 +1,12 @@ --- -title: Configure access for local Windows users +title: Configure Access for Local Windows Users +sidebar_label: Local Windows Users description: Use Teleport to configure passwordless access for local Windows users. videoBanner: 9DyKQbg4ORc +tags: + - how-to + - zero-trust + - infrastructure-identity --- This guide demonstrates how to configure Teleport to provide secure, passwordless access @@ -14,6 +19,20 @@ Community Edition will not allow connections to any of them. Teleport Enterprise users can have unlimited number of desktops. +## How it works + +The Teleport Desktop Service proxies traffic between the Teleport Proxy Service +and remote Windows desktops in your infrastructure. Teleport users connect to a +Windows desktop using Teleport Connect or the Teleport Web UI. These graphical +clients use the proprietary Teleport Desktop Protocol to send traffic to and +receive traffic from the Teleport Desktop Service, which converts TDP messages +to and from Remote Desktop Protocol messages. + +To access a remote desktop, a Teleport user must have a role with the +appropriate permissions for that desktop. An administrator configures the +desktop to trust the Teleport certificate authority, and the Windows Desktop +Service uses certificate-based authentication to access the desktop. 
+ ## Prerequisites To complete the steps in this guide, verify your environment meets the following requirements: @@ -134,7 +153,7 @@ Windows Desktop Service by running the following command: ``` If Teleport isn't installed on the Linux server, follow the appropriate [Installation -instructions](../../installation.mdx#linux) for your environment. +instructions](../../installation/linux.mdx) for your environment. 1. Configure the `/etc/teleport.yaml` file on the Linux server with settings similar to the following for the Windows Desktop Service: @@ -253,7 +272,7 @@ To configure a role for desktop access: Depending on how you configure role-based access controls for Windows, Teleport can create local user logins automatically from the `windows_desktop_logins` you specify. For more information about enabling Teleport - to create local Windows logins, see [Automatic User Creation](../../reference/agent-services/desktop-access-reference/user-creation.mdx). + to create local Windows logins, see [Automatic User Creation](reference/user-creation.mdx). 1. Apply the new role to your cluster by running the following command: @@ -301,6 +320,6 @@ Windows login to use for the connection. ## Next steps For more general information about how to create, assign, and update roles, see [Access Controls -Getting Started](../../admin-guides/access-controls/getting-started.mdx). +Getting Started](../../zero-trust-access/rbac-get-started/role-demo.mdx). For more specific information about configuring Windows-specific role permissions, see [Role-Based Access Control for Desktops](./rbac.mdx). 
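As an example, a minimal role granting access to any registered desktop as the
local `Administrator` user might look like the following sketch (the role name
is illustrative):

```yaml
kind: role
version: v7
metadata:
  name: windows-desktop-admin
spec:
  allow:
    # Match all registered desktops.
    windows_desktop_labels:
      "*": "*"
    # Local Windows logins the user may connect as.
    windows_desktop_logins: ["Administrator"]
```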
diff --git a/docs/pages/enroll-resources/desktop-access/introduction.mdx b/docs/pages/enroll-resources/desktop-access/introduction.mdx index 7368b52c9e01d..e6e029b5118cf 100644 --- a/docs/pages/enroll-resources/desktop-access/introduction.mdx +++ b/docs/pages/enroll-resources/desktop-access/introduction.mdx @@ -1,7 +1,12 @@ --- title: Manage Access to Windows Resources +sidebar_label: Introduction description: Demonstrates how you can manage access to Windows desktops with Teleport. videoBanner: n2h0GisWdss +tags: + - conceptual + - zero-trust + - privileged-access --- The topics in this guide describe how to configure Teleport to provide secure, passwordless @@ -75,13 +80,13 @@ Windows-specific configuration settings, role-based permissions, and audit event {/* vale messaging.protocol-products = NO */} - [Role-Based Access Control for Desktops](./rbac.mdx) -- [Clipboard Sharing](../../reference/agent-services/desktop-access-reference/clipboard.mdx) +- [Clipboard Sharing](reference/clipboard.mdx) - [Directory Sharing](./directory-sharing.mdx) - [Dynamic Registration](./dynamic-registration.mdx) -- [Session Recording and Playback](../../reference/agent-services/desktop-access-reference/sessions.mdx) +- [Session Recording and Playback](reference/sessions.mdx) - [Troubleshooting Desktop Access](./troubleshooting.mdx) -- [Desktop Access Audit Events Reference](../../reference/agent-services/desktop-access-reference/audit.mdx) -- [Desktop Access Configuration Reference](../../reference/agent-services/desktop-access-reference/configuration.mdx) -- [Desktop Access CLI Reference](../../reference/agent-services/desktop-access-reference/cli.mdx) +- [Desktop Access Audit Events Reference](reference/audit.mdx) +- [Desktop Access Configuration Reference](reference/configuration.mdx) +- [Desktop Access CLI Reference](reference/cli.mdx) {/* vale messaging.protocol-products = YES */} diff --git a/docs/pages/enroll-resources/desktop-access/rbac.mdx 
b/docs/pages/enroll-resources/desktop-access/rbac.mdx index 9c32cab48f1e5..da66a0897acd9 100644 --- a/docs/pages/enroll-resources/desktop-access/rbac.mdx +++ b/docs/pages/enroll-resources/desktop-access/rbac.mdx @@ -1,6 +1,11 @@ --- title: Role-Based Access Control for Desktops +sidebar_label: Access Controls description: Role-based access control (RBAC) for desktops protected by Teleport. +tags: + - conceptual + - zero-trust + - privileged-access --- Teleport's RBAC allows administrators to set up granular access policies for diff --git a/docs/pages/enroll-resources/desktop-access/reference/audit.mdx b/docs/pages/enroll-resources/desktop-access/reference/audit.mdx new file mode 100644 index 0000000000000..9ae79a0c24c41 --- /dev/null +++ b/docs/pages/enroll-resources/desktop-access/reference/audit.mdx @@ -0,0 +1,255 @@ +--- +title: Desktop Access Audit Events Reference +sidebar_label: Audit Events +description: Audit events reference for Teleport desktop access. +tags: + - reference + - zero-trust + - audit +--- + +This guide lists the structures and field names of audit events related to +connecting to remote desktops with Teleport. Use this guide to understand +desktop-related audit events and configure your log management solutions if you +are [exporting audit +events](../../../zero-trust-access/export-audit-events/export-audit-events.mdx). + +## windows.desktop.session.start (TDP00I/W) + +Emitted when a client successfully connects to a desktop or when a connection +attempt fails because access was denied. 
+ +Successful connection event: + +```json +{ + "addr.remote": "192.168.1.206:3389", + "cluster_name": "root", + "code": "TDP00I", + "desktop_addr": "192.168.1.206:3389", + "desktop_labels": { + "teleport.dev/computer_name": "WIN-I44F9TN11M3", + "teleport.dev/dns_host_name": "WIN-I44F9TN11M3.teleport.example.com", + "teleport.dev/is_domain_controller": "true", + "teleport.dev/origin": "dynamic", + "teleport.dev/os": "Windows Server 2012 R2 Standard Evaluation", + "teleport.dev/os_version": "6.3 (9600)", + "teleport.dev/windows_domain": "teleport.example.com" + }, + "ei": 0, + "event": "windows.desktop.session.start", + "login": "administrator", + "proto": "tdp", + "sid": "4a0ed655-1e0b-412b-b14a-348e840e7fa2", + "success": true, + "time": "2022-02-16T16:43:30.459Z", + "uid": "1605346b-d90b-4df7-8148-67a3e2d85673", + "user": "alice", + "windows_desktop_service": "316a3ffa-23e6-4d85-92a1-5e44754f8189", + "windows_domain": "teleport.example.com", + "windows_user": "administrator" +} +``` + +Access denied event: + +```json +{ + "addr.remote": "192.168.1.206:3389", + "cluster_name": "root", + "code": "TDP00W", + "desktop_addr": "192.168.1.206:3389", + "desktop_labels": { + "teleport.dev/computer_name": "WIN-I44F9TN11M3", + "teleport.dev/dns_host_name": "WIN-I44F9TN11M3.teleport.example.com", + "teleport.dev/is_domain_controller": "true", + "teleport.dev/origin": "dynamic", + "teleport.dev/os": "Windows Server 2012 R2 Standard Evaluation", + "teleport.dev/os_version": "6.3 (9600)", + "teleport.dev/windows_domain": "teleport.example.com" + }, + "ei": 0, + "error": "access to desktop denied", // Connection error + "event": "windows.desktop.session.start", + "message": "access to desktop denied", // Detailed error message. 
+ "login": "administrator", + "proto": "tdp", + "sid": "4a0ed655-1e0b-412b-b14a-348e840e7fa2", + "success": false, // Indicates unsuccessful connection + "time": "2022-02-16T16:43:30.459Z", + "uid": "1605346b-d90b-4df7-8148-67a3e2d85673", + "user": "alice", + "windows_desktop_service": "316a3ffa-23e6-4d85-92a1-5e44754f8189", + "windows_domain": "teleport.example.com", + "windows_user": "administrator" +} +``` + +## windows.desktop.session.end (TDP01I) + +Emitted when a client disconnects from the desktop. + +```json +{ + "cluster_name": "root", + "code": "TDP01I", + "desktop_addr": "192.168.1.206:3389", + "desktop_labels": { + "teleport.dev/computer_name": "WIN-I44F9TN11M3", + "teleport.dev/dns_host_name": "WIN-I44F9TN11M3.teleport.example.com", + "teleport.dev/is_domain_controller": "true", + "teleport.dev/origin": "dynamic", + "teleport.dev/os": "Windows Server 2012 R2 Standard Evaluation", + "teleport.dev/os_version": "6.3 (9600)", + "teleport.dev/windows_domain": "teleport.example.com" + }, + "desktop_name": "WIN-I44F9TN11M3-teleport-example-com", + "ei": 0, + "event": "windows.desktop.session.end", + "login": "administrator", + "participants": ["alice"], + "recorded": true, + "session_start": "2022-02-16T16:43:30.459Z", + "session_stop": "2022-02-16T16:46:50.894Z", + "sid": "4a0ed655-1e0b-412b-b14a-348e840e7fa2", + "time": "2022-02-16T16:46:50.895Z", + "uid": "c7956a81-597f-4452-90d7-800506f7a05b", + "user": "alice", + "windows_desktop_service": "316a3ffa-23e6-4d85-92a1-5e44754f8189", + "windows_domain": "teleport.example.com", + "windows_user": "administrator" +} +``` + +## desktop.clipboard.send (TDP02I) + +Emitted when clipboard data is sent from a user's workstation to Teleport. In +order to avoid capturing sensitive data, the event only records the number of +bytes that were sent. 
+ +```json +{ + "addr.remote": "192.168.1.206:3389", + "cluster_name": "root", + "code": "TDP02I", + "desktop_addr": "192.168.1.206:3389", + "ei": 0, + "event": "desktop.clipboard.send", + "length": 4, // number of bytes sent + "proto": "tdp", + "sid": "4a0ed655-1e0b-412b-b14a-348e840e7fa2", + "time": "2022-02-16T16:43:40.010217Z", + "uid": "e45d9890-38a9-4580-8572-35fa0192b123", + "user": "alice" +} +``` + +## desktop.clipboard.receive (TDP03I) + +Emitted when Teleport receives clipboard data from a remote desktop. In order to +avoid capturing sensitive data, the event only records the number of bytes that +were received. + +```json +{ + "addr.remote": "192.168.1.206:3389", + "cluster_name": "root", + "code": "TDP03I", + "desktop_addr": "192.168.1.206:3389", + "ei": 0, + "event": "desktop.clipboard.receive", + "length": 4, // number of bytes received + "proto": "tdp", + "sid": "4a0ed655-1e0b-412b-b14a-348e840e7fa2", + "time": "2022-02-16T16:43:40.010217Z", + "uid": "e45d9890-38a9-4580-8572-35fa0192b123", + "user": "alice" +} +``` + +## desktop.directory.share (TDP04I/W) + +Emitted when Teleport starts sharing a directory on a local machine to the remote desktop. + +```json +{ + "addr.remote": "192.168.1.206:3389", + "cluster_name": "root", + "code": "TDP04I", // TDP04W if the operation failed + "desktop_addr": "192.168.1.206:3389", + "directory_id": 2, + "directory_name": "local-files", + "ei": 0, + "event": "desktop.directory.share", + "proto": "tdp", + "sid": "4a0ed655-1e0b-412b-b14a-348e840e7fa2", + "success": true, // false if the operation failed + "time": "2022-10-21T22:36:27.314409Z", + "uid": "e45d9890-38a9-4580-8572-35fa0192b123", + "user": "alice" +} +``` + +## desktop.directory.read (TDP05I/W) + +This event is part of the directory sharing feature, and is emitted when +Teleport reads data from a file on the user's local machine and sends it +to the remote Windows desktop. 
+
+In order to avoid capturing sensitive data, the event only records the offset
+from the start of the file from which the read began and the number of bytes
+that were sent.
+
+```json
+{
+  "addr.remote": "192.168.1.206:3389",
+  "cluster_name": "root",
+  "code": "TDP05I", // TDP05W if the operation failed
+  "desktop_addr": "192.168.1.206:3389",
+  "directory_id": 2,
+  "directory_name": "local-files",
+  "ei": 0,
+  "event": "desktop.directory.read",
+  "file_path": "powershell-scripts/a-script.ps1", // relative path from the root of the shared directory (local-files in this case)
+  "length": 734, // the number of bytes read
+  "offset": 0, // the offset from the start of the file from which the read began
+  "proto": "tdp",
+  "sid": "4a0ed655-1e0b-412b-b14a-348e840e7fa2",
+  "success": true, // false if the operation failed
+  "time": "2022-10-21T22:36:27.314409Z",
+  "uid": "e45d9890-38a9-4580-8572-35fa0192b123",
+  "user": "alice"
+}
+```
+
+## desktop.directory.write (TDP06I/W)
+
+This event is part of the directory sharing feature, and is emitted when
+Teleport writes data from the remote desktop to a file on the user's local
+machine.
+
+In order to avoid capturing sensitive data, the event only records the offset
+from the start of the file from which the write began and the number of bytes
+that were written.
+
+```json
+{
+  "addr.remote": "192.168.1.206:3389",
+  "cluster_name": "root",
+  "code": "TDP06I", // TDP06W if the operation failed
+  "desktop_addr": "192.168.1.206:3389",
+  "directory_id": 2,
+  "directory_name": "local-files",
+  "ei": 0,
+  "event": "desktop.directory.write",
+  "file_path": "powershell-scripts/a-script.ps1", // relative path from the root of the shared directory (local-files in this case)
+  "length": 734, // the number of bytes written
+  "offset": 0, // the offset from the start of the file from which the write began
+  "proto": "tdp",
+  "sid": "4a0ed655-1e0b-412b-b14a-348e840e7fa2",
+  "success": true, // false if the operation failed
+  "time": "2022-10-21T22:36:27.314409Z",
+  "uid": "e45d9890-38a9-4580-8572-35fa0192b123",
+  "user": "alice"
+}
+```
diff --git a/docs/pages/enroll-resources/desktop-access/reference/cli.mdx b/docs/pages/enroll-resources/desktop-access/reference/cli.mdx
new file mode 100644
index 0000000000000..759fc733d1ce2
--- /dev/null
+++ b/docs/pages/enroll-resources/desktop-access/reference/cli.mdx
@@ -0,0 +1,32 @@
+---
+title: Desktop Access CLI Reference
+sidebar_label: CLI Commands
+description: CLI reference for Teleport desktop access.
+tags:
+  - reference
+  - zero-trust
+  - infrastructure-identity
+---
+
+The following `tctl` commands are used to manage the Teleport Windows Desktop
+Service.
+
+- (!docs/pages/includes/tctl.mdx!)
+ +Generate a join token for a Windows Desktop Service: + +```sh +$ tctl tokens add --type=WindowsDesktop +``` + +List registered Windows Desktop Services: + +```sh +$ tctl get windows_desktop_service +``` + +List registered Windows desktops: + +```sh +$ tctl get windows_desktop +``` diff --git a/docs/pages/reference/agent-services/desktop-access-reference/clipboard.mdx b/docs/pages/enroll-resources/desktop-access/reference/clipboard.mdx similarity index 95% rename from docs/pages/reference/agent-services/desktop-access-reference/clipboard.mdx rename to docs/pages/enroll-resources/desktop-access/reference/clipboard.mdx index 8a884684b013e..f8532a86eefda 100644 --- a/docs/pages/reference/agent-services/desktop-access-reference/clipboard.mdx +++ b/docs/pages/enroll-resources/desktop-access/reference/clipboard.mdx @@ -1,6 +1,10 @@ --- title: Clipboard Sharing description: Using Clipboard Sharing with Teleport desktop access. +tags: + - conceptual + - zero-trust + - infrastructure-identity --- Teleport supports copying and pasting text between your browser and a remote diff --git a/docs/pages/enroll-resources/desktop-access/reference/configuration.mdx b/docs/pages/enroll-resources/desktop-access/reference/configuration.mdx new file mode 100644 index 0000000000000..505bf00f6830b --- /dev/null +++ b/docs/pages/enroll-resources/desktop-access/reference/configuration.mdx @@ -0,0 +1,73 @@ +--- +title: Desktop Access Configuration Reference +sidebar_label: Configuration +description: Configuration reference for Teleport desktop access. +tags: + - conceptual + - zero-trust + - infrastructure-identity +--- + +`teleport.yaml` fields related to desktop access: + +```yaml +# Main service responsible for desktop access. +# +# You can have multiple Desktop Service instances in your cluster (but not in the +# same teleport.yaml), connected to the same or different Active Directory +# domains. +(!docs/pages/includes/config-reference/desktop-config.yaml!) 
+```
+
+## Deployment
+
+The Windows Desktop Service can be deployed in two modes.
+
+### Direct mode
+
+In *direct* mode, the Windows Desktop Service registers directly with the Teleport
+Auth Service, and listens for desktop connections from the Teleport Proxy. To
+enable direct mode, set `windows_desktop_service.listen_addr` in
+`teleport.yaml`, and ensure that `teleport.auth_server` points directly at the
+Auth Service.
+
+Direct mode requires network connectivity:
+
+- from the Teleport Proxy to the Windows Desktop Service.
+- from the Windows Desktop Service to the Auth Service.
+
+For these reasons, direct mode is not available in Teleport Cloud; it is
+supported only in self-hosted Teleport clusters.
+
+### IoT mode (reverse tunnel)
+
+In *IoT mode*, the Windows Desktop Service only needs to be able to make an outbound
+connection to a Teleport Proxy. The Windows Desktop Service establishes a
+reverse tunnel to the proxy, and both registration with the Auth Service and
+desktop sessions are performed over this tunnel. To enable this mode, ensure
+that `windows_desktop_service.listen_addr` is *unset*, and point
+`teleport.proxy_server` at a Teleport Proxy.
+
+## Screen size
+
+By default, Teleport sets the screen size of the remote desktop session
+based on the size of your browser window. In some cases, you may wish to
+configure specific hosts to use a specific screen size. To do this, set the
+`screen_size` attribute on the `windows_desktop` resource:
+
+```yaml
+kind: windows_desktop
+metadata:
+  name: fixed-screen-size
+spec:
+  host_id: 307e091b-7f6b-42e0-b78d-3362ad10b55d
+  addr: 192.168.1.153:3389
+  non_ad: true
+
+  # Optional - ensures that all sessions use the same screen size,
+  # no matter what the size of the browser window is.
+  # Leave blank to use the size of the browser window.
+ screen_size: + width: 1024 + height: 768 +``` diff --git a/docs/pages/enroll-resources/desktop-access/reference/reference.mdx b/docs/pages/enroll-resources/desktop-access/reference/reference.mdx new file mode 100644 index 0000000000000..3f11742c37d5c --- /dev/null +++ b/docs/pages/enroll-resources/desktop-access/reference/reference.mdx @@ -0,0 +1,18 @@ +--- +title: Desktop Access Reference +sidebar_label: Reference +description: Comprehensive guides to configuring and auditing desktop access. +sidebar_position: 8 +template: "no-toc" +tags: + - zero-trust + - infrastructure-identity +--- + +- [Configuration](configuration.mdx): Configure Teleport desktop access. +- [Audit](audit.mdx): Desktop access audit events. +- [Clipboard](clipboard.mdx): Share your clipboard with a remote desktop. +- [Session Recording](sessions.mdx): Desktop session recording and playback +- [CLI](cli.mdx): Relevant `tctl` commands +- [Scaling](../../../reference/deployment/scaling.mdx): Tips on scaling to many concurrent users. +- [User creation](user-creation.mdx): Automatic user creation diff --git a/docs/pages/reference/agent-services/desktop-access-reference/sessions.mdx b/docs/pages/enroll-resources/desktop-access/reference/sessions.mdx similarity index 92% rename from docs/pages/reference/agent-services/desktop-access-reference/sessions.mdx rename to docs/pages/enroll-resources/desktop-access/reference/sessions.mdx index 01d8ddceadf51..c375368ca8441 100644 --- a/docs/pages/reference/agent-services/desktop-access-reference/sessions.mdx +++ b/docs/pages/enroll-resources/desktop-access/reference/sessions.mdx @@ -1,6 +1,11 @@ --- title: Session Recording and Playback description: Recording and playing back Teleport desktop access sessions. +tags: + - session-recording + - conceptual + - zero-trust + - audit --- Teleport supports recording and playback of desktop sessions. @@ -53,7 +58,7 @@ being recorded. 
In other words, *all* of the roles applied to a user must explicitly disable recording to prevent the session from being recorded. See the -[Access Controls Reference](../../access-controls/roles.mdx) +[Access Controls Reference](../../../reference/access-controls/roles.mdx) for all possible settings in the `record_session` section. ## Playback @@ -80,5 +85,5 @@ saved to disk. If disk space is a concern, we recommend using sync recording and configuring a cloud storage solution such as S3. Refer to the `storage` section of the [Teleport Configuration -Reference](../../../reference/config.mdx) for more information on session +Reference](../../../reference/deployment/config.mdx) for more information on session logging and recording management. diff --git a/docs/pages/reference/agent-services/desktop-access-reference/user-creation.mdx b/docs/pages/enroll-resources/desktop-access/reference/user-creation.mdx similarity index 98% rename from docs/pages/reference/agent-services/desktop-access-reference/user-creation.mdx rename to docs/pages/enroll-resources/desktop-access/reference/user-creation.mdx index cb06b0b3933d6..d5f71ab46791e 100644 --- a/docs/pages/reference/agent-services/desktop-access-reference/user-creation.mdx +++ b/docs/pages/enroll-resources/desktop-access/reference/user-creation.mdx @@ -1,6 +1,10 @@ --- title: Automatic User Creation description: Using Automatic User Creation with Teleport desktop access. 
+tags: + - conceptual + - zero-trust + - infrastructure-identity --- Teleport's Desktop Service can be configured to automatically create local diff --git a/docs/pages/enroll-resources/desktop-access/troubleshooting.mdx b/docs/pages/enroll-resources/desktop-access/troubleshooting.mdx index ca2888f84954d..b7f632b132919 100644 --- a/docs/pages/enroll-resources/desktop-access/troubleshooting.mdx +++ b/docs/pages/enroll-resources/desktop-access/troubleshooting.mdx @@ -1,6 +1,11 @@ --- title: Troubleshooting Desktop Access +sidebar_label: Troubleshooting description: Common issues and resolutions for Teleport's desktop access +tags: + - how-to + - zero-trust + - infrastructure-identity --- Common issues and resolution steps. diff --git a/docs/pages/enroll-resources/enroll-resources.mdx b/docs/pages/enroll-resources/enroll-resources.mdx index 21df75ded60c1..5e95406729fa1 100644 --- a/docs/pages/enroll-resources/enroll-resources.mdx +++ b/docs/pages/enroll-resources/enroll-resources.mdx @@ -1,29 +1,260 @@ --- title: Enrolling Teleport Resources description: Provides step-by-step instructions for enrolling servers, databases, and other infrastructure resources with your Teleport cluster. +template: "landing-page" +tags: + - conceptual + - zero-trust --- -You can use Teleport to protect infrastructure resources like servers, -databases, and Kubernetes clusters. Once an infrastructure resource is protected -by Teleport, you can restrict access to the resource using the Teleport -[role-based access controls -system](../admin-guides/access-controls/access-controls.mdx) and use Teleport -features like session recordings and audit events to understand how your users -interact with the resource. 
+import LandingHero from '@site/src/components/Pages/Landing/LandingHero'; +import EnrollmentMethods, { Method } from "@site/src/components/Pages/Landing/EnrollmentMethods/EnrollmentMethods"; +import ResourceGrid from "@site/src/components/Pages/Landing/ResourceGrid"; -To enroll a resource with Teleport, you deploy a Teleport Agent, an instance of -the `teleport` binary configured to run certain services, such as the Teleport -SSH Service and Teleport Database Service. You then configure the Agent to proxy -a resource by querying a service discovery API (Auto Discovery), using a -[dynamic Teleport -resource](../admin-guides/infrastructure-as-code/infrastructure-as-code.mdx), or -naming the resource in the Agent's configuration file. Read more about [Teleport -Agent architecture](../reference/architecture/agents.mdx). +import enrollResourcesImg from '@version/docs/img/enroll-resources/enroll-resources-hero.png'; +import bookOpenSvg from "@site/src/components/Icon/teleport-svg/book-open.svg"; +import magicWandSvg from "@site/src/components/Icon/teleport-svg/magic-wand.svg"; +import terminalWindowSvg from "@site/src/components/Icon/teleport-svg/terminal-window.svg"; +import applicationSvg from "@site/src/components/Icon/teleport-svg/application.svg"; +import linuxServersSvg from "@site/src/components/Icon/teleport-svg/linux-servers.svg"; +import databaseSvg from "@site/src/components/Icon/teleport-svg/database-access.svg"; +import kubernetesClustersSvg from "@site/src/components/Icon/teleport-svg/kubernetes-clusters.svg"; +import windowsDesktopsSvg from "@site/src/components/Icon/teleport-svg/windows-desktops.svg"; +import mcpAndAiSvg from "@site/src/components/Icon/teleport-svg/mcp-and-ai.svg"; -You can also create a Teleport bot user and set up Machine ID to enable service -accounts to access Teleport-protected resources. 
+ +Teleport protects infrastructure resources such as servers, databases, and Kubernetes clusters by enforcing strong access controls and auditability. Once a resource is enrolled with Teleport, you can manage access to it using the Teleport role-based access control (RBAC) system, allowing you to define which users or teams can connect and what actions they are permitted to perform. +- [Learn about Teleport Agents](../reference/architecture/agents.mdx) +- [Restrict access to resources with RBAC](../zero-trust-access/rbac-get-started/rbac-get-started.mdx) + -Read the following documentation for more information on enrolling -infrastructure resources with Teleport: + + +Secure access to web applications, TCP apps, cloud provider consoles and CLIs, and more. + + +Secure, auditable access to cloud and self-hosted databases. + + +Protect Kubernetes clusters with SSO integration, RBAC, and session recordings. + + +Passwordless access to Windows Desktops and servers for domain users and local users. + + +Streamline SSH access, reduce configuration overhead, and provide auditability. + + +Secure the vibes with access controls and auditability for all Model Context Protocol access. + + - + +Automatically detect and enroll resources in your Teleport cluster with the Teleport Discovery Service. 
+ diff --git a/docs/pages/enroll-resources/kubernetes-access/controls.mdx b/docs/pages/enroll-resources/kubernetes-access/controls.mdx index 033675d0c5bef..88a92dab44e66 100644 --- a/docs/pages/enroll-resources/kubernetes-access/controls.mdx +++ b/docs/pages/enroll-resources/kubernetes-access/controls.mdx @@ -1,6 +1,11 @@ --- title: Teleport Kubernetes Access Controls +sidebar_label: Access Controls Reference description: How the Teleport Kubernetes Service applies RBAC to manage access to Kubernetes +tags: + - conceptual + - zero-trust + - privileged-access --- This guide explains the way the Teleport Kubernetes Service applies role-based @@ -39,21 +44,21 @@ version: v8 spec: allow: kubernetes_labels: - 'region': '*' - 'platform': 'minikube' + region: '*' + platform: minikube kubernetes_resources: - kind: pods - namespace: "production" - name: "^webapp-[a-z0-9-]+$" - api_group: "" + namespace: production + name: '^webapp-[a-z0-9-]+$' + api_group: '' - kind: pods - namespace: "development" - name: "*" - api_group: "" + namespace: development + name: '*' + api_group: '' - kind: deployments - namespace: "development" - name: "*" - api_group: "apps" + namespace: development + name: '*' + api_group: apps kubernetes_groups: - developers kubernetes_users: @@ -71,12 +76,12 @@ using a role's `kubernetes_labels` field. kind: role metadata: name: kube-access -version: v7 +version: v8 spec: allow: kubernetes_labels: - 'region': '*' - 'environment': 'development' + region: '*' + environment: development # ... deny: {} ``` @@ -88,7 +93,7 @@ or more label *values* (i.e., either a string or a list). If both the key and the value of a label are wildcards, `*`, the Teleport Kubernetes Service allows the user to access Kubernetes clusters with all -labels: +tags: ```yaml spec: @@ -140,8 +145,8 @@ Here is an example: spec: allow: kubernetes_labels: - 'region': 'us-east-*' - 'team': '^data-eng-[a-z-]+$' + region: 'us-east-*' + team: '^data-eng-[a-z-]+$' # ... 
``` @@ -161,8 +166,8 @@ clusters with the `region:us-east-2` label and either the `development` or spec: allow: kubernetes_labels: - 'region': 'us-east-*' - 'environment': ['development', 'staging'] + region: 'us-east-*' + environment: ['development', 'staging'] # ... ``` @@ -316,8 +321,8 @@ determines this from the `kubernetes_users` and `kubernetes_groups` fields in a user's roles. If a user has exactly one value in `kubernetes_users`, the Teleport Kubernetes -Service impersonates that user. If there are no values in `kubernetes_users`, -the Kubernetes Service uses the user's Teleport username. +Service impersonates that user. If there are no values or a wildcard (`*`) in +`kubernetes_users`, the Kubernetes Service uses the user's Teleport username. The Kubernetes Service will deny a request if a user has multiple `kubernetes_users` and has not specified one when authenticating to a cluster @@ -326,6 +331,16 @@ The Kubernetes Service will deny a request if a user has multiple If the user has not specified a Kubernetes group to impersonate, the Kubernetes Service uses all values within `kubernetes_groups`. + + When impersonating a less privileged user, remember that unless you're + also manually impersonating specific groups (e.g. using `--as-groups` flag), + the Kubernetes Service will automatically impersonate any groups within + `kubernetes_groups`. + + This can be confusing because you will have the combined permissions of both + the user and any automatically-impersonated groups. + + With the `kube-access` role above, after you authenticate to Teleport, the Kubernetes Service uses impersonation headers to forward requests to the API server with the `developers` group and the `myuser` Kubernetes user. 
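The rules above mean that a role granting several `kubernetes_users` requires the user to pick one principal explicitly when authenticating. A hypothetical role illustrating this situation (all names here are examples, not part of the surrounding guide):

```yaml
kind: role
metadata:
  name: kube-multi-user
version: v8
spec:
  allow:
    kubernetes_labels:
      "*": "*"
    # More than one value: the user must choose one principal when
    # authenticating, or the Kubernetes Service denies the request.
    kubernetes_users:
      - app-deployer
      - read-only-auditor
    kubernetes_groups:
      - developers
```

With a role like this, the principal selected at authentication time, together with the `kubernetes_groups` values, determines the impersonation headers the Kubernetes Service sends to the API server.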
@@ -478,10 +493,51 @@ spec: - myuser ``` +#### Configuring just-in-time access + +If you are setting up a Teleport role to enable just-in-time access to a +specific Kubernetes resources, you should set the role's `kubernetes_groups` and +`kubernetes_users` to a principal with a role that has no access to Kubernetes +resource beside the Kubernetes resources that Teleport is able to restrict +access for. + +This is because, if a user requests access to a Kubernetes pod, and the request +is approved, the Teleport Kubernetes Service will use the `kubernetes_groups` +and `kubernetes_users` fields in the role to add impersonation headers to the user's +requests to a Kubernetes API server. Under these conditions, Teleport will be able +to restrict access to all supported Kubernetes resources kinds except for +the desired `pod`. + +Teleport is also able to restrict access to namespaced-scoped custom resources +but not cluster-scoped custom resources. CRDs resources that are cluster scoped +will be accessible to the user if the principals in the `kubernetes_users` and +`kubernetes_groups` fields have access to them. + +Requesting access to a Kubernetes Namespace allows you to access all resources +in that namespace but you won't be able to access any other supported resources +in the cluster. + ### `kubernetes_resources` The `kubernetes_resources` field enables a Teleport role to configure access to -specific resources in a Kubernetes cluster: +specific resources in a Kubernetes cluster. + +The value of this field is a list of mappings, where each mapping is described as follows: + +#### Role V8 + + +Role V8 added support for managing access to any kubernetes resource kind, including custom resource definitions (CRDs). 
+To do so it has multiple changes to the way the `kubernetes_resources` section is handled compared to prior role versions: + +- the `kind` field must always specify the plural form of the resource kind +- the `api_group` field must be set for resources not found in the core API group +- `kind: namespaces` now matches namespace resources, it no longer matches resources within the namespace +- the `namespace` field when set and with a value of anything but `*` will not match cluster-wide resources. + +As this behavior differs from earlier role versions, if you migrate a role to V8 from a previous version you will +likely need to adjust the `kubernetes_resources` section. + ```yaml kind: role @@ -491,30 +547,168 @@ version: v8 spec: allow: kubernetes_labels: - '*':'*' + "*": "*" kubernetes_resources: - kind: pods - namespace: "production" - name: "webapp" - verbs: ['*'] api_group: "" + namespace: production + name: webapp + verbs: ["*"] + - kind: deployments + api_group: apps + namespace: development + name: "*" # ... ``` -The value of this field is a list of mappings, where each mapping has four -fields: +- `kind`: The kind of resource to enable access to. It can be `*` or + the plural form of the kind (e.g. `pods`, `deployments`, `cronjobs`, `mycustomresources`). + If the resource has a group, it must be specified in the `api_group` field, for example, + while `pods` doesn't need a group, `deployments` require the `api_group` field be set to `apps`. + + The full list of available resources, along with their api group can be found by running the following command: + + ```bash + kubectl api-resources --namespaced=true -o=name + kubectl api-resources --namespaced=false -o=name + ``` + + - If the line has a `.`, then the kind is the first element before the `.` and the api group is everything after the first `.`. + - If the line doesn't have a `.`, then the kind is the full element and there is no api group. 
+ + + + - The `namespaces` kind doesn't include all the namespaced resources, it only covers the namespace itself. + This behavior is different from earlier role versions where it covered the resource itself and everything in the namespace, + which means that if you migrate an existing role to V8 from a prior version you will likely need to adjust the `kubernetes_resources` section. + - The kind, including `*` now enforces the `namespace` field. To match a cluster-wide resource, the `namespace` field must be empty or `*`. + This behavior is different from earlier role versions where it included cluster-wide resources regardless of the `namespace` field. + + +- `api_group`: The API group of the resource. In `kube-access`, in the example above, the `pods` resource being in a core resource + has `""` for api_group, the value is set to `apps` for the `deployments` resource kind. A wildcard can be used to match any API groups. + +- `namespace`: The Kubernetes namespace in which to allow access to a resource. Must be empty or `*` for cluster-wide resources. Any other value, + including other wildcards, will only match namespaced resources. + In the `kube-access` role, we are allowing access to a pod in the `production` namespace. + + + Note that when using the wildcard `*`, it will match any resources, including cluster-wide ones (based on kind/api_group). + Using empty string `""` will match only cluster-wide resources. + Any other value, like `^.+$` will only match namespaced resources and exclude cluster-wide resources. + + + +- `name`: The name of the pod to allow access to. In `kube-access`, this is + the `webapp` pod. + +- `verbs`: The operations to allow on the resource. 
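As a sketch of that rule, here is how a few typical `kubectl api-resources -o=name` entries would translate into `kubernetes_resources` fields (the exact resource list depends on your cluster; the kinds shown are standard Kubernetes ones, and the namespace, name, and verb values are examples only):

```yaml
kubernetes_resources:
  - kind: pods          # "pods" has no dot: core API group
    api_group: ""
    namespace: default
    name: "*"
    verbs: ["get", "list"]
  - kind: deployments   # "deployments.apps": kind before the first dot, group after it
    api_group: apps
    namespace: default
    name: "*"
    verbs: ["get", "list"]
  - kind: cronjobs      # "cronjobs.batch"
    api_group: batch
    namespace: default
    name: "*"
    verbs: ["get", "list"]
```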
Currently, Teleport supports: + + | Verb | Grants access to | + | ---- | ----------- | + | `*` | All operations | + | `get` | Read a resource | + | `list` | List resources | + | `create` | Create a resource | + | `update` | Update a resource | + | `patch` | Patch a resource | + | `delete` | Delete a resource | + | `deletecollection` | Delete a collection of resources | + | `watch` | Watch resources | + | `portforward` | Create portforward requests for Pods | + | `exec` | Execute commands in Pods | + +For the `namespace`, `name` and `api_group` fields, you can add a wildcard character +(`*`) to replace any sequence of characters. For example, `name: "pod-*-*"` +matches pods named `pod-1-a` and `pod-2-c`. As with `kubernetes_labels`, if a +value begins with `^` and ends in `$`, the Kubernetes Service will treat it as a +regular expression using Go's `re2` syntax (see the `re2` +[README](https://github.com/google/re2/wiki/Syntax)). + + +For a user to access a pod named in a role's `kubernetes_resources` field, the user +must be assigned a Teleport role that contains at least one value within +`kubernetes_groups` or `kubernetes_users`. Teleport does not alter Kubernetes +roles to allow or deny access. Read the next section for an explanation of how the +Kubernetes Service evaluates Teleport roles in order to allow or deny access to +pods in a cluster. + + +#### Role V7 + + +Role V7 added support for more `kind` values. It uses singular names while later role +versions use the plural form. + + + +The wildcard (`*`) and `namespace` kinds have special meaning, when using those, pay extra +attention to the intent and the difference of behavior with later versions. + + +```yaml +kind: role +metadata: + name: kube-access +version: v7 +spec: + allow: + kubernetes_labels: + "*": "*" + kubernetes_resources: + - kind: pod + namespace: production + name: webapp + verbs: ["*"] + # ... +``` - `kind`: The kind of resource to enable access to. 
Currently, Teleport supports the following kinds: + + The `*` kind behavior slightly differs from later versions. In V7, it allows access to all resources + including the cluster-wide ones regardless of the `namespace` field. + In later role versions, the `namespace` field is enforced, which means that when upgrading you may need to adjust + your resources, especially on the deny side. + + + The `namespace` kind covers the `namespace` resource as well as all resources within it. + This behavior is different from later role versions where it only covers the resource itself which means that + when upgrading you may need to adjust your resources, especially on the deny side. + + | Kind | Grants access to | | ---- | ----------- | - | `*` | All resources | - | `` | Resource plural name (e.g. pods, deployments, cronjobs, mycustomresources) | + | `*` | All resources, including cluster-wide ones regardless of the `namespace` field | + | `pod` | Pods | + | `secret` | Secrets | + | `configmap` | ConfigMaps | + | `namespace` | Namespaces and all resources within. | + | `service` | Services | + | `serviceaccount` | ServiceAccounts | + | `kube_node` | Nodes | + | `persistentvolume` | PersistentVolumes | + | `persistentvolumeclaim` | PersistentVolumeClaims | + | `deployment` | Deployments | + | `replicaset` | ReplicaSets | + | `statefulset` | StatefulSets | + | `daemonset` | DaemonSets | + | `clusterrole` | ClusterRoles | + | `kube_role` | Roles | + | `clusterrolebinding` | ClusterRoleBindings | + | `rolebinding` | RoleBindings | + | `cronjob` | CronJobs | + | `job` | Jobs | + | `certificatesigningrequest` | CertificateSigningRequests | + | `ingress` | Ingresses | - `namespace`: The Kubernetes namespace in which to allow access to a resource. In the `kube-access` role, we are allowing access to a pod in the `production` namespace. 
+ Note that when the `kind` field is set to `*`, the `namespace` field is ignored, + which is a different behavior from later role versions where the `namespace` field is + enforced. - `name`: The name of the pod to allow access to. In `kube-access`, this is the `webapp` pod. @@ -535,17 +729,60 @@ the following kinds: | `portforward` | Create portforward requests for Pods | | `exec` | Execute commands in Pods | -- `api_group`: The API group of the resource. In `kube-access`, this is the - `""` for `pods` or `apps` for `deployments`. A wildcard can be used to match any API groups. +For both the `namespace` and `name` fields, you can add a wildcard character +(`*`) to replace any sequence of characters. For example, `name: "pod-*-*"` +matches pods named `pod-1-a` and `pod-2-c`. As with `kubernetes_labels`, if a +value begins with `^` and ends in `$`, the Kubernetes Service will treat it as a +regular expression using Go's `re2` syntax (see the `re2` +[README](https://github.com/google/re2/wiki/Syntax)). + + +For a user to access a pod named in a role's `kubernetes_resources` field, the user +must be assigned a Teleport role that contains at least one value within +`kubernetes_groups` or `kubernetes_users`. Teleport does not alter Kubernetes +roles to allow or deny access. Read the next section for an explanation of how the +Kubernetes Service evaluates Teleport roles in order to allow or deny access to +pods in a cluster. + + +#### Role V6 + + +Role V6 introduced the support for the `kubernetes_resources` field, but was restricted +to only allow access to pods with kind 'pod'. Later role versions extended support to additional resources. + + +```yaml +kind: role +metadata: + name: kube-access +version: v6 +spec: + allow: + kubernetes_labels: + "*": "*" + kubernetes_resources: + - kind: pod + namespace: production + name: webapp + # ... +``` + +- `kind`: The kind of resource to enable access to. The only supported value is `pod`. 
+- `namespace`: The Kubernetes namespace in which to allow access to a resource. + In the `kube-access` role, we are allowing access to a pod in the + `production` namespace. +- `name`: The name of the pod to allow access to. In `kube-access`, this is + the `webapp` pod. -For both the `namespace`, `name` and `api_group` fields, you can add a wildcard character +For both the `namespace` and `name` fields, you can add a wildcard character (`*`) to replace any sequence of characters. For example, `name: "pod-*-*"` matches pods named `pod-1-a` and `pod-2-c`. As with `kubernetes_labels`, if a value begins with `^` and ends in `$`, the Kubernetes Service will treat it as a regular expression using Go's `re2` syntax (see the `re2` [README](https://github.com/google/re2/wiki/Syntax)). - + For a user to access a pod named in a role's `kubernetes_resources` field, the user must be assigned a Teleport role that contains at least one value within `kubernetes_groups` or `kubernetes_users`. Teleport does not alter Kubernetes @@ -554,6 +791,206 @@ Kubernetes Service evaluates Teleport roles in order to allow or deny access to pods in a cluster. +#### Examples + +##### Full access to namespaced resources except to production + +The following roles will grant full access to all namespaced resources in all namespaces +except the `production` one, where no resources will be accessible. + +```yaml +kind: role +metadata: + name: kube-access +version: v7 +spec: + allow: + kubernetes_labels: + "*": "*" + kubernetes_resources: + # In v7, namespace means everything in the namespace. + - kind: namespace # v7 uses singular. + name: "*" + verbs: ["*"] + deny: + kubernetes_resources: + - kind: namespace # v7 uses singular. + name: production + verbs: ["*"] + # ... +``` + +```yaml +kind: role +metadata: + name: kube-access +version: v8 +spec: + allow: + kubernetes_labels: + "*": "*" + kubernetes_resources: + # Grant access to namespaced resources in all namespaces. 
+ - kind: "*" + api_group: "*" + name: "*" + namespace: "^.+$" # Match any namespaced resource, will not match cluster-wide resources. + verbs: ["*"] + # Grant access to the namespace resource itself. + - kind: namespaces # In v8, namespaces mean the namespace itself. As it is considered a cluster-wide resource, it must be added separately. + name: "*" + verbs: ["*"] + deny: + kubernetes_resources: + # Deny access to namespaced resources in the production namespace. + - kind: "*" + api_group: "*" + name: "*" + namespace: production + verbs: ["*"] + # Deny access to the namespace resource itself. + - kind: namespaces # v8 uses plural. + name: production + verbs: ["*"] + # ... +``` + +Alternatively, this could also be used: + +```yaml +kind: role +metadata: + name: kube-access +version: v8 +spec: + allow: + kubernetes_labels: + "*": "*" + kubernetes_resources: + # Grant full access on everything. + - kind: "*" + api_group: "*" + name: "*" + namespace: "*" # In v8, '*' will match both namespaced and cluster-wide resources. + verbs: ["*"] + deny: + kubernetes_resources: + # Deny access to cluster-wide resources. + - kind: "*" + api_group: "*" + name: "*" + namespace: "" # Empty namespace means only cluster-wide resources will match. + verbs: ["*"] + # Deny access to namespaced resources in the production namespace. + - kind: "*" + api_group: "*" + name: "*" + namespace: production + verbs: ["*"] + # Deny access to the namespace resource itself. + - kind: namespaces # v8 uses plural. + name: production + verbs: ["*"] + # ... +``` + + +##### Full access to dev namespace and all cluster-wide resources except clusterroles + +The following roles will grant access to all resources in the `dev` namespace and all cluster-wide resources +except `clusterroles`. + +```yaml +kind: role +metadata: + name: kube-access +version: v7 +spec: + allow: + kubernetes_labels: + "*": "*" + kubernetes_resources: + # In v7, "*" includes cluster-wide resources regardless of the namespace. 
+ - kind: "*" + name: "*" + namespace: dev + verbs: ["*"] + deny: + kubernetes_resources: + - kind: clusterrole # v7 uses singular. + name: "*" + verbs: ["*"] + # ... +``` + +```yaml +kind: role +metadata: + name: kube-access +version: v8 +spec: + allow: + kubernetes_labels: + "*": "*" + kubernetes_resources: + # Grant access to all resources in the dev namespace. + # In v8, "*" doesn't include cluster-wide resources if the namespace is set. + - kind: "*" + api_group: "*" + name: "*" + namespace: dev + verbs: ["*"] + # Grant access to all cluster-wide resources. + - kind: "*" + api_group: "*" + name: "*" + namespace: "" # Empty namespace means only cluster-wide resources will match. + verbs: ["*"] + deny: + kubernetes_resources: + - kind: clusterroles # v8 uses plural. + api_group: "*" + name: "*" + # ... +``` + +##### Full access to everything + +```yaml +kind: role +metadata: + name: kube-access +version: v7 +spec: + allow: + kubernetes_labels: + "*": "*" + kubernetes_resources: + - kind: "*" + name: "*" + namespace: "*" + verbs: ["*"] + # ... +``` + +```yaml +kind: role +metadata: + name: kube-access +version: v8 +spec: + allow: + kubernetes_labels: + "*": "*" + kubernetes_resources: + - kind: "*" + api_group: "*" + name: "*" + namespace: "*" # Wildcard '*' will match both namespaced and cluster-wide resources. + verbs: ["*"] + # ... 
+``` + ## How the Kubernetes Service evaluates Teleport roles When a Teleport user makes a request to a Kubernetes cluster's API server, the @@ -648,7 +1085,7 @@ version: v7 spec: allow: kubernetes_labels: - - "region": "us-east-2" + "region": "us-east-2" kubernetes_resources: - kind: pod namespace: "development" diff --git a/docs/pages/enroll-resources/kubernetes-access/faq.mdx b/docs/pages/enroll-resources/kubernetes-access/faq.mdx index 64626b6b363b9..ae71adaa88de1 100644 --- a/docs/pages/enroll-resources/kubernetes-access/faq.mdx +++ b/docs/pages/enroll-resources/kubernetes-access/faq.mdx @@ -1,6 +1,11 @@ --- title: Kubernetes Access FAQ -description: Frequently asked questions about Teleport Kubernetes Access +sidebar_label: FAQ +description: Frequently asked questions about protecting Kubernetes clusters with Teleport. +tags: + - faq + - zero-trust + - infrastructure-identity --- This page offers answers to frequently asked questions about Teleport's diff --git a/docs/pages/enroll-resources/kubernetes-access/getting-started.mdx b/docs/pages/enroll-resources/kubernetes-access/getting-started.mdx index 554cd44489112..2db7a7980a543 100644 --- a/docs/pages/enroll-resources/kubernetes-access/getting-started.mdx +++ b/docs/pages/enroll-resources/kubernetes-access/getting-started.mdx @@ -1,7 +1,12 @@ --- -title: Enroll a Kubernetes Cluster +title: Get Started with Enrolling a Kubernetes Cluster +sidebar_label: Getting Started description: Demonstrates how to enroll a Kubernetes cluster as a resource protected by Teleport. videoBanner: 3AUGrOZ5me0 +tags: + - how-to + - zero-trust + - infrastructure-identity --- This guide demonstrates how to enroll a Kubernetes cluster as a Teleport @@ -22,21 +27,7 @@ For information about other ways to enroll and discover Kubernetes clusters, see ## Prerequisites -- Access to a running Teleport cluster, `tctl` admin tool, and `tsh` client tool, - version >= (=teleport.version=). 
- - You can verify the tools you have installed by running the following commands: - - ```code - $ tctl version - # Teleport v(=teleport.version=) go(=teleport.golang=) - - $ tsh version - # Teleport v(=teleport.version=) go(=teleport.golang=) - ``` - - You can download these tools by following the appropriate [Installation - instructions](../../installation.mdx#linux) for your environment. +(!docs/pages/includes/edition-prereqs-tabs.mdx!) (!docs/pages/includes/kubernetes-access/helm-k8s.mdx!) diff --git a/docs/pages/enroll-resources/kubernetes-access/health-checks.mdx b/docs/pages/enroll-resources/kubernetes-access/health-checks.mdx new file mode 100644 index 0000000000000..4b773df7598a3 --- /dev/null +++ b/docs/pages/enroll-resources/kubernetes-access/health-checks.mdx @@ -0,0 +1,267 @@ +--- +title: Teleport Kubernetes Health Checks +sidebar_label: Health Checks +sidebar_position: 6 +description: How to configure Teleport Kubernetes health checks and view health. +tags: +- conceptual +- zero-trust +- infrastructure-identity +--- + +This page provides an overview of Kubernetes cluster health checks with Teleport. Teleport Kubernetes Service instances periodically check the connectivity and permissions of enrolled Kubernetes clusters. + +Kubernetes cluster health checks are available in Teleport version (= kubernetes.health_check_min_version =) and later. + +Teleport Kubernetes health checks provide: + +- **Observability**: Discover network and permission issues before users do. Unhealthy Kubernetes clusters are visible in the Teleport Web UI, Teleport Connect, the `tctl` command-line tool, and Prometheus metrics. +- **High Availability**: Automatically route and distribute connections to healthy Kubernetes clusters in a high-availability configuration. + +## How Kubernetes health checks work + +The Teleport Kubernetes Service checks Kubernetes permissions and the cluster's health endpoint to determine whether a Kubernetes cluster is both up and usable with Teleport.
+ +Four Kubernetes RBAC permissions are routinely checked with the Kubernetes [SelfSubjectAccessReview](https://kubernetes.io/docs/reference/kubernetes-api/authorization-resources/self-subject-access-review-v1/) API. These permissions are part of the minimum requirements for Teleport to work with a Kubernetes cluster. The checked permissions are: +- Impersonate users +- Impersonate groups +- Impersonate service accounts +- Get pods + +If a permission can't be checked, the Kubernetes cluster's [/readyz](https://kubernetes.io/docs/reference/using-api/health-checks/) endpoint is called to distinguish between connection errors and Kubernetes component errors. + +## Health states + +A Kubernetes cluster is in a `healthy`, `unhealthy`, or `unknown` state. +- `healthy` indicates a Kubernetes cluster's health was checked and no problems were found +- `unhealthy` indicates a Kubernetes cluster's health was checked and the cluster is not usable +- `unknown` indicates a Kubernetes cluster has been excluded from health checks, or the first health check is initializing + + + +`unknown` Kubernetes clusters may be unhealthy. + +`unknown` Kubernetes clusters are not checked for health due to: +- Running a Teleport Kubernetes Service instance older than version (= kubernetes.health_check_min_version =) +- Explicitly disabling `health_check_config` +- Explicitly configuring `health_check_config` labels to exclude a Kubernetes cluster + + +## Viewing health + +Kubernetes cluster health can be viewed through the Teleport Web UI, Teleport Connect, the `tctl` CLI tool, or Prometheus metrics. + +### Teleport Web UI and Teleport Connect + +The Teleport Web UI and Teleport Connect highlight unhealthy Kubernetes clusters. + +Clicking a highlighted cluster shows details about why it is unhealthy. + +![Kubernetes health warning in the UI](../../../img/resource-health-check/kubernetes-health-warning.png) + +It may take approximately `5m` for a health change to be reported.
+{/* TODO(gavin): delete this line when we fix health status reporting delay */} + +Click the circular arrow refresh icon to get the latest health status. + +### Teleport `tctl` CLI + +The `tctl` CLI tool can search for and display unhealthy Kubernetes clusters: +```code +$ tctl kube ls --query 'health.status == "unhealthy"' +``` + +Run `tctl get kube_server/` for an overview of Kubernetes cluster health for a specific Kubernetes Service instance: +```yaml +kind: kube_server +... +status: + target_health: + address: 192.168.106.2:58458 + message: 1 health check passed + protocol: http + status: healthy + transition_reason: threshold_reached + transition_timestamp: "2025-10-13T19:26:58.842855Z" +version: v3 +``` + +### Teleport Prometheus metrics + +Health check metrics offer a high-level view of Kubernetes cluster health. The total number of Kubernetes clusters in a `healthy`, `unhealthy`, or `unknown` state is monitored with the gauge metrics `teleport_resources_health_status_healthy`, `teleport_resources_health_status_unhealthy`, and `teleport_resources_health_status_unknown`. + +- `teleport_resources_health_status_healthy{type="kubernetes"}` is the total number of _healthy_ Kubernetes clusters +- `teleport_resources_health_status_unhealthy{type="kubernetes"}` is the total number of _unhealthy_ Kubernetes clusters +- `teleport_resources_health_status_unknown{type="kubernetes"}` is the total number of Kubernetes clusters in an _unknown_ state + +A [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/) expression may be used to determine the total number of Kubernetes clusters: +```promql +teleport_resources_health_status_healthy{type="kubernetes"} + +teleport_resources_health_status_unhealthy{type="kubernetes"} + +teleport_resources_health_status_unknown{type="kubernetes"} +``` + +A PromQL expression may be used to detect the presence of unhealthy Kubernetes clusters.
+```promql +teleport_resources_health_status_unhealthy{type="kubernetes"} > 0 +``` + + +Prometheus metrics don't distinguish which cluster is unhealthy. + +To determine which cluster is unhealthy, use the Teleport Web UI, Teleport Connect, or the following `tctl` command: + +```code +$ tctl kube ls --query 'health.status == "unhealthy"' +``` + + +Health check metrics may also be viewed with the Teleport diagnostic endpoint `http:///metrics`. +```text +# HELP teleport_resources_health_status_healthy Number of healthy resources +# TYPE teleport_resources_health_status_healthy gauge +teleport_resources_health_status_healthy{type="kubernetes"} 99972 +# HELP teleport_resources_health_status_unhealthy Number of unhealthy resources +# TYPE teleport_resources_health_status_unhealthy gauge +teleport_resources_health_status_unhealthy{type="kubernetes"} 3 +# HELP teleport_resources_health_status_unknown Number of resources in an unknown health state +# TYPE teleport_resources_health_status_unknown gauge +teleport_resources_health_status_unknown{type="kubernetes"} 25 +``` + +Diagnostic endpoint metrics are disabled by default. See [Monitoring your Teleport deployment](../../zero-trust-access/management/diagnostics/diagnostics.mdx) to learn how to enable diagnostic metrics. + +## Configuring health checks + +The Teleport `tctl` CLI tool enables reading, adding, editing, and deleting `health_check_config` resources. + +`health_check_config` resources offer a way to configure and selectively apply health checks to Kubernetes clusters. + +An example `health_check_config`. +```yaml +kind: health_check_config +version: v1 +metadata: + name: example + description: Example healthcheck configuration +spec: + # interval is the time between each health check. Default 30s. + interval: 30s + # timeout is the health check connection establishment timeout. Default 5s. 
+ timeout: 5s + # healthy_threshold is the number of consecutive passing health checks + # after which a target's health status becomes "healthy". Default 2. + healthy_threshold: 2 + # unhealthy_threshold is the number of consecutive failing health checks + # after which a target's health status becomes "unhealthy". Default 1. + unhealthy_threshold: 1 + # match is used to select Kubernetes clusters that apply these settings. + # Kubernetes clusters are matched by label selectors and at least one label selector + # must be set. + # If multiple `health_check_config` resources match the same Kubernetes cluster, + # the matching health check configs are sorted by name and only the first + # config applies. + match: + # disable all labels and expressions for all resources in this config + disable: false + # kubernetes_labels matches Kubernetes cluster labels. An empty value is ignored. + # If kubernetes_labels_expression is also set, then the match result is the logical + # AND of both. + kubernetes_labels: + - name: env + values: + - dev + - staging + # kubernetes_labels_expression is a label predicate expression to match Kubernetes clusters. + # An empty value is ignored. + # If kubernetes_labels is also set, then the match result is the logical AND of both. + kubernetes_labels_expression: 'labels["owner"] == "platform-team"' +``` + +A `default-kube` `health_check_config` is introduced in version (= kubernetes.health_check_min_version =), and enables all Kubernetes clusters to participate in health checks. +```yaml +kind: health_check_config +metadata: + description: Enables all health checks by default + name: default-kube +spec: + match: + kubernetes_labels: + - name: '*' + values: + - '*' +version: v1 +``` + +`default-kube` may be disabled, but not permanently deleted. Deleting with `tctl rm health_check_config/default-kube` has the effect of resetting the config to its default settings and matching all Kubernetes clusters. 
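The precedence rule noted in the comments above — matching configs are sorted by name and only the first applies — can be used to carve out stricter settings for a subset of clusters. Below is a hypothetical additional config; the `00-` name prefix and the `env: prod` label are illustrative, not required values:

```yaml
kind: health_check_config
version: v1
metadata:
  name: 00-prod-kube
  description: Stricter health checks for production Kubernetes clusters
spec:
  # Check more often than the 30s default.
  interval: 15s
  timeout: 5s
  # Require three consecutive passes before a cluster becomes "healthy".
  healthy_threshold: 3
  unhealthy_threshold: 1
  match:
    # Only clusters labeled env=prod are covered by this config.
    kubernetes_labels:
      - name: env
        values:
          - prod
```

Because `00-prod-kube` sorts before `default-kube`, it takes precedence for any cluster both configs match; all other clusters keep the default settings.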
+ +A separate `default` `health_check_config` also exists; it matches databases for health checks. + +Multiple `health_check_config` resources may be created for different groups of Kubernetes clusters. When multiple `health_check_config` resources match the same Kubernetes cluster, configs are sorted in ascending order by name, and only the first config applies (e.g., the name "00-my-config" has greater precedence than "10-my-config"). + +## Disabling Kubernetes health checks + +Set the `match.disable` field to `true` on any `health_check_config`. + +For example, use `tctl edit health_check_config/default-kube`: +```yaml +kind: health_check_config +metadata: + description: Enables all health checks by default + name: default-kube +spec: + match: + disable: true + kubernetes_labels: + - name: '*' + values: + - '*' +version: v1 +``` + +Any defined labels, such as `kubernetes_labels`, are ignored when `disable: true`. + +## `tctl` health check commands + +Read the default health check config with `tctl get`: +```bash +$ tctl get health_check_config/default-kube +``` + +Create a new health check config with `tctl create`: +```bash +$ tctl create health_check_config.yaml +``` + +Update an existing config interactively with `tctl edit`: +```bash +$ tctl edit health_check_config/default-kube +``` + +Delete a health check config with `tctl rm`: +```bash +$ tctl rm health_check_config/example +``` + +Teleport Kubernetes Service instances are notified of changes to `health_check_config` resources and reevaluate whether each Kubernetes cluster participates in health checks, applying any changes. + +## FAQ + +### What guidance is there for troubleshooting unhealthy Kubernetes clusters? + +See the [Kubernetes Service troubleshooting guide](./troubleshooting.mdx) for specific errors returned by health checks. + +### Do health check metrics show which Kubernetes cluster is unhealthy? + +No.
A specific Kubernetes cluster's health cannot be determined from the `teleport_resources_health_status_*` health metrics. + +Only the quantity of unhealthy Kubernetes clusters is available from metrics. + +### How do I configure health checks for high availability? + +No additional configuration is needed. + +Health-based connection routing is automatic when multiple Teleport Kubernetes Service instances are proxying to the same Kubernetes cluster. + +To achieve high availability, you still need to deploy multiple Teleport Kubernetes Service instances that proxy the same Kubernetes cluster. + diff --git a/docs/pages/enroll-resources/kubernetes-access/introduction.mdx b/docs/pages/enroll-resources/kubernetes-access/introduction.mdx index 5af6b08f073af..9e671fa2a08e0 100644 --- a/docs/pages/enroll-resources/kubernetes-access/introduction.mdx +++ b/docs/pages/enroll-resources/kubernetes-access/introduction.mdx @@ -1,6 +1,11 @@ --- title: Introduction to Enrolling Kubernetes Clusters +sidebar_label: Introduction description: Learn how Teleport can protect your Kubernetes clusters with RBAC, audit logging, and more. +tags: + - conceptual + - zero-trust + - infrastructure-identity --- Teleport provides secure access to Kubernetes clusters: @@ -15,7 +20,7 @@ Teleport provides secure access to Kubernetes clusters: The guides in this section show you how to protect Kubernetes clusters with Teleport. For instructions on self-hosting Teleport Community Edition or Teleport Enterprise on Kubernetes, see the [Kubernetes Deployment -Guides](../../admin-guides/deploy-a-cluster/helm-deployments/helm-deployments.mdx). +Guides](../../zero-trust-access/deploy-a-cluster/helm-deployments/helm-deployments.mdx).
Here is an example of using Teleport to access a Kubernetes cluster, execute commands, and view your `kubectl` activity in Teleport's audit log: diff --git a/docs/pages/enroll-resources/kubernetes-access/kubernetes-access.mdx b/docs/pages/enroll-resources/kubernetes-access/kubernetes-access.mdx index eb25b21adae4d..ff884038fec96 100644 --- a/docs/pages/enroll-resources/kubernetes-access/kubernetes-access.mdx +++ b/docs/pages/enroll-resources/kubernetes-access/kubernetes-access.mdx @@ -1,6 +1,129 @@ --- title: Kubernetes Clusters -description: Guides to protecting Kubernetes clusters with Teleport +description: Protect Kubernetes clusters with granular role-based access controls, audit logging, and session recording. +template: doc-page +tags: + - zero-trust + - infrastructure-identity --- - +import DocHero, { DocHeroProps } from "@site/src/components/Pages/Landing/DocHero"; +import Resources from "@site/src/components/Pages/Homepage/Resources"; +import UseCasesList from "@site/src/components/Pages/Landing/UseCasesList"; + +import noteSvg from "@site/src/components/Icon/svg/note2.svg"; +import questionSvg from "@site/src/components/Icon/svg/question2.svg"; +import identificationBadgeSvg from "@site/src/components/Icon/svg/identification-badge.svg"; +import listChecksSvg from "@site/src/components/Icon/svg/list-checks.svg"; +import kubernetesClustersSvg from "@site/src/components/Icon/teleport-svg/kubernetes-clusters.svg"; +import iamSvg from "@site/src/components/Icon/svg/awsIdentity.svg"; +import kubernetesSvg from "@site/src/components/Icon/svg/kubernetes2.svg"; +import awsSvg from "@site/src/components/Icon/teleport-svg/aws.svg"; +import gcpSvg from "@site/src/components/Icon/teleport-svg/gcp.svg"; +import azureSvg from "@site/src/components/Icon/teleport-svg/azure.svg"; + +{/*vale messaging.protocol-products = NO*/} + +{/*vale messaging.protocol-products = YES*/} +Log in with SSO and securely access enrolled Kubernetes clusters. 
Manage access within clusters with granular RBAC down to specific cluster resources, and meet compliance requirements with session recordings and detailed audit logs. + + + + + + + \ No newline at end of file diff --git a/docs/pages/enroll-resources/kubernetes-access/manage-access.mdx b/docs/pages/enroll-resources/kubernetes-access/manage-access.mdx index b837ca3c54626..2dc23551c167f 100644 --- a/docs/pages/enroll-resources/kubernetes-access/manage-access.mdx +++ b/docs/pages/enroll-resources/kubernetes-access/manage-access.mdx @@ -1,6 +1,11 @@ --- title: Setting Up Access Controls for Kubernetes +sidebar_label: Access Controls Guide description: How to configure Teleport roles to access clusters, groups, users, and resources in Kubernetes. +tags: + - how-to + - zero-trust + - privileged-access --- The Teleport Kubernetes Service is a proxy that sits between Kubernetes users @@ -452,8 +457,8 @@ RBAC configurations. Now that you know how to configure Teleport's RBAC system to control access to Kubernetes clusters, learn how to set up [Resource Access -Requests](../../admin-guides/access-controls/access-requests/resource-requests.mdx) +Requests](../../identity-governance/access-requests/resource-requests.mdx) for just-in-time access and [Access Request -plugins](../../admin-guides/access-controls/access-request-plugins/access-request-plugins.mdx) so you can manage +plugins](../../identity-governance/access-requests/plugins/plugins.mdx) so you can manage access with your communication workflow of choice.
diff --git a/docs/pages/enroll-resources/kubernetes-access/register-clusters/dynamic-registration.mdx b/docs/pages/enroll-resources/kubernetes-access/register-clusters/dynamic-registration.mdx index efdaa8fb77b4e..b3988e2df69d2 100644 --- a/docs/pages/enroll-resources/kubernetes-access/register-clusters/dynamic-registration.mdx +++ b/docs/pages/enroll-resources/kubernetes-access/register-clusters/dynamic-registration.mdx @@ -1,6 +1,10 @@ --- title: Dynamic Kubernetes Cluster Registration description: Register and unregister Kubernetes clusters without restarting a Teleport Kubernetes Service instance. +tags: + - how-to + - zero-trust + - infrastructure-identity --- With dynamic Kubernetes cluster registration, you can manage the Kubernetes @@ -15,6 +19,17 @@ In this guide, we will show you how to set up dynamic Kubernetes cluster registration, then create, list, update, and delete Kubernetes clusters via `tctl`. +## How it works + +The Teleport Kubernetes Service proxies traffic from Teleport users to a +Kubernetes API server so you can take advantage of passwordless authentication, +role-based access controls, audit logging, and other Teleport features in order +to manage access to Kubernetes. + +In this guide, you will install the Teleport Kubernetes Service on a Linux host +and configure it to access any Kubernetes cluster you register with your +Teleport cluster. + ## Prerequisites (!docs/pages/includes/edition-prereqs-tabs.mdx!) @@ -36,14 +51,8 @@ registration, then create, list, update, and delete Kubernetes clusters via ## Step 1/3. Set up the Teleport Kubernetes Service -The Teleport Kubernetes Service proxies traffic from Teleport users to a -Kubernetes API server so you can take advantage of passwordless authentication, -role-based access controls, audit logging, and other Teleport features in order -to manage access to Kubernetes.
- -In this step, you will install the Teleport Kubernetes Service on a Linux host -and configure it to access any Kubernetes cluster you register with your -Teleport cluster. +This step shows you how to install the Teleport Kubernetes Service on a Linux +server. ### Get a join token diff --git a/docs/pages/enroll-resources/kubernetes-access/register-clusters/iam-joining.mdx b/docs/pages/enroll-resources/kubernetes-access/register-clusters/iam-joining.mdx index 34bbe75aab12c..42cd85453a0a7 100644 --- a/docs/pages/enroll-resources/kubernetes-access/register-clusters/iam-joining.mdx +++ b/docs/pages/enroll-resources/kubernetes-access/register-clusters/iam-joining.mdx @@ -1,6 +1,10 @@ --- title: Register a Kubernetes Cluster using IAM Joining description: Connecting a Kubernetes cluster to Teleport with IAM joining. +tags: + - how-to + - zero-trust + - infrastructure-identity --- In this guide, we will show you how to register a Kubernetes cluster with diff --git a/docs/pages/enroll-resources/kubernetes-access/register-clusters/register-clusters.mdx b/docs/pages/enroll-resources/kubernetes-access/register-clusters/register-clusters.mdx index 00ce72a418f68..a3291eda9c260 100644 --- a/docs/pages/enroll-resources/kubernetes-access/register-clusters/register-clusters.mdx +++ b/docs/pages/enroll-resources/kubernetes-access/register-clusters/register-clusters.mdx @@ -1,7 +1,12 @@ --- title: Registering Kubernetes Clusters with Teleport +sidebar_label: Registering Clusters +sidebar_position: 3 description: How to manually add a Kubernetes cluster to Teleport after creating it. 
-layout: tocless-doc +template: "no-toc" +tags: + - zero-trust + - infrastructure-identity --- In some cases, you will want to register a Kubernetes cluster with Teleport diff --git a/docs/pages/enroll-resources/kubernetes-access/register-clusters/static-kubeconfig.mdx b/docs/pages/enroll-resources/kubernetes-access/register-clusters/static-kubeconfig.mdx index 46975242971f9..4b186351b9e04 100644 --- a/docs/pages/enroll-resources/kubernetes-access/register-clusters/static-kubeconfig.mdx +++ b/docs/pages/enroll-resources/kubernetes-access/register-clusters/static-kubeconfig.mdx @@ -1,6 +1,10 @@ --- title: Register a Kubernetes Cluster with a Static kubeconfig description: Connecting standalone Teleport installations to Kubernetes clusters. +tags: + - how-to + - zero-trust + - infrastructure-identity --- While you can register a Kubernetes cluster with Teleport by running the diff --git a/docs/pages/enroll-resources/kubernetes-access/troubleshooting.mdx b/docs/pages/enroll-resources/kubernetes-access/troubleshooting.mdx index 4bd3c67668e08..74b8b33eb5933 100644 --- a/docs/pages/enroll-resources/kubernetes-access/troubleshooting.mdx +++ b/docs/pages/enroll-resources/kubernetes-access/troubleshooting.mdx @@ -1,6 +1,11 @@ --- title: Kubernetes Access Troubleshooting +sidebar_label: Troubleshooting description: Troubleshooting common issues with Kubernetes access +tags: + - how-to + - zero-trust + - infrastructure-identity --- This page describes common issues with Kubernetes and how to resolve them. @@ -175,7 +180,7 @@ joinParams: method: "token" tokenName: "${TOKEN}" kubeClusterName: "${CLUSTER}" labels: "type" : "autopilot" EOF # Install the helm chart with the values.yaml setting @@ -271,4 +276,139 @@ resource with the `create` verb, update it to allow the `get` verb as well: ``` Once the `ClusterRole` is updated, you should be able to exec into Pods using -`kubectl` version 1.30 or later. \ No newline at end of file +`kubectl` version 1.30 or later.
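Teleport's minimum Kubernetes permissions — impersonating users, groups, and service accounts, plus getting pods — can be granted with a `ClusterRole` and `ClusterRoleBinding` along the following lines. This is a sketch: the object names and the `teleport-sa` service account in the `teleport` namespace are illustrative and should be replaced with your agent's actual service account.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: teleport-impersonation
rules:
  # Let Teleport impersonate users, groups, and service accounts.
  - apiGroups: [""]
    resources: ["users", "groups", "serviceaccounts"]
    verbs: ["impersonate"]
  # Let Teleport get pods.
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: teleport-impersonation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: teleport-impersonation
subjects:
  - kind: ServiceAccount
    name: teleport-sa
    namespace: teleport
```

Apply the manifest with `kubectl apply -f`, then verify access from the Teleport side.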
+ +## Health Check: Unable to contact the Kubernetes cluster + +### Symptoms + +A Kubernetes cluster is reported as `unhealthy`, and all calls between Teleport and the Kubernetes cluster API fail. + +### Explanation + +A Teleport Kubernetes Service performed multiple health checks on a Kubernetes cluster and did not receive a response. + +Calls to Kubernetes cluster API endpoints `/selfsubjectaccessreviews` and `/readyz` returned errors without HTTP status codes. + +### Resolution + +Examine the Kubernetes cluster and network for root causes. + +Check the state of the Kubernetes cluster: +```bash +kubectl cluster-info +kubectl get --raw /readyz +``` + +Check network and firewall rules: +- Verify the network configuration +- Check security groups +- Check network policies + +## Health Check: Missing required Kubernetes permissions + +### Symptoms + +A Kubernetes cluster is reported as `unhealthy`, and is missing one or more Kubernetes permissions such as `impersonate users`, `impersonate groups`, `impersonate service accounts`, or `get pods`. + +### Explanation + +A Teleport Kubernetes Service performed a health check on a Kubernetes cluster, and received a response indicating that required Kubernetes permissions are denied. + +The Kubernetes cluster is missing a `ClusterRole`, or is configured with a partial `ClusterRole`. Teleport requires four Kubernetes [SelfSubjectAccessReview](https://kubernetes.io/docs/reference/kubernetes-api/authorization-resources/self-subject-access-review-v1/) permissions: +- Impersonate users +- Impersonate groups +- Impersonate service accounts +- Get pods + +### Resolution + +Add a `ClusterRole` and `ClusterRoleBinding` to the Kubernetes cluster with the four permissions. + +```bash +$ kubectl config use-context +kubectl apply -f - <
For more information, see the - [deployment guides](../deployment/deployment.mdx). - -- If you followed the above guide, note the `--destination-dir=/opt/machine-id` - flag, which defines the directory where SSH certificates and OpenSSH configuration - used by Ansible will be written. - - In particular, you will be using the `/opt/machine-id/ssh_config` file in your - Ansible configuration to define how Ansible should connect to Teleport Nodes. - -- (!docs/pages/includes/tctl.mdx!) - -## Step 1/4. Configure RBAC - -As Ansible will use the credentials produced by `tbot` to connect to the SSH -nodes, you first need to configure Teleport to grant the bot access. This is -done by creating a role that grants the necessary permissions and then assigning -this role to a Bot. - -In this example, access will be granted to all SSH nodes for the username -. Ensure that you set this to a username that is available -across your SSH nodes and that will have the appropriate privileges to manage -your nodes. - -Create a file called `role.yaml` with the following content: - -```yaml -kind: role -version: v6 -metadata: - name: example-role -spec: - allow: - # Allow login to the user 'root'. - logins: [''] - # Allow connection to any node. Adjust these labels to match only nodes - # that Ansible needs to access. - node_labels: - '*': '*' -``` - -Replace `example-role` with a descriptive name related to your use case. - -For production use, you should use labels to restrict this access to only the -hosts that Ansible will need to access. This is known as the principal of -least privilege and reduces damage that exfiltrated credentials can do. - -Use `tctl create -f ./role.yaml` to create the role. - -(!docs/pages/includes/create-role-using-web.mdx!) - -Now, use `tctl bots update` to add the role to the Bot. 
Replace `example` -with the name of the Bot you created in the deployment guide and `example-role` -with the name of the role you just created: - -```code -$ tctl bots update example --add-roles example-role -``` - -## Step 2/4. Configure the `tbot` output - -Now, `tbot` needs to be configured with an output that will produce the -credentials and SSH configuration that is needed by Ansible. For SSH, -we use the `identity` output type. - -Outputs must be configured with a destination. In this example, the `directory` -destination will be used. This will write these credentials to a specified -directory on disk. Ensure that this directory can be written to by the Linux -user that `tbot` runs as, and that it can be read by the Linux user that Ansible -will run as. - -Modify your `tbot` configuration to add an `identity` output: - -```yaml -outputs: -- type: identity - destination: - type: directory - # For this guide, /opt/machine-id is used as the destination directory. - # You may wish to customize this. Multiple outputs cannot share the same - # destination. - path: /opt/machine-id -``` - -If operating `tbot` as a background service, restart it. If running `tbot` in -one-shot mode, it must be executed before you attempt to execute the Ansible -playbook. - -You should now see several files under `/opt/machine-id`: - -- `ssh_config`: this can be used with Ansible or OpenSSH to configure them to -use the Teleport Proxy Service with the correct credentials when making connections. -- `known_hosts`: this contains the Teleport SSH host CAs and allows the SSH -client to validate a host's certificate. -- `key-cert.pub`: this is an SSH certificate signed by the Teleport SSH user -CA. -- `key`: this is the private key needed to use the SSH certificate. - -Next, Ansible will be configured to use these files when making connections. - -## Step 3/4. Configure Ansible - -Create a folder named `ansible` where all Ansible files will be collected. 
```code
$ mkdir -p ansible
$ cd ansible
```

Create a file called `ansible.cfg`. We will configure Ansible
to run the OpenSSH client with the configuration file generated
by Machine ID, `/opt/machine-id/ssh_config`.

```ini
[defaults]
host_key_checking = True
inventory=./hosts
remote_tmp=/tmp

[ssh_connection]
scp_if_ssh = True
ssh_args = -F /opt/machine-id/ssh_config
```

You can then create an inventory file called `hosts`. This should refer to the
hosts by their hostname as registered in Teleport, with the name of your
Teleport cluster appended. For example, if your cluster is
called `teleport.example.com` and your host is called `node1`, the entry in
`hosts` would be `node1.teleport.example.com`.

You can generate an inventory file for all your nodes that meets this
requirement with a script like the following:

```code
# Source tsh env to get the name of the current Teleport cluster.
$ eval "$( tsh env )"
# You can modify the `tsh ls` command to filter nodes based on label.
$ tsh ls --format=json | jq --arg cluster $TELEPORT_CLUSTER -r '.[].spec.hostname + "." + $cluster' > hosts
```
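Assuming a cluster named `teleport.example.com` with two registered nodes (the hostnames here are illustrative), the generated `hosts` file would look like the following:

```txt
node1.teleport.example.com
node2.teleport.example.com
```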
-Not seeing Nodes? - -(!docs/pages/includes/node-logins.mdx!) - -
## Step 4/4. Run a playbook

Finally, let's create a simple Ansible playbook, `playbook.yaml`. The example
playbook below runs `hostname` on all hosts.

```yaml
- hosts: all
  remote_user: root
  tasks:
    - name: "hostname"
      command: "hostname"
```

From the folder `ansible`, run the Ansible playbook:

```code
$ ansible-playbook playbook.yaml

# PLAY [all] *****************************************************************
#
# TASK [Gathering Facts] *****************************************************
#
# ok: [terminal]
#
# TASK [hostname] ************************************************************
# changed: [terminal]
#
# PLAY RECAP *****************************************************************
# terminal : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```

You are all set. You have provided your machine with short-lived certificates
tied to a machine identity that can be rotated, audited, and controlled with
all the familiar access controls.

## Troubleshooting

If Ansible cannot connect, you may see an error like this one:

```txt
example.host | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname node-name: Name or service not known",
    "unreachable": true
}
```

You can examine and tweak patterns matching the inventory hosts in `ssh_config`.
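To see which host patterns the generated OpenSSH configuration covers, you can list its `Host` stanzas. This example assumes the `/opt/machine-id` destination directory used in this guide:

```code
$ grep '^Host ' /opt/machine-id/ssh_config
```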
- -Try the SSH connection using `ssh_config` with verbose mode to inspect the error: - -```code -$ ssh -vvv -F /opt/machine-id/ssh_config @node-name.example.com -``` - -If `ssh` works, try running the playbook with verbose mode on: - -```code -$ ansible-playbook -vvv playbook.yaml -``` - -If your hostnames contain uppercase characters (like `MYHOSTNAME`), please note that Teleport's internal hostname matching -is case sensitive by default, which can also lead to seeing this error. - -If this is the case, you can work around this by enabling case-insensitive routing at the cluster level. - - - - -Edit your `/etc/teleport.yaml` config file on all servers running the Teleport `auth_service`, then restart Teleport on each. - -```yaml -auth_service: - case_insensitive_routing: true -``` - - - - -Run `tctl edit cluster_networking_config` to add the following specification, then save and exit. - -```yaml -spec: - case_insensitive_routing: true -``` - - - - -## Next steps - -- Read the [configuration reference](../../../reference/machine-id/configuration.mdx) to explore - all the available configuration options. diff --git a/docs/pages/enroll-resources/machine-id/access-guides/kubernetes.mdx b/docs/pages/enroll-resources/machine-id/access-guides/kubernetes.mdx deleted file mode 100644 index fa31db21ad6f7..0000000000000 --- a/docs/pages/enroll-resources/machine-id/access-guides/kubernetes.mdx +++ /dev/null @@ -1,209 +0,0 @@ ---- -title: Machine ID with Kubernetes Access -description: How to use Machine ID to access Kubernetes clusters ---- - -Teleport protects and controls access to Kubernetes -clusters. Machine ID can be used to grant machines secure, short-lived -access to these clusters. - -In this guide, you will configure `tbot` to produce credentials that can be -used to access a Kubernetes cluster enrolled with your Teleport cluster. - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) 
- -- If you have not already connected your Kubernetes cluster to Teleport, follow - [Enroll a Kubernetes Cluster](../../kubernetes-access/getting-started.mdx). -- (!docs/pages/includes/tctl.mdx!) -- To configure the Kubernetes cluster, your client system will need to have - `kubectl` installed. See the - [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/) for - installation instructions. -- `tbot` must already be installed and configured on the machine that will - access Kubernetes clusters. For more information, see the - [deployment guides](../deployment/deployment.mdx). -- To demonstrate connecting to the Kubernetes cluster, the machine that will - access Kubernetes clusters will need to have `kubectl` installed. See the - [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/) for - installation instructions. - -## Step 1/3. Configure Teleport and Kubernetes RBAC - -First, we need to configure the RBAC for both Teleport and Kubernetes in order -to grant the bot the correct level of access. - -When forwarding requests to the Kubernetes API on behalf of a bot, the -Teleport Proxy attaches the groups configured (using `kubernetes_groups`) in the -bot's Teleport roles to the request. These groups are then used to configure a -RoleBinding or ClusterRoleBinding in Kubernetes to grant specific permissions -within the Kubernetes cluster to the bot. - -For the purpose of this guide, we will bind the `editor` group to the default -`edit` ClusterRole that is preconfigured in most Kubernetes clusters to give -the bot read and write access to resources in all the cluster namespaces. - -When configuring this for a production environment, you should consider: - -- If RoleBinding should be used instead of ClusterRoleBinding to limit the - bot's access to a specific namespace. -- If a Role should be created that grants the bot the least privileges - necessary rather than using a pre-existing general Role such as `edit`. 
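As a sketch of the more restrictive alternative, a namespace-scoped RoleBinding can grant the `editor` group the built-in `edit` permissions within a single namespace only. The namespace and binding name below are illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: teleport-editor-edit
  # The binding only grants permissions within this namespace.
  namespace: my-namespace
subjects:
  - kind: Group
    name: editor
    apiGroup: rbac.authorization.k8s.io
roleRef:
  # A RoleBinding may reference a ClusterRole; its rules then apply
  # only within the binding's namespace.
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```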
- -To bind the `editor` group to the `edit` Cluster Role, execute: - -```code -$ kubectl create clusterrolebinding teleport-editor-edit \ - --clusterrole=edit \ - --group=editor -``` - -With the appropriate RoleBinding configured in Kubernetes to grant access to -a specific group, you now need to add this group to the role that the bot -will impersonate when producing credentials. You also need to grant the bot -access through Teleport to the cluster itself. This is done by creating a -role that grants the necessary permissions and then assigning this role to a -Bot. - -Create a file called `role.yaml` with the following content: - -```yaml -kind: role -version: v7 -metadata: - name: example-role -spec: - allow: - kubernetes_labels: - '*': '*' - kubernetes_groups: - - editor - kubernetes_resources: - - kind: "*" - namespace: "*" - name: "*" - verbs: ["*"] -``` - -Replace `example-role` with a descriptive name related to your use case. - -Adjust the `allow` field for your environment: - -- `kubernetes_labels` should be adjusted to grant access to only the clusters - that the bot will need to access. The value shown, `'*': '*'` will grant - access to all Kubernetes clusters. -- `editor` must match the name of the group you specified in the - RoleBinding or ClusterRoleBinding. -- `kubernetes_resources` can be used to apply additional restrictions to what - the bot can access within the Kubernetes cluster. These restrictions are - layered upon the RBAC configured within the Kubernetes role itself. - -Use `tctl create -f ./role.yaml` to create the role. - -(!docs/pages/includes/create-role-using-web.mdx!) - -Now, use `tctl bots update` to add the role to the Bot. Replace `example` -with the name of the Bot you created in the deployment guide and `example-role` -with the name of the role you just created: - -```code -$ tctl bots update example --add-roles example-role -``` - -## Step 2/3. 
Configure a Kubernetes `tbot` output - -Now, `tbot` needs to be configured with an output to produce the Kubernetes -credentials and client configuration file. This is done using the -`kubernetes/v2` output type. - -The Kubernetes clusters you wish to make available must be specified using -entries in the `selectors` list. In this example, `example-k8s-cluster` will be -selected using a name selector, and all clusters with the label -`environment=dev` will be selected as well. - -Outputs must also be configured with a destination. In this example, the -`directory` type will be used. This will write artifacts to a specified -directory on disk. Ensure that this directory can be written to by the Linux -user that `tbot` runs as, and that it can be read by the Linux user that will -be accessing the Kubernetes cluster. - -Modify your `tbot` configuration to add a `kubernetes/v2` output: - -```yaml -outputs: - - type: kubernetes/v2 - selectors: - # Specify the name of the Kubernetes cluster you wish the credentials to - # grant access to. Note that wildcards are not supported. - - name: example-k8s-cluster - - # Specify a label selector to dynamically select many clusters at once. - # All labels in a selector must match for a cluster to be selected, and - # multiple separate selectors can be specified if desired. Note that - # wildcards are not supported. - - labels: - environment: dev - destination: - type: directory - # For this guide, /opt/machine-id is used as the destination directory. - # You may wish to customize this. Multiple outputs cannot share the same - # destination. - path: /opt/machine-id -``` - -Ensure you replace `example-k8s-cluster` with the name of the Kubernetes cluster -as registered in Teleport and adjust `/opt/machine-id` if you wish. - -If operating `tbot` as a background service, restart it. If running `tbot` in -one-shot mode, it must be executed before you attempt to use the credentials. - -## Step 3/3. 
Connect to your Kubernetes cluster with the Machine ID identity - -Once `tbot` has been run with the new output configured, a file called -`kubeconfig.yaml` should have been generated in the destination directory -you specified. This contains all the information necessary for `kubectl` to -connect to the Kubernetes cluster through the Teleport Proxy. - -To use `kubeconfig.yaml` with `kubectl`, the `--kubeconfig` flag or `KUBECONFIG` -environment variable can be provided to `kubectl`: - -```code -$ kubectl --kubeconfig /opt/machine-id/kubeconfig.yaml get pods -A -# Or, set the KUBECONFIG environment variable: -$ export KUBECONFIG=/opt/machine-id/kubeconfig.yaml -$ kubectl get pods -A -``` - -If you selected multiple clusters, they will be exposed as separate contexts -within the generated `kubeconfig.yaml`, and will be named following the format -`$teleportClusterName-$kubeClusterName`. To target a specific cluster, use the -`--context` flag: - -```code -$ kubectl --kubeconfig /opt/machine-id/kubeconfig.yaml --context=example.teleport.sh-my-kube-cluster get pods -A -``` - -Note that the first selected cluster in `tbot.yaml` will be used as the default -context. If using label selectors, the default context may vary over time if -clusters are added or removed in Teleport. - -If new matching clusters are added or removed in Teleport, `kubeconfig.yaml` -will be regenerated to reflect the change on the bot's next certificate renewal. -If needed, the `tbot` process can be restarted or signaled (`pkill -USR1 tbot`) -to trigger an immediate reload. Note that modifications to `kubeconfig.yaml`, -such as changes to the `current-context` field, will be overwritten. 
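To inspect the contexts written to `kubeconfig.yaml`, and to confirm which one is the default, you can list them with `kubectl`. The path assumes the `/opt/machine-id` destination used in this guide:

```code
$ kubectl --kubeconfig /opt/machine-id/kubeconfig.yaml config get-contexts
```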
- -Whilst this guide has demonstrated `kubeconfig.yaml` being used with `kubectl`, -this format is compatible with most Kubernetes tools including: - -- Helm -- Lens -- ArgoCD - -## Next steps - -- Read the [configuration reference](../../../reference/machine-id/configuration.mdx) to explore - all the available configuration options. -- Read the [Teleport Kubernetes RBAC guide](../../kubernetes-access/controls.mdx) - for more details on controlling Kubernetes access. diff --git a/docs/pages/enroll-resources/machine-id/deployment/aws.mdx b/docs/pages/enroll-resources/machine-id/deployment/aws.mdx deleted file mode 100644 index 0ab78b1f7a3e9..0000000000000 --- a/docs/pages/enroll-resources/machine-id/deployment/aws.mdx +++ /dev/null @@ -1,131 +0,0 @@ ---- -title: Deploying Machine ID on AWS -description: How to install and configure Machine ID on an AWS EC2 instance ---- - -This guide explains how to deploy Machine ID on Amazon Web Services by running -the `tbot` binary and joining it to your Teleport cluster. - -## How it works - -On AWS, virtual machines can be assigned an IAM role, which they can assume in -order to request a signed document that includes information about the machine. -The Teleport `iam` join method instructs the Machine ID bot to request this -signed document from AWS using the assigned identity and send it to the Teleport -Auth Service for verification. This allows the bot to join the cluster without -the exchange of a long-lived secret. - -While this guide focuses on deploying Machine ID on an EC2 instance, it is also -possible to use the `iam` join method with workloads running on an EKS -Kubernetes cluster. To do so, you must configure [IAM Roles for Service Accounts -(IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) -for the cluster and the Kubernetes service account that will be used by the -`tbot` pod. 
See the [Kubernetes platform guide](kubernetes.mdx) for further -guidance on deploying Machine ID as a workload on Kubernetes. - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) - -- (!docs/pages/includes/tctl.mdx!) -- An AWS IAM role that you wish to grant access to your Teleport cluster. This - role must be granted `sts:GetCallerIdentity`. In this guide, this role will - be named `teleport-bot-role`. -- An AWS EC2 virtual machine that you wish to install Machine ID onto configured - with the IAM role attached. - -## Step 1/5. Install `tbot` - -**This step is completed on the AWS EC2 instance.** - -First, `tbot` needs to be installed on the VM that you wish to use Machine ID -on. - -Download and install the appropriate Teleport package for your platform: - -(!docs/pages/includes/install-linux.mdx!) - -## Step 2/5. Create a Bot - -**This step is completed on your local machine.** - -(!docs/pages/includes/machine-id/create-a-bot.mdx!) - -## Step 3/5. Create a join token - -**This step is completed on your local machine.** - -Create `bot-token.yaml`: - -```yaml -kind: token -version: v2 -metadata: - # name will be specified in the `tbot` to use this token - name: example-bot -spec: - roles: [Bot] - # bot_name should match the name of the bot created earlier in this guide. - bot_name: example - join_method: iam - # Restrict the AWS account and (optionally) ARN that can use this token. - # This information can be obtained from running the - # "aws sts get-caller-identity" command from the CLI. - allow: - - aws_account: "111111111111" - aws_arn: "arn:aws:sts::111111111111:assumed-role/teleport-bot-role/i-*" -``` - -Replace: - -- `111111111111` with the ID of your AWS account. -- `teleport-bot-role` with the name of the AWS IAM role you created and assigned - to the EC2 instance. -- `example` with the name of the bot you created in the second step. -- `i-*` indicates that any instance with the specified role can use the join - method. 
If you wish to restrict this to an individual instance, replace `i-*` - with the full instance ID. - -Use `tctl` to apply this file: - -```code -$ tctl create -f bot-token.yaml -``` - -## Step 4/5. Configure `tbot` - -**This step is completed on the AWS EC2 instance.** - -Create `/etc/tbot.yaml`: - -```yaml -version: v2 -proxy_server: example.teleport.sh:443 -onboarding: - join_method: iam - token: example-bot -storage: - type: memory -# outputs will be filled in during the completion of an access guide. -outputs: [] -``` - -Replace: - -- `example.teleport.sh:443` with the address of your Teleport Proxy Service or - Auth Service. Prefer using the address of a Teleport Proxy Service instance. -- `example-bot` with the name of the token you created in the second step. - -(!docs/pages/includes/machine-id/daemon-or-oneshot.mdx!) - -## Step 5/5. Configure outputs - -(!docs/pages/includes/machine-id/configure-outputs.mdx!) - -## Next steps - -- Follow the [access guides](../access-guides/access-guides.mdx) to finish configuring `tbot` for - your environment. -- Read the [configuration reference](../../../reference/machine-id/configuration.mdx) to explore - all the available configuration options. -- [More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../../../reference/machine-id/telemetry.mdx) diff --git a/docs/pages/enroll-resources/machine-id/deployment/azure.mdx b/docs/pages/enroll-resources/machine-id/deployment/azure.mdx deleted file mode 100644 index 6a8e46a950f72..0000000000000 --- a/docs/pages/enroll-resources/machine-id/deployment/azure.mdx +++ /dev/null @@ -1,132 +0,0 @@ ---- -title: Deploying Machine ID on Azure -description: How to install and configure Machine ID on an Azure VM ---- - -In this guide, you will install Machine ID's agent, `tbot` on an Azure VM. The -bot will be configured to use the `azure` delegated joining method to -authenticate to your Teleport cluster. This eliminates the need for long-lived -secrets. 
- -## How it works - -On the Azure platform, virtual machines can be assigned a managed identity. The -Azure platform will then make available to the virtual machine an attested -data document and JWT that allows the virtual machine to act as this identity. -This identity can be validated by a third party by attempting to use this token -to fetch its own identity from the Azure identity service. - -The `azure` join method instructs the bot to use this attested data document and -JWT to prove its identity to the Teleport Auth Service. This allows joining to -occur without the use of a long-lived secret. - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) - -- (!docs/pages/includes/tctl.mdx!) -- An Azure managed identity with a role granting the - `Microsoft.Compute/virtualMachines/read` permission. You will need to know - the UID of this identity. -- An Azure VM you wish to install Machine ID onto with the managed identity - configured as a user-assigned managed identity. - -## Step 1/5. Install `tbot` - -**This step is completed on the Azure VM.** - -First, `tbot` needs to be installed on the VM that you wish to use Machine ID -on. - -Download and install the appropriate Teleport package for your platform: - -(!docs/pages/includes/install-linux.mdx!) - -## Step 2/5. Create a Bot - -**This step is completed on your local machine.** - -(!docs/pages/includes/machine-id/create-a-bot.mdx!) - -## Step 3/5. Create a join token - -**This step is completed on your local machine.** - -Create `bot-token.yaml`: - -```yaml -kind: token -version: v2 -metadata: - # name will be specified in the `tbot` to use this token - name: example-bot -spec: - roles: [Bot] - # bot_name should match the name of the bot created earlier in this guide. - bot_name: example - join_method: azure - azure: - allow: - # subscription should be the UID of an Azure subscription. Only VMs within - # this subscription will be able to join. 
    - subscription: 11111111-1111-1111-1111-111111111111
      # resource_groups allows joining to be restricted to VMs in a specific
      # resource group. It can be omitted to allow joining from any VM within
      # a subscription.
      resource_groups: ["group1"]
```

Replace:

- `11111111-1111-1111-1111-111111111111` with the UID of your Azure subscription.
- `example` with the name of the bot you created in the second step.
- `group1` with the name of the resource group that the VM resides within, or
  omit this entirely to allow joining from any VM within the subscription.

Use `tctl` to apply this file:

```code
$ tctl create -f bot-token.yaml
```

## Step 4/5. Configure `tbot`

**This step is completed on the Azure VM.**

Create `/etc/tbot.yaml`:

```yaml
version: v2
proxy_server: example.teleport.sh:443
onboarding:
  join_method: azure
  token: example-bot
  azure:
    client_id: 22222222-2222-2222-2222-222222222222
storage:
  type: memory
# outputs will be filled in during the completion of an access guide.
outputs: []
```

Replace:

- `example.teleport.sh:443` with the address of your Teleport Proxy or
  Auth Service. Prefer using the address of a Teleport Proxy.
- `22222222-2222-2222-2222-222222222222` with the ID of the Azure managed
  identity that has been assigned to the VM.
- `example-bot` with the name of the token you created in the second step.

(!docs/pages/includes/machine-id/daemon-or-oneshot.mdx!)

## Step 5/5. Configure outputs

(!docs/pages/includes/machine-id/configure-outputs.mdx!)

## Next steps

- Follow the [access guides](../access-guides/access-guides.mdx) to finish configuring `tbot` for
  your environment.
- Read the [configuration reference](../../../reference/machine-id/configuration.mdx) to explore
  all the available configuration options.
-- [More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../../../reference/machine-id/telemetry.mdx) diff --git a/docs/pages/enroll-resources/machine-id/deployment/deployment.mdx b/docs/pages/enroll-resources/machine-id/deployment/deployment.mdx deleted file mode 100644 index 82880a240447a..0000000000000 --- a/docs/pages/enroll-resources/machine-id/deployment/deployment.mdx +++ /dev/null @@ -1,77 +0,0 @@ ---- -title: Deploy Machine ID -description: Explains how to deploy Machine ID on your platform and join it to your Teleport cluster. -tocDepth: 3 ---- - -The first step to set up Machine ID is to deploy the `tbot` binary and join a -Machine ID bot to your Teleport cluster. You can run the `tbot` binary on a -number of platforms, from AWS and GitHub Actions to a generic Linux server or -Kubernetes cluster. This guide shows you how to deploy Machine ID on your -infrastructure. - -## Choosing a deployment method - -There are two considerations to make when determining how to deploy Machine ID on -your infrastructure. - -### Your infrastructure - -The `tbot` binary runs as a container or on a Linux virtual machine. If you run -`tbot` on GitHub Actions, you can use one of the ready-made [Teleport GitHub -Actions workflows](https://github.com/teleport-actions). - -### Join method - -Machine ID joins your Teleport cluster by using one of the following -authentication methods: - -- **Platform-signed document:** The platform that hosts `tbot`, such as a - Kubernetes cluster or Amazon EC2 instance, provides a signed identity document - that Teleport can verify using the platform's certificate authority. This is - the recommended approach because it avoids the use of shared secrets. -- **Static join token:** Your Teleport client tool generates a string and stores - it on the Teleport Auth Service. Machine ID provides this string when it first - connects to your Teleport cluster, demonstrating to the Auth Service that it - belongs in the cluster. 
From then on, Machine ID authenticates to your - Teleport cluster with a renewable certificate. - -## Deployment guides - -The guides in this section show you how to deploy Machine ID and join it -to your cluster. Choose a guide based on the platform where you intend to run -Machine ID. - -If a specific guide does not exist for your platform, the [Linux -guide](linux.mdx) is compatible with most platforms. For -custom approaches, you can also read the [Machine ID Reference](../../../reference/machine-id/machine-id.mdx) -and [Architecture](../../../reference/architecture/machine-id-architecture.mdx) to plan your deployment. - -### Self-hosted infrastructure - -Read the following guides for how to deploy Machine ID on your cloud platform or -on-prem infrastructure. - -| Platform | Installation method | Join method | -|-------------------------------------------|-------------------------------------------------|-----------------------------------------------------| -| [Linux](linux.mdx) | Package manager or TAR archive | Static join token | -| [Linux (TPM)](linux-tpm.mdx) | Package manager or TAR archive | Attestation from TPM 2.0 | -| [GCP](gcp.mdx) | Package manager, TAR archive, or Kubernetes pod | Identity document signed by GCP | -| [AWS](aws.mdx) | Package manager, TAR archive, or Kubernetes pod | Identity document signed by AWS | -| [Azure](azure.mdx) | Package manager or TAR archive | Identity document signed by Azure | -| [Kubernetes](kubernetes.mdx) | Kubernetes pod | Identity document signed by your Kubernetes cluster | - -### CI/CD - -Read the following guides for how to deploy Machine ID on a continuous -integration and continuous deployment platform - -| Platform | Installation method | Join method | -|-----------------------------------------------------------------------------------------------------|---------------------------------------------------------------|------------------------------------| -| [Bitbucket Pipelines](bitbucket.mdx) | TAR 
archive | Bitbucket-signed identity document | -| [CircleCI](circleci.mdx) | TAR archive | CircleCI-signed identity document | -| [GitLab](gitlab.mdx) | TAR archive | GitLab-signed identity document | -| [GitHub Actions](github-actions.mdx) | Teleport job available through the GitHub Actions marketplace | GitHub-signed identity document. | -| [Jenkins](jenkins.mdx) | Package manager or TAR archive | Static join token | -| [Spacelift](../../../admin-guides/infrastructure-as-code/terraform-provider/spacelift.mdx) | Docker Image | Spacelift-signed identity document | -| [Terraform Cloud](../../../admin-guides/infrastructure-as-code/terraform-provider/terraform-cloud.mdx) | Teleport Terraform Provider via Teleport's Terraform Registry | Terraform Cloud-signed identity document | diff --git a/docs/pages/enroll-resources/machine-id/deployment/gcp.mdx b/docs/pages/enroll-resources/machine-id/deployment/gcp.mdx deleted file mode 100644 index 0cafa3203fb5d..0000000000000 --- a/docs/pages/enroll-resources/machine-id/deployment/gcp.mdx +++ /dev/null @@ -1,133 +0,0 @@ ---- -title: Deploying Machine ID on GCP -description: How to install and configure Machine ID on a GCP VM ---- - -This guide explains how to deploy Machine ID on Google Cloud Platform (GCP) by -running the `tbot` binary and joining it to your Teleport cluster. - -## How it works - -On GCP, virtual machines can be assigned a service account. These machines can -then request a signed JSON web token from GCP, which allows third parties to -verify information about them, including their service accounts, using the GCP -public key. The Teleport `gcp` join method instructs a Machine ID bot to use -this service account JWT to prove its identity to the Teleport Auth Service and -join your Teleport cluster without using long-lived secrets. 
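If you want to see this mechanism in action, you can fetch such a signed identity token yourself from within a GCP VM via the instance metadata server. Teleport performs this exchange automatically; the `audience` value here is illustrative:

```code
$ curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=teleport.example.com&format=full"
```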
- -Whilst the guide on this page focuses explicitly on deploying Machine ID on a -GCP Virtual Machine, it is also possible to use the `gcp` join method with -workloads running on Google Kubernetes Engine. To do so, you must configure -[GCP Workload -Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) -for the cluster and the Kubernetes service account that will be used by the -`tbot` pod. See the [Kubernetes platform guide](kubernetes.mdx) for further -guidance on deploying Machine ID as a workload on Kubernetes. - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) - -- (!docs/pages/includes/tctl.mdx!) -- A GCP service account you wish to grant access to your Teleport cluster that - is not the GCP compute default service account. -- A GCP Compute Engine VM that you wish to install Machine ID onto that has - been configured with the GCP service account. - -## Step 1/5. Install `tbot` - -**This step is completed on the GCP VM.** - -First, `tbot` needs to be installed on the VM that you wish to use Machine ID -on. - -Download and install the appropriate Teleport package for your platform: - -(!docs/pages/includes/install-linux.mdx!) - -## Step 2/5. Create a Bot - -**This step is completed on your local machine.** - -(!docs/pages/includes/machine-id/create-a-bot.mdx!) - -## Step 3/5. Create a join token - -**This step is completed on your local machine.** - -Create `bot-token.yaml`: - -```yaml -kind: token -version: v2 -metadata: - # name will be specified in the `tbot` to use this token - name: example-bot -spec: - roles: [Bot] - # bot_name should match the name of the bot created earlier in this guide. - bot_name: example - join_method: gcp - gcp: - # allow specifies the rules by which the Auth Service determines if `tbot` - # should be allowed to join. - allow: - - project_ids: - - my-project-123456 - service_accounts: - # This should be the full "name" of a GCP service account. 
The default - # compute service account is not supported. - - my-service-account@my-project-123456.iam.gserviceaccount.com -``` - -Replace: - -- `my-project-123456` with the ID of your GCP project -- `example` with the name of the bot you created in the second step. -- `my-service-account@my-project-123456.iam.gserviceaccount.com` with the email - of the service account configured in the previous step. The default compute - service account is not supported. - -Use `tctl` to apply this file: - -```code -$ tctl create -f bot-token.yaml -``` - -## Step 4/5. Configure `tbot` - -**This step is completed on the GCP VM.** - -Create `/etc/tbot.yaml`: - -```yaml -version: v2 -proxy_server: example.teleport.sh:443 -onboarding: - join_method: gcp - token: example-bot -storage: - type: memory -# outputs will be filled in during the completion of an access guide. -outputs: [] -``` - -Replace: - -- `example.teleport.sh:443` with the address of your Teleport Proxy or - Auth Service. Prefer using the address of a Teleport Proxy. -- `example-bot` with the name of the token you created in the second step. - -(!docs/pages/includes/machine-id/daemon-or-oneshot.mdx!) - -## Step 5/5. Configure outputs - -(!docs/pages/includes/machine-id/configure-outputs.mdx!) - -## Next steps - -- Follow the [access guides](../access-guides/access-guides.mdx) to finish configuring `tbot` for - your environment. -- Read the [configuration reference](../../../reference/machine-id/configuration.mdx) to explore - all the available configuration options. 
-- [More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../../../reference/machine-id/telemetry.mdx) diff --git a/docs/pages/enroll-resources/machine-id/deployment/github-actions.mdx b/docs/pages/enroll-resources/machine-id/deployment/github-actions.mdx deleted file mode 100644 index 0da1e8563e35f..0000000000000 --- a/docs/pages/enroll-resources/machine-id/deployment/github-actions.mdx +++ /dev/null @@ -1,449 +0,0 @@ ---- -title: Deploying Machine ID on GitHub Actions -description: How to install and configure Machine ID on GitHub Actions ---- - -GitHub Actions is a popular CI/CD platform that works as a part of the larger -GitHub ecosystem. Teleport Machine ID allows GitHub Actions to securely interact -with Teleport protected resources without the need for long-lived credentials. - -Teleport supports secure joining on both GitHub-hosted and self-hosted GitHub -Actions runners as well as GitHub Enterprise Server. - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) - -- (!docs/pages/includes/tctl.mdx!) -- Your user should have the privileges to create token resources. -- A GitHub repository with GitHub Actions enabled. This guide uses the example - `gravitational/example` repo, however this value should be replaced with - your own unique repo. - -## Step 1/3. Create a Bot - -(!docs/pages/includes/machine-id/create-a-bot.mdx!) - -## Step 2/3. Create a join token for GitHub Actions - -In order to allow your GitHub Actions workflow to authenticate with your -Teleport cluster, you'll first need to create a join token. These tokens set out -criteria by which the Auth Service decides whether to allow a bot or node -to join. - -In this example, you will create a join token that grants access to any -GitHub Actions run within a specific GitHub repository. In production, you may -wish to further restrict these rules to ensure that access can only occur -when CI is running against a specific branch. 
You can find a full list of the -available rules on the -[GitHub Actions reference page.](../../../reference/machine-id/github-actions.mdx) - -Create a file named `bot-token.yaml`: - -```yaml -kind: token -version: v2 -metadata: - name: example-bot -spec: - # The Bot role indicates that this token grants access to a bot user, rather - # than allowing a node to join. This role is built in to Teleport. - roles: [Bot] - join_method: github - # The bot_name indicates which bot user this token grants access to. This - # should match the name of the bot that you created in the previous step. - bot_name: example - github: - # allow specifies rules that control which GitHub Actions runs will be - # granted access. Those not matching any allow rule will be denied. - allow: - # repository should include the name of the owner of the repository. - - repository: gravitational/example -``` - -Replace `gravitational/example` with the name of the repository that `tbot` -will run within. You may also choose to change the name of the bot and token -to more accurately describe your use-case. - - -**Enterprise Server** - -If you are using self-hosted Teleport Enterprise you are able to permit -workflows within GitHub Enterprise Server instances to authenticate using the -GitHub join method. - -The Teleport Auth Service must be able to connect to the GitHub Enterprise -Server. - -To configure this, set `spec.github.enterprise_server_host` to the hostname of -the GHES instance. - -For example: -```yaml -spec: - github: - enterprise_server_host: ghes.example.com -``` - -**Enterprise Cloud** - -If you have enabled `include_enterprise_slug` in your GitHub Enterprise -Cloud configuration, you will need to set `spec.github.enterprise_slug` to -the slug of your GitHub Enterprise organization. 
-
-For example:
-```yaml
-spec:
-  github:
-    enterprise_slug: my-enterprise
-```
-
-Read more about `include_enterprise_slug` on the GitHub guide to
-[customizing the issuer value for an enterprise](https://docs.github.com/en/enterprise-cloud@latest/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect#customizing-the-issuer-value-for-an-enterprise).
-
-
-Once the resource file has been written, create the token with `tctl`:
-
-```code
-$ tctl create -f bot-token.yaml
-```
-
-Check that the token `example-bot` has been created with the following
-command:
-
-```code
-$ tctl tokens ls
-Token       Type Labels Expiry Time (UTC)
------------ ---- ------ ----------------------------------------------
-example-bot Bot         01 Jan 00 00:00 UTC (2562047h47m16.854775807s)
-```
-
-## Step 3/3. Configure a GitHub Actions Workflow
-
-Now that the bot has been created, you need to configure your GitHub Actions
-workflow to authenticate as this bot and then use the credentials produced by
-`tbot`. To help with this, Teleport publishes several easy-to-use GitHub
-Actions that can be used within your workflow.
-
-It is also possible to configure `tbot` manually rather than using one of the
-Teleport GitHub Actions. This involves more configuration, but allows for
-precise control of `tbot` and enables implementations that are not possible
-with the actions.
-
-What follows are examples demonstrating two of the available GitHub Actions,
-as well as how to manually configure `tbot` for use with GitHub Actions.
-
-### Example: `teleport-actions/auth`
-
-The `teleport-actions/auth` action generates a versatile identity output that
-can be used for SSH and for administrative actions against a Teleport cluster.
-The action configures environment variables that automatically point `tsh` and
-`tctl` at this identity.
-
-This example shows using the credentials to:
-
-- List the SSH nodes available using `tsh`
-- List the SSH nodes available using `tctl`
-- Connect to an SSH node using `tsh`
-- Connect to an SSH node using OpenSSH's `ssh`
-
-First, you'll need to adjust the role you assigned to the bot to grant it
-access to SSH. This example grants access to `root` on all nodes. In a
-production setup, it would be a good idea to restrict this to only the nodes
-that the bot needs to access.
-
-Use `tctl edit role/example-bot` to add the following to the role:
-
-```yaml
-spec:
-  allow:
-    # Allow login to the Linux user 'root'.
-    logins: ['root']
-    # Allow connection to any node. Adjust these labels to match only nodes
-    # that the bot needs to access.
-    node_labels:
-      '*': '*'
-```
-
-With those privileges granted, you can now create the GitHub Actions workflow.
-Create `.github/workflows/example.yaml`:
-
-```yaml
-# This is a basic workflow to help you get started.
-# It will take the following action whenever a push is made to the "main" branch.
-on:
-  push:
-    branches:
-      - main
-jobs:
-  demo:
-    permissions:
-      # The "id-token: write" permission is required or Machine ID will not be
-      # able to authenticate with the cluster.
-      id-token: write
-      contents: read
-    # The name of the workflow, and the Linux distro to be used to perform the
-    # required steps.
-    name: example
-    runs-on: ubuntu-latest
-    steps:
-      - name: Checkout repository
-        uses: actions/checkout@v3
-      - name: Fetch Teleport binaries
-        uses: teleport-actions/setup@v1
-        with:
-          version: (=teleport.version=)
-      - name: Fetch credentials using Machine ID
-        id: auth
-        uses: teleport-actions/auth@v2
-        with:
-          # Use the address of the auth/proxy server for your own cluster.
-          proxy: example.teleport.sh:443
-          # Use the name of the join token resource you created in step 1.
-          token: example-bot
-          # Specify the length of time that the generated credentials should be
-          # valid for. This is optional and defaults to "1h".
-          certificate-ttl: 1h
-          # Enable the submission of anonymous usage telemetry. This helps us
-          # shape the future development of `tbot`. You can disable this by
-          # omitting this input.
-          anonymous-telemetry: 1
-      - name: List nodes (tsh)
-        # Runs a command against the cluster, in this case "tsh ls", using
-        # Machine ID credentials to list remote SSH nodes.
-        run: tsh ls
-      - name: List nodes (tctl)
-        run: tctl nodes ls
-      - name: Run hostname via SSH (tsh)
-        # Ensure that `root` matches a username on the remote SSH node, and
-        # that the hostname matches an SSH host name that is part of the
-        # Teleport cluster configured for access.
-        run: tsh ssh root@example-node hostname
-      - name: Run hostname via SSH (OpenSSH)
-        run: ssh -F ${{ steps.auth.outputs.ssh-config }} root@example-node.example.teleport.sh hostname
-```
-
-Replace:
-
-- `example.teleport.sh:443` with the address of your Teleport Proxy or cloud
-  tenant.
-- `example-bot` with the name of the token you created in a previous step.
-- `example-node` with the name of a Teleport SSH node that you wish to connect
-  to.
-- `root` with the name of a user on the node that you are connecting to and
-  that you have granted the bot access to.
-
-Add, commit, and push your changes to the `main` branch of the repository.
-
-Navigate to the **Actions** tab of your GitHub repository in your web browser.
-Select the **Workflow** that has now been created and triggered by the change,
-and select the `example` job. The GitHub Actions workflow may take some time
-to complete, and will resemble the following once successful.
-
-![GitHub Actions](../../../../img/machine-id/github-actions.png)
-
-Expand the **List nodes** step of the action, and the output will
-list all nodes in the cluster, from the perspective of the
-Machine ID bot, using the command `tsh ls`.
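The `ssh-config` output emitted by the auth step also works with other OpenSSH-compatible tooling. As a sketch (the artifact path, node name, and username below are hypothetical, not values from this guide), a later step in the same job could copy files to a Teleport-protected node with `scp`:

```yaml
      - name: Copy build artifacts via SSH (OpenSSH)
        # Reuses the ssh_config written by the "Fetch credentials" step
        # (id: auth) above. File and destination paths are examples only.
        run: scp -F ${{ steps.auth.outputs.ssh-config }} ./dist/app.tar.gz root@example-node.example.teleport.sh:/tmp/
```

Because the generated `ssh_config` routes connections through the Teleport Proxy, any tool that honors `-F` (such as `scp`, `rsync -e`, or `sftp`) can reuse the same short-lived credentials.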
-
-### Example: `teleport-actions/auth-k8s`
-
-The `teleport-actions/auth-k8s` action generates a Kubernetes output that
-contains the necessary credentials and configuration for a Kubernetes client
-to connect to a Kubernetes cluster enrolled in Teleport. The action emits the
-necessary environment variables to automatically configure these clients.
-
-In this example, the `teleport-actions/auth-k8s` action will be used to list
-all the pods contained within the cluster, but this could just as easily be
-modified to deploy to a Kubernetes cluster with `kubectl` or `helm`.
-
-First, you'll need to adjust the role you assigned to the bot to grant it
-access to the Kubernetes cluster. This example grants the bot access to all
-clusters with the group `editor`. For more detailed instructions on setting
-up Kubernetes RBAC, see the Kubernetes access guide.
-
-Use `tctl edit role/example-bot` to add the following rule to the Teleport role:
-
-```yaml
-spec:
-  allow:
-    kubernetes_labels:
-      '*': '*'
-    kubernetes_resources:
-      - kind: pod
-        namespace: "*"
-        name: "*"
-    kubernetes_groups:
-      - editor
-```
-
-
-This example assumes the role is version `v6`. If you are using a `v7`+ role,
-you will need to include `verbs: ["get", "list"]` in the `kind: pod` section
-of `kubernetes_resources`. Otherwise, the example `kubectl get pods -A`
-execution will be denied.
-
-
-With those privileges granted, you can now create the GitHub Actions workflow.
-Create `.github/workflows/example.yaml`:
-
-```yaml
-# This is a basic workflow to help you get started; modify it for your needs.
-on:
-  push:
-    branches:
-      - main
-jobs:
-  demo:
-    permissions:
-      # The "id-token: write" permission is required or Machine ID will not be
-      # able to authenticate with the cluster.
- id-token: write - contents: read - name: example - runs-on: ubuntu-latest - steps: - - name: Checkout repository - uses: actions/checkout@v3 - - name: Fetch kubectl - uses: azure/setup-kubectl@v3 - - name: Fetch Teleport binaries - uses: teleport-actions/setup@v1 - with: - version: (=teleport.version=) - - name: Fetch credentials using Machine ID - uses: teleport-actions/auth-k8s@v2 - with: - # Use the address of the auth/proxy server for your own cluster. - proxy: example.teleport.sh:443 - # Use the name of the join token resource you created in step 1. - token: example-bot - # Use the name of your Kubernetes cluster - kubernetes-cluster: my-kubernetes-cluster - # Enable the submission of anonymous usage telemetry. This helps us - # shape the future development of `tbot`. You can disable this by - # omitting this. - anonymous-telemetry: 1 - - name: List pods - run: kubectl get pods -A -``` - -Replace: - -- `example.teleport.sh:443` with the address of your Teleport Proxy or cloud - tenant. -- `example-bot` with the name of the token you created in a previous step. -- `my-kubernetes-cluster` with the name of your Kubernetes cluster. - -The `auth-k8s` action sets the `KUBECONFIG` for future steps to the credentials -it has fetched from Teleport. This means that most existing tooling for -Kubernetes (e.g `kubectl` and `helm`) can use your cluster with no additional -configuration. - -Add, commit, and push this new workflow file to the default branch of your -repository. - -Navigate to the **Actions** tab of your GitHub repository in your web browser. -Select the **Workflow** that has now been created and triggered by the change, -and select the `example` job. - -Expand the **List pods** step of the action, where you can then confirm that the -output shows a list of all the pods within your Kubernetes cluster. - -### Example: Manual configuration - -To configure `tbot` manually, a YAML file will be used. 
In this example we'll -commit this to the repository, but this could be generated or created by the -CI pipeline itself. - -Create `tbot.yaml` within your repository: - -```yaml -version: v2 -proxy_server: example.teleport.sh:443 -onboarding: - join_method: github - token: example-bot -oneshot: true -storage: - type: memory -# outputs will be filled in during the completion of an access guide. -outputs: [] -``` - -Replace: - -- `example.teleport.sh:443` with the address of your Teleport Proxy or - Auth Service. Prefer using the address of a Teleport Proxy. -- `example-bot` with the name of the token you created in the first step. - -Now you can define a GitHub Actions workflow that will start `tbot` with this -configuration. - -Create `.github/workflows/example-action.yaml`: - -```yaml -# This is a basic workflow to help you get started. -# It will take the following action whenever a push is made to the "main" branch. -on: - push: - branches: - - main -jobs: - demo: - permissions: - # The "id-token: write" permission is required or Machine ID will not be - # able to authenticate with the cluster. - id-token: write - contents: read - # The name of the workflow, and the Linux distro to be used to perform the - # required steps. - name: guide-demo - runs-on: ubuntu-latest - steps: - - name: Checkout repository - uses: actions/checkout@v3 - - name: Fetch Teleport binaries - uses: teleport-actions/setup@v1 - with: - version: (=teleport.version=) - - name: Execute Machine ID - env: - # TELEPORT_ANONYMOUS_TELEMETRY enables the submission of anonymous - # usage telemetry. This helps us shape the future development of - # tbot. You can disable this by omitting this. - TELEPORT_ANONYMOUS_TELEMETRY: 1 - run: tbot start -c ./tbot.yaml --oneshot -``` - -Add, commit, and push these two files to the repository. Check the GitHub -Actions UI to ensure that the workflow has succeeded. - -(!docs/pages/includes/machine-id/configure-outputs.mdx!) 
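Once an output has been configured, later steps in the same job can consume the credentials that the one-shot `tbot` run produced. A minimal sketch, assuming you configure an identity output with a directory destination at `./tbot-out` (a hypothetical path, not one prescribed by this guide):

```yaml
      - name: List nodes with the generated identity
        # ./tbot-out is an assumed directory destination for an identity
        # output in tbot.yaml; replace the proxy address with your cluster's.
        run: tsh -i ./tbot-out/identity --proxy example.teleport.sh:443 ls
```

Passing the identity file with `-i` lets `tsh` act as the bot without any interactive login.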
- -## A note on security implications and risk - -Once `teleport-actions/auth` has been used in a workflow job, all successive -steps in that job will have access to the credentials which grant access to your -Teleport cluster as the bot. Where possible, run as few steps as necessary after -this action has been used. It may be a good idea to break your workflow up into -multiple jobs in order to segregate these credentials from other code running in -your CI/CD pipeline. - -Most importantly, ensure that the role you assign to your GitHub Actions bot has -access to only the resources in your Teleport cluster that your CI/CD needs to -interact with. - -## Next steps - -- Check out the GitHub Actions for more usage information: - - [teleport-actions/setup](https://github.com/teleport-actions/setup) - - [teleport-actions/auth](https://github.com/teleport-actions/auth) - - [teleport-actions/auth-k8s](https://github.com/teleport-actions/auth-k8s) - - [teleport-actions/auth-application](https://github.com/teleport-actions/auth-application) -- For more information about the `github` join method, read the [GitHub Actions - reference page](../../../reference/machine-id/github-actions.mdx) -- Find out more about GitHub Actions itself, read - [their documentation](https://docs.github.com/en/actions). -- [More information about `anonymous-telemetry`.](../../../reference/machine-id/telemetry.mdx) - diff --git a/docs/pages/enroll-resources/machine-id/deployment/gitlab.mdx b/docs/pages/enroll-resources/machine-id/deployment/gitlab.mdx deleted file mode 100644 index 7974c35f1b637..0000000000000 --- a/docs/pages/enroll-resources/machine-id/deployment/gitlab.mdx +++ /dev/null @@ -1,178 +0,0 @@ ---- -title: Deploying Machine ID on GitLab CI -description: How to install and configure Machine ID on GitLab CI ---- - -In this guide, you will use Teleport Machine ID to allow a GitLab pipeline to -securely connect to a Teleport SSH node without the need for long-lived secrets. 
- -Machine ID for GitLab works with GitLab's cloud-hosted option and with -self-hosted GitLab installations. **The minimum supported GitLab version is -15.7**. - -This mitigates the risk of long-lived secrets such as passwords or SSH private -keys being exfiltrated from your GitLab organization and provides many of -the other benefits of Teleport such as auditing and finely-grained access -control. - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) - -- (!docs/pages/includes/tctl.mdx!) -- A GitLab project to connect to Teleport. This can either be on GitLab's -cloud-hosted offering (gitlab.com) or on a self-hosted GitLab instance. **When -using a self-hosted GitLab instance, your Teleport Auth Service must be able to -connect to your GitLab instance and your GitLab instance must be configured with -a valid TLS certificate.** - -## Step 1/4. Create a Bot - -(!docs/pages/includes/machine-id/create-a-bot.mdx!) - -## Step 2/4. Create a join token - -To allow GitLab CI to authenticate to your Teleport cluster, you'll first need -to create a join token. A GitLab join token contains allow rules that describe -which pipelines can use that token in order to join the Teleport cluster. A rule -can contain multiple fields, and any pipeline that matches all the fields -within a single rule is granted access. - -In this example, you will create a token with a rule that grants access to any -GitLab CI job within a specific GitLab project. Determine the fully qualified -path of your GitLab project. This will include your username (or group) and the -name of your project, e.g `my-user/my-project`. - -Create a file named `bot-token.yaml`. Ensure you substitute any values as -suggested by the comments in this example: - -```yaml -kind: token -version: v2 -metadata: - name: example-bot -spec: - # The Bot role indicates that this token grants access to a bot user, rather - # than allowing a node to join. This role is built in to Teleport. 
- roles: [Bot] - join_method: gitlab - # The bot_name indicates which bot user this token grants access to. This - # should match the name of the bot that you created in step 1. - bot_name: example - gitlab: - # domain should be the domain of your GitLab instance. If you are using - # GitLab's cloud hosted offering, omit this field entirely. - domain: gitlab.example.com - # allow specifies rules that control which GitLab tokens will be accepted - # by Teleport. Tokens not matching any allow rule will be denied. - allow: - # project_path should be the fully qualified path of your GitLab - # project that you determined earlier. This will grant access to any - # GitLab CI run in that project. - - project_path: my-user/my-project -``` - -You can find a full list of the token configuration options for GitLab joining -on the -[GitLab CI reference page.](../../../reference/machine-id/gitlab.mdx) - -Apply this to your Teleport cluster using `tctl`: - -```code -$ tctl create -f bot-token.yaml -``` - -## Step 3/4. Configure a GitLab Pipeline - -With the bot and join token created, you can now configure a GitLab pipeline -that sets up `tbot` to use these. - -To configure `tbot`, a YAML file will be used. In this example we'll store this -within the repository itself, but this could be generated or created by the -CI pipeline itself. - -Create `tbot.yaml` within your repository: - -```yaml -version: v2 -proxy_server: example.teleport.sh:443 -onboarding: - join_method: gitlab - token: example-bot -oneshot: true -storage: - type: memory -# outputs will be filled in during the completion of an access guide. -outputs: [] -``` - -Replace: - -- `example.teleport.sh:443` with the address of your Teleport Proxy or - Auth Service. Prefer using the address of a Teleport Proxy. -- `example-bot` with the name of the token you created in the second step - -Now, the GitLab CI pipeline can be defined. Before the pipeline can use `tbot`, -it must be available within the environment. 
For this example, we'll show -downloading `tbot` as part of the CI step, but in a production implementation -you may wish to build a docker image that contains this binary to avoid -depending on the Teleport CDN. - -Create `.gitlab-ci.yml` within your repository: - -```yaml -stages: - - deploy - -deploy-job: - stage: deploy - # id_tokens configures ID Tokens that GitLab will automatically inject into - # the environment of your GitLab run. - # - # See https://docs.gitlab.com/ee/ci/secrets/id_token_authentication.html - # for further explanation of the id_tokens configuration in GitLab. - id_tokens: - TBOT_GITLAB_JWT: - # aud for TBOT_GITLAB_JWT must be configured with the name of your - # Teleport cluster. This is not necessarily the address of your Teleport - # cluster and will not include a port or scheme (http/https) - # - # This helps the Teleport Auth Service know that the token is intended for - # it, and not a different service or Teleport cluster. - aud: teleport.example.com - script: - - cd /tmp - - 'curl -O https://cdn.teleport.dev/teleport-v(=teleport.version=)-linux-amd64-bin.tar.gz' - - tar -xvf teleport-v(=teleport.version=)-linux-amd64-bin.tar.gz - - sudo ./teleport/install - - 'TELEPORT_ANONYMOUS_TELEMETRY=1 tbot start -c tbot.yaml' -``` - -Replace `teleport.example.com` with the name of your Teleport cluster. This -is not necessarily the address of your Teleport cluster and will not include -a port or scheme (e.g. http/https). - -`TELEPORT_ANONYMOUS_TELEMETRY` enables the submission of anonymous usage -telemetry. This helps us shape the future development of `tbot`. You can disable -this by omitting this. - -Commit and push these two files to the repository. - -Check your GitLab CI status, and examine the log results from the commit for -failure. - -## Step 4/4. Configure outputs - -(!docs/pages/includes/machine-id/configure-outputs.mdx!) 
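When a `gitlab` join is rejected, it can help to inspect the claims inside the ID token that GitLab injects into the job (available here as `$TBOT_GITLAB_JWT`), since claims such as `project_path` and `aud` must match your join token and cluster name. The following is a minimal debugging sketch that decodes a JWT payload without verifying its signature; a fabricated token stands in for the real one:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Return the payload claims of a JWT without verifying its signature."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Demonstration with a fabricated payload; a real GitLab ID token carries
# claims such as "project_path" and "aud".
claims = {"project_path": "my-user/my-project", "aud": "teleport.example.com"}
encoded = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"fake-header.{encoded}.fake-signature"
print(jwt_claims(token)["project_path"])  # my-user/my-project
```

Never verify tokens this way in production; this only surfaces the claims so you can compare them against the `allow` rules in your token resource.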
- -## Further steps - -- For more information about GitLab joining, read the - [GitLab CI reference page.](../../../reference/machine-id/gitlab.mdx) -- For more information about GitLab itself, read - [their documentation](https://docs.gitlab.com/ee/ci/). -- Follow the [access guides](../access-guides/access-guides.mdx) to finish configuring `tbot` for - your environment. -- Read the [configuration reference](../../../reference/machine-id/configuration.mdx) to explore - all the available configuration options. -- [More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../../../reference/machine-id/telemetry.mdx) diff --git a/docs/pages/enroll-resources/machine-id/deployment/jenkins.mdx b/docs/pages/enroll-resources/machine-id/deployment/jenkins.mdx deleted file mode 100644 index 02a19406addbb..0000000000000 --- a/docs/pages/enroll-resources/machine-id/deployment/jenkins.mdx +++ /dev/null @@ -1,175 +0,0 @@ ---- -title: Deploying Machine ID on Jenkins -description: How to install and configure Machine ID on Jenkins ---- - -Jenkins is an open source automation server that is frequently used to build -Continuous Integration and Continuous Delivery (CI/CD) pipelines. - -In this guide, we will demonstrate how to migrate existing Jenkins pipelines to -utilize Machine ID with minimal changes. - -## Prerequisites - -You will need the following tools to use Teleport with Jenkins. - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) - -- `ssh` OpenSSH tool -- Jenkins -- (!docs/pages/includes/tctl.mdx!) - -## Architecture - -Before we begin, it should be noted that Jenkins is a tool that is notoriously -difficult to secure. Machine ID is one part of securing your infrastructure, -but it alone is not sufficient. Below we will provide some basic guidance which -can help improve the security posture of your Jenkins installation. 
-
-### Single-host deployments
-
-The simplest Jenkins deployments have the controller (the process that stores
-configuration, plugins, and the UI) and agents (the processes that execute
-tasks) run on the same host. This deployment model is simple to get started
-with; however, any compromise of the `jenkins` user within a single pipeline
-can lead to the compromise of your entire CI/CD infrastructure.
-
-### Multihost deployments
-
-A slightly more complex, but more secure, deployment is running your Jenkins
-controllers and agents on different hosts and pinning workloads to specific
-agents. This is an improvement over the simple deployment because you can
-limit the blast radius of the compromise of a single pipeline to a subset of
-your CI/CD infrastructure instead of all of it.
-
-### Best practices
-
-We strongly encourage the use of the second deployment model, with ephemeral
-hosts and IAM joining where available. When using Machine ID with this model,
-create and run Machine ID bots per host and pin particular pipelines to a
-worker. This allows you to give each pipeline the minimal scope for server
-access, reduce the blast radius if one pipeline is compromised, and remotely
-audit and lock pipelines if you detect malicious behavior.
-
- ![Jenkins Deployments](../../../../img/machine-id/jenkins.png)
-
-## Step 1/2. Configure and start Machine ID
-
-First, determine whether you would like to create a new role for Machine ID
-or use an existing role. You can run `tctl get roles` to examine your
-existing roles.
-
-Create a file called `api-workers.yaml` with the following content to create
-a new role called `api-workers` that will allow you to log in to Nodes with
-the label `group: api` as the Linux user `jenkins`.
- -``` -kind: "role" -version: "v3" -metadata: - name: "api-workers" -spec: - allow: - logins: ["jenkins"] - node_labels: - "group": "api" -``` - - - -On your client machine, log in to Teleport using `tsh` before using `tctl`. - -```code -$ tctl create -f api-workers.yaml -$ tctl bots add jenkins --roles=api-workers -``` - - -Connect to the Teleport Auth Service and use `tctl` to examine what roles exist on -your system. - -```code -$ tctl create -f api-workers.yaml -$ tctl bots add jenkins --roles=api-workers -``` - - - - -Machine ID allows you to use Linux Access Control Lists (ACLs) to control -access to certificates on disk. You will use this to limit the access Jenkins -has to the short-lived certificates Machine ID issues. - -In the example that follows, you will create a Linux user called `teleport` to -run Machine ID but short-lived certificates will be written to disk as the -Linux user `jenkins`. - -```code -$ sudo adduser \ - --disabled-password \ - --no-create-home \ - --shell=/bin/false \ - --gecos "" \ - teleport -``` - -Create and initialize the directories you will need using the `tbot init` -command. - -```code -$ sudo tbot init \ - --destination-dir=/opt/machine-id \ - --bot-user=teleport \ - --owner=teleport:teleport \ - --reader-user=jenkins -``` - -(!docs/pages/includes/machine-id/machine-id-init-bot-data.mdx!) - -Next, you need to start Machine ID in the background of each Jenkins worker. - -First create a configuration file for Machine ID at `/etc/tbot.yaml`. - -```yaml -version: v2 -# Replace "example.teleport.sh:443" with the address of your Teleport Proxy or -# Teleport Cloud tenant. -proxy_server: "example.teleport.sh:443" -onboarding: - join_method: "token" - # Replace the token field with the name of the token that was output when you - # ran `tctl bots add`. 
-  token: "00000000000000000000000000000000"
-storage:
-  type: directory
-  path: /var/lib/teleport/bot
-outputs:
-  - type: identity
-    destination:
-      type: directory
-      path: /opt/machine-id
-```
-
-### Create a `tbot` systemd unit file
-
-(!docs/pages/includes/machine-id/daemon.mdx!)
-
-## Step 2/2. Update and run Jenkins pipelines
-
-Using Machine ID within a Jenkins pipeline is now a one-line change. For
-example, if you want to run the `hostname` command on a remote host, add the
-following to your Jenkins pipeline:
-
-```
-steps {
-    sh "ssh -F /opt/machine-id/ssh_config root@node-name.example.com hostname"
-}
-```
-
-You are all set. You have provided Jenkins with short-lived certificates tied
-to a machine identity that can be rotated, audited, and governed by access
-controls.
-
-## Next steps
-
-[More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../../../reference/machine-id/telemetry.mdx)
-
diff --git a/docs/pages/enroll-resources/machine-id/deployment/kubernetes.mdx b/docs/pages/enroll-resources/machine-id/deployment/kubernetes.mdx
deleted file mode 100644
index 1d737a3cb0a7b..0000000000000
--- a/docs/pages/enroll-resources/machine-id/deployment/kubernetes.mdx
+++ /dev/null
@@ -1,367 +0,0 @@
----
-title: Deploying Machine ID on Kubernetes
-description: How to install and configure Machine ID on Kubernetes
----
-
-This guide shows you how to deploy the Machine ID daemon, `tbot`, on a
-Kubernetes cluster.
-
-## How it works
-
-In the setup we demonstrate in this guide, `tbot` runs as a Kubernetes
-deployment. It writes output credentials to a Kubernetes secret, which can then
-be mounted in the pods that need to use the credentials. While `tbot` can also
-run as a sidecar within the same pod as the service that needs to use the
-credentials it generates, we recommend running `tbot` as a standalone
-deployment due to the limited support Kubernetes has for sidecars.
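To illustrate the consumption side described above, here is a sketch of a workload pod mounting credentials from a secret written by `tbot`. The secret name `identity-output`, mount path, and image are assumptions for illustration, not values prescribed by this guide:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app
  namespace: default
spec:
  containers:
    - name: app
      image: alpine:3.19
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: teleport-identity
          mountPath: /var/run/teleport
          readOnly: true
  volumes:
    - name: teleport-identity
      secret:
        secretName: identity-output # assumed name of the tbot-managed secret
```

Mounting the secret read-only keeps the workload from tampering with the credentials while still letting it pick up each renewal `tbot` writes.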
-
-In this guide, we demonstrate the `kubernetes` join method, in which `tbot`
-proves its identity to the Teleport Auth Service by presenting a JSON web token
-(JWT) signed by the Kubernetes API server. This JWT identifies the service
-account, the pod, and the namespace in which `tbot` is running. The Teleport
-Auth Service checks the signature of the JWT against the Kubernetes cluster's
-public signing key.
-
-
-Using another join method - -When deploying `tbot` to a Teleport cluster, it is generally recommended to use -the `kubernetes` join method. This will work with most Kubernetes clusters. -The guide that follows will demonstrate configuring this join method. - -However, when using certain cloud Kubernetes services, it is possible to use the -join method associated with that platform rather than the `kubernetes` join -method. This may be beneficial if you wish to manage the joining of `tbot` -within the Kubernetes clusters and on standard VMs on the same platform with -a single join token. These services are: - -- Google Kubernetes Engine: Where - [GCP Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) - is configured for the cluster, it is possible to use the `gcp` join method. - See the [GCP Platform Guide](./gcp.mdx) for further information. -- Amazon Elastic Kubernetes Service: Where - [IAM Roles for Service Accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) - is configured for the cluster, it is possible to use the `iam` join method. - See the [AWS Platform Guide](./aws.mdx) for further information. - -
- -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) - -- (!docs/pages/includes/tctl.mdx!) -- A Kubernetes cluster with support for Token Request Projection (which - graduated to a generally available feature in Kubernetes 1.20). -- `kubectl` authenticated with the ability to create resources in the cluster - you wish to deploy `tbot` into. - -The examples in this guide will install a `tbot` deployment in the `default` -Namespace of the Kubernetes cluster. Adjust references to `default` to the -Namespace you wish to use. - -## Step 1/5. Prepare Kubernetes RBAC - -In order to prepare the Kubernetes cluster for Machine ID, several Kubernetes -RBAC resources must be created. - -A ServiceAccount will be created and later assigned to the Pod that will run -`tbot`. This creates a static identity that we can allow access to join the -Teleport Cluster and also provides an identity to which we can assign Kubernetes -privileges. - -A Role granting the ability to read and write to secrets in the Namespace will -be created and then assigned to the ServiceAccount using a RoleBinding. This -will allow the `tbot` Pod to read and write credentials to a Secret. - -Create a file called `k8s-rbac.yaml`: - -```yaml -# This ServiceAccount will be used to give the `tbot` pods a discrete identity -# which can be validated by the Teleport Auth Service. -apiVersion: v1 -kind: ServiceAccount -metadata: - name: tbot - namespace: default ---- -# This role grants the ability to manage secrets within the namespace - this is -# necessary for the `kubernetes_secret` destination to work correctly. -# -# You may wish to add the `resourceNames` field to the role to further restrict -# this access in sensitive environments. -apiVersion: rbac.authorization.k8s.io/v1 -kind: Role -metadata: - name: secrets-admin - namespace: default -rules: - - apiGroups: [""] - resources: ["secrets"] - verbs: ["*"] ---- -# Bind the role to the service account created for tbot. 
-apiVersion: rbac.authorization.k8s.io/v1
-kind: RoleBinding
-metadata:
-  name: tbot-secrets-admin
-  namespace: default
-subjects:
-  - kind: ServiceAccount
-    name: tbot
-roleRef:
-  kind: Role
-  name: secrets-admin
-  apiGroup: rbac.authorization.k8s.io
-```
-
-Apply this file to your Kubernetes cluster:
-
-```code
-$ kubectl apply -f ./k8s-rbac.yaml
-```
-
-## Step 2/5. Create a Bot
-
-(!docs/pages/includes/machine-id/create-a-bot.mdx!)
-
-## Step 3/5. Create a join token
-
-Next, a join token needs to be configured. This will be used by `tbot` to join
-the cluster. As the `kubernetes` join method will be used, the public key of
-the Kubernetes cluster must first be determined. The public key used to sign
-JWTs is exposed on the "JWKS" endpoint of the Kubernetes API server. This
-public key can then be used by the Teleport Auth Service to verify that the
-service account JWT presented by `tbot` was legitimately signed by the
-Kubernetes cluster.
-
-Run the following commands to determine the JWKS-formatted public key:
-
-```code
-$ kubectl proxy -p 8080
-$ curl http://localhost:8080/openid/v1/jwks
-{"keys":[--snip--]}
-```
-
-Create `bot-token.yaml`, ensuring you insert the value from the JWKS endpoint
-in `spec.kubernetes.static_jwks.jwks`:
-
-```yaml
-kind: token
-version: v2
-metadata:
-  # name is the value the `tbot` configuration will reference to use this
-  # token.
-  name: example-bot
-spec:
-  roles: [Bot]
-  # bot_name should match the name of the bot created earlier in this guide.
-  bot_name: example
-  join_method: kubernetes
-  kubernetes:
-    # static_jwks configures the Auth Service to validate the JWT presented by
-    # `tbot` using the public key from a statically configured JWKS.
-    type: static_jwks
-    static_jwks:
-      jwks: |
-        # Place the data returned by the curl command here
-        {"keys":[--snip--]}
-  # allow specifies the rules by which the Auth Service determines if `tbot`
-  # should be allowed to join.
-  allow:
-    # service accounts are specified in the form <namespace>:<name>.
-    - service_account: "default:tbot"
-```
-
-Use `tctl` to apply this file:
-
-```code
-$ tctl create -f bot-token.yaml
-```
-
-## Step 4/5. Create a `tbot` deployment
-
-First, a ConfigMap will be created to contain the configuration file for
-`tbot`. This will then be mounted into the Pod.
-
-Create `k8s-deployment-config.yaml`, replacing the value of `token` with the
-name of the token you created earlier and the value of `proxy_server` with the
-address of your Teleport Proxy Service:
-
-```yaml
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: tbot-config
-  namespace: default
-data:
-  tbot.yaml: |
-    version: v2
-    onboarding:
-      join_method: kubernetes
-      # ensure token is set to the name of the join token you created earlier
-      token: example-bot
-    storage:
-      # a memory destination is used for the bot's own state, since the
-      # kubernetes join method does not require persistence.
-      type: memory
-    # ensure this is configured to the address of your Teleport Proxy Service.
-    proxy_server: example.teleport.sh:443
-    # outputs will be filled in during the completion of an access guide.
-    outputs: []
-```
-
-Apply this file to your Kubernetes cluster:
-
-```code
-$ kubectl apply -f k8s-deployment-config.yaml
-```
-
-With the ConfigMap created, you can now create the `tbot` deployment itself.
-
-Create `k8s-deployment.yaml`:
-
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: tbot
-  namespace: default
-spec:
-  replicas: 1
-  strategy:
-    type: Recreate
-  selector:
-    matchLabels:
-      app.kubernetes.io/name: tbot
-  template:
-    metadata:
-      labels:
-        app.kubernetes.io/name: tbot
-    spec:
-      containers:
-        - name: tbot
-          image: public.ecr.aws/gravitational/tbot-distroless:(=teleport.version=)
-          args:
-            - start
-            - -c
-            - /config/tbot.yaml
-          env:
-            # POD_NAMESPACE is required for the `kubernetes_secret` destination
-            # type to work correctly.
-            - name: POD_NAMESPACE
-              valueFrom:
-                fieldRef:
-                  fieldPath: metadata.namespace
-            # KUBERNETES_TOKEN_PATH specifies the path to the service account
-            # JWT to use for joining.
-            # This path is based on the configuration of the volume and
-            # volumeMount.
-            - name: KUBERNETES_TOKEN_PATH
-              value: /var/run/secrets/tokens/join-sa-token
-            # TELEPORT_ANONYMOUS_TELEMETRY enables the submission of anonymous
-            # usage telemetry. This helps us shape the future development of
-            # `tbot`. You can disable it by omitting this variable.
-            - name: TELEPORT_ANONYMOUS_TELEMETRY
-              value: "1"
-          volumeMounts:
-            - mountPath: /config
-              name: config
-            - mountPath: /var/run/secrets/tokens
-              name: join-sa-token
-      serviceAccountName: tbot
-      volumes:
-        - name: config
-          configMap:
-            name: tbot-config
-        - name: join-sa-token
-          projected:
-            sources:
-              - serviceAccountToken:
-                  path: join-sa-token
-                  # 600 seconds is the minimum that Kubernetes supports. We
-                  # recommend using this value.
-                  expirationSeconds: 600
-                  # `example.teleport.sh` must be replaced with the name of
-                  # your Teleport cluster.
-                  audience: example.teleport.sh
-```
-
-Replace `example.teleport.sh` with the name of your Teleport cluster; this is
-not necessarily its public address, and the port should not be included.
-
-This is an example manifest; consider modifying it to fit within the
-conventions of deployments to your clusters (e.g. customizing labels).
-
-
-The default `tbot-distroless` image does not contain the FIPS-compliant
-binaries. If you operate in an environment where FIPS compliance is required,
-please use the `tbot-fips-distroless` image instead.
-
-
-Apply this file to your Kubernetes cluster:
-
-```code
-$ kubectl apply -f ./k8s-deployment.yaml
-```
-
-Use `kubectl` to verify that the deployment is healthy:
-
-```code
-$ kubectl describe deployment/tbot
-$ kubectl logs deployment/tbot
-```
-
-With this complete, `tbot` is now successfully deployed to your cluster.
-However, it is not yet producing any useful output.
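If the `tbot` Pod logs a join failure, one common cause is an `audience` mismatch between the projected token and the cluster name. The sketch below shows how to decode a service account JWT's payload to inspect its `aud` claim. The token here is fabricated for illustration; on a real cluster you could mint one to inspect with `kubectl create token tbot --audience=example.teleport.sh --duration=600s` (kubectl 1.24+).

```shell
# Build a fabricated three-segment token purely for demonstration; a real one
# comes from the projected volume or `kubectl create token`.
sample_payload=$(printf '{"aud":["example.teleport.sh"],"sub":"system:serviceaccount:default:tbot"}' \
  | base64 | tr '+/' '-_' | tr -d '=\n')
token="eyJhbGciOiJSUzI1NiJ9.${sample_payload}.signature"

# Decode the middle (payload) segment and print the identity claims.
python3 - "$token" <<'EOF'
import base64, json, sys

payload = sys.argv[1].split(".")[1]
payload += "=" * (-len(payload) % 4)  # restore base64url padding
claims = json.loads(base64.urlsafe_b64decode(payload))
print("aud:", claims["aud"][0])
print("sub:", claims["sub"])
EOF
```

The `aud` value must exactly match the `audience` field in the Deployment manifest, and the `sub` value must match a `service_account` rule in the join token.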
- -## Step 5/5. Configure outputs - -Follow one of the [access guides](../access-guides/access-guides.mdx) to configure an output -that meets your access needs. - -In order to adjust the access guides to work well with Kubernetes, use the -Kubernetes Secret destination type. This will write the generated artifacts -to a specified Kubernetes Secret, for example: - -```yaml -outputs: - - type: identity - destination: - type: kubernetes_secret - name: identity-output -``` - -The output can then be consumed by mounting this secret within another pod: - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: tsh - namespace: default -spec: - containers: - - name: tsh - image: public.ecr.aws/gravitational/teleport-distroless:(=teleport.version=) - command: - - tsh - args: - - -i - - /identity-output/identity - - --proxy - - example.teleport.sh:443 - - ls - volumeMounts: - - name: identity-output - mountPath: /identity-output - volumes: - - name: identity-output - secret: - secretName: identity-output -``` - -## Next steps - -- Follow the [access guides](../access-guides/access-guides.mdx) to finish configuring `tbot` for - your environment. -- Read the [configuration reference](../../../reference/machine-id/configuration.mdx) to explore - all the available configuration options. -- [More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../../../reference/machine-id/telemetry.mdx) diff --git a/docs/pages/enroll-resources/machine-id/deployment/linux.mdx b/docs/pages/enroll-resources/machine-id/deployment/linux.mdx deleted file mode 100644 index b96c847510473..0000000000000 --- a/docs/pages/enroll-resources/machine-id/deployment/linux.mdx +++ /dev/null @@ -1,119 +0,0 @@ ---- -title: Deploying Machine ID on Linux -description: How to install and configure Machine ID on a Linux host ---- - -This page explains how to deploy Machine ID on a Linux host. - -## How it works - -The process in which `tbot` initially authenticates with the Teleport cluster is -known as joining. 
A join method is a specific technique for the bot to prove its identity.
-
-On platforms where there is no form of identity available to the machine, the
-only available join method is `token`. The `token` join method is special, as
-it is the only join method that relies on a shared secret. To mitigate the
-risks associated with this, the `token` join method is single-use: once a
-token has been used to join, it cannot be used again.
-
-## Prerequisites
-
-(!docs/pages/includes/edition-prereqs-tabs.mdx!)
-
-- (!docs/pages/includes/tctl.mdx!)
-- A Linux host that you wish to install Machine ID onto.
-- A Linux user on that host that you wish Machine ID to run as. In this guide,
-  we will use `teleport` for this.
-
-## Step 1/4. Install `tbot`
-
-**This step is completed on the Linux host.**
-
-First, `tbot` needs to be installed on the VM that you wish to use Machine ID
-on.
-
-Download the appropriate Teleport package for your platform:
-
-(!docs/pages/includes/install-linux.mdx!)
-
-## Step 2/4. Create a bot user
-
-**This step is completed on your local machine.**
-
-Create the bot:
-
-```code
-$ tctl bots add example
-```
-
-A join token will be included in the results of `tctl bots add`; record this
-value, as it will be needed when configuring `tbot`.
-
-## Step 3/4. Configure `tbot`
-
-**This step is completed on the Linux host.**
-
-Create `/etc/tbot.yaml`:
-
-```yaml
-version: v2
-proxy_server: example.teleport.sh:443
-onboarding:
-  join_method: token
-  token: (=presets.tokens.first=)
-storage:
-  type: directory
-  path: /var/lib/teleport/bot
-# outputs will be filled in during the completion of an access guide.
-outputs: []
-```
-
-Replace:
-- `example.teleport.sh:443` with the address of your Teleport Proxy or
-  Auth Service. Prefer using the address of a Teleport Proxy.
-- `(=presets.tokens.first=)` with the token that was returned by `tctl bots add`
-  in the previous step.
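Before starting the daemon, a quick pre-flight check can catch a missing field in the configuration file. This is a hypothetical helper, not part of `tbot` itself; the file below mirrors the example configuration with a placeholder token, written to `/tmp` for demonstration (on a real host you would point `cfg` at `/etc/tbot.yaml` and skip the heredoc).

```shell
# Write a copy of the example configuration (illustrative placeholder values).
cfg=/tmp/tbot.yaml
cat > "$cfg" <<'EOF'
version: v2
proxy_server: example.teleport.sh:443
onboarding:
  join_method: token
  token: placeholder-join-token
storage:
  type: directory
  path: /var/lib/teleport/bot
outputs: []
EOF

# Fail fast if any field this guide relies on is absent.
for key in version proxy_server join_method token storage; do
  grep -q "${key}:" "$cfg" || { echo "missing field: ${key}" >&2; exit 1; }
done
echo "tbot.yaml pre-flight ok"
```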
-
-
-The first time that `tbot` runs, this token will be exchanged for a certificate
-that the bot uses for authentication. At this point, the token is invalidated.
-This means you may remove the token from the configuration file after the first
-run has completed, but there is no tangible security benefit to doing so.
-
-
-### Prepare the storage directory
-
-When using the `token` join method, `tbot` must be able to persist its state
-across restarts. The destination used to persist this state is known as the
-bot's "storage destination". In this guide, the directory
-`/var/lib/teleport/bot` will be used.
-
-As this directory will store the bot's sensitive credentials, it is important
-to protect it. To do this, you will configure the directory to be accessible
-only to the Linux user which `tbot` will run as.
-
-Execute the following, replacing `teleport` with the Linux user that you will
-run `tbot` as:
-
-```code
-# Make the bot directory and assign ownership to teleport user
-$ sudo mkdir -p /var/lib/teleport/bot
-$ sudo chown teleport:teleport /var/lib/teleport/bot
-```
-
-### Create a systemd service
-
-(!docs/pages/includes/machine-id/daemon.mdx!)
-
-## Step 4/4. Configure outputs
-
-(!docs/pages/includes/machine-id/configure-outputs.mdx!)
-
-## Next steps
-
-- Follow the [access guides](../access-guides/access-guides.mdx) to finish configuring `tbot` for
-  your environment.
-- Read the [configuration reference](../../../reference/machine-id/configuration.mdx) to explore
-  all the available configuration options.
-- [More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../../../reference/machine-id/telemetry.mdx)
diff --git a/docs/pages/enroll-resources/machine-id/faq.mdx b/docs/pages/enroll-resources/machine-id/faq.mdx
deleted file mode 100644
index 474b19d0ff7eb..0000000000000
--- a/docs/pages/enroll-resources/machine-id/faq.mdx
+++ /dev/null
@@ -1,130 +0,0 @@
----
-title: Machine ID FAQ
-description: Frequently asked questions about Teleport Machine ID
----
-
-This page provides answers to frequently asked questions about Machine ID. For a
-list of frequently asked questions about Teleport in general, see [Frequently
-Asked Questions](../../faq.mdx).
-
-## Can Machine ID be used within CI/CD jobs?
-
-On CI/CD platforms where your workflow runs in an ephemeral environment (e.g.
-no persistent state exists between individual workflow runs), Machine ID works
-best where a supported join method exists. These are:
-
-- GitHub Actions
-- CircleCI
-- GitLab
-- AWS
-- GCP
-- Azure
-- Kubernetes
-- Spacelift
-- Terraform Cloud
-
-On CI/CD platforms where you control the runner environment (e.g. a self-hosted
-Jenkins runner), Machine ID can run as a daemon on the runner and the generated
-credentials can be mounted into the environment of your individual workflow
-runs.
-
-## Can Machine ID be used with Trusted Clusters?
-
-You can use Machine ID for SSH access in trusted leaf clusters.
-
-We currently do not support access to applications, databases, or Kubernetes
-clusters in leaf clusters.
-
-## Should I define allowed logins as user traits or within roles?
-
-When defining the logins that your bot will be allowed to use, there are two
-options:
-
-- Directly adding the login to the `logins` section of the role that your bot
-  will impersonate.
-- Adding the login to the logins trait of the bot user, and impersonating a role
-  that includes the `{{ internal.logins }}` role variable. This is usually done
-  by providing the `--logins` parameter when creating the bot.
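To illustrate the second option, here is a sketch of a role that picks up the bot user's logins trait. The role name and the `node_labels` selector are illustrative; `{{internal.logins}}` is expanded from the bot user's traits when certificates are issued:

```yaml
kind: role
version: v7
metadata:
  name: bot-trait-logins
spec:
  allow:
    # Expanded from the logins trait set via `tctl bots add --logins=...`
    logins: ["{{internal.logins}}"]
    node_labels:
      "env": "dev"
```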
- -For simpler scenarios — where you only expect the bot to use a single output -or role — you can add the login to the logins trait of the bot user. This -approach allows you to leverage default roles like `access`. - -For situations where your bot is producing certificates for different roles in -different outputs, it is important to consider whether using a login trait -grants access to resources that you didn't intend. To prevent a login trait from -granting access you didn't intend, we recommend that you create bespoke roles -that explicitly specify the logins that should be included in the certificates. - -## Can Machine ID be used with per-session MFA? - -We do not currently support Machine ID and per-session MFA. Enabling per-session -MFA globally, or for roles impersonated by Machine ID, will prevent credentials -produced by Machine ID from being used to connect to resources. - -As a work-around, ensure that per-session MFA is enforced on individual roles -rather than enforced globally, and that it is not enforced for roles that you -will impersonate using Machine ID. - -## Can Machine ID be used with Device Trust? - -We do not currently support Machine ID and Device Trust. Requiring Device -Trust cluster-wide or for roles impersonated by Machine ID will prevent -credentials produced by Machine ID from being used to connect to resources. - -As a work-around, configure Device Trust enforcement on a role-by-role basis -and ensure that it is not required for roles that you will impersonate using -Machine ID. - -## Can Machine ID be used to generate long-lived certificates? - -Machine ID cannot currently be used to generate certificates valid for longer -than 24 hours, and requests for longer certificates using the `credential_ttl` -parameter will be reduced to this 24 hour limit. - -This limit serves multiple purposes. For one, it encourages security best -practices by only ever issuing very short lived certificates. 
Additionally, as -Machine ID allows for certificate renewal, this limit helps to prevent further -exploitation should a Machine ID identity be compromised: an attacker could use -a stolen renewable certificate to request very long lived certificates and -maintain access for a much longer period. - -If your use case absolutely requires long-lived certificates, -[`tctl auth sign`](../../reference/cli/tctl.mdx#tctl-auth-sign) can -alternatively be used, however this loses the security benefits of Machine ID's -short-lived renewable certificates. - -## Can Machine ID be used to connect to multiple Kubernetes clusters? - -This is possible in Teleport v17.2.7 or higher, using the new `kubernetes/v2` -output type in `tbot`. This service can expose many clusters at once via -contexts in the generated `kubeconfig.yaml`, and if label selectors are used, -will dynamically add contexts as clusters are added and removed in Teleport. - -Note that both `tbot` and the Teleport Proxy need to be running v17.2.7 to take -advantage of this functionality. - -Refer to the -[CLI reference](../../reference/cli/tbot.mdx#tbot-start-kubernetesv2) and -[config reference](../../reference/machine-id/configuration.mdx#kubernetesv2) -for more information. - -## Does `tbot` support Windows? - -Yes, the `tbot` binary is available for Windows. It can be found in the client -tools archive that also includes `tsh` and `tctl`. See the -[Installing Teleport guide](../../installation.mdx) for further information. - -However, there are a few limitations to be aware of: - -- Functionality that relies on Unix Domain Sockets (e.g. SSH multiplexer, - SPIFFE Workload API etc.) is not available. -- Functionality relating to the configuration of Symlink protection on directory - destinations is not available. -- Functionality relating to the management of ACLs on directory destinations is - not available. -- Most delegated join methods are unlikely to function correctly. 
- -In some circumstances, it can be more practical to run `tbot` within Windows -Subsystem for Linux rather than running it natively on Windows. This will depend -on where the tools that will consume the output of `tbot` are running. \ No newline at end of file diff --git a/docs/pages/enroll-resources/machine-id/getting-started.mdx b/docs/pages/enroll-resources/machine-id/getting-started.mdx deleted file mode 100644 index 7cfd4b9a5cd6e..0000000000000 --- a/docs/pages/enroll-resources/machine-id/getting-started.mdx +++ /dev/null @@ -1,226 +0,0 @@ ---- -title: Machine ID Getting Started Guide -description: Getting started with Teleport Machine ID ---- - -In this getting started guide, you will configure Machine ID to issue -certificates that enable a bot user to connect to a remote host. - -Here's an overview of what you will do: - -- Download and install `tbot` on the host that will run Machine ID. -- Create a bot user. -- Start Machine ID. -- Use certificates issued by Machine ID to connect to a remote machine with SSH. - -This guide covers configuring Machine ID for development and learning purposes. -For a production-ready configuration of Machine ID, visit the [Deploying Machine -ID](deployment/deployment.mdx) guides. - -## Prerequisites - -- A host that you wish to assign an identity to using Machine ID. - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) - -(!/docs/pages/includes/tctl.mdx!) - -## Step 1/4. Download and install Teleport - -In this step, you will be downloading and installing Teleport binaries onto the -machine you wish to assign an identity to. 
-
-Each Teleport package hosted on our downloads page ships with several useful
-binaries, including `teleport`, `tctl`, `tsh`, `tbot`, and `fdpass-teleport`:
-
-- `teleport` is the daemon used to initialize a Teleport cluster; this binary is not used in this guide
-- `tctl` is the administrative tool you will use to create the bot user (step 1/4)
-- `tsh` is the client tool you will use to log in to the Teleport Cluster (steps 2/4 and 4/4)
-- `tbot` is the Machine ID tool you will use to associate a bot user with a machine (step 3/4)
-- `fdpass-teleport` is used to integrate Machine ID with OpenSSH to enable higher performance and reduced resource consumption when establishing SSH connections; this binary is not used in this guide
-
-Download the appropriate Teleport package for your platform:
-
-(!docs/pages/includes/install-linux.mdx!)
-
-## Step 2/4. Create a Bot
-
-In Teleport, a **Bot** represents an identity for a machine. This is similar to
-how a user represents the identity of a human. Like users, bots are assigned
-roles to manage their access to resources. However, unlike users, bots do not
-authenticate using a username and password or SSO. Instead, they initially
-authenticate in a process called joining.
-
-Teleport supports a number of secure join methods specific to the platform the
-bot is running on, but for the purposes of this guide, we will use the simpler
-`token` join method. You can follow a deployment guide later to learn about the
-secure join methods available for your platform.
-
-Before you create a bot user, you need to determine which role(s) you want to
-assign to it. On your client machine, log in to Teleport using `tsh`, then use
-the `tctl` command below to examine what roles exist on your system.
- -```code -$ tctl get roles --format=text -``` - -You will see something like the output below on a fresh install of Teleport with the -default roles—your cluster may have different roles. In this example, let's -assume you want to give the bot the `access` role to allow it to connect to -machines within your cluster. - -``` -Role Allowed to login as Node Labels Access to resources -------- --------------------------------------------- ----------- ---------------------------------------- -access {{internal.logins}} event:list,read,session:read,list -auditor no-login-6566121f-b602-47f1-a118-c9c618ee5aec session:list,read,event:list,read -editor user:list,create,read,update,delete,... -``` - -The `internal.logins` trait is replaced with values from the Teleport local user -database. For full details on how traits work in Teleport roles, see the -[Access Controls -Reference](../../reference/access-controls/roles.mdx). - -Assuming that you are using the default `access` role, ensure that you use the -`--logins` flag when adding your bot to specify the SSH logins that you wish to -allow the bot to access on hosts. For our example, we will be using `root`. - -Use `tctl bots add` to create our bot: - -```code -$ tctl bots add robot --roles=access --logins=root -``` - -## Step 3/4. Start Machine ID - -Now start Machine ID using the `tbot` binary. The `tbot start` command will -start running Machine ID in a loop, writing renewable certificates to -`/var/lib/teleport/bot` and the short-lived certificates your application will -use to `/opt/machine-id`. - -In a production environment you will want to run Machine ID in the background -using a service manager like systemd. However, in this guide you will run it in -the foreground to better understand how it works. 
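For reference, the production setup mentioned above typically wraps the same command in a systemd unit. The sketch below is illustrative only (the binary path, user, and unit options are assumptions); the systemd guidance in the deployment guides is the authoritative version.

```ini
# /etc/systemd/system/tbot.service (illustrative sketch)
[Unit]
Description=Teleport Machine ID (tbot)
After=network.target

[Service]
Type=simple
User=teleport
Group=teleport
ExecStart=/usr/local/bin/tbot start -c /etc/tbot.yaml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```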
-
-```code
-$ export TELEPORT_ANONYMOUS_TELEMETRY=1
-$ sudo tbot start \
-   --data-dir=/var/lib/teleport/bot \
-   --destination-dir=/opt/machine-id \
-   --token=(=presets.tokens.first=) \
-   --join-method=token \
-   --proxy-server=example.teleport.sh:443
-```
-
-`TELEPORT_ANONYMOUS_TELEMETRY` enables the submission of anonymous usage
-telemetry. This helps us shape the future development of `tbot`. You can
-disable it by omitting this variable.
-
-Replace the following fields with values from your own cluster.
-
-- `token` is the token output by the `tctl bots add` command or the name of your IAM method token.
-- `destination-dir` is where Machine ID writes user certificates that can be used by applications and tools.
-- `data-dir` is where Machine ID writes its private data, including its own short-lived renewable certificates. These should not be used by applications and tools.
-- `proxy-server` is the address of your Teleport Proxy Service, for example `example.teleport.sh:443`.
-
-Now that Machine ID has successfully started, let's investigate the
-`/opt/machine-id` directory to see what was written to disk.
-
-```code
-$ tree /opt/machine-id
-machine-id
-├── identity
-├── key
-├── key-cert.pub
-├── key.pub
-├── known_hosts
-├── ssh_config
-├── teleport-database-ca.crt
-├── teleport-host-ca.crt
-├── teleport-user-ca.crt
-└── tlscert
-
-0 directories, 10 files
-```
-
-This directory contains private key material in the `key.*` files, SSH
-certificates in the `identity` file, X.509 certificates in the `tls*` and
-`*.crt` files, and OpenSSH configuration in the `ssh_config` and
-`known_hosts` files, making it easy to integrate Machine ID with external
-applications and tools.
-
-## Step 4/4. Use certificates issued by Machine ID
-
-To use Machine ID, find a host that you want to connect to within your cluster
-using `tsh ls`. You might see output like the following on your system.
- -```code -$ tsh ls -Node Name Address Labels ---------- -------------- ----------------------------- -node-name 127.0.0.1:3022 arch=x86_64,group=api-servers -``` - -
-Not seeing Nodes? - -(!docs/pages/includes/node-logins.mdx!) - -
- -To use Machine ID with the OpenSSH integration, run the following command to -connect to `node-name` within cluster `example.com`. - -```code -$ ssh -F /opt/machine-id/ssh_config root@node-name.example.com -``` - -In addition to the `ssh` client you can use `tsh`. Replace the `--proxy` parameter -with your proxy address. - -```code -$ tsh ssh --proxy=mytenant.teleport.sh -i /opt/machine-id/identity root@node-name -``` - - - The below error can occur when the bot does not have permission to log in to - a node as the requested user: - - ```code - $ ssh -F /opt/machine-id/ssh_config root@node-name.example.com - root@node-name: Permission denied (publickey). - kex_exchange_identification: Connection closed by remote host - ``` - This can happen in two circumstances: - - The user you are trying to log in as is not specified under `logins` in the - role you are using - - If you have used `--logins` when creating the bot user, the role the bot is - impersonating does not have the `{{ internal.logins }}` variable specified. - - If you have been following along with the `access` role, do the following. - - - Export the role by running `tctl get roles/access > access.yaml` - - Edit the `logins` field in `access.yaml` - - Update the role by running `tctl create -f access.yaml` - - -Now you can replace any invocations of `ssh` with the above command to provide -your applications and tools a machine identity that can be rotated, audited, -and controlled with access controls. - -## Next Steps - -- Read the [architecture overview](../../reference/architecture/machine-id-architecture.mdx) to learn about how - Machine ID works in more detail. -- Check out the [deployment guides](deployment/deployment.mdx) to learn about - configuring `tbot` in a production-ready way for your platform. -- Check out the [access guides](access-guides/access-guides.mdx) to learn about configuring - `tbot` for different use cases than SSH. 
-- Read the [configuration reference](../../reference/machine-id/configuration.mdx) to explore - all the available configuration options. -- [More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../../reference/machine-id/telemetry.mdx) - diff --git a/docs/pages/enroll-resources/machine-id/introduction.mdx b/docs/pages/enroll-resources/machine-id/introduction.mdx deleted file mode 100644 index e5e39aff21dfe..0000000000000 --- a/docs/pages/enroll-resources/machine-id/introduction.mdx +++ /dev/null @@ -1,115 +0,0 @@ ---- -title: Introduction to Machine ID -description: Teleport Machine ID introduction, demo and resources. -videoBanner: QWd0eqIa9mA -tocDepth: 3 ---- - -Teleport Machine ID enables machines, such as CI/CD workflows, to securely -authenticate with your Teleport cluster in order to connect to resources and -configure the cluster itself. This is sometimes referred to as -machine-to-machine access. - -Machine ID supports a number of use cases and platforms, but is commonly used -for: - -- Granting CI/CD workflows access to resources such as SSH servers, - Kubernetes clusters, databases and applications that are protected by Teleport - in order to deploy changes. -- Granting SecOps tooling access to resources in order to run security analysis - against them. -- Creating and renewing credentials for custom scripts that use `tctl` or - the Teleport API in order to control Teleport configuration. - -## Concepts - -Read this section to understand the high-level architecture of a Machine ID -setup. For a more in-depth overview, check out the [architecture -page](../../reference/architecture/machine-id-architecture.mdx). - -![Machine ID architecture](../../../img/machine-id/architecture-diagram.png) - -### Bots - -Machine ID provides machines with an identity that can authenticate to the -Teleport cluster. This identity is known as a **bot**. 
Bots share a number of -similarities with human users: - -- Access controlled by roles assigned to them in Teleport -- Access to resources recorded in audit logs -- Identity encoded in an x.509 client certificate which is signed by the - Teleport Auth Service and which can then be used for access. - -### Join tokens - -Unlike a human Teleport user, a bot does not "log in" using a static username -and password. Instead, a bot authenticates to Teleport with a **join token**, -which is configured within Teleport and specifies which bot user it grants -access to and what sort of proof (known as the **join method**) is needed to use -this join token. This proof is typically an identity issued to the machine by -the platform it runs on (e.g. AWS IAM). - -Multiple join tokens may be created for a single bot to allow joining with -different join methods. - -### Bot Instances - -Each time a new `tbot` client joins from scratch, it creates a new server-side -Bot Instance. Bot Instances keep track of individual `tbot` installations over -time, even as they renew their certificates or rejoin. These server-side -resources also record the most recent authentication attempts, as well as -bot heartbeats. - -Many Bot Instances can exist concurrently for a given Bot, regardless of their -join method. - -Bot Instances can be inspected with: -- `tctl get bot_instance` to list all instances -- `tctl get bot_instance/$botName` to list all instances associated with a - particular Bot -- `tctl get bot_instance/$botName/$id` to show a single bot instance by its bot - name and ID - -### tbot - -Machine ID is used through an agent called `tbot`. `tbot` authenticates with the -Teleport Cluster and then generates credentials and configuration files for -other tools to use to connect to Teleport resources using the bot's identity. - -### Artifacts - -The files generated by `tbot` are referred to as its **artifacts**. 
Artifacts -can be a number of things from credentials, such as signed certificates, to -configuration files that will automatically configure a tool (such as `kubectl`) -to use Teleport. This behaviour is controlled by configuring `tbot`'s -**outputs**. An output specifies what should be generated and where it should be -saved. - -## Get started - -For a quickstart non-production introduction to Machine ID, read the -[Getting Started Guide](./getting-started.mdx). - -## Deploy to production - -Production-ready guidance on deploying Machine ID is broken out into two parts: - -- [Deploying Machine ID](deployment/deployment.mdx): How to install and configure - Machine ID for a specific platform. -- [Access your Infrastructure with Machine ID](access-guides/access-guides.mdx): How to use Machine ID to access - Teleport and Teleport resources. - -## Further reading - -- [Workload Identity](../workload-identity/introduction.mdx): Information on Teleport Workload - Identity for SPIFFE, a feature for issuing short-lived identities intended - for workload to workload communication. -- [Frequently Asked Questions](./faq.mdx): Commonly asked questions. -- [Troubleshooting Guide](./troubleshooting.mdx): Common issues and how to solve - them. -- [Architecture](../../reference/architecture/machine-id-architecture.mdx): A technical deep-dive into how Machine ID - works. -- [Reference](../../reference/machine-id/machine-id.mdx): Complete documentation of available - configuration options. -- [Manifesto](./manifesto.mdx): Our vision for Machine Identity and the - direction Machine ID and Workload ID are heading in. 
diff --git a/docs/pages/enroll-resources/machine-id/machine-id.mdx b/docs/pages/enroll-resources/machine-id/machine-id.mdx deleted file mode 100644 index dd01e00691b4c..0000000000000 --- a/docs/pages/enroll-resources/machine-id/machine-id.mdx +++ /dev/null @@ -1,6 +0,0 @@ ---- -title: Machine ID -description: Guides to using Machine ID, which allows you to provide secure access to your infrastructure from automated services. ---- - - diff --git a/docs/pages/enroll-resources/machine-id/troubleshooting.mdx b/docs/pages/enroll-resources/machine-id/troubleshooting.mdx deleted file mode 100644 index e963ef7f9e446..0000000000000 --- a/docs/pages/enroll-resources/machine-id/troubleshooting.mdx +++ /dev/null @@ -1,354 +0,0 @@ ---- -title: Machine ID Troubleshooting Guide -description: Troubleshooting common issues with Machine ID ---- - -This page provides resolution steps for issues that you may come across when -setting up Machine ID. - -## A bot failed to renew a certificate due to a "generation mismatch" - -### Symptoms - -The bot will log an error like this: - -```text -ERROR: renewable cert generation mismatch: stored=3, presented=2 -``` - -Subsequent connection attempts by the bot may see errors like the following: -```text -ERROR: failed direct dial to auth server: auth API: access denied [00] -"\tauth API: access denied [00], failed dial to auth server through reverse tunnel: Get \"https://teleport.cluster.local/v2/configuration/name\": Get \"https://example.com:3025/webapi/find\": x509: cannot validate certificate for example.com because it doesn't contain any IP SANs" -"\tGet \"https://teleport.cluster.local/v2/configuration/name\": Get \"https://example.com:3025/webapi/find\": x509: cannot validate certificate for example.com because it doesn't contain any IP SANs" -``` - -In particular, note the message `auth API: access denied`. 
- -In self-hosted Teleport deployments, the Teleport Auth Service will also provide -some additional context: - -```text -[AUTH] WARN lock targeting User:"bot-example" is in force: The bot user "bot-example" has been locked due to a certificate generation mismatch, possibly indicating a stolen certificate. auth/apiserver.go:224 -``` - -### Explanation - - -This applies only to bots using the `token` join method, which makes use of -one-time shared secrets. Provider-specific join methods, such as GitHub, -AWS IAM, etc., will not be locked in this fashion unless another instance of the -bot uses `token` joining. - - -Machine ID (with token-based joining) uses a certificate generation counter to -detect potentially stolen renewable certificates. Each time a bot fetches a new -renewable certificate, the Auth Service increments the counter, stores it on the -backend, and embeds a copy of the counter in the certificate. - -If the counter embedded in your bot certificate doesn't match the counter -stored in Teleport's Auth Service, the renewal will fail and the bot user will -be automatically [locked](../../admin-guides/access-controls/guides/locking.mdx). - -Renewable certificates are exclusively stored in the bot's internal data -directory, by default `/var/lib/teleport/bot`. It's possible to trigger this by -accident if multiple bots are started using the same internal data directory, or -if this internal data is otherwise being shared between multiple `tbot` -processes. - -Additionally, if a bot fails to save its freshly renewed certificates (for -example, due to a filesystem error) and crashes, it will attempt a renewal -with old certificates and trigger a lock. - -### Resolution - -Before unlocking the bot, try to determine if either of the two scenarios -described above applies. If the certificates were stolen, there may be -underlying security concerns that need to be addressed.
- -Otherwise, first ensure only one `tbot` process is using the internal data -directory. Multiple bots can be run on a single system, but separate data -directories must be configured for each. - -Additionally, ensure the internal data is not being shared with or copied to any -other nodes, for example via a shared NFS volume. If you'd like to share -certificates between nodes, only copy or share content from destination -directories (usually `/opt/machine-id`) rather than the internal data directory -(by default, `/var/lib/teleport/bot`). - -Once you have addressed the underlying cause, follow these steps to reset a -locked bot: - 1. Remove the lock on the bot's user - 1. Reset the bot's generation counter by creating a new bot instance - -To remove the lock, first find and remove the lock targeting the bot user. For -this example, we'll assume the bot is named `example`, which will have an -associated Teleport user named `bot-example`: - -```code -$ tctl get locks -kind: lock -metadata: - id: 1658359514703080513 - name: 5cee949f-5203-4f3b-9805-dac35d798a16 -spec: - message: The bot user "bot-example" has been locked due to a certificate generation - mismatch, possibly indicating a stolen certificate. - target: - user: bot-example -version: v2 - -$ tctl rm lock/5cee949f-5203-4f3b-9805-dac35d798a16 -``` - -Next, use `tctl bots instances add` to generate a new join token for the -preexisting bot `example`: -```code -$ tctl bots instances add example -``` - -Finally, reconfigure the local `tbot` instance with the new token and restart -it. It will detect the new token and automatically reset its internal data -directory. The bot will be issued a new bot instance UUID once connected, and -the generation counter will be reset. 
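The reconfigure-and-restart step above can be sketched as a `tbot.yaml` fragment. This is illustrative only: the token value is hypothetical, and the authoritative option names live in the Machine ID configuration reference; the doc also notes the token may instead be passed via the `--token` CLI flag.

```yaml
# tbot.yaml fragment (illustrative values, not a complete configuration)
onboarding:
  # Paste the token printed by `tctl bots instances add example` here,
  # or pass it on the command line with --token instead.
  token: "abcd1234-example-token"
  join_method: token
```

On the next start, `tbot` detects that this token differs from the one recorded in its internal data directory and resets that directory automatically.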
- -## `tbot` shows a "bad certificate error" at startup - -### Symptoms - -Restarting a `tbot` process outputs a log like the following: - -```text -INFO [TBOT] Successfully loaded bot identity, valid: after=2022-07-21T21:49:26Z, before=2022-07-21T22:50:26Z, duration=1h1m0s | kind=tls, renewable=true, disallow-reissue=false, roles=[bot-test], principals=[-teleport-internal-join], generation=2 tbot/tbot.go:281 -ERRO [TBOT] Identity has expired. The renewal is likely to fail. (expires: 2022-07-21T22:50:26Z, current time: 2022-07-25T20:18:33Z) tbot/tbot.go:415 -WARN [TBOT] Note: onboarding config ignored as identity was loaded from persistent storage tbot/tbot.go:288 -ERRO [TBOT] Failed to resolve tunnel address Get "https://auth.example.com:3025/webapi/find": x509: cannot validate certificate for auth.example.com because it doesn't contain any IP SANs reversetunnel/transport.go:90 -ERRO [TBOT] Failed to resolve tunnel address Get "https://auth.example.com:3025/webapi/find": x509: cannot validate certificate for auth.example.com because it doesn't contain any IP SANs reversetunnel/transport.go:90 -ERROR: failed direct dial to auth server: Get "https://teleport.cluster.local/v2/configuration/name": remote error: tls: bad certificate -"\tGet \"https://teleport.cluster.local/v2/configuration/name\": remote error: tls: bad certificate, failed dial to auth server through reverse tunnel: Get \"https://teleport.cluster.local/v2/configuration/name\": Get \"https://auth.example.com:3025/webapi/find\": x509: cannot validate certificate for auth.example.com because it doesn't contain any IP SANs" -"\tGet \"https://teleport.cluster.local/v2/configuration/name\": Get \"https://auth.example.com:3025/webapi/find\": x509: cannot validate certificate for auth.example.com because it doesn't contain any IP SANs" -``` - -In particular, note the log line: "Identity has expired. The renewal is likely to -fail." 
- -### Explanation - -Token-joined bots are unable to reauthenticate to the Teleport Auth Service once -their certificates have expired. Tokens in token-based joining (as opposed to -AWS IAM and other join methods) can only be used once, so when the bot's -internal certificates expire, it will not be able to connect. - -When a bot's identity expires, certain parameters associated with the bot on the -Auth Service must be reset and a new joining token must be issued. The simplest -way to accomplish this is by removing and recreating the bot, which purges all -server-side data and issues a new joining token. - -### Resolution - -Use `tctl bots instances add` to create a new one-time use token for the bot: - -```code -$ tctl bots instances add example -``` - -Copy the resulting join token into the existing bot config—either the -`--token` CLI flag or the `onboarding.token` parameter in `tbot.yaml`—and -restart the bot. It will detect the new token and rejoin the cluster as normal. - -## SSH connections fail with `ssh: handshake failed: ssh: unable to authenticate` - -### Symptoms - -When attempting to connect to a node via SSH, connections fail with an error -like the following: - -```code -$ ssh -F /opt/machine-id/ssh_config bob@node.example.com -ERROR: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain - -ERROR: unable to execute tsh -executing `tsh proxy` -exit status 1 - -kex_exchange_identification: Connection closed by remote host -Connection closed by UNKNOWN port 65535 -``` - -In particular, note the `ssh: unable to authenticate` message. - -### Explanation - -This can occur when attempting to log into the node as a user not listed as a -principal on the SSH certificate. - -You can verify this by viewing the `tbot` logs and looking for the log message -when impersonated certificates for the matching outputs were renewed. 
- -In the following example, the only principal listed for the identity in -`/opt/machine-id` is `alice` (via the `access` role): -```text -INFO [TBOT] Successfully renewed impersonated certificates for directory /opt/machine-id, valid: after=2022-07-21T21:49:26Z, before=2022-07-21T22:50:26Z, duration=1h1m0s | kind=tls, renewable=false, disallow-reissue=true, roles=[access], principals=[alice -teleport-internal-join], generation=0 tbot/renew.go:630 -``` - -However, the SSH command attempted to log in as `bob`. - -### Resolution - -Ensure the bot identity is allowed to log in as the requested user by taking any -of the following actions: - - - Changing the SSH command to log in as an allowed user - - Modifying the `access` role to allow the `bob` principal - - Adding a role granting login via the `bob` principal - -Note that if roles are added or modified, the certificates will need to be -renewed for the changes to take effect. The bot will renew certificates on its -own after the renewal interval (by default, 20 minutes), but you can trigger a -renewal immediately by either restarting the `tbot` process or sending it a -reload signal: - -```code -## If using systemd, you can restart the process: -$ systemctl restart machine-id -## Alternatively, you can send `tbot` a reload signal directly: -$ pkill -sigusr1 tbot -``` - -## Database requests fail with `database "example" not found`, but the database exists - -### Symptoms - -When requesting certificates for Teleport-protected -[databases](../database-access/database-access.mdx), the certificate request -fails with an error like the following: - -```text -ERROR: Failed to generate impersonated certs for directory /opt/machine-id: database "example" not found -database "example" not found -``` - -However, the database exists and can be seen by regular users via `tsh`: - -```code -$ tsh db ls -Name Description Allowed Users Labels Connect ----------- ----------- ------------- ------- ------- -example [alice]
env=dev -``` - -### Explanation - -Unlike regular Teleport users, Machine ID bot users are granted only minimal -Teleport [RBAC permissions](../../reference/access-controls/roles.mdx) and are not -allowed to view or list databases by default unless granted permission via one -or more roles. - -### Resolution - -{/* vale messaging.protocol-products = NO */} -Per the [Machine ID Database Access Guide](./access-guides/databases.mdx), ensure at -least one role providing database permissions has been granted to the -output listed in the error. -{/* vale messaging.protocol-products = YES */} - -For example, note the `rules` section in the following example role: -```yaml -kind: role -version: v5 -metadata: - name: machine-id-db -spec: - allow: - db_labels: - '*': '*' - db_names: [example] - db_users: [alice] - rules: - - resources: [db_server, db] - verbs: [read, list] -``` - -Ensure the bot has a role that grants it at least these RBAC rules. If desired -you can examine bot roles with `tctl` to ensure the necessary `rules` have been -granted: - -```code -$ tctl get role/machine-id-db -``` - -If the role is missing database permissions, it can be modified in your text -editor: - -```code -$ tctl edit role/machine-id-db -``` - -Edit the role, then save and close the file to apply your changes. - -(!docs/pages/includes/create-role-using-web.mdx!) - - -By default, outputs (like `/opt/machine-id`) are granted all roles provided -to the bot via `tctl bots add --roles=...`, but it's possible to grant only a -subset of these roles using the `roles: ...` parameter in `tbot.yaml`. - -If permissions are unexpectedly missing, ensure `tbot.yaml` requests your -database role, either by relying on default behavior or adding the role to the -`roles: ...` list. - - -Once fixed, restart or reload the `tbot` clients for the updated role to take -effect. 
- -If the bot was not granted the role initially, the simplest solution is to -delete and recreate the bot, being sure to include the role in the `--roles=...` -flag: - -```code -$ tctl bots rm example -$ tctl bots add example --roles=foo,bar,machine-id-db -``` - -## Destination kubernetes_secret: `identity-output` must be a directory in exec plugin mode - -By default, when outputting a Kubernetes identity, `tbot` outputs make use of a Kubernetes exec -plugin to always provide the latest version of the credentials. - -When outputting a Kubernetes identity to a Kubernetes secret, however, it is important to disable -the use of the `exec` plugin by adding `disable_exec_plugin: true` to the output. This means that -a static `kubeconfig` file with embedded short-lived credentials is written instead: - -```yaml -outputs: - - type: kubernetes - # Specify the name of the Kubernetes cluster you wish the credentials to - # grant access to. - kubernetes_cluster: example-k8s-cluster - # Required when outputting a Kubernetes identity to a Kubernetes secret. - disable_exec_plugin: true - destination: - type: kubernetes_secret - # For this guide, identity-output is used as the secret name. - # You may wish to customize this. Multiple outputs cannot share the same - # destination. - name: identity-output -``` - -Failure to add the `disable_exec_plugin` flag will result in a warning being displayed: -`Destination kubernetes_secret: identity-output must be a directory in exec plugin mode`. - -## Configuring `tbot` for split DNS proxies - -When you have deployed your Proxy Service in such a way that it is -accessible via two different DNS names, e.g. an internal and an external address, -you may find that a `tbot` configured to use one of these addresses -attempts to use the other, causing connections to fail.
- -This is because `tbot` queries an auto-configuration endpoint exposed by the -Proxy Service to determine the canonical address to use when connecting. - -To fix this, set the environment variable `TBOT_USE_PROXY_ADDR=yes` for the -`tbot` process. This configures `tbot` to prefer using the address that you have -explicitly provided. This only functions correctly when TLS -routing/multiplexing is enabled for the Teleport cluster. diff --git a/docs/pages/enroll-resources/mcp-access/dynamic-registration.mdx b/docs/pages/enroll-resources/mcp-access/dynamic-registration.mdx new file mode 100644 index 0000000000000..805f7b8cc65d5 --- /dev/null +++ b/docs/pages/enroll-resources/mcp-access/dynamic-registration.mdx @@ -0,0 +1,81 @@ +--- +title: Dynamic MCP Server Registration +sidebar_label: Dynamic Server Registration +sidebar_position: 5 +description: Register/unregister MCP servers without restarting Teleport. +tags: +- how-to +- zero-trust +- mcp +--- + +Dynamic MCP server registration allows Teleport administrators to register new +MCP servers (or update/unregister existing ones) without having to update the +static configuration files read by Teleport Application Service instances. + +The MCP server resources are registered as `app` resources in the Teleport +backend. Application Service instances periodically query the Teleport Auth +Service for `app` resources, each of which includes the information that the +Application Service needs to proxy an application. + +## Required permissions + +(!docs/pages/includes/application-access/dynamic-app-permissions.mdx!) + +## Enabling dynamic registration + +(!docs/pages/includes/application-access/dynamic-app-config.mdx!)
+ +## Creating an MCP server + +The following example configures Teleport to proxy the "Everything" MCP server +by launching it through Docker: +```yaml +kind: app +version: v3 +metadata: + name: everything + description: The Everything MCP server + labels: + env: dev +spec: + mcp: + # Command to launch stdio-based MCP servers. + command: "docker" + # Args to execute with the command. + args: ["run", "-i", "--rm", "mcp/everything"] + # Name of the host user account under which the command will be + # executed. Required for stdio-based MCP servers. + run_as_host_user: "docker" +``` + +See the full resource spec [reference](reference.mdx). + +To create the resource, run: + +```code +# Log in to your cluster with tsh so you can use tctl from your local machine. +# You can also run tctl on your Auth Service host without running "tsh login" +# first. +$ tsh login --proxy=teleport.example.com --user=myuser +$ tctl create mcp_server.yaml +``` + +After the resource has been created, it will appear among the list of available +MCP servers (in `tsh mcp ls` or the Web UI) as long as at least one Application Service +instance picks it up according to its label selectors. + +To update an existing application resource, run: + +```code +$ tctl create -f mcp_server.yaml +``` + +If the updated resource's labels no longer match a particular app agent's selectors, the agent +will unregister the MCP server and stop proxying it. + +To delete the resource, run: + +```code +$ tctl rm app/everything +``` diff --git a/docs/pages/enroll-resources/mcp-access/enrolling-mcp-servers/enrolling-mcp-servers.mdx b/docs/pages/enroll-resources/mcp-access/enrolling-mcp-servers/enrolling-mcp-servers.mdx new file mode 100644 index 0000000000000..ea951bf378ba6 --- /dev/null +++ b/docs/pages/enroll-resources/mcp-access/enrolling-mcp-servers/enrolling-mcp-servers.mdx @@ -0,0 +1,7 @@ +--- +title: Protecting MCP Servers +sidebar_label: Enrollment Guides +description: Provides guidance on enrolling various kinds of MCP servers with Teleport.
+--- + + diff --git a/docs/pages/enroll-resources/mcp-access/enrolling-mcp-servers/sse.mdx b/docs/pages/enroll-resources/mcp-access/enrolling-mcp-servers/sse.mdx new file mode 100644 index 0000000000000..cf83505622cf2 --- /dev/null +++ b/docs/pages/enroll-resources/mcp-access/enrolling-mcp-servers/sse.mdx @@ -0,0 +1,80 @@ +--- +title: MCP Access with SSE MCP Server +sidebar_label: SSE +description: How to access an MCP server with SSE transport. +tags: +- how-to +- mcp +- zero-trust +--- + +Teleport can provide secure access to MCP servers with SSE transport. + +This guide shows you how to: + +- Enroll an MCP server with SSE transport in your Teleport cluster. +- Connect to the SSE MCP server via Teleport. + +## How it works + +![How it works](../../../../img/mcp-access/architecture-sse.svg) + +Users can configure their MCP clients such as Claude Desktop to start an MCP +server using `tsh`. Once successfully authorized, `tsh` establishes a session +with the Application Service. + +The Teleport Application Service first starts an SSE connection to the remote MCP server +defined in the application definition. Teleport then proxies the MCP protocol +between the client and the remote MCP server, applying additional role-based +access controls such as filtering which tools are available to the user. While +proxying, Teleport also logs MCP protocol requests as audit events, providing +visibility into user activity. + + +HTTP with SSE transport has been deprecated in MCP specification version +2025-03-26. It is recommended to update your MCP server to use the +streamable-HTTP transport instead. + + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport (v18.3.0 or higher)"!) +- A host, e.g., an EC2 instance, where you will run the Teleport Application + Service. +- The endpoint of the SSE MCP server . + +## Step 1/3. Configure the Teleport Application Service + +(!docs/pages/includes/mcp-access/configure-app-service.mdx extraAppScheme="sse+" !)
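For orientation only (the include above contains the authoritative configuration), an SSE MCP server entry might look roughly like the following sketch; the endpoint hostname is hypothetical, and the `sse+` URI prefix is an assumption mirroring the `extraAppScheme` value passed to the include:

```yaml
# Sketch of an Application Service app entry for an SSE MCP server.
# Hostname and the sse+ scheme prefix are assumptions; follow the
# include above for the authoritative form.
app_service:
  enabled: true
  apps:
  - name: "everything"
    # Endpoint of the remote SSE MCP server.
    uri: "mcp+sse+https://mcp.example.com/sse"
    labels:
      env: dev
```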
+ +## Step 2/3. Configure your Teleport user + +(!docs/pages/includes/mcp-access/configure-user-rbac.mdx!) + +## Step 3/3. Connect + +Log in to Teleport with the user we've just created, : + +```code +$ tsh login --proxy= --user= +``` + +Now we can inspect available MCP servers: + +```code +$ tsh mcp ls +Name Description Type Labels +---------- --------------------- ----- ---------- +everything everything MCP server SSE env=dev +``` + +(!docs/pages/includes/mcp-access/tsh-mcp-config.mdx!) + +## Next Steps + +Learn more about protecting MCP servers with Teleport in the following topics: + +- [MCP Access Control](../rbac.mdx). +- [JWT Authentication to MCP server](../jwt.mdx) +- [Register MCP servers dynamically](../dynamic-registration.mdx) +- Configuration and CLI [reference](../reference.mdx). diff --git a/docs/pages/enroll-resources/mcp-access/enrolling-mcp-servers/stdio.mdx b/docs/pages/enroll-resources/mcp-access/enrolling-mcp-servers/stdio.mdx new file mode 100644 index 0000000000000..a671f0e41d3cf --- /dev/null +++ b/docs/pages/enroll-resources/mcp-access/enrolling-mcp-servers/stdio.mdx @@ -0,0 +1,160 @@ +--- +title: MCP Access with Stdio MCP Server +sidebar_label: Stdio +description: How to access an MCP server with stdio transport. +tags: +- how-to +- mcp +- zero-trust +--- + +Teleport can provide secure access to MCP servers with stdio transport. + +This guide shows you how to: + +- Enroll an MCP server with stdio transport in your Teleport cluster. +- Connect to the stdio MCP server via Teleport. + +## How it works + +![How it works](../../../../img/mcp-access/architecture-stdio.svg) + +Users can configure their MCP clients such as Claude Desktop to start an MCP +server using `tsh`. Once successfully authorized, `tsh` establishes a session +with the Application Service. + +The Teleport Application Service starts the MCP server using the command and arguments +defined by Teleport administrators in the app definition.
Teleport then proxies +the connection between the client and the remote MCP server, applying additional +role-based access controls such as filtering which tools are available to the +user. While proxying, Teleport also logs MCP protocol requests as audit events, +providing visibility into user activity. + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport (v18.1.0 or higher)"!) +- A host, e.g., an EC2 instance, where you will run the Teleport Application + Service. +- Tools on the same host that can launch stdio-based MCP servers, such as `npx` + or `docker`. + +## Step 1/3. Configure the Teleport Application Service + +You can update an existing Application Service or create a new one to enable +the MCP server. + + + + + +If you already have an existing Application Service running, you can add an MCP +server in your YAML configuration: +```yaml +app_service: + enabled: true + apps: + - name: "everything" + labels: + env: dev + description: + mcp: + # Command to launch stdio-based MCP servers. + command: "docker" + # Args to execute with the command. + args: ["run", "-i", "--rm", "mcp/everything"] + # Name of the host user account under which the command will be + # executed. Required for stdio-based MCP servers. + run_as_host_user: "docker" +``` + +Replace the MCP details with the MCP server you want to run, then restart the Application Service. + + + + +{/* lint ignore heading-increment remark-lint */} +#### Get a join token +(!docs/pages/includes/tctl-token.mdx serviceName="Application" tokenType="app" tokenFile="/tmp/token"!) + +(!docs/pages/includes/database-access/alternative-methods-join.mdx!) + +#### Install the Teleport Application Service +Install Teleport on the host where you will run the Teleport Application Service: + +(!docs/pages/includes/install-linux.mdx!)
+ +#### Configure the Teleport Application Service + +On the host where you will run the Teleport Application Service, create a file +at `/etc/teleport.yaml` with the following content: +```yaml +version: v3 +teleport: + join_params: + token_name: "/tmp/token" + method: token + proxy_server: "" +auth_service: + enabled: false +proxy_service: + enabled: false +ssh_service: + enabled: false +app_service: + enabled: true + apps: + - name: "everything" + labels: + env: dev + description: + mcp: + # Command to launch stdio-based MCP servers. + command: "docker" + # Args to execute with the command. + args: ["run", "-i", "--rm", "mcp/everything"] + # Name of the host user account under which the command will be + # executed. Required for stdio-based MCP servers. + run_as_host_user: "docker" +``` + +Replace with the host and port of your Teleport Proxy +Service or Teleport Cloud tenant, and replace the MCP details with the MCP +server you want to run. + +#### Start the Teleport Application Service + +(!docs/pages/includes/start-teleport.mdx service="the Application Service"!) + + + + +## Step 2/3. Configure your Teleport user + +(!docs/pages/includes/mcp-access/configure-user-rbac.mdx!) + +## Step 3/3. Connect + +Log in to Teleport with the user we've just created, : + +```code +$ tsh login --proxy= --user= +``` + +Now we can inspect available MCP servers: + +```code +$ tsh mcp ls +Name Description Type Labels +---------- --------------------- ----- ---------- +everything everything MCP server stdio env=dev +``` + +(!docs/pages/includes/mcp-access/tsh-mcp-config.mdx!) + +## Next Steps + +Learn more about protecting MCP servers with Teleport in the following topics: + +- [MCP Access Control](../rbac.mdx). +- [Register MCP servers dynamically](../dynamic-registration.mdx) +- Configuration and CLI [reference](../reference.mdx).
diff --git a/docs/pages/enroll-resources/mcp-access/enrolling-mcp-servers/streamable-http.mdx b/docs/pages/enroll-resources/mcp-access/enrolling-mcp-servers/streamable-http.mdx new file mode 100644 index 0000000000000..22978552dea4b --- /dev/null +++ b/docs/pages/enroll-resources/mcp-access/enrolling-mcp-servers/streamable-http.mdx @@ -0,0 +1,96 @@ +--- +title: MCP Access with Streamable-HTTP MCP Server +sidebar_label: Streamable-HTTP +description: How to access an MCP server with streamable-HTTP transport. +tags: +- how-to +- mcp +- zero-trust +--- + +Teleport can provide secure access to MCP servers with streamable-HTTP transport. + +This guide shows you how to: + +- Enroll an MCP server with streamable-HTTP transport in your Teleport cluster. +- Connect to the streamable-HTTP MCP server via Teleport. + +## How it works + +![How it works](../../../../img/mcp-access/architecture-http.svg) + +Users can configure their MCP clients such as Claude Desktop to start an MCP +server using `tsh`. Once successfully authorized, `tsh` establishes a session +with the Application Service. + +Teleport proxies the MCP protocol between the client and the remote MCP server, +applying additional role-based access controls such as filtering which tools are +available to the user. While proxying, Teleport also logs MCP protocol requests +as audit events, providing visibility into user activity. + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport (v18.3.0 or higher)"!) +- A host, e.g., an EC2 instance, where you will run the Teleport Application + Service. +- The endpoint of the streamable-HTTP MCP server . + +## Step 1/3. Configure the Teleport Application Service + +(!docs/pages/includes/mcp-access/configure-app-service.mdx!) + +## Step 2/3. Configure your Teleport user + +(!docs/pages/includes/mcp-access/configure-user-rbac.mdx!) + +## Step 3/3.
Connect + +Log in to Teleport with the user we've just created, : + +```code +$ tsh login --proxy= --user= +``` + +Now we can inspect available MCP servers: + +```code +$ tsh mcp ls +Name Description Type Labels +---------- --------------------- --------------- ---------- +everything everything MCP server Streamable HTTP env=dev +``` + +(!docs/pages/includes/mcp-access/tsh-mcp-config.mdx!) + +## Connect with a streamable-HTTP client + +The `tsh mcp connect` command demonstrated above can be used by any stdio MCP +client to start an MCP session, and Teleport automatically handles the transport +conversion to the remote MCP server. + +However, if you wish to connect to the MCP server over streamable-HTTP transport, +you can start a local proxy that creates an authenticated tunnel to the remote +MCP server: +```code +$ tsh proxy mcp everything -p 8888 +Proxying connections to everything on 127.0.0.1:8888 +``` + +Then, connect to the MCP server locally with an MCP client that supports +streamable-HTTP, for example: +```code +$ npx @modelcontextprotocol/inspector --transport http --cli --method tools/list http://127.0.0.1:8888 +``` + +Alternatively, instead of `tsh`, you can use [Teleport +Connect](../../../connect-your-client/teleport-clients/teleport-connect.mdx#connecting-to-an-mcp-server) which can maintain +the local proxies in the background. + +## Next Steps + +Learn more about protecting MCP servers with Teleport in the following topics: + +- [MCP Access Control](../rbac.mdx). +- [JWT Authentication to MCP server](../jwt.mdx) +- [Register MCP servers dynamically](../dynamic-registration.mdx) +- Configuration and CLI [reference](../reference.mdx).
diff --git a/docs/pages/enroll-resources/mcp-access/getting-started.mdx b/docs/pages/enroll-resources/mcp-access/getting-started.mdx new file mode 100644 index 0000000000000..eb27b96e1dc5d --- /dev/null +++ b/docs/pages/enroll-resources/mcp-access/getting-started.mdx @@ -0,0 +1,140 @@ +--- +title: MCP Access Getting Started Guide +sidebar_label: Getting Started +sidebar_position: 1 +description: Getting started with Teleport MCP access. +tags: +- how-to +- mcp +- zero-trust +--- + +Teleport can provide secure connections to your MCP (Model Context Protocol) +servers while improving both access control and visibility. + +This guide shows you how to: + +- Enroll the Teleport demo MCP server in your Teleport cluster. +- Connect to the MCP server via Teleport. + +## How it works + +![How it works](../../../img/mcp-access/architecture-demo.svg) + +The Teleport Application Service includes a built-in demo MCP server designed to +showcase how MCP access works. + +Users can configure their MCP clients such as Claude Desktop to start an MCP +server using `tsh`. Once successfully authorized, `tsh` establishes a session +with the Application Service. + +Once the session is established, the Application Service starts the in-memory +demo MCP server. Teleport then proxies the connection between the client and the +remote MCP server, applying additional role-based access controls such as +filtering which tools are available to the user. While proxying, Teleport also +logs MCP protocol requests as audit events, providing visibility into user +activity. + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport (v18.1.0 or higher)"!) +- A host, e.g., an EC2 instance, where you will run the Teleport Application + Service. + +## Step 1/3. Configure the Teleport Application Service + +You can update an existing Application Service or create a new one to enable the +demo MCP server.
+ + + + +If you already have an existing Application Service running, you can enable the +demo MCP server by adding the following in your YAML configuration: +```diff +app_service: + enabled: true ++ mcp_demo_server: true +... +``` + +Now restart the Application Service. + + + + +{/* lint ignore heading-increment remark-lint */} +#### Get a join token +(!docs/pages/includes/tctl-token.mdx serviceName="Application" tokenType="app" tokenFile="/tmp/token"!) + +(!docs/pages/includes/database-access/alternative-methods-join.mdx!) + +#### Install the Teleport Application Service +Install Teleport on the host where you will run the Teleport Application Service: + +(!docs/pages/includes/install-linux.mdx!) + +#### Configure the Teleport Application Service + +On the host where you will run the Teleport Application Service, create a +configuration file: +```code +$ sudo teleport configure \ + -o file \ + --roles=app \ + --proxy= \ + --token=/tmp/token \ + --mcp-demo-server +``` + +The command will generate an Application Service configuration to proxy the demo +MCP server and save the configuration to `/etc/teleport.yaml`. + +#### Start the Teleport Application Service + +(!docs/pages/includes/start-teleport.mdx service="the Application Service"!) + + + +## Step 2/3. Configure your Teleport user + +(!docs/pages/includes/mcp-access/configure-user-rbac.mdx!) + +## Step 3/3. Connect + +Log in to Teleport with the user we've just created, : + +```code +$ tsh login --proxy= --user= +``` + +Now we can inspect available MCP servers: + +```code +$ tsh mcp ls +Name Description Type Labels +----------------- ----------------------------------------------------------------- ----- ------ +teleport-mcp-demo A demo MCP server that shows current user and session information stdio +``` + +(!docs/pages/includes/mcp-access/tsh-mcp-config.mdx appName="teleport-mcp-demo" !)
+ +The demo MCP server consists of several tools: +- `teleport_user_info`: Shows basic information about your Teleport user. +- `teleport_session_info`: Shows information about this MCP session. +- `teleport_demo_info`: Shows information about this Teleport Demo MCP server. + +You can interact with it using sample questions like "can you show some details +on this Teleport demo?": + +![Demo Server Claude Desktop](../../../img/mcp-access/demo-server-claude-desktop.png) + + +## Next Steps + +Learn more about protecting MCP servers with Teleport in the following topics: + +- [MCP Access Control](rbac.mdx). +- Configuration and CLI [reference](reference.mdx). +- Audit events [reference](../../reference/deployment/monitoring/audit.mdx#event-types). diff --git a/docs/pages/enroll-resources/mcp-access/integration-guides/github.mdx b/docs/pages/enroll-resources/mcp-access/integration-guides/github.mdx new file mode 100644 index 0000000000000..b57359b7b1c0d --- /dev/null +++ b/docs/pages/enroll-resources/mcp-access/integration-guides/github.mdx @@ -0,0 +1,121 @@ +--- +title: Connect a GitHub MCP Server to Teleport +sidebar_label: GitHub +description: Set up a GitHub MCP server for access through Teleport +tags: +- how-to +- mcp +- zero-trust +--- + +(!docs/pages/includes/mcp-access/integration-intro.mdx serviceName="GitHub" !) + +## How it works + +The [GitHub MCP server](https://github.com/github/github-mcp-server) uses a +personal access token of a service account to access GitHub and +[`mcp-proxy`](https://github.com/sparfenyuk/mcp-proxy) exposes it to the +Teleport Application Service over a streamable-HTTP endpoint by translating the +original transport. Teleport proxies all client requests to the server, which +interacts with GitHub using the permissions granted to the service account. + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport (v18.3.0 or higher)" clients="\`tsh\` client"!) 
+- Access to a service account in your GitHub organization. +- A host to run the MCP server that is reachable by the Teleport Application Service. +- A running Teleport Application Service. If you have not yet done this, follow + the [Getting Started guide](../getting-started.mdx). +- A Teleport user with sufficient permissions (e.g. role `mcp-user`) to access + MCP servers. + +## Step 1/3. Create a personal access token + +Log in to GitHub using your service account. Navigate to Settings > Developer +Settings > Personal access tokens, then click **Generate new token**. + +When creating the token, grant only the minimal permissions needed. Avoid broad +scopes such as write or admin access unless absolutely required. + +Once the token is created, save it for use in the next step. + +## Step 2/3. Run the GitHub MCP server + +First, install `mcp-proxy` on the host that will run the MCP server: +```code +# Option 1: With uv (recommended) +$ uv tool install mcp-proxy + +# Option 2: With pipx (alternative) +$ pipx install mcp-proxy +``` + +Now start the GitHub MCP server behind `mcp-proxy`, using the personal access +token : +```code +$ mcp-proxy \ + --host --port 8000 \ + -- docker run -i --rm -e GITHUB_PERSONAL_ACCESS_TOKEN= \ + ghcr.io/github/github-mcp-server +``` + +Replace `` with the hostname of the host machine running +the MCP server. The host must be reachable by the Teleport Application +Service. + +After starting, `mcp-proxy` exposes a streamable-HTTP endpoint at `http://:8000/mcp`. + +## Step 3/3. Connect via Teleport + +(!docs/pages/includes/mcp-access/integration-teleport-app.mdx service="github" serviceName="GitHub" port="8000"!) + +(!docs/pages/includes/mcp-access/integration-limit-tools.mdx!)
+```yaml +kind: role +version: v8 +metadata: + name: github-mcp-readonly +spec: + allow: + app_labels: + 'service': 'github' + mcp: + tools: + - ^(get|search|list)_.*$ + - ^.*_read$ +``` + +(!docs/pages/includes/mcp-access/integration-tsh.mdx service="github" serviceName="GitHub" !) + +![GitHub Claude](../../../../img/mcp-access/github-claude.png) + +## Connect to GitHub without a service account + +Instead of using a service account's personal access token, you can require each +Teleport user to supply their own token from the client side. This removes the +need to run an `mcp-proxy`, allowing you to use the official hosted MCP server +directly when configuring your Teleport application: +```yaml +app_service: + enabled: "yes" + apps: + - name: "github-mcp" + uri: "mcp+https://api.githubcopilot.com/mcp/" + labels: + env: dev + service: github +``` + +When configuring the MCP client, use your own personal access token as a bearer +token for the `Authorization` header: +```code +$ tsh mcp config github-mcp --client-config claude --header "Authorization: Bearer " +``` + +## Next steps + +- Review [Enroll a Streamable-HTTP MCP Server](../enrolling-mcp-servers/streamable-http.mdx). +- See the [dynamic registration](../dynamic-registration.mdx) guide. +- Learn more about [github-mcp-server](https://github.com/github/github-mcp-server). +- Connect your [MCP clients](../../../connect-your-client/model-context-protocol/mcp-access.mdx).
diff --git a/docs/pages/enroll-resources/mcp-access/integration-guides/grafana.mdx b/docs/pages/enroll-resources/mcp-access/integration-guides/grafana.mdx new file mode 100644 index 0000000000000..75547c37af080 --- /dev/null +++ b/docs/pages/enroll-resources/mcp-access/integration-guides/grafana.mdx @@ -0,0 +1,137 @@ +--- +title: Connect a Grafana MCP Server to Teleport +sidebar_label: Grafana +description: Set up a Grafana MCP server for access through Teleport +tags: +- how-to +- mcp +- zero-trust +--- + +(!docs/pages/includes/mcp-access/integration-intro.mdx serviceName="Grafana" !) + +## How it works + +The [Grafana MCP server](https://github.com/grafana/mcp-grafana) +forwards JWT tokens signed by Teleport to access Grafana and runs on a local +endpoint reachable by the Teleport Application Service. Teleport proxies all +client requests to the server, which interacts with Grafana using permissions +mapped by [JWT authentication](https://grafana.com/docs/grafana/latest/setup-grafana/configure-access/configure-authentication/jwt/). + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport (v18.3.0 or higher)" clients="\`tsh\` client"!) +- Ability to configure your Grafana instance. +- A host to run the MCP server that is reachable by the Teleport Application Service. +- A running Teleport Application Service. If you have not yet done this, follow + the [Getting Started guide](../getting-started.mdx). +- A Teleport user with sufficient permissions (e.g. role `mcp-user`) to access + MCP servers. + +## Step 1/3. Configure JWT authentication in Grafana + +(!docs/pages/includes/application-access/jwt-grafana-config.mdx!) + +## Step 2/3. Run the Grafana MCP server + +The Grafana MCP Server can be run either as a compiled binary or via the +official Docker image: + + + +Run the MCP server in streamable-HTTP transport. 
Assign to the URL of your Grafana +instance and to the hostname +of the host machine running the MCP server: +```code +$ export GRAFANA_URL= +$ ./mcp-grafana --transport streamable-http --address :8000 +``` + + + +Run the MCP server in streamable-HTTP transport. Assign to the URL of your Grafana +instance: +```code +$ docker run -d -p 8000:8000 \ + -e GRAFANA_URL= \ + mcp/grafana --transport streamable-http +``` + + + +The Grafana MCP Server now exposes a streamable-HTTP endpoint at `http://:8000/mcp`. + +## Step 3/3. Connect via Teleport + +(!docs/pages/includes/mcp-access/integration-teleport-app-rewrite.mdx service="grafana" serviceName="Grafana" port="8000" headerName="X-Grafana-API-Key" headerValue="{{internal.jwt}}" !) + +The header rewrite configuration above will replace the `{{internal.jwt}}` +template variable with a Teleport-signed JWT token in each request. The Grafana +MCP server will use this token as a bearer token in "Authorization" header when +connecting to Grafana. + +(!docs/pages/includes/mcp-access/integration-limit-tools.mdx!) +```yaml +kind: role +version: v8 +metadata: + name: grafana-mcp-readonly +spec: + allow: + app_labels: + 'service': 'grafana' + mcp: + tools: + - ^(get|query|list|search|find)_.*$ +``` + +(!docs/pages/includes/mcp-access/integration-tsh.mdx service="grafana" serviceName="Grafana" !) + +![Grafana Claude](../../../../img/mcp-access/grafana-claude.png) + +## Connect to Grafana using a service account + +Instead of accessing Grafana with JWT authentication, you can also use a service +account. + +Navigate to Administrators > Users and Accounts > Service Accounts, and then +click **Create Service Account**: + +![Grafana Service Account](../../../../img/mcp-access/grafana-create-sa.png) + +Assign the **Viewer** role to keep the service account limited to read-only +access. + +After the service account is created, click **Add service account +token** to generate a new token. 
Use this token when +starting the Grafana MCP server: +```code +$ docker run -d -p 8000:8000 \ + -e GRAFANA_URL= \ + -e GRAFANA_SERVICE_ACCOUNT_TOKEN= \ + mcp/grafana --transport streamable-http +``` + +Lastly, you don’t need to configure any header rewrites in the Teleport +application: +```yaml +app_service: + enabled: "yes" + apps: + - name: "grafana-mcp" + uri: "mcp+http://:8000/mcp" + labels: + env: dev + service: grafana +``` + +## Next steps + +- Read more on accessing Grafana through Teleport with [JWT authentication](../../application-access/jwt/grafana.mdx). +- Review [Enroll a Streamable-HTTP MCP Server](../enrolling-mcp-servers/streamable-http.mdx). +- See the [dynamic registration](../dynamic-registration.mdx) guide. +- Learn more about [mcp-grafana](https://github.com/grafana/mcp-grafana). +- Connect your [MCP clients](../../../connect-your-client/model-context-protocol/mcp-access.mdx). diff --git a/docs/pages/enroll-resources/mcp-access/integration-guides/integration-guides.mdx b/docs/pages/enroll-resources/mcp-access/integration-guides/integration-guides.mdx new file mode 100644 index 0000000000000..d7c8ecaa3d4a2 --- /dev/null +++ b/docs/pages/enroll-resources/mcp-access/integration-guides/integration-guides.mdx @@ -0,0 +1,15 @@ +--- +title: MCP Server Integration Guides +sidebar_label: Integration Guides +description: How to configure popular services and connect their MCP servers through Teleport. +sidebar_position: 6 +tags: + - mcp + - zero-trust +--- + + +Guides on how to configure popular services with the credentials and transports +required to run their MCP servers and connect them through Teleport. 
+ + + diff --git a/docs/pages/enroll-resources/mcp-access/integration-guides/notion.mdx b/docs/pages/enroll-resources/mcp-access/integration-guides/notion.mdx new file mode 100644 index 0000000000000..10323869773a0 --- /dev/null +++ b/docs/pages/enroll-resources/mcp-access/integration-guides/notion.mdx @@ -0,0 +1,97 @@ +--- +title: Connect a Notion MCP Server to Teleport +sidebar_label: Notion +description: Set up a Notion MCP server for access through Teleport +tags: +- how-to +- mcp +- zero-trust +--- + +(!docs/pages/includes/mcp-access/integration-intro.mdx serviceName="Notion" !) + +## How it works + +The [Notion MCP server](https://github.com/makenotion/notion-mcp-server) +uses an integration token to access Notion and runs on a local endpoint +reachable by the Teleport Application Service. Teleport proxies all client +requests to the server, which interacts with Notion using the permissions +granted to the integration. + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport (v18.3.0 or higher)" clients="\`tsh\` client"!) +- Access to your Notion workspace and sufficient privileges to manage integrations. +- A host to run the MCP server that is reachable by the Teleport Application Service. +- A running Teleport Application Service. If you have not yet done this, follow + the [Getting Started guide](../getting-started.mdx). +- A Teleport user with sufficient permissions (e.g. role `mcp-user`) to access + MCP servers. + +## Step 1/3. Create an integration in Notion + +Go to https://www.notion.so/profile/integrations and create a new **internal +integration**. + +![Notion integration](../../../../img/mcp-access/notion-integration.png) + +To limit the scope available to LLMs, disable all permissions except "Read +Content" in the "Capabilities" section. + +Next, open the "Access" tab and select the pages you want the integration to +access.
+ +![Notion access](../../../../img/mcp-access/notion-access.png) + +Finally, return to the "Configuration" tab and copy the "Internal Integration +Secret" for use in the next step. + +## Step 2/3. Run the Notion MCP server + +Start the Notion MCP server using your Notion integration token : +```code +$ export NOTION_TOKEN= +$ npx @notionhq/notion-mcp-server --transport http --port 8000 --auth-token teleport-local-connection +``` + +The MCP server listens on all network interfaces by default. Run it on a private +network and ensure the hostname is +reachable by the Teleport Application Service. + +The `--auth-token` value is the shared secret Teleport uses to authenticate to +the MCP server. Since the MCP server is not publicly accessible, using a fixed +value is acceptable. + +## Step 3/3. Connect via Teleport + +(!docs/pages/includes/mcp-access/integration-teleport-app.mdx service="notion" serviceName="Notion" port="8000" !) + +(!docs/pages/includes/mcp-access/integration-limit-tools.mdx!) +```yaml +kind: role +version: v8 +metadata: + name: notion-mcp-readonly +spec: + allow: + app_labels: + 'service': 'notion' + mcp: + tools: + - API-get-* + - API-retrieve-* + - API-post-database-query + - API-post-search +``` + +(!docs/pages/includes/mcp-access/integration-tsh.mdx service="notion" serviceName="Notion" !) + +![Notion Claude](../../../../img/mcp-access/notion-claude.png) + +## Next steps + +- Review [Enroll a Streamable-HTTP MCP Server](../enrolling-mcp-servers/streamable-http.mdx). +- See the [dynamic registration](../dynamic-registration.mdx) guide. +- Learn more about [notion-mcp-server](https://github.com/makenotion/notion-mcp-server). +- Connect your [MCP clients](../../../connect-your-client/model-context-protocol/mcp-access.mdx).
diff --git a/docs/pages/enroll-resources/mcp-access/integration-guides/vault.mdx b/docs/pages/enroll-resources/mcp-access/integration-guides/vault.mdx new file mode 100644 index 0000000000000..8771ad828986e --- /dev/null +++ b/docs/pages/enroll-resources/mcp-access/integration-guides/vault.mdx @@ -0,0 +1,125 @@ +--- +title: Connect a HashiCorp Vault MCP Server to Teleport +sidebar_label: HashiCorp Vault +description: Set up a HashiCorp Vault MCP server for access through Teleport +tags: +- how-to +- mcp +- zero-trust +--- + +(!docs/pages/includes/mcp-access/integration-intro.mdx serviceName="HashiCorp Vault" !) + +## How it works + +The [HashiCorp Vault MCP server](https://github.com/hashicorp/vault-mcp-server) +uses a service token to access HashiCorp Vault and runs on a local endpoint +reachable by the Teleport Application Service. Teleport proxies all client +requests to the server, which interacts with HashiCorp Vault using the +permissions granted by the policy bound to the token. + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport (v18.3.0 or higher)" clients="\`tsh\` client"!) +- Access to your Vault instance and sufficient privileges to manage policies. +- A host to run the MCP server that is reachable by the Teleport Application Service. +- A running Teleport Application Service. If you have not yet done this, follow + the [Getting Started guide](../getting-started.mdx). +- A Teleport user with sufficient permissions (e.g. role `mcp-user`) to access + MCP servers. + +## Step 1/3. Create a policy in Vault + +First, create a policy file: +```code +$ cat > mcp-readonly.hcl < + +To start the MCP server in streamable-HTTP mode: +```code +$ export TRANSPORT_MODE=http +$ export TRANSPORT_HOST= # or listen to a network that is reachable by Teleport +$ export VAULT_ADDR= +$ export VAULT_TOKEN= +$ ./vault-mcp-server +``` + +Replace `` with the hostname of the host machine running +the MCP server. 
The host must be reachable by the Teleport Application +Service. + + + +To start the MCP server in streamable-HTTP mode: +```code +$ docker run -d -p 8080:8080 \ + -e TRANSPORT_MODE=http \ + -e TRANSPORT_HOST=0.0.0.0 \ + -e VAULT_ADDR= \ + -e VAULT_TOKEN= \ + --name vault-teleport-mcp hashicorp/vault-mcp-server +``` + + + +After starting, the Vault MCP Server exposes a streamable-HTTP endpoint at +`http://:8080/mcp`. + +## Step 3/3. Connect via Teleport + +(!docs/pages/includes/mcp-access/integration-teleport-app.mdx service="vault" serviceName="Vault" port="8080" !) + +(!docs/pages/includes/mcp-access/integration-limit-tools.mdx!) +```yaml +kind: role +version: v8 +metadata: + name: vault-mcp-readonly +spec: + allow: + app_labels: + 'service': 'vault' + mcp: + tools: + - ^(get|list)_.*$ +``` + +(!docs/pages/includes/mcp-access/integration-tsh.mdx service="vault" serviceName="Vault" !) + +![Vault Claude](../../../../img/mcp-access/vault-claude.png) + +## Next steps + +- Review [Enroll a Streamable-HTTP MCP Server](../enrolling-mcp-servers/streamable-http.mdx). +- See the [dynamic registration](../dynamic-registration.mdx) guide. +- Learn more about [vault-mcp-server](https://github.com/hashicorp/vault-mcp-server). +- Connect your [MCP clients](../../../connect-your-client/model-context-protocol/mcp-access.mdx). diff --git a/docs/pages/enroll-resources/mcp-access/jwt.mdx b/docs/pages/enroll-resources/mcp-access/jwt.mdx new file mode 100644 index 0000000000000..777d02056dc4f --- /dev/null +++ b/docs/pages/enroll-resources/mcp-access/jwt.mdx @@ -0,0 +1,37 @@ +--- +title: JWT Authentication to MCP Servers +sidebar_label: JWT Authentication +sidebar_position: 4 +description: How to use Teleport JWT to authenticate your MCP servers +tags: +- conceptual +- mcp +- zero-trust +--- + +(!docs/pages/includes/application-access/jwt-intro.mdx appType="MCP server" !) + +## Inject JWT + +(!docs/pages/includes/application-access/jwt-inject.mdx!) 
+ +For example: + +```yaml +- name: "my-mcp-server" + uri: mcp+http://localhost:4321 + rewrite: + headers: + - "Authorization: Bearer {{internal.jwt}}" +``` + +## Validate JWT + +(!docs/pages/includes/application-access/jwt-validate.mdx!) + +See the example Go program used to validate Teleport's JWT tokens on our +[GitHub](https://github.com/gravitational/teleport/blob/v(=teleport.version=)/examples/mcp-servers/). + +## Troubleshooting + +(!docs/pages/includes/application-access/jwt-configure-claims.mdx appType="MCP server" appScheme="mcp+http" !) diff --git a/docs/pages/enroll-resources/mcp-access/mcp-access.mdx b/docs/pages/enroll-resources/mcp-access/mcp-access.mdx new file mode 100644 index 0000000000000..20ff4b023fba4 --- /dev/null +++ b/docs/pages/enroll-resources/mcp-access/mcp-access.mdx @@ -0,0 +1,135 @@ +--- +title: MCP Servers +description: Protect Model Context Protocol (MCP) servers with Teleport's access controls and auditing capabilities +template: doc-page +tags: + - mcp + - zero-trust +--- + +import DocHero from "@site/src/components/Pages/Landing/DocHero"; +import UseCasesList from "@site/src/components/Pages/Landing/UseCasesList"; +import Resources from "@site/src/components/Pages/Homepage/Resources"; + +import userCheckSvg from "@site/src/components/Icon/svg/user-check.svg"; +import questionSvg from "@site/src/components/Icon/svg/question2.svg"; +import databaseSvg from "@site/src/components/Icon/teleport-svg/database.svg"; +import kubernetesSvg from "@site/src/components/Icon/svg/kubernetes2.svg"; +import githubSvg from "@site/src/components/Icon/svg/github.svg"; +import vaultSvg from "@site/src/components/Icon/svg/hashicorp-vault.svg"; +import notionSvg from "@site/src/components/Icon/svg/notion.svg"; +import grafanaSvg from "@site/src/components/Icon/svg/grafana.svg"; + + +{/* vale messaging.protocol-products = NO */} +Connect an MCP server with the Teleport Application Service, configure and assign a role granting MCP permissions, and 
connect and query the server with the Teleport CLI or your AI platform of choice. + + + +{/* vale messaging.protocol-products = YES */} + + diff --git a/docs/pages/enroll-resources/mcp-access/rbac.mdx b/docs/pages/enroll-resources/mcp-access/rbac.mdx new file mode 100644 index 0000000000000..e5b660a7c1b8c --- /dev/null +++ b/docs/pages/enroll-resources/mcp-access/rbac.mdx @@ -0,0 +1,118 @@ +--- +title: MCP Access Controls +sidebar_label: Access Controls +sidebar_position: 2 +description: Role-based access control (RBAC) for Teleport MCP access. +tags: +- conceptual +- mcp +- zero-trust +--- + +You can use Teleport's role-based access control (RBAC) system to set up +granular permissions for authenticating to MCP servers connected to Teleport. + +## Role configuration + +Teleport's `role` resource provides the following options for controlling MCP +access: +```yaml +kind: role +version: v8 +metadata: + name: mcp-developer +spec: + allow: + # app_labels: a user with this role will be allowed to connect to + # MCP servers with labels matching below. + app_labels: + "env": "dev" + + # app_labels_expression: optional field which has the same purpose of the + # matching app_labels fields, but support predicate expressions instead of + # label matchers. + app_labels_expression: 'labels["env"] == "staging"' + + # mcp: defines MCP servers related permissions. + mcp: + # tools: list of tools allowed for this role. + # + # No tools are allowed if not specified. + # Each entry can be a literal string, a glob pattern, or a regular + # expression (must start with '^' and end with '$'). A wildcard '*' allows + # all tools. + # This value field also supports variable interpolation. + tools: + - search-files + - slack_* + - ^(get|list|read).*$ + - "{{internal.mcp_tools}}" + - "{{external.mcp_tools}}" + + deny: + mcp: + # tools: list of tools denied for this role. + tools: + - slack_post_message +``` + + + Deny rules will match greedily. 
In the example above, `slack_post_message` is + denied even if `role.allow.mcp.tools` matches it. + + +## Template variables + +Similar to other role fields, the `app_labels` and `mcp` fields support +templating variables. + +The `external.xyz` traits are replaced with values from external [single +sign-on](../../zero-trust-access/sso/sso.mdx) providers. For OIDC, +they will be replaced with the value of an "xyz" claim. For SAML, they are +replaced with an "xyz" assertion value. + +For full details on how traits work in Teleport roles, see the [Access Controls +Reference](../../reference/access-controls/roles.mdx). + +For example, here is what a role may look like if you want to assign allowed +tools from the user's Okta `mcp_tools` assertion: + +```yaml +spec: + allow: + mcp: + tools: + - "{{external.mcp_tools}}" +``` + +The `{{internal.mcp_tools}}` variable permits sharing allowed MCP tools with +remote clusters. It will be replaced with the respective properties of a +remote user connecting from a root cluster. + +For example, suppose a user in the root cluster has the following role: + +```yaml +spec: + allow: + mcp: + tools: + - "slack_*" +``` + +The role on the leaf cluster can be set up to use the same tools allowed from +the root cluster: + +```yaml +spec: + allow: + mcp: + tools: + - "{{internal.mcp_tools}}" +``` + +For full details on how variable expansion works in Teleport roles, see the +[Access Controls +Reference](../../reference/access-controls/roles.mdx). diff --git a/docs/pages/enroll-resources/mcp-access/reference.mdx b/docs/pages/enroll-resources/mcp-access/reference.mdx new file mode 100644 index 0000000000000..8f3c28f5ff1f2 --- /dev/null +++ b/docs/pages/enroll-resources/mcp-access/reference.mdx @@ -0,0 +1,203 @@ +--- +title: MCP Access Reference +sidebar_label: Reference +description: Configuration and CLI reference for Teleport MCP access.
+sidebar_position: 8 +tags: +- reference +- zero-trust +- mcp +--- + +This guide describes interfaces and options for interacting with the Teleport +Application Service for MCP access, including the static configuration file for +the `teleport` binary, and `tsh mcp` commands. + +## Configuration + +(!docs/pages/includes/backup-warning.mdx!) + +The following snippet shows the full YAML configuration of an Application Service +appearing in the `teleport.yaml` configuration file: + +```yaml +app_service: + # Enables application proxy service. + enabled: true + # Enables the builtin Teleport demo MCP server that shows current user and + # session information. To access it, this MCP server uses the app label + # "teleport.internal/resource-type" with the value "demo". + mcp_demo_server: true + # This section contains definitions of all applications proxied by this + # service. It can contain multiple items. + apps: + # Name of the application. Used for identification purposes. + - name: "mcp-everything" + # Endpoint URI of the target MCP server. + # Use "mcp+http://" and "mcp+https://" as scheme for streamable HTTP MCP servers. + # Use "mcp+sse+http://" and "mcp+sse+https://" as scheme for SSE MCP servers. + # Leave it empty or use "mcp+stdio://" for stdio MCP servers. + uri: "mcp+http://localhost:12345/mcp" + # Free-form application description. + description: "Example Everything MCP server" + # Static labels to assign to the app. Used in RBAC. + labels: + env: "prod" + + # Contains MCP server-related configurations. + mcp: + # Command to launch stdio-based MCP servers. + command: "docker" + # Args to execute with the command. + args: ["run", "-i", "--rm", "mcp/everything"] + # Name of the host user account under which the command will be + # executed. Required for stdio-based MCP servers. + run_as_host_user: "docker" + + # Disable TLS validation for SSE and streamable HTTP MCP servers. 
+ insecure_skip_verify: true + # Rewrites section for SSE and streamable HTTP MCP servers. + rewrite: + # Specify whether to include roles or traits in the JWT. + # Options: + # - roles-and-traits: include both roles and traits + # - roles: include only roles + # - traits: include only traits + # - none: exclude both roles and traits from the JWT token + # Default: roles-and-traits + jwt_claims: roles-and-traits + # Headers passthrough configuration. + headers: + - "Authorization: Bearer {{internal.jwt}}" + - "X-Custom-Header: example" + - "X-External-Trait: {{external.env}}" +``` + +## Resource + +The MCP server resources are registered as `app` resources in the Teleport +backend. Here is the spec of MCP server resources managed by `tctl` resource +command: + +```yaml +kind: app +version: v3 +metadata: + # MCP server name + name: everything + # MCP server description. + description: The Everything MCP server + # MCP server labels. + labels: + env: local +spec: + # Endpoint URI of the target MCP server. + # Use "mcp+http://" and "mcp+https://" as scheme for streamable HTTP MCP servers. + # Use "mcp+sse+http://" and "mcp+sse+https://" as scheme for SSE MCP servers. + # Leave it empty or use "mcp+stdio://" for stdio MCP servers. + uri: "mcp+http://localhost:12345/mcp" + + mcp: + # Command to launch stdio-based MCP servers. + command: "docker" + # Args to execute with the command. + args: ["run", "-i", "--rm", "mcp/everything"] + # Name of the host user account under which the command will be + # executed. Required for stdio-based MCP servers. + run_as_host_user: "docker" + + # Disable TLS validation for SSE and streamable HTTP MCP servers. + insecure_skip_verify: true + # Rewrites section for SSE and streamable HTTP MCP servers. + rewrite: + # Specify whether to include roles or traits in the JWT. 
+ # Options: + # - roles-and-traits: include both roles and traits + # - roles: include only roles + # - traits: include only traits + # - none: exclude both roles and traits from the JWT token + # Default: roles-and-traits + jwt_claims: roles-and-traits + # Headers passthrough configuration. + headers: + - "Authorization: Bearer {{internal.jwt}}" + - "X-Custom-Header: example" + - "X-External-Trait: {{external.env}}" +``` + +## CLI + +This section shows CLI commands relevant for MCP access. + +### tsh mcp ls + +Lists available MCP servers. + +```code +# List all MCP servers. +$ tsh mcp ls +# Search MCP servers with keywords. +$ tsh mcp ls --search foo,bar +# Filter MCP servers with labels. +$ tsh mcp ls key1=value1,key2=value2 +# Get MCP server names using "jq". +$ tsh mcp ls --format json | jq -r '.[].metadata.name' +``` + +| Flag | Description | +| - | - | +| `--search` | List of comma-separated search keywords or phrases enclosed in quotations (e.g. `--search=foo,bar,"some phrase"`). | +| `--query` | Query by predicate language enclosed in single quotes (e.g. `--query='labels["key1"] == "value1" && labels["key2"] != "value2"'`). | +| `--format` | Format output (`text`, `json`, `yaml`). | + +### tsh mcp config + +Print client configuration details or update the configuration directly. + +```code +# Print sample configuration for an MCP server app +$ tsh mcp config my-mcp-server-app +# Print sample configuration for a streamable HTTP MCP server with custom headers +$ tsh mcp config my-mcp-server-app -H "Header1: value1" -H "Header2: value2" +# Add all MCP servers to Claude Desktop +$ tsh mcp config --all --client-config=claude +# Search MCP servers with labels and add to the specified JSON file +$ tsh mcp config --labels env=dev --client-config=my-config.json +``` + +| Flag | Description | +| - | - | +| `--all` | Select all MCP servers. Mutually exclusive with `--labels` or `--query`.| +| `--labels` | List of comma-separated labels to filter by labels (e.g.
key1=value1,key2=value2).| +| `--query` | Query by predicate language enclosed in single quotes (e.g. `--query='labels["key1"] == "value1" && labels["key2"] != "value2"'`). | +| `--client-config` | If specified, update the specified client config. `claude` for default Claude Desktop config, or specify a JSON file path. Can also be set with environment variable `TELEPORT_MCP_CLIENT_CONFIG`. | +| `--json-format` | Format the JSON file (`pretty`, `compact`, `auto`, `none`). `auto` saves in compact if the file is already compact, otherwise pretty. Can also be set with environment variable `TELEPORT_MCP_CONFIG_JSON_FORMAT`. Default is `auto`. | +| `-H`, `--header` | Extra custom headers used for streamable HTTP MCP servers.| + +### tsh mcp connect + +Used by AI tools such as Claude Desktop to connect to an MCP server via +Teleport. + + +`tsh mcp config` can print a sample configuration or update your AI tools +directly. This eliminates the need to manually construct the `tsh mcp connect` +command. + + +```code +$ tsh mcp connect mcp-everything +``` + +`tsh` debug logs are enabled by default and can be disabled by the environment +variable `TELEPORT_DEBUG=false`. You can also specify the `--no-debug` flag when +generating sample configurations with `tsh mcp config`. + +### tsh proxy mcp + +Starts a local proxy for MCP connections. Only supported for target MCP servers +with streamable HTTP transport. + +```code +$ tsh proxy mcp my-streamable-http-mcp-server +``` diff --git a/docs/pages/enroll-resources/mcp-access/troubleshooting.mdx b/docs/pages/enroll-resources/mcp-access/troubleshooting.mdx new file mode 100644 index 0000000000000..8f2ade4a59035 --- /dev/null +++ b/docs/pages/enroll-resources/mcp-access/troubleshooting.mdx @@ -0,0 +1,99 @@ +--- +title: Troubleshooting MCP Access +sidebar_label: Troubleshooting +sidebar_position: 7 +description: Describes common issues and solutions for access to MCP servers protected by Teleport.
+tags: +- conceptual +- zero-trust +- mcp +--- + +This section describes common issues that you might encounter in managing access +to MCP servers with Teleport and how to work around or resolve them. + +## Disabled MCP server or tools missing from the MCP server + +By default, no MCP tools are allowed by your Teleport roles. You can check if +any tool is allowed by running `tsh mcp ls`: + +```code +$ tsh mcp ls +Name Description Type Allowed Tools Labels +----------------- -------------------------------------------------- ----- ------------- ------ +teleport-mcp-demo A demo MCP server that shows current user and s... stdio (none) [!] + +[!] Warning: you do not have access to any tools on the MCP server. +Please contact your Teleport administrator to ensure your Teleport role has +appropriate 'allow.mcp.tools' set. For details on MCP access RBAC, see: +https://goteleport.com/docs/enroll-resources/mcp-access/rbac/ +``` + +If a user is assigned the `access` preset role, by default the available MCP +tools are controlled by the `{{internal.mcp_tools}}` source in the role +definition. This value can be populated through user traits: + +```yaml +kind: role +metadata: + name: access +spec: + allow: + mcp: + tools: + - "{{internal.mcp_tools}}" +``` + +You can configure this user trait with `tctl`, assigning it to the Teleport user +: + +```code +$ tctl users update --set-mcp-tools "*" +``` + +Alternatively you can assign your Teleport user the preset role `mcp-user` which +allows access to all MCP servers and their tools. You can also define a custom +role that explicitly specifies the allowed MCP tools and assigns them to your +Teleport user. See [RBAC](./rbac.mdx) for more details. + +## Server disconnected when starting the MCP server + +You may encounter a "Server disconnected" error or similar errors when your MCP +client starts the MCP server. 
+ +![Server disconnected error](../../../img/mcp-access/troubleshoot-server-disconnected.png) + +First, make sure your client configuration is correct and your `tsh` session is +active. See [connect MCP +clients](../../connect-your-client/model-context-protocol/mcp-access.mdx) for +more details. + +If the problem persists, follow your MCP client's specific guideline to find the +debug logs of the MCP server execution. + +If you encounter a "Lost server connection" error in the debug log, or the `tsh` +command exits immediately, it usually indicates that the command used to launch +the stdio MCP server on the Teleport Application Service instance has failed to +execute properly. + +To confirm this, check the Application Service logs for any warnings that +indicate a failure to start the MCP server process — commonly an +`exec.ExitError`, but it may also include other related execution or runtime +errors. These typically appear shortly after the client connection is +established: +``` +WARN [APP:SERVICE] Failed to handle client connection. error:[ +ERROR REPORT: +Original Error: *exec.ExitError exit status 1 +``` + +To fix the issue, ensure that the Teleport app definition used to start the MCP +server is configured correctly. You can test it by manually running the +`app.mcp.command` and `app.mcp.args` directly on the host where the Teleport +Application Service is running — without using Teleport. Also, make sure the +host user set for `mcp.run_as_host_user` exists on the host and has the +necessary permissions to execute the configured command. + +## `tsh` path errors in your MCP clients + +(!docs/pages/includes/mcp-access/troubleshoot-tsh-binary-enoent.mdx!) 
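A quick first check, assuming a POSIX shell, is to confirm that a `tsh` binary can be resolved at all and where it lives:

```shell
# Print the absolute path of the resolved tsh binary,
# or a notice when tsh is not on PATH in this environment.
command -v tsh || echo "tsh not found in PATH"
```

GUI MCP clients are often launched with a minimal `PATH`, so even when this succeeds in your terminal it can help to put the absolute path printed here into your MCP client configuration instead of the bare `tsh` name.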
diff --git a/docs/pages/enroll-resources/server-access/getting-started.mdx b/docs/pages/enroll-resources/server-access/getting-started.mdx index 8ac9c99c92f9f..289486a6c819e 100644 --- a/docs/pages/enroll-resources/server-access/getting-started.mdx +++ b/docs/pages/enroll-resources/server-access/getting-started.mdx @@ -1,7 +1,12 @@ --- title: Server Access Getting Started Guide +sidebar_label: Getting Started description: Getting started with Teleport server access. videoBanner: LnaRP0xKWRI +tags: + - get-started + - zero-trust + - infrastructure-identity --- You can protect a server with Teleport by running the Teleport SSH Service on @@ -30,14 +35,12 @@ per the instructions in this guide. Do not run the SSH Service as a Kubernetes pod, as there is no guarantee that the SSH Service pod is running on a server that a user intends to access. -![Teleport Bastion](../../../img/server-access/getting-started-diagram.png) - ## Prerequisites (!docs/pages/includes/edition-prereqs-tabs.mdx!) -- One host running a Linux environment (such as Ubuntu 20.04, CentOS 8.0, or - Debian 10). This will serve as a Teleport Node. +- One host running a Linux environment (such as Amazon Linux, Ubuntu, CentOS Stream, or + Debian). This will serve as a Teleport Node. - (!docs/pages/includes/tctl.mdx!) (!docs/pages/includes/permission-warning.mdx!) @@ -370,9 +373,9 @@ further Getting Started exercises. - While this guide shows you how to create a local user in order to access a server, you can also enable Teleport users to authenticate through a single sign-on provider. Read the - [documentation](../../admin-guides/access-controls/sso/sso.mdx) to learn more. + [documentation](../../zero-trust-access/sso/sso.mdx) to learn more. - Learn more about Teleport `tsh` through the [reference documentation](../../reference/cli/tsh.mdx#tsh-ssh). -- For a complete list of ports used by Teleport, read the [Networking Guide](../../reference/networking.mdx). 
+- For a complete list of ports used by Teleport, read the [Networking Guide](../../reference/deployment/networking.mdx). ## Resources @@ -380,5 +383,5 @@ further Getting Started exercises. - [Announcing Teleport SSH Server](https://goteleport.com/blog/announcing-teleport-ssh-server/) - [How to SSH properly](https://goteleport.com/blog/how-to-ssh-properly/) - Consider whether [OpenSSH or Teleport SSH](https://goteleport.com/blog/openssh-vs-teleport/) is right for you. -- [Labels](../../admin-guides/management/admin/labels.mdx) +- [Labels](../../zero-trust-access/rbac-get-started/labels.mdx) diff --git a/docs/pages/enroll-resources/server-access/guides/ansible.mdx b/docs/pages/enroll-resources/server-access/guides/ansible.mdx deleted file mode 100644 index 7f117b73a9722..0000000000000 --- a/docs/pages/enroll-resources/server-access/guides/ansible.mdx +++ /dev/null @@ -1,159 +0,0 @@ ---- -title: Ansible -description: Using Teleport with Ansible ---- - -Ansible uses the OpenSSH client by default. Teleport supports SSH protocol and -works as SSH jumphost. - -In this guide we will configure OpenSSH client to work with Teleport Proxy -and run a sample ansible playbook. - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) - -- `ssh` openssh tool -- `ansible` >= (=ansible.min_version=) -- Optional tool `jq` to process `JSON` output. -- (!docs/pages/includes/tctl.mdx!) - -## Step 1/3. Login and configure SSH - -Log into Teleport with `tsh`: - -```code -$ tsh login --proxy= -``` - -Generate `openssh` configuration using `tsh config` shortcut: - -```code -$ tsh config > ssh.cfg -``` - - -You can edit matching patterns used in `ssh.cfg` if something -is not working out of the box. - - -## Step 2/3. 
Configure Ansible - -Create a folder `ansible` where we will collect all generated files: - -```code -$ mkdir -p ansible -# Copy the openssh configuration from the previous step to the ansible dir -$ cp ssh.cfg ansible/ -$ cd ansible -``` - -Create a file `ansible.cfg`: - -``` -[defaults] -host_key_checking = True -inventory=./hosts -remote_tmp=/tmp - -[ssh_connection] -scp_if_ssh = True -ssh_args = -F ./ssh.cfg -``` - -You can create an inventory file `hosts` manually or use a script below to generate it from your environment. Set your -cluster name (e.g. `teleport.example.com` or in the form `mytenant.teleport.sh` for Teleport Enterprise Cloud) -and this script will generate the host names to match the `openssh` configuration: - -```code -$ tsh ls --format=json | jq '.[].spec.hostname + "."' > hosts -``` - -## Step 3/3. Run a playbook - -Finally, let's create a simple ansible playbook `playbook.yaml`. - -The playbook below runs `hostname` on all hosts. Make sure to set the `remote_user` parameter -to a valid SSH username that works with the target host and is allowed by Teleport: - -```yaml -- hosts: all - remote_user: ubuntu - tasks: - - name: "hostname" - command: "hostname" -``` - -From the folder `ansible`, run the ansible playbook: - -```code -$ ansible-playbook playbook.yaml - -# PLAY [all] ***************************************************************************************************************************************** -# TASK [Gathering Facts] ***************************************************************************************************************************** -# -# ok: [terminal] -# -# TASK [hostname] ************************************************************************************************************************************ -# changed: [terminal] -# -# PLAY RECAP ***************************************************************************************************************************************** -# terminal : ok=2 changed=1 
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 -``` - -You are all set. You are now using short-lived SSH certificates and Teleport can now record -all ansible commands in the audit log. - -## Troubleshooting - -In cases where Ansible cannot connect, you may see an error like this: - -```txt -example.host | UNREACHABLE! => { - "changed": false, - "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname example.host: Name or service not known", - "unreachable": true -} -``` - -You can examine and tweak patterns matching the inventory hosts in `ssh.cfg`. - -Try the SSH connection using `ssh.cfg` with verbose mode to inspect the error: - -```code -$ ssh -vvv -F ./ssh.cfg root@example.host -``` - -If `ssh` works, try running the playbook with verbose mode on: - -```code -$ ansible-playbook -vvvv playbook.yaml -``` - -If your hostnames contain uppercase characters (like `MYHOSTNAME`), please note that Teleport's internal hostname matching -is case sensitive by default, which can also lead to seeing this error. - -If this is the case, you can work around this by enabling case-insensitive routing at the cluster level. - - - - -Edit your `/etc/teleport.yaml` config file on all servers running the Teleport `auth_service`, then restart Teleport on each. - -```yaml -auth_service: - case_insensitive_routing: true -``` - - - - -Run `tctl edit cluster_networking_config` to add the following specification, then save and exit. 
- -```yaml -spec: - case_insensitive_routing: true -``` - - - diff --git a/docs/pages/enroll-resources/server-access/guides/auditd.mdx b/docs/pages/enroll-resources/server-access/guides/auditd.mdx index e2a375f276f59..97da1e682d6d4 100644 --- a/docs/pages/enroll-resources/server-access/guides/auditd.mdx +++ b/docs/pages/enroll-resources/server-access/guides/auditd.mdx @@ -1,9 +1,15 @@ --- title: Configure SSH with the Linux Auditing System +sidebar_label: Linux Auditing System description: How to configure Teleport SSH with auditd (Linux Auditing System). -h1: Linux Auditing System (auditd) +tags: + - how-to + - zero-trust + - audit --- +{/* lint disable page-structure remark-lint */} + You can configure Teleport's SSH Service to integrate with the Linux Auditing System (auditd). ## Prerequisites diff --git a/docs/pages/enroll-resources/server-access/guides/bpf-session-recording.mdx b/docs/pages/enroll-resources/server-access/guides/bpf-session-recording.mdx index 97b9cb59c8dbb..5b68fc58eda0b 100644 --- a/docs/pages/enroll-resources/server-access/guides/bpf-session-recording.mdx +++ b/docs/pages/enroll-resources/server-access/guides/bpf-session-recording.mdx @@ -1,10 +1,16 @@ --- title: Enhanced Session Recording for SSH with BPF description: How to record your SSH session commands using BPF. -h1: Enhanced Session Recording with BPF videoBanner: 8uO5H-iYw5A +tags: + - session-recording + - how-to + - zero-trust + - infrastructure-identity --- +{/* lint disable page-structure remark-lint */} + This guide explains Enhanced Session Recording for SSH with BPF and how to set it up in your Teleport cluster. @@ -44,7 +50,7 @@ session). Also, a local user with both monitored and unmonitored console sessions or ptrace privileges may not be fully captured in recordings. Commands executed via daemons (systemd, crond, atd, etc.) could be outside of -the recorded session scope. Proper network-based restrictions for ingress +the recorded session scope. 
Proper network-based restrictions for egress
+traffic must also be implemented to prevent possible unauthorized data
transfer. Additionally, certain forensic information such as full binary paths
@@ -402,4 +408,4 @@ ends will also include the `"enhanced_recording": true` field, similar to the fo
- Read more about [session recording](../../../reference/architecture/session-recording.mdx).
- See all configuration options for Enhanced Session Recording in our
-  [Configuration Reference](../../../reference/config.mdx).
+  [Configuration Reference](../../../reference/deployment/config.mdx).
diff --git a/docs/pages/enroll-resources/server-access/guides/encrypted-session-recordings/encrypted-session-recordings.mdx b/docs/pages/enroll-resources/server-access/guides/encrypted-session-recordings/encrypted-session-recordings.mdx
new file mode 100644
index 0000000000000..ab676aaf05984
--- /dev/null
+++ b/docs/pages/enroll-resources/server-access/guides/encrypted-session-recordings/encrypted-session-recordings.mdx
@@ -0,0 +1,160 @@
+---
+title: Encrypted Session Recordings
+description: How to enable encrypted session recordings using your CA key backend.
+tags:
+  - session-recording
+  - how-to
+  - zero-trust
+  - resiliency
+---
+
+Encrypted session recordings allow Teleport users to enable at-rest encryption
+of their session recording data. This guide explains how to set up encrypted
+session recordings.
+
+## How it works
+
+When encrypted recordings are enabled, Teleport will encrypt session recording
+data before saving it to disk or long-term storage. Encryption keys are
+provisioned using the same key storage backend configured for CAs defined by
+the `ca_key_params` section of the Teleport Auth Service configuration file. In
+`node-sync` and `proxy-sync` session recording modes, session recording events
+are sent directly to the Teleport Auth Service where they are encrypted prior
+to being written to long-term storage.
In `node` and `proxy` session recording +modes, session recording events are encrypted locally at the Teleport-protected +server or Proxy Service instance before being written to disk for future upload +to the Auth Service. + +Decryption of session recordings for replay is always facilitated by the +Teleport Auth Service regardless of the session recording mode. This means +replays can be viewed using the Teleport Web UI, or by using `tsh play` +with a valid session ID. Replaying session recording files directly using +`tsh play` is not possible for encrypted files. + +## Step 1/2. Configure Teleport + +In this section, you will configure Teleport to encrypt session recordings +using the configured key storage backend. + +### Enable recording encryption + +Encrypted session recordings can be enabled through the Teleport Auth +Service configuration file. + +You can edit the Auth Service configuration file as follows: + +```yaml +# snippet from teleport.yaml +auth_service: + session_recording_config: + encryption: + enabled: yes +``` + +### Optional step: Configure key storage backend + +Session recording encryption keys are provisioned using the same key storage +backend configured for CAs. Software keys stored in Teleport's backend storage +are the default, but other backends can also be configured: + +- [AWS KMS](../../../../zero-trust-access/deploy-a-cluster/private-keys/aws-kms.mdx) +- [Google Cloud KMS](../../../../zero-trust-access/deploy-a-cluster/private-keys/gcp-kms.mdx) +- [HSM](../../../../zero-trust-access/deploy-a-cluster/hsm.mdx) + +If your cluster is already configured to use any of these backends, you should +ensure Teleport has permission to use their decryption functions. For example, +the AWS KMS backend will require adding the `kms:Decrypt` action to Teleport's +role policy. + +It is required that all Teleport Auth Service instances have access to the +exact same encryption keys in order to ensure all recorded sessions are always +replayable. 
This means shared access to keys in AWS KMS and GCP KMS backends,
+or a shared, networked HSM for PKCS#11 backends.
+
+
+Before enabling encrypted recordings with an HSM backend, make sure OAEP
+decryption is permitted for generated keys. For YubiHSM2, the authkey used
+should have `decrypt-oaep` included in both its capabilities and delegated
+capabilities. For other PKCS#11 HSMs, the `CKM_DECRYPT` mechanism must be
+enabled.
+
+
+## Step 2/2. Confirm new recordings are encrypted
+
+It is possible to confirm that session recordings are now encrypted by
+capturing a new recording and downloading the resulting `.tar` file from your
+[audit sessions backend](../../../../reference/architecture/session-recording.mdx#storage).
+Replaying the session file directly with `tsh play` should fail, and replaying
+using the session ID should succeed.
+
+## Manual Encryption Key Management
+
+Encrypted session recordings require at least one active key, which is used
+during encryption at recording time and decryption at replay time. Once an
+active key has been rotated, meaning it is no longer used during encryption, it
+becomes a rotated key. Rotated keys can only be used for decryption
+during replay in order to maintain access to historical recordings.
+
+By default, Teleport will automatically provision and manage these keys.
+However, there may be cases where automatic management is not possible or
+desired, such as environments where Teleport lacks sufficient privileges to
+manage keys directly. For this reason, Teleport can be configured to use
+externally managed keys.
+
+Manual encryption key management enables explicit configuration of active
+and rotated keys by defining what type of key storage backend they are found in
+and the labels identifying the specific keys to be used.
+
+
+Manual encryption key management places all responsibility and complexity of
+managing keys on the administrator.
This comes with much more risk of a
+misconfiguration impacting Teleport's ability to record or replay sessions.
+It's for this reason that Teleport Cloud does not allow use of manual
+encryption key management. Consider using the default automatic key management
+unless your usage explicitly requires manual key management.
+
+
+### Configuration
+
+This example enables `manual_key_management` with one active key and one
+rotated key configured. Both expect a PKCS#11-compliant HSM as the key storage
+backend, and each has its own label used to identify the keys within the HSM.
+
+```yaml
+encryption:
+  enabled: yes
+  manual_key_management:
+    enabled: yes
+    active_keys:
+      - type: pkcs11
+        label: 'session_recordings_002'
+    rotated_keys:
+      - type: pkcs11
+        label: 'session_recordings_001'
+```
+
+Teleport will search the HSM for the keys described in both `active_keys` and
+`rotated_keys`. The active keys will be used during encryption and replay of
+any new session recordings. The rotated keys will be used during replay of
+historical session recordings, but will not be used during any encryption
+operations. For PKCS#11 backends, keys must also have an ID assigned in
+addition to the label used during key lookup. The ID is not
+included in the `manual_key_management` configuration, but the private and
+public keys for a single pair must be assigned the same ID.
+
+As mentioned previously, it is important that all Teleport Auth Service
+instances have access to the same keys. Degraded availability of replays should
+be expected if unshared keys are used.
+
+
+Session recording encryption relies on a form of envelope encryption using OAEP
+with 4096-bit RSA key pairs. This means RSA 4096 is the only key type eligible
+for use with `manual_key_management`. It is also important that these keys are
+permitted to be used for decryption. What this means varies by key storage
+backend, but it generally must be defined when the key is first provisioned.
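As background for the RSA 4096 requirement above, the envelope pattern can be illustrated with plain `openssl` commands. This is only a sketch of the general technique: Teleport's actual on-disk format and symmetric cipher are internal details, and the use of AES-256-CTR below is an assumption for illustration.

```shell
cd "$(mktemp -d)"

# A 4096-bit RSA key pair stands in for the key encryption key (KEK)
# that the key storage backend would normally hold.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out kek.pem 2>/dev/null
openssl pkey -in kek.pem -pubout -out kek.pub

# Envelope encryption: a fresh symmetric data key encrypts the recording...
openssl rand -hex 32 > data.key
echo "session recording bytes" > recording.bin
openssl enc -aes-256-ctr -pbkdf2 -pass file:data.key -in recording.bin -out recording.enc

# ...and the RSA public key wraps the data key using OAEP padding.
openssl pkeyutl -encrypt -pubin -inkey kek.pub \
  -pkeyopt rsa_padding_mode:oaep -in data.key -out data.key.wrapped

# Replay: unwrap the data key with the private key, then decrypt.
openssl pkeyutl -decrypt -inkey kek.pem \
  -pkeyopt rsa_padding_mode:oaep -in data.key.wrapped -out data.key.out
openssl enc -d -aes-256-ctr -pbkdf2 -pass file:data.key.out -in recording.enc
# prints: session recording bytes
```

Only the private half of the KEK can unwrap the data key, which is why every Auth Service instance needs access to the same keys to guarantee replay.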
+
+## Related reading
+
+- [Rotating Session Recording Encryption Keys](./rotating-keys.mdx)
+- [Rotating Manual Session Recording Encryption Keys](./rotating-manual-keys.mdx)
diff --git a/docs/pages/enroll-resources/server-access/guides/encrypted-session-recordings/rotating-keys.mdx b/docs/pages/enroll-resources/server-access/guides/encrypted-session-recordings/rotating-keys.mdx
new file mode 100644
index 0000000000000..0c616a8eeb684
--- /dev/null
+++ b/docs/pages/enroll-resources/server-access/guides/encrypted-session-recordings/rotating-keys.mdx
@@ -0,0 +1,87 @@
+---
+title: Rotating Session Recording Encryption Keys
+sidebar_label: Key Rotation (Automatic)
+description: How to rotate automatically provisioned session recording encryption keys.
+tags:
+  - session-recording
+  - how-to
+  - zero-trust
+  - resiliency
+---
+
+This guide explains how to rotate encryption keys for session recordings using
+the automatic method. In this approach, the Teleport Auth Service manages the
+selection and labeling of encryption keys for you.
+
+For instructions on manually rotating encryption keys for session recordings,
+see [Rotating Manual Session Recording Encryption
+Keys](rotating-manual-keys.mdx).
+
+## How it works
+
+Session recording encryption keys are rotated using `tctl`. Rotation is a
+two-phase process that requires initiating a rotation and then completing it.
+It is also possible to roll back an in-progress rotation in the event
+that the previous key state needs to be restored.
+
+## Prerequisites
+
+This guide assumes that you have followed the setup instructions in [Encrypted
+Session Recordings](encrypted-session-recordings.mdx).
+
+## Step 1/3. Initiate a rotation
+
+First, a rotation must be initiated. This will provision a new key and add
+it to the list of active keys, which means newly captured recordings will be
+replayable using both the original key and the new key.
This is done with:
+
+```code
+$ tctl recordings encryption rotate
+```
+
+If a rotation is already in progress, then the `rotate` subcommand will result
+in an error.
+
+## Step 2/3. Confirm rotation is in progress
+
+The status of the active encryption keys shows whether a rotation is in
+progress. It also shows which key is rotating out.
+
+```code
+$ tctl recordings encryption status
+Rotation in progress
+Key Pair Fingerprint                                             State
+---------------------------------------------------------------- --------
+48303729235b962c69940fe4cc9d47fcd6f5dd3bcbd149a6d4944098ce01b84c rotating
+8a8581543c70cd2ed5e993080670aefec2c620ef792730f020cb463350adeccb active
+```
+
+It is also possible for a key to be in an `inaccessible` state, which means at
+least one Teleport Auth Service instance does not have access to the key. In
+this case, the rotation should be rolled back and the Teleport Auth Service
+instances should be diagnosed for connection or permission issues while
+accessing the key backend. For example, this could be the result of an
+improperly configured security group or IAM role when using the AWS KMS key
+backend.
+
+## Step 3/3. Complete rotation
+
+Completing a key rotation will retain the rotated key for future replay of
+historical session recordings. All recordings captured after completion will
+only be replayable using the new key. You can complete the rotation with:
+
+```code
+$ tctl recordings encryption complete-rotation
+```
+
+If any key is in an `inaccessible` state, then attempting to complete the
+rotation will result in an error.
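Before running `complete-rotation` from automation, it can be useful to gate on key health first. A minimal sketch, assuming the `tctl recordings encryption status` output format shown above (a captured sample stands in for a live cluster here):

```shell
# Sample status output; in a real script, capture it with:
#   status="$(tctl recordings encryption status)"
status="$(cat <<'EOF'
Rotation in progress
Key Pair Fingerprint                                             State
---------------------------------------------------------------- --------
48303729235b962c69940fe4cc9d47fcd6f5dd3bcbd149a6d4944098ce01b84c rotating
8a8581543c70cd2ed5e993080670aefec2c620ef792730f020cb463350adeccb active
EOF
)"

# Refuse to complete the rotation while any key is unreachable.
if echo "$status" | grep -q 'inaccessible'; then
  echo "inaccessible key found: roll back and investigate before completing"
else
  echo "all keys reachable: safe to run: tctl recordings encryption complete-rotation"
fi
```

This mirrors the behavior described above: `complete-rotation` itself also errors out if any key is `inaccessible`, so the check simply fails earlier and more visibly.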
+
+### Rollback
+
+If an in-progress rotation needs to be rolled back for any reason, it can be
+reverted to the previous state using:
+
+```code
+$ tctl recordings encryption rollback-rotation
+```
diff --git a/docs/pages/enroll-resources/server-access/guides/encrypted-session-recordings/rotating-manual-keys.mdx b/docs/pages/enroll-resources/server-access/guides/encrypted-session-recordings/rotating-manual-keys.mdx
new file mode 100644
index 0000000000000..f4a7d4a517ca2
--- /dev/null
+++ b/docs/pages/enroll-resources/server-access/guides/encrypted-session-recordings/rotating-manual-keys.mdx
@@ -0,0 +1,95 @@
+---
+title: Rotating Manual Session Recording Encryption Keys
+sidebar_label: Key Rotation (Manual)
+description: How to rotate private keys for encrypted session recordings while designating certain keys only for decryption.
+tags:
+  - session-recording
+  - how-to
+  - zero-trust
+  - resiliency
+---
+
+This guide explains how to rotate encryption keys for session recordings.
+
+For instructions on using the automatic approach, see [Rotating Session
+Recording Encryption Keys](rotating-keys.mdx).
+
+## How it works
+
+In the manual approach to session recording key management, a user provides the
+Auth Service with the types and labels of keys used to encrypt Teleport session
+recordings. In this way, the user has control over the keys the Auth Service
+uses to encrypt session recordings, as well as rotated keys that the Auth
+Service no longer uses for encryption, but that are available for decrypting
+stored session recordings.
+
+## Prerequisites
+
+This guide assumes that you have followed the setup instructions in [Encrypted
+Session Recordings](encrypted-session-recordings.mdx).
+
+Although manual key management leaves key rotation entirely up to the
+administrator, the `manual_key_management` configuration can be leveraged to help facilitate rotations.
+
+As an example, we will assume an existing Teleport Auth Service configured to
+use a PKCS#11-compatible HSM with an active key identified by the label
+`session_recordings_001`:
+
+
+All configuration examples are reduced to the `encryption` configuration block
+for brevity since the available options are identical between the Teleport Auth
+Service configuration file and the dynamic `session_recording_config` resource.
+
+
+```yaml
+encryption:
+  enabled: yes
+  manual_key_management:
+    enabled: yes
+    active_keys:
+      - type: pkcs11
+        label: 'session_recordings_001'
+```
+
+## Step 1/2. Add the new key
+
+A new key can be added to the list of active keys:
+
+```yaml
+encryption:
+  enabled: yes
+  manual_key_management:
+    enabled: yes
+    active_keys:
+      - type: pkcs11
+        label: 'session_recordings_002'
+      - type: pkcs11
+        label: 'session_recordings_001'
+```
+
+This configuration expects a second key to be accessible using the
+`session_recordings_002` label. Teleport maintains a cache of references to
+accessible keys that is periodically updated, but it is best practice to ensure
+the key exists prior to updating the `manual_key_management` configuration.
+
+## Step 2/2. Rotate the old key
+
+The old key can be moved out of the active set of encryption keys and into the
+set of rotated keys:
+
+```yaml
+encryption:
+  enabled: yes
+  manual_key_management:
+    enabled: yes
+    active_keys:
+      - type: pkcs11
+        label: 'session_recordings_002'
+    rotated_keys:
+      - type: pkcs11
+        label: 'session_recordings_001'
+```
+
+All new recordings will now be encrypted using the key labeled
+`session_recordings_002`, and historical recordings encrypted using
+`session_recordings_001` will remain replayable.
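When editing these blocks by hand, it is easy to leave the same label in both lists, so a quick lint before applying the change can help. A minimal sketch (a hypothetical inline copy of the step 2 key lists stands in for reading your real Auth Service configuration, and a real check would use a YAML parser rather than `grep`):

```shell
# Hypothetical copy of the active_keys/rotated_keys block from step 2.
key_labels() {
  cat <<'EOF'
    active_keys:
      - type: pkcs11
        label: 'session_recordings_002'
    rotated_keys:
      - type: pkcs11
        label: 'session_recordings_001'
EOF
}

# A label listed as both active and rotated is ambiguous; flag duplicates.
dupes="$(key_labels | grep 'label:' | sort | uniq -d)"
if [ -z "$dupes" ]; then
  echo "ok: each key label is either active or rotated, not both"
else
  echo "conflict: duplicate label entries found:"
  echo "$dupes"
fi
# prints: ok: each key label is either active or rotated, not both
```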
diff --git a/docs/pages/enroll-resources/server-access/guides/guides.mdx b/docs/pages/enroll-resources/server-access/guides/guides.mdx index c7ba23b463bf7..0578cb863dafd 100644 --- a/docs/pages/enroll-resources/server-access/guides/guides.mdx +++ b/docs/pages/enroll-resources/server-access/guides/guides.mdx @@ -1,18 +1,11 @@ --- -title: Server Access Guides +title: Server Access Configuration Guides +sidebar_label: Configuration Guides description: Teleport server access guides. -layout: tocless-doc +template: "no-toc" +tags: + - zero-trust + - infrastructure-identity --- -- [Using Teleport with PAM](ssh-pam.mdx): How to configure Teleport SSH with PAM (Pluggable Authentication Modules). -- [Agentless OpenSSH Integration](../openssh/openssh-agentless.mdx): How to use Teleport in agentless mode on systems with OpenSSH and `sshd`. -- [Agentless OpenSSH Integration (Manual Installation)](../openssh/openssh-manual-install.mdx): How to use Teleport in agentless mode - on systems with OpenSSH and `sshd` that can't run `teleport`. -- [Recording Proxy Mode](recording-proxy-mode.mdx): How to use Teleport Recording Proxy Mode to capture activity on OpenSSH servers. -- [BPF Session Recording](bpf-session-recording.mdx): How to use BPF to record SSH session commands, modified files and network connections. -- [Visual Studio Code](vscode.mdx): How to remotely develop with Visual Studio Code and Teleport. -- [JetBrains SFTP](jetbrains-sftp.mdx): How to use a JetBrains IDE to access SFTP with Teleport. -- [Host User Creation](host-user-creation.mdx): How to configure Teleport to automatically create transient host users. -- [Linux Auditing System](auditd.mdx): How to integrate Teleport with the Linux Auditing System (auditd). -- [Using Teleport with Ansible](ansible.mdx): How to use Ansible with - Teleport-issued SSH credentials. 
+ diff --git a/docs/pages/enroll-resources/server-access/guides/host-user-creation.mdx b/docs/pages/enroll-resources/server-access/guides/host-user-creation.mdx index e3a8e67f03f47..5fbc54666931e 100644 --- a/docs/pages/enroll-resources/server-access/guides/host-user-creation.mdx +++ b/docs/pages/enroll-resources/server-access/guides/host-user-creation.mdx @@ -1,6 +1,11 @@ --- title: Configure Teleport to Create Host Users +sidebar_label: Host User Creation description: How to configure Teleport to automatically create transient host users. +tags: + - how-to + - zero-trust + - infrastructure-identity --- Teleport's SSH Service can be configured to automatically create local Unix users @@ -478,6 +483,6 @@ on the hosts. ## Next steps - Configure automatic user provisioning for [database access](../../database-access/auto-user-provisioning/auto-user-provisioning.mdx). -- Configure automatic user provisioning for [desktop access](../../../reference/agent-services/desktop-access-reference/user-creation.mdx). -- Configure automatic user provisioning with [Terraform](../../../reference/terraform-provider/resources/role.mdx). +- Configure automatic user provisioning for [desktop access](../../desktop-access/reference/user-creation.mdx). +- Configure automatic user provisioning with [Terraform](../../../reference/infrastructure-as-code/terraform-provider/resources/role.mdx). Note when using the terraform provider that some values may be different than described in this guide. 
diff --git a/docs/pages/enroll-resources/server-access/guides/recording-proxy-mode.mdx b/docs/pages/enroll-resources/server-access/guides/recording-proxy-mode.mdx deleted file mode 100644 index d949139b05d97..0000000000000 --- a/docs/pages/enroll-resources/server-access/guides/recording-proxy-mode.mdx +++ /dev/null @@ -1,213 +0,0 @@ ---- -title: Teleport Recording Proxy Mode -description: Use Recording Proxy Mode to capture OpenSSH server activity ---- - -Teleport Recording Proxy Mode was added to allow Teleport users -to enable session recording for servers running `sshd`, which is helpful -when gradually transitioning large server fleets to Teleport. - -![Teleport OpenSSH Recording Proxy](../../../../img/server-access/openssh-proxy.png) - - - -Teleport Cloud only supports session recording at the Node level. If you are -interested in setting up session recording, read our -[getting started guide](../getting-started.mdx) so you can start -replacing your OpenSSH servers with Teleport Agents. - - - -We consider Recording Proxy Mode to be less secure than recording at the Node -level for two reasons: - -- It grants additional privileges to the Teleport Proxy Service. In the default Node Recording mode, the Proxy Service stores no secrets and cannot "see" the decrypted data. This makes a Proxy Server less critical to the security of the overall cluster. But if an attacker gains physical access to a Proxy Server running in Proxy Recording mode, they will be able to see the decrypted traffic and client keys stored in the Proxy Server's process memory. -- Recording Proxy Mode requires the use of SSH agent forwarding. Agent forwarding is required because without it, a Proxy Server will not be able to establish a second connection to the destination node. - -The Teleport Proxy Service should be available to clients and set up with TLS. - -## Prerequisites - -(!docs/pages/includes/self-hosted-prereqs-tabs.mdx!) - -- A host where you will run an OpenSSH server. 
-- (!docs/pages/includes/tctl.mdx!) - -## Step 1/3. Configure Teleport - -(!docs/pages/includes/permission-warning.mdx!) - -(!docs/pages/includes/backup-warning.mdx!) - -### Enable Proxy Recording Mode - -To enable session recording for `sshd` nodes, the cluster must be switched to -Recording Proxy Mode. In this mode, the recording will be done on the Proxy level. - -Edit the Auth Service configuration file as follows: - -```yaml -# snippet from /etc/teleport.yaml -auth_service: - # Session Recording must be set to Proxy to work with OpenSSH - session_recording: "proxy" # can also be "off" and "node" (default) -``` - -### Optional insecure step: Disable strict host checking - -When in recording mode, Teleport will check that the host certificate of any -Node a user connects to is signed by a Teleport CA. By default, this is a strict -check. If the Node presents just a key or a certificate signed by a different -CA, Teleport will reject this connection with the error message: - -```text -ssh: handshake failed: remote host presented a public key, expected a host -certificate -``` - -You can disable strict host checks as shown below. However, this opens the -possibility for Person-in-the-Middle attacks and is not recommended. - -```yaml -# snippet from /etc/teleport.yaml -auth_service: - proxy_checks_host_keys: false -``` - -## Step 2/3. Configure `sshd` - -`sshd` must be told to allow users to log in with certificates generated -by the Teleport User CA. Start by exporting the Teleport CA public key. - -On your Teleport Node, export the Teleport Certificate Authority certificate -into a file and update your SSH configuration to trust Teleport's CA. 
Assign to the address of your Teleport Proxy Service: - -```code -$ curl 'https:///webapi/auth/export?type=user' | sed s/cert-authority\ // > teleport_user_ca.pub -$ sudo mv ./teleport_user_ca.pub /etc/ssh/teleport_user_ca.pub -$ echo "TrustedUserCAKeys /etc/ssh/teleport_user_ca.pub" | sudo tee -a /etc/ssh/sshd_config -``` - -Restart `sshd`. - -Now, `sshd` will trust users who present a Teleport-issued certificate. -The next step is to configure host authentication. - -The recommended solution is to ask Teleport to issue valid host certificates for -all OpenSSH nodes. To generate a host certificate, run this command: - -```code -# Creating host certs, with an array of every host to be accessed. -# Wildcard certs aren't supported by OpenSSH. The domain must be fully -# qualified. -# Management of the host certificates can become complex. This is another -# reason we recommend using Teleport SSH on nodes. -$ tctl auth sign \ - --host=api.example.com,ssh.example.com,64.225.88.175,64.225.88.178 \ - --format=openssh \ - --out=api.example.com - -The credentials have been written to api.example.com, api.example.com-cert.pub - -# You can use ssh-keygen to verify the contents. -$ ssh-keygen -L -f api.example.com-cert.pub -#api.example.com-cert.pub: -# Type: ssh-rsa-cert-v01@openssh.com host certificate -# Public key: RSA-CERT SHA256:ireEc5HWFjhYPUhmztaFud7EgsopO8l+GpxNMd3wMSk -# Signing CA: RSA SHA256:/6HSHsoU5u+r85M26Ut+M9gl+HventwSwrbTvP/cmvo -# Key ID: "" -# Serial: 0 -# Valid: after 2020-07-29T20:26:24 -# Principals: -# api.example.com -# ssh.example.com -# 64.225.88.175 -# 64.225.88.178 -# Critical Options: (none) -# Extensions: -# x-teleport-authority UNKNOWN OPTION (len 47) -# x-teleport-role UNKNOWN OPTION (len 8) -``` - -Then add the following lines to `/etc/ssh/sshd_config` on all OpenSSH nodes, and -restart `sshd`. - -```yaml -HostKey /etc/ssh/api.example.com -HostCertificate /etc/ssh/api.example.com-cert.pub -``` - -## Step 3/3. 
Use Proxy Recording Mode - -Now you can use the `tsh ssh` command to log in to any `sshd` node in the -cluster, and the session will be recorded. - -```code -# tsh ssh to use default ssh port:22 -$ tsh ssh --port=22 user@host.example.com -# Example for an Amazon EC2 host -# tsh ssh --port=22 ec2-user@ec2-54-EXAMPLE.us-west-2.compute.amazonaws.com -``` - -If you want to use the OpenSSH `ssh` client for logging into `sshd` servers behind a proxy -in "recording mode", you have to tell the `ssh` client to use a jump host and -enable SSH agent forwarding; otherwise, a recording proxy will not be able to -terminate the SSH connection to record it: - -```code -# Note that agent forwarding is enabled twice: once from a client to a proxy -# (mandatory if using a recording proxy), and then optionally from a proxy -# to the end server if you want your agent running on the end server -$ ssh -o "ForwardAgent yes" \ - -o "ProxyCommand ssh -o 'ForwardAgent yes' -p 3023 %r@p.example.com -s proxy:%h:%p" \ - user@host.example.com -``` - - - To avoid typing all this and use the usual `ssh user@host.example.com`, users can update their `~/.ssh/config` file. - - -Verify that a Teleport certificate is loaded into the agent after -logging in: - -```code -# Login as Joe -$ tsh login --proxy=proxy.example.com --user=joe -# see if the certificate is present (look for "teleport:joe") at the end of the cert -$ ssh-add -L -``` - - - It is well known that the Gnome Keyring SSH agent, used by many popular Linux desktops like Ubuntu, and `gpg-agent` from GnuPG do not support SSH - certificates. We recommend using the `ssh-agent` from OpenSSH. - - Alternatively, you can disable the SSH agent integration entirely using the - `--no-use-local-ssh-agent` flag or `TELEPORT_USE_LOCAL_SSH_AGENT=false` - environment variable with `tsh`. - - -## OpenSSH rate limiting - -When using a Teleport proxy in "recording mode", be aware of OpenSSH's built-in -rate-limiting.
On large numbers of Proxy Service connections, you may encounter errors -like: - -```txt -channel 0: open failed: connect failed: ssh: handshake failed: EOF -``` - -See the `MaxStartups` setting in `man sshd_config`. This setting means that by -default, OpenSSH only allows 10 unauthenticated connections at a time and starts -dropping 30% of new connections when the number of unauthenticated connections -goes over 10, with the drop rate increasing linearly. -When it hits 100 unauthenticated connections, all new connections are -dropped. - -To increase the concurrency level, increase the value to something like -`MaxStartups 50:30:100`. This allows up to 50 unauthenticated connections before -dropping begins, with a hard limit of 100. - diff --git a/docs/pages/enroll-resources/server-access/guides/ssh-pam.mdx b/docs/pages/enroll-resources/server-access/guides/ssh-pam.mdx index 587a1debebf22..5ed5ae28475f5 100644 --- a/docs/pages/enroll-resources/server-access/guides/ssh-pam.mdx +++ b/docs/pages/enroll-resources/server-access/guides/ssh-pam.mdx @@ -1,7 +1,10 @@ --- title: Configure SSH with Pluggable Authentication Modules +sidebar_label: Pluggable Authentication Modules description: How to configure Teleport SSH with PAM (Pluggable Authentication Modules). -h1: Pluggable Authentication Modules (PAM) +tags: + - conceptual + - zero-trust --- Teleport's SSH Service can be configured to integrate with diff --git a/docs/pages/enroll-resources/server-access/introduction.mdx b/docs/pages/enroll-resources/server-access/introduction.mdx deleted file mode 100644 index 821083e112682..0000000000000 --- a/docs/pages/enroll-resources/server-access/introduction.mdx +++ /dev/null @@ -1,42 +0,0 @@ ---- -title: Introduction to Enrolling Servers -description: Teleport server access features and introduction. -videoBanner: EsEvO5ndNDI ---- - -Teleport consolidates SSH access across all environments, decreases -configuration complexity, supports industry best practices and compliance while -giving complete visibility over all sessions and events.
- -Teleport server access is designed for the following kinds of scenarios: - -- When large numbers of clusters must be managed using the command line (`tsh`) or programmatically (through the Teleport API) and you want to simplify your stack, security, and configuration complexity. -- When security team members must track and audit every user session. -- When Teleport users require a complete, dedicated, and secure SSH option (Teleport Node running in SSH mode) rather than just a certificate authority (Teleport Auth) with a proxy (Teleport Proxy). -- When resource and network security must be maximized: SSH certificates over secret keys, multi-factor authentication (MFA), Single Sign-On (SSO), and short-lived certificates. - -![Server access architecture](../../../img/server-access/architecture.png) - -Teleport protects servers through the Teleport SSH Service, which is a Teleport -agent service. For more information on agent services, read [Teleport Agent -Architecture](../../reference/architecture/agents.mdx). You can also learn how to deploy a -[pool of Teleport Agents](../agents/agents.mdx) to run multiple agent -services. - -## Getting started - -- [Get started](getting-started.mdx): Get started using Teleport server access - in 10 minutes, covering the most common SSH use cases. - -## Enrolling OpenSSH servers - -You can protect OpenSSH servers with Teleport using an -[agentless architecture](openssh/openssh-agentless.mdx), which makes it easier to protect -legacy infrastructure. -Read the [Teleport OpenSSH guides](openssh/openssh.mdx) to learn more. - -## Guides - -- [Using Teleport with PAM](./guides/ssh-pam.mdx): How to configure Teleport SSH with PAM (Pluggable Authentication Modules). - [Recording Proxy Mode](./guides/recording-proxy-mode.mdx): How to use Teleport Recording Proxy Mode to capture activity on OpenSSH servers.
-- [BPF Session Recording](./guides/bpf-session-recording.mdx): How to use BPF to record SSH session commands, modified files and network connections. -- [Visual Studio Code](./guides/vscode.mdx): How to remotely develop with Visual Studio Code and Teleport. diff --git a/docs/pages/enroll-resources/server-access/openssh/openssh-agentless.mdx b/docs/pages/enroll-resources/server-access/openssh/openssh-agentless.mdx index b04e0dfe1b331..a291c88b07f78 100644 --- a/docs/pages/enroll-resources/server-access/openssh/openssh-agentless.mdx +++ b/docs/pages/enroll-resources/server-access/openssh/openssh-agentless.mdx @@ -3,6 +3,10 @@ title: Using Teleport with OpenSSH in agentless mode description: This guide shows you how to set up Teleport in agentless mode to enable secure access to OpenSSH servers so you can protect systems that do not run a Teleport binary. keywords: [openssh, teleport, agentless] videoBanner: x0eYFUEIOrM +tags: + - how-to + - zero-trust + - infrastructure-identity --- In this guide, we will show you how to configure Teleport in agentless mode and @@ -14,22 +18,24 @@ Using Teleport and OpenSSH has the advantage of getting you up and running, but in the long run, we would recommend replacing `sshd` with `teleport`. 
`teleport` SSH servers have support for multiple features that are incompatible with OpenSSH: -- RBAC and resource filtering based on [dynamically updated labels](../../../admin-guides/management/admin/labels.mdx) -- [Session recording without SSH connection termination](../guides/recording-proxy-mode.mdx) -- [Session sharing](../../../connect-your-client/tsh.mdx) +- RBAC and resource filtering based on [dynamically updated labels](../../../zero-trust-access/rbac-get-started/labels.mdx) +- [Session recording without SSH connection termination](../../../reference/architecture/session-recording.mdx#record-at-the-proxy-service) +- [Session sharing](../../../zero-trust-access/authentication/joining-sessions.mdx) - [Advanced session recording](../guides/bpf-session-recording.mdx) ## How it works -Teleport supports OpenSSH by proxying SSH connections through the Proxy Service. When a Teleport user requests to connect to an OpenSSH node, the Proxy Service checks the user's Teleport roles. +At a high level, Teleport supports OpenSSH servers by proxying SSH connections through the Proxy Service. +The OpenSSH server will be set up to trust connections from the Proxy Service rather than trusting direct +connections from users. This ensures that all connections to OpenSSH servers go through the +Proxy Service, where session IO and audit events can be recorded and RBAC can be enforced. -If the RBAC checks succeed, the Proxy Service authenticates to the OpenSSH node with a dynamically generated certificate signed by a Teleport CA. This allows the -Proxy Service to record and audit connections to OpenSSH nodes. - -The Proxy Service prevents Teleport users from bypassing auditing by requiring -a certificate signed by a Teleport CA that only the Auth Service possesses. - -In this setup, the Teleport SSH Service performs RBAC checks as well as audits and records sessions on its host, which eliminates the need for connection termination when recording SSH sessions. 
+For deeper details on how this works, see the [Registered OpenSSH Nodes RFD](https://github.com/gravitational/teleport/blob/master/rfd/0098-registered-openssh-nodes.md). +Below are some key details from the RFD: +- The Proxy Service is uniquely able to request certificates signed by the Auth Service with the Teleport OpenSSH CA on behalf of users, and the OpenSSH server is set up to only trust certificates signed by this CA. Therefore, in order to connect to an OpenSSH server, a user must connect through the Proxy Service to request and use an OpenSSH certificate. +- Before connecting the user to the OpenSSH server, the Proxy Service performs RBAC checks to see if the user should be allowed to access it. +- In order to record the session and its audit events, the Proxy Service terminates (decrypts) the client SSH connection, and establishes its own connection to the OpenSSH server. It then acts as a pipe between the two connections, recording events and IO as it proceeds. + - See [Recording Proxy Mode](../../../reference/architecture/session-recording.mdx#record-at-the-proxy-service) for more details. + +### OpenSSH rate limiting + +Be aware of OpenSSH's built-in rate-limiting. On large numbers of Proxy Service connections, you may encounter errors like: + +```txt +channel 0: open failed: connect failed: ssh: handshake failed: EOF +``` + +See the `MaxStartups` setting in `man sshd_config`. This setting means that by +default, OpenSSH only allows 10 unauthenticated connections at a time and starts +dropping 30% of new connections when the number of unauthenticated connections +goes over 10, with the drop rate increasing linearly. +When it hits 100 unauthenticated connections, all new connections are +dropped. + +To increase the concurrency level, increase the value to something like +`MaxStartups 50:30:100`. This allows up to 50 unauthenticated connections before +dropping begins, with a hard limit of 100.
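To make the `MaxStartups start:rate:full` behavior concrete, here is a small illustrative sketch (not part of OpenSSH; the linear-increase rule it encodes is the one documented in `sshd_config(5)`) that computes the percentage of new unauthenticated connections `sshd` drops for a given backlog size:

```code
# Percentage of new unauthenticated connections dropped under
# "MaxStartups start:rate:full" (here 50:30:100): 0% below "start",
# "rate"% at "start", rising linearly to 100% at "full".
drop_probability() {
  local n=$1 start=50 rate=30 full=100
  if [ "$n" -lt "$start" ]; then echo 0
  elif [ "$n" -ge "$full" ]; then echo 100
  else echo $(( rate + (100 - rate) * (n - start) / (full - start) ))
  fi
}

drop_probability 49   # prints 0: below "start", never dropped
drop_probability 50   # prints 30: at "start", 30% of new connections dropped
drop_probability 100  # prints 100: at "full", every new connection dropped
```

The exact probabilities above are only an illustration of the documented behavior; the authoritative description remains `man sshd_config`.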
+ diff --git a/docs/pages/enroll-resources/server-access/openssh/openssh-manual-install.mdx b/docs/pages/enroll-resources/server-access/openssh/openssh-manual-install.mdx index a50e9970a9c90..1971fb53445b7 100644 --- a/docs/pages/enroll-resources/server-access/openssh/openssh-manual-install.mdx +++ b/docs/pages/enroll-resources/server-access/openssh/openssh-manual-install.mdx @@ -2,6 +2,10 @@ title: Using Teleport with OpenSSH in agentless mode (manual installation) description: This guide shows you how to set up Teleport to enable secure access to OpenSSH servers so you can protect legacy systems that do not run a Teleport binary. videoBanner: x0eYFUEIOrM +tags: + - how-to + - zero-trust + - infrastructure-identity --- In this guide, we will show you how to configure the OpenSSH server `sshd` to @@ -12,22 +16,24 @@ Using Teleport and OpenSSH has the advantage of getting you up and running, but in the long run, we would recommend replacing `sshd` with `teleport`. `teleport` SSH servers have support for multiple features that are incompatible with OpenSSH: -- RBAC and resource filtering based on [dynamically updated labels](../../../admin-guides/management/admin/labels.mdx) -- [Session recording without SSH connection termination](../guides/recording-proxy-mode.mdx) -- [Session sharing](../../../connect-your-client/tsh.mdx) +- RBAC and resource filtering based on [dynamically updated labels](../../../zero-trust-access/rbac-get-started/labels.mdx) +- [Session recording without SSH connection termination](../../../reference/architecture/session-recording.mdx#record-at-the-proxy-service) +- [Session sharing](../../../connect-your-client/teleport-clients/tsh.mdx) - [Advanced session recording](../guides/bpf-session-recording.mdx) ## How it works -Teleport supports OpenSSH by proxying SSH connections through the Proxy Service. When a Teleport user requests to connect to an OpenSSH node, the Proxy Service checks the user's Teleport roles. 
+At a high level, Teleport supports OpenSSH servers by proxying SSH connections through the Proxy Service. +The OpenSSH server will be set up to trust connections from the Proxy Service rather than trusting direct +connections from users. This ensures that all connections to OpenSSH servers go through the +Proxy Service, where session IO and audit events can be recorded and RBAC can be enforced. -If the RBAC checks succeed, the Proxy Service authenticates to the OpenSSH node with a dynamically generated certificate signed by a Teleport CA. This allows the -Proxy Service to record and audit connections to OpenSSH nodes. - -The Proxy Service prevents Teleport users from bypassing auditing by requiring -a certificate signed by a Teleport CA that only the Auth Service possesses. - -In this setup, the Teleport SSH Service performs RBAC checks as well as audits and records sessions on its host, which eliminates the need for connection termination when recording SSH sessions. +For deeper details on how this works, see the [Registered OpenSSH Nodes RFD](https://github.com/gravitational/teleport/blob/master/rfd/0098-registered-openssh-nodes.md). +Below are some key details from the RFD: +- The Proxy Service is uniquely able to request certificates signed by the Auth Service with the Teleport OpenSSH CA on behalf of users, and the OpenSSH server is set up to only trust certificates signed by this CA. Therefore, in order to connect to an OpenSSH server, a user must connect through the Proxy to request and use an OpenSSH certificate. +- Before connecting the user to the OpenSSH server, the Proxy Service performs RBAC checks to see if the user should be allowed to access it. +- In order to record the session and its audit events, the Proxy Service terminates (decrypts) the client SSH connection, and establishes its own connection to the OpenSSH server. It then acts as a pipe between the two connections, recording events and IO as it proceeds.
+ - See [Recording Proxy Mode](../../../reference/architecture/session-recording.mdx#record-at-the-proxy-service) for more details. This step can be done with Infrastructure-as-Code (IaC) tools (tctl, Terraform, or Kubernetes Operator). This is described [in the OpenSSH server - IaC guide](../../../admin-guides/infrastructure-as-code/managing-resources/agentless-ssh-servers.mdx). + IaC guide](../../../zero-trust-access/infrastructure-as-code/managing-resources/agentless-ssh-servers.mdx). ## Step 2/5. Configure `sshd` to trust the Teleport CA @@ -549,3 +538,21 @@ $ ssh -F ssh_config_teleport ${USER?}@node2.leafcluster.${CLUSTER} qualified domain name, rather than an IP address. + +### OpenSSH rate limiting + +Be aware of OpenSSH's built-in rate-limiting. On large numbers of Proxy Service connections, you may encounter errors like: + +```txt +channel 0: open failed: connect failed: ssh: handshake failed: EOF +``` + +See the `MaxStartups` setting in `man sshd_config`. This setting means that by +default, OpenSSH only allows 10 unauthenticated connections at a time and starts +dropping 30% of new connections when the number of unauthenticated connections +goes over 10, with the drop rate increasing linearly. +When it hits 100 unauthenticated connections, all new connections are +dropped. + +To increase the concurrency level, increase the value to something like +`MaxStartups 50:30:100`. This allows up to 50 unauthenticated connections before +dropping begins, with a hard limit of 100. + diff --git a/docs/pages/enroll-resources/server-access/openssh/openssh.mdx b/docs/pages/enroll-resources/server-access/openssh/openssh.mdx index 2af40693b1fc5..bcbd82d5397d3 100644 --- a/docs/pages/enroll-resources/server-access/openssh/openssh.mdx +++ b/docs/pages/enroll-resources/server-access/openssh/openssh.mdx @@ -1,7 +1,11 @@ --- -title: OpenSSH Guides +title: Enrolling OpenSSH Servers with Teleport +sidebar_label: OpenSSH Servers description: Teleport Agentless OpenSSH integration guides.
-layout: tocless-doc +template: "no-toc" +tags: + - zero-trust + - infrastructure-identity --- - [Agentless OpenSSH Integration](openssh-agentless.mdx): How to use Teleport in agentless mode on systems with OpenSSH and `sshd`. diff --git a/docs/pages/enroll-resources/server-access/rbac.mdx b/docs/pages/enroll-resources/server-access/rbac.mdx index 918acb372cbf0..1b074f2ec2ae8 100644 --- a/docs/pages/enroll-resources/server-access/rbac.mdx +++ b/docs/pages/enroll-resources/server-access/rbac.mdx @@ -1,6 +1,11 @@ --- title: Access Controls for Servers +sidebar_label: Access Controls description: Role-based access control (RBAC) for Teleport server access. +tags: + - conceptual + - zero-trust + - privileged-access --- You can use Teleport's role-based access control (RBAC) system to set up @@ -11,7 +16,7 @@ everything, the QA team and engineers have full access to staging servers, and engineers can gain temporary access to production servers in case of emergency.* -For a more general description of Teleport roles and examples see our [Access Controls guides](../../admin-guides/access-controls/access-controls.mdx), as +For a more general description of Teleport roles and examples see our [Access Controls guides](../../zero-trust-access/authentication/authentication.mdx), as this section focuses on configuring RBAC for servers connected to Teleport. ## Role configuration @@ -71,7 +76,7 @@ spec: Similar to role fields for accessing other resources in Teleport, server-related fields support template variables. -Variables with the format `{{external.xyz}}` are replaced with values from external [SSO](../../admin-guides/access-controls/sso/sso.mdx) +Variables with the format `{{external.xyz}}` are replaced with values from external [SSO](../../zero-trust-access/sso/sso.mdx) providers. For OIDC logins, `{{external.xyz}}` refers to the "xyz" claim; for SAML logins, `{{external.xyz}}` refers to the "xyz" assertion. 
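As an illustrative sketch of the role template variables described above (the role name and the `logins` trait are hypothetical; your SSO provider must actually supply such a claim or assertion), a role can map an SSO trait onto allowed server logins:

```yaml
kind: role
version: v7
metadata:
  name: sso-server-access
spec:
  allow:
    # "logins" is a hypothetical trait populated from the SSO provider;
    # each value becomes a permitted Unix login on matching servers.
    logins: ["{{external.logins}}"]
    node_labels:
      env: staging
```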
diff --git a/docs/pages/enroll-resources/server-access/server-access.mdx b/docs/pages/enroll-resources/server-access/server-access.mdx index a32e58bd8e97f..2ef7b9ec605f3 100644 --- a/docs/pages/enroll-resources/server-access/server-access.mdx +++ b/docs/pages/enroll-resources/server-access/server-access.mdx @@ -1,6 +1,134 @@ --- title: Linux Servers -description: Guides to protecting Linux servers with Teleport, including OpenSSH servers. +description: Securely connect to Linux servers via SSH +template: "doc-page" +tags: + - zero-trust + - infrastructure-identity --- - +import DocHero from "@site/src/components/Pages/Landing/DocHero"; +import UseCasesList from "@site/src/components/Pages/Landing/UseCasesList"; +import Resources from "@site/src/components/Pages/Homepage/Resources"; + +import linuxServersImg from "@version/docs/img/server-access/linux-servers-hero.jpg"; +import userCheckSvg from "@site/src/components/Icon/svg/user-check.svg"; +import questionSvg from "@site/src/components/Icon/svg/question2.svg"; +import linuxServersSvg from "@site/src/components/Icon/teleport-svg/linux-servers.svg"; +import cliSvg from "@site/src/components/Icon/teleport-svg/cli.svg"; +import vscodeSvg from "@site/src/components/Icon/svg/vscode.svg"; +import jetbrainsSvg from "@site/src/components/Icon/svg/jetbrains.svg"; +import ansibleSvg from "@site/src/components/Icon/svg/ansible.svg"; +import teleportSvg from "@site/src/components/Icon/teleport-svg/teleport.svg"; +import linuxSvg from "@site/src/components/Icon/svg/linux.svg"; +import playCircleSvg from "@site/src/components/Icon/svg/play-circle.svg"; +import addUserSvg from "@site/src/components/Icon/svg/add-user.svg"; + + +Consolidates SSH access across all environments, decreases configuration complexity, supports industry best practices and compliance while giving complete visibility over all sessions and events. + +Teleport protects servers through the Teleport SSH Service, which is a Teleport Agent service. 
For more information on agent services, read [Teleport Agent Architecture](../../reference/architecture/agents.mdx). + + + + + + + diff --git a/docs/pages/enroll-resources/server-access/troubleshooting-server.mdx b/docs/pages/enroll-resources/server-access/troubleshooting-server.mdx index e1754859994bd..9bc3c9d5263ac 100644 --- a/docs/pages/enroll-resources/server-access/troubleshooting-server.mdx +++ b/docs/pages/enroll-resources/server-access/troubleshooting-server.mdx @@ -1,6 +1,11 @@ --- title: Troubleshooting Server Access +sidebar_label: Troubleshooting description: Describes common issues and solutions for access to servers. +tags: + - how-to + - zero-trust + - infrastructure-identity --- This section describes common issues that you might encounter in managing access to servers @@ -69,7 +74,7 @@ provider don't see any of the logins they need to access remote resources. To fix this issue, you should check that the configuration of your auth connectors assigns logins to your single sign-on users or modify the traits in the Teleport roles assigned to users through their group membership in the external identity provider. -For more information about using traits in roles, see [Role Templates](../../admin-guides/access-controls/guides/role-templates.mdx). +For more information about using traits in roles, see [Role Templates](../../zero-trust-access/rbac-get-started/role-templates.mdx). ## Offline servers are included in the server list @@ -140,7 +145,7 @@ spec: ``` For more information about moderated sessions and session sharing, see -[Joining Sessions](../../admin-guides/access-controls/guides/joining-sessions.mdx). +[Joining Sessions](../../zero-trust-access/authentication/joining-sessions.mdx). 
## Unable to connect to agentless OpenSSH server as root diff --git a/docs/pages/enroll-resources/workload-identity/getting-started.mdx b/docs/pages/enroll-resources/workload-identity/getting-started.mdx deleted file mode 100644 index 6c2a0f35febec..0000000000000 --- a/docs/pages/enroll-resources/workload-identity/getting-started.mdx +++ /dev/null @@ -1,251 +0,0 @@ ---- -title: Getting Started with Workload Identity -description: Getting started with Teleport Workload Identity for SPIFFE and Machine ID ---- - -Teleport's Workload Identity issues flexible short-lived identities intended -for workloads. It is compatible with the industry-standard SPIFFE specification -meaning that it can be used in place of other SPIFFE compatible identity -providers. - -In this guide, you'll configure the RBAC necessary to allow a Bot to issue -workload identity credentials and then configure `tbot` to expose a SPIFFE -Workload API endpoint. You can then connect your workloads to this endpoint to -receive SPIFFE SVID-compatible workload identity credentials. - -## Prerequisites - -(!docs/pages/includes/edition-prereqs-tabs.mdx!) - -- (!docs/pages/includes/tctl.mdx!) -- `tbot` must already be installed and configured on the host where the - workloads which need to access Teleport Workload Identity will run. For more - information, see the [deployment guides](../machine-id/deployment/deployment.mdx). - -## Step 1/4. Configure Workload Identity - -First, you will need to create a Workload Identity resource. - -This resource is the primary way that Teleport Workload Identity is configured. -Each Workload Identity resource represents the configuration of an identity for a -specific workload or a template to be used when representing the identity of a -group of workloads. The Workload Identity resource specifies a number of key -things, including: - -- The name of the Workload Identity, which will be needed when issuing it. 
-- The SPIFFE ID that will be included in credentials issued for this - WorkloadIdentity. -- Any rules around when this Workload Identity can be used to issue credentials. - -Before proceeding, you'll want to determine the SPIFFE ID path that your -workload will use. In our example, we'll use `/svc/foo`. We provide more -guidance on choosing a SPIFFE ID structure in the -[Best Practices](./best-practices.mdx) guide. - -Create a new file called `workload-identity.yaml`: - -```yaml -kind: workload_identity -version: v1 -metadata: - name: example-workload-identity - labels: - example: getting-started -spec: - spiffe: - id: /svc/foo -``` - -Replace: - -- `example-workload-identity` with a name that describes your use-case. -- `/svc/foo` with the SPIFFE ID path you have decided on issuing. - -Use `tctl create -f ./workload-identity.yaml` to create the Workload Identity. - -Now, you'll need to create a role that will grant access to the Workload Identity -that you have just created. As with other Teleport resources, access is granted -by specifying label matchers on the role that will match the labels on the -resource itself. - -In addition to granting access to the resource, we will also need to grant the -ability to read and list the Workload Identity resource type. - -Create `workload-identity-issuer-role.yaml`: - -```yaml -kind: role -version: v6 -metadata: - name: example-workload-identity-issuer -spec: - allow: - workload_identity_labels: - example: ["getting-started"] - rules: - - resources: - - workload_identity - verbs: - - list - - read -``` - -Use `tctl create -f ./workload-identity-issuer-role.yaml` to create the role. - -Now, use `tctl bots update` to add the role to the Bot. 
Replace `example-bot` -with the name of the Bot you created in the deployment guide and -`example-workload-identity-issuer` with the name of the role you just created: - -```code -$ tctl bots update example-bot --add-roles example-workload-identity-issuer -``` - -### Configuring DNS SANs - -In some cases, you may wish to configure DNS SANs which should be included in -the X509 certificates issued by the Workload API. This is useful in cases -where the client may not be SPIFFE aware and will check the DNS SAN rather than -the SPIFFE URI during the TLS handshake. - -Modify your `workload-identity.yaml` resource definition to include the -`spec.spiffe.x509.dns_sans` field, replacing `example.com` with the DNS name you -require: - -```yaml -kind: workload_identity -version: v1 -metadata: - name: example-workload-identity - labels: - example: getting-started -spec: - spiffe: - id: /svc/foo - x509: - dns_sans: - - example.com -``` - -Use `tctl create -f ./workload-identity.yaml` to update the WorkloadIdentity -resource with your changes. - -## Step 2/4. Configure `workload-identity-api` service in `tbot` - -To set up a SPIFFE Workload API endpoint with `tbot`, we configure an instance -of the `workload-identity-api` service. - -First, determine where you wish this socket to be created. In our example, -we'll use `/opt/machine-id/workload.sock`. You may wish to choose a directory -that is only accessible by the processes that will need to connect to the -Workload API. - -Modify your `tbot` configuration file to include the `workload-identity-api` -service: - -```yaml -services: -- type: workload-identity-api - listen: unix:///opt/machine-id/workload.sock - selector: - name: example-workload-identity -``` - -Replace: - -- `/opt/machine-id/workload.sock` with the path to the socket you wish to create. -- `example-workload-identity` with the name of the Workload Identity resource you - created earlier. 
- -Start or restart your `tbot` instance to apply the new configuration - -### Configuring Unix Workload Attestation - -By default, an SVID listed under the Workload API service will be issued to any -workload that connects to the Workload API. You may wish to restrict which SVIDs -are issued based on certain characteristics of the workload. This is known as -Workload Attestation. - -When using the Unix listener, `tbot` supports workload attestation based on -three characteristics of the workload process: - -- `uid`: The UID of the user that the workload process is running as. -- `gid`: The primary GID of the user that the workload process is running as. -- `pid`: The PID of the workload process. - -Within a Workload Identity, you can configure rules based on the attributes -determined via workload attestation. Each rule contains a number of tests and -all tests must pass for the rule to pass. At least one rule must pass for the -Workload Identity to be allowed to issue a credential. - -For example, to configure a Workload Identity to be issued only to workloads that -are running as the user with ID 1000 or running as a user with a primary group -ID of 50: - -```yaml -kind: workload_identity -version: v1 -metadata: - name: example-workload-identity - labels: - example: getting-started -spec: - rules: - allow: - - conditions: - - attribute: workload.unix.uid - eq: - value: 1000 - - conditions: - - attribute: workload.unix.gid - eq: - value: 50 - spiffe: - id: /svc/foo -``` - -## Step 3/4. Testing the Workload API with `tbot spiffe-inspect` - -The `tbot` binary includes a `spiffe-inspect` command that can be used to -test the configuration of the Workload API. This command will connect to the -Workload API and request SVIDs, whilst providing debug information. - -Before configuring your workload to use the Workload API, we recommend using -this command to ensure that the Workload API is behaving as expected. 
- -Use the `spiffe-inspect` command with `--path` to specify the path to the -Workload API socket, replacing `/opt/machine-id/workload.sock` with the path you -configured in the previous step: - -```code -$ tbot spiffe-inspect --path unix:///opt/machine-id/workload.sock -INFO [TBOT] Inspecting SPIFFE Workload API Endpoint unix:///opt/machine-id/workload.sock tbot/spiffe.go:31 -INFO [TBOT] Received X.509 SVID context from Workload API bundles_count:1 svids_count:1 tbot/spiffe.go:46 -SVIDS -- spiffe://example.teleport.sh/svc/foo - - Expiry: 2024-03-20 10:55:52 +0000 UTC -Trust Bundles -- example.teleport.sh -``` - -## Step 4/4. Configuring your workload to use the Workload API - -Now that you know that the Workload API is behaving as expected, you can -configure your workload to use it. The exact steps will depend on the workload. - -In cases where you have used the SPIFFE SDKs, you can configure the -`SPIFFE_ENDPOINT_SOCKET` environment variable to point to the socket created by -`tbot`. - -See the [Best Practices](./best-practices.mdx) guide for more information on -integrating SPIFFE with your workloads. - -## Next steps - -- [Workload Identity Overview](./introduction.mdx): Overview of Teleport -Workload Identity. -- [Best Practices](./best-practices.mdx): Best practices for using Workload -Identity in Production. -- Read the [Workload Identity reference](../../reference/workload-identity/workload-identity-resource.mdx) -to explore the configuration of the Workload Identity resource. -- Read the [configuration reference](../../reference/machine-id/configuration.mdx) to explore -all the available configuration options for `tbot`. 
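As a closing illustration of Step 4/4 (the socket path carries over from Step 2, and the environment variable is the standard SPIFFE SDK convention rather than anything Teleport-specific), SDK-based workloads discover the Workload API through a single variable set before launch:

```code
# SPIFFE SDKs (e.g. go-spiffe) read this standard environment variable
# to locate the Workload API; the path matches the tbot listener above.
export SPIFFE_ENDPOINT_SOCKET=unix:///opt/machine-id/workload.sock
```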
diff --git a/docs/pages/enroll-resources/workload-identity/introduction.mdx b/docs/pages/enroll-resources/workload-identity/introduction.mdx deleted file mode 100644 index d728f7f288f53..0000000000000 --- a/docs/pages/enroll-resources/workload-identity/introduction.mdx +++ /dev/null @@ -1,116 +0,0 @@ ---- -title: Introduction to Workload Identity -description: Describes Teleport Workload Identity, which securely issues flexible, short-lived cryptographic identities to workloads and non-human identities. ---- - -Teleport Workload Identity securely issues short-lived cryptographic identities -to workloads. It is a flexible foundation for workload identity across your -infrastructure, creating a uniform way for your workloads to authenticate -regardless of where they are running. - -The flexible nature of Teleport Workload Identity enables it to be used for a -range of purposes, including: - -- Workload authentication to third-party APIs on cloud platforms such as AWS, - GCP and Azure. -- Providing X.509 certificates for mutual TLS authentication between workloads - within your infrastructure as part of a Zero Trust strategy. -- Workload authentication between services within your infrastructure. - -Teleport Workload Identity is compatible with the open-source -[Secure Production Identity Framework For Everyone (SPIFFE)](./spiffe.mdx) -standard. This enables interoperability between workload identity -implementations and also provides a wealth of off-the-shelf tools and SDKs to -simplify integration with your workloads. - -There's a whole host of benefits to adopting Teleport Workload Identity, but -here are some of the key ones: - -- Eliminate the use of long-lived shared secrets within your infrastructure, and - reduce the risk of exfiltration and time spent by engineers creating and - rotating these secrets.
-- Establish an out-of-the-box universal form of identity for your workloads,
- enabling your engineers to get on with building new services without needing
- to think about how they'll authenticate.
-- Converge on a first-class form of identity for your workloads, simplifying
- infrastructure by reducing the number of different ways workloads authenticate.
-
-## How it works
-
-Teleport Workload Identity establishes a root certificate authority within your
-Teleport cluster that is responsible for issuing the short-lived JWTs and
-X.509 certificates to workloads.
-
-These identities are also known as SPIFFE Verifiable Identity Documents (SVIDs)
-and contain the identity of the workload encoded as a URI, also known as a
-SPIFFE ID. The structure of this SPIFFE ID is up to you and can encode any
-information you need to uniquely identify your workload.
-
-The ability to request these identities is controlled by Teleport's Role-Based
-Access Control system. Users and Bots are granted roles which allow them to
-request identities with specific SPIFFE IDs.
-
-The `tbot` agent is installed in close proximity to workloads which require an
-identity. It manages the process of requesting and renewing the identities for
-the workloads. The `tbot` agent authenticates to the Teleport cluster using one
-of our supported join methods, which in many cases enable it to authenticate
-on the basis of federated trust rather than the use of long-lived secrets.
-
-Workloads can receive their identities in one of two ways:
-
-- The `tbot` agent can write these identities to a directory on the local
- filesystem, or to a Kubernetes secret.
-- The `tbot` agent can expose a SPIFFE Workload API, a standardized gRPC API
- that allows workloads to request their identities directly from the `tbot`
- agent.
-
-When using the Workload API, the `tbot` agent can perform an additional process
-called Workload Attestation.
This allows the issuance of identities to be -restricted to specific workloads. For example, you can restrict an identity to -only be issued to a Linux process with a specific UID or GID, or restrict an -identity to only be issued to a specific Kubernetes pod. The Workload -Attestation process eliminates the need for a "bootstrapping" secret to be -provided to the workloads. - -![Workload ID overview](../../../img/workload-identity/intro-diagram.png) - -Once the workload has its identity, it can be used for a range of purposes. The -X.509 certificates can be used to establish Mutual TLS, and the JWTs can be used -to authenticate with a range of third-party APIs. - -## Teleport Workload Identity vs Machine ID - -Teleport Machine ID issues short-lived credentials to workloads to enable them -to access resources secured by the Teleport cluster. The credentials issued are -only compatible with Teleport itself, and access to resources must be through -the Teleport Proxy. - -Teleport Workload Identity issues cryptographic identities that are compatible -with the popular SPIFFE standard for interoperable workload identity. These -identities are flexible enough to be used for a range of purposes. The -Teleport Proxy is not used for securing workload-to-workload communication. - -## Next steps - -Learn more about Teleport Workload Identity: - -- [SPIFFE](./spiffe.mdx): Learn about the SPIFFE specification and how it is implemented by Teleport Workload Identity. -- [Federation](./federation.mdx): Learn about using Federation to allow workloads to trust workloads from other trust domains. -- [JWT SVIDs](./jwt-svids.mdx): Learn about the short-lived JWTs issued by Workload Identity. -- [Best Practices](./best-practices.mdx): Best practices for using Workload Identity in Production. -- [Workload Identity Resource](../../reference/workload-identity/workload-identity-resource.mdx): The full reference for the Workload Identity resource. 
-- [Workload Identity API and Workload Attestation](../../reference/workload-identity/workload-identity-api-and-workload-attestation.mdx): To learn more about the Workload Identity API and Workload Attestation. - -Learn how to configure Teleport Workload Identity for specific use-cases: - -- [Getting Started](./getting-started.mdx): How to configure Teleport for Workload Identity. -- [TSH Support](./tsh.mdx): How to use `tsh` with Workload Identity to issue SVIDs to users. -- [AWS Roles Anywhere](./aws-roles-anywhere.mdx): Configuring AWS to accept Workload Identity certificates as authentication using AWS Roles Anywhere. -- [AWS OIDC Federation](./aws-oidc-federation.mdx): Configuring AWS to accept Workload Identity JWTs as authentication using AWS OIDC Federation. -- [GCP Workload Identity Federation](./gcp-workload-identity-federation-jwt.mdx): Configuring GCP to accept Workload Identity JWTs as authentication using GCP Workload Identity Federation. -- [Azure Federated Credentials](./azure-federated-credentials.mdx): Configuring Azure to accept Workload Identity JWTs as authentication using Azure Federated Credentials. - -### Other resources - -- [SPIFFE Specification](https://github.com/spiffe/spiffe/blob/main/standards/SPIFFE.md): The official SPIFFE specification. Useful for understanding the SPIFFE ID and SVID formats. -- [Solving The Bottom Turtle](https://spiffe.io/pdf/Solving-the-bottom-turtle-SPIFFE-SPIRE-Book.pdf): A book covering the fundamentals and details of SPIFFE. diff --git a/docs/pages/enroll-resources/workload-identity/tsh.mdx b/docs/pages/enroll-resources/workload-identity/tsh.mdx deleted file mode 100644 index be97b9124c1d0..0000000000000 --- a/docs/pages/enroll-resources/workload-identity/tsh.mdx +++ /dev/null @@ -1,88 +0,0 @@ ---- -title: Workload Identity and tsh -description: Issuing SPIFFE SVIDs using Workload Identity and tsh ---- - -In some scenarios, you may wish to issue a SPIFFE SVID manually, without using -Machine ID. 
This can be useful when you need to impersonate a
-service for debugging purposes, or to provide
-human access to services which use SPIFFE SVIDs for authentication.
-
-In this guide, you will use the `tsh` tool to issue a SPIFFE SVID.
-
-## Prerequisites
-
-- A role configured to allow issuing SPIFFE SVIDs and this role assigned to
- your user. See [Getting Started](./getting-started.mdx) for more information.
-
-(!docs/pages/includes/edition-prereqs-tabs.mdx!)
-
-## Step 1/2. Using `tsh` to issue a SPIFFE X509 SVID
-
-First, determine where you wish to write the SPIFFE SVID. If you wish to write
-these into a directory, you must first create it. In our example, we will write
-the SVID to a directory called `svid`.
-
-Next, determine which workload identity resource you'll use to issue the X509
-SVID. In our example, we'll use a workload identity called
-`my-workload-identity`.
-
-Issue the SVID specifying the output directory using `--output` and the name of
-the workload identity resource using `--name-selector`:
-
-```sh
-$ tsh workload-identity issue-x509 --output ./svid --name-selector my-workload-identity
-```
-
-Additionally, flags can be used to further customize the SVID:
-
-| `flag` | Description |
-|--------------------|---------------------------------------------------------------------------------------------------------------------|
-| `--credential-ttl` | Sets the time to live for the resulting X509 SVID. Specify duration using `s`, `m` and `h`, e.g. `12h` for 12 hours. |
-
-### Using headless authentication to issue an SVID on a remote host
-
-In some scenarios, you may wish to use `tsh` to issue an SVID on a host you have
-SSHed into, without logging into Teleport on that host. This can be particularly
-useful in scenarios where authentication may not be possible, for example, when
-you authenticate using a hardware token.
-
-In this case, you can use the headless authentication feature of `tsh`.
This -provides a prompt for you to authenticate the command on the remote machine, -using your local `tsh` client. - -To use headless authentication, we provide the `--headless` flag, and must -also specify the `--proxy` and `--user` flags: - -```sh -$ tsh --proxy=example.teleport.sh \ - --user example \ - --headless \ - workload-identity issue-x509 \ - --output . \ - --name-selector my-workload-identity -``` - -## Step 2/2. Using the output SVID - -Once the SVID has been issued, you can configure your workload to use it. This -will differ depending on the workload. - -When written to disk, the SVID will be written as three files: - -| `file` | Description | -|-------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `svid_key.pem` | The private key for the X509 SVID. This is PEM wrapped and marshalled in PKCS8 format. | -| `svid.pem` | The X509 SVID itself. This is PEM wrapped and DER encoded. | -| `svid_bundle.pem` | The SPIFFE trust bundle. A concatenated list of X509 certificates for the certificate authorities within the trust domain. These are individually PEM wrapped and DER encoded. | - -## Next steps - -- [Workload Identity Overview](./introduction.mdx): Overview of Teleport -Workload Identity. -- [Getting Started](./getting-started.mdx): How to configure Teleport for -Workload Identity. -- [Best Practices](./best-practices.mdx): Best practices for using Workload -Identity in Production. -- Read the [configuration reference](../../reference/machine-id/configuration.mdx) to explore -all the available configuration options. 
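Since `svid_bundle.pem` is a concatenation of individually PEM-wrapped certificates, a workload that needs the CA certificates separately has to split the bundle. The sketch below is an illustrative way to do that in plain Python; it assumes well-formed PEM input and is not part of any Teleport tooling.

```python
def split_pem_bundle(bundle_text):
    """Split a concatenated PEM bundle (such as svid_bundle.pem) into the
    individual PEM-wrapped certificates it contains."""
    certs, current = [], []
    for line in bundle_text.splitlines():
        if line.startswith("-----BEGIN CERTIFICATE-----"):
            current = [line]
        elif line.startswith("-----END CERTIFICATE-----"):
            current.append(line)
            certs.append("\n".join(current) + "\n")
            current = []
        elif current:
            current.append(line)
    return certs

# Synthetic two-certificate bundle for illustration only.
bundle = (
    "-----BEGIN CERTIFICATE-----\nAAAA\n-----END CERTIFICATE-----\n"
    "-----BEGIN CERTIFICATE-----\nBBBB\n-----END CERTIFICATE-----\n"
)
print(len(split_pem_bundle(bundle)))  # 2
```

Most TLS libraries accept the concatenated form directly, so splitting is only needed when an API wants one certificate at a time.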
diff --git a/docs/pages/enroll-resources/workload-identity/workload-attestation.mdx b/docs/pages/enroll-resources/workload-identity/workload-attestation.mdx deleted file mode 100644 index 077e9894e9c1e..0000000000000 --- a/docs/pages/enroll-resources/workload-identity/workload-attestation.mdx +++ /dev/null @@ -1,247 +0,0 @@ ---- -title: Workload Attestation -description: An overview of the Teleport Workload Identity Workload Attestation feature. ---- - -Workload Attestation is the process completed by `tbot` to assert the identity -of a workload that has connected to the Workload API and requested certificates. -The information gathered during attestation is used to decide which, if any, -SPIFFE IDs should be encoded into an SVID and issued to the workload. - -Workload Attestors are the individual components that perform this attestation. -They use the process ID of the workload to gather information about the workload -from platform-specific APIs. For example, the Kubernetes Workload Attestor -queries the local Kubelet API to determine which Kubernetes pod the process -belongs to. - -The result of the attestation process is known as attestation metadata. This -attestation metadata is referred to by the rules you configured for `tbot`'s -Workload API service. For example, you may state that only workloads running in -a specific Kubernetes namespace should be issued a specific SPIFFE ID. - -Additionally, this metadata is included in the log messages output by `tbot` -when it issues an SVID. This allows you to audit the issuance of SVIDs and -understand why a specific SPIFFE ID was issued to a workload. - -## Unix - -The Unix Workload Attestor is the most basic attestor and allows you to restrict -the issuance of SVIDs to specific Unix processes based on a range of criteria. 
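The selection logic can be sketched as follows: each rule names attestor fields (such as the `unix.uid` attestation metadata), and an identity is issued when any rule's fields all match the attested values. This is an illustrative sketch of the idea, not tbot's actual implementation or config schema.

```python
def metadata_matches(rules, metadata):
    """Return True if any rule matches. A rule matches when every field it
    names equals the corresponding attested value; fields a rule does not
    name are unconstrained."""
    for rule in rules:
        for attestor, wanted in rule.items():
            attested = metadata.get(attestor, {})
            if all(attested.get(key) == value for key, value in wanted.items()):
                return True
    return False

# Example attestation metadata, mirroring the unix.* fields described above.
metadata = {"unix": {"attested": True, "pid": 4321, "uid": 1000, "gid": 1000}}
print(metadata_matches([{"unix": {"uid": 1000}}], metadata))  # True
print(metadata_matches([{"unix": {"uid": 0}}], metadata))     # False
```

Note that in this sketch a rule with an empty field set matches everything, so production logic would also validate that rules are non-empty.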
- -### Attestation Metadata - -The following metadata is produced by the Unix Workload Attestor and is -available to be used when configuring rules for `tbot`'s Workload API service: - -| Field | Description | -|-------------------|------------------------------------------------------------------------------| -| `unix.attested` | Indicates that the workload has been attested by the Unix Workload Attestor. | -| `unix.pid` | The process ID of the attested workload. | -| `unix.uid` | The effective user ID of the attested workload. | -| `unix.gid` | The effective primary group ID of the attested workload. | - -### Support for non-standard procfs mounting - -To resolve information about a process from the PID, the Unix Workload Attestor -reads information from the procfs filesystem. By default, it expects procfs to -be mounted at `/proc`. - -If procfs is mounted at a different location, you must configure the Unix -Workload Attestor to read from that alternative location by setting the -`HOST_PROC` environment variable. - -This is a sensitive configuration option, and you should ensure that it is -set correctly or not set at all. If misconfigured, an attacker could provide -falsified information about processes, and this could lead to the issuance of -SVIDs to unauthorized workloads. - -## Kubernetes - -The Kubernetes Workload Attestor allows you to restrict the issuance of SVIDs -to specific Kubernetes workloads based on a range of criteria. - -It works by first determining the pod ID for a given process ID and then by -querying the local kubelet API for details about that pod. 
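One common way to map a process ID to a pod, used by several SPIFFE implementations, is to parse the pod UID out of the process's cgroup path in `/proc/<pid>/cgroup` before consulting the kubelet. The path layout varies by container runtime and cgroup driver, so the regex below is a hedged, illustrative sketch rather than an exhaustive matcher.

```python
import re

# Matches a kubepods cgroup segment like
# kubepods-besteffort-pod<uid>.slice, where the UID's dashes may appear as
# underscores under the systemd cgroup driver.
POD_UID_RE = re.compile(
    r"kubepods[^/]*/[^/]*pod"
    r"([0-9a-f]{8}[-_][0-9a-f]{4}[-_][0-9a-f]{4}[-_][0-9a-f]{4}[-_][0-9a-f]{12})"
)

def pod_uid_from_cgroup(cgroup_contents):
    """Extract a Kubernetes pod UID from /proc/<pid>/cgroup contents,
    normalizing underscores back to dashes. Returns None if no pod is found."""
    for line in cgroup_contents.splitlines():
        match = POD_UID_RE.search(line)
        if match:
            return match.group(1).replace("_", "-")
    return None

sample = ("0::/kubepods.slice/kubepods-besteffort.slice/"
          "kubepods-besteffort-pod8dd1cc5e_1fb2_4db1_96a6_6b0f1b1a2e00.slice/"
          "cri-containerd-abc.scope")
print(pod_uid_from_cgroup(sample))  # 8dd1cc5e-1fb2-4db1-96a6-6b0f1b1a2e00
```

With the pod UID in hand, the attestor can look up the pod's namespace, service account, and name via the kubelet API.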
- -### Attestation Metadata - -The following metadata is produced by the Kubernetes Workload Attestor and is -available to be used when configuring rules for `tbot`'s Workload API service: - -| Field | Description | -|------------------------------|------------------------------------------------------------------------------------| -| `kubernetes.attested` | Indicates that the workload has been attested by the Kubernetes Workload Attestor. | -| `kubernetes.namespace` | The namespace of the Kubernetes Pod. | -| `kubernetes.service_account` | The service account of the Kubernetes Pod. | -| `kubernetes.pod_name` | The name of the Kubernetes Pod. | - -### Deployment Guidance - -To use Kubernetes Workload Attestation, `tbot` must be deployed as a daemon -set. This is because the unix domain socket can only be accessed by pods on the -same node as the agent. Additionally, the daemon set must have the `hostPID` -property set to `true` to allow the agent to access information about -processes within other containers. - -The daemon set must also have a service account assigned that allows it to query -the Kubelet API. This is an example role with the required RBAC: - -```yaml -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: tbot -rules: - - resources: ["pods","nodes","nodes/proxy"] - apiGroups: [""] - verbs: ["get"] -``` - -Mapping the Workload API Unix domain socket into the containers of workloads -can be done in two ways: - -- Directly configuring a hostPath volume for the `tbot` daemonset and workloads - which will need to connect to it. -- Using [spiffe-csi-driver](https://github.com/spiffe/spiffe-csi). 
- -Example manifests for required Kubernetes resources: - -```yaml -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: tbot -rules: - - resources: ["pods","nodes","nodes/proxy"] - apiGroups: [""] - verbs: ["get"] ---- -kind: ClusterRoleBinding -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: tbot -subjects: - - kind: ServiceAccount - name: tbot - namespace: default -roleRef: - kind: ClusterRole - name: tbot - apiGroup: rbac.authorization.k8s.io ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: tbot - namespace: default ---- -apiVersion: v1 -kind: ConfigMap -metadata: - name: tbot-config - namespace: default -data: - tbot.yaml: | - version: v2 - onboarding: - join_method: kubernetes - # replace with the name of a join token you have created. - token: example-token - storage: - type: memory - # ensure this is configured to the address of your Teleport Proxy Service. - proxy_server: example.teleport.sh:443 - services: - - type: spiffe-workload-api - listen: unix:///run/tbot/sockets/workload.sock - attestor: - kubernetes: - enabled: true - kubelet: - # skip verification of the Kubelet API certificate as this is not - # usually issued by the cluster CA. - skip_verify: true - # replace the svid entries with the SPIFFE IDs that you wish to issue, - # using the `rules` blocks to restrict these to specific Kubernetes - # workloads. 
- svids: - - path: /my-service - rules: - - kubernetes: - namespace: default - service_account: example-sa ---- -apiVersion: apps/v1 -kind: DaemonSet -metadata: - name: tbot -spec: - selector: - matchLabels: - app: tbot - template: - metadata: - labels: - app: tbot - spec: - securityContext: - runAsUser: 0 - runAsGroup: 0 - hostPID: true - containers: - - name: tbot - image: public.ecr.aws/gravitational/tbot-distroless:(=teleport.version=) - imagePullPolicy: IfNotPresent - securityContext: - privileged: true - args: - - start - - -c - - /config/tbot.yaml - - --log-format - - json - volumeMounts: - - mountPath: /config - name: config - - mountPath: /var/run/secrets/tokens - name: join-sa-token - - name: tbot-sockets - mountPath: /run/tbot/sockets - readOnly: false - env: - - name: TELEPORT_NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - - name: KUBERNETES_TOKEN_PATH - value: /var/run/secrets/tokens/join-sa-token - serviceAccountName: tbot - volumes: - - name: tbot-sockets - hostPath: - path: /run/tbot/sockets - type: DirectoryOrCreate - - name: config - configMap: - name: tbot-config - - name: join-sa-token - projected: - sources: - - serviceAccountToken: - path: join-sa-token - # 600 seconds is the minimum that Kubernetes supports. We - # recommend this value is used. - expirationSeconds: 600 - # `example.teleport.sh` must be replaced with the name of - # your Teleport cluster. - audience: example.teleport.sh -``` - -## Next steps - -- [Workload Identity Overview](./introduction.mdx): Overview of Teleport -Workload Identity. -- [Best Practices](./best-practices.mdx): Best practices for using Workload -Identity in Production. -- Read the [configuration reference](../../reference/machine-id/configuration.mdx) to explore -all the available configuration options. 
diff --git a/docs/pages/enroll-resources/workload-identity/workload-identity.mdx b/docs/pages/enroll-resources/workload-identity/workload-identity.mdx deleted file mode 100644 index 0f58d4bb583cd..0000000000000 --- a/docs/pages/enroll-resources/workload-identity/workload-identity.mdx +++ /dev/null @@ -1,6 +0,0 @@ ---- -title: Workload Identity -description: Securely issue flexible short-lived identities to your workloads ---- - - diff --git a/docs/pages/faq.mdx b/docs/pages/faq.mdx index 46ead150441dd..5d48f22d538bb 100644 --- a/docs/pages/faq.mdx +++ b/docs/pages/faq.mdx @@ -1,12 +1,14 @@ --- title: Teleport FAQ description: Frequently Asked Questions About Using Teleport -h1: Teleport FAQ +tags: + - faq + - platform-wide --- This page includes answers to frequently asked questions about setting up, managing, and using Teleport. If you are new to Teleport, read our [Getting -Started Guide](get-started.mdx). +Started Guide](./get-started/get-started.mdx). ## Can I use Teleport in production today? @@ -18,7 +20,7 @@ the stability of Teleport from a security perspective. ## Can I connect to nodes behind a firewall? Yes, Teleport supports reverse SSH tunnels out of the box. To configure -behind-firewall clusters, see [Configure Trusted Clusters](admin-guides/management/admin/trustedclusters.mdx). +behind-firewall clusters, see [Configure Trusted Clusters](zero-trust-access/deploy-a-cluster/trustedclusters.mdx). ## How is Teleport's Community Edition different from Enterprise? @@ -72,7 +74,7 @@ look at [Using OpenSSH Guide](enroll-resources/server-access/openssh/openssh-age ## Can I copy files from one Teleport node to another? 
-Yes, Teleport supports [Headless WebAuthn authentication](admin-guides/access-controls/guides/headless.mdx), +Yes, Teleport supports [Headless Authentication](zero-trust-access/authentication/headless.mdx), which allows you to perform operations like `tsh ssh` or `tsh scp` from remote systems where you are not logged in to Teleport or may not have access to a browser. @@ -105,7 +107,7 @@ Yes. You can copy and paste using a mouse. ## What TCP ports does Teleport use? -Please refer to our [Networking](./reference/networking.mdx) guide. +Please refer to our [Networking](reference/deployment/networking.mdx) guide. ## Does Teleport support authentication via OIDC, SAML, or Active Directory? @@ -122,7 +124,7 @@ To get a new certificate with the new role set, the user will need to log out and log back in. Revocation of Teleport access should be done with Teleport's -[session and identity locks](./admin-guides/access-controls/guides/locking.mdx), +[session and identity locks](identity-governance/locking.mdx), not by removing roles. ## Does Teleport support provisioning users via SCIM? @@ -131,9 +133,14 @@ Teleport supports [SCIM](https://scim.cloud/) provisioning for Okta via the hosted Okta integration, available in the Enterprise (Cloud) and Enterprise (Self-Hosted) versions of Teleport. -Refer to the [hosted Okta integration guide](./admin-guides/access-controls/okta/okta.mdx) +Refer to the [hosted Okta integration guide](identity-governance/integrations/okta/okta.mdx) for details on setting up and configuring SCIM support. +You can also set up Teleport to integrate with a SCIM provider to automatically +synchronize SCIM group memberships with Teleport Access Lists. For details, see +the [SCIM integration +documentation](identity-governance/integrations/scim/scim.mdx). + ## Why do I see an alert that some agents are out of date? Teleport monitors the inventory of all cluster components and compares their @@ -198,4 +205,18 @@ editions of Teleport. 
(!docs/pages/includes/teleport-connect-telemetry.mdx!) -If you no longer want to send usage data, see [disabling telemetry](./connect-your-client/teleport-connect.mdx#disabling-telemetry). +If you no longer want to send usage data, see [disabling telemetry](connect-your-client/teleport-clients/teleport-connect.mdx#disabling-telemetry). + +### How do I update my security and business contacts? + +Teleport Enterprise Self-Hosted users can configure up to 3 security contacts and 3 business contacts for their cluster. It's important to +keep these up to date so that we always know who to notify of important updates and alerts. + +To do this, log in to your Teleport License dashboard with the email address of the user that created the dashboard initially. Open +the user dropdown menu on the top right of the navigation bar, and select "Help & Support," then scroll down until you see the contacts sections. +Once you add a contact, they will receive an invitation email which they must accept within 14 days. + +![Web UI view showing security and business contacts options](../img/cloud/security-business-contacts.png) + +If you don't see the contacts lists on the Help & Support page, ensure that you are logged into the Teleport License dashboard and not your self-hosted cluster, +and ensure that your user has the `dashboard-admin` role. Users with the `dashboard-user` role cannot edit contacts. diff --git a/docs/pages/feature-matrix.mdx b/docs/pages/feature-matrix.mdx index 40efe790e9152..cbac1abb88ddb 100644 --- a/docs/pages/feature-matrix.mdx +++ b/docs/pages/feature-matrix.mdx @@ -1,6 +1,9 @@ --- title: Teleport Feature Matrix description: Provides a comparison of features available in Teleport products. +tags: + - conceptual + - platform-wide --- The Teleport feature matrix lists capabilities of the Teleport Infrastructure @@ -65,34 +68,35 @@ across distributed infrastructures. 
| | **Enterprise (Cloud)** | **Enterprise (Self-Hosted)** | **Community Edition** | |---|:---:|:---:|:---:| |**User identity.** Authenticate users without passwords:|||| -| [Single Sign-On](./admin-guides/access-controls/sso/sso.mdx)| GitHub, Google Workspace, Microsoft Entra ID, Okta, OIDC, SAML, Teleport | GitHub, Google Workspace, Microsoft Entra ID, Okta, OIDC, SAML, Teleport | GitHub | -| User & Group Provisioning & Deprovisioning (SCIM & Custom Protocols), including Okta and Entra | Available In Teleport Identity Governance | Available In Teleport Identity Governance | ✖ | -| [Hardware Private Key Support](./admin-guides/access-controls/guides/hardware-key-support.mdx) (e.g., via YubiKey) | ✔ (External-connected HSM/KMS coming soon) | ✔ | ✖ | +| [Single Sign-On](zero-trust-access/sso/sso.mdx)| GitHub, Google Workspace, Microsoft Entra ID, Okta, SailPoint, OIDC, SAML, Teleport | GitHub, Google Workspace, Microsoft Entra ID, Okta, SailPoint, OIDC, SAML, Teleport | GitHub | +| User & Group Provisioning & Deprovisioning (SCIM & Custom Protocols), including Okta, Microsoft Entra ID, and SailPoint | Available In Teleport Identity Governance | Available In Teleport Identity Governance | ✖ | +| [Hardware Private Key Support](zero-trust-access/authentication/hardware-key-support.mdx) (e.g., via YubiKey) | ✔ (External-connected HSM/KMS coming soon) | ✔ | ✖ | +| [Per-Session MFA](zero-trust-access/authentication/per-session-mfa.mdx)| ✔ | ✔ | ✔ | |**Resource identity.** Assign a cryptographic identity to every Teleport Protected Resource:|||| -| Protecting: [Applications](./enroll-resources/application-access/getting-started.mdx), [Databases](./enroll-resources/database-access/getting-started.mdx), [Kubernetes Clusters](./enroll-resources/kubernetes-access/getting-started.mdx), [Linux Servers](./enroll-resources/server-access/getting-started.mdx), [Windows Servers](./enroll-resources/desktop-access/introduction.mdx), [Windows 
Desktops](./enroll-resources/desktop-access/introduction.mdx), Cloud Consoles & Resources (AWS, Azure, GCP), [GitHub](./admin-guides/management/guides/github-integration.mdx) | ✔ | ✔ | ✔ (does not include Oracle support) | +| Protecting: [MCP Servers](./enroll-resources/mcp-access/mcp-access.mdx), [Applications](./enroll-resources/application-access/getting-started.mdx), [Databases](./enroll-resources/database-access/getting-started.mdx), [Kubernetes Clusters](./enroll-resources/kubernetes-access/getting-started.mdx), [Linux Servers](./enroll-resources/server-access/getting-started.mdx), [Windows Servers](./enroll-resources/desktop-access/introduction.mdx), [Windows Desktops](./enroll-resources/desktop-access/introduction.mdx), Cloud Consoles & Resources (AWS, Azure, GCP), [GitHub](enroll-resources/application-access/cloud-apis/github-integration.mdx) | ✔ | ✔ | ✔ (does not include Oracle support) | |**Secure remote access.** Zero-trust, auditable access to your infrastructure:|||| | Dynamic, self-updating inventory | ✔ | ✔ | ✔ | -| Supports SSH, RDP, Kubernetes, Databases, AWS, Azure, GCP API and CLI, Web applications and services, TCP endpoints for Linux, Windows and MacOS. | ✔ | ✔ | ✔ | -| [Machines](./enroll-resources/machine-id/machine-id.mdx) and [workloads](./enroll-resources/workload-identity/introduction.mdx#teleport-workload-identity-vs-machine-id) | Available in Teleport Machine & Workload Identity | Available in Teleport Machine & Workload Identity | Available in Teleport Machine & Workload Identity | +| Supports MCP servers, SSH, RDP, Kubernetes, Databases, AWS, Azure, GCP API and CLI, Web applications and services, TCP endpoints for Linux, Windows and MacOS. 
| ✔ | ✔ | ✔ | +| [Machines](machine-workload-identity/machine-workload-identity.mdx) and [workloads](machine-workload-identity/workload-identity/introduction.mdx) | Available in Teleport Machine & Workload Identity | Available in Teleport Machine & Workload Identity | Available in Teleport Machine & Workload Identity | | Agentless Integration with [OpenSSH Servers](./enroll-resources/server-access/openssh/openssh-agentless.mdx) | ✔ | ✔ | ✔ | -| [IP-Based Restrictions](./admin-guides/access-controls/guides/ip-pinning.mdx) | ✔ | ✔ | ✖ | -| [Teleport VNet](./connect-your-client/vnet.mdx) | ✔ | ✔ | ✔ | +| [IP-Based Restrictions](zero-trust-access/authentication/ip-pinning.mdx) | ✔ | ✔ | ✖ | +| [Teleport VNet](connect-your-client/teleport-clients/vnet.mdx) | ✔ | ✔ | ✔ | |**Short-lived privileges.** Ephemeral authorization granted through short-lived certificates:|||| -| [Role-Based Access Control](./admin-guides/access-controls/guides/role-templates.mdx) | ✔ | ✔ | ✔ | -| [Just-in-Time Access Requests & Reviews](./admin-guides/access-controls/access-requests/resource-requests.mdx) | Available in Teleport Identity Governance | Available in Teleport Identity Governance | Only can request roles through CLI | +| [Role-Based Access Control](zero-trust-access/rbac-get-started/role-templates.mdx) | ✔ | ✔ | ✔ | +| [Just-in-Time Access Requests & Reviews](identity-governance/access-requests/resource-requests.mdx) | Available in Teleport Identity Governance | Available in Teleport Identity Governance | Only can request roles through CLI | |**Session recording and interactive controls.** Record, replay, join, and moderate interactive sessions:|||| | [Session Recording with Playback](./reference/architecture/session-recording.mdx) | ✔ | ✔ | ✔ | | [Enhanced Session Recording](./enroll-resources/server-access/guides/bpf-session-recording.mdx) | ✔ | ✔ | ✔ | -| [Recording Proxy Mode](./enroll-resources/server-access/guides/recording-proxy-mode.mdx) | ✖ | ✔ | ✔ | +| [Recording Proxy 
Mode](./reference/architecture/session-recording.mdx#record-at-the-proxy-service) | ✖ | ✔ | ✔ | | Live Sessions View | SSH, Kubernetes, Desktops, Databases | SSH, Kubernetes, Desktops, Databases | SSH, Kubernetes, Desktops, Databases | | Protocol-Level Events, for all supported resources | ✔ | ✔ | ✔ | -| [Dual Authorization](./admin-guides/access-controls/guides/dual-authz.mdx) | ✔ | ✔ | ✖ | -| [Session Sharing & Moderation](./admin-guides/access-controls/guides/joining-sessions.mdx) | ✔ | ✔ | ✖ | +| [Dual Authorization](identity-governance/access-requests/access-requests.mdx) | ✔ | ✔ | ✖ | +| [Session Sharing & Moderation](zero-trust-access/authentication/joining-sessions.mdx) | ✔ | ✔ | ✖ | |**Identity-based audit events:** |||| -| [Structured Audit Logs](./reference/monitoring/audit.mdx) | ✔ | ✔ | ✔ | -| [Export to SIEM](./admin-guides/management/export-audit-events/export-audit-events.mdx) | ✔ | ✔ | ✔ | +| [Structured Audit Logs](reference/deployment/monitoring/audit.mdx) | ✔ | ✔ | ✔ | +| [Export to SIEM](zero-trust-access/export-audit-events/export-audit-events.mdx) | ✔ | ✔ | ✔ | |**Regulatory standards and frameworks:**|||| -| [FedRAMP Control](./admin-guides/access-controls/compliance-frameworks/fedramp.mdx) | ✖ | ✔ | ✖ | +| [FedRAMP Control](zero-trust-access/compliance-frameworks/fedramp.mdx) | ✖ | ✔ | ✖ | | FIPS-compliant binaries for FedRAMP (Low, Moderate, High) | ✖ | ✔ | ✖ | | DORA, SOX, ISO, NIS2, PCI DSS, SOC 2, HIPAA, NIST | ✔ | ✔ | Limited | @@ -105,7 +109,7 @@ certificates, access control, and auditability. 
| | **Enterprise (Cloud)** | **Enterprise (Self-Hosted)** | **Community Edition** | |---|:---:|:---:|:---:| | **Service Discovery:** Live inventory of machine and workload identities for CI/CD jobs, microservices, and others | ✔ | ✔ | ✔ | -|**Issuance:** Provisions cryptographic identities for [machines](./enroll-resources/machine-id/getting-started.mdx) and [workloads](./enroll-resources/workload-identity/getting-started.mdx), eliminating anonymous computing and the need for static over-privileged users and automating certificate rotation | ✔ | ✔ | ✔ | +|**Issuance:** Provisions cryptographic identities for [machines](machine-workload-identity/getting-started.mdx) and [workloads](machine-workload-identity/workload-identity/getting-started.mdx), eliminating anonymous computing and the need for static over-privileged users and automating certificate rotation | ✔ | ✔ | ✔ | |**Secretless Authentication:** Eliminates the need for API keys and long-term secrets with short-lived certificates.| ✔ | ✔ | ✔ | |**Ephemeral Authorization:** With granular ABAC/RBAC for workload interactions | ✔ | ✔ | ✔ | |**Auditability:** Audit data, exportable to SIEMs, for compliance reporting & reviews | ✔ | ✔ | ✔ | @@ -122,15 +126,15 @@ and non-human identities. | | **Enterprise (Cloud)** | **Enterprise (Self-Hosted)** | **Community Edition** | |---|:---:|:---:|:---:| -| [JIT Access Requests](./admin-guides/access-controls/guides/dual-authz.mdx): Grant only those privileges necessary to complete the task at hand. Remove the need for super-privileged accounts. | ✔ | ✔ | Only can request roles through CLI | +| [JIT Access Requests](identity-governance/access-requests/access-requests.mdx): Grant only those privileges necessary to complete the task at hand. Remove the need for super-privileged accounts. | ✔ | ✔ | Only can request roles through CLI | | Automatic Access Requests & Approvals: Automate pre-defined workflows based on RBAC, ABAC, or context-based authorization. 
| ✔ | ✔ | ✖ | -| [Access Lists & Access Reviews](./admin-guides/access-controls/access-lists/access-lists.mdx): Review access requests using Slack, PagerDuty, Microsoft Teams, Jira and ServiceNow. Assign managers, automate mandatory reviews, and implement custom review logic using our API and Go SDK. Integrates with AWS Identity Center. | ✔ | ✔ | ✖ | -| [Session & Identity Locks](./admin-guides/access-controls/guides/locking.mdx): Lock suspicious or compromised identities and stop all their activity across all protocols and services. | ✔ | ✔ | ✖ | -| [Device Trust](./admin-guides/access-controls/device-trust/guide.mdx): Require an up-to-date, registered device for each authentication. Teleport uses TPMs and secure enclaves to give every device a cryptographic identity. Restrict further by resource or MDM-authorization. | ✔ | ✔ | ✖ | -| User & Group Provisioning & Deprovisioning (SCIM & Custom Protocols), including Okta and Entra | ✔ | ✔ | ✖ | -| [Access Monitoring & Response](./admin-guides/access-controls/access-monitoring.mdx): Detect overly broad privileges and inspect sessions that are not using strong protection, such as multi-factor authentication or device trust. Alert on access violations and purge unused permissions with automated access rules. | ✔ | ✔ | ✖ | -| [Okta integration](./admin-guides/access-controls/okta/okta.mdx): Configure Teleport to import and grant access to Okta applications and user groups. | ✔ | ✔ | ✖ | -| Microsoft Entra ID directory synchronization and SSO [integration](./admin-guides/access-controls/sso/azuread.mdx) | ✔ | ✔ | ✖ | +| [Access Lists & Access Reviews](identity-governance/access-lists/access-lists.mdx): Review access requests using Slack, PagerDuty, Microsoft Teams, Jira and ServiceNow. Assign managers, automate mandatory reviews, and implement custom review logic using our API and Go SDK. Integrates with AWS Identity Center. 
| ✔ | ✔ | ✖ | +| [Session & Identity Locks](identity-governance/locking.mdx): Lock suspicious or compromised identities and stop all their activity across all protocols and services. | ✔ | ✔ | ✖ | +| [Device Trust](identity-governance/device-trust/guide.mdx): Require an up-to-date, registered device for each authentication. Teleport uses TPMs and secure enclaves to give every device a cryptographic identity. Restrict further by resource or MDM-authorization. | ✔ | ✔ | ✖ | +| User & Group Provisioning & Deprovisioning (SCIM & Custom Protocols), including Okta, Microsoft Entra ID, and SailPoint | ✔ | ✔ | ✖ | +| [Access Monitoring & Response](identity-governance/access-monitoring.mdx): Detect overly broad privileges and inspect sessions that are not using strong protection, such as multi-factor authentication or device trust. Alert on access violations and purge unused permissions with automated access rules. | ✔ | ✔ | ✖ | +| [Okta integration](identity-governance/integrations/okta/okta.mdx): Configure Teleport to import and grant access to Okta applications and user groups. | ✔ | ✔ | ✖ | +| Microsoft Entra ID directory synchronization and SSO [integration](zero-trust-access/sso/integrate-idp/entra-id.mdx) | ✔ | ✔ | ✖ | ## Teleport Identity Security @@ -143,7 +147,9 @@ and non-human identities. | Discover standing privileges | ✔ | ✔ | ✖ | | Analyze shadow access and drift of security posture | ✔ | ✔ | ✖ | | Investigate identity vulnerabilities and potential exposures | ✔ | ✔ | ✖ | -| Monitor critical assets with [Crown Jewel](./admin-guides/teleport-policy/crown-jewels.mdx) Alerting | ✔ | ✔ | ✖ | +| Monitor critical assets with [Crown Jewel](identity-security/usage/crown-jewels.mdx) Alerting | ✔ | ✔ | ✖ | +| [Session Recording Summaries](identity-security/session-summaries.mdx) | ✔ | ✔ | ✖ | +| Identity Activity Center | ✖ | ✔ | ✖ | ## Platform integrations, management, licensing, and deployment @@ -155,12 +161,12 @@ and non-human identities. 
| Security Information & Event Management (SIEM): Elastic, Splunk, Panther, and anything else that integrates with Fluentd | ✔ | ✔ | ✔ | | ITSM: ServiceNow, JIRA | ✔ | ✔ | ✖ | | Access Request Integration: Slack, Teams, Discord, Mattermost, PagerDuty, Opsgenie, Email | ✔ | ✔ | ✔ | -| [Hardware Private Key Support](./admin-guides/access-controls/guides/hardware-key-support.mdx) (e.g., via YubiKey) | ✔ (External-connected HSM/KMS coming soon) | ✔ | ✖ | -| [Hardware Security Module support](./admin-guides/deploy-a-cluster/hsm.mdx) for encryption at rest | ✔ (External-connected HSM/KMS coming soon) | ✔ | ✖ | +| [Hardware Private Key Support](zero-trust-access/authentication/hardware-key-support.mdx) (e.g., via YubiKey) | ✔ (External-connected HSM/KMS coming soon) | ✔ | ✖ | +| [Hardware Security Module support](zero-trust-access/deploy-a-cluster/hsm.mdx) for encryption at rest | ✔ (External-connected HSM/KMS coming soon) | ✔ | ✖ | | **Management and licensing:** |||| | Annual or multi-year contracts, volume discounts | ✔ | ✔ | ✖ | | Anonymized Usage Tracking | ✔ | ✔ | Opt-in | -| [Backend support](./reference/backends.mdx) | All data is stored in DynamoDB and S3 with server-side encryption. | Any S3-compatible storage for session records, many managed backends for custom audit log storage | Any S3-compatible storage for session records, many managed backends for custom audit log storage. | +| [Backend support](reference/deployment/backends.mdx) | All data is stored in DynamoDB and S3 with server-side encryption. | Any S3-compatible storage for session records, many managed backends for custom audit log storage | Any S3-compatible storage for session records, many managed backends for custom audit log storage. | | Multi-region failover using Cockroach DB | ✔ | ✔ | ✖ | | Data storage location | Data is stored in Teleport's AWS infrastructure with audit logs/sessions optionally in customer AWS accounts. 
Proxy Service instances are deployed across the world for low-latency access. | Can store data anywhere in the world, on most managed cloud backends | Can store data anywhere in the world, on most managed cloud backends | | License | Commercial | Commercial | Commercial for binaries, with restrictions: Free usage for companies with \<100 employees and \ Session Recordings**. - -1. In the sidebar, under **Activity**, click **Session Recordings**. - - You will see the recording for your interactive session from the previous - step listed. For example: - - ![Session - recordings](../img/cloud/getting-started/session-recordings@2x.png) - -1. Click **Play** to see a full recording of your session. - -## Step 5/5. Access the server from the command line - -To access the server using commands in a terminal: - -1. Open a new terminal window. - -1. Sign in to your Teleport cluster by running the `tsh login` command with the - URL of your cluster and the name of your Teleport user, assigning - to your account subdomain and to - your Teleport username: - - ```code - $ tsh login --proxy=.teleport.sh --user= - ``` - - When prompted, authenticate using your password, authenticator app, or hardware key. - The command displays information about your Teleport cluster and account. For example: - - ```code - > Profile URL: https://example.teleport.sh:443 - Logged in as: admin@teleport.example.com - Cluster: example.teleport.sh - Roles: access, auditor, editor - Logins: root - Kubernetes: enabled - Valid until: 2023-07-08 01:35:20 -0700 PDT [valid for 12h0m0s] - Extensions: login-ip, permit-agent-forwarding, permit-port-forwarding, permit-pty, private-key-policy - ``` - -1. List the servers your Teleport user can access. - - ```code - $ tsh ls - ``` - - You should see the name of the container you just registered. 
For example: - - ```code - Node Name Address Labels - ------------ --------- ---------------------------------------------------------------------------------------- - b6c1072b5af5 ⟵ Tunnel - ``` - -1. Access your server as the `root` user, assigning to - the name of the server as displayed by `tsh ls`: - - ```code - $ tsh ssh root@ - ``` - -## Next steps - -This guide introduced how you can use Teleport Enterprise (Cloud) to protect your -infrastructure by demonstrating how to register a server with your Teleport -cluster. - -You can provide secure access to more of your infrastructure through Teleport by -deploying one or more Teleport **Agents** and configuring role-based access -control for users. - -Agents proxy traffic to all of your infrastructure resources—including servers, -databases, Kubernetes clusters, cloud provider APIs, and Windows desktops. -Role-based access control ensures that only authorized users are allowed access -to those resources. - -To learn more information about deploying agents, see [Deploy Teleport Agents -with Terraform](admin-guides/infrastructure-as-code/terraform-starter/enroll-resources.mdx). diff --git a/docs/pages/get-started/access.mdx b/docs/pages/get-started/access.mdx new file mode 100644 index 0000000000000..0b0bf278478e4 --- /dev/null +++ b/docs/pages/get-started/access.mdx @@ -0,0 +1,65 @@ +--- +title: "Step 3 - Set Up Access Controls" +description: "Configure identity-based access controls in your Teleport cluster." +toc_max_heading_level: 2 +tags: + - get-started +--- + +import Button from '@site/src/components/Button'; +import Icon from '@site/src/components/Icon'; + +Teleport uses **Role-Based Access Control (RBAC)** to determine who can access what across your infrastructure. + +Every Teleport user is assigned one or more roles. 
These roles define the user's permissions, such as: + +- What infrastructure resources they can access +- Whether they can request temporary access +- Whether they can edit cluster settings or view session recordings + +You can assign roles when creating users or manage them later in the Teleport Web UI. + +## Preset roles + +You were able to connect to your Ubuntu server in Step 2 because of the preset roles assigned to your user account when you created your cluster. + +To see which roles you currently have: + +1. In the Teleport Web UI, navigate to **Zero Trust Access** > **Users** +2. Find your user, select **Options** > **Edit** +3. Under **User Roles**, you'll see the roles assigned to you + +You likely have at least the `access` and `editor` roles assigned. The `access` role is what granted you permission to SSH into your server. + +Teleport includes several preset roles to help you get started: + +| **Role** | **Description** | +|------------------------------|---------------------------------------------------------------------------------| +| `access` | Grants access to infrastructure resources. | +| `editor` | Allows editing cluster configuration (e.g., roles, connectors). | +| `auditor` | Grants read-only access to audit logs, events, and session recordings. | + +
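Behind each preset role name is an ordinary YAML resource that you can read and copy. As an illustrative sketch (the role name, login, and label values here are hypothetical, and available fields vary by Teleport version), a minimal role allowing SSH as `ubuntu` on servers labeled `env: staging` could look like:

```yaml
kind: role
version: v7
metadata:
  name: staging-access   # hypothetical role name
spec:
  allow:
    # OS users this role may log in as
    logins: ['ubuntu']
    # only servers carrying this label are accessible
    node_labels:
      env: staging
```

You could create a resource like this with `tctl create -f role.yaml` and assign it to users alongside the preset roles.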
+View the full list of preset roles including Enterprise + +(!docs/pages/includes/preset-roles-table.mdx!) + +
+ +You can view all available roles by navigating to **Zero Trust Access** > **Roles** in the Web UI. + +## Custom roles + +Organizations often require custom roles to enforce least-privilege access and follow internal security policies. By creating custom roles, you can align Teleport's access controls with your company's structure and security policies. + +For instructions on creating custom roles and assigning roles to users, follow along with our [Getting Started with Teleport Access Controls](../zero-trust-access/rbac-get-started/role-demo.mdx) demo guide. + +For mapping SSO users to roles, refer to our guide on [Configuring Single Sign-On](../zero-trust-access/sso/sso.mdx#how-teleport-uses-sso) for more information. + +## Next steps + +In the final step of the Getting Started guide, we'll cover how to monitor activity and use audit logs to strengthen security and ensure compliance. + +
+ +
\ No newline at end of file diff --git a/docs/pages/get-started/audit.mdx b/docs/pages/get-started/audit.mdx new file mode 100644 index 0000000000000..3907bcd6dbf76 --- /dev/null +++ b/docs/pages/get-started/audit.mdx @@ -0,0 +1,98 @@ +--- +title: "Step 4 - Monitor Audit Logs" +description: "Explore audit logging features in your Teleport cluster." +toc_max_heading_level: 2 +tags: + - get-started +--- + +import Icon from '@site/src/components/Icon'; + +Teleport logs cluster activity by emitting various events into its audit log. + +This can be viewed in the Teleport Web UI by clicking on **Audit** > **Audit Log** in the left sidebar. + +Here you'll see events like successful logins along with metadata such as event type, remote IP address, timestamp, and the identity involved. Click on **Details** to see the complete event information in JSON format. + +![View audit log](../../img/get-started/get-started-audit-log@2x.png) + +## View your SSH session recording + +In addition to recording structured events in its audit log, Teleport can also capture full session recordings for SSH, desktop, or Kubernetes shell sessions. + +Remember when you connected to your Ubuntu server via SSH in Step 2? That entire session was recorded. + +To view the recording: + +1. Click on **Audit** > **Session Recordings** in the left sidebar +2. Find your recent SSH session to your Ubuntu server +3. Click the **Play** button to watch a full playback of everything you typed and the server's responses + +This is especially useful for security audits, compliance requirements, and troubleshooting. + +For a more in-depth look at how Teleport's audit system works, including event types and descriptions, storage options, and exporting events, see our [Audit Events and Records guide](../reference/deployment/monitoring/audit.mdx). + + +## What's next? + +You've now explored the core steps of getting started with Teleport. However, this is only skimming the surface of what Teleport has to offer. 
+ +### Enroll other resource types + +In this guide, you enrolled an Ubuntu Linux server. Teleport supports many other resource types that you can enroll using similar workflows: + +, + to: "../../enroll-resources/application-access/", + name: "Applications", + }, + { + icon: , + to: "../../enroll-resources/server-access/", + name: "Linux Servers", + }, + { + icon: , + to: "../../enroll-resources/database-access/", + name: "Databases", + }, + { + icon: , + to: "../../enroll-resources/kubernetes-access/", + name: "Kubernetes Clusters", + }, + { + icon: , + to: "../../enroll-resources/desktop-access/", + name: "Windows Desktops", + }, + { + icon: , + to: "../../enroll-resources/auto-discovery/", + name: "Auto-Discovery of Resources", + }, + { + icon: , + to: "../../enroll-resources/application-access/cloud-apis/", + name: "Cloud Providers", + }, + { + icon: , + to: "../../enroll-resources/mcp-access/", + name: "MCP and AI Agents", + }, + ]} +/> + +### Explore additional features + +You can use the search above to locate content that best fits your specific situation or scroll through our many product use cases listed on our [documentation homepage](../index.mdx). 
+ +Here are a few places to start: + +- Explore just-in-time access with [Access Requests](../identity-governance/access-requests/access-requests.mdx) or longer-term, auditable access management with [Access Lists](../identity-governance/access-lists/access-lists.mdx) +- Automate onboarding using [Terraform](../zero-trust-access/infrastructure-as-code/terraform-provider/terraform-provider.mdx) or the [Kubernetes Operator](../zero-trust-access/infrastructure-as-code/teleport-operator/teleport-operator.mdx) +- Integrate Teleport with your [SSO provider](../zero-trust-access/sso/sso.mdx) or even configure Teleport as an [IDP](../identity-governance/idps/idps.mdx) +- Learn how to use [Teleport's API libraries](../zero-trust-access/api/api.mdx) to automate setup tasks such as registering agents with an external service discovery API, generating roles from an external RBAC system, or writing [Access Request plugins](../identity-governance/access-requests/plugins/plugins.mdx). \ No newline at end of file diff --git a/docs/pages/get-started/connect.mdx b/docs/pages/get-started/connect.mdx new file mode 100644 index 0000000000000..ff11f1be93139 --- /dev/null +++ b/docs/pages/get-started/connect.mdx @@ -0,0 +1,124 @@ +--- +title: "Step 2 - Connect Infrastructure" +description: "Enroll the resources you want to protect with your Teleport cluster." +toc_max_heading_level: 2 +tags: + - get-started +--- + +import Button from '@site/src/components/Button'; +import Icon from '@site/src/components/Icon'; + +After deploying your Teleport cluster, the next step is to enroll the infrastructure resources you want to secure. + +Teleport supports a wide range of resource types, including Linux servers, MCP servers, Kubernetes clusters, databases, internal web apps, and cloud provider APIs. You can enroll these through the Teleport Web UI's guided setup workflows.
+ +## Demo: Enroll an Ubuntu server + +To help you understand the enrollment process, let's walk through enrolling an Ubuntu Linux server as a hands-on example. + +**What you'll need for this demo:** + +- An Ubuntu Linux server that you can SSH into (physical server, VM or cloud instance like EC2, or homelab server), OR +- Docker installed on your workstation (to create a temporary demo server) + +If you don't have an Ubuntu server available, expand the section below for instructions to create a demo server using Docker. + +
+ Create a demo Ubuntu server with Docker + + To follow along with this guide, you can spin up a new Ubuntu server using Docker: + +1. Open a terminal on your workstation. + +1. Start a Docker container to create a server that you can enroll as a resource in your Teleport cluster: + + ```code + $ docker run --interactive --tty ubuntu:24.04 /bin/bash + ``` + + This command starts a new shell session in a container running Ubuntu 24.04. + +1. Run the following command to install `curl` and `telnet`: + + ```code + $ apt update && apt install -y curl telnet + ``` + + The Teleport installation script requires both `curl` and `telnet` to be installed. + +1. Create an alias for `sudo` since it's not installed on the Docker container: + + ```code + $ alias sudo="" + ``` + + The Teleport installation script uses `sudo`, so this alias allows the script to run without errors. + + Keep this shell open. You'll use it in the next steps. + +
+ +### Enroll your Ubuntu server + +You can enroll resources through the Teleport Web UI using guided setup workflows. + +![Enroll a resource](../../img/get-started/get-started-enroll-resources@2x.png) + +Follow these steps to enroll your Ubuntu Linux server: + +1. In the Teleport Web UI, click **Enroll New Resource**, or navigate to **Add New** > **Resource** in the left sidebar. +2. Select **Ubuntu** from the list of resource types. +3. (Optional) Add labels like `env: staging` or `cloud: aws` for filtering or restricting access later. +4. Run the install script that Teleport provides on your Ubuntu server (or in your Docker container) to install the Teleport SSH Service. +5. Wait for the installation to complete. You should see a notification in the Teleport Web UI confirming the agent was successfully detected. +6. Add `ubuntu` as an OS user you will use to connect to the server and test your connection to verify that the server is accessible before you finish the guided setup. + +Now you'll see your Ubuntu server listed under Resources in your Teleport Web UI. Click on the Connect button to SSH into the server. + + + Here are some troubleshooting tips: + - If the server doesn't appear, verify the Teleport Agent is running on the server with `systemctl status teleport`. + - Ensure the server can reach your Teleport cluster over the network (check firewalls and security groups). + - Check the agent logs for errors: `journalctl -u teleport`. + + +
+ How does passwordless SSH work? + +When you click **Connect** and choose the `ubuntu` user, here's what happens under the hood: + +1. **Short-lived certificate authentication** - Teleport's Auth Service uses the short-lived identity certificate it issued when you logged in (typically valid for 12 hours). This certificate is signed by Teleport's internal Certificate Authority (CA) and includes your roles and allowed logins, defining exactly what you can access. +2. **The server trusts Teleport's CA** - When the Ubuntu server was enrolled, the Teleport Agent configured it to trust certificates signed by Teleport's CA instead of static SSH keys. +3. **Certificate-based access** - Your client presents that certificate, the server validates it against the trusted CA, and access is granted, with authorization enforced by the roles encoded in your certificate. + +Teleport handles all the PKI complexity behind the scenes with automatic certificate issuance and expiration. + +
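If you want to see this trust model in miniature, you can reproduce it with plain OpenSSH tooling. The sketch below is for illustration only — Teleport automates all of these steps for you, and the identity `alice`, the `ubuntu` principal, the file names, and the 12-hour validity window are arbitrary choices:

```code
# Create a key pair that stands in for Teleport's certificate authority
$ ssh-keygen -q -t ed25519 -N '' -f ca_key

# Create the user's key pair
$ ssh-keygen -q -t ed25519 -N '' -f user_key

# "Log in": sign the user key with the CA, embedding an identity,
# an allowed login (principal), and a short validity window
$ ssh-keygen -s ca_key -I alice -n ubuntu -V +12h user_key.pub

# Inspect the resulting certificate (identity, principals, expiry)
$ ssh-keygen -Lf user_key-cert.pub
```

A server configured to trust `ca_key.pub` (via `TrustedUserCAKeys` in `sshd_config`) would then accept this certificate for the `ubuntu` login until it expires — the same flow Teleport automates, with your roles encoded into the certificate.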
+ +### Alternative: Enroll using the CLI + +{/* vale messaging.protocol-products = NO */} +While the Web UI provides a guided experience, you can also enroll servers using the Teleport CLI as demonstrated in the [Server Access Getting Started Guide](../enroll-resources/server-access/getting-started.mdx). +{/* vale messaging.protocol-products = YES */} + +## Enroll other resource types + +While this step focuses on Ubuntu servers for demonstration purposes, you can enroll many other resource types through the same Web UI or CLI workflow: + +- **Applications** - Internal web apps and cloud provider APIs +- **Databases** - PostgreSQL, MySQL, MongoDB, and more +- **Kubernetes clusters** - EKS, GKE, AKS, or self-hosted +- **Windows Desktops** - RDP access to Windows machines +- **And more** - See detailed enrollment guides for all resource types at the end of this guide + +## Next step: Set up access controls + +In the next step of the Getting Started guide, we'll discuss role-based access control (RBAC) so you can control **who** can connect to your newly enrolled server and **what actions** they can take once connected. + +
+ +
\ No newline at end of file diff --git a/docs/pages/get-started/deploy-cloud.mdx b/docs/pages/get-started/deploy-cloud.mdx new file mode 100644 index 0000000000000..b7622ea7b5328 --- /dev/null +++ b/docs/pages/get-started/deploy-cloud.mdx @@ -0,0 +1,31 @@ +--- +title: "Step 1 - Sign up for Teleport Enterprise Cloud" +description: "Sign up for Teleport Enterprise Cloud and deploy your hosted cluster." +toc_max_heading_level: 2 +tags: + - get-started + - platform-wide +--- + +import Button from '@site/src/components/Button'; + +Teleport Enterprise Cloud is the **fastest and easiest way to get started**. + +Infrastructure and certificates are managed for you. All that's left is deploying agents to enroll your resources. + +To get started, simply sign up for a 14-day free trial. We'll automatically provision and manage a fully featured Teleport cluster in a dedicated tenant with a unique `example.teleport.sh` domain. + +**To deploy your cluster:** + +1. Go to [goteleport.com/signup](https://goteleport.com/signup) +2. Follow the sign-up steps to launch your cluster + +When setup is finished, you'll have a production-grade Teleport cluster configured with secure defaults, automatic updates, and built-in scalability. + +## Next step: Connect infrastructure + +At this point, you've launched your own Teleport Enterprise Cloud cluster and created a user in the process. You can now move on to the next step of connecting your infrastructure. + +
+ +
\ No newline at end of file diff --git a/docs/pages/get-started/deploy-community.mdx b/docs/pages/get-started/deploy-community.mdx new file mode 100644 index 0000000000000..3704b1d38bb2f --- /dev/null +++ b/docs/pages/get-started/deploy-community.mdx @@ -0,0 +1,385 @@ +--- +title: "Step 1 - Deploy Teleport Community Edition" +description: "Deploy a self-hosted Teleport Community Edition cluster using the Linux demo." +tags: + - get-started + - platform-wide +--- + +import Button from '@site/src/components/Button'; + +This guide will walk you through deploying Teleport Community Edition. + +You'll spin up a single-instance Teleport cluster on a Linux server, ideal for small-scale demos or home lab environments. Once deployed, you can move ahead in the guide to connecting your infrastructure, setting up role-based access control (RBAC), and auditing your access. + +## How it works + +The Teleport cluster consists of two services: + +- **Teleport Auth Service:** The certificate authority for your cluster. It + issues certificates and conducts authentication challenges. The Auth Service + is typically inaccessible outside your private network. +- **Teleport Proxy Service:** The cluster frontend, which handles user requests, + forwards user credentials to the Auth Service, and communicates with Teleport + instances that enable access to specific resources in your infrastructure. + +![Architecture of the setup you will complete in this +guide](../../img/linux-server-diagram.png) + +You can read more about the architecture of Teleport in the [Core Concepts](../core-concepts.mdx) page. + +## Prerequisites + +You will need the following to deploy a demo Teleport cluster. If your +environment doesn't meet the prerequisites, you can get started with Teleport by +signing up for a [free trial of Teleport Enterprise +Cloud](https://goteleport.com/signup/) and jumping ahead to [Step 2: Connect infrastructure](./connect.mdx). 
+ +Also, if you want to get a feel for Teleport commands and capabilities without setting +up any infrastructure, take a look at the browser-based [Teleport +Labs](https://goteleport.com/labs). + +As for this guide, you can work through it with either a remote virtual machine (e.g., an Amazon +EC2 instance) or a local Docker container. Make sure you have met the following +requirements for your platform: + + + + +- A Linux host with only port `443` open to ingress traffic. You must be able to + install and run software on the host. Either configure access to the host via + SSH for the initial setup (and open an SSH port in addition to port `443`) or + enter the commands in this guide into an Amazon EC2 [user data + script](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html), + Google Compute Engine [startup + script](https://cloud.google.com/compute/docs/instances/startup-scripts), or + similar. + +You must also have **one** of the following: +- A registered domain name. +- An authoritative DNS nameserver managed by your organization, plus an existing + certificate authority. If using this approach, ensure that your browser is + configured to use your organization's nameserver. + + + +1. Install [mkcert](https://github.com/FiloSottile/mkcert) so you can set up a + local certificate authority and create a certificate for running the Teleport + Web UI with HTTPS. + +1. Install the mkcert CA: + + ```code + $ mkcert -install + ``` + +1. Create a directory on your workstation in which to place TLS credentials for + Teleport: + + ```code + $ mkdir teleport-tls + ``` + +1. Generate a certificate and private key for Teleport: + + ```code + $ cd teleport-tls + $ mkcert localhost + ``` + +1. Add the mkcert CA certificate to the `teleport-tls` directory so your Docker + container can access it: + + ```code + $ cp "$(mkcert -CAROOT)/rootCA.pem" . + ``` + +1. 
Start a local Docker container where you can follow the remaining + instructions in this guide: + + ```code + $ docker run -it -v .:/etc/teleport-tls -p 3080:443 ubuntu:22.04 + ``` + +1. Make sure `curl` is installed on your container: + + ```code + $ apt-get update && apt-get install -y curl + ``` + +1. On the container, move the mkcert CA certificate into the directory where + your container stores CA certs (installing `curl` sets this up for you). When + starting, Teleport verifies its TLS certificate against the CA: + + ```code + $ cp /etc/teleport-tls/rootCA.pem /etc/ssl/certs/mkcertCA.pem + ``` + + + + +Finally, you will need a multi-factor authenticator app such as +[Authy](https://authy.com/download/), [Google +Authenticator](https://www.google.com/landing/2step/), or +[1Password](https://support.1password.com/one-time-passwords/). + +## Step 1/4. Configure DNS + +If you are following this guide with a local Docker container, you can skip to +[Step 2](#step-24-set-up-teleport-on-your-linux-host). + +If you are following this guide with a virtual machine, set up two DNS `A` +records, each pointing to the IP address of your Linux host. Assuming +`teleport.example.com` is your domain name, set up records for: + +|Domain|Reason| +|---|---| +|`teleport.example.com`|Traffic to the Proxy Service from users and services.| +|`*.teleport.example.com`|Traffic to web applications registered with Teleport. Teleport issues a subdomain of your cluster's domain name to each application.| + +## Step 2/4. Set up Teleport on your Linux host + +In this step, you will log into your Linux host, download the Teleport binary, +generate a Teleport configuration file, and start the Teleport Auth Service, +Proxy Service, and SSH Service on the host. 
+ +### Install Teleport + +On your Linux host or container, run the following command to install the +Teleport binary: + +```code +$ curl (=teleport.teleport_install_script_url=) | bash -s (=teleport.version=) +``` + +### Configure Teleport + +Generate a configuration file for Teleport using the `teleport configure` command. +This command requires information about a TLS certificate and private key. + +The instructions depend on whether you are running Teleport on the public +internet, a local container, or a private network: + + + + + Let's Encrypt verifies that you control the domain name of your Teleport cluster + by communicating with the HTTPS server listening on port 443 of your Teleport + Proxy Service. + + You can configure the Teleport Proxy Service to complete the Let's Encrypt + verification process when it starts up. + + On the host where you will start the Teleport Auth Service and Proxy Service, + run the following `teleport configure` command. Assign + to the + domain name of your Teleport cluster and to + an email address used for notifications (you can use any domain): + + ```code + $ sudo teleport configure -o file \ + --acme --acme-email= \ + --cluster-name= + ``` + + Port 443 on your Teleport Proxy Service host must allow traffic from all sources. + + + + The Docker container you started while beginning this guide mounts the + `teleport-tls` directory in `/etc/`, including a TLS certificate and private + key for Teleport. + + On the container, run the following `teleport configure` command: + + ```code + $ teleport configure -o file \ + --cluster-name=localhost \ + --public-addr=localhost:443 \ + --cert-file=/etc/teleport-tls/localhost.pem \ + --key-file=/etc/teleport-tls/localhost-key.pem + ``` + + + On your Teleport host, place a valid private key and a certificate chain in `/var/lib/teleport/privkey.pem` + and `/var/lib/teleport/fullchain.pem` respectively. 
+ + The leaf certificate must have a subject that corresponds to the domain of your Teleport host, e.g., `*.teleport.example.com`. + + On the host where you will start the Teleport Auth Service and Proxy Service, + run the following `teleport configure` command. Assign to the domain name of your Teleport cluster. + + ```code + $ sudo teleport configure -o file \ + --cluster-name= \ + --public-addr=:443 \ + --cert-file=/var/lib/teleport/fullchain.pem \ + --key-file=/var/lib/teleport/privkey.pem + ``` + + + +### Start Teleport + +1. Start Teleport on your virtual machine or container by following the + instructions below: + + + + + Enable and start the Teleport systemd service: + + ```code + $ sudo systemctl enable teleport + $ sudo systemctl start teleport + ``` + + + + + Run the following command: + + ```code + $ teleport start --config="/etc/teleport.yaml" + ``` + + + + +1. Access the Teleport Web UI via HTTPS at the domain you created earlier at + and accept the terms of using Teleport + Community Edition. + + If you are running Teleport on a local Docker container, visit + https://localhost:3080. + + You should see a welcome screen similar to the following: + + ![Teleport Welcome Screen](../../img/quickstart/welcome.png) + + + Here are some troubleshooting tips: + - Ensure that Teleport is running on your host (`systemctl status teleport` on a VM, or check that the `teleport start` command is still running in your container). + - Verify that port 443 (or 3080 for Docker) is accessible and not blocked by a firewall. + - Check the Teleport logs for errors: `sudo journalctl -u teleport` on a VM, or review the terminal output in your container. + + +## Step 3/4. Create a Teleport user and set up multi-factor authentication + +In this step, we'll create a new Teleport user, `teleport-admin`, which is +allowed to log into SSH hosts as any of the principals `root`, `ubuntu`, or +`ec2-user`. + +1. 
If you are following this guide on a local container, open another terminal and + access your container: + + ```code + $ docker exec -it /bin/bash + ``` + +1. On your VM or container, run the following command (remove `sudo` if using a + local container). `tctl` is a client tool for configuring the Teleport Auth + Service: + + ```code + $ sudo tctl users add teleport-admin --roles=editor,access --logins=root,ubuntu,ec2-user + ``` + + The command prints a message similar to the following: + + ```text + User "teleport-admin" has been created but requires a password. Share this URL with the user to complete user setup, link is valid for 1h: + https://teleport.example.com:443/web/invite/123abc456def789ghi123abc456def78 + + NOTE: Make sure teleport.example.com:443 points at a Teleport proxy which users can access. + ``` + + If using a local container, replace the host and port with `localhost:3080`. + +1. Visit the provided URL in order to create your Teleport user. + + + + The users that you specify in the `logins` flag (e.g., `root`, `ubuntu` and + `ec2-user` in our examples) must exist on your Linux host. Otherwise, you + will get authentication errors later in this tutorial. + + If a user does not already exist, you can create it with `adduser ` or + use [host user creation](../enroll-resources/server-access/guides/host-user-creation.mdx). + + If you do not have the permission to create new users on the Linux host, run + `tctl users update teleport-admin --logins=root,ubuntu,ec2-user,$(whoami)` to explicitly allow Teleport to + authenticate as the user that you have currently logged in as. + + + +1. Teleport enforces the use of multi-factor authentication by default. It + supports one-time passwords (OTP) and multi-factor authenticators (WebAuthn). + In this guide, you will need to enroll an OTP authenticator application using + the QR code on the Teleport welcome screen. + +
+Logging in via the CLI + +In addition to Teleport's Web UI, you can access resources in your +infrastructure via the `tsh` client tool. + +Install `tsh` on your local workstation: + +(!docs/pages/includes/install-tsh.mdx!) + +Log in to receive short-lived certificates from Teleport. Replace + with your Teleport cluster's public address +as configured above: + +```code +$ tsh login --proxy= --user=teleport-admin +> Profile URL: https://teleport.example.com:443 + Logged in as: teleport-admin + Cluster: teleport.example.com + Roles: access, editor + Logins: root, ubuntu, ec2-user + Kubernetes: enabled + Valid until: 2022-04-26 03:04:46 -0400 EDT [valid for 12h0m0s] + Extensions: permit-agent-forwarding, permit-port-forwarding, permit-pty +``` + +
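+
+After logging in, you can inspect your active session at any time with `tsh status`, which prints the same profile details shown above:
+
+```code
+$ tsh status
+```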
+ +## Step 4/4. Access your server with Teleport + +Now that you have Teleport running and a user configured, you can access your Linux server through the Teleport Web UI (it will be automatically enrolled since Teleport is running on it). + +![Teleport Web UI showing an SSH server](../../img/new-ssh-server.png) + +Click on `Connect` to access it via the web-based terminal, or use Teleport's [`tsh` CLI tool](../connect-your-client/teleport-clients/tsh.mdx) to SSH into it: + +```code +$ tsh ssh root@ +``` + + + Here are some troubleshooting tips: + - Ensure the user you created has the correct logins assigned (e.g., `root`, `ubuntu`, `ec2-user`) that match existing OS users on the host. + - If you receive a permission denied error, verify that the login user exists on the Linux host or create it with `adduser `. + - Check that your MFA device is properly enrolled and that you're completing the authentication challenge. + + +## Next step: deploy Teleport Agents + +At this point, you've launched your own Teleport Community Edition cluster and created a user. You can now move on to the next step of connecting your infrastructure. + +
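+
+As you connect more infrastructure, you can list the SSH servers your roles let you see directly from the CLI; initially, this should show just the host where you installed Teleport:
+
+```code
+$ tsh ls
+```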
+ +
+ diff --git a/docs/pages/get-started/get-started.mdx b/docs/pages/get-started/get-started.mdx new file mode 100644 index 0000000000000..5816a579f8f20 --- /dev/null +++ b/docs/pages/get-started/get-started.mdx @@ -0,0 +1,59 @@ +--- +title: "Get Started with Teleport" +description: "Learn how to deploy a cluster, connect infrastructure, set up access controls, and review audit logs." +template: no-toc +tags: + - get-started +--- + +import Button from '@site/src/components/Button'; +import Icon from '@site/src/components/Icon'; +import TeleportEditionsGrid from '@site/src/components/Pages/GetStarted/TeleportEditionsGrid'; +import TeleportEditionCard from '@site/src/components/Pages/GetStarted/TeleportEditionCard'; +import FeatureMatrixBanner from '@site/src/components/Pages/GetStarted/FeatureMatrixBanner'; + +Teleport is an identity-based access platform that secures servers, Kubernetes clusters, databases, internal applications, and desktops using short-lived certificates, detailed audit logging, and fine-grained role-based access controls tied to your SSO provider (e.g., Okta, GitHub, Google Workspace). + +This guide walks you through four essential steps to get you up and running. + +## Choose your Teleport edition + +There are several editions of Teleport that differ in their feature sets, deployment options, and support. + +Before choosing one, ask yourself: + +- Am I looking to evaluate Teleport quickly or explore the full feature set without provisioning infrastructure? → **Enterprise Cloud** +- Do I need to deploy and manage Teleport in my own environment for compliance reasons? → **Enterprise Self-Hosted** +- Do I prefer to self-host Teleport locally to test features or use it for free? → **Community Edition** + + + +

We handle infrastructure, upgrades, and certificates. Each customer receives a dedicated .teleport.sh subdomain (e.g., example.teleport.sh).

+
+ + +

A paid plan for organizations requiring strict compliance, with features like FIPS support. A valid license is required; choose this option to get in touch with our team.

+
+ + +

Includes core features like SSH, Kubernetes, database, and application access. Best for hobbyists, homelabs, or small teams securing small-scale infrastructure.

+
+
diff --git a/docs/pages/identity-governance/access-lists/access-lists.mdx b/docs/pages/identity-governance/access-lists/access-lists.mdx
new file mode 100644
index 0000000000000..665d7493de3fe
--- /dev/null
+++ b/docs/pages/identity-governance/access-lists/access-lists.mdx
@@ -0,0 +1,22 @@
+---
+title: Access Lists
+description: Use Access Lists in Teleport
+template: "no-toc"
+sidebar_position: 3
+tags:
+  - identity-governance
+enterprise: Identity Governance
+---
+
+Access Lists allow Teleport users to be granted long-term access to resources
+managed within Teleport. With Access Lists, administrators and Access List
+owners can regularly audit and control membership and the specific roles and
+traits it grants, which tie easily back into Teleport's existing RBAC system.
+
+[Getting Started with Access Lists](guide.mdx)
+
+[Nested Access Lists](nested-access-lists.mdx)
+
+[Reviewing Access Requests](reviewing-access-requests.mdx)
+
+[Access List Reference](../../reference/access-controls/access-lists.mdx)
diff --git a/docs/pages/identity-governance/access-lists/guide.mdx b/docs/pages/identity-governance/access-lists/guide.mdx
new file mode 100644
index 0000000000000..0566ae8dd1a6f
--- /dev/null
+++ b/docs/pages/identity-governance/access-lists/guide.mdx
@@ -0,0 +1,92 @@
+---
+title: Getting Started with Access Lists
+sidebar_label: Getting Started
+description: Learn how to use Access Lists to manage and audit long-lived access to Teleport resources.
+tags:
+  - get-started
+  - identity-governance
+  - privileged-access
+enterprise: Identity Governance
+---
+
+{/* lint disable page-structure remark-lint */}
+
+This guide will help you:
+- Create an Access List
+- Assign a member to it
+- Verify permissions granted through the list membership
+
+## Prerequisites
+
+(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!)
+
+- (!docs/pages/includes/tctl.mdx!)
+- A user with the preset `editor` role, which has permissions to create Access Lists.
+
+## Step 1/4. Set up the Application Service on the cluster for testing
+
+One of the easiest ways to get resources on the cluster for testing is to set up a Teleport Application Service
+instance with the debugging application enabled. To do this, add the following config to your `teleport.yaml`
+configuration:
+
+```yaml
+app_service:
+  enabled: true
+  debug_app: true
+```
+
+Then restart Teleport. The "dumper" app should show up in the resource list.
+
+![Debug app](../../../img/access-controls/access-lists/debug-app.png)
+
+## Step 2/4. Create a test user
+
+We need to create a simple test user that has only the `requester` role, which grants no default access
+to anything within a cluster. This user will only be used for the purposes of this guide. If you would
+rather use your own user, skip to the next step.
+
+To create a user, first navigate to Zero Trust Access in the left pane and click Users. Click on "Enroll Users",
+fill in `test-user` as the name, and select `requester` as the role.
+
+![Create Test User](../../../img/access-controls/access-lists/create-test-user.png)
+
+Click "Save," and then navigate to the provided URL in order to set up the credentials for your test user.
+Try logging into the cluster with the test user to verify that no resources show up on the resources page.
+
+## Step 3/4. Create an Access List
+
+Next, we'll create a simple Access List that will grant the `access` role to its members.
+Log in as the administrative user mentioned in the prerequisites. Click on "Add New" in the left pane, and then "Create an Access List."
+
+![Navigate to create new Access List](../../../img/access-controls/access-lists/create-new-access-list.png)
+
+Here, fill in a title and description, and grant the `access` role. Select a date in the future for the next
+review date.
+
+![Fill out Access List Fields](../../../img/access-controls/access-lists/fill-out-access-list-fields.png)
+
+Under "List Owners" select `editor` as a required role, then add your administrative user under "Add
+Eligible List Owners." Selecting `editor` as a required role ensures that any owner of the list
+must have the `editor` role in order to manage the list. If the user loses this role later, they will
+not be able to manage the list, though they will still be reflected as an owner.
+
+![Select an owner](../../../img/access-controls/access-lists/select-owner.png)
+
+Under "Members" select `requester` as a required role, then add your test user to the Access List. Similar to
+the owner requirement, this ensures that any member of the list must have the `requester` role in order to
+be granted the access described in this list. If the user loses this role later, they will not be granted the
+roles or traits described in the Access List.
+
+![Add a member](../../../img/access-controls/access-lists/add-member.png)
+
+Finally, click "Create Access List" at the bottom of the page.
+
+## Step 4/4. Log in as the test user
+
+Log in as the test user again. You should now see the dumper application within
+the cluster and be able to interact with it as expected.
+
+## Next Steps
+
+- Familiarize yourself with the CLI tooling available for managing Access Lists in the [reference](../../reference/access-controls/access-lists.mdx).
+- Learn how to work with nested Access Lists in the [nested Access Lists guide](./nested-access-lists.mdx).
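+
+As a quick preview of that CLI tooling, you can inspect the Access List you created in this guide with `tctl`. This is only a sketch; the exact subcommands and their output are covered in the reference:
+
+```code
+# List all Access Lists in the cluster
+$ tctl acl ls
+# Show the members of a specific Access List by name
+$ tctl acl users ls <access-list-name>
+```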
diff --git a/docs/pages/admin-guides/access-controls/access-lists/nested-access-lists.mdx b/docs/pages/identity-governance/access-lists/nested-access-lists.mdx similarity index 87% rename from docs/pages/admin-guides/access-controls/access-lists/nested-access-lists.mdx rename to docs/pages/identity-governance/access-lists/nested-access-lists.mdx index 057c4335c75de..5809c722e2716 100644 --- a/docs/pages/admin-guides/access-controls/access-lists/nested-access-lists.mdx +++ b/docs/pages/identity-governance/access-lists/nested-access-lists.mdx @@ -1,6 +1,11 @@ --- title: Nested Access Lists description: Learn how to use nested Access Lists to manage complex permissions and grant inheritance in Teleport. +tags: + - how-to + - identity-governance + - privileged-access +enterprise: Identity Governance --- Nested Access Lists allow inclusion of an Access List as a member or owner of another Access List. @@ -29,11 +34,11 @@ Inheritance is recursive – members of "Engineering Team" can themselves be Acc with their own members, and so on. However, circular nesting is not allowed, and nesting is limited to a maximum depth of 10 levels. -For more information, see the [Access Lists reference](../../../reference/access-controls/access-lists.mdx). +For more information, see the [Access Lists reference](../../reference/access-controls/access-lists.mdx). ## Prerequisites -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise (v17.0.1 or higher)"!) - (!docs/pages/includes/tctl.mdx!) - A user with the default `editor` role or equivalent permissions (ability to read, create, and manage Access Lists). @@ -79,5 +84,5 @@ the "dumper" app should be visible to the user. ## Next Steps -- Review the [Access Lists reference](../../../reference/access-controls/access-lists.mdx) for more detailed information on Access Lists' nesting and inheritance. 
-- Learn how nested Access Lists work with Okta/SCIM synchronization in [Synchronization with Okta and SCIM](../okta/app-and-group-sync.mdx). +- Review the [Access Lists reference](../../reference/access-controls/access-lists.mdx) for more detailed information on Access Lists' nesting and inheritance. +- Learn how nested Access Lists work with Okta/SCIM synchronization in [Synchronization with Okta and SCIM](../integrations/okta/app-and-group-sync.mdx). diff --git a/docs/pages/identity-governance/access-lists/reviewing-access-requests.mdx b/docs/pages/identity-governance/access-lists/reviewing-access-requests.mdx new file mode 100644 index 0000000000000..cdf24a412f5a9 --- /dev/null +++ b/docs/pages/identity-governance/access-lists/reviewing-access-requests.mdx @@ -0,0 +1,139 @@ +--- +title: Reviewing Access Requests +description: Learn how Access List owners review Access Requests +tags: + - get-started + - identity-governance + - privileged-access + - access-requests +enterprise: Identity Governance +--- + +Access List owners can be automatically assigned as suggested reviewers to +resource-based Access Requests that include resources granted by their Access +List. + +In this guide we will walk through an example and the configuration of this +use-case. + +## How it works + +We will create an Access List that grants users access to certain resources and +allows its owners to review Access Requests to those resources. + +Then, we will issue an Access Request to those resources to verify that the list +owners are prepopulated as suggested reviewers. + +## Prerequisites + +- Teleport cluster with a connected resource e.g. an SSH node. +- Teleport user (`admin` in this guide) with an `editor` role to perform configuration. +- Teleport user (`alice` in this guide) acting as an Access Request reviewer. +- Teleport user (`bob` in this guide) acting as a low-privileged requester user. + +## Step 1/5. 
Create roles
+
+As an `admin` user, let's create three roles:
+
+- Role that grants access to SSH nodes with a label `env:prod`
+- Role that allows users to request access to SSH nodes with that label
+- Role that allows users to review Access Requests for SSH nodes with that label
+
+The `ssh-access` role allows access to SSH nodes with the label `env: prod`:
+
+```yaml
+kind: role
+version: v8
+metadata:
+  name: ssh-access
+spec:
+  allow:
+    logins:
+    - ubuntu
+    node_labels:
+      'env': 'prod'
+```
+
+The `ssh-access-requester` role allows users to request access to such SSH nodes:
+
+```yaml
+kind: role
+version: v8
+metadata:
+  name: ssh-access-requester
+spec:
+  allow:
+    request:
+      search_as_roles:
+      - ssh-access
+```
+
+The `ssh-access-reviewer` role allows users to review such Access Requests:
+
+```yaml
+kind: role
+version: v8
+metadata:
+  name: ssh-access-reviewer
+spec:
+  allow:
+    review_requests:
+      roles:
+      - ssh-access
+      preview_as_roles:
+      - ssh-access
+```
+
+## Step 2/5. Assign requester role
+
+As an `admin` user, assign the `ssh-access-requester` role to `bob`.
+
+![bob user roles](../../../img/identity-governance/access-lists/bob-user-roles.png)
+
+This role will allow `bob` to issue Access Requests to SSH nodes with `env: prod`
+labels.
+
+## Step 3/5. Create an Access List
+
+Now, as an `admin` user, let's create an Access List that grants access to the
+SSH nodes (via the `ssh-access` member role grant) and allows its owners to review
+requests to these SSH nodes (via the `ssh-access-reviewer` owner role grant).
+
+On the Identity Governance / Access Lists web UI page, select "Create New Access
+List" and create a new one with the following parameters:
+
+- List name: `SSH Access`
+- Permissions granted to list owners: `ssh-access-reviewer`
+- Permissions granted to list members: `ssh-access`
+- List owner: `alice`
+
+![access list owners](../../../img/identity-governance/access-lists/ssh-access-list-owners.png)
+
+You can fill out the rest of the parameters as desired.
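+
+If you prefer the CLI over the web UI, the three role manifests from Step 1 can also be applied with `tctl` (assuming each manifest is saved to the correspondingly named file):
+
+```code
+$ tctl create -f ssh-access.yaml
+$ tctl create -f ssh-access-requester.yaml
+$ tctl create -f ssh-access-reviewer.yaml
+```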
+
+## Step 4/5. Submit an Access Request
+
+Once you log into Teleport as `bob`, you should be able to see your SSH node(s)
+as requestable resources.
+
+On the Access Request checkout dialog, you should see that `alice` has been
+prepopulated as a suggested reviewer because she is an owner of the access
+list that grants access to the requested SSH node.
+
+![alice is suggested reviewer](../../../img/identity-governance/access-lists/bob-submit-request.png)
+
+Submit the request.
+
+## Step 5/5. Review the Access Request
+
+Once the request is submitted, log in as `alice` and go to the Identity Governance /
+Access Requests page to see `bob`'s pending request and review it:
+
+![alice reviews the request](../../../img/identity-governance/access-lists/alice-review-request.png)
+
+That's it! `alice`, as an owner of the "SSH Access" list, has successfully reviewed
+`bob`'s request to an SSH node that's granted by her Access List.
+
+## Next steps
+
+- Learn more about [Resource Access Requests](../access-requests/resource-requests.mdx).
diff --git a/docs/pages/identity-governance/access-lists/terraform.mdx b/docs/pages/identity-governance/access-lists/terraform.mdx
new file mode 100644
index 0000000000000..86582d7fbb38a
--- /dev/null
+++ b/docs/pages/identity-governance/access-lists/terraform.mdx
@@ -0,0 +1,350 @@
+---
+title: Managing Access Lists with Terraform
+sidebar_label: Managing with Terraform
+description: Learn how to manage Access Lists and their members with Terraform.
+tags:
+  - infrastructure-as-code
+  - how-to
+  - identity-governance
+  - privileged-access
+enterprise: Identity Governance
+---
+
+You can manage Access Lists and their members using Terraform.
This guide will help you:
+
+- Understand how to manage Access Lists with Terraform
+- Understand the difference between Access Lists managed by Teleport vs Terraform
+- Understand how to manage Access List members with Terraform
+
+
+## How it works
+
+We will create a simple organization structure represented with Access Lists using
+Terraform. It will consist of:
+
+- An Access List representing a team
+- An Access List representing access to a resource (staging database)
+- Assigning the team access to the resource
+- (Optionally) assigning Okta and/or Entra ID group access to the resource
+
+Access List management with Terraform is done using two Terraform resources:
+`teleport_access_list` and `teleport_access_list_member`.
+
+There are some nuances to member management. This is where the concept of *Access List type* comes
+into play. There are two key Access List types: *default* and *static*.
+
+### Access Lists of default type
+
+A default Access List has the `.spec.type` field unset, or set to null or an empty string. These are
+regular Access Lists, like those created in the web UI. As a consequence:
+
+- They require auditing. Even though `.spec.audit` is not required to be specified in the
+  Terraform resource, the default value will be set, and those lists have to be periodically
+  reviewed in the web UI.
+- Their members can only be managed with the web UI and `tctl`. The source of truth
+  for these lists is Teleport, so you cannot manage their members with Terraform.
+
+
+### Access Lists of static type
+
+Access Lists of type static have `.spec.type` set to "static". They differ from the default Access Lists:
+
+- They don't support audit. The `.spec.audit` field can be set, but it's ignored by Teleport.
+- Their members can be managed by Terraform.
+
+For static Access Lists, the source of truth for the membership is external (provisioned by
+Terraform), so audit is not supported.
The members should instead be reviewed at the source, that is, in the Terraform data sources or
+manifests, and this review process has to happen outside of Teleport.
+
+
+## Prerequisites
+
+(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise (v18.2.0 or higher)"!)
+- Teleport [Terraform provider set up with credentials](../../zero-trust-access/infrastructure-as-code/terraform-provider/terraform-provider.mdx)
+
+
+## Step 1/6. Create an Access List for a team
+
+First, let's define a role, because Access List validation requires a role to be specified. Because
+this Access List represents a team and shouldn't grant any permissions on its own, let's create a
+noop role for it:
+
+```hcl
+resource "teleport_role" "noop" {
+  version = "v7"
+  metadata = {
+    name = "noop"
+  }
+}
+```
+
+And a static Access List representing a dev team with some user members:
+
+```hcl
+resource "teleport_access_list" "developers" {
+  depends_on = [ teleport_role.noop ] # roles can't be deleted if they are in use
+  header = {
+    version = "v1"
+    metadata = {
+      name = "developers"
+    }
+  }
+  spec = {
+    type = "static" # type must be set to "static" to manage members with Terraform
+    title = "Developers"
+    description = "Dev team."
+
+    owners = [
+      { name = "admin" },
+    ]
+    grants = {
+      roles = ["noop"]
+    }
+  }
+}
+
+resource "teleport_access_list_member" "developers_alice" {
+  header = {
+    version = "v1"
+    metadata = {
+      name = "alice" # Teleport user name
+    }
+  }
+  spec = {
+    access_list = teleport_access_list.developers.id # assign to "Developers" Access List
+    membership_kind = 1 # 1 for "MEMBERSHIP_KIND_USER", 2 for "MEMBERSHIP_KIND_LIST"
+  }
+}
+
+resource "teleport_access_list_member" "developers_bob" {
+  header = {
+    version = "v1"
+    metadata = {
+      name = "bob"
+    }
+  }
+  spec = {
+    access_list = teleport_access_list.developers.id
+    membership_kind = 1 # 1 for "MEMBERSHIP_KIND_USER", 2 for "MEMBERSHIP_KIND_LIST"
+  }
+}
+```
+
+Note that the Developers list is just a container for the users who belong to a single team in
+the org. On its own it doesn't grant any permissions to its members, as it assigns only the "noop"
+role, but it is possible to nest this list in another Access List to inherit that list's permissions.
+This will be shown in a follow-up step.
+
+
+## Step 2/6. Create an Access List granting database access
+
+Now let's create an Access List representing access:
+
+```hcl
+resource "teleport_role" "db_access_staging" {
+  version = "v7"
+  metadata = {
+    name = "db_access_staging"
+  }
+  spec = {
+    allow = {
+      db_users = ["viewer"]
+      db_names = ["*"]
+      db_labels = {
+        env = "staging"
+      }
+    }
+  }
+}
+
+resource "teleport_access_list" "db_access_staging" {
+  header = {
+    version = "v1"
+    metadata = {
+      name = "db_access_staging"
+    }
+  }
+  spec = {
+    type = "static" # static, so members can be assigned with Terraform
+    title = "Staging DBs access"
+    description = "Read-only access to staging databases"
+    owners = [
+      { name = "admin" },
+    ]
+    grants = {
+      roles = [teleport_role.db_access_staging.id]
+    }
+  }
+}
+```
+
+
+## Step 3/6. 
Give access to the team
+
+Knowing that nested Access Lists inherit grants of the parent Access List, giving the "Developers" team
+"Staging DBs access" is now as simple as creating a nested Access List membership:
+
+```hcl
+resource "teleport_access_list_member" "db_access_staging_developers" {
+  header = {
+    version = "v1"
+    metadata = {
+      name = teleport_access_list.developers.id
+    }
+  }
+  spec = {
+    access_list = teleport_access_list.db_access_staging.id
+    membership_kind = 2 # 1 for "MEMBERSHIP_KIND_USER", 2 for "MEMBERSHIP_KIND_LIST"
+  }
+}
+```
+
+Once the "Developers" Access List is a member of the "Staging DBs access" Access List, the "Developers"
+list inherits the grants of "Staging DBs access", and in effect developers are granted access to the
+staging databases.
+
+
+## Step 4/6. Give access to a team managed in the web UI
+
+This step will show how Access Lists of the default type can be employed in the structure. As a
+quick reminder, the default Access Lists are supposed to be managed with the web UI and they
+require periodic reviews performed by the list owners. The default Access List members cannot be
+managed with Terraform, but the lists themselves can be created/modified and imported to Terraform.
+
+Let's create a default Access List representing another team, "DB Administrators", reusing the "noop"
+role created in the previous steps:
+
+```hcl
+resource "teleport_access_list" "dbas" {
+  depends_on = [ teleport_role.noop ] # roles can't be deleted if they are in use
+  header = {
+    version = "v1"
+    metadata = {
+      name = "dbas"
+    }
+  }
+  spec = {
+    # type is skipped, this is a regular Access List
+    title = "DBAs"
+    description = "DB Administrators team."
+
+    owners = [
+      { name = "admin" },
+    ]
+    grants = {
+      roles = ["noop"]
+    }
+    # audit settings can be skipped, the values below are the defaults
+    audit = {
+      recurrence = {
+        frequency = 6 # every 6 months
+        day_of_month = 1 # first day of the month
+      }
+    }
+  }
+}
+```
+
+The new "DBAs" team members can be managed using the web UI. Teleport user `admin`, as the Access List
+owner, will be automatically granted permissions to manage members of this list. `admin` will also be
+required to review this list in the web UI every 6 months, as defined in the *audit* section.
+Because Access List members are separate resources in Teleport, modifying the members won't affect
+the Teleport state for the `"teleport_access_list" "dbas"` resource.
+
+Now, let's give "DBAs" the "Staging DBs access":
+
+```hcl
+resource "teleport_access_list_member" "db_access_staging_dbas" {
+  header = {
+    version = "v1"
+    metadata = {
+      name = teleport_access_list.dbas.id
+    }
+  }
+  spec = {
+    access_list = teleport_access_list.db_access_staging.id
+    membership_kind = 2 # 1 for "MEMBERSHIP_KIND_USER", 2 for "MEMBERSHIP_KIND_LIST"
+  }
+}
+```
+
+
+## Step 5/6. (optional) Give access to an Okta group
+
+The Okta integration allows [synchronizing Okta groups and apps](../integrations/okta/app-and-group-sync.mdx) as
+Teleport Access Lists.
+
+To give permissions to an Okta group represented as an Access List in Teleport, navigate to the Okta
+Access List in the web UI, and from its URL (e.g.
+`https://example.teleport.sh/web/accesslists/00gt3c8z9ukePm5uF697`) copy the last path segment (in
+this case `00gt3c8z9ukePm5uF697`) - this is the name of the Okta Access List resource in Teleport.
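+
+If you would rather not copy the name out of a URL, the same resource name should also be visible when listing Access Lists from the CLI (assuming `tctl` is configured against your cluster):
+
+```code
+$ tctl acl ls
+```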
+
+Now to give the Okta group members "Staging DBs access" permissions, create the nested Access List
+membership:
+
+```hcl
+resource "teleport_access_list_member" "db_access_staging_okta_group" {
+  header = {
+    version = "v1"
+    metadata = {
+      name = "00gt3c8z9ukePm5uF697"
+    }
+  }
+  spec = {
+    access_list = teleport_access_list.db_access_staging.id
+    membership_kind = 2 # 1 for "MEMBERSHIP_KIND_USER", 2 for "MEMBERSHIP_KIND_LIST"
+  }
+}
+```
+
+## Step 6/6. (optional) Give access to an Entra ID group
+
+The Entra ID integration allows [synchronizing groups](../integrations/entra-id/entra-id.mdx) as
+Teleport Access Lists.
+
+To give permissions to an Entra ID group represented as an Access List in Teleport, navigate to the
+Entra ID Access List in the web UI, and from its URL (e.g.
+`https://example.teleport.sh/web/accesslists/b1a6a594-a4ac-51d1-a6f6-1746a413a79a`) copy the last
+path segment. In this case `b1a6a594-a4ac-51d1-a6f6-1746a413a79a` is the name of the Entra
+ID Access List resource in Teleport.
+
+Now to give the Entra ID group members "Staging DBs access" permissions, create the nested Access
+List membership:
+
+```hcl
+resource "teleport_access_list_member" "db_access_staging_entra_id_group" {
+  header = {
+    version = "v1"
+    metadata = {
+      name = "b1a6a594-a4ac-51d1-a6f6-1746a413a79a"
+    }
+  }
+  spec = {
+    access_list = teleport_access_list.db_access_staging.id
+    membership_kind = 2 # 1 for "MEMBERSHIP_KIND_USER", 2 for "MEMBERSHIP_KIND_LIST"
+  }
+}
+```
+
+## FAQ
+
+### Can I import my existing Access List into Terraform and start managing its members with IaC?
+
+Usually not. Any Access List can be imported, but Access Lists created in the UI
+are not of static type, so their members can't be managed with Terraform. The same applies to Access
+Lists created by an integration (e.g. [Okta](../integrations/okta/app-and-group-sync.mdx) or [Entra
+ID](../integrations/entra-id/entra-id.mdx)).
An existing static list created by another Terraform setup (with
+a different state) could be imported and its members then managed by Terraform, but that's
+a rather rare use case.
+
+The recommended solution is to create a static Access List managed with Terraform and make this
+list a member of the existing Access List. That way the members of the static Access List managed
+with Terraform will inherit the grants of the existing Access List.
+
+
+## Next Steps
+
+- Learn how grant inheritance works with [nested Access Lists](./nested-access-lists.mdx).
+- Familiarize yourself with the
+  [teleport_access_list](../../reference/infrastructure-as-code/terraform-provider/resources/access_list.mdx) and
+  [teleport_access_list_member](../../reference/infrastructure-as-code/terraform-provider/resources/access_list_member.mdx)
+  Terraform resources.
+
diff --git a/docs/pages/admin-guides/access-controls/access-monitoring.mdx b/docs/pages/identity-governance/access-monitoring.mdx
similarity index 95%
rename from docs/pages/admin-guides/access-controls/access-monitoring.mdx
rename to docs/pages/identity-governance/access-monitoring.mdx
index 0289d8f2e4662..557ee38b5e5f3 100644
--- a/docs/pages/admin-guides/access-controls/access-monitoring.mdx
+++ b/docs/pages/identity-governance/access-monitoring.mdx
@@ -1,6 +1,12 @@
 ---
-title: Getting Started with Access Monitoring
+title: Access Monitoring
 description: Learn how to use Access Monitoring.
+sidebar_position: 6
+tags:
+  - how-to
+  - identity-governance
+  - access-risks
+enterprise: Identity Governance
 ---
 
 Access Monitoring allows users to understand and analyze the access patterns in a Teleport cluster.
@@ -22,8 +28,8 @@ Users are able to write their own custom access monitoring queries by querying t
## Prerequisites -- Teleport v14 or later. -- For self-hosted Teleport the [Amazon Athena Backend](../../reference/backends.mdx) is required. +(!docs/pages/includes/edition-prereqs-tabs.mdx!) +- For self-hosted Teleport the [Amazon Athena Backend](../reference/deployment/backends.mdx) is required. ### Configuration @@ -243,7 +249,7 @@ WHERE command LIKE '%/etc/passwd%'; ``` -![exec passwd](../../../img/access-monitoring/exec_passwd.png) +![exec passwd](../../img/access-monitoring/exec_passwd.png) - Select the count of unique IP addresses associated with each user cert over different event dates: ```sql @@ -343,7 +349,7 @@ ORDER BY event_date ``` -![privileged access report](../../../img/access-monitoring/privileged_access_report.png) +![privileged access report](../../img/access-monitoring/privileged_access_report.png) **Suggestion:** Set up Access Requests, Device Trust and per-session MFA. @@ -475,7 +481,7 @@ ORDER BY event_date ``` -**Suggestion:** Use short lived certificates less than a working day. Check out Machine ID to get short lived certificates for your automation. +**Suggestion:** Use short lived certificates less than a working day. Check out Machine & Workload Identity to get short lived certificates for your automation. ### Long-lived join tokens @@ -499,7 +505,7 @@ ORDER BY event_date ``` -**Suggestion:** Use short-lived tokens to reduce risk of compromise or [delegated](../../reference/join-methods.mdx) joining when possible. +**Suggestion:** Use short-lived tokens to reduce risk of compromise or [delegated](../reference/deployment/join-methods.mdx) joining when possible. 
### Root SSH sessions diff --git a/docs/pages/admin-guides/access-controls/access-requests/access-request-configuration.mdx b/docs/pages/identity-governance/access-requests/access-request-configuration.mdx similarity index 87% rename from docs/pages/admin-guides/access-controls/access-requests/access-request-configuration.mdx rename to docs/pages/identity-governance/access-requests/access-request-configuration.mdx index a98b16246faf5..a60d4732e2d52 100644 --- a/docs/pages/admin-guides/access-controls/access-requests/access-request-configuration.mdx +++ b/docs/pages/identity-governance/access-requests/access-request-configuration.mdx @@ -1,7 +1,14 @@ --- title: Configure Access Requests +sidebar_label: Configuration +sidebar_position: 6 description: Describes the options available for configuring just-in-time access to roles and resources in your Teleport cluster. -tocDepth: 3 +tags: + - access-requests + - how-to + - identity-governance + - privileged-access +enterprise: Identity Governance --- This guide explains the considerations you should make when configuring Teleport @@ -95,7 +102,7 @@ A role matcher can include the following values: To add the values of `claims_to_roles` to the lists of role matchers, the Auth Service evaluates template expressions with the user's traits. See the [Role -Templates](../guides/role-templates.mdx#templating-in-access-requests) for more +Templates](../../zero-trust-access/rbac-get-started/role-templates.mdx) for more information on how Teleport executes template expressions with user traits in the `claims_to_roles` field. @@ -167,7 +174,7 @@ privileges: 1. Calculate the maximum duration of elevated privileges if the Access Request were granted. This is the lowest value of: - The `--max-duration` flag of the [`tsh request - create`](../../../reference/cli/tsh.mdx) command (if the + create`](../../reference/cli/tsh.mdx) command (if the user creating the request provides this flag). 
- The lowest value of the `request.max_duration` field included in one of
     the user's requested roles.
@@ -176,7 +183,7 @@ privileges:
    lowest value of:
    - The requested session expiration time, which is the value of the
      `--session-ttl` flag of [`tsh request
-     create`](../../../reference/cli/tsh.mdx).
+     create`](../../reference/cli/tsh.mdx).
    - The remaining time in the user's current Teleport session.
    - The lowest value of the `options.max_session_ttl` field in the user's
      requested roles.
@@ -190,8 +197,8 @@ privileges:
 When creating or reviewing Access Requests, you can specify the earliest time
 that a user can assume elevated privileges by using the `--assume-start-time`
 flag. This flag is available for the
-[`tsh request create`](../../../reference/cli/tsh.mdx) and [`tsh request
-review`](../../../reference/cli/tsh.mdx) commands. The format accepted
+[`tsh request create`](../../reference/cli/tsh.mdx) and [`tsh request
+review`](../../reference/cli/tsh.mdx) commands. The format accepted
 is defined in [RFC 3339](https://datatracker.ietf.org/doc/html/rfc3339), e.g.,
 `2023-12-12T23:20:50.52Z`. The time specified must be in the future.
@@ -235,7 +242,7 @@ valid while it awaits approval:
 
 1. Begin with the base expiration of the Access Request, which a user can
    set with the `--request-ttl` flag of the [`tsh request
-   create`](../../../reference/cli/tsh.mdx) command. If this is
+   create`](../../reference/cli/tsh.mdx) command. If this is
    unset, the request TTL is one hour.
 1. If the user's Teleport session is due to expire before the base expiration
    time, Teleport sets the Access Request expiration to the end of the Teleport
@@ -296,7 +303,7 @@ user:
 
 ## Requiring request reasons
 
-The `allow.request.reason.mode` field controls whether a reason is required when users submit
+The `spec.allow.request.reason.mode` field controls whether a reason is required when users submit
Access Requests.
 
 Allowed values are:
@@ -306,6 +313,9 @@ Allowed values are:
 | `optional` | The default. The user does not need to provide a reason when making a request. |
 | `required` | The user must provide a non-empty reason when making a request. |
 
+You can specify a scoped prompt to remind the user to provide a reason for specific requestable
+roles or resources using `spec.allow.request.reason.prompt`.
+
 Example:
 
 ```yaml
@@ -322,12 +332,67 @@ spec:
         - 'root-node-access'
       reason:
         mode: 'required'
+        prompt: 'Please give a reason for accessing node resources'
 ```
 
-If a user with "node-requester" role assigned makes an Access Request for "node-access" role or any
-resource allowed by "root-node-access" they will be required to provide a reason. If a user's
+If a user assigned the `node-requester` role makes an Access Request for the `node-access` role or any
+resource allowed by `root-node-access`, they will be required to provide a reason. If a user's
 role set includes multiple roles governing Access Requests to the same roles and resources,
-"require" mode takes precedence.
+`required` mode takes precedence.
+
+## Custom request reason prompts
+
+It is possible to specify custom request prompts for a user:
+
+- for requestable resources or roles specified by a single role
+- for all Access Requests made by the user
+
+To specify a scoped prompt for requestable resources specified by a single role, set
+`spec.allow.request.reason.prompt` to a non-empty string. This will affect only Access Requests that
+contain the specific resource or role.
+
+A custom global request prompt can be specified by assigning the user a role
+with `spec.options.request_prompt` set to a non-empty string. This global prompt will apply to
+all Access Requests made by the user.
+
+If multiple global prompts and scoped prompts apply to the same Access Request, all
+prompts will be listed in alphabetical order within the request reason form.
+
+Example:
+
+```yaml
+kind: role
+version: v7
+metadata:
+  name: k8s-requester
+spec:
+  allow:
+    request:
+      search_as_roles:
+        - 'k8s-viewer'
+      reason:
+        # If a user is assigned this role, the prompt below will be displayed
+        # when any resource allowed by the k8s-viewer role is requested.
+        prompt: 'Please give a reason for accessing kubernetes resources'
+```
+
+```yaml
+kind: role
+version: v7
+metadata:
+  name: employee
+spec:
+  allow: {}
+  options:
+    # If a user is assigned this role, the prompt below will be displayed for
+    # all Access Requests made by this user.
+    request_prompt: 'Please provide your ticket ID'
+```
+
+If a user assigned the `k8s-requester` and `employee` roles makes an Access Request
+for resources searchable as `k8s-viewer`, they will be prompted with the text
+"Please give a reason for accessing kubernetes resources" on the first line and
+"Please provide your ticket ID" on the second line, in alphabetical order.
 
 ## Review thresholds
@@ -385,7 +450,7 @@ the request has met the criteria specified in one of the thresholds in
 `allow.request.thresholds` associated with that role.
 
 The value of `filter` is an expression that uses the Teleport [predicate
-language](../../../reference/predicate-language.mdx).
+language](../../reference/access-controls/predicate-language.mdx).
 
 For example, the following configuration includes four thresholds, three of
 which have filters:
@@ -486,6 +551,17 @@ While Teleport will accept a role with a nonempty
 `deny.request.suggested_reviewers` field, it only considers the
 `allow.request.suggested_reviewers` field when evaluating Access Requests.
 
+### Access List owners as suggested reviewers
+
+For resource-based Access Requests, owners of the Access Lists that grant access
+to the requested resources are automatically added as suggested reviewers.
+
+Note that those owners still need to have an appropriate Teleport role that
+allows them to review such Access Requests.
+ +You can learn more about this use-case in the [Reviewing Access Requests](../access-lists/reviewing-access-requests.mdx) +guide. + ## Roles that a reviewer can grant access to Teleport users must be authorized to review Access Requests for a particular @@ -505,7 +581,7 @@ following role fields: The Auth Service evaluates the `claims_to_roles` field using template expressions with the user's traits. See the [Role -Templates](../guides/role-templates.mdx#templating-in-access-requests) for more +Templates](../../zero-trust-access/rbac-get-started/role-templates.mdx) for more information on how Teleport executes template expressions with user traits in the `claims_to_roles` field. @@ -673,7 +749,7 @@ pagerduty_services: ### Reading annotations in custom plugins -If you [write your own Access Request plugin](../../api/access-plugin.mdx), the +If you [write your own Access Request plugin](plugins/how-to-build.mdx), the program can access system annotations using a function similar to the following: ```go @@ -694,4 +770,4 @@ Requests with the key `my-annotation`. For a full description of the configuration options within a Teleport role, refer to the [Access Controls -Reference](../../../reference/access-controls/roles.mdx). +Reference](../../reference/access-controls/roles.mdx). diff --git a/docs/pages/identity-governance/access-requests/access-requests.mdx b/docs/pages/identity-governance/access-requests/access-requests.mdx new file mode 100644 index 0000000000000..ad8afbf829bc1 --- /dev/null +++ b/docs/pages/identity-governance/access-requests/access-requests.mdx @@ -0,0 +1,73 @@ +--- +title: Just-in-Time Access Requests +sidebar_label: Access Requests +description: Use just-in-time Access Requests to request elevated privileges. 
+template: "no-toc"
+sidebar_position: 2
+tags:
+  - access-requests
+  - conceptual
+  - identity-governance
+  - privileged-access
+enterprise: Identity Governance
+---
+
+Just-in-time Access Requests allow Teleport users to request access to a
+resource or role depending on need. The request can then be approved or denied
+based on a configurable number of approvers. Access Requests enable your
+organization to implement security best practices, including:
+
+- **Dual authorization:** Two reviewers must grant access before a user can
+  receive elevated permissions. This satisfies the FedRAMP AC-3 dual
+  authorization control that requires approval of two authorized individuals.
+- **Principle of least privilege:** You can configure Access Requests to give an
+  attacker no permanent admin roles to target. Users receive elevated privileges
+  for a limited period of time. Request approvers can be configured with limited
+  cluster access so they are not high-value targets.
+
+Access Requests are designed to provide temporary permissions to users. If you
+want to grant longstanding permissions to a group of users, with the option to
+renew these permissions after a recurring interval (such as three months),
+consider [Access Lists](../access-lists/access-lists.mdx).
+
+## See how Access Requests work
+
+Access Requests support two main use cases: **Role Access Requests**
+and **Resource Access Requests**.
+
+With Role Access Requests, engineers can request temporary credentials with
+elevated roles in order to perform critical system-wide tasks.
+
+[Get started with Role Access Requests](role-requests.mdx).
+
+With Resource Access Requests, engineers can easily get access to only the
+individual resources they need, when they need it.
+
+[Get started with Resource Access Requests](resource-requests.mdx).
+
+## Configure Access Requests
+
+You can configure all aspects of the Access Request lifecycle in Teleport,
+including:
+
+- When a user must make a request.
+- What permissions a user can request.
+- How long elevated permissions can last.
+- How many users can approve or deny different kinds of requests.
+
+Read the [Access Request
+Configuration](access-request-configuration.mdx) guide for an
+overview of the configuration options available for Access Requests.
+
+## Teleport Community Edition users
+
+Just-in-time Access Requests are a feature of Teleport Enterprise. Teleport
+Community Edition users can get a preview of how Access Requests work by
+requesting a role via the Teleport CLI. Full Access Request functionality,
+including Resource Access Requests and managing Access Requests via the Web UI, is
+available in Teleport Enterprise.
+
+For information on how to use Just-in-time Access Requests with Teleport Community
+Edition, see [Teleport Community Access Requests](oss-role-requests.mdx).
+
+
diff --git a/docs/pages/identity-governance/access-requests/automatic-reviews.mdx b/docs/pages/identity-governance/access-requests/automatic-reviews.mdx
new file mode 100644
index 0000000000000..3b744d4f71570
--- /dev/null
+++ b/docs/pages/identity-governance/access-requests/automatic-reviews.mdx
@@ -0,0 +1,435 @@
+---
+title: Configure Automatic Reviews
+sidebar_label: Automatic Reviews
+sidebar_position: 3
+description: Describes how to configure Access Automation Rules for Automatic Reviews.
+tags:
+  - access-requests
+  - how-to
+  - identity-governance
+  - privileged-access
+enterprise: Identity Governance
+---
+
+Teleport supports automatic reviews of Access Requests. This
+feature enables teams to enforce a zero-standing-privilege policy, while
+still allowing users to receive temporary access without manual approval.
+
+In this guide, we'll walk through how to configure Teleport RBAC and demonstrate
+example use cases for automatic reviews:
+
+- Automatically approve role requests from users on a specific team.
+- Automatically approve resource requests from users on a specific team.
+
+## How it works
+
+Automatic reviews are triggered by Access Automation Rules. These rules instruct
+Teleport to monitor Access Requests and automatically submit a review
+when certain conditions (such as requested roles, user traits, or resource labels)
+are met.
+
+For example, an Access Automation Rule can perform an automatic Access Request
+approval when a user with the Teleport trait or IdP attribute `team: demo`
+requests access to the `access` role.
+
+## Prerequisites
+
+(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise (v18.2.8 or higher)"!)
+
+- This feature requires Teleport Identity Governance.
+- This guide requires at least one labeled SSH server.
+- If you're following along with the Terraform steps, first set up the
+[Teleport Terraform Provider](../../reference/infrastructure-as-code/terraform-provider/terraform-provider.mdx).
+
+## Step 1/3. Create a requester role and user
+
+In this example, we'll first create:
+
+- A role named `demo-access-request`, which allows users to request access to the
+`access` role and any resources that the `access` role can access.
+- A user named `demo-access-requester`, assigned the above role.
+
+### Create the role
+
+Create a `role` configuration file named `demo-role.yaml`:
+
+```yaml
+# demo-role.yaml
+kind: role
+version: v7
+metadata:
+  name: demo-access-request
+spec:
+  allow:
+    request:
+      roles:
+        - access
+      search_as_roles:
+        - access
+```
+
+Create the role with:
+```code
+$ tctl create demo-role.yaml
+```
+
+### Create the user
+
+Use the following command to create the user and assign the role:
+```code
+$ tctl users add --roles=demo-access-request demo-access-requester
+```
+
+Alternatively, you can assign the role after creating the user:
+
+  (!docs/pages/includes/add-role-to-user.mdx role="demo-access-request" user="\`demo-access-requester\`"!)
+
+## Step 2/3.
Assign user traits + +To allow automatic review rules to evaluate the requesting user, assign them +traits via the Teleport Web UI: + +1. Go to **Zero Trust Access** -> **Users** +2. Next to `demo-access-requester`, click **Options** -> **Edit...** +3. Click **Add user trait**, and set: + - Key: `team` + - Value: `demo` +4. Click **Save** +5. Verify that the user has been updated with the desired trait + +![Edit User Traits](../../../img/access-controls/automatic-reviews/users-edit.png) + +When adding user traits, you can enter any keys and values. The user trait form +does not support wildcard or regular expressions. + + + Automatic reviews are compatible with SSO users and the attributes provided + by the IdP. + + +## Step 3/3. Configure automatic reviews + +Now that you've created a user and assigned them traits, you're ready to +configure automatic review rules. + +Access Automation Rules leverage the user traits, resource labels, and schedules +to determine whether requests can be automatically approved. In this step, you'll +configure one or more kinds of automatic review rules to demonstrate how Teleport +can automatically approve access. + +### Role-based automatic reviews + +To demonstrate how role requests from a specific team can be automatically +approved, create the following Access Automation Rule: + + + +1. Go to **Identity Governance** -> **Access Requests** -> **Set Up Access Automation Rules** +2. Click **Create New Access Automation Rule** -> **Automatic Review Rule** +3. Configure the rule and set: + - **Name of requested roles to match**: `access` + - **User Traits to match**: `team: demo` + - **Review decision**: `APPROVED` +4. 
Click **Create Rule** + + +Create an `access_monitoring_rule` configuration file named `demo-rule.yaml`: + +```yaml +# demo-rule.yaml +kind: access_monitoring_rule +version: v1 +metadata: + name: demo-rule +spec: + automatic_review: + decision: APPROVED + integration: builtin + condition: |- + contains_all(set("access"), access_request.spec.roles) && + contains_any(user.traits["team"], set("demo")) + desired_state: reviewed + subjects: + - access_request +``` + +Create the access_monitoring_rule with: +```code +$ tctl create -f demo-rule.yaml +``` + + +Create an `access_monitoring_rule` configuration file named `demo-rule.tf`: + +```hcl +# demo-rule.tf +resource "teleport_access_monitoring_rule" "demo-rule" { + version = "v1" + metadata = { + name = "demo-rule" + } + spec = { + automatic_review = { + decision = "APPROVED" + integration = "builtin" + } + condition = <<-EOT + contains_all(set("access"), access_request.spec.roles) && + contains_any(user.traits["team"], set("demo")) + EOT + desired_state = "reviewed" + subjects = ["access_request"] + } +} +``` + +Apply the configuration: +```code +$ terraform apply +``` + + + +This rule automatically approves Access Requests for the `access` role if the +user has the trait `team: demo`. + +To verify the new automatic review rule, create an Access Request via the Teleport +Web UI: + +1. Log in as `demo-access-requester` +2. Go to **Identity Governance** -> **Access Requests** and click **New Access Request** +3. Change the request type from **Resources** to **Roles** +4. Add the `access` role to the Access Request +5. Click **Proceed to Request**, then **Submit Request** + +At this point, the new Access Request should have been created, automatically +reviewed, and transitioned into an `APPROVED` state. Navigate **Back to Listings** +and verify the Access Request status. It might take a second for the review to +process, so you may have to refresh the page. 
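The same verification can also be sketched from the command line. This assumes you are logged in with `tsh` as `demo-access-requester` and that the rule above exists; the reason text is a placeholder:

```code
$ tsh request create --roles=access --reason "testing automatic reviews"
```

`tsh request create` typically waits for the request to be resolved, so if the rule matches, the command should return once the automatic review approves the request.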
+ +### Resource-based automatic reviews + +Before creating an Access Automation Rule, ensure that you have at least one +labeled SSH server connected to your Teleport cluster: + +```code +# List your SSH servers to identify available labels: +$ tsh ls +Node Name Address Labels +----------- ---------- ------------------------------- +teleport-00 ⟵ Tunnel env=demo +``` + +To demonstrate how resource requests from a specific team can be automatically +approved, create the following Access Automation Rule: + + + +1. Go to **Identity Governance** -> **Access Requests** -> **Set Up Access Automation Rules** +2. Click **Create New Access Automation Rule** -> **Automatic Review Rule** +3. Configure the rule and set: + - **Name of requested roles to match**: `access` + - **Resource labels to match**: `env: demo` + - **User Traits to match**: `team: demo` + - **Review decision**: `APPROVED` +4. Click **Create Rule** + + +Create an `access_monitoring_rule` configuration file named `demo-rule.yaml`: + +```yaml +# demo-rule.yaml +kind: access_monitoring_rule +version: v1 +metadata: + name: demo-rule +spec: + automatic_review: + decision: APPROVED + integration: builtin + condition: |- + contains_all(set("access"), access_request.spec.roles) && + contains_any(user.traits["team"], set("demo")) && + access_request.spec.resource_labels_intersection["env"].contains("demo") + desired_state: reviewed + subjects: + - access_request +``` + +Create the access_monitoring_rule with: +```code +$ tctl create -f demo-rule.yaml +``` + + +Create an `access_monitoring_rule` configuration file named `demo-rule.tf`: + +```hcl +# demo-rule.tf +resource "teleport_access_monitoring_rule" "demo-rule" { + version = "v1" + metadata = { + name = "demo-rule" + } + spec = { + automatic_review = { + decision = "APPROVED" + integration = "builtin" + } + condition = <<-EOT + contains_all(set("access"), access_request.spec.roles) && + contains_any(user.traits["team"], set("demo")) && + 
access_request.spec.resource_labels_intersection["env"].contains("demo") + EOT + desired_state = "reviewed" + subjects = ["access_request"] + } +} +``` + +Apply the configuration: +```code +$ terraform apply +``` + + + +This rule automatically approves Access Requests for the `access` role and +resources labeled `env: demo` if the user has the trait `team: demo`. + +To verify the new automatic review rule, create an Access Request via the Teleport +Web UI: + +1. Log in as `demo-access-requester` +2. Go to **Identity Governance** -> **Access Requests** and click **New Access Request** +3. Add the SSH server you'd like to request access to +4. Click **Proceed to Request**, then **Submit Request** + +At this point, the new Access Request should have been created, automatically +reviewed, and transitioned into an `APPROVED` state. Navigate **Back to Listings** +and verify the Access Request status. It might take a second for the review to +process, so you may have to refresh the page. + +### Schedule-based automatic reviews + +To demonstrate how role requests can be automatically approved during a +specified schedule, create the following Access Automation Rule: + + + +1. Go to **Identity Governance** -> **Access Requests** -> **Set Up Access Automation Rules** +2. Click **Create New Access Automation Rule** -> **Automatic Review Rule** +3. Configure the rule and set: + - **Name of requested roles to match**: `access` + - **User Traits to match**: `team: demo` + - **Review decision**: `APPROVED` +4. Configure the schedule: + - Enable **For specified times only** + - Select the desired **Time Zone** + - Choose the desired weekdays + - Set the schedule for each selected weekday +5. 
Click **Create Rule** + +For example, to enable automatic reviews at any time on Mondays, the schedule +editor would appear as follows: + +![Schedule Editor](../../../img/access-controls/automatic-reviews/schedule-editor.png) + + +Create an `access_monitoring_rule` configuration file named `demo-rule.yaml`: + +```yaml +# demo-rule.yaml +kind: access_monitoring_rule +version: v1 +metadata: + name: demo-rule +spec: + automatic_review: + decision: APPROVED + integration: builtin + condition: |- + contains_all(set("access"), access_request.spec.roles) && + contains_any(user.traits["team"], set("demo")) + desired_state: reviewed + subjects: + - access_request + schedules: + default: + time: + timezone: America/Los_Angeles + shifts: + - weekday: Monday + start: "00:00" + end: "23:59" +``` + +Create the access_monitoring_rule with: +```code +$ tctl create -f demo-rule.yaml +``` + + +Create an `access_monitoring_rule` configuration file named `demo-rule.tf`: + +```hcl +# demo-rule.tf +resource "teleport_access_monitoring_rule" "demo-rule" { + version = "v1" + metadata = { + name = "demo-rule" + } + spec = { + automatic_review = { + decision = "APPROVED" + integration = "builtin" + } + condition = <<-EOT + contains_all(set("access"), access_request.spec.roles) && + contains_any(user.traits["team"], set("demo")) + EOT + desired_state = "reviewed" + subjects = ["access_request"] + schedules = { + default = { + time = { + timezone = "America/Los_Angeles" + shifts = [ + { + weekday : "Monday" + start : "00:00" + end : "23:59" + } + ] + } + } + } + } +} +``` + +Apply the configuration: +```code +$ terraform apply +``` + + + +## Troubleshooting + +### Conflicting automatic review rules + +Automatic review rules can automatically approve or deny Access Requests based +on the selected review decision. If an Access Request meets the conditions for +both an approval rule and a denial rule, the denial rule takes precedence. 
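As an illustration of this precedence, a denial rule can be written in the same shape as the approval rules above. This is a sketch; the `team: contractors` trait is a hypothetical example:

```yaml
kind: access_monitoring_rule
version: v1
metadata:
  name: demo-deny-rule
spec:
  automatic_review:
    # If this rule and an APPROVED rule both match the same
    # Access Request, the DENIED decision takes precedence.
    decision: DENIED
    integration: builtin
  condition: |-
    contains_all(set("access"), access_request.spec.roles) &&
    contains_any(user.traits["team"], set("contractors"))
  desired_state: reviewed
  subjects:
    - access_request
```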
+
+## Next Steps
+
+- Access Automation Rules are configured using an underlying `access_monitoring_rule`
+  resource. For more details about the `access_monitoring_rule` resource, refer to the
+  [Access Monitoring Rules Reference](../../reference/access-controls/access-monitoring-rules.mdx).
+- For configuration with the Teleport Terraform Provider, refer to the
+  [Terraform Resources Index](../../reference/infrastructure-as-code/terraform-provider/resources/access_monitoring_rule.mdx).
+- For configuration options with SSO, refer to the
+  [Single Sign-On Guides](../../zero-trust-access/sso/sso.mdx).
+- For more details about managing resource labels, refer to the
+  [Add Labels to Resources Guide](../../zero-trust-access/rbac-get-started/labels.mdx).
diff --git a/docs/pages/admin-guides/access-controls/access-request-plugins/notification-routing-rules.mdx b/docs/pages/identity-governance/access-requests/notification-routing-rules.mdx
similarity index 81%
rename from docs/pages/admin-guides/access-controls/access-request-plugins/notification-routing-rules.mdx
rename to docs/pages/identity-governance/access-requests/notification-routing-rules.mdx
index 15b85449dd8b5..782dc6cbb6ad2 100644
--- a/docs/pages/admin-guides/access-controls/access-request-plugins/notification-routing-rules.mdx
+++ b/docs/pages/identity-governance/access-requests/notification-routing-rules.mdx
@@ -1,6 +1,14 @@
 ---
-title: Routing Access Request notifications
+title: Routing Access Request Notifications
+sidebar_label: Notification Routing Rules
 description: How to set up Teleport's Access Monitoring Rules to route Access Request notifications
+sidebar_position: 4
+tags:
+  - access-requests
+  - how-to
+  - identity-governance
+  - privileged-access
+enterprise: Identity Governance
 ---
 
 With Teleport's Access Monitoring Rules, Access Request notifications can be
@@ -19,29 +27,13 @@ the the AMR and uses it to process incoming events.
 Plugins implement AMR handling logic separately from one another.
Currently, only a subset of hosted plugins support notification routing rules.
 We are working on extending support to the rest of hosted plugins. Keep an eye on
-Teleport [changelog](../../../changelog.mdx) to learn about new plugins.
+the Teleport [changelog](../../changelog.mdx) to learn about new plugins.
 
 ## Prerequisites
 
-- A managed Teleport Enterprise account.
-
-- The `tctl` admin tool and `tsh` client tool version >= (=teleport.version=).
-
-  You can verify the tools you have installed by running the following commands:
-
-  ```code
-  $ tctl version
-  # Teleport Enterprise v(=teleport.version=) go(=teleport.golang=)
-
-  $ tsh version
-  # Teleport v(=teleport.version=) go(=teleport.golang=)
-  ```
-
-  You can download these tools by following the appropriate [Installation
-  instructions](../../../installation.mdx) for your environment and Teleport edition.
+(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise Cloud"!)
 
 - At least one of the Teleport Access Plugins that support Access Monitoring
   Rules is enrolled.
-
-  (!docs/pages/includes/tctl.mdx!)
 
 ## Step 1/2. Create an Access Monitoring Rule
@@ -55,11 +47,11 @@ You can define an Access Request notification rule in two ways:
 
 To create an Access Monitoring Rule via the Web UI, first navigate to the Access
 Request page and click on `View Notification Routing Rules`.
 
-![View Notification Routing Rules](../../../../img/enterprise/plugins/access_monitoring_rules/view_routing_rules.png)
+![View Notification Routing Rules](../../../img/enterprise/plugins/access_monitoring_rules/view_routing_rules.png)
 
 Then click `Create Notification Rule`.
 
-![Create Access Monitoring Rule](../../../../img/enterprise/plugins/access_monitoring_rules/access_monitoring_rule_ui.png)
+![Create Access Monitoring Rule](../../../img/enterprise/plugins/access_monitoring_rules/access_monitoring_rule_ui.png)
 
 If a plugin that supports Access Monitoring Rule based routing is not enrolled,
 the UI will prompt you to enroll one.
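Notification routing rules use the same `access_monitoring_rule` resource that can be created with `tctl`. The sketch below routes notifications for `editor` role requests; the `slack` integration name and the channel are placeholders for whichever plugin you have enrolled:

```yaml
kind: access_monitoring_rule
version: v1
metadata:
  name: route-editor-requests
spec:
  subjects:
    - access_request
  condition: contains_all(set("editor"), access_request.spec.roles)
  notification:
    # Name of the enrolled plugin that should deliver the notification.
    name: slack
    recipients:
      - "#teleport-editor-requests"
```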
diff --git a/docs/pages/admin-guides/access-controls/access-requests/oss-role-requests.mdx b/docs/pages/identity-governance/access-requests/oss-role-requests.mdx similarity index 97% rename from docs/pages/admin-guides/access-controls/access-requests/oss-role-requests.mdx rename to docs/pages/identity-governance/access-requests/oss-role-requests.mdx index 8e38f342e2709..02ed28c37848a 100644 --- a/docs/pages/admin-guides/access-controls/access-requests/oss-role-requests.mdx +++ b/docs/pages/identity-governance/access-requests/oss-role-requests.mdx @@ -1,6 +1,13 @@ --- title: Teleport Community Edition Role Access Requests +sidebar_label: Teleport Community Edition +sidebar_position: 7 description: Teleport Community Edition allows users to request access to roles from the CLI. +tags: + - access-requests + - conceptual + - identity-governance + - privileged-access --- Just-in-time Access Requests are a feature of Teleport Enterprise. diff --git a/docs/pages/admin-guides/access-controls/access-request-plugins/datadog-hosted.mdx b/docs/pages/identity-governance/access-requests/plugins/datadog-hosted.mdx similarity index 92% rename from docs/pages/admin-guides/access-controls/access-request-plugins/datadog-hosted.mdx rename to docs/pages/identity-governance/access-requests/plugins/datadog-hosted.mdx index f28fec38048e6..270fc0b487aa6 100644 --- a/docs/pages/admin-guides/access-controls/access-request-plugins/datadog-hosted.mdx +++ b/docs/pages/identity-governance/access-requests/plugins/datadog-hosted.mdx @@ -1,6 +1,13 @@ --- title: Access Requests with Datadog Incident Management +sidebar_label: Datadog description: How to set up Teleport's Datadog Incident Management plugin for privilege elevation approvals. 
+tags:
+  - access-requests
+  - how-to
+  - identity-governance
+  - privileged-access
+enterprise: Identity Governance
 ---
 
 With Teleport's Datadog Incident Management integration, engineers can access the
@@ -24,7 +31,7 @@ may approve the Access Request automatically.
 
 ## Prerequisites
 
-(!docs/pages/includes/commercial-prereqs-tabs.mdx!)
+(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise (v17.0.1 or higher)"!)
 
 - A Datadog account with the role "Datadog Admin Role". The admin role is
   required to create a Service Account and generate required credentials for the
  plugin.
@@ -59,7 +66,7 @@ For the purpose of this guide, we will define an `editor-requester` role, which
 can request the built-in `editor` role, and an `editor-reviewer` role that can
 review requests for the `editor` role.
 
-In the Teleport WebUI navigate to **Access -> Roles**. Then select **Create New
+In the Teleport Web UI, navigate to **Zero Trust Access -> Roles**. Then select **Create New
 Role** and create the desired roles.
@@ -90,12 +97,12 @@ spec:
 
 First, assign yourself the `editor-reviewer` role. This will allow your user to
 review Access Requests for the `editor` role. To edit your user roles navigate to
-**Management -> Access -> Users**, then for your user select **Options -> Edit**
+**Zero Trust Access -> Users**, then for your user select **Options -> Edit**
 and add the `editor-reviewer` role.
 
 Next, create a user called `myuser@example.com` who has the `editor-requester`
 role. Later in this guide, you will create an Access Request as this user to test the
-Datadog plugin. To this user, navigate to **Management -> Access -> Users**. Then
+Datadog plugin. To create this user, navigate to **Zero Trust Access -> Users**. Then
 select **Enroll Users** and create a user with the `editor-requester` role.
 
 You should end up with two users that look like this:
@@ -137,9 +144,8 @@ to create a new Application key. Copy the Application key to paste in a later st
 
 ## Step 4/6.
Enroll the Datadog Incident Management plugin At this point, you're now ready to enroll the Datadog Incident Management plugin. -Navigate to **Access Management -> Enroll New Integration -> Datadog**. -![Select enrollment](../../../../img/enterprise/plugins/datadog/select-enrollment.png) +(!docs/pages/includes/plugins/enroll.mdx name="the Datadog Incident Management"!) Provide the API and Application keys generated above. Select the desired API endpoint. Then provide the Datadog team handle, that you created earlier, as the fallback recipient. @@ -148,9 +154,7 @@ This should be "teleport-access". The fallback recipient will be the default recipient for notifications. The recipient can be a Datadog user email, or a Datadog team handle. You can configure more custom notification routing rules afterwards using -[Access Monitoring Rules](./notification-routing-rules.mdx). - -![Datadog enrollment](../../../../img/enterprise/plugins/datadog/datadog-enrollment.png) +[Access Monitoring Rules](../notification-routing-rules.mdx). If the recipient is a Datadog team, the team name will be added to the Datadog incident teams attribute. @@ -259,5 +263,5 @@ that the Teleport username matches the Datadog on-call user email. ## Next steps -- Read our guide on [Routing Access Request notifications](./notification-routing-rules.mdx) +- Read our guide on [Routing Access Request notifications](../notification-routing-rules.mdx) to configure custom notification routing rules for your plugin. diff --git a/docs/pages/identity-governance/access-requests/plugins/discord.mdx b/docs/pages/identity-governance/access-requests/plugins/discord.mdx new file mode 100644 index 0000000000000..a7ca17354dc03 --- /dev/null +++ b/docs/pages/identity-governance/access-requests/plugins/discord.mdx @@ -0,0 +1,410 @@ +--- +title: Run the Discord Access Request Plugin +sidebar_label: Discord +description: How to set up Teleport's Discord plugin for privilege elevation approvals. 
+tags: + - access-requests + - how-to + - identity-governance + - privileged-access +enterprise: Identity Governance +--- + +import BotLogo from "/static/avatar_logo.png"; +import AppIcon from "@version/docs/img/sso/onelogin/teleport.png"; + +This guide will explain how to set up Discord to receive Access Request messages +from Teleport. Teleport's Discord integration notifies individuals and channels of +Access Requests. Users can then approve and deny Access Requests from within +Discord, making it easier to implement security best practices without +compromising productivity. + +![The Discord Access Request plugin](../../../../img/enterprise/plugins/discord.png) + +
+This integration is hosted on Teleport Cloud + +(!docs/pages/includes/plugins/enroll.mdx name="the Discord integration"!) + +
+
+## How it works
+
+The Teleport Discord plugin authenticates to your Discord server as well as the
+Teleport Auth Service, and listens for audit events from the Teleport Auth
+Service related to Access Requests. When a user creates an Access Request, the
+plugin sends a message on Discord to reviewers of the request, including a link
+that each reviewer can follow in order to review the request in the Teleport Web
+UI. You can configure the reviewers the plugin will notify based on the role
+targeted by the Access Request.
+
+## Prerequisites
+
+(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!)
+
+(!docs/pages/includes/machine-id/plugin-prerequisites.mdx!)
+
+- An admin account on your Discord server. Installing a bot requires at least the
+  "Manage Server" permission.
+- Either a Linux host or Kubernetes cluster where you will run the Discord plugin.
+- (!docs/pages/includes/tctl.mdx!)
+
+## Step 1/8. Define RBAC resources
+
+Before you set up the Discord plugin, you will need to enable Role Access Requests
+in your Teleport cluster.
+
+(!/docs/pages/includes/plugins/editor-request-rbac.mdx!)
+
+## Step 2/8. Define a Teleport Discord plugin user
+
+The required permissions for the plugin are configured in the preset `access-plugin`
+role. To generate credentials for the plugin, define either a Machine ID bot user
+or a regular Teleport user.
+
+
+
+(!docs/pages/includes/plugins/rbac-impersonate-machine-id.mdx!)
+
+
+(!docs/pages/includes/plugins/rbac-impersonate.mdx!)
+
+
+
+## Step 3/8. Export the access plugin identity
+
+Give the plugin access to a Teleport identity file. We recommend using Machine
+ID for this in order to produce short-lived identity files that are less
+dangerous if exfiltrated, though in demo deployments, you can generate
+longer-lived identity files with `tctl`:
+
+
+
+(!docs/pages/includes/plugins/tbot-identity.mdx secret="teleport-plugin-discord-identity"!)
+
+
+(!docs/pages/includes/plugins/identity-export.mdx user="access-plugin" secret="teleport-plugin-discord-identity"!)
+
+
+
+## Step 4/8. Install the Teleport Discord plugin
+
+(!docs/pages/includes/plugins/install-access-request.mdx name="discord"!)
+
+## Step 5/8. Register a Discord app
+
+The Access Request plugin for Discord receives Access Request events from the
+Teleport Auth Service, formats them into Discord messages, and sends them to the
+Discord API to post them in your guild (Discord server). For this to work,
+you must register a new app with the Discord API.
+
+### Create your application
+
+1. Visit
+   [https://discord.com/developers/applications](https://discord.com/developers/applications)
+   to create a new Discord application. Click "New Application" and name the
+   application "Teleport".
+1. Download the application icon.
+1. Set the application icon.
+
+### Create the application bot
+
+Go to the "Bot" tab and choose "Add Bot". You can download our avatar to set as
+your Bot icon. Uncheck the "Public Bot" toggle, as this bot should only be used
+within your Discord servers. Finally, press "Reset Token", then copy the new
+token and save it in a safe place. This token will be used by the Teleport
+plugin to authenticate against the Discord API.
+
+### Install and authorize the application in your Discord server
+
+Go to the "OAuth2" tab, open the "URL Generator", and check the following
+permissions:
+
+- bot
+- View Channels
+- Send Messages
+
+Copy the generated URL and open it in your browser. Choose to install the
+application into the desired Discord server. If the server is not available in
+the dropdown list, you don't have sufficient rights. Ask a server
+administrator to grant you a role with the "Manage Server" permission.
+
+
+The same application can be installed into multiple Discord servers. To do so,
+open the OAuth URL multiple times and choose a different server each time. You
+have to be an admin on a Discord server to install the app into it.
+
+
+## Step 6/8. Configure the Teleport Discord plugin
+
+At this point, the Teleport Discord plugin has the credentials it needs to
+communicate with your Teleport cluster and the Discord API. In this step, you will
+configure the Discord plugin to use these credentials. You will also configure the
+plugin to notify the right Discord channels when it receives an Access Request
+update.
+
+### Create a config file
+
+
+
+The Teleport Discord plugin uses a config file in TOML format. Generate a
+boilerplate config by running the following command (the plugin will not run
+unless the config file is in `/etc/teleport-discord.toml`):
+
+```code
+$ teleport-discord configure | sudo tee /etc/teleport-discord.toml > /dev/null
+```
+
+This should result in a config file like the one below:
+
+```toml
+(!examples/resources/plugins/teleport-discord.toml!)
+```
+
+
+The Discord Helm chart uses a YAML values file to configure the plugin.
+On your local workstation, create a file called `teleport-discord-helm.yaml`
+based on the following example:
+
+```yaml
+(!examples/resources/plugins/teleport-discord-helm.yaml!)
+```
+
+
+
+
+### Edit the config file
+
+Open the configuration file created for the Teleport Discord plugin and update the following fields:
+
+**`[teleport]`**
+
+The Discord plugin uses this section to connect to the Teleport Auth Service.
+
+(!docs/pages/includes/plugins/config-toml-teleport.mdx!)
+
+(!docs/pages/includes/plugins/refresh-plugin-identity.mdx!)
+
+**`[discord]`**
+
+`token`: Paste the bot token saved previously in this field.
+
+**`[role_to_recipients]`**
+
+The `role_to_recipients` map configures the channels that the Discord plugin will
+notify when a user requests access to a specific role. When the Discord plugin
+receives an Access Request from the Auth Service, it will look up the role being
+requested and identify the Discord channels to notify.
+
+Each channel is represented by a numeric ID.
Channels can be public, private, or
+direct messages between a user and the bot.
+To determine the numeric ID of a channel for the bot to notify, follow the instructions below:
+
+
+
+
+  Open Discord in a web browser and navigate to the desired channel.
+
+  The web browser URL should look like:
+  ```
+  https://discord.com/channels/YOUR-SERVER-ID/YOUR-CHANNEL-ID
+  ```
+
+  Copy the last part of the URL (everything after the last `/`), which is the channel ID.
+
+
+  Open Discord in a web browser and navigate to the desired channel.
+
+  In the channel list choose "Create invite", type "teleport" in the search field
+  and invite your Discord Teleport bot. The bot should now appear in the channel
+  member list.
+
+  The web browser URL should look like:
+  ```
+  https://discord.com/channels/YOUR-SERVER-ID/YOUR-CHANNEL-ID
+  ```
+
+  Copy the last part of the URL (everything after the last `/`), which is the channel ID.
+
+
+  To retrieve the channel ID of the private discussion between User A and the
+  Teleport bot, have User A send a direct message to the Teleport bot. This will
+  open a conversation between the user and the bot. Once the conversation is
+  initiated, the user can open the discussion page.
+
+  The web browser URL should look like:
+  ```
+  https://discord.com/channels/@me/YOUR-CHANNEL-ID
+  ```
+
+  Copy the last part of the URL (everything after the last `/`), which is the channel ID.
+
+
+
+In the `role_to_recipients` map, each key is the name of a Teleport role. Each
+value configures the Discord channel (or channels) to notify.
+
+The `role_to_recipients` map must also include an entry for `"*"`, which the
+plugin looks up if no other entry matches a given role name. Requests for roles
+without their own entry will notify the channel (or channels) configured for
+`"*"`.
+
+Configure the Discord plugin to notify you when a user requests the `editor` role
+by adding the following to your `role_to_recipients` config (replace
+`YOUR-CHANNEL-ID` with a valid channel ID):
+
+
+
+
+Here is an example of a `role_to_recipients` map.
Each value can be a single +string or an array of strings: + +```toml +[role_to_recipients] +"*" = "YOUR-CHANNEL-ID" +"editor" = "YOUR-CHANNEL-ID" +``` + + + +In the Helm chart, the `role_to_recipients` field is called `roleToRecipients` +and uses the following format, where keys are strings and values are arrays of +strings: + +```yaml +roleToRecipients: + "*": ["YOUR-CHANNEL-ID"] + "editor": ["YOUR-CHANNEL-ID"] +``` + + + +The final configuration file should resemble the following: + + + +```toml +(!examples/resources/plugins/teleport-discord.toml!) +``` + + +```yaml +(!examples/resources/plugins/teleport-discord-helm.yaml!) +``` + + + +## Step 7/8. Test your Discord app + +Once Teleport is running, you've created the Discord app, and the plugin is +configured, you can now run the plugin and test the workflow. + + + +Start the plugin: + +```code +$ teleport-discord start +``` + +If everything works fine, the log output should look like this: + +```code +$ teleport-discord start +INFO Starting Teleport Access Discord Plugin (=teleport.version=): discord/app.go:80 +INFO Plugin is ready discord/app.go:101 +``` + + +Start the plugin: + +```code +$ docker run -v :/etc/teleport-discord.toml public.ecr.aws/gravitational/teleport-plugin-discord:(=teleport.version=) start +``` + + +Install the plugin: + +```code +$ helm upgrade --install teleport-plugin-discord teleport/teleport-plugin-discord --values teleport-discord-helm.yaml +``` + +To inspect the plugin's logs, use the following command: + +```code +$ kubectl logs deploy/teleport-plugin-discord +``` + +Debug logs can be enabled by setting `log.severity` to `DEBUG` in +`teleport-discord-helm.yaml` and executing the `helm upgrade ...` command +above again. Then you can restart the plugin with the following command: + +```code +$ kubectl rollout restart deployment teleport-plugin-discord +``` + + + +Create an Access Request and check if the plugin works as expected with the +following steps. 
+ +### Create an Access Request + +(!docs/pages/includes/plugins/create-request.mdx!) + +The channel you configured earlier to review the request should receive a +message from "Teleport" in Discord allowing them to visit a link in the Teleport +Web UI and either approve or deny the request. + +### Resolve the request + +(!docs/pages/includes/plugins/resolve-request.mdx!) + +Once the request is resolved, the Discord bot will update the Access Request +message with ✅ or ❌, depending on whether the request was approved or denied. + + + +When the Discord plugin posts an Access Request notification to a channel, anyone +with access to the channel can view the notification and follow the link. While +users must be authorized via their Teleport roles to review Access Requests, you +should still check the Teleport audit log to ensure that the right users are +reviewing the right requests. + +When auditing Access Request reviews, check for events with the type `Access +Request Reviewed` in the Teleport Web UI. + + + +## Step 8/8. Set up systemd + +This section is only relevant if you are running the Teleport Discord plugin on a +Linux host. + +In production, we recommend starting the Teleport plugin daemon via an init +system like systemd. Here's the recommended Teleport plugin service unit file +for systemd: + +```ini +(!examples/systemd/plugins/teleport-discord.service!) +``` + +Save this as `teleport-discord.service` in either `/usr/lib/systemd/system/` or +another [unit file load +path](https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Unit%20File%20Load%20Path) +supported by systemd. + +Enable and start the plugin: + +```code +$ sudo systemctl enable teleport-discord +$ sudo systemctl start teleport-discord +``` + +## Next steps + +- Read our guides to configuring [Resource Access + Requests](../resource-requests.mdx) and [Role Access + Requests](../role-requests.mdx) so you can get the most out + of your Access Request plugins. 
diff --git a/docs/pages/identity-governance/access-requests/plugins/email.mdx b/docs/pages/identity-governance/access-requests/plugins/email.mdx new file mode 100644 index 0000000000000..245900159551f --- /dev/null +++ b/docs/pages/identity-governance/access-requests/plugins/email.mdx @@ -0,0 +1,528 @@ +--- +title: Access Requests with Email +sidebar_label: Email +description: How to set up the Teleport email plugin to notify users when another user requests elevated privileges. +tags: + - access-requests + - how-to + - identity-governance + - privileged-access +enterprise: Identity Governance +--- + +This guide will explain how to set up Teleport to send Just-in-Time Access +Request notifications to users via email. Since all organizations use email for +at least some of their communications, Teleport's email plugin makes it +straightforward to integrate Access Requests into your existing workflows, +letting you implement security best practices without compromising productivity. + +![The email Access Request plugin](../../../../img/enterprise/plugins/email.png) + +
+This integration is hosted on Teleport Enterprise (Cloud) + +(!docs/pages/includes/plugins/enroll.mdx name="the Email integration"!) + +![Enroll email plugin](../../../../img/enterprise/plugins/email/enroll-email.png) + +Configure and connect email integration by providing the following configuration +values: + +**`Sender`**: Configures the sender address. + +**`Fallback Recipient`**: Configures the default recipient for Access Request +notifications. + +**`Email Service`**: Selects the desired email service. Note: only `Mailgun` is +supported for Teleport Enterprise (Cloud). + +![Configure Mailgun integration](../../../../img/enterprise/plugins/email/configure-mailgun.png) + +Complete Mailgun integration by providing the following Mailgun API +configuration values: + +**`Domain`**: Configures the Mailgun sending domain. + +**`Mailgun Private Key`**: Configures the Mailgun API key. + +
+ +## How it works + +The Teleport email plugin authenticates to a third-party SMTP service as well as +the Teleport Auth Service, and listens for audit events from the Teleport Auth +Service related to Access Requests. When a user creates an Access Request, the +plugin sends an email to reviewers of the request, including a link that each +reviewer can follow in order to review the request in the Teleport Web UI. You +can configure the reviewers the plugin will notify based on the role targeted by +the Access Request. + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!) + +(!docs/pages/includes/machine-id/plugin-prerequisites.mdx!) + +- Access to an SMTP service. The Teleport email plugin supports either Mailgun + or a generic SMTP service that authenticates via username and password. +- Either a Linux host or Kubernetes cluster where you will run the email plugin. + + + +The Teleport plugin needs to use a username and password to authenticate to your +SMTP service. To mitigate the risk of these credentials being leaked, you should +set up a dedicated email account for the Teleport plugin and rotate the password +regularly. + + + +- (!docs/pages/includes/tctl.mdx!) + +## Step 1/7. Define RBAC resources + +Before you set up the email plugin, you will need to enable Role Access Requests +in your Teleport cluster. + +(!/docs/pages/includes/plugins/editor-request-rbac.mdx!) + +## Step 2/7. Define a Teleport email plugin user + +The required permissions for the plugin are configured in the preset `access-plugin` +role. To generate credentials for the plugin, define either a Machine ID bot user +or a regular Teleport user. + + + +(!docs/pages/includes/plugins/rbac-impersonate-machine-id.mdx!) + + +(!docs/pages/includes/plugins/rbac-impersonate.mdx!) + + + +## Step 3/7. Export the access plugin identity + +Give the plugin access to a Teleport identity file. 
We recommend using Machine +ID for this in order to produce short-lived identity files that are less +dangerous if exfiltrated, though in demo deployments, you can generate +longer-lived identity files with `tctl`: + + + +(!docs/pages/includes/plugins/tbot-identity.mdx secret="teleport-plugin-email-identity"!) + + +(!docs/pages/includes/plugins/identity-export.mdx user="access-plugin" secret="teleport-plugin-email-identity"!) + + + +## Step 4/7. Install the Teleport email plugin + +In this step, you'll install the Teleport email plugin on a host that has network +access to your SMTP server. This host can be any system capable of reaching both +your Teleport cluster and the SMTP server used for sending emails. + +
+Using a local SMTP server? + +If you are using a local SMTP server to test the plugin, you should install the +plugin on your local machine to ensure the plugin can connect to the SMTP server +and perform any necessary DNS lookups to send email. + +Your Teleport cluster does *not* need to perform DNS lookups for your plugin +because the plugin dials out to the Teleport Proxy Service or Teleport Auth Service. + +
+
+(!docs/pages/includes/plugins/install-access-request.mdx name="email"!)
+
+## Step 5/7. Configure the plugin
+
+At this point, you have generated credentials that the email plugin will use to
+connect to Teleport. You will now configure the plugin to use these credentials
+to receive Access Request notifications from Teleport and email them to your
+chosen recipients.
+
+### Create a config file
+
+
+
+
+The Teleport email plugin uses a configuration file in TOML format. Generate a
+boilerplate configuration by running the following command:
+
+```code
+$ teleport-email configure | sudo tee /etc/teleport-email.toml
+```
+
+
+
+The email plugin Helm Chart uses a YAML values file to configure the plugin.
+On your local workstation, create a file called `teleport-email-helm.yaml`
+based on the following example:
+
+```yaml
+(!examples/resources/plugins/teleport-email-helm.yaml!)
+```
+
+
+
+
+### Edit the configuration file
+
+Edit the configuration file for your environment. We will show you how to set
+each value below.
+
+### `[teleport]`
+
+(!docs/pages/includes/plugins/config-toml-teleport.mdx!)
+
+(!docs/pages/includes/plugins/refresh-plugin-identity.mdx!)
+
+### `[mailgun]` or `[smtp]`
+
+Provide the credentials for your SMTP service depending on whether you are using
+Mailgun or a generic SMTP service.
+
+
+
+
+If you are deploying the email plugin on a Linux host:
+
+1. In the `mailgun` section, assign `domain` to the domain name and subdomain of
+   your Mailgun account.
+1. Assign `mailgun.private_key` to your Mailgun private key.
+
+If you are deploying the email plugin on Kubernetes:
+
+1. Write your Mailgun private key to a local file called `mailgun-private-key`.
+1. Create a Kubernetes secret from the file:
+
+   ```code
+   $ kubectl -n teleport create secret generic mailgun-private-key --from-file=mailgun-private-key
+   ```
+
+1. Assign `mailgun.privateKeyFromSecret` to `mailgun-private-key`.
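For reference, the Kubernetes-related Mailgun settings above end up in the Helm values file as a short fragment. The following is a sketch only; the domain is a placeholder, and the secret name must match the secret you created:

```yaml
# Fragment of teleport-email-helm.yaml (values are placeholders)
mailgun:
  domain: "sandbox123abc.mailgun.org"         # your Mailgun sending domain
  privateKeyFromSecret: "mailgun-private-key" # the Kubernetes secret created above
```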
+
+
+
+Assign `host` to the fully qualified domain name of your SMTP service, omitting
+the URL scheme and port. (If you're using a local SMTP server for testing, use
+`"localhost"` for `host`.) Assign `port` to the port of your SMTP service.
+
+If you are running the email plugin on a Linux host, fill in `username` and
+`password`.
+
+
+
+You can also save your password to a separate file and assign `password_file` to
+the file's path. The plugin reads the file and uses the file's content as the
+password.
+
+
+
+If you are deploying the email plugin on Kubernetes:
+
+1. Write your SMTP service's password to a local file called `smtp-password`.
+1. Create a Kubernetes secret from the file:
+
+   ```code
+   $ kubectl -n teleport create secret generic smtp-password --from-file=smtp-password
+   ```
+
+1. Assign `smtp.passwordFromSecret` to `smtp-password`.
+
+Disabling TLS for testing + +If you are testing the email plugin against a trusted internal SMTP server where +you would rather not use TLS—e.g., a local SMTP server on your development +machine—you can assign the `starttls_policy` setting to `disabled` (always +disable TLS) or `opportunistic` (disable TLS if the server does not advertise +the `STARTTLS` extension). The default is to always enforce TLS, and you should +leave this setting unassigned unless you know what you are doing and understand +the risks. + +For Kubernetes deployments, `starttls_policy` is called `smtp.starttlsPolicy` in +the Helm values file for the email plugin. + +
+ +
+
+ +### `[delivery]` + +Assign `sender` to the email address from which you would like the Teleport +plugin to send messages. + +### `[role_to_recipients]` + +The `role_to_recipients` map (`roleToRecipients` for Helm users) configures the +recipients that the email plugin will notify when a user requests access to a +specific role. When the plugin receives an Access Request from the Auth Service, +it will look up the role being requested and identify the recipients to notify. + + + + +Here is an example of a `role_to_recipients` map. Each value can be a single +string or an array of strings: + +```toml +[role_to_recipients] +"*" = ["security@example.com", "executive-team@example.com"] +"dev" = "eng@example.com" +"dba" = "mallory@example.com" +``` + + + + +In the Helm chart, the `role_to_recipients` field is called `roleToRecipients` +and uses the following format, where keys are strings and values are arrays of +strings: + +```yaml +roleToRecipients: + "*": ["security@example.com", "executive-team@example.com"] + "dev": ["eng@example.com"] + "dba": ["mallory@example.com"] +``` + + + + +In the `role_to_recipients` map, each key is the name of a Teleport role. Each +value configures the recipients the plugin will email when it receives an Access +Request for that role. Each string must be an email address. + +The `role_to_recipients` map must also include an entry for `"*"`, which the +plugin looks up if no other entry matches a given role name. In the example +above, requests for roles aside from `dev` and `dba` will notify +`security@example.com` and `executive-team@example.com`. + +
+Suggested reviewers + +Users can suggest reviewers when they create an Access Request, e.g.,: + +```code +$ tsh request create --roles=dbadmin --reviewers=alice@example.com,ivan@example.com +``` + +If an Access Request includes suggested reviewers, the email plugin will add +these to the list of recipients to notify. If a suggested reviewer is an email +address, the plugin will send a message to that recipient in addition to those +configured in `role_to_recipients`. + +
+ +Configure the email plugin to notify you when a user requests the `editor` role +by adding the following to your `role_to_recipients` config, replacing +`YOUR_EMAIL_ADDRESS` with the appropriate address: + + + +```toml +[role_to_recipients] +"*" = "YOUR_EMAIL_ADDRESS" +"editor" = "YOUR_EMAIL_ADDRESS" +``` + + +```yaml +roleToRecipients: + "*": "YOUR_EMAIL_ADDRESS" + "editor": "YOUR_EMAIL_ADDRESS" +``` + + + +
+Configuring recipients without role mapping
+
+If you do not plan to use role-to-recipient mapping, you can configure the
+Teleport email plugin to notify a static list of recipients for every Access
+Request event by using the `delivery.recipients` field:
+
+
+
+```toml
+[delivery]
+recipients = ["eng@example.com", "dev@example.com"]
+```
+
+```yaml
+delivery:
+  recipients: ["eng@example.com", "dev@example.com"]
+```
+
+
+
+If you use `delivery.recipients`, you must remove the `role_to_recipients`
+configuration section. Behind the scenes, `delivery.recipients` assigns the
+recipient list to a `role_to_recipients` mapping under the wildcard value `"*"`.
+
+
+Your configuration should resemble the following:
+
+
+
+
+```toml
+# /etc/teleport-email.toml
+[teleport]
+addr = "example.com:443"
+identity = "/var/lib/teleport/plugins/email/identity"
+refresh_identity = true
+
+[mailgun]
+domain = "sandbox123abc.mailgun.org"
+private_key = "xoxb-fakekey62b0eac53565a38c8cc0316f6"
+
+# As an alternative, you can use SMTP server credentials:
+#
+# [smtp]
+# host = "smtp.gmail.com"
+# port = 587
+# username = "username@gmail.com"
+# password = ""
+# password_file = "/var/lib/teleport/plugins/email/smtp_password"
+# starttls_policy = "mandatory"
+
+[delivery]
+sender = "noreply@example.com"
+
+[role_to_recipients]
+"*" = "eng@example.com"
+"editor" = ["admin@example.com", "execs@example.com"]
+
+[log]
+output = "stderr" # Logger output. Could be "stdout", "stderr" or "/var/lib/teleport/email.log"
+severity = "INFO" # Logger severity. Could be "INFO", "ERROR", "DEBUG" or "WARN".
+```
+
+
+
+
+```yaml
+# teleport-email-helm.yaml
+teleport:
+  address: "teleport.example.com:443"
+  identitySecretName: teleport-plugin-email-identity
+  identitySecretPath: identity
+
+mailgun:
+  domain: "sandbox123abc.mailgun.org"
+  privateKeyFromSecret: "mailgun-private-key"
+
+# As an alternative, you can use SMTP server credentials:
+#
+# smtp:
+#   host: "smtp.gmail.com"
+#   port: 587
+#   username: "username@gmail.com"
+#   passwordFromSecret: "smtp-password"
+#   starttlsPolicy: "mandatory"
+
+delivery:
+  sender: "noreply@example.com"
+
+roleToRecipients:
+  "*": "eng@example.com"
+  "editor": ["admin@example.com", "execs@example.com"]
+```
+
+
+
+
+## Step 6/7.
Test the email plugin + +After finishing your configuration, you can now run the plugin and test your +email-based Access Request flow: + + + + +```code +$ teleport-email start +``` + +If everything works as expected, the log output should look like this: + +```code +$ teleport-email start +INFO Starting Teleport Access Email Plugin (): email/app.go:80 +INFO Plugin is ready email/app.go:101 +``` + + +Start the plugin: + +```code +$ docker run -v :/etc/teleport-email.toml public.ecr.aws/gravitational/teleport-plugin-email:(=teleport.version=) start +``` + + +Install the plugin: + +```code +$ helm upgrade --install teleport-plugin-email teleport/teleport-plugin-email --values teleport-email-helm.yaml +``` + +To inspect the plugin's logs, use the following command: + +```code +$ kubectl logs deploy/teleport-plugin-email +``` + +Debug logs can be enabled by setting `log.severity` to `DEBUG` in +`teleport-email-helm.yaml` and executing the `helm upgrade ...` command +above again. Then you can restart the plugin with the following command: + +```code +$ kubectl rollout restart deployment teleport-plugin-email +``` + + + +### Create an Access Request + +(!docs/pages/includes/plugins/create-request.mdx!) + +The recipients you configured earlier should receive notifications of the +request by email. + +### Resolve the request + +(!docs/pages/includes/plugins/resolve-request.mdx!) + +## Step 7/7. Set up systemd + +This section is only relevant if you are running the Teleport email plugin on a +Linux host. + +In production, we recommend starting the Teleport plugin daemon via an init +system like systemd. Here's the recommended Teleport plugin service unit file +for systemd: + +```ini +(!/examples/systemd/plugins/teleport-email.service!) +``` + +Save this as `teleport-email.service` in either `/usr/lib/systemd/system/` or +another [unit file load +path](https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Unit%20File%20Load%20Path) +supported by systemd. 
+ +Enable and start the plugin: + +```code +$ sudo systemctl enable teleport-email +$ sudo systemctl start teleport-email +``` + diff --git a/docs/pages/identity-governance/access-requests/plugins/how-to-build.mdx b/docs/pages/identity-governance/access-requests/plugins/how-to-build.mdx new file mode 100644 index 0000000000000..18ab0b65d4ace --- /dev/null +++ b/docs/pages/identity-governance/access-requests/plugins/how-to-build.mdx @@ -0,0 +1,503 @@ +--- +title: How to Build an Access Request Plugin +sidebar_position: 11 +description: Manage Access Requests using custom workflows with the Teleport API +tags: + - access-requests + - how-to + - zero-trust + - privileged-access +enterprise: Identity Governance +--- + +With Teleport [Access Requests](../access-requests.mdx), you can +assign Teleport users to less privileged roles by default and allow them to +temporarily escalate their privileges. Reviewers can grant or deny Access +Requests within your organization's existing communication workflows (e.g., +Slack, email, and PagerDuty) using [Access Request +plugins](plugins.mdx). + +You can use Teleport's API client library to build an Access Request plugin that +integrates with your organization's unique workflows. + +In this guide, we will explore a number of Teleport's API client libraries by +showing you how to write a plugin that lets you manage Access Requests via +Google Sheets. The plugin lists new Access Requests in a Google Sheets +spreadsheet, with links to allow or deny each request. + +![The result of the plugin](../../../../img/api/google-sheets.png) + +## How it works + +A Teleport Access Request plugin authenticates to the Teleport Auth Service gRPC +API and receives messages when the Auth Service creates or updates an Access +Request. The plugin then interacts with a third-party API based on the Access +Request notifications it receives. 
The Teleport API client library includes
+methods for authenticating to the API using a Teleport identity file, as well as
+for receiving audit events from the Auth Service in order to perform some
+action.
+
+
+
+The plugin we will build in this guide is intended as a learning tool. **Do not
+connect it to your production Teleport cluster.** Use a demo cluster instead.
+
+
+
+## Prerequisites
+
+(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!)
+
+- Go version (=teleport.golang=)+ installed on your workstation. See the [Go
+  download page](https://go.dev/dl/). You do not need to be familiar with Go to
+  complete this guide, though Go knowledge is required if you want to build your
+  own Access Request plugin.
+
+You will need the following in order to set up the demo plugin, which requires
+authenticating to the Google Sheets API:
+
+- A Google Cloud project with permissions to create service accounts.
+- A Google account that you will use to create a Google Sheets spreadsheet. We
+  will grant permissions to edit the spreadsheet to the service account used for
+  the plugin.
+
+
+
+Even if you do not plan to set up the demo project, you can follow this guide to see
+which libraries, types, and functions you can use to develop an Access Request
+plugin.
+
+The demo is a minimal working example, and you can see fully fledged plugins in
+the
+[`gravitational/teleport`](https://github.com/gravitational/teleport/tree/v(=teleport.version=)/integrations/access)
+repository on GitHub.
+
+
+
+(!docs/pages/includes/machine-id/plugin-prerequisites.mdx!)
+
+## Step 1/5.
Set up your Go project + +Download the source code for our minimal Access Request plugin: + +```code +$ git clone https://github.com/gravitational/teleport -b branch/v(=teleport.major_version=) +$ cd teleport/examples/access-plugin-minimal +``` + +For the rest of this guide, we will show you how to set up this plugin and +explore the way the plugin uses Teleport's API to integrate Access Requests with +a particular workflow. + +## Step 2/5. Set up the Google Sheets API + +Access Request plugins typically communicate with two APIs. They receive Access +Request events from the Teleport Auth Service's gRPC API, and use the data to +interact with the API of your chosen messaging or collaboration tool. + +In this section, we will enable the Google Sheets API, create a Google Cloud +service account for the plugin, and use the service account to authenticate the +plugin to Google Sheets. + +### Enable the Google Sheets API + +Enable the Google Sheets API by visiting the following Google Cloud console URL: + +https://console.cloud.google.com/apis/enableflow?apiid=sheets.googleapis.com + +Ensure that your Google Cloud project is the one you intend to use. + +Click **Next** > **Enable**. + +### Create a Google Cloud service account for the plugin + +Visit the following Google Cloud console URL: + +https://console.cloud.google.com/iam-admin/serviceaccounts + +Click **Create Service Account**. + +For **Service account name**, enter "Teleport Google Sheets Plugin". Google +Cloud will populate the **Service account ID** field for you. + +Click **Create and Continue**. When prompted to grant roles to the service +account, click **Continue** again. We will create our service account without +roles. Skip the step to grant users access to the service account, clicking +**Done**. + +The console will take you to the **Service accounts** view. Click the name of +the service account you just created, then click the **Keys** tab. Click **Add +Key**, then **Create new key**. 
Leave the **Key type** as "JSON" and click +**Create**. + +Save your Google Cloud credentials file as `credentials.json` in your Go project +directory. + +Your plugin will use this JSON file to authenticate to Google Sheets. + +### Create a Google Sheets spreadsheet + +Visit the following URL and make sure you are authenticated as the correct user: + +https://sheets.new + +Name your spreadsheet. + +Give the plugin access to the spreadsheet by clicking **Share**. In the **Add +people and groups** field, enter +`teleport-google-sheets-plugin@PROJECT_NAME.iam.gserviceaccount.com`, replacing +`PROJECT_NAME` with the name of your project. Make sure that the service account +has "Editor" permissions. Click **Share**, then **Share anyway** when prompted +with a warning. + +By authenticating to Google Sheets with the service account you created, the +plugin will have access to modify your spreadsheet. + +Next, ensure that the following is true within your spreadsheet: + +- There is only one sheet +- The sheet includes the following columns: + +|ID|Created|User|Roles|Status|Link| +|---|---|---|---|---|---| + +After we write our Access Request plugin, it will populate the spreadsheet with +data automatically. + +## Step 3/5. Set up Teleport RBAC + +In this section, we will set up Teleport roles that enable creating and +reviewing Access Requests, plus another Teleport role that can generate +credentials for your Access Request plugin to authenticate to Teleport. + +### Set up RBAC for the plugin + +The required permissions for the plugin are configured in the preset `access-plugin` +role. To generate credentials for the plugin, define either a Machine ID bot user +or a regular Teleport user. + + + +(!docs/pages/includes/plugins/rbac-impersonate-machine-id.mdx!) + + +(!docs/pages/includes/plugins/rbac-impersonate.mdx!) + + + +### Export the access plugin identity + +Give the plugin access to a Teleport identity file. 
We recommend using Machine
+ID for this in order to produce short-lived identity files that are less
+dangerous if exfiltrated, though in demo deployments, you can generate
+longer-lived identity files with `tctl`:
+
+
+
+(!docs/pages/includes/plugins/tbot-identity.mdx secret="teleport-plugin-identity"!)
+
+
+(!docs/pages/includes/plugins/identity-export.mdx user="access-plugin" secret="teleport-plugin-identity"!)
+
+
+
+Teleport's Access Request plugins listen for new and updated Access Requests by
+connecting to the Teleport Auth Service's gRPC endpoint over TLS.
+
+The identity file includes both TLS and SSH credentials. Your Access Request
+plugin uses the SSH credentials to connect to the Proxy Service, which
+establishes a reverse tunnel connection to the Auth Service. The plugin uses
+this reverse tunnel, along with your TLS credentials, to connect to the Auth
+Service's gRPC endpoint.
+
+You will refer to this file later when configuring the plugin.
+
+### Set up Role Access Requests
+
+In this guide, we will use our plugin to manage Role Access Requests, so we
+need to enable Role Access Requests in your cluster.
+
+(!/docs/pages/includes/plugins/editor-request-rbac.mdx!)
+
+## Step 4/5. Write the Access Request plugin
+
+In this step, we will walk you through the structure of the Access Request
+plugin in `examples/access-plugin-minimal/main.go`. You can use the example here
+to write your own Access Request plugin.
+
+### Imports
+
+Here are the packages our Access Request plugin will import from Go's standard
+library:
+
+|Package|Description|
+|---|---|
+|`context`|Includes the `context.Context` type. `context.Context` is an abstraction for controlling long-running routines, such as connections to external services, that might fail or time out. Programs can cancel contexts or assign them timeouts and metadata.|
+|`errors`|Working with errors.|
+|`fmt`|Formatting data for printing, strings, or errors.|
+|`strings`|Manipulating strings.|
+
+The plugin imports the following third-party code:
+
+|Package|Description|
+|---|---|
+|`github.com/gravitational/teleport/api/client`|A library for authenticating to the Auth Service's gRPC API and making requests.|
+|`github.com/gravitational/teleport/api/types`|Types used in the Auth Service API, e.g., Access Requests.|
+|`github.com/gravitational/trace`|Presenting errors with more useful detail than the standard library provides.|
+|`google.golang.org/api/option`|Settings for configuring Google API clients.|
+|`google.golang.org/api/sheets/v4`|The Google Sheets API client library, aliased as `sheets` in our program.|
+|`google.golang.org/grpc`|The gRPC client and server library.|
+
+### Configuration
+
+First, we declare two constants that you need to configure for your environment:
+
+```go
+(!examples/access-plugin-minimal/config.go!)
+```
+
+`proxyAddr` indicates the hostname and port of your Teleport Proxy Service or
+Teleport Enterprise Cloud tenant. Assign it to the address of your own Proxy
+Service, e.g., `mytenant.teleport.sh:443`.
+
+Assign `spreadSheetID` to the ID of the spreadsheet you created earlier. To find
+the spreadsheet ID, visit your spreadsheet in Google Drive. The ID will be in
+the URL path segment called `SPREADSHEET_ID` below:
+
+```text
+https://docs.google.com/spreadsheets/d/SPREADSHEET_ID/edit#gid=0
+```
+
+### The `AccessRequestPlugin` type
+
+The `plugin.go` file declares types that we will use to organize our Access
+Request plugin code:
+
+```go
+(!examples/access-plugin-minimal/plugin.go!)
+```
+
+The `AccessRequestPlugin` type represents a generic Access Request plugin, and
+you can use this type to build your own plugin. It contains a Teleport API
+client and an `EventHandler`, any Go type that implements a `HandleEvent`
+method.
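+
+To make the shape of this pattern concrete, here is a self-contained sketch.
+The `Event` type and the client field below are simplified stand-ins for
+illustration, not Teleport's real API types:
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+)
+
+// Event is a stand-in for the audit events that the Teleport API client
+// delivers to the plugin.
+type Event struct {
+	Resource string
+}
+
+// EventHandler is any type with a HandleEvent method.
+type EventHandler interface {
+	HandleEvent(ctx context.Context, e Event) error
+}
+
+// AccessRequestPlugin pairs an API client with an event handler. In the real
+// plugin, the client field holds a *client.Client from
+// github.com/gravitational/teleport/api/client.
+type AccessRequestPlugin struct {
+	TeleportClient any
+	EventHandler   EventHandler
+}
+
+// printHandler implements EventHandler by logging each event.
+type printHandler struct{}
+
+func (printHandler) HandleEvent(ctx context.Context, e Event) error {
+	fmt.Println("handling event for", e.Resource)
+	return nil
+}
+
+func main() {
+	p := AccessRequestPlugin{EventHandler: printHandler{}}
+	_ = p.EventHandler.HandleEvent(context.Background(), Event{Resource: "access-request-1234"})
+}
+```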
+ +In our case, the type that implements `HandleEvent` is `googleSheetsClient`, a +struct type that contains an API client for Google Sheets. + +### Prepare row data + +Whether creating a new row of the spreadsheet or updating an existing one, we +need a way to extract data from an Access Request in order to provide it to +Google Sheets. We achieve this with the `makeRowData` method: + +```go +(!examples/access-plugin-minimal/makerowdata.go!) +``` + +The `sheets.RowData` type makes extensive use of pointers to strings, so we +introduce a utility function called `stringPtr` that returns the pointer to the +provided string. This makes it easier to assign the values of cells in the +`sheets.RowData` using chains of function calls. + +`makeRowData` is a method of the `googleSheetsClient` type. (The `*` before +`googleSheetsClient` indicates that the method receives a *pointer* to a +`googleSheetsClient`.) It takes a `types.AccessRequest`, which Teleport's API +library uses to represent the fields within an Access Request. + +The Google Sheets client library defines a `sheets.RowData` type that we +include in requests to update a spreadsheet. This function converts a +`types.AccessRequest` into a `*sheets.RowData` (another pointer). + +Access Requests have one of four states: approved, denied, pending, and none. +We obtain the request states from Teleport's `types` library and map them to +strings in the `requestStates` map. + +When extracting the data, we use the `types.AccessRequest.GetName()` method to +retrieve the ID of the Access Request as a string we can include in the +spreadsheet. + +Users can review an Access Request by visiting a URL within the Teleport Web UI +that corresponds to the request's ID. `makeRowData` assembles a `=HYPERLINK` +formula that we can insert into the spreadsheet as a link to this URL. 
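+
+As an illustration, such a formula can be assembled by a small helper like the
+one below. The helper name and the Web UI path are assumptions made for this
+sketch, not code taken from the example plugin:
+
+```go
+package main
+
+import "fmt"
+
+// requestLink builds a Google Sheets =HYPERLINK formula that points to the
+// Teleport Web UI page for the given Access Request ID. The "/web/requests/"
+// path is an assumption for this sketch.
+func requestLink(proxyAddr, requestID string) string {
+	return fmt.Sprintf(
+		`=HYPERLINK("https://%s/web/requests/%s", "View Access Request")`,
+		proxyAddr, requestID,
+	)
+}
+
+func main() {
+	fmt.Println(requestLink("mytenant.teleport.sh:443", "abcd-1234"))
+}
+```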
+ +### Create a row + +The following function submits a request to the Google Sheets API to create a +new row based on an incoming Access Request, using the data returned by +`makeRowData`. It returns an error if the attempt to create a row failed: + +```go +(!examples/access-plugin-minimal/createrow.go!) +``` + +`createRow` assembles a `sheets.BatchUpdateSpreadsheetRequest` and sends it to +the Google Sheets API using `g.sheetsClient.BatchUpdate()`, returning errors +encountered while sending the request. + +We log unexpected HTTP status codes without returning an error since these may +be transient server-side issues. A production Access Request plugin would handle +these situations in a more sophisticated way, e.g., storing the request so it +can retry it later. + +### Update a row + +The code for updating a row is similar to the code for creating a new row: + +```go +(!examples/access-plugin-minimal/updaterow.go!) +``` + +The only difference between `updateRow` and `createRow` is that we send a +`&sheets.UpdateCellsRequest` instead of a `&sheets.AppendCellsRequest`. This +function takes the number of a row within the spreadsheet to update and sends a +request to update that row with information from the provided Access Request. + +### Determine where to update the spreadsheet + +When our program receives an event that updates an Access Request, it needs a +way to look up the row in the spreadsheet that corresponds to the Access Request +so it can update the row: + +```go +(!examples/access-plugin-minimal/updatespreadsheet.go!) +``` + +`updateSpreadSheet` takes a `types.AccessRequest`, gets the latest data from +your spreadsheet, determines which row to update, and calls `updateRow` +accordingly. It uses linear search to look up the first column within each row +of the sheet and check whether that column matches the ID of the Access Request. +It then calls `updateRow` with the Access Request and the row's number. 
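+
+The lookup itself reduces to a linear scan. Here is a self-contained sketch
+that uses plain string slices in place of the Google Sheets row types; the
+helper name is invented for illustration:
+
+```go
+package main
+
+import "fmt"
+
+// findRequestRow returns the zero-based index of the first row whose first
+// column matches the Access Request ID, or -1 if no row matches.
+func findRequestRow(rows [][]string, requestID string) int {
+	for i, row := range rows {
+		if len(row) > 0 && row[0] == requestID {
+			return i
+		}
+	}
+	return -1
+}
+
+func main() {
+	rows := [][]string{
+		{"req-1", "2024-01-01", "alice"},
+		{"req-2", "2024-01-02", "bob"},
+	}
+	fmt.Println(findRequestRow(rows, "req-2")) // prints 1
+	fmt.Println(findRequestRow(rows, "req-9")) // prints -1
+}
+```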
+
+### Handle incoming Access Requests
+
+The plugin calls a handler function when it receives an event. To set this up,
+we use the `Run` method of our generic `AccessRequestPlugin` type, which
+contains the main loop of the plugin:
+
+```go
+(!examples/access-plugin-minimal/watcherjob.go!)
+```
+
+As we described above, the `AccessRequestPlugin` type's `EventHandler` field is
+assigned to an interface with a `HandleEvent` method. In this case, the
+implementation is the `HandleEvent` method of `*googleSheetsClient`. This
+method checks whether an Access Request is in a pending state, i.e., whether
+the request is new. If so, it calls `createRow`. If not, it calls
+`updateSpreadsheet`.
+
+The Teleport API client type, `client.Client`, has a `NewWatcher` method that
+listens for new audit events from the Auth Service API via a gRPC stream. The
+second parameter of the method indicates the type of audit event to watch for,
+in this case, events having to do with Access Requests.
+
+The result of `NewWatcher`, a `types.Watcher`, enables `Run` to respond to new
+audit events by calling the `Events` method. This returns a Go **channel**, a
+runtime abstraction that allows concurrent routines to communicate. Another
+channel, returned by `Done`, indicates when the watcher has finished.
+
+In a `for` loop, the `Run` method receives from either the `Done` channel or
+the `Events` channel, whichever is ready to send first. If it receives from the
+`Events` channel, it calls the `HandleEvent` method to process the event.
+
+### Initialize the API clients
+
+Now we have all the code we need to use the Teleport and Google Sheets API
+clients to listen for Access Request events and use them to maintain a
+spreadsheet. The final step is to start our program by initializing the API
+clients:
+
+```go
+(!examples/access-plugin-minimal/main.go!)
+```
+
+The `main` function, the entrypoint to our program, initializes an
+`AccessRequestPlugin` and a `googleSheetsClient` and uses them to run the
+plugin.
+ +The function creates a Google Sheets API client by loading the credentials file +you downloaded earlier at the relative path `credentials.json`. + +`client` is Teleport's library for setting up an API client. Our plugin does so +by calling `client.LoadIdentityFile` to obtain a `client.Credentials`. It then +uses the `client.Credentials` to call `client.New`, which connects to the +Teleport Proxy Service specified in the `Addrs` field using the provided +identity file. + +In this example, we are passing the `grpc.WithReturnConnectionError()` function +call to `client.New`, which instructs the gRPC client to return more detailed +connection errors. + + + +This program does not validate your credentials or Teleport cluster address. +Make sure that: + +- The identity file you exported earlier does not have an expired TTL +- The value you supplied for the `proxyAddr` constant includes both the host + **and** the web port of your Teleport Proxy Service, e.g., + `mytenant.teleport.sh:443` + + + +## Step 5/5. Test your plugin + +Run the plugin to forward Access Requests from your Teleport cluster to Google +Sheets. Execute the following command from within +`examples/access-plugin-minimal`: + +```code +$ go run teleport-sheets +``` + +Now that the plugin is running, create an Access Request: + +(!docs/pages/includes/plugins/create-request.mdx!) + +You should see the new Access Request in your spreadsheet with the `PENDING` +state. + +In your spreadsheet, click "View Access Request" next to your new request. Sign +into the Teleport Web UI as your original user. When you submit your review, +e.g., deny the request, the new status will appear within the spreadsheet. + + + +Access Request plugins must not enable reviewing Access Requests via the plugin, +and must always refer a reviewer to the Teleport Web UI to complete the review. +Otherwise, an unauthorized party could spoof traffic to the plugin and escalate +privileges. 
+
+
+
+## Next steps
+
+In this guide, we showed you how to set up an Access Request plugin using
+Teleport's API client libraries. To go beyond the minimal plugin we demonstrate
+in this guide, you can use the Teleport API to set up more sophisticated
+workflows that take full advantage of your communication and project management
+tools.
+
+### Manage state
+
+While the plugin we developed in this guide is stateless, updating Access
+Request information by searching all rows of a spreadsheet, real-world Access
+Request plugins typically need to manage state. You can use the
+[`plugindata`](https://pkg.go.dev/github.com/gravitational/teleport/api/types#PluginData)
+package to make it easier for your Access Request plugin to do this.
+
+### Consult the examples
+
+Explore the
+[`gravitational/teleport`](https://github.com/gravitational/teleport/tree/v(=teleport.version=)/integrations/access)
+repository on GitHub for examples of plugins developed at Teleport. You can see
+how these plugins use the packages we discuss in this guide, as well as how they
+add more complete functionality like configuration validation and state
+management.
+
+### Provision the plugin with short-lived credentials
+
+In this example, we used the `tctl auth sign` command to fetch credentials for
+the plugin. For production usage, we recommend provisioning short-lived
+credentials via Machine & Workload Identity, which reduces the risk of these
+credentials being stolen. View our [Machine & Workload Identity
+documentation](../../../machine-workload-identity/machine-workload-identity.mdx)
+to learn more.
+ diff --git a/docs/pages/identity-governance/access-requests/plugins/jira.mdx b/docs/pages/identity-governance/access-requests/plugins/jira.mdx new file mode 100644 index 0000000000000..786583a8da99d --- /dev/null +++ b/docs/pages/identity-governance/access-requests/plugins/jira.mdx @@ -0,0 +1,452 @@ +--- +title: Run the Jira Access Request Plugin +sidebar_label: Jira +description: How to set up the Teleport Jira plugin to notify users when another user requests elevated privileges. +tags: + - access-requests + - how-to + - identity-governance + - privileged-access +enterprise: Identity Governance +--- + +This guide explains how to set up the Teleport Access Request plugin for Jira. +Teleport's Jira integration allows you to manage Access Requests as +Jira issues. + +
+This integration is hosted on Teleport Cloud + +(!docs/pages/includes/plugins/enroll.mdx name="the Jira integration"!) + +
+
+## How it works
+
+The Teleport Jira plugin synchronizes a Jira project board with the Access
+Requests processed by your Teleport cluster. When you change the status of an
+Access Request within Teleport, the plugin updates the board. When you update
+the status of an Access Request on the board, Jira notifies a webhook run by
+the plugin, which modifies the Access Request in Teleport.
+
+## Prerequisites
+
+(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!)
+
+(!docs/pages/includes/machine-id/plugin-prerequisites.mdx!)
+
+- A Jira account with permissions to create applications and webhooks.
+
+- A registered domain name for the Jira webhook. Jira notifies the webhook of
+  changes in your project board.
+
+- An environment where you will run the Jira plugin. This is either:
+
+  - A Linux virtual machine with ports `80` and `8081` open, plus a means of
+    accessing the host (e.g., OpenSSH with an SSH port exposed to your
+    workstation).
+  - A Kubernetes cluster deployed via a cloud provider. This guide shows you how
+    to allow traffic to the Jira plugin via a `LoadBalancer` service, so your
+    environment must support services of this type.
+
+- A means of providing TLS credentials for the Jira webhook run by the plugin.
+  **TLS certificates must not be self-signed.** For example, you can obtain TLS
+  credentials for the webhook with Let's Encrypt by using an [ACME
+  client](https://letsencrypt.org/docs/client-options/).
+
+  - If you run the plugin on a Linux server, you must provide TLS credentials in
+    a directory available to the plugin.
+  - If you run the plugin on Kubernetes, you must write these credentials to a
+    secret that the plugin can read. This guide assumes that the name of the
+    secret is `teleport-plugin-jira-tls`.
+
+- (!docs/pages/includes/tctl.mdx!)
+
+## Step 1/8. Define RBAC resources
+
+Before you set up the Jira plugin, you need to enable Role Access Requests in
+your Teleport cluster.
+
+(!docs/pages/includes/plugins/editor-request-rbac.mdx!)
+
+## Step 2/8. Define a Teleport Jira plugin user
+
+The required permissions for the plugin are configured in the preset `access-plugin`
+role. To generate credentials for the plugin, define either a Machine ID bot user
+or a regular Teleport user.
+
+
+
+(!docs/pages/includes/plugins/rbac-impersonate-machine-id.mdx!)
+
+
+(!docs/pages/includes/plugins/rbac-impersonate.mdx!)
+
+
+
+## Step 3/8. Export the access plugin identity
+
+Give the plugin access to a Teleport identity file. We recommend using Machine
+ID for this in order to produce short-lived identity files that are less
+dangerous if exfiltrated, though in demo deployments, you can generate
+longer-lived identity files with `tctl`:
+
+
+
+(!docs/pages/includes/plugins/tbot-identity.mdx secret="teleport-plugin-jira-identity"!)
+
+
+(!docs/pages/includes/plugins/identity-export.mdx user="access-plugin" secret="teleport-plugin-jira-identity"!)
+
+
+
+## Step 4/8. Install the Teleport Jira plugin
+
+Install the Teleport Jira plugin following the instructions below, which depend
+on whether you are deploying the plugin on a host (e.g., an EC2 instance) or a
+Kubernetes cluster.
+
+The Teleport Jira plugin must run on a host or Kubernetes cluster that can
+access both Jira and your Teleport Proxy Service (or Teleport Enterprise Cloud
+tenant).
+
+(!docs/pages/includes/plugins/install-access-request.mdx name="jira"!)
+
+## Step 5/8. Set up a Jira project
+
+In this section, you will create a Jira project that the Teleport plugin can
+modify when a Teleport user creates or updates an Access Request. The plugin
+then uses the Jira webhook to monitor the state of the board and respond to any
+changes in the tickets it creates.
+
+### Create a project for managing Access Requests
+
+In Jira, find the top navigation bar and click **Projects** -> **Create
+project**. Select **Kanban** for the template, then **Use template**.
Click +**Select a company-managed project**. + +You'll see a screen where you can enter a name for your project. In this guide, +we assume that your project is called "Teleport Access Requests", which +receives the key `TAR` by default. + +Make sure "Connect repositories, documents, and more" is unset, then click +**Create project**. + +In the three-dots menu on the upper right of your new board, click **Board +settings**, then click **Columns**. Edit the statuses in your board so it +contains the following four: + +1. Pending +1. Approved +1. Denied +1. Expired + +Create a column with the same name as each status. The result should be the +following: + +![Jira board setup](../../../../img/enterprise/plugins/jira/board-setup.png) + + + +If your project board does not contain these (and only these) columns, each with +a status of the same name, the Jira Access Request plugin will behave in +unexpected ways. Remove all other columns and statuses. + + + +Click **Back to board** to review your changes. + +### Retrieve your Jira API token + +Obtain an API token that the Access Request plugin uses to make +changes to your Jira project. Click the gear menu at the upper right of the +screen, then click **Atlassian account settings**. Click **Security** > +**Create and manage API tokens** > **Create API token**. + +Choose any label and click **Copy**. Paste the API token into a convenient +location (e.g., a password manager or local text document) so you can use it +later in this guide when you configure the Jira plugin. + +### Set up a Jira webhook + +Now that you have generated an API key that the Teleport Jira plugin uses to +manage your project, enable Jira to notify the Teleport Jira plugin when your +project is updated by creating a webhook. + +Return to Jira. Click the gear menu on the upper right of the screen. Click +**System** > **WebHooks** > **Create a WebHook**. + + + + +Enter "Teleport Access Request Plugin" in the "Name" field. 
In the "URL" field, +enter the domain name you created for the plugin earlier, plus port `8081`. + + + + +Enter "Teleport Access Request Plugin" in the "Name" field. In the "URL" field, +enter the domain name you created for the plugin earlier, plus port `443`. + + + + +The webhook needs to be notified only when an issue is created, updated, or +deleted. You can leave all the other boxes empty. + +Click **Create**. + +## Step 6/8. Configure the Jira Access Request plugin + +Earlier, you retrieved credentials that the Jira plugin uses to connect to +Teleport and the Jira API. You will now configure the plugin to use these +credentials and run the Jira webhook at the address you configured earlier. + +### Create a configuration file + + + +The Teleport Jira plugin uses a configuration file in TOML format. Generate a +boilerplate configuration by running the following command (the plugin will not run +unless the config file is in `/etc/teleport-jira.toml`): + +```code +$ teleport-jira configure | sudo tee /etc/teleport-jira.toml > /dev/null +``` + +This should result in a configuration file like the one below: + +```toml +(!examples/resources/plugins/teleport-jira-cloud.toml!) +``` + + +The Helm chart for the Jira plugin uses a YAML values file to configure the +plugin. On your local workstation, create a file called +`teleport-jira-helm.yaml` based on the following example: + +```yaml +(!examples/resources/plugins/teleport-jira-helm-cloud.yaml!) +``` + + + + +### Edit the configuration file + +Open the configuration file created for the Teleport Jira plugin and update the +following fields: + +**`[teleport]`** + +The Jira plugin uses this section to connect to your Teleport cluster: + +(!docs/pages/includes/plugins/config-toml-teleport.mdx!) + +(!docs/pages/includes/plugins/refresh-plugin-identity.mdx!) + + + + +### `jira` + +**url:** The URL of your Jira tenant, e.g., `https://[your-jira].atlassian.net`. 
+ +**username:** The username you were logged in as when you created your API +token. + +**api_token:** The Jira API token you retrieved earlier. + +**project:** The project key for your project, which in our case is `TAR`. + +You can leave `issue_type` as `Task` or remove the field, as `Task` is the +default. + +### `http` + +The `[http]` setting block describes how the plugin's webhook works. + +**listen_addr** indicates the address that the plugin listens on, and defaults +to `:8081`. If you opened port `8081` on your plugin host as we recommended +earlier in the guide, you can leave this option unset. + +**public_addr** is the public address of your webhook. This is the domain name you +added to the DNS A record you created earlier. + +**https_key_file** and **https_cert_file** correspond to the private key and +certificate you obtained before following this guide. Use the following values, +assigning to the domain name you created for the +plugin earlier: + +- **https_key_file:** + + ```code + $ /var/teleport-jira/tls/certificates/acme-v02.api.letsencrypt.org-directory//.key + ``` + +- **https_cert_file:** + + ```code + $ /var/teleport-jira/tls/certificates/acme-v02.api.letsencrypt.org-directory//.crt + ``` + + + + +### `jira` + +**url:** The URL of your Jira tenant, e.g., `https://[your-jira].atlassian.net`. + +**username:** The username you were logged in as when you created your API +token. + +**apiToken:** The API token you retrieved earlier. + +**project:** The project key for your project, which in our case is `TAR`. + +You can leave `issueType` as `Task` or remove the field, as `Task` is the +default. + +### `http` + +The `http` setting block describes how the plugin's webhook works. + +**publicAddress:** The public address of your webhook. This is the domain name +you created for your webhook. (We will create a DNS record for this domain name +later.) + +**tlsFromSecret:** The name of a Kubernetes secret containing TLS credentials +for the webhook. 
Use `teleport-plugin-jira-tls`. + + + + +## Step 7/8. Run the Jira plugin + +After finishing your configuration, you can now run the plugin and test your +Jira-based Access Request flow: + + + + +Run the following on your Linux host: + +```code +$ sudo teleport-jira start +INFO Starting Teleport Jira Plugin 12.1.1: jira/app.go:112 +INFO Plugin is ready jira/app.go:142 +``` + + + +Install the Helm chart for the Teleport Jira plugin: + +```code +$ helm install teleport-plugin-jira teleport/teleport-plugin-jira \ + --namespace teleport \ + --values values.yaml \ + --version (=teleport.plugin.version=) +``` + +Create a DNS record that associates the webhook's domain name with the address +of the load balancer created by the Jira plugin Helm chart. + +See whether the load balancer has a domain name or IP address: + +```code +$ kubectl -n teleport get services/teleport-plugin-jira +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +teleport-plugin-jira LoadBalancer 10.100.135.75 abc123.us-west-2.elb.amazonaws.com 80:30625/TCP,443:31672/TCP 134m +``` + +If the `EXTERNAL-IP` field has a domain name for the value, create a `CNAME` +record in which the domain name for your webhook points to the domain name of +the load balancer. + +If the `EXTERNAL-IP` field's value is an IP address, create a DNS `A` record +instead. + +You can then generate signed TLS credentials for the Jira plugin, which expects +them to be written to a Kubernetes secret. + + + + +### Check the status of the webhook + +Confirm that the Jira webhook has started serving by sending a GET request to +the `/status` endpoint. 
If the webhook is running, it will return a `200` status +code with no document body: + + + + +```code +$ curl -v https://:8081/status 2>&1 | grep "^< HTTP/2" +< HTTP/2 200 +``` + + + + +```code +$ curl -v https://:443/status 2>&1 | grep "^< HTTP/2" +< HTTP/2 200 +``` + + + + +### Create an Access Request + +Sign in to your cluster as the `myuser` user you created earlier and create an +Access Request: + +(!docs/pages/includes/plugins/create-request.mdx!) + +When you create the request, you will see a new task in the "Pending" column of the Access Requests board: + +![New Access Request](../../../../img/enterprise/plugins/jira/new-request.png) + +### Resolve the request + +Move the card corresponding to your new Access Request to the "Denied" column, +then click the card and navigate to Teleport. You will see that the Access +Request has been denied. + + + +Anyone with access to the Jira project board can modify the status of Access +Requests reflected on the board. You can check the Teleport audit log to ensure +that the right users are reviewing the right requests. + +When auditing Access Request reviews, check for events with the type `Access +Request Reviewed` in the Teleport Web UI. + + + +## Step 8/8. Set up systemd + + + +This step is only applicable if you are running the Teleport Jira plugin on a +Linux machine. + + + +In production, we recommend starting the Teleport plugin daemon via an init +system like systemd. Here's the recommended Teleport plugin service unit file +for systemd: + +```txt +(!examples/systemd/plugins/teleport-jira.service!) +``` + +Save this as `teleport-jira.service` or another [unit file load +path](https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Unit%20File%20Load%20Path) +supported by systemd. 
+ +```code +$ sudo systemctl enable teleport-jira +$ sudo systemctl start teleport-jira +``` diff --git a/docs/pages/identity-governance/access-requests/plugins/mattermost.mdx b/docs/pages/identity-governance/access-requests/plugins/mattermost.mdx new file mode 100644 index 0000000000000..5b86b07f8fb4c --- /dev/null +++ b/docs/pages/identity-governance/access-requests/plugins/mattermost.mdx @@ -0,0 +1,382 @@ +--- +title: Run the Mattermost Access Request plugin +sidebar_label: Mattermost +description: How to set up Teleport's Mattermost plugin for privilege elevation approvals. +tags: + - access-requests + - how-to + - identity-governance + - privileged-access +enterprise: Identity Governance +--- + +import BotLogo from "/static/avatar_logo.png"; + +This guide explains how to integrate Access Requests with Mattermost, an open +source messaging platform. + +
+This integration is hosted on Teleport Cloud + +(!docs/pages/includes/plugins/enroll.mdx name="the Mattermost integration"!) + +
+ +## How it works + +The Teleport Mattermost plugin notifies individuals of Access Requests. Users +can then approve and deny Access Requests by following the message link, making +it easier to implement security best practices without compromising +productivity. + +![The Mattermost Access Request plugin](../../../../img/enterprise/plugins/mattermost/diagram.png) + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!) + +(!docs/pages/includes/machine-id/plugin-prerequisites.mdx!) + +- A Mattermost account with admin privileges. This plugin has been tested with + Mattermost v7.0.1. +- Either a Linux host or Kubernetes cluster where you will run the Teleport Mattermost plugin. +- (!docs/pages/includes/tctl.mdx!) + +## Step 1/8. Define RBAC resources + +Before you set up the Mattermost plugin, you will need to enable Role Access Requests +in your Teleport cluster. + +(!/docs/pages/includes/plugins/editor-request-rbac.mdx!) + +## Step 2/8. Define a Teleport Mattermost plugin user + +The required permissions for the plugin are configured in the preset `access-plugin` +role. To generate credentials for the plugin, define either a Machine ID bot user +or a regular Teleport user. + + + +(!docs/pages/includes/plugins/rbac-impersonate-machine-id.mdx!) + + +(!docs/pages/includes/plugins/rbac-impersonate.mdx!) + + + +## Step 3/8. Export the access plugin identity + +Give the plugin access to a Teleport identity file. We recommend using Machine +ID for this in order to produce short-lived identity files that are less +dangerous if exfiltrated, though in demo deployments, you can generate +longer-lived identity files with `tctl`: + + + +(!docs/pages/includes/plugins/tbot-identity.mdx secret="teleport-plugin-mattermost-identity"!) + + +(!docs/pages/includes/plugins/identity-export.mdx user="access-plugin" secret="teleport-plugin-mattermost-identity"!) + + + +## Step 4/8. 
Install the Teleport Mattermost plugin + + + + +We recommend installing Teleport plugins on the same host as the Teleport Proxy +Service. This is an ideal location as plugins have a low memory footprint, and +require both public internet access and Teleport Auth Service access. + + + + + +Install the Teleport Mattermost plugin on a host that can access both your +Teleport Proxy Service and your Mattermost deployment. + + + + + +(!docs/pages/includes/plugins/install-access-request.mdx name="mattermost"!) + +## Step 5/8. Register a Mattermost bot + +Now that you have generated the credentials your plugin needs to connect to your +Teleport cluster, register your plugin with Mattermost so it can send Access +Request messages to your workspace. + +In Mattermost, click the menu button in the upper left of the UI, then click +System Console → Integrations → Bot Accounts. + +Set "Enable Bot Account Creation" to "true". + +![Enable Mattermost bots](../../../../img/enterprise/plugins/mattermost/mattermost_admin_console_integrations_bot_accounts.png) + +This will allow you to create a new bot account for the Mattermost plugin. + +Go back to your team. In the menu on the upper left of the UI, click +Integrations → Bot Accounts → Add Bot Account. + +Set the "Username", "Display Name", and "Description" fields according to how +you would like the Mattermost plugin bot to appear in your workspace. Set "Role" +to "Member". + +{/* lint ignore absolute-docs-links */} +You can download our avatar to set as your Bot +Icon. + +Set "post:all" to "Enabled". + +![Enable Mattermost Bots](../../../../img/enterprise/plugins/mattermost/mattermost_bot.png) + +Click "Create Bot Account". We will use the resulting OAuth 2.0 token when we +configure the Mattermost plugin. + +## Step 6/8. Configure the Mattermost plugin + +At this point, you have generated credentials that the Mattermost plugin will use +to connect to Teleport and Mattermost. 
You will now configure the Mattermost +plugin to use these credentials and post messages in the right channels for your +workspace. + + + +The Mattermost plugin uses a configuration file in TOML format. On the host where you +will run the Mattermost plugin, generate a boilerplate configuration by running the +following commands: + +```code +$ teleport-mattermost configure > teleport-mattermost.toml +$ sudo mv teleport-mattermost.toml /etc +``` + + +The Mattermost Helm Chart uses a YAML values file to configure the plugin. On +the host where you have Helm installed, create a file called +`teleport-mattermost-helm.yaml` based on the following example: + +```yaml +(!examples/resources/plugins/teleport-mattermost-helm-cloud.yaml!) +``` + + + +Edit the configuration as explained below: + +### `[teleport]` + +(!docs/pages/includes/plugins/config-toml-teleport.mdx!) + +(!docs/pages/includes/plugins/refresh-plugin-identity.mdx!) + +### `[mattermost]` + +**`url`**: Include the scheme (`https://`) and fully qualified domain name of +your Mattermost deployment. + +**`token`**: Find your Mattermost bot's OAuth 2.0 token. To do so, visit +Mattermost. In the menu on the upper left of the UI, go to Integrations → Bot +Accounts. Find the listing for the Teleport plugin and click "Create New Token". +After you save the token, you will see a message with text in the format, +"Access Token: TOKEN". Copy the token and paste it here. + +**`recipients`**: This field configures the channels that the Mattermost plugin +will notify when it receives an Access Request message. 
The value is an array of +strings, where each element is either: + +- The email address of a Mattermost user to notify via a direct message when the + plugin receives an Access Request event +- A channel name in the format `team/channel`, where `/` separates the name + of the team and the name of the channel + +For example, this configuration will notify `first.last@example.com` and +the `Town Square` channel in the `myteam` team of any Access Request events: + + + + +```toml +recipients = [ + "myteam/Town Square", + "first.last@example.com" +] +``` + + + + +```yaml +recipients: + - "myteam/Town Square" + - first.last@example.com +``` + + + + +You will need to invite your Teleport plugin to any channel you add to the +`recipients` list (aside from direct message channels). Visit Mattermost, +navigate to each channel you want to invite the plugin to, and enter `/invite +@teleport` (or the name of the bot you configured) into the message box. + +![Invite the bot](../../../../img/enterprise/plugins/mattermost/add-bot.png) + +
+Suggested reviewers + +Users can also suggest reviewers when they create an Access Request, e.g.,: + +```code +$ tsh request create --roles=dbadmin --reviewers=alice@example.com,ivan@example.com +``` + +If an Access Request includes suggested reviewers, the Mattermost plugin will +add these to the list of channels to notify. If a suggested reviewer is an email +address, the plugin will look up the direct message channel for that address +and post a message in that channel. + +If `recipients` is empty, and the user requesting elevated privileges has not +suggested any reviewers, the plugin will skip forwarding the Access Request to +Mattermost. + +
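The recipient rules described above — email addresses become direct messages, `team/channel` strings become channel posts, suggested reviewers are appended, and an empty result means nothing is forwarded — can be sketched as a short model. This is an illustrative Python sketch of the documented behavior, with hypothetical helper names; it is not the plugin's actual implementation:

```python
# Illustrative model of the documented Mattermost recipient rules.
# Hypothetical helper, not the plugin's actual code.

def resolve_recipients(recipients, suggested_reviewers=()):
    """Merge configured recipients with any suggested reviewers.

    Each entry is either an email address (notified via direct message)
    or a "team/channel" string. An empty result means the plugin skips
    forwarding the Access Request to Mattermost.
    """
    targets, seen = [], set()
    for entry in list(recipients) + list(suggested_reviewers):
        if entry in seen:
            continue
        seen.add(entry)
        if "@" in entry:
            # Email address: notify the user via direct message.
            targets.append(("direct-message", entry))
        elif "/" in entry:
            # "team/channel": "/" separates the team and channel names.
            team, channel = entry.split("/", 1)
            targets.append(("channel", team, channel))
        else:
            raise ValueError(f"unrecognized recipient: {entry!r}")
    return targets

print(resolve_recipients(["myteam/Town Square"], ["first.last@example.com"]))
```

With the example configuration above, this yields one channel target (`myteam` / `Town Square`) and one direct-message target.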
+ +The final configuration should look similar to this: + + + +```yaml +# example mattermost configuration TOML file +[teleport] +auth_server = "myinstance.teleport.sh:443" # Teleport Cloud proxy HTTPS address +identity = "/var/lib/teleport/plugins/mattermost/identity" # Identity file path +refresh_identity = true # Refresh identity file on a periodic basis + +[mattermost] +url = "https://mattermost.example.com" # Mattermost Server URL +token = "api-token" # Mattermost Bot OAuth token +recipients = [ + "myteam/general", + "first.last@example.com" +] + +[log] +output = "stderr" # Logger output. Could be "stdout", "stderr" or "/var/lib/teleport/mattermost.log" +severity = "INFO" # Logger severity. Could be "INFO", "ERROR", "DEBUG" or "WARN". + +``` + + +```yaml +(!examples/resources/plugins/teleport-mattermost-helm-cloud.yaml!) +``` + + + +## Step 7/8. Test your Mattermost bot + + + +After modifying your configuration, run the bot with the following command: + +```code +$ teleport-mattermost start -d +``` + +The `-d` flag provides debug information to make sure the bot can connect to +Mattermost, e.g.: + +```text +DEBU Checking Teleport server version mattermost/main.go:234 +DEBU Starting a request watcher... mattermost/main.go:296 +DEBU Starting Mattermost API health check... 
mattermost/main.go:186 +DEBU Starting secure HTTPS server on :8081 utils/http.go:146 +DEBU Watcher connected mattermost/main.go:260 +DEBU Mattermost API health check finished ok mattermost/main.go:19 +``` + + + +Run the plugin: + +```code +$ docker run -v :/etc/teleport-mattermost.toml public.ecr.aws/gravitational/teleport-plugin-mattermost:(=teleport.version=) start +``` + + +After modifying your configuration, run the bot with the following command: + +```code +$ helm upgrade --install teleport-plugin-mattermost teleport/teleport-plugin-mattermost --values teleport-mattermost-helm.yaml +``` + +To inspect the plugin's logs, use the following command: + +```code +$ kubectl logs deploy/teleport-plugin-mattermost +``` + +Debug logs can be enabled by setting `log.severity` to `DEBUG` in +`teleport-mattermost-helm.yaml` and executing the `helm upgrade ...` command +above again. Then you can restart the plugin with the following command: + +```code +$ kubectl rollout restart deployment teleport-plugin-mattermost +``` + + + + +### Create an Access Request + +(!docs/pages/includes/plugins/create-request.mdx!) + +The users and channels you configured earlier to review the request should +receive a message from "Teleport" in Mattermost allowing them to visit a link in +the Teleport Web UI and either approve or deny the request. + +### Resolve the request + +(!docs/pages/includes/plugins/resolve-request.mdx!) + + + +When the Mattermost plugin posts an Access Request notification to a channel, +anyone with access to the channel can view the notification and follow the link. +While users must be authorized via their Teleport roles to review Access +Requests, you should still check the Teleport audit log to ensure that the right +users are reviewing the right requests. + +When auditing Access Request reviews, check for events with the type `Access +Request Reviewed` in the Teleport Web UI. + + + +## Step 8/8. 
Set up systemd + +This section is only relevant if you are running the Teleport Mattermost plugin +on a Linux host. + +In production, we recommend starting the Teleport plugin daemon via an init +system like systemd. Here's the recommended Teleport plugin service unit file +for systemd: + +```ini +(!examples/systemd/plugins/teleport-mattermost.service!) +``` + +Save this as `teleport-mattermost.service` in either `/usr/lib/systemd/system/` or +another [unit file load +path](https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Unit%20File%20Load%20Path) +supported by systemd. + +Enable and start the plugin: + +```code +$ sudo systemctl enable teleport-mattermost +$ sudo systemctl start teleport-mattermost +``` + diff --git a/docs/pages/identity-governance/access-requests/plugins/msteams.mdx b/docs/pages/identity-governance/access-requests/plugins/msteams.mdx new file mode 100644 index 0000000000000..d5127d327a372 --- /dev/null +++ b/docs/pages/identity-governance/access-requests/plugins/msteams.mdx @@ -0,0 +1,584 @@ +--- +title: Access Requests with Microsoft Teams +sidebar_label: Microsoft Teams +description: How to set up Teleport's Microsoft Teams plugin for privilege elevation approvals. +tags: + - access-requests + - how-to + - identity-governance + - privileged-access +enterprise: Identity Governance +--- + +This guide will explain how to set up Microsoft Teams to receive Access Request messages +from Teleport. + +
+This integration is hosted on Teleport Enterprise (Cloud) + +(!docs/pages/includes/plugins/enroll.mdx name="the Microsoft Teams integration"!) + +![Create Microsoft Teams Bot](../../../../img/enterprise/plugins/msteams/enroll-bot.png) + +Once enrolled you can download the required `app.zip` file from the integrations status page. + +![Download app.zip](../../../../img/enterprise/plugins/msteams/app-zip.png) + +
+ +## How it works + +Teleport's Microsoft Teams integration notifies individuals of Access Requests. +Users can then approve and deny Access Requests by following the message link, +making it easier to implement security best practices without compromising +productivity. + +![The Microsoft Teams Access Request plugin](../../../../img/enterprise/plugins/msteams.png) + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!) + +(!docs/pages/includes/machine-id/plugin-prerequisites.mdx!) + +- A Microsoft Teams License (Microsoft 365 Business). +- Azure console access in the organization/directory holding the Microsoft Teams + License. +- An Azure resource group in the same directory. This will host resources for + the Microsoft Teams Access Request plugin. You should have enough + permissions to create and edit Azure Bot Services in this resource group. +- Someone with Global Admin rights on Microsoft Entra ID in order to grant + permissions to the plugin. +- Someone with the `Teams administrator` role that can approve installation + requests for Microsoft Teams Apps. +- Either an Azure virtual machine or Kubernetes cluster where you will run the + Teleport Microsoft Teams plugin. + +(!/docs/pages/includes/tctl.mdx!) + +## Step 1/9. Define RBAC resources + +Before you set up the Microsoft Teams plugin, you will need to enable Role Access Requests +in your Teleport cluster. + +(!/docs/pages/includes/plugins/editor-request-rbac.mdx!) + +## Step 2/9. Define a Teleport Microsoft Teams plugin user + +The required permissions for the plugin are configured in the preset `access-plugin` +role. To generate credentials for the plugin, define either a Machine ID bot user +or a regular Teleport user. + + + +(!docs/pages/includes/plugins/rbac-impersonate-machine-id.mdx!) + + +(!docs/pages/includes/plugins/rbac-impersonate.mdx!) + + + +## Step 3/9. Export the access plugin identity + +Give the plugin access to a Teleport identity file. 
We recommend using Machine +ID for this in order to produce short-lived identity files that are less +dangerous if exfiltrated, though in demo deployments, you can generate +longer-lived identity files with `tctl`: + + + +(!docs/pages/includes/plugins/tbot-identity.mdx secret="teleport-plugin-msteams-identity"!) + + +(!docs/pages/includes/plugins/identity-export.mdx user="access-plugin" secret="teleport-plugin-msteams-identity"!) + + + +## Step 4/9. Install the Teleport Microsoft Teams plugin + +Install the Microsoft Teams plugin on your workstation. If you are deploying the +plugin on Kubernetes, you will still need to install the plugin locally in order +to generate an application archive to upload later in this guide. + +(!docs/pages/includes/plugins/install-access-request.mdx name="msteams"!) + +## Step 5/9. Register an Azure Bot + +The Access Request plugin for Microsoft Teams receives Access Request events from the +Teleport Auth Service, formats them into Microsoft Teams messages, and sends them to the +Microsoft Teams API to post them in your workspace. For this to work, you must register a +new Azure Bot. Azure Bot is a managed service by Microsoft that allows to +develop bots that interact with users through different channels, including +Microsoft Teams. + +### Register a new Azure bot + +Visit [https://portal.azure.com/#create/Microsoft.AzureBot](https://portal.azure.com/#create/Microsoft.AzureBot) +to create a new bot. Choose the bot handle so you can find the bot later in the Azure console (the bot handle will +not be displayed to the user or used to configure the Microsoft Teams plugin). Also edit the Azure subscription, +the resource group and the bot pricing tier. + +In the "Microsoft App ID" section choose "Single Tenant" and "Create new +Microsoft App ID". 
+ +![Create Azure Bot](../../../../img/enterprise/plugins/msteams/create-azure-bot.png) + +### Connect the bot to Microsoft Teams + +Once the bot is created, open its resource page on the Azure console and +navigate to the "Channels" tab. Click "Microsoft Teams" and add the Microsoft Teams +channel. + +The result should be as follows: + +![Add Bot Channel](../../../../img/enterprise/plugins/msteams/add-bot-channel.png) + +### Obtain information about your Microsoft App + +On the bot's "Configuration" tab, copy and keep in a safe place the values of +"Microsoft App ID" and "App Tenant ID". Those two UUIDs will be used in the +plugin configuration. + +Click the "Manage" link next to "Microsoft App ID". This will open the app management view. + +![Manage Bot App](../../../../img/enterprise/plugins/msteams/manage-bot-app.png) + +Then, go to the "Certificates & Secrets" section and choose to create a "New client secret". +Use the "Copy" icon to copy the newly created secret and keep it with the +previously recovered App ID and Tenant ID. + +The client secret will be used by the Teleport plugin to authenticate as the bot's app when +searching users and posting messages. + +### Specify the permissions used by the app + +Still in the app management view ("Configuration", then "Manage" the Microsoft App ID), +go to the "API permissions" tab. + +Add the following Microsoft Graph Application permissions: + +| Permission name | Reason | +|---|---| +| `AppCatalog.Read.All` | Used to list Teams Apps and check the app is installed. | +| `User.Read.All` | Used to get notification recipients. | +| `TeamsAppInstallation.ReadWriteSelfForUser.All` | Used to initiate communication with a user that never interacted with the Teams App before. | +| `TeamsAppInstallation.ReadWriteSelfForTeam.All` | Used to discover if the app is installed in the Team. | + +At this point the app declares the required permissions but those have not been granted. 
+ +If you are an admin, click "Grant admin consent for \". If you are not an admin, +contact an admin user to grant the permissions. + +![Specify App Permissions](../../../../img/enterprise/plugins/msteams/specify-app-permissions.png) + +Once permissions have been approved, refresh the page and check the approval status. +The result should be as follows: + +![Granted App Permissions](../../../../img/enterprise/plugins/msteams/granted-app-permissions.png) + +## Step 6/9. Configure the Teleport Microsoft Teams plugin + +At this point, the Teleport Microsoft Teams plugin has the credentials it needs to +communicate with your Teleport cluster and Azure APIs, but the app has not been +installed to Microsoft Teams yet. + +In this step, you will configure the Microsoft Teams plugin to use the Azure +credentials and generate the Teams App package that will be used to install the +Microsoft Teams App. You will also configure the plugin to notify +the right Microsoft Teams users when it receives an Access Request update. + +### Generate a configuration file and assets + +Generate a configuration file for the plugin. The instructions depend on whether +you are deploying the plugin on a virtual machine or Kubernetes: + + + + +The Teleport Microsoft Teams plugin uses a config file in TOML format. The +`configure` subcommand generates the directory +`/var/lib/teleport/plugins/msteams/assets` containing the TOML configuration +file and an `app.zip` file that will be used later to add the Teams App into the +organization catalog. 
+ +Run the following command on your virtual machine: + +```code +$ export AZURE_APPID="your-appid" +$ export AZURE_TENANTID="your-tenantid" +$ export AZURE_APPSECRET="your-appsecret" +$ teleport-msteams configure /var/lib/teleport/plugins/msteams/assets --appID "$AZURE_APPID" --tenantID "$AZURE_TENANTID" --appSecret "$AZURE_APPSECRET" +``` + +This should result in a config file like the one below: + +```toml +(!examples/resources/plugins/teleport-msteams.toml!) +``` + +Copy the `/var/lib/teleport/plugins/msteams/assets/app.zip` file to your local +computer. You will have to upload it to Microsoft Teams later. + +On the host where you will run the Microsoft Teams plugin, move the file +`/var/lib/teleport/plugins/msteams/assets/teleport-msteams.toml` to +`/etc/teleport-msteams.toml`. You can then edit the copy located in `/etc/`. + + + + +Run the following command on your local machine: + +```code +$ export AZURE_APPID="your-appid" +$ export AZURE_TENANTID="your-tenantid" +$ export AZURE_APPSECRET="your-appsecret" +$ teleport-msteams configure /var/lib/teleport/plugins/msteams/assets --appID "$AZURE_APPID" --tenantID "$AZURE_TENANTID" --appSecret "$AZURE_APPSECRET" +``` + +This command generates an application archive at +`/var/lib/teleport/plugins/msteams/assets/app.zip`. You will upload it to +Microsoft Teams later in this guide. + +Create a file on your workstation called `teleport-msteams-helm.yaml` with the +following content: + +```yaml +(!examples/resources/plugins/teleport-msteams-helm.yaml!) +``` + +You will edit this file in the next section. + + + + + +The `configure` command is not idempotent. It generates a new Microsoft Teams +application UUID with each execution. It is not possible to use an `app.zip` and +a TOML configuration generated by two different executions. + + +### Edit the configuration file + +Edit the configuration file according to the instructions below. 
+ +#### `[teleport]` + +The Microsoft Teams plugin uses this section to connect to your Teleport +cluster. + +(!docs/pages/includes/plugins/config-toml-teleport.mdx!) + +(!docs/pages/includes/plugins/refresh-plugin-identity.mdx!) + +#### `[msapi]`/`msTeams` + + + + +Make sure the `app_id`, `app_secret`, `tenant_id`, and `teams_app_id` fields are +filled in with the correct information, which you obtained earlier in this +guide. + + + +Make sure the `appID`, `tenantID`, and `teamsAppID` fields are filled in with +the correct information, which you obtained earlier in this guide. + + + + +#### `[role_to_recipients]` + +The `role_to_recipients` map (`roleToRecipients` for Helm users) configures the +users and channels that the Microsoft Teams plugin will notify when a user +requests access to a specific role. When the Microsoft Teams plugin receives an +Access Request from the Auth Service, it will look up the role being requested +and identify the Microsoft Teams users and channels to notify. + + + + +Here is an example of a `role_to_recipients` map. Each value can be a single +string or an array of strings: + +```toml +[role_to_recipients] +"*" = "alice@example.com" +"dev" = ["alice@example.com", "bob@example.com"] +"dba" = "https://teams.microsoft.com/l/channel/19%3somerandomid%40thread.tacv2/ChannelName?groupId=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx&tenantId=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" +``` + + + +In the Helm chart, the `role_to_recipients` field is called `roleToRecipients` +and uses the following format, where keys are strings and values are arrays of +strings: + +```toml +roleToRecipients: + "*": "alice@example.com" + "dev": ["alice@example.com", "bob@example.com"] + "dba": "https://teams.microsoft.com/l/channel/19%3somerandomid%40thread.tacv2/ChannelName?groupId=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx&tenantId=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" +``` + + + +In the `role_to_recipients` map, each key is the name of a Teleport role. 
Each +value configures the Teams user (or users) to notify. Each string must be either +the email address of a Microsoft Teams user or a channel URL. + +You can find the URL of a channel by opening the channel and clicking the button +"Get link to channel": + +![Copy Teams Channel](../../../../img/enterprise/plugins/msteams/copy-teams-channel.png) + +The `role_to_recipients` map must also include an entry for `"*"`, which the +plugin looks up if no other entry matches a given role name. In the example +above, requests for roles aside from `dev` and `dba` will notify `alice@example.com`. + +
+Suggested reviewers + +Users can suggest reviewers when they create an Access Request, e.g.,: + +```code +$ tsh request create --roles=dbadmin --reviewers=alice@example.com,ivan@example.com +``` + +If an Access Request includes suggested reviewers, the Microsoft Teams plugin will add +these to the list of channels to notify. If a suggested reviewer is an email +address, the plugin will look up the direct message channel for that +address and post a message in that channel. + +
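The lookup rules above — explicit role entries first, then the mandatory `"*"` fallback, with each value being either a single string or a list of strings — can be sketched as follows. This is an illustrative Python model of the documented behavior (recipient strings, whether emails or channel URLs, are treated opaquely); it is not the plugin's actual implementation:

```python
# Illustrative model of the role-to-recipients lookup described above.
# Hypothetical helper, not the plugin's actual code.

def recipients_for_roles(role_to_recipients, requested_roles):
    """Resolve the recipients to notify for an Access Request.

    Roles without an explicit entry fall back to the mandatory "*"
    entry. Values may be a single string or a list of strings; each
    recipient is notified at most once.
    """
    recipients = []
    for role in requested_roles:
        value = role_to_recipients.get(role, role_to_recipients["*"])
        entries = [value] if isinstance(value, str) else list(value)
        for entry in entries:
            if entry not in recipients:
                recipients.append(entry)
    return recipients

mapping = {
    "*": "alice@example.com",
    "dev": ["alice@example.com", "bob@example.com"],
}
print(recipients_for_roles(mapping, ["dba"]))  # "dba" falls back to "*"
```

A request for `dba` has no explicit entry, so only the `"*"` recipient is notified; a request for both `dev` and `dba` notifies `alice@example.com` and `bob@example.com` once each.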
+ +Configure the Microsoft Teams plugin to notify you when a user requests the `editor` role +by adding the following to your `role_to_recipients` config (replace +`TELEPORT_USERNAME` with the email of the user you assigned the `editor-reviewer` +role earlier): + + + + +```toml +[role_to_recipients] +"*" = "TELEPORT_USERNAME" +"editor" = "TELEPORT_USERNAME" +``` + + + +```toml +roleToRecipients: + "*": "TELEPORT_USERNAME" + "editor": "TELEPORT_USERNAME" +``` + + + +The final configuration file should resemble the following: + + + +```toml +(!examples/resources/plugins/teleport-msteams.toml!) +``` + + +```yaml +(!examples/resources/plugins/teleport-msteams-helm.yaml!) +``` + + + +## Step 7/9. Add and configure the Teams App + +### Upload the Teams App + +Open Microsoft Teams and go to "Apps", "Manage your apps", then in the additional +choices menu choose "Upload an App". + +![Upload Teams App](../../../../img/enterprise/plugins/msteams/upload-teams-app.png) + +If you're a Teams admin, choose "Upload an app to your org's app catalog". +This will allow you to skip the approval step. +If you're not a Microsoft Teams admin, choose "Submit an app to your org". + +Upload the `app.zip` file you generated earlier. + +### Approve the Teams App + +If you are not a Teams admin and chose "Submit an app to your org", +you will have to ask a Teams admin to approve it. + +They can do so by connecting to the +[Teams admin dashboard](https://admin.teams.microsoft.com/policies/manage-apps), +searching "TeleBot", selecting it and choosing "Allow". + +![Upload Teams App](../../../../img/enterprise/plugins/msteams/allowed-teams-app.png) + +### Add the Teams App to a Team + +Once the app is approved it should appear in the "Apps built for your org" section. +Add the newly uploaded app to a team. Open the app, click "Add to a team", +choose the "General" channel of your team and click "Set up a bot". 
+ +![Add Teams App](../../../../img/enterprise/plugins/msteams/add-teams-app.png) + +Note: Once an app is added to a team, it can post on all channels. + +## Step 8/9. Test the Teams App + +Once Teleport is running, you've created the Teams App, and the plugin is +configured, you can now run the plugin and test the workflow. + +### Test Microsoft Teams connectivity + +Start the plugin in validation mode: + +```code +$ teleport-msteams validate +``` + +If everything works fine, the log output should look like this: + +```text +teleport-msteams v(=teleport.plugin.version=) go(=teleport.golang=) + + - Checking application xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx status... + - Application found in the team app store (internal ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx) + - User xxxxxx@xxxxxxxxx.xxx found: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx + - Application installation ID for user: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX + - Chat ID for user: 19:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx@unq.gbl.spaces + - Chat web URL: https://teams.microsoft.com/l/chat/19%3Axxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx%40unq.gbl.spaces/0?tenantId=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx + - Hailing the user... + - Message sent, ID: XXXXXXXXXXXXX + +Check your MS Teams! +``` + +The plugin should exit and you should have received two messages through Microsoft Teams. + +![Validate Bot Message](../../../../img/enterprise/plugins/msteams/validate-bot-message.png) + +### Start the MS Teams Plugin + +After you configured and validated the MS Teams plugin, you can now run the +plugin and test the workflow. + + + + +Run the following command to start the Teleport MS Teams plugin. 
The `-d` flag +will provide debug information to ensure that the plugin can connect to MS Teams +and your Teleport cluster: + +```code +$ teleport-msteams start -d +DEBU DEBUG logging enabled msteams/main.go:120 +INFO Starting Teleport MS Teams Plugin (=teleport.plugin.version=): msteams/app.go:74 +DEBU Attempting GET teleport.example.com:443/webapi/find webclient/webclient.go:129 +DEBU Checking Teleport server version msteams/app.go:242 +INFO MS Teams app found in org app store id:292e2881-38ab-7777-8aa7-cefed1404a63 name:TeleBot msteams/app.go:179 +INFO Preloading recipient data... msteams/app.go:185 +INFO Recipient found, chat found chat_id:19:a8c06deb-aa2b-4db5-9c78-96e48f625aef_a36aec2e-f11c-4219-b79a-19UUUU57de70@unq.gbl.spaces kind:user recipient:jeff@example.com msteams/app.go:195 +INFO Recipient data preloaded and cached. msteams/app.go:198 +DEBU Watcher connected watcherjob/watcherjob.go:121 +INFO Plugin is ready msteams/app.go:227 +``` + + + +Run the plugin: + +```code +$ docker run -v :/etc/teleport-msteams.toml public.ecr.aws/gravitational/teleport-plugin-msteams:(=teleport.version=) start +``` + + + +Install the plugin: + +```code +$ helm upgrade --install teleport-plugin-msteams teleport/teleport-plugin-msteams --values teleport-msteams-helm.yaml +``` + +To inspect the plugin's logs, use the following command: + +```code +$ kubectl logs deploy/teleport-plugin-msteams +``` + +Debug logs can be enabled by setting `log.severity` to `DEBUG` in +`teleport-msteams-helm.yaml` and executing the `helm upgrade ...` command +above again. Then you can restart the plugin with the following command: + +```code +$ kubectl rollout restart deployment teleport-plugin-msteams +``` + + + + +### Create an Access Request + +Create an Access Request and check if the plugin works as expected with the +following steps. + +(!docs/pages/includes/plugins/create-request.mdx!) 
+ +The user you configured earlier to review the request should receive a direct +message from "TeleBot" in Microsoft Teams allowing them to visit a link in the Teleport +Web UI and either approve or deny the request. + +### Resolve the request + +(!docs/pages/includes/plugins/resolve-request.mdx!) + +Once the request is resolved, the Microsoft Teams bot will update the Access Request message +to reflect its new status. + + + +When the Microsoft Teams plugin posts an Access Request notification to a channel, anyone +with access to the channel can view the notification and follow the link. While +users must be authorized via their Teleport roles to review Access Requests, you +should still check the Teleport audit log to ensure that the right users are +reviewing the right requests. + +When auditing Access Request reviews, check for events with the type `Access +Request Reviewed` in the Teleport Web UI. + + + +## Step 9/9. Set up systemd + +This section is only relevant if you are running the Teleport Microsoft Teams +plugin on a virtual machine. + +In production, we recommend starting the Teleport plugin daemon via an init +system like systemd. Here's the recommended Teleport plugin service unit file +for systemd: + +```ini +(!examples/systemd/plugins/teleport-msteams.service!) +``` + +Save this as `teleport-msteams.service` in either `/usr/lib/systemd/system/` or +another [unit file load +path](https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Unit%20File%20Load%20Path) +supported by systemd. + +Enable and start the plugin: + +```code +$ sudo systemctl enable teleport-msteams +$ sudo systemctl start teleport-msteams +``` + +## Next steps + +- Read our guides to configuring [Resource Access + Requests](../resource-requests.mdx) and [Role Access + Requests](../role-requests.mdx) so you can get the most out + of your Access Request plugins. 
diff --git a/docs/pages/admin-guides/access-controls/access-request-plugins/opsgenie.mdx b/docs/pages/identity-governance/access-requests/plugins/opsgenie.mdx similarity index 80% rename from docs/pages/admin-guides/access-controls/access-request-plugins/opsgenie.mdx rename to docs/pages/identity-governance/access-requests/plugins/opsgenie.mdx index 344eb2f806d76..52881599cbcb6 100644 --- a/docs/pages/admin-guides/access-controls/access-request-plugins/opsgenie.mdx +++ b/docs/pages/identity-governance/access-requests/plugins/opsgenie.mdx @@ -1,8 +1,17 @@ --- title: Access Requests with Opsgenie +sidebar_label: Opsgenie description: How to set up Teleport's Opsgenie plugin for privilege elevation approvals. +tags: + - access-requests + - how-to + - identity-governance + - privileged-access +enterprise: Identity Governance --- +{/* lint disable page-structure remark-lint */} + With Teleport's Opsgenie integration, engineers can access the infrastructure they need to resolve alerts quickly, without longstanding admin permissions that can become a vector for attacks. @@ -18,26 +27,10 @@ Opsgenie. ## Prerequisites -- A Teleport Enterprise Cloud account. - -- The `tctl` admin tool and `tsh` client tool version >= (=teleport.version=). - - You can verify the tools you have installed by running the following commands: - - ```code - $ tctl version - # Teleport v(=teleport.version=) go(=teleport.golang=) - - $ tsh version - # Teleport v(=teleport.version=) go(=teleport.golang=) - ``` - - You can download these tools by following the appropriate [Installation - instructions](../../../installation.mdx) for your environment and Teleport edition. +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise Cloud"!) - An Opsgenie account with the ability to create API keys with the 'read' and 'create and update' access rights. - - (!docs/pages/includes/tctl.mdx!) ## Step 1/5. 
Create services @@ -89,13 +82,7 @@ Create a user called `myuser` who has the `requester` role. Later in this guide, you will create an Access Request as this user to test the Opsgenie plugin: -To create a user first navigate to Management -> Access -> Users - -![Add user one](../../../../img/enterprise/plugins/opsgenie/add-user-one.png) - -Then select 'Create New User' and create a user with the requester role. - -![Add user two](../../../../img/enterprise/plugins/opsgenie/add-user-two.png) +To create a user, first navigate to Zero Trust Access -> Users. ## Step 3/5. Set up an Opsgenie API key @@ -109,10 +96,9 @@ See https://support.atlassian.com/opsgenie/docs/create-a-default-api-integration ## Step 4/5. Configure the Opsgenie plugin At this point, you have generated credentials that the Opsgenie plugin will use -to connect to the Opsgenie API. To configure the plugin to use this API key navigate -to Management -> Integrations -> Enroll New Integration. - -![Integrations page](../../../../img/enterprise/plugins/opsgenie/opsgenie-integration-page.png) +to connect to the Opsgenie API. To configure the plugin to use this API key, +navigate to **Add New** in the Web UI; in the left pane, click **Integration**. +Click the Opsgenie tile. ## Step 5/5. Test the Opsgenie plugin diff --git a/docs/pages/identity-governance/access-requests/plugins/pagerduty.mdx b/docs/pages/identity-governance/access-requests/plugins/pagerduty.mdx new file mode 100644 index 0000000000000..ea9706a5c57a7 --- /dev/null +++ b/docs/pages/identity-governance/access-requests/plugins/pagerduty.mdx @@ -0,0 +1,636 @@ +--- +title: Run the PagerDuty Access Request Plugin +sidebar_label: PagerDuty +description: How to set up Teleport's PagerDuty plugin for privilege elevation approvals. 
+tags: + - access-requests + - how-to + - identity-governance + - privileged-access +enterprise: Identity Governance +--- + +With Teleport's PagerDuty integration, engineers can access the infrastructure +they need to resolve incidents quickly—without longstanding admin permissions +that can become a vector for attacks. This guide explains how to set up +Teleport's Access Request plugin for PagerDuty. + +## How it works + +Teleport's PagerDuty integration allows you to treat Teleport Role Access +Requests as PagerDuty incidents, notify the appropriate on-call team, and +approve or deny the requests via Teleport. You can also configure the plugin to +approve Role Access Requests automatically if the user making the request is on +the on-call team for a service affected by an incident. + +
+This integration is hosted on Teleport Cloud + +(!docs/pages/includes/plugins/enroll.mdx name="the PagerDuty integration"!) + +
+ +![PagerDuty plugin architecture](../../../../img/enterprise/plugins/pagerduty/diagram.png) + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!) + +(!docs/pages/includes/machine-id/plugin-prerequisites.mdx!) + +- A PagerDuty account with the "Admin", "Global Admin", or "Account Owner" + roles. These roles are necessary for generating an API token that can list and + look up user profiles. + + You can see your role by visiting your user page in PagerDuty, navigating to + the "Permissions & Teams" tab, and checking the value of the "Base Role" + field. + +- Either a Linux host or Kubernetes cluster where you will run the PagerDuty plugin. + +- (!docs/pages/includes/tctl.mdx!) + +## Step 1/8. Create services + +To demonstrate the PagerDuty plugin, create two services in PagerDuty. For each +service, fill in only the "Name" field and skip all other configuration screens, +leaving options as the defaults: + +- `Teleport Access Request Notifications` +- `My Critical Service` + +We will configure the PagerDuty plugin to create an incident in the `Teleport +Access Request Notifications` service when certain users create an Access +Request. + +For users on the on-call team for `My Critical Service` (in this case, your +PagerDuty user), we will configure the PagerDuty plugin to approve Access +Requests automatically, letting them investigate incidents on the service +quickly. + +## Step 2/8. Define RBAC resources + +The Teleport PagerDuty plugin works by receiving Access Request events from the +Teleport Auth Service and, based on these events, interacting with the PagerDuty +API. + +In this section, we will show you how to configure the PagerDuty plugin by +defining the following RBAC resources: + +- A role called `editor-requester`, which can request the built-in `editor` + role. 
We will configure this role to open a PagerDuty incident whenever a + user requests it, notifying the on-call team for the `Teleport Access Request + Notifications` service. +- A role called `demo-role-requester`, which can request a role called + `demo-role`. We will configure the PagerDuty plugin to auto-approve this + request whenever the user making it is on the on-call team for `My Critical + Service`. +- A user and role called `access-plugin` that the PagerDuty plugin will assume + in order to authenticate to the Teleport Auth Service. This role will have + permissions to approve Access Requests from users on the on-call team for `My + Critical Service` automatically. +- A role called `access-plugin-impersonator` that allows you to generate signed + credentials that the PagerDuty plugin can use to authenticate with your + Teleport cluster. + +### `editor-requester` + +Create a file called `editor-request-rbac.yaml` with the following content, +which defines a role called `editor-reviewer` that can review requests for the +`editor` role, plus an `editor-requester` role that can request this role. + +```yaml +kind: role +version: v5 +metadata: + name: editor-reviewer +spec: + allow: + review_requests: + roles: ['editor'] +--- +kind: role +version: v5 +metadata: + name: editor-requester +spec: + allow: + request: + roles: ['editor'] + thresholds: + - approve: 1 + deny: 1 + annotations: + pagerduty_notify_service: ["Teleport Access Request Notifications"] +``` + +The Teleport Auth Service *annotates* Access Request events with metadata based +on the roles of the Teleport user submitting the Access Request. The PagerDuty +plugin reads these annotations to determine how to respond to a new Access +Request event. 
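To make the annotation mechanism concrete, here is a minimal Python sketch of how a plugin could derive notification targets from role annotations. This is purely illustrative — the role names and shapes mirror the `editor-requester` YAML above, but it is not the PagerDuty plugin's actual implementation:

```python
# Hypothetical model of annotation-driven routing. The role definitions
# mirror the editor-requester example above; illustrative only.
ROLE_ANNOTATIONS = {
    "editor-requester": {
        "pagerduty_notify_service": ["Teleport Access Request Notifications"],
    },
    "access": {},  # roles without the annotation contribute nothing
}

def services_to_notify(user_roles, key="pagerduty_notify_service"):
    """Collect every PagerDuty service named by the user's roles."""
    services = []
    for role in user_roles:
        for svc in ROLE_ANNOTATIONS.get(role, {}).get(key, []):
            if svc not in services:
                services.append(svc)
    return services

print(services_to_notify(["access", "editor-requester"]))
# ['Teleport Access Request Notifications']
```

In the real system, the Auth Service aggregates these role annotations onto the Access Request itself, and the plugin reads the resulting annotations map rather than the role definitions directly.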
+ +Whenever a user with the `editor-requester` role requests the `editor` role, the +PagerDuty plugin will read the `pagerduty_notify_service` annotation and notify +PagerDuty to open an incident in the specified service, `Teleport Access Request +Notifications`, until someone with the `editor-reviewer` role approves or denies +the request. + +Create the roles you defined: + +```code +$ tctl create -f editor-request-rbac.yaml +role 'editor-reviewer' has been created +role 'editor-requester' has been created +``` + +(!docs/pages/includes/create-role-using-web.mdx!) + +### `demo-role-requester` + +Create a file called `demo-role-requester.yaml` with the following content: + +```yaml +kind: role +version: v5 +metadata: + name: demo-role +--- +kind: role +version: v5 +metadata: + name: demo-role-requester +spec: + allow: + request: + roles: ['demo-role'] + thresholds: + - approve: 1 + deny: 1 + annotations: + pagerduty_services: ["My Critical Service"] +``` + +Users with the `demo-role-requester` role can request the `demo-role` role. When +such a user makes this request, the PagerDuty plugin will read the +`pagerduty_services` annotation. If the user making the request is on the +on-call team for a service listed as a value for the annotation, the plugin will +approve the Access Request automatically. + +In this case, the PagerDuty plugin will approve any requests from users on the +on-call team for `My Critical Service`. + +Create the resources: + +```code +$ tctl create -f demo-role-requester.yaml; +``` + + + +For auto-approval to work, the user creating an Access Request must have a +Teleport username that is also the email address associated with a PagerDuty +account. In this guide, we will add the `demo-role-requester` role to your own +Teleport account—which we assume is also your email address for PagerDuty—so you +can request the `demo-role` role. 
+ + + +### `access-plugin` + +Teleport's Access Request plugins authenticate to your Teleport cluster as a +user with permissions to list, read, and update Access Requests. This way, +plugins can retrieve Access Requests from the Teleport Auth Service, present +them to reviewers, and modify them after a review. + +Define a user and role called `access-plugin` by adding the following content to +a file called `access-plugin.yaml`: + +```yaml +kind: role +version: v5 +metadata: + name: access-plugin +spec: + allow: + rules: + - resources: ['access_request'] + verbs: ['list', 'read'] + - resources: ['access_plugin_data'] + verbs: ['update'] + review_requests: + roles: ['demo-role'] + where: 'contains(request.system_annotations["pagerduty_services"], "My Critical Service")' +--- +kind: user +metadata: + name: access-plugin +spec: + roles: ['access-plugin'] +version: v2 +``` + +Notice that the `access-plugin` role includes an `allow.review_requests.roles` +field with `demo-role` as a value. This allows the plugin to review requests for +the `demo-role` role. + +We are also restricting the `access-plugin` role to reviewing only Access +Requests associated with `My Critical Service`. To do so, we have defined a +*predicate expression* in the `review_requests.where` field. This expression +indicates that the plugin *cannot* review requests for `demo-role` unless the +request contains an annotation with the key `pagerduty_services` and the value +`My Critical Service`. + +
+How "where" conditions work + +The `where` field includes a predicate expression that determines whether a +reviewer is allowed to review a specific request. You can include two functions +in a predicate expression: + +|Function|Description|Example| +|---|---|---| +|`equals`|A field is equivalent to a value.|`equals(request.reason, "resolve an incident")` +|`contains`|A list of strings includes a value.|`contains(reviewer.traits["team"], "devops")`| + +When you use the `where` field, you can include the following fields in your +predicate expression: + +|Field|Type|Description| +|---|---|---| +|`reviewer.roles`|`[]string`|A list of the reviewer's Teleport role names| +|`reviewer.traits`|`map[string][]string`|A map of the reviewer's Teleport traits by the name of the trait| +|`request.roles`|`[]string`|A list of the Teleport roles a user is requesting| +|`request.reason`|`string`|The reason attached to the request| +|`request.system_annotations`| `map[string][]string`|A map of annotations for the request by annotation key, e.g., `pagerduty_services`| + +You can combine functions using the following operators: + +|Operator|Format|Description| +|---|---|---| +|`&&`|`function && function`|Evaluates to true if both functions evaluate to true| +|`\|\|`|`function \|\| function`|Evaluates to true if either one or both functions evaluate to true| +|`!`| `!function`|Evaluates to true if the function evaluates to false| + +An example of a function is `equals(request.reason, "resolve an incident")`. To +configure an `allow` condition to match any Access Request that does not include +the reason, "resolve an incident", you could use the function, +`!equals(request.reason, "resolve an incident")`. + +
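To make these semantics concrete, the following toy Python model evaluates the example predicates under the simplified semantics described in the tables above. It is an illustration only, not Teleport's actual server-side evaluator:

```python
# Toy model of the two predicate functions described above.
def equals(field, value):
    # A field is equivalent to a value.
    return field == value

def contains(items, value):
    # A list of strings includes a value.
    return value in items

# A hypothetical Access Request context.
request = {
    "roles": ["demo-role"],
    "reason": "resolve an incident",
    "system_annotations": {"pagerduty_services": ["My Critical Service"]},
}

# contains(request.system_annotations["pagerduty_services"], "My Critical Service")
allowed = contains(request["system_annotations"]["pagerduty_services"],
                   "My Critical Service")

# !equals(request.reason, "resolve an incident")
negated = not equals(request["reason"], "resolve an incident")

print(allowed, negated)  # True False
```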
+ +Create the user and role: + +```code +$ tctl create -f access-plugin.yaml +``` + +### `access-plugin-impersonator` + +As with all Teleport users, the Teleport Auth Service authenticates the +`access-plugin` user by issuing short-lived TLS credentials. In this case, we +will need to request the credentials manually by *impersonating* the +`access-plugin` role and user. + +If you are running a self-hosted Teleport Enterprise cluster and are using +`tctl` from the Auth Service host, you will already have impersonation +privileges. + +To grant your user impersonation privileges for `access-plugin`, define a role +called `access-plugin-impersonator` by pasting the following YAML document into +a file called `access-plugin-impersonator.yaml`: + +```yaml +kind: role +version: v5 +metadata: + name: access-plugin-impersonator +spec: + allow: + impersonate: + roles: + - access-plugin + users: + - access-plugin +``` + +Create the `access-plugin-impersonator` role: + +```code +$ tctl create -f access-plugin-impersonator.yaml +``` + +### Add roles to your user + +Later in this guide, your Teleport user will take three actions that require +additional permissions: + +- Generate signed credentials that the PagerDuty plugin will use to connect to + your Teleport Cluster +- Manually review an Access Request for the `editor` role +- Create an Access Request for the `demo-role` role + +To grant these permissions to your user, give your user the `editor-reviewer`, +`access-plugin-impersonator`, and `demo-role-requester` roles we defined +earlier. + +Open your user definition in an editor: + +```code +$ TELEPORT_USER=$(tsh status --format=json | jq -r .active.username) +$ tctl edit users/${TELEPORT_USER?} +``` + +Edit the user to include the roles you just created: + +```diff + roles: + - access + - auditor + - editor ++ - editor-reviewer ++ - access-plugin-impersonator ++ - demo-role-requester +``` + +Apply your changes by saving and closing the file in your editor. 
+ +Log out of your Teleport cluster and log in again. You will now be able to +review requests for the `editor` role, request the `demo-role` role, and +generate signed certificates for the `access-plugin` role and user. + +### Create a user who will request access + +Create a user called `myuser` who has the `editor-requester` role. Later in this +guide, you will create an Access Request as this user to test the PagerDuty +plugin: + +```code +$ tctl users add myuser --roles=editor-requester +``` + +`tctl` will print an invitation URL to your terminal. Visit the URL and log in +as `myuser` for the first time, registering credentials as configured for your +Teleport cluster. + +## Step 3/8. Install the Teleport PagerDuty plugin + +(!docs/pages/includes/plugins/install-access-request.mdx name="pagerduty"!) + +## Step 4/8. Export the access plugin identity + +Give the plugin access to a Teleport identity file. We recommend using Machine +ID for this in order to produce short-lived identity files that are less +dangerous if exfiltrated, though in demo deployments, you can generate +longer-lived identity files with `tctl`: + + + +(!docs/pages/includes/plugins/tbot-identity.mdx secret="teleport-plugin-pagerduty-identity"!) + + +(!docs/pages/includes/plugins/identity-export.mdx user="access-plugin" secret="teleport-plugin-pagerduty-identity"!) + + + +## Step 5/8. Set up a PagerDuty API key + +Generate an API key that the PagerDuty plugin will use to create and modify +incidents as well as list users, services, and on-call policies. + +In your PagerDuty dashboard, go to **Integrations → API Access Keys** and click +**Create New API Key**. Add a key description, e.g., "Teleport integration". +Leave "Read-only API Key" unchecked. Copy the key to a file on your local +machine. We'll use the key in the plugin config file later. + +![Create an API +key](../../../../img/enterprise/plugins/pagerduty/pagerduty-integrations.png) + +## Step 6/8. 
Configure the PagerDuty plugin + +At this point, you have generated credentials that the PagerDuty plugin will use +to connect to Teleport and the PagerDuty API. You will now configure the +PagerDuty plugin to use these credentials, plus adjust any settings required for +your environment. + + + +Teleport's PagerDuty plugin has its own configuration file in TOML format. On +the host where you will run the PagerDuty plugin, generate a boilerplate config +by running the following commands: + +```code +$ teleport-pagerduty configure > teleport-pagerduty.toml +$ sudo mv teleport-pagerduty.toml /etc +``` + + +The PagerDuty Helm Chart uses a YAML values file to configure the plugin. On +the host where you have Helm installed, create a file called +`teleport-pagerduty-values.yaml` based on the following example: + +```yaml +teleport: + address: "" # Teleport Auth Service GRPC API address + identitySecretName: "" # Identity secret name + identitySecretPath: "" # Identity secret path + +pagerduty: + apiKey: "" # PagerDuty API Key + userEmail: "" # PagerDuty bot user email (Could be admin email) +``` + + + +
+Saving the configuration to another location + +The PagerDuty plugin expects the configuration to be in +`/etc/teleport-pagerduty.toml`, but you can override this with the `--config` +flag when you run the plugin binary later in this guide. + +
+ +Edit the configuration file in `/etc/teleport-pagerduty.toml` as explained +below: + +### `[teleport]` + +The PagerDuty plugin uses this section to connect to your Teleport cluster: + +(!docs/pages/includes/plugins/config-toml-teleport.mdx!) + +(!docs/pages/includes/plugins/refresh-plugin-identity.mdx!) + +### `[pagerduty]` + +Assign `api_key` to the PagerDuty API key you generated earlier. + +Assign `user_email` to the email address of a PagerDuty user on the account +associated with your API key. When the PagerDuty plugin creates a new incident, +PagerDuty will display this incident as created by that user. + +
+Overriding annotation names + +This guide has assumed that the Teleport PagerDuty plugin uses +`pagerduty_notify_service` annotation to determine which services to notify of +new Access Request events and the `pagerduty_services` annotation to configure +auto-approval. + +If you would like to use a different name for these annotations in your Teleport +roles, you can assign the `pagerduty.notify_service` and `pagerduty.services` +fields. + +
+ +The final configuration should resemble the following: + + + +```yaml +(!examples/resources/plugins/teleport-pagerduty-cloud.toml!) +``` + + +```yaml +(!examples/resources/plugins/teleport-pagerduty-helm-cloud.yaml!) +``` + + + +## Step 7/8. Test the PagerDuty plugin + + + +After you configure the PagerDuty plugin, run the following command to start it. +The `-d` flag will provide debug information to ensure that the plugin can +connect to PagerDuty and your Teleport cluster: + +```code +$ teleport-pagerduty start -d +# DEBU DEBUG logging enabled logrus/exported.go:117 +# INFO Starting Teleport Access PagerDuty extension 0.1.0-dev.1: pagerduty/main.go:124 +# DEBU Checking Teleport server version pagerduty/main.go:226 +# DEBU Starting a request watcher... pagerduty/main.go:288 +# DEBU Starting PagerDuty API health check... pagerduty/main.go:170 +# DEBU Starting secure HTTPS server on :8081 utils/http.go:146 +# DEBU Watcher connected pagerduty/main.go:252 +# DEBU PagerDuty API health check finished ok pagerduty/main.go:176 +# DEBU Setting up the webhook extensions pagerduty/main.go:178 +``` + + +Run the plugin: + +```code +$ docker run -v :/etc/teleport-pagerduty.toml public.ecr.aws/gravitational/teleport-plugin-pagerduty:(=teleport.version=) start +``` + + +After modifying your configuration, run the bot with the following command: + +```code +$ helm upgrade --install teleport-plugin-pagerduty teleport/teleport-plugin-pagerduty --values teleport-pagerduty-values.yaml +``` + +To inspect the plugin's logs, use the following command: + +```code +$ kubectl logs deploy/teleport-plugin-pagerduty +``` + +Debug logs can be enabled by setting `log.severity` to `DEBUG` in +`teleport-pagerduty-helm.yaml` and executing the `helm upgrade ...` command +above again. 
Then you can restart the plugin with the following command: + +```code +$ kubectl rollout restart deployment teleport-plugin-pagerduty +``` + + + + +### Create an Access Request + +As the Teleport user `myuser`, create an Access Request for the `editor` role: + +(!docs/pages/includes/plugins/create-request.mdx!) + +You should see a log resembling the following on your PagerDuty plugin host: + +``` +INFO Successfully created PagerDuty incident pd_incident_id:00000000000000 +pd_service_name:Teleport Access Request Notifications +request_id:00000000-0000-0000-0000-000000000000 request_op:put +request_state:PENDING pagerduty/app.go:366 +``` + +In PagerDuty, you will see a new incident containing information about the +Access Request: + +![PagerDuty dashboard showing an Access +Request](../../../../img/enterprise/plugins/pagerduty/new-access-req-incident.png) + +### Resolve the request + +(!docs/pages/includes/plugins/resolve-request.mdx!) + + + +When the PagerDuty plugin sends a notification, anyone who receives the +notification can follow the enclosed link to an Access Request URL. While users +must be authorized via their Teleport roles to review Access Request, you +should still check the Teleport audit log to ensure that the right users are +reviewing the right requests. + +When auditing Access Request reviews, check for events with the type `Access +Request Reviewed` in the Teleport Web UI. + + + +### Trigger an auto-approval + +As your Teleport user, create an Access Request for the `demo-role` role. 
+ +You will see a log similar to the following on your PagerDuty plugin host: + +``` +INFO Successfully submitted a request approval +pd_user_email:myuser@example.com pd_user_name:My User +request_id:00000000-0000-0000-0000-000000000000 request_op:put +request_state:PENDING pagerduty/app.go:511 +``` + +Your Access Request will appear as `APPROVED`: + +```code +$ tsh requests ls +ID User Roles Created (UTC) Status +------------------------------------ ------------------ --------- ------------------- -------- +00000000-0000-0000-0000-000000000000 myuser@example.com demo-role 12 Aug 22 18:30 UTC APPROVED +``` + +## Step 8/8. Set up systemd + +This section is only relevant if you are running the Teleport PagerDuty plugin +on a Linux host. + +In production, we recommend starting the Teleport plugin daemon via an init +system like systemd. Here's the recommended Teleport plugin service unit file +for systemd: + +```ini +(!examples/systemd/plugins/teleport-pagerduty.service!) +``` + +Save this as `teleport-pagerduty.service` in either `/usr/lib/systemd/system/` +or another [unit file load +path](https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Unit%20File%20Load%20Path) +supported by systemd. + +Enable and start the plugin: + +```code +$ sudo systemctl enable teleport-pagerduty +$ sudo systemctl start teleport-pagerduty +``` diff --git a/docs/pages/identity-governance/access-requests/plugins/plugins.mdx b/docs/pages/identity-governance/access-requests/plugins/plugins.mdx new file mode 100644 index 0000000000000..f5125dd71f20f --- /dev/null +++ b/docs/pages/identity-governance/access-requests/plugins/plugins.mdx @@ -0,0 +1,69 @@ +--- +title: Just-in-Time Access Request Plugins +description: "Use Teleport's Access Request plugins to provide least-privilege access without sacrificing productivity."
+sidebar_label: Plugins +template: "no-toc" +sidebar_position: 5 +tags: + - access-requests + - conceptual + - identity-governance + - privileged-access +enterprise: Identity Governance +--- + +Teleport Just-in-Time Access Requests allow users to receive temporary elevated +privileges by seeking consent from one or more reviewers, depending on your +configuration. + +With Teleport's Access Request plugins, users can manage Access Requests from +within your organization's existing messaging and project management solutions. + +Access Request plugins are self-contained programs that connect to the Teleport +Auth Service's gRPC API to listen for audit events relating to new or updated +Access Requests. After processing an Access Request event, Access Request plugins +interact with a third-party API (e.g., the Slack or PagerDuty APIs). + +## Enrolling Access Request plugins in Teleport Cloud + +(!docs/pages/includes/plugins/enroll.mdx name="Access Request plugins"!) + +The following Access Request plugins are hosted on Teleport Cloud: + +- Datadog +- Discord +- Email +- Jira +- Mattermost +- Microsoft Teams +- Opsgenie +- PagerDuty +- ServiceNow +- Slack + +## Self-hosting Access Request plugins + +You can host Teleport Access Request plugins yourself. Self-hosted Access +Request plugins are the only way to manage Access Requests through a third-party +communication platform if you are self-hosting Teleport. If you use Teleport +Team or Teleport Enterprise Cloud, you can run self-hosted Access Request +plugins for more control over configuration and architecture. + +Access Request plugins can run within private networks that are isolated from +the Teleport Auth Service. To access the Auth Service API, they connect to the +Proxy Service, which establishes a reverse tunnel for the plugin to access the +Auth Service. + +You can run multiple instances of an Access Request plugin for high availability +by deploying each instance in a separate availability zone. 
There is no need for +additional configuration or load balancing, as plugins avoid creating duplicate +requests to their third-party APIs. + +Learn how to deploy and configure a plugin for your organization's communication +workflows by reading our setup guides: + +(!docs/pages/includes/access-request-integrations.mdx!) + +To read more about the architecture of an Access Request plugin, and start +writing your own, read our [Access Request plugin development +guide](how-to-build.mdx). diff --git a/docs/pages/admin-guides/access-controls/access-request-plugins/servicenow.mdx b/docs/pages/identity-governance/access-requests/plugins/servicenow.mdx similarity index 93% rename from docs/pages/admin-guides/access-controls/access-request-plugins/servicenow.mdx rename to docs/pages/identity-governance/access-requests/plugins/servicenow.mdx index 3428590972661..7f3332e61a01a 100644 --- a/docs/pages/admin-guides/access-controls/access-request-plugins/servicenow.mdx +++ b/docs/pages/identity-governance/access-requests/plugins/servicenow.mdx @@ -1,8 +1,17 @@ --- title: Access Requests with ServiceNow +sidebar_label: ServiceNow description: How to set up Teleport's ServiceNow plugin for privilege elevation approvals. +tags: + - access-requests + - how-to + - identity-governance + - privileged-access +enterprise: Identity Governance --- +{/* lint disable page-structure remark-lint */} + With Teleport's ServiceNow integration, engineers can access the infrastructure they need to resolve incidents quickly, without granting longstanding admin permissions that can become a vector for attacks. @@ -18,7 +27,7 @@ ServiceNow. ## Prerequisites -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!) - An ServiceNow account with access to read and write to and from the 'incident' table. - (!docs/pages/includes/tctl.mdx!) 
diff --git a/docs/pages/identity-governance/access-requests/plugins/slack.mdx b/docs/pages/identity-governance/access-requests/plugins/slack.mdx new file mode 100644 index 0000000000000..61f9bf17080c0 --- /dev/null +++ b/docs/pages/identity-governance/access-requests/plugins/slack.mdx @@ -0,0 +1,435 @@ +--- +title: Run the Slack Access Request Plugin +description: How to set up Teleport's Slack plugin for privilege elevation approvals. +sidebar_label: Slack +tags: + - access-requests + - how-to + - identity-governance + - privileged-access +enterprise: Identity Governance +--- + +import SlackVideoMp4 from "@version/docs/img/enterprise/plugins/slack/slack.mp4" +import SlackVideoWebm from "@version/docs/img/enterprise/plugins/slack/slack.webm" + +This guide will explain how to set up Slack to receive Access Request messages +from Teleport. + +
+This integration is hosted on Teleport Cloud + +(!docs/pages/includes/plugins/enroll.mdx name="the Slack integration"!) + +
+ +## How it works + +Teleport's Slack integration notifies individuals and channels of +Access Requests. Users are then directed to log in to the Teleport cluster where +they can approve and deny Access Requests, making it easier to implement security +best practices without compromising productivity. + +![The Slack Access Request plugin](../../../../img/enterprise/plugins/slack/diagram.png) + +Here is an example of sending an Access Request via Teleport's Slack plugin: + + + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!) + +(!docs/pages/includes/machine-id/plugin-prerequisites.mdx!) + +- Slack admin privileges to create an app and install it to your workspace. Your + Slack profile must have the "Workspace Owner" or "Workspace Admin" banner + below your profile picture. +- Either a Linux host or Kubernetes cluster where you will run the Slack plugin. + +(!/docs/pages/includes/tctl.mdx!) + +## Step 1/8. Define RBAC resources + +Before you set up the Slack plugin, you will need to enable Role Access Requests +in your Teleport cluster. + +(!/docs/pages/includes/plugins/editor-request-rbac.mdx!) + +## Step 2/8. Define a Teleport Slack plugin user + +The required permissions for the plugin are configured in the preset `access-plugin` +role. To generate credentials for the plugin, define either a Machine ID bot user +or a regular Teleport user. + + + +(!docs/pages/includes/plugins/rbac-impersonate-machine-id.mdx!) + + +(!docs/pages/includes/plugins/rbac-impersonate.mdx!) + + + +## Step 3/8. Export the access plugin identity + +Give the plugin access to a Teleport identity file. We recommend using Machine +ID for this in order to produce short-lived identity files that are less +dangerous if exfiltrated, though in demo deployments, you can generate +longer-lived identity files with `tctl`: + + + +(!docs/pages/includes/plugins/tbot-identity.mdx secret="teleport-plugin-slack-identity"!) 
+ + +(!docs/pages/includes/plugins/identity-export.mdx user="access-plugin" secret="teleport-plugin-slack-identity"!) + + + +## Step 4/8. Install the Teleport Slack plugin + +(!docs/pages/includes/plugins/install-access-request.mdx name="slack"!) + +## Step 5/8. Register a Slack app + +The Access Request plugin for Slack receives Access Request events from the +Teleport Auth Service, formats them into Slack messages, and sends them to the +Slack API to post them in your workspace. For this to work, you must register a +new app with the Slack API. + +### Create your app + +Visit [https://api.slack.com/apps](https://api.slack.com/apps) to create a new +Slack app. Click "Create an App", then "From scratch". Fill in the form as shown +below: + +![Create Slack App](../../../../img/enterprise/plugins/slack/Create-a-Slack-App.png) + +The "App Name" should be "Teleport". Click the "Development Slack Workspace" +dropdown and choose the workspace where you would like to see Access Request +messages. + +### Generate an OAuth token with scopes + +Next, configure your application to authenticate to the Slack API. We will do +this by generating an OAuth token that the plugin will present to the Slack API. + +We will restrict the plugin to the narrowest possible permissions by using OAuth +scopes. The Slack plugin needs to post messages to your workspace. It also needs +to read usernames and email addresses in order to direct Access Request +notifications from the Auth Service to the appropriate Teleport users in Slack. + +After creating your app, the Slack website will open a console where you can +specify configuration options. On the sidebar menu under "Features", click +"OAuth & Permissions". 
+ +Scroll to the "Scopes" section and click "Add an OAuth Scope" for each of the +following scopes: + +- `chat:write` +- `incoming-webhook` +- `users:read` +- `users:read.email` + +The result should look like this: + +![API Scopes](../../../../img/enterprise/plugins/slack/api-scopes.png) + +After you have configured scopes for your plugin, scroll back to the top of the +OAuth & Permissions page, find the "OAuth Tokens for Your Workspace" section, +and click "Install to Workspace". You will see a summary of the permissions you +configured for the Slack plugin earlier. + +In "Where should Teleport post?", choose "Slackbot" as the default channel the +plugin will post to. The plugin will post here when sending direct messages. +Later in this guide, we will configure the plugin to post in other channels as +well. + +After submitting this form, you will see an OAuth token in the "OAuth & +Permissions" tab under "Tokens for Your Workspace": + +![OAuth Tokens](../../../../img/enterprise/plugins/slack/OAuth.png) + +You will use this token later when configuring the Slack plugin. + +## Step 6/8. Configure the Teleport Slack plugin + +At this point, the Teleport Slack plugin has the credentials it needs to +communicate with your Teleport cluster and the Slack API. In this step, you will +configure the Slack plugin to use these credentials. You will also configure the +plugin to notify the right Slack channels when it receives an Access Request +update. + +### Create a configuration file + + + +The Teleport Slack plugin uses a configuration file in TOML format. Generate a +boilerplate configuration by running the following command (the plugin will not run +unless the config file is in `/etc/teleport-slack.toml`): + +```code +$ teleport-slack configure | sudo tee /etc/teleport-slack.toml > /dev/null +``` + +This should result in a configuration file like the one below: + +```toml +(!examples/resources/plugins/teleport-slack.toml!)
+``` + + +The Slack Helm Chart uses a YAML values file to configure the plugin. +On your local workstation, create a file called `teleport-slack-helm.yaml` +based on the following example: + +```yaml +(!examples/resources/plugins/teleport-slack-helm.yaml!) +``` + + + + +### Edit the configuration file + +Open the configuration file created for the Teleport Slack plugin and update the following fields: + +**`[teleport]`** + +The Slack plugin uses this section to connect to your Teleport cluster: + +(!docs/pages/includes/plugins/config-toml-teleport.mdx!) + +(!docs/pages/includes/plugins/refresh-plugin-identity.mdx!) + +**`[slack]`** + +`token`: Open [`https://api.slack.com/apps`](https://api.slack.com/apps), find +the Slack app you created earlier, navigate to the "OAuth & Permissions" tab, +copy the "Bot User OAuth Token", and paste it into this field. + +**`[role_to_recipients]`** + +The `role_to_recipients` map configures the channels that the Slack plugin will +notify when a user requests access to a specific role. When the Slack plugin +receives an Access Request from the Auth Service, it will look up the role being +requested and identify the Slack channels to notify. + + + +Here is an example of a `role_to_recipients` map. Each value can be a +single string or an array of strings: + +```toml +[role_to_recipients] +"*" = "admin-slack-channel" +"dev" = ["dev-slack-channel", "admin-slack-channel"] +"dba" = "alex@gmail.com" +``` + + +In the Helm chart, the `role_to_recipients` field is called `roleToRecipients` +and uses the following format, where keys are strings and values are arrays of +strings: + +```yaml +roleToRecipients: + "*": ["admin-slack-channel"] + "dev": + - "dev-slack-channel" + - "admin-slack-channel" + "dba": ["alex@gmail.com"] +``` + + + +In the `role_to_recipients` map, each key is the name of a Teleport role. Each +value configures the Slack channel (or channels) to notify.
Each string must be +either the name of a Slack channel (including a user's direct message channel) +or the email address of a Slack user. If the recipient is an email address, the +Slack plugin will use that email address to look up a direct message channel. + +The `role_to_recipients` map must also include an entry for `"*"`, which the +plugin looks up if no other entry matches a given role name. In the example +above, requests for roles aside from `dev` and `dba` will notify the +`admin-slack-channel` channel. + +
+**Suggested reviewers** + +Users can suggest reviewers when they create an Access Request, e.g.: + +```code +$ tsh request create --roles=dbadmin --reviewers=alice@example.com,ivan@example.com +``` + +If an Access Request includes suggested reviewers, the Slack plugin will add +these to the list of channels to notify. If a suggested reviewer is an email +address, the plugin will look up the direct message channel for that +address and post a message in that channel. + +
+ +Configure the Slack plugin to notify you when a user requests the `editor` role +by adding the following to your `role_to_recipients` config (replace +`TELEPORT_USERNAME` with the user you assigned the `editor-reviewer` role +earlier): + + + +```toml +[role_to_recipients] +"*" = "access-requests" +"editor" = "TELEPORT_USERNAME" +``` + + +```yaml +roleToRecipients: + "*": "access-requests" + "editor": "TELEPORT_USERNAME" +``` + + + +Either create an `access-requests` channel in your Slack workspace or rename the +value of the `"*"` key to an existing channel. + +### Invite your Slack app + +Once you have configured the channels that the Slack plugin will notify when it +receives an Access Request, you will need to ensure that the plugin can post in +those channels. + +You have already configured the plugin to send direct messages as Slackbot. For +any other channel you mention in your `role_to_recipients` map, you will need +to invite the plugin to that channel. Navigate to each channel and enter `/invite +@teleport` in the message box. + +## Step 7/8. Test your Slack app + +Once Teleport is running, you've created the Slack app, and the plugin is +configured, you can now run the plugin and test the workflow. 
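Before you start the plugin, it can help to double-check the assembled configuration. If you are running the plugin from a TOML file on a Linux host, the end result might look like the following sketch. All values below are illustrative placeholders — your proxy address, identity file path, and bot token will differ:

```toml
# Hypothetical example of an assembled /etc/teleport-slack.toml.
[teleport]
# Teleport Proxy Service address (placeholder).
addr = "teleport.example.com:443"
# Identity file exported earlier in this guide (placeholder path).
identity = "/var/lib/teleport/plugins/slack/identity"
refresh_identity = true

[slack]
# "Bot User OAuth Token" from the OAuth & Permissions tab (placeholder).
token = "xoxb-0000000000-placeholder"

[role_to_recipients]
"*" = "access-requests"
"editor" = "TELEPORT_USERNAME"

[log]
output = "stderr"
severity = "INFO"
```
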
+ + + +Start the plugin: + +```code +$ teleport-slack start +``` + +If everything works fine, the log output should look like this: + +```code +$ teleport-slack start +INFO Starting Teleport Access Slack Plugin 7.2.1: slack/app.go:80 +INFO Plugin is ready slack/app.go:101 +``` + + +Run the plugin: + +```code +$ docker run -v :/etc/teleport-slack.toml public.ecr.aws/gravitational/teleport-plugin-slack:(=teleport.version=) start +``` + + +Install the plugin: + +```code +$ helm upgrade --install teleport-plugin-slack teleport/teleport-plugin-slack --values teleport-slack-helm.yaml +``` + +To inspect the plugin's logs, use the following command: + +```code +$ kubectl logs deploy/teleport-plugin-slack +``` + +Debug logs can be enabled by setting `log.severity` to `DEBUG` in +`teleport-slack-helm.yaml` and executing the `helm upgrade ...` command +above again. Then you can restart the plugin with the following command: + +```code +$ kubectl rollout restart deployment teleport-plugin-slack +``` + + + +Create an Access Request and check if the plugin works as expected with the +following steps. + +### Create an Access Request + +(!docs/pages/includes/plugins/create-request.mdx!) + +The user you configured earlier to review the request should receive a direct +message from "Teleport" in Slack allowing them to visit a link in the Teleport +Web UI and either approve or deny the request. + +### Resolve the request + +(!docs/pages/includes/plugins/resolve-request.mdx!) + +Once the request is resolved, the Slack bot will add an emoji reaction of ✅ or +❌ to the Slack message for the Access Request, depending on whether the request +was approved or denied. + + + +When the Slack plugin posts an Access Request notification to a channel, anyone +with access to the channel can view the notification and follow the link.
While +users must be authorized via their Teleport roles to review Access Requests, you +should still check the Teleport audit log to ensure that the right users are +reviewing the right requests. + +When auditing Access Request reviews, check for events with the type `Access +Request Reviewed` in the Teleport Web UI. + + + +## Step 8/8. Set up systemd + +This section is only relevant if you are running the Teleport Slack plugin on a +Linux host. + +In production, we recommend starting the Teleport plugin daemon via an init +system like systemd. Here's the recommended Teleport plugin service unit file +for systemd: + +```ini +(!examples/systemd/plugins/teleport-slack.service!) +``` + +Save this as `teleport-slack.service` in either `/usr/lib/systemd/system/` or +another [unit file load +path](https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Unit%20File%20Load%20Path) +supported by systemd. + +Enable and start the plugin: + +```code +$ sudo systemctl enable teleport-slack +$ sudo systemctl start teleport-slack +``` + +## Next steps + +- Read our guides to configuring [Resource Access + Requests](../resource-requests.mdx) and [Role Access + Requests](../role-requests.mdx) so you can get the most out + of your Access Request plugins. diff --git a/docs/pages/identity-governance/access-requests/resource-requests.mdx b/docs/pages/identity-governance/access-requests/resource-requests.mdx new file mode 100644 index 0000000000000..974045ce26bd4 --- /dev/null +++ b/docs/pages/identity-governance/access-requests/resource-requests.mdx @@ -0,0 +1,506 @@ +--- +title: Resource Access Requests +sidebar_label: Resource Requests +sidebar_position: 2 +description: Teleport allows users to request access to specific resources from the CLI or UI. Requests can be escalated via ChatOps or anywhere else via our flexible Authorization Workflow API. 
+tags: + - access-requests + - how-to + - identity-governance + - privileged-access +enterprise: Identity Governance +--- + +{/* lint disable page-structure remark-lint */} + +With Teleport Resource Access Requests, users can request access to specific +resources without needing to know anything about the roles or RBAC controls used +under the hood. +The Access Request API makes it easy to dynamically approve or deny these +requests. + +Just-in-time Access Requests are a feature of Teleport Enterprise. +Teleport Community Edition users can get a preview of how Access Requests work by +requesting a role via the Teleport CLI. Full Access Request functionality, +including Resource Access Requests and an intuitive and searchable UI, is +available in Teleport Enterprise. + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!) + +- (!docs/pages/includes/tctl.mdx!) + +## Step 1/6. Grant roles to users + +The built-in `requester` and `reviewer` roles have permissions to, respectively, +open and review Access Requests. Grant the `requester` and `reviewer` roles to +existing users, or create new users to test this feature. Make sure the +requester has a valid `login` so that they can view and access SSH nodes. + +For the rest of the guide we will assume that the `requester` role has been +granted to a user named `alice` and the `reviewer` role has been granted to a +user named `bob`. + +1. Assign the `requester` role to a user named `alice`: + + (!docs/pages/includes/add-role-to-user.mdx role="requester" user="\`alice\`"!) + +1. Repeat these steps to assign the `reviewer` role to a user named `bob`. + + + +Consider defining custom roles to limit the scope of a requester or reviewer's +permissions. Read the [Access Request +Configuration](./access-request-configuration.mdx) guide for available options. + + + +## Step 2/6. Search for resources + +First, log in as `alice`.
+ +```code +$ tsh login --proxy teleport.example.com --user alice +``` + +Notice that `tsh ls` returns an empty list, because `alice` does not have access to any resources by default. +```code +$ tsh ls +Node Name Address Labels +--------- ------- ------ +``` + +Then try searching for all available ssh nodes. + +```code +$ tsh request search --kind node +Name Hostname Labels Resource ID +------------------------------------ ----------- ------------ ------------------------------------------------------ +b1168402-9340-421a-a344-af66a6675738 iot test=test /teleport.example.com/node/b1168402-9340-421a-a344-af66a6675738 +bbb56211-7b54-4f9e-bee9-b68ea156be5f node test=test /teleport.example.com/node/bbb56211-7b54-4f9e-bee9-b68ea156be5f + +To request access to these resources, run +> tsh request create --resource /teleport.example.com/node/b1168402-9340-421a-a344-af66a6675738 --resource /teleport.example.com/node/bbb56211-7b54-4f9e-bee9-b68ea156be5f \ + --reason +``` + +You can search for resources of kind `node`, `kube_cluster`, `db`, `app`, and +`windows_desktop`. Teleport also supports searching and requesting access to +resources within Kubernetes clusters with kind `kube_resource`. + +Advanced filters and queries are supported. See our +[filtering reference](../../reference/cli/cli.mdx) for more information. + +Try narrowing your search to a specific resource you want to access. + +```code +$ tsh request search --kind node --search iot +Name Hostname Labels Resource ID +------------------------------------ ----------- ------------ ------------------------------------------------------ +b1168402-9340-421a-a344-af66a6675738 iot test=test /teleport.example.com/node/b1168402-9340-421a-a344-af66a6675738 + +To request access to these resources, run +> tsh request create --resource /teleport.example.com/node/b1168402-9340-421a-a344-af66a6675738 \ + --reason +``` + +## Step 3/6. 
Request access to a resource + +Copy the command output by `tsh request search` in the previous step, optionally filling in a request reason. + +```code +$ tsh request create --resource /teleport.example.com/node/bbb56211-7b54-4f9e-bee9-b68ea156be5f \ + --reason "responding to incident 123" +Creating request... +Request ID: f406f5d8-3c2a-428f-8547-a1d091a4ddab +Username: alice +Roles: access +Resources: ["/teleport.example.com/node/bbb56211-7b54-4f9e-bee9-b68ea156be5f"] +Reason: "responding to incident 123" +Reviewers: [none] (suggested) +Status: PENDING + +hint: use 'tsh login --request-id=' to login with an approved request + +Waiting for request approval... + +``` + +The command will automatically wait until the request is approved. + +## Step 4/6. Approve the Access Request + +First, log in as `bob`. + +```code +$ tsh login --proxy teleport.example.com --user bob +``` + +Then list, review, and approve the Access Request. + +```code +$ tsh request ls +ID User Roles Resources Created At (UTC) Status +------------------------------------ ----- ------ --------------------------- ------------------- ------- +f406f5d8-3c2a-428f-8547-a1d091a4ddab alice access ["/teleport.example.... [+] 23 Jun 22 18:25 UTC PENDING + +[+] Requested resources truncated, use `tsh request show ` to view the full list + +hint: use 'tsh request show ' for additional details + use 'tsh login --request-id=' to login with an approved request +$ tsh request show f406f5d8-3c2a-428f-8547-a1d091a4ddab +Request ID: f406f5d8-3c2a-428f-8547-a1d091a4ddab +Username: alice +Roles: access +Resources: ["/teleport.example.com/node/bbb56211-7b54-4f9e-bee9-b68ea156be5f"] +Reason: "responding to incident 123" +Reviewers: [none] (suggested) +Status: PENDING + +hint: use 'tsh login --request-id=' to login with an approved request +$ tsh request review --approve f406f5d8-3c2a-428f-8547-a1d091a4ddab +Successfully submitted review. 
Request state: APPROVED +``` + + +Check out our [Access Request +integrations](plugins/plugins.mdx) to notify +the right people about new Access Requests. + + +## Step 5/6. Access the requested resource + +`alice`'s `tsh request create` command should resolve now that the request has been approved. + +```code +$ tsh request create --resource /teleport.example.com/node/bbb56211-7b54-4f9e-bee9-b68ea156be5f \ + --reason "responding to incident 123" +Creating request... +Request ID: f406f5d8-3c2a-428f-8547-a1d091a4ddab +Username: alice +Roles: access +Resources: ["/teleport.example.com/node/bbb56211-7b54-4f9e-bee9-b68ea156be5f"] +Reason: "responding to incident 123" +Reviewers: [none] (suggested) +Status: PENDING + +hint: use 'tsh login --request-id=' to login with an approved request + +Waiting for request approval... + +Approval received, getting updated certificates... + +> Profile URL: https://teleport.example.com + Logged in as: alice + Active requests: f406f5d8-3c2a-428f-8547-a1d091a4ddab + Cluster: teleport.example.com + Roles: access, requester + Logins: alice + Kubernetes: disabled + Allowed Resources: ["/teleport.example.com/node/bbb56211-7b54-4f9e-bee9-b68ea156be5f"] + Valid until: 2022-06-23 22:46:22 -0700 PDT [valid for 11h16m0s] + Extensions: permit-agent-forwarding, permit-port-forwarding, permit-pty +``` + +`alice` can now view and access the node. + +```code +$ tsh ls +Node Name Address Labels +--------- --------- --------- +iot [::]:3022 test=test + +$ tsh ssh alice@iot +iot:~ alice$ +``` + +## Step 6/6. Resume regular access + +While logged in with a Resource Access Request, users will be blocked from access to any other resources. +This is necessary because their certificate now contains an elevated role, +so it is restricted to only allow access to the resources they were specifically approved for. +Use the `tsh request drop` command to "drop" the request and resume regular access. 
+ +```code +$ tsh request drop +``` + +Once you have configured Resource Access Requests, `tsh ssh` is able to +automatically create a Resource Access Request for you when access is denied, +allowing you to skip the `tsh request search` and `tsh request create` steps. + +```code +$ tsh ssh alice@iot +ERROR: access denied to alice connecting to iot on cluster teleport.example.com + +You do not currently have access to alice@iot, attempting to request access. + +Enter request reason: please +Creating request... +Request ID: ab43fc70-e893-471b-872e-ae65eb24fd76 +Username: alice +Roles: access +Resources: ["/teleport.example.com/node/bbb56211-7b54-4f9e-bee9-b68ea156be5f"] +Reason: "please" +Reviewers: [none] (suggested) +Status: PENDING + +hint: use 'tsh login --request-id=' to login with an approved request + +Waiting for request approval... + +Approval received, reason="okay" +Getting updated certificates... + +iot:~ alice$ +``` + +## Restricting the resources a user can request access to + +In this guide, we showed you how to enable a user to search for resources to +request access to. To do so, we assigned the user a Teleport role with the +`search_as_roles` field set to the preset `access` role. + +You can impose further restrictions on the resources a user is allowed to +search by assigning `search_as_roles` to a more limited role. Below, we will +show you which permissions you must set to restrict a user's ability to search +for different resources. + +To restrict access to a particular resource using a role similar to the ones +below, edit one of the user's roles so the `search_as_roles` field includes the +role you have created. + +For full details on how to use Teleport roles to configure RBAC, see the +[Access Controls Reference](../../reference/access-controls/roles.mdx). + +### `node` + +You can restrict access to searching `node` resources by assigning values to the +`node_labels` field in the `spec.allow` or `spec.deny` fields. 
The following +role allows access to SSH Service instances with the `env:staging` label. + +```yaml +kind: role +version: v5 +metadata: + name: staging-access +spec: + allow: + node_labels: + env: staging + logins: + - '{{internal.logins}}' + options: + # Only allows the requester to use this role for 1 hour from time of request. + max_session_ttl: 1h +``` + +### `kube_cluster` + +You can restrict access to searching `kube_cluster` resources by assigning +values to the `kubernetes_labels` field in the `spec.allow` or `spec.deny` +fields. + +The following role allows access to Kubernetes clusters with the `env:staging` +label: + +```yaml +kind: role +metadata: + name: kube-access +version: v8 +spec: + allow: + kubernetes_labels: + env: staging + kubernetes_resources: + - kind: '*' + namespace: '*' + name: '*' + api_group: '*' + deny: {} +``` + +### Kubernetes resources + +You can restrict access to Kubernetes resources by assigning values to the +`kubernetes_resources` field in the `spec.allow` or `spec.deny` fields. + +The following role allows access to Kubernetes pods with the name `nginx` in any +namespace, and all pods in the `dev` namespace: + +```yaml +kind: role +metadata: + name: kube-access +version: v8 +spec: + allow: + kubernetes_labels: + '*': '*' + kubernetes_resources: + - kind: pods + namespace: '*' + name: 'nginx*' + api_group: '' + - kind: pods + namespace: dev + name: '*' + api_group: '' + kubernetes_groups: + - viewers + deny: {} +``` + +The `request.kubernetes_resources` field allows you to restrict what kinds of Kubernetes +resources a user can request access to. Configuring this field to any value will disallow +requesting access to the entire Kubernetes cluster. + +If the `request.kubernetes_resources` field is not configured, then a user can request access +to any Kubernetes resources, including the entire Kubernetes cluster. + +The following role allows users to request access only to Kubernetes pods.
+Requests for Kubernetes resources other than `pods` will not be allowed. + +```yaml +kind: role +metadata: + name: requester-kube-access +version: v8 +spec: + allow: + request: + search_as_roles: + - kube-access + kubernetes_resources: + - kind: pods + api_group: '' +``` + +The following role allows users to request access only to Kubernetes deployments and/or pods. + +```yaml +kind: role +metadata: + name: requester-kube-access +version: v8 +spec: + allow: + request: + search_as_roles: + - kube-access + kubernetes_resources: + - kind: deployments + api_group: apps + - kind: pods + api_group: '' +``` + +The following role allows users to request access to any specific Kubernetes resource. + +```yaml +kind: role +metadata: + name: requester-kube-access +version: v8 +spec: + allow: + request: + search_as_roles: + - kube-access + kubernetes_resources: + - kind: '*' + api_group: '*' +``` + +The `request.kubernetes_resources` field only restricts what +`kind` / `api_group` of Kubernetes resource requests are allowed. +To control Kubernetes access to these resources, see the +[Kubernetes Resources](../../enroll-resources/kubernetes-access/controls.mdx) +section for more details. + +### `db` + +You can restrict access to searching `db` resources by assigning values to the +`db_labels` field in the `spec.allow` or `spec.deny` fields. + +The following role allows access to databases with the `env:dev` or +`env:staging` labels: + +```yaml +kind: role +version: v5 +metadata: + name: developer +spec: + allow: + db_labels: + env: ["dev", "staging"] + + # Database account names this role can connect as. + db_users: ["viewer", "editor"] + db_names: ["*"] +``` + +### `app` + +You can restrict access to searching `app` resources by assigning values to the +`app_labels` field in the `spec.allow` or `spec.deny` fields.
+ +The following role allows access to all applications except for those in +`env:prod`: + +```yaml +kind: role +version: v5 +metadata: + name: dev +spec: + allow: + app_labels: + "*": "*" + deny: + app_labels: + env: "prod" +``` + +### `windows_desktop` + +You can restrict access to searching `windows_desktop` resources by assigning +values to the `windows_desktop_labels` field in the `spec.allow` or `spec.deny` +fields. + +The following role allows access to all Windows desktops with the +`env:dev` or `env:staging` labels. + +```yaml +kind: role +version: v4 +metadata: + name: developer +spec: + allow: + windows_desktop_labels: + env: ["dev", "staging"] + + windows_desktop_logins: ["{{internal.windows_logins}}"] +``` + +## Next steps + +- Read about all of the ways you can configure Access Requests in the [Access + Request Configuration Guide](access-request-configuration.mdx). For example, + you can configure the `search_as_roles` fields in a user's role to restrict + the resources the user has access to list in order to request access. +- With Teleport's Access Request plugins, users can manage Access Requests from + within your organization's existing messaging and project management + solutions. Read the documentation on [Access Request + plugins](../access-requests/plugins/plugins.mdx). +- For more information about creating and managing Access Requests as an end + user, read [Request Access to Roles and + Resources](../../connect-your-client/teleport-clients/request-access.mdx). +- Learn about [Access Lists](../access-lists/access-lists.mdx), which allow you + to assign elevated privileges to a list of Teleport users for a limited time. 
+ + diff --git a/docs/pages/admin-guides/access-controls/access-requests/role-requests.mdx b/docs/pages/identity-governance/access-requests/role-requests.mdx similarity index 94% rename from docs/pages/admin-guides/access-controls/access-requests/role-requests.mdx rename to docs/pages/identity-governance/access-requests/role-requests.mdx index e4e255e030168..2109414b7fa63 100644 --- a/docs/pages/admin-guides/access-controls/access-requests/role-requests.mdx +++ b/docs/pages/identity-governance/access-requests/role-requests.mdx @@ -1,7 +1,14 @@ --- title: Role Access Requests +sidebar_position: 1 +sidebar_label: Role Requests description: Use Just-in-time Access Requests to request new roles with elevated privileges. -h1: Teleport Role Access Requests +tags: + - access-requests + - how-to + - identity-governance + - privileged-access +enterprise: Identity Governance --- Teleport's Just-in-time Access Requests allow users to request access to @@ -10,7 +17,7 @@ via ChatOps or anywhere else via our flexible Authorization Workflow API. ## Prerequisites -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!) - (!docs/pages/includes/tctl.mdx!) @@ -98,12 +105,12 @@ You can not mix the two. For more information on how to request access to specific resources, see the [Resource Access Requests Guide](./resource-requests.mdx). -![New Request](../../../../img/access-requests/new-role-request.png) +![New Request](../../../img/access-requests/new-role-request.png) When all desired roles have been added, click **PROCEED TO REQUEST**, where you can review and submit the request. 
-![Submit Request](../../../../img/access-requests/submit-request.png) +![Submit Request](../../../img/access-requests/submit-request.png) ## Reviewing Access Requests via the Web UI diff --git a/docs/pages/admin-guides/access-controls/device-trust/device-management.mdx b/docs/pages/identity-governance/device-trust/device-management.mdx similarity index 88% rename from docs/pages/admin-guides/access-controls/device-trust/device-management.mdx rename to docs/pages/identity-governance/device-trust/device-management.mdx index 1f47615f73f5a..f81481fc2c1c6 100644 --- a/docs/pages/admin-guides/access-controls/device-trust/device-management.mdx +++ b/docs/pages/identity-governance/device-trust/device-management.mdx @@ -1,6 +1,12 @@ --- title: Manage Trusted Devices +sidebar_label: Management description: Learn how to manage Trusted Devices +tags: + - conceptual + - identity-governance + - resiliency +enterprise: Identity Governance --- This guide provides instructions for performing Device Trust management @@ -9,7 +15,7 @@ token, and removing a trusted device. ## Prerequisites -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!) (!docs/pages/includes/device-trust/prereqs.mdx!) @@ -27,9 +33,9 @@ removing devices that are no longer in use. ```
- -Teleport supports device synchronization with [Jamf Pro](./jamf-integration.mdx). Once configured, devices -are automatically updated in Teleport's device inventory. + +Teleport supports device synchronization through [MDM integrations](device-trust.mdx#mdm-integrations). +Once configured, devices are automatically updated in Teleport's device inventory. Before you can enroll the device, you need to register it. To register @@ -81,9 +87,9 @@ Use `tctl` to check that the device has been registered: ```code $ tctl devices ls -Asset Tag OS Enroll Status Device ID ------------- ----- ------------- ------------------------------------ -(=devicetrust.asset_tag=) macOS not enrolled (=devicetrust.device_id=) +Asset Tag OS Source Enroll Status Owner Device ID +------------ ----- ------ ------------- ----- ------------------------------------ +(=devicetrust.asset_tag=) macOS not enrolled (=devicetrust.device_id=) ``` ## Create a device enrollment token @@ -136,8 +142,8 @@ enrolls the user's device in their next Teleport (`tsh`) login. For auto-enrollment to work, the following conditions must be met: - A device must be registered. Registration may be -[manual](#register-a-trusted-device) or performed using an -integration, like the [Jamf Pro integration](./jamf-integration.mdx). +[manual](#register-a-trusted-device) or performed using [an MDM +integration](./device-trust.mdx#mdm-integrations). - Auto-enrollment must be enabled in the cluster setting. Enable auto-enrollment in your cluster settings. 
To do so, modify the dynamic @@ -194,9 +200,9 @@ A device that is no longer in use may be removed using `tctl devices rm First, find a device to delete: ```code $ tctl devices ls -Asset Tag OS Enroll Status Device ID ------------- ----- ------------- ------------------------------------ -C00AA0AAAA0A macOS enrolled c9cbb327-68a8-497e-b820-6a4b2bf58269 +Asset Tag OS Source Enroll Status Owner Device ID +------------ ----- ------ ------------- ----- ------------------------------------ +C00AA0AAAA0A macOS enrolled alice c9cbb327-68a8-497e-b820-6a4b2bf58269 ``` Now use asset-tag or device id to delete a device: @@ -270,3 +276,4 @@ Save and close your editor to apply your changes. - [Device Trust Enforcement](./enforcing-device-trust.mdx) - [Jamf Pro Integration](./jamf-integration.mdx) +- [Microsoft Intune Integration](./intune-integration.mdx) diff --git a/docs/pages/admin-guides/access-controls/device-trust/device-trust.mdx b/docs/pages/identity-governance/device-trust/device-trust.mdx similarity index 85% rename from docs/pages/admin-guides/access-controls/device-trust/device-trust.mdx rename to docs/pages/identity-governance/device-trust/device-trust.mdx index dafb8c7683e16..33e0fad71af50 100644 --- a/docs/pages/admin-guides/access-controls/device-trust/device-trust.mdx +++ b/docs/pages/identity-governance/device-trust/device-trust.mdx @@ -1,14 +1,16 @@ --- title: Device Trust description: Teleport Device Trust Concepts -layout: tocless-doc +template: "no-toc" videoBanner: gBQyj_X1LVw +sidebar_position: 5 +tags: + - conceptual + - identity-governance + - resiliency +enterprise: Identity Governance --- -(!docs/pages/includes/device-trust/support-notice.mdx!) - -## Concepts - Device Trust allows Teleport admins to enforce the use of trusted devices. 
Resources protected by the device mode "required" will enforce the use of a trusted device, in addition to establishing the user's identity and enforcing @@ -23,6 +25,8 @@ Device Trust requires two of the following steps to have been configured: Categorically, we define these two requirements as Trusted Device management and Device Trust enforcement. +(!docs/pages/includes/device-trust/support-notice.mdx!) + ## Trusted Device management Device management is divided into two separate phases: inventory management and @@ -69,10 +73,11 @@ the better the ongoing guarantees that the device itself is trustworthy. ## Device Trust enforcement -Enforcing Device Trust means configuring Teleport with Device Trust mode, i.e. applying -`device_trust_mode: required` rule, which tells Teleport Auth Service to only allow access -with a trusted and an authenticated device, in addition to establishing the user's identity and enforcing -the necessary roles. +Enforcing Device Trust means configuring Teleport with Device Trust mode, i.e. +applying a `device_trust_mode: required` or `device_trust_mode: required-for-humans` +rule, which tells Teleport Auth Service to only allow access with a trusted and +an authenticated device, in addition to establishing the user's identity and +enforcing the necessary roles. Teleport supports two methods for device enforcement: Role-based enforcement and Cluster-wide enforcement. @@ -80,9 +85,15 @@ enforcement and Cluster-wide enforcement. - **Role-based enforcement** can be used to enforce Device Trust at role level, using RBAC. - **Cluster-wide enforcement** can be used to enforce Device Trust at cluster level. +## MDM integrations + +Device Trust currently supports syncing devices from [Jamf Pro](jamf-integration.mdx) and [Microsoft +Intune](intune-integration.mdx). 
+ ## Guides - [Getting Started with Device Trust](guide.mdx) - [Device Management](device-management.mdx) - [Enforcing Device Trust](enforcing-device-trust.mdx) - [Jamf Pro Integration](jamf-integration.mdx) +- [Microsoft Intune Integration](intune-integration.mdx) diff --git a/docs/pages/admin-guides/access-controls/device-trust/enforcing-device-trust.mdx b/docs/pages/identity-governance/device-trust/enforcing-device-trust.mdx similarity index 83% rename from docs/pages/admin-guides/access-controls/device-trust/enforcing-device-trust.mdx rename to docs/pages/identity-governance/device-trust/enforcing-device-trust.mdx index 566cc9132339a..0c8b999d82fa9 100644 --- a/docs/pages/admin-guides/access-controls/device-trust/enforcing-device-trust.mdx +++ b/docs/pages/identity-governance/device-trust/enforcing-device-trust.mdx @@ -1,7 +1,13 @@ --- title: Enforce Device Trust +sidebar_label: Enforcement description: Learn how to enforce trusted devices with Teleport videoBanner: gBQyj_X1LVw +tags: + - conceptual + - identity-governance + - resiliency +enterprise: Identity Governance --- @@ -29,9 +35,12 @@ by the `device_trust_mode` authentication setting: - `required` - enables device authentication and device-aware audit. Additionally, it requires a trusted device for all SSH, Database and Kubernetes connections. +- `required-for-humans` - enables device authentication and device-aware audit. + Additionally, it requires a trusted device for all SSH, Database and + Kubernetes connections, for human users only (bots are exempt). ### Prerequisites -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!) (!docs/pages/includes/device-trust/prereqs.mdx!) @@ -45,12 +54,12 @@ instructions. Role-based configuration enforces trusted device access at the role level. It can be configured with the `spec.options.device_trust_mode` option and applies to the resources in its `allow` rules. 
It -works similarly to [`require_session_mfa`](../guides/per-session-mfa.mdx). +works similarly to [`require_session_mfa`](../../zero-trust-access/authentication/per-session-mfa.mdx). To enforce authenticated device checks for a specific role when a user accesses databases, Kubernetes clusters, and servers with Teleport, update the role with -the `device_trust_mode` field assigned to `"required"`. The following example -updates the preset `require-trusted-device` role: +the `device_trust_mode` field assigned to `"required"` or `"required-for-humans"`. +The following example updates the preset `require-trusted-device` role: ```yaml kind: role @@ -132,7 +141,7 @@ ERROR: ssh: rejected: administratively prohibited (unauthorized device) It is possible to use [trusted -clusters](../../management/admin/trustedclusters.mdx) to limit the impact of +clusters](../../zero-trust-access/deploy-a-cluster/trustedclusters.mdx) to limit the impact of device mode `required`. A leaf cluster in mode `required` will enforce access to all of its resources, without imposing the same restrictions to the root cluster. Likewise, a root cluster will not enforce Device Trust on resources in @@ -146,10 +155,10 @@ enforcement](#role-based-trusted-device-enforcement). To access apps protected by Device Trust using the Web UI (Teleport v16 or later), make sure your device is [registered and enrolled]( -./device-management.mdx#register-a-trusted-device), install [Teleport Connect](../../../connect-your-client/teleport-connect.mdx), and +./device-management.mdx#register-a-trusted-device), install [Teleport Connect](../../connect-your-client/teleport-clients/teleport-connect.mdx), and follow the instructions during login. -Alternatively, you may use [tsh proxy app](../../../reference/cli/tsh.mdx) or the certificates issued by +Alternatively, you may use [tsh proxy app](../../reference/cli/tsh.mdx) or the certificates issued by `tsh app login`. 
As an example, to enforce Device Trust for all `env:production` apps, save the @@ -198,7 +207,7 @@ enforcement]( #role-based-trusted-device-enforcement). To access desktops protected by Device Trust make sure your device is [registered and enrolled](./device-management.mdx#register-a-trusted-device), -install [Teleport Connect](../../../connect-your-client/teleport-connect.mdx), and +install [Teleport Connect](../../connect-your-client/teleport-clients/teleport-connect.mdx), and follow the instructions during login. As an example, to enforce Device Trust for all `env:production` desktops, save @@ -244,7 +253,7 @@ device. ## Locking a device -Similar to [session and identity locking](../guides/locking.mdx), a device can +Similar to [session and identity locking](../locking.mdx), a device can be locked using `tctl lock`. Locking blocks certificate issuance and ongoing or future accesses originating @@ -255,9 +264,9 @@ Find a device ID to lock: ```code $ tctl devices ls -Asset Tag OS Enroll Status Device ID ------------- ----- ------------- ------------------------------------ -(=devicetrust.asset_tag=) macOS enrolled (=devicetrust.device_id=) +Asset Tag OS Source Enroll Status Owner Device ID +------------ ----- ------ ------------- ----- ------------------------------------ +(=devicetrust.asset_tag=) macOS enrolled alice (=devicetrust.device_id=) ``` Lock a device: @@ -282,3 +291,4 @@ ERROR: ssh: rejected: administratively prohibited (lock targeting Device:"(=devi ## Next steps - [Device Management](./device-management.mdx) - [Jamf Pro Integration](./jamf-integration.mdx) +- [Microsoft Intune Integration](./intune-integration.mdx) diff --git a/docs/pages/identity-governance/device-trust/guide.mdx b/docs/pages/identity-governance/device-trust/guide.mdx new file mode 100644 index 0000000000000..ab5bd7dc35b19 --- /dev/null +++ b/docs/pages/identity-governance/device-trust/guide.mdx @@ -0,0 +1,167 @@ +--- +title: Getting Started with Device Trust +sidebar_label: Getting 
Started
+description: Get started with Teleport Device Trust
+videoBanner: gBQyj_X1LVw
+tags:
+ - get-started
+ - identity-governance
+ - resiliency
+enterprise: Identity Governance
+---
+
+{/* lint disable page-structure remark-lint */}
+
+(!docs/pages/includes/device-trust/support-notice.mdx!)
+
+Device Trust requires two of the following steps to have been configured:
+
+- Device enforcement mode configured via either a role or a cluster-wide config.
+- Trusted device registered and enrolled with Teleport.
+
+In this guide, you will update an existing user profile to assign the preset `require-trusted-device`
+role and then enroll a trusted device into Teleport to access a resource (a test Linux server)
+protected with Teleport.
+
+## Prerequisites
+
+(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!)
+
+(!docs/pages/includes/device-trust/prereqs.mdx!)
+
+- User with `editor` role.
+ ```code
+ $ tsh status
+ > Profile URL: (=clusterDefaults.clusterName=):443
+ Logged in as: (=clusterDefaults.username=)
+ Cluster: (=clusterDefaults.clusterName=)
+ Roles: access, auditor, editor
+ Logins: root, ubuntu, ec2-user
+ Kubernetes: disabled
+ Valid until: 2023-08-22 03:30:24 -0400 EDT [valid for 11h52m0s]
+ Extensions: login-ip, permit-agent-forwarding, permit-port-forwarding, permit-pty, private-key-policy
+ ```
+- Access to a Linux server (any Linux server you can access via `tsh ssh` will do).
+ ```code
+ $ tsh ls
+ Node Name Address Labels
+ ---------------- -------------- --------------------------------------
+ (=clusterDefaults.nodeIP=) ⟵ Tunnel
+
+ # test connection to (=clusterDefaults.nodeIP=)
+ $ tsh ssh root@(=clusterDefaults.nodeIP=)
+ root@(=clusterDefaults.nodeIP=):~#
+ ```
+
+Once the above prerequisites are met, begin with the following step.
+
+## Step 1/2. Update user profile to enforce Device Trust
+
+To enforce Device Trust, a user must be assigned a role with Device Trust mode "required".
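The preset `require-trusted-device` role used below carries this option in `spec.options`. A minimal sketch of such a role (illustrative values only, not the exact preset definition):

```yaml
kind: role
version: v7
metadata:
  name: require-trusted-device-example # hypothetical name for illustration
spec:
  options:
    # Require a trusted, authenticated device for the resources this role allows.
    device_trust_mode: "required"
  allow:
    logins: ["{{internal.logins}}"]
    node_labels:
      "*": "*"
```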
+ +For this guide, we will use the preset `require-trusted-device` role to update current user profile. + +Open the user resource in your editor so we can update it with the preset `require-trusted-device` role. + +```code +$ tctl edit users/(=clusterDefaults.username=) +``` + +Edit the profile: + +```diff +kind: user +metadata: + id: 1692716146877042322 + name: (=clusterDefaults.username=) +spec: + created_by: + time: "2023-08-14T13:42:22.291972449Z" + expires: "0001-01-01T00:00:00Z" + roles: + - access + - auditor + - editor ++ - require-trusted-device # add this line + status: + is_locked: false + ... +``` + +Update the user by saving and closing the file in your editor. + +Now that the user profile is updated to enforce Device Trust, try to access the test server +again. + +```code +$ tsh logout; tsh login --proxy=(=clusterDefaults.clusterName=) --user=(=clusterDefaults.username=) +$ tsh ssh root@(=clusterDefaults.nodeIP=) +ERROR: access denied to root connecting to (=clusterDefaults.nodeIP=):0 +``` + +As you can verify from the above step, access to `(=clusterDefaults.nodeIP=)` ssh server, +which was previously accessible, is now forbidden. + +## Step 2/2. Enroll device + +To access `(=clusterDefaults.nodeIP=)` server again, you will have to enroll your device. + +Enrolling your device is easy, and can be done using `tsh` client: + +```code +$ tsh device enroll --current-device +Device "(=devicetrust.asset_tag=)"/macOS registered and enrolled +``` + + + The `--current-device` flag tells `tsh` to enroll the current device. The user must have the preset `editor` + or `device-admin` role to be able to self-enroll their device. For users without the `editor` or + `device-admin` roles, a device admin must generate the an enrollment token, which can then be + used to enroll the device. Learn more about manual device enrollment in the + [device management guide](./device-management.mdx#register-a-trusted-device). 
+ + +Relogin to fetch updated certificate with device extension: + +```code +$ tsh logout; tsh login --proxy=(=clusterDefaults.clusterName=) --user=(=clusterDefaults.username=) + +$ tsh status +> Profile URL: (=clusterDefaults.clusterName=):443 + Logged in as: (=clusterDefaults.username=) + Cluster: (=clusterDefaults.clusterName=):443 + Roles: access, auditor, editor + Logins: root + Kubernetes: enabled + Valid until: 2023-08-22 04:06:53 -0400 EDT [valid for 12h0m0s] + Extensions: login-ip, ... teleport-device-asset-tag, teleport-device-credential-id, teleport-device-id +``` + +The presence of the `teleport-device-*` extensions shows that the device was successfully authenticated. + +Now, let's try to access server (`(=clusterDefaults.nodeIP=)`) again: + +```code +$ tsh ssh root@(=clusterDefaults.nodeIP=) +root@(=clusterDefaults.nodeIP=):~# +``` + +Congratulations! You have successfully configured a Trusted Device and accessed a resource protected with +Device Trust enforcement. + +## Troubleshooting + +(!docs/pages/includes/device-trust/troubleshooting.mdx!) + +## Next steps + +- [Device Management](./device-management.mdx) +- [Enforcing Device Trust](./enforcing-device-trust.mdx) +- [Jamf Pro Integration](./jamf-integration.mdx) +- [Microsoft Intune Integration](./intune-integration.mdx) +- The role we illustrated in this guide uses the `internal.logins` trait, + which Teleport replaces with values from the Teleport local user + database. For full details on how traits work in Teleport roles, + see the [Access Controls + Reference](../../reference/access-controls/roles.mdx). 
+ diff --git a/docs/pages/identity-governance/device-trust/intune-integration.mdx b/docs/pages/identity-governance/device-trust/intune-integration.mdx new file mode 100644 index 0000000000000..4ecd0ff933cf5 --- /dev/null +++ b/docs/pages/identity-governance/device-trust/intune-integration.mdx @@ -0,0 +1,188 @@ +--- +title: Microsoft Intune Integration +description: Sync your Microsoft Intune inventory into Teleport +# Positioning the two integration guides together until we can create a +# docs subsection/subdirectory for Device Trust integrations +sidebar_position: 5 +tags: + - how-to + - identity-governance + - azure + - resiliency +enterprise: Identity Governance +--- + +The Device Trust Microsoft Intune integration lets you automatically sync your managed devices from +Intune into Teleport. This guide explains how to set up the integration. + +## How it works + +The Teleport Intune integration periodically reads your managed device inventory from Microsoft +Intune and syncs it to Teleport. It performs both incremental (called "partial") and full syncs, +as well as removals from Teleport if a device is removed from Intune. + +Syncing devices from Intune is an **inventory management** step, equivalent to +automatically running the corresponding `tctl devices add` commands. + +See the [Device Trust guide](device-trust.mdx) for fundamental Device Trust concepts +and behavior. + + +The Microsoft Intune integration is available exclusively as a hosted plugin through the +Teleport Web UI. Running it as a standalone service is not currently supported. + + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!) + +(!docs/pages/includes/device-trust/prereqs.mdx!) +- Access to Microsoft Entra ID with Global Administrator or Application Administrator permissions. +- An active Microsoft Intune subscription with managed devices. + +## Step 1/3. 
Register an application in Microsoft Entra ID + +Create and configure an application in Microsoft Entra ID that Teleport will use to access your +Intune data: + +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with an account + that has Global Administrator or Application Administrator permissions. +2. Navigate to Entra ID > App registrations and select New registration. +3. Configure the application: + - Name: enter a descriptive name (e.g., "Teleport Intune Integration"). + - Supported account types: select "Accounts in this organizational directory only". + - Redirect URI: leave this blank and click "Register". +4. Once created, copy the Application (client) ID from the Overview page. You'll need this + when configuring the integration in Teleport. + +In case of problems with the above instructions, follow [the official docs for registering an +application](https://learn.microsoft.com/en-us/graph/auth-register-app-v2). + +## Step 2/3. Configure API permissions and create client secret + +Grant the application permission to read Intune managed devices: + +1. In your registered application, navigate to API permissions > Add a permission. +2. Select Microsoft Graph > Application permissions. +3. Search for and add the `DeviceManagementManagedDevices.Read.All` permission. +4. Click "Grant admin consent" for your organization and confirm. + +In case of problems with the above instructions, follow [the official docs for granting application +permission](https://learn.microsoft.com/en-us/entra/identity-platform/quickstart-configure-app-access-web-apis#application-permission-to-microsoft-graph). + + +The consent must be granted by a Global Administrator or Privileged Role Administrator. Failure to +grant this permission will result in an error from the Graph API. + + +To create a client secret: + +1. In your registered application, navigate to Certificates & secrets > Client secrets. +2. Click New client secret. +3. 
Add a description and select an expiration period (we recommend setting a reminder to rotate + the secret before it expires). +4. Click "Add" and copy the Value (not the Secret ID). + +In case of problems with the above instructions, follow [the official docs for adding a client +secret](https://learn.microsoft.com/en-us/graph/auth-register-app-v2#option-2-add-a-client-secret). + +## Step 3/3. Configure the Intune integration in Teleport + +1. Follow [the official docs to find your tenant + identifier](https://learn.microsoft.com/en-us/partner-center/account-settings/find-ids-and-domain-names#find-the-microsoft-entra-tenant-id-and-primary-domain-name) + (e.g., `contoso.onmicrosoft.com` or the tenant GUID). +2. In the Web UI, navigate to Add New > Integration and select "Microsoft Intune". +3. Fill in the required information: + - Primary domain or Microsoft Entra tenant ID: enter your tenant identifier. + - Application (client) ID: Paste the Application ID you copied in Step 1. + - Client secret value: Paste the client secret value you copied in Step 2. +4. Click "Connect Microsoft Intune" to complete the setup. + +The integration will begin syncing devices from your Intune inventory within two minutes. Depending +on the size of your inventory, the initial sync may take a few minutes to complete. + +## How syncing works + +### Sync schedule + +The Intune integration uses the following fixed schedule that starts when the integration is +installed, updated, or when the Auth Service is restarted: + +- Full sync: every 24 hours – fetches all devices from Intune, reconciles the complete + inventory and removes missing devices. +- Partial sync: every 6 hours between full syncs – fetches only devices that have been modified in + Intune since the last sync. + +The integration tracks the last sync time from Intune to fetch only changed devices during partial +syncs. 
+ +### Device requirements + +For a device to be synced, it must have a serial number, its device registration state must be set +to "registered" and its operating system must be either macOS, Windows, or Linux. + +### Device ownership and removal + +When syncing inventory, the Intune integration claims ownership of all synced devices. +This can be verified by inspecting a device's `source` field: + +```yaml +# tctl get device/mydevice +kind: device +metadata: + name: 20ec6373-9e8e-46e0-8f1c-47ad6b06a768 +spec: + asset_tag: mydevice + os_type: macos + # ... + source: + name: intune + origin: intune + update_time: "2023-06-21T19:44:40.40601Z" +version: v1 +``` + +During full syncs, devices that are no longer present in Intune or have mismatched properties +(different serial number or OS type for the same Intune device ID) will be automatically removed +from Teleport's inventory. + + +Only devices synced by Intune are subject to automatic deletion. Devices added manually via `tctl +devices add` or synced from other sources won't be deleted by the Intune integration. + + +For immediate removal of unwanted devices, first lock the device in Teleport, +then remove it from Intune: + +```code +$ tctl devices lock --asset-tag=SERIAL_NUMBER --message='reason for locking' +Created a lock with name "a2f1491c-4a3e-4daf-9c83-2fe931668076". +``` + +Manual removal via `tctl devices rm` is possible, but note that if the device is still in the Intune +inventory, it'll be recreated during the next sync. + +## Monitoring the integration + +After configuration, you can monitor the integration status in the Web UI. Navigate to Zero Trust +Access > Integrations. If the integration status indicates an error, check the Auth Service logs for +errors from the Intune integration. 
+ +You can also verify synced devices using the command line: + +```code +$ tctl devices ls +Asset Tag OS Source Enroll Status Owner Device ID +------------ ------- ------ ------------- ----- ------------------------------------ +CXXXXXXXXX17 macOS Intune not enrolled 20ec6373-9e8e-46e0-8f1c-47ad6b06a768 +CXXXXXXXXX2T macOS Intune not enrolled 79755778-7cbe-4e2c-83ec-7eaa3d4d7e36 +CXXXXXXXXX3T Windows Intune not enrolled 665e59d5-393a-4894-841d-edad06329717 +CXXXXXXXXX4T macOS Intune not enrolled dd032e90-bfb0-47d5-bce5-e57545f6788f +CXXXXXXXXX5T Windows Intune not enrolled bf189863-a94a-40dc-9013-d96f8dada2f1 +(...) +``` + +## Next steps + +Automatically enroll synced devices on user login with +[auto-enrollment](./device-management.mdx#auto-enrollment). diff --git a/docs/pages/admin-guides/access-controls/device-trust/jamf-integration.mdx b/docs/pages/identity-governance/device-trust/jamf-integration.mdx similarity index 90% rename from docs/pages/admin-guides/access-controls/device-trust/jamf-integration.mdx rename to docs/pages/identity-governance/device-trust/jamf-integration.mdx index d229c8f3deef2..46e7751fa7346 100644 --- a/docs/pages/admin-guides/access-controls/device-trust/jamf-integration.mdx +++ b/docs/pages/identity-governance/device-trust/jamf-integration.mdx @@ -1,9 +1,17 @@ --- title: Jamf Pro Integration description: Sync your Jamf Pro inventory into Teleport +# Positioning the two integration guides together until we can create a +# docs subsection/subdirectory for Device Trust integrations +sidebar_position: 4 +tags: + - how-to + - identity-governance + - resiliency +enterprise: Identity Governance --- -Device Trust Jamf Pro integration lets you automatically sync your Jamf Pro computer +The Device Trust Jamf Pro integration lets you automatically sync your Jamf Pro computer inventory into Teleport. ## How it works @@ -28,7 +36,7 @@ and behavior. ## Prerequisites -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) 
+(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!) (!docs/pages/includes/device-trust/prereqs.mdx!) @@ -84,9 +92,9 @@ try to recreate your API client.
Configure Jamf hosted plugin Select the Jamf plugin: - ![Select Jamf plugin](../../../../img/access-controls/device-trust/select-jamf.png) + ![Select Jamf plugin](../../../img/access-controls/device-trust/select-jamf.png) Fill in the required information and click "Connect Jamf" button: - ![Configure Jamf plugin](../../../../img/access-controls/device-trust/hosted-jamf.png) + ![Configure Jamf plugin](../../../img/access-controls/device-trust/hosted-jamf.png)
@@ -183,13 +191,13 @@ devices ls`: ```code $ tctl devices ls -Asset Tag OS Enroll Status Device ID ------------- ------- ------------- ------------------------------------ -CXXXXXXXXX17 macOS not enrolled 20ec6373-9e8e-46e0-8f1c-47ad6b06a768 -CXXXXXXXXX2T macOS not enrolled 79755778-7cbe-4e2c-83ec-7eaa3d4d7e36 -CXXXXXXXXX3T macOS not enrolled 665e59d5-393a-4894-841d-edad06329717 -CXXXXXXXXX4T macOS not enrolled dd032e90-bfb0-47d5-bce5-e57545f6788f -CXXXXXXXXX5T macOS not enrolled bf189863-a94a-40dc-9013-d96f8dada2f1 +Asset Tag OS Source Enroll Status Owner Device ID +------------ ----- ------ ------------- ----- ------------------------------------ +CXXXXXXXXX17 macOS Jamf not enrolled 20ec6373-9e8e-46e0-8f1c-47ad6b06a768 +CXXXXXXXXX2T macOS Jamf not enrolled 79755778-7cbe-4e2c-83ec-7eaa3d4d7e36 +CXXXXXXXXX3T macOS Jamf not enrolled 665e59d5-393a-4894-841d-edad06329717 +CXXXXXXXXX4T macOS Jamf not enrolled dd032e90-bfb0-47d5-bce5-e57545f6788f +CXXXXXXXXX5T macOS Jamf not enrolled bf189863-a94a-40dc-9013-d96f8dada2f1 (...) ``` @@ -364,10 +372,10 @@ Create a readonly Jamf user for inventory sync. Take note of the user and password. 
User account setup:
 ![Jamf user setup](
- ../../../../img/access-controls/device-trust/jamf-setup-1.png)
+ ../../../img/access-controls/device-trust/jamf-setup-1.png)
 Privileges setup:
 ![Jamf privileges setup](
- ../../../../img/access-controls/device-trust/jamf-setup-2.png)
+ ../../../img/access-controls/device-trust/jamf-setup-2.png)

## Next steps
diff --git a/docs/pages/identity-governance/identity-gov-use-cases.mdx b/docs/pages/identity-governance/identity-gov-use-cases.mdx
new file mode 100644
index 0000000000000..2f8b160bb788e
--- /dev/null
+++ b/docs/pages/identity-governance/identity-gov-use-cases.mdx
@@ -0,0 +1,70 @@
+---
+title: Identity Governance in Action
+sidebar_label: Use Cases
+description: Common use cases with Teleport Identity Governance
+sidebar_position: 1
+tags:
+ - identity-governance
+enterprise: Identity Governance
+---
+
+Teleport Identity Governance helps organizations strengthen security and compliance by centralizing
+how access is granted, monitored, and enforced across infrastructure and applications.
+The following use cases highlight practical ways to apply Teleport's governance features, from integrating
+with device and identity providers to extending controls into external services.
+Each scenario links to detailed guides so you can quickly put these capabilities into practice.
+
+## Identity integrations
+
+Teleport supports multiple identity options so you can plug into your existing stack or make Teleport your source of truth.
+
+Integrate with IdPs like **AWS IAM Identity Center**, **Okta**, **Microsoft Entra ID**, or **SailPoint** to sync groups to Teleport roles.
+If you prefer, you can also run **Teleport as an identity provider**, issuing
+short-lived credentials and federating access to downstream apps and services.
+
+These integrations enable centralized onboarding/off-boarding, group-to-role mapping, and consistent policy enforcement across all your resources.
+ +- [**AWS IAM Identity Center**](integrations/aws-iam-identity-center/aws-iam-identity-center.mdx) +- [**Okta Integration**](integrations/okta/okta.mdx) +- [**Entra ID Integration**](integrations/entra-id/getting-started.mdx) +- [**Teleport Identity Provider**](./idps/idps.mdx) +- [**SailPoint SCIM Integration**](integrations/scim/sailpoint.mdx) + +## Just-in-time Access Requests + +[Grant temporary access when it's needed](./access-requests/access-requests.mdx). +Developers request elevated roles only for the specific task and duration required. +Approvals are tracked in the audit log, and Teleport issues short-lived certificates +so access automatically expires without manual cleanup. + +## Just-in-time Access Request plugins + +[Manage requests via third-party tools](access-requests/plugins/plugins.mdx). +Plugins enforce your reviewer policies, post status updates, and keep a complete, auditable trail. +You can receive Access Request notifications where your team already works, such as Slack, Microsoft Teams, PagerDuty, Jira, ServiceNow, and more. + +## Access Lists + +[Grant auditable access by user group](./access-lists/access-lists.mdx). +Define membership-based access with owners, eligibility rules, and time-boxed enrollment. +Access Lists map groups to Teleport roles, require periodic reviews, and provide a clear record of who had access and why. + +## Device Trust + +[Enforce trusted registered device access](./device-trust/device-trust.mdx). +Device identity can help block access from unknown or non-compliant workstations by policy. + +## Session and Identity Locking + +[Lock compromised users and resources](./locking.mdx). +Instantly quarantine a user, device, or node to cut off access in an incident. Locks terminate active sessions, +prevent new certificate issuance, and are fully scoped and time-limited for safe rollback. 
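Locks are themselves Teleport resources, so they can be inspected and managed declaratively as well as created with `tctl lock`. A sketch of a scoped, time-limited lock (the name, message, and timestamp are illustrative):

```yaml
kind: lock
version: v2
metadata:
  name: incident-2024-001 # hypothetical lock name
spec:
  target:
    user: alice
  message: "Access suspended pending incident review"
  # The lock lifts automatically at this time, enabling safe rollback.
  expires: "2024-06-01T00:00:00Z"
```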
+
+## Further reading
+
+- Read about all of the ways you can configure Access Requests in the
+[Access Request Configuration Guide](../identity-governance/access-requests/access-request-configuration.mdx).
+- To read more about the architecture of an Access Request plugin, and start writing your own,
+read our [Access Request plugin development guide](access-requests/plugins/how-to-build.mdx).
+- Learn how grant inheritance works with [nested Access Lists](../identity-governance/access-lists/nested-access-lists.mdx).
+- Learn how [SCIM](integrations/scim/scim.mdx) (System for Cross-domain Identity Management) integration supports automated user management.
diff --git a/docs/pages/identity-governance/identity-governance.mdx b/docs/pages/identity-governance/identity-governance.mdx
new file mode 100644
index 0000000000000..17fd003e87955
--- /dev/null
+++ b/docs/pages/identity-governance/identity-governance.mdx
@@ -0,0 +1,188 @@
+---
+title: Teleport Identity Governance
+description: Provides guides on Teleport Identity Governance.
+template: "landing-page" +tags: + - identity-governance +enterprise: Identity Governance +--- + +import LandingHero from '@site/src/components/Pages/Landing/LandingHero'; +import Integrations from "@site/src/components/Pages/Homepage/Integrations"; +import UseCasesList from "@site/src/components/Pages/Landing/UseCasesList"; + +import identityGovernanceImg from '@version/docs/img/identity-governance/identity-governance-hero.png'; +import oktaSvg from "@site/src/components/Icon/teleport-svg/okta.svg"; +import arrowRightSvg from "@site/src/components/Icon/teleport-svg/arrow-right.svg"; +import msEntraIdSvg from "@site/src/components/Icon/svg/ms-entra-id.svg"; +import slackSvg from "@site/src/components/Icon/teleport-svg/slack.svg"; +import jiraSvg from "@site/src/components/Icon/svg/jira.svg"; +import pagerDutySvg from "@site/src/components/Icon/svg/pagerduty.svg"; +import terraformSvg from "@site/src/components/Icon/svg/terraform.svg"; +import awsIdentityCenterSvg from "@site/src/components/Icon/svg/aws-identity-center.svg"; +import gcpSvg from "@site/src/components/Icon/svg/googleCloud.svg"; +import grafanaSvg from "@site/src/components/Icon/svg/grafana.svg"; +import jamfProSvg from "@site/src/components/Icon/teleport-svg/jamf-pro.svg"; + + + Manage on-demand access, privileges, and compliance for all your infrastructure. + + ## Grant on-demand access with role and resource requests + Provide your organization access with the right level of access granularity. Configure all aspects of the Access Request lifecycle in Teleport. 
+ + + + + + + diff --git a/docs/pages/identity-governance/idps/idps.mdx b/docs/pages/identity-governance/idps/idps.mdx new file mode 100644 index 0000000000000..301493a83cd6f --- /dev/null +++ b/docs/pages/identity-governance/idps/idps.mdx @@ -0,0 +1,15 @@ +--- +title: Configure Teleport as an Identity Provider +sidebar_label: Teleport as an IdP +description: How to set up Teleport's identity provider functionality +tags: + - identity-governance + - infrastructure-identity +enterprise: Identity Governance +--- + +Users can authenticate to both internal and external applications +through the use of a built in identity provider in Teleport. + + +- [SAML Reference](../../reference/access-controls/saml-idp.mdx): A reference for Teleport's SAML identity provider. diff --git a/docs/pages/admin-guides/access-controls/idps/saml-attribute-mapping.mdx b/docs/pages/identity-governance/idps/saml-attribute-mapping.mdx similarity index 94% rename from docs/pages/admin-guides/access-controls/idps/saml-attribute-mapping.mdx rename to docs/pages/identity-governance/idps/saml-attribute-mapping.mdx index 25a485b253975..2d394b94215fd 100644 --- a/docs/pages/admin-guides/access-controls/idps/saml-attribute-mapping.mdx +++ b/docs/pages/identity-governance/idps/saml-attribute-mapping.mdx @@ -1,13 +1,19 @@ --- title: SAML IdP Attribute Mapping +sidebar_label: Attribute Mapping description: How to map user attributes to custom SAML response -h1: SAML Attribute Mapping +tags: + - teleport-as-idp + - conceptual + - identity-governance + - privileged-access +enterprise: Identity Governance --- Attribute mapping configures Teleport SAML Identity Provider to assert custom user attributes in SAML response. The Teleport SAML IdP supports three configurable fields for attribute mapping: - `name`: Name of the outgoing attribute. Required. Name should be unique across attribute mapping. 
-- `value`: Value defined using a [predicate expression](../../../reference/predicate-language.mdx), which can reference +- `value`: Value defined using a [predicate expression](../../reference/access-controls/predicate-language.mdx), which can reference Teleport usernames, roles or traits. Required. - `name_format`: SAML attribute name format. Optional. The following formats are supported: - `unspecified`: value equals to `urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified`. Used as a default value. @@ -36,11 +42,11 @@ spec: ## Prerequisites -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!) - (!docs/pages/includes/tctl.mdx!) - Teleport user with permission to create a service provider resource. The preset `editor` role has this permission. - If you're new to SAML, consider reviewing our [SAML Identity Provider - Reference](../../../reference/access-controls/saml-idp.mdx) before proceeding. + Reference](../../reference/access-controls/saml-idp.mdx) before proceeding. ## Predicate expressions diff --git a/docs/pages/identity-governance/idps/saml-guide.mdx b/docs/pages/identity-governance/idps/saml-guide.mdx new file mode 100644 index 0000000000000..702f755553211 --- /dev/null +++ b/docs/pages/identity-governance/idps/saml-guide.mdx @@ -0,0 +1,265 @@ +--- +title: Using Teleport as a SAML Identity Provider +sidebar_label: Getting Started +description: How to configure and use Teleport as a SAML identity provider. +sidebar_position: 1 +tags: + - teleport-as-idp + - how-to + - identity-governance + - infrastructure-identity +enterprise: Identity Governance +--- + +This guide details an example on how to use Teleport as a SAML identity provider +(IdP). You can set up the Teleport SAML IdP to enable Teleport users to +authenticate to external services through Teleport. 
+ +## How it works + +On Teleport Enterprise deployments, the Teleport SAML IdP runs as part of the +Teleport Proxy Service and exposes paths of the Proxy Service HTTP API that +implement a SAML IdP. You can register a third-party application with the IdP by +creating a Teleport dynamic resource, the SAML IdP service provider, that +includes information about the application. + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!) + +- (!docs/pages/includes/tctl.mdx!) +- If you're new to SAML, consider reviewing our [SAML Identity Provider + Reference](../../reference/access-controls/saml-idp.mdx) before proceeding. +- SAML application (also known as a SAML service provider or SP) for testing. + For this guide, we'll be using [RSA Simple Test Service Provider](https://sptest.iamshowcase.com/) - + a free service that lets us test the Teleport SAML IdP. The test service + has a protected page, which can be accessed only after a user is federated to + the site with a valid SAML assertion flow. Later in this guide, we will refer + to this application as the "IAMShowcase" app. +![iamshowcase protected page](../../../img/access-controls/saml-idp/rsa-simplesp-protected-page.png) + +## Step 1/4. Create a role + +As a first step, we will create a role which grants permissions to manage +a `saml_idp_service_provider` resource for the IAMShowcase app. +The role will also grant permission to log in to this application +by authenticating with Teleport. We will name this role `saml-admin`. + +In the Teleport Web UI, from the side navigation menu, select **Zero Trust Access > Roles**. +From the **Roles** UI, click the `Create New Role` button. + +Enter "saml-admin" as the role name, then click the `Next: Resources` button to +move to the "Resources" tab. + +{/* vale off */} +In the **Resources** tab, open the dropdown menu by clicking the `+ Add Teleport Resource Access` +button, then select **Application Access**.
Enter "env" as the label key and "saml-test" +as the label value. In Step 3 of this guide, we will create a SAML service +provider resource with this matching label value `env: saml-test`, limiting +the scope of the permissions granted by this role. + +![Configure app label](../../../img/access-controls/saml-idp/step-1-create-role-app-label.png) + +Click the `Next: Admin Rules` button to move to the **Admin Rules** tab. + +In the **Admin Rules** tab, click the `+ Add New` button. In the **Teleport Resources** +input box, search for and select the `saml_idp_service_provider` resource. +Then, in the **Permissions** section, check the `read, list, create, update, delete` +permissions. + +![Configure service provider rule verbs](../../../img/access-controls/saml-idp/step-1-create-role-saml-resource.png) + +Click the `Next: Options` button to move to the next step, then click the `Create Role` +button to create the role. + +
+ +Reference YAML spec for the `saml-admin` role. +```yaml +kind: role +metadata: + name: saml-admin +spec: + allow: + app_labels: + env: saml-test + rules: + - resources: + - saml_idp_service_provider + verbs: + - read + - list + - create + - update + - delete +version: v8 +``` +
+ +(!docs/pages/includes/add-role-to-user.mdx role="saml-admin"!) + +## Step 2/4. Configure the service provider + +This step involves configuring the SAML service provider with the Teleport SAML IdP metadata. + +You can obtain the Teleport SAML IdP metadata values from the SAML application +setup wizard, which is available in the Teleport Web UI. + +In the Teleport Web UI, select the **Add New > Resource** menu, and search for +"saml application". Choose the "SAML Application (Generic)" tile. + +![Enroll SAML app](../../../img/access-controls/saml-idp/step-2-find-saml-discover-tile.png) + +In the first step of the SAML Application setup wizard, the Teleport Web UI displays +the Teleport SAML IdP metadata values. + +The process of SAML service provider configuration varies between service +providers. Some service providers may ask you to provide the Teleport SAML IdP's +entity ID, SSO URL, and X.509 certificate explicitly. Others may ask you to upload +the Teleport SAML IdP metadata (SAML entity descriptor) file. + +![Teleport IdP metadata](../../../img/access-controls/saml-idp/step-2-copy-metadata.png) + +The IAMShowcase app, which this guide is +based on, is designed to grant access to its protected page for any well-formatted, +IdP-federated SAML assertion. + +As such, we will move to the next step in the setup wizard to add the IAMShowcase app to Teleport. + +## Step 3/4. Add a service provider to Teleport + +To add a service provider to Teleport, you must configure service provider +metadata. You can do this either by providing the entity ID and ACS URL +values of the service provider, or by providing its entity descriptor +(also known as the metadata file, an XML file). + +Below, we'll show both configuration options. + +### Option 1: Configure with entity ID and ACS URL + +With this configuration method, Teleport first tries to fetch an entity +descriptor by querying the entity ID endpoint.
If an entity descriptor is not +found at that endpoint, Teleport will generate a new entity descriptor with the +given entity ID and ACS URL values. + +To configure the IAMShowcase app, the values you +need to provide are the following: +- **App Name:** `IAMShowcase`. +- **SP Entity ID / Audience URI:** `IAMShowcase`. The SAML metadata value or an + endpoint of the service provider. +- **ACS URL / SP SSO URL:** `https://sptest.iamshowcase.com/acs`. The endpoint + where users will be redirected after SAML authentication. The ACS URL is also + referred to as the SAML SSO URL. +- **Label:** `env: saml-test`. The label will be used in RBAC. + +![add service provider](../../../img/access-controls/saml-idp/step-3-add-service-provider.png) + +Click the `Finish` button. The "IAMShowcase" app is now added to Teleport. + +### Option 2: Configure with Entity Descriptor file + +With this option, you provide the service provider entity descriptor file, which +has all the details required to configure service provider metadata. + +If the service provider offers an option to download its entity descriptor file, +or you need more control over the entity descriptor, this is the recommended +option to add a service provider to Teleport. + +To configure the IAMShowcase app, the values you need to provide are the following: +- **App Name:** `IAMShowcase`. +- **Entity Descriptor:** Entity descriptor for the IAMShowcase app, + which is available at this URL: `https://sptest.iamshowcase.com/testsp_metadata.xml`. +- **Label:** `env: saml-test`. + +![add service provider with entity descriptor](../../../img/access-controls/saml-idp/step-3-add-service-provider-ed.png) + +Click the `Finish` button. The "IAMShowcase" app is now added to Teleport. + + +If an entity descriptor is provided, its content takes precedence over the values +provided for entity ID and ACS URL. + + +## Step 4/4.
Verify access to the protected page + +To verify everything works, navigate to the **Resources** page in the Teleport Web UI +and search for the "IAMShowcase" app. + +![Access app](../../../img/access-controls/saml-idp/step-4-login.png) + +From the "IAMShowcase" app tile, click the `Log in` button, which will forward +you to the IAMShowcase app's protected page with SAML assertion data signed by +the Teleport SAML IdP. + +![Protected page access](../../../img/access-controls/saml-idp/step-4-service-provider-authenticated-page.png) + +This page shows Teleport user details along with other attributes such as role +names that are federated by the Teleport SAML IdP, confirming a successful SAML +service provider configuration in Teleport. + + +## Manage SAML IdP service provider resource using `tctl` + +The following are examples of managing a `saml_idp_service_provider` resource using `tctl`. + +### Create a `saml_idp_service_provider` resource + +First, create a Teleport resource spec file with the following `saml_idp_service_provider` +resource spec: + +```code +$ cat > iamshowcase.yaml << EOF +kind: saml_idp_service_provider +metadata: + labels: + env: saml-test + # The resource name of the service provider. + name: IAMShowcase +spec: + acs_url: https://sptest.iamshowcase.com/acs + entity_id: IAMShowcase +version: v1 +EOF +``` + +Next, create a Teleport resource using the `tctl create` command: + +```code +$ tctl create iamshowcase.yaml +# SAML IdP service provider 'IAMShowcase' has been created. +``` + +### Update a `saml_idp_service_provider` resource + +To update the resource, first get the latest copy of the resource spec +from the Teleport cluster.
+```code +$ tctl get saml_idp_service_provider/IAMShowcase > iamshowcase.yaml +``` + +Then modify the spec in `iamshowcase.yaml`, save it, and update +the Teleport resource using the `tctl create -f` command: +```code +$ tctl create -f iamshowcase.yaml +``` + +### List `saml_idp_service_provider` resources +To list a specific service provider: +```code +$ tctl get saml_idp_service_provider/IAMShowcase +``` + +To list all the service providers: +```code +$ tctl get saml_idp_service_provider +``` + +### Delete a `saml_idp_service_provider` resource +```code +$ tctl rm saml_idp_service_provider/IAMShowcase +``` + +## Next steps + +- Learn how to [control access](saml-idp-rbac.mdx) to the SAML IdP service provider resource. +- Configure [SAML Attribute Mapping](./saml-attribute-mapping.mdx). diff --git a/docs/pages/identity-governance/idps/saml-idp-rbac.mdx b/docs/pages/identity-governance/idps/saml-idp-rbac.mdx new file mode 100644 index 0000000000000..64c6647d93893 --- /dev/null +++ b/docs/pages/identity-governance/idps/saml-idp-rbac.mdx @@ -0,0 +1,169 @@ +--- +title: SAML Application Access Control +sidebar_label: Access Controls +description: Learn how to manage access to a SAML service provider (SAML applications). +sidebar_position: 2 +tags: + - identity-governance +enterprise: Identity Governance +--- + +This page explains how access to a SAML IdP service provider resource (SAML application) +can be managed in Teleport. + +If you are new to the Teleport SAML IdP, start by learning how to configure [Teleport as +a SAML IdP](saml-guide.mdx). + +## How it works + +User access to the SAML IdP service provider resource can be categorized into two different +use cases: +- **Managing a SAML IdP service provider resource.** For example, when a Teleport administrator + attempts to create or update a SAML IdP service provider resource.
+- **Logging in to the SAML service provider.** For example, when a Teleport user attempts to + log in to the SAML IdP service provider by authenticating with Teleport. + +In both cases, access can be configured by using a Teleport role +with an allow/deny rule targeting the `saml_idp_service_provider` resource and label matchers +that match the role's `app_labels` with the `saml_idp_service_provider` resource labels. + + +## RBAC behavior between different Teleport role versions + +The Teleport SAML IdP applies different RBAC logic to the service provider resource in +role version 8 versus role version 7 and below. + +In role version 7 and below, the following access controls are applied to `saml_idp_service_provider` +resource access: + +- Role option that enables the IdP: `spec.options.idp.saml.enabled: true/false`. +- Cluster auth preference that enables the IdP: `spec.idp.saml.enabled: true/false`. +- Resource rule `spec.allow/deny.rules.resources.saml_idp_service_provider`. Applicable only + to admin actions. + - Allow rules with `read,list` verbs are applied implicitly. + - Deny rules with `read,list` verbs take precedence over the implicit allow. +- Per-session MFA: `spec.options.require_session_mfa: true/false`. + +Teleport role version 8 (released with Teleport version 18.0) introduced the following RBAC changes: +- Label matchers based on `app_labels`. +- Resource rule with verbs targeting `saml_idp_service_provider` is now applicable to both resource + access and admin actions. +- Device Trust for SAML IdP sessions. + +The role option `spec.options.idp.saml.enabled: true/false` is no longer supported starting +with role version 8. + +Per-session MFA is supported in all role versions. + + +## RBAC precedence + +Users can be assigned both newer (version 8) and older (version 7 and below) roles +at the same time. If a user is assigned both role versions 7 and 8, +the deny rules of version 8 take precedence.
+ +For example: +- If role version 7 denies access, access is denied. +- If role version 7 allows access but role version 8 denies access, access is denied. +- If role version 7 allows access but role version 8 does not explicitly allow access + (via matching app labels), access is denied. +- If role version 7 allows access and role version 8 also allows access, access is allowed. + +The table below shows a few more examples of the applicable RBAC when a user is assigned two roles, one version 7 and one version 8. + +| Role v7 | Role v8 | Result | +|----------------------------------------------------------------------------|------------------------------------------------------------|-------------------| +|
options:
idp:
saml:
enabled: false
|
allow:
app_labels:
* : *
| ❌ no access. | +|
options:
idp:
saml:
enabled: true
|
deny:
app_labels:
* : *
| ❌ no access | +|
options:
idp:
saml:
enabled: true
|
allow:
app_labels:
* : *
deny:
rules:
resources:
- saml_idp_service_provider
verbs:
- read
- list
| ❌ no access | +|
options:
idp:
saml:
enabled: true
|
allow:
app_labels:
* : *
| ✅ full access | +| No version 7 role assigned to the user |
allow:
app_labels:
* : *
| ✅ full access | +|
options:
idp:
saml:
enabled: true
|
allow:
app_labels:
env : dev
| ✅ access to SAML app matching `env:dev` resource label | +|
options:
idp:
saml:
enabled: true
| No version 8 role assigned to the user | ✅ full access | + + + The `saml_idp_service_provider` resource does not yet support MFA and Device Trust for admin actions. + + +## Role examples + +The following are examples of Teleport roles that grant permissions to either access or manage +the SAML IdP service provider resource. + +### Role to manage SAML IdP service provider resource + +In this case, the role needs to target the `saml_idp_service_provider` resource with +some or all of the `create,update,read,list,delete` verbs, as needed. + +The role should also grant access via `app_labels` that match the +resource label configured for the `saml_idp_service_provider` resource. + +Role spec with `saml_idp_service_provider` resource access verbs: +```yaml +kind: role +version: v8 +metadata: + name: saml-resource-manager +spec: + allow: + app_labels: + 'env': 'dev' # This label must match the saml_idp_service_provider resource label + rules: + - resources: + - saml_idp_service_provider + verbs: + - read + - list + - create + - update + - delete +``` + +### Role to allow users to log in to a SAML IdP service provider + +In this case, at minimum, a user needs `read,list` access to the `saml_idp_service_provider` +resource and must have an `app_labels` value matching the resource label defined for +the `saml_idp_service_provider` resource. + +Role spec with `app_labels` matching the resource labels:
+```yaml +kind: role +version: v8 +metadata: + name: saml-access +spec: + allow: + app_labels: + 'env': 'dev' # This label must match the saml_idp_service_provider resource label + rules: + - resources: + - saml_idp_service_provider + verbs: + - read + - list +  options: +    device_trust_mode: required +    require_session_mfa: true +``` + + +## Disabling the SAML identity provider at the cluster level + +To disable access to the identity provider at the cluster level, create +or update the `cluster_auth_preference` object with the following setting: + +```yaml +kind: cluster_auth_preference +metadata: + name: cluster-auth-preference +spec: + ... + idp: + saml: + enabled: false + ... +version: v2 +``` + +This will disable access to the SAML identity provider for all users regardless +of their role-level permissions. diff --git a/docs/pages/admin-guides/access-controls/idps/saml-gcp-workforce-identity-federation.mdx b/docs/pages/identity-governance/idps/usage/saml-gcp-workforce-identity-federation.mdx similarity index 96% rename from docs/pages/admin-guides/access-controls/idps/saml-gcp-workforce-identity-federation.mdx rename to docs/pages/identity-governance/idps/usage/saml-gcp-workforce-identity-federation.mdx index 262bd3a4519a1..831eb7d3d67da 100644 --- a/docs/pages/admin-guides/access-controls/idps/saml-gcp-workforce-identity-federation.mdx +++ b/docs/pages/identity-governance/idps/usage/saml-gcp-workforce-identity-federation.mdx @@ -1,9 +1,18 @@ --- -title: Access GCP Web Console and API with federated authentication +title: GCP Workforce Identity Federation with Teleport SAML IdP +sidebar_label: GCP Workforce Identity Federation description: Manage Google Cloud Platform (GCP) web console access with Teleport SAML IdP.
-h1: GCP Workforce Identity Federation with Teleport SAML IdP +tags: + - teleport-as-idp + - how-to + - identity-governance + - google-cloud + - infrastructure-identity +enterprise: Identity Governance --- +{/* lint disable page-structure remark-lint */} + GCP Workforce Identity Federation enables provisioning access to GCP web console and APIs for identities that are not managed within Google Workspace Admin or GCP Cloud Identity. @@ -17,10 +26,9 @@ process. This guide details how to integrate GCP workforce Identity Federation service with Teleport SAML IdP, so users can sign in into GCP web console by authenticating with Teleport. - ## Prerequisites -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!) - (!docs/pages/includes/tctl.mdx!) - If you're new to SAML, consider reviewing our [SAML Identity Provider @@ -39,7 +47,7 @@ pool provider to help you quickly get started with the integration. ## Guided configuration flow Create a workforce pool and pool provider with the script generated by Teleport. -In the Web UI, under **Access Management**, click **Enroll New Resource** menu. +In the Web UI, under **Add New** in the left pane, click **Resource** menu. In the search box, enter “workforce”, which will show the Workforce Identity Federation integration tile. Click the tile. ![Test the IdP](../../../../img/access-controls/saml-idp/gcp-workforce/gcp-workforce-tile.png) @@ -141,7 +149,7 @@ granular access control to GCP resources. Attribute mapping allows to map user identity and traits available in the SAML assertion data created by Teleport to GCP identity. 
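+For orientation, a GCP workforce pool provider attribute mapping is a set of CEL expressions keyed by target attribute. A minimal sketch (the `roles` attribute name is a hypothetical example; use the attribute names your Teleport SAML assertions actually carry):
+
+```
+google.subject: assertion.subject
+google.display_name: assertion.subject
+attribute.roles: assertion.attributes['roles'][0]
+```
+
+Here `google.subject` uniquely identifies the federated user, and custom `attribute.*` values can later be referenced in IAM conditions for granular access control.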
-GCP attribute mapping is similar to [Teleport SAML IdP attribute mapping](./saml-attribute-mapping.mdx) +GCP attribute mapping is similar to [Teleport SAML IdP attribute mapping](../saml-attribute-mapping.mdx) with a slight difference that you use [Google CEL](https://github.com/google/cel-spec/blob/master/doc/langdef.md) instead of predicate expression to map attributes. diff --git a/docs/pages/admin-guides/access-controls/idps/saml-grafana.mdx b/docs/pages/identity-governance/idps/usage/saml-grafana.mdx similarity index 87% rename from docs/pages/admin-guides/access-controls/idps/saml-grafana.mdx rename to docs/pages/identity-governance/idps/usage/saml-grafana.mdx index 284b5e7a8a0df..958a63d80b624 100644 --- a/docs/pages/admin-guides/access-controls/idps/saml-grafana.mdx +++ b/docs/pages/identity-governance/idps/usage/saml-grafana.mdx @@ -1,6 +1,13 @@ --- -title: Use Teleport's SAML Provider to authenticate with Grafana +title: Use Teleport's SAML Provider to Authenticate with Grafana +sidebar_label: Grafana Authentication description: Configure Grafana to use identities provided by Teleport. +tags: + - teleport-as-idp + - how-to + - identity-governance + - infrastructure-identity +enterprise: Identity Governance --- Grafana is an open source observability platform. Their enterprise version supports @@ -10,12 +17,20 @@ and Grafana to accept the identities it provides. Note that Teleport can act as an identity provider to any SAML-compatible service, not just those running behind the Teleport App Service. +## How it works + +Grafana enables users to authenticate using SAML. You can export SAML IdP +metadata from Teleport, then provide it to your Grafana configuration file in +order to instruct Grafana to trust the Teleport IdP's certificate authority. +Your Teleport cluster then uses a SAML IdP service provider resource to provide +information about your Grafana deployment to the Teleport SAML IdP. 
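+For orientation, SAML settings in Grafana live in the `[auth.saml]` section of `grafana.ini`. A rough sketch of the section this guide fills in (the paths, cluster address, and attribute names below are placeholder assumptions, not values to copy verbatim):
+
+```ini
+[auth.saml]
+enabled = true
+# Key pair Grafana uses for SAML signing and encryption (placeholder paths)
+certificate_path = /etc/grafana/saml.crt
+private_key_path = /etc/grafana/saml.key
+# Teleport SAML IdP metadata endpoint (placeholder cluster address)
+idp_metadata_url = https://teleport.example.com/enterprise/saml-idp/metadata
+# Map assertion attributes to Grafana user fields (names depend on your setup)
+assertion_attribute_login = uid
+assertion_attribute_email = mail
+```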
+ ## Prerequisites - An instance of Grafana Enterprise, with edit access to `grafana.ini`. - A trusted certificate authority to create TLS certificates/keys for the SAML connection. -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!) - (!docs/pages/includes/tctl.mdx!) diff --git a/docs/pages/admin-guides/access-controls/idps/saml-microsoft-entra-external-id.mdx b/docs/pages/identity-governance/idps/usage/saml-microsoft-entra-external-id.mdx similarity index 89% rename from docs/pages/admin-guides/access-controls/idps/saml-microsoft-entra-external-id.mdx rename to docs/pages/identity-governance/idps/usage/saml-microsoft-entra-external-id.mdx index 1d8e795359378..2be6f917dd0d1 100644 --- a/docs/pages/admin-guides/access-controls/idps/saml-microsoft-entra-external-id.mdx +++ b/docs/pages/identity-governance/idps/usage/saml-microsoft-entra-external-id.mdx @@ -1,14 +1,45 @@ --- title: Access Azure Portal and CLI description: Access Azure Portal and CLI by authenticating with Teleport SAML IdP +tags: + - teleport-as-idp + - how-to + - identity-governance + - azure + - infrastructure-identity +enterprise: Identity Governance --- +{/* lint disable page-structure remark-lint */} + This guide explains how to integrate the Teleport SAML IdP with [Microsoft Entra External ID](https://learn.microsoft.com/en-us/entra/external-id/) so that users can access Azure Portal and the Azure CLI by authenticating with Teleport. +## How it works + +An administrator configures the [Teleport SAML IdP](../saml-guide.mdx) as an +external IdP in Microsoft Entra External ID, and configures users who are to be +granted access to Azure as external users. As long as the accounts of these +users are associated with a domain that is assigned to the Teleport SAML IdP, +these users can access Azure portal and CLI tools by authenticating with +Teleport. 
+ +Authentication is based on the [interactive sign-in]( +https://learn.microsoft.com/en-us/cli/azure/authenticate-azure-cli-interactively) +process. User audit events in Azure are attributed to the Teleport user account, +providing better audit visibility. + +Alternatively, you can also manage access to the Azure API and CLI by deploying the +[Teleport Application Service](../../../enroll-resources/application-access/cloud-apis/azure.mdx). +This approach supports access based on [managed identity]( +https://learn.microsoft.com/en-us/cli/azure/authenticate-azure-cli-managed-identity), +which may be suitable for automated access. The downside to this approach is that it cannot +be used to provision access to the Azure portal, and the audit logs are not tied to the +Teleport user account, limiting user account visibility in the audit events. + ## Prerequisites -- A running Teleport Enterprise cluster version (=teleport.version=) or above. If you +- A running Teleport Enterprise cluster v17.5.1 or higher. If you want to get started with Teleport, [sign up](https://goteleport.com/signup) for a free trial. - If you're new to SAML, consider reviewing our [SAML identity provider (IdP)](../../../reference/access-controls/saml-idp.mdx) diff --git a/docs/pages/identity-governance/idps/usage/usage.mdx b/docs/pages/identity-governance/idps/usage/usage.mdx new file mode 100644 index 0000000000000..d000ab93bfaef --- /dev/null +++ b/docs/pages/identity-governance/idps/usage/usage.mdx @@ -0,0 +1,12 @@ +--- +title: Using the Teleport SAML IdP +sidebar_label: Usage +description: Explains how to enable Teleport users to authenticate to external services using the Teleport SAML IdP.
+enterprise: Identity Governance +--- + +The guides in this section explain how to set up the Teleport SAML IdP to enable +Teleport users to authenticate to external services via Teleport: + + + diff --git a/docs/pages/identity-governance/integrations/aws-iam-identity-center/advanced-options.mdx b/docs/pages/identity-governance/integrations/aws-iam-identity-center/advanced-options.mdx new file mode 100644 index 0000000000000..a39713b192396 --- /dev/null +++ b/docs/pages/identity-governance/integrations/aws-iam-identity-center/advanced-options.mdx @@ -0,0 +1,78 @@ +--- +title: Advanced Identity Center Options +sidebar_label: Advanced Options +description: Describes advanced Identity Center use cases. +tags: + - conceptual + - identity-governance +enterprise: Identity Governance +--- + +The Identity Center integration can be configured to handle various advanced use +cases that are not necessarily supported by the default installation flow. This +guide describes these advanced options and use cases. + +## Disabling Account Assignment role creation + +By default, the AWS Identity Center integration will create a Teleport role for +every possible combination of AWS Account and Permission Set managed by your AWS +Identity Center instance. If your Identity Center controls a large number of AWS +Accounts and/or Permission Sets, this may end up creating so many roles that it +starts to affect Teleport's performance. + +To avoid creating these Account Assignment roles, you can create the AWS IC +integration with this feature disabled by specifying `--roles-sync-mode NONE` +when creating the integration with `tctl`, for example: + + +Setting the Roles Sync Mode is only available when installing the Identity Center +integration via `tctl`. + +Role Sync Mode `NONE` is only available during installation. The Roles Sync Mode +can be changed to `ALL` later, but you can't go back the other way.
+ + +```console +$ tctl plugins install awsic \ + --instance-arn ${IDENTITY_CENTER_INSTANCE_ARN} \ + --instance-region ${IDENTITY_CENTER_INSTANCE_REGION} \ + --use-system-credentials \ + --assume-role-arn ${AWS_IAM_ROLE_ARN} \ + --scim-url ${IDENTITY_CENTER_SCIM_BASE_URL} \ + --scim-token ${IDENTITY_CENTER_SCIM_BEARER_TOKEN} \ + --access-list-default-owner ${TELEPORT_ACCESS_LIST_DEFAULT_OWNER} \ + --roles-sync-mode NONE +``` + +### Roles Sync Modes + +The Roles Sync Mode controls whether the IC integration will create Account +Assignment roles for each possible AWS Account Assignment. There are currently +two possible values: `ALL` (create roles for all possible Account Assignments) +and `NONE` (do not create roles for _any_ possible Account Assignment). + + +The integration's Group Import process uses these Account Assignment roles to +provision access for the Access Lists it creates. In order to prevent the integration +from creating invalid Access Lists, setting the Roles Sync Mode to `NONE` also +requires that the integration's Group Import filter contain a single exclude-all clause. + +Teleport enforces this restriction, preventing the accidental creation of an invalid +configuration. + + +### Switching Roles Sync Modes + +After installation, you can switch the Roles Sync Mode from `NONE` to `ALL` using +`tctl plugins edit`. + +```console +$ tctl plugins edit awsic --roles-sync-mode ALL +``` + + +Moving from Roles Sync Mode `ALL` to `NONE` could cause Teleport to +delete in-use roles, so that transition is not allowed. + +Currently, the only way to move back to `NONE` is to delete and re-install the integration.
+ diff --git a/docs/pages/admin-guides/management/guides/aws-iam-identity-center/aws-iam-identity-center.mdx b/docs/pages/identity-governance/integrations/aws-iam-identity-center/aws-iam-identity-center.mdx similarity index 83% rename from docs/pages/admin-guides/management/guides/aws-iam-identity-center/aws-iam-identity-center.mdx rename to docs/pages/identity-governance/integrations/aws-iam-identity-center/aws-iam-identity-center.mdx index 7450ac2b00f3e..b54dabf9eba5d 100644 --- a/docs/pages/admin-guides/management/guides/aws-iam-identity-center/aws-iam-identity-center.mdx +++ b/docs/pages/identity-governance/integrations/aws-iam-identity-center/aws-iam-identity-center.mdx @@ -1,6 +1,12 @@ --- title: AWS IAM Identity Center description: Provides an overview of the Teleport AWS IAM Identity Center integration. +tags: + - conceptual + - identity-governance + - privileged-access + - aws +enterprise: Identity Governance --- Teleport's integration with [AWS IAM Identity Center](https://aws.amazon.com/iam/identity-center/) @@ -11,9 +17,9 @@ With the AWS Identity Center integration, you can manage AWS access by granting ## How it works -The Identity Center integration builds on top of Teleport's [role-based access controls](../../../access-controls/guides/guides.mdx), -[just-in-time Access Requests](../../../access-controls/access-requests/access-requests.mdx) -and [Access Lists](../../../access-controls/access-lists/access-lists.mdx) to manage +The Identity Center integration builds on top of Teleport's [role-based access controls](../../../index.mdx), +[just-in-time Access Requests](../../access-requests/access-requests.mdx) +and [Access Lists](../../access-lists/access-lists.mdx) to manage the creation and deletion of Identity Center _Account Assignments_. An _account assignment_ is the combination of a specific AWS Permission Set on a @@ -53,3 +59,6 @@ user and automatically unassign once the Access Request expires. 
- [Getting Started with AWS IAM Identity Center integration](guide.mdx) - [Migrating Okta-managed AWS IAM Identity Center integration to Teleport](migrating-identity-center-from-okta-to-teleport.mdx) +- [Using the AWS CLI tools with Teleport and AWS IAM Identity Center](using-aws-cli.mdx) +- [Advanced Identity Center Options](advanced-options.mdx) +- [Rotating the AWS IAM Identity Center SCIM Token](maintenance.mdx) diff --git a/docs/pages/identity-governance/integrations/aws-iam-identity-center/guide.mdx b/docs/pages/identity-governance/integrations/aws-iam-identity-center/guide.mdx new file mode 100644 index 0000000000000..196198efddae4 --- /dev/null +++ b/docs/pages/identity-governance/integrations/aws-iam-identity-center/guide.mdx @@ -0,0 +1,475 @@ +--- +title: Getting Started with AWS IAM Identity Center integration +sidebar_label: Getting Started +description: Explains how to set up and use Teleport AWS IAM Identity Center integration. +tags: + - teleport-as-idp + - how-to + - identity-governance + - privileged-access + - aws +enterprise: Identity Governance +--- + +Teleport's integration with [AWS IAM Identity Center](https://aws.amazon.com/iam/identity-center/) +allows you to organize and manage your users' short- and long-term access to AWS +accounts and their permissions. + +## How it works + +The Teleport Web UI provides a guided configuration flow for the Identity Center +integration. The AWS IAM Identity Center integration fetches Identity Center +accounts, users, groups, permission set assignments, and other data. To enable +the integration, you set up Teleport as an OIDC identity provider for your AWS +account and create an AWS role with the permissions required for the integration +to function. You then enable the SCIM endpoint in AWS Identity Center to allow +Teleport to push user and group changes. + +## Prerequisites + +- Teleport Enterprise or Teleport Enterprise Cloud cluster version 17.0 or higher. 
+- Administrative access to AWS IAM Identity Center. +- Teleport preset `editor` role or equivalent to configure the plugin resource, integration + resource, and SAML IdP service provider resource. If you are using [role version 8](../../../reference/access-controls/roles.mdx#role-versions) + to manage access (which is the default role version since Teleport version 18.0), + ensure that your role allows `app_labels` for the AWS IAM Identity Center + resource `'teleport.dev/origin': 'aws-identity-center'`. + +Note that the Identity Center integration requires using Teleport as an external +identity source. + +As such, we recommend ensuring that all Identity Center users have access to +your Teleport cluster before turning the integration on to avoid access +interruption. If your Identity Center already uses an external identity source, +you can configure the corresponding [SSO connector](../../../zero-trust-access/sso/sso.mdx) +in Teleport or, if you're using Okta, enable the +[Okta integration](../okta/okta.mdx). + +## Step 1/7. Configure AWS integration + +To get started, navigate to the "Add new integration" page in your Teleport +cluster control panel and select "AWS Identity Center". + +![Pick Identity Center integration](../../../../img/identity-center/ic-pick-integration.png) + +Next, you will generate a script that creates an AWS IAM role for the +integration. + +
+Full list of IAM permissions required by Identity Center integration +``` +// ListAccounts +organizations:ListAccounts +organizations:ListAccountsForParent + +// ListGroupsAndMembers +identitystore:ListUsers +identitystore:ListGroups +identitystore:ListGroupMemberships + +// ListPermissionSetsAndAssignments +sso:DescribeInstance +sso:DescribePermissionSet +sso:ListPermissionSets +sso:ListAccountAssignmentsForPrincipal +sso:ListPermissionSetsProvisionedToAccount + +// CreateAndDeleteAccountAssignment +sso:CreateAccountAssignment +sso:DescribeAccountAssignmentCreationStatus +sso:DeleteAccountAssignment +sso:DescribeAccountAssignmentDeletionStatus +iam:AttachRolePolicy +iam:CreateRole +iam:GetRole +iam:ListAttachedRolePolicies +iam:ListRolePolicies + +// AllowAccountAssignmentOnOwner +iam:GetSAMLProvider + +// ListProvisionedRoles +iam:ListRoles +``` +
+
+![Configure AWS integration](../../../../img/identity-center/ic-step1.1.png)
+
+Enter the required information such as the Identity Center region, ARN, and integration
+name, and execute the generated command in the Cloud Shell.
+
+After the script has run, fill out the ARN for the role created by the script.
+
+![Run script for AWS integration](../../../../img/identity-center/ic-step1.2.png)
+
+## Step 2/7. Preview AWS resources
+
+In this step, you are presented with the list of AWS accounts, groups, and
+permission sets that Teleport was able to find in your Identity Center.
+
+![Preview AWS resources](../../../../img/identity-center/ic-step2.png)
+
+Pick the default owners that should be assigned to the Access Lists in Teleport.
+These resources will be imported into Teleport once the plugin is installed.
+
+## Step 3/7. Configure identity source
+
+
+After this step, Teleport will become your Identity Center's identity provider.
+
+To avoid access interruptions, we recommend making sure that all existing
+Identity Center users have access to your Teleport cluster by, for example, using
+the same [IdP](../../../zero-trust-access/sso/sso.mdx) as your current Identity Center
+external identity source.
+
+
+In this step, you will configure Teleport as a SAML IdP for the AWS IAM Identity
+Center.
+
+You will also configure AWS IAM Identity Center as a SAML service provider in
+Teleport.
+
+Follow the instructions to change your Identity Center's identity source to
+Teleport.
+
+![Configure identity source](../../../../img/identity-center/ic-step3.png)
+
+## Step 4/7. Enable SCIM
+
+The final step is to enable the SCIM endpoint in your Identity Center to
+allow Teleport to push user and group changes.
+
+![Enable SCIM](../../../../img/identity-center/ic-step4.png)
+
+Make sure to test the SCIM connection after enabling it.
+
+## Step 5/7.
Verify the integration
+
+Navigate to the Access Lists view page in your cluster and make sure that all
+your Identity Center groups have been imported.
+
+
+It may take a few minutes for the initial sync to complete.
+
+
+![Access Lists view](../../../../img/identity-center/ic-lists.png)
+
+Imported Access Lists should show the same members as their corresponding
+Identity Center groups.
+
+## Step 6/7. Configure access
+
+You need two kinds of Teleport role permissions to access the AWS accounts:
+- A permission that allows access to the SAML service provider which logs you into the
+  AWS Console. You configured this SAML service provider for the AWS IAM Identity Center
+  in Step 3.
+- A permission that allows access to the AWS account and permission set.
+
+### Configure access to the SAML service provider
+
+First, we will configure access to the SAML service provider that logs you
+into the AWS Console.
+
+In the Teleport Web UI, from the side navigation menu, select "Zero Trust Access > Roles".
+From the "Roles" UI, click the "Create New Role" button. Switch to the YAML editor.
+![Teleport role editor YAML mode](../../../../img/igs/entraid/entra-teleport-role-yaml-editor.png)
+
+Copy the role spec shown below and paste it into the role editor to create a new role.
+```yaml
+kind: role
+version: v8
+metadata:
+  name: identity-center-access
+spec:
+  allow:
+    app_labels:
+      'teleport.dev/origin': 'aws-identity-center'
+```
+
+Notice that in the role spec above, `app_labels` defines the `'teleport.dev/origin': 'aws-identity-center'`
+label. This is the default resource label which Teleport assigns to the SAML service
+provider resource created for the AWS IAM Identity Center.
+
+Grant this role to users so they can log in to the AWS Console. Note that users
+still need access to the AWS account and permission set, which is explained below.
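As a sketch, direct assignment of the role could look like the following user resource; the username `alice` is illustrative, and you could equally grant the role via an Access List or an Access Request.

```yaml
# Hypothetical user resource granting the identity-center-access role
# alongside the user's existing roles.
kind: user
version: v2
metadata:
  name: alice
spec:
  roles:
    - access
    - identity-center-access
```

A resource like this can be applied with `tctl create -f` (adding `--force` to update an existing user).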
+
+### Configure access to the AWS account and permission set
+
+Once you ensure that users have access to the SAML service provider created for the
+AWS IAM Identity Center, the next step is to grant permission to the
+AWS accounts and permission sets.
+
+When the Teleport AWS IAM Identity Center integration service is run for the first time,
+Teleport preserves the current state of the group-based account and permission set
+assignments that exist in your AWS IAM Identity Center instance. If a user had
+access to an AWS account based on their membership in a group, they will
+continue to have that access.
+
+If you want to configure a new set of AWS account access for a user, you will
+have to grant them an account assignment role. An account assignment role defines
+an `account_assignment` rule that binds an AWS account with the AWS permission set.
+
+Teleport creates an account assignment role for each combination (cartesian product)
+of AWS account and permission set available in your AWS IAM Identity
+Center instance. These roles have the following role name format:
+`-on--` (e.g. `AdministratorAccess-on-MyAccount-012345678`).
+
+You may use these auto-generated roles, or create an entirely new role that defines
+an `account_assignment` rule. Below is an example of such a role, which allows
+access to two different AWS accounts and permission sets.
+
+```yaml
+kind: role
+version: v8
+metadata:
+  name: identity-center-access
+spec:
+  allow:
+    app_labels:
+      'teleport.dev/origin': 'aws-identity-center'
+    account_assignments:
+      - account: "" # AWS identity center account ID
+        # permission set ARN of AdministratorAccess
+        permission_set: arn:aws:sso:::permissionSet/ssoins-1234/ps-5678
+      - account: ""
+        # permission set ARN of ReadOnlyAccess
+        permission_set: arn:aws:sso:::permissionSet/ssoins-1234/ps-8765
+```
+
+This role can then be granted to users either by direct assignment,
+Access List role grants, or the Access Request workflow.
+
+## Step 7/7. Connect to AWS
+
+Once the integration is up and running, and you have the required permissions,
+you will see an application named `aws-identity-center` among your resources:
+
+![Connect to AWS SSO portal](../../../../img/identity-center/ic-app.png)
+
+Clicking the "Log In" button for this app takes you to your Identity Center
+SSO start page, which you can use to pick a role and connect to your AWS account
+as usual.
+
+## Usage scenarios
+
+Let's take a look at some common usage scenarios enabled by the Identity Center
+integration.
+
+### Managing Account Assignments with Access Lists
+
+Teleport creates an Access List for each group imported from the Identity Center
+instance, with group members becoming Access List members. Default Access List
+owners are configured during the initial integration enrollment flow and can be
+adjusted as necessary after the initial sync completes.
+
+Each imported Access List is automatically assigned a role (or a set of roles)
+that grants all members of that list access to all of the Account Assignments
+assigned to the corresponding AWS Identity Center group during the integration
+setup.
+
+These Teleport-generated roles each represent a single Account Assignment, and
+are named using the `-on--` convention
+(e.g. `AdministratorAccess-on-MyAccount-012345678`).
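As an illustration of this convention (all values here are made up), the generated role name is simply the permission set name and the account joined by `-on-`:

```shell
# Illustrative only: how a generated account-assignment role name is
# composed from a permission set name and an account name/ID.
PERMISSION_SET="AdministratorAccess"
ACCOUNT="MyAccount-012345678"
ROLE_NAME="${PERMISSION_SET}-on-${ACCOUNT}"
echo "${ROLE_NAME}"   # prints: AdministratorAccess-on-MyAccount-012345678
```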
+
+
+These roles are considered system roles, and any edits or updates to them will
+be automatically reverted.
+
+
+To give a user the permissions granted by an existing Identity Center synced
+Access List, an owner can add that user as a member, which causes Teleport to add
+the user to the corresponding Identity Center group.
+
+
+Currently all existing Teleport users are synced to Identity Center. Label-based
+user filtering will be supported in a later release.
+
+
+Removing a member from an Identity Center synced Access List removes them
+from the corresponding Identity Center group, effectively revoking those privileges.
+
+In addition to membership changes, Teleport propagates changes in Access List
+grants back to Identity Center as well. For example, imagine an Access List with
+the roles `AdminAccess-on-my-account` and `ReadOnlyAccess-on-my-account`. If the
+Access List owner removes the `AdminAccess-on-my-account` role from the Access List,
+that change will be propagated back to AWS and the corresponding Identity Center
+group will have its assignments updated to remove the `AdminAccess` Permission Set.
+
+### Just-in-time Access with Resource Access Requests
+
+Teleport represents the imported AWS accounts as apps in the Teleport Resource
+View, with the permission sets available for each account bundled up inside the
+app. AWS accounts are treated the same as any other Teleport-managed resource,
+so users can see what AWS permission sets they are allowed to request just by
+checking "Show requestable resources" in the resource view.
+
+Users can then choose the specific Account Assignments they want access to by
+selecting from the Permission Sets available to each AWS Account. Users
+can mix Permission Sets from multiple AWS Accounts, and even include other
+Teleport-managed resources if necessary.
+
+![Selecting Identity Center resources](../../../../img/identity-center/ic-select-ps.png)
+
+Once the user has selected their desired Account Assignments, the Access Request
+submission and review process is the same as for any other Teleport-managed
+resource. Assuming the Access Request is approved, Teleport will create the
+appropriate AWS Account Assignments in Identity Center to grant the requested
+access. These AWS Account Assignments will automatically be deleted when the
+Access Request expires.
+
+The user can access their temporary AWS Accounts and Roles from within Teleport
+by assuming the Access Request roles.
+
+![Assumed Role granting Identity Center Account Assignments](../../../../img/identity-center/ic-role-assumed.png)
+
+
+The AWS Account Assignments will exist for the lifetime of the Access Request,
+regardless of when the user assumes the associated role(s).
+
+
+### Just-in-time access with role Access Requests
+
+The Identity Center integration allows Teleport users to submit Access Requests for short-term privilege elevation.
+
+When an Access Request for a role granting Identity Center privileges is
+approved, Teleport creates an individual assignment for that user in
+Identity Center. The assignment is deleted when the Access Request expires.
+
+### Long-term access with Access Requests
+
+If a user requests access to Account Assignments that can also be granted via an
+existing Access List, Teleport will offer the reviewer the option of *promoting*
+the Access Request to long-term access.
+
+![Promoting Access Request](../../../../img/identity-center/ic-promote.png)
+
+When an Access Request is promoted to long-term access, the requesting user is
+added to the targeted Access List. This membership change is propagated to the
+corresponding Identity Center group, and the user is then granted their requested
+Account Assignments via group membership.
+
+## FAQ
+
+### Which Access Lists are synced to Identity Center?
+
+Teleport syncs all Access Lists that have AWS account and permission set rules
+among their role grants to Identity Center.
+
+### How does it work with nested Access Lists?
+
+Identity Center does not support nested groups. As such, Teleport recursively
+flattens any [nested Access Lists](../../access-lists/nested-access-lists.mdx)
+into a single Identity Center group containing all members reachable from the
+top-level Access List.
+
+The flattened Identity Center group will be kept updated as members are added to
+or removed from the nested Access Lists.
+
+### How do I uninstall the integration?
+
+
+Before fully removing the integration, make sure to change the
+identity source in your Identity Center instance.
+
+
+Deleting the integration automatically removes all Teleport resources it used to manage its state, including:
+
+- Teleport roles created for AWS Identity Center account assignments.
+- Access Lists imported from AWS Identity Center groups.
+
+However, user-created resources remain intact, including:
+
+- Access Lists you created.
+- Roles you manually configured for account assignments.
+
+If an Access List grants permissions to a now-deleted integration role, or if a user
+has a deleted role assigned directly, you must manually remove those references.
+
+If you decide not to switch to Teleport, you can delete the Identity Center integration.
+
+You can remove the integration by navigating to your cluster's Integrations
+list and deleting the integration named `AWS IAM Identity Center`. The AWS
+OIDC integration that was created during the first enrollment step will be
+automatically removed as well once the plugin is uninstalled.
+
+To clean up AWS resources created for the integration, remove the Identity
+Provider and its role from your AWS IAM console as well.
+
+## Troubleshooting
+
+### Access forbidden during AWS account login
+
+This may happen if the user does not have access to the SAML IdP service provider
+resource created for the AWS IAM Identity Center.
+
+First, check the SAML IdP audit event, which can be found in the Teleport Audit Log UI.
+
+![SAML IdP login audit event](../../../../img/identity-center/ic-saml-login-audit-log.png)
+
+More detail on the error message can be found by clicking on the "Details" button.
+
+Example of a failed login audit log due to missing permission:
+```
+{
+  "cluster_name": "example.teleport.sh",
+  "code": "TSI000I",
+  "ei": 0,
+  "error": "access to saml_idp_service_provider denied. User does not have permissions. ",
+  "event": "saml.idp.auth",
+  "service_provider_entity_id": "https://ca-central-1.signin.aws.amazon.com/platform/saml/d-xxxxxx",
+  "sid": "",
+  "success": false,
+  "time": "2025-09-03T04:36:40.123Z",
+  "uid": "7523b6cc-313e-4f56-9f58-3f1b91075c4d",
+  "user": "example-user"
+}
+```
+
+To resolve RBAC errors, first identify which roles are assigned to the user.
+
+If the user has been assigned a Teleport role of version 8,
+the role must define `app_labels` matching the `teleport.dev/origin: aws-identity-center`
+label.
+
+```yaml
+kind: role
+version: v8
+metadata:
+  name: ic-access-example
+spec:
+  allow:
+    app_labels:
+      'teleport.dev/origin': 'aws-identity-center'
+```
+
+If the role requires Device Trust, the user must have a device-authenticated
+Teleport web session.
+
+If the role requires session MFA, the user must authorize the login flow with MFA.
+
+If the user is assigned a Teleport role of version 7 or below,
+check that the IdP role option is enabled:
+```yaml
+kind: role
+version: v7
+metadata:
+  name: ic-access-example
+spec:
+  options:
+    idp:
+      saml:
+        enabled: true # idp must be enabled.
+``` + + +## Next steps + +- Take a deeper dive into fundamental Teleport concepts used in Identity Center + integration such as + [RBAC](../../../zero-trust-access/authentication/authentication.mdx), + [JIT Access Requests](../../access-requests/access-requests.mdx), + [Access Lists](../../access-lists/access-lists.mdx) + and the [Teleport SAML IdP](../../idps/idps.mdx). +- Learn how to enable the [Okta integration](../okta/okta.mdx) + to sync apps, users and groups from Okta in conjunction with Identity Center + integration. diff --git a/docs/pages/identity-governance/integrations/aws-iam-identity-center/maintenance.mdx b/docs/pages/identity-governance/integrations/aws-iam-identity-center/maintenance.mdx new file mode 100644 index 0000000000000..86b13663f5b76 --- /dev/null +++ b/docs/pages/identity-governance/integrations/aws-iam-identity-center/maintenance.mdx @@ -0,0 +1,53 @@ +--- +title: Rotating the AWS IAM Identity Center SCIM token +sidebar_label: SCIM Token Rotation +description: How to update the AWS IAM Identity Center SCIM token in Teleport +tags: + - identity-governance + - how-to + - aws +enterprise: Identity Governance +--- + +This guide will show you how to rotate the SCIM bearer token in Teleport using `tctl`. + +## How it works + +Teleport provisions AWS users and groups into AWS IAM Identity Center via +[SCIM](https://scim.org/). The Teleport SCIM client authenticates itself to AWS +IAM Identity Center using a bearer token. By their nature bearer tokens need to be +rotated occasionally to maintain security. + +## Generating the token + +You can generate the new SCIM bearer token by following the AWS IAM Identity Center +[Rotate an access token](https://docs.aws.amazon.com/singlesignon/latest/userguide/rotate-token.html) +user guide. + +Be sure to capture the token value displayed at the end of the AWS token creation +flow, as AWS will not display it again. 
+
+## Rotating the token
+
+
+This functionality is only available in `tctl` and cannot yet be done in the Teleport UI.
+
+
+```shell
+$ tctl plugins rotate awsic ${TOKEN}
+```
+
+Once the SCIM token is updated, Teleport will check whether the actual token
+value has changed. If so, Teleport will automatically restart the Identity Center
+integration for it to pick up and use the new token.
+
+## Disabling token validation
+
+By default, `tctl` will validate that the supplied token can be used to successfully
+authenticate with the configured SCIM service. If, for example, the target SCIM
+service is unavailable and you want to force the token rotation, you can disable the
+token validation with the `--no-validate-token` flag.
+
+```shell
+$ tctl plugins rotate awsic --no-validate-token ${TOKEN}
+```
diff --git a/docs/pages/identity-governance/integrations/aws-iam-identity-center/migrating-identity-center-from-okta-to-teleport.mdx b/docs/pages/identity-governance/integrations/aws-iam-identity-center/migrating-identity-center-from-okta-to-teleport.mdx
new file mode 100644
index 0000000000000..8c286aa9475eb
--- /dev/null
+++ b/docs/pages/identity-governance/integrations/aws-iam-identity-center/migrating-identity-center-from-okta-to-teleport.mdx
@@ -0,0 +1,482 @@
+---
+title: Migrating AWS IAM Identity Center from Okta to Teleport
+sidebar_label: Okta to Teleport Migration
+description: Explains how to migrate an Identity Center instance from Okta control to Teleport control.
+tags:
+  - how-to
+  - identity-governance
+  - privileged-access
+  - aws
+enterprise: Identity Governance
+---
+
+This guide describes how to introduce Teleport into an existing Okta-managed AWS
+IAM Identity Center configuration.
It specifically describes two scenarios: one where
+Teleport shares control of your Identity Center managed resources in conjunction
+with Okta, and another where Teleport replaces Okta as the Identity Center SSO
+Provider and user provisioner.
+
+## How it works
+
+You can introduce Teleport into an existing Okta-managed AWS IAM Identity Center
+instance in one of two ways:
+- **Partial hand-off** (also called _Hybrid mode_): Keep Okta as the User provisioner
+and SSO provider for the AWS IAM Identity Center instance, but delegate account
+assignment and (some) Identity Center group provisioning to Teleport, or
+- **Full hand-off**: Fully transfer control of AWS IAM Identity Center from
+Okta to Teleport, including user provisioning, group provisioning, account assignment provisioning and SAML SSO.
+
+For simplicity, we recommend full hand-off.
+
+In both cases, the AWS IAM Identity Center SCIM bearer token must be shared with
+Teleport, as Teleport uses SCIM even in Partial hand-off mode to provision Identity
+Center groups and manage group membership.
+
+## Prerequisites
+
+- A Teleport cluster.
+- An AWS role configured as per the [AWS IAM Identity Center guide](./guide.mdx#step-17-configure-aws-integration)
+  for the integration to use.
+- AWS credentials configured for the Teleport Auth Service to pick up and use
+  (e.g. as environment variables, system profiles, etc.).
+- An Okta API token, with the following privileges:
+  - View users and their details.
+  - View groups and their details.
+  - View applications and their details.
+- The AWS IAM Identity Center ARN, AWS region, SCIM base address and SCIM bearer token.
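The `tctl` commands later in this guide refer to these prerequisites as shell variables. One way to stage them up front is shown below; every value is a placeholder that you would substitute with your own.

```shell
# Placeholder values -- substitute your own before running the later commands.
export OKTA_ORG_URL="https://example.okta.com"
export OKTA_API_TOKEN="example-okta-api-token"
export IDENTITY_CENTER_INSTANCE_ARN="arn:aws:sso:::instance/ssoins-0000000000000000"
export IDENTITY_CENTER_INSTANCE_REGION="us-east-1"
export IDENTITY_CENTER_SCIM_BASE_URL="https://scim.us-east-1.amazonaws.com/example/scim/v2"
export IDENTITY_CENTER_SCIM_BEARER_TOKEN="example-scim-bearer-token"
export AWS_IAM_ROLE_ARN="arn:aws:iam::000000000000:role/teleport-idc-integration"
```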
+
+## Partial hand-off (Hybrid setup)
+
+We will start by setting up Teleport, Okta and Identity Center in a Partial hand-off
+configuration, covering
+- integrating Okta and Teleport, ensuring that all Okta users provisioned
+  into Identity Center are also provisioned into Teleport,
+- configuring Teleport to manage groups and account assignments in Identity Center, and
+- setting up Teleport group import rules so that Identity Center groups managed
+  by Okta are not overridden by Teleport.
+
+Once this initial setup is complete you can either run in Partial hand-off mode indefinitely, or
+use the Partial hand-off configuration as a stepping stone towards a full hand-off integration.
+
+
+Given the nature of a Partial hand-off configuration, it is trivial for the external
+IdP (in this case, Okta) to re-activate an AWS user deactivated by Teleport, without
+Teleport being aware of it.
+
+To avoid confusion, when in a Partial hand-off configuration Teleport assumes that
+_all_ user-related provisioning will be managed by the external IdP, including
+deactivating and re-activating users as necessary. Teleport will **not** propagate
+Teleport user locks to Identity Center users, nor strip an Identity Center user's
+account assignments or group memberships in response to a Teleport lock.
+
+To prevent a deactivated user from accessing AWS resources, they must be forbidden
+from logging in to AWS via the external IdP SSO provider.
+
+
+
+  If you already have an Okta integration set up in Teleport, can ensure that users and groups
+  that exist in AWS IAM Identity Center are already synced to Teleport, and want Teleport to fully
+  control AWS IAM Identity Center, you can switch to Teleport right away, without the need for migration.
+
+  See [Getting Started with AWS IAM Identity Center integration](./guide.mdx).
+
+
+### Starting Point
+
+This is our starting configuration: Okta as the sole Identity Source for Identity Center.
+
+![Migration Starting Point](../../../../img/identity-center/ic-migrate-start.png)
+
+### Destination
+
+At this point, Okta remains in control of User provisioning and SSO, while
+Teleport controls Identity Center groups and account assignments.
+
+![Partial Hand-Off configuration](../../../../img/identity-center/ic-migrate-mid.png)
+
+Specifically:
+ - Okta provides SSO login for Identity Center.
+ - Okta manages a subset of the Identity Center group membership (selected via Push Groups in Okta).
+ - Okta controls user provisioning via SCIM.
+ - Teleport manages a second subset of Identity Center groups (selected via group filters during plugin installation).
+ - Teleport controls Identity Center group account assignments for the Identity Center groups under its control.
+ - Teleport controls direct Identity Center user account assignments for the Identity Center users under its control.
+
+### Step 1/6. Install Okta SAML connector
+
+Install the Okta SAML connector into Teleport as per the Teleport
+[Okta as an SSO provider](../../../zero-trust-access/sso/integrate-idp/okta.mdx) guide.
+
+
+  For the integration to function properly, both AWS IAM Identity Center and Teleport must view the
+  same user set, which can be achieved by using the same Okta SAML application for both Identity
+  Center and the Teleport SAML connector.
+
+  If this is not possible, the most flexible approach we've found is having an
+  Okta Group for those users that should have Identity Center access, and then assigning
+  both the Okta Identity Center App and the Okta SAML App used for the Teleport SAML
+  connector to that group. This ensures that the same user set is visible across
+  both applications.
+
+
+You will need the Teleport SAML Connector Name and Okta SAML App ID in the next
+step.
+
+### Step 2/6.
Install the Teleport Okta integration + +We will be using a very limited subset of the Teleport Okta integration in this +deployment, disabling all features except periodic user synchronization. This +configuration is not currently supported by the normal installation UI, so we +will have to use `tctl` to install it: + +```console +$ tctl plugins install okta \ + --org ${OKTA_ORG_URL} \ + --saml-connector ${TELEPORT_SAML_CONNECTOR_NAME} \ + --app-id ${OKTA_SAML_APP_ID} \ + --api-token ${OKTA_API_TOKEN} \ + --no-scim \ + --no-accesslist-sync \ + --no-appgroup-sync +``` + +This will install the Okta integration and start the user sync service with a +configuration that: +- Imports Okta users assigned to the Okta App `${OKTA_SAML_APP_ID}`, and keeps + them synced with the upstream Okta organization. +- Does *not* expose a SCIM service. +- Does *not* attempt to sync or manage any other resources from Okta. + +You can monitor the state of the Okta integration in the Teleport Integrations UI. + +### Step 3/6. Wait for user sync + +To make sure everything is working, wait until the first Okta to Teleport user +sync has occurred. You can verify this by either + - Refreshing the user page and finding your Okta users, or + - Checking the Okta integration status page. + +Once your Okta users are imported into Teleport, you can progress to the next +step. + +### Step 4/6. Install the Teleport AWS IAM Identity Center integration + +Again, we need to install the plugin using `tctl`. 
+
+```console
+$ tctl plugins install awsic \
+    --instance-arn ${IDENTITY_CENTER_INSTANCE_ARN} \
+    --instance-region ${IDENTITY_CENTER_INSTANCE_REGION} \
+    --use-system-credentials \
+    --assume-role-arn ${AWS_IAM_ROLE_ARN} \
+    --scim-url ${IDENTITY_CENTER_SCIM_BASE_URL} \
+    --scim-token ${IDENTITY_CENTER_SCIM_BEARER_TOKEN} \
+    --access-list-default-owner ${TELEPORT_ACCESS_LIST_DEFAULT_OWNER} \
+    --user-origin okta \
+    --account-name ${ACCOUNT_NAME_ALLOW_FILTER} \
+    --group-name ${GROUP_NAME_ALLOW_FILTER}
+```
+
+This will install the Teleport AWS IAM Identity Center integration with a
+Teleport configuration that:
+- Controls the AWS IAM Identity Center instance indicated by `--instance-arn`.
+- Uses the system AWS credentials to authenticate with AWS (from `--use-system-credentials`)
+  and assumes the IAM role indicated by `--assume-role-arn`.
+- Manages account assignments for all AWS accounts that match the `${ACCOUNT_NAME_ALLOW_FILTER}`.
+- Provisions all users imported from Okta into AWS IAM Identity Center (from the `--user-origin okta` flag).
+- Only imports groups matching `${GROUP_NAME_ALLOW_FILTER}` into Teleport as Access
+  Lists, with `${TELEPORT_ACCESS_LIST_DEFAULT_OWNER}` as the owner.
+
+
+Note that the `tctl` installer currently only supports installations using
+system-level AWS credentials with `--use-system-credentials`.
+
+Using system-level credentials is also the recommended way to provide AWS credentials
+when configuring the integration in a Teleport Enterprise self-hosted deployment.
+
+
+You can change the AWS account, group and user filters later by following the
+instructions in the [Extending the Integration](#extending-the-integration) section
+below.
+
+During the installation process, Teleport will import all of the Identity Center
+groups that match its allow list (or all of them, if no allow list is defined)
+and create matching Access Lists, preserving the group membership and account
+assignments.
+
+For more installation options, such as disabling the creation of Teleport Roles
+for each Account Assignment, see the [Advanced Options](./advanced-options.mdx) guide.
+
+
+Individual user account assignments will ***not*** be preserved during import. You
+will need to ensure these are preserved manually, or converted to group assignments
+prior to installation.
+
+
+#### Group import control
+
+The Group import allow list is controlled by the `--group-name` option. You can
+specify multiple filters and a Group will be imported if it matches _any_ of the
+supplied filters. Filters can be either literal names, globbed names or Go-compatible
+regular expressions. To treat a filter as a regular expression,
+enclose it in a leading `^` and trailing `$`.
+
+Example filters:
+ - `administrators`: The literal "administrators" group
+ - `site-*`: Any group with the prefix `site-`
+ - `^(?:[^a]|a[^w]|aw[^s]|aws[^\-]).*$`: Any group that does ***not*** have the prefix `aws-`
+
+Ensure that there is no overlap between the groups imported to Teleport and the
+groups you want Okta to maintain control over.
+
+
+Avoid creating an Access List with the same name as a Push Group managed by Okta.
+Teleport will attempt to adopt the group, and may change the group membership.
+Deleting the Teleport Access List and forcing a re-push from Okta should restore
+access.
+
+
+#### User provisioning control
+
+Your Teleport cluster may have a mix of local Teleport users (e.g. a local Admin
+user) and users imported from Okta. By default, Teleport will try to provision
+_all_ Teleport users into Identity Center. You can control which users are
+provisioned by the Identity Center integration with the `--user-origin` and
+`--user-filter` arguments. In the example above, the `--user-origin okta` flag
+restricts Teleport to provisioning only users that are synced from Okta,
+excluding all local Teleport users.
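Before installation, you can sanity-check a literal or prefix-style filter against your expected group names locally. This is a rough sketch: the integration itself uses Go's regexp syntax, which differs from POSIX `grep -E` for constructs like `(?:...)`, so only simple patterns translate directly.

```shell
# Example group names (made up) and a prefix filter equivalent to `site-*`.
printf '%s\n' administrators site-ops site-dev aws-admins |
  grep -E '^site-'
# prints:
# site-ops
# site-dev
```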
+
+#### AWS account import control
+
+By default, Teleport will take control of account assignments for all AWS Accounts
+managed by Identity Center. You can create an allow-list of AWS Accounts to
+import with the `--account-name` and `--account-id` install options.
+
+The `--account-name` filters work like the `--group-name` filters above. The
+`--account-id` filters specify a literal AWS Account ID.
+
+Teleport will not create or delete account assignments on AWS accounts outside
+of its allow-list.
+
+### Step 5/6. Migrate AWS account
+
+We have now configured the Teleport Identity Center integration in partial hand-off
+mode, and are ready to migrate account assignments from Okta-managed groups into new
+Teleport-managed Access Lists. To migrate groups, create a new Access List in
+Teleport (taking care _not_ to use the same name as the existing Okta-managed Group)
+and create the appropriate memberships and account assignments.
+
+Account assignments can be created on an Access List by assigning it the Account
+Assignment roles created by the Identity Center integration, assigning it a
+custom Teleport role that specifies a particular combination of access, or a
+combination of each.
+
+For more information, see the [Identity Center integration guide](./guide.mdx).
+
+### Step 6/6. Retire Okta group provisioning
+
+Once you are satisfied that an AWS IAM Identity Center group has been migrated to
+Teleport control, you can remove the corresponding push Group from the Okta
+Identity Center integration.
+
+## Full hand-off
+
+Teleport, Okta and Identity Center are now running in partial hand-off mode. It is
+perfectly fine to continue running Teleport this way indefinitely, but you can also
+use the Partial hand-off configuration as a stepping stone towards a full
+hand-off integration.
+
+Moving from Partial- to Full hand-off fully transfers control of AWS users and
+groups to Teleport, including SSO login.
+
+![Full hand-off configuration](../../../../img/identity-center/ic-migrate-end.png)
+
+
+Teleport generally has much less information about users than Okta, so when Teleport
+takes over user provisioning from Okta, it may strip user attributes from your
+Identity Center users.
+
+Please ensure that your AWS security policies do not rely on any user attributes
+beyond the username and ID.
+
+
+### Step 1/4. Create a Teleport SAML IdP Service Provider
+
+To operate in Full hand-off mode, Teleport needs to be the Identity Source for
+your AWS Identity Center instance by providing SAML authentication for your users.
+To do this, we need to create a Teleport SAML Service Provider configured for your
+Identity Center instance.
+
+#### Obtain the Identity Center SAML Metadata
+
+You can get the Identity Center SAML metadata file from AWS by following the AWS
+[How To Connect to an External Identity Provider](https://docs.aws.amazon.com/singlesignon/latest/userguide/how-to-connect-idp.html)
+guide. Once you have the metadata file, you will need to pause configuring AWS
+and switch back to Teleport.
+
+#### Create the Teleport SAML Service Provider
+
+Create a SAML Service Provider in Teleport following the [Using Teleport as a SAML
+Identity Provider](../idps/saml-guide.mdx) guide. Use the Identity Center SAML
+Metadata file from the previous step to provide the Identity Center metadata to
+Teleport.
+
+While creating the Teleport SAML service provider, download the Teleport IdP metadata
+when offered. You will need this to finish configuring the Identity Source in
+Identity Center.
+
+### Step 2/4. Configure Identity Center to use Teleport as its Identity Source
+
+Once you have created the Teleport SAML Service Provider, you can finish configuring
+the new Identity Center Identity Source.
Continue following the AWS [How To Connect to an External Identity Provider](https://docs.aws.amazon.com/singlesignon/latest/userguide/how-to-connect-idp.html)
+guide, uploading the Teleport IdP Metadata file when prompted.
+
+### Step 3/4. Switch Teleport to Full hand-off mode
+
+Teleport uses the absence of a SAML service provider name in the Identity Center
+configuration to trigger running in Partial hand-off mode. To switch Teleport to
+Full hand-off mode, we need to link the Identity Center integration and the
+SAML Service Provider created in the previous step.
+
+This can currently only be done by manually editing the integration configuration
+with `tctl`.
+
+
+Manually editing the Identity Center integration's plugin resource using `tctl` is a
+dangerous operation. Please ensure you take a backup of the plugin resource in
+order to roll back if necessary.
+
+A guided editing workflow is currently under development.
+
+
+The plugin resource is a YAML document describing the Identity Center integration
+configuration. Running `tctl edit` will fetch the resource and open it in your
+configured editor, for example:
+
+```console
+$ tctl edit plugins/aws-identity-center
+```
+
+To enable Full hand-off mode, set the `saml_idp_service_provider_name` field in
+the `aws_ic` block to the name of the SAML Service Provider created in [Step 1](#step-14-create-a-teleport-saml-idp-service-provider) above.
+
+```YAML
+kind: plugin
+version: v1
+metadata:
+  labels:
+    teleport.dev/hosted-plugin: "true"
+  name: aws-identity-center
+spec:
+  Settings:
+    aws_ic:
+      arn: arn:aws:sso:::instance/ssoins-722326ecc902a06a
+      access_list_default_owners:
+        - admin
+      credentials:
+        system:
+          assumeRoleArn: arn:aws:iam::637423191929:role/idc-integration
+      provisioning_spec:
+        base_url: https://scim.us-east-1.amazonaws.com/f3v9c6bc2ca-b104-4571-b669-f2eba522efe8/scim/v2
+      region: us-east-1
+
+      # add or edit this to match your SAML Service Provider name
+      saml_idp_service_provider_name: aws-identity-center-sso
+```
+
+Saving and closing the resource editor will update the Identity Center config and
+restart the Identity Center integration in Full hand-off mode.
+
+### Step 4/4. Retire Okta Identity Center integration
+
+Once Teleport has full control over provisioning into Identity Center, you can
+deactivate the Identity Center integration in Okta. Leaving the Okta Identity
+Center integration in place will cause Teleport and Okta to both try to control
+the user information.
+
+## Extending the integration
+
+You can add or remove filters in the various filter sets with `tctl edit`.
+
+
+This currently involves manually editing the Identity Center integration's plugin
+resource using `tctl`, which is a dangerous operation. Please ensure you take a
+backup of the plugin resource in order to roll back if necessary.
+
+A guided editing workflow is currently under development.
+
+
+```console
+$ tctl edit plugins/aws-identity-center
+```
+
+Once you save and quit the editor, `tctl` will replace the existing resource
+with your updated version. This will automatically restart the Identity Center
+Integration with the new filters.
+
+The example below shows a plugin resource with each of the filter sets populated:
+
+```YAML
+kind: plugin
+version: v1
+metadata:
+  labels:
+    teleport.dev/hosted-plugin: "true"
+  name: aws-identity-center
+spec:
+  Settings:
+    aws_ic:
+      arn: arn:aws:sso:::instance/ssoins-722326ecc902a06a
+      access_list_default_owners:
+        - admin
+      credentials:
+        system:
+          assumeRoleArn: arn:aws:iam::637423191929:role/idc-integration
+      provisioning_spec:
+        base_url: https://scim.us-east-1.amazonaws.com/f3v9c6bc2ca-b104-4571-b669-f2eba522efe8/scim/v2
+      region: us-east-1
+
+      # Account import filters. An absent or empty list of filters implies
+      # "manage all AWS accounts"
+      aws_accounts_filters:
+        - Include:
+            id: "058264527036"
+        - Include:
+            name_regex: ^Staging-.*$
+
+      # User provisioning filters. An absent or empty list of filters implies
+      # "provision all users to AWS"
+      user_sync_filters:
+        - labels:
+            teleport.dev/origin: okta
+
+      # Group import filters, as per the "Group import control" section above
+      group_sync_filters:
+        - Include:
+            name_regex: '^Group #00\d+$'
+```
+
+Saving the updated resource and quitting the editor will update the remote
+configuration and restart the Identity Center integration with the new settings.
+
+Any change to the `group_sync_filters` field will also trigger a new group import
+cycle, which may be time-consuming for Identity Center instances with many
+users and groups.
+
+## Deleting the AWS IAM Identity Center integration
+
+Deleting the integration automatically removes all Teleport resources it used to manage its state.
+
+The impact of plugin deletion and general considerations are explained in the [AWS IAM Identity Center guide](./guide.mdx#how-do-i-uninstall-the-integration).
+
+Delete the AWS IAM Identity Center plugin with `tctl`:
+
+```console
+$ tctl plugins delete aws-identity-center
+```
+
+## Next Steps
+
+- Take a deeper dive into fundamental Teleport concepts used in Identity Center
+  integration such as
+  [RBAC](../../../zero-trust-access/authentication/authentication.mdx),
+  [JIT Access Requests](../../access-requests/access-requests.mdx) and
+  [Access Lists](../../access-lists/access-lists.mdx).
+- See how to [use Teleport-managed account assignments with the AWS command-line tools](./using-aws-cli.mdx).
diff --git a/docs/pages/identity-governance/integrations/aws-iam-identity-center/using-aws-cli.mdx b/docs/pages/identity-governance/integrations/aws-iam-identity-center/using-aws-cli.mdx
new file mode 100644
index 0000000000000..1e256c1d0d15a
--- /dev/null
+++ b/docs/pages/identity-governance/integrations/aws-iam-identity-center/using-aws-cli.mdx
@@ -0,0 +1,213 @@
+---
+title: Using the AWS CLI tools with Teleport and AWS IAM Identity Center
+sidebar_label: AWS CLI Access
+description: Explains how to use the AWS IAM Identity Center integration to protect AWS CLI tools with Teleport.
+tags:
+  - how-to
+  - identity-governance
+  - privileged-access
+  - aws
+enterprise: Identity Governance
+---
+
+This guide will show you how to configure the `aws` command-line tool to use
+access granted via Teleport and AWS Identity Center.
+
+## How it works
+
+For a deep dive into how Teleport manages AWS Identity Center access, you can
+read the main [AWS IAM Identity Center guide](./aws-iam-identity-center.mdx). For the
+purposes of this guide, it's enough to understand that Teleport manages the creation
+and deletion of AWS Account Assignments based on a user's Account Assignment grants,
+whether from their standing Teleport roles, Access List memberships, or approved
+Access Requests.
+
+You can access these Teleport-managed Account and Permission Set assignments
+with the AWS CLI tools by using `sso` login and AWS profiles.
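+
+By the end of this guide, your `~/.aws/config` file will contain one
+`sso-session` block and one profile per AWS Account and Permission Set pair.
+The sketch below previews that end state; the session name, start URL, account
+ID, and role name are illustrative, and each part is built up step by step in
+the sections that follow:
+
+```ini
+# One SSO session, shared by all profiles
+[sso-session my-identity-center]
+sso_start_url = https://d-12234567890.awsapps.com/start
+sso_region = us-east-1
+sso_registration_scopes = sso:account:access
+
+# One profile per AWS Account and Permission Set pair
+[profile admin-on-staging]
+sso_session = my-identity-center
+sso_account_id = 058264527036
+sso_role_name = AdministratorAccess
+region = us-east-1
+```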
+
+## Prerequisites
+
+Before you begin, you will need:
+
+ * A Teleport-managed AWS Identity Center organization. See our [getting started guide](./guide.mdx)
+   for setting up an Identity Center integration.
+ * The AWS CLI tools, installed as per the AWS [installation guide](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
+ * The SSO Start URL and AWS Region for your Identity Center organization. Ask your AWS administrator for the appropriate values.
+
+## Step 1/2. Configure the AWS SSO Session
+
+This step configures the AWS CLI tools to use SSO for authentication. You can
+either configure this manually, or via the `aws configure sso-session` wizard.
+
+
+
+Invoke the AWS SSO configuration wizard by running the following command and
+answering the questions asked by the wizard.
+
+```shell
+$ aws configure sso-session
+```
+
+You will need to pick a name for the SSO session; this is just a local name for
+this particular SSO configuration. For this example, we are using `my-identity-center`.
+
+```shell
+$ aws configure sso-session
+SSO session name: my-identity-center
+SSO start URL [None]: https://d-12234567890.awsapps.com/start
+SSO region [None]: us-east-1
+SSO registration scopes [sso:account:access]:
+```
+
+
+You can configure SSO authentication manually by editing your `~/.aws/config` file. The above example would look like this:
+
+```ini
+[sso-session my-identity-center]
+sso_start_url = https://d-12234567890.awsapps.com/start
+sso_region = us-east-1
+sso_registration_scopes = sso:account:access
+```
+
+
+
+### Testing the SSO Session
+
+To test the SSO configuration, try logging into AWS via SSO.
+
+```shell
+$ aws sso login --sso-session my-identity-center
+```
+
+This will launch a browser-based flow that will log you into AWS via Teleport
+and ask you to confirm the AWS CLI tool's access to AWS using your account.
+
+## Step 2/2.
Creating AWS Profiles
+
+You will need to create a separate AWS CLI profile for each AWS Account and Permission Set
+you want to access. These profiles will reference the SSO Session created above,
+which tells the AWS tools to use SSO authentication when the profile is active.
+
+Again, you can do this either via the `aws configure` wizard, or by editing your `~/.aws/config` file directly.
+
+
+You can create as many profiles as you like, so repeat this step for as many AWS
+Account and Permission Set pairs as you need.
+
+
+
+
+To create a profile that uses a given SSO session, invoke the SSO configuration
+wizard with the following command, answering the questions it asks:
+
+```shell
+$ aws configure sso
+```
+
+The wizard asks several questions about the profile to create, but for our
+purposes, only selecting the AWS Account and Role are important.
+
+First, select the AWS account this profile will use. The wizard will offer you
+a list of available AWS accounts based on your current Account Assignments. If
+you are only permitted to use a single AWS account, the wizard automatically
+picks that and skips the question.
+
+```shell
+There are 2 AWS accounts available to you.
+> Staging, my.login@example.com (058264527036)
+  Production, my.login@example.com (637423191929)
+```
+
+Next, select the AWS Role to assume when this profile is active. Identity Center
+Permission Sets are provisioned onto AWS accounts as Roles, so select the role
+with the same name as the Permission Set you want to use.
+
+```shell
+There are 3 roles available to you.
+  SecurityAudit
+> AdministratorAccess
+  PowerUserAccess
+```
+
+Again, if only one option is available, the wizard will automatically select that
+and skip the question.
+
+After several generic AWS profile questions (e.g. default AWS region, default
+output format, etc.), the wizard will ask for the profile name.
For this example,
+given that the profile will use the `AdministratorAccess` role on the `Staging` account,
+we will call it `admin-on-staging`.
+
+
+While helpful, the `aws configure sso` wizard requires currently-assigned Accounts
+and Permission Sets to work with. You can pre-configure an AWS profile to use
+Account Assignments that you currently do not have access to by editing your
+`~/.aws/config` file and adding the profile directly.
+
+For example, the `admin-on-staging` profile we created above looks like this:
+
+```ini
+[profile admin-on-staging]
+sso_session = my-identity-center
+sso_account_id = 058264527036
+sso_role_name = AdministratorAccess
+region = us-east-1
+```
+
+
+
+### Testing the profile
+
+You can test the profile by running `aws sts get-caller-identity` and verifying
+the returned user ID and assumed Role. For example:
+
+```shell
+$ aws sts get-caller-identity --profile admin-on-staging
+{
+    "UserId": "AROA123456789AEXAMPLE:my.login@example.com",
+    "Account": "058264527036",
+    "Arn": "arn:aws:sts::058264527036:assumed-role/AWSReservedSSO_AdministratorAccess_69450ffeac834ef7/my.login@example.com"
+}
+```
+
+Once you have validated that the profile is configured correctly, you can use the
+`--profile` argument with any `aws` subcommand to select it and use the corresponding
+Identity Center Account Assignment in that operation.
+
+
+You can also use this profile with other tools that support the standard AWS client
+environment variables. Select the profile by setting the `AWS_PROFILE` environment
+variable. For example:
+
+
+```shell
+$ AWS_PROFILE=admin-on-staging ./some-aws-tool
+```
+
+
+## Troubleshooting
+
+### "Invalid Callback" error
+
+If AWS presents you with an "Invalid Callback URL" error message, the most likely
+problem is an incorrect AWS region in your `sso-session` configuration.
+
+### "Error loading SSO Token" error
+
+The AWS cache directory has probably been deleted.
Log in again with `aws sso login --sso-session ${SSO_SESSION_NAME}`,
+where `${SSO_SESSION_NAME}` is the name of your configured SSO session.
+
+## Next Steps
+
+- Learn how to request [Just-in-Time access to an Account Assignment](./guide.mdx#just-in-time-access-with-resource-access-requests).
+- Take a deeper dive into fundamental Teleport concepts used in Identity Center
+  integration such as [RBAC](../../../zero-trust-access/authentication/authentication.mdx),
+  [JIT Access Requests](../../access-requests/access-requests.mdx), and
+  [Access Lists](../../access-lists/access-lists.mdx).
+- Learn how Teleport uses RBAC, JIT Access Requests, and Access Lists to manage AWS
+  Identity Center Account Assignments in the [AWS IAM Identity Center guide](./aws-iam-identity-center.mdx).
+
+## Further reading
+
+For a broader introduction to using the AWS CLI with IAM Identity Center, see the
+AWS [Configuring IAM Identity Center authentication with the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sso.html#cli-configure-sso-configure)
+guide.
diff --git a/docs/pages/identity-governance/integrations/entra-id/advanced-options.mdx b/docs/pages/identity-governance/integrations/entra-id/advanced-options.mdx
new file mode 100644
index 0000000000000..7b1881715d835
--- /dev/null
+++ b/docs/pages/identity-governance/integrations/entra-id/advanced-options.mdx
@@ -0,0 +1,86 @@
+---
+title: Advanced Entra ID Integration Options
+sidebar_label: Advanced Options
+description: Advanced Teleport Entra ID integration configuration options.
+sidebar_position: 4
+tags:
+  - identity-governance
+  - azure
+  - privileged-access
+enterprise: Identity Governance
+---
+
+This page lists advanced configuration options related to the
+Teleport Entra ID integration.
+
+## Group filters
+
+By default, all the groups that exist in the Microsoft Entra ID directory get
+imported to Teleport.
+
+This import behavior can be controlled using group filters, which can
+include or exclude groups based on their group object ID or group
+display name.
+
+Group filters can currently only be configured using `tctl`; the ability to
+configure them using the Teleport Web UI is in the works.
+
+
+### Group filter precedence
+
+- If no filters are configured, all groups are imported (default behavior).
+- If an include filter is defined, only matching groups are imported.
+- If a group matches both an include filter and an exclude filter, the exclude
+filter takes precedence.
+
+
+### Configure group filters during installation
+
+For example, to configure group filters during installation:
+```code
+$ tctl plugins install entraid \
+    --name entra-id-default \
+    --auth-connector-name entra-id \
+    --default-owner=admin \
+    --no-access-graph \
+    --use-system-credentials \
+    --manual-setup \
+    --group-id 25f9c527-2314-414c-a75d-ef7efabcc99b \
+    --group-name "admin*" \
+    --exclude-group-id 080b50c3-1c98-4d8e-a54e-20143dbd4f99 \
+    --exclude-group-name "fin*"
+```
+- `--group-id`: Include groups matching the specified group ID.
+  Multiple flags allowed.
+- `--group-name`: Include groups matching the specified group name regex.
+  Multiple flags allowed.
+- `--exclude-group-id`: Exclude groups matching the specified group ID.
+  Multiple flags allowed.
+- `--exclude-group-name`: Exclude groups matching the specified group name regex.
+  Multiple flags allowed.
+
+### Updating group filters
+
+Group filters can be updated using the `group_filters` field, which is available in
+the `sync_settings` section of the Teleport Entra ID plugin resource spec.
+
+Reference configuration spec:
+```yaml
+kind: plugin
+metadata:
+  name: entra-id
+spec:
+  Settings:
+    entra_id:
+      sync_settings:
+        ...
# other settings omitted for brevity
+        group_filters:
+          - id: 080b50c3-1c98-4d8e-a54e-20143dbd4f99
+          - id: 45f9c527-2314-414c-a75d-ef7efabcc99b
+          - id: 35f9c527-2314-414c-a75d-ef7efabcc99b
+          - nameRegex: 'admin*'
+          - excludeId: 080b50c3-1c98-4d8e-a54e-20143dbd4f99
+          - excludeNameRegex: 'finance*'
+version: v1
+```
+
+The plugin spec can be edited using the `tctl edit plugins/entra-id` command.
diff --git a/docs/pages/identity-governance/integrations/entra-id/configure-access.mdx b/docs/pages/identity-governance/integrations/entra-id/configure-access.mdx
new file mode 100644
index 0000000000000..e0b488d85eb63
--- /dev/null
+++ b/docs/pages/identity-governance/integrations/entra-id/configure-access.mdx
@@ -0,0 +1,171 @@
+---
+title: Configure Teleport Access for Entra ID Users
+sidebar_label: Group Import
+description: Learn how to map Entra ID groups to Teleport roles with Nested Access Lists.
+sidebar_position: 3
+tags:
+  - identity-governance
+  - azure
+  - privileged-access
+enterprise: Identity Governance
+---
+
+This guide shows how to set up access for Entra ID-imported users based on their Entra
+group memberships using Teleport's Nested Access Lists.
+
+## How it works
+
+In a Nested Access List setup, the child Access List inherits roles and traits that
+are granted by the parent Access List.
+
+By utilizing this feature, we can add an Entra ID-imported Access List as a member of
+another Access List to grant Teleport roles to its members.
+
+For demonstration, this guide uses Grafana as a reference application for which we
+want to configure access for Entra ID users. This application is enrolled
+in Teleport with a resource label `env: monitor`.
+![Grafana app enrolled in Teleport](../../../../img/igs/entraid/entra-teleport-grafana-app.png)
+
+We will have two user groups created in Entra ID: `ad-app-admin` and
+`ad-app-support`. We want members of these groups to have permanent
+access and the ability to request access to Grafana, respectively.
You
+may also use existing Entra ID groups instead.
+
+We will then create two roles in Teleport. One will allow access to the Grafana
+application, while the other will allow requesting access to the role that
+grants access to the Grafana application.
+
+These roles will then be assigned to the Access Lists to grant roles and traits
+to the Entra ID-imported groups.
+
+![Entra ID to Teleport role mapping](../../../../img/igs/entraid/entra-teleport-role-mapping.png)
+
+## Prerequisites
+
+- A Teleport user with the preset `editor` role or an equivalent role that allows
+reading and writing Auth Connectors, plugins, roles, and Access Lists.
+- Permission to create groups in Entra ID.
+- [Entra ID integration](getting-started.mdx) configured in your Teleport cluster.
+- For demonstration, this guide references a Grafana application. You may use any other
+[resource](../../../enroll-resources/enroll-resources.mdx) type to get started.
+
+## Step 1/3. Create groups in Entra ID
+
+
+  You may skip this step if you are using existing Entra ID
+  groups to follow along with this guide.
+
+
+In the Azure portal, select the "Groups" menu under "Azure services".
+
+From the "Groups" page, click the "New group" button to create a new user group
+named `ad-app-support`. You may add the desired users to this group.
+
+![Entra ID create group](../../../../img/igs/entraid/entra-create-group.png)
+
+Repeat the steps and create another user group named `ad-app-admin`.
+
+Every 5 minutes, Teleport imports groups from Entra ID and creates
+an Access List for each of the imported groups. Teleport also preserves the respective
+group members as Access List members.
+
+## Step 2/3. Create roles in Teleport
+
+First, create a role template that grants access to Grafana.
+
+In the Teleport Web UI, from the side navigation menu, select "Zero Trust Access > Roles".
+From the "Roles" UI, click the "Create New Role" button. Switch to the YAML editor.
+![Teleport role editor YAML mode](../../../../img/igs/entraid/entra-teleport-role-yaml-editor.png)
+
+Copy the role spec shown below and paste it into the role editor to create a new role.
+```yaml
+kind: role
+version: v8
+metadata:
+  name: app-monitor
+spec:
+  allow:
+    app_labels:
+      env: '{{external.apps}}'
+```
+
+The role is configured with an allow `app_labels` rule that matches the
+label key `env` and a label value of `'{{external.apps}}'`, which is derived from the
+user's `apps` trait. As long as the label value configured on the application resource
+matches the value configured in the user's `apps` trait, this role grants
+access to that application.
+
+Using a trait template to define the label makes this role scalable, as user traits are dynamically
+configurable. You can learn more about traits and role templates [in this guide](../../../zero-trust-access/rbac-get-started/role-templates.mdx).
+
+When a user authenticates with Entra ID, the SAML attributes (or claims, if you are using OIDC)
+that are available in the user's SSO response are preserved as user traits. Additionally, you
+can also grant traits to users using Access Lists. As you will see in the next step below,
+this guide uses an Access List to grant the trait to the user.
+
+
+Repeat the role creation step in the UI to create another role that allows
+requesting access to the `app-monitor` role.
+
+We name this role `support-team`.
+```yaml
+kind: role
+version: v8
+metadata:
+  name: support-team
+spec:
+  allow:
+    request:
+      roles:
+        - app-monitor
+```
+
+
+  In the example role `app-monitor`, the allow `app_labels`
+  rule we defined applies to application resources. You may need to reference
+  a different [resource label](../../../reference/access-controls/roles.mdx#example-role-specification)
+  rule if you are following along with this guide with a different kind of resource.
+
+
+## Step 3/3.
Create a Nested Access List
+
+Assuming Teleport has already imported the new groups we created in Entra ID,
+we will now create new Access Lists for short-term (Just-In-Time)
+and long-term access management. The Entra ID-imported groups will then be added as members of
+these new Access Lists.
+
+In the Teleport Web UI, from the side-navigation menu, select “Identity Governance > Access List”.
+
+Next, click the “Create New Access List” button and enter the Access List details as follows.
+
+- **Title**: Short-term access
+- **Deadline for First Review**: Select a future date.
+- **Member Role Grants**: `support-team`
+- **Member Trait Grants**: `apps: monitor`
+- **Owners**: Add yourself or any appropriate users as owners.
+- **Members**: `ad-app-support`. This is the Access List created for the Entra ID group of the same name.
+
+![Teleport short term access list](../../../../img/igs/entraid/entra-teleport-acl-short-term.png)
+
+This Access List grants the `support-team` role and the trait `apps: monitor` to its members,
+effectively allowing them to request access to the `app-monitor` role.
+
+Create another Access List that grants its members long-term access to Grafana
+based on a directly assigned `app-monitor` role.
+- **Title**: Long-term access
+- **Deadline for First Review**: Select a future date.
+- **Member Role Grants**: `app-monitor`
+- **Member Trait Grants**: `apps: monitor`
+- **Owners**: Add yourself or any appropriate users as owners.
+- **Members**: `ad-app-admin`. This is the Access List created for the Entra ID group of the same name.
+
+![Teleport long term access list](../../../../img/igs/entraid/entra-teleport-acl-long-term.png)
+
+This Access List grants the `app-monitor` role and the trait `apps: monitor` to its members,
+effectively allowing them to access the application via the `app-monitor` role.
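+
+To verify the end-to-end setup, a member of the `ad-app-admin` group can log in
+through the Entra ID auth connector and inspect their granted roles. The proxy
+address below is a placeholder for your cluster, and the `--auth` value assumes
+the connector is named `entra-id` as in the installation examples:
+
+```code
+$ tsh login --proxy=teleport.example.com --auth=entra-id
+$ tsh status
+```
+
+Once the Access List grants have propagated, the `tsh status` output should list
+`app-monitor` among the user's roles.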
+
+## Next steps
+
+- Learn more about [Access List](../../access-lists/access-lists.mdx) and [Nested Access List](../../access-lists/nested-access-lists.mdx) management.
+- Learn more about [Role and Resource Access Requests](../../access-requests/access-requests.mdx).
+- Learn more about [role templates](../../../zero-trust-access/rbac-get-started/role-templates.mdx).
diff --git a/docs/pages/identity-governance/integrations/entra-id/entra-id.mdx b/docs/pages/identity-governance/integrations/entra-id/entra-id.mdx
new file mode 100644
index 0000000000000..9503e35106516
--- /dev/null
+++ b/docs/pages/identity-governance/integrations/entra-id/entra-id.mdx
@@ -0,0 +1,80 @@
+---
+title: Entra ID Integration
+sidebar_label: Microsoft Entra ID
+description: Describes how Entra ID integration works in Teleport.
+tags:
+  - teleport-as-idp
+  - identity-governance
+  - azure
+  - privileged-access
+enterprise: Identity Governance
+---
+
+The Entra ID integration enables the following features in Teleport:
+
+1. **Single Sign-On (SSO):** Configures Teleport authentication with Entra ID as an
+   identity provider.
+2. **User sync:** Periodic import of Entra ID users as Teleport users.
+3. **Group sync:** Periodic import of Entra ID groups as Teleport
+   Access Lists.
+4. **Integration with Teleport Identity Security (Optional):** Lets you analyze user access
+paths and policies from the Teleport Identity Security product. If enabled, Teleport imports
+enterprise applications as well.
+
+## How it works
+
+To configure SSO, Teleport uses an Entra ID enterprise application where
+Teleport must be set up as a SAML service provider.
+
+To import users and groups from Entra ID, Teleport must be configured with
+a credential to authenticate with the Microsoft Graph API.
+
+### Choosing the Microsoft Graph API authentication method
+
+Teleport supports two authentication mechanisms for the
+Microsoft Graph API: OIDC IdP and system credentials.
+
+#### Teleport as an OIDC Provider for Entra ID
+
+In this setup, Teleport is configured as an OpenID Connect (OIDC) identity provider
+for the Entra ID enterprise application.
+The Teleport OIDC IdP then generates a short-lived credential for the Microsoft Graph API
+client configured for Entra ID. Authorization is limited to the API permissions
+configured in the Entra ID enterprise application.
+
+Direct bidirectional connectivity between Teleport and Entra ID is necessary for
+Entra ID to validate the OIDC tokens issued by Teleport.
+
+For a Teleport cloud cluster, OIDC IdP-based authentication is the only supported
+authentication method.
+
+#### System credentials
+
+In this setup, Teleport relies on the Microsoft Graph API credentials available where the
+Teleport Auth Service is running. The setup typically involves configuring a managed identity
+for the Teleport Auth Service and assigning that managed identity the Microsoft Graph API
+permissions required by the Teleport Entra ID integration.
+
+This method is best suited for air-gapped Teleport clusters where the Teleport Proxy Service
+is not publicly accessible.
+
+### Choosing guided or manual Entra ID configuration method
+
+In the guided Entra ID configuration process, Teleport generates a configuration
+script, which configures your Entra ID tenant with the properties that are required
+for the Teleport Entra ID integration.
+
+If you want to have more control over the Entra ID configuration, a manual
+Entra ID configuration may be suitable for you. In this case, you update the Entra ID
+tenant with the properties that are required for the Teleport Entra ID integration.
+
+The Web UI only supports the guided Entra ID configuration with the Teleport as OIDC IdP
+authentication method. `tctl` supports both the guided and manual Entra ID
+configuration methods, for both the Teleport as OIDC IdP and system credentials based setups.
+
+
+## Guides
+
+- [Getting Started with the Entra ID Integration](getting-started.mdx)
+- [Configure the Entra ID Integration using Azure Portal](setup/manual-installation.mdx)
+- [Configure the Entra ID Integration using Terraform](setup/terraform.mdx)
diff --git a/docs/pages/identity-governance/integrations/entra-id/faq.mdx b/docs/pages/identity-governance/integrations/entra-id/faq.mdx
new file mode 100644
index 0000000000000..f65ffdac4a760
--- /dev/null
+++ b/docs/pages/identity-governance/integrations/entra-id/faq.mdx
@@ -0,0 +1,52 @@
+---
+title: Entra ID Integration FAQ
+description: Frequently asked questions on the Teleport Entra ID integration.
+sidebar_label: FAQ
+sidebar_position: 5
+tags:
+  - identity-governance
+  - azure
+  - privileged-access
+enterprise: Identity Governance
+---
+
+This page provides answers to frequently asked questions about the Teleport Entra ID integration.
+
+## What resources are imported to Teleport?
+
+Teleport imports users, user groups, and their members from the Microsoft Entra ID
+directory.
+
+If the Teleport Identity Security integration is enabled, Teleport will import
+applications and policies as well.
+
+You can control which groups from Microsoft Entra ID get imported to Teleport
+by configuring [group filters](advanced-options.mdx#group-filters).
+
+## How does it work with nested Access Lists?
+
+If an Entra ID group is assigned as a member of another group, Teleport preserves this assignment
+as a nested Access List.
+
+However, note that Teleport does not support recursive groups, where group A is a member of group B
+and group B is also a member of group A.
+
+## What permissions does Teleport need to authenticate with the Microsoft Graph API?
+
+At minimum, Teleport needs read access to users, groups, and the main enterprise application
+for which the integration is configured.
+```code
+- Application.ReadWrite.OwnedBy
+- Group.Read.All
+- User.Read.All
+```
+
+If you enable the Identity Security integration, you will need a broader set of permissions.
+```code
+- Application.Read.All # instead of Application.ReadWrite.OwnedBy
+- Directory.Read.All # instead of User.Read.All and Group.Read.All
+- Policy.Read.All
+```
+
+By default, the guided configuration script sets up the broader-scoped permissions, which
+are required by the Identity Security product to perform policy and access path analysis.
diff --git a/docs/pages/identity-governance/integrations/entra-id/getting-started.mdx b/docs/pages/identity-governance/integrations/entra-id/getting-started.mdx
new file mode 100644
index 0000000000000..29981482c88a2
--- /dev/null
+++ b/docs/pages/identity-governance/integrations/entra-id/getting-started.mdx
@@ -0,0 +1,192 @@
+---
+title: Getting Started with the Entra ID Integration
+sidebar_label: Getting Started
+description: Describes how to set up the Teleport Entra ID integration in Teleport.
+tags:
+  - how-to
+  - identity-governance
+  - azure
+  - privileged-access
+enterprise: Identity Governance
+---
+
+
+This guide shows how to configure the Entra ID integration using the guided configuration setup.
+
+Teleport will generate a script that configures your Entra ID tenant with the
+properties required for the Teleport Entra ID integration.
+
+{/* lint ignore page-structure remark-lint */}
+
+## Prerequisites
+
+- Your user must have privileged administrator permissions in the Microsoft Entra ID tenant.
+- Choose a Microsoft Graph API [authentication method](entra-id.mdx#choosing-the-microsoft-graph-api-authentication-method).
+
+## Step 1/3: Generate configuration script
+
+
+
+In the Teleport Web UI, from the side-navigation, select “Add New > Integration”.
+
+Next, select the “Microsoft Entra ID” tile.
+
+In the Teleport Microsoft Entra ID configuration UI, you will notice a default
+integration name “entra-id” is already populated for you.
You will need to select
+the Teleport user(s) that will be assigned as the default owners of the Access Lists that
+are created for your Entra ID groups.
+
+![Entra ID Integration](../../../../img/igs/entraid/entra-default-owners.png)
+
+In the next step, you will be provided with an Entra ID configuration script.
+
+
+
+
+
+To begin the integration, run the `tctl plugins install entraid` command.
+
+```code
+$ tctl plugins install entraid \
+    --name entra-id-default \
+    --auth-connector-name entra-id \
+    --default-owner= \
+    --auth-server 
+```
+
+The `--name` flag specifies the resource name of the Entra ID plugin.
+The `--auth-connector-name` flag specifies the name of the auth connector this integration will create.
+The `--default-owner` flag specifies the default owners for the Access Lists that will be created
+in Teleport based on the groups imported from Entra ID.
+
+The command will generate a configuration script in the directory
+from which `tctl` is invoked.
+
+
+
+
+
+You will need to grant the Azure identity the permissions required for the Entra ID
+integration.
+
+In the Azure Portal, find the identities linked to your Teleport Auth Service,
+and copy the Principal ID of the identity you wish to update with the new permissions.
+
+After obtaining the Principal ID, open the [Azure Cloud Shell](https://portal.azure.com/#cloudshell/)
+in PowerShell mode and run the following script to assign the required permissions to ``.
+
+
Assign required permissions to Azure Identity
+
+```powershell
+# Connect to Microsoft Graph with the required scopes for directory and app role assignment permissions.
+Connect-MgGraph -Scopes 'Directory.ReadWrite.All', 'AppRoleAssignment.ReadWrite.All'
+
+# Retrieve the managed identity's service principal object using its unique principal ID (UUID).
+$managedIdentity = Get-MgServicePrincipal -ServicePrincipalId ''
+
+# Retrieve the Microsoft Graph service principal object.
+# This is a service principal object representing Microsoft Graph in Entra ID with a specific app ID.
+$graphSPN = Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'"
+
+# Define the permission scopes that we want to assign to the managed identity.
+# These are Microsoft Graph API permissions required by the managed identity.
+$permissions = @(
+    "Application.ReadWrite.OwnedBy" # Permission to manage applications owned by the identity
+    "Group.Read.All"                # Permission to read groups
+    "User.Read.All"                 # Permission to read users
+)
+
+# Filter and find the app roles in the Microsoft Graph service principal that match the defined permissions.
+# Only include roles where "AllowedMemberTypes" includes "Application" (suitable for managed identities).
+$appRoles = $graphSPN.AppRoles |
+    Where-Object Value -in $permissions |
+    Where-Object AllowedMemberTypes -contains "Application"
+
+# Iterate over each app role to assign it to the managed identity.
+foreach ($appRole in $appRoles) {
+    # Define the parameters for the role assignment, including the managed identity's principal ID,
+    # the Microsoft Graph service principal's resource ID, and the specific app role ID.
+ $bodyParam = @{ + PrincipalId = $managedIdentity.Id # The ID of the managed identity (service principal) + ResourceId = $graphSPN.Id # The ID of the Microsoft Graph service principal + AppRoleId = $appRole.Id # The ID of the app role being assigned + } + + # Create a new app role assignment for the managed identity, granting it the specified permissions. + New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $managedIdentity.Id -BodyParameter $bodyParam +} + +``` + +
+
+Your identity principal `` now has the necessary permissions to list Applications,
+Directories, and Policies.
+
+Now, to begin the integration, run the `tctl plugins install entraid` command.
+
+```code
+$ tctl plugins install entraid \
+  --name entra-id-default \
+  --auth-connector-name entra-id \
+  --default-owner= \
+  --auth-server \
+  --use-system-credentials
+```
+
+The `--name` flag specifies the resource name of the Entra ID plugin.
+The `--auth-connector-name` flag specifies the name of the auth connector this integration will create.
+The `--default-owner` flag specifies default owners for the Access Lists that will be created
+in Teleport based on the groups imported from Entra ID.
+The `--use-system-credentials` flag specifies that the plugin will use the system credentials configured
+for the Auth Service.
+
+The command will generate a configuration script in the current directory
+where `tctl` is invoked.
+
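+
+Before proceeding, you can optionally confirm that the app role assignments from the script above
+took effect. This is a sketch using the Microsoft Graph PowerShell module; `<your-principal-id>`
+is a placeholder for the same principal ID you assigned permissions to above:
+
+```powershell
+# List the app roles currently assigned to the managed identity's service principal.
+Get-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '<your-principal-id>'
+```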
+
+
+## Step 2/3: Configure Entra ID
+
+(!docs/pages/includes/identity-governance/azure-shell.mdx!)
+
+## Step 3/3: Finish plugin installation
+
+
+
+
+Copy the Entra ID tenant ID and enterprise application client ID from the script output
+and enter them in the Web UI to finish the integration.
+
+![Entra ID Integration](../../../../img/igs/entraid/entra-finish.png)
+
+
+
+
+
+Copy the Entra ID tenant ID and enterprise application client ID from the script output
+and provide them to `tctl` to finish the integration.
+
+
+
+
+
+Copy the Entra ID tenant ID and enterprise application client ID from the script output
+and provide them to `tctl` to finish the integration.
+
+
+
+
+The integration is now configured and the Teleport Entra ID service will start
+importing resources from Entra ID to Teleport.
+
+## Next steps
+
+- [Configure Access](configure-access.mdx) for Entra ID users.
+- Configure [group filters](advanced-options.mdx#group-filters).
+- Learn more about [Access List](../../access-lists/access-lists.mdx) management.
+- Take a deeper look into setting up the [Entra ID auth connector](../../../zero-trust-access/sso/integrate-idp/entra-id.mdx).
+- Learn how the [Identity Security integration with Entra ID](../../../identity-security/integrations/entra-id.mdx) works.
+- See [FAQs](faq.mdx) related to the Teleport Entra ID integration.
diff --git a/docs/pages/identity-governance/integrations/entra-id/setup/manual-installation.mdx b/docs/pages/identity-governance/integrations/entra-id/setup/manual-installation.mdx
new file mode 100644
index 0000000000000..f6b6cbfce28e9
--- /dev/null
+++ b/docs/pages/identity-governance/integrations/entra-id/setup/manual-installation.mdx
@@ -0,0 +1,121 @@
+---
+title: Configure the Entra ID Integration using Azure Portal
+sidebar_label: Azure Portal
+description: Describes how to manually set up Entra ID for the Teleport Entra ID integration.
+tags:
+ - how-to
+ - identity-governance
+ - azure
+ - privileged-access
+enterprise: Identity Governance
+---
+
+This guide shows the manual Entra ID configuration steps to set up the Teleport Entra ID integration.
+See [getting started with the Entra ID integration](../getting-started.mdx) for a guided Entra ID configuration.
+
+The setup is based on the [OIDC IdP authentication method](../entra-id.mdx#choosing-the-microsoft-graph-api-authentication-method).
+
+{/* lint ignore page-structure remark-lint */}
+
+## Prerequisites
+
+- Teleport Identity Governance enabled for your Teleport cluster.
+- Your user must have privileged administrator permissions in the Microsoft Entra ID tenant.
+
+## Step 1/5. Create enterprise application
+
+In the Azure Portal, under “Azure services”, select “Enterprise applications”.
+Click the `+ New Application` button, then click the `+ Create your own application` button.
+Enter a name for your application and create the application.
+
+![Entra ID Integration](../../../../../img/igs/entraid/entra-create-enterprise-app.png)
+
+
+## Step 2/5. Configure SSO
+
+Open the newly created enterprise application.
+
+Under the “Manage” menu, select “Single sign-on”. In this configuration UI, you will need to set up Teleport
+as a SAML service provider.
+
+Click the edit button to configure “Basic SAML Configuration”.
+Enter the SAML assertion endpoint as both the Entity ID and ACS URL values.
+- **Entity ID and ACS URL:** SAML ACS endpoint of your Teleport cluster,
+e.g., `https://example.teleport.sh/v1/webapi/saml/acs/entra-id`
+
+For “Attributes & Claims”, user attributes will already be available, but you will need to
+add a `groups` claim.
+
+![Entra ID Integration](../../../../../img/igs/entraid/entra-sso-config.png)
+
+
+## Step 3/5. Configure OIDC IdP
+
+Under “App registrations” in the Azure services menu, find and open your enterprise
+application created in step 1.
+
+Select “Manage > Certificates & secrets” and then select “Federated credentials”.
+
+Click the `+ Add credential` button.
+
+In the “Add a credential” UI, configure the credential with the following values:
+- **Federated credential scenario:** Other issuer
+
+Under “Connect your account”, configure the following values:
+- **Issuer:** `https://example.teleport.sh` (replace this value with your Teleport cluster proxy address)
+- **Type:** Explicit subject identifier
+- **Value:** teleport-azure
+
+Under “Credential details”, configure the following values:
+- **Name:** teleport-oidc
+- **Description:** Teleport OIDC Identity Provider
+
+![Entra ID Integration](../../../../../img/igs/entraid/entra-setup-oidc.png)
+
+## Step 4/5. Configure API permissions
+
+Under the same App registration UI for your enterprise application, select “Manage > API permissions”.
+
+You can add a new graph permission by clicking the `+ Add a permission` button and then selecting
+“Microsoft Graph > Application permissions”.
+
+The following permissions need to be added to the application:
+- `Application.ReadWrite.OwnedBy`
+- `Group.Read.All`
+- `User.Read.All`
+
+![Entra ID Integration](../../../../../img/igs/entraid/entra-api-permission.png)
+
+
+## Step 5/5. Install the Entra ID plugin
+
+Now run the `tctl plugins install entraid` command:
+
+```code
+$ tctl plugins install entraid \
+  --name entra-id-default \
+  --auth-connector-name entra-id \
+  --default-owner= \
+  --no-access-graph \
+  --manual-setup
+```
+
+The `--name` flag specifies the resource name of the Entra ID plugin.
+The `--auth-connector-name` flag specifies the name of the auth connector this integration will create.
+The `--default-owner` flag specifies default owners for the Access Lists that will be created
+in Teleport based on the groups imported from Entra ID.
+The `--manual-setup` flag specifies that the Entra ID configuration was performed manually.
+
+`tctl` will then prompt for the Entra ID tenant ID and the application ID of the enterprise
+application created in step 1.
+
+After you enter these values, the Entra ID plugin will be installed with the OIDC IdP-based authentication method.
+
+## Next steps
+
+- [Configure Access](../configure-access.mdx) for Entra ID users.
+- Configure [group filters](../advanced-options.mdx#group-filters).
+- Learn more about [Access List](../../../access-lists/access-lists.mdx) management.
+- Take a deeper look into setting up the [Entra ID auth connector](../../../../zero-trust-access/sso/integrate-idp/entra-id.mdx).
+- Learn how the [Identity Security integration with Entra ID](../../../../identity-security/integrations/entra-id.mdx) works.
+- See [FAQs](../faq.mdx) related to the Teleport Entra ID integration.
diff --git a/docs/pages/identity-governance/integrations/entra-id/setup/setup.mdx b/docs/pages/identity-governance/integrations/entra-id/setup/setup.mdx
new file mode 100644
index 0000000000000..e873e446f4971
--- /dev/null
+++ b/docs/pages/identity-governance/integrations/entra-id/setup/setup.mdx
@@ -0,0 +1,8 @@
+---
+title: Setting up the Microsoft Entra ID Integration
+sidebar_label: Setup
+description: Provides instructions for setting up the Microsoft Entra ID integration using various methods.
+enterprise: Identity Governance
+---
+
+
diff --git a/docs/pages/identity-governance/integrations/entra-id/setup/terraform.mdx b/docs/pages/identity-governance/integrations/entra-id/setup/terraform.mdx
new file mode 100644
index 0000000000000..c09a3ad54ba64
--- /dev/null
+++ b/docs/pages/identity-governance/integrations/entra-id/setup/terraform.mdx
@@ -0,0 +1,237 @@
+---
+title: Configure the Entra ID Integration using Terraform
+description: Describes how to set up the Teleport Entra ID integration using Terraform.
+sidebar_label: Terraform
+tags:
+ - teleport-as-idp
+ - infrastructure-as-code
+ - how-to
+ - identity-governance
+ - azure
+ - privileged-access
+enterprise: Identity Governance
+---
+
+This guide shows you how to integrate Teleport with Microsoft Entra ID using Terraform and `tctl`.
+You will configure the Entra ID side of the integration using Terraform and install the Entra ID
+plugin in Teleport using `tctl`.
+
+For other supported configuration methods, see the following guides:
+- [Getting Started with the Entra ID Integration](../getting-started.mdx)
+- [Configure the Entra ID Integration using Azure Portal](manual-installation.mdx)
+
+## How it works
+
+If you are new to the Teleport Entra ID integration, start by reading [how the
+integration works](../entra-id.mdx). The `entra-id-integration` Terraform module
+sets up the integration by deploying a Microsoft Entra ID enterprise application and
+configuring it as a SAML identity provider for your Teleport cluster in order to
+enable authentication to Teleport via Microsoft Entra ID.
+
+The module also assigns permissions to the Teleport Auth Service to fetch user
+and group data from Microsoft Entra ID in order to synchronize it with Teleport
+RBAC resources. Depending on your configuration, the Terraform module either
+does this by using system credentials (Microsoft Graph API permission IDs) or
+using Teleport as an OIDC IdP.
+
+## Prerequisites
+
+- Teleport Identity Governance enabled for your Teleport cluster.
+- Your user must have privileged administrator permissions in Microsoft Azure and the Entra ID tenant.
+
+## Step 1/3. Prepare Terraform
+
+Clone the Teleport Entra ID integration example module, which is available in the Teleport GitHub
+[repository](https://github.com/gravitational/teleport/tree/master/examples/terraform/entra-id-integration).
+
+The example provides a configurable Terraform module to set up an Entra ID tenant with the attributes
+required for the Teleport Entra ID integration. The module is based on the `azuread` [provider](https://registry.terraform.io/providers/hashicorp/azuread/latest/docs).
+
+The example expects an authenticated Azure CLI session available in your environment.
+The `azuread` provider expects tenant-level access, so the Azure CLI must be authenticated using
+the `--allow-no-subscriptions` flag.
+```code
+$ az login --allow-no-subscriptions
+```
+
+## Step 2/3. Configure and apply
+
+The example Terraform module provides Entra ID configuration options based on the
+Microsoft Graph API authentication method you wish to configure for the Teleport Entra ID integration.
+
+Teleport supports two types of Microsoft Graph API authentication methods:
+Teleport as an OIDC IdP and system credentials.
+
+Setting up Teleport as an OIDC IdP is the only supported method if you are using Teleport Cloud.
+If you have a self-hosted Teleport cluster, you must ensure that the Teleport Proxy Service is publicly accessible.
+
+Alternatively, the system credentials setup is best suited if you have a self-hosted Teleport cluster
+or the Teleport Proxy Service is not publicly accessible. You can learn more about the differences
+between these authentication methods on this [page](../entra-id.mdx#choosing-the-microsoft-graph-api-authentication-method).
+
+At a minimum, the Terraform example expects the following input variables:
+- `app_name`: Name of the enterprise application that will be created in Entra ID.
+- `proxy_service_address`: Teleport Proxy Service host, e.g., `example.teleport.sh`.
+- `certificate_expiry_date`: SAML assertion signing certificate expiry date,
+  e.g., `2026-05-01T01:02:03Z`. Note that an expired certificate will prevent users from
+  signing in to Teleport. If it expires, you will need to update this value in Terraform,
+  as well as in the SAML-based Teleport Auth Connector created for Entra ID.
+
+
+
+
+Create `tfvars` with your inputs.
+
+```bash
+cat > variables.auto.tfvars << EOF
+app_name = ""
+proxy_service_address = ""
+certificate_expiry_date = ""
+EOF
+```
+
+Applying this plan will perform the following actions:
+- Create an enterprise application.
+- Set up SAML SSO for Teleport.
+- Grant Microsoft Graph API permissions necessary for the Teleport Entra ID integration.
+- Set up Teleport as an OIDC-based federated credential provider.
+
+After you configure `tfvars`, apply the plan.
+```bash
+$ terraform plan
+
+$ terraform apply
+```
+
+
+
+
+You need additional input variables to set up the managed identity.
+- `use_system_credentials`: Configures the Terraform module to plan for resources required
+  for the system credentials setup. Value must be `true`.
+- `graph_permission_ids`: Permission IDs of the Microsoft Graph API permissions required for the integration:
+  - `Application.ReadWrite.OwnedBy`
+  - `Group.Read.All`
+  - `User.Read.All`
+- `managed_id`: Principal ID of the managed identity that is assigned to the Teleport Auth Service.
+
+The simplest way to retrieve the permission IDs is to run the PowerShell script below in your Azure Cloud Shell.
+
PowerShell script
+
+```powershell
+Connect-AzureAD
+# This is a service principal object representing Microsoft Graph in Entra ID with a specific app ID.
+$GraphServicePrincipal = Get-AzureADServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'"
+# These are Microsoft Graph API permissions required by the managed identity.
+$permissions = @(
+    "Application.ReadWrite.OwnedBy" # Permission to manage applications owned by the identity
+    "Group.Read.All"                # Permission to read groups in the directory
+    "User.Read.All"                 # Permission to read users in the directory
+)
+# Filter and find the app roles in the Microsoft Graph service principal that match the defined permissions.
+# Only include roles where "AllowedMemberTypes" includes "Application" (suitable for managed identities).
+$appRoles = $GraphServicePrincipal.AppRoles |
+    Where-Object Value -in $permissions |
+    Where-Object AllowedMemberTypes -contains "Application"
+# Print the ID of each of the three permissions.
+foreach ($appRole in $appRoles) {
+    "{0} : {1}" -f $appRole.Value, $appRole.Id
+}
+```
+
+
+Once you have the permission IDs and the principal ID of the managed identity
+ready, enter the following command to populate your `tfvars` file:
+
+```bash
+cat > variables.auto.tfvars << EOF
+app_name = ""
+proxy_service_address = ""
+certificate_expiry_date = ""
+use_system_credentials = true
+graph_permission_ids = []
+managed_id = ""
+EOF
+```
+
+Applying this plan will perform the following actions:
+- Create an enterprise application.
+- Set up SAML SSO for Teleport.
+- Grant Microsoft Graph API permissions to the managed identity. Permissions
+  are based on the permission IDs that you configured above.
+
+After you configure `tfvars`, apply the plan.
+```bash
+$ terraform plan
+
+$ terraform apply
+```
+
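+
+If you need to see the module's output values again later (for example, after closing your shell),
+you can re-display them with the Terraform CLI. This is an optional check and does not assume any
+particular output names:
+
+```bash
+$ terraform output
+```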
+
+
+Once the changes are applied, the Terraform module will output the application client ID and
+Entra ID tenant ID values. You will need these values in the next step.
+
+Entra ID is now configured with the attributes required for the Teleport Entra ID integration.
+
+## Step 3/3. Install the Entra ID plugin
+
+To complete the integration setup, install the Entra ID plugin in Teleport.
+The plugin can be installed with the `tctl plugins install entraid` command.
+
+
+
+```code
+$ tctl plugins install entraid \
+  --name entra-id-default \
+  --auth-connector-name entra-id \
+  --default-owner= \
+  --no-access-graph \
+  --manual-setup
+```
+
+
+
+```code
+$ tctl plugins install entraid \
+  --name entra-id-default \
+  --auth-connector-name entra-id \
+  --default-owner= \
+  --no-access-graph \
+  --use-system-credentials \
+  --manual-setup
+```
+
+
+
+- The `--name` flag specifies the resource name of the Entra ID plugin.
+- The `--auth-connector-name` flag specifies the name of the auth connector this integration will create.
+- The `--default-owner` flag specifies default owners for the Access Lists that will be created
+in Teleport based on the groups imported from Entra ID.
+- The `--no-access-graph` flag specifies that the Identity Security integration should be skipped.
+- The `--use-system-credentials` flag specifies that the system credentials configured using the managed
+identity should be used. Use this flag only if you wish to set up system-credentials-based authentication.
+- The `--manual-setup` flag specifies that the Entra ID configuration was performed manually.
+
+`tctl` will then prompt for the Entra ID tenant ID and the application client ID of the enterprise
+application created with Terraform in step 2.
+
+After you enter these values, the installation command will create a SAML Auth Connector in Teleport, along with the
+plugin service that imports users and groups from Entra ID to Teleport.
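+
+Optionally, you can verify that the connector was created. This is a quick sanity check, assuming
+the connector name `entra-id` passed via `--auth-connector-name` in the commands above:
+
+```code
+$ tctl get saml/entra-id
+```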
+ +## Next steps + +- Learn how to [configure access](../configure-access.mdx) for Entra ID users. +- Configure [group filters](../advanced-options.mdx#group-filters). +- Take a deeper look into setting up [Entra ID auth connector](../../../../zero-trust-access/sso/integrate-idp/entra-id.mdx). +- Learn how the [Identity Security integration with Entra ID](../../../../identity-security/integrations/entra-id.mdx) works. +- See [FAQs](../faq.mdx) related to the Teleport Entra ID integration. diff --git a/docs/pages/identity-governance/integrations/integrations.mdx b/docs/pages/identity-governance/integrations/integrations.mdx new file mode 100644 index 0000000000000..01e8c5c54aa80 --- /dev/null +++ b/docs/pages/identity-governance/integrations/integrations.mdx @@ -0,0 +1,20 @@ +--- +title: Identity Governance Integrations +sidebar_label: Integrations +description: Shows you how to sync Teleport RBAC resources with third-party tools and services. +tags: + - identity-governance + - privileged-access +enterprise: Identity Governance +--- + +You can configure Teleport to sync role-based access control (RBAC) configurations with +third-party software, such as AWS IAM Identity Center, Microsoft Entra ID, and +Okta. Teleport creates resources such as roles and Access Lists based on +resources it retrieves from the integration. This way, you can manage all access +controls in your infrastructure from a single platform. 
+
+Read the following guides for how to configure the Identity Governance
+integration for your third-party solution:
+
+
diff --git a/docs/pages/admin-guides/access-controls/okta/app-and-group-sync.mdx b/docs/pages/identity-governance/integrations/okta/app-and-group-sync.mdx
similarity index 92%
rename from docs/pages/admin-guides/access-controls/okta/app-and-group-sync.mdx
rename to docs/pages/identity-governance/integrations/okta/app-and-group-sync.mdx
index dbc70815b839c..8bd760d7a4136 100644
--- a/docs/pages/admin-guides/access-controls/okta/app-and-group-sync.mdx
+++ b/docs/pages/identity-governance/integrations/okta/app-and-group-sync.mdx
@@ -1,6 +1,12 @@
 ---
 title: Okta App and Group Sync
+sidebar_label: App and Group Sync
 description: Explains how to enable the Okta app and group sync integration, which imports Okta configurations into the Teleport RBAC system.
+tags:
+ - how-to
+ - identity-governance
+ - privileged-access
+enterprise: Identity Governance
 ---
 
 Okta has its own permissions system that doesn't directly map into Teleport. This
@@ -26,6 +32,13 @@ application:
 - An Access List representing membership to the group/application.
 - Members for the Access List.
 
+All synchronized Okta users are assigned a built-in `okta-requester` role, which allows them to request
+access to the synchronized resources. This role assignment can be disabled with the
+`--no-assign-default-roles` flag when creating the integration with `tctl`, or with
+`tctl edit plugins/okta` by setting `okta.sync_settings.disable_assign_default_roles: true`.
+Note that unless the connector was created manually, this role is also assigned by default in the
+auth connector role mapping and needs to be updated there for the change to take effect.
+
 It should be noted that the Access List sync waits until the Okta groups and Okta
 applications has finished syncing as Teleport resources, so it may not start
 synchronizing immediately on startup.
@@ -36,13 +49,13 @@ List will be removed from Teleport on the next sync. ## Prerequisites -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!) - Teleport Identity Governance enabled on your account. - An Okta authentication connector. - + Before following the guided app and group sync integration flow, you must have completed the following: - [Guided Okta single sign-on flow](./guided-sso.mdx). diff --git a/docs/pages/admin-guides/access-controls/okta/guided-sso.mdx b/docs/pages/identity-governance/integrations/okta/guided-sso.mdx similarity index 93% rename from docs/pages/admin-guides/access-controls/okta/guided-sso.mdx rename to docs/pages/identity-governance/integrations/okta/guided-sso.mdx index 77f3bd6878b90..07007975fd160 100644 --- a/docs/pages/admin-guides/access-controls/okta/guided-sso.mdx +++ b/docs/pages/identity-governance/integrations/okta/guided-sso.mdx @@ -1,8 +1,16 @@ --- title: Guided Okta SSO Integration +sidebar_label: Guided SSO Integration description: Explains how to enroll Okta in your Teleport cluster as an identity provider for single sign-on using the guided flow. +tags: + - how-to + - identity-governance + - privileged-access +enterprise: Identity Governance --- +{/* lint disable page-structure remark-lint */} + The Teleport Okta SSO integration allows Teleport users to authenticate using Okta as an identity provider. Teleport. The guided enrollment flow for the Okta SSO integration is part of the [guided Okta integration](./okta.mdx). This @@ -12,11 +20,11 @@ perform required actions within Okta. If you do not plan to enroll additional components of the guided Okta integration, you can set up only the Okta SSO integration - called an authentication connector - by following [Authentication With Okta as an SSO -Provider](../../../admin-guides/access-controls/sso/okta.mdx). +Provider](../../../zero-trust-access/sso/integrate-idp/okta.mdx). 
 ## Prerequisites
 
-(!docs/pages/includes/commercial-prereqs-tabs.mdx!)
+(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!)
 
 - Familiarity with the [Okta guided integration flow](./okta.mdx).
@@ -28,7 +36,7 @@ Provider](../../../admin-guides/access-controls/sso/okta.mdx).
   the integration is hardcoded as `okta`. If you have a connector named `okta`
   and you'd like to use a different one, you'll have to create your connector
   manually following [Authentication With Okta as an SSO Provider
-  ](../../../admin-guides/access-controls/sso/okta.mdx).
+  ](../../../zero-trust-access/sso/integrate-idp/okta.mdx).
 
 - (!docs/pages/includes/tctl.mdx!)
 
diff --git a/docs/pages/identity-governance/integrations/okta/okta.mdx b/docs/pages/identity-governance/integrations/okta/okta.mdx
new file mode 100644
index 0000000000000..14807a2d34ecd
--- /dev/null
+++ b/docs/pages/identity-governance/integrations/okta/okta.mdx
@@ -0,0 +1,135 @@
+---
+title: Enroll the Teleport Okta Integration
+sidebar_label: Okta
+description: Describes how to set up the Teleport Okta integration in order to grant Teleport users access to resources managed in Okta.
+sidebar_position: 4
+tags:
+ - how-to
+ - identity-governance
+ - privileged-access
+enterprise: Identity Governance
+---
+
+The Teleport Okta integration can import and grant access to resources from an
+Okta organization, such as user profiles, groups, and applications. Teleport can
+provision user accounts based on Okta users, Okta applications can be accessed
+through Teleport's application access UI, and access to these applications along
+with user groups can be managed by Teleport's RBAC along with Access Requests.
+
+This guide shows you how to set up the Teleport Okta integration.
+
+## How it works
+
+Enrolling the Okta integration is a guided process and contains four components:
+
+1. **Single sign-on (SSO) integration:** Configures Teleport authentication with Okta as an
+   identity provider.
+2. **System for Cross-domain Identity Management (SCIM) integration:** Uses the
+   [SCIM standard](https://developer.okta.com/docs/concepts/scim/) to
+   allow nearly instant provisioning of Okta users in Teleport. This is
+   available only if Teleport Identity Governance is enabled. The SCIM
+   integration is optional for enabling user, application, and group sync.
+3. **User sync:** Imports Okta users as Teleport users via a continuous
+   reconciliation loop. It is usually slower than SCIM, but more reliable,
+   and does not require a publicly accessible Teleport cluster. Combined with
+   SCIM, it provides both speed and reliability.
+4. **Application and group sync:** Imports Okta applications and groups as Teleport
+   Access Lists. This enables you to manage access to Okta applications and user
+   groups via Teleport RBAC, including Access Requests.
+
+The Okta SSO integration is required for all other components of the Okta
+integration. User sync is required for application and group sync.
+
+Teleport manages Okta application group assignments using a declarative state
+approach. If an assignment is created via Teleport and subsequently removed via
+the Okta UI, Teleport will re-evaluate and potentially overwrite Okta UI changes
+to align with the state defined by Teleport's RBAC configuration.
+
+## Get started
+
+To start the enrollment, visit the Teleport Web UI. From the left sidebar,
+navigate to **Add New** -> **Integration**. Select the **Okta** tile.
+
+![Enroll an integration](../../../../img/enterprise/plugins/enroll.png)
+
+This will bring you to the SSO integration configuration screen. Read [Guided
+Okta SSO Integration](./guided-sso.mdx) for supplementary instructions as you
+work through the guided flow.
+ +## Guides + +For supplemental information on the guided process, such as instructions on +making changes in Okta, review the documentation in this section: + + + +## Syncing System Logs and API Keys with Teleport Identity Security Activity Center + +(!docs/pages/includes/policy/identity-activity-center.mdx!) + +### How Teleport Identity Security's Okta Integration works + +Identity Activity Center automatically retrieves Okta System Logs through +minute-by-minute polling, then processes and standardizes this data +before storing it for long-term analysis and retention. + +Additionally, the Okta integration maps access paths for existing +Okta API keys, tracking which users own them and what permissions +they have within the organization. + +### How to enable Identity Security's Okta Integration + +Users with an existing Okta setup in Teleport can enable the Identity +Security Integration directly from the status page. +When configuring a new integration, the setup wizard provides a +dedicated Identity Security step with guided configuration instructions. + +## Managing integration components + +At any point in the guided flow, you can return to the integration status page +to manage any component of the integration. To open the integration status page +in the Teleport Web UI, navigate to **Zero Trust Access** -> **Integrations** and +click anywhere on the Okta integration row. + +![Okta integration #1](../../../../img/enterprise/plugins/okta/okta-status-page-1.png) + +## Deleting the Okta integration + +You can delete the Okta integration by taking the following steps: + +1. Clean up user accounts created by the Okta integration. To do so, un-assign + all Okta users from the Okta SAML Application Teleport uses as its identity + provider, and wait for the sync process and/or SCIM provisioning to delete + the corresponding Teleport accounts. Once the Teleport accounts have been + automatically deleted you can proceed to delete the integration. + +2. 
Navigate to **Zero Trust Access** -> **Integrations** and click anywhere on the Okta
+   integration row to visit the integration status page.
+
+3. Click **Delete Integration**.
+   ![Delete Okta integration](../../../../img/enterprise/plugins/okta/okta-delete-integration.png)
+
+4. Permanently delete the SAML connector created as part of the integration.
+   Navigate to the **Auth Connectors** page in the Teleport UI and select the
+   option to delete it. The SAML connector created during the enrollment process
+   is ***not*** deleted when the hosted Okta integration is deleted, and will
+   automatically be re-used if the Okta integration is re-enrolled.
+
+5. Clean up Access Lists created by the Okta integration. These are not deleted
+   when the Okta integration is deleted. To delete Okta-sourced Access Lists,
+   run the following command:
+
+   ```code
+   $ tctl plugins cleanup okta
+   ```
+
+If any Access Lists created by the integration are nested members or owners
+of Teleport-created Access Lists, cleanup will fail. Any nested Access Lists
+need to be unassigned before you can perform a cleanup.
+
+
+If the Okta integration is still active, removing Okta-sourced Access Lists
+could revoke Okta access from users in your organization. Exercise caution
+when cleaning up Access Lists.
+
+
+Once the integration is deleted, you can manually remove any remaining user accounts with `tctl`.
diff --git a/docs/pages/identity-governance/integrations/okta/reference.mdx b/docs/pages/identity-governance/integrations/okta/reference.mdx
new file mode 100644
index 0000000000000..f307d1138f0eb
--- /dev/null
+++ b/docs/pages/identity-governance/integrations/okta/reference.mdx
@@ -0,0 +1,116 @@
+---
+title: Okta Service Reference
+sidebar_label: Reference
+description: Configuration and CLI reference documentation for the Teleport Okta Service.
+sidebar_position: 5 +tags: + - reference + - zero-trust + - infrastructure-identity +enterprise: Identity Governance +--- + +This guide describes interfaces and options for configuring the Teleport Okta +Service, including Okta import rules, Okta assignments, and `tctl` commands. It +also includes troubleshooting instructions. + +## Okta Import Rule resources + +Full YAML spec of Okta import rule resources managed by `tctl` resource commands: + +(!docs/pages/includes/okta-import-rule.mdx!) + +You can create a new `okta_import_rule` resource by running the following commands, which +assume that you have created a YAML file called `okta-import-rule.yaml` with your configuration: + +```code +# Log in to your cluster with tsh so you can use tctl from your local machine. +# You can also run tctl on your Auth Service host without running "tsh login" +# first. +$ tsh login --proxy=teleport.example.com --user=myuser +# Create the resource +$ tctl create -f okta-import-rule.yaml +``` + +## Okta Assignment resources + +These objects are internally facing and are not intended to be modified by users. However, +you can query them for informational or debugging purposes. + +Full YAML spec of Okta assignment resources queried by `tctl` resource commands: + +```yaml +kind: okta_assignment +version: v1 +metadata: + name: test-assignment +spec: + # The user that the Okta assignment is granting access for. + user: test-user@test.user + # The list of targets to grant access to. + targets: + # An application target. + - type: application + id: "123456" + # A group target. + - type: group + id: "234567" + # The current status of the Okta assignment. + status: pending +``` + +## CLI + +This section shows CLI commands relevant for managing Okta Service behaviors. + +### tctl get okta_import_rules + +Lists available Okta import rules. + +```code +$ tctl get okta_import_rules +``` + +### tctl get okta_import_rules/NAME + +Gets an individual Okta import rule. 
+ +```code +$ tctl get okta_import_rules/my-import-rule +``` + +### tctl rm okta_import_rules/NAME + +Removes an individual Okta import rule. + +```code +$ tctl rm okta_import_rules/my-import-rule +``` + +### tctl get okta_assignments + +Lists available Okta assignments. + +```code +$ tctl get okta_assignments +``` + +### tctl get okta_assignments/NAME + +Gets an individual Okta assignment. + +```code +$ tctl get okta_assignments/my-assignment +``` + +## Troubleshooting + +### No Okta groups or applications seen in the Teleport UI + +If the Teleport applications UI isn't displaying any Okta applications, ensure +that the Okta API token and endpoint are correct in the Okta service. + +If they are, double check the user permissions and ensure that the user has +appropriate resource and label level access to the groups and applications. You +may need to tweak the `app_labels` and `group_labels` sections of a role in order +to see these resources. diff --git a/docs/pages/admin-guides/access-controls/okta/scim-integration.mdx b/docs/pages/identity-governance/integrations/okta/scim-integration.mdx similarity index 96% rename from docs/pages/admin-guides/access-controls/okta/scim-integration.mdx rename to docs/pages/identity-governance/integrations/okta/scim-integration.mdx index aae45233102fa..a5a8cb492fc74 100644 --- a/docs/pages/admin-guides/access-controls/okta/scim-integration.mdx +++ b/docs/pages/identity-governance/integrations/okta/scim-integration.mdx @@ -1,6 +1,12 @@ --- title: Okta SCIM Integration +sidebar_label: SCIM Integration description: Explains how to use the guided integration enrollment flow to enable the Okta SCIM integration, which allows Teleport to immediately reflect changes in Okta. 
+tags: + - how-to + - identity-governance + - privileged-access +enterprise: Identity Governance --- The SCIM (System for Cross-domain Identity Management) integration enables automated user management, @@ -43,13 +49,13 @@ Okta does not send SCIM updates to Teleport when a user is suspended. Even thoug ## Prerequisites -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!) - Teleport Identity Governance enabled on your account. - An Okta authentication connector. - + Before following the guided SCIM integration flow, you must have completed the [guided Okta single sign-on flow](./guided-sso.mdx). diff --git a/docs/pages/admin-guides/access-controls/okta/user-sync.mdx b/docs/pages/identity-governance/integrations/okta/user-sync.mdx similarity index 96% rename from docs/pages/admin-guides/access-controls/okta/user-sync.mdx rename to docs/pages/identity-governance/integrations/okta/user-sync.mdx index c68fdd55f0ae2..93b564fa5be61 100644 --- a/docs/pages/admin-guides/access-controls/okta/user-sync.mdx +++ b/docs/pages/identity-governance/integrations/okta/user-sync.mdx @@ -1,8 +1,16 @@ --- title: Okta User Sync +sidebar_label: User Sync description: Explains how to set up Okta user sync with the guided integration flow. +tags: + - how-to + - identity-governance + - privileged-access +enterprise: Identity Governance --- +{/* lint disable page-structure remark-lint */} + Okta user sync imports Okta users as Teleport users via a continuous reconciliation loop. User sync works in tandem with the Teleport [SCIM integration](./scim-integration.mdx#how-it-works). @@ -19,11 +27,11 @@ integration enrollment flow. ## Prerequisites -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise (v17.3.0 or higher)"!) - An Okta authentication connector. 
- + Before following the guided user sync integration flow, you must have completed the [guided Okta single sign-on flow](./guided-sso.mdx). @@ -40,7 +48,7 @@ group assignments in Okta and can make changes within Okta based on your Teleport RBAC configuration. To limit the scope of the integration, ensure that: - In the Teleport roles you have assigned to users, no role contains an - [app_labels](../../../enroll-resources/application-access/controls.mdx#configuring-application-labels-in-roles) field + [app_labels](../../../enroll-resources/application-access/configuration/controls.mdx) field with a wildcard value. Since Teleport uses this field to govern access to Okta applications, wildcard values will grant Teleport users access to all Okta applications. diff --git a/docs/pages/identity-governance/integrations/scim/sailpoint.mdx b/docs/pages/identity-governance/integrations/scim/sailpoint.mdx new file mode 100644 index 0000000000000..183c020c0ddff --- /dev/null +++ b/docs/pages/identity-governance/integrations/scim/sailpoint.mdx @@ -0,0 +1,79 @@ +--- +title: SCIM SailPoint Integration +sidebar_label: SailPoint +description: How to Configure SCIM Connector in SailPoint to manage Access List membership +tags: +- how-to +- identity-governance +- privileged-access +enterprise: Identity Governance +--- + + +SailPoint provides a SCIM identity management connector that allows you to manage Teleport Access List membership +through SailPoint IdentityNow or SailPoint IdentityIQ. + +{/* lint ignore page-structure remark-lint */} + +## Prerequisites +- Teleport SCIM plugin setup: [SCIM Plugin Installation](./scim.mdx) + +## Step 1/2: Configure a SCIM 2.0 Teleport connector in SailPoint + +To integrate Teleport with SailPoint using SCIM, you need to configure a SCIM connector in SailPoint IdentityNow or SailPoint IdentityIQ. +The exact configuration steps may vary slightly depending on your version of SailPoint, but the general process is as follows. 
+
+### Configure SCIM in SailPoint
+
+Create a new SCIM connector in SailPoint under **Applications > Application Definition > Add New Application**.
+Select **SCIM 2.0** as the application type and provide the required configuration details:
+
+![SailPoint Connector Details](../../../../img/sailpoint/sailpoint-1.png)
+
+Navigate to the Teleport SCIM Integration page and copy the OAuth 2.0 details and the SCIM base URL.
+
+ If the Teleport SCIM integration has not been set up, follow the [SCIM Plugin Installation](./scim.mdx) guide before proceeding.
+
+
+In the SailPoint SCIM application configuration view, populate the configuration settings using the values obtained from the Teleport SCIM integration:
+* SCIM Base URL: Paste the SCIM Base URL copied from the Teleport SCIM Integration page
+* Authentication Type: Select "OAuth 2.0"
+* Token URL: Paste the Token URL copied from the Teleport SCIM Integration page
+* Grant Type: Select "Client Credentials"
+* Client ID: The Client ID copied from the Teleport SCIM Integration page
+
+Click **Test Connection** to verify that the connection is successful:
+
+![SailPoint SCIM Oauth](../../../../img/sailpoint/sailspoint-scim-config.png)
+
+### Configure SCIM schema discovery
+
+Under **Configuration -> Schema**, click **Discover Schema Attributes** on both the **Accounts** and **Groups** tabs to retrieve the schema attributes:
+![Account Schema Discovery](../../../../img/sailpoint/sailpoint-3.png)
+![Group Schema Discovery](../../../../img/sailpoint/sailpoint-4.png)
+Go to the **Provisioning Policy** section, and create a **Create Policy** that maps the `userName` SCIM attribute to the user's email address:
+![User Provisioning settings](../../../../img/sailpoint/sailpoint-5.png)
+Save all changes.
+
+#### Configure SCIM group aggregation in SailPoint
+
+
+SailPoint group aggregation enables the retrieval of SCIM-type Access Lists into SailPoint as Application Group Entitlements.
+This allows you to import Teleport Access Lists into SailPoint and manage membership directly through SailPoint, with the changes reflected in Teleport Access Lists.
+
+Navigate to **Setup > Tasks > New Task > Group Aggregation** in SailPoint.
+![Select Aggregation](../../../../img/sailpoint/sailpoint-6.png)
+Select the **Teleport SCIM Connector**, then click **Save and Execute** to run the aggregation task.
+![Configure Aggregation](../../../../img/sailpoint/sailpoint-7.png)
+If the aggregation completes successfully,
+you should see the imported Access Lists (`type: "scim"`) from Teleport in SailPoint under **Applications > Entitlement Catalog**:
+
+![Aggregation Result](../../../../img/sailpoint/sailpoint-8.png)
+
+## Step 2/2: Submit Access Requests to SailPoint Group Entitlement (Optional)
+
+Go to **Manage > Manage User Access > Manage User Access** in SailPoint.
+Submit an Access Request for a mapped Access List (as represented by a group
+entitlement in SailPoint).
+![Access request](../../../../img/sailpoint/sailpoint-9.png)
+Once the request is approved, the user will be added to the appropriate Access List in Teleport.
diff --git a/docs/pages/identity-governance/integrations/scim/scim.mdx b/docs/pages/identity-governance/integrations/scim/scim.mdx
new file mode 100644
index 0000000000000..eecd24856712f
--- /dev/null
+++ b/docs/pages/identity-governance/integrations/scim/scim.mdx
@@ -0,0 +1,117 @@
+---
+title: SCIM Integration
+sidebar_label: SCIM
+description: How to manage Access List membership using SCIM integration
+tags:
+- how-to
+- identity-governance
+- privileged-access
+enterprise: Identity Governance
+---
+
+The SCIM integration between **SCIM providers** and **Teleport** enables automated
+synchronization of SCIM group memberships and Teleport Access List memberships.
+This integration supports centralized identity governance in an
+external identity management system (such as SailPoint), while Teleport enforces
+fine-grained access controls defined by Access List membership grants.
+
+User permissions in Teleport are defined through Access Lists. While role definitions
+live in Teleport, group membership is dynamically managed by the SCIM provider via SCIM group
+membership. This ensures users have up-to-date access aligned with organizational
+policies.
+
+## How it works
+
+The SCIM integration uses a 1:1 mapping between SCIM groups and Teleport
+Access Lists:
+
+- Each SCIM group `displayName` must match the `spec.title` of a Teleport
+Access List.
+- SCIM-type Access Lists must be created in advance in Teleport. In this guide, we create them using Terraform.
+- Only Access Lists of type `scim` can be managed by SCIM providers.
+- Role assignments are handled in Teleport, while group membership is delegated to the external identity management system (such as SailPoint).
+
+## Prerequisites
+- Teleport Enterprise v17.6.1, v18.0.3 or higher.
+- Teleport Terraform Provider v17.6.1, v18.0.3 or higher.
+- A running Teleport cluster with SSO enabled (e.g. an Okta SAML connector).
+- An identity management system (such as SailPoint) with SCIM support.
+- A SCIM provider with OAuth 2.0 Client Credentials grant type support.
+
+## Step 1/3: Create a SCIM-Managed Access List in Teleport
+
+Create a new Access List in Teleport using Terraform.
Be sure to set `type = "scim"`
+and match `spec.title` to the `displayName` of the SCIM group that will be provided to Teleport:
+
+
+```hcl
+resource "teleport_access_list" "acl-group-requester" {
+  header = {
+    version = "v1"
+    metadata = {
+      name = "scim-group-requester"
+    }
+  }
+  spec = {
+    title = "GroupRequester"
+    type = "scim"
+    grants = {
+      roles = ["requester"]
+      traits = []
+    }
+    owners = [
+      {
+        name = "alice"
+      }
+    ]
+    membership_requires = {
+      roles = []
+    }
+    ownership_requires = {
+      roles = []
+    }
+    audit = {
+      recurrence = {
+        frequency = 3
+        day_of_month = 15
+      }
+    }
+  }
+}
+```
+
+
+  The SCIM group name (`displayName`) in the SCIM provider must exactly match
+  `spec.title` in the Teleport Access List.
+
+
+
+## Step 2/3. Configure SCIM Integration
+Teleport provides a guided Web UI-based configuration flow for the SCIM integration.
+
+In the Teleport Web UI, go to **"Add new integration"** and select **SCIM**.
+
+![Plugin Install](../../../../img/sailpoint/plugin-install.png)
+
+
+Select the **SAML connector** to associate SCIM-provisioned users with SSO logins.
+
+- By default, SSO users in Teleport are ephemeral.
+- SCIM provisioning ensures users are persistently created and managed by the external identity management system via the SCIM protocol.
+
+![Connector Selection](../../../../img/sailpoint/plugin-connector.png)
+
+
+Click **Continue** to proceed to the **SCIM Credentials** screen.
+
+- Teleport uses OAuth 2.0 Bearer Tokens for SCIM authentication.
+- Copy the **Client ID**, **Client Secret**, and **Base URL**; you'll use them
+  when configuring your identity provider in the next step.
+
+## Step 3/3. Configure SCIM integration with your Identity Management SCIM provider
+
+SCIM configuration may differ depending on your IdP.
The integration has been officially tested with the following providers: + +- [SailPoint Teleport SCIM connector setup](./sailpoint.mdx) + +In case of other SCIM providers, please refer to their documentation for setting up a SCIM integration. diff --git a/docs/pages/admin-guides/access-controls/guides/locking.mdx b/docs/pages/identity-governance/locking.mdx similarity index 79% rename from docs/pages/admin-guides/access-controls/guides/locking.mdx rename to docs/pages/identity-governance/locking.mdx index c8232f8ee82e2..d29f4911fa907 100644 --- a/docs/pages/admin-guides/access-controls/guides/locking.mdx +++ b/docs/pages/identity-governance/locking.mdx @@ -1,6 +1,11 @@ --- title: Session and Identity Locking description: How to lock compromised users or agents +tags: + - how-to + - identity-governance + - resiliency +enterprise: Identity Governance --- System administrators can disable a compromised user or Teleport Agent—or @@ -14,15 +19,24 @@ matching the lock's target. A lock can target the following objects or attributes: - a Teleport user by the user's name -- a Teleport [RBAC](../../../reference/access-controls/roles.mdx) role by the role's name -- a Teleport [trusted device]( - ../device-trust/enforcing-device-trust.mdx#locking-a-device) by the device ID +- a Teleport [RBAC](../reference/access-controls/roles.mdx) role by the role's name +- a Teleport [trusted device](device-trust/enforcing-device-trust.mdx#locking-a-device) by the device ID - an MFA device by the device's UUID - an OS/UNIX login - a Teleport Agent by the Agent's server UUID (effectively unregistering it from the cluster) - a Windows desktop by the desktop's name -- an [Access Request](../access-requests/access-requests.mdx) by UUID +- an [Access Request](access-requests/access-requests.mdx) by UUID +- a bot instance ID (for Machine & Workload Identity bots) +- a join token name (for Machine & Workload Identity bots using a [delegated join 
method](../reference/deployment/join-methods.mdx#delegated-join-methods))
+
+## How it works
+
+A lock is a dynamic Teleport resource stored on the Teleport Auth Service
+backend. Teleport services implement a lock watcher that subscribes to Auth
+Service events related to lock creation. When these services receive a
+notification that a lock has been created or modified, they enact safeguards
+to prevent various operations, e.g., preventing access from a locked user.

## Prerequisites
@@ -84,6 +98,29 @@ with one of the following options:
 # Created a lock with name "dc7cee9d-fe5e-4534-a90d-db770f0234a1".
 ```
+
+  The most appropriate locking target for a Machine & Workload Identity bot
+  depends on its join method.
+
+  For [delegated join methods](../reference/deployment/join-methods.mdx#secret-vs-delegated),
+  it's best to target the specific join token the bot is using to join:
+  ```code
+  $ tctl lock --join-token=example-token-name
+  ```
+
+  The join token name cannot be targeted for bots joined using the `token` join
+  method, so it's best to use the
+  [bot instance ID](../reference/architecture/machine-id-architecture.mdx#bot-instances):
+  ```code
+  $ tctl lock --bot-instance-id aabbccdd-1234-5678-0000-3b04d7d03acc
+  ```
+
+  In all cases, you may also target the bot user, which will lock all instances
+  of a bot that share the same underlying user:
+  ```code
+  $ tctl lock --user bot-example
+  ```
+
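Locks created this way are ordinary dynamic resources, so after creating one you can list existing locks to confirm it took effect:

```code
$ tctl get locks
```
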
@@ -161,7 +198,7 @@ spec: ``` The `kind: lock` resources can also be created and updated using `tctl create` -as per usual. See the [Admin Guide](../../../reference/resources.mdx) for more +as per usual. See the [Admin Guide](../reference/infrastructure-as-code/teleport-resources/teleport-resources.mdx) for more details.
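As an illustration, you could save a lock spec to a file and create it with `tctl`. The target user, message, and expiry below are placeholders:

```yaml
kind: lock
version: v2
metadata:
  name: example-lock
spec:
  # Lock out the (hypothetical) user "alice" until the expiry time.
  target:
    user: alice
  message: "Account under investigation"
  expires: "2026-01-01T00:00:00Z"
```

Then run `tctl create -f lock.yaml` to apply it.
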
@@ -210,7 +247,7 @@ auth_service: Restart or redeploy the Auth Service for the change to take effect. -If not, edit your cluster authentication preference resource: +If not, edit your cluster authentication preference resource: ```code $ tctl edit cap diff --git a/docs/pages/identity-security/access-graph/access-graph.mdx b/docs/pages/identity-security/access-graph/access-graph.mdx new file mode 100644 index 0000000000000..40b6bb952b514 --- /dev/null +++ b/docs/pages/identity-security/access-graph/access-graph.mdx @@ -0,0 +1,28 @@ +--- +title: "Self-Hosting Teleport Identity Security" +sidebar_label: Self-Host Teleport Identity Security +description: Explains how to deploy Access Graph alongside a self-hosted Teleport cluster. +tags: + - identity-security + - access-risks +sidebar_position: 4 +enterprise: Identity Security +--- + +If you run a self-hosted Teleport cluster, using Teleport Identity Security +requires running the **Teleport Access Graph Service** on your own +infrastructure. + +By default, the Access Graph Service enables access path exploration features, +including the Dashboard, Browse, Crown Jewels, Graph Explorer, and SQL Editor +views. + +If you enable **Teleport Identity Activity Center** on the Access Graph Service +and deploy its supporting infrastructure, Teleport Identity Security will +provide audit log centralization and querying, including the Browse and Alerts +views. 
+
+The following guides show you how to deploy the Access Graph Service and support
+Identity Activity Center on your own infrastructure:
+
+
diff --git a/docs/pages/identity-security/access-graph/identity-activity-center.mdx b/docs/pages/identity-security/access-graph/identity-activity-center.mdx
new file mode 100644
index 0000000000000..a6455789ddbca
--- /dev/null
+++ b/docs/pages/identity-security/access-graph/identity-activity-center.mdx
@@ -0,0 +1,259 @@
+---
+title: "Self-Hosting Identity Activity Center"
+sidebar_label: Identity Activity Center
+description: Explains how to enable Identity Activity Center.
+tags:
+  - how-to
+  - identity-security
+  - aws
+  - access-risks
+enterprise: Identity Security
+---
+
+In this guide, you will set up the infrastructure required
+to enable the Identity Activity Center in Teleport's Identity Security product.
+Identity Activity Center allows you to centralize audit logs and access paths from various sources
+for enhanced visibility and management.
+
+(!docs/pages/includes/policy/identity-activity-center.mdx!)
+
+Identity Activity Center is a feature of the [Teleport Identity Security](https://goteleport.com/platform/identity-security/)
+product that is only available to Teleport Enterprise customers.
+
+## How it works
+
+Teleport Identity Security uses an AWS SQS queue to publish and consume audit logs.
+When deployed in high-availability mode, Teleport Identity Security elects a leader
+that's responsible for consuming messages from the queue, converting and enhancing
+them with metadata such as location, and writing them as Parquet files in the
+long-term S3 bucket.
+
+Amazon Athena powers the query engine of the Identity Activity Center. Amazon
+Athena uses an AWS Glue table to define the schema and retrieve data from the
+Parquet files kept in the long-term S3 bucket.
With this configuration, audit +logs can be efficiently queried and analyzed, leading to improved insights and +simplified data management for identity security. + +## Prerequisites +- A running Teleport Enterprise cluster v18.0.0 or later. +- An updated Teleport Enterprise license file with Teleport Identity Security enabled. +- A PostgreSQL database server v14 or later. + - Access Graph needs a dedicated [database](https://www.postgresql.org/docs/current/sql-createdatabase.html) to store its data. + The user that Teleport connects to the database with needs to be the owner of this database, or have similar broad permissions: + at least the `CREATE TABLE` privilege on the `public` schema, and the `CREATE SCHEMA` privilege. + - Amazon RDS for PostgreSQL is supported. +- A TLS certificate for the Access Graph service +- A running Access Graph node v1.28.0 or later. + Check the [Identity Security page](../identity-security.mdx) for details on + how to set up Access Graph. + + + + +The Identity Activity Center is currently supported only for self-hosted clusters. +Teleport Enterprise Cloud support is planned for release later this year. + + + +## Step 1/4. Infrastructure setup + +You can set up the required infrastructure to support the Identity Activity Center with the +following Terraform script. + +
+
+Terraform script
+
+Below is a list of the required variables, along with their descriptions,
+which are necessary for the Terraform script to execute.
+
+- `aws_region` is the name of the AWS region where the infrastructure should be created.
+- `iac_sqs_queue_name` is the name of the AWS SQS queue.
+- `iac_sqs_dlq_name` is the name of the AWS SQS dead-letter queue.
+- `iac_kms_key_alias` is the alias of the AWS KMS encryption key used to encrypt S3 bucket files and SQS messages.
+- `iac_long_term_bucket_name` is the long-term S3 bucket used to store the events.
+- `iac_transient_bucket_name` is the transient S3 bucket used to store temporary files such as query results and large files.
+- `iac_database_name` is the name of the AWS Glue database.
+- `iac_table_name` is the name of the AWS Glue table.
+- `iac_workgroup` is the name of the Amazon Athena workgroup.
+- The AWS account ID in which the infrastructure is created.
+
+Define the variables using the script below or configure them in your
+Teleport IaC script.
+
+```bash
+cat > variables.auto.tfvars << EOF
+aws_region = ""
+iac_sqs_queue_name = ""
+iac_sqs_dlq_name = ""
+iac_kms_key_alias = ""
+iac_long_term_bucket_name = ""
+iac_transient_bucket_name = ""
+iac_database_name = ""
+iac_table_name = ""
+iac_workgroup = ""
+EOF
+```
+
+Execute the following Terraform script to create the required infrastructure.
+
+### `variables.tf`
+
+The `variables.tf` file describes the required Terraform variables.
+
+```hcl
+(!examples/identity-activity-center/variables.tf!)
+```
+
+### `identity_activity_center.tf`
+
+The `identity_activity_center.tf` file declares every resource
+that Terraform will create and manage, including AWS KMS keys, AWS S3 buckets
+for transient and long-term storage, and the AWS Glue table and database.
+
+```hcl
+(!examples/identity-activity-center/identity_activity_center.tf!)
+```
+
+### `policy.tf`
+
+The `policy.tf` file declares the AWS IAM policy that must
+be attached to the AWS identity that the Identity Security service runs as,
+enabling it to access the resources and execute queries.
If using an EC2 instance
+for Identity Security, attach the policy to the instance role. If using ECS or EKS,
+attach the policy to the task or pod role.
+
+```hcl
+(!examples/identity-activity-center/policy.tf!)
+```
+
+The Terraform script will output two variables:
+
+- `identity_activity_center_iam_policy`: AWS IAM policy required for Identity Security to connect to AWS resources. Attach the policy to the Identity Security AWS IAM role.
+- `identity_activity_center_yaml`: YAML configuration to append to the Identity Security configuration or Helm chart values.
+
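With the variable definitions and the three files above in place, a standard Terraform workflow creates the infrastructure and prints the outputs. This assumes AWS credentials are already configured in your shell; the plan file name is only an example:

```code
$ terraform init
$ terraform plan -out=iac.plan
$ terraform apply iac.plan
$ terraform output identity_activity_center_yaml
```
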
+ +## Step 2/4. Configure MaxMind GeoIP City database (optional) + + + +MaxMind GeoIP database configuration is optional, but we do recommend it for geolocation tracking. + + + + +To enrich audit events with GeoIP details such as country, city, region, latitude, +and longitude based on IP addresses, we strongly recommend using the +[MaxMind GeoIP City database](https://www.maxmind.com/en/geoip-databases). + +MaxMind provides both free and paid versions of its GeoIP City database. +The free version is less accurate and updated less frequently compared to +the paid version. You can find instructions for downloading the free +version or purchasing the paid version on the +[MaxMind Developer website](https://dev.maxmind.com/geoip/geolite2-free-geolocation-data/). + + + + +To configure enhanced location details for self-hosted clusters, upload the database file to the machine where +Identity Security runs. You will specify its location in the config file below. + + + + + +To configure enhanced location details for self-hosted clusters installed using the Helm Chart, first create +the secret that contains the IP database: + +```bash +$ kubectl create secret generic maxmind-geoip-city-db --from-file=GeoLite2-City.mmdb + +``` + + + + +## Step 3/4. Modify Identity Security configuration + + + + +To configure it for self-hosted clusters, update the Identity Security configuration and +append the following section: + +```yaml +# Configuration for Teleport Access Graph service. +# Example: /etc/access_graph/config.yaml +# Add the following section to enable Identity Activity Center. +# ... +identity_activity_center: + region: + database: + table: + s3: s3:///data/ + s3_results: s3:///results/ + s3_large_files: s3:///large_files + workgroup: + sqs_queue_url: https://sqs..amazonaws.com// + maxmind_geoip_city_db_path: /path/to/geoIp-city.mmdb # optional +# ... 
+```
+
+
+
+
+To configure it for self-hosted clusters installed using the Helm Chart, update your
+Helm chart values with the following details:
+
+```yaml
+
+# optional
+  volumes:
+    - name: maxmind-geoip-city-db
+      secret:
+        secretName: maxmind-geoip-city-db
+        optional: false
+
+# optional
+  volumeMounts:
+    - name: maxmind-geoip-city-db
+      mountPath: "/etc/maxmindGeoIP/"
+      readOnly: true
+
+identity_activity_center:
+  enabled: true
+  region: 
+  database: 
+  table: 
+  s3: s3:///data/
+  s3_results: s3:///results/
+  s3_large_files: s3:///large_files
+  workgroup: 
+  sqs_queue_url: https://sqs..amazonaws.com//
+  maxmind_geoip_city_db_path: "/etc/maxmindGeoIP/GeoLite2-City.mmdb" # optional
+```
+
+Run `helm upgrade` to re-deploy Identity Security.
+
+
+
+
+
+## Step 4/4. Restart Identity Security
+
+
+
+
+The final step is to restart the Identity Security Docker container so that it
+reloads the latest configuration.
+
+
+
+
+
+The final step is to wait for the Identity Security Deployment to complete the rollout
+of the new replicas.
+
+
+
diff --git a/docs/pages/admin-guides/deploy-a-cluster/access-graph/self-hosted-helm.mdx b/docs/pages/identity-security/access-graph/self-hosted-helm.mdx
similarity index 86%
rename from docs/pages/admin-guides/deploy-a-cluster/access-graph/self-hosted-helm.mdx
rename to docs/pages/identity-security/access-graph/self-hosted-helm.mdx
index 525198d853b25..8fbad3873312c 100644
--- a/docs/pages/admin-guides/deploy-a-cluster/access-graph/self-hosted-helm.mdx
+++ b/docs/pages/identity-security/access-graph/self-hosted-helm.mdx
@@ -1,6 +1,12 @@
 ---
 title: Run Teleport Identity Security with Access Graph on Self-Hosted Clusters with Helm
+sidebar_label: Helm
 description: How to deploy Access Graph on self-hosted clusters using Helm.
+tags: + - how-to + - identity-security + - access-risks +enterprise: Identity Security --- Using Teleport Identity Security with Access Graph on a self-hosted Teleport cluster requires @@ -11,20 +17,31 @@ to collect information about resources and access. This guide will help you set up the Access Graph service in a Kubernetes cluster using a Helm Chart, and enable the Access Graph feature in your Teleport cluster. -The full listing of supported parameters can be found in the [Helm chart reference](../../../reference/helm-reference/teleport-access-graph.mdx). +The full listing of supported parameters can be found in the [Helm chart reference](../../reference/helm-reference/teleport-access-graph.mdx). Access Graph is a feature of the [Identity Security](https://goteleport.com/platform/policy/) product that is only available to Teleport Enterprise customers. +## How it works + +Unlike Teleport services like the Auth Service, Proxy Service, and agent +services, Teleport Access Graph does not run from the `teleport` binary, but as +a separate piece of software available from Teleport as a container image. +Teleport Access Graph uses TLS credentials to authenticate to the Teleport Auth +Service. It must also connect to PostgreSQL for its backing storage. After +obtaining credentials from your Teleport cluster, you create a configuration +file for Teleport Access Graph and install the `teleport-access-graph` Helm +chart. + ## Prerequisites - Kubernetes >= v1.21 - Helm >= (=helm.version=) -- A running Teleport Enterprise cluster v14.3.6 or later. +- A running Teleport Enterprise cluster v18.0.0 or later. - For the purposes of this guide, we assume that the Teleport cluster is set up - [using the `teleport-cluster` Helm chart](../helm-deployments/helm-deployments.mdx) + [using the `teleport-cluster` Helm chart](../../zero-trust-access/deploy-a-cluster/helm-deployments/helm-deployments.mdx) in the same Kubernetes cluster that will be used to deploy Access Graph. 
-- An updated `license.pem` with Identity Security enabled. +- An updated Teleport Enterprise license file with Teleport Identity Security enabled. - A PostgreSQL database server v14 or later. - Access Graph needs a dedicated [database](https://www.postgresql.org/docs/current/sql-createdatabase.html) to store its data. The user that Teleport connects to the database with needs to be the owner of this database, or have similar broad permissions: @@ -152,7 +169,7 @@ the Auth Service configuration needs to be updated with: - The path to the CA which issued the Access Graph service TLS certificate. - This path must refer to a volume containing the CA, mounted on the Teleport pods. - Specifying the CA certificate file can be skipped if you are using a CA that is already trusted by the Teleport cluster - (e.g. via the [`tls.existingCASecretName` option](../../../reference/helm-reference/teleport-cluster.mdx)), + (e.g. via the [`tls.existingCASecretName` option](../../reference/helm-reference/teleport-cluster.mdx)), or if the certificate was issued by a CA included in the [Mozilla CA Certificate List](https://wiki.mozilla.org/CA/Included_Certificates). Create a ConfigMap containing the CA certificate as follows: @@ -205,10 +222,10 @@ $ kubectl -n rollout restart deploymen $ kubectl -n rollout status deployment/teleport-proxy # Wait for the deployment to succeed ``` -## Step 4/4. View the Access Graph in the Web UI +## Step 4/4. View Access Graph data in the Graph Explorer -You can find the Access Graph in the "Access Management" tab in the Web UI. -![Access Management menu item](../../../../img/access-graph/menu-item.png) +In order to visualize the data from the Access Graph service, use the Graph Explorer in the Web UI. +Click **Identity Security** --> **Graph Explorer** and then select a resource to view in the Graph Explorer. 
To access the interface, your user must have a role that allows `list` and `read` verbs on the `access_graph` resource, e.g.: @@ -228,3 +245,8 @@ spec: ``` The preset `editor` role has the required permissions by default. + + +## Next steps + +- Configure [Identity Activity Center](./identity-activity-center.mdx) diff --git a/docs/pages/identity-security/access-graph/self-hosted.mdx b/docs/pages/identity-security/access-graph/self-hosted.mdx new file mode 100644 index 0000000000000..0707853ca9e0a --- /dev/null +++ b/docs/pages/identity-security/access-graph/self-hosted.mdx @@ -0,0 +1,170 @@ +--- +title: Run Teleport Identity Security on Self-Hosted Clusters +sidebar_label: Docker +description: Describes how to deploy Access Graph on self-hosted clusters. +tags: + - how-to + - identity-security + - access-risks +enterprise: Identity Security +--- + +This guide shows you how to set up Teleport Access Graph in a self-hosted +Teleport cluster. + +## How it works + +Unlike Teleport services like the Auth Service, Proxy Service, and agent +services, Teleport Access Graph does not run from the `teleport` binary, but as +a separate piece of software available from Teleport as a container image. +Teleport Access Graph uses TLS credentials to authenticate to the Teleport Auth +Service. It must also connect to PostgreSQL for its backing storage. After +obtaining credentials from your Teleport cluster, you create a configuration +file for Teleport Access Graph and start a container that loads the +configuration file and Teleport credentials. + +## Prerequisites + +- A running Teleport Enterprise cluster. +- An updated Teleport Enterprise license file with Teleport Identity Security enabled. +- Docker version v(=docker.version=) or later. +- A PostgreSQL database server v14 or later. + - Access Graph needs a dedicated [database](https://www.postgresql.org/docs/current/sql-createdatabase.html) to store its data. 
+ The user that Access Graph connects to the database with needs to be the owner of this database, or have similar broad permissions: + at least the `CREATE TABLE` privilege on the `public` schema, and the `CREATE SCHEMA` privilege. + - Amazon RDS for PostgreSQL is supported. +- A TLS certificate for the Access Graph service + - The TLS certificate must be issued for "server authentication" key usage, + and must list the IP or DNS name of the Access Graph service in an X.509 v3 `subjectAltName` extension. + - Starting from version 1.20.4 of the Access Graph service, the container runs as a non-root user by default. + Make sure the certificate files are readable by the user running the container. You can set correct permissions with the following command: + ```code + $ sudo chown 65532 /etc/access_graph/tls.key + ``` +- The node running the Access Graph service must be reachable from Teleport Auth Service and Proxy Service. + + + The deployment with Docker is suitable for testing and development purposes. For production deployments, + consider using the Access Graph Helm chart to deploy this service on Kubernetes. + Refer to [Helm chart for Access Graph](self-hosted-helm.mdx) for instructions. + + +## Step 1/3. Set up Access Graph + +You will need a copy of your Teleport cluster's host certificate authority (CA) on the machine that hosts the Access Graph service. +The service requires incoming connections to be authenticated via host certificates that the host CA issues to the Auth Service and Proxy Service. 
+ +The host CA can be retrieved and saved into a file in one of the following ways: + + + +```code +$ sudo mkdir /etc/access_graph +$ curl -s 'https:///webapi/auth/export?type=tls-host' | sudo tee /etc/access_graph/teleport_host_ca.pem +``` + + + +```code +$ sudo mkdir /etc/access_graph +$ tsh login --proxy= +$ tctl get cert_authorities --format=json \ + | jq -r '.[] | select(.spec.type == "host") | .spec.active_keys.tls[].cert' \ + | base64 -d | sudo tee /etc/access_graph/teleport_host_ca.pem +``` + + + +Then, on the same machine, create a configuration file for the Access Graph service, similar to this: + +```yaml +# Configuration for Teleport Access Graph service. +# Example: /etc/access_graph/config.yaml +backend: + postgres: + # This uses the PostgreSQL connection URI format, see https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING-URIS + # A stricter `sslmode` value is strongly recommended, + # e.g. `sslmode=verify-full&sslrootcert=/etc/access_graph/my_postgres_ca.crt`. + # For a full reference on possible parameters see https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS + connection: postgres://access_graph_user:my_password@db.example.com:5432/access_graph_db?sslmode=require + + # When running on Amazon RDS, IAM auth via credentials set in the environment can be used as follows: + # iam: + # aws_region: us-west-2 + +# IP address (optional) and port for the Access Graph service to listen to. +# This is the default value. This key can be omitted to listen on port 50051 on all interfaces. +address: ":50051" + +# Example Identity Activity Center configuration. 
+# identity_activity_center: +# region: eu-central-1 +# database: example-db +# table: example-table +# s3: s3://example-long-term-bucket/data/ +# s3_results: s3://example-transient-bucket/results/ +# s3_large_files: s3://example-transient-bucket/large_files +# workgroup: example-workgroup +# sqs_queue_url: https://sqs.eu-central-1.amazonaws.com/aws-account-id/example-sqs-queue +# maxmind_geoip_city_db_path: "/etc/maxmindGeoIP/GeoLite2-City.mmdb" # optional + +tls: + # File paths of PEM-encoded TLS certificate and private key for the Access Graph server. + cert: /etc/access_graph/tls.crt + key: /etc/access_graph/tls.key + +# This lists the file paths for host CAs of Teleport clusters that are allowed to register with this Access Graph service. +# Several paths can be included to allow several Teleport clusters to connect to the Access Graph service. +registration_cas: + - /etc/access_graph/teleport_host_ca.pem # A full path to the file containing the Teleport cluster's host CA certificate. +``` + +Finally, start the Access Graph service using Docker as follows: + +```console +$ docker run -p 50051:50051 -v :/app/config.yaml -v /etc/access_graph:/etc/access_graph public.ecr.aws/gravitational/access-graph:(=access_graph.version=) +``` + +## Step 2/3. Update the Teleport Auth Service configuration + +In the YAML config for the Auth Service, add a new top-level section for Access Graph configuration. + +```yaml +access_graph: + enabled: true + # host:port where the Access Graph service is listening + endpoint: access-graph.example.com:50051 + # Specify a trusted CA we expect the Access Graph server certificate to be signed by. + # If not specified, the system trust store will be used. + ca: /etc/access_graph_ca.pem +``` + +Then, restart Auth Service instances, followed by Proxy Service instances. + +## Step 3/3. View Access Graph data in the Graph Explorer + +In order to visualize the data from the Access Graph service, use the Graph Explorer in the Web UI. 
+Click **Identity Security** --> **Graph Explorer** and then select a resource to view in the Graph Explorer. + +To access the interface, your user must have a role that allows `list` and `read` verbs on the `access_graph` resource, e.g.: + +```yaml +kind: role +version: v7 +metadata: + name: my-role +spec: + allow: + rules: + - resources: + - access_graph + verbs: + - list + - read +``` + +The preset `editor` role has the required permissions by default. + +## Next steps + +- Configure [Identity Activity Center](./identity-activity-center.mdx) diff --git a/docs/pages/identity-security/identity-security.mdx b/docs/pages/identity-security/identity-security.mdx new file mode 100644 index 0000000000000..91898996bfd10 --- /dev/null +++ b/docs/pages/identity-security/identity-security.mdx @@ -0,0 +1,142 @@ +--- +title: Teleport Identity Security +description: An overview of Teleport Identity Security. +template: landing-page +tags: + - identity-security + - access-risks +enterprise: Identity Security +--- + +import LandingHero from '@site/src/components/Pages/Landing/LandingHero'; +import TextWithMedia from "@site/src/components/Pages/Landing/TextWithMedia"; +import UseCasesList from "@site/src/components/Pages/Landing/UseCasesList"; +import Integrations from "@site/src/components/Pages/Homepage/Integrations"; + +import identitySecurityImg from '@version/docs/img/identity-security/identity-security-hero.png'; +import bookOpenSvg from "@site/src/components/Icon/teleport-svg/book-open.svg"; +import identificationBadgeSvg from "@site/src/components/Icon/teleport-svg/identification-badge.svg"; +import listBulletsSvg from "@site/src/components/Icon/teleport-svg/list-bullets.svg"; +import verifySvg from "@site/src/components/Icon/teleport-svg/verify.svg"; +import githubSvg from "@site/src/components/Icon/svg/github.svg"; +import awsSvg from "@site/src/components/Icon/teleport-svg/aws.svg"; +import oktaSvg from "@site/src/components/Icon/teleport-svg/okta.svg"; +import 
teleportSvg from "@site/src/components/Icon/teleport-svg/teleport.svg"; +import video from "@site/src/components/Pages/Landing/TextWithMedia/identity-security.mp4"; + + + Teleport Identity Security centralizes access policy across your infrastructure, consolidates disparate identity audit logs, discovers shadow access, and alerts on access anomalies. It helps you quickly answer: + - What resources can a specific user access? + - What actions do users perform when connecting to systems? + - How are users, roles, and resources connected? + - Can users gain access to resources outside of defined RBAC policies? + - Can users interact with resources in ways that bypass audit logging? + + + +
diff --git a/docs/pages/includes/application-access/aws-database-start-app-service.mdx b/docs/pages/includes/application-access/aws-database-start-app-service.mdx index d0838200ede9c..bcb989d171fd3 100644 --- a/docs/pages/includes/application-access/aws-database-start-app-service.mdx +++ b/docs/pages/includes/application-access/aws-database-start-app-service.mdx @@ -23,7 +23,7 @@ Replace `https://console.aws.amazon.com` with ### Install and start Teleport Install Teleport on the host where you will run the Teleport Application -Service. See our [Installation](../../installation.mdx) page for options +Service. See our [Installation](../../installation/installation.mdx) page for options besides Linux servers. (!docs/pages/includes/install-linux.mdx!) diff --git a/docs/pages/includes/application-access/dynamic-app-config.mdx b/docs/pages/includes/application-access/dynamic-app-config.mdx new file mode 100644 index 0000000000000..2f2caea8e0c96 --- /dev/null +++ b/docs/pages/includes/application-access/dynamic-app-config.mdx @@ -0,0 +1,22 @@ +To enable dynamic registration, include a `resources` section in your Application +Service configuration with a list of resource label selectors you'd like this +service to monitor for registering: + +```yaml +app_service: + enabled: true + resources: + - labels: + "*": "*" +``` + +You can use a wildcard selector to register all dynamic app resources in the cluster +on the Application Service or provide a specific set of labels for a subset: + +```yaml +resources: +- labels: + "env": "prod" +- labels: + "env": "test" +``` diff --git a/docs/pages/includes/application-access/dynamic-app-permissions.mdx b/docs/pages/includes/application-access/dynamic-app-permissions.mdx new file mode 100644 index 0000000000000..5ad226a6e4495 --- /dev/null +++ b/docs/pages/includes/application-access/dynamic-app-permissions.mdx @@ -0,0 +1,13 @@ +In order to interact with dynamically registered applications, a user must have +a Teleport role with 
permissions to manage `app` resources. + +In the following example, a role allows a user to perform all possible +operations against `app` resources: + +```yaml +allow: + rules: + - resources: + - app + verbs: [list, create, read, update, delete] +``` diff --git a/docs/pages/includes/application-access/jwt-configure-claims.mdx b/docs/pages/includes/application-access/jwt-configure-claims.mdx new file mode 100644 index 0000000000000..f6900e3e4d7c1 --- /dev/null +++ b/docs/pages/includes/application-access/jwt-configure-claims.mdx @@ -0,0 +1,26 @@ +{{ appType="web application" appScheme="https" }} + +By default, Teleport includes a user's roles and traits in the JWT generated for +application access, and the `Teleport-Jwt-Assertion` header is sent along with +every request that Teleport makes to an upstream {{ appType }}. + +If your {{ appType }} doesn't care about these values, or you are encountering an +error due to exceeding the size limit of HTTP headers, you can configure +Teleport to omit this information from the token. + +```yaml +- name: "dashboard" + uri: {{ appScheme }}://localhost:4321 + rewrite: + # Specify whether to include roles or traits in the JWT. + # Options: + # - roles-and-traits: include both roles and traits + # - roles: include only roles + # - traits: include only traits + # - none: exclude both roles and traits from the JWT token + # Default: roles-and-traits + jwt_claims: roles-and-traits + headers: + # Inject header with Teleport-signed JWT token. + - "Authorization: Bearer {{internal.jwt}}" +``` diff --git a/docs/pages/includes/application-access/jwt-grafana-config.mdx b/docs/pages/includes/application-access/jwt-grafana-config.mdx new file mode 100644 index 0000000000000..546f44fbc6153 --- /dev/null +++ b/docs/pages/includes/application-access/jwt-grafana-config.mdx @@ -0,0 +1,31 @@ +Add an `auth.jwt` section in Grafana’s main configuration file. 
Replace with the domain name of your Teleport cluster:
+```ini
+[auth.jwt]
+enabled = true
+
+# HTTP header to look into to get a JWT token.
+header_name = Authorization
+
+# JSON Web Key Set (JWKS) URL from your Teleport cluster.
+jwk_set_url = https:///.well-known/jwks.json
+
+# Teleport username can be found in "sub" or "username" claims.
+username_claim = sub
+
+# Map Teleport users to Grafana organization roles based on their Teleport
+# roles. Adjust accordingly.
+#
+# In this example, if the user's Teleport role list (the "roles" claim) contains
+# "editor", assign them the Grafana "Editor" role. All other users get the
+# "Viewer" role.
+#
+# Teleport user traits are also available in the "traits" claim and can be
+# used in expressions in the same way as roles.
+role_attribute_path = contains(roles[*], 'editor') && 'Editor' || 'Viewer'
+
+# Automatically create users if they are not already matched.
+auto_sign_up = true
+```
+
+Restart your Grafana instance after updating the config. diff --git a/docs/pages/includes/application-access/jwt-inject.mdx b/docs/pages/includes/application-access/jwt-inject.mdx new file mode 100644 index 0000000000000..322b4c273f195 --- /dev/null +++ b/docs/pages/includes/application-access/jwt-inject.mdx @@ -0,0 +1,5 @@ +You can inject a JWT token into any header using the [headers passthrough](../../enroll-resources/application-access/protect-apps/connecting-apps.mdx#headers-passthrough)
+configuration and the `{{internal.jwt}}` template variable. This variable will
+be replaced with a JWT token signed by the Teleport JWT CA, containing user identity
+information as described above.
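For quick debugging, the upstream application can decode the payload of the injected assertion with no external dependencies. Below is a minimal Python sketch for inspection only — it deliberately does not verify the signature, which production code must check against the cluster's JWKS endpoint at `/.well-known/jwks.json` (the `request.headers` access in the comment is a hypothetical web-framework call):

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT without verifying its signature."""
    # A JWT is three base64url-encoded segments: header.payload.signature.
    payload_b64 = token.split(".")[1]
    # base64url encoding strips '=' padding; restore it before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Hypothetical usage inside a request handler:
# claims = decode_jwt_payload(request.headers["Teleport-Jwt-Assertion"])
# print(claims["username"], claims["roles"])
```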
+
diff --git a/docs/pages/includes/application-access/jwt-intro.mdx b/docs/pages/includes/application-access/jwt-intro.mdx new file mode 100644 index 0000000000000..8c3efc802b1f5 --- /dev/null +++ b/docs/pages/includes/application-access/jwt-intro.mdx @@ -0,0 +1,73 @@ +{{ appType="application" }}
+
+Teleport sends a JWT token signed with Teleport's authority in a
+`Teleport-Jwt-Assertion` header with each request to a target {{ appType }}.
+
+You can use the JWT token to get information about the authenticated Teleport
+user, their roles, and their traits. This allows you to:
+
+- Map Teleport identity/roles/traits onto the identity/roles/traits of your web application.
+- Trust the Teleport identity to automatically sign users in to your application.
+
+## Introduction to JWTs
+
+JSON Web Token (JWT) is an open standard that defines a secure way to transfer
+information between parties as a JSON object.
+
+For an in-depth explanation, please visit [https://jwt.io/introduction/](https://jwt.io/introduction/).
+
+Teleport JWTs include three sections:
+
+- Header
+- Payload
+- Signature
+
+### Header
+
+*Example Header*
+
+```json
+{
+  "alg": "RS256",
+  "typ": "JWT"
+}
+```
+
+### Payload
+
+*Example Payload*
+
+```json
+{
+  "aud": [
+    "http://127.0.0.1:34679"
+  ],
+  "iss": "aws",
+  "nbf": 1603835795,
+  "sub": "alice",
+  // Teleport user name.
+  "username": "alice",
+  // Teleport user roles.
+  "roles": [
+    "admin"
+  ],
+  // Teleport user traits.
+  "traits": {
+    "logins": [
+      "root",
+      "ubuntu",
+      "ec2-user"
+    ]
+  },
+  // Teleport identity expiration.
+  "exp": 1603943800
+}
+```
+
+The JWT will be sent with the header: `Teleport-Jwt-Assertion`.
+ +*Example Teleport JWT Assertion* + +```json +eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOlsiaHR0cDovLzEyNy4wLjAuMTozNDY3OSJdLCJleHAiOjE2MDM5NDM4MDAsImlzcyI6ImF3cyIsIm5iZiI6MTYwMzgzNTc5NSwicm9sZXMiOlsiYWRtaW4iXSwic3ViIjoiYmVuYXJlbnQiLCJ1c2VybmFtZSI6ImJlbmFyZW50In0.PZGUyFfhEWl22EDniWRLmKAjb3fL0D4cTmkxEfb-Q30hVMzVhka5WB8AUsPsLPVhTzsQ6Nkk1DnXHdz6oxrqDDfumuRrDnpJpjiXj_l0D3bExrchN61enzBHxSD13VkRIqP1V6l4i8yt8kXDIBWc-QejLTodA_GtczkDfnnpuAfaxIbD7jEwF27KI4kZu7uES9LMu2iCLdV9ZqarA-6HeDhXPA37OJ3P6eVQzYpgaOBYro5brEiVpuJLr1yA0gncmR4FqmhCpCj-KmHi2vmjmJAuuHId6HZoEZJjC9IAsNlrSA4GHH9j82o7FF1F4J2s38bRy3wZv46MT8X8-QBSpg +``` diff --git a/docs/pages/includes/application-access/jwt-validate.mdx b/docs/pages/includes/application-access/jwt-validate.mdx new file mode 100644 index 0000000000000..2d3b43b6ece67 --- /dev/null +++ b/docs/pages/includes/application-access/jwt-validate.mdx @@ -0,0 +1,17 @@ +Teleport provides a JSON Web Key Set (`jwks`) endpoint to verify that the JWT +can be trusted. This endpoint is `https://[cluster-name]/.well-known/jwks.json`: + +*Example jwks.json* + +```json +{ + "keys": [ + { + "kty": "RSA", + "n": "xk-0VSVZY76QGqeN9TD-FJp32s8jZrpsalnRoFwlZ_JwPbbd5-_bPKcz8o2tv1eJS0Ll6ePxRCyK68Jz2UC4V4RiYaqJCRq_qVpDQMB1sQ7p9M-8qvT82FJ-Rv-W4RNe3xRmBSFDYdXaFm51Uk8OIYfv-oZ0kGptKpkNY390aJOzjHPH2MqSvhk9Xn8GwM8kEbpSllavdJCRPCeNVGJXiSCsWrOA_wsv_jqBP6g3UOA9GnI8R6HR14OxV3C184vb3NxIqxtrW0C4W6UtSbMDcKcNCgajq2l56pHO8In5GoPCrHqlo379LE5QqpXeeHj8uqcjeGdxXTuPrRq1AuBpvQ", + "e": "AQAB", + "alg": "RS256" + } + ] +} +``` diff --git a/docs/pages/includes/auto-discovery/auto-discovery-labels.mdx b/docs/pages/includes/auto-discovery/auto-discovery-labels.mdx new file mode 100644 index 0000000000000..09071d5967627 --- /dev/null +++ b/docs/pages/includes/auto-discovery/auto-discovery-labels.mdx @@ -0,0 +1,3 @@ +Teleport applies a set of default labels to resources on AWS, Azure, and Google +Cloud that join a cluster via auto-discovery. 
See the auto-discovery labels +[reference](../../enroll-resources/auto-discovery/reference/labels.mdx) diff --git a/docs/pages/includes/auto-discovery/aws-ec2-advanced-config.mdx b/docs/pages/includes/auto-discovery/aws-ec2-advanced-config.mdx new file mode 100644 index 0000000000000..4c81cd1f977d0 --- /dev/null +++ b/docs/pages/includes/auto-discovery/aws-ec2-advanced-config.mdx @@ -0,0 +1,93 @@ +{/* this comment is here to make the includes below render */} + +(!docs/pages/includes/auto-discovery/server-advanced-config.mdx matcher="aws"!) + +### Use a custom installation script + +(!docs/pages/includes/server-access/custom-installer.mdx matcher="aws"!) + +### Use a custom SSM Document + +When executing the installation script on discovered EC2 instances, the Discovery Service uses an SSM document. + +The default `AWS-RunShellScript` SSM document works in most cases and is always available in AWS. + +However, if you need to customize the installation process for your environment, you can create a custom SSM Document and configure the Discovery Service to use it during installation. + +The custom document's parameters must include `env`, `scriptName` and `token`. + +The recommended approach is to use the following document and customize it as needed: + +```yaml +schemaVersion: '2.2' +description: aws:runShellScript +parameters: + token: + type: String + description: "(Required) The Teleport invite token to use when joining the cluster." + scriptName: + type: String + description: "(Required) The Teleport installer script to use when joining the cluster." + env: + type: String + description: "Environment variables exported to the script. 
Format 'ENV=var FOO=bar'" + default: "X=$X" +mainSteps: +- action: aws:downloadContent + name: downloadContent + inputs: + sourceType: "HTTP" + destinationPath: "/tmp/installTeleport.sh" + sourceInfo: + url: "https:///webapi/scripts/installer/{{ scriptName }}" +- action: aws:runShellScript + name: runShellScript + inputs: + timeoutSeconds: '300' + runCommand: + - export {{ env }}; /bin/sh /tmp/installTeleport.sh "{{ token }}" +``` + +Create this document using AWS Systems Manager in each region where you plan to discover instances. + +Edit your Discovery Service configuration to use the custom SSM Document, by setting the `ssm.document_name` key: + +```yaml +# teleport.yaml +version: v3 +# ... +discovery_service: + enabled: true + aws: + - ssm: + document_name: "TeleportDiscoveryInstaller" +``` + +### Discover instances in all active regions + +The Discovery Service can be configured to scan all active AWS regions for EC2 instances. + +Edit the AWS matcher and set the `regions` key to wildcard (`*`): + +```yaml +# teleport.yaml +version: v3 +# ... +discovery_service: + enabled: true + aws: + - regions: ["*"] + # other fields +``` + +Add the necessary IAM permissions to allow the Discovery Service to list regions: +```json +{ + "Effect": "Allow", + "Action": [ + // existing permissions + "account:ListRegions" + ], + "Resource": "*" +} +``` diff --git a/docs/pages/includes/auto-discovery/azure-vm-advanced-config.mdx b/docs/pages/includes/auto-discovery/azure-vm-advanced-config.mdx new file mode 100644 index 0000000000000..6bb99ceffda45 --- /dev/null +++ b/docs/pages/includes/auto-discovery/azure-vm-advanced-config.mdx @@ -0,0 +1,10 @@ +{/* this comment is here to make the includes below render */} + +(!docs/pages/includes/auto-discovery/server-advanced-config.mdx matcher="azure"!) + +### Use a custom installation script + +(!docs/pages/includes/server-access/custom-installer.mdx matcher="azure"!) 
+ +If `client_id` is set in the Discovery Service config, custom installers will +also have the `{{ .AzureClientID }}` templating option. diff --git a/docs/pages/includes/auto-discovery/ec2-config.mdx b/docs/pages/includes/auto-discovery/ec2-config.mdx new file mode 100644 index 0000000000000..a0949c1809d6d --- /dev/null +++ b/docs/pages/includes/auto-discovery/ec2-config.mdx @@ -0,0 +1,51 @@ +If you are running the Discovery Service on its own host, the service requires a +valid invite token to connect to the cluster. Generate one by running the +following command against your Teleport Auth Service: + +```code +$ tctl tokens add --type=discovery +``` + +Save the generated token in `/tmp/token` on the Node (EC2 instance) that will +run the Discovery Service. + +In order to enable EC2 instance discovery the `discovery_service.aws` section +of `teleport.yaml` must include at least one entry: + +(!docs/pages/includes/discovery/discovery-group.mdx!) + +```yaml +# teleport.yaml +version: v3 +teleport: + join_params: + token_name: "/tmp/token" + method: token + proxy_server: "" +auth_service: + enabled: false +proxy_service: + enabled: false +ssh_service: + enabled: false +discovery_service: + enabled: true + discovery_group: "aws-prod" + aws: + - types: ["ec2"] + regions: ["us-east-1","us-west-1"] + ssm: + document_name: "AWS-RunShellScript" + install: + join_params: + token_name: aws-discovery-iam-token + method: iam + tags: + "env": "prod" # Match EC2 instances where tag:env=prod +``` + +- Edit the `teleport.proxy_server` key to match your Proxy Service's URI + and port. +- Adjust the keys under `discovery_service.aws` to match your EC2 environment, + specifically the regions and tags you want to associate with the Discovery + Service. 
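Conceptually, a matcher selects an instance when every label in the matcher is present among the instance's tags, and the wildcard selector `"*": "*"` matches everything. The following Python sketch illustrates these matching semantics in simplified form — it is not Teleport's actual implementation (real matchers also accept lists of values per label):

```python
def matches(selector: dict, tags: dict) -> bool:
    """Return True if an instance's tags satisfy a label selector."""
    # The wildcard selector {"*": "*"} matches any instance.
    if selector == {"*": "*"}:
        return True
    # Otherwise every selector key must be present on the instance, and its
    # value must either be a wildcard or equal the instance's tag value.
    return all(
        key in tags and (value == "*" or tags[key] == value)
        for key, value in selector.items()
    )

# The matcher from the example config above:
selector = {"env": "prod"}
print(matches(selector, {"env": "prod", "Name": "web-1"}))  # True
print(matches(selector, {"env": "test"}))                   # False
```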
diff --git a/docs/pages/includes/auto-discovery/ec2-cross-account-config.mdx b/docs/pages/includes/auto-discovery/ec2-cross-account-config.mdx new file mode 100644 index 0000000000000..94a2a1a7a77a2 --- /dev/null +++ b/docs/pages/includes/auto-discovery/ec2-cross-account-config.mdx @@ -0,0 +1,33 @@ +Add an entry to the `discovery_service.aws` section of your `teleport.yaml` +file, assigning to the ARN of the IAM role to assume and + to an optional external ID: + +```yaml +# teleport.yaml +# ... +discovery_service: + enabled: true + discovery_group: "aws-prod" + aws: + - types: ["ec2"] + ssm: + document_name: "AWS-RunShellScript" + # Add new entry after existing entries + - types: ["ec2"] + regions: ["us-east-1","us-west-1"] + install: + join_params: + token_name: aws-discovery-iam-token + method: iam + ssm: + document_name: "AWS-RunShellScript" + tags: + "env": "prod" # Match EC2 instances where tag:env=prod + assume_role_arn: "" + external_id: "" + - types: ["ec2"] + ssm: + document_name: "AWS-RunShellScript" + # Add a new entry for each account. + # ... +``` \ No newline at end of file diff --git a/docs/pages/includes/auto-discovery/ec2-cross-account-token.mdx b/docs/pages/includes/auto-discovery/ec2-cross-account-token.mdx new file mode 100644 index 0000000000000..3f4edc94c1770 --- /dev/null +++ b/docs/pages/includes/auto-discovery/ec2-cross-account-token.mdx @@ -0,0 +1,18 @@ +Add a new entry to `spec.allow` in `token.yaml` and set `aws_account` to the +account number of the new account, including the +: + +```diff +# token.yaml +kind: token +version: v2 +metadata: + name: aws-discovery-iam-token +spec: + roles: [Node] + join_method: iam + allow: + - aws_account: "123456789012" + # Existing entry... 
++ - aws_account: "" +``` diff --git a/docs/pages/includes/auto-discovery/ec2-invite-token.mdx b/docs/pages/includes/auto-discovery/ec2-invite-token.mdx new file mode 100644 index 0000000000000..163562f841029 --- /dev/null +++ b/docs/pages/includes/auto-discovery/ec2-invite-token.mdx @@ -0,0 +1,31 @@ +When discovering EC2 instances, Teleport makes use of IAM invite tokens for +authenticating joining Nodes. + +Create a file called `token.yaml`: + +```yaml +# token.yaml +kind: token +version: v2 +metadata: + # the token name is not a secret because instances must prove that they are + # running in your AWS account to use this token + name: aws-discovery-iam-token +spec: + # use the minimal set of roles required (e.g. Node, App, Kube, DB, WindowsDesktop) + roles: [Node] + + # set the join method allowed for this token + join_method: iam + + allow: + # specify the AWS account which Nodes may join from + - aws_account: "123456789" +``` + +Assign the `aws_account` field to your AWS account number. +Add the token to the Teleport cluster with: + +```code +$ tctl create -f token.yaml +``` \ No newline at end of file diff --git a/docs/pages/includes/auto-discovery/ec2-next-steps.mdx b/docs/pages/includes/auto-discovery/ec2-next-steps.mdx new file mode 100644 index 0000000000000..6c3047a92e1c9 --- /dev/null +++ b/docs/pages/includes/auto-discovery/ec2-next-steps.mdx @@ -0,0 +1,8 @@ +- Read [Joining Nodes via AWS IAM + Role](../../enroll-resources/agents/aws-iam.mdx) +for more information on IAM Invite Tokens. +- Information on IAM best practices on EC2 instances managed by Systems +Manager can be found in the [AWS Cloud Operations & Migrations Blog +](https://aws.amazon.com/blogs/mt/applying-managed-instance-policy-best-practices/). +- Full documentation on EC2 discovery configuration can be found through the [ +config file reference documentation](../../reference/deployment/config.mdx). 
\ No newline at end of file diff --git a/docs/pages/includes/auto-discovery/ec2-troubleshooting.mdx b/docs/pages/includes/auto-discovery/ec2-troubleshooting.mdx new file mode 100644 index 0000000000000..1229a2e13af10 --- /dev/null +++ b/docs/pages/includes/auto-discovery/ec2-troubleshooting.mdx @@ -0,0 +1,27 @@ +If installs fail or instances do not appear, check the
+command history under AWS Systems Manager -> Node Management -> Run Command.
+Select the instance ID of the target to review errors.
+
+### `cannot unmarshal object into Go struct field`
+
+If you encounter an error similar to the following:
+
+```text
+invalid format in plugin properties map[destinationPath:/tmp/installTeleport.sh sourceInfo:map[url:https://example.teleport.sh:443/webapi/scripts/installer/preprod-installer] sourceType:HTTP];
+error json: cannot unmarshal object into Go struct field DownloadContentPlugin.sourceInfo of type string
+```
+
+It is likely that you're running an older SSM agent version. Upgrade to SSM agent version 3.1 or greater to resolve the issue.
+
+### `InvalidInstanceId: Instances [[i-123]] not in a valid state for account 456`
+
+The following problems can cause this error:
+- The Discovery Service doesn't have permission to access the managed node.
+- AWS Systems Manager Agent (SSM Agent) isn't running. Verify that SSM Agent is running.
+- SSM Agent isn't registered with the SSM endpoint. Try reinstalling SSM Agent.
+- The discovered instance does not have permission to receive SSM
+  commands; verify that the instance includes the `AmazonSSMManagedInstanceCore` IAM policy.
+
See SSM RunCommand error codes and troubleshooting information in the AWS documentation for more details:
+- https://docs.aws.amazon.com/systems-manager/latest/userguide/troubleshooting-managed-instances.html
+- https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_SendCommand.html#API_SendCommand_Errors \ No newline at end of file diff --git a/docs/pages/includes/auto-discovery/gcp-vm-advanced-config.mdx b/docs/pages/includes/auto-discovery/gcp-vm-advanced-config.mdx new file mode 100644 index 0000000000000..41aa03596a314 --- /dev/null +++ b/docs/pages/includes/auto-discovery/gcp-vm-advanced-config.mdx @@ -0,0 +1,7 @@ +{/* this comment is here to make the includes below render */}
+
+(!docs/pages/includes/auto-discovery/server-advanced-config.mdx matcher="gcp"!)
+
+### Use a custom installation script
+
+(!docs/pages/includes/server-access/custom-installer.mdx matcher="gcp"!) \ No newline at end of file diff --git a/docs/pages/includes/auto-discovery/server-advanced-config.mdx b/docs/pages/includes/auto-discovery/server-advanced-config.mdx new file mode 100644 index 0000000000000..76962af9503b2 --- /dev/null +++ b/docs/pages/includes/auto-discovery/server-advanced-config.mdx @@ -0,0 +1,61 @@ +{{ matcher="bar" }}
+
+This section covers configuration options for discovering and enrolling servers.
+
+### Install multiple Teleport agents on the same instance
+
+When using blue-green deployments or other multi-cluster setups, you might want to access your instances from different clusters.
+
+Teleport supports installing and running multiple agents on the same instance using a suffixed installation, which isolates each installation from the others.
+
+To configure the Discovery Service to use a suffixed installation, specify the `install.suffix` key in your Discovery Service configuration:
+
+```yaml
+# teleport.yaml
+version: v3
+# ...
+discovery_service:
+  enabled: true
+  {{ matcher }}:
+  - install:
+      suffix: "blue-cluster"
+```
+
+This feature requires [agent managed updates](../../upgrading/agent-managed-updates/agent-managed-updates.mdx) to be enabled.
+
+### Define the group for Managed Updates
+
+If you are using Teleport [Agent managed updates](../../upgrading/agent-managed-updates/agent-managed-updates.mdx), you can configure the update group to control which instances are updated together.
+
+To set the update group, specify the `install.update_group` key in your Discovery Service configuration:
+
+```yaml
+# teleport.yaml
+version: v3
+# ...
+discovery_service:
+  enabled: true
+  {{ matcher }}:
+  - install:
+      update_group: "update-group-1"
+```
+
+### Configure HTTP Proxy during installation
+
+For instances that require a proxy to access the installation files, you can configure HTTP proxy settings in the Discovery Service.
+
+Set the `install.http_proxy_settings` key in your configuration:
+
+```yaml
+# teleport.yaml
+version: v3
+# ...
+discovery_service:
+  enabled: true
+  {{ matcher }}:
+  - install:
+      http_proxy_settings:
+        https_proxy: http://172.31.5.130:3128
+        http_proxy: http://172.31.5.130:3128
+        no_proxy: my-local-domain
+``` \ No newline at end of file diff --git a/docs/pages/includes/client-cluster-compatibility.mdx b/docs/pages/includes/client-cluster-compatibility.mdx new file mode 100644 index 0000000000000..8ed5bd4857ac1 --- /dev/null +++ b/docs/pages/includes/client-cluster-compatibility.mdx @@ -0,0 +1,4 @@ +*For best results, Teleport clients (tsh, tctl, tbot) should be the same major
+version as the cluster they are connecting to. Teleport servers are compatible
+with clients that are on the same major version or one major version older.
+Teleport servers do not support clients that are on a newer major version.
See our [Upgrading](../upgrading/upgrading.mdx) guide for more information.* \ No newline at end of file diff --git a/docs/pages/includes/cloud/install-linux-cloud.mdx b/docs/pages/includes/cloud/install-linux-cloud.mdx deleted file mode 100644 index e5b168bccc12d..0000000000000 --- a/docs/pages/includes/cloud/install-linux-cloud.mdx +++ /dev/null @@ -1,93 +0,0 @@ - - - - Add the Teleport repository to your repository list: - - ```code - $ sudo mkdir -p /etc/apt/keyrings - # Download Teleport's PGP public key - $ sudo curl https://apt.releases.teleport.dev/gpg \ - -o /etc/apt/keyrings/teleport-archive-keyring.asc - # Source variables about OS version - $ source /etc/os-release - # Add the Teleport APT repository for cloud. - $ echo "deb [signed-by=/etc/apt/keyrings/teleport-archive-keyring.asc] \ - https://apt.releases.teleport.dev/${ID?} ${VERSION_CODENAME?} stable/cloud" \ - | sudo tee /etc/apt/sources.list.d/teleport.list > /dev/null - - # Provide your Teleport domain to query the latest compatible Teleport version - $ export TELEPORT_DOMAIN= - $ export TELEPORT_VERSION="$(curl https://$TELEPORT_DOMAIN/v1/webapi/automaticupgrades/channel/default/version | sed 's/v//')" - - # Update the repo and install Teleport and the Teleport updater - $ sudo apt-get update - $ sudo apt-get install "teleport-ent=$TELEPORT_VERSION" teleport-ent-updater - ``` - - - - - ```code - # Source variables about OS version - $ source /etc/os-release - # Add the Teleport YUM repository for cloud. - # First, get the OS major version from $VERSION_ID so this fetches the correct - # package version. 
- $ VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+") - $ sudo yum install -y yum-utils - $ sudo yum-config-manager --add-repo "$(rpm --eval "https://yum.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/cloud/teleport-yum.repo")" - - # Provide your Teleport domain to query the latest compatible Teleport version - $ export TELEPORT_DOMAIN= - $ export TELEPORT_VERSION="$(curl https://$TELEPORT_DOMAIN/v1/webapi/automaticupgrades/channel/default/version | sed 's/v//')" - - # Install Teleport and the Teleport updater - $ sudo yum install "teleport-ent-$TELEPORT_VERSION" teleport-ent-updater - - ``` - - - - - ```code - # Source variables about OS version - $ source /etc/os-release - # Add the Teleport YUM repository for cloud. - # First, get the OS major version from $VERSION_ID so this fetches the correct - # package version. - $ VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+") - # Use the dnf config manager plugin to add the teleport RPM repo - $ sudo dnf config-manager --add-repo "$(rpm --eval "https://yum.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/cloud/teleport-yum.repo")" - - # Provide your Teleport domain to query the latest compatible Teleport version - $ export TELEPORT_DOMAIN= - $ export TELEPORT_VERSION="$(curl https://$TELEPORT_DOMAIN/v1/webapi/automaticupgrades/channel/default/version | sed 's/v//')" - - # Install Teleport and the Teleport updater - $ sudo dnf install "teleport-ent-$TELEPORT_VERSION" teleport-ent-updater - - ``` - - - - - ```code - # Source variables about OS version - $ source /etc/os-release - # Add the Teleport Zypper repository for cloud. - # First, get the OS major version from $VERSION_ID so this fetches the correct - # package version. 
- $ VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+") - # Use Zypper to add the teleport RPM repo - $ sudo zypper addrepo --refresh --repo $(rpm --eval "https://zypper.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/cloud/teleport-zypper.repo") - - # Provide your Teleport domain to query the latest compatible Teleport version - $ export TELEPORT_DOMAIN= - $ export TELEPORT_VERSION="$(curl https://$TELEPORT_DOMAIN/v1/webapi/automaticupgrades/channel/default/version | sed 's/v//')" - - # Install Teleport and the Teleport updater - $ sudo zypper install "teleport-ent-$TELEPORT_VERSION" teleport-ent-updater - ``` - - - diff --git a/docs/pages/includes/commercial-prereqs-tabs.mdx b/docs/pages/includes/commercial-prereqs-tabs.mdx deleted file mode 100644 index f42fa807a28cb..0000000000000 --- a/docs/pages/includes/commercial-prereqs-tabs.mdx +++ /dev/null @@ -1,10 +0,0 @@ -{{ version="(=teleport.version=)" }} - -- A running Teleport Enterprise cluster version {{ version }} or above. If you - want to get started with Teleport, [sign up](https://goteleport.com/signup) - for a free trial. - -- The `tctl` admin tool and `tsh` client tool. - - Visit [Installation](../installation.mdx) for instructions on downloading - `tctl` and `tsh`. diff --git a/docs/pages/includes/compatibility.mdx b/docs/pages/includes/compatibility.mdx index 2c6f8c7ab8ca3..ecc68ce3bc797 100644 --- a/docs/pages/includes/compatibility.mdx +++ b/docs/pages/includes/compatibility.mdx @@ -4,12 +4,15 @@ When running multiple `teleport` binaries within a cluster, the following rules apply: - Servers support clients that are one major version behind, but do not support - clients that are on a newer major version. For example, an 8.x.x Proxy Service - instance is compatible with 7.x.x agents and 7.x.x `tsh`, but we don't - guarantee that a 9.x.x agent will work with an 8.x.x Proxy Service instance. - This also means you must not attempt to upgrade from 6.x.x straight to 8.x.x. 
- You must upgrade to 7.x.x first. + clients that are on a newer major version. For example, a 17.x.x Proxy + Service instance is compatible with 16.x.x agents and 16.x.x `tsh`, but a + 17.x.x agent will not work with a 16.x.x Proxy Service instance. This also + means you must not attempt to upgrade from 16.x.x straight to 18.x.x. You must + upgrade to 17.x.x first. - Proxy Service instances and agents do not support Auth Service instances that are on an older major version, and will fail to connect to older Auth Service - instances by default. For example, an 8.x.x Proxy Service or agent is not - compatible with an 7.x.x Auth Service. + instances by default. For example, an 18.x.x Proxy Service or agent is not + compatible with a 17.x.x Auth Service. +- Auth Service instances should always be the first component of the cluster that + is upgraded, and you must upgrade all Auth Service instances to the target version + before proceeding to upgrade Proxy Service instances, other agents, and client tools (tsh, tctl, tbot, Connect, etc.). diff --git a/docs/pages/includes/config-reference/app-service.yaml b/docs/pages/includes/config-reference/app-service.yaml index 348f6764975b0..56c5383d39121 100644 --- a/docs/pages/includes/config-reference/app-service.yaml +++ b/docs/pages/includes/config-reference/app-service.yaml @@ -5,6 +5,10 @@ app_service: # Application Service is working correctly. The app outputs JWTs so it can # be useful when extending your application. debug_app: true + # Enables the built-in Teleport demo MCP server that shows current user and + # session information. To access it, this MCP server uses the app label + # "teleport.internal/resource-type" with the value "demo". + mcp_demo_server: true # Matchers for dynamic application resources # @@ -103,3 +107,13 @@ app_service: ## Default: roles-and-traits # jwt_claims: roles-and-traits + # Contains MCP server-related configurations. + mcp: + # Command to launch stdio-based MCP servers.
+ command: "docker" + # Args to execute with the command. + args: ["run", "-i", "--rm", "mcp/everything"] + # Name of the host user account under which the command will be + # executed. Required for stdio-based MCP servers. + run_as_host_user: "docker" + diff --git a/docs/pages/includes/config-reference/auth-service.yaml b/docs/pages/includes/config-reference/auth-service.yaml index 4281b00b6ddab..47f84d3179d7c 100644 --- a/docs/pages/includes/config-reference/auth-service.yaml +++ b/docs/pages/includes/config-reference/auth-service.yaml @@ -258,6 +258,9 @@ auth_service: # - 'required' - enables device authentication and device-aware audit. # Additionally, it requires a trusted device for all SSH, Database # and Kubernetes connections. + # - 'required-for-humans' - enables device authentication and device-aware + # audit. Additionally, it requires a trusted device for all SSH, Database + # and Kubernetes connections, for human users only (bots are exempt). mode: optional # always "off" for Teleport Community Edition # Determines the default time to live for user certificates @@ -312,29 +315,66 @@ auth_service: # We recommend to use tools like `pwgen` to generate sufficiently random # tokens of 32+ byte length. tokens: - - "proxy,node:xxxxx" - - "auth:yyyy" - - # Optional setting for configuring session recording. Possible values are: - # "node" : (default) sessions will be recorded on the node - # and periodically cleaned up after they are uploaded - # to the storage service. - # "node-sync" : session recordings will be streamed from - # node -> auth -> storage service without being stored on - # disk at all. - # "proxy" : sessions will be recorded on the proxy and periodically - # cleaned up after they are uploaded to the storage service. - # "proxy-sync : session recordings will be streamed from - # proxy -> auth -> storage service without being stored on - # disk at all. 
- # "off" : session recording is turned off - # - session_recording: "node" - - # This setting determines if a Teleport proxy performs strict host key - # checks. - # Only applicable if session_recording=proxy - proxy_checks_host_keys: yes + - 'proxy,node:xxxxx' + - 'auth:yyyy' + + # Optional configuration for session recording. + session_recording_config: + # Recording mode that should be used for session recordings. Possible + # values are: + # "node" : (default) sessions will be recorded on the node + # and periodically cleaned up after they are uploaded + # to the storage service. + # "node-sync" : session recordings will be streamed from + # node -> auth -> storage service without being stored on + # disk at all. + # "proxy" : sessions will be recorded on the Teleport Proxy Service + # and periodically cleaned up after they are uploaded to + # the storage service. + # "proxy-sync" : session recordings will be streamed from + # proxy -> auth -> storage service without being stored on + # disk at all. + # "off" : session recording is turned off + # + mode: 'node' + # This setting determines if a Teleport Proxy Service instance performs + # strict host key checks. + # Only applicable if mode is set to "proxy" + proxy_checks_host_keys: yes + # Optional configuration for encrypting session recordings. + encryption: + # Determines whether or not session recordings should be encrypted at + # rest. By default, all encryption keys required will be provisioned + # using the key storage backend defined in ca_key_params. + enabled: yes + # Optional configuration allowing for manually managing encryption keys + # instead of relying on automatic key provisioning and management. + manual_key_management: + # Determines whether or not manual key management should be used. + enabled: yes + # The list of key labels that should be used to find active encryption + # keys.
These support encrypting new session recordings and are each a + # pair of key backend type and label value to use during key lookup. + # Possible values for 'type' match the possible keys in ca_key_params, + # which are: + # "pkcs11" : PKCS#11 compliant HSM + # "aws_kms" : AWS KMS + # "gcp_kms" : Google Cloud KMS + # + # Label values are used to identify the key(s) within the key backend. + # For "pkcs11" keys these are expected to map directly to labels within + # the HSM. For 'aws_kms' keys, both ARN or ID values are valid. For + # 'gcp_kms' the full key version name is expected. + active_keys: + - type: pkcs11 + label: "session_recording" + # The list of key labels that should be used to find rotated encryption + # keys. These allow for replaying historical session recordings + # encrypted with keys that are no longer active. Individual list + # elements share the format described above for "active_keys". + rotated_keys: + - type: pkcs11 + label: "rotated_session_recording" # Determines if sessions to cluster resources are forcefully terminated after # no activity from a client (idle client). @@ -373,8 +413,8 @@ auth_service: # unlikely that you'll need to change this. # session_control_timeout: 2m - # Determines the routing strategy used to connect to nodes. Can be - # 'unambiguous_match' (default), or 'most_recent'. + # Determines the routing strategy used to connect to nodes when connecting via + # node name. Can be 'unambiguous_match' (default), or 'most_recent'. routing_strategy: unambiguous_match # License file to start auth server with. 
Note that this setting is ignored diff --git a/docs/pages/includes/config-reference/database-config.yaml b/docs/pages/includes/config-reference/database-config.yaml index be80ab1cdd55c..18f7fe72a4f56 100644 --- a/docs/pages/includes/config-reference/database-config.yaml +++ b/docs/pages/includes/config-reference/database-config.yaml @@ -5,26 +5,35 @@ db_service: # Matchers for database resources created with "tctl create" command or by the # discovery service. resources: + # Resource labels to match. + # + # Use specific label selectors so each Database Service instance only + # matches databases it can actually reach. - labels: - "*": "*" + "env": "staging" # Optional AWS role that the Database Service will assume to access the # databases. aws: assume_role_arn: "arn:aws:iam::123456789012:role/example-role-name" external_id: "example-external-id" - # Matchers for registering AWS-hosted databases. + # Matchers for registering AWS-hosted databases by performing auto-discovery + # on the Database Service. + # + # NOTE: for most deployments, it is recommended to use the Discovery Service + # to register AWS databases instead of Database Service–based discovery. aws: # Database types. Valid options are: # 'rds' - discovers and registers Amazon RDS and Aurora databases. # 'rdsproxy' - discovers and registers Amazon RDS Proxy databases. # 'redshift' - discovers and registers Amazon Redshift databases. # 'redshift-serverless' - discovers and registers Amazon Redshift Serverless databases. - # 'elasticache' - discovers and registers Amazon ElastiCache Redis databases. - # 'memorydb' - discovers and registers Amazon MemoryDB Redis databases. + # 'elasticache' - discovers and registers Amazon ElastiCache Redis and Valkey databases. + # 'elasticache-serverless' - Amazon ElastiCache Serverless Redis or Valkey databases. + # 'memorydb' - discovers and registers Amazon MemoryDB databases. # 'opensearch' - discovers and registers Amazon OpenSearch databases. 
# 'docdb' - discovers and registers Amazon DocumentDB databases. - - types: ["rds", "rdsproxy","redshift", "redshift-serverless", "elasticache", "memorydb", "opensearch"] + - types: ["rds", "rdsproxy", "redshift", "redshift-serverless", "elasticache", "elasticache-serverless", "memorydb", "opensearch", "docdb"] # AWS regions to register databases from. regions: ["us-west-1", "us-east-2"] # Optional AWS role that the Database Service will assume to discover @@ -36,10 +45,17 @@ db_service: # a role in an external AWS account. external_id: "example-external-id" # AWS resource tags to match when registering databases. + # + # Use specific tag selectors so each Database Service instance only matches + # databases it can actually reach. tags: - "*": "*" + "env": "staging" - # Matchers for registering Azure-hosted databases. + # Matchers for registering Azure-hosted databases by performing auto-discovery + # on the Database Service. + # + # NOTE: for most deployments, it is recommended to use the Discovery Service + # to register Azure databases instead of Database Service–based discovery. azure: # Database types. Valid options are: # 'mysql' - discovers and registers Azure MySQL databases. @@ -58,8 +74,11 @@ db_service: # '*' - discovers databases in all resource groups within configured subscription(s) (default). resource_groups: ["group1", "group2"] # Azure resource tags to match when registering databases. + # + # Use specific tag selectors so each Database Service instance only matches + # databases it can actually reach. tags: - "*": "*" + "env": "staging" # Lists statically registered databases proxied by this agent. databases: @@ -116,6 +135,13 @@ db_service: # When this option is set the Database Agent doesn't try to check the MySQL server version. server_version: 8.0.28 + # Oracle-only options. + oracle: + # Randomize host order per connection attempt to spread load. Optional.
+ shuffle_hostnames: true + # Retries per host on network errors only; non-network errors stop (default: 2). Optional. + retry_count: 5 + # Optional admin user configuration for Automatic User Provisioning. admin_user: # Name of the admin user. @@ -180,6 +206,12 @@ db_service: project_id: "xxx-1234" # Cloud SQL instance ID. instance_id: "example" + # AlloyDB-specific configuration. + alloydb: + # Endpoint type. Valid types: "private" (default), "public", "PSC". + endpoint_type: "private" + # Endpoint override. IP address or hostname to be used instead of automatically resolved endpoint. + endpoint_override: "11.22.33.44" # Settings specific to Active Directory authentication e.g. for SQL Server. ad: diff --git a/docs/pages/includes/config-reference/desktop-config.yaml b/docs/pages/includes/config-reference/desktop-config.yaml index 236e43b6b144d..bd2aab8e8e814 100644 --- a/docs/pages/includes/config-reference/desktop-config.yaml +++ b/docs/pages/includes/config-reference/desktop-config.yaml @@ -19,6 +19,13 @@ windows_desktop_service: # For best results, this address should point to a highly-available # endpoint rather than a single domain controller. addr: "$LDAP_SERVER_ADDRESS" + # locate_server gets a list of available LDAP servers from the AD + # domain's SRV records. When enabled, addr is ignored. + locate_server: + enabled: true + # Optional: Site is the logical AD site that locate_server should return. + # Ignored if locate_server is false. + site: "$LDAP_SITE_NAME" # Optional: the server name to use when validating the LDAP server's # certificate. Useful in cases where addr is an IP but the server # presents a cert with some other hostname. @@ -69,8 +76,9 @@ windows_desktop_service: pki_domain: root.example.com # (optional) Configures the address of the Kerberos Key Distribution Center, - # which is used to support RDP Network Level Authentication (NLA). - # If empty, the LDAP address will be used instead. 
+ # which is used to support RDP Network Level Authentication (NLA). When set, + # this field takes priority over locate_server. If empty and locate_server + # is disabled, the LDAP address will be used instead. # # example: kdc.example.com:88. # The port is optional and defaults to port 88 if unspecified. @@ -112,6 +120,10 @@ windows_desktop_service: # (optional) interval at which to run desktop discovery discovery_interval: 10m + # (optional) interval at which to publish CRLs + # Defaults to 5m if unspecified + publish_crl_interval: 10m + # (optional) configure a set of label selectors for dynamic registration. # If specified, this service will monitor the cluster for dynamic_windows_desktop # and automatically proxy connections for desktops with matching labels. diff --git a/docs/pages/includes/config-reference/identity-security.yaml b/docs/pages/includes/config-reference/identity-security.yaml new file mode 100644 index 0000000000000..d71b67dd21f89 --- /dev/null +++ b/docs/pages/includes/config-reference/identity-security.yaml @@ -0,0 +1,166 @@ +# IP and the port for Teleport Identity Security to bind to. +address: 0.0.0.0:8080 + +# Registration CA certificates for the Identity Security service. +# These are used to verify the authenticity of the Teleport clusters +# when they register with the Identity Security service. +# The Identity Security service uses these certificates to ensure that +# only trusted Teleport clusters can register and communicate with it. +# You can specify multiple CA certificates if you have multiple Teleport clusters +# that need to register with the Identity Security service. +# The certificates should be the Teleport Host CA certificate and be in the +# PEM format. +registration_cas: + - /var/is/teleport-host-ca.pem + - /var/is/teleport-host-ca2.pem + +# TLS certificate for the HTTPS/gRPC connections. Configuring these properly is +# critical for security. 
+tls: + cert: /var/lib/teleport/identity_security_cert.pem + key: /var/lib/teleport/identity_security_key.pem + +# Teleport Identity Security service storage configuration. +backend: + # postgres defines connection parameters for the PostgreSQL database + # used by the Identity Security service to store its data. + # If you use a PostgreSQL cluster, you must ensure to define + # the connection string to connect to the primary node (the writer). + postgres: + # Address and port for the PostgreSQL database to connect to. + # This is used to store the Identity Security service data. + # The database should be running and accessible from the Identity Security service. + # If you do not need a PostgreSQL database, you can disable this feature. + connection: postgres://teleport:teleport_password@localhost:5432/identity_security?sslmode=disable + + # Maximum number of connections to the PostgreSQL database. + max_conns: 20 + + # Minimum number of connections to the PostgreSQL database. + min_conns: 10 + + # Maximum time a connection can be open before it is closed. + max_conn_lifetime: 24h + + # Maximum time a connection can be idle before it is closed. + max_conn_idle_time: 10m + + # Health check period for the PostgreSQL database. + health_check_period: 10s + + # Maximum connection lifetime jitter in seconds. + max_conn_lifetime_jitter: 10s + + # If you want to use IAM authentication instead of password authentication, + # you can uncomment the following section and provide the AWS region. + # + # iam: + # aws_region: us-west-2 + + # postgres_read_replica defines connection parameters for the PostgreSQL read replica + # used by the Identity Security service to store its data. + # postgres_read_replica: + # Address and port for the PostgreSQL database to connect to. + # This is used to store the Identity Security service data. + # The database should be running and accessible from the Identity Security service. 
+ # If you do not need a PostgreSQL database, you can disable this feature. + # connection: postgres://teleport:teleport_password@localhost:5432/identity_security?sslmode=disable + + # Maximum number of connections to the PostgreSQL database. + # max_conns: 20 + + # Minimum number of connections to the PostgreSQL database. + # min_conns: 10 + + # Maximum time a connection can be open before it is closed. + # max_conn_lifetime: 24h + + # Maximum time a connection can be idle before it is closed. + # max_conn_idle_time: 10m + + # Health check period for the PostgreSQL database. + # health_check_period: 10s + + # Maximum connection lifetime jitter in seconds. + # max_conn_lifetime_jitter: 10s + + # If you want to use IAM authentication instead of password authentication, + # you can uncomment the following section and provide the AWS region. + # iam: + # aws_region: us-west-2 + +# Teleport Identity Security Identity Activity Center configuration. +identity_activity_center: + # region defines the AWS region where the Identity Activity Center is deployed. + region: eu-central-1 + # The AWS Athena database and table used by the Identity Activity Center. + # This is used to query the Identity Activity Center data. + database: identity_activity_center + # The AWS Athena table used by the Identity Activity Center. + # This is used to query the Identity Activity Center data. + # The table should be created in the Athena database specified above. + table: identity_activity_center_table + # The S3 long-term bucket used by the Identity Activity Center to store its data. + # This must be the same bucket used by the Athena database and table. + s3: s3://long-term-bucket/data/ + # Transient S3 bucket location used by the Identity Activity Center to store temporary data + # such as query results. + s3_results: s3://transient-bucket/results/ + # Transient S3 bucket location used by the Identity Activity Center to store large files. 
+ s3_large_files: s3://transient-bucket/large_files + # Workgroup name used by the Identity Activity Center to execute Athena queries. + workgroup: identity-activity-center-workgroup + # AWS SQS queue URL used by the Identity Activity Center to send notifications + # between the Teleport Identity Security replicas. + sqs_queue_url: https://sqs.eu-central-1.amazonaws.com/123456789/example-queue + # MaxMind GeoIP database path used by the Identity Activity Center + # to enrich the Identity Activity Center data with geolocation information. + # This is optional. + maxmind_geoip_city_db_path: /path/to/geoIp-city.mmdb + +# Teleport Identity Security metrics endpoint configuration. +metrics: + # Enable Teleport Identity Security Metrics. This is used to collect + # and expose metrics about the Identity Security service such as + # the number of requests, errors, and latency. + # This is useful for monitoring and alerting purposes. + # If you do not need these metrics, you can disable this feature. + enabled: true + + # Address and port for Teleport Identity Security Metrics to bind to. + address: 0.0.0.0:3000 + + # TLS configuration for the metrics endpoint. + # If you do not need TLS for the metrics endpoint, you can disable it. + # tls: + # cert: /var/lib/is/identity_security_metrics_cert.pem + # key: /var/lib/is/identity_security_metrics_key.pem + + # Teleport Identity Security profiling endpoint configuration. + # This is used to collect profiling data about the Identity Security service. + pprof: false + +# Teleport Identity Security tracing configuration. +# This is used to collect distributed tracing data about the Identity Security service. +# If you do not need tracing, you can disable this feature. +tracing: + # Enable Teleport Identity Security Tracing. This is used to collect + # and export tracing data about the Identity Security service. + enabled: false + # Exporter URL for the tracing data. 
+ # This should be the URL of the OpenTelemetry Collector or any other + # compatible tracing backend. + # The URL should include the protocol (e.g., "otlp://") and the address + # of the tracing backend (e.g., "localhost:4317"). + exporter_url: "otlp://localhost:4317" + # Sampling rate for the tracing data. This controls how many traces are sampled + # per million requests. + # A value of 1000 means that 1000 traces will be sampled per million requests + # i.e. 0.1%. + sampling_rate_per_million: 1000 + +# Logging configuration. +log: + # Possible severity values are DEBUG, INFO (default), WARN, + # and ERROR. + level: debug diff --git a/docs/pages/includes/config-reference/instance-wide.yaml b/docs/pages/includes/config-reference/instance-wide.yaml index ae423f2c1a2c9..51a5a36835dd2 100644 --- a/docs/pages/includes/config-reference/instance-wide.yaml +++ b/docs/pages/includes/config-reference/instance-wide.yaml @@ -92,16 +92,38 @@ teleport: # point to a Load Balancer. proxy_server: teleport-proxy.example.com:443 + # Relay tunnel address and port to connect to, if set. Used by some services + # to open additional tunnels to a Relay group if Teleport is configured to + # connect to a Proxy Server. If a Relay group consists of more than one + # Relay Service instance, the address should point to a Load Balancer. + # Used in Teleport v18.3.0 and later for the SSH service. + relay_server: teleport-relay.example.com:3042 + # cache: # # The cache is enabled by default, it can be disabled with this flag # enabled: true + # The duration (in string form) of the delay between receiving a termination + # signal and the beginning of the shutdown procedures. It can be used to + # give time to load balancers to stop routing connections to the Teleport + # instance while the instance is still capable of handling them. If unset or + # negative, no delay is applied.
+#shutdown_delay: "0s" + # Teleport can limit the number of connections coming from each client # IP address to avoid abuse. Note that these limits are enforced separately # for each service (SSH, Kubernetes, etc.) connection_limits: max_connections: 1000 + # Auth Service connection configuration. + # These settings can be tweaked to control how aggressively the Proxy or Agent instances will retry connecting. In addition, + # each instance will apply jitter. + # auth_connection_config: + # upper_limit_between_retries: "90s" # Cannot be lower than 10s + # initial_connection_delay: "9s" # When unset, defaults to upper_limit_between_retries / 10 + # backoff_step_duration: "18s" # When unset, defaults to upper_limit_between_retries / 5 + # Logging configuration. Possible output values to disk via # '/var/lib/teleport/teleport.log', # 'stdout', 'stderr' and 'syslog'. Possible severity values are DEBUG, INFO (default), WARN, diff --git a/docs/pages/includes/config-reference/relay-service.yaml b/docs/pages/includes/config-reference/relay-service.yaml new file mode 100644 index 0000000000000..72a1cd7861d8d --- /dev/null +++ b/docs/pages/includes/config-reference/relay-service.yaml @@ -0,0 +1,49 @@ +relay_service: + # Enables the Relay Service, defaults to false. + enabled: true + + # The name of the Relay group. All Relay Service instances that are accessible + # behind the same Load Balancer must use the same name, and other Relay + # Service instances in the same Teleport cluster must use a different name. + relay_group: groupname + + # The number of distinct tunnels that other Teleport agents will open when + # using this Relay group. The target connection count should be no larger than + # the number of distinct Relay Service instances behind the Load Balancer. + target_connection_count: 2 + + # A list of hostnames or IP addresses that agents and clients can use to + # connect to the Relay group. Most setups will only need one.
+ public_hostnames: + - relay-group.example.com + + # The listen address and port for the transport server of the Relay Service, + # used by clients to access resources through the Relay. + transport_listen_addr: 0.0.0.0:3040 + + # Whether or not the transport server should expect a PROXY protocol v2 header + # for incoming connections. If set, anything with the ability to connect + # directly to the transport listener will be able to spoof the source of + # network connections. Defaults to false. + #transport_proxy_protocol: true + + # The listen address and port for the peer server of the Relay Service, used + # by other Relay Service instances of the same Relay group to forward + # connections between instances. + peer_listen_addr: 0.0.0.0:3041 + + # The address and port that other Relay Service instances of the same group should use + # to connect to the peer server. Defaults to the first available private IP + # address found in the system's network interfaces. + #peer_public_addr: 1.2.3.4:3041 + + # The listen address and port for the tunnel server of the Relay Service, used + # by agents to discover the Relay group configuration and open tunnels to the + # Relay. + tunnel_listen_addr: 0.0.0.0:3042 + + # Whether or not the tunnel server should expect a PROXY protocol v2 header + # for incoming connections. If set, anything with the ability to connect + # directly to the tunnel listener will be able to spoof the source of network + # connections. Defaults to false. + #tunnel_proxy_protocol: true diff --git a/docs/pages/includes/config-reference/ssh-service.yaml b/docs/pages/includes/config-reference/ssh-service.yaml index d7965552b563c..d8b38d2a9b2df 100644 --- a/docs/pages/includes/config-reference/ssh-service.yaml +++ b/docs/pages/includes/config-reference/ssh-service.yaml @@ -1,6 +1,6 @@ ssh_service: - # Turns 'ssh' role on. Default is true - enabled: true + # Turns 'ssh' role on. Default is true + enabled: true # IP and the port for SSH service to bind to. 
listen_addr: 0.0.0.0:3022 diff --git a/docs/pages/includes/database-access/alternative-methods-join.mdx b/docs/pages/includes/database-access/alternative-methods-join.mdx index da316944bd776..2de0dd2aa4fcd 100644 --- a/docs/pages/includes/database-access/alternative-methods-join.mdx +++ b/docs/pages/includes/database-access/alternative-methods-join.mdx @@ -5,7 +5,7 @@ For users with a lot of infrastructure in AWS, or who might create or recreate many instances, consider alternative methods for joining new EC2 instances running Teleport: -- [Configure Teleport to Automatically Enroll EC2 instances](../../enroll-resources/auto-discovery/servers/ec2-discovery.mdx) +- [Configure Teleport to Automatically Enroll EC2 instances](../../enroll-resources/auto-discovery/servers/ec2-discovery/ec2-discovery.mdx) - [Joining Teleport Services via AWS IAM Role](../../enroll-resources/agents/aws-iam.mdx) - [Joining Teleport Services via AWS EC2 Identity Document](../../enroll-resources/agents/aws-ec2.mdx) diff --git a/docs/pages/includes/database-access/auto-user-provisioning/connect.mdx b/docs/pages/includes/database-access/auto-user-provisioning/connect.mdx index 891efcc664187..2d55518cc26ec 100644 --- a/docs/pages/includes/database-access/auto-user-provisioning/connect.mdx +++ b/docs/pages/includes/database-access/auto-user-provisioning/connect.mdx @@ -2,8 +2,6 @@ Now, log into your Teleport cluster and connect to the database: -(!docs/pages/includes/database-access/pg-access-webui.mdx!) 
- ```code $ tsh login --proxy=teleport.example.com $ tsh db connect --db-name example diff --git a/docs/pages/includes/database-access/auto-user-provisioning/mysql-default-database-note.mdx b/docs/pages/includes/database-access/auto-user-provisioning/mysql-default-database-note.mdx new file mode 100644 index 0000000000000..9c01c83a737bc --- /dev/null +++ b/docs/pages/includes/database-access/auto-user-provisioning/mysql-default-database-note.mdx @@ -0,0 +1,3 @@ +Note that Teleport uses `teleport` as the name of the default database but the +name is configurable in the Teleport database definition. Replace the database +name in the last two lines if you wish to use another database name. \ No newline at end of file diff --git a/docs/pages/includes/database-access/aws-db-iam-policy-picker.mdx b/docs/pages/includes/database-access/aws-db-iam-policy-picker.mdx index b160db76e0c36..420f38fc1e532 100644 --- a/docs/pages/includes/database-access/aws-db-iam-policy-picker.mdx +++ b/docs/pages/includes/database-access/aws-db-iam-policy-picker.mdx @@ -11,10 +11,15 @@ (!docs/pages/includes/database-access/reference/aws-iam/dynamodb/access-policy.mdx dbUserRole="DatabaseUserRole" !) - + (!docs/pages/includes/database-access/reference/aws-iam/elasticache/access-policy.mdx!) + + + +(!docs/pages/includes/database-access/reference/aws-iam/elasticache-serverless/access-policy.mdx!) + diff --git a/docs/pages/includes/database-access/aws-redis-how-it-works.mdx b/docs/pages/includes/database-access/aws-redis-how-it-works.mdx new file mode 100644 index 0000000000000..af10499808470 --- /dev/null +++ b/docs/pages/includes/database-access/aws-redis-how-it-works.mdx @@ -0,0 +1,12 @@ +The Teleport Database Service proxies traffic from users to {{ dbType }} +for Redis and Valkey. 
Authentication between the Database Service and the
+{{ dbType }} database can take one of two forms:
+
+- **IAM authentication (preferred):** The Teleport Database Service connects to
+  the database using a short-lived AWS IAM authentication token. AWS IAM
+  authentication is available for {{ dbType }} with engine version 7.0 or above.
+- **Managing users:** The Teleport Database Service manages users in an access
+  control list, rotates their passwords every 15 minutes, and saves these
+  passwords in AWS Secrets Manager. The Database Service automatically sends an
+  `AUTH` command with the saved password when connecting the client to the
+  {{ dbType }} server.
diff --git a/docs/pages/includes/database-access/aws-redis-no-auth.mdx b/docs/pages/includes/database-access/aws-redis-no-auth.mdx
new file mode 100644
index 0000000000000..9695628d96a4f
--- /dev/null
+++ b/docs/pages/includes/database-access/aws-redis-no-auth.mdx
@@ -0,0 +1,7 @@
+If you choose not to use the above options, Teleport will not automatically
+authenticate with the database.
+
+You can either set up a "no password" configuration for your {{ dbType }} user,
+or manually enter an `AUTH` command with the password you have configured after
+a successful client connection. However, it is strongly advised to use one of
+the first two options or a strong password for better security.
diff --git a/docs/pages/includes/database-access/aws-redis-tsh-db-user-auth.mdx b/docs/pages/includes/database-access/aws-redis-tsh-db-user-auth.mdx
new file mode 100644
index 0000000000000..e4e22930168db
--- /dev/null
+++ b/docs/pages/includes/database-access/aws-redis-tsh-db-user-auth.mdx
@@ -0,0 +1,34 @@
+If the `--db-user` flag is not provided, Teleport logs in as the `default` user.
+
+Now, depending on the authentication configuration, you may need to send an
+`AUTH` command to authenticate with the Redis server:
+
+
+
+  The Database Service automatically authenticates Teleport-managed and
+  IAM-enabled users with the database. No `AUTH` command is required after
+  successful connection.
+
+  If you are connecting as a user that is not managed by Teleport and is not
+  IAM-enabled, the connection normally starts as the `default` user.
+  Now you can authenticate the database user with its password:
+
+  ```
+  AUTH my-database-user <password>
+  ```
+
+
+
+  Now you can authenticate with the shared AUTH token:
+
+  ```
+  AUTH <auth-token>
+  ```
+
+
+
+  For Redis or Valkey deployments without the ACL system or legacy
+  `requirepass` directive enabled, no `AUTH` command is required.
+
+
+
diff --git a/docs/pages/includes/database-access/aws-troubleshooting-max-policy-size.mdx b/docs/pages/includes/database-access/aws-troubleshooting-max-policy-size.mdx
index 7821678f87026..d2b2deabdc0a8 100644
--- a/docs/pages/includes/database-access/aws-troubleshooting-max-policy-size.mdx
+++ b/docs/pages/includes/database-access/aws-troubleshooting-max-policy-size.mdx
@@ -79,7 +79,7 @@ You can reduce the policy size by separating them into multiple IAM roles. Use
   Teleport generates certain labels derived from the cloud resource attributes
   during discovery. See [Auto-Discovery
-  labels](../../reference/agent-services/database-access-reference/labels.mdx)
+  labels](../../enroll-resources/database-access/reference/labels.mdx#auto-discovery)
   for more details.
@@ -207,7 +207,7 @@ the IAM identity of the host running the Database Service.
 
 The `assume_role_arn` is not limited to the same AWS account so you can also
 use this feature for [AWS Cross-Account
-Access](../../enroll-resources/database-access/enroll-aws-databases/aws-cross-account.mdx).
+Access](../../enroll-resources/database-access/enrollment/aws/aws-cross-account.mdx).
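For orientation, the `assume_role_arn` setting discussed in the hunk above is set on a database entry in the Database Service configuration. The following sketch is illustrative only; the database name, endpoint, and role ARN are placeholders, not values from this changeset:

```yaml
db_service:
  enabled: true
  databases:
    - name: example-rds            # placeholder database name
      protocol: postgres
      uri: example-db.abc123.us-east-1.rds.amazonaws.com:5432  # placeholder endpoint
      aws:
        # IAM role the Database Service assumes before accessing the database.
        # The ARN may reference a different AWS account for cross-account access.
        assume_role_arn: "arn:aws:iam::123456789012:role/example-db-access"
```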
diff --git a/docs/pages/includes/database-access/aws-troubleshooting.mdx b/docs/pages/includes/database-access/aws-troubleshooting.mdx index 0782c1f794494..a27abe97dd9aa 100644 --- a/docs/pages/includes/database-access/aws-troubleshooting.mdx +++ b/docs/pages/includes/database-access/aws-troubleshooting.mdx @@ -12,14 +12,7 @@ AWS provides instructions to rotate your [SSL/TLS certificate](https://docs.aws. ### Timeout errors -The Teleport Database Service needs connectivity to your database endpoints. That may require -enabling inbound traffic on the database from the Database Service on the same VPC or routing rules from another VPC. Using the `nc` -program you can verify connections to databases: - -```code -$ nc -zv postgres-instance-1.sadas.us-east-1.rds.amazonaws.com 5432 -# Connection to postgres-instance-1.sadas.us-east-1.rds.amazonaws.com (172.31.24.172) 5432 port [tcp/postgresql] succeeded! -``` +(!docs/pages/includes/database-access/connection-timeout-troubleshooting.mdx!) ### Not authorized to perform `sts:AssumeRole` diff --git a/docs/pages/includes/database-access/cloudsql_service_credentials.mdx b/docs/pages/includes/database-access/cloudsql_service_credentials.mdx index 349c776b49541..90b5599386f43 100644 --- a/docs/pages/includes/database-access/cloudsql_service_credentials.mdx +++ b/docs/pages/includes/database-access/cloudsql_service_credentials.mdx @@ -6,6 +6,8 @@ If the Teleport Database Service is hosted on a GCE instance, you can For non-GCE deployments of Teleport, we recommend using [workload identity](https://cloud.google.com/iam/docs/workload-identity-federation). +
+ Using service account keys (insecure) Alternatively, go to that service account's Keys tab and create a new key: ![Service Account Keys](../../../img/database-access/guides/cloudsql/service-account-keys@2x.png) @@ -30,3 +32,5 @@ See [authentication](https://cloud.google.com/docs/authentication#service-accoun in the Google Cloud documentation for more information about service account authentication methods. + +
\ No newline at end of file
diff --git a/docs/pages/includes/database-access/connection-timeout-troubleshooting.mdx b/docs/pages/includes/database-access/connection-timeout-troubleshooting.mdx
new file mode 100644
index 0000000000000..ba04e92dbdb5b
--- /dev/null
+++ b/docs/pages/includes/database-access/connection-timeout-troubleshooting.mdx
@@ -0,0 +1,59 @@
+The Teleport Database Service requires connectivity to your database endpoints.
+
+Check that firewall rules (e.g., AWS security groups) allow connectivity between the Teleport Database Service and the database endpoint.
+- Inbound firewall rules for the database must allow connections from the Teleport Database Service.
+- Outbound firewall rules for the Teleport Database Service must allow connections to the database endpoint.
+
+
+On the same host as the Teleport Database Service, try running `nc` to check if it can reach the database port.
+
+- Database host: <Var name="database-host" />
+- Database port: <Var name="database-port" />
+
+```code
+$ nc -zv <Var name="database-host" /> <Var name="database-port" />
+# Connection to postgres-instance-1.sadas.us-east-1.rds.amazonaws.com (172.31.24.172) 5432 port [tcp/postgresql] succeeded!
+```
+
+
+Debugging connection timeout errors in AWS + +For deployments in AWS, it may be helpful to use [AWS Reachability Analyzer](https://docs.aws.amazon.com/vpc/latest/reachability/what-is-reachability-analyzer.html) to analyze the network path between the Teleport Database Service and the database. + +1. Identify the Elastic Network Interface (ENI) associated with the Teleport Database Service host. This can be found in the [EC2 console](https://console.aws.amazon.com/ec2/home?NIC). +2. Identify the private IP address of the database. +3. Create and analyze a network path: + - Set the path source to the ENI associated with the Teleport Database Service host. + - Set the path destination to the database IP. +4. Check the analysis results to identify reachability issues. + +
+
+If your database is registered dynamically or via auto-discovery, repeat the
+above connectivity test for **every** Teleport Database Service instance that
+proxies this database. To list all Teleport Database Service instances
+associated with a given database, run the `tctl get db_server/<database-name>`
+command. For example:
+```
+$ tctl get db_server/postgres-instance-1 --format json | jq '.[] | {hostname: .spec.hostname, host_id: .spec.host_id, version: .spec.version, target_health: .status.target_health}'
+{
+  "hostname": "ip-10-0-0-111.ca-central-1.compute.internal",
+  "host_id": "e5e670ac-a7b8-44ef-b373-6296d87f50e8",
+  "version": "18.3.0",
+  "target_health": {
+    "status": "unhealthy",
+    ...
+  }
+}
+{
+  "hostname": "ip-10-0-0-222.ca-central-1.compute.internal",
+  ...
+}
+```
+
+If any of the Database Service instances listed here **should not** proxy the
+database (for example, a Database Service instance in a different VPC or AWS
+region without connectivity), locate and update their configurations so they
+only receive or discover databases they can reach. In most cases, you can
+achieve this by refining your tag filters, such as adding a `vpc-id` label.
\ No newline at end of file diff --git a/docs/pages/includes/database-access/db-access-webui-ad.mdx b/docs/pages/includes/database-access/db-access-webui-ad.mdx new file mode 100644 index 0000000000000..159d52cd65e94 --- /dev/null +++ b/docs/pages/includes/database-access/db-access-webui-ad.mdx @@ -0,0 +1,4 @@ +{{ dbType="MySQL" }} + + You can also [access your {{ dbType }} databases using the Web UI.](../../connect-your-client/teleport-clients/web-ui.mdx#starting-a-database-session) + diff --git a/docs/pages/includes/database-access/db-audit-events.mdx b/docs/pages/includes/database-access/db-audit-events.mdx index 08ebe62f09243..8c45c3e86e2c6 100644 --- a/docs/pages/includes/database-access/db-audit-events.mdx +++ b/docs/pages/includes/database-access/db-audit-events.mdx @@ -5,35 +5,52 @@ with the `tsh play` command. Database session ID will be in a UUID format (ex: `307b49d6-56c7-4d20-8cf0-5bc5348a7101`) See the audit log to get a database session ID with a key of `sid`. -Example: +PostgreSQL database recordings are available in interactive format: + +```code +$ tsh play 307b49d6-56c7-4d20-8cf0-5bc5348a7101 +Session started to database "postgres-database" at Mon Jul 20 20:00 UTC + +postgres=> SELECT * FROM products; +SUCCESS +(10 rows affected) + +postgres=> INSERT INTO products (name, price) VALUES ('Phone', 150.00); +ERROR: permission denied for table products (SQLSTATE 42501) + +Session ended at Mon Jul 20 20:30 UTC +``` + +All database protocols recordings are supported in JSON format (`--format json`): ```code $ tsh play --format json 307b49d6-56c7-4d20-8cf0-5bc5348a7101 ``` ```json - { - "cluster_name": "teleport.example.com", - "code": "TDB02I", - "db_name": "example", - "db_origin": "dynamic", - "db_protocol": "postgres", - "db_query": "select * from sample;", - "db_roles": [ - "access" - ], - "db_service": "example", - "db_type": "rds", - "db_uri": "databases-1.us-east-1.rds.amazonaws.com:5432", - "db_user": "alice", - "ei": 2, - "event": 
"db.session.query", - "sid": "307b49d6-56c7-4d20-8cf0-5bc5348a7101", - "success": true, - "time": "2023-10-06T10:58:32.88Z", - "uid": "a649d925-9dac-44cc-bd04-4387c295580f", - "user": "alice" - } +{ + "cluster_name": "teleport.example.com", + "code": "TDB02I", + "db_name": "example", + "db_origin": "dynamic", + "db_protocol": "postgres", + "db_query": "select * from sample;", + "db_roles": [ + "access" + ], + "db_service": "example", + "db_type": "rds", + "db_uri": "databases-1.us-east-1.rds.amazonaws.com:5432", + "db_user": "alice", + "ei": 2, + "event": "db.session.query", + "sid": "307b49d6-56c7-4d20-8cf0-5bc5348a7101", + "success": true, + "time": "2023-10-06T10:58:32.88Z", + "uid": "a649d925-9dac-44cc-bd04-4387c295580f", + "user": "alice" +} ``` -The audit log is viewable under **Audit** in the left-hand pane via the Web UI for users with permission to the `event` resources. Database sessions do not appear -in the session recordings page. +The audit log is viewable under **Audit** in the left-hand pane via the Web UI +for users with permission to the `event` resources. Database sessions are listed +on the session recordings page, but only PostgreSQL sessions are playable. diff --git a/docs/pages/includes/database-access/guides-next-steps.mdx b/docs/pages/includes/database-access/guides-next-steps.mdx index 15d9e76bea8f9..2a7dc6dccaf1b 100644 --- a/docs/pages/includes/database-access/guides-next-steps.mdx +++ b/docs/pages/includes/database-access/guides-next-steps.mdx @@ -5,7 +5,7 @@ - View the [High Availability (HA)](../../enroll-resources/database-access/guides/ha.mdx) guide. {/* lint ignore list-item-spacing remark-lint */} -- Take a look at the YAML configuration [reference](../../reference/agent-services/database-access-reference/configuration.mdx). +- Take a look at the YAML configuration [reference](../../enroll-resources/database-access/reference/configuration.mdx). 
{/* lint ignore list-item-spacing remark-lint */} -- See the full CLI [reference](../../reference/agent-services/database-access-reference/cli.mdx). +- See the full CLI [reference](../../enroll-resources/database-access/reference/cli.mdx). diff --git a/docs/pages/includes/database-access/oracle-multihost.mdx b/docs/pages/includes/database-access/oracle-multihost.mdx new file mode 100644 index 0000000000000..b67751decf14d --- /dev/null +++ b/docs/pages/includes/database-access/oracle-multihost.mdx @@ -0,0 +1,27 @@ +In some deployments the same logical database is reachable via multiple hostnames with different characteristics, for example: + +* replicated databases +* hostnames that traverse different network paths + +If this applies to your setup, list all hosts in order of preference to improve connection resiliency. + +If a TCP dial error occurs for a host, the next host in the list is tried automatically. Non-network errors (e.g., certificate or authentication failures) are not retried and do not advance to the next host. + +By default, hosts are attempted in the listed order. Retries cycle through the list and wrap to the start as needed (e.g., `host1 → host2 → host3 → host1 → ...`). To randomize the sequence per connection attempt, set `shuffle_hostnames`; the same cyclic pattern then applies to that randomized order. + +`retry_count` controls the number of retries per host after the initial attempt on a network error. The default is `2`, so there are 3 total attempts per host (1 initial + 2 retries) before moving to the next host in sequence. + +This setup supports failover and basic load-balancing for new connections: enabling `shuffle_hostnames` spreads initial connection attempts across hosts (load-balancing), while retries automatically move to the next host if the current one is unreachable (failover). 
+ +```yaml +- name: oracle + protocol: oracle + uri: host1:2484,host2:2484,host3:2484 # Multiple hosts; dials in sequence and wraps (host1 → host2 → host3 → host1 ...). Dialing sequence can be randomized with `shuffle_hostnames`. + static_labels: + env: dev + oracle: + # Randomize host order per connection attempt to spread load. Optional. + shuffle_hostnames: true + # Retries per host on network errors only; non-network errors stop (default: 2). Optional. + retry_count: 5 +``` diff --git a/docs/pages/includes/database-access/oracle-troubleshooting.mdx b/docs/pages/includes/database-access/oracle-troubleshooting.mdx new file mode 100644 index 0000000000000..1593a64226a5b --- /dev/null +++ b/docs/pages/includes/database-access/oracle-troubleshooting.mdx @@ -0,0 +1,194 @@ +### Connection hangs or is refused + +A common issue when connecting to an Oracle database is a connection timeout or refusal. This typically indicates a networking problem where the Teleport Database Service cannot reach the Oracle database endpoint. +Verify that network routing and access controls, such as firewalls and VPC security groups, allow traffic to flow from the Database Service host to the database endpoint. + +You can validate connectivity using a native Oracle client, which helps confirm whether the issue is with Teleport or the underlying network configuration. For example, using Oracle SQLcl: + +```bash +# Example: Oracle SQLcl +sql -L myuser/mypassword@oracle-instance.example.com:2484 +``` + +Network connectivity issues are often detected by automated [health checks](../../enroll-resources/database-access/guides/health-checks.mdx). + +To check the health status of all registered databases: + +```bash +# All databases +tctl db ls --format=json | jq -r '.[] | [.metadata.name, .status.target_health]' +``` + +An unhealthy database will have output similar to the following: + +```json +... 
+ "oracle", + { + "address": "11.22.33.44:2484", + "protocol": "TCP", + "status": "unhealthy", + "transition_timestamp": "2025-09-25T09:47:39.435973Z", + "transition_reason": "threshold_reached", + "transition_error": "dial tcp 11.22.33.44:2484: i/o timeout", + "message": "1 health check failed" + } +... +``` + +### TLS negotiation fails + +Properly configuring TLS on an Oracle database can be challenging. Different underlying issues can result in the same error message, such as the following from Teleport: + +``` +Original Error: *tls.permanentError remote error: tls: handshake failure +``` + +Or you might see the following in the Oracle logs: + +``` +ORA-00609: could not attach to incoming connection +ORA-28860: Fatal SSL error +``` + +To identify the root cause, follow the debugging steps in the sections below. The output of the following `openssl` command can help diagnose many common TLS issues. Capture the output and use it as you follow the debugging steps. + +``` +> openssl s_client -connect oracle.example.com:2484 -showcerts +``` + +#### Wrong server certificate + +Teleport rejects connections to databases with untrusted server certificates. If you are using Teleport to issue certificates, ensure that the server certificate was issued by the Teleport Database CA. An invalid server certificate will prevent Teleport from establishing a secure connection. + +You can view the Teleport Database CA certificate with the following command: + +``` +tctl auth export --type=db | openssl x509 -issuer -noout +... +issuer=O=teleport.example.com, CN=teleport.example.com, serialNumber=200129862304303044762346177566738813560 +``` + +Compare the `issuer` in the server certificate with the `issuer` of the Teleport Database CA certificate. The `openssl s_client` command from the previous section will show you the server certificate: + + +``` +# openssl s_client output: +... 
+Server certificate +subject=CN=oracle.example.com +issuer=O=teleport.example.com, CN=teleport.example.com, serialNumber=200129862304303044762346177566738813560 +... +``` + +You can also inspect the Oracle wallet directly using the `orapki` utility to verify the server certificate. + +```bash +# Prompt for wallet password +orapki wallet display -complete -wallet /path/to/wallet +``` + +The "User Certificates" section of the output should contain the server's certificate. Its `Issuer` should match the `Subject` of the Teleport Database CA. + +``` +User Certificates: +Subject: CN=oracle.example.com +Issuer: SERIALNUMBER=200129862304303044762346177566738813560,CN=teleport.example.com,O=teleport.example.com +Serial Number: ... +``` + +#### Wrong client certificate + +If the Oracle server rejects client certificates presented by the Teleport Database Service, you should verify that the Oracle database trusts the Teleport Database User CA. + +You can view the Teleport Database User CA with this command: + +``` +tctl auth export --type=db-client | openssl x509 -issuer -noout +issuer=O=teleport.example.com, CN=teleport.example.com, serialNumber=183359545647055551607366887578713393931 +``` + +Compare the Teleport Database User CA with the list of CAs trusted by the Oracle database. The `openssl s_client` command from earlier will show the list of CAs the Oracle database trusts: + +``` +# openssl s_client output: +... +--- +Acceptable client certificate CA names +O=teleport.example.com, CN=teleport.example.com, serialNumber=183359545647055551607366887578713393931 +``` + +Ensure that the Teleport Database User CA certificate has been added to the correct wallet and that the Oracle server configuration references this wallet. + +You can also inspect the Oracle wallet directly using the `orapki` utility to verify that the Teleport Database User CA is trusted. 
+ +```bash +# Prompt for wallet password +orapki wallet display -complete -wallet /path/to/wallet +``` + +The "Trusted Certificates" section of the output should contain the Teleport Database User CA. Its `Issuer` should match the `issuer` of the Teleport Database User CA. + +``` +Trusted Certificates: +Subject: SERIALNUMBER=183359545647055551607366887578713393931,CN=teleport.example.com,O=teleport.example.com +Issuer: SERIALNUMBER=183359545647055551607366887578713393931,CN=teleport.example.com,O=teleport.example.com +Serial Number: ... +``` + +#### Wrong TLS version + +Teleport rejects connections that use TLS 1.0 or 1.1 due to known weaknesses. Ensure that the `SSL_VERSION` parameter in your Oracle configuration is set to `1.2` or higher to enable TLS 1.2 or a newer version. + +#### No common cipher suites + +Ensure that the `SQLNET.CIPHER_SUITE` parameter in your Oracle configuration contains modern TLS cipher suites that match the configured TLS version. +The following cipher suites are secure and widely supported across different Oracle versions. + +For TLS 1.2: +- `TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256` +- `TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384` + +For TLS 1.3: +- `TLS_AES_128_GCM_SHA256` +- `TLS_AES_256_GCM_SHA384` + +### `Must be logged on to the server` error + +The following error indicates that the login procedure has failed: + +``` +ORA-17430: Must be logged on to the server. +``` + +This is most commonly caused by the Oracle database enforcing native encryption or data-integrity checksums on a TCPS endpoint. +Teleport uses TLS for transport security, and does not support native Oracle encryption. + +To disable the redundant encryption requirement for the TCPS endpoint, add the following line to your `sqlnet.ora` file: + +`SQLNET.IGNORE_ANO_ENCRYPTION_FOR_TCPS=TRUE` + +Make sure to use an up-to-date version of the Oracle database. In older versions, this setting may not disable data-integrity checksums, which will lead to continued failures. 
+
+### Invalid username
+
+An incorrectly specified username will result in the following error:
+
+```
+ORA-01017: invalid username/password; logon denied
+```
+
+When using TLS-based authentication, Oracle maps the Common Name (CN) from the client certificate to an external user in the database. Verify the `EXTERNAL_NAME` for your user in the `dba_users` table. It should be in the format `cn=<user>`, where `<user>` matches the value of the `--db-user` flag used in the `tsh db login` command.
+
+You can query the `dba_users` table to check the `EXTERNAL_NAME` of your users:
+
+```sql
+SQL> SELECT username, authentication_type, external_name
+  2  FROM dba_users
+  3  WHERE authentication_type = 'EXTERNAL'
+  4  ORDER BY 1;
+
+USERNAME      AUTHENTICATION_TYPE    EXTERNAL_NAME
+_____________ ______________________ ________________
+ALICE         EXTERNAL               cn=alice
+```
\ No newline at end of file
diff --git a/docs/pages/includes/database-access/pg-access-webui.mdx b/docs/pages/includes/database-access/pg-access-webui.mdx
deleted file mode 100644
index 5f6f7cddb4b33..0000000000000
--- a/docs/pages/includes/database-access/pg-access-webui.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-
-  Starting from version `17.1`, you can now [access your PostgreSQL databases using the Web UI.](../../connect-your-client/web-ui.mdx#starting-a-database-session)
-
diff --git a/docs/pages/includes/database-access/pg-cancel-request-limitation.mdx b/docs/pages/includes/database-access/pg-cancel-request-limitation.mdx
index cb64d4c5931bf..2e27188aab672 100644
--- a/docs/pages/includes/database-access/pg-cancel-request-limitation.mdx
+++ b/docs/pages/includes/database-access/pg-cancel-request-limitation.mdx
@@ -2,24 +2,24 @@
 ### Unable to cancel a query
 
 If you use a PostgreSQL cli client like `psql`, and you try to cancel a query
-with `ctrl+c`, but it doesn't cancel the query, then you need to connect using a
+with `Ctrl+C`, but it doesn't cancel the query, then you need to connect using a
 tsh local proxy instead.
When `psql` cancels a query, it establishes a new connection without TLS certificates, however Teleport requires TLS certificates not only for authentication, but also to route database connections. If you -[enable TLS Routing in Teleport](../../admin-guides/management/operations/tls-routing.mdx) +[enable TLS Routing in Teleport](../../zero-trust-access/deploy-a-cluster/tls-routing.mdx) then `tsh db connect` will automatically start a local proxy for every connection. Alternatively, you can connect via -[Teleport Connect](../../connect-your-client/teleport-connect.mdx) +[Teleport Connect](../../connect-your-client/teleport-clients/teleport-connect.mdx) which also uses a local proxy. Otherwise, you need to start a tsh local proxy manually using `tsh proxy db` and connect via the local proxy. If you have already started a long-running query in a `psql` session that you -cannot cancel with ctrl+c, you can start a new client session to cancel that +cannot cancel with `Ctrl+C`, you can start a new client session to cancel that query manually: First, find the query's process identifier (PID): diff --git a/docs/pages/includes/database-access/pg-mysql-connect.mdx b/docs/pages/includes/database-access/pg-mysql-connect.mdx new file mode 100644 index 0000000000000..6dfe949d8c479 --- /dev/null +++ b/docs/pages/includes/database-access/pg-mysql-connect.mdx @@ -0,0 +1,35 @@ +{{ dbUser="alice" dbService="example-db" pgSchema="postgres" mySchema="mysql" }} +{/* this comment is here to make the includes below render */} + + + + +Retrieve credentials for the database and connect to it as the `{{ dbUser }}` user: + +```code +$ tsh db connect --db-user={{ dbUser }} --db-name={{ mySchema }} {{ dbService }} +``` + + + Either the `mysql` or the `mariadb` client must be available on `PATH` to connect. + + +(!docs/pages/includes/database-access/db-access-webui-ad.mdx dbType="MySQL or MariaDB"!) 
+
+
+
+Retrieve credentials for the database and connect to it as the `{{ dbUser }}` user:
+
+```code
+$ tsh db connect --db-user={{ dbUser }} --db-name={{ pgSchema }} {{ dbService }}
+```
+
+
+  The `psql` client must be available on `PATH` to connect.
+
+
+(!docs/pages/includes/database-access/db-access-webui-ad.mdx dbType="PostgreSQL"!)
+
+
+
diff --git a/docs/pages/includes/database-access/rds-proxy.mdx b/docs/pages/includes/database-access/rds-proxy.mdx
index dccbc48e02bc9..4804e4d56c416 100644
--- a/docs/pages/includes/database-access/rds-proxy.mdx
+++ b/docs/pages/includes/database-access/rds-proxy.mdx
@@ -1,3 +1,4 @@
+{/* this comment is here to make the includes below render */}
 ## How it works
 
 (!docs/pages/includes/database-access/how-it-works/iam.mdx db="RDS Proxy" cloud="AWS"!)
@@ -14,9 +15,9 @@
 Teleport currently supports RDS Proxy instances with engine family
-[PostgreSQL](../../enroll-resources/database-access/enroll-aws-databases/rds-proxy-postgres.mdx),
-[MariaDB/MySQL](../../enroll-resources/database-access/enroll-aws-databases/rds-proxy-postgres.mdx) or
-[Microsoft SQL Server](../../enroll-resources/database-access/enroll-aws-databases/rds-proxy-sqlserver.mdx).
+[PostgreSQL](../../enroll-resources/database-access/enrollment/aws/rds-proxy/rds-proxy-postgres.mdx),
+[MariaDB/MySQL](../../enroll-resources/database-access/enrollment/aws/rds-proxy/rds-proxy-mysql.mdx) or
+[Microsoft SQL Server](../../enroll-resources/database-access/enrollment/aws/rds-proxy/rds-proxy-sqlserver.mdx).
 
 (!docs/pages/includes/database-access/auto-discovery-tip.mdx dbType="RDS Proxy" providerType="AWS"!)
 
@@ -184,7 +185,7 @@ Retrieve credentials for the database and connect to it as the `alice` user:
 
 $ tsh db connect --db-user=alice --db-name=dev rds-proxy
 ```
 
-(!docs/pages/includes/database-access/pg-access-webui.mdx!)
+(!docs/pages/includes/database-access/db-access-webui-ad.mdx dbType="PostgreSQL and MySQL"!)
To log out of the database and remove credentials: diff --git a/docs/pages/includes/database-access/reference/auto-discovery-unavailable.mdx b/docs/pages/includes/database-access/reference/auto-discovery-unavailable.mdx index 501ae4b5cf739..768f5e9f958fa 100644 --- a/docs/pages/includes/database-access/reference/auto-discovery-unavailable.mdx +++ b/docs/pages/includes/database-access/reference/auto-discovery-unavailable.mdx @@ -5,6 +5,6 @@ Database discovery is not available for {{ dbType }}. To register a {{ dbType }} database with your Teleport cluster, you must configure the database manually via static config or dynamic `db` resource. -See the [database access reference](../../../reference/agent-services/database-access-reference/configuration.mdx) for more +See the [database access reference](../../../enroll-resources/database-access/reference/configuration.mdx) for more information.
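As context for the "static config or dynamic `db` resource" options mentioned in the hunk above, a dynamic `db` resource takes roughly the following shape; the name, labels, and URI here are illustrative placeholders:

```yaml
kind: db
version: v3
metadata:
  name: example-db           # placeholder name
  labels:
    env: dev                 # placeholder label
spec:
  protocol: postgres
  uri: db.example.com:5432   # placeholder database endpoint
```

Such a resource is typically applied with `tctl create db.yaml` against the cluster.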
diff --git a/docs/pages/includes/database-access/reference/aws-iam/elasticache-serverless/access-policy.mdx b/docs/pages/includes/database-access/reference/aws-iam/elasticache-serverless/access-policy.mdx new file mode 100644 index 0000000000000..be6945fce3dec --- /dev/null +++ b/docs/pages/includes/database-access/reference/aws-iam/elasticache-serverless/access-policy.mdx @@ -0,0 +1,48 @@ +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "ElastiCacheServerlessFetchMetadata", + "Effect": "Allow", + "Action": "elasticache:DescribeServerlessCaches", + "Resource": "*" + }, + { + "Sid": "ElastiCacheServerlessDescribeUsers", + "Effect": "Allow", + "Action": "elasticache:DescribeUsers", + "Resource": "*" + }, + { + "Sid": "ElastiCacheServerlessConnect", + "Effect": "Allow", + "Action": "elasticache:Connect", + "Resource": "*" + } + ] +} +``` + +| Statement | Purpose | +| ---------- | ------- | +|`ElastiCacheServerlessFetchMetadata` | Automatically import AWS metadata about the database. | +|`ElastiCacheServerlessDescribeUsers` | Determine whether a user is compatible with IAM authentication. | +|`ElastiCacheServerlessConnect` | Connect using IAM authentication. | + +
+You can reduce the scope of the `ElastiCacheServerlessConnect` statement by +updating it to only allow specific ElastiCache Serverless caches and IAM users. +The resource ARN you specify can take either of the following formats: + +```code +arn:aws:elasticache:{Region}:{AccountID}:serverlesscache:{CacheName} +arn:aws:elasticache:{Region}:{AccountID}:user:{UserName} +``` + +See +[Authenticating with IAM for +ElastiCache](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/auth-iam.html) +for more information. +
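For illustration, a scoped `ElastiCacheServerlessConnect` statement using those ARN formats might look like this (the region, account ID, cache name, and user name are placeholders):

```json
{
  "Sid": "ElastiCacheServerlessConnect",
  "Effect": "Allow",
  "Action": "elasticache:Connect",
  "Resource": [
    "arn:aws:elasticache:us-west-2:123456789012:serverlesscache:example-cache",
    "arn:aws:elasticache:us-west-2:123456789012:user:example-user"
  ]
}
```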
+ diff --git a/docs/pages/includes/database-access/reference/aws-iam/elasticache/access-policy.mdx b/docs/pages/includes/database-access/reference/aws-iam/elasticache/access-policy.mdx index b57a042fda1c9..1cba533470ee6 100644 --- a/docs/pages/includes/database-access/reference/aws-iam/elasticache/access-policy.mdx +++ b/docs/pages/includes/database-access/reference/aws-iam/elasticache/access-policy.mdx @@ -1,4 +1,4 @@ -ElastiCache supports IAM authentication for Redis engine version +ElastiCache supports IAM authentication for Redis and Valkey engine version 7.0 or above. This is the recommended way to configure Teleport access to ElastiCache. diff --git a/docs/pages/includes/database-access/reference/aws-iam/redshift/role-as-user-policy.mdx b/docs/pages/includes/database-access/reference/aws-iam/redshift/role-as-user-policy.mdx index 96bf433c84797..ed5ff75417e90 100644 --- a/docs/pages/includes/database-access/reference/aws-iam/redshift/role-as-user-policy.mdx +++ b/docs/pages/includes/database-access/reference/aws-iam/redshift/role-as-user-policy.mdx @@ -27,7 +27,7 @@ Teleport users can connect as that role by specifying "role/\{RoleName\}" as a database user, e.g. 
```code -$ tsh db connect redshift-example-db --db-user=role/{{ dbUserRole }} +$ tsh db connect my-redshift --db-user=role/{{ dbUserRole }} ``` See diff --git a/docs/pages/includes/database-access/self-hosted-config-start.mdx b/docs/pages/includes/database-access/self-hosted-config-start.mdx index 6ea01b701f9bf..786e1fd684e17 100644 --- a/docs/pages/includes/database-access/self-hosted-config-start.mdx +++ b/docs/pages/includes/database-access/self-hosted-config-start.mdx @@ -47,7 +47,7 @@ To configure the Teleport Database Service to trust a custom CA: If your database servers use certificates that are signed by a public CA such -as ComodoCA or DigiCert, you can use the `trust_system_cert_pool` option +as ComodoCA or DigiCert, you can use the `trust-system-cert-pool` option without exporting the CA: ```code $ sudo teleport db configure create \ @@ -57,7 +57,7 @@ $ sudo teleport db configure create \ --name={{ dbName }} \ --protocol={{ dbProtocol }} \ --uri={{ databaseAddress }} \ - --trust_system_cert_pool \ + --trust-system-cert-pool \ --labels=env=dev ``` diff --git a/docs/pages/includes/database-access/split-db-ca-details.mdx b/docs/pages/includes/database-access/split-db-ca-details.mdx index e1c8b230f6818..d2301341c61d0 100644 --- a/docs/pages/includes/database-access/split-db-ca-details.mdx +++ b/docs/pages/includes/database-access/split-db-ca-details.mdx @@ -23,6 +23,6 @@ in Teleport versions (= db_client_ca.released_version.v15 =). See -[Database CA Migrations](../../admin-guides/management/operations/db-ca-migrations.mdx) +[Database CA Migrations](../../zero-trust-access/management/security/db-ca-migrations.mdx) for more information. 
diff --git a/docs/pages/includes/database-access/sql-server-connect-note.mdx b/docs/pages/includes/database-access/sql-server-connect-note.mdx index c6bb7b5da1f91..ca35fa46d58ea 100644 --- a/docs/pages/includes/database-access/sql-server-connect-note.mdx +++ b/docs/pages/includes/database-access/sql-server-connect-note.mdx @@ -11,6 +11,6 @@ your SQL Server client: $ tsh proxy db --db-user=teleport --tunnel sqlserver ``` -Read the [Database GUI Clients](../../connect-your-client/gui-clients.mdx#sql-server-with-azure-data-studio) +Read the [Database GUI Clients](../../connect-your-client/third-party/gui-clients.mdx#sql-server-with-azure-data-studio) guide for how to connect your DB GUI client to the local proxy. diff --git a/docs/pages/includes/device-trust/prereqs.mdx b/docs/pages/includes/device-trust/prereqs.mdx index 0ad4cc364b1f7..77ce5df1a708b 100644 --- a/docs/pages/includes/device-trust/prereqs.mdx +++ b/docs/pages/includes/device-trust/prereqs.mdx @@ -1,22 +1,22 @@ - To enroll a macOS device, you need: - A signed and notarized `tsh` binary. - [Download the macOS tsh installer](../../installation.mdx#macos). + [Download the macOS tsh installer](../../installation/macos.mdx). - To enroll a Windows device, you need: - A device with TPM 2.0. - A user with administrator privileges. This is only required during enrollment. - - The `tsh` client. [Download the Windows tsh installer](../../installation.mdx#windows-tsh-tbot-and-tctl-clients-only). + - The `tsh` client. [Download the Windows tsh installer](../../installation/windows.mdx). - To enroll a Linux device, you need: - A device with TPM 2.0. - A user with permissions to use the /dev/tpmrm0 device (typically done by assigning the `tss` group to the user). - - The `tsh` client. [Install tsh for Linux](../../installation.mdx#linux). + - The `tsh` client. [Install tsh for Linux](../../installation/linux.mdx). WSL users should use the Windows binary instead. 
- [Download the Windows tsh installer](../../installation.mdx#windows-tsh-tbot-and-tctl-clients-only). + [Download the Windows tsh installer](../../installation/windows.mdx). - To authenticate a Web UI session you need [Teleport Connect]( - ../../connect-your-client/teleport-connect.mdx#installation--upgrade) + ../../connect-your-client/teleport-clients/teleport-connect.mdx#installation--upgrade) - Correct end-user IP propagation to your Teleport deployment: - [X-Forwarded-For header](../../reference/config.mdx#proxy-service) (L7 load + [X-Forwarded-For header](../../reference/deployment/config.mdx#proxy-service) (L7 load balancer) or [PROXY protocol]( - ../../admin-guides/management/security/proxy-protocol.mdx) (L4 load balancer) + ../../zero-trust-access/management/security/proxy-protocol.mdx) (L4 load balancer) diff --git a/docs/pages/includes/device-trust/troubleshooting.mdx b/docs/pages/includes/device-trust/troubleshooting.mdx index 9e8435e87ad98..1dd5be27cc649 100644 --- a/docs/pages/includes/device-trust/troubleshooting.mdx +++ b/docs/pages/includes/device-trust/troubleshooting.mdx @@ -1,14 +1,14 @@ ### "binary missing signature or entitlements" on `tsh device enroll` A signed and notarized `tsh` binary is necessary to enroll and use a trusted -device. [Download the macOS tsh installer](../../installation.mdx#macos) to fix +device. [Download the macOS tsh installer](../../installation/macos.mdx) to fix the problem. ### "unauthorized device" errors using a trusted device A trusted device needs to be registered and enrolled before it is recognized by -Teleport as such. Follow the [registration](../../admin-guides/access-controls/device-trust/device-management.mdx) and -[enrollment](../../admin-guides/access-controls/device-trust/device-management.mdx) steps +Teleport as such.
Follow the [registration](../../identity-governance/device-trust/device-management.mdx#register-a-trusted-device) and +[enrollment](../../identity-governance/device-trust/device-management.mdx#enroll-a-trusted-device) steps and make sure to `tsh logout` and `tsh login` after enrollment is done. ### "Failed to open the TPM device" on Linux @@ -27,7 +27,7 @@ https://github.com/tpm2-software/tpm2-tss/blob/ede63dd1ac1f0a46029d457304edcac21 Auto-enrollment ceremonies, due to their automated nature, are stricter than regular enrollment. Additional auto-enrollment checks include: -1. Verifying device profile data, such as data originated from Jamf, against the +1. Verifying device profile data, such as data originated from an MDM service, against the actual device 2. Verifying that the device is not enrolled by another user (auto-enroll cannot take devices that are already enrolled) @@ -49,7 +49,7 @@ Follow the instructions in the [Web UI troubleshooting section]( #web-ui-fails-to-authenticate-trusted-device) below (Teleport v16 or later). Alternatively, you may use one of the tsh commands described by -[App Access support](../../admin-guides/access-controls/device-trust/enforcing-device-trust.mdx). +[App Access support](../../identity-governance/device-trust/enforcing-device-trust.mdx). For example, for an app called `myapp`, run `tsh proxy app myapp -p 8888`, then open http://localhost:8888 in your browser. @@ -69,10 +69,9 @@ The Web UI attempts to authenticate your device using Teleport Connect during login. If you are not asked to authenticate your device immediately after login, follow the steps below: -1. Make sure your device is [registered and enrolled]( - ../../admin-guides/access-controls/device-trust/device-management.mdx#register-a-trusted-device) +1. Make sure your device is [registered and enrolled](../../identity-governance/device-trust/device-management.mdx#register-a-trusted-device) 2. 
Install [Teleport Connect]( - ../../connect-your-client/teleport-connect.mdx#installation--upgrade). + ../../connect-your-client/teleport-clients/teleport-connect.mdx#installation--upgrade). Use the DEB or RPM packages on Linux (the tarball doesn't register the custom URL handler). 3. Make sure Teleport Connect can access the same resource you are trying to @@ -92,9 +91,9 @@ device web authentication ceremony failed. In this case it's likely that end-user IPs are not propagated correctly to your Teleport deployment. * L7 load balancer: make sure it propagates the [X-Forwarded-For header]( - ../../reference/config.mdx#proxy-service) + ../../reference/deployment/config.mdx#proxy-service) * L4 load balancer: enable [PROXY protocol]( - ../../admin-guides/management/security/proxy-protocol.mdx) + ../../zero-trust-access/management/security/proxy-protocol.mdx) ### Checking Device Trust authorization status in the web UI diff --git a/docs/pages/includes/discovery/aws-db-discovery-config-picker.mdx b/docs/pages/includes/discovery/aws-db-discovery-config-picker.mdx index 173f9c78b7ede..1aa11fcd60c5b 100644 --- a/docs/pages/includes/discovery/aws-db-discovery-config-picker.mdx +++ b/docs/pages/includes/discovery/aws-db-discovery-config-picker.mdx @@ -11,10 +11,15 @@ (!docs/pages/includes/database-access/reference/auto-discovery-unavailable.mdx dbType="DynamoDB"!) - + (!docs/pages/includes/discovery/aws-db-discovery-config.mdx matcherType="elasticache" !) + + + +(!docs/pages/includes/discovery/aws-db-discovery-config.mdx matcherType="elasticache-serverless" !) + diff --git a/docs/pages/includes/discovery/aws-db-discovery-config.mdx b/docs/pages/includes/discovery/aws-db-discovery-config.mdx index 72d8e1f351fd9..343221c49304a 100644 --- a/docs/pages/includes/discovery/aws-db-discovery-config.mdx +++ b/docs/pages/includes/discovery/aws-db-discovery-config.mdx @@ -15,8 +15,9 @@ spec: aws: # Database types. 
Valid options are: # 'docdb' - discovers and registers Amazon DocumentDB databases. -# 'elasticache' - discovers Amazon ElastiCache Redis databases. -# 'memorydb' - discovers Amazon MemoryDB Redis databases. +# 'elasticache' - discovers Amazon ElastiCache Redis and Valkey databases. +# 'elasticache-serverless' - discovers Amazon ElastiCache Serverless Redis or Valkey databases. +# 'memorydb' - discovers Amazon MemoryDB databases. # 'opensearch' - discovers Amazon OpenSearch Redis databases. # 'rds' - discovers Amazon RDS and Aurora databases. # 'rdsproxy' - discovers Amazon RDS Proxy databases. diff --git a/docs/pages/includes/discovery/aws-db-iam-policy-picker.mdx b/docs/pages/includes/discovery/aws-db-iam-policy-picker.mdx index c4e27274d66e6..bdfff66c56727 100644 --- a/docs/pages/includes/discovery/aws-db-iam-policy-picker.mdx +++ b/docs/pages/includes/discovery/aws-db-iam-policy-picker.mdx @@ -11,10 +11,15 @@ (!docs/pages/includes/discovery/reference/aws-iam/dynamodb.mdx!) - + (!docs/pages/includes/discovery/reference/aws-iam/elasticache.mdx!) + + + +(!docs/pages/includes/discovery/reference/aws-iam/elasticache-serverless.mdx!) + diff --git a/docs/pages/includes/discovery/database-service-troubleshooting.mdx b/docs/pages/includes/discovery/database-service-troubleshooting.mdx index 8786e37d0de31..bb2f003d6d2a2 100644 --- a/docs/pages/includes/discovery/database-service-troubleshooting.mdx +++ b/docs/pages/includes/discovery/database-service-troubleshooting.mdx @@ -40,7 +40,7 @@ spec: This section assumes you have already provisioned a database user and configured Teleport RBAC for that database user by following a specific guide in -[Enroll AWS Databases](../../enroll-resources/database-access/enroll-aws-databases/enroll-aws-databases.mdx). +[Enroll AWS Databases](../../enroll-resources/database-access/enrollment/aws/aws.mdx).
If there are connection errors when you try to connect to a database, then @@ -68,5 +68,5 @@ guide](../../enroll-resources/database-access/troubleshooting.mdx) for more general troubleshooting steps. Additionally, a guide specific to the type of database in -[Enroll AWS Databases](../../enroll-resources/database-access/enroll-aws-databases/enroll-aws-databases.mdx). +[Enroll AWS Databases](../../enroll-resources/database-access/enrollment/aws/aws.mdx) may have more specific troubleshooting information. diff --git a/docs/pages/includes/discovery/db-label-warning.mdx b/docs/pages/includes/discovery/db-label-warning.mdx new file mode 100644 index 0000000000000..780f9f05891dc --- /dev/null +++ b/docs/pages/includes/discovery/db-label-warning.mdx @@ -0,0 +1,6 @@ + + +Use more specific label selectors when needed so that a Database Service +instance only matches databases it can actually reach. Broad selectors can +cause unreachable databases to be registered and lead to connection failures. + \ No newline at end of file diff --git a/docs/pages/includes/discovery/discovery-config.yaml b/docs/pages/includes/discovery/discovery-config.yaml index 4deb99c5d69da..f9542c66111bb 100644 --- a/docs/pages/includes/discovery/discovery-config.yaml +++ b/docs/pages/includes/discovery/discovery-config.yaml @@ -22,9 +22,11 @@ discovery_service: # 'rdsproxy' - Amazon RDS Proxy databases. # 'redshift' - Amazon Redshift databases. # 'redshift-serverless' - Amazon Redshift Serverless databases. - # 'elasticache' - Amazon ElastiCache Redis databases. - # 'memorydb' - Amazon MemoryDB Redis databases. + # 'elasticache' - Amazon ElastiCache Redis and Valkey databases. + # 'elasticache-serverless' - Amazon ElastiCache Serverless Redis or Valkey databases. + # 'memorydb' - Amazon MemoryDB databases. # 'opensearch' - Amazon OpenSearch Redis databases. + # 'docdb' - Amazon DocumentDB databases.
- types: ["ec2"] # AWS regions to search for resources from regions: ["us-east-1","us-west-1"] @@ -38,8 +40,7 @@ discovery_service: # Optional AWS external ID that the Discovery Service will use to assume # a role in an external AWS account. external_id: "example-external-id" - # Optional section: install is used to provide parameters to the AWS SSM document. - # If the install section isn't provided, the below defaults are used. + # Optional section: install is used to provide parameters to the installer script. # Only applicable for EC2 discovery. install: join_params: @@ -49,13 +50,29 @@ discovery_service: # script_name is the name of the Teleport install script to use. # Optional, defaults to: "default-installer". script_name: "default-installer" + # Optional: adds a suffix to teleport installation, allowing for multiple agent installations. + # Requires managed updates to be enabled. + # Supported characters are alphanumeric characters and `-`. + suffix: "" + # Optional: when using managed updates, set the update group of the installation. + # Supported characters are alphanumeric characters and `-`. + update_group: "" + # Optional: proxy settings for the install script. + # Sets the http_proxy, HTTP_PROXY, https_proxy, HTTPS_PROXY, no_proxy, and NO_PROXY + # environment variables for the install script. + http_proxy_settings: + https_proxy: http://172.31.5.130:3128 + http_proxy: http://172.31.5.130:3128 + no_proxy: my-local-domain # Optional section: ssm is used to configure which AWS SSM document to use # If the ssm section isn't provided the below defaults are used. ssm: # document_name is the name of the SSM document that should be # executed when installing teleport on matching nodes + # Can be set to "AWS-RunShellScript" which is a pre-defined SSM Document, + # removing the need to create a custom SSM Document in each region. # Optional, defaults to: "TeleportDiscoveryInstaller".
- document_name: "TeleportDiscoveryInstaller" + document_name: "AWS-RunShellScript" # Optional role for which the Discovery Service should create the EKS access entry. # If not set, the Discovery Service will attempt to create the access # entry using its own identity. @@ -83,6 +100,30 @@ discovery_service: # '*' - discovers resources in all resource groups within configured subscription(s) (default). # Any resource_groups: `az group list -o table` resource_groups: ["group1", "group2"] + # Optional section: install is used to provide parameters to the Teleport installation in Azure VMs. + # Only applicable for VM discovery. + install: + join_params: + # token_name is the name of the Teleport invite token to use. + # Optional, defaults to: "azure-discovery-token". + token_name: "azure-discovery-token" + # script_name is the name of the Teleport install script to use. + # Optional, defaults to: "default-installer". + script_name: "default-installer" + # Optional: adds a suffix to teleport installation, allowing for multiple agent installations. + # Requires managed updates to be enabled. + # Supported characters are alphanumeric characters and `-`. + suffix: "" + # Optional: when using managed updates, set the update group of the installation. + # Supported characters are alphanumeric characters and `-`. + update_group: "" + # Optional proxy settings for the install script. + # Sets the http_proxy, HTTP_PROXY, https_proxy, HTTPS_PROXY, no_proxy, and NO_PROXY + # environment variables for the install script. + http_proxy_settings: + https_proxy: http://172.31.5.130:3128 + http_proxy: http://172.31.5.130:3128 + no_proxy: my-local-domain # Azure resource tag filters used to match resources. tags: "*": "*" @@ -102,6 +143,30 @@ discovery_service: # Email addresses of service accounts that instances can join with. # If empty, any service account is allowed. 
service_accounts: [] + # Optional section: install is used to provide parameters to the Teleport installation in Google Cloud VMs. + # Only applicable for VM discovery. + install: + join_params: + # token_name is the name of the Teleport invite token to use. + # Optional, defaults to: "gcp-discovery-token". + token_name: "gcp-discovery-token" + # script_name is the name of the Teleport install script to use. + # Optional, defaults to: "default-installer". + script_name: "default-installer" + # Optional: adds a suffix to teleport installation, allowing for multiple agent installations. + # Requires managed updates to be enabled. + # Supported characters are alphanumeric characters and `-`. + suffix: "" + # Optional: when using managed updates, set the update group of the installation. + # Supported characters are alphanumeric characters and `-`. + update_group: "" + # Optional proxy settings for the install script. + # Sets the http_proxy, HTTP_PROXY, https_proxy, HTTPS_PROXY, no_proxy, and NO_PROXY + # environment variables for the install script. + http_proxy_settings: + https_proxy: http://172.31.5.130:3128 + http_proxy: http://172.31.5.130:3128 + no_proxy: my-local-domain # GCP resource label filters used to match resources. 
labels: "*": "*" diff --git a/docs/pages/includes/discovery/reference/aws-iam/elasticache-serverless.mdx b/docs/pages/includes/discovery/reference/aws-iam/elasticache-serverless.mdx new file mode 100644 index 0000000000000..73c5d4608d8b0 --- /dev/null +++ b/docs/pages/includes/discovery/reference/aws-iam/elasticache-serverless.mdx @@ -0,0 +1,27 @@ +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "ElastiCacheServerlessDiscovery", + "Effect": "Allow", + "Action": "elasticache:DescribeServerlessCaches", + "Resource": "*" + }, + { + "Sid": "ElastiCacheServerlessFetchMetadata", + "Effect": "Allow", + "Action": [ + "ec2:DescribeSubnets", + "elasticache:ListTagsForResource" + ], + "Resource": "*" + } + ] +} +``` + +| Statement | Purpose | +| --------- | ------- | +| `ElastiCacheServerlessDiscovery` | Discover ElastiCache Serverless caches. | +| `ElastiCacheServerlessFetchMetadata` | Import AWS tags and other metadata for each database as Teleport database labels. | diff --git a/docs/pages/includes/dynamodb-iam-policy.mdx b/docs/pages/includes/dynamodb-iam-policy.mdx index f44dd4fd61ef3..c2f113919115a 100644 --- a/docs/pages/includes/dynamodb-iam-policy.mdx +++ b/docs/pages/includes/dynamodb-iam-policy.mdx @@ -7,7 +7,7 @@ depends on whether you expect to create a table yourself or enable the Auth Service to create and configure one for you: - + If you choose to manage DynamoDB tables yourself, you must take the following steps, which we will explain in more detail below: @@ -44,7 +44,15 @@ The audit event table must have the following attribute definitions: |`CreatedAtDate`|`S`| |`CreatedAt`|`N`| -The table must also have the following key schema elements: +The table must have the following key schema elements: + +|Name|Type| +|---|---| +|`SessionID`|`HASH`| +|`EventIndex`|`RANGE`| + +The table must also have a global secondary index named `timesearchV2` with the +following key schema elements: |Name|Type| |---|---| @@ -125,7 +133,7 @@ Note that you can 
omit the `dynamodb:UpdateContinuousBackups` permission if disabling continuous backups. - + You'll need to replace these values in the policy example below: diff --git a/docs/pages/includes/edition-prereqs-tabs-not-admin.mdx b/docs/pages/includes/edition-prereqs-tabs-not-admin.mdx deleted file mode 100644 index 844f19ac5c8f9..0000000000000 --- a/docs/pages/includes/edition-prereqs-tabs-not-admin.mdx +++ /dev/null @@ -1,10 +0,0 @@ -{{ version="(=teleport.version=)" }} - -- A running Teleport cluster version {{ version }} or above. If you want to get started with Teleport, [sign - up](https://goteleport.com/signup) for a free trial or [set up a demo - environment](../admin-guides/deploy-a-cluster/linux-demo.mdx). - -- The `tsh` client tool. - - Visit [Installation](../installation.mdx) for instructions on downloading - `tsh`. diff --git a/docs/pages/includes/edition-prereqs-tabs.mdx b/docs/pages/includes/edition-prereqs-tabs.mdx index 3de3e9dd732ee..46031d0989e82 100644 --- a/docs/pages/includes/edition-prereqs-tabs.mdx +++ b/docs/pages/includes/edition-prereqs-tabs.mdx @@ -1,10 +1,71 @@ -{{ version="(=teleport.version=)" }} +{{ edition="Teleport" clients="`tctl` and `tsh` clients" }} -- A running Teleport cluster version {{ version }} or above. If you want to get started with Teleport, [sign - up](https://goteleport.com/signup) for a free trial or [set up a demo - environment](../admin-guides/deploy-a-cluster/linux-demo.mdx). +- A running {{ edition }} cluster. If you want to get started with Teleport, + [sign up](https://goteleport.com/signup) for a free trial or [set up a demo + environment](../get-started/deploy-community.mdx). -- The `tctl` admin tool and `tsh` client tool. +- The {{ clients }}. - Visit [Installation](../installation.mdx) for instructions on downloading - `tctl` and `tsh`. +
+ Installing {{ clients }} + + 1. Determine the version of your Teleport cluster. The {{ clients }} must be + at most one major version behind your Teleport cluster version. Send a GET + request to the Proxy Service at `/v1/webapi/find` and use a JSON query tool + to obtain your cluster version. Replace + with the web address of your Teleport Proxy Service: + + ```code + $ TELEPORT_DOMAIN= + $ TELEPORT_VERSION="$(curl -s https://$TELEPORT_DOMAIN/v1/webapi/find | jq -r '.server_version')" + ``` + + 1. Follow the instructions for your platform to install {{ clients }}: + + + + + Download the signed macOS .pkg installer for Teleport, which includes the {{ clients }}: + + ```code + $ curl -O https://cdn.teleport.dev/teleport-${TELEPORT_VERSION?}.pkg + ``` + + In Finder double-click the `pkg` file to begin installation. + + + Using Homebrew to install Teleport is not supported. The Teleport package in + Homebrew is not maintained by Teleport and we can't guarantee its reliability or + security. + + + + + + + ```code + $ curl.exe -O https://cdn.teleport.dev/teleport-v${TELEPORT_VERSION?}-windows-amd64-bin.zip + # Unzip the archive and move the {{ clients }} to your %PATH% + # NOTE: Do not place the {{ clients }} in the System32 directory, as this can cause issues when using WinSCP. + # Use %SystemRoot% (C:\Windows) or %USERPROFILE% (C:\Users\) instead. + ``` + + + + + + All of the Teleport binaries in Linux installations include the {{ clients }}. For more + options (including RPM/DEB packages and downloads for i386/ARM/ARM64) see + our [installation page](../installation/installation.mdx). + + ```code + $ curl -O https://cdn.teleport.dev/teleport-v${TELEPORT_VERSION?}-linux-amd64-bin.tar.gz + $ tar -xzf teleport-v${TELEPORT_VERSION?}-linux-amd64-bin.tar.gz + $ cd teleport + $ sudo ./install + # Teleport binaries have been copied to /usr/local/bin + ``` + + + +
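The compatibility rule above (clients at most one major version behind the cluster) can be sketched as a quick shell check. The version strings below are placeholders; in practice, take the cluster version from `/v1/webapi/find` as shown above and the client version from `tsh version`:

```code
CLUSTER_VERSION="17.2.1"   # placeholder; e.g. .server_version from /v1/webapi/find
CLIENT_VERSION="16.4.0"    # placeholder; e.g. as reported by `tsh version`
# Compare major versions only.
cluster_major="${CLUSTER_VERSION%%.*}"
client_major="${CLIENT_VERSION%%.*}"
if [ "$client_major" -le "$cluster_major" ] && [ $((cluster_major - client_major)) -le 1 ]; then
  echo "tsh $CLIENT_VERSION is compatible with cluster $CLUSTER_VERSION"
else
  echo "tsh $CLIENT_VERSION may be incompatible with cluster $CLUSTER_VERSION"
fi
```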
diff --git a/docs/pages/includes/ent-vs-community-faq.mdx b/docs/pages/includes/ent-vs-community-faq.mdx index 8da3cc463335e..d955e23dcc6bf 100644 --- a/docs/pages/includes/ent-vs-community-faq.mdx +++ b/docs/pages/includes/ent-vs-community-faq.mdx @@ -18,5 +18,5 @@ $ teleport version # Teleport v(=teleport.version=) go(=teleport.golang=) ``` -See the [Installation](../installation.mdx) guide for details on installing to specific platforms +See the [Installation](../installation/installation.mdx) guide for details on installing to specific platforms with Enterprise or Teleport Community Edition releases. diff --git a/docs/pages/includes/enterprise/oidcauthentication.mdx b/docs/pages/includes/enterprise/oidcauthentication.mdx deleted file mode 100644 index 136102ea83651..0000000000000 --- a/docs/pages/includes/enterprise/oidcauthentication.mdx +++ /dev/null @@ -1,19 +0,0 @@ -Configure Teleport to use OIDC authentication as the default instead of the local -user database. - -Edit your `cluster_auth_preference` resource: - -```code -$ tctl edit cap -``` - -In your editor, ensure that the file contains the following content: - -```yaml -kind: cluster_auth_preference -metadata: - name: cluster-auth-preference -spec: - type: oidc -version: v2 -``` diff --git a/docs/pages/includes/enterprise/samlauthentication.mdx b/docs/pages/includes/enterprise/samlauthentication.mdx deleted file mode 100644 index 9fcd1e9db5362..0000000000000 --- a/docs/pages/includes/enterprise/samlauthentication.mdx +++ /dev/null @@ -1,37 +0,0 @@ -### Enable default SAML authentication - -Configure Teleport to use SAML authentication as the default instead of the local -user database. - -Use `tctl` to edit the `cluster_auth_preference` value: - -```code -$ tctl edit cluster_auth_preference -``` - -Set the value of `spec.type` to `saml`: - -```yaml -kind: cluster_auth_preference -metadata: - ... - name: cluster-auth-preference -spec: - ... - type: saml - ... 
-version: v2 -``` - -After you save and exit the editor, `tctl` will update the resource: - -```text -cluster auth preference has been updated -``` - - - -If you need to log in again before configuring your SAML provider, use the flag `--auth=local`. - - - diff --git a/docs/pages/includes/helm-reference/zz_generated.access-datadog.mdx b/docs/pages/includes/helm-reference/zz_generated.access-datadog.mdx index cc90d24bab68e..7cd2459db2d0c 100644 --- a/docs/pages/includes/helm-reference/zz_generated.access-datadog.mdx +++ b/docs/pages/includes/helm-reference/zz_generated.access-datadog.mdx @@ -46,7 +46,7 @@ data: ``` Check out the [Access Requests with -Datadog Incident Management](../../admin-guides/access-controls/access-request-plugins/datadog-hosted.mdx) guide +Datadog Incident Management](../../identity-governance/access-requests/plugins/datadog-hosted.mdx) guide for more information about how to acquire these credentials. ### `teleport.identitySecretPath` @@ -251,7 +251,7 @@ teleportAuthAddress: "teleport-auth.teleport-namespace.svc.cluster.local:3025" | `string` | `"kubernetes"` | `tbot.joinMethod` describes how tbot joins the Teleport cluster. -See [the join method reference](../../reference/join-methods.mdx) for a list fo supported values and detailed explanations. +See [the join method reference](../../reference/deployment/join-methods.mdx) for a list of supported values and detailed explanations. ## `annotations` diff --git a/docs/pages/includes/helm-reference/zz_generated.access-discord.mdx b/docs/pages/includes/helm-reference/zz_generated.access-discord.mdx index d8d15de12a3e4..85c800c779ff1 100644 --- a/docs/pages/includes/helm-reference/zz_generated.access-discord.mdx +++ b/docs/pages/includes/helm-reference/zz_generated.access-discord.mdx @@ -20,6 +20,8 @@ proxy servers. 
 For example:
   - joining a Proxy: `teleport.example.com:443` or `teleport.example.com:3080`
   - joining an Auth: `teleport-auth.example.com:3025`
+When the address is empty, `tbot.teleportProxyAddress`
+or `tbot.teleportAuthAddress` will be used if they are set.

 ### `teleport.identitySecretName`

@@ -43,7 +45,7 @@ data:
 ```

 Check out the [Access Requests with
-Discord](../../admin-guides/access-controls/access-request-plugins/ssh-approval-discord.mdx) guide
+Discord](../../identity-governance/access-requests/plugins/discord.mdx) guide
 for more information about how to acquire these credentials.

 ### `teleport.identitySecretPath`

@@ -145,6 +147,77 @@ The value can also be set to a file path (such as `/var/log/teleport.log`)
 to write logs to a file. Bear in mind that a few service startup messages
 will still go to `stderr` for resilience.

+## `tbot`
+
+`tbot` controls the optional tbot deployment that obtains and renews
+credentials for the plugin to connect to Teleport.
+Only default and mandatory values are described here; see the tbot chart reference
+for the full list of supported values.
+
+### `tbot.enabled`
+
+| Type | Default |
+|------|---------|
+| `bool` | `false` |
+
+`tbot.enabled` controls whether tbot should be deployed with the Discord plugin.
+
+### `tbot.clusterName`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`tbot.clusterName` is the name of the Teleport cluster tbot and the Discord plugin will join.
+Setting this value is mandatory when tbot is enabled.
+
+### `tbot.teleportProxyAddress`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`tbot.teleportProxyAddress` is the Teleport Proxy Service address the bot will connect to.
+This must contain the port number, usually 443 or 3080 for the Proxy Service.
+Connecting to the Proxy Service is the most common and recommended way to connect to Teleport.
+This is mandatory when connecting to Teleport Enterprise (Cloud).
+
+This setting is mutually exclusive with `teleportAuthAddress`.
+
+For example:
+```yaml
+tbot:
+  teleportProxyAddress: "test.teleport.sh:443"
+```
+
+### `tbot.teleportAuthAddress`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`tbot.teleportAuthAddress` is the Teleport Auth Service address the bot will connect to.
+This must contain the port number, usually 3025 for the Auth Service. A direct Auth Service
+connection should be used when the bot is deployed in the same Kubernetes cluster as your
+`teleport-cluster` Helm release and has direct access to the Auth Service.
+Otherwise, prefer connecting via the Proxy Service.
+
+This setting is mutually exclusive with `teleportProxyAddress`.
+
+For example:
+```yaml
+teleportAuthAddress: "teleport-auth.teleport-namespace.svc.cluster.local:3025"
+```
+
+### `tbot.joinMethod`
+
+| Type | Default |
+|------|---------|
+| `string` | `"kubernetes"` |
+
+`tbot.joinMethod` describes how tbot joins the Teleport cluster.
+See [the join method reference](../../reference/deployment/join-methods.mdx) for a list of supported values and detailed explanations.
+
 ## `annotations`

 `annotations` contains annotations to apply to the different Kubernetes
diff --git a/docs/pages/includes/helm-reference/zz_generated.access-email.mdx b/docs/pages/includes/helm-reference/zz_generated.access-email.mdx
index 7d5e0c4cb8f89..e995138597439 100644
--- a/docs/pages/includes/helm-reference/zz_generated.access-email.mdx
+++ b/docs/pages/includes/helm-reference/zz_generated.access-email.mdx
@@ -20,6 +20,8 @@ proxy servers.
 For example:
   - joining a Proxy: `teleport.example.com:443` or `teleport.example.com:3080`
   - joining an Auth: `teleport-auth.example.com:3025`
+When the address is empty, `tbot.teleportProxyAddress`
+or `tbot.teleportAuthAddress` will be used if they are set.
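+
+For instance, the following values-file sketch leaves `teleport.address` empty and
+relies on the tbot addresses instead (the cluster name and address are placeholders):
+
+```yaml
+teleport:
+  address: ""
+tbot:
+  enabled: true
+  clusterName: "example.teleport.sh"
+  teleportProxyAddress: "example.teleport.sh:443"
+```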
 ### `teleport.identitySecretName`

@@ -43,7 +45,7 @@ data:
 ```

 Check out the [Access Requests with
-Email](../../admin-guides/access-controls/access-request-plugins/ssh-approval-email.mdx) guide
+Email](../../identity-governance/access-requests/plugins/email.mdx) guide
 for more information about how to acquire these credentials.

 ### `teleport.identitySecretPath`

@@ -265,6 +267,77 @@ The value can also be set to a file path (such as `/var/log/teleport.log`)
 to write logs to a file. Bear in mind that a few service startup messages
 will still go to `stderr` for resilience.

+## `tbot`
+
+`tbot` controls the optional tbot deployment that obtains and renews
+credentials for the plugin to connect to Teleport.
+Only default and mandatory values are described here; see the tbot chart reference
+for the full list of supported values.
+
+### `tbot.enabled`
+
+| Type | Default |
+|------|---------|
+| `bool` | `false` |
+
+`tbot.enabled` controls whether tbot should be deployed with the mail plugin.
+
+### `tbot.clusterName`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`tbot.clusterName` is the name of the Teleport cluster tbot and the mail plugin will join.
+Setting this value is mandatory when tbot is enabled.
+
+### `tbot.teleportProxyAddress`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`tbot.teleportProxyAddress` is the Teleport Proxy Service address the bot will connect to.
+This must contain the port number, usually 443 or 3080 for the Proxy Service.
+Connecting to the Proxy Service is the most common and recommended way to connect to Teleport.
+This is mandatory when connecting to Teleport Enterprise (Cloud).
+
+This setting is mutually exclusive with `teleportAuthAddress`.
+
+For example:
+```yaml
+tbot:
+  teleportProxyAddress: "test.teleport.sh:443"
+```
+
+### `tbot.teleportAuthAddress`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`tbot.teleportAuthAddress` is the Teleport Auth Service address the bot will connect to.
+This must contain the port number, usually 3025 for the Auth Service. A direct Auth Service
+connection should be used when the bot is deployed in the same Kubernetes cluster as your
+`teleport-cluster` Helm release and has direct access to the Auth Service.
+Otherwise, prefer connecting via the Proxy Service.
+
+This setting is mutually exclusive with `teleportProxyAddress`.
+
+For example:
+```yaml
+teleportAuthAddress: "teleport-auth.teleport-namespace.svc.cluster.local:3025"
+```
+
+### `tbot.joinMethod`
+
+| Type | Default |
+|------|---------|
+| `string` | `"kubernetes"` |
+
+`tbot.joinMethod` describes how tbot joins the Teleport cluster.
+See [the join method reference](../../reference/deployment/join-methods.mdx) for a list of supported values and detailed explanations.
+
 ## `annotations`

 `annotations` contains annotations to apply to the different Kubernetes
diff --git a/docs/pages/includes/helm-reference/zz_generated.access-jira.mdx b/docs/pages/includes/helm-reference/zz_generated.access-jira.mdx
index fc28bd73352c0..e81e8cb037073 100644
--- a/docs/pages/includes/helm-reference/zz_generated.access-jira.mdx
+++ b/docs/pages/includes/helm-reference/zz_generated.access-jira.mdx
@@ -20,6 +20,8 @@ proxy servers.
 For example:
   - joining a Proxy: `teleport.example.com:443` or `teleport.example.com:3080`
   - joining an Auth: `teleport-auth.example.com:3025`
+When the address is empty, `tbot.teleportProxyAddress`
+or `tbot.teleportAuthAddress` will be used if they are set.

 ### `teleport.identityFromSecret`

@@ -43,7 +45,7 @@ data:
 ```

 Check out the [Access Requests with
-Jira](../../admin-guides/access-controls/access-request-plugins/ssh-approval-jira.mdx) guide
+Jira](../../identity-governance/access-requests/plugins/jira.mdx) guide
 for more information about how to acquire these credentials.

 ### `teleport.identitySecretPath`

@@ -213,6 +215,77 @@ The value can also be set to a file path (such as `/var/log/teleport.log`)
 to write logs to a file.
 Bear in mind that a few service startup messages
 will still go to `stderr` for resilience.

+## `tbot`
+
+`tbot` controls the optional tbot deployment that obtains and renews
+credentials for the plugin to connect to Teleport.
+Only default and mandatory values are described here; see the tbot chart reference
+for the full list of supported values.
+
+### `tbot.enabled`
+
+| Type | Default |
+|------|---------|
+| `bool` | `false` |
+
+`tbot.enabled` controls whether tbot should be deployed with the Jira plugin.
+
+### `tbot.clusterName`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`tbot.clusterName` is the name of the Teleport cluster tbot and the Jira plugin will join.
+Setting this value is mandatory when tbot is enabled.
+
+### `tbot.teleportProxyAddress`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`tbot.teleportProxyAddress` is the Teleport Proxy Service address the bot will connect to.
+This must contain the port number, usually 443 or 3080 for the Proxy Service.
+Connecting to the Proxy Service is the most common and recommended way to connect to Teleport.
+This is mandatory when connecting to Teleport Enterprise (Cloud).
+
+This setting is mutually exclusive with `teleportAuthAddress`.
+
+For example:
+```yaml
+tbot:
+  teleportProxyAddress: "test.teleport.sh:443"
+```
+
+### `tbot.teleportAuthAddress`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`tbot.teleportAuthAddress` is the Teleport Auth Service address the bot will connect to.
+This must contain the port number, usually 3025 for the Auth Service. A direct Auth Service
+connection should be used when the bot is deployed in the same Kubernetes cluster as your
+`teleport-cluster` Helm release and has direct access to the Auth Service.
+Otherwise, prefer connecting via the Proxy Service.
+
+This setting is mutually exclusive with `teleportProxyAddress`.
+
+For example:
+```yaml
+teleportAuthAddress: "teleport-auth.teleport-namespace.svc.cluster.local:3025"
+```
+
+### `tbot.joinMethod`
+
+| Type | Default |
+|------|---------|
+| `string` | `"kubernetes"` |
+
+`tbot.joinMethod` describes how tbot joins the Teleport cluster.
+See [the join method reference](../../reference/deployment/join-methods.mdx) for a list of supported values and detailed explanations.
+
 ## `annotations`

 `annotations` contains annotations to apply to the different Kubernetes
diff --git a/docs/pages/includes/helm-reference/zz_generated.access-mattermost.mdx b/docs/pages/includes/helm-reference/zz_generated.access-mattermost.mdx
index 53f7f570761e9..8af5e5a5aaf07 100644
--- a/docs/pages/includes/helm-reference/zz_generated.access-mattermost.mdx
+++ b/docs/pages/includes/helm-reference/zz_generated.access-mattermost.mdx
@@ -20,6 +20,8 @@ proxy servers.
 For example:
   - joining a Proxy: `teleport.example.com:443` or `teleport.example.com:3080`
   - joining an Auth: `teleport-auth.example.com:3025`
+When the address is empty, `tbot.teleportProxyAddress`
+or `tbot.teleportAuthAddress` will be used if they are set.

 ### `teleport.identitySecretName`

@@ -43,7 +45,7 @@ data:
 ```

 Check out the [Access Requests with
-Mattermost](../../admin-guides/access-controls/access-request-plugins/ssh-approval-mattermost.mdx) guide
+Mattermost](../../identity-governance/access-requests/plugins/mattermost.mdx) guide
 for more information about how to acquire these credentials.

 ### `teleport.identitySecretPath`

@@ -141,6 +143,77 @@ The value can also be set to a file path (such as `/var/log/teleport.log`)
 to write logs to a file. Bear in mind that a few service startup messages
 will still go to `stderr` for resilience.

+## `tbot`
+
+`tbot` controls the optional tbot deployment that obtains and renews
+credentials for the plugin to connect to Teleport.
+Only default and mandatory values are described here; see the tbot chart reference
+for the full list of supported values.
+
+### `tbot.enabled`
+
+| Type | Default |
+|------|---------|
+| `bool` | `false` |
+
+`tbot.enabled` controls whether tbot should be deployed with the Mattermost plugin.
+
+### `tbot.clusterName`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`tbot.clusterName` is the name of the Teleport cluster tbot and the Mattermost plugin will join.
+Setting this value is mandatory when tbot is enabled.
+
+### `tbot.teleportProxyAddress`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`tbot.teleportProxyAddress` is the Teleport Proxy Service address the bot will connect to.
+This must contain the port number, usually 443 or 3080 for the Proxy Service.
+Connecting to the Proxy Service is the most common and recommended way to connect to Teleport.
+This is mandatory when connecting to Teleport Enterprise (Cloud).
+
+This setting is mutually exclusive with `teleportAuthAddress`.
+
+For example:
+```yaml
+tbot:
+  teleportProxyAddress: "test.teleport.sh:443"
+```
+
+### `tbot.teleportAuthAddress`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`tbot.teleportAuthAddress` is the Teleport Auth Service address the bot will connect to.
+This must contain the port number, usually 3025 for the Auth Service. A direct Auth Service
+connection should be used when the bot is deployed in the same Kubernetes cluster as your
+`teleport-cluster` Helm release and has direct access to the Auth Service.
+Otherwise, prefer connecting via the Proxy Service.
+
+This setting is mutually exclusive with `teleportProxyAddress`.
+
+For example:
+```yaml
+teleportAuthAddress: "teleport-auth.teleport-namespace.svc.cluster.local:3025"
+```
+
+### `tbot.joinMethod`
+
+| Type | Default |
+|------|---------|
+| `string` | `"kubernetes"` |
+
+`tbot.joinMethod` describes how tbot joins the Teleport cluster.
+See [the join method reference](../../reference/deployment/join-methods.mdx) for a list of supported values and detailed explanations.
+ ## `annotations` `annotations` contains annotations to apply to the different Kubernetes diff --git a/docs/pages/includes/helm-reference/zz_generated.access-msteams.mdx b/docs/pages/includes/helm-reference/zz_generated.access-msteams.mdx index 4d1dcb05c21ea..af8ce41fa0da8 100644 --- a/docs/pages/includes/helm-reference/zz_generated.access-msteams.mdx +++ b/docs/pages/includes/helm-reference/zz_generated.access-msteams.mdx @@ -20,6 +20,8 @@ proxy servers. For example: - joining a Proxy: `teleport.example.com:443` or `teleport.example.com:3080` - joining an Auth: `teleport-auth.example.com:3025` +When the address is empty, `tbot.teleportProxyAddress` +or `tbot.teleportAuthAddress` will be used if they are set. ### `teleport.identitySecretName` @@ -43,7 +45,7 @@ data: ``` Check out the [Access Requests with -Microsoft Teams](../../admin-guides/access-controls/access-request-plugins/ssh-approval-msteams.mdx) guide +Microsoft Teams](../../identity-governance/access-requests/plugins/msteams.mdx) guide for more information about how to acquire these credentials. ### `teleport.identitySecretPath` @@ -72,7 +74,7 @@ You can pass the MsTeams appSecret: | `string` | `""` | `msTeams.appID` is the Azure app ID used by the plugin. -See [the MsTeams guide](../../admin-guides/access-controls/access-request-plugins/ssh-approval-msteams.mdx) to know how to get this value. +See [the MsTeams guide](../../identity-governance/access-requests/plugins/msteams.mdx) to know how to get this value. This value is mandatory. @@ -83,7 +85,7 @@ This value is mandatory. | `string` | `""` | `msTeams.tenantID` is the Azure tenant ID used by the plugin and Microsoft Teams. -See [the MsTeams guide](../../admin-guides/access-controls/access-request-plugins/ssh-approval-msteams.mdx) to know how to get this value. +See [the MsTeams guide](../../identity-governance/access-requests/plugins/msteams.mdx) to know how to get this value. This value is mandatory. @@ -94,7 +96,7 @@ This value is mandatory. 
 | `string` | `""` |

 `msTeams.teamsAppID` is the MsTeams app ID used by the plugin.
-See [the MsTeams guide](../../admin-guides/access-controls/access-request-plugins/ssh-approval-msteams.mdx) to know how to get this value.
+See [the MsTeams guide](../../identity-governance/access-requests/plugins/msteams.mdx) to learn how to get this value.

 This value is mandatory.

@@ -177,6 +179,77 @@ The value can also be set to a file path (such as `/var/log/teleport.log`)
 to write logs to a file. Bear in mind that a few service startup messages
 will still go to `stderr` for resilience.

+## `tbot`
+
+`tbot` controls the optional tbot deployment that obtains and renews
+credentials for the plugin to connect to Teleport.
+Only default and mandatory values are described here; see the tbot chart reference
+for the full list of supported values.
+
+### `tbot.enabled`
+
+| Type | Default |
+|------|---------|
+| `bool` | `false` |
+
+`tbot.enabled` controls whether tbot should be deployed with the MS Teams plugin.
+
+### `tbot.clusterName`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`tbot.clusterName` is the name of the Teleport cluster tbot and the MS Teams plugin will join.
+Setting this value is mandatory when tbot is enabled.
+
+### `tbot.teleportProxyAddress`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`tbot.teleportProxyAddress` is the Teleport Proxy Service address the bot will connect to.
+This must contain the port number, usually 443 or 3080 for the Proxy Service.
+Connecting to the Proxy Service is the most common and recommended way to connect to Teleport.
+This is mandatory when connecting to Teleport Enterprise (Cloud).
+
+This setting is mutually exclusive with `teleportAuthAddress`.
+
+For example:
+```yaml
+tbot:
+  teleportProxyAddress: "test.teleport.sh:443"
+```
+
+### `tbot.teleportAuthAddress`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`tbot.teleportAuthAddress` is the Teleport Auth Service address the bot will connect to.
+This must contain the port number, usually 3025 for the Auth Service. A direct Auth Service
+connection should be used when the bot is deployed in the same Kubernetes cluster as your
+`teleport-cluster` Helm release and has direct access to the Auth Service.
+Otherwise, prefer connecting via the Proxy Service.
+
+This setting is mutually exclusive with `teleportProxyAddress`.
+
+For example:
+```yaml
+teleportAuthAddress: "teleport-auth.teleport-namespace.svc.cluster.local:3025"
+```
+
+### `tbot.joinMethod`
+
+| Type | Default |
+|------|---------|
+| `string` | `"kubernetes"` |
+
+`tbot.joinMethod` describes how tbot joins the Teleport cluster.
+See [the join method reference](../../reference/deployment/join-methods.mdx) for a list of supported values and detailed explanations.
+
 ## `annotations`

 `annotations` contains annotations to apply to the different Kubernetes
diff --git a/docs/pages/includes/helm-reference/zz_generated.access-pagerduty.mdx b/docs/pages/includes/helm-reference/zz_generated.access-pagerduty.mdx
index 5acf238939fa7..a6f2bdb2f38e1 100644
--- a/docs/pages/includes/helm-reference/zz_generated.access-pagerduty.mdx
+++ b/docs/pages/includes/helm-reference/zz_generated.access-pagerduty.mdx
@@ -21,6 +21,9 @@ For example:
   - joining a Proxy: `teleport.example.com:443` or `teleport.example.com:3080`
   - joining an Auth: `teleport-auth.example.com:3025`
+When the address is empty, `tbot.teleportProxyAddress`
+or `tbot.teleportAuthAddress` will be used if they are set.
+
 ### `teleport.identitySecretName`

 | Type | Default |
@@ -43,7 +46,7 @@ data:
 ```

 Check out the [Access Requests with
-PagerDuty](../../admin-guides/access-controls/access-request-plugins/ssh-approval-pagerduty.mdx) guide
+PagerDuty](../../identity-governance/access-requests/plugins/pagerduty.mdx) guide
 for more information about how to acquire these credentials.

 ### `teleport.identitySecretPath`

@@ -132,6 +135,77 @@ The value can also be set to a file path (such as `/var/log/teleport.log`)
 to write logs to a file. Bear in mind that a few service startup messages
 will still go to `stderr` for resilience.

+## `tbot`
+
+`tbot` controls the optional tbot deployment that obtains and renews
+credentials for the plugin to connect to Teleport.
+Only default and mandatory values are described here; see the tbot chart reference
+for the full list of supported values.
+
+### `tbot.enabled`
+
+| Type | Default |
+|------|---------|
+| `bool` | `false` |
+
+`tbot.enabled` controls whether tbot should be deployed with the PagerDuty plugin.
+
+### `tbot.clusterName`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`tbot.clusterName` is the name of the Teleport cluster tbot and the PagerDuty plugin will join.
+Setting this value is mandatory when tbot is enabled.
+
+### `tbot.teleportProxyAddress`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`tbot.teleportProxyAddress` is the Teleport Proxy Service address the bot will connect to.
+This must contain the port number, usually 443 or 3080 for the Proxy Service.
+Connecting to the Proxy Service is the most common and recommended way to connect to Teleport.
+This is mandatory when connecting to Teleport Enterprise (Cloud).
+
+This setting is mutually exclusive with `teleportAuthAddress`.
+
+For example:
+```yaml
+tbot:
+  teleportProxyAddress: "test.teleport.sh:443"
+```
+
+### `tbot.teleportAuthAddress`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`tbot.teleportAuthAddress` is the Teleport Auth Service address the bot will connect to.
+This must contain the port number, usually 3025 for the Auth Service. A direct Auth Service
+connection should be used when the bot is deployed in the same Kubernetes cluster as your
+`teleport-cluster` Helm release and has direct access to the Auth Service.
+Otherwise, prefer connecting via the Proxy Service.
+
+This setting is mutually exclusive with `teleportProxyAddress`.
+
+For example:
+```yaml
+teleportAuthAddress: "teleport-auth.teleport-namespace.svc.cluster.local:3025"
+```
+
+### `tbot.joinMethod`
+
+| Type | Default |
+|------|---------|
+| `string` | `"kubernetes"` |
+
+`tbot.joinMethod` describes how tbot joins the Teleport cluster.
+See [the join method reference](../../reference/deployment/join-methods.mdx) for a list of supported values and detailed explanations.
+
 ## `annotations`

 `annotations` contains annotations to apply to the different Kubernetes
diff --git a/docs/pages/includes/helm-reference/zz_generated.access-slack.mdx b/docs/pages/includes/helm-reference/zz_generated.access-slack.mdx
index 5252bc39f1d62..b0a9a8447d182 100644
--- a/docs/pages/includes/helm-reference/zz_generated.access-slack.mdx
+++ b/docs/pages/includes/helm-reference/zz_generated.access-slack.mdx
@@ -46,7 +46,7 @@ data:
 ```

 Check out the [Access Requests with
-Slack](../../admin-guides/access-controls/access-request-plugins/ssh-approval-slack.mdx) guide
+Slack](../../identity-governance/access-requests/plugins/slack.mdx) guide
 for more information about how to acquire these credentials.
### `teleport.identitySecretPath` @@ -216,7 +216,7 @@ teleportAuthAddress: "teleport-auth.teleport-namespace.svc.cluster.local:3025" | `string` | `"kubernetes"` | `tbot.joinMethod` describes how tbot joins the Teleport cluster. -See [the join method reference](../../reference/join-methods.mdx) for a list fo supported values and detailed explanations. +See [the join method reference](../../reference/deployment/join-methods.mdx) for a list of supported values and detailed explanations. ## `annotations` diff --git a/docs/pages/includes/helm-reference/zz_generated.event-handler.mdx b/docs/pages/includes/helm-reference/zz_generated.event-handler.mdx new file mode 100644 index 0000000000000..ef51324c1ad5e --- /dev/null +++ b/docs/pages/includes/helm-reference/zz_generated.event-handler.mdx @@ -0,0 +1,618 @@ + +{/* Generated file. Do not edit.*/} +{/* Generate this file by navigating to examples/chart and running make render-chart-ref*/} +## `teleport` + +`teleport` contains the configuration describing how the plugin connects to +your Teleport cluster. + +### `teleport.address` + +| Type | Default | +|------|---------| +| `string` | `""` | + +`teleport.address` is the address of the Teleport cluster the plugin +connects to. The address must contain both the domain name and the port of +the Teleport cluster. It can be either the address of the auth servers or the +proxy servers. + +For example: + - joining a Proxy: `teleport.example.com:443` or `teleport.example.com:3080` + - joining an Auth: `teleport-auth.example.com:3025` + +When the address is empty, `tbot.teleportProxyAddress` +or `tbot.teleportAuthAddress` will be used if they are set. + +### `teleport.identitySecretName` + +| Type | Default | +|------|---------| +| `string` | `""` | + +`teleport.identitySecretName` is the name of the Kubernetes secret +that contains the credentials for the connection to your Teleport cluster. 
+
+The secret should be in the following format:
+
+```yaml
+apiVersion: v1
+kind: Secret
+type: Opaque
+metadata:
+  name: teleport-plugin-event-handler-identity
+data:
+  auth_id: #...
+```
+
+Check out the [Export Events with
+Fluentd](../../zero-trust-access/export-audit-events/fluentd.mdx) guide
+for more information about how to acquire these credentials.
+
+### `teleport.identitySecretPath`
+
+| Type | Default |
+|------|---------|
+| `string` | `"auth_id"` |
+
+`teleport.identitySecretPath` is the key in the Kubernetes secret
+specified by `teleport.identitySecretName` that holds the credentials for
+the connection to your Teleport cluster. If the secret key is
+`"auth_id"`, you can omit this field.
+
+## `eventHandler`
+
+`eventHandler` contains the configuration used by the plugin to forward Teleport events.
+
+### `eventHandler.storagePath`
+
+| Type | Default |
+|------|---------|
+| `string` | `"/var/lib/teleport/plugins/event-handler/storage"` |
+
+`eventHandler.storagePath` is the storage directory for the event handler.
+
+### `eventHandler.timeout`
+
+| Type | Default |
+|------|---------|
+| `string` | `"10s"` |
+
+`eventHandler.timeout` is the polling timeout.
+
+### `eventHandler.batch`
+
+| Type | Default |
+|------|---------|
+| `int` | `20` |
+
+`eventHandler.batch` is the fetch batch size.
+
+### `eventHandler.windowSize`
+
+| Type | Default |
+|------|---------|
+| `string` | `"24h"` |
+
+`eventHandler.windowSize` configures the duration of the time window for the event handler
+to request events from Teleport. By default, this is set to 24 hours.
+Reduce the window size if the events backend cannot manage the event volume
+of the default window.
+The window size should be specified as a duration string, parsed by Go's time.ParseDuration.
+
+### `eventHandler.debug`
+
+| Type | Default |
+|------|---------|
+| `bool` | `false` |
+
+`eventHandler.debug` enables debug logging.
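+
+As a sketch, the polling values above combine like this in a values file (the
+12-hour window is an arbitrary example, not a recommendation):
+
+```yaml
+eventHandler:
+  storagePath: "/var/lib/teleport/plugins/event-handler/storage"
+  timeout: "10s"
+  batch: 20
+  # Shrink the window if the events backend cannot keep up with 24h of events.
+  windowSize: "12h"
+```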
+ +### `eventHandler.types` + +| Type | Default | +|------|---------| +| `list` | `[]` | + +`eventHandler.types` is the list of event types to forward. +When unset, the event handler forwards all events. + +### `eventHandler.skipEventTypes` + +| Type | Default | +|------|---------| +| `list` | `[]` | + +`eventHandler.skipEventTypes` is the list of audit event types to skip. + +### `eventHandler.skipSessionTypes` + +| Type | Default | +|------|---------| +| `list` | `[]` | + +`eventHandler.skipSessionTypes` is the list of session recording event types to skip. +When unset, the event handler skips noisy and binary events. + +See the [Teleport-event-handler README](https://github.com/gravitational/teleport/blob/1d2bd5eb8fc3500deb7d7108f6835efde98b7b24/integrations/event-handler/README.md) +for a list of default skipped events. + +### `eventHandler.startTime` + +| Type | Default | +|------|---------| +| `string` | `""` | + +`eventHandler.startTime` is the start time to start ingestion from (RFC3339 format). + +### `eventHandler.dryRun` + +| Type | Default | +|------|---------| +| `bool` | `false` | + +`eventHandler.dryRun` enables dry run without sending events to fluentd. + +### `eventHandler.concurrency` + +| Type | Default | +|------|---------| +| `int` | `5` | + +`eventHandler.concurrency` is the number of concurrent sessions to process. By default, this is set to 5. + +#### `eventHandler.lock.enabled` + +| Type | Default | +|------|---------| +| `bool` | `false` | + +`eventHandler.lock.enabled` controls whether user auto-locking is enabled. + +#### `eventHandler.lock.failedAttemptsCount` + +| Type | Default | +|------|---------| +| `int` | `3` | + +`eventHandler.lock.failedAttemptsCount` is the number of failed attempts in the `lockPeriod` which +triggers locking. By default, this is set to 3. 
+
+#### `eventHandler.lock.period`
+
+| Type | Default |
+|------|---------|
+| `string` | `"1m"` |
+
+`eventHandler.lock.period` is the time period in which `lock-failed-attempts-count` failed attempts
+trigger locking. By default, this is set to 1 minute.
+
+#### `eventHandler.lock.for`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`eventHandler.lock.for` is the time period for which the user is locked.
+
+## `fluentd`
+
+`fluentd` contains the configuration for the fluentd forwarder.
+
+### `fluentd.url`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`fluentd.url` is the Fluentd URL where the events will be sent.
+
+### `fluentd.sessionUrl`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`fluentd.sessionUrl` is the Fluentd URL where the session logs will be sent.
+
+#### `fluentd.certificate.secretName`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`fluentd.certificate.secretName` is the secret containing the credentials to connect to Fluentd.
+It must contain the CA certificate, the client key, and the client certificate.
+
+#### `fluentd.certificate.caPath`
+
+| Type | Default |
+|------|---------|
+| `string` | `"ca.crt"` |
+
+`fluentd.certificate.caPath` is the name of the key which contains the CA certificate inside the secret.
+
+#### `fluentd.certificate.certPath`
+
+| Type | Default |
+|------|---------|
+| `string` | `"client.crt"` |
+
+`fluentd.certificate.certPath` is the name of the key which contains the client's certificate inside the secret.
+
+#### `fluentd.certificate.keyPath`
+
+| Type | Default |
+|------|---------|
+| `string` | `"client.key"` |
+
+`fluentd.certificate.keyPath` is the name of the key which contains the client's private key inside the secret.
+
+### `fluentd.maxConnections`
+
+| Type | Default |
+|------|---------|
+| `int` | `0` |
+
+`fluentd.maxConnections` is the maximum number of connections to Fluentd. By default, or when set to 0,
+this becomes `eventHandler.concurrency` * 2.
+
+## `tbot`
+
+`tbot` controls the optional tbot deployment that obtains and renews
+credentials for the plugin to connect to Teleport.
+Only default and mandatory values are described here; see the tbot chart reference
+for the full list of supported values.
+
+### `tbot.enabled`
+
+| Type | Default |
+|------|---------|
+| `bool` | `false` |
+
+`tbot.enabled` controls whether tbot should be deployed with the event handler plugin.
+
+### `tbot.clusterName`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`tbot.clusterName` is the name of the Teleport cluster tbot and the event handler plugin will join.
+Setting this value is mandatory when tbot is enabled.
+
+### `tbot.teleportProxyAddress`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`tbot.teleportProxyAddress` is the Teleport Proxy Service address the bot will connect to.
+This must contain the port number, usually 443 or 3080 for the Proxy Service.
+Connecting to the Proxy Service is the most common and recommended way to connect to Teleport.
+This is mandatory when connecting to Teleport Enterprise (Cloud).
+
+This setting is mutually exclusive with `teleportAuthAddress`.
+
+For example:
+```yaml
+tbot:
+  teleportProxyAddress: "test.teleport.sh:443"
+```
+
+### `tbot.teleportAuthAddress`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`tbot.teleportAuthAddress` is the Teleport Auth Service address the bot will connect to.
+This must contain the port number, usually 3025 for the Auth Service. A direct Auth Service
+connection should be used when the bot is deployed in the same Kubernetes cluster as your
+`teleport-cluster` Helm release and has direct access to the Auth Service.
+Otherwise, prefer connecting via the Proxy Service.
+
+This setting is mutually exclusive with `teleportProxyAddress`.
+ +For example: +```yaml +teleportAuthAddress: "teleport-auth.teleport-namespace.svc.cluster.local:3025" +``` + +### `tbot.joinMethod` + +| Type | Default | +|------|---------| +| `string` | `"kubernetes"` | + +`tbot.joinMethod` describes how tbot joins the Teleport cluster. +See [the join method reference](../../reference/deployment/join-methods.mdx) for a list of supported values and detailed explanations. + +## `annotations` + +`annotations` contains annotations to apply to the different Kubernetes +objects created by the chart. See [the Kubernetes annotation +documentation](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) +for more details. + +### `annotations.config` + +| Type | Default | +|------|---------| +| `object` | `{}` | + +`annotations.config` are annotations to set on the ConfigMap. + +### `annotations.deployment` + +| Type | Default | +|------|---------| +| `object` | `{}` | + +`annotations.deployment` are annotations to set on the Deployment. + +### `annotations.pod` + +| Type | Default | +|------|---------| +| `object` | `{}` | + +`annotations.pod` are annotations to set on the Pods. + +## `extraLabels` + +`extraLabels` contains additional Kubernetes labels to apply on the resources +created by the chart. See [the Kubernetes label documentation +](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) +for more information. + +### `extraLabels.config` + +| Type | Default | +|------|---------| +| `object` | `{}` | + +`extraLabels.config` are labels to set on the ConfigMap. + +### `extraLabels.deployment` + +| Type | Default | +|------|---------| +| `object` | `{}` | + +`extraLabels.deployment` are labels to set on the Deployment. + +### `extraLabels.pod` + +| Type | Default | +|------|---------| +| `object` | `{}` | + +`extraLabels.pod` are labels to set on the Pods. + +## `image` + +`image` sets the container image used for plugin pods created by the chart. 
+ +You can override this to use your own plugin image rather than a Teleport-published image. + +### `image.repository` + +| Type | Default | +|------|---------| +| `string` | `"public.ecr.aws/gravitational/teleport-plugin-event-handler"` | + +`image.repository` is the image repository. + +### `image.pullPolicy` + +| Type | Default | +|------|---------| +| `string` | `"IfNotPresent"` | + +`image.pullPolicy` is the [Kubernetes image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). + +### `image.tag` + +| Type | Default | +|------|---------| +| `string` | `""` | + +`image.tag` overrides the image tag; the default is the chart's `appVersion`. + +Normally, the version of the Teleport plugin matches the +version of the chart. If you install chart version 15.0.0, you'll use +the plugin version 15.0.0. Upgrading the plugin is done by upgrading the chart. + + +`image.tag` is intended for development and custom tags. This MUST NOT be +used to control the plugin version in a typical deployment. This +chart is designed to run a specific plugin version. You will face +compatibility issues trying to run a different version with it. + +If you want to run the Teleport plugin version `X.Y.Z`, you should use +`helm install --version X.Y.Z` instead. + + +## `imagePullSecrets` + +| Type | Default | +|------|---------| +| `list` | `[]` | + +`imagePullSecrets` is a list of secrets containing authorization tokens +which can be optionally used to access a private Docker registry. + +See the [Kubernetes reference](https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) for more details. + +## `nameOverride` + +| Type | Default | +|------|---------| +| `string` | `""` | + +`nameOverride` optionally overrides the name of the chart, used +together with the release name when giving a name to resources. 
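For example, a hypothetical values snippet pulling the plugin image from a private registry mirror (the registry host and secret name below are placeholders, not chart defaults):

```yaml
image:
  repository: "registry.example.com/teleport-plugin-event-handler"
  pullPolicy: "IfNotPresent"
imagePullSecrets:
  - name: "my-registry-credentials"
```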
+ +## `fullnameOverride` + +| Type | Default | +|------|---------| +| `string` | `""` | + +`fullnameOverride` optionally overrides the full name of resources. + +## `podSecurityContext` + +| Type | Default | +|------|---------| +| `object` | `{}` | + +`podSecurityContext` sets the pod security context for any pods created by the chart. +See [the Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod) +for more details. + +To unset the security context, set it to `null` or `~`. + +## `securityContext` + +| Type | Default | +|------|---------| +| `object` | `{}` | + +`securityContext` sets the container security context for any pods created by the chart. +See [the Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container) +for more details. + +To unset the security context, set it to `null` or `~`. + +## `resources` + +| Type | Default | +|------|---------| +| `object` | `{}` | + +`resources` sets the resource requests/limits for any pods created by the chart. +See [the Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) +for more details. + +## `nodeSelector` + +| Type | Default | +|------|---------| +| `object` | `{}` | + +`nodeSelector` sets the node selector for any pods created by the chart. +See [the Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector) +for more details. + +## `tls` + +`tls` contains settings for mounting your own TLS material in the event-handler pod. +The event-handler does not expose a TLS server, so this is only used to trust CAs. + +### `tls.existingCASecretName` + +| Type | Default | +|------|---------| +| `string` | `""` | + +`tls.existingCASecretName` sets the `SSL_CERT_FILE` environment +variable to load a trusted CA or bundle in PEM format into Teleport pods. 
+The injected CA will be used to validate TLS communications with the Proxy +Service. + +You must create a secret containing the CA certs in the same namespace as Teleport using a command like: + +```code +$ kubectl create secret generic my-root-ca --from-file=ca.pem=/path/to/root-ca.pem +``` + +### `tls.existingCASecretKeyName` + +| Type | Default | +|------|---------| +| `string` | `"ca.pem"` | + +`tls.existingCASecretKeyName` determines which key in the CA secret +will be used as a trusted CA bundle file. + +## `tolerations` + +| Type | Default | +|------|---------| +| `list` | `[]` | + +`tolerations` sets the tolerations for any pods created by the chart. +See [the Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) +for more details. + +## `affinity` + +| Type | Default | +|------|---------| +| `object` | `{}` | + +`affinity` sets the affinities for any pods created by the chart. +See [the Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) +for more details. + +## `volumes` + +| Type | Default | +|------|---------| +| `list` | `[]` | + +`volumes` sets the volumes mounted into the main event-handler pod. +See [the Kubernetes volume +documentation](https://kubernetes.io/docs/concepts/storage/volumes/) for more +details. + +For example: +```yaml +- name: storage + persistentVolumeClaim: + claimName: teleport-plugin-event-handler +``` + +## `volumeMounts` + +| Type | Default | +|------|---------| +| `list` | `[]` | + +`volumeMounts` sets the volume mounts for the main event-handler container. +See [the Kubernetes volume +documentation](https://kubernetes.io/docs/concepts/storage/volumes/) for more +details. 
+ +For example: +```yaml +- name: storage + mountPath: "/var/lib/teleport/plugins/event-handler/storage" +``` + +## `extraArgs` + +| Type | Default | +|------|---------| +| `list` | `[]` | + +`extraArgs` contains extra arguments to pass to `teleport-plugin start` for +the main event-handler container. + +## `extraEnv` + +| Type | Default | +|------|---------| +| `list` | `[]` | + +`extraEnv` contains extra environment variables to set in the main +event-handler container. + +For example: +```yaml +extraEnv: + - name: HTTPS_PROXY + value: "http://username:password@my.proxy.host:3128" +``` diff --git a/docs/pages/includes/helm-reference/zz_generated.tbot.mdx b/docs/pages/includes/helm-reference/zz_generated.tbot.mdx index d9a5e36600eec..9236240bffb94 100644 --- a/docs/pages/includes/helm-reference/zz_generated.tbot.mdx +++ b/docs/pages/includes/helm-reference/zz_generated.tbot.mdx @@ -35,7 +35,7 @@ This must contain the port number, usually 443 or 3080 for Proxy Service. Connecting to the Proxy Service is the most common and recommended way to connect to Teleport. This is mandatory to connect to Teleport Enterprise (Cloud) -This setting is mutually exclusive with teleportProxyAddress and is ignored if `customConfig` is set. +This setting is mutually exclusive with teleportProxyAddress and is ignored if `tbotConfig` is set. For example: ```yaml @@ -54,7 +54,7 @@ should be used when you are deploying the bot in the same Kubernetes cluster tha Helm release and have direct access to the Auth Service. Else, you should prefer connecting via the Proxy Service. -This setting is mutually exclusive with teleportProxyAddress and is ignored if `customConfig` is set. +This setting is mutually exclusive with teleportProxyAddress and is ignored if `tbotConfig` is set. For example: ```yaml @@ -64,7 +64,7 @@ teleportAuthAddress: "teleport-auth.teleport-namespace.svc.cluster.local:3025" ## `defaultOutput` `defaultOutput` controls the default output configured for the tbot agent. 
-Ignored if `customConfig` is set. +Ignored if `tbotConfig` is set. ### `defaultOutput.enabled` @@ -74,6 +74,127 @@ Ignored if `customConfig` is set. `defaultOutput.enabled` controls whether the default output is enabled. +## `argocd` + +`argocd` configures tbot to synchronize Teleport-managed Kubernetes clusters +to Argo CD. +Ignored if `tbotConfig` is set. + +### `argocd.enabled` + +| Type | Default | +|------|---------| +| `bool` | `false` | + +`argocd.enabled` controls whether the Argo CD output is enabled. + +### `argocd.clusterSelectors` + +| Type | Default | +|------|---------| +| `list` | `[]` | + +`argocd.clusterSelectors` determines which Kubernetes clusters will +be synchronized to Argo CD. + +For example: +```yaml +clusterSelectors: + - name: my-cluster-1 + - labels: + environment: production +``` + +### `argocd.secretNamespace` + +| Type | Default | +|------|---------| +| `string` | `""` | + +`argocd.secretNamespace` determines to which Kubernetes namespace +cluster secrets will be written (it must be the namespace in which Argo CD +is running). Defaults to the current namespace. + +### `argocd.secretNamePrefix` + +| Type | Default | +|------|---------| +| `string` | `""` | + +`argocd.secretNamePrefix` overrides the string that cluster secret +names will be prefixed with. Defaults to "teleport.argocd-cluster". + +### `argocd.secretLabels` + +| Type | Default | +|------|---------| +| `object` | `{}` | + +`argocd.secretLabels` provides a set of labels that will be applied +to cluster secrets. Label values can be Go template strings (rendered by tbot, +not Helm) with the following variables: + + - \{\{.ClusterName\}\} - Name of the Teleport cluster + - \{\{.KubeName\}\} - Name of the Kubernetes cluster resource + - \{\{.Labels\}\} - Map of labels applied to the Kubernetes cluster + resource that can be indexed using `\{\{index .Labels "key"\}\}` + +If the label value is empty, the label will not be added to the secret. 
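For example, a hypothetical set of templated secret labels (the label keys here are arbitrary illustrations, and the values are rendered by tbot, not Helm):

```yaml
argocd:
  secretLabels:
    teleport.dev/kube-cluster: "{{ .KubeName }}"
    environment: "{{ index .Labels \"env\" }}"
```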
+ +### `argocd.secretAnnotations` + +| Type | Default | +|------|---------| +| `object` | `{}` | + +`argocd.secretAnnotations` provides a set of annotations that will +be applied to cluster secrets. + +### `argocd.project` + +| Type | Default | +|------|---------| +| `string` | `""` | + +`argocd.project` sets the Argo CD project with which the Kubernetes +clusters will be associated. + +### `argocd.namespaces` + +| Type | Default | +|------|---------| +| `list` | `[]` | + +`argocd.namespaces` controls which Kubernetes namespaces the Argo CD +clusters will be allowed to operate on. + +### `argocd.clusterResources` + +| Type | Default | +|------|---------| +| `bool` | `false` | + +`argocd.clusterResources` determines whether the Argo CD cluster is +allowed to operate on cluster-scoped resources (only when `argocd.namespaces` +is non-empty). + +### `argocd.clusterNameTemplate` + +| Type | Default | +|------|---------| +| `string` | `""` | + +`argocd.clusterNameTemplate` determines the format of cluster names +in Argo CD. It is a Go template string (rendered by tbot, not Helm) that +supports the following variables: + + - \{\{.ClusterName\}\} - Name of the Teleport cluster + - \{\{.KubeName\}\} - Name of the Kubernetes cluster resource + - \{\{.Labels\}\} - Map of labels applied to the Kubernetes cluster + resource that can be indexed using `\{\{index .Labels "key"\}\}` + +By default, the following template will be used: "\{\{.ClusterName\}\}-\{\{.KubeName\}\}". + ## `persistence` `persistence` controls how the tbot agent stores its data. @@ -102,7 +223,7 @@ use the more specific configuration values throughout this chart. `outputs` contains additional outputs to configure for the tbot agent. These should be in the same format as the `outputs` field in the tbot.yaml. -Ignored if `customConfig` is set. +Ignored if `tbotConfig` is set. ## `services` @@ -112,7 +233,7 @@ Ignored if `customConfig` is set. 
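Putting these values together, a hypothetical Argo CD synchronization configuration might look like this (the namespace and selector values are placeholders for your environment):

```yaml
argocd:
  enabled: true
  clusterSelectors:
    - labels:
        environment: production
  secretNamespace: "argocd"
  clusterNameTemplate: "{{ .ClusterName }}-{{ .KubeName }}"
```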
`services` contains additional services to configure for the tbot agent. These should be in the same format as the `services` field in the tbot.yaml. -Ignored if `customConfig` is set. +Ignored if `tbotConfig` is set. ## `joinMethod` @@ -121,8 +242,8 @@ Ignored if `customConfig` is set. | `string` | `"kubernetes"` | `joinMethod` describes how tbot joins the Teleport cluster. -See [the join method reference](../../reference/join-methods.mdx) for a list fo supported values and detailed explanations. -Ignored if `customConfig` is set. +See [the join method reference](../../reference/deployment/join-methods.mdx) for a list of supported values and detailed explanations. +Ignored if `tbotConfig` is set. ## `token` @@ -132,7 +253,7 @@ Ignored if `customConfig` is set. `token` is the name of the token used by tbot to join the Teleport cluster. This value is not sensitive unless the `joinMethod` is set to `"token"`. -Ignored if `customConfig` is set. +Ignored if `tbotConfig` is set. ## `teleportVersionOverride` @@ -462,3 +583,15 @@ See [the Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-po for more details. By default, this is unset. + +## `podSecurityContext` + +| Type | Default | +|------|---------| +| `object` | `null` | + +`podSecurityContext` sets the pod security context for any pods created by the chart. +See [the Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod) +for more details. + +By default, this is unset. 
diff --git a/docs/pages/includes/helm-reference/zz_generated.teleport-kube-agent.mdx b/docs/pages/includes/helm-reference/zz_generated.teleport-kube-agent.mdx index 67e3f1a95c4d3..e846c48b32b78 100644 --- a/docs/pages/includes/helm-reference/zz_generated.teleport-kube-agent.mdx +++ b/docs/pages/includes/helm-reference/zz_generated.teleport-kube-agent.mdx @@ -41,6 +41,21 @@ Here are a few examples: | Teleport Cloud cluster | `example.teleport.sh:443` | | `teleport-cluster` Helm chart | `teleport.example.com:443` | +## `relayAddr` + +| Type | Default | +|------|---------| +| `string` | `""` | + +`relayAddr` defines the optional address of the Teleport Relay +Service tunnel endpoint that this agent should use. + +If set, the agent will open reverse tunnels to the specified Relay in addition +to the tunnels to the control plane, for the protocols supported by the Relay. + +relayAddr is available starting from Teleport v18.5.0, taking effect for the +Kubernetes service. + ## `enterprise` | Type | Default | @@ -93,14 +108,14 @@ Its usage should be preferred. `joinParams.method` controls which join method will be used by the instance to join the Teleport cluster. -See [the join method reference](../../reference/join-methods.mdx) for the list of possible +See [the join method reference](../../reference/deployment/join-methods.mdx) for the list of possible values, the implications of each join method, and guides to set up each method. Common join-methods for the `teleport-kube-agent` are: - `token`: the most basic one, with regular ephemeral secret tokens - `kubernetes`: either the `in-cluster` variant (if the agent runs in the - same Kubernetes cluster as the `teleport-cluster` chart) or the `JWKS` - variant (works in every Kubernetes cluster, regardless of the Teleport Auth + same Kubernetes cluster as the `teleport-cluster` chart) or the `JWKS/OIDC` + variants (work in every Kubernetes cluster, regardless of the Teleport Auth Service location). 
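For example, a minimal sketch of joining with the in-cluster Kubernetes method (the token name below is a placeholder for a provision token you have created):

```yaml
joinParams:
  method: "kubernetes"
  tokenName: "kube-agent-join-token"
```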
### `joinParams.tokenName` @@ -112,7 +127,7 @@ Common join-methods for the `teleport-kube-agent` are: `joinParams.tokenName` controls which token is used by the agent to join the Teleport cluster. -When `joinParams.method` is [a delegated join method](../../reference/join-methods.mdx#delegated-join-methods), +When `joinParams.method` is [a delegated join method](../../reference/deployment/join-methods.mdx#delegated-join-methods), the value is not sensitive. When `joinParams.method` is `token` (by default), `joinParams.tokenName` @@ -124,6 +139,21 @@ If method is `token`, `joinParams.tokenName` can be empty if the token is provided through an existing Kubernetes Secret, see [`joinTokenSecret`](#joinTokenSecret) for more details and instructions. +If method is `kubernetes`, you must set [`teleportClusterName`](#teleportClusterName). + +## `teleportClusterName` + +| Type | Default | +|------|---------| +| `string` | `""` | + +`teleportClusterName` is the name of the joined Teleport cluster. +Setting this value is required when joining via the +[Kubernetes JWKS or OIDC](../../reference/deployment/join-methods.mdx#kubernetes-jwks) join method. + +When this value is set, the chart mounts a kubernetes service account token +via a projected volume and configures Teleport to use it for joining. + ## `kubeClusterName` | Type | Default | @@ -143,7 +173,7 @@ This setting is required if the chart `roles` contains `kube`. | `list` | `[]` | `apps` is a static list of applications that should be proxied by -the agent. See [the Teleport Application access documentation](../../reference/agent-services/application-access.mdx#configuration) +the agent. See [the Teleport Application access documentation](../../enroll-resources/application-access/reference.mdx#configuration) for more details. 
Proxied applications can be defined statically (through this value) or dynamically @@ -168,7 +198,7 @@ apps: You can see a list of all the supported values that can be used in a Teleport Application Service configuration in the [Application Service Configuration - Reference](../../reference/agent-services/application-access.mdx#configuration). + Reference](../../enroll-resources/application-access/reference.mdx#configuration). ## `appResources` @@ -179,7 +209,7 @@ apps: `appResources` is a set of labels the agent will monitor. Any application matching those labels will be proxied by the agent. See [the Teleport -Application access documentation](../../enroll-resources/application-access/introduction.mdx) +Application access documentation](../../enroll-resources/application-access/application-access.mdx) for more details. Proxied applications can be defined statically (through [`apps`](#apps)) or @@ -198,7 +228,7 @@ appResources: Once `appResources` is set, you can dynamically register application with -`tsh` by following [the Dynamic App Registration guide](../../enroll-resources/application-access/guides/dynamic-registration.mdx). +`tsh` by following [the Dynamic App Registration guide](../../enroll-resources/application-access/configuration/dynamic-registration.mdx). ## `clusterDomain` @@ -221,7 +251,7 @@ to match your cluster domain if it is different from the default value `cluster. `awsDatabases` configures AWS database auto-discovery. - For AWS database auto-discovery to work, your Database Service pods will need to use a role which has appropriate IAM permissions as per the [database documentation](../../enroll-resources/database-access/enroll-aws-databases/rds.mdx#step-36-create-iam-policies-for-teleport). 
+ For AWS database auto-discovery to work, your Database Service pods will need to use a role which has appropriate IAM permissions as per the [database documentation](../../enroll-resources/database-access/enrollment/aws/rds/mysql-postgres-mariadb.mdx#step-36-create-iam-policies-for-teleport). After configuring a role, you can use an `eks.amazonaws.com/role-arn` annotation with the `annotations.serviceAccount` value to associate it with the service account and grant permissions: ```yaml @@ -266,7 +296,7 @@ annotations: `azureDatabases` configures Azure database auto-discovery. - For Azure database auto-discovery to work, your Database Service pods will need to have appropriate IAM permissions as per the [database documentation](../../enroll-resources/database-access/enroll-azure-databases/azure-postgres-mysql.mdx#step-25-configure-iam-permissions-for-teleport). + For Azure database auto-discovery to work, your Database Service pods will need to have appropriate IAM permissions as per the [database documentation](../../enroll-resources/database-access/enrollment/azure/azure-postgres-mysql.mdx#step-25-configure-iam-permissions-for-teleport). After configuring a service principal with appropriate IAM permissions, you must pass credentials to the pods. The easiest way is to use an Azure client secret. @@ -372,7 +402,7 @@ databases: ``` - You can see a list of all the supported [values which can be used in a Teleport Database Service configuration here](../../reference/agent-services/database-access-reference/configuration.mdx). + You can see a list of all the supported [values which can be used in a Teleport Database Service configuration here](../../enroll-resources/database-access/reference/configuration.mdx). @@ -617,7 +647,7 @@ which does not have valid TLS certificates for testing. Teleport pods. The configuration will be merged with the chart-generated configuration and will take precedence in case of conflict. 
-See the [Teleport Configuration Reference](../../reference/config.mdx) for the list of supported fields. +See the [Teleport Configuration Reference](../../reference/deployment/config.mdx) for the list of supported fields. ```yaml teleportConfig: @@ -675,9 +705,14 @@ You must create a secret containing the CA certs in the same namespace as Telepo $ kubectl create secret generic my-root-ca --from-file=ca.pem=/path/to/root-ca.pem ``` - - The key containing the root CA in the secret must be `ca.pem`. - +### `tls.existingCASecretKeyName` + +| Type | Default | +|------|---------| +| `string` | `"ca.pem"` | + +`tls.existingCASecretKeyName` determines which key in the CA secret +will be used as a trusted CA bundle file. ## `updater` @@ -779,7 +814,7 @@ It is also used as a failover when fetching the version using the webapi protoco | `string` | `""` | `updater.group` is the update group used when fetching the version using the webapi protocol. -When unset, the group defaults to `update.releaseChannel`. +When unset, the group defaults to `default`. ### `updater.image` @@ -893,7 +928,7 @@ PodSecurityPolicy resources (PSP) have been removed in Kubernetes 1.25 and replaced since 1.23 by PodSecurityAdmission (PSA). If you are running on Kubernetes 1.23 or later, it is recommended to disable PSPs and use PSAs. The steps are documented in the -[PSP removal guide](../../admin-guides/deploy-a-cluster/helm-deployments/migration-kubernetes-1-25-psp.mdx). +[PSP removal guide](../../zero-trust-access/deploy-a-cluster/helm-deployments/migration-kubernetes-1-25-psp.mdx). This value will be removed in a future chart version. @@ -919,7 +954,7 @@ This is only used when the [`roles`](#roles) contains `kube`. To set labels for applications, add a `labels` element to the [`apps`](#apps) section. To set labels for databases, add a `static_labels` element to the [`databases`](#databases) section. 
- For more information on how to set static/dynamic labels for Teleport services, see [labelling nodes and applications](../../admin-guides/management/admin/labels.mdx). + For more information on how to set static/dynamic labels for Teleport services, see [labelling nodes and applications](../../zero-trust-access/rbac-get-started/labels.mdx). For example: @@ -1020,7 +1055,7 @@ available pod specified on the PodDisruptionBudget. This CRD is managed by the prometheus-operator and allows workload to get monitored. To use this value, you need to run a `prometheus-operator` in the cluster for this value to take effect. -See https://prometheus-operator.dev/docs/prologue/introduction/ +See https://prometheus-operator.dev/docs/getting-started/introduction/ ### `podMonitor.enabled` @@ -1173,7 +1208,7 @@ Teleport-published image. By default, the image contains only the Teleport application and its runtime dependencies, and does not contain a shell. This setting only takes effect when [`enterprise`](#enterprise) is `true`. -When running an enterprise version, you must use [`image`](#image) instead. +When running an OSS version, you must use [`image`](#image) instead. ## `imagePullSecrets` @@ -1393,7 +1428,7 @@ Possible values are `text` (default) or `json`. `log.extraFields` sets the fields used in logging for the Teleport process. -See the [Teleport config file reference](../../reference/config.mdx) for +See the [Teleport config file reference](../../reference/deployment/config.mdx) for more details on possible values for `extra_fields`. ## `affinity` @@ -1406,6 +1441,42 @@ more details on possible values for `extra_fields`. See [the Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) for more details. +## `disableTopologySpreadConstraints` + +| Type | Default | +|------|---------| +| `bool` | `false` | + +`disableTopologySpreadConstraints` turns off the topology spread constraints. 
+The feature is automatically turned off on Kubernetes versions below 1.18. + +## `topologySpreadConstraints` + +| Type | Default | +|------|---------| +| `list` | `[]` | + +`topologySpreadConstraints` sets the topology spread constraints for any pods created by the chart. +See [the Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) +for more details. + +When unset, the chart defaults to a soft topology spread constraint +that tries to spread pods across hosts and zones. + +``` +topologySpreadConstraints: + - maxSkew: 1 + topologyKey: kubernetes.io/hostname + whenUnsatisfiable: ScheduleAnyway + labelSelector: + matchLabels: # dynamically computed + - maxSkew: 1 + topologyKey: topology.kubernetes.io/zone + whenUnsatisfiable: ScheduleAnyway + labelSelector: + matchLabels: # dynamically computed +``` + ## `dnsConfig` | Type | Default | @@ -1754,6 +1825,23 @@ initContainers: See [the Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) for more details. +## `goMemLimitRatio` + +| Type | Default | +|------|---------| +| `float` | `0.9` | + +`goMemLimitRatio` configures the GOMEMLIMIT env var set by the chart. +GOMEMLIMIT instructs the Go garbage collector to try to keep allocated memory +below a given threshold. This is a best-effort attempt, but it helps +to prevent OOMs in case of bursts. + +When the memory limits are set and goMemLimitRatio is non-zero, +the chart sets the GOMEMLIMIT to `resources.memory.limits * goMemLimitRatio`. +The value must be between 0 and 1. +Set to 0 to unset GOMEMLIMIT. +This has no effect if GOMEMLIMIT is already set through `extraEnv`. 
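For illustration, a hypothetical values snippet where the chart would derive GOMEMLIMIT from 90% of the configured memory limit:

```yaml
resources:
  limits:
    memory: "1Gi"
goMemLimitRatio: 0.9
```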
+ ## `initSecurityContext` + | Type | Default | diff --git a/docs/pages/includes/helm-reference/zz_generated.teleport-operator.mdx b/docs/pages/includes/helm-reference/zz_generated.teleport-operator.mdx index 09fd0fb3a48c6..9a77cba007ba3 100644 --- a/docs/pages/includes/helm-reference/zz_generated.teleport-operator.mdx +++ b/docs/pages/includes/helm-reference/zz_generated.teleport-operator.mdx @@ -75,7 +75,7 @@ method. `kubernetes` is the most common one. `teleportClusterName` is the name of the joined Teleport cluster. Setting this value is required when joining via the -[Kubernetes JWKS](../../reference/join-methods.mdx#kubernetes-jwks) join method. +[Kubernetes JWKS](../../reference/deployment/join-methods.mdx#kubernetes-jwks) join method. ## `token` @@ -151,7 +151,7 @@ put on the `Pod` resources created by the chart. `annotations.serviceAccount` contains the Kubernetes annotations put on the `Deployment` resource created by the chart. -## `annotations` +## `labels` ### `labels.deployment` @@ -218,6 +218,22 @@ is false and the operator must use an existing `ServiceAccount`. This value can be set to `false` when deploying in constrained environments where the user deploying the operator is not allowed to edit RBAC resources. +## `extraEnv` + +| Type | Default | +|------|---------| +| `list` | `[]` | + +`extraEnv` contains extra environment variables to be configured on the pod. + +## `extraArgs` + +| Type | Default | +|------|---------| +| `list` | `[]` | + +`extraArgs` contains extra arguments to pass to the operator. + ## `imagePullPolicy` | Type | Default | @@ -328,6 +344,15 @@ command such as: $ kubectl create secret generic my-root-ca --from-file=ca.pem=/path/to/root-ca.pem ``` +### `tls.existingCASecretKeyName` + +| Type | Default | +|------|---------| +| `string` | `"ca.pem"` | + +`tls.existingCASecretKeyName` determines which key in the CA secret +will be used as a trusted CA bundle file. 
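For example, a values sketch wiring the chart to the `my-root-ca` secret created by the command above (the key name shown matches the default):

```yaml
tls:
  existingCASecretName: "my-root-ca"
  existingCASecretKeyName: "ca.pem"
```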
+ ## `podSecurityContext` + | Type | Default | diff --git a/docs/pages/includes/helm-reference/zz_generated.teleport-relay.mdx b/docs/pages/includes/helm-reference/zz_generated.teleport-relay.mdx new file mode 100644 index 0000000000000..672a3da2b37cd --- /dev/null +++ b/docs/pages/includes/helm-reference/zz_generated.teleport-relay.mdx @@ -0,0 +1,914 @@ + +{/* Generated file. Do not edit.*/} +{/* Generate this file by navigating to examples/chart and running make render-chart-ref*/} +## `relayGroup` + +| Type | Default | +|------|---------| +| `string` | `""` | + +`relayGroup` sets the internal identifier for the group of Relay +instances reachable through the same load balancer. It should be unique in the +whole Teleport cluster. + +## `publicHostnames` + +| Type | Default | +|------|---------| +| `list` | `[]` | + +`publicHostnames` is the list of hostnames at which this Relay group is publicly +reachable by clients. + +``` +publicHostnames: + - relay.example.com +``` + +## `targetConnectionCount` + +| Type | Default | +|------|---------| +| `int` | `2` | + +`targetConnectionCount` is the number of tunnel connections that agents +will open to distinct Relay instances. It should not be larger than the +replica count. + +## `proxyAddr` + +| Type | Default | +|------|---------| +| `string` | `""` | + +`proxyAddr` provides the public-facing Teleport Proxy Service +endpoint which should be used to join the cluster. This is the same URL used +to access the web UI of your Teleport cluster. The port used is usually either +3080 or 443. 
+ +Here are a few examples: + +| Deployment method | Example `proxy_service.public_addr` | +|-------------------------------|-------------------------------------| +| On-prem Teleport cluster | `teleport.example.com:3080` | +| Teleport Cloud cluster | `example.teleport.sh:443` | +| `teleport-cluster` Helm chart | `teleport.example.com:443` | + +## `enterprise` + +| Type | Default | +|------|---------| +| `bool` | `false` | + +`enterprise` controls if the `teleport-relay` chart should deploy the +OSS version or the enterprise version of the container image. This must be set +to `true` when connecting to Teleport Cloud or self-hosted Teleport Enterprise +clusters to allow the agent to leverage enterprise features. + +## `joinParams` + +`joinParams` controls how the Teleport Agent joins the Teleport cluster. +These sub-values must be configured for the agent to connect to a cluster. + +The token used must grant the `Relay` role, and should be valid for the +lifetime of the Helm release. + +### `joinParams.method` + +| Type | Default | +|------|---------| +| `string` | `""` | + +`joinParams.method` controls which join method will be used by the +instance to join the Teleport cluster. + +See [the join method reference](../../reference/deployment/join-methods.mdx) +for the list of possible values, the implications of each join method, and +guides to set up each method. + +Common join-methods for the `teleport-relay` are: +- `token`: the most basic one, with regular ephemeral secret tokens +- `kubernetes`: either the `in-cluster` variant (if the agent runs in the + same Kubernetes cluster as the `teleport-cluster` chart) or the + `JWKS/OIDC` variants (work in every Kubernetes cluster, regardless of the + Teleport Auth Service location). + +### `joinParams.tokenName` + +| Type | Default | +|------|---------| +| `string` | `""` | + +`joinParams.tokenName` controls which token is used by the agent to +join the Teleport cluster. 
+ +When `joinParams.method` is [a delegated join +method](../../reference/deployment/join-methods.mdx#delegated-join-methods), +the value is not sensitive. + +When `joinParams.method` is `token` (by default), `joinParams.tokenName` +contains the secret token itself. In this case, the value is sensitive and +is automatically stored in a Kubernetes Secret instead of being directly +included in the agent's configuration. + +If method is `token`, `joinParams.tokenName` can be empty if the token is +provided through an existing Kubernetes Secret, see +[`joinTokenSecret`](#joinTokenSecret) for more details and instructions. + +If method is `kubernetes`, you must set +[`teleportClusterName`](#teleportClusterName). + +## `teleportClusterName` + +| Type | Default | +|------|---------| +| `string` | `""` | + +`teleportClusterName` is the name of the joined Teleport cluster. +Setting this value is required when joining via the +[Kubernetes JWKS or OIDC](../../reference/deployment/join-methods.mdx#kubernetes-jwks) join method. + +When this value is set, the chart mounts a kubernetes service account token +via a projected volume and configures Teleport to use it for joining. + +## `joinTokenSecret` + +`joinTokenSecret` manages the join token secret creation and its name. See +the [`joinParams`](#joinParams) section for more details. + +### `joinTokenSecret.create` + +| Type | Default | +|------|---------| +| `bool` | `true` | + +`joinTokenSecret.create` controls whether the chart creates the +Kubernetes `Secret` containing the Teleport join token. If false, you must +create a Kubernetes Secret with the configured name in the Helm release +namespace. + +### `joinTokenSecret.name` + +| Type | Default | +|------|---------| +| `string` | `""` | + +`joinTokenSecret.name` is the name of the Kubernetes Secret +containing the Teleport join token used by the chart. + +If `joinTokenSecret.create` is `false`, the chart will not attempt to create +the secret itself. 
Instead, it will read the value from an existing secret.
+`joinTokenSecret.name` configures the name of this secret. This allows you
+to manage the secret externally and avoid having a plaintext join token
+stored in your Teleport chart values.
+
+To create your own join token secret, you can use a command like this:
+
+```code
+$ kubectl --namespace teleport create secret generic my-token-secret --from-literal=auth-token=
+```
+
+
+
+  The key used for the auth token inside the secret must be `auth-token`, as
+  in the command above.
+
+
+
+The following values then configure the chart to read the token from that
+secret:
+
+```yaml
+joinTokenSecret:
+  create: false
+  name: my-token-secret
+
+joinParams:
+  method: "token"
+  tokenName: ""
+```
+
+## `proxyProtocol`
+
+| Type | Default |
+|------|---------|
+| `bool` | `false` |
+
+`proxyProtocol` controls whether connections coming from the
+load balancer include a PROXY protocol v2 header.
+
+## `log`
+
+`log` controls the agent logging.
+
+### `log.level`
+
+| Type | Default |
+|------|---------|
+| `string` | `"INFO"` |
+
+`log.level` is the log level for the Teleport process.
+Available log levels are: `DEBUG`, `INFO`, `WARNING`, `ERROR`.
+
+The default is `INFO`, which is recommended in production.
+`DEBUG` is useful during first-time setup or when you need more detailed logs
+for troubleshooting.
+
+### `log.format`
+
+| Type | Default |
+|------|---------|
+| `string` | `"json"` |
+
+`log.format` sets the log output format for the Teleport process.
+Possible values are `text` or `json`.
+
+## `preStopDelay`
+
+| Type | Default |
+|------|---------|
+| `string` | `"30s"` |
+
+`preStopDelay` is the optional time that passes between the pod
+entering the Terminating state and the Teleport instance being signaled to
+begin its shutdown advertisement. This is useful to allow load balancers to
+stop routing connections to the terminating pod.
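+
+For example, to give a slower load balancer more time to drain connections
+(the value below is illustrative):
+
+```yaml
+preStopDelay: "60s"
+```
+
+If you raise this value, raise `terminationGracePeriodSeconds` as well so it
+remains longer than `preStopDelay` plus `shutdownDelay`.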
+
+## `shutdownDelay`
+
+| Type | Default |
+|------|---------|
+| `string` | `"30s"` |
+
+`shutdownDelay` is the optional time that the Teleport instance
+waits after advertising its shutdown before it stops serving new
+inbound connections.
+
+## `terminationGracePeriodSeconds`
+
+| Type | Default |
+|------|---------|
+| `int` | `90` |
+
+`terminationGracePeriodSeconds` is the time allotted to a Teleport instance
+pod for termination. It should be longer than the sum of
+[`preStopDelay`](#preStopDelay) and [`shutdownDelay`](#shutdownDelay).
+
+## `highAvailability`
+
+`highAvailability` contains settings controlling the availability of the
+Teleport Agent deployed by the chart.
+
+The availability can be increased by:
+- running more replicas with `replicaCount`
+- requiring that the Pods are not scheduled on the same Kubernetes Node with `requireAntiAffinity`
+- asking Kubernetes not to delete all pods at the same time with `podDisruptionBudget`.
+
+Even with `highAvailability` settings, restarting or rolling out pods can
+still disrupt established long-lived sessions.
+
+### `highAvailability.replicaCount`
+
+| Type | Default |
+|------|---------|
+| `int` | `2` |
+
+`highAvailability.replicaCount` is the number of agent replicas
+deployed by the chart.
+
+Set to a number higher than `1` for a high availability mode where multiple
+Teleport pods will be deployed.
+
+
+
+  As a rough guide, we recommend configuring one replica per distinct
+  availability zone where your cluster has worker nodes.
+
+  Two replicas across two availability zones are fine for smaller workloads.
+  Three to five replicas across as many availability zones are more
+  appropriate for bigger clusters with more traffic.
+
+
+
+### `highAvailability.podDisruptionBudget`
+
+`highAvailability.podDisruptionBudget` controls how the chart creates and
+configures a Kubernetes PodDisruptionBudget to ensure Kubernetes does not
+delete all agent replicas at the same time.
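+
+Putting these settings together, a sketch of a higher-availability
+configuration (the counts below are illustrative):
+
+```yaml
+highAvailability:
+  replicaCount: 3
+  podDisruptionBudget:
+    enabled: true
+    minAvailable: 2
+```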
+
+#### `highAvailability.podDisruptionBudget.enabled`
+
+| Type | Default |
+|------|---------|
+| `bool` | `false` |
+
+`highAvailability.podDisruptionBudget.enabled` makes the chart create
+a Kubernetes PodDisruptionBudget for the agent pods.
+
+#### `highAvailability.podDisruptionBudget.minAvailable`
+
+| Type | Default |
+|------|---------|
+| `intOrString` | `1` |
+
+`highAvailability.podDisruptionBudget.minAvailable` is the
+minimum available pod count specified on the PodDisruptionBudget.
+
+## `resources`
+
+| Type | Default |
+|------|---------|
+| `object` | `{}` |
+
+`resources` sets the resource requests/limits for any pods created by
+the chart. See [the Kubernetes
+documentation](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/)
+for more details.
+
+## `goMemLimitRatio`
+
+| Type | Default |
+|------|---------|
+| `float` | `0.9` |
+
+`goMemLimitRatio` configures the GOMEMLIMIT env var set by the chart.
+GOMEMLIMIT instructs the Go garbage collector to try to keep allocated memory
+below a given threshold. This is best-effort, but it helps prevent OOM kills
+during memory bursts.
+
+When memory limits are set and `goMemLimitRatio` is non-zero,
+the chart sets GOMEMLIMIT to `resources.limits.memory * goMemLimitRatio`.
+The value must be between 0 and 1.
+Set to 0 to unset GOMEMLIMIT.
+This has no effect if GOMEMLIMIT is already set through `extraEnv`.
+
+## `service`
+
+`service` contains options for the Service that points to the Teleport Relay
+instances.
+
+### `service.type`
+
+| Type | Default |
+|------|---------|
+| `string` | `"LoadBalancer"` |
+
+`service.type` is the type of the Service. Unless you have specific
+needs, it should probably be set to `LoadBalancer`.
+
+### `service.spec`
+
+| Type | Default |
+|------|---------|
+| `object` | `{}` |
+
+Any additional entries in `service.spec` will be added to the
+Service spec. For example:
+
+```yaml
+spec:
+  loadBalancerIP: "1.2.3.4"
+  loadBalancerClass: service.k8s.aws/nlb
+```
+
+## `serviceAccount`
+
+`serviceAccount` contains settings related to the Kubernetes service account
+used by pods.
+
+### `serviceAccount.create`
+
+| Type | Default |
+|------|---------|
+| `bool` | `true` |
+
+`serviceAccount.create` specifies whether a `ServiceAccount` should
+be created or an existing one should be used.
+
+### `serviceAccount.name`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`serviceAccount.name` is the name of the `ServiceAccount` to use. If
+not set and `serviceAccount.create` is `true`, the name is generated from the
+release name. If `serviceAccount.create` is `false`, the name references an
+existing service account.
+
+## `extraLabels`
+
+`extraLabels` contains additional Kubernetes labels to apply to the resources
+created by the chart. See [the Kubernetes label documentation
+](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/)
+for more information.
+
+### `extraLabels.config`
+
+| Type | Default |
+|------|---------|
+| `object` | `{}` |
+
+`extraLabels.config` are labels to set on the ConfigMap.
+
+### `extraLabels.deployment`
+
+| Type | Default |
+|------|---------|
+| `object` | `{}` |
+
+`extraLabels.deployment` are labels to set on the Deployment.
+
+### `extraLabels.pod`
+
+| Type | Default |
+|------|---------|
+| `object` | `{}` |
+
+`extraLabels.pod` are labels to set on the Pods.
+
+### `extraLabels.podDisruptionBudget`
+
+| Type | Default |
+|------|---------|
+| `object` | `{}` |
+
+`extraLabels.podDisruptionBudget` are labels to set on the
+PodDisruptionBudget.
+
+### `extraLabels.secret`
+
+| Type | Default |
+|------|---------|
+| `object` | `{}` |
+
+`extraLabels.secret` are labels to set on the Secret.
+
+### `extraLabels.service`
+
+| Type | Default |
+|------|---------|
+| `object` | `{}` |
+
+`extraLabels.service` are labels to set on the Service.
+
+### `extraLabels.serviceAccount`
+
+| Type | Default |
+|------|---------|
+| `object` | `{}` |
+
+`extraLabels.serviceAccount` are labels to set on the
+ServiceAccount.
+
+## `annotations`
+
+`annotations` contains annotations to apply to the different Kubernetes
+objects created by the chart. See [the Kubernetes annotation
+documentation](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/)
+for more details.
+
+### `annotations.config`
+
+| Type | Default |
+|------|---------|
+| `object` | `{}` |
+
+`annotations.config` are annotations to set on the ConfigMap.
+
+### `annotations.deployment`
+
+| Type | Default |
+|------|---------|
+| `object` | `{}` |
+
+`annotations.deployment` are annotations to set on the Deployment.
+
+### `annotations.pod`
+
+| Type | Default |
+|------|---------|
+| `object` | `{}` |
+
+`annotations.pod` are annotations to set on the Pods.
+
+### `annotations.podDisruptionBudget`
+
+| Type | Default |
+|------|---------|
+| `object` | `{}` |
+
+`annotations.podDisruptionBudget` are annotations to set on the
+PodDisruptionBudget.
+
+### `annotations.secret`
+
+| Type | Default |
+|------|---------|
+| `object` | `{}` |
+
+`annotations.secret` are annotations to set on the Secret.
+
+### `annotations.service`
+
+| Type | Default |
+|------|---------|
+| `object` | `{}` |
+
+`annotations.service` are annotations to set on the Service.
+
+### `annotations.serviceAccount`
+
+| Type | Default |
+|------|---------|
+| `object` | `{}` |
+
+`annotations.serviceAccount` are annotations to set on the
+ServiceAccount.
+
+## `image`
+
+| Type | Default |
+|------|---------|
+| `string` | `"public.ecr.aws/gravitational/teleport-distroless"` |
+
+`image` sets the container image used for Teleport Community Edition
+agent pods created by the chart.
+
+You can override this to use your own Teleport image rather than a
+Teleport-published image.
+ +By default, the image contains only the Teleport application and its runtime +dependencies, and does not contain a shell. + +This setting only takes effect when [`enterprise`](#enterprise) is `false`. +When running an enterprise version, you must use +[`enterpriseImage`](#enterpriseImage) instead. + +## `enterpriseImage` + +| Type | Default | +|------|---------| +| `string` | `"public.ecr.aws/gravitational/teleport-ent-distroless"` | + +`enterpriseImage` sets the container image used for Teleport Enterprise +agent pods created by the chart. + +You can override this to use your own Teleport image rather than a +Teleport-published image. + +By default, the image contains only the Teleport application and its runtime +dependencies, and does not contain a shell. + +This setting only takes effect when [`enterprise`](#enterprise) is `true`. +When running an OSS version, you must use [`image`](#image) instead. + +## `imagePullSecrets` + +| Type | Default | +|------|---------| +| `list` | `[]` | + +`imagePullSecrets` is a list of secrets containing authorization tokens +which can be optionally used to access a private Docker registry. + +See the [Kubernetes +reference](https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) +for more details. + +## `imagePullPolicy` + +| Type | Default | +|------|---------| +| `string` | `"IfNotPresent"` | + +`imagePullPolicy` sets the pull policy for any pods created by the +chart. See [the Kubernetes +documentation](https://kubernetes.io/docs/concepts/containers/images/#updating-images) +for more details. + +## `topologySpreadConstraints` + +| Type | Default | +|------|---------| +| `list` | `[]` | + +`topologySpreadConstraints` sets the topology spread constraints for +any pods created by the chart. See [the Kubernetes +documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) +for more details. 
+
+When unset, the chart defaults to a soft topology spread constraint that tries
+to spread pods across hosts and zones.
+
+```yaml
+topologySpreadConstraints:
+  - maxSkew: 1
+    topologyKey: kubernetes.io/hostname
+    whenUnsatisfiable: ScheduleAnyway
+    labelSelector:
+      matchLabels: # dynamically computed
+  - maxSkew: 1
+    topologyKey: topology.kubernetes.io/zone
+    whenUnsatisfiable: ScheduleAnyway
+    labelSelector:
+      matchLabels: # dynamically computed
+```
+
+## `disableTopologySpreadConstraints`
+
+| Type | Default |
+|------|---------|
+| `bool` | `false` |
+
+`disableTopologySpreadConstraints` turns off the default topology
+spread constraints.
+
+## `affinity`
+
+| Type | Default |
+|------|---------|
+| `object` | `{}` |
+
+`affinity` sets the affinities for any pods created by the chart. See
+[the Kubernetes
+documentation](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)
+for more details.
+
+## `tolerations`
+
+| Type | Default |
+|------|---------|
+| `list` | `[]` |
+
+`tolerations` sets the tolerations for any pods created by the chart.
+See [the Kubernetes
+documentation](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/)
+for more details.
+
+## `teleportConfig`
+
+| Type | Default |
+|------|---------|
+| `object` | `{}` |
+
+`teleportConfig` contains YAML Teleport configuration to pass to the
+Teleport pods. The configuration will be merged with the chart-generated
+configuration and will take precedence in case of conflict.
+
+See the [Teleport Configuration Reference](../../reference/deployment/config.mdx) for the list of supported fields.
+
+## `extraArgs`
+
+| Type | Default |
+|------|---------|
+| `list` | `[]` |
+
+`extraArgs` contains extra arguments to pass to `teleport start` for
+the main Teleport container.
+
+## `extraEnv`
+
+| Type | Default |
+|------|---------|
+| `list` | `[]` |
+
+`extraEnv` contains extra environment variables to set in the main
+Teleport container.
+
+For example:
+```yaml
+extraEnv:
+  - name: HTTPS_PROXY
+    value: "http://username:password@my.proxy.host:3128"
+```
+
+## `extraVolumes`
+
+| Type | Default |
+|------|---------|
+| `list` | `[]` |
+
+`extraVolumes` contains extra volumes to mount into the Teleport pods.
+See [the Kubernetes volume
+documentation](https://kubernetes.io/docs/concepts/storage/volumes/) for more
+details.
+
+For example:
+```yaml
+extraVolumes:
+- name: myvolume
+  secret:
+    secretName: testSecret
+```
+
+## `extraVolumeMounts`
+
+| Type | Default |
+|------|---------|
+| `list` | `[]` |
+
+`extraVolumeMounts` contains extra volume mounts for the main Teleport
+container. See [the Kubernetes volume
+documentation](https://kubernetes.io/docs/concepts/storage/volumes/) for more
+details.
+
+For example:
+```yaml
+extraVolumeMounts:
+- name: myvolume
+  mountPath: /path/on/host
+```
+
+## `dnsConfig`
+
+| Type | Default |
+|------|---------|
+| `object` | `{}` |
+
+`dnsConfig` contains custom Pod DNS configuration for the agent pods.
+This value is useful if you need to reduce the DNS load: set "ndots" to 0 and
+only use FQDNs to refer to remote hosts.
+
+See [the Kubernetes pod DNS documentation
+](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config)
+for more information.
+
+For example:
+```yaml
+dnsConfig:
+  nameservers:
+    - 1.2.3.4
+  searches:
+    - ns1.svc.cluster-domain.example
+    - my.dns.search.suffix
+  options:
+    - name: ndots
+      value: "2"
+```
+
+## `dnsPolicy`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`dnsPolicy` sets the Pod's DNS policy.
+
+See [the Kubernetes pod DNS documentation
+](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy)
+for more information.
+
+## `hostAliases`
+
+`hostAliases` sets host aliases in the Teleport pod. See [the Kubernetes
+hosts file
+documentation](https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/)
+for more details.
+
+For example:
+```yaml
+hostAliases:
+  - ip: "127.0.0.1"
+    hostnames:
+      - "foo.local"
+      - "bar.local"
+  - ip: "10.1.2.3"
+    hostnames:
+      - "foo.remote"
+      - "bar.remote"
+```
+
+## `tls`
+
+`tls` contains settings for mounting your own TLS material in the agent pod.
+
+### `tls.existingCASecretName`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`tls.existingCASecretName` sets the `SSL_CERT_FILE` environment
+variable to load a trusted CA or bundle in PEM format into Teleport pods.
+The injected CA will be used to validate TLS communications with the Proxy
+Service.
+
+You must create a secret containing the CA certs in the same namespace as
+Teleport using a command like:
+
+```code
+$ kubectl create secret generic my-root-ca --from-file=ca.pem=/path/to/root-ca.pem
+```
+
+### `tls.existingCASecretKeyName`
+
+| Type | Default |
+|------|---------|
+| `string` | `"ca.pem"` |
+
+`tls.existingCASecretKeyName` determines which key in the CA secret
+will be used as a trusted CA bundle file.
+
+## `insecureSkipProxyTLSVerify`
+
+| Type | Default |
+|------|---------|
+| `bool` | `false` |
+
+`insecureSkipProxyTLSVerify` disables verification of the TLS
+certificate presented by the Proxy Service.
+
+This can be used for testing, to join a Teleport instance to a Teleport
+cluster that does not have valid TLS certificates.
+
+
+
+  Using a self-signed TLS certificate and disabling TLS verification is OK for
+  testing, but is not viable when running a production Teleport cluster as it
+  will drastically reduce security. You must configure valid TLS certificates
+  on your Teleport cluster for production workloads.
+
+  One option might be to use Teleport's built-in [ACME
+  support](../../reference/helm-reference/teleport-cluster.mdx#acme) or enable
+  [cert-manager
+  support](../../reference/helm-reference/teleport-cluster.mdx#highavailabilitycertmanager).
+ + + +## `securityContext` + +| Type | Default | +|------|---------| +| `object` | `{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"readOnlyRootFilesystem":true,"runAsGroup":65532,"runAsNonRoot":true,"runAsUser":65532,"seccompProfile":{"type":"RuntimeDefault"}}` | + +`securityContext` sets the container security context for any pods +created by the chart. See [the Kubernetes +documentation](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container) +for more details. + +The default value is compatible with [the restricted +PSS](https://kubernetes.io/docs/concepts/security/pod-security-standards/). + +To unset the security context, set it to `null` or `~`. + +## `podSecurityContext` + +| Type | Default | +|------|---------| +| `object` | `{"fsGroup":65532}` | + +`podSecurityContext` sets the pod security context for any pods +created by the chart. See [the Kubernetes +documentation](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod) +for more details. + +To unset the security context, set it to `null` or `~`. + +## `priorityClassName` + +| Type | Default | +|------|---------| +| `string` | `""` | + +`priorityClassName` sets the priority class used by any pods created by the chart. +The user is responsible for creating the `PriorityClass` resource before deploying the chart. +See [the Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/) +for more details. + +## `nameOverride` + +| Type | Default | +|------|---------| +| `string` | `""` | + +`nameOverride` optionally overrides the name of the chart, used +together with the release name when giving a name to resources. + +## `fullnameOverride` + +| Type | Default | +|------|---------| +| `string` | `""` | + +`fullnameOverride` optionally overrides the full name of resources. 
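+
+For example, to give chart resources a fixed name regardless of the release
+name (the name below is illustrative):
+
+```yaml
+fullnameOverride: "teleport-relay"
+```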
+
+## `teleportVersionOverride`
+
+| Type | Default |
+|------|---------|
+| `string` | `""` |
+
+`teleportVersionOverride` controls the Teleport image version deployed by
+the chart.
+
+Normally, the Teleport version matches the version
+of the chart. If you install chart version 15.0.0, you'll use Teleport version
+15.0.0. Upgrading the agent is done by upgrading the chart.
+
+
+
+  `teleportVersionOverride` is intended for development and MUST NOT be used
+  to control the Teleport version in a typical deployment. This chart is
+  designed to run a specific Teleport version. You will face compatibility
+  issues trying to run a different Teleport version with it.
+
+  If you want to run Teleport version `X.Y.Z`, you should use `helm install
+  --version X.Y.Z` instead.
+
+
diff --git a/docs/pages/includes/identity-governance/azure-shell.mdx b/docs/pages/includes/identity-governance/azure-shell.mdx
new file mode 100644
index 0000000000000..4cc7e469e19f2
--- /dev/null
+++ b/docs/pages/includes/identity-governance/azure-shell.mdx
@@ -0,0 +1,25 @@
+Open Azure Cloud Shell by navigating to shell.azure.com,
+or by clicking the Cloud Shell icon in the Azure Portal. Make sure to use the Bash version of Cloud Shell.
+Once a Cloud Shell instance opens, paste the Teleport-generated bash script that downloads the Teleport binary into your
+Azure Cloud Shell and run the `teleport integration configure azure-oidc` command.
+The command performs the following actions:
+- Creates an enterprise application.
+- Configures Teleport as an OIDC IdP for the application.
+- Assigns read-only Microsoft Graph API permissions to read your directory's data (such as users and groups).
+- Configures authentication by setting up a Teleport SAML service provider.
+
+```code
+# Azure Shell
+$ bash -c "$(curl 'https://example.teleport.sh/webapi/scripts/integrations/configure/azureoidc.sh?authConnectorName=entra-id')"
+
+> teleport integration configure azure-oidc --proxy-public-addr=https://example.teleport.sh --auth-connector-name=entra-id
+...
+Success! Use the following information to finish the integration onboarding in Teleport:
+Tenant ID: entra-tenant-id
+Client ID: enterprise-app-id
+Success! You can now go back to the Teleport Web UI to use the integration with Azure.
+```
+
+Once the script has finished configuring the Entra ID tenant for the Teleport
+Entra ID integration, it prints the Entra ID tenant ID and the client ID of
+the enterprise application it created.
\ No newline at end of file
diff --git a/docs/pages/includes/install-linux-ent-self-hosted.mdx b/docs/pages/includes/install-linux-ent-self-hosted.mdx
deleted file mode 100644
index f128bf2526595..0000000000000
--- a/docs/pages/includes/install-linux-ent-self-hosted.mdx
+++ /dev/null
@@ -1,176 +0,0 @@
-{{ to-install="teleport-ent" }}
-The easiest installation method, for Teleport versions 17.3 and above, is the
-cluster install script. It will use the best version, edition, and installation
-mode for your cluster.
-
-1. Assign to your Teleport cluster hostname.
-   This should contain you cluster hostname and port, but not the scheme (https://).
-
-1. Run your cluster's install script:
-
-   ```code
-   $ curl "https:///scripts/install.sh" | sudo bash
-   ```
-
-The other installation methods are:
-
-
-
-```code
-$ sudo mkdir -p /etc/apt/keyrings
-# Download Teleport's PGP public key
-$ sudo curl https://apt.releases.teleport.dev/gpg \
--o /etc/apt/keyrings/teleport-archive-keyring.asc
-# Source variables about OS version
-$ source /etc/os-release
-# Add the Teleport APT repository for v(=teleport.major_version=). You'll need to update this
-# file for each major release of Teleport.
-$ echo "deb [signed-by=/etc/apt/keyrings/teleport-archive-keyring.asc] \ -https://apt.releases.teleport.dev/${ID?} ${VERSION_CODENAME?} stable/v(=teleport.major_version=)" \ -| sudo tee /etc/apt/sources.list.d/teleport.list > /dev/null - -$ sudo apt-get update -$ sudo apt-get install {{ to-install }} -``` - -For FedRAMP/FIPS-compliant installations, install the `teleport-ent-fips` package instead: - -```code -$ sudo apt-get install teleport-ent-fips -``` - - - - -```code -# Source variables about OS version -$ source /etc/os-release -# Add the Teleport YUM repository for v(=teleport.major_version=). You'll need to update this -# file for each major release of Teleport. -# First, get the major version from $VERSION_ID so this fetches the correct -# package version. -$ VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+") -$ sudo yum install -y yum-utils -$ sudo yum-config-manager --add-repo "$(rpm --eval "https://yum.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/v(=teleport.major_version=)/teleport.repo")" -$ sudo yum install {{ to-install }} -# -# Tip: Add /usr/local/bin to path used by sudo (so 'sudo tctl users add' will work as per the docs) -# echo "Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin" > /etc/sudoers.d/secure_path -``` - -For FedRAMP/FIPS-compliant installations, install the `teleport-ent-fips` package instead: - -```code -$ sudo yum install teleport-ent-fips -``` - - - - -```code -# Source variables about OS version -$ source /etc/os-release -# Add the Teleport Zypper repository for v(=teleport.major_version=). You'll need to update this -# file for each major release of Teleport. -# First, get the OS major version from $VERSION_ID so this fetches the correct -# package version. 
-$ VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+") -# Use zypper to add the teleport RPM repo -$ sudo zypper addrepo --refresh --repo $(rpm --eval "https://zypper.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/cloud/teleport-zypper.repo") -$ sudo yum install {{ to-install }} -# -# Tip: Add /usr/local/bin to path used by sudo (so 'sudo tctl users add' will work as per the docs) -# echo "Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin" > /etc/sudoers.d/secure_path -``` - -For FedRAMP/FIPS-compliant installations, install the `teleport-ent-fips` package instead: - -```code -$ sudo yum install teleport-ent-fips -``` - - - - -```code -# Source variables about OS version -$ source /etc/os-release -# Add the Teleport YUM repository for v(=teleport.major_version=). You'll need to update this -# file for each major release of Teleport. -# First, get the major version from $VERSION_ID so this fetches the correct -# package version. -$ VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+") -# Use the dnf config manager plugin to add the teleport RPM repo -$ sudo dnf config-manager --add-repo "$(rpm --eval "https://yum.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/v(=teleport.major_version=)/teleport.repo")" - -# Install teleport -$ sudo dnf install {{ to-install }} - -# Tip: Add /usr/local/bin to path used by sudo (so 'sudo tctl users add' will work as per the docs) -# echo "Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin" > /etc/sudoers.d/secure_path -``` - -For FedRAMP/FIPS-compliant installations, install the `teleport-ent-fips` package instead: - -```code -$ sudo dnf install teleport-ent-fips -``` - - - - -```code -# Source variables about OS version -$ source /etc/os-release -# Add the Teleport Zypper repository. -# First, get the OS major version from $VERSION_ID so this fetches the correct -# package version. 
-$ VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+") -# Use Zypper to add the teleport RPM repo -$ sudo zypper addrepo --refresh --repo $(rpm --eval "https://zypper.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/v(=teleport.major_version=)/teleport-zypper.repo") - -# Install teleport -$ sudo zypper install {{ to-install }} -``` - -For FedRAMP/FIPS-compliant installations, install the `teleport-ent-fips` package instead: - -```code -$ sudo zypper install teleport-ent-fips -``` - - - - - -In the example commands below, update `$SYSTEM_ARCH` with the appropriate -value (`amd64`, `arm64`, or `arm`). All example commands using this variable -will update after one is filled out. - -```code -$ curl https://cdn.teleport.dev/teleport-ent-v(=teleport.version=)-linux--bin.tar.gz.sha256 -# -$ curl -O https://cdn.teleport.dev/teleport-ent-v(=teleport.version=)-linux--bin.tar.gz -$ shasum -a 256 teleport-ent-v(=teleport.version=)-linux--bin.tar.gz -# Verify that the checksums match -$ tar -xvf teleport-ent-v(=teleport.version=)-linux--bin.tar.gz -$ cd teleport-ent -$ sudo ./install -``` - -For FedRAMP/FIPS-compliant installations of Teleport Enterprise, package URLs -will be slightly different: - -```code -$ curl https://cdn.teleport.dev/teleport-ent-v(=teleport.version=)-linux--fips-bin.tar.gz.sha256 -# -$ curl -O https://cdn.teleport.dev/teleport-ent-v(=teleport.version=)-linux--fips-bin.tar.gz -$ shasum -a 256 teleport-ent-v(=teleport.version=)-linux--fips-bin.tar.gz -# Verify that the checksums match -$ tar -xvf teleport-ent-v(=teleport.version=)-linux--fips-bin.tar.gz -$ cd teleport-ent -$ sudo ./install -``` - - - diff --git a/docs/pages/includes/install-linux-enterprise.mdx b/docs/pages/includes/install-linux-enterprise.mdx deleted file mode 100644 index ea7a5dc6c9cfa..0000000000000 --- a/docs/pages/includes/install-linux-enterprise.mdx +++ /dev/null @@ -1,37 +0,0 @@ -Install Teleport on your Linux server: - -1. 
Assign to one of the following, depending on your - Teleport edition: - - | Edition | Value | - |-----------------------------------|--------------| - | Teleport Enterprise Cloud | `cloud` | - | Teleport Enterprise (Self-Hosted) | `enterprise` | - -1. Get the version of Teleport to install. If you have automatic agent updates - enabled in your cluster, query the latest Teleport version that is compatible - with the updater: - - ```code - $ TELEPORT_DOMAIN= - $ TELEPORT_VERSION="$(curl https://$TELEPORT_DOMAIN/v1/webapi/automaticupgrades/channel/default/version | sed 's/v//')" - ``` - - Otherwise, get the version of your Teleport cluster: - - ```code - $ TELEPORT_DOMAIN= - $ TELEPORT_VERSION="$(curl https://$TELEPORT_DOMAIN/v1/webapi/ping | jq -r '.server_version')" - ``` - -1. Install Teleport on your Linux server: - - ```code - $ curl (=teleport.teleport_install_script_url=) | bash -s ${TELEPORT_VERSION} - ``` - - The installation script detects the package manager on your Linux server and - uses it to install Teleport binaries. To customize your installation, learn - about the Teleport package repositories in the [installation - guide](../installation.mdx#linux). - diff --git a/docs/pages/includes/install-linux.mdx b/docs/pages/includes/install-linux.mdx index 9fbfcc55dbc23..c55f235cfd873 100644 --- a/docs/pages/includes/install-linux.mdx +++ b/docs/pages/includes/install-linux.mdx @@ -1,8 +1,7 @@ To install a Teleport Agent on your Linux server: -The easiest installation method, for *Teleport versions 17.3 and above*, is the -cluster install script. It will use the best version, edition, and installation -mode for your cluster. +The recommended installation method is the cluster install script. +It will select the correct version, edition, and installation mode for your cluster. 1. Assign to your Teleport cluster hostname and port, but not the scheme (https://). @@ -12,41 +11,3 @@ mode for your cluster. 
```code $ curl "https:///scripts/install.sh" | sudo bash ``` - -On *older Teleport versions*: - -1. Assign to one of the following, depending on your - Teleport edition: - - | Edition | Value | - |-----------------------------------|--------------| - | Teleport Enterprise Cloud | `cloud` | - | Teleport Enterprise (Self-Hosted) | `enterprise` | - | Teleport Community Edition | `oss` | - -1. Get the version of Teleport to install. If you have automatic agent updates - enabled in your cluster, query the latest Teleport version that is compatible - with the updater: - - ```code - $ TELEPORT_DOMAIN= - $ TELEPORT_VERSION="$(curl https://$TELEPORT_DOMAIN/v1/webapi/automaticupgrades/channel/default/version | sed 's/v//')" - ``` - - Otherwise, get the version of your Teleport cluster: - - ```code - $ TELEPORT_DOMAIN= - $ TELEPORT_VERSION="$(curl https://$TELEPORT_DOMAIN/v1/webapi/ping | jq -r '.server_version')" - ``` - -1. Install Teleport on your Linux server: - - ```code - $ curl (=teleport.teleport_install_script_url=) | bash -s ${TELEPORT_VERSION} - ``` - - The installation script detects the package manager on your Linux server and - uses it to install Teleport binaries. To customize your installation, learn - about the Teleport package repositories in the [installation - guide](../installation.mdx#linux). diff --git a/docs/pages/includes/install-tsh.mdx b/docs/pages/includes/install-tsh.mdx index 977fc912414b1..3a0b1087af6fc 100644 --- a/docs/pages/includes/install-tsh.mdx +++ b/docs/pages/includes/install-tsh.mdx @@ -1,36 +1,37 @@ - Download the signed macOS .pkg installer for Teleport including `tsh`. In Finder double-click the `pkg` file to install it: + + Download the signed macOS .pkg installer for Teleport, which includes `tsh`. + In Finder double-click the `pkg` file to begin installation: ```code $ curl -O https://cdn.teleport.dev/teleport-(=teleport.version=).pkg ``` - - - Using Homebrew to install Teleport is not supported. 
The Teleport package in Homebrew is not maintained by Teleport and we can't guarantee its reliability or security. - - We recommend the use of our [own Teleport packages](https://goteleport.com/download?os=mac) - for any installations on macOS. + + ```code $ curl.exe -O https://cdn.teleport.dev/teleport-v(=teleport.version=)-windows-amd64-bin.zip - # Unzip the archive and move `tsh.exe` to your %PATH% + # Unzip the archive and move tsh.exe to your %PATH% # NOTE: Do not place tsh.exe in the System32 directory, as this can cause issues when using WinSCP. # Use %SystemRoot% (C:\Windows) or %USERPROFILE% (C:\Users\) instead. ``` + - `tsh` is included with all of the Teleport binaries in Linux installations. - For more options (including RPM/DEB packages and downloads for i386/ARM/ARM64) please see our [installation page](../installation.mdx). + + All of the Teleport binaries in Linux installations include `tsh`. For more + options (including RPM/DEB packages and downloads for i386/ARM/ARM64) see + our [installation page](../installation/installation.mdx). ```code $ curl -O https://cdn.teleport.dev/teleport-v(=teleport.version=)-linux-amd64-bin.tar.gz diff --git a/docs/pages/includes/kubernetes-access/helm/helm-repo-add.mdx b/docs/pages/includes/kubernetes-access/helm/helm-repo-add.mdx index d8242619da2e1..e2335ace682af 100644 --- a/docs/pages/includes/kubernetes-access/helm/helm-repo-add.mdx +++ b/docs/pages/includes/kubernetes-access/helm/helm-repo-add.mdx @@ -1,13 +1,10 @@ -Set up the Teleport Helm repository. 
- -Allow Helm to install charts that are hosted in the Teleport Helm repository: +Configure Helm to fetch Teleport charts from the Teleport Helm repository: ```code $ helm repo add teleport (=teleport.helm_repo_url=) ``` -Update the cache of charts from the remote repository so you can upgrade to all -available releases: +Refresh the local Helm cache by fetching the latest charts: ```code $ helm repo update diff --git a/docs/pages/includes/machine-id/common-output-config.yaml b/docs/pages/includes/machine-id/common-output-config.yaml index 37a2f0ec37b8e..df32295a9afcf 100644 --- a/docs/pages/includes/machine-id/common-output-config.yaml +++ b/docs/pages/includes/machine-id/common-output-config.yaml @@ -25,3 +25,8 @@ roles: # on the next invocation, but don't want long-lived workload certificates on-disk. credential_ttl: 30m renewal_interval: 15m + +# name optionally overrides the name of the service used in logs and the `/readyz` +# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus +# symbols. +name: my-service-name diff --git a/docs/pages/includes/machine-id/configure-outputs.mdx b/docs/pages/includes/machine-id/configure-outputs.mdx deleted file mode 100644 index b002290094ddc..0000000000000 --- a/docs/pages/includes/machine-id/configure-outputs.mdx +++ /dev/null @@ -1,7 +0,0 @@ -You have now prepared the base configuration for `tbot`. At this point, it -identifies itself to the Teleport cluster and renews its own credentials but -does not output any credentials for other applications to use. - -Follow one of the [access -guides](../../enroll-resources/machine-id/access-guides/access-guides.mdx) to -configure an output that meets your access needs. 
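The `name` field added to the output configuration above must only contain letters, numbers, hyphens, underscores, and plus symbols. That rule can be expressed as a simple regular-expression check; the following Go sketch uses an assumed pattern that approximates the documented constraint, not necessarily the exact validation Teleport performs:

```go
package main

import (
	"fmt"
	"regexp"
)

// serviceNamePattern approximates the documented rule: letters, numbers,
// hyphens, underscores, and plus symbols only (assumed, for illustration).
var serviceNamePattern = regexp.MustCompile(`^[A-Za-z0-9_+-]+$`)

// isValidServiceName reports whether name satisfies the assumed pattern.
func isValidServiceName(name string) bool {
	return serviceNamePattern.MatchString(name)
}

func main() {
	fmt.Println(isValidServiceName("my-service-name")) // a valid name
	fmt.Println(isValidServiceName("my service!"))     // spaces and '!' are rejected
}
```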
diff --git a/docs/pages/includes/machine-id/configure-services.mdx b/docs/pages/includes/machine-id/configure-services.mdx new file mode 100644 index 0000000000000..c2b9c56d6b487 --- /dev/null +++ b/docs/pages/includes/machine-id/configure-services.mdx @@ -0,0 +1,7 @@ +You have now prepared the base configuration for `tbot`. At this point, it +identifies itself to the Teleport cluster and renews its own credentials but +does not output any credentials for other applications to use. + +Follow one of the [access +guides](../../machine-workload-identity/access-guides/access-guides.mdx) to +configure a service that meets your access needs. diff --git a/docs/pages/includes/machine-id/daemon.mdx b/docs/pages/includes/machine-id/daemon.mdx index 424d3a9f0e544..909c039411b97 100644 --- a/docs/pages/includes/machine-id/daemon.mdx +++ b/docs/pages/includes/machine-id/daemon.mdx @@ -1,13 +1,35 @@ By default, `tbot` will run in daemon mode. However, this must then be configured as a service within the service manager on the Linux host. The service manager will start `tbot` on boot and ensure it is restarted if it -fails. For this guide, systemd will be demonstrated but `tbot` should be +fails. + +**If `tbot` was installed using the Teleport install script or `teleport-update` +command, the `tbot` systemd service is automatically created for you.** + +After `tbot.yaml` is created, enable and start the service: + +```code +$ sudo systemctl enable tbot --now +``` + +Check that the service has started successfully: + +```code +$ sudo systemctl status tbot +``` + +Service properties like `User` and `Group` may be configured using `systemctl edit tbot`. + +**If `tbot` was installed manually, service configuration will need to be +performed manually as well.** + +For this guide, systemd will be demonstrated but `tbot` should be compatible with all common alternatives.
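For reference, a `tbot` systemd unit generated by `tbot install systemd` typically resembles the following sketch. The exact contents depend on your flags and Teleport version; the binary path, user, and group shown here are assumptions:

```ini
[Unit]
Description=Teleport Machine ID Service
After=network.target

[Service]
Type=simple
# User/Group are assumptions; match them to the account tbot runs as.
User=teleport
Group=teleport
Restart=always
RestartSec=5
ExecStart=/usr/local/bin/tbot start -c /etc/tbot.yaml
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target
```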
Use `tbot install systemd` to generate a systemd service file: ```code -$ tbot install systemd \ +$ sudo tbot install systemd \ --write \ --config /etc/tbot.yaml \ --user teleport \ @@ -32,12 +54,11 @@ service: ```code $ sudo systemctl daemon-reload -$ sudo systemctl enable tbot -$ sudo systemctl start tbot +$ sudo systemctl enable tbot --now ``` Check the service has started successfully: ```code $ sudo systemctl status tbot -``` \ No newline at end of file +``` diff --git a/docs/pages/includes/machine-id/machine-id-init-bot-data.mdx b/docs/pages/includes/machine-id/machine-id-init-bot-data.mdx index 37a6fd86222c1..43086e2eabba5 100644 --- a/docs/pages/includes/machine-id/machine-id-init-bot-data.mdx +++ b/docs/pages/includes/machine-id/machine-id-init-bot-data.mdx @@ -1,4 +1,5 @@ -Create the bot data directory and grant permissions to access it to the Linux user (in our example, `teleport`) which `tbot` will run as. +Create the bot data directory and grant access to the Linux user that `tbot` +will run as (in our example, `teleport`). ```code # Make the bot directory and assign ownership to teleport user diff --git a/docs/pages/includes/machine-id/mongodb/mongodb.go b/docs/pages/includes/machine-id/mongodb/mongodb.go index 9712712fffcaf..2d4c58422dc84 100644 --- a/docs/pages/includes/machine-id/mongodb/mongodb.go +++ b/docs/pages/includes/machine-id/mongodb/mongodb.go @@ -1,5 +1,5 @@ // This example program demonstrates how to connect to a MongoDB database -// using certificates issued by Teleport Machine ID. +// using certificates issued by Teleport Machine & Workload Identity.
package main diff --git a/docs/pages/includes/machine-id/plugin-prerequisites.mdx b/docs/pages/includes/machine-id/plugin-prerequisites.mdx index 22d43626da035..fb1139657b9ea 100644 --- a/docs/pages/includes/machine-id/plugin-prerequisites.mdx +++ b/docs/pages/includes/machine-id/plugin-prerequisites.mdx @@ -1,5 +1,5 @@ -**Recommended:** Configure Machine ID to provide short-lived Teleport -credentials to the plugin. Before following this guide, follow a Machine ID -[deployment guide](../../enroll-resources/machine-id/deployment/deployment.mdx) +**Recommended:** Configure Machine & Workload Identity to provide short-lived +Teleport credentials to the plugin. Before following this guide, follow a +Machine & Workload Identity [deployment guide](../../machine-workload-identity/deployment/deployment.mdx) to run the `tbot` binary on your infrastructure. diff --git a/docs/pages/includes/machine-id/postgresql/postgresql.go b/docs/pages/includes/machine-id/postgresql/postgresql.go index bdb9c3f0d6019..6f9252cd71967 100644 --- a/docs/pages/includes/machine-id/postgresql/postgresql.go +++ b/docs/pages/includes/machine-id/postgresql/postgresql.go @@ -1,5 +1,5 @@ // This example program demonstrates how to connect to a Postgres database -// using certificates issued by Teleport Machine ID. +// using certificates issued by Teleport Machine & Workload Identity. package main diff --git a/docs/pages/includes/mcp-access/configure-app-service.mdx b/docs/pages/includes/mcp-access/configure-app-service.mdx new file mode 100644 index 0000000000000..2d24da8c22180 --- /dev/null +++ b/docs/pages/includes/mcp-access/configure-app-service.mdx @@ -0,0 +1,75 @@ +{{ extraAppScheme="" }} + +You can update an existing Application Service or create a new one to enable +the MCP server.
 + + + + +If you already have an existing Application Service running, you can add an MCP +server in your YAML configuration: +```yaml +app_service: + enabled: true + apps: + - name: "everything" + uri: "mcp+{{extraAppScheme}}" + labels: + env: dev + description: +``` + + + +{/* lint ignore heading-increment remark-lint */} +#### Get a Join Token + +(!docs/pages/includes/tctl-token.mdx serviceName="Application" tokenType="app" tokenFile="/tmp/token"!) + +(!docs/pages/includes/database-access/alternative-methods-join.mdx!) + +#### Install the Teleport Application Service + +Install Teleport on the host where you will run the Teleport Application Service: + +(!docs/pages/includes/install-linux.mdx!) + +#### Configure the Teleport Application Service + +On the host where you will run the Teleport Application Service, create a file +at `/etc/teleport.yaml` with the following content: +```yaml +version: v3 +teleport: + join_params: + token_name: "/tmp/token" + method: token + proxy_server: "" +auth_service: + enabled: false +proxy_service: + enabled: false +ssh_service: + enabled: false +app_service: + enabled: true + apps: + - name: "everything" + uri: "mcp+{{extraAppScheme}}" + labels: + env: dev + description: +``` + +Replace with the host and port of your Teleport Proxy +Service or Teleport Cloud tenant, and replace the MCP details with the MCP +server you want to run. + +#### Start the Teleport Application Service + +(!docs/pages/includes/start-teleport.mdx service="the Application Service"!) + + + 
The `mcp-user` role allows access to all MCP servers and their tools: +```yaml +kind: role +version: v8 +metadata: + description: Access to MCP servers + labels: + teleport.internal/resource-type: preset + name: mcp-user +spec: + allow: + app_labels: + 'teleport.internal/app-sub-kind': 'mcp' + mcp: + tools: + - '*' +``` + +Alternatively, add the above allow permissions to an existing Teleport role. + + + + + +Create a new local user called with MCP access: + +```code +$ tctl users add --roles=mcp-user +``` + + + + + +(!docs/pages/includes/create-role-using-web.mdx!) diff --git a/docs/pages/includes/mcp-access/integration-intro.mdx b/docs/pages/includes/mcp-access/integration-intro.mdx new file mode 100644 index 0000000000000..29b94752baa7b --- /dev/null +++ b/docs/pages/includes/mcp-access/integration-intro.mdx @@ -0,0 +1,7 @@ +Teleport can provide secure access to MCP servers via the Teleport Application +Service. + +In this guide, you will: +1. Configure your {{ serviceName }} service for access by the MCP server. +1. Run the {{ serviceName }} MCP server. +1. Enroll the MCP server into your Teleport cluster and connect to it. diff --git a/docs/pages/includes/mcp-access/integration-limit-tools.mdx b/docs/pages/includes/mcp-access/integration-limit-tools.mdx new file mode 100644 index 0000000000000..a6724a6a7a0ce --- /dev/null +++ b/docs/pages/includes/mcp-access/integration-limit-tools.mdx @@ -0,0 +1,5 @@ +To grant access to the MCP server and all its tools, assign the preset +`mcp-user` role to your Teleport user. + +Optionally, you can limit which MCP tools the user can access by adjusting the +`mcp.tools` list in their role. 
For example: \ No newline at end of file diff --git a/docs/pages/includes/mcp-access/integration-teleport-app-rewrite.mdx b/docs/pages/includes/mcp-access/integration-teleport-app-rewrite.mdx new file mode 100644 index 0000000000000..1ca5a65e82a06 --- /dev/null +++ b/docs/pages/includes/mcp-access/integration-teleport-app-rewrite.mdx @@ -0,0 +1,83 @@ +You can register an MCP application in Teleport by defining it in your Teleport +Application Service configuration, or by using dynamic registration with `tctl` +or Terraform: + + + +Replace with the host running the {{serviceName}} MCP server: +```yaml +app_service: + enabled: "yes" + apps: + - name: "{{service}}-mcp" + uri: "mcp+http://:{{port}}/mcp" + labels: + env: dev + service: {{service}} + rewrite: + headers: + - "{{headerName}}: {{headerValue}}" +``` + +Restart the Application Service. + + + +Create an `app` resource definition file named `app-{{service}}-mcp.yaml`. Replace + with the host running the {{serviceName}} MCP server: +```yaml +# app-{{service}}-mcp.yaml +kind: app +version: v3 +metadata: + name: {{service}}-mcp + labels: + env: dev + service: {{service}} +spec: + uri: "mcp+http://:{{port}}/mcp" + rewrite: + headers: + - name: "{{headerName}}" + value: "{{headerValue}}" +``` + +Create the `app` resource with: +```code +$ tctl create -f app-{{service}}-mcp.yaml +``` + + + +Create a `teleport_app` resource in Terraform. 
Replace +with the host running the {{serviceName}} MCP server: +```hcl +resource "teleport_app" "{{service}}_mcp" { + version = "v3" + metadata = { + name = "{{service}}-mcp" + labels = { + "teleport.dev/origin" = "dynamic" + "env" = "dev" + "service" = "{{service}}" + } + } + + spec = { + uri = "mcp+http://:{{port}}/mcp" + rewrite = { + headers = [{ + name = "{{headerName}}" + value = "{{headerValue}}" + }] + } + } +} +``` +Apply the configuration: +```code +$ terraform apply +``` + + + \ No newline at end of file diff --git a/docs/pages/includes/mcp-access/integration-teleport-app.mdx b/docs/pages/includes/mcp-access/integration-teleport-app.mdx new file mode 100644 index 0000000000000..25fcd4000cf5a --- /dev/null +++ b/docs/pages/includes/mcp-access/integration-teleport-app.mdx @@ -0,0 +1,70 @@ +You can register an MCP application in Teleport by defining it in your Teleport +Application Service configuration, or by using dynamic registration with `tctl` +or Terraform: + + + +Replace with the host running the {{serviceName}} MCP server: +```yaml +app_service: + enabled: "yes" + apps: + - name: "{{service}}-mcp" + uri: "mcp+http://:{{port}}/mcp" + labels: + env: dev + service: {{service}} +``` + +Restart the Application Service. + + + +Create an `app` resource definition file named `app-{{service}}-mcp.yaml`. Replace + with the host running the {{serviceName}} MCP server: +```yaml +# app-{{service}}-mcp.yaml +kind: app +version: v3 +metadata: + name: {{service}}-mcp + labels: + env: dev + service: {{service}} +spec: + uri: "mcp+http://:{{port}}/mcp" +``` + +Create the `app` resource with: +```code +$ tctl create -f app-{{service}}-mcp.yaml +``` + + + +Create a `teleport_app` resource in Terraform. 
Replace +with the host running the {{serviceName}} MCP server: +```hcl +resource "teleport_app" "{{service}}_mcp" { + version = "v3" + metadata = { + name = "{{service}}-mcp" + labels = { + "teleport.dev/origin" = "dynamic" + "env" = "dev" + "service" = "{{service}}" + } + } + + spec = { + uri = "mcp+http://:{{port}}/mcp" + } +} +``` +Apply the configuration: +```code +$ terraform apply +``` + + + diff --git a/docs/pages/includes/mcp-access/integration-tsh.mdx b/docs/pages/includes/mcp-access/integration-tsh.mdx new file mode 100644 index 0000000000000..74f4217c18ad4 --- /dev/null +++ b/docs/pages/includes/mcp-access/integration-tsh.mdx @@ -0,0 +1,10 @@ + +Now wait until the application appears in `tsh mcp ls`, then configure your MCP +clients to access the MCP server, for example: +```code +$ tsh mcp config {{service}}-mcp --client-config claude +``` + +After configuring your MCP client, you will find {{serviceName}}-related tools from +`teleport-mcp-{{service}}-mcp`. You can now use these tools to interact with +{{serviceName}} via Teleport in your MCP clients: diff --git a/docs/pages/includes/mcp-access/troubleshoot-tsh-binary-enoent.mdx b/docs/pages/includes/mcp-access/troubleshoot-tsh-binary-enoent.mdx new file mode 100644 index 0000000000000..58794acb930fc --- /dev/null +++ b/docs/pages/includes/mcp-access/troubleshoot-tsh-binary-enoent.mdx @@ -0,0 +1,33 @@ +When starting your MCP client, you might see an error such as `spawn +ENOENT` or `command not found`: + +![ENOENT error](../../../img/mcp-access/troubleshoot-tsh-binary-enoent.png) + +This error indicates the path to the `tsh` binary is misconfigured in the +client’s settings. To fix it, re-run the `tsh mcp config` command to update the +path, or manually correct it in the client’s configuration file. See [connect +MCP clients](../../connect-your-client/model-context-protocol/mcp-access.mdx) +for more details.
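After correcting the path, the client's `mcpServers` entry should point at the absolute path of the `tsh` binary. A sketch of a fixed configuration, following the shape emitted by `tsh mcp config` (the binary path and app name here are assumptions):

```json
{
  "mcpServers": {
    "teleport-mcp-everything": {
      "command": "/usr/local/bin/tsh",
      "args": ["mcp", "connect", "everything"]
    }
  }
}
```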
 + +This issue can also occur due to a bug in the managed updates feature, which +has been fixed in version 18.2.3. If your Teleport cluster has managed updates +enabled for tools, check your `tsh` version: +```code +$ tsh version +Teleport v18.2.3 git:v18.2.3-0-xxxxxxxx go1.24.7 +Proxy version: 18.2.0 +Proxy: teleport.example.com:443 +Re-executed from version: 18.2.0 +``` + +The first line shows the version of the `tsh` binary from the managed update, +and the last line shows the version from your original installation. Both +versions must be at least `18.2.3` to fully resolve this issue. + +If the version shown on the first line is outdated, contact your Teleport +administrator to raise the tools update version. If the version shown as +"Re-executed from version" is outdated, locate the `tsh` binary from your +original installation (e.g. `which tsh`), uninstall it, and install a newer release. + +Once `tsh` has been updated, re-run the `tsh mcp config` commands to reconfigure +your MCP client. diff --git a/docs/pages/includes/mcp-access/tsh-mcp-config.mdx b/docs/pages/includes/mcp-access/tsh-mcp-config.mdx new file mode 100644 index 0000000000000..8f7e99a74decd --- /dev/null +++ b/docs/pages/includes/mcp-access/tsh-mcp-config.mdx @@ -0,0 +1,28 @@ +{{ appName="everything" }} + +To print a sample configuration for connecting your MCP client: + +```code +$ tsh mcp config {{ appName }} +Found MCP servers: +everything + +Here is a sample JSON configuration for launching Teleport MCP servers: +{ + "mcpServers": { + "teleport-mcp-{{ appName }}": { + "command": "/path/to/tsh", + "args": ["mcp", "connect", "{{ appName }}"] + } + } +} + +Tip: You can use this command to update your MCP servers configuration file automatically. +- For Claude Desktop, use --client-config=claude to update the default configuration. +- For Cursor, use --client-config=cursor to update the global MCP servers configuration.
+In addition, you can use --client-config= to specify a config file location that is compatible with the "mcpServers" mapping. +For example, you can update a Cursor project using --client-config=/.cursor/mcp.json +``` + +Once your MCP client configuration is updated, you will see the +`teleport-mcp-{{ appName }}` MCP server with its allowed tools appear in your MCP client. diff --git a/docs/pages/includes/metrics.mdx b/docs/pages/includes/metrics.mdx index d3ff005ac946b..4b9ed5e3c2bca 100644 --- a/docs/pages/includes/metrics.mdx +++ b/docs/pages/includes/metrics.mdx @@ -59,7 +59,7 @@ | `teleport_audit_parquetlog_last_processed_timestamp` | gauge | Teleport Audit Log | Number of last processing time in Parquet-format audit log. | | `teleport_audit_parquetlog_age_oldest_processed_message` | gauge | Teleport Audit Log | Number of age of oldest event in Parquet-format audit log. | | `teleport_audit_parquetlog_errors_from_collect_count` | counter | Teleport Audit Log | Number of collect failures in Parquet-format audit log. | -| `teleport_connected_resources` | gauge | Teleport Auth | Number and type of resources connected via keepalives. x | +| `teleport_connected_resources` | gauge | Teleport Auth | Number and type of resources connected via keepalives. | | `teleport_postgres_events_backend_write_requests` | counter | Postgres (Events) | Number of write requests to postgres events, labeled with the request `status` (success or failure). | | `teleport_postgres_events_backend_batch_read_requests` | counter | Postgres (Events) | Number of batch read requests to postgres events, labeled with the request `status` (success or failure). | | `teleport_postgres_events_backend_batch_delete_requests` | counter | Postgres (Events) | Number of batch delete requests to postgres events, labeled with the request `status` (success or failure). 
| @@ -70,10 +70,39 @@ | `teleport_registered_servers_by_install_methods` | gauge | Teleport Auth | The number of Teleport services that are connected to an Auth Service instance grouped by install methods. | | `teleport_roles_total` | gauge | Teleport Auth | The number of roles that exist in the cluster. | | `teleport_migrations` | gauge | Teleport Auth | Tracks for each migration if it is active (1) or not (0). | +| `teleport_bot_instances` | gauge | Teleport Auth | The number of bot instances across the entire cluster grouped by version. | | `user_login_total` | counter | Teleport Auth | Number of user logins. | | `watcher_event_sizes` | histogram | cache | Overall size of events emitted. | | `watcher_events` | histogram | cache | Per resource size of events emitted. | +## Session recording summarizer + +These metrics are exported by the Auth Service. They are all labeled with an +`inference_model_name` label, which is the `metadata.name` field of the +corresponding `inference_model` resource. + +### General metrics + +These metrics apply to all inference providers. + +| Name | Type | Component | Description | +| ------------------------------------------------ | ------- | ------------- | --------------------------------------------------------- | +| `teleport_summarizer_summarizations_total` | counter | Teleport Auth | Total number of summarization jobs started | +| `teleport_summarizer_summarization_errors` | counter | Teleport Auth | Number of failed summarization jobs | +| `teleport_summarizer_summarization_jobs_pending` | gauge | Teleport Auth | Number of summarization jobs currently awaiting execution | +| `teleport_summarizer_summarization_jobs_running` | gauge | Teleport Auth | Number of summarization jobs currently being executed | + +### OpenAI-specific metrics + +These metrics apply to jobs executed using the OpenAI inference provider, +including OpenAI-compatible proxies.
+ +| Name | Type | Component | Description | +| --------------------------------------------------- | ------- | ------------- | -------------------------------------------------------------------------------------------------------------------------------- | +| `teleport_summarizer_openai_api_requests` | counter | Teleport Auth | Total number of OpenAI API requests | +| `teleport_summarizer_openai_api_errors` | counter | Teleport Auth | Number of errors returned by the OpenAI API. Additionally labeled with `api_error_code` which denotes the OpenAI API error code. | +| `teleport_summarizer_openai_api_requests_in_flight` | gauge | Teleport Auth | Number of OpenAI requests currently awaiting response | + ## Enhanced Session Recording / BPF | Name | Type | Component | Description | @@ -221,6 +250,14 @@ The following table identifies all metrics available for incoming connections. | `teleport_cache_stale_events` | counter | Teleport | Number of stale events received by a Teleport service cache. A high percentage of stale events can indicate a degraded backend. | | `tx` | counter | Teleport | Number of bytes transmitted during an SSH connection. | +## Teleport Health Checks + +| Name | Type | Component | Description | +|----------------------------------------------|-------|-----------------------|-------------------------------------------------| +| `teleport_resources_health_status_healthy` | gauge | Teleport Health Check | Number of healthy resources. | +| `teleport_resources_health_status_unhealthy` | gauge | Teleport Health Check | Number of unhealthy resources. | +| `teleport_resources_health_status_unknown` | gauge | Teleport Health Check | Number of resources in an unknown health state. | + ## Go runtime metrics These metrics are surfaced by the Go runtime and are not specific to Teleport. 
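The new health-check gauges lend themselves to a simple alerting rule. A hedged PromQL sketch using the metric names from the table above (the five-minute window and zero threshold are assumptions, not recommendations from the Teleport docs):

```promql
# Fire when any resource has been reported unhealthy at some point
# during the last five minutes.
max_over_time(teleport_resources_health_status_unhealthy[5m]) > 0
```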
diff --git a/docs/pages/includes/node-logins.mdx b/docs/pages/includes/node-logins.mdx index 097fe9d35c4f6..73917a18357b5 100644 --- a/docs/pages/includes/node-logins.mdx +++ b/docs/pages/includes/node-logins.mdx @@ -1,14 +1,16 @@ -When Teleport's Auth Service receives a request to list Teleport Nodes (e.g., to -display Nodes in the Web UI or via `tsh ls`), it only returns the Nodes that the -current user is authorized to view. +When the Teleport Auth Service receives a request to list Teleport-connected +resources (e.g., to display resources in the Web UI or via `tsh ls`), it only +returns the resources that the current user is authorized to view. -For each Node in the user's Teleport cluster, the Auth Service applies the -following checks in order and, if one check fails, hides the Node from the user: +For each resource in the user's Teleport cluster, the Auth Service applies the +following checks in order and, if one check fails, hides the resource from the +user: -- None of the user's roles contain a `deny` rule that matches the Node's labels. +- None of the user's roles contain a `deny` rule that matches the resource's + labels. - At least one of the user's roles contains an `allow` rule that matches the - Node's labels. + resource's labels. -If you are not seeing Nodes when expected, make sure that your user's roles -include the appropriate `allow` and `deny` rules as documented in the -[Access Controls Reference](../reference/access-controls/roles.mdx). +If you are not seeing resources when expected, make sure that your user's roles +include the appropriate `allow` and `deny` rules as documented in the [Access +Controls Reference](../reference/access-controls/roles.mdx). 
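As an illustration of the allow/deny checks described above, a role that shows only resources labeled `env: dev` while explicitly hiding `env: prod` servers might look like the following sketch (the label values and login are assumptions):

```yaml
kind: role
version: v7
metadata:
  name: dev-access
spec:
  allow:
    logins: [ubuntu]
    # Servers must match this label to be visible to the user.
    node_labels:
      env: dev
  deny:
    # Deny rules are checked first and always win over allow rules.
    node_labels:
      env: prod
```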
diff --git a/docs/pages/includes/okta-create-saml-connector.mdx b/docs/pages/includes/okta-create-saml-connector.mdx deleted file mode 100644 index c7d81d1dcb489..0000000000000 --- a/docs/pages/includes/okta-create-saml-connector.mdx +++ /dev/null @@ -1,64 +0,0 @@ -### Create Okta SAML 2.0 App - -From the main navigation menu, select **Applications** -> **Applications**, and click -**Create App Integration**. Select SAML 2.0, then click **Next**. - -![Create APP](../../img/sso/okta/okta-saml-1.png) - -On the next screen (**General Settings**), provide a name and optional logo for -your new app, then click **Next**. This will bring you to the **Configure SAML** section. - -### Configure the App - -Provide the following values to their respective fields: - -#### General - -- Single sign on URL: `https://:/v1/webapi/saml/acs/okta` -- Audience URI (SP Entity ID): `https://:/v1/webapi/saml/acs/okta` -- Name ID format `EmailAddress` -- Application username `Okta username` - -Replace `` with your Teleport Proxy Service address or Enterprise -Cloud tenant (e.g. `mytenant.teleport.sh`). Replace `` with your Proxy -Service listening port (`443` by default). - -#### Attribute Statements - -- Name: `username` | Name format: `Unspecified` | Value: `user.login` - -#### Group Attribute Statements - -We will map our Okta groups to SAML attribute statements (special signed metadata -exposed via a SAML XML response), so that Teleport can discover a user's group -membership and assign matching roles. - -- Name: `groups` | Name format: `Unspecified` -- Filter: `Matches regex` | `.*` - -The configuration page should now look like this: - -![Configure APP](../../img/sso/okta/setup-redirection.png) - - -The "Matches regex" filter requires the literal string `.*` in order to match all -content from the group attribute statement. - - - -Notice that we have set "NameID" to the email format and mapped the groups with -a wildcard regex in the Group Attribute statements. 
We have also set the "Audience" -and SSO URLs to the same value. This is so Teleport can read and use Okta users' -email addresses to create their usernames in Teleport, instead of relying on additional -name fields. - - -Once you've filled the required fields, click **Next**, then finish the app creation wizard. - -### Group assignment - -From the **Assignments** tab of the new application page, click **Assign**. Assign the user groups -which can access to the app. Users being members of those groups will have the SSO access to -Teleport once the Auth Connector is configured. - -![Configure APP](../../img/sso/okta/okta-saml-3.1.png) diff --git a/docs/pages/includes/plugins/enroll.mdx b/docs/pages/includes/plugins/enroll.mdx index 0a4e8e15b9eab..87619e05e5078 100644 --- a/docs/pages/includes/plugins/enroll.mdx +++ b/docs/pages/includes/plugins/enroll.mdx @@ -2,13 +2,11 @@ In Teleport Enterprise Cloud, Teleport manages {{ name }} for you, and you can enroll {{ name }} from the Teleport Web UI. -Visit the Teleport Web UI and on the left sidebar, click **Access** followed -by **Integrations**. Then click **Enroll New Integration** to visit the -"Enroll New Integration" page: +Visit the Teleport Web UI and on the left sidebar, click **Add New** followed +by **Integration**: ![Enroll an Access Request plugin](../../../img/enterprise/plugins/enroll.png) On the "Select Integration Type" menu, click the tile for your integration. You will see a page with instructions to set up the integration, as well as a form that you can use to configure the integration. - diff --git a/docs/pages/includes/plugins/finish-event-handler-config.mdx b/docs/pages/includes/plugins/finish-event-handler-config.mdx index 795ba52253461..be6edf5dc1e5f 100644 --- a/docs/pages/includes/plugins/finish-event-handler-config.mdx +++ b/docs/pages/includes/plugins/finish-event-handler-config.mdx @@ -8,10 +8,9 @@ the Fluentd event handler. 
This file includes setting similar to the following: storage = "./storage" timeout = "10s" batch = 20 -namespace = "default" # The window size configures the duration of the time window for the event handler # to request events from Teleport. By default, this is set to 24 hours. -# Reduce the window size if the events backend cannot manage the event volume +# Reduce the window size if the events backend cannot manage the event volume # for the default window size. # The window size should be specified as a duration string, parsed by Go's time.ParseDuration. window-size = "24h" @@ -20,16 +19,16 @@ window-size = "24h" # and new Access Requests, you can assign this field to # "user.login,access_request.create". types = "" -# skip-event-types is a comma-separated list of types of events to skip. For -# example, to forward all audit events except for new app deletion events, you -# can include the following assignment: -# skip-event-types = "app.delete" -skip-event-types: [] -# skip-session-types is a comma-separated list of session event types to skip. +# skip-event-types is a comma-separated list of audit log event types to skip. +# For example, to forward all audit events except for new app deletion events, +# you can include the following assignment: +# skip-event-types = ["app.delete"] +skip-event-types = [] +# skip-session-types is a comma-separated list of session recording event types to skip. 
# For example, to forward all session events except for malformed SQL packet # events, you can include the following assignment: -# skip-session-types = "db.session.malformed_packet" -skip-session-types: [] +# skip-session-types = ["db.session.malformed_packet"] +skip-session-types = [] [forward.fluentd] ca = "/home/bob/event-handler/ca.crt" @@ -39,7 +38,7 @@ url = "https://fluentd.example.com:8888/test.log" session-url = "https://fluentd.example.com:8888/session" [teleport] -addr = "example.teleport.com:443" +addr = "teleport.example.com:443" identity = "identity" ``` @@ -56,10 +55,9 @@ eventHandler: storagePath: "./storage" timeout: "10s" batch: 20 - namespace: "default" # The window size configures the duration of the time window for the event handler # to request events from Teleport. By default, this is set to 24 hours. - # Reduce the window size if the events backend cannot manage the event volume + # Reduce the window size if the events backend cannot manage the event volume # for the default window size. # The window size should be specified as a duration string, parsed by Go's time.ParseDuration. windowSize: "24h" @@ -68,18 +66,18 @@ eventHandler: # and new Access Requests, you can assign this field to: # ["user.login", "access_request.create"] types: [] - # skipEventTypes lists types of events to skip. For example, to forward all + # skipEventTypes lists types of audit events to skip. For example, to forward all # audit events except for new app deletion events, you can assign this to: # ["app.delete"] skipEventTypes: [] - # skipSessionTypes lists session event types to skip. For example, to forward - # all session events except for malformed SQL packet events, you can assign - # this to: + # skipSessionTypes lists types of session recording events to skip. 
For example, + # to forward all session events except for malformed SQL packet events, + # you can assign this to: # ["db.session.malformed_packet"] skipSessionTypes: [] teleport: - address: "example.teleport.com:443" + address: "teleport.example.com:443" identitySecretName: teleport-event-handler-identity identitySecretPath: identity @@ -98,4 +96,3 @@ persistentVolumeClaim: - diff --git a/docs/pages/includes/plugins/identity-export.mdx b/docs/pages/includes/plugins/identity-export.mdx index 5c6ad9a57a602..a6d2d42c45788 100644 --- a/docs/pages/includes/plugins/identity-export.mdx +++ b/docs/pages/includes/plugins/identity-export.mdx @@ -25,9 +25,9 @@ Auth Service's gRPC endpoint. By default, `tctl auth sign` produces certificates with a relatively short lifetime. For production deployments, we suggest using [Machine - ID](../../enroll-resources/machine-id/introduction.mdx) to programmatically issue and renew - certificates for your plugin. See our Machine ID [getting started - guide](../../enroll-resources/machine-id/getting-started.mdx) to learn more. + & Workload Identity](../../machine-workload-identity/introduction.mdx) to programmatically issue and renew + certificates for your plugin. See our Machine & Workload Identity [getting started + guide](../../machine-workload-identity/getting-started.mdx) to learn more. Note that you cannot issue certificates that are valid longer than your existing credentials. 
For example, to issue certificates with a 1000-hour TTL, you must be logged in with a session that is diff --git a/docs/pages/includes/plugins/install-access-request.mdx b/docs/pages/includes/plugins/install-access-request.mdx index dc67e1abdc61c..accd634714fb4 100644 --- a/docs/pages/includes/plugins/install-access-request.mdx +++ b/docs/pages/includes/plugins/install-access-request.mdx @@ -29,7 +29,7 @@ Make sure the plugin is installed by running the following command: ```code $ docker run public.ecr.aws/gravitational/teleport-plugin-{{ name }}:(=teleport.plugin.version=) version -teleport-{{ name }} v(=teleport.plugin.version=) git:teleport-{{ name }}-v(=teleport.plugin.version=)-(=teleport.git=) (=teleport.golang=) +teleport-{{ name }} v(=teleport.plugin.version=) (=teleport.golang=) ``` For a list of available tags, visit [Amazon ECR Public Gallery](https://gallery.ecr.aws/gravitational/teleport-plugin-{{ name }}). diff --git a/docs/pages/includes/plugins/rbac-impersonate-event-handler-machine-id.mdx b/docs/pages/includes/plugins/rbac-impersonate-event-handler-machine-id.mdx index d664c756d0d81..ce0ac8571b227 100644 --- a/docs/pages/includes/plugins/rbac-impersonate-event-handler-machine-id.mdx +++ b/docs/pages/includes/plugins/rbac-impersonate-event-handler-machine-id.mdx @@ -1,5 +1,5 @@ -With the role created, you now need to allow the Machine ID bot to produce -credentials for this role. +With the role created, you now need to allow the Machine & Workload Identity bot +to produce credentials for this role. 
This can be done with `tctl`, replacing `my-bot` with the name of your bot: diff --git a/docs/pages/includes/plugins/rbac-impersonate-machine-id.mdx b/docs/pages/includes/plugins/rbac-impersonate-machine-id.mdx new file mode 100644 index 0000000000000..960257059df29 --- /dev/null +++ b/docs/pages/includes/plugins/rbac-impersonate-machine-id.mdx @@ -0,0 +1,10 @@ +If you haven't set up a Machine ID bot yet, refer to the +[deployment guide](../../machine-workload-identity/deployment/deployment.mdx) +to run the `tbot` binary on your infrastructure. + +Next, allow the Machine ID bot to generate credentials for the `access-plugin` +role. You can do this using `tctl`, replacing `my-bot` with the name of your bot: + +```code +$ tctl bots update my-bot --add-roles access-plugin +``` diff --git a/docs/pages/includes/plugins/rbac-impersonate.mdx b/docs/pages/includes/plugins/rbac-impersonate.mdx index c1cf00bcca4d9..b3284675f9f4f 100644 --- a/docs/pages/includes/plugins/rbac-impersonate.mdx +++ b/docs/pages/includes/plugins/rbac-impersonate.mdx @@ -7,11 +7,18 @@ If you are running a self-hosted Teleport Enterprise deployment and are using `tctl` from the Auth Service host, you will already have impersonation privileges. 
-To grant your user impersonation privileges for `access-plugin`, define a role -called `access-plugin-impersonator` by pasting the following YAML document into -a file called `access-plugin-impersonator.yaml`: +To grant your user impersonation privileges for `access-plugin`, define a user +named `access-plugin` and a role named `access-plugin-impersonator` by adding +the following YAML document into a file called `access-plugin-impersonator.yaml`: ```yaml +kind: user +metadata: + name: access-plugin +spec: + roles: ['access-plugin'] +version: v2 +--- kind: role version: v7 metadata: @@ -25,18 +32,18 @@ spec: - access-plugin ``` -Create the `access-plugin-impersonator` role: +Create the user and role: ```code $ tctl create -f access-plugin-impersonator.yaml +user "access-plugin" has been created +role "access-plugin-impersonator" has been created ``` (!docs/pages/includes/create-role-using-web.mdx!) -If you are providing identity files to the plugin with Machine ID, assign the -`access-plugin` role to the Machine ID bot user. Otherwise, assign this role to -the user you plan to use to generate credentials for the `access-plugin` role -and user: +Assign this role to the user you plan to use to generate credentials for the +`access-plugin` role and user: (!docs/pages/includes/add-role-to-user.mdx role="access-plugin-impersonator"!) diff --git a/docs/pages/includes/plugins/rbac-with-friendly-name.mdx b/docs/pages/includes/plugins/rbac-with-friendly-name.mdx deleted file mode 100644 index b277ddd2887ad..0000000000000 --- a/docs/pages/includes/plugins/rbac-with-friendly-name.mdx +++ /dev/null @@ -1,85 +0,0 @@ -Teleport's Access Request plugins authenticate to your Teleport cluster as a -user with permissions to list and read Access Requests. This way, plugins can -retrieve Access Requests from the Teleport Auth Service and present them to -reviewers. 
- -Define a user and role called `access-plugin` by adding the following content to -a file called `access-plugin.yaml`: - -```yaml -kind: role -version: v7 -metadata: - name: access-plugin -spec: - allow: - rules: - - resources: ['access_request'] - verbs: ['list', 'read'] - - resources: ['access_plugin_data'] - verbs: ['update'] - - # Optional: Required to use access monitoring rules. - - resources: ['access_monitoring_rule'] - verbs: ['list', 'read'] - - # Optional: In order to provide access list review reminders, permissions to list and read access lists - # are necessary. This is currently only supported for a subset of plugins. - - resources: ['access_list'] - verbs: ['list', 'read'] - - # Optional: To display logins permitted by roles, the plugin also needs - # permission to read the role resource. - - resources: ['role'] - verbs: ['read'] - # Optional: To have the users traits apply when evaluating the roles, - # the plugin also needs permission to read users. - - resources: ['user'] - verbs: ['read'] - - # Optional: To display user-friendly names in resource-based Access - # Requests instead of resource IDs, the plugin also needs permission - # to list the resources being requested. Include this along with the - # list-access-request-resources role definition. - review_requests: - preview_as_roles: - - list-access-request-resources ---- -kind: user -metadata: - name: access-plugin -spec: - roles: ['access-plugin'] -version: v2 ---- -# Optional, for displaying friendly names of resources. Resource types and -# labels can be further limited to only the resources that access can be -# requested to. 
-kind: role -version: v7 -metadata: - name: list-access-request-resources -spec: - allow: - rules: - - resources: ['node', 'app', 'db', 'kube_cluster'] - verbs: ['list', 'read'] - node_labels: - '*': '*' - kubernetes_labels: - '*': '*' - db_labels: - '*': '*' - app_labels: - '*': '*' - group_labels: - '*': '*' -``` - -Create the user and role: - -```code -$ tctl create -f access-plugin.yaml -``` - -(!docs/pages/includes/create-role-using-web.mdx!) \ No newline at end of file diff --git a/docs/pages/includes/plugins/tbot-identity.mdx b/docs/pages/includes/plugins/tbot-identity.mdx index 0583f9bab5644..8f1ca5d44f6c0 100644 --- a/docs/pages/includes/plugins/tbot-identity.mdx +++ b/docs/pages/includes/plugins/tbot-identity.mdx @@ -15,7 +15,7 @@ If running `tbot` on a Linux server, use the `directory` output to write identity files to the `/opt/machine-id` directory: ```yaml -outputs: +services: - type: identity destination: type: directory @@ -29,7 +29,7 @@ If running `tbot` on Kubernetes, write the identity file to Kubernetes secret instead: ```yaml -outputs: +services: - type: identity destination: type: kubernetes_secret diff --git a/docs/pages/includes/policy/access-graph.mdx b/docs/pages/includes/policy/access-graph.mdx index dd3d698e7d96f..21c01106aa396 100644 --- a/docs/pages/includes/policy/access-graph.mdx +++ b/docs/pages/includes/policy/access-graph.mdx @@ -2,4 +2,4 @@ Access Graph is a feature of the [Teleport Identity Security](https://goteleport Teleport Enterprise edition customers. To verify that Access Graph is set up correctly for your cluster, sign in to the Teleport Web UI, click the -Policy sidebar button, and then the Browse menu item. Identities, resources, etc. should be listed. +Identity Security sidebar button, and then the Browse menu item. Identities, resources, etc. should be listed. 
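The `tbot-identity.mdx` snippets above replace the legacy `outputs:` key with `services:` in the `tbot` configuration. As a rough sketch of how one of those snippets fits into a complete configuration file (the proxy address, join token name, and storage paths below are placeholders for illustration, not values taken from this change):

```yaml
# Hypothetical minimal tbot.yaml using the newer `services` key.
# proxy_server and the join token are placeholder values.
version: v2
proxy_server: example.teleport.sh:443
onboarding:
  join_method: token
  token: example-token
storage:
  type: directory
  path: /var/lib/teleport/bot
services:
  - type: identity
    destination:
      type: directory
      path: /opt/machine-id
```

On older `tbot` versions that predate the rename, the same list appears under `outputs:` instead.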
diff --git a/docs/pages/includes/policy/identity-activity-center.mdx b/docs/pages/includes/policy/identity-activity-center.mdx new file mode 100644 index 0000000000000..88a93b164c899 --- /dev/null +++ b/docs/pages/includes/policy/identity-activity-center.mdx @@ -0,0 +1,13 @@ +Teleport Identity Activity Center is a centralized data platform that enhances visibility and lets you +search and analyze activity from both human and non-human identities across multiple data sources. + +It provides a rich visualization layer that maps access policies +across services such as AWS, GitHub, Okta, and Teleport to the real-time +activity of those identities. + +Built to assist security and operations teams, Identity Activity Center combines activity +from the same identity across different platforms, improving the correlation of identity-based +events and expediting investigations. +Its alerting engine detects irregularities in audit logs, highlights +unusual behavior, and describes the access levels each identity has across corporate services, +offering contextual insight during incident response. \ No newline at end of file diff --git a/docs/pages/includes/preset-roles-table.mdx b/docs/pages/includes/preset-roles-table.mdx index c3a2e1a9d829c..2fa271c192535 100644 --- a/docs/pages/includes/preset-roles-table.mdx +++ b/docs/pages/includes/preset-roles-table.mdx @@ -3,11 +3,13 @@ | `access`| Allows access to cluster resources. | | | `editor` | Allows editing of cluster configuration settings. | | | `auditor`| Allows reading cluster events, audit logs, and playing back session records. | | +| `access-plugin` | Enables self-hosted Access Request plugin features. | | +| `list-access-request-resources` | Allows reading Access Request resources. | | | `requester`| Allows a user to create Access Requests. | ✔ | | `reviewer`| Allows review of Access Requests. | ✔ | | `group-access`| Allows access to all user groups.
| ✔ | | `device-admin`| Used to manage trusted devices. | ✔ | | `device-enroll`| Used to grant device enrollment powers to users. | ✔ | | `require-trusted-device`| Requires trusted device access to resources. | ✔ | -| `terraform-provider`| Allows the Teleport Terraform provider to configure all of its supported Teleport resources. | ✔ | +| `terraform-provider`| Allows the Teleport Terraform provider to configure all of its supported Teleport resources. | | diff --git a/docs/pages/includes/provision-token/bound-keypair-spec.mdx b/docs/pages/includes/provision-token/bound-keypair-spec.mdx new file mode 100644 index 0000000000000..2e4d38b383d54 --- /dev/null +++ b/docs/pages/includes/provision-token/bound-keypair-spec.mdx @@ -0,0 +1,65 @@ +```yaml +kind: token +version: v2 +metadata: + name: example-token +spec: + roles: [Bot] + join_method: bound_keypair + bot_name: example + + # Fields related to the bound keypair joining process. + bound_keypair: + # Fields related to the initial join attempt. + onboarding: + # If set to a public key in SSH authorized_keys format, the + # joining client must have the corresponding private key to join. This + # keypair may be created using `tbot keypair create`. If set, + # `registration_secret` and `must_register_before` are ignored. + initial_public_key: "" + + # If set to a secret string value, a client may use this secret to perform + # the first join without pre-registering a public key in + # `initial_public_key`. If unset and no `initial_public_key` is provided, + # a random value will be generated automatically into + # `.status.bound_keypair.registration_secret`. + registration_secret: "" + + # If set to an RFC 3339 timestamp, attempts to register via + # `registration_secret` will be denied once the timestamp has elapsed. If + # more time is needed, this field can be edited to extend the registration + # period. + must_register_before: "" + + # Fields related to recovery after certificates have expired. 
+ recovery: + # The maximum number of allowed recovery attempts. This value may + # be raised or lowered after creation to allow additional recovery + # attempts should the initial limit be exhausted. If `mode` is set to + # `standard`, recovery attempts will only be allowed if + # `.status.bound_keypair.recovery_count` is less than this limit. This + # limit is not enforced if `mode` is set to `relaxed` or `insecure`. This + # value must be at least 1 to allow for the initial join during + # onboarding, which counts as a recovery. + limit: 1 + + # The recovery rule enforcement mode. Valid values: + # - standard (or unset): all configured rules enforced. The recovery limit + # and client join state are required and verified. This is the most + # secure recovery mode. + # - relaxed: recovery limit is not enforced, but client join state is + # still required. This effectively allows unlimited recovery attempts, + # but client join state still helps mitigate stolen credentials. + # - insecure: neither the recovery limit nor client join state are + # enforced. This allows any client with the private key to join freely. + # This is less secure, but can be useful in certain situations, like in + # otherwise unsupported CI/CD providers. This mode should be used with + # care, and RBAC rules should be configured to heavily restrict which + # resources this identity can access. + mode: "standard" + + # If set to an RFC 3339 timestamp, once elapsed, a keypair rotation will be + # forced on next join if it has not already been rotated. The most recent + # rotation is recorded in `.status.bound_keypair.last_rotated_at`. 
+ rotate_after: "" +``` \ No newline at end of file diff --git a/docs/pages/includes/provision-token/circleci-spec.mdx b/docs/pages/includes/provision-token/circleci-spec.mdx index 2928a82a2c952..fc8b724441e41 100644 --- a/docs/pages/includes/provision-token/circleci-spec.mdx +++ b/docs/pages/includes/provision-token/circleci-spec.mdx @@ -12,8 +12,8 @@ spec: # allow specifies the rules by which the Auth Service determines if `tbot` # should be allowed to join. allow: - - # CircleCI context id. See the CircleCI MachineID guide to learn - # how to create a context and recover its ID. + - # CircleCI context id. See the CircleCI Machine & Workload Identity + # guide to learn how to create a context and recover its ID. context_id: 00000000-0000-0000-0000-000000000000 # CircleCI projectID. project_id: 1234 diff --git a/docs/pages/includes/provision-token/env0-spec.mdx b/docs/pages/includes/provision-token/env0-spec.mdx new file mode 100644 index 0000000000000..7175739a8f275 --- /dev/null +++ b/docs/pages/includes/provision-token/env0-spec.mdx @@ -0,0 +1,52 @@ +```yaml +kind: token +version: v2 +metadata: + name: env0 +spec: + roles: [Bot] + join_method: env0 + + # This must match a bot name, created either with `tctl bots add` or by + # creating a `bot` resource. + bot_name: env0 + + env0: + allow: + # organization_id and one of either project_name or project_id must be + # set. All other fields are optional and only checked if set. + # For more information on possible values, see Env0's documentation: + # https://docs.envzero.com/guides/integrations/oidc-integrations#format-of-the-openid-connect-id-token + - organization_id: "00000000-0000-0000-0000-000000000000" + + # A unique project identifier. Either this field or `project_name` must + # be set. + project_id: "00000000-0000-0000-0000-000000000000" + + # The project name. Either this field or `project_id` must be set. + project_name: ExampleProject + + # A unique template identifier. 
+ template_id: "00000000-0000-0000-0000-000000000000" + + # The template name, optional. + template_name: ExampleTemplate + + # The unique environment ID. + environment_id: "00000000-0000-0000-0000-000000000000" + + # The environment name. + environment_name: ExampleEnvironment + + # The workspace name. + workspace_name: WorkspaceName + + # The deployment type, including "deploy", "destroy", "prPlan", "task" + deployment_type: "" + + # The email address of the user that started the deployment. + deployer_email: "" + + # A custom value provided via the `ENV0_OIDC_TAG` environment variable + env0_tag: "" +``` diff --git a/docs/pages/includes/provision-token/github-spec.mdx b/docs/pages/includes/provision-token/github-spec.mdx index 9e86dfa0694df..3a0a947f7d2c9 100644 --- a/docs/pages/includes/provision-token/github-spec.mdx +++ b/docs/pages/includes/provision-token/github-spec.mdx @@ -6,8 +6,8 @@ metadata: # token, this name should be specified. name: github-token spec: - # For Machine ID and GitHub joining, roles will always be "Bot" and - # join_method will always be "github". + # For Machine & Workload Identity via GitHub joining, roles will always be + # "Bot" and join_method will always be "github". roles: [Bot] join_method: github @@ -72,7 +72,7 @@ spec: # whether by committing or by directly despatching the workflow. actor: octocat # ref is the git ref that triggered the action run. - ref: ref/heads/main + ref: refs/heads/main # ref_type is the type of the git ref that triggered the action run. 
ref_type: branch # sub is a concatenated string of various attributes of the workflow diff --git a/docs/pages/includes/provision-token/kubernetes-oidc-spec.mdx b/docs/pages/includes/provision-token/kubernetes-oidc-spec.mdx new file mode 100644 index 0000000000000..5adf61e19d539 --- /dev/null +++ b/docs/pages/includes/provision-token/kubernetes-oidc-spec.mdx @@ -0,0 +1,21 @@ +```yaml +kind: token +version: v2 +metadata: + name: example +spec: + roles: [App] + join_method: kubernetes + kubernetes: + # oidc configures the Auth Service to validate the JWT presented by `tbot` + # using the public keys published by the configured OIDC issuer. + type: oidc + oidc: + # Issuer must be a fully-qualified HTTPS URL and may vary between + # providers. + issuer: https://oidc.eks.us-west-2.amazonaws.com/id/my-cluster + # allow specifies the rules by which the Auth Service determines if the node + # should be allowed to join. + allow: + - service_account: "namespace:serviceaccount" +``` diff --git a/docs/pages/includes/provision-token/oracle-spec.mdx b/docs/pages/includes/provision-token/oracle-spec.mdx index 9987f0a16b2d0..96ce5ecd538ae 100644 --- a/docs/pages/includes/provision-token/oracle-spec.mdx +++ b/docs/pages/includes/provision-token/oracle-spec.mdx @@ -16,12 +16,16 @@ spec: oracle: allow: # OCID of the tenancy to allow instances to join from. Required. - - tenancy: "ocid1.tenancy.oc1.." + - tenancy: "ocid1.tenancy.oc1.." # (Optional) OCIDs of compartments to allow instances to join from. Only the direct parent # compartment applies; i.e. nested compartments are not taken into account. # If empty, instances can join from any compartment in the tenancy. - parent_compartments: ["ocid1.compartment.oc1..."] + parent_compartments: ["ocid1.compartment.oc1.."] # (Optional) Regions to allow instances to join from. Both full names ("us-phoenix-1") # and abbreviations ("phx") are allowed. If empty, instances can join from any region. 
regions: ["example-region"] -``` \ No newline at end of file + # (Optional) OCIDs of specific compute instances that should be allowed to join. + # If empty, any instance matching the other fields is allowed. + # The instances field is supported in Teleport v18.4.0+. + instances: ["ocid1.instance.oc1.."] +``` diff --git a/docs/pages/includes/provision-token/spacelift-spec.mdx b/docs/pages/includes/provision-token/spacelift-spec.mdx index a147e1ff0f17f..a51f0efdf316f 100644 --- a/docs/pages/includes/provision-token/spacelift-spec.mdx +++ b/docs/pages/includes/provision-token/spacelift-spec.mdx @@ -14,14 +14,24 @@ spec: spacelift: # hostname should be the hostname of your Spacelift tenant. hostname: example.app.spacelift.io + # enable_glob_matching enables glob-style matching for the allow rules. + enable_glob_matching: true # allow specifies rules that control which Spacelift executions will be # granted access. Those not matching any allow rule will be denied. allow: # space_id identifies the space that the module or stack resides within. + # + # This field supports glob-style matching when the enable_glob_matching field is true: + # - Use '*' to match zero or more characters. + # - Use '?' to match any single character. - space_id: root # caller_type is the type of caller_id. This must be `stack` or `module`. caller_type: stack # caller_id is the id of the caller, e.g. the name of the stack or module. + # + # This field supports glob-style matching when the enable_glob_matching field is true: + # - Use '*' to match zero or more characters. + # - Use '?' to match any single character. caller_id: my-stack # scope is the scope of the token - either `read` or `write`.
# See https://docs.spacelift.io/integrations/cloud-providers/oidc/#about-scopes diff --git a/docs/pages/includes/provision-token/terraform-spec.mdx b/docs/pages/includes/provision-token/terraform-spec.mdx index 2660faccf142b..24742fba098df 100644 --- a/docs/pages/includes/provision-token/terraform-spec.mdx +++ b/docs/pages/includes/provision-token/terraform-spec.mdx @@ -5,13 +5,13 @@ metadata: name: terraform spec: roles: [Bot] - join_method: terraform + join_method: terraform_cloud # This must match a bot name, created either with `tctl bots add` or by # creating a `bot` resource. bot_name: terraform - terraform: + terraform_cloud: # Manually override the expected audience. If unset, defaults to the # Teleport cluster name. It is not recommended to override this value. audience: '' diff --git a/docs/pages/includes/provision-token/tpm-spec.mdx b/docs/pages/includes/provision-token/tpm-spec.mdx index 5ba8a6b021061..b5df1ed6eecf9 100644 --- a/docs/pages/includes/provision-token/tpm-spec.mdx +++ b/docs/pages/includes/provision-token/tpm-spec.mdx @@ -6,8 +6,8 @@ metadata: # token, this name should be specified. name: tpm-token spec: - # For Machine ID and TPM joining, roles will always be "Bot" and - # join_method will always be "tpm". + # For Machine & Workload Identity via TPM joining, roles will always be "Bot" + # and join_method will always be "tpm". roles: [Bot] join_method: tpm diff --git a/docs/pages/includes/reference/resources/autoupdate_agent_report.mdx b/docs/pages/includes/reference/resources/autoupdate_agent_report.mdx new file mode 100644 index 0000000000000..9413d4fc0baac --- /dev/null +++ b/docs/pages/includes/reference/resources/autoupdate_agent_report.mdx @@ -0,0 +1,40 @@ +```yaml +kind: autoupdate_agent_report +version: v1 +metadata: + # Instance ID of the auth that generated the report. + name: auth1 +spec: + # When the report was generated. + timestamp: 2025-05-28T11:22:41.924956-04:00 + # Map of the agent groups seen by the auth. 
+ groups: + dev: + # Map of the agent versions seen for this group by the auth. + versions: + "1.2.3": + # Number of registered agents with this version. + count: 15 + "1.2.4": + count: 2 + "1.2.5": + count: 34 + stage: + versions: + "1.2.3": + count: 15 + "1.2.4": + count: 125 + prod: + versions: + "1.2.5": + count: 1543 + # List of agents omitted from the report. + omitted: + - count: 120 + reason: "version is pinned" + - count: 12 + reason: "managed update v1 updater does not support agent reports" + - count: 42 + reason: "updater version predates agent report" +``` diff --git a/docs/pages/includes/reference/resources/autoupdate_agent_rollout.mdx b/docs/pages/includes/reference/resources/autoupdate_agent_rollout.mdx new file mode 100644 index 0000000000000..2334b53fd4514 --- /dev/null +++ b/docs/pages/includes/reference/resources/autoupdate_agent_rollout.mdx @@ -0,0 +1,109 @@ +```yaml +kind: autoupdate_agent_rollout +metadata: + # autoupdate_agent_rollout is a singleton resource. There can be only one instance + # of this resource in the Teleport cluster, and it must be named `autoupdate-agent-rollout`. + name: autoupdate-agent-rollout +spec: + # start_version is the version used to install new agents before their + # group's scheduled update time. Agents never update to the start_version + # automatically, but may be required to via "teleport-update update --now". + start_version: v17.2.0 + + # target_version is the version that agents update to during their group's + # scheduled update time. New agents also use this version after their group's + # scheduled update time. + target_version: v17.2.1 + + # schedule used to roll out updates. + # The regular schedule is defined in the autoupdate_config resource. + # The immediate schedule updates all agents to target_version immediately. + # Possible values: "regular" or "immediate" + schedule: regular + + # autoupdate_mode allows users to enable, disable, or suspend agent updates at the + # cluster level. 
Disable agent automatic updates only if self-managed + updates are in place. This value may also be set in autoupdate_config. + # If set in both places, disabled overrides suspended, which overrides enabled. + # Possible values: "enabled", "disabled", "suspended" + autoupdate_mode: enabled + + # strategy used to roll out updates to agents. + # The halt-on-error strategy ensures that groups earlier in the schedule are + # given the opportunity to update to the target_version before groups that are + # later in the schedule. (Currently, the schedule must be stopped manually by + # setting the mode to "suspended" or "disabled". In the future, errors will be + # detected automatically). + # The time-based strategy ensures that each group updates within a defined + # time window, with no dependence between groups. + # Possible values: "halt-on-error" or "time-based" + # Default: "halt-on-error" + strategy: halt-on-error + + # maintenance_window_duration configures the duration after the start_hour + # when updates may occur. Only valid for the time-based strategy. + # maintenance_window_duration: 1h + +status: + + # groups contains the status for each group in the currently executing schedule. + groups: + + # name of each group, configured locally via "teleport-update enable --group" + - name: staging + + # start_time of the group + start_time: 0001-01-01T00:00:00Z + + # state of the group + # Possible values: unstarted, active, done, rolledback + state: active + + # last_update_time of this group's status + last_update_time: 0001-01-01T00:00:00Z + + # last_update_reason of this group's status + last_update_reason: "new version" + + # days that the update may occur on, from autoupdate_config + # Possible values: "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun", and "*" + config_days: [ "Mon", "Tue", "Wed", "Thu" ] + + # start_hour of the update, in UTC, from autoupdate_config + config_start_hour: 4 + + - name: production + + # ...
+ + # config_wait_hours is the number of hours after the previous group that this + # group may execute, from autoupdate_config. + + # canary_count is the number of agents selected to update and verify before the rest + # of the group. Only present for the halt-on-error schedule. + canary_count: 1 + + # canaries describes the status of the selected canaries for this group. + canaries: + + # updater_id is the unique ID of the updater that is managing the canary. + - updater_id: 3c6bcc1b-1992-4abc-a6f1-43f8f1b9ff05 + + # host_id is the unique host ID of the agent that is being updated. + host_id: c757ad82-95f2-4416-ae9d-a20ffd9b5a54 + + # hostname is the hostname of the agent that is being updated. + hostname: prod43 + + # success is true if the canary was updated successfully. + success: true + + + # start_time of the rollout + start_time: 0001-01-01T00:00:00Z + + # state of the entire rollout + # Possible values: unstarted, active, done, rolledback + state: active +``` diff --git a/docs/pages/includes/reference/resources/autoupdate_config.mdx b/docs/pages/includes/reference/resources/autoupdate_config.mdx new file mode 100644 index 0000000000000..08e82e708204e --- /dev/null +++ b/docs/pages/includes/reference/resources/autoupdate_config.mdx @@ -0,0 +1,72 @@ +```yaml +kind: autoupdate_config +metadata: + # autoupdate_config is a singleton resource. There can be only one instance + # of this resource in the Teleport cluster, and it must be named `autoupdate-config`. + name: autoupdate-config +spec: + agents: + # mode allows users to enable, disable, or suspend agent updates at the + # cluster level. Disable agent automatic updates only if self-managed + # updates are in place. This value may also be set in autoupdate_version. + # If set in both places, disabled overrides suspended, which overrides enabled.
+ # Possible values: "enabled", "disabled", "suspended" + # Default: "disabled" (unless specified in autoupdate_version) + mode: enabled + + # strategy used to roll out updates to agents. Applies to every group. + # The halt-on-error strategy ensures that groups earlier in the schedule are + # given the opportunity to update to the target_version before groups that are + # later in the schedule. (Currently, the schedule must be stopped manually by + # setting the mode to "suspended" or "disabled". In the future, errors will be + # detected automatically). + # The time-based strategy ensures that each group updates within a defined + # time window, with no dependence between groups. + # Possible values: "halt-on-error" or "time-based" + # Default: "halt-on-error" + strategy: halt-on-error + + # maintenance_window_duration configures the duration after the start_hour + # when updates may occur. Only valid for the time-based strategy. + # maintenance_window_duration: 1h + + # schedules define groups of agents with different update times. + # Currently, only the regular schedule is configurable. + schedules: + regular: + + # name of each group, configured locally via "teleport-update enable --group". + # Agents presenting a missing or empty group are placed in the last group in the list. + - name: staging + + # start_hour of the update, in UTC + start_hour: 4 + + # days that the update may occur on + # Days are not configurable for most Enterprise cloud-hosted users. + # Possible values: "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun", and "*" + # Default: [ "Mon", "Tue", "Wed", "Thu" ] + days: [ "Mon", "Tue", "Wed", "Thu" ] + + - name: production + start_hour: 5 + + # wait_hours ensures that the group executes at least a specific number of hours + # after the previous group. Only valid for the halt-on-error schedule. + # Default: 0 + wait_hours: 24 + + # canary_count specifies the number of agents to update and verify before the rest + # of the group.
Only valid for the halt-on-error schedule. + # Possible values: 0-5 + # Default: 0 + canary_count: 5 + + tools: + # mode allows users to enable or disable client tool updates at the + # cluster level. Disable client tool automatic updates only if self-managed + # updates are in place. + # Possible values: "enabled" or "disabled" + # Default: "disabled" + mode: enabled +``` diff --git a/docs/pages/includes/reference/resources/autoupdate_version.mdx b/docs/pages/includes/reference/resources/autoupdate_version.mdx new file mode 100644 index 0000000000000..995ab8109abd4 --- /dev/null +++ b/docs/pages/includes/reference/resources/autoupdate_version.mdx @@ -0,0 +1,40 @@ +```yaml +kind: autoupdate_version +metadata: + # autoupdate_version is a singleton resource. There can be only one instance + # of this resource in the Teleport cluster, and it must be named `autoupdate-version`. + name: autoupdate-version +spec: + agents: + # start_version is the version used to install new agents before their + # group's scheduled update time. Agents never update to the start_version + # automatically, but can be forced to update via "teleport-update update --now". + start_version: v17.2.0 + + # target_version is the version that agents update to during their group's + # scheduled update time. New agents also use this version after their group's + # scheduled update time. + target_version: v17.2.1 + + # schedule used to roll out updates. + # The regular schedule is defined in the autoupdate_config resource. + # The immediate schedule updates all agents to target_version immediately. + # Possible values: "regular" or "immediate" + # Default: "regular" + schedule: regular + + # mode allows users to enable, disable, or suspend agent updates at the + # cluster level. Disable agent automatic updates only if self-managed + # updates are in place. This value may also be set in autoupdate_config. + # If set in both places, disabled overrides suspended, which overrides enabled.
+ # Possible values: "enabled", "disabled", "suspended" + # Default: "disabled" (unless specified in autoupdate_config) + mode: enabled + + tools: + # target_version is the semver version of client tools the cluster will advertise. + # Client tools such as tsh and tctl will update to this version at login if the + # mode in autoupdate_config is set to enabled. + # Default: cluster version + target_version: v17.2.1 +``` diff --git a/docs/pages/includes/reference/resources/db.mdx b/docs/pages/includes/reference/resources/db.mdx new file mode 100644 index 0000000000000..33efa3ecddab9 --- /dev/null +++ b/docs/pages/includes/reference/resources/db.mdx @@ -0,0 +1,130 @@ +```yaml +kind: db +version: v3 +metadata: + # Database resource name. + name: example + + # Database resource description. + description: "Example database" + + # Database resource static labels. + labels: + env: example + +spec: + # Database protocol. Valid options are: + # "cassandra" + # "clickhouse" + # "clickhouse-http" + # "cockroachdb" + # "dynamodb" + # "elasticsearch" + # "mongodb" + # "mysql" + # "oracle" + # "postgres" + # "redis" + # "snowflake" + # "spanner" + # "sqlserver" + protocol: "postgres" + + # Database connection endpoint. + uri: "localhost:5432" + + # Optional TLS configuration. + tls: + # TLS verification mode. Valid options are: + # 'verify-full' - performs full certificate validation (default). + # 'verify-ca' - the same as `verify-full`, but skips the server name validation. + # 'insecure' - accepts any certificate provided by the database (not recommended). + mode: verify-full + # Optional database DNS server name. It allows you to override the DNS name on + # a client certificate when connecting to a database. + # Use only with 'verify-full' mode. + server_name: db.example.com + # Optional CA for validating the database certificate. + ca_cert: | + -----BEGIN CERTIFICATE----- + ...
+ -----END CERTIFICATE----- + # Optional configuration that allows Teleport to trust certificate + # authorities available on the host system. If not set (by default), + # Teleport only trusts databases whose TLS certificates are signed + # by Teleport's Database Server CA or by the ca_cert specified in this TLS + # setting. For cloud-hosted databases, Teleport downloads the corresponding + # required CAs for validation. + trust_system_cert_pool: false + + # Database admin user for automatic user provisioning. + admin_user: + # Database admin user name. + name: "teleport-admin" + + # MySQL only options. + mysql: + # The MySQL server version reported by the Teleport Proxy Service. + # Teleport uses this string instead of the default '8.0.0-Teleport' version + # when reporting the server version to a connecting client. + # + # When this option is not set, the Database Service will try to connect to + # a MySQL instance on startup and fetch the server version. Otherwise, + # it will use the provided value without connecting to a database. + # + # In both cases, the MySQL server version reported to a client will be + # updated on the first successful connection made by a user. + server_version: 8.0.28 + + # Oracle only options. + oracle: + # Randomize host order per connection attempt to spread load. Optional. + shuffle_hostnames: true + # Retries per host on network errors only; non-network errors stop (default: 2). Optional. + retry_count: 5 + + # Optional AWS configuration for RDS/Aurora/Redshift. Can be auto-detected from the endpoint.
+ aws: + # Region the database is deployed in. + region: "us-east-1" + # Optional AWS role that the Database Service will assume to access + # this database. + assume_role_arn: "arn:aws:iam::123456789012:role/example-role-name" + # Optional AWS external ID that the Database Service will use to assume + # a role in an external AWS account. + external_id: "example-external-id" + # Redshift specific configuration. + redshift: + # Redshift cluster identifier. + cluster_id: "redshift-cluster-1" + + # GCP configuration (required for Cloud SQL and Spanner databases). + gcp: + # GCP project ID. + project_id: "xxx-1234" + # Cloud SQL instance ID. + instance_id: "example" + + # Settings specific to Active Directory authentication e.g. for SQL Server. + ad: + # Path to Kerberos keytab file. + keytab_file: /path/to/keytab + # Active Directory domain name. + domain: EXAMPLE.COM + # Service Principal Name to obtain Kerberos tickets for. + spn: MSSQLSvc/ec2amaz-4kn05du.dbadir.teleportdemo.net:1433 + # Optional path to Kerberos configuration file. Defaults to /etc/krb5.conf. + krb5_file: /etc/krb5.conf + + # Optional dynamic labels. + dynamic_labels: + - name: "hostname" + command: ["hostname"] + period: 1m0s +``` \ No newline at end of file diff --git a/docs/pages/includes/role-spec.mdx b/docs/pages/includes/role-spec.mdx index 7dbaeb884305a..8417c10a6f455 100644 --- a/docs/pages/includes/role-spec.mdx +++ b/docs/pages/includes/role-spec.mdx @@ -1,6 +1,6 @@ ```yaml kind: role -version: v7 +version: v8 metadata: name: example description: This is an example role. @@ -49,7 +49,7 @@ spec: # clients and servers through the proxy permit_x11_forwarding: true # device_trust_mode enforces authenticated device access for assigned user of this role. 
- device_trust_mode: optional|required|off + device_trust_mode: optional|required|required-for-humans|off # require_session_mfa require per-session MFA for any assigned user of this role require_session_mfa: true # mfa_verification_interval optionally defines the maximum duration that can elapse between successive MFA verifications. @@ -103,6 +103,8 @@ spec: # Defaults to true if unspecified. # If one or more of the user's roles has disabled directory sharing, then it will be disabled. desktop_directory_sharing: true + # Specify whether this role supports auto-provisioning of local Windows users. + create_desktop_user: true # enterprise-only: when enabled, the source IP that was used to log in is embedded in the user # certificates, preventing a compromised certificate from being used on another # network. The default is false. @@ -171,6 +173,16 @@ spec: "ALL = (root) NOPASSWD: /usr/bin/systemctl restart nginx.service" ] + + # List of local Windows groups the created user will be added to. + # Any that don't already exist are created. Only applies when + # create_desktop_user is 'true'. + desktop_groups: + - developers + # to make the newly-created user an administrator + - Administrators + # IdP trait templating is also supported + - '{{external.desktop_groups}}' + + # kubernetes_groups specifies Kubernetes groups a user with this role will assume. # You can refer to a SAML/OIDC trait via the 'external' property bag. # This allows you to specify Kubernetes group membership in an identity manager: @@ -199,12 +211,15 @@ spec: kubernetes_resources: # The resource kind. Teleport currently supports: # - * (all resources) - # - namespaces (including all resources kinds in the namespace) # - (Resource plural name, e.g. pods, deployments, cronjobs, mycustomresources) - kind: '*' + # The Kubernetes API group of the resource kind, e.g. 'apps' for + # deployments. The wildcard '*' matches any API group.
+ api_group: '*' # The name of the Kubernetes namespace in which to allow access the # resources you specify with 'name' and 'kind'. - # The wildcard character '*' matches any sequence of characters. + # The wildcard character '*' matches any sequence of characters for namespaced + # resources. If set, global resources will not match. # If the value begins with '^' and ends with '$', the Kubernetes # Service will treat it as a regular expression. namespace: '*' @@ -227,21 +242,30 @@ spec: # - exec - allows users to execute commands in a pod # - portforward - allows users to forward ports from a pod verbs: ['*'] - # The resource name of the Kubernetes cluster in which to allow access - # to the resources you specify with 'name' and 'kind'. - api_group: '*' - # Functions transform variables. + # Database account names this role can connect as. + # + # Supports role templating with traits. db_users: ['{{email.local(external.email)}}'] + # Database names this role will be able to connect to. Database names are + # only enforced for PostgreSQL, MongoDB, and Cloud Spanner databases. + # + # Supports role templating with traits. db_names: ['{{external.db_names}}'] + # Label selectors for database instances this role has access to. + # + # Supports role templating with traits. db_labels: 'env': '{{regexp.replace(external.env, "^(staging)$", "$1")}}' - # List of database roles to grant. Mutually exclusive with 'db_permissions'. + # List of database roles to grant to the auto-provisioned user. Mutually + # exclusive with 'db_permissions'. + # + # Supports role templating with traits. db_roles: ['{{external.db_roles}}'] - # Grant all possible Postgres permissions for all tables. - # List of database permissions to grant. Mutually exclusive with 'db_roles'. + # List of database permissions to grant to the auto-provisioned user. + # Mutually exclusive with 'db_roles'.
db_permissions: - match: object_kind: table @@ -282,8 +306,11 @@ spec: # workload_identity_labels: a user/bot with this role will be allowed to # issue Workload Identities with labels matching below. + # + # Supports role templating with traits. workload_identity_labels: 'env': 'prod' + 'team': '{{external.team}}' # node_labels_expression has the same purpose as node_labels but # supports predicate expressions to configure custom logic. @@ -366,6 +393,9 @@ spec: # is required for all Access Requests requesting roles or resources allowed by # this role. It applies only to users who have this role assigned. mode: "optional" + # 'prompt' is a custom message shown to the user when they request the roles or resources + # allowed by this role. It applies only to requested roles and resources that specify a prompt. + prompt: I am a reason prompt specific to a requested role or resource # thresholds specifies minimum amount of approvers and deniers, # defaults to 1 for both (enterprise-only) @@ -465,6 +495,22 @@ spec: - orgs: - my-org + # mcp: defines MCP server-related permissions. + mcp: + # tools: list of tools allowed for this role. + # + # No tools are allowed if not specified. + # Each entry can be a literal string, a glob pattern, or a regular + # expression (must start with '^' and end with '$'). A wildcard '*' allows + # all tools. + # This field also supports variable interpolation. + tools: + - search-files + - slack_* + - ^(get|list|read).*$ + - "{{internal.mcp_tools}}" + - "{{external.mcp_tools}}" + # rules allow a user holding this role to modify other resources # matching the expressions below # supported resources: @@ -487,6 +533,8 @@ spec: # instance - a Teleport instance # event - structured audit logging event # + # workload_identity - config for Machine & Workload Identity SVIDs + # bot - config for Machine & Workload Identity bots # # lock - lock resource.
# network_restrictions - restrictions for SSH sessions diff --git a/docs/pages/includes/s3-iam-policy.mdx b/docs/pages/includes/s3-iam-policy.mdx index dc968e697eaea..f0abc3cb24e00 100644 --- a/docs/pages/includes/s3-iam-policy.mdx +++ b/docs/pages/includes/s3-iam-policy.mdx @@ -7,7 +7,7 @@ recording bucket depends on whether you expect to create the bucket yourself or enable the Auth Service to create and configure it for you: - + Note that Teleport will only use S3 buckets with versioning enabled. This ensures that a session log cannot be permanently altered or deleted, as @@ -53,7 +53,7 @@ You'll need to replace these values in the policy example below: ``` - + You'll need to replace these values in the policy example below: diff --git a/docs/pages/includes/self-hosted-helm-dns.mdx b/docs/pages/includes/self-hosted-helm-dns.mdx new file mode 100644 index 0000000000000..9aeae85c4f9a8 --- /dev/null +++ b/docs/pages/includes/self-hosted-helm-dns.mdx @@ -0,0 +1,59 @@ +The `teleport-cluster` Helm chart exposes the Proxy Service to traffic from the +internet using a Kubernetes service that sets up an external load balancer with +your cloud provider. + +Obtain the address of your load balancer by following the instructions below. + +1. Get information about the Proxy Service load balancer: + + ```code + $ kubectl get services/teleport-cluster + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + teleport-cluster LoadBalancer 10.4.4.73 192.0.2.0 443:31204/TCP 89s + ``` + + The `teleport-cluster` service directs traffic to the Teleport Proxy Service. + Notice the `EXTERNAL-IP` field, which shows you the IP address or domain name of + the cloud-hosted load balancer. For example, on AWS, you may see a domain name + resembling the following: + + ```text + 00000000000000000000000000000000-0000000000.us-east-2.elb.amazonaws.com + ``` + +1. 
Set up two DNS records: `teleport.example.com` for all traffic and + `*.teleport.example.com` for any web applications you will register with + Teleport. We are assuming that your domain name is `example.com` and + `teleport` is the subdomain you have assigned to your Teleport cluster. + + Depending on whether the `EXTERNAL-IP` column above points to an IP address or a + domain name, the records will have the following details: + + + + + |Record Type|Domain Name|Value| + |---|---|---| + |A|`teleport.example.com`|The IP address of your load balancer| + |A|`*.teleport.example.com`|The IP address of your load balancer| + + + + + |Record Type|Domain Name|Value| + |---|---|---| + |CNAME|`teleport.example.com`|The domain name of your load balancer| + |CNAME|`*.teleport.example.com`|The domain name of your load balancer| + + + + +1. Once you create the records, use the following command to confirm that your + Teleport cluster is running, assigning to the + name of your cluster: + + ```code + $ curl https:///webapi/ping + {"auth":{"type":"local","second_factor":"on","preferred_local_mfa":"webauthn","allow_passwordless":true,"allow_headless":true,"local":{"name":""},"webauthn":{"rp_id":"teleport.example.com"},"private_key_policy":"none","device_trust":{},"has_motd":false},"proxy":{"kube":{"enabled":true,"listen_addr":"0.0.0.0:3026"},"ssh":{"listen_addr":"[::]:3023","tunnel_listen_addr":"0.0.0.0:3024","web_listen_addr":"0.0.0.0:3080","public_addr":"teleport.example.com:443"},"db":{"mysql_listen_addr":"0.0.0.0:3036"},"tls_routing_enabled":false},"server_version":"(=teleport.version=)","min_client_version":"12.0.0","cluster_name":"teleport.example.com","automatic_upgrades":false} + ``` + diff --git a/docs/pages/includes/self-hosted-prereqs-tabs.mdx b/docs/pages/includes/self-hosted-prereqs-tabs.mdx deleted file mode 100644 index c84cd7e33d49a..0000000000000 --- a/docs/pages/includes/self-hosted-prereqs-tabs.mdx +++ /dev/null @@ -1,9 +0,0 @@ -- A running self-hosted Teleport 
cluster. If you want to get started with - self-hosted Teleport Enterprise, [contact - Sales](https://goteleport.com/contact-us/). You can also [set up a demo - environment](../admin-guides/deploy-a-cluster/linux-demo.mdx) with Teleport Community Edition. - -- The `tctl` admin tool and `tsh` client tool version >= (=teleport.version=). - - Visit [Installation](../installation.mdx) for instructions on downloading - `tctl` and `tsh`. diff --git a/docs/pages/includes/server-access/custom-installer-aws.yaml b/docs/pages/includes/server-access/custom-installer-aws.yaml new file mode 100644 index 0000000000000..ad37619180ac4 --- /dev/null +++ b/docs/pages/includes/server-access/custom-installer-aws.yaml @@ -0,0 +1,19 @@ +discovery_service: + # ... + aws: + - types: ["ec2"] + tags: + - "env": "prod" + regions: ["us-west1", "us-east1"] + install: + script_name: "default-installer" + ssm: + document_name: "AWS-RunShellScript" + - types: ["ec2"] + tags: + - "env": "devel" + regions: ["us-west1", "us-east1"] + install: + script_name: "devel-installer" + ssm: + document_name: "AWS-RunShellScript" \ No newline at end of file diff --git a/docs/pages/includes/server-access/custom-installer-azure.yaml b/docs/pages/includes/server-access/custom-installer-azure.yaml new file mode 100644 index 0000000000000..dd9def0a07b81 --- /dev/null +++ b/docs/pages/includes/server-access/custom-installer-azure.yaml @@ -0,0 +1,13 @@ +discovery_service: + # ... + azure: + - types: ["vm"] + tags: + - "env": "prod" + install: # optional section when default-installer is used. 
+ script_name: "default-installer" + - types: ["vm"] + tags: + - "env": "devel" + install: + script_name: "devel-installer" \ No newline at end of file diff --git a/docs/pages/includes/server-access/custom-installer-gcp.yaml b/docs/pages/includes/server-access/custom-installer-gcp.yaml new file mode 100644 index 0000000000000..e6558b14d7c52 --- /dev/null +++ b/docs/pages/includes/server-access/custom-installer-gcp.yaml @@ -0,0 +1,13 @@ +discovery_service: + # ... + gcp: + - types: ["gce"] + tags: + - "env": "prod" + install: # optional section when default-installer is used. + script_name: "default-installer" + - types: ["gce"] + tags: + - "env": "devel" + install: + script_name: "devel-installer" \ No newline at end of file diff --git a/docs/pages/includes/server-access/custom-installer.mdx b/docs/pages/includes/server-access/custom-installer.mdx index 2bc0983c022f9..b842c0c2c479e 100644 --- a/docs/pages/includes/server-access/custom-installer.mdx +++ b/docs/pages/includes/server-access/custom-installer.mdx @@ -1,4 +1,4 @@ -{{ cloud="foo" matcher="bar" matchTypes="baz" }} +{{ matcher="bar" }} To customize an installer, your user must have a role that allows `list`, `create`, `read` and `update` verbs on the `installer` resource. Create a file called `installer-manager.yaml` with the following content: @@ -40,19 +40,7 @@ Multiple `installer` resources can exist and be specified in the `teleport.yaml`: ```yaml -discovery_service: - # ... - {{ matcher }}: - - types: {{ matchTypes }} - tags: - - "env": "prod" - install: # optional section when default-installer is used. - script_name: "default-installer" - - types: {{ matchTypes }} - tags: - - "env": "devel" - install: - script_name: "devel-installer" +(!docs/pages/includes/server-access/custom-installer-{{ matcher }}.yaml!) ``` --- @@ -64,10 +52,10 @@ The `installer` resource has the following templating options: - `{{ .PublicProxyAddr }}`: the public address of the Teleport Proxy Service to connect to. 
- `{{ .RepoChannel }}`: Optional package repository (apt/yum) channel name. -Has format `/` e.g. stable/v(=teleport.major_version=). See [installation](../../installation.mdx#linux) for more details. +Has format `/` e.g. stable/v(=teleport.major_version=). See [installation](../../installation/linux.mdx) for more details. - `{{ .AutomaticUpgrades }}`: indicates whether Automatic Updates are enabled or disabled. Its value is either `true` or `false`. See - [Automatic Agent Updates](../../upgrading/agent-managed-updates.mdx) + [Automatic Agent Updates](../../upgrading/agent-managed-updates/agent-managed-updates.mdx) for more information. - `{{ .TeleportPackage }}`: the Teleport package to use. Its value is either `teleport-ent` or `teleport` depending on whether the diff --git a/docs/pages/includes/sso/cap.mdx b/docs/pages/includes/sso/cap.mdx new file mode 100644 index 0000000000000..e6c007a785063 --- /dev/null +++ b/docs/pages/includes/sso/cap.mdx @@ -0,0 +1,59 @@ +Edit your cluster authentication preferences so you can make the authentication +connector you configured in this guide the default authentication method for +your Teleport cluster. + +Open the Teleport Web UI. From the left sidebar, navigate to **Zero Trust +Access** > **Auth Connectors**. Find the connector you would like to make the +default and, from its three-dot menu, select **Set as default**. + +If you are managing your Teleport resources as configuration files, you can +choose a default authentication connector using a dynamic resource. In this +case, use `tctl` to edit the `cluster_auth_preference` value: + +```code +$ tctl edit cluster_auth_preference +``` + +Set the value of `spec.type` to `{{ type }}`: + +```yaml +kind: cluster_auth_preference +metadata: + ... + name: cluster-auth-preference +spec: + ... + type: {{ type }} + ... +version: v2 +``` + +After you save and exit the editor, `tctl` will update the resource: + +```text +cluster auth preference has been updated +``` + +
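If it helps to see the target state, here is a fuller sketch of the resource after the change. Only `spec.type` matters for this guide; the `second_factor` and `webauthn` fields are illustrative additions, so check the Cluster Auth Preferences resource reference before copying them:

```yaml
kind: cluster_auth_preference
version: v2
metadata:
  name: cluster-auth-preference
spec:
  # The default authentication connector type set in this guide.
  type: {{ type }}
  # Illustrative companion settings (not required for switching connectors).
  second_factor: "on"
  webauthn:
    rp_id: teleport.example.com
```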
+Additional ways to edit cluster auth preferences + +The cluster auth preference is available as a Teleport Terraform provider +resource. Find a list of configuration options in the [Cluster Auth Preferences +Resource +Reference](../../reference/infrastructure-as-code/teleport-resources/cluster-auth-preferences.mdx). + +If you self-host Teleport, you can edit your Teleport Auth Service configuration +file to include the following: + +```yaml +# Snippet from /etc/teleport.yaml +auth_service: + authentication: + type: {{ type }} +``` + +
+ +If you need to log in again before configuring your identity provider, use the flag `--auth=local`. + diff --git a/docs/pages/includes/sso/change-callback.mdx b/docs/pages/includes/sso/change-callback.mdx new file mode 100644 index 0000000000000..0e14b70a77e64 --- /dev/null +++ b/docs/pages/includes/sso/change-callback.mdx @@ -0,0 +1,29 @@ +The callback address can be changed if calling back to a remote machine +instead of the local machine is required: + +```code +# --bind-addr sets the host and port tsh will listen on, and --callback changes +# what link is displayed to the user +$ tsh login --proxy=proxy.example.com --auth=github --bind-addr=localhost:1234 --callback https://remote.machine:1234 +``` + +For this to work, the hostname or CIDR of the remote machine used for +the callback must be allowed via your auth connector's `client_redirect_settings`: + +```yaml +kind: oidc +metadata: + name: example-connector +spec: + client_redirect_settings: + # a list of hostnames allowed for HTTPS client redirect URLs + # can be a regex pattern + allowed_https_hostnames: + - remote.machine + - '*.app.github.dev' + - '^\d+-[a-zA-Z0-9]+\.foo.internal$' + # a list of CIDRs allowed for HTTP or HTTPS client redirect URLs + insecure_allowed_cidr_ranges: + - '192.168.1.0/24' + - '2001:db8::/96' +``` diff --git a/docs/pages/includes/sso/how-it-works.mdx b/docs/pages/includes/sso/how-it-works.mdx new file mode 100644 index 0000000000000..be69066ac0fe0 --- /dev/null +++ b/docs/pages/includes/sso/how-it-works.mdx @@ -0,0 +1,18 @@ +{{ idp="your SSO provider" }} + +You can register your Teleport cluster as an application with {{ idp }}, then +create an **authentication connector** resource that provides Teleport with +information about your application. When a user signs in to Teleport, {{ idp }} +executes its own authentication flow, then sends an HTTP request to your +Teleport cluster to indicate that authentication has completed.
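To make the flow above concrete, here is a rough sketch of what an authentication connector resource can look like for a SAML-based IdP. All names and attribute values are placeholders, and the exact fields depend on the connector type you configure:

```yaml
kind: saml
version: v2
metadata:
  name: example-connector
spec:
  display: Example IdP
  # The address where the IdP sends its HTTP response after authentication.
  acs: https://teleport.example.com/v1/webapi/saml/acs/example-connector
  # Role mapping: associates IdP attribute values with Teleport role names.
  attributes_to_roles:
    - name: groups
      value: engineers
      roles: [access]
```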
+ +Teleport authenticates users to your infrastructure by issuing short-lived +certificates. After a user completes an SSO authentication flow, Teleport issues +short-lived TLS and SSH certificates to the user. Teleport also creates a +temporary user on the Auth Service backend. + +Teleport roles are encoded in the user's certificates. To assign Teleport roles +to the user, the Auth Service inspects the **role mapping** within the +authentication connector, which associates user data on {{ idp }} with the names +of one or more Teleport roles. + diff --git a/docs/pages/includes/sso/next-step-traits.mdx b/docs/pages/includes/sso/next-step-traits.mdx new file mode 100644 index 0000000000000..ec99f1cc37898 --- /dev/null +++ b/docs/pages/includes/sso/next-step-traits.mdx @@ -0,0 +1,43 @@ +Now that you have connected Teleport to your identity provider, you can +customize the way Teleport includes IdP data in Teleport roles. + +With **role templates**, you can include user data from your IdP directly in +Teleport roles. If you use the `external` template variable in the value of a +role field, Teleport passes that value from your IdP. 
In this example, all of +the role options you can use for allowing users to assume certain principals on +remote systems come from your IdP: + +```yaml +kind: role +version: v7 +metadata: + name: sso-users +spec: + allow: + logins: ['{{external.logins}}'] + aws_role_arns: ['{{external.aws_role_arns}}'] + azure_identities: ['{{external.azure_identities}}'] + db_names: ['{{external.db_names}}'] + db_roles: ['{{external.db_roles}}'] + db_users: ['{{external.db_users}}'] + desktop_groups: ['{{external.desktop_groups}}'] + gcp_service_accounts: ['{{external.gcp_service_accounts}}'] + host_groups: ['{{external.host_groups}}'] + host_sudoers: ['{{external.host_sudoers}}'] + kubernetes_groups: ['{{external.kubernetes_groups}}'] + kubernetes_users: ['{{external.kubernetes_users}}'] + windows_desktop_logins: ['{{external.windows_desktop_logins}}'] +``` + +For more information on using the `external` template variable, see [Role +Templates](../../zero-trust-access/rbac-get-started/role-templates.mdx). + +For an explanation of the fields listed above, see the [Role +Reference](../../reference/access-controls/roles.mdx). + +If you need to transform your IdP user data before you include it in Teleport +roles, you can do so using **Login Rules**. Login Rules allow you to include +external traits within Teleport roles even if your IdP provides user data in a +different format than the one expected by Teleport. Read more about [Login +Rules](../../zero-trust-access/sso/login-rules/login-rules.mdx). + diff --git a/docs/pages/includes/sso/role-mapping.mdx b/docs/pages/includes/sso/role-mapping.mdx new file mode 100644 index 0000000000000..cca00c4d5e766 --- /dev/null +++ b/docs/pages/includes/sso/role-mapping.mdx @@ -0,0 +1,28 @@ +When a user authenticates to Teleport, the Teleport Auth Service issues SSH and +TLS certificates to the user that contain the user's Teleport roles. 
+ +For SSO authentication connectors, the Auth Service determines which roles to +encode in the certificate by reading the authentication connector's **role +mapping**. The role mapping indicates which Teleport roles to assign based on +data that your identity provider stores about the user. + +When you configure an authentication connector using the `tctl` CLI, a role +mapping takes the following format: + +```text +<{{fieldType}}_name>,<{{fieldType}}_value>,<role1>,<role2>,...,<roleN> +``` + +For example, the following role mapping means that any user with the {{fieldType}} +`group` with the value `admin` receives the Teleport roles `editor` and +`auditor`: + +```text +group,admin,editor,auditor +``` + +For the purpose of this guide, assign two separate role mappings: + +- A more permissive role mapping: +- A more restrictive role mapping: + diff --git a/docs/pages/includes/sso/username-claim.mdx b/docs/pages/includes/sso/username-claim.mdx new file mode 100644 index 0000000000000..fb2d292c10a8f --- /dev/null +++ b/docs/pages/includes/sso/username-claim.mdx @@ -0,0 +1,24 @@ +By default, Teleport expects an `email` claim to be present in the OIDC claims. +The value from the `email` claim is used as a username in Teleport. If you wish +to use another claim as the username, you can override the +default `email` claim with the `username_claim` field in the auth +connector spec. + +The example below sets the `username_claim` field to `preferred_username`, +configuring Teleport to query the `preferred_username` claim instead +of the `email` claim: + +```yaml +kind: oidc +metadata: + # ... +spec: + # ... + username_claim: preferred_username +``` + + + Ensure that the claim value you use to configure `username_claim` is a uniquely + identifiable value for the user.
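Expressed as connector configuration rather than `tctl` CLI arguments, a mapping like `group,admin,editor,auditor` corresponds to a `claims_to_roles` entry in an OIDC connector (or `attributes_to_roles` in a SAML connector). A sketch that combines it with the `username_claim` override described above; field values are illustrative:

```yaml
kind: oidc
version: v3
metadata:
  name: example-connector
spec:
  # ...
  username_claim: preferred_username
  # Users whose 'group' claim has the value 'admin' receive the
  # 'editor' and 'auditor' Teleport roles.
  claims_to_roles:
    - claim: group
      value: admin
      roles: [editor, auditor]
```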
+ + diff --git a/docs/pages/includes/start-event-handler.mdx b/docs/pages/includes/start-event-handler.mdx index 96594c5e6f64e..06cd03e104cb4 100644 --- a/docs/pages/includes/start-event-handler.mdx +++ b/docs/pages/includes/start-event-handler.mdx @@ -28,8 +28,9 @@ PIDFile=/run/teleport-event-handler.pid WantedBy=multi-user.target ``` -If you are not using Machine ID to provide short-lived credentials to the Event -Handler, you can remove the `--teleport-refresh-enabled true` flag. +If you are not using Machine & Workload Identity to provide short-lived +credentials to the Event Handler, you can remove the +`--teleport-refresh-enabled true` flag. Enable and start the plugin: diff --git a/docs/pages/includes/tctl-get-resources.mdx b/docs/pages/includes/tctl-get-resources.mdx new file mode 100644 index 0000000000000..9c14cdff1510c --- /dev/null +++ b/docs/pages/includes/tctl-get-resources.mdx @@ -0,0 +1,77 @@ +- `access_graph_settings` +- `access_list` +- `access_monitoring_rule` +- `access_request` +- `app` +- `app_server` +- `audit_query` +- `auth_server` +- `autoupdate_agent_report` +- `autoupdate_agent_rollout` +- `autoupdate_bot_instance_report` +- `autoupdate_config` +- `autoupdate_version` +- `bot` +- `bot_instance` +- `cert_authority` +- `cluster_auth_preference` +- `cluster_maintenance_config` +- `cluster_networking_config` +- `connectors` +- `crown_jewel` +- `db` +- `db_object` +- `db_object_import_rule` +- `db_server` +- `db_service` +- `device` +- `discovery_config` +- `dynamic_windows_desktop` +- `external_audit_storage` +- `git_server` +- `github` +- `health_check_config` +- `inference_model` +- `inference_policy` +- `inference_secret` +- `installer` +- `integration` +- `kube_cluster` +- `kube_server` +- `lock` +- `login_rule` +- `namespace` +- `network_restrictions` +- `node` +- `oidc` +- `okta_assignment` +- `okta_import_rule` +- `plugin` +- `proxy` +- `relay_server` +- `remote_cluster` +- `role` +- `saml` +- `saml_idp_service_provider` +- 
`scoped_role` +- `scoped_role_assignment` +- `security_report` +- `semaphore` +- `server_info` +- `session_recording_config` +- `sigstore_policy` +- `spiffe_federation` +- `static_host_user` +- `token` +- `trusted_cluster` +- `tunnel` +- `ui_config` +- `user` +- `user_group` +- `user_task` +- `vnet_config` +- `windows_desktop` +- `windows_desktop_service` +- `workload_identity` +- `workload_identity_x509_issuer_override` +- `workload_identity_x509_revocation` diff --git a/docs/pages/includes/tctl-tsh-prerequisite.mdx b/docs/pages/includes/tctl-tsh-prerequisite.mdx deleted file mode 100644 index 64e278a57ed24..0000000000000 --- a/docs/pages/includes/tctl-tsh-prerequisite.mdx +++ /dev/null @@ -1,2 +0,0 @@ -The `tctl` and `tsh` client tools version >= (=teleport.version=). Read -[Installation](../installation.mdx) for how to install these. diff --git a/docs/pages/includes/tls-certificate-setup.mdx b/docs/pages/includes/tls-certificate-setup.mdx deleted file mode 100644 index 9d823347f1611..0000000000000 --- a/docs/pages/includes/tls-certificate-setup.mdx +++ /dev/null @@ -1,48 +0,0 @@ -If you are running Teleport on the internet, we recommend using Let's Encrypt to -receive your key and certificate automatically. For private networks or custom -deployments, use your own private key and certificate. - - - - Let's Encrypt verifies that you control the domain name of your Teleport cluster - by communicating with the HTTPS server listening on port 443 of your Teleport - Proxy Service. - - You can configure the Teleport Proxy Service to complete the Let's Encrypt - verification process when it starts up. - - On the host where you will start the Teleport Auth Service and Proxy Service, - run the following `teleport configure` command. 
Assign - to the - domain name of your Teleport cluster and to - an email address used for notifications (you can use any domain): - - ```code - $ sudo teleport configure -o file \ - --acme --acme-email= \ - --cluster-name= - ``` - - Port 443 on your Teleport Proxy Service host must allow traffic from all sources. - - - - On your Teleport host, place a valid private key and a certificate chain in `/var/lib/teleport/privkey.pem` - and `/var/lib/teleport/fullchain.pem` respectively. - - The leaf certificate must have a subject that corresponds to the domain of your Teleport host, e.g., `*.teleport.example.com`. - - On the host where you will start the Teleport Auth Service and Proxy Service, - run the following `teleport configure` command. Assign to the domain name of your Teleport cluster. - - ```code - $ sudo teleport configure -o file \ - --cluster-name= \ - --public-addr=:443 \ - --cert-file=/var/lib/teleport/fullchain.pem \ - --key-file=/var/lib/teleport/privkey.pem - ``` - - - diff --git a/docs/pages/includes/token-types.mdx b/docs/pages/includes/token-types.mdx index fee31c63ebb8a..22302480ab43b 100644 --- a/docs/pages/includes/token-types.mdx +++ b/docs/pages/includes/token-types.mdx @@ -1,12 +1,12 @@ -| Role | Teleport Service | -| ---------------- | ----------------------- | -| `app` | Application Service | -| `auth` | Auth Service | -| `bot` | Machine ID | -| `db` | Database Service | -| `discovery` | Discovery Service | -| `kube` | Kubernetes Service | -| `node` | SSH Service | -| `proxy` | Proxy Service | -| `windowsdesktop` | Windows Desktop Service | +| Role | Teleport Service | +| ---------------- | ------------------------------- | +| `app` | Application Service | +| `auth` | Auth Service | +| `bot` | Machine & Workload Identity Bot | +| `db` | Database Service | +| `discovery` | Discovery Service | +| `kube` | Kubernetes Service | +| `node` | SSH Service | +| `proxy` | Proxy Service | +| `windowsdesktop` | Windows Desktop Service | diff --git 
a/docs/pages/includes/tpm-joining-background.mdx b/docs/pages/includes/tpm-joining-background.mdx index 24d537245cdea..d9137b1bdab9a 100644 --- a/docs/pages/includes/tpm-joining-background.mdx +++ b/docs/pages/includes/tpm-joining-background.mdx @@ -28,5 +28,5 @@ such as Ansible to query the TPMs across your fleet and then generate join tokens. -The `tpm` join method is currently not compatible with FIPS 140-2. +The `tpm` join method is currently not compatible with FIPS. diff --git a/docs/pages/includes/uninstall-teleport-binaries.mdx b/docs/pages/includes/uninstall-teleport-binaries.mdx new file mode 100644 index 0000000000000..8d3775c71dc80 --- /dev/null +++ b/docs/pages/includes/uninstall-teleport-binaries.mdx @@ -0,0 +1,8 @@ + ```code + $ sudo rm -f /usr/local/bin/tbot + $ sudo rm -f /usr/local/bin/tctl + $ sudo rm -f /usr/local/bin/teleport + $ sudo rm -f /usr/local/bin/tsh + $ sudo rm -f /usr/local/bin/fdpass-teleport + $ sudo rm -f /usr/local/bin/teleport-update + ``` \ No newline at end of file diff --git a/docs/pages/index.mdx b/docs/pages/index.mdx index a6b3c8acc1d57..e39911710a80c 100644 --- a/docs/pages/index.mdx +++ b/docs/pages/index.mdx @@ -1,99 +1,430 @@ --- -h1: Introduction to Teleport -title: "Introduction to the Teleport Infrastructure Identity Platform" +title: "The Teleport Infrastructure Identity Platform" description: Read an overview of the Teleport Access Platform. Learn how to implement Zero Trust Security across all your infrastructure for enhanced protection and streamlined access control. -tocDepth: 3 +template: "landing-page" --- -Teleport is the easiest, most secure way to access and protect all your infrastructure. 
+import DocsHeader, { DocsHeaderProps } from "@site/src/components/Pages/Homepage/DocsHeader"; +import Products, { ProductProps } from "@site/src/components/Pages/Homepage/Products/Products"; +import VersionHighlights from "@site/src/components/Pages/Homepage/VersionHighlights"; +import Resources, { ResourcesProps } from "@site/src/components/Pages/Homepage/Resources"; +import Integrations, { IntegrationsProps } from "@site/src/components/Pages/Homepage/Integrations"; +import GetStarted from "@site/src/components/Pages/Homepage/GetStarted"; -The Teleport Infrastructure Identity Platform implements trusted computing at -scale, with unified cryptographic identities for humans, machines and workloads, -endpoints, infrastructure assets, and AI agents. +import applicationSvg from "@site/src/components/Icon/teleport-svg/application.svg"; +import linuxServersSvg from "@site/src/components/Icon/teleport-svg/linux-servers.svg"; +import databaseSvg from "@site/src/components/Icon/teleport-svg/database-access.svg"; +import kubernetesClustersSvg from "@site/src/components/Icon/teleport-svg/kubernetes-clusters.svg"; +import windowsDesktopsSvg from "@site/src/components/Icon/teleport-svg/windows-desktops.svg"; +import autoDiscoverySvg from "@site/src/components/Icon/teleport-svg/auto-discovery.svg"; +import cloudProvidersSvg from "@site/src/components/Icon/teleport-svg/cloud-providers.svg"; +import mcpAndAiSvg from "@site/src/components/Icon/teleport-svg/mcp-and-ai.svg"; +import awsSvg from "@site/src/components/Icon/teleport-svg/aws.svg"; +import gcpSvg from "@site/src/components/Icon/teleport-svg/gcp.svg"; +import azureSvg from "@site/src/components/Icon/teleport-svg/azure.svg"; +import oktaSvg from "@site/src/components/Icon/teleport-svg/okta.svg"; +import slackSvg from "@site/src/components/Icon/teleport-svg/slack.svg"; +import pagerDutySvg from "@site/src/components/Icon/teleport-svg/pagerduty.svg"; +import jamfProSvg from 
"@site/src/components/Icon/teleport-svg/jamf-pro.svg"; +import jenkinsSvg from "@site/src/components/Icon/teleport-svg/jenkins.svg"; +import snowflakeSvg from "@site/src/components/Icon/teleport-svg/snowflake.svg"; +import arrowRightSvg from "@site/src/components/Icon/teleport-svg/arrow-right.svg"; +import zeroTrustSvg from "@site/src/components/Icon/teleport-svg/zero-trust.svg"; +import mwiSvg from "@site/src/components/Icon/teleport-svg/mwi.svg"; +import identityGovernanceSvg from "@site/src/components/Icon/teleport-svg/identity-governance.svg"; +import identitySecuritySvg from "@site/src/components/Icon/teleport-svg/identity-security.svg"; +import bookOpenSvg from "@site/src/components/Icon/teleport-svg/book-open.svg"; +import flowArrowSvg from "@site/src/components/Icon/teleport-svg/flow-arrow.svg"; +import identificationBadgeSvg from "@site/src/components/Icon/teleport-svg/identification-badge.svg"; +import listBulletsSvg from "@site/src/components/Icon/teleport-svg/list-bullets.svg"; -## Products +{/* lint disable page-structure remark-lint */} -The Teleport Infrastructure Identity Platform consists of four products. + -- [Teleport Zero Trust Access](#teleport-zero-trust-access) -- [Teleport Machine & Workload Identity](#teleport-machine--workload-identity) -- [Teleport Identity Governance](#teleport-identity-governance) -- [Teleport Identity Security](#teleport-identity-security) + -### Teleport Zero Trust Access +
- -Outputs must be configured with a destination. In this example, the `directory` -destination will be used. This will write artifacts to a specified directory on -disk. Ensure that this directory can be written to by the Linux user that -`tbot` runs as, and that it can be read by the Linux user that will be accessing -applications. + +Output services must be configured with a destination. In this example, the +`directory` destination will be used. This will write artifacts to a specified +directory on disk. Ensure that this directory can be written to by the Linux +user that `tbot` runs as, and that it can be read by the Linux user that will be +accessing applications. -Modify your `tbot` configuration to add an `application` output: +Modify your `tbot` configuration to add an `application` service: ```yaml -outputs: +services: - type: application # specify the name of the application you wish the credentials to grant # access to. @@ -122,8 +134,8 @@ outputs: destination: type: directory # For this guide, /opt/machine-id is used as the destination directory. - # You may wish to customize this. Multiple outputs cannot share the same - # destination. + # You may wish to customize this. Multiple output services cannot share the + # same destination. path: /opt/machine-id ``` @@ -135,7 +147,7 @@ one-shot mode, it must be executed before you attempt to use the credentials.
-## Step 3/3. Connect to your web application with the Machine ID identity +## Step 3/3. Connect to your web application @@ -148,7 +160,7 @@ For example, to access the application using `curl`: $ curl http://127.0.0.1:1234/ ``` - + Once `tbot` has been run, credentials will be output to the directory specified in the destination. Using the example of `/opt/machine-id`: @@ -185,10 +197,11 @@ will be redirected to the Teleport login page when attempting to access the app. ### Client application requires certificates with standard extensions If your automated service requires TLS certificates with a specific file -extension, you may also enable the `specific_tls_naming` option for the output: +extension, you may also enable the `specific_tls_naming` option for the output +service: ```yaml -outputs: +services: - type: application destination: type: directory @@ -211,10 +224,10 @@ has been configured to use both the client certificate and key. ## Next steps -- Review the [Access Controls Reference](../../../reference/access-controls/roles.mdx) +- Review the [Access Controls Reference](../../reference/access-controls/roles.mdx) to learn about restricting which Applications and other Teleport resources your bot may access. -- Configure [JWTs](../../application-access/jwt/introduction.mdx) for your +- Configure [JWTs](../../enroll-resources/application-access/jwt/introduction.mdx) for your Application to remove the need for additional login credentials. -- Read the [configuration reference](../../../reference/machine-id/configuration.mdx) to explore +- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore all the available configuration options. 
+ diff --git a/docs/pages/machine-workload-identity/access-guides/argocd.mdx b/docs/pages/machine-workload-identity/access-guides/argocd.mdx new file mode 100644 index 0000000000000..6c8771e2d5011 --- /dev/null +++ b/docs/pages/machine-workload-identity/access-guides/argocd.mdx @@ -0,0 +1,342 @@ +--- +title: Machine & Workload Identity with Argo CD +description: How to use Machine & Workload Identity to enable Argo CD to connect to external Kubernetes clusters +sidebar_label: Argo CD +tags: + - ci-cd + - infrastructure-as-code + - how-to + - mwi + - infrastructure-identity +--- + +Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes. It +runs in Kubernetes and can deploy applications to the same cluster or to other +"external" clusters. + +In this guide, you will configure the Machine & Workload Identity agent, `tbot`, +to manage Argo CD's cluster credentials, enabling it to securely deploy +applications using Teleport. + +## How it works + +Argo CD supports declaratively managing clusters using Kubernetes secrets (see +the [Argo CD +documentation](https://argo-cd.readthedocs.io/en/latest/operator-manual/declarative-setup/#clusters)). +The Argo CD application controller reads these secrets in order to retrieve +cluster credentials. When configured to use the Argo CD output service, `tbot` +writes Teleport-signed Kubernetes cluster credentials into secrets that Argo CD +can then use to access Teleport-protected Kubernetes clusters. + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx!) + +- Argo CD [installed](https://argo-cd.readthedocs.io/en/latest/#quick-start) + in a Kubernetes cluster. This cluster will be referred to as the **source + cluster** throughout this guide; it does not need to be enrolled into Teleport. +- The Kubernetes cluster to which you'd like Argo CD to deploy applications.
+ This cluster will be referred to as the **target cluster** throughout this + guide; it must be enrolled into Teleport. If you have not already done this, + follow the [Enroll a Kubernetes Cluster](../../enroll-resources/kubernetes-access/getting-started.mdx) + guide. +- (!docs/pages/includes/tctl.mdx!) +- To configure Kubernetes and deploy `tbot`, you will need + [`kubectl`](https://kubernetes.io/docs/tasks/tools/) and + [`helm`](https://helm.sh/docs/intro/install/) installed. +- To check that clusters have been registered with Argo CD, you will need the + [`argocd` CLI](https://argo-cd.readthedocs.io/en/stable/cli_installation/) + installed. + +## Step 1/4. Configure Teleport and Kubernetes RBAC + +First, we need to configure the RBAC for both Teleport and Kubernetes in order +to grant the Machine & Workload Identity agent, `tbot`, the correct level of +access. + +When forwarding requests to the Kubernetes API on behalf of a bot, the Teleport +Proxy attaches the groups configured (using `kubernetes_groups`) in the bot's +Teleport roles to the request. These groups are then used to configure a +RoleBinding or ClusterRoleBinding in Kubernetes to grant specific permissions +within the Kubernetes cluster to the bot. + +For the purpose of this guide, we will bind the `editor` group to the default +`edit` ClusterRole that is preconfigured in most Kubernetes clusters to give the +bot read and write access to resources in all the cluster namespaces. + +When configuring this in a production environment, you should consider: + +- Whether a RoleBinding should be used instead of a ClusterRoleBinding to limit + the bot's access to a specific namespace. +- Whether a Role should be created that grants the bot the least privileges + necessary rather than using a pre-existing general Role such as `edit`.
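As a sketch of the first consideration above (the namespace and binding name are hypothetical), a namespaced RoleBinding that scopes the bot's `editor` group to a single namespace might look like:

```yaml
# Hypothetical: grants the `editor` group the `edit` ClusterRole
# only within the `my-app` namespace, instead of cluster-wide.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: teleport-editor-edit
  namespace: my-app
subjects:
  - kind: Group
    name: editor
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```

Referencing a ClusterRole from a RoleBinding is valid Kubernetes RBAC; the ClusterRole's rules then apply only within the binding's namespace.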
+ +To bind the `editor` group to the `edit` ClusterRole, run the following command +against both the **source and target clusters**: + +```code +$ kubectl create clusterrolebinding teleport-editor-edit \ + --clusterrole=edit \ + --group=editor +``` + +With the appropriate RoleBinding configured in Kubernetes to grant access to a +specific group, you now need to add this group to the role that the bot will +impersonate when producing credentials. You also need to grant the bot access +through Teleport to the cluster itself. This is done by creating a role that +grants the necessary permissions and then assigning this role to the bot. + +Create a file called `role.yaml` with the following content: + +```yaml +kind: role +version: v7 +metadata: + name: example-role +spec: + allow: + kubernetes_labels: + '*': '*' + kubernetes_groups: + - editor + kubernetes_resources: + - kind: "*" + namespace: "*" + name: "*" + verbs: ["*"] +``` + +Replace `example-role` with a descriptive name related to your use case. + +Adjust the `allow` field for your environment: + +- `kubernetes_labels` should be adjusted to grant access to only the clusters + that the bot will need to access. The value shown, `'*': '*'`, will grant + access to all Kubernetes clusters. +- `editor` must match the name of the group you specified in the RoleBinding or + ClusterRoleBinding. +- `kubernetes_resources` can be used to apply additional restrictions to what + the bot can access within the Kubernetes cluster. These restrictions are + layered upon the RBAC configured within the Kubernetes role itself.
+ +Create a file called `bot.yaml` with the following content: + +```yaml +kind: bot +version: v1 +metadata: + # name uniquely identifies the bot within Teleport + name: example-bot +spec: + # roles that will be granted to the bot. + roles: [example-role] +``` + +Make sure you replace `example-bot` with a unique, descriptive name for your Bot, +and replace `example-role` with the name of the role you created in the previous +step. + +Use `tctl create -f ./bot.yaml` to create the bot. + +## Step 3/4. Create a join token + +In order for `tbot` to be able to authenticate and join the Teleport cluster, +we need to configure a join token. There are a number of different available +[methods](../../reference/deployment/join-methods.mdx), but for this guide we will use +the `kubernetes` method with a static JWKS. Please refer to the +[Deploying tbot on Kubernetes](../deployment/kubernetes.mdx) guide for more +in-depth information. + + +Certain cloud providers like Amazon EKS regularly rotate their OIDC signing +keys, which will cause the `static_jwks` configuration you create in this guide +to become invalid after a short period of time. + +On Kubernetes providers with OIDC support, like Amazon's Elastic Kubernetes +Service (EKS), Google Kubernetes Engine (GKE), and Azure Kubernetes Service +(AKS), consider using [Kubernetes OIDC joining](../deployment/kubernetes-oidc.mdx) +instead. 
+ + +First, run the following command against the **source cluster** to determine +the JWKS-formatted public key: + +```code +$ kubectl get --raw /openid/v1/jwks +{"keys":[--snip--]} +``` + +Next, create a file called `join-token.yaml` with the following content, ensuring +you insert the output from the `kubectl` command in `spec.kubernetes.static_jwks.jwks` +and specify the namespace in which Argo CD is running in +`spec.kubernetes.allow[0].service_account`: + +```yaml +kind: token +version: v2 +metadata: + # name will be specified in the tbot Helm chart values later + name: example-join-token +spec: + roles: [Bot] + # bot_name must match the name of the bot created earlier in this guide. + bot_name: example-bot + join_method: kubernetes + kubernetes: + # static_jwks configures the Auth Service to validate the JWT presented by + # `tbot` using the public key from a statically configured JWKS. + type: static_jwks + static_jwks: + jwks: | + # Place the data returned by the kubectl command here + {"keys":[--snip--]} + # allow specifies the rules by which the Auth Service determines if `tbot` + # should be allowed to join. + allow: + - service_account: "argocd:tbot" # namespace:service_account +``` + +Use `tctl create -f ./join-token.yaml` to create the join token. + +## Step 4/4. Deploy `tbot` + +Finally, we'll use the official [Helm chart](../../reference/helm-reference/tbot.mdx) +to deploy `tbot`. + +Find your cluster name by running `tctl status`.
+ +Create a file called `tbot-values.yaml` with the following content, ensuring you +replace the placeholder values with your cluster name, proxy address, and using +`clusterSelectors` to identify which Kubernetes clusters you'd like to expose to +Argo CD: + +```yaml +clusterName: "test.teleport.sh" +teleportProxyAddress: "test.teleport.sh:443" +token: "example-join-token" +defaultOutput: + enabled: false +argocd: + enabled: true + clusterSelectors: + - labels: + environment: production +``` + +Please refer to the [Helm chart reference](../../reference/helm-reference/tbot.mdx#argocd) +for all of the supported configuration options. + +Install the Helm chart by running the following commands against the source +cluster, ensuring you specify the namespace in which Argo CD is running: + +```code +$ helm repo add teleport (=teleport.helm_repo_url=) +$ helm repo update +$ helm install tbot teleport/tbot \ + --namespace argocd \ + --values tbot-values.yaml +``` + +Once the Helm chart installation is complete (usually after a few minutes), you +should see your Kubernetes clusters in Argo CD. You can check this by running +the following command: + +```code +$ argocd cluster list +SERVER NAME +https://test.teleport.sh:443/v1/teleport/dHAxLmZsb3BweS5jbw/Ym94b2ZyYWQ1 test.teleport.sh-prod-eu-1 +https://kubernetes.default.svc in-cluster +``` + +You should also be able to see the `tbot`-managed cluster secrets in Kubernetes +when running the following command: + +```code +$ kubectl get secrets -n argocd | grep ^teleport.argocd-cluster. +teleport.argocd-cluster.e815df7b7588be17 Opaque 3 7d3h +``` + +You can now configure Argo CD to deploy applications to your Teleport-enrolled +clusters! + +If new matching clusters are added in Teleport, `tbot` will register them with +Argo CD on the bot's next certificate renewal. If needed, the `tbot` process can +be restarted by deleting the pod or signalled (`pkill -USR1 tbot`) to trigger an +immediate reload. 
+ +Please note that, for safety, if a cluster is deleted in Teleport or no longer +matches your `clusterSelectors`, it will **not** be automatically removed from +Argo CD - but `tbot` will stop refreshing its credentials. + +### Argo CD Impersonation + +Argo CD [supports using](https://argo-cd.readthedocs.io/en/latest/operator-manual/app-sync-using-impersonation/) +Kubernetes' user impersonation feature to give the application sync process more +restrictive privileges than the Argo control plane has generally. This can be +used to create a permission boundary where applications deployed to the same +cluster, but in different projects, cannot read or modify each other's resources. + +In order to use Argo CD impersonation with Teleport Kubernetes Access, your bot +must have a role where `kubernetes_users` contains a wildcard. For example: + +```yaml +kind: role +version: v7 +metadata: + name: kube-wildcard-access +spec: + allow: + kubernetes_users: + - '*' +``` + +If you simply enumerate the users that Argo CD is allowed to impersonate in +`kubernetes_users`, you will encounter the following error: + +``` +please select a user to impersonate, refusing to select a user due to several kubernetes_users set up for this user +``` + +This happens because Argo CD *only* sets impersonation headers on requests that +operate on application-scoped resources. Because your role has permission to +impersonate many users, it's ambiguous which one Teleport should use by default. + +When `kubernetes_users` contains a wildcard, and no specific user is being +impersonated, Kubernetes Access will fall back to sending your bot's Teleport +username by default. You should therefore create a RoleBinding or +ClusterRoleBinding, in the **target cluster**, granting your bot user the +broader permissions needed by the Argo CD control plane. 
+ +```code +$ kubectl create clusterrolebinding example-bot-edit \ + --clusterrole=edit \ + --user=bot-example-bot +``` + + +When using impersonation to create a permission boundary between Argo CD +projects, remember that Kubernetes Access will automatically impersonate any +groups listed in `kubernetes_groups` by default. + +You should ensure that your bot user does not have any roles which contain +`kubernetes_groups`, otherwise the privilege isolation provided by impersonating +different users is compromised, because the application sync process will have +the combination of the impersonated user's privileges and those granted to the +groups. + + +## Next steps + +- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) + to explore all the available configuration options. +- Read the [Teleport Kubernetes RBAC guide](../../enroll-resources/kubernetes-access/controls.mdx) + for more details on controlling Kubernetes access. diff --git a/docs/pages/enroll-resources/machine-id/access-guides/databases.mdx b/docs/pages/machine-workload-identity/access-guides/databases.mdx similarity index 81% rename from docs/pages/enroll-resources/machine-id/access-guides/databases.mdx rename to docs/pages/machine-workload-identity/access-guides/databases.mdx index b271136245ae0..6cb09a4e139f2 100644 --- a/docs/pages/enroll-resources/machine-id/access-guides/databases.mdx +++ b/docs/pages/machine-workload-identity/access-guides/databases.mdx @@ -1,29 +1,37 @@ --- -title: Machine ID with Database Access -description: How to use Machine ID to access database servers +# lint disable page-structure remark-lint +title: Machine & Workload Identity with Database Access +sidebar_label: Databases +description: How to use Machine & Workload Identity to access database servers +tags: + - how-to + - mwi + - infrastructure-identity --- -Teleport protects and controls access to databases. 
Machine ID +{/* lint disable page-structure remark-lint */} + +Teleport protects and controls access to databases. Machine & Workload Identity can be used to grant machines secure, short-lived access to these databases. In this guide, you will configure `tbot` to produce credentials that can be used to access a database configured in Teleport. -![Accessing Teleport-protected databases with Machine ID](../../../../img/machine-id/machine-id-database-access.svg) +![Accessing Teleport-protected databases with Machine & Workload Identity](../../../img/machine-id/machine-id-database-access.svg) ## Prerequisites (!docs/pages/includes/edition-prereqs-tabs.mdx!) - If you have not already put your database behind the Teleport Database Service, - follow the [database access getting started guide](../../database-access/getting-started.mdx). + follow the [database access getting started guide](../../enroll-resources/database-access/getting-started.mdx). The Teleport Database Service supports databases like PostgreSQL, MongoDB, Redis, and much more. See our [database access - guides](../../database-access/guides/guides.mdx) for a complete list. + guides](../../enroll-resources/database-access/guides/guides.mdx) for a complete list. - (!docs/pages/includes/tctl.mdx!) - The `tsh` binary must be installed on the machine that will access the database. Depending on how `tbot` was installed, this may already be - installed. If it is not, see [Installation](../../../installation.mdx) for + installed. If it is not, see [Installation](../../installation/installation.mdx) for details. - `tbot` must already be installed and configured on the machine that will access the database. For more information, see the @@ -84,18 +92,18 @@ This rule will allow the bot to do two things: The `'*': '*'` label selector grants access to any database server configured in Teleport. 
In production, consider restricting the bot's access using a more specific label selector; see the -[Database Access RBAC guide](../../database-access/rbac.mdx) +[Database Access RBAC guide](../../enroll-resources/database-access/rbac.mdx) for a full reference of database-related role options. {/* vale messaging.protocol-products = YES */} -## Step 2/4. Configure a database `tbot` output +## Step 2/4. Configure a database `tbot` output service -Now, `tbot` needs to be configured with an output that will produce the +Now, `tbot` needs to be configured with an output service that will produce the credentials needed for database access. To do this, the `database` output -type is used. +service type is used. The database you wish to generate credentials for is configured as part of the -`database` output. This is controlled using three fields: +`database` service. This is controlled using three fields: - `service` specifies the Database Service as named in the Teleport configuration that the credentials will grant access to. @@ -105,11 +113,11 @@ The database you wish to generate credentials for is configured as part of the access to. This field does not need to be specified for all types of database. -In addition, the `format` field in the database output controls the format of -the generated credentials. This allows for compatibility with clients that -expect a specific format. When this field is not specified, a sensible default -option is used that is compatible with most clients. The full list of supported -`format` options is below: +In addition, the `format` field in the database output service controls the +format of the generated credentials. This allows for compatibility with clients +that expect a specific format. When this field is not specified, a sensible +default option is used that is compatible with most clients. 
The full list of +supported `format` options is below: | Client | `format` | Description | |---------------|---------------|--------------------------------------| @@ -118,16 +126,16 @@ option is used that is compatible with most clients. The full list of supported | CockroachDB | `cockroach` | Provides `cockroach/node.key`, `cockroach/node.crt`, and `cockroach/ca.crt`. | | Generic TLS | `tls` | Provides `tls.key`, `tls.crt`, and `tls.cas` for generic clients that require specific file extensions. | -Outputs must be configured with a destination. In this example, the `directory` -destination will be used. This will write artifacts to a specified directory on -disk. Ensure that this directory can be written to by the Linux user that -`tbot` runs as, and that it can be read by the Linux user that will be accessing -applications. +Output services must be configured with a destination. In this example, the +`directory` destination will be used. This will write artifacts to a specified +directory on disk. Ensure that this directory can be written to by the Linux +user that `tbot` runs as, and that it can be read by the Linux user that will be +accessing applications. -Modify your `tbot` configuration to add a `database` output: +Modify your `tbot` configuration to add a `database` service: ```yaml -outputs: +services: - type: database destination: type: directory @@ -175,7 +183,7 @@ To create a systemd service for this purpose, create a unit file at ```systemd [Unit] -Description=Teleport Machine ID Proxy Service +Description=Teleport Machine & Workload Identity Proxy Service After=network.target # If you have followed a previous guide and configured tbot itself as a systemd # service, uncomment the following line to create a dependency between the two @@ -261,5 +269,5 @@ access controls. 
## Next steps -- Read the [configuration reference](../../../reference/machine-id/configuration.mdx) to explore +- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore all the available configuration options. diff --git a/docs/pages/machine-workload-identity/access-guides/kubernetes.mdx b/docs/pages/machine-workload-identity/access-guides/kubernetes.mdx new file mode 100644 index 0000000000000..369d747e65a03 --- /dev/null +++ b/docs/pages/machine-workload-identity/access-guides/kubernetes.mdx @@ -0,0 +1,216 @@ +--- +title: Machine & Workload Identity with Kubernetes Access +sidebar_label: Kubernetes Clusters +description: How to use Machine & Workload Identity to access Kubernetes clusters +tags: + - how-to + - mwi + - infrastructure-identity +--- + +{/* lint disable page-structure remark-lint */} + +Teleport protects and controls access to Kubernetes +clusters. Machine & Workload Identity can be used to grant machines secure, +short-lived access to these clusters. + +In this guide, you will configure `tbot` to produce credentials that can be +used to access a Kubernetes cluster enrolled with your Teleport cluster. + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx!) + +- If you have not already connected your Kubernetes cluster to Teleport, follow + [Enroll a Kubernetes Cluster](../../enroll-resources/kubernetes-access/getting-started.mdx). +- (!docs/pages/includes/tctl.mdx!) +- To configure the Kubernetes cluster, your client system will need to have + `kubectl` installed. See the + [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/) for + installation instructions. +- `tbot` must already be installed and configured on the machine that will + access Kubernetes clusters. For more information, see the + [deployment guides](../deployment/deployment.mdx). 
+- To demonstrate connecting to the Kubernetes cluster, the machine that will
+  access Kubernetes clusters will need to have `kubectl` installed. See the
+  [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/) for
+  installation instructions.
+
+## Step 1/3. Configure Teleport and Kubernetes RBAC
+
+First, we need to configure the RBAC for both Teleport and Kubernetes in order
+to grant the bot the correct level of access.
+
+When forwarding requests to the Kubernetes API on behalf of a bot, the
+Teleport Proxy attaches the groups configured (using `kubernetes_groups`) in the
+bot's Teleport roles to the request. These groups are then used to configure a
+RoleBinding or ClusterRoleBinding in Kubernetes to grant specific permissions
+within the Kubernetes cluster to the bot.
+
+For the purpose of this guide, we will bind the `editor` group to the default
+`edit` ClusterRole that is preconfigured in most Kubernetes clusters to give
+the bot read and write access to resources in all the cluster namespaces.
+
+When configuring this for a production environment, you should consider:
+
+- If RoleBinding should be used instead of ClusterRoleBinding to limit the
+  bot's access to a specific namespace.
+- If a Role should be created that grants the bot the least privileges
+  necessary rather than using a pre-existing general Role such as `edit`.
+
+To bind the `editor` group to the `edit` ClusterRole, execute:
+
+```code
+$ kubectl create clusterrolebinding teleport-editor-edit \
+  --clusterrole=edit \
+  --group=editor
+```
+
+With the appropriate RoleBinding configured in Kubernetes to grant access to
+a specific group, you now need to add this group to the role that the bot
+will impersonate when producing credentials. You also need to grant the bot
+access through Teleport to the cluster itself. This is done by creating a
+role that grants the necessary permissions and then assigning this role to a
+Bot.
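Before moving on to the Teleport role, note that the ClusterRoleBinding created above grants access across all namespaces. If you would rather scope the bot to a single namespace, as suggested in the production considerations, a namespaced RoleBinding can be used instead. A minimal sketch, assuming a hypothetical `dev` namespace (the group name must still match the `kubernetes_groups` configured in the Teleport role):

```yaml
# Hypothetical namespace-scoped alternative to the ClusterRoleBinding above.
# Grants the "editor" group the built-in "edit" ClusterRole, but only within
# the "dev" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: teleport-editor-edit
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: editor
```

Apply it with `kubectl apply -f rolebinding.yaml` in place of the `kubectl create clusterrolebinding` command shown above.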
+
+Create a file called `role.yaml` with the following content:
+
+```yaml
+kind: role
+version: v7
+metadata:
+  name: example-role
+spec:
+  allow:
+    kubernetes_labels:
+      '*': '*'
+    kubernetes_groups:
+      - editor
+    kubernetes_resources:
+      - kind: "*"
+        namespace: "*"
+        name: "*"
+        verbs: ["*"]
+```
+
+Replace `example-role` with a descriptive name related to your use case.
+
+Adjust the `allow` field for your environment:
+
+- `kubernetes_labels` should be adjusted to grant access to only the clusters
+  that the bot will need to access. The value shown, `'*': '*'`, will grant
+  access to all Kubernetes clusters.
+- `editor` must match the name of the group you specified in the
+  RoleBinding or ClusterRoleBinding.
+- `kubernetes_resources` can be used to apply additional restrictions to what
+  the bot can access within the Kubernetes cluster. These restrictions are
+  layered upon the RBAC configured within the Kubernetes role itself.
+
+Use `tctl create -f ./role.yaml` to create the role.
+
+(!docs/pages/includes/create-role-using-web.mdx!)
+
+Now, use `tctl bots update` to add the role to the Bot. Replace `example`
+with the name of the Bot you created in the deployment guide and `example-role`
+with the name of the role you just created:
+
+```code
+$ tctl bots update example --add-roles example-role
+```
+
+## Step 2/3. Configure a Kubernetes `tbot` output service
+
+Now, `tbot` needs to be configured with an output service to produce the
+Kubernetes credentials and client configuration file. This is done using the
+`kubernetes/v2` service type.
+
+The Kubernetes clusters you wish to make available must be specified using
+entries in the `selectors` list. In this example, `example-k8s-cluster` will be
+selected using a name selector, and all clusters with the label
+`environment=dev` will be selected as well.
+
+Output services must also be configured with a destination. In this example, the
+`directory` type will be used.
This will write artifacts to a specified +directory on disk. Ensure that this directory can be written to by the Linux +user that `tbot` runs as, and that it can be read by the Linux user that will +be accessing the Kubernetes cluster. + +Modify your `tbot` configuration to add a `kubernetes/v2` service: + +```yaml +services: + - type: kubernetes/v2 + selectors: + # Specify the name of the Kubernetes cluster you wish the credentials to + # grant access to. Note that wildcards are not supported. + - name: example-k8s-cluster + + # Specify a label selector to dynamically select many clusters at once. + # All labels in a selector must match for a cluster to be selected, and + # multiple separate selectors can be specified if desired. Note that + # wildcards are not supported. + - labels: + environment: dev + destination: + type: directory + # For this guide, /opt/machine-id is used as the destination directory. + # You may wish to customize this. Multiple output services cannot + # share the same destination. + path: /opt/machine-id +``` + +Ensure you replace `example-k8s-cluster` with the name of the Kubernetes cluster +as registered in Teleport and adjust `/opt/machine-id` if you wish. + +If operating `tbot` as a background service, restart it. If running `tbot` in +one-shot mode, it must be executed before you attempt to use the credentials. + +## Step 3/3. Connect to your Kubernetes cluster + +Once `tbot` has been run with the new service configured, a file called +`kubeconfig.yaml` should have been generated in the destination directory +you specified. This contains all the information necessary for `kubectl` to +connect to the Kubernetes cluster through the Teleport Proxy. 
+ +To use `kubeconfig.yaml` with `kubectl`, the `--kubeconfig` flag or `KUBECONFIG` +environment variable can be provided to `kubectl`: + +```code +$ kubectl --kubeconfig /opt/machine-id/kubeconfig.yaml get pods -A +# Or, set the KUBECONFIG environment variable: +$ export KUBECONFIG=/opt/machine-id/kubeconfig.yaml +$ kubectl get pods -A +``` + +If you selected multiple clusters, they will be exposed as separate contexts +within the generated `kubeconfig.yaml`, and will be named following the format +`$teleportClusterName-$kubeClusterName`. To target a specific cluster, use the +`--context` flag: + +```code +$ kubectl --kubeconfig /opt/machine-id/kubeconfig.yaml --context=example.teleport.sh-my-kube-cluster get pods -A +``` + +Note that the first selected cluster in `tbot.yaml` will be used as the default +context. If using label selectors, the default context may vary over time if +clusters are added or removed in Teleport. + +If new matching clusters are added or removed in Teleport, `kubeconfig.yaml` +will be regenerated to reflect the change on the bot's next certificate renewal. +If needed, the `tbot` process can be restarted or signaled (`pkill -USR1 tbot`) +to trigger an immediate reload. Note that modifications to `kubeconfig.yaml`, +such as changes to the `current-context` field, will be overwritten. + +Whilst this guide has demonstrated `kubeconfig.yaml` being used with `kubectl`, +this format is compatible with most Kubernetes tools including: + +- Helm +- Lens +- ArgoCD + +## Next steps + +- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore + all the available configuration options. +- Read the [Teleport Kubernetes RBAC guide](../../enroll-resources/kubernetes-access/controls.mdx) + for more details on controlling Kubernetes access. 
diff --git a/docs/pages/machine-workload-identity/access-guides/mcp.mdx b/docs/pages/machine-workload-identity/access-guides/mcp.mdx
new file mode 100644
index 0000000000000..bd10d2f18a3e0
--- /dev/null
+++ b/docs/pages/machine-workload-identity/access-guides/mcp.mdx
@@ -0,0 +1,212 @@
+---
+title: Machine & Workload Identity with MCP Access
+description: How to use Machine & Workload Identity to access MCP servers
+sidebar_label: MCP
+tags:
+  - how-to
+  - mwi
+  - infrastructure-identity
+  - mcp
+---
+
+Teleport protects and controls access to Model Context Protocol (MCP) servers.
+Machine & Workload Identity (MWI) can then be used to grant machines and
+workloads secure and short-lived access to these MCP servers without the need
+for long-lived static secrets.
+
+In this guide, you will configure `tbot` to produce short-lived credentials that
+can be used by an MCP client to access MCP servers enrolled within your Teleport
+cluster.
+
+This guide focuses on granting machines access to MCP servers. If you wish to
+grant human users access to MCP servers, you should instead refer to the
+[MCP Access with Stdio MCP Server guide](../../enroll-resources/mcp-access/enrolling-mcp-servers/stdio.mdx).
+
+## How it works
+
+The Teleport Application Access agent is deployed in front of the MCP server
+that you wish to protect access to. This agent is responsible for enforcing
+access control based on roles configured in Teleport.
+
+The Machine & Workload Identity agent, `tbot`, is installed on the machine
+that will require access to the MCP server. It is responsible for authenticating
+to the Teleport cluster and producing an identity file that can be used by the
+MCP client to access the MCP server through the Teleport Proxy.
+
+## Prerequisites
+
+(!docs/pages/includes/edition-prereqs-tabs.mdx!)
+
+- If you have not already connected your MCP server to Teleport, follow
+the [MCP Getting Started guide](../../enroll-resources/mcp-access/getting-started.mdx).
+- (!docs/pages/includes/tctl.mdx!)
+- `tbot` and `tsh` must already be installed and configured on the machine that
+will access MCP servers. For more information, see the
+[deployment guides](../deployment/deployment.mdx).
+
+## Step 1/3. Configure RBAC
+
+First, Teleport must be configured to allow the credentials produced by the bot
+to access MCP servers in your infrastructure. This is done by creating a role
+that specifies label matchers for the MCP server that you want to grant access
+to, and specifying which tools within the MCP server should be accessible.
+
+In our example, we will grant access to an MCP server that has been labelled
+with `env: dev` and allow access to all tools within that MCP server.
+
+Create a file named `role.yaml` with the following content:
+
+```yaml
+kind: role
+version: v6
+metadata:
+  name: example-role
+spec:
+  allow:
+    app_labels:
+      'env': 'dev'
+    mcp:
+      tools: ['*']
+```
+
+Replace `example-role` with a descriptive name related to your use case, adjust
+the `app_labels` to match the labels of your MCP server, and modify the
+`mcp.tools` field to specify which tools you want to allow access to.
+
+Use `tctl create -f ./role.yaml` to create the role.
+
+(!docs/pages/includes/create-role-using-web.mdx!)
+
+Now, use `tctl bots update` to add the role to the Bot. Replace `example`
+with the name of the Bot you created in the deployment guide and `example-role`
+with the name of the role you just created:
+
+```code
+$ tctl bots update example --add-roles example-role
+```
+
+## Step 2/3. Configure `tbot`
+
+Next, you'll configure `tbot` to output an identity file. This identity file
+will then be used by the MCP client to authenticate to Teleport. This is done
+using the `identity` service type.
+
+The service will need to be configured with a destination. In this example,
+the `directory` type will be used. This will write artifacts to the specified
+directory on disk.
Ensure that this directory is writable by the user that
+`tbot` runs as, and that it can be read by the Linux user that the MCP client
+will be running as.
+
+Modify your `tbot` configuration to add an `identity` service:
+
+```yaml
+services:
+  - type: identity
+    allow_reissue: true
+    destination:
+      type: directory
+      path: /opt/machine-id
+```
+
+Ensure that you replace `/opt/machine-id` with the directory you have chosen.
+
+If operating `tbot` as a background service, restart it. If running `tbot` in
+one-shot mode, it must be executed before you attempt to use the credentials.
+
+## Step 3/3. Connect your MCP client
+
+You can now configure your MCP client to use the credentials produced by `tbot`
+to access MCP servers. The exact steps you need to take will depend on the
+MCP client you are using.
+
+
+
+  For compatibility with LangGraph, use the `langchain-mcp-adapters`
+  package. This package implements an MCP client and exposes the tools
+  from an MCP server in the same way as tools defined natively in Python.
+
+  Instantiate a `MultiServerMCPClient` and configure it to call
+  `tsh mcp connect` with the identity file produced by `tbot`:
+
+  ```python
+  from langchain_mcp_adapters.client import MultiServerMCPClient
+  client = MultiServerMCPClient({
+      "my-mcp-server": {
+          "command": "tsh",
+          "args": [
+              "mcp",
+              "connect",
+              "-i", "/opt/machine-id/identity",
+              "--proxy", "example.teleport.sh:443",
+              "my-mcp-server",
+          ],
+          "transport": "stdio",
+      },
+  })
+  tools = await client.get_tools()
+  ```
+
+  Modify the example values to match your environment:
+  - `/opt/machine-id/identity` with the path to the identity file
+    produced by `tbot`.
+  - `example.teleport.sh:443` with the address of your Teleport proxy.
+  - `my-mcp-server` with the name of your MCP server as enrolled in Teleport.
+
+  See the [LangGraph documentation](https://langchain-ai.github.io/langgraph/agents/mcp/)
+  for further details.
+ + + To configure Claude Desktop to call an MCP server through Teleport, + add an entry to `claude_desktop_config.json` that invokes + `tsh mcp connect` with the identity file produced by `tbot`: + + ```json + { + "mcpServers": { + "teleport-mcp-teleport-mcp-demo": { + "command": "tsh", + "args": [ + "mcp", + "connect", + "-i", + "/opt/machine-id/identity", + "--proxy", + "example.teleport.sh:443", + "my-mcp-server" + ] + } + } + } + ``` + + Modify the example values to match your environment: + - `/opt/machine-id/identity` with the path to the identity file + produced by `tbot`. + - `example.teleport.sh:443` with the address of your Teleport proxy. + - `my-mcp-server` with the name of your MCP server as enrolled in Teleport. + + See the [Claude Desktop documentation](https://modelcontextprotocol.io/quickstart/user) + for further details. + + + Generally, any MCP client that supports STDIO transport can be used with + Teleport MCP access. + + You'll need to configure the MCP client to invoke the `tsh` binary with + the following arguments: + `tsh mcp connect -i /opt/machine-id/identity --proxy example.teleport.sh:443 my-mcp-server`. + + Modify the example values to match your environment: + - `/opt/machine-id/identity` with the path to the identity file + produced by `tbot`. + - `example.teleport.sh:443` with the address of your Teleport proxy. + - `my-mcp-server` with the name of your MCP server as enrolled in Teleport. + + + +## Next steps + +- Read the [MCP access documentation](../../enroll-resources/mcp-access/mcp-access.mdx) +to learn more about Teleport's MCP integration and RBAC controls. +- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore +all the available configuration options for `tbot`. 
diff --git a/docs/pages/enroll-resources/machine-id/access-guides/ssh.mdx b/docs/pages/machine-workload-identity/access-guides/ssh.mdx similarity index 76% rename from docs/pages/enroll-resources/machine-id/access-guides/ssh.mdx rename to docs/pages/machine-workload-identity/access-guides/ssh.mdx index 7a05e458a4154..d2f41d40e2e5d 100644 --- a/docs/pages/enroll-resources/machine-id/access-guides/ssh.mdx +++ b/docs/pages/machine-workload-identity/access-guides/ssh.mdx @@ -1,10 +1,18 @@ --- -title: Machine ID with Server Access -description: How to use Machine ID to access servers via SSH +title: Machine & Workload Identity with Server Access +sidebar_label: Linux Servers +description: How to use Machine & Workload Identity to access servers via SSH +tags: + - how-to + - mwi + - infrastructure-identity --- -Teleport protects and controls SSH access to servers. Machine ID -can be used to grant machines secure, short-lived access to these servers. +{/* lint disable page-structure remark-lint */} + +Teleport protects and controls SSH access to servers. Machine & Workload +Identity can be used to grant machines secure, short-lived access to these +servers. In this guide, you will configure `tbot` to produce credentials that can be used to access a Linux server enrolled in Teleport. The guide @@ -15,7 +23,7 @@ will cover access using the Teleport CLI `tsh` as well as the OpenSSH client. (!docs/pages/includes/edition-prereqs-tabs.mdx!) - If you have not already connected your server to Teleport, follow - the [getting started guide](../../server-access/getting-started.mdx). + the [getting started guide](../../enroll-resources/server-access/getting-started.mdx). - (!docs/pages/includes/tctl.mdx!) - `tbot` must already be installed and configured on the machine that will connect to Linux hosts with SSH. For more information, see the @@ -67,22 +75,22 @@ with the name of the role you just created: $ tctl bots update example --add-roles example-role ``` -## Step 2/3. 
Configure the `tbot` output
+## Step 2/3. Configure the `tbot` output service
 
-Now, `tbot` needs to be configured with an output that will produce an
-SSH certificate and OpenSSH configuration. For this we use the `identity` output
-type.
+Now, `tbot` needs to be configured with a service that will produce an SSH
+certificate and OpenSSH configuration. For this we use the `identity` output
+service type.
 
-Outputs must be configured with a destination. In this example, the `directory`
-destination will be used. This will write these credentials to a specified
-directory on disk. Ensure that this directory can be written to by the Linux
-user that `tbot` runs as, and that it can be read by the Linux user that needs
-to use SSH.
+Output services must be configured with a destination. In this example, the
+`directory` destination will be used. This will write these credentials to a
+specified directory on disk. Ensure that this directory can be written to by the
+Linux user that `tbot` runs as, and that it can be read by the Linux user that
+needs to use SSH.
 
-Modify your `tbot` configuration to add an `identity` output:
+Modify your `tbot` configuration to add an `identity` service:
 
 ```yaml
-outputs:
+services:
   - type: identity
     destination:
       type: directory
@@ -117,7 +125,7 @@ tools like `ssh` and `sftp`.
 
 ### Connecting using `tsh`
 
-To use `tsh` with a `tbot` identity output, the path to the `identity` file
+To use `tsh` with a `tbot` identity service, the path to the `identity` file
 must be specified using the `-i` flag. In addition, `--proxy` must be used to
 specify the address of the Teleport Proxy Service that should be used when
 making connections.
@@ -132,7 +140,7 @@ my-host
 
 ### Connecting using OpenSSH
 
-To use OpenSSH with the identity output, the path to the `ssh_config` should be
+To use OpenSSH with the identity service, the path to the `ssh_config` should be
 provided to `ssh` with the `-F` flag.
With the generated `ssh_config` you must append the name of the Teleport cluster @@ -161,10 +169,11 @@ If you wish to integrate with Ansible, check out the [Ansible-specific guide](./ansible.mdx). If your tool is not compatible with `ssh_config`, it may still be possible to -configure it to use the credentials produced by Machine ID. It must support -SSH client certificates and either ProxyCommand or ProxyJump functionality. +configure it to use the credentials produced by Machine & Workload Identity. It +must support SSH client certificates and either ProxyCommand or ProxyJump +functionality. ## Next steps -- Read the [configuration reference](../../../reference/machine-id/configuration.mdx) to explore +- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore all the available configuration options. diff --git a/docs/pages/enroll-resources/machine-id/access-guides/tctl.mdx b/docs/pages/machine-workload-identity/access-guides/tctl.mdx similarity index 80% rename from docs/pages/enroll-resources/machine-id/access-guides/tctl.mdx rename to docs/pages/machine-workload-identity/access-guides/tctl.mdx index 63eb7e79e38b6..cd570aed36fa0 100644 --- a/docs/pages/enroll-resources/machine-id/access-guides/tctl.mdx +++ b/docs/pages/machine-workload-identity/access-guides/tctl.mdx @@ -1,12 +1,20 @@ --- -title: Machine ID with tctl -description: How to use Machine ID with tctl to manage your Teleport configuration +title: Machine & Workload Identity with tctl +sidebar_label: tctl +description: How to use Machine & Workload Identity with tctl to manage your Teleport configuration +tags: + - how-to + - mwi + - infrastructure-identity --- +{/* lint disable page-structure remark-lint */} + `tctl` is the Teleport cluster management CLI tool. Whilst it usually uses the credentials from the locally logged in user, it is also possible to use -Machine ID credentials. 
This allows `tctl` to be leveraged as part of a custom -automation workflow deployed in a non-interactive environment (e.g CI/CD). +credentials issued by Machine & Workload Identity. This allows `tctl` to be +leveraged as part of a custom automation workflow deployed in a non-interactive +environment (e.g CI/CD). In this guide, you will configure `tbot` to produce credentials for `tctl`, and then use `tctl` to deploy Teleport roles defined in files. @@ -66,11 +74,11 @@ with the name of the role you just created: $ tctl bots update example --add-roles example-role ``` -## Step 2/3. Configure `tbot` output +## Step 2/3. Configure `tbot` output service -Now, `tbot` needs to be configured with an output that will produce the +Now, `tbot` needs to be configured with a service that will produce the credentials needed by `tctl`. As `tctl` will be accessing the Teleport API, the -correct output type to use is `identity`. +correct service type to use is `identity`. For this guide, the `directory` destination will be used. This will write these credentials to a specified directory on disk. Ensure that this directory can @@ -80,13 +88,13 @@ the Linux user that `tctl` will run as. Modify your `tbot` configuration to add an `identity` output: ```yaml -outputs: +services: - type: identity destination: type: directory # For this guide, /opt/machine-id is used as the destination directory. - # You may wish to customize this. Multiple outputs cannot share the same - # destination. + # You may wish to customize this. Multiple output services cannot share the + # same destination. path: /opt/machine-id ``` @@ -98,7 +106,7 @@ You should now see an `identity` file under `/opt/machine-id`. This contains the private key and signed certificates needed by `tctl` to authenticate with the Teleport Auth Service. -## Step 3/3. Use `tctl` with the identity output +## Step 3/3. 
Use `tctl` with the identity output service As an example, `tctl` will be used to apply a directory of YAML files that define Teleport roles. If these were stored in version control (e.g., `git`) and @@ -141,7 +149,7 @@ $ tctl get role/tctl-test ## Next steps -- Explore the [`tctl` reference](../../../reference/cli/tctl.mdx) to discover all +- Explore the [`tctl` reference](../../reference/cli/tctl.mdx) to discover all `tctl` can do. -- Read the [configuration reference](../../../reference/machine-id/configuration.mdx) to explore - all the available `tbot` configuration options. \ No newline at end of file +- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore + all the available `tbot` configuration options. diff --git a/docs/pages/machine-workload-identity/deployment/aws.mdx b/docs/pages/machine-workload-identity/deployment/aws.mdx new file mode 100644 index 0000000000000..eea67080a6d98 --- /dev/null +++ b/docs/pages/machine-workload-identity/deployment/aws.mdx @@ -0,0 +1,138 @@ +--- +title: Deploying tbot on AWS +description: How to install and configure the Machine & Workload Identity agent, `tbot`, on an AWS EC2 instance +sidebar_label: AWS EC2 +tags: + - how-to + - mwi + - infrastructure-identity + - aws +--- + +This guide shows you how to deploy Machine & Workload Identity's agent, `tbot`, +on an AWS EC2 instance and connect it to your Teleport cluster. + +## How it works + +On AWS, virtual machines can be assigned an IAM role, which they can assume in +order to request a signed document that includes information about the machine. + +The Teleport `iam` join method instructs the Machine & Workload Identity bot to +request this signed document from AWS using the assigned identity and send it to +the Teleport Auth Service for verification. This allows the bot to join the +cluster without the exchange of a long-lived secret. 
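The account ID and role ARN that the join token will later need to allow can be checked from the instance itself. A quick sketch, assuming the AWS CLI is installed on the instance and the IAM role is already attached (the IDs shown are illustrative, not real output from your account):

```code
$ aws sts get-caller-identity
{
    "UserId": "AROAEXAMPLEID:i-0123456789abcdef0",
    "Account": "111111111111",
    "Arn": "arn:aws:sts::111111111111:assumed-role/teleport-bot-role/i-0123456789abcdef0"
}
```

The `Account` and `Arn` values map directly onto the `aws_account` and `aws_arn` fields of the join token created later in this guide.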
+
+While this guide focuses on deploying `tbot` on an EC2 instance, it is also
+possible to use the `iam` join method with workloads running on an EKS
+Kubernetes cluster. To do so, you must configure [IAM Roles for Service Accounts
+(IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html)
+for the cluster and the Kubernetes service account that will be used by the
+`tbot` pod. See the [Kubernetes platform guide](kubernetes.mdx) for further
+guidance on deploying `tbot` as a workload on Kubernetes.
+
+## Prerequisites
+
+(!docs/pages/includes/edition-prereqs-tabs.mdx!)
+
+- (!docs/pages/includes/tctl.mdx!)
+- An AWS IAM role that you wish to grant access to your Teleport cluster. This
+  role must be granted `sts:GetCallerIdentity`. In this guide, this role will
+  be named `teleport-bot-role`.
+- An AWS EC2 virtual machine that you wish to install `tbot` onto, configured
+  with the IAM role attached.
+
+## Step 1/5. Install `tbot`
+
+**This step is completed on the AWS EC2 instance.**
+
+First, `tbot` needs to be installed on the VM that you wish to use Machine &
+Workload Identity on.
+
+Download and install the appropriate Teleport package for your platform:
+
+(!docs/pages/includes/install-linux.mdx!)
+
+## Step 2/5. Create a Bot
+
+**This step is completed on your local machine.**
+
+(!docs/pages/includes/machine-id/create-a-bot.mdx!)
+
+## Step 3/5. Create a join token
+
+**This step is completed on your local machine.**
+
+Create `bot-token.yaml`:
+
+```yaml
+kind: token
+version: v2
+metadata:
+  # name will be specified in the `tbot` configuration to use this token
+  name: example-bot
+spec:
+  roles: [Bot]
+  # bot_name should match the name of the bot created earlier in this guide.
+  bot_name: example
+  join_method: iam
+  # Restrict the AWS account and (optionally) ARN that can use this token.
+  # This information can be obtained from running the
+  # "aws sts get-caller-identity" command from the CLI.
+  allow:
+    - aws_account: "111111111111"
+      aws_arn: "arn:aws:sts::111111111111:assumed-role/teleport-bot-role/i-*"
+```
+
+Replace:
+
+- `111111111111` with the ID of your AWS account.
+- `teleport-bot-role` with the name of the AWS IAM role you created and assigned
+  to the EC2 instance.
+- `example` with the name of the bot you created in the second step.
+- `i-*` indicates that any instance with the specified role can use the join
+  method. If you wish to restrict this to an individual instance, replace `i-*`
+  with the full instance ID.
+
+Use `tctl` to apply this file:
+
+```code
+$ tctl create -f bot-token.yaml
+```
+
+## Step 4/5. Configure `tbot`
+
+**This step is completed on the AWS EC2 instance.**
+
+Create `/etc/tbot.yaml`:
+
+```yaml
+version: v2
+proxy_server: example.teleport.sh:443
+onboarding:
+  join_method: iam
+  token: example-bot
+storage:
+  type: memory
+# services will be filled in during the completion of an access guide.
+services: []
+```
+
+Replace:
+
+- `example.teleport.sh:443` with the address of your Teleport Proxy Service or
+  Auth Service. Prefer using the address of a Teleport Proxy Service instance.
+- `example-bot` with the name of the token you created in the third step.
+
+(!docs/pages/includes/machine-id/daemon-or-oneshot.mdx!)
+
+## Step 5/5. Configure services
+
+(!docs/pages/includes/machine-id/configure-services.mdx!)
+
+## Next steps
+
+- Follow the [access guides](../access-guides/access-guides.mdx) to finish configuring `tbot` for
+  your environment.
+- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore
+  all the available configuration options.
+- [More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../../reference/machine-workload-identity/telemetry.mdx)
diff --git a/docs/pages/machine-workload-identity/deployment/azure-devops.mdx b/docs/pages/machine-workload-identity/deployment/azure-devops.mdx
new file mode 100644
index 0000000000000..9b451564b9500
--- /dev/null
+++ b/docs/pages/machine-workload-identity/deployment/azure-devops.mdx
@@ -0,0 +1,205 @@
+---
+title: Deploying tbot on Azure DevOps
+description: How to install and configure the Machine & Workload Identity agent, `tbot`, on Azure DevOps.
+sidebar_label: Azure DevOps
+tags:
+  - how-to
+  - mwi
+  - infrastructure-identity
+  - azure
+---
+
+In this guide, you will configure Machine & Workload Identity's agent, `tbot`,
+to run within an Azure DevOps pipeline. The bot will be configured to use
+the `azure_devops` delegated joining method to eliminate the need for long-lived
+secrets.
+
+## How it works
+
+The `azure_devops` join method is a secure way for Machine & Workload Identity
+bots to authenticate with the Teleport Auth Service without using any shared
+secrets. Instead, it makes use of an OpenID Connect token that Azure DevOps
+provides via an API to the pipeline job.
+
+This token is sent to the Teleport Auth Service, and assuming it has been
+configured to trust Azure DevOps's identity provider and all identity assertions
+match, the authentication attempt will succeed.
+
+This mitigates the risk of long-lived secrets such as passwords or SSH private
+keys being exfiltrated from your Azure DevOps organization and provides many of
+the other benefits of Teleport such as auditing and fine-grained access
+control.
+
+## Prerequisites
+
+(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport (v17.5.1 or higher)"!)
+
+- (!docs/pages/includes/tctl.mdx!)
+
+## Step 1/5.
Determine your Azure DevOps organization ID
+
+Before we can grant access to a `tbot` running within an Azure DevOps pipeline,
+we must determine the ID of the Azure DevOps organization that the pipeline
+runs within. This will be used later to configure the join token.
+
+To determine the organization ID, add a step to a new or existing pipeline that
+exists within the organization that echoes the `System.CollectionId` variable.
+
+For example:
+
+```yaml
+trigger:
+- main
+
+pool:
+  vmImage: ubuntu-latest
+
+steps:
+- script: |
+    echo "Organization ID: $(System.CollectionId)"
+```
+
+Run this pipeline, and inspect the output of the step. You should see your
+organization ID printed in the logs, similar to the following:
+
+```code
+========================== Starting Command Output ===========================
+/usr/bin/bash --noprofile --norc /home/vsts/work/_temp/c4fdfbf7-6cd4-4737-a2a2-16702d201449.sh
+Organization ID: 0ca3ddd9-0000-1111-2222-example58665
+```
+
+Record this organization ID, as you will need it later.
+
+## Step 2/5. Create a Bot
+
+(!docs/pages/includes/machine-id/create-a-bot.mdx!)
+
+## Step 3/5. Create a join token
+
+To allow the Azure DevOps pipeline to authenticate to your Teleport cluster,
+you'll first need to create a join token. An Azure DevOps join token contains
+allow rules that describe which pipelines can use that token in order to join
+the Teleport cluster. A rule can contain multiple fields, and any pipeline that
+matches all the fields within a single rule is granted access.
+
+In this example, you will create a token with a rule that grants access to any
+Azure DevOps pipeline within a specific project.
+
+Create a file named `bot-token.yaml`:
+
+```yaml
+kind: token
+version: v2
+metadata:
+  name: example-bot
+spec:
+  roles: [Bot]
+  join_method: azure_devops
+  bot_name: example
+  azure_devops:
+    # Use the organization ID you recorded in Step 1.
+    organization_id: 0ca3ddd9-0000-1111-2222-example58665
+    allow:
+      - project_name: my-project
+```
+
+Replace:
+
+- `example-bot` with a meaningful name that describes the token.
+- `example` with the name of the bot you created in Step 2.
+- `0ca3ddd9-0000-1111-2222-example58665` with the organization ID you recorded
+  in Step 1.
+- `my-project` with the name of the Azure DevOps project that you want to grant
+  the bot access to.
+
+You can find a full list of the token configuration options for Azure DevOps
+joining on the
+[join methods reference page.](../../reference/deployment/join-methods.mdx#azure-devops-azure_devops)
+
+Apply this to your Teleport cluster using `tctl`:
+
+```code
+$ tctl create -f bot-token.yaml
+```
+
+## Step 4/5. Configure an Azure DevOps pipeline
+
+With the bot and join token created, you can now configure an Azure DevOps
+pipeline that sets up `tbot` to use these.
+
+To configure `tbot`, a YAML file will be used. In this example, we'll store the
+configuration within the repository itself, but this could be generated by the
+CI pipeline itself.
+
+Create `tbot.yaml` within your repository:
+
+```yaml
+version: v2
+proxy_server: example.teleport.sh:443
+onboarding:
+  join_method: azure_devops
+  token: example-bot
+oneshot: true
+storage:
+  type: memory
+# services will be filled in during the completion of an access guide.
+services: []
+```
+
+Replace:
+
+- `example.teleport.sh:443` with the address of your Teleport Proxy Service or
+  Auth Service. Prefer using the address of the Teleport Proxy Service.
+- `example-bot` with the name of the token you created in Step 3.
+
+Now you'll modify your Azure DevOps pipeline to install `tbot` and run it with
+the configuration you just created.
+ +In production, rather than installing `tbot` in each pipeline run, you may wish +to build a container image that contains `tbot` and other dependencies, and run +your pipeline within a container based on that image. + +Create or modify an existing pipeline with the following YAML: + +```yaml +trigger: +- main + +pool: + vmImage: ubuntu-latest + +steps: +- script: | + curl "https://:443/scripts/install.sh" | sudo bash + tbot start -c tbot.yaml + displayName: 'Install Teleport and start tbot' + env: + SYSTEM_ACCESSTOKEN: $(System.AccessToken) + TELEPORT_ANONYMOUS_TELEMETRY: 1 +``` + +Replace the empty host in the `curl` command with the address of your Teleport +Proxy Service. + +Note the inclusion of the `SYSTEM_ACCESSTOKEN` environment variable. This +variable must be populated in any step where `tbot` is run. + +`TELEPORT_ANONYMOUS_TELEMETRY` enables the submission of anonymous usage +telemetry. This helps us shape the future development of `tbot`. You can +disable telemetry by omitting this variable. + +Commit and push these two files to the repository. + +Check the status of your Azure DevOps pipeline run. If everything is configured +correctly, you should see `tbot` successfully start, join the cluster, and then +exit. + +## Step 5/5. Configure services + +(!docs/pages/includes/machine-id/configure-services.mdx!) + +## Further steps + +- For more information about the Azure DevOps join method, read the + [join method reference page.](../../reference/deployment/join-methods.mdx#azure-devops-azure_devops) +- Follow the [access guides](../access-guides/access-guides.mdx) to finish configuring `tbot` for +your environment. +- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore +all the available configuration options.
+- [More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../../reference/machine-workload-identity/telemetry.mdx) diff --git a/docs/pages/machine-workload-identity/deployment/azure.mdx b/docs/pages/machine-workload-identity/deployment/azure.mdx new file mode 100644 index 0000000000000..3db8b05872f57 --- /dev/null +++ b/docs/pages/machine-workload-identity/deployment/azure.mdx @@ -0,0 +1,138 @@ +--- +title: Deploying tbot on Azure +description: How to install and configure the Machine & Workload Identity agent, `tbot`, on an Azure VM +sidebar_label: Azure VM +tags: + - how-to + - mwi + - infrastructure-identity + - azure +--- + +In this guide, you will install Machine & Workload Identity's agent, `tbot`, on +an Azure VM. The bot will be configured to use the `azure` delegated joining +method to authenticate to your Teleport cluster. This eliminates the need for +long-lived secrets. + +## How it works + +On the Azure platform, virtual machines can be assigned a managed identity. The +Azure platform then makes an attested data document and a JWT available to the +virtual machine, allowing it to act as this identity. A third party can +validate this identity by using the token to fetch the virtual machine's +identity from the Azure identity service. + +The `azure` join method instructs the bot to use this attested data document and +JWT to prove its identity to the Teleport Auth Service. This allows joining to +occur without the use of a long-lived secret. + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx!) + +- (!docs/pages/includes/tctl.mdx!) +- An Azure managed identity with a role granting the + `Microsoft.Compute/virtualMachines/read` permission. You will need to know + the UID of this identity. +- An Azure VM you wish to install Machine & Workload Identity onto with the + managed identity configured as a user-assigned managed identity. + +## Step 1/5.
Install `tbot` + +**This step is completed on the Azure VM.** + +First, `tbot` needs to be installed on the VM that you wish to use Machine & +Workload Identity on. + +Download and install the appropriate Teleport package for your platform: + +(!docs/pages/includes/install-linux.mdx!) + +## Step 2/5. Create a Bot + +**This step is completed on your local machine.** + +(!docs/pages/includes/machine-id/create-a-bot.mdx!) + +## Step 3/5. Create a join token + +**This step is completed on your local machine.** + +Create `bot-token.yaml`: + +```yaml +kind: token +version: v2 +metadata: + # name will be specified in the `tbot` configuration to use this token + name: example-bot +spec: + roles: [Bot] + # bot_name should match the name of the bot created earlier in this guide. + bot_name: example + join_method: azure + azure: + allow: + # subscription should be the UID of an Azure subscription. Only VMs within + # this subscription will be able to join. + - subscription: 11111111-1111-1111-1111-111111111111 + # resource_groups allows joining to be restricted to VMs in a specific + # resource group. It can be omitted to allow joining from any VM within + # a subscription. + resource_groups: ["group1"] +``` + +Replace: + +- `11111111-1111-1111-1111-111111111111` with the UID of your Azure subscription +- `example` with the name of the bot you created in the second step +- `group1` with the name of the resource group that the VM resides within or + omit this entirely to allow joining from any VM within the subscription + +Use `tctl` to apply this file: + +```code +$ tctl create -f bot-token.yaml +``` + +## Step 4/5. Configure `tbot` + +**This step is completed on the Azure VM.** + +Create `/etc/tbot.yaml`: + +```yaml +version: v2 +proxy_server: example.teleport.sh:443 +onboarding: + join_method: azure + token: example-bot + azure: + client_id: 22222222-2222-2222-2222-222222222222 +storage: + type: memory +# services will be filled in during the completion of an access guide.
+services: [] +``` + +Replace: + +- `example.teleport.sh:443` with the address of your Teleport Proxy or + Auth Service. Prefer using the address of a Teleport Proxy. +- `22222222-2222-2222-2222-222222222222` with the ID of the Azure managed + identity that has been assigned to the VM. +- `example-bot` with the name of the token you created in the second step. + +(!docs/pages/includes/machine-id/daemon-or-oneshot.mdx!) + +## Step 5/5. Configure services + +(!docs/pages/includes/machine-id/configure-services.mdx!) + +## Next steps + +- Follow the [access guides](../access-guides/access-guides.mdx) to finish configuring `tbot` for + your environment. +- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore + all the available configuration options. +- [More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../../reference/machine-workload-identity/telemetry.mdx) diff --git a/docs/pages/enroll-resources/machine-id/deployment/bitbucket.mdx b/docs/pages/machine-workload-identity/deployment/bitbucket.mdx similarity index 75% rename from docs/pages/enroll-resources/machine-id/deployment/bitbucket.mdx rename to docs/pages/machine-workload-identity/deployment/bitbucket.mdx index 102233c0de666..af80a83eccefe 100644 --- a/docs/pages/enroll-resources/machine-id/deployment/bitbucket.mdx +++ b/docs/pages/machine-workload-identity/deployment/bitbucket.mdx @@ -1,18 +1,24 @@ --- -title: Deploying Machine ID on Bitbucket Pipelines -description: How to install and configure Machine ID on Bitbucket Pipelines +title: Deploying tbot on Bitbucket Pipelines +description: How to install and configure the Machine & Workload Identity agent, `tbot`, on Bitbucket Pipelines +sidebar_label: Bitbucket Pipelines +tags: + - how-to + - mwi + - infrastructure-identity --- -In this guide, you will configure Machine ID's agent, `tbot`, to run within a -Bitbucket Pipelines workflow. 
The bot will be configured to use the `bitbucket` -delegated joining method to eliminate the need for long-lived secrets. +In this guide, you will configure Machine & Workload Identity's agent, `tbot`, +to run within a Bitbucket Pipelines workflow. The bot will be configured to use +the `bitbucket` delegated joining method to eliminate the need for long-lived +secrets. ## How it works -The `bitbucket` join method is a secure way for Machine ID bots to authenticate -with the Teleport Auth Service without using any shared secrets. Instead, it -makes use of an OpenID Connect token that Bitbucket Pipelines injects into the -job environment. +The `bitbucket` join method is a secure way for Machine & Workload Identity bots +to authenticate with the Teleport Auth Service without using any shared secrets. +Instead, it makes use of an OpenID Connect token that Bitbucket Pipelines +injects into the job environment. This token is sent to the Teleport Auth Service, and assuming it has been configured to trust Bitbucket's identity provider and all identity assertions @@ -38,7 +44,7 @@ From this page, note the following values: - Workspace UUID, including the braces (`{}`) - Repository UUID, including the braces (`{}`) -## Step 2/4. Create the Machine ID bot +## Step 2/4. Create a Bot (!docs/pages/includes/machine-id/create-a-bot.mdx!) @@ -67,8 +73,8 @@ spec: # allow specifies the rules by which the Auth Service determines if `tbot` # should be allowed to join. allow: - - workspace_uuid: - repository_uuid: + - workspace_uuid: '' + repository_uuid: '' ``` Let's go over the token resource's fields in more detail: @@ -79,7 +85,8 @@ Let's go over the token resource's fields in more detail: access to. Note that this value will need to be used in other parts of the configuration later. - `spec.roles` defines which roles this token will grant access to. The - value of `[Bot]` states that this token grants access to a Machine ID bot.
+ value of `[Bot]` states that this token grants access to a Machine & Workload + Identity bot. - `spec.join_method` defines the join method the token is applicable for. Since this guide only focuses on Bitbucket Pipelines, you will set this to `bitbucket`. @@ -90,7 +97,7 @@ Let's go over the token resource's fields in more detail: - `spec.bitbucket.allow` is used to set rules for which Bitbucket Pipelines runs will be able to authenticate using the token. -Refer to the [token reference](../../../reference/join-methods.mdx#bitbucket-pipelines-bitbucket) +Refer to the [token reference](../../reference/deployment/join-methods.mdx#bitbucket-pipelines-bitbucket) for a full list of valid fields. Apply this to your Teleport cluster using `tctl`: @@ -116,25 +123,26 @@ pipelines: - step: oidc: true script: - # Download and extract Teleport - - wget https://cdn.teleport.dev/teleport-v(=teleport.version=)-linux-amd64-bin.tar.gz - - tar -xvf teleport-v(=teleport.version=)-linux-amd64-bin.tar.gz + # Install Teleport + - curl "https://:443/scripts/install.sh" | bash # Run `tbot` in identity mode for SSH access - - ./teleport/tbot start identity --destination=./tbot-user --join-method=bitbucket --proxy-server=example.teleport.sh:443 --token=bot-bitbucket --oneshot + - tbot start identity --destination=./tbot-user --join-method=bitbucket --proxy-server=:443 --token=bot-bitbucket --oneshot # Make use of the generated SSH credentials - ssh -F ./tbot-user/ssh_config user@node.example.teleport.sh echo "hello world" ``` +Replace the empty host in the `curl` and `--proxy-server` values with the +address of your Teleport Proxy Service. + This example will start `tbot` in identity mode to generate SSH credentials. -Refer to the [`tbot start` documentation](../../../reference/cli/tbot.mdx#tbot-start) +Refer to the [`tbot start` documentation](../../reference/cli/tbot.mdx#tbot-start) for details on using other modes such as database, application, and Kubernetes access. If you're adapting an existing workflow, note these steps: 1.
Set `oidc: true` on the step properties so that the step will be issued a token -1. Download and extract a `.tar.gz` Teleport build +1. Install Teleport from your cluster's install script 1. Run `tbot` with `--join-method=bitbucket`, `--token=example-bot` (or whichever name was configured in Step 3), and `--oneshot` @@ -153,7 +161,7 @@ credentials are needed in multiple steps. - Follow the [access guides](../access-guides/access-guides.mdx) to finish configuring `tbot` for your environment. -- Read the [configuration reference](../../../reference/machine-id/configuration.mdx) to explore +- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore all the available configuration options. - For more information about Bitbucket Pipelines itself, read [their documentation](https://support.atlassian.com/bitbucket-cloud/docs/get-started-with-bitbucket-pipelines/). diff --git a/docs/pages/enroll-resources/machine-id/deployment/circleci.mdx b/docs/pages/machine-workload-identity/deployment/circleci.mdx similarity index 84% rename from docs/pages/enroll-resources/machine-id/deployment/circleci.mdx rename to docs/pages/machine-workload-identity/deployment/circleci.mdx index 3775cebf03a6a..bcf1019b0d87e 100644 --- a/docs/pages/enroll-resources/machine-id/deployment/circleci.mdx +++ b/docs/pages/machine-workload-identity/deployment/circleci.mdx @@ -1,11 +1,19 @@ --- -title: Deploying Machine ID on CircleCI -description: How to install and configure Machine ID on CircleCI +title: Deploying tbot on CircleCI +description: How to install and configure the Machine & Workload Identity agent, `tbot`, on CircleCI +sidebar_label: CircleCI +tags: + - ci-cd + - how-to + - mwi + - infrastructure-identity --- -In this guide, you will configure Machine ID's agent, `tbot`, to run within a -CircleCI workflow. The bot will be configured to use the `circleci` delegated -joining method to eliminate the need for long-lived secrets.
+{/* lint disable page-structure remark-lint */} + +In this guide, you will configure Machine & Workload Identity's agent, `tbot`, +to run within a CircleCI workflow. The bot will be configured to use the +`circleci` delegated joining method to eliminate the need for long-lived secrets. ## Prerequisites @@ -63,7 +71,7 @@ Note this value down and substitute it into configuration examples later. -## Step 2/5. Create the Machine ID bot +## Step 2/5. Create a Bot (!docs/pages/includes/machine-id/create-a-bot.mdx!) @@ -103,7 +111,8 @@ example is set to the year `2100`. access to. Note that this value will need to be used in other parts of the configuration later. - `spec.roles` defines which roles this token will grant access to. The -value of `[Bot]` states that this token grants access to a Machine ID bot. +value of `[Bot]` states that this token grants access to a Machine & Workload +Identity bot. - `spec.join_method` defines the join method the token is applicable for. Since this guide only focuses on CircleCI, you will set this to `circleci`. - `spec.circleci.allow` is used to set rules for what CircleCI runs will be able @@ -128,20 +137,20 @@ Create `tbot.yaml` within your repository: ```yaml version: v2 -proxy_server: example.teleport.sh:443 +proxy_server: :443 onboarding: join_method: circleci token: example-bot oneshot: true storage: type: memory -# outputs will be filled in during the completion of an access guide. -outputs: [] +# services will be filled in during the completion of an access guide. +services: [] ``` Replace: -- `example.teleport.sh:443` with the address of your Teleport Proxy or +- the empty `proxy_server` host with the address of your Teleport Proxy or Auth Service. Prefer using the address of a Teleport Proxy. - `example-bot` with the name of the token you created in the third step @@ -155,7 +164,7 @@ Open your Git repository and create a directory called `.circleci`.
Then open a file called `config.yml` and insert the following configuration: ```yaml -# See: https://circleci.com/docs/2.0/configuration-reference +# See: https://circleci.com/docs/configuration-reference version: 2.1 jobs: write-run-log: @@ -166,12 +175,9 @@ jobs: - run: name: "Install Teleport" command: | - cd /tmp - curl -O https://cdn.teleport.dev/teleport-v(=teleport.version=)-linux-amd64-bin.tar.gz - tar -xvf teleport-v(=teleport.version=)-linux-amd64-bin.tar.gz - sudo ./teleport/install + curl "https://:443/scripts/install.sh" | sudo bash - run: - name: "Run Machine ID" + name: "Run Machine & Workload Identity" command: | export TELEPORT_ANONYMOUS_TELEMETRY=1 tbot start -c tbot.yaml @@ -183,6 +189,8 @@ workflows: - teleport-access ``` +Replace the empty host in the `curl` command with the address of your Teleport +Proxy Service. + `TELEPORT_ANONYMOUS_TELEMETRY` enables the submission of anonymous usage telemetry. This helps us shape the future development of `tbot`. You can disable telemetry by omitting this variable. @@ -202,16 +210,16 @@ to these credentials. Ensure that the role you assign to your CircleCI bot has access to only the resources in your Teleport cluster that your CI/CD needs to interact with. -## Step 5/5. Configure outputs +## Step 5/5. Configure services -(!docs/pages/includes/machine-id/configure-outputs.mdx!) +(!docs/pages/includes/machine-id/configure-services.mdx!) ## Further steps - Follow the [access guides](../access-guides/access-guides.mdx) to finish configuring `tbot` for your environment. -- Read the [configuration reference](../../../reference/machine-id/configuration.mdx) to explore +- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore all the available configuration options. - For more information about CircleCI itself, read [their documentation](https://circleci.com/docs/).
-- [More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../../../reference/machine-id/telemetry.mdx) +- [More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../../reference/machine-workload-identity/telemetry.mdx) diff --git a/docs/pages/machine-workload-identity/deployment/deployment.mdx b/docs/pages/machine-workload-identity/deployment/deployment.mdx new file mode 100644 index 0000000000000..f980e2519ad24 --- /dev/null +++ b/docs/pages/machine-workload-identity/deployment/deployment.mdx @@ -0,0 +1,181 @@ +--- +title: Deploy tbot +description: Explains how to deploy tbot on your platform and join it to your Teleport cluster. +tags: + - conceptual + - mwi + - infrastructure-identity +--- + +The first step to set up Machine & Workload Identity is to deploy the `tbot` +agent and join it as a Bot to your Teleport cluster. You can run the `tbot` +binary on a number of platforms, from AWS and GitHub Actions to a generic Linux +server or Kubernetes cluster. + +## Choosing a deployment method + +There are two considerations to make when determining how to deploy the `tbot` +agent on your infrastructure. + +### Your infrastructure + +The `tbot` agent runs as a container or on a Linux virtual machine. If you run +`tbot` on GitHub Actions, you can use one of the ready-made [Teleport GitHub +Actions workflows](https://github.com/teleport-actions). + +### Join method + +The `tbot` agent joins your Teleport cluster by using one of the following +authentication methods: + +- **Platform-signed document:** The platform that hosts `tbot`, such as a + Kubernetes cluster or Amazon EC2 instance, provides a signed identity document + that Teleport can verify using the platform's certificate authority. This is + the recommended approach because it avoids the use of shared secrets. +- **Static join token:** Your Teleport client tool generates a string and stores + it on the Teleport Auth Service. 
`tbot` provides this string when it first + connects to your Teleport cluster, demonstrating to the Auth Service that it + belongs in the cluster. From then on, `tbot` authenticates to your Teleport + cluster with a renewable certificate. + +## Deployment guides + +The guides in this section show you how to deploy the Machine & Workload +Identity agent, `tbot`, and join it to your cluster. + +If a specific guide does not exist for your platform, the [Linux +guide](linux.mdx) is compatible with most platforms. For +custom approaches, you can also read the [Machine & Workload Identity Reference](../../reference/machine-workload-identity/machine-workload-identity.mdx) +and [Architecture](../../reference/architecture/machine-id-architecture.mdx) to plan your deployment. + +, + to: "./aws", + name: "AWS", + }, + { + icon: , + to: "./azure", + name: "Azure", + }, + { + icon: , + to: "./azure-devops", + name: "Azure DevOps", + }, + { + icon: , + to: "./bitbucket", + name: "BitBucket Pipelines", + }, + { + icon: , + to: "./circleci", + name: "CircleCI", + }, + { + icon: , + to: "./gitlab", + name: "GitLab CI", + }, + { + icon: , + to: "./github-actions", + name: "GitHub Actions", + }, + { + icon: , + to: "./gcp", + name: "Google Cloud", + }, + { + icon: , + to: "./kubernetes", + name: "Kubernetes", + }, + { + icon: , + to: "./kubernetes-oidc", + name: "Kubernetes OIDC", + }, + { + icon: , + to: "./linux", + name: "Linux", + }, + { + icon: , + to: "./linux-tpm", + name: "Linux TPM", + }, + { + icon: , + to: "../../reference/machine-workload-identity/bound-keypair/getting-started", + name: "Bound Keypair Joining", + } + ]} +/> + +### CI/CD + +Read the following guides for how to deploy `tbot` on a continuous integration +and continuous deployment platform. 
+ +, + to: "./azure-devops", + name: "Azure DevOps", + }, + { + icon: , + to: "./bitbucket", + name: "BitBucket Pipelines", + }, + { + icon: , + to: "./circleci", + name: "CircleCI", + }, + { + icon: , + to: "./gitlab", + name: "GitLab CI", + }, + { + icon: , + to: "./github-actions", + name: "GitHub Actions", + }, + { + icon: , + to: "./jenkins", + name: "Jenkins", + }, + { + icon: , + to: "../../zero-trust-access/infrastructure-as-code/terraform-provider/spacelift", + name: "Spacelift", + }, + { + icon: , + to: "../../zero-trust-access/infrastructure-as-code/terraform-provider/terraform-cloud", + name: "Terraform Cloud", + }, + { + icon: , + to: "../../reference/machine-workload-identity/bound-keypair/static-keys", + name: "Bound Keypair static keys (Generic)", + } + ]} +/> + + +If your CI/CD provider does not have a dedicated join method listed above, +consider using [Bound Keypair static keys][bound-keypair-static] as a fallback. + + +[bound-keypair-static]: ../../reference/machine-workload-identity/bound-keypair/static-keys.mdx diff --git a/docs/pages/machine-workload-identity/deployment/gcp.mdx b/docs/pages/machine-workload-identity/deployment/gcp.mdx new file mode 100644 index 0000000000000..08be157e3879b --- /dev/null +++ b/docs/pages/machine-workload-identity/deployment/gcp.mdx @@ -0,0 +1,139 @@ +--- +title: Deploying tbot on GCP +description: How to install and configure the Machine & Workload Identity agent, `tbot`, on a GCP VM +sidebar_label: GCP VM +tags: + - how-to + - mwi + - infrastructure-identity + - google-cloud +--- + +This guide explains how to deploy the Machine & Workload Identity agent, `tbot`, +on a Google Cloud Platform GCE instance and connect it to your Teleport cluster. + +## How it works + +On GCP, virtual machines can be assigned a service account. These machines can +then request a signed JSON web token from GCP, which allows third parties to +verify information about them, including their service accounts, using the GCP +public key. 
The Teleport `gcp` join method instructs `tbot` to use this service +account JWT to prove its identity to the Teleport Auth Service and join your +Teleport cluster without using long-lived secrets. + +Whilst the guide on this page focuses explicitly on deploying `tbot` on a GCP +Virtual Machine, it is also possible to use the `gcp` join method with workloads +running on Google Kubernetes Engine. To do so, you must configure +[GCP Workload +Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) +for the cluster and the Kubernetes service account that will be used by the +`tbot` pod. See the [Kubernetes platform guide](kubernetes.mdx) for further +guidance on deploying `tbot` as a workload on Kubernetes. + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx!) + +- (!docs/pages/includes/tctl.mdx!) +- A GCP service account you wish to grant access to your Teleport cluster that + is not the GCP compute default service account. +- A GCP Compute Engine VM that you wish to install `tbot` onto that has been + configured with the GCP service account. + +## Step 1/5. Install `tbot` + +**This step is completed on the GCP VM.** + +First, `tbot` needs to be installed on the VM that you wish to use Machine & +Workload Identity on. + +Download and install the appropriate Teleport package for your platform: + +(!docs/pages/includes/install-linux.mdx!) + +## Step 2/5. Create a Bot + +**This step is completed on your local machine.** + +(!docs/pages/includes/machine-id/create-a-bot.mdx!) + +## Step 3/5. Create a join token + +**This step is completed on your local machine.** + +Create `bot-token.yaml`: + +```yaml +kind: token +version: v2 +metadata: + # name will be specified in the `tbot` to use this token + name: example-bot +spec: + roles: [Bot] + # bot_name should match the name of the bot created earlier in this guide. 
+ bot_name: example + join_method: gcp + gcp: + # allow specifies the rules by which the Auth Service determines if `tbot` + # should be allowed to join. + allow: + - project_ids: + - my-project-123456 + service_accounts: + # This should be the full "name" of a GCP service account. The default + # compute service account is not supported. + - my-service-account@my-project-123456.iam.gserviceaccount.com +``` + +Replace: + +- `my-project-123456` with the ID of your GCP project. +- `example` with the name of the bot you created in the second step. +- `my-service-account@my-project-123456.iam.gserviceaccount.com` with the email + of the service account you configured in the prerequisites. The default + compute service account is not supported. + +Use `tctl` to apply this file: + +```code +$ tctl create -f bot-token.yaml +``` + +## Step 4/5. Configure `tbot` + +**This step is completed on the GCP VM.** + +Create `/etc/tbot.yaml`: + +```yaml +version: v2 +proxy_server: example.teleport.sh:443 +onboarding: + join_method: gcp + token: example-bot +storage: + type: memory +# services will be filled in during the completion of an access guide. +services: [] +``` + +Replace: + +- `example.teleport.sh:443` with the address of your Teleport Proxy or + Auth Service. Prefer using the address of a Teleport Proxy. +- `example-bot` with the name of the token you created in the third step. + +(!docs/pages/includes/machine-id/daemon-or-oneshot.mdx!) + +## Step 5/5. Configure services + +(!docs/pages/includes/machine-id/configure-services.mdx!) + +## Next steps + +- Follow the [access guides](../access-guides/access-guides.mdx) to finish configuring `tbot` for + your environment. +- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore + all the available configuration options.
+- [More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../../reference/machine-workload-identity/telemetry.mdx) diff --git a/docs/pages/machine-workload-identity/deployment/github-actions.mdx b/docs/pages/machine-workload-identity/deployment/github-actions.mdx new file mode 100644 index 0000000000000..5e2503d44ce2a --- /dev/null +++ b/docs/pages/machine-workload-identity/deployment/github-actions.mdx @@ -0,0 +1,462 @@ +--- +title: Deploying tbot on GitHub Actions +description: How to install and configure the Machine & Workload Identity agent, `tbot`, on GitHub Actions +sidebar_label: GitHub Actions +tags: + - ci-cd + - how-to + - mwi + - infrastructure-identity +--- + +{/* lint disable page-structure remark-lint */} + +GitHub Actions is a popular CI/CD platform that works as a part of the larger +GitHub ecosystem. Teleport Machine & Workload Identity allows GitHub Actions to +securely interact with Teleport-protected resources without the need for +long-lived credentials. + +Teleport supports secure joining on both GitHub-hosted and self-hosted GitHub +Actions runners as well as GitHub Enterprise Server. + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx!) + +- (!docs/pages/includes/tctl.mdx!) +- Your user should have the privileges to create token resources. +- A GitHub repository with GitHub Actions enabled. This guide uses the example + `gravitational/example` repo; replace this value with your own repository. + +## Step 1/3. Create a Bot + +(!docs/pages/includes/machine-id/create-a-bot.mdx!) + +## Step 2/3. Create a join token for GitHub Actions + +To allow your GitHub Actions workflow to authenticate with your +Teleport cluster, you'll first need to create a join token. These tokens set out +criteria by which the Auth Service decides whether to allow a bot or node +to join. + +In this example, you will create a join token that grants access to any +GitHub Actions run within a specific GitHub repository.
In production, you may +wish to further restrict these rules to ensure that access can only occur +when CI is running against a specific branch. You can find a full list of the +available rules on the +[join token reference page.](../../reference/deployment/join-methods.mdx#github-actions-github) + +Create a file named `bot-token.yaml`: + +```yaml +kind: token +version: v2 +metadata: + name: example-bot +spec: + # The Bot role indicates that this token grants access to a bot user, rather + # than allowing a node to join. This role is built in to Teleport. + roles: [Bot] + join_method: github + # The bot_name indicates which bot user this token grants access to. This + # should match the name of the bot that you created in the previous step. + bot_name: example + github: + # allow specifies rules that control which GitHub Actions runs will be + # granted access. Those not matching any allow rule will be denied. + allow: + # repository should include the name of the owner of the repository. + - repository: gravitational/example +``` + +Replace `gravitational/example` with the name of the repository that `tbot` +will run within. You may also choose to change the name of the bot and token +to more accurately describe your use-case. + + +**Enterprise Server** + +If you are using self-hosted Teleport Enterprise you are able to permit +workflows within GitHub Enterprise Server instances to authenticate using the +GitHub join method. + +The Teleport Auth Service must be able to connect to the GitHub Enterprise +Server. + +To configure this, set `spec.github.enterprise_server_host` to the hostname of +the GHES instance. + +For example: +```yaml +spec: + github: + enterprise_server_host: ghes.example.com +``` + +**Enterprise Cloud** + +If you have enabled `include_enterprise_slug` in your GitHub Enterprise +Cloud configuration, you will need to set `spec.github.enterprise_slug` to +the slug of your GitHub Enterprise organization. 
+ +For example: +```yaml +spec: + github: + enterprise_slug: my-enterprise +``` + +Read more about `include_enterprise_slug` on the GitHub guide to +[customizing the issuer value for an enterprise](https://docs.github.com/en/enterprise-cloud@latest/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect#customizing-the-issuer-value-for-an-enterprise). + +Once the resource file has been written, create the token with `tctl`: + +```code +$ tctl create -f bot-token.yaml +``` + +Check that token `example-bot` has been created with the following +command: + +```code +$ tctl tokens ls +Token Type Labels Expiry Time (UTC) +----------- ---- ------ ---------------------------------------------- +example-bot Bot 01 Jan 00 00:00 UTC (2562047h47m16.854775807s) +``` + +## Step 3/3. Configure a GitHub Actions Workflow + +Now that the bot has been successfully created, you need to configure your +GitHub Actions workflow to authenticate as this bot and then use the +credentials produced by `tbot`. To help with this, Teleport publishes +several easy-to-use GitHub Actions that can be used within your workflow. + +It is also possible to manually configure `tbot` rather than using one of the +Teleport GitHub Actions. This involves more configuration but allows for +precise control of `tbot` and enables implementations that are not possible +with the actions. + +The following examples demonstrate two of the available GitHub Actions and +show how to manually configure `tbot` for use with GitHub Actions. + +### Example: `teleport-actions/auth` + +The `teleport-actions/auth` action generates a versatile identity output that +can be used for SSH and for administrative actions against a Teleport cluster. +This action sets environment variables that automatically configure `tsh` and +`tctl` to use this identity.
+
+This example shows using the credentials to:
+
+- List the SSH nodes available using `tsh`
+- List the SSH nodes available using `tctl`
+- Connect to an SSH node using `tsh`
+- Connect to an SSH node using OpenSSH's `ssh`
+
+First, you'll need to adjust the role you assigned to the bot to grant it access
+to SSH. This example grants access to the `root` user on all nodes. In a
+production setup, it would be a good idea to restrict this to only the nodes
+that the bot needs.
+
+Use `tctl edit role/example-bot` to add the following to the role:
+
+```yaml
+spec:
+  allow:
+    # Allow login as the Linux user 'root'.
+    logins: ['root']
+    # Allow connection to any node. Adjust these labels to match only nodes
+    # that the bot needs to access.
+    node_labels:
+      '*': '*'
+```
+
+With those privileges granted, you can now create the GitHub Actions workflow.
+Create `.github/workflows/example.yaml`:
+
+```yaml
+# This is a basic workflow to help you get started.
+# It will take the following action whenever a push is made to the "main" branch.
+on:
+  push:
+    branches:
+      - main
+jobs:
+  demo:
+    permissions:
+      # The "id-token: write" permission is required or tbot will not be able to
+      # authenticate with the cluster.
+      id-token: write
+      contents: read
+    # The name of the workflow, and the Linux distro to be used to perform the
+    # required steps.
+    name: example
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout repository
+        uses: actions/checkout@v3
+      - name: Fetch Teleport binaries
+        uses: teleport-actions/setup@v1
+        with:
+          version: auto
+          # Replace with the address of your Teleport Proxy Service.
+          proxy: example.teleport.sh:443
+      - name: Fetch credentials using Machine & Workload Identity
+        id: auth
+        uses: teleport-actions/auth@v2
+        with:
+          # Replace with the address of your Teleport Proxy Service.
+          proxy: example.teleport.sh:443
+          # Use the name of the join token resource you created in step 1.
+          token: example-bot
+          # Specify the length of time that the generated credentials should be
+          # valid for. This is optional and defaults to "1h".
+          certificate-ttl: 1h
+          # Enable the submission of anonymous usage telemetry. This
+          # helps us shape the future development of `tbot`. You can disable this
+          # by omitting this.
+          anonymous-telemetry: 1
+      - name: List nodes (tsh)
+        # Runs a command against the cluster, in this case "tsh ls", using
+        # Machine ID credentials to list remote SSH nodes.
+        run: tsh ls
+      - name: List nodes (tctl)
+        run: tctl nodes ls
+      - name: Run hostname via SSH (tsh)
+        # Ensure that `root` matches the username of a user on the remote SSH
+        # node, and that `example-node` matches the name of an SSH node that is
+        # part of the Teleport cluster configured for access.
+        run: tsh ssh root@example-node hostname
+      - name: Run hostname via SSH (OpenSSH)
+        run: ssh -F ${{ steps.auth.outputs.ssh-config }} root@example-node.example.teleport.sh hostname
+```
+
+Replace:
+
+- `example.teleport.sh:443` with the address of your Teleport Proxy or cloud
+  tenant.
+- `example-bot` with the name of the token you created in a previous step.
+- `example-node` with the name of a Teleport SSH node that you wish to connect
+  to.
+- `root` with the name of a user on the node that you are connecting to and that
+  you have granted the bot access to.
+
+Add, commit, and push your changes to the `main` branch of the repository.
+
+Navigate to the **Actions** tab of your GitHub repository in your web browser.
+Select the **Workflow** that has now been created and triggered by the change,
+and select the `example` job. The GitHub Actions workflow may take some time
+to complete, and will resemble the following once successful.
+
+![GitHub Actions](../../../img/machine-id/github-actions.png)
+
+Expand the **List nodes** step of the action, and the output will
+list all nodes in the cluster, from the perspective of the
+Machine & Workload Identity bot using the command `tsh ls`.
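The `ssh-config` output of the `auth` step can be reused by any later step in the same job with other OpenSSH-based tooling. As a sketch under the same assumptions as the workflow above (step id `auth`, node `example-node`, user `root`; the local file path is a hypothetical example), a step that copies a file to the node with `scp` could look like:

```yaml
      - name: Copy a file via SCP (OpenSSH)
        # Reuses the OpenSSH config written by the teleport-actions/auth step
        # with id "auth". The local file path here is a hypothetical example.
        run: scp -F ${{ steps.auth.outputs.ssh-config }} ./artifact.tar.gz root@example-node.example.teleport.sh:/tmp/artifact.tar.gz
```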
+
+### Example: `teleport-actions/auth-k8s`
+
+The `teleport-actions/auth-k8s` action generates a Kubernetes output that
+contains the credentials and configuration needed by a Kubernetes client to
+connect to a Kubernetes cluster enrolled in Teleport. The action sets the
+necessary environment variables to automatically configure these clients.
+
+In this example, the `teleport-actions/auth-k8s` action will be used to list
+all the pods contained within the cluster, but this could just as easily be
+modified to deploy to a Kubernetes cluster with `kubectl` or `helm`.
+
+First, you'll need to adjust the role you assigned to the bot to grant it access
+to the Kubernetes cluster. This example will grant the bot access to all
+clusters with the group `editor`. For more detailed instructions on setting
+up Kubernetes RBAC, see the Kubernetes access guide.
+
+Use `tctl edit role/example-bot` to add the following rule to the Teleport role:
+
+```yaml
+spec:
+  allow:
+    kubernetes_labels:
+      '*': '*'
+    kubernetes_resources:
+      - kind: pod
+        namespace: "*"
+        name: "*"
+    kubernetes_groups:
+      - editor
+```
+
+
+This example assumes the role is version `v6`. If you are using a `v7`+ role,
+you will need to include `verbs: ["get", "list"]` in the `kind: pod` entry of
+`kubernetes_resources`. Otherwise, the example `kubectl get pods -A` command
+will be denied.
+
+
+With those privileges granted, you can now create the GitHub Actions workflow.
+Create `.github/workflows/example.yaml`:
+
+```yaml
+# This is a basic workflow to help you get started; modify it for your needs.
+on:
+  push:
+    branches:
+      - main
+jobs:
+  demo:
+    permissions:
+      # The "id-token: write" permission is required or tbot will not be able to
+      # authenticate with the cluster.
+ id-token: write + contents: read + name: example + runs-on: ubuntu-latest + steps: + - name: Checkout repository + uses: actions/checkout@v3 + - name: Fetch kubectl + uses: azure/setup-kubectl@v3 + - name: Fetch Teleport binaries + uses: teleport-actions/setup@v1 + with: + version: auto + # Replace with the address of your Teleport Proxy Service. + proxy: example.teleport.sh:443 + - name: Fetch credentials using Machine ID + uses: teleport-actions/auth-k8s@v2 + with: + # Replace with the address of your Teleport Proxy Service. + proxy: example.teleport.sh:443 + # Use the name of the join token resource you created in step 1. + token: example-bot + # Use the name of your Kubernetes cluster + kubernetes-cluster: my-kubernetes-cluster + # Enable the submission of anonymous usage telemetry. This helps us + # shape the future development of `tbot`. You can disable this by + # omitting this. + anonymous-telemetry: 1 + - name: List pods + run: kubectl get pods -A +``` + +Replace: + +- `example.teleport.sh:443` with the address of your Teleport Proxy Service. +- `example-bot` with the name of the token you created in a previous step. +- `my-kubernetes-cluster` with the name of your Kubernetes cluster. + +The `auth-k8s` action sets the `KUBECONFIG` for future steps to the credentials +it has fetched from Teleport. This means that most existing tooling for +Kubernetes (e.g `kubectl` and `helm`) can use your cluster with no additional +configuration. + +Add, commit, and push this new workflow file to the default branch of your +repository. + +Navigate to the **Actions** tab of your GitHub repository in your web browser. +Select the **Workflow** that has now been created and triggered by the change, +and select the `example` job. + +Expand the **List pods** step of the action, where you can then confirm that the +output shows a list of all the pods within your Kubernetes cluster. + +### Example: Manual configuration + +To configure `tbot` manually, a YAML file will be used. 
In this example we'll +commit this to the repository, but this could be generated or created by the +CI pipeline itself. + +Create `tbot.yaml` within your repository: + +```yaml +version: v2 +proxy_server: example.teleport.sh:443 +onboarding: + join_method: github + token: example-bot +oneshot: true +storage: + type: memory +# services will be filled in during the completion of an access guide. +services: [] +``` + +Replace: + +- `example.teleport.sh:443` with the address of your Teleport Proxy Service. +- `example-bot` with the name of the token you created in the first step. + +Now you can define a GitHub Actions workflow that will start `tbot` with this +configuration. + +Create `.github/workflows/example-action.yaml`: + +```yaml +# This is a basic workflow to help you get started. +# It will take the following action whenever a push is made to the "main" branch. +on: + push: + branches: + - main +jobs: + demo: + permissions: + # The "id-token: write" permission is required or tbot will not be able to + # authenticate with the cluster. + id-token: write + contents: read + # The name of the workflow, and the Linux distro to be used to perform the + # required steps. + name: guide-demo + runs-on: ubuntu-latest + steps: + - name: Checkout repository + uses: actions/checkout@v3 + - name: Fetch Teleport binaries + uses: teleport-actions/setup@v1 + with: + version: auto + # Replace with the address of your Teleport Proxy Service. + proxy: example.teleport.sh:443 + - name: Execute Machine ID + env: + # TELEPORT_ANONYMOUS_TELEMETRY enables the submission of anonymous + # usage telemetry. This helps us shape the future development of + # tbot. You can disable this by omitting this. + TELEPORT_ANONYMOUS_TELEMETRY: 1 + run: tbot start -c ./tbot.yaml --oneshot +``` + +Add, commit, and push these two files to the repository. Check the GitHub +Actions UI to ensure that the workflow has succeeded. + +(!docs/pages/includes/machine-id/configure-services.mdx!) 
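The `services` list in the `tbot.yaml` shown above is intentionally empty and is filled in as you follow an access guide. As a minimal sketch, mirroring the identity service configuration used elsewhere in these guides (the output path `./tbot-out` is an assumption for illustration), an SSH-capable identity output could look like:

```yaml
services:
  # Write a general-purpose identity, usable with tsh, tctl, and OpenSSH,
  # to a directory. The path here is a hypothetical example.
  - type: identity
    destination:
      type: directory
      path: ./tbot-out
```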
+
+## A note on security implications and risk
+
+Once `teleport-actions/auth` has been used in a workflow job, all subsequent
+steps in that job will have access to the credentials that grant access to your
+Teleport cluster as the bot. Where possible, run as few steps as necessary after
+this action has been used. It may be a good idea to break your workflow up into
+multiple jobs in order to segregate these credentials from other code running in
+your CI/CD pipeline.
+
+Most importantly, ensure that the role you assign to your GitHub Actions bot has
+access to only the resources in your Teleport cluster that your CI/CD pipeline
+needs to interact with.
+
+## Next steps
+
+- Check out the GitHub Actions for more usage information:
+  - [teleport-actions/setup](https://github.com/teleport-actions/setup)
+  - [teleport-actions/auth](https://github.com/teleport-actions/auth)
+  - [teleport-actions/auth-k8s](https://github.com/teleport-actions/auth-k8s)
+  - [teleport-actions/auth-application](https://github.com/teleport-actions/auth-application)
+- For more information about the `github` join method, read the
+  [join token reference page](../../reference/deployment/join-methods.mdx#github-actions-github).
+- To find out more about GitHub Actions itself, read
+  [their documentation](https://docs.github.com/en/actions).
+- [More information about `anonymous-telemetry`.](../../reference/machine-workload-identity/telemetry.mdx) + diff --git a/docs/pages/machine-workload-identity/deployment/gitlab.mdx b/docs/pages/machine-workload-identity/deployment/gitlab.mdx new file mode 100644 index 0000000000000..fa7d32546fb96 --- /dev/null +++ b/docs/pages/machine-workload-identity/deployment/gitlab.mdx @@ -0,0 +1,187 @@ +--- +title: Deploying tbot on GitLab CI +description: How to install and configure the Machine & Workload Identity agent, `tbot`, on GitLab CI +sidebar_label: GitLab CI +tags: + - ci-cd + - how-to + - mwi + - infrastructure-identity +--- + +{/* lint disable page-structure remark-lint */} + +In this guide, you will use Teleport Machine & Workload Identity to allow a +GitLab pipeline to securely connect to a Teleport SSH node without the need for +long-lived secrets. + +Machine & Workload Identity for GitLab works with GitLab's cloud-hosted option +and with self-hosted GitLab installations. **The minimum supported GitLab +version is 15.7**. + +This mitigates the risk of long-lived secrets such as passwords or SSH private +keys being exfiltrated from your GitLab organization and provides many of +the other benefits of Teleport such as auditing and finely-grained access +control. + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx!) + +- (!docs/pages/includes/tctl.mdx!) +- A GitLab project to connect to Teleport. This can either be on GitLab's +cloud-hosted offering (gitlab.com) or on a self-hosted GitLab instance. **When +using a self-hosted GitLab instance, your Teleport Auth Service must be able to +connect to your GitLab instance and your GitLab instance must be configured with +a valid TLS certificate.** + +## Step 1/4. Create a Bot + +(!docs/pages/includes/machine-id/create-a-bot.mdx!) + +## Step 2/4. Create a join token + +To allow GitLab CI to authenticate to your Teleport cluster, you'll first need +to create a join token. 
A GitLab join token contains allow rules that describe +which pipelines can use that token in order to join the Teleport cluster. A rule +can contain multiple fields, and any pipeline that matches all the fields +within a single rule is granted access. + +In this example, you will create a token with a rule that grants access to any +GitLab CI job within a specific GitLab project. Determine the fully qualified +path of your GitLab project. This will include your username (or group) and the +name of your project, e.g `my-user/my-project`. + +Create a file named `bot-token.yaml`. Ensure you substitute any values as +suggested by the comments in this example: + +```yaml +kind: token +version: v2 +metadata: + name: example-bot +spec: + # The Bot role indicates that this token grants access to a bot user, rather + # than allowing a node to join. This role is built in to Teleport. + roles: [Bot] + join_method: gitlab + # The bot_name indicates which bot user this token grants access to. This + # should match the name of the bot that you created in step 1. + bot_name: example + gitlab: + # domain should be the domain of your GitLab instance. If you are using + # GitLab's cloud hosted offering, omit this field entirely. + domain: gitlab.example.com + # allow specifies rules that control which GitLab tokens will be accepted + # by Teleport. Tokens not matching any allow rule will be denied. + allow: + # project_path should be the fully qualified path of your GitLab + # project that you determined earlier. This will grant access to any + # GitLab CI run in that project. + - project_path: my-user/my-project +``` + +You can find a full list of the token configuration options for GitLab joining +on the +[join tokens reference page.](../../reference/deployment/join-methods.mdx#gitlab-gitlab) + +Apply this to your Teleport cluster using `tctl`: + +```code +$ tctl create -f bot-token.yaml +``` + +## Step 3/4. 
Configure a GitLab Pipeline
+
+With the bot and join token created, you can now configure a GitLab pipeline
+that sets up `tbot` to use them.
+
+To configure `tbot`, a YAML file is used. In this example we'll store this file
+within the repository itself, but it could also be generated or created by the
+CI pipeline itself.
+
+Create `tbot.yaml` within your repository:
+
+```yaml
+version: v2
+proxy_server: example.teleport.sh:443
+onboarding:
+  join_method: gitlab
+  token: example-bot
+oneshot: true
+storage:
+  type: memory
+# services will be filled in during the completion of an access guide.
+services: []
+```
+
+Replace:
+
+- `example.teleport.sh:443` with the address of your Teleport Proxy or
+  Auth Service. Prefer using the address of a Teleport Proxy.
+- `example-bot` with the name of the token you created in the second step.
+
+Now, the GitLab CI pipeline can be defined. Before the pipeline can use `tbot`,
+it must be available within the environment. For this example, we'll show
+downloading `tbot` as part of the CI step, but in a production implementation
+you may wish to build a Docker image that contains this binary to avoid
+depending on the Teleport CDN.
+
+Create `.gitlab-ci.yml` within your repository:
+
+```yaml
+stages:
+  - deploy
+
+deploy-job:
+  stage: deploy
+  # id_tokens configures ID Tokens that GitLab will automatically inject into
+  # the environment of your GitLab run.
+  #
+  # See https://docs.gitlab.com/ee/ci/secrets/id_token_authentication.html
+  # for further explanation of the id_tokens configuration in GitLab.
+  id_tokens:
+    TBOT_GITLAB_JWT:
+      # aud for TBOT_GITLAB_JWT must be configured with the name of your
+      # Teleport cluster. This is not necessarily the address of your Teleport
+      # cluster and will not include a port or scheme (http/https).
+      #
+      # This helps the Teleport Auth Service know that the token is intended for
+      # it, and not a different service or Teleport cluster.
+      aud: example.teleport.sh
+  script:
+    - curl "https://example.teleport.sh:443/scripts/install.sh" | bash
+    - 'TELEPORT_ANONYMOUS_TELEMETRY=1 tbot start -c tbot.yaml'
+```
+
+Replace:
+
+- `example.teleport.sh` for `aud` with the name of your Teleport cluster. This
+  is not necessarily the address of your Teleport cluster and will not include
+  a port or scheme (e.g. http/https).
+- `example.teleport.sh:443` with the address of your Teleport Proxy Service.
+
+`TELEPORT_ANONYMOUS_TELEMETRY` enables the submission of anonymous usage
+telemetry. This helps us shape the future development of `tbot`. You can disable
+this by omitting this.
+
+Commit and push these two files to the repository.
+
+Check your GitLab CI status, and examine the logs from the commit for any
+failures.
+
+## Step 4/4. Configure services
+
+(!docs/pages/includes/machine-id/configure-services.mdx!)
+
+## Further steps
+
+- For more information about GitLab joining, read the
+  [join method reference page.](../../reference/deployment/join-methods.mdx#gitlab-gitlab)
+- For more information about GitLab itself, read
+  [their documentation](https://docs.gitlab.com/ee/ci/).
+- Follow the [access guides](../access-guides/access-guides.mdx) to finish configuring `tbot` for
+  your environment.
+- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore
+  all the available configuration options.
+- [More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../../reference/machine-workload-identity/telemetry.mdx) diff --git a/docs/pages/machine-workload-identity/deployment/jenkins.mdx b/docs/pages/machine-workload-identity/deployment/jenkins.mdx new file mode 100644 index 0000000000000..618bf06d740f3 --- /dev/null +++ b/docs/pages/machine-workload-identity/deployment/jenkins.mdx @@ -0,0 +1,187 @@ +--- +title: Deploying tbot on Jenkins +description: How to install and configure the Machine & Workload Identity agent, `tbot`, on Jenkins +sidebar_label: Jenkins +tags: + - ci-cd + - how-to + - mwi + - infrastructure-identity +--- + +{/* lint disable page-structure remark-lint */} + +Jenkins is an open source automation server that is frequently used to build +Continuous Integration and Continuous Delivery (CI/CD) pipelines. + +In this guide, we will demonstrate how to migrate existing Jenkins pipelines to +utilize Machine & Workload Identity to connect to infrastructure protected by +Teleport. + +## Prerequisites + +You will need the following tools to use Teleport with Jenkins. + +(!docs/pages/includes/edition-prereqs-tabs.mdx!) + +- `ssh` OpenSSH tool +- Jenkins +- (!docs/pages/includes/tctl.mdx!) + +## Architecture + +Before we begin, it should be noted that Jenkins is a tool that is notoriously +difficult to secure. Machine & Workload Identity is one part of securing your +infrastructure, but it alone is not sufficient. Below we will provide some basic +guidance which can help improve the security posture of your Jenkins +installation. + +### Single-host deployments + +The simplest Jenkins deployments have the +controller (process that stores configuration, plugins, UI) and agents (process +that executes tasks) run on the same host. This deployment model is simple to +get started with, however any compromise of the `jenkins` user within a single +pipeline can lead to the compromise of your entire CI/CD infrastructure. 
+
+### Multihost deployments
+
+A slightly more complex, but more secure, deployment runs your Jenkins
+controllers and agents on different hosts and pins workloads to specific
+agents. This is an improvement over the simple deployment because you can limit
+the blast radius of the compromise of a single pipeline to a subset of your
+CI/CD infrastructure instead of all of your infrastructure.
+
+### Best practices
+
+We strongly encourage the second deployment model, with ephemeral hosts and
+IAM joining, whenever possible. When using Machine & Workload Identity with
+this model, create and run separate bots per host and pin particular pipelines
+to a worker. This will allow you to give each pipeline the minimal scope for
+server access, reduce the blast radius if one pipeline is compromised, and
+allow you to remotely audit and lock pipelines if you detect malicious
+behavior.
+
+![Jenkins Deployments](../../../img/machine-id/jenkins.png)
+
+## Step 1/2. Configure and start `tbot`
+
+First, determine whether you would like to create a new role or use an existing
+role for your Jenkins workflow. You can run `tctl get roles` to examine your
+existing roles.
+
+Create a file called `api-workers.yaml` with the following content to create a
+new role called `api-workers` that will allow you to log in to Nodes with the
+label `group: api` as the Linux user `jenkins`.
+
+```yaml
+kind: "role"
+version: "v3"
+metadata:
+  name: "api-workers"
+spec:
+  allow:
+    logins: ["jenkins"]
+    node_labels:
+      "group": "api"
+```
+
+
+
+On your client machine, log in to Teleport using `tsh` before using `tctl`.
+
+```code
+$ tctl create -f api-workers.yaml
+$ tctl bots add jenkins --roles=api-workers
+```
+
+
+Connect to the Teleport Auth Service and use `tctl` to examine what roles exist on
+your system.
+ +```code +$ tctl create -f api-workers.yaml +$ tctl bots add jenkins --roles=api-workers +``` + + + + +The Machine & Workload Identity agent, `tbot`, allows you to use Linux Access +Control Lists (ACLs) to control access to certificates on disk. You will use +this to limit the access Jenkins has to the short-lived certificates `tbot` +issues. + +In the example that follows, you will create a Linux user called `teleport` to +run `tbot` but short-lived certificates will be written to disk as the +Linux user `jenkins`. + +```code +$ sudo adduser \ + --disabled-password \ + --no-create-home \ + --shell=/bin/false \ + --gecos "" \ + teleport +``` + +Create and initialize the directories you will need using the `tbot init` +command. + +```code +$ sudo tbot init \ + --destination-dir=/opt/machine-id \ + --bot-user=teleport \ + --owner=teleport:teleport \ + --reader-user=jenkins +``` + +(!docs/pages/includes/machine-id/machine-id-init-bot-data.mdx!) + +Next, you need to start `tbot` in the background of each Jenkins worker. + +First create a configuration file for `tbot` at `/etc/tbot.yaml`. + +```yaml +version: v2 +# Replace "example.teleport.sh:443" with the address of your Teleport Proxy or +# Teleport Cloud tenant. +proxy_server: "example.teleport.sh:443" +onboarding: + join_method: "token" + # Replace the token field with the name of the token that was output when you + # ran `tctl bots add`. + token: "00000000000000000000000000000000" +storage: + type: directory + path: /var/lib/teleport/bot +services: + - type: identity + destination: + type: directory + path: /opt/machine-id +``` + +### Create a `tbot` systemd unit file + +(!docs/pages/includes/machine-id/daemon.mdx!) + +## Step 2/2. Update and run Jenkins pipelines + +Using the credentials produced by `tbot` within a Jenkins pipeline is now a +one-line change. For example, if you want to run the `hostname` command on a +remote host, add the following to your Jenkins pipeline. 
+
+```
+steps {
+  sh "ssh -F /opt/machine-id/ssh_config root@node-name.example.com hostname"
+}
+```
+
+You are all set. You have provided Jenkins with short-lived certificates tied
+to a machine identity that can be rotated, audited, and controlled with access
+controls.
+
+## Next steps
+
+[More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../../reference/machine-workload-identity/telemetry.mdx)
+
diff --git a/docs/pages/machine-workload-identity/deployment/kubernetes-oidc.mdx b/docs/pages/machine-workload-identity/deployment/kubernetes-oidc.mdx
new file mode 100644
index 0000000000000..2448d91ac3644
--- /dev/null
+++ b/docs/pages/machine-workload-identity/deployment/kubernetes-oidc.mdx
@@ -0,0 +1,309 @@
+---
+title: Deploying tbot on Kubernetes with OIDC
+description: How to install and configure the Machine & Workload Identity agent, `tbot`, on Kubernetes with OIDC
+sidebar_label: Kubernetes (OIDC)
+tags:
+  - how-to
+  - mwi
+  - infrastructure-identity
+---
+
+This guide shows you how to deploy the Machine & Workload Identity agent,
+`tbot`, on a Kubernetes cluster that supports OIDC. The Kubernetes OIDC join
+method is a good fit for use inside Kubernetes clusters on a cloud platform that
+can be configured to provide a public OpenID Connect (OIDC) endpoint, including
+EKS, AKS, and GKE.
+
+## How it works
+
+In the setup we demonstrate in this guide, `tbot` runs as a Kubernetes
+deployment. It writes output credentials to a Kubernetes secret, which can then
+be mounted in the pods that need to use the credentials. While `tbot` can also
+run as a sidecar within the same pod as the service that needs to use the
+credentials it generates, we recommend running `tbot` as a standalone deployment
+due to the limited support Kubernetes has for sidecars.
+ +In this guide, we demonstrate the `kubernetes` join method using its OIDC +support, in which `tbot` proves its identity to the Teleport Auth Service by +presenting a JSON web token (JWT) signed by the platform OIDC issuer. This JWT +is projected into pods by Kubernetes, and identifies the service account, the +pod, and the namespace in which `tbot` is running. The Teleport Auth Service +checks the signature of the JWT against the Kubernetes OIDC provider's published +signing keys to verify the pod identity. + + +Not all Kubernetes providers support [Service Account Issuer Discovery][said], +which is required for this guide. If your provider does not support this +feature, refer to our [Kubernetes with Static JWKS guide](./kubernetes.mdx) +instead. + + +
+Using another join method + +When deploying `tbot` to a Teleport cluster, it is generally recommended to use +the `kubernetes` join method. This will work with most Kubernetes clusters. +The guide that follows will demonstrate configuring this join method. + +However, when using certain cloud Kubernetes services, it is possible to use the +join method associated with that platform rather than the `kubernetes` join +method. This may be beneficial if you wish to manage the joining of `tbot` +within the Kubernetes clusters and on standard VMs on the same platform with +a single join token. These services are: + +- Google Kubernetes Engine: Where + [GCP Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) + is configured for the cluster, it is possible to use the `gcp` join method. + See the [GCP Platform Guide](./gcp.mdx) for further information. +- Amazon Elastic Kubernetes Service: Where + [IAM Roles for Service Accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) + is configured for the cluster, it is possible to use the `iam` join method. + See the [AWS Platform Guide](./aws.mdx) for further information. + +It is recommended to use Kubernetes OIDC joining on Azure (AKS) whenever +possible. + +
+ +## Prerequisites + +- A running Teleport cluster, version 18.1.5 or above. +- The `tsh` and `tctl` clients, version 18.1.5 or above. +- (!docs/pages/includes/tctl.mdx!) +- A Kubernetes cluster with support for Token Request Projection (which + graduated to a generally available feature in Kubernetes 1.20). +- `kubectl` authenticated with the ability to create resources in the cluster + you wish to deploy `tbot` into. +- A Kubernetes platform that supports [Service Account Issuer Discovery][said], + which became generally available in Kubernetes 1.21 +- The `helm` CLI tool installed. + +The examples in this guide will install a `tbot` deployment in the `default` +Namespace of the Kubernetes cluster. Adjust references to `default` to the +Namespace you wish to use. + +## Step 1/5. Verify support for Service Account Issuer Discovery + +Before continuing, verify that your Kubernetes cluster can issue tokens that +Teleport will be able to verify. To do so, we'll want to perform the following +steps: +1. Fetch the cluster's OpenID configuration endpoint and extract the `issuer` + field +2. Attempt to fetch the same OpenID configuration endpoint from its public + address, derived from the `issuer` field value, and extract the public + `jwks_uri` +3. Attempt to fetch the `jwks_uri` to ensure Teleport will be able to do so when + your agents attempt to join + +To do so, ensure you have `kubectl`, `curl`, and `jq` available. Start by +running this command to determine the public OIDC configuration URL: +```code +$ kubectl get --raw /.well-known/openid-configuration | jq -r '.issuer + "/.well-known/openid-configuration"' +https://oidc.eks.us-west-2.amazonaws.com/id/cluster-id/.well-known/openid-configuration +``` + +If this command fails, the Service Account Issuer Discovery feature is not +enabled on your Kubernetes cluster. + +Next, we'll attempt to fetch the configuration document returned by the previous +command. 
Note that this should be run over the public internet, such as from your
+home internet connection, to help ensure the endpoint will be accessible to
+Teleport:
+```code
+$ curl https://oidc.eks.us-west-2.amazonaws.com/id/cluster-id/.well-known/openid-configuration | jq
+{
+  "issuer": "https://oidc.eks.us-west-2.amazonaws.com/id/cluster-id",
+  "jwks_uri": "https://oidc.eks.us-west-2.amazonaws.com/id/cluster-id/keys",
+  "authorization_endpoint": "urn:kubernetes:programmatic_authorization",
+  "response_types_supported": [
+    "id_token"
+  ],
+  "subject_types_supported": [
+    "public"
+  ],
+  "claims_supported": [
+    "sub",
+    "iss"
+  ],
+  "id_token_signing_alg_values_supported": [
+    "RS256"
+  ]
+}
+```
+
+If this command succeeds, make a note of the `issuer` value for future steps.
+Note that the particular `issuer` value will vary depending on your cloud
+provider, region, and individual cluster; the example here roughly matches an
+Amazon EKS cluster.
+
+As one final check, you can also attempt to fetch the `jwks_uri`:
+```code
+$ curl https://oidc.eks.us-west-2.amazonaws.com/id/cluster-id/keys
+{"keys":[{...snip...}]}
+```
+
+There's no need to record anything from this response; we just want to be
+certain that Teleport will be able to access this URL.
+
+
+If any of these commands fail, additional steps may be needed to ensure that
+public [Service Account Issuer Discovery][said] is enabled for your Kubernetes
+provider.
+
+Refer to your cloud provider's documentation to enable this feature, including:
+- [AWS](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html): Enabling OIDC for EKS clusters
+- [Azure](https://learn.microsoft.com/en-us/azure/aks/use-oidc-issuer): Enabling OIDC for AKS clusters
+- GKE clusters should work out of the box
+
+If your provider does not support this feature, consider using either [Static
+JWKS joining](./kubernetes.mdx) or an [alternative Teleport join
+method](../../reference/deployment/join-methods.mdx) better suited to your cloud provider.
+
+
+## Step 2/5. Create a Bot
+
+(!docs/pages/includes/machine-id/create-a-bot.mdx!)
+
+## Step 3/5. Create a join token
+
+Next, a join token needs to be configured. This will be used by `tbot` to join
+the cluster, and requires the issuer URL determined in Step 1.
+
+Create `bot-token.yaml`, ensuring you insert the `issuer` value in
+`spec.kubernetes.oidc.issuer`:
+
+```yaml
+kind: token
+version: v2
+metadata:
+  # name will be specified in the `tbot` configuration to use this token.
+  name: example-bot
+spec:
+  roles: [Bot]
+  # bot_name should match the name of the bot created earlier in this guide.
+  bot_name: example
+  join_method: kubernetes
+  kubernetes:
+    # oidc configures the Auth Service to validate the JWT presented by `tbot`
+    # using the public keys published by the configured OIDC issuer.
+    type: oidc
+    oidc:
+      # Insert the OIDC issuer value here; it will vary depending on your
+      # cluster and cloud provider.
+      issuer: https://oidc.eks.us-west-2.amazonaws.com/id/cluster-id
+    # allow specifies the rules by which the Auth Service determines if `tbot`
+    # should be allowed to join.
+    allow:
+      - service_account: "default:tbot" # namespace:service_account
+```
+
+Use `tctl` to apply this file:
+
+```code
+$ tctl create -f bot-token.yaml
+```
+
+## Step 4/5. Create a `tbot` deployment
+
+Now, you'll deploy `tbot` to your Kubernetes cluster using the Teleport `tbot`
+Helm chart.
This will be configured using values provided to the Helm CLI tool. + +First, create a file called `tbot-values.yaml` to hold the configuration values +for the Helm chart: + +```yaml +# Replace the cluster name with the name of your Teleport cluster. +# This is not necessarily the public address of your Teleport Proxy Service. +clusterName: "example.teleport.sh" +# Replace this with the address of your Teleport Proxy Service. +teleportProxyAddress: "example.teleport.sh:443" +# Ensure this matches the name of the join token you created earlier. +token: "example-bot" +``` + + + The default `tbot-distroless` image does not contain the FIPS-compliant + binaries. If you operate in an environment where FIPS compliance is required, + additionally set the `image: public.ecr.aws/gravitational/tbot-fips-distroless`. + + +Before you can deploy the Helm chart, if you have not previously deployed a +Teleport Helm chart, you'll need to add the Teleport chart repository to your +CLI: + +```code +$ helm repo add teleport (=teleport.helm_repo_url=) +$ helm repo update +``` + +You can now deploy the `tbot` Helm chart using the configuration you created +earlier, ensuring you specify the namespace you wish to deploy `tbot` into: + +```code +$ helm install tbot teleport/tbot \ + --namespace default \ + --values tbot-values.yaml +``` + +Use `kubectl` to verify that the deployment is healthy: + +```code +$ kubectl describe deployment/tbot +$ kubectl logs deployment/tbot +``` + +With this complete, `tbot` is now successfully deployed to your cluster. + +## Step 5/5. Configure services + +By default, the `tbot` Helm chart is configured to write an identity file to a +Kubernetes Secret called `tbot-out` in the namespace where `tbot` has been +deployed. + +This identity file can be mounted into other pods and used with `tsh` or `tctl` +to access and configure your Teleport cluster. 
For example: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: tsh + namespace: default +spec: + containers: + - name: tsh + image: public.ecr.aws/gravitational/teleport-distroless:(=teleport.version=) + command: + - tsh + args: + - -i + - /identity-output/identity + - --proxy + - example.teleport.sh:443 + - ls + volumeMounts: + - name: identity-output + mountPath: /identity-output + volumes: + - name: identity-output + secret: + secretName: tbot-out +``` + +If you wish to use `tbot` for a different kind of access, you can override the +type of service using the `services` value of the Helm chart and setting +`defaultOutput.enabled` to `false`. + +Follow one of the [access guides](../access-guides/access-guides.mdx) to find +out more about how to configure `tbot` for your use case. + +## Next steps + +- Explore the [Helm chart configuration reference](../../reference/helm-reference/tbot.mdx). +- Follow the [access guides](../access-guides/access-guides.mdx) to finish + configuring `tbot` for your environment. +- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore + all the available configuration options. 
+
+- [More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../../reference/machine-workload-identity/telemetry.mdx)
+
+[said]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery
diff --git a/docs/pages/machine-workload-identity/deployment/kubernetes.mdx b/docs/pages/machine-workload-identity/deployment/kubernetes.mdx
new file mode 100644
index 0000000000000..1b6c4f94165d9
--- /dev/null
+++ b/docs/pages/machine-workload-identity/deployment/kubernetes.mdx
@@ -0,0 +1,236 @@
+---
+title: Deploying tbot on Kubernetes
+description: How to install and configure the Machine & Workload Identity agent, `tbot`, on Kubernetes with Static JWKS Keys
+sidebar_label: Kubernetes (Static JWKS)
+tags:
+  - how-to
+  - mwi
+  - infrastructure-identity
+---
+
+This guide shows you how to deploy the Machine & Workload Identity agent,
+`tbot`, on a Kubernetes cluster and use a static reference to the cluster's
+Service Account issuer keys for authentication.
+
+## How it works
+
+In the setup we demonstrate in this guide, `tbot` runs as a Kubernetes
+deployment. It writes output credentials to a Kubernetes secret, which can then
+be mounted in the pods that need to use the credentials. While `tbot` can also
+run as a sidecar within the same pod as the service that needs to use the
+credentials it generates, we recommend running `tbot` as a standalone deployment
+due to the limited support Kubernetes has for sidecars.
+
+In this guide, we demonstrate the `kubernetes` join method, in which `tbot`
+proves its identity to the Teleport Auth Service by presenting a JSON web token
+(JWT) signed by the Kubernetes API server. This JWT identifies the
+service account, the pod, and the namespace in which `tbot` is running. The
+Teleport Auth Service checks the signature of the JWT against the Kubernetes
+cluster's public signing key.
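To make the claims concrete, the sketch below assembles a sample (unsigned) token payload and then decodes its middle segment, which is the same operation you would perform on a real projected service account token. The claim value is illustrative only and not taken from a real cluster:

```code
# Encode a sample payload like the one Kubernetes embeds in a service
# account JWT. The subject claim below is illustrative, not real.
$ PAYLOAD=$(printf '{"sub":"system:serviceaccount:default:tbot"}' | base64 | tr -d '\n')
# A JWT is three dot-separated segments; decode the middle (payload) one:
$ printf '%s' "header.$PAYLOAD.signature" | cut -d. -f2 | base64 -d
{"sub":"system:serviceaccount:default:tbot"}
```

Note that real tokens use unpadded base64url encoding, so you may need to restore padding before `base64 -d` will accept a real payload segment.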
+ + +Certain cloud providers like Amazon EKS regularly rotate their OIDC signing +keys, which will cause the `static_jwks` configuration you create in this guide +to become invalid after a short period of time. + +On Kubernetes providers with OIDC support, like Amazon's Elastic Kubernetes +Service (EKS), Google Kubernetes Engine (GKE), and Azure Kubernetes Service +(AKS), consider using [Kubernetes OIDC joining](./kubernetes-oidc.mdx) instead. + + +
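If you are unsure whether your provider rotates its signing keys, you can list the key IDs (`kid` values) in the JWKS on two different days and compare them. The JWKS document inlined below is made up for illustration; in practice you would pipe the JWKS fetched from your cluster into the same `jq` filter:

```code
$ printf '%s' '{"keys":[{"kid":"key-a","kty":"RSA"},{"kid":"key-b","kty":"RSA"}]}' \
    | jq -r '.keys[].kid'
key-a
key-b
```

If the set of `kid` values changes over time, your provider rotates keys, and a static JWKS configuration will break when it does.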
+
Using another join method

+When deploying `tbot` to a Kubernetes cluster, it is generally recommended to use
+the `kubernetes` join method. This will work with most Kubernetes clusters.
+The guide that follows will demonstrate configuring this join method.
+
+However, when using certain cloud Kubernetes services, it is possible to use the
+join method associated with that platform rather than the `kubernetes` join
+method. This may be beneficial if you wish to manage the joining of `tbot`
+within Kubernetes clusters and on standard VMs on the same platform with
+a single join token. These services are:
+
+- Google Kubernetes Engine: Where
+  [GCP Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity)
+  is configured for the cluster, it is possible to use the `gcp` join method.
+  See the [GCP Platform Guide](./gcp.mdx) for further information.
+- Amazon Elastic Kubernetes Service: Where
+  [IAM Roles for Service Accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html)
+  is configured for the cluster, it is possible to use the `iam` join method.
+  See the [AWS Platform Guide](./aws.mdx) for further information.
+
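As an illustration of the second case, a join token for the `iam` method might look like the following sketch. The account ID and role name are placeholders, and the exact allow rules you need are covered in the AWS Platform Guide:

```yaml
kind: token
version: v2
metadata:
  name: eks-irsa-bot
spec:
  roles: [Bot]
  bot_name: example
  join_method: iam
  allow:
    # Placeholder AWS account and IAM role; match these to the role that
    # IRSA grants to the tbot service account.
    - aws_account: "123456789012"
      aws_arn: "arn:aws:sts::123456789012:assumed-role/tbot-irsa/*"
```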
+
+## Prerequisites
+
+(!docs/pages/includes/edition-prereqs-tabs.mdx!)
+
+- (!docs/pages/includes/tctl.mdx!)
+- A Kubernetes cluster with support for Token Request Projection (which
+  graduated to a generally available feature in Kubernetes 1.20).
+- `kubectl` authenticated with the ability to create resources in the cluster
+  you wish to deploy `tbot` into.
+- The `helm` CLI tool installed.
+
+The examples in this guide will install a `tbot` deployment in the `default`
+Namespace of the Kubernetes cluster. Adjust references to `default` to the
+Namespace you wish to use.
+
+## Step 1/4. Create a Bot
+
+(!docs/pages/includes/machine-id/create-a-bot.mdx!)
+
+## Step 2/4. Create a join token
+
+Next, a join token needs to be configured. This will be used by `tbot` to join
+the cluster. As the `kubernetes` join method will be used, the public key of the
+Kubernetes cluster must first be determined. The public key used to sign JWTs
+is exposed on the "JWKS" endpoint of the Kubernetes API server. This public key
+can then be used by the Teleport Auth Service to verify that the Service
+Account JWT presented by `tbot` is signed legitimately by the Kubernetes
+cluster.
+
+Run the following commands to determine the JWKS-formatted public key:
+
+```code
+$ kubectl proxy -p 8080
+# In a separate terminal:
+$ curl http://localhost:8080/openid/v1/jwks
+{"keys":[--snip--]}
+```
+
+Create `bot-token.yaml`, ensuring you insert the value from the JWKS endpoint
+in `spec.kubernetes.static_jwks.jwks`:
+
+```yaml
+kind: token
+version: v2
+metadata:
+  # name will be specified in the `tbot` configuration to use this token
+  name: example-bot
+spec:
+  roles: [Bot]
+  # bot_name should match the name of the bot created earlier in this guide.
+  bot_name: example
+  join_method: kubernetes
+  kubernetes:
+    # static_jwks configures the Auth Service to validate the JWT presented by
+    # `tbot` using the public key from a statically configured JWKS.
+ type: static_jwks + static_jwks: + jwks: | + # Place the data returned by the curl command here + {"keys":[--snip--]} + # allow specifies the rules by which the Auth Service determines if `tbot` + # should be allowed to join. + allow: + - service_account: "default:tbot" # service_account +``` + +Use `tctl` to apply this file: + +```code +$ tctl create -f bot-token.yaml +``` + +## Step 3/4. Create a `tbot` deployment + +Now, you'll deploy `tbot` to your Kubernetes cluster using the Teleport `tbot` +Helm chart. This will be configured using values provided to the Helm CLI tool. + +First, create a file called `tbot-values.yaml` to hold the configuration values +for the Helm chart: + +```yaml +# Replace the cluster name with the name of your Teleport cluster. +# This is not necessarily the public address of your Teleport Proxy Service. +clusterName: "example.teleport.sh" +# Replace this with the address of your Teleport Proxy Service. +teleportProxyAddress: "example.teleport.sh:443" +# Ensure this matches the name of the join token you created earlier. +token: "example-bot" +``` + + +The default `tbot-distroless` image does not contain the FIPS-compliant +binaries. If you operate in an environment where FIPS compliance is required, +additionally set the `image: public.ecr.aws/gravitational/tbot-fips-distroless`. 
+ + +Before you can deploy the Helm chart, if you have not previously deployed a +Teleport Helm chart, you'll need to add the Teleport chart repository to your +CLI: + +```code +$ helm repo add teleport (=teleport.helm_repo_url=) +$ helm repo update +``` + +You can now deploy the `tbot` Helm chart using the configuration you created +earlier, ensuring you specify the namespace you wish to deploy `tbot` into: + +```code +$ helm install tbot teleport/tbot \ + --namespace default \ + --values tbot-values.yaml +``` + +Use `kubectl` to verify that the deployment is healthy: + +```code +$ kubectl describe deployment/tbot +$ kubectl logs deployment/tbot +``` + +With this complete, `tbot` is now successfully deployed to your cluster. + +## Step 4/4. Using the output + +By default, the `tbot` Helm chart is configured to write an identity file to a +Kubernetes Secret called `tbot-out` in the namespace where `tbot` has been +deployed. + +This identity file can be mounted into other pods and used with `tsh` or `tctl` +to access and configure your Teleport cluster. For example: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: tsh + namespace: default +spec: + containers: + - name: tsh + image: public.ecr.aws/gravitational/teleport-distroless:(=teleport.version=) + command: + - tsh + args: + - -i + - /identity-output/identity + - --proxy + - example.teleport.sh:443 + - ls + volumeMounts: + - name: identity-output + mountPath: /identity-output + volumes: + - name: identity-output + secret: + secretName: tbot-out +``` + +If you wish to use `tbot` for a different kind of access, you can override the +type of output using the `services` value of the Helm chart and setting +`defaultOutput.enabled` to `false`. + +Follow one of the [access guides](../access-guides/access-guides.mdx) to find +out more about how to configure `tbot` for your use case. + +## Next steps + +- Explore the [Helm chart configuration reference](../../reference/helm-reference/tbot.mdx). 
+- Follow the [access guides](../access-guides/access-guides.mdx) to finish + configuring `tbot` for your environment. +- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore + all the available configuration options. +- [More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../../reference/machine-workload-identity/telemetry.mdx) diff --git a/docs/pages/enroll-resources/machine-id/deployment/linux-tpm.mdx b/docs/pages/machine-workload-identity/deployment/linux-tpm.mdx similarity index 82% rename from docs/pages/enroll-resources/machine-id/deployment/linux-tpm.mdx rename to docs/pages/machine-workload-identity/deployment/linux-tpm.mdx index 65c3a558dec45..fd47d97d53b88 100644 --- a/docs/pages/enroll-resources/machine-id/deployment/linux-tpm.mdx +++ b/docs/pages/machine-workload-identity/deployment/linux-tpm.mdx @@ -1,11 +1,17 @@ --- -title: Deploying Machine ID on Linux (TPM) -description: How to install and configure Machine ID on a Linux host and use a TPM 2.0 for authentication +title: Deploying tbot on Linux (TPM) +description: How to install and configure the Machine & Workload Identity agent, `tbot`, on a Linux host and use a TPM 2.0 for authentication +sidebar_label: Linux (TPM) +tags: + - how-to + - mwi + - infrastructure-identity +enterprise: The TPM join method --- -This page explains how to deploy Machine ID on a Linux host, and use the -secure identify of the onboard TPM 2.0 chip for authenticating with the -Teleport cluster. +This page explains how to deploy Machine & Workload Identity's agent, `tbot`, on +a Linux host, and use the secure identity of the onboard TPM 2.0 chip for +authenticating with the Teleport cluster. The `tpm` join method requires a valid Teleport Enterprise license to be installed on the cluster's Auth Service. @@ -19,17 +25,17 @@ installed on the cluster's Auth Service. (!docs/pages/includes/edition-prereqs-tabs.mdx!) - (!docs/pages/includes/tctl.mdx!) 
-- A Linux host that you wish to install Machine ID onto, with a TPM2.0 +- A Linux host that you wish to install `tbot` onto with a TPM2.0 installed. -- A Linux user on that host that you wish Machine ID to run as. In the guide, +- A Linux user on that host that you wish `tbot` to run as. In the guide, we will use `teleport` for this. ## Step 1/5. Install `tbot` **This step is completed on the Linux host.** -First, `tbot` needs to be installed on the VM that you wish to use Machine ID -on. +First, `tbot` needs to be installed on the VM that you wish to use Machine +& Workload Identity on. Download the appropriate Teleport package for your platform: @@ -58,7 +64,7 @@ With the Bot created, we now need to create a token. The token will be used by ### Determining the EKPub Hash or EKCert Serial for your TPM First, you need to determine the characteristics of the TPM on the host that -you wish to use Machine ID on. These characteristics will then be used within +you wish to install `tbot` on. These characteristics will then be used within the allow rules of the join token to grant access to this specific host. On the machine, run `tbot tpm identify`: @@ -98,8 +104,8 @@ metadata: # name identifies the token. Try to ensure that this is descriptive. name: my-bot-token spec: - # For Machine ID and TPM joining, roles will always be "Bot" and - # join_method will always be "tpm". + # For MWI joining via TPM, roles will always be "Bot" and join_method will + # always be "tpm". roles: [Bot] join_method: tpm @@ -161,8 +167,8 @@ onboarding: storage: type: directory path: /var/lib/teleport/bot -# outputs will be filled in during the completion of an access guide. -outputs: [] +# services will be filled in during the completion of an access guide. +services: [] ``` Replace: @@ -193,16 +199,16 @@ $ sudo chown teleport:teleport /var/lib/teleport/bot (!docs/pages/includes/machine-id/daemon.mdx!) -## Step 5/5. Configure outputs +## Step 5/5. 
Configure services -(!docs/pages/includes/machine-id/configure-outputs.mdx!) +(!docs/pages/includes/machine-id/configure-services.mdx!) ## Next steps - Follow the [access guides](../access-guides/access-guides.mdx) to finish configuring `tbot` for your environment. -- Read the [TPM joining reference](../../../reference/join-methods.mdx#trusted-platform-module-tpm) +- Read the [TPM joining reference](../../reference/deployment/join-methods.mdx#trusted-platform-module-tpm) to learn more about `tpm`joining. -- Read the [configuration reference](../../../reference/machine-id/configuration.mdx) to explore +- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore all the available configuration options. -- [More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../../../reference/machine-id/telemetry.mdx) +- [More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../../reference/machine-workload-identity/telemetry.mdx) diff --git a/docs/pages/machine-workload-identity/deployment/linux.mdx b/docs/pages/machine-workload-identity/deployment/linux.mdx new file mode 100644 index 0000000000000..b2b3cb1d1db08 --- /dev/null +++ b/docs/pages/machine-workload-identity/deployment/linux.mdx @@ -0,0 +1,109 @@ +--- +title: Deploying tbot on Linux +description: How to install and configure the Machine & Workload Identity agent, `tbot`, on a Linux host +sidebar_label: Linux +tags: + - how-to + - mwi + - infrastructure-identity +--- + +This page explains how to deploy the Machine & Workload Identity agent, `tbot`, +on a Linux host. + +## How it works + +The process in which `tbot` initially authenticates with the Teleport cluster is +known as joining. A join method is a specific technique for the bot to prove its +identity. + +On platforms where there is no form of identity available to the machine, the +only available join method is `token`. 
The `token` join method is special in that
+it is the only join method that relies on a shared secret. To mitigate
+the risks associated with this, the `token` join method is single use: the
+same token cannot be used to join a second time.
+
+## Prerequisites
+
+(!docs/pages/includes/edition-prereqs-tabs.mdx!)
+
+- (!docs/pages/includes/tctl.mdx!)
+- A Linux host that you wish to install `tbot` onto.
+- A Linux user on that host that you wish `tbot` to run as. In the guide, we
+  will use `teleport` for this.
+
+## Step 1/4. Install `tbot`
+
+**This step is completed on the Linux host.**
+
+First, `tbot` needs to be installed on the VM that you wish to use Machine &
+Workload Identity on.
+
+Install Teleport as appropriate for your platform:
+
+(!docs/pages/includes/install-linux.mdx!)
+
+## Step 2/4. Create a bot user
+
+**This step is completed on your local machine.**
+
+Create the bot:
+
+```code
+$ tctl bots add example
+```
+
+A join token will be included in the results of `tctl bots add`; record this
+value, as it will be needed when configuring `tbot`.
+
+## Step 3/4. Configure `tbot`
+
+**This step is completed on the Linux host.**
+
+Create `/etc/tbot.yaml`:
+
+```yaml
+version: v2
+proxy_server: example.teleport.sh:443
+onboarding:
+  join_method: token
+  token: (=presets.tokens.first=)
+storage:
+  type: directory
+  path: /var/lib/teleport/bot
+# services will be filled in during the completion of an access guide.
+services: []
+```
+
+Replace:
+- `example.teleport.sh:443` with the address of your Teleport Proxy or
+  Auth Service. Prefer using the address of a Teleport Proxy.
+- `(=presets.tokens.first=)` with the token that was returned by `tctl bots add`
+  in the previous step.
+
+
+The first time that `tbot` runs, this token will be exchanged for a certificate
+that the bot uses for authentication. At this point, the token is invalidated.
+This means you may remove the token from the configuration file after the first
+run has completed, but there is no tangible security benefit to doing so.
+
+### Prepare the storage directory
+
+(!docs/pages/includes/machine-id/machine-id-init-bot-data.mdx!)
+
+### Create a systemd service
+
+(!docs/pages/includes/machine-id/daemon.mdx!)
+
+## Step 4/4. Configure services
+
+(!docs/pages/includes/machine-id/configure-services.mdx!)
+
+## Next steps
+
+- Follow the [access guides](../access-guides/access-guides.mdx) to finish configuring `tbot` for
+  your environment.
+- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore
+  all the available configuration options.
+- [More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../../reference/machine-workload-identity/telemetry.mdx)
diff --git a/docs/pages/machine-workload-identity/faq.mdx b/docs/pages/machine-workload-identity/faq.mdx
new file mode 100644
index 0000000000000..77891ef209cff
--- /dev/null
+++ b/docs/pages/machine-workload-identity/faq.mdx
@@ -0,0 +1,135 @@
+---
+title: Machine & Workload Identity FAQ
+description: Frequently asked questions about Teleport Machine & Workload Identity
+sidebar_label: FAQ
+tags:
+  - faq
+  - mwi
+  - infrastructure-identity
+---
+
+This page provides answers to frequently asked questions about Machine &
+Workload Identity (MWI). For a list of frequently asked questions about
+Teleport in general, see [Frequently Asked Questions](../faq.mdx).
+
+## Can MWI be used within CI/CD jobs?
+
+On CI/CD platforms where your workflow runs in an ephemeral environment (e.g.
+no persistent state exists between individual workflow runs), MWI works best
+where a supported join method exists.
These are:
+
+- GitHub Actions
+- CircleCI
+- GitLab
+- AWS
+- GCP
+- Azure
+- Kubernetes
+- Spacelift
+- Terraform Cloud
+
+On CI/CD platforms where you control the runner environment (e.g. a self-hosted
+Jenkins runner), MWI can run as a daemon on the runner and the generated
+credentials can be mounted into the environment of your individual workflow
+runs.
+
+## Can MWI be used with Trusted Clusters?
+
+You can use MWI for SSH access in trusted leaf clusters.
+
+We currently do not support access to applications, databases, or Kubernetes
+clusters in leaf clusters.
+
+## Should I define allowed logins as user traits or within roles?
+
+When defining the logins that your bot will be allowed to use, there are two
+options:
+
+- Directly adding the login to the `logins` section of the role that your bot
+  will impersonate.
+- Adding the login to the logins trait of the bot user, and impersonating a role
+  that includes the `{{ internal.logins }}` role variable. This is usually done
+  by providing the `--logins` parameter when creating the bot.
+
+For simpler scenarios — where you only expect the bot to use a single service
+or role — you can add the login to the logins trait of the bot user. This
+approach allows you to leverage default roles like `access`.
+
+For situations where your bot is producing certificates for different roles in
+different services, it is important to consider whether using a login trait
+grants access to resources that you didn't intend. To avoid this, we recommend
+creating bespoke roles that explicitly specify the logins that should be
+included in the certificates.
+
+## Can MWI be used with per-session MFA?
+
+We do not currently support MWI and per-session MFA. Enabling per-session
+MFA globally, or for roles impersonated by MWI, will prevent credentials
+produced by MWI from being used to connect to resources.
+
+As a workaround, ensure that per-session MFA is enforced on individual roles
+rather than enforced globally, and that it is not enforced for roles that you
+will impersonate using MWI.
+
+## Can MWI be used with Device Trust?
+
+We do not currently support MWI and Device Trust. Requiring Device
+Trust cluster-wide or for roles impersonated by MWI will prevent
+credentials produced by MWI from being used to connect to resources.
+
+As a workaround, configure Device Trust enforcement on a role-by-role basis
+and ensure that it is not required for roles that you will impersonate using
+MWI.
+
+## Can MWI be used to generate long-lived certificates?
+
+MWI cannot currently be used to generate certificates valid for longer
+than 24 hours, and requests for longer certificates using the `credential_ttl`
+parameter will be reduced to this 24-hour limit.
+
+This limit serves multiple purposes. For one, it encourages security best
+practices by only ever issuing very short-lived certificates. Additionally, as
+MWI allows for certificate renewal, this limit helps to prevent further
+exploitation should an MWI identity be compromised: an attacker could use
+a stolen renewable certificate to request very long-lived certificates and
+maintain access for a much longer period.
+
+If your use case absolutely requires long-lived certificates,
+[`tctl auth sign`](../reference/cli/tctl.mdx#tctl-auth-sign) can
+be used instead; however, this loses the security benefits of MWI's
+short-lived renewable certificates.
+
+## Can MWI be used to connect to multiple Kubernetes clusters?
+
+This is possible in Teleport v17.2.7 or higher, using the new `kubernetes/v2`
+output service type in `tbot`. This service can expose many clusters at once via
+contexts in the generated `kubeconfig.yaml`, and if label selectors are used,
+will dynamically add contexts as clusters are added and removed in Teleport.
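As a sketch, a `kubernetes/v2` service entry in the `tbot` configuration might look like the following. The destination path, cluster name, and labels are illustrative; consult the configuration reference for the authoritative schema:

```yaml
services:
  - type: kubernetes/v2
    destination:
      type: directory
      path: /opt/machine-id
    selectors:
      # Select a cluster by name, or any clusters matching labels.
      - name: staging-cluster
      - labels:
          env: production
```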
+ +Note that both `tbot` and the Teleport Proxy need to be running v17.2.7 to take +advantage of this functionality. + +Refer to the +[CLI reference](../reference/cli/tbot.mdx#tbot-start-kubernetesv2) and +[config reference](../reference/machine-workload-identity/configuration.mdx#kubernetesv2) +for more information. + +## Does `tbot` support Windows? + +Yes, the `tbot` binary is available for Windows. It can be found in the client +tools archive that also includes `tsh` and `tctl`. See the +[Installing Teleport guide](../installation/installation.mdx) for further information. + +However, there are a few limitations to be aware of: + +- Functionality that relies on Unix Domain Sockets (e.g. SSH multiplexer, + SPIFFE Workload API etc.) is not available. +- Functionality relating to the configuration of Symlink protection on directory + destinations is not available. +- Functionality relating to the management of ACLs on directory destinations is + not available. +- Most delegated join methods are unlikely to function correctly. + +In some circumstances, it can be more practical to run `tbot` within Windows +Subsystem for Linux rather than running it natively on Windows. This will depend +on where the tools that will consume the output of `tbot` are running. 
diff --git a/docs/pages/machine-workload-identity/getting-started.mdx b/docs/pages/machine-workload-identity/getting-started.mdx new file mode 100644 index 0000000000000..1787461b24f91 --- /dev/null +++ b/docs/pages/machine-workload-identity/getting-started.mdx @@ -0,0 +1,479 @@ +--- +title: Machine and Workload Identity Getting Started Guide +description: Getting started with Teleport Machine and Workload Identity +sidebar_label: Getting Started +sidebar_position: 2 +videoBanner: YzVK2tr6u-U +tags: + - get-started + - mwi + - infrastructure-identity +--- + +{/* lint disable page-structure remark-lint */} + +Teleport Machine and Workload Identity (MWI) provides secure access for Non-Human Identities across multiple platforms +and resource types, supporting everything from [Infrastructure-as-Code](./use-cases/iac-mwi.mdx) +workflows to [AI agent](./use-cases/ai-agents-mwi.mdx) operations. This guide focuses on a popular +implementation: executing commands on deployment targets through CI/CD pipelines. Even if your specific use case differs, +this guide covers the fundamental MWI setup process, after which you can reference the dedicated use case pages for tool-specific guidance. + +Here's an overview of what you will do: + +- Choose a Linux server or Kubernetes cluster as your target resource. +- Create a Role for your Bot, or choose an existing one. +- Create a Bot in Teleport with a role allowing it to access your target resource. +- Create a GitHub join token for the Bot. +- Set up a GitHub Actions workflow that authenticates and issues a command using the `tbot` binary. + +This guide covers configuring MWI for development and learning purposes. +For a production-ready configuration of MWI, visit the [Deploying Machine +ID](../machine-workload-identity/deployment/deployment.mdx) guides. + +## Prerequisites + +In this getting started guide, you will configure MWI to issue commands to a Linux +server or Kubernetes cluster from a GitHub Actions workflow. 
This guide assumes you've already enrolled a Linux server +or Kubernetes cluster to Teleport. If you haven't done so, refer to the +[guides on enrolling resources](../enroll-resources/enroll-resources.mdx). + +- A GitHub repository where you have permissions to create GitHub Actions workflows. +
+ Using GitHub Enterprise? + + There is extra configuration needed when using GitHub Enterprise repositories, either + cloud or self-hosted. We recommend using a personal repository for this guide if possible. + + If you need to use GitHub Enterprise, check the following: + - Cloud + - In the join token, under `github` set the `enterprise_slug` field to the name of your enterprise's slug, likely the name of the organization. + - Self-hosted + - Your Teleport Auth Service must be able to reach your GitHub Enterprise instance. + - In the join token, under `github`, set the `enterprise_server_host` field to the hostname of your GitHub Enterprise instance. + +
+ The join token fields are available and commented out in the example join token file. +
+- A target resource enrolled in Teleport, either: + - A Linux server + - A Kubernetes cluster + - If you don't have a target resource that you can use, follow one of the [guides](../enroll-resources/enroll-resources.mdx) + for enrolling a new resource. + +(!docs/pages/includes/edition-prereqs-tabs.mdx!) + +(!/docs/pages/includes/tctl.mdx!) + +## Step 1/5. Choose a target resource + +First, choose a target resource that you want your GitHub Actions workflow to +access using Machine and Workload Identity. + +To grant the GitHub Actions workflow access to the resource, you'll create a +role and specify within this role the labels of the resources it should grant +access to. Labels are key-value pairs that help identify and categorize +resources in Teleport. + + + + +You can find your nodes and labels in the GUI or with the following command: + +```code +$ tctl nodes ls --format=text + +Host UUID Public Address Labels Version +------- ------------------------------------ -------------- ----------------------------------- ------- +target1 8a50c8aa-c45f-403c-95ff-83f50561d64c env=mwi-demo,hostname=ip-10-0-0-200 18.1.5 +``` + + + + + + You can find your clusters and labels in the GUI or with the following command: + +```code +$ tctl kube ls --format=text + +Cluster Labels Version +-------- ----------------------------- ------- +staging env=mwi-demo,region=us-west-2 18.1.5 +``` + + + + +In our examples, we'll use the `env` label with the value `mwi-demo` to control +what our GitHub Actions workflow can access. + +## Step 2/5. Choose or create a role + +Now, we'll create a role which will grant access to our target resource. If you +have a pre-existing role which grants access to your target resources, you can +skip this step and use that instead. 
+ + + + +On your local machine, create a file called `role.yaml` and add the following +contents: + +```yaml +kind: role +version: v7 +metadata: + name: github-bot +spec: + allow: + node_labels: + env: mwi-demo + logins: + - ubuntu +``` + +Replace: + +- `env: mwi-demo` with the label selector that matches your target resource. +- `ubuntu` with the name of the Linux user that the workflow should have access to. + +Use `tctl create` to create the role from the file: + +```code +$ tctl create -f ./role.yaml +``` + + + + + +On your local machine, create a file called `role.yaml` and add the following +contents: + +```yaml +kind: role +version: v7 +metadata: + name: github-bot +spec: + allow: + kubernetes_labels: + env: mwi-demo + kubernetes_groups: + - system:masters + kubernetes_resources: + - kind: '*' + name: '*' + namespace: default + verbs: + - '*' +``` + +Replace: + +- `env: mwi-demo` with the label selector that matches your target resource. +- `system:masters` with the Kubernetes group that the workflow should have access to. + +Use `tctl create` to create the role from the file: + +```code +$ tctl create -f role.yaml +``` + + + + +## Step 3/5. Create a bot + +In Teleport, a **Bot** represents an identity for a machine. This is similar to +how a user represents the identity of a human. Like users, bots are assigned +roles to manage their access to resources. + +You'll now create a bot to represent the GitHub Actions workflow. + +On your local machine, create a file called `bot.yaml` with the following +contents: + +```yaml +kind: bot +version: v1 +metadata: + name: github-bot +spec: + roles: + - github-bot +``` + +Ensure that the value within the `spec.roles` field matches the name of the role +you have just created. + +Use `tctl create` to create the bot from the file: + +```code +$ tctl create -f ./bot.yaml +``` + +## Step 4/5 Create a join token + +Unlike users, bots do not authenticate using a username and password or SSO. 
+Instead, they authenticate in a process called joining. Teleport uses metadata
+about the platform the bot is running on, such as an OIDC token issued by a CI
+provider or the IAM role assumed by an AWS EC2 instance, to attest to the
+identity of the process, ensuring that only authorized bots can join the
+cluster. This means the bot has a verified identity rather than just a shared
+secret.
+
+Teleport supports a number of secure [join
+methods](../reference/deployment/join-methods.mdx#delegated-join-methods)
+specific to the platform the bot is running on. Since we are using GitHub
+Actions, we will use the `github` join method.
+
+For the join token definition, edit the `repository`
+field to match the GitHub repository where you will run the GitHub Actions workflow.
+When a bot attempts to join from that GitHub organization and repository, Teleport
+will identify it as your `github-bot` and assign it the correct role. If a bot attempts
+to join from any other repository, it will be rejected.
+
+On your local machine, create a file called `join_token.yaml` with the following
+contents:
+
+```yaml
+kind: token
+version: v2
+metadata:
+  name: github-bot
+spec:
+  join_method: github
+  roles:
+    - Bot
+  bot_name: github-bot
+  github:
+    allow:
+      - repository: "your-github-username/my-repo"
+        # enterprise_server_host: github.my-company.com # use for self-hosted GitHub Enterprise
+        # enterprise_slug: my-company # use for GitHub Enterprise Cloud organization
+```
+
+Ensure that:
+
+- You replace `your-github-username/my-repo` with the name of the GitHub
+  repository where your GitHub Actions workflow will run.
+- The `spec.bot_name` field matches the name of the bot you created in the
+  previous step.
+- You have set `enterprise_server_host` or `enterprise_slug` if appropriate.
+
+Use `tctl create` to create the join token from the file:
+
+```code
+$ tctl create -f join_token.yaml
+```
+
+## Step 5/5. Access a resource from GitHub Actions
+
+We have several published [Actions](../reference/deployment/join-methods.mdx#github-actions-helpers)
+for convenience, but in this guide we configure things explicitly to aid
+understanding.
+
+Within your GitHub repository, you'll create two different files:
+
+- `.github/workflows/teleport.yaml`: the configuration for the GitHub Actions
+  workflow, specifying which actions should be taken on what triggers.
+- `tbot.yaml`: the configuration file for the Machine & Workload Identity agent,
+  `tbot`, which specifies how the bot should authenticate and what kind of
+  identity it should request.
+
+
+
+
+At the root of your GitHub repository, create a file called `tbot.yaml` with the
+following contents:
+
+```yaml
+version: v2
+proxy_server: example.teleport.sh:443
+onboarding:
+  join_method: github
+  token: github-bot
+certificate_ttl: 5m
+storage:
+  type: memory
+services:
+  - type: identity
+    destination:
+      type: directory
+      path: ./ssh_out
+```
+
+Replace:
+
+- `example.teleport.sh:443` with the address of your Teleport Proxy.
+- `github-bot` with the name of the token you created in Step 4/5.
+
+This configuration will write credentials for SSH access to the `./ssh_out`
+directory, which will last up to 5 minutes. You can adjust this TTL to match the
+expected runtime of your job, so the identity expires when its purpose is
+complete.
+
+Commit and push this file to your repository.
+
+You'll now add the GitHub Actions workflow file. Within the GitHub repository,
+create a file at `.github/workflows/teleport.yaml` with the following contents:
+
+```yaml
+on:
+  workflow_dispatch:
+
+jobs:
+  check_resource_usage:
+    permissions:
+      # The "id-token: write" permission is required, or MWI will not be
+      # able to authenticate with the cluster.
+      id-token: write
+      contents: read
+    name: Check resource usage on server
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout repository
+        uses: actions/checkout@v3
+      - name: Fetch Teleport binaries
+        uses: teleport-actions/setup@v1
+        with:
+          proxy: example.teleport.sh:443
+          version: auto
+      - name: Export ssh config
+        run: tbot start --oneshot -c ./tbot.yaml
+      - name: Run mpstat
+        run: |
+          ssh -F ./ssh_out/ssh_config ubuntu@myinstance.example.teleport.sh mpstat
+```
+
+Replace:
+
+- `example.teleport.sh:443` with the address of your Teleport Proxy.
+- `myinstance.example.teleport.sh` with the address of your target server.
+- `ubuntu` with the Linux user you want to log in as.
+
+In the second step of this workflow, we use one of the published actions
+which installs the `tbot` binary into the workflow run environment.
+
+In the third step, we use this binary with the configuration you created earlier
+to authenticate as the bot and produce the SSH configuration and credentials in
+the `./ssh_out` directory.
+
+Finally, we run an SSH command using the short-lived identity. Alternatively,
+this SSH configuration could be used with Ansible or other kinds of SSH-based
+automation.
+
+Commit and push this file to your repository.
+
+
+
+
+
+At the root of your GitHub repository, create a file called `tbot.yaml` with the
+following contents:
+
+```yaml
+version: v2
+proxy_server: example.teleport.sh:443
+onboarding:
+  join_method: github
+  token: github-bot
+certificate_ttl: 5m
+storage:
+  type: memory
+services:
+  - type: kubernetes/v2
+    selectors:
+      - name: my-kubernetes-cluster
+    destination:
+      type: directory
+      path: ./k8s_out
+```
+
+Replace:
+
+- `example.teleport.sh:443` with the address of your Teleport Proxy.
+- `github-bot` with the name of the token you created in Step 4/5.
+- `my-kubernetes-cluster` with the name of your Kubernetes cluster as enrolled in Teleport.
+
+This configuration will write credentials for Kubernetes access to the
+`./k8s_out` directory, which will last up to 5 minutes. You can adjust this TTL
+to match the expected runtime of your job, so the identity expires when its
+purpose is complete.
+
+Commit and push this file to your repository.
+
+You'll now add the GitHub Actions workflow file. Within the GitHub repository,
+create a file at `.github/workflows/teleport.yaml` with the following contents:
+
+```yaml
+on:
+  workflow_dispatch:
+
+jobs:
+  list_pods:
+    permissions:
+      # The "id-token: write" permission is required, or MWI will not be
+      # able to authenticate with the cluster.
+      id-token: write
+      contents: read
+    name: List pods in default namespace
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout repository
+        uses: actions/checkout@v3
+      - name: Fetch Teleport binaries
+        uses: teleport-actions/setup@v1
+        with:
+          proxy: example.teleport.sh:443
+          version: auto
+      - name: Export kubectl config
+        run: tbot start --oneshot -c ./tbot.yaml
+      - name: Run kubectl get pods
+        run: |
+          kubectl --kubeconfig=./k8s_out/kubeconfig.yaml get pods -n default
+```
+
+Replace:
+
+- `example.teleport.sh:443` with the address of your Teleport Proxy.
+
+In the second step of this workflow, we use one of the published actions
+which installs the `tbot` binary into the workflow run environment.
+
+In the third step, we use this binary with the configuration you created earlier
+to authenticate as the bot and produce the Kubernetes configuration and
+credentials in the `./k8s_out` directory.
+
+Finally, we run a `kubectl` command using the short-lived identity. This could
+instead be Helm or another Kubernetes configuration management tool.
+
+Commit and push this file to your repository.
+
+
+
+
+### Run the workflow
+
+In your GitHub repository:
+
+- Go to the `Actions` tab
+- Select the Teleport workflow on the left
+- Click `Run workflow` on the right
+- Make sure the branch is `main` and click the `Run workflow` confirmation button
+
+After the workflow completes, you should see that the job succeeded, with the
+output of the command in the logs.
+
+## Summary
+
+You've set up a workflow in GitHub Actions that accesses your resources securely
+through the Teleport Proxy, without distributing long-lived credentials, making
+access both more secure and simpler for development teams.
+
+## Next Steps
+
+- Check out the [deployment guides](./deployment/deployment.mdx) to learn about
+  configuring `tbot` in a production-ready way for your platform.
+- Check out the [access guides](./access-guides/access-guides.mdx) to learn about configuring
+  `tbot` for use cases other than SSH and Kubernetes.
+- Read the [configuration reference](../reference/machine-workload-identity/configuration.mdx) to explore
+  all the available configuration options.
+- Learn how [Workload Identities](../machine-workload-identity/workload-identity/introduction.mdx) enable the same capabilities
+  for resources like cloud APIs that can't be protected with the Teleport Proxy.
diff --git a/docs/pages/machine-workload-identity/introduction.mdx b/docs/pages/machine-workload-identity/introduction.mdx
new file mode 100644
index 0000000000000..ebd14ab8c19c7
--- /dev/null
+++ b/docs/pages/machine-workload-identity/introduction.mdx
@@ -0,0 +1,177 @@
+---
+title: "Introduction to Machine & Workload Identity"
+description: "Explains concepts and use cases for Machine & Workload Identity, which replaces static secrets with secure, short-lived identities for machines and workloads."
+sidebar_label: "Introduction"
+sidebar_position: 1
+videoBanner: "ZDWRt105tBg"
+tags:
+  - mwi
+  - infrastructure-identity
+---
+
+Teleport Machine & Workload Identity offers two complementary sets of capabilities for non-human entities in your infrastructure:
+
+- **Zero Trust Access for machines**:
+Enables machines (like CI/CD pipelines) to securely authenticate with your Teleport cluster to access protected resources and configure the cluster itself.
+- **Flexible Workload Identities**:
+Issues short-lived cryptographic identities to workloads, compatible with the SPIFFE standard, enabling secure workload-to-workload communication and third-party API authentication.
+
+## Secure service-to-service authentication
+
+Establish a root certificate authority within your Teleport cluster that issues short-lived JWTs and X.509 certificates to workloads. These identities ([SPIFFE](../machine-workload-identity/workload-identity/spiffe.mdx) Verifiable Identity Documents, or SVIDs) contain the workload's identity encoded as a URI (SPIFFE ID).
+
+Key benefits:
+
+- Eliminates long-lived shared secrets
+- Establishes a universal form of identity for workloads
+- Simplifies infrastructure by reducing authentication methods
+
+The `tbot` agent manages identity requests and renewals, authenticating to the Teleport cluster using supported join methods. Workloads receive identities either through the filesystem or Kubernetes secrets, or via the SPIFFE Workload API.
+
+## Zero Trust Access for machines
+
+Teleport provides machines with an identity (a "bot") that can authenticate to the Teleport cluster. Bots are similar to human users: their access is controlled by roles and their activities are recorded in audit logs.
+
+Bots authenticate using join tokens that specify which bot user they grant access to and what proof (join method) is needed. Each new `tbot` client that joins creates a server-side Bot Instance to track installations over time.
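As a brief sketch of how a join token couples a bot to a join method, consider the following resource. The names here are illustrative, assuming a bot called `example-bot` that joins from GitHub Actions:

```yaml
kind: token
version: v2
metadata:
  # Illustrative name; substitute your own.
  name: example-bot
spec:
  roles: [Bot]
  # The join method defines the proof the machine must present.
  join_method: github
  # The bot user this token grants access to.
  bot_name: example-bot
  github:
    allow:
      - repository: "example-org/example-repo"
```

Any `tbot` process presenting a valid GitHub Actions identity from the allowed repository is recognized as this bot; anything else is rejected.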
+
+## Integrated use cases
+
+Zero Trust Access & Flexible Workload Identity can work together to create a comprehensive security model. Machines can securely access resources while workloads communicate securely with each other and external services, all managed through Teleport's unified access plane.
+
+### CI/CD pipeline with end-to-end authentication
+
+A [CI/CD system](./use-cases/mwi-ci-cd.mdx) securely deploys services to Kubernetes and establishes secure communication channels between them:
+
+- The pipeline authenticates through the proxy to deploy to Kubernetes and receives credentials to interact with cloud APIs (e.g., to push container images)
+- Services deployed by the pipeline receive SPIFFE identities for mutual TLS. The pipeline manages the identity lifecycle for the services it deploys
+
+### Cloud-native application with third-party API access
+
+A Kubernetes-based application needs access to both internal services and external APIs:
+
+- Automation tools authenticate to configure the cluster securely
+- Application components are issued SPIFFE identities
+- Identities authenticate to internal services via mTLS
+- JWT-based authentication is used for external API access
+
+### Zero Trust security implementation
+
+A [Zero Trust strategy](../machine-workload-identity/workload-identity/getting-started.mdx) is applied across workloads and automation:
+
+- Automation scripts authenticate through the proxy to perform infrastructure tasks
+- Workloads authenticate using short-lived, cryptographically verifiable identities
+- Security teams use Teleport’s unified audit logs to trace all identity activity
+
+### Identity-based communication without shared secrets
+
+Workloads get zero-trust, identity-based communication without shared secrets; credentials are rotated automatically without human involvement.
+
+Instead of managing static credentials (e.g., API keys, database passwords), workloads authenticate using short-lived X.509 certificates or JWTs compatible with the SPIFFE/SPIRE standard.
+
+- New identities are issued to workloads on a regular schedule by Teleport’s Auth Service and rotated automatically
+- All identity issuance and usage is recorded in audit logs
+
+## Key differences
+
+**Flexible Workload Identities**: Issues SPIFFE-compatible identities for various authentication purposes; doesn't use the Teleport Proxy for workload-to-workload communication
+
+**Zero Trust Access for machines**: Issues Teleport-specific credentials for accessing resources secured by Teleport; requires using the Teleport Proxy
+
+| Feature | Flexible Workload Identities | Zero Trust Access for machines |
+|---------|------------------------------|--------------------------------|
+| Purpose | Authenticate workloads to other workloads or third-party APIs | Authenticate bots to Teleport to access infrastructure |
+| Standards | SPIFFE (SVIDs, Workload API, mTLS, JWT) | Teleport-native X.509 credentials |
+| Proxy Usage | No Teleport Proxy involved | Access goes through the Teleport Proxy |
+| Use Case Focus | Service-to-service authentication | Infrastructure and configuration access |
+| Credential Delivery | Filesystem or SPIFFE API via `tbot` | Artifacts written to disk via `tbot` |
+
+## Concepts
+
+Read this section to understand the high-level architecture of a Machine &
+Workload Identity setup. For a more in-depth overview, check out the [architecture
+page](../reference/architecture/machine-id-architecture.mdx).
+
+![Machine & Workload Identity architecture](../../img/machine-id/architecture-diagram.png)
+
+### Bots
+
+Machine & Workload Identity provides machines with an identity that can
+authenticate to the Teleport cluster. This identity is known as a **bot**.
Bots
+share a number of similarities with human users:
+
+- Access controlled by roles assigned to them in Teleport
+- Access to resources recorded in audit logs
+- Identity encoded in an X.509 client certificate signed by the Teleport Auth
+Service, which can then be used for access
+
+### Join tokens
+
+Unlike a human Teleport user, a bot does not "log in" using a static username
+and password. Instead, a bot authenticates to Teleport with a **join token**,
+which is configured within Teleport and specifies which bot user it grants
+access to and what sort of proof (known as the **join method**) is needed to use
+this join token. This proof is typically an identity issued to the machine by
+the platform it runs on (e.g. AWS IAM).
+
+Multiple join tokens may be created for a single bot to allow joining with
+different join methods.
+
+### Bot Instances
+
+Each time a new `tbot` client joins from scratch, it creates a new server-side
+Bot Instance. Bot Instances keep track of individual `tbot` installations over
+time, even as they renew their certificates or rejoin. These server-side
+resources also record the most recent authentication attempts, as well as
+bot heartbeats.
+
+Many Bot Instances can exist concurrently for a given Bot, regardless of their
+join method.
+
+Bot Instances can be inspected with:
+- `tctl get bot_instance` to list all instances
+- `tctl get bot_instance/$botName` to list all instances associated with a
+particular Bot
+- `tctl get bot_instance/$botName/$id` to show a single bot instance by its bot
+name and ID
+
+### tbot
+
+Machine & Workload Identity is used through an agent called `tbot`. `tbot`
+authenticates with the Teleport cluster and then generates credentials and
+configuration files for other tools to use to connect to Teleport resources
+using the bot's identity.
+
+### Artifacts
+
+The files generated by `tbot` are referred to as its **artifacts**.
Artifacts
+can be a number of things, from credentials, such as signed certificates, to
+configuration files that will automatically configure a tool (such as `kubectl`)
+to use Teleport. This behavior is controlled by configuring `tbot`'s
+**services**. A service specifies what should be generated and where it should
+be saved.
+
+## Further reading
+
+For a quick, non-production introduction to Machine & Workload Identity, read the
+[Getting Started Guide](./getting-started.mdx).
+
+Production-ready guidance on deploying Machine & Workload Identity is broken out into two parts:
+
+- [Deploying `tbot`](./deployment/deployment.mdx): How to install and configure
+`tbot` on specific platforms.
+- [Access your Infrastructure using `tbot`](./access-guides/access-guides.mdx):
+How to use `tbot` to access infrastructure through Teleport.
+
+Reference information:
+
+- [Workload Identity](./workload-identity/introduction.mdx): Information on Teleport Workload
+Identity for SPIFFE, a feature for issuing short-lived identities intended
+for workload-to-workload communication.
+- [Frequently Asked Questions](./faq.mdx): Commonly asked questions.
+- [Troubleshooting Guide](./troubleshooting.mdx): Common issues and how to solve
+them.
+- [Architecture](../reference/architecture/machine-id-architecture.mdx): A technical deep-dive into how Machine &
+Workload Identity works.
+- [Reference](../reference/machine-workload-identity/machine-workload-identity.mdx): Complete documentation of available
+configuration options.
+- [Manifesto](./manifesto.mdx): Our vision for Machine & Workload Identity.
diff --git a/docs/pages/machine-workload-identity/machine-workload-identity.mdx b/docs/pages/machine-workload-identity/machine-workload-identity.mdx
new file mode 100644
index 0000000000000..940c76f675ed8
--- /dev/null
+++ b/docs/pages/machine-workload-identity/machine-workload-identity.mdx
@@ -0,0 +1,331 @@
+---
+title: "Teleport Machine & Workload Identity"
+description: "Includes guides for Machine & Workload Identity, which replaces static secrets with secure, short-lived identities for machines and workloads."
+template: "landing-page"
+tags:
+  - mwi
+  - infrastructure-identity
+---
+
+import LandingHero, { LandingHeroProps } from '@site/src/components/Pages/Landing/LandingHero';
+import Integrations, { IntegrationsProps } from "@site/src/components/Pages/Homepage/Integrations";
+import UseCasesList, { UseCasesListProps } from "@site/src/components/Pages/Landing/UseCasesList";
+
+import bookOpenSvg from "@site/src/components/Icon/teleport-svg/book-open.svg";
+import identificationBadgeSvg from "@site/src/components/Icon/teleport-svg/identification-badge.svg";
+import listBulletsSvg from "@site/src/components/Icon/teleport-svg/list-bullets.svg";
+import verifySvg from "@site/src/components/Icon/teleport-svg/verify.svg";
+import awsSvg from "@site/src/components/Icon/teleport-svg/aws.svg";
+import azureSvg from "@site/src/components/Icon/svg/azure2.svg";
+import azureDevOpsSvg from "@site/src/components/Icon/teleport-svg/azure.svg";
+import bitbucketSvg from "@site/src/components/Icon/svg/bitbucket.svg";
+import circleCISvg from "@site/src/components/Icon/svg/circleci2.svg";
+import gcpSvg from "@site/src/components/Icon/teleport-svg/gcp.svg";
+import gitlabSvg from "@site/src/components/Icon/svg/gitlab.svg";
+import jenkinsSvg from "@site/src/components/Icon/teleport-svg/jenkins.svg";
+import kubernetesSvg from "@site/src/components/Icon/svg/kubernetes2.svg";
+import linuxSvg from "@site/src/components/Icon/svg/linux.svg";
+import arrowRightSvg from
"@site/src/components/Icon/teleport-svg/arrow-right.svg";
+import terraformSvg from "@site/src/components/Icon/svg/terraform.svg";
+import ansibleSvg from "@site/src/components/Icon/svg/ansible.svg";
+import terminalSvg from "@site/src/components/Icon/teleport-svg/terminal.svg";
+import databaseSvg from "@site/src/components/Icon/teleport-svg/database.svg";
+import appWindowSvg from "@site/src/components/Icon/teleport-svg/app-window.svg";
+import certificateSvg from "@site/src/components/Icon/teleport-svg/certificate.svg";
+import cloudSvg from "@site/src/components/Icon/teleport-svg/cloud.svg";
+import teleportSvg from "@site/src/components/Icon/teleport-svg/teleport.svg";
+
+
+
+Use Teleport to replace long-lived secrets with identity-based authentication for your machines and workloads.
+
+## Introduction to Machine & Workload Identity
+
+Teleport Machine & Workload Identity replaces static secrets across your infrastructure with short-lived certificates that are automatically issued and renewed for your Non-Human Identities (NHI).
+
+[What Teleport can do for non-human infrastructure access](./introduction.mdx)
+
+
+
+
+
+{/* lint ignore page-structure remark-lint */}
+
+## Getting started with Machine & Workload Identity
+
+The following steps will help you get started with Machine & Workload Identity. At the core of this flow is `tbot`, a lightweight agent that runs on your machines and workloads to automatically issue and renew short-lived certificates. This gives your systems secure, identity-based access to infrastructure and cloud providers without relying on static secrets.
+ + + + + + diff --git a/docs/pages/enroll-resources/machine-id/manifesto.mdx b/docs/pages/machine-workload-identity/manifesto.mdx similarity index 89% rename from docs/pages/enroll-resources/machine-id/manifesto.mdx rename to docs/pages/machine-workload-identity/manifesto.mdx index 8ef76db90c83e..f997c3063b28f 100644 --- a/docs/pages/enroll-resources/machine-id/manifesto.mdx +++ b/docs/pages/machine-workload-identity/manifesto.mdx @@ -1,6 +1,11 @@ --- -title: Machine ID Manifesto -description: A manifesto for Machine Identity +title: Machine & Workload Identity Manifesto +description: A manifesto for Machine & Workload Identity +sidebar_label: Manifesto +tags: + - conceptual + - mwi + - infrastructure-identity --- The world of machine identity is changing. Machines are trusted to complete @@ -48,5 +53,5 @@ Together, these three pillars create a strong foundation for AI, automations and other workloads running together in a predictable way that is secure and possible to reason about and maintain. -Teleport Workload & Machine Identity is on a mission to make this vision a +Teleport Machine & Workload Identity is on a mission to make this vision a reality. diff --git a/docs/pages/machine-workload-identity/troubleshooting.mdx b/docs/pages/machine-workload-identity/troubleshooting.mdx new file mode 100644 index 0000000000000..cc9b78456c198 --- /dev/null +++ b/docs/pages/machine-workload-identity/troubleshooting.mdx @@ -0,0 +1,360 @@ +--- +title: Machine & Workload Identity Troubleshooting Guide +description: Troubleshooting common issues with Machine & Workload Identity +sidebar_label: Troubleshooting +tags: + - conceptual + - mwi + - infrastructure-identity +--- + +This page provides resolution steps for issues that you may come across when +setting up Machine & Workload Identity (MWI). 
+
+## A bot failed to renew a certificate due to a "generation mismatch"
+
+### Symptoms
+
+The bot will log an error like this:
+
+```text
+ERROR: renewable cert generation mismatch: stored=3, presented=2
+```
+
+Subsequent connection attempts by the bot may see errors like the following:
+
+```text
+ERROR: failed direct dial to auth server: auth API: access denied [00]
+"\tauth API: access denied [00], failed dial to auth server through reverse tunnel: Get \"https://teleport.cluster.local/v2/configuration/name\": Get \"https://example.com:3025/webapi/find\": x509: cannot validate certificate for example.com because it doesn't contain any IP SANs"
+"\tGet \"https://teleport.cluster.local/v2/configuration/name\": Get \"https://example.com:3025/webapi/find\": x509: cannot validate certificate for example.com because it doesn't contain any IP SANs"
+```
+
+In particular, note the message `auth API: access denied`.
+
+In self-hosted Teleport deployments, the Teleport Auth Service will also provide
+some additional context:
+
+```text
+[AUTH] WARN lock targeting User:"bot-example" is in force: The bot user "bot-example" has been locked due to a certificate generation mismatch, possibly indicating a stolen certificate. auth/apiserver.go:224
+```
+
+### Explanation
+
+
+This applies only to bots using the `token` join method, which makes use of
+one-time-use shared secrets. Provider-specific join methods, such as GitHub or
+AWS IAM, will not be locked in this fashion unless another instance of the
+bot uses `token` joining.
+
+
+Machine & Workload Identity (with token-based joining) uses a certificate
+generation counter to detect potentially stolen renewable certificates. Each
+time a bot fetches a new renewable certificate, the Auth Service increments the
+counter, stores it on the backend, and embeds a copy of the counter in the
+certificate.
+ +If the counter embedded in your bot certificate doesn't match the counter +stored in Teleport's Auth Service, the renewal will fail and the bot user will +be automatically [locked](../identity-governance/locking.mdx). + +Renewable certificates are exclusively stored in the bot's internal data +directory, by default `/var/lib/teleport/bot`. It's possible to trigger this by +accident if multiple bots are started using the same internal data directory, or +if this internal data is otherwise being shared between multiple `tbot` +processes. + +Additionally, if a bot fails to save its freshly renewed certificates (for +example, due to a filesystem error) and crashes, it will attempt a renewal +with old certificates and trigger a lock. + +### Resolution + +Before unlocking the bot, try to determine if either of the two scenarios +described above apply. If the certificates were stolen, there may be +underlying security concerns that need to be addressed. + +Otherwise, first ensure only one `tbot` process is using the internal data +directory. Multiple bots can be run on a single system, but separate data +directories must be configured for each. + +Additionally, ensure the internal data is not being shared with or copied to any +other nodes, for example via a shared NFS volume. If you'd like to share +certificates between nodes, only copy or share content from destination +directories (usually `/opt/machine-id`) rather than the internal data directory +(by default, `/var/lib/teleport/bot`). + +Once you have addressed the underlying cause, follow these steps to reset a +locked bot: + 1. Remove the lock on the bot's user + 1. Reset the bot's generation counter by creating a new bot instance + +To remove the lock, first find and remove the lock targeting the bot user. 
For +this example, we'll assume the bot is named `example`, which will have an +associated Teleport user named `bot-example`: + +```code +$ tctl get locks +kind: lock +metadata: + id: 1658359514703080513 + name: 5cee949f-5203-4f3b-9805-dac35d798a16 +spec: + message: The bot user "bot-example" has been locked due to a certificate generation + mismatch, possibly indicating a stolen certificate. + target: + user: bot-example +version: v2 + +$ tctl rm lock/5cee949f-5203-4f3b-9805-dac35d798a16 +``` + +Next, use `tctl bots instances add` to generate a new join token for the +preexisting bot `example`: +```code +$ tctl bots instances add example +``` + +Finally, reconfigure the local `tbot` instance with the new token and restart +it. It will detect the new token and automatically reset its internal data +directory. The bot will be issued a new bot instance UUID once connected, and +the generation counter will be reset. + +## `tbot` shows a "bad certificate error" at startup + +### Symptoms + +Restarting a `tbot` process outputs a log like the following: + +```text +INFO [TBOT] Successfully loaded bot identity, valid: after=2022-07-21T21:49:26Z, before=2022-07-21T22:50:26Z, duration=1h1m0s | kind=tls, renewable=true, disallow-reissue=false, roles=[bot-test], principals=[-teleport-internal-join], generation=2 tbot/tbot.go:281 +ERRO [TBOT] Identity has expired. The renewal is likely to fail. 
(expires: 2022-07-21T22:50:26Z, current time: 2022-07-25T20:18:33Z) tbot/tbot.go:415 +WARN [TBOT] Note: onboarding config ignored as identity was loaded from persistent storage tbot/tbot.go:288 +ERRO [TBOT] Failed to resolve tunnel address Get "https://auth.example.com:3025/webapi/find": x509: cannot validate certificate for auth.example.com because it doesn't contain any IP SANs reversetunnel/transport.go:90 +ERRO [TBOT] Failed to resolve tunnel address Get "https://auth.example.com:3025/webapi/find": x509: cannot validate certificate for auth.example.com because it doesn't contain any IP SANs reversetunnel/transport.go:90 +ERROR: failed direct dial to auth server: Get "https://teleport.cluster.local/v2/configuration/name": remote error: tls: bad certificate +"\tGet \"https://teleport.cluster.local/v2/configuration/name\": remote error: tls: bad certificate, failed dial to auth server through reverse tunnel: Get \"https://teleport.cluster.local/v2/configuration/name\": Get \"https://auth.example.com:3025/webapi/find\": x509: cannot validate certificate for auth.example.com because it doesn't contain any IP SANs" +"\tGet \"https://teleport.cluster.local/v2/configuration/name\": Get \"https://auth.example.com:3025/webapi/find\": x509: cannot validate certificate for auth.example.com because it doesn't contain any IP SANs" +``` + +In particular, note the log line: "Identity has expired. The renewal is likely to +fail." + +### Explanation + +Token-joined bots are unable to reauthenticate to the Teleport Auth Service once +their certificates have expired. Tokens in token-based joining (as opposed to +AWS IAM and other join methods) can only be used once, so when the bot's +internal certificates expire, it will not be able to connect. + +When a bot's identity expires, certain parameters associated with the bot on the +Auth Service must be reset and a new joining token must be issued. 
The simplest +way to accomplish this is by removing and recreating the bot, which purges all +server-side data and issues a new joining token. + +### Resolution + +Use `tctl bots instances add` to create a new one-time use token for the bot: + +```code +$ tctl bots instances add example +``` + +Copy the resulting join token into the existing bot config—either the +`--token` CLI flag or the `onboarding.token` parameter in `tbot.yaml`—and +restart the bot. It will detect the new token and rejoin the cluster as normal. + +## SSH connections fail with `ssh: handshake failed: ssh: unable to authenticate` + +### Symptoms + +When attempting to connect to a node via SSH, connections fail with an error +like the following: + +```code +$ ssh -F /opt/machine-id/ssh_config bob@node.example.com +ERROR: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain + +ERROR: unable to execute tsh +executing `tsh proxy` +exit status 1 + +kex_exchange_identification: Connection closed by remote host +Connection closed by UNKNOWN port 65535 +``` + +In particular, note the `ssh: unable to authenticate` message. + +### Explanation + +This can occur when attempting to log into the node as a user not listed as a +principal on the SSH certificate. + +You can verify this by viewing the `tbot` logs and looking for the log message +when impersonated certificates for the matching services were renewed. + +In the following example, the only principal listed for the identity in +`/opt/machine-id` is `alice` (via the `access` role): +```text +INFO [TBOT] Successfully renewed impersonated certificates for directory /opt/machine-id, valid: after=2022-07-21T21:49:26Z, before=2022-07-21T22:50:26Z, duration=1h1m0s | kind=tls, renewable=false, disallow-reissue=true, roles=[access], principals=[alice -teleport-internal-join], generation=0 tbot/renew.go:630 +``` + +However, the SSH command attempted to log in as `bob`. 
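To confirm which principals a certificate actually contains, you can inspect it with `ssh-keygen -L`. The sketch below builds a throwaway CA-signed certificate so it runs anywhere; against a real bot identity you would instead point `-f` at the certificate written to your destination directory (for example, under `/opt/machine-id`; the exact filename depends on your `tbot` output configuration):

```sh
# Create a throwaway CA and user key, then sign the user key with the
# single principal "alice", mirroring the scenario above.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/ca"
ssh-keygen -q -t ed25519 -N '' -f "$tmp/user"
ssh-keygen -q -s "$tmp/ca" -I demo-identity -n alice "$tmp/user.pub"
# The "Principals:" section of the output lists the allowed login users.
ssh-keygen -L -f "$tmp/user-cert.pub"
rm -rf "$tmp"
```

If the `Principals:` list shows only `alice` while the SSH command attempts to log in as `bob`, the certificate, not the network, is the problem.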
+
+### Resolution
+
+Ensure the bot identity is allowed to log in as the requested user by taking any
+of the following actions:
+
+ - Changing the SSH command to log in as an allowed user
+ - Modifying the `access` role to allow the `bob` principal
+ - Adding a role granting login via the `bob` principal
+
+Note that if roles are added or modified, the certificates will need to be
+renewed for the changes to take effect. The bot will renew certificates on its
+own after the renewal interval (by default, 20 minutes), but you can trigger a
+renewal immediately by either restarting the `tbot` process or sending it a
+reload signal:
+
+```code
+## If using systemd, you can restart the process:
+$ systemctl restart machine-id
+## Alternatively, you can send `tbot` a reload signal directly:
+$ pkill -sigusr1 tbot
+```
+
+## Database requests fail with `database "example" not found`, but the database exists
+
+### Symptoms
+
+When requesting certificates for Teleport-protected
+[databases](../enroll-resources/database-access/database-access.mdx), the certificate request
+fails with an error like the following:
+
+```text
+ERROR: Failed to generate impersonated certs for directory /opt/machine-id: database "example" not found
+database "example" not found
+```
+
+However, the database exists and can be seen by regular users via `tsh`:
+
+```code
+$ tsh db ls
+Name    Description Allowed Users Labels  Connect
+------- ----------- ------------- ------- -------
+example             [alice]       env=dev
+```
+
+### Explanation
+
+Unlike regular Teleport users, Machine & Workload Identity bot users are granted
+only minimal Teleport [RBAC permissions](../reference/access-controls/roles.mdx) and are not
+allowed to view or list databases by default unless granted permission via one
+or more roles.
+ +### Resolution + +{/* vale messaging.protocol-products = NO */} +Per the [Machine & Workload Identity Database Access Guide](./access-guides/databases.mdx), ensure at +least one role providing database permissions has been granted to the +output service listed in the error. +{/* vale messaging.protocol-products = YES */} + +For example, note the `rules` section in the following example role: +```yaml +kind: role +version: v5 +metadata: + name: machine-id-db +spec: + allow: + db_labels: + '*': '*' + db_names: [example] + db_users: [alice] + rules: + - resources: [db_server, db] + verbs: [read, list] +``` + +Ensure the bot has a role that grants it at least these RBAC rules. If desired +you can examine bot roles with `tctl` to ensure the necessary `rules` have been +granted: + +```code +$ tctl get role/machine-id-db +``` + +If the role is missing database permissions, it can be modified in your text +editor: + +```code +$ tctl edit role/machine-id-db +``` + +Edit the role, then save and close the file to apply your changes. + +(!docs/pages/includes/create-role-using-web.mdx!) + + +By default, output services (like `/opt/machine-id`) are granted all roles provided +to the bot via `tctl bots add --roles=...`, but it's possible to grant only a +subset of these roles using the `roles: ...` parameter in `tbot.yaml`. + +If permissions are unexpectedly missing, ensure `tbot.yaml` requests your +database role, either by relying on default behavior or adding the role to the +`roles: ...` list. + + +Once fixed, restart or reload the `tbot` clients for the updated role to take +effect. 
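+
+As an illustration, the following `tbot.yaml` fragment sketches an identity
+output service that requests only the `machine-id-db` role rather than all
+roles granted to the bot. The service type and field names here are a sketch
+to check against the configuration reference for your Teleport version:
+
+```yaml
+services:
+  - type: identity
+    # Request only a subset of the roles granted to the bot.
+    roles: [machine-id-db]
+    destination:
+      type: directory
+      path: /opt/machine-id
+```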
+
+If the bot was not granted the role initially, the simplest solution is to
+delete and recreate the bot, being sure to include the role in the `--roles=...`
+flag:
+
+```code
+$ tctl bots rm example
+$ tctl bots add example --roles=foo,bar,machine-id-db
+```
+
+## Destination kubernetes_secret: `identity-output` must be a directory in exec plugin mode
+
+By default, when outputting a Kubernetes identity, `tbot` outputs make use of a Kubernetes exec
+plugin to always provide the latest version of the credentials.
+
+When outputting a Kubernetes identity to a Kubernetes secret, however, it is important to disable
+the use of the `exec` plugin by adding `disable_exec_plugin: true` to the output. This means that
+a static `kubeconfig` file with embedded short-lived credentials is written instead:
+
+```yaml
+services:
+  - type: kubernetes
+    # Specify the name of the Kubernetes cluster you wish the credentials to
+    # grant access to.
+    kubernetes_cluster: example-k8s-cluster
+    # Required when outputting a Kubernetes identity to a Kubernetes secret.
+    disable_exec_plugin: true
+    destination:
+      type: kubernetes_secret
+      # For this guide, identity-output is used as the secret name.
+      # You may wish to customize this. Multiple outputs cannot share the same
+      # destination.
+      name: identity-output
+```
+
+Failure to add the `disable_exec_plugin` flag will result in a warning being displayed:
+`Destination kubernetes_secret: identity-output must be a directory in exec plugin mode`.
+
+## Configuring `tbot` for split DNS proxies
+
+When you have deployed your Proxy Service in such a way that it is
+accessible via two different DNS names, e.g. an internal and an external
+address, you may find that a `tbot` configured with one of these addresses
+attempts to use the other, causing connections to fail.
+
+This is because `tbot` queries an auto-configuration endpoint exposed by the
+Proxy Service to determine the canonical address to use when connecting.
+
+To fix this, set the `TBOT_USE_PROXY_ADDR=yes` environment variable for the
+`tbot` process. This configures `tbot` to prefer using the address that you
+have explicitly provided. This only functions correctly in cases where TLS
+routing/multiplexing is enabled for the Teleport cluster.
diff --git a/docs/pages/machine-workload-identity/use-cases/ai-agents-mwi.mdx b/docs/pages/machine-workload-identity/use-cases/ai-agents-mwi.mdx
new file mode 100644
index 0000000000000..349deb077aa8b
--- /dev/null
+++ b/docs/pages/machine-workload-identity/use-cases/ai-agents-mwi.mdx
@@ -0,0 +1,26 @@
+---
+title: AI Agents with Machine & Workload Identity
+description: Use Machine & Workload Identity to securely grant agentic AI access to infrastructure.
+sidebar_label: Agentic AI
+tags:
+ - conceptual
+ - mwi
+ - ai
+---
+
+With Teleport, you can enforce access and privileges for agents.
+
+Security must be enforced deterministically; AI agents cannot be trusted to follow high-level instructions like "don't delete production".
+Teleport solves this by issuing each agent its own identity and requiring the agent's actions (for example, database queries) to flow through the Teleport proxy.
+This allows Teleport to apply Role-Based Access Control (RBAC) at both the network and protocol level.
+
+Teleport can secure infrastructure components such as SSH servers, Kubernetes clusters, databases, or MCP servers when accessed by agents.
+All queries, commands, and requests executed by the agent are logged, providing full visibility and [auditability](../../reference/deployment/monitoring/audit.mdx).
+
+## Interested in a design partnership?
+
+
+If you're exploring how to secure AI Agents with Teleport Machine & Workload Identity, we'd love to hear from you.
+[Contact us](mailto:product-management@goteleport.com?subject=AI%20Agents%20Design%20Partnership) to share your use case and learn more about opportunities for a design partnership. + + diff --git a/docs/pages/machine-workload-identity/use-cases/hybrid-multi-mwi.mdx b/docs/pages/machine-workload-identity/use-cases/hybrid-multi-mwi.mdx new file mode 100644 index 0000000000000..eda0f198354c6 --- /dev/null +++ b/docs/pages/machine-workload-identity/use-cases/hybrid-multi-mwi.mdx @@ -0,0 +1,55 @@ +--- +title: Hybrid & Multi-Cloud with Machine & Workload Identity +description: Use Machine & Workload Identity to streamline the management of identity and access for machines and workloads across cloud providers and on-prem environments. +sidebar_label: Hybrid & Multi-Cloud +tags: + - conceptual + - mwi + - infrastructure-identity +--- + +Teleport Machine & Workload Identity streamlines hybrid and multi-cloud operations while reducing management costs. +It integrates with Terraform, Pulumi, AWS, GCP, Azure, and also provides solutions for on-premises environments. + +## Choose your cloud provider + +, + to: "/docs/machine-workload-identity/workload-identity/aws-roles-anywhere", + name: "AWS Roles Anywhere and Workload Identity", + }, + { + icon: , + to: "/docs/machine-workload-identity/workload-identity/gcp-workload-identity-federation-jwt", + name: "Google Cloud Workload Identity Federation", + }, + { + icon: , + to: "/docs/machine-workload-identity/workload-identity/azure-federated-credentials", + name: "Azure Federated Credentials", + }, + { + icon: , + name: "Oracle Cloud Infrastructure and Workload Identity (Guide coming soon!)", + } + ]} +/> + +## Manage access for all clouds with one credential + +Managing IAM for machines and applications in one cloud is doable. But when organizations move outside one cloud provider, either to a hybrid on-prem architecture or multiple cloud providers (or both), complexity grows quickly. 
Distributing, managing, and rotating cloud credentials for on-premises servers or for services that span multiple clouds introduces operational complexity and increases the risk of mismanagement.
+
+Teleport issues credentials that are compatible with all major cloud providers and can also be used for mTLS between applications, making it easier to secure communication between clouds.
+
+## Secure and auditable access with ephemeral credentials
+
+Teleport generates a credential in X.509 or JWT form, compatible with:
+
+- AWS IAM Roles Anywhere
+- Google Cloud Workload Identity Federation
+- Microsoft Entra Workload ID
+- OCI Workload Identity Federation
+
+You can use these credentials with Infrastructure-as-Code tools like Terraform and Pulumi, with applications that need to access a cloud provider API from outside that cloud (e.g., sending logs to an AWS S3 bucket from GCP), and with human users authenticating with cloud providers via the CLI. The credentials are ephemeral, with a configurable time-to-live, and Teleport automatically rotates them. Teleport provides a comprehensive audit log of every credential issued to make compliance reporting easy.
diff --git a/docs/pages/machine-workload-identity/use-cases/iac-mwi.mdx b/docs/pages/machine-workload-identity/use-cases/iac-mwi.mdx
new file mode 100644
index 0000000000000..737ded671a35b
--- /dev/null
+++ b/docs/pages/machine-workload-identity/use-cases/iac-mwi.mdx
@@ -0,0 +1,62 @@
+---
+title: Infrastructure-as-Code with Machine & Workload Identity
+description: Use Machine & Workload Identity to replace the use of static secrets in Infrastructure-as-Code workflows.
+sidebar_label: Infrastructure-as-Code
+tags:
+ - conceptual
+ - mwi
+ - infrastructure-identity
+---
+
+Teleport Machine & Workload Identity replaces long-lived static secrets in Infrastructure-as-Code (IaC) tools with short-lived,
+automatically generated certificates.
Machine & Workload Identity supports Terraform and Pulumi, and has native integrations with +AWS, GCP and Azure, as well as solutions for on-prem configuration. + +## Choose your cloud provider + +, + to: "/docs/machine-workload-identity/workload-identity/aws-roles-anywhere", + name: "AWS Roles Anywhere and Workload Identity", + }, + { + icon: , + to: "/docs/machine-workload-identity/workload-identity/gcp-workload-identity-federation-jwt", + name: "Google Cloud Workload Identity Federation", + }, + { + icon: , + to: "/docs/machine-workload-identity/workload-identity/azure-federated-credentials", + name: "Azure Federated Credentials and Workload Identity", + }, + { + icon: , + name: "Oracle Cloud Infrastructure and Workload Identity (Guide coming soon!)", + } + ]} +/> + +## Eliminate secrets from your Infrastructure-as-Code + +IaC repositories require credentials with wide-ranging privileges in order to create, delete and manage all manner of infrastructure, +from networks to Kubernetes clusters to cloud IAM resources. Exfiltration of these credentials, in the form of AWS key pairs, GCP service key +files, Azure service principals, etc, can be catastrophic. + +Best practices for managing these secrets include very controlled chains of access, and regular rotation, which causes drag on platform, security and developer teams. +Teleport eliminates the need for these static, long-lived secrets by issuing short-lived, identity-based credentials when your IaC runs. + +## Secure and auditable access with ephemeral credentials + +When your IaC runs, Teleport generates a short-lived credential compatible with: + +- AWS IAM Roles Anywhere +- Google Cloud Workload Identity Federation +- Microsoft Entra Workload ID +- OCI Workload Identity Federation + +The cloud provider issues a temporary credential linked to an IAM role, which allows the IaC workflow to perform its tasks. This credential exists only for the duration of the run and automatically expires once it completes. 
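+
+As a concrete sketch, an IaC runner holding a Teleport-issued X.509 SVID can
+exchange it for temporary AWS credentials through the AWS IAM Roles Anywhere
+credential helper (`aws_signing_helper`). The file paths and ARNs below are
+placeholders, not values from this guide:
+
+```ini
+# ~/.aws/config — hypothetical profile used by a Terraform run
+[profile terraform]
+credential_process = aws_signing_helper credential-process --certificate /opt/workload-id/svid.pem --private-key /opt/workload-id/svid_key.pem --trust-anchor-arn <trust-anchor-arn> --profile-arn <profile-arn> --role-arn <role-arn>
+```
+
+Terraform's AWS provider can then select this profile via `AWS_PROFILE=terraform`,
+so every run uses a certificate that expires shortly after the run completes.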
+By design, this minimizes the risk of credential exposure and eliminates the need for manual rotation or long-term management. Because all major cloud providers support this method of authentication, IaC pipelines that manage resources across multiple clouds can rely on a single mechanism instead of juggling separate credential processes. + +Teleport further enhances this model by recording every credential issuance in a comprehensive [audit log](../../reference/deployment/monitoring/audit.mdx), simplifying compliance and reporting. diff --git a/docs/pages/machine-workload-identity/use-cases/mwi-ci-cd.mdx b/docs/pages/machine-workload-identity/use-cases/mwi-ci-cd.mdx new file mode 100644 index 0000000000000..acb61c6b3e338 --- /dev/null +++ b/docs/pages/machine-workload-identity/use-cases/mwi-ci-cd.mdx @@ -0,0 +1,92 @@ +--- +title: CI/CD with Machine & Workload Identity +description: Use Machine & Workload Identity to replace static secrets in CI/CD pipelines for accessing servers, databases, Kubernetes clusters and applications. +sidebar_label: CI/CD +tags: + - ci-cd + - conceptual + - mwi + - infrastructure-identity +--- + +Teleport Machine & Workload Identity eliminates the need for long-lived static secrets in CI/CD pipelines by issuing short-lived +certificates at runtime. Teleport natively integrates with many CI/CD providers and deployment targets. 
+
+## Choose your CI/CD platform
+
+,
+ to: "/docs/machine-workload-identity/deployment/azure-devops",
+ name: "Azure DevOps",
+ },
+ {
+ icon: ,
+ to: "/docs/machine-workload-identity/deployment/bitbucket",
+ name: "Bitbucket Pipelines",
+ },
+ {
+ icon: ,
+ to: "/docs/machine-workload-identity/deployment/circleci",
+ name: "CircleCI",
+ },
+ {
+ icon: ,
+ to: "/docs/machine-workload-identity/deployment/gitlab",
+ name: "GitLab CI",
+ },
+ {
+ icon: ,
+ to: "/docs/machine-workload-identity/deployment/github-actions",
+ name: "GitHub Actions",
+ },
+ {
+ icon: ,
+ to: "/docs/machine-workload-identity/deployment/jenkins",
+ name: "Jenkins",
+ },
+ {
+ icon: ,
+ to: "/docs/zero-trust-access/infrastructure-as-code/terraform-provider/spacelift",
+ name: "Spacelift",
+ }
+ ]}
+/>
+
+## Eliminate secrets from your CI/CD pipeline
+
+In a typical CI/CD setup, a pipeline such as one running in GitHub Actions on GitHub's infrastructure outside the corporate network builds a
+container image, pushes it to a container registry, and updates an application running in a container orchestration platform (such as Kubernetes).
+
+This workflow commonly relies on sensitive credentials stored in the CI/CD system, such as:
+
+- Cloud provider access credentials (e.g., AWS, GCP, Azure)
+- Configuration files or tokens for deploying to orchestration platforms
+
+Traditionally, platform teams would generate static credentials and share them with development teams, who were then responsible for manually updating
+repositories. Although the credential generation process emits a security log, teams often lack a reliable way to correlate it with GitHub's audit logs
+to verify timely rotation. Sharing credentials via Slack, email, file drops, or password managers adds further risk. Moreover, exposing the Kubernetes API
+to the internet introduces unnecessary surface area for attack.
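+
+As a sketch of the alternative, a GitHub Actions job can fetch short-lived
+Kubernetes credentials through Teleport instead of reading a static secret.
+The action names, versions, and input values below are assumptions to verify
+against the Machine & Workload Identity deployment guide for GitHub Actions:
+
+```yaml
+jobs:
+  deploy:
+    runs-on: ubuntu-latest
+    permissions:
+      # Allow the job to request a GitHub OIDC token for joining the cluster.
+      id-token: write
+      contents: read
+    steps:
+      - uses: teleport-actions/setup@v1
+        with:
+          version: 16.4.0
+      - uses: teleport-actions/auth-k8s@v2
+        with:
+          proxy: teleport.example.com:443
+          token: example-bot-token
+          kubernetes-cluster: my-cluster
+      - run: kubectl rollout restart deployment/my-app
+```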
+ +Teleport eliminates the need for these shared secrets by issuing short-lived, identity-based credentials at runtime. Rather than storing static access tokens in the CI/CD +system, pipelines can request ephemeral credentials that expire automatically, reducing the risk of credential leaks and simplifying access management. + +## Secure and auditable access with ephemeral credentials + +In CI/CD pipelines, Teleport eliminates the need for static credentials by generating ephemeral, short-lived identities at the start of each job: + +- A SPIFFE Verifiable Identity Document (SVID) used with AWS IAM Roles Anywhere for secure AWS access (e.g., to ECR or S3). +- A time-bound Kubernetes config file to interact with the cluster. + +These credentials are issued as short-lived JWT-SVIDs or X.509 certificates that expire automatically when the job completes. +This improves the security posture and simplifies credential management for platform teams. All usage is logged by Teleport, +and Kubernetes commands from the CI/CD runner are routed through the Teleport Proxy, enabling secure access to private +API endpoints and full audit visibility. + +### Further reading + +- [Architecture](../../reference/architecture/machine-id-architecture.mdx): A technical deep-dive into how Machine & + Workload Identity works. +- [Reference](../../reference/machine-workload-identity/machine-workload-identity.mdx): Complete documentation of available + configuration options. 
diff --git a/docs/pages/machine-workload-identity/use-cases/use-cases.mdx b/docs/pages/machine-workload-identity/use-cases/use-cases.mdx
new file mode 100644
index 0000000000000..fde8d961e4cd0
--- /dev/null
+++ b/docs/pages/machine-workload-identity/use-cases/use-cases.mdx
@@ -0,0 +1,12 @@
+---
+title: Machine & Workload Identity Use Cases
+description: Popular use cases for Teleport Machine & Workload Identity
+sidebar_label: Use Cases
+sidebar_position: 3
+tags:
+ - conceptual
+ - mwi
+ - infrastructure-identity
+---
+
+
diff --git a/docs/pages/machine-workload-identity/use-cases/workload-mwi.mdx b/docs/pages/machine-workload-identity/use-cases/workload-mwi.mdx
new file mode 100644
index 0000000000000..0f246b969f7f2
--- /dev/null
+++ b/docs/pages/machine-workload-identity/use-cases/workload-mwi.mdx
@@ -0,0 +1,36 @@
+---
+title: Service to Service mTLS with Machine & Workload Identity
+description: Use Machine & Workload Identity to issue flexible, short-lived cryptographic identity documents for service to service authentication.
+sidebar_label: Service to Service mTLS
+tags:
+ - conceptual
+ - mwi
+ - infrastructure-identity
+---
+
+Using Workload Identity certificates reduces the risk of credential exfiltration and provides
+engineers with a built-in authentication method for new services. Each certificate is tied to
+the identity of the application itself, rather than relying on a shared certificate infrastructure
+or API key that can be copied or reused. This ensures that authentication and authorization are both
+more secure and reliable.
+
+## Eliminate secrets from your applications
+
+Teleport issues special credentials to applications in the form of X.509 certificates or JWTs, after
+verifying their identity (to get started, see [Introduction to Workload Identity](../workload-identity/introduction.mdx)).
+These credentials are automatically rotated every 20 minutes by default. They contain a URI that
+uniquely identifies the application.
Applications using the credentials automatically gain mTLS,
+and can verify that a request or response not only comes from a trusted certificate, but from a specific trusted application.
+This makes it possible to guarantee separation of tenants, geographic areas, etc.
+
+## Improve developer efficiency and experience
+
+With Teleport Machine & Workload Identity powering application-to-application authentication, developers can use
+standardized open-source libraries in their services to request a credential, and not worry about setting up API keys or
+integrating with custom PKI. Teleport Workload Identity credentials follow the SPIFFE standard, making them interoperable
+with a wide ecosystem of libraries and SDKs.
+
+### Further reading
+
+- [Best Practices for Teleport Workload Identity](../workload-identity/best-practices.mdx): Learn how Teleport verifies applications and issues credentials
+- [Introduction to SPIFFE](../workload-identity/spiffe.mdx): Learn about the open-source standard for workload identities and federation
diff --git a/docs/pages/enroll-resources/workload-identity/aws-oidc-federation.mdx b/docs/pages/machine-workload-identity/workload-identity/aws-oidc-federation.mdx
similarity index 98%
rename from docs/pages/enroll-resources/workload-identity/aws-oidc-federation.mdx
rename to docs/pages/machine-workload-identity/workload-identity/aws-oidc-federation.mdx
index bf98e03e298b0..130ec1f025619 100644
--- a/docs/pages/enroll-resources/workload-identity/aws-oidc-federation.mdx
+++ b/docs/pages/machine-workload-identity/workload-identity/aws-oidc-federation.mdx
@@ -1,6 +1,12 @@
---
title: Configuring Workload Identity and AWS OIDC Federation
+sidebar_label: AWS OIDC Federation
description: Configuring AWS to accept Workload Identity JWTs as authentication using OIDC Federation
+tags:
+ - how-to
+ - mwi
+ - infrastructure-identity
+ - aws
---

Teleport Workload Identity issues flexible short-lived identities in JWT format.
@@ -55,7 +61,7 @@ This guide covers configuring OIDC federation. For Roles Anywhere, see - `tbot` must already be installed and configured on the host where the workloads which need to access Teleport Workload Identity will run. For more information, see the [deployment - guides](../machine-id/deployment/deployment.mdx). + guides](../deployment/deployment.mdx). Issuing JWT SVIDs with Teleport Workload Identity requires at least Teleport @@ -390,7 +396,7 @@ Take your already deployed `tbot` service and configure it to issue SPIFFE SVIDs by adding the following to the `tbot` configuration file: ```yaml -outputs: +services: - type: workload-identity-jwt destination: type: directory @@ -485,5 +491,5 @@ Workload Identity. Teleport Workload Identity. - [Best Practices](./best-practices.mdx): Best practices for using Workload Identity in Production. -- Read the [configuration reference](../../reference/machine-id/configuration.mdx) to explore +- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore all the available configuration options. 
diff --git a/docs/pages/enroll-resources/workload-identity/aws-roles-anywhere.mdx b/docs/pages/machine-workload-identity/workload-identity/aws-roles-anywhere.mdx similarity index 96% rename from docs/pages/enroll-resources/workload-identity/aws-roles-anywhere.mdx rename to docs/pages/machine-workload-identity/workload-identity/aws-roles-anywhere.mdx index a874634baffb2..a1bedef597252 100644 --- a/docs/pages/enroll-resources/workload-identity/aws-roles-anywhere.mdx +++ b/docs/pages/machine-workload-identity/workload-identity/aws-roles-anywhere.mdx @@ -1,6 +1,12 @@ --- title: Configuring Workload Identity and AWS Roles Anywhere +sidebar_label: AWS Roles Anywhere description: Configuring AWS to accept Workload Identity certificates as authentication using AWS Roles Anywhere +tags: + - how-to + - mwi + - infrastructure-identity + - aws --- Teleport Workload Identity issues flexible short-lived identities in X.509 @@ -23,11 +29,12 @@ AWS APIs in a few ways: - Workload Identity works with any AWS client, including the command-line tool but also their SDKs. -- Using the Teleport Application Service to access AWS does not work with Machine ID - and therefore cannot be used when a machine needs to authenticate with AWS. +- Using the Teleport Application Service to access AWS does not work with + Machine & Workload Identity and therefore cannot be used when a machine needs + to authenticate with AWS. Whilst this guide is primarily aimed at allowing a machine to access AWS, -the `tsh svid issue` command can be used in place of Machine ID to allow a human +the `tsh svid issue` command can be used in place of `tbot` to allow a human user to authenticate with using AWS Roles Anywhere. ## OIDC Federation vs Roles Anywhere @@ -56,7 +63,7 @@ This guide covers configuring Roles Anywhere, for OIDC federation, see - (!docs/pages/includes/tctl.mdx!) 
- `tbot` must already be installed and configured on the host where the workloads which need to access Teleport Workload Identity will run. For more -information, see the [deployment guides](../machine-id/deployment/deployment.mdx). +information, see the [deployment guides](../deployment/deployment.mdx). ### Deciding on a SPIFFE ID structure @@ -266,7 +273,7 @@ Take your already deployed `tbot` service and configure it to issue SPIFFE SVIDs by adding the following to the `tbot` configuration file: ```yaml -outputs: +services: - type: workload-identity-x509 destination: type: directory @@ -344,5 +351,5 @@ The official AWS documentation for Roles Anywhere. Workload Identity. - [Best Practices](./best-practices.mdx): Best practices for using Workload Identity in Production. -- Read the [configuration reference](../../reference/machine-id/configuration.mdx) to explore +- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore all the available configuration options. 
diff --git a/docs/pages/enroll-resources/workload-identity/azure-federated-credentials.mdx b/docs/pages/machine-workload-identity/workload-identity/azure-federated-credentials.mdx similarity index 97% rename from docs/pages/enroll-resources/workload-identity/azure-federated-credentials.mdx rename to docs/pages/machine-workload-identity/workload-identity/azure-federated-credentials.mdx index 9c457be5fd993..4e83b55afe2e8 100644 --- a/docs/pages/enroll-resources/workload-identity/azure-federated-credentials.mdx +++ b/docs/pages/machine-workload-identity/workload-identity/azure-federated-credentials.mdx @@ -1,6 +1,12 @@ --- title: Configuring Workload Identity and Azure Federated Credentials +sidebar_label: Azure Federated Credentials description: Configuring Azure to accept Workload Identity JWTs as authentication using Azure Federated Credentials +tags: + - how-to + - mwi + - infrastructure-identity + - azure --- Teleport Workload Identity issues flexible short-lived identities in JWT format. @@ -27,8 +33,8 @@ protect Azure APIs in a few ways: - Workload Identity works with any Azure client, including the command-line tool but also their SDKs. - Using the Teleport Application Service to access Azure does not work with - Machine ID and therefore cannot be used when a machine needs to authenticate - with Azure. + Machine & Workload Identity and therefore cannot be used when a machine needs + to authenticate with Azure. ## Prerequisites @@ -38,7 +44,7 @@ protect Azure APIs in a few ways: - `tbot` must already be installed and configured on the host where the workloads which need to access Teleport Workload Identity will run. For more information, see the [deployment - guides](../machine-id/deployment/deployment.mdx). + guides](../deployment/deployment.mdx). - An Azure resource group and subscription you wish to grant the workload access to. 
@@ -247,7 +253,7 @@ Take your already deployed `tbot` service and configure it to issue SPIFFE SVIDs by adding the following to the `tbot` configuration file: ```yaml -outputs: +services: - type: workload-identity-jwt destination: type: directory @@ -338,5 +344,5 @@ Workload Identity. Teleport Workload Identity. - [Best Practices](./best-practices.mdx): Best practices for using Workload Identity in Production. -- Read the [configuration reference](../../reference/machine-id/configuration.mdx) to explore +- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore all the available configuration options. diff --git a/docs/pages/enroll-resources/workload-identity/best-practices.mdx b/docs/pages/machine-workload-identity/workload-identity/best-practices.mdx similarity index 96% rename from docs/pages/enroll-resources/workload-identity/best-practices.mdx rename to docs/pages/machine-workload-identity/workload-identity/best-practices.mdx index d4f7d32d55487..18a2258567952 100644 --- a/docs/pages/enroll-resources/workload-identity/best-practices.mdx +++ b/docs/pages/machine-workload-identity/workload-identity/best-practices.mdx @@ -1,6 +1,11 @@ --- title: Best Practices for Teleport Workload Identity +sidebar_label: Best Practices description: Answers common questions and describes best practices for using Teleport Workload Identity in production. +tags: + - conceptual + - mwi + - infrastructure-identity --- This page covers common questions and best practices for using Teleport's @@ -88,7 +93,7 @@ The SPIFFE SDKs are available in a number of languages: To configure the SPIFFE Workload API, follow the instructions in [Getting Started with Workload Identity](./getting-started.mdx). 
-### Using the `spiffe-svid` Output +### Using the `spiffe-svid` output service In cases where your workload is not written in a language that has a SPIFFE SDK, `tbot` can be configured to write the SVID, SVID key and trust bundle to files @@ -99,7 +104,7 @@ trust bundle for mTLS. If the workload is long-running, it must watch these files and reload them when changes occur. This accounts for renewals of the short-lived SVID and CA rotations. -To configure the `spiffe-svid` output type follow the instructions in +To configure the `spiffe-svid` output service type follow the instructions in [Getting Started with Workload Identity](./getting-started.mdx). ### Proxy @@ -141,7 +146,8 @@ can then configure Postgres to map this common name to `myuser`. There are certain circumstances where you may wish to use Teleport Workload ID to authenticate a workload's access to a database. -This differs from using Machine ID with the Database Service in a few ways: +This differs from using Machine & Workload Identity with the Database Service in +a few ways: - The connection is made directly from the workload to the database, rather than through Teleport's Proxy Service and Database Service. diff --git a/docs/pages/enroll-resources/workload-identity/federation.mdx b/docs/pages/machine-workload-identity/workload-identity/federation.mdx similarity index 96% rename from docs/pages/enroll-resources/workload-identity/federation.mdx rename to docs/pages/machine-workload-identity/workload-identity/federation.mdx index 346dba351e494..5dc21e9e8aaeb 100644 --- a/docs/pages/enroll-resources/workload-identity/federation.mdx +++ b/docs/pages/machine-workload-identity/workload-identity/federation.mdx @@ -1,6 +1,10 @@ --- title: SPIFFE Federation description: An overview of the Teleport Workload Identity SPIFFE Federation feature. +tags: + - conceptual + - mwi + - infrastructure-identity --- Federation allows a relationship to be established between your Teleport @@ -99,7 +103,7 @@ Workload API. 
Workload Identity. - [Best Practices](./best-practices.mdx): Best practices for using Workload Identity in Production. -- Read the [configuration reference](../../reference/machine-id/configuration.mdx) to explore +- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore all the available configuration options. - Read the [SPIFFE Federation Specification](https://github.com/spiffe/spiffe/blob/main/standards/SPIFFE_Federation.md) to understand the technical details of SPIFFE Federation. diff --git a/docs/pages/enroll-resources/workload-identity/gcp-workload-identity-federation-jwt.mdx b/docs/pages/machine-workload-identity/workload-identity/gcp-workload-identity-federation-jwt.mdx similarity index 97% rename from docs/pages/enroll-resources/workload-identity/gcp-workload-identity-federation-jwt.mdx rename to docs/pages/machine-workload-identity/workload-identity/gcp-workload-identity-federation-jwt.mdx index 52bd95401815b..ad800081eec16 100644 --- a/docs/pages/enroll-resources/workload-identity/gcp-workload-identity-federation-jwt.mdx +++ b/docs/pages/machine-workload-identity/workload-identity/gcp-workload-identity-federation-jwt.mdx @@ -1,6 +1,12 @@ --- title: Configuring Workload Identity and GCP Workload Identity Federation with JWTs +sidebar_label: GCP Workload Identity Federation with JWTs description: Configuring GCP to accept Workload Identity JWTs as authentication using Workload Identity Federation +tags: + - how-to + - mwi + - infrastructure-identity + - google-cloud --- Teleport Workload Identity issues flexible short-lived identities in JWT format. @@ -26,8 +32,9 @@ GCP APIs in a few ways: recorded in Teleport's audit log. - Workload Identity works with any GCP client, including the command-line tool but also their SDKs. -- Using the Teleport Application Service to access GCP does not work with Machine ID - and therefore cannot be used when a machine needs to authenticate with AWS. 
+- Using the Teleport Application Service to access GCP does not work with
+  Machine & Workload Identity and therefore cannot be used when a machine needs
+  to authenticate with GCP.
 
 ## Prerequisites
 
@@ -36,7 +43,7 @@ GCP APIs in a few ways:
 
 - (!docs/pages/includes/tctl.mdx!)
 - `tbot` must already be installed and configured on the host where the
 workloads which need to access Teleport Workload Identity will run. For more
-information, see the [deployment guides](../machine-id/deployment/deployment.mdx).
+information, see the [deployment guides](../deployment/deployment.mdx).
 
 Issuing JWT SVIDs with Teleport Workload Identity requires at minimum version
@@ -315,7 +322,7 @@ Take your already deployed `tbot` service and configure it to issue SPIFFE SVIDs
 by adding the following to the `tbot` configuration file:
 
 ```yaml
-outputs:
+services:
 - type: workload-identity-jwt
   destination:
     type: directory
@@ -412,5 +419,5 @@ Workload Identity.
   Teleport Workload Identity.
 - [Best Practices](./best-practices.mdx): Best practices for using Workload
   Identity in Production.
-- Read the [configuration reference](../../reference/machine-id/configuration.mdx) to explore
+- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore
 all the available configuration options.
diff --git a/docs/pages/machine-workload-identity/workload-identity/getting-started.mdx b/docs/pages/machine-workload-identity/workload-identity/getting-started.mdx
new file mode 100644
index 0000000000000..87a91643d001e
--- /dev/null
+++ b/docs/pages/machine-workload-identity/workload-identity/getting-started.mdx
@@ -0,0 +1,258 @@
+---
+title: Getting Started with Workload Identity
+sidebar_label: Getting Started
+description: Getting started with Teleport Workload Identity for SPIFFE
+tags:
+  - get-started
+  - mwi
+  - infrastructure-identity
+---
+
+Teleport's Workload Identity issues flexible short-lived identities intended
+for workloads. It is compatible with the industry-standard SPIFFE specification,
+meaning that it can be used in place of other SPIFFE-compatible identity
+providers.
+
+## How it works
+
+In this guide, you'll configure the RBAC necessary to allow a Bot to issue
+workload identity credentials and then configure `tbot` to expose a SPIFFE
+Workload API endpoint. You can then connect your workloads to this endpoint to
+receive SPIFFE SVID-compatible workload identity credentials.
+
+## Prerequisites
+
+(!docs/pages/includes/edition-prereqs-tabs.mdx!)
+
+- (!docs/pages/includes/tctl.mdx!)
+- `tbot` must already be installed and configured on the host where the
+  workloads which need to access Teleport Workload Identity will run. For more
+  information, see the [deployment guides](../deployment/deployment.mdx).
+
+## Step 1/4. Configure Workload Identity
+
+First, you will need to create a Workload Identity resource.
+
+This resource is the primary way that Teleport Workload Identity is configured.
+Each Workload Identity resource represents the configuration of an identity for a
+specific workload or a template to be used when representing the identity of a
+group of workloads. The Workload Identity resource specifies a number of key
+things, including:
+
+- The name of the Workload Identity, which will be needed when issuing it.
+- The SPIFFE ID that will be included in credentials issued for this
+  WorkloadIdentity.
+- Any rules around when this Workload Identity can be used to issue credentials.
+
+Before proceeding, you'll want to determine the SPIFFE ID path that your
+workload will use. In our example, we'll use `/svc/foo`. We provide more
+guidance on choosing a SPIFFE ID structure in the
+[Best Practices](./best-practices.mdx) guide.
+ +Create a new file called `workload-identity.yaml`: + +```yaml +kind: workload_identity +version: v1 +metadata: + name: example-workload-identity + labels: + example: getting-started +spec: + spiffe: + id: /svc/foo +``` + +Replace: + +- `example-workload-identity` with a name that describes your use-case. +- `/svc/foo` with the SPIFFE ID path you have decided on issuing. + +Use `tctl create -f ./workload-identity.yaml` to create the Workload Identity. + +Now, you'll need to create a role that will grant access to the Workload Identity +that you have just created. As with other Teleport resources, access is granted +by specifying label matchers on the role that will match the labels on the +resource itself. + +In addition to granting access to the resource, we will also need to grant the +ability to read and list the Workload Identity resource type. + +Create `workload-identity-issuer-role.yaml`: + +```yaml +kind: role +version: v6 +metadata: + name: example-workload-identity-issuer +spec: + allow: + workload_identity_labels: + example: ["getting-started"] + rules: + - resources: + - workload_identity + verbs: + - list + - read +``` + +Use `tctl create -f ./workload-identity-issuer-role.yaml` to create the role. + +Now, use `tctl bots update` to add the role to the Bot. Replace `example-bot` +with the name of the Bot you created in the deployment guide and +`example-workload-identity-issuer` with the name of the role you just created: + +```code +$ tctl bots update example-bot --add-roles example-workload-identity-issuer +``` + +### Configuring DNS SANs + +In some cases, you may wish to configure DNS SANs which should be included in +the X509 certificates issued by the Workload API. This is useful in cases +where the client may not be SPIFFE aware and will check the DNS SAN rather than +the SPIFFE URI during the TLS handshake. 
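After you have configured DNS SANs as described below and obtained a certificate, you can confirm that the DNS SAN is present by decoding the certificate with `openssl`. The path here is an assumption; point it at wherever your workload's SVID certificate is written:

```code
$ openssl x509 -in ./svid.pem -noout -ext subjectAltName
```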
+
+Modify your `workload-identity.yaml` resource definition to include the
+`spec.spiffe.x509.dns_sans` field, replacing `example.com` with the DNS name you
+require:
+
+```yaml
+kind: workload_identity
+version: v1
+metadata:
+  name: example-workload-identity
+  labels:
+    example: getting-started
+spec:
+  spiffe:
+    id: /svc/foo
+    x509:
+      dns_sans:
+      - example.com
+```
+
+Use `tctl create -f ./workload-identity.yaml` to update the WorkloadIdentity
+resource with your changes.
+
+## Step 2/4. Configure `workload-identity-api` service in `tbot`
+
+To set up a SPIFFE Workload API endpoint with `tbot`, we configure an instance
+of the `workload-identity-api` service.
+
+First, determine where you wish this socket to be created. In our example,
+we'll use `/opt/machine-id/workload.sock`. You may wish to choose a directory
+that is only accessible by the processes that will need to connect to the
+Workload API.
+
+Modify your `tbot` configuration file to include the `workload-identity-api`
+service:
+
+```yaml
+services:
+- type: workload-identity-api
+  listen: unix:///opt/machine-id/workload.sock
+  selector:
+    name: example-workload-identity
+```
+
+Replace:
+
+- `/opt/machine-id/workload.sock` with the path to the socket you wish to create.
+- `example-workload-identity` with the name of the Workload Identity resource you
+  created earlier.
+
+Start or restart your `tbot` instance to apply the new configuration.
+
+### Configuring Unix Workload Attestation
+
+By default, credentials for the selected Workload Identity will be issued to any
+workload that connects to the Workload API. You may wish to restrict which SVIDs
+are issued based on certain characteristics of the workload. This is known as
+Workload Attestation.
+
+When using the Unix listener, `tbot` supports workload attestation based on
+three characteristics of the workload process:
+
+- `uid`: The UID of the user that the workload process is running as.
+- `gid`: The primary GID of the user that the workload process is running as. +- `pid`: The PID of the workload process. + +Within a Workload Identity, you can configure rules based on the attributes +determined via workload attestation. Each rule contains a number of tests and +all tests must pass for the rule to pass. At least one rule must pass for the +Workload Identity to be allowed to issue a credential. + +For example, to configure a Workload Identity to be issued only to workloads that +are running as the user with ID 1000 or running as a user with a primary group +ID of 50: + +```yaml +kind: workload_identity +version: v1 +metadata: + name: example-workload-identity + labels: + example: getting-started +spec: + rules: + allow: + - conditions: + - attribute: workload.unix.uid + eq: + value: 1000 + - conditions: + - attribute: workload.unix.gid + eq: + value: 50 + spiffe: + id: /svc/foo +``` + +## Step 3/4. Testing the Workload API with `tbot spiffe-inspect` + +The `tbot` binary includes a `spiffe-inspect` command that can be used to +test the configuration of the Workload API. This command will connect to the +Workload API and request SVIDs, whilst providing debug information. + +Before configuring your workload to use the Workload API, we recommend using +this command to ensure that the Workload API is behaving as expected. + +Use the `spiffe-inspect` command with `--path` to specify the path to the +Workload API socket, replacing `/opt/machine-id/workload.sock` with the path you +configured in the previous step: + +```code +$ tbot spiffe-inspect --path unix:///opt/machine-id/workload.sock +INFO [TBOT] Inspecting SPIFFE Workload API Endpoint unix:///opt/machine-id/workload.sock tbot/spiffe.go:31 +INFO [TBOT] Received X.509 SVID context from Workload API bundles_count:1 svids_count:1 tbot/spiffe.go:46 +SVIDS +- spiffe://example.teleport.sh/svc/foo + - Expiry: 2024-03-20 10:55:52 +0000 UTC +Trust Bundles +- example.teleport.sh +``` + +## Step 4/4. 
Configuring your workload to use the Workload API + +Now that you know that the Workload API is behaving as expected, you can +configure your workload to use it. The exact steps will depend on the workload. + +In cases where you have used the SPIFFE SDKs, you can configure the +`SPIFFE_ENDPOINT_SOCKET` environment variable to point to the socket created by +`tbot`. + +See the [Best Practices](./best-practices.mdx) guide for more information on +integrating SPIFFE with your workloads. + +## Next steps + +- [Workload Identity Overview](./introduction.mdx): Overview of Teleport +Workload Identity. +- [Best Practices](./best-practices.mdx): Best practices for using Workload +Identity in Production. +- Read the [Workload Identity reference](../../reference/machine-workload-identity/workload-identity/workload-identity-resource.mdx) +to explore the configuration of the Workload Identity resource. +- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore +all the available configuration options for `tbot`. diff --git a/docs/pages/machine-workload-identity/workload-identity/introduction.mdx b/docs/pages/machine-workload-identity/workload-identity/introduction.mdx new file mode 100644 index 0000000000000..a46e806a45d31 --- /dev/null +++ b/docs/pages/machine-workload-identity/workload-identity/introduction.mdx @@ -0,0 +1,121 @@ +--- +title: Introduction to Workload Identity +sidebar_label: Introduction +description: Describes Teleport Workload Identity, which securely issues flexible, short-lived cryptographic identities to workloads and non-human identities. +tags: + - conceptual + - mwi + - infrastructure-identity +--- + +Teleport Workload Identity securely issues short-lived cryptographic identities +to workloads. It is a flexible foundation for workload identity across your +infrastructure, creating a uniform way for your workloads to authenticate +regardless of where they are running. 
+
+The flexible nature of Teleport Workload Identity enables it to be used for a
+range of purposes, including:
+
+- Workload authentication to third-party APIs on cloud platforms such as AWS,
+  GCP and Azure.
+- Providing X.509 certificates for mutual TLS authentication between workloads
+  within your infrastructure as part of a Zero Trust strategy.
+- Workload authentication between services within your infrastructure.
+
+Teleport Workload Identity is compatible with the open-source
+[Secure Production Identity Framework For Everyone (SPIFFE)](./spiffe.mdx)
+standard. This enables interoperability between workload identity
+implementations and also provides a wealth of off-the-shelf tools and SDKs to
+simplify integration with your workloads.
+
+There's a whole host of benefits to adopting Teleport Workload Identity, but
+here are some of the key ones:
+
+- Eliminate the use of long-lived shared secrets within your infrastructure, and
+  reduce the risk of exfiltration and time spent by engineers creating and
+  rotating these secrets.
+- Establish an out-of-the-box universal form of identity for your workloads,
+  enabling your engineers to get on with building new services without needing
+  to think about how they'll authenticate.
+- Converge on a first-class form of identity for your workloads, simplifying
+  infrastructure by reducing the number of different ways workloads authenticate.
+
+## How it works
+
+Teleport Workload Identity establishes a root certificate authority within your
+Teleport cluster that is responsible for issuing the short-lived JWTs and
+X.509 certificates to workloads.
+
+These identities are also known as SPIFFE Verifiable Identity Documents (SVIDs)
+and contain the identity of the workload encoded as a URI, also known as a
+SPIFFE ID. The structure of this SPIFFE ID is up to you and can encode any
+information you need to uniquely identify your workload.
+
+The ability to request these identities is controlled by Teleport's Role-Based
+Access Control system. Users and Bots are granted roles which allow them to
+request identities with specific SPIFFE IDs.
+
+The `tbot` agent is installed in close proximity to workloads which require an
+identity. It manages the process of requesting and renewing the identities for
+the workloads. The `tbot` agent authenticates to the Teleport cluster using one
+of our supported join methods, which in many cases enable it to authenticate
+on the basis of federated trust rather than the use of long-lived secrets.
+
+Workloads can receive their identities in one of two ways:
+
+- The `tbot` agent can write these identities to a directory on the local
+  filesystem or to a Kubernetes secret.
+- The `tbot` agent can expose a SPIFFE Workload API, a standardized gRPC API
+  that allows workloads to request their identities directly from the `tbot`
+  agent.
+
+When using the Workload API, the `tbot` agent can perform an additional process
+called Workload Attestation. This allows the issuance of identities to be
+restricted to specific workloads. For example, you can restrict an identity to
+only be issued to a Linux process with a specific UID or GID, or restrict an
+identity to only be issued to a specific Kubernetes pod. The Workload
+Attestation process eliminates the need for a "bootstrapping" secret to be
+provided to the workloads.
+
+![Workload ID overview](../../../img/workload-identity/intro-diagram.png)
+
+Once the workload has its identity, it can be used for a range of purposes. The
+X.509 certificates can be used to establish mutual TLS, and the JWTs can be used
+to authenticate with a range of third-party APIs.
+
+## Teleport Zero Trust Access vs Workload Identity
+
+Teleport Machine & Workload Identity for Zero Trust Access primarily issues
+short-lived credentials to workloads to enable them to access resources secured
+by the Teleport cluster.
The credentials issued are only compatible with +Teleport itself, and access to resources must be through the Teleport Proxy. + +Teleport Workload Identity issues cryptographic identities that are compatible +with the popular SPIFFE standard for interoperable workload identity. These +identities are flexible enough to be used for a range of purposes. The +Teleport Proxy is not used for securing workload-to-workload communication. + +## Next steps + +Learn more about Teleport Workload Identity: + +- [SPIFFE](./spiffe.mdx): Learn about the SPIFFE specification and how it is implemented by Teleport Workload Identity. +- [Federation](./federation.mdx): Learn about using Federation to allow workloads to trust workloads from other trust domains. +- [JWT SVIDs](./jwt-svids.mdx): Learn about the short-lived JWTs issued by Workload Identity. +- [Best Practices](./best-practices.mdx): Best practices for using Workload Identity in Production. +- [Workload Identity Resource](../../reference/machine-workload-identity/workload-identity/workload-identity-resource.mdx): The full reference for the Workload Identity resource. +- [Workload Identity API and Workload Attestation](../../reference/machine-workload-identity/workload-identity/workload-identity-api-and-workload-attestation.mdx): To learn more about the Workload Identity API and Workload Attestation. + +Learn how to configure Teleport Workload Identity for specific use-cases: + +- [Getting Started](./getting-started.mdx): How to configure Teleport for Workload Identity. +- [TSH Support](./tsh.mdx): How to use `tsh` with Workload Identity to issue SVIDs to users. +- [AWS Roles Anywhere](./aws-roles-anywhere.mdx): Configuring AWS to accept Workload Identity certificates as authentication using AWS Roles Anywhere. +- [AWS OIDC Federation](./aws-oidc-federation.mdx): Configuring AWS to accept Workload Identity JWTs as authentication using AWS OIDC Federation. 
+- [GCP Workload Identity Federation](./gcp-workload-identity-federation-jwt.mdx): Configuring GCP to accept Workload Identity JWTs as authentication using GCP Workload Identity Federation. +- [Azure Federated Credentials](./azure-federated-credentials.mdx): Configuring Azure to accept Workload Identity JWTs as authentication using Azure Federated Credentials. + +### Other resources + +- [SPIFFE Specification](https://github.com/spiffe/spiffe/blob/main/standards/SPIFFE.md): The official SPIFFE specification. Useful for understanding the SPIFFE ID and SVID formats. +- [Solving The Bottom Turtle](https://spiffe.io/pdf/Solving-the-bottom-turtle-SPIFFE-SPIRE-Book.pdf): A book covering the fundamentals and details of SPIFFE. diff --git a/docs/pages/enroll-resources/workload-identity/jwt-svids.mdx b/docs/pages/machine-workload-identity/workload-identity/jwt-svids.mdx similarity index 94% rename from docs/pages/enroll-resources/workload-identity/jwt-svids.mdx rename to docs/pages/machine-workload-identity/workload-identity/jwt-svids.mdx index 51e9b055a4946..befeec7e965a0 100644 --- a/docs/pages/enroll-resources/workload-identity/jwt-svids.mdx +++ b/docs/pages/machine-workload-identity/workload-identity/jwt-svids.mdx @@ -1,6 +1,10 @@ --- title: JWT SVIDs description: An overview of the JWT SVIDs issued by Teleport Workload Identity +tags: + - conceptual + - mwi + - infrastructure-identity --- One type of credential that can be issued by Teleport Workload Identity is a @@ -64,5 +68,5 @@ platforms: Workload Identity. - [Best Practices](./best-practices.mdx): Best practices for using Workload Identity in Production. -- Read the [configuration reference](../../reference/machine-id/configuration.mdx) to explore +- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore all the available configuration options. 
diff --git a/docs/pages/enroll-resources/workload-identity/spiffe.mdx b/docs/pages/machine-workload-identity/workload-identity/spiffe.mdx similarity index 98% rename from docs/pages/enroll-resources/workload-identity/spiffe.mdx rename to docs/pages/machine-workload-identity/workload-identity/spiffe.mdx index 8a311104dccd4..5a024bff176b9 100644 --- a/docs/pages/enroll-resources/workload-identity/spiffe.mdx +++ b/docs/pages/machine-workload-identity/workload-identity/spiffe.mdx @@ -1,6 +1,10 @@ --- title: Introduction to SPIFFE description: Learn about Secure Production Identity Framework For Everyone (SPIFFE) and how it is implemented by Teleport Workload Identity +tags: + - conceptual + - mwi + - infrastructure-identity --- SPIFFE (Secure Production Identity Framework For Everyone) is a set of standards @@ -90,4 +94,4 @@ before issuing SVIDs. This is known as Workload Attestation. - [Workload Identity Overview](./introduction.mdx): Overview of Teleport Workload Identity. - [SPIFFE Website](https://spiffe.io/): Learn more about the SPIFFE - specification. \ No newline at end of file + specification. diff --git a/docs/pages/machine-workload-identity/workload-identity/tsh.mdx b/docs/pages/machine-workload-identity/workload-identity/tsh.mdx new file mode 100644 index 0000000000000..6f746c4eac9eb --- /dev/null +++ b/docs/pages/machine-workload-identity/workload-identity/tsh.mdx @@ -0,0 +1,94 @@ +--- +title: Workload Identity and tsh +description: Issuing SPIFFE SVIDs using Workload Identity and tsh +tags: + - how-to + - mwi + - infrastructure-identity +--- + +{/* lint disable page-structure remark-lint */} + +In some scenarios, you may wish to issue a SPIFFE SVID manually, without using +`tbot`. This can be useful in scenarios where you need to impersonate a service +for the purposes of debugging or could provide a mechanism for providing +human access to services which use SPIFFE SVIDs for authentication. 
+
+In this guide, you will use the `tsh` tool to issue a SPIFFE SVID.
+
+## Prerequisites
+
+- A role configured to allow issuing SPIFFE SVIDs and this role assigned to
+  your user. See [Getting Started](./getting-started.mdx) for more information.
+
+(!docs/pages/includes/edition-prereqs-tabs.mdx!)
+
+## Step 1/2. Using `tsh` to issue a SPIFFE X509 SVID
+
+First, determine where you wish to write the SPIFFE SVID. If you wish to write
+the resulting files into a directory, you must first create it. In our example,
+we will write the SVID to a directory called `svid`.
+
+Next, determine which workload identity resource you'll use to issue the X509
+SVID. In our example, we'll use a workload identity called
+`my-workload-identity`.
+
+Issue the SVID specifying the output directory using `--output` and the name of
+the workload identity resource using `--name-selector`:
+
+```sh
+$ tsh workload-identity issue-x509 --output ./svid --name-selector my-workload-identity
+```
+
+Additionally, flags can be used to further customize the SVID:
+
+| `flag`             | Description                                                                                                          |
+|--------------------|----------------------------------------------------------------------------------------------------------------------|
+| `--credential-ttl` | Sets the Time To Live for the resulting X509 SVID. Specify duration using `s`, `m` and `h`, e.g. `12h` for 12 hours.  |
+
+### Using headless authentication to issue an SVID on a remote host
+
+In some scenarios, you may wish to use `tsh` to issue an SVID on a host you have
+SSHed into, without logging into Teleport on that host. This can be particularly
+useful in scenarios where authentication may not be possible, for example, when
+you authenticate using a hardware token.
+
+In this case, you can use the headless authentication feature of `tsh`. This
+provides a prompt for you to authenticate the command on the remote machine,
+using your local `tsh` client.
+ +To use headless authentication, we provide the `--headless` flag, and must +also specify the `--proxy` and `--user` flags: + +```sh +$ tsh --proxy=example.teleport.sh \ + --user example \ + --headless \ + workload-identity issue-x509 \ + --output . \ + --name-selector my-workload-identity +``` + +## Step 2/2. Using the output SVID + +Once the SVID has been issued, you can configure your workload to use it. This +will differ depending on the workload. + +When written to disk, the SVID will be written as three files: + +| `file` | Description | +|-------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `svid_key.pem` | The private key for the X509 SVID. This is PEM wrapped and marshalled in PKCS8 format. | +| `svid.pem` | The X509 SVID itself. This is PEM wrapped and DER encoded. | +| `svid_bundle.pem` | The SPIFFE trust bundle. A concatenated list of X509 certificates for the certificate authorities within the trust domain. These are individually PEM wrapped and DER encoded. | + +## Next steps + +- [Workload Identity Overview](./introduction.mdx): Overview of Teleport +Workload Identity. +- [Getting Started](./getting-started.mdx): How to configure Teleport for +Workload Identity. +- [Best Practices](./best-practices.mdx): Best practices for using Workload +Identity in Production. +- Read the [configuration reference](../../reference/machine-workload-identity/configuration.mdx) to explore +all the available configuration options. 
diff --git a/docs/pages/machine-workload-identity/workload-identity/workload-identity.mdx b/docs/pages/machine-workload-identity/workload-identity/workload-identity.mdx
new file mode 100644
index 0000000000000..d526856abb615
--- /dev/null
+++ b/docs/pages/machine-workload-identity/workload-identity/workload-identity.mdx
@@ -0,0 +1,55 @@
+---
+title: Workload Identity
+description: Securely issue flexible short-lived identities to your workloads
+tags:
+  - mwi
+  - infrastructure-identity
+---
+
+The guides in this section explain how to use Workload Identity, which securely
+issues flexible short-lived identities to workloads in your infrastructure.
+
+## Getting started
+- [Introduction to Workload Identity](./introduction.mdx): Describes Teleport Workload Identity, which securely issues flexible, short-lived cryptographic identities to workloads and non-human identities.
+- [Introduction to SPIFFE](./spiffe.mdx): Learn about Secure Production Identity Framework For Everyone (SPIFFE) and how it is implemented by Teleport Workload Identity
+- [Getting Started with Workload Identity](./getting-started.mdx): Getting started with Teleport Workload Identity for SPIFFE
+
+## Configuration Guides
+- [AWS OIDC Federation](./aws-oidc-federation.mdx)
+- [AWS Roles Anywhere](./aws-roles-anywhere.mdx)
+- [Azure Federated Credentials](./azure-federated-credentials.mdx)
+- [GCP Workload Identity Federation](./gcp-workload-identity-federation-jwt.mdx)
+- [Manually issue SPIFFE SVIDs with Teleport CLI tool tsh](./tsh.mdx)
+
+## Configuration management
+- [Best Practices for Teleport Workload Identity](./best-practices.mdx): Answers common questions and describes best practices for using Teleport Workload Identity in production.
+- [JWT SVIDs](./jwt-svids.mdx): An overview of the JWT SVIDs issued by Teleport Workload Identity +- [SPIFFE Federation](./federation.mdx): An overview of the Teleport Workload Identity SPIFFE Federation feature. +- [Workload Attestation](../../reference/machine-workload-identity/workload-identity/workload-identity-api-and-workload-attestation.mdx): An overview of the Teleport Workload Identity Workload Attestation feature. +- [Workload Identity Resource](../../reference/machine-workload-identity/workload-identity/workload-identity-resource.mdx): The full reference for the Workload Identity resource. diff --git a/docs/pages/admin-guides/migrate-plans.mdx b/docs/pages/migrate-plans.mdx similarity index 84% rename from docs/pages/admin-guides/migrate-plans.mdx rename to docs/pages/migrate-plans.mdx index e667bde1b71d4..b0bb33c2376a5 100644 --- a/docs/pages/admin-guides/migrate-plans.mdx +++ b/docs/pages/migrate-plans.mdx @@ -1,6 +1,9 @@ --- title: Migrate Between Teleport Plans description: Explains how to migrate between Teleport Enterprise (Self-Hosted), Teleport Enterprise (Cloud), and Teleport Community Edition. +tags: + - how-to + - platform-wide --- This guide explains how to migrate from Teleport Community Edition, Teleport @@ -8,10 +11,10 @@ Enterprise (Self-Hosted) and Teleport Enterprise (Cloud) to another Teleport plan. 
We recommend that you try out a Teleport [demo -cluster](deploy-a-cluster/linux-demo.mdx) with Teleport Community Edition, -migrate to [Teleport Enterprise (Cloud)](../get-started.mdx) to roll out +cluster](./get-started/deploy-community.mdx) with Teleport Community Edition, +migrate to [Teleport Enterprise (Cloud)](./get-started/deploy-cloud.mdx) to roll out Teleport across your organization, and deploy a -[self-hosted](deploy-a-cluster/deploy-a-cluster.mdx) Teleport Enterprise cluster +[self-hosted](zero-trust-access/deploy-a-cluster/deploy-a-cluster.mdx) Teleport Enterprise cluster if you have security and compliance requirements that Teleport Enterprise (Cloud) cannot address. @@ -43,10 +46,10 @@ migrating from Teleport Enterprise to Teleport Community Edition. - An existing Teleport cluster. - The `tsh` and `tctl` client tools. This guide assumes that you are using `tctl` to manage dynamic resources, but it is also possible to use [Teleport - Terraform provider](infrastructure-as-code/terraform-provider/terraform-provider.mdx) and + Terraform provider](zero-trust-access/infrastructure-as-code/terraform-provider/terraform-provider.mdx) and [Kubernetes - operator](infrastructure-as-code/teleport-operator/teleport-operator-standalone.mdx), in - addition to custom scripts that use the [Teleport API](api/api.mdx) + operator](zero-trust-access/infrastructure-as-code/teleport-operator/teleport-operator-standalone.mdx), in + addition to custom scripts that use the [Teleport API](zero-trust-access/api/api.mdx) to manage the Teleport Auth Service backend. - If you are migrating to Teleport Enterprise (Cloud), you must not have an account with trusted clusters enrolled. Trusted clusters are not supported in @@ -69,7 +72,7 @@ migrating from Teleport Enterprise to Teleport Community Edition. 1. If you are migrating to a self-hosted Teleport Enterprise account, plan and execute the deployment with the help of your Account Management team. 
To assist with this, read the the documentation on [Self-Hosting - Teleport](deploy-a-cluster/deploy-a-cluster.mdx). + Teleport](zero-trust-access/deploy-a-cluster/deploy-a-cluster.mdx). 1. Ensure you are running Teleport Enterprise agents with versions that are the same major or just one major version lower than the new Teleport @@ -84,8 +87,8 @@ migrating from Teleport Enterprise to Teleport Community Edition. $ tctl inventory ls --newer-than= ``` -1. We recommend that Teleport Enterprise (Self-Hosted) customers set up automatic updates to maintain a healthy set of Agents, and require this in Teleport Enterprise (Cloud). See how to - [Set up Automatic Agent Updates](../upgrading/agent-managed-updates.mdx) to +1. We recommend that Teleport Enterprise (Self-Hosted) customers set up automatic updates to maintain a healthy set of Agents, and require this in Teleport Enterprise (Cloud). See how to + [Set up Automatic Agent Updates](upgrading/agent-managed-updates/agent-managed-updates.mdx) to incorporate this into your migration. @@ -138,7 +141,7 @@ Teleport Auth Service backend. Since your original Teleport cluster uses a separate Auth Service backend from your new cluster, you must retrieve the resources on the first backend, then re-apply them against the second backend. -Review the [dynamic resources](../reference/resources.mdx) list to see if any +Review the [dynamic resources](reference/infrastructure-as-code/teleport-resources/teleport-resources.mdx) list to see if any other resources need to be migrated. Some common dynamic resources includes: - `windows_desktop` @@ -185,12 +188,12 @@ and configure a new Auth Connector in the new Teleport Enterprise tenant. ## Step 3/4. Migrate Teleport services and plugins -To migrate services such as Teleport Agents, Machine ID Bots, and plugins, start -by cataloging the various services you're managing with Teleport. 
The following -resources should be considered for migration: +To migrate services such as Teleport Agents, Machine & Workload Identity Bots, +and plugins, start by cataloging the various services you're managing with +Teleport. The following resources should be considered for migration: - Teleport Agents - - Machine ID Bots + - Machine & Workload Identity Bots - Access Request plugins - The Teleport Event Handler @@ -206,9 +209,9 @@ carrying out the actions involved in migrating agent configurations. To migrate Teleport Agents: -1. For each Agent and Machine ID Bot, obtain a valid join token. We recommend - using a [delegated join - method](../reference/join-methods.mdx). +1. For each Agent and Machine & Workload Identity Bot, obtain a valid join + token. We recommend using a [delegated join + method](reference/deployment/join-methods.mdx). 1. If using ephemeral tokens, ensure you specify the appropriate token type to match the Teleport services. Token types can include `node`, `app`, `kube`, @@ -237,7 +240,7 @@ To migrate Teleport Agents: upgrades the `stable/rolling` channel has all released versions. -1. If you are using Teleport Community Edition, [uninstall Teleport](./management/admin/uninstall-teleport.mdx) +1. If you are using Teleport Community Edition, [uninstall Teleport](installation/uninstall-teleport.mdx) and install the enterprise edition of the `teleport` binary. You can confirm if the binary is correct by running `teleport version`. The output will contain the words `Teleport Enterprise`. The enterprise edition of the `teleport` @@ -299,25 +302,25 @@ To migrate Teleport Agents: The settings for the new Teleport Kubernetes Agent should have the `enterprise` value set to true. -### Migrate Machine ID Bots +### Migrate Machine & Workload Identity Bots -In general, you can migrate a Machine ID Bot using the following steps: +In general, you can migrate a Machine & Workload Identity Bot using the +following steps: 1. Obtain a new join token. 1. 
In the `tbot` configuration file, edit the `proxy_server` configuration
   field to point to the new Teleport cluster address and port `443`.

1. Restart `tbot`.

-To learn how to restart and configure a Machine ID Bot in your infrastructure,
-read the [full documentation](../enroll-resources/machine-id/deployment/deployment.mdx) on deploying a
-Machine ID Bot.
+To learn how to restart and configure a Bot for your infrastructure, read the
+[Machine & Workload Identity deployment guide](machine-workload-identity/deployment/deployment.mdx).

### Access Request plugins and the Event Handler

In general, you can migrate Teleport plugins using the following steps:

-1. If you are using Machine ID to produce credentials for the plugin,
-   reconfigure the Machine ID Bot to connect to the new Teleport cluster and
+1. If you are using Machine & Workload Identity to produce credentials for the
+   plugin, reconfigure the Bot to connect to the new Teleport cluster and
   restart the Bot.

   Otherwise, connect to the new Teleport cluster with `tctl` and produce an
@@ -331,8 +334,8 @@ In general, you can migrate Teleport plugins using the following steps:

For specific plugins running in your infrastructure, read the full
documentation on:

-- [Access Request plugins](access-controls/access-request-plugins/access-request-plugins.mdx)
-- The [Teleport Event Handler](management/export-audit-events/export-audit-events.mdx)
+- [Access Request plugins](identity-governance/access-requests/plugins/plugins.mdx)
+- The [Teleport Event Handler](zero-trust-access/export-audit-events/export-audit-events.mdx)

## Step 4/4. Verify end user access and performance

@@ -396,7 +399,7 @@ your new Teleport cluster, ensure that the setup is complete.

For more information on using cloud-hosted Teleport Enterprise, refer to our
documentation on [signing up for a cloud-hosted Teleport Enterprise
-account](../get-started.mdx).
+account](./get-started/deploy-cloud.mdx).
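The `proxy_server` edit described in the step above can be scripted when you manage many `tbot` hosts. The sketch below is a hypothetical helper — the config contents, hostnames, and function name are made-up placeholders, not values from this guide:

```python
import re

def repoint_proxy_server(config_text: str, new_addr: str) -> str:
    """Rewrite the proxy_server field of a tbot YAML config to a new address."""
    # (?m) makes ^/$ match per line; \1 preserves any leading indentation.
    return re.sub(r"(?m)^(\s*)proxy_server:.*$",
                  rf"\1proxy_server: {new_addr}", config_text)

old_config = "version: v2\nproxy_server: old-cluster.example.com:443\n"
print(repoint_proxy_server(old_config, "new-cluster.example.com:443"))
```

After rewriting the file, you would still restart `tbot` as described above so it picks up the new address.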
Read the documentation on [Self-Hosting Teleport -Enterprise](deploy-a-cluster/deploy-a-cluster.mdx). +Enterprise](zero-trust-access/deploy-a-cluster/deploy-a-cluster.mdx). diff --git a/docs/pages/model-context-preview.mdx b/docs/pages/model-context-preview.mdx deleted file mode 100644 index d0ed1f43d53f0..0000000000000 --- a/docs/pages/model-context-preview.mdx +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: "Secure Agentic AI with Teleport Zero Trust Access" -description: "Access control and audit between LLMs and data sources or MCP servers." -h1: Secure Agentic AI with Teleport Zero Trust Access ---- - -Secure Large Language Model (LLM) interactions with infrastructure using Teleport's secure -[Model Context Protocol](https://goteleport.com/use-cases/secure-model-context-protocol/) (MCP), -delivering fine-grained access control and audit between LLMs and data sources or MCP servers. - - - -MCP support is coming soon. - -Want to be notified when it's available? Sign up for [Early Access](https://goteleport.com/mcp-early-access/)! - - diff --git a/docs/pages/reference/access-controls/access-controls.mdx b/docs/pages/reference/access-controls/access-controls.mdx index bff9cc26c47bc..13fbe38e2d6c7 100644 --- a/docs/pages/reference/access-controls/access-controls.mdx +++ b/docs/pages/reference/access-controls/access-controls.mdx @@ -1,6 +1,18 @@ --- title: Access Controls References +sidebar_label: Access Controls description: Contains guides to configuring authentication and authorization in Teleport. +tags: + - zero-trust + - privileged-access --- +The Teleport access controls reference guides provide comprehensive information +on configuring permissions for users to connect to infrastructure resources or +manage your Teleport cluster. For example, you can find lists of configuration +fields for Teleport roles, Access Lists, and Access Monitoring Rules. 
+
+If you are new to Teleport RBAC, read [Get Started with Role-Based Access
+Control](../../zero-trust-access/rbac-get-started/rbac-get-started.mdx).
+
diff --git a/docs/pages/reference/access-controls/access-lists.mdx b/docs/pages/reference/access-controls/access-lists.mdx
index 2b1ee353d64ce..4e00abf068e22 100644
--- a/docs/pages/reference/access-controls/access-lists.mdx
+++ b/docs/pages/reference/access-controls/access-lists.mdx
@@ -1,6 +1,11 @@
---
title: Access List Reference
+sidebar_label: Access Lists
description: An explanation and overview of Access Lists in Teleport.
+tags:
+  - conceptual
+  - identity-governance
+  - privileged-access
---

Access Lists allow Teleport users to be granted long term access to resources
@@ -44,6 +49,11 @@ Provided owners meet requirements, owners are able to do the following:
Owners are not able to add or remove owners from an Access List or control what
roles and traits are granted by the Access List.

+Access List owners are also automatically added as suggested reviewers to any
+resource-based Access Requests that include resources granted by this Access
+List. Learn more in the [Reviewing Access Requests](../../identity-governance/access-lists/reviewing-access-requests.mdx)
+guide.
+
## Access List Membership

Access List members are Teleport users or nested Access Lists who are granted the roles
diff --git a/docs/pages/reference/access-controls/access-monitoring-events.mdx b/docs/pages/reference/access-controls/access-monitoring-events.mdx
new file mode 100644
index 0000000000000..45dfce8feeed8
--- /dev/null
+++ b/docs/pages/reference/access-controls/access-monitoring-events.mdx
@@ -0,0 +1,17 @@
+---
+title: Access Monitoring Event Reference
+sidebar_label: Access Monitoring Events
+description: Provides a comprehensive list of Access Monitoring events, their fields, and their types.
+--- + +The Access Monitoring event reference includes a list of Access Monitoring +events that you can query and view in reports, along with examples of `tctl` +commands you can run to query each event. + +Access Monitoring tracks a subset of Teleport audit events that are relevant to +identifying unusual access patterns. To view a comprehensive set of events, +visit the [Investigate](../../identity-security/usage/investigate.mdx) view of +Teleport Identity Security. For a reference of all audit events you can track +with Teleport, see the [Audit Event Reference](../audit-events.mdx). + +(!docs/pages/includes/access-monitoring-events.mdx!) diff --git a/docs/pages/reference/access-controls/access-monitoring-rules.mdx b/docs/pages/reference/access-controls/access-monitoring-rules.mdx index 1cae7f2c6d981..3379acde229dd 100644 --- a/docs/pages/reference/access-controls/access-monitoring-rules.mdx +++ b/docs/pages/reference/access-controls/access-monitoring-rules.mdx @@ -1,6 +1,11 @@ --- title: Access Monitoring Rule Reference +sidebar_label: Access Monitoring Rules description: An explanation and overview of Access Monitoring Rules. +tags: + - reference + - identity-governance + - privileged-access --- Access Monitoring Rules allow administrators to monitor Access Requests and apply @@ -29,12 +34,14 @@ spec: # monitoring rule. The condition accepts a predicate expression which must # evaluate to a boolean value. # - # This condition would be satisfied by an Access Request if all requested roles - # are either `access` or `editor`, and if the requester user has the `team: dev` - # or `team: stage" user trait. + # This condition would be satisfied if: + # - `access` role is requested + # - all requested resources have the label `env: dev` + # - requesting user has the `team: dev` user trait. 
condition: |- - contains_all(set("access", "editor"), access_request.spec.roles) && - contains_any(user.traits["team"], set("dev", "stage")) + contains_all(set("access"), access_request.spec.roles) && + access_request.spec.resource_labels_intersection["env"].contains("dev") && + contains_any(user.traits["team"], set("dev")) # Optional: desired_state specifies the desired reconciled state of the Access # Request after the rule is applied. This field must be set to "reviewed" to @@ -67,6 +74,29 @@ spec: # access monitoring rule is applied. recipients: - example@goteleport.com + + # Optional: schedules defines a map of named schedules to apply to this rule. + schedules: + demo-schedule: + time: + # timezone specifies the schedule's time zone. This field is optional and + # defaults to "UTC". Accepted values follow the time zone identifiers defined in the + # IANA Time Zone Database, such as "America/Los_Angeles", "Europe/Lisbon", or + # "Asia/Singapore". + # + # For a list of supported values, see: https://data.iana.org/time-zones/tzdb/zone1970.tab + timezone: America/Los_Angeles + + # shifts is a list of time blocks that make up the schedule. + # Each shift specifies a weekday, a start time, and an end time. + # Start and end times are inclusive. + shifts: + - weekday: Friday + start: 17:00 + end: 23:59 + - weekday: Saturday + start: 00:00 + end: 23:59 ``` ## Notification routing rules @@ -89,7 +119,7 @@ slack-default RUNNING If no integrations are listed, enroll a plugin first. For a step by step guide on how to enroll a plugin see -[Access Request Plugins](../../admin-guides/access-controls/access-request-plugins/access-request-plugins.mdx). +[Access Request Plugins](../../identity-governance/access-requests/plugins/plugins.mdx). 
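The `resource_labels_intersection` and `resource_labels_union` fields used in conditions like the one above can be modeled roughly as follows. This is a hypothetical Python sketch of the semantics only (Teleport evaluates conditions with its own predicate engine, not this code):

```python
# Model: intersection keeps only label pairs present on EVERY requested
# resource; union collects every value seen for each label key.

def labels_intersection(resources):
    """Label pairs shared by all requested resources."""
    if not resources:
        return {}
    common = set(resources[0].items())
    for labels in resources[1:]:
        common &= set(labels.items())
    return dict(common)

def labels_union(resources):
    """All values observed for each label key across requested resources."""
    union = {}
    for labels in resources:
        for key, value in labels.items():
            union.setdefault(key, set()).add(value)
    return union

# Two requested resources; both are env: dev, only one is team: db.
resources = [
    {"env": "dev", "team": "db"},
    {"env": "dev"},
]

inter = labels_intersection(resources)
union = labels_union(resources)

# Mirrors: access_request.spec.resource_labels_intersection["env"].contains("dev")
print(inter.get("env") == "dev")
# Mirrors: access_request.spec.resource_labels_union["team"].contains("db")
print("db" in union.get("team", set()))
```

The intersection form is the stricter check: it only passes when every requested resource carries the label, which is why the automatic-review examples use it to gate approval.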
### Recipients

@@ -171,22 +201,97 @@ spec:
    decision: APPROVED
```

+### Example automatic review rule for resources
+
+If the following rule is configured, any Access Request for the `access` role and
+resources labeled `env: dev` will be automatically approved.
+
+```yaml
+kind: access_monitoring_rule
+version: v1
+metadata:
+  name: env-dev-approval
+spec:
+  subjects:
+  - access_request
+  condition: |-
+    contains_all(set("access"), access_request.spec.roles) &&
+    access_request.spec.resource_labels_intersection["env"].contains("dev")
+  desired_state: reviewed
+  automatic_review:
+    integration: builtin
+    decision: APPROVED
+```
+
+### Example scheduled automatic review rule
+
+If the following rule is configured, any Access Request for the `access` role and
+resources labeled `env: dev` will be automatically approved on Sundays (UTC time).
+
+```yaml
+kind: access_monitoring_rule
+version: v1
+metadata:
+  name: env-dev-approval
+spec:
+  subjects:
+  - access_request
+  condition: |-
+    contains_all(set("access"), access_request.spec.roles) &&
+    access_request.spec.resource_labels_intersection["env"].contains("dev")
+  desired_state: reviewed
+  automatic_review:
+    integration: builtin
+    decision: APPROVED
+  schedules:
+    dev-on-call:
+      time:
+        timezone: UTC
+        shifts:
+        - weekday: Sunday
+          start: 00:00
+          end: 23:59
+```
+
+### Example automatic review rule with wildcard or regular expression
+
+If the following rule is configured, any Access Request for the `access` role
+with a request reason containing the phrase "on-call" will be automatically approved.
+ +```yaml +kind: access_monitoring_rule +version: v1 +metadata: + name: on-call-automatic-approval +spec: + subjects: + - access_request + condition: |- + regexp.match(set(access_request.spec.request_reason), "*on-call*") + desired_state: reviewed + automatic_review: + integration: builtin + decision: APPROVED +``` + ## Condition The `condition` field is a predicate expression that evaluates to a boolean value and determines which Access Requests the rule applies to. Accepted fields within the condition predicate expression: -| Field | Description | -| --------------------------------------- | ---------------------------------------------- | -| access_request.spec.roles | The set of roles requested. | -| access_request.spec.suggested_reviewers | The set of reviewers specified in the request. | -| access_request.spec.system_annotations | A map of system annotations on the request. | -| access_request.spec.user | The requesting user. | -| access_request.spec.request_reason | The request reason. | -| access_request.spec.creation_time | The creation time of the request. | -| access_request.spec.expiry | The expiry time of the request. | -| user.traits | A map of traits of the requesting user. | +| Field | Description | +| ------------------------------------------------ | ------------------------------------------------------------------- | +| access_request.spec.roles | The set of roles requested. | +| access_request.spec.suggested_reviewers | The set of reviewers specified in the request. | +| access_request.spec.system_annotations | A map of system annotations on the request. | +| access_request.spec.user | The requesting user. | +| access_request.spec.request_reason | The request reason. | +| access_request.spec.creation_time | The creation time of the request. | +| access_request.spec.expiry | The expiry time of the request. | +| access_request.spec.resource_labels_intersection | A map containing the intersection of all requested resource labels. 
|
+| access_request.spec.resource_labels_union        | A map containing the union of all requested resource labels.        |
+| user.traits                                      | A map of traits of the requesting user.                             |

Examples:

@@ -194,7 +299,7 @@ Examples:

# Applies if the request contains at least one role.
condition: !is_empty(access_request.spec.roles)

-# Applies if created by "example_user"
+# Applies if created by "example_user".
condition: access_request.spec.user == "example_user"

# Applies if the "example_role" role is requested.
@@ -204,10 +309,19 @@ condition: access_request.spec.roles.contains("example_role")
condition: set("role_1", "role_2").contains_all(access_request.spec.roles)

# Applies if the user has trait "team: dev" or "team: stage".
-condition: contains_any(user.traits["team"], set("dev". "stage"))
+condition: contains_any(user.traits["team"], set("dev", "stage"))
+
+# Applies if all requested resources have the `env: dev` label.
+condition: access_request.spec.resource_labels_intersection["env"].contains("dev")
+
+# Applies if any requested resource has the `env: prod` label.
+condition: access_request.spec.resource_labels_union["env"].contains("prod")
+
+# Applies if the request reason contains the phrase "on-call".
+condition: regexp.match(set(access_request.spec.request_reason), "*on-call*")
```

-See [Predicate Language](../predicate-language.mdx) for more details.
+See [Predicate Language](predicate-language.mdx) for more details.

## SSO users and IdP attributes

@@ -253,7 +367,7 @@ spec:
```

Trait mapping depends on the SSO provider.
For configuration instructions, see: -[Configure Single Sign-On](../../admin-guides/access-controls/sso/sso.mdx) +[Configure Single Sign-On](../../zero-trust-access/sso/sso.mdx) ## Access Monitoring Rule with infrastructure as code @@ -282,5 +396,5 @@ resource "teleport_access_monitoring_rule" "example_rule" { } ``` -See [Terraform Provider](../terraform-provider/resources/access_monitoring_rule.mdx) +See [Terraform Provider](../infrastructure-as-code/terraform-provider/resources/access_monitoring_rule.mdx) for more details. diff --git a/docs/pages/reference/access-controls/authentication.mdx b/docs/pages/reference/access-controls/authentication.mdx index 52bbb18d69a64..7880f0e8190bc 100644 --- a/docs/pages/reference/access-controls/authentication.mdx +++ b/docs/pages/reference/access-controls/authentication.mdx @@ -1,6 +1,10 @@ --- title: Authentication options description: A reference for Teleport's authentication connectors +tags: + - reference + - zero-trust + - privileged-access --- Teleport authenticates users either via the Proxy Service or with an identity @@ -20,7 +24,7 @@ possible values (types) of MFA: multi-factor authenticators and hardware devices. You can use [YubiKeys](https://www.yubico.com/), [SoloKeys](https://solokeys.com/) or any other authenticator that implements FIDO2 or FIDO U2F standards. - See our [Harden your Cluster Against IdP Compromises](../../admin-guides/access-controls/guides/webauthn.mdx) guide for detailed + See our [Harden your Cluster Against IdP Compromises](../../zero-trust-access/management/security/idp-compromise.mdx) guide for detailed instructions on setting up WebAuthn for Teleport. - `on` enables both TOTP and WebAuthn, and all local users are required to have at least one MFA device registered. - `optional` enables both TOTP and WebAuthn but makes it optional for users. 
Local users that register an MFA device will
@@ -174,7 +178,7 @@ spec:
  version: v2
```

-See [GitHub OAuth 2.0](../../admin-guides/access-controls/sso/github-sso.mdx) for details on how to configure it.
+See [GitHub OAuth 2.0](../../zero-trust-access/sso/integrate-idp/github-sso.mdx) for details on how to configure it.

### SAML

@@ -222,7 +226,7 @@ auth_service:
    type: github
```

-See [GitHub OAuth 2.0](../../admin-guides/access-controls/sso/github-sso.mdx) for details on how to configure it.
+See [GitHub OAuth 2.0](../../zero-trust-access/sso/integrate-idp/github-sso.mdx) for details on how to configure it.

### SAML

@@ -264,7 +268,7 @@ auth_service:
    type: github
```

-See [GitHub OAuth 2.0](../../admin-guides/access-controls/sso/github-sso.mdx) for details on how to configure it.
+See [GitHub OAuth 2.0](../../zero-trust-access/sso/integrate-idp/github-sso.mdx) for details on how to configure it.
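The MFA settings described above (`on` requires a registered device and allows TOTP and WebAuthn; `optional` allows both without requiring registration) can be summarized in a small lookup. This is a hypothetical Python model of those semantics — the mode names are assumed to be the cluster's second-factor authentication values, and this is not Teleport code:

```python
# Assumed mapping of second-factor mode -> (allowed MFA types,
# whether local users must register at least one device).
MFA_MODES = {
    "off":      (frozenset(),                     False),
    "otp":      (frozenset({"totp"}),             True),
    "webauthn": (frozenset({"webauthn"}),         True),
    "on":       (frozenset({"totp", "webauthn"}), True),
    "optional": (frozenset({"totp", "webauthn"}), False),
}

def allows(mode: str, mfa_type: str) -> bool:
    """True if the given mode permits the given MFA type."""
    allowed, _registration_required = MFA_MODES[mode]
    return mfa_type in allowed

print(allows("on", "webauthn"))
print(MFA_MODES["optional"][1])
```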
diff --git a/docs/pages/reference/access-controls/login-rules.mdx b/docs/pages/reference/access-controls/login-rules.mdx
index fdf24fe1efe45..99f9e0c5a1d8a 100644
--- a/docs/pages/reference/access-controls/login-rules.mdx
+++ b/docs/pages/reference/access-controls/login-rules.mdx
@@ -1,12 +1,17 @@
---
title: Login Rules Reference
+sidebar_label: Login Rules
description: Reference documentation for Login Rules
-tocDepth: 3
+tags:
+  - sso
+  - conceptual
+  - zero-trust
+  - privileged-access
---

This page provides details on the expression language that powers Login Rules.
To learn how to add the first login rule to your cluster, check out the
-[Login Rules Guide](../../admin-guides/access-controls/login-rules/guide.mdx).
+[Login Rules Guide](../../zero-trust-access/sso/login-rules/guide.mdx).

## YAML specification

@@ -789,3 +794,53 @@ Expression | Result
---------- | ------
`union(set("a"), set("b"))` | `("a", "b")`
`union(set("a", "b"), set("b", "c"))` | `("a", "b", "c")`
+
+
+### `jsonpath`
+
+#### Signature
+
+```go
+func jsonpath(path string) set
+```
+
+#### Description
+
+`jsonpath` returns a new [`set`](#set-type) interpolated from the user's original IdP
+claims using the given [jsonpath query](https://support.smartbear.com/alertsite/docs/monitors/api/endpoint/jsonpath.html).
+This is intended for use with traditionally unmapped arbitrary JSON claims which are
+used in some custom OIDC solutions.
+
+See [this guide](../../zero-trust-access/sso/login-rules/guide.mdx#using-an-oidc-provider-with-arbitrary-non-standard-json-claims) for more context.
+
+#### Arguments
+
+Argument | Type | Description
+-------- | ---- | -----------
+`path` | `string` | A jsonpath query.
+
+#### Returns
+
+Type | Description
+---- | ----------
+`set` | A new `set` containing the interpolated values.
+
+#### Examples
+
+The examples below operate on the given IdP claims object.
+```json +{ + // normal claim - string list + "a": ["1", "2", "3"], + // unmapped claim - arbitrary object + "b": { + "c": "d" + } +} +``` + +Expression | Result +---------- | ------ +`jsonpath("$.a")` | `("1", "2", "3")` +`jsonpath("$.b.*")` | `("d")` +`jsonpath("$.*.*")` | `("1", "2", "3", "d")` diff --git a/docs/pages/reference/access-controls/predicate-language.mdx b/docs/pages/reference/access-controls/predicate-language.mdx new file mode 100644 index 0000000000000..0110260aef4ec --- /dev/null +++ b/docs/pages/reference/access-controls/predicate-language.mdx @@ -0,0 +1,164 @@ +--- +title: Predicate Language +description: How to use Teleport's predicate language to define filter conditions. +tags: + - conceptual + - zero-trust +--- + +Teleport's predicate language is used to define conditions for filtering in dynamic configuration resources. +It is also used as a query language to filter and search through a [list of select resources](#resource-filtering). + +The predicate language uses a slightly different syntax depending on whether it is used in: + +- [Role resources](#scoping-allowdeny-rules-in-role-resources) +- [Resource filtering](#resource-filtering) +- [Label expressions](#label-expressions) +- [Access Monitoring Rules](#access-monitoring-rules) + +## Scoping allow/deny rules in role resources + +Some fields in Teleport's role resources use the predicate language to define +the scope of a role's permissions: + +- [Dynamic Impersonation](../../zero-trust-access/authentication/impersonation.mdx) +- [RBAC for sessions](roles.mdx) + +When used in role resources, the predicate language supports the following operators: + +| Operator | Meaning | Example | +|----------|--------------------------------------------------|----------------------------------------------------------| +| && | and (all conditions must match) | `contains(field1, field2) && equals(field2, "val")` | +| \|\| | or (any one condition should match) | `contains(field1, field2) \|\| 
contains(field1, "val2")` |
| ! | not (used with functions, more about this below) | `!equals(field1, field2)` |

+The language also supports the following functions:
+
| Functions | Description |
|--------------------------------|---------------------------------------------------------------------------------------|
| `contains(<field1>, <field2>)` | checks if the value from `<field2>` is included in the list of strings from `<field1>` |
| `contains(<field>, "<value>")` | checks if `<value>` is included in the list of strings from `<field>` |
| `equals(<field1>, <field2>)` | checks if the value from `<field1>` is equal to the value from `<field2>` |
| `equals(<field>, "<value>")` | checks if `<value>` is equal to the value from `<field>` |

+## Resource filtering
+
+Both the [`tsh`](../cli/tsh.mdx) and [`tctl`](../cli/tctl.mdx) CLI tools allow you to filter nodes,
+applications, databases, and Kubernetes resources using the `--query` flag. The `--query` flag allows you to
+perform more sophisticated searches using the predicate language.
+
+For common resource fields, we define shortened field names that you can use in query expressions:
+
| Short Field | Actual Field Equivalent | Example |
|-------------------|----------------------------------------------------------------------------------------|------------------------------|
| `labels["<key>"]` | `resource.metadata.labels` + `resource.spec.dynamic_labels` | `labels["env"] == "staging"` |
| `name` | `resource.spec.hostname` (only applies to server resource) or `resource.metadata.name` | `name == "jenkins"` |

+The language supports the following operators:
+
| Operator | Meaning | Example |
|----------|--------------------------------------|--------------------------------------------------------|
| == | equal to | `labels["env"] == "prod"` or ``labels[`env`] == "prod"`` |
| != | not equal to | `labels["env"] != "prod"` |
| && | and (all conditions must match) | `labels["env"] == "prod" && labels["os"] == "mac"` |
| \|\| | or (any one condition should match) | `labels["env"] == "dev" \|\| labels["env"] == "qa"` |
| !
| not (used with functions) | `!equals(labels["env"], "prod")` | + +The language also supports the following functions: + +| Functions (with examples) | Description | +|----------------------------------------------|------------------------------------------------------------| +| `equals(labels["env"], "prod")` | resources with label key `env` equal to label value `prod` | +| `exists(labels["env"])` | resources with a label key `env`; label value unchecked | +| `!exists(labels["env"])` | resources without a label key `env`; label value unchecked | +| `search("foo", "bar", "some phrase")` | fuzzy match against common resource fields | +| `hasPrefix(name, "foo")` | resources with a name that starts with the prefix `foo` | +| `split(labels["foo"], ",")` | converts a delimited string into a list | +| `contains(split(labels["foo"], ","), "bar")` | determines if a value exists in a list | + +See some [examples](../cli/cli.mdx) of the different ways you can filter resources. + +## Label expressions + +Label expressions can be used in Teleport roles to define access to resources +with custom logic. +Check out the Access Controls +[reference page](roles.mdx) +for an overview of label expressions and where they can be used. + +Label expressions support a predicate language with the following fields +available: + +| Field | Type | Description | +|--------------------|-------------------------|-------------| +| `labels` | `map[string]string` | Combined static and dynamic labels of the resource (server, application, etc.) being accessed. | +| `user.spec.traits` | `map[string][]string` | All traits of the user accessing the resource (referred to as `external` or `internal` in role template expressions). 
| + +The language supports the following operators: + +| Operator | Meaning | Example | +|----------|-------------------------------------|---------| +| == | equal to | `labels["env"] == "staging"` | +| != | not equal to | `labels["env"] != "production"` | +| \|\| | or (any one condition should match) | `labels["env"] == "staging" \|\| labels["env"] == "test"` | +| && | and (all conditions must match) | `labels["env"] == "staging" && labels["team"] == "dev"` | +| ! | not (logical negation) | `!regexp.match(user.spec.traits["teams"], "contractor")` | + +The language also supports the following functions: + +| Syntax | Return type | Description | Example | +|--------|-------------|-------------|---------| +| `contains(list, item)` | Boolean | Returns true if `list` contains an exact match for `item` | `contains(user.spec.traits[teams], labels["team"])` | +| `regexp.match(list, re)` | Boolean | Returns true if `list` contains a match for `re` | `regexp.match(labels["team"], "dev-team-\d+$")` | +| `regexp.replace(list,` `re, replacement)` | `[]string` | Replaces all matches of `re` with replacement for all items in `list` | `contains(regexp.replace(user.spec.traits["allowed-env"],` `"^env-(.*)$", "$1"), labels["env"])` | +| `email.local(list)` | `[]string` | Returns the local part of each email in `list`, or an error if any email fails to parse | `contains(email.local(user.spec.traits["email"]),` `labels["owner"])` | +| `strings.upper(list)` | `[]string` | Converts all items of the list to uppercase | `contains(strings.upper(user.spec.traits["username"]),` `labels["owner"])` | +| `strings.lower(list)` | `[]string` | Converts all items of the list to lowercase | `contains(strings.lower(user.spec.traits["username"]),` `labels["owner"])` | +| `labels_matching(re)` | `[]string` | Returns the aggregate of all label values with keys matching `re`, which can be a glob or a regular expression | `contains(labels_matching("^project-(team\|label)$"),` `"security")` | +| 
`contains_any(list, items)` | Boolean | Returns true if `list` contains an exact match for any element of `items` | `contains_any(user.spec.traits["projects"],` `labels_matching("project-*"))` | +| `contains_all(list, items)` | Boolean | Returns true if `list` contains an exact match for all elements of `items` | `contains_all(user.spec.traits["projects"],` `labels_matching("project-*"))` | + +Above, any argument named `list` can accept a list of values (like the list of +values for a specific user trait) or a single value (like the value of a +resource label or a string literal). + +## Access Monitoring Rules + +Access Monitoring Rules use the predicate language to define conditions for +applying notification routing rules or automatic review rules to Access Requests. + +Access Monitoring Rules support a predicate language with the following fields +available: + +| Field | Description | +| ------------------------------------------------ | ------------------------------------------------------------------- | +| access_request.spec.roles | The set of roles requested. | +| access_request.spec.suggested_reviewers | The set of reviewers specified in the request. | +| access_request.spec.system_annotations | A map of system annotations on the request. | +| access_request.spec.user | The requesting user. | +| access_request.spec.request_reason | The request reason. | +| access_request.spec.creation_time | The creation time of the request. | +| access_request.spec.expiry | The expiry time of the request. | +| access_request.spec.resource_labels_intersection | A map containing the intersection of all requested resource labels. | +| access_request.spec.resource_labels_union | A map containing the union of all requested resource labels. | +| user.traits | A map of traits of the requesting user. 
| + +The language supports the following operators: + +| Operator | Meaning | Example | +|----------|--------------------------------------|--------------------------------------------------------| +| == | equal to | `access_request.spec.user == "example_user"` | +| != | not equal to | `access_request.spec.user != "example_user"` | +| && | and (all conditions must match) | `regexp.match(user.traits["level"], "^L1$") && regexp.match(user.traits["team"], "^Cloud$")` | +| \|\| | or (any one condition should match) | `regexp.match(user.traits["level"], "^L1$") \|\| regexp.match(user.traits["team"], "^Cloud$")` | +| ! | not (used with functions) | `!regexp.match(set(access_request.spec.request_reason), "*important*")` | + +The language also supports the following functions: + +| Functions | Description | Example | +|-----------|-------------|---------| +| `contains_any(set, items)` | Returns true if `set` contains an exact match for any element of `items` | `contains_any(user.traits["team"], set("dev", "stage"))` | +| `contains_all(set, items)` | Returns true if `set` contains an exact match for all elements of `items` | `contains_all(set("dev"), access_request.spec.roles)` | +| `set.contains(item)` | Returns true if `set` contains an exact match for `item` | `access_request.spec.roles.contains("example_role")` | +| `regexp.match(set, re)` | Returns true if `set` contains a match for `re` | `regexp.match(set(access_request.spec.request_reason), "*on-call*")` | +| `is_empty(set)`| Returns true if `set` contains no elements | `is_empty(access_request.spec.roles)` | diff --git a/docs/pages/reference/access-controls/roles.mdx b/docs/pages/reference/access-controls/roles.mdx index f228d26f15ea1..b421a205f7fbb 100644 --- a/docs/pages/reference/access-controls/roles.mdx +++ b/docs/pages/reference/access-controls/roles.mdx @@ -1,7 +1,11 @@ --- -title: Access Controls Reference +title: Teleport Role Reference +sidebar_label: Roles description: Explains the configuration settings 
that you can include in a Teleport role, which enables you to apply access controls for your infrastructure.
-tocDepth: 3
+tags:
+  - reference
+  - zero-trust
+  - privileged-access
---

This guide shows you how to use Teleport roles to manage role-based access
@@ -20,10 +24,10 @@ resources:

- The `tctl` client tool
- Teleport Terraform provider
- Teleport Kubernetes operator
-- [Custom API clients](../../admin-guides/api/rbac.mdx)
+- [Custom API clients](../../zero-trust-access/api/rbac.mdx)

To read more about managing dynamic resources, see the [Dynamic
-Resources](../../admin-guides/infrastructure-as-code/infrastructure-as-code.mdx) guide.
+Resources](../../zero-trust-access/infrastructure-as-code/infrastructure-as-code.mdx) guide.

You can view all roles in your cluster on your local workstation by running
the following commands:

@@ -59,7 +63,7 @@ user:

| `max_sessions` | Total number of session channels which can be established across a single SSH connection via Teleport | The lowest value takes precedence. |
| `enhanced_recording` | Indicates which events should be recorded by the BPF-based session recorder | |
| `permit_x11_forwarding` | Allow users to enable X11 forwarding with OpenSSH clients and servers | |
-| `device_trust_mode` | Enforce authenticated device access for users assigned this role (`required`, `optional`, `off`). Applies only to the resources in the roles' allow field. | |
+| `device_trust_mode` | Enforce authenticated device access for users assigned this role (`required`, `required-for-humans`, `optional`, `off`). Applies only to the resources in the roles' allow field. | |
| `require_session_mfa` | Enforce per-session MFA or PIV-hardware key restrictions on user login sessions (`no`, `yes`, `hardware_key`, `hardware_key_touch`). Applies only to the resources in the roles' allow field. | For per-session MFA, Logical "OR" i.e.
evaluates to "yes" if at least one role requires session MFA | | `mfa_verification_interval` | Define the maximum duration that can elapse between successive MFA verifications | The shortest interval wins | | `lock` | Locking mode (`strict` or `best_effort`) | `strict` wins in case of conflict | @@ -67,9 +71,10 @@ user: | `request_prompt` | Prompt for the Access Request "reason" field | | | `max_connections` | Enterprise-only limit on how many concurrent sessions can be started via Teleport | The lowest value takes precedence. | | `max_kubernetes_connections` | Defines the maximum number of concurrent Kubernetes sessions per user | | -| `record_session` |Defines the [Session recording mode](../monitoring/audit.mdx).|The strictest value takes precedence.| +| `record_session` |Defines the [Session recording mode](../deployment/monitoring/audit.mdx).|The strictest value takes precedence.| | `desktop_clipboard` | Allow clipboard sharing for desktop sessions | Logical "AND" i.e. evaluates to "yes" if all roles enable clipboard sharing | | `desktop_directory_sharing` | Allows sharing local workstation directory to remote desktop | Logical "AND" i.e. evaluates to "yes" if all roles enable directory sharing | +| `create_desktop_user` | Allows [desktop auto user creation](../../enroll-resources/desktop-access/reference/user-creation.mdx) on a local Windows desktop. | Logical "AND" i.e. for all roles matching a desktop resource. If all roles that allow access to the resource are set to `true` that will allow user creation. If any are set to `false` user auto creation will be denied. | | `pin_source_ip` | Enable source IP pinning for SSH certificates. | Logical "OR" i.e. evaluates to "yes" if at least one role requires session termination | | `cert_extensions` | Specifies extensions to be included in SSH certificates | | | `create_host_user_mode` | Allow users to be automatically created on a host | Logical "AND" i.e. 
if all roles matching a server specify host user creation (`off`, `keep`, `insecure-drop`), it will evaluate to the option specified by all of the roles. If some roles specify `insecure-drop` and others specify `keep`, it will evaluate to `keep`| @@ -88,7 +93,7 @@ There are currently six supported role versions: `v3`, `v4`, `v5`, `v6`, `v7`, a `v4` and higher roles are completely backwards compatible with `v3`. The only difference lies in the default values which will be applied to the role if they are not explicitly set. Additionally, roles with version `v5` or higher do not allow joining sessions by default. The -[Joining Sessions](../../admin-guides/access-controls/guides/joining-sessions.mdx) configuration in roles 'v5` or higher +[Joining Sessions](../../zero-trust-access/authentication/joining-sessions.mdx) configuration in roles `v5` or higher must be set to allow for joining sessions. Label | `v3` Default | `v4` and higher Default @@ -101,21 +106,33 @@ Label | `v3` Default | `v4` and higher Default Role `v6` introduced a new field `kubernetes_resources` that allows fine-grained control over Kubernetes resources. See [Kubernetes RBAC](../../enroll-resources/kubernetes-access/controls.mdx) for more details. +Role `v7` added support for more `kind` values in the `kubernetes_resources` field. + +Role `v8` added support for Kubernetes CRDs and introduced changes to how `kubernetes_resources` entries behave and are defined. + +See [Kubernetes Resources](../../enroll-resources/kubernetes-access/controls.mdx#kubernetes_resources) for more details about the differences between versions. + +#### Default values for `kubernetes_resources` + +The default value for `kubernetes_resources` is applied when there is a label set to +allow access to a Kubernetes cluster. 
+ Version | `kubernetes_resources` ------------------ | -------------- -`v3`, `v4` and `v5` Default | `[{"kind":"pod", "name":"*", "namespace":"*", "verbs": ["*"]}]` -`v6` Default | `[]` -`v7` Default | `[{"kind":"*", "name":"*", "namespace":"*", "verbs": ["*"]}]` +`v3`, `v4` and `v5` | `[{"kind":"pod","name":"*","namespace":"*","verbs":["*"]}]` +`v6` | `[]` +`v7` | `[{"kind":"*","name":"*","namespace":"*","verbs":["*"]}]` +`v8` | `[{"kind":"*","api_group":"*","name":"*","namespace":"*","verbs":["*"]},{"kind":"*","api_group":"*","name":"*","namespace":"","verbs":["*"]}]` SAML IdP role option `spec.idp.saml.enabled: true/false` is only supported in role version 7 and below. -There are further changes to the `saml_idp_service_provider` resource RBAC starting role `v8`. +There are further changes to the `saml_idp_service_provider` resource RBAC starting role `v8`. See [SAML IdP RBAC](./saml-idp.mdx#RBAC) for more details. ## RBAC for infrastructure resources A Teleport role defines which resources (e.g., applications, servers, and databases) a user can access. -This works by [labeling resources](../../admin-guides/management/admin/labels.mdx) and +This works by [labeling resources](../../zero-trust-access/rbac-get-started/labels.mdx) and configuring allow/deny labels in a role definition. Consider the following use case: @@ -235,7 +252,7 @@ conditions: - `workload_identity_labels_expression` Check out our -[predicate language](../predicate-language.mdx) +[predicate language](predicate-language.mdx) guide for a more in depth explanation of the language. Typically you will only want to use one of `_labels` or @@ -264,7 +281,10 @@ allow: ``` If this rule was declared in the `deny` section of a role definition, it would -prohibit users from getting a list of active sessions. You can see all of the +prohibit users from getting a list of active sessions. 
If a role does not allow +access to list a resource, the user will not be able to view it in the Web UI +or remotely with `tctl`. The only exception is that a user can always view their +assigned roles and their own user information with `tctl`. You can see all of the available resources and verbs under the `allow` section in the example role configuration below. @@ -356,13 +376,14 @@ an administrative role. In particular, you should avoid configuring `allow` rule that grant `create` and `update` permissions on `token` resources to prevent unexpected changes to the configuration or state of your cluster. -## RBAC for sessions +## RBAC for session recordings -It is possible to further limit access to -[shared sessions](../../connect-your-client/tsh.mdx) and [session recordings](../architecture/session-recording.mdx). -The examples below illustrate how to restrict session access only for the user -who created the session. + +The `where` field in RBAC rules provides powerful filtering capabilities, +allowing you to restrict access to session recordings based on user traits +and resource metadata. This enables fine-grained control over who can view +recordings based on contextual information. -Role for restricted access to session recordings: +### Understanding the `where` clause + +The `where` clause acts as a condition that must evaluate to true for access to be +granted. It uses predicate expressions that can reference: + +- Attributes of the authenticated user (name, roles, traits) +- Metadata about the session recording (participants, proto, creator, etc.) +- Resource-specific attributes (server details, database info, etc.) + +Basic syntax example: + +``` +where: contains(session.participants, user.metadata.name) +``` + +This expression evaluates to true only when the current user is found in the session's +participants list. 
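The same condition can also be placed in a `deny` rule. The sketch below (the role name is hypothetical) hides database session recordings from holders of the role, using the `session.proto` values documented in the session fields below:

```yaml
kind: role
version: v8
metadata:
  # Hypothetical role name, for illustration only.
  name: hide-db-recordings
spec:
  deny:
    rules:
      - resources: [session]
        verbs: [list, read]
        # Matches any recording whose protocol is "db",
        # i.e. recordings of database sessions.
        where: session.proto == "db"
```

Because deny rules take precedence over allow rules, this hides database session recordings even when another role grants `list` and `read` on `session` resources.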
+ + +### Common Access Control Recipes + +Here are practical examples of how to use `where` expressions to implement common access control patterns: + +#### Restrict to Session Participants + +Allow users to view only recordings of sessions they participated in: ```yaml -version: v5 +version: v8 kind: role metadata: name: only-own-sessions spec: allow: rules: - # Users can only view session recordings for sessions in which they - # participated. - resources: [session] verbs: [list, read] where: contains(session.participants, user.metadata.name) ``` -Role for restricted access to active sessions: +#### Restrict by User Role + +Allow access only to users that share a role with the session creator: + +```yaml +version: v8 +kind: role +metadata: + name: sessions-viewer +spec: + allow: + rules: + - resources: [session] + verbs: [list, read] + where: contains_any(user.spec.roles, session.user_roles) +``` + +#### Restrict by Team Trait + +Allow members of the same team to view each other's sessions: + +```yaml +version: v8 +kind: role +metadata: + name: team-sessions-viewer +spec: + allow: + rules: + - resources: [session] + verbs: [list, read] + where: contains_any(session.user_traits["team"], user.spec.traits["team"]) +``` + +#### Restrict by Resource Type + +Allow viewing only SSH session recordings: + +```yaml +version: v5 +kind: role +metadata: + name: ssh-sessions-only +spec: + allow: + rules: + - resources: [session] + verbs: [list, read] + where: session.proto == "ssh" +``` + +#### Combine Multiple Conditions + +This rule allows users to view SSH session recordings only if they were a participant OR the node is owned by their team. 
+ +```yaml +version: v5 +kind: role +metadata: + name: complex-sessions-access +spec: + allow: + rules: + - resources: [session] + verbs: [list, read] + where: > + (contains(session.participants, user.metadata.name) || + contains(user.spec.traits["team"], session.server_labels["team"])) && + session.proto == "ssh" +``` + +By leveraging these where expressions, you can implement access control policies that +match your organization's security requirements while still allowing appropriate visibility for auditing and +monitoring purposes. + +### Filter fields + +The `where` field in a rule allows you to filter access to resources based on +user traits and resource metadata. + +#### User fields + +The following user fields can be used in `where` expressions to filter access. +The `user` object represents the authenticated user making the request to +access the session recording. + +| User Field | Description | Type | +|---|---|---| +| `user.metadata.name` | The name of the user. | `string` | +| `user.spec.traits` | The traits assigned to the user. | `dict>` | +| `user.spec.roles` | The roles assigned to the user. | `array `| + + +#### Session fields + + + +Session fields are immutable and capture the state of the session at the moment it +was created. + +If the underlying resource changes (e.g., server labels), these fields will not be +updated, and the session’s RBAC will not reflect the changes. + + + +The following session fields can be used in `where` expressions to filter access. +The `session` object represents the session resource being accessed. +It contains metadata about the session, including the participants, proto, +resource type and the session creator (user who initiated the session) roles and traits. + +| Session Field | Description | Type | +|---|---|---| +| `session.id` | The unique identifier of the session. | `string` | +| `session.kind` | The kind of session (e.g., "ssh", "kube", "db", "rdp"). 
| `string` | +| `session.participants` | The list of users who participated in the session. | `array` | +| `session.proto` | The proto used for the session (e.g., "ssh", "kube", "db", "rdp"). | `string` | +| `session.user` | The user who initiated the session. | `string` | +| `session.user_roles` | The roles assigned to the user who initiated the session. | `array` | +| `session.user_traits` | The traits assigned to the user who initiated the session. | `dict>` | + +Depending on the session kind, additional session fields are available for filtering. + + + + +The following server session fields can also be used in `where` expressions to filter access +for SSH sessions together with the user and session fields above. + +| Server Session Field | Description | Type | +|---|---|---| +| `session.server_id` | The unique identifier of the server where the session took place. | `string` | +| `session.server_hostname` | The hostname of the server where the session took place. | `string` | +| `session.server_labels` | The labels assigned to the server where the session took place. | `dict` | +| `session.server_addr` | The address of the server where the session took place. | `string` | +| `session.login` | The username used to log in to the server during the session. |`string` | + + + + +The following Kubernetes session fields can be used in `where` expressions to filter access +for Kubernetes sessions together with the user and session fields above. + +| Kubernetes Session Field | Description | Type | +|---|---|---| +| `session.kubernetes_cluster` | The name of the Kubernetes cluster where the session took place. | `string` | +| `session.kubernetes_labels` | The labels assigned to the Kubernetes cluster where the session took place. | `dict` | +| `session.kubernetes_user` | The Kubernetes user used in the session. | `string` | +| `session.kubernetes_groups` | The Kubernetes groups used in the session. 
| `array` | +| `session.kubernetes_pod_namespace` | The namespace in the Kubernetes cluster where the session took place. | `string` | +| `session.kubernetes_pod_name` | The name of the pod in the Kubernetes cluster where the session took place. | `string` | +| `session.kubernetes_container_name` | The name of the container in the Kubernetes cluster where the session took place. | `string` | + + + + +The following database session fields can be used in `where` expressions to filter access +for database sessions together with the user and session fields above. + +| Database Session Field | Description | Type | +|---|---|---| +| `session.db_service` | The name of the database service where the session took place. | `string` | +| `session.db_protocol` | The protocol used to connect to the database (e.g., "postgres", "mysql"). | `string` | +| `session.db_uri` | The URI of the database where the session took place. | `string` | +| `session.db_name` | The name of the database accessed during the session. | `string` | +| `session.db_user` | The database user used during the session. | `string` | +| `session.db_labels` | The labels assigned to the database where the session took place. | `dict` | +| `session.db_type` | The type of the database (e.g., "postgres", "mysql"). | `string` | + + + + +The following Windows Desktop session fields can be used in `where` expressions to filter access +for Windows Desktop sessions together with the user and session fields above. + +| Windows Desktop Session Field | Description | Type | +|---|---|---| +| `session.windows_desktop_service` | The name of the Windows Desktop service where the session took place. | `string` | +| `session.desktop_addr` | The address of the Windows desktop where the session took place. | `string` | +| `session.desktop_name` | The name of the Windows desktop where the session took place. | `string` | +| `session.domain` | The domain of the Windows desktop where the session took place. 
| `string` | +| `session.windows_user` | The username used to log in to the Windows desktop during the session. | `string` | +| `session.desktop_labels` | The labels assigned to the Windows desktop where the session took place. | `dict` | + + + + +### Functions + +You can use the following functions in `where` expressions to filter access to +session recordings: + +| Function | Signature |Description | +|---|---|---| +| `contains` | `contains(array, string) -> bool` | Returns true if the array contains the specified value. | +| `contains_any` | `contains_any(array, array) -> bool` | Returns true if the two arrays have any elements in common. | +| `contains_all` | `contains_all(array, array) -> bool` | Returns true if the first array contains all elements of the second array. | +| `can_view` | `can_view() -> bool` | Returns true if the user can view the SSH Node, Kubernetes Cluster, Database or Windows machine in Teleport. | +| `equals` | `equals(array, array) -> bool` | Returns true if the two strings or two arrays are equal. | +| `set` | `set(string,string...) -> array` | Converts one or multiple strings into an array. | + + +## RBAC for active sessions + +It is possible to further limit access to +[shared sessions](../../connect-your-client/teleport-clients/tsh.mdx). +The example below illustrates how to restrict access to active SSH sessions +only for the user who created the session. ```yaml version: v5 @@ -419,7 +689,7 @@ spec: The following role fields configure Access Requests. For a full description of Access Request configuration fields and their relationship to one another, see the [Access Request configuration -guide](../../admin-guides/access-controls/access-requests/access-request-configuration.mdx). +guide](../../identity-governance/access-requests/access-request-configuration.mdx). ### Creating Access Requests @@ -428,14 +698,14 @@ Requests. 
The `request` field can fall under both `spec.allow` and `spec.deny`: |Role Field|Description|Further Information| |---|---|---| -| `request.roles` |Teleport roles that a user can or cannot request|[Restrict role requests](../../admin-guides/access-controls/access-requests/access-request-configuration.mdx#restrict-role-requests)| -| `request.claims_to_roles`|Teleport roles that a user can or cannot access based on their traits.|[Restrict role requests](../../admin-guides/access-controls/access-requests/access-request-configuration.mdx#restrict-role-requests)| -| `request.search_as_roles`|Teleport roles that a user can or cannot assume while searching for resources to request access to.|[Restrict resource requests](../../admin-guides/access-controls/access-requests/access-request-configuration.mdx#restrict-resource-requests)| -| `request.max_duration`|The maximum duration of elevated privileges if an Access Request were granted. | [How long access lasts](../../admin-guides/access-controls/access-requests/access-request-configuration.mdx#how-long-access-lasts) | -| `request.reason`|Configures the behavior of Access Request reasons. 
| [Requiring request reasons](../../admin-guides/access-controls/access-requests/access-request-configuration.mdx#requiring-request-reasons) | -| `request.thresholds`|Configures the criteria that an Access Request must meet.| [Review thresholds](../../admin-guides/access-controls/access-requests/access-request-configuration.mdx#review-thresholds) | -| `request.suggested_reviewers`|Reviewers to add to an Access Request by default.| [Suggested reviewers](../../admin-guides/access-controls/access-requests/access-request-configuration.mdx#suggested-reviewers)| -| `request.annotations`|Arbitrary data to write (or prevent writing) to an Access Request|[Request annotations](../../admin-guides/access-controls/access-requests/access-request-configuration.mdx#request-annotations)| +| `request.roles` |Teleport roles that a user can or cannot request|[Restrict role requests](../../identity-governance/access-requests/access-request-configuration.mdx)| +| `request.claims_to_roles`|Teleport roles that a user can or cannot access based on their traits.|[Restrict role requests](../../identity-governance/access-requests/access-request-configuration.mdx)| +| `request.search_as_roles`|Teleport roles that a user can or cannot assume while searching for resources to request access to.|[Restrict resource requests](../../identity-governance/access-requests/access-request-configuration.mdx)| +| `request.max_duration`|The maximum duration of elevated privileges if an Access Request were granted. | [How long access lasts](../../identity-governance/access-requests/access-request-configuration.mdx) | +| `request.reason`|Configures the behavior of Access Request reasons. 
| [Requiring request reasons](../../identity-governance/access-requests/access-request-configuration.mdx) | +| `request.thresholds`|Configures the criteria that an Access Request must meet.| [Review thresholds](../../identity-governance/access-requests/access-request-configuration.mdx) | +| `request.suggested_reviewers`|Reviewers to add to an Access Request by default.| [Suggested reviewers](../../identity-governance/access-requests/access-request-configuration.mdx)| +| `request.annotations`|Arbitrary data to write (or prevent writing) to an Access Request|[Request annotations](../../identity-governance/access-requests/access-request-configuration.mdx)| ### Reviewing Access Requests @@ -444,10 +714,10 @@ Requests, and like `request`, can fall under `allow` or `deny`: |Role Field|Description|Further Information| |---|---|---| -|`review_requests.roles`|Allows or denies the ability to review requests for particular roles.|[Reviews for specific roles](../../admin-guides/access-controls/access-requests/access-request-configuration.mdx#allowing-and-denying-reviews-for-specific-roles)| -|`review_requests.where`|Configures conditions in which a user with the role may review an Access Request|[`where` expressions](../../admin-guides/access-controls/access-requests/access-request-configuration.mdx#where-expressions)| -|`review_requests.preview_as_roles`|Allows or denies the ability to list resources that a user with the requested role can access|[Inspecting requested resources](../../admin-guides/access-controls/access-requests/access-request-configuration.mdx#inspecting-requested-resources)| -|`review_requests.claims_to_roles`|Teleport roles that a user can or cannot review requests for based on their traits.|[Reviews for specific roles](../../admin-guides/access-controls/access-requests/access-request-configuration.mdx#allowing-and-denying-reviews-for-specific-roles)| +|`review_requests.roles`|Allows or denies the ability to review requests for particular roles.|[Reviews 
for specific roles](../../identity-governance/access-requests/access-request-configuration.mdx)| +|`review_requests.where`|Configures conditions in which a user with the role may review an Access Request|[`where` expressions](../../identity-governance/access-requests/access-request-configuration.mdx)| +|`review_requests.preview_as_roles`|Allows or denies the ability to list resources that a user with the requested role can access|[Inspecting requested resources](../../identity-governance/access-requests/access-request-configuration.mdx)| +|`review_requests.claims_to_roles`|Teleport roles that a user can or cannot review requests for based on their traits.|[Reviews for specific roles](../../identity-governance/access-requests/access-request-configuration.mdx)| ### Client options @@ -455,8 +725,8 @@ The `options` field, a property of `spec`, applies additional settings: |Role Field|Description|Further Information| |---|---|---| -|`options.request_access`|Configures Teleport client behavior for creating Access Requests|[How clients request access](../../admin-guides/access-controls/access-requests/access-request-configuration.mdx#how-clients-request-access)| -|`options.request_prompt`|Configures the prompt that Teleport clients display when a user requests access|[How clients request access](../../admin-guides/access-controls/access-requests/access-request-configuration.mdx#how-clients-request-access)| +|`options.request_access`|Configures Teleport client behavior for creating Access Requests|[How clients request access](../../identity-governance/access-requests/access-request-configuration.mdx)| +|`options.request_prompt`|Configures the prompt that Teleport clients display when a user requests access|[How clients request access](../../identity-governance/access-requests/access-request-configuration.mdx)| ## Role templates @@ -483,14 +753,14 @@ Labels for resources enrolled with Teleport: |Role Field|Teleport Resource| |---|---| 
-|`app_labels`|[Applications](../../enroll-resources/application-access/controls.mdx)| -|`cluster_labels`|[Trusted Clusters](../../admin-guides/management/admin/trustedclusters.mdx)| +|`app_labels`|[Applications](../../enroll-resources/application-access/configuration/controls.mdx)| +|`cluster_labels`|[Trusted Clusters](../../zero-trust-access/deploy-a-cluster/trustedclusters.mdx)| |`db_labels`|[Databases](../../enroll-resources/database-access/rbac.mdx)| |`db_service_labels`|[Database Service](../../enroll-resources/database-access/database-access.mdx) instances| |`kubernetes_labels`|[Kubernetes clusters](../../enroll-resources/kubernetes-access/controls.mdx)| |`node_labels`|[SSH Servers](../../enroll-resources/server-access/server-access.mdx)| |`windows_desktop_labels`|[Windows desktops](../../enroll-resources/server-access/server-access.mdx)| -|`workload_identity_labels`|[Workload Identities](../workload-identity/workload-identity-resource.mdx)| +|`workload_identity_labels`|[Workload Identities](../machine-workload-identity/workload-identity/workload-identity-resource.mdx)| Principals a user can assume on infrastructure resources: - `aws_role_arns` @@ -539,7 +809,7 @@ sign-on provider. |`{{internal.jwt}}`|JWT header used for app access.| |`{{internal.kubernetes_groups}}`|List of allowed Kubernetes groups for the user.| |`{{internal.kubernetes_users}}`|List of allowed Kubernetes users for the user.| -|`{{internal.logins}}` | Substituted with a value stored in Teleport's local user database and logins from a root cluster.

For local users, Teleport will substitute this with the "allowed logins" parameter used in the `tctl users add [user] ` command.

If the role is within a leaf cluster in a [trusted cluster](../../admin-guides/management/admin/trustedclusters.mdx), Teleport will substitute the logins from the root cluster whether the user is a local user or from an SSO provider.

As an example, if the user has the `ubuntu` login in the root cluster, then `ubuntu` will be substituted in the leaf cluster with this variable. | +|`{{internal.logins}}` | Substituted with a value stored in Teleport's local user database and logins from a root cluster.

For local users, Teleport will substitute this with the "allowed logins" parameter used in the `tctl users add [user] <allowed logins>` command.

If the role is within a leaf cluster in a [trusted cluster](../../zero-trust-access/deploy-a-cluster/trustedclusters.mdx), Teleport will substitute the logins from the root cluster whether the user is a local user or from an SSO provider.

As an example, if the user has the `ubuntu` login in the root cluster, then `ubuntu` will be substituted in the leaf cluster with this variable. | |`{{internal.windows_logins}}`|List of allowed Windows logins for the user.| |`{{external.xyz}}` | Substituted with a value from an external [SSO provider](https://en.wikipedia.org/wiki/Single_sign-on). If using SAML, this will be expanded with "xyz" assertion value. For OIDC, the variable is expanded to the value of the "xyz" claim. See the next section for more information on referring to the `external` variable in Teleport roles. | @@ -548,7 +818,7 @@ sign-on provider. The `internal` trait namespace includes only the exact internal trait names included in the above table. For local Teleport users, these traits can be set in the `spec.traits` field of the -[user resource](../resources.mdx#user). +[user resource](../infrastructure-as-code/teleport-resources/user.mdx). These trait names can also be set for SSO users if they are included in an attribute or claim from your IdP. @@ -580,7 +850,7 @@ included in the `spec.allow.logins` field of roles the user holds in the root cl #### Referring to external traits in Teleport roles For local Teleport users, the `external` trait namespace includes all values -from the `spec.traits` field of the [user resource](../resources.mdx#user). +from the `spec.traits` field of the [user resource](../infrastructure-as-code/teleport-resources/user.mdx). This includes any custom trait names, as well as names matching the `internal` traits listed above. For example, `{{internal.logins}}` and `{{external.logins}}` are both valid ways @@ -689,5 +959,5 @@ within this guide: | `user.metadata.name` | The user's name | Read the [predicate -language](../predicate-language.mdx) +language](predicate-language.mdx) guide for a more in depth explanation of the language. 
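As a sketch of the trait interpolation described above, a single template role can serve many users (the `team` trait name and label are assumptions; substitute the attribute or claim your IdP actually sends):

```yaml
kind: role
version: v7
metadata:
  # Hypothetical role name used for illustration.
  name: team-template
spec:
  allow:
    # Expands to the logins stored for the local user, or to the
    # logins passed down from the root cluster for leaf-cluster users.
    logins: ["{{internal.logins}}"]
    # "team" is an assumed IdP attribute/claim name; the role only
    # matches servers labeled with one of the user's team values.
    node_labels:
      "team": "{{external.team}}"
```

Each user who holds this role sees only the servers whose `team` label matches one of their own `team` trait values, so one role definition can replace a separate role per team.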
diff --git a/docs/pages/reference/access-controls/saml-idp.mdx b/docs/pages/reference/access-controls/saml-idp.mdx index 040505a8c9354..43f0ae10ba344 100644 --- a/docs/pages/reference/access-controls/saml-idp.mdx +++ b/docs/pages/reference/access-controls/saml-idp.mdx @@ -1,6 +1,11 @@ --- title: SAML Identity Provider Reference +sidebar_label: SAML Identity Provider description: Reference documentation for the SAML identity provider +tags: + - conceptual + - identity-governance + - infrastructure-identity --- This page provides details on the SAML identity provider available @@ -99,69 +104,12 @@ The assertions currently provided by Teleport's SAML identity provider are liste ## RBAC -In role version 7 and below, the following access controls are applied to the `saml_idp_service_provider` resource access: +Access to the SAML IdP service provider can be configured by using a Teleport role +with an allow/deny rule targeting the `saml_idp_service_provider` resource and label matchers +matching role `app_labels` with the `saml_idp_service_provider` resource labels. -- Role option that enables idp: `spec.options.idp.saml.enabled: true/false`. -- Cluster auth preference that enables idp: `spec.idp.saml.enabled: true/false`. -- Resource rule `spec.allow/deny.rules.resources.saml_idp_service_provider`. Applicable only to admin actions. - - Allow rule with `read,list` verbs are applied implicitly. - - Deny rule with `read,list` verbs gets precedence over implicit allow. -- Per session MFA: `spec.options.require_session_mfa: true/false`. - -Teleport role version 8 introduces the following changes: -- Label matchers based on `app_labels`. -- Resource rule with verbs targeting `saml_idp_service_provider` is now applicable to both resource access and admin actions. -- Device Trust for SAML IdP session. - -The role option `spec.options.idp.saml.enabled: true/false` is no longer supported starting role version 8. - -Per session MFA is supported in all role versions. 
- -### RBAC precedence - -Users can be assigned with both the newer role (version 8) and the older versioned roles (version 7 and below) at the same time. -If a user is assigned with both role version 7 and 8, deny rules of the version 8 takes precedence. - -For example, -- If role version 7 denies access, access is denied. -- If role version 7 allows access but role version 8 denies access, access is denied. -- If role version 7 allows access and role version 8 also allows access, access is allowed. - -The table below shows a few more examples of applicable RBAC, when two roles with version 7 and 8 each are assigned to the user. - -| Role v7 | Role v8 | Result | -|----------------------------------------------------------------------------|------------------------------------------------------------|-------------------| -|
options:
idp:
saml:
enabled: false
|
allow:
app_labels:
* : *
| ❌ no access. | -|
options:
idp:
saml:
enabled: true
|
deny:
app_labels:
* : *
| ❌ no access | -|
options:
idp:
saml:
enabled: true
|
allow:
app_labels:
* : *
deny:
rules:
resources:
- saml_idp_service_provider
verbs:
- read
- list
| ❌ no access | -|
options:
idp:
saml:
enabled: true
|
allow:
app_labels:
* : *
| ✅ full access | -| No version 7 role assigned to the user |
allow:
app_labels:
* : *
| ✅ full access | -|
options:
idp:
saml:
enabled: true
| No version 8 role assigned to the user | ✅ full access | - - - `saml_idp_service_provider` resource does not yet support MFA and Device Trust for admin actions. - - -## Disabling SAML identity provider at cluster level - -To disable access to the identity provider at the cluster level, create -or update the `cluster_auth_preference` object with the following setting: - -```yaml -kind: cluster_auth_preference -metadata: - name: cluster-auth-preference -spec: - ... - idp: - saml: - enabled: false - ... -version: v2 -``` - -This will disable access to the SAML identity provider for all users regardless -of their role level permissions. +See this [RBAC guide](../../identity-governance/idps/saml-idp-rbac.mdx) to learn more about how to +manage access to the SAML IdP service provider resource in Teleport. ## Troubleshooting diff --git a/docs/pages/reference/user-types.mdx b/docs/pages/reference/access-controls/user-types.mdx similarity index 83% rename from docs/pages/reference/user-types.mdx rename to docs/pages/reference/access-controls/user-types.mdx index 0b0f03e31b985..43ab872b118d7 100644 --- a/docs/pages/reference/user-types.mdx +++ b/docs/pages/reference/access-controls/user-types.mdx @@ -2,7 +2,9 @@ title: User Types description: Describes the different types of Teleport users and their properties. keywords: [user,idp,sso] -tocDepth: 3 +tags: + - conceptual + - zero-trust --- This guide explains the different kinds of users in Teleport, how they are @@ -23,20 +25,20 @@ one-time passwords. Local user login can be disabled via `cluster_auth_preference` or `teleport.yaml`. Disabling local authentication is required for [FIPS/FedRAMP compliance -](../admin-guides/access-controls/compliance-frameworks/fedramp.mdx). +](../../zero-trust-access/compliance-frameworks/fedramp.mdx). ### Special case: Bots -Machine ID provides machines with an identity that can authenticate to the -Teleport cluster. This identity is known as a bot. Bots are represented in -Teleport by a user and a role resource and can be created via the +Machine & Workload Identity provides machines with an identity that can +authenticate to the Teleport cluster. This identity is known as a bot. Bots are +represented in Teleport by a user and a role resource and can be created via the `tctl bots add` command. Unlike human users, who use a password, MFA, or SSO, bot users join -the cluster as Teleport services using [a join method](./join-methods.mdx). They +the cluster as Teleport services using [a join method](../deployment/join-methods.mdx). They can still join even if local auth is disabled. -See the [Machine ID introduction](../enroll-resources/machine-id/introduction.mdx) for more information. +See the [Machine & Workload Identity introduction](../../machine-workload-identity/introduction.mdx) for more information. ## SSO users @@ -68,7 +70,7 @@ and automatically expire. The expiry is dynamically computed based on the IdP answer validity, the max session duration allowed by the user roles, and cannot exceed 30 hours. Those users cannot be edited via `tctl`, only deleted. -See the [SSO setup guides](../admin-guides/access-controls/sso/sso.mdx) to learn how to setup an +See the [SSO setup guides](../../zero-trust-access/sso/sso.mdx) to learn how to set up an authentication connector and allow user to log in via an IdP. ### Synced users diff --git a/docs/pages/reference/agent-services/agent-services.mdx b/docs/pages/reference/agent-services/agent-services.mdx deleted file mode 100644 index b777635df07c3..0000000000000 --- a/docs/pages/reference/agent-services/agent-services.mdx +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Agent Services -h1: Agent Service References -description: Includes guides to use while using the SSH Service, Database Service, and other Teleport Agent services. ---- - -Use these reference guides to configure and run Teleport Agent services. 
For a -comprehensive list of options you can configure when running the `teleport` -binary, including all options for Agent services, consult the [Teleport -Configuration Reference](../config.mdx). - - diff --git a/docs/pages/reference/agent-services/application-access.mdx b/docs/pages/reference/agent-services/application-access.mdx deleted file mode 100644 index be61f71a592fe..0000000000000 --- a/docs/pages/reference/agent-services/application-access.mdx +++ /dev/null @@ -1,232 +0,0 @@ ---- -title: Application Access Reference Documentation -description: Configuration and CLI reference documentation for Teleport application access. ---- - -This guide describes interfaces and options for interacting with the Teleport -Application Service, including the static configuration file for the `teleport` -binary, the dynamic `app` resource, and `tsh apps` commands. - -## Configuration - -(!docs/pages/includes/backup-warning.mdx!) - -The following snippet shows the full YAML configuration of an Application Service -appearing in the `teleport.yaml` configuration file: - -```yaml -app_service: - # Enables application proxy service. - enabled: true - # Teleport provides a small debug app called "dumper" that can be used - # to make sure application access is working correctly. It outputs JWTs, - # so it can be useful when extending your application. - debug_app: true - # Matchers for application resources created with "tctl create" command. - resources: - - labels: - "*": "*" - # This section contains definitions of all applications proxied by this - # service. It can contain multiple items. - apps: - # Name of the application. Used for identification purposes. - - name: "grafana" - # Free-form application description. - description: "This is an internal Grafana instance" - # URI and port the application is available at. - uri: "http://localhost:3000" - # Optional application public address to override. - public_addr: "grafana.teleport.example.com" - # Rewrites section. 
- rewrite: - # Specify whether to include roles or traits in the JWT. - # Options: - # - roles-and-traits: include both roles and traits - # - roles: include only roles - # - traits: include only traits - # - none: exclude both roles and traits from the JWT token - # Default: roles-and-traits - jwt_claims: roles-and-traits - # Rewrite the "Location" header on redirect responses replacing the - # host with the public address of this application. - redirect: - - "grafana.internal.dev" - # Headers passthrough configuration. - headers: - - "X-Custom-Header: example" - - "X-External-Trait: {{external.env}}" - # Disable application certificate validation. - insecure_skip_verify: true - # Optional static labels to assign to the app. Used in RBAC. - labels: - env: "prod" - # Optional dynamic labels to assign to the app. Used in RBAC. - commands: - - name: "hostname" - command: ["hostname"] - period: 1m0s - # Optional AWS-specific configurations. - aws: - # External ID used when assuming AWS roles for this application. - external_id: "example-external-id" - - name: "azure-cli" - # Optional: For access to cloud provider APIs, specify the cloud provider. - # Allowed values are "AWS", "Azure", and "GCP". - cloud: "Azure" -``` - -For full details on configuring Teleport roles, including how Teleport -populates the `external` traits, see the [Access Controls -Reference](../access-controls/roles.mdx). - -## Application resource - -Full YAML spec of application resources managed by `tctl` resource commands: - -```yaml -kind: app -version: v3 -metadata: - # Application name. - name: example - # Application description. - description: "Example application" - # Application static labels. - labels: - env: local -spec: - # URI and port application is available at. - uri: http://localhost:4321 - # Optional application public address. - public_addr: test.example.com - # Disable application certificate validation. - insecure_skip_verify: true - # Rewrites configuration. 
- rewrite: - # Rewrite the "Location" header on redirect responses replacing the - # host with the public address of this application. - redirect: - - "grafana.internal.dev" - # Headers passthrough configuration. - headers: - - name: "X-Custom-Header" - value: "example" - - name: "X-External-Trait" - value: "{{external.env}}" - # Optional dynamic labels. - dynamic_labels: - - name: "hostname" - command: ["hostname"] - period: 1m0s -``` - -You can create a new `app` resource by running the following commands, which -assume that you have created a YAML file called `app.yaml` with your configuration: - - - - -```code -# Log in to your cluster with tsh so you can use tctl from your local machine. -# You can also run tctl on your Auth Service host without running "tsh login" -# first. -$ tsh login --proxy=teleport.example.com --user=myuser -# Create the resource -$ tctl create -f app.yaml -``` - - - - -```code -# Log in to your cluster with tsh so you can use tctl from your local machine. -$ tsh login --proxy=mytenant.teleport.sh --user=myuser -# Create the resource. -$ tctl create -f app.yaml -``` - - - - - -## CLI - -This section shows CLI commands relevant for application access. - -### tsh apps ls - -Lists available applications. - -```code -$ tsh apps ls -``` - -### tsh apps login - -Retrieves short-lived X.509 certificate for CLI application access. - -```code -$ tsh apps login grafana -``` - -| Flag | Description | -| - | - | -| `--aws-role` | For AWS CLI access, the role ARN or role name of an AWS IAM role. | -| `--azure-identity` | For Azure CLI access, the name or URI of an Azure managed identity to use for accessing the Azure CLI. | - -### tsh apps logout - -Removes CLI application access certificate. - -```code -# Log out of a particular app. -$ tsh apps logout grafana - -# Log out of all apps. -$ tsh apps logout -``` - -### tsh apps config - -Prints application connection information. - -```code -# Print app information in a table form. 
-$ tsh apps config - -# Print information for a particular app. -$ tsh apps config grafana - -# Print an example curl command. -$ tsh apps config --format=curl - -# Construct a curl command. -$ curl $(tsh apps config --format=uri) \ - --cacert $(tsh apps config --format=ca) \ - --cert $(tsh apps config --format=cert) \ - --key $(tsh apps config --format=key) -``` - -| Flag | Description | -| - | - | -| `--format` | Optional print format, one of: `uri` to print app address, `ca` to print CA cert path, `cert` to print cert path, `key` to print key path, `curl` to print example curl command.| - -### tsh az - -Run an Azure CLI command via the Teleport Application Service: - -```code -$ tsh az <command> -``` - -`<command>`: A valid command within the `az` CLI, including arguments and flags. -See the [Azure -documentation](https://learn.microsoft.com/en-us/cli/azure/reference-index?view=azure-cli-latest) -for the full list of `az` CLI commands. - -To run this command, one of the user's roles must include the -`spec.allow.azure_identities` field with one of the identities used by the -Application Service. To learn how to set up secure access to Azure via Teleport, -read [Protect the Azure CLI with Teleport Application -Access](../../enroll-resources/application-access/cloud-apis/azure.mdx). - diff --git a/docs/pages/reference/agent-services/auto-discovery-reference/auto-discovery-reference.mdx b/docs/pages/reference/agent-services/auto-discovery-reference/auto-discovery-reference.mdx deleted file mode 100644 index 59bbc9646d46a..0000000000000 --- a/docs/pages/reference/agent-services/auto-discovery-reference/auto-discovery-reference.mdx +++ /dev/null @@ -1,7 +0,0 @@ ---- -title: Auto-Discovery Reference -description: Configuration reference for the Teleport Discovery Service. 
---- - -- [AWS IAM](aws-iam.mdx) -- [Kubernetes Applications](kubernetes-application-discovery.mdx) diff --git a/docs/pages/reference/agent-services/auto-discovery-reference/kubernetes-application-discovery.mdx b/docs/pages/reference/agent-services/auto-discovery-reference/kubernetes-application-discovery.mdx deleted file mode 100644 index 8ab153e65c0d2..0000000000000 --- a/docs/pages/reference/agent-services/auto-discovery-reference/kubernetes-application-discovery.mdx +++ /dev/null @@ -1,162 +0,0 @@ ---- -title: Kubernetes Application Discovery Reference -description: This guide is a comprehensive reference of configuration options for automatically enrolling Kubernetes applications with Teleport. ---- - -Kubernetes Application Discovery involves the Teleport Discovery Service, -Teleport Application Service, and annotations on Kubernetes services. This guide -shows you how to configure each of these to manage Kubernetes Application -Discovery in your Kubernetes cluster. - -## Configuring Teleport Agent Helm chart - -You can configure scope of services discovery by setting value `kubernetesDiscovery` of the chart. For more information -please see [helm chart documentation](../../helm-reference/teleport-kube-agent.mdx). - -`values.yaml` example: - -```yaml -kubernetesDiscovery: -- types: ["app"] - namespaces: [ "toronto", "porto" ] - labels: - env: staging -- types: ["app"] - namespaces: [ "seattle", "oakland" ] - labels: - env: testing -``` - -## Configuring Kubernetes Apps Discovery manually - -While the `teleport-kube-agent` Helm chart will set up configuration for you -automatically, you can also configure the required services manually. To do so, -adjust the configuration files for the Teleport Application Service and Teleport -Discovery Service, then restart the agents running these services. - -Configuration for the Discovery Service is controlled by the `kubernetes` field, -example: - -(!docs/pages/includes/discovery/discovery-group.mdx!) 
- -```yaml -# This section configures the Discovery Service -discovery_service: - enabled: true - discovery_group: main-cluster - kubernetes: - - types: ["app"] - namespaces: [ "toronto", "porto" ] - labels: - env: staging - - types: ["app"] - namespaces: [ "seattle", "oakland" ] - labels: - env: testing -``` - -Configuration for the Application Service is controlled by the `resources` field, example: - -```yaml -app_service: - enabled: true - resources: - - labels: - "teleport.dev/kubernetes-cluster": "main-cluster" - "teleport.dev/origin": "discovery-kubernetes" -``` - -Label `teleport.dev/kubernetes-cluster` should match value of `discovery_group` field in the Discovery Service config. - -For more information you can take a look at [`discovery_service`](../../config.mdx) and [`app_service`](../../config.mdx) configuration references. - -## Annotations - -Kubernetes annotations on services can be used to fine tune transformation of services to apps. -All annotations are optional - they will override default behaviour, but they are not required for import of services. - -### `teleport.dev/discovery-type` - -Controls what type this service is considered to be. If annotation is missing, -by default all services are considered to be of "app" type. If matchers in the Discovery Service -config match service type it will be imported. Currently the only supported value is -`app`, which means Teleport application will be imported from this service. In the future there are plans to expand to database importing. - -### `teleport.dev/protocol` - -Controls protocol for the uri of the Teleport app we create. If annotation is not set, -heuristics will be used to try to determine protocol of an exposed port. -If all heuristics didn't work, the port will be skipped. 
For app to be imported with `tcp` protocol, the -service should have explicit annotation `teleport.dev/protocol: "tcp"` - -### `teleport.dev/port` - -Controls preferred port for the Kubernetes service, only this one will be used even if service -has multiple exposed ports. Its value should be one of the exposed service ports; otherwise, the app will not be imported. -Value can be matched either by numeric value or by the name of the port defined on the service. - -### `teleport.dev/name` - -Controls resulting app name. If present it will override default app name pattern -`$SERVICE_NAME-$NAMESPACE-$KUBE_CLUSTER_NAME`. If multiple ports are exposed, resulting apps will have port names added -as a suffix to the annotation value, as `$APP_NAME-$PORT1_NAME`, `$APP_NAME-$PORT2_NAME` etc, where `$APP_NAME` is the name -set by the annotation. - -### `teleport.dev/insecure-skip-verify` - -Controls whether TLS certificate verification should be skipped for this app. -If present and set to `true`, TLS certificate verification will be skipped. - -```yaml -annotations: - teleport.dev/insecure-skip-verify: "true" -``` - -### `teleport.dev/ignore` - -Controls whether this service should be ignored by the Discovery Service. -This annotation is useful when you want to exclude a service from being imported as an app -when it matches the Discovery Service config. For example, you may want to exclude a service -that shares the same labels as another services that you want to import as apps. - -```yaml -annotations: - teleport.dev/ignore: "true" -``` - -### `teleport.dev/app-rewrite` - -Controls rewrite configuration for Teleport app, if needed. It should -contain full rewrite configuration in YAML format, same as one would use when configuring an app with dynamic registration syntax (see [documentation](../../../enroll-resources/application-access/guides/connecting-apps.mdx)). 
- -```yaml -annotations: - teleport.dev/app-rewrite: | - redirect: - - "localhost" - - "jenkins.internal.dev" - headers: - - name: "X-Custom-Header" - value: "example" - - name: "Authorization" - value: "Bearer {{internal.jwt}}" -``` - -### `teleport.dev/public-addr` - -Controls the public address for the Teleport app we create if needed. - -```yaml -annotations: - teleport.dev/public-addr: "custom.teleport.dev" -``` - -### `teleport.dev/path` - -The path is appended to the URI generated for the Teleport app for cases where -an application is served on a sub-path of an HTTP service. - -```yaml -annotations: - teleport.dev/path: "foo/bar" -``` diff --git a/docs/pages/reference/agent-services/database-access-reference/audit.mdx b/docs/pages/reference/agent-services/database-access-reference/audit.mdx deleted file mode 100644 index b182d030bee48..0000000000000 --- a/docs/pages/reference/agent-services/database-access-reference/audit.mdx +++ /dev/null @@ -1,158 +0,0 @@ ---- -title: Database Access Audit Events Reference -description: Audit events reference for Teleport database access. ---- - - -(!docs/pages/includes/database-access/db-audit-events.mdx!) - -## db.session.start (TDB00I/W) - -Emitted when a client successfully connects to a database, or when a connection -attempt fails due to access denied. - -Successful connection event: - -```json -{ - "cluster_name": "root", // Teleport cluster name. - "code": "TDB00I", // Event code. - "db_name": "test", // Database/schema name. - "db_protocol": "postgres", // Database protocol. - "db_service": "local", // Database service name. - "db_uri": "localhost:5432", // Database server endpoint. - "db_user": "postgres", // Database account name. - "ei": 0, // Event index within the session. - "event": "db.session.start", // Event name. - "namespace": "default", // Event namespace, always "default". - "server_id": "05ff66c9-a948-42f4-af0e-a1b6ba62561e", // Database Service host ID. 
- "sid": "63b6fa11-cd44-477b-911a-602b75ab13b5", // Unique database session ID. - "success": true, // Indicates successful connection. - "time": "2021-04-27T23:00:26.014Z", // Event timestamp. - "uid": "eac5b6c8-384a-4471-9559-e135834b1ab0", // Unique event ID. - "user": "alice" // Teleport user name. -} -``` - -Access denied event: - -```json -{ - "cluster_name": "root", // Teleport cluster name. - "code": "TDB00W", // Event code. - "db_name": "test", // Database/schema name user attempted to connect to. - "db_protocol": "postgres", // Database protocol. - "db_service": "local", // Database service name. - "db_uri": "localhost:5432", // Database server endpoint. - "db_user": "superuser", // Database account name user attempted to log in as. - "ei": 0, // Event index within the session. - "error": "access to database denied", // Connection error. - "event": "db.session.start", // Event name. - "message": "access to database denied", // Detailed error message. - "namespace": "default", // Event namespace, always "default". - "server_id": "05ff66c9-a948-42f4-af0e-a1b6ba62561e", // Database Service host ID. - "sid": "d18388e5-cc7c-4624-b22b-d36db60d0c50", // Unique database session ID. - "success": false, // Indicates unsuccessful connection. - "time": "2021-04-27T23:03:05.226Z", // Event timestamp. - "uid": "507fe008-99a4-4247-8603-6ba03408d047", // Unique event ID. - "user": "alice" // Teleport user name. -} -``` - -## db.session.end (TDB01I) - -Emitted when a client disconnects from the database. - -```json -{ - "cluster_name": "root", // Teleport cluster name. - "code": "TDB01I", // Event code. - "db_name": "test", // Database/schema name. - "db_protocol": "postgres", // Database protocol. - "db_service": "local", // Database service name. - "db_uri": "localhost:5432", // Database server endpoint. - "db_user": "postgres", // Database account name. - "ei": 3, // Event index within the session. - "event": "db.session.end", // Event name. 
- "sid": "63b6fa11-cd44-477b-911a-602b75ab13b5", // Unique database session ID. - "time": "2021-04-27T23:00:30.046Z", // Event timestamp. - "uid": "a626b22d-bbd0-40ef-9896-b7ff365664b0", // Unique event ID. - "user": "alice" // Teleport user name. -} -``` - -## db.session.query (TDB02I) - -Emitted when a client executes a SQL query. - -```json -{ - "cluster_name": "root", // Teleport cluster name. - "code": "TDB02I", // Event code. - "db_name": "test", // Database/schema name. - "db_protocol": "postgres", // Database protocol. - "db_query": "INSERT INTO public.test (id,\"timestamp\",json)\n\tVALUES ($1,$2,$3)", // Query text. - "db_query_parameters": [ // Query parameters (for prepared statements). - "test-id", - "2022-04-02 17:50:20-07", - "{\"k\": \"v\"}" - ], - "db_service": "local", // Database service name. - "db_uri": "localhost:5432", // Database server endpoint. - "db_user": "postgres", // Database account name. - "ei": 29, // Event index within the session. - "event": "db.session.query", // Event name. - "sid": "691e6f70-3c31-4412-90aa-fe0558abb212", // Unique database session ID. - "time": "2021-04-27T23:04:57.395Z", // Event timestamp. - "uid": "9f7b4179-b9cf-4302-bb7c-1408e404823f", // Unique event ID. - "user": "alice" // Teleport user name. -} -``` - -## db.session.spanner.rpc (TSPN001I/W) - -Emitted when a client executes a remote procedure call (RPC), or when an RPC -execution attempt fails due to access denied. - -```json -{ - "args": { // RPC arguments (specific to the "procedure" below). - "query_options": {}, - "request_options": {}, - "seqno": 1, - "session": "projects/project-id/instances/instance-id/databases/dev-db/sessions/ABCDEF1234567890", - "sql": "select * from TestTable", - "transaction": { - "Selector": { - "SingleUse": { - "Mode": { - "ReadOnly": { - "TimestampBound": { - "Strong": true - }, - "return_read_timestamp": true - } - } - } - } - } - }, - "cluster_name": "root", // Teleport cluster name. 
- "code": "TSPN001I", // Event code. - "db_name": "dev-db", // Database name. - "db_origin": "dynamic", // Teleport database service config origin. - "db_protocol": "spanner", // Database protocol. - "db_service": "teleport-spanner", // Database service name. - "db_type": "spanner", // Database type. - "db_uri": "spanner.googleapis.com:443", // Database service endpoint. - "db_user": "some-user", // Database account name, (a GCP IAM service account name without its @<project-id>.iam.gserviceaccount.com suffix). - "ei": 29, // Event index within the session. - "event": "db.session.spanner.rpc", // Event name. - "procedure": "ExecuteStreamingSql", // Name of the remote procedure call (RPC). - "sid": "406b9883-0e16-42f2-9d0b-b3bd956f9cd4", // Unique database session ID. - "success": true, // The RPC was allowed by Teleport RBAC. - "time": "2024-03-13T00:02:44.739Z", // Event timestamp. - "uid": "e0625e79-9399-4ea3-aa8b-dba1eb98658d", // Unique event ID. - "user": "alice@example.com" // Teleport user name. -} -``` diff --git a/docs/pages/reference/agent-services/database-access-reference/aws.mdx b/docs/pages/reference/agent-services/database-access-reference/aws.mdx deleted file mode 100644 index d7f57e4b5e05f..0000000000000 --- a/docs/pages/reference/agent-services/database-access-reference/aws.mdx +++ /dev/null @@ -1,105 +0,0 @@ ---- -title: Database Access AWS IAM Reference -description: AWS IAM policies for Teleport database access. ---- - -The Teleport Database Service requires IAM permissions for various tasks -depending on the database type and setup, such as discovering endpoints and -metadata of the database servers, generating IAM authentication tokens, and -assuming IAM roles. - -You can generate IAM permissions with the [`teleport db configure aws -print-iam`](cli.mdx) -command. 
For example, the following command would generate and print the IAM -policies: - -```code -$ teleport db configure aws print-iam --types rds,redshift --role teleport-db-service-role -``` - -To learn more about IAM permissions for a specific type of database, refer to -the related section below. - -## DocumentDB - -(!docs/pages/includes/database-access/reference/aws-iam/documentdb/access-policy.mdx dbUserRole="documentdb-user-role"!) - -### IAM role as a DocumentDB database user - -(!docs/pages/includes/database-access/reference/aws-iam/documentdb/role-as-user-policy.mdx!) - -(!docs/pages/includes/database-access/iam_role_trust_relationship.mdx role1="teleport-db-service-role" role2="documentdb-user-role"!) - -## DynamoDB - -(!docs/pages/includes/database-access/reference/aws-iam/dynamodb/access-policy.mdx dbUserRole="dynamodb-user-role"!) - -### IAM role as a DynamoDB database user - -(!docs/pages/includes/database-access/reference/aws-iam/dynamodb/role-as-user-policy.mdx!) - -(!docs/pages/includes/database-access/iam_role_trust_relationship.mdx role1="teleport-db-service-role" role2="dynamodb-user-role"!) - -## ElastiCache for Redis - -(!docs/pages/includes/database-access/reference/aws-iam/elasticache/access-policy.mdx!) - -### ElastiCache managed users - -(!docs/pages/includes/database-access/reference/aws-iam/redis/auto-password-access-policy.mdx dbType="ElastiCache" permissionType="elasticache" updateUserPermission="ModifyUser" listTagsPermission="ListTagsForResource"!) - -## Keyspaces - -(!docs/pages/includes/database-access/reference/aws-iam/keyspaces/access-policy.mdx dbUserRole="keyspaces-user-role"!) - -### IAM role as a Keyspaces database user - -(!docs/pages/includes/database-access/reference/aws-iam/keyspaces/role-as-user-policy.mdx!) - -(!docs/pages/includes/database-access/iam_role_trust_relationship.mdx role1="teleport-db-service-role" role2="keyspaces-user-role"!) 
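The `iam_role_trust_relationship` partials referenced on this page render the exact policy for each role pairing. As a general sketch of the shape of such a policy (the account ID below is a placeholder, and the role names mirror the `role1`/`role2` arguments above), a trust relationship that lets the Database Service role assume a database user role typically looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/teleport-db-service-role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

This document is attached to the database user role (here, `keyspaces-user-role`) so that the Database Service identity can obtain its credentials via `sts:AssumeRole`.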
- -## MemoryDB - -(!docs/pages/includes/database-access/reference/aws-iam/memorydb/access-policy.mdx!) - -### MemoryDB managed users - -(!docs/pages/includes/database-access/reference/aws-iam/redis/auto-password-access-policy.mdx dbType="MemoryDB" permissionType="memorydb" updateUserPermission="UpdateUser" listTagsPermission="ListTags"!) - -## OpenSearch - -(!docs/pages/includes/database-access/reference/aws-iam/opensearch/access-policy.mdx dbUserRole="opensearch-user-role"!) - -### IAM role as an OpenSearch database user - -(!docs/pages/includes/database-access/reference/aws-iam/opensearch/role-as-user-policy.mdx!) - -(!docs/pages/includes/database-access/iam_role_trust_relationship.mdx role1="teleport-db-service-role" role2="opensearch-user-role"!) - -## RDS - -(!docs/pages/includes/database-access/reference/aws-iam/rds/access-policy.mdx!) - -## RDS Proxy - -(!docs/pages/includes/database-access/reference/aws-iam/rds-proxy/access-policy.mdx!) - -## Redshift - -(!docs/pages/includes/database-access/reference/aws-iam/redshift/access-policy.mdx dbUserRole="redshift-user-role"!) - -### IAM role as a Redshift database user - -(!docs/pages/includes/database-access/reference/aws-iam/redshift/role-as-user-policy.mdx dbUserRole="redshift-user-role"!) - -(!docs/pages/includes/database-access/iam_role_trust_relationship.mdx role1="teleport-db-service-role" role2="redshift-user-role"!) - -## Redshift Serverless - -(!docs/pages/includes/database-access/reference/aws-iam/redshift-serverless/access-policy.mdx dbUserRole="redshift-serverless-user-role"!) - -### IAM role as a Redshift Serverless database user - -(!docs/pages/includes/database-access/reference/aws-iam/redshift-serverless/role-as-user-policy.mdx dbUserRole="redshift-serverless-user-role" !) - -(!docs/pages/includes/database-access/iam_role_trust_relationship.mdx role1="teleport-db-service-role" role2="redshift-serverless-user-role"!) 
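Rather than writing these documents by hand, the policies above can be generated with the `teleport db configure aws print-iam` command described in the [CLI reference](cli.mdx). For example (role names are placeholders; adjust `--types` to the databases you proxy):

```code
# Print an IAM policy for Redshift Serverless access that also lets the
# Database Service identity assume the database user role.
$ teleport db configure aws print-iam \
  --types redshift-serverless \
  --role teleport-db-service-role \
  --assumes-roles redshift-serverless-user-role
```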
diff --git a/docs/pages/reference/agent-services/database-access-reference/cli.mdx b/docs/pages/reference/agent-services/database-access-reference/cli.mdx deleted file mode 100644 index f35f791ad0c70..0000000000000 --- a/docs/pages/reference/agent-services/database-access-reference/cli.mdx +++ /dev/null @@ -1,383 +0,0 @@ ---- -title: Database Access CLI Reference -description: CLI reference for Teleport database access. ---- - -This reference shows you how to run common commands for managing Teleport -the Database Service, including: - -- The `teleport` daemon command, which is executed on the host where you - will run the Teleport Database Service. - -- The `tctl` administration tool, which you use to manage `db` resources that - represent databases known to your Teleport cluster. - - (!docs/pages/includes/tctl.mdx!) - -- The `tsh` client tool, which end-users run in order to access databases in - your cluster. - -## teleport db start - -Starts Teleport Database Service. - - - - -```code -$ teleport db start \ - --token=/path/to/token \ - --auth-server=proxy.example.com:443 \ - --name=example \ - --protocol=postgres \ - --uri=postgres.example.com:5432 -``` - - - - -```code -$ teleport db start \ - --token=/path/to/token \ - --auth-server=mytenant.teleport.sh:443 \ - --name=example \ - --protocol=postgres \ - --uri=postgres.mytenant.teleport.sh:5432 -``` - - - - - -| Flag | Description | -| - | - | -| `-d/--debug` | Enable verbose logging to stderr. | -| `--pid-file` | Full path to the PID file. By default no PID file will be created. | -| `--auth-server` | Address of the Teleport Proxy Service. | -| `--token` | Invitation token to register with the Auth Service. | -| `--ca-pin` | CA pin to validate the Auth Service. | -| `-c/--config` | Path to a configuration file (default `/etc/teleport.yaml`). | -| `--labels` | Comma-separated list of labels for this node, for example `env=dev,app=web`. | -| `--fips` | Start Teleport in FedRAMP/FIPS 140-2 mode. 
| -| `--name` | Name of the proxied database. | -| `--description` | Description of the proxied database. | -| `--protocol` | Proxied database protocol. Supported are: `postgres` and `mysql`. | -| `--uri` | Address the proxied database is reachable at. | -| `--ca-cert` | Database CA certificate path. | -| `--aws-region` | (Only for RDS, Aurora or Redshift) AWS region RDS, Aurora or Redshift database instance is running in. | -| `--aws-redshift-cluster-id` | (Only for Redshift) Redshift database cluster identifier. | -| `--gcp-project-id` | (Only for Cloud SQL) GCP Cloud SQL project identifier. | -| `--gcp-instance-id` | (Only for Cloud SQL) GCP Cloud SQL instance identifier.| - -## teleport db configure create - -Creates a sample Database Service configuration. - - - - -```code -$ teleport db configure create --rds-discovery=us-west-1 --rds-discovery=us-west-2 -$ teleport db configure create \ - --token=/tmp/token \ - --proxy=proxy.example.com:443 \ - --name=example \ - --protocol=postgres \ - --uri=postgres://postgres.example.com:5432 \ - --labels=env=prod -``` - - - - -```code -$ teleport db configure create --rds-discovery=us-west-1 --rds-discovery=us-west-2 -$ teleport db configure create \ - --token=/tmp/token \ - --proxy=mytenant.teleport.sh:443 \ - --name=example \ - --protocol=postgres \ - --uri=postgres://postgres.mytenant.teleport.sh:5432 \ - --labels=env=prod -``` - - - - - -| Flag | Description | -| - | - | -| `--proxy` | Teleport Proxy Service address to connect to. Default: `0.0.0.0:3080`. | -| `--token` | Invitation token to register with the Auth Service. Default: none. | -| `--rds-discovery` | List of AWS regions in which the agent will discover RDS/Aurora instances. | -| `--rdsproxy-discovery` | List of AWS regions in which the agent will discover RDS Proxies. | -| `--redshift-discovery` | List of AWS regions in which the agent will discover Redshift instances. 
| -| `--redshift-serverless-discovery` | List of AWS regions in which the agent will discover Redshift Serverless instances. | -| `--elasticache-discovery` | List of AWS regions in which the agent will discover ElastiCache Redis clusters. | -| `--aws-tags` | (Only for AWS discoveries) Comma-separated list of AWS resource tags to match, for example env=dev,dept=it | -| `--memorydb-discovery` | List of AWS regions in which the agent will discover MemoryDB clusters. | -| `--azure-mysql-discovery` | List of Azure regions in which the agent will discover MySQL servers. | -| `--azure-postgres-discovery` | List of Azure regions in which the agent will discover Postgres servers. | -| `--azure-redis-discovery` | List of Azure regions in which the agent will discover Azure Cache For Redis servers. | -| `--azure-subscription` | List of Azure subscription IDs for Azure discoveries. Default is "*". | -| `--azure-resource-group` | List of Azure resource groups for Azure discoveries. Default is "*". | -| `--azure-tags` | (Only for Azure discoveries) Comma-separated list of Azure resource tags to match, for example env=dev,dept=it | -| `--ca-pin` | CA pin to validate the Auth Service (can be repeated for multiple pins). | -| `--name` | Name of the proxied database. | -| `--protocol` | Proxied database protocol. Refer to the [configuration](./configuration.mdx#database-service-configuration) reference for supported values. | -| `--uri` | Address the proxied database is reachable at. | -| `--labels` | Comma-separated list of labels for the database, for example env=dev,dept=it | -| `-o/--output` | Write to stdout with `--output=stdout`, the default config file with `--output=file`, or a custom path with `--output=file:///path` | -| `--dynamic-resources-labels` | Comma-separated list(s) of labels to match dynamic resources, for example env=dev,dept=it. Required to enable dynamic resources matching. 
| - -## teleport db configure bootstrap - -Bootstrap the necessary configuration for the Database Service. It reads the -provided configuration to determine what will be bootstrapped. - -```code -$ teleport db configure bootstrap -c /etc/teleport.yaml --attach-to-user TeleportUser -$ teleport db configure bootstrap -c /etc/teleport.yaml --attach-to-role TeleportRole -$ teleport db configure bootstrap -c /etc/teleport.yaml --manual -``` - -| Flag | Description | -| - | - | -| `-c/--config` | Path to a configuration file. Default: `/etc/teleport.yaml`. | -| `--manual` | When executed in "manual" mode, this command will print the instructions to complete the configuration instead of applying them directly. | -| `--policy-name` | Name of the Teleport Database Service policy. Default: `DatabaseAccess` | -| `--confirm` | Do not prompt the user and auto-confirm all actions. | -| `--attach-to-role` | Role name to attach the policy to. Mutually exclusive with `--attach-to-user`. If none of the attach-to flags is provided, the command will try to attach the policy to the current user/role based on the credentials. | -| `--attach-to-user` | User name to attach the policy to. Mutually exclusive with `--attach-to-role`. If none of the attach-to flags is provided, the command will try to attach the policy to the current user/role based on the credentials. | - -## teleport db configure aws print-iam - -Print the IAM permissions required for the Database Service based on the -provided database types. - -```code -$ teleport db configure aws print-iam --types rds -$ teleport db configure aws print-iam --types rds,redshift --role my-db-service-role -$ teleport db configure aws print-iam --types redshift-serverless --assumes-roles my-access-role --policy -``` - -| Flag | Description | -| - | - | -| `-r/--types` | Comma-separated list of database types to include in the policy. 
Any of `rds`, `rdsproxy`, `redshift`, `redshift-serverless`, `elasticache`, `memorydb`, `keyspace`, `dynamodb`, `opensearch`. | -| `--role` | IAM role name to attach policy to. Mutually exclusive with --user. | -| `--user` | IAM user name to attach policy to. Mutually exclusive with --role. | -| `--[no-]policy` | Only print an IAM policy document. | -| `--[no-]boundary` | Only print an IAM boundary policy document. | -| `--assumes-roles` | Comma-separated list of additional IAM roles that the IAM identity should be able to assume. Each role can be either an IAM role ARN or the name of a role in the identity's account. | - -## tctl auth sign - -When invoked with a `--format=db` (or `--format=mongodb` for MongoDB) flag, -produces a CA certificate, a client certificate, and a private key file used for -configuring the Database Service with self-hosted database instances. - - - For database formats, `tctl` must be run on an Auth Service host or the remote - user must be able to impersonate the built-in `Db` role and user. See the - [impersonation guide](../../../admin-guides/access-controls/guides/impersonation.mdx) - for details on how to allow impersonation. - - -```code -$ tctl auth sign --format=db --host=db.example.com --out=db --ttl=2190h -$ tctl auth sign --format=db --host=host1,localhost,127.0.0.1 --out=db --ttl=2190h -``` - -In this example, `db.example.com` is the hostname where the Teleport Database -Service can reach the database server. The second example assumes a -database running on the same host as Teleport. - -| Flag | Description | -| - | - | -| `--format` | When given value `db`, produces secrets in a database-compatible format. Use `mongodb` when generating MongoDB secrets. | -| `--host` | Comma-separated SANs to encode in the certificate. Must contain the hostname Teleport will use to connect to the database. | -| `--out` | Name prefix for output files. | -| `--ttl` | Certificate validity period. | - -
-Setting up RBAC for signing database certificates - -The `tctl` user must have permissions to impersonate the Teleport Database -Service role, `Db`, in order to generate a signed database certificate. To add -these impersonation privileges to your Teleport user, run the following -commands. - -First, define a role that can impersonate the `Db` user. Add the following -content to a file called `db-impersonator.yaml`: - -```yaml -kind: role -version: v5 -metadata: - name: db-impersonator -spec: - options: - allow: - impersonate: - users: ['Db'] - roles: ['Db'] -``` - -Create the role: - -```code -$ tctl create -f db-impersonator.yaml -``` - -(!docs/pages/includes/create-role-using-web.mdx!) - -Open your Teleport user's dynamic configuration resource in your editor so you -can add the `db-impersonator` role: - -```code -$ TELEPORT_USER= -$ tctl edit user/${TELEPORT_USER?} -``` - -Add the `db-impersonator` role: - -```diff -spec: - - access - - auditor - - editor -+ - db-impersonator - status: - is_locked: false -``` - -Update your user by saving and closing the file in your editor. - -Log out of your Teleport cluster and log in again. You will now be able to run -`tctl auth sign` for database-specific certificate formats. - -
- -(!docs/pages/includes/database-access/ttl-note.mdx!) - -## tctl db ls - -Administrative command to list all databases registered with the cluster. - -```code -$ tctl db ls -$ tctl db ls --format=yaml -``` - -| Flag | Description | -| - | - | -| `--format` | Output format, one of `text`, `yaml` or `json`. Defaults to `text`. | - -## tctl get db - -Prints the list of all configured database resources. - -| Flag | Description | -| - | - | -| `--format` | Output format, one of `text`, `yaml` or `json`. Defaults to `yaml`. | - -## tctl get db/database-resource-name - -Prints details about `database-resource-name` database resource. - -| Flag | Description | -| - | - | -| `--format` | Output format, one of `text`, `yaml` or `json`. Defaults to `yaml`. | - -## tctl rm db/database-resource-name - -Removes database resource called `database-resource-name`. - -## tsh db ls - -Lists available databases and their connection information. - -```code -$ tsh db ls -``` - -Displays only the databases a user has access to (see [RBAC](../../../enroll-resources/database-access/rbac.mdx)). - -## tsh db login - -Retrieves database credentials. - -```code -$ tsh db login example -$ tsh db login --db-user=postgres --db-name=postgres example -``` - -| Flag | Description | -| - | - | -| `--db-user` | The database user to log in as. | -| `--db-name` | The database name to log in to. | -| `--db-roles` | Comma-separated list of database roles to use for auto-provisioned user. If not provided, all database roles will be assigned. | - -(!docs/pages/includes/db-user-name-flags.mdx!) - -## tsh db logout - -Removes database credentials. - -```code -$ tsh db logout example -$ tsh db logout -``` - -## tsh db connect - -Connect to a database using its CLI client. - -```code -# Short syntax when only logged into a single database. -$ tsh db connect -# Specify database service to connect to explicitly. -$ tsh db connect example -# Provide database user and name to connect to. 
-$ tsh db connect --db-user=alice --db-name=db example -# Select a subset of allowed database roles. -$ tsh db connect --db-user=alice --db-name=db --db-roles reader example -``` - - - The respective database CLI client (`psql`, `mysql`, `mongo`, or `mongosh`) must be - available in your `PATH`. - - -| Flag | Description | -| - | - | -| `--db-user` | The database user to log in as. | -| `--db-name` | The database name to log in to. | -| `--db-roles` | Comma-separated list of database roles to use for auto-provisioned user. If not provided, all database roles will be assigned. | - -(!docs/pages/includes/db-user-name-flags.mdx!) - -## tsh db env - -Outputs environment variables for a particular database. - -```code -$ tsh db env -$ tsh db env example -$ eval $(tsh db env) -``` - -## tsh db config - -Prints database connection information. Useful when configuring GUI clients. - -```code -$ tsh db config -$ tsh db config example -$ tsh db config --format=cmd example -``` - -| Flag | Description | -| - | - | -| `--format` | Output format: `text` (default), or `cmd` to print the native database client connect command. | - diff --git a/docs/pages/reference/agent-services/database-access-reference/configuration.mdx b/docs/pages/reference/agent-services/database-access-reference/configuration.mdx deleted file mode 100644 index efdee868d79ca..0000000000000 --- a/docs/pages/reference/agent-services/database-access-reference/configuration.mdx +++ /dev/null @@ -1,247 +0,0 @@ ---- -title: Database Access Configuration Reference -description: Configuration reference for Teleport database access. ---- - -This guide explains configuration options for the Teleport Database Service, -which proxies user traffic between Teleport users and protected databases. - -## Database service configuration - -The following snippet shows the full YAML configuration of a Database Service -as it appears in the `teleport.yaml` configuration file: - -```yaml -(!docs/pages/includes/config-reference/database-config.yaml!) 
-``` - -## Proxy configuration - - - - -The following Proxy service configuration is relevant for database access: - - -The `--insecure-no-tls` `tsh` flag is only supported for MySQL/MariaDB and PostgreSQL -connections using a unique port, specified with `mysql_public_addr` or `postgres_public_addr`. - - -```yaml -proxy_service: - enabled: true - # Database proxy is listening on the regular web proxy port. - web_listen_addr: "0.0.0.0:443" - # MySQL proxy is listening on a separate port and needs to be enabled - # on the proxy server. - mysql_listen_addr: "0.0.0.0:3036" - # MySQL Server version allows you to override the default Teleport Proxy Service MySQL version (8.0.0-Teleport) - # Note that if the MySQL client connection is using TLS Routing the dynamic MySQL Server Version takes - # precedence over the mysql_server_version proxy setting. - # mysql_server_version: "8.0.4" - # Postgres proxy listening address. If provided, proxy will use a separate listener - # instead of multiplexing Postgres protocol on web_listen_addr. - # postgres_listen_addr: "0.0.0.0:5432" - # Mongo proxy listening address. If provided, proxy will use a separate listener - # instead of multiplexing Mongo protocol on web_listen_addr. - # mongo_listen_addr: "0.0.0.0:27017" - # By default database clients will be connecting to the Proxy over this - # hostname. To override public address for specific database protocols - # use postgres_public_addr and mysql_public_addr. - public_addr: "teleport.example.com:443" - # Address advertised to MySQL clients. If not set, public_addr is used. - mysql_public_addr: "mysql.teleport.example.com:3306" - # Address advertised to PostgreSQL clients. If not set, public_addr is used. - postgres_public_addr: "postgres.teleport.example.com:443" - # Address advertised to Mongo clients. If not set, public_addr is used. 
- mongo_public_addr: "mongo.teleport.example.com:443" -``` - - - - -Teleport Enterprise Cloud automatically configures the Teleport Proxy Service -with the following settings that are relevant to database access. This reference -configuration uses `mytenant.teleport.sh` in place of your Teleport Enterprise -Cloud tenant address: - -```yaml -proxy_service: - enabled: true - # Database proxy is listening on the regular web proxy port. - web_listen_addr: "0.0.0.0:3080" - # MySQL proxy is listening on a separate port. - mysql_listen_addr: "0.0.0.0:3036" - # Database clients will connect to the Proxy Service over this hostname. - public_addr: "mytenant.teleport.sh:443" - # Address advertised to MySQL clients. - mysql_public_addr: "mytenant.teleport.sh:3036" - # Address advertised to PostgreSQL clients. - postgres_public_addr: "mytenant.teleport.sh:443" - # Address advertised to Mongo clients. If not set, public_addr is used. - mongo_public_addr: "mytenant.teleport.sh:443" -``` - - - - - -## Database resource - -Full YAML spec of database resources managed by `tctl` resource commands: - -```yaml -kind: db -version: v3 -metadata: - # Database resource name. - name: example - - # Database resource description. - description: "Example database" - - # Database resource static labels. - labels: - env: example - -spec: - # Database protocol. Valid options are: - # "cassandra" - # "clickhouse" - # "clickhouse-http" - # "cockroachdb" - # "dynamodb" - # "elasticsearch" - # "mongodb" - # "mysql" - # "oracle" - # "postgres" - # "redis" - # "snowflake" - # "spanner" - # "sqlserver" - protocol: "postgres" - - # Database connection endpoint. - uri: "localhost:5432" - - # Optional TLS configuration. - tls: - # TLS verification mode. Valid options are: - # 'verify-full' - performs full certificate validation (default). - # 'verify-ca' - the same as `verify-full`, but skips the server name validation. - # 'insecure' - accepts any certificate provided by the database (not recommended). 
- mode: verify-full - # Optional database DNS server name. It allows you to override the DNS name on - # a client certificate when connecting to a database. - # Use only with 'verify-full' mode. - server_name: db.example.com - # Optional CA for validating the database certificate. - ca_cert: | - -----BEGIN CERTIFICATE----- - ... - -----END CERTIFICATE----- - # Optional configuration that allows Teleport to trust certificate - # authorities available on the host system. If not set (by default), - # Teleport only trusts self-signed databases with TLS certificates signed - # by Teleport's Database Server CA or the ca_cert specified in this TLS - # setting. For cloud-hosted databases, Teleport downloads the corresponding - # required CAs for validation. - trust_system_cert_pool: false - - # Database admin user for automatic user provisioning. - admin_user: - # Database admin user name. - name: "teleport-admin" - - # MySQL only options. - mysql: - # The MySQL server version reported by the Teleport Proxy Service. - # Teleport uses this string when reporting the server version to a - # connecting client. - # - # When this option is not set, the Database Service will try to connect to - # a MySQL instance on startup and fetch the server version. Otherwise, - # it will use the provided value without connecting to a database. - # - # In both cases, the MySQL server version reported to a client will be - # updated on the first successful connection made by a user. 
- server_version: 8.0.28 - - # Optional AWS configuration for RDS/Aurora/Redshift. Can be auto-detected from the endpoint. - aws: - # Region the database is deployed in. - region: "us-east-1" - # Optional AWS role that the Database Service will assume to access - # this database. - assume_role_arn: "arn:aws:iam::123456789012:role/example-role-name" - # Optional AWS external ID that the Database Service will use to assume - # a role in an external AWS account. - external_id: "example-external-id" - # Redshift specific configuration. - redshift: - # Redshift cluster identifier. - cluster_id: "redshift-cluster-1" - - # GCP configuration (required for Cloud SQL and Spanner databases). - gcp: - # GCP project ID. - project_id: "xxx-1234" - # Cloud SQL instance ID. - instance_id: "example" - - # Settings specific to Active Directory authentication e.g. for SQL Server. - ad: - # Path to Kerberos keytab file. - keytab_file: /path/to/keytab - # Active Directory domain name. - domain: EXAMPLE.COM - # Service Principal Name to obtain Kerberos tickets for. - spn: MSSQLSvc/ec2amaz-4kn05du.dbadir.teleportdemo.net:1433 - # Optional path to Kerberos configuration file. Defaults to /etc/krb5.conf. - krb5_file: /etc/krb5.conf - - # Optional dynamic labels. - dynamic_labels: - - name: "hostname" - command: ["hostname"] - period: 1m0s -``` - -You can create a new `db` resource by running the following commands, which -assume that you have created a YAML file called `db.yaml` with your configuration: - - - - -```code -# Log in to your cluster with tsh so you can use tctl from your local machine. -# You can also run tctl on your Auth Service host without running "tsh login" -# first. -$ tsh login --proxy=teleport.example.com --user=myuser -# Create the resource -$ tctl create -f db.yaml -``` - - - - -```code -# Log in to your Teleport cluster so you can use tctl from your local machine. 
-$ tsh login --proxy=mytenant.teleport.sh --user=myuser -# Create the resource -$ tctl create -f db.yaml -``` - - - - - diff --git a/docs/pages/reference/agent-services/database-access-reference/database-access-reference.mdx b/docs/pages/reference/agent-services/database-access-reference/database-access-reference.mdx deleted file mode 100644 index 565379ff08f02..0000000000000 --- a/docs/pages/reference/agent-services/database-access-reference/database-access-reference.mdx +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Database Access Reference -description: Configuration and CLI reference for the Teleport Database Service. ---- - -- [Configuration](configuration.mdx) -- [CLI](cli.mdx) -- [Audit Events](audit.mdx) -- [AWS IAM](aws.mdx) -- [Database Labels](labels.mdx) diff --git a/docs/pages/reference/agent-services/database-access-reference/labels.mdx b/docs/pages/reference/agent-services/database-access-reference/labels.mdx deleted file mode 100644 index f33aab6c1449e..0000000000000 --- a/docs/pages/reference/agent-services/database-access-reference/labels.mdx +++ /dev/null @@ -1,79 +0,0 @@ ---- -title: Database Labels Reference -description: Database labels reference for Teleport database access. ---- - -Teleport assigns system-defined labels to protected databases. This guide -describes the system-defined labels and how Teleport uses them. - -## Origin - -All registered databases have a predefined `teleport.dev/origin` label with one -of the following values: - -| Label Value | Description | -| - | - | -| `cloud` | database resources created by auto-discovery. | -| `config` | database resources manually defined in the `database_service.databases` section of `teleport.yaml`. | -| `dynamic` | database resources created through [dynamic registration](../../../enroll-resources/database-access/guides/dynamic-registration.mdx) like the `tctl create` command. 
| - -## Auto-discovery - -The labels of auto-discovered databases primarily come from the tags that are -assigned to the original cloud resources, such as the resource tags of an -Amazon RDS instance. - -The following tags will override Teleport's default behavior if assigned to the -original cloud resources: - -| Tag name | Description | -| - | - | -| `TeleportDatabaseName` | Overrides the name of the discovered database. | -| `teleport.dev/database_name` | (AWS only, legacy) Overrides the name of the discovered database. `TeleportDatabaseName` is preferred. | -| `teleport.dev/db-admin` | (AWS only) Specifies the name of the admin user for Automatic User Provisioning. | -| `teleport.dev/db-admin-default-database` | (AWS only) Overrides the default database the admin user logs into for Automatic User Provisioning. | - -Additionally, Teleport will generate certain labels derived from the cloud -resource attributes: - -| Label name | Description | -| - | - | -| `account-id` | ID of the AWS account the resource resides in. | -| `endpoint-type` | Type of the endpoint. See the section below for more details. | -| `engine` | Amazon RDS: engine type of the RDS instance or Aurora cluster.
Amazon RDS Proxy: engine family of the proxy.
Azure-hosted databases: resource type of the resource ID. | -| `engine-version` | Database engine version, if available. | -| `namespace` | Amazon Redshift Serverless namespace name. | -| `region` | AWS region or Azure location. | -| `replication-role` | The replication role of an Azure DB Flexible server. | -| `source-server` | The source server of an Azure DB Flexible server replica. | -| `vpc-id` | ID of the Amazon VPC the resource resides in, if available. | -| `workgroup` | Amazon Redshift Serverless workgroup name. | -| `teleport.dev/discovery-type` | Specifies the type of resource matched by the Teleport Discovery Service, e.g. "rds", "redshift", etc. | - -### `endpoint-type` - -The following values are used to indicate the type of the database endpoint: - -| Database Type | Values | -| - | - | -| Amazon RDS instance | `instance` | -| Amazon RDS Aurora cluster | one of `primary`, `reader`, `custom` | -| Amazon RDS Proxy | one of `READ_WRITE`, `READ_ONLY` (custom endpoints only) | -| Amazon Redshift Serverless | one of `workgroup`, `vpc-endpoint` | -| Amazon ElastiCache | one of `configuration`, `primary`, `reader`, `node` | -| Amazon MemoryDB | one of `cluster`, `node` | -| Amazon OpenSearch | one of `default`, `custom`, `vpc` | -| Azure Redis Enterprise | one of `EnterpriseCluster`, `OSSCluster` | - -## Manual and dynamic registration - -Static labels and dynamic labels can be specified in `labels` and -`dynamic_labels` fields respectively in database definition. See -[Configuration](./configuration.mdx) for reference. - -## Database Service on Amazon EC2 - -All registered databases can inherit the labels converted from the tags of the -EC2 instance running the Teleport Database Service. Labels created this way -will have the `aws/` prefix. See [Sync EC2 -Tags](../../../admin-guides/management/guides/ec2-tags.mdx) for more details. 
diff --git a/docs/pages/reference/agent-services/desktop-access-reference/audit.mdx b/docs/pages/reference/agent-services/desktop-access-reference/audit.mdx deleted file mode 100644 index 8da5f7f531d63..0000000000000 --- a/docs/pages/reference/agent-services/desktop-access-reference/audit.mdx +++ /dev/null @@ -1,250 +0,0 @@ ---- -title: Desktop Access Audit Events Reference -description: Audit events reference for Teleport desktop access. ---- - -This guide lists the structures and field names of audit events related to -connecting to remote desktops with Teleport. Use this guide to understand -desktop-related audit events and configure your log management solutions if you -are [exporting audit -events](../../../admin-guides/management/export-audit-events/export-audit-events.mdx). - -## windows.desktop.session.start (TDP00I/W) - -Emitted when a client successfully connects to a desktop or when a connection -attempt fails because access was denied. - -Successful connection event: - -```json -{ - "addr.remote": "192.168.1.206:3389", - "cluster_name": "root", - "code": "TDP00I", - "desktop_addr": "192.168.1.206:3389", - "desktop_labels": { - "teleport.dev/computer_name": "WIN-I44F9TN11M3", - "teleport.dev/dns_host_name": "WIN-I44F9TN11M3.teleport.example.com", - "teleport.dev/is_domain_controller": "true", - "teleport.dev/origin": "dynamic", - "teleport.dev/os": "Windows Server 2012 R2 Standard Evaluation", - "teleport.dev/os_version": "6.3 (9600)", - "teleport.dev/windows_domain": "teleport.example.com" - }, - "ei": 0, - "event": "windows.desktop.session.start", - "login": "administrator", - "proto": "tdp", - "sid": "4a0ed655-1e0b-412b-b14a-348e840e7fa2", - "success": true, - "time": "2022-02-16T16:43:30.459Z", - "uid": "1605346b-d90b-4df7-8148-67a3e2d85673", - "user": "alice", - "windows_desktop_service": "316a3ffa-23e6-4d85-92a1-5e44754f8189", - "windows_domain": "teleport.example.com", - "windows_user": "administrator" -} -``` - -Access denied event: - -```json -{ 
- "addr.remote": "192.168.1.206:3389", - "cluster_name": "root", - "code": "TDP00W", - "desktop_addr": "192.168.1.206:3389", - "desktop_labels": { - "teleport.dev/computer_name": "WIN-I44F9TN11M3", - "teleport.dev/dns_host_name": "WIN-I44F9TN11M3.teleport.example.com", - "teleport.dev/is_domain_controller": "true", - "teleport.dev/origin": "dynamic", - "teleport.dev/os": "Windows Server 2012 R2 Standard Evaluation", - "teleport.dev/os_version": "6.3 (9600)", - "teleport.dev/windows_domain": "teleport.example.com" - }, - "ei": 0, - "error": "access to desktop denied", // Connection error - "event": "windows.desktop.session.start", - "message": "access to desktop denied", // Detailed error message. - "login": "administrator", - "proto": "tdp", - "sid": "4a0ed655-1e0b-412b-b14a-348e840e7fa2", - "success": false, // Indicates unsuccessful connection - "time": "2022-02-16T16:43:30.459Z", - "uid": "1605346b-d90b-4df7-8148-67a3e2d85673", - "user": "alice", - "windows_desktop_service": "316a3ffa-23e6-4d85-92a1-5e44754f8189", - "windows_domain": "teleport.example.com", - "windows_user": "administrator" -} -``` - -## windows.desktop.session.end (TDP01I) - -Emitted when a client disconnects from the desktop. 
- -```json -{ - "cluster_name": "root", - "code": "TDP01I", - "desktop_addr": "192.168.1.206:3389", - "desktop_labels": { - "teleport.dev/computer_name": "WIN-I44F9TN11M3", - "teleport.dev/dns_host_name": "WIN-I44F9TN11M3.teleport.example.com", - "teleport.dev/is_domain_controller": "true", - "teleport.dev/origin": "dynamic", - "teleport.dev/os": "Windows Server 2012 R2 Standard Evaluation", - "teleport.dev/os_version": "6.3 (9600)", - "teleport.dev/windows_domain": "teleport.example.com" - }, - "desktop_name": "WIN-I44F9TN11M3-teleport-example-com", - "ei": 0, - "event": "windows.desktop.session.end", - "login": "administrator", - "participants": ["alice"], - "recorded": true, - "session_start": "2022-02-16T16:43:30.459Z", - "session_stop": "2022-02-16T16:46:50.894Z", - "sid": "4a0ed655-1e0b-412b-b14a-348e840e7fa2", - "time": "2022-02-16T16:46:50.895Z", - "uid": "c7956a81-597f-4452-90d7-800506f7a05b", - "user": "alice", - "windows_desktop_service": "316a3ffa-23e6-4d85-92a1-5e44754f8189", - "windows_domain": "teleport.example.com", - "windows_user": "administrator" -} -``` - -## desktop.clipboard.send (TDP02I) - -Emitted when clipboard data is sent from a user's workstation to Teleport. In -order to avoid capturing sensitive data, the event only records the number of -bytes that were sent. - -```json -{ - "addr.remote": "192.168.1.206:3389", - "cluster_name": "root", - "code": "TDP02I", - "desktop_addr": "192.168.1.206:3389", - "ei": 0, - "event": "desktop.clipboard.send", - "length": 4, // number of bytes sent - "proto": "tdp", - "sid": "4a0ed655-1e0b-412b-b14a-348e840e7fa2", - "time": "2022-02-16T16:43:40.010217Z", - "uid": "e45d9890-38a9-4580-8572-35fa0192b123", - "user": "alice" -} -``` - -## desktop.clipboard.receive (TDP03I) - -Emitted when Teleport receives clipboard data from a remote desktop. In order to -avoid capturing sensitive data, the event only records the number of bytes that -were received. 
- -```json -{ - "addr.remote": "192.168.1.206:3389", - "cluster_name": "root", - "code": "TDP03I", - "desktop_addr": "192.168.1.206:3389", - "ei": 0, - "event": "desktop.clipboard.receive", - "length": 4, // number of bytes received - "proto": "tdp", - "sid": "4a0ed655-1e0b-412b-b14a-348e840e7fa2", - "time": "2022-02-16T16:43:40.010217Z", - "uid": "e45d9890-38a9-4580-8572-35fa0192b123", - "user": "alice" -} -``` - -## desktop.directory.share (TDP04I/W) - -Emitted when Teleport starts sharing a directory on a local machine to the remote desktop. - -```json -{ - "addr.remote": "192.168.1.206:3389", - "cluster_name": "root", - "code": "TDP04I", // TDP04W if the operation failed - "desktop_addr": "192.168.1.206:3389", - "directory_id": 2, - "directory_name": "local-files", - "ei": 0, - "event": "desktop.directory.share", - "proto": "tdp", - "sid": "4a0ed655-1e0b-412b-b14a-348e840e7fa2", - "success": true, // false if the operation failed - "time": "2022-10-21T22:36:27.314409Z", - "uid": "e45d9890-38a9-4580-8572-35fa0192b123", - "user": "alice" -} -``` - -## desktop.directory.read (TDP05I/W) - -This event is part of the directory sharing feature, and is emitted when -Teleport reads data from a file on the user's local machine and sends it -to the remote Windows desktop. - -In order to avoid capturing sensitive data, the event only records the offset -from the start of the file from which the read began and the number of bytes -that were sent. 
- -```json -{ - "addr.remote": "192.168.1.206:3389", - "cluster_name": "root", - "code": "TDP05I", // TDP05W if the operation failed - "desktop_addr": "192.168.1.206:3389", - "directory_id": 2, - "directory_name": "local-files", - "ei": 0, - "event": "desktop.directory.read", - "file_path": "powershell-scripts/a-script.ps1", // relative path from the root of the shared directory (local-files in this case) - "length": 734, // the number of bytes read - "offset": 0, // the offset from the start of the file from which the read began - "proto": "tdp", - "sid": "4a0ed655-1e0b-412b-b14a-348e840e7fa2", - "success": true, // false if the operation failed - "time": "2022-10-21T22:36:27.314409Z", - "uid": "e45d9890-38a9-4580-8572-35fa0192b123", - "user": "alice" -} -``` - -## desktop.directory.write (TDP06I/W) - -This event is part of the directory sharing feature, and is emitted when -Teleport writes data from the remote desktop to a file on the user's local -machine. - -In order to avoid capturing sensitive data, the event only records the offset -from the start of the file from which the write began and the number of bytes -that were written. 
- -```json -{ - "addr.remote": "192.168.1.206:3389", - "cluster_name": "root", - "code": "TDP06I", // TDP06W if the operation failed - "desktop_addr": "192.168.1.206:3389", - "directory_id": 2, - "directory_name": "local-files", - "ei": 0, - "event": "desktop.directory.write", - "file_path": "powershell-scripts/a-script.ps1", // relative path from the root of the shared directory (local-files in this case) - "length": 734, // the number of bytes written - "offset": 0, // the offset from the start of the file from which the write began - "proto": "tdp", - "sid": "4a0ed655-1e0b-412b-b14a-348e840e7fa2", - "success": true, // false if the operation failed - "time": "2022-10-21T22:36:27.314409Z", - "uid": "e45d9890-38a9-4580-8572-35fa0192b123", - "user": "alice" -} -``` diff --git a/docs/pages/reference/agent-services/desktop-access-reference/cli.mdx b/docs/pages/reference/agent-services/desktop-access-reference/cli.mdx deleted file mode 100644 index f6398dee6462c..0000000000000 --- a/docs/pages/reference/agent-services/desktop-access-reference/cli.mdx +++ /dev/null @@ -1,27 +0,0 @@ ---- -title: Desktop Access CLI Reference -description: CLI reference for Teleport desktop access. ---- - -The following `tctl` commands are used to manage the Teleport Windows Desktop -Service. - -- (!docs/pages/includes/tctl.mdx!) 
- -Generate a join token for a Windows Desktop Service: - -```sh -$ tctl tokens add --type=WindowsDesktop -``` - -List registered Windows Desktop Services: - -```sh -$ tctl get windows_desktop_service -``` - -List registered Windows desktops: - -```sh -$ tctl get windows_desktop -``` diff --git a/docs/pages/reference/agent-services/desktop-access-reference/configuration.mdx b/docs/pages/reference/agent-services/desktop-access-reference/configuration.mdx deleted file mode 100644 index 1f487b4e828ca..0000000000000 --- a/docs/pages/reference/agent-services/desktop-access-reference/configuration.mdx +++ /dev/null @@ -1,68 +0,0 @@ ---- -title: Desktop Access Configuration Reference -description: Configuration reference for Teleport desktop access. ---- - -`teleport.yaml` fields related to desktop access: - -```yaml -# Main service responsible for desktop access. -# -# You can have multiple Desktop Service instances in your cluster (but not in the -# same teleport.yaml), connected to the same or different Active Directory -# domains. -(!docs/pages/includes/config-reference/desktop-config.yaml!) -``` - -## Deployment - -The Windows Desktop Service can be deployed in two modes. - -### Direct mode - -In *direct* mode, the Windows Desktop Service registers directly with the Teleport -Auth Service, and listens for desktop connections from the Teleport Proxy. To -enable direct mode, set `windows_desktop_service.listen_addr` in -`teleport.yaml`, and ensure that `teleport.auth_server` points directly at the -Auth Service. - -Direct mode requires network connectivity both: - -- from the Teleport Proxy to the Windows Desktop Service. -- from the Windows Desktop Service to the Auth Service. - -For these reasons, direct mode is not available in Teleport Enterprise Cloud; it is -supported only in self-hosted Teleport clusters. - -### IoT mode (reverse tunnel) - -In *IoT mode*, the Windows Desktop Service only needs to be able to make an outbound -connection to a Teleport Proxy. 
The Windows Desktop Service establishes a -reverse tunnel to the proxy, and both registration with the Auth Service and -desktop sessions are performed over this tunnel. To enable this mode, ensure -that `windows_desktop_service.listen_addr` is *unset*, and point -`teleport.proxy_server` at a Teleport Proxy. - -## Screen size - -By default, Teleport will set the screen size of the remote desktop session -based on the size of your browser window. In some cases, you may wish to -configure specific hosts to use a specific screen size. To do this, set the -`screen_size` attribute on the `windows_desktop` resource: - -```yaml -kind: windows_desktop -metadata: - name: fixed-screen-size -spec: - host_id: 307e091b-7f6b-42e0-b78d-3362ad10b55d - addr: 192.168.1.153:3389 - non_ad: true - - # Optional - ensures that all sessions use the same screen size, - # no matter what the size of the browser window is. - # Leave blank to use the size of the browser window. - screen_size: - width: 1024 - height: 768 -``` diff --git a/docs/pages/reference/agent-services/desktop-access-reference/desktop-access-reference.mdx b/docs/pages/reference/agent-services/desktop-access-reference/desktop-access-reference.mdx deleted file mode 100644 index 6301559fd876c..0000000000000 --- a/docs/pages/reference/agent-services/desktop-access-reference/desktop-access-reference.mdx +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Desktop Access Reference -description: Comprehensive guides to configuring and auditing desktop access. -layout: tocless-doc ---- - -- [Configuration](configuration.mdx): Configure Teleport desktop access. -- [Audit](audit.mdx): Desktop access audit events. -- [Clipboard](clipboard.mdx): Share your clipboard with a remote desktop. -- [Session Recording](sessions.mdx): Desktop session recording and playback -- [CLI](cli.mdx): Relevant `tctl` commands -- [Scaling](../../../admin-guides/management/operations/scaling.mdx): Tips on scaling to many concurrent users. 
-- [User creation](user-creation.mdx): Automatic user creation \ No newline at end of file diff --git a/docs/pages/reference/agent-services/okta.mdx b/docs/pages/reference/agent-services/okta.mdx deleted file mode 100644 index 69ebd3bdf74e0..0000000000000 --- a/docs/pages/reference/agent-services/okta.mdx +++ /dev/null @@ -1,109 +0,0 @@ ---- -title: Okta Service Reference Documentation -description: Configuration and CLI reference documentation for Teleport Okta service. ---- - -This guide describes interfaces and options for configuring the Teleport Okta -Service, including Okta import rules, Okta assignments, and `tctl` commands. It -also includes troubleshooting instructions. - -## Okta Import Rule resources - -Full YAML spec of Okta import rule resources managed by `tctl` resource commands: - -(!docs/pages/includes/okta-import-rule.mdx!) - -You can create a new `okta_import_rule` resource by running the following commands, which -assume that you have created a YAML file called `okta-import-rule.yaml` with your configuration: - -```code -# Log in to your cluster with tsh so you can use tctl from your local machine. -# You can also run tctl on your Auth Service host without running "tsh login" -# first. -$ tsh login --proxy=teleport.example.com --user=myuser -# Create the resource -$ tctl create -f okta-import-rule.yaml -``` - -## Okta Assignment resources - -These objects are internally facing and are not intended to be modified by users. However, -you can query them for informational or debugging purposes. - -Full YAML spec of Okta assignment resources queried by `tctl` resource commands: - -```yaml -kind: okta_assignment -version: v1 -metadata: - name: test-assignment -spec: - # The user that the Okta assignment is granting access for. - user: test-user@test.user - # The list of targets to grant access to. - targets: - # An application target. - - type: application - id: "123456" - # A group target. 
- - type: group - id: "234567" - # The current status of the Okta assignment. - status: pending -``` - -## CLI - -This section shows CLI commands relevant for managing Okta Service behaviors. - -### tctl get okta_import_rules - -Lists available Okta import rules. - -```code -$ tctl get okta_import_rules -``` - -### tctl get okta_import_rules/NAME - -Gets an individual Okta import rule. - -```code -$ tctl get okta_import_rules/my-import-rule -``` - -### tctl rm okta_import_rules/NAME - -Removes an individual Okta import rule. - -```code -$ tctl rm okta_import_rules/my-import-rule -``` - -### tctl get okta_assignments - -Lists available Okta assignments. - -```code -$ tctl get okta_assignments -``` - -### tctl get okta_assignments/NAME - -Gets an individual Okta assignment. - -```code -$ tctl get okta_assignments/my-assignment -``` - -## Troubleshooting - -### No Okta groups or applications seen in the Teleport UI - -If the Teleport applications UI isn't displaying any Okta applications, ensure -that the Okta API token and endpoint are correct in the Okta service. - -If they are, double check the user permissions and ensure that the user has -appropriate resource and label level access to the groups and applications. You -may need to tweak the `app_labels` and `group_labels` sections of a role in order -to see these resources. diff --git a/docs/pages/reference/architecture/agent-update-management.mdx b/docs/pages/reference/architecture/agent-update-management.mdx index d94ca01ff0db9..052fd1999da9f 100644 --- a/docs/pages/reference/architecture/agent-update-management.mdx +++ b/docs/pages/reference/architecture/agent-update-management.mdx @@ -1,6 +1,9 @@ --- title: Managed Updates description: This chapter explains how Teleport Agent Managed Updates work. +tags: + - conceptual + - platform-wide --- While many Teleport resources [support agentless @@ -112,7 +115,9 @@ updated properly. 
## Next steps -[Configure automatic updates v2](../../upgrading/agent-managed-updates.mdx). +[Configure Managed Updates v2](../../upgrading/agent-managed-updates/agent-managed-updates.mdx) in your cluster. + +Consult the [Managed Updates v2 Resource Reference](../deployment/managed-updates-v2.mdx). After that, you can enroll agents in automatic updates as part of the [upgrading procedure](../../upgrading/upgrading.mdx). diff --git a/docs/pages/reference/architecture/agents.mdx b/docs/pages/reference/architecture/agents.mdx index 2577824089df7..53ce0dc342eb1 100644 --- a/docs/pages/reference/architecture/agents.mdx +++ b/docs/pages/reference/architecture/agents.mdx @@ -1,7 +1,11 @@ --- title: Teleport Agent Architecture +sidebar_label: Agents description: Describes the architecture that enables Teleport to securely proxy client traffic to infrastructure resources. -tocDepth: 3 +tags: + - conceptual + - zero-trust + - infrastructure-identity --- **Teleport Agents** route traffic to and from resources in your infrastructure. @@ -173,7 +177,7 @@ To learn more about the mechanism an agent uses to authenticate to an infrastructure resource, read the guide to enrolling that resource in your Teleport cluster: -- [Applications](../../enroll-resources/application-access/guides/guides.mdx) +- [Applications](../../enroll-resources/application-access/application-access.mdx) - [Cloud provider APIs](../../enroll-resources/application-access/cloud-apis/cloud-apis.mdx) - [Databases](../../enroll-resources/database-access/guides/guides.mdx) - [Kubernetes clusters](../../enroll-resources/kubernetes-access/register-clusters/register-clusters.mdx) @@ -216,7 +220,7 @@ In most cases, users will receive certificates from the Auth Service via a connection to the Teleport Proxy Service. The Auth Service and Proxy Service connect to each other using mutual TLS.
-[Teleport Connect](../../connect-your-client/teleport-connect.mdx) runs `tshd`, a +[Teleport Connect](../../connect-your-client/teleport-clients/teleport-connect.mdx) runs `tshd`, a `tsh` daemon that manages user certificates and kubeconfigs for the graphical client. @@ -225,9 +229,9 @@ client. Teleport makes several client tools available for accessing infrastructure resources through agents: -- [The `tsh` CLI](../../connect-your-client/tsh.mdx) -- [Teleport Connect](../../connect-your-client/teleport-connect.mdx) -- [Teleport Web UI](../../connect-your-client/web-ui.mdx) +- [The `tsh` CLI](../../connect-your-client/teleport-clients/tsh.mdx) +- [Teleport Connect](../../connect-your-client/teleport-clients/teleport-connect.mdx) +- [Teleport Web UI](../../connect-your-client/teleport-clients/web-ui.mdx) After retrieving [client credentials](#credentials-for-teleport-clients), these tools are authenticated to the Teleport Proxy Service and can send traffic @@ -250,12 +254,12 @@ CLI: |`tsh` command|Upstream infrastructure resource| |---|---| -|`tsh proxy app`|HTTP and [TCP](../../enroll-resources/application-access/guides/tcp.mdx) applications| +|`tsh proxy app`|HTTP and [TCP](../../enroll-resources/application-access/protect-apps/tcp.mdx) applications| |`tsh proxy aws`|[AWS SDK applications](../../enroll-resources/application-access/cloud-apis/aws-console.mdx)| |`tsh proxy azure`|[Azure SDK applications](../../enroll-resources/application-access/cloud-apis/azure.mdx)| |`tsh proxy gcloud`|[Google Cloud SDK applications](../../enroll-resources/application-access/cloud-apis/google-cloud.mdx)| |`tsh proxy ssh`|[OpenSSH client traffic](../../enroll-resources/server-access/openssh/openssh-agentless.mdx)| -|`tsh proxy db`|[Native database clients](../../connect-your-client/gui-clients.mdx)| +|`tsh proxy db`|[Native database clients](../../connect-your-client/third-party/gui-clients.mdx)| |`tsh proxy kube`|[Kubernetes clusters behind L7 load 
balancers](tls-routing.mdx#kubernetes)| @@ -273,7 +277,7 @@ interacts with a resource, and signs out. Agents interpret the wire protocol messages they forward to infrastructure resources in order to detect events. Learn more about the Teleport audit events in the [Audit Event -Reference](../monitoring/audit.mdx). +Reference](../deployment/monitoring/audit.mdx). ## Further reading diff --git a/docs/pages/reference/architecture/api-architecture.mdx b/docs/pages/reference/architecture/api-architecture.mdx index efb28745ec113..6c2eee4817ba6 100644 --- a/docs/pages/reference/architecture/api-architecture.mdx +++ b/docs/pages/reference/architecture/api-architecture.mdx @@ -1,13 +1,17 @@ --- title: API Architecture +sidebar_label: API description: Architectural overview of the Teleport gRPC API. +tags: + - conceptual + - platform-wide --- This guide describes the architecture of the Teleport gRPC API, which enables clients like `tctl`, the Teleport Terraform provider, and the Teleport Kubernetes operator to manage dynamic resources in your Teleport cluster. If you are new to the Teleport gRPC API, read how to [Get -Started](../../admin-guides/api/getting-started.mdx). +Started](../../zero-trust-access/api/getting-started.mdx). ## Authentication @@ -28,9 +32,11 @@ action on the `role` resource. You should create a user and role with the minimu (!docs/pages/includes/permission-warning.mdx!) +Copy and paste the below and run on the Teleport Auth Service: + ```code -# Copy and Paste the below and run on the Teleport Auth server. -cat > api-role.yaml < api-role.yaml < In some cases, certificate expiration is not fast enough, and all sessions have to be terminated immediately, for example during active security incident. -For those cases, Teleport Proxy can terminate live connections using [session and identity locking](../../admin-guides/access-controls/guides/locking.mdx). 
+For those cases, Teleport Proxy can terminate live connections using [session and identity locking](../../identity-governance/locking.mdx).
### Short-lived Certs For Users @@ -98,15 +101,15 @@ We recommend using SSO with GitHub, Okta or any other identity provider and get ### Short-lived Certs for Services -Deployment automation services, such as Jenkins, can use Teleport's Machine ID -service to receive and renew certificates. Teleport Machine ID's bot runs alongside -services and rotates SSH and X.509 certificates. +Deployment automation services, such as Jenkins, can use Teleport's Machine & +Workload Identity to receive and renew certificates. Teleport Machine & Workload +Identity's Bot runs alongside services and rotates SSH and X.509 certificates. ![Certificates for services](../../../img/architecture/certs-machine-id@1.8x.svg) ### Internal certificates -Teleport internal services - the Auth Service, Proxy Service, Agents, and Machine ID Bots - use certificates to identify themselves +Teleport internal services - the Auth Service, Proxy Service, Agents, and Machine & Workload Identity Bots - use certificates to identify themselves within a cluster. To join services to the cluster and receive certificates, admins should use [short-lived tokens or cloud identity services](../../enroll-resources/agents/join-token.mdx). @@ -116,14 +119,14 @@ To renew these certificates, admins should use certificate authority rotation, t previously-issued certificates for nodes or users regardless of expiry and issuing a new ones, using a new certificate authority. -Take a look at the [Certificate Rotation Guide](../../admin-guides/management/operations/ca-rotation.mdx) to +Take a look at the [Certificate Rotation Guide](../../zero-trust-access/management/security/ca-rotation.mdx) to learn how to do certificate rotation in practice. To quickly lock out a potentially compromised Auth Service instance, Proxy Service instance, or Teleport Agent without rotating the entire cluster certificates, use [session and identity -locking](../../admin-guides/access-controls/guides/locking.mdx). 
+locking](../../identity-governance/locking.mdx). ## More concepts diff --git a/docs/pages/reference/architecture/authorization.mdx b/docs/pages/reference/architecture/authorization.mdx index b67465d01541c..2b6ae0508d826 100644 --- a/docs/pages/reference/architecture/authorization.mdx +++ b/docs/pages/reference/architecture/authorization.mdx @@ -1,7 +1,11 @@ --- title: Teleport Authorization +sidebar_label: Authorization description: This chapter explains how Teleport authorizes users and roles. -h1: Teleport Authorization +tags: + - conceptual + - platform-wide + - privileged-access --- Teleport handles both authentication and authorization. @@ -64,8 +68,8 @@ running in your organization. Local non-interactive users also have a user entry that maps their name to roles, but they do not have credentials stored in the database. -Non-interactive users have to use Teleport's machine ID product to receive and renew certificates. -Teleport Machine ID's bot runs alongside services and rotates SSH and X.509 certificates on behalf +Non-interactive users have to use Teleport Machine & Workload Identity product to receive and renew certificates. +Teleport Machine & Workload Identity's Bot runs alongside services and rotates SSH and X.509 certificates on behalf of non-interactive users: ![Certificates for services](../../../img/architecture/certs-machine-id@1.8x.svg) @@ -385,8 +389,8 @@ spec: ## Next steps - [Access Control Reference](../access-controls/roles.mdx). -- [Teleport Predicate Language](../predicate-language.mdx). -- [Access Requests Guides](../../admin-guides/access-controls/access-requests/access-requests.mdx) +- [Teleport Predicate Language](../access-controls/predicate-language.mdx). 
+- [Access Requests Guides](../../identity-governance/access-requests/access-requests.mdx) - [Architecture Overview](../../core-concepts.mdx) - [Teleport Auth](authentication.mdx) - [Teleport Agents](agents.mdx) diff --git a/docs/pages/reference/architecture/device-trust.mdx b/docs/pages/reference/architecture/device-trust.mdx index 6b615b108d86a..3ac35985069e0 100644 --- a/docs/pages/reference/architecture/device-trust.mdx +++ b/docs/pages/reference/architecture/device-trust.mdx @@ -1,6 +1,11 @@ --- -title: Device Trust +title: Device Trust Architecture +sidebar_label: Device Trust description: How Teleport Device Trust works. +tags: + - conceptual + - identity-governance + - resiliency --- Device Trust leverages the macOS Secure Enclave, or TPM 2.0 on Linux and Windows @@ -47,8 +52,8 @@ web][blog-post] blog post describes the implementation challenges in detail. For practical use see the [Device Trust section][section]. -[auto-enrollment]: ../../admin-guides/access-controls/device-trust/device-management.mdx#auto-enrollment -[device enrollment tokens]: ../../admin-guides/access-controls/device-trust/device-management.mdx#create-a-device-enrollment-token -[device enforcement]: ../../admin-guides/access-controls/device-trust/enforcing-device-trust.mdx +[auto-enrollment]: ../../identity-governance/device-trust/device-management.mdx#auto-enrollment +[device enrollment tokens]: ../../identity-governance/device-trust/device-management.mdx#create-a-device-enrollment-token +[device enforcement]: ../../identity-governance/device-trust/enforcing-device-trust.mdx [blog-post]: https://goteleport.com/blog/device-trust-for-web-challenges-and-solutions/ -[section]: ../../admin-guides/access-controls/device-trust/device-trust.mdx +[section]: ../../identity-governance/device-trust/device-trust.mdx diff --git a/docs/pages/reference/architecture/kubernetes-applications-architecture.mdx b/docs/pages/reference/architecture/kubernetes-applications-architecture.mdx index 
ff22c78b78144..833a717ed390f 100644 --- a/docs/pages/reference/architecture/kubernetes-applications-architecture.mdx +++ b/docs/pages/reference/architecture/kubernetes-applications-architecture.mdx @@ -1,6 +1,11 @@ --- title: Kubernetes App Discovery Architecture +sidebar_label: Kubernetes App Discovery description: Learn how Teleport automatically discovers applications running on Kubernetes. +tags: + - conceptual + - zero-trust + - infrastructure-identity --- Kubernetes application auto-discovery consists of two parts: @@ -15,20 +20,20 @@ Kubernetes application auto-discovery consists of two parts: ### Polling Kubernetes services The Discovery Service running in a Kubernetes cluster will periodically list services and filter them out -according to the matchers specified in `kubernetes` filed of the service config. You can filter services based on +according to the matchers specified in `kubernetes` field of the service config. You can filter services based on types, namespaces and service labels. Services running in the `kube-system` and `kube-public` namespaces are automatically ignored. All services by default currently -are considered of an "app" type, but it can be changed for a service by Kubernetes annotation [`teleport.dev/discovery-type`](../agent-services/auto-discovery-reference/kubernetes-application-discovery.mdx). +are considered of an "app" type, but it can be changed for a service by Kubernetes annotation [`teleport.dev/discovery-type`](../../enroll-resources/auto-discovery/kubernetes-applications/reference.mdx). If type of a service doesn't equal the one specified in the matcher, service is ignored. -By default name of the created Teleport app will consist of Kubernetes service name, namespace and -Kubernetes cluster name: `$SERVICE_NAME-$NAMESPACE-$KUBE_CLUSTER_NAME`. That name can be changed by Kubernetes annotation -[`teleport.dev/name`](../agent-services/auto-discovery-reference/kubernetes-application-discovery.mdx). 
+By default, the name of the created Teleport app will consist of the Kubernetes service name, namespace and +Kubernetes cluster name: `$SERVICE_NAME-$NAMESPACE-$KUBE_CLUSTER_NAME`. That name can be changed by the Kubernetes annotation +[`teleport.dev/name`](../../enroll-resources/auto-discovery/kubernetes-applications/reference.mdx). -Every port that is exposed by the service is considered separately, so one Kubernetes service can result in creation of multiple Teleport app resources, +Every port that is exposed by the service is considered separately, so one Kubernetes service can result in creation of multiple Teleport app resources, if more than one port is exposed on the service. In that case port name will be added to the app name. -By default, the Discovery Service will only try to expose ports that serve HTTP/HTTPS. To understand if this port serves HTTP, discovery +By default, the Discovery Service will only try to expose ports that serve HTTP/HTTPS. To understand if this port serves HTTP, discovery will use several heuristics or will try to probe exposed port with a HEAD HTTP request. Heuristics for determining if port serves HTTP/HTTPS are: @@ -38,13 +43,13 @@ values `http`/`https` it will be used in the URI. - Teleport will perform HTTP request to the port to see if it serves HTTP/HTTPS requests - if exposed port's name is `http` or it has numeric value 80 or 8080, `http` will be used. -Otherwise, this port is ignored. But if annotation [`teleport.dev/protocol`](../agent-services/auto-discovery-reference/kubernetes-application-discovery.mdx) is used on the service and its value is -"tcp", then this port will be exposed as a TCP app. +Otherwise, this port is ignored. But if the annotation [`teleport.dev/protocol`](../../enroll-resources/auto-discovery/kubernetes-applications/reference.mdx) is used on the service and its value is +"tcp", then this port will be exposed as a TCP app.
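To make the annotations above concrete, here is a sketch of a Kubernetes Service that opts into the discovery behavior described in this section. The annotation names (`teleport.dev/name`, `teleport.dev/protocol`) are the ones referenced above; the service name, namespace, selector, and port values are illustrative placeholders only, not values from the documentation.

```yaml
# Hypothetical Service demonstrating the discovery annotations described above.
apiVersion: v1
kind: Service
metadata:
  name: example-app
  namespace: default
  annotations:
    # Override the default $SERVICE_NAME-$NAMESPACE-$KUBE_CLUSTER_NAME app name.
    teleport.dev/name: example
    # Expose this service's port as a TCP app instead of relying on the
    # HTTP/HTTPS heuristics.
    teleport.dev/protocol: tcp
spec:
  selector:
    app: example-app
  ports:
    - name: tcp-api
      port: 9000
```

Without the `teleport.dev/protocol` annotation, a port like `tcp-api` would simply be subject to the HTTP/HTTPS heuristics and probing described above.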
### Creating Teleport apps and proxying requests to them After relevant Kubernetes services were listed and filtered, the Discovery Service will create Teleport apps, reconciling -existing and new ones: +existing and new ones: - If a discovered app was not present at the Teleport backend, it will be created - If a discovered app was already present at the backend, it will be updated - If a discovered app was already present at the backend, but it was not found in the Kubernetes cluster anymore, it will be deleted. @@ -53,5 +58,3 @@ App service runs on the Kubernetes cluster and proxies apps based on labels spec Discovery Service will have labels copied from the service of origin. In addition, label `teleport.dev/kubernetes-cluster` will be set for the app and it will be equal to the name of the Kubernetes cluster of origin. Discovery service uses `discovery_group` property to get Kubernetes cluster name. - - diff --git a/docs/pages/reference/architecture/machine-id-architecture.mdx b/docs/pages/reference/architecture/machine-id-architecture.mdx index 36142ff26568d..dcfd763998e02 100644 --- a/docs/pages/reference/architecture/machine-id-architecture.mdx +++ b/docs/pages/reference/architecture/machine-id-architecture.mdx @@ -1,11 +1,18 @@ --- -title: Machine ID Architecture -description: How Teleport Machine ID works. +title: Machine & Workload Identity Architecture +sidebar_label: Machine & Workload Identity +description: How Teleport Machine & Workload Identity works. +tags: + - conceptual + - mwi + - infrastructure-identity --- -This section provides an overview of Teleport Machine ID's inner workings. +This section provides an overview of Teleport Machine & Workload Identity's +inner workings. 
-The initial specification and design for Machine ID can be found in +The initial specification and design for Machine & Workload Identity can be +found in [the Request For Discussion.](https://github.com/gravitational/teleport/blob/master/rfd/0064-bot-for-cert-renewals.md) ## What is a bot? @@ -17,15 +24,16 @@ static username/password credentials. A bot does not exist as a single distinct resource within Teleport. Instead, they comprise three linked resources. These are: -- Bot user: this will be the user that the Machine ID agent authenticates as. +- Bot user: this will be the user that the Machine & Workload Identity agent + authenticates as. - Bot role: the bot user is assigned the bot role, and the bot role contains various permissions that the bot will need to function. For example, the ability to watch the certificate authorities and the ability to [impersonate roles](#role-impersonation). - Token: for [onboarding](#joining-and-authentication), a token must exist that - allows the Machine ID agent to initially authenticate as the bot user. If an - existing token is not specified, then a single-use token will be created by - the Auth Service. + allows the Machine & Workload Identity agent to initially authenticate as the + bot user. If an existing token is not specified, then a single-use token will + be created by the Auth Service. - Bot instance: a single instance of a bot. As multiple `tbot` clients can join with a single Bot user or a single token, Bot Instances keep a running record of unique bot joins. @@ -39,7 +47,7 @@ many cases there may be multiple `tbot`'s using the same bot identity. ## Role Impersonation Role Impersonation is an RBAC feature of Teleport that is used heavily by -Machine ID. +Machine & Workload Identity. Role Impersonation allows a user to generate credentials with a set of requested roles. The user does not have to hold these roles, but must have been granted @@ -50,18 +58,19 @@ the user. 
These credentials can then be used to complete any action that is allowed by the role's configured permissions. -In the case of Machine ID, the bot user is assigned a bot role, which includes -permissions to impersonate the roles that the user has configured. +In the case of Machine & Workload Identity, the bot user is assigned a bot role, +which includes permissions to impersonate the roles that the user has +configured. ## `tbot` -`tbot` is the binary that acts as the agent for Machine ID on your machines that -need access to resources protected by Teleport. It is typically ran in one of -two modes. By default, it is a daemon-like long-running process. This is -suitable for situations where your machine is long running and will need -continuous access to resources. `tbot` can also run in "oneshot" mode where it -will fetch credentials for your machine once before exiting. This is ideal for -short-lived environments such as CI/CD workflows. +`tbot` is the binary that acts as the agent for Machine & Workload Identity on +your machines that need access to resources protected by Teleport. It is +typically ran in one of two modes. By default, it is a daemon-like long-running +process. This is suitable for situations where your machine is long running and +will need continuous access to resources. `tbot` can also run in "oneshot" mode +where it will fetch credentials for your machine once before exiting. This is +ideal for short-lived environments such as CI/CD workflows. Before `tbot` can be started, at least two parts of configuration will need to be provided via the configuration file or as arguments provided to `tbot` when @@ -75,7 +84,7 @@ it is executed. This consists of: impersonated). 
For more detail about the configuration options, see -[the reference.](../machine-id/machine-id.mdx) +[the reference.](../machine-workload-identity/machine-workload-identity.mdx) On initial load, `tbot` uses the configured join method to obtain a set of credentials for the bot user from the Teleport Auth Service. It can then use @@ -102,12 +111,12 @@ by the latest certificate authority. Joining is the process by which `tbot` initially authenticates as a bot with the Teleport Auth Service. -Machine ID leverages the existing token resource within Teleport, with the -token containing an additional `botName` field that identifies the bot user -associated with the token. +Machine & Workload Identity leverages the existing token resource within +Teleport, with the token containing an additional `botName` field that +identifies the bot user associated with the token. -Machine ID currently supports two methods of joining that have some key -differences. +Machine & Workload Identity currently supports two methods of joining that have +some key differences. ### Ephemeral token @@ -166,10 +175,10 @@ host, like its architecture and OS version. As tracking Bot Instances requires bots to prove their identity during each authentication attempt, this does require bots to maintain state if they wish to keep a single Bot Instance ID over time. It isn't expected or feasible to -keep state for many Machine ID use cases: for example, CI/CD workflows generally -should rejoin from scratch each time. This is expected behavior, and bots with -use cases like this will generate more unique Bot Instances than long-lived -clients. +keep state for many Machine & Workload Identity use cases: for example, CI/CD +workflows generally should rejoin from scratch each time. This is expected +behavior, and bots with use cases like this will generate more unique Bot +Instances than long-lived clients. 
Bot Instances have a relatively short lifespan and are set to expire after the most recent identity issued for that instance will expire. If the `tbot` client diff --git a/docs/pages/reference/architecture/proxy-peering.mdx b/docs/pages/reference/architecture/proxy-peering.mdx deleted file mode 100644 index 245eca05ad760..0000000000000 --- a/docs/pages/reference/architecture/proxy-peering.mdx +++ /dev/null @@ -1,51 +0,0 @@ ---- -title: Proxy Peering -description: How Teleport implements more efficient networking with Proxy Peering. ---- - -Proxy Peering enables Teleport Agents to be reachable without connecting to -every Teleport Proxy Service. This allows Teleport Proxy instances to scale -horizontally without increasing the number of connections created by agents. - - - A Teleport Agent is a Teleport instance that provides access to resources in your infrastructure, i.e., by running the SSH, - Kubernetes, Database, Application, or Desktop Services. - - -By default, Teleport Agents need to create a reverse tunnel to every Teleport Proxy -to ensure a client is able to reach every agent. With Proxy Peering this is no -longer a requirement. When Proxy Peering is enabled agents will automatically -change their behavior to connect to the configured number of Teleport Proxy -instances. - -## How it works - -### Proxy Service - -A gRPC service on each Teleport Proxy Service instance provides an API for establishing a -bi-directional connection to the agents connected to that Teleport Proxy. Teleport -Proxy instances manage a gRPC client to all other Teleport Proxy instances in the -cluster. - -Routing information on which Teleport Proxy instances each agent is connected to -is stored in Teleport's backend and propagated to each Teleport Proxy instance. - -The routing information and gRPC service allow a Teleport Proxy to identify which -Proxy Service instance an agent is connected to and create an end-to-end connection -from a client to that agent. 
This allows for access to the agent without connecting -to the same Teleport Proxy instance initially. - -### Reverse tunnel agents - -Agents will check whether you have enabled Proxy Peering before -attempting to create a reverse tunnel to a Teleport Proxy instance. - -By default, in Proxy Peering mode, agents are configured to connect to a single -Teleport Proxy instance. For high availability a cluster administrator may -configure agents to connect to 2 or more Teleport Proxy instances. - -![Teleport Proxy Peering](../../../img/architecture/proxy-peering@1.2x.png) - -## Next Steps -- See the [migration guide](../../admin-guides/management/operations/proxy-peering.mdx) to learn how to upgrade an existing cluster to use -Proxy Peering diff --git a/docs/pages/reference/architecture/proxy.mdx b/docs/pages/reference/architecture/proxy.mdx index edfb17dc62989..289d0b0e33759 100644 --- a/docs/pages/reference/architecture/proxy.mdx +++ b/docs/pages/reference/architecture/proxy.mdx @@ -1,7 +1,10 @@ --- title: Teleport Proxy Service +sidebar_label: Proxy Service description: Architecture of Teleport's identity-aware proxy service -h1: Teleport Identity-Aware Proxy Service +tags: + - conceptual + - platform-wide --- The Teleport Proxy Service is an identity-aware proxy with a web UI. The Teleport Proxy Service @@ -17,7 +20,7 @@ provides the following key features: to one TLS port using TLS routing feature. - Records commands and API calls, and queries and streams them to the audit log. The audit log is stored on one of the Teleport Auth Service - [backends](../backends.mdx). + [backends](../deployment/backends.mdx). 
![Proxy service](../../../img/architecture/proxy.png) diff --git a/docs/pages/reference/architecture/relay.mdx b/docs/pages/reference/architecture/relay.mdx new file mode 100644 index 0000000000000..fdde701bed2d1 --- /dev/null +++ b/docs/pages/reference/architecture/relay.mdx @@ -0,0 +1,119 @@ +--- +title: Teleport Relay Service +sidebar_label: Relay Service +description: Architecture of Teleport's Relay service +tags: + - conceptual + - platform-wide +--- + + + +The Teleport Relay service is available starting from version 18.3.0 for the SSH protocol and version 18.5.0 for the Kubernetes protocol. Other protocols are not supported at this time. + + + +The Relay service is an optional component of a Teleport cluster that provides an alternative connectivity path between clients and resources. Similar to the Proxy service, it forwards connections to resources [through reverse tunnels](proxy.mdx#tunnels). However, unlike the Proxy service, the Relay service does not route connections through the Teleport control plane. Instead, it allows Teleport agents to open tunnels directly to the Relay service, enabling clients to connect to resources with lower latency and higher efficiency in specific network scenarios. + +This alternate connectivity is beneficial in situations where the client and the resource are known to be close together in a physical or logical sense (the same data center, office, campus, geographical region) and regular connections through the control plane would incur higher latency, lower throughput, or higher costs. + +Unlike the Proxy service, the Relay does not host a web UI, intercept connections, impersonate users or provide access to the Teleport control plane API. As such, it's simpler to deploy and keep secure, and it's thus suitable to be deployed even in environments where it wouldn't be possible to securely deploy a Proxy instance. 
+ + + +The Teleport Relay service is intended for specific scenarios where clients and agents are in the same network segment and there is a need for connectivity that does not go through the Teleport control plane. It is not a required or recommended cluster component in most Teleport deployments. + + + +## How it works + +A Relay service deployment consists of one or more Relay instances configured with the same Relay group name, reachable by Teleport agents and clients through an L4 network load balancer. Relay instances of the same group receive connections from agents and clients through the load balancer, and can connect to each other directly. Relay instances listen on two separate ports to serve connections from agents and clients, and a third listening port is used for direct connections from other Relay instances in the same group. + +The typical setup uses a load balancer serving the two ports, reachable at a single hostname, with port 3042 used by agents and port 443 used by clients. However, it's possible to use different load balancers and hostnames for the client and agent ports if necessary. The listening port for connectivity between Relay instances does not make use of the load balancer, since each Relay instance might require connecting to specific other instances to serve connections; as such, the Relay service requires outbound connectivity from each Relay instance to its peers on that port. + +The connections from the client to the Relay, the connections between Relay instances, and the tunnel connections between the Teleport agents and the Relay instances use TCP and are encrypted and authenticated through mutual TLS, with X.509 certificates issued by the Teleport control plane.
+ +The `relay_service` section of the Teleport configuration file ([see reference](../deployment/config.mdx#relay-service)) defines both the network connectivity of the single Relay instance and the settings of the Relay group, which should be the same between all instances and will be fetched by agents as needed. + +An agent can be configured to use a given Relay by adding the address of the load balancer to its `teleport.yaml` configuration file: + +```yaml +teleport: + proxy_server: proxy.example.com:443 + relay_server: relay.example.com:3042 +``` + +Agents running the SSH service that are also connected to a Proxy (i.e. they're running in "tunnel mode") will also open tunnels to the Relay service. The address specified in the configuration should resolve to the load balancer for the port used by agents. + +When an agent is configured to open tunnels to a given Relay group, it will periodically check the Relay configuration advertised by the Relay instances, then open enough tunnel connections to distinct Relay instances as determined by the Relay group configuration. The target connection count in the configuration should not exceed the number of active and reachable Relay instances in the group at any given time, or all agents configured to use the group will keep reconnecting through the load balancer in search of distinct instances that are not actually available. + +Before a Relay instance shuts down, it will inform all connected agents that it's about to terminate, and agents will then open new tunnel connections to replace their existing connection, ideally maintaining availability throughout this process. Because of this, we recommend deploying new Relay instances before shutting down old ones, if possible. This is what will happen when deploying the Relay service through [the `teleport-relay` Helm chart](../helm-reference/teleport-relay.mdx). Running a fixed set of Relay instances can result in minor downtime when restarting instances.
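The tunnel-maintenance behavior described above can be sketched as follows. This is an illustrative Python model, not Teleport's implementation; the function and variable names are hypothetical.

```python
# Illustrative model of how an agent maintains tunnels to a Relay group.
# Hypothetical names; this is not Teleport source code.

def pick_tunnel_targets(target_count: int, available_instances: list[str]) -> list[str]:
    """Choose distinct Relay instances for the agent to tunnel to.

    If target_count exceeds the number of reachable instances, the agent can
    never satisfy its target; the docs note it would keep dialing the load
    balancer looking for distinct instances that are not actually available.
    """
    if target_count > len(available_instances):
        # The real agent would retry indefinitely; here we just surface
        # the misconfiguration.
        raise ValueError(
            "target connection count exceeds reachable Relay instances"
        )
    # Open tunnels to target_count distinct instances.
    return available_instances[:target_count]

# A group with three reachable instances and a target of two tunnels:
targets = pick_tunnel_targets(2, ["relay-a", "relay-b", "relay-c"])
print(targets)  # ['relay-a', 'relay-b']
```

The key property is that tunnels go to *distinct* instances, which is why the target count is bounded by the live instance count rather than by load-balancer capacity.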
+ +{/* TODO: link appropriate deployment guides */} + +When connected to a Relay group, Teleport agents will include the group name and the list of Relay instances that they have connected to in the heartbeat for their served resources. It's possible to see this by reading the appropriate resource heartbeat through `tctl` (e.g. `tctl get node/`) or, for SSH servers, `tsh ls --format=json`. The list of Relay instance IDs is also used internally by the Relay service to route connections if an agent serving the requested resource is not connected to all Relay instances: in such a case, the Relay instance forwarding a connection from a client will pass the connection along to a Relay instance that has a tunnel from the agent serving the resource - this is the same mechanism used for connection routing in the control plane when using [Proxy Peering](../../zero-trust-access/deploy-a-cluster/proxy-peering.mdx). + +When a Relay instance receives a request to open a connection to a resource, it'll run the same connection routing logic used by the Proxy service, resulting in one or more registered resources as a target for the connection. It will then check if the resource is available through an agent that has a tunnel opened directly with the instance; if so, the connection is then forwarded directly to the agent, otherwise, if the resource is available through a Relay instance of the same Relay group, the connection is forwarded to one of the other Relay instances, which will then pass the connection along to the appropriate agent through a tunnel. If the connection request is for a resource that is not available through the Relay group, an error is returned to the client. + +## Client configuration + +Use of the Relay service requires a desktop client (i.e. `tsh`) or the `ssh-multiplexer` service of the Machine & Workload Identity agent (`tbot`). It's not possible to use a Relay through the Teleport web UI. 
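The connection-routing decision described above (direct tunnel first, then a peer Relay in the same group, otherwise an error) can be sketched like this. It is an illustrative Python model with hypothetical names, not Teleport code.

```python
# Illustrative model of Relay connection routing; not Teleport source code.

def route(resource: str,
          local_tunnels: dict[str, set[str]],
          peer_tunnels: dict[str, set[str]]) -> str:
    """Decide where a Relay instance sends a client connection for `resource`.

    local_tunnels: agents tunneled directly to this instance -> their resources
    peer_tunnels:  peer Relay instances in the group -> resources reachable via them
    """
    # Prefer an agent tunneled directly to this instance.
    for agent, resources in local_tunnels.items():
        if resource in resources:
            return f"direct tunnel to agent {agent}"
    # Otherwise forward to a peer Relay instance in the same group.
    for peer, resources in peer_tunnels.items():
        if resource in resources:
            return f"forward to peer Relay {peer}"
    # Not reachable through this Relay group at all.
    raise LookupError(f"resource {resource!r} is not available through this Relay group")

print(route("node-1", {"agent-a": {"node-1"}}, {}))
# direct tunnel to agent agent-a
```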
+ +When using `tsh`, it's possible to specify the `--relay` option (or the `TELEPORT_RELAY` environment variable) with the hostname and port of the load balancer for the port used by clients; if a port is not specified, the default is taken to be port 443. The load balancer for the client port of the Relay service can employ any routing strategy, since all Relay instances are going to serve client connections equally. + +When you specify `--relay` at `tsh login` time, the configured Relay address is saved in your `tsh` profile configuration. This address will be used for all later invocations unless you override it. + +You can configure a default Relay address on a per-user basis using the `default_relay_addr` user trait. This trait can be set in several ways: + +- Directly for each [local user](../../zero-trust-access/rbac-get-started/users.mdx) +- Passed by the SSO provider for [SSO users](../../zero-trust-access/sso/sso.mdx) +- Granted through an [Access List](../../identity-governance/access-lists/access-lists.mdx) +- Added by a [Login Rule](../../zero-trust-access/sso/login-rules/login-rules.mdx) + +When using `tsh ssh`: + +- If a Relay address was specified at login time or if you pass `--relay`, the SSH connection will go through the specified Relay instead of the control plane. +- If an address was specified at login time, you can use `--relay=none` to temporarily disable the Relay service. +- Use `--relay=default` to use the default Relay address for the user, even if a different address was specified at login time. +- When a Relay address is configured, only servers available through the Relay will be accessible. No connection will go through the control plane. + +``` +# specifying a relay address when logging in +$ tsh login --proxy proxy.example.com --relay relay.example.com +... 
+> Profile URL: https://proxy.example.com:443 + Relay address: relay.example.com + Logged in as: username + Cluster: proxy.example.com + Roles: access, auditor, editor + Logins: root, ubuntu + Kubernetes: enabled + Kubernetes groups: system:masters + Valid until: 2025-10-22 00:48:07 +0200 CEST [valid for 12h0m0s] + Extensions: permit-agent-forwarding, permit-port-forwarding, permit-pty +# connections will use the relay by default +$ tsh ssh root@nodename +... +# using a specific relay for a single connection +$ tsh ssh --relay another-relay.example.com root@nodename +... +# using no relay +$ tsh ssh --relay none root@nodename +... +``` + +When using `tsh kube login`, `tsh proxy kube`, `tsh kube exec` and `tsh kubectl`: +- If a Relay address was specified at login time or if you pass `--relay`, the generated Kubeconfig file for direct connections will point to the Relay endpoint and the local forwarding proxy will connect to the Relay rather than the control plane. +- If an address was specified at login time, you can use `--relay=none` to temporarily disable the Relay service. +- Use `--relay=default` to use the default Relay address for the user, even if a different address was specified at login time. +- When a Relay address is configured, only Kubernetes clusters available through the Relay will be accessible. No connection will go through the control plane. If access to the same Kubernetes cluster is being provided by multiple Teleport agents, we recommend making all the agents available through the Relay for the best resiliency and load balancing. + +It's not currently possible to use the Relay with `tsh kube join` or with credential files generated by `tsh login` or `tctl auth sign`. + +The [`ssh-multiplexer`](../machine-workload-identity/configuration.mdx#ssh-multiplexer) and [`kubernetes/v2`](../machine-workload-identity/configuration.mdx#kubernetesv2) Machine & Workload Identity services can be configured to use a Relay. 
Similarly to the behavior of `tsh`, if a Relay address is set in the service configuration, only servers or Kubernetes clusters available through the Relay will be accessible. It's possible to provide access to resources through the Relay and through the control plane from the same `tbot` instance by configuring independent services with and without the `relay_server` option. + +## More concepts + +- [Architecture Overview](../../core-concepts.mdx) +- [Teleport Proxy](proxy.mdx) +- [Proxy Peering](../../zero-trust-access/deploy-a-cluster/proxy-peering.mdx) diff --git a/docs/pages/reference/architecture/session-recording.mdx b/docs/pages/reference/architecture/session-recording.mdx index 1f40d9d6eb35d..e891dae773b3f 100644 --- a/docs/pages/reference/architecture/session-recording.mdx +++ b/docs/pages/reference/architecture/session-recording.mdx @@ -1,6 +1,12 @@ --- title: Teleport Session Recording +sidebar_label: Session Recording description: An overview of Teleport's session recording and its configuration +tags: + - session-recording + - conceptual + - zero-trust + - audit --- This page provides an overview of how Teleport records sessions and the various @@ -64,7 +70,7 @@ Teleport cluster. It can be configured by setting `session_recording` in the `auth_service` section of your `teleport.yaml`, or dynamically using the `session_recording_config` resource. If you need to apply different recording configurations to different sets of resources, you can set up -[trusted clusters](../../admin-guides/management/admin/trustedclusters.mdx) with their own +[trusted clusters](../../zero-trust-access/deploy-a-cluster/trustedclusters.mdx) with their own recording configurations. @@ -78,6 +84,41 @@ and `proxy-sync` identically (perform synchronous recording). +### Encryption + +Teleport supports at-rest encryption of session recordings for all 4 recording +modes. 
It can be enabled through the `teleport.yaml` configuration file or +dynamically through the `session_recording_config` resource. + +An example of enabling encryption using the `teleport.yaml`: + +```yaml +auth_service: + session_recording_config: + encryption: + enabled: yes +``` + +And using the dynamic `session_recording_config` resource: + +```yaml +kind: session_recording_config +spec: + encryption: + enabled: yes +``` + +Any encryption keys are provisioned using the same key storage backend +configured for CAs. + + +In distributed HA environments where there are multiple Teleport Auth Services, +it is important that all services have access to the same key storage backend +and keys. Otherwise availability of session replay may be degraded. This holds +true even when using [Manual Encryption Key Management](#manual-encryption-key-management). + + ### Record at the SSH node By default, Teleport performs recording at the SSH node. This is because Teleport's @@ -88,43 +129,50 @@ Proxy Server cannot see the SSH traffic to the node. It is encrypted end-to-end ### Record at the Proxy Service -In **Recording Proxy Mode**, the Proxy Service terminates (decrypts) the SSH -connection using the certificate supplied by the client via SSH agent forwarding -and then establishes its own SSH connection to the final destination server. -This allows the Proxy Service to forward SSH session data to the Auth Service to -be recorded, as shown below: +In **Recording Proxy Mode**, the Proxy Service terminates (decrypts) SSH connections +by presenting the client with a trusted host certificate dynamically generated by +the Auth Service and signed by the Teleport Host CA.
-![recording-proxy](../../../img/recording-proxy.svg) +The Proxy Service then establishes its own connection to the destination server authenticating in one of two ways: -Recording Proxy Mode allows Teleport users to enable session recording for -OpenSSH's servers running `sshd`, which is helpful when gradually transitioning -large server fleets to Teleport. +- For the Teleport SSH Service and legacy OpenSSH servers, the Proxy Service requires the client to provide its credentials through SSH Agent Forwarding so that these credentials can be reused to connect to the OpenSSH server. + - Note that SSH Agent Forwarding must be allowed for any user trying to connect in this mode. +- For [OpenSSH servers](../../enroll-resources/server-access/openssh/openssh-agentless.mdx), the Proxy Service requests a certificate signed by the Teleport OpenSSH CA, which will be trusted by the OpenSSH server. -We consider Recording Proxy Mode to be less secure, as it grants additional -privileges to the Proxy Service. Since the Proxy Service needs credentials to -decrypt the SSH connection, it must be properly secured and is a higher value -target for an attacker than a Proxy Service instance that cannot decrypt the -data flowing through it. +With both sides of the connections decrypted, the Proxy Service acts as a pipe between the client +and the destination server, performing RBAC checks, recording the session, and emitting audit events +in the process as shown below: -Additionally, the credentials that the Proxy Service uses to decrypt the SSH -connection are provided via SSH Agent Forwarding, so Agent Forwarding must be -enabled to record at the Proxy Service. +![recording-proxy](../../../img/recording-proxy.svg) + +We consider Recording Proxy Mode to be less secure than recording at the server +level for two reasons: -However, there are advantages of proxy-based session recording too. 
When -sessions are recorded at the SSH nodes, a root user can add iptables rules to -prevent sessions logs from reaching the Auth Service. With sessions recorded at -the Proxy Service, users with root privileges on nodes have no way of disabling -the audit. +- It grants additional privileges to the Teleport Proxy Service. When recording takes place at the Teleport SSH Service, the Proxy Service stores no secrets and cannot "see" the decrypted data. This makes a Proxy Service instance less critical to the security of the overall cluster. But if an attacker gains physical access to a Proxy Server running in Recording Proxy Mode, they will be able to see the decrypted traffic and client keys stored in the Proxy Server's process memory. +- It requires the use of SSH Agent Forwarding (for Teleport Nodes and legacy OpenSSH server support). -See the [reference](../monitoring/audit.mdx) to learn how to +See the [reference](../deployment/monitoring/audit.mdx) to learn how to turn on Recording Proxy Mode. Note that the recording mode is configured on the Auth Service. + + +The use of Recording Proxy Mode across your cluster is no longer necessary to support +[OpenSSH servers](../../enroll-resources/server-access/openssh/openssh-agentless.mdx). +Recording Proxy Mode is automatically used for OpenSSH servers without the need for SSH +Agent Forwarding, which is the primary security concern for this mode. + +Recording Proxy Mode will continue to be supported for legacy OpenSSH support and other +use cases, but many existing and new features will not be supported in this mode due +to several technical limitations it introduces. + + + ### Synchronous recording When synchronous recording is enabled, the Teleport component doing the recording (which may be the Teleport SSH Service or the Proxy Service instance depending on your configuration) -submits each recording event to Teleport's Auth Service as it occurs. 
In this mode, +submits each recording event to the Teleport Auth Service as it occurs. In this mode, failure to emit a recording event is considered fatal - the session will be terminated if an event cannot be recorded. This makes synchronous recording best suited for highly regulated environments where you need to be confident that all data is recorded. @@ -132,24 +180,28 @@ This also means that you need a reliable and low-latency connection to the Auth Server for the duration of the session to ensure that the session isn't interrupted or terminated due to temporary connection loss. -In synchronous recording modes, the Auth Service receives a stream of recording +In synchronous recording modes, the Teleport Auth Service receives a stream of recording events and is responsible for assembling them into the final artifact and uploading -it to the storage backend. Since data is streamed directly to the Auth Service, -Teleport administrators don't need to be concerned with disk space on their -Teleport SSH Service and Proxy Service instances, as no recording data is -written to those disks. +it to the storage backend. If recording encryption is enabled, the Teleport Auth Service +is also responsible for encrypting the event stream prior to upload. Since data is +streamed directly, Teleport administrators don't need to be concerned with disk space on +their Teleport SSH Service and Proxy Service instances, as no recording data is written +to those disks. ### Asynchronous recording -When asynchronous, recording events are written to the local filesystem -during the session. When the session completes, Teleport assembles the parts into a -complete recording and submits the entire recording to the Auth Service for storage. +When asynchronous recording is enabled, recording events are written to the +local filesystem during the session. 
When the session completes, Teleport +assembles the parts into a complete recording and submits the entire recording +to the Auth Service for storage. Since recording data is flushed to disk, administrators should be careful to ensure that the system has enough disk space to accommodate the expected number of Teleport sessions. Additionally, since recording data is temporarily stored on disk, there is a greater chance that it can be tampered with, deleted, or otherwise corrupted -before the upload completes. +before the upload completes. Enabling recording encryption for asynchronous +modes ensures that events are encrypted before being written to the local +filesystem, which can help prevent tampering. The advantage of asynchronous recording is that it doesn't require a persistent connection to the Auth Service. For example, an SSH session can continue to operate @@ -164,7 +216,7 @@ high-latency. ## Storage Session recordings are stored in Teleport's audit sessions backend, which is -specified by the `audit_sessions_uri` field in the `teleport.yaml` [configuration file](../config.mdx). +specified by the `audit_sessions_uri` field in the `teleport.yaml` [configuration file](../deployment/config.mdx). Teleport currently supports the following session storage backends: - File: stores recordings on the local filesystem. Suitable for dev environments, demos, @@ -182,7 +234,8 @@ audit log storage, which supports a different set of services (SQLite, DynamoDB, A Teleport session recording is an ordered sequence of structured events associated with a session. Each event is an encoded [Protocol Buffer](https://developers.google.com/protocol-buffers) -and the complete session is compressed with gzip before being written to the storage backend. +and the complete session is compressed with gzip, and optionally encrypted, +before being written to the storage backend.
These recordings have a `.tar` extension for backwards compatibility reasons, but it should be noted that they are not TAR archives and cannot be read using the `tar` utility. @@ -193,6 +246,10 @@ SSH and Kubernetes sessions can be played in Teleport's Web UI or by using the [`tsh play`](../cli/tsh.mdx) command. Desktop session recordings can only be played back in the Web UI. +PostgreSQL database sessions can also be played using [`tsh play`](../cli/tsh.mdx) +and the Web UI. Sessions from all database protocols are accessible with +[`tsh play`](../cli/tsh.mdx) in JSON format (`--format json`). + In the Web UI, the session recordings page is populated by querying Teleport's audit log for session end events, which contain metadata about the recording. This is sometimes surprising to Teleport users, because even though the recordings @@ -219,8 +276,65 @@ as a series of parts (in cloud storage or on an Auth Service instance's disk) an responsibility of the Auth Service's upload completer to detect the abandoned upload and complete it. +## Recording summaries + +With Teleport Identity Security, you can run language models against session +recordings to generate session summaries. This feature supports SSH, +Kubernetes, and database sessions and is only available if the Teleport cluster +is licensed for Identity Security. + +After each recording is uploaded, the Teleport Auth Service matches session metadata +against all `inference_policy` resources. If a match is found, it will then use +configuration from a corresponding `inference_model` resource to generate +a session recording summary. The inference model specifies, among other options, which +inference provider and provider-specific model is used to generate a summary. + +No more than 150 concurrent summary jobs will be executed per Auth Service. If +more sessions arrive, the auth server will wait until the previous ones are +summarized.
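The per-Auth-Service cap on concurrent summary jobs described above behaves like a bounded worker pool. A minimal Python sketch of that pattern follows; the limit of 150 comes from the text, while everything else is a hypothetical illustration rather than Teleport code.

```python
# Illustrative sketch of bounding concurrent summary jobs.
# The limit of 150 is from the docs; the rest is hypothetical.
import threading

MAX_CONCURRENT_SUMMARIES = 150
_slots = threading.BoundedSemaphore(MAX_CONCURRENT_SUMMARIES)

def summarize(recording_id: str) -> str:
    # Blocks when 150 jobs are already running, so later recordings
    # wait until earlier ones are summarized.
    with _slots:
        # ... call the configured inference provider here ...
        return f"summary of {recording_id}"

print(summarize("session-123"))  # summary of session-123
```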
+ +Session recording summaries are stored along with session recordings and can be +viewed using the Teleport web app. + +## Manual Encryption Key Management + +Manual encryption key management places all responsibility and complexity of managing +keys on the administrator. This comes with much more risk of a misconfiguration impacting +Teleport's ability to record or replay sessions. It's for this reason that Teleport Cloud +does not allow configuring manual encryption key management. Consider using the default +automatic key management unless your usage explicitly requires manual key management. + +By default, Teleport will automatically provision and manage session recording encryption +keys. However, there may be cases where automatic management is not possible or desired, +such as environments where Teleport lacks sufficient privileges to manage keys directly. +For this reason, Teleport can be configured to use externally managed keys. This is +controlled using the `encryption.manual_key_management` configuration in either the +`teleport.yaml` file or the dynamic `session_recording_config` resource. + +```yaml +encryption: + enabled: yes + manual_key_management: + enabled: yes + active_keys: + - type: pkcs11 + label: 'session_recordings_002' + rotated_keys: + - type: pkcs11 + label: 'session_recordings_001' +``` + + +Session recording encryption relies on a form of envelope encryption using OAEP with +4096-bit RSA key pairs. This means RSA 4096 is the only key type eligible for use with +`manual_key_management`. It is also important that these keys are permitted to be used +for decryption. What this means varies per key storage backend, but it generally must be +defined when the key is first provisioned.
+ + ## Related reading -- [Recording Proxy Mode](../../enroll-resources/server-access/guides/recording-proxy-mode.mdx) -- [SSH recording modes](../monitoring/audit.mdx) -- [Session recording for desktops](../agent-services/desktop-access-reference/sessions.mdx) +- [RBAC for session recordings](../access-controls/roles.mdx#rbac-for-session-recordings) +- [SSH recording modes](../deployment/monitoring/audit.mdx) +- [Session recording for desktops](../../enroll-resources/desktop-access/reference/sessions.mdx) +- [Encrypted Session Recordings](../../enroll-resources/server-access/guides/encrypted-session-recordings/encrypted-session-recordings.mdx) diff --git a/docs/pages/reference/architecture/teleport-cloud-architecture.mdx b/docs/pages/reference/architecture/teleport-cloud-architecture.mdx index d1c6e59e1ca2b..016dbb125d072 100644 --- a/docs/pages/reference/architecture/teleport-cloud-architecture.mdx +++ b/docs/pages/reference/architecture/teleport-cloud-architecture.mdx @@ -1,6 +1,10 @@ --- title: Teleport Enterprise Cloud Architecture +sidebar_label: Enterprise Cloud description: Cloud security, availability, and networking details. +tags: + - conceptual + - platform-wide --- This guide describes architectural features of Teleport Enterprise (Cloud). In @@ -9,7 +13,7 @@ Teleport Auth Service and Teleport Proxy Service. Read on for information about topics like High Availability, deployment regions, and data retention. If you are new to Teleport Enterprise (Cloud), read the [Getting Started -Guide](../../get-started.mdx). +Guide](../../get-started/get-started.mdx). ## Security @@ -38,7 +42,7 @@ configuration retained in DynamoDB indefinitely, and session recordings retained Customers whose subscriptions lapse will have all session recordings, audit logs, and cluster state deleted between 7 and 30 days after the lapse. 
-Customers can configure [External Audit Storage](../../admin-guides/management/external-audit-storage.mdx) to store audit logs and session recordings in their +Customers can configure [External Audit Storage](../../zero-trust-access/management/external-audit-storage.mdx) to store audit logs and session recordings in their own AWS account where retention period can be managed independently. ## High Availability @@ -47,28 +51,26 @@ own AWS account where retention period can be managed independently. The Teleport [Auth Service](authentication.mdx) is deployed in 2 AWS availability zones, and can tolerate a single zone failure. AWS guarantees [99.99%](https://aws.amazon.com/compute/sla/) of monthly uptime. Teleport Enterprise Cloud can run in one of the following AWS regions: -- us-west-2 -- us-east-1 -- eu-central-1 -- ap-south-1 -- ap-southeast-1 -- sa-east-1 -- ap-southeast-2 +|AWS Region Code|AWS Region Name| +|---|---| +|us-west-2|US West (Oregon)| +|us-east-1|US East (N. Virginia)| +|eu-central-1|Europe (Frankfurt)| +|ap-south-1|Asia Pacific (Mumbai)| +|ap-southeast-1|Asia Pacific (Singapore)| +|sa-east-1|South America (São Paulo)| ### Proxies The Teleport [Proxy Service](proxy.mdx) is deployed to multiple AWS regions around the world for low-latency access to distributed infrastructure. -- us-west-2 -- us-east-1 -- eu-central-1 -- ap-south-1 -- ap-southeast-1 -- sa-east-1 -- ap-southeast-2 - - - Proxy Service in the ap-southeast-2 region is available upon request or when the Auth Service is deployed to the region. - +|AWS Region Code|AWS Region Name| +|---|---| +|us-west-2|US West (Oregon)| +|us-east-1|US East (N. Virginia)| +|eu-central-1|Europe (Frankfurt)| +|ap-south-1|Asia Pacific (Mumbai)| +|ap-southeast-1|Asia Pacific (Singapore)| +|sa-east-1|South America (São Paulo)| ## Releases @@ -95,6 +97,13 @@ notifications. 
## Service Level Agreement -Teleport Enterprise Cloud commits to an SLA of (=cloud.sla.monthly_percentage=) of monthly uptime, +Teleport Cloud commits to an SLA of (=cloud.sla.monthly_percentage=) of monthly uptime, a maximum of (=cloud.sla.monthly_downtime=) of downtime per month. As we continue to invest in the cloud product and infrastructure, the SLA will be increased. + +In Teleport Cloud accounts that enable multi-region high availability, the SLA +increases to 99.99%. + +For more information on Teleport Cloud SLAs, see the [Teleport Enterprise +Edition Pricing +Guide](https://goteleport.com/api/files/teleport-pricing-guide.pdf) (PDF). diff --git a/docs/pages/reference/architecture/tls-routing.mdx b/docs/pages/reference/architecture/tls-routing.mdx index 28a2ca8b0ed97..8452e11d84d9e 100644 --- a/docs/pages/reference/architecture/tls-routing.mdx +++ b/docs/pages/reference/architecture/tls-routing.mdx @@ -1,6 +1,9 @@ --- title: TLS Routing description: How Teleport implements a single-port setup with TLS routing +tags: + - conceptual + - platform-wide --- In TLS routing mode [Teleport proxy](./proxy.mdx) multiplexes all client @@ -127,7 +130,7 @@ Teleport provides a `tsh proxy db` command to launch a local database proxy: $ tsh proxy db example-db ``` -See [GUI clients](../../connect-your-client/gui-clients.mdx) guide for a usage +See [GUI clients](../../connect-your-client/third-party/gui-clients.mdx) guide for a usage example. ## Web UI, apps and desktops @@ -266,6 +269,6 @@ behind layer 7 load balancers. ## Next steps -- See [migration guide](../../admin-guides/management/operations/tls-routing.mdx) to learn how to +- See [migration guide](../../zero-trust-access/deploy-a-cluster/tls-routing.mdx) to learn how to upgrade an existing cluster to use TLS routing. - Read through TLS routing design document [RFD](https://github.com/gravitational/teleport/blob/master/rfd/0039-sni-alpn-teleport-proxy-routing.md). 
diff --git a/docs/pages/reference/architecture/trustedclusters.mdx b/docs/pages/reference/architecture/trustedclusters.mdx index 3aee77e05129b..5b86971b780ca 100644 --- a/docs/pages/reference/architecture/trustedclusters.mdx +++ b/docs/pages/reference/architecture/trustedclusters.mdx @@ -1,6 +1,11 @@ --- title: Trusted Clusters Architecture +sidebar_label: Trusted Clusters description: Deep dive into design of Teleport Trusted Clusters. +tags: + - conceptual + - zero-trust + - infrastructure-identity --- Teleport can partition compute infrastructure into multiple clusters. A cluster @@ -68,5 +73,5 @@ Read the rest of the Architecture Guides: - See how Teleport uses [Certificates](authentication.mdx) for authentication. - Reduce your surface of attack using [TLS routing](./tls-routing.mdx). -- Follow our [guide](../../admin-guides/management/admin/trustedclusters.mdx) to set up trusted clusters. +- Follow our [guide](../../zero-trust-access/deploy-a-cluster/trustedclusters.mdx) to set up trusted clusters. diff --git a/docs/pages/reference/audit-events.mdx b/docs/pages/reference/audit-events.mdx new file mode 100644 index 0000000000000..4082aef75e784 --- /dev/null +++ b/docs/pages/reference/audit-events.mdx @@ -0,0 +1,7082 @@ +--- +title: "Audit Event Reference" +description: "Provides a comprehensive list of Teleport audit events and their fields." +--- +{/* Generated file. Do not edit. */} +{/* To regenerate, navigate to docs/gen-event-reference and run pnpm gen-docs */} + +{/*cSpell:disable*/} + +{/* Formatted event examples sometimes include different capitalization than +what we standardize on in the docs*/} +{/* vale messaging.capitalization = NO */} + +Teleport components emit audit events to record activity within the cluster. 
+ +Audit event payloads have an `event` field that describes the event, which is +often an operation performed against a dynamic resource (e.g., +`access_list.create` for the creation of an Access List) or some other user +behavior, such as a local user login (`user.login`). The `code` field +contains a string matching the pattern `[A-Z0-9]{6,}` that is unique to each +audit event, such as `TAP03I` for the creation of an application resource. + +In some cases, a single `event` type describes both a success state and a +failure state; only the `code` field differs between the two. For example, +`access_list.create` describes both successful and failed Access List +creations: the success event has code `TAL001I`, while the failure event has +code `TAL001E`. For other events, like `db.session.query` and +`db.session.query.failed`, the event type itself distinguishes success from +failure. + +You can set up Teleport to export audit events to third-party services for +storage, visualization, and analysis. For more information, read [Exporting +Teleport Audit Events]( +../zero-trust-access/export-audit-events/export-audit-events.mdx). 
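The `event`/`code` pairing described above lends itself to simple log triage. As an illustrative sketch (not part of Teleport's tooling), the snippet below classifies parsed audit event payloads by the trailing letter of their code — a convention visible in the codes listed in this reference (`I` for informational/success as in `TAL001I`, `E` for errors as in `TAL001E`, `W` for warnings as in `T3007W`), not a documented guarantee:

```python
import json
import re

# Heuristic classifier for audit event codes, based on the convention
# visible in this reference: codes end in "I" (informational/success),
# "E" (error), or "W" (warning). Illustrative only -- the suffix
# convention is inferred from the examples, not a documented contract.
CODE_RE = re.compile(r"^[A-Z0-9]{5,}[IEW]$")

SUFFIX_MEANING = {"I": "success", "E": "failure", "W": "warning"}

def classify(event: dict) -> str:
    """Return a coarse outcome label for one parsed audit event payload."""
    code = event.get("code", "")
    if not CODE_RE.match(code):
        return "unknown"
    return SUFFIX_MEANING[code[-1]]

raw = '{"code": "TAL001E", "event": "access_list.create", "name": "access-list"}'
print(classify(json.loads(raw)))  # failure
```

When grouping events for dashboards, the `event` field is the more natural key; the `code` distinguishes outcome variants of the same event type.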
+ +## access_graph.crown_jewel.create + +Crown Jewel Created + +Example: + +```json +{ + "code": "CJ001I", + "event": "access_graph.crown_jewel.create", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## access_graph.crown_jewel.delete + +Crown Jewel Deleted + +Example: + +```json +{ + "code": "CJ003I", + "event": "access_graph.crown_jewel.delete", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## access_graph.crown_jewel.update + +Crown Jewel Updated + +Example: + +```json +{ + "code": "CJ002I", + "event": "access_graph.crown_jewel.update", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## access_graph.path.changed + +Access Path Changed + +Example: + +```json +{ + "code": "TAG001I", + "event": "access_graph.path.changed", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## access_list.create + +There are multiple events with the `access_list.create` type. + +### TAL001I + +Access list created + +Example: + +```json +{ + "code": "TAL001I", + "event": "access_list.create", + "time": "2023-05-08T19:21:36.144Z", + "name": "access-list", + "updated_by": "mike", + "access_list_title": "example_title" +} +``` + +### TAL001E + +Access list create failed + +Example: + +```json +{ + "code": "TAL001E", + "event": "access_list.create", + "time": "2023-05-08T19:21:36.144Z", + "name": "access-list", + "updated_by": "mike", + "access_list_title": "example_title" +} +``` + +## access_list.delete + +There are multiple events with the `access_list.delete` type. 
+ +### TAL003I + +Access list deleted + +Example: + +```json +{ + "code": "TAL003I", + "event": "access_list.delete", + "time": "2023-05-08T19:21:36.144Z", + "name": "access-list", + "updated_by": "mike", + "access_list_title": "example_title" +} +``` + +### TAL003E + +Access list delete failed + +Example: + +```json +{ + "code": "TAL003E", + "event": "access_list.delete", + "time": "2023-05-08T19:21:36.144Z", + "name": "access-list", + "updated_by": "mike", + "access_list_title": "example_title" +} +``` + +## access_list.member.add + +There are multiple events with the `access_list.member.add` type. + +### TAL005I + +Access list member added + +Example: + +```json +{ + "code": "TAL005I", + "event": "access_list.member.add", + "time": "2023-05-08T19:21:36.144Z", + "access_list_name": "access-list", + "access_list_title": "example_title", + "members": [ + { + "member_name": "user" + } + ], + "updated_by": "mike" +} +``` + +### TAL005E + +Access list member addition failure + +Example: + +```json +{ + "code": "TAL005E", + "event": "access_list.member.add", + "time": "2023-05-08T19:21:36.144Z", + "access_list_name": "access-list", + "access_list_title": "example_title", + "members": [ + { + "member_name": "user" + } + ], + "updated_by": "mike" +} +``` + +## access_list.member.delete + +There are multiple events with the `access_list.member.delete` type. 
+ +### TAL007I + +Access list member removed + +Example: + +```json +{ + "code": "TAL007I", + "event": "access_list.member.delete", + "time": "2023-05-08T19:21:36.144Z", + "access_list_name": "access-list", + "access_list_title": "example_title", + "members": [ + { + "member_name": "user" + } + ], + "updated_by": "mike" +} +``` + +### TAL007E + +Access list member removal failure + +Example: + +```json +{ + "code": "TAL007E", + "event": "access_list.member.delete", + "time": "2023-05-08T19:21:36.144Z", + "access_list_name": "access-list", + "access_list_title": "example_title", + "members": [ + { + "member_name": "carrot" + }, + { + "member_name": "apple" + }, + { + "member_name": "banana" + } + ], + "updated_by": "mike" +} +``` + +## access_list.member.delete_all_members + +There are multiple events with the `access_list.member.delete_all_members` type. + +### TAL008I + +All members removed from access list + +Example: + +```json +{ + "code": "TAL008I", + "event": "access_list.member.delete_all_members", + "time": "2023-05-08T19:21:36.144Z", + "access_list_name": "access-list", + "access_list_title": "example_title", + "updated_by": "mike" +} +``` + +### TAL008E + +Access list member delete all members failure + +Example: + +```json +{ + "code": "TAL008E", + "event": "access_list.member.delete_all_members", + "time": "2023-05-08T19:21:36.144Z", + "access_list_name": "access-list", + "access_list_title": "example_title", + "updated_by": "mike" +} +``` + +## access_list.member.update + +There are multiple events with the `access_list.member.update` type. 
+ +### TAL006I + +Access list member updated + +Example: + +```json +{ + "code": "TAL006I", + "event": "access_list.member.update", + "time": "2023-05-08T19:21:36.144Z", + "access_list_name": "access-list", + "access_list_title": "example_title", + "members": [ + { + "member_name": "user" + } + ], + "updated_by": "mike" +} +``` + +### TAL006E + +Access list member update failure + +Example: + +```json +{ + "code": "TAL006E", + "event": "access_list.member.update", + "time": "2023-05-08T19:21:36.144Z", + "access_list_name": "access-list", + "access_list_title": "example_title", + "members": [ + { + "member_name": "user" + } + ], + "updated_by": "mike" +} +``` + +## access_list.review + +There are multiple events with the `access_list.review` type. + +### TAL004I + +Access list reviewed + +Example: + +```json +{ + "code": "TAL004I", + "event": "access_list.review", + "time": "2023-05-08T19:21:36.144Z", + "name": "access-list", + "updated_by": "mike", + "access_list_title": "example_title" +} +``` + +### TAL004E + +Access list review failed + +Example: + +```json +{ + "code": "TAL004E", + "event": "access_list.review", + "time": "2023-05-08T19:21:36.144Z", + "name": "access-list", + "updated_by": "mike", + "access_list_title": "example_title" +} +``` + +## access_list.update + +There are multiple events with the `access_list.update` type. 
+ +### TAL002I + +Access list updated + +Example: + +```json +{ + "code": "TAL002I", + "event": "access_list.update", + "time": "2023-05-08T19:21:36.144Z", + "name": "access-list", + "updated_by": "mike", + "access_list_title": "example_title" +} +``` + +### TAL002E + +Access list update failed + +Example: + +```json +{ + "code": "TAL002E", + "event": "access_list.update", + "time": "2023-05-08T19:21:36.144Z", + "name": "access-list", + "updated_by": "mike", + "access_list_title": "example_title" +} +``` + +## access_request.create + +Access Request Created + +Example: + +```json +{ + "id": "66b827b2-1b0b-512b-965d-6c789388d3c9", + "code": "T5000I", + "event": "access_request.create", + "time": "2020-06-05T19:26:53Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0", + "user": "Carrie_Sandoval", + "state": "PENDING", + "roles": [ + "admin" + ] +} +``` + +## access_request.delete + +Access Request Deleted + +Example: + +```json +{ + "id": "66b827b2-1b0b-512b-965d-6c789388d3c9", + "code": "T5003I", + "event": "access_request.delete", + "time": "2020-06-05T19:26:53Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## access_request.expire + +Access Request Expired + +Example: + +```json +{ + "id": "66b827b2-1b0b-512b-965d-6c789388d3c9", + "code": "T5005I", + "event": "access_request.expire", + "time": "2020-06-05T19:26:53Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## access_request.review + +Access Request Reviewed + +Example: + +```json +{ + "code": "T5002I", + "event": "access_request.review", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## access_request.search + +Resource Access Search + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "T5004I", + "ei": 0, + "event": "access_request.search", + "namespace": "default", + "resource_type": "db_server", + "search_as_roles": [ + "admin", + "really-long-role-name-1", + "really-long-role-name-2", + 
"really-long-role-name-3", + "really-long-role-name-4", + "really-long-role-name-5", + "really-long-role-name-6", + "really-long-role-name-7", + "really-long-role-name-8", + "really-long-role-name-9" + ], + "time": "2022-06-08T15:10:35.368Z", + "uid": "b13d61-b97-475f-86ef-1fedf", + "user": "foo" +} +``` + +## access_request.update + +Access Request Updated + +Example: + +```json +{ + "id": "66b827b2-1b0b-512b-965d-6c789388d3c9", + "code": "T5001I", + "event": "access_request.update", + "time": "2020-06-05T19:26:53Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0", + "state": "APPROVED", + "updated_by": "Sam_Waters" +} +``` + +## app.create + +Application Created + +Example: + +```json +{ + "code": "TAP03I", + "ei": 0, + "event": "app.create", + "time": "2022-09-27T19:07:35.00Z", + "uid": "45cabf1e-3f19-4f83-a360-01ac0a176b67", + "aws_role_arn": "arn:aws:iam::1234567890:role/steve", + "name": "dynamic-app", + "user": "mike" +} +``` + +## app.delete + +Application Deleted + +Example: + +```json +{ + "code": "TAP05I", + "ei": 0, + "event": "app.delete", + "time": "2022-09-27T19:11:35.00Z", + "uid": "d2342a20-9697-4a5d-9658-5d473e04624a", + "aws_role_arn": "arn:aws:iam::1234567890:role/steve", + "name": "dynamic-app", + "user": "mike" +} +``` + +## app.session.chunk + +App Session Data + +Example: + +```json +{ + "code": "T2008I", + "ei": 0, + "event": "app.session.chunk", + "namespace": "default", + "server_id": "a0518380-0d53-4188-ac8b-8ddd8103e45b", + "session_chunk_id": "3a54f32d-210f-4338-abf5-133bfe19ccc0", + "sid": "6593cf87-9839-4f18-abf8-c54873aaeb4e", + "time": "2020-10-30T17:28:14.705Z", + "uid": "8ea5be3d-07b1-4308-8e0d-2d2ec57cbb20", + "user": "alice", + "app_name": "test" +} +``` + +## app.session.dynamodb.request + +App Session DynamoDB Request + +Example: + +```json +{ + "code": "T2013I", + "ei": 1, + "event": "app.session.dynamodb.request", + "app_name": "dyno1", + "app_public_addr": "dynamodb.root.com", + "app_uri": 
"https://console.aws.amazon.com/dynamodbv2/home", + "aws_host": "dynamodb.us-west-2.amazonaws.com", + "aws_region": "us-west-2", + "aws_role_arn": "arn:aws:iam::123456789012:role/GavinDynamoDBRole", + "aws_service": "dynamodb", + "body": { + "TableName": "test-table" + }, + "cluster_name": "root.com", + "method": "POST", + "path": "/", + "raw_query": "", + "session_chunk_id": "3a54f32d-210f-4338-abf5-133bfe19ccc0", + "status_code": 200, + "target": "DynamoDB_20120810.Scan", + "time": "2022-10-19T19:04:07.763Z", + "uid": "f6f38f69-46e9-4110-a773-2c88278d08ca", + "user": "alice" +} +``` + +## app.session.end + +App Session Ended + +Example: + +```json +{ + "app_name": "ponger", + "app_public_addr": "ponger.root.gravitational.io", + "app_uri": "tcp://localhost:9876", + "cluster_name": "root", + "code": "T2011I", + "ei": 0, + "event": "app.session.end", + "namespace": "default", + "server_id": "8e70002c-7a07-4513-a3fa-ac556a1d7534", + "sid": "11c328b4-5a1e-4adc-b7cb-206389e5f130", + "time": "2022-08-10T19:54:40.444Z", + "uid": "ac8c9b6b-46a0-4b0e-8d85-2204101d5615", + "user": "alice" +} +``` + +## app.session.start + +App Session Started + +Example: + +```json +{ + "addr.remote": "50.34.48.113:56902", + "code": "T2007I", + "ei": 0, + "event": "app.session.start", + "namespace": "default", + "public_addr": "dumper.test.domain.com", + "server_id": "a0518380-0d53-4188-ac8b-8ddd8103e45b", + "sid": "6593cf87-9839-4f18-abf8-c54873aaeb4e", + "time": "2020-10-30T17:28:14.381Z", + "uid": "80400ed9-644e-4a6e-ab99-b264b34d0f55", + "user": "kimlisa", + "app_name": "test" +} +``` + +## app.update + +Application Updated + +Example: + +```json +{ + "code": "TAP04I", + "ei": 0, + "event": "app.update", + "time": "2022-09-27T19:09:35.00Z", + "uid": "9909a8d6-b45f-455c-953d-ba1a62340810", + "aws_role_arn": "arn:aws:iam::1234567890:role/steve", + "name": "dynamic-app", + "user": "mike" +} +``` + +## auth + +Auth Attempt Failed + +Example: + +```json +{ + "code": "T3007W", + "error": 
"ssh: principal \"fsdfdsf\" not in the set of valid principals for given certificate: [\"root\"]", + "event": "auth", + "success": false, + "time": "2019-04-22T02:09:06Z", + "uid": "036659d6-fdf7-40a4-aa80-74d6ac73b9c0", + "user": "admin@example.com" +} +``` + +## auth_preference.update + +Cluster Authentication Preferences Updated + +Example: + +```json +{ + "code": "TCAUTH001I", + "event": "auth_preference.update", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## auto_update_agent_rollout.force_done + +Automatic Update Agent Rollout Forced Done. + +Example: + +```json +{ + "ei": 0, + "event": "auto_update_agent_rollout.force_done", + "code": "AUAR002I", + "time": "0001-01-01T00:00:00Z", + "user": "system", + "groups": [ + "prod" + ], + "success": true +} +``` + +## auto_update_agent_rollout.rollback + +Automatic Update Agent Rollout Rollback + +Example: + +```json +{ + "ei": 0, + "event": "auto_update_agent_rollout.rollback", + "code": "AUAR003I", + "time": "0001-01-01T00:00:00Z", + "user": "system", + "groups": [ + "prod" + ], + "success": true +} +``` + +## auto_update_agent_rollout.trigger + +Automatic Update Agent Rollout Triggered + +Example: + +```json +{ + "ei": 0, + "event": "auto_update_agent_rollout.trigger", + "code": "AUAR001I", + "time": "0001-01-01T00:00:00Z", + "user": "system", + "groups": [ + "dev", + "prod" + ], + "success": true +} +``` + +## auto_update_config.create + +Automatic Update Config Created + +Example: + +```json +{ + "addr.remote": "127.0.0.1:46790", + "cluster_name": "autest.cloud.gravitational.io", + "code": "AUC001I", + "ei": 0, + "event": "auto_update_config.create", + "expires": "0001-01-01T00:00:00Z", + "name": "autoupdate-config", + "success": true, + "time": "2025-03-04T15:49:31.946Z", + "uid": "6fcbf7ed-b44c-4b83-bb70-02a574564e0b", + "updated_by": "b6eae9ed-bfde-40ba-a880-948a2c598b2b.autest.cloud.gravitational.io", + "user": 
"b6eae9ed-bfde-40ba-a880-948a2c598b2b.autest.cloud.gravitational.io", + "user_kind": 1 +} +``` + +## auto_update_config.delete + +Automatic Update Config Deleted + +Example: + +```json +{ + "addr.remote": "127.0.0.1:39518", + "cluster_name": "autest.cloud.gravitational.io", + "code": "AUC003I", + "ei": 0, + "event": "auto_update_config.delete", + "expires": "0001-01-01T00:00:00Z", + "name": "autoupdate-config", + "success": true, + "time": "2025-03-04T15:49:21.869Z", + "uid": "af17ab4a-d5a2-44a3-93ce-89390b50d52f", + "updated_by": "b6eae9ed-bfde-40ba-a880-948a2c598b2b.autest.cloud.gravitational.io", + "user": "b6eae9ed-bfde-40ba-a880-948a2c598b2b.autest.cloud.gravitational.io", + "user_kind": 1 +} +``` + +## auto_update_config.update + +Automatic Update Config Updated + +Example: + +```json +{ + "addr.remote": "127.0.0.1:46798", + "cluster_name": "autest.cloud.gravitational.io", + "code": "AUC002I", + "ei": 0, + "event": "auto_update_config.update", + "expires": "0001-01-01T00:00:00Z", + "name": "autoupdate-config", + "success": true, + "time": "2025-03-04T15:49:37.633Z", + "uid": "94c580a9-6f87-4a23-9fe5-f93de4390cff", + "updated_by": "b6eae9ed-bfde-40ba-a880-948a2c598b2b.autest.cloud.gravitational.io", + "user": "b6eae9ed-bfde-40ba-a880-948a2c598b2b.autest.cloud.gravitational.io", + "user_kind": 1 +} +``` + +## auto_update_version.create + +Automatic Update Version Created + +Example: + +```json +{ + "addr.remote": "127.0.0.1:41608", + "cluster_name": "autest.cloud.gravitational.io", + "code": "AUV001I", + "ei": 0, + "event": "auto_update_version.create", + "expires": "0001-01-01T00:00:00Z", + "name": "autoupdate-version", + "success": true, + "time": "2025-03-04T15:41:24.433Z", + "uid": "3d677d2f-91d0-4b5a-966d-183a59cec888", + "updated_by": "b6eae9ed-bfde-40ba-a880-948a2c598b2b.autest.cloud.gravitational.io", + "user": "b6eae9ed-bfde-40ba-a880-948a2c598b2b.autest.cloud.gravitational.io", + "user_kind": 1 +} +``` + +## auto_update_version.delete + +Automatic 
Update Version Deleted + +Example: + +```json +{ + "addr.remote": "127.0.0.1:50316", + "cluster_name": "autest.cloud.gravitational.io", + "code": "AUV003I", + "ei": 0, + "event": "auto_update_version.delete", + "expires": "0001-01-01T00:00:00Z", + "name": "autoupdate-version", + "success": true, + "time": "2025-03-04T15:25:44.805Z", + "uid": "c4d0d165-3a17-46ac-baa7-c7f521629997", + "updated_by": "b6eae9ed-bfde-40ba-a880-948a2c598b2b.autest.cloud.gravitational.io", + "user": "b6eae9ed-bfde-40ba-a880-948a2c598b2b.autest.cloud.gravitational.io", + "user_kind": 1 +} +``` + +## auto_update_version.update + +Automatic Update Version Updated + +Example: + +```json +{ + "addr.remote": "127.0.0.1:42540", + "cluster_name": "autest.cloud.gravitational.io", + "code": "AUV002I", + "ei": 0, + "event": "auto_update_version.update", + "expires": "0001-01-01T00:00:00Z", + "name": "autoupdate-version", + "success": true, + "time": "2025-03-04T15:27:36.039Z", + "uid": "b7f9dde2-2899-46f1-bd4e-699d7b630e33", + "updated_by": "b6eae9ed-bfde-40ba-a880-948a2c598b2b.autest.cloud.gravitational.io", + "user": "b6eae9ed-bfde-40ba-a880-948a2c598b2b.autest.cloud.gravitational.io", + "user_kind": 1 +} +``` + +## aws_identity_center.resource_sync.failed + +AWS IAM Identity Center Resource Sync Failed + +Example: + +```json +{ + "code": "TAIC001E", + "event": "aws_identity_center.resource_sync.failed", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## aws_identity_center.resource_sync.success + +AWS IAM Identity Center Resource Sync Completed + +Example: + +```json +{ + "code": "TAIC001I", + "event": "aws_identity_center.resource_sync.success", + "message": "Periodic account, permission set and account assignment sync", + "success": true, + "time": "2025-02-11T15:29:30.045Z", + "total_account_assignments": 12, + "total_accounts": 4, + "total_permission_sets": 3, + "total_user_groups": 5 +} +``` + +## billing.create_card + +Credit Card Added + 
+Example: + +```json +{ + "cluster_name": "some-name", + "code": "TBL00I", + "ei": 0, + "event": "billing.create_card", + "time": "2021-03-18T16:29:05.044Z", + "uid": "5c40b62a-4ddd-466c-87a0-fa2922f743d0", + "user": "root" +} +``` + +## billing.delete_card + +Credit Card Deleted + +Example: + +```json +{ + "cluster_name": "some-name", + "code": "TBL01I", + "ei": 0, + "event": "billing.delete_card", + "time": "2021-03-18T16:28:51.219Z", + "uid": "056517e0-f7e1-4286-b437-c75f3a865af4", + "user": "root" +} +``` + +## billing.update_card + +Credit Card Updated + +Example: + +```json +{ + "cluster_name": "some-name", + "code": "TBL02I", + "ei": 0, + "event": "billing.update_card", + "time": "2021-03-18T16:28:49.067Z", + "uid": "0a06aba1-b87c-4d58-8922-e173f6b9729f", + "user": "root" +} +``` + +## billing.update_info + +Billing Information Updated + +Example: + +```json +{ + "cluster_name": "some-name", + "code": "TBL03I", + "ei": 0, + "event": "billing.update_info", + "time": "2021-03-18T16:29:15.719Z", + "uid": "95344b33-d25c-4875-896e-f21abc911547", + "user": "root" +} +``` + +## bot.create + +Bot Created + +Example: + +```json +{ + "cluster_name": "leaf.tele.ottr.sh:443", + "code": "TB001I", + "ei": 0, + "event": "bot.create", + "expires": "0001-01-01T00:00:00Z", + "name": "made-by-noah", + "time": "2023-12-08T10:53:39.798Z", + "uid": "0efbb33d-fa50-44e0-8dec-4ac89c0dd4ab", + "user": "noah" +} +``` + +## bot.delete + +Bot Deleted + +Example: + +```json +{ + "cluster_name": "leaf.tele.ottr.sh:443", + "code": "TB003I", + "ei": 0, + "event": "bot.delete", + "expires": "0001-01-01T00:00:00Z", + "name": "review2", + "time": "2023-12-08T09:52:30.579Z", + "uid": "0efbb33d-fa50-44e0-8dec-4ac89c0dd4ab", + "user": "noah" +} +``` + +## bot.join + +There are multiple events with the `bot.join` type. 
+ +### TJ001I + +Bot Joined + +Example: + +```json +{ + "attributes": { + "actor": "strideynet", + "actor_id": "16336790", + "base_ref": "", + "environment": "", + "event_name": "push", + "head_ref": "", + "job_workflow_ref": "strideynet/sandbox/.github/workflows/build.yaml@refs/heads/main", + "ref": "refs/heads/main", + "ref_type": "branch", + "repository": "strideynet/sandbox", + "repository_id": "539963344", + "repository_owner": "strideynet", + "repository_owner_id": "16336790", + "repository_visibility": "private", + "run_attempt": "6", + "run_id": "3547291254", + "run_number": "73", + "sha": "758c69462083ad67f0714112aab31fdeb1ba3a59", + "sub": "repo:strideynet/sandbox:ref:refs/heads/main", + "workflow": "Demo" + }, + "bot_name": "github-demo", + "cluster_name": "root.tele.ottr.sh", + "code": "TJ001I", + "ei": 0, + "event": "bot.join", + "method": "github", + "success": true, + "time": "2022-12-05T17:11:03.268Z", + "token_name": "github-bot", + "uid": "15a82555-b5aa-4eb8-820e-551f991bf902" +} +``` + +### TJ001E + +Bot Join Failed + +Example: + +```json +{ + "attributes": { + "actor": "strideynet", + "actor_id": "16336790", + "base_ref": "", + "environment": "", + "event_name": "push", + "head_ref": "", + "job_workflow_ref": "strideynet/sandbox/.github/workflows/build.yaml@refs/heads/main", + "ref": "refs/heads/main", + "ref_type": "branch", + "repository": "strideynet/sandbox", + "repository_id": "539963344", + "repository_owner": "strideynet", + "repository_owner_id": "16336790", + "repository_visibility": "private", + "run_attempt": "3", + "run_id": "8604159359", + "run_number": "100", + "sha": "0c9c5361d15154caf1c151dc1f430ea3552c9b93", + "sub": "repo:strideynet/sandbox:ref:refs/heads/main", + "workflow": "Demo" + }, + "bot_name": "unknown", + "cluster_name": "leaf.tele.ottr.sh", + "code": "TJ001E", + "ei": 0, + "error": "id token claims did not match any allow rules", + "event": "bot.join", + "method": "unknown", + "success": false, + "time": 
"2024-04-08T17:33:48.877Z", + "uid": "2bc5e2cb-5ba1-47d7-a7ae-381cf323ae7f" +} +``` + +## bot.update + +Bot Updated + +Example: + +```json +{ + "code": "TB002I", + "event": "bot.update", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## cert.create + +Certificate Issued + +Example: + +```json +{ + "cert_type": "user", + "code": "TC000I", + "event": "cert.create", + "identity": { + "user": "alice" + }, + "time": "2022-02-04T19:43:23.529Z" +} +``` + +## cir.update + +Client IP Restrictions update + +Example: + +```json +{ + "client_ip_restrictions": [ + "3.0.0.0/8" + ], + "cluster_name": "localhost", + "code": "CIR001I", + "ei": 0, + "event": "cir.update", + "expires": "0001-01-01T00:00:00Z", + "name": "client_ip_restriction", + "success": true, + "time": "2025-10-21T20:45:14.775Z", + "updated_by": "admin", + "user": "admin", + "user_cluster_name": "localhost", + "user_kind": 1, + "user_roles": [ + "editor" + ] +} +``` + +## client.disconnect + +Client Disconnected + +Example: + +```json +{ + "code": "T3006I", + "event": "client.disconnect", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## cluster_networking_config.update + +Cluster Networking Configuration Updated + +Example: + +```json +{ + "code": "TCNET002I", + "event": "cluster_networking_config.update", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## contact.create + +Contact Created + +Example: + +```json +{ + "code": "TCTC001I", + "event": "contact.create", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## contact.delete + +Contact Deleted + +Example: + +```json +{ + "code": "TCTC002I", + "event": "contact.delete", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## db.create + +Database Created + +Example: + +```json +{ + "cluster_name": "root", + "code": "TDB03I", + "db_labels": { 
+ "env": "local", + "teleport.dev/origin": "dynamic" + }, + "db_protocol": "postgres", + "db_uri": "localhost:5432", + "ei": 0, + "event": "db.create", + "expires": "0001-01-01T00:00:00Z", + "name": "postgres-local", + "time": "2021-10-08T15:42:15.39Z", + "uid": "9d37514f-aef5-426f-9fda-31fd35d070f5", + "user": "05ff66c9-a948-42f4-af0e-a1b6ba62561e.root" +} +``` + +## db.delete + +Database Deleted + +Example: + +```json +{ + "cluster_name": "root", + "code": "TDB05I", + "ei": 0, + "event": "db.delete", + "expires": "0001-01-01T00:00:00Z", + "name": "postgres-local", + "time": "2021-10-08T15:42:36.005Z", + "uid": "74f5e6b9-50c4-4195-bb26-d615641255bc", + "user": "05ff66c9-a948-42f4-af0e-a1b6ba62561e.root" +} +``` + +## db.session.cassandra.batch + +Cassandra Batch + +Example: + +```json +{ + "ei": 0, + "event": "db.session.cassandra.batch", + "code": "TCA01I", + "time": "2022-06-02T08:46:33.825Z", + "cluster_name": "im-a-cluster-name", + "user": "alice", + "sid": "a724c7e8-8e00-45a6-afac-82023d0f86b6", + "db_service": "cassandra", + "db_protocol": "cassandra", + "db_uri": "localhost:65054", + "db_user": "cassandra", + "consistency": "ConsistencyLevel QUORUM [0x0004]", + "batch_type": "BatchType LOGGED [0x00]", + "children": [ + { + "query": "INSERT INTO batch_table (id) VALUES 1" + }, + { + "query": "INSERT INTO batch_table (id) VALUES 2" + } + ] +} +``` + +## db.session.cassandra.execute + +Cassandra Execute + +Example: + +```json +{ + "ei": 0, + "event": "db.session.cassandra.execute", + "code": "TCA03I", + "time": "2022-06-02T08:46:33.825Z", + "cluster_name": "im-a-cluster-name", + "user": "alice", + "sid": "2126ee07-cfe1-4213-8032-70b3e6e1ac79", + "db_service": "cassandra", + "db_protocol": "cassandra", + "db_uri": "localhost:65054", + "db_user": "cassandra", + "query_id": "d34e638934721c3bcd69933f992a00cb" +} +``` + +## db.session.cassandra.prepare + +Cassandra Prepare Event + +Example: + +```json +{ + "ei": 0, + "event": "db.session.cassandra.prepare", + 
"code": "TCA02I", + "time": "2022-06-02T08:46:33.825Z", + "cluster_name": "im-a-cluster-name", + "user": "alice", + "sid": "2126ee07-cfe1-4213-8032-70b3e6e1ac79", + "db_service": "cassandra", + "db_protocol": "cassandra", + "db_uri": "localhost:65054", + "db_user": "cassandra", + "query": "SELECT * FROM system_schema.keyspaces" +} +``` + +## db.session.cassandra.register + +Cassandra Register + +Example: + +```json +{ + "ei": 0, + "event": "db.session.cassandra.register", + "code": "TCA04I", + "time": "2022-06-02T08:46:33.825Z", + "cluster_name": "im-a-cluster-name", + "user": "alice", + "sid": "2126ee07-cfe1-4213-8032-70b3e6e1ac79", + "db_service": "cassandra", + "db_protocol": "cassandra", + "db_uri": "localhost:65054", + "db_user": "cassandra", + "event_types": [ + "TOPOLOGY_CHANGE", + "STATUS_CHANGE", + "SCHEMA_CHANGE" + ] +} +``` + +## db.session.dynamodb.request + +There are multiple events with the `db.session.dynamodb.request` type. + +### TDY01I + +DynamoDB Request + +Example: + +```json +{ + "cluster_name": "root.com", + "code": "TDY01I", + "event": "db.session.dynamodb.request", + "db_name": "", + "db_protocol": "dynamodb", + "db_service": "ddb1", + "db_user": "DynamoDBRole", + "ei": 1, + "uri": "dynamodb.us-west-2.amazonaws.com", + "body": { + "TableName": "test-table" + }, + "method": "POST", + "path": "", + "raw_query": "", + "status_code": 200, + "target": "DynamoDB_20120810.Scan", + "time": "2022-12-23T19:14:07.763Z", + "uid": "12345678-46e9-4110-a773-2c88278d08ca", + "user": "alice@example.com" +} +``` + +### TDY01E + +DynamoDB Request Failed + +Example: + +```json +{ + "cluster_name": "root.com", + "code": "TDY01E", + "event": "db.session.dynamodb.request", + "db_name": "", + "db_protocol": "dynamodb", + "db_service": "ddb1", + "db_user": "DynamoDBRole", + "ei": 1, + "uri": "dynamodb.us-west-2.amazonaws.com", + "body": { + "TableName": "test-table" + }, + "method": "POST", + "path": "", + "raw_query": "", + "status_code": 0, + "target": 
"DynamoDB_20120810.Scan", + "time": "2022-12-23T19:04:07.763Z", + "uid": "12345678-46e9-4110-a773-2c88278d08ca", + "user": "alice@example.com" +} +``` + +## db.session.elasticsearch.request + +There are multiple events with the `db.session.elasticsearch.request` type. + +### TES00I + +Elasticsearch Request + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TES00I", + "body": null, + "category": 0, + "db_protocol": "elasticsearch", + "db_service": "myelastic", + "db_uri": "localhost:9201", + "db_user": "elasticuser", + "ei": 101, + "event": "db.session.elasticsearch.request", + "headers": { + "Accept": [ + "*/*" + ], + "User-Agent": [ + "curl/7.79.1" + ] + }, + "method": "GET", + "path": "/", + "query": "", + "raw_query": "", + "sid": "b739c817-bc11-4eaa-b256-c6646d7fcc21", + "target": "", + "time": "2022-09-27T11:43:58.433Z", + "uid": "730a8de0-79a9-486f-b9c6-3820c3a6977c", + "user": "alice" +} +``` + +### TES00E + +Elasticsearch Request Failed + +Example: + +```json +{ + "code": "TES00E", + "event": "db.session.elasticsearch.request", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## db.session.end + +Database Session Ended + +Example: + +```json +{ + "cluster_name": "root", + "code": "TDB01I", + "db_name": "", + "db_protocol": "mongodb", + "db_service": "mongo-primary", + "db_uri": "mongodb://mongo-1:27017,mongo-2:27018/?replicaSet=rs0", + "db_user": "alice", + "ei": 16, + "event": "db.session.end", + "sid": "13c04d4b-2e94-4106-a3a1-5ab8aae10465", + "time": "2021-07-14T07:06:25.608Z", + "uid": "0a2387cd-3fa2-4424-9c14-e33af17e4ab1", + "user": "alice@example.com" +} +``` + +## db.session.malformed_packet + +Database Malformed Packet + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TDB06I", + "db_name": "master", + "db_protocol": "sqlserver", + "db_service": "sqlserver02", + "db_uri": "localhost:1433", + "db_user": "sqlserver", + "ei": 50, + "event": 
"db.session.malformed_packet", + "payload": "AwEAkAAAAgByAGEAbQBfADEAIABuAHYAYQByAGMAaABhAHIAKAA0ADAAMAAwACkAC0AAXwBtAHMAcABhAHIAYQBtAF8AMAAA50AfCQTQADQWAHMAcAB0AF8AbQBvAG4AaQB0AG8AcgALQABfAG0AcwBwAGEAcgBhAG0AXwAxAADnQB8JBNAANAYAZABiAG8A", + "sid": "3ed38c42-eef0-419b-b893-f2f10990f117", + "time": "2022-06-02T08:46:33.825Z", + "uid": "503e310d-8d88-4bea-bbbb-a1b35456a03a", + "user": "alice" +} +``` + +## db.session.mysql.create_db + +MySQL Create Database + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TMY08I", + "db_name": "test", + "db_protocol": "mysql", + "db_service": "self-hosted-mysql", + "db_uri": "localhost:3306", + "db_user": "alice", + "event": "db.session.mysql.create_db", + "schema_name": "another_database", + "time": "2022-04-13T20:00:09.000Z", + "user": "alice@example.com" +} +``` + +## db.session.mysql.debug + +MySQL Debug + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TMY12I", + "db_name": "test", + "db_protocol": "mysql", + "db_service": "self-hosted-mysql", + "db_uri": "localhost:3306", + "db_user": "alice", + "event": "db.session.mysql.debug", + "time": "2022-04-13T20:00:05.000Z", + "user": "alice@example.com" +} +``` + +## db.session.mysql.drop_db + +MySQL Drop Database + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TMY09I", + "db_name": "test", + "db_protocol": "mysql", + "db_service": "self-hosted-mysql", + "db_uri": "localhost:3306", + "db_user": "alice", + "event": "db.session.mysql.drop_db", + "schema_name": "another_database", + "time": "2022-04-13T20:00:08.000Z", + "user": "alice@example.com" +} +``` + +## db.session.mysql.init_db + +MySQL Change Database + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TMY07I", + "db_name": "test", + "db_protocol": "mysql", + "db_service": "self-hosted-mysql", + "db_uri": "localhost:3306", + "db_user": "alice", + "event": "db.session.mysql.init_db", + "schema_name": "another_database", + 
"time": "2022-04-13T20:00:10.000Z", + "user": "alice@example.com" +} +``` + +## db.session.mysql.process_kill + +MySQL Kill Process + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TMY11I", + "db_name": "test", + "db_protocol": "mysql", + "db_service": "self-hosted-mysql", + "db_uri": "localhost:3306", + "db_user": "alice", + "event": "db.session.mysql.process_kill", + "process_id": 60, + "time": "2022-04-13T20:00:06.000Z", + "user": "alice@example.com" +} +``` + +## db.session.mysql.refresh + +MySQL Refresh + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TMY13I", + "db_name": "test", + "db_protocol": "mysql", + "db_service": "self-hosted-mysql", + "db_uri": "localhost:3306", + "db_user": "alice", + "event": "db.session.mysql.refresh", + "subcommand": "REFRESH_THREADS", + "time": "2022-04-13T20:00:04.000Z", + "user": "alice@example.com" +} +``` + +## db.session.mysql.shut_down + +MySQL Shut Down + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TMY10I", + "db_name": "test", + "db_protocol": "mysql", + "db_service": "self-hosted-mysql", + "db_uri": "localhost:3306", + "db_user": "alice", + "event": "db.session.mysql.shut_down", + "time": "2022-04-13T20:00:07.000Z", + "user": "alice@example.com" +} +``` + +## db.session.mysql.statements.bulk_execute + +MySQL Statement Bulk Execute + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TMY06I", + "db_name": "test", + "db_protocol": "mysql", + "db_service": "self-hosted-mysql", + "db_uri": "localhost:3306", + "db_user": "alice", + "ei": 0, + "event": "db.session.mysql.statements.bulk_execute", + "parameters": null, + "statement_id": 1, + "time": "2022-02-10T20:57:53.000Z", + "user": "alice@example.com" +} +``` + +## db.session.mysql.statements.close + +MySQL Statement Close + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TMY03I", + "db_name": "test", + "db_protocol": "mysql", + "db_service": 
"self-hosted-mysql", + "db_uri": "localhost:3306", + "db_user": "alice", + "ei": 0, + "event": "db.session.mysql.statements.close", + "statement_id": 1, + "time": "2022-02-10T20:57:56.000Z", + "user": "alice@example.com" +} +``` + +## db.session.mysql.statements.execute + +MySQL Statement Execute + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TMY01I", + "db_name": "test", + "db_protocol": "mysql", + "db_service": "self-hosted-mysql", + "db_uri": "localhost:3306", + "db_user": "alice", + "ei": 0, + "event": "db.session.mysql.statements.execute", + "parameters": null, + "statement_id": 1, + "time": "2022-02-10T20:57:54.000Z", + "user": "alice@example.com" +} +``` + +## db.session.mysql.statements.fetch + +MySQL Statement Fetch + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TMY05I", + "db_name": "test", + "db_protocol": "mysql", + "db_service": "self-hosted-mysql", + "db_uri": "localhost:3306", + "db_user": "alice", + "ei": 0, + "event": "db.session.mysql.statements.fetch", + "rows_count": 5, + "statement_id": 1, + "time": "2022-02-10T20:57:55.000Z", + "uid": "0a2bd129-7c2f-4e68-9c84-a17dc4415444", + "user": "alice@example.com" +} +``` + +## db.session.mysql.statements.prepare + +MySQL Statement Prepare + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TMY00I", + "db_name": "test", + "db_protocol": "mysql", + "db_service": "self-hosted-mysql", + "db_uri": "localhost:3306", + "db_user": "alice", + "ei": 0, + "event": "db.session.mysql.statements.prepare", + "query": "UPDATE `test`.`user` SET `age` = '7' WHERE (`name` = 'alice')", + "time": "2022-02-10T20:57:50.000Z", + "user": "alice@example.com" +} +``` + +## db.session.mysql.statements.reset + +MySQL Statement Reset + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TMY04I", + "db_name": "test", + "db_protocol": "mysql", + "db_service": "self-hosted-mysql", + "db_uri": "localhost:3306", + "db_user": "alice", + 
"ei": 0, + "event": "db.session.mysql.statements.reset", + "statement_id": 1, + "time": "2022-02-10T20:57:52.000Z", + "uid": "0a2bd129-7c2f-4e68-9c84-a17dc4415444", + "user": "alice@example.com" +} +``` + +## db.session.mysql.statements.send_long_data + +MySQL Statement Send Long Data + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TMY02I", + "db_name": "test", + "db_protocol": "mysql", + "db_service": "self-hosted-mysql", + "db_uri": "localhost:3306", + "db_user": "alice", + "ei": 0, + "event": "db.session.mysql.statements.send_long_data", + "statement_id": 1, + "parameter_id": 2, + "data_size": 32, + "time": "2022-02-10T20:57:51.000Z", + "user": "alice@example.com" +} +``` + +## db.session.opensearch.request + +There are multiple events with the `db.session.opensearch.request` type. + +### TOS00I + +OpenSearch Request + +Example: + +```json +{ + "category": 2, + "cluster_name": "im-a-cluster-name", + "code": "TOS00I", + "db_protocol": "opensearch", + "db_service": "opensearch-aws", + "db_uri": "opensearch-aws-aaa111.eu-central-1.es.amazonaws.com:443", + "db_user": "arn:aws:iam::1234567890:role/teleport-db-role", + "ei": 1, + "event": "db.session.opensearch.request", + "headers": { + "Accept-Encoding": [ + "gzip" + ], + "Content-Type": [ + "application/json" + ], + "User-Agent": [ + "Go-http-client/1.1" + ] + }, + "method": "GET", + "path": "/_count", + "query": "", + "raw_query": "", + "sid": "370e5d86-84a6-4995-8476-dbea80f9eacf", + "status_code": 200, + "target": "", + "time": "2023-03-11T11:08:29.954Z", + "uid": "d15f795c-1f63-4076-bdd4-ffe88cde716e", + "user": "alice@example.com" +} +``` + +### TOS00E + +OpenSearch Request Failed + +Example: + +```json +{ + "category": 2, + "cluster_name": "im-a-cluster-name", + "code": "TOS00E", + "db_protocol": "opensearch", + "db_service": "opensearch-aws", + "db_uri": "opensearch-aws-aaa111.eu-central-1.es.amazonaws.com:443", + "db_user": "arn:aws:iam::1234567890:role/does-not-exist", + "ei": 
1, + "event": "db.session.opensearch.request", + "headers": { + "Accept-Encoding": [ + "gzip" + ], + "Content-Type": [ + "application/json" + ], + "User-Agent": [ + "Go-http-client/1.1" + ] + }, + "method": "GET", + "path": "/_count", + "query": "", + "raw_query": "", + "sid": "2d9a43c1-14ab-40fa-88db-195312f3401c", + "status_code": 0, + "target": "", + "time": "2023-03-11T11:08:29.954Z", + "uid": "01ad9a74-c8d6-497f-83db-e1c0be83d8da", + "user": "alice@example.com" +} +``` + +## db.session.permissions.update + +Database User Permissions Updated + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TDB07I", + "db_name": "master", + "db_protocol": "postgres", + "db_service": "postgres-local", + "db_uri": "localhost:1433", + "db_user": "alice", + "ei": 50, + "event": "db.session.permissions.update", + "sid": "3ed38c42-eef0-419b-b893-f2f10990f117", + "time": "2022-06-02T08:46:33.825Z", + "uid": "503e310d-8d88-4bea-bbbb-a1b35456a03a", + "user": "alice", + "permission_summary": [ + { + "counts": { + "table": 1, + "view": 2 + }, + "permission": "INSERT" + }, + { + "counts": { + "table": 2, + "view": 4 + }, + "permission": "SELECT" + }, + { + "counts": { + "table": 3 + }, + "permission": "UPDATE" + } + ] +} +``` + +## db.session.postgres.function + +PostgreSQL Function Call + +Example: + +```json +{ + "cluster_name": "root", + "code": "TPG04I", + "db_name": "test", + "db_protocol": "postgres", + "db_service": "local", + "db_uri": "localhost:5432", + "db_user": "postgres", + "ei": 23, + "event": "db.session.postgres.function", + "sid": "5e0c50cc-4ee7-4110-8d6e-735bf1f06f1f", + "function_oid": "123", + "function_args": [ + "qweqweqwe" + ], + "time": "2021-12-16T00:40:37.073Z", + "uid": "295c88fc-4725-4de0-9049-64040fc69ec7", + "user": "alice" +} +``` + +## db.session.postgres.statements.bind + +PostgreSQL Statement Bind + +Example: + +```json +{ + "cluster_name": "root", + "code": "TPG01I", + "db_name": "test", + "db_protocol": "postgres", + 
"db_service": "local", + "db_uri": "localhost:5432", + "db_user": "postgres", + "ei": 20, + "event": "db.session.postgres.statements.bind", + "parameters": [ + "qweqweqwe" + ], + "portal_name": "", + "sid": "5e0c50cc-4ee7-4110-8d6e-735bf1f06f1f", + "statement_name": "test-ps", + "time": "2021-12-16T00:40:37.071Z", + "uid": "d5bed7e5-6a15-441b-b8ee-a2abd73f3136", + "user": "alice" +} +``` + +## db.session.postgres.statements.close + +PostgreSQL Statement Close + +Example: + +```json +{ + "cluster_name": "root", + "code": "TPG03I", + "db_name": "test", + "db_protocol": "postgres", + "db_service": "local", + "db_uri": "localhost:5432", + "db_user": "postgres", + "ei": 22, + "event": "db.session.postgres.statements.close", + "portal_name": "", + "sid": "5e0c50cc-4ee7-4110-8d6e-735bf1f06f1f", + "statement_name": "test-ps", + "time": "2021-12-16T00:40:37.073Z", + "uid": "295c88fc-4725-4de0-9049-64040fc69ec7", + "user": "alice" +} +``` + +## db.session.postgres.statements.execute + +PostgreSQL Statement Execute + +Example: + +```json +{ + "cluster_name": "root", + "code": "TPG02I", + "db_name": "test", + "db_protocol": "postgres", + "db_service": "local", + "db_uri": "localhost:5432", + "db_user": "postgres", + "ei": 21, + "event": "db.session.postgres.statements.execute", + "portal_name": "", + "sid": "5e0c50cc-4ee7-4110-8d6e-735bf1f06f1f", + "time": "2021-12-16T00:40:37.071Z", + "uid": "a0f045a2-45a4-4a4d-b14a-5f986c1818ff", + "user": "alice" +} +``` + +## db.session.postgres.statements.parse + +PostgreSQL Statement Parse + +Example: + +```json +{ + "cluster_name": "root", + "code": "TPG00I", + "db_name": "test", + "db_protocol": "postgres", + "db_service": "local", + "db_uri": "localhost:5432", + "db_user": "postgres", + "ei": 19, + "event": "db.session.postgres.statements.parse", + "query": "select id from test where id = $1::varchar", + "sid": "5e0c50cc-4ee7-4110-8d6e-735bf1f06f1f", + "statement_name": "test-ps", + "time": "2021-12-16T00:40:37.069Z", + "uid": 
"06781ebf-6c5b-463b-ad32-e7395afd4a59", + "user": "alice" +} +``` + +## db.session.query + +Database Query + +Example: + +```json +{ + "cluster_name": "root", + "code": "TDB02I", + "db_name": "test", + "db_protocol": "mongodb", + "db_query": "{\"find\": \"test\",\"filter\": {},\"lsid\": {\"id\": {\"$binary\":{\"base64\":\"2KMk23/TTCKUtiAVU0fbgg==\",\"subType\":\"04\"}}},\"$clusterTime\": {\"clusterTime\": {\"$timestamp\":{\"t\":\"1626246087\",\"i\":\"1\"}},\"signature\": {\"hash\": {\"$binary\":{\"base64\":\"8X7BlnDAUxKgUo5lpI3XoKoNF54=\",\"subType\":\"00\"}},\"keyId\": {\"$numberLong\":\"6969719000615878659\"}}},\"$db\": \"test\"}", + "db_service": "mongo-primary", + "db_uri": "mongodb://mongo-1:27017,mongo-2:27018/?replicaSet=rs0", + "db_user": "alice", + "ei": 11, + "event": "db.session.query", + "sid": "13c04d4b-2e94-4106-a3a1-5ab8aae10465", + "success": true, + "time": "2021-07-14T07:03:49.783Z", + "uid": "c4550623-0538-452d-912b-1242715666c4", + "user": "alice@example.com" +} +``` + +## db.session.query.failed + +Database Query Failed + +Example: + +```json +{ + "cluster_name": "root", + "code": "TDB02W", + "db_name": "houston", + "db_protocol": "mongodb", + "db_query": "{\"find\": \"test\",\"filter\": {},\"lsid\": {\"id\": {\"$binary\":{\"base64\":\"2KMk23/TTCKUtiAVU0fbgg==\",\"subType\":\"04\"}}},\"$clusterTime\": {\"clusterTime\": {\"$timestamp\":{\"t\":\"1626246227\",\"i\":\"1\"}},\"signature\": {\"hash\": {\"$binary\":{\"base64\":\"zBJKAl6VcjwQrr05N0O4qrQ92PY=\",\"subType\":\"00\"}},\"keyId\": {\"$numberLong\":\"6969719000615878659\"}}},\"$db\": \"houston\"}", + "db_service": "mongo-primary", + "db_uri": "mongodb://mongo-1:27017,mongo-2:27018/?replicaSet=rs0", + "db_user": "alice", + "ei": 13, + "error": "access to database denied", + "event": "db.session.query.failed", + "message": "access to database denied", + "sid": "13c04d4b-2e94-4106-a3a1-5ab8aae10465", + "success": false, + "time": "2021-07-14T07:05:22.32Z", + "uid": 
"21796ef9-a5dc-4595-a833-b893ac620488", + "user": "alice@example.com" +} +``` + +## db.session.spanner.rpc + +There are multiple events with the `db.session.spanner.rpc` type. + +### TSPN001W + +Spanner RPC Denied + +Example: + +```json +{ + "args": { + "database": "projects/project-id/instances/instance-id/databases/prod-db", + "session_count": 100, + "session_template": {} + }, + "cluster_name": "root", + "code": "TSPN001W", + "db_name": "prod-db", + "db_origin": "dynamic", + "db_protocol": "spanner", + "db_service": "teleport-spanner", + "db_type": "spanner", + "db_uri": "spanner.googleapis.com:443", + "db_user": "some-user", + "error": "access to db denied. User does not have permissions. Confirm database user and name.", + "event": "db.session.spanner.rpc", + "message": "access to db denied. User does not have permissions. Confirm database user and name.", + "procedure": "BatchCreateSessions", + "sid": "04364984-a6d0-4e2c-93c7-5c44e2359502", + "success": false, + "time": "2024-03-13T01:25:48.568Z", + "uid": "1de57538-2eea-438b-a52d-3098f8093b28", + "user": "alice@example.com" +} +``` + +### TSPN001I + +Spanner RPC + +Example: + +```json +{ + "args": { + "query_options": {}, + "request_options": {}, + "seqno": 1, + "session": "projects/project-id/instances/instance-id/databases/dev-db/sessions/ABCDEF1234567890Aye8_QwuELYD9rxa74YTWc-lu9LNuDDADbi4EOGm2C2j0ixe", + "sql": "select * from TestTable", + "transaction": { + "Selector": { + "SingleUse": { + "Mode": { + "ReadOnly": { + "TimestampBound": { + "Strong": true + }, + "return_read_timestamp": true + } + } + } + } + } + }, + "cluster_name": "root", + "code": "TSPN001I", + "db_name": "dev-db", + "db_origin": "dynamic", + "db_protocol": "spanner", + "db_service": "teleport-spanner", + "db_type": "spanner", + "db_uri": "spanner.googleapis.com:443", + "db_user": "some-user", + "event": "db.session.spanner.rpc", + "procedure": "ExecuteStreamingSql", + "sid": "406b9883-0e16-42f2-9d0b-b3bd956f9cd4", + "success": true, 
+ "time": "2024-03-13T00:02:44.739Z", + "uid": "e0625e79-9399-4ea3-aa8b-dba1eb98658d", + "user": "alice@example.com" +} +``` + +## db.session.sqlserver.rpc_request + +SQLServer RPC Request + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TMS00I", + "db_name": "master", + "db_protocol": "sqlserver", + "db_service": "sqlserver02", + "db_uri": "localhost:1433", + "db_user": "sqlserver", + "ei": 7, + "event": "db.session.sqlserver.rpc_request", + "parameters": [ + "SELECT\ndtb.collation_name AS [Collation],\ndtb.name AS [DatabaseName2]\nFROM\nmaster.sys.databases AS dtb\nWHERE\n(dtb.name=@_msparam_0)" + ], + "proc_name": "Sp_ExecuteSql", + "sid": "6b37d89b-0d9c-4681-976b-ba12588a1bcd", + "time": "2022-06-02T08:29:17.693Z", + "uid": "a29dfad1-5a71-4c48-b4e0-10d1d857a23c", + "user": "alice" +} +``` + +## db.session.start + +There are multiple events with the `db.session.start` type. + +### TDB00I + +Database Session Started + +Example: + +```json +{ + "cluster_name": "root", + "code": "TDB00I", + "db_name": "", + "db_protocol": "mongodb", + "db_service": "mongo-primary", + "db_uri": "mongodb://mongo-1:27017,mongo-2:27018/?replicaSet=rs0", + "db_user": "alice", + "ei": 0, + "event": "db.session.start", + "namespace": "default", + "server_id": "05ff66c9-a948-42f4-af0e-a1b6ba62561e", + "sid": "13c04d4b-2e94-4106-a3a1-5ab8aae10465", + "success": true, + "time": "2021-07-14T07:01:31.958Z", + "uid": "4a613b84-7315-41f4-9219-1afd6b08d4da", + "user": "alice@example.com" +} +``` + +### TDB00W + +Database Session Denied + +Example: + +```json +{ + "code": "TDB00W", + "event": "db.session.start", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## db.session.user.create + +There are multiple events with the `db.session.user.create` type. 
+ +### TDB08I + +Database User Created + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TDB08I", + "db_name": "master", + "db_protocol": "postgres", + "db_service": "postgres-local", + "db_uri": "localhost:1433", + "db_user": "alice", + "ei": 0, + "event": "db.session.user.create", + "private_key_policy": "none", + "roles": null, + "sid": "47f20b91-f5c3-4eef-85e1-9509252238e7", + "success": true, + "time": "2022-06-02T08:46:33.825Z", + "uid": "95e74359-e5a1-4c76-970e-c522b550dbb9", + "user": "alice", + "user_kind": 1, + "username": "alice" +} +``` + +### TDB08W + +Database User Creation Failed + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TDB08W", + "db_name": "master", + "db_protocol": "postgres", + "db_service": "postgres-local", + "db_uri": "localhost:1433", + "db_user": "alice", + "ei": 0, + "error": "dummy error", + "event": "db.session.user.create", + "message": "dummy error", + "private_key_policy": "none", + "roles": null, + "sid": "3fd14bfe-be21-40a4-b1da-744fa14f5108", + "success": false, + "time": "2022-06-02T08:46:33.825Z", + "uid": "4a4a6a70-c81d-4326-8565-3f7bd23b874f", + "user": "ben", + "user_kind": 1, + "username": "ben" +} +``` + +## db.session.user.deactivate + +There are multiple events with the `db.session.user.deactivate` type. 
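Events in this family report the outcome of automatic database user provisioning, and the success and failure variants can be separated on the `success` field. A minimal sketch (not official Teleport tooling) of tallying outcomes, assuming the audit log has been exported as newline-delimited JSON with one event per line:

```python
import json

# A sketch: count deactivation outcomes from an audit log exported as
# newline-delimited JSON (one event object per line).
def deactivation_outcomes(lines):
    ok, failed = 0, 0
    for line in lines:
        event = json.loads(line)
        if event.get("event") != "db.session.user.deactivate":
            continue
        # Failure variants carry "success": false plus "error"/"message".
        if event.get("success"):
            ok += 1
        else:
            failed += 1
    return ok, failed
```

The same pattern applies to any event type in this reference that has paired `I` (informational) and `W` (warning) codes.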
+ +### TDB09I + +Database User Deactivated + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TDB09I", + "db_name": "master", + "db_protocol": "postgres", + "db_service": "postgres-local", + "db_uri": "localhost:1433", + "db_user": "alice", + "delete": false, + "ei": 5, + "event": "db.session.user.deactivate", + "private_key_policy": "none", + "sid": "c362e10b-dbc4-44e5-b90f-0bee5dd0c623", + "success": true, + "time": "2022-06-02T08:46:33.825Z", + "uid": "0ab70491-4d33-4bc5-be58-27922a647f50", + "user": "ben", + "user_kind": 1, + "username": "ben" +} +``` + +### TDB09W + +Database User Deactivate Failure + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TDB09W", + "db_name": "master", + "db_protocol": "postgres", + "db_service": "postgres-local", + "db_uri": "localhost:1433", + "db_user": "alice", + "delete": false, + "ei": 4, + "error": "dummy error", + "event": "db.session.user.deactivate", + "message": "dummy error", + "private_key_policy": "none", + "sid": "3bb429a1-be03-4c03-827c-98ff846dacf7", + "success": false, + "time": "2022-06-02T08:46:33.825Z", + "uid": "c6569248-ac06-417d-b5b6-e0bf94eccb1a", + "user": "ben", + "user_kind": 1, + "username": "ben" +} +``` + +## db.update + +Database Updated + +Example: + +```json +{ + "cluster_name": "root", + "code": "TDB04I", + "db_labels": { + "env": "local", + "teleport.dev/origin": "dynamic" + }, + "db_protocol": "postgres", + "db_uri": "localhost:5432", + "ei": 0, + "event": "db.update", + "expires": "0001-01-01T00:00:00Z", + "name": "postgres-local", + "time": "2021-10-08T15:42:24.581Z", + "uid": "fe631a5a-6418-49d6-99e7-5280654663ec", + "user": "05ff66c9-a948-42f4-af0e-a1b6ba62561e.root" +} +``` + +## desktop.clipboard.receive + +Clipboard Data Received + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TDP03I", + "desktop_addr": "100.104.52.89:3389", + "desktop_name": "desktop-name", + "ei": 0, + "event": "desktop.clipboard.receive", + 
"sid": "b7f734d8-bdc2-4996-8959-0b42a11708e7", + "time": "2021-10-18T23:39:13.105Z", + "uid": "84d408d1-3314-4a30-b7b7-35970633c9de", + "user": "joe", + "length": 512 +} +``` + +## desktop.clipboard.send + +Clipboard Data Sent + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TDP02I", + "desktop_addr": "100.104.52.89:3389", + "desktop_name": "desktop-name", + "ei": 0, + "event": "desktop.clipboard.send", + "sid": "b7f734d8-bdc2-4996-8959-0b42a11708e7", + "time": "2021-10-18T23:39:13.105Z", + "uid": "84d408d1-3314-4a30-b7b7-35970633c9de", + "user": "joe", + "length": 512 +} +``` + +## desktop.directory.read + +There are multiple events with the `desktop.directory.read` type. + +### TDP05I + +Directory Sharing Read + +Example: + +```json +{ + "addr.remote": "ec2-54-162-177-255.compute-1.amazonaws.com:3389", + "cluster_name": "im-a-cluster-name", + "code": "TDP05I", + "desktop_addr": "ec2-54-162-177-255.compute-1.amazonaws.com:3389", + "desktop_name": "desktop-name", + "directory_id": 2, + "directory_name": "windows-server-2012-shared", + "ei": 9766, + "event": "desktop.directory.read", + "file_path": "powershell-scripts/domain-controller.ps1", + "length": 734, + "offset": 0, + "proto": "tdp", + "sid": "b9329a34-ab0c-4aa0-9fc8-1054d491e818", + "success": true, + "time": "2022-10-21T23:07:36.496189Z", + "uid": "a6ea5e5b-daac-47c2-9ce5-3f868e51a146", + "user": "joe" +} +``` + +### TDP05W + +Directory Sharing Read Failed + +Example: + +```json +{ + "addr.remote": "ec2-54-162-177-255.compute-1.amazonaws.com:3389", + "cluster_name": "im-a-cluster-name", + "code": "TDP05W", + "desktop_addr": "ec2-54-162-177-255.compute-1.amazonaws.com:3389", + "desktop_name": "desktop-name", + "directory_id": 2, + "directory_name": "windows-server-2012-shared", + "ei": 9766, + "event": "desktop.directory.read", + "file_path": "powershell-scripts/domain-controller.ps1", + "length": 734, + "offset": 0, + "proto": "tdp", + "sid": 
"b9329a34-ab0c-4aa0-9fc8-1054d491e818", + "success": false, + "time": "2022-10-21T23:07:36.496189Z", + "uid": "a6ea5e5b-daac-47c2-9ce5-3f868e51a146", + "user": "joe" +} +``` + +## desktop.directory.share + +There are multiple events with the `desktop.directory.share` type. + +### TDP04I + +Directory Sharing Started + +Example: + +```json +{ + "addr.remote": "ec2-54-162-177-255.compute-1.amazonaws.com:3389", + "cluster_name": "im-a-cluster-name", + "code": "TDP04I", + "desktop_addr": "ec2-54-162-177-255.compute-1.amazonaws.com:3389", + "desktop_name": "desktop-name", + "directory_id": 2, + "directory_name": "windows-server-2012-shared", + "ei": 3317, + "event": "desktop.directory.share", + "proto": "tdp", + "sid": "6ecf916d-dedf-4769-afc0-d08e55fbebf7", + "success": true, + "time": "2022-10-21T22:36:27.314409Z", + "uid": "f38b07d4-2f3e-400b-a91a-bad7283db775", + "user": "joe" +} +``` + +### TDP04W + +Directory Sharing Start Failed + +Example: + +```json +{ + "addr.remote": "ec2-54-162-177-255.compute-1.amazonaws.com:3389", + "cluster_name": "im-a-cluster-name", + "code": "TDP04W", + "desktop_addr": "ec2-54-162-177-255.compute-1.amazonaws.com:3389", + "desktop_name": "desktop-name", + "directory_id": 2, + "directory_name": "windows-server-2012-shared", + "ei": 3317, + "event": "desktop.directory.share", + "proto": "tdp", + "sid": "6ecf916d-dedf-4769-afc0-d08e55fbebf7", + "success": false, + "time": "2022-10-21T22:36:27.314409Z", + "uid": "f38b07d4-2f3e-400b-a91a-bad7283db775", + "user": "joe" +} +``` + +## desktop.directory.write + +There are multiple events with the `desktop.directory.write` type. 
+ +### TDP06I + +Directory Sharing Write + +Example: + +```json +{ + "addr.remote": "ec2-54-162-177-255.compute-1.amazonaws.com:3389", + "cluster_name": "im-a-cluster-name", + "code": "TDP06I", + "desktop_addr": "ec2-54-162-177-255.compute-1.amazonaws.com:3389", + "desktop_name": "desktop-name", + "directory_id": 2, + "directory_name": "windows-server-2012-shared", + "ei": 7428, + "event": "desktop.directory.write", + "file_path": "powershell-scripts/domain-controller.ps1", + "length": 734, + "offset": 0, + "proto": "tdp", + "sid": "ea959406-27e4-4b11-85c4-1a485ff48417", + "success": true, + "time": "2022-10-21T23:19:34.519058Z", + "uid": "6bb2ebdf-d7e2-4a03-80ae-514ff9a5c71f", + "user": "joe" +} +``` + +### TDP06W + +Directory Sharing Write Failed + +Example: + +```json +{ + "addr.remote": "ec2-54-162-177-255.compute-1.amazonaws.com:3389", + "cluster_name": "im-a-cluster-name", + "code": "TDP06W", + "desktop_addr": "ec2-54-162-177-255.compute-1.amazonaws.com:3389", + "desktop_name": "desktop-name", + "directory_id": 2, + "directory_name": "windows-server-2012-shared", + "ei": 7428, + "event": "desktop.directory.write", + "file_path": "powershell-scripts/domain-controller.ps1", + "length": 734, + "offset": 0, + "proto": "tdp", + "sid": "ea959406-27e4-4b11-85c4-1a485ff48417", + "success": false, + "time": "2022-10-21T23:19:34.519058Z", + "uid": "6bb2ebdf-d7e2-4a03-80ae-514ff9a5c71f", + "user": "joe" +} +``` + +## device + +Device Enrolled + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TV005I", + "device": { + "asset_tag": "M2CQVQV64R", + "device_id": "99d39707-efdd-436c-94f3-6a1aeef1fbf2", + "os_type": 2 + }, + "ei": 0, + "event": "device", + "status": { + "success": true + }, + "time": "2023-01-12T19:28:36.842Z", + "uid": "94d33b77-82cd-4558-8893-0320699bf755", + "user": { + "user": "this user wont render properly" + } +} +``` + +## device.authenticate + +Device Authenticated + +Example: + +```json +{ + "cluster_name": 
"im-a-cluster-name", + "code": "TV006I", + "ei": 0, + "event": "device.authenticate", + "success": true, + "time": "2023-01-12T19:34:48.1Z", + "uid": "fa279611-91d8-47b5-9fad-b8ea3e5286e0", + "user": "lisa" +} +``` + +## device.authenticate.confirm + +Device Web Authentication Confirmed + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TV009I", + "device": { + "device_id": "f84f6b35-6226-4e73-8205-3bcbd7d12970", + "web_authentication": true, + "web_session_id": "my-session-id-12345" + }, + "ei": 0, + "event": "device.authenticate.confirm", + "success": false, + "time": "2024-04-08T19:35:48.1Z", + "uid": "b1361d51-70fa-4f1b-803c-a252c2877707", + "user": "llama", + "user_kind": 1 +} +``` + +## device.create + +Device Registered + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TV001I", + "device": { + "asset_tag": "M2CQVQV64R", + "device_id": "99d39707-efdd-436c-94f3-6a1aeef1fbf2", + "os_type": 2 + }, + "ei": 0, + "event": "device.create", + "success": true, + "time": "2023-01-12T19:28:36.842Z", + "uid": "94d33b77-82cd-4558-8893-0320699bf755", + "user": "3827e8ad-7cbe-4423-a80f-dfc89e83eb86.im-a-cluster-name" +} +``` + +## device.delete + +Device Deleted + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TV002I", + "device": { + "device_id": "99d39707-efdd-436c-94f3-6a1aeef1fbf2" + }, + "ei": 0, + "event": "device.delete", + "success": true, + "time": "2023-01-12T20:33:20.527Z", + "uid": "a12e693e-1c45-43e4-a9d1-5fd8399e303c", + "user": "3827e8ad-7cbe-4423-a80f-dfc89e83eb86.im-a-cluster-name" +} +``` + +## device.token.create + +Device Enroll Token Created + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TV003I", + "device": { + "device_id": "99d39707-efdd-436c-94f3-6a1aeef1fbf2" + }, + "ei": 0, + "event": "device.token.create", + "success": true, + "time": "2023-01-12T19:51:54.168Z", + "uid": "24cce2a0-57b7-494e-a196-c7fd2482b10c", + "user": 
"3827e8ad-7cbe-4423-a80f-dfc89e83eb86.im-a-cluster-name" +} +``` + +## device.token.spent + +Device Enroll Token Spent + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TV004I", + "device": { + "asset_tag": "M2CQVQV64R", + "device_id": "0e288b23-f99f-4635-b182-06e9308095a8", + "os_type": 2 + }, + "ei": 0, + "event": "device.token.spent", + "success": true, + "time": "2023-01-12T21:31:29.191Z", + "uid": "bbbc496f-820b-4f49-ae0d-1c1b29faee85", + "user": "lisa" +} +``` + +## device.update + +Device Updated + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TV007I", + "device": { + "asset_tag": "M2CQVQV64R", + "device_id": "0e288b23-f99f-4635-b182-06e9308095a8", + "os_type": 2 + }, + "ei": 0, + "event": "device.update", + "success": true, + "time": "2023-01-12T21:31:29.191Z", + "uid": "bbbc496f-820b-4f49-ae0d-1c1b29faee85", + "user": "lisa" +} +``` + +## device.webtoken.create + +Device Web Token Created + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TV008I", + "device": { + "asset_tag": "M2CQVQV64R", + "credential_id": "c7572891-8426-4e62-874f-c793029d53a6", + "device_id": "f84f6b35-6226-4e73-8205-3bcbd7d12970", + "os_type": 2 + }, + "ei": 0, + "event": "device.webtoken.create", + "success": true, + "time": "2024-03-05T17:18:43.296Z", + "uid": "b1361d51-70fa-4f1b-803c-a252c2877707", + "user": "llama", + "user_kind": 1 +} +``` + +## discovery_config.create + +Discovery Config Created + +Example: + +```json +{ + "code": "DC001I", + "event": "discovery_config.create", + "time": "2023-05-08T19:21:36.144Z", + "name": "discovery-config", + "updated_by": "joe" +} +``` + +## discovery_config.delete + +Discovery Config Deleted + +Example: + +```json +{ + "code": "DC003I", + "event": "discovery_config.delete", + "time": "2023-05-08T19:21:38.144Z", + "name": "discovery-config", + "updated_by": "joe" +} +``` + +## discovery_config.delete_all + +All Discovery Configs Deleted + +Example: + +```json +{ + 
"code": "DC004I", + "event": "discovery_config.delete_all", + "time": "2023-05-08T19:21:39.144Z", + "name": "discovery-config", + "updated_by": "joe" +} +``` + +## discovery_config.update + +Discovery Config Updated + +Example: + +```json +{ + "code": "DC002I", + "event": "discovery_config.update", + "time": "2023-05-08T19:21:37.144Z", + "name": "discovery-config", + "updated_by": "joe" +} +``` + +## exec + +There are multiple events with the `exec` type. + +### T3002I + +Command Execution + +Example: + +```json +{ + "code": "T3002I", + "proto": "kube", + "kubernetes_cluster": "clusterOne", + "ei": 0, + "addr.local": "172.31.28.130:3022", + "addr.remote": "151.181.228.114:51752", + "event": "exec", + "namespace": "default", + "sid": "8d57a9d5-3848-5ce2-a326-85eb4a6d2eed", + "time": "2020-10-30T17:28:14.705Z", + "uid": "8ea5be3d-07b1-4308-8e0d-2d2ec57cbb20", + "user": "alex" +} +``` + +### T3002E + +Command Execution Failed + +Example: + +```json +{ + "code": "T3002E", + "event": "exec", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## external_audit_storage.disable + +External Audit Storage Disabled + +Example: + +```json +{ + "code": "TEA002I", + "event": "external_audit_storage.disable", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## external_audit_storage.enable + +External Audit Storage Enabled + +Example: + +```json +{ + "code": "TEA001I", + "event": "external_audit_storage.enable", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## git.command + +There are multiple events with the `git.command` type. 
+
+### TGIT001E
+
+Git Command Failed
+
+Example:
+
+```json
+{
+  "code": "TGIT001E",
+  "event": "git.command",
+  "time": "2024-12-07T11:11:11.111Z",
+  "uid": "7699b806-e717-4821-85a5-d2f41acbe373",
+  "user": "Linus.Torvalds",
+  "service": "git-upload-pack",
+  "exitError": "some-error",
+  "path": "my-org/my-repo"
+}
+```
+
+### TGIT001I
+
+Git Command
+
+Example:
+
+```json
+{
+  "code": "TGIT001I",
+  "event": "git.command",
+  "time": "2024-12-07T11:11:11.112Z",
+  "uid": "7699b806-e717-4821-85a5-d2f41acbe373",
+  "user": "Linus.Torvalds",
+  "service": "git-upload-pack",
+  "path": "my-org/my-repo"
+}
+```
+
+## github.created
+
+GitHub Auth Connector Created
+
+Example:
+
+```json
+{
+  "code": "T8000I",
+  "event": "github.created",
+  "name": "new_github_connector",
+  "time": "2020-06-05T19:28:00Z",
+  "uid": "2b7bb323-35d1-4b9c-9a6d-00ab34c95fb8",
+  "user": "unimplemented"
+}
+```
+
+## github.deleted
+
+GitHub Auth Connector Deleted
+
+Example:
+
+```json
+{
+  "code": "T8001I",
+  "event": "github.deleted",
+  "name": "new_github_connector",
+  "time": "2020-06-05T19:28:28Z",
+  "uid": "26f12a67-d593-40df-b3d3-965faee60143",
+  "user": "unimplemented"
+}
+```
+
+## github.updated
+
+GitHub Auth Connector Updated
+
+Example:
+
+```json
+{
+  "code": "T80002I",
+  "event": "github.updated",
+  "name": "new_github_connector",
+  "time": "2020-06-05T19:28:28Z",
+  "uid": "26f12a67-d593-40df-b3d3-965faee60143",
+  "user": "unimplemented"
+}
+```
+
+## health_check_config.create
+
+Health Check Config Created
+
+Example:
+
+```json
+{
+  "cluster_name": "gavin-leaf.cloud.gravitational.io",
+  "code": "THCC001I",
+  "ei": 0,
+  "event": "health_check_config.create",
+  "expires": "0001-01-01T00:00:00Z",
+  "name": "example-cfg",
+  "time": "2025-03-04T15:49:21.869Z",
+  "uid": "0efbb33d-fa50-44e0-8dec-4ac89c0dd4ab",
+  "user": "gavin"
+}
+```
+
+## health_check_config.delete
+
+Health Check Config Deleted
+
+Example:
+
+```json
+{
+  "cluster_name": "gavin-leaf.cloud.gravitational.io",
+  "code": "THCC003I",
+  "ei": 0,
+  "event": "health_check_config.delete",
+  "expires": "0001-01-01T00:00:00Z",
+  "name": "example-cfg",
+  "time": "2025-03-04T15:49:21.869Z",
+  "uid": "0efbb33d-fa50-44e0-8dec-4ac89c0dd4ab",
+  "user": "gavin"
+}
+```
+
+## health_check_config.update
+
+Health Check Config Updated
+
+Example:
+
+```json
+{
+  "cluster_name": "gavin-leaf.cloud.gravitational.io",
+  "code": "THCC002I",
+  "ei": 0,
+  "event": "health_check_config.update",
+  "expires": "0001-01-01T00:00:00Z",
+  "name": "example-cfg",
+  "time": "2025-03-04T15:49:21.869Z",
+  "uid": "0efbb33d-fa50-44e0-8dec-4ac89c0dd4ab",
+  "user": "gavin"
+}
+```
+
+## instance.join
+
+There are multiple events with the `instance.join` type.
+
+### TJ002I
+
+Instance Joined
+
+Example:
+
+```json
+{
+  "cluster_name": "root.tele.ottr.sh",
+  "code": "TJ002I",
+  "ei": 0,
+  "event": "instance.join",
+  "method": "token",
+  "node_name": "noah-laptop-follower",
+  "role": "Instance",
+  "success": true,
+  "time": "2022-12-06T09:17:06.392Z",
+  "token_name": "************************a2418147",
+  "uid": "c1ea0e6c-ee3a-4f7e-9a98-9df283b01a98"
+}
+```
+
+### TJ002E
+
+Instance Join Failed
+
+Example:
+
+```json
+{
+  "code": "TJ002E",
+  "event": "instance.join",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+## integration.create
+
+Integration Created
+
+Example:
+
+```json
+{
+  "code": "IG001I",
+  "event": "integration.create",
+  "time": "2023-05-09T19:21:36.144Z",
+  "name": "integration",
+  "updated_by": "joe"
+}
+```
+
+## integration.delete
+
+Integration Deleted
+
+Example:
+
+```json
+{
+  "code": "IG003I",
+  "event": "integration.delete",
+  "time": "2023-05-09T19:21:38.144Z",
+  "name": "integration",
+  "updated_by": "joe"
+}
+```
+
+## integration.update
+
+Integration Updated
+
+Example:
+
+```json
+{
+  "code": "IG002I",
+  "event": "integration.update",
+  "time": "2023-05-09T19:21:37.144Z",
+  "name": "integration",
+  "updated_by": "joe"
+}
+```
+
+## join_token.bound_keypair.join_state_verification_failed
+
+Bound Keypair Join Verification Failed
+
+Example:
+
+```json
+{
+  "code": "TBK003W",
+  "event": "join_token.bound_keypair.join_state_verification_failed",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+## join_token.bound_keypair.recovery
+
+Bound Keypair Recovery
+
+Example:
+
+```json
+{
+  "code": "TBK001I",
+  "event": "join_token.bound_keypair.recovery",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+## join_token.bound_keypair.rotation
+
+Bound Keypair Rotation
+
+Example:
+
+```json
+{
+  "code": "TBK002I",
+  "event": "join_token.bound_keypair.rotation",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+## join_token.create
+
+Join Token Created
+
+Example:
+
+```json
+{
+  "code": "TJT00I",
+  "event": "join_token.create",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+## kube.create
+
+Kubernetes Created
+
+Example:
+
+```json
+{
+  "cluster_name": "root",
+  "code": "T3010I",
+  "kube_labels": {
+    "env": "local",
+    "teleport.dev/origin": "dynamic"
+  },
+  "ei": 0,
+  "event": "kube.create",
+  "expires": "0001-01-01T00:00:00Z",
+  "name": "kube-local",
+  "time": "2022-09-08T15:42:36.005Z",
+  "uid": "9d37514f-aef5-426f-9fda-31fd35d070f5",
+  "user": "05ff66c9-a948-42f4-af0e-a1b6ba62561e.root"
+}
+```
+
+## kube.delete
+
+Kubernetes Deleted
+
+Example:
+
+```json
+{
+  "cluster_name": "root",
+  "code": "T3012I",
+  "ei": 0,
+  "event": "kube.delete",
+  "expires": "0001-01-01T00:00:00Z",
+  "name": "kube-local",
+  "time": "2022-09-08T15:42:36.005Z",
+  "uid": "74f5e6b9-50c4-4195-bb26-d615641255bc",
+  "user": "05ff66c9-a948-42f4-af0e-a1b6ba62561e.root"
+}
+```
+
+## kube.request
+
+Kubernetes Request
+
+Example:
+
+```json
+{
+  "addr.local": "127.0.0.1:3027",
+  "addr.remote": "[::1]:43026",
+  "code": "T3009I",
+  "ei": 0,
+  "event": "kube.request",
+  "kubernetes_cluster": "gke_teleport-a",
+  "login": "awly",
+  "namespace": "default",
+  "proto": "kube",
+  "request_path": "/api/v1/namespaces/teletest/pods/test-pod",
+  "resource_api_group": "core/v1",
+  "resource_kind": "pods",
+  "resource_name": "test-pod",
+  "resource_namespace": "teletest",
+  "response_code": 200,
+  "server_id": "9b67377e-d61e-4865-96d6-fa71989fd9e9",
+  "time": "2020-11-12T20:35:44.978Z",
+  "uid": "8c1459a8-9199-4d25-bc5d-38e000ddd9ab",
+  "user": "alex",
+  "verb": "GET"
+}
+```
+
+## kube.update
+
+Kubernetes Updated
+
+Example:
+
+```json
+{
+  "cluster_name": "root",
+  "code": "T3011I",
+  "kube_labels": {
+    "env": "local",
+    "teleport.dev/origin": "dynamic"
+  },
+  "ei": 0,
+  "event": "kube.update",
+  "expires": "0001-01-01T00:00:00Z",
+  "name": "kube-local",
+  "time": "2022-09-08T15:42:36.005Z",
+  "uid": "fe631a5a-6418-49d6-99e7-5280654663ec",
+  "user": "05ff66c9-a948-42f4-af0e-a1b6ba62561e.root"
+}
+```
+
+## lock.created
+
+Lock Created
+
+Example:
+
+```json
+{
+  "cluster_name": "im-a-cluster-name",
+  "code": "TLK00I",
+  "ei": 0,
+  "event": "lock.created",
+  "expires": "0001-01-01T00:00:00Z",
+  "name": "lock-name",
+  "time": "2021-08-06T18:47:19.75Z",
+  "uid": "070fcb2a-e1cf-5b84-8190-14448cc63c76",
+  "user": "df83fda8-1111-5567-8bcc-c282dec3290e.im-a-cluster-name"
+}
+```
+
+## lock.deleted
+
+Lock Deleted
+
+Example:
+
+```json
+{
+  "cluster_name": "im-a-cluster-name",
+  "code": "TLK01I",
+  "ei": 0,
+  "event": "lock.deleted",
+  "expires": "0001-01-01T00:00:00Z",
+  "name": "lock-name",
+  "time": "2021-08-06T18:49:51.626Z",
+  "uid": "e4630384-ac85-5a43-9ba9-3355b8d5cae4",
+  "user": "df83fda8-1111-5567-8bcc-c282dec3290e.im-a-cluster-name"
+}
+```
+
+## login_rule.create
+
+Login Rule Created
+
+Example:
+
+```json
+{
+  "cluster_name": "im-a-cluster-name",
+  "code": "TLR00I",
+  "ei": 0,
+  "event": "login_rule.create",
+  "expires": "0001-01-01T00:00:00Z",
+  "name": "test_rule",
+  "time": "2023-01-25T19:21:36.144Z",
+  "uid": "266e8563-729e-412f-ba26-1050fbec0cd6",
+  "user": "nic"
+}
+```
+
+## login_rule.delete
+
+Login Rule Deleted
+
+Example:
+
+```json
+{
+  "cluster_name": "im-a-cluster-name",
+  "code": "TLR01I",
+  "ei": 0,
+  "event": "login_rule.delete",
+  "expires": "0001-01-01T00:00:00Z",
+  "name": "test_rule",
+  "time": "2023-01-25T19:21:36.144Z",
+  "uid": "266e8563-729e-412f-ba26-1050fbec0cd6",
+  "user": "nic"
+}
+```
+
+## mcp.session.end
+
+There are multiple events with the `mcp.session.end` type.
+
+### TMCP002I
+
+MCP Session Ended
+
+Example:
+
+```json
+{
+  "code": "TMCP002I",
+  "ei": 0,
+  "event": "mcp.session.end",
+  "namespace": "default",
+  "server_id": "a0518380-0d53-4188-ac8b-8ddd8103e45b",
+  "sid": "6593cf87-9839-4f18-abf8-c54873aaeb4e",
+  "time": "2025-05-23T12:22:22.222Z",
+  "uid": "80400ed9-644e-4a6e-ab99-b264b34d0f55",
+  "user": "ai-user",
+  "app_name": "mcp-everything",
+  "success": true
+}
+```
+
+### TMCP002E
+
+MCP Session End Failure
+
+Example:
+
+```json
+{
+  "code": "TMCP002E",
+  "ei": 0,
+  "event": "mcp.session.end",
+  "namespace": "default",
+  "server_id": "a0518380-0d53-4188-ac8b-8ddd8103e45b",
+  "sid": "6593cf87-9839-4f18-abf8-c54873aaeb4e",
+  "time": "2025-05-23T12:22:22.222Z",
+  "uid": "80400ed9-644e-4a6e-ab99-b264b34d0f55",
+  "user": "ai-user",
+  "app_name": "mcp-everything",
+  "success": true,
+  "error": "HTTP 405 Method Not Allowed"
+}
+```
+
+## mcp.session.invalid_http_request
+
+MCP Session Invalid Request
+
+Example:
+
+```json
+{
+  "code": "TMCP006E",
+  "ei": 0,
+  "event": "mcp.session.invalid_http_request",
+  "namespace": "default",
+  "server_id": "a0518380-0d53-4188-ac8b-8ddd8103e45b",
+  "sid": "6593cf87-9839-4f18-abf8-c54873aaeb4e",
+  "time": "2025-05-23T12:22:22.222Z",
+  "uid": "80400ed9-644e-4a6e-ab99-b264b34d0f55",
+  "user": "ai-user",
+  "app_name": "mcp-everything",
+  "method": "OPTIONS",
+  "path": "/"
+}
+```
+
+## mcp.session.listen_sse_stream
+
+There are multiple events with the `mcp.session.listen_sse_stream` type.
+
+### TMCP005I
+
+MCP Session Listen
+
+Example:
+
+```json
+{
+  "code": "TMCP005I",
+  "ei": 0,
+  "event": "mcp.session.listen_sse_stream",
+  "namespace": "default",
+  "server_id": "a0518380-0d53-4188-ac8b-8ddd8103e45b",
+  "sid": "6593cf87-9839-4f18-abf8-c54873aaeb4e",
+  "time": "2025-05-23T12:22:22.222Z",
+  "uid": "80400ed9-644e-4a6e-ab99-b264b34d0f55",
+  "user": "ai-user",
+  "app_name": "mcp-everything",
+  "success": true
+}
+```
+
+### TMCP005E
+
+MCP Session Listen Failure
+
+Example:
+
+```json
+{
+  "code": "TMCP005E",
+  "ei": 0,
+  "event": "mcp.session.listen_sse_stream",
+  "namespace": "default",
+  "server_id": "a0518380-0d53-4188-ac8b-8ddd8103e45b",
+  "sid": "6593cf87-9839-4f18-abf8-c54873aaeb4e",
+  "time": "2025-05-23T12:22:22.222Z",
+  "uid": "80400ed9-644e-4a6e-ab99-b264b34d0f55",
+  "user": "ai-user",
+  "app_name": "mcp-everything",
+  "success": false,
+  "error": "HTTP 405 Method Not Allowed"
+}
+```
+
+## mcp.session.notification
+
+There are multiple events with the `mcp.session.notification` type.
+
+### TMCP004I
+
+MCP Session Notification
+
+Example:
+
+```json
+{
+  "code": "TMCP004I",
+  "ei": 0,
+  "event": "mcp.session.notification",
+  "namespace": "default",
+  "server_id": "a0518380-0d53-4188-ac8b-8ddd8103e45b",
+  "sid": "6593cf87-9839-4f18-abf8-c54873aaeb4e",
+  "time": "2025-05-23T11:11:11.333Z",
+  "uid": "80400ed9-644e-4a6e-ab99-b264b34d0f55",
+  "user": "ai-user",
+  "app_name": "mcp-everything",
+  "success": true,
+  "message": {
+    "method": "notifications/initialized",
+    "jsonrpc": "2.0"
+  }
+}
+```
+
+### TMCP004E
+
+MCP Session Notification Failure
+
+Example:
+
+```json
+{
+  "code": "TMCP004E",
+  "ei": 0,
+  "event": "mcp.session.notification",
+  "namespace": "default",
+  "server_id": "a0518380-0d53-4188-ac8b-8ddd8103e45b",
+  "sid": "6593cf87-9839-4f18-abf8-c54873aaeb4e",
+  "time": "2025-05-23T11:11:11.333Z",
+  "uid": "80400ed9-644e-4a6e-ab99-b264b34d0f55",
+  "user": "ai-user",
+  "app_name": "mcp-everything",
+  "success": false,
+  "message": {
+    "method": "notifications/initialized",
+    "jsonrpc": "2.0"
+  },
+  "error": "HTTP 401 Unauthorized"
+}
+```
+
+## mcp.session.request
+
+There are multiple events with the `mcp.session.request` type.
+
+### TMCP003I
+
+MCP Session Request
+
+Example:
+
+```json
+{
+  "code": "TMCP003I",
+  "ei": 0,
+  "event": "mcp.session.request",
+  "namespace": "default",
+  "server_id": "a0518380-0d53-4188-ac8b-8ddd8103e45b",
+  "sid": "6593cf87-9839-4f18-abf8-c54873aaeb4e",
+  "time": "2025-05-23T11:11:11.222Z",
+  "uid": "80400ed9-644e-4a6e-ab99-b264b34d0f55",
+  "user": "ai-user",
+  "app_name": "mcp-everything",
+  "success": true,
+  "message": {
+    "id": 0,
+    "method": "initialize",
+    "params": {
+      "clientInfo": {
+        "name": "claude-ai",
+        "version": "0.1.0"
+      },
+      "protocolVersion": "2024-11-05"
+    },
+    "jsonrpc": "2.0"
+  }
+}
+```
+
+### TMCP003E
+
+MCP Session Request Failure
+
+Example:
+
+```json
+{
+  "code": "TMCP003E",
+  "ei": 0,
+  "event": "mcp.session.request",
+  "namespace": "default",
+  "server_id": "a0518380-0d53-4188-ac8b-8ddd8103e45b",
+  "sid": "6593cf87-9839-4f18-abf8-c54873aaeb4e",
+  "time": "2025-05-23T11:11:11.555Z",
+  "uid": "80400ed9-644e-4a6e-ab99-b264b34d0f55",
+  "user": "ai-user",
+  "app_name": "mcp-everything",
+  "success": false,
+  "error": "access denied",
+  "message": {
+    "id": 2,
+    "method": "tools/call",
+    "params": {
+      "name": "write_file",
+      "arguments": {
+        "path": "/etc/passwd"
+      }
+    },
+    "jsonrpc": "2.0"
+  }
+}
+```
+
+## mcp.session.start
+
+MCP Session Started
+
+Example:
+
+```json
+{
+  "code": "TMCP001I",
+  "ei": 0,
+  "event": "mcp.session.start",
+  "namespace": "default",
+  "server_id": "a0518380-0d53-4188-ac8b-8ddd8103e45b",
+  "sid": "6593cf87-9839-4f18-abf8-c54873aaeb4e",
+  "time": "2025-05-23T11:11:11.111Z",
+  "uid": "80400ed9-644e-4a6e-ab99-b264b34d0f55",
+  "user": "ai-user",
+  "app_name": "mcp-everything"
+}
+```
+
+## mfa.delete
+
+There are multiple events with the `mfa.delete` type.
+
+### T1006I
+
+MFA Device Added
+
+Example:
+
+```json
+{
+  "cluster_name": "localhost",
+  "code": "T1006I",
+  "mfa_device_name": "usb-c",
+  "mfa_device_type": "U2F",
+  "mfa_device_uuid": "7a6fbf23-d75c-4c62-8215-e962d0f2a1f3",
+  "ei": 0,
+  "event": "mfa.delete",
+  "time": "2021-03-03T22:58:34.737Z",
+  "uid": "9be91d9e-79ec-422b-b6ae-ccf7235476d4",
+  "user": "awly"
+}
+```
+
+### T1007I
+
+MFA Device Deleted
+
+Example:
+
+```json
+{
+  "cluster_name": "localhost",
+  "code": "T1007I",
+  "mfa_device_name": "usb-c",
+  "mfa_device_type": "U2F",
+  "mfa_device_uuid": "7a6fbf23-d75c-4c62-8215-e962d0f2a1f3",
+  "ei": 0,
+  "event": "mfa.delete",
+  "time": "2021-03-03T22:58:44.737Z",
+  "uid": "c6afe861-d53c-42ce-837c-7920d2398b44",
+  "user": "awly"
+}
+```
+
+## mfa_auth_challenge.create
+
+MFA Authentication Attempt
+
+Example:
+
+```json
+{
+  "challenge_allow_reuse": false,
+  "challenge_scope": "CHALLENGE_SCOPE_LOGIN",
+  "cluster_name": "zarq",
+  "code": "T1015I",
+  "ei": 0,
+  "event": "mfa_auth_challenge.create",
+  "time": "2024-04-16T21:46:59.317Z",
+  "uid": "815bbcf4-fb05-4e08-917c-7259e9332d69",
+  "user": "llama",
+  "user_kind": 1
+}
+```
+
+## mfa_auth_challenge.validate
+
+There are multiple events with the `mfa_auth_challenge.validate` type.
+
+### T1016I
+
+MFA Authentication Success
+
+Example:
+
+```json
+{
+  "code": "T1016I",
+  "event": "mfa_auth_challenge.validate",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+### T1016W
+
+MFA Authentication Failure
+
+Example:
+
+```json
+{
+  "code": "T1016W",
+  "event": "mfa_auth_challenge.validate",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+## oidc.created
+
+OIDC Auth Connector Created
+
+Example:
+
+```json
+{
+  "code": "T8100I",
+  "event": "oidc.created",
+  "name": "new_oidc_connector",
+  "time": "2020-06-05T19:29:14Z",
+  "uid": "6208b4b9-0077-41aa-967a-f173b6bcc0d3",
+  "user": "unimplemented"
+}
+```
+
+## oidc.deleted
+
+OIDC Auth Connector Deleted
+
+Example:
+
+```json
+{
+  "code": "T8101I",
+  "event": "oidc.deleted",
+  "name": "new_oidc_connector",
+  "time": "2020-06-05T19:29:14Z",
+  "uid": "6208b4b9-0077-41aa-967a-f173b6bcc0d3",
+  "user": "unimplemented"
+}
+```
+
+## oidc.updated
+
+OIDC Auth Connector Updated
+
+Example:
+
+```json
+{
+  "code": "T8102I",
+  "event": "oidc.updated",
+  "name": "new_oidc_connector",
+  "time": "2020-06-05T19:29:14Z",
+  "uid": "6208b4b9-0077-41aa-967a-f173b6bcc0d3",
+  "user": "unimplemented"
+}
+```
+
+## okta.access_list.sync
+
+There are multiple events with the `okta.access_list.sync` type.
+
+### TOK006I
+
+Okta access list synchronization completed
+
+Example:
+
+```json
+{
+  "code": "TOK006I",
+  "event": "okta.access_list.sync",
+  "time": "2023-05-08T19:21:36.144Z"
+}
+```
+
+### TOK006E
+
+Okta access list synchronization failed
+
+Example:
+
+```json
+{
+  "code": "TOK006E",
+  "event": "okta.access_list.sync",
+  "time": "2023-05-08T19:21:36.144Z"
+}
+```
+
+## okta.applications.update
+
+Okta applications have been updated
+
+Example:
+
+```json
+{
+  "code": "TOK002I",
+  "event": "okta.applications.update",
+  "time": "2023-05-08T19:21:36.144Z",
+  "added": 5,
+  "updated": 1,
+  "deleted": 7
+}
+```
+
+## okta.assignment.cleanup
+
+There are multiple events with the `okta.assignment.cleanup` type.
+
+### TOK005I
+
+Okta assignment has been cleaned up
+
+Example:
+
+```json
+{
+  "code": "TOK005I",
+  "event": "okta.assignment.cleanup",
+  "time": "2023-05-08T19:21:36.144Z",
+  "name": "assignment-id",
+  "source": "source",
+  "user": "mike"
+}
+```
+
+### TOK005E
+
+Okta assignment failed to clean up
+
+Example:
+
+```json
+{
+  "code": "TOK005E",
+  "event": "okta.assignment.cleanup",
+  "time": "2023-05-08T19:21:36.144Z",
+  "name": "assignment-id",
+  "source": "source",
+  "user": "mike"
+}
+```
+
+## okta.assignment.process
+
+There are multiple events with the `okta.assignment.process` type.
+
+### TOK004I
+
+Okta assignment has been processed
+
+Example:
+
+```json
+{
+  "code": "TOK004I",
+  "event": "okta.assignment.process",
+  "time": "2023-05-08T19:21:36.144Z",
+  "name": "assignment-id",
+  "source": "source",
+  "user": "mike"
+}
+```
+
+### TOK004E
+
+Okta assignment failed to process
+
+Example:
+
+```json
+{
+  "code": "TOK004E",
+  "event": "okta.assignment.process",
+  "time": "2023-05-08T19:21:36.144Z",
+  "name": "assignment-id",
+  "source": "source",
+  "user": "mike"
+}
+```
+
+## okta.groups.update
+
+Okta groups have been updated
+
+Example:
+
+```json
+{
+  "code": "TOK001I",
+  "event": "okta.groups.update",
+  "time": "2023-05-08T19:21:36.144Z",
+  "added": 5,
+  "updated": 1,
+  "deleted": 7
+}
+```
+
+## okta.sync.failure
+
+Okta synchronization failed
+
+Example:
+
+```json
+{
+  "code": "TOK003E",
+  "event": "okta.sync.failure",
+  "time": "2023-05-08T19:21:36.144Z"
+}
+```
+
+## okta.user.sync
+
+There are multiple events with the `okta.user.sync` type.
+
+### TOK007I
+
+Okta user synchronization completed
+
+Example:
+
+```json
+{
+  "code": "TOK007I",
+  "event": "okta.user.sync",
+  "time": "2023-05-08T19:21:36.144Z",
+  "num_users_created": 5,
+  "num_users_deleted": 1,
+  "num_users_modified": 7
+}
+```
+
+### TOK007E
+
+Okta user synchronization failed
+
+Example:
+
+```json
+{
+  "code": "TOK007E",
+  "event": "okta.user.sync",
+  "time": "2023-05-08T19:21:36.144Z"
+}
+```
+
+## plugin.create
+
+Plugin Created
+
+Example:
+
+```json
+{
+  "code": "PG001I",
+  "event": "plugin.create",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+## plugin.delete
+
+Plugin Deleted
+
+Example:
+
+```json
+{
+  "code": "PG003I",
+  "event": "plugin.delete",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+## plugin.update
+
+Plugin Updated
+
+Example:
+
+```json
+{
+  "code": "PG002I",
+  "event": "plugin.update",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+## port
+
+There are multiple events with the `port` type.
+
+### T3003I
+
+Port Forwarding Start
+
+Example:
+
+```json
+{
+  "code": "T3003I",
+  "event": "port",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+### T3003E
+
+Port Forwarding Failure
+
+Example:
+
+```json
+{
+  "code": "T3003E",
+  "event": "port",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+### T3003S
+
+Port Forwarding Stop
+
+Example:
+
+```json
+{
+  "code": "T3003S",
+  "event": "port",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+## privilege_token.create
+
+Privilege Token Created
+
+Example:
+
+```json
+{
+  "cluster_name": "im-a-cluster-name",
+  "code": "T6002I",
+  "ei": 0,
+  "event": "privilege_token.create",
+  "expires": "2021-11-01T22:29:47.989984Z",
+  "name": "user@example.com",
+  "time": "2021-11-01T22:24:47.99Z",
+  "ttl": "5m0s",
+  "uid": "6a9d5ac1-08c5-5c1e-9ebd-086d34155b08",
+  "user": "user@example.com"
+}
+```
+
+## recovery_code.generated
+
+Recovery Codes Generated
+
+Example:
+
+```json
+{
+  "cluster_name": "im-a-cluster-name",
+  "code": "T1008I",
+  "ei": 0,
+  "event": "recovery_code.generated",
+  "time": "2021-08-05T21:16:17.13Z",
+  "uid": "ed0f6962-e34d-5fa4-bd41-7961cf2c51bb",
+  "user": "user@example.com"
+}
+```
+
+## recovery_code.used
+
+There are multiple events with the `recovery_code.used` type.
+
+### T1009I
+
+Recovery Code Used
+
+Example:
+
+```json
+{
+  "cluster_name": "im-a-cluster-name",
+  "code": "T1009I",
+  "ei": 0,
+  "event": "recovery_code.used",
+  "success": true,
+  "time": "2021-08-05T21:22:46.042Z",
+  "uid": "4bb44dfe-70dc-5820-8c65-0baf40f62d13",
+  "user": "user@example.com"
+}
+```
+
+### T1009W
+
+Recovery Code Use Failed
+
+Example:
+
+```json
+{
+  "cluster_name": "localhost",
+  "code": "T1009W",
+  "ei": 0,
+  "error": "recovery code did not match",
+  "event": "recovery_code.used",
+  "message": "recovery code did not match",
+  "success": false,
+  "time": "2021-08-05T23:32:41.273Z",
+  "uid": "714625ab-48d5-51d0-ab1f-c4b267881594",
+  "user": "user@example.com"
+}
+```
+
+## recovery_token.create
+
+Recovery Token Created
+
+Example:
+
+```json
+{
+  "cluster_name": "im-a-cluster-name",
+  "code": "T6001I",
+  "ei": 0,
+  "event": "recovery_token.create",
+  "expires": "2021-08-05T21:56:14.935267Z",
+  "name": "user@example.com",
+  "time": "2021-08-05T21:41:14.935Z",
+  "ttl": "15m0s",
+  "uid": "29cd2ad5-f1cd-54d2-85fc-4910fbfc9bfa",
+  "user": "user@example.com"
+}
+```
+
+## reset_password_token.create
+
+Reset Password Token Created
+
+Example:
+
+```json
+{
+  "code": "T6000I",
+  "name": "hello",
+  "event": "reset_password_token.create",
+  "time": "2020-06-05T16:24:22Z",
+  "ttl": "8h0m0s",
+  "uid": "85fef5df-6dca-475e-a049-393f4cf1d6a3",
+  "user": "b331fb6c-85f9-4cb0-b308-3452420bf81e.one"
+}
+```
+
+## resize
+
+Terminal Resize
+
+Example:
+
+```json
+{
+  "code": "T2002I",
+  "ei": 3,
+  "event": "resize",
+  "login": "root",
+  "namespace": "default",
+  "sid": "56408539-6536-11e9-80a1-427cfde50f5a",
+  "size": "80:25",
+  "time": "2019-04-22T19:39:52.432Z",
+  "uid": "917d8108-3617-4273-ab37-7bbf8e7c1ab9",
+  "user": "admin@example.com"
+}
+```
+
+## role.created
+
+User Role Created
+
+Example:
+
+```json
+{
+  "code": "T9000I",
+  "event": "role.created",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+## role.deleted
+
+User Role Deleted
+
+Example:
+
+```json
+{
+  "code": "T9001I",
+  "event": "role.deleted",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+## role.updated
+
+User Role Updated
+
+Example:
+
+```json
+{
+  "code": "T9002I",
+  "event": "role.updated",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+## saml.created
+
+SAML Connector Created
+
+Example:
+
+```json
+{
+  "code": "T8200I",
+  "event": "saml.created",
+  "name": "new_saml_connector",
+  "time": "2020-06-05T19:29:14Z",
+  "uid": "6208b4b9-0077-41aa-967a-f173b6bcc0d3",
+  "user": "unimplemented"
+}
+```
+
+## saml.deleted
+
+SAML Connector Deleted
+
+Example:
+
+```json
+{
+  "code": "T8201I",
+  "event": "saml.deleted",
+  "name": "new_saml_connector",
+  "time": "2020-06-05T19:29:14Z",
+  "uid": "6208b4b9-0077-41aa-967a-f173b6bcc0d3",
+  "user": "unimplemented"
+}
+```
+
+## saml.idp.auth
+
+SAML IdP authentication
+
+Example:
+
+```json
+{
+  "code": "TSI000I",
+  "event": "saml.idp.auth",
+  "time": "2023-01-25T19:21:36.144Z",
+  "user": "mike",
+  "session_id": "123456",
+  "success": true,
+  "service_provider_entity_id": "valid-entity-id"
+}
+```
+
+## saml.idp.service.provider.create
+
+There are multiple events with the `saml.idp.service.provider.create` type.
+
+### TSI001I
+
+SAML IdP service provider created
+
+Example:
+
+```json
+{
+  "code": "TSI001I",
+  "event": "saml.idp.service.provider.create",
+  "time": "2023-01-25T19:21:36.144Z",
+  "name": "saml-idp",
+  "updated_by": "mike",
+  "service_provider_entity_id": "valid-entity-id"
+}
+```
+
+### TSI001W
+
+SAML IdP service provider create failed
+
+Example:
+
+```json
+{
+  "code": "TSI001W",
+  "event": "saml.idp.service.provider.create",
+  "time": "2023-01-25T19:21:36.144Z",
+  "name": "saml-idp",
+  "updated_by": "mike",
+  "service_provider_entity_id": "valid-entity-id"
+}
+```
+
+## saml.idp.service.provider.delete
+
+There are multiple events with the `saml.idp.service.provider.delete` type.
+
+### TSI003I
+
+SAML IdP service provider deleted
+
+Example:
+
+```json
+{
+  "code": "TSI003I",
+  "event": "saml.idp.service.provider.delete",
+  "time": "2023-01-25T19:21:36.144Z",
+  "name": "saml-idp",
+  "updated_by": "mike",
+  "service_provider_entity_id": "valid-entity-id"
+}
+```
+
+### TSI003W
+
+SAML IdP service provider delete failed
+
+Example:
+
+```json
+{
+  "code": "TSI003W",
+  "event": "saml.idp.service.provider.delete",
+  "time": "2023-01-25T19:21:36.144Z",
+  "name": "saml-idp",
+  "updated_by": "mike",
+  "service_provider_entity_id": "valid-entity-id"
+}
+```
+
+### TSI004I
+
+All SAML IdP service provider deleted
+
+Example:
+
+```json
+{
+  "code": "TSI004I",
+  "event": "saml.idp.service.provider.delete",
+  "time": "2023-01-25T19:21:36.144Z",
+  "name": "saml-idp",
+  "updated_by": "mike"
+}
+```
+
+### TSI004W
+
+SAML IdP service provider delete failed
+
+Example:
+
+```json
+{
+  "code": "TSI004W",
+  "event": "saml.idp.service.provider.delete",
+  "time": "2023-01-25T19:21:36.144Z",
+  "name": "saml-idp",
+  "updated_by": "mike"
+}
+```
+
+## saml.idp.service.provider.update
+
+There are multiple events with the `saml.idp.service.provider.update` type.
+
+### TSI002I
+
+SAML IdP service provider updated
+
+Example:
+
+```json
+{
+  "code": "TSI002I",
+  "event": "saml.idp.service.provider.update",
+  "time": "2023-01-25T19:21:36.144Z",
+  "name": "saml-idp",
+  "updated_by": "mike",
+  "service_provider_entity_id": "valid-entity-id"
+}
+```
+
+### TSI002W
+
+SAML IdP service provider update failed
+
+Example:
+
+```json
+{
+  "code": "TSI002W",
+  "event": "saml.idp.service.provider.update",
+  "time": "2023-01-25T19:21:36.144Z",
+  "name": "saml-idp",
+  "updated_by": "mike",
+  "service_provider_entity_id": "valid-entity-id"
+}
+```
+
+## saml.updated
+
+SAML Connector Updated
+
+Example:
+
+```json
+{
+  "code": "T8202I",
+  "event": "saml.updated",
+  "name": "new_saml_connector",
+  "time": "2020-06-05T19:29:14Z",
+  "uid": "6208b4b9-0077-41aa-967a-f173b6bcc0d3",
+  "user": "unimplemented"
+}
+```
+
+## scim.create
+
+There are multiple events with the `scim.create` type.
+
+### TSCIM001I
+
+SCIM Resource Creation Succeeded
+
+Example:
+
+```json
+{
+  "ei": 163,
+  "time": "2023-05-08T19:21:36.144Z",
+  "cluster": "dev",
+  "code": "TSCIM001I",
+  "event": "scim.create",
+  "success": true,
+  "request": {
+    "id": "ff5cea87-db00-4fa8-a30f-99f220f61075",
+    "source_address": "127.0.0.1",
+    "user_agent": "carrier pigeon",
+    "method": "PUT",
+    "path": "/scim/v2/Users",
+    "body": {
+      "active": true,
+      "id": "external-id-0987654321",
+      "nickName": "bofh",
+      "schemas": [
+        "urn:ietf:params:scim:schemas:core:2.0:User"
+      ],
+      "userName": "root@localhost"
+    }
+  },
+  "integration": "okta",
+  "resource_type": "user",
+  "external_id": "external-id-0987654321",
+  "teleport_id": "root@localhost"
+}
+```
+
+### TSCIM001E
+
+SCIM Resource Creation Failed
+
+Example:
+
+```json
+{
+  "time": "2023-05-08T19:21:36.144Z",
+  "cluster": "dev",
+  "code": "TSCIM001E",
+  "event": "scim.create",
+  "success": false,
+  "error": "Too many candidates",
+  "integration": "okta",
+  "resource_type": "group",
+  "teleport_id": "access-list-guid",
+  "external_id": "0987654321",
+  "display": "Some group"
+}
+```
+
+## scim.delete
+
+There are multiple events with the `scim.delete` type.
+
+### TSCIM003I
+
+SCIM Delete Succeeded
+
+Example:
+
+```json
+{
+  "time": "2023-05-08T19:21:36.144Z",
+  "cluster": "dev",
+  "code": "TSCIM003I",
+  "event": "scim.delete",
+  "success": true,
+  "integration": "okta",
+  "resource_type": "user",
+  "teleport_id": "test@example.com",
+  "external_id": "external-id-00123456789",
+  "display": "test@example.com"
+}
+```
+
+### TSCIM003E
+
+SCIM Delete Failed
+
+Example:
+
+```json
+{
+  "time": "2023-05-08T19:21:37.000Z",
+  "cluster": "dev",
+  "code": "TSCIM003E",
+  "event": "scim.delete",
+  "resource_type": "group",
+  "success": false,
+  "error": "no such group",
+  "integration": "okta",
+  "teleport_id": "access-list-guid",
+  "external_id": "external-id-00123456789",
+  "display": "some group"
+}
+```
+
+## scim.get
+
+There are multiple events with the `scim.get` type.
+
+### TSCIM004I
+
+SCIM Resource Fetch Succeeded
+
+Example:
+
+```json
+{
+  "code": "TSCIM004I",
+  "event": "scim.get",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+### TSCIM004E
+
+SCIM Resource Fetch Failed
+
+Example:
+
+```json
+{
+  "code": "TSCIM004E",
+  "event": "scim.get",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+## scim.list
+
+There are multiple events with the `scim.list` type.
+
+### TSCIM005I
+
+SCIM Resource Listing Succeeded
+
+Example:
+
+```json
+{
+  "ei": 163,
+  "time": "2023-05-08T19:21:36.144Z",
+  "cluster": "dev",
+  "code": "TSCIM005I",
+  "event": "scim.list",
+  "success": true,
+  "integration": "okta",
+  "resource_type": "user",
+  "teleport_id": "root@localhost",
+  "external_id": "external-id-0987654321",
+  "display": "local admin"
+}
+```
+
+### TSCIM005IE
+
+SCIM Resource Listing Failed
+
+Example:
+
+```json
+{
+  "code": "TSCIM005IE",
+  "event": "scim.list",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+## scim.update
+
+There are multiple events with the `scim.update` type.
+
+### TSCIM002I
+
+SCIM Update Succeeded
+
+Example:
+
+```json
+{
+  "time": "2023-05-08T19:21:36.144Z",
+  "cluster": "dev",
+  "code": "TSCIM002I",
+  "event": "scim.update",
+  "success": true,
+  "integration": "okta",
+  "resource_type": "user",
+  "teleport_id": "test@example.com",
+  "external_id": "externa-id-00123456789",
+  "display": "test@example.com"
+}
+```
+
+### TSCIM002E
+
+SCIM Update Failed
+
+Example:
+
+```json
+{
+  "time": "2023-05-08T19:21:37.000Z",
+  "cluster": "dev",
+  "code": "TSCIM002E",
+  "event": "scim.update",
+  "resource_type": "user",
+  "success": false,
+  "error": "no such user",
+  "integration": "okta",
+  "teleport_id": "test@example.com",
+  "external_id": "external-id-000123456789",
+  "display": "test@example.com"
+}
+```
+
+## scp
+
+There are multiple events with the `scp` type.
+
+### T3004I
+
+SCP Download
+
+Example:
+
+```json
+{
+  "code": "T3004I",
+  "action": "download",
+  "addr.local": "172.31.28.130:3022",
+  "addr.remote": "127.0.0.1:55594",
+  "event": "scp",
+  "login": "root",
+  "namespace": "default",
+  "path": "~/fsdfsdfsdfsdfs",
+  "time": "2019-04-22T19:41:23Z",
+  "uid": "183ca6de-c24b-4f67-854f-163c01245fa1",
+  "user": "admin@example.com"
+}
+```
+
+### T3004E
+
+SCP Download Failed
+
+Example:
+
+```json
+{
+  "action": "download",
+  "addr.local": "192.168.0.105:3022",
+  "addr.remote": "127.0.0.1:39932",
+  "cluster_name": "im-a-cluster-name",
+  "code": "T3004E",
+  "command": "/home/path scp --remote-addr=\"127.0.0.1:39932\" --local-addr=\"111.222.0.105:3022\" -f ~/sdfsdf",
+  "ei": 0,
+  "event": "scp",
+  "exitCode": "1",
+  "exitError": "exit status 1",
+  "login": "root",
+  "namespace": "default",
+  "path": "~/sdfsdf",
+  "server_id": "8045a8cc-49bb-4e02-bdc99313",
+  "sid": "8ff117ec-70a2-4481-8e359cf6",
+  "time": "2019-04-22T19:41:23Z",
+  "uid": "30e13b84-a51f-467676258b9bf",
+  "user": "root"
+}
+```
+
+### T3005I
+
+SCP Upload
+
+Example:
+
+```json
+{
+  "action": "upload",
+  "addr.local": "192.168.0.105:3022",
+  "addr.remote": "127.0.0.1:57058",
+  "cluster_name": "im-a-cluster-name",
+  "code": "T3005I",
+  "command": "/home/path scp --remote-addr=\"127.0.0.1:57058\" --local-addr=\"111.222.0.105:3022\" -t ~/",
+  "ei": 0,
+  "event": "scp",
+  "exitCode": "0",
+  "login": "root",
+  "namespace": "default",
+  "path": "~/",
+  "server_id": "8045a8cc-49bb-4e02-bdc5-a782a313",
+  "sid": "b484b5cc-9065-40fa-9a0c-db3",
+  "time": "2019-04-22T19:41:23Z",
+  "uid": "16bfdc34-2766-a5d3-dfd6f7ff7ad6",
+  "user": "root"
+}
+```
+
+### T3005E
+
+SCP Upload Failed
+
+Example:
+
+```json
+{
+  "code": "T3005E",
+  "event": "scp",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+### T3010E
+
+SCP Disallowed
+
+Example:
+
+```json
+{
+  "code": "T3010E",
+  "event": "scp",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+## secreports.audit.query.run
+
+Access Monitoring Query Executed
+
+Example:
+
+```json
+{
+  "cluster_name": "root.com",
+  "code": "SRE001I",
+  "data_scanned_in_bytes": 4045,
+  "days": 90,
+  "event": "secreports.audit.query.run",
+  "query": "select * FROM cert_create",
+  "success": true,
+  "time": "2023-10-09T10:09:10.473Z",
+  "total_execution_time_in_millis": 1440,
+  "uid": "dc29d36c-c5b6-4ffc-9aa7-2d9ba18a3953",
+  "user": "marek"
+}
+```
+
+## secreports.report.run
+
+Access Monitoring Report Executed
+
+Example:
+
+```json
+{
+  "cluster_name": "root.com",
+  "code": "SRE002I",
+  "data_scanned_in_bytes": 13258,
+  "event": "secreports.report.run",
+  "name": "privilege_access_report_90_days",
+  "success": true,
+  "time": "2023-10-09T09:10:03.633Z",
+  "total_execution_time_in_millis": 14082,
+  "uid": "f44871b9-7247-467b-a760-8159d3f47bac",
+  "user": "system"
+}
+```
+
+## session.command
+
+Session Command
+
+Example:
+
+```json
+{
+  "argv": [
+    "google.com"
+  ],
+  "cgroup_id": 4294968064,
+  "code": "T4000I",
+  "ei": 5,
+  "event": "session.command",
+  "login": "root",
+  "namespace": "default",
+  "path": "/bin/ping",
+  "pid": 2653,
+  "ppid": 2660,
+  "program": "ping",
+  "return_code": 0,
+  "server_id": "96f2bed2-ebd1-494a-945c-2fd57de41644",
+  "sid": "44c6cea8-362f-11ea-83aa-125400432324",
+  "time": "2020-01-13T18:05:53.919Z",
+  "uid": "734930bb-00e6-4ee6-8798-37f1e9473fac",
+  "user": "benarent"
+}
+```
+
+## session.connect
+
+Session Connected
+
+Example:
+
+```json
+{
+  "addr.local": "192.168.0.106:43858",
+  "addr.remote": "192.168.0.106:3022",
+  "cluster_name": "im-a-cluster-name",
+  "code": "T2010I",
+  "ei": 0,
+  "event": "session.connect",
+  "server_addr": "192.168.0.106:43858",
+  "server_id": "bd5eff-f59b-4fb3-b8ed-757c52ff",
+  "time": "2022-02-04T18:15:28.572Z",
+  "uid": "f2a0f9-d78c-4c38-b3fa-ca63453b"
+}
+```
+
+## session.data
+
+Session Data
+
+Example:
+
+```json
+{
+  "addr.local": "172.10.1.1:3022",
+  "addr.remote": "172.10.1.254:46992",
+  "code": "T2006I",
+  "ei": 2147483646,
+  "event": "session.data",
+  "login": "root",
+  "rx": 3974,
+  "server_id": "b331fb6c-85f9-4cb0-b308-3452420bf81e",
+  "sid": "5fc8bf85-a73e-11ea-afd1-0242ac0a0101",
+  "time": "2020-06-05T15:14:51Z",
+  "tx": 4730,
+  "uid": "2f2f07d0-8a01-4abe-b1c0-5001fd86829b",
+  "user": "Stanley_Cooper"
+}
+```
+
+## session.disk
+
+Session File Access
+
+Example:
+
+```json
+{
+  "code": "T4001I",
+  "event": "session.disk",
+  "namespace": "default",
+  "sid": "44c6cea8-362f-11ea-83aa-125400432324",
+  "server_id": "96f2bed2",
+  "login": "root",
+  "user": "benarent",
+  "pid": 2653,
+  "cgroup_id": 4294968064,
+  "program": "bash",
+  "path": "/etc/profile.d/",
+  "flags": 2100000,
+  "return_code": 0,
+  "time": "2019-04-22T19:39:26.676Z"
+}
+```
+
+## session.end
+
+Session Ended
+
+Example:
+
+```json
+{
+  "cluster_name": "kimlisa.cloud.gravitational.io",
+  "code": "T2004I",
+  "ei": 1,
+  "enhanced_recording": false,
+  "event": "session.end",
+  "interactive": false,
+  "login": "root",
+  "namespace": "default",
+  "participants": [
+    "foo"
+  ],
+  "server_addr": "172.31.30.254:32962",
+  "server_hostname": "ip-172-31-30-254",
+  "server_id": "d3ddd1f8-b602-488b-00c66e29879f",
+  "session_start": "2021-05-21T22:23:55.313562027Z",
+  "session_stop": "2021-05-21T22:54:27.122508023Z",
+  "sid": "9d92ad96-a45c-4add-463cc7bc48b1",
+  "time": "2021-05-21T22:54:27.123Z",
+  "uid": "984ac949-6605-4f0a-e450aa5665f4",
+  "user": "foo"
+}
+```
+
+## session.join
+
+User Joined
+
+Example:
+
+```json
+{
+  "addr.local": "172.31.28.130:3022",
+  "addr.remote": "151.181.228.114:51752",
+  "code": "T2001I",
+  "ei": 4,
+  "event": "session.join",
+  "login": "root",
+  "namespace": "default",
+  "server_id": "de3800ea-69d9-4d72-a108-97e57f8eb393",
+  "sid": "56408539-6536-11e9-80a1-427cfde50f5a",
+  "time": "2019-04-22T19:39:52.434Z",
+  "uid": "13d26190-289b-41d4-af67-c8c8b0617ebe",
+  "user": "admin@example.com"
+}
+```
+
+## session.leave
+
+User Disconnected
+
+Example:
+
+```json
+{
+  "code": "T2003I",
+  "event": "session.leave",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+## session.network
+
+Session Network Connection
+
+Example:
+
+```json
+{
+  "code": "T4002I",
+  "event": "session.network",
+  "namespace": "default",
+  "sid": "44c6cea8-362f-11ea-83aa-125400432324",
+  "server_id": "96f2bed2",
+  "login": "root",
+  "user": "benarent",
+  "pid": 2653,
+  "cgroup_id": 4294968064,
+  "program": "bash",
+  "src_addr": "10.217.136.161",
+  "dst_addr": "190.58.129.4",
+  "dst_port": "3000",
+  "version": 4,
+  "time": "2019-04-22T19:39:26.676Z",
+  "action": 1
+}
+```
+
+## session.process_exit
+
+Session Process Exit
+
+Example:
+
+```json
+{
+  "code": "T4003I",
+  "event": "session.process_exit",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+## session.recording.access
+
+Session Recording Accessed
+
+Example:
+
+```json
+{
+  "code": "T2012I",
+  "event": "session.recording.access",
+  "sid": "44c6cea8-362f-11ea-83aa-125400432324",
+  "success": true,
+  "time": "2022-07-14T18:04:37.067Z",
+  "uid": "7d440ee1-15f6-4b56-9391-344e8984fd97",
+  "user": "ops@gravitational.io"
+}
+```
+
+## session.rejected
+
+Session Rejected
+
+Example:
+
+```json
+{
+  "code": "T1006W",
+  "event": "session.rejected",
+  "time": "2020-06-05T16:24:05Z",
+  "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0"
+}
+```
+
+## session.start
+
+Session Started
+
+Example:
+
+```json
+{
+  "addr.local": "172.31.28.130:3022",
+  "addr.remote": "151.181.228.114:51454",
+  "code": "T2000I",
+  "ei": 0,
+  "event": "session.start",
+  "login": "root",
+  "namespace": "default",
+  "server_id": "de3800ea-69d9-4d72-a108-97e57f8eb393",
+  "sid": "56408539-6536-11e9-80a1-427cfde50f5a",
+  "size": "80:25",
+  "time": "2019-04-22T19:39:26.676Z",
+  "uid": "84c07a99-856c-419f-9de5-15560451a116",
+  "user": "admin@example.com"
+}
+```
+
+## session.upload
+
+Session Uploaded
+
+Example:
+
+```json +{ + "code": "T2005I", + "event": "session.upload", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## session_recording_config.update + +Session Recording Configuration Updated + +Example: + +```json +{ + "code": "TCREC003I", + "event": "session_recording_config.update", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## sftp + +There are multiple events with the `sftp` type. + +### TS001I + +SFTP Open + +Example: + +```json +{ + "action": 1, + "addr.local": "[::1]:3022", + "addr.remote": "127.0.0.1:41106", + "cluster_name": "im-a-cluster-name", + "code": "TS001I", + "ei": 0, + "event": "sftp", + "login": "root", + "namespace": "default", + "path": "/tmp/file", + "server_hostname": "im-a-server-hostname", + "server_id": "e106fdd0-51db-4efa-a9ab-c3afa7a1565a", + "sid": "", + "time": "2019-04-22T19:41:23Z", + "uid": "16bfdc34-2766-a5d3-dfd6f7ff7ad6", + "user": "root", + "working_directory": "/root" +} +``` + +### TS001E + +SFTP Open Failed + +Example: + +```json +{ + "action": 1, + "addr.local": "[::1]:3022", + "addr.remote": "127.0.0.1:41106", + "cluster_name": "im-a-cluster-name", + "code": "TS001E", + "ei": 0, + "error": "EOF", + "event": "sftp", + "login": "root", + "namespace": "default", + "path": "/tmp/file", + "server_hostname": "im-a-server-hostname", + "server_id": "e106fdd0-51db-4efa-a9ab-c3afa7a1565a", + "sid": "", + "time": "2019-04-22T19:41:23Z", + "uid": "16bfdc34-2766-a5d3-dfd6f7ff7ad6", + "user": "root", + "working_directory": "/root" +} +``` + +### TS007I + +SFTP Setstat + +Example: + +```json +{ + "action": 7, + "addr.local": "[::1]:3022", + "addr.remote": "127.0.0.1:41106", + "cluster_name": "im-a-cluster-name", + "code": "TS007I", + "ei": 0, + "event": "sftp", + "login": "root", + "namespace": "default", + "path": "/tmp/file", + "server_hostname": "im-a-server-hostname", + "server_id": "e106fdd0-51db-4efa-a9ab-c3afa7a1565a", + "sid": "", + "time": 
"2019-04-22T19:41:23Z", + "uid": "16bfdc34-2766-a5d3-dfd6f7ff7ad6", + "user": "root", + "working_directory": "/root" +} +``` + +### TS007E + +SFTP Setstat Failed + +Example: + +```json +{ + "action": 7, + "addr.local": "[::1]:3022", + "addr.remote": "127.0.0.1:41106", + "cluster_name": "im-a-cluster-name", + "code": "TS007E", + "ei": 0, + "error": "EOF", + "event": "sftp", + "login": "root", + "namespace": "default", + "path": "/tmp/file", + "server_hostname": "im-a-server-hostname", + "server_id": "e106fdd0-51db-4efa-a9ab-c3afa7a1565a", + "sid": "", + "time": "2019-04-22T19:41:23Z", + "uid": "16bfdc34-2766-a5d3-dfd6f7ff7ad6", + "user": "root", + "working_directory": "/root" +} +``` + +### TS009I + +SFTP Opendir + +Example: + +```json +{ + "action": 9, + "addr.local": "[::1]:3022", + "addr.remote": "127.0.0.1:41106", + "cluster_name": "im-a-cluster-name", + "code": "TS009I", + "ei": 0, + "event": "sftp", + "login": "root", + "namespace": "default", + "path": "/tmp/file", + "server_hostname": "im-a-server-hostname", + "server_id": "e106fdd0-51db-4efa-a9ab-c3afa7a1565a", + "sid": "", + "time": "2019-04-22T19:41:23Z", + "uid": "16bfdc34-2766-a5d3-dfd6f7ff7ad6", + "user": "root", + "working_directory": "/root" +} +``` + +### TS009E + +SFTP Opendir Failed + +Example: + +```json +{ + "action": 9, + "addr.local": "[::1]:3022", + "addr.remote": "127.0.0.1:41106", + "cluster_name": "im-a-cluster-name", + "code": "TS009E", + "ei": 0, + "error": "EOF", + "event": "sftp", + "login": "root", + "namespace": "default", + "path": "/tmp/file", + "server_hostname": "im-a-server-hostname", + "server_id": "e106fdd0-51db-4efa-a9ab-c3afa7a1565a", + "sid": "", + "time": "2019-04-22T19:41:23Z", + "uid": "16bfdc34-2766-a5d3-dfd6f7ff7ad6", + "user": "root", + "working_directory": "/root" +} +``` + +### TS010I + +SFTP Readdir + +Example: + +```json +{ + "action": 10, + "addr.local": "[::1]:3022", + "addr.remote": "127.0.0.1:41106", + "cluster_name": "im-a-cluster-name", + "code": "TS010I", + 
"ei": 0, + "event": "sftp", + "login": "root", + "namespace": "default", + "path": "/tmp/file", + "server_hostname": "im-a-server-hostname", + "server_id": "e106fdd0-51db-4efa-a9ab-c3afa7a1565a", + "sid": "", + "time": "2019-04-22T19:41:23Z", + "uid": "16bfdc34-2766-a5d3-dfd6f7ff7ad6", + "user": "root", + "working_directory": "/root" +} +``` + +### TS010E + +SFTP Readdir Failed + +Example: + +```json +{ + "action": 10, + "addr.local": "[::1]:3022", + "addr.remote": "127.0.0.1:41106", + "cluster_name": "im-a-cluster-name", + "code": "TS010E", + "ei": 0, + "error": "EOF", + "event": "sftp", + "login": "root", + "namespace": "default", + "path": "/tmp/file", + "server_hostname": "im-a-server-hostname", + "server_id": "e106fdd0-51db-4efa-a9ab-c3afa7a1565a", + "sid": "", + "time": "2019-04-22T19:41:23Z", + "uid": "16bfdc34-2766-a5d3-dfd6f7ff7ad6", + "user": "root", + "working_directory": "/root" +} +``` + +### TS011I + +SFTP Remove + +Example: + +```json +{ + "action": 11, + "addr.local": "[::1]:3022", + "addr.remote": "127.0.0.1:41106", + "cluster_name": "im-a-cluster-name", + "code": "TS011I", + "ei": 0, + "event": "sftp", + "login": "root", + "namespace": "default", + "path": "/tmp/file", + "server_hostname": "im-a-server-hostname", + "server_id": "e106fdd0-51db-4efa-a9ab-c3afa7a1565a", + "sid": "", + "time": "2019-04-22T19:41:23Z", + "uid": "16bfdc34-2766-a5d3-dfd6f7ff7ad6", + "user": "root", + "working_directory": "/root" +} +``` + +### TS011E + +SFTP Remove Failed + +Example: + +```json +{ + "action": 11, + "addr.local": "[::1]:3022", + "addr.remote": "127.0.0.1:41106", + "cluster_name": "im-a-cluster-name", + "code": "TS011E", + "ei": 0, + "error": "EOF", + "event": "sftp", + "login": "root", + "namespace": "default", + "path": "/tmp/file", + "server_hostname": "im-a-server-hostname", + "server_id": "e106fdd0-51db-4efa-a9ab-c3afa7a1565a", + "sid": "", + "time": "2019-04-22T19:41:23Z", + "uid": "16bfdc34-2766-a5d3-dfd6f7ff7ad6", + "user": "root", + 
"working_directory": "/root" +} +``` + +### TS012I + +SFTP Mkdir + +Example: + +```json +{ + "action": 12, + "addr.local": "[::1]:3022", + "addr.remote": "127.0.0.1:41106", + "cluster_name": "im-a-cluster-name", + "code": "TS012I", + "ei": 0, + "event": "sftp", + "login": "root", + "namespace": "default", + "path": "/tmp/file", + "server_hostname": "im-a-server-hostname", + "server_id": "e106fdd0-51db-4efa-a9ab-c3afa7a1565a", + "sid": "", + "time": "2019-04-22T19:41:23Z", + "uid": "16bfdc34-2766-a5d3-dfd6f7ff7ad6", + "user": "root", + "working_directory": "/root" +} +``` + +### TS012E + +SFTP Mkdir Failed + +Example: + +```json +{ + "action": 12, + "addr.local": "[::1]:3022", + "addr.remote": "127.0.0.1:41106", + "cluster_name": "im-a-cluster-name", + "code": "TS012E", + "ei": 0, + "error": "EOF", + "event": "sftp", + "login": "root", + "namespace": "default", + "path": "/tmp/file", + "server_hostname": "im-a-server-hostname", + "server_id": "e106fdd0-51db-4efa-a9ab-c3afa7a1565a", + "sid": "", + "time": "2019-04-22T19:41:23Z", + "uid": "16bfdc34-2766-a5d3-dfd6f7ff7ad6", + "user": "root", + "working_directory": "/root" +} +``` + +### TS013I + +SFTP Rmdir + +Example: + +```json +{ + "action": 13, + "addr.local": "[::1]:3022", + "addr.remote": "127.0.0.1:41106", + "cluster_name": "im-a-cluster-name", + "code": "TS013I", + "ei": 0, + "event": "sftp", + "login": "root", + "namespace": "default", + "path": "/tmp/file", + "server_hostname": "im-a-server-hostname", + "server_id": "e106fdd0-51db-4efa-a9ab-c3afa7a1565a", + "sid": "", + "time": "2019-04-22T19:41:23Z", + "uid": "16bfdc34-2766-a5d3-dfd6f7ff7ad6", + "user": "root", + "working_directory": "/root" +} +``` + +### TS013E + +SFTP Rmdir Failed + +Example: + +```json +{ + "action": 13, + "addr.local": "[::1]:3022", + "addr.remote": "127.0.0.1:41106", + "cluster_name": "im-a-cluster-name", + "code": "TS013E", + "ei": 0, + "error": "EOF", + "event": "sftp", + "login": "root", + "namespace": "default", + "path": 
"/tmp/file", + "server_hostname": "im-a-server-hostname", + "server_id": "e106fdd0-51db-4efa-a9ab-c3afa7a1565a", + "sid": "", + "time": "2019-04-22T19:41:23Z", + "uid": "16bfdc34-2766-a5d3-dfd6f7ff7ad6", + "user": "root", + "working_directory": "/root" +} +``` + +### TS016I + +SFTP Rename + +Example: + +```json +{ + "action": 16, + "addr.local": "[::1]:3022", + "addr.remote": "127.0.0.1:41106", + "cluster_name": "im-a-cluster-name", + "code": "TS016I", + "ei": 0, + "event": "sftp", + "login": "root", + "namespace": "default", + "path": "/tmp/file", + "server_hostname": "im-a-server-hostname", + "server_id": "e106fdd0-51db-4efa-a9ab-c3afa7a1565a", + "sid": "", + "time": "2019-04-22T19:41:23Z", + "uid": "16bfdc34-2766-a5d3-dfd6f7ff7ad6", + "user": "root", + "working_directory": "/root" +} +``` + +### TS016E + +SFTP Rename Failed + +Example: + +```json +{ + "action": 16, + "addr.local": "[::1]:3022", + "addr.remote": "127.0.0.1:41106", + "cluster_name": "im-a-cluster-name", + "code": "TS016E", + "ei": 0, + "error": "EOF", + "event": "sftp", + "login": "root", + "namespace": "default", + "path": "/tmp/file", + "server_hostname": "im-a-server-hostname", + "server_id": "e106fdd0-51db-4efa-a9ab-c3afa7a1565a", + "sid": "", + "time": "2019-04-22T19:41:23Z", + "uid": "16bfdc34-2766-a5d3-dfd6f7ff7ad6", + "user": "root", + "working_directory": "/root" +} +``` + +### TS018I + +SFTP Symlink + +Example: + +```json +{ + "action": 18, + "addr.local": "[::1]:3022", + "addr.remote": "127.0.0.1:41106", + "cluster_name": "im-a-cluster-name", + "code": "TS018I", + "ei": 0, + "event": "sftp", + "login": "root", + "namespace": "default", + "path": "/tmp/file", + "server_hostname": "im-a-server-hostname", + "server_id": "e106fdd0-51db-4efa-a9ab-c3afa7a1565a", + "sid": "", + "time": "2019-04-22T19:41:23Z", + "uid": "16bfdc34-2766-a5d3-dfd6f7ff7ad6", + "user": "root", + "working_directory": "/root" +} +``` + +### TS018E + +SFTP Symlink Failed + +Example: + +```json +{ + "action": 18, + 
"addr.local": "[::1]:3022", + "addr.remote": "127.0.0.1:41106", + "cluster_name": "im-a-cluster-name", + "code": "TS018E", + "ei": 0, + "error": "EOF", + "event": "sftp", + "login": "root", + "namespace": "default", + "path": "/tmp/file", + "server_hostname": "im-a-server-hostname", + "server_id": "e106fdd0-51db-4efa-a9ab-c3afa7a1565a", + "sid": "", + "time": "2019-04-22T19:41:23Z", + "uid": "16bfdc34-2766-a5d3-dfd6f7ff7ad6", + "user": "root", + "working_directory": "/root" +} +``` + +### TS019I + +SFTP Link + +Example: + +```json +{ + "code": "TS019I", + "event": "sftp", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +### TS019E + +SFTP Link Failed + +Example: + +```json +{ + "code": "TS019E", + "event": "sftp", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +### TS020E + +SFTP Disallowed + +Example: + +```json +{ + "code": "TS020E", + "event": "sftp", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## sftp_summary + +File Transfer Completed + +Example: + +```json +{ + "code": "TS021I", + "event": "sftp_summary", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## sigstore_policy.create + +Sigstore Policy Created + +Example: + +```json +{ + "addr.remote": "203.0.113.77:64794", + "cluster_name": "clustername", + "code": "TSSP001I", + "ei": 0, + "event": "sigstore_policy.create", + "expires": "0001-01-01T00:00:00Z", + "name": "default", + "time": "2025-03-26T01:14:36.881Z", + "uid": "e52def2f-4109-4cc9-91a8-150c6792f89f", + "user": "bob", + "user_kind": 1 +} +``` + +## sigstore_policy.delete + +Sigstore Policy Deleted + +Example: + +```json +{ + "addr.remote": "203.0.113.77:64794", + "cluster_name": "clustername", + "code": "TSSP003I", + "ei": 0, + "event": "sigstore_policy.delete", + "expires": "0001-01-01T00:00:00Z", + "name": "default", + "time": "2025-03-26T01:14:36.881Z", + "uid": 
"e52def2f-4109-4cc9-91a8-150c6792f89f", + "user": "bob", + "user_kind": 1 +} +``` + +## sigstore_policy.update + +Sigstore Policy Updated + +Example: + +```json +{ + "addr.remote": "203.0.113.77:64794", + "cluster_name": "clustername", + "code": "TSSP002I", + "ei": 0, + "event": "sigstore_policy.update", + "expires": "0001-01-01T00:00:00Z", + "name": "default", + "time": "2025-03-26T01:14:36.881Z", + "uid": "e52def2f-4109-4cc9-91a8-150c6792f89f", + "user": "bob", + "user_kind": 1 +} +``` + +## spiffe.svid.issued + +There are multiple events with the `spiffe.svid.issued` type. + +### TSPIFFE000I + +SPIFFE SVID Issued + +Example: + +```json +{ + "addr.remote": "127.0.0.1:54378", + "cluster_name": "leaf.tele.ottr.sh", + "code": "TSPIFFE000I", + "dns_sans": null, + "ei": 0, + "event": "spiffe.svid.issued", + "hint": "", + "ip_sans": null, + "serial_number": "d1:e5:fc:bf:19:67:e7:8c:7a:21:37:b5:05:ea:77:41", + "spiffe_id": "spiffe://example.teleport.com/bar", + "svid_type": "x509", + "time": "2024-02-02T15:48:25.35Z", + "uid": "45e13afc-0890-4ffb-b125-99d93c26d7de", + "user": "bot-test12", + "user_kind": 2 +} +``` + +### TSPIFFE000E + +SPIFFE SVID Issued Failure + +Example: + +```json +{ + "code": "TSPIFFE000E", + "event": "spiffe.svid.issued", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## ssm.run + +There are multiple events with the `ssm.run` type. 
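The `I`/`W`/`E` suffix on these event codes (informational, warning, error) makes bulk triage of an exported log straightforward. A minimal sketch, assuming events have already been parsed from a JSON export; the sample records below are abbreviated for illustration:

```python
# Teleport audit event codes end in a severity letter:
# I = informational, W = warning, E = error.
SEVERITY = {"I": "info", "W": "warning", "E": "error"}

def severity(event: dict) -> str:
    """Classify an audit event by the trailing letter of its code."""
    return SEVERITY.get(event.get("code", "")[-1:], "unknown")

# Abbreviated sample records modeled on the examples in this reference.
events = [
    {"code": "TDS00I", "event": "ssm.run"},
    {"code": "TDS00W", "event": "ssm.run"},
    {"code": "TS001E", "event": "sftp"},
]

# Keep only warnings and errors for review.
flagged = [e["code"] for e in events if severity(e) != "info"]
print(flagged)  # → ['TDS00W', 'TS001E']
```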
+ +### TDS00I + +SSM Command Executed + +Example: + +```json +{ + "account_id": "278576220453", + "cluster_name": "localhost", + "code": "TDS00I", + "command_id": "e8a5f3ba-e9e5-4cbd-979b-18fd1e7ad00f", + "ei": 0, + "event": "ssm.run", + "exit_code": 0, + "instance_id": "i-057d0ffe877128673", + "region": "eu-central-1", + "status": "Success", + "time": "2022-09-14T14:45:38.122Z", + "uid": "d053a9a4-6362-4d46-8868-55d83b7b338f" +} +``` + +### TDS00W + +SSM Command Execution Failed + +Example: + +```json +{ + "account_id": "278576220453", + "cluster_name": "localhost", + "code": "TDS00W", + "command_id": "c2936d68-fc0c-4c16-a860-916a97f57644", + "ei": 0, + "event": "ssm.run", + "exit_code": 1, + "instance_id": "i-057d0ffe877128673", + "region": "eu-central-1", + "status": "Failure", + "time": "2022-09-14T14:45:38.122Z", + "uid": "ad123558-1d20-42dd-bf82-a7c544d76550" +} +``` + +## stable_unix_user.create + +Stable UNIX user created + +Example: + +```json +{ + "code": "TSUU001I", + "event": "stable_unix_user.create", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## static_host_user.create + +Static Host User Created + +Example: + +```json +{ + "code": "SHU001I", + "event": "static_host_user.create", + "time": "2023-05-09T19:21:36.144Z", + "name": "test-user", + "user": "bob" +} +``` + +## static_host_user.delete + +Static Host User Deleted + +Example: + +```json +{ + "code": "SHU003I", + "updated_by": "joe", + "event": "static_host_user.delete", + "time": "2023-05-09T19:21:38.144Z", + "name": "test-user", + "user": "bob" +} +``` + +## static_host_user.update + +Static Host User Updated + +Example: + +```json +{ + "code": "SHU002I", + "event": "static_host_user.update", + "time": "2023-05-09T19:21:37.144Z", + "name": "test-user", + "user": "bob" +} +``` + +## subsystem + +There are multiple events with the `subsystem` type. 
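Minimal events such as these carry a small common core of fields. A sketch of a sanity check over that core — the field set is inferred from the minimal examples in this reference, not an official schema:

```python
# Fields present in every minimal example in this reference.
COMMON_FIELDS = {"code", "event", "time", "uid"}

def has_common_fields(event: dict) -> bool:
    """Return True if the event carries the common core of fields."""
    return COMMON_FIELDS.issubset(event)

sample = {
    "code": "T3001I",
    "event": "subsystem",
    "time": "2020-06-05T16:24:05Z",
    "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0",
}
print(has_common_fields(sample))                 # → True
print(has_common_fields({"event": "subsystem"})) # → False
```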
+ +### T3001I + +Subsystem Requested + +Example: + +```json +{ + "code": "T3001I", + "event": "subsystem", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +### T3001E + +Subsystem Request Failed + +Example: + +```json +{ + "code": "T3001E", + "event": "subsystem", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## trusted_cluster.create + +Trusted Cluster Created + +Example: + +```json +{ + "code": "T7000I", + "event": "trusted_cluster.create", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## trusted_cluster.delete + +Trusted Cluster Deleted + +Example: + +```json +{ + "code": "T7001I", + "event": "trusted_cluster.delete", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## trusted_cluster_token.create + +Trusted Cluster Token Created + +Example: + +```json +{ + "code": "T7002I", + "event": "trusted_cluster_token.create", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## unknown + +Unknown Event + +Example: + +```json +{ + "code": "TCC00E", + "event": "unknown", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## upgradewindowstart.update + +Upgrade Window Start Updated + +Example: + +```json +{ + "code": "TUW01I", + "time": "2022-04-13T20:00:04.000Z", + "user": "alice@example.com", + "event": "upgradewindowstart.update", + "upgrade_window_start": "23:00" +} +``` + +## user.create + +User Created + +Example: + +```json +{ + "code": "T1002I", + "connector": "local", + "name": "hello", + "event": "user.create", + "expires": "0001-01-01T00:00:00Z", + "roles": [ + "admin" + ], + "time": "2020-06-05T16:24:05Z", + "uid": "22a273678c-ee78-5ffc-a298-68a841555c98", + "user": "b331fb6c-85f9-4cb0-b308-3452420bf81e.one" +} +``` + +## user.delete + +User Deleted + +Example: + +```json +{ + "code": "T1004I", + 
"uid": "b121fc4c-e419-56a2-a760-19cd746c0650", + "time": "2020-06-05T16:24:05Z", + "event": "user.delete", + "name": "bob", + "user": "benarent" +} +``` + +## user.login + +There are multiple events with the `user.login` type. + +### T1000I + +Local Login + +Example: + +```json +{ + "code": "T1000I", + "event": "user.login", + "method": "local", + "success": true, + "time": "2019-04-22T00:49:03Z", + "uid": "173d6b6e-d613-44be-8ff6-f9f893791ef2", + "user": "admin@example.com" +} +``` + +### T1000W + +Local Login Failed + +Example: + +```json +{ + "code": "T1000W", + "error": "user(name=\"fsdfsdf\") not found", + "event": "user.login", + "method": "local", + "success": false, + "time": "2019-04-22T18:06:32Z", + "uid": "597bf08b-75b2-4dda-a578-e387c5ce9b76", + "user": "fsdfsdf" +} +``` + +### T1010I + +SSO Test Flow Login + +Example: + +```json +{ + "attributes": { + "amr": [ + "pwd" + ], + "at_hash": "7_foQ_0QRVU5dIq_B72_zw", + "aud": "0oa17kaknnntGFKiJ0h8", + "auth_time": 1653294514, + "email": "ops@gravitational.io", + "email_verified": true, + "exp": 1653298115, + "groups": [ + "Everyone", + "okta-admin", + "okta-dev" + ], + "iat": 1653294515, + "idp": "00oafg105f5D4gv5Y0h7", + "iss": "https://dev-813354.oktapreview.com", + "jti": "ID.e_EKsCvMELMLa-Gx0aciOazUvPEFdZSxhTj42zccz3g", + "sub": "00uafg106hK16pwqE0h7", + "ver": 1 + }, + "cluster_name": "boson.tener.io", + "code": "T1010I", + "ei": 0, + "event": "user.login", + "method": "oidc", + "success": true, + "time": "2022-05-23T08:28:37.067Z", + "uid": "7d440ee1-15f6-4b56-9391-344e8984fd97", + "user": "ops@gravitational.io" +} +``` + +### T1011W + +SSO Test Flow Login Failed + +Example: + +```json +{ + "attributes": { + "amr": [ + "pwd" + ], + "at_hash": "Xz4ibHjouHuIIBOSgWm07w", + "aud": "0oa17kaknnntGFKiJ0h8", + "auth_time": 1653294514, + "email": "ops@gravitational.io", + "email_verified": true, + "exp": 1653298153, + "groups": [ + "Everyone", + "okta-admin", + "okta-dev" + ], + "iat": 1653294553, + "idp": 
"00oafg105f5D4gv5Y0h7", + "iss": "https://dev-813354.oktapreview.com", + "jti": "ID.h0qtjVPXttmNEHb-yHOvziD20Mru4qiw8L3i74se8YA", + "sub": "00uafg106hK16pwqE0h7", + "ver": 1 + }, + "cluster_name": "boson.tener.io", + "code": "T1011W", + "ei": 0, + "error": "No roles mapped from claims. The mappings may contain typos.", + "event": "user.login", + "message": "Failed to calculate user attributes.\n\tNo roles mapped from claims. The mappings may contain typos.", + "method": "oidc", + "success": false, + "time": "2022-05-23T08:29:14.126Z", + "uid": "6fa08495-170a-4de9-884f-9931fbdb5982" +} +``` + +### T1012I + +Headless Login Requested + +Example: + +```json +{ + "addr.remote": "1.1.1.1:42", + "code": "T1012I", + "cluster_name": "root.cluster", + "event": "user.login", + "method": "headless", + "ei": 0, + "success": false, + "time": "2019-04-22T00:49:03Z", + "uid": "173d6b6e-d613-44be-8ff6-f9f893791ef4", + "user": "admin@example.com" +} +``` + +### T1013I + +Headless Login Approved + +Example: + +```json +{ + "addr.remote": "2.2.2.2:42", + "code": "T1013I", + "cluster_name": "root.cluster", + "event": "user.login", + "method": "headless", + "ei": 0, + "success": true, + "time": "2019-04-22T00:49:03Z", + "uid": "173d6b6e-d613-44be-8ff6-f9f893791ef5", + "user": "admin@example.com", + "message": "Headless login was requested from the address 1.1.1.1:42" +} +``` + +### T1013W + +Headless Login Failed + +Example: + +```json +{ + "addr.remote": "2.2.2.2:42", + "code": "T1013W", + "error": "user(name=\"fsdfsdf\") not found", + "cluster_name": "root.cluster", + "event": "user.login", + "method": "headless", + "ei": 0, + "success": true, + "time": "2019-04-22T00:49:03Z", + "uid": "173d6b6e-d613-44be-8ff6-f9f893791ef5", + "user": "admin@example.com", + "message": "Headless login was requested from the address 1.1.1.1:42" +} +``` + +### T1014W + +Headless Login Rejected + +Example: + +```json +{ + "addr.remote": "2.2.2.2:42", + "code": "T1014W", + "cluster_name": "root.cluster", + 
"event": "user.login", + "method": "headdless", + "ei": 0, + "success": false, + "time": "2019-04-22T00:49:03Z", + "uid": "173d6b6e-d613-44be-8ff6-f9f893791ef6", + "user": "admin@example.com", + "message": "Headless login was requested from the address 1.1.1.1:42" +} +``` + +### T1001I + +SSO Login + +Example: + +```json +{ + "code": "T1001I", + "event": "user.login", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +### T1001W + +SSO Login Failed + +Example: + +```json +{ + "code": "T1001W", + "event": "user.login", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## user.password_change + +User Password Updated + +Example: + +```json +{ + "code": "T1005I", + "event": "user.password_change", + "time": "2020-06-05T19:26:53Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0", + "user": "Ivan_Jordan" +} +``` + +## user.update + +User Updated + +Example: + +```json +{ + "code": "T1003I", + "event": "user.update", + "name": "bob", + "time": "2020-06-05T16:24:05Z", + "uid": "3a8cd55b5-bce9-5a4c-882d-8e0a5ae10008", + "expires": 111111, + "roles": [ + "root" + ] +} +``` + +## user_login.invalid_access_list + +Access list skipped. 
+ +Example: + +```json +{ + "code": "TAL009W", + "event": "user_login.invalid_access_list", + "time": "2020-06-05T16:24:05Z", + "uid": "68a83a99-73ce-4bd7-bbf7-99103c2ba6a0" +} +``` + +## user_task.create + +User Task Created + +Example: + +```json +{ + "addr.remote": "127.0.0.1:52763", + "cluster_name": "lenix", + "code": "UT001I", + "ei": 0, + "event": "user_task.create", + "expires": "0001-01-01T00:00:00Z", + "name": "d217950f-cb5f-5703-96ef-39ab8cd86601", + "success": true, + "time": "2024-10-17T14:00:34.186Z", + "uid": "709840ec-288e-4056-ba20-c8f4b12a478f", + "updated_by": "30a6b2e1-3b61-4965-92cf-b4f84e9dc683.lenix", + "user": "30a6b2e1-3b61-4965-92cf-b4f84e9dc683.lenix", + "user_kind": 1, + "user_task_integration": "teleportdev", + "user_task_issue_type": "ec2-ssm-invocation-failure", + "user_task_type": "discover-ec2" +} +``` + +## user_task.delete + +User Task Deleted + +Example: + +```json +{ + "addr.remote": "127.0.0.1:52915", + "cluster_name": "lenix", + "code": "UT003I", + "ei": 0, + "event": "user_task.delete", + "expires": "0001-01-01T00:00:00Z", + "name": "d217950f-cb5f-5703-96ef-39ab8cd86601", + "success": true, + "time": "2024-10-17T14:01:11.031Z", + "uid": "7699b806-e717-4821-85a5-d2f41acbe373", + "updated_by": "30a6b2e1-3b61-4965-92cf-b4f84e9dc683.lenix", + "user": "30a6b2e1-3b61-4965-92cf-b4f84e9dc683.lenix", + "user_kind": 1 +} +``` + +## user_task.update + +User Task Updated + +Example: + +```json +{ + "addr.remote": "127.0.0.1:52833", + "cluster_name": "lenix", + "code": "UT002I", + "current_user_task_state": "OPEN", + "ei": 0, + "event": "user_task.update", + "expires": "0001-01-01T00:00:00Z", + "name": "d217950f-cb5f-5703-96ef-39ab8cd86601", + "success": true, + "time": "2024-10-17T14:01:02.853Z", + "uid": "0ba36761-4a6a-429e-bce4-1825d80ce06a", + "updated_by": "30a6b2e1-3b61-4965-92cf-b4f84e9dc683.lenix", + "updated_user_task_state": "OPEN", + "user": "30a6b2e1-3b61-4965-92cf-b4f84e9dc683.lenix", + "user_kind": 1, + 
"user_task_integration": "teleportdev", + "user_task_issue_type": "ec2-ssm-invocation-failure", + "user_task_type": "discover-ec2" +} +``` + +## vnet.config.create + +VNet config created + +Example: + +```json +{ + "addr.remote": "127.0.0.1:62460", + "cluster_name": "teleport.dev", + "code": "TVNET001I", + "ei": 0, + "event": "vnet.config.create", + "success": true, + "time": "2025-03-04T15:49:21.869Z", + "uid": "e1973df2-4bee-4a67-9925-575c1d38cc80", + "user": "08fcd522-edcb-492d-8752-90494a28b70e.teleport.dev", + "user_cluster_name": "teleport.dev", + "user_kind": 3, + "user_roles": [ + "Admin" + ] +} +``` + +## vnet.config.delete + +VNet config deleted + +Example: + +```json +{ + "addr.remote": "127.0.0.1:62460", + "cluster_name": "teleport.dev", + "code": "TVNET003I", + "ei": 0, + "event": "vnet.config.delete", + "success": true, + "time": "2025-03-04T15:49:21.869Z", + "uid": "e1973df2-4bee-4a67-9925-575c1d38cc80", + "user": "08fcd522-edcb-492d-8752-90494a28b70e.teleport.dev", + "user_cluster_name": "teleport.dev", + "user_kind": 3, + "user_roles": [ + "Admin" + ] +} +``` + +## vnet.config.update + +VNet config updated + +Example: + +```json +{ + "addr.remote": "127.0.0.1:62460", + "cluster_name": "teleport.dev", + "code": "TVNET002I", + "ei": 0, + "event": "vnet.config.update", + "success": true, + "time": "2025-03-04T15:49:21.869Z", + "uid": "e1973df2-4bee-4a67-9925-575c1d38cc80", + "user": "08fcd522-edcb-492d-8752-90494a28b70e.teleport.dev", + "user_cluster_name": "teleport.dev", + "user_kind": 3, + "user_roles": [ + "Admin" + ] +} +``` + +## windows.desktop.session.end + +Windows Desktop Session Ended + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TDP01I", + "desktop_addr": "100.104.52.89:3389", + "desktop_name": "desktop-name", + "desktop_labels": { + "env": "prod", + "foo": "bar" + }, + "ei": 0, + "event": "windows.desktop.session.end", + "sid": "b7f734d8-bdc2-4996-8959-0b42a11708e7", + "time": "2021-10-18T23:19:13.105Z", + 
"uid": "84d408d1-3314-4a30-b7b7-35970633c9de", + "user": "joe", + "windows_desktop_service": "ba17ae92-5519-476a-954e-c225cf751de1", + "windows_domain": "desktopaccess.com", + "windows_user": "Administrator" +} +``` + +## windows.desktop.session.start + +There are multiple events with the `windows.desktop.session.start` type. + +### TDP00I + +Windows Desktop Session Started + +Example: + +```json +{ + "addr.remote": "100.104.52.89:3389", + "cluster_name": "im-a-cluster-name", + "code": "TDP00I", + "desktop_addr": "100.104.52.89:3389", + "desktop_name": "desktop-name", + "desktop_labels": { + "env": "prod", + "foo": "bar" + }, + "ei": 0, + "event": "windows.desktop.session.start", + "proto": "tdp", + "sid": "b7f734d8-bdc2-4996-8959-0b42a11708e7", + "success": true, + "time": "2021-10-18T23:18:29.144Z", + "uid": "cf15cc08-f818-4f09-91c5-238e1326b22b", + "user": "joe", + "windows_desktop_service": "ba17ae92-5519-476a-954e-c225cf751de1", + "windows_domain": "desktopaccess.com", + "windows_user": "Administrator" +} +``` + +### TDP00W + +Windows Desktop Session Denied + +Example: + +```json +{ + "cluster_name": "im-a-cluster-name", + "code": "TDP00W", + "desktop_addr": "100.104.52.89:3389", + "desktop_name": "desktop-name", + "desktop_labels": { + "env": "prod", + "foo": "bar" + }, + "ei": 0, + "event": "windows.desktop.session.start", + "sid": "b7f734d8-bdc2-4996-8959-0b42a11708e7", + "time": "2021-10-18T23:39:13.105Z", + "uid": "84d408d1-3314-4a30-b7b7-35970633c9de", + "user": "joe", + "windows_desktop_service": "ba17ae92-5519-476a-954e-c225cf751de1", + "windows_domain": "desktopaccess.com", + "windows_user": "Administrator" +} +``` + +## workload_identity.create + +Workload Identity Created + +Example: + +```json +{ + "cluster_name": "leaf.tele.ottr.sh:443", + "code": "WID001I", + "ei": 0, + "event": "workload_identity.create", + "expires": "0001-01-01T00:00:00Z", + "name": "made-by-noah", + "time": "2023-12-08T10:53:39.798Z", + "uid": 
"0efbb33d-fa50-44e0-8dec-4ac89c0dd4ab", + "user": "noah" +} +``` + +## workload_identity.delete + +Workload Identity Deleted + +Example: + +```json +{ + "cluster_name": "leaf.tele.ottr.sh:443", + "code": "WID003I", + "ei": 0, + "event": "workload_identity.delete", + "expires": "0001-01-01T00:00:00Z", + "name": "made-by-noah", + "time": "2023-12-08T10:53:39.798Z", + "uid": "0efbb33d-fa50-44e0-8dec-4ac89c0dd4ab", + "user": "noah" +} +``` + +## workload_identity.update + +Workload Identity Updated + +Example: + +```json +{ + "cluster_name": "leaf.tele.ottr.sh:443", + "code": "WID002I", + "ei": 0, + "event": "workload_identity.update", + "expires": "0001-01-01T00:00:00Z", + "name": "made-by-noah", + "time": "2023-12-08T10:53:39.798Z", + "uid": "0efbb33d-fa50-44e0-8dec-4ac89c0dd4ab", + "user": "noah" +} +``` + +## workload_identity_x509_issuer_override.create + +Workload Identity X.509 Issuer Override Created + +Example: + +```json +{ + "addr.remote": "203.0.113.71:59517", + "cluster_name": "clustername", + "code": "WID007I", + "ei": 0, + "event": "workload_identity_x509_issuer_override.create", + "expires": "0001-01-01T00:00:00Z", + "name": "default", + "time": "2025-03-28T08:42:14.526Z", + "uid": "d99124ab-34f8-490e-b839-ca881e7cc6ba", + "user": "alice", + "user_kind": 1 +} +``` + +## workload_identity_x509_issuer_override.delete + +Workload Identity X.509 Issuer Override Deleted + +Example: + +```json +{ + "addr.remote": "203.0.113.77:64794", + "cluster_name": "clustername", + "code": "WID008I", + "ei": 0, + "event": "workload_identity_x509_issuer_override.delete", + "expires": "0001-01-01T00:00:00Z", + "name": "default", + "time": "2025-03-26T01:14:36.881Z", + "uid": "e52def2f-4109-4cc9-91a8-150c6792f89f", + "user": "bob", + "user_kind": 1 +} +``` + +## x11-forward + +There are multiple events with the `x11-forward` type. 
+ +### T3008I + +X11 Forwarding Requested + +Example: + +```json +{ + "addr.local": "192.000.0.000:3022", + "addr.remote": "127.0.0.1:50000", + "cluster_name": "im-a-cluster-name", + "code": "T3008I", + "ei": 0, + "event": "x11-forward", + "login": "root", + "success": true, + "time": "2022-01-20T18:31:45.012Z", + "uid": "6333-37a7-4c3c-9180-f3abc8e2b", + "user": "lisa" +} +``` + +### T3008W + +X11 Forwarding Request Failed + +Example: + +```json +{ + "addr.local": "192.000.0.000:3022", + "addr.remote": "127.0.0.1:60000", + "cluster_name": "im-a-cluster-name", + "code": "T3008W", + "ei": 0, + "error": "lisa was here", + "event": "x11-forward", + "login": "root", + "success": false, + "time": "2022-01-20T19:49:02.307Z", + "uid": "0629c7-3d98-4451-ac90-dc5330", + "user": "lisa" +} +``` diff --git a/docs/pages/reference/cli/cli.mdx b/docs/pages/reference/cli/cli.mdx index fdd33353897f0..1ee53a20119ba 100644 --- a/docs/pages/reference/cli/cli.mdx +++ b/docs/pages/reference/cli/cli.mdx @@ -1,16 +1,20 @@ --- -title: Command-Line Tools -h1: CLI References +title: Command-Line Tool References +sidebar_label: Command-Line Tools description: Detailed guide and reference documentation for Teleport's command line interface (CLI) tools. +tags: + - reference + - platform-wide --- -Teleport is made up of five CLI tools. +Teleport is made up of six CLI tools. - [teleport](teleport.mdx): Supports the Teleport Infrastructure Identity Platform by starting and configuring various Teleport services. +- [teleport-update](teleport-update.mdx): Installs and updates Teleport binaries. - [tsh](tsh.mdx): Allows end users to authenticate to Teleport and access resources in a cluster. - [tctl](tctl.mdx): Used to configure the Teleport Auth Service. -- [tbot](tbot.mdx): Supports Machine ID, which provides short lived credentials to service accounts (e.g, a CI/CD server). 
-- [fdpass-teleport](fdpass-teleport.mdx): Supports integrating Machine ID with OpenSSH for higher performance SSH connections. +- [tbot](tbot.mdx): Supports Machine & Workload Identity, which provides short-lived credentials to service accounts (e.g., a CI/CD server). +- [fdpass-teleport](fdpass-teleport.mdx): Supports integrating Machine & Workload Identity with OpenSSH for higher-performance SSH connections. (!docs/pages/includes/permission-warning.mdx!) @@ -54,7 +58,7 @@ desktops, and Kubernetes clusters using the `--search` and `--query` flags. The `--search` flag performs a simple fuzzy search on resource fields. For example, `--search=mac` searches for resources containing `mac`. -The `--query` flag allows you to perform more sophisticated searches using a [predicate language](../predicate-language.mdx). +The `--query` flag allows you to perform more sophisticated searches using a [predicate language](../access-controls/predicate-language.mdx). In both cases, you can further refine the results by appending a list of comma-separated labels to the command. For example: @@ -78,4 +82,3 @@ $ tsh ls --search=staging,mac # with key `env` equal to `staging` and key `os` equal to `mac`. $ tsh ls --query='labels["env"] == "staging" && equals(labels["os"], "mac")' ``` - diff --git a/docs/pages/reference/cli/fdpass-teleport.mdx b/docs/pages/reference/cli/fdpass-teleport.mdx index e8a5781cac4cf..b5c85197119e4 100644 --- a/docs/pages/reference/cli/fdpass-teleport.mdx +++ b/docs/pages/reference/cli/fdpass-teleport.mdx @@ -1,16 +1,20 @@ --- title: fdpass-teleport CLI reference +sidebar_label: fdpass-teleport description: Comprehensive reference of subcommands, flags, and arguments for the fdpass-teleport CLI tool. +tags: + - reference + - mwi --- -The `fdpass-teleport` binary is used to integrate Machine ID with OpenSSH to -enable higher performance and reduced resource consumption when establishing SSH -connections.
+The `fdpass-teleport` binary is used to integrate Machine & Workload Identity +with OpenSSH to enable higher performance and reduced resource consumption when +establishing SSH connections. You should not need to manually invoke `fdpass-teleport` and it will -automatically be included in OpenSSH configurations generated by the Machine ID -SSH multiplexer service. For further information, see the -[Machine ID reference](../machine-id/configuration.mdx). +automatically be included in OpenSSH configurations generated by the Machine & +Workload Identity SSH multiplexer service. For further information, see the -[Machine & Workload Identity reference](../machine-workload-identity/configuration.mdx). ## Usage diff --git a/docs/pages/reference/cli/tbot.mdx b/docs/pages/reference/cli/tbot.mdx index 0c472087c11a3..a49592e88cd6a 100644 --- a/docs/pages/reference/cli/tbot.mdx +++ b/docs/pages/reference/cli/tbot.mdx @@ -1,10 +1,15 @@ --- -title: tbot CLI reference +title: tbot CLI Reference +sidebar_label: tbot description: Comprehensive reference of subcommands, flags, and arguments for the tbot CLI tool. +tags: + - reference + - mwi --- -`tbot` is a CLI tool used with **Machine ID** that programatically issues and renews -short-lived certificates to any service account (e.g, a CI/CD server). +`tbot` is a CLI tool used with **Machine & Workload Identity** that +programmatically issues and renews short-lived certificates to any service +account (e.g., a CI/CD server). The primary commands for `tbot` are as follows: @@ -12,8 +17,8 @@ The primary commands for `tbot` are as follows: | - | - | | `tbot help` | Outputs guidance for using commands with `tbot`. | | `tbot version` | Outputs the current version of the `tbot` binary. | -| `tbot configure` | Outputs a basic Machine ID client configuration file to be adjusted as needed. | -| `tbot start` | Starts the Machine ID client `tbot`, fetching and writing certificates to disk at a set interval.
| +| `tbot configure` | Outputs a basic `tbot` client configuration file to be adjusted as needed. | +| `tbot start` | Starts `tbot`, fetching and writing certificates to disk at a set interval. | | `tbot init` | Initialize a certificate destination directory for writes from a separate bot user, configuring either file or POSIX ACL permissions. | | `tbot db` | Connects to databases using native clients and queries database information. Functions as a wrapper for `tsh`, and requires `tsh` installation. | | `tbot proxy` | Allows for access to Teleport resources on a cluster using TLS Routing. Functions as a wrapper for `tsh`, and requires `tsh` installation. | @@ -31,14 +36,14 @@ Note that `tsh` must be installed to make use of this command. | Flag | Description | |---------------------|----------------------------------------------------------------------------------------------------------| | `-d/--debug` | Enable verbose logging to stderr. | -| `-c/--config` | Path to a Machine ID configuration file. Required if not using other required configuration flags. | -| `--destination-dir` | Path to the Machine ID destination dir that should be used for authentication. Required. | +| `-c/--config` | Path to a `tbot` configuration file. Required if not using other required configuration flags. | +| `--destination-dir` | Path to the destination dir that should be used for authentication. Required. | | `--proxy-server` | The `host:port` of the Teleport Proxy Service to use to access resources. Required. | | `--cluster` | The name of the cluster on which resources should be accessed. Extracted from the bot identity if unset. | All other flags and arguments are passed directly to `tsh db ...`, along -with authentication parameters to use the Machine ID identity to skip `tsh`'s -login steps. +with authentication parameters to use the Machine & Workload Identity to skip +`tsh`'s login steps.
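+As a sketch, the flags above can be combined with a wrapped `tsh db` command. The destination directory, Proxy Service address, and database name below are illustrative values, not defaults:
+
+```code
+$ tbot db \
+  --destination-dir=/opt/machine-id \
+  --proxy-server=teleport.example.com:443 \
+  -- connect example-db
+```
+
+The `--` marks the end of `tbot`'s own flags, so `connect example-db` is passed through to `tsh db` unchanged.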
Note that certain CLI parameters, for example `--help`, may be captured by `tbot` even if intended to be passed to the wrapped `tsh`. A `--` argument can @@ -54,19 +59,19 @@ Additionally, be aware of the following limitations of `tbot db`: ## tbot init Initializes a certificate destination directory for access from a separate bot -user. Allows for certificates to be written to disks other than a Machine ID -client, configuring either file or POSIX ACL permissions. +user. Allows for certificates to be written to disks other than a Machine & +Workload Identity client, configuring either file or POSIX ACL permissions. Note that most use cases should instead use tbot's runtime ACL management by specifying allowed reader users and groups in the -[destination configuration](../machine-id/configuration.mdx#directory). +[destination configuration](../machine-workload-identity/configuration.mdx#directory). ### Flags | Flag | Description | |---------------------|--------------------------------------------------------------------------------------------------------------------| | `-d/--debug` | Enable verbose logging to stderr. | -| `-c/--config` | Path to a Machine ID configuration file. | +| `-c/--config` | Path to a tbot configuration file. | | `--destination-dir` | Directory to write short-lived machine certificates to. | | `--owner` | Defines the Linux `user:group` owner of `--destination-dir`. Defaults to the Linux user running `tbot` if unspecified. | | `--bot-user` | Enables POSIX ACLs and defines the Linux user that can read/write short-lived certificates to `--destination-dir`. | @@ -120,13 +125,13 @@ Consider using one of the following dedicated tunnel modes where possible: | Flag | Description | |---------------------|----------------------------------------------------------------------------------------------------------| | `-d/--debug` | Enable verbose logging to stderr. | -| `-c/--config` | Path to a Machine ID configuration file. 
Required if not using other required configuration flags. | -| `--destination-dir` | Path to the Machine ID destination dir that should be used for authentication. Required. | +| `-c/--config` | Path to the tbot configuration file. Required if not using other required configuration flags. | +| `--destination-dir` | Path to the tbot destination dir that should be used for authentication. Required. | | `--proxy-server` | The `host:port` of the Teleport Proxy Service through which resources will be accessed. Required. | | `--cluster` | The name of the cluster on which resources should be accessed. Extracted from the bot identity if unset. | All other flags and arguments are passed directly to `tsh proxy ...`, along -with authentication parameters to use the Machine ID identity to skip `tsh`'s +with authentication parameters to use Machine & Workload Identity to skip `tsh`'s login step. Additionally, the following should be noted: @@ -180,7 +185,7 @@ Note that this decreases security: - Your connection to the database will be unencrypted until it reaches the `tbot` proxy running on `localhost`. -Refer to the [database guide](../../enroll-resources/machine-id/access-guides/databases.mdx) for more information on +Refer to the [database guide](../../machine-workload-identity/access-guides/databases.mdx) for more information on using database proxies. 
### Flags @@ -248,8 +253,8 @@ This subcommand supports one additional flag beyond its `tbot start` equivalent: ## tbot start -The `tbot start` family of commands starts the Machine ID client in various -modes, depending on the type of resources to be accessed: +The `tbot start` family of commands starts the Machine & Workload Identity +client in various modes, depending on the type of resources to be accessed: - [`tbot start legacy`](#tbot-start-legacy): Starts with a YAML configuration file or in legacy output mode - [`tbot start identity`](#tbot-start-identity): Starts with an identity output for SSH and Teleport API access @@ -269,27 +274,31 @@ These flags are available to all `tbot start` commands. Note that `tbot start legacy` supports slightly different options, so refer to its specific section for details when using a YAML config file or legacy output. -| Flag | Description | -|----------------------|-------------| -| `-d/--debug` | Enable verbose logging to stderr. | -| `--[no-]fips` | Whether to run tbot in FIPS compliance mode. This requires the FIPS `tbot` binary. | -| `--log-format` | Controls the format of output logs. Can be `json` or `text`. Defaults to `text`. | -| `-a/--auth-server` | Address of the Teleport Auth Service. Prefer using `--proxy-server` where possible. | -| `--proxy-server` | Address of the Teleport Proxy Server. | -| `--token` | A bot join token or path to file with token value, if attempting to onboard a new bot; used on first connect. | -| `--ca-pin` | CA pin to validate the Teleport Auth Service; used on first connect. | -| `--certificate-ttl` | TTL of short-lived machine certificates. | -| `--renewal-interval` | Interval at which short-lived certificates are renewed; must be less than the certificate TTL. | -| `--join-method` | Method to use to join the cluster. 
One of: `azure`, `circleci`, `gcp`, `github`, `gitlab`, `iam`, `kubernetes`, `spacelift`, `token`, `tpm`, `terraform_cloud` | -| `--[no-]oneshot` | If set, quit after the first renewal. | -| `--diag-addr` | If set and the bot is in debug mode, a diagnostics service will listen on specified address. | -| `--storage` | A destination URI for tbot's internal storage, e.g. `file:///foo/bar`. See [Destination URIs](#destination-uris) for more info. | +| Flag | Description | +|-------------------------|-------------| +| `-d/--debug` | Enable verbose logging to stderr. | +| `--[no-]fips` | Whether to run tbot in FIPS compliance mode. This requires the FIPS `tbot` binary. | +| `--log-format` | Controls the format of output logs. Can be `json` or `text`. Defaults to `text`. | +| `-a/--auth-server` | Address of the Teleport Auth Service. Prefer using `--proxy-server` where possible. | +| `--proxy-server` | Address of the Teleport Proxy Service. | +| `--token` | A bot join token or path to file with token value, if attempting to onboard a new bot; used on first connect. | +| `--ca-pin` | CA pin to validate the Teleport Auth Service; used on first connect. | +| `--certificate-ttl` | TTL of short-lived machine certificates. | +| `--renewal-interval` | Interval at which short-lived certificates are renewed; must be less than the certificate TTL. | +| `--join-method` | Method to use to join the cluster. One of: `azure`, `circleci`, `gcp`, `github`, `gitlab`, `iam`, `kubernetes`, `spacelift`, `token`, `tpm`, `terraform_cloud` | +| `--[no-]oneshot` | If set, quit after the first renewal. | +| `--diag-addr` | If set and the bot is in debug mode, a diagnostics service will listen on specified address. | +| `--storage` | A destination URI for tbot's internal storage, e.g. `file:///foo/bar`. See [Destination URIs](#destination-uris) for more info. | +| `--registration-secret` | An optional joining secret to use on first join with the `bound_keypair` join method. 
This can also be provided via the `TBOT_REGISTRATION_SECRET` environment variable. | +| `--registration-secret-path` | An optional path to a file containing a joining secret to use on first join with the `bound_keypair` join method. | +| `--static-key-path` | An optional path to a file containing a static private key for use with the `bound_keypair` join method. A base64-encoded key can also be provided via the `TBOT_BOUND_KEYPAIR_STATIC_KEY` environment variable. | +| `--pid-file` | Full path to the PID file. By default no PID file will be created. | ## tbot start legacy -Starts the Machine ID client `tbot`, fetching and writing certificates to disk -at a set interval. This command either starts from a configuration file if `-c` -is specified, or starts with a default, legacy-compatible identity output. +Starts `tbot`, fetching and writing certificates to disk at a set interval. This +command either starts from a configuration file if `-c` is specified, or starts +with a default, legacy-compatible identity output. This is the default `tbot start` subcommand if no other command is specified. Unless using a configuration file, consider using `tbot start identity` or @@ -314,6 +323,7 @@ another dedicated mode instead. | `--join-method` | Method to use to join the cluster. Can be `token`, `azure`, `circleci`, `gcp`, `github`, `gitlab` or `iam`. | | `--oneshot` | If set, quit after the first renewal. | | `--log-format` | Controls the format of output logs. Can be `json` or `text`. Defaults to `text`. | +| `--pid-file` | Full path to the PID file. By default no PID file will be created. | ### Examples @@ -347,8 +357,8 @@ $ tbot start \ ## tbot start identity -Starts the Machine ID client `tbot` with an identity output, fetching and -writing certificates at a regular interval to the output specified with +Starts `tbot` with an identity output, fetching and writing certificates at a +regular interval to the output specified with `--destination`. 
```code @@ -382,8 +392,8 @@ $ tbot start identity \ ## tbot start database -Starts the Machine ID client `tbot` with a database output, fetching and writing -database certificates at a regular interval to the output specified with +Starts `tbot` with a database output, fetching and writing database certificates +at a regular interval to the output specified with `--destination`. ```code @@ -407,9 +417,9 @@ command supports these additional flags: ## tbot start kubernetes -Starts the Machine ID client `tbot` with a Kubernetes output, fetching and -writing Kubernetes credentials and a `kubeconfig.yaml` at a regular interval to -the output specified with `--destination`. +Starts `tbot` with a Kubernetes output, fetching and writing Kubernetes +credentials and a `kubeconfig.yaml` at a regular interval to the output +specified with `--destination`. ```code $ tbot start kubernetes --destination=DESTINATION --kubernetes-cluster=KUBERNETES-CLUSTER [] @@ -429,9 +439,9 @@ command supports these additional flags: ## tbot start kubernetes/v2 -Starts the Machine ID client `tbot` with a Kubernetes V2 output, fetching and -writing Kubernetes credentials to a `kubeconfig.yaml` at a regular interval to -the output specified with `--destination`. +Starts `tbot` with a Kubernetes V2 output, fetching and writing Kubernetes +credentials to a `kubeconfig.yaml` at a regular interval to the output specified +with `--destination`. Unlike the `kubernetes` output, `kubernetes/v2` allows fetching many Kubernetes clusters at once, as multiple contexts within the generated `kubeconfig.yaml`. @@ -477,9 +487,8 @@ matched by the label selector. ## tbot start application -Starts the Machine ID client `tbot` with an application output, fetching and -writing application TLS credentials at a regular interval to the output -specified with `--destination`. 
+Starts `tbot` with an application output, fetching and writing application TLS +credentials at a regular interval to the output specified with `--destination`. ```code $ tbot start application --destination=DESTINATION --app=APP [] @@ -499,8 +508,8 @@ command supports these additional flags: ## tbot start application-tunnel -Starts the Machine ID client with a local tunnel to a particular application. -This tunnel will run continuously and automatically refresh its certificates. +Starts `tbot` with a local tunnel to a particular application. This tunnel will +run continuously and automatically refresh its certificates. Note that this tunnel will be unencrypted. Be wary of the selected listen address, and prefer to use `localhost` or an equivalent loopback interface @@ -526,8 +535,8 @@ $ tbot start application-tunnel --listen=LISTEN --app=APP [] ## tbot start database-tunnel -Starts the Machine ID client with a local tunnel to a particular database. -This tunnel will run continuously and automatically refresh its certificates. +Starts `tbot` with a local tunnel to a particular database. This tunnel will run +continuously and automatically refresh its certificates. Note that this tunnel will be unencrypted. Be wary of the selected listen address, and prefer to use `localhost` or an equivalent loopback interface @@ -563,7 +572,7 @@ command supports these additional flags: Issues an X509 workload identity credential using Teleport Workload Identity and writes this credential to a specified destination. -See the [configuration reference](../machine-id/configuration.mdx) for further +See the [configuration reference](../machine-workload-identity/configuration.mdx) for further information about the workload identity credential output and the YAML configuration file format. 
@@ -586,7 +595,7 @@ command supports these additional flags: Issues an X509 workload identity credential, exchanges this for short-lived AWS credentials using Roles Anywhere, and writes these to a configured destination. -See the [configuration reference](../machine-id/configuration.mdx) for further +See the [configuration reference](../machine-workload-identity/configuration.mdx) for further information about the workload identity credential output and the YAML configuration file format. @@ -617,7 +626,7 @@ writes this credential to a specified destination. The JWT workload identity credential is compatible with the [SPIFFE JWT SVID specification](https://github.com/spiffe/spiffe/blob/main/standards/JWT-SVID.md). -See the [configuration reference](../machine-id/configuration.mdx) for further +See the [configuration reference](../machine-workload-identity/configuration.mdx) for further information about the workload identity credential output and the YAML configuration file format. @@ -643,7 +652,7 @@ API. The configuration for this service can be complex, and therefore, it is recommended that you leverage the YAML configuration. -See [Workload Identity API & Workload Attestation](../workload-identity/workload-identity-api-and-workload-attestation.mdx) +See [Workload Identity API & Workload Attestation](../machine-workload-identity/workload-identity/workload-identity-api-and-workload-attestation.mdx) for further information about the local workload identity API and the YAML configuration. @@ -666,7 +675,7 @@ new Workload Identity configuration experience. You can replace the use of this command with the new `tbot start workload-identity-x509` command. For further information, see [the new Workload Identity configuration experience -and how to migrate](../workload-identity/configuration-resource-migration.mdx). +and how to migrate](../machine-workload-identity/workload-identity/configuration-resource-migration.mdx).
### Flags @@ -710,6 +719,52 @@ $ tbot install systemd \ --write ``` +## tbot keypair create + +Generates a keypair for use with `bound_keypair` joining and stores it in the +specified bot internal storage directory. If a key already exists in the storage +directory, no new key will be generated and the existing public key will be +printed to the console; use the `--overwrite` flag to force the generation of a +new keypair. + +### Flags + +| Flag | Description | +|------|-------------| +| `--storage` | A destination URI to be used for bot internal storage. Required. | +| `--proxy-server` | A Teleport Proxy Service address. Required. | +| `--overwrite` | If set, always generate a new key. If unset, the existing public key will be printed if one already exists in the destination specified by `--storage`. | +| `--static` | If set, generates a static keypair. For more information, see [the static key guide](../machine-workload-identity/bound-keypair/static-keys.mdx). | +| `--static-key-path` | If set with `--static`, writes the static keypair to a file. | +| `--format` | Override the output format, supported values: `text`, `json`. | + +### Examples + +First, ensure the desired internal storage directory exists: +```code +$ mkdir -p /var/lib/teleport/bot +``` + +Next, generate a keypair: +```code +$ tbot keypair create --proxy-server example.teleport.sh:443 --storage /var/lib/teleport/bot +2025-07-09T00:00:00.000-00:00 INFO [TBOT] keypair has been written to storage storage:directory: /var/lib/teleport/bot tbot/keypair.go:135 + +To register the keypair with Teleport, include this public key in the token's +`spec.bound_keypair.onboarding.initial_public_key`: + + ssh-ed25519 +``` + +This public key, including the algorithm identifier (`ssh-ed25519`, but may vary +depending on your cluster configuration) can then be copied into a Bound Keypair +join token to be used as a +[preregistered key](../machine-workload-identity/bound-keypair/concepts.mdx#onboarding). 
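+For illustration only, a Bound Keypair join token might embed the generated public key as in the sketch below. The `spec.bound_keypair.onboarding.initial_public_key` field is the one quoted in the command output above; the token name, bot name, and placeholder key are assumptions to adapt to your cluster — see the Bound Keypair concepts guide for the authoritative schema:
+
+```yaml
+kind: token
+version: v2
+metadata:
+  # Placeholder token name
+  name: example-bot-token
+spec:
+  roles: [Bot]
+  bot_name: example
+  join_method: bound_keypair
+  bound_keypair:
+    onboarding:
+      # Paste the public key printed by `tbot keypair create`,
+      # including the algorithm identifier.
+      initial_public_key: "ssh-ed25519 <public-key>"
+```
+
+A token such as this can then be applied with `tctl create -f token.yaml` before starting the bot.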
+ +Note that the Teleport Proxy Service address is required to fetch the currently +enabled [signature suite](../deployment/signature-algorithms.mdx). No authentication takes +place at this time. + ## Destination URIs Many `tbot start` subcommands accept destination URIs via the `--storage` and diff --git a/docs/pages/reference/cli/tctl.mdx b/docs/pages/reference/cli/tctl.mdx index de7bf4f75108c..60b559f6ce9c3 100644 --- a/docs/pages/reference/cli/tctl.mdx +++ b/docs/pages/reference/cli/tctl.mdx @@ -1,6 +1,10 @@ --- -title: tctl CLI reference +title: tctl CLI Reference +sidebar_label: tctl description: Comprehensive reference of subcommands, flags, and arguments for the tctl CLI tool. +tags: + - reference + - platform-wide --- `tctl` is a CLI tool that allows a cluster administrator to manage all resources @@ -9,98 +13,8 @@ in a cluster, including nodes, users, tokens, certificates, and devices. `tctl` can also be used to modify the dynamic configuration of the cluster, such as creating new user roles or connecting to trusted clusters. -## Authentication - -Before running `tctl` commands, administrators must authenticate to a Teleport -cluster. This section explains how `tctl` authenticates to the cluster. - - - - -### On a remote host with an identity file - -`tctl` can authenticate with a user-provided identity file. The Teleport Auth -Service signs an identity file when a user runs `tctl auth sign` or -`tsh login --out=`, and the user can include the path to the -identity file in the `--identity` flag when running `tctl` commands. - -When using the `--identity` flag, the user must provide the `--auth-server` flag -with the address of an Auth Service or Proxy Service so `tctl` knows which -cluster to authenticate to. 
- -### On the same host as the Teleport Auth Service - -If there is a Teleport configuration file on the host where `tctl` is run, -`tctl` attempts to authenticate to the Auth Service named in the configuration -file using an identity stored in its local backend. - -`tctl` authenticates using this method if a configuration file exists at -`/etc/teleport.yaml` or `TELEPORT_CONFIG_FILE` points to a configuration file -in another location. If the `auth_service` is disabled in the configuration -file, then the configuration file is ignored. - - - - Note that when a `tctl` command is run locally on the Auth Service, the audit - logs will show that it was performed by the Auth Service itself. - - To provide an accurate audit trail, it is important to limit direct SSH access - to the Auth Service with - [Access Controls](../../admin-guides/access-controls/access-controls.mdx) and ensure that - admins use `tctl` remotely instead. - - - -### On a remote host after running `tsh login` - -If `tctl` cannot find a local Teleport configuration file or a user-provided -identity file, it attempts to load the user's `tsh` profile to authenticate to -the cluster. The `tsh` profile is created when a user runs `tsh login`. - -`tctl` reads the `TELEPORT_CONFIG_FILE` environment variable to determine if -a Teleport configuration file is present. If you are using your `tsh` profile to -authenticate `tctl`, you must ensure that one of these conditions is true: - -- `TELEPORT_CONFIG_FILE` is blank -- No file exists at `/etc/teleport.yaml` - -Otherwise `tctl` will attempt to connect to a Teleport cluster on the machine, -which could result in the error, -`ERROR: open /var/lib/teleport/host_uuid: permission denied`. - - - - -### On a remote host with an identity file - -`tctl` can authenticate with a user-provided identity file. 
The Teleport Auth -Service signs an identity file when a user runs `tctl auth sign` or -`tsh login --out=`, and the user can include the path to the -identity file in the `--identity` flag when running `tctl` commands. - -When using the `--identity` flag, the user must alo provide the `--auth-server` -flag with the address of an Auth Service or Proxy Service so `tctl` knows which -cluster to authenticate to. - -### On a remote host after running `tsh login` - -If `tctl` cannot find a local Teleport configuration file or a user-provided -identity file, it attempts to load the user's `tsh` profile to authenticate to -the cluster. The `tsh` profile is created when a user runs `tsh login`. - -`tctl` reads the `TELEPORT_CONFIG_FILE` environment variable to determine if -a Teleport configuration file is present. If you are using your `tsh` profile to -authenticate `tctl`, you must ensure that one of these conditions is true: - -- `TELEPORT_CONFIG_FILE` is blank -- No file exists at `/etc/teleport.yaml` - -Otherwise `tctl` will attempt to connect to a Teleport cluster on the machine, -which could result in the error, -`ERROR: open /var/lib/teleport/host_uuid: permission denied`. - - - +For a conceptual overview of `tctl`, see [Getting Started with +`tctl`](../../zero-trust-access/infrastructure-as-code/using-tctl.mdx). ## tctl global flags @@ -204,7 +118,7 @@ $ tctl notifications ls [] | Name | Default Value(s) | Allowed Value(s) | Description | | ---------- | ---------------- | ----------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | | `--user` | none | **string** | List only a specific user's notifications. | -| `--format` | `text` | `text`,`yaml`,`json` | The format of the output. | +| `--format` | `text` | `text`,`yaml`,`json` | The format of the output, default is `text`. 
| | `--all` | `false` | `true`,`false` | If `true`, notifications generated by Teleport will also be returned, as opposed to only notifications created by admins via the `tctl` interface. | | `--labels` | none | Comma-separated strings | Filter notifications by labels. | @@ -266,6 +180,7 @@ $ tctl alerts ack [] | `--ttl` | 24h | Any duration | Time duration to acknowledge the alert for | | `--clear` | none | | Clear an existing alert acknowledgement | | `--reason` | none | | The reason for suppressing this alert | +| `--format` | `text` | `text`,`yaml`,`json` | Output format, default is `text` | ### Examples @@ -286,6 +201,18 @@ $ tctl alerts ack --clear upgrade-suggestion Lists alert acknowledgements. +```code +$ tctl alerts ack ls [] +``` + +### Flags + +| Name | Default Value(s) | Allowed Value(s) | Description | +| - | - | - | - | +| `--format` | `text` | `text`,`yaml`,`json` | Output format, default is `text` | + +### Examples + ```code $ tctl alerts ack ls ID Reason Expires @@ -428,7 +355,7 @@ $ tctl auth rotate --type=host --grace-period=8h `tctl auth sign` is commonly used to generate long-lived certificates for automation purposes. On supported platforms, -[Machine ID](../../enroll-resources/machine-id/introduction.mdx) is a more secure alternative that +[Machine & Workload Identity](../../machine-workload-identity/introduction.mdx) is a more secure alternative that generates short-lived certificates for automation workflows. @@ -494,7 +421,7 @@ $ tctl auth sign --user=admin --out=identity.pem ## tctl bots add -Create a new Machine ID Bot: +Create a new Machine & Workload Identity Bot: ```code $ tctl bots add [] @@ -502,7 +429,7 @@ $ tctl bots add [] ### Arguments -- `` - name of Machine ID Bot to create. +- `` - name of Machine & Workload Identity Bot to create. ### Flags @@ -567,7 +494,7 @@ These flags are available for all commands: `--debug, --config`. 
Run ## tctl bots instances list -List all current instances of a Machine ID bot: +List all current instances of a Machine & Workload Identity bot: ```code $ tctl bots instances list [] @@ -578,6 +505,16 @@ $ tctl bots instances list [] - `[]` - an optional bot name. If provided, filters the result to show only instances for the named bot. Otherwise, shows all instances for all bots. +### Flags + +| Name | Default Value(s) | Allowed Value(s) | Description | +| - | - | - | - | +| `--search` | none | A search term | Optional. Filters the returned bot instances using a fuzzy search based on the term provided. | +| `--query` | none | Teleport predicate language query | Optional. Filters the returned bot instances based on the Teleport predicate language query provided. | +| `--sort-index` | `bot_name` | `bot_name`, `active_at_latest`, `version_latest`, `host_name_latest` | Optional. Sorts the returned bot instances using the given field. | +| `--sort-order` | `ascending` | `ascending`, `descending` | Optional. Sorts the returned bot instances in the given order. | +| `--format` | `text` | `text`, `json` | If set to `json`, returns results as a machine-readable JSON string. 
|
+
### Examples

This shows all known instances for the bot named "example":
@@ -586,6 +523,25 @@ This shows all known instances for the bot named "example":
$ tctl bots instance list example
```

+This shows all known instances that contain the term "github":
+
+```code
+$ tctl bots instance list --search github
+```
+
+Searchable fields include: bot name, instance ID, hostname, join method, and version.
+
+This shows all known instances with a version older than "18.0.0":
+
+```code
+$ tctl bots instance list --query 'older_than(status.latest_heartbeat.version, "18.0.0")'
+```
+Version-specific functions include:
+
+- `newer_than(version, comparison)`
+- `older_than(version, comparison)`
+- `between(version, lower (inclusive), upper (exclusive))`
+
### Global flags

These flags are available for all commands: `--debug, --config`. Run
@@ -618,7 +574,7 @@ These flags are available for all commands: `--debug, --config`. Run

## tctl bots ls

-Lists all Machine ID Bots:
+Lists all Machine & Workload Identity Bots:

```code
$ tctl bots ls []
```
@@ -631,7 +587,7 @@ These flags are available for all commands: `--debug, --config`. Run

## tctl bots rm

-Delete a Machine ID Bot:
+Delete a Machine & Workload Identity Bot:

```code
$ tctl bots rm []
```
@@ -639,7 +595,7 @@ $ tctl bots rm []

### Arguments

-- `` - name of Machine ID Bot to delete.
+- `` - name of Machine & Workload Identity Bot to delete.

### Global flags

These flags are available for all commands: `--debug, --config`. Run
@@ -648,7 +604,7 @@ These flags are available for all commands: `--debug, --config`. Run

## tctl bots update

-Update a Machine ID bot:
+Update a Machine & Workload Identity Bot:

```code
$ tctl bots update []
```
@@ -694,7 +650,7 @@ These flags are available for all commands: `--debug, --config`. Run

Create or update a Teleport resource from a YAML file. The supported
resource types are: user, node, cluster, role, connector, and device.

-See the [Resources Reference](../resources.mdx) for complete docs on how to build these yaml files.
+See the [Resources Reference](../infrastructure-as-code/teleport-resources/teleport-resources.mdx) for complete docs on how to build these yaml files. ```code $ tctl create [] @@ -731,7 +687,7 @@ $ tctl create -f cluster.yaml Register a device. ```code -$ tctl devices add --os=OS --asset-tag=SERIAL_NUMBER +$ tctl devices add --os=OS --asset-tag=SERIAL_NUMBER [] ``` ### Flags @@ -741,6 +697,7 @@ $ tctl devices add --os=OS --asset-tag=SERIAL_NUMBER | `--os` | none | `linux`, `macos`, `windows` | Device operating system | | `--asset-tag` | none | string | Device inventory identifier (e.g., Mac serial number) | | `--enroll` | false | boolean | If set, creates a device enrollment token | +| `--format` | `text` | `text`,`yaml`,`json` | Output format, default is `text` | ### Examples @@ -788,17 +745,22 @@ tsh device enroll --token=(=devicetrust.enroll_token=) List registered devices. ```code -$ tctl devices ls +$ tctl devices ls [] ``` +### Flags +| Name | Default Value(s) | Allowed Value(s) | Description | +| - | - | - | - | +| `--format` | `text` | `text`,`yaml`,`json` | Output format, default is `text` | + + ### Examples ```code $ tctl devices ls - -Asset Tag OS Enroll Status Device ID ------------- ----- ------------- ------------------------------------ -(=devicetrust.asset_tag=) macOS enrolled d40f2ee4-856d-4aef-b784-c4371e39c036 +Asset Tag OS Source Enroll Status Owner Device ID +------------ ----- ------ ------------- ----- ------------------------------------ +(=devicetrust.asset_tag=) macOS not enrolled d40f2ee4-856d-4aef-b784-c4371e39c036 ``` ## tctl devices rm @@ -965,7 +927,7 @@ $ tctl inventory list [] | Name | Default Value(s) | Allowed Value(s) | Description | | - | - | - | - | -| `--format` | `text` | `yaml, json` or `text` | Output format | +| `--format` | `text` | `yaml, json` or `text` | Output format, default is `text` | | `--older-than` | none | string | Filter for older teleport versions | | `--exact-version` | none | string | Filter output by 
teleport version | | `--services` | none | string | Filter output by service (node,kube,proxy,etc) | @@ -1014,7 +976,7 @@ $ tctl inventory status [] | Name | Default Value(s) | Allowed Value(s) | Description | | - | - | - | - | -| `--format` | `text` | `yaml, json` or `text` | Output format | +| `--format` | `text` | `yaml, json` or `text` | Output format, default is `text` | | `--[no-]connected` | `--no-connected` | none | Show locally connected instances summary | ### Global flags @@ -1087,6 +1049,46 @@ Total Instances: 19 Note that newly added Teleport services do not show in the inventory right as the service joins. Those can take around 5 minutes under regular circumstances to show in inventory counts and up to 15 minutes for heavy load. +## tctl lock + +Create a new lock. + +```code +$ tctl lock [] +``` + +### Flags + +| Name | Default Value(s) | Allowed Value(s) | Description | +| --- | --- | --- | --- | +| `--user` | none | string | Target a specific user by username. | +| `--role` | none | string | Target a specific role by name. | +| `--login` | none | string | Target a specific server login. | +| `--server-id` | none | string | Target a specific node by name or UUID. | +| `--mfa-device` | none | string | Target a specific MFA device by UUID. | +| `--windows-desktop` | none | string | Target a specific Windows desktop by name. | +| `--access-request` | none | string | Target a specific Access Request by ID. | +| `--device` | none | string | Target a specific trusted device by ID. | +| `--message` | none | string | Optional message explaining the reason for the lock. | +| `--expires` | never | timestamp | When the lock should automatically expire (RFC3339). | +| `--ttl` | none | duration | Time-to-live for the lock (alternative to --expires). 
| + +### Examples + +```code +# Lock a specific user +$ tctl lock --user alice --message "Security incident investigation" + +# Lock a role with expiration +$ tctl lock --role developers --expires "2025-06-26T00:20:47.16109Z" --message "Emergency maintenance" + +# Lock a specific node +$ tctl lock --server-id web-server-01 --message "Suspected compromise" + +# Lock multiple targets (user and access-request combination) +$ tctl lock --user=foo@example.com --message="Suspicious activity." --ttl=10h --access-request=1234 +``` + ## tctl login_rule test Test a Login Rule resource without installing it in the cluster. @@ -1101,7 +1103,7 @@ Test a Login Rule resource without installing it in the cluster. | - | - | - | - | | `--resource-file` | none | **string** filepath | Path to Login Rule resource path, can be repeated for multiple files | | `--load-from-cluster` | `false` | `true, false` | When true, all Login Rules currently installed in the cluster will be loaded for the test. Can be combined with `--resource-file` to test interactions. | -| `--format` | `yaml` | `yaml, json` | Output format for traits | +| `--format` | `yaml` | `yaml, json` | Output format for traits, default is `yaml` | ### Global flags @@ -1202,7 +1204,7 @@ $ tctl request approve request-id-1, request-id-2 Create a pending Access Request. 
```code -$ tctl request create +$ tctl request create [] ``` ### Arguments @@ -1213,10 +1215,10 @@ $ tctl request create | Name | Default Value(s) | Allowed Value(s) | Description | | - | - | - | - | -|`roles`|none|Comma-separated list of strings|Roles to be requested| -|`resource`|none|Comma-separated list of strings|Resource IDs to be requested| -|`reason`|none|String|Optional reason message| -|`dry-run`|none|Boolean|Don't actually generate the Access Request| +|`--roles`|none|Comma-separated list of strings|Roles to be requested| +|`--resource`|none|Comma-separated list of strings|Resource IDs to be requested| +|`--reason`|none|String|Optional reason message| +|`--dry-run`|none|Boolean|Don't actually generate the Access Request| Use the `dry-run` flag if you want to validate whether Teleport can create an Access Request for the user in the `username` argument, given the user's static @@ -1312,6 +1314,9 @@ $ tctl rm saml/okta # Delete a local user called "admin": $ tctl rm users/admin + +# Delete a lock +$ tctl rm lock/ed7038cb-a3cc-4f59-8063-09553665b773 ``` ## tctl sso configure github @@ -1445,7 +1450,7 @@ The following flags are specific to Google Workspace: | `--google-acc-uri` | URI of your service account credentials file. Example: `file:///var/lib/teleport/gworkspace-creds.json`.| | `--google-acc` | String containing Google service account credentials. | | `--google-admin` | Email of a Google admin to impersonate. | -| `--google-legacy` | Flag to select groups with direct membership filtered by domain (legacy behavior).
Disabled by default. [More info](../../admin-guides/access-controls/sso/google-workspace.mdx) | +| `--google-legacy` | Flag to select groups with direct membership filtered by domain (legacy behavior).
Disabled by default. [More info](../../zero-trust-access/sso/integrate-idp/google-workspace.mdx) |
+| `--google-id` | Shorthand for setting the `--id` flag to `.apps.googleusercontent.com` |

### Global flags
@@ -1665,7 +1670,7 @@ This command accepts no arguments.

| Name | Default Value(s) | Allowed Value(s) | Description |
| - | - | - | - |
-| `--format` | `text` | `json` or `text` | Output format |
+| `--format` | `text` | `json` or `text` | Output format, default is `text` |

### Examples
@@ -1804,12 +1809,18 @@ $ tctl tokens rm []

Reports diagnostic information.

-The diagnostic metrics endpoint must be enabled with `teleport start --diag-addr=` for `tctl top` to work.
+`tctl top` can consume metrics from an HTTP diagnostic endpoint.

```code
$ tctl top [] []
```

+When a specific endpoint is provided, `tctl top` will always attempt to connect to it.
+The endpoint should be a valid HTTP URL, corresponding to a matching
+diagnostic metrics service configuration such as `teleport start --diag-addr=`.
+
+When no endpoint is specified, `tctl top` will attempt to connect via the debug UNIX socket endpoint, falling back to localhost.
+
### Argument

- `[]` Diagnostic HTTP URL (HTTPS not supported)
@@ -1821,6 +1832,8 @@ $ tctl top [] []
$ sudo teleport start --diag-addr=127.0.0.1:3000
# View stats with a refresh period of 5 seconds
$ tctl top http://127.0.0.1:3000 5s
+# Use configured defaults
+$ tctl top
```

## tctl users add
@@ -2058,6 +2071,83 @@ prod Done 2025-01-15 18:00:12 outside_window
backup Done 2025-01-14 20:00:31 outside_window
```

+## tctl autoupdate agents report
+
+Aggregates the agent autoupdate reports and displays agent counts per version and per update group.
+
+## tctl autoupdate agents start-update
+
+```code
+$ tctl autoupdate agents start-update [--force] [...]
+```
+
+Manually initiates an update for the specified groups.
+
+### Arguments
+
+- `` - Groups to update. If not specified, all groups will be started.
+- `--force` - Skips progressive deployment mechanisms such as canaries or backpressure.
+
+### Examples
+
+Manually trigger staging and production updates:
+```code
+$ tctl autoupdate agents start-update stage prod
+Started updating agents groups: [stage prod].
+New agent rollout status:
+
+Group Name State     Start Time          State Reason
+---------- --------- ------------------- --------------
+dev        Unstarted                     cannot_start
+stage      Active    2025-03-10 15:04:16 manual_trigger
+prod       Active    2025-03-10 15:04:16 manual_trigger
+```
+## tctl autoupdate agents mark-done
+
+```code
+$ tctl autoupdate agents mark-done [...]
+```
+
+Manually marks an agent update group as successfully updated.
+
+### Arguments
+
+- `` - Groups to mark as updated. If not specified, all groups will be marked.
+
+### Examples
+
+Manually mark the staging group as updated:
+```code
+$ tctl autoupdate agents mark-done stage
+Successfully marked agent groups as completed: [stage].
+New agent rollout status:
+
+Group Name State     Start Time          State Reason
+---------- --------- ------------------- ------------------
+dev        Unstarted                     cannot_start
+stage      Done      2025-03-10 15:04:16 manual_forced_done
+prod       Active    2025-03-10 15:04:16 manual_trigger
+```
+
+## tctl autoupdate agents rollback
+
+```code
+$ tctl autoupdate agents rollback [...]
+```
+
+Initiates an immediate rollback of Teleport agents from the target version to the start version.
+
+### Arguments
+
+- `` - Groups to roll back. If not specified, all groups not in the `unstarted` state will be rolled back.
+
+### Examples
+
+Roll back all agents immediately:
+```code
+$ tctl autoupdate agents rollback
+```
+
## tctl plugins install awsic

Install the AWS Identity Center integration.
@@ -2113,7 +2203,7 @@ the two user filter flags imply that if a user - was provisioned into Teleport by Okta (from `--user-origin okta`), OR - has the label values `role=aws-admin` AND `dept=engineering` (from `--user-label "role=aws-admin,dept=engineering"`) -then they will be provisioned into AWS Identity Center by Teleport. +then they will be provisioned into AWS Identity Center by Teleport. ## tctl plugins install okta @@ -2130,7 +2220,7 @@ Install the Okta integration. | `--api-token` | none | string | Optional. Okta API token for the plugin to use. | | `--[no-]scim` | `--no-scim` | boolean | Optional. Enable SCIM Okta integration. | | `--[no-]users-sync` | `--users-sync` | none | Optional. Enable user synchronization. | -| `-o`, `--owner` | none | string | Optional. Add a default owner for synced Access Lists. | +| `-o`, `--owner` | none | string | Optional. Add a default owner for synced Access Lists. | | `--[no-]accesslist-sync` | `--accesslist-sync` | none | Optional. Enable or disable group to Access List synchronization. | | `--[no-]appgroup-sync` | `--appgroup-sync` | none | Optional. Enable or disable Okta Applications and Groups sync. | | `-g`, `--group-filter` | none | string | Optional. Add a group filter. Supports globbing by default. Enclose in `^pattern$` for full regex support. | @@ -2149,6 +2239,88 @@ $ tctl plugins install okta \ --no-appgroup-sync ``` +## tctl plugins install entraid + +Install the Entra ID plugin. + +```code +$ tctl plugins install entraid +``` + +### Flags + +| Name | Default Value(s) | Allowed Value(s) | Description | +| - | - | - | - | +| `--name` | `entra-id` | string | Name of the plugin resource to create. | +| `--auth-connector-name` | `entra-id-default` | string | Name of the SAML connector resource to create. | +| `--default-owner` | none | string | Required. A Teleport username to set as the default owner for the imported Access Lists. Multiple flags allowed. 
|
+| `--[no-]access-graph` | `--access-graph` | boolean | Optional. Enable Access Graph cache build. |
+| `--[no-]use-system-credentials` | `--no-use-system-credentials` | boolean | Optional. Use system credentials instead of OIDC. |
+| `--[no-]manual-setup` | `--no-manual-setup` | boolean | Optional. Manually set up the Entra ID integration. |
+| `--[no-]force` | `--no-force` | boolean | Optional. Proceed with the installation even if the plugin already exists. |
+| `--group-id` | none | string | Optional. Include group matching the specified group ID. Multiple flags allowed. |
+| `--group-name` | none | string | Optional. Include groups matching the specified group name regex. Multiple flags allowed. |
+| `--exclude-group-id` | none | string | Optional. Exclude group matching the specified group ID. Multiple flags allowed. |
+| `--exclude-group-name` | none | string | Optional. Exclude groups matching the specified group name regex. Multiple flags allowed. |
+
+
+### Example
+
+```code
+$ tctl plugins install entraid \
+ --name entra-id-default \
+ --auth-connector-name entra-id \
+ --default-owner=admin \
+ --default-owner=root@example.com \
+ --no-access-graph \
+ --use-system-credentials \
+ --manual-setup \
+ --group-id 25f9c527-2314-414c-a75d-ef7efabcc99b \
+ --group-name "admin*" \
+ --exclude-group-id 080b50c3-1c98-4d8e-a54e-20143dbd4f99 \
+ --exclude-group-name "fin*"
+```
+
+Multiple flags are collected together. In the given example, based on the
+two `--default-owner` flags, all the Access Lists imported by the integration will
+be created with owners `admin` and `root@example.com`.
+
+
+## tctl plugins rotate awsic
+
+Rotate the bearer token used by Teleport to authenticate with the AWS IAM Identity
+Center SCIM service when provisioning AWS users and groups.
+
+By default, `tctl` will validate that the token can be used to authenticate with
+the SCIM service configured in the integration. Use the `--no-validate-token`
+flag to disable token validation.
+
+```code
+$ tctl plugins rotate awsic []
+```
+
+### Flags
+
+| Name | Default Value(s) | Allowed Value(s) | Description |
+| - | - | - | - |
+| `--plugin-name` | | string | Optionally override the name of the AWS IAM Identity Center integration to target |
+| `--[no-]validate-token` | `--validate-token` | none | Enables or disables validating the supplied token against the configured SCIM server |
+
+### Examples
+
+Rotate the SCIM bearer token without displaying the token in your command history:
+
+```code
+$ tctl plugins rotate awsic $(cat ./aws-iam-ic-bearer-token)
+```
+
+Rotate the SCIM bearer token for a custom integration enrollment that does not
+have the default name set by the guided enrollment flow:
+
+```code
+$ tctl plugins rotate awsic --plugin-name "my-custom-ic-integration" ${TOKEN_VALUE}
+```
+
## tctl version

Print the version of your `tctl` binary:
@@ -2156,3 +2328,50 @@ tctl version
```
+
+## tctl recordings encryption rotate
+
+Rotate the active session recording encryption key. This command fails if a
+rotation is already in progress.
+
+### Example
+
+```code
+$ tctl recordings encryption rotate
+```
+
+## tctl recordings encryption status
+
+Print the rotation status of all active encryption keys.
+
+### Flags
+
+| Name | Default Value(s) | Allowed Value(s) | Description |
+| ---------- | ---------------- | -------------------- | ------------------------- |
+| `--format` | `text` | `text`,`yaml`,`json` | The format of the output, default is `text` |
+
+### Example
+
+```code
+$ tctl recordings encryption status
+```
+
+## tctl recordings encryption complete-rotation
+
+Completes a key rotation.
+
+### Example
+
+```code
+$ tctl recordings encryption complete-rotation
+```
+
+## tctl recordings encryption rollback-rotation
+
+Reverts a key rotation.
+ +### Example + +```code +$ tctl recordings encryption rollback-rotation +``` diff --git a/docs/pages/reference/cli/teleport-update.mdx b/docs/pages/reference/cli/teleport-update.mdx index 32229f9dcbd88..e6eda3fd3caa1 100644 --- a/docs/pages/reference/cli/teleport-update.mdx +++ b/docs/pages/reference/cli/teleport-update.mdx @@ -1,11 +1,15 @@ --- title: teleport-update CLI reference +sidebar_label: teleport-update description: Comprehensive reference of subcommands, flags, and arguments for the teleport-update CLI tool. +tags: + - reference + - platform-wide --- -`teleport-update` is a CLI tool that is used to update Teleport Agents installed on Linux servers. +`teleport-update` is a CLI tool that is used to update Teleport Agents and Bots installed on Linux servers. -See [Teleport Agent Managed Updates](../../upgrading/agent-managed-updates.mdx) for more details. +See [Teleport Agent Managed Updates](../../upgrading/agent-managed-updates/agent-managed-updates.mdx) for more details. The primary commands for `teleport-update` are as follows: @@ -25,7 +29,7 @@ The primary commands for `teleport-update` are as follows: ## teleport-update enable -Enables agent Managed Updates and performs an initial installation of the Teleport Agent. +Enables Managed Updates and performs an initial installation of the Teleport Agent and tbot. This command also creates a systemd timer that periodically runs the update subcommand. If Teleport is already installed, `enable` will update to the cluster-advertised version @@ -37,8 +41,9 @@ Existing tarball-based, static installations may require `--overwrite`. 
Files are installed to the following paths (with the default `install-suffix`):

-- `/usr/local/bin/{teleport,tsh,...}` - Symbolic links into `/opt/teleport/default/versions/X.Y.Z/bin/`
+- `/usr/local/bin/{teleport,tbot,tsh,...}` - Symbolic links into `/opt/teleport/default/versions/X.Y.Z/bin/`
- `/lib/systemd/system/teleport.service` - Teleport SystemD service
+- `/etc/systemd/system/tbot.service` - tbot SystemD service (not replaced if already present)
- `/opt/teleport/default` - Storage for Teleport versions and updater configuration
- `/etc/systemd/system/teleport-update.{service,timer}` - Updater SystemD timer and service
- `/etc/systemd/system/teleport.service.d/teleport-update.conf` - Environment variables that configure Teleport
@@ -62,7 +67,7 @@ To change these flags, run `enable` again with the new flags.

### Examples

-**Example for a new installation.**
+**Example for a new Teleport Agent installation.**

Install Teleport with Managed Updates enabled on a fresh system.

@@ -72,6 +77,17 @@ $ sudo teleport-update enable
$ sudo systemctl enable teleport --now
```

+**Example for a new Teleport Agent and tbot installation.**
+
+Install Teleport and tbot with Managed Updates enabled on a fresh system.
+
+```code
+# create /etc/teleport.yaml and /etc/tbot.yaml
+$ sudo teleport-update enable
+$ sudo systemctl enable teleport --now
+$ sudo systemctl enable tbot --now
+```
+
**Example for an existing installation.**

Install Teleport with Managed Updates enabled on a system with a running Teleport version.

@@ -96,7 +112,7 @@ $ export PATH=/opt/teleport/mycluster/bin:$PATH

## teleport-update disable

-Disable Managed Updates for the installed agent.
+Disable Managed Updates for the installed Teleport Agent and/or tbot.

This command does not remove or change the active installation of Teleport.
Unlike `pin`, this command will not touch the current installation, and version lookup requests will stop entirely.
@@ -122,7 +138,7 @@ $ sudo teleport-update disable

## teleport-update pin

-Pin the installed agent to a specific version of Teleport.
+Pin the installed Teleport Agent and/or tbot to a specific version of Teleport.

This command updates Teleport to the latest version (or a version specified with `--force-version`), and
ensures the local installation of Teleport remains at that version. New versions will continue to be reported in
SystemD `teleport-update.service` logs, but they will not be installed.
@@ -234,7 +250,7 @@ Link the system installation of Teleport from the Teleport package, if Managed U

This command is used to link the system package installation by:

- Creating symbolic links from `/opt/teleport/system/bin/*` into `/usr/local/bin/`.
-- Copying the Teleport systemd service file from `/opt/teleport/system/lib/systemd/system/teleport.service` into `/ib/systemd/system/teleport.service`.
+- Copying the Teleport systemd service file from `/opt/teleport/system/lib/systemd/system/teleport.service` into `/lib/systemd/system/teleport.service`.

This command is executed automatically when the Teleport package is installed, and does not need to be manually executed.
+tags: + - reference + - platform-wide --- The CLI tool that supports the Teleport Infrastructure Identity Platform is called `teleport`, and allows Teleport services to be managed @@ -9,7 +13,7 @@ over the command line: - [Auth](../architecture/authentication.mdx) - [Node/SSH](../architecture/agents.mdx) - [Proxy](../architecture/proxy.mdx) -- [App](../../enroll-resources/application-access/introduction.mdx) +- [App](../../enroll-resources/application-access/application-access.mdx) - [Database](../../enroll-resources/database-access/database-access.mdx) - [Windows Desktop](../../enroll-resources/desktop-access/introduction.mdx) - [Kubernetes](../../enroll-resources/kubernetes-access/introduction.mdx) @@ -19,7 +23,7 @@ The primary commands for the `teleport` CLI are as follows: | Command | Description | | - | - | | `teleport app start` | Starts the Teleport Application Service. | -| `teleport configure` | Generates and writes a [configuration YAML file](../config.mdx) for the Teleport service. This file should be customized in production to suit the needs of your environment, and the default output should only be used when testing. | +| `teleport configure` | Generates and writes a [configuration YAML file](../deployment/config.mdx) for the Teleport service. This file should be customized in production to suit the needs of your environment, and the default output should only be used when testing. | | `teleport db configure aws create-iam` | Generates, creates, and attaches desired IAM policies to a Teleport-managed database. | | `teleport db configure aws print-iam` | Generates and outputs current IAM policies for a Teleport-managed database. | | `teleport db configure bootstrap` | Used to bootstrap a configuration to the Teleport Database Service by reading a provided configuration. 
| @@ -29,7 +33,7 @@ The primary commands for the `teleport` CLI are as follows: | `teleport install systemd` | Creates a systemd unit file, used to configure and install a `teleport` service daemon. | | `teleport join openssh` | Registers an OpenSSH server with Teleport. | | `teleport node configure` | Generates a configuration YAML file for a Teleport Node accessed via SSH. This file should be customized in production to suit the needs of your environment, and the default output should only be used when testing. | -| `teleport start` | Starts the `teleport` process in the foreground using the current shell session, including any services configured by the [configuration YAML file](../config.mdx). | +| `teleport start` | Starts the `teleport` process in the foreground using the current shell session, including any services configured by the [configuration YAML file](../deployment/config.mdx). | | `teleport status` | Prints the status of the current active Teleport SSH session. | | `teleport version` | Prints the current release version of the Teleport binary installed on your system. | | `teleport debug set-log-level` | Changes instance log level. | @@ -45,7 +49,7 @@ For more information on subcommands when working with the `teleport` cli, use th The `teleport start` command includes a large number of optional configuration flags. While configuration flags for `teleport start` can be used to set parameters for Teleport's configuration, -we recommend using a [configuration file](../config.mdx) in production. +we recommend using a [configuration file](../deployment/config.mdx) in production. ### Flags @@ -64,9 +68,9 @@ we recommend using a [configuration file](../config.mdx) in production. 
| `-c, --config` | `/etc/teleport.yaml` | **string** `.yaml` filepath | starts services with config specified in the YAML file, overrides CLI flags if set | | `--apply-on-startup` | none | **string** `.yaml` filepath | On startup, always apply resources described in the file at the given path. Only supports the following kinds: `token`, `role`, `user`, `cluster_auth_preference`, `cluster_networking_config`, `bot`. | | `--bootstrap` | none | **string** `.yaml` filepath | bootstrap configured YAML resources {/* TODO link how to configure this file */} | -| `--labels` | none | **string** comma-separated list | assigns a set of labels to a node, for example env=dev,app=web. See the explanation of labeling mechanism in the [Labeling Nodes](../../admin-guides/management/admin/labels.mdx) section. | +| `--labels` | none | **string** comma-separated list | assigns a set of labels to a node, for example env=dev,app=web. See the explanation of labeling mechanism in the [Labeling Nodes](../../zero-trust-access/rbac-get-started/labels.mdx) section. | | `--insecure` | none | none | disable certificate validation on Proxy Service, validation still occurs on Auth Service. | -| `--fips` | none | none | start Teleport in FedRAMP/FIPS 140-2 mode. | +| `--fips` | none | none | start Teleport in FedRAMP/FIPS mode. | | `--skip-version-check` | `false` | `true` or `false` | Skips version checks between the Auth Service and this Teleport instance | | `--diag-addr` | none | none | Enable diagnostic endpoints | | `--permit-user-env` | none | none | flag reads in environment variables from `~/.tsh/environment` when creating a session. | @@ -83,8 +87,8 @@ The `--roles` flag when used with `teleport --start` instructs Teleport on which | [Node](../architecture/agents.mdx) | `node` | Allows SSH connections from authenticated clients. 
| | [Auth](../architecture/authentication.mdx) | `auth` | Authenticates and authorizes hosts and users who want access to Teleport-managed resources or information about a cluster. | | [Proxy](../architecture/proxy.mdx) | `proxy` | The gateway that clients use to connect to the Auth Service or resources managed by Teleport. | -| [App](../../enroll-resources/application-access/introduction.mdx) | `app` | Provides access to applications. | -| [Database](../agent-services/database-access-reference/database-access-reference.mdx) | `db` | Provides access to databases. | +| [App](../../enroll-resources/application-access/application-access.mdx) | `app` | Provides access to applications. | +| [Database](../../enroll-resources/database-access/reference/reference.mdx) | `db` | Provides access to databases. | diff --git a/docs/pages/reference/cli/tsh.mdx b/docs/pages/reference/cli/tsh.mdx index 50c207bda3f2f..77a45b212a002 100644 --- a/docs/pages/reference/cli/tsh.mdx +++ b/docs/pages/reference/cli/tsh.mdx @@ -1,6 +1,10 @@ --- title: tsh CLI reference +sidebar_label: tsh description: Comprehensive reference of subcommands, flags, and arguments for the tsh CLI tool. +tags: + - reference + - platform-wide --- `tsh` is a CLI client used by Teleport users. 
It allows users to interact with @@ -14,18 +18,19 @@ Environment variables configure your tsh client and can help you avoid using fla | Environment Variable | Description | Example Value | | - | - | - | | TELEPORT_AUTH | Any defined [authentication connector](../access-controls/authentication.mdx), including `passwordless` and `local` (i.e., no authentication connector) | okta | -| TELEPORT_MFA_MODE | Preferred mode for MFA and Passwordless assertions | otp | | TELEPORT_CLUSTER | Name of a Teleport root or leaf cluster | cluster.example.com | | TELEPORT_LOGIN | Login name to be used by default on the remote host | root | | TELEPORT_LOGIN_BIND_ADDR | Address in the form of host:port to bind to for login command webhook | host:port | | TELEPORT_LOGIN_BROWSER | Set to `none` to stop the system default browser from opening for SSO logins. If the value is not `none`, `tsh` will open the system default browser. | none | | TELEPORT_PROXY | Address of the Teleport proxy server | cluster.example.com:3080 | +| TELEPORT_RELAY | Address of the Teleport relay server to use, "none" to disable the use of a relay, or "default" to use the default address specified by the control plane at login time. Defaults to port 443. 
| relay.example.com | +| TELEPORT_HEADLESS | Use headless authentication | true, false, 1, 0 | | TELEPORT_HOME | Home location for tsh configuration and data | /directory | | TELEPORT_USER | A Teleport user name | alice | | TELEPORT_ADD_KEYS_TO_AGENT | Specifies if the user certificate should be stored on the running SSH agent | yes, no, auto, only | | TELEPORT_USE_LOCAL_SSH_AGENT | Disable or enable local SSH agent integration | true, false | | TELEPORT_GLOBAL_TSH_CONFIG | Override location of global `tsh` config file from default `/etc/tsh.yaml` | /opt/teleport/tsh.yaml | -| TELEPORT_HEADLESS | Use headless authentication | true, false, 1, 0 | +| TELEPORT_MFA_MODE | Preferred mode for MFA and Passwordless assertions | auto, cross-platform, platform, otp, sso | | TELEPORT_IDENTITY_FILE | File path to identity file | /opt/identity | ## tsh global flags @@ -34,17 +39,18 @@ Environment variables configure your tsh client and can help you avoid using fla | - | - | - | - | | `-l, --login` | none | an identity name | The login identity that the Teleport user will use | | `--proxy` | none | `host:https_port[,ssh_proxy_port]` | Teleport Proxy Service address | +| `--relay` | none | `host[:port]`, `none`, or `default` | Address of the Teleport relay server to use, "none" to disable the use of a relay, or "default" to use the default address specified by the control plane at login time. Defaults to port 443. | | `--user` | `$USER` | none | The Teleport username | | `--ttl` | `720` (12 hours) | integer | Number of minutes a certificate issued for the `tsh` user will be valid for | | `-i, --identity` | none | **string** filepath | Identity file | | `--cert-format` | `standard` | `standard` or `oldssh` | SSH certificate format. `oldssh` supports older versions of OpenSSH servers that do not allow for custom metadata, which is how Teleport encodes a user's roles in their SSH certificate. | | `--insecure` | none | none | Do not verify the server's certificate and host name.
Use only in test environments. | | `--auth` | `local` | Any defined [authentication connector](../access-controls/authentication.mdx), including `passwordless` and `local` (i.e., no authentication connector) | Specify the type of authentication connector to use. | -| `--mfa-mode` | auto | `auto`, `cross-platform`, `platform` or `otp` | Preferred mode for MFA and Passwordless assertions. | +| `--mfa-mode` | auto | `auto`, `cross-platform`, `platform`, `otp`, or `sso` | Preferred mode for MFA and Passwordless assertions. | | `--skip-version-check` | none | none | Skip version checking between server and client. | | `-d, --debug` | none | none | Verbose logging to stdout | | `-J, --jumphost` | none | A jump host | SSH jumphost | -| `--headless` | none | none | Use Headless WebAuthn for authentication | +| `--headless` | none | none | Use Headless Authentication | | `--mlock` | `auto` | `auto`, `off`, `best_effort`, `strict` | Lock process memory to protect client secrets stored in memory from being swapped to disk. | ## tsh apps ls @@ -69,7 +75,7 @@ $ tsh clusters [] ### Global flags -These flags are available for all commands `--login`, `--proxy`, `--user`, `--ttl`, `--identity`, `--cert-format`, `--insecure`, `--auth`, `--skip-version-check`, `--debug`, `--jumphost`. +These flags are available for all commands `--login`, `--proxy`, `--relay`, `--user`, `--ttl`, `--identity`, `--cert-format`, `--insecure`, `--auth`, `--skip-version-check`, `--debug`, `--jumphost`. Run `tsh help ` or see the [Global Flags section](#tsh-global-flags). ### Examples @@ -162,7 +168,7 @@ $ tsh gcloud [--app] [] ### Global flags -These flags are available for all commands `--login, --proxy, --user, --ttl, --identity, --cert-format, --insecure, --auth, --skip-version-check, --debug, --jumphost`. +These flags are available for all commands `--login`, `--proxy`, `--relay`, `--user`, `--ttl`, `--identity`, `--cert-format`, `--insecure`, `--auth`, `--skip-version-check`, `--debug`, `--jumphost`. 
Run `tsh help ` or see the [Global Flags section](#tsh-global-flags). ### Examples @@ -194,7 +200,7 @@ $ tsh gsutil [--app] [] ### Global flags -These flags are available for all commands `--login, --proxy, --user, --ttl, --identity, --cert-format, --insecure, --auth, --skip-version-check, --debug, --jumphost`. +These flags are available for all commands `--login`, `--proxy`, `--relay`, `--user`, `--ttl`, `--identity`, `--cert-format`, `--insecure`, `--auth`, `--skip-version-check`, `--debug`, `--jumphost`. Run `tsh help ` or see the [Global Flags section](#tsh-global-flags). ### Examples @@ -234,7 +240,7 @@ $ tsh join [] ### Global flags -These flags are available for all commands `--login, --proxy, --user, --ttl, --identity, --cert-format, --insecure, --auth, --skip-version-check, --debug, --jumphost`. +These flags are available for all commands `--login`, `--proxy`, `--relay`, `--user`, `--ttl`, `--identity`, `--cert-format`, `--insecure`, `--auth`, `--skip-version-check`, `--debug`, `--jumphost`. Run `tsh help ` or see the [Global Flags section](#tsh-global-flags). ### Examples @@ -310,7 +316,7 @@ $ tsh login [] [] ### Arguments -- `` - the name of the cluster, see [Trusted Cluster](../../admin-guides/management/admin/trustedclusters.mdx) for more information. +- `` - the name of the cluster, see [Trusted Cluster](../../zero-trust-access/deploy-a-cluster/trustedclusters.mdx) for more information. ### Flags @@ -329,9 +335,11 @@ $ tsh login [] [] ### Global flags -These flags are available for all commands `--login, --proxy, --user, --ttl, --identity, --cert-format, --insecure, --auth, --skip-version-check, --debug, --jumphost`. +These flags are available for all commands `--login`, `--proxy`, `--relay`, `--user`, `--ttl`, `--identity`, `--cert-format`, `--insecure`, `--auth`, `--skip-version-check`, `--debug`, `--jumphost`. Run `tsh help ` or see the [Global Flags section](#tsh-global-flags). 
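For example (`teleport.example.com` and `relay.example.com` are placeholder addresses), the `--relay` flag can select or disable a relay at login time:

```code
# Log in through a specific relay
$ tsh login --proxy=teleport.example.com --relay=relay.example.com

# Log in without using any relay
$ tsh login --proxy=teleport.example.com --relay=none
```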
+A relay address specified (or explicitly disabled with the "none" value) at login time will be stored in the `tsh` configuration directory for the specified proxy; if unspecified, the relay address may fall back to a default value specified by the Teleport control plane. Use of a Relay requires `tsh` v18.3.0 or later. + ### Examples *The proxy endpoint can take a https and ssh port in this format `host:https_port[,ssh_proxy_port]`* @@ -410,7 +418,7 @@ $ tsh ls [] [ If your load balancer supports HTTP health checks, configure it to hit the -`/readyz` [diagnostics endpoint](../admin-guides/management/diagnostics/monitoring.mdx) on +`/readyz` [diagnostics endpoint](../../zero-trust-access/management/diagnostics/monitoring.mdx) on machines running Teleport. This endpoint must be enabled by using the `--diag-addr` flag to teleport start: @@ -120,17 +123,92 @@ and user records will be stored. To configure Teleport for using etcd as a storage backend: - Make sure you are using **etcd versions 3.3** or newer. -- Follow [etcd's cluster hardware recommendations](https://etcd.io/docs/v3.5/op-guide/hardware/). In particular, leverage +- Follow [etcd's cluster hardware recommendations](https://etcd.io/docs/v3.6/op-guide/hardware/). In particular, leverage SSD or high-performance virtualized block device storage for best performance. - Install etcd and configure peer and client TLS authentication using the [etcd - security guide](https://etcd.io/docs/v3.5/op-guide/security/). - - You can use [this script provided by - etcd](https://github.com/etcd-io/etcd/tree/master/hack/tls-setup) if you - don't already have a TLS setup. + security guide](https://etcd.io/docs/v3.6/op-guide/security/). See + [Configuring mTLS](#configuring-mtls-authentication-with-etcd) below for + details. - Configure all Teleport Auth Service instances to use etcd in the "storage" section of the config file as shown below. - Deploy several Auth Service instances connected to etcd backend. 
- Deploy several Proxy Service instances that have `auth_server` pointed to the Auth Service to connect to. +### Configuring mTLS authentication with etcd + +We recommend using [cfssl](https://github.com/cloudflare/cfssl) and etcd's +[`tls-setup` script](https://github.com/etcd-io/etcd/tree/main/hack/tls-setup) +to configure mTLS between Teleport and etcd. It's helpful for creating +certificates and keys for the entire etcd cluster. After [installing +cfssl](https://github.com/cloudflare/cfssl?tab=readme-ov-file#installation), +clone the `tls-setup` repo and follow the `Readme.md` file to customize the +script configuration. Make sure to tweak the Makefile if you change the number +of etcd nodes (and thus the number of `infra` variables). + +After you run `make`, the following files will be generated: + +- `tls-setup/certs/ca.pem` is the certificate authority and it should be + referenced in Teleport's `tls_ca_file` option (see [Configuring Auth Service + instances to use etcd](#configuring-auth-service-instances-to-use-etcd) + below). +- `tls-setup/certs/peer-*` files are used internally by etcd for communicating + between nodes. +- `tls-setup/certs/host.pem` and `tls-setup/certs/host-key.pem` are the + host certificates and their corresponding private keys. Each corresponding + etcd host needs to know them, and they are required for providing transport + security on the etcd side. + +This should be enough to configure mTLS on the etcd side. + +On the Teleport side, you need to configure client certificates. Here's one way +to do it: + +1. Generate a default CSR: + ```code + $ cfssl print-defaults csr > client.json + ``` + +1. Remove all hostnames from the hosts element of the `client.json` file, as + this CSR doesn't represent any host, but a client. Also customize any other + options if required. + +1.
For each Auth Service instance, generate the client certificate: + ```code + $ cfssl gencert -ca=certs/ca.pem \ + -ca-key=certs/ca-key.pem \ + -config=config/ca-config.json -profile=client client.json \ + | cfssljson -bare certs/client + ``` + This assumes that the `tls-setup` tool has been previously run and generated + the `certs/ca.pem` and `certs/ca-key.pem` files. + + The following files will be generated: + + - `certs/client.pem`, which will be referenced by the `tls_cert_file` option + in the Teleport configuration file. + - `certs/client-key.pem`, which will be referenced by the `tls_key_file` + option in the Teleport configuration file. + + Upload these files to the corresponding Auth Service instance, along with + `certs/ca.pem`. + +After generating all the keys and certificates, run etcd using some variation +of this command, replacing `host` with your etcd hostname: + +```code +$ etcd --cert-file=certs/host.pem \ + --key-file=certs/host-key.pem \ + --client-cert-auth \ + --trusted-ca-file=certs/ca.pem \ + --peer-cert-file=certs/peer-host.pem \ + --peer-key-file=certs/peer-host-key.pem \ + --advertise-client-urls=https://host:2379 --listen-client-urls=https://host:2379 + # ... other applicable options + ``` + +### Configuring Auth Service instances to use etcd + +Put these settings into the configuration file of all Auth Service instances: + ```yaml teleport: storage: @@ -139,20 +217,19 @@ teleport: # List of etcd peers to connect to: peers: ["https://172.17.0.1:4001", "https://172.17.0.2:4001"] - # Required path to TLS client certificate and key files to connect to etcd. + # Required path to TLS client certificate and key files to connect to + # etcd. # - # To create these, follow - https://coreos.com/os/docs/latest/generate-self-signed-certificates.html - or use the etcd-provided script - https://github.com/etcd-io/etcd/tree/master/hack/tls-setup.
+ # If you used cfssl as described above, these should be the + # certs/client.pem and certs/client-key.pem files from the previous steps. tls_cert_file: /var/lib/teleport/etcd-cert.pem tls_key_file: /var/lib/teleport/etcd-key.pem # Optional file with trusted CA authority # file to authenticate etcd nodes # - # If you used the script above to generate the client TLS certificate, - # this CA certificate should be one of the other generated files + # If you used the tls-setup script, this CA certificate should be the + # certs/ca.pem file. tls_ca_file: /var/lib/teleport/etcd-ca.pem # Alternative password-based authentication, if not using TLS client @@ -184,6 +261,45 @@ teleport: etcd_max_client_msg_size_bytes: 15728640 ``` +### Disk sizing, compaction and defragmentation + +For large-scale deployments, plan for high write throughput and allocate sufficient storage to meet the history retention requirements. + +A rule of thumb for minimum storage size (excluding metadata) is: +``` +minimum_storage ≈ backend_resource_count * average_record_size * auto_compaction_retention +``` + +- `backend_resource_count`: the total number of resources including replicas. +- `average_record_size`: the average size (in bytes) of a resource. +- `auto_compaction_retention`: the maximum number of revisions per item. + +Auto-compaction based on revision count is strongly recommended. +Note that physical disk space is not reclaimed during compaction. To recover space, schedule regular defragmentation +based on available disk space and write throughput. + +Use `etcd_mvcc_db_total_size_in_bytes` and `etcd_mvcc_db_total_size_in_use_in_bytes` metrics to tune as needed. + +Refer to [etcd's maintenance guide](https://etcd.io/docs/v3.6/op-guide/maintenance/) for auto-compaction and defragmentation. 
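The rule of thumb above can be checked with plain shell arithmetic; the resource count, record size, and retention used here are illustrative values, not recommendations:

```shell
# Minimum etcd storage ≈ resources * average record size * retained revisions
resources=50000        # total backend resources, including replicas
avg_size_bytes=2048    # average record size (2 KiB)
retention=5            # auto-compaction revision retention

min_bytes=$((resources * avg_size_bytes * retention))
echo "$((min_bytes / 1024 / 1024)) MiB"   # → 488 MiB
```

Remember to add the safety margin discussed below on top of this figure.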
+ +#### Example disk sizing + +For a cluster with 50k resources, with average size of 2 KiB and retention history of 5 revisions we would expect +the dataset to be at least: + +``` +50000 * 2 KiB * 5 ≈ 488 MiB +``` + +An additional 20-40% safety margin is recommended for metadata and fragmentation overhead. + +Given an existing resource you can estimate its size with the following command, +assigning to the resource name, e.g., `role/editor`: + +```code +tctl get --format=json | awk '{ total += length($0) } END { print total * 2 }' +``` + ## PostgreSQL PostgreSQL cluster state and audit log storage is available starting from @@ -309,7 +425,7 @@ teleport: # use a different user or different settings for the connection used # to set up and make use of logical decoding. If specified, Teleport # will use the connection string in change_feed_conn_string for that, - # instead of the one in conn_string. Available in Teleport 13.4 and later. + # instead of the one in conn_string. change_feed_conn_string: postgresql://replication_user_name@database-address/teleport_backend?sslmode=verify-full # An audit_events_uri with a scheme of postgresql:// will use the @@ -1002,7 +1118,6 @@ the following Terraform script: To configure Teleport to use Athena: -- Make sure you are using **Teleport version 14.0.0** or newer. - Prepare infrastructure - Specify an Athena URL inside the `audit_events_uri` array in your Teleport configuration file: @@ -1218,7 +1333,7 @@ instance with a disk size at least 2x bigger than the table size in Amazon DynamoDB. Instructions for how to use the migration tool can be found -[on GitHub](https://github.com/gravitational/teleport/blob/master/examples/dynamoathenamigration/README.md). +[on GitHub](https://github.com/gravitational/teleport/blob/branch/v(=teleport.major_version=)/examples/dynamoathenamigration/README.md). You should set `exportTime` to the time when dual writing began. @@ -1520,7 +1635,7 @@ teleport: # conn_string is a required parameter. 
It is a PostgreSQL connection string used # to connect to CockroachDB using the PostgreSQL wire protocol. Client # parameters may be specified using the URL. For a detailed list of available - # parameters see https://www.cockroachlabs.com/docs/stable/connection-parameter + # parameters see https://www.cockroachlabs.com/docs/v25.3/connection-parameters.html # # If your certificates are not stored at the default ~/.postgresql # location, you will need to specify them with the sslcert, sslkey, and diff --git a/docs/pages/reference/config.mdx b/docs/pages/reference/deployment/config.mdx similarity index 86% rename from docs/pages/reference/config.mdx rename to docs/pages/reference/deployment/config.mdx index a6fbf0f0f2bf7..914cf92f4ef1f 100644 --- a/docs/pages/reference/config.mdx +++ b/docs/pages/reference/deployment/config.mdx @@ -1,9 +1,11 @@ --- -title: Teleport Configuration -h1: Teleport Configuration Reference +title: Teleport Configuration Reference +sidebar_label: Teleport Configuration description: The detailed guide and reference documentation for configuring Teleport for SSH and Kubernetes access. keywords: [config, file, reference, yaml, etc] -tocDepth: 3 +tags: + - reference + - platform-wide --- Teleport uses the YAML file format for configuration. A full configuration @@ -31,7 +33,7 @@ $ teleport configure -o file There are also `configure` commands available for the SSH Service and Database Service. See our documentation on `teleport node configure` and `teleport db -configure` in the [Teleport CLI Reference](cli/teleport.mdx). +configure` in the [Teleport CLI Reference](../cli/teleport.mdx). @@ -69,6 +71,7 @@ Teleport supports the following services: |SSH Service|`ssh_service`|✅| |Desktop Service|`windows_desktop_service`|❌| |Jamf Service|`jamf_service`|❌| +|Relay Service|`relay_service`|❌| |Debug Service|`debug_service`|✅| Teleport Cloud manages the Auth Service and Proxy Service for you. 
Instances of @@ -100,10 +103,10 @@ These settings apply to any `teleport` instance: Further reading: - [Joining Services to a - Cluster](../enroll-resources/agents/agents.mdx): + Cluster](../../enroll-resources/agents/agents.mdx): Available join methods to help you configure `join_params`. - [Using a CA - Pin](../enroll-resources/agents/join-token.mdx): + Pin](../../enroll-resources/agents/join-token.mdx): When to assign a value to `ca_pin`. - [Teleport Metrics Reference](monitoring/monitoring.mdx): Data to collect using `diag_addr`. @@ -137,18 +140,18 @@ to specify these configuration settings. Further reading: - [Storage Backends](backends.mdx) reference: instructions on configuring DynamoDB, S3, etcd, and other highly available backends. -- [Passwordless](../admin-guides/access-controls/guides/passwordless.mdx): More +- [Passwordless](../../zero-trust-access/authentication/passwordless.mdx): More information about the `passwordless` authentication option. - [Headless - WebAuthn](../admin-guides/access-controls/guides/headless.mdx): The + WebAuthn](../../zero-trust-access/authentication/headless.mdx): The `headless` authentication option. -- [Single Sign-On](../admin-guides/access-controls/sso/sso.mdx): Configuring SSO +- [Single Sign-On](../../zero-trust-access/sso/sso.mdx): Configuring SSO so you can configure Teleport to use a specific SSO authentication connector. -- [Locking](../admin-guides/access-controls/guides/locking.mdx): Configuring the +- [Locking](../../identity-governance/locking.mdx): Configuring the `locking_mode` option. -- [Device Trust](../admin-guides/access-controls/device-trust/device-trust.mdx): Configuring +- [Device Trust](../../identity-governance/device-trust/device-trust.mdx): Configuring the `device_trust` section. 
-- [Recording Proxy Mode](architecture/session-recording.mdx): If you configure +- [Recording Proxy Mode](../architecture/session-recording.mdx): If you configure Recording Proxy Mode, consider enabling `proxy_checks_host_keys`. ### SSH Service @@ -161,9 +164,9 @@ These settings apply to the Teleport SSH Service: Further reading: - [Enhanced Session - Recording](../enroll-resources/server-access/guides/bpf-session-recording.mdx): + Recording](../../enroll-resources/server-access/guides/bpf-session-recording.mdx): Configuring `enhanced_recording`. -- [PAM Integration](../enroll-resources/server-access/guides/ssh-pam.mdx): +- [PAM Integration](../../enroll-resources/server-access/guides/ssh-pam.mdx): Configuring the `pam` section. ### Kubernetes Service @@ -214,6 +217,16 @@ These settings apply to the Jamf Service: (!docs/pages/includes/config-reference/jamf-service.yaml!) ``` +### Relay Service + +The Relay Service is available in Teleport v18.3.0 and later. + +These settings apply to the Relay Service: + +```yaml +(!docs/pages/includes/config-reference/relay-service.yaml!) +``` + ### Debug Service These settings apply to the Debug Service diff --git a/docs/pages/reference/deployment/deployment.mdx b/docs/pages/reference/deployment/deployment.mdx new file mode 100644 index 0000000000000..44890b10bca49 --- /dev/null +++ b/docs/pages/reference/deployment/deployment.mdx @@ -0,0 +1,12 @@ +--- +title: Teleport Deployment References +sidebar_label: Deploying Teleport +description: Provides reference information for running Teleport on your infrastructure. +--- + +The Teleport deployment reference guides include comprehensive lists of options +for managing a Teleport deployment, including the Auth Service, Proxy Service, +and Teleport Agents. 
+ + + diff --git a/docs/pages/reference/deployment/identity-security-config.mdx b/docs/pages/reference/deployment/identity-security-config.mdx new file mode 100644 index 0000000000000..c00d1c1391ad5 --- /dev/null +++ b/docs/pages/reference/deployment/identity-security-config.mdx @@ -0,0 +1,41 @@ +--- +title: Teleport Identity Security Configuration +sidebar_label: Identity Security Configuration +description: The detailed guide and reference documentation for configuring Teleport Identity Security. +keywords: [config, file, reference, yaml, etc] +tags: + - identity-security +--- + +Teleport Identity Security uses the YAML file format for configuration. A full configuration +reference file is shown below. This provides comments and all available options +for `identity-security.yaml`. + +## Before using this reference + + + +Do not use this example configuration in production. + + + +You must edit your configuration file to meet the needs of your environment. +Using a copy of the reference configuration will have unintended effects. + +## Reference configurations + +These example configurations include all possible configuration options in YAML +format to demonstrate proper use of indentation. + +### Teleport Identity Security settings + +These settings apply the Teleport Identity Security process. + +```yaml +(!docs/pages/includes/config-reference/identity-security.yaml!) +``` + +## Further reading + +- [Install Teleport Identity Security](../../identity-security/access-graph/access-graph.mdx): instructions on how to install Teleport Identity Security. +- [Teleport Identity Security](../../identity-security/identity-security.mdx): instructions on how to use Teleport Identity Security. 
diff --git a/docs/pages/reference/deployment/join-methods.mdx b/docs/pages/reference/deployment/join-methods.mdx new file mode 100644 index 0000000000000..f7876b43500be --- /dev/null +++ b/docs/pages/reference/deployment/join-methods.mdx @@ -0,0 +1,592 @@ +--- +title: Join Methods and Tokens +description: Describes the different ways to configure a Teleport instance to join a cluster. +keywords: [join, joining, token] +tags: + - conceptual + - reference + - zero-trust +--- + +This guide explains the core concepts behind the Teleport joining process, +references all supported join methods and classifies them based on their +security properties. This guide does not explain step-by-step how to +join an instance with each join method, but links to the relevant How-To +guides when possible. + + +You must be familiar with the [Teleport Core concepts](../../core-concepts.mdx) +before reading this page. + + +## Definitions + +### Joining + +Joining a Teleport cluster is the act of establishing trust between a new +Teleport instance and all the existing instances already part of the Teleport +cluster. At the end of the joining process, the Auth Service signs certificates +for the joining instance. Those certificates represent the trust that was +established. With them, the newly-joined instance can interact with the +other Teleport instances. + +To request its certificates, an instance must prove its identity to the Auth Service. +Teleport offers multiple ways for a joining instance to prove its authenticity; these +are called join methods. + +The joining process only happens when a Teleport service doesn't have valid +certificates. Once the token is exchanged for certificates, those +certificates are used on all subsequent attempts to connect. In most cases, +this happens during the first startup. + +### Join methods + +A join method is a way for the Auth Service to validate that an instance requesting to +join the Teleport cluster is legitimate.
Some join methods are universal while +others rely on the context of the joining instance. For example, cloud provider +join methods (such as `iam`, `gcp`, or `azure`) and CI provider join methods (such as `github`, +`gitlab`, or `circleci`) are more flexible and provide +better security guarantees, but require the joining instance to run on a specific +platform. + +Different join methods may provide different security guarantees. For example, some +join methods allow the joining instance to request renewable certificates, while others +require the instance to join again to renew its certificate. + +The join method and its parameters are specified in the token resource. + +### Token + +A Token is a Teleport resource that specifies which join method can be used in +which context. For example, a token can allow SSH services to join with the +`iam` join method if they are in the AWS account `333333333333` and can assume +the role `teleport-instance-role`: + +```yaml +kind: token +version: v2 +metadata: + name: my-iam-token +spec: + roles: [Node] + join_method: iam + allow: + - aws_account: "333333333333" + aws_arn: "arn:aws:sts::333333333333:assumed-role/teleport-instance-role/i-*" +``` + + +The token name may or may not be sensitive, depending on the join method. +Secret-based join methods rely on the token name to be secret. In such +cases the token name must be protected, as knowing the token name is enough +for an instance to join the cluster. + + +## Classification of join methods + +### Secret vs delegated + +#### Secret-based join methods + +Secret-based join methods are universal: any Teleport service can use a secret-based +join method regardless of the platform or cloud provider it runs on. The joining instance +sends the secret and the Auth Service validates that it matches the one it +knows. These join methods are inherently prone to secret exfiltration, so +the delegated join methods should be preferred when available.
If you have +to use a secret-based join method, it is recommended to use short-lived tokens +(valid for only one hour, for example) to reduce the risk of the token leaking. + +Secret-based join methods are: +- [ephemeral `token`](#ephemeral-tokens) +- [static `token`](#static-tokens) + + +Teleport supports static tokens for backward compatibility; their use should be avoided. + + +#### Delegated join methods + +Delegated join methods rely on the context of the joining instance and a third party to +establish trust. The third party can be a cloud provider, a CI platform, or the +container runtime. Those methods cannot be used for every instance (e.g. joining an SSH +agent from a Raspberry Pi is not possible) but should be preferred when possible. + +Delegated join methods might also offer more granularity. For example, cloud-provider +based join methods can allow instances to join based on their Availability Zone, +service account, or cloud account ID. + +Delegated join methods are: +- [`azure`](#azure-managed-identity-azure) +- [`azure_devops`](#azure-devops-azure_devops) +- [`bitbucket`](#bitbucket-pipelines-bitbucket) +- [`bound_keypair`](#bound-keypair-bound_keypair) +- [`circleci`](#circleci-circleci) +- [`ec2`](#aws-ec2-identity-document-ec2) +- [`env0`](#env0-env0) +- [`gcp`](#gcp-service-account-gcp) +- [`github`](#github-actions-github) +- [`gitlab`](#gitlab-gitlab) +- [`iam`](#aws-iam-role-iam) +- [`kubernetes`](#kubernetes-kubernetes) +- [`oracle`](#oracle-cloud-oracle) +- [`spacelift`](#spacelift-spacelift) +- [`terraform_cloud`](#terraform-cloud-terraform_cloud) +- [`tpm`](#trusted-platform-module-tpm) + +### Renewable vs non-renewable + +Depending on the join method used, the Auth Service might issue renewable +or non-renewable certificates. + +When the certificate is about to expire, instances with renewable certificates can +request a new one without having to use a token again.
Typically, secret-based join methods +provide renewable certificates because the secret token is sensitive and +short-lived. With a single join, the instance can stay part of the cluster indefinitely. + +Renewable join methods are: +- [ephemeral `token`](#ephemeral-tokens) +- [static `token`](#static-tokens) +- [`ec2`](#aws-ec2-identity-document-ec2) +- [`bound_keypair`](#bound-keypair-bound_keypair) + +Nodes with non-renewable certificates must join again in order to get a new +certificate before expiry. The instance will have to prove again that it is legitimate. +The non-renewable join methods guarantee that an attacker stealing the instance +certificates will not be able to maintain access to the Teleport cluster. +Those join methods can be considered more secure and more appropriate for +temporary workloads such as CI/CD pipelines or containerized environments. + +Non-renewable join methods are: +- [`iam`](#aws-iam-role-iam) +- [`azure`](#azure-managed-identity-azure) +- [`azure_devops`](#azure-devops-azure_devops) +- [`env0`](#env0-env0) +- [`gcp`](#gcp-service-account-gcp) +- [`github`](#github-actions-github) +- [`circleci`](#circleci-circleci) +- [`gitlab`](#gitlab-gitlab) +- [`kubernetes`](#kubernetes-kubernetes) +- [`tpm`](#trusted-platform-module-tpm) + +## Token resource reference + +The token resource has the following common fields for all join methods: + +```yaml +# token.yaml +kind: token +version: v2 +metadata: + name: my-token-name +spec: + # System roles describe what services the joining Teleport instance can run. + # Those roles are written on the instance certificate. If you want to change + # them (e.g. add Application access to an SSH Node), you need to: + # - edit the token to update the roles (e.g.
add "App") + # - un-register the Teleport instance + # - modify its configuration to enable the new service (here "app_service.enabled") + # - have the instance join again + # + # You should use the minimal set of system roles required. + # Common roles are: + # - "Node" for SSH Service + # - "Proxy" for Proxy Service + # - "Kube" for Kubernetes Service + # - "App" for Application Service + # - "Db" for Database Service + # - "WindowsDesktop" for Windows Desktop Service + # - "Discovery" for Discovery Service + # - "Bot" for Machine & Workload Identity (when set, "spec.bot_name" must be set in the token) + roles: + - Node + - App + join_method: gcp + # Only set bot name when the token is used for Machine & Workload Identity. + # When set, the token must have the "Bot" role as well. + bot_name: my-bot + # SuggestedLabels is a set of labels that resources should set when using this + # token to enroll themselves in the cluster. + # Currently, only node-join scripts create a configuration according to the suggestion. + suggested_labels: + teams: ["sales-eng", "eng", "qa"] + application: ["demo-product"] + # SuggestedAgentMatcherLabels is a set of labels to be used by discovery agents to match on resources. + # When an agent uses this token, the agent should monitor resources that match those labels. + # For databases, this means adding the labels to `db_service.resources.labels`. + # Currently, only node-join scripts create a configuration according to the suggestion. + suggested_agent_matcher_labels: + teams: ["sales-eng"] +``` + +## Join methods + +### Static tokens + + +This join method is inherently less secure because long-lived tokens can be stolen and reused. +Relying on it significantly reduces the security benefits of using Teleport. Its usage is strongly discouraged. +You should use [ephemeral tokens](#ephemeral-tokens) instead. + + +Static tokens are tokens defined in the Auth Service configuration (`teleport.yaml`). 
+The token name must be kept secret, as anyone who knows it can join instances to
+the Teleport cluster.
+
+```yaml
+auth_service:
+    enabled: true
+    # Pre-defined tokens for adding new instances to a cluster. Each token specifies
+    # the roles a new node will be allowed to assume. A more secure way to
+    # add instances is to use the `tctl nodes add --ttl` command to generate auto-expiring
+    # tokens.
+    #
+    # We recommend using tools like `pwgen` to generate sufficiently random
+    # tokens of 32+ bytes.
+    tokens:
+        - "proxy,node:xxxxx"
+        - "auth:yyyy"
+        - "discovery,app,db:zzzzz"
+```
+
+### Ephemeral tokens
+
+Ephemeral tokens are secret tokens created dynamically via the CLI or Teleport API.
+They are time-bound and are typically created just before joining an instance to
+the Teleport cluster.
+
+They can be created via the CLI (a strong random value is chosen when none is specified; the default TTL is 30 minutes):
+```code
+$ tctl tokens add --type discovery,app --ttl 15m
+```
+
+Or as Teleport resources:
+
+(!docs/pages/includes/provision-token/ephemeral-spec.mdx!)
+
+When a Machine & Workload Identity Bot uses an ephemeral join token, the token is deleted.
+
+
+New Bot deployments should consider upgrading to the
+[`bound_keypair` join method](#bound-keypair-bound_keypair).
+
+
+
+- How to [Join Services with a Secure Token](../../enroll-resources/agents/join-token.mdx).
+- [Deploying Machine & Workload Identity on Linux](../../machine-workload-identity/deployment/linux.mdx)
+
+
+### Bound Keypair: `bound_keypair`
+
+Bound Keypair tokens are an alternative to
+[secret-based join methods](#secret-based-join-methods) for Machine and Workload
+ID bots that improve security and flexibility. They are best used on platforms
+with persistent storage, but can be configured for use in nearly any
+environment.
+
+This join method is recommended for on-prem environments
+[without TPMs](#trusted-platform-module-tpm) or cloud platforms
+without a specialized [delegated join method](#delegated-join-methods).
+
+At this time, `bound_keypair` can only be used to join Machine and Workload ID
+bots, and cannot be used to join other Teleport agent types.
+
+(!docs/pages/includes/provision-token/bound-keypair-spec.mdx!)
+
+
+- [Deploying Machine & Workload Identity with Bound Keypair joining](../machine-workload-identity/bound-keypair/getting-started.mdx)
+- [Bound Keypair Reference](../machine-workload-identity/bound-keypair/bound-keypair.mdx)
+
+
+### AWS IAM role: `iam`
+
+The IAM join method is available to any Teleport process running anywhere with access to IAM credentials,
+such as an EC2 instance with an attached IAM role. No specific permissions or IAM policy is required: an
+IAM role with no attached policies is sufficient. No IAM credentials are required on the Teleport Auth Service.
+
+This is the recommended method to join workloads running on AWS.
+
+(!docs/pages/includes/provision-token/iam-spec.mdx!)
+
+
+- [Joining Services via AWS IAM Role](../../enroll-resources/agents/aws-iam.mdx).
+- [Deploying Machine & Workload Identity on AWS](../../machine-workload-identity/deployment/aws.mdx)
+
+
+### AWS EC2 identity document: `ec2`
+
+The EC2 join method is available to any Teleport process running on an EC2
+instance. Only one Teleport process per EC2 instance may use the EC2 join
+method.
+
+IAM credentials with `ec2:DescribeInstances` permissions are required on your
+Teleport Auth Service. No IAM credentials are required on the Teleport processes
+joining the cluster.
+
+
+
+The EC2 join method is not available in Teleport Enterprise Cloud and Teleport
+Team. Teleport Enterprise Cloud and Team customers can use the [IAM join
+method](#aws-iam-role-iam) or [ephemeral secret tokens](#ephemeral-tokens).
+
+
+
+(!docs/pages/includes/provision-token/ec2-spec.mdx!)
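+As a sketch, a minimal IAM policy granting the Auth Service the
+`ec2:DescribeInstances` permission might look like the following. This is an
+illustrative example rather than an official policy; adjust it to your
+organization's IAM conventions:
+
+```json
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Action": "ec2:DescribeInstances",
+      "Resource": "*"
+    }
+  ]
+}
+```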
+
+
+[Joining Services via AWS EC2 Identity Document](../../enroll-resources/agents/aws-ec2.mdx).
+
+
+### Azure managed identity: `azure`
+
+The Azure join method is available to any Teleport process running in an
+Azure Virtual Machine.
+
+(!docs/pages/includes/provision-token/azure-spec.mdx!)
+
+
+- [Joining Services via Azure Managed Identity](../../enroll-resources/agents/azure.mdx).
+- [Deploying Machine & Workload Identity on Azure](../../machine-workload-identity/deployment/azure.mdx)
+
+
+### Azure DevOps: `azure_devops`
+
+The Azure DevOps join method is available to any Teleport process running in an Azure DevOps
+pipeline. This join method is typically used with
+[Machine & Workload Identity](../../machine-workload-identity/introduction.mdx)
+to access Teleport-protected resources in Azure DevOps pipelines without the
+use of long-lived secrets.
+
+(!docs/pages/includes/provision-token/azure-devops-spec.mdx!)
+
+
+- [Deploying Machine & Workload Identity on Azure DevOps](../../machine-workload-identity/deployment/azure-devops.mdx)
+
+
+### Env0: `env0`
+
+The Env0 join method is available to workloads running in the Env0 cloud
+environment. The "Enable OIDC during deployments" option must be enabled for
+tokens to be issued into runner environments.
+
+This join method is typically used by the Teleport Terraform provider to
+authenticate to Teleport from the Env0 cloud environment. It cannot be used to
+join Terraform runs on other platforms; dedicated join methods should be used
+instead.
+
+The `env0` join method requires Teleport v18.4.0 or later.
+
+(!docs/pages/includes/provision-token/env0-spec.mdx!)
+
+
+- [Run the Teleport Terraform Provider on Env0](../../zero-trust-access/infrastructure-as-code/terraform-provider/env0.mdx)
+
+
+### GCP service account: `gcp`
+
+The GCP join method is available to any Teleport process running on a GCP VM.
+The VM must have a +[service account](https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances) +assigned to it (the default service account is fine). No IAM roles are required +on the Teleport process joining the cluster. + +(!docs/pages/includes/provision-token/gcp-spec.mdx!) + + +- How to [Join Services with GCP](../../enroll-resources/agents/gcp.mdx). +- [Deploying Machine & Workload Identity on GCP](../../machine-workload-identity/deployment/gcp.mdx) + + +### GitHub Actions: `github` + +Teleport supports secure joining on both GitHub-hosted and self-hosted GitHub +Actions runners as well as GitHub Enterprise Server. This join method is +typically used with [Machine & Workload Identity](../../machine-workload-identity/introduction.mdx) to access +Teleport-protected resources in GitHub Actions pipelines. + +(!docs/pages/includes/provision-token/github-spec.mdx!) + + +- [Deploying Machine & Workload Identity on GitHub Actions](../../machine-workload-identity/deployment/github-actions.mdx) + + +#### GitHub Actions helpers + +We offer a series of off-the-shelf GitHub Actions to use in your workflows when +utilizing Teleport Machine & Workload Identity and GitHub Actions. + +More information about these individual actions can be found in their GitHub +repositories: + +- [https://github.com/teleport-actions/setup](https://github.com/teleport-actions/setup) +- [https://github.com/teleport-actions/auth](https://github.com/teleport-actions/auth) +- [https://github.com/teleport-actions/auth-k8s](https://github.com/teleport-actions/auth-k8s) +- [https://github.com/teleport-actions/auth-application](https://github.com/teleport-actions/auth-application) + +If you experience problems when using these actions, please raise an issue in +their source repository: +[https://github.com/teleport-actions/root](https://github.com/teleport-actions/root). 
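+As an illustrative sketch, a workflow using these actions might look like the
+following. The exact action inputs are documented in the repositories above;
+`example.teleport.sh:443` and `example-github-token` are placeholders for your
+Proxy Service address and join token name:
+
+```yaml
+name: demo
+on:
+  push:
+    branches: [main]
+jobs:
+  list-nodes:
+    runs-on: ubuntu-latest
+    permissions:
+      # Required for the runner to request a GitHub OIDC identity token.
+      id-token: write
+      contents: read
+    steps:
+      - name: Install Teleport CLI tools
+        uses: teleport-actions/setup@v1
+        with:
+          version: 16.0.0
+      - name: Authenticate to the Teleport cluster
+        uses: teleport-actions/auth@v2
+        with:
+          proxy: example.teleport.sh:443
+          token: example-github-token
+      - name: List SSH nodes
+        run: tsh ls
+```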
+
+### CircleCI: `circleci`
+
+This join method is typically used with [Machine & Workload Identity](../../machine-workload-identity/introduction.mdx)
+to access Teleport-protected resources in CircleCI pipelines.
+
+(!docs/pages/includes/provision-token/circleci-spec.mdx!)
+
+
+- [Deploying Machine & Workload Identity on CircleCI](../../machine-workload-identity/deployment/circleci.mdx)
+
+
+### GitLab: `gitlab`
+
+Teleport supports secure joining on both cloud-hosted and self-hosted GitLab
+instances. **The minimum supported GitLab version is 15.7**.
+
+This join method is typically used with [Machine & Workload Identity](../../machine-workload-identity/introduction.mdx)
+to access Teleport-protected resources in GitLab CI pipelines.
+
+(!docs/pages/includes/provision-token/gitlab-spec.mdx!)
+
+
+- [Deploying Machine & Workload Identity on GitLab CI](../../machine-workload-identity/deployment/gitlab.mdx)
+
+
+### Kubernetes: `kubernetes`
+
+The Kubernetes join method exists in three variants:
+- [in-cluster](#kubernetes-in-cluster)
+- [JWKS](#kubernetes-jwks)
+- [OIDC](#kubernetes-oidc)
+
+#### Kubernetes In-cluster
+
+Kubernetes in-cluster joining is available for any Teleport process running
+in the same Kubernetes cluster as the Auth Service. It uses Kubernetes
+ServiceAccount tokens to validate the pod's identity. The method relies on the
+[Kubernetes TokenReview API](https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/token-review-v1/),
+which is typically only reachable from within the Kubernetes cluster. Because of
+this limitation, this join method is only available for self-hosted Teleport
+clusters in Kubernetes.
+
+This method should be preferred when available as tokens are revoked as soon as
+the pod enters the `Terminated` state.
+
+(!docs/pages/includes/provision-token/kubernetes-in-cluster-spec.mdx!)
+
+
+- [Joining Services via Kubernetes ServiceAccount Token](../../enroll-resources/agents/kubernetes.mdx)
+
+
+#### Kubernetes JWKS
+
+Kubernetes JWKS joining is available for any Teleport process running in
+Kubernetes. The Auth Service does not have to run in Kubernetes, so this method
+can be used with any Teleport cluster, including Teleport Cloud.
+This join method works by exporting the public Kubernetes signing keys and using
+them to validate Kubernetes ServiceAccount token signatures. The signature validation can be
+performed by an Auth Service without access to the Kubernetes cluster.
+
+(!docs/pages/includes/provision-token/kubernetes-jwks-spec.mdx!)
+
+
+After rotating the Kubernetes CA, you must update the Kubernetes JWKS tokens
+to contain the new Kubernetes signing keys (update the
+`spec.kubernetes.static_jwks.jwks` field).
+
+
+
+- [Deploying Machine & Workload Identity on Kubernetes](../../machine-workload-identity/deployment/kubernetes.mdx)
+
+
+#### Kubernetes OIDC
+
+Kubernetes OIDC joining is available for any Teleport process running in
+Kubernetes where the cluster issues service account tokens from a publicly
+accessible, OIDC-compliant issuer.
+
+This feature makes use of the Kubernetes
+[Service Account Issuer Discovery][discovery] feature, added in Kubernetes 1.21,
+to verify Kubernetes workloads. Note that some providers, like Amazon EKS, may
+require additional configuration to enable OIDC joining.
+
+Unlike the `static_jwks` variant described above, this method fetches JWKS keys
+from the upstream provider dynamically and does not need to be reconfigured if
+keys are rotated, which providers like EKS do frequently. However, not all
+Kubernetes implementations have accessible OIDC endpoints.
+
+
+To use Kubernetes OIDC joining, your Teleport cluster must be running at least
+version 18.1.5.
+
+
+(!docs/pages/includes/provision-token/kubernetes-oidc-spec.mdx!)
+
+
+- [Deploying Machine & Workload Identity on Kubernetes with OIDC](../../machine-workload-identity/deployment/kubernetes-oidc.mdx)
+
+
+### Trusted Platform Module: `tpm`
+
+(!docs/pages/includes/tpm-joining-background.mdx!)
+
+(!docs/pages/includes/provision-token/tpm-spec.mdx!)
+
+
+- [Deploying Machine & Workload Identity on Linux: TPM](../../machine-workload-identity/deployment/linux-tpm.mdx)
+
+
+### Terraform Cloud: `terraform_cloud`
+
+This join method is used to authenticate using Terraform Cloud Workload
+Identity. It is typically used by the Teleport Terraform provider on either
+HCP Terraform or self-hosted Terraform Enterprise. It cannot be used to join
+Terraform runs on other platforms; dedicated join methods should be used
+instead.
+
+
+Support for self-hosted Terraform Enterprise requires Teleport Enterprise.
+
+
+(!docs/pages/includes/provision-token/terraform-spec.mdx!)
+
+
+- [Run the Teleport Terraform Provider on Terraform Cloud](../../zero-trust-access/infrastructure-as-code/terraform-provider/terraform-cloud.mdx)
+
+
+### Spacelift: `spacelift`
+
+This join method is used to authenticate using Spacelift. It is typically used
+by the Teleport Terraform provider on Spacelift (including self-hosted
+deployments).
+
+(!docs/pages/includes/provision-token/spacelift-spec.mdx!)
+
+
+- [Run the Teleport Terraform Provider on Spacelift](../../zero-trust-access/infrastructure-as-code/terraform-provider/spacelift.mdx)
+
+
+### Bitbucket Pipelines: `bitbucket`
+
+This join method is used to authenticate using Bitbucket's support for OpenID
+Connect, and is typically used to allow either Machine & Workload Identity's
+`tbot` or the Teleport Terraform provider to authenticate to Teleport without
+the use of shared secrets.
+
+(!docs/pages/includes/provision-token/bitbucket-spec.mdx!)
+ + +- [Deploying Machine & Workload Identity on Bitbucket Pipelines](../../machine-workload-identity/deployment/bitbucket.mdx) + + +### Oracle Cloud: `oracle` + +The Oracle join method is available to any Teleport process running on an Oracle +Cloud Compute instance. + +(!docs/pages/includes/provision-token/oracle-spec.mdx!) + + +- How to [Join Services with Oracle Cloud](../../enroll-resources/agents/oracle.mdx). + + +[discovery]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery diff --git a/docs/pages/reference/deployment/managed-updates-v2.mdx b/docs/pages/reference/deployment/managed-updates-v2.mdx new file mode 100644 index 0000000000000..2f222deb42edd --- /dev/null +++ b/docs/pages/reference/deployment/managed-updates-v2.mdx @@ -0,0 +1,281 @@ +--- +title: Managed Updates Resource Reference +sidebar_label: Managed Updates Resources +description: This page describes the details of the Managed Updates v2 resources. +tags: + - reference + - platform-wide +--- + +This document provides detailed reference information about the Managed Updates v2 resources. +See also: +- the [Managed Updates Architecture](../architecture/agent-update-management.mdx) describing the agent update + architecture and security invariants. +- the [Managed Updates Guide](../../upgrading/agent-managed-updates/agent-managed-updates.mdx) for more details on how to + set up Managed updates. + +## Managed Updates v2 resources + +### User facing + +Managed Updates are configured via two user-facing resources: + +#### `autoupdate_config` + +This describes how and when updates are applied. Configuration changes take effect during the next rollout. + +(!docs/pages/includes/reference/resources/autoupdate_config.mdx!) + +#### `autoupdate_version` + +This specifies which version agents should update to. +Changing the start or target version creates a new rollout. 
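+For illustration, a minimal `autoupdate_version` resource might look like the
+following sketch. The version numbers are placeholders; consult the resource
+reference included below for the authoritative field list.
+
+```yaml
+kind: autoupdate_version
+metadata:
+  name: autoupdate-version
+spec:
+  agents:
+    # Version for groups that have not started updating yet.
+    start_version: 17.2.0
+    # Version agents update to during their maintenance window.
+    target_version: 17.2.1
+    schedule: regular
+    mode: enabled
+```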
+
+
+This resource is not editable for cloud-managed Teleport Enterprise to ensure that all of
+your clients receive security patches and remain compatible with your cluster.
+
+
+(!docs/pages/includes/reference/resources/autoupdate_version.mdx!)
+
+### System resources
+
+These resources are generated automatically by Teleport. Users should not edit them, but reading them can be helpful to
+track the autoupdate state and understand Teleport's update decisions.
+
+These resources only track Teleport Agent installations.
+However, if tbot is deployed alongside a Teleport Agent, a tbot update failure will cause the agent update to revert,
+and the agent update will be marked as failed.
+
+#### `autoupdate_agent_rollout`
+
+`autoupdate_agent_rollout` describes the rollout of the version across agent groups.
+A single rollout is active at a time. Each Auth Service updates the rollout every minute.
+
+(!docs/pages/includes/reference/resources/autoupdate_agent_rollout.mdx!)
+
+#### `autoupdate_agent_report`
+
+Reports how many agents are connected to each Auth Service instance, and which version they are running.
+This is used to track update progress and automatically advance the rollout.
+Each Teleport Auth Service instance generates one report every minute.
+
+Agents are counted in the report only if:
+- they are connected for at least a minute
+- they are automatically updating with `teleport-update`
+  (`teleport-kube-agent` updater support will be included in a future version)
+- managed updates are enabled (the updater is not disabled or pinned to a specific version)
+
+Due to the reporting period, an agent can take up to 2 minutes to appear in the
+reports, and 3 minutes to appear in the rollout:
+- 1 minute for the agent to be counted as healthy
+- 1 minute for the next report to be generated
+- 1 minute for the next `autoupdate_agent_rollout` update
+
+(!docs/pages/includes/reference/resources/autoupdate_agent_report.mdx!)
+
+Reports can be viewed individually using `tctl get autoupdate_agent_reports`.
+To aggregate every report into a single one, run `tctl autoupdate agents report`.
+
+### Agent Update Overview
+
+Here's a diagram describing the interactions between the different Managed Updates v2 components when updating agents:
+
+![Diagram describing the relations between the resources](../../../img/architecture/managed-updates-v2.svg)
+
+## Update Strategies
+
+The `autoupdate_config` resource supports two different strategies for managing how updates propagate across your agent groups:
+
+### `halt-on-error` Strategy
+
+The `halt-on-error` strategy provides predictable, sequential updates across environments. It's ideal for traditional development pipelines where you want to ensure that development environments are successfully updated before proceeding to staging and production.
+
+**Key characteristics:**
+
+- Updates proceed sequentially through groups in the order they are defined
+- If an earlier group fails to update, later groups will not be updated
+- Provides a controlled, predictable update path
+- Groups with a `wait_hours` parameter will wait the specified duration after the previous group's successful update
+
+**Example configuration:**
+
+```yaml
+kind: autoupdate_config
+metadata:
+  name: autoupdate-config
+spec:
+  agents:
+    mode: enabled
+    strategy: halt-on-error
+    schedules:
+      regular:
+        - name: dev
+          days: ["Mon", "Tue", "Wed", "Thu"]
+          start_hour: 16
+        - name: stage
+          days: ["Mon", "Tue", "Wed", "Thu"]
+          start_hour: 18
+        - name: prod
+          days: ["Mon", "Tue", "Wed", "Thu"]
+          start_hour: 20
+          wait_hours: 24
+```
+
+In this example, the `dev` group updates first, followed by `stage`, and finally `prod`. The `prod` group will only update if both `dev` and `stage` have updated successfully, and will wait 24 hours after `stage` has completed.
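+To follow a rollout as it moves through your groups, you can inspect the
+system resources described above. Both commands only read state:
+
+```code
+$ tctl get autoupdate_agent_rollout
+$ tctl autoupdate agents report
+```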
+ +### `time-based` Strategy + +The `time-based` strategy is designed for environments where update groups are independent of each other, such as geographical regions or different teams. It allows updates to occur whenever the specified maintenance window is active for a group, regardless of the status of other groups. + +**Key characteristics:** + +- Groups update independently during their configured maintenance windows +- No dependency between groups - if prod's window occurs before dev's, prod will update first +- Maintenance windows are strictly enforced - updates only occur during the specified window duration +- Does not support the `wait_hours` parameter + +**Example configuration:** + +```yaml +kind: autoupdate_config +metadata: + name: autoupdate-config +spec: + agents: + mode: enabled + strategy: time-based + maintenance_window_duration: 2h + schedules: + regular: + - name: us-east + days: ["Mon", "Tue", "Wed", "Thu"] + start_hour: 4 + - name: us-west + days: ["Mon", "Tue", "Wed", "Thu"] + start_hour: 6 + - name: europe + days: ["Mon", "Tue", "Wed", "Thu"] + start_hour: 12 +``` + +In this example, each region will update during its own 2-hour maintenance window, +regardless of the status of other regions. If a new version becomes available +when Europe's window is active but before US East's window, Europe will update first. + +## Version Behavior During Rollout + +Depending on which strategy is selected, the system behaves differently regarding which versions are installed at +different points in the rollout. 
Consider a scenario where `start_version` is A and `target_version` is B:
+
+### `halt-on-error` Strategy Version Behavior
+
+This table summarizes which version an agent will install based on its group's maintenance window:
+
+| Timing                    | New Agent Installations | Existing Agent Updates |
+|---------------------------|-------------------------|------------------------|
+| Before maintenance window | Install version A       | No updates occur       |
+| During maintenance window | Install version B       | Update to version B    |
+| After maintenance window  | Install version B       | Update to version B    |
+
+In the halt-on-error strategy, if the previous group is not done updating, the next one will not start.
+For example, if you have two groups:
+- `dev` updating every day at 13:00
+- `prod` updating on weekdays at 15:00
+
+A group is considered updated once at least 90% of its agents are running the target version.
+The group's initial agent count is measured when the group starts updating.
+
+
+If agents are heavily scaled down during the update window, the rollout will get stuck,
+as the group's agent count will never reach 90% of its initial value.
+
+In that case, Teleport cannot differentiate between agents that were legitimately
+scaled down and agents that might be unable to connect because of the new version.
+Agents will be considered missing and the rollout will require manual
+intervention to resume (`tctl autoupdate agents mark-done $GROUP_NAME`).
+
+
+If `dev` finishes updating Friday at 16:30, `prod` will not start updating immediately, as update windows are
+one hour long and `prod`'s update window is over. `prod`'s update will start on Monday at 15:00.
+
+#### Canaries
+
+With the `halt-on-error` strategy, the `canary_count` field can be set on each group to specify
+a number of randomly selected agents (at most five) to update and verify before
+proceeding to the rest of the agents in the group.
This can be used to reduce the impact
+of a failed update that might not be caught by earlier groups due to environment differences.
+
+**Key characteristics:**
+
+- Up to five Linux agent hosts are selected automatically and updated first.
+- If any canary fails to update, the group is marked as failed and not updated.
+- Subsequent groups will not be updated until the group successfully updates.
+- Kubernetes agents are not currently selected as canaries.
+- tbot-only installations are not selected as canaries, but agents may be deployed
+  alongside tbot to detect tbot update failures.
+
+**Example configuration:**
+
+```yaml
+kind: autoupdate_config
+metadata:
+  name: autoupdate-config
+spec:
+  agents:
+    mode: enabled
+    strategy: halt-on-error
+    schedules:
+      regular:
+        - name: stage
+          days: ["Mon", "Tue", "Wed", "Thu"]
+          start_hour: 18
+          canary_count: 5
+        - name: prod
+          days: ["Mon", "Tue", "Wed", "Thu"]
+          start_hour: 20
+          wait_hours: 24
+          canary_count: 5
+```
+
+
+### `time-based` Strategy Version Behavior
+
+This table summarizes which version an agent will install based on its group's maintenance window:
+
+| Timing                    | New Agent Installations | Existing Agent Updates             |
+|---------------------------|-------------------------|------------------------------------|
+| Before maintenance window | Install version A       | No updates occur                   |
+| During maintenance window | Install version B       | Update to version B                |
+| After maintenance window  | Install version B       | No updates occur until next window |
+
+The key difference is what happens after the maintenance window closes:
+
+- With `halt-on-error`, agents that missed the window still update afterward, so
+  that they run the same version as the other agents in their group.
+- With `time-based`, updates are strictly limited to the maintenance window duration.
+
+In the time-based strategy, a group will start updating as soon as it enters its maintenance window,
+even if the previous group is not done updating yet.
+ +## Mode Precedence + +The `mode` field can be set in both `autoupdate_config` and `autoupdate_version` resources. The most restrictive mode takes precedence: + +1. `disabled` (most restrictive) +2. `suspended` +3. `enabled` (least restrictive) + +This allows different teams to manage the configuration and version resources independently while maintaining safety controls. + +## Cloud-specific Constraints + +For cloud-hosted Teleport Enterprise: + +- The `autoupdate_version` resource is managed automatically and cannot be edited +- The `days` field in schedules is often not configurable +- The `start_hour` defaults to your selected maintenance window +- A maximum of 5 update groups is allowed by default +- A full update schedule cannot be longer than 4 days + +These constraints ensure that all agents are updated weekly and remain compatible with the Teleport cluster's version. diff --git a/docs/pages/reference/deployment/monitoring/audit.mdx b/docs/pages/reference/deployment/monitoring/audit.mdx new file mode 100644 index 0000000000000..2ec515c47f4ca --- /dev/null +++ b/docs/pages/reference/deployment/monitoring/audit.mdx @@ -0,0 +1,291 @@ +--- +title: Audit Events and Records +description: Reference of Teleport Audit Events and Session Records +tags: + - conceptual + - platform-wide + - audit +--- + +Teleport logs cluster activity by emitting various events into its audit log. +There are two components of the audit log: + + + + +- **Cluster Events:** Teleport logs events like successful user logins along + with metadata like remote IP address, time, and the session ID. +- **Recorded Sessions:** Every SSH, desktop, or Kubernetes shell session is recorded and + can be replayed later. By default, the recording is done by Teleport Nodes, + but can be configured to be done by the proxy. + + + + +- **Cluster Events:** Teleport logs events like successful user logins along + with metadata like remote IP address, time, and the session ID. 
+- **Recorded Sessions:** Every SSH, desktop, or Kubernetes shell session is recorded and + can be replayed later. Teleport Cloud manages the storage of session + recording data. + + + + + + +You can use +[Enhanced Session Recording with BPF](../../../enroll-resources/server-access/guides/bpf-session-recording.mdx) +to get even more comprehensive audit logs with advanced security. + + + +## Events + + + + +Teleport supports multiple storage backends for storing audit events. The `dir` +backend uses the local filesystem of an Auth Service host. When this backend is +used, events are written to the filesystem in JSON format. The `dir` backend rotates +the event file approximately once every 24 hours, but never deletes captured events. + +For High Availability configurations, users can refer to our +[Athena](../backends.mdx), [DynamoDB](../backends.mdx) or +[Firestore](../backends.mdx) chapters for information on how to +configure the SSH events and recorded sessions to be stored on network storage. +When these backends are in use, audit events will eventually expire and be +removed from the log. The default retention period is 1 year, but this can be +overridden using the `retention_period` configuration parameter. + +It is even possible to store audit logs in multiple places at the same time. For +more information on how to configure the audit log, refer to the `storage` +section of the example configuration file in the +[Teleport Configuration Reference](../config.mdx). + +Let's examine the Teleport audit log using the `dir` backend. Teleport Auth +Service instances write their logs to a subdirectory of Teleport's configured +data directory that is named based on the service's UUID. 
+
+Each day is represented as a file:
+
+```code
+$ ls -l /var/lib/teleport/log/bbdfe5be-fb97-43af-bf3b-29ef2e302941
+
+# total 104
+# -rw-r----- 1 root root 31638 Jan 22 20:00 2022-01-23.00:00:00.log
+# -rw-r----- 1 root root 91256 Jan 31 21:00 2022-02-01.00:00:00.log
+# -rw-r----- 1 root root 15815 Feb  2 22:54 2022-02-03.00:00:00.log
+```
+
+
+
+
+Teleport Enterprise Cloud manages the storage of audit logs for you. You can
+access your audit logs via the Teleport Web UI by clicking:
+
+**Audit** > **Audit Log**
+
+
+
+
+Audit logs use JSON format. They are human readable but can also be
+programmatically parsed. Each line represents an event and has the following
+format:
+
+```javascript
+{
+   // Event type. See below for the list of all possible event types.
+   "event": "session.start",
+   // A unique ID for the event log. Useful for deduplication.
+   "uid": "59cf8d1b-7b36-4894-8e90-9d9713b6b9ef",
+   // Teleport user name
+   "user": "ekontsevoy",
+   // OS login
+   "login": "root",
+   // Server namespace. This field is reserved for future use.
+   "namespace": "default",
+   // Unique server ID
+   "server_id": "f84f7386-5e22-45ff-8f7d-b8079742e63f",
+   // Server Labels
+   "server_labels": {
+     "datacenter": "us-east-1",
+     "label-b": "x"
+   },
+   // Session ID. Can be used to replay the session.
+   "sid": "8d3895b6-e9dd-11e6-94de-40167e68e931",
+   // Address of the SSH node
+   "addr.local": "10.5.1.15:3022",
+   // Address of the connecting client (user)
+   "addr.remote": "73.223.221.14:42146",
+   // Terminal size
+   "size": "80:25",
+   // Timestamp
+   "time": "2017-02-03T06:54:05Z"
+}
+```
+
+## Event types
+
+Below are some possible types of audit events.
+
+
+
+This list is not comprehensive. We recommend exporting audit events to a
+platform that automatically parses event payloads so you can group and filter
+them by their `event` key and discover trends.
To set up audit event exporting,
+read [Exporting Teleport Audit Events](../../../zero-trust-access/export-audit-events/export-audit-events.mdx).
+
+
+
+| Event Type | Description |
+| - | - |
+| auth | Authentication attempt. Adds the following fields: `{"success": "false", "error": "access denied"}` |
+| session.start | Started an interactive shell session. |
+| session.end | An interactive shell session has ended. |
+| session.join | A new user has joined the existing interactive shell session. |
+| session.leave | A user has left the session. |
+| session.disk | A list of files opened during the session. *Requires Enhanced Session Recording*. |
+| session.network | A list of network connections made during the session. *Requires Enhanced Session Recording*. |
+| session.command | A list of commands run during the session. *Requires Enhanced Session Recording*. |
+| session.recording.access | A session recording has been accessed. |
+| exec | Remote command has been executed via SSH, like `tsh ssh root@node ls /`. The following fields will be logged: `{"command": "ls /", "exitCode": 0, "exitError": ""}` |
+| scp | Remote file copy has been executed. The following fields will be logged: `{"path": "/path/to/file.txt", "len": 32344, "action": "read" }` |
+| resize | Terminal has been resized. |
+| user.login | A user logged in to the Web UI or via `tsh`. The following fields will be logged: `{"user": "alice@example.com", "method": "local"}`. |
+| app.session.start | A user accessed an application. |
+| app.session.chunk | A record of activity during an app session. |
+| join_token.create | A new join token has been created. Adds the following fields: `{"roles": ["Node", "Db"], "join_method": "token"}` |
+| mcp.session.start | An MCP server session has started. |
+| mcp.session.end | An MCP server session has ended. |
+| mcp.session.notification | A notification has been sent from the MCP client. |
+| mcp.session.request | A request has been sent from the MCP client.
|
+| mcp.session.listen_sse_stream | A Streamable HTTP client has started listening for server notifications via an SSE stream. |
+| mcp.session.invalid_http_request | The client has sent an HTTP request that Teleport could not process. |
+
+## Recorded sessions
+
+In addition to logging start and end events, Teleport can also record the entire session.
+For SSH or Kubernetes sessions this captures the entire stream of bytes from the PTY.
+For desktop sessions the recording includes the contents of the screen.
+
+
+
+
+Teleport can store the recorded sessions in an [AWS S3 bucket](../backends.mdx)
+or in a local filesystem (including NFS).
+
+The recorded sessions are stored as raw bytes in the `sessions` directory under
+`log`. Each session is a protobuf-encoded stream of binary data.
+
+You can replay recorded sessions using the [`tsh play`](../../cli/tsh.mdx)
+command or the Web UI.
+
+For example, replay a session via CLI:
+
+```code
+$ tsh play 4c146ec8-eab6-11e6-b1b3-40167e68e931
+```
+
+Print the session events in JSON to stdout:
+
+```code
+$ tsh play 4c146ec8-eab6-11e6-b1b3-40167e68e931 --format=json
+```
+
+
+
+
+Teleport Enterprise Cloud automatically stores recorded sessions.
+
+You can replay recorded sessions using the [`tsh play`](../../cli/tsh.mdx)
+command or the Web UI.
+
+For example, replay a session via CLI:
+
+```code
+$ tsh play 4c146ec8-eab6-11e6-b1b3-40167e68e931
+```
+
+Print the session events in JSON to stdout:
+
+```code
+$ tsh play 4c146ec8-eab6-11e6-b1b3-40167e68e931 --format=json
+```
+
+
+
+
+### Modes
+
+
+Available only for SSH sessions and when Teleport is configured with
+`auth_service.session_recording: node`.
+
+
+Modes define how Teleport deals with recording failures, such as a full disk
+error. They are configured per-service at the role level, where the strictest
+value takes precedence.
The available modes are: + +|Mode|After a recording failure| +|----|-------------------------| +|Best effort (`best_effort`)|Disables recording without terminating the session.| +|Strict (`strict`)|Immediately terminates the session.| + +If the user role doesn’t specify a recording mode, `best_effort` will be used. Here +is an example of a role configured to use strict mode for SSH sessions: + +```yaml +kind: role +version: v5 +metadata: + name: ssh-strict +spec: + options: + record_session: + ssh: strict +``` + +### Enable Recording Proxy Mode + +Edit the Auth Service configuration file as follows: + +```yaml +# snippet from /etc/teleport.yaml +auth_service: + # Session Recording must be set to Proxy to work with OpenSSH + session_recording: "proxy" # can also be "off" or "node" (default) +``` + + + +The use of Proxy Recording Mode across your cluster is no longer necessary to support +[OpenSSH servers](../../../enroll-resources/server-access/openssh/openssh-agentless.mdx). +Proxy Recording Mode is automatically used for OpenSSH servers without the need for SSH +agent forwarding, which is the primary security concern for this mode. + +Recording Proxy Mode will continue to be supported for legacy OpenSSH deployments and other +use cases, but many existing and new features will not be supported in this mode due +to several technical limitations it introduces. + + + +#### Optional insecure step: Disable strict host checking + +When in Recording Proxy Mode, Teleport will check that the host certificate of any +Node a user connects to is signed by a Teleport CA. By default, this is a strict +check. If the Node presents just a key or a certificate signed by a different +CA, Teleport will reject this connection with the error message: + +```text +ssh: handshake failed: remote host presented a public key, expected a host +certificate +``` + +You can disable strict host checks as shown below.
However, this opens the +possibility for Person-in-the-Middle attacks and is not recommended. + +```yaml +# snippet from /etc/teleport.yaml +auth_service: + proxy_checks_host_keys: false +``` diff --git a/docs/pages/reference/deployment/monitoring/metrics.mdx b/docs/pages/reference/deployment/monitoring/metrics.mdx new file mode 100644 index 0000000000000..06da7b9a00d11 --- /dev/null +++ b/docs/pages/reference/deployment/monitoring/metrics.mdx @@ -0,0 +1,25 @@ +--- +title: Teleport Metrics +description: Comprehensive list of all metrics exposed by Teleport. +tags: + - reference + - platform-wide +--- + + + +Teleport Cloud does not expose monitoring endpoints for the Auth Service and Proxy Service. + + + +Teleport metrics are intended for performance monitoring. If you'd like to +monitor Teleport usage, consider utilizing our Event Handler plugin to push Audit Events into your +preferred logging aggregation system (Elastic, Splunk, Sumo Logic, etc). + +- [Audit Events and Records](audit.mdx) +- [Forwarding events with Fluentd](../../../zero-trust-access/export-audit-events/fluentd.mdx) +- [Monitor Teleport Audit Events with the Elastic Stack](../../../zero-trust-access/export-audit-events/elastic-stack.mdx) + +The following metrics are available: + +(!docs/pages/includes/metrics.mdx!) diff --git a/docs/pages/reference/deployment/monitoring/monitoring.mdx b/docs/pages/reference/deployment/monitoring/monitoring.mdx new file mode 100644 index 0000000000000..b62401f5246e1 --- /dev/null +++ b/docs/pages/reference/deployment/monitoring/monitoring.mdx @@ -0,0 +1,9 @@ +--- +title: Teleport Monitoring References +sidebar_label: Monitoring +description: Provides comprehensive guides to monitoring data available from Teleport. 
+tags: + - platform-wide +--- + + diff --git a/docs/pages/reference/monitoring/tracing-service-configuration.mdx b/docs/pages/reference/deployment/monitoring/tracing-service-configuration.mdx similarity index 93% rename from docs/pages/reference/monitoring/tracing-service-configuration.mdx rename to docs/pages/reference/deployment/monitoring/tracing-service-configuration.mdx index 9515b6882edca..742256b06732f 100644 --- a/docs/pages/reference/monitoring/tracing-service-configuration.mdx +++ b/docs/pages/reference/deployment/monitoring/tracing-service-configuration.mdx @@ -1,6 +1,9 @@ --- title: Distributed Tracing Configuration Reference description: Configuration reference for Distributed Tracing. +tags: + - reference + - platform-wide --- This guide lays out the configuration fields that are related to distributed diff --git a/docs/pages/reference/networking.mdx b/docs/pages/reference/deployment/networking.mdx similarity index 97% rename from docs/pages/reference/networking.mdx rename to docs/pages/reference/deployment/networking.mdx index 39a84e7c55a89..ba9cfc9257782 100644 --- a/docs/pages/reference/networking.mdx +++ b/docs/pages/reference/deployment/networking.mdx @@ -1,6 +1,9 @@ --- title: Networking description: This reference explains the networking requirements of a Teleport cluster, including its public address, ports, and support for HTTP CONNECT proxies. +tags: + - conceptual + - platform-wide --- A Teleport cluster is a distributed system that may comprise a number of @@ -145,7 +148,7 @@ TLS routing is enabled by default. In this mode, all connections to a Teleport service (e.g., the Teleport SSH Service or Kubernetes) are routed through the Proxy Service's public web address. -Read more in our [TLS Routing](architecture/tls-routing.mdx) guide. +Read more in our [TLS Routing](../architecture/tls-routing.mdx) guide. | Port | Downstream Service | Description | | - | - | - | @@ -220,7 +223,7 @@ than through a port allocated to that service. 
In this case, you can see that TLS routing is enabled, and that the Proxy Service's public web address (`ssh.public_addr`) is `mytenant.teleport.sh:443`. -Read more in our [TLS Routing](architecture/tls-routing.mdx) guide. +Read more in our [TLS Routing](../architecture/tls-routing.mdx) guide.
@@ -239,7 +242,7 @@ agents to the public internet. ### Direct connections to agents If you run a self-hosted Teleport cluster, you can join an agent [directly to -the Teleport Auth Service](../enroll-resources/agents/join-token.mdx). +the Teleport Auth Service](../../enroll-resources/agents/join-token.mdx). In this setup, certain Teleport services open their own listeners rather than accepting connections via reverse tunnel. The Proxy Service connects to these agent services by dialing them directly. @@ -260,4 +263,4 @@ expose ports directly. Direct auth joining is only supported for ssh and kube agents. Application, database, and other discovery-based features are not supported when joining auth directly. - \ No newline at end of file + diff --git a/docs/pages/admin-guides/management/operations/scaling.mdx b/docs/pages/reference/deployment/scaling.mdx similarity index 97% rename from docs/pages/admin-guides/management/operations/scaling.mdx rename to docs/pages/reference/deployment/scaling.mdx index 43efbb7b00127..23d48f8a0a45e 100644 --- a/docs/pages/admin-guides/management/operations/scaling.mdx +++ b/docs/pages/reference/deployment/scaling.mdx @@ -1,6 +1,9 @@ --- title: Scaling description: How to configure Teleport for large-scale deployments +tags: + - conceptual + - platform-wide --- This section explains the recommended configuration settings for large-scale @@ -8,13 +11,9 @@ self-hosted deployments of Teleport. (!docs/pages/includes/cloud/call-to-action.mdx!) -## Prerequisites - -- Teleport v(=teleport.version=) Open Source or Enterprise. - ## Hardware recommendations -Set up Teleport with a [High Availability configuration](../../deploy-a-cluster/high-availability.mdx). +Set up Teleport with a [High Availability configuration](../../zero-trust-access/deploy-a-cluster/deployments/high-availability.mdx). 
| Scenario | Max Recommended Count | Proxy Service | Auth Service | AWS Instance Types | |-----------------------------------------------------------------------|-----------------------|-----------------------|-----------------------|--------------------| diff --git a/docs/pages/reference/signals.mdx b/docs/pages/reference/deployment/signals.mdx similarity index 94% rename from docs/pages/reference/signals.mdx rename to docs/pages/reference/deployment/signals.mdx index 7db71384330fb..e0392545ca690 100644 --- a/docs/pages/reference/signals.mdx +++ b/docs/pages/reference/deployment/signals.mdx @@ -1,7 +1,10 @@ --- title: Teleport Signals -h1: Teleport Signals Reference +sidebar_label: Signals description: "Signals you can send to a running teleport process." +tags: + - reference + - platform-wide --- You can send the following signals to a `teleport` process to trigger different diff --git a/docs/pages/reference/signature-algorithms.mdx b/docs/pages/reference/deployment/signature-algorithms.mdx similarity index 98% rename from docs/pages/reference/signature-algorithms.mdx rename to docs/pages/reference/deployment/signature-algorithms.mdx index cb12848a529b5..e9f94bdca21f4 100644 --- a/docs/pages/reference/signature-algorithms.mdx +++ b/docs/pages/reference/deployment/signature-algorithms.mdx @@ -1,7 +1,9 @@ --- title: Signature Algorithms -h1: Signature Algorithms Reference description: "Signature algorithms used in Teleport." +tags: + - conceptual + - zero-trust --- The Teleport Auth Service issues SSH and TLS certificates to users and hosts @@ -158,7 +160,7 @@ updated. In order to use new signature algorithms for your existing Certificate Authorities, you will need to complete a CA rotation for each authority. This may require manual steps to update the trust relationships in your Cluster. -The procedure is documented in the [CA rotation guide](../admin-guides/management/operations/ca-rotation.mdx). 
+The procedure is documented in the [CA rotation guide](../../zero-trust-access/management/security/ca-rotation.mdx). This process is optional, your cluster will continue to function with the existing Certificate Authority keys if you don't complete a CA rotation. @@ -241,7 +243,7 @@ For these reasons we recommend continuing to use the `legacy` algorithm suite. ### My YubiHSM2 authentication key does not have capabilities for ECDSA keys, what do I do? -Our [YubiHSM2 guide](../admin-guides/deploy-a-cluster/hsm.mdx) recommends +Our [YubiHSM2 guide](../../zero-trust-access/deploy-a-cluster/hsm.mdx) recommends creating an [authentication key](https://docs.yubico.com/hardware/yubihsm-2/hsm-2-user-guide/hsm2-core-concepts.html#authentication-key) to be used by the Teleport Auth Service to authenticate with the YubiHSM2. diff --git a/docs/pages/reference/helm-reference/helm-reference.mdx b/docs/pages/reference/helm-reference/helm-reference.mdx index 3636a6e0f14c3..86bccb0875158 100644 --- a/docs/pages/reference/helm-reference/helm-reference.mdx +++ b/docs/pages/reference/helm-reference/helm-reference.mdx @@ -1,8 +1,10 @@ --- -title: Helm Charts -h1: Helm Chart References +title: Helm Chart References +sidebar_label: Helm Charts description: Comprehensive lists of configuration values in Teleport's Helm charts -layout: tocless-doc +template: "no-toc" +tags: + - platform-wide --- - [teleport-cluster](teleport-cluster.mdx): Deploy the @@ -13,9 +15,10 @@ layout: tocless-doc Kubernetes. - [teleport-operator](teleport-operator.mdx): Deploy the Teleport Kubernetes Operator. +- [teleport-relay](teleport-relay.mdx): Deploy the Teleport Relay Service. - [teleport-access-graph](teleport-access-graph.mdx): Deploy the Teleport Identity Security Access Graph service. -- [tbot](tbot.mdx): Deploy an instance of TBot, the [MachineID](../../enroll-resources/machine-id/introduction.mdx) agent. 
+- [tbot](tbot.mdx): Deploy an instance of TBot, the [Machine & Workload Identity](../../machine-workload-identity/introduction.mdx) agent. - [teleport-plugin-event-handler](teleport-plugin-event-handler.mdx): Deploy the Teleport Event Handler plugin which sends events and session logs to Fluentd. @@ -42,4 +45,4 @@ layout: tocless-doc when Access Requests are made. - [teleport-plugin-datadog](teleport-plugin-datadog.mdx): Deploy the Teleport Datadog Incident Management Plugin, which allows Access Requests - to be managed as Datadog incidents. \ No newline at end of file + to be managed as Datadog incidents. diff --git a/docs/pages/reference/helm-reference/tbot.mdx b/docs/pages/reference/helm-reference/tbot.mdx index e68ebfc5ae72a..7ef4ce0c0340f 100644 --- a/docs/pages/reference/helm-reference/tbot.mdx +++ b/docs/pages/reference/helm-reference/tbot.mdx @@ -1,17 +1,22 @@ --- title: tbot Chart Reference +sidebar_label: tbot description: Values that can be set using the tbot Helm chart +tags: + - reference + - mwi + - infrastructure-identity --- -This chart deploys an instance of the [MachineID](../../enroll-resources/machine-id/introduction.mdx) agent, +This chart deploys an instance of the [Machine & Workload Identity](../../machine-workload-identity/introduction.mdx) agent, TBot, into your Kubernetes cluster. 
To use it, you will need to know: - The address of your Teleport Proxy Service or Auth Service - The name of your Teleport cluster -- The name of a join token configured for Machine ID and your Kubernetes cluster - as described in the [Machine ID on Kubernetes guide](../../enroll-resources/machine-id/deployment/kubernetes.mdx) +- The name of a join token configured for Machine & Workload Identity and your Kubernetes cluster + as described in the [Machine & Workload Identity on Kubernetes guide](../../machine-workload-identity/deployment/kubernetes.mdx) By default, this chart is designed to use the `kubernetes` join method but it can be customized to use any delegated join method. We do not recommend that @@ -20,13 +25,11 @@ you use the `token` join method with this chart. ## Minimal configuration This basic configuration will write a Teleport identity file to a secret in -the deployment namespace called `test-output`. +the deployment namespace called `-out`. For example `tbot-out`. ```yaml clusterName: "test.teleport.sh" teleportProxyAddress: "test.teleport.sh:443" -defaultOutput: - secretName: "test-output" token: "my-token" ``` diff --git a/docs/pages/reference/helm-reference/teleport-access-graph.mdx b/docs/pages/reference/helm-reference/teleport-access-graph.mdx index 0806c5fb9dc88..58ce5cc406027 100644 --- a/docs/pages/reference/helm-reference/teleport-access-graph.mdx +++ b/docs/pages/reference/helm-reference/teleport-access-graph.mdx @@ -1,11 +1,16 @@ --- title: teleport-access-graph Chart Reference +sidebar_label: teleport-access-graph description: Values that can be set using the teleport-access-graph Helm chart +tags: + - reference + - identity-security + - access-risks --- The `teleport-access-graph` Helm chart deploys the Access Graph service. 
-See [Teleport Identity Security with Access Graph on Self-Hosted Clusters with Helm](../../admin-guides/deploy-a-cluster/access-graph/self-hosted-helm.mdx) +See [Teleport Identity Security with Access Graph on Self-Hosted Clusters with Helm](../../identity-security/access-graph/self-hosted-helm.mdx) for more details. diff --git a/docs/pages/reference/helm-reference/teleport-cluster.mdx b/docs/pages/reference/helm-reference/teleport-cluster.mdx index fca78605d5305..735a21ac13fa4 100644 --- a/docs/pages/reference/helm-reference/teleport-cluster.mdx +++ b/docs/pages/reference/helm-reference/teleport-cluster.mdx @@ -1,11 +1,15 @@ --- title: teleport-cluster Chart Reference +sidebar_label: teleport-cluster description: Values that can be set using the teleport-cluster Helm chart +tags: + - reference + - platform-wide --- The `teleport-cluster` Helm chart deploys a Teleport cluster on Kubernetes. This includes deploying proxies, auth servers, and [kubernetes-access](../../enroll-resources/kubernetes-access/introduction.mdx). -See the [Teleport HA Architecture page](../../admin-guides/deploy-a-cluster/high-availability.mdx) +See the [Teleport HA Architecture page](../../zero-trust-access/deploy-a-cluster/deployments/high-availability.mdx) for more details. You can @@ -35,11 +39,11 @@ Get started with a guide for each mode: | `chartMode` | Purpose | Guide | | - | - | - | -| `standalone` | Runs by relying only on Kubernetes resources. | [Getting Started - Kubernetes](../../admin-guides/deploy-a-cluster/helm-deployments/kubernetes-cluster.mdx) | -| `aws` | Leverages AWS managed services to store data. | [Running an HA Teleport cluster using an AWS EKS Cluster](../../admin-guides/deploy-a-cluster/helm-deployments/aws.mdx) | -| `gcp` | Leverages GCP managed services to store data. | [Running an HA Teleport cluster using a Google Cloud GKE cluster](../../admin-guides/deploy-a-cluster/helm-deployments/gcp.mdx) | -| `azure` | Leverages Azure managed services to store data. 
| [Running an HA Teleport cluster using a Microsoft Azure AKS cluster](../../admin-guides/deploy-a-cluster/helm-deployments/azure.mdx) | -| `scratch` (v12 and above) | Generates empty Teleport configuration. User must pass their own config. This is discouraged, use `standalone` mode with [`auth.teleportConfig`](#authteleportconfig) and [`proxy.teleportConfig`](#proxyteleportconfig) instead. | [Running a Teleport cluster with a custom config](../../admin-guides/deploy-a-cluster/helm-deployments/custom.mdx) | +| `standalone` | Runs by relying only on Kubernetes resources. | [Getting Started - Kubernetes](../../zero-trust-access/deploy-a-cluster/helm-deployments/kubernetes-cluster.mdx) | +| `aws` | Leverages AWS managed services to store data. | [Running an HA Teleport cluster using an AWS EKS Cluster](../../zero-trust-access/deploy-a-cluster/helm-deployments/aws.mdx) | +| `gcp` | Leverages GCP managed services to store data. | [Running an HA Teleport cluster using a Google Cloud GKE cluster](../../zero-trust-access/deploy-a-cluster/helm-deployments/gcp.mdx) | +| `azure` | Leverages Azure managed services to store data. | [Running an HA Teleport cluster using a Microsoft Azure AKS cluster](../../zero-trust-access/deploy-a-cluster/helm-deployments/azure.mdx) | +| `scratch` (v12 and above) | Generates empty Teleport configuration. User must pass their own config. This is discouraged, use `standalone` mode with [`auth.teleportConfig`](#authteleportconfig) and [`proxy.teleportConfig`](#proxyteleportconfig) instead. | [Running a Teleport cluster with a custom config](../../zero-trust-access/deploy-a-cluster/helm-deployments/custom.mdx) | The chart is versioned with Teleport. No compatibility guarantees are ensured @@ -143,7 +147,7 @@ If `proxyProtocol` is unspecified, Teleport does not require PROXY header for th connection, but will accept it if present. This mode is considered insecure and should only be used for testing purposes. 
-See the [PROXY Protocol security section](../../admin-guides/management/security/proxy-protocol.mdx) for more details. +See the [PROXY Protocol security section](../../zero-trust-access/management/security/proxy-protocol.mdx) for more details. ### `auth.teleportConfig` @@ -164,7 +168,7 @@ The merge logic is as follows: - values (string, integer, boolean, ...) are replaced - fields can be unset by setting them to `null` or `~` -See the [Teleport Configuration Reference](../config.mdx) for the list of supported fields. +See the [Teleport Configuration Reference](../deployment/config.mdx) for the list of supported fields. ```yaml auth: @@ -225,7 +229,7 @@ The merge logic is as follows: - values (string, integer, boolean, ...) are replaced - fields can be unset by setting them to `null` or `~` -See the [Teleport Configuration Reference](../config.mdx) for the list of supported fields. +See the [Teleport Configuration Reference](../deployment/config.mdx) for the list of supported fields. ```yaml proxy: @@ -257,12 +261,12 @@ Possible values are `local` and `github` for Teleport Community Edition, plus `o | `string` | `""` | No | `auth_service.authentication.connector_name` | `authentication.connectorName` sets the default authentication connector. -[The SSO documentation](../../admin-guides/access-controls/sso/sso.mdx) explains how to create +[The SSO documentation](../../zero-trust-access/sso/sso.mdx) explains how to create authentication connectors for common identity providers. In addition to SSO connector names, the following built-in connectors are supported: -- [`local`](../../admin-guides/management/admin/users.mdx) for local users -- [`passwordless`](../../admin-guides/access-controls/guides/passwordless.mdx) to enable by +- [`local`](../../zero-trust-access/rbac-get-started/users.mdx) for local users +- [`passwordless`](../../zero-trust-access/authentication/passwordless.mdx) to enable by default passwordless authentication. Defaults to `local`. 
@@ -276,7 +280,7 @@ Defaults to `local`. `authentication.localAuth` controls whether local authentication is enabled. When disabled, users can only log in through authentication connectors like `saml`, `oidc` or `github`. -[Disabling local auth is required for FedRAMP / FIPS](../../admin-guides/access-controls/compliance-frameworks/fedramp.mdx). +[Disabling local auth is required for FedRAMP / FIPS](../../zero-trust-access/compliance-frameworks/fedramp.mdx). ### `authentication.lockingMode` @@ -285,7 +289,7 @@ When disabled, users can only log in through authentication connectors like `sam | `string` | `""` | No | `auth_service.authentication.locking_mode` | `authentication.lockingMode` controls the locking mode cluster-wide. Possible values are `best_effort` and `strict`. -See [the locking modes documentation](../../admin-guides/access-controls/guides/locking.mdx) for more +See [the locking modes documentation](../../identity-governance/locking.mdx) for more details. Defaults to Teleport's binary default when empty: `best_effort`. @@ -298,7 +302,7 @@ Defaults to Teleport's binary default when empty: `best_effort`. `authentication.passwordless` controls whether passwordless authentication is enabled. -[Can be used to forbid passwordless access to your cluster](../../admin-guides/access-controls/guides/passwordless.mdx) +[Can be used to forbid passwordless access to your cluster](../../zero-trust-access/authentication/passwordless.mdx) ### `authentication.secondFactor` @@ -367,7 +371,7 @@ Changing the RP ID will invalidate all already registered webauthn second factor ### `authentication.webauthn` -See [Harden your Cluster Against IdP Compromises](../../admin-guides/access-controls/guides/webauthn.mdx) for more details. +See [Harden your Cluster Against IdP Compromises](../../zero-trust-access/management/security/idp-compromise.mdx) for more details. #### `authentication.webauthn.attestationAllowedCas` @@ -415,7 +419,7 @@ By default no devices are forbidden. 
`sessionRecording` controls the `session_recording` field in the `teleport.yaml` configuration. It is passed as-is in the configuration. -For possible values, [see the Teleport Configuration Reference](../../reference/config.mdx). +For possible values, [see the Teleport Configuration Reference](../deployment/config.mdx). `values.yaml` example: @@ -668,27 +672,6 @@ By using this value you will update the Kubernetes volume specification to mount licenseSecretName: enterprise-license ``` -## `installCRDs` - -| Type | Default value | -|--------|---------------| -| `bool` | `false` | - -CRDs are not namespace-scoped resources - they can be installed only once in a cluster. - -CRDs are required by the Teleport Kubernetes Operator and are installed by default when `operator.enabled` is true. -`installCRDs` overrides this behavior and allows users to indicate whether to deploy Teleport CRDs. - -If several releases of the `teleport-cluster` chart are deployed in the same Kubernetes cluster, only one -release should have `installCRDs` enabled. Unless you are deploying multiple `teleport-cluster` Helm releases in -the same Kubernetes cluster or installing the CRDs on your own you should not have to set this value. - -`values.yaml` example: - - ```yaml - installCRDs: true - ``` - ## `operator` ### `operator.annotations.deployment` @@ -758,7 +741,7 @@ Kubernetes annotations which should be applied to the `ServiceAccount` created b Enabling the operator will also deploy the Teleport CRDs in the Kubernetes cluster. If you are deploying multiple releases of the Helm chart in the same cluster you can override this behavior with -[`installCRDs`](#installCRDs). +[`installCRDs`](#operatorinstallcrds). 
`values.yaml` example: @@ -766,6 +749,22 @@ If you are deploying multiple releases of the Helm chart in the same cluster you operator: enabled: true ``` +### `operator.installCRDs` + +| Type | Default | +|------|---------| +| `string` | `"dynamic"` | + +`operator.installCRDs` controls whether the chart installs the CRDs. +There are three possible values: `dynamic`, `always`, and `never`. + +- `dynamic` means the CRDs are installed if the operator is enabled or if +the CRDs are already present in the cluster. The presence check prevents all +CRDs from being removed if you temporarily disable the operator. +Removing CRDs triggers a cascading deletion, which removes CRs and all the +related resources in Teleport. +- `always` means the CRDs are always installed. +- `never` means the CRDs are never installed. ### `operator.image` | Type | Default value | @@ -885,7 +884,7 @@ If you want to run Teleport version `X.Y.Z`, you should use `helm --version X.Y.Z` instead. -See our [installation guide](../../installation.mdx#docker) for information on +See our [installation guide](../../installation/docker.mdx) for information on Docker image versions. `values.yaml` example: @@ -958,7 +957,7 @@ PodSecurityPolicy resource has been removed in Kubernetes 1.25 and replaced since 1.23 by PodSecurityAdmission. If you are running on Kubernetes 1.23 or later it is recommended to disable PSPs and use PSAs. The steps are documented in the -[PSP removal guide](../../admin-guides/deploy-a-cluster/helm-deployments/migration-kubernetes-1-25-psp.mdx). +[PSP removal guide](../../zero-trust-access/deploy-a-cluster/helm-deployments/migration-kubernetes-1-25-psp.mdx). To disable PSP creation, you can set `enabled` to `false`. @@ -1005,11 +1004,11 @@ policies to define access rules for the cluster.
| `chartMode` | Guide | |--------------|--------------------------------------------------------------------------------------------------------------------| -| `standalone` | [Getting Started - Kubernetes](../../admin-guides/deploy-a-cluster/helm-deployments/kubernetes-cluster.mdx) | -| `aws` | [Running an HA Teleport cluster using an AWS EKS Cluster](../../admin-guides/deploy-a-cluster/helm-deployments/aws.mdx) | -| `gcp` | [Running an HA Teleport cluster using a Google Cloud GKE cluster](../../admin-guides/deploy-a-cluster/helm-deployments/gcp.mdx) | -| `azure` | [Running an HA Teleport cluster using a Microsoft Azure AKS cluster](../../admin-guides/deploy-a-cluster/helm-deployments/azure.mdx) | -| `scratch` | [Running a Teleport cluster with a custom config](../../admin-guides/deploy-a-cluster/helm-deployments/custom.mdx) | +| `standalone` | [Getting Started - Kubernetes](../../zero-trust-access/deploy-a-cluster/helm-deployments/kubernetes-cluster.mdx) | +| `aws` | [Running an HA Teleport cluster using an AWS EKS Cluster](../../zero-trust-access/deploy-a-cluster/helm-deployments/aws.mdx) | +| `gcp` | [Running an HA Teleport cluster using a Google Cloud GKE cluster](../../zero-trust-access/deploy-a-cluster/helm-deployments/gcp.mdx) | +| `azure` | [Running an HA Teleport cluster using a Microsoft Azure AKS cluster](../../zero-trust-access/deploy-a-cluster/helm-deployments/azure.mdx) | +| `scratch` | [Running a Teleport cluster with a custom config](../../zero-trust-access/deploy-a-cluster/helm-deployments/custom.mdx) | Using the `scratch` chart mode is discouraged. Precise chart and Teleport @@ -1031,7 +1030,7 @@ This custom resource configures Prometheus and makes it scrape Teleport metrics. The CRD is deployed by the prometheus-operator and allows workload to get monitored. You need to deploy the `prometheus-operator` in the cluster prior to configuring the `podMonitor` section of the chart. 
See -[the prometheus-operator documentation](https://prometheus-operator.dev/docs/prologue/introduction/) +[the prometheus-operator documentation](https://prometheus-operator.dev/docs/getting-started/introduction/) for setup instructions. @@ -1141,7 +1140,7 @@ You can set `volumeSize` to request a different size of persistent volume when i ## `aws` -`aws` settings are described in the AWS guide: [Running an HA Teleport cluster using an AWS EKS Cluster](../../admin-guides/deploy-a-cluster/helm-deployments/aws.mdx) +`aws` settings are described in the AWS guide: [Running an HA Teleport cluster using an AWS EKS Cluster](../../zero-trust-access/deploy-a-cluster/helm-deployments/aws.mdx) ### `aws.region` @@ -1195,7 +1194,7 @@ When this value is set, Teleport will export events to the Athena audit backend. To use the Athena audit backend, you must set up the required infrastructure (S3 buckets, SQS queue, AthenaDB, IAM roles and permissions, ...). -The requirements are described in [the Athena backend documentation](../backends.mdx#athena) +The requirements are described in [the Athena backend documentation](../deployment/backends.mdx#athena) If both `aws.athenaURL` and `aws.auditLogTable` (DynamoDB) are set, the `aws.auditLogPrimaryBackend` value configures which backend is used for querying. @@ -1225,11 +1224,11 @@ Whether Teleport should configure DynamoDB's autoscaling. Defaults to `false`. ### `aws.accessMonitoring` -`aws.accessMonitoring` configures the [Access Monitoring](../../admin-guides/access-controls/access-monitoring.mdx) +`aws.accessMonitoring` configures the [Access Monitoring](../../identity-governance/access-monitoring.mdx) feature of the Auth Service. Using this feature requires setting up specific AWS infrastructure as described -in [the AccessMonitoring configuration section](../../admin-guides/access-controls/access-monitoring.mdx).
+in [the AccessMonitoring configuration section](../../identity-governance/access-monitoring.mdx). The [Terraform example](https://github.com/gravitational/teleport/tree/v(=teleport.version=)/examples/athena) code will output the chart values for this section. @@ -1256,11 +1255,11 @@ queries. ## `gcp` -`gcp` settings are described in the GCP guide: [Running an HA Teleport cluster using a Google Cloud GKE cluster](../../admin-guides/deploy-a-cluster/helm-deployments/gcp.mdx) +`gcp` settings are described in the GCP guide: [Running an HA Teleport cluster using a Google Cloud GKE cluster](../../zero-trust-access/deploy-a-cluster/helm-deployments/gcp.mdx) ## `azure` -`azure` settings are described in the Azure guide: [Running an HA Teleport cluster using a Microsoft Azure AKS cluster](../../admin-guides/deploy-a-cluster/helm-deployments/azure.mdx) +`azure` settings are described in the Azure guide: [Running an HA Teleport cluster using a Microsoft Azure AKS cluster](../../zero-trust-access/deploy-a-cluster/helm-deployments/azure.mdx) ## `highAvailability` @@ -1384,7 +1383,7 @@ cluster deployed in HA mode. You must install and configure `cert-manager` in your Kubernetes cluster yourself. See the [cert-manager Helm install instructions](https://cert-manager.io/docs/installation/kubernetes/#option-2-install-crds-as-part-of-the-helm-release) - and the relevant sections of the [AWS](../../admin-guides/deploy-a-cluster/helm-deployments/aws.mdx) and [GCP](../../admin-guides/deploy-a-cluster/helm-deployments/gcp.mdx) guides for more information. + and the relevant sections of the [AWS](../../zero-trust-access/deploy-a-cluster/helm-deployments/aws.mdx) and [GCP](../../zero-trust-access/deploy-a-cluster/helm-deployments/gcp.mdx) guides for more information. 
### `highAvailability.certManager.addCommonName` @@ -1399,7 +1398,7 @@ Setting `highAvailability.certManager.addCommonName` to `true` will instruct `ce You must install and configure `cert-manager` in your Kubernetes cluster yourself. See the [cert-manager Helm install instructions](https://cert-manager.io/docs/installation/kubernetes/#option-2-install-crds-as-part-of-the-helm-release) - and the relevant sections of the [AWS](../../admin-guides/deploy-a-cluster/helm-deployments/aws.mdx) and [GCP](../../admin-guides/deploy-a-cluster/helm-deployments/gcp.mdx) guides for more information. + and the relevant sections of the [AWS](../../zero-trust-access/deploy-a-cluster/helm-deployments/aws.mdx) and [GCP](../../zero-trust-access/deploy-a-cluster/helm-deployments/gcp.mdx) guides for more information. `values.yaml` example: @@ -1444,7 +1443,7 @@ Sets the name of the `cert-manager` `Issuer` or `ClusterIssuer` to use for issui You must install configure an appropriate `Issuer` supporting a DNS01 challenge yourself. Please see the [cert-manager DNS01 docs](https://cert-manager.io/docs/configuration/acme/dns01/#supported-dns01-providers) and the relevant sections - of the [AWS](../../admin-guides/deploy-a-cluster/helm-deployments/aws.mdx) and [GCP](../../admin-guides/deploy-a-cluster/helm-deployments/gcp.mdx) guides for more information. + of the [AWS](../../zero-trust-access/deploy-a-cluster/helm-deployments/aws.mdx) and [GCP](../../zero-trust-access/deploy-a-cluster/helm-deployments/gcp.mdx) guides for more information. `values.yaml` example: @@ -1549,13 +1548,9 @@ in the pod logs. You should create the secret in the same namespace as Teleport using a command like this: ```code -kubectl create secret generic my-root-ca --from-file=ca.pem=/path/to/root-ca.pem +$ kubectl create secret generic my-root-ca --from-file=ca.pem=/path/to/root-ca.pem ``` - - The filename used for the root CA in the secret must be `ca.pem`. 
- - `values.yaml` example: ```yaml @@ -1563,6 +1558,21 @@ kubectl create secret generic my-root-ca --from-file=ca.pem=/path/to/root-ca.pem existingCASecretName: my-root-ca ``` +## `tls.existingCASecretKeyName` + +| Type | Default value | +|----------|---------------| +| `string` | `"ca.pem"` | + +`tls.existingCASecretKeyName` determines which key in the CA secret will be used as a trusted CA bundle file. + +`values.yaml` example: + + ```yaml + tls: + existingCASecretKeyName: "ca.pem" + ``` + ## `image` | Type | Default value | @@ -1666,7 +1676,7 @@ Possible values are `text` (default) or `json`. `log.extraFields` sets the fields used in logging for the Teleport process. -See the [Teleport config file reference](../../reference/config.mdx) for more details on possible values for `extra_fields`. +See the [Teleport config file reference](../deployment/config.mdx) for more details on possible values for `extra_fields`. `values.yaml` example: @@ -2312,6 +2322,22 @@ See [the GitHub PR](https://github.com/gravitational/teleport/pull/36251) for te cpu: 1 memory: 2Gi ``` +## `goMemLimitRatio` + +| Type | Default | +|---------|---------| +| `float` | `0.9` | + +`goMemLimitRatio` configures the GOMEMLIMIT environment variable set by the chart. +GOMEMLIMIT instructs the Go garbage collector to try to keep allocated memory +below a given threshold. This is a best-effort mechanism, but it helps +prevent OOM kills during memory bursts. + +When memory limits are set and `goMemLimitRatio` is non-zero, +the chart sets GOMEMLIMIT to `resources.limits.memory * goMemLimitRatio`. +The value must be between 0 and 1. +Set it to 0 to leave GOMEMLIMIT unset. +This has no effect if GOMEMLIMIT is already set through `extraEnv`.
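+
+`values.yaml` example (illustrative values; with a 2Gi memory limit and the
+default ratio of 0.9, the chart sets GOMEMLIMIT to roughly 1.8GiB):
+
+  ```yaml
+  resources:
+    limits:
+      memory: 2Gi
+  goMemLimitRatio: 0.9
+  ```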
## `podSecurityContext` diff --git a/docs/pages/reference/helm-reference/teleport-kube-agent.mdx b/docs/pages/reference/helm-reference/teleport-kube-agent.mdx index 6c02a056dc463..405e13314b484 100644 --- a/docs/pages/reference/helm-reference/teleport-kube-agent.mdx +++ b/docs/pages/reference/helm-reference/teleport-kube-agent.mdx @@ -1,6 +1,11 @@ --- title: teleport-kube-agent Chart Reference +sidebar_label: teleport-kube-agent description: Values that can be set using the teleport-kube-agent Helm chart +tags: + - reference + - zero-trust + - infrastructure-identity --- The `teleport-kube-agent` Helm chart is used to configure a Teleport Agent that @@ -23,10 +28,10 @@ The `teleport-kube-agent` chart can run any or all of three Teleport services: | Teleport service | Name for `roles` and `tctl tokens add` | Purpose | |---------------------------------------------------------------------------|----------------------------------------|----------------------------------------------------------------------------------------------| | [`kubernetes_service`](../../enroll-resources/kubernetes-access/introduction.mdx) | `kube` | Uses Teleport to handle authentication
with and proxy access to a Kubernetes cluster | -| [`application_service`](../../enroll-resources/application-access/guides/guides.mdx) | `app` | Uses Teleport to handle authentication
with and proxy access to web-based applications | +| [`application_service`](../../enroll-resources/application-access/application-access.mdx) | `app` | Uses Teleport to handle authentication
with and proxy access to web-based applications | | [`database_service`](../../enroll-resources/database-access/guides/guides.mdx) | `db` | Uses Teleport to handle authentication
with and proxy access to databases | | [`discovery_service`](../../enroll-resources/auto-discovery/auto-discovery.mdx) | `discovery` | Uses Teleport to discover new resources
and dynamically add them to the cluster | -| [`jamf_service`](../../admin-guides/access-controls/device-trust/jamf-integration.mdx) | `jamf` | Uses Teleport to integrate with Jamf Pro
and sync devices with Device Trust inventory | +| [`jamf_service`](../../identity-governance/device-trust/jamf-integration.mdx) | `jamf` | Uses Teleport to integrate with Jamf Pro
and sync devices with Device Trust inventory | ### Legacy releases diff --git a/docs/pages/reference/helm-reference/teleport-operator.mdx b/docs/pages/reference/helm-reference/teleport-operator.mdx index 7fa9f42b19a65..5c1954fd37453 100644 --- a/docs/pages/reference/helm-reference/teleport-operator.mdx +++ b/docs/pages/reference/helm-reference/teleport-operator.mdx @@ -1,12 +1,17 @@ --- title: teleport-operator Chart Reference +sidebar_label: teleport-operator description: Values that can be set using the teleport-operator Helm chart +tags: + - infrastructure-as-code + - reference + - zero-trust --- The `teleport-operator` Helm chart deploys the Teleport Kubernetes Operator. When deployed via the chart, the operator can join Teleport clusters living in Kubernetes or remote ones (such as Teleport Cloud). -See the [Kubernetes Operator for remote Teleport clusters guide](../../admin-guides/infrastructure-as-code/teleport-operator/teleport-operator-standalone.mdx) +See the [Kubernetes Operator for remote Teleport clusters guide](../../zero-trust-access/infrastructure-as-code/teleport-operator/teleport-operator-standalone.mdx) for more details. 
You can diff --git a/docs/pages/reference/helm-reference/teleport-plugin-datadog.mdx b/docs/pages/reference/helm-reference/teleport-plugin-datadog.mdx index 6afc4634da03a..61258588eff84 100644 --- a/docs/pages/reference/helm-reference/teleport-plugin-datadog.mdx +++ b/docs/pages/reference/helm-reference/teleport-plugin-datadog.mdx @@ -1,6 +1,12 @@ --- title: teleport-plugin-datadog Chart Reference +sidebar_label: teleport-plugin-datadog description: Values that can be set using the teleport-plugin-datadog Helm chart +tags: + - access-requests + - reference + - identity-governance + - privileged-access --- The `teleport-plugin-datadog` Helm chart runs the Datadog Teleport plugin, which diff --git a/docs/pages/reference/helm-reference/teleport-plugin-discord.mdx b/docs/pages/reference/helm-reference/teleport-plugin-discord.mdx index 2bb6d3a7b3e62..a524ad40654e7 100644 --- a/docs/pages/reference/helm-reference/teleport-plugin-discord.mdx +++ b/docs/pages/reference/helm-reference/teleport-plugin-discord.mdx @@ -1,6 +1,12 @@ --- title: teleport-plugin-discord Chart Reference +sidebar_label: teleport-plugin-discord description: Values that can be set using the teleport-plugin-discord Helm chart +tags: + - access-requests + - reference + - zero-trust + - privileged-access --- The `teleport-plugin-discord` Helm chart is used to configure the Discord Teleport plugin, which allows users to receive Access Requests via channels or direct messages in Discord. 
diff --git a/docs/pages/reference/helm-reference/teleport-plugin-email.mdx b/docs/pages/reference/helm-reference/teleport-plugin-email.mdx index 4adfdd3f2aeb4..16b0e822a7149 100644 --- a/docs/pages/reference/helm-reference/teleport-plugin-email.mdx +++ b/docs/pages/reference/helm-reference/teleport-plugin-email.mdx @@ -1,6 +1,12 @@ --- title: teleport-plugin-email Chart Reference +sidebar_label: teleport-plugin-email description: Values that can be set using the teleport-plugin-email Helm chart +tags: + - access-requests + - reference + - zero-trust + - privileged-access --- The `teleport-plugin-email` Helm chart is used to configure the email Teleport plugin, which allows users to receive Access Requests via emails. diff --git a/docs/pages/reference/helm-reference/teleport-plugin-event-handler.mdx b/docs/pages/reference/helm-reference/teleport-plugin-event-handler.mdx index e5753d0209390..41907ec037f92 100644 --- a/docs/pages/reference/helm-reference/teleport-plugin-event-handler.mdx +++ b/docs/pages/reference/helm-reference/teleport-plugin-event-handler.mdx @@ -1,6 +1,11 @@ --- title: teleport-plugin-event-handler Chart Reference +sidebar_label: teleport-plugin-event-handler description: Values that can be set using the teleport-plugin-event-handler Helm chart +tags: + - reference + - zero-trust + - audit --- The `teleport-plugin-event-handler` Helm chart is used to configure the Event Handler Teleport plugin which allows users to send events and session logs to a Fluentd instance for further processing or storage. @@ -11,179 +16,4 @@ This reference details available values for the `teleport-plugin-event-handler` (!docs/pages/includes/backup-warning.mdx!) -## `teleport.address` - -| Type | Default value | Required? | -| - | - | - | -| `string` | `""` | Yes | - -This parameter contains the host/port combination of the Teleport Auth Service. 
- -`values.yaml` example: - - ```yaml - teleport: - address: "teleport.example.com:3025" - ``` - -## `teleport.identitySecretName` - -| Type | Default value | Required? | -| - | - | - | -| `string` | `""` | Yes | - -Name of the Kubernetes secret that contains the credentials for the connection. - -The secret should be in the following format: - -```yaml -apiVersion: v1 -kind: Secret -type: Opaque -metadata: - name: teleport-plugin-event-handler-identity -data: - auth_id: ... -``` - -`values.yaml` example: - - ```yaml - teleport: - identitySecretName: "teleport-plugin-event-handler-identity" - ``` - -## `teleport.identitySecretPath` - -| Type | Default value | Required? | -| - | - | - | -| `string` | `"auth_id"` | No | - -Name of the key in the Kubernetes secret that holds the credentials for the connection. If the secret follows the format above, it can be omitted. - -`values.yaml` example: - - ```yaml - teleport: - identitySecretPath: "auth_id" - ``` - -## `fluentd.url` - -| Type | Default value | Required? | -| - | - | - | -| `string` | `""` | Yes | - -Fluentd URL where the events will be sent. - -`values.yaml` example: - - ```yaml - fluentd: - url: "https://fluentd:24224/events.log" - ``` - -## `fluentd.sessionUrl` - -| Type | Default value | Required? | -| - | - | - | -| `string` | `""` | Yes | - -Fluentd URL where the session logs will be sent. - -`values.yaml` example: - - ```yaml - fluentd: - sessionUrl: "https://fluentd:24224/session.log" - ``` - -## `fluentd.certificate.secretName` - -| Type | Default value | Required? | -| - | - | - | -| `string` | `""` | Yes | - -Secret containing the credentials to connect to Fluentd. It must to contain the CA certificate, the client key and the client certificate. - -`values.yaml` example: - - ```yaml - fluentd: - secretName: "teleport-plugin-event-handler-fluentd" - ``` - -## `fluentd.certificate.caPath` - -| Type | Default value | Required? 
| -| - | - | - | -| `string` | `"ca.crt"` | No | - -Name of the key which contains the CA certificate inside the secret. - -`values.yaml` example: - - ```yaml - fluentd: - caPath: "ca.crt" - ``` - -## `fluentd.certificate.keyPath` - -| Type | Default value | Required? | -| - | - | - | -| `string` | `"client.key"` | No | - -Name of the key which contains the client's private key inside the secret. - -`values.yaml` example: - - ```yaml - fluentd: - keyPath: "client.key" - ``` - -## `fluentd.certificate.certPath` - -| Type | Default value | Required? | -| - | - | - | -| `string` | `"client.crt"` | No | - -Name of the key which contains the client's certificate inside the secret. - -`values.yaml` example: - - ```yaml - fluentd: - certPath: "client.crt" - ``` - -## `log.output` - -| Type | Default value | Required? | -| - | - | - | -| `string` | `stdout` | No | - -Logger output. Can be `stdout`, `stderr` or a file name, eg. `/var/log/teleport/fluentd.log`. - -`values.yaml` example: - - ```yaml - log: - output: /var/log/teleport/fluentd.log - ``` - -## `log.severity` - -| Type | Default value | Required? | -| - | - | - | -| `string` | `stdout` | No | - -Logger severity. Possible values are `INFO`, `ERROR`, `DEBUG` or `WARN`. - -`values.yaml` example: - - ```yaml - log: - severity: DEBUG - ``` +(!docs/pages/includes/helm-reference/zz_generated.event-handler.mdx!) 
diff --git a/docs/pages/reference/helm-reference/teleport-plugin-jira.mdx b/docs/pages/reference/helm-reference/teleport-plugin-jira.mdx index 246fb6a09044e..02f3d7385af0f 100644 --- a/docs/pages/reference/helm-reference/teleport-plugin-jira.mdx +++ b/docs/pages/reference/helm-reference/teleport-plugin-jira.mdx @@ -1,6 +1,12 @@ --- title: teleport-plugin-jira Chart Reference +sidebar_label: teleport-plugin-jira description: Values that can be set using the teleport-plugin-jira Helm chart +tags: + - access-requests + - reference + - identity-governance + - privileged-access --- The `teleport-plugin-jira` Helm chart runs the Jira Teleport plugin, which diff --git a/docs/pages/reference/helm-reference/teleport-plugin-mattermost.mdx b/docs/pages/reference/helm-reference/teleport-plugin-mattermost.mdx index 74dc0e0888ae0..727fc2ceaf5e7 100644 --- a/docs/pages/reference/helm-reference/teleport-plugin-mattermost.mdx +++ b/docs/pages/reference/helm-reference/teleport-plugin-mattermost.mdx @@ -1,6 +1,11 @@ --- title: teleport-plugin-mattermost Chart Reference +sidebar_label: teleport-plugin-mattermost description: Values that can be set using the teleport-plugin-mattermost Helm chart +tags: + - reference + - identity-governance + - privileged-access --- The `teleport-plugin-mattermost` Helm chart is used to configure the diff --git a/docs/pages/reference/helm-reference/teleport-plugin-msteams.mdx b/docs/pages/reference/helm-reference/teleport-plugin-msteams.mdx index eb0548e6a3500..141ea57408ad3 100644 --- a/docs/pages/reference/helm-reference/teleport-plugin-msteams.mdx +++ b/docs/pages/reference/helm-reference/teleport-plugin-msteams.mdx @@ -1,6 +1,12 @@ --- title: teleport-plugin-msteams Chart Reference +sidebar_label: teleport-plugin-msteams description: Values that can be set using the teleport-plugin-msteams Helm chart +tags: + - access-requests + - reference + - identity-governance + - privileged-access --- The `teleport-plugin-msteams` Helm chart is used to 
configure the MsTeams Teleport plugin, which allows users to receive Access Requests via channels or direct messages in MsTeams. diff --git a/docs/pages/reference/helm-reference/teleport-plugin-pagerduty.mdx b/docs/pages/reference/helm-reference/teleport-plugin-pagerduty.mdx index 27a706bdbc196..541c4adb0bf19 100644 --- a/docs/pages/reference/helm-reference/teleport-plugin-pagerduty.mdx +++ b/docs/pages/reference/helm-reference/teleport-plugin-pagerduty.mdx @@ -1,6 +1,12 @@ --- title: teleport-plugin-pagerduty Chart Reference +sidebar_label: teleport-plugin-pagerduty description: Values that can be set using the teleport-plugin-pagerduty Helm chart +tags: + - access-requests + - reference + - identity-governance + - privileged-access --- The `teleport-plugin-pagerduty` Helm chart is used to configure the PagerDuty Teleport plugin, which allows users to receive Access Requests as pages via PagerDuty. diff --git a/docs/pages/reference/helm-reference/teleport-plugin-slack.mdx b/docs/pages/reference/helm-reference/teleport-plugin-slack.mdx index 047bd43adcdbe..fa9f13ba7bf18 100644 --- a/docs/pages/reference/helm-reference/teleport-plugin-slack.mdx +++ b/docs/pages/reference/helm-reference/teleport-plugin-slack.mdx @@ -1,6 +1,12 @@ --- title: teleport-plugin-slack Chart Reference +sidebar_label: teleport-plugin-slack description: Values that can be set using the teleport-plugin-slack Helm chart +tags: + - access-requests + - reference + - identity-governance + - privileged-access --- The `teleport-plugin-slack` Helm chart is used to configure the Slack Teleport plugin, which allows users to receive Access Requests via channels or direct messages in Slack. 
diff --git a/docs/pages/reference/helm-reference/teleport-relay.mdx b/docs/pages/reference/helm-reference/teleport-relay.mdx new file mode 100644 index 0000000000000..c26cee9e51bbc --- /dev/null +++ b/docs/pages/reference/helm-reference/teleport-relay.mdx @@ -0,0 +1,45 @@ +--- +title: teleport-relay Chart Reference +sidebar_label: teleport-relay +description: Values that can be set using the teleport-relay Helm chart +tags: + - reference + - zero-trust + - infrastructure-identity +--- + +The `teleport-relay` Helm chart is used to deploy a Teleport Relay Service group in a Kubernetes cluster, to provide connectivity between clients and resources without involving the Teleport control plane in the network path. + +The `teleport-relay` chart is available for Teleport v18.3.0 and later. + +You can [browse the source on +GitHub](https://github.com/gravitational/teleport/tree/branch/v(=teleport.major_version=)/examples/chart/teleport-relay). + +This reference details available values for the `teleport-relay` chart. + + + +The Teleport Relay service provides alternate network paths for accessing resources through Teleport in specific scenarios where clients and agents are in the same network segment and there is a need for connectivity that does not go through the Teleport control plane. It is not a required or recommended cluster component in most Teleport deployments. + +If you want to provide access to resources like Databases, Applications or Kubernetes clusters, you should use the [`teleport-kube-agent` Helm chart](teleport-kube-agent.mdx) instead. + + + +(!docs/pages/includes/backup-warning.mdx!) 
+ +## What the chart deploys + +The `teleport-relay` chart deploys the following Kubernetes resources: + +| Kind | Default name | Description | +|-|-|-| +| `Deployment` | The release name | One or more replicas of the Teleport instance running the Relay Service | +| `Service` | The release name | The transport and tunnel servers, pointed at the respective listeners of the Relay Service replicas. | +| `ConfigMap` | The release name | The Teleport configuration. | +| `Secret` | `joinTokenSecret.name`, if given, or the release name | The join token name, used to join the cluster. It's possible to use an existing `Secret` that's managed outside of the chart. | +| `ServiceAccount` | `serviceAccount.name`, if given, or the release name | Used for the `kubernetes` join method. It's possible to use an existing `ServiceAccount` that's managed outside of the chart. | +| `PodDisruptionBudget` | The release name | Ensures high availability for the Teleport pods. | + +## Reference + +(!docs/pages/includes/helm-reference/zz_generated.teleport-relay.mdx!) diff --git a/docs/pages/reference/infrastructure-as-code/infrastructure-as-code.mdx b/docs/pages/reference/infrastructure-as-code/infrastructure-as-code.mdx new file mode 100644 index 0000000000000..85f165ae9692a --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/infrastructure-as-code.mdx @@ -0,0 +1,7 @@ +--- +title: Infrastructure as Code References +sidebar_label: Infrastructure as Code +description: Provides comprehensive directories of resources available to tctl, the Teleport Kubernetes operator, and Teleport Terraform provider. 
+--- + + diff --git a/docs/pages/reference/infrastructure-as-code/operator-resources/operator-resources.mdx b/docs/pages/reference/infrastructure-as-code/operator-resources/operator-resources.mdx new file mode 100644 index 0000000000000..1cc4d8925e38a --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/operator-resources/operator-resources.mdx @@ -0,0 +1,33 @@ +--- +title: "Teleport Kubernetes Operator Resource Reference Guides" +sidebar_label: Teleport Kubernetes Operator +description: "Comprehensive guides to fields available in Kubernetes resources you can apply to manage Teleport resources with the Teleport Kubernetes operator" +tags: + - infrastructure-as-code + - reference + - platform-wide +--- + +The Teleport Kubernetes operator allows cluster administrators to manage dynamic +resources as Kubernetes resources using `kubectl` or a continuous deployment +tool like Argo CD. The guides in this section of the documentation include +comprehensive lists of fields for each supported resource. + +## Getting started with the Kubernetes operator + +If you are getting started with the Teleport Kubernetes operator, we recommend +reading the [introductory +guide](../../../zero-trust-access/infrastructure-as-code/teleport-operator/teleport-operator.mdx) +to setting up and using the operator. + +For guidance on using the Teleport Kubernetes operator to manage the dynamic +resources that you would typically see in a Teleport cluster, such as roles and +users, see [Managing +Resources](../../../zero-trust-access/infrastructure-as-code/managing-resources/managing-resources.mdx). + +## Selecting a reference guide + +The Kubernetes operator resource reference guides are organized by resource +version. Select a resource version from the sidebar to view supported fields. 
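+
+For example, you can manage a Teleport role with a manifest like the following
+(an illustrative sketch; the resource kind, role name, and logins are
+placeholder values):
+
+```yaml
+apiVersion: resources.teleport.dev/v1
+kind: TeleportRoleV7
+metadata:
+  name: example-role
+spec:
+  allow:
+    logins: ["ubuntu"]
+```
+
+Applying this manifest with `kubectl apply` prompts the operator to create or
+update the corresponding role in your Teleport cluster.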
+ + diff --git a/docs/pages/reference/operator-resources/resources-teleport-dev-accesslists.mdx b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-accesslists.mdx similarity index 96% rename from docs/pages/reference/operator-resources/resources-teleport-dev-accesslists.mdx rename to docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-accesslists.mdx index 5b9d66fc1ea3e..a1c36958b3dca 100644 --- a/docs/pages/reference/operator-resources/resources-teleport-dev-accesslists.mdx +++ b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-accesslists.mdx @@ -1,7 +1,6 @@ --- title: TeleportAccessList description: Provides a comprehensive list of fields in the TeleportAccessList resource available through the Teleport Kubernetes operator -tocDepth: 3 --- {/*Auto-generated file. Do not edit.*/} @@ -34,6 +33,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |owners|[][object](#specowners-items)|owners is a list of owners of the Access List.| |ownership_requires|[object](#specownership_requires)|ownership_requires describes the requirements for a user to be an owner of the Access List. 
For ownership of an Access List to be effective, the user must meet the requirements of ownership_requires and must be in the owners list.| |title|string|title is a plaintext short description of the Access List.| +|type|string|type can be an empty string, which denotes a regular Access List; "scim", which represents an Access List created from a SCIM group; or "static", for Access Lists managed by IaC tools.| ### spec.audit diff --git a/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-appsv3.mdx b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-appsv3.mdx new file mode 100644 index 0000000000000..d1640f0e7ab7f --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-appsv3.mdx @@ -0,0 +1,128 @@ +--- +title: TeleportAppV3 +description: Provides a comprehensive list of fields in the TeleportAppV3 resource available through the Teleport Kubernetes operator +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/operator and run "make crd-docs".*/} + +This guide is a comprehensive reference to the fields in the `TeleportAppV3` +resource, which you can apply after installing the Teleport Kubernetes operator. + + +## resources.teleport.dev/v1 + +**apiVersion:** resources.teleport.dev/v1 + +|Field|Type|Description| +|---|---|---| +|apiVersion|string|APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources| +|kind|string|Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds| +|metadata|object|| +|spec|[object](#spec)|App resource definition v3 from Teleport| + +### spec + +|Field|Type|Description| +|---|---|---| +|UserGroups|[]string|UserGroups are a list of user group IDs that this app is associated with.| +|aws|[object](#specaws)|AWS contains additional options for AWS applications.| +|cloud|string|Cloud identifies the cloud instance the app represents.| +|cors|[object](#speccors)|CORSPolicy defines the Cross-Origin Resource Sharing settings for the app.| +|dynamic_labels|[object](#specdynamic_labels)|DynamicLabels are the app's command labels.| +|identity_center|[object](#specidentity_center)|IdentityCenter encapsulates AWS identity-center specific information. Only valid for Identity Center account apps.| +|insecure_skip_verify|boolean|InsecureSkipVerify disables the app's TLS certificate verification.| +|integration|string|Integration is the integration name that must be used to access this Application. Only applicable to AWS App Access. If present, the Application must use the Integration's credentials instead of ambient credentials to access Cloud APIs.| +|mcp|[object](#specmcp)|MCP contains MCP server related configurations.| +|public_addr|string|PublicAddr is the public address the application is accessible at.| +|required_app_names|[]string|RequiredAppNames is a list of app names that are required for this app to function. Any app listed here will be part of the authentication redirect flow and authenticate alongside this app.| +|rewrite|[object](#specrewrite)|Rewrite is a list of rewriting rules to apply to requests and responses.| +|tcp_ports|[][object](#spectcp_ports-items)|TCPPorts is a list of ports and port ranges that an app agent can forward connections to. Only applicable to TCP App Access.
If this field is not empty, URI is expected to contain no port number and start with the tcp protocol.| +|uri|string|URI is the web app endpoint.| +|use_any_proxy_public_addr|boolean|UseAnyProxyPublicAddr will rebuild this app's fqdn based on the proxy public addr that the request originated from. This should be true if your proxy has multiple proxy public addrs and you want the app to be accessible from any of them. If `public_addr` is explicitly set in the app spec, setting this value to true will overwrite that public address in the web UI.| + +### spec.aws + +|Field|Type|Description| +|---|---|---| +|external_id|string|ExternalID is the AWS External ID used when assuming roles in this app.| +|roles_anywhere_profile|[object](#specawsroles_anywhere_profile)|RolesAnywhereProfile contains the IAM Roles Anywhere fields associated with this Application. These fields are set when performing the synchronization of AWS IAM Roles Anywhere Profiles into Teleport Apps.| + +### spec.aws.roles_anywhere_profile + +|Field|Type|Description| +|---|---|---| +|accept_role_session_name|boolean|Whether this Roles Anywhere Profile accepts a custom role session name. When not supported, the AWS Session Name will be the X.509 certificate's serial number. When supported, the AWS Session Name will be the identity's username. 
This value comes from: https://docs.aws.amazon.com/rolesanywhere/latest/APIReference/API_ProfileDetail.html / acceptRoleSessionName| +|profile_arn|string|ProfileARN is the AWS IAM Roles Anywhere Profile ARN that originated this Teleport App.| + +### spec.cors + +|Field|Type|Description| +|---|---|---| +|allow_credentials|boolean|allow_credentials indicates whether credentials are allowed.| +|allowed_headers|[]string|allowed_headers specifies which headers can be used when accessing the app.| +|allowed_methods|[]string|allowed_methods specifies which methods are allowed when accessing the app.| +|allowed_origins|[]string|allowed_origins specifies which origins are allowed to access the app.| +|exposed_headers|[]string|exposed_headers indicates which headers are made available to scripts via the browser.| +|max_age|integer|max_age indicates how long (in seconds) the results of a preflight request can be cached.| + +### spec.dynamic_labels + +|Field|Type|Description| +|---|---|---| +|key|string|| +|value|[object](#specdynamic_labelsvalue)|| + +### spec.dynamic_labels.value + +|Field|Type|Description| +|---|---|---| +|command|[]string|Command is a command to run| +|period|string|Period is a time between command runs| +|result|string|Result captures standard output| + +### spec.identity_center + +|Field|Type|Description| +|---|---|---| +|account_id|string|Account ID is the AWS-assigned ID of the account| +|permission_sets|[][object](#specidentity_centerpermission_sets-items)|PermissionSets lists the available permission sets on the given account| + +### spec.identity_center.permission_sets items + +|Field|Type|Description| +|---|---|---| +|arn|string|ARN is the fully-formed ARN of the Permission Set.| +|assignment_name|string|AssignmentID is the ID of the Teleport Account Assignment resource that represents this permission being assigned on the enclosing Account.| +|name|string|Name is the human-readable name of the Permission Set.| + +### spec.mcp + 
+|Field|Type|Description| +|---|---|---| +|args|[]string|Args to execute with the command.| +|command|string|Command to launch stdio-based MCP servers.| +|run_as_host_user|string|RunAsHostUser is the host user account under which the command will be executed. Required for stdio-based MCP servers.| + +### spec.rewrite + +|Field|Type|Description| +|---|---|---| +|headers|[][object](#specrewriteheaders-items)|Headers is a list of headers to inject when passing the request over to the application.| +|jwt_claims|string|JWTClaims configures whether roles/traits are included in the JWT token.| +|redirect|[]string|Redirect defines a list of hosts which will be rewritten to the public address of the application if they occur in the "Location" header.| + +### spec.rewrite.headers items + +|Field|Type|Description| +|---|---|---| +|name|string|Name is the http header name.| +|value|string|Value is the http header value.| + +### spec.tcp_ports items + +|Field|Type|Description| +|---|---|---| +|end_port|integer|EndPort describes the end of the range, inclusive. If set, it must be between 2 and 65535 and be greater than Port when describing a port range. When omitted or set to zero, it signifies that the port range defines a single port.| +|port|integer|Port describes the start of the range. 
It must be between 1 and 65535.| + diff --git a/docs/pages/reference/operator-resources/resources-teleport-dev-autoupdateconfigsv1.mdx b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-autoupdateconfigsv1.mdx similarity index 89% rename from docs/pages/reference/operator-resources/resources-teleport-dev-autoupdateconfigsv1.mdx rename to docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-autoupdateconfigsv1.mdx index f5cd8d33db3b0..08201aea60e55 100644 --- a/docs/pages/reference/operator-resources/resources-teleport-dev-autoupdateconfigsv1.mdx +++ b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-autoupdateconfigsv1.mdx @@ -1,7 +1,6 @@ --- title: TeleportAutoupdateConfigV1 description: Provides a comprehensive list of fields in the TeleportAutoupdateConfigV1 resource available through the Teleport Kubernetes operator -tocDepth: 3 --- {/*Auto-generated file. Do not edit.*/} @@ -48,6 +47,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |Field|Type|Description| |---|---|---| +|canary_count|integer|canary_count is the number of canary agents that will be updated before the whole group is updated. When set to 0, the group does not enter the canary phase. This number is capped at 5. This number must always be lower than the total number of agents in the group, otherwise the rollout will be stuck.| |days|[]string|days when the update can run.
Supported values are "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun" and "*"| |name|string|name of the group| |start_hour|integer|start_hour to initiate update| diff --git a/docs/pages/reference/operator-resources/resources-teleport-dev-autoupdateversionsv1.mdx b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-autoupdateversionsv1.mdx similarity index 89% rename from docs/pages/reference/operator-resources/resources-teleport-dev-autoupdateversionsv1.mdx rename to docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-autoupdateversionsv1.mdx index 2c2e33e8195cb..a044e78edac11 100644 --- a/docs/pages/reference/operator-resources/resources-teleport-dev-autoupdateversionsv1.mdx +++ b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-autoupdateversionsv1.mdx @@ -1,7 +1,6 @@ --- title: TeleportAutoupdateVersionV1 description: Provides a comprehensive list of fields in the TeleportAutoupdateVersionV1 resource available through the Teleport Kubernetes operator -tocDepth: 3 --- {/*Auto-generated file. Do not edit.*/} @@ -35,8 +34,8 @@ resource, which you can apply after installing the Teleport Kubernetes operator. 
|---|---|---| |mode|string|autoupdate_mode to use for the rollout| |schedule|string|schedule to use for the rollout| -|start_version|string|start_version is the version to update from.| -|target_version|string|target_version is the version to update to.| +|start_version|string|start_version is the version used for newly installed agents before their update window.| +|target_version|string|target_version is the version that all agents will update to during their update window.| ### spec.tools diff --git a/docs/pages/reference/operator-resources/resources-teleport-dev-botsv1.mdx b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-botsv1.mdx similarity index 99% rename from docs/pages/reference/operator-resources/resources-teleport-dev-botsv1.mdx rename to docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-botsv1.mdx index 9ad6792854b8b..27c63b0ddab74 100644 --- a/docs/pages/reference/operator-resources/resources-teleport-dev-botsv1.mdx +++ b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-botsv1.mdx @@ -1,7 +1,6 @@ --- title: TeleportBotV1 description: Provides a comprehensive list of fields in the TeleportBotV1 resource available through the Teleport Kubernetes operator -tocDepth: 3 --- {/*Auto-generated file. Do not edit.*/} diff --git a/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-databasesv3.mdx b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-databasesv3.mdx new file mode 100644 index 0000000000000..2f2eaa7897627 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-databasesv3.mdx @@ -0,0 +1,244 @@ +--- +title: TeleportDatabaseV3 +description: Provides a comprehensive list of fields in the TeleportDatabaseV3 resource available through the Teleport Kubernetes operator +--- + +{/*Auto-generated file. 
Do not edit.*/} +{/*To regenerate, navigate to integrations/operator and run "make crd-docs".*/} + +This guide is a comprehensive reference to the fields in the `TeleportDatabaseV3` +resource, which you can apply after installing the Teleport Kubernetes operator. + + +## resources.teleport.dev/v1 + +**apiVersion:** resources.teleport.dev/v1 + +|Field|Type|Description| +|---|---|---| +|apiVersion|string|APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources| +|kind|string|Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds| +|metadata|object|| +|spec|[object](#spec)|Database resource definition v3 from Teleport| + +### spec + +|Field|Type|Description| +|---|---|---| +|ad|[object](#specad)|AD is the Active Directory configuration for the database.| +|admin_user|[object](#specadmin_user)|AdminUser is the database admin user for automatic user provisioning.| +|aws|[object](#specaws)|AWS contains AWS specific settings for RDS/Aurora/Redshift databases.| +|azure|[object](#specazure)|Azure contains Azure specific database metadata.| +|ca_cert|string|CACert is the PEM-encoded database CA certificate. DEPRECATED: Moved to TLS.CACert. 
DELETE IN 10.0.| +|dynamic_labels|[object](#specdynamic_labels)|DynamicLabels are the database's dynamic labels.| +|gcp|[object](#specgcp)|GCP contains parameters specific to GCP Cloud SQL databases.| +|mongo_atlas|[object](#specmongo_atlas)|MongoAtlas contains Atlas metadata about the database.| +|mysql|[object](#specmysql)|MySQL is an additional section with MySQL database options.| +|oracle|[object](#specoracle)|Oracle contains additional Oracle configuration options.| +|protocol|string|Protocol is the database protocol: postgres, mysql, mongodb, etc.| +|tls|[object](#spectls)|TLS is the TLS configuration used when establishing connection to target database. Allows providing a custom CA cert or overriding the server name.| +|uri|string|URI is the database connection endpoint.| + ### spec.ad + |Field|Type|Description| +|---|---|---| +|domain|string|Domain is the Active Directory domain the database resides in.| +|kdc_host_name|string|KDCHostName is the host name for a KDC for x509 Authentication.| +|keytab_file|string|KeytabFile is the path to the Kerberos keytab file.| +|krb5_file|string|Krb5File is the path to the Kerberos configuration file. Defaults to /etc/krb5.conf.| +|ldap_cert|string|LDAPCert is a certificate from Windows LDAP/AD, optional; only for x509 Authentication.| +|ldap_service_account_name|string|LDAPServiceAccountName is the name of the service account for performing LDAP queries. Required for x509 Auth / PKINIT.| +|ldap_service_account_sid|string|LDAPServiceAccountSID is the SID of the service account for performing LDAP queries. Required for x509 Auth / PKINIT.| +|spn|string|SPN is the service principal name for the database.| + ### spec.admin_user + |Field|Type|Description| +|---|---|---| +|default_database|string|DefaultDatabase is the database that the privileged database user logs into by default.
Depending on the database type, this database may be used to store procedures or data for managing database users.| +|name|string|Name is the username of the privileged database user.| + ### spec.aws + |Field|Type|Description| +|---|---|---| +|account_id|string|AccountID is the AWS account ID this database belongs to.| +|assume_role_arn|string|AssumeRoleARN is an optional AWS role ARN to assume when accessing a database. Set this field and ExternalID to enable access across AWS accounts.| +|docdb|[object](#specawsdocdb)|DocumentDB contains Amazon DocumentDB-specific metadata.| +|elasticache|[object](#specawselasticache)|ElastiCache contains Amazon ElastiCache Redis-specific metadata.| +|elasticache_serverless|[object](#specawselasticache_serverless)|ElastiCacheServerless contains Amazon ElastiCache Serverless metadata.| +|external_id|string|ExternalID is an optional AWS external ID used to enable assuming an AWS role across accounts.| +|iam_policy_status|string or integer|IAMPolicyStatus indicates whether the IAM Policy is configured properly for database access. If not, the user must update the AWS profile identity to allow access to the Database. E.g., for an RDS Database: the underlying AWS profile allows for `rds-db:connect` for the Database.
Can be either the string or the integer representation of each option.| +|memorydb|[object](#specawsmemorydb)|MemoryDB contains AWS MemoryDB specific metadata.| +|opensearch|[object](#specawsopensearch)|OpenSearch contains AWS OpenSearch specific metadata.| +|rds|[object](#specawsrds)|RDS contains RDS specific metadata.| +|rdsproxy|[object](#specawsrdsproxy)|RDSProxy contains AWS Proxy specific metadata.| +|redshift|[object](#specawsredshift)|Redshift contains Redshift specific metadata.| +|redshift_serverless|[object](#specawsredshift_serverless)|RedshiftServerless contains Amazon Redshift Serverless-specific metadata.| +|region|string|Region is an AWS cloud region.| +|secret_store|[object](#specawssecret_store)|SecretStore contains secret store configurations.| +|session_tags|[object](#specawssession_tags)|SessionTags is a list of AWS STS session tags.| + ### spec.aws.docdb + |Field|Type|Description| +|---|---|---| +|cluster_id|string|ClusterID is the cluster identifier.| +|endpoint_type|string|EndpointType is the type of the endpoint.| +|instance_id|string|InstanceID is the instance identifier.| + ### spec.aws.elasticache + |Field|Type|Description| +|---|---|---| +|endpoint_type|string|EndpointType is the type of the endpoint.| +|replication_group_id|string|ReplicationGroupID is the Redis replication group ID.| +|transit_encryption_enabled|boolean|TransitEncryptionEnabled indicates whether in-transit encryption (TLS) is enabled.| +|user_group_ids|[]string|UserGroupIDs is a list of user group IDs.| + ### spec.aws.elasticache_serverless + |Field|Type|Description| +|---|---|---| +|cache_name|string|CacheName is an ElastiCache Serverless cache name.| + ### spec.aws.memorydb + |Field|Type|Description| +|---|---|---| +|acl_name|string|ACLName is the name of the ACL associated with the cluster.| +|cluster_name|string|ClusterName is the name of the MemoryDB cluster.| +|endpoint_type|string|EndpointType is the type of the endpoint.|
+|tls_enabled|boolean|TLSEnabled indicates whether in-transit encryption (TLS) is enabled.| + +### spec.aws.opensearch + +|Field|Type|Description| +|---|---|---| +|domain_id|string|DomainID is the ID of the domain.| +|domain_name|string|DomainName is the name of the domain.| +|endpoint_type|string|EndpointType is the type of the endpoint.| + +### spec.aws.rds + +|Field|Type|Description| +|---|---|---| +|cluster_id|string|ClusterID is the RDS cluster (Aurora) identifier.| +|iam_auth|boolean|IAMAuth indicates whether database IAM authentication is enabled.| +|instance_id|string|InstanceID is the RDS instance identifier.| +|resource_id|string|ResourceID is the RDS instance resource identifier (db-xxx).| +|security_groups|[]string|SecurityGroups is a list of attached security groups for the RDS instance.| +|subnets|[]string|Subnets is a list of subnets for the RDS instance.| +|vpc_id|string|VPCID is the VPC where the RDS is running.| + +### spec.aws.rdsproxy + +|Field|Type|Description| +|---|---|---| +|custom_endpoint_name|string|CustomEndpointName is the identifier of an RDS Proxy custom endpoint.| +|name|string|Name is the identifier of an RDS Proxy.| +|resource_id|string|ResourceID is the RDS instance resource identifier (prx-xxx).| + +### spec.aws.redshift + +|Field|Type|Description| +|---|---|---| +|cluster_id|string|ClusterID is the Redshift cluster identifier.| + +### spec.aws.redshift_serverless + +|Field|Type|Description| +|---|---|---| +|endpoint_name|string|EndpointName is the VPC endpoint name.| +|workgroup_id|string|WorkgroupID is the workgroup ID.| +|workgroup_name|string|WorkgroupName is the workgroup name.| + +### spec.aws.secret_store + +|Field|Type|Description| +|---|---|---| +|key_prefix|string|KeyPrefix specifies the secret key prefix.| +|kms_key_id|string|KMSKeyID specifies the AWS KMS key for encryption.| + +### spec.aws.session_tags + +|Field|Type|Description| +|---|---|---| +|key|string|| +|value|string|| + +### spec.azure + 
+|Field|Type|Description| +|---|---|---| +|is_flexi_server|boolean|IsFlexiServer is true if the database is an Azure Flexible server.| +|name|string|Name is the Azure database server name.| +|redis|[object](#specazureredis)|Redis contains Azure Cache for Redis specific database metadata.| +|resource_id|string|ResourceID is the Azure fully qualified ID for the resource.| + +### spec.azure.redis + +|Field|Type|Description| +|---|---|---| +|clustering_policy|string|ClusteringPolicy is the clustering policy for Redis Enterprise.| + +### spec.dynamic_labels + +|Field|Type|Description| +|---|---|---| +|key|string|| +|value|[object](#specdynamic_labelsvalue)|| + +### spec.dynamic_labels.value + +|Field|Type|Description| +|---|---|---| +|command|[]string|Command is a command to run| +|period|string|Period is a time between command runs| +|result|string|Result captures standard output| + +### spec.gcp + +|Field|Type|Description| +|---|---|---| +|alloydb|[object](#specgcpalloydb)|AlloyDB contains AlloyDB specific configuration elements.| +|instance_id|string|InstanceID is the Cloud SQL instance ID.| +|project_id|string|ProjectID is the GCP project ID the Cloud SQL instance resides in.| + +### spec.gcp.alloydb + +|Field|Type|Description| +|---|---|---| +|endpoint_override|string|EndpointOverride is an override of endpoint address to use.| +|endpoint_type|string|EndpointType is the database endpoint type to use. 
Should be one of: "private", "public", "psc".| + ### spec.mongo_atlas + |Field|Type|Description| +|---|---|---| +|name|string|Name is the Atlas database instance name.| + ### spec.mysql + |Field|Type|Description| +|---|---|---| +|server_version|string|ServerVersion is the server version reported by DB proxy if the runtime information is not available.| + ### spec.oracle + |Field|Type|Description| +|---|---|---| +|audit_user|string|AuditUser is the name of the Oracle database user that should be used to access the internal audit trail.| +|retry_count|integer|RetryCount is the maximum number of times to retry connecting to a host upon failure. If not specified, it defaults to 2, for a total of 3 connection attempts.| +|shuffle_hostnames|boolean|ShuffleHostnames, when true, randomizes the order of hosts to connect to from the provided list.| + ### spec.tls + |Field|Type|Description| +|---|---|---| +|ca_cert|string|CACert is an optional user-provided CA certificate used for verifying the database TLS connection.| +|mode|string or integer|Mode is a TLS connection mode. 0 is "verify-full", 1 is "verify-ca", 2 is "insecure". Can be either the string or the integer representation of each option.| +|server_name|string|ServerName allows providing a custom hostname. This value will override the servername/hostname on a certificate during validation.| +|trust_system_cert_pool|boolean|TrustSystemCertPool allows Teleport to trust certificate authorities available on the host system. If not set (by default), Teleport only trusts self-signed databases with TLS certificates signed by Teleport's Database Server CA or the ca_cert specified in this TLS setting.
For cloud-hosted databases, Teleport downloads the corresponding required CAs for validation.| + diff --git a/docs/pages/reference/operator-resources/resources-teleport-dev-githubconnectors.mdx b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-githubconnectors.mdx similarity index 94% rename from docs/pages/reference/operator-resources/resources-teleport-dev-githubconnectors.mdx rename to docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-githubconnectors.mdx index 0b95b075c0a7c..ee81a15560231 100644 --- a/docs/pages/reference/operator-resources/resources-teleport-dev-githubconnectors.mdx +++ b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-githubconnectors.mdx @@ -1,7 +1,6 @@ --- title: TeleportGithubConnector description: Provides a comprehensive list of fields in the TeleportGithubConnector resource available through the Teleport Kubernetes operator -tocDepth: 3 --- {/*Auto-generated file. Do not edit.*/} @@ -34,6 +33,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. 
|endpoint_url|string|EndpointURL is the URL of the GitHub instance this connector is for.| |redirect_url|string|RedirectURL is the authorization callback URL.| |teams_to_roles|[][object](#specteams_to_roles-items)|TeamsToRoles maps Github team memberships onto allowed roles.| +|user_matchers|[]string|UserMatchers is a set of glob patterns to narrow down which username(s) this auth connector should match for identifier-first login.| ### spec.client_redirect_settings diff --git a/docs/pages/reference/operator-resources/resources-teleport-dev-loginrules.mdx b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-loginrules.mdx similarity index 99% rename from docs/pages/reference/operator-resources/resources-teleport-dev-loginrules.mdx rename to docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-loginrules.mdx index 4befea3b164f3..cbc20e04bcf09 100644 --- a/docs/pages/reference/operator-resources/resources-teleport-dev-loginrules.mdx +++ b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-loginrules.mdx @@ -1,7 +1,6 @@ --- title: TeleportLoginRule description: Provides a comprehensive list of fields in the TeleportLoginRule resource available through the Teleport Kubernetes operator -tocDepth: 3 --- {/*Auto-generated file. Do not edit.*/} diff --git a/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-oidcconnectors.mdx b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-oidcconnectors.mdx new file mode 100644 index 0000000000000..da59d5d8b2e29 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-oidcconnectors.mdx @@ -0,0 +1,85 @@ +--- +title: TeleportOIDCConnector +description: Provides a comprehensive list of fields in the TeleportOIDCConnector resource available through the Teleport Kubernetes operator +--- + +{/*Auto-generated file. 
Do not edit.*/} +{/*To regenerate, navigate to integrations/operator and run "make crd-docs".*/} + +This guide is a comprehensive reference to the fields in the `TeleportOIDCConnector` +resource, which you can apply after installing the Teleport Kubernetes operator. + + +## resources.teleport.dev/v3 + +**apiVersion:** resources.teleport.dev/v3 + +|Field|Type|Description| +|---|---|---| +|apiVersion|string|APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources| +|kind|string|Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds| +|metadata|object|| +|spec|[object](#spec)|OIDCConnector resource definition v3 from Teleport| + +### spec + +|Field|Type|Description| +|---|---|---| +|acr_values|string|ACR is an Authentication Context Class Reference value. The meaning of the ACR value is context-specific and varies for identity providers.| +|allow_unverified_email|boolean|AllowUnverifiedEmail tells the connector to accept OIDC users with unverified emails.| +|claims_to_roles|[][object](#specclaims_to_roles-items)|ClaimsToRoles specifies a dynamic mapping from claims to roles.| +|client_id|string|ClientID is the id of the authentication client (Teleport Auth Service).| +|client_redirect_settings|[object](#specclient_redirect_settings)|ClientRedirectSettings defines which client redirect URLs are allowed for non-browser SSO logins other than the standard localhost ones.| +|client_secret|string|ClientSecret is used to authenticate the client. This field supports secret lookup. 
See the operator documentation for more details.| +|display|string|Display is the friendly name for this provider.| +|entra_id_groups_provider|[object](#specentra_id_groups_provider)|EntraIDGroupsProvider configures an out-of-band user groups provider. It works by following through the groups claim source, which is sent for the "groups" claim when the user's group membership exceeds the 200-item limit.| +|google_admin_email|string|GoogleAdminEmail is the email of a google admin to impersonate.| +|google_service_account|string|GoogleServiceAccount is a string containing google service account credentials.| +|google_service_account_uri|string|GoogleServiceAccountURI is a path to a google service account uri.| +|issuer_url|string|IssuerURL is the endpoint of the provider, e.g. https://accounts.google.com.| +|max_age|string|MaxAge is the amount of time that user logins are valid for. If a user logs in, but then does not log in again within this time period, they will be forced to re-authenticate.| +|mfa|[object](#specmfa)|MFASettings contains settings to enable SSO MFA checks through this auth connector.| +|pkce_mode|string|PKCEMode represents the configuration state for PKCE (Proof Key for Code Exchange). It can be "enabled" or "disabled".| +|prompt|string|Prompt is an optional OIDC prompt. An empty string omits prompt. If not specified, it defaults to select_account for backwards compatibility.| +|provider|string|Provider is the external identity provider.| +|redirect_url|[]string|RedirectURLs is a list of callback URLs which the identity provider can use to redirect the client back to the Teleport Proxy to complete authentication. This list should match the URLs on the provider's side. The URL used for a given auth request will be chosen to match the requesting Proxy's public address.
If there is no match, the first URL in the list will be used.| +|request_object_mode|string|RequestObjectMode determines how JWT-Secured Authorization Requests will be used for authorization requests. JARs, or request objects, can provide integrity protection, source authentication, and confidentiality for authorization request parameters.| +|scope|[]string|Scope specifies additional scopes set by provider.| +|user_matchers|[]string|UserMatchers is a set of glob patterns to narrow down which username(s) this auth connector should match for identifier-first login.| +|username_claim|string|UsernameClaim specifies the name of the claim from the OIDC connector to be used as the user's username.| + ### spec.claims_to_roles items + |Field|Type|Description| +|---|---|---| +|claim|string|Claim is a claim name.| +|roles|[]string|Roles is a list of static teleport roles to match.| +|value|string|Value is a claim value to match.| + ### spec.client_redirect_settings + |Field|Type|Description| +|---|---|---| +|allowed_https_hostnames|[]string|a list of hostnames allowed for https client redirect URLs| +|insecure_allowed_cidr_ranges|[]string|a list of CIDRs allowed for HTTP or HTTPS client redirect URLs| + ### spec.entra_id_groups_provider + |Field|Type|Description| +|---|---|---| +|disabled|boolean|Disabled specifies that the groups provider should be disabled even when Entra ID responds with a groups claim source. Users may choose to disable it when using integrations such as SCIM or a similar groups importer, as connector-based role mapping may not be needed in that scenario.| +|graph_endpoint|string|GraphEndpoint is a Microsoft Graph API endpoint. The groups claim source endpoint provided by Entra ID points to the now-retired Azure AD Graph endpoint ("https://graph.windows.net"). To convert it to the newer Microsoft Graph API endpoint, Teleport defaults to the Microsoft Graph global service endpoint ("https://graph.microsoft.com").
Update GraphEndpoint to point to a different Microsoft Graph national cloud deployment endpoint.| +|group_type|string|GroupType is a user group type filter. Defaults to "security-groups". Value can be "security-groups", "directory-roles", "all-groups".| + ### spec.mfa + |Field|Type|Description| +|---|---|---| +|acr_values|string|AcrValues are Authentication Context Class Reference values. The meaning of the ACR value is context-specific and varies for identity providers. Some identity providers support MFA-specific contexts, such as Okta with its "phr" (phishing-resistant) ACR.| +|client_id|string|ClientID is the OIDC OAuth app client ID.| +|client_secret|string|ClientSecret is the OIDC OAuth app client secret.| +|enabled|boolean|Enabled specifies whether this OIDC connector supports MFA checks. Defaults to false.| +|max_age|string|MaxAge is the amount of time in nanoseconds that an IdP session is valid for. Defaults to 0 to always force re-authentication for MFA checks. This should only be set to a non-zero value if the IdP is set up to perform MFA checks on top of active user sessions.| +|prompt|string|Prompt is an optional OIDC prompt. An empty string omits prompt. If not specified, it defaults to select_account for backwards compatibility.| +|request_object_mode|string|RequestObjectMode determines how JWT-Secured Authorization Requests will be used for authorization requests. JARs, or request objects, can provide integrity protection, source authentication, and confidentiality for authorization request parameters. If omitted, MFA flows will default to the `RequestObjectMode` behavior specified in the base OIDC connector.
Set this property to 'none' to explicitly disable request objects for the MFA client.| + diff --git a/docs/pages/reference/operator-resources/resources-teleport-dev-oktaimportrules.mdx b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-oktaimportrules.mdx similarity index 99% rename from docs/pages/reference/operator-resources/resources-teleport-dev-oktaimportrules.mdx rename to docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-oktaimportrules.mdx index c28c50efeefc2..1b4c910f77204 100644 --- a/docs/pages/reference/operator-resources/resources-teleport-dev-oktaimportrules.mdx +++ b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-oktaimportrules.mdx @@ -1,7 +1,6 @@ --- title: TeleportOktaImportRule description: Provides a comprehensive list of fields in the TeleportOktaImportRule resource available through the Teleport Kubernetes operator -tocDepth: 3 --- {/*Auto-generated file. Do not edit.*/} diff --git a/docs/pages/reference/operator-resources/resources-teleport-dev-openssheiceserversv2.mdx b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-openssheiceserversv2.mdx similarity index 96% rename from docs/pages/reference/operator-resources/resources-teleport-dev-openssheiceserversv2.mdx rename to docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-openssheiceserversv2.mdx index 4f87f08676fda..d1241b6c184a1 100644 --- a/docs/pages/reference/operator-resources/resources-teleport-dev-openssheiceserversv2.mdx +++ b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-openssheiceserversv2.mdx @@ -1,7 +1,6 @@ --- title: TeleportOpenSSHEICEServerV2 description: Provides a comprehensive list of fields in the TeleportOpenSSHEICEServerV2 resource available through the Teleport Kubernetes operator -tocDepth: 3 --- {/*Auto-generated file. 
Do not edit.*/} @@ -33,6 +32,8 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |peer_addr|string|PeerAddr is the address a proxy server is reachable at by its peer proxies.| |proxy_ids|[]string|ProxyIDs is a list of proxy IDs this server is expected to be connected to.| |public_addrs|[]string|PublicAddrs is a list of public addresses where this server can be reached.| +|relay_group|string|the name of the Relay group that the server is connected to| +|relay_ids|[]string|the list of Relay host IDs that the server is connected to| |rotation|[object](#specrotation)|Rotation specifies server rotation| |use_tunnel|boolean|UseTunnel indicates that connections to this server should occur over a reverse tunnel.| |version|string|TeleportVersion is the teleport version that the server is running on| diff --git a/docs/pages/reference/operator-resources/resources-teleport-dev-opensshserversv2.mdx b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-opensshserversv2.mdx similarity index 96% rename from docs/pages/reference/operator-resources/resources-teleport-dev-opensshserversv2.mdx rename to docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-opensshserversv2.mdx index 3961366039613..26cc9c60bda10 100644 --- a/docs/pages/reference/operator-resources/resources-teleport-dev-opensshserversv2.mdx +++ b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-opensshserversv2.mdx @@ -1,7 +1,6 @@ --- title: TeleportOpenSSHServerV2 description: Provides a comprehensive list of fields in the TeleportOpenSSHServerV2 resource available through the Teleport Kubernetes operator -tocDepth: 3 --- {/*Auto-generated file. Do not edit.*/} @@ -33,6 +32,8 @@ resource, which you can apply after installing the Teleport Kubernetes operator. 
|peer_addr|string|PeerAddr is the address a proxy server is reachable at by its peer proxies.| |proxy_ids|[]string|ProxyIDs is a list of proxy IDs this server is expected to be connected to.| |public_addrs|[]string|PublicAddrs is a list of public addresses where this server can be reached.| +|relay_group|string|the name of the Relay group that the server is connected to| +|relay_ids|[]string|the list of Relay host IDs that the server is connected to| |rotation|[object](#specrotation)|Rotation specifies server rotation| |use_tunnel|boolean|UseTunnel indicates that connections to this server should occur over a reverse tunnel.| |version|string|TeleportVersion is the teleport version that the server is running on| diff --git a/docs/pages/reference/operator-resources/resources-teleport-dev-provisiontokens.mdx b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-provisiontokens.mdx similarity index 93% rename from docs/pages/reference/operator-resources/resources-teleport-dev-provisiontokens.mdx rename to docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-provisiontokens.mdx index 82d12d72fa590..d2a82ce98f06b 100644 --- a/docs/pages/reference/operator-resources/resources-teleport-dev-provisiontokens.mdx +++ b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-provisiontokens.mdx @@ -1,7 +1,6 @@ --- title: TeleportProvisionToken description: Provides a comprehensive list of fields in the TeleportProvisionToken resource available through the Teleport Kubernetes operator -tocDepth: 3 --- {/*Auto-generated file. Do not edit.*/} @@ -34,6 +33,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. 
|bot_name|string|BotName is the name of the bot this token grants access to, if any| |bound_keypair|[object](#specbound_keypair)|BoundKeypair allows the configuration of options specific to the "bound_keypair" join method.| |circleci|[object](#speccircleci)|CircleCI allows the configuration of options specific to the "circleci" join method.| +|env0|[object](#specenv0)|Env0 allows the configuration of options specific to the "env0" join method.| |gcp|[object](#specgcp)|GCP allows the configuration of options specific to the "gcp" join method.| |github|[object](#specgithub)|GitHub allows the configuration of options specific to the "github" join method.| |gitlab|[object](#specgitlab)|GitLab allows the configuration of options specific to the "gitlab" join method.| @@ -143,6 +143,28 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |context_id|string|| |project_id|string|| +### spec.env0 + +|Field|Type|Description| +|---|---|---| +|allow|[][object](#specenv0allow-items)|Allow is a list of Rules, jobs using this token must match at least one allow rule to use this token.| + +### spec.env0.allow items + +|Field|Type|Description| +|---|---|---| +|deployer_email|string|| +|deployment_type|string|| +|env0_tag|string|| +|environment_id|string|| +|environment_name|string|| +|organization_id|string|| +|project_id|string|| +|project_name|string|| +|template_id|string|| +|template_name|string|| +|workspace_name|string|| + ### spec.gcp |Field|Type|Description| @@ -213,8 +235,9 @@ resource, which you can apply after installing the Teleport Kubernetes operator. 
|Field|Type|Description| |---|---|---| |allow|[][object](#speckubernetesallow-items)|Allow is a list of Rules, nodes using this token must match one allow rule to use this token.| +|oidc|[object](#speckubernetesoidc)|OIDCConfig configures the `oidc` type.| +|static_jwks|[object](#speckubernetesstatic_jwks)|StaticJWKS is the configuration specific to the `static_jwks` type.| -|type|string|Type controls which behavior should be used for validating the Kubernetes Service Account token. Support values: - `in_cluster` - `static_jwks` If unset, this defaults to `in_cluster`.| +|type|string|Type controls which behavior should be used for validating the Kubernetes Service Account token. Supported values: - `in_cluster` - `static_jwks` - `oidc` If unset, this defaults to `in_cluster`.| ### spec.kubernetes.allow items @@ -222,6 +245,13 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |---|---|---| |service_account|string|| +### spec.kubernetes.oidc + +|Field|Type|Description| +|---|---|---| +|insecure_allow_http_issuer|boolean|| +|issuer|string|| + ### spec.kubernetes.static_jwks |Field|Type|Description| @@ -238,6 +268,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |Field|Type|Description| |---|---|---| +|instances|[]string|| |parent_compartments|[]string|| |regions|[]string|| |tenancy|string|| @@ -247,6 +278,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |Field|Type|Description| |---|---|---| |allow|[][object](#specspaceliftallow-items)|Allow is a list of Rules, nodes using this token must match one allow rule to use this token.| +|enable_glob_matching|boolean|EnableGlobMatching enables glob-style matching for the space_id and caller_id fields in the rules.| +|hostname|string|Hostname is the hostname of the Spacelift tenant that tokens will originate from.
E.g `example.app.spacelift.io`| ### spec.spacelift.allow items diff --git a/docs/pages/reference/operator-resources/resources-teleport-dev-roles.mdx b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-roles.mdx similarity index 96% rename from docs/pages/reference/operator-resources/resources-teleport-dev-roles.mdx rename to docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-roles.mdx index 128657fc8d12e..4320678c237d0 100644 --- a/docs/pages/reference/operator-resources/resources-teleport-dev-roles.mdx +++ b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-roles.mdx @@ -1,7 +1,6 @@ --- title: TeleportRole description: Provides a comprehensive list of fields in the TeleportRole resource available through the Teleport Kubernetes operator -tocDepth: 3 --- {/*Auto-generated file. Do not edit.*/} @@ -64,6 +63,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |kubernetes_resources|[][object](#specallowkubernetes_resources-items)|KubernetesResources is the Kubernetes Resources this Role grants access to.| |kubernetes_users|[]string|KubeUsers is an optional kubernetes users to impersonate| |logins|[]string|Logins is a list of *nix system logins.| +|mcp|[object](#specallowmcp)|MCPPermissions defines MCP servers related permissions.| |node_labels|object|NodeLabels is a map of node labels (used to dynamically grant access to nodes).| |node_labels_expression|string|NodeLabelsExpression is a predicate expression used to allow/deny access to SSH nodes.| |request|[object](#specallowrequest)|| @@ -124,6 +124,12 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |namespace|string|Namespace is the resource namespace. 
It supports wildcards.| |verbs|[]string|Verbs are the allowed Kubernetes verbs for the following resource.| +### spec.allow.mcp + +|Field|Type|Description| +|---|---|---| +|tools|[]string|Tools defines the list of tools allowed or denied for this role. Each entry can be a literal string, a glob pattern (e.g. "prefix_*"), or a regular expression (must start with '^' and end with '$'). If the list is empty, no tools are allowed.| + ### spec.allow.request |Field|Type|Description| @@ -158,6 +164,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |Field|Type|Description| |---|---|---| |mode|string|Mode can be either "required" or "optional". Empty string is treated as "optional". If a role has the request reason mode set to "required", then reason is required for all Access Requests requesting roles or resources allowed by this role. It applies only to users who have this role assigned.| +|prompt|string|Prompt is a custom message prompted to the user for the requested roles or resources searchable as other roles. This is only applied to the requested roles and resources specifying the prompt.| ### spec.allow.request.thresholds items @@ -247,6 +254,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. 
|kubernetes_resources|[][object](#specdenykubernetes_resources-items)|KubernetesResources is the Kubernetes Resources this Role grants access to.| |kubernetes_users|[]string|KubeUsers is an optional kubernetes users to impersonate| |logins|[]string|Logins is a list of *nix system logins.| +|mcp|[object](#specdenymcp)|MCPPermissions defines MCP servers related permissions.| |node_labels|object|NodeLabels is a map of node labels (used to dynamically grant access to nodes).| |node_labels_expression|string|NodeLabelsExpression is a predicate expression used to allow/deny access to SSH nodes.| |request|[object](#specdenyrequest)|| @@ -307,6 +315,12 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |namespace|string|Namespace is the resource namespace. It supports wildcards.| |verbs|[]string|Verbs are the allowed Kubernetes verbs for the following resource.| +### spec.deny.mcp + +|Field|Type|Description| +|---|---|---| +|tools|[]string|Tools defines the list of tools allowed or denied for this role. Each entry can be a literal string, a glob pattern (e.g. "prefix_*"), or a regular expression (must start with '^' and end with '$'). If the list is empty, no tools are allowed.| + ### spec.deny.request |Field|Type|Description| @@ -341,6 +355,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |Field|Type|Description| |---|---|---| |mode|string|Mode can be either "required" or "optional". Empty string is treated as "optional". If a role has the request reason mode set to "required", then reason is required for all Access Requests requesting roles or resources allowed by this role. It applies only to users who have this role assigned.| +|prompt|string|Prompt is a custom message prompted to the user for the requested roles or resources searchable as other roles. 
This is only applied to the requested roles and resources specifying the prompt.| ### spec.deny.request.thresholds items @@ -533,6 +548,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |kubernetes_resources|[][object](#specallowkubernetes_resources-items)|KubernetesResources is the Kubernetes Resources this Role grants access to.| |kubernetes_users|[]string|KubeUsers is an optional kubernetes users to impersonate| |logins|[]string|Logins is a list of *nix system logins.| +|mcp|[object](#specallowmcp)|MCPPermissions defines MCP servers related permissions.| |node_labels|object|NodeLabels is a map of node labels (used to dynamically grant access to nodes).| |node_labels_expression|string|NodeLabelsExpression is a predicate expression used to allow/deny access to SSH nodes.| |request|[object](#specallowrequest)|| @@ -593,6 +609,12 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |namespace|string|Namespace is the resource namespace. It supports wildcards.| |verbs|[]string|Verbs are the allowed Kubernetes verbs for the following resource.| +### spec.allow.mcp + +|Field|Type|Description| +|---|---|---| +|tools|[]string|Tools defines the list of tools allowed or denied for this role. Each entry can be a literal string, a glob pattern (e.g. "prefix_*"), or a regular expression (must start with '^' and end with '$'). If the list is empty, no tools are allowed.| + ### spec.allow.request |Field|Type|Description| @@ -627,6 +649,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |Field|Type|Description| |---|---|---| |mode|string|Mode can be either "required" or "optional". Empty string is treated as "optional". If a role has the request reason mode set to "required", then reason is required for all Access Requests requesting roles or resources allowed by this role. 
It applies only to users who have this role assigned.| +|prompt|string|Prompt is a custom message prompted to the user for the requested roles or resources searchable as other roles. This is only applied to the requested roles and resources specifying the prompt.| ### spec.allow.request.thresholds items @@ -716,6 +739,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |kubernetes_resources|[][object](#specdenykubernetes_resources-items)|KubernetesResources is the Kubernetes Resources this Role grants access to.| |kubernetes_users|[]string|KubeUsers is an optional kubernetes users to impersonate| |logins|[]string|Logins is a list of *nix system logins.| +|mcp|[object](#specdenymcp)|MCPPermissions defines MCP servers related permissions.| |node_labels|object|NodeLabels is a map of node labels (used to dynamically grant access to nodes).| |node_labels_expression|string|NodeLabelsExpression is a predicate expression used to allow/deny access to SSH nodes.| |request|[object](#specdenyrequest)|| @@ -776,6 +800,12 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |namespace|string|Namespace is the resource namespace. It supports wildcards.| |verbs|[]string|Verbs are the allowed Kubernetes verbs for the following resource.| +### spec.deny.mcp + +|Field|Type|Description| +|---|---|---| +|tools|[]string|Tools defines the list of tools allowed or denied for this role. Each entry can be a literal string, a glob pattern (e.g. "prefix_*"), or a regular expression (must start with '^' and end with '$'). If the list is empty, no tools are allowed.| + ### spec.deny.request |Field|Type|Description| @@ -810,6 +840,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |Field|Type|Description| |---|---|---| |mode|string|Mode can be either "required" or "optional". Empty string is treated as "optional". 
If a role has the request reason mode set to "required", then reason is required for all Access Requests requesting roles or resources allowed by this role. It applies only to users who have this role assigned.| +|prompt|string|Prompt is a custom message prompted to the user for the requested roles or resources searchable as other roles. This is only applied to the requested roles and resources specifying the prompt.| ### spec.deny.request.thresholds items diff --git a/docs/pages/reference/operator-resources/resources-teleport-dev-rolesv6.mdx b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-rolesv6.mdx similarity index 96% rename from docs/pages/reference/operator-resources/resources-teleport-dev-rolesv6.mdx rename to docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-rolesv6.mdx index 06cff4ba93001..90cb1e128fb28 100644 --- a/docs/pages/reference/operator-resources/resources-teleport-dev-rolesv6.mdx +++ b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-rolesv6.mdx @@ -1,7 +1,6 @@ --- title: TeleportRoleV6 description: Provides a comprehensive list of fields in the TeleportRoleV6 resource available through the Teleport Kubernetes operator -tocDepth: 3 --- {/*Auto-generated file. Do not edit.*/} @@ -64,6 +63,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. 
|kubernetes_resources|[][object](#specallowkubernetes_resources-items)|KubernetesResources is the Kubernetes Resources this Role grants access to.| |kubernetes_users|[]string|KubeUsers is an optional kubernetes users to impersonate| |logins|[]string|Logins is a list of *nix system logins.| +|mcp|[object](#specallowmcp)|MCPPermissions defines MCP servers related permissions.| |node_labels|object|NodeLabels is a map of node labels (used to dynamically grant access to nodes).| |node_labels_expression|string|NodeLabelsExpression is a predicate expression used to allow/deny access to SSH nodes.| |request|[object](#specallowrequest)|| @@ -124,6 +124,12 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |namespace|string|Namespace is the resource namespace. It supports wildcards.| |verbs|[]string|Verbs are the allowed Kubernetes verbs for the following resource.| +### spec.allow.mcp + +|Field|Type|Description| +|---|---|---| +|tools|[]string|Tools defines the list of tools allowed or denied for this role. Each entry can be a literal string, a glob pattern (e.g. "prefix_*"), or a regular expression (must start with '^' and end with '$'). If the list is empty, no tools are allowed.| + ### spec.allow.request |Field|Type|Description| @@ -158,6 +164,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |Field|Type|Description| |---|---|---| |mode|string|Mode can be either "required" or "optional". Empty string is treated as "optional". If a role has the request reason mode set to "required", then reason is required for all Access Requests requesting roles or resources allowed by this role. It applies only to users who have this role assigned.| +|prompt|string|Prompt is a custom message prompted to the user for the requested roles or resources searchable as other roles. 
This is only applied to the requested roles and resources specifying the prompt.| ### spec.allow.request.thresholds items @@ -247,6 +254,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |kubernetes_resources|[][object](#specdenykubernetes_resources-items)|KubernetesResources is the Kubernetes Resources this Role grants access to.| |kubernetes_users|[]string|KubeUsers is an optional kubernetes users to impersonate| |logins|[]string|Logins is a list of *nix system logins.| +|mcp|[object](#specdenymcp)|MCPPermissions defines MCP servers related permissions.| |node_labels|object|NodeLabels is a map of node labels (used to dynamically grant access to nodes).| |node_labels_expression|string|NodeLabelsExpression is a predicate expression used to allow/deny access to SSH nodes.| |request|[object](#specdenyrequest)|| @@ -307,6 +315,12 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |namespace|string|Namespace is the resource namespace. It supports wildcards.| |verbs|[]string|Verbs are the allowed Kubernetes verbs for the following resource.| +### spec.deny.mcp + +|Field|Type|Description| +|---|---|---| +|tools|[]string|Tools defines the list of tools allowed or denied for this role. Each entry can be a literal string, a glob pattern (e.g. "prefix_*"), or a regular expression (must start with '^' and end with '$'). If the list is empty, no tools are allowed.| + ### spec.deny.request |Field|Type|Description| @@ -341,6 +355,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |Field|Type|Description| |---|---|---| |mode|string|Mode can be either "required" or "optional". Empty string is treated as "optional". If a role has the request reason mode set to "required", then reason is required for all Access Requests requesting roles or resources allowed by this role. 
It applies only to users who have this role assigned.| +|prompt|string|Prompt is a custom message prompted to the user for the requested roles or resources searchable as other roles. This is only applied to the requested roles and resources specifying the prompt.| ### spec.deny.request.thresholds items diff --git a/docs/pages/reference/operator-resources/resources-teleport-dev-rolesv7.mdx b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-rolesv7.mdx similarity index 96% rename from docs/pages/reference/operator-resources/resources-teleport-dev-rolesv7.mdx rename to docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-rolesv7.mdx index 140930e3b82b2..d1d8f54e0cad7 100644 --- a/docs/pages/reference/operator-resources/resources-teleport-dev-rolesv7.mdx +++ b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-rolesv7.mdx @@ -1,7 +1,6 @@ --- title: TeleportRoleV7 description: Provides a comprehensive list of fields in the TeleportRoleV7 resource available through the Teleport Kubernetes operator -tocDepth: 3 --- {/*Auto-generated file. Do not edit.*/} @@ -64,6 +63,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. 
|kubernetes_resources|[][object](#specallowkubernetes_resources-items)|KubernetesResources is the Kubernetes Resources this Role grants access to.| |kubernetes_users|[]string|KubeUsers is an optional kubernetes users to impersonate| |logins|[]string|Logins is a list of *nix system logins.| +|mcp|[object](#specallowmcp)|MCPPermissions defines MCP servers related permissions.| |node_labels|object|NodeLabels is a map of node labels (used to dynamically grant access to nodes).| |node_labels_expression|string|NodeLabelsExpression is a predicate expression used to allow/deny access to SSH nodes.| |request|[object](#specallowrequest)|| @@ -124,6 +124,12 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |namespace|string|Namespace is the resource namespace. It supports wildcards.| |verbs|[]string|Verbs are the allowed Kubernetes verbs for the following resource.| +### spec.allow.mcp + +|Field|Type|Description| +|---|---|---| +|tools|[]string|Tools defines the list of tools allowed or denied for this role. Each entry can be a literal string, a glob pattern (e.g. "prefix_*"), or a regular expression (must start with '^' and end with '$'). If the list is empty, no tools are allowed.| + ### spec.allow.request |Field|Type|Description| @@ -158,6 +164,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |Field|Type|Description| |---|---|---| |mode|string|Mode can be either "required" or "optional". Empty string is treated as "optional". If a role has the request reason mode set to "required", then reason is required for all Access Requests requesting roles or resources allowed by this role. It applies only to users who have this role assigned.| +|prompt|string|Prompt is a custom message prompted to the user for the requested roles or resources searchable as other roles. 
This is only applied to the requested roles and resources specifying the prompt.| ### spec.allow.request.thresholds items @@ -247,6 +254,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |kubernetes_resources|[][object](#specdenykubernetes_resources-items)|KubernetesResources is the Kubernetes Resources this Role grants access to.| |kubernetes_users|[]string|KubeUsers is an optional kubernetes users to impersonate| |logins|[]string|Logins is a list of *nix system logins.| +|mcp|[object](#specdenymcp)|MCPPermissions defines MCP servers related permissions.| |node_labels|object|NodeLabels is a map of node labels (used to dynamically grant access to nodes).| |node_labels_expression|string|NodeLabelsExpression is a predicate expression used to allow/deny access to SSH nodes.| |request|[object](#specdenyrequest)|| @@ -307,6 +315,12 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |namespace|string|Namespace is the resource namespace. It supports wildcards.| |verbs|[]string|Verbs are the allowed Kubernetes verbs for the following resource.| +### spec.deny.mcp + +|Field|Type|Description| +|---|---|---| +|tools|[]string|Tools defines the list of tools allowed or denied for this role. Each entry can be a literal string, a glob pattern (e.g. "prefix_*"), or a regular expression (must start with '^' and end with '$'). If the list is empty, no tools are allowed.| + ### spec.deny.request |Field|Type|Description| @@ -341,6 +355,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |Field|Type|Description| |---|---|---| |mode|string|Mode can be either "required" or "optional". Empty string is treated as "optional". If a role has the request reason mode set to "required", then reason is required for all Access Requests requesting roles or resources allowed by this role. 
It applies only to users who have this role assigned.| +|prompt|string|Prompt is a custom message prompted to the user for the requested roles or resources searchable as other roles. This is only applied to the requested roles and resources specifying the prompt.| ### spec.deny.request.thresholds items diff --git a/docs/pages/reference/operator-resources/resources-teleport-dev-rolesv8.mdx b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-rolesv8.mdx similarity index 96% rename from docs/pages/reference/operator-resources/resources-teleport-dev-rolesv8.mdx rename to docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-rolesv8.mdx index fc64854b6e677..ca68d99b68531 100644 --- a/docs/pages/reference/operator-resources/resources-teleport-dev-rolesv8.mdx +++ b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-rolesv8.mdx @@ -1,7 +1,6 @@ --- title: TeleportRoleV8 description: Provides a comprehensive list of fields in the TeleportRoleV8 resource available through the Teleport Kubernetes operator -tocDepth: 3 --- {/*Auto-generated file. Do not edit.*/} @@ -64,6 +63,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. 
|kubernetes_resources|[][object](#specallowkubernetes_resources-items)|KubernetesResources is the Kubernetes Resources this Role grants access to.| |kubernetes_users|[]string|KubeUsers is an optional kubernetes users to impersonate| |logins|[]string|Logins is a list of *nix system logins.| +|mcp|[object](#specallowmcp)|MCPPermissions defines MCP servers related permissions.| |node_labels|object|NodeLabels is a map of node labels (used to dynamically grant access to nodes).| |node_labels_expression|string|NodeLabelsExpression is a predicate expression used to allow/deny access to SSH nodes.| |request|[object](#specallowrequest)|| @@ -124,6 +124,12 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |namespace|string|Namespace is the resource namespace. It supports wildcards.| |verbs|[]string|Verbs are the allowed Kubernetes verbs for the following resource.| +### spec.allow.mcp + +|Field|Type|Description| +|---|---|---| +|tools|[]string|Tools defines the list of tools allowed or denied for this role. Each entry can be a literal string, a glob pattern (e.g. "prefix_*"), or a regular expression (must start with '^' and end with '$'). If the list is empty, no tools are allowed.| + ### spec.allow.request |Field|Type|Description| @@ -158,6 +164,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |Field|Type|Description| |---|---|---| |mode|string|Mode can be either "required" or "optional". Empty string is treated as "optional". If a role has the request reason mode set to "required", then reason is required for all Access Requests requesting roles or resources allowed by this role. It applies only to users who have this role assigned.| +|prompt|string|Prompt is a custom message prompted to the user for the requested roles or resources searchable as other roles. 
This is only applied to the requested roles and resources specifying the prompt.| ### spec.allow.request.thresholds items @@ -247,6 +254,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |kubernetes_resources|[][object](#specdenykubernetes_resources-items)|KubernetesResources is the Kubernetes Resources this Role grants access to.| |kubernetes_users|[]string|KubeUsers is an optional kubernetes users to impersonate| |logins|[]string|Logins is a list of *nix system logins.| +|mcp|[object](#specdenymcp)|MCPPermissions defines MCP servers related permissions.| |node_labels|object|NodeLabels is a map of node labels (used to dynamically grant access to nodes).| |node_labels_expression|string|NodeLabelsExpression is a predicate expression used to allow/deny access to SSH nodes.| |request|[object](#specdenyrequest)|| @@ -307,6 +315,12 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |namespace|string|Namespace is the resource namespace. It supports wildcards.| |verbs|[]string|Verbs are the allowed Kubernetes verbs for the following resource.| +### spec.deny.mcp + +|Field|Type|Description| +|---|---|---| +|tools|[]string|Tools defines the list of tools allowed or denied for this role. Each entry can be a literal string, a glob pattern (e.g. "prefix_*"), or a regular expression (must start with '^' and end with '$'). If the list is empty, no tools are allowed.| + ### spec.deny.request |Field|Type|Description| @@ -341,6 +355,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |Field|Type|Description| |---|---|---| |mode|string|Mode can be either "required" or "optional". Empty string is treated as "optional". If a role has the request reason mode set to "required", then reason is required for all Access Requests requesting roles or resources allowed by this role. 
It applies only to users who have this role assigned.| +|prompt|string|Prompt is a custom message prompted to the user for the requested roles or resources searchable as other roles. This is only applied to the requested roles and resources specifying the prompt.| ### spec.deny.request.thresholds items diff --git a/docs/pages/reference/operator-resources/resources-teleport-dev-samlconnectors.mdx b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-samlconnectors.mdx similarity index 94% rename from docs/pages/reference/operator-resources/resources-teleport-dev-samlconnectors.mdx rename to docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-samlconnectors.mdx index 699416850551b..b02c05cbd230e 100644 --- a/docs/pages/reference/operator-resources/resources-teleport-dev-samlconnectors.mdx +++ b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-samlconnectors.mdx @@ -1,7 +1,6 @@ --- title: TeleportSAMLConnector description: Provides a comprehensive list of fields in the TeleportSAMLConnector resource available through the Teleport Kubernetes operator -tocDepth: 3 --- {/*Auto-generated file. Do not edit.*/} @@ -37,6 +36,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |entity_descriptor|string|EntityDescriptor is XML with descriptor. It can be used to supply configuration parameters in one XML file rather than supplying them in the individual elements.| |entity_descriptor_url|string|EntityDescriptorURL is a URL that supplies a configuration XML.| |force_authn|string or integer|ForceAuthn specified whether re-authentication should be forced on login. UNSPECIFIED is treated as NO. Can be either the string or the integer representation of each option.| +|include_subject|boolean|IncludeSubject is a flag that indicates whether the Subject element is included in the SAML authentication request. Defaults to false. 
Note: Some IdPs will reject requests that contain a Subject.| |issuer|string|Issuer is the identity provider issuer.| |mfa|[object](#specmfa)|MFASettings contains settings to enable SSO MFA checks through this auth connector.| |preferred_request_binding|string|PreferredRequestBinding is a preferred SAML request binding method. Value must be either "http-post" or "http-redirect". In general, the SAML identity provider lists request binding methods it supports. And the SAML service provider uses one of the IdP supported request binding method that it prefers. But we never honored request binding value provided by the IdP and always used http-redirect binding as a default. Setting up PreferredRequestBinding value lets us preserve existing auth connector behavior and only use http-post binding if it is explicitly configured.| @@ -45,6 +45,7 @@ resource, which you can apply after installing the Teleport Kubernetes operator. |signing_key_pair|[object](#specsigning_key_pair)|SigningKeyPair is an x509 key pair used to sign AuthnRequest.| |single_logout_url|string|SingleLogoutURL is the SAML Single log-out URL to initiate SAML SLO (single log-out). 
If this is not provided, SLO is disabled.| |sso|string|SSO is the URL of the identity provider's SSO service.| +|user_matchers|[]string|UserMatchers is a set of glob patterns to narrow down which username(s) this auth connector should match for identifier-first login.| ### spec.assertion_key_pair diff --git a/docs/pages/reference/operator-resources/resources-teleport-dev-trustedclustersv2.mdx b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-trustedclustersv2.mdx similarity index 99% rename from docs/pages/reference/operator-resources/resources-teleport-dev-trustedclustersv2.mdx rename to docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-trustedclustersv2.mdx index 8728b51b2ab5c..2d26501c8abad 100644 --- a/docs/pages/reference/operator-resources/resources-teleport-dev-trustedclustersv2.mdx +++ b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-trustedclustersv2.mdx @@ -1,7 +1,6 @@ --- title: TeleportTrustedClusterV2 description: Provides a comprehensive list of fields in the TeleportTrustedClusterV2 resource available through the Teleport Kubernetes operator -tocDepth: 3 --- {/*Auto-generated file. 
Do not edit.*/} diff --git a/docs/pages/reference/operator-resources/resources-teleport-dev-users.mdx b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-users.mdx similarity index 99% rename from docs/pages/reference/operator-resources/resources-teleport-dev-users.mdx rename to docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-users.mdx index f4559d680a7ee..b3f3137837f80 100644 --- a/docs/pages/reference/operator-resources/resources-teleport-dev-users.mdx +++ b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-users.mdx @@ -1,7 +1,6 @@ --- title: TeleportUser description: Provides a comprehensive list of fields in the TeleportUser resource available through the Teleport Kubernetes operator -tocDepth: 3 --- {/*Auto-generated file. Do not edit.*/} diff --git a/docs/pages/reference/operator-resources/resources-teleport-dev-workloadidentitiesv1.mdx b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-workloadidentitiesv1.mdx similarity index 99% rename from docs/pages/reference/operator-resources/resources-teleport-dev-workloadidentitiesv1.mdx rename to docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-workloadidentitiesv1.mdx index e5854fb054844..f7db4d17e26fe 100644 --- a/docs/pages/reference/operator-resources/resources-teleport-dev-workloadidentitiesv1.mdx +++ b/docs/pages/reference/infrastructure-as-code/operator-resources/resources-teleport-dev-workloadidentitiesv1.mdx @@ -1,7 +1,6 @@ --- title: TeleportWorkloadIdentityV1 description: Provides a comprehensive list of fields in the TeleportWorkloadIdentityV1 resource available through the Teleport Kubernetes operator -tocDepth: 3 --- {/*Auto-generated file. 
Do not edit.*/} diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/access-monitoring-rule.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/access-monitoring-rule.mdx new file mode 100644 index 0000000000000..10279cbd63b7a --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/access-monitoring-rule.mdx @@ -0,0 +1,82 @@ +--- +title: Access Monitoring Rule Resource Reference +sidebar_label: Access Monitoring Rule +description: Provides a comprehensive list of fields for the Teleport access monitoring rule resource. +--- + +Access monitoring rules allow cluster administrators to monitor Access Requests +and apply notification routing and automatic review rules. + +```yaml +kind: access_monitoring_rule +version: v1 +metadata: + name: example_rule +spec: + # subjects specifies the kinds of subjects to monitor. + # Possible values: "access_request" + subjects: + - access_request + + # condition specifies the conditions that should be met to apply the access + # monitoring rule. The condition accepts a predicate expression which must + # evaluate to a boolean value. + # + # This condition would be satisfied if: + # - the `access` role is requested + # - all requested resources have the label `env: dev` + # - the requesting user has the `team: dev` user trait. + condition: |- + contains_all(set("access"), access_request.spec.roles) && + access_request.spec.resource_labels_intersection["env"].contains("dev") && + contains_any(user.traits["team"], set("dev")) + + # Optional: desired_state specifies the desired reconciled state of the access + # request after the rule is applied. This field must be set to "reviewed" to + # enable automatic reviews. + # Possible values: "reviewed". + desired_state: reviewed + + # Optional: automatic_review configures the automatic review rules.
+ automatic_review: + # integration specifies the name of an external integration source used to + # help determine if a requesting user satisfies the rule conditions. + # Use "builtin" to specify no external integration. + # Possible values: "builtin" + integration: builtin + + # decision determines whether to automatically approve or deny the + # access request. + # Possible values: "APPROVED" or "DENIED" + decision: APPROVED + + # Optional: notification configures notification routing rules. + notification: + # name specifies the external integration to which the notifications should + # be routed. + # Possible values: "email", "discord", "slack", "pagerduty", "jira", + # "mattermost", "msteams", "opsgenie", "servicenow", "datadog" + name: email + + # recipients specifies the list of recipients to be notified when the + # access monitoring rule is applied. + recipients: + - example@goteleport.com +``` + +Accepted fields within the condition predicate expression: +| Field | Description | +| ------------------------------------------------ | ------------------------------------------------------------------- | +| access_request.spec.roles | The set of roles requested. | +| access_request.spec.suggested_reviewers | The set of reviewers specified in the request. | +| access_request.spec.system_annotations | A map of system annotations on the request. | +| access_request.spec.user | The requesting user. | +| access_request.spec.request_reason | The request reason. | +| access_request.spec.creation_time | The creation time of the request. | +| access_request.spec.expiry | The expiry time of the request. | +| access_request.spec.resource_labels_intersection | A map containing the intersection of all requested resource labels. | +| access_request.spec.resource_labels_union | A map containing the union of all requested resource labels. | +| user.traits | A map of traits of the requesting user. 
| + +See [Predicate Language](../../access-controls/predicate-language.mdx) for more details. + diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/auto-update-agent-report.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/auto-update-agent-report.mdx new file mode 100644 index 0000000000000..0824eea124d98 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/auto-update-agent-report.mdx @@ -0,0 +1,14 @@ +--- +title: Auto-Update Agent Report Resource Reference +sidebar_label: Auto-update Agent Report +description: Provides a comprehensive list of fields for the Teleport auto-update agent report resource. +--- + +The auto-update agent report is an internal resource used by the Auth Service to +track which agent is running which version and decide if the update can +progress. + +This resource should not be edited by humans. + +(!docs/pages/includes/reference/resources/autoupdate_agent_report.mdx!) + diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/auto-update-agent-rollout.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/auto-update-agent-rollout.mdx new file mode 100644 index 0000000000000..1d00d54b7c979 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/auto-update-agent-rollout.mdx @@ -0,0 +1,15 @@ +--- +title: Auto-Update Agent Rollout Resource Reference +sidebar_label: Auto-Update Agent Rollout +description: Provides a comprehensive list of fields for the Teleport auto-update agent rollout resource. +--- + +The auto-update agent rollout resource allows cluster administrators to view the +current agent rollout for Teleport Agent Managed Updates (v2). + +This resource should not be edited by humans. + +(!docs/pages/includes/reference/resources/autoupdate_agent_rollout.mdx!) + +See [Teleport Agent Managed Updates](../../../upgrading/agent-managed-updates/agent-managed-updates.mdx) for more details. 
+ diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/auto-update-config.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/auto-update-config.mdx new file mode 100644 index 0000000000000..58c0199692e0e --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/auto-update-config.mdx @@ -0,0 +1,22 @@ +--- +title: Auto-Update Config Resource Reference +sidebar_label: Auto-Update Config +description: Provides a comprehensive list of fields for the Teleport auto-update config resource. +--- + +The auto-update config resource contains configuration options for Teleport +Agent and client tool Managed Updates (v2). + + +The `autoupdate_config` and `autoupdate_version` resources configure Managed +Updates v2. + +The `cluster_maintenance_config` resource configures only Managed Updates v1, +which is currently supported but will be deprecated in a future version. + + +(!docs/pages/includes/reference/resources/autoupdate_config.mdx!) + +See [Teleport Client Tool Managed Updates](../../../upgrading/client-tools-managed-updates.mdx) and +[Teleport Agent Managed Updates](../../../upgrading/agent-managed-updates/agent-managed-updates.mdx) for more details. + diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/auto-update-version.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/auto-update-version.mdx new file mode 100644 index 0000000000000..9af2375b5e877 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/auto-update-version.mdx @@ -0,0 +1,25 @@ +--- +title: Auto-Update Version Resource Reference +sidebar_label: Auto-Update Version +description: Provides a comprehensive list of fields for the Teleport auto-update version resource. +--- + +The auto-update version resource allows cluster administrators to manage +versions used for agent and client tool Managed Updates. 
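+
+For illustration only, an `autoupdate_version` resource might look like the
+following sketch (the version numbers shown are placeholders, not
+recommendations):
+
+```yaml
+kind: autoupdate_version
+spec:
+  # tools sets the version that client tools are updated to.
+  tools:
+    target_version: 17.2.1
+  # agents controls the versions used for agent Managed Updates.
+  agents:
+    # start_version is the version agents start the rollout from.
+    start_version: 17.2.0
+    # target_version is the version agents are updated to.
+    target_version: 17.2.1
+    # schedule is the rollout schedule to follow.
+    schedule: regular
+    # mode enables, disables, or suspends agent updates.
+    mode: enabled
+```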
+ + +The `autoupdate_config` and `autoupdate_version` resources configure Managed +Updates v2. + +The `cluster_maintenance_config` resource configures only Managed Updates v1, +which is currently supported but will be deprecated in a future version. + + +This resource is not editable for cloud-managed Teleport Enterprise to ensure that all of +your clients receive security patches and remain compatible with your cluster. + +(!docs/pages/includes/reference/resources/autoupdate_version.mdx!) + +See [Teleport Client Tool Managed Updates](../../../upgrading/client-tools-managed-updates.mdx) and +[Teleport Agent Managed Updates](../../../upgrading/agent-managed-updates/agent-managed-updates.mdx) for more details. + diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/bot.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/bot.mdx new file mode 100644 index 0000000000000..d4eacdc8482d5 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/bot.mdx @@ -0,0 +1,13 @@ +--- +title: Bot Resource Reference +sidebar_label: Bot +description: Provides a comprehensive list of fields for the Teleport bot resource. +--- + +The bot resource defines a Machine ID Bot identity and its access. + +Find out more in the +[Machine ID configuration reference](../../machine-workload-identity/configuration.mdx). + +(!docs/pages/includes/machine-id/bot-spec.mdx!) 
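+
+As a sketch, a minimal bot resource might look like the following (the name,
+roles, and traits are placeholders):
+
+```yaml
+kind: bot
+version: v1
+metadata:
+  name: example-bot
+spec:
+  # roles lists the roles the Bot is allowed to assume.
+  roles:
+    - access
+  # traits sets user traits on the Bot's internal user.
+  traits:
+    - name: logins
+      values:
+        - ubuntu
+```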
+ diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/cluster-auth-preferences.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/cluster-auth-preferences.mdx new file mode 100644 index 0000000000000..bd63b744cdc50 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/cluster-auth-preferences.mdx @@ -0,0 +1,109 @@ +--- +title: Cluster Auth Preferences Resource Reference +sidebar_label: Cluster Auth Preferences +description: Provides a comprehensive list of fields for the Teleport cluster auth preferences resource. +--- + +The cluster auth preferences resource contains global cluster configuration +options for authentication. + +```yaml +metadata: + name: cluster-auth-preference +spec: + # Sets the list of allowed second factors for the cluster. + # Possible values: "otp", "webauthn", and "sso". + # Defaults to ["otp"]. + second_factors: ["webauthn", "otp"] + + # second_factor is the legacy setting for the allowed second factor. + # Possible values: "on", "otp", and "webauthn". + # If "on" is set, all MFA protocols are supported. + # + # Prefer setting second_factors instead. + #second_factor: "webauthn" + + # The name of the OIDC or SAML connector. If this is not set, the first connector in the backend is used. + connector_name: "" + + # webauthn holds the settings for server-side WebAuthn support. + webauthn: + # rp_id is the ID of the Relying Party. + # It should be set to the domain name of the Teleport installation. + # + # IMPORTANT: rp_id must never change in the lifetime of the cluster, because + # it's recorded in the registration data on the WebAuthn device. If the + # rp_id changes, all existing WebAuthn key registrations will become invalid + # and all users who use WebAuthn as the second factor will need to + # re-register. + rp_id: teleport.example.com + # Allow list of device attestation CAs in PEM format. 
+ # If present, only devices whose attestation certificates match the + # certificates specified here may be registered (existing registrations are + # unchanged). + # If supplied in conjunction with `attestation_denied_cas`, then both + # conditions need to be true for registration to be allowed (the device + # MUST match an allowed CA and MUST NOT match a denied CA). + # By default, all devices are allowed. + attestation_allowed_cas: [] + # Deny list of device attestation CAs in PEM format. + # If present, only devices whose attestation certificates don't match the + # certificates specified here may be registered (existing registrations are + # unchanged). + attestation_denied_cas: [] + + # Enforce per-session MFA or PIV-hardware key restrictions on user login sessions. + # Possible values: true, false, "hardware_key", "hardware_key_touch". + # Defaults to false. + require_session_mfa: false + + # Sets whether connections with expired client certificates will be disconnected. + disconnect_expired_cert: false + + # Sets whether headless authentication is allowed. + # Headless authentication requires WebAuthn. + # Defaults to true if webauthn is configured. + allow_headless: false + + # Sets whether local auth is enabled alongside any other authentication + # type. + allow_local_auth: true + + # Sets whether passwordless authentication is allowed. + # Requires WebAuthn to work. + allow_passwordless: false + + # Sets the message of the day for the cluster. + message_of_the_day: "" + + # idp is a set of options related to accessing IdPs within Teleport. Requires Teleport Enterprise. + idp: + # options related to the Teleport SAML IdP. + saml: + # enables access to the Teleport SAML IdP. + enabled: true + + # locking_mode is the cluster-wide locking mode default. + # Possible values: "strict" or "best_effort" + locking_mode: best_effort + + # default_session_ttl defines the default TTL (time to live) of certificates + # issued to the users on this cluster. 
+ default_session_ttl: "12h" + + # The type of authentication to use for this cluster. + # Possible values: "local", "oidc", "saml" and "github" + type: local + + stable_unix_user_config: + # If set to true, SSH instances will use the same UID for each given + # username when automatically creating users. + enabled: false + + # The range of UIDs (including both ends) used for automatic UID assignment. + first_uid: 90000 + last_uid: 95000 + +version: v2 +``` + diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/cluster-maintenance-config.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/cluster-maintenance-config.mdx new file mode 100644 index 0000000000000..a9269ad7a65b9 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/cluster-maintenance-config.mdx @@ -0,0 +1,18 @@ +--- +title: Cluster Maintenance Config Resource Reference +sidebar_label: Cluster Maintenance Config +description: Provides a comprehensive list of fields for the Teleport cluster maintenance config resource. +--- + +The cluster maintenance config resource represents global configuration options +for the agents enrolled into automatic updates (v1). + + +`cluster_maintenance_config` configures Managed Updates v1, +which is currently supported but will be superseded by Managed Updates v2. +The `autoupdate_config` and `autoupdate_version` resources +configure Managed Updates v2. + + +(!docs/pages/includes/cluster-maintenance-config-spec.mdx!) 
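+
+As an illustrative sketch only (the field values here are placeholders; see
+the spec above for the authoritative fields), a cluster maintenance config
+that schedules an agent upgrade window might look like:
+
+```yaml
+kind: cluster_maintenance_config
+version: v1
+metadata:
+  name: cluster-maintenance-config
+spec:
+  # agent_auto_update enables Managed Updates v1 for enrolled agents.
+  agent_auto_update: true
+  # agent_upgrade_window defines when enrolled agents may be upgraded.
+  agent_upgrade_window:
+    utc_start_hour: 2
+    weekdays: ["Mon", "Tue", "Wed", "Thu"]
+```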
+ diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/database-object-import-rule.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/database-object-import-rule.mdx new file mode 100644 index 0000000000000..dfd4ab8cbe08a --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/database-object-import-rule.mdx @@ -0,0 +1,13 @@ +--- +title: Database Object Import Rule Resource Reference +sidebar_label: Database Object Import Rule +description: Provides a comprehensive list of fields for the Teleport database object import rule resource. +--- + +Database object import rules define the labels to be applied to database objects +imported into Teleport. + +See [Database Access Controls](../../../enroll-resources/database-access/rbac.mdx) for more details. + +(!docs/pages/includes/database-access/auto-user-provisioning/database-object-import-rule-spec.mdx!) + diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/db.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/db.mdx new file mode 100644 index 0000000000000..70312e3a85629 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/db.mdx @@ -0,0 +1,10 @@ +--- +title: DB Resource Reference +sidebar_label: DB +description: Provides a comprehensive list of fields for the Teleport DB resource. +--- + +The DB resource contains a configuration of a database that can be accessed +through Teleport. + +(!docs/pages/includes/reference/resources/db.mdx!) 
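+
+For example, a statically registered PostgreSQL database could be defined as
+follows (the name, labels, and URI are placeholders):
+
+```yaml
+kind: db
+version: v3
+metadata:
+  name: example-postgres
+  description: Example PostgreSQL instance
+  labels:
+    env: dev
+spec:
+  # protocol is the database wire protocol.
+  protocol: postgres
+  # uri is the address the Database Service uses to reach the database.
+  uri: postgres.example.com:5432
+```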
diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/device.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/device.mdx new file mode 100644 index 0000000000000..3e5f4818deb86 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/device.mdx @@ -0,0 +1,10 @@ +--- +title: Device Resource Reference +sidebar_label: Device +description: Provides a comprehensive list of fields for the Teleport device resource. +--- + +Device contains information identifying a trusted device. + +(!docs/pages/includes/device-spec.mdx!) + diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/discovery-config.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/discovery-config.mdx new file mode 100644 index 0000000000000..ad0b2fbc74791 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/discovery-config.mdx @@ -0,0 +1,521 @@ +--- +title: Discovery Config Reference +description: Provides a reference of fields within the Discovery Config resource, which you can manage with tctl. +sidebar_label: Discovery Config +--- +{/* vale 3rd-party-products.former-names = NO */} +{/* vale messaging.capitalization = NO */} +{/* Automatically generated from: types/discoveryconfig/discoveryconfig.go */} +{/* DO NOT EDIT */} + +**Kind**: `discovery_config`
+**Version**: `v1` + +Describes extra discovery matchers that are added to DiscoveryServices that share the same Discovery Group. + +Example: + +```yaml +kind: "string" +sub_kind: "string" +version: "string" +metadata: # [...] +spec: # [...] +status: # [...] +``` +|Field Name|Description|Type| +|---|---|---| +|kind|A resource kind.|string| +|metadata|Metadata for the resource.|[Metadata](#metadata)| +|spec|The specification for the discovery config.|[Spec](#spec)| +|status|The status for the discovery config.|[Status](#status)| +|sub_kind|An optional resource sub kind, used in some resources.|string| +|version|The resource version.|string| + +## AWS Matcher + +Matches AWS EC2 instances and AWS databases. + + +Example: + +```yaml +types: + - "string" + - "string" + - "string" +regions: + - "string" + - "string" + - "string" +assume_role: # [...] +tags: # [...] +install: # [...] +ssm: # [...] +integration: "string" +kube_app_discovery: true +setup_access_for_arn: "string" +organization: # [...] +``` + +|Field Name|Description|Type| +|---|---|---| +|assume_role|ARN is the AWS role to assume for database discovery.|[Assume Role](#assume-role)| +|install|Sets the join method when installing on discovered EC2 nodes.|[Installer Params](#installer-params)| +|integration|The integration name used to generate credentials to interact with AWS APIs. Environment credentials will not be used when this value is set.|string| +|kube_app_discovery|Controls whether Kubernetes App Discovery will be enabled for agents running on discovered clusters, currently only affects AWS EKS discovery in integration mode.|Boolean| +|organization|An AWS Organization matcher for discovering resources across multiple accounts under an Organization.|[AWS Organization Matcher](#aws-organization-matcher)| +|regions|AWS regions to query for databases.|[]string| +|setup_access_for_arn|The role that the Discovery Service should create EKS Access Entries for. 
This value should match the IAM identity that Teleport Kubernetes Service uses. If this value is empty, the Discovery Service will attempt to set up access for its own identity (self).|string| +|ssm|Provides options to use when sending a document command to an EC2 node.|[AWSSSM](#awsssm)| +|tags|AWS resource Tags to match.|[Labels](#labels)| +|types|AWS database types to match: "ec2", "rds", "redshift", "elasticache", or "memorydb".|[]string| + +## AWS Organization Matcher + +Specifies an Organization and rules for discovering accounts under that organization. + + +Example: + +```yaml +organization_id: "string" +organizational_units: # [...] +``` + +|Field Name|Description|Type| +|---|---|---| +|organization_id|The AWS Organization ID to match against. Required.|string| +|organizational_units|Contains rules for matching AWS accounts based on their Organizational Units.|[AWS Organization Units Matcher](#aws-organization-units-matcher)| + +## AWS Organization Units Matcher + +Contains rules for matching accounts under an Organization. Accounts that belong to an excluded Organizational Unit, and its children, will be excluded even if they were included. + + +Example: + +```yaml +include: + - "string" + - "string" + - "string" +exclude: + - "string" + - "string" + - "string" +``` + +|Field Name|Description|Type| +|---|---|---| +|exclude|A list of AWS Organizational Unit IDs to exclude. Only exact matches or wildcard (*) are supported. If empty, no Organizational Units are excluded by default.|[]string| +|include|A list of AWS Organizational Unit IDs to match. Only exact matches or wildcard (*) are supported. 
If empty, all Organizational Units are included by default.|[]string| + +## AWSSSM + +Provides options to use when executing SSM documents + + +Example: + +```yaml +document_name: "string" +``` + +|Field Name|Description|Type| +|---|---|---| +|document_name|The name of the document to use when executing an SSM command|string| + +## Access Graph AWS Sync + +A configuration for AWS Access Graph service poll service. + + +Example: + +```yaml +regions: + - "string" + - "string" + - "string" +assume_role: # [...] +integration: "string" +cloud_trail_logs: # [...] +eks_audit_logs: # [...] +``` + +|Field Name|Description|Type| +|---|---|---| +|assume_role|ARN is the AWS role to assume for database discovery.|[Assume Role](#assume-role)| +|cloud_trail_logs|Configuration settings for collecting AWS CloudTrail logs via an SQS queue.|[Access Graph AWS Sync Cloud Trail Logs](#access-graph-aws-sync-cloud-trail-logs)| +|eks_audit_logs||[Access Graph AWS Sync EKS Audit Logs](#access-graph-aws-sync-eks-audit-logs)| +|integration|The integration name used to generate credentials to interact with AWS APIs.|string| +|regions|AWS regions to import resources from.|[]string| + +## Access Graph AWS Sync Cloud Trail Logs + +Defines settings for ingesting AWS CloudTrail logs by polling an SQS queue that receives notifications about new log files. + + +Example: + +```yaml +region: "string" +sqs_queue: "string" +``` + +|Field Name|Description|Type| +|---|---|---| +|region|The AWS region of the SQS queue for CloudTrail notifications, ex.: "us-east-2".|string| +|sqs_queue|The name or URL for CloudTrail log events, ex.: "demo-cloudtrail-queue".|string| + +## Access Graph AWS Sync EKS Audit Logs + +Defines the settings for ingesting Kubernetes apiserver audit logs from EKS clusters. + + +Example: + +```yaml +tags: # [...] 
+``` + +|Field Name|Description|Type| +|---|---|---| +|tags|The tags of EKS clusters for which apiserver audit logs should be fetched.|[Labels](#labels)| + +## Access Graph Azure Sync + +A configuration for Azure Access Graph service poll service. + + +Example: + +```yaml +subscription_id: "string" +integration: "string" +``` + +|Field Name|Description|Type| +|---|---|---| +|integration|The integration name used to generate credentials to interact with Azure APIs.|string| +|subscription_id|The ID of the Azure subscription to sync resources from.|string| + +## Access Graph Sync + +A configuration for Access Graph service. + + +Example: + +```yaml +aws: + - # [...] + - # [...] + - # [...] +poll_interval: # See description +azure: + - # [...] + - # [...] + - # [...] +``` + +|Field Name|Description|Type| +|---|---|---| +|aws|A configuration for AWS Access Graph service poll service.|[][Access Graph AWS Sync](#access-graph-aws-sync)| +|azure|A configuration for Azure Access Graph service poll service.|[][Access Graph Azure Sync](#access-graph-azure-sync)| +|poll_interval|The frequency at which to poll for resources.|| + +## Assume Role + +Provides a role ARN and ExternalID to assume an AWS role when interacting with AWS resources. + + +Example: + +```yaml +role_arn: "string" +external_id: "string" +``` + +|Field Name|Description|Type| +|---|---|---| +|external_id|The external ID used to assume a role in another account.|string| +|role_arn|The fully specified AWS IAM role ARN.|string| + +## Azure Installer Params + +The set of Azure-specific installation parameters. + + +Example: + +```yaml +client_id: "string" +``` + +|Field Name|Description|Type| +|---|---|---| +|client_id|The client ID of the managed identity discovered nodes should use to join the cluster.|string| + +## Azure Matcher + +Matches Azure resources. It defines which resource types to match, filters, and some configuration parameters. 
+ + +Example: + +```yaml +subscriptions: + - "string" + - "string" + - "string" +resource_groups: + - "string" + - "string" + - "string" +types: + - "string" + - "string" + - "string" +regions: + - "string" + - "string" + - "string" +tags: # [...] +install_params: # [...] +integration: "string" +``` + +|Field Name|Description|Type| +|---|---|---| +|install_params|Sets the join method when installing on discovered Azure nodes.|[Installer Params](#installer-params)| +|integration|The integration name used to generate credentials to interact with Azure APIs. Environment credentials will not be used when this value is set.|string| +|regions|Azure locations to match for databases.|[]string| +|resource_groups|Azure resource groups to query for resources.|[]string| +|subscriptions|Azure subscriptions to query for resources.|[]string| +|tags|Azure tags on resources to match.|[Labels](#labels)| +|types|Azure types to match: "mysql", "postgres", "aks", "vm"|[]string| + +## GCP Matcher + +Matches GCP resources. + + +Example: + +```yaml +types: + - "string" + - "string" + - "string" +locations: + - "string" + - "string" + - "string" +tags: # [...] +project_ids: + - "string" + - "string" + - "string" +service_accounts: + - "string" + - "string" + - "string" +install_params: # [...] +labels: # [...] +``` + +|Field Name|Description|Type| +|---|---|---| +|install_params|Sets the join method when installing on discovered GCP nodes.|[Installer Params](#installer-params)| +|labels|GCP labels to match.|[Labels](#labels)| +|locations|GKE locations to search resources for.|[]string| +|project_ids|The GCP project ID where the resources are deployed.|[]string| +|service_accounts|The emails of service accounts attached to VMs.|[]string| +|tags|Obsolete and only exists for backwards compatibility. Use Labels instead.|[Labels](#labels)| +|types|GKE resource types to match: "gke", "vm".|[]string| + +## HTTP Proxy Settings + +Defines HTTP proxy settings for making HTTP and HTTPS requests. 
+ + +Example: + +```yaml +http_proxy: "string" +https_proxy: "string" +no_proxy: "string" +``` + +|Field Name|Description|Type| +|---|---|---| +|http_proxy|The URL for the HTTP proxy to use when making requests. When applied, this will set the HTTP_PROXY environment variable.|string| +|https_proxy|The URL for the HTTPS Proxy to use when making requests. When applied, this will set the HTTPS_PROXY environment variable.|string| +|no_proxy|A comma separated list of URLs that will be excluded from proxying. When applied, this will set the NO_PROXY environment variable.|string| + +## Install Param Enroll Mode + +The mode used to enroll the node into the cluster. + + +## Installer Params + +InstallParams sets join method to use on discovered nodes + + +Example: + +```yaml +join_method: # [...] +join_token: "string" +script_name: "string" +install_teleport: true +sshd_config: "string" +proxy_addr: "string" +azure: # [...] +enroll_mode: # [...] +suffix: "string" +update_group: "string" +http_proxy_settings: # [...] +``` + +|Field Name|Description|Type| +|---|---|---| +|azure|The set of Azure-specific installation parameters.|[Azure Installer Params](#azure-installer-params)| +|enroll_mode|Indicates the enrollment mode to be used when adding a node. Valid values: 0: uses eice for EC2 matchers which use an integration and script for all the other methods 1: uses script mode 2: uses eice mode|[Install Param Enroll Mode](#install-param-enroll-mode)| +|http_proxy_settings|Defines HTTP proxy settings for making HTTP requests. 
When set, this will set the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables before running the installation.|[HTTP Proxy Settings](#http-proxy-settings)| +|install_teleport|Disables agentless discovery|Boolean| +|join_method|The method to use when joining the cluster|[Join Method](#join-method)| +|join_token|The token to use when joining the cluster|string| +|proxy_addr|The address of the proxy the discovered node should use to connect to the cluster.|string| +|script_name|The name of the teleport installer script resource for the cloud instance to execute|string| +|sshd_config|Provides the path to write sshd configuration changes|string| +|suffix|Indicates the installation suffix for the teleport installation. Set this value if you want multiple installations of Teleport. See --install-suffix flag in teleport-update program. Note: only supported for Amazon EC2. Suffix name can only contain alphanumeric characters and hyphens.|string| +|update_group|Indicates the update group for the teleport installation. This value is used to group installations in order to update them in batches. See --group flag in teleport-update program. Note: only supported for Amazon EC2. Group name can only contain alphanumeric characters and hyphens.|string| + +## Join Method + +The method used for new nodes to join the cluster. + + +## Kubernetes Matcher + +Matches Kubernetes services. + + +Example: + +```yaml +types: + - "string" + - "string" + - "string" +namespaces: + - "string" + - "string" + - "string" +labels: # [...] +``` + +|Field Name|Description|Type| +|---|---|---| +|labels|Kubernetes services labels to match.|[Labels](#labels)| +|namespaces|Kubernetes namespaces in which to discover services|[]string| +|types|Kubernetes services types to match. 
Currently only 'app' is supported.|[]string| + +## Labels + +A wrapper around map that can marshal and unmarshal itself from scalar and list values + + +## Metadata + +Resource metadata + + +Example: + +```yaml +name: "string" +description: "string" +labels: + "string": "string" + "string": "string" + "string": "string" +expires: # See description +revision: "string" +``` + +|Field Name|Description|Type| +|---|---|---| +|description|Object description|string| +|expires|A global expiry time header can be set on any resource in the system.|| +|labels|A set of labels|map[string]string| +|name|An object name|string| +|revision|An opaque identifier which tracks the versions of a resource over time. Clients should ignore and not alter its value but must return the revision in any updates of a resource.|string| + +## Spec + +The specification for a discovery config. + + +Example: + +```yaml +discovery_group: "string" +aws: + - # [...] + - # [...] + - # [...] +azure: + - # [...] + - # [...] + - # [...] +gcp: + - # [...] + - # [...] + - # [...] +kube: + - # [...] + - # [...] + - # [...] +access_graph: # [...] +``` + +|Field Name|Description|Type| +|---|---|---| +|access_graph|The configuration for the Access Graph Cloud sync.|[Access Graph Sync](#access-graph-sync)| +|aws|A list of matchers for the supported resources in AWS.|[][AWS Matcher](#aws-matcher)| +|azure|A list of matchers for the supported resources in Azure.|[][Azure Matcher](#azure-matcher)| +|discovery_group|The Discovery Group for the current DiscoveryConfig. DiscoveryServices should include all the matchers if the DiscoveryGroup matches with their own group.|string| +|gcp|A list of matchers for the supported resources in GCP.|[][GCP Matcher](#gcp-matcher)| +|kube|A list of matchers for the supported resources in Kubernetes.|[][Kubernetes Matcher](#kubernetes-matcher)| + +## Status + +Holds dynamic information about the discovery configuration running status such as errors, state and count of the resources. 
+ + +Example: + +```yaml +state: "string" +error_message: "string" +discovered_resources: 1 +last_sync_time: # See description +integration_discovered_resources: + "string": # See description + "string": # See description + "string": # See description +``` + +|Field Name|Description|Type| +|---|---|---| +|discovered_resources|Holds the count of the discovered resources in the previous iteration.|number| +|error_message|Holds the error message when state is DISCOVERY_CONFIG_STATE_ERROR.|string| +|integration_discovered_resources|Maps an integration to a summary of resources that were found using that integration.|map[string]| +|last_sync_time|The timestamp when the Discovery Config was last synced.|| +|state|The current state of the discovery config.|string| + diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/health-check-config.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/health-check-config.mdx new file mode 100644 index 0000000000000..d96eba4240a50 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/health-check-config.mdx @@ -0,0 +1,51 @@ +--- +title: Health Check Config Resource Reference +sidebar_label: Health Check Config +description: Provides a comprehensive list of fields for the Teleport health check config resource. +--- + +The health check config resource specifies configuration options for resource +endpoint health checks. + +Currently, health checks can only be configured for database endpoints. + +```yaml +kind: health_check_config +version: v1 +metadata: + name: example + description: Example health check configuration +spec: + # interval is the time between each health check. Default 30s. + interval: 30s + # timeout is the health check connection establishment timeout. Default 5s. + timeout: 5s + # healthy_threshold is the number of consecutive passing health checks + # after which a target's health status becomes "healthy". Default 2. 
+ healthy_threshold: 2 + # unhealthy_threshold is the number of consecutive failing health checks + # after which a target's health status becomes "unhealthy". Default 1. + unhealthy_threshold: 1 + # match is used to select resources that these settings apply to. + # Resources are matched by label selectors and at least one label selector + # must be set. + # If multiple `health_check_config` resources match the same resource, then + # the matching health check configs are sorted by name and only the first + # config applies. + match: + # db_labels matches database labels. An empty value is ignored. + # If db_labels_expression is also set, then the match result is the logical + # AND of both. + db_labels: + - name: env + values: + - dev + - staging + # db_labels_expression is a label predicate expression to match databases. + # An empty value is ignored. + # If db_labels is also set, then the match result is the logical AND of both. + db_labels_expression: 'labels["owner"] == "database-team"' +``` + +See [predicate language label expressions](../../access-controls/predicate-language.mdx#label-expressions) for more info about using predicate language to match resource labels. + diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/inference-model.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/inference-model.mdx new file mode 100644 index 0000000000000..a45480a117878 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/inference-model.mdx @@ -0,0 +1,62 @@ +--- +title: Inference Model Resource Reference +sidebar_label: Inference Model +description: Provides a comprehensive list of fields for the Teleport inference model resource. +--- + +The inference model resource configures which third-party inference provider +should be accessed and contains both general and provider-specific options that +control session recording summarization. Multiple models can be configured. 
+ +```yaml +kind: inference_model +version: v1 +metadata: + name: example-model +spec: + + # openai tells Teleport to use OpenAI for summarizing session recordings + # using this model. + openai: + + # openai_model_id is the provider-specific model name. If this model + # connects to a real OpenAI API, this needs to be a valid OpenAI model + # name. If it connects to an OpenAI-compatible proxy, this ID is whatever + # public model name that particular proxy uses. Required. + openai_model_id: gpt-4o + + # temperature is an optional model temperature. Defaults to the model's + # default value. + temperature: 0.4 + + # api_key_secret_ref is the name of an inference_secret resource containing + # the OpenAI API key (in case of a direct connection to OpenAI) or other + # secret required to authenticate against an LLM proxy. Required. + api_key_secret_ref: example-openai-key + + # base_url allows changing the base URL to point Teleport to an alternate + # OpenAI-compatible API, e.g. a proxy. Optional, defaults to the public + # OpenAI API endpoint. + base_url: "http://my-llm-proxy:4000/" + + # max_session_length_bytes is the maximum length of a session recording to be + # processed using this model configuration. Setting it protects against + # incurring additional cost from summarization attempts that fail because of + # the model's context window limit. Optional, defaults to 200kB. + max_session_length_bytes: 235000 + + # bedrock tells Teleport to use Amazon Bedrock for summarizing session + # recordings using this model. + bedrock: + + # region is the AWS region that will be used for inference. Required. + region: eu-central-1 + + # bedrock_model_id specifies a model ID or an inference profile as + # understood by the Bedrock API. Required. + bedrock_model_id: "anthropic.claude-3-5-sonnet-20240620-v1:0" + + # temperature is an optional model temperature. Defaults to the model's + # default value. 
+ temperature: 0.6 +``` diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/inference-policy.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/inference-policy.mdx new file mode 100644 index 0000000000000..8b0192d49547c --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/inference-policy.mdx @@ -0,0 +1,34 @@ +--- +title: Inference Policy Resource Reference +sidebar_label: Inference Policy +description: Provides a comprehensive list of fields for the Teleport inference policy resource. +--- + +The inference policy resource controls session recording summarization: which +sessions are summarized and which model is used to summarize them. Multiple +policies can be defined, and multiple policies can share the same +model. + +```yaml +kind: inference_policy +version: v1 +metadata: + name: example-policy +spec: + + # kinds indicates which kinds of sessions match this policy. Required. + # Supported values: 'ssh', 'k8s', 'db'. + kinds: ['ssh', 'db'] + + # model is the name of the inference_model resource containing provider and + # model configuration. Required. + model: example-model + + # filter is an additional predicate that sessions must match for this + # policy to apply. Optional, defaults to matching all sessions of the + # given kinds. + filter: 'equals(resource.metadata.labels["env"], "prod")' +``` + +See [Predicate Language](../../access-controls/predicate-language.mdx) for more details on +filter expressions. 
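These inference resources can be managed with `tctl` like any other dynamic resource. A minimal sketch, assuming the `inference_model`, `inference_secret`, and `inference_policy` examples above are saved in a hypothetical local file named `inference.yaml`:

```code
# Create the inference resources from a local file:
$ tctl create inference.yaml

# Use -f to overwrite resources that already exist:
$ tctl create -f inference.yaml

# Inspect the configured model and policy:
$ tctl get inference_model/example-model
$ tctl get inference_policy/example-policy

# Secret values are stripped from the output when retrieved:
$ tctl get inference_secret/example-openai-key
```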
diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/inference-secret.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/inference-secret.mdx new file mode 100644 index 0000000000000..62be9b419f24f --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/inference-secret.mdx @@ -0,0 +1,22 @@ +--- +title: Inference Secret Resource Reference +sidebar_label: Inference Secret +description: Provides a comprehensive list of fields for the Teleport inference secret resource. +--- + +The inference secret resource configures a secret, such as an OpenAI API key or +a LiteLLM master key, that is used along with an `inference_model`. Multiple +inference models can use the same secret. Inference secret resources can be +listed and retrieved, but their values are stripped from the payload to +prevent them from leaking. + +```yaml +kind: inference_secret +version: v1 +metadata: + name: example-openai-key +spec: + # value is the secret value, e.g. an OpenAI key. + value: "************************************" +``` + diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/installer-v1.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/installer-v1.mdx new file mode 100644 index 0000000000000..94f2280dad77c --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/installer-v1.mdx @@ -0,0 +1,73 @@ +--- +title: Installer V1 Reference +description: Provides a reference of fields within the Installer V1 resource, which you can manage with tctl. +sidebar_label: Installer V1 +--- +{/* vale 3rd-party-products.former-names = NO */} +{/* vale messaging.capitalization = NO */} +{/* Automatically generated from: types/types.pb.go */} +{/* DO NOT EDIT */} + +**Kind**: `installer`
+**Version**: `v1` + +Represents an installer script resource. Used to provide a script to install Teleport on discovered nodes. + +Example: + +```yaml +kind: "string" +sub_kind: "string" +version: "string" +metadata: # [...] +spec: # [...] +``` +|Field Name|Description|Type| +|---|---|---| +|kind|The resource kind.|string| +|metadata|The resource metadata.|[Metadata](#metadata)| +|spec|The resource spec.|[Installer Spec V1](#installer-spec-v1)| +|sub_kind|An optional resource subkind. Currently unused for this resource.|string| +|version|The resource version.|string| + +## Installer Spec V1 + +The specification for an Installer + + +Example: + +```yaml +script: "string" +``` + +|Field Name|Description|Type| +|---|---|---| +|script|Represents the contents of an installer shell script|string| + +## Metadata + +Resource metadata + + +Example: + +```yaml +name: "string" +description: "string" +labels: + "string": "string" + "string": "string" + "string": "string" +expires: # See description +revision: "string" +``` + +|Field Name|Description|Type| +|---|---|---| +|description|Object description|string| +|expires|A global expiry time header can be set on any resource in the system.|| +|labels|A set of labels|map[string]string| +|name|An object name|string| +|revision|An opaque identifier which tracks the versions of a resource over time. Clients should ignore and not alter its value but must return the revision in any updates of a resource.|string| + diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/login-rules.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/login-rules.mdx new file mode 100644 index 0000000000000..6818b2afe0abe --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/login-rules.mdx @@ -0,0 +1,10 @@ +--- +title: Login Rules Resource Reference +sidebar_label: Login Rules +description: Provides a comprehensive list of fields for the Teleport login rules resource. 
+--- + +Login rules contain logic to transform SSO user traits during login. + +(!docs/pages/includes/login-rule-spec.mdx!) + diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/oidc-connector-v3.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/oidc-connector-v3.mdx new file mode 100644 index 0000000000000..8b184646e9909 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/oidc-connector-v3.mdx @@ -0,0 +1,225 @@ +--- +title: OIDC Connector V3 Reference +description: Provides a reference of fields within the OIDC Connector V3 resource, which you can manage with tctl. +sidebar_label: OIDC Connector V3 +--- +{/* vale 3rd-party-products.former-names = NO */} +{/* vale messaging.capitalization = NO */} +{/* Automatically generated from: types/types.pb.go */} +{/* DO NOT EDIT */} + +**Kind**: `oidc`
+**Version**: `v3` + +Represents an OIDC connector. + +Example: + +```yaml +kind: "string" +sub_kind: "string" +version: "string" +metadata: # [...] +spec: # [...] +``` +|Field Name|Description|Type| +|---|---|---| +|kind|A resource kind.|string| +|metadata|Holds resource metadata.|[Metadata](#metadata)| +|spec|An OIDC connector specification.|[OIDC Connector Spec V3](#oidc-connector-spec-v3)| +|sub_kind|An optional resource sub kind, used in some resources.|string| +|version|The resource version. It must be specified. Supported values are: `v3`.|string| + +## Claim Mapping + +Maps a claim to teleport roles. + + +Example: + +```yaml +claim: "string" +value: "string" +roles: + - "string" + - "string" + - "string" +``` + +|Field Name|Description|Type| +|---|---|---| +|claim|A claim name.|string| +|roles|A list of static teleport roles to match.|[]string| +|value|A claim value to match.|string| + +## Duration + +A wrapper around duration to set up custom marshal/unmarshal + + +## Entra ID Groups Provider + +Configures an out-of-band user groups provider. It works by following the groups claim source, which is sent for the "groups" claim when the user's group membership exceeds the 200-item limit. + + +Example: + +```yaml +disabled: true +group_type: "string" +graph_endpoint: "string" +``` + +|Field Name|Description|Type| +|---|---|---| +|disabled|Specifies that the groups provider should be disabled even when Entra ID responds with a groups claim source. Users may choose to disable it if they use an integration such as SCIM or a similar groups importer, as connector-based role mapping may not be needed in such a scenario.|Boolean| +|graph_endpoint|A Microsoft Graph API endpoint. The groups claim source endpoint provided by Entra ID points to the now-retired Azure AD Graph endpoint ("https://graph.windows.net"). To convert it to the newer Microsoft Graph API endpoint, Teleport defaults to the Microsoft Graph global service endpoint ("https://graph.microsoft.com"). 
Update GraphEndpoint to point to a different Microsoft Graph national cloud deployment endpoint.|string| +|group_type|A user group type filter. Defaults to "security-groups". Value can be "security-groups", "directory-roles", "all-groups".|string| + +## Metadata + +Resource metadata + + +Example: + +```yaml +name: "string" +description: "string" +labels: + "string": "string" + "string": "string" + "string": "string" +expires: # See description +revision: "string" +``` + +|Field Name|Description|Type| +|---|---|---| +|description|Object description|string| +|expires|A global expiry time header can be set on any resource in the system.|| +|labels|A set of labels|map[string]string| +|name|An object name|string| +|revision|An opaque identifier which tracks the versions of a resource over time. Clients should ignore and not alter its value but must return the revision in any updates of a resource.|string| + +## OIDC Connector MFA Settings + +Contains OIDC MFA settings. + + +Example: + +```yaml +enabled: true +client_id: "string" +client_secret: "string" +acr_values: "string" +prompt: "string" +max_age: # [...] +request_object_mode: "string" +``` + +|Field Name|Description|Type| +|---|---|---| +|acr_values|Authentication Context Class Reference values. The meaning of the ACR value is context-specific and varies for identity providers. Some identity providers support MFA-specific contexts, such as Okta with its "phr" (phishing-resistant) ACR.|string| +|client_id|The OIDC OAuth app client ID.|string| +|client_secret|The OIDC OAuth app client secret.|string| +|enabled|Specifies whether this OIDC connector supports MFA checks. Defaults to false.|Boolean| +|max_age|The amount of time in nanoseconds that an IdP session is valid for. Defaults to 0 to always force re-authentication for MFA checks. This should only be set to a non-zero value if the IdP is set up to perform MFA checks on top of active user sessions.|[Duration](#duration)| +|prompt|An optional OIDC prompt. 
An empty string omits prompt. If not specified, it defaults to select_account for backwards compatibility.|string| +|request_object_mode|Determines how JWT-Secured Authorization Requests will be used for authorization requests. JARs, or request objects, can provide integrity protection, source authentication, and confidentiality for authorization request parameters. If omitted, MFA flows will default to the `RequestObjectMode` behavior specified in the base OIDC connector. Set this property to 'none' to explicitly disable request objects for the MFA client.|string| + +## OIDC Connector Spec V3 + +An OIDC connector specification. It specifies configuration for Open ID Connect compatible external identity provider: https://openid.net/specs/openid-connect-core-1_0.html + + +Example: + +```yaml +issuer_url: "string" +client_id: "string" +client_secret: "string" +acr_values: "string" +provider: "string" +display: "string" +scope: + - "string" + - "string" + - "string" +prompt: "string" +claims_to_roles: + - # [...] + - # [...] + - # [...] +google_service_account_uri: "string" +google_service_account: "string" +google_admin_email: "string" +redirect_url: # [...] +allow_unverified_email: true +username_claim: "string" +max_age: # [...] +client_redirect_settings: # [...] +mfa: # [...] +pkce_mode: "string" +user_matchers: + - "string" + - "string" + - "string" +request_object_mode: "string" +entra_id_groups_provider: # [...] +``` + +|Field Name|Description|Type| +|---|---|---| +|acr_values|An Authentication Context Class Reference value. 
The meaning of the ACR value is context-specific and varies for identity providers.|string| +|allow_unverified_email|Tells the connector to accept OIDC users with unverified emails.|Boolean| +|claims_to_roles|Specifies a dynamic mapping from claims to roles.|[][Claim Mapping](#claim-mapping)| +|client_id|The id of the authentication client (Teleport Auth Service).|string| +|client_redirect_settings|Defines which client redirect URLs are allowed for non-browser SSO logins other than the standard localhost ones.|[SSO Client Redirect Settings](#sso-client-redirect-settings)| +|client_secret|Used to authenticate the client.|string| +|display|The friendly name for this provider.|string| +|entra_id_groups_provider|EntraIDGroupsProvider configures out-of-band user groups provider. It works by following through the groups claim source, which is sent for the "groups" claim when the user's group membership exceeds 200 max item limit.|[Entra ID Groups Provider](#entra-id-groups-provider)| +|google_admin_email|The email of a google admin to impersonate.|string| +|google_service_account|A string containing google service account credentials.|string| +|google_service_account_uri|A path to a google service account uri.|string| +|issuer_url|The endpoint of the provider, e.g. https://accounts.google.com.|string| +|max_age||[Duration](#duration)| +|mfa|Contains settings to enable SSO MFA checks through this auth connector.|[OIDC Connector MFA Settings](#oidc-connector-mfa-settings)| +|pkce_mode|Represents the configuration state for PKCE (Proof Key for Code Exchange). It can be "enabled" or "disabled"|string| +|prompt|An optional OIDC prompt. An empty string omits prompt. If not specified, it defaults to select_account for backwards compatibility.|string| +|provider|The external identity provider.|string| +|redirect_url|A list of callback URLs which the identity provider can use to redirect the client back to the Teleport Proxy to complete authentication. 
This list should match the URLs on the provider's side. The URL used for a given auth request will be chosen to match the requesting Proxy's public address. If there is no match, the first URL in the list will be used.|[Strings](#strings)| +|request_object_mode|Determines how JWT-Secured Authorization Requests will be used for authorization requests. JARs, or request objects, can provide integrity protection, source authentication, and confidentiality for authorization request parameters.|string| +|scope|Specifies additional scopes set by the provider.|[]string| +|user_matchers|A set of glob patterns to narrow down which username(s) this auth connector should match for identifier-first login.|[]string| +|username_claim|Specifies the name of the claim from the OIDC connector to be used as the user's username.|string| + +## SSO Client Redirect Settings + +Contains settings to define which additional client redirect URLs should be allowed for non-browser SSO logins. + + +Example: + +```yaml +allowed_https_hostnames: + - "string" + - "string" + - "string" +insecure_allowed_cidr_ranges: + - "string" + - "string" + - "string" +``` + +|Field Name|Description|Type| +|---|---|---| +|allowed_https_hostnames|A list of hostnames allowed for https client redirect URLs|[]string| +|insecure_allowed_cidr_ranges|A list of CIDRs allowed for HTTP or HTTPS client redirect URLs|[]string| + +## Strings + +A list of strings that can unmarshal from either a list of strings or a scalar string in YAML or JSON + + diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/role.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/role.mdx new file mode 100644 index 0000000000000..6271b90c47b9f --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/role.mdx @@ -0,0 +1,59 @@ +--- +title: Role Resource Reference +sidebar_label: Role +description: Provides a comprehensive list of fields for the Teleport role resource. 
+--- + +Interactive and non-interactive users (bots) assume one or many roles. + +Roles govern access to databases, SSH servers, Kubernetes clusters, web services and applications, and Windows desktops. + +(!docs/pages/includes/role-spec.mdx!) + +## Role versions + +There are currently six supported role versions: `v3`, `v4`, `v5`, `v6`, `v7`, and `v8`. + +Different role versions may have varying RBAC applied to resources. + +### `kubernetes_resource` + +Versions 5, 6, 7 and 8 of the Teleport role resource have different behaviors when +accessing Kubernetes resources. +{/*lint ignore messaging*/} +Roles not [granting Kubernetes access](../../../enroll-resources/kubernetes-access/introduction.mdx) are +equivalent in the four versions. + +Roles v5 and v6 can only restrict actions on pods (e.g. executing in them). +Role v7 supports restricting some common resource kinds ( +see [the `kubernetes_resource` documentation](../../../enroll-resources/kubernetes-access/controls.mdx#kubernetes_resources) +for a complete list). +Role v8 supports restricting all resource kinds, including CRDs. It also changes the format of the `kind` field. + +When no `kubernetes_resource` is set: +- Roles v5, v7 and v8 grant all access by default +- Role v6 blocks pod execution by default; this was reverted in role v7 to improve the user experience. + +{/* This table is cursed. Our current docs engine doesn't support HTML tables +(due to SSR and the rehydration process). We have to do everything inline in +markdown. Some HTML character codes are used to render specific chars like {} +or to avoid line breaks in the middle of the YAML. 
Spaces before br tags +are required.*/} + +| Allow rule | Role v5 | Role v6 | Role v7 | Role v8 | +|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------| +|
kubernetes_groups: 
- "system:masters"
kubernetes_labels: {}
kubernetes_resources: []
| ❌ no access | ❌ no access | ❌ no access | ❌ no access | +|
kubernetes_groups: 
- "system:masters"
kubernetes_labels:
env: ["dev"]
kubernetes_resources: []
| ✅ full access to `dev` clusters | ❌ cannot exec in pods
✅ can access other
resources like `secrets` | ✅ full access to `dev` clusters | ✅ full access to `dev` clusters | +|
kubernetes_groups: 
- "system:masters"
kubernetes_labels:
env: ["dev"]
kubernetes_resources:
- name: "*"
kind: pod
namespace: "foo"
| ✅ can exec in pods in `foo`
✅ can access `secrets` in all namespaces.
❌ cannot exec in other namespaces | ✅ can exec in pods in `foo`
✅ can access `secrets` in all namespaces.
❌ cannot exec in other namespaces | ✅ can exec in pods in `foo`
❌ cannot access `secrets` in all namespaces
❌ cannot exec in other namespaces | ⚠️ invalid, v8 uses plural | +|
kubernetes_groups: 
- "system:masters"
kubernetes_labels:
env: ["dev"]
kubernetes_resources:
- name: "*"
kind: pod
namespace: "foo"
- name: "*"
kind: secret
namespace: "foo"
| ⚠️ not supported | ⚠️ not supported | ✅ can exec in pods in `foo`
✅ can access `secrets` in `foo`
❌ cannot exec in other namespaces
❌ cannot access `secrets` in other namespaces
❌ cannot access `configmaps` in `foo` | ⚠️ invalid, v8 uses plural | +|
kubernetes_groups: 
- "system:masters"
kubernetes_labels:
env: ["dev"]
kubernetes_resources:
- kind: "namespace"
name: "foo"
| ⚠️ not supported | ⚠️ not supported | ✅ full access in namespace `foo` including all its resources
❌ cannot access other namespaces
❌ cannot access cluster-wide resources | ⚠️ invalid, v8 uses plural | +|
kubernetes_groups: 
- "system:masters"
kubernetes_labels:
env: ["dev"]
kubernetes_resources:
- kind: "*"
namespace: "foo"
name: "*"
| ⚠️ not supported | ⚠️ not supported | ✅ full access in namespace `foo` including all its resources
❌ cannot access other namespaces
✅ full access to cluster-wide resources | ⚠️ invalid, v8 requires api_group for '*' kind | +|
kubernetes_groups: 
- "system:masters"
kubernetes_labels:
env: ["dev"]
kubernetes_resources:
- kind: "*"
namespace: "*"
name: "*"
| ⚠️ not supported | ⚠️ not supported | ✅ full access to `dev` clusters | ⚠️ invalid, v8 requires api_group for '*' kind | +|
kubernetes_groups: 
- "system:masters"
kubernetes_labels:
env: ["dev"]
kubernetes_resources:
- name: "*"
kind: pods
namespace: "foo"
- name: "*"
kind: deployments
api_group: apps
namespace: "foo"
| ⚠️ not supported | ⚠️ not supported | ⚠️ not supported | ✅ can exec in pods in `foo`
✅ can access `deployments` in `foo`
❌ cannot exec in other namespaces
❌ cannot access `deployments` in other namespaces
❌ cannot access `configmaps` in `foo` | + +### `saml_idp_service_provider` +SAML IdP role option `spec.idp.saml.enabled: true/false` is only supported in role version 7 and below. +See [SAML IdP reference](../../access-controls/saml-idp.mdx#RBAC) to learn how the RBAC is applied to +the `saml_idp_service_provider` resource starting role version 8. + + diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/saml-connector-v2.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/saml-connector-v2.mdx new file mode 100644 index 0000000000000..cd7180c957b90 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/saml-connector-v2.mdx @@ -0,0 +1,213 @@ +--- +title: SAML Connector V2 Reference +description: Provides a reference of fields within the SAML Connector V2 resource, which you can manage with tctl. +sidebar_label: SAML Connector V2 +--- +{/* vale 3rd-party-products.former-names = NO */} +{/* vale messaging.capitalization = NO */} +{/* Automatically generated from: types/types.pb.go */} +{/* DO NOT EDIT */} + +**Kind**: `saml`
+**Version**: `v2` + +Represents a SAML connector. + +Example: + +```yaml +kind: "string" +sub_kind: "string" +version: "string" +metadata: # [...] +spec: # [...] +``` +|Field Name|Description|Type| +|---|---|---| +|kind|A resource kind.|string| +|metadata|Holds resource metadata.|[Metadata](#metadata)| +|spec|An SAML connector specification.|[SAML Connector Spec V2](#saml-connector-spec-v2)| +|sub_kind|An optional resource sub kind, used in some resources.|string| +|version|The resource version. It must be specified. Supported values are: `v2`.|string| + +## Asymmetric Key Pair + +A combination of a public certificate and private key that can be used for encryption and signing. + + +Example: + +```yaml +private_key: "string" +cert: "string" +``` + +|Field Name|Description|Type| +|---|---|---| +|cert|A PEM-encoded x509 certificate.|string| +|private_key|A PEM encoded x509 private key.|string| + +## Attribute Mapping + +Maps a SAML attribute statement to teleport roles. + + +Example: + +```yaml +name: "string" +value: "string" +roles: + - "string" + - "string" + - "string" +``` + +|Field Name|Description|Type| +|---|---|---| +|name|An attribute statement name.|string| +|roles|A list of static teleport roles to map to.|[]string| +|value|An attribute statement value to match.|string| + +## Metadata + +Resource metadata + + +Example: + +```yaml +name: "string" +description: "string" +labels: + "string": "string" + "string": "string" + "string": "string" +expires: # See description +revision: "string" +``` + +|Field Name|Description|Type| +|---|---|---| +|description|Object description|string| +|expires|A global expiry time header can be set on any resource in the system.|| +|labels|A set of labels|map[string]string| +|name|An object name|string| +|revision|An opaque identifier which tracks the versions of a resource over time. 
Clients should ignore and not alter its value but must return the revision in any updates of a resource.|string| + +## SAML Connector MFA Settings + +Contains SAML MFA settings. + + +Example: + +```yaml +enabled: true +entity_descriptor: "string" +entity_descriptor_url: "string" +force_authn: # [...] +issuer: "string" +sso: "string" +cert: "string" +``` + +|Field Name|Description|Type| +|---|---|---| +|cert|The identity provider certificate PEM. IDP signs `\` responses using this certificate.|string| +|enabled|Specifies whether this SAML connector supports MFA checks. Defaults to false.|Boolean| +|entity_descriptor|XML with descriptor. It can be used to supply configuration parameters in one XML file rather than supplying them in the individual elements. Usually set from EntityDescriptorUrl.|string| +|entity_descriptor_url|A URL that supplies a configuration XML.|string| +|force_authn|Specifies whether re-authentication should be forced for MFA checks. UNSPECIFIED is treated as YES to always force re-authentication for MFA checks. This should only be set to NO if the IdP is set up to perform MFA checks on top of active user sessions.|[SAML Force Authn](#saml-force-authn)| +|issuer|The identity provider issuer. Usually set from EntityDescriptor.|string| +|sso|The URL of the identity provider's SSO service. Usually set from EntityDescriptor.|string| + +## SAML Connector Spec V2 + +A SAML connector specification. + + +Example: + +```yaml +issuer: "string" +sso: "string" +cert: "string" +display: "string" +acs: "string" +audience: "string" +service_provider_issuer: "string" +entity_descriptor: "string" +entity_descriptor_url: "string" +attributes_to_roles: + - # [...] + - # [...] + - # [...] +signing_key_pair: # [...] +provider: "string" +assertion_key_pair: # [...] +allow_idp_initiated: true +client_redirect_settings: # [...] +single_logout_url: "string" +mfa: # [...] +force_authn: # [...] 
+preferred_request_binding: "string" +user_matchers: + - "string" + - "string" + - "string" +include_subject: true +``` + +|Field Name|Description|Type| +|---|---|---| +|acs|A URL for assertion consumer service on the service provider (Teleport's side).|string| +|allow_idp_initiated|A flag that indicates if the connector can be used for IdP-initiated logins.|Boolean| +|assertion_key_pair|A key pair used for decrypting SAML assertions.|[Asymmetric Key Pair](#asymmetric-key-pair)| +|attributes_to_roles|A list of mappings of attribute statements to roles.|[][Attribute Mapping](#attribute-mapping)| +|audience|Uniquely identifies our service provider.|string| +|cert|The identity provider certificate PEM. IDP signs `\` responses using this certificate.|string| +|client_redirect_settings|Defines which client redirect URLs are allowed for non-browser SSO logins other than the standard localhost ones.|[SSO Client Redirect Settings](#sso-client-redirect-settings)| +|display|Controls how this connector is displayed.|string| +|entity_descriptor|XML with descriptor. It can be used to supply configuration parameters in one XML file rather than supplying them in the individual elements.|string| +|entity_descriptor_url|A URL that supplies a configuration XML.|string| +|force_authn|Specifies whether re-authentication should be forced on login. UNSPECIFIED is treated as NO.|[SAML Force Authn](#saml-force-authn)| +|include_subject|A flag that indicates whether the Subject element is included in the SAML authentication request. Defaults to false. Note: Some IdPs will reject requests that contain a Subject.|Boolean| +|issuer|The identity provider issuer.|string| +|mfa|Contains settings to enable SSO MFA checks through this auth connector.|[SAML Connector MFA Settings](#saml-connector-mfa-settings)| +|preferred_request_binding|A preferred SAML request binding method. Value must be either "http-post" or "http-redirect". 
In general, the SAML identity provider lists the request binding methods it supports, and the SAML service provider uses whichever of the supported methods it prefers. Teleport has never honored the request binding value provided by the IdP, and has always used http-redirect binding as the default. Setting the PreferredRequestBinding value preserves the existing auth connector behavior and only uses http-post binding if it is explicitly configured.|string| +|provider|The external identity provider.|string| +|service_provider_issuer|The issuer of the service provider (Teleport).|string| +|signing_key_pair|An x509 key pair used to sign AuthnRequest.|[Asymmetric Key Pair](#asymmetric-key-pair)| +|single_logout_url|The SAML Single log-out URL to initiate SAML SLO (single log-out). If this is not provided, SLO is disabled.|string| +|sso|The URL of the identity provider's SSO service.|string| +|user_matchers|A set of glob patterns to narrow down which username(s) this auth connector should match for identifier-first login.|[]string| + +## SAML Force Authn + +Specifies whether existing SAML sessions should be accepted or re-authentication should be forced. + + +## SSO Client Redirect Settings + +Contains settings to define which additional client redirect URLs should be allowed for non-browser SSO logins. 
+ + +Example: + +```yaml +allowed_https_hostnames: + - "string" + - "string" + - "string" +insecure_allowed_cidr_ranges: + - "string" + - "string" + - "string" +``` + +|Field Name|Description|Type| +|---|---|---| +|allowed_https_hostnames|A list of hostnames allowed for https client redirect URLs|[]string| +|insecure_allowed_cidr_ranges|A list of CIDRs allowed for HTTP or HTTPS client redirect URLs|[]string| + diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/teleport-resources.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/teleport-resources.mdx new file mode 100644 index 0000000000000..5a62a3ccdc7cb --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/teleport-resources.mdx @@ -0,0 +1,72 @@ +--- +title: Teleport Resource Reference +description: Reference documentation for Teleport resources +tags: + - reference + - platform-wide +--- + +The guides in this section list fields within dynamic resources you can manage +with Teleport. For more information on dynamic resources, see our guide to +[Using Dynamic +Resources](../../../zero-trust-access/infrastructure-as-code/infrastructure-as-code.mdx). + +Examples of applying dynamic resources with `tctl`: + +```code +# List all connectors: +$ tctl get connectors + +# Dump a SAML connector called "okta": +$ tctl get saml/okta + +# Delete a SAML connector called "okta": +$ tctl rm saml/okta + +# Delete an OIDC connector called "gworkspace": +$ tctl rm oidc/gworkspace + +# Delete a github connector called "myteam": +$ tctl rm github/myteam + +# Delete a local user called "admin": +$ tctl rm users/admin + +# Show all devices: +$ tctl get devices + +# Fetch a specific device: +$ tctl get devices/ + +# Fetch the cluster auth preferences +$ tctl get cluster_auth_preference +``` + + + Although `tctl get connectors` will show you every connector, when working with an individual connector you must use the correct `kind`, such as `saml` or `oidc`. 
You can see each connector's `kind` at the top of its YAML output from `tctl get connectors`. + + +Here's the list of resources currently exposed via [`tctl`](../../cli/tctl.mdx): + +| Resource Kind | Description | +| - | - | +| [user](user.mdx) | A user record in the internal Teleport user DB. | +| [role](role.mdx) | A role assumed by interactive and non-interactive users. | +| connector | Authentication connectors for [Single Sign-On](../../../zero-trust-access/sso/sso.mdx) (SSO) for SAML, OIDC and GitHub. | +| node | A registered SSH node. The same record is displayed via `tctl nodes ls`. | +| windows_desktop | A registered Windows desktop. | +| cluster | A trusted cluster. See [here](../../../zero-trust-access/deploy-a-cluster/trustedclusters.mdx) for more details on connecting clusters together. | +| [login_rule](login-rules.mdx) | A Login Rule, see the [Login Rules guide](../../../zero-trust-access/sso/login-rules/login-rules.mdx) for more info. | +| [device](device.mdx) | A Teleport Trusted Device, see the [Device Trust guide](../../../identity-governance/device-trust/guide.mdx) for more info. | +| [ui_config](ui-config.mdx) | Configuration for the Web UI served by the Proxy Service. | +| [vnet_config](vnet-config.mdx) | Configuration for the cluster's VNet options. | +| [cluster_auth_preference](cluster-auth-preferences.mdx) | Configuration for the cluster's auth preferences. | +| [database_object_import_rule](database-object-import-rule.mdx) | Database object import rules. | +| [autoupdate_config](auto-update-config.mdx) | Client tools auto-update configuration | +| [autoupdate_version](auto-update-version.mdx) | Client tools auto-update target version configuration | +| [access_monitoring_rule](access-monitoring-rule.mdx) | Access monitoring rules. 
| +| [health_check_config](health-check-config.mdx) | Configuration for resource endpoint health checks. | +| [inference_model](inference-model.mdx) | Session summarization AI model configuration. | +| [inference_secret](inference-secret.mdx) | Session summarization AI provider secret (API key). | +| [inference_policy](inference-policy.mdx) | Matches sessions to inference models using session kind and other metadata. | + diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/ui-config.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/ui-config.mdx new file mode 100644 index 0000000000000..f6ccbc3384a66 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/ui-config.mdx @@ -0,0 +1,12 @@ +--- +title: UI Config Resource Reference +sidebar_label: UI Config +description: Provides a comprehensive list of fields for the Teleport UI config resource. +--- + +The UI config resource contains global configuration options for the Web UI +served by the Proxy Service. This resource is not set by default, which means a +`tctl get ui` will result in an error if used before this resource has been set. + +(!docs/pages/includes/ui-config-spec.mdx!) + diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/user.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/user.mdx new file mode 100644 index 0000000000000..eaa5e9d409694 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/user.mdx @@ -0,0 +1,41 @@ +--- +title: User Resource Reference +sidebar_label: User +description: Provides a comprehensive list of fields for the Teleport user resource.
+--- + +Teleport supports interactive local users, non-interactive local users (bots), +and single sign-on users, and represents these with a dynamic resource: + +```yaml +kind: user +version: v2 +metadata: + name: joe +spec: + # roles is a list of roles assigned to this user + roles: + - admin + # status can temporarily lock the user out of the Teleport system, for + # example when the user exceeds a predefined number of failed login attempts + status: + is_locked: false + lock_expires: 0001-01-01T00:00:00Z + locked_time: 0001-01-01T00:00:00Z + # traits are pairs of a key and a list of values assigned to a user resource. + # Traits can be used in role templates as variables. + traits: + logins: + - joe + - root + # expires, if not empty, sets automatic expiry of the resource + expires: 0001-01-01T00:00:00Z + # created_by is a system property that tracks the + # identity of the author of this user resource. + created_by: + time: 0001-01-01T00:00:00Z + user: + name: builtin-Admin +``` + + diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/vnet-config.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/vnet-config.mdx new file mode 100644 index 0000000000000..5d56f0576d0ce --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/vnet-config.mdx @@ -0,0 +1,29 @@ +--- +title: VNet Config Resource Reference +sidebar_label: VNet Config +description: Provides a comprehensive list of fields for the Teleport VNet config resource. +--- + +The VNet config resource contains cluster-specific options that VNet should use when +setting up connections to resources from this cluster. + +See [VNet](../../../enroll-resources/application-access/vnet.mdx) for more details. + +```yaml +kind: vnet_config +version: v1 +metadata: + name: vnet-config +spec: + # The range to use when assigning IP addresses to resources. + # It can be changed in case of conflicts with other software + # deployed on end user devices. Defaults to "100.64.0.0/10".
+ ipv4_cidr_range: "100.64.0.0/10" + # Extra DNS zones that VNet should capture DNS queries for. + # Set them if your TCP apps use a custom public_addr. + # Requires a DNS TXT record to be set on the domains; + # see the guide linked above. Empty by default. + custom_dns_zones: + - suffix: company.test +``` + diff --git a/docs/pages/reference/infrastructure-as-code/teleport-resources/windows-desktops.mdx b/docs/pages/reference/infrastructure-as-code/teleport-resources/windows-desktops.mdx new file mode 100644 index 0000000000000..e10005fceb803 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/teleport-resources/windows-desktops.mdx @@ -0,0 +1,24 @@ +--- +title: Windows Desktops Resource Reference +sidebar_label: Windows Desktops +description: Provides a guide to the Teleport windows desktop resource. +--- + +In most cases, Teleport will register `windows_desktop` resources automatically +based on static hosts in your configuration file or via LDAP-based discovery. + +You can also use [dynamic +registration](../../../enroll-resources/desktop-access/dynamic-registration.mdx) with +`dynamic_windows_desktop` resources. This can be useful for managing inventories +of hosts that are not joined to an Active Directory domain. + +There are a few important considerations to keep in mind when registering +desktops this way: + +1. The desktop's `addr` can be a hostname or IP address, and should include + the RDP port (typically 3389). +1. If you intend to log in to the desktop with local Windows users, you must set + `non_ad: true`. If you intend to log in with Active Directory users, leave + `non_ad` unset (or false), and specify the Active Directory domain in the + `domain` field.
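A minimal sketch of a `dynamic_windows_desktop` resource illustrating these considerations follows; the name, labels, and address are hypothetical examples, so adjust them for your environment:

```yaml
kind: dynamic_windows_desktop
version: v1
metadata:
  name: example-desktop
  labels:
    env: dev
spec:
  # addr may be a hostname or IP address and should include the RDP port.
  addr: 192.0.2.10:3389
  # non_ad is true because this desktop is not joined to an Active Directory
  # domain and will be accessed with local Windows users.
  non_ad: true
```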
+ diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/access_list.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/access_list.mdx new file mode 100644 index 0000000000000..f9e324ade47f8 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/access_list.mdx @@ -0,0 +1,164 @@ +--- +title: Reference for the teleport_access_list Terraform data-source +sidebar_label: access_list +description: This page describes the supported values of the teleport_access_list data-source of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the `teleport_access_list` data source of the +Teleport Terraform provider. + + + + + +{/* schema generated by tfplugindocs */} +## Schema + +### Optional + +- `header` (Attributes) header is the header for the resource. (see [below for nested schema](#nested-schema-for-header)) +- `spec` (Attributes) spec is the specification for the Access List. (see [below for nested schema](#nested-schema-for-spec)) + +### Nested Schema for `header` + +Required: + +- `version` (String) Version is the API version used to create the resource. It must be specified. Based on this version, Teleport will apply different defaults on resource creation or deletion. It must be an integer prefixed by "v". For example: `v1` + +Optional: + +- `kind` (String) kind is a resource kind. +- `metadata` (Attributes) metadata is resource metadata. (see [below for nested schema](#nested-schema-for-headermetadata)) +- `sub_kind` (String) sub_kind is an optional resource sub kind, used in some resources. + +### Nested Schema for `header.metadata` + +Required: + +- `name` (String) name is an object name. + +Optional: + +- `description` (String) description is object description. 
+- `expires` (String) expires is a global expiry time header that can be set on any resource in the system. +- `labels` (Map of String) labels is a set of labels. +- `namespace` (String) namespace is object namespace. The field should be called "namespace" when it returns in Teleport 2.4. +- `revision` (String) revision is an opaque identifier which tracks the versions of a resource over time. Clients should ignore and not alter its value but must return the revision in any updates of a resource. + + + +### Nested Schema for `spec` + +Required: + +- `owners` (Attributes List) owners is a list of owners of the Access List. (see [below for nested schema](#nested-schema-for-specowners)) + +Optional: + +- `audit` (Attributes) audit describes the frequency that this Access List must be audited. (see [below for nested schema](#nested-schema-for-specaudit)) +- `description` (String) description is an optional plaintext description of the Access List. +- `grants` (Attributes) grants describes the access granted by membership to this Access List. (see [below for nested schema](#nested-schema-for-specgrants)) +- `membership_requires` (Attributes) membership_requires describes the requirements for a user to be a member of the Access List. For a membership to an Access List to be effective, the user must meet the requirements of membership_requires and must be in the members list. (see [below for nested schema](#nested-schema-for-specmembership_requires)) +- `owner_grants` (Attributes) owner_grants describes the access granted by owners to this Access List. (see [below for nested schema](#nested-schema-for-specowner_grants)) +- `ownership_requires` (Attributes) ownership_requires describes the requirements for a user to be an owner of the Access List. For ownership of an Access List to be effective, the user must meet the requirements of ownership_requires and must be in the owners list.
(see [below for nested schema](#nested-schema-for-specownership_requires)) +- `title` (String) title is a plaintext short description of the Access List. +- `type` (String) type can be an empty string, which denotes a regular Access List; "scim", which represents an Access List created from a SCIM group; or "static", for Access Lists managed by IaC tools. + +### Nested Schema for `spec.owners` + +Optional: + +- `description` (String) description is the plaintext description of the owner and why they are an owner. +- `membership_kind` (Number) membership_kind describes the type of membership, either `MEMBERSHIP_KIND_USER` or `MEMBERSHIP_KIND_LIST`. +- `name` (String) name is the username of the owner. + + +### Nested Schema for `spec.audit` + +Optional: + +- `next_audit_date` (String) next_audit_date is the date by which the next audit must be completed. +- `notifications` (Attributes) notifications is the configuration for notifying users. (see [below for nested schema](#nested-schema-for-specauditnotifications)) +- `recurrence` (Attributes) recurrence is the recurrence definition. (see [below for nested schema](#nested-schema-for-specauditrecurrence)) + +### Nested Schema for `spec.audit.notifications` + +Optional: + +- `start` (String) start specifies when to start notifying users that the next audit date is coming up. + + +### Nested Schema for `spec.audit.recurrence` + +Optional: + +- `day_of_month` (Number) day_of_month is the day of month that reviews will be scheduled on. Supported values are 0, 1, 15, and 31. +- `frequency` (Number) frequency is the frequency of reviews. This represents the period in months between two reviews. Supported values are 0, 1, 3, 6, and 12. + + + +### Nested Schema for `spec.grants` + +Optional: + +- `roles` (List of String) roles are the roles that are granted to users who are members of the Access List. +- `traits` (Attributes List) traits are the traits that are granted to users who are members of the Access List.
(see [below for nested schema](#nested-schema-for-specgrantstraits)) + +### Nested Schema for `spec.grants.traits` + +Optional: + +- `key` (String) key is the name of the trait. +- `values` (List of String) values is the list of trait values. + + + +### Nested Schema for `spec.membership_requires` + +Optional: + +- `roles` (List of String) roles are the user roles that must be present for the user to obtain access. +- `traits` (Attributes List) traits are the traits that must be present for the user to obtain access. (see [below for nested schema](#nested-schema-for-specmembership_requirestraits)) + +### Nested Schema for `spec.membership_requires.traits` + +Optional: + +- `key` (String) key is the name of the trait. +- `values` (List of String) values is the list of trait values. + + + +### Nested Schema for `spec.owner_grants` + +Optional: + +- `roles` (List of String) roles are the roles that are granted to users who are members of the Access List. +- `traits` (Attributes List) traits are the traits that are granted to users who are members of the Access List. (see [below for nested schema](#nested-schema-for-specowner_grantstraits)) + +### Nested Schema for `spec.owner_grants.traits` + +Optional: + +- `key` (String) key is the name of the trait. +- `values` (List of String) values is the list of trait values. + + + +### Nested Schema for `spec.ownership_requires` + +Optional: + +- `roles` (List of String) roles are the user roles that must be present for the user to obtain access. +- `traits` (Attributes List) traits are the traits that must be present for the user to obtain access. (see [below for nested schema](#nested-schema-for-specownership_requirestraits)) + +### Nested Schema for `spec.ownership_requires.traits` + +Optional: + +- `key` (String) key is the name of the trait. +- `values` (List of String) values is the list of trait values. 
+ diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/access_list_member.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/access_list_member.mdx new file mode 100644 index 0000000000000..746ce0cade95f --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/access_list_member.mdx @@ -0,0 +1,67 @@ +--- +title: Reference for the teleport_access_list_member Terraform data-source +sidebar_label: access_list_member +description: This page describes the supported values of the teleport_access_list_member data-source of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the `teleport_access_list_member` data source of the +Teleport Terraform provider. + + + + + +{/* schema generated by tfplugindocs */} +## Schema + +### Optional + +- `header` (Attributes) header is the header for the resource. (see [below for nested schema](#nested-schema-for-header)) +- `spec` (Attributes) spec is the specification for the Access List member. (see [below for nested schema](#nested-schema-for-spec)) + +### Nested Schema for `header` + +Required: + +- `version` (String) Version is the API version used to create the resource. It must be specified. Based on this version, Teleport will apply different defaults on resource creation or deletion. It must be an integer prefixed by "v". For example: `v1` + +Optional: + +- `kind` (String) kind is a resource kind. +- `metadata` (Attributes) metadata is resource metadata. (see [below for nested schema](#nested-schema-for-headermetadata)) +- `sub_kind` (String) sub_kind is an optional resource sub kind, used in some resources. + +### Nested Schema for `header.metadata` + +Required: + +- `name` (String) name is an object name. 
+ +Optional: + +- `description` (String) description is object description. +- `expires` (String) expires is a global expiry time header that can be set on any resource in the system. +- `labels` (Map of String) labels is a set of labels. +- `namespace` (String) namespace is object namespace. The field should be called "namespace" when it returns in Teleport 2.4. +- `revision` (String) revision is an opaque identifier which tracks the versions of a resource over time. Clients should ignore and not alter its value but must return the revision in any updates of a resource. + + + +### Nested Schema for `spec` + +Required: + +- `access_list` (String) access_list is the associated Access List. +- `membership_kind` (Number) membership_kind describes the type of membership, either `MEMBERSHIP_KIND_USER` or `MEMBERSHIP_KIND_LIST`. + +Optional: + +- `added_by` (String) added_by is the user that added this user to the Access List. +- `expires` (String) expires is when the user's membership to the Access List expires. +- `joined` (String) joined is when the user joined the Access List. +- `name` (String) name is the name of the member of the Access List. +- `reason` (String) reason is the reason this user was added to the Access List. + diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/access_monitoring_rule.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/access_monitoring_rule.mdx new file mode 100644 index 0000000000000..5250b0e536d63 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/access_monitoring_rule.mdx @@ -0,0 +1,97 @@ +--- +title: Reference for the teleport_access_monitoring_rule Terraform data-source +sidebar_label: access_monitoring_rule +description: This page describes the supported values of the teleport_access_monitoring_rule data-source of the Teleport Terraform provider. +--- + +{/*Auto-generated file.
Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the `teleport_access_monitoring_rule` data source of the +Teleport Terraform provider. + + + + + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `spec` (Attributes) Spec is an AccessMonitoringRule specification (see [below for nested schema](#nested-schema-for-spec)) +- `version` (String) version is the resource version + +### Optional + +- `metadata` (Attributes) metadata is the rule's metadata. (see [below for nested schema](#nested-schema-for-metadata)) +- `sub_kind` (String) sub_kind is an optional resource sub kind, used in some resources + +### Nested Schema for `spec` + +Required: + +- `subjects` (List of String) subjects the rule operates on; can be a resource kind or a particular resource property. + +Optional: + +- `automatic_review` (Attributes) automatic_review defines automatic review configurations for Access Requests. Both notification and automatic_review may be set within the same access_monitoring_rule. If both fields are set, the rule will trigger both notifications and automatic reviews for the same set of access events. Separate plugins may be used if both notification and automatic_review are set. (see [below for nested schema](#nested-schema-for-specautomatic_review)) +- `condition` (String) condition is a predicate expression that operates on the specified subject resources, and determines whether the subject will be moved into the desired state. +- `desired_state` (String) desired_state defines the desired state of the subject. For Access Request subjects, the desired_state may be set to `reviewed` to indicate that the Access Request should be automatically reviewed. +- `notification` (Attributes) notification defines the plugin configuration for notifications if the rule is triggered. Both notification and automatic_review may be set within the same access_monitoring_rule.
If both fields are set, the rule will trigger both notifications and automatic reviews for the same set of access events. Separate plugins may be used if both notification and automatic_review are set. (see [below for nested schema](#nested-schema-for-specnotification)) +- `schedules` (Attributes Map) schedules specifies a map of schedules that can be used to configure the access monitoring rule conditions. Available in Teleport v18.2.8 or higher. (see [below for nested schema](#nested-schema-for-specschedules)) +- `states` (List of String) states are the desired states to which the monitoring rule attempts to bring the subjects matching the condition. + +### Nested Schema for `spec.automatic_review` + +Optional: + +- `decision` (String) decision specifies the proposed state of the access review. This can be either 'APPROVED' or 'DENIED'. +- `integration` (String) integration is the name of the integration that is responsible for monitoring the rule. Set this value to `builtin` to monitor the rule with Teleport. + + +### Nested Schema for `spec.notification` + +Optional: + +- `name` (String) name is the name of the plugin to which this configuration should apply. +- `recipients` (List of String) recipients is the list of recipients the plugin should notify. + + +### Nested Schema for `spec.schedules` + +Optional: + +- `time` (Attributes) TimeSchedule specifies an in-line schedule. (see [below for nested schema](#nested-schema-for-specschedulestime)) + +### Nested Schema for `spec.schedules.time` + +Optional: + +- `shifts` (Attributes List) Shifts contains a set of shifts that make up the schedule. (see [below for nested schema](#nested-schema-for-specschedulestimeshifts)) +- `timezone` (String) Timezone specifies the schedule timezone. This field is optional and defaults to "UTC". Accepted values use timezone locations as defined in the IANA Time Zone Database, such as "America/Los_Angeles", "Europe/Lisbon", or "Asia/Singapore".
See https://data.iana.org/time-zones/tzdb/zone1970.tab for a list of supported values. + +### Nested Schema for `spec.schedules.time.shifts` + +Optional: + +- `end` (String) End specifies the end time in the format HH:MM, e.g., "12:30". +- `start` (String) Start specifies the start time in the format HH:MM, e.g., "12:30". +- `weekday` (String) Weekday specifies the day of the week, e.g., "Sunday", "Monday", "Tuesday". + + + + + +### Nested Schema for `metadata` + +Required: + +- `name` (String) name is an object name. + +Optional: + +- `description` (String) description is object description. +- `expires` (String) expires is a global expiry time header that can be set on any resource in the system. +- `labels` (Map of String) labels is a set of labels. + diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/app.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/app.mdx new file mode 100644 index 0000000000000..ae410ab849153 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/app.mdx @@ -0,0 +1,149 @@ +--- +title: Reference for the teleport_app Terraform data-source +sidebar_label: app +description: This page describes the supported values of the teleport_app data-source of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the `teleport_app` data source of the +Teleport Terraform provider. + + + + + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `version` (String) Version is the resource version. It must be specified. Supported values are: `v3`. + +### Optional + +- `metadata` (Attributes) Metadata is the app resource metadata. (see [below for nested schema](#nested-schema-for-metadata)) +- `spec` (Attributes) Spec is the app resource spec.
(see [below for nested schema](#nested-schema-for-spec)) +- `sub_kind` (String) SubKind is an optional resource subkind. + +### Nested Schema for `metadata` + +Required: + +- `name` (String) Name is an object name. + +Optional: + +- `description` (String) Description is object description. +- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. +- `labels` (Map of String) Labels is a set of labels. + + +### Nested Schema for `spec` + +Optional: + +- `aws` (Attributes) AWS contains additional options for AWS applications. (see [below for nested schema](#nested-schema-for-specaws)) +- `cloud` (String) Cloud identifies the cloud instance the app represents. +- `cors` (Attributes) CORSPolicy defines the Cross-Origin Resource Sharing settings for the app. (see [below for nested schema](#nested-schema-for-speccors)) +- `dynamic_labels` (Attributes Map) DynamicLabels are the app's command labels. (see [below for nested schema](#nested-schema-for-specdynamic_labels)) +- `identity_center` (Attributes) IdentityCenter encapsulates AWS identity-center specific information. Only valid for Identity Center account apps. (see [below for nested schema](#nested-schema-for-specidentity_center)) +- `insecure_skip_verify` (Boolean) InsecureSkipVerify disables app's TLS certificate verification. +- `integration` (String) Integration is the integration name that must be used to access this Application. Only applicable to AWS App Access. If present, the Application must use the Integration's credentials instead of ambient credentials to access Cloud APIs. +- `mcp` (Attributes) MCP contains MCP server related configurations. (see [below for nested schema](#nested-schema-for-specmcp)) +- `public_addr` (String) PublicAddr is the public address the application is accessible at. +- `required_app_names` (List of String) RequiredAppNames is a list of app names that are required for this app to function.
Any app listed here will be part of the authentication redirect flow and authenticate alongside this app. +- `rewrite` (Attributes) Rewrite is a list of rewriting rules to apply to requests and responses. (see [below for nested schema](#nested-schema-for-specrewrite)) +- `tcp_ports` (Attributes List) TCPPorts is a list of ports and port ranges that an app agent can forward connections to. Only applicable to TCP App Access. If this field is not empty, URI is expected to contain no port number and start with the tcp protocol. (see [below for nested schema](#nested-schema-for-spectcp_ports)) +- `uri` (String) URI is the web app endpoint. +- `use_any_proxy_public_addr` (Boolean) UseAnyProxyPublicAddr will rebuild this app's fqdn based on the proxy public addr that the request originated from. This should be true if your proxy has multiple proxy public addrs and you want the app to be accessible from any of them. If `public_addr` is explicitly set in the app spec, setting this value to true will overwrite that public address in the web UI. +- `user_groups` (List of String) UserGroups are a list of user group IDs that this app is associated with. + +### Nested Schema for `spec.aws` + +Optional: + +- `external_id` (String) ExternalID is the AWS External ID used when assuming roles in this app. +- `roles_anywhere_profile` (Attributes) RolesAnywhereProfile contains the IAM Roles Anywhere fields associated with this Application. These fields are set when performing the synchronization of AWS IAM Roles Anywhere Profiles into Teleport Apps. (see [below for nested schema](#nested-schema-for-specawsroles_anywhere_profile)) + +### Nested Schema for `spec.aws.roles_anywhere_profile` + +Optional: + +- `accept_role_session_name` (Boolean) Whether this Roles Anywhere Profile accepts a custom role session name. When not supported, the AWS Session Name will be the X.509 certificate's serial number. When supported, the AWS Session Name will be the identity's username.
This value comes from: https://docs.aws.amazon.com/rolesanywhere/latest/APIReference/API_ProfileDetail.html / acceptRoleSessionName +- `profile_arn` (String) ProfileARN is the AWS IAM Roles Anywhere Profile ARN that originated this Teleport App. + + + +### Nested Schema for `spec.cors` + +Optional: + +- `allow_credentials` (Boolean) allow_credentials indicates whether credentials are allowed. +- `allowed_headers` (List of String) allowed_headers specifies which headers can be used when accessing the app. +- `allowed_methods` (List of String) allowed_methods specifies which methods are allowed when accessing the app. +- `allowed_origins` (List of String) allowed_origins specifies which origins are allowed to access the app. +- `exposed_headers` (List of String) exposed_headers indicates which headers are made available to scripts via the browser. +- `max_age` (Number) max_age indicates how long (in seconds) the results of a preflight request can be cached. + + +### Nested Schema for `spec.dynamic_labels` + +Optional: + +- `command` (List of String) Command is a command to run. +- `period` (String) Period is a time between command runs. +- `result` (String) Result captures standard output. + + +### Nested Schema for `spec.identity_center` + +Optional: + +- `account_id` (String) Account ID is the AWS-assigned ID of the account. +- `permission_sets` (Attributes List) PermissionSets lists the available permission sets on the given account (see [below for nested schema](#nested-schema-for-specidentity_centerpermission_sets)) + +### Nested Schema for `spec.identity_center.permission_sets` + +Optional: + +- `arn` (String) ARN is the fully-formed ARN of the Permission Set. +- `assignment_name` (String) AssignmentID is the ID of the Teleport Account Assignment resource that represents this permission being assigned on the enclosing Account. +- `name` (String) Name is the human-readable name of the Permission Set.
+ + + +### Nested Schema for `spec.mcp` + +Optional: + +- `args` (List of String) Args to execute with the command. +- `command` (String) Command to launch stdio-based MCP servers. +- `run_as_host_user` (String) RunAsHostUser is the host user account under which the command will be executed. Required for stdio-based MCP servers. + + +### Nested Schema for `spec.rewrite` + +Optional: + +- `headers` (Attributes List) Headers is a list of headers to inject when passing the request over to the application. (see [below for nested schema](#nested-schema-for-specrewriteheaders)) +- `jwt_claims` (String) JWTClaims configures whether roles/traits are included in the JWT token. +- `redirect` (List of String) Redirect defines a list of hosts which will be rewritten to the public address of the application if they occur in the "Location" header. + +### Nested Schema for `spec.rewrite.headers` + +Optional: + +- `name` (String) Name is the http header name. +- `value` (String) Value is the http header value. + + + +### Nested Schema for `spec.tcp_ports` + +Optional: + +- `end_port` (Number) EndPort describes the end of the range, inclusive. If set, it must be between 2 and 65535 and be greater than Port when describing a port range. When omitted or set to zero, it signifies that the port range defines a single port. +- `port` (Number) Port describes the start of the range. It must be between 1 and 65535. 
+ diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/auth_preference.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/auth_preference.mdx new file mode 100644 index 0000000000000..149dc2fcb54e3 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/auth_preference.mdx @@ -0,0 +1,137 @@ +--- +title: Reference for the teleport_auth_preference Terraform data-source +sidebar_label: auth_preference +description: This page describes the supported values of the teleport_auth_preference data-source of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the `teleport_auth_preference` data source of the +Teleport Terraform provider. + + + + + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `spec` (Attributes) Spec is an AuthPreference specification (see [below for nested schema](#nested-schema-for-spec)) +- `version` (String) Version is the resource version. It must be specified. Supported values are: `v2`. + +### Optional + +- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) +- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources + +### Nested Schema for `spec` + +Optional: + +- `allow_headless` (Boolean) AllowHeadless enables/disables headless support. Headless authentication requires Webauthn to work. Defaults to true if Webauthn is configured, and false otherwise. +- `allow_local_auth` (Boolean) AllowLocalAuth is true if local authentication is enabled. +- `allow_passwordless` (Boolean) AllowPasswordless enables/disables passwordless support. Passwordless requires Webauthn to work. Defaults to true if Webauthn is configured, and false otherwise.
+- `connector_name` (String) ConnectorName is the name of the OIDC or SAML connector. If this value is not set, the first connector in the backend will be used. +- `default_session_ttl` (String) DefaultSessionTTL is the TTL to use for user certs when an explicit TTL is not requested. +- `device_trust` (Attributes) DeviceTrust holds settings related to trusted device verification. Requires Teleport Enterprise. (see [below for nested schema](#nested-schema-for-specdevice_trust)) +- `disconnect_expired_cert` (Boolean) DisconnectExpiredCert provides disconnect expired certificate setting - if true, connections with expired client certificates will get disconnected +- `hardware_key` (Attributes) HardwareKey are the settings for hardware key support. (see [below for nested schema](#nested-schema-for-spechardware_key)) +- `idp` (Attributes) IDP is a set of options related to accessing IdPs within Teleport. Requires Teleport Enterprise. (see [below for nested schema](#nested-schema-for-specidp)) +- `locking_mode` (String) LockingMode is the cluster-wide locking mode default. +- `message_of_the_day` (String) +- `okta` (Attributes) Okta is a set of options related to the Okta service in Teleport. Requires Teleport Enterprise. (see [below for nested schema](#nested-schema-for-specokta)) +- `require_session_mfa` (Number) RequireMFAType is the type of MFA requirement enforced for this cluster. 0 is "OFF", 1 is "SESSION", 2 is "SESSION_AND_HARDWARE_KEY", 3 is "HARDWARE_KEY_TOUCH", 4 is "HARDWARE_KEY_PIN", 5 is "HARDWARE_KEY_TOUCH_AND_PIN". +- `second_factor` (String) SecondFactor is the type of multi-factor. Deprecated: Prefer using SecondFactors instead. +- `second_factors` (List of Number) SecondFactors is a list of supported multi-factor types. 1 is "otp", 2 is "webauthn", 3 is "sso". If unspecified, the current default value is [1], or ["otp"]. +- `signature_algorithm_suite` (Number) SignatureAlgorithmSuite is the configured signature algorithm suite for the cluster.
If unspecified, the current default value is "legacy". 1 is "legacy", 2 is "balanced-v1", 3 is "fips-v1", 4 is "hsm-v1". +- `stable_unix_user_config` (Attributes) StableUnixUserConfig contains the cluster-wide configuration for stable UNIX users. (see [below for nested schema](#nested-schema-for-specstable_unix_user_config)) +- `type` (String) Type is the type of authentication. +- `u2f` (Attributes) U2F are the settings for the U2F device. (see [below for nested schema](#nested-schema-for-specu2f)) +- `webauthn` (Attributes) Webauthn are the settings for server-side Web Authentication support. (see [below for nested schema](#nested-schema-for-specwebauthn)) + +### Nested Schema for `spec.device_trust` + +Optional: + +- `auto_enroll` (Boolean) Enable device auto-enroll. Auto-enroll lets any user issue a device enrollment token for a known device that is not already enrolled. `tsh` takes advantage of auto-enroll to automatically enroll devices on user login, when appropriate. The effective cluster Mode still applies: AutoEnroll=true is meaningless if Mode="off". +- `ekcert_allowed_cas` (List of String) Allow list of EKCert CAs in PEM format. If present, only TPM devices that present an EKCert that is signed by a CA specified here may be enrolled (existing enrollments are unchanged). If not present, then the CA of TPM EKCerts will not be checked during enrollment, this allows any device to enroll. +- `mode` (String) Mode of verification for trusted devices. The following modes are supported: - "off": disables both device authentication and authorization. - "optional": allows both device authentication and authorization, but doesn't enforce the presence of device extensions for sensitive endpoints. - "required": enforces the presence of device extensions for sensitive endpoints. - "required-for-humans": enforces the presence of device extensions for sensitive endpoints, for human users only (bots are exempt). Mode is always "off" for OSS. 
Defaults to "optional" for Enterprise. + + +### Nested Schema for `spec.hardware_key` + +Optional: + +- `pin_cache_ttl` (String) PinCacheTTL is the amount of time in nanoseconds that Teleport clients will cache the user's PIV PIN when hardware key PIN policy is enabled. +- `piv_slot` (String) PIVSlot is a PIV slot that Teleport clients should use instead of the default based on private key policy. For example, "9a" or "9e". +- `serial_number_validation` (Attributes) SerialNumberValidation holds settings for hardware key serial number validation. By default, serial number validation is disabled. (see [below for nested schema](#nested-schema-for-spechardware_keyserial_number_validation)) + +### Nested Schema for `spec.hardware_key.serial_number_validation` + +Optional: + +- `enabled` (Boolean) Enabled indicates whether hardware key serial number validation is enabled. +- `serial_number_trait_name` (String) SerialNumberTraitName is an optional custom user trait name for hardware key serial numbers to replace the default: "hardware_key_serial_numbers". Note: Values for this user trait should be a comma-separated list of serial numbers, or a list of comma-separated lists, e.g. ["123", "345,678"] + + + +### Nested Schema for `spec.idp` + +Optional: + +- `saml` (Attributes) SAML are options related to the Teleport SAML IdP. (see [below for nested schema](#nested-schema-for-specidpsaml)) + +### Nested Schema for `spec.idp.saml` + +Optional: + +- `enabled` (Boolean) Enabled is set to true if this option allows access to the Teleport SAML IdP. + + + +### Nested Schema for `spec.okta` + +Optional: + +- `sync_period` (String) SyncPeriod is the duration between synchronization calls in nanoseconds. + + +### Nested Schema for `spec.stable_unix_user_config` + +Optional: + +- `enabled` (Boolean) Enabled signifies that (UNIX) Teleport SSH hosts should obtain a UID from the control plane if they're about to provision a host user with no other configured UID.
+- `first_uid` (Number) FirstUid is the start of the range of UIDs for autoprovisioned host users. The range is inclusive on both ends, so the specified UID can be assigned. +- `last_uid` (Number) LastUid is the end of the range of UIDs for autoprovisioned host users. The range is inclusive on both ends, so the specified UID can be assigned. + + +### Nested Schema for `spec.u2f` + +Optional: + +- `app_id` (String) AppID returns the application ID for universal multi-factor. +- `device_attestation_cas` (List of String) DeviceAttestationCAs contains the trusted attestation CAs for U2F devices. +- `facets` (List of String) Facets returns the facets for universal multi-factor. Deprecated: Kept for backwards compatibility reasons, but Facets have no effect since Teleport v10, when Webauthn replaced the U2F implementation. + + +### Nested Schema for `spec.webauthn` + +Optional: + +- `attestation_allowed_cas` (List of String) Allow list of device attestation CAs in PEM format. If present, only devices whose attestation certificates match the certificates specified here may be registered (existing registrations are unchanged). If supplied in conjunction with AttestationDeniedCAs, then both conditions need to be true for registration to be allowed (the device MUST match an allowed CA and MUST NOT match a denied CA). By default all devices are allowed. +- `attestation_denied_cas` (List of String) Deny list of device attestation CAs in PEM format. If present, only devices whose attestation certificates don't match the certificates specified here may be registered (existing registrations are unchanged). If supplied in conjunction with AttestationAllowedCAs, then both conditions need to be true for registration to be allowed (the device MUST match an allowed CA and MUST NOT match a denied CA). By default no devices are denied. +- `rp_id` (String) RPID is the ID of the Relying Party. It should be set to the domain name of the Teleport installation.
IMPORTANT: RPID must never change in the lifetime of the cluster, because it's recorded in the registration data on the WebAuthn device. If the RPID changes, all existing WebAuthn key registrations will become invalid and all users who use WebAuthn as the multi-factor will need to re-register. + + + +### Nested Schema for `metadata` + +Optional: + +- `description` (String) Description is object description +- `expires` (String) Expires is a global expiry time header can be set on any resource in the system. +- `labels` (Map of String) Labels is a set of labels + diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/autoupdate_config.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/autoupdate_config.mdx new file mode 100644 index 0000000000000..cd5e0e238d65a --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/autoupdate_config.mdx @@ -0,0 +1,81 @@ +--- +title: Reference for the teleport_autoupdate_config Terraform data-source +sidebar_label: autoupdate_config +description: This page describes the supported values of the teleport_autoupdate_config data-source of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the `teleport_autoupdate_config` data source of the +Teleport Terraform provider. 
+ + + + + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `spec` (Attributes) (see [below for nested schema](#nested-schema-for-spec)) +- `version` (String) + +### Optional + +- `metadata` (Attributes) (see [below for nested schema](#nested-schema-for-metadata)) +- `sub_kind` (String) + +### Nested Schema for `spec` + +Optional: + +- `agents` (Attributes) (see [below for nested schema](#nested-schema-for-specagents)) +- `tools` (Attributes) (see [below for nested schema](#nested-schema-for-spectools)) + +### Nested Schema for `spec.agents` + +Optional: + +- `maintenance_window_duration` (String) maintenance_window_duration is the maintenance window duration. This can only be set if `strategy` is "time-based". Once the window is over, the group transitions to the done state. Existing agents won't be updated until the next maintenance window. +- `mode` (String) mode specifies whether agent autoupdates are enabled, disabled, or paused. +- `schedules` (Attributes) schedules specifies schedules for updates of grouped agents. (see [below for nested schema](#nested-schema-for-specagentsschedules)) +- `strategy` (String) strategy to use for updating the agents. + +### Nested Schema for `spec.agents.schedules` + +Optional: + +- `regular` (Attributes List) regular schedules for non-critical versions. (see [below for nested schema](#nested-schema-for-specagentsschedulesregular)) + +### Nested Schema for `spec.agents.schedules.regular` + +Optional: + +- `canary_count` (Number) canary_count is the number of canary agents that will be updated before the whole group is updated. when set to 0, the group does not enter the canary phase. This number is capped to 5. This number must always be lower than the total number of agents in the group, else the rollout will be stuck. +- `days` (List of String) days when the update can run. 
Supported values are "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun" and "*" +- `name` (String) name of the group +- `start_hour` (Number) start_hour to initiate update +- `wait_hours` (Number) wait_hours after last group succeeds before this group can run. This can only be used when the strategy is "halt-on-failure". This field must be positive. + + + + +### Nested Schema for `spec.tools` + +Optional: + +- `mode` (String) Mode defines state of the client tools auto update. + + + +### Nested Schema for `metadata` + +Optional: + +- `description` (String) description is object description. +- `expires` (String) expires is a global expiry time header can be set on any resource in the system. +- `labels` (Map of String) labels is a set of labels. +- `name` (String) name is an object name. + diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/autoupdate_version.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/autoupdate_version.mdx new file mode 100644 index 0000000000000..e3aa18e578f71 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/autoupdate_version.mdx @@ -0,0 +1,63 @@ +--- +title: Reference for the teleport_autoupdate_version Terraform data-source +sidebar_label: autoupdate_version +description: This page describes the supported values of the teleport_autoupdate_version data-source of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the `teleport_autoupdate_version` data source of the +Teleport Terraform provider. 
+ + + + + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `spec` (Attributes) (see [below for nested schema](#nested-schema-for-spec)) +- `version` (String) + +### Optional + +- `metadata` (Attributes) (see [below for nested schema](#nested-schema-for-metadata)) +- `sub_kind` (String) + +### Nested Schema for `spec` + +Optional: + +- `agents` (Attributes) (see [below for nested schema](#nested-schema-for-specagents)) +- `tools` (Attributes) (see [below for nested schema](#nested-schema-for-spectools)) + +### Nested Schema for `spec.agents` + +Optional: + +- `mode` (String) autoupdate_mode to use for the rollout +- `schedule` (String) schedule to use for the rollout +- `start_version` (String) start_version is the version used for newly installed agents before their update window. +- `target_version` (String) target_version is the version that all agents will update to during their update window. + + +### Nested Schema for `spec.tools` + +Optional: + +- `target_version` (String) TargetVersion specifies the semantic version required for tools to establish a connection with the cluster. Client tools after connection to the cluster going to be updated to this version automatically. + + + +### Nested Schema for `metadata` + +Optional: + +- `description` (String) description is object description. +- `expires` (String) expires is a global expiry time header can be set on any resource in the system. +- `labels` (Map of String) labels is a set of labels. +- `name` (String) name is an object name. 
+ diff --git a/docs/pages/reference/terraform-provider/data-sources/cluster_maintenance_config.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/cluster_maintenance_config.mdx similarity index 93% rename from docs/pages/reference/terraform-provider/data-sources/cluster_maintenance_config.mdx rename to docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/cluster_maintenance_config.mdx index 338a4dd882953..543286936b977 100644 --- a/docs/pages/reference/terraform-provider/data-sources/cluster_maintenance_config.mdx +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/cluster_maintenance_config.mdx @@ -7,6 +7,9 @@ description: This page describes the supported values of the teleport_cluster_ma {/*Auto-generated file. Do not edit.*/} {/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} +This page describes the supported values of the `teleport_cluster_maintenance_config` data source of the +Teleport Terraform provider. + diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/cluster_networking_config.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/cluster_networking_config.mdx new file mode 100644 index 0000000000000..2130fe042aeda --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/cluster_networking_config.mdx @@ -0,0 +1,73 @@ +--- +title: Reference for the teleport_cluster_networking_config Terraform data-source +sidebar_label: cluster_networking_config +description: This page describes the supported values of the teleport_cluster_networking_config data-source of the Teleport Terraform provider. +--- + +{/*Auto-generated file. 
Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the `teleport_cluster_networking_config` data source of the +Teleport Terraform provider. + + + + + +{/* schema generated by tfplugindocs */} +## Schema + +### Optional + +- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) +- `spec` (Attributes) Spec is a ClusterNetworkingConfig specification (see [below for nested schema](#nested-schema-for-spec)) +- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources +- `version` (String) Version is the resource version. It must be specified. Supported values are: `v2`. + +### Nested Schema for `metadata` + +Optional: + +- `description` (String) Description is object description +- `expires` (String) Expires is a global expiry time header can be set on any resource in the system. +- `labels` (Map of String) Labels is a set of labels + + +### Nested Schema for `spec` + +Optional: + +- `assist_command_execution_workers` (Number) AssistCommandExecutionWorkers determines the number of workers that will execute arbitrary Assist commands on servers in parallel. +- `case_insensitive_routing` (Boolean) CaseInsensitiveRouting causes proxies to use case-insensitive hostname matching. +- `client_idle_timeout` (String) ClientIdleTimeout sets global cluster default setting for client idle timeouts. +- `idle_timeout_message` (String) ClientIdleTimeoutMessage is the message sent to the user when a connection times out. +- `keep_alive_count_max` (Number) KeepAliveCountMax is the number of keep-alive messages that can be missed before the server disconnects the connection to the client. +- `keep_alive_interval` (String) KeepAliveInterval is the interval at which the server sends keep-alive messages to the client. +- `proxy_listener_mode` (Number) ProxyListenerMode is proxy listener mode used by Teleport Proxies.
0 is "separate"; 1 is "multiplex". +- `proxy_ping_interval` (String) ProxyPingInterval defines the interval at which the TLS routing ping message should be sent. This is applicable only when using ping-wrapped connections; regular TLS routing connections are not affected. +- `routing_strategy` (Number) RoutingStrategy determines the strategy used to route to nodes. 0 is "unambiguous_match"; 1 is "most_recent". +- `session_control_timeout` (String) SessionControlTimeout is the session control lease expiry and defines the upper limit of how long a node may be out of contact with the Auth Service before it begins terminating controlled sessions. +- `ssh_dial_timeout` (String) SSHDialTimeout is a custom dial timeout used when establishing SSH connections. If not set, the default timeout of 30s will be used. +- `tunnel_strategy` (Attributes) TunnelStrategyV1 determines the tunnel strategy used in the cluster. (see [below for nested schema](#nested-schema-for-spectunnel_strategy)) +- `web_idle_timeout` (String) WebIdleTimeout sets global cluster default setting for the web UI idle timeouts.
+ +### Nested Schema for `spec.tunnel_strategy` + +Optional: + +- `agent_mesh` (Attributes) (see [below for nested schema](#nested-schema-for-spectunnel_strategyagent_mesh)) +- `proxy_peering` (Attributes) (see [below for nested schema](#nested-schema-for-spectunnel_strategyproxy_peering)) + +### Nested Schema for `spec.tunnel_strategy.agent_mesh` + +Optional: + +- `active` (Boolean) Automatically generated field preventing empty message errors + + +### Nested Schema for `spec.tunnel_strategy.proxy_peering` + +Optional: + +- `agent_connection_count` (Number) + diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/data-sources.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/data-sources.mdx new file mode 100644 index 0000000000000..068b458f182fb --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/data-sources.mdx @@ -0,0 +1,43 @@ +--- +title: "Terraform data-sources index" +description: "Index of all the data-sources supported by the Teleport Terraform Provider" +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +{/* + This file will be renamed data-sources.mdx during build time. + The template name is reserved by tfplugindocs so we suffix with -index. 
+*/} + +The Teleport Terraform provider supports the following data-sources: + + - [`teleport_access_list`](./access_list.mdx) + - [`teleport_access_list_member`](./access_list_member.mdx) + - [`teleport_access_monitoring_rule`](./access_monitoring_rule.mdx) + - [`teleport_app`](./app.mdx) + - [`teleport_auth_preference`](./auth_preference.mdx) + - [`teleport_autoupdate_config`](./autoupdate_config.mdx) + - [`teleport_autoupdate_version`](./autoupdate_version.mdx) + - [`teleport_cluster_maintenance_config`](./cluster_maintenance_config.mdx) + - [`teleport_cluster_networking_config`](./cluster_networking_config.mdx) + - [`teleport_database`](./database.mdx) + - [`teleport_discovery_config`](./discovery_config.mdx) + - [`teleport_dynamic_windows_desktop`](./dynamic_windows_desktop.mdx) + - [`teleport_github_connector`](./github_connector.mdx) + - [`teleport_health_check_config`](./health_check_config.mdx) + - [`teleport_installer`](./installer.mdx) + - [`teleport_integration`](./integration.mdx) + - [`teleport_login_rule`](./login_rule.mdx) + - [`teleport_oidc_connector`](./oidc_connector.mdx) + - [`teleport_okta_import_rule`](./okta_import_rule.mdx) + - [`teleport_provision_token`](./provision_token.mdx) + - [`teleport_role`](./role.mdx) + - [`teleport_saml_connector`](./saml_connector.mdx) + - [`teleport_session_recording_config`](./session_recording_config.mdx) + - [`teleport_static_host_user`](./static_host_user.mdx) + - [`teleport_trusted_cluster`](./trusted_cluster.mdx) + - [`teleport_trusted_device`](./trusted_device.mdx) + - [`teleport_user`](./user.mdx) + - [`teleport_workload_identity`](./workload_identity.mdx) diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/database.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/database.mdx new file mode 100644 index 0000000000000..f5824cd9c9a45 --- /dev/null +++ 
b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/database.mdx @@ -0,0 +1,273 @@ +--- +title: Reference for the teleport_database Terraform data-source +sidebar_label: database +description: This page describes the supported values of the teleport_database data-source of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the `teleport_database` data source of the +Teleport Terraform provider. + + + + + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `version` (String) Version is the resource version. It must be specified. Supported values are: `v3`. + +### Optional + +- `metadata` (Attributes) Metadata is the database metadata. (see [below for nested schema](#nested-schema-for-metadata)) +- `spec` (Attributes) Spec is the database spec. (see [below for nested schema](#nested-schema-for-spec)) +- `sub_kind` (String) SubKind is an optional resource subkind. + +### Nested Schema for `metadata` + +Required: + +- `name` (String) Name is an object name + +Optional: + +- `description` (String) Description is object description +- `expires` (String) Expires is a global expiry time header can be set on any resource in the system. +- `labels` (Map of String) Labels is a set of labels + + +### Nested Schema for `spec` + +Required: + +- `protocol` (String) Protocol is the database protocol: postgres, mysql, mongodb, etc. +- `uri` (String) URI is the database connection endpoint. + +Optional: + +- `ad` (Attributes) AD is the Active Directory configuration for the database. (see [below for nested schema](#nested-schema-for-specad)) +- `admin_user` (Attributes) AdminUser is the database admin user for automatic user provisioning. 
(see [below for nested schema](#nested-schema-for-specadmin_user)) +- `aws` (Attributes) AWS contains AWS specific settings for RDS/Aurora/Redshift databases. (see [below for nested schema](#nested-schema-for-specaws)) +- `azure` (Attributes) Azure contains Azure specific database metadata. (see [below for nested schema](#nested-schema-for-specazure)) +- `ca_cert` (String) CACert is the PEM-encoded database CA certificate. DEPRECATED: Moved to TLS.CACert. DELETE IN 10.0. +- `dynamic_labels` (Attributes Map) DynamicLabels is the database dynamic labels. (see [below for nested schema](#nested-schema-for-specdynamic_labels)) +- `gcp` (Attributes) GCP contains parameters specific to GCP Cloud SQL databases. (see [below for nested schema](#nested-schema-for-specgcp)) +- `mongo_atlas` (Attributes) MongoAtlas contains Atlas metadata about the database. (see [below for nested schema](#nested-schema-for-specmongo_atlas)) +- `mysql` (Attributes) MySQL is an additional section with MySQL database options. (see [below for nested schema](#nested-schema-for-specmysql)) +- `oracle` (Attributes) Oracle is an additional section with Oracle configuration options. (see [below for nested schema](#nested-schema-for-specoracle)) +- `tls` (Attributes) TLS is the TLS configuration used when establishing a connection to the target database. Allows providing a custom CA cert or overriding the server name. (see [below for nested schema](#nested-schema-for-spectls)) + +### Nested Schema for `spec.ad` + +Optional: + +- `domain` (String) Domain is the Active Directory domain the database resides in. +- `kdc_host_name` (String) KDCHostName is the host name for a KDC for x509 Authentication. +- `keytab_file` (String) KeytabFile is the path to the Kerberos keytab file. +- `krb5_file` (String) Krb5File is the path to the Kerberos configuration file. Defaults to /etc/krb5.conf. +- `ldap_cert` (String) LDAPCert is a certificate from Windows LDAP/AD, optional; only for x509 Authentication.
+- `ldap_service_account_name` (String) LDAPServiceAccountName is the name of the service account for performing LDAP queries. Required for x509 Auth / PKINIT. +- `ldap_service_account_sid` (String) LDAPServiceAccountSID is the SID of the service account for performing LDAP queries. Required for x509 Auth / PKINIT. +- `spn` (String) SPN is the service principal name for the database. + + +### Nested Schema for `spec.admin_user` + +Optional: + +- `default_database` (String) DefaultDatabase is the database that the privileged database user logs into by default. Depending on the database type, this database may be used to store procedures or data for managing database users. +- `name` (String) Name is the username of the privileged database user. + + +### Nested Schema for `spec.aws` + +Optional: + +- `account_id` (String) AccountID is the AWS account ID this database belongs to. +- `assume_role_arn` (String) AssumeRoleARN is an optional AWS role ARN to assume when accessing a database. Set this field and ExternalID to enable access across AWS accounts. +- `docdb` (Attributes) DocumentDB contains Amazon DocumentDB-specific metadata. (see [below for nested schema](#nested-schema-for-specawsdocdb)) +- `elasticache` (Attributes) ElastiCache contains Amazon ElastiCache Redis-specific metadata. (see [below for nested schema](#nested-schema-for-specawselasticache)) +- `elasticache_serverless` (Attributes) ElastiCacheServerless contains Amazon ElastiCache Serverless metadata. (see [below for nested schema](#nested-schema-for-specawselasticache_serverless)) +- `external_id` (String) ExternalID is an optional AWS external ID used to enable assuming an AWS role across accounts. +- `iam_policy_status` (Number) IAMPolicyStatus indicates whether the IAM Policy is configured properly for database access. If not, the user must update the AWS profile identity to allow access to the Database. E.g. for an RDS Database: the underlying AWS profile allows for `rds-db:connect` for the Database.
+- `memorydb` (Attributes) MemoryDB contains AWS MemoryDB specific metadata. (see [below for nested schema](#nested-schema-for-specawsmemorydb)) +- `opensearch` (Attributes) OpenSearch contains AWS OpenSearch specific metadata. (see [below for nested schema](#nested-schema-for-specawsopensearch)) +- `rds` (Attributes) RDS contains RDS specific metadata. (see [below for nested schema](#nested-schema-for-specawsrds)) +- `rdsproxy` (Attributes) RDSProxy contains AWS Proxy specific metadata. (see [below for nested schema](#nested-schema-for-specawsrdsproxy)) +- `redshift` (Attributes) Redshift contains Redshift specific metadata. (see [below for nested schema](#nested-schema-for-specawsredshift)) +- `redshift_serverless` (Attributes) RedshiftServerless contains Amazon Redshift Serverless-specific metadata. (see [below for nested schema](#nested-schema-for-specawsredshift_serverless)) +- `region` (String) Region is an AWS cloud region. +- `secret_store` (Attributes) SecretStore contains secret store configurations. (see [below for nested schema](#nested-schema-for-specawssecret_store)) +- `session_tags` (Map of String) SessionTags is a list of AWS STS session tags. + +### Nested Schema for `spec.aws.docdb` + +Optional: + +- `cluster_id` (String) ClusterID is the cluster identifier. +- `endpoint_type` (String) EndpointType is the type of the endpoint. +- `instance_id` (String) InstanceID is the instance identifier. + + +### Nested Schema for `spec.aws.elasticache` + +Optional: + +- `endpoint_type` (String) EndpointType is the type of the endpoint. +- `replication_group_id` (String) ReplicationGroupID is the Redis replication group ID. +- `transit_encryption_enabled` (Boolean) TransitEncryptionEnabled indicates whether in-transit encryption (TLS) is enabled. +- `user_group_ids` (List of String) UserGroupIDs is a list of user group IDs.
+ + +### Nested Schema for `spec.aws.elasticache_serverless` + +Optional: + +- `cache_name` (String) CacheName is an ElastiCache Serverless cache name. + + +### Nested Schema for `spec.aws.memorydb` + +Optional: + +- `acl_name` (String) ACLName is the name of the ACL associated with the cluster. +- `cluster_name` (String) ClusterName is the name of the MemoryDB cluster. +- `endpoint_type` (String) EndpointType is the type of the endpoint. +- `tls_enabled` (Boolean) TLSEnabled indicates whether in-transit encryption (TLS) is enabled. + + +### Nested Schema for `spec.aws.opensearch` + +Optional: + +- `domain_id` (String) DomainID is the ID of the domain. +- `domain_name` (String) DomainName is the name of the domain. +- `endpoint_type` (String) EndpointType is the type of the endpoint. + + +### Nested Schema for `spec.aws.rds` + +Optional: + +- `cluster_id` (String) ClusterID is the RDS cluster (Aurora) identifier. +- `iam_auth` (Boolean) IAMAuth indicates whether database IAM authentication is enabled. +- `instance_id` (String) InstanceID is the RDS instance identifier. +- `resource_id` (String) ResourceID is the RDS instance resource identifier (db-xxx). +- `security_groups` (List of String) SecurityGroups is a list of attached security groups for the RDS instance. +- `subnets` (List of String) Subnets is a list of subnets for the RDS instance. +- `vpc_id` (String) VPCID is the VPC where the RDS is running. + + +### Nested Schema for `spec.aws.rdsproxy` + +Optional: + +- `custom_endpoint_name` (String) CustomEndpointName is the identifier of an RDS Proxy custom endpoint. +- `name` (String) Name is the identifier of an RDS Proxy. +- `resource_id` (String) ResourceID is the RDS instance resource identifier (prx-xxx). + + +### Nested Schema for `spec.aws.redshift` + +Optional: + +- `cluster_id` (String) ClusterID is the Redshift cluster identifier. 
+ + +### Nested Schema for `spec.aws.redshift_serverless` + +Optional: + +- `endpoint_name` (String) EndpointName is the VPC endpoint name. +- `workgroup_id` (String) WorkgroupID is the workgroup ID. +- `workgroup_name` (String) WorkgroupName is the workgroup name. + + +### Nested Schema for `spec.aws.secret_store` + +Optional: + +- `key_prefix` (String) KeyPrefix specifies the secret key prefix. +- `kms_key_id` (String) KMSKeyID specifies the AWS KMS key for encryption. + + + +### Nested Schema for `spec.azure` + +Optional: + +- `is_flexi_server` (Boolean) IsFlexiServer is true if the database is an Azure Flexible server. +- `name` (String) Name is the Azure database server name. +- `redis` (Attributes) Redis contains Azure Cache for Redis specific database metadata. (see [below for nested schema](#nested-schema-for-specazureredis)) +- `resource_id` (String) ResourceID is the Azure fully qualified ID for the resource. + +### Nested Schema for `spec.azure.redis` + +Optional: + +- `clustering_policy` (String) ClusteringPolicy is the clustering policy for Redis Enterprise. + + + +### Nested Schema for `spec.dynamic_labels` + +Optional: + +- `command` (List of String) Command is a command to run +- `period` (String) Period is a time between command runs +- `result` (String) Result captures standard output + + +### Nested Schema for `spec.gcp` + +Optional: + +- `alloydb` (Attributes) AlloyDB contains AlloyDB specific configuration elements. (see [below for nested schema](#nested-schema-for-specgcpalloydb)) +- `instance_id` (String) InstanceID is the Cloud SQL instance ID. +- `project_id` (String) ProjectID is the GCP project ID the Cloud SQL instance resides in. + +### Nested Schema for `spec.gcp.alloydb` + +Optional: + +- `endpoint_override` (String) EndpointOverride is an override of endpoint address to use. +- `endpoint_type` (String) EndpointType is the database endpoint type to use. Should be one of: "private", "public", "psc". 
+ + + +### Nested Schema for `spec.mongo_atlas` + +Optional: + +- `name` (String) Name is the Atlas database instance name. + + +### Nested Schema for `spec.mysql` + +Optional: + +- `server_version` (String) ServerVersion is the server version reported by DB proxy if the runtime information is not available. + + +### Nested Schema for `spec.oracle` + +Optional: + +- `audit_user` (String) AuditUser is the name of the Oracle database user that should be used to access the internal audit trail. +- `retry_count` (Number) RetryCount is the maximum number of times to retry connecting to a host upon failure. If not specified it defaults to 2, for a total of 3 connection attempts. +- `shuffle_hostnames` (Boolean) ShuffleHostnames, when true, randomizes the order of hosts to connect to from the provided list. + + +### Nested Schema for `spec.tls` + +Optional: + +- `ca_cert` (String) CACert is an optional user-provided CA certificate used for verifying the database TLS connection. +- `mode` (Number) Mode is a TLS connection mode. 0 is "verify-full"; 1 is "verify-ca"; 2 is "insecure". +- `server_name` (String) ServerName allows providing a custom hostname. This value overrides the servername/hostname on a certificate during validation. +- `trust_system_cert_pool` (Boolean) TrustSystemCertPool allows Teleport to trust certificate authorities available on the host system. If not set (by default), Teleport only trusts self-signed databases with TLS certificates signed by Teleport's Database Server CA or the ca_cert specified in this TLS setting. For cloud-hosted databases, Teleport downloads the corresponding required CAs for validation. 
+ diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/discovery_config.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/discovery_config.mdx new file mode 100644 index 0000000000000..c93ea23d5ef69 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/discovery_config.mdx @@ -0,0 +1,297 @@ +--- +title: Reference for the teleport_discovery_config Terraform data-source +sidebar_label: discovery_config +description: This page describes the supported values of the teleport_discovery_config data-source of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the `teleport_discovery_config` data source of the +Teleport Terraform provider. + + + + + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `header` (Attributes) Header is the resource header. (see [below for nested schema](#nested-schema-for-header)) +- `spec` (Attributes) Spec is an DiscoveryConfig specification. (see [below for nested schema](#nested-schema-for-spec)) + +### Nested Schema for `header` + +Required: + +- `metadata` (Attributes) metadata is resource metadata. (see [below for nested schema](#nested-schema-for-headermetadata)) +- `version` (String) Version is the API version used to create the resource. It must be specified. Based on this version, Teleport will apply different defaults on resource creation or deletion. It must be an integer prefixed by "v". For example: `v1` + +Optional: + +- `kind` (String) kind is a resource kind. +- `sub_kind` (String) sub_kind is an optional resource sub kind, used in some resources. + +### Nested Schema for `header.metadata` + +Required: + +- `name` (String) name is an object name. + +Optional: + +- `description` (String) description is object description. 
+- `expires` (String) expires is a global expiry time header can be set on any resource in the system. +- `labels` (Map of String) labels is a set of labels. +- `namespace` (String) namespace is object namespace. The field should be called "namespace" when it returns in Teleport 2.4. +- `revision` (String) revision is an opaque identifier which tracks the versions of a resource over time. Clients should ignore and not alter its value but must return the revision in any updates of a resource. + + + +### Nested Schema for `spec` + +Optional: + +- `access_graph` (Attributes) AccessGraph is the configuration for syncing Cloud accounts into Access Graph. (see [below for nested schema](#nested-schema-for-specaccess_graph)) +- `aws` (Attributes List) AWS is a list of AWS Matchers. (see [below for nested schema](#nested-schema-for-specaws)) +- `azure` (Attributes List) Azure is a list of Azure Matchers. (see [below for nested schema](#nested-schema-for-specazure)) +- `discovery_group` (String) DiscoveryGroup is used by discovery_service to add extra matchers. All the discovery_services that have the same discovery_group will load the matchers of this resource. +- `gcp` (Attributes List) GCP is a list of GCP Matchers. (see [below for nested schema](#nested-schema-for-specgcp)) +- `kube` (Attributes List) Kube is a list of Kubernetes Matchers. (see [below for nested schema](#nested-schema-for-speckube)) + +### Nested Schema for `spec.access_graph` + +Optional: + +- `aws` (Attributes List) AWS is a configuration for AWS Access Graph service poll service. (see [below for nested schema](#nested-schema-for-specaccess_graphaws)) +- `azure` (Attributes List) Azure is a configuration for Azure Access Graph service poll service. 
(see [below for nested schema](#nested-schema-for-specaccess_graphazure)) +- `poll_interval` (String) PollInterval is the frequency at which to poll for resources + +### Nested Schema for `spec.access_graph.aws` + +Optional: + +- `assume_role` (Attributes) AssumeRoleARN is the AWS role to assume for database discovery. (see [below for nested schema](#nested-schema-for-specaccess_graphawsassume_role)) +- `cloud_trail_logs` (Attributes) Configuration settings for collecting AWS CloudTrail logs via an SQS queue. (see [below for nested schema](#nested-schema-for-specaccess_graphawscloud_trail_logs)) +- `eks_audit_logs` (Attributes) (see [below for nested schema](#nested-schema-for-specaccess_graphawseks_audit_logs)) +- `integration` (String) Integration is the integration name used to generate credentials to interact with AWS APIs. +- `regions` (List of String) Regions are AWS regions to import resources from. + +### Nested Schema for `spec.access_graph.aws.assume_role` + +Optional: + +- `external_id` (String) ExternalID is the external ID used to assume a role in another account. +- `role_arn` (String) RoleARN is the fully specified AWS IAM role ARN. + + +### Nested Schema for `spec.access_graph.aws.cloud_trail_logs` + +Optional: + +- `region` (String) The AWS region of the SQS queue for CloudTrail notifications, ex.: "us-east-2". +- `sqs_queue` (String) The name or URL for CloudTrail log events, ex.: "demo-cloudtrail-queue". + + +### Nested Schema for `spec.access_graph.aws.eks_audit_logs` + +Optional: + +- `tags` (Map of List of String) The tags of EKS clusters for which apiserver audit logs should be fetched. + + + +### Nested Schema for `spec.access_graph.azure` + +Optional: + +- `integration` (String) Integration is the integration name used to generate credentials to interact with AWS APIs. 
+- `subscription_id` (String) SubscriptionID is the ID of the Azure subscription to sync resources from. + + + +### Nested Schema for `spec.aws` + +Optional: + +- `assume_role` (Attributes) AssumeRoleARN is the AWS role to assume for database discovery. (see [below for nested schema](#nested-schema-for-specawsassume_role)) +- `install` (Attributes) Params sets the join method when installing on discovered EC2 nodes (see [below for nested schema](#nested-schema-for-specawsinstall)) +- `integration` (String) Integration is the integration name used to generate credentials to interact with AWS APIs. Environment credentials will not be used when this value is set. +- `kube_app_discovery` (Boolean) KubeAppDiscovery controls whether Kubernetes App Discovery will be enabled for agents running on discovered clusters, currently only affects AWS EKS discovery in integration mode. +- `organization` (Attributes) Organization is an AWS Organization matcher for discovering resources across multiple accounts under an Organization. (see [below for nested schema](#nested-schema-for-specawsorganization)) +- `regions` (List of String) Regions are AWS regions to query for databases. +- `setup_access_for_arn` (String) SetupAccessForARN is the role that the Discovery Service should create EKS Access Entries for. This value should match the IAM identity that Teleport Kubernetes Service uses. If this value is empty, the Discovery Service will attempt to set up access for its own identity (self). +- `ssm` (Attributes) SSM provides options to use when sending a document command to an EC2 node (see [below for nested schema](#nested-schema-for-specawsssm)) +- `tags` (Map of List of String) Tags are AWS resource Tags to match. +- `types` (List of String) Types are AWS database types to match: "ec2", "rds", "redshift", "elasticache", or "memorydb". 
+ +### Nested Schema for `spec.aws.assume_role` + +Optional: + +- `external_id` (String) ExternalID is the external ID used to assume a role in another account. +- `role_arn` (String) RoleARN is the fully specified AWS IAM role ARN. + + +### Nested Schema for `spec.aws.install` + +Optional: + +- `azure` (Attributes) Azure is the set of Azure-specific installation parameters. (see [below for nested schema](#nested-schema-for-specawsinstallazure)) +- `enroll_mode` (Number) EnrollMode indicates the enrollment mode to be used when adding a node. Valid values: 0: uses eice for EC2 matchers which use an integration and script for all the other methods 1: uses script mode 2: uses eice mode +- `http_proxy_settings` (Attributes) HTTPProxySettings defines HTTP proxy settings for making HTTP requests. When set, this will set the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables before running the installation. (see [below for nested schema](#nested-schema-for-specawsinstallhttp_proxy_settings)) +- `install_teleport` (Boolean) InstallTeleport disables agentless discovery +- `join_method` (String) JoinMethod is the method to use when joining the cluster +- `join_token` (String) JoinToken is the token to use when joining the cluster +- `proxy_addr` (String) PublicProxyAddr is the address of the proxy the discovered node should use to connect to the cluster. +- `script_name` (String) ScriptName is the name of the teleport installer script resource for the cloud instance to execute +- `sshd_config` (String) SSHDConfig provides the path to write sshd configuration changes +- `suffix` (String) Suffix indicates the installation suffix for the teleport installation. Set this value if you want multiple installations of Teleport. See --install-suffix flag in teleport-update program. Note: only supported for Amazon EC2. Suffix name can only contain alphanumeric characters and hyphens. 
+- `update_group` (String) UpdateGroup indicates the update group for the teleport installation. This value is used to group installations in order to update them in batches. See --group flag in teleport-update program. Note: only supported for Amazon EC2. Group name can only contain alphanumeric characters and hyphens. + +### Nested Schema for `spec.aws.install.azure` + +Optional: + +- `client_id` (String) ClientID is the client ID of the managed identity discovered nodes should use to join the cluster. + + +### Nested Schema for `spec.aws.install.http_proxy_settings` + +Optional: + +- `http_proxy` (String) HTTPProxy is the URL for the HTTP proxy to use when making requests. When applied, this will set the HTTP_PROXY environment variable. +- `https_proxy` (String) HTTPSProxy is the URL for the HTTPS Proxy to use when making requests. When applied, this will set the HTTPS_PROXY environment variable. +- `no_proxy` (String) NoProxy is a comma separated list of URLs that will be excluded from proxying. When applied, this will set the NO_PROXY environment variable. + + + +### Nested Schema for `spec.aws.organization` + +Optional: + +- `organization_id` (String) OrganizationID is the AWS Organization ID to match against. Required. +- `organizational_units` (Attributes) OrganizationalUnits contains rules for matching AWS accounts based on their Organizational Units. (see [below for nested schema](#nested-schema-for-specawsorganizationorganizational_units)) + +### Nested Schema for `spec.aws.organization.organizational_units` + +Optional: + +- `exclude` (List of String) Exclude is a list of AWS Organizational Unit IDs to exclude. Only exact matches or wildcard (*) are supported. If empty, no Organizational Units are excluded by default. +- `include` (List of String) Include is a list of AWS Organizational Unit IDs to match. Only exact matches or wildcard (*) are supported. If empty, all Organizational Units are included by default. 
+ + + +### Nested Schema for `spec.aws.ssm` + +Optional: + +- `document_name` (String) DocumentName is the name of the document to use when executing an SSM command + + + +### Nested Schema for `spec.azure` + +Optional: + +- `install_params` (Attributes) Params sets the join method when installing on discovered Azure nodes. (see [below for nested schema](#nested-schema-for-specazureinstall_params)) +- `integration` (String) Integration is the integration name used to generate credentials to interact with Azure APIs. Environment credentials will not be used when this value is set. +- `regions` (List of String) Regions are Azure locations to match for databases. +- `resource_groups` (List of String) ResourceGroups are Azure resource groups to query for resources. +- `subscriptions` (List of String) Subscriptions are Azure subscriptions to query for resources. +- `tags` (Map of List of String) ResourceTags are Azure tags on resources to match. +- `types` (List of String) Types are Azure types to match: "mysql", "postgres", "aks", "vm" + +### Nested Schema for `spec.azure.install_params` + +Required: + +- `join_method` (String) JoinMethod is the method to use when joining the cluster +- `join_token` (String) JoinToken is the token to use when joining the cluster + +Optional: + +- `azure` (Attributes) Azure is the set of Azure-specific installation parameters. (see [below for nested schema](#nested-schema-for-specazureinstall_paramsazure)) +- `enroll_mode` (Number) EnrollMode indicates the enrollment mode to be used when adding a node. Valid values: 0: uses eice for EC2 matchers which use an integration and script for all the other methods 1: uses script mode 2: uses eice mode +- `http_proxy_settings` (Attributes) HTTPProxySettings defines HTTP proxy settings for making HTTP requests. When set, this will set the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables before running the installation. 
(see [below for nested schema](#nested-schema-for-specazureinstall_paramshttp_proxy_settings)) +- `install_teleport` (Boolean) InstallTeleport disables agentless discovery +- `proxy_addr` (String) PublicProxyAddr is the address of the proxy the discovered node should use to connect to the cluster. +- `script_name` (String) ScriptName is the name of the teleport installer script resource for the cloud instance to execute +- `sshd_config` (String) SSHDConfig provides the path to write sshd configuration changes +- `suffix` (String) Suffix indicates the installation suffix for the teleport installation. Set this value if you want multiple installations of Teleport. See --install-suffix flag in teleport-update program. Note: only supported for Amazon EC2. Suffix name can only contain alphanumeric characters and hyphens. +- `update_group` (String) UpdateGroup indicates the update group for the teleport installation. This value is used to group installations in order to update them in batches. See --group flag in teleport-update program. Note: only supported for Amazon EC2. Group name can only contain alphanumeric characters and hyphens. + +### Nested Schema for `spec.azure.install_params.azure` + +Required: + +- `client_id` (String) ClientID is the client ID of the managed identity discovered nodes should use to join the cluster. + + +### Nested Schema for `spec.azure.install_params.http_proxy_settings` + +Optional: + +- `http_proxy` (String) HTTPProxy is the URL for the HTTP proxy to use when making requests. When applied, this will set the HTTP_PROXY environment variable. +- `https_proxy` (String) HTTPSProxy is the URL for the HTTPS Proxy to use when making requests. When applied, this will set the HTTPS_PROXY environment variable. +- `no_proxy` (String) NoProxy is a comma separated list of URLs that will be excluded from proxying. When applied, this will set the NO_PROXY environment variable. 
+ + + + +### Nested Schema for `spec.gcp` + +Optional: + +- `install_params` (Attributes) Params sets the join method when installing on discovered GCP nodes. (see [below for nested schema](#nested-schema-for-specgcpinstall_params)) +- `labels` (Map of List of String) Labels are GCP labels to match. +- `locations` (List of String) Locations are GKE locations to search resources for. +- `project_ids` (List of String) ProjectIDs are the GCP project ID where the resources are deployed. +- `service_accounts` (List of String) ServiceAccounts are the emails of service accounts attached to VMs. +- `tags` (Map of List of String) Tags is obsolete and only exists for backwards compatibility. Use Labels instead. +- `types` (List of String) Types are GKE resource types to match: "gke", "vm". + +### Nested Schema for `spec.gcp.install_params` + +Optional: + +- `azure` (Attributes) Azure is the set of Azure-specific installation parameters. (see [below for nested schema](#nested-schema-for-specgcpinstall_paramsazure)) +- `enroll_mode` (Number) EnrollMode indicates the enrollment mode to be used when adding a node. Valid values: 0: uses eice for EC2 matchers which use an integration and script for all the other methods 1: uses script mode 2: uses eice mode +- `http_proxy_settings` (Attributes) HTTPProxySettings defines HTTP proxy settings for making HTTP requests. When set, this will set the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables before running the installation. (see [below for nested schema](#nested-schema-for-specgcpinstall_paramshttp_proxy_settings)) +- `install_teleport` (Boolean) InstallTeleport disables agentless discovery +- `join_method` (String) JoinMethod is the method to use when joining the cluster +- `join_token` (String) JoinToken is the token to use when joining the cluster +- `proxy_addr` (String) PublicProxyAddr is the address of the proxy the discovered node should use to connect to the cluster. 
+- `script_name` (String) ScriptName is the name of the teleport installer script resource for the cloud instance to execute +- `sshd_config` (String) SSHDConfig provides the path to write sshd configuration changes +- `suffix` (String) Suffix indicates the installation suffix for the teleport installation. Set this value if you want multiple installations of Teleport. See --install-suffix flag in teleport-update program. Note: only supported for Amazon EC2. Suffix name can only contain alphanumeric characters and hyphens. +- `update_group` (String) UpdateGroup indicates the update group for the teleport installation. This value is used to group installations in order to update them in batches. See --group flag in teleport-update program. Note: only supported for Amazon EC2. Group name can only contain alphanumeric characters and hyphens. + +### Nested Schema for `spec.gcp.install_params.azure` + +Optional: + +- `client_id` (String) ClientID is the client ID of the managed identity discovered nodes should use to join the cluster. + + +### Nested Schema for `spec.gcp.install_params.http_proxy_settings` + +Optional: + +- `http_proxy` (String) HTTPProxy is the URL for the HTTP proxy to use when making requests. When applied, this will set the HTTP_PROXY environment variable. +- `https_proxy` (String) HTTPSProxy is the URL for the HTTPS Proxy to use when making requests. When applied, this will set the HTTPS_PROXY environment variable. +- `no_proxy` (String) NoProxy is a comma separated list of URLs that will be excluded from proxying. When applied, this will set the NO_PROXY environment variable. + + + + +### Nested Schema for `spec.kube` + +Optional: + +- `labels` (Map of List of String) Labels are Kubernetes services labels to match. +- `namespaces` (List of String) Namespaces are Kubernetes namespaces in which to discover services +- `types` (List of String) Types are Kubernetes services types to match. Currently only 'app' is supported. 
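+For orientation, reading this data source from a Terraform configuration might look like the following. This is an illustrative sketch only: the `example` discovery config name and the referenced `discovery_group` attribute are assumptions, and the provider block is omitted; the schema above is the authoritative list of fields.
+
+```hcl
+# Look up an existing discovery config by its metadata name
+# (assumes a discovery config named "example" exists in the cluster).
+data "teleport_discovery_config" "example" {
+  header = {
+    metadata = {
+      name = "example"
+    }
+  }
+}
+
+# Expose one of the looked-up attributes.
+output "example_discovery_group" {
+  value = data.teleport_discovery_config.example.spec.discovery_group
+}
+```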
+ diff --git a/docs/pages/reference/terraform-provider/data-sources/dynamic_windows_desktop.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/dynamic_windows_desktop.mdx similarity index 93% rename from docs/pages/reference/terraform-provider/data-sources/dynamic_windows_desktop.mdx rename to docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/dynamic_windows_desktop.mdx index efc0123c7c373..4d2ffbf114c6e 100644 --- a/docs/pages/reference/terraform-provider/data-sources/dynamic_windows_desktop.mdx +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/dynamic_windows_desktop.mdx @@ -7,6 +7,9 @@ description: This page describes the supported values of the teleport_dynamic_wi {/*Auto-generated file. Do not edit.*/} {/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} +This page describes the supported values of the `teleport_dynamic_windows_desktop` data source of the +Teleport Terraform provider. + diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/github_connector.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/github_connector.mdx new file mode 100644 index 0000000000000..a2945e46eff00 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/github_connector.mdx @@ -0,0 +1,88 @@ +--- +title: Reference for the teleport_github_connector Terraform data-source +sidebar_label: github_connector +description: This page describes the supported values of the teleport_github_connector data-source of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the `teleport_github_connector` data source of the +Teleport Terraform provider. 
+ + + + + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `spec` (Attributes) Spec is a GitHub connector specification. (see [below for nested schema](#nested-schema-for-spec)) +- `version` (String) Version is the resource version. It must be specified. Supported values are: `v3`. + +### Optional + +- `metadata` (Attributes) Metadata holds resource metadata. (see [below for nested schema](#nested-schema-for-metadata)) +- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources. + +### Nested Schema for `spec` + +Required: + +- `client_id` (String) ClientID is the GitHub OAuth app client ID. +- `client_secret` (String, Sensitive) ClientSecret is the GitHub OAuth app client secret. + +Optional: + +- `api_endpoint_url` (String) APIEndpointURL is the URL of the API endpoint of the GitHub instance this connector is for. +- `client_redirect_settings` (Attributes) ClientRedirectSettings defines which client redirect URLs are allowed for non-browser SSO logins other than the standard localhost ones. (see [below for nested schema](#nested-schema-for-specclient_redirect_settings)) +- `display` (String) Display is the connector display name. +- `endpoint_url` (String) EndpointURL is the URL of the GitHub instance this connector is for. +- `redirect_url` (String) RedirectURL is the authorization callback URL. +- `teams_to_logins` (Attributes List) TeamsToLogins maps GitHub team memberships onto allowed logins/roles. DELETE IN 11.0.0 Deprecated: use GithubTeamsToRoles instead. (see [below for nested schema](#nested-schema-for-specteams_to_logins)) +- `teams_to_roles` (Attributes List) TeamsToRoles maps GitHub team memberships onto allowed roles. (see [below for nested schema](#nested-schema-for-specteams_to_roles)) +- `user_matchers` (List of String) UserMatchers is a set of glob patterns to narrow down which username(s) this auth connector should match for identifier-first login. 
+ +### Nested Schema for `spec.client_redirect_settings` + +Optional: + +- `allowed_https_hostnames` (List of String) a list of hostnames allowed for https client redirect URLs +- `insecure_allowed_cidr_ranges` (List of String) a list of CIDRs allowed for HTTP or HTTPS client redirect URLs + + +### Nested Schema for `spec.teams_to_logins` + +Optional: + +- `kubernetes_groups` (List of String) KubeGroups is a list of allowed kubernetes groups for this org/team. +- `kubernetes_users` (List of String) KubeUsers is a list of allowed kubernetes users to impersonate for this org/team. +- `logins` (List of String) Logins is a list of allowed logins for this org/team. +- `organization` (String) Organization is a Github organization a user belongs to. +- `team` (String) Team is a team within the organization a user belongs to. + + +### Nested Schema for `spec.teams_to_roles` + +Optional: + +- `organization` (String) Organization is a Github organization a user belongs to. +- `roles` (List of String) Roles is a list of allowed logins for this org/team. +- `team` (String) Team is a team within the organization a user belongs to. + + + +### Nested Schema for `metadata` + +Required: + +- `name` (String) Name is an object name + +Optional: + +- `description` (String) Description is object description +- `expires` (String) Expires is a global expiry time header can be set on any resource in the system. 
+- `labels` (Map of String) Labels is a set of labels + diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/health_check_config.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/health_check_config.mdx new file mode 100644 index 0000000000000..cf8d072c5d178 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/health_check_config.mdx @@ -0,0 +1,80 @@ +--- +title: Reference for the teleport_health_check_config Terraform data-source +sidebar_label: health_check_config +description: This page describes the supported values of the teleport_health_check_config data-source of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the `teleport_health_check_config` data source of the +Teleport Terraform provider. + + + + + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `metadata` (Attributes) Metadata is the health check config resource's metadata. (see [below for nested schema](#nested-schema-for-metadata)) +- `spec` (Attributes) Spec is the health check config specification. (see [below for nested schema](#nested-schema-for-spec)) +- `version` (String) Version is the health check config version. + +### Optional + +- `sub_kind` (String) SubKind is an optional resource sub kind. + +### Nested Schema for `metadata` + +Required: + +- `name` (String) name is an object name. + +Optional: + +- `description` (String) description is object description. +- `expires` (String) expires is a global expiry time header can be set on any resource in the system. +- `labels` (Map of String) labels is a set of labels. + + +### Nested Schema for `spec` + +Required: + +- `match` (Attributes) Match is used to select resources that these settings apply to. 
(see [below for nested schema](#nested-schema-for-specmatch)) + +Optional: + +- `healthy_threshold` (Number) HealthyThreshold is the number of consecutive passing health checks after which a target's health status becomes "healthy". +- `interval` (String) Interval is the time between each health check. +- `timeout` (String) Timeout is the health check connection establishment timeout. An attempt that times out is a failed attempt. +- `unhealthy_threshold` (Number) UnhealthyThreshold is the number of consecutive failing health checks after which a target's health status becomes "unhealthy". + +### Nested Schema for `spec.match` + +Optional: + +- `db_labels` (Attributes List) DBLabels matches database labels. An empty value is ignored. The match result is logically ANDed with DBLabelsExpression, if both are non-empty. (see [below for nested schema](#nested-schema-for-specmatchdb_labels)) +- `db_labels_expression` (String) DBLabelsExpression is a label predicate expression to match databases. An empty value is ignored. The match result is logically ANDed with DBLabels, if both are non-empty. +- `disabled` (Boolean) Disabled disables matches for all labels and expressions. +- `kubernetes_labels` (Attributes List) KubernetesLabels matches Kubernetes labels. An empty value is ignored. The match result is logically ANDed with KubernetesLabelsExpression, if both are non-empty. (see [below for nested schema](#nested-schema-for-specmatchkubernetes_labels)) +- `kubernetes_labels_expression` (String) KubernetesLabelsExpression is a label predicate expression to match Kubernetes. An empty value is ignored. The match result is logically ANDed with KubernetesLabels, if both are non-empty. + +### Nested Schema for `spec.match.db_labels` + +Optional: + +- `name` (String) The name of the label. +- `values` (List of String) The values associated with the label. + + +### Nested Schema for `spec.match.kubernetes_labels` + +Optional: + +- `name` (String) The name of the label. 
+- `values` (List of String) The values associated with the label. + diff --git a/docs/pages/reference/terraform-provider/data-sources/installer.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/installer.mdx similarity index 91% rename from docs/pages/reference/terraform-provider/data-sources/installer.mdx rename to docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/installer.mdx index 6ced168143064..dd84fb3d4ff05 100644 --- a/docs/pages/reference/terraform-provider/data-sources/installer.mdx +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/installer.mdx @@ -7,6 +7,9 @@ description: This page describes the supported values of the teleport_installer {/*Auto-generated file. Do not edit.*/} {/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} +This page describes the supported values of the `teleport_installer` data source of the +Teleport Terraform provider. + diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/integration.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/integration.mdx new file mode 100644 index 0000000000000..55ea761d2e841 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/integration.mdx @@ -0,0 +1,82 @@ +--- +title: Reference for the teleport_integration Terraform data-source +sidebar_label: integration +description: This page describes the supported values of the teleport_integration data-source of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the `teleport_integration` data source of the +Teleport Terraform provider. 
+ + + + + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) +- `spec` (Attributes) Spec is an Integration specification. (see [below for nested schema](#nested-schema-for-spec)) +- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources +- `version` (String) Version is the API version used to create the resource. It must be specified. Based on this version, Teleport will apply different defaults on resource creation or deletion. It must be an integer prefixed by "v". For example: `v1` + +### Nested Schema for `metadata` + +Required: + +- `name` (String) Name is an object name + +Optional: + +- `description` (String) Description is object description +- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. +- `labels` (Map of String) Labels is a set of labels + + +### Nested Schema for `spec` + +Optional: + +- `aws_oidc` (Attributes) AWSOIDC contains the specific fields to handle the AWS OIDC Integration subkind (see [below for nested schema](#nested-schema-for-specaws_oidc)) +- `aws_ra` (Attributes) AWSRA contains the specific fields to handle the AWS Roles Anywhere Integration subkind. (see [below for nested schema](#nested-schema-for-specaws_ra)) +- `azure_oidc` (Attributes) AzureOIDC contains the specific fields to handle the Azure OIDC Integration subkind (see [below for nested schema](#nested-schema-for-specazure_oidc)) + +### Nested Schema for `spec.aws_oidc` + +Optional: + +- `audience` (String) Audience is used to record a name of a plugin or a discover service in Teleport that depends on this integration. Audience value can either be empty or "aws-identity-center". Preset audience may impose specific behavior on the integration CRUD API, such as preventing integration from update or deletion.
Empty audience value should be treated as a default and backward-compatible behavior of the integration. +- `issuer_s3_uri` (String) IssuerS3URI is the Identity Provider that was configured in AWS. This bucket/prefix/* files must be publicly accessible and contain the following: > .well-known/openid-configuration > .well-known/jwks Format: `s3:///` Optional. The proxy's endpoint is used if it is not specified. DEPRECATED: Thumbprint validation requires the issuer to update the IdP in AWS every time the issuer changes the certificate. Amazon had some whitelisted providers where the thumbprint was ignored. S3-hosted providers were in that list. Amazon is now trusting all the root certificate authorities, and this workaround is no longer needed. DELETE IN 18.0. +- `role_arn` (String) RoleARN contains the Role ARN used to set up the Integration. This is the AWS Role that Teleport will use to issue tokens for API Calls. + + +### Nested Schema for `spec.aws_ra` + +Optional: + +- `profile_sync_config` (Attributes) ProfileSyncConfig contains the configuration for the AWS Roles Anywhere Profile sync. This is used to create AWS Roles Anywhere profiles as application servers. (see [below for nested schema](#nested-schema-for-specaws_raprofile_sync_config)) +- `trust_anchor_arn` (String) TrustAnchorARN contains the AWS IAM Roles Anywhere Trust Anchor ARN used to set up the Integration. + +### Nested Schema for `spec.aws_ra.profile_sync_config` + +Optional: + +- `enabled` (Boolean) Enabled is set to true if this integration should sync profiles as application servers. +- `profile_accepts_role_session_name` (Boolean) ProfileAcceptsRoleSessionName indicates whether the profile accepts a custom Role Session name. +- `profile_arn` (String) ProfileARN is the ARN of the Roles Anywhere Profile used to generate credentials to access the AWS APIs. +- `profile_name_filters` (List of String) ProfileNameFilters is a list of filters applied to the profile name.
Only matching profiles will be synchronized as application servers. If empty, no filtering is applied. Filters can be globs, for example: profile* *name* Or regexes if they're prefixed and suffixed with ^ and $, for example: ^profile.*$ ^.*name.*$ +- `role_arn` (String) RoleARN is the ARN of the IAM Role to assume when accessing the AWS APIs. + + + +### Nested Schema for `spec.azure_oidc` + +Optional: + +- `client_id` (String) ClientID specifies the ID of Azure enterprise application (client) that corresponds to this plugin. +- `tenant_id` (String) TenantID specifies the ID of Entra Tenant (Directory) that this plugin integrates with. + diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/login_rule.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/login_rule.mdx new file mode 100644 index 0000000000000..cf60bfd1c6f11 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/login_rule.mdx @@ -0,0 +1,49 @@ +--- +title: Reference for the teleport_login_rule Terraform data-source +sidebar_label: login_rule +description: This page describes the supported values of the teleport_login_rule data-source of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the `teleport_login_rule` data source of the +Teleport Terraform provider. + + + + + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `priority` (Number) Priority is the priority of the login rule relative to other login rules in the same cluster. Login rules with a lower numbered priority will be evaluated first. +- `version` (String) Version is the resource version. + +### Optional + +- `metadata` (Attributes) Metadata is resource metadata. 
(see [below for nested schema](#nested-schema-for-metadata)) +- `traits_expression` (String) TraitsExpression is a predicate expression which should return the desired traits for the user upon login. +- `traits_map` (Attributes Map) TraitsMap is a map of trait keys to lists of predicate expressions which should evaluate to the desired values for that trait. (see [below for nested schema](#nested-schema-for-traits_map)) + +### Nested Schema for `metadata` + +Required: + +- `name` (String) Name is an object name + +Optional: + +- `description` (String) Description is object description +- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. +- `labels` (Map of String) Labels is a set of labels + + +### Nested Schema for `traits_map` + +Optional: + +- `values` (List of String) + diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/oidc_connector.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/oidc_connector.mdx new file mode 100644 index 0000000000000..d3964c8844c3b --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/oidc_connector.mdx @@ -0,0 +1,108 @@ +--- +title: Reference for the teleport_oidc_connector Terraform data-source +sidebar_label: oidc_connector +description: This page describes the supported values of the teleport_oidc_connector data-source of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the `teleport_oidc_connector` data source of the
Teleport Terraform provider. + + + + + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `spec` (Attributes) Spec is an OIDC connector specification. (see [below for nested schema](#nested-schema-for-spec)) +- `version` (String) Version is the resource version.
It must be specified. Supported values are: `v3`. + +### Optional + +- `metadata` (Attributes) Metadata holds resource metadata. (see [below for nested schema](#nested-schema-for-metadata)) +- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources. + +### Nested Schema for `spec` + +Optional: + +- `acr_values` (String) ACR is an Authentication Context Class Reference value. The meaning of the ACR value is context-specific and varies for identity providers. +- `allow_unverified_email` (Boolean) AllowUnverifiedEmail tells the connector to accept OIDC users with unverified emails. +- `claims_to_roles` (Attributes List) ClaimsToRoles specifies a dynamic mapping from claims to roles. (see [below for nested schema](#nested-schema-for-specclaims_to_roles)) +- `client_id` (String) ClientID is the id of the authentication client (Teleport Auth Service). +- `client_redirect_settings` (Attributes) ClientRedirectSettings defines which client redirect URLs are allowed for non-browser SSO logins other than the standard localhost ones. (see [below for nested schema](#nested-schema-for-specclient_redirect_settings)) +- `client_secret` (String, Sensitive) ClientSecret is used to authenticate the client. +- `display` (String) Display is the friendly name for this provider. +- `entra_id_groups_provider` (Attributes) EntraIDGroupsProvider configures out-of-band user groups provider. It works by following through the groups claim source, which is sent for the "groups" claim when the user's group membership exceeds 200 max item limit. (see [below for nested schema](#nested-schema-for-specentra_id_groups_provider)) +- `google_admin_email` (String) GoogleAdminEmail is the email of a google admin to impersonate. +- `google_service_account` (String, Sensitive) GoogleServiceAccount is a string containing google service account credentials. +- `google_service_account_uri` (String) GoogleServiceAccountURI is a path to a google service account uri. 
+- `issuer_url` (String) IssuerURL is the endpoint of the provider, e.g. https://accounts.google.com. +- `max_age` (String) +- `mfa` (Attributes) MFASettings contains settings to enable SSO MFA checks through this auth connector. (see [below for nested schema](#nested-schema-for-specmfa)) +- `pkce_mode` (String) PKCEMode represents the configuration state for PKCE (Proof Key for Code Exchange). It can be "enabled" or "disabled" +- `prompt` (String) Prompt is an optional OIDC prompt. An empty string omits prompt. If not specified, it defaults to select_account for backwards compatibility. +- `provider` (String) Provider is the external identity provider. +- `redirect_url` (List of String) RedirectURLs is a list of callback URLs which the identity provider can use to redirect the client back to the Teleport Proxy to complete authentication. This list should match the URLs on the provider's side. The URL used for a given auth request will be chosen to match the requesting Proxy's public address. If there is no match, the first url in the list will be used. +- `request_object_mode` (String) RequestObjectMode determines how JWT-Secured Authorization Requests will be used for authorization requests. JARs, or request objects, can provide integrity protection, source authentication, and confidentiality for authorization request parameters. +- `scope` (List of String) Scope specifies additional scopes set by provider. +- `user_matchers` (List of String) UserMatchers is a set of glob patterns to narrow down which username(s) this auth connector should match for identifier-first login. +- `username_claim` (String) UsernameClaim specifies the name of the claim from the OIDC connector to be used as the user's username. + +### Nested Schema for `spec.claims_to_roles` + +Optional: + +- `claim` (String) Claim is a claim name. +- `roles` (List of String) Roles is a list of static teleport roles to match. +- `value` (String) Value is a claim value to match. 
+ + +### Nested Schema for `spec.client_redirect_settings` + +Optional: + +- `allowed_https_hostnames` (List of String) a list of hostnames allowed for https client redirect URLs +- `insecure_allowed_cidr_ranges` (List of String) a list of CIDRs allowed for HTTP or HTTPS client redirect URLs + + +### Nested Schema for `spec.entra_id_groups_provider` + +Optional: + +- `disabled` (Boolean) Disabled specifies that the groups provider should be disabled even when Entra ID responds with a groups claim source. Users may choose to disable it if they are using integrations such as SCIM or a similar groups importer, as connector-based role mapping may not be needed in such a scenario. +- `graph_endpoint` (String) GraphEndpoint is a Microsoft Graph API endpoint. The groups claim source endpoint provided by Entra ID points to the now-retired Azure AD Graph endpoint ("https://graph.windows.net"). To convert it to the newer Microsoft Graph API endpoint, Teleport defaults to the Microsoft Graph global service endpoint ("https://graph.microsoft.com"). Update GraphEndpoint to point to a different Microsoft Graph national cloud deployment endpoint. +- `group_type` (String) GroupType is a user group type filter. Defaults to "security-groups". Value can be "security-groups", "directory-roles", "all-groups". + + +### Nested Schema for `spec.mfa` + +Optional: + +- `acr_values` (String) AcrValues are Authentication Context Class Reference values. The meaning of the ACR value is context-specific and varies for identity providers. Some identity providers support MFA specific contexts, such as Okta with its "phr" (phishing-resistant) ACR. +- `client_id` (String) ClientID is the OIDC OAuth app client ID. +- `client_secret` (String) ClientSecret is the OIDC OAuth app client secret. +- `enabled` (Boolean) Enabled specifies whether this OIDC connector supports MFA checks. Defaults to false. +- `max_age` (String) MaxAge is the amount of time in nanoseconds that an IdP session is valid for.
Defaults to 0 to always force re-authentication for MFA checks. This should only be set to a non-zero value if the IdP is set up to perform MFA checks on top of active user sessions. +- `prompt` (String) Prompt is an optional OIDC prompt. An empty string omits prompt. If not specified, it defaults to select_account for backwards compatibility. +- `request_object_mode` (String) RequestObjectMode determines how JWT-Secured Authorization Requests will be used for authorization requests. JARs, or request objects, can provide integrity protection, source authentication, and confidentiality for authorization request parameters. If omitted, MFA flows will default to the `RequestObjectMode` behavior specified in the base OIDC connector. Set this property to 'none' to explicitly disable request objects for the MFA client. + + + +### Nested Schema for `metadata` + +Required: + +- `name` (String) Name is an object name + +Optional: + +- `description` (String) Description is object description +- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. +- `labels` (Map of String) Labels is a set of labels + diff --git a/docs/pages/reference/terraform-provider/data-sources/okta_import_rule.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/okta_import_rule.mdx similarity index 95% rename from docs/pages/reference/terraform-provider/data-sources/okta_import_rule.mdx rename to docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/okta_import_rule.mdx index 6816340d9c9f9..1e2d4a45dd79e 100644 --- a/docs/pages/reference/terraform-provider/data-sources/okta_import_rule.mdx +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/okta_import_rule.mdx @@ -7,6 +7,9 @@ description: This page describes the supported values of the teleport_okta_impor {/*Auto-generated file.
Do not edit.*/} {/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} +This page describes the supported values of the `teleport_okta_import_rule` data source of the +Teleport Terraform provider. + diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/provision_token.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/provision_token.mdx new file mode 100644 index 0000000000000..8430e0631c5bb --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/provision_token.mdx @@ -0,0 +1,397 @@ +--- +title: Reference for the teleport_provision_token Terraform data-source +sidebar_label: provision_token +description: This page describes the supported values of the teleport_provision_token data-source of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the `teleport_provision_token` data source of the
Teleport Terraform provider. + + + + + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `spec` (Attributes) Spec is a provisioning token V2 spec (see [below for nested schema](#nested-schema-for-spec)) +- `version` (String) Version is the resource version. It must be specified. Supported values are: `v2`. + +### Optional + +- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) +- `status` (Attributes) Status is extended status information, depending on token type. It is not user writable.
(see [below for nested schema](#nested-schema-for-status)) +- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources + +### Nested Schema for `spec` + +Required: + +- `roles` (List of String) Roles is a list of roles associated with the token that will be converted to metadata in the SSH and X509 certificates issued to the user of the token + +Optional: + +- `allow` (Attributes List) Allow is a list of TokenRules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specallow)) +- `aws_iid_ttl` (String) AWSIIDTTL is the TTL to use for AWS EC2 Instance Identity Documents used to join the cluster with this token. +- `azure` (Attributes) Azure allows the configuration of options specific to the "azure" join method. (see [below for nested schema](#nested-schema-for-specazure)) +- `azure_devops` (Attributes) AzureDevops allows the configuration of options specific to the "azure_devops" join method. (see [below for nested schema](#nested-schema-for-specazure_devops)) +- `bitbucket` (Attributes) Bitbucket allows the configuration of options specific to the "bitbucket" join method. (see [below for nested schema](#nested-schema-for-specbitbucket)) +- `bot_name` (String) BotName is the name of the bot this token grants access to, if any +- `bound_keypair` (Attributes) BoundKeypair allows the configuration of options specific to the "bound_keypair" join method. (see [below for nested schema](#nested-schema-for-specbound_keypair)) +- `circleci` (Attributes) CircleCI allows the configuration of options specific to the "circleci" join method. (see [below for nested schema](#nested-schema-for-speccircleci)) +- `env0` (Attributes) Env0 allows the configuration of options specific to the "env0" join method. (see [below for nested schema](#nested-schema-for-specenv0)) +- `gcp` (Attributes) GCP allows the configuration of options specific to the "gcp" join method.
(see [below for nested schema](#nested-schema-for-specgcp)) +- `github` (Attributes) GitHub allows the configuration of options specific to the "github" join method. (see [below for nested schema](#nested-schema-for-specgithub)) +- `gitlab` (Attributes) GitLab allows the configuration of options specific to the "gitlab" join method. (see [below for nested schema](#nested-schema-for-specgitlab)) +- `join_method` (String) JoinMethod is the joining method required in order to use this token. Supported joining methods include: azure, circleci, ec2, gcp, github, gitlab, iam, kubernetes, spacelift, token, tpm +- `kubernetes` (Attributes) Kubernetes allows the configuration of options specific to the "kubernetes" join method. (see [below for nested schema](#nested-schema-for-speckubernetes)) +- `oracle` (Attributes) Oracle allows the configuration of options specific to the "oracle" join method. (see [below for nested schema](#nested-schema-for-specoracle)) +- `spacelift` (Attributes) Spacelift allows the configuration of options specific to the "spacelift" join method. (see [below for nested schema](#nested-schema-for-specspacelift)) +- `suggested_agent_matcher_labels` (Map of List of String) SuggestedAgentMatcherLabels is a set of labels to be used by agents to match on resources. When an agent uses this token, the agent should monitor resources that match those labels. For databases, this means adding the labels to `db_service.resources.labels`. Currently, only node-join scripts create a configuration according to the suggestion. +- `suggested_labels` (Map of List of String) SuggestedLabels is a set of labels that resources should set when using this token to enroll themselves in the cluster. Currently, only node-join scripts create a configuration according to the suggestion. +- `terraform_cloud` (Attributes) TerraformCloud allows the configuration of options specific to the "terraform_cloud" join method. 
(see [below for nested schema](#nested-schema-for-specterraform_cloud)) +- `tpm` (Attributes) TPM allows the configuration of options specific to the "tpm" join method. (see [below for nested schema](#nested-schema-for-spectpm)) + +### Nested Schema for `spec.allow` + +Optional: + +- `aws_account` (String) AWSAccount is the AWS account ID. +- `aws_arn` (String) AWSARN is used for the IAM join method, the AWS identity of joining nodes must match this ARN. Supports wildcards "*" and "?". +- `aws_regions` (List of String) AWSRegions is used for the EC2 join method and is a list of AWS regions a node is allowed to join from. +- `aws_role` (String) AWSRole is used for the EC2 join method and is the ARN of the AWS role that the Auth Service will assume in order to call the ec2 API. + + +### Nested Schema for `spec.azure` + +Optional: + +- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specazureallow)) + +### Nested Schema for `spec.azure.allow` + +Optional: + +- `resource_groups` (List of String) ResourceGroups is a list of Azure resource groups the node is allowed to join from. +- `subscription` (String) Subscription is the Azure subscription. + + + +### Nested Schema for `spec.azure_devops` + +Optional: + +- `allow` (Attributes List) Allow is a list of TokenRules, nodes using this token must match one allow rule to use this token. At least one allow rule must be specified. (see [below for nested schema](#nested-schema-for-specazure_devopsallow)) +- `organization_id` (String) OrganizationID specifies the UUID of the Azure DevOps organization that this join token will grant access to. This is used to identify the correct issuer verification of the ID token. This is a required field. + +### Nested Schema for `spec.azure_devops.allow` + +Optional: + +- `definition_id` (String) The ID of the AZDO pipeline definition. 
Example: `1` Mapped from the `def_id` claim. +- `pipeline_name` (String) The name of the AZDO pipeline. Example: `my-pipeline`. Mapped out of the `sub` claim. +- `project_id` (String) The ID of the AZDO project. Example: `271ef6f7-0000-0000-0000-4b54d9129990` Mapped from the `prj_id` claim. +- `project_name` (String) The name of the AZDO project. Example: `my-project`. Mapped out of the `sub` claim. +- `repository_ref` (String) The reference of the repository the pipeline is using. Example: `refs/heads/main`. Mapped from the `rpo_ref` claim. +- `repository_uri` (String) The URI of the repository the pipeline is using. Example: `https://github.com/gravitational/teleport.git`. Mapped from the `rpo_uri` claim. +- `repository_version` (String) The individual commit of the repository the pipeline is using. Example: `e6b9eb29a288b27a3a82cc19c48b9d94b80aff36`. Mapped from the `rpo_ver` claim. +- `sub` (String) Sub also known as Subject is a string that roughly uniquely identifies the workload. Example: `p://my-organization/my-project/my-pipeline` Mapped from the `sub` claim. + + + +### Nested Schema for `spec.bitbucket` + +Optional: + +- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specbitbucketallow)) +- `audience` (String) Audience is a Bitbucket-specified audience value for this token. It is unique to each Bitbucket repository, and must be set to the value as written in the Pipelines -> OpenID Connect section of the repository settings. +- `identity_provider_url` (String) IdentityProviderURL is a Bitbucket-specified issuer URL for incoming OIDC tokens. It is unique to each Bitbucket repository, and must be set to the value as written in the Pipelines -> OpenID Connect section of the repository settings.
+ +### Nested Schema for `spec.bitbucket.allow` + +Optional: + +- `branch_name` (String) BranchName is the name of the branch on which this pipeline executed. +- `deployment_environment_uuid` (String) DeploymentEnvironmentUUID is the UUID of the deployment environment targeted by this pipelines run, if any. These values may be found in the "Pipelines -> OpenID Connect -> Deployment environments" section of the repository settings. +- `repository_uuid` (String) RepositoryUUID is the UUID of the repository for which this token was issued. Bitbucket UUIDs must begin and end with braces, e.g. `{...}`. This value may be found in the Pipelines -> OpenID Connect section of the repository settings. +- `workspace_uuid` (String) WorkspaceUUID is the UUID of the workspace for which this token was issued. Bitbucket UUIDs must begin and end with braces, e.g. `{...}`. This value may be found in the Pipelines -> OpenID Connect section of the repository settings. + + + +### Nested Schema for `spec.bound_keypair` + +Optional: + +- `onboarding` (Attributes) Onboarding contains parameters related to initial onboarding and keypair registration. (see [below for nested schema](#nested-schema-for-specbound_keypaironboarding)) +- `recovery` (Attributes) Recovery contains parameters related to recovery after identity expiration. (see [below for nested schema](#nested-schema-for-specbound_keypairrecovery)) +- `rotate_after` (String) RotateAfter is an optional timestamp that forces clients to perform a keypair rotation on the next join or recovery attempt after the given date. If `LastRotatedAt` is unset or before this timestamp, a rotation will be requested. It is recommended to set this value to the current timestamp if a rotation should be triggered on the next join attempt. + +### Nested Schema for `spec.bound_keypair.onboarding` + +Optional: + +- `initial_public_key` (String) InitialPublicKey is used to preregister a public key generated by `tbot keypair create`. 
When set, no initial join secret is generated or made available for use, and clients must have the associated private key available to join. If set, `initial_join_secret` and `must_register_before` are ignored. This value is written in SSH authorized_keys format. +- `must_register_before` (String) MustRegisterBefore is an optional time before which registration via initial join secret must be performed. Attempts to register using an initial join secret after this timestamp will not be allowed. This may be modified after creation if necessary to allow the initial registration to take place. This value is ignored if `initial_public_key` is set. +- `registration_secret` (String) RegistrationSecret is a secret joining clients may use to register their public key on first join, which may be used instead of preregistering a public key with `initial_public_key`. If `initial_public_key` is set, this value is ignored. Otherwise, if set, this value will be used to populate `.status.bound_keypair.registration_secret`. If unset and no `initial_public_key` is provided, a random secure value will be generated server-side to populate the status field. + + +### Nested Schema for `spec.bound_keypair.recovery` + +Optional: + +- `limit` (Number) Limit is the maximum number of allowed recovery attempts. This value may be raised or lowered after creation to allow additional recovery attempts should the initial limit be exhausted. If `mode` is set to `standard`, recovery attempts will only be allowed if `.status.bound_keypair.recovery_count` is less than this limit. This limit is not enforced if `mode` is set to `relaxed` or `insecure`. This value must be at least 1 to allow for the initial join during onboarding, which counts as a recovery. +- `mode` (String) Mode sets the recovery rule enforcement mode. It may be one of these values: - standard (or unset): all configured rules enforced. The recovery limit and client join state are required and verified. 
This is the most secure recovery mode. - relaxed: recovery limit is not enforced, but client join state is still required. This effectively allows unlimited recovery attempts, but client join state still helps mitigate stolen credentials. - insecure: neither the recovery limit nor client join state are enforced. This allows any client with the private key to join freely. This is less secure, but can be useful in certain situations, like in otherwise unsupported CI/CD providers. This mode should be used with care, and RBAC rules should be configured to heavily restrict which resources this identity can access. + + + +### Nested Schema for `spec.circleci` + +Optional: + +- `allow` (Attributes List) Allow is a list of TokenRules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-speccircleciallow)) +- `organization_id` (String) + +### Nested Schema for `spec.circleci.allow` + +Optional: + +- `context_id` (String) +- `project_id` (String) + + + +### Nested Schema for `spec.env0` + +Optional: + +- `allow` (Attributes List) Allow is a list of Rules, jobs using this token must match at least one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specenv0allow)) + +### Nested Schema for `spec.env0.allow` + +Optional: + +- `deployer_email` (String) DeployerEmail is the email of the person that triggered the deployment, corresponding to `deployerEmail` in an Env0 OIDC token. +- `deployment_type` (String) DeploymentType is the env0 deployment type, such as "deploy", "destroy", etc. Corresponds to `deploymentType` in an Env0 OIDC token. +- `env0_tag` (String) Env0Tag is a custom tag value corresponding to `env0Tag` when `ENV0_OIDC_TAG` is set. +- `environment_id` (String) EnvironmentID is the unique identifier of the Env0 environment, corresponding to `environmentId` in an Env0 OIDC token. 
+- `environment_name` (String) EnvironmentName is the name of the Env0 environment, corresponding to `environmentName` in an Env0 OIDC token. +- `organization_id` (String) OrganizationID is the unique organization identifier, corresponding to `organizationId` in an Env0 OIDC token. +- `project_id` (String) ProjectID is a unique project identifier, corresponding to `projectId` in an Env0 OIDC token. +- `project_name` (String) ProjectName is the name of the project under which the job was run corresponding to `projectName` in an Env0 OIDC token. +- `template_id` (String) TemplateID is the unique identifier of the Env0 template, corresponding to `templateId` in an Env0 OIDC token. +- `template_name` (String) TemplateName is the name of the Env0 template, corresponding to `templateName` in an Env0 OIDC token. +- `workspace_name` (String) WorkspaceName is the name of the Env0 workspace, corresponding to `workspaceName` in an Env0 OIDC token. + + + +### Nested Schema for `spec.gcp` + +Optional: + +- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specgcpallow)) + +### Nested Schema for `spec.gcp.allow` + +Optional: + +- `locations` (List of String) Locations is a list of regions (e.g. "us-west1") and/or zones (e.g. "us-west1-b"). +- `project_ids` (List of String) ProjectIDs is a list of project IDs (e.g. ``). +- `service_accounts` (List of String) ServiceAccounts is a list of service account emails (e.g. `-compute@developer.gserviceaccount.com`). + + + +### Nested Schema for `spec.github` + +Optional: + +- `allow` (Attributes List) Allow is a list of TokenRules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specgithuballow)) +- `enterprise_server_host` (String) EnterpriseServerHost allows joining from runners associated with a GitHub Enterprise Server instance. 
When unconfigured, tokens will be validated against github.com, but when configured to the host of a GHES instance, then the tokens will be validated against that host. This value should be the hostname of the GHES instance, and should not include the scheme or a path. The instance must be accessible over HTTPS at this hostname and the certificate must be trusted by the Auth Service. +- `enterprise_slug` (String) EnterpriseSlug allows the slug of a GitHub Enterprise organisation to be included in the expected issuer of the OIDC tokens. This is for compatibility with the `include_enterprise_slug` option in GHE. This field should be set to the slug of your enterprise if this is enabled. If this is not enabled, then this field must be left empty. This field cannot be specified if `enterprise_server_host` is specified. See https://docs.github.com/en/enterprise-cloud@latest/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect#customizing-the-issuer-value-for-an-enterprise for more information about customized issuer values. +- `static_jwks` (String) StaticJWKS disables fetching of the GHES signing keys via the JWKS/OIDC endpoints, and allows them to be directly specified. This allows joining from GitHub Actions in GHES instances that are not reachable by the Teleport Auth Service. + +### Nested Schema for `spec.github.allow` + +Optional: + +- `actor` (String) The personal account that initiated the workflow run. +- `environment` (String) The name of the environment used by the job. +- `ref` (String) The git ref that triggered the workflow run. +- `ref_type` (String) The type of ref, for example: "branch". +- `repository` (String) The repository from where the workflow is running. This includes the name of the owner, e.g. `gravitational/teleport`. +- `repository_owner` (String) The name of the organization in which the repository is stored.
+- `sub` (String) Sub, also known as Subject, is a string that roughly uniquely identifies the workload. The format of this varies depending on the type of GitHub Actions run. +- `workflow` (String) The name of the workflow. + + + +### Nested Schema for `spec.gitlab` + +Optional: + +- `allow` (Attributes List) Allow is a list of TokenRules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specgitlaballow)) +- `domain` (String) Domain is the domain of your GitLab instance. This defaults to `gitlab.com`, but can be set to the domain of your self-hosted GitLab, e.g. `gitlab.example.com`. +- `static_jwks` (String) StaticJWKS disables fetching of the GitLab signing keys via the JWKS/OIDC endpoints, and allows them to be directly specified. This allows joining from GitLab CI instances that are not reachable by the Teleport Auth Service. + +### Nested Schema for `spec.gitlab.allow` + +Optional: + +- `ci_config_ref_uri` (String) CIConfigRefURI is the ref path to the top-level pipeline definition, for example, gitlab.example.com/my-group/my-project//.gitlab-ci.yml@refs/heads/main. +- `ci_config_sha` (String) CIConfigSHA is the git commit SHA for the ci_config_ref_uri. +- `deployment_tier` (String) DeploymentTier is the deployment tier of the environment the job specifies. +- `environment` (String) Environment limits access by the environment the job deploys to (if one is associated). +- `environment_protected` (Boolean) EnvironmentProtected is true if the Git ref is protected, false otherwise. +- `namespace_path` (String) NamespacePath is used to limit access to jobs in a group or user's projects. Example: `mygroup` This field supports "glob-style" matching: - Use '*' to match zero or more characters. - Use '?' to match any single character. +- `pipeline_source` (String) PipelineSource limits access by the job pipeline source type.
https://docs.gitlab.com/ee/ci/jobs/job_control.html#common-if-clauses-for-rules Example: `web` +- `project_path` (String) ProjectPath is used to limit access to jobs belonging to an individual project. Example: `mygroup/myproject` This field supports "glob-style" matching: - Use '*' to match zero or more characters. - Use '?' to match any single character. +- `project_visibility` (String) ProjectVisibility is the visibility of the project where the pipeline is running. Can be internal, private, or public. +- `ref` (String) Ref allows access to be limited to jobs triggered by a specific git ref. Ensure this is used in combination with ref_type. This field supports "glob-style" matching: - Use '*' to match zero or more characters. - Use '?' to match any single character. +- `ref_protected` (Boolean) RefProtected is true if the Git ref is protected, false otherwise. +- `ref_type` (String) RefType allows access to be limited to jobs triggered by a specific git ref type. Example: `branch` or `tag` +- `sub` (String) Sub roughly uniquely identifies the workload. Format: `project_path:GROUP/PROJECT:ref_type:TYPE:ref:BRANCH_NAME`. Example: `project_path:mygroup/my-project:ref_type:branch:ref:main` This field supports "glob-style" matching: - Use '*' to match zero or more characters. - Use '?' to match any single character. +- `user_email` (String) UserEmail is the email of the user executing the job. +- `user_id` (String) UserID is the ID of the user executing the job. +- `user_login` (String) UserLogin is the username of the user executing the job. + + + +### Nested Schema for `spec.kubernetes` + +Optional: + +- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-speckubernetesallow)) +- `oidc` (Attributes) OIDCConfig configures the `oidc` type.
(see [below for nested schema](#nested-schema-for-speckubernetesoidc)) +- `static_jwks` (Attributes) StaticJWKS is the configuration specific to the `static_jwks` type. (see [below for nested schema](#nested-schema-for-speckubernetesstatic_jwks)) +- `type` (String) Type controls which behavior should be used for validating the Kubernetes Service Account token. Supported values: - `in_cluster` - `static_jwks` - `oidc` If unset, this defaults to `in_cluster`. + +### Nested Schema for `spec.kubernetes.allow` + +Optional: + +- `service_account` (String) ServiceAccount is the namespaced name of the Kubernetes service account. Its format is "namespace:service-account". + + +### Nested Schema for `spec.kubernetes.oidc` + +Optional: + +- `insecure_allow_http_issuer` (Boolean) InsecureAllowHTTPIssuer is a flag that, if set, disables the requirement that the issuer must use HTTPS. +- `issuer` (String) Issuer is the URI of the OIDC issuer. It must have an accessible and OIDC-compliant `/.well-known/openid-configuration` endpoint. This should be a valid URL and must exactly match the `issuer` field in a service account JWT. For example: https://oidc.eks.us-west-2.amazonaws.com/id/12345... + + +### Nested Schema for `spec.kubernetes.static_jwks` + +Optional: + +- `jwks` (String) JWKS should be the JSON Web Key Set formatted public keys that the Kubernetes cluster uses to sign service account tokens. This can be fetched from /openid/v1/jwks on the Kubernetes API Server. + + + +### Nested Schema for `spec.oracle` + +Optional: + +- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specoracleallow)) + +### Nested Schema for `spec.oracle.allow` + +Optional: + +- `instances` (List of String) Instances is a list of the OCIDs of specific instances that are allowed to join. If empty, any instance matching the other fields in the rule is allowed.
Limited to 100 instance OCIDs per rule. +- `parent_compartments` (List of String) ParentCompartments is a list of the OCIDs of compartments an instance is allowed to join from. Only direct parents are allowed, i.e. no nested compartments. If empty, any compartment is allowed. +- `regions` (List of String) Regions is a list of regions an instance is allowed to join from. Both full region names ("us-phoenix-1") and abbreviations ("phx") are allowed. If empty, any region is allowed. +- `tenancy` (String) Tenancy is the OCID of the instance's tenancy. Required. + + + +### Nested Schema for `spec.spacelift` + +Optional: + +- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specspaceliftallow)) +- `enable_glob_matching` (Boolean) EnableGlobMatching enables glob-style matching for the space_id and caller_id fields in the rules. +- `hostname` (String) Hostname is the hostname of the Spacelift tenant that tokens will originate from, e.g. `example.app.spacelift.io`. + +### Nested Schema for `spec.spacelift.allow` + +Optional: + +- `caller_id` (String) CallerID is the ID of the caller, i.e. the stack or module that generated the run. This field supports "glob-style" matching when enable_glob_matching is true: - Use '*' to match zero or more characters. - Use '?' to match any single character. +- `caller_type` (String) CallerType is the type of the caller, i.e. the entity that owns the run - either `stack` or `module`. +- `scope` (String) Scope is the scope of the token - either `read` or `write`. See https://docs.spacelift.io/integrations/cloud-providers/oidc/#about-scopes +- `space_id` (String) SpaceID is the ID of the space in which the run that owns the token was executed. This field supports "glob-style" matching when enable_glob_matching is true: - Use '*' to match zero or more characters. - Use '?' to match any single character.
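As a rough illustration of how the `spec.spacelift` nested schema above maps onto HCL, the sketch below defines a Spacelift join token with the `teleport_provision_token` resource. The token name, bot name, hostname, and space ID are placeholders, and the field shapes are taken only from the nested schemas documented on this page:

```hcl
# Hypothetical sketch of a Spacelift join token. All names and IDs
# below are placeholders, not values from this reference page.
resource "teleport_provision_token" "spacelift_bot" {
  version = "v2"
  metadata = {
    name = "spacelift-bot-token"
  }
  spec = {
    roles       = ["Bot"]
    bot_name    = "infra-bot"
    join_method = "spacelift"
    spacelift = {
      hostname             = "example.app.spacelift.io"
      enable_glob_matching = true
      allow = [{
        # Glob matching applies because enable_glob_matching is true.
        space_id    = "infra-*"
        caller_type = "stack"
      }]
    }
  }
}
```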
+ + + +### Nested Schema for `spec.terraform_cloud` + +Optional: + +- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specterraform_cloudallow)) +- `audience` (String) Audience is the JWT audience as configured in the TFC_WORKLOAD_IDENTITY_AUDIENCE(_$TAG) variable in Terraform Cloud. If unset, defaults to the Teleport cluster name. For example, if `TFC_WORKLOAD_IDENTITY_AUDIENCE_TELEPORT=foo` is set in Terraform Cloud, this value should be `foo`. If the variable is set to match the cluster name, it does not need to be set here. +- `hostname` (String) Hostname is the hostname of the Terraform Enterprise instance expected to issue JWTs allowed by this token. This may be unset for regular Terraform Cloud use, in which case it will be assumed to be `app.terraform.io`. Otherwise, it must both match the `iss` (issuer) field included in JWTs, and provide standard JWKS endpoints. + +### Nested Schema for `spec.terraform_cloud.allow` + +Optional: + +- `organization_id` (String) OrganizationID is the ID of the HCP Terraform organization. At least one organization value is required, either ID or name. +- `organization_name` (String) OrganizationName is the human-readable name of the HCP Terraform organization. At least one organization value is required, either ID or name. +- `project_id` (String) ProjectID is the ID of the HCP Terraform project. At least one project or workspace value is required, either ID or name. +- `project_name` (String) ProjectName is the human-readable name for the HCP Terraform project. At least one project or workspace value is required, either ID or name. +- `run_phase` (String) RunPhase is the phase of the run the token was issued for, e.g. `plan` or `apply` +- `workspace_id` (String) WorkspaceID is the ID of the HCP Terraform workspace. At least one project or workspace value is required, either ID or name. 
+- `workspace_name` (String) WorkspaceName is the human-readable name of the HCP Terraform workspace. At least one project or workspace value is required, either ID or name. + + + +### Nested Schema for `spec.tpm` + +Optional: + +- `allow` (Attributes List) Allow is a list of Rules, the presented delegated identity must match one allow rule to permit joining. (see [below for nested schema](#nested-schema-for-spectpmallow)) +- `ekcert_allowed_cas` (List of String) EKCertAllowedCAs is a list of CA certificates that will be used to validate TPM EKCerts. When specified, joining TPMs must present an EKCert signed by one of the specified CAs. TPMs that do not present an EKCert will not be permitted to join. When unspecified, TPMs will be allowed to join with either an EKCert or an EKPubHash. + +### Nested Schema for `spec.tpm.allow` + +Optional: + +- `description` (String) Description is a human-readable description of the rule. It has no bearing on whether or not a TPM is allowed to join, but can be used to associate a rule with a specific host (e.g. the asset tag of the server in which the TPM resides). Example: "build-server-100" +- `ek_certificate_serial` (String) EKCertificateSerial is the serial number of the EKCert in hexadecimal with colon-separated nibbles. This value will not be checked when a TPM does not have an EKCert configured. Example: 73:df:dc:bd:af:ef:8a:d8:15:2e:96:71:7a:3e:7f:a4 +- `ek_public_hash` (String) EKPublicHash is the SHA256 hash of the EKPub marshaled in PKIX format and encoded in hexadecimal. This value will also be checked when a TPM has submitted an EKCert, and the public key in the EKCert will be used for this check. Example: d4b45864d9d6fabfc568d74f26c35ababde2105337d7af9a6605e1c56c891aa6 + + + + +### Nested Schema for `metadata` + +Optional: + +- `description` (String) Description is object description +- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system.
+- `labels` (Map of String) Labels is a set of labels +- `name` (String, Sensitive) Name is an object name + + +### Nested Schema for `status` + +Optional: + +- `bound_keypair` (Attributes) BoundKeypair contains status information related to bound_keypair type tokens. (see [below for nested schema](#nested-schema-for-statusbound_keypair)) + +### Nested Schema for `status.bound_keypair` + +Optional: + +- `bound_bot_instance_id` (String) BoundBotInstanceID is the ID of the currently associated bot instance. A new bot instance is issued on each join; the new bot instance will have a `previous_bot_instance` set to this value, if any. +- `bound_public_key` (String) BoundPublicKey contains the currently bound public key. If `.spec.bound_keypair.onboarding.initial_public_key` is set, that value will be copied here on creation, otherwise it will be populated as part of public key registration process. This value will be updated over time if keypair rotation takes place, and will always reflect the currently trusted public key. This value is written in SSH authorized_keys format. +- `last_recovered_at` (String) LastRecoveredAt contains a timestamp of the last successful recovery attempt. Note that normal renewals with valid client certificates do not count as a recovery attempt, however the initial join during onboarding does. This corresponds with the last time `bound_bot_instance_id` was updated. +- `last_rotated_at` (String) LastRotatedAt contains a timestamp of the last time the keypair was rotated, if any. This is not set at initial join. +- `recovery_count` (Number) RecoveryCount is a count of the total number of recoveries performed using this token. It is incremented for every successful join or rejoin. Recovery is only allowed if this value is less than `.spec.bound_keypair.recovery.limit`, or if the recovery mode is `relaxed` or `insecure`. 
+- `registration_secret` (String) RegistrationSecret contains a secret value that may be used for public key registration during the initial join process if no public key is preregistered. If `.spec.bound_keypair.onboarding.initial_public_key` is set, this field will remain empty. Otherwise, if `.spec.bound_keypair.onboarding.registration_secret` is set, that value will be copied here. If that field is unset, a value will be randomly generated. + diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/role.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/role.mdx new file mode 100644 index 0000000000000..0a5836deb9311 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/role.mdx @@ -0,0 +1,557 @@ +--- +title: Reference for the teleport_role Terraform data-source +sidebar_label: role +description: This page describes the supported values of the teleport_role data-source of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the `teleport_role` data source of the +Teleport Terraform provider. + + + + + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `version` (String) Version is the resource version. It must be specified. Supported values are: `v3`, `v4`, `v5`, `v6`, `v7`, `v8`. 
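A minimal usage sketch for this data source, based only on the `version` and `metadata.name` fields documented on this page. The role name `example` and the output name are placeholders:

```hcl
# Hypothetical sketch: look up an existing role by name and expose
# one of its allow-condition fields. "example" is a placeholder.
data "teleport_role" "example" {
  version = "v7"
  metadata = {
    name = "example"
  }
}

output "example_role_logins" {
  value = data.teleport_role.example.spec.allow.logins
}
```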
+ +### Optional + +- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) +- `spec` (Attributes) Spec is a role specification (see [below for nested schema](#nested-schema-for-spec)) +- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources + +### Nested Schema for `metadata` + +Required: + +- `name` (String) Name is an object name + +Optional: + +- `description` (String) Description is object description +- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. +- `labels` (Map of String) Labels is a set of labels + + +### Nested Schema for `spec` + +Optional: + +- `allow` (Attributes) Allow is the set of conditions evaluated to grant access. (see [below for nested schema](#nested-schema-for-specallow)) +- `deny` (Attributes) Deny is the set of conditions evaluated to deny access. Deny takes priority over allow. (see [below for nested schema](#nested-schema-for-specdeny)) +- `options` (Attributes) Options is for OpenSSH options like agent forwarding. (see [below for nested schema](#nested-schema-for-specoptions)) + +### Nested Schema for `spec.allow` + +Optional: + +- `account_assignments` (Attributes List) AccountAssignments holds the list of account assignments affected by this condition. (see [below for nested schema](#nested-schema-for-specallowaccount_assignments)) +- `app_labels` (Map of List of String) AppLabels is a map of labels used as part of the RBAC system. +- `app_labels_expression` (String) AppLabelsExpression is a predicate expression used to allow/deny access to Apps. +- `aws_role_arns` (List of String) AWSRoleARNs is a list of AWS role ARNs this role is allowed to assume. +- `azure_identities` (List of String) AzureIdentities is a list of Azure identities this role is allowed to assume.
+- `cluster_labels` (Map of List of String) ClusterLabels is a map of node labels (used to dynamically grant access to clusters). +- `cluster_labels_expression` (String) ClusterLabelsExpression is a predicate expression used to allow/deny access to remote Teleport clusters. +- `db_labels` (Map of List of String) DatabaseLabels are used in RBAC system to allow/deny access to databases. +- `db_labels_expression` (String) DatabaseLabelsExpression is a predicate expression used to allow/deny access to Databases. +- `db_names` (List of String) DatabaseNames is a list of database names this role is allowed to connect to. +- `db_permissions` (Attributes List) DatabasePermissions specifies a set of permissions that will be granted to the database user when using automatic database user provisioning. (see [below for nested schema](#nested-schema-for-specallowdb_permissions)) +- `db_roles` (List of String) DatabaseRoles is a list of databases roles for automatic user creation. +- `db_service_labels` (Map of List of String) DatabaseServiceLabels are used in RBAC system to allow/deny access to Database Services. +- `db_service_labels_expression` (String) DatabaseServiceLabelsExpression is a predicate expression used to allow/deny access to Database Services. +- `db_users` (List of String) DatabaseUsers is a list of databases users this role is allowed to connect as. +- `desktop_groups` (List of String) DesktopGroups is a list of groups for created desktop users to be added to +- `gcp_service_accounts` (List of String) GCPServiceAccounts is a list of GCP service accounts this role is allowed to assume. +- `github_permissions` (Attributes List) GitHubPermissions defines GitHub integration related permissions. (see [below for nested schema](#nested-schema-for-specallowgithub_permissions)) +- `group_labels` (Map of List of String) GroupLabels is a map of labels used as part of the RBAC system. 
+- `group_labels_expression` (String) GroupLabelsExpression is a predicate expression used to allow/deny access to user groups. +- `host_groups` (List of String) HostGroups is a list of groups for created users to be added to. +- `host_sudoers` (List of String) HostSudoers is a list of entries to include in a user's sudoers file. +- `impersonate` (Attributes) Impersonate specifies what users and roles this role is allowed to impersonate by issuing certificates or other possible means. (see [below for nested schema](#nested-schema-for-specallowimpersonate)) +- `join_sessions` (Attributes List) JoinSessions specifies policies to allow users to join other sessions. (see [below for nested schema](#nested-schema-for-specallowjoin_sessions)) +- `kubernetes_groups` (List of String) KubeGroups is a list of Kubernetes groups. +- `kubernetes_labels` (Map of List of String) KubernetesLabels is a map of kubernetes cluster labels used for RBAC. +- `kubernetes_labels_expression` (String) KubernetesLabelsExpression is a predicate expression used to allow/deny access to kubernetes clusters. +- `kubernetes_resources` (Attributes List) KubernetesResources is the Kubernetes Resources this Role grants access to. (see [below for nested schema](#nested-schema-for-specallowkubernetes_resources)) +- `kubernetes_users` (List of String) KubeUsers is an optional list of Kubernetes users to impersonate. +- `logins` (List of String) Logins is a list of *nix system logins. +- `mcp` (Attributes) MCPPermissions defines MCP server-related permissions. (see [below for nested schema](#nested-schema-for-specallowmcp)) +- `node_labels` (Map of List of String) NodeLabels is a map of node labels (used to dynamically grant access to nodes). +- `node_labels_expression` (String) NodeLabelsExpression is a predicate expression used to allow/deny access to SSH nodes.
+- `request` (Attributes) (see [below for nested schema](#nested-schema-for-specallowrequest)) +- `require_session_join` (Attributes List) RequireSessionJoin specifies policies for required users to start a session. (see [below for nested schema](#nested-schema-for-specallowrequire_session_join)) +- `review_requests` (Attributes) ReviewRequests defines conditions for submitting access reviews. (see [below for nested schema](#nested-schema-for-specallowreview_requests)) +- `rules` (Attributes List) Rules is a list of rules and their access levels. Rules are a high-level construct used for access control. (see [below for nested schema](#nested-schema-for-specallowrules)) +- `spiffe` (Attributes List) SPIFFE is used to allow or deny a role holder access to generating a SPIFFE SVID. (see [below for nested schema](#nested-schema-for-specallowspiffe)) +- `windows_desktop_labels` (Map of List of String) WindowsDesktopLabels are used in the RBAC system to allow/deny access to Windows desktops. +- `windows_desktop_labels_expression` (String) WindowsDesktopLabelsExpression is a predicate expression used to allow/deny access to Windows desktops. +- `windows_desktop_logins` (List of String) WindowsDesktopLogins is a list of desktop login names allowed/denied for Windows desktops. +- `workload_identity_labels` (Map of List of String) WorkloadIdentityLabels controls whether or not specific WorkloadIdentity resources can be invoked. Further authorization controls exist on the WorkloadIdentity resource itself. +- `workload_identity_labels_expression` (String) WorkloadIdentityLabelsExpression is a predicate expression used to allow/deny access to issuing a WorkloadIdentity.
+ +### Nested Schema for `spec.allow.account_assignments` + +Optional: + +- `account` (String) +- `permission_set` (String) + + +### Nested Schema for `spec.allow.db_permissions` + +Optional: + +- `match` (Map of List of String) Match is a list of object labels that must be matched for the permission to be granted. +- `permissions` (List of String) Permission is the list of string representations of the permission to be given, e.g. SELECT, INSERT, UPDATE, ... + + +### Nested Schema for `spec.allow.github_permissions` + +Optional: + +- `orgs` (List of String) + + +### Nested Schema for `spec.allow.impersonate` + +Optional: + +- `roles` (List of String) Roles is a list of resources this role is allowed to impersonate +- `users` (List of String) Users is a list of resources this role is allowed to impersonate, could be an empty list or a Wildcard pattern +- `where` (String) Where specifies optional advanced matcher + + +### Nested Schema for `spec.allow.join_sessions` + +Optional: + +- `kinds` (List of String) Kinds are the session kinds this policy applies to. +- `modes` (List of String) Modes is a list of permitted participant modes for this policy. +- `name` (String) Name is the name of the policy. +- `roles` (List of String) Roles is a list of roles that you can join the session of. + + +### Nested Schema for `spec.allow.kubernetes_resources` + +Optional: + +- `api_group` (String) APIGroup specifies the Kubernetes API group of the Kubernetes resource. It supports wildcards. +- `kind` (String) Kind specifies the Kubernetes Resource type. +- `name` (String) Name is the resource name. It supports wildcards. +- `namespace` (String) Namespace is the resource namespace. It supports wildcards. +- `verbs` (List of String) Verbs are the allowed Kubernetes verbs for the following resource. + + +### Nested Schema for `spec.allow.mcp` + +Optional: + +- `tools` (List of String) Tools defines the list of tools allowed or denied for this role. 
Each entry can be a literal string, a glob pattern (e.g. "prefix_*"), or a regular expression (must start with '^' and end with '$'). If the list is empty, no tools are allowed. + + +### Nested Schema for `spec.allow.request` + +Optional: + +- `annotations` (Map of List of String) Annotations is a collection of annotations to be programmatically appended to pending Access Requests at the time of their creation. These annotations serve as a mechanism to propagate extra information to plugins. Since these annotations support variable interpolation syntax, they also offer a mechanism for forwarding claims from an external identity provider to a plugin via `{{external.trait_name}}` style substitutions. +- `claims_to_roles` (Attributes List) ClaimsToRoles specifies a mapping from claims (traits) to teleport roles. (see [below for nested schema](#nested-schema-for-specallowrequestclaims_to_roles)) +- `kubernetes_resources` (Attributes List) kubernetes_resources can optionally restrict a requester to requesting only certain kinds of kube resources. E.g. users can request either the resource kind "kube_cluster" or any of its subresources, like "namespaces". This field can be defined such that it prevents a user from requesting "kube_cluster" and enforces requesting any of its subresources. (see [below for nested schema](#nested-schema-for-specallowrequestkubernetes_resources)) +- `max_duration` (String) MaxDuration is the amount of time the access will be granted for. If this is zero, the default duration is used. +- `reason` (Attributes) Reason defines settings for the reason for the access provided by the user. (see [below for nested schema](#nested-schema-for-specallowrequestreason)) +- `roles` (List of String) Roles is the list of roles which will match the request rule.
+- `search_as_roles` (List of String) SearchAsRoles is a list of extra roles which should apply to a user while they are searching for resources as part of a Resource Access Request, and defines the underlying roles which will be requested as part of any Resource Access Request. +- `suggested_reviewers` (List of String) SuggestedReviewers is a list of reviewer suggestions. These can be teleport usernames, but that is not a requirement. +- `thresholds` (Attributes List) Thresholds is a list of thresholds, one of which must be met in order for reviews to trigger a state-transition. If no thresholds are provided, a default threshold of 1 for approval and denial is used. (see [below for nested schema](#nested-schema-for-specallowrequestthresholds)) + +### Nested Schema for `spec.allow.request.claims_to_roles` + +Optional: + +- `claim` (String) Claim is a claim name. +- `roles` (List of String) Roles is a list of static teleport roles to match. +- `value` (String) Value is a claim value to match. + + +### Nested Schema for `spec.allow.request.kubernetes_resources` + +Optional: + +- `api_group` (String) APIGroup specifies the Kubernetes Resource API group. +- `kind` (String) kind specifies the Kubernetes Resource type. + + +### Nested Schema for `spec.allow.request.reason` + +Optional: + +- `mode` (String) Mode can be either "required" or "optional". Empty string is treated as "optional". If a role has the request reason mode set to "required", then reason is required for all Access Requests requesting roles or resources allowed by this role. It applies only to users who have this role assigned. +- `prompt` (String) Prompt is a custom message prompted to the user for the requested roles or resources searchable as other roles. This is only applied to the requested roles and resources specifying the prompt. + + +### Nested Schema for `spec.allow.request.thresholds` + +Optional: + +- `approve` (Number) Approve is the number of matching approvals needed for state-transition. 
+- `deny` (Number) Deny is the number of denials needed for state-transition. +- `filter` (String) Filter is an optional predicate used to determine which reviews count toward this threshold. +- `name` (String) Name is the optional human-readable name of the threshold. + + + +### Nested Schema for `spec.allow.require_session_join` + +Optional: + +- `count` (Number) Count is the number of people that need to be matched for this policy to be fulfilled. +- `filter` (String) Filter is a predicate that determines what users count towards this policy. +- `kinds` (List of String) Kinds are the session kinds this policy applies to. +- `modes` (List of String) Modes is the list of modes that may be used to fulfill this policy. +- `name` (String) Name is the name of the policy. +- `on_leave` (String) OnLeave is the behaviour that's used when the policy is no longer fulfilled for a live session. + + +### Nested Schema for `spec.allow.review_requests` + +Optional: + +- `claims_to_roles` (Attributes List) ClaimsToRoles specifies a mapping from claims (traits) to teleport roles. (see [below for nested schema](#nested-schema-for-specallowreview_requestsclaims_to_roles)) +- `preview_as_roles` (List of String) PreviewAsRoles is a list of extra roles which should apply to a reviewer while they are viewing a Resource Access Request for the purposes of viewing details such as the hostname and labels of requested resources. +- `roles` (List of String) Roles is the list of roles which may be reviewed. +- `where` (String) Where is an optional predicate which further limits which requests are reviewable. + +### Nested Schema for `spec.allow.review_requests.claims_to_roles` + +Optional: + +- `claim` (String) Claim is a claim name. +- `roles` (List of String) Roles is a list of static teleport roles to match. +- `value` (String) Value is a claim value to match.
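As a sketch of how the `spec.allow.request`, `spec.allow.request.thresholds`, and `spec.allow.review_requests` shapes above map onto HCL when defining a role with the companion `teleport_role` resource (which shares this schema), assuming placeholder role and threshold names:

```hcl
# Hypothetical sketch mapping the nested schemas above onto a role
# definition. Role names and threshold values are placeholders.
resource "teleport_role" "requester_and_reviewer" {
  version = "v7"
  metadata = {
    name = "requester-and-reviewer"
  }
  spec = {
    allow = {
      request = {
        roles = ["access"]
        thresholds = [{
          name    = "two-approvals"
          approve = 2
          deny    = 1
        }]
      }
      review_requests = {
        roles = ["access"]
      }
    }
  }
}
```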
+ + + +### Nested Schema for `spec.allow.rules` + +Optional: + +- `actions` (List of String) Actions specifies optional actions taken when this rule matches +- `resources` (List of String) Resources is a list of resources +- `verbs` (List of String) Verbs is a list of verbs +- `where` (String) Where specifies optional advanced matcher + + +### Nested Schema for `spec.allow.spiffe` + +Optional: + +- `dns_sans` (List of String) DNSSANs specifies matchers for the SPIFFE ID DNS SANs. Each requested DNS SAN is compared against all matchers configured and if any match, the condition is considered to be met. The matcher by default allows '*' to be used to indicate zero or more of any character. Prepend '^' and append '$' to instead switch to matching using the Go regex syntax. Example: *.example.com would match foo.example.com +- `ip_sans` (List of String) IPSANs specifies matchers for the SPIFFE ID IP SANs. Each requested IP SAN is compared against all matchers configured and if any match, the condition is considered to be met. The matchers should be specified using CIDR notation, it supports IPv4 and IPv6. Examples: - 10.0.0.0/24 would match 10.0.0.0 to 10.255.255.255 - 10.0.0.42/32 would match only 10.0.0.42 +- `path` (String) Path specifies a matcher for the SPIFFE ID path. It should not include the trust domain and should start with a leading slash. The matcher by default allows '*' to be used to indicate zero or more of any character. Prepend '^' and append '$' to instead switch to matching using the Go regex syntax. Example: - /svc/foo/*/bar would match /svc/foo/baz/bar - ^\/svc\/foo\/.*\/bar$ would match /svc/foo/baz/bar + + + +### Nested Schema for `spec.deny` + +Optional: + +- `account_assignments` (Attributes List) AccountAssignments holds the list of account assignments affected by this condition. 
(see [below for nested schema](#nested-schema-for-specdenyaccount_assignments)) +- `app_labels` (Map of List of String) AppLabels is a map of labels used as part of the RBAC system. +- `app_labels_expression` (String) AppLabelsExpression is a predicate expression used to allow/deny access to Apps. +- `aws_role_arns` (List of String) AWSRoleARNs is a list of AWS role ARNs this role is allowed to assume. +- `azure_identities` (List of String) AzureIdentities is a list of Azure identities this role is allowed to assume. +- `cluster_labels` (Map of List of String) ClusterLabels is a map of node labels (used to dynamically grant access to clusters). +- `cluster_labels_expression` (String) ClusterLabelsExpression is a predicate expression used to allow/deny access to remote Teleport clusters. +- `db_labels` (Map of List of String) DatabaseLabels are used in the RBAC system to allow/deny access to databases. +- `db_labels_expression` (String) DatabaseLabelsExpression is a predicate expression used to allow/deny access to Databases. +- `db_names` (List of String) DatabaseNames is a list of database names this role is allowed to connect to. +- `db_permissions` (Attributes List) DatabasePermissions specifies a set of permissions that will be granted to the database user when using automatic database user provisioning. (see [below for nested schema](#nested-schema-for-specdenydb_permissions)) +- `db_roles` (List of String) DatabaseRoles is a list of database roles for automatic user creation. +- `db_service_labels` (Map of List of String) DatabaseServiceLabels are used in the RBAC system to allow/deny access to Database Services. +- `db_service_labels_expression` (String) DatabaseServiceLabelsExpression is a predicate expression used to allow/deny access to Database Services. +- `db_users` (List of String) DatabaseUsers is a list of database users this role is allowed to connect as.
+- `desktop_groups` (List of String) DesktopGroups is a list of groups for created desktop users to be added to. +- `gcp_service_accounts` (List of String) GCPServiceAccounts is a list of GCP service accounts this role is allowed to assume. +- `github_permissions` (Attributes List) GitHubPermissions defines GitHub integration related permissions. (see [below for nested schema](#nested-schema-for-specdenygithub_permissions)) +- `group_labels` (Map of List of String) GroupLabels is a map of labels used as part of the RBAC system. +- `group_labels_expression` (String) GroupLabelsExpression is a predicate expression used to allow/deny access to user groups. +- `host_groups` (List of String) HostGroups is a list of groups for created users to be added to. +- `host_sudoers` (List of String) HostSudoers is a list of entries to include in a user's sudoers file. +- `impersonate` (Attributes) Impersonate specifies what users and roles this role is allowed to impersonate by issuing certificates or other possible means. (see [below for nested schema](#nested-schema-for-specdenyimpersonate)) +- `join_sessions` (Attributes List) JoinSessions specifies policies to allow users to join other sessions. (see [below for nested schema](#nested-schema-for-specdenyjoin_sessions)) +- `kubernetes_groups` (List of String) KubeGroups is a list of Kubernetes groups. +- `kubernetes_labels` (Map of List of String) KubernetesLabels is a map of Kubernetes cluster labels used for RBAC. +- `kubernetes_labels_expression` (String) KubernetesLabelsExpression is a predicate expression used to allow/deny access to Kubernetes clusters. +- `kubernetes_resources` (Attributes List) KubernetesResources is the Kubernetes Resources this Role grants access to. (see [below for nested schema](#nested-schema-for-specdenykubernetes_resources)) +- `kubernetes_users` (List of String) KubeUsers is an optional list of Kubernetes users to impersonate. +- `logins` (List of String) Logins is a list of *nix system logins.
+- `mcp` (Attributes) MCPPermissions defines MCP server-related permissions. (see [below for nested schema](#nested-schema-for-specdenymcp)) +- `node_labels` (Map of List of String) NodeLabels is a map of node labels (used to dynamically grant access to nodes). +- `node_labels_expression` (String) NodeLabelsExpression is a predicate expression used to allow/deny access to SSH nodes. +- `request` (Attributes) (see [below for nested schema](#nested-schema-for-specdenyrequest)) +- `require_session_join` (Attributes List) RequireSessionJoin specifies policies for required users to start a session. (see [below for nested schema](#nested-schema-for-specdenyrequire_session_join)) +- `review_requests` (Attributes) ReviewRequests defines conditions for submitting access reviews. (see [below for nested schema](#nested-schema-for-specdenyreview_requests)) +- `rules` (Attributes List) Rules is a list of rules and their access levels. Rules are a high-level construct used for access control. (see [below for nested schema](#nested-schema-for-specdenyrules)) +- `spiffe` (Attributes List) SPIFFE is used to allow or deny a role holder access to generating a SPIFFE SVID. (see [below for nested schema](#nested-schema-for-specdenyspiffe)) +- `windows_desktop_labels` (Map of List of String) WindowsDesktopLabels are used in the RBAC system to allow/deny access to Windows desktops. +- `windows_desktop_labels_expression` (String) WindowsDesktopLabelsExpression is a predicate expression used to allow/deny access to Windows desktops. +- `windows_desktop_logins` (List of String) WindowsDesktopLogins is a list of desktop login names allowed/denied for Windows desktops. +- `workload_identity_labels` (Map of List of String) WorkloadIdentityLabels controls whether or not specific WorkloadIdentity resources can be invoked. Further authorization controls exist on the WorkloadIdentity resource itself.
+- `workload_identity_labels_expression` (String) WorkloadIdentityLabelsExpression is a predicate expression used to allow/deny access to issuing a WorkloadIdentity. + +### Nested Schema for `spec.deny.account_assignments` + +Optional: + +- `account` (String) +- `permission_set` (String) + + +### Nested Schema for `spec.deny.db_permissions` + +Optional: + +- `match` (Map of List of String) Match is a list of object labels that must be matched for the permission to be granted. +- `permissions` (List of String) Permission is the list of string representations of the permission to be given, e.g. SELECT, INSERT, UPDATE, ... + + +### Nested Schema for `spec.deny.github_permissions` + +Optional: + +- `orgs` (List of String) + + +### Nested Schema for `spec.deny.impersonate` + +Optional: + +- `roles` (List of String) Roles is a list of resources this role is allowed to impersonate +- `users` (List of String) Users is a list of resources this role is allowed to impersonate, could be an empty list or a Wildcard pattern +- `where` (String) Where specifies optional advanced matcher + + +### Nested Schema for `spec.deny.join_sessions` + +Optional: + +- `kinds` (List of String) Kinds are the session kinds this policy applies to. +- `modes` (List of String) Modes is a list of permitted participant modes for this policy. +- `name` (String) Name is the name of the policy. +- `roles` (List of String) Roles is a list of roles that you can join the session of. + + +### Nested Schema for `spec.deny.kubernetes_resources` + +Optional: + +- `api_group` (String) APIGroup specifies the Kubernetes API group of the Kubernetes resource. It supports wildcards. +- `kind` (String) Kind specifies the Kubernetes Resource type. +- `name` (String) Name is the resource name. It supports wildcards. +- `namespace` (String) Namespace is the resource namespace. It supports wildcards. +- `verbs` (List of String) Verbs are the allowed Kubernetes verbs for the following resource. 
+ + +### Nested Schema for `spec.deny.mcp` + +Optional: + +- `tools` (List of String) Tools defines the list of tools allowed or denied for this role. Each entry can be a literal string, a glob pattern (e.g. "prefix_*"), or a regular expression (must start with '^' and end with '$'). If the list is empty, no tools are allowed. + + +### Nested Schema for `spec.deny.request` + +Optional: + +- `annotations` (Map of List of String) Annotations is a collection of annotations to be programmatically appended to pending Access Requests at the time of their creation. These annotations serve as a mechanism to propagate extra information to plugins. Since these annotations support variable interpolation syntax, they also offer a mechanism for forwarding claims from an external identity provider to a plugin via `{{external.trait_name}}` style substitutions. +- `claims_to_roles` (Attributes List) ClaimsToRoles specifies a mapping from claims (traits) to teleport roles. (see [below for nested schema](#nested-schema-for-specdenyrequestclaims_to_roles)) +- `kubernetes_resources` (Attributes List) kubernetes_resources can optionally restrict a requester to requesting only certain kinds of kube resources. E.g. users can make a request for either the resource kind "kube_cluster" or any of its subresources, such as "namespaces". This field can be defined such that it prevents a user from requesting "kube_cluster" and enforces requesting any of its subresources. (see [below for nested schema](#nested-schema-for-specdenyrequestkubernetes_resources)) +- `max_duration` (String) MaxDuration is the amount of time the access will be granted for. If this is zero, the default duration is used. +- `reason` (Attributes) Reason defines settings for the reason for the access provided by the user. (see [below for nested schema](#nested-schema-for-specdenyrequestreason)) +- `roles` (List of String) Roles is the name of roles which will match the request rule.
+- `search_as_roles` (List of String) SearchAsRoles is a list of extra roles which should apply to a user while they are searching for resources as part of a Resource Access Request, and defines the underlying roles which will be requested as part of any Resource Access Request. +- `suggested_reviewers` (List of String) SuggestedReviewers is a list of reviewer suggestions. These can be teleport usernames, but that is not a requirement. +- `thresholds` (Attributes List) Thresholds is a list of thresholds, one of which must be met in order for reviews to trigger a state-transition. If no thresholds are provided, a default threshold of 1 for approval and denial is used. (see [below for nested schema](#nested-schema-for-specdenyrequestthresholds)) + +### Nested Schema for `spec.deny.request.claims_to_roles` + +Optional: + +- `claim` (String) Claim is a claim name. +- `roles` (List of String) Roles is a list of static teleport roles to match. +- `value` (String) Value is a claim value to match. + + +### Nested Schema for `spec.deny.request.kubernetes_resources` + +Optional: + +- `api_group` (String) APIGroup specifies the Kubernetes Resource API group. +- `kind` (String) kind specifies the Kubernetes Resource type. + + +### Nested Schema for `spec.deny.request.reason` + +Optional: + +- `mode` (String) Mode can be either "required" or "optional". Empty string is treated as "optional". If a role has the request reason mode set to "required", then reason is required for all Access Requests requesting roles or resources allowed by this role. It applies only to users who have this role assigned. +- `prompt` (String) Prompt is a custom message prompted to the user for the requested roles or resources searchable as other roles. This is only applied to the requested roles and resources specifying the prompt. + + +### Nested Schema for `spec.deny.request.thresholds` + +Optional: + +- `approve` (Number) Approve is the number of matching approvals needed for state-transition. 
+- `deny` (Number) Deny is the number of denials needed for state-transition. +- `filter` (String) Filter is an optional predicate used to determine which reviews count toward this threshold. +- `name` (String) Name is the optional human-readable name of the threshold. + + + +### Nested Schema for `spec.deny.require_session_join` + +Optional: + +- `count` (Number) Count is the number of people that need to be matched for this policy to be fulfilled. +- `filter` (String) Filter is a predicate that determines what users count towards this policy. +- `kinds` (List of String) Kinds are the session kinds this policy applies to. +- `modes` (List of String) Modes is the list of modes that may be used to fulfill this policy. +- `name` (String) Name is the name of the policy. +- `on_leave` (String) OnLeave is the behaviour that's used when the policy is no longer fulfilled for a live session. + + +### Nested Schema for `spec.deny.review_requests` + +Optional: + +- `claims_to_roles` (Attributes List) ClaimsToRoles specifies a mapping from claims (traits) to teleport roles. (see [below for nested schema](#nested-schema-for-specdenyreview_requestsclaims_to_roles)) +- `preview_as_roles` (List of String) PreviewAsRoles is a list of extra roles which should apply to a reviewer while they are viewing a Resource Access Request for the purposes of viewing details such as the hostname and labels of requested resources. +- `roles` (List of String) Roles is the name of roles which may be reviewed. +- `where` (String) Where is an optional predicate which further limits which requests are reviewable. + +### Nested Schema for `spec.deny.review_requests.claims_to_roles` + +Optional: + +- `claim` (String) Claim is a claim name. +- `roles` (List of String) Roles is a list of static teleport roles to match. +- `value` (String) Value is a claim value to match.
+ + + +### Nested Schema for `spec.deny.rules` + +Optional: + +- `actions` (List of String) Actions specifies optional actions taken when this rule matches +- `resources` (List of String) Resources is a list of resources +- `verbs` (List of String) Verbs is a list of verbs +- `where` (String) Where specifies optional advanced matcher + + +### Nested Schema for `spec.deny.spiffe` + +Optional: + +- `dns_sans` (List of String) DNSSANs specifies matchers for the SPIFFE ID DNS SANs. Each requested DNS SAN is compared against all matchers configured and if any match, the condition is considered to be met. The matcher by default allows '*' to be used to indicate zero or more of any character. Prepend '^' and append '$' to instead switch to matching using the Go regex syntax. Example: *.example.com would match foo.example.com +- `ip_sans` (List of String) IPSANs specifies matchers for the SPIFFE ID IP SANs. Each requested IP SAN is compared against all matchers configured and if any match, the condition is considered to be met. The matchers should be specified using CIDR notation, it supports IPv4 and IPv6. Examples: - 10.0.0.0/24 would match 10.0.0.0 to 10.255.255.255 - 10.0.0.42/32 would match only 10.0.0.42 +- `path` (String) Path specifies a matcher for the SPIFFE ID path. It should not include the trust domain and should start with a leading slash. The matcher by default allows '*' to be used to indicate zero or more of any character. Prepend '^' and append '$' to instead switch to matching using the Go regex syntax. 
Example: - /svc/foo/*/bar would match /svc/foo/baz/bar - ^\/svc\/foo\/.*\/bar$ would match /svc/foo/baz/bar + + + +### Nested Schema for `spec.options` + +Optional: + +- `cert_extensions` (Attributes List) CertExtensions specifies the key/values (see [below for nested schema](#nested-schema-for-specoptionscert_extensions)) +- `cert_format` (String) CertificateFormat defines the format of the user certificate to allow compatibility with older versions of OpenSSH. +- `client_idle_timeout` (String) ClientIdleTimeout sets the disconnect-on-idle behavior for clients: 0 means clients are never disconnected, otherwise clients are disconnected after the given idle duration. +- `create_db_user` (Boolean) CreateDatabaseUser enables automatic database user creation. +- `create_db_user_mode` (Number) CreateDatabaseUserMode allows users to be automatically created on a database when not set to off. 0 is "unspecified", 1 is "off", 2 is "keep", 3 is "best_effort_drop". +- `create_desktop_user` (Boolean) CreateDesktopUser allows users to be automatically created on a Windows desktop. +- `create_host_user` (Boolean) Deprecated: use CreateHostUserMode instead. +- `create_host_user_default_shell` (String) CreateHostUserDefaultShell is used to configure the default shell for newly provisioned host users. +- `create_host_user_mode` (Number) CreateHostUserMode allows users to be automatically created on a host when not set to off. 0 is "unspecified"; 1 is "off"; 2 is "drop" (removed for v15 and above); 3 is "keep"; 4 is "insecure-drop". +- `desktop_clipboard` (Boolean) DesktopClipboard indicates whether clipboard sharing is allowed between the user's workstation and the remote desktop. It defaults to true unless explicitly set to false. +- `desktop_directory_sharing` (Boolean) DesktopDirectorySharing indicates whether directory sharing is allowed between the user's workstation and the remote desktop. It defaults to false unless explicitly set to true.
+- `device_trust_mode` (String) DeviceTrustMode is the device authorization mode used for the resources associated with the role. See DeviceTrust.Mode. +- `disconnect_expired_cert` (Boolean) DisconnectExpiredCert sets disconnect clients on expired certificates. +- `enhanced_recording` (List of String) BPF defines what events to record for the BPF-based session recorder. +- `forward_agent` (Boolean) ForwardAgent is SSH agent forwarding. +- `idp` (Attributes) IDP is a set of options related to accessing IdPs within Teleport. Requires Teleport Enterprise. (see [below for nested schema](#nested-schema-for-specoptionsidp)) +- `lock` (String) Lock specifies the locking mode (strict|best_effort) to be applied with the role. +- `max_connections` (Number) MaxConnections defines the maximum number of concurrent connections a user may hold. +- `max_kubernetes_connections` (Number) MaxKubernetesConnections defines the maximum number of concurrent Kubernetes sessions a user may hold. +- `max_session_ttl` (String) MaxSessionTTL defines how long a SSH session can last for. +- `max_sessions` (Number) MaxSessions defines the maximum number of concurrent sessions per connection. +- `mfa_verification_interval` (String) MFAVerificationInterval optionally defines the maximum duration that can elapse between successive MFA verifications. This variable is used to ensure that users are periodically prompted to verify their identity, enhancing security by preventing prolonged sessions without re-authentication when using tsh proxy * derivatives. It's only effective if the session requires MFA. If not set, defaults to `max_session_ttl`. +- `permit_x11_forwarding` (Boolean) PermitX11Forwarding authorizes use of X11 forwarding. 
+- `pin_source_ip` (Boolean) PinSourceIP forces the same client IP for certificate generation and usage. +- `port_forwarding` (Boolean) Deprecated: Use SSHPortForwarding instead. +- `record_session` (Attributes) RecordDesktopSession indicates whether desktop access sessions should be recorded. It defaults to true unless explicitly set to false. (see [below for nested schema](#nested-schema-for-specoptionsrecord_session)) +- `request_access` (String) RequestAccess defines the request strategy (optional|reason|always) where optional is the default. +- `request_prompt` (String) RequestPrompt is an optional message which tells users what they ought to request. +- `require_session_mfa` (Number) RequireMFAType is the type of MFA requirement enforced for this user. 0 is "OFF", 1 is "SESSION", 2 is "SESSION_AND_HARDWARE_KEY", 3 is "HARDWARE_KEY_TOUCH", 4 is "HARDWARE_KEY_PIN", 5 is "HARDWARE_KEY_TOUCH_AND_PIN". +- `ssh_file_copy` (Boolean) SSHFileCopy indicates whether remote file operations via SCP or SFTP are allowed over an SSH session. It defaults to true unless explicitly set to false. +- `ssh_port_forwarding` (Attributes) SSHPortForwarding configures what types of SSH port forwarding are allowed by a role. (see [below for nested schema](#nested-schema-for-specoptionsssh_port_forwarding)) + +### Nested Schema for `spec.options.cert_extensions` + +Optional: + +- `mode` (Number) Mode is the type of extension to be used -- currently critical-option is not supported. 0 is "extension". +- `name` (String) Name specifies the key to be used in the cert extension. +- `type` (Number) Type represents the certificate type being extended, only ssh is supported at this time. 0 is "ssh". +- `value` (String) Value specifies the value to be used in the cert extension. + + +### Nested Schema for `spec.options.idp` + +Optional: + +- `saml` (Attributes) SAML are options related to the Teleport SAML IdP.
(see [below for nested schema](#nested-schema-for-specoptionsidpsaml)) + +### Nested Schema for `spec.options.idp.saml` + +Optional: + +- `enabled` (Boolean) Enabled is set to true if this option allows access to the Teleport SAML IdP. + + + +### Nested Schema for `spec.options.record_session` + +Optional: + +- `default` (String) Default indicates the default value for the services. +- `desktop` (Boolean) Desktop indicates whether desktop sessions should be recorded. It defaults to true unless explicitly set to false. +- `ssh` (String) SSH indicates the session mode used on SSH sessions. + + +### Nested Schema for `spec.options.ssh_port_forwarding` + +Optional: + +- `local` (Attributes) Allow local port forwarding. (see [below for nested schema](#nested-schema-for-specoptionsssh_port_forwardinglocal)) +- `remote` (Attributes) Allow remote port forwarding. (see [below for nested schema](#nested-schema-for-specoptionsssh_port_forwardingremote)) + +### Nested Schema for `spec.options.ssh_port_forwarding.local` + +Optional: + +- `enabled` (Boolean) + + +### Nested Schema for `spec.options.ssh_port_forwarding.remote` + +Optional: + +- `enabled` (Boolean) + diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/saml_connector.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/saml_connector.mdx new file mode 100644 index 0000000000000..4d33e3a353e81 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/saml_connector.mdx @@ -0,0 +1,117 @@ +--- +title: Reference for the teleport_saml_connector Terraform data-source +sidebar_label: saml_connector +description: This page describes the supported values of the teleport_saml_connector data-source of the Teleport Terraform provider. +--- + +{/*Auto-generated file. 
Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the `teleport_saml_connector` data source of the +Teleport Terraform provider. + + + + + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `spec` (Attributes) Spec is an SAML connector specification. (see [below for nested schema](#nested-schema-for-spec)) +- `version` (String) Version is the resource version. It must be specified. Supported values are: `v2`. + +### Optional + +- `metadata` (Attributes) Metadata holds resource metadata. (see [below for nested schema](#nested-schema-for-metadata)) +- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources. + +### Nested Schema for `spec` + +Required: + +- `acs` (String) AssertionConsumerService is a URL for assertion consumer service on the service provider (Teleport's side). +- `attributes_to_roles` (Attributes List) AttributesToRoles is a list of mappings of attribute statements to roles. (see [below for nested schema](#nested-schema-for-specattributes_to_roles)) + +Optional: + +- `allow_idp_initiated` (Boolean) AllowIDPInitiated is a flag that indicates if the connector can be used for IdP-initiated logins. +- `assertion_key_pair` (Attributes) EncryptionKeyPair is a key pair used for decrypting SAML assertions. (see [below for nested schema](#nested-schema-for-specassertion_key_pair)) +- `audience` (String) Audience uniquely identifies our service provider. +- `cert` (String, Sensitive) Cert is the identity provider certificate PEM. IDP signs `` responses using this certificate. +- `client_redirect_settings` (Attributes) ClientRedirectSettings defines which client redirect URLs are allowed for non-browser SSO logins other than the standard localhost ones. (see [below for nested schema](#nested-schema-for-specclient_redirect_settings)) +- `display` (String) Display controls how this connector is displayed. 
+- `entity_descriptor` (String, Sensitive) EntityDescriptor is the XML entity descriptor. It can be used to supply configuration parameters in one XML file rather than supplying them in the individual elements. +- `entity_descriptor_url` (String) EntityDescriptorURL is a URL that supplies a configuration XML. +- `force_authn` (Number) ForceAuthn specifies whether re-authentication should be forced on login. UNSPECIFIED is treated as NO. +- `include_subject` (Boolean) IncludeSubject is a flag that indicates whether the Subject element is included in the SAML authentication request. Defaults to false. Note: Some IdPs will reject requests that contain a Subject. +- `issuer` (String) Issuer is the identity provider issuer. +- `mfa` (Attributes) MFASettings contains settings to enable SSO MFA checks through this auth connector. (see [below for nested schema](#nested-schema-for-specmfa)) +- `preferred_request_binding` (String) PreferredRequestBinding is a preferred SAML request binding method. Value must be either "http-post" or "http-redirect". In general, the SAML identity provider lists the request binding methods it supports, and the SAML service provider uses whichever supported method it prefers. Historically, Teleport never honored the request binding value provided by the IdP and always used http-redirect binding as a default. Setting a PreferredRequestBinding value preserves the existing auth connector behavior and only uses http-post binding if it is explicitly configured. +- `provider` (String) Provider is the external identity provider. +- `service_provider_issuer` (String) ServiceProviderIssuer is the issuer of the service provider (Teleport). +- `signing_key_pair` (Attributes) SigningKeyPair is an x509 key pair used to sign AuthnRequest. (see [below for nested schema](#nested-schema-for-specsigning_key_pair)) +- `single_logout_url` (String) SingleLogoutURL is the SAML Single log-out URL to initiate SAML SLO (single log-out).
If this is not provided, SLO is disabled. +- `sso` (String) SSO is the URL of the identity provider's SSO service. +- `user_matchers` (List of String) UserMatchers is a set of glob patterns to narrow down which username(s) this auth connector should match for identifier-first login. + +### Nested Schema for `spec.attributes_to_roles` + +Optional: + +- `name` (String) Name is an attribute statement name. +- `roles` (List of String) Roles is a list of static teleport roles to map to. +- `value` (String) Value is an attribute statement value to match. + + +### Nested Schema for `spec.assertion_key_pair` + +Optional: + +- `cert` (String) Cert is a PEM-encoded x509 certificate. +- `private_key` (String, Sensitive) PrivateKey is a PEM encoded x509 private key. + + +### Nested Schema for `spec.client_redirect_settings` + +Optional: + +- `allowed_https_hostnames` (List of String) A list of hostnames allowed for HTTPS client redirect URLs. +- `insecure_allowed_cidr_ranges` (List of String) A list of CIDRs allowed for HTTP or HTTPS client redirect URLs. + + +### Nested Schema for `spec.mfa` + +Optional: + +- `cert` (String) Cert is the identity provider certificate PEM. IDP signs `` responses using this certificate. +- `enabled` (Boolean) Enabled specifies whether this SAML connector supports MFA checks. Defaults to false. +- `entity_descriptor` (String) EntityDescriptor is the XML entity descriptor. It can be used to supply configuration parameters in one XML file rather than supplying them in the individual elements. Usually set from EntityDescriptorUrl. +- `entity_descriptor_url` (String) EntityDescriptorUrl is a URL that supplies a configuration XML. +- `force_authn` (Number) ForceAuthn specifies whether re-authentication should be forced for MFA checks. UNSPECIFIED is treated as YES, always requiring re-authentication for MFA checks. This should only be set to NO if the IdP is set up to perform MFA checks on top of active user sessions.
+- `issuer` (String) Issuer is the identity provider issuer. Usually set from EntityDescriptor. +- `sso` (String) SSO is the URL of the identity provider's SSO service. Usually set from EntityDescriptor. + + +### Nested Schema for `spec.signing_key_pair` + +Optional: + +- `cert` (String) Cert is a PEM-encoded x509 certificate. +- `private_key` (String, Sensitive) PrivateKey is a PEM encoded x509 private key. + + + +### Nested Schema for `metadata` + +Required: + +- `name` (String) Name is an object name. + +Optional: + +- `description` (String) Description is the object description. +- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. +- `labels` (Map of String) Labels is a set of labels. + diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/session_recording_config.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/session_recording_config.mdx new file mode 100644 index 0000000000000..fcdc3189105e5 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/session_recording_config.mdx @@ -0,0 +1,93 @@ +--- +title: Reference for the teleport_session_recording_config Terraform data-source +sidebar_label: session_recording_config +description: This page describes the supported values of the teleport_session_recording_config data-source of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the `teleport_session_recording_config` data source of the +Teleport Terraform provider. + + + + + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `version` (String) Version is the resource version. It must be specified. Supported values are: `v2`.
+ + +### Optional + +- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) +- `spec` (Attributes) Spec is a SessionRecordingConfig specification (see [below for nested schema](#nested-schema-for-spec)) +- `status` (Attributes) Status is the SessionRecordingConfig status containing active encryption keys (see [below for nested schema](#nested-schema-for-status)) +- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources + +### Nested Schema for `metadata` + +Optional: + +- `description` (String) Description is the object description. +- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. +- `labels` (Map of String) Labels is a set of labels. + + +### Nested Schema for `spec` + +Optional: + +- `encryption` (Attributes) Encryption configures if and how session recordings should be encrypted. (see [below for nested schema](#nested-schema-for-specencryption)) +- `mode` (String) Mode controls where (or if) the session is recorded. +- `proxy_checks_host_keys` (Boolean) ProxyChecksHostKeys is used to control if the proxy will check host keys when in recording mode. + +### Nested Schema for `spec.encryption` + +Optional: + +- `enabled` (Boolean) Enabled controls whether or not session recordings should be encrypted. +- `manual_key_management` (Attributes) ManualKeyManagement defines whether or not recording encryption keys should be managed externally and how to query those keys. (see [below for nested schema](#nested-schema-for-specencryptionmanual_key_management)) + +### Nested Schema for `spec.encryption.manual_key_management` + +Optional: + +- `active_keys` (Attributes List) ActiveKeys describe which keys should be queried for active recording encryption and replay.
(see [below for nested schema](#nested-schema-for-specencryptionmanual_key_managementactive_keys))
+- `enabled` (Boolean) Enabled controls whether or not recording encryption keys should be managed externally.
+- `rotated_keys` (Attributes List) RotatedKeys describe which keys should be queried for historical replay. (see [below for nested schema](#nested-schema-for-specencryptionmanual_key_managementrotated_keys))
+
+### Nested Schema for `spec.encryption.manual_key_management.active_keys`
+
+Optional:
+
+- `label` (String) Label is a value that can be used with the related keystore in order to find relevant keys.
+- `type` (String) Type represents which keystore should be searched when looking up keys by label.
+
+
+### Nested Schema for `spec.encryption.manual_key_management.rotated_keys`
+
+Optional:
+
+- `label` (String) Label is a value that can be used with the related keystore in order to find relevant keys.
+- `type` (String) Type represents which keystore should be searched when looking up keys by label.
+
+
+
+
+
+### Nested Schema for `status`
+
+Optional:
+
+- `encryption_keys` (Attributes List) EncryptionKeys contain the currently active age encryption keys used for encrypted session recording. (see [below for nested schema](#nested-schema-for-statusencryption_keys))
+
+### Nested Schema for `status.encryption_keys`
+
+Optional:
+
+- `public_key` (String) A PKIX ASN.1 DER encoded public key used for key wrapping during age encryption. Expected to be RSA 4096.
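+
+## Example Usage
+
+A minimal sketch of reading the cluster-wide session recording configuration; the data source label `current` and the output name are illustrative, not required:
+
+```hcl
+# Read the singleton session recording configuration of the cluster.
+data "teleport_session_recording_config" "current" {}
+
+# Expose the recording mode (e.g. "node" or "proxy") as an output.
+output "session_recording_mode" {
+  value = data.teleport_session_recording_config.current.spec.mode
+}
+```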
+ diff --git a/docs/pages/reference/terraform-provider/data-sources/static_host_user.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/static_host_user.mdx similarity index 94% rename from docs/pages/reference/terraform-provider/data-sources/static_host_user.mdx rename to docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/static_host_user.mdx index d444a8158e33e..9adc1d0c5f7c4 100644 --- a/docs/pages/reference/terraform-provider/data-sources/static_host_user.mdx +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/static_host_user.mdx @@ -7,6 +7,9 @@ description: This page describes the supported values of the teleport_static_hos {/*Auto-generated file. Do not edit.*/} {/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} +This page describes the supported values of the `teleport_static_host_user` data source of the +Teleport Terraform provider. + diff --git a/docs/pages/reference/terraform-provider/data-sources/trusted_cluster.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/trusted_cluster.mdx similarity index 95% rename from docs/pages/reference/terraform-provider/data-sources/trusted_cluster.mdx rename to docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/trusted_cluster.mdx index 29db4b81913d6..242a7e0f2a2e3 100644 --- a/docs/pages/reference/terraform-provider/data-sources/trusted_cluster.mdx +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/trusted_cluster.mdx @@ -7,6 +7,9 @@ description: This page describes the supported values of the teleport_trusted_cl {/*Auto-generated file. Do not edit.*/} {/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} +This page describes the supported values of the `teleport_trusted_cluster` data source of the +Teleport Terraform provider. 
+ diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/trusted_device.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/trusted_device.mdx new file mode 100644 index 0000000000000..534b3af3b9693 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/trusted_device.mdx @@ -0,0 +1,56 @@ +--- +title: Reference for the teleport_trusted_device Terraform data-source +sidebar_label: trusted_device +description: This page describes the supported values of the teleport_trusted_device data-source of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the `teleport_trusted_device` data source of the +Teleport Terraform provider. + + + + + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `version` (String) Version is the API version used to create the resource. It must be specified. Based on this version, Teleport will apply different defaults on resource creation or deletion. It must be an integer prefixed by "v". For example: `v1` + +### Optional + +- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) +- `spec` (Attributes) Specification of the device. 
(see [below for nested schema](#nested-schema-for-spec)) + +### Nested Schema for `metadata` + +Optional: + +- `labels` (Map of String) Labels is a set of labels +- `name` (String) Name is an object name + + +### Nested Schema for `spec` + +Required: + +- `asset_tag` (String) +- `os_type` (String) + +Optional: + +- `enroll_status` (String) +- `owner` (String) +- `source` (Attributes) (see [below for nested schema](#nested-schema-for-specsource)) + +### Nested Schema for `spec.source` + +Optional: + +- `name` (String) +- `origin` (String) + diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/user.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/user.mdx new file mode 100644 index 0000000000000..2ecec268c99b9 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/user.mdx @@ -0,0 +1,92 @@ +--- +title: Reference for the teleport_user Terraform data-source +sidebar_label: user +description: This page describes the supported values of the teleport_user data-source of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the `teleport_user` data source of the +Teleport Terraform provider. + + + + + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `version` (String) Version is the resource version. It must be specified. Supported values are: `v2`. 
+ +### Optional + +- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) +- `spec` (Attributes) Spec is a user specification (see [below for nested schema](#nested-schema-for-spec)) +- `status` (Attributes) (see [below for nested schema](#nested-schema-for-status)) +- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources + +### Nested Schema for `metadata` + +Required: + +- `name` (String) Name is an object name + +Optional: + +- `description` (String) Description is object description +- `expires` (String) Expires is a global expiry time header can be set on any resource in the system. +- `labels` (Map of String) Labels is a set of labels + + +### Nested Schema for `spec` + +Optional: + +- `github_identities` (Attributes List) GithubIdentities list associated Github OAuth2 identities that let user log in using externally verified identity (see [below for nested schema](#nested-schema-for-specgithub_identities)) +- `oidc_identities` (Attributes List) OIDCIdentities lists associated OpenID Connect identities that let user log in using externally verified identity (see [below for nested schema](#nested-schema-for-specoidc_identities)) +- `roles` (List of String) Roles is a list of roles assigned to user +- `saml_identities` (Attributes List) SAMLIdentities lists associated SAML identities that let user log in using externally verified identity (see [below for nested schema](#nested-schema-for-specsaml_identities)) +- `traits` (Map of List of String) Traits are key/value pairs received from an identity provider (through OIDC claims or SAML assertions) or from a system administrator for local accounts. Traits are used to populate role variables. +- `trusted_device_ids` (List of String) TrustedDeviceIDs contains the IDs of trusted devices enrolled by the user. 
Note that SSO users are transient and thus may contain an empty TrustedDeviceIDs field, even though the user->device association exists under the Device Trust subsystem. Do not rely on this field to determine device associations or ownership, it exists for legacy/informative purposes only. Managed by the Device Trust subsystem, avoid manual edits.
+
+### Nested Schema for `spec.github_identities`
+
+Optional:
+
+- `connector_id` (String) ConnectorID is id of registered OIDC connector, e.g. 'google-example.com'
+- `samlSingleLogoutUrl` (String) SAMLSingleLogoutURL is the SAML Single log-out URL to initiate SAML SLO (single log-out), if applicable.
+- `user_id` (String) UserID is the ID of the identity. Some connectors like GitHub have a unique ID apart from the username.
+- `username` (String) Username is username supplied by external identity provider
+
+
+### Nested Schema for `spec.oidc_identities`
+
+Optional:
+
+- `connector_id` (String) ConnectorID is id of registered OIDC connector, e.g. 'google-example.com'
+- `samlSingleLogoutUrl` (String) SAMLSingleLogoutURL is the SAML Single log-out URL to initiate SAML SLO (single log-out), if applicable.
+- `user_id` (String) UserID is the ID of the identity. Some connectors like GitHub have a unique ID apart from the username.
+- `username` (String) Username is username supplied by external identity provider
+
+
+### Nested Schema for `spec.saml_identities`
+
+Optional:
+
+- `connector_id` (String) ConnectorID is id of registered OIDC connector, e.g. 'google-example.com'
+- `samlSingleLogoutUrl` (String) SAMLSingleLogoutURL is the SAML Single log-out URL to initiate SAML SLO (single log-out), if applicable.
+- `user_id` (String) UserID is the ID of the identity. Some connectors like GitHub have a unique ID apart from the username.
+- `username` (String) Username is username supplied by external identity provider + + + +### Nested Schema for `status` + +Optional: + +- `mfa_weakest_device` (Number) mfa_weakest_device reflects what the system knows about the user's weakest MFA device. Note that this is a "best effort" property, in that it can be UNSPECIFIED. +- `password_state` (Number) password_state reflects what the system knows about the user's password. Note that this is a "best effort" property, in that it can be UNSPECIFIED for users who were created before this property was introduced and didn't perform any password-related activity since then. See RFD 0159 for details. Do NOT use this value for authentication purposes! + diff --git a/docs/pages/reference/terraform-provider/data-sources/workload_identity.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/workload_identity.mdx similarity index 98% rename from docs/pages/reference/terraform-provider/data-sources/workload_identity.mdx rename to docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/workload_identity.mdx index c02037ba98801..22dde51499ff5 100644 --- a/docs/pages/reference/terraform-provider/data-sources/workload_identity.mdx +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/data-sources/workload_identity.mdx @@ -7,6 +7,9 @@ description: This page describes the supported values of the teleport_workload_i {/*Auto-generated file. Do not edit.*/} {/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} +This page describes the supported values of the `teleport_workload_identity` data source of the +Teleport Terraform provider. 
+
 diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/access_list.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/access_list.mdx
new file mode 100644
index 0000000000000..4849076c568a2
--- /dev/null
+++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/access_list.mdx
@@ -0,0 +1,206 @@
+---
+title: Reference for the teleport_access_list Terraform resource
+sidebar_label: access_list
+description: This page describes the supported values of the teleport_access_list resource of the Teleport Terraform provider.
+---
+
+{/*Auto-generated file. Do not edit.*/}
+{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/}
+
+This page describes the supported values of the teleport_access_list resource of the Teleport Terraform provider.
+
+
+
+## Example Usage
+
+```hcl
+resource "teleport_access_list" "crane-operation" {
+  header = {
+    version = "v1"
+    metadata = {
+      name = "crane-operation"
+      labels = {
+        example = "yes"
+      }
+    }
+  }
+  spec = {
+    description = "Used to grant access to the crane."
+    owners = [
+      {
+        name = "gru"
+        description = "The supervillain."
+      }
+    ]
+    membership_requires = {
+      roles = ["minion"]
+    }
+    ownership_requires = {
+      roles = ["supervillain"]
+    }
+    grants = {
+      roles = ["crane-operator"]
+      traits = [{
+        key = "allowed-machines"
+        values = ["crane", "forklift"]
+      }]
+    }
+    title = "Crane operation"
+    audit = {
+      recurrence = {
+        frequency = 3 # audit every 3 months
+        day_of_month = 15 # audits happen on the 15th day of the month. Possible values are 1, 15, and 31.
+      }
+    }
+  }
+}
+```
+
+{/* schema generated by tfplugindocs */}
+## Schema
+
+### Optional
+
+- `header` (Attributes) header is the header for the resource. (see [below for nested schema](#nested-schema-for-header))
+- `spec` (Attributes) spec is the specification for the Access List.
(see [below for nested schema](#nested-schema-for-spec)) + +### Nested Schema for `header` + +Required: + +- `version` (String) Version is the API version used to create the resource. It must be specified. Based on this version, Teleport will apply different defaults on resource creation or deletion. It must be an integer prefixed by "v". For example: `v1` + +Optional: + +- `kind` (String) kind is a resource kind. +- `metadata` (Attributes) metadata is resource metadata. (see [below for nested schema](#nested-schema-for-headermetadata)) +- `sub_kind` (String) sub_kind is an optional resource sub kind, used in some resources. + +### Nested Schema for `header.metadata` + +Required: + +- `name` (String) name is an object name. + +Optional: + +- `description` (String) description is object description. +- `expires` (String) expires is a global expiry time header can be set on any resource in the system. +- `labels` (Map of String) labels is a set of labels. +- `namespace` (String) namespace is object namespace. The field should be called "namespace" when it returns in Teleport 2.4. +- `revision` (String) revision is an opaque identifier which tracks the versions of a resource over time. Clients should ignore and not alter its value but must return the revision in any updates of a resource. + + + +### Nested Schema for `spec` + +Required: + +- `owners` (Attributes List) owners is a list of owners of the Access List. (see [below for nested schema](#nested-schema-for-specowners)) + +Optional: + +- `audit` (Attributes) audit describes the frequency that this Access List must be audited. (see [below for nested schema](#nested-schema-for-specaudit)) +- `description` (String) description is an optional plaintext description of the Access List. +- `grants` (Attributes) grants describes the access granted by membership to this Access List. 
(see [below for nested schema](#nested-schema-for-specgrants)) +- `membership_requires` (Attributes) membership_requires describes the requirements for a user to be a member of the Access List. For a membership to an Access List to be effective, the user must meet the requirements of Membership_requires and must be in the members list. (see [below for nested schema](#nested-schema-for-specmembership_requires)) +- `owner_grants` (Attributes) owner_grants describes the access granted by owners to this Access List. (see [below for nested schema](#nested-schema-for-specowner_grants)) +- `ownership_requires` (Attributes) ownership_requires describes the requirements for a user to be an owner of the Access List. For ownership of an Access List to be effective, the user must meet the requirements of ownership_requires and must be in the owners list. (see [below for nested schema](#nested-schema-for-specownership_requires)) +- `title` (String) title is a plaintext short description of the Access List. +- `type` (String) type can be an empty string which denotes a regular Access List, "scim" which represents an Access List created from SCIM group or "static" for Access Lists managed by IaC tools. + +### Nested Schema for `spec.owners` + +Optional: + +- `description` (String) description is the plaintext description of the owner and why they are an owner. +- `membership_kind` (Number) membership_kind describes the type of membership, either `MEMBERSHIP_KIND_USER` or `MEMBERSHIP_KIND_LIST`. +- `name` (String) name is the username of the owner. + + +### Nested Schema for `spec.audit` + +Optional: + +- `next_audit_date` (String) next_audit_date is when the next audit date should be done by. +- `notifications` (Attributes) notifications is the configuration for notifying users. 
(see [below for nested schema](#nested-schema-for-specauditnotifications)) +- `recurrence` (Attributes) recurrence is the recurrence definition (see [below for nested schema](#nested-schema-for-specauditrecurrence)) + +### Nested Schema for `spec.audit.notifications` + +Optional: + +- `start` (String) start specifies when to start notifying users that the next audit date is coming up. + + +### Nested Schema for `spec.audit.recurrence` + +Optional: + +- `day_of_month` (Number) day_of_month is the day of month that reviews will be scheduled on. Supported values are 0, 1, 15, and 31. +- `frequency` (Number) frequency is the frequency of reviews. This represents the period in months between two reviews. Supported values are 0, 1, 3, 6, and 12. + + + +### Nested Schema for `spec.grants` + +Optional: + +- `roles` (List of String) roles are the roles that are granted to users who are members of the Access List. +- `traits` (Attributes List) traits are the traits that are granted to users who are members of the Access List. (see [below for nested schema](#nested-schema-for-specgrantstraits)) + +### Nested Schema for `spec.grants.traits` + +Optional: + +- `key` (String) key is the name of the trait. +- `values` (List of String) values is the list of trait values. + + + +### Nested Schema for `spec.membership_requires` + +Optional: + +- `roles` (List of String) roles are the user roles that must be present for the user to obtain access. +- `traits` (Attributes List) traits are the traits that must be present for the user to obtain access. (see [below for nested schema](#nested-schema-for-specmembership_requirestraits)) + +### Nested Schema for `spec.membership_requires.traits` + +Optional: + +- `key` (String) key is the name of the trait. +- `values` (List of String) values is the list of trait values. + + + +### Nested Schema for `spec.owner_grants` + +Optional: + +- `roles` (List of String) roles are the roles that are granted to users who are members of the Access List. 
+- `traits` (Attributes List) traits are the traits that are granted to users who are members of the Access List. (see [below for nested schema](#nested-schema-for-specowner_grantstraits)) + +### Nested Schema for `spec.owner_grants.traits` + +Optional: + +- `key` (String) key is the name of the trait. +- `values` (List of String) values is the list of trait values. + + + +### Nested Schema for `spec.ownership_requires` + +Optional: + +- `roles` (List of String) roles are the user roles that must be present for the user to obtain access. +- `traits` (Attributes List) traits are the traits that must be present for the user to obtain access. (see [below for nested schema](#nested-schema-for-specownership_requirestraits)) + +### Nested Schema for `spec.ownership_requires.traits` + +Optional: + +- `key` (String) key is the name of the trait. +- `values` (List of String) values is the list of trait values. diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/access_list_member.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/access_list_member.mdx new file mode 100644 index 0000000000000..5091a6bdfaf96 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/access_list_member.mdx @@ -0,0 +1,141 @@ +--- +title: Reference for the teleport_access_list_member Terraform resource +sidebar_label: access_list_member +description: This page describes the supported values of the teleport_access_list_member resource of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the teleport_access_list_member resource of the Teleport Terraform provider. 
+ + + + +## Example Usage + +```hcl +resource "teleport_access_list" "characters" { + header = { + metadata = { + name = "crane-operation" + } + } + spec = { + type = "static" # the access list must be of type "static" to manage its members with Terraform + title = "Characters" + description = "The list of game characters." + owners = [ + { name = "dungeon_master" }, + ] + grants = { + roles = ["dungeon_access"] + } + } +} + +# User member: + +resource "teleport_access_list_member" "fighter" { + header = { + version = "v1" + metadata = { + name = "fighter" # Teleport user name + } + } + spec = { + access_list = teleport_access_list.characters.id + membership_kind = 1 # 1 for "MEMBERSHIP_KIND_USER", 2 for "MEMBERSHIP_KIND_LIST" + } +} + +# Nested Access List member: + +resource "teleport_access_list" "npcs" { + header = { + metadata = { + name = "npcs" + } + } + spec = { + title = "NPCs" + description = "Non-player characters." + owners = [ + { name = "dungeon_master" } + ] + grants = { + roles = ["dungeon_access"] + } + audit = { + recurrence = { + frequency = 3 + day_of_month = 15 + } + } + } +} + +resource "teleport_access_list_member" "npcs" { + header = { + version = "v1" + metadata = { + name = teleport_access_list.npcs.id + } + } + spec = { + access_list = teleport_access_list.characters.id + membership_kind = 2 # 1 for "MEMBERSHIP_KIND_USER", 2 for "MEMBERSHIP_KIND_LIST" + } +} +``` + +{/* schema generated by tfplugindocs */} +## Schema + +### Optional + +- `header` (Attributes) header is the header for the resource. (see [below for nested schema](#nested-schema-for-header)) +- `spec` (Attributes) spec is the specification for the Access List member. (see [below for nested schema](#nested-schema-for-spec)) + +### Nested Schema for `header` + +Required: + +- `version` (String) Version is the API version used to create the resource. It must be specified. Based on this version, Teleport will apply different defaults on resource creation or deletion. 
It must be an integer prefixed by "v". For example: `v1` + +Optional: + +- `kind` (String) kind is a resource kind. +- `metadata` (Attributes) metadata is resource metadata. (see [below for nested schema](#nested-schema-for-headermetadata)) +- `sub_kind` (String) sub_kind is an optional resource sub kind, used in some resources. + +### Nested Schema for `header.metadata` + +Required: + +- `name` (String) name is an object name. + +Optional: + +- `description` (String) description is object description. +- `expires` (String) expires is a global expiry time header can be set on any resource in the system. +- `labels` (Map of String) labels is a set of labels. +- `namespace` (String) namespace is object namespace. The field should be called "namespace" when it returns in Teleport 2.4. +- `revision` (String) revision is an opaque identifier which tracks the versions of a resource over time. Clients should ignore and not alter its value but must return the revision in any updates of a resource. + + + +### Nested Schema for `spec` + +Required: + +- `access_list` (String) associated Access List +- `membership_kind` (Number) membership_kind describes the type of membership, either `MEMBERSHIP_KIND_USER` or `MEMBERSHIP_KIND_LIST`. + +Optional: + +- `added_by` (String) added_by is the user that added this user to the Access List. +- `expires` (String) expires is when the user's membership to the Access List expires. +- `joined` (String) joined is when the user joined the Access List. +- `name` (String) name is the name of the member of the Access List. +- `reason` (String) reason is the reason this user was added to the Access List. 
diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/access_monitoring_rule.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/access_monitoring_rule.mdx new file mode 100644 index 0000000000000..2510cbc7cc919 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/access_monitoring_rule.mdx @@ -0,0 +1,144 @@ +--- +title: Reference for the teleport_access_monitoring_rule Terraform resource +sidebar_label: access_monitoring_rule +description: This page describes the supported values of the teleport_access_monitoring_rule resource of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the teleport_access_monitoring_rule resource of the Teleport Terraform provider. + + + + +## Example Usage + +```hcl +# Teleport Access Monitoring Rule enables notification routing or automatic +# review rules based on specific conditions. +# +# This example automatically approves access requests for the role `your_role_name` +# on Mondays and Tuesdays, and sends notifications to `#your-slack-channel`. 
+ +resource "teleport_access_monitoring_rule" "test" { + version = "v1" + metadata = { + name = "test" + } + spec = { + subjects = ["access_request"] + condition = "access_request.spec.roles.contains(\"your_role_name\")" + desired_state = "reviewed" + notification = { + name = "slack" + recipients = ["#your-slack-channel"] + } + + automatic_review = { + integration = "builtin" + decision = "APPROVED" + } + schedules = { + default = { + time = { + timezone = "America/Los_Angeles" + shifts = [ + { + weekday : "Monday" + start : "00:00" + end : "23:59" + }, + { + weekday : "Tuesday" + start : "00:00" + end : "23:59" + }, + ] + } + } + } + } +} +``` + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `spec` (Attributes) Spec is an AccessMonitoringRule specification (see [below for nested schema](#nested-schema-for-spec)) +- `version` (String) version is version + +### Optional + +- `metadata` (Attributes) metadata is the rules's metadata. (see [below for nested schema](#nested-schema-for-metadata)) +- `sub_kind` (String) sub_kind is an optional resource sub kind, used in some resources + +### Nested Schema for `spec` + +Required: + +- `subjects` (List of String) subjects the rule operates on, can be a resource kind or a particular resource property. + +Optional: + +- `automatic_review` (Attributes) automatic_review defines automatic review configurations for Access Requests. Both notification and automatic_review may be set within the same access_monitoring_rule. If both fields are set, the rule will trigger both notifications and automatic reviews for the same set of access events. Separate plugins may be used if both notifications and automatic_reviews is set. (see [below for nested schema](#nested-schema-for-specautomatic_review)) +- `condition` (String) condition is a predicate expression that operates on the specified subject resources, and determines whether the subject will be moved into desired state. 
+- `desired_state` (String) desired_state defines the desired state of the subject. For Access Request subjects, the desired_state may be set to `reviewed` to indicate that the Access Request should be automatically reviewed. +- `notification` (Attributes) notification defines the plugin configuration for notifications if rule is triggered. Both notification and automatic_review may be set within the same access_monitoring_rule. If both fields are set, the rule will trigger both notifications and automatic reviews for the same set of access events. Separate plugins may be used if both notifications and automatic_reviews is set. (see [below for nested schema](#nested-schema-for-specnotification)) +- `schedules` (Attributes Map) schedules specifies a map of schedules that can be used to configure the access monitoring rule conditions. Available in Teleport v18.2.8 or higher. (see [below for nested schema](#nested-schema-for-specschedules)) +- `states` (List of String) states are the desired state which the monitoring rule is attempting to bring the subjects matching the condition to. + +### Nested Schema for `spec.automatic_review` + +Optional: + +- `decision` (String) decision specifies the proposed state of the access review. This can be either 'APPROVED' or 'DENIED'. +- `integration` (String) integration is the name of the integration that is responsible for monitoring the rule. Set this value to `builtin` to monitor the rule with Teleport. + + +### Nested Schema for `spec.notification` + +Optional: + +- `name` (String) name is the name of the plugin to which this configuration should apply. +- `recipients` (List of String) recipients is the list of recipients the plugin should notify. + + +### Nested Schema for `spec.schedules` + +Optional: + +- `time` (Attributes) TimeSchedule specifies an in-line schedule. 
(see [below for nested schema](#nested-schema-for-specschedulestime)) + +### Nested Schema for `spec.schedules.time` + +Optional: + +- `shifts` (Attributes List) Shifts contains a set of shifts that make up the schedule. (see [below for nested schema](#nested-schema-for-specschedulestimeshifts)) +- `timezone` (String) Timezone specifies the schedule timezone. This field is optional and defaults to "UTC". Accepted values use timezone locations as defined in the IANA Time Zone Database, such as "America/Los_Angeles", "Europe/Lisbon", or "Asia/Singapore". See https://data.iana.org/time-zones/tzdb/zone1970.tab for a list of supported values. + +### Nested Schema for `spec.schedules.time.shifts` + +Optional: + +- `end` (String) End specifies the end time in the format HH:MM, e.g., "12:30". +- `start` (String) Start specifies the start time in the format HH:MM, e.g., "12:30". +- `weekday` (String) Weekday specifies the day of the week, e.g., "Sunday", "Monday", "Tuesday". + + + + + +### Nested Schema for `metadata` + +Required: + +- `name` (String) name is an object name. + +Optional: + +- `description` (String) description is object description. +- `expires` (String) expires is a global expiry time header can be set on any resource in the system. +- `labels` (Map of String) labels is a set of labels. diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/app.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/app.mdx new file mode 100644 index 0000000000000..cec7314b479ba --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/app.mdx @@ -0,0 +1,167 @@ +--- +title: Reference for the teleport_app Terraform resource +sidebar_label: app +description: This page describes the supported values of the teleport_app resource of the Teleport Terraform provider. +--- + +{/*Auto-generated file. 
Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the teleport_app resource of the Teleport Terraform provider. + + + + +## Example Usage + +```hcl +# Teleport App + +resource "teleport_app" "example" { + version = "v3" + metadata = { + name = "example" + description = "Test app" + labels = { + "teleport.dev/origin" = "dynamic" // This label is added on Teleport side by default + } + } + + spec = { + uri = "localhost:3000" + } +} +``` + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `version` (String) Version is the resource version. It must be specified. Supported values are:`v3`. + +### Optional + +- `metadata` (Attributes) Metadata is the app resource metadata. (see [below for nested schema](#nested-schema-for-metadata)) +- `spec` (Attributes) Spec is the app resource spec. (see [below for nested schema](#nested-schema-for-spec)) +- `sub_kind` (String) SubKind is an optional resource subkind. + +### Nested Schema for `metadata` + +Required: + +- `name` (String) Name is an object name + +Optional: + +- `description` (String) Description is object description +- `expires` (String) Expires is a global expiry time header can be set on any resource in the system. +- `labels` (Map of String) Labels is a set of labels + + +### Nested Schema for `spec` + +Optional: + +- `aws` (Attributes) AWS contains additional options for AWS applications. (see [below for nested schema](#nested-schema-for-specaws)) +- `cloud` (String) Cloud identifies the cloud instance the app represents. +- `cors` (Attributes) CORSPolicy defines the Cross-Origin Resource Sharing settings for the app. (see [below for nested schema](#nested-schema-for-speccors)) +- `dynamic_labels` (Attributes Map) DynamicLabels are the app's command labels. 
(see [below for nested schema](#nested-schema-for-specdynamic_labels)) +- `identity_center` (Attributes) IdentityCenter encapsulates AWS identity-center specific information. Only valid for Identity Center account apps. (see [below for nested schema](#nested-schema-for-specidentity_center)) +- `insecure_skip_verify` (Boolean) InsecureSkipVerify disables app's TLS certificate verification. +- `integration` (String) Integration is the integration name that must be used to access this Application. Only applicable to AWS App Access. If present, the Application must use the Integration's credentials instead of ambient credentials to access Cloud APIs. +- `mcp` (Attributes) MCP contains MCP server related configurations. (see [below for nested schema](#nested-schema-for-specmcp)) +- `public_addr` (String) PublicAddr is the public address the application is accessible at. +- `required_app_names` (List of String) RequiredAppNames is a list of app names that are required for this app to function. Any app listed here will be part of the authentication redirect flow and authenticate alongside this app. +- `rewrite` (Attributes) Rewrite is a list of rewriting rules to apply to requests and responses. (see [below for nested schema](#nested-schema-for-specrewrite)) +- `tcp_ports` (Attributes List) TCPPorts is a list of ports and port ranges that an app agent can forward connections to. Only applicable to TCP App Access. If this field is not empty, URI is expected to contain no port number and start with the tcp protocol. (see [below for nested schema](#nested-schema-for-spectcp_ports)) +- `uri` (String) URI is the web app endpoint. +- `use_any_proxy_public_addr` (Boolean) UseAnyProxyPublicAddr will rebuild this app's fqdn based on the proxy public addr that the request originated from. This should be true if your proxy has multiple proxy public addrs and you want the app to be accessible from any of them.
If `public_addr` is explicitly set in the app spec, setting this value to true will overwrite that public address in the web UI. +- `user_groups` (List of String) UserGroups is a list of user group IDs that this app is associated with. + +### Nested Schema for `spec.aws` + +Optional: + +- `external_id` (String) ExternalID is the AWS External ID used when assuming roles in this app. +- `roles_anywhere_profile` (Attributes) RolesAnywhereProfile contains the IAM Roles Anywhere fields associated with this Application. These fields are set when performing the synchronization of AWS IAM Roles Anywhere Profiles into Teleport Apps. (see [below for nested schema](#nested-schema-for-specawsroles_anywhere_profile)) + +### Nested Schema for `spec.aws.roles_anywhere_profile` + +Optional: + +- `accept_role_session_name` (Boolean) Whether this Roles Anywhere Profile accepts a custom role session name. When not supported, the AWS Session Name will be the X.509 certificate's serial number. When supported, the AWS Session Name will be the identity's username. This value comes from: https://docs.aws.amazon.com/rolesanywhere/latest/APIReference/API_ProfileDetail.html / acceptRoleSessionName +- `profile_arn` (String) ProfileARN is the AWS IAM Roles Anywhere Profile ARN that originated this Teleport App. + + + +### Nested Schema for `spec.cors` + +Optional: + +- `allow_credentials` (Boolean) allow_credentials indicates whether credentials are allowed. +- `allowed_headers` (List of String) allowed_headers specifies which headers can be used when accessing the app. +- `allowed_methods` (List of String) allowed_methods specifies which methods are allowed when accessing the app. +- `allowed_origins` (List of String) allowed_origins specifies which origins are allowed to access the app. +- `exposed_headers` (List of String) exposed_headers indicates which headers are made available to scripts via the browser.
+- `max_age` (Number) max_age indicates how long (in seconds) the results of a preflight request can be cached. + + +### Nested Schema for `spec.dynamic_labels` + +Optional: + +- `command` (List of String) Command is a command to run +- `period` (String) Period is a time between command runs +- `result` (String) Result captures standard output + + +### Nested Schema for `spec.identity_center` + +Optional: + +- `account_id` (String) Account ID is the AWS-assigned ID of the account +- `permission_sets` (Attributes List) PermissionSets lists the available permission sets on the given account (see [below for nested schema](#nested-schema-for-specidentity_centerpermission_sets)) + +### Nested Schema for `spec.identity_center.permission_sets` + +Optional: + +- `arn` (String) ARN is the fully-formed ARN of the Permission Set. +- `assignment_name` (String) AssignmentID is the ID of the Teleport Account Assignment resource that represents this permission being assigned on the enclosing Account. +- `name` (String) Name is the human-readable name of the Permission Set. + + + +### Nested Schema for `spec.mcp` + +Optional: + +- `args` (List of String) Args to execute with the command. +- `command` (String) Command to launch stdio-based MCP servers. +- `run_as_host_user` (String) RunAsHostUser is the host user account under which the command will be executed. Required for stdio-based MCP servers. + + +### Nested Schema for `spec.rewrite` + +Optional: + +- `headers` (Attributes List) Headers is a list of headers to inject when passing the request over to the application. (see [below for nested schema](#nested-schema-for-specrewriteheaders)) +- `jwt_claims` (String) JWTClaims configures whether roles/traits are included in the JWT token. +- `redirect` (List of String) Redirect defines a list of hosts which will be rewritten to the public address of the application if they occur in the "Location" header. 
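
As a sketch of how the `rewrite` settings above fit together; the app name, URI, redirect host, and header values below are hypothetical, not values prescribed by this provider:

```hcl
resource "teleport_app" "dashboard" {
  version = "v3"
  metadata = {
    name = "dashboard"
  }

  spec = {
    uri = "localhost:8080"
    rewrite = {
      # Rewrite "Location" response headers pointing at these hosts
      # to the app's public address.
      redirect = ["localhost"]
      # Inject a header into requests forwarded to the application.
      headers = [
        { name = "X-Teleport-App", value = "dashboard" },
      ]
    }
  }
}
```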
+ +### Nested Schema for `spec.rewrite.headers` + +Optional: + +- `name` (String) Name is the http header name. +- `value` (String) Value is the http header value. + + + +### Nested Schema for `spec.tcp_ports` + +Optional: + +- `end_port` (Number) EndPort describes the end of the range, inclusive. If set, it must be between 2 and 65535 and be greater than Port when describing a port range. When omitted or set to zero, it signifies that the port range defines a single port. +- `port` (Number) Port describes the start of the range. It must be between 1 and 65535. diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/auth_preference.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/auth_preference.mdx new file mode 100644 index 0000000000000..abbdc638183f9 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/auth_preference.mdx @@ -0,0 +1,155 @@ +--- +title: Reference for the teleport_auth_preference Terraform resource +sidebar_label: auth_preference +description: This page describes the supported values of the teleport_auth_preference resource of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the teleport_auth_preference resource of the Teleport Terraform provider. 
+ + + + +## Example Usage + +```hcl +# AuthPreference resource + +resource "teleport_auth_preference" "example" { + version = "v2" + metadata = { + description = "Auth preference" + labels = { + "example" = "yes" + "teleport.dev/origin" = "dynamic" // This label is added on Teleport side by default + } + } + + spec = { + disconnect_expired_cert = true + } +} +``` + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `spec` (Attributes) Spec is an AuthPreference specification (see [below for nested schema](#nested-schema-for-spec)) +- `version` (String) Version is the resource version. It must be specified. Supported values are: `v2`. + +### Optional + +- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) +- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources + +### Nested Schema for `spec` + +Optional: + +- `allow_headless` (Boolean) AllowHeadless enables/disables headless support. Headless authentication requires Webauthn to work. Defaults to true if Webauthn is configured, and to false otherwise. +- `allow_local_auth` (Boolean) AllowLocalAuth is true if local authentication is enabled. +- `allow_passwordless` (Boolean) AllowPasswordless enables/disables passwordless support. Passwordless requires Webauthn to work. Defaults to true if Webauthn is configured, and to false otherwise. +- `connector_name` (String) ConnectorName is the name of the OIDC or SAML connector. If this value is not set, the first connector in the backend will be used. +- `default_session_ttl` (String) DefaultSessionTTL is the TTL to use for user certs when an explicit TTL is not requested. +- `device_trust` (Attributes) DeviceTrust holds settings related to trusted device verification. Requires Teleport Enterprise.
(see [below for nested schema](#nested-schema-for-specdevice_trust)) +- `disconnect_expired_cert` (Boolean) DisconnectExpiredCert provides the disconnect expired certificate setting - if true, connections with expired client certificates will get disconnected +- `hardware_key` (Attributes) HardwareKey are the settings for hardware key support. (see [below for nested schema](#nested-schema-for-spechardware_key)) +- `idp` (Attributes) IDP is a set of options related to accessing IdPs within Teleport. Requires Teleport Enterprise. (see [below for nested schema](#nested-schema-for-specidp)) +- `locking_mode` (String) LockingMode is the cluster-wide locking mode default. +- `message_of_the_day` (String) +- `okta` (Attributes) Okta is a set of options related to the Okta service in Teleport. Requires Teleport Enterprise. (see [below for nested schema](#nested-schema-for-specokta)) +- `require_session_mfa` (Number) RequireMFAType is the type of MFA requirement enforced for this cluster. 0 is "OFF", 1 is "SESSION", 2 is "SESSION_AND_HARDWARE_KEY", 3 is "HARDWARE_KEY_TOUCH", 4 is "HARDWARE_KEY_PIN", 5 is "HARDWARE_KEY_TOUCH_AND_PIN". +- `second_factor` (String) SecondFactor is the type of multi-factor. Deprecated: Prefer using SecondFactors instead. +- `second_factors` (List of Number) SecondFactors is a list of supported multi-factor types. 1 is "otp", 2 is "webauthn", 3 is "sso". If unspecified, the current default value is [1], or ["otp"]. +- `signature_algorithm_suite` (Number) SignatureAlgorithmSuite is the configured signature algorithm suite for the cluster. If unspecified, the current default value is "legacy". 1 is "legacy", 2 is "balanced-v1", 3 is "fips-v1", 4 is "hsm-v1". +- `stable_unix_user_config` (Attributes) StableUnixUserConfig contains the cluster-wide configuration for stable UNIX users. (see [below for nested schema](#nested-schema-for-specstable_unix_user_config)) +- `type` (String) Type is the type of authentication.
+- `u2f` (Attributes) U2F are the settings for the U2F device. (see [below for nested schema](#nested-schema-for-specu2f)) +- `webauthn` (Attributes) Webauthn are the settings for server-side Web Authentication support. (see [below for nested schema](#nested-schema-for-specwebauthn)) + +### Nested Schema for `spec.device_trust` + +Optional: + +- `auto_enroll` (Boolean) Enable device auto-enroll. Auto-enroll lets any user issue a device enrollment token for a known device that is not already enrolled. `tsh` takes advantage of auto-enroll to automatically enroll devices on user login, when appropriate. The effective cluster Mode still applies: AutoEnroll=true is meaningless if Mode="off". +- `ekcert_allowed_cas` (List of String) Allow list of EKCert CAs in PEM format. If present, only TPM devices that present an EKCert that is signed by a CA specified here may be enrolled (existing enrollments are unchanged). If not present, then the CA of TPM EKCerts will not be checked during enrollment, this allows any device to enroll. +- `mode` (String) Mode of verification for trusted devices. The following modes are supported: - "off": disables both device authentication and authorization. - "optional": allows both device authentication and authorization, but doesn't enforce the presence of device extensions for sensitive endpoints. - "required": enforces the presence of device extensions for sensitive endpoints. - "required-for-humans": enforces the presence of device extensions for sensitive endpoints, for human users only (bots are exempt). Mode is always "off" for OSS. Defaults to "optional" for Enterprise. + + +### Nested Schema for `spec.hardware_key` + +Optional: + +- `pin_cache_ttl` (String) PinCacheTTL is the amount of time in nanoseconds that Teleport clients will cache the user's PIV PIN when hardware key PIN policy is enabled. +- `piv_slot` (String) PIVSlot is a PIV slot that Teleport clients should use instead of the default based on private key policy. 
For example, "9a" or "9e". +- `serial_number_validation` (Attributes) SerialNumberValidation holds settings for hardware key serial number validation. By default, serial number validation is disabled. (see [below for nested schema](#nested-schema-for-spechardware_keyserial_number_validation)) + +### Nested Schema for `spec.hardware_key.serial_number_validation` + +Optional: + +- `enabled` (Boolean) Enabled indicates whether hardware key serial number validation is enabled. +- `serial_number_trait_name` (String) SerialNumberTraitName is an optional custom user trait name for hardware key serial numbers to replace the default: "hardware_key_serial_numbers". Note: Values for this user trait should be a comma-separated list of serial numbers, or a list of comma-separated lists, e.g., ["123", "345,678"]. + + + +### Nested Schema for `spec.idp` + +Optional: + +- `saml` (Attributes) SAML are options related to the Teleport SAML IdP. (see [below for nested schema](#nested-schema-for-specidpsaml)) + +### Nested Schema for `spec.idp.saml` + +Optional: + +- `enabled` (Boolean) Enabled is set to true if this option allows access to the Teleport SAML IdP. + + + +### Nested Schema for `spec.okta` + +Optional: + +- `sync_period` (String) SyncPeriod is the duration between synchronization calls in nanoseconds. + + +### Nested Schema for `spec.stable_unix_user_config` + +Optional: + +- `enabled` (Boolean) Enabled signifies that (UNIX) Teleport SSH hosts should obtain a UID from the control plane if they're about to provision a host user with no other configured UID. +- `first_uid` (Number) FirstUid is the start of the range of UIDs for autoprovisioned host users. The range is inclusive on both ends, so the specified UID can be assigned. +- `last_uid` (Number) LastUid is the end of the range of UIDs for autoprovisioned host users. The range is inclusive on both ends, so the specified UID can be assigned.
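
The stable UNIX user settings above can be sketched as follows; the UID range shown is illustrative and must be chosen to fit your environment:

```hcl
resource "teleport_auth_preference" "example" {
  version = "v2"

  spec = {
    stable_unix_user_config = {
      enabled   = true
      # Both ends of the UID range are inclusive.
      first_uid = 90000
      last_uid  = 95000
    }
  }
}
```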
+ + +### Nested Schema for `spec.u2f` + +Optional: + +- `app_id` (String) AppID returns the application ID for universal mult-factor. +- `device_attestation_cas` (List of String) DeviceAttestationCAs contains the trusted attestation CAs for U2F devices. +- `facets` (List of String) Facets returns the facets for universal mult-factor. Deprecated: Kept for backwards compatibility reasons, but Facets have no effect since Teleport v10, when Webauthn replaced the U2F implementation. + + +### Nested Schema for `spec.webauthn` + +Optional: + +- `attestation_allowed_cas` (List of String) Allow list of device attestation CAs in PEM format. If present, only devices whose attestation certificates match the certificates specified here may be registered (existing registrations are unchanged). If supplied in conjunction with AttestationDeniedCAs, then both conditions need to be true for registration to be allowed (the device MUST match an allowed CA and MUST NOT match a denied CA). By default all devices are allowed. +- `attestation_denied_cas` (List of String) Deny list of device attestation CAs in PEM format. If present, only devices whose attestation certificates don't match the certificates specified here may be registered (existing registrations are unchanged). If supplied in conjunction with AttestationAllowedCAs, then both conditions need to be true for registration to be allowed (the device MUST match an allowed CA and MUST NOT match a denied CA). By default no devices are denied. +- `rp_id` (String) RPID is the ID of the Relying Party. It should be set to the domain name of the Teleport installation. IMPORTANT: RPID must never change in the lifetime of the cluster, because it's recorded in the registration data on the WebAuthn device. If the RPID changes, all existing WebAuthn key registrations will become invalid and all users who use WebAuthn as the multi-factor will need to re-register. 
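
A minimal sketch combining `second_factors` with the `webauthn` settings above; the domain name is a placeholder, and `rp_id` must match your own Teleport installation:

```hcl
resource "teleport_auth_preference" "example" {
  version = "v2"

  spec = {
    # 2 is "webauthn" in the second_factors enum documented above.
    second_factors = [2]
    webauthn = {
      # Must never change for the lifetime of the cluster.
      rp_id = "teleport.example.com"
    }
  }
}
```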
+ + + +### Nested Schema for `metadata` + +Optional: + +- `description` (String) Description is the object description +- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. +- `labels` (Map of String) Labels is a set of labels diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/autoupdate_config.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/autoupdate_config.mdx new file mode 100644 index 0000000000000..0733447e4ea0f --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/autoupdate_config.mdx @@ -0,0 +1,115 @@ +--- +title: Reference for the teleport_autoupdate_config Terraform resource +sidebar_label: autoupdate_config +description: This page describes the supported values of the teleport_autoupdate_config resource of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the teleport_autoupdate_config resource of the Teleport Terraform provider.
+ + + + +## Example Usage + +```hcl +resource "teleport_autoupdate_config" "test" { + version = "v1" + spec = { + tools = { + mode = "enabled" + } + agents = { + mode = "enabled" + strategy = "halt-on-error" + schedules = { + regular = [ + { + name = "dev" + days = ["Mon", "Tue", "Wed", "Thu"] + start_hour : 4 + }, + { + name = "staging" + days = ["Mon", "Tue", "Wed", "Thu"] + start_hour : 14 + }, + { + name = "prod" + days = ["Mon", "Tue", "Wed", "Thu"] + start_hour : 14 + wait_hours : 24 + }, + ] + } + } + } +} +``` + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `spec` (Attributes) (see [below for nested schema](#nested-schema-for-spec)) +- `version` (String) + +### Optional + +- `metadata` (Attributes) (see [below for nested schema](#nested-schema-for-metadata)) +- `sub_kind` (String) + +### Nested Schema for `spec` + +Optional: + +- `agents` (Attributes) (see [below for nested schema](#nested-schema-for-specagents)) +- `tools` (Attributes) (see [below for nested schema](#nested-schema-for-spectools)) + +### Nested Schema for `spec.agents` + +Optional: + +- `maintenance_window_duration` (String) maintenance_window_duration is the maintenance window duration. This can only be set if `strategy` is "time-based". Once the window is over, the group transitions to the done state. Existing agents won't be updated until the next maintenance window. +- `mode` (String) mode specifies whether agent autoupdates are enabled, disabled, or paused. +- `schedules` (Attributes) schedules specifies schedules for updates of grouped agents. (see [below for nested schema](#nested-schema-for-specagentsschedules)) +- `strategy` (String) strategy to use for updating the agents. + +### Nested Schema for `spec.agents.schedules` + +Optional: + +- `regular` (Attributes List) regular schedules for non-critical versions. 
(see [below for nested schema](#nested-schema-for-specagentsschedulesregular)) + +### Nested Schema for `spec.agents.schedules.regular` + +Optional: + +- `canary_count` (Number) canary_count is the number of canary agents that will be updated before the whole group is updated. When set to 0, the group does not enter the canary phase. This number is capped at 5. This number must always be lower than the total number of agents in the group, else the rollout will be stuck. +- `days` (List of String) days when the update can run. Supported values are "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun", and "*". +- `name` (String) name of the group +- `start_hour` (Number) start_hour to initiate update +- `wait_hours` (Number) wait_hours after the last group succeeds before this group can run. This can only be used when the strategy is "halt-on-failure". This field must be positive. + + + + +### Nested Schema for `spec.tools` + +Optional: + +- `mode` (String) Mode defines the state of the client tools auto update. + + + +### Nested Schema for `metadata` + +Optional: + +- `description` (String) description is the object description. +- `expires` (String) expires is a global expiry time header that can be set on any resource in the system. +- `labels` (Map of String) labels is a set of labels. +- `name` (String) name is an object name. diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/autoupdate_version.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/autoupdate_version.mdx new file mode 100644 index 0000000000000..e121152aab7ce --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/autoupdate_version.mdx @@ -0,0 +1,79 @@ +--- +title: Reference for the teleport_autoupdate_version Terraform resource +sidebar_label: autoupdate_version +description: This page describes the supported values of the teleport_autoupdate_version resource of the Teleport Terraform provider.
+--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the teleport_autoupdate_version resource of the Teleport Terraform provider. + + + + +## Example Usage + +```hcl +resource "teleport_autoupdate_version" "test" { + version = "v1" + spec = { + tools = { + target_version = "1.2.3" + } + agents = { + start_version = "1.2.3" + target_version = "1.2.4" + schedule = "regular" + mode = "enabled" + } + } +} +``` + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `spec` (Attributes) (see [below for nested schema](#nested-schema-for-spec)) +- `version` (String) + +### Optional + +- `metadata` (Attributes) (see [below for nested schema](#nested-schema-for-metadata)) +- `sub_kind` (String) + +### Nested Schema for `spec` + +Optional: + +- `agents` (Attributes) (see [below for nested schema](#nested-schema-for-specagents)) +- `tools` (Attributes) (see [below for nested schema](#nested-schema-for-spectools)) + +### Nested Schema for `spec.agents` + +Optional: + +- `mode` (String) autoupdate_mode to use for the rollout +- `schedule` (String) schedule to use for the rollout +- `start_version` (String) start_version is the version used for newly installed agents before their update window. +- `target_version` (String) target_version is the version that all agents will update to during their update window. + + +### Nested Schema for `spec.tools` + +Optional: + +- `target_version` (String) TargetVersion specifies the semantic version required for tools to establish a connection with the cluster. Client tools after connection to the cluster going to be updated to this version automatically. + + + +### Nested Schema for `metadata` + +Optional: + +- `description` (String) description is object description. +- `expires` (String) expires is a global expiry time header can be set on any resource in the system. 
+- `labels` (Map of String) labels is a set of labels. +- `name` (String) name is an object name. diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/bot.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/bot.mdx new file mode 100644 index 0000000000000..f2388e93cf4e7 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/bot.mdx @@ -0,0 +1,81 @@ +--- +title: Reference for the teleport_bot Terraform resource +sidebar_label: bot +description: This page describes the supported values of the teleport_bot resource of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the teleport_bot resource of the Teleport Terraform provider. + + + + +## Example Usage + +```hcl +# Teleport Machine ID Bot creation example + +resource "teleport_bot" "example" { + metadata = { + name = "example" + } + + spec = { + roles = ["access"] + } +} +``` + +{/* schema generated by tfplugindocs */} +## Schema + +### Optional + +- `metadata` (Attributes) Requires provider v18.4.0 or newer. Common metadata that all resources share (see [below for nested schema](#nested-schema-for-metadata)) +- `name` (String, Deprecated) The name of the bot, i.e. the unprefixed User name +- `roles` (List of String, Deprecated) A list of roles the created bot should be allowed to assume via role impersonation. +- `spec` (Attributes) Requires provider v18.4.0 or newer. The configured properties of a bot. (see [below for nested schema](#nested-schema-for-spec)) +- `sub_kind` (String) Differentiates variations of the same kind. All resources should contain one, even if it is never populated. 
+- `token_id` (String, Sensitive, Deprecated) +- `token_ttl` (String, Deprecated) +- `traits` (Map of List of String, Deprecated) +- `version` (String) The version of the resource being represented. + +### Read-Only + +- `role_name` (String, Deprecated) The name of the generated bot role +- `status` (Attributes) Requires provider v18.4.0 or newer. Fields that are set by the server as results of operations. These should not be modified by users. (see [below for nested schema](#nested-schema-for-status)) +- `user_name` (String, Deprecated) The name of the generated bot user + +### Nested Schema for `metadata` + +Required: + +- `name` (String) Name is an object name + +Optional: + +- `description` (String) Description is the object description +- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. +- `labels` (Map of String) Labels is a set of labels + + +### Nested Schema for `spec` + +Optional: + +- `max_session_ttl` (String) The max session TTL value for the bot's internal role. Unless specified, bots may not request a value beyond the default maximum TTL of 12 hours. This value may not be larger than 7 days (168 hours). +- `roles` (List of String) A list of roles the created bot should be allowed to assume via role impersonation. +- `traits` (Map of List of String) The traits that will be associated with the bot for the purposes of role templating. + +Where multiple traits are specified with the same name, these will be merged by the server. + + +### Nested Schema for `status` + +Read-Only: + +- `role_name` (String) The name of the role associated with the bot. +- `user_name` (String) The name of the user associated with the bot.
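
Putting the `spec` fields above together, a sketch of a bot with traits; the bot name, role, and trait values are illustrative only:

```hcl
resource "teleport_bot" "ci" {
  metadata = {
    name = "ci"
  }

  spec = {
    roles = ["access"]
    # Traits feed role templates; traits with duplicate names
    # are merged by the server.
    traits = {
      logins = ["ubuntu"]
    }
  }
}
```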
diff --git a/docs/pages/reference/terraform-provider/resources/cluster_maintenance_config.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/cluster_maintenance_config.mdx similarity index 100% rename from docs/pages/reference/terraform-provider/resources/cluster_maintenance_config.mdx rename to docs/pages/reference/infrastructure-as-code/terraform-provider/resources/cluster_maintenance_config.mdx diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/cluster_networking_config.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/cluster_networking_config.mdx new file mode 100644 index 0000000000000..00c91aa643e35 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/cluster_networking_config.mdx @@ -0,0 +1,91 @@ +--- +title: Reference for the teleport_cluster_networking_config Terraform resource +sidebar_label: cluster_networking_config +description: This page describes the supported values of the teleport_cluster_networking_config resource of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the teleport_cluster_networking_config resource of the Teleport Terraform provider. 
+ + + + +## Example Usage + +```hcl +# Teleport Cluster Networking config + +resource "teleport_cluster_networking_config" "example" { + version = "v2" + metadata = { + description = "Networking config" + labels = { + "example" = "yes" + "teleport.dev/origin" = "dynamic" // This label is added on Teleport side by default + } + } + + spec = { + client_idle_timeout = "1h" + } +} +``` + +{/* schema generated by tfplugindocs */} +## Schema + +### Optional + +- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) +- `spec` (Attributes) Spec is a ClusterNetworkingConfig specification (see [below for nested schema](#nested-schema-for-spec)) +- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources +- `version` (String) Version is the resource version. It must be specified. Supported values are: `v2`. + +### Nested Schema for `metadata` + +Optional: + +- `description` (String) Description is the object description +- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. +- `labels` (Map of String) Labels is a set of labels + + +### Nested Schema for `spec` + +Optional: + +- `assist_command_execution_workers` (Number) AssistCommandExecutionWorkers determines the number of workers that will execute arbitrary Assist commands on servers in parallel. +- `case_insensitive_routing` (Boolean) CaseInsensitiveRouting causes proxies to use case-insensitive hostname matching. +- `client_idle_timeout` (String) ClientIdleTimeout sets the global cluster default setting for client idle timeouts. +- `idle_timeout_message` (String) ClientIdleTimeoutMessage is the message sent to the user when a connection times out. +- `keep_alive_count_max` (Number) KeepAliveCountMax is the number of keep-alive messages that can be missed before the server disconnects the connection to the client.
+- `keep_alive_interval` (String) KeepAliveInterval is the interval at which the server sends keep-alive messages to the client. +- `proxy_listener_mode` (Number) ProxyListenerMode is proxy listener mode used by Teleport Proxies. 0 is "separate"; 1 is "multiplex". +- `proxy_ping_interval` (String) ProxyPingInterval defines in which interval the TLS routing ping message should be sent. This is applicable only when using ping-wrapped connections, regular TLS routing connections are not affected. +- `routing_strategy` (Number) RoutingStrategy determines the strategy used to route to nodes. 0 is "unambiguous_match"; 1 is "most_recent". +- `session_control_timeout` (String) SessionControlTimeout is the session control lease expiry and defines the upper limit of how long a node may be out of contact with the Auth Service before it begins terminating controlled sessions. +- `ssh_dial_timeout` (String) SSHDialTimeout is a custom dial timeout used when establishing SSH connections. If not set, the default timeout of 30s will be used. +- `tunnel_strategy` (Attributes) TunnelStrategyV1 determines the tunnel strategy used in the cluster. (see [below for nested schema](#nested-schema-for-spectunnel_strategy)) +- `web_idle_timeout` (String) WebIdleTimeout sets global cluster default setting for the web UI idle timeouts. 
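
As a sketch of the `tunnel_strategy` attribute listed above; the connection count is illustrative, and enabling proxy peering also assumes a compatible proxy deployment:

```hcl
resource "teleport_cluster_networking_config" "example" {
  version = "v2"

  spec = {
    tunnel_strategy = {
      # Agents maintain this many reverse-tunnel connections to peered proxies.
      proxy_peering = {
        agent_connection_count = 2
      }
    }
  }
}
```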
+ +### Nested Schema for `spec.tunnel_strategy` + +Optional: + +- `agent_mesh` (Attributes) (see [below for nested schema](#nested-schema-for-spectunnel_strategyagent_mesh)) +- `proxy_peering` (Attributes) (see [below for nested schema](#nested-schema-for-spectunnel_strategyproxy_peering)) + +### Nested Schema for `spec.tunnel_strategy.agent_mesh` + +Optional: + +- `active` (Boolean) Automatically generated field preventing empty message errors + + +### Nested Schema for `spec.tunnel_strategy.proxy_peering` + +Optional: + +- `agent_connection_count` (Number) diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/database.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/database.mdx new file mode 100644 index 0000000000000..e35b901db96fd --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/database.mdx @@ -0,0 +1,293 @@ +--- +title: Reference for the teleport_database Terraform resource +sidebar_label: database +description: This page describes the supported values of the teleport_database resource of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the teleport_database resource of the Teleport Terraform provider. 
+Follow [the database dynamic registration guide](../../../../enroll-resources/database-access/guides/dynamic-registration.mdx) +to finish deploying a Database Service and access the database resource. + + + +## Example Usage + +```hcl +# Teleport Database + +resource "teleport_database" "example" { + version = "v3" + metadata = { + name = "example" + description = "Test database" + labels = { + "teleport.dev/origin" = "dynamic" // This label is added on the Teleport side by default + } + } + + spec = { + protocol = "postgres" + uri = "localhost" + } +} +``` + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `version` (String) Version is the resource version. It must be specified. Supported values are: `v3`. + +### Optional + +- `metadata` (Attributes) Metadata is the database metadata. (see [below for nested schema](#nested-schema-for-metadata)) +- `spec` (Attributes) Spec is the database spec. (see [below for nested schema](#nested-schema-for-spec)) +- `sub_kind` (String) SubKind is an optional resource subkind. + +### Nested Schema for `metadata` + +Required: + +- `name` (String) Name is an object name. + +Optional: + +- `description` (String) Description is the object description. +- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. +- `labels` (Map of String) Labels is a set of labels. + + +### Nested Schema for `spec` + +Required: + +- `protocol` (String) Protocol is the database protocol: postgres, mysql, mongodb, etc. +- `uri` (String) URI is the database connection endpoint. + +Optional: + +- `ad` (Attributes) AD is the Active Directory configuration for the database. (see [below for nested schema](#nested-schema-for-specad)) +- `admin_user` (Attributes) AdminUser is the database admin user for automatic user provisioning. (see [below for nested schema](#nested-schema-for-specadmin_user)) +- `aws` (Attributes) AWS contains AWS specific settings for RDS/Aurora/Redshift databases.
(see [below for nested schema](#nested-schema-for-specaws)) +- `azure` (Attributes) Azure contains Azure specific database metadata. (see [below for nested schema](#nested-schema-for-specazure)) +- `ca_cert` (String) CACert is the PEM-encoded database CA certificate. DEPRECATED: Moved to TLS.CACert. DELETE IN 10.0. +- `dynamic_labels` (Attributes Map) DynamicLabels are the database dynamic labels. (see [below for nested schema](#nested-schema-for-specdynamic_labels)) +- `gcp` (Attributes) GCP contains parameters specific to GCP Cloud SQL databases. (see [below for nested schema](#nested-schema-for-specgcp)) +- `mongo_atlas` (Attributes) MongoAtlas contains Atlas metadata about the database. (see [below for nested schema](#nested-schema-for-specmongo_atlas)) +- `mysql` (Attributes) MySQL is an additional section with MySQL database options. (see [below for nested schema](#nested-schema-for-specmysql)) +- `oracle` (Attributes) Oracle is an additional section with Oracle configuration options. (see [below for nested schema](#nested-schema-for-specoracle)) +- `tls` (Attributes) TLS is the TLS configuration used when establishing a connection to the target database. Allows providing a custom CA cert or overriding the server name. (see [below for nested schema](#nested-schema-for-spectls)) + +### Nested Schema for `spec.ad` + +Optional: + +- `domain` (String) Domain is the Active Directory domain the database resides in. +- `kdc_host_name` (String) KDCHostName is the host name for a KDC for x509 Authentication. +- `keytab_file` (String) KeytabFile is the path to the Kerberos keytab file. +- `krb5_file` (String) Krb5File is the path to the Kerberos configuration file. Defaults to /etc/krb5.conf. +- `ldap_cert` (String) LDAPCert is a certificate from Windows LDAP/AD, optional; only for x509 Authentication. +- `ldap_service_account_name` (String) LDAPServiceAccountName is the name of the service account for performing LDAP queries. Required for x509 Auth / PKINIT.
+- `ldap_service_account_sid` (String) LDAPServiceAccountSID is the SID of the service account for performing LDAP queries. Required for x509 Auth / PKINIT. +- `spn` (String) SPN is the service principal name for the database. + + +### Nested Schema for `spec.admin_user` + +Optional: + +- `default_database` (String) DefaultDatabase is the database that the privileged database user logs into by default. Depending on the database type, this database may be used to store procedures or data for managing database users. +- `name` (String) Name is the username of the privileged database user. + + +### Nested Schema for `spec.aws` + +Optional: + +- `account_id` (String) AccountID is the AWS account ID this database belongs to. +- `assume_role_arn` (String) AssumeRoleARN is an optional AWS role ARN to assume when accessing a database. Set this field and ExternalID to enable access across AWS accounts. +- `docdb` (Attributes) DocumentDB contains Amazon DocumentDB-specific metadata. (see [below for nested schema](#nested-schema-for-specawsdocdb)) +- `elasticache` (Attributes) ElastiCache contains Amazon ElastiCache Redis-specific metadata. (see [below for nested schema](#nested-schema-for-specawselasticache)) +- `elasticache_serverless` (Attributes) ElastiCacheServerless contains Amazon ElastiCache Serverless metadata. (see [below for nested schema](#nested-schema-for-specawselasticache_serverless)) +- `external_id` (String) ExternalID is an optional AWS external ID used to enable assuming an AWS role across accounts. +- `iam_policy_status` (Number) IAMPolicyStatus indicates whether the IAM policy is configured properly for database access. If not, the user must update the AWS profile identity to allow access to the database. E.g. for an RDS database: the underlying AWS profile allows `rds-db:connect` for the database. +- `memorydb` (Attributes) MemoryDB contains AWS MemoryDB specific metadata.
(see [below for nested schema](#nested-schema-for-specawsmemorydb)) +- `opensearch` (Attributes) OpenSearch contains AWS OpenSearch specific metadata. (see [below for nested schema](#nested-schema-for-specawsopensearch)) +- `rds` (Attributes) RDS contains RDS specific metadata. (see [below for nested schema](#nested-schema-for-specawsrds)) +- `rdsproxy` (Attributes) RDSProxy contains AWS RDS Proxy specific metadata. (see [below for nested schema](#nested-schema-for-specawsrdsproxy)) +- `redshift` (Attributes) Redshift contains Redshift specific metadata. (see [below for nested schema](#nested-schema-for-specawsredshift)) +- `redshift_serverless` (Attributes) RedshiftServerless contains Amazon Redshift Serverless-specific metadata. (see [below for nested schema](#nested-schema-for-specawsredshift_serverless)) +- `region` (String) Region is an AWS cloud region. +- `secret_store` (Attributes) SecretStore contains secret store configurations. (see [below for nested schema](#nested-schema-for-specawssecret_store)) +- `session_tags` (Map of String) SessionTags is a list of AWS STS session tags. + +### Nested Schema for `spec.aws.docdb` + +Optional: + +- `cluster_id` (String) ClusterID is the cluster identifier. +- `endpoint_type` (String) EndpointType is the type of the endpoint. +- `instance_id` (String) InstanceID is the instance identifier. + + +### Nested Schema for `spec.aws.elasticache` + +Optional: + +- `endpoint_type` (String) EndpointType is the type of the endpoint. +- `replication_group_id` (String) ReplicationGroupID is the Redis replication group ID. +- `transit_encryption_enabled` (Boolean) TransitEncryptionEnabled indicates whether in-transit encryption (TLS) is enabled. +- `user_group_ids` (List of String) UserGroupIDs is a list of user group IDs. + + +### Nested Schema for `spec.aws.elasticache_serverless` + +Optional: + +- `cache_name` (String) CacheName is an ElastiCache Serverless cache name.
+ + +### Nested Schema for `spec.aws.memorydb` + +Optional: + +- `acl_name` (String) ACLName is the name of the ACL associated with the cluster. +- `cluster_name` (String) ClusterName is the name of the MemoryDB cluster. +- `endpoint_type` (String) EndpointType is the type of the endpoint. +- `tls_enabled` (Boolean) TLSEnabled indicates whether in-transit encryption (TLS) is enabled. + + +### Nested Schema for `spec.aws.opensearch` + +Optional: + +- `domain_id` (String) DomainID is the ID of the domain. +- `domain_name` (String) DomainName is the name of the domain. +- `endpoint_type` (String) EndpointType is the type of the endpoint. + + +### Nested Schema for `spec.aws.rds` + +Optional: + +- `cluster_id` (String) ClusterID is the RDS cluster (Aurora) identifier. +- `iam_auth` (Boolean) IAMAuth indicates whether database IAM authentication is enabled. +- `instance_id` (String) InstanceID is the RDS instance identifier. +- `resource_id` (String) ResourceID is the RDS instance resource identifier (db-xxx). +- `security_groups` (List of String) SecurityGroups is a list of attached security groups for the RDS instance. +- `subnets` (List of String) Subnets is a list of subnets for the RDS instance. +- `vpc_id` (String) VPCID is the VPC where the RDS is running. + + +### Nested Schema for `spec.aws.rdsproxy` + +Optional: + +- `custom_endpoint_name` (String) CustomEndpointName is the identifier of an RDS Proxy custom endpoint. +- `name` (String) Name is the identifier of an RDS Proxy. +- `resource_id` (String) ResourceID is the RDS instance resource identifier (prx-xxx). + + +### Nested Schema for `spec.aws.redshift` + +Optional: + +- `cluster_id` (String) ClusterID is the Redshift cluster identifier. + + +### Nested Schema for `spec.aws.redshift_serverless` + +Optional: + +- `endpoint_name` (String) EndpointName is the VPC endpoint name. +- `workgroup_id` (String) WorkgroupID is the workgroup ID. +- `workgroup_name` (String) WorkgroupName is the workgroup name. 
+ + +### Nested Schema for `spec.aws.secret_store` + +Optional: + +- `key_prefix` (String) KeyPrefix specifies the secret key prefix. +- `kms_key_id` (String) KMSKeyID specifies the AWS KMS key for encryption. + + + +### Nested Schema for `spec.azure` + +Optional: + +- `is_flexi_server` (Boolean) IsFlexiServer is true if the database is an Azure Flexible server. +- `name` (String) Name is the Azure database server name. +- `redis` (Attributes) Redis contains Azure Cache for Redis specific database metadata. (see [below for nested schema](#nested-schema-for-specazureredis)) +- `resource_id` (String) ResourceID is the Azure fully qualified ID for the resource. + +### Nested Schema for `spec.azure.redis` + +Optional: + +- `clustering_policy` (String) ClusteringPolicy is the clustering policy for Redis Enterprise. + + + +### Nested Schema for `spec.dynamic_labels` + +Optional: + +- `command` (List of String) Command is a command to run +- `period` (String) Period is a time between command runs +- `result` (String) Result captures standard output + + +### Nested Schema for `spec.gcp` + +Optional: + +- `alloydb` (Attributes) AlloyDB contains AlloyDB specific configuration elements. (see [below for nested schema](#nested-schema-for-specgcpalloydb)) +- `instance_id` (String) InstanceID is the Cloud SQL instance ID. +- `project_id` (String) ProjectID is the GCP project ID the Cloud SQL instance resides in. + +### Nested Schema for `spec.gcp.alloydb` + +Optional: + +- `endpoint_override` (String) EndpointOverride is an override of endpoint address to use. +- `endpoint_type` (String) EndpointType is the database endpoint type to use. Should be one of: "private", "public", "psc". + + + +### Nested Schema for `spec.mongo_atlas` + +Optional: + +- `name` (String) Name is the Atlas database instance name. 
+ + +### Nested Schema for `spec.mysql` + +Optional: + +- `server_version` (String) ServerVersion is the server version reported by DB proxy if the runtime information is not available. + + +### Nested Schema for `spec.oracle` + +Optional: + +- `audit_user` (String) AuditUser is the name of the Oracle database user that should be used to access the internal audit trail. +- `retry_count` (Number) RetryCount is the maximum number of times to retry connecting to a host upon failure. If not specified it defaults to 2, for a total of 3 connection attempts. +- `shuffle_hostnames` (Boolean) ShuffleHostnames, when true, randomizes the order of hosts to connect to from the provided list. + + +### Nested Schema for `spec.tls` + +Optional: + +- `ca_cert` (String) CACert is an optional user-provided CA certificate used for verifying the database TLS connection. +- `mode` (Number) Mode is a TLS connection mode. 0 is "verify-full"; 1 is "verify-ca"; 2 is "insecure". +- `server_name` (String) ServerName allows providing a custom hostname. This value will override the servername/hostname on a certificate during validation. +- `trust_system_cert_pool` (Boolean) TrustSystemCertPool allows Teleport to trust certificate authorities available on the host system. If not set (by default), Teleport only trusts self-signed databases with TLS certificates signed by Teleport's Database Server CA or the ca_cert specified in this TLS setting. For cloud-hosted databases, Teleport downloads the corresponding required CAs for validation.
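+The numeric `spec.tls.mode` enum can be combined with the example at the top of this page. The fragment below is a hedged sketch: the URI, server name, and mode value are placeholders, not values documented on this page.
+
+```hcl
+# Illustrative only: the URI and server name are placeholders.
+resource "teleport_database" "tls_example" {
+  version = "v3"
+  metadata = {
+    name = "tls-example"
+  }
+  spec = {
+    protocol = "postgres"
+    uri      = "db.example.com:5432"
+    tls = {
+      mode                   = 0 # 0 is "verify-full"
+      server_name            = "db.example.com"
+      trust_system_cert_pool = true
+    }
+  }
+}
+```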
diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/discovery_config.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/discovery_config.mdx new file mode 100644 index 0000000000000..adfac53903bb0 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/discovery_config.mdx @@ -0,0 +1,372 @@ +--- +title: Reference for the teleport_discovery_config Terraform resource +sidebar_label: discovery_config +description: This page describes the supported values of the teleport_discovery_config resource of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the teleport_discovery_config resource of the Teleport Terraform provider. + + + + +## Example Usage + +```hcl +# Teleport Discovery Config +# +# Discovery Config resources define matchers for the Teleport Discovery Service. +# The Discovery Service automatically discovers and enrolls cloud resources +# (EC2 instances, RDS databases, EKS clusters, Azure VMs, etc.) into your +# Teleport cluster. +# +# Each Discovery Config is associated with a discovery_group. Discovery Services +# load matchers from Discovery Configs that share the same discovery_group. 
+ +# Example: AWS Discovery Config for EC2 instances and RDS databases +resource "teleport_discovery_config" "aws_example" { + header = { + metadata = { + name = "aws-discovery" + description = "Discover AWS EC2 instances and RDS databases" + labels = { + env = "production" + } + } + version = "v1" + } + + spec = { + discovery_group = "aws-prod" + + aws = [{ + types = ["ec2", "rds"] + regions = ["us-west-2", "us-east-1"] + tags = { + "env" = ["prod", "production"] + } + install_params = { + join_method = "iam" + join_token = "aws-discovery-token" + script_name = "default-installer" + } + }] + } +} + +# Example: Azure Discovery Config for VMs and AKS clusters +resource "teleport_discovery_config" "azure_example" { + header = { + metadata = { + name = "azure-discovery" + description = "Discover Azure VMs and AKS clusters" + } + version = "v1" + } + + spec = { + discovery_group = "azure-prod" + + azure = [{ + types = ["vm", "aks"] + regions = ["eastus", "westus2"] + subscriptions = ["00000000-0000-0000-0000-000000000000"] + resource_groups = ["my-resource-group"] + tags = { + "*" = ["*"] + } + install_params = { + join_method = "azure" + join_token = "azure-discovery-token" + script_name = "default-installer" + azure = { + client_id = "00000000-0000-0000-0000-000000000000" + } + } + }] + } +} +``` + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `header` (Attributes) Header is the resource header. (see [below for nested schema](#nested-schema-for-header)) +- `spec` (Attributes) Spec is an DiscoveryConfig specification. (see [below for nested schema](#nested-schema-for-spec)) + +### Nested Schema for `header` + +Required: + +- `metadata` (Attributes) metadata is resource metadata. (see [below for nested schema](#nested-schema-for-headermetadata)) +- `version` (String) Version is the API version used to create the resource. It must be specified. Based on this version, Teleport will apply different defaults on resource creation or deletion. 
It must be an integer prefixed by "v". For example: `v1` + +Optional: + +- `kind` (String) kind is a resource kind. +- `sub_kind` (String) sub_kind is an optional resource sub kind, used in some resources. + +### Nested Schema for `header.metadata` + +Required: + +- `name` (String) name is an object name. + +Optional: + +- `description` (String) description is the object description. +- `expires` (String) expires is a global expiry time header that can be set on any resource in the system. +- `labels` (Map of String) labels is a set of labels. +- `namespace` (String) namespace is the object namespace. The field should be called "namespace" when it returns in Teleport 2.4. +- `revision` (String) revision is an opaque identifier which tracks the versions of a resource over time. Clients should ignore and not alter its value but must return the revision in any updates of a resource. + + + +### Nested Schema for `spec` + +Optional: + +- `access_graph` (Attributes) AccessGraph is the configuration for syncing Cloud accounts into Access Graph. (see [below for nested schema](#nested-schema-for-specaccess_graph)) +- `aws` (Attributes List) AWS is a list of AWS Matchers. (see [below for nested schema](#nested-schema-for-specaws)) +- `azure` (Attributes List) Azure is a list of Azure Matchers. (see [below for nested schema](#nested-schema-for-specazure)) +- `discovery_group` (String) DiscoveryGroup is used by discovery_service to add extra matchers. All the discovery_services that have the same discovery_group will load the matchers of this resource. +- `gcp` (Attributes List) GCP is a list of GCP Matchers. (see [below for nested schema](#nested-schema-for-specgcp)) +- `kube` (Attributes List) Kube is a list of Kubernetes Matchers. (see [below for nested schema](#nested-schema-for-speckube)) + +### Nested Schema for `spec.access_graph` + +Optional: + +- `aws` (Attributes List) AWS is the configuration for the AWS Access Graph polling service.
(see [below for nested schema](#nested-schema-for-specaccess_graphaws)) +- `azure` (Attributes List) Azure is the configuration for the Azure Access Graph polling service. (see [below for nested schema](#nested-schema-for-specaccess_graphazure)) +- `poll_interval` (String) PollInterval is the frequency at which to poll for resources. + +### Nested Schema for `spec.access_graph.aws` + +Optional: + +- `assume_role` (Attributes) AssumeRoleARN is the AWS role to assume for database discovery. (see [below for nested schema](#nested-schema-for-specaccess_graphawsassume_role)) +- `cloud_trail_logs` (Attributes) Configuration settings for collecting AWS CloudTrail logs via an SQS queue. (see [below for nested schema](#nested-schema-for-specaccess_graphawscloud_trail_logs)) +- `eks_audit_logs` (Attributes) (see [below for nested schema](#nested-schema-for-specaccess_graphawseks_audit_logs)) +- `integration` (String) Integration is the integration name used to generate credentials to interact with AWS APIs. +- `regions` (List of String) Regions are AWS regions to import resources from. + +### Nested Schema for `spec.access_graph.aws.assume_role` + +Optional: + +- `external_id` (String) ExternalID is the external ID used to assume a role in another account. +- `role_arn` (String) RoleARN is the fully specified AWS IAM role ARN. + + +### Nested Schema for `spec.access_graph.aws.cloud_trail_logs` + +Optional: + +- `region` (String) The AWS region of the SQS queue for CloudTrail notifications, e.g. "us-east-2". +- `sqs_queue` (String) The name or URL of the SQS queue for CloudTrail log events, e.g. "demo-cloudtrail-queue". + + +### Nested Schema for `spec.access_graph.aws.eks_audit_logs` + +Optional: + +- `tags` (Map of List of String) The tags of EKS clusters for which apiserver audit logs should be fetched. + + + +### Nested Schema for `spec.access_graph.azure` + +Optional: + +- `integration` (String) Integration is the integration name used to generate credentials to interact with Azure APIs.
+- `subscription_id` (String) SubscriptionID is the ID of the Azure subscription to sync resources from. + + + +### Nested Schema for `spec.aws` + +Optional: + +- `assume_role` (Attributes) AssumeRoleARN is the AWS role to assume for database discovery. (see [below for nested schema](#nested-schema-for-specawsassume_role)) +- `install` (Attributes) Params sets the join method when installing on discovered EC2 nodes. (see [below for nested schema](#nested-schema-for-specawsinstall)) +- `integration` (String) Integration is the integration name used to generate credentials to interact with AWS APIs. Environment credentials will not be used when this value is set. +- `kube_app_discovery` (Boolean) KubeAppDiscovery controls whether Kubernetes App Discovery will be enabled for agents running on discovered clusters; currently this only affects AWS EKS discovery in integration mode. +- `organization` (Attributes) Organization is an AWS Organization matcher for discovering resources across multiple accounts under an Organization. (see [below for nested schema](#nested-schema-for-specawsorganization)) +- `regions` (List of String) Regions are AWS regions to query for resources. +- `setup_access_for_arn` (String) SetupAccessForARN is the role that the Discovery Service should create EKS Access Entries for. This value should match the IAM identity that Teleport Kubernetes Service uses. If this value is empty, the Discovery Service will attempt to set up access for its own identity (self). +- `ssm` (Attributes) SSM provides options to use when sending a document command to an EC2 node. (see [below for nested schema](#nested-schema-for-specawsssm)) +- `tags` (Map of List of String) Tags are AWS resource Tags to match. +- `types` (List of String) Types are AWS resource types to match: "ec2", "rds", "redshift", "elasticache", or "memorydb".
+ +### Nested Schema for `spec.aws.assume_role` + +Optional: + +- `external_id` (String) ExternalID is the external ID used to assume a role in another account. +- `role_arn` (String) RoleARN is the fully specified AWS IAM role ARN. + + +### Nested Schema for `spec.aws.install` + +Optional: + +- `azure` (Attributes) Azure is the set of Azure-specific installation parameters. (see [below for nested schema](#nested-schema-for-specawsinstallazure)) +- `enroll_mode` (Number) EnrollMode indicates the enrollment mode to be used when adding a node. Valid values: 0 uses EICE for EC2 matchers that use an integration, and script mode for all other methods; 1 uses script mode; 2 uses EICE mode. +- `http_proxy_settings` (Attributes) HTTPProxySettings defines HTTP proxy settings for making HTTP requests. When set, this will set the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables before running the installation. (see [below for nested schema](#nested-schema-for-specawsinstallhttp_proxy_settings)) +- `install_teleport` (Boolean) InstallTeleport disables agentless discovery. +- `join_method` (String) JoinMethod is the method to use when joining the cluster. +- `join_token` (String) JoinToken is the token to use when joining the cluster. +- `proxy_addr` (String) PublicProxyAddr is the address of the proxy the discovered node should use to connect to the cluster. +- `script_name` (String) ScriptName is the name of the teleport installer script resource for the cloud instance to execute. +- `sshd_config` (String) SSHDConfig provides the path to write sshd configuration changes. +- `suffix` (String) Suffix indicates the installation suffix for the teleport installation. Set this value if you want multiple installations of Teleport. See the --install-suffix flag in the teleport-update program. Note: only supported for Amazon EC2. Suffix name can only contain alphanumeric characters and hyphens.
+- `update_group` (String) UpdateGroup indicates the update group for the teleport installation. This value is used to group installations in order to update them in batches. See the --group flag in the teleport-update program. Note: only supported for Amazon EC2. Group name can only contain alphanumeric characters and hyphens. + +### Nested Schema for `spec.aws.install.azure` + +Optional: + +- `client_id` (String) ClientID is the client ID of the managed identity discovered nodes should use to join the cluster. + + +### Nested Schema for `spec.aws.install.http_proxy_settings` + +Optional: + +- `http_proxy` (String) HTTPProxy is the URL for the HTTP proxy to use when making requests. When applied, this will set the HTTP_PROXY environment variable. +- `https_proxy` (String) HTTPSProxy is the URL for the HTTPS proxy to use when making requests. When applied, this will set the HTTPS_PROXY environment variable. +- `no_proxy` (String) NoProxy is a comma-separated list of URLs that will be excluded from proxying. When applied, this will set the NO_PROXY environment variable. + + + +### Nested Schema for `spec.aws.organization` + +Optional: + +- `organization_id` (String) OrganizationID is the AWS Organization ID to match against. Required. +- `organizational_units` (Attributes) OrganizationalUnits contains rules for matching AWS accounts based on their Organizational Units. (see [below for nested schema](#nested-schema-for-specawsorganizationorganizational_units)) + +### Nested Schema for `spec.aws.organization.organizational_units` + +Optional: + +- `exclude` (List of String) Exclude is a list of AWS Organizational Unit IDs to exclude. Only exact matches or wildcard (*) are supported. If empty, no Organizational Units are excluded by default. +- `include` (List of String) Include is a list of AWS Organizational Unit IDs to match. Only exact matches or wildcard (*) are supported. If empty, all Organizational Units are included by default.
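+The Organization matcher described above can be sketched as follows. This fragment is an assumption-laden illustration: the Organization ID and OU IDs are placeholders, and the surrounding structure follows the AWS example earlier on this page.
+
+```hcl
+# Hypothetical values: the Organization ID and OU IDs are placeholders.
+resource "teleport_discovery_config" "aws_org_example" {
+  header = {
+    metadata = { name = "aws-org-discovery" }
+    version  = "v1"
+  }
+  spec = {
+    discovery_group = "aws-org"
+    aws = [{
+      types   = ["ec2"]
+      regions = ["us-east-1"]
+      organization = {
+        organization_id = "o-example123"
+        organizational_units = {
+          include = ["*"]
+          exclude = ["ou-example-excluded"]
+        }
+      }
+    }]
+  }
+}
+```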
+ + + +### Nested Schema for `spec.aws.ssm` + +Optional: + +- `document_name` (String) DocumentName is the name of the document to use when executing an SSM command + + + +### Nested Schema for `spec.azure` + +Optional: + +- `install_params` (Attributes) Params sets the join method when installing on discovered Azure nodes. (see [below for nested schema](#nested-schema-for-specazureinstall_params)) +- `integration` (String) Integration is the integration name used to generate credentials to interact with Azure APIs. Environment credentials will not be used when this value is set. +- `regions` (List of String) Regions are Azure locations to match for databases. +- `resource_groups` (List of String) ResourceGroups are Azure resource groups to query for resources. +- `subscriptions` (List of String) Subscriptions are Azure subscriptions to query for resources. +- `tags` (Map of List of String) ResourceTags are Azure tags on resources to match. +- `types` (List of String) Types are Azure types to match: "mysql", "postgres", "aks", "vm" + +### Nested Schema for `spec.azure.install_params` + +Required: + +- `join_method` (String) JoinMethod is the method to use when joining the cluster +- `join_token` (String) JoinToken is the token to use when joining the cluster + +Optional: + +- `azure` (Attributes) Azure is the set of Azure-specific installation parameters. (see [below for nested schema](#nested-schema-for-specazureinstall_paramsazure)) +- `enroll_mode` (Number) EnrollMode indicates the enrollment mode to be used when adding a node. Valid values: 0: uses eice for EC2 matchers which use an integration and script for all the other methods 1: uses script mode 2: uses eice mode +- `http_proxy_settings` (Attributes) HTTPProxySettings defines HTTP proxy settings for making HTTP requests. When set, this will set the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables before running the installation. 
(see [below for nested schema](#nested-schema-for-specazureinstall_paramshttp_proxy_settings)) +- `install_teleport` (Boolean) InstallTeleport disables agentless discovery +- `proxy_addr` (String) PublicProxyAddr is the address of the proxy the discovered node should use to connect to the cluster. +- `script_name` (String) ScriptName is the name of the teleport installer script resource for the cloud instance to execute +- `sshd_config` (String) SSHDConfig provides the path to write sshd configuration changes +- `suffix` (String) Suffix indicates the installation suffix for the teleport installation. Set this value if you want multiple installations of Teleport. See --install-suffix flag in teleport-update program. Note: only supported for Amazon EC2. Suffix name can only contain alphanumeric characters and hyphens. +- `update_group` (String) UpdateGroup indicates the update group for the teleport installation. This value is used to group installations in order to update them in batches. See --group flag in teleport-update program. Note: only supported for Amazon EC2. Group name can only contain alphanumeric characters and hyphens. + +### Nested Schema for `spec.azure.install_params.azure` + +Required: + +- `client_id` (String) ClientID is the client ID of the managed identity discovered nodes should use to join the cluster. + + +### Nested Schema for `spec.azure.install_params.http_proxy_settings` + +Optional: + +- `http_proxy` (String) HTTPProxy is the URL for the HTTP proxy to use when making requests. When applied, this will set the HTTP_PROXY environment variable. +- `https_proxy` (String) HTTPSProxy is the URL for the HTTPS Proxy to use when making requests. When applied, this will set the HTTPS_PROXY environment variable. +- `no_proxy` (String) NoProxy is a comma separated list of URLs that will be excluded from proxying. When applied, this will set the NO_PROXY environment variable. 
+ + + +### Nested Schema for `spec.gcp` + +Optional: + +- `install_params` (Attributes) Params sets the join method when installing on discovered GCP nodes. (see [below for nested schema](#nested-schema-for-specgcpinstall_params)) +- `labels` (Map of List of String) Labels are GCP labels to match. +- `locations` (List of String) Locations are GKE locations to search resources for. +- `project_ids` (List of String) ProjectIDs are the GCP project IDs where the resources are deployed. +- `service_accounts` (List of String) ServiceAccounts are the emails of service accounts attached to VMs. +- `tags` (Map of List of String) Tags is obsolete and only exists for backwards compatibility. Use Labels instead. +- `types` (List of String) Types are GCP resource types to match: "gke", "vm". + +### Nested Schema for `spec.gcp.install_params` + +Optional: + +- `azure` (Attributes) Azure is the set of Azure-specific installation parameters. (see [below for nested schema](#nested-schema-for-specgcpinstall_paramsazure)) +- `enroll_mode` (Number) EnrollMode indicates the enrollment mode to be used when adding a node. Valid values: 0 uses EICE for EC2 matchers that use an integration, and script mode for all other methods; 1 uses script mode; 2 uses EICE mode. +- `http_proxy_settings` (Attributes) HTTPProxySettings defines HTTP proxy settings for making HTTP requests. When set, this will set the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables before running the installation. (see [below for nested schema](#nested-schema-for-specgcpinstall_paramshttp_proxy_settings)) +- `install_teleport` (Boolean) InstallTeleport disables agentless discovery. +- `join_method` (String) JoinMethod is the method to use when joining the cluster. +- `join_token` (String) JoinToken is the token to use when joining the cluster. +- `proxy_addr` (String) PublicProxyAddr is the address of the proxy the discovered node should use to connect to the cluster.
+- `script_name` (String) ScriptName is the name of the teleport installer script resource for the cloud instance to execute. +- `sshd_config` (String) SSHDConfig provides the path to write sshd configuration changes. +- `suffix` (String) Suffix indicates the installation suffix for the teleport installation. Set this value if you want multiple installations of Teleport. See the --install-suffix flag in the teleport-update program. Note: only supported for Amazon EC2. Suffix name can only contain alphanumeric characters and hyphens. +- `update_group` (String) UpdateGroup indicates the update group for the teleport installation. This value is used to group installations in order to update them in batches. See the --group flag in the teleport-update program. Note: only supported for Amazon EC2. Group name can only contain alphanumeric characters and hyphens. + +### Nested Schema for `spec.gcp.install_params.azure` + +Optional: + +- `client_id` (String) ClientID is the client ID of the managed identity discovered nodes should use to join the cluster. + + +### Nested Schema for `spec.gcp.install_params.http_proxy_settings` + +Optional: + +- `http_proxy` (String) HTTPProxy is the URL for the HTTP proxy to use when making requests. When applied, this will set the HTTP_PROXY environment variable. +- `https_proxy` (String) HTTPSProxy is the URL for the HTTPS proxy to use when making requests. When applied, this will set the HTTPS_PROXY environment variable. +- `no_proxy` (String) NoProxy is a comma-separated list of URLs that will be excluded from proxying. When applied, this will set the NO_PROXY environment variable. + + + + +### Nested Schema for `spec.kube` + +Optional: + +- `labels` (Map of List of String) Labels are Kubernetes service labels to match. +- `namespaces` (List of String) Namespaces are Kubernetes namespaces in which to discover services. +- `types` (List of String) Types are Kubernetes service types to match. Currently only 'app' is supported.
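+The Kubernetes matcher above can be sketched as a hedged fragment; the discovery group, namespaces, and label values are placeholders, and the structure mirrors the examples earlier on this page.
+
+```hcl
+# Illustrative only: namespaces and labels are placeholders.
+resource "teleport_discovery_config" "kube_example" {
+  header = {
+    metadata = { name = "kube-discovery" }
+    version  = "v1"
+  }
+  spec = {
+    discovery_group = "kube-prod"
+    kube = [{
+      types      = ["app"]
+      namespaces = ["default", "web"]
+      labels = {
+        "expose" = ["true"]
+      }
+    }]
+  }
+}
+```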
diff --git a/docs/pages/reference/terraform-provider/resources/dynamic_windows_desktop.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/dynamic_windows_desktop.mdx similarity index 100% rename from docs/pages/reference/terraform-provider/resources/dynamic_windows_desktop.mdx rename to docs/pages/reference/infrastructure-as-code/terraform-provider/resources/dynamic_windows_desktop.mdx diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/github_connector.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/github_connector.mdx new file mode 100644 index 0000000000000..9262a42d902df --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/github_connector.mdx @@ -0,0 +1,119 @@ +--- +title: Reference for the teleport_github_connector Terraform resource +sidebar_label: github_connector +description: This page describes the supported values of the teleport_github_connector resource of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the teleport_github_connector resource of the Teleport Terraform provider. 
+ + + + +## Example Usage + +```hcl +# Terraform GitHub connector + +variable "github_secret" {} + +resource "teleport_github_connector" "github" { + version = "v3" + # This section tells Terraform that the role "example" must be created before the GitHub connector + depends_on = [ + teleport_role.example + ] + + metadata = { + name = "example" + labels = { + example = "yes" + } + } + + spec = { + client_id = "client" + client_secret = var.github_secret + + teams_to_roles = [{ + organization = "gravitational" + team = "devs" + roles = ["example"] + }] + } +} +``` + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `spec` (Attributes) Spec is a GitHub connector specification. (see [below for nested schema](#nested-schema-for-spec)) +- `version` (String) Version is the resource version. It must be specified. Supported values are: `v3`. + +### Optional + +- `metadata` (Attributes) Metadata holds resource metadata. (see [below for nested schema](#nested-schema-for-metadata)) +- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources. + +### Nested Schema for `spec` + +Required: + +- `client_id` (String) ClientID is the GitHub OAuth app client ID. +- `client_secret` (String, Sensitive) ClientSecret is the GitHub OAuth app client secret. + +Optional: + +- `api_endpoint_url` (String) APIEndpointURL is the URL of the API endpoint of the GitHub instance this connector is for. +- `client_redirect_settings` (Attributes) ClientRedirectSettings defines which client redirect URLs are allowed for non-browser SSO logins other than the standard localhost ones. (see [below for nested schema](#nested-schema-for-specclient_redirect_settings)) +- `display` (String) Display is the connector display name. +- `endpoint_url` (String) EndpointURL is the URL of the GitHub instance this connector is for. +- `redirect_url` (String) RedirectURL is the authorization callback URL. 
+- `teams_to_logins` (Attributes List) TeamsToLogins maps GitHub team memberships onto allowed logins/roles. DELETE IN 11.0.0 Deprecated: use GithubTeamsToRoles instead. (see [below for nested schema](#nested-schema-for-specteams_to_logins)) +- `teams_to_roles` (Attributes List) TeamsToRoles maps GitHub team memberships onto allowed roles. (see [below for nested schema](#nested-schema-for-specteams_to_roles)) +- `user_matchers` (List of String) UserMatchers is a set of glob patterns to narrow down which username(s) this auth connector should match for identifier-first login. + +### Nested Schema for `spec.client_redirect_settings` + +Optional: + +- `allowed_https_hostnames` (List of String) A list of hostnames allowed for HTTPS client redirect URLs. +- `insecure_allowed_cidr_ranges` (List of String) A list of CIDRs allowed for HTTP or HTTPS client redirect URLs. + + +### Nested Schema for `spec.teams_to_logins` + +Optional: + +- `kubernetes_groups` (List of String) KubeGroups is a list of allowed kubernetes groups for this org/team. +- `kubernetes_users` (List of String) KubeUsers is a list of allowed kubernetes users to impersonate for this org/team. +- `logins` (List of String) Logins is a list of allowed logins for this org/team. +- `organization` (String) Organization is a GitHub organization a user belongs to. +- `team` (String) Team is a team within the organization a user belongs to. + + +### Nested Schema for `spec.teams_to_roles` + +Optional: + +- `organization` (String) Organization is a GitHub organization a user belongs to. +- `roles` (List of String) Roles is a list of allowed roles for this org/team. +- `team` (String) Team is a team within the organization a user belongs to. + + + +### Nested Schema for `metadata` + +Required: + +- `name` (String) Name is an object name. + +Optional: + +- `description` (String) Description is the object description. +- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. 
+- `labels` (Map of String) Labels is a set of labels diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/health_check_config.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/health_check_config.mdx new file mode 100644 index 0000000000000..9a067fb5df06a --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/health_check_config.mdx @@ -0,0 +1,108 @@ +--- +title: Reference for the teleport_health_check_config Terraform resource +sidebar_label: health_check_config +description: This page describes the supported values of the teleport_health_check_config resource of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the teleport_health_check_config resource of the Teleport Terraform provider. + + + + +## Example Usage + +```hcl +resource "teleport_health_check_config" "example" { + metadata = { + name = "example" + description = "Example health check config" + labels = { + foo = "bar" + } + } + version = "v1" + spec = { + interval = "60s" + timeout = "5s" + healthy_threshold = 3 + unhealthy_threshold = 2 + match = { + db_labels = [{ + name = "env" + values = [ + "foo", + "bar", + ] + }] + db_labels_expression = "labels.foo == `bar`" + } + } +} +``` + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `metadata` (Attributes) Metadata is the health check config resource's metadata. (see [below for nested schema](#nested-schema-for-metadata)) +- `spec` (Attributes) Spec is the health check config specification. (see [below for nested schema](#nested-schema-for-spec)) +- `version` (String) Version is the health check config version. + +### Optional + +- `sub_kind` (String) SubKind is an optional resource sub kind. 
+ +### Nested Schema for `metadata` + +Required: + +- `name` (String) name is an object name. + +Optional: + +- `description` (String) description is object description. +- `expires` (String) expires is a global expiry time header that can be set on any resource in the system. +- `labels` (Map of String) labels is a set of labels. + + +### Nested Schema for `spec` + +Required: + +- `match` (Attributes) Match is used to select resources that these settings apply to. (see [below for nested schema](#nested-schema-for-specmatch)) + +Optional: + +- `healthy_threshold` (Number) HealthyThreshold is the number of consecutive passing health checks after which a target's health status becomes "healthy". +- `interval` (String) Interval is the time between each health check. +- `timeout` (String) Timeout is the health check connection establishment timeout. An attempt that times out is a failed attempt. +- `unhealthy_threshold` (Number) UnhealthyThreshold is the number of consecutive failing health checks after which a target's health status becomes "unhealthy". + +### Nested Schema for `spec.match` + +Optional: + +- `db_labels` (Attributes List) DBLabels matches database labels. An empty value is ignored. The match result is logically ANDed with DBLabelsExpression, if both are non-empty. (see [below for nested schema](#nested-schema-for-specmatchdb_labels)) +- `db_labels_expression` (String) DBLabelsExpression is a label predicate expression to match databases. An empty value is ignored. The match result is logically ANDed with DBLabels, if both are non-empty. +- `disabled` (Boolean) Disabled disables matches for all labels and expressions. +- `kubernetes_labels` (Attributes List) KubernetesLabels matches Kubernetes labels. An empty value is ignored. The match result is logically ANDed with KubernetesLabelsExpression, if both are non-empty. 
+ (see [below for nested schema](#nested-schema-for-specmatchkubernetes_labels)) +- `kubernetes_labels_expression` (String) KubernetesLabelsExpression is a label predicate expression to match Kubernetes resources. An empty value is ignored. The match result is logically ANDed with KubernetesLabels, if both are non-empty. + +### Nested Schema for `spec.match.db_labels` + +Optional: + +- `name` (String) The name of the label. +- `values` (List of String) The values associated with the label. + + +### Nested Schema for `spec.match.kubernetes_labels` + +Optional: + +- `name` (String) The name of the label. +- `values` (List of String) The values associated with the label. diff --git a/docs/pages/reference/terraform-provider/resources/installer.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/installer.mdx similarity index 100% rename from docs/pages/reference/terraform-provider/resources/installer.mdx rename to docs/pages/reference/infrastructure-as-code/terraform-provider/resources/installer.mdx diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/integration.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/integration.mdx new file mode 100644 index 0000000000000..afe9a8b8461d8 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/integration.mdx @@ -0,0 +1,152 @@ +--- +title: Reference for the teleport_integration Terraform resource +sidebar_label: integration +description: This page describes the supported values of the teleport_integration resource of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the teleport_integration resource of the Teleport Terraform provider. 
+ + + + +## Example Usage + +```hcl +resource "teleport_integration" "aws_oidc" { + version = "v1" + sub_kind = "aws-oidc" + metadata = { + name = "example" + description = "AWS OIDC integration" + labels = { + env = "dev" + } + } + + spec = { + aws_oidc = { + role_arn = "arn:aws:iam::123456789012:role/example-role-name" + } + } +} + +resource "teleport_integration" "azure_oidc" { + version = "v1" + sub_kind = "azure-oidc" + metadata = { + name = "azure-oidc" + description = "Example Azure OIDC integration" + labels = { + env = "dev" + } + } + + spec = { + azure_oidc = { + // Azure Entra ID tenant ID + tenant_id = "a1b2c3d4-f2e4-97a8-9abc-1234567890ab" + // Azure enterprise application client ID + client_id = "7f12e3b5-6789-4abc-def0-112233445566" + } + } +} + +resource "teleport_integration" "aws_roles_anywhere" { + version = "v1" + sub_kind = "aws-ra" + metadata = { + name = "aws-ra" + description = "Example AWS Roles Anywhere integration" + labels = { + env = "dev" + } + } + + spec = { + aws_ra = { + profile_sync_config = { + // sync AWS profiles as Teleport applications + enabled = true + profile_accepts_role_session_name = false + profile_arn = "arn:aws:rolesanywhere:us-east-1:123456789012:profile/" + role_arn = "arn:aws:iam::123456789012:role/example-role-name" + // only sync AWS profiles as Teleport applications if the profile name matches + profile_name_filters = [ + "teleport-*", // supports globs + "^teleport-.*$", // and regex if the string is enclosed in regex anchors + ] + } + trust_anchor_arn = "arn:aws:rolesanywhere:us-east-1:123456789012:trust-anchor/" + } + } +} +``` + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) +- `spec` (Attributes) Spec is an Integration specification. 
+ (see [below for nested schema](#nested-schema-for-spec)) +- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources +- `version` (String) Version is the API version used to create the resource. It must be specified. Based on this version, Teleport will apply different defaults on resource creation or deletion. It must be an integer prefixed by "v". For example: `v1` + +### Nested Schema for `metadata` + +Required: + +- `name` (String) Name is an object name + +Optional: + +- `description` (String) Description is object description +- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. +- `labels` (Map of String) Labels is a set of labels + + +### Nested Schema for `spec` + +Optional: + +- `aws_oidc` (Attributes) AWSOIDC contains the specific fields to handle the AWS OIDC Integration subkind (see [below for nested schema](#nested-schema-for-specaws_oidc)) +- `aws_ra` (Attributes) AWSRA contains the specific fields to handle the AWS Roles Anywhere Integration subkind. (see [below for nested schema](#nested-schema-for-specaws_ra)) +- `azure_oidc` (Attributes) AzureOIDC contains the specific fields to handle the Azure OIDC Integration subkind (see [below for nested schema](#nested-schema-for-specazure_oidc)) + +### Nested Schema for `spec.aws_oidc` + +Optional: + +- `audience` (String) Audience is used to record a name of a plugin or a Discovery Service in Teleport that depends on this integration. Audience value can either be empty or "aws-identity-center". Preset audience may impose specific behavior on the integration CRUD API, such as preventing integration from update or deletion. Empty audience value should be treated as a default and backward-compatible behavior of the integration. +- `issuer_s3_uri` (String) IssuerS3URI is the Identity Provider that was configured in AWS. 
+ The files under this bucket/prefix must be publicly accessible and contain the following: > .well-known/openid-configuration > .well-known/jwks Format: `s3:///` Optional. The proxy's endpoint is used if it is not specified. DEPRECATED: Thumbprint validation requires the issuer to update the IdP in AWS every time the issuer changes the certificate. Amazon had some whitelisted providers where the thumbprint was ignored. S3-hosted providers were in that list. Amazon is now trusting all the root certificate authorities, and this workaround is no longer needed. DELETE IN 18.0. +- `role_arn` (String) RoleARN contains the Role ARN used to set up the Integration. This is the AWS Role that Teleport will use to issue tokens for API Calls. + + +### Nested Schema for `spec.aws_ra` + +Optional: + +- `profile_sync_config` (Attributes) ProfileSyncConfig contains the configuration for the AWS Roles Anywhere Profile sync. This is used to create AWS Roles Anywhere profiles as application servers. (see [below for nested schema](#nested-schema-for-specaws_raprofile_sync_config)) +- `trust_anchor_arn` (String) TrustAnchorARN contains the AWS IAM Roles Anywhere Trust Anchor ARN used to set up the Integration. + +### Nested Schema for `spec.aws_ra.profile_sync_config` + +Optional: + +- `enabled` (Boolean) Enabled is set to true if this integration should sync profiles as application servers. +- `profile_accepts_role_session_name` (Boolean) ProfileAcceptsRoleSessionName indicates whether the profile accepts a custom Role Session name. +- `profile_arn` (String) ProfileARN is the ARN of the Roles Anywhere Profile used to generate credentials to access the AWS APIs. +- `profile_name_filters` (List of String) ProfileNameFilters is a list of filters applied to the profile name. Only matching profiles will be synchronized as application servers. If empty, no filtering is applied. 
Filters can be globs, for example: profile* *name* Or regexes if they're prefixed and suffixed with ^ and $, for example: ^profile.*$ ^.*name.*$ +- `role_arn` (String) RoleARN is the ARN of the IAM Role to assume when accessing the AWS APIs. + + + +### Nested Schema for `spec.azure_oidc` + +Optional: + +- `client_id` (String) ClientID specifies the ID of Azure enterprise application (client) that corresponds to this plugin. +- `tenant_id` (String) TenantID specifies the ID of Entra Tenant (Directory) that this plugin integrates with. diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/login_rule.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/login_rule.mdx new file mode 100644 index 0000000000000..60748d2a55d3b --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/login_rule.mdx @@ -0,0 +1,85 @@ +--- +title: Reference for the teleport_login_rule Terraform resource +sidebar_label: login_rule +description: This page describes the supported values of the teleport_login_rule resource of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the teleport_login_rule resource of the Teleport Terraform provider. + + + + +## Example Usage + +```hcl +# Teleport Login Rule resource + +resource "teleport_login_rule" "example" { + metadata = { + description = "Example Login Rule" + labels = { + "example" = "yes" + } + } + + version = "v1" + priority = 0 + + # Either traits_map or traits_expression must be provided, but not both. + traits_map = { + "logins" = { + values = [ + "external.logins", + "external.username", + ] + } + "groups" = { + values = [ + "external.groups", + ] + } + } + # # This traits_expression is functionally equivalent to the traits_map above. 
+ # traits_expression = < OpenID Connect section of the repository settings. +- `identity_provider_url` (String) IdentityProviderURL is a Bitbucket-specified issuer URL for incoming OIDC tokens. It is unique to each Bitbucket repository, and must be set to the value as written in the Pipelines -> OpenID Connect section of the repository settings. + +### Nested Schema for `spec.bitbucket.allow` + +Optional: + +- `branch_name` (String) BranchName is the name of the branch on which this pipeline executed. +- `deployment_environment_uuid` (String) DeploymentEnvironmentUUID is the UUID of the deployment environment targeted by this pipelines run, if any. These values may be found in the "Pipelines -> OpenID Connect -> Deployment environments" section of the repository settings. +- `repository_uuid` (String) RepositoryUUID is the UUID of the repository for which this token was issued. Bitbucket UUIDs must begin and end with braces, e.g. `{...}`. This value may be found in the Pipelines -> OpenID Connect section of the repository settings. +- `workspace_uuid` (String) WorkspaceUUID is the UUID of the workspace for which this token was issued. Bitbucket UUIDs must begin and end with braces, e.g. `{...}`. This value may be found in the Pipelines -> OpenID Connect section of the repository settings. + + + +### Nested Schema for `spec.bound_keypair` + +Optional: + +- `onboarding` (Attributes) Onboarding contains parameters related to initial onboarding and keypair registration. (see [below for nested schema](#nested-schema-for-specbound_keypaironboarding)) +- `recovery` (Attributes) Recovery contains parameters related to recovery after identity expiration. (see [below for nested schema](#nested-schema-for-specbound_keypairrecovery)) +- `rotate_after` (String) RotateAfter is an optional timestamp that forces clients to perform a keypair rotation on the next join or recovery attempt after the given date. 
If `LastRotatedAt` is unset or before this timestamp, a rotation will be requested. It is recommended to set this value to the current timestamp if a rotation should be triggered on the next join attempt. + +### Nested Schema for `spec.bound_keypair.onboarding` + +Optional: + +- `initial_public_key` (String) InitialPublicKey is used to preregister a public key generated by `tbot keypair create`. When set, no initial join secret is generated or made available for use, and clients must have the associated private key available to join. If set, `initial_join_secret` and `must_register_before` are ignored. This value is written in SSH authorized_keys format. +- `must_register_before` (String) MustRegisterBefore is an optional time before which registration via initial join secret must be performed. Attempts to register using an initial join secret after this timestamp will not be allowed. This may be modified after creation if necessary to allow the initial registration to take place. This value is ignored if `initial_public_key` is set. +- `registration_secret` (String) RegistrationSecret is a secret joining clients may use to register their public key on first join, which may be used instead of preregistering a public key with `initial_public_key`. If `initial_public_key` is set, this value is ignored. Otherwise, if set, this value will be used to populate `.status.bound_keypair.registration_secret`. If unset and no `initial_public_key` is provided, a random secure value will be generated server-side to populate the status field. + + +### Nested Schema for `spec.bound_keypair.recovery` + +Optional: + +- `limit` (Number) Limit is the maximum number of allowed recovery attempts. This value may be raised or lowered after creation to allow additional recovery attempts should the initial limit be exhausted. If `mode` is set to `standard`, recovery attempts will only be allowed if `.status.bound_keypair.recovery_count` is less than this limit. 
This limit is not enforced if `mode` is set to `relaxed` or `insecure`. This value must be at least 1 to allow for the initial join during onboarding, which counts as a recovery. +- `mode` (String) Mode sets the recovery rule enforcement mode. It may be one of these values: - standard (or unset): all configured rules enforced. The recovery limit and client join state are required and verified. This is the most secure recovery mode. - relaxed: recovery limit is not enforced, but client join state is still required. This effectively allows unlimited recovery attempts, but client join state still helps mitigate stolen credentials. - insecure: neither the recovery limit nor client join state are enforced. This allows any client with the private key to join freely. This is less secure, but can be useful in certain situations, like in otherwise unsupported CI/CD providers. This mode should be used with care, and RBAC rules should be configured to heavily restrict which resources this identity can access. + + + +### Nested Schema for `spec.circleci` + +Optional: + +- `allow` (Attributes List) Allow is a list of TokenRules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-speccircleciallow)) +- `organization_id` (String) + +### Nested Schema for `spec.circleci.allow` + +Optional: + +- `context_id` (String) +- `project_id` (String) + + + +### Nested Schema for `spec.env0` + +Optional: + +- `allow` (Attributes List) Allow is a list of Rules, jobs using this token must match at least one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specenv0allow)) + +### Nested Schema for `spec.env0.allow` + +Optional: + +- `deployer_email` (String) DeployerEmail is the email of the person that triggered the deployment, corresponding to `deployerEmail` in an Env0 OIDC token. +- `deployment_type` (String) DeploymentType is the env0 deployment type, such as "deploy", "destroy", etc. 
Corresponds to `deploymentType` in an Env0 OIDC token. +- `env0_tag` (String) Env0Tag is a custom tag value corresponding to `env0Tag` when `ENV0_OIDC_TAG` is set. +- `environment_id` (String) EnvironmentID is the unique identifier of the Env0 environment, corresponding to `environmentId` in an Env0 OIDC token. +- `environment_name` (String) EnvironmentName is the name of the Env0 environment, corresponding to `environmentName` in an Env0 OIDC token. +- `organization_id` (String) OrganizationID is the unique organization identifier, corresponding to `organizationId` in an Env0 OIDC token. +- `project_id` (String) ProjectID is a unique project identifier, corresponding to `projectId` in an Env0 OIDC token. +- `project_name` (String) ProjectName is the name of the project under which the job was run corresponding to `projectName` in an Env0 OIDC token. +- `template_id` (String) TemplateID is the unique identifier of the Env0 template, corresponding to `templateId` in an Env0 OIDC token. +- `template_name` (String) TemplateName is the name of the Env0 template, corresponding to `templateName` in an Env0 OIDC token. +- `workspace_name` (String) WorkspaceName is the name of the Env0 workspace, corresponding to `workspaceName` in an Env0 OIDC token. + + + +### Nested Schema for `spec.gcp` + +Optional: + +- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specgcpallow)) + +### Nested Schema for `spec.gcp.allow` + +Optional: + +- `locations` (List of String) Locations is a list of regions (e.g. "us-west1") and/or zones (e.g. "us-west1-b"). +- `project_ids` (List of String) ProjectIDs is a list of project IDs (e.g. ``). +- `service_accounts` (List of String) ServiceAccounts is a list of service account emails (e.g. `-compute@developer.gserviceaccount.com`). 
+ + + +### Nested Schema for `spec.github` + +Optional: + +- `allow` (Attributes List) Allow is a list of TokenRules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specgithuballow)) +- `enterprise_server_host` (String) EnterpriseServerHost allows joining from runners associated with a GitHub Enterprise Server instance. When unconfigured, tokens will be validated against github.com, but when configured to the host of a GHES instance, then the tokens will be validated against host. This value should be the hostname of the GHES instance, and should not include the scheme or a path. The instance must be accessible over HTTPS at this hostname and the certificate must be trusted by the Auth Service. +- `enterprise_slug` (String) EnterpriseSlug allows the slug of a GitHub Enterprise organisation to be included in the expected issuer of the OIDC tokens. This is for compatibility with the `include_enterprise_slug` option in GHE. This field should be set to the slug of your enterprise if this is enabled. If this is not enabled, then this field must be left empty. This field cannot be specified if `enterprise_server_host` is specified. See https://docs.github.com/en/enterprise-cloud@latest/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect#customizing-the-issuer-value-for-an-enterprise for more information about customized issuer values. +- `static_jwks` (String) StaticJWKS disables fetching of the GHES signing keys via the JWKS/OIDC endpoints, and allows them to be directly specified. This allows joining from GitHub Actions in GHES instances that are not reachable by the Teleport Auth Service. + +### Nested Schema for `spec.github.allow` + +Optional: + +- `actor` (String) The personal account that initiated the workflow run. +- `environment` (String) The name of the environment used by the job. +- `ref` (String) The git ref that triggered the workflow run. 
+- `ref_type` (String) The type of ref, for example: "branch". +- `repository` (String) The repository from which the workflow is running. This includes the name of the owner, e.g. `gravitational/teleport`. +- `repository_owner` (String) The name of the organization in which the repository is stored. +- `sub` (String) Sub, also known as Subject, is a string that roughly uniquely identifies the workload. The format of this varies depending on the type of GitHub Actions run. +- `workflow` (String) The name of the workflow. + + + +### Nested Schema for `spec.gitlab` + +Optional: + +- `allow` (Attributes List) Allow is a list of TokenRules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specgitlaballow)) +- `domain` (String) Domain is the domain of your GitLab instance. This will default to `gitlab.com`, but can be set to the domain of your self-hosted GitLab, e.g. `gitlab.example.com`. +- `static_jwks` (String) StaticJWKS disables fetching of the GitLab signing keys via the JWKS/OIDC endpoints, and allows them to be directly specified. This allows joining from GitLab CI instances that are not reachable by the Teleport Auth Service. + +### Nested Schema for `spec.gitlab.allow` + +Optional: + +- `ci_config_ref_uri` (String) CIConfigRefURI is the ref path to the top-level pipeline definition, for example, gitlab.example.com/my-group/my-project//.gitlab-ci.yml@refs/heads/main. +- `ci_config_sha` (String) CIConfigSHA is the git commit SHA for the ci_config_ref_uri. +- `deployment_tier` (String) DeploymentTier is the deployment tier of the environment the job specifies. +- `environment` (String) Environment limits access by the environment the job deploys to (if one is associated). +- `environment_protected` (Boolean) EnvironmentProtected is true if the Git ref is protected, false otherwise. +- `namespace_path` (String) NamespacePath is used to limit access to jobs in a group or user's projects. 
Example: `mygroup` This field supports "glob-style" matching: - Use '*' to match zero or more characters. - Use '?' to match any single character. +- `pipeline_source` (String) PipelineSource limits access by the job pipeline source type. https://docs.gitlab.com/ee/ci/jobs/job_control.html#common-if-clauses-for-rules Example: `web` +- `project_path` (String) ProjectPath is used to limit access to jobs belonging to an individual project. Example: `mygroup/myproject` This field supports "glob-style" matching: - Use '*' to match zero or more characters. - Use '?' to match any single character. +- `project_visibility` (String) ProjectVisibility is the visibility of the project where the pipeline is running. Can be internal, private, or public. +- `ref` (String) Ref allows access to be limited to jobs triggered by a specific git ref. Ensure this is used in combination with ref_type. This field supports "glob-style" matching: - Use '*' to match zero or more characters. - Use '?' to match any single character. +- `ref_protected` (Boolean) RefProtected is true if the Git ref is protected, false otherwise. +- `ref_type` (String) RefType allows access to be limited to jobs triggered by a specific git ref type. Example: `branch` or `tag` +- `sub` (String) Sub roughly uniquely identifies the workload. Example: `project_path:mygroup/my-project:ref_type:branch:ref:main` project_path:GROUP/PROJECT:ref_type:TYPE:ref:BRANCH_NAME This field supports "glob-style" matching: - Use '*' to match zero or more characters. - Use '?' to match any single character. +- `user_email` (String) UserEmail is the email of the user executing the job +- `user_id` (String) UserID is the ID of the user executing the job +- `user_login` (String) UserLogin is the username of the user executing the job + + + +### Nested Schema for `spec.kubernetes` + +Optional: + +- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. 
(see [below for nested schema](#nested-schema-for-speckubernetesallow)) +- `oidc` (Attributes) OIDCConfig configures the `oidc` type. (see [below for nested schema](#nested-schema-for-speckubernetesoidc)) +- `static_jwks` (Attributes) StaticJWKS is the configuration specific to the `static_jwks` type. (see [below for nested schema](#nested-schema-for-speckubernetesstatic_jwks)) +- `type` (String) Type controls which behavior should be used for validating the Kubernetes Service Account token. Supported values: - `in_cluster` - `static_jwks` - `oidc` If unset, this defaults to `in_cluster`. + +### Nested Schema for `spec.kubernetes.allow` + +Optional: + +- `service_account` (String) ServiceAccount is the namespaced name of the Kubernetes service account. Its format is "namespace:service-account". + + +### Nested Schema for `spec.kubernetes.oidc` + +Optional: + +- `insecure_allow_http_issuer` (Boolean) InsecureAllowHTTPIssuer is a flag that, if set, disables the requirement that the issuer must use HTTPS. +- `issuer` (String) Issuer is the URI of the OIDC issuer. It must have an accessible and OIDC-compliant `/.well-known/openid-configuration` endpoint. This should be a valid URL and must exactly match the `issuer` field in a service account JWT. For example: https://oidc.eks.us-west-2.amazonaws.com/id/12345... + + +### Nested Schema for `spec.kubernetes.static_jwks` + +Optional: + +- `jwks` (String) JWKS should be the JSON Web Key Set formatted public keys that the Kubernetes Cluster uses to sign service account tokens. This can be fetched from /openid/v1/jwks on the Kubernetes API Server. + + + +### Nested Schema for `spec.oracle` + +Optional: + +- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token.
(see [below for nested schema](#nested-schema-for-specoracleallow)) + +### Nested Schema for `spec.oracle.allow` + +Optional: + +- `instances` (List of String) Instances is a list of the OCIDs of specific instances that are allowed to join. If empty, any instance matching the other fields in the rule is allowed. Limited to 100 instance OCIDs per rule. +- `parent_compartments` (List of String) ParentCompartments is a list of the OCIDs of compartments an instance is allowed to join from. Only direct parents are allowed, i.e. no nested compartments. If empty, any compartment is allowed. +- `regions` (List of String) Regions is a list of regions an instance is allowed to join from. Both full region names ("us-phoenix-1") and abbreviations ("phx") are allowed. If empty, any region is allowed. +- `tenancy` (String) Tenancy is the OCID of the instance's tenancy. Required. + + + +### Nested Schema for `spec.spacelift` + +Optional: + +- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specspaceliftallow)) +- `enable_glob_matching` (Boolean) EnableGlobMatching enables glob-style matching for the space_id and caller_id fields in the rules. +- `hostname` (String) Hostname is the hostname of the Spacelift tenant that tokens will originate from, e.g. `example.app.spacelift.io`. + +### Nested Schema for `spec.spacelift.allow` + +Optional: + +- `caller_id` (String) CallerID is the ID of the caller, i.e. the stack or module that generated the run. This field supports "glob-style" matching when enable_glob_matching is true: - Use '*' to match zero or more characters. - Use '?' to match any single character. +- `caller_type` (String) CallerType is the type of the caller, i.e. the entity that owns the run - either `stack` or `module`. +- `scope` (String) Scope is the scope of the token - either `read` or `write`.
See https://docs.spacelift.io/integrations/cloud-providers/oidc/#about-scopes +- `space_id` (String) SpaceID is the ID of the space in which the run that owns the token was executed. This field supports "glob-style" matching when enable_glob_matching is true: - Use '*' to match zero or more characters. - Use '?' to match any single character. + + + +### Nested Schema for `spec.terraform_cloud` + +Optional: + +- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specterraform_cloudallow)) +- `audience` (String) Audience is the JWT audience as configured in the TFC_WORKLOAD_IDENTITY_AUDIENCE(_$TAG) variable in Terraform Cloud. If unset, defaults to the Teleport cluster name. For example, if `TFC_WORKLOAD_IDENTITY_AUDIENCE_TELEPORT=foo` is set in Terraform Cloud, this value should be `foo`. If the variable is set to match the cluster name, it does not need to be set here. +- `hostname` (String) Hostname is the hostname of the Terraform Enterprise instance expected to issue JWTs allowed by this token. This may be unset for regular Terraform Cloud use, in which case it will be assumed to be `app.terraform.io`. Otherwise, it must both match the `iss` (issuer) field included in JWTs, and provide standard JWKS endpoints. + +### Nested Schema for `spec.terraform_cloud.allow` + +Optional: + +- `organization_id` (String) OrganizationID is the ID of the HCP Terraform organization. At least one organization value is required, either ID or name. +- `organization_name` (String) OrganizationName is the human-readable name of the HCP Terraform organization. At least one organization value is required, either ID or name. +- `project_id` (String) ProjectID is the ID of the HCP Terraform project. At least one project or workspace value is required, either ID or name. +- `project_name` (String) ProjectName is the human-readable name for the HCP Terraform project. 
At least one project or workspace value is required, either ID or name. +- `run_phase` (String) RunPhase is the phase of the run the token was issued for, e.g. `plan` or `apply`. +- `workspace_id` (String) WorkspaceID is the ID of the HCP Terraform workspace. At least one project or workspace value is required, either ID or name. +- `workspace_name` (String) WorkspaceName is the human-readable name of the HCP Terraform workspace. At least one project or workspace value is required, either ID or name. + + + +### Nested Schema for `spec.tpm` + +Optional: + +- `allow` (Attributes List) Allow is a list of Rules, the presented delegated identity must match one allow rule to permit joining. (see [below for nested schema](#nested-schema-for-spectpmallow)) +- `ekcert_allowed_cas` (List of String) EKCertAllowedCAs is a list of CA certificates that will be used to validate TPM EKCerts. When specified, joining TPMs must present an EKCert signed by one of the specified CAs. TPMs that do not present an EKCert will not be permitted to join. When unspecified, TPMs will be allowed to join with either an EKCert or an EKPubHash. + +### Nested Schema for `spec.tpm.allow` + +Optional: + +- `description` (String) Description is a human-readable description of the rule. It has no bearing on whether or not a TPM is allowed to join, but can be used to associate a rule with a specific host (e.g. the asset tag of the server in which the TPM resides). Example: "build-server-100" +- `ek_certificate_serial` (String) EKCertificateSerial is the serial number of the EKCert in hexadecimal with colon separated nibbles. This value will not be checked when a TPM does not have an EKCert configured. Example: 73:df:dc:bd:af:ef:8a:d8:15:2e:96:71:7a:3e:7f:a4 +- `ek_public_hash` (String) EKPublicHash is the SHA256 hash of the EKPub marshaled in PKIX format and encoded in hexadecimal.
This value will also be checked when a TPM has submitted an EKCert, and the public key in the EKCert will be used for this check. Example: d4b45864d9d6fabfc568d74f26c35ababde2105337d7af9a6605e1c56c891aa6 + + + + +### Nested Schema for `metadata` + +Optional: + +- `description` (String) Description is object description +- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. +- `labels` (Map of String) Labels is a set of labels +- `name` (String, Sensitive) Name is an object name + + +### Nested Schema for `status` + +Optional: + +- `bound_keypair` (Attributes) BoundKeypair contains status information related to bound_keypair type tokens. (see [below for nested schema](#nested-schema-for-statusbound_keypair)) + +### Nested Schema for `status.bound_keypair` + +Optional: + +- `bound_bot_instance_id` (String) BoundBotInstanceID is the ID of the currently associated bot instance. A new bot instance is issued on each join; the new bot instance will have a `previous_bot_instance` set to this value, if any. +- `bound_public_key` (String) BoundPublicKey contains the currently bound public key. If `.spec.bound_keypair.onboarding.initial_public_key` is set, that value will be copied here on creation, otherwise it will be populated as part of the public key registration process. This value will be updated over time if keypair rotation takes place, and will always reflect the currently trusted public key. This value is written in SSH authorized_keys format. +- `last_recovered_at` (String) LastRecoveredAt contains a timestamp of the last successful recovery attempt. Note that normal renewals with valid client certificates do not count as a recovery attempt, however the initial join during onboarding does. This corresponds with the last time `bound_bot_instance_id` was updated. +- `last_rotated_at` (String) LastRotatedAt contains a timestamp of the last time the keypair was rotated, if any. This is not set at initial join.
+- `recovery_count` (Number) RecoveryCount is a count of the total number of recoveries performed using this token. It is incremented for every successful join or rejoin. Recovery is only allowed if this value is less than `.spec.bound_keypair.recovery.limit`, or if the recovery mode is `relaxed` or `insecure`. +- `registration_secret` (String) RegistrationSecret contains a secret value that may be used for public key registration during the initial join process if no public key is preregistered. If `.spec.bound_keypair.onboarding.initial_public_key` is set, this field will remain empty. Otherwise, if `.spec.bound_keypair.onboarding.registration_secret` is set, that value will be copied here. If that field is unset, a value will be randomly generated. diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/resources.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/resources.mdx new file mode 100644 index 0000000000000..c11704e2c0046 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/resources.mdx @@ -0,0 +1,45 @@ +--- +title: "Terraform resources index" +description: "Index of all the resources supported by the Teleport Terraform Provider" +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +{/* + This file will be renamed data-sources.mdx during build time. + The template name is reserved by tfplugindocs so we suffix with -index.
+*/} + +The Teleport Terraform provider supports the following resources: + + - [`teleport_access_list`](./access_list.mdx) + - [`teleport_access_list_member`](./access_list_member.mdx) + - [`teleport_access_monitoring_rule`](./access_monitoring_rule.mdx) + - [`teleport_app`](./app.mdx) + - [`teleport_auth_preference`](./auth_preference.mdx) + - [`teleport_autoupdate_config`](./autoupdate_config.mdx) + - [`teleport_autoupdate_version`](./autoupdate_version.mdx) + - [`teleport_bot`](./bot.mdx) + - [`teleport_cluster_maintenance_config`](./cluster_maintenance_config.mdx) + - [`teleport_cluster_networking_config`](./cluster_networking_config.mdx) + - [`teleport_database`](./database.mdx) + - [`teleport_discovery_config`](./discovery_config.mdx) + - [`teleport_dynamic_windows_desktop`](./dynamic_windows_desktop.mdx) + - [`teleport_github_connector`](./github_connector.mdx) + - [`teleport_health_check_config`](./health_check_config.mdx) + - [`teleport_installer`](./installer.mdx) + - [`teleport_integration`](./integration.mdx) + - [`teleport_login_rule`](./login_rule.mdx) + - [`teleport_oidc_connector`](./oidc_connector.mdx) + - [`teleport_okta_import_rule`](./okta_import_rule.mdx) + - [`teleport_provision_token`](./provision_token.mdx) + - [`teleport_role`](./role.mdx) + - [`teleport_saml_connector`](./saml_connector.mdx) + - [`teleport_server`](./server.mdx) + - [`teleport_session_recording_config`](./session_recording_config.mdx) + - [`teleport_static_host_user`](./static_host_user.mdx) + - [`teleport_trusted_cluster`](./trusted_cluster.mdx) + - [`teleport_trusted_device`](./trusted_device.mdx) + - [`teleport_user`](./user.mdx) + - [`teleport_workload_identity`](./workload_identity.mdx) diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/role.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/role.mdx new file mode 100644 index 0000000000000..2aea4f386198f --- /dev/null +++ 
b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/role.mdx @@ -0,0 +1,618 @@ +--- +title: Reference for the teleport_role Terraform resource +sidebar_label: role +description: This page describes the supported values of the teleport_role resource of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the teleport_role resource of the Teleport Terraform provider. + + + + +## Example Usage + +```hcl +# Teleport Role resource + +resource "teleport_role" "example" { + version = "v8" + metadata = { + name = "example" + description = "Example Teleport Role" + expires = "2022-10-12T07:20:51Z" + labels = { + example = "yes" + } + } + + spec = { + options = { + forward_agent = false + max_session_ttl = "7m" + ssh_port_forwarding = { + remote = { + enabled = false + } + + local = { + enabled = false + } + } + client_idle_timeout = "1h" + disconnect_expired_cert = true + permit_x11_forwarding = false + request_access = "optional" + } + + allow = { + logins = ["example"] + + rules = [{ + resources = ["user", "role"] + verbs = ["list"] + }] + + request = { + roles = ["example"] + claims_to_roles = [{ + claim = "example" + value = "example" + roles = ["example"] + }] + } + + node_labels = { + example = ["yes"] + } + } + + deny = { + logins = ["anonymous"] + } + } +} +``` + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `version` (String) Version is the resource version. It must be specified. Supported values are: `v3`, `v4`, `v5`, `v6`, `v7`, `v8`. 
+ +### Optional + +- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) +- `spec` (Attributes) Spec is a role specification (see [below for nested schema](#nested-schema-for-spec)) +- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources + +### Nested Schema for `metadata` + +Required: + +- `name` (String) Name is an object name + +Optional: + +- `description` (String) Description is object description +- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. +- `labels` (Map of String) Labels is a set of labels + + +### Nested Schema for `spec` + +Optional: + +- `allow` (Attributes) Allow is the set of conditions evaluated to grant access. (see [below for nested schema](#nested-schema-for-specallow)) +- `deny` (Attributes) Deny is the set of conditions evaluated to deny access. Deny takes priority over allow. (see [below for nested schema](#nested-schema-for-specdeny)) +- `options` (Attributes) Options is for OpenSSH options like agent forwarding. (see [below for nested schema](#nested-schema-for-specoptions)) + +### Nested Schema for `spec.allow` + +Optional: + +- `account_assignments` (Attributes List) AccountAssignments holds the list of account assignments affected by this condition. (see [below for nested schema](#nested-schema-for-specallowaccount_assignments)) +- `app_labels` (Map of List of String) AppLabels is a map of labels used as part of the RBAC system. +- `app_labels_expression` (String) AppLabelsExpression is a predicate expression used to allow/deny access to Apps. +- `aws_role_arns` (List of String) AWSRoleARNs is a list of AWS role ARNs this role is allowed to assume. +- `azure_identities` (List of String) AzureIdentities is a list of Azure identities this role is allowed to assume.
+- `cluster_labels` (Map of List of String) ClusterLabels is a map of node labels (used to dynamically grant access to clusters). +- `cluster_labels_expression` (String) ClusterLabelsExpression is a predicate expression used to allow/deny access to remote Teleport clusters. +- `db_labels` (Map of List of String) DatabaseLabels are used in the RBAC system to allow/deny access to databases. +- `db_labels_expression` (String) DatabaseLabelsExpression is a predicate expression used to allow/deny access to Databases. +- `db_names` (List of String) DatabaseNames is a list of database names this role is allowed to connect to. +- `db_permissions` (Attributes List) DatabasePermissions specifies a set of permissions that will be granted to the database user when using automatic database user provisioning. (see [below for nested schema](#nested-schema-for-specallowdb_permissions)) +- `db_roles` (List of String) DatabaseRoles is a list of database roles for automatic user creation. +- `db_service_labels` (Map of List of String) DatabaseServiceLabels are used in the RBAC system to allow/deny access to Database Services. +- `db_service_labels_expression` (String) DatabaseServiceLabelsExpression is a predicate expression used to allow/deny access to Database Services. +- `db_users` (List of String) DatabaseUsers is a list of database users this role is allowed to connect as. +- `desktop_groups` (List of String) DesktopGroups is a list of groups for created desktop users to be added to +- `gcp_service_accounts` (List of String) GCPServiceAccounts is a list of GCP service accounts this role is allowed to assume. +- `github_permissions` (Attributes List) GitHubPermissions defines GitHub integration related permissions. (see [below for nested schema](#nested-schema-for-specallowgithub_permissions)) +- `group_labels` (Map of List of String) GroupLabels is a map of labels used as part of the RBAC system.
+- `group_labels_expression` (String) GroupLabelsExpression is a predicate expression used to allow/deny access to user groups. +- `host_groups` (List of String) HostGroups is a list of groups for created users to be added to +- `host_sudoers` (List of String) HostSudoers is a list of entries to include in a user's sudoers file +- `impersonate` (Attributes) Impersonate specifies what users and roles this role is allowed to impersonate by issuing certificates or other possible means. (see [below for nested schema](#nested-schema-for-specallowimpersonate)) +- `join_sessions` (Attributes List) JoinSessions specifies policies to allow users to join other sessions. (see [below for nested schema](#nested-schema-for-specallowjoin_sessions)) +- `kubernetes_groups` (List of String) KubeGroups is a list of kubernetes groups +- `kubernetes_labels` (Map of List of String) KubernetesLabels is a map of kubernetes cluster labels used for RBAC. +- `kubernetes_labels_expression` (String) KubernetesLabelsExpression is a predicate expression used to allow/deny access to kubernetes clusters. +- `kubernetes_resources` (Attributes List) KubernetesResources is the Kubernetes Resources this Role grants access to. (see [below for nested schema](#nested-schema-for-specallowkubernetes_resources)) +- `kubernetes_users` (List of String) KubeUsers is an optional list of kubernetes users to impersonate +- `logins` (List of String) Logins is a list of *nix system logins. +- `mcp` (Attributes) MCPPermissions defines MCP servers related permissions. (see [below for nested schema](#nested-schema-for-specallowmcp)) +- `node_labels` (Map of List of String) NodeLabels is a map of node labels (used to dynamically grant access to nodes). +- `node_labels_expression` (String) NodeLabelsExpression is a predicate expression used to allow/deny access to SSH nodes.
+- `request` (Attributes) (see [below for nested schema](#nested-schema-for-specallowrequest)) +- `require_session_join` (Attributes List) RequireSessionJoin specifies policies for required users to start a session. (see [below for nested schema](#nested-schema-for-specallowrequire_session_join)) +- `review_requests` (Attributes) ReviewRequests defines conditions for submitting access reviews. (see [below for nested schema](#nested-schema-for-specallowreview_requests)) +- `rules` (Attributes List) Rules is a list of rules and their access levels. Rules are a high level construct used for access control. (see [below for nested schema](#nested-schema-for-specallowrules)) +- `spiffe` (Attributes List) SPIFFE is used to allow or deny a role holder the ability to generate a SPIFFE SVID. (see [below for nested schema](#nested-schema-for-specallowspiffe)) +- `windows_desktop_labels` (Map of List of String) WindowsDesktopLabels are used in the RBAC system to allow/deny access to Windows desktops. +- `windows_desktop_labels_expression` (String) WindowsDesktopLabelsExpression is a predicate expression used to allow/deny access to Windows desktops. +- `windows_desktop_logins` (List of String) WindowsDesktopLogins is a list of desktop login names allowed/denied for Windows desktops. +- `workload_identity_labels` (Map of List of String) WorkloadIdentityLabels controls whether or not specific WorkloadIdentity resources can be invoked. Further authorization controls exist on the WorkloadIdentity resource itself. +- `workload_identity_labels_expression` (String) WorkloadIdentityLabelsExpression is a predicate expression used to allow/deny access to issuing a WorkloadIdentity.
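The label-map fields and their `_expression` counterparts above are alternative ways to scope access. As a minimal sketch (the role name, login, and label values are illustrative, and the predicate assumes Teleport's standard label-expression syntax):

```hcl
# Hypothetical role: a label map scopes SSH node access, while a
# predicate expression scopes app access. All values are illustrative.
resource "teleport_role" "label_sketch" {
  version = "v8"
  metadata = {
    name = "label-sketch"
  }
  spec = {
    allow = {
      logins = ["ubuntu"]
      # Map form: access to nodes labeled env=staging.
      node_labels = {
        env = ["staging"]
      }
      # Expression form: an equivalent predicate for apps.
      app_labels_expression = "labels[\"env\"] == \"staging\""
    }
  }
}
```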
+ +### Nested Schema for `spec.allow.account_assignments` + +Optional: + +- `account` (String) +- `permission_set` (String) + + +### Nested Schema for `spec.allow.db_permissions` + +Optional: + +- `match` (Map of List of String) Match is a list of object labels that must be matched for the permission to be granted. +- `permissions` (List of String) Permission is the list of string representations of the permission to be given, e.g. SELECT, INSERT, UPDATE, ... + + +### Nested Schema for `spec.allow.github_permissions` + +Optional: + +- `orgs` (List of String) + + +### Nested Schema for `spec.allow.impersonate` + +Optional: + +- `roles` (List of String) Roles is a list of resources this role is allowed to impersonate +- `users` (List of String) Users is a list of resources this role is allowed to impersonate, could be an empty list or a Wildcard pattern +- `where` (String) Where specifies optional advanced matcher + + +### Nested Schema for `spec.allow.join_sessions` + +Optional: + +- `kinds` (List of String) Kinds are the session kinds this policy applies to. +- `modes` (List of String) Modes is a list of permitted participant modes for this policy. +- `name` (String) Name is the name of the policy. +- `roles` (List of String) Roles is a list of roles that you can join the session of. + + +### Nested Schema for `spec.allow.kubernetes_resources` + +Optional: + +- `api_group` (String) APIGroup specifies the Kubernetes API group of the Kubernetes resource. It supports wildcards. +- `kind` (String) Kind specifies the Kubernetes Resource type. +- `name` (String) Name is the resource name. It supports wildcards. +- `namespace` (String) Namespace is the resource namespace. It supports wildcards. +- `verbs` (List of String) Verbs are the allowed Kubernetes verbs for the following resource. + + +### Nested Schema for `spec.allow.mcp` + +Optional: + +- `tools` (List of String) Tools defines the list of tools allowed or denied for this role. 
Each entry can be a literal string, a glob pattern (e.g. "prefix_*"), or a regular expression (must start with '^' and end with '$'). If the list is empty, no tools are allowed. + + +### Nested Schema for `spec.allow.request` + +Optional: + +- `annotations` (Map of List of String) Annotations is a collection of annotations to be programmatically appended to pending Access Requests at the time of their creation. These annotations serve as a mechanism to propagate extra information to plugins. Since these annotations support variable interpolation syntax, they also offer a mechanism for forwarding claims from an external identity provider to a plugin via `{{external.trait_name}}` style substitutions. +- `claims_to_roles` (Attributes List) ClaimsToRoles specifies a mapping from claims (traits) to teleport roles. (see [below for nested schema](#nested-schema-for-specallowrequestclaims_to_roles)) +- `kubernetes_resources` (Attributes List) kubernetes_resources can optionally restrict a requester to requesting only certain kinds of kube resources. E.g. users can request either the resource kind "kube_cluster" or any of its subresources, like "namespaces". This field can be defined such that it prevents a user from requesting "kube_cluster" and instead enforces requesting any of its subresources. (see [below for nested schema](#nested-schema-for-specallowrequestkubernetes_resources)) +- `max_duration` (String) MaxDuration is the amount of time the access will be granted for. If this is zero, the default duration is used. +- `reason` (Attributes) Reason defines settings for the reason for the access provided by the user. (see [below for nested schema](#nested-schema-for-specallowrequestreason)) +- `roles` (List of String) Roles is the name of roles which will match the request rule.
+- `search_as_roles` (List of String) SearchAsRoles is a list of extra roles which should apply to a user while they are searching for resources as part of a Resource Access Request, and defines the underlying roles which will be requested as part of any Resource Access Request. +- `suggested_reviewers` (List of String) SuggestedReviewers is a list of reviewer suggestions. These can be teleport usernames, but that is not a requirement. +- `thresholds` (Attributes List) Thresholds is a list of thresholds, one of which must be met in order for reviews to trigger a state-transition. If no thresholds are provided, a default threshold of 1 for approval and denial is used. (see [below for nested schema](#nested-schema-for-specallowrequestthresholds)) + +### Nested Schema for `spec.allow.request.claims_to_roles` + +Optional: + +- `claim` (String) Claim is a claim name. +- `roles` (List of String) Roles is a list of static teleport roles to match. +- `value` (String) Value is a claim value to match. + + +### Nested Schema for `spec.allow.request.kubernetes_resources` + +Optional: + +- `api_group` (String) APIGroup specifies the Kubernetes Resource API group. +- `kind` (String) kind specifies the Kubernetes Resource type. + + +### Nested Schema for `spec.allow.request.reason` + +Optional: + +- `mode` (String) Mode can be either "required" or "optional". Empty string is treated as "optional". If a role has the request reason mode set to "required", then reason is required for all Access Requests requesting roles or resources allowed by this role. It applies only to users who have this role assigned. +- `prompt` (String) Prompt is a custom message prompted to the user for the requested roles or resources searchable as other roles. This is only applied to the requested roles and resources specifying the prompt. + + +### Nested Schema for `spec.allow.request.thresholds` + +Optional: + +- `approve` (Number) Approve is the number of matching approvals needed for state-transition. 
+- `deny` (Number) Deny is the number of denials needed for state-transition. +- `filter` (String) Filter is an optional predicate used to determine which reviews count toward this threshold. +- `name` (String) Name is the optional human-readable name of the threshold. + + + +### Nested Schema for `spec.allow.require_session_join` + +Optional: + +- `count` (Number) Count is the number of people that need to be matched for this policy to be fulfilled. +- `filter` (String) Filter is a predicate that determines what users count towards this policy. +- `kinds` (List of String) Kinds are the session kinds this policy applies to. +- `modes` (List of String) Modes is the list of modes that may be used to fulfill this policy. +- `name` (String) Name is the name of the policy. +- `on_leave` (String) OnLeave is the behaviour that's used when the policy is no longer fulfilled for a live session. + + +### Nested Schema for `spec.allow.review_requests` + +Optional: + +- `claims_to_roles` (Attributes List) ClaimsToRoles specifies a mapping from claims (traits) to teleport roles. (see [below for nested schema](#nested-schema-for-specallowreview_requestsclaims_to_roles)) +- `preview_as_roles` (List of String) PreviewAsRoles is a list of extra roles which should apply to a reviewer while they are viewing a Resource Access Request for the purposes of viewing details such as the hostname and labels of requested resources. +- `roles` (List of String) Roles is the name of roles which may be reviewed. +- `where` (String) Where is an optional predicate which further limits which requests are reviewable. + +### Nested Schema for `spec.allow.review_requests.claims_to_roles` + +Optional: + +- `claim` (String) Claim is a claim name. +- `roles` (List of String) Roles is a list of static teleport roles to match. +- `value` (String) Value is a claim value to match.
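The `request` thresholds and `review_requests` settings described above can be combined in a single role. A hedged sketch (the role names, claim, and threshold values are illustrative, not taken from this reference):

```hcl
# Hypothetical requester/reviewer role. All names and values are
# illustrative only.
resource "teleport_role" "request_sketch" {
  version = "v8"
  metadata = {
    name = "request-sketch"
  }
  spec = {
    allow = {
      request = {
        roles = ["dbadmin"]
        # One matching threshold must be met for a state transition.
        thresholds = [{
          name    = "two-approvals"
          approve = 2
          deny    = 1
        }]
      }
      review_requests = {
        roles = ["dbadmin"]
        # Map an IdP claim to the roles a reviewer may hold.
        claims_to_roles = [{
          claim = "groups"
          value = "security-team"
          roles = ["reviewer"]
        }]
      }
    }
  }
}
```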
+ + + +### Nested Schema for `spec.allow.rules` + +Optional: + +- `actions` (List of String) Actions specifies optional actions taken when this rule matches +- `resources` (List of String) Resources is a list of resources +- `verbs` (List of String) Verbs is a list of verbs +- `where` (String) Where specifies optional advanced matcher + + +### Nested Schema for `spec.allow.spiffe` + +Optional: + +- `dns_sans` (List of String) DNSSANs specifies matchers for the SPIFFE ID DNS SANs. Each requested DNS SAN is compared against all matchers configured and if any match, the condition is considered to be met. The matcher by default allows '*' to be used to indicate zero or more of any character. Prepend '^' and append '$' to instead switch to matching using the Go regex syntax. Example: *.example.com would match foo.example.com +- `ip_sans` (List of String) IPSANs specifies matchers for the SPIFFE ID IP SANs. Each requested IP SAN is compared against all matchers configured and if any match, the condition is considered to be met. The matchers should be specified using CIDR notation, it supports IPv4 and IPv6. Examples: - 10.0.0.0/24 would match 10.0.0.0 to 10.255.255.255 - 10.0.0.42/32 would match only 10.0.0.42 +- `path` (String) Path specifies a matcher for the SPIFFE ID path. It should not include the trust domain and should start with a leading slash. The matcher by default allows '*' to be used to indicate zero or more of any character. Prepend '^' and append '$' to instead switch to matching using the Go regex syntax. Example: - /svc/foo/*/bar would match /svc/foo/baz/bar - ^\/svc\/foo\/.*\/bar$ would match /svc/foo/baz/bar + + + +### Nested Schema for `spec.deny` + +Optional: + +- `account_assignments` (Attributes List) AccountAssignments holds the list of account assignments affected by this condition. 
(see [below for nested schema](#nested-schema-for-specdenyaccount_assignments)) +- `app_labels` (Map of List of String) AppLabels is a map of labels used as part of the RBAC system. +- `app_labels_expression` (String) AppLabelsExpression is a predicate expression used to allow/deny access to Apps. +- `aws_role_arns` (List of String) AWSRoleARNs is a list of AWS role ARNs this role is allowed to assume. +- `azure_identities` (List of String) AzureIdentities is a list of Azure identities this role is allowed to assume. +- `cluster_labels` (Map of List of String) ClusterLabels is a map of node labels (used to dynamically grant access to clusters). +- `cluster_labels_expression` (String) ClusterLabelsExpression is a predicate expression used to allow/deny access to remote Teleport clusters. +- `db_labels` (Map of List of String) DatabaseLabels are used in the RBAC system to allow/deny access to databases. +- `db_labels_expression` (String) DatabaseLabelsExpression is a predicate expression used to allow/deny access to Databases. +- `db_names` (List of String) DatabaseNames is a list of database names this role is allowed to connect to. +- `db_permissions` (Attributes List) DatabasePermissions specifies a set of permissions that will be granted to the database user when using automatic database user provisioning. (see [below for nested schema](#nested-schema-for-specdenydb_permissions)) +- `db_roles` (List of String) DatabaseRoles is a list of database roles for automatic user creation. +- `db_service_labels` (Map of List of String) DatabaseServiceLabels are used in the RBAC system to allow/deny access to Database Services. +- `db_service_labels_expression` (String) DatabaseServiceLabelsExpression is a predicate expression used to allow/deny access to Database Services. +- `db_users` (List of String) DatabaseUsers is a list of database users this role is allowed to connect as.
+- `desktop_groups` (List of String) DesktopGroups is a list of groups for created desktop users to be added to. +- `gcp_service_accounts` (List of String) GCPServiceAccounts is a list of GCP service accounts this role is allowed to assume. +- `github_permissions` (Attributes List) GitHubPermissions defines GitHub integration related permissions. (see [below for nested schema](#nested-schema-for-specdenygithub_permissions)) +- `group_labels` (Map of List of String) GroupLabels is a map of labels used as part of the RBAC system. +- `group_labels_expression` (String) GroupLabelsExpression is a predicate expression used to allow/deny access to user groups. +- `host_groups` (List of String) HostGroups is a list of groups for created users to be added to. +- `host_sudoers` (List of String) HostSudoers is a list of entries to include in a user's sudoers file. +- `impersonate` (Attributes) Impersonate specifies what users and roles this role is allowed to impersonate by issuing certificates or other possible means. (see [below for nested schema](#nested-schema-for-specdenyimpersonate)) +- `join_sessions` (Attributes List) JoinSessions specifies policies to allow users to join other sessions. (see [below for nested schema](#nested-schema-for-specdenyjoin_sessions)) +- `kubernetes_groups` (List of String) KubeGroups is a list of Kubernetes groups. +- `kubernetes_labels` (Map of List of String) KubernetesLabels is a map of Kubernetes cluster labels used for RBAC. +- `kubernetes_labels_expression` (String) KubernetesLabelsExpression is a predicate expression used to allow/deny access to Kubernetes clusters. +- `kubernetes_resources` (Attributes List) KubernetesResources is the Kubernetes Resources this Role grants access to. (see [below for nested schema](#nested-schema-for-specdenykubernetes_resources)) +- `kubernetes_users` (List of String) KubeUsers is an optional list of Kubernetes users to impersonate. +- `logins` (List of String) Logins is a list of *nix system logins.
+- `mcp` (Attributes) MCPPermissions defines MCP server-related permissions. (see [below for nested schema](#nested-schema-for-specdenymcp)) +- `node_labels` (Map of List of String) NodeLabels is a map of node labels (used to dynamically grant access to nodes). +- `node_labels_expression` (String) NodeLabelsExpression is a predicate expression used to allow/deny access to SSH nodes. +- `request` (Attributes) (see [below for nested schema](#nested-schema-for-specdenyrequest)) +- `require_session_join` (Attributes List) RequireSessionJoin specifies policies for required users to start a session. (see [below for nested schema](#nested-schema-for-specdenyrequire_session_join)) +- `review_requests` (Attributes) ReviewRequests defines conditions for submitting access reviews. (see [below for nested schema](#nested-schema-for-specdenyreview_requests)) +- `rules` (Attributes List) Rules is a list of rules and their access levels. Rules are a high level construct used for access control. (see [below for nested schema](#nested-schema-for-specdenyrules)) +- `spiffe` (Attributes List) SPIFFE is used to allow or deny a role holder the ability to generate a SPIFFE SVID. (see [below for nested schema](#nested-schema-for-specdenyspiffe)) +- `windows_desktop_labels` (Map of List of String) WindowsDesktopLabels are used in the RBAC system to allow/deny access to Windows desktops. +- `windows_desktop_labels_expression` (String) WindowsDesktopLabelsExpression is a predicate expression used to allow/deny access to Windows desktops. +- `windows_desktop_logins` (List of String) WindowsDesktopLogins is a list of desktop login names allowed/denied for Windows desktops. +- `workload_identity_labels` (Map of List of String) WorkloadIdentityLabels controls whether or not specific WorkloadIdentity resources can be invoked. Further authorization controls exist on the WorkloadIdentity resource itself.
+- `workload_identity_labels_expression` (String) WorkloadIdentityLabelsExpression is a predicate expression used to allow/deny access to issuing a WorkloadIdentity. + +### Nested Schema for `spec.deny.account_assignments` + +Optional: + +- `account` (String) +- `permission_set` (String) + + +### Nested Schema for `spec.deny.db_permissions` + +Optional: + +- `match` (Map of List of String) Match is a list of object labels that must be matched for the permission to be granted. +- `permissions` (List of String) Permission is the list of string representations of the permission to be given, e.g. SELECT, INSERT, UPDATE, ... + + +### Nested Schema for `spec.deny.github_permissions` + +Optional: + +- `orgs` (List of String) + + +### Nested Schema for `spec.deny.impersonate` + +Optional: + +- `roles` (List of String) Roles is a list of resources this role is allowed to impersonate +- `users` (List of String) Users is a list of resources this role is allowed to impersonate, could be an empty list or a Wildcard pattern +- `where` (String) Where specifies optional advanced matcher + + +### Nested Schema for `spec.deny.join_sessions` + +Optional: + +- `kinds` (List of String) Kinds are the session kinds this policy applies to. +- `modes` (List of String) Modes is a list of permitted participant modes for this policy. +- `name` (String) Name is the name of the policy. +- `roles` (List of String) Roles is a list of roles that you can join the session of. + + +### Nested Schema for `spec.deny.kubernetes_resources` + +Optional: + +- `api_group` (String) APIGroup specifies the Kubernetes API group of the Kubernetes resource. It supports wildcards. +- `kind` (String) Kind specifies the Kubernetes Resource type. +- `name` (String) Name is the resource name. It supports wildcards. +- `namespace` (String) Namespace is the resource namespace. It supports wildcards. +- `verbs` (List of String) Verbs are the allowed Kubernetes verbs for the following resource. 
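The `spec.deny.kubernetes_resources` fields above combine into deny rules such as the following sketch, which blocks access to Kubernetes secrets in every namespace. The role name, `version` value, and `kind` spelling are illustrative assumptions, not values taken from this page:

```hcl
# Hypothetical role denying access to Kubernetes secrets cluster-wide.
resource "teleport_role" "deny_kube_secrets" {
  version = "v7"
  metadata = {
    name = "deny-kube-secrets"
  }
  spec = {
    deny = {
      kubernetes_resources = [{
        kind      = "secret"
        name      = "*"    # wildcard: any resource name
        namespace = "*"    # wildcard: any namespace
        verbs     = ["*"]  # wildcard: any Kubernetes verb
      }]
    }
  }
}
```

Because deny rules take precedence over allow rules, a role like this can be layered on top of broader allow policies.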
+ + +### Nested Schema for `spec.deny.mcp` + +Optional: + +- `tools` (List of String) Tools defines the list of tools allowed or denied for this role. Each entry can be a literal string, a glob pattern (e.g. "prefix_*"), or a regular expression (must start with '^' and end with '$'). If the list is empty, no tools are allowed. + + +### Nested Schema for `spec.deny.request` + +Optional: + +- `annotations` (Map of List of String) Annotations is a collection of annotations to be programmatically appended to pending Access Requests at the time of their creation. These annotations serve as a mechanism to propagate extra information to plugins. Since these annotations support variable interpolation syntax, they also offer a mechanism for forwarding claims from an external identity provider to a plugin via `{{external.trait_name}}` style substitutions. +- `claims_to_roles` (Attributes List) ClaimsToRoles specifies a mapping from claims (traits) to teleport roles. (see [below for nested schema](#nested-schema-for-specdenyrequestclaims_to_roles)) +- `kubernetes_resources` (Attributes List) kubernetes_resources can optionally restrict a requester to only certain kinds of Kubernetes resources. For example, users can request either the resource kind "kube_cluster" or any of its subresources, like "namespaces". This field can be defined so that it prevents a user from requesting "kube_cluster" and enforces requesting any of its subresources. (see [below for nested schema](#nested-schema-for-specdenyrequestkubernetes_resources)) +- `max_duration` (String) MaxDuration is the amount of time the access will be granted for. If this is zero, the default duration is used. +- `reason` (Attributes) Reason defines settings for the reason for the access provided by the user. (see [below for nested schema](#nested-schema-for-specdenyrequestreason)) +- `roles` (List of String) Roles is a list of role names which will match the request rule.
+- `search_as_roles` (List of String) SearchAsRoles is a list of extra roles which should apply to a user while they are searching for resources as part of a Resource Access Request, and defines the underlying roles which will be requested as part of any Resource Access Request. +- `suggested_reviewers` (List of String) SuggestedReviewers is a list of reviewer suggestions. These can be teleport usernames, but that is not a requirement. +- `thresholds` (Attributes List) Thresholds is a list of thresholds, one of which must be met in order for reviews to trigger a state-transition. If no thresholds are provided, a default threshold of 1 for approval and denial is used. (see [below for nested schema](#nested-schema-for-specdenyrequestthresholds)) + +### Nested Schema for `spec.deny.request.claims_to_roles` + +Optional: + +- `claim` (String) Claim is a claim name. +- `roles` (List of String) Roles is a list of static teleport roles to match. +- `value` (String) Value is a claim value to match. + + +### Nested Schema for `spec.deny.request.kubernetes_resources` + +Optional: + +- `api_group` (String) APIGroup specifies the Kubernetes Resource API group. +- `kind` (String) kind specifies the Kubernetes Resource type. + + +### Nested Schema for `spec.deny.request.reason` + +Optional: + +- `mode` (String) Mode can be either "required" or "optional". Empty string is treated as "optional". If a role has the request reason mode set to "required", then reason is required for all Access Requests requesting roles or resources allowed by this role. It applies only to users who have this role assigned. +- `prompt` (String) Prompt is a custom message prompted to the user for the requested roles or resources searchable as other roles. This is only applied to the requested roles and resources specifying the prompt. + + +### Nested Schema for `spec.deny.request.thresholds` + +Optional: + +- `approve` (Number) Approve is the number of matching approvals needed for state-transition. 
+- `deny` (Number) Deny is the number of denials needed for state-transition. +- `filter` (String) Filter is an optional predicate used to determine which reviews count toward this threshold. +- `name` (String) Name is the optional human-readable name of the threshold. + + + +### Nested Schema for `spec.deny.require_session_join` + +Optional: + +- `count` (Number) Count is the number of users that must be matched for this policy to be fulfilled. +- `filter` (String) Filter is a predicate that determines what users count towards this policy. +- `kinds` (List of String) Kinds are the session kinds this policy applies to. +- `modes` (List of String) Modes is the list of modes that may be used to fulfill this policy. +- `name` (String) Name is the name of the policy. +- `on_leave` (String) OnLeave is the behaviour that's used when the policy is no longer fulfilled for a live session. + + +### Nested Schema for `spec.deny.review_requests` + +Optional: + +- `claims_to_roles` (Attributes List) ClaimsToRoles specifies a mapping from claims (traits) to teleport roles. (see [below for nested schema](#nested-schema-for-specdenyreview_requestsclaims_to_roles)) +- `preview_as_roles` (List of String) PreviewAsRoles is a list of extra roles which should apply to a reviewer while they are viewing a Resource Access Request for the purposes of viewing details such as the hostname and labels of requested resources. +- `roles` (List of String) Roles is a list of role names which may be reviewed. +- `where` (String) Where is an optional predicate which further limits which requests are reviewable. + +### Nested Schema for `spec.deny.review_requests.claims_to_roles` + +Optional: + +- `claim` (String) Claim is a claim name. +- `roles` (List of String) Roles is a list of static teleport roles to match. +- `value` (String) Value is a claim value to match.
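As a sketch of how the `require_session_join` fields fit together: in practice this policy is usually configured under `spec.allow`, and the role name, filter predicate, and `on_leave` value below are illustrative assumptions rather than values documented on this page:

```hcl
# Hypothetical role requiring one moderator to be present in SSH sessions.
resource "teleport_role" "moderated_ssh" {
  version = "v7"
  metadata = {
    name = "moderated-ssh"
  }
  spec = {
    allow = {
      require_session_join = [{
        name     = "Require one moderator"
        kinds    = ["ssh"]       # session kinds this policy applies to
        modes    = ["moderator"] # participant modes that fulfill the policy
        count    = 1             # number of matching participants required
        filter   = "contains(user.roles, \"moderator\")" # assumed predicate form
        on_leave = "terminate"   # behaviour when the policy is no longer fulfilled
      }]
    }
  }
}
```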
+ + + +### Nested Schema for `spec.deny.rules` + +Optional: + +- `actions` (List of String) Actions specifies optional actions taken when this rule matches +- `resources` (List of String) Resources is a list of resources +- `verbs` (List of String) Verbs is a list of verbs +- `where` (String) Where specifies optional advanced matcher + + +### Nested Schema for `spec.deny.spiffe` + +Optional: + +- `dns_sans` (List of String) DNSSANs specifies matchers for the SPIFFE ID DNS SANs. Each requested DNS SAN is compared against all matchers configured and if any match, the condition is considered to be met. The matcher by default allows '*' to be used to indicate zero or more of any character. Prepend '^' and append '$' to instead switch to matching using the Go regex syntax. Example: *.example.com would match foo.example.com +- `ip_sans` (List of String) IPSANs specifies matchers for the SPIFFE ID IP SANs. Each requested IP SAN is compared against all matchers configured and if any match, the condition is considered to be met. The matchers should be specified using CIDR notation, it supports IPv4 and IPv6. Examples: - 10.0.0.0/24 would match 10.0.0.0 to 10.255.255.255 - 10.0.0.42/32 would match only 10.0.0.42 +- `path` (String) Path specifies a matcher for the SPIFFE ID path. It should not include the trust domain and should start with a leading slash. The matcher by default allows '*' to be used to indicate zero or more of any character. Prepend '^' and append '$' to instead switch to matching using the Go regex syntax. 
Example: - /svc/foo/*/bar would match /svc/foo/baz/bar - ^\/svc\/foo\/.*\/bar$ would match /svc/foo/baz/bar + + + +### Nested Schema for `spec.options` + +Optional: + +- `cert_extensions` (Attributes List) CertExtensions specifies the key/values (see [below for nested schema](#nested-schema-for-specoptionscert_extensions)) +- `cert_format` (String) CertificateFormat defines the format of the user certificate to allow compatibility with older versions of OpenSSH. +- `client_idle_timeout` (String) ClientIdleTimeout controls whether clients are disconnected after an idle timeout; 0 means never disconnect, otherwise clients are disconnected after the configured idle duration. +- `create_db_user` (Boolean) CreateDatabaseUser enables automatic database user creation. +- `create_db_user_mode` (Number) CreateDatabaseUserMode allows users to be automatically created on a database when not set to off. 0 is "unspecified", 1 is "off", 2 is "keep", 3 is "best_effort_drop". +- `create_desktop_user` (Boolean) CreateDesktopUser allows users to be automatically created on a Windows desktop. +- `create_host_user` (Boolean) Deprecated: use CreateHostUserMode instead. +- `create_host_user_default_shell` (String) CreateHostUserDefaultShell is used to configure the default shell for newly provisioned host users. +- `create_host_user_mode` (Number) CreateHostUserMode allows users to be automatically created on a host when not set to off. 0 is "unspecified"; 1 is "off"; 2 is "drop" (removed for v15 and above); 3 is "keep"; 4 is "insecure-drop". +- `desktop_clipboard` (Boolean) DesktopClipboard indicates whether clipboard sharing is allowed between the user's workstation and the remote desktop. It defaults to true unless explicitly set to false. +- `desktop_directory_sharing` (Boolean) DesktopDirectorySharing indicates whether directory sharing is allowed between the user's workstation and the remote desktop. It defaults to false unless explicitly set to true.
+- `device_trust_mode` (String) DeviceTrustMode is the device authorization mode used for the resources associated with the role. See DeviceTrust.Mode. +- `disconnect_expired_cert` (Boolean) DisconnectExpiredCert sets disconnect clients on expired certificates. +- `enhanced_recording` (List of String) BPF defines what events to record for the BPF-based session recorder. +- `forward_agent` (Boolean) ForwardAgent is SSH agent forwarding. +- `idp` (Attributes) IDP is a set of options related to accessing IdPs within Teleport. Requires Teleport Enterprise. (see [below for nested schema](#nested-schema-for-specoptionsidp)) +- `lock` (String) Lock specifies the locking mode (strict|best_effort) to be applied with the role. +- `max_connections` (Number) MaxConnections defines the maximum number of concurrent connections a user may hold. +- `max_kubernetes_connections` (Number) MaxKubernetesConnections defines the maximum number of concurrent Kubernetes sessions a user may hold. +- `max_session_ttl` (String) MaxSessionTTL defines how long a SSH session can last for. +- `max_sessions` (Number) MaxSessions defines the maximum number of concurrent sessions per connection. +- `mfa_verification_interval` (String) MFAVerificationInterval optionally defines the maximum duration that can elapse between successive MFA verifications. This variable is used to ensure that users are periodically prompted to verify their identity, enhancing security by preventing prolonged sessions without re-authentication when using tsh proxy * derivatives. It's only effective if the session requires MFA. If not set, defaults to `max_session_ttl`. +- `permit_x11_forwarding` (Boolean) PermitX11Forwarding authorizes use of X11 forwarding. 
+- `pin_source_ip` (Boolean) PinSourceIP forces the same client IP for certificate generation and usage. +- `port_forwarding` (Boolean) Deprecated: Use SSHPortForwarding instead. +- `record_session` (Attributes) RecordDesktopSession indicates whether desktop access sessions should be recorded. It defaults to true unless explicitly set to false. (see [below for nested schema](#nested-schema-for-specoptionsrecord_session)) +- `request_access` (String) RequestAccess defines the request strategy (optional|reason|always) where optional is the default. +- `request_prompt` (String) RequestPrompt is an optional message which tells users what they ought to request. +- `require_session_mfa` (Number) RequireMFAType is the type of MFA requirement enforced for this user. 0 is "OFF", 1 is "SESSION", 2 is "SESSION_AND_HARDWARE_KEY", 3 is "HARDWARE_KEY_TOUCH", 4 is "HARDWARE_KEY_PIN", 5 is "HARDWARE_KEY_TOUCH_AND_PIN". +- `ssh_file_copy` (Boolean) SSHFileCopy indicates whether remote file operations via SCP or SFTP are allowed over an SSH session. It defaults to true unless explicitly set to false. +- `ssh_port_forwarding` (Attributes) SSHPortForwarding configures what types of SSH port forwarding are allowed by a role. (see [below for nested schema](#nested-schema-for-specoptionsssh_port_forwarding)) + +### Nested Schema for `spec.options.cert_extensions` + +Optional: + +- `mode` (Number) Mode is the type of extension to be used -- currently critical-option is not supported. 0 is "extension". +- `name` (String) Name specifies the key to be used in the cert extension. +- `type` (Number) Type represents the certificate type being extended, only ssh is supported at this time. 0 is "ssh". +- `value` (String) Value specifies the value to be used in the cert extension. + + +### Nested Schema for `spec.options.idp` + +Optional: + +- `saml` (Attributes) SAML are options related to the Teleport SAML IdP.
(see [below for nested schema](#nested-schema-for-specoptionsidpsaml)) + +### Nested Schema for `spec.options.idp.saml` + +Optional: + +- `enabled` (Boolean) Enabled is set to true if this option allows access to the Teleport SAML IdP. + + + +### Nested Schema for `spec.options.record_session` + +Optional: + +- `default` (String) Default indicates the default value for the services. +- `desktop` (Boolean) Desktop indicates whether desktop sessions should be recorded. It defaults to true unless explicitly set to false. +- `ssh` (String) SSH indicates the session mode used on SSH sessions. + + +### Nested Schema for `spec.options.ssh_port_forwarding` + +Optional: + +- `local` (Attributes) Allow local port forwarding. (see [below for nested schema](#nested-schema-for-specoptionsssh_port_forwardinglocal)) +- `remote` (Attributes) Allow remote port forwarding. (see [below for nested schema](#nested-schema-for-specoptionsssh_port_forwardingremote)) + +### Nested Schema for `spec.options.ssh_port_forwarding.local` + +Optional: + +- `enabled` (Boolean) + + +### Nested Schema for `spec.options.ssh_port_forwarding.remote` + +Optional: + +- `enabled` (Boolean) diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/saml_connector.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/saml_connector.mdx new file mode 100644 index 0000000000000..1cf42043bde1a --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/saml_connector.mdx @@ -0,0 +1,162 @@ +--- +title: Reference for the teleport_saml_connector Terraform resource +sidebar_label: saml_connector +description: This page describes the supported values of the teleport_saml_connector resource of the Teleport Terraform provider. +--- + +{/*Auto-generated file. 
Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the teleport_saml_connector resource of the Teleport Terraform provider. + + + + +## Example Usage + +```hcl +# Teleport SAML connector +# +# Please note that the SAML connector will work in Teleport Enterprise only. + +resource "teleport_saml_connector" "example" { + version = "v2" + # This block tells Terraform never to update the key pairs from our side if the keys are managed + # outside of Terraform. + + # lifecycle { + # ignore_changes = [ + # spec[0].signing_key_pair[0].cert, + # spec[0].signing_key_pair[0].private_key, + # spec[0].assertion_key_pair[0].cert, + # spec[0].assertion_key_pair[0].private_key, + # ] + # } + + # This section tells Terraform that the role "example" must be created before the SAML connector + depends_on = [ + teleport_role.example + ] + + metadata = { + name = "example" + } + + spec = { + attributes_to_roles = [{ + name = "groups" + roles = ["example"] + value = "okta-admin" + }, + { + name = "groups" + roles = ["example"] + value = "okta-dev" + }] + + acs = "https://localhost:3025/v1/webapi/saml/acs" + entity_descriptor = "" + } +} +``` + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `spec` (Attributes) Spec is a SAML connector specification. (see [below for nested schema](#nested-schema-for-spec)) +- `version` (String) Version is the resource version. It must be specified. Supported values are: `v2`. + +### Optional + +- `metadata` (Attributes) Metadata holds resource metadata. (see [below for nested schema](#nested-schema-for-metadata)) +- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources. + +### Nested Schema for `spec` + +Required: + +- `acs` (String) AssertionConsumerService is a URL for assertion consumer service on the service provider (Teleport's side).
+- `attributes_to_roles` (Attributes List) AttributesToRoles is a list of mappings of attribute statements to roles. (see [below for nested schema](#nested-schema-for-specattributes_to_roles)) + +Optional: + +- `allow_idp_initiated` (Boolean) AllowIDPInitiated is a flag that indicates if the connector can be used for IdP-initiated logins. +- `assertion_key_pair` (Attributes) EncryptionKeyPair is a key pair used for decrypting SAML assertions. (see [below for nested schema](#nested-schema-for-specassertion_key_pair)) +- `audience` (String) Audience uniquely identifies our service provider. +- `cert` (String, Sensitive) Cert is the identity provider certificate PEM. IDP signs `` responses using this certificate. +- `client_redirect_settings` (Attributes) ClientRedirectSettings defines which client redirect URLs are allowed for non-browser SSO logins other than the standard localhost ones. (see [below for nested schema](#nested-schema-for-specclient_redirect_settings)) +- `display` (String) Display controls how this connector is displayed. +- `entity_descriptor` (String, Sensitive) EntityDescriptor is the XML entity descriptor. It can be used to supply configuration parameters in one XML file rather than supplying them in the individual elements. +- `entity_descriptor_url` (String) EntityDescriptorURL is a URL that supplies a configuration XML. +- `force_authn` (Number) ForceAuthn specifies whether re-authentication should be forced on login. UNSPECIFIED is treated as NO. +- `include_subject` (Boolean) IncludeSubject is a flag that indicates whether the Subject element is included in the SAML authentication request. Defaults to false. Note: Some IdPs will reject requests that contain a Subject. +- `issuer` (String) Issuer is the identity provider issuer. +- `mfa` (Attributes) MFASettings contains settings to enable SSO MFA checks through this auth connector.
(see [below for nested schema](#nested-schema-for-specmfa)) +- `preferred_request_binding` (String) PreferredRequestBinding is a preferred SAML request binding method. Value must be either "http-post" or "http-redirect". In general, the SAML identity provider lists the request binding methods it supports, and the SAML service provider uses whichever supported method it prefers. But Teleport has never honored the request binding value provided by the IdP and has always used http-redirect binding as a default. Setting the PreferredRequestBinding value preserves existing auth connector behavior and only uses http-post binding if it is explicitly configured. +- `provider` (String) Provider is the external identity provider. +- `service_provider_issuer` (String) ServiceProviderIssuer is the issuer of the service provider (Teleport). +- `signing_key_pair` (Attributes) SigningKeyPair is an x509 key pair used to sign AuthnRequest. (see [below for nested schema](#nested-schema-for-specsigning_key_pair)) +- `single_logout_url` (String) SingleLogoutURL is the SAML Single log-out URL to initiate SAML SLO (single log-out). If this is not provided, SLO is disabled. +- `sso` (String) SSO is the URL of the identity provider's SSO service. +- `user_matchers` (List of String) UserMatchers is a set of glob patterns to narrow down which username(s) this auth connector should match for identifier-first login. + +### Nested Schema for `spec.attributes_to_roles` + +Optional: + +- `name` (String) Name is an attribute statement name. +- `roles` (List of String) Roles is a list of static teleport roles to map to. +- `value` (String) Value is an attribute statement value to match. + + +### Nested Schema for `spec.assertion_key_pair` + +Optional: + +- `cert` (String) Cert is a PEM-encoded x509 certificate. +- `private_key` (String, Sensitive) PrivateKey is a PEM-encoded x509 private key.
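A minimal sketch of supplying the `assertion_key_pair` documented above. The hostname, file paths, and PEM material are placeholders; in practice keys are usually loaded from a secret store rather than committed alongside configuration:

```hcl
resource "teleport_saml_connector" "with_assertion_keys" {
  version = "v2"
  metadata = {
    name = "example-with-keys"
  }
  spec = {
    acs               = "https://teleport.example.com/v1/webapi/saml/acs"
    entity_descriptor = file("${path.module}/idp-descriptor.xml") # placeholder path
    attributes_to_roles = [{
      name  = "groups"
      value = "okta-admin"
      roles = ["access"]
    }]
    # Key pair used to decrypt SAML assertions; contents are placeholders.
    assertion_key_pair = {
      cert        = file("${path.module}/assertion.crt") # placeholder path
      private_key = file("${path.module}/assertion.key") # placeholder path
    }
  }
}
```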
+ + +### Nested Schema for `spec.client_redirect_settings` + +Optional: + +- `allowed_https_hostnames` (List of String) a list of hostnames allowed for https client redirect URLs +- `insecure_allowed_cidr_ranges` (List of String) a list of CIDRs allowed for HTTP or HTTPS client redirect URLs + + +### Nested Schema for `spec.mfa` + +Optional: + +- `cert` (String) Cert is the identity provider certificate PEM. IDP signs `` responses using this certificate. +- `enabled` (Boolean) Enabled specifies whether this SAML connector supports MFA checks. Defaults to false. +- `entity_descriptor` (String) EntityDescriptor is the XML entity descriptor. It can be used to supply configuration parameters in one XML file rather than supplying them in the individual elements. Usually set from EntityDescriptorUrl. +- `entity_descriptor_url` (String) EntityDescriptorUrl is a URL that supplies a configuration XML. +- `force_authn` (Number) ForceAuthn specifies whether re-authentication should be forced for MFA checks. UNSPECIFIED is treated as YES, to always re-authenticate for MFA checks. This should only be set to NO if the IdP is set up to perform MFA checks on top of active user sessions. +- `issuer` (String) Issuer is the identity provider issuer. Usually set from EntityDescriptor. +- `sso` (String) SSO is the URL of the identity provider's SSO service. Usually set from EntityDescriptor. + + +### Nested Schema for `spec.signing_key_pair` + +Optional: + +- `cert` (String) Cert is a PEM-encoded x509 certificate. +- `private_key` (String, Sensitive) PrivateKey is a PEM-encoded x509 private key. + + + +### Nested Schema for `metadata` + +Required: + +- `name` (String) Name is an object name + +Optional: + +- `description` (String) Description is object description +- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system.
+- `labels` (Map of String) Labels is a set of labels diff --git a/docs/pages/reference/terraform-provider/resources/server.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/server.mdx similarity index 95% rename from docs/pages/reference/terraform-provider/resources/server.mdx rename to docs/pages/reference/infrastructure-as-code/terraform-provider/resources/server.mdx index 73da11cd75fa5..3732281462d6e 100644 --- a/docs/pages/reference/terraform-provider/resources/server.mdx +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/server.mdx @@ -71,6 +71,7 @@ resource "teleport_server" "ssh_agentless_eice" { ### Optional - `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) +- `scope` (String) The advertised scope of the server, which cannot change once assigned. - `spec` (Attributes) Spec is a server spec (see [below for nested schema](#nested-schema-for-spec)) ### Nested Schema for `metadata` @@ -94,6 +95,8 @@ Optional: - `peer_addr` (String) PeerAddr is the address a proxy server is reachable at by its peer proxies. - `proxy_ids` (List of String) ProxyIDs is a list of proxy IDs this server is expected to be connected to. - `public_addrs` (List of String) PublicAddrs is a list of public addresses where this server can be reached. +- `relay_group` (String) the name of the Relay group that the server is connected to +- `relay_ids` (List of String) the list of Relay host IDs that the server is connected to - `rotation` (Attributes) Rotation specifies server rotation (see [below for nested schema](#nested-schema-for-specrotation)) - `use_tunnel` (Boolean) UseTunnel indicates that connections to this server should occur over a reverse tunnel.
- `version` (String) TeleportVersion is the teleport version that the server is running on diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/session_recording_config.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/session_recording_config.mdx new file mode 100644 index 0000000000000..d525ec2724c18 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/session_recording_config.mdx @@ -0,0 +1,111 @@ +--- +title: Reference for the teleport_session_recording_config Terraform resource +sidebar_label: session_recording_config +description: This page describes the supported values of the teleport_session_recording_config resource of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the teleport_session_recording_config resource of the Teleport Terraform provider. + + + + +## Example Usage + +```hcl +# Teleport session recording config + +resource "teleport_session_recording_config" "example" { + version = "v2" + metadata = { + description = "Session recording config" + labels = { + "example" = "yes" + "teleport.dev/origin" = "dynamic" // This label is added on the Teleport side by default + } + } + + spec = { + proxy_checks_host_keys = true + } +} +``` + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `version` (String) Version is the resource version. It must be specified. Supported values are: `v2`.
+ +### Optional + +- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) +- `spec` (Attributes) Spec is a SessionRecordingConfig specification (see [below for nested schema](#nested-schema-for-spec)) +- `status` (Attributes) Status is the SessionRecordingConfig status containing active encryption keys (see [below for nested schema](#nested-schema-for-status)) +- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources + +### Nested Schema for `metadata` + +Optional: + +- `description` (String) Description is object description +- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. +- `labels` (Map of String) Labels is a set of labels + + +### Nested Schema for `spec` + +Optional: + +- `encryption` (Attributes) Encryption configures if and how session recordings should be encrypted. (see [below for nested schema](#nested-schema-for-specencryption)) +- `mode` (String) Mode controls where (or if) the session is recorded. +- `proxy_checks_host_keys` (Boolean) ProxyChecksHostKeys is used to control if the proxy will check host keys when in recording mode. + +### Nested Schema for `spec.encryption` + +Optional: + +- `enabled` (Boolean) Enabled controls whether or not session recordings should be encrypted. +- `manual_key_management` (Attributes) ManualKeyManagement defines whether or not recording encryption keys should be managed externally and how to query those keys. (see [below for nested schema](#nested-schema-for-specencryptionmanual_key_management)) + +### Nested Schema for `spec.encryption.manual_key_management` + +Optional: + +- `active_keys` (Attributes List) ActiveKeys describe which keys should be queried for active recording encryption and replay.
(see [below for nested schema](#nested-schema-for-specencryptionmanual_key_managementactive_keys)) +- `enabled` (Boolean) Enabled controls whether or not recording encryption keys should be managed externally. +- `rotated_keys` (Attributes List) RotatedKeys describe which keys should be queried for historical replay. (see [below for nested schema](#nested-schema-for-specencryptionmanual_key_managementrotated_keys)) + +### Nested Schema for `spec.encryption.manual_key_management.active_keys` + +Optional: + +- `label` (String) Label is a value that can be used with the related keystore in order to find relevant keys. +- `type` (String) Type represents which keystore should be searched when looking up keys by label. + + +### Nested Schema for `spec.encryption.manual_key_management.rotated_keys` + +Optional: + +- `label` (String) Label is a value that can be used with the related keystore in order to find relevant keys. +- `type` (String) Type represents which keystore should be searched when looking up keys by label. + + + + + +### Nested Schema for `status` + +Optional: + +- `encryption_keys` (Attributes List) EncryptionKeys contain the currently active age encryption keys used for encrypted session recording. (see [below for nested schema](#nested-schema-for-statusencryption_keys)) + +### Nested Schema for `status.encryption_keys` + +Optional: + +- `public_key` (String) A PKIX ASN.1 DER encoded public key used for key wrapping during age encryption. Expected to be RSA 4096. 
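
Putting the encryption schema above together, a minimal configuration that turns on encrypted session recording could look like the following sketch. Only fields listed in the schema are used; the resource name and the choice to leave `manual_key_management` unset (so Teleport manages the keys) are illustrative, not prescriptive:

```hcl
# Sketch: enable session recording encryption.
# manual_key_management is left unset, so encryption keys
# are managed by Teleport rather than an external keystore.
resource "teleport_session_recording_config" "encrypted" {
  version = "v2"

  spec = {
    encryption = {
      enabled = true
    }
  }
}
```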
diff --git a/docs/pages/reference/terraform-provider/resources/static_host_user.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/static_host_user.mdx similarity index 100% rename from docs/pages/reference/terraform-provider/resources/static_host_user.mdx rename to docs/pages/reference/infrastructure-as-code/terraform-provider/resources/static_host_user.mdx diff --git a/docs/pages/reference/terraform-provider/resources/trusted_cluster.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/trusted_cluster.mdx similarity index 100% rename from docs/pages/reference/terraform-provider/resources/trusted_cluster.mdx rename to docs/pages/reference/infrastructure-as-code/terraform-provider/resources/trusted_cluster.mdx diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/trusted_device.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/trusted_device.mdx new file mode 100644 index 0000000000000..30b369350f4d8 --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/trusted_device.mdx @@ -0,0 +1,67 @@ +--- +title: Reference for the teleport_trusted_device Terraform resource +sidebar_label: trusted_device +description: This page describes the supported values of the teleport_trusted_device resource of the Teleport Terraform provider. +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +This page describes the supported values of the teleport_trusted_device resource of the Teleport Terraform provider. + + + + +## Example Usage + +```hcl +# Trusted device resource + +resource "teleport_trusted_device" "TESTDEVICE1" { + version = "v1" + spec = { + asset_tag = "TESTDEVICE1" + os_type = "macos" + } +} +``` + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `version` (String) Version is the API version used to create the resource. 
It must be specified. Based on this version, Teleport will apply different defaults on resource creation or deletion. It must be an integer prefixed by "v". For example: `v1` + +### Optional + +- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) +- `spec` (Attributes) Specification of the device. (see [below for nested schema](#nested-schema-for-spec)) + +### Nested Schema for `metadata` + +Optional: + +- `labels` (Map of String) Labels is a set of labels +- `name` (String) Name is an object name + + +### Nested Schema for `spec` + +Required: + +- `asset_tag` (String) +- `os_type` (String) + +Optional: + +- `enroll_status` (String) +- `owner` (String) +- `source` (Attributes) (see [below for nested schema](#nested-schema-for-specsource)) + +### Nested Schema for `spec.source` + +Optional: + +- `name` (String) +- `origin` (String) diff --git a/docs/pages/reference/terraform-provider/resources/user.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/user.mdx similarity index 100% rename from docs/pages/reference/terraform-provider/resources/user.mdx rename to docs/pages/reference/infrastructure-as-code/terraform-provider/resources/user.mdx diff --git a/docs/pages/reference/terraform-provider/resources/workload_identity.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/resources/workload_identity.mdx similarity index 100% rename from docs/pages/reference/terraform-provider/resources/workload_identity.mdx rename to docs/pages/reference/infrastructure-as-code/terraform-provider/resources/workload_identity.mdx diff --git a/docs/pages/reference/infrastructure-as-code/terraform-provider/terraform-provider.mdx b/docs/pages/reference/infrastructure-as-code/terraform-provider/terraform-provider.mdx new file mode 100644 index 0000000000000..133ab0f63ae3d --- /dev/null +++ b/docs/pages/reference/infrastructure-as-code/terraform-provider/terraform-provider.mdx @@ -0,0 
+1,197 @@ +--- +title: "Teleport Terraform Provider Reference" +sidebar_label: Terraform Provider +description: Reference documentation of the Teleport Terraform provider. +tags: + - infrastructure-as-code + - reference + - platform-wide +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +The Teleport Terraform provider allows Terraform users to configure Teleport +from Terraform. + +This section is the Teleport Terraform Provider reference. +It lists all the supported resources and their fields. + + +To get started with the Terraform provider, follow [the installation +guide](../../../zero-trust-access/infrastructure-as-code/terraform-provider/terraform-provider.mdx). +Once you have a working provider, we recommend following the +["Managing users and roles with IaC"](../../../zero-trust-access/infrastructure-as-code/managing-resources/user-and-role.mdx) guide. + + +The provider exposes Teleport resources both as Terraform data-sources and Terraform resources. +Data-sources are used by Terraform to fetch information from Teleport, while resources are used +to create resources in Teleport. 
+ +{/* Note: the awkward `resource-index` file names are here because `data-sources` +is reserved by the generator for the catch-all resource template */} + +- [list of supported resources](./resources/resources.mdx) +- [list of supported data-sources](./data-sources/data-sources.mdx) + +## Example Usage + + + + +```hcl +terraform { + required_providers { + teleport = { + source = "terraform.releases.teleport.dev/gravitational/teleport" + version = "~> (=teleport.major_version=).0" + } + } +} + +provider "teleport" { + # Update addr to point to Teleport Auth/Proxy + # addr = "auth.example.com:3025" + addr = "proxy.example.com:443" + identity_file_path = "terraform-identity/identity" +} +``` + + + + +```hcl +terraform { + required_providers { + teleport = { + source = "terraform.releases.teleport.dev/gravitational/teleport" + version = "~> (=cloud.major_version=).0" + } + } +} + +provider "teleport" { + # Update addr to point to your Teleport Enterprise (managed) tenant URL's host:port + addr = "mytenant.teleport.sh:443" + identity_file_path = "terraform-identity/identity" +} +``` + + + + +## Connection methods + +This section lists the different ways of passing credentials to the Terraform provider. +You can find which method fits your use case in +the [Teleport Terraform provider setup +page](../../../zero-trust-access/infrastructure-as-code/terraform-provider/terraform-provider.mdx). + +### With an identity file + +With this connection method, you must provide an identity file. This file allows Terraform to connect both via the Proxy +Service (ports 443 or 3080) and via the Auth Service (port 3025). This is the recommended way of passing credentials to +the Terraform provider. 
+ +The identity file can be obtained in several ways: + +#### Obtaining an identity file locally with `tctl` + +Since Teleport 16.2, you can use `tctl` and your local credentials to create a temporary bot and load its identity +in your shell's environment variables: + +```code +$ eval "$(tctl terraform env)" +🔑 Detecting if MFA is required +This is an admin-level action and requires MFA to complete +Tap any security key +Detected security key tap +⚙️ Creating temporary bot "tctl-terraform-env-82ab1a2e" and its token +🤖 Using the temporary bot to obtain certificates +🚀 Certificates obtained, you can now use Terraform in this terminal for 1h0m0s +``` + +You can find more information in +the ["Run the Terraform provider locally" guide](../../../zero-trust-access/infrastructure-as-code/terraform-provider/local.mdx). + +#### Obtaining an identity file via `tbot` + +`tbot` relies on [Machine & Workload Identity](../../../machine-workload-identity/machine-workload-identity.mdx) to obtain and automatically renew +short-lived credentials. Such credentials are harder to exfiltrate, and you can control more precisely who has access to +which roles (e.g. you can allow only GitHub Actions pipelines targeting the `prod` environment to get certificates). + +You can follow [the Terraform Provider +guide](../../../zero-trust-access/infrastructure-as-code/terraform-provider/terraform-provider.mdx) to set up `tbot` +and have Terraform use its identity. + +#### Obtaining an identity file via `tctl auth sign` + +You can obtain an identity file with the command: + +```code +$ tctl auth sign --user terraform --format file -o identity.pem +``` + +This auth method has the following limitations: +- Such credentials are highly privileged and long-lived. They must be protected and rotated. +- This auth method does not work against Teleport clusters with MFA set to `webauthn`. 
+ On such clusters, Teleport will reject any long-lived certificate and require + [an additional MFA challenge for administrative + actions](../../../zero-trust-access/authentication/mfa-for-admin-actions.mdx). + +### With a token (native Machine & Workload Identity) + +Starting with Teleport 16.2, the Teleport Terraform provider can natively use Machine & Workload Identity (without `tbot`) to join a Teleport +cluster. The Terraform Provider will rely on its runtime (AWS, GCP, Kubernetes, CI/CD system) to prove its identity to +Teleport. + +You can use any [delegated join method](../../deployment/join-methods.mdx#delegated-join-methods) by setting +both `join_method` and `join_token` in the provider configuration. + +This setup is described in more detail in +the ["Run the Teleport Terraform provider in CI or Cloud" guide](../../../zero-trust-access/infrastructure-as-code/terraform-provider/ci-or-cloud.mdx). + +### With key, certificate, and CA certificate + +With this connection method, you must provide a TLS key, a TLS certificate, and the Teleport Auth Service TLS CA certificates. +Those can be obtained with the command: + +```code +$ tctl auth sign --user terraform --format=tls -o terraform.pem +``` + +This auth method has the following limitations: +- The provider can only connect to the Auth Service directly (port 3025). On most clusters, only the Proxy Service is publicly exposed. +- Such credentials are highly privileged and long-lived. They must be protected and rotated. +- This auth method does not work against Teleport clusters with MFA set to `webauthn`. + On such clusters, Teleport will reject any long-lived certificate and require + [an additional MFA challenge for administrative + actions](../../../zero-trust-access/authentication/mfa-for-admin-actions.mdx). + +{/* schema generated by tfplugindocs */} +## Schema + +### Optional + +- `addr` (String) host:port of the Teleport address. 
This can be the Teleport Proxy Service address (port 443 or 3080) or the Teleport Auth Service address (port 3025). This can also be set with the environment variable `TF_TELEPORT_ADDR`. +- `audience_tag` (String) Name of the optional audience tag used for native Machine ID joining with the `terraform` method. This can also be set with the environment variable `TF_TELEPORT_JOIN_AUDIENCE_TAG`. +- `cert_base64` (String) Base64 encoded TLS auth certificate. This can also be set with the environment variable `TF_TELEPORT_CERT_BASE64`. +- `cert_path` (String) Path to Teleport auth certificate file. This can also be set with the environment variable `TF_TELEPORT_CERT`. +- `dial_timeout_duration` (String) DialTimeout sets timeout when trying to connect to the server. This can also be set with the environment variable `TF_TELEPORT_DIAL_TIMEOUT_DURATION`. +- `gitlab_id_token_env_var` (String) Environment variable used to fetch the ID token issued by GitLab for the `gitlab` join method. If unset, this defaults to `TBOT_GITLAB_JWT`. This can also be set with the environment variable `TF_TELEPORT_GITLAB_ID_TOKEN_ENV_VAR`. +- `identity_file` (String, Sensitive) Teleport identity file content. This can also be set with the environment variable `TF_TELEPORT_IDENTITY_FILE`. +- `identity_file_base64` (String, Sensitive) Teleport identity file content base64 encoded. This can also be set with the environment variable `TF_TELEPORT_IDENTITY_FILE_BASE64`. +- `identity_file_path` (String) Teleport identity file path. This can also be set with the environment variable `TF_TELEPORT_IDENTITY_FILE_PATH`. +- `insecure` (Boolean) Skip proxy certificate verification when joining the Teleport cluster. This is not recommended for production use. This can also be set with the environment variable `TF_TELEPORT_INSECURE`. +- `join_method` (String) Enables the native Terraform MachineID support. When set, Terraform uses MachineID to securely join the Teleport cluster and obtain credentials. 
See [the join method reference](../../deployment/join-methods.mdx) for possible values. You must use [a delegated join method](../../deployment/join-methods.mdx#secret-vs-delegated). This can also be set with the environment variable `TF_TELEPORT_JOIN_METHOD`. +- `join_token` (String) Name of the token used for the native MachineID joining. This value is not sensitive for [delegated join methods](../../deployment/join-methods.mdx#secret-vs-delegated). This can also be set with the environment variable `TF_TELEPORT_JOIN_TOKEN`. +- `key_base64` (String, Sensitive) Base64 encoded TLS auth key. This can also be set with the environment variable `TF_TELEPORT_KEY_BASE64`. +- `key_path` (String) Path to Teleport auth key file. This can also be set with the environment variable `TF_TELEPORT_KEY`. +- `profile_dir` (String) Teleport profile path. This can also be set with the environment variable `TF_TELEPORT_PROFILE_PATH`. +- `profile_name` (String) Teleport profile name. This can also be set with the environment variable `TF_TELEPORT_PROFILE_NAME`. +- `retry_base_duration` (String) Retry algorithm when the API returns 'not found': base duration between retries (https://pkg.go.dev/time#ParseDuration). This can also be set with the environment variable `TF_TELEPORT_RETRY_BASE_DURATION`. +- `retry_cap_duration` (String) Retry algorithm when the API returns 'not found': max duration between retries (https://pkg.go.dev/time#ParseDuration). This can also be set with the environment variable `TF_TELEPORT_RETRY_CAP_DURATION`. +- `retry_max_tries` (String) Retry algorithm when the API returns 'not found': max tries. This can also be set with the environment variable `TF_TELEPORT_RETRY_MAX_TRIES`. +- `root_ca_base64` (String) Base64 encoded Root CA. This can also be set with the environment variable `TF_TELEPORT_CA_BASE64`. +- `root_ca_path` (String) Path to Teleport Root CA. This can also be set with the environment variable `TF_TELEPORT_ROOT_CA`. 
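
As the schema above notes, each provider option has an environment variable equivalent, which is convenient in CI where you may prefer to keep credentials out of the provider block entirely. A sketch using placeholder values (the address and identity path below are illustrative, matching the earlier examples, and assume an identity file obtained as described in the connection methods section):

```code
$ export TF_TELEPORT_ADDR="proxy.example.com:443"
$ export TF_TELEPORT_IDENTITY_FILE_PATH="terraform-identity/identity"
$ terraform plan
```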
+ diff --git a/docs/pages/reference/join-methods.mdx b/docs/pages/reference/join-methods.mdx deleted file mode 100644 index ce36690d36447..0000000000000 --- a/docs/pages/reference/join-methods.mdx +++ /dev/null @@ -1,492 +0,0 @@ ---- -title: Join Methods and Tokens -h1: Join Methods and Token Reference -description: Describes the different ways to configure a Teleport to join a cluster. -keywords: [join, joining, token] -tocDepth: 3 ---- - -This guide explains the core concepts behind the Teleport joining process, -references all support join methods and classifies them based on their -security properties. This guide does not explain step-by-step how to -join an instance with each join method, but links to the relevant How-To -guides when possible. - - -You must be familiar with the [Teleport Core concepts](../core-concepts.mdx) -before reading this page. - - -## Definitions - -### Joining - -Joining a Teleport cluster is the act of establishing trust between a new -Teleport instance and all the existing instances already part of the Teleport -cluster. At the end of the joining process, the Auth Service signs certificates -for the joining instance. Those certificates represent the trust that was -established. With them, the newly-joined instance can interact with the -other Teleport instances. - -To request its certificates, an instance must prove its identity to the Auth Service. -Teleport offers multiple ways for a joining instance to prove its authenticity, they -are called the join methods. - -The joining process only happens when a Teleport service doesn't have valid -certificates. Once the token is exchanged for certificates, those -certificates are used on all subsequent attempts to connect. In most cases, -this happens during the first startup. - -### Join methods - -A join method is a way for the Auth Service to validate that an instance requesting to -join the Teleport cluster is legitimate. 
Some join methods are universal while -others rely on the joining instance context. For example cloud-provider -join-methods (such as `iam`, `gcp` or `azure`) or CI-provider (such as `github`, -`gitlab`, `circleci`) are more flexible and provide -better security guarantees but require the joining instance to run in a specific -cloud-provider. - -Different join methods may provide different security guarantees. e.g. some -join methods allow the joining instance to request renewable certificates while other -will require the instance to join again to renew its certificate. - -The join method and its parameters are specified in the token resource. - -### Token - -A Token is a Teleport resource that specifies which join method can be used in -which context. For example, a token can allow SSH services to join with the -`iam` join method if they are in the AWS account `333333333333` and can assume -the role `teleport-instance-role`: - -```yaml -kind: token -version: v2 -metadata: - name: my-iam-token -spec: - roles: [Node] - join_method: iam - allow: - - aws_account: "333333333333" - aws_arn: "arn:aws:sts::333333333333:assumed-role/teleport-instance-role/i-*" -``` - - -The token name may, or may not be sensitive depending on the join method. -Secret-based join methods rely on the token name to be secret. In such -cases the token name must be protected as knowing the token name in enough -for an instance to join the cluster. - - -## Classification of join methods - -### Secret vs delegated - -#### Secret-based join methods - -Secret-based join methods are universal: Teleport service can use a secret-based -join method regardless of the platform/cloud provider it runs on. The joining instance -sends the secret and the Auth Service validates that it matches the one it -knows. Those joining methods are inherently prone to secret exfiltration and -the delegated join methods should be preferred when available. 
If you have -to use a secret-based join method, it is recommended to use short-lived tokens -(valid only 1 hour for example) to reduce the risk of the token leaking. - -Secret-based join methods are: -- [ephemeral `token`](#ephemeral-tokens) -- [static `token`](#static-tokens) - - -Teleport supports static tokens for backward compatibility, their use should be avoided. - - -#### Delegated join methods - -Delegated join methods rely on the context of the joining instance and a third party to -establish trust. The third party can be a cloud provider, a CI platform or the -container runtime. Those methods cannot be used for every instance (e.g. joining an SSH -agent from a Raspberry Pi is not possible) but should be preferred when possible. - -Delegated join methods might also offer more granularity. For example, cloud-provider -based join methods can allow instances to join based on their Availability Zone, -service account, or cloud account ID. - -Delegated join methods are: -- [`azure`](#azure-managed-identity-azure) -- [`azure_devops`](#azure-devops-azure_devops) -- [`bitbucket`](#bitbucket-pipelines-bitbucket) -- [`circleci`](#circleci-circleci) -- [`ec2`](#aws-ec2-identity-document-ec2) -- [`gcp`](#gcp-service-account-gcp) -- [`github`](#github-actions-github) -- [`gitlab`](#gitlab-gitlab) -- [`iam`](#aws-iam-role-iam) -- [`kubernetes`](#kubernetes-kubernetes) -- [`oracle`](#oracle-cloud-oracle) -- [`spacelift`](#spacelift-spacelift) -- [`terraform_cloud`](#terraform-cloud-terraform_cloud) -- [`tpm`](#trusted-platform-module-tpm) - -### Renewable vs non-renewable - -Depending on the join method used, the Auth Service might issue renewable -or non-renewable certificates. - -When the certificate is about to expire, instances with renewable certificates can -request a new one without having to use a token again. Typically, secret-based join methods -provide renewable certificates because the secret token is sensitive and typically -short-lived. 
With a single join, the instance can stay part of the cluster indefinitely. - -Renewable join-methods are: -- [ephemeral `token`](#ephemeral-tokens) -- [static `token`](#static-tokens) -- [`ec2`](#aws-ec2-identity-document-ec2) - -Nodes with non-renewable certificates must join again in order to get a new -certificate before expiry. The instance will have to prove again that it is legitimate. -The non-renewable join methods guarantee that an attacker stealing the instance -certificates will not be able to maintain access to the Teleport cluster. -Those join methods can be considered more secure and more appropriate for -temporary workloads such as CI/CD pipelines or containerized environments. - -Non-renewable join methods are: -- [`iam`](#aws-iam-role-iam) -- [`azure`](#azure-managed-identity-azure) -- [`azure_devops`](#azure-devops-azure_devops) -- [`gcp`](#gcp-service-account-gcp) -- [`github`](#github-actions-github) -- [`circleci`](#circleci-circleci) -- [`gitlab`](#gitlab-gitlab) -- [`kubernetes`](#kubernetes-kubernetes) -- [`tpm`](#trusted-platform-module-tpm) - -## Token resource reference - -The token resource has the following common fields for all join methods: - -```yaml -# token.yaml -kind: token -version: v2 -metadata: - name: my-token-name -spec: - # System roles describe what services the joining Teleport instance can run. - # Those roles are written on the instance certificate. If you want to change - # them (e.g. add Application access to an SSH Node), you need to: - # - edit the token to update the roles (e.g. add "App") - # - un-register the Teleport instance - # - modify its configuration to enable the new service (here "app_service.enabled") - # - have the instance join again - # - # You should use the minimal set of system roles required. 
- # Common roles are: - # - "Node" for SSH Service - # - "Proxy" for Proxy Service - # - "Kube" for Kubernetes Service - # - "App" for Application Service - # - "Db" for Database Service - # - "WindowsDesktop" for Windows Desktop Service - # - "Discovery" for Discovery Service - # - "Bot" for MachineID (when set, "spec.bot_name" must be set in the token) - roles: - - Node - - App - join_method: gcp - # Only set bot name when the token is used for MachineID. - # When set, the token must have the "Bot" role as well. - bot_name: my-bot - # SuggestedLabels is a set of labels that resources should set when using this - # token to enroll themselves in the cluster. - # Currently, only node-join scripts create a configuration according to the suggestion. - suggested_labels: - teams: ["sales-eng", "eng", "qa"] - application: ["demo-product"] - # SuggestedAgentMatcherLabels is a set of labels to be used by discovery agents to match on resources. - # When an agent uses this token, the agent should monitor resources that match those labels. - # For databases, this means adding the labels to `db_service.resources.labels`. - # Currently, only node-join scripts create a configuration according to the suggestion. - suggested_agent_matcher_labels: - teams: ["sales-eng"] -``` - -## Join methods - -### Static tokens - - -This join method is inherently less secure because long-lived tokens can be stolen and reused. -Relying on it significantly reduces the security benefits of using Teleport. Its usage is strongly discouraged. -You should use [ephemeral tokens](#ephemeral-tokens) instead. - - -Static tokens are tokens defined in the Auth Service configuration (`teleport.yaml`). -The token name must be kept secret as knowing it allows to join instances to -the Teleport cluster. - -```yaml -auth_service: - enabled: true - # Pre-defined tokens for adding new instances to a cluster. Each token specifies - # the role a new node will be allowed to assume. 
The more secure way to - # add instances is to use `tctl nodes add --ttl` command to generate auto-expiring - # tokens. - # - # We recommend to use tools like `pwgen` to generate sufficiently random - # tokens of 32+ byte length. - tokens: - - "proxy,node:xxxxx" - - "auth:yyyy" - - "discovery,app,db:zzzzz" -``` - -### Ephemeral tokens - -Ephemeral tokens are secret tokens created dynamically via the CLI or Teleport API. -They are time-bound and are typically created just before joining an instance to -the Teleport cluster. - -They can be created by the CLI (a strong random value is picked when not specified, default TTL is 30 minutes): -```code -$ tctl tokens add --type discovery,app --ttl 15m -``` - -Or as Teleport resources: - -(!docs/pages/includes/provision-token/ephemeral-spec.mdx!) - -When a MachineID bot uses an ephemeral join token, the token is deleted. - - -- How to [Join Services with a Secure Token](../enroll-resources/agents/join-token.mdx). -- [Deploying Machine ID on Linux](../enroll-resources/machine-id/deployment/linux.mdx) - - -### AWS IAM role: `iam` - -The IAM join method is available to any Teleport process running anywhere with access to IAM credentials, -such as an EC2 instance with an attached IAM role. No specific permissions or IAM policy is required: an -IAM role with no attached policies is sufficient. No IAM credentials are required on the Teleport Auth Service. - -This is the recommended method to join workload running on AWS. - -(!docs/pages/includes/provision-token/iam-spec.mdx!) - - -- [Joining Services via AWS IAM Role](../enroll-resources/agents/aws-iam.mdx). -- [Deploying Machine ID on AWS](../enroll-resources/machine-id/deployment/aws.mdx) - - -### AWS EC2 identity document: `ec2` - -The EC2 join method is available to any Teleport process running on an EC2 -instance. Only one Teleport process per EC2 instance may use the EC2 join -method. 
- -IAM credentials with `ec2:DescribeInstances` permissions are required on your -Teleport Auth Service. No IAM credentials are required on the Teleport processes -joining the cluster. - - - -The EC2 join method is not available in Teleport Enterprise Cloud and Teleport -Team. Teleport Enterprise Cloud and Team customers can use the [IAM join -method](#aws-iam-role-iam) or [ephemeral secret tokens](#ephemeral-tokens). - - - -(!docs/pages/includes/provision-token/ec2-spec.mdx!) - - -[Joining Services via AWS EC2 Identity Document](../enroll-resources/agents/aws-ec2.mdx). - - -### Azure managed identity: `azure` - -The Azure join method is available to any Teleport process running in an -Azure Virtual Machine. Support for joining a cluster with the Proxy Service -behind a layer 7 load balancer or reverse proxy is available in Teleport 13.0+. - -(!docs/pages/includes/provision-token/azure-spec.mdx!) - - -- [Joining Services via Azure Managed Identity](../enroll-resources/agents/azure.mdx). -- [Deploying Machine ID on Azure](../enroll-resources/machine-id/deployment/azure.mdx) - - -### Azure Devops: `azure_devops` - -The Azure DevOps is available to any Teleport process running in an Azure DevOps -pipeline. This join method is typically used with -[Machine & Workload Identity](../enroll-resources/machine-id/introduction.mdx) -to access Teleport-protected resources in Azure DevOps pipelines without the -use of long-lived secrets. - -(!docs/pages/includes/provision-token/azure-devops-spec.mdx!) - -### GCP service account: `gcp` - -The GCP join method is available to any Teleport process running on a GCP VM. -The VM must have a -[service account](https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances) -assigned to it (the default service account is fine). No IAM roles are required -on the Teleport process joining the cluster. - -(!docs/pages/includes/provision-token/gcp-spec.mdx!) 
- - -- How to [Join Services with GCP](../enroll-resources/agents/gcp.mdx). -- [Deploying Machine ID on GCP](../enroll-resources/machine-id/deployment/gcp.mdx) - - -### GitHub Actions: `github` - -Teleport supports secure joining on both GitHub-hosted and self-hosted GitHub -Actions runners as well as GitHub Enterprise Server. This join method is -typically used with [Machine ID](../enroll-resources/machine-id/introduction.mdx) to access -Teleport-protected resources in GitHub Actions pipelines. - -(!docs/pages/includes/provision-token/github-spec.mdx!) - - -- [Deploying Machine ID on GitHub Actions](../enroll-resources/machine-id/deployment/github-actions.mdx) - - -### CircleCI: `circleci` - -This join method is typically used with [Machine ID](../enroll-resources/machine-id/introduction.mdx) -to access Teleport-protected resources in Circle CI pipelines. - -(!docs/pages/includes/provision-token/circleci-spec.mdx!) - - -- [Deploying Machine ID on Circle CI](../enroll-resources/machine-id/deployment/circleci.mdx) - - -### GitLab: `gitlab` - -Teleport supports secure joining on both cloud-hosted and self-hosted GitLab -instances. **The minimum supported GitLab version is 15.7**. - -This join method is typically used with [MachineID](../enroll-resources/machine-id/introduction.mdx) -to access Teleport-protected resources in Gitlab CI pipelines. - -(!docs/pages/includes/provision-token/gitlab-spec.mdx!) - - -- [Deploying Machine ID on GitLab CI](../enroll-resources/machine-id/deployment/gitlab.mdx) - - -### Kubernetes: `kubernetes` - -The Kubernetes join methods exists in two variants: -- [in-cluster](#kubernetes-in-cluster) -- [JWKS](#kubernetes-jwks) - -#### Kubernetes In-cluster - -Kubernetes in-cluster joining is available for any Teleport process running -in the same Kubernetes cluster than the Auth Service. It uses the Kubernetes -ServiceAccount tokens to validate the pod identity. 
The method relies on the -[Kubernetes TokenReview API](https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/token-review-v1/) -which is typically only reachable from within the Kubernetes cluster. Because of -this limitation, this join method is only available for self-hosted Teleport -clusters in Kubernetes. - -This method should be preferred when available as tokens are revoked as soon as -the pod enters the `Terminated` state. - -(!docs/pages/includes/provision-token/kubernetes-in-cluster-spec.mdx!) - - -- [Joining Services via Kubernetes ServiceAccount Token](../enroll-resources/agents/kubernetes.mdx) - - -#### Kubernetes JWKS - -Kubernetes JWKS joining is available for any Teleport process running in -Kubernetes. The Auth Service does not have to run in Kubernetes so this method -can be used with any Teleport cluster, including Teleport Cloud. -This join method works by exporting the public Kubernetes signing keys and using -them to validate Kubernetes SA token signatures. The signature validation can be -performed by an Auth Service without access to the Kubernetes. - -The Kubernetes JWKS join method is available in Teleport 14+. - -(!docs/pages/includes/provision-token/kubernetes-jwks-spec.mdx!) - - -After rotating the Kubernetes CA, you must update the Kubernetes JWKS tokens -to contain the new Kubernetes signing keys (update the -`spec.kubernetes.static_jwks.jwks` field). - - - -- [Deploying Machine ID on Kubernetes](../enroll-resources/machine-id/deployment/kubernetes.mdx) - - -### Trusted Platform Module: `tpm` - -(!docs/pages/includes/tpm-joining-background.mdx!) - -(!docs/pages/includes/provision-token/tpm-spec.mdx!) - - -- [Deploying Machine ID on Linux: TPM](../enroll-resources/machine-id/deployment/linux-tpm.mdx) - - -### Terraform Cloud: `terraform_cloud` - -This join method is used to authenticate using Terraform Cloud Workload -Identity. 
Identity. It is typically used by the Teleport Terraform provider on either
HCP Terraform or self-hosted Terraform Enterprise. It cannot be used to join
Terraform runs on other platforms; dedicated join methods should be used
instead.

Support for self-hosted Terraform Enterprise requires Teleport Enterprise.

(!docs/pages/includes/provision-token/terraform-spec.mdx!)

- [Run the Teleport Terraform Provider on Terraform Cloud](../admin-guides/infrastructure-as-code/terraform-provider/terraform-cloud.mdx)

### Spacelift: `spacelift`

This join method is used to authenticate using Spacelift. It is typically used
by the Teleport Terraform provider on Spacelift (including self-hosted
deployments).

(!docs/pages/includes/provision-token/spacelift-spec.mdx!)

- [Run the Teleport Terraform Provider on Spacelift](../admin-guides/infrastructure-as-code/terraform-provider/spacelift.mdx)

### Bitbucket Pipelines: `bitbucket`

This join method is used to authenticate using Bitbucket's support for OpenID
Connect, and is typically used to allow either Machine ID's `tbot` or the
Teleport Terraform provider to authenticate to Teleport without the use of
shared secrets.

(!docs/pages/includes/provision-token/bitbucket-spec.mdx!)

- [Deploying Machine ID on Bitbucket Pipelines](../enroll-resources/machine-id/deployment/bitbucket.mdx)

### Oracle Cloud: `oracle`

The Oracle join method is available to any Teleport process running on an Oracle
Cloud Compute instance.

(!docs/pages/includes/provision-token/oracle-spec.mdx!)

- How to [Join Services with Oracle Cloud](../enroll-resources/agents/oracle.mdx).
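Whichever join method is chosen, it is configured on the cluster through a `token` resource. As a sketch only (the bot, organization, and repository names below are illustrative, not taken from this page), a token enabling the `github` join method for a Machine ID bot might look like:

```yaml
kind: token
version: v2
metadata:
  # name is the value later passed to `tbot --token=...` or set in the
  # onboarding.token configuration field.
  name: example-bot-github
spec:
  # roles: [Bot] marks this as a join token for a Machine ID bot.
  roles: [Bot]
  bot_name: example-bot
  join_method: github
  github:
    allow:
      # Only workflows running in this repository may join (illustrative value).
      - repository: example-org/example-repo
```

The resource can then be applied with `tctl create -f token.yaml`.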
diff --git a/docs/pages/reference/machine-id/configuration.mdx b/docs/pages/reference/machine-id/configuration.mdx
deleted file mode 100644
index a641ec8c76876..0000000000000
--- a/docs/pages/reference/machine-id/configuration.mdx
+++ /dev/null
@@ -1,1309 +0,0 @@
---
title: Machine ID Configuration Reference
description: Configuration reference for Teleport Machine ID.
tocDepth: 4
---

This reference documents the various options that can be configured in the `tbot`
configuration file. The configuration file offers more control than
configuring `tbot` using CLI parameters alone.

To run `tbot` with a configuration file, specify the path with the `-c` flag:

```code
$ tbot start -c ./tbot.yaml
```

In this reference, the term **artifact** refers to an item that `tbot` writes to a
destination as part of the process of generating an output. Examples of
artifacts include configuration files, certificates, and cryptographic key
material. Artifacts are usually files, but the term *file* is deliberately
avoided because a destination isn't required to be a filesystem.

From Teleport 14, `tbot` supports the v2 configuration version.

```yaml
# version specifies the version of the configuration file in use. `v2` is the
# most recent and should be used for all new bots. The rest of this example
# is in the `v2` schema.
version: v2

# debug enables verbose logging to stderr. If unspecified, this defaults to
# false.
debug: true

# auth_server specifies the address of the Auth Service instance that `tbot`
# should connect to. You should prefer specifying `proxy_server` to specify the
# Proxy Service address.
auth_server: "teleport.example.com:3025"

# proxy_server specifies the address of the Teleport Proxy Service that `tbot`
# should connect to.
# It is recommended to use the address of your Teleport Proxy Service or, if
# using Teleport Cloud, the address of your Teleport Cloud instance.
proxy_server: "teleport.example.com:443" # or "example.teleport.sh:443" for Teleport Cloud

# credential_ttl specifies how long certificates generated by `tbot` should
# live for. It should be a positive, numeric value with an `m` (for minutes) or
# `h` (for hours) suffix. By default, this value is `1h`.
# This has a maximum value of `24h`.
#
# It can be overridden for most outputs and services to give them a shorter TTL
# than `tbot`'s internal certificates.
credential_ttl: "1h"

# renewal_interval specifies how often `tbot` should aim to renew the
# outputs it has generated. It should be a positive, numeric value with an
# `m` (for minutes) or `h` (for hours) suffix. The default value is `20m`.
# This value must be lower than `credential_ttl`.
# This value is ignored when `tbot` is running in one-shot mode.
#
# It can be overridden for most outputs and services to give them a shorter
# renewal interval than `tbot`'s internal certificates.
renewal_interval: "20m"

# oneshot configures `tbot` to exit immediately after generating the outputs.
# The default value is `false`. A value of `true` is useful in ephemeral
# environments, like CI/CD.
oneshot: false

# onboarding is a group of configuration options that control how `tbot` will
# authenticate with the Teleport cluster.
onboarding:
  # token specifies which join token, configured in the Teleport cluster,
  # should be used to join the Teleport cluster.
  #
  # This can also be an absolute path to a file containing the value you wish
  # to be used.
  # File path example:
  # token: /var/lib/teleport/tokenjoin
  token: "00000000000000000000000000000000"

  # join_method must be the join method associated with the token specified
  # above. This setting should match the value output when creating the bot
  # using `tctl`.
  #
  # Supported values include:
  # - `token`
  # - `azure`
  # - `gcp`
  # - `circleci`
  # - `github`
  # - `gitlab`
  # - `iam`
  # - `ec2`
  # - `kubernetes`
  # - `spacelift`
  # - `tpm`
  # - `terraform_cloud`
  join_method: "token"

  # ca_pins are used to validate the identity of the Teleport Auth Service on
  # first connect. This should not be specified when using Teleport Cloud or
  # connecting through a Teleport Proxy.
  ca_pins:
    - "(=presets.ca_pin=)"
    - "(=presets.ca_pin=)"

  # ca_path is used to specify where a CA file can be found that can be used to
  # validate the identity of the Teleport Auth Service on first connect.
  # This should not be specified when using Teleport Cloud or connecting through
  # a Teleport Proxy. The ca_pins option should be preferred over ca_path.
  ca_path: "/path/to/ca.pem"

  # gitlab holds configuration specific to the "gitlab" join method.
  gitlab:
    # token_env_var_name allows the environment variable that contains the
    # GitLab ID token to be specified. If unspecified, this defaults to
    # "TBOT_GITLAB_JWT".
    #
    # Overriding this is useful when you need to use `tbot` to authenticate to
    # multiple Teleport clusters from a single GitLab CI job.
    token_env_var_name: "MY_GITLAB_ID_TOKEN"

# storage specifies the destination that `tbot` should use to store its
# internal state. This state is sensitive, and you should ensure that the
# destination you specify here can only be accessed by `tbot`.
#
# If unspecified, storage is set to a directory destination with a path
# of `/var/lib/teleport/bot`.
#
# See the full list of supported destinations and their configuration options
# under the Destinations section of this reference page.
storage:
  type: directory
  path: /var/lib/teleport/bot

# outputs specifies what artifacts `tbot` should generate and renew when it
# runs.
#
# See the full list of supported outputs and their configuration options
# under the Outputs section of this reference page.
outputs:
  - type: identity
    destination:
      type: directory
      path: /opt/machine-id

# services specify which `tbot` sub-services should be enabled and how they
# should be configured.
#
# See the full list of supported services and their configuration options
# under the Services section of this reference page.
services:
  - type: example
```

If no configuration file is provided, a simple configuration is used based
on the provided CLI flags. Given the following sample CLI invocation from
`tctl bots add ...`:

```code
$ tbot start \
   --destination-dir=./tbot-user \
   --token=00000000000000000000000000000000 \
   --ca-pin=(=presets.ca_pin=) \
   --proxy-server=example.teleport.sh:443
```

`tbot` uses a configuration equivalent to the following:

```yaml
proxy_server: example.teleport.sh:443

onboarding:
  join_method: "token"
  token: "(=presets.tokens.first=)"
  ca_pins:
    - "(=presets.ca_pin=)"

storage:
  type: directory
  path: /var/lib/teleport/bot

outputs:
  - type: identity
    destination:
      type: directory
      path: ./tbot-user
```

## Outputs

Outputs define what actions `tbot` should take when it runs. They describe
the format of the certificates to be generated, the roles used to generate the
certificates, and the destination where they should be written.

There are multiple types of output. Select the one that is most appropriate for
your intended use case.

### `identity`

The `identity` output can be used to authenticate:

- SSH access to your Teleport servers, using `tsh`, OpenSSH, and tools like
  Ansible.
- Administrative actions against your cluster using tools like `tsh` or `tctl`.
- Management of Teleport resources using the Teleport Terraform provider.
- Access to the Teleport API using the Teleport Go SDK.
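Because the `identity` output writes a Teleport identity file, it can be consumed directly by `tsh` and `tctl`. A brief sketch (the host and proxy names are illustrative, assuming the default `/opt/machine-id` destination directory):

```code
# Run an SSH command as the bot's identity.
$ tsh ssh -i /opt/machine-id/identity --proxy=teleport.example.com:443 user@node01
# Perform an administrative action with tctl.
$ tctl -i /opt/machine-id/identity --auth-server=teleport.example.com:443 get roles
```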
See the [Getting Started guide](../../enroll-resources/machine-id/getting-started.mdx) to see the `identity`
output used in context.

```yaml
# type specifies the type of the output. For the identity output, this will
# always be `identity`.
type: identity
# ssh_config controls whether the identity output will attempt to generate an
# OpenSSH configuration file. This requires that `tbot` can connect to the
# Teleport Proxy Service. Must be "on" or "off". If unspecified, this defaults
# to "on".
ssh_config: on
# allow_reissue controls whether the certificates generated by the identity
# output can be reissued (e.g. used with `tsh apps login`/`tsh db login`). This
# defaults to `false` if unspecified. If you receive an error message indicating
# that the certificate cannot be reissued, set this to `true`.
allow_reissue: false

(!docs/pages/includes/machine-id/common-output-config.yaml!)
```

### `application`

The `application` output is used to generate credentials that can be used to
access applications that have been configured with Teleport.

See the [Machine ID with Applications guide](../../enroll-resources/machine-id/access-guides/applications.mdx)
to see the `application` output used in context.

```yaml
# type specifies the type of the output. For the application output, this will
# always be `application`.
type: application
# app_name specifies the application name, as configured in your Teleport
# cluster, that `tbot` should generate credentials for.
# This field must be specified.
app_name: grafana

(!docs/pages/includes/machine-id/common-output-config.yaml!)
```

### `database`

The `database` output is used to generate credentials that can be used to
access databases that have been configured with Teleport.

See the [Machine ID with Databases guide](../../enroll-resources/machine-id/access-guides/databases.mdx)
to see the `database` output used in context.
```yaml
# type specifies the type of the output. For the database output, this will
# always be `database`.
type: database
# service is the name of the database server, as configured in Teleport, that
# the output should generate credentials for. This field must be specified.
service: my-postgres-server
# database is the name of the specific database on the specified database
# server to generate credentials for. This field doesn't need to be specified
# for database types that don't support multiple individual databases.
database: my-database
# username is the name of the user on the specified database server to
# generate credentials for. This field doesn't need to be specified
# for database types that don't have users.
username: my-user
# format specifies the format to use for output artifacts. If
# unspecified, a default format is used. See the table titled "Supported
# formats" below for the full list of supported values.
format: tls

(!docs/pages/includes/machine-id/common-output-config.yaml!)
```

#### Supported formats

You can provide the following values to the `format` configuration field in
the `database` output type:

| `format`    | Description |
|-------------|-------------|
| Unspecified | Provides a certificate in `tlscert`, a private key in `key`, and the CA in `teleport-database-ca.crt`. This is compatible with most clients and databases. |
| `mongo`     | Provides `mongo.crt` and `mongo.cas`. This is designed to be used with MongoDB clients. |
| `cockroach` | Provides `cockroach/node.key`, `cockroach/node.crt`, and `cockroach/ca.crt`. This is designed to be used with CockroachDB clients. |
| `tls`       | Provides `tls.key`, `tls.crt`, and `tls.cas`. This is for generic clients that require these specific file extensions. |

### `kubernetes`

The `kubernetes` output is used to generate credentials that can be used to
access Kubernetes clusters that have been configured with Teleport.
It outputs a `kubeconfig.yaml` in the output destination, which can be used
with `kubectl`.

See the [Machine ID with Kubernetes Clusters guide](../../enroll-resources/machine-id/access-guides/kubernetes.mdx)
to see the `kubernetes` output used in context.

```yaml
# type specifies the type of the output. For the kubernetes output, this will
# always be `kubernetes`.
type: kubernetes
# kubernetes_cluster is the name of the Kubernetes cluster, as configured in
# Teleport, that the output should generate credentials and a kubeconfig for.
# This field must be specified.
kubernetes_cluster: my-cluster
# disable_exec_plugin disables the default behaviour of using the `tbot` binary
# as a `kubectl` credentials exec plugin. This is useful in environments where
# `tbot` does not exist on the system that will consume the generated kubeconfig
# (e.g. when using the `kubernetes_secret` output type). This credentials exec
# plugin is used to automatically refresh the credentials within a single
# invocation of `kubectl`. Defaults to `false`.
disable_exec_plugin: false

(!docs/pages/includes/machine-id/common-output-config.yaml!)
```

### `kubernetes/v2`

The `kubernetes/v2` output type can be used to access many Kubernetes clusters
as individual contexts within the same `kubeconfig.yaml`.

```yaml
type: kubernetes/v2

# selectors include one or more matching Kubernetes clusters. Each match will be
# included in the resulting `kubeconfig.yaml`, assuming the bot has permission
# to access the cluster.
selectors:
  # name includes an exact match by name. Note that wildcards are not currently
  # supported. Multiple name selectors can be specified if desired.
  - name: foo
  # labels include all clusters matching all of these labels. Multiple label
  # selectors can be provided if needed.
  - labels:
      env: dev

# The following configuration fields are available across most output types.
# Note that `roles` are not supported for this output type.

destination:
  type: directory
  path: /opt/machine-id

# credential_ttl and renewal_interval override the credential TTL and renewal
# interval for this specific output, so that you can make its certificates valid
# for shorter than `tbot`'s internal certificates.
#
# This is particularly useful when using `tbot` in one-shot mode as part of a
# cron job, where you need `tbot`'s internal certificate to live long enough to
# be renewed on the next invocation, but don't want long-lived workload
# certificates on disk.
credential_ttl: 30m
renewal_interval: 15m

# disable_exec_plugin disables the default behaviour of using the `tbot` binary
# as a `kubectl` credentials exec plugin. This is useful in environments where
# `tbot` does not exist on the system that will consume the generated kubeconfig
# (e.g. when using the `kubernetes_secret` output type). This credentials exec
# plugin is used to automatically refresh the credentials within a single
# invocation of `kubectl`. Defaults to `false`.
disable_exec_plugin: false
```

Each Kubernetes cluster matching a selector will result in a new context in the
generated `kubeconfig.yaml`. This can be consumed like so:

```code
$ kubectl --kubeconfig /opt/machine-id/kubeconfig.yaml --context=example.teleport.sh-foo get pods
```

The context name is `[Teleport cluster name]-[Kubernetes cluster name]`, so the
command above runs `kubectl get pods` on the `foo` cluster.

If clusters are added or removed over time, the `kubeconfig.yaml` will be
updated at the bot's normal renewal interval. You can trigger an early renewal
by restarting `tbot`, or by signaling it with `pkill -USR1 tbot`.

### `ssh_host`

The `ssh_host` output is used to generate the artifacts required to configure
an OpenSSH server with Teleport in order to allow Teleport users to connect to
it.
The output generates the following artifacts:

- `ssh_host-cert.pub`: an SSH certificate signed by the Teleport host certificate authority.
- `ssh_host`: the private key associated with the SSH host certificate.
- `ssh_host-user-ca.pub`: an export of the Teleport user certificate authority in an OpenSSH-compatible format.

```yaml
# type specifies the type of the output. For the ssh host output, this will
# always be `ssh_host`.
type: ssh_host
# principals is the list of host names to include in the host certificates.
# These names should match the names that clients use to connect to the host.
principals:
  - host.example.com

(!docs/pages/includes/machine-id/common-output-config.yaml!)
```

### `workload-identity-x509`

The `workload-identity-x509` output is used to issue an X509 workload identity
credential and write it to a configured destination.

The output generates the following artifacts:

- `svid.pem`: the X509 SVID.
- `svid.key`: the private key associated with the X509 SVID.
- `bundle.pem`: the X509 bundle that contains the trust domain CAs.

See the [Workload Identity introduction](../../enroll-resources/workload-identity/introduction.mdx)
for more information on Workload Identity functionality.

```yaml
# type specifies the type of the output. For the X509 Workload Identity output,
# this will always be `workload-identity-x509`.
type: workload-identity-x509
(!docs/pages/includes/machine-id/workload-identity-selector-config.yaml!)
(!docs/pages/includes/machine-id/common-output-config.yaml!)
```

### `workload-identity-jwt`

The `workload-identity-jwt` output is used to issue a JWT workload identity
credential and write it to a configured destination.

The JWT workload identity credential is compatible with the [SPIFFE JWT SVID
specification](https://github.com/spiffe/spiffe/blob/main/standards/JWT-SVID.md).

The output generates the following artifacts:

- `jwt_svid`: the JWT SVID.
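Since the `jwt_svid` artifact is a standard JWT, a workload can present it to another service as a bearer token. A minimal sketch (the endpoint URL is illustrative, assuming a `/opt/machine-id` destination directory):

```code
$ curl -H "Authorization: Bearer $(cat /opt/machine-id/jwt_svid)" https://api.example.com/
```

The receiving service is expected to verify the token's signature against the trust domain's keys and check that the `aud` claim matches its configured audience.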
See the [Workload Identity introduction](../../enroll-resources/workload-identity/introduction.mdx)
for more information on Workload Identity functionality.

```yaml
# type specifies the type of the output. For the JWT Workload Identity output,
# this will always be `workload-identity-jwt`.
type: workload-identity-jwt
# audiences specifies the values that should be included in the `aud` claim of
# the JWT. Typically, this identifies the intended recipient of the JWT and
# contains a single value.
#
# At least one audience value must be specified.
audiences:
  - example.com
  - foo.example.com
(!docs/pages/includes/machine-id/workload-identity-selector-config.yaml!)
(!docs/pages/includes/machine-id/common-output-config.yaml!)
```

### `workload-identity-aws-roles-anywhere`

The `workload-identity-aws-roles-anywhere` output is used to issue an X509
workload identity credential, exchange it for short-lived AWS credentials
using Roles Anywhere, and write these to a configured destination.

The credentials are written in the [AWS shared credentials file format](https://docs.aws.amazon.com/sdkref/latest/guide/file-format.html#file-format-creds),
which is compatible with the AWS CLI and SDKs.

The output generates the following artifacts:

- `aws_credentials`: the CLI- and SDK-compatible AWS shared credentials file.

See the [Workload Identity introduction](../../enroll-resources/workload-identity/introduction.mdx)
for more information on Workload Identity functionality.

```yaml
# type specifies the type of the output. For the Workload Identity AWS Roles
# Anywhere output, this will always be `workload-identity-aws-roles-anywhere`.
type: workload-identity-aws-roles-anywhere
# The following configuration fields are available across most output types.

# destination specifies where the output should write any generated artifacts
# such as certificates and configuration files.
#
# See the full list of supported destinations and their configuration options
# under the Destinations section of this reference page.
destination:
  type: directory
  path: /opt/machine-id
# role_arn is the ARN of the AWS role that the generated credentials should
# assume.
# Required.
role_arn: arn:aws:iam::123456789012:role/example-role
# profile_arn is the ARN of the AWS profile to be used during the Roles Anywhere
# exchange.
# Required.
profile_arn: arn:aws:rolesanywhere:us-east-1:123456789012:profile/0000000-0000-0000-0000-00000000000
# trust_anchor_arn is the ARN of the trust anchor that should be used during the
# Roles Anywhere exchange.
# Required.
trust_anchor_arn: arn:aws:rolesanywhere:us-east-1:123456789012:trust-anchor/0000000-0000-0000-0000-000000000000
# region is the AWS region to use for the Roles Anywhere exchange. If omitted,
# this defaults to the region set by the `AWS_REGION` environment variable or
# the AWS configuration file.
region: us-east-1
# session_duration is the duration that the generated AWS credentials should be
# valid for. This may be up to 12 hours. If omitted, this defaults to 6 hours.
session_duration: 6h
# session_renewal_interval is the interval at which the AWS credentials should
# be renewed. This should be less than the session duration. If omitted, this
# defaults to 1 hour.
session_renewal_interval: 1h
# credential_profile_name is the name of the profile to write to in the AWS
# credentials file. If unspecified, this profile will be named `default`.
credential_profile_name: my-profile
# artifact_name is the name of the file that the AWS credentials should be
# written to. If unspecified, this defaults to `aws_credentials`.
artifact_name: my-credentials-file
# overwrite_credential_file controls whether the AWS credentials file should be
# overwritten if it already exists, or whether the profile added by tbot should
# be merged with any existing profiles in the file. If unspecified, this
# defaults to `false`.
overwrite_credential_file: false
(!docs/pages/includes/machine-id/workload-identity-selector-config.yaml!)
```

### `spiffe-svid`

The use of this output has been deprecated as part of the introduction of the
new Workload Identity configuration experience. You can replace the use of this
output with the new `workload-identity-x509` or `workload-identity-jwt` output.

For further information, see [the new Workload Identity configuration experience
and how to migrate](../workload-identity/configuration-resource-migration.mdx).

The `spiffe-svid` output is used to generate a SPIFFE X509 SVID and write it
to a configured destination.

The output generates the following artifacts:

- `svid.pem`: the X509 SVID.
- `svid.key`: the private key associated with the X509 SVID.
- `bundle.pem`: the X509 bundle that contains the trust domain CAs.

An artifact will also be generated for each entry within the `jwts` list. Each
is named according to `file_name` and contains only the JWT-SVID with the
audience specified in `audience`.

See [Workload Identity](../../enroll-resources/workload-identity/introduction.mdx) for more information on how
to use SPIFFE SVIDs.

```yaml
# type specifies the type of the output. For the SPIFFE SVID output, this will
# always be `spiffe-svid`.
type: spiffe-svid
# svid specifies the properties of the SPIFFE SVID that should be requested.
svid:
  # path specifies the path element to request for the SPIFFE ID.
  path: /svc/foo
  # sans specifies optional Subject Alternative Names (SANs) to include in the
  # generated X509 SVID. If omitted, no SANs are included.
  sans:
    # dns specifies the DNS SANs. If omitted, no DNS SANs are included.
    dns:
      - foo.svc.example.com
    # ip specifies the IP SANs. If omitted, no IP SANs are included.
    ip:
      - 10.0.0.1
  # jwts controls the output of JWT-SVIDs. Each entry will be generated as a
  # separate artifact. If omitted, no JWT-SVIDs are generated.
  jwts:
    # audience specifies the audience that the JWT-SVID should be issued for.
    # This typically identifies the service that the JWT-SVID will be used to
    # authenticate to.
    - audience: https://example.com
      # file_name specifies the name of the file that the JWT-SVID should be
      # written to.
      file_name: example-jwt

(!docs/pages/includes/machine-id/common-output-config.yaml!)
```

## Services

Services are configurable long-lived components that run within `tbot`. Unlike
outputs, they do not necessarily generate artifacts. Typically, services
provide supporting functionality for machine-to-machine access, for example,
opening tunnels or providing APIs.

### `workload-identity-api`

The `workload-identity-api` service opens a listener that provides a local
workload identity API, intended to serve workload identity credentials
(e.g. X509/JWT SPIFFE SVIDs) to workloads running on the same host.

For more information, see the
[Workload Identity API and Workload Attestation reference](../workload-identity/workload-identity-api-and-workload-attestation.mdx).

### `spiffe-workload-api`

The use of this service has been deprecated as part of the introduction of the
new Workload Identity configuration experience. You can replace the use of this
service with the new `workload-identity-api` service.

For further information, see [the new Workload Identity configuration experience
and how to migrate](../workload-identity/configuration-resource-migration.mdx).

The `spiffe-workload-api` service opens a listener for a service that implements
the SPIFFE Workload API. This service is used to provide SPIFFE SVIDs to
workloads.

See [Workload Identity](../../enroll-resources/workload-identity/introduction.mdx) for more information on the
SPIFFE Workload API.

```yaml
# type specifies the type of the service. For the SPIFFE Workload API service,
# this will always be `spiffe-workload-api`.
type: spiffe-workload-api
# listen specifies the address that the service should listen on.
#
# Two types of listener are supported:
# - TCP: `tcp://<address>:<port>`
# - Unix socket: `unix:///<path>`
listen: unix:///opt/machine-id/workload.sock
# attestors allows Workload Attestation to be configured for this Workload
# API.
attestors:
  # docker is configuration for the Docker Workload Attestor. See the Workload
  # Identity API & Workload Attestation reference for more information.
  docker:
    # enabled specifies whether the workload's identity should be attested with
    # information about its Docker container. If unspecified, this defaults to
    # false.
    enabled: true
    # addr is the address at which the Docker Engine daemon can be reached. It
    # must be in the form `unix://path/to/socket`, as connecting via TCP is not
    # currently supported. If unspecified, this defaults to the standard socket
    # location for "rootful" Docker installations: `unix:///var/run/docker.sock`.
    addr: unix:///var/run/docker.sock
  # kubernetes is configuration for the Kubernetes Workload Attestor. See
  # the Kubernetes Workload Attestor section for more information.
  kubernetes:
    # enabled specifies whether the Kubernetes Workload Attestor should be
    # enabled. If unspecified, this defaults to false.
    enabled: true
    # kubelet holds configuration relevant to the Kubernetes Workload Attestor's
    # interaction with the Kubelet API.
    kubelet:
      # read_only_port is the port on which the Kubelet API is exposed for
      # read-only operations. Since Kubernetes 1.16, the read-only port is
      # typically disabled by default and secure_port should be used instead.
      read_only_port: 10255
      # secure_port is the port on which the attestor should connect to the
      # Kubelet secure API. If unspecified, this defaults to `10250`. This is
      # mutually exclusive with read_only_port.
      secure_port: 10250
      # token_path is the path to the token file that the Kubelet API client
      # should use to authenticate with the Kubelet API. If unspecified, this
      # defaults to `/var/run/secrets/kubernetes.io/serviceaccount/token`.
      token_path: "/var/run/secrets/kubernetes.io/serviceaccount/token"
      # ca_path is the path to the CA file that the Kubelet API client should
      # use to validate the Kubelet API server's certificate. If unspecified,
      # this defaults to `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.
      ca_path: "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
      # skip_verify is used to disable verification of the Kubelet API server's
      # certificate. If unspecified, this defaults to false.
      #
      # If set, the value specified in ca_path is ignored.
      #
      # This is useful in cases where the Kubelet API server has not been issued
      # a certificate signed by the Kubernetes cluster's CA, which is fairly
      # common with a number of Kubernetes distributions.
      skip_verify: true
      # anonymous is used to disable authentication with the Kubelet API. If
      # unspecified, this defaults to false. If set, the token_path field is
      # ignored.
      anonymous: false
  # podman is configuration for the Podman Workload Attestor. See the Workload
  # Identity API & Workload Attestation reference for more information.
  podman:
    # enabled specifies whether the workload's identity should be attested with
    # information about its Podman container and pod. If unspecified, this
    # defaults to false.
    enabled: true
    # addr is the address at which the Podman API Service can be reached. It
    # must be in the form `unix://path/to/socket`, as connecting via TCP is not
    # supported. This field is required and there is no default value. See the
    # Workload Identity API & Workload Attestation reference for more
    # information.
    addr: unix:///run/podman/podman.sock
  # sigstore is configuration for the Sigstore Workload Attestor. See the
  # Sigstore Workload Attestation page for more information.
  sigstore:
    # enabled specifies whether tbot will discover Sigstore signatures for the
    # workload's container image. If unspecified, this defaults to false.
    enabled: true
    # additional_registries optionally configures the OCI registries that will
    # be searched for signatures in addition to the workload container image's
    # source registry.
    additional_registries:
      # host of the OCI registry.
      - host: ghcr.io
        # credentials_path is the path to a Docker or Podman configuration file
        # containing per-registry credentials.
        credentials_path: /path/to/docker/config.json
    # allowed_private_network_prefixes are the private IP address prefixes (CIDR
    # blocks) that the Sigstore attestor is allowed to connect to. By default,
    # tbot will only connect to registries at publicly-routable IP addresses to
    # reduce the surface area for SSRF attacks.
    allowed_private_network_prefixes:
      - "192.168.1.42/32"
      - "fd12:3456:789a:1::1/128"
  # systemd is configuration for the Systemd Workload Attestor. See the Workload
  # Identity API & Workload Attestation reference for more information.
  systemd:
    # enabled specifies whether the workload's identity should be attested with
    # information about its Systemd service. If unspecified, this defaults to
    # false.
    enabled: true
  # unix is configuration for the Unix Workload Attestor.
  unix:
    # binary_hash_max_size_bytes is the maximum number of bytes that will be
    # read from a process's binary to calculate its SHA-256 checksum. If the
    # binary is larger than this, the `workload.unix.binary_hash` attribute
    # will be empty. If unspecified, this defaults to 1GiB. Set it to -1 to
    # make it unlimited.
    binary_hash_max_size_bytes: 1024
# svids specifies the SPIFFE SVIDs that the Workload API should provide.
svids:
  # path specifies the path element to request for the SPIFFE ID.
  - path: /svc/foo
    # hint is a free-form string which can be used to help workloads determine
    # which SVID to select when multiple are available. If omitted, no hint is
    # included.
- hint: my-hint - # sans specifies optional Subject Alternative Names (SANs) to include in the - # generated X509 SVID. If omitted, no SANs are included. - sans: - # dns specifies the DNS SANs. If omitted, no DNS SANs are included. - dns: - - foo.svc.example.com - # ip specifies the IP SANs. If omitted, no IP SANs are included. - ip: - - 10.0.0.1 - # rules specifies a list of workload attestation rules. At least one of - # these rules must be satisfied by the workload in order for it to receive - # this SVID. - # - # If no rules are specified, the SVID will be issued to all workloads that - # connect to this service. - rules: - # unix is a group of workload attestation criteria that are available - # when the workload is running on the same host, and is connected to - # the Workload API using a Unix socket. - # - # If any of the criteria in this group are specified, then workloads - # that do not connect using a Unix socket will not receive this SVID. - - unix: - # uid is the ID of the user that the workload process must be running - # as to receive this SVID. - # - # If unspecified, the UID is not checked. - uid: 1000 - # pid is the ID that the workload process must have to receive this - # SVID. - # - # If unspecified, the PID is not checked. - pid: 1234 - # gid is the ID of the primary group that the workload process must be - # running as to receive this SVID. - # - # If unspecified, the GID is not checked. - gid: 50 -``` - -#### Envoy SDS - -The `spiffe-workload-api` service endpoint also implements the Envoy SDS API. -This allows it to act as a source of certificates and certificate authorities -for the Envoy proxy. - -As a forward proxy, Envoy can be used to attach an X.509 SVID to an outgoing -connection from a workload that is not SPIFFE-enabled. - -As a reverse proxy, Envoy can be used to terminate mTLS connections from -SPIFFE-enabled clients. 
Envoy can validate that the client has presented a valid -X.509 SVID and perform enforcement of authorization policies based on the SPIFFE -ID contained within the SVID. - -When acting as a reverse proxy for certain protocols, Envoy can be configured -to attach a header indicating the identity of the client to a request before -forwarding it to the service. This can then be used by the service to make -authorization decisions based on the client's identity. - -When configuring Envoy to use the SDS API exposed by the `spiffe-workload-api` -service, three additional special names can be used to aid configuration: - -- `default`: `tbot` will return the default SVID for the workload. -- `ROOTCA`: `tbot` will return the trust bundle for the trust domain that the -workload is a member of. -- `ALL`: `tbot` will return the trust bundle for the trust domain that the -workload is a member of, as well as the trust bundles of any trust domain -that the trust domain is federated with. - -The following is an example Envoy configuration that sources a certificate -and trust bundle from the `spiffe-workload-api` service listening on -`unix:///opt/machine-id/workload.sock`. It requires that a connecting client -presents a valid SPIFFE SVID and forwards this information to the backend -service in the `x-forwarded-client-cert` header. 
- -```yaml -node: - id: "my-envoy-proxy" - cluster: "my-cluster" -static_resources: - listeners: - - name: test_listener - enable_reuse_port: false - address: - socket_address: - address: 0.0.0.0 - port_value: 8080 - filter_chains: - - filters: - - name: envoy.filters.network.http_connection_manager - typed_config: - "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager - common_http_protocol_options: - idle_timeout: 1s - forward_client_cert_details: sanitize_set - set_current_client_cert_details: - uri: true - stat_prefix: ingress_http - route_config: - name: local_route - virtual_hosts: - - name: my_service - domains: ["*"] - routes: - - match: - prefix: "/" - route: - cluster: my_service - http_filters: - - name: envoy.filters.http.router - typed_config: - "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router - transport_socket: - name: envoy.transport_sockets.tls - typed_config: - "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext - common_tls_context: - # configure the certificate that the reverse proxy should present. - tls_certificate_sds_secret_configs: - # `name` can be replaced with the desired SPIFFE ID if multiple - # SVIDs are available. - - name: "default" - sds_config: - resource_api_version: V3 - api_config_source: - api_type: GRPC - transport_api_version: V3 - grpc_services: - envoy_grpc: - cluster_name: tbot_agent - # combined validation context "melds" two validation contexts - # together. This is handy for extending the validation context - # from the SDS source. - combined_validation_context: - default_validation_context: - # You can use match_typed_subject_alt_names to configure - # rules that only allow connections from specific SPIFFE IDs. 
- match_typed_subject_alt_names: []
- validation_context_sds_secret_config:
- name: "ALL" # This can also be replaced with the trust domain name
- sds_config:
- resource_api_version: V3
- api_config_source:
- api_type: GRPC
- transport_api_version: V3
- grpc_services:
- envoy_grpc:
- cluster_name: tbot_agent
- clusters:
- # my_service is the example service that Envoy will forward traffic to.
- - name: my_service
- type: strict_dns
- load_assignment:
- cluster_name: my_service
- endpoints:
- - lb_endpoints:
- - endpoint:
- address:
- socket_address:
- address: 127.0.0.1
- port_value: 8090
- - name: tbot_agent
- http2_protocol_options: {}
- load_assignment:
- cluster_name: tbot_agent
- endpoints:
- - lb_endpoints:
- - endpoint:
- address:
- pipe:
- # Configure the path to the socket that `tbot` is
- # listening on.
- path: /opt/machine-id/workload.sock
-```
-
-### `database-tunnel`
-
-The `database-tunnel` service opens a listener for a service that tunnels
-connections to a database server.
-
-The tunnel authenticates connections for the client, meaning that any
-application that can connect to the listener will be able to connect to the
-database as the specified user. For this reason, we strongly recommend using the
-Unix socket listener type and configuring the permissions of the socket to
-ensure that only the intended applications can connect.
-
-```yaml
-# type specifies the type of the service. For the database tunnel service, this
-# will always be `database-tunnel`.
-type: database-tunnel
-# listen specifies the address that the service should listen on.
-#
-# Two types of listener are supported:
-# - TCP: `tcp://<address>:<port>`
-# - Unix socket: `unix:///<path>`
-listen: tcp://127.0.0.1:25432
-# service is the name of the database server, as configured in Teleport, that
-# the service should open a tunnel to.
-service: postgres-docker
-# database is the name of the specific database on the specified database
-# service.
-database: postgres
-# username is the name of the user on the specified database server to open a
-# tunnel for.
-username: postgres
-```
-
-### `application-tunnel`
-
-The `application-tunnel` service opens a listener that tunnels connections to
-an application in Teleport. It supports both HTTP and TCP applications. This is
-useful for applications that cannot be configured to use client certificates,
-for TCP applications, or when an L7 load-balancer sits in front of your
-Teleport Proxy Service instances.
-
-The tunnel authenticates connections for the client, meaning that any
-client that connects to the listener will be able to access the application.
-For this reason, ensure that the listener is only accessible by the intended
-clients by using the Unix socket listener or binding to `127.0.0.1`.
-
-```yaml
-# type specifies the type of the service. For the application tunnel service,
-# this will always be `application-tunnel`.
-type: application-tunnel
-# listen specifies the address that the service should listen on.
-#
-# Two types of listener are supported:
-# - TCP: `tcp://<address>:<port>`
-# - Unix socket: `unix:///<path>`
-listen: tcp://127.0.0.1:8084
-# app_name is the name of the application, as configured in Teleport, that
-# the service should open a tunnel to.
-app_name: my-application
-```
-
-### `ssh-multiplexer`
-
-The `ssh-multiplexer` service opens a listener for a high-performance local
-SSH multiplexer. This is designed for use cases that create a large number
-of SSH connections using Teleport, for example, Ansible.
-
-This differs from using the `identity` output for SSH in a few ways:
-
-- The `tbot` instance running the `ssh-multiplexer` service must be running on
- the same host as the SSH client.
-- The `ssh-multiplexer` service is designed to be a long-running background
- service and cannot be used in one-shot mode. It must be running in order for
- SSH connections to be established and to continue running.
-- Resource consumption is significantly reduced by multiplexing SSH connections
- through a smaller number of upstream connections to the Teleport Proxy Service.
-
-Additionally, the `ssh-multiplexer` opens a socket that implements the SSH
-agent protocol. This allows the SSH client to authenticate without writing the
-sensitive private key to disk.
-
-By default, the `ssh-multiplexer` service outputs an `ssh_config` which uses
-`tbot` itself as the ProxyCommand. You can further reduce the resource
-consumption of SSH connections by installing and specifying the
-`fdpass-teleport` binary.
-
-```yaml
-# type specifies the type of the service. For the SSH multiplexer service,
-# this will always be `ssh-multiplexer`.
-type: ssh-multiplexer
-# destination specifies where the tunnel should be opened and any artifacts
-# should be written. It must be of type `directory`.
-destination:
- type: directory
- path: /foo
-# enable_resumption specifies whether the multiplexer should negotiate
-# session resumption. This allows SSH connections to survive network
-# interruptions. It does increase the memory resources used per connection.
-#
-# If unspecified, this defaults to true.
-enable_resumption: true -# proxy_command specifies the command that should be used as the ProxyCommand -# in the generated SSH configuration. -# -# If unspecified, the ProxyCommand will be the currently running binary of tbot -# itself. -proxy_command: - - /usr/local/bin/fdpass-teleport -# proxy_templates_path specifies a path to a proxy templates configuration file -# which should be used when resolving the Teleport node to connect to. This -# file must be accessible by the long-lived tbot process running the -# ssh-multiplexer. -# -# If unspecified, proxy templates will not be used. -proxy_templates_path: /etc/my-proxy-templates.yaml -``` - -Once configured, `tbot` will create the following artifacts in the specified -destination: - -- `ssh_config`: an SSH configuration file that will configure OpenSSH to use - the multiplexer and agent. -- `known_hosts`: the known hosts file that will be used by OpenSSH to validate - a server's identity. -- `v1.sock`: the Unix socket that the multiplexer listens on. -- `agent.sock`: the Unix socket that the SSH agent listens on. - -#### Using the SSH multiplexer programmatically - -To use the SSH multiplexer programmatically, your SSH client library will need -to support one of two things: - -- The ability to use a ProxyCommand with FDPass. If so, you can use the - `ssh_config` file generated by `tbot` to configure the SSH client. -- The ability to accept an open socket to use as the connection to the SSH - server. You will then need to manually connect to the socket and send the - multiplexer request. - -The `v1.sock` Unix Domain Socket implements the V1 Teleport SSH multiplexer -protocol. The client must first send a short request message to indicate the -desired target host and port, terminated with a null byte. The multiplexer will -then begin to forward traffic to the target host and port. The client can then -make an SSH connection. - -
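To make the request framing concrete, here is a minimal Python sketch of building the multiplexer request message described above (the host and port values are placeholders for illustration):

```python
def mux_request(host: str, port: int) -> bytes:
    # The V1 multiplexer request is "host:port" terminated by a single null byte.
    return f"{host}:{port}\x00".encode("utf-8")

# Placeholder target host and port.
print(mux_request("ubuntu.example.teleport.sh", 3022))
```

After sending these bytes over the `v1.sock` connection, the socket carries the SSH traffic directly, as the complete examples below demonstrate.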
-Example in Python (Paramiko) -```python -import os -import paramiko -import socket - -host = "ubuntu.example.teleport.sh" -username = "root" -port = 3022 -directory_destination = "/opt/machine-id" - -# Connect to Mux Unix Domain Socket -sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) -sock.connect(os.path.join(directory_destination, "v1.sock")) -# Send the connection request specifying the server you wish to connect to -sock.sendall(f"{host}:{port}\x00".encode("utf-8")) - -# We must set the env var as Paramiko does not make this configurable... -os.environ["SSH_AUTH_SOCK"] = os.path.join(directory_destination, "agent.sock") - -ssh_config = paramiko.SSHConfig() -with open(os.path.join(directory_destination, "ssh_config")) as f: - ssh_config.parse(f) - -ssh_client = paramiko.SSHClient() - -# Paramiko does not support known_hosts with CAs: https://github.com/paramiko/paramiko/issues/771 -# Therefore, we must disable host key checking -ssh_client.set_missing_host_key_policy(paramiko.WarningPolicy()) - -ssh_client.connect( - hostname=host, - port=port, - username=username, - sock=sock -) - -stdin, stdout, stderr = ssh_client.exec_command("hostname") -print(stdout.read().decode()) -``` -
- -
-Example in Go -```go -package main - -import ( - "fmt" - "net" - "path/filepath" - - "golang.org/x/crypto/ssh" - "golang.org/x/crypto/ssh/agent" - "golang.org/x/crypto/ssh/knownhosts" -) - -func main() { - host := "ubuntu.example.teleport.sh" - username := "root" - directoryDestination := "/opt/machine-id" - - // Setup Agent and Known Hosts - agentConn, err := net.Dial( - "unix", filepath.Join(directoryDestination, "agent.sock"), - ) - if err != nil { - panic(err) - } - defer agentConn.Close() - agentClient := agent.NewClient(agentConn) - hostKeyCallback, err := knownhosts.New( - filepath.Join(directoryDestination, "known_hosts"), - ) - if err != nil { - panic(err) - } - - // Create SSH Config - sshConfig := &ssh.ClientConfig{ - Auth: []ssh.AuthMethod{ - ssh.PublicKeysCallback(agentClient.Signers), - }, - User: username, - HostKeyCallback: hostKeyCallback, - } - - // Dial Unix Domain Socket and send multiplexing request - conn, err := net.Dial( - "unix", filepath.Join(directoryDestination, "v1.sock"), - ) - if err != nil { - panic(err) - } - defer conn.Close() - _, err = fmt.Fprint(conn, fmt.Sprintf("%s:0\x00", host)) - if err != nil { - panic(err) - } - - sshConn, sshChan, sshReq, err := ssh.NewClientConn( - conn, - // Port here doesn't matter because Multiplexer has already established - // connection. - fmt.Sprintf("%s:22", host), - sshConfig, - ) - if err != nil { - panic(err) - } - sshClient := ssh.NewClient(sshConn, sshChan, sshReq) - defer sshClient.Close() - - sshSess, err := sshClient.NewSession() - if err != nil { - panic(err) - } - defer sshSess.Close() - - out, err := sshSess.CombinedOutput("hostname") - if err != nil { - panic(err) - } - fmt.Println(string(out)) -} -``` -
- -## Destinations - -A destination is somewhere that `tbot` can read and write artifacts. - -Destinations are used in two places in the `tbot` configuration: - -- Specifying where `tbot` should store its internal state. -- Specifying where an output should write its generated artifacts. - -Destinations come in multiple types. Usually, the `directory` type is the most -appropriate. - -### `directory` - -The `directory` destination type stores artifacts as files in a specified -directory. - -```yaml -# type specifies the type of the destination. For the directory destination, -# this will always be `directory`. -type: directory - -# path specifies the path to the directory that this destination should write -# to. This directory should already exist, or `tbot init` should be used to -# create it with the correct permissions. -path: /opt/machine-id - -# symlinks configures the behaviour of symlink attack prevention. -# Requires Linux 5.6+. -# Supported values: -# * try-secure (default): Attempt to securely read and write certificates -# without symlinks, but fall back (with a warning) to insecure read -# and write if the host doesn't support this. -# * secure: Attempt to securely read and write certificates, with a hard error -# if unsupported. -# * insecure: Quietly allow symlinks in paths. -symlinks: try-secure - -# acls configures whether Linux Access Control List (ACL) setup should occur for -# this destination. -# Requires Linux with a file system that supports ACLs. -# Supported values: -# * try (default on Linux): Attempt to use ACLs, warn at runtime if ACLs -# are configured but invalid. -# * off (default on non-Linux): Do not attempt to use ACLs. -# * required: Always use ACLs, produce a hard error at runtime if ACLs -# are invalid. -acls: try - -# readers is a list of users and groups that will be allowed by ACL to access -# this directory output. The `acls` parameter must be either `try` or -# `required`. 
File ACLs will be monitored and corrected at runtime to ensure -# they match this configuration. -# Individual entries may either specify `user` or `group`, but not both. `user` -# accepts an existing named user or a UID, and `group` accepts an existing named -# group or GID. UIDs and GIDs do not necessarily need to exist on the local -# system. -# An empty list of readers disables runtime ACL management. -readers: -- user: teleport -- user: 123 -- group: teleport -- group: 456 -``` - -### `memory` - -The `memory` destination type stores artifacts in the process memory. When -the process exits, nothing is persisted. This destination type -is most suitable for ephemeral environments, but can also be used for testing. - -Configuration: - -```yaml -# type specifies the type of the destination. For the memory destination, this -# will always be `memory`. -type: memory -``` - -### `kubernetes_secret` - -The `kubernetes_secret` destination type stores artifacts in a Kubernetes -secret. This allows them to be mounted into other containers deployed in -Kubernetes. - -Prerequisites: - -- `tbot` must be running in Kubernetes with at most one replica. If using a - `deployment`, then the `Recreate` strategy must be used to ensure only one - instance exists at any time. This is because multiple `tbot` agents configured - with the same secret will compete to write to the secret and it may be left - in an inconsistent state or the `tbot` agents may fail to write. -- The `tbot` pod must be configured with a service account that allows it to - read and write from the configured secret. -- The `POD_NAMESPACE` environment variable must be configured with the - name of the namespace that `tbot` is running in. This is best achieved with - the [Downward API](https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/). - -There is no requirement that the secret already exists, one will be created -if it does not exist. 
If a secret already exists, `tbot` will overwrite any -other keys within the secret. - -Configuration: - -```yaml -# type specifies the type of the destination. For the kubernetes_secret -# destination, this will always be `kubernetes_secret`. -type: kubernetes_secret -# name specifies the name of the Kubernetes Secret to write the artifacts to. -# This must be in the same namespace that `tbot` is running in. -name: my-secret -``` - -## Bot resource - -The `bot` resource is used to manage Machine ID Bots. It is used to configure -the access that is granted to a Bot. - -(!docs/pages/includes/machine-id/bot-spec.mdx!) - -You can apply a file containing YAML that defines a `bot` resource using -`tctl create -f ./bot.yaml`. diff --git a/docs/pages/reference/machine-id/diagnostics-service.mdx b/docs/pages/reference/machine-id/diagnostics-service.mdx deleted file mode 100644 index 23859771f51d8..0000000000000 --- a/docs/pages/reference/machine-id/diagnostics-service.mdx +++ /dev/null @@ -1,79 +0,0 @@ ---- -title: Diagnostics Service -description: Reference information for the `tbot` diagnostics service. ---- - -The `tbot` process can optionally expose a diagnostics service. This is -disabled by default, but once enabled, allows useful information about the -running `tbot` process to be queried via HTTP. - -## Configuration - -To enable the diagnostics service, you must specify an address and port for -it to listen on. - -For security reasons, you should ensure that access to this listener is -restricted. In most cases, the most secure thing to do is to bind the listener -to `127.0.0.1`, which will only allow access from the local machine. 
- -You can configure the diagnostics service using the `--diag-addr` CLI parameter: - -```code -$ tbot start -c my-config.yaml --diag-addr 127.0.0.1:3001 -``` - -Or directly within the configuration file using `diag_addr`: - -```yaml -diag_addr: 127.0.0.1:3001 -``` - -## Endpoints - -The diagnostics service exposes the following HTTP endpoints. - -### `/livez` - -The `/livez` endpoint always returns with a 200 status code. This can be used -to determine if the `tbot` process is running and has not crashed or hung. - -If deploying to Kubernetes, we recommend this endpoint is used for your -Liveness Probe. - -### `/readyz` - -The `/readyz` endpoint currently returns the same information as `/livez`. - -In the future, this endpoint will be expanded to indicate whether the internal -components of `tbot` have been able to generate certificates and are ready -to serve requests. - -### `/metrics` - -The `/metrics` endpoint returns a Prometheus-compatible metrics snapshot. - -The metrics provided by the Go runtime are exposed and these can be used for -monitoring the overall health of the `tbot` process. In addition, certain -outputs and services configured in `tbot` will produce metrics. - -#### `ssh-multiplexer` - -The SSH multiplexer service exposes metrics about the number of active SSH -connections: - -- `tbot_ssh_multiplexer_requests_started_total`: the total number of SSH - connections that have been started. -- `tbot_ssh_multiplexer_requests_handled_total`: the total number of SSH - connections that have been handled. This has a `status` label with the - following values: `OK` or `ERROR`. -- `tbot_ssh_multiplexer_requests_in_flight`: the number of SSH connections - currently in progress. - -### `/debug/pprof` - -These endpoints allow the collection of pprof profiles for debugging purposes. -You may be asked by a Teleport engineer to collect these if you are experiencing -performance issues. 
-
-They will only be enabled if the `-d`/`--debug` flag is provided when starting
-`tbot`. This is known as **debug mode**.
\ No newline at end of file
diff --git a/docs/pages/reference/machine-id/github-actions.mdx b/docs/pages/reference/machine-id/github-actions.mdx
deleted file mode 100644
index 86a033d56ef3e..0000000000000
--- a/docs/pages/reference/machine-id/github-actions.mdx
+++ /dev/null
@@ -1,45 +0,0 @@
----
-title: GitHub Actions
-description: Reference for GitHub Actions joining
----
-
-This document acts as a reference for GitHub Actions and Machine ID. You will find
-links to in-depth guides on using GitHub Actions and a full explanation of the
-configuration options available when using the GitHub join method.
-
-## Guides
-
-You can read step-by-step guides on using Machine ID and GitHub Actions:
-
-- [Using Machine ID with GitHub Actions](../../enroll-resources/machine-id/deployment/github-actions.mdx): How to
- use Machine ID to SSH into Teleport nodes from GitHub Actions.
-
-## GitHub join token
-
-The `token` resource sets out rules for what is allowed to join a Teleport
-cluster. Joining clients must specify which `token` they want to use, and then
-information included in their join request is compared to the rules contained
-within the token by the Auth Service to determine whether or not they should be
-admitted.
-
-The following snippet shows all available options in the `token` resource when
-used for GitHub joining with Machine ID:
-
-(!docs/pages/includes/provision-token/github-spec.mdx!)
-
-## GitHub Actions helpers
-
-We offer a series of off-the-shelf GitHub Actions to use in your workflows when
-utilizing Teleport Machine ID and GitHub Actions.
-More information about these individual actions can be found in their GitHub
-repositories:
-
-- [https://github.com/teleport-actions/setup](https://github.com/teleport-actions/setup)
-- [https://github.com/teleport-actions/auth](https://github.com/teleport-actions/auth)
-- [https://github.com/teleport-actions/auth-k8s](https://github.com/teleport-actions/auth-k8s)
-- [https://github.com/teleport-actions/auth-application](https://github.com/teleport-actions/auth-application)
-
-If you experience problems when using these actions, please raise an issue in
-their source repository:
-[https://github.com/teleport-actions/root](https://github.com/teleport-actions/root).
diff --git a/docs/pages/reference/machine-id/gitlab.mdx b/docs/pages/reference/machine-id/gitlab.mdx
deleted file mode 100644
index e137e6dafd9ec..0000000000000
--- a/docs/pages/reference/machine-id/gitlab.mdx
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title: GitLab CI
-description: Reference for GitLab joining
----
-
-This document acts as a reference for GitLab CI and Machine ID. You will find
-links to in-depth guides as well as a full description of the configuration
-options available when using the GitLab join method.
-
-## Guides
-
-You can read step-by-step guides on using Machine ID and GitLab CI:
-
-- [Using Machine ID with GitLab](../../enroll-resources/machine-id/deployment/gitlab.mdx): How to
- use Machine ID to SSH into Teleport nodes from GitLab CI.
-
-## GitLab join token
-
-A GitLab join token contains allow rules that describe which pipelines can
-use that token in order to join the Teleport cluster. A rule can contain
-multiple fields, and any pipeline that matches all of the fields within a
-single rule is granted access.
-
-(!docs/pages/includes/provision-token/gitlab-spec.mdx!)
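The matching behaviour described above (a pipeline is granted access if it satisfies every field of at least one allow rule) can be sketched in Python. This is an illustrative model only, not Teleport's actual implementation, and the rule field names are hypothetical:

```python
def pipeline_allowed(claims: dict, allow_rules: list[dict]) -> bool:
    # Access is granted if all fields of any single rule match the
    # pipeline's claims; unmatched rules are simply skipped.
    return any(
        all(claims.get(field) == value for field, value in rule.items())
        for rule in allow_rules
    )

# Hypothetical allow rules for illustration.
rules = [
    {"project_path": "my-group/my-project", "ref": "main"},
    {"project_path": "my-group/other-project"},
]
print(pipeline_allowed({"project_path": "my-group/my-project", "ref": "main"}, rules))
```

Note that the second rule constrains only `project_path`, so any ref of that project would match it, which is why keeping each rule as specific as possible is advisable.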
diff --git a/docs/pages/reference/machine-id/machine-id.mdx b/docs/pages/reference/machine-id/machine-id.mdx
deleted file mode 100644
index 69ad4cf34eb20..0000000000000
--- a/docs/pages/reference/machine-id/machine-id.mdx
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Machine ID References
-description: Configuration and CLI reference for Teleport Machine ID.
----
-
-- [Configuration](configuration.mdx)
-- [Diagnostics Service](diagnostics-service.mdx)
-- [GitHub Actions](github-actions.mdx)
-- [GitLab CI](gitlab.mdx)
-- [CLI](../../reference/cli/tbot.mdx)
-- [Telemetry](telemetry.mdx)
-- [V16 Upgrade Guide](v16-upgrade-guide.mdx)
-- [Bot Terraform Resource](../../reference/terraform-provider/resources/bot.mdx)
diff --git a/docs/pages/reference/machine-id/v16-upgrade-guide.mdx b/docs/pages/reference/machine-id/v16-upgrade-guide.mdx
deleted file mode 100644
index 5830d5c7cb58d..0000000000000
--- a/docs/pages/reference/machine-id/v16-upgrade-guide.mdx
+++ /dev/null
@@ -1,66 +0,0 @@
----
-title: Machine ID v16 Upgrade Guide
-description: Upgrade instructions for Machine ID in Teleport 16.0
----
-
-Teleport 16.0 introduces a number of changes to Machine ID. These changes may require action on your part if you use Machine ID with OpenSSH or OpenSSH-based clients (e.g. Ansible).
-
-This guide explains how to update your Machine ID configuration for continued support in Teleport 16.0.
-
-## Changes to OpenSSH support and the `tbot proxy ssh` command
-
-When using Machine ID with an OpenSSH client, an `ssh_config` is generated and
-used to configure the OpenSSH client.
-
-Within the `ssh_config`, the `ProxyCommand` directive specifies a special
-command to be used to connect to the target host.
-
-Historically, this command would be `tbot proxy ssh`.
-
-We have now introduced a new command: `tbot ssh-proxy-command`. This has
-a number of benefits:
-
-- `tsh` is no longer required to be installed on the machine.
-- The amount of CPU and memory used during a connection is significantly reduced.
-- The time taken to establish a connection is significantly reduced.
-
-This command was introduced in a Teleport 15 release, and as of Teleport 16.0
-it is used by default in the generated `ssh_config`. From Teleport 17.0,
-the `tbot proxy ssh` command will no longer be supported and will be removed.
-
-### Actions required
-
-If you are using Machine ID with OpenSSH or OpenSSH-based clients, you may need
-to take action.
-
-#### Using the default `ssh_config`
-
-If you are using the default generated `ssh_config`, then no explicit action
-on your part should be necessary. From 16.0, the new command will be
-used automatically.
-
-We do recommend that you perform a test run when updating to ensure that
-everything is working as expected. You can revert to the old behaviour by
-setting the `TBOT_SSH_CONFIG_PROXY_COMMAND_MODE` environment variable to
-`legacy` in the environment in which you are running `tbot`:
-
-```code
-$ export TBOT_SSH_CONFIG_PROXY_COMMAND_MODE=legacy
-$ tbot start -c config.yaml
-```
-
-#### Using a modified `ssh_config`
-
-If you have modified the `ssh_config` and therefore do not use the default
-`ssh_config` generated by `tbot`, then you will need to manually update the
-file. You must do this before 17.0, but we recommend doing it in 16.0 to
-benefit from the performance improvements.
-
-To update your modified `ssh_config`, we recommend running an instance of `tbot`
-as you would usually configure it. This will generate a new `ssh_config`.
-Carefully inspect the `ProxyCommand` directive, and copy the new command to
-your modified `ssh_config`.
-
-You can find a full list of the parameters available for the
-`tbot ssh-proxy-command` command on the
-[CLI reference page](../cli/tbot.mdx).
\ No newline at end of file
diff --git a/docs/pages/reference/machine-workload-identity/bound-keypair/admin-guide.mdx b/docs/pages/reference/machine-workload-identity/bound-keypair/admin-guide.mdx
new file mode 100644
index 0000000000000..eb67b8aa1eeca
--- /dev/null
+++ b/docs/pages/reference/machine-workload-identity/bound-keypair/admin-guide.mdx
@@ -0,0 +1,338 @@
+---
+title: Bound Keypair Joining Admin Guide
+description: "How to deploy and maintain bots in production with Bound Keypair Joining"
+tags:
+ - mwi
+ - conceptual
+ - infrastructure-identity
+---
+
+This guide discusses various tasks that users administering bots with Bound
+Keypair Joining may need to perform over the lifespan of a bot.
+
+## Allowing additional recovery attempts
+
+When using the `standard` recovery mode, only a configured number of recovery
+attempts can be made. If the limit is reached, no further recovery attempts can
+be made until the limit is increased.
+
+To increase this limit and allow an expired bot to join again, edit the token
+using `tctl edit`:
+```code
+$ tctl edit token/example-token
+```
+
+Find the `spec.bound_keypair.recovery.limit` field and increment the limit by
+the desired amount. You are free to select any desired threshold. For example,
+consider these use cases:
+- If human intervention is desired for each join attempt, you can increase this
+ value by 1. This single recovery attempt will be immediately consumed, so
+ future recoveries will again require human intervention, and may result in
+ downtime.
+
+ While this approach makes downtime likely, it does ensure a human verifies the
+ state of the bot host on each recovery.
+
+- If you want human intervention for each recovery, but want to avoid downtime,
+ you can increase this value by 2. The first attempt will be consumed
+ immediately, but the bot will have one recovery attempt for automatic future
+ use.
+
+ A human user can periodically audit the recovery count and bot host to ensure
+ a recovery attempt is always available and the host is behaving as expected.
+
+- Any larger value will increase the amount of time required between human
+ interventions. You can select your tolerance for automatic bot recoveries as
+ desired.
+
+Alternatively, if you wish to allow an unlimited number of automatic recovery
+attempts, [refer to the entry below](#allowing-unlimited-recovery-attempts) on
+the `relaxed` recovery mode.
+
+Note that the recovery limit is always relative to the recovery counter (in the
+`status.bound_keypair.recovery_count` field in the token resource). It is valid
+to decrease the limit or set it to zero; however, doing so may prevent future
+bot recovery attempts until the limit is increased again.
+
+Additionally, note that [join state verification](concepts.mdx#join-state-verification)
+is still required, and will prevent multiple concurrent uses of the same keypair
+and token. In other words, increasing the recovery limit will not allow multiple
+clients to join.
+
+## Allowing unlimited recovery attempts
+
+To allow unlimited recovery attempts, the `spec.bound_keypair.recovery.mode`
+field should be set to `relaxed`. To do this, use `tctl edit` to edit the token:
+```code
+$ tctl edit token/example-token
+```
+
+Find or create the `spec.bound_keypair.recovery.mode` field and set the value to
+`relaxed`. Save the file and quit your editor to update the token.
+
+When the recovery mode is set to `relaxed`, the `limit` field is ignored and the
+`status.bound_keypair.recovery_count` field may increase beyond the written
+limit. If the mode is later changed back to `standard`, be aware that future
+recovery attempts will fail unless the `limit` is increased to accommodate the
+current value of `recovery_count`.
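For reference, after this edit the relevant portion of the token resource might look like the following sketch (unrelated fields are omitted, and `example-token` is a placeholder name; only the `spec.bound_keypair.recovery` fields shown here are discussed in this guide):

```yaml
kind: token
version: v2
metadata:
  name: example-token
spec:
  bound_keypair:
    recovery:
      # In relaxed mode, the limit field is ignored and recovery_count
      # may increase beyond it.
      mode: relaxed
```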
+
+Note that when `relaxed` mode is in use,
+[join state verification](concepts.mdx#join-state-verification) is still required and will
+prevent multiple concurrent uses of the same keypair and token. If your use case
+requires concurrent use, you can
+[disable join state verification](#disabling-join-state-verification), but doing
+so does impact the security of the token.
+
+## Requesting a keypair rotation
+
+To request a keypair rotation, set the `.spec.bound_keypair.rotate_after` field
+to contain a timestamp. On the next authentication attempt after that timestamp
+has elapsed, the bot will automatically rotate its keypair.
+
+To simplify this process, you can use the `tctl bound-keypair rotate` helper:
+```code
+$ tctl bound-keypair rotate token-name
+```
+
+This sets the timestamp to the current time. Note that by default bots only
+reauthenticate every 20 minutes, so it may take some time for the request to be
+acknowledged. You can monitor the rotation status by watching the token's
+`.status.bound_keypair.last_rotated_at` field.
+
+If you have access to the bot host and want to force an early rotation, you can
+restart the `tbot` process, or send it a signal with `pkill -USR1 tbot`.
+
+Note that the previous 10 keypairs are retained on the client for use in case of
+a cluster rollback; refer to the
+[cluster rollback](#recovery-after-a-cluster-rollback) section for additional
+information.
+
+## Locking a `bound_keypair` bot or bot instance
+
+The simplest way to lock out a bot that joined using the `bound_keypair` join
+method is to use a join token lock target:
+
+```code
+$ tctl lock --join-token=token-name
+```
+
+As a bound keypair token is linked to a single bot, this will effectively lock
+the bot. It will not be able to reauthenticate, recover, interact with the
+Teleport API, or otherwise use its credentials until the lock is removed.
+ +Note that if a bot is locked for long enough - bots have a 1 hour certificate +TTL by default - its certificates will expire. If you intend to remove this lock +and reinstate the bot, you may also need to increase the recovery limit +(`.spec.bound_keypair.recovery.limit`) to accommodate the additional recovery +attempt. + +Other lock targets can also be used, but are not preferred: +- Bot instance (`tctl lock --bot-instance-id ...`): will lock only a single + instance of the bot. Note that if the recovery limit allows for it, the + [automatic recovery process](concepts.mdx#recovery) will attempt to rejoin and, if + successful, will generate a new bot instance ID. +- Bot name (`tctl lock --user bot-`): will lock all bots using the same + bot / user. This may be overly broad and lock other instances running under + this bot user. + +## Recovering a locked `bound_keypair` bot instance + +Bots joined with the `bound_keypair` join method can become automatically locked +under various conditions, including: +- Failing to correctly complete [join state verification](concepts.mdx#join-state-verification) +- Connecting with certificates that have an invalid [generation counter][ephemeral] +- Locked manually by a cluster admin + +To recover a bot that has become locked, first ensure the bot's internal storage +(`storage`) has not been compromised. These locking conditions are designed to +trigger if more than one client tries to join using a copy of the same +certificates and private key. This can occur due to a misconfiguration or due +to an attacker copying a bot's credentials, so ideally the latter should be +ruled out before unlocking a bot. 
+ +Next, determine the name (UUID) of the lock or locks targeting the bot: +```code +$ tctl get lock +kind: lock +metadata: + name: 372af058-76d1-4e64-93da-3b04d7d03ac2 +spec: + target: + user: bot-example +version: v2 +--- +kind: lock +metadata: + name: 791d0b1d-01b4-4752-8a99-9b2908aebfae +spec: + target: + bot_instance_id: e7d494ae-a0ff-4d12-b935-de5e2025f667 +version: v2 +--- +kind: lock +metadata: + name: a69fdbb2-8e53-406a-b453-48b2cda6991d +spec: + target: + join_token: example-token-name +version: v2 +``` + +Note the different locks and lock targets shown above: bots can be targeted by +any of their Teleport user name (`bot-example`), the bot instance ID (a UUID), +or the join token name. Locks created automatically for bots using Bound Keypair +Joining will typically use a `join_token` target, but a lock targeting any of +these values could be created manually. + +Note that locks may have a message field containing details about why the lock +was created. + +Once the lock name(s) have been determined, remove each using `tctl rm`: +```code +$ tctl rm lock/372af058-76d1-4e64-93da-3b04d7d03ac2 +``` + +Next, join state should be reset. Use `tctl edit` to set the token's recovery +mode to `insecure`, but make a note of the current value (`standard` or +`relaxed`): +```code +$ tctl edit token/example-token +``` + +Change the `.spec.bound_keypair.recovery.mode` field to `insecure`, save, and +quit the editor. + +The bot can now be allowed to rejoin. Given sufficient time it will retry on its +own, but if you have access to the host, `systemctl restart tbot` or similar can +be used to restart the bot process. + +The bot should now be able to join successfully. 
You can monitor progress by +watching for new audit events in Teleport's web UI, or by waiting for the +recovery counter to increase: +```code +$ tctl get token/example-token --format=json | jq '.[].status.bound_keypair.recovery_count' +``` + +Once the bot has joined successfully, reset the recovery mode to its previous +value using `tctl edit`: +```code +$ tctl edit token/example-token +``` + +If you do suspect the bot's credentials may have been compromised, you may also +want to [request a keypair rotation](#requesting-a-keypair-rotation) in +addition to taking other steps to ensure the host is properly secured. + +## Disabling join state verification + +It is occasionally useful to intentionally disable join state verification. For +example, this can enable use with: +- CI/CD providers without an explicit [delegated join method][delegated]. +- Nodes with immutable storage that cannot store an updated join state document + after each join. + +Before continuing, be aware that disabling join state verification will prevent +Teleport from detecting if multiple clients are joining using the same bound +keypair token. In other words, if the private key is copied by an attacker, they +will be able to join indefinitely. Take care to protect the keypair, and make +certain to limit access from the bot identity using Teleport's +[RBAC system][rbac]. + +When ready, use `tctl edit` to modify the Bound Keypair token: +```code +$ tctl edit token/example-token +``` + +Find or add the `spec.bound_keypair.recovery.mode` field and set it to +`insecure`. Save and quit your editor to update the token. + +With the mode set to `insecure`, the `recovery.limit` is ignored, allowing +unlimited reuse of the token, and join state verification is disabled, allowing +concurrent or stateless reuse. 
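To confirm the change without reopening the editor, you can inspect the token
(a sketch; assumes `jq` is installed, as elsewhere in this guide):

```code
$ tctl get token/example-token --format=json | jq -r '.[].spec.bound_keypair.recovery.mode'
```

This should print `insecure` once the edit has been saved.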
+
+## Recovery after a cluster rollback
+
+If your Teleport cluster is rolled back for any reason, joining bots may fail
+[join state verification](concepts.mdx#join-state-verification) as their local join state
+document may not match the values currently (or previously) known to Teleport.
+
+The simplest workaround is to temporarily set all bound keypair tokens to
+`insecure` recovery mode for the first join attempt following a cluster restore.
+After the first successful join, bots will again have a valid join state, so the
+recovery mode can be restored to its previous value.
+
+To change the recovery mode, use `tctl edit` to modify the token resource:
+```code
+$ tctl edit token/example-token
+```
+
+Find the `spec.bound_keypair.recovery.mode` field, and set the value to
+"insecure". Repeat this for each bound keypair token. Wait for all bound keypair
+bots to reauthenticate, and repeat this process to restore the recovery mode to
+its previous value.
+
+If [bot keypairs were rotated](#requesting-a-keypair-rotation) between the
+snapshot and restore of the Teleport cluster, note that bots only keep a record
+of the previous 10 keypairs. This means server-side recovery may be impossible
+if the keypair expected by the restored Teleport cluster has been rotated out of
+the client-side history, or if the client-side history has been lost or deleted.
+
+## Manually rotating static keys
+
+Static keys prevent automatic key rotation, as the `tbot` client cannot update
+keys in an arbitrary keystore. However, it may still be possible to automate
+rotation if your environment or secret store allows you to update secrets
+through an API.
+
+The specific steps will vary based on your environment, but in general:
+
+1. Generate a new keypair on any node using the tbot client:
+
+   ```code
+   $ tbot keypair create --proxy-server example.teleport.sh:443 --static --format json
+   ```
+
+2. 
Parse the `.public_key` and `.private_key` values using your tool of choice,
+   like `jq` or any other JSON parser.
+
+3. Replace the token in Teleport to trust the new public key, using the value in
+   the `.public_key` field:
+   ```code
+   $ cat my-token.yaml
+   version: v2
+   kind: token
+   metadata:
+     name: my-token
+   spec:
+     bot_name: example-bot
+     bound_keypair:
+       onboarding:
+         initial_public_key: 
+       recovery:
+         mode: insecure
+     join_method: bound_keypair
+     roles:
+     - Bot
+   $ tctl create -f my-token.yaml
+   ```
+
+4. Insert the new private key into your keystore. This will vary depending on
+   which keystore or provider you are using.
+   * If passing the private key via an environment variable, copy the value directly
+   * If passing the private key via a file, decode the base64-encoded private key first:
+   ```code
+   $ tbot keypair create --proxy-server example.teleport.sh:443 --static --format json | jq -r .private_key | base64 -d
+   ```
+   ...and store the result as needed.
+
+5. Future jobs should now use the new keypair.
+
+Frequently rotating static keys can help to mitigate the security tradeoffs of
+`insecure` recovery. See the [concepts page](concepts.mdx#static-keys) for more
+information about static keys.
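Putting the steps above together, an automated rotation could be sketched
roughly as follows. This assumes `jq` is available and uses a hypothetical
`store-secret` command as a stand-in for your secret store's API:

```code
$ # Generate a new static keypair and capture the JSON output.
$ KEYPAIR="$(tbot keypair create --proxy-server example.teleport.sh:443 --static --format json)"
$ # Extract the new public key for use in the token manifest.
$ PUBLIC_KEY="$(printf '%s' "$KEYPAIR" | jq -r .public_key)"
$ # Recreate the token so Teleport trusts the new public key (my-token.yaml
$ # templated with $PUBLIC_KEY as shown in the step above).
$ tctl create -f my-token.yaml
$ # Decode and store the new private key; `store-secret` is a placeholder
$ # for your provider's API.
$ printf '%s' "$KEYPAIR" | jq -r .private_key | base64 -d | store-secret tbot-private-key
```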
+ +[rbac]: ../../access-controls/roles.mdx +[ephemeral]: ../../architecture/machine-id-architecture.mdx#ephemeral-token +[delegated]: ../../deployment/join-methods.mdx#delegated-join-methods diff --git a/docs/pages/reference/machine-workload-identity/bound-keypair/bound-keypair.mdx b/docs/pages/reference/machine-workload-identity/bound-keypair/bound-keypair.mdx new file mode 100644 index 0000000000000..5ca86a96ea589 --- /dev/null +++ b/docs/pages/reference/machine-workload-identity/bound-keypair/bound-keypair.mdx @@ -0,0 +1,91 @@ +--- +title: Bound Keypair Joining Reference +sidebar_label: Bound Keypair Joining +description: "Bound Keypair Joining: Reference and admin guide" +tags: + - infrastructure-as-code + - reference + - mwi + - infrastructure-identity +--- + +Bound Keypair is a join method designed to provide the best features of +[delegated join methods][s-vs-d] - like the AWS, GCP, or Azure join +methods - but in on-prem or otherwise unsupported environments where no external +verification is available. + +Specifically, this join method: +- Does not require dedicated TPM hardware or external identity attestation +- Does not require long-lived shared secrets +- Allows for limited automatic recovery if certificates expire +- Allows recovery restrictions to be relaxed or lifted to accommodate different + use cases and deployment scenarios +- Ensures failed bots can be recovered without client-side intervention in most + cases + + +Bound Keypair Joining is available in v18.1.0 and is intended to replace `token` +joining as the default recommended join method in Teleport v19.0.0. + +At this time, Bound Keypair Joining can only be used to join Machine and +Workload ID bots and cannot be used to join other Teleport agent types. + + +## Use cases + +Bound Keypair Joining can be used in any environment and is designed to function +as a drop-in replacement for the traditional [`token`][ephemeral] join method in +all situations where it is used today. 
This includes bare-metal and on-prem
+hardware where TPMs are not available, or cloud providers not currently
+supported by a [delegated join method][delegated].
+
+Similar to `token` joining, Bound Keypair Joining is also a good fit for local
+experimentation and testing, with minimal configuration needed to initially
+onboard a bot. When ready to deploy to production, it's trivial to adjust
+onboarding and recovery settings to select your desired balance between
+resiliency and security.
+
+Additionally, with [static keys](static-keys.mdx) and in situations that can
+accommodate the security complications, Bound Keypair Joining can be used to
+join bots in otherwise unsupported CI/CD providers by persisting the bot's
+keypair in a platform keystore.
+
+## Limitations
+
+While Bound Keypair Joining does enable or simplify a number of use cases, it
+does have limitations that may make it unsuitable in some instances.
+
+In particular, the [secure recovery modes](concepts.mdx#recovery) introduce some deployment
+restrictions:
+- Each bot deployment must be issued a unique token. For deployment at scale,
+  use of Teleport's
+  [Terraform provider](../../infrastructure-as-code/terraform-provider/terraform-provider.mdx) is
+  recommended to create tokens in bulk for each deployment.
+- Each bot deployment must be able to store client-side state (used for
+  [join state verification](concepts.mdx#join-state-verification)).
+
+These limitations can be worked around using the
+[`insecure` recovery mode](admin-guide.mdx#disabling-join-state-verification), but doing so
+does meaningfully reduce the join method's security protections and should be
+used with care.
+
+## Next steps
+
+You can read step-by-step guides on using Bound Keypair Joining with
+Machine & Workload Identity:
+
+- [Using Bound Keypair Joining](./getting-started.mdx):
+  How to install and configure Machine & Workload Identity with Bound Keypair
+  Joining
+- [Using Bound Keypair static keys](./static-keys.mdx): How to use Bound Keypair
+  static keys with stateless hosts, like otherwise unsupported CI/CD providers
+- [Bound Keypair Joining Concepts](./concepts.mdx): Learn more about the
+  components and architecture of Bound Keypair Joining
+- [Bound Keypair Joining Admin Guide](./admin-guide.mdx): Learn how to deploy
+  and maintain bots in production with Bound Keypair Joining
+- [Bound Keypair Provision Token Reference](../../deployment/join-methods.mdx#bound-keypair-bound_keypair): Learn about the options that can be configured for a `bound_keypair` token
+
+[delegated]: ../../deployment/join-methods.mdx#delegated-join-methods
+[ephemeral]: ../../architecture/machine-id-architecture.mdx#ephemeral-token
+[token]: ../../deployment/join-methods.mdx#ephemeral-tokens
+[s-vs-d]: ../../deployment/join-methods.mdx#secret-vs-delegated
diff --git a/docs/pages/reference/machine-workload-identity/bound-keypair/concepts.mdx b/docs/pages/reference/machine-workload-identity/bound-keypair/concepts.mdx
new file mode 100644
index 0000000000000..eefab60ab3235
--- /dev/null
+++ b/docs/pages/reference/machine-workload-identity/bound-keypair/concepts.mdx
@@ -0,0 +1,190 @@
+---
+title: Bound Keypair Joining Concepts
+description: "Learn the key components of Bound Keypair Joining"
+tags:
+  - mwi
+  - conceptual
+  - infrastructure-identity
+---
+
+This page discusses the main concepts needed to understand Bound Keypair
+Joining.
+
+## Bound keypair
+
+The term "bound keypair" refers to both the type of credential and the way it's
+used in Teleport: first, a bot generates a standard public/private keypair;
+next, Teleport is configured to trust the public key, binding it to a token.
+ +While bots using Bound Keypair Joining will receive regular, short-lived +Teleport certificates, and will refresh them regularly in a similar manner to +[renewable certificates][renewable], they also make use of an additional +public/private keypair that is not tied directly to a certificate. Critically, +this means the keypair does not have an expiration date, and that any trust +relationship established between Teleport and a Bound Keypair bot is controlled +by server-side configuration within Teleport and not due to the passage of time. + +Once this trust relationship has been established, bots can - in a sense - serve +as their own identity authority, similar to how cloud providers can attest to +the identity of a host or CI workflow run. Teleport issues the joining bot a +unique cryptographic challenge which the bot signs using its private key, and +the result is verified using the key registered with Teleport. + +As this self attestation is not by itself a sufficient security guarantee, Bound +Keypair Joining makes use of additional server-side rules to control whether or +not a join attempt will be allowed, including: +- Limiting the number of [recovery attempts](#recovery) and optionally requiring + (server-side) human intervention for each recovery. +- Preventing reuse or theft of private keys using + [join state verification](#join-state-verification). + +## Onboarding + +Onboarding takes place at first join and is the stage at which a public key +becomes bound to a token. There are two methods bots may use to complete this +step: +- Preregistered keys: a private key is generated locally on the bot host, and + the public key is copied out-of-band to Teleport by a human user or a script. + This method ensures no secret values are ever copied between systems. 
+- Registration secrets: a one time use random secret is generated by Teleport + and provided to the bot host via either a human user or a script, similar to + the [ephemeral `token` join method][token]. + +If a registration secret is used, the bot uses the registration secret to prove +its identity for this first connection, generates a keypair on the fly, and +registers the public key with Teleport. + +Registration secrets are used by default: if a token is created and no initial +(preregistered) public key is provided, a registration secret will be randomly +generated by Teleport and written to the token's +`status.bound_keypair.registration_secret` field. + +In both cases - once registered, or when using preregistered keys - the joining +bot is then required to complete a joining challenge. A unique cryptographic +challenge is generated by Teleport which the bot must sign and return. If the +signature matches the registered public key, onboarding has succeeded: the +public key is permanently bound to the token, a certificate bundle is issued, +and a join state document is provided for use on the next join attempt. + +## Recovery + +Recovery takes place when a bot uses its private key to request new +certificates. This can occur in two situations: +- On first join, when the bot has no certificates. +- When the bot's certificates expire before they can be renewed, such as when a + bot is offline for longer than its configured `certificate_ttl`. + +So long as the bot has and maintains valid Teleport credentials - by default, +bot certificates are valid for an hour and renewed every 20 minutes, but these +values are configurable - no recovery will be needed after the initial join. +Recovery is only required when a bot is unable to refresh its certificates for +longer than its configured certificate TTL (e.g. 1 hour). 
+
+The behavior of the recovery process depends on two values configured in the
+token resource: the recovery mode (`spec.bound_keypair.recovery.mode`), and the
+recovery limit (`spec.bound_keypair.recovery.limit`). The mode may have these
+values:
+- `standard` (default): only `limit` automatic recovery attempts are allowed.
+
+  This mode is recommended for most situations. Bots can recover automatically
+  so long as the recovery count (`status.bound_keypair.recovery_count`) doesn't
+  exceed the configured limit, which allows cluster administrators to select
+  their tolerance for automatic bot recovery attempts.
+
+  [Join state verification](#join-state-verification) is enabled, which helps
+  prevent keypair reuse, but does require clients to store additional state on
+  each join.
+
+  Note that as the initial join counts as a recovery attempt, `limit` must
+  always be at least `1` for the initial join to succeed.
+
+- `relaxed`: `limit` is ignored, but
+  [join state verification](#join-state-verification) remains enabled.
+
+  This mode allows for unlimited recovery attempts and is useful for bots with
+  lower security impact, or deployments where bots are regularly expected to go
+  offline for extended periods.
+
+  Like `standard` mode, join state verification is enabled to help prevent
+  keypair reuse, but does require clients to store additional state on each
+  join.
+
+- `insecure`: `limit` is ignored, and join state verification is **disabled**.
+
+  This mode is the most flexible, as bots can rejoin repeatedly and can operate
+  without mutable client-side state. However, this disables most additional
+  security checks and should be used with care.
+  [See the admin guide](admin-guide.mdx#disabling-join-state-verification) for
+  more information on using `insecure` mode in practice.
+ + Static key joining (`tbot keypair create --static ...`) requires use of + `insecure` mode as it disables client-side credential storage, meaning bots + cannot complete [join state verification](#join-state-verification). + +## Join state verification + +A **join state document** is an additional piece of information provided to +joining bots alongside their usual Teleport certificate bundle. It contains +signed information about the state of the joining process, including a sequence +number that uniquely identifies each join attempt. + +**Join state verification** both ensures that the bot can provide a valid and +signed join state document, and that the document contains the expected sequence +number. +* If successful, the join attempt can proceed and a new join state document will + be issued with an incremented sequence counter. +* If the sequence number presented by the client is outdated, the join + attempt is rejected, and a lock is created to ensure existing clients will be + denied further access to the Teleport cluster. + +This system takes advantage of Teleport's short lived certificates and frequent +bot reauthentication. If an attacker somehow obtains a copy of a bot's keypair +and attempts to use it to retrieve Teleport certificates, they will initially +succeed. However, the original bot will eventually make another authentication +attempt, using its now-outdated join state document. This join attempt with an +outdated document will trigger a lockout and block both the original bot and any +credentials in use by the attacker. + +Note that join state verification is disabled when the token's +`spec.bound_keypair.recovery.mode` is set to `insecure`. 
+ +## Static Keys + +Static keys trade some security guarantees for flexibility, and can help enable +use of `tbot` in environments where bots otherwise could not authenticate, such +as: +- CI/CD providers for which Teleport has no dedicated join method +- Ephemeral bare-metal nodes without TPMs +- Any other environment where persistent storage is not available +- Any environment where an arbitrary number of instances may join, like + ephemeral CI runners. + +However, static keys have disadvantages: +- `insecure` mode means any client with the private key can join without + restrictions. If the key is stolen, the attacker will have access to Teleport + and any resources the bot is allowed to access. +- `insecure` mode additionally means you cannot limit the number of uses of the + bound keypair token (`spec.bound_keypair.recovery.limit`). +- Keypair rotation is not supported, as client storage is not writable, and + attempting to require a rotation will prevent bots from joining. + +When using static keys, a keypair is generated ahead of time and [preregistered +with Teleport](#onboarding). You can then provide the keypair to the `tbot` +client using a file or the `TBOT_BOUND_KEYPAIR_STATIC_KEY` environment variable. + +The `insecure` recovery mode is required when using static keys (with +`tbot keypair create --static ...`) as this mode disables client-side storage +for join state documents. With no previous state, bots will not be able to +complete join state verification, and will be unable to recover after their +first authentication attempt; as such, join state verification must be disabled. + +With this context in mind, when using static keys, be aware that any client with +knowledge of the private key can authenticate to Teleport with no additional +restrictions. 
Before deploying a bot with static keys, take additional care to +fully understand your environment's threat model and security needs, and reduce +access using [Teleport's RBAC](../../access-controls/roles.mdx) to ensure the +minimum possible blast radius in the event the keypair is compromised. + +[renewable]: ../../deployment/join-methods.mdx#renewable-vs-non-renewable +[token]: ../../deployment/join-methods.mdx#ephemeral-tokens diff --git a/docs/pages/reference/machine-workload-identity/bound-keypair/getting-started.mdx b/docs/pages/reference/machine-workload-identity/bound-keypair/getting-started.mdx new file mode 100644 index 0000000000000..c35071ffdf9ce --- /dev/null +++ b/docs/pages/reference/machine-workload-identity/bound-keypair/getting-started.mdx @@ -0,0 +1,303 @@ +--- +title: Deploying tbot with Bound Keypair Joining +description: "How to install and configure tbot with Bound Keypair Joining" +tags: + - mwi + - how-to + - infrastructure-identity +--- + +In this guide, you will install Machine & Workload Identity's agent, `tbot`, on +an arbitrary host using Bound Keypair Joining. This host could be a bare-metal +machine, a VM, a container, or any other host - the only requirement is that the +host has persistent storage. + +Bound Keypair Joining is an improved alternative to +[secret-based join methods][secret] and can function as a drop-in replacement. +It is more secure than static token joining, and is more flexible than ephemeral +token joining with renewable certificates: when its certificates expire, it can +perform an automated recovery to ensure the bot can rejoin even after an +extended outage. + +Note that platform-specific join methods may be available that are better suited +to your environment; refer to the +[deployment guides](../../../machine-workload-identity/deployment/deployment.mdx) +for a full list of options. 
+ +## How it works + +With Bound Keypair Joining, Machine & Workload Identity bots generate a unique +keypair which is persistently stored in their internal data directory. Teleport +is then configured to trust this public key for future joining attempts. + +Later, when the bot attempts to join the cluster, Teleport issues it a challenge +that can only be completed using its private key. The bot returns the solved +challenge, attesting to its own identity, and is conditionally allowed to join +the cluster. This process is repeated for every join attempt, but if the bot has +been offline long enough for its certificates to expire, it is additionally +forced to perform an automatic recovery to join again. + +As self attestation is inherently less secure than the external verification +that would be provided by a cloud provider like AWS or a dedicated TPM, Bound +Keypair Joining enforces a number of additional checks to prevent abuse, +including: +- Join state verification to ensure the keypair cannot be usefully shared or + duplicated +- Certificate generation counter checks to ensure regular bot certificates + cannot be usefully shared or duplicated +- Configurable limits on how often - if at all - bots may be allowed to + automatically recover using this keypair + +An important benefit to Bound Keypair Joining is that all joining restrictions +can be reconfigured at any time, and bots that expire or go offline can be +recovered by making a server-side exemption without any client-side +intervention. + +Refer to the [reference page][reference] for further details on how this join +method works and how to use it in production. + +## Prerequisites + +{/* note: consider edition-prereqs-tabs.mdx include for v19; it is misleading due to the minor launch release */} + +- A running Teleport cluster version 18.1.0 or above. +- The `tsh` and `tctl` clients. +- (!docs/pages/includes/tctl.mdx!) 
+
+- This guide assumes the bot host has mutable persistent storage for internal
+  bot data. While it is possible to use Bound Keypair Joining on immutable
+  hosts (like CI runs), doing so will reduce security guarantees; see the
+  [reference page][reference] for further information.
+
+## Step 1/5. Install `tbot`
+
+**This step is completed on the bot host.**
+
+First, `tbot` needs to be installed on the host on which you wish to use
+Machine & Workload Identity.
+
+Download and install the appropriate Teleport package for your platform:
+
+(!docs/pages/includes/install-linux.mdx!)
+
+## Step 2/5. Create a Bot
+
+**This step is completed on your local machine.**
+
+(!docs/pages/includes/machine-id/create-a-bot.mdx!)
+
+## Step 3/5. Create a join token
+
+**This step is completed on your local machine.**
+
+In this guide, we'll demonstrate joining a bot using a registration secret: this
+is a one-time use secret the bot can provide to Teleport to authenticate its
+first join. Once authenticated, the bot automatically generates a keypair and
+registers its public key with Teleport for use in all future join attempts.
+
+Create `token-example.yaml`:
+
+```yaml
+kind: token
+version: v2
+metadata:
+  # This name will be used in tbot's `onboarding.token` field.
+  name: example
+spec:
+  roles: [Bot]
+  # bot_name should match the name of the bot created earlier in this guide.
+  bot_name: example
+  join_method: bound_keypair
+  bound_keypair:
+    recovery:
+      mode: standard
+      limit: 1
+```
+
+Replace `example` in `spec.bot_name` with the name of the bot you created in the
+second step.
+
+For this example, we don't need to set any additional options for the bound
+keypair token. We've allowed a single recovery attempt, which will be used to
+allow the bot's initial join, and Teleport will generate a registration secret
+automatically when the token is created as we have not preregistered a public
+key to use.
+ + +This example makes use of registration secrets to authenticate the initial join. +If desired, it is also possible to generate a key on the bot host first and +register it with Teleport out-of-band, avoiding the need to copy secrets between +hosts. + +To learn more about preregistering public keys, see the +[alternative flow](#alternative-preregistered-keys) below. For more information +on Bound Keypair Joining's other onboarding and recovery options, refer to the +[reference page][reference]. + + +Use `tctl` to apply this file: + +```code +$ tctl create -f token-example.yaml +``` + +Next, retrieve the generated registration secret, which will be needed for the +next step: +```code +$ tctl get token/example --format=json | jq -r '.[0].status.bound_keypair.registration_secret' +``` + +This assumes `jq` is installed. If not, run `tctl get token/example` and inspect +the `.status.bound_keypair.registration_secret` field. + +## Step 4/5. Configure `tbot` + +**This step is completed on the bot host.** + +### Prepare a storage directory + +(!docs/pages/includes/machine-id/machine-id-init-bot-data.mdx!) + +### Create a configuration file + +Create `/etc/tbot.yaml`: + +```yaml +version: v2 +proxy_server: example.teleport.sh:443 +onboarding: + join_method: bound_keypair + token: example + bound_keypair: + registration_secret: SECRET +storage: + type: directory + path: /var/lib/teleport/bot +# services will be filled in during the completion of an access guide. +services: [] +``` + +Replace the following: +- `example.teleport.sh:443` with the address of your Teleport Proxy. +- `example` with the name of the token created in the previous step, if you + changed it from `example`. +- `SECRET` with the registration secret retrieved in the previous step. + +(!docs/pages/includes/machine-id/daemon-or-oneshot.mdx!) + +## Step 5/5. Configure outputs + +(!docs/pages/includes/machine-id/configure-services.mdx!) 
+ +## Alternative: preregistered keys + +This guide makes use of a registration secret - a single use shared secret +that's consumed on the first bot join and allows a bot to automatically register +its public key with Teleport. If you don't want to use any shared secrets, you +can instead opt to generate the bot's keypair ahead of time and inform Teleport +of the public key yourself. + +To generate a public key, run the following command on the bot host: +```code +## If needed, create the bot's storage directory +$ mkdir -p /var/lib/teleport/bot +$ tbot keypair create --storage /var/lib/teleport/bot --proxy-server=example.teleport.sh:443 +2025-07-08T16:31:48.000-00:00 INFO [TBOT] keypair has been written to storage storage:directory: /var/lib/teleport/bot tbot/keypair.go:135 + +To register the keypair with Teleport, include this public key in the token's +`spec.bound_keypair.onboarding.initial_public_key`: + + ssh-ed25519 +``` + +Note the SSH-style public key written to the console - you'll need this value in +the next step. + + +Be aware that if a keypair already exists within the specified storage directory +it will not be overwritten by default. If an existing key is found, a warning +will be logged and the existing public key will be printed to the console. + +If you explicitly want to create a new public key, pass the `--overwrite` flag +to `tbot keypair create`. A warning will also be logged if any keys are actually +overwritten. + +If you'd like to automate this process, the `--format=json` flag will write the +public key string in a JSON document for use in scripts. + + +Note that while a proxy server address must be provided, this is only used to +ping the cluster to determine its configured +[signature algorithms](../../deployment/signature-algorithms.mdx). Once the keypair has been +generated, the public key will be printed to the console. 
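
If you're scripting this step without `--format=json`, one option is to capture
the key line from the command's output. A minimal sketch, using a stand-in
string in place of the real `tbot keypair create` output:

```shell
# Stand-in for the output of `tbot keypair create ...`; the real command
# prints the SSH-style public key on its own line. This key value is fake.
output="keypair has been written to storage
  ssh-ed25519 AAAAexampleC3Nza bot@host"
# Extract the public key line for use in the token manifest.
pubkey=$(printf '%s\n' "$output" | grep -o 'ssh-ed25519 .*')
echo "$pubkey"
```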
+ +Next, on your local machine, create a file named `token-example.yaml`: +```yaml +kind: token +version: v2 +metadata: + name: example +spec: + roles: [Bot] + join_method: bound_keypair + bot_name: example + bound_keypair: + onboarding: + initial_public_key: "ssh-ed25519 " + recovery: + mode: standard + limit: 1 +``` + +The SSH-style public key generated by the bot should be copied into the +`initial_public_key` field. As shown above, the value should be quoted to ensure +the YAML remains valid. + +Create the token with `tctl`: +```code +$ tctl create -f token-example.yaml +``` + +Back on the bot machine, configure the bot as in the original guide, but this +time with no `registration_secret` field set. Create a `tbot.yaml` with this +content: +```yaml +version: v2 +proxy_server: example.teleport.sh:443 +onboarding: + join_method: bound_keypair + token: example +storage: + type: directory + path: /var/lib/teleport/bot +# outputs will be filled in during the completion of an access guide. +outputs: [] +``` + +As in the original example, replace: +- `example.teleport.sh:443` with the address of your Teleport Proxy. +- `example` with the name of the token created in the previous step, if you + changed it from `example`. + +Note that the `storage.path` must point to the same storage directory you passed +to `tbot keypair create` above. + +Once the config file has been created, you can start the bot as usual: +```code +$ tbot start -c tbot.yaml +``` + +For additional information on how onboarding a new bot works with Bound Keypair +Joining, refer to the [onboarding reference](./concepts.mdx#onboarding). + +## Next steps + +- Read the [Bound Keypair Joining Reference][reference] + for more details about the join method and the available configuration options. +- Follow the [access guides](../../../machine-workload-identity/access-guides/access-guides.mdx) + to finish configuring `tbot` for your environment. 
+- Read the [configuration reference](../configuration.mdx) to explore
+  all the available configuration options.
+- [More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../telemetry.mdx)
+
+[secret]: ../../deployment/join-methods.mdx#secret-vs-delegated
+[reference]: ./bound-keypair.mdx
diff --git a/docs/pages/reference/machine-workload-identity/bound-keypair/static-keys.mdx b/docs/pages/reference/machine-workload-identity/bound-keypair/static-keys.mdx
new file mode 100644
index 0000000000000..b50b258105c6e
--- /dev/null
+++ b/docs/pages/reference/machine-workload-identity/bound-keypair/static-keys.mdx
@@ -0,0 +1,298 @@
+---
+title: Deploying tbot with Bound Keypair Static Keys
+description: "How to install and configure tbot with Bound Keypair static keys"
+tags:
+  - mwi
+  - how-to
+  - infrastructure-identity
+---
+
+In this guide, you will install Machine and Workload Identity's agent, `tbot`,
+in an arbitrary stateless environment using Bound Keypair static keys. This
+might be a CI/CD job on a provider not yet supported by one of Teleport's
+[dedicated join methods](../../deployment/join-methods.mdx), an ephemeral
+bare-metal node, or any other environment without writable persistent storage.
+
+Bound Keypair Joining is a more flexible alternative to [secret-based join
+methods][secret], and static keys allow arbitrary nodes to join by relaxing some
+of its built-in security requirements.
+
+
+If your provider has writable persistent storage, such as a Kubernetes PVC, you
+should instead follow the [standard
+Bound Keypair guide](./getting-started.mdx).
+
+
+## How it works
+
+Bound Keypair joining typically assumes you have access to writable client-side
+storage for additional identity proofs, but this requirement is lifted when
+using Bound Keypair static keys.
+
+Normally, the actual Bound Keypair keys are managed internally by the `tbot`
+client, which allows them to be generated and rotated on demand. 
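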
With static +keys, you take direct ownership of the private key and can store it as you see +fit - like in a platform keystore. The `tbot` client is then configured to use +your static private key instead of managing it automatically. + + +Static key joining relaxes certain security checks otherwise performed when +client-side storage is available. This is designed to be used where no other +join methods are possible. Before continuing, consider: +- Using a [dedicated join method](../../deployment/join-methods.mdx) for your + platform if available. +- Using [standard Bound Keypair joining](./getting-started.mdx) if your + environment has writable persistent storage. + +[Read more about static keys and what tradeoffs they +require](./concepts.mdx#static-keys) before using them in a production +environment. + + +## Prerequisites + +{/* note: consider edition-prereqs-tabs.mdx include for v19; it is misleading due to the minor launch release */} + +{/* TODO: replace version with version containing `tbot keypair create --static` */} + +- A running Teleport cluster version 18.2.0 or above. +- The `tsh`, `tctl`, and `tbot` clients. +- (!docs/pages/includes/tctl.mdx!) + +## Step 1/7. Install `tbot` + +**This step is completed on the bot host.** + +First, `tbot` needs to be installed on the host that you wish to use Machine & +Workload Identity on. + +Download and install the appropriate Teleport package for your platform: + +(!docs/pages/includes/install-linux.mdx!) + +## Step 2/7. Create a Bot + +**This step is completed on your local machine.** + +(!docs/pages/includes/machine-id/create-a-bot.mdx!) + +## Step 3/7. Create a keypair and a join token + +Next, we'll need to generate a keypair, consisting of: +* A private key that will need to be available on your bot host at runtime +* A public key that will need to be configured in the Teleport join token + +We'll use a helper in the `tbot` client to generate a static keypair. 
This +step can be run anywhere and does not require any configuration, but you will +need to decide how the bot will access the key, which may vary depending on your +environment or secret store: +1. Via an environment variable, useful for providers or environments that inject + secrets as environment variables +2. Via a standard file which you transfer to the bot host + +With that in mind, if you intend to make the key available to `tbot` via an +environment variable, run the following: + +```code +$ tbot keypair create --proxy-server :443 --static +``` + +Otherwise, if using a file, run the following: +```code +$ tbot keypair create --proxy-server :443 --static --static-key-path ./path/to/key +``` + +Adjust the `--static-key-path` value as desired. If you didn't run this command +on the bot host directly, be prepared to transfer this to the bot host in the +next step. + +In either case, the command will generate a keypair and print some instructions. +For example, this is shown when using an environment variable key: +```code +To register the keypair with Teleport, include this public key in the token's +'spec.bound_keypair.onboarding.initial_public_key' field: + + + +Refer to this token example as a reference: + + kind: token + metadata: + name: example-token + spec: + bot_name: example + bound_keypair: + onboarding: + initial_public_key: + recovery: + limit: 0 + mode: insecure + join_method: bound_keypair + roles: + - Bot + version: v2 + +Configure your bot to use this static key by inserting the following private key +value into the bot's environment, ideally via a platform-specific keystore if +available: + + export TBOT_BOUND_KEYPAIR_STATIC_KEY="<...encoded private key...>" + +Note that bots joined with static tokens do not support keypair rotation and +will be unable to join if a rotation is requested server-side via the token's +'rotate_after' field. Additionally, 'insecure' recovery mode must be used, as +shown above. 
Read more at: + + https://goteleport.com/docs/reference/machine-workload-identity/machine-id/bound-keypair/concepts/#recovery +``` + +This command will print the public key, a join token example, and the private +key needed to configure your bot. Keep this output available as you'll need it +through the next step. + +Using the example token printed by the above command as a template, +create `token.yaml` containing the following content: +```yaml +kind: token +metadata: + name: example-token +spec: + bot_name: example + bound_keypair: + onboarding: + initial_public_key: + recovery: + limit: 0 + mode: insecure + join_method: bound_keypair + roles: + - Bot +version: v2 +``` + +Be sure to set `bot_name` to match the bot created in the previous step, and +ensure the public key matches the value printed by `tbot keypair create ...`. + +When ready, create the join token in your Teleport cluster using `tctl`: +```code +$ tctl create -f token.yaml +``` + +## Step 4/7. Store the private key + +Now that you've generated the private key, it needs to be stored and made +available to your job. Exactly how to do this will depend on your provider and +environment, as well as whether you'll be making it available via an environment +variable or a file. + +### Storing the key in a file on the bot host + +Ensure the private key file you generated is made available on the bot host. If +you generated it on your local machine, this might mean copying it via `scp`, +provisioning the key via Ansible, or transferring it to the bot host through +whichever means you prefer. + +Take care to ensure the resulting file on the bot host will only be readable by +the account that will run the `tbot` process. 
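
For example, a minimal sketch of locking the transferred key down so only the
`tbot` account can read it (the path is hypothetical, and the stand-in file
below substitutes for your real key):

```shell
# Illustrative only: the path is a stand-in; substitute the real location
# you transferred the key to, and run as the account that will run tbot.
printf 'example-key-material\n' > ./bound-keypair.key
chmod 600 ./bound-keypair.key
ls -l ./bound-keypair.key   # permissions should read -rw-------
```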
+ +### Storing the key in an environment variable + +If using an environment variable, depending on your platform and environment, +you'll likely want to store the key in a platform-specific keystore - such that +the environment variable is set on the bot host when it starts - rather than +on the bot host directly. + +Regardless of backend, set the variable as follows: +- Name: `TBOT_BOUND_KEYPAIR_STATIC_KEY` +- Value: the base64-encoded value as printed by `tbot keypair create --static ...` + +The value is the same content that would otherwise be written to a file if you'd +used `tbot keypair create ... --static --static-key-path /path/to/key`, but the +content has been base64 encoded to simplify use as an environment variable +value. + +## Step 5/7. Configure `tbot` + +**This step is completed on the bot host.** + +### With an environment variable key + +If exposing the secret via an environment variable, ensure it's available in the +`$TBOT_BOUND_KEYPAIR_STATIC_KEY` variable. + +Create `/etc/tbot.yaml` with the following content: + +```yaml +version: v2 +proxy_server: :443 +onboarding: + join_method: bound_keypair + token: example-token +storage: + type: directory + path: /var/lib/teleport/bot +# outputs will be filled in during the completion of an access guide. +outputs: [] +``` + +Aside from specifying the `join_method` and `token`, no additional configuration +is needed; the key will be read from the environment as needed during the join +process. + +### With a file key + +Copy the key to the bot environment or otherwise make sure it's available, and +write the following to `/etc/tbot.yaml`: + +```yaml +version: v2 +proxy_server: :443 +onboarding: + join_method: bound_keypair + token: example-token + bound_keypair: + static_private_key_path: /path/to/key +storage: + type: directory + path: /var/lib/teleport/bot +# outputs will be filled in during the completion of an access guide. 
+outputs: []
+```
+
+Set `static_private_key_path` to point to the location where the key will be
+available, and save the file.
+
+## Step 6/7. Verify `tbot` can authenticate to Teleport
+
+Run `tbot` using the `tbot.yaml` you created, and ensure the private key is
+available as expected, either via file (at the path you configured earlier) or
+via the environment in `$TBOT_BOUND_KEYPAIR_STATIC_KEY`:
+
+```code
+$ tbot start -c /etc/tbot.yaml --oneshot
+```
+
+If everything has been set up correctly, `tbot` should run, authenticate with
+Teleport, and exit cleanly. In production, you can remove the `--oneshot` flag
+if you want `tbot` to continually provide updated certificates for
+longer-running jobs; otherwise, the issued certificates will eventually expire
+(after 1 hour by default).
+
+## Step 7/7. Configure outputs
+
+(!docs/pages/includes/machine-id/configure-services.mdx!)
+
+## Next Steps
+
+- Read more about [Bound Keypair static keys](./concepts.mdx#static-keys).
+- Read about [static key rotation](./admin-guide.mdx#manually-rotating-static-keys).
+- Read the [Bound Keypair Joining Reference][reference]
+  for more details about the join method and the available configuration options.
+- Follow the [access guides](../../../machine-workload-identity/access-guides/access-guides.mdx)
+  to finish configuring `tbot` for your environment.
+- Read the [configuration reference](../configuration.mdx) to explore
+  all the available configuration options.
+- [More information about `TELEPORT_ANONYMOUS_TELEMETRY`.](../telemetry.mdx)
+
+[secret]: ../../deployment/join-methods.mdx#secret-vs-delegated
+[reference]: ./bound-keypair.mdx
diff --git a/docs/pages/reference/machine-workload-identity/configuration.mdx b/docs/pages/reference/machine-workload-identity/configuration.mdx
new file mode 100644
index 0000000000000..7076c3d0d9ecf
--- /dev/null
+++ b/docs/pages/reference/machine-workload-identity/configuration.mdx
@@ -0,0 +1,1600 @@
+---
+title: Machine & Workload Identity Configuration Reference
+sidebar_label: Configuration
+description: Configuration reference for Teleport Machine & Workload Identity.
+toc_max_heading_level: 4
+tags:
+  - infrastructure-as-code
+  - reference
+  - mwi
+  - infrastructure-identity
+---
+
+This reference documents the various options that can be configured in the `tbot`
+configuration file. This configuration file offers more control than
+configuring `tbot` using CLI parameters alone.
+
+To run `tbot` with a configuration file, specify the path with the `-c` flag:
+
+```code
+$ tbot start -c ./tbot.yaml
+```
+
+In this reference, the term **artifact** refers to an item that `tbot` writes to
+a destination as part of the process of generating an output. Examples of
+artifacts include configuration files, certificates, and cryptographic key
+material. Usually, artifacts are files, but the term "file" is avoided because
+a destination isn't required to be a filesystem.
+
+```yaml
+# version specifies the version of the configuration file in use. `v2` is the
+# most recent and should be used for all new bots. The rest of this example
+# is in the `v2` schema.
+version: v2
+
+# debug enables verbose logging to stderr. If unspecified, this defaults to
+# false.
+debug: true
+
+# auth_server specifies the address of the Auth Service instance that `tbot`
+# should connect to. You should prefer specifying `proxy_server`, the address
+# of the Proxy Service, instead.
+auth_server: "teleport.example.com:3025"
+
+# proxy_server specifies the address of the Teleport Proxy Service that `tbot`
+# should connect to.
+# It is recommended to use the address of your Teleport Proxy Service, or, if
+# using Teleport Cloud, the address of your Teleport Cloud instance.
+proxy_server: "teleport.example.com:443" # or "example.teleport.sh:443" for Teleport Cloud
+
+# credential_ttl specifies how long certificates generated by `tbot` should
+# live for. It should be a positive, numeric value with an `m` (for minutes) or
+# `h` (for hours) suffix. By default, this value is `1h`.
+# This has a maximum value of `24h`.
+#
+# It can be overridden for most outputs and services to give them a shorter TTL
+# than `tbot`'s internal certificates.
+credential_ttl: "1h"
+
+# renewal_interval specifies how often `tbot` should aim to renew the
+# outputs it has generated. It should be a positive, numeric value with an
+# `m` (for minutes) or `h` (for hours) suffix. The default value is `20m`.
+# This value must be lower than `credential_ttl`.
+# This value is ignored when `tbot` is running in one-shot mode.
+#
+# It can be overridden for most outputs and services to give them a shorter
+# renewal interval than `tbot`'s internal certificates.
+renewal_interval: "20m"
+
+# oneshot configures `tbot` to exit immediately after generating the outputs.
+# The default value is `false`. A value of `true` is useful in ephemeral
+# environments, like CI/CD.
+oneshot: false
+
+# onboarding is a group of configuration options that control how `tbot` will
+# authenticate with the Teleport cluster.
+onboarding:
+  # token specifies which join token, configured in the Teleport cluster,
+  # should be used to join the Teleport cluster.
+  #
+  # This can also be an absolute path to a file containing the value you wish
+  # to be used.
+  # File path example:
+  # token: /var/lib/teleport/tokenjoin
+  token: "00000000000000000000000000000000"
+
+  # join_method must be the join method associated with the specified token
+  # above. This setting should match the value output when creating the bot
+  # using `tctl`.
+  #
+  # Supported values include:
+  # - `token`
+  # - `azure`
+  # - `gcp`
+  # - `circleci`
+  # - `github`
+  # - `gitlab`
+  # - `iam`
+  # - `ec2`
+  # - `kubernetes`
+  # - `spacelift`
+  # - `tpm`
+  # - `terraform_cloud`
+  # - `bound_keypair`
+  join_method: "token"
+
+  # ca_pins are used to validate the identity of the Teleport Auth Service on
+  # first connect. This should not be specified when using Teleport Cloud or
+  # connecting through a Teleport Proxy.
+  ca_pins:
+    - "(=presets.ca_pin=)"
+    - "(=presets.ca_pin=)"
+
+  # ca_path is used to specify where a CA file can be found that can be used to
+  # validate the identity of the Teleport Auth Service on first connect.
+  # This should not be specified when using Teleport Cloud or connecting through
+  # a Teleport Proxy. The ca_pins option should be preferred over ca_path.
+  ca_path: "/path/to/ca.pem"
+
+  # gitlab holds configuration specific to the "gitlab" join method.
+  gitlab:
+    # token_env_var_name allows the environment variable that contains the
+    # GitLab ID token to be specified. If unspecified, this defaults to
+    # "TBOT_GITLAB_JWT".
+    #
+    # Overriding this is useful when you need to use `tbot` to authenticate to
+    # multiple Teleport clusters from a single GitLab CI job.
+    token_env_var_name: "MY_GITLAB_ID_TOKEN"
+
+  # bound_keypair holds parameters specific to the "bound_keypair" join method.
+  bound_keypair:
+    # registration_secret is an optional secret to use on first join in lieu of
+    # a preregistered keypair. You can also set this in the
+    # `TBOT_REGISTRATION_SECRET` environment variable.
+ registration_secret: "secret" + + # registration_secret_path is an optional path to a file containing a + # registration secret; conflicts with `registration_secret` + registration_secret_path: ./path/to/secret + + # static_key_path is an optional path to a file containing a static private + # key. + static_key_path: ./path/to/secret + +# storage specifies the destination that `tbot` should use to store its +# internal state. This state is sensitive, and you should ensure that the +# destination you specify here can only be accessed by `tbot`. +# +# If unspecified, storage is set to a directory destination with a path +# of `/var/lib/teleport/bot`. +# +# See the full list of supported destinations and their configuration options +# under the Destinations section of this reference page. +storage: + type: directory + path: /var/lib/teleport/bot + +# outputs specifies what artifacts `tbot` should generate and renew when it +# runs. +# +# See the full list of supported outputs and their configuration options +# under the Outputs section of this reference page. +outputs: + - type: identity + destination: + type: directory + path: /opt/machine-id + +# services specify which `tbot` sub-services should be enabled and how they +# should be configured. +# +# See the full list of supported services and their configuration options +# under the Services section of this reference page. +services: + - type: example +``` + +If no configuration file is provided, a simple configuration is used based +on the provided CLI flags. 
Given the following sample CLI from +`tctl bots add ...`: + +```code +$ tbot start \ + --destination-dir=./tbot-user \ + --token=00000000000000000000000000000000 \ + --ca-pin=(=presets.ca_pin=) \ + --proxy-server=example.teleport.sh:443 +``` + +it uses a configuration equivalent to the following: + +```yaml +proxy_server: example.teleport.sh:443 + +onboarding: + join_method: "token" + token: "(=presets.tokens.first=)" + ca_pins: + - "(=presets.ca_pin=)" + +storage: + type: directory + path: /var/lib/teleport/bot + +services: + - type: identity + destination: + type: directory + path: ./tbot-user +``` + +## Outputs + +Outputs define what actions `tbot` should take when it runs. They describe +the format of the certificates to be generated, the roles used to generate the certificates, and the +destination where they should be written. + +There are multiple types of output. Select the one that is most appropriate for +your intended use-case. + +### `identity` + +The `identity` output can be used to authenticate: + +- SSH access to your Teleport servers, using `tsh`, openssh and tools like + ansible. +- Administrative actions against your cluster using tools like `tsh` or `tctl`. +- Management of Teleport resources using the Teleport Terraform provider. +- Access to the Teleport API using the Teleport Go SDK. + +See the [Getting Started guide](../../machine-workload-identity/getting-started.mdx) to see the `identity` +output used in context. + +```yaml +# type specifies the type of the output. For the identity output, this will +# always be `identity`. +type: identity +# ssh_config controls whether the identity output will attempt to generate an +# OpenSSH configuration file. This requires that `tbot` can connect to the +# Teleport Proxy Service. Must be "on" or "off". If unspecified, this defaults to +# "on". +ssh_config: on +# allow_reissue controls whether the certificates generated by the identity +# output can be reissued (e.g. 
used with `tsh apps login`/`tsh db login`). This +# defaults to `false` if unspecified. If you receive an error message indicating +# that the certificate cannot be reissued, set this to `true`. +allow_reissue: false + +(!docs/pages/includes/machine-id/common-output-config.yaml!) +``` + +### `application` + +The `application` output is used to generate credentials that can be used to +access applications that have been configured with Teleport. + +See the [Machine & Workload Identity with Applications guide](../../machine-workload-identity/access-guides/applications.mdx) +to see the `application` output used in context. + +```yaml +# type specifies the type of the output. For the application output, this will +# always be `application`. +type: application +# app_name specifies the application name, as configured in your Teleport +# cluster, that `tbot` should generate credentials for. +# This field must be specified. +app_name: grafana + +(!docs/pages/includes/machine-id/common-output-config.yaml!) +``` + +### `database` + +The `database` output is used to generate credentials that can be used to +access databases that have been configured with Teleport. + +See the [Machine & Workload Identity with Databases guide](../../machine-workload-identity/access-guides/databases.mdx) +to see the `database` output used in context. + +```yaml +# type specifies the type of the output. For the database output, this will +# always be `database`. +type: database +# service is the name of the database server, as configured in Teleport, that +# the output should generate credentials for. This field must be specified. +service: my-postgres-server +# database is the name of the specific database on the specified database +# server to generate credentials for. This field doesn't need to be specified +# for database types that don't support multiple individual databases. 
+database: my-database +# username is the name of the user on the specified database server to +# generate credentials for. This field doesn't need to be specified +# for database types that don't have users. +username: my-user +# format specifies the format to use for output artifacts. If +# unspecified, a default format is used. See the table titled "Supported +# formats" below for the full list of supported values. +format: tls + +(!docs/pages/includes/machine-id/common-output-config.yaml!) +``` + +#### Supported formats + +You can provide the following values to the `format` configuration field in +the `database` output type: + +| `format` | Description | +|---------------|--------------------------------------| +| Unspecified | Provides a certificate in `tlscert`, a private key in `key` and the CA in `teleport-database-ca.crt`. This is compatible with most clients and databases. | +| `mongo` | Provides `mongo.crt` and `mongo.cas`. This is designed to be used with MongoDB clients. | +| `cockroach` | Provides `cockroach/node.key`, `cockroach/node.crt`, and `cockroach/ca.crt`. This is designed to be used with CockroachDB clients. | +| `tls` | Provides `tls.key`, `tls.crt`, and `tls.cas`. This is for generic clients that require the specific file extensions. | + +### `kubernetes` + +The `kubernetes` output is used to generate credentials that can be used to +access Kubernetes clusters that have been configured with Teleport. + +It outputs a `kubeconfig.yaml` in the output destination, which can be used +with `kubectl`. + +See the [Machine & Workload Identity with Kubernetes Clusters guide](../../machine-workload-identity/access-guides/kubernetes.mdx) +to see the `kubernetes` output used in context. + +```yaml +# type specifies the type of the output. For the kubernetes output, this will +# always be `kubernetes`. 
+type: kubernetes +# kubernetes_cluster is the name of the Kubernetes cluster, as configured in +# Teleport, that the output should generate credentials and a kubeconfig for. +# This field must be specified. +kubernetes_cluster: my-cluster +# disable_exec_plugin disables the default behaviour of using the `tbot` binary +# as a `kubectl` credentials exec plugin. This is useful in environments where +# `tbot` does not exist on the system that will consume the generated kubeconfig +# (e.g. when using the `kubernetes_secret` output type). This credentials exec +# plugin is used to automatically refresh the credentials within a single +# invocation of `kubectl`. Defaults to `false`. +disable_exec_plugin: false + +(!docs/pages/includes/machine-id/common-output-config.yaml!) +``` + +### `kubernetes/v2` + +The `kubernetes/v2` output type can be used to access many Kubernetes clusters +as individual contexts within the same `kubeconfig.yaml`. + +```yaml +type: kubernetes/v2 + +# selectors include one or more matching Kubernetes clusters. Each match will be +# included in the resulting `kubeconfig.yaml`, assuming the bot has permission +# to access the cluster. +selectors: + # name includes an exact match by name. Note that wildcards are not currently + # supported. Multiple name selectors can be specified if desired. + - name: foo + # default_namespace is the namespace that should be configured for the + # context within the kubeconfig. This will be the namespace used by + # `kubectl`/SDK if the user has not explicitly provided one. + # + # If unspecified, no default namespace is set within the kubeconfig and + # `kubectl`/SDKs will use default based on their own logic - which is often + # to select the `default` namespace. + default_namespace: my-namespace + # labels include all clusters matching all of these labels. Multiple label + # selectors can be provided if needed. + - labels: + env: dev + +# The following configuration fields are available across most output types. 
+# Note that `roles` are not supported for this output type. + +destination: + type: directory + path: /opt/machine-id + +# credential_ttl and renewal_interval override the credential TTL and renewal +# interval for this specific output, so that you can make its certificates valid +# for shorter than `tbot`'s internal certificates. +# +# This is particularly useful when using `tbot` in one-shot as part of a cron job +# where you need `tbot`'s internal certificate to live long enough to be renewed +# on the next invocation, but don't want long-lived workload certificates on-disk. +credential_ttl: 30m +renewal_interval: 15m + +# disable_exec_plugin disables the default behaviour of using the `tbot` binary +# as a `kubectl` credentials exec plugin. This is useful in environments where +# `tbot` does not exist on the system that will consume the generated kubeconfig +# (e.g. when using the `kubernetes_secret` output type). This credentials exec +# plugin is used to automatically refresh the credentials within a single +# invocation of `kubectl`. Defaults to `false`. +disable_exec_plugin: false + +# context_name_template determines the format of context names in the generated +# kubeconfig. It is a Go template string that supports the following variables: +# +# - {{.ClusterName}} - Name of the Teleport cluster +# - {{.KubeName}} - Name of the Kubernetes cluster resource +# - {{.Labels}} - Map of labels applied to the Kubernetes cluster +# resource that can be indexed using `{{index .Labels "key"}}` +# +# By default, the following template will be used: "{{.ClusterName}}-{{.KubeName}}" +context_name_template: "{{.KubeName}}" + +# relay_server specifies the Relay service address that tbot should use to route +# Kubernetes traffic to instead of the Teleport control plane. 
When set, all +# Kubernetes API connections are sent via this Relay; only Kubernetes clusters +# reachable through the specified Relay will be accessible and tbot will not fall +# back to the control plane. Provide either a hostname or host:port (e.g. +# relay.example.com or relay.example.com:443). +# +# Use of the relay_server option requires `tbot` 18.5.0 or later. +relay_server: relay.example.com + +# name optionally overrides the name of the service used in logs and the `/readyz` +# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus +# symbols. +name: my-service-name +``` + +Each Kubernetes cluster matching a selector will result in a new context in the +generated `kubeconfig.yaml`. This can be consumed like so: + +```code +$ kubectl --kubeconfig /opt/machine-id/kubeconfig.yaml --context=example.teleport.sh-foo get pods +``` + +The context name is `[Teleport cluster name]-[Kubernetes cluster name]`, so the +command above runs `kubectl get pods` on the `foo` cluster. + +If clusters are added or removed over time, the `kubeconfig.yaml` will be +updated at the bot's normal renewal interval. You can trigger an early renewal +by restarting `tbot`, or signaling it with `pkill -usr1 tbot`. + +### `kubernetes/argo-cd` + +The `kubernetes/argo-cd` output type can be used to enable Argo CD to securely +connect to external Kubernetes clusters. + +It works by "declaratively" managing cluster credentials +[using Kubernetes secrets](https://argo-cd.readthedocs.io/en/release-1.8/operator-manual/declarative-setup/#clusters). +For each matching Kubernetes cluster, `tbot` will create and continuously update +a secret containing connection details and short-lived credentials, labeled with +`"argocd.argoproj.io/secret-type": "cluster"` for Argo CD to discover. + +As such, it is only intended to be used within a Kubernetes cluster. 
See the +[Machine & Workload Identity with Argo CD guide](../../machine-workload-identity/access-guides/argocd.mdx) +for information on how to deploy it using the Helm chart. + + + The `kubernetes/argo-cd` output type does not currently support + configurations where the Teleport proxy is behind a TLS-terminating load + balancer. + + +```yaml +type: kubernetes/argo-cd + +# selectors include one or more matching Kubernetes clusters. Each matching +# cluster that the bot has permission to access will be registered with Argo CD +# by creating a Kubernetes secret. +selectors: + # name includes an exact match by name. Note that wildcards are not currently + # supported. Multiple name selectors can be specified if desired. + - name: foo + # labels include all clusters matching all of these labels. Multiple label + # selectors can be provided if needed. + - labels: + env: dev + +# secret_namespace is the Kubernetes namespace in which Argo CD cluster secrets +# will be created. It must match the namespace where Argo CD is running. +# +# By default, `tbot` will use the `POD_NAMESPACE` environment variable, or if +# that is empty: "default". +secret_namespace: "argocd" + +# secret_name_prefix is the prefix that will be applied to Kubernetes secret +# names so they can be easily identified. The rest of the name will be derived +# from a hash of the target cluster name. +# +# By default, the prefix will be: "teleport.argocd-cluster". +secret_name_prefix: "argocd-cluster" + +# secret_labels is a set of labels that will be applied to the cluster secrets +# in addition to the "argocd.argoproj.io/secret-type" label added for Argo CD +# discovery. 
+# +# Label values can be Go template strings with the following variables: +# +# - {{.ClusterName}} - Name of the Teleport cluster +# - {{.KubeName}} - Name of the Kubernetes cluster resource +# - {{.Labels}} - Map of labels applied to the Kubernetes cluster +# resource that can be indexed using `{{index .Labels "key"}}` +# +# If the label value is empty, the label will not be added to the secret. +secret_labels: + department: engineering + cluster-region: |- + {{index .Labels "region"}} + +# secret_annotations is a set of annotations that will be applied to the cluster +# secrets in addition to `tbot`'s own annotations: +# +# - "teleport.dev/bot-name" - Name of the Bot +# - "teleport.dev/kubernetes-cluster-name" - Name of the Kubernetes cluster +# - "teleport.dev/updated" - RFC3339-formatted timestamp +# - "teleport.dev/tbot-version" - Version of tbot running +# - "teleport.dev/teleport-cluster-name" - Name of the Teleport cluster +secret_annotations: + creator: bob + +# project is the Argo CD project with which the Kubernetes clusters will be +# associated. +project: edge-services + +# namespaces optionally restricts which namespaces within the target Kubernetes +# clusters applications may be deployed into. By default, all namespaces are +# allowed. +namespaces: + - dev + - qa + +# cluster_resources determines whether Argo CD will be allowed to operate on +# cluster-scoped resources within the target clusters. This option is only +# applicable when `namespaces` is non-empty. +cluster_resources: true + +# cluster_name_template determines the format of cluster names in Argo CD. 
It is
+# a Go template string that supports the following variables:
+#
+# - {{.ClusterName}} - Name of the Teleport cluster
+# - {{.KubeName}} - Name of the Kubernetes cluster resource
+#
+# By default, the following template will be used: "{{.ClusterName}}-{{.KubeName}}"
+cluster_name_template: "{{.KubeName}}"
+
+# The following configuration fields are available across most output types.
+# Note that `roles` and `destination` are not supported for this output type.
+
+# credential_ttl and renewal_interval override the credential TTL and renewal
+# interval for this specific output, so that you can make its certificates valid
+# for a shorter period than `tbot`'s internal certificates.
+credential_ttl: 30m
+renewal_interval: 15m
+
+# name optionally overrides the name of the service used in logs and the `/readyz`
+# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus
+# symbols.
+name: my-service-name
+```
+
+### `ssh_host`
+
+The `ssh_host` output is used to generate the artifacts required to configure
+an OpenSSH server with Teleport, so that Teleport users can connect to it.
+
+The output generates the following artifacts:
+- `ssh_host-cert.pub`: an SSH certificate signed by the Teleport host certificate authority.
+- `ssh_host`: the private key associated with the SSH host certificate.
+- `ssh_host-user-ca.pub`: an export of the Teleport user certificate authority in an OpenSSH-compatible format.
+
+```yaml
+# type specifies the type of the output. For the ssh host output, this will
+# always be `ssh_host`.
+type: ssh_host
+# principals is the list of host names to include in the host certificates.
+# These names should match the names that clients use to connect to the host.
+principals:
+  - host.example.com
+
+(!docs/pages/includes/machine-id/common-output-config.yaml!)
+``` + +### `workload-identity-x509` + +The `workload-identity-x509` output is used to issue an X509 workload identity +credential and write this to a configured destination. + +The output generates the following artifacts: + +- `svid.pem`: the X509 SVID. +- `svid.key`: the private key associated with the X509 SVID. +- `bundle.pem`: the X509 bundle that contains the trust domain CAs. + +See [Workload Identity introduction](../../machine-workload-identity/workload-identity/introduction.mdx) +for more information on Workload Identity functionality. + +```yaml +# type specifies the type of the output. For the X509 Workload Identity output, +# this will always be `workload-identity-x509`. +type: workload-identity-x509 +(!docs/pages/includes/machine-id/workload-identity-selector-config.yaml!) +(!docs/pages/includes/machine-id/common-output-config.yaml!) +``` + +### `workload-identity-jwt` + +The `workload-identity-jwt` output is used to issue a JWT workload identity +credential and write this to a configured destination. + +The JWT workload identity credential is compatible with the [SPIFFE JWT SVID +specification](https://github.com/spiffe/spiffe/blob/main/standards/JWT-SVID.md). + +The output generates the following artifacts: + +- `jwt_svid`: the JWT SVID. + +See [Workload Identity introduction](../../machine-workload-identity/workload-identity/introduction.mdx) +for more information on Workload Identity functionality. + +```yaml +# type specifies the type of the output. For the JWT Workload Identity output, +# this will always be `workload-identity-jwt`. +type: workload-identity-jwt +# audiences specifies the values that should be included in the `aud` claim of +# the JWT. Typically, this identifies the intended recipient of the JWT and +# contains a single value. +# +# At least one audience value must be specified. +audiences: + - example.com + - foo.example.com +(!docs/pages/includes/machine-id/workload-identity-selector-config.yaml!) 
+(!docs/pages/includes/machine-id/common-output-config.yaml!) +``` + +### `workload-identity-aws-roles-anywhere` + +The `workload-identity-aws-roles-anywhere` output is used to issue an X509 +workload identity credential, exchange this for short-lived AWS credentials +using Roles Anywhere, and write these to a configured destination. + +The credentials are written in the [AWS shared credentials file format](https://docs.aws.amazon.com/sdkref/latest/guide/file-format.html#file-format-creds), +which is compatible with the AWS CLI and SDKs. + +The output generates the following artifacts: + +- `aws_credentials`: the CLI and SDK compatible AWS shared credentials file. + +See [Workload Identity introduction](../../machine-workload-identity/workload-identity/introduction.mdx) +for more information on Workload Identity functionality. + +```yaml +# type specifies the type of the output. For the Workload Identity AWS Roles +# Anywhere output, this will always be `workload-identity-aws-roles-anywhere`. +type: workload-identity-aws-roles-anywhere +# The following configuration fields are available across most output types. + +# destination specifies where the output should write any generated artifacts +# such as certificates and configuration files. +# +# See the full list of supported destinations and their configuration options +# under the Destinations section of this reference page. +destination: + type: directory + path: /opt/machine-id +# role_arn is the ARN of the AWS role that the generated credentials should +# assume. +# Required. +role_arn: arn:aws:iam::123456789012:role/example-role +# profile_arn is the ARN of the AWS profile to be used during the Roles Anywhere +# exchange. +# Required. +profile_arn: arn:aws:rolesanywhere:us-east-1:123456789012:profile/0000000-0000-0000-0000-00000000000 +# trust_anchor_arn is the ARN of the trust anchor that should be used during the +# Roles Anywhere exchange. +# Required. 
+trust_anchor_arn: arn:aws:rolesanywhere:us-east-1:123456789012:trust-anchor/0000000-0000-0000-0000-000000000000
+# region is the AWS region to use for the Roles Anywhere exchange. If omitted,
+# this defaults to the region set by the `AWS_REGION` environment variable or
+# the AWS configuration file.
+region: us-east-1
+# session_duration is the duration that the generated AWS credentials should be
+# valid for. This may be up to 12 hours. If omitted, this defaults to 6 hours.
+session_duration: 6h
+# session_renewal_interval is the interval at which the AWS credentials should
+# be renewed. This should be less than the session duration. If omitted, this
+# defaults to 1 hour.
+session_renewal_interval: 1h
+# credential_profile_name is the name of the profile to write to in the AWS
+# credentials file. If unspecified, this profile will be named `default`.
+credential_profile_name: my-profile
+# artifact_name is the name of the file that the AWS credentials should be
+# written to. If unspecified, this defaults to `aws_credentials`.
+artifact_name: my-credentials-file
+# overwrite_credential_file controls whether the AWS credentials file should be
+# overwritten if it already exists, or whether the profile added by tbot should
+# be merged with any existing profiles in the file. If unspecified, this
+# defaults to `false`.
+overwrite_credential_file: false
+(!docs/pages/includes/machine-id/workload-identity-selector-config.yaml!)
+# name optionally overrides the name of the service used in logs and the `/readyz`
+# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus
+# symbols.
+name: my-service-name
+```
+
+### `spiffe-svid`
+
+
+The use of this service has been deprecated as part of the introduction of the
+new Workload Identity configuration experience. You can replace the use of this
+output with the new `workload-identity-x509` or `workload-identity-jwt` outputs.
+
+For further information, see [the new Workload Identity configuration experience
+and how to migrate](./workload-identity/configuration-resource-migration.mdx).
+
+
+The `spiffe-svid` output is used to generate a SPIFFE X509 SVID and write this
+to a configured destination.
+
+The output generates the following artifacts:
+- `svid.pem`: the X509 SVID.
+- `svid.key`: the private key associated with the X509 SVID.
+- `bundle.pem`: the X509 bundle that contains the trust domain CAs.
+
+
+An artifact will also be generated for each entry within the `jwts` list. This
+will be named according to `file_name`. This artifact will contain only the
+JWT-SVID with the audience specified in `audience`.
+
+See [Workload Identity](../../machine-workload-identity/workload-identity/introduction.mdx) for more information on how
+to use SPIFFE SVIDs.
+
+```yaml
+# type specifies the type of the output. For the SPIFFE SVID output, this will
+# always be `spiffe-svid`.
+type: spiffe-svid
+# svid specifies the properties of the SPIFFE SVID that should be requested.
+svid:
+  # path specifies the path element that should be requested for the SPIFFE ID.
+  path: /svc/foo
+  # sans specifies optional Subject Alternative Names (SANs) to include in the
+  # generated X509 SVID. If omitted, no SANs are included.
+  sans:
+    # dns specifies the DNS SANs. If omitted, no DNS SANs are included.
+    dns:
+      - foo.svc.example.com
+    # ip specifies the IP SANs. If omitted, no IP SANs are included.
+    ip:
+      - 10.0.0.1
+  # jwts controls the output of JWT-SVIDs. Each entry will be generated as a
+  # separate artifact. If omitted, no JWT-SVIDs are generated.
+  jwts:
+    # audience specifies the audience that the JWT-SVID should be issued for.
+    # This typically identifies the service that the JWT-SVID will be used to
+    # authenticate to.
+    - audience: https://example.com
+      # file_name specifies the name of the file that the JWT-SVID should be
+      # written to.
+      file_name: example-jwt
+
+(!docs/pages/includes/machine-id/common-output-config.yaml!)
+```
+
+## Services
+
+Services are configurable long-lived components that run within `tbot`. Unlike
+Outputs, they may not necessarily generate artifacts. Typically, services
+provide supporting functionality for machine-to-machine access, for example,
+opening tunnels or providing APIs.
+
+### `workload-identity-api`
+
+The `workload-identity-api` service opens a listener that provides a local
+workload identity API, intended to serve workload identity credentials
+(e.g. X509/JWT SPIFFE SVIDs) to workloads running on the same host.
+
+For more information about this, see the
+[Workload Identity API and Workload Attestation reference](./workload-identity/workload-identity-api-and-workload-attestation.mdx).
+
+The `workload-identity-api` service will not start if `tbot` has been configured
+to run in one-shot mode.
+
+### `spiffe-workload-api`
+
+
+The use of this service has been deprecated as part of the introduction of the
+new Workload Identity configuration experience. You can replace the use of this
+service with the new `workload-identity-api` service.
+
+For further information, see [the new Workload Identity configuration experience
+and how to migrate](./workload-identity/configuration-resource-migration.mdx).
+
+
+The `spiffe-workload-api` service opens a listener for a service that implements
+the SPIFFE Workload API. This service is used to provide SPIFFE SVIDs to
+workloads.
+
+See [Workload Identity](../../machine-workload-identity/workload-identity/introduction.mdx) for more information on the
+SPIFFE Workload API.
+
+```yaml
+# type specifies the type of the service. For the SPIFFE Workload API service,
+# this will always be `spiffe-workload-api`.
+type: spiffe-workload-api
+# listen specifies the address that the service should listen on.
+#
+# Two types of listener are supported:
+# - TCP: `tcp://
<address>:<port>`
+# - Unix socket: `unix:///<path>`
+listen: unix:///opt/machine-id/workload.sock
+# attestors allows Workload Attestation to be configured for this Workload
+# API.
+attestors:
+  # docker is configuration for the Docker Workload Attestor. See the Workload
+  # Identity API & Workload Attestation reference for more information.
+  docker:
+    # enabled specifies whether the workload's identity should be attested with
+    # information about its Docker container. If unspecified, this defaults to
+    # false.
+    enabled: true
+    # addr is the address at which the Docker Engine daemon can be reached. It
+    # must be in the form `unix://path/to/socket`, as connecting via TCP is not
+    # currently supported. If unspecified, this defaults to the standard socket
+    # location for "rootful" Docker installations: `unix:///var/run/docker.sock`.
+    addr: unix:///var/run/docker.sock
+  # kubernetes is configuration for the Kubernetes Workload Attestor. See
+  # the Kubernetes Workload Attestor section for more information.
+  kubernetes:
+    # enabled specifies whether the Kubernetes Workload Attestor should be
+    # enabled. If unspecified, this defaults to false.
+    enabled: true
+    # kubelet holds configuration relevant to the Kubernetes Workload Attestor's
+    # interaction with the Kubelet API.
+    kubelet:
+      # read_only_port is the port on which the Kubelet API is exposed for
+      # read-only operations. Since Kubernetes 1.16, the read-only port is
+      # typically disabled by default and secure_port should be used instead.
+      read_only_port: 10255
+      # secure_port is the port on which the attestor should connect to the
+      # Kubelet secure API. If unspecified, this defaults to `10250`. This is
+      # mutually exclusive with read_only_port.
+      secure_port: 10250
+      # token_path is the path to the token file that the Kubelet API client
+      # should use to authenticate with the Kubelet API. If unspecified, this
+      # defaults to `/var/run/secrets/kubernetes.io/serviceaccount/token`.
+ token_path: "/var/run/secrets/kubernetes.io/serviceaccount/token" + # ca_path is the path to the CA file that the Kubelet API client should + # use to validate the Kubelet API server's certificate. If unspecified, + # this defaults to `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`. + ca_path: "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt" + # skip_verify is used to disable verification of the Kubelet API server's + # certificate. If unspecified, this defaults to false. + # + # If specified, the value specified in ca_path is ignored. + # + # This is useful in cases where the Kubelet API server has not been issued + # with a certificate signed by the Kubernetes cluster's CA. This is fairly + # common with a number of Kubernetes distributions. + skip_verify: true + # anonymous is used to disable authentication with the Kubelet API. If + # unspecified, this defaults to false. If set, the token_path field is + # ignored. + anonymous: false + # podman is configuration for the Podman Workload Attestor. See the Workload + # Identity API & Workload Attestation reference for more information. + podman: + # enabled specifies whether the workload's identity should be attested with + # information about its Podman container and pod. If unspecified, this + # defaults to false. + enabled: true + # addr is the address at which the Podman API Service can be reached. It + # must be in the form `unix://path/to/socket`, as connecting via TCP is not + # supported. This field is required and there is no default value. See the + # Workload Identity API & Workload Attestation reference for more information. + addr: unix:///run/podman/podman.sock + # sigstore is configuration for the Sigstore Workload attestor. See the + # Sigstore Workload Attestation page for more information. + sigstore: + # enabled specifies whether tbot will discover Sigstore signatures for the + # workload's container image. If unspecified, this defaults to false. 
+      enabled: true
+      # additional_registries optionally configures the OCI registries that will
+      # be searched for signatures in addition to the workload container image's
+      # source registry.
+      additional_registries:
+        -
+          # host of the OCI registry.
+          host: ghcr.io
+      # credentials_path is the path to a Docker or Podman configuration file
+      # containing per-registry credentials.
+      credentials_path: /path/to/docker/config.json
+      # allowed_private_network_prefixes are the private IP address prefixes (CIDR
+      # blocks) that the Sigstore attestor is allowed to connect to. By default,
+      # tbot will only connect to registries at publicly-routable IP addresses to
+      # reduce the surface area for SSRF attacks.
+      allowed_private_network_prefixes:
+        - "192.168.1.42/32"
+        - "fd12:3456:789a:1::1/128"
+  # systemd is configuration for the Systemd Workload Attestor. See the Workload
+  # Identity API & Workload Attestation reference for more information.
+  systemd:
+    # enabled specifies whether the workload's identity should be attested with
+    # information about its Systemd service. If unspecified, this defaults to
+    # false.
+    enabled: true
+  # unix is configuration for the Unix Workload Attestor.
+  unix:
+    # binary_hash_max_size_bytes is the maximum number of bytes that will be
+    # read from a process' binary to calculate its SHA-256 checksum. If the
+    # binary is larger than this, the `workload.unix.binary_hash` attribute
+    # will be empty. If unspecified, this defaults to 1GiB. Set it to -1 to
+    # make it unlimited.
+    binary_hash_max_size_bytes: 1024
+# svids specifies the SPIFFE SVIDs that the Workload API should provide.
+svids:
+  # path specifies the path element that should be requested for the SPIFFE
+  # ID.
+  - path: /svc/foo
+    # hint is a free-form string which can be used to help workloads determine
+    # which SVID to select when multiple are available. If omitted, no hint is
+    # included.
+ hint: my-hint + # sans specifies optional Subject Alternative Names (SANs) to include in the + # generated X509 SVID. If omitted, no SANs are included. + sans: + # dns specifies the DNS SANs. If omitted, no DNS SANs are included. + dns: + - foo.svc.example.com + # ip specifies the IP SANs. If omitted, no IP SANs are included. + ip: + - 10.0.0.1 + # rules specifies a list of workload attestation rules. At least one of + # these rules must be satisfied by the workload in order for it to receive + # this SVID. + # + # If no rules are specified, the SVID will be issued to all workloads that + # connect to this service. + rules: + # unix is a group of workload attestation criteria that are available + # when the workload is running on the same host, and is connected to + # the Workload API using a Unix socket. + # + # If any of the criteria in this group are specified, then workloads + # that do not connect using a Unix socket will not receive this SVID. + - unix: + # uid is the ID of the user that the workload process must be running + # as to receive this SVID. + # + # If unspecified, the UID is not checked. + uid: 1000 + # pid is the ID that the workload process must have to receive this + # SVID. + # + # If unspecified, the PID is not checked. + pid: 1234 + # gid is the ID of the primary group that the workload process must be + # running as to receive this SVID. + # + # If unspecified, the GID is not checked. + gid: 50 +# name optionally overrides the name of the service used in logs and the `/readyz` +# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus +# symbols. +name: my-service-name +``` + +#### Envoy SDS + +The `spiffe-workload-api` service endpoint also implements the Envoy SDS API. +This allows it to act as a source of certificates and certificate authorities +for the Envoy proxy. + +As a forward proxy, Envoy can be used to attach an X.509 SVID to an outgoing +connection from a workload that is not SPIFFE-enabled. 
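+
+The forward-proxy case can be sketched with a cluster-level `UpstreamTlsContext`
+that sources its client certificate from the same SDS API. The snippet below is
+an illustrative sketch rather than a tested configuration: `external_service`
+and its address and port are placeholders, and it assumes a `tbot_agent`
+cluster pointing at the Workload API socket, as defined in the reverse-proxy
+example later in this section.
+
+```yaml
+clusters:
+  # external_service is a placeholder upstream that expects mTLS.
+  - name: external_service
+    type: strict_dns
+    load_assignment:
+      cluster_name: external_service
+      endpoints:
+        - lb_endpoints:
+            - endpoint:
+                address:
+                  socket_address:
+                    address: api.example.com
+                    port_value: 443
+    transport_socket:
+      name: envoy.transport_sockets.tls
+      typed_config:
+        "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
+        common_tls_context:
+          # Present the workload's default SVID on outgoing connections.
+          tls_certificate_sds_secret_configs:
+            - name: "default"
+              sds_config:
+                resource_api_version: V3
+                api_config_source:
+                  api_type: GRPC
+                  transport_api_version: V3
+                  grpc_services:
+                    envoy_grpc:
+                      cluster_name: tbot_agent
+```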
+ +As a reverse proxy, Envoy can be used to terminate mTLS connections from +SPIFFE-enabled clients. Envoy can validate that the client has presented a valid +X.509 SVID and perform enforcement of authorization policies based on the SPIFFE +ID contained within the SVID. + +When acting as a reverse proxy for certain protocols, Envoy can be configured +to attach a header indicating the identity of the client to a request before +forwarding it to the service. This can then be used by the service to make +authorization decisions based on the client's identity. + +When configuring Envoy to use the SDS API exposed by the `spiffe-workload-api` +service, three additional special names can be used to aid configuration: + +- `default`: `tbot` will return the default SVID for the workload. +- `ROOTCA`: `tbot` will return the trust bundle for the trust domain that the +workload is a member of. +- `ALL`: `tbot` will return the trust bundle for the trust domain that the +workload is a member of, as well as the trust bundles of any trust domain +that the trust domain is federated with. + +The following is an example Envoy configuration that sources a certificate +and trust bundle from the `spiffe-workload-api` service listening on +`unix:///opt/machine-id/workload.sock`. It requires that a connecting client +presents a valid SPIFFE SVID and forwards this information to the backend +service in the `x-forwarded-client-cert` header. 
+ +```yaml +node: + id: "my-envoy-proxy" + cluster: "my-cluster" +static_resources: + listeners: + - name: test_listener + enable_reuse_port: false + address: + socket_address: + address: 0.0.0.0 + port_value: 8080 + filter_chains: + - filters: + - name: envoy.filters.network.http_connection_manager + typed_config: + "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager + common_http_protocol_options: + idle_timeout: 1s + forward_client_cert_details: sanitize_set + set_current_client_cert_details: + uri: true + stat_prefix: ingress_http + route_config: + name: local_route + virtual_hosts: + - name: my_service + domains: ["*"] + routes: + - match: + prefix: "/" + route: + cluster: my_service + http_filters: + - name: envoy.filters.http.router + typed_config: + "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router + transport_socket: + name: envoy.transport_sockets.tls + typed_config: + "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext + common_tls_context: + # configure the certificate that the reverse proxy should present. + tls_certificate_sds_secret_configs: + # `name` can be replaced with the desired SPIFFE ID if multiple + # SVIDs are available. + - name: "default" + sds_config: + resource_api_version: V3 + api_config_source: + api_type: GRPC + transport_api_version: V3 + grpc_services: + envoy_grpc: + cluster_name: tbot_agent + # combined validation context "melds" two validation contexts + # together. This is handy for extending the validation context + # from the SDS source. + combined_validation_context: + default_validation_context: + # You can use match_typed_subject_alt_names to configure + # rules that only allow connections from specific SPIFFE IDs. 
+ match_typed_subject_alt_names: [] + validation_context_sds_secret_config: + name: "ALL" # This can also be replaced with the trust domain name + sds_config: + resource_api_version: V3 + api_config_source: + api_type: GRPC + transport_api_version: V3 + grpc_services: + envoy_grpc: + cluster_name: tbot_agent + clusters: + # my_service is the example service that Envoy will forward traffic to. + - name: my_service + type: strict_dns + load_assignment: + cluster_name: my_service + endpoints: + - lb_endpoints: + - endpoint: + address: + socket_address: + address: 127.0.0.1 + port_value: 8090 + - name: tbot_agent + http2_protocol_options: {} + load_assignment: + cluster_name: tbot_agent + endpoints: + - lb_endpoints: + - endpoint: + address: + pipe: + # Configure the path to the socket that `tbot` is + # listening on. + path: /opt/machine-id/workload.sock +``` + +### `database-tunnel` + +The `database-tunnel` service opens a listener for a service that tunnels +connections to a database server. + +The tunnel authenticates connections for the client, meaning that any +application which can connect to the listener will be able to connect to the +database as the specified user. For this reason, we heavily recommend using the +Unix socket listener type and configuring the permissions of the socket to +ensure that only the intended applications can connect. + +```yaml +# type specifies the type of the service. For the database tunnel service, this +# will always be `database-tunnel`. +type: database-tunnel +# listen specifies the address that the service should listen on. +# +# Two types of listener are supported: +# - TCP: `tcp://
<address>:<port>`
+# - Unix socket: `unix:///<path>`
+listen: tcp://127.0.0.1:25432
+# service is the name of the database server, as configured in Teleport, that
+# the service should open a tunnel to.
+service: postgres-docker
+# database is the name of the specific database on the specified database
+# service.
+database: postgres
+# username is the name of the user on the specified database server to open a
+# tunnel for.
+username: postgres
+# name optionally overrides the name of the service used in logs and the `/readyz`
+# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus
+# symbols.
+name: my-service-name
+```
+
+The `database-tunnel` service will not start if `tbot` has been configured
+to run in one-shot mode.
+
+### `application-tunnel`
+
+The `application-tunnel` service opens a listener that tunnels connections to
+an application in Teleport. It supports both HTTP and TCP applications. This is
+useful for applications which cannot be configured to use client certificates,
+when using TCP applications, or when using an L7 load balancer in front of your
+Teleport proxies.
+
+The tunnel authenticates connections for the client, meaning that any
+client that connects to the listener will be able to access the application.
+For this reason, ensure that the listener is only accessible by the intended
+clients by using the Unix socket listener or binding to `127.0.0.1`.
+
+```yaml
+# type specifies the type of the service. For the application tunnel service,
+# this will always be `application-tunnel`.
+type: application-tunnel
+# listen specifies the address that the service should listen on.
+#
+# Two types of listener are supported:
+# - TCP: `tcp://
<address>:<port>`
+# - Unix socket: `unix:///<path>`
+listen: tcp://127.0.0.1:8084
+# app_name is the name of the application, as configured in Teleport, that
+# the service should open a tunnel to.
+app_name: my-application
+# name optionally overrides the name of the service used in logs and the `/readyz`
+# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus
+# symbols.
+name: my-service-name
+```
+
+The `application-tunnel` service will not start if `tbot` has been configured
+to run in one-shot mode.
+
+### `ssh-multiplexer`
+
+The `ssh-multiplexer` service opens a listener for a high-performance local
+SSH multiplexer. This is designed for use cases that create a large number
+of SSH connections using Teleport, for example, Ansible.
+
+This differs from using the `identity` output for SSH in a few ways:
+
+- The `tbot` instance running the `ssh-multiplexer` service must be running on
+  the same host as the SSH client.
+- The `ssh-multiplexer` service is designed to be a long-running background
+  service and cannot be used in one-shot mode. It must be running in order for
+  SSH connections to be established and to continue running.
+- Resource consumption is significantly reduced by multiplexing SSH connections
+  through a smaller number of upstream connections to the Teleport Proxy Service.
+- It's possible to configure the `ssh-multiplexer` service to connect to SSH
+  servers through a [Teleport Relay](../architecture/relay.mdx).
+
+Additionally, the `ssh-multiplexer` opens a socket that implements the SSH
+agent protocol. This allows the SSH client to authenticate without writing the
+sensitive private key to disk.
+
+By default, the `ssh-multiplexer` service outputs an `ssh_config` which uses
+`tbot` itself as the ProxyCommand. You can further reduce the resource
+consumption of SSH connections by installing and specifying the
+`fdpass-teleport` binary.
+
+```yaml
+# type specifies the type of the service.
For the SSH multiplexer
+# service, this will always be `ssh-multiplexer`.
+type: ssh-multiplexer
+# destination specifies where the tunnel should be opened and any artifacts
+# should be written. It must be of type `directory`.
+destination:
+  type: directory
+  path: /foo
+# enable_resumption specifies whether the multiplexer should negotiate
+# session resumption. This allows SSH connections to survive network
+# interruptions. It does increase the memory resources used per connection.
+#
+# If unspecified, this defaults to true.
+enable_resumption: true
+# proxy_command specifies the command that should be used as the ProxyCommand
+# in the generated SSH configuration.
+#
+# If unspecified, the ProxyCommand will be the currently running binary of tbot
+# itself.
+proxy_command:
+  - /usr/local/bin/fdpass-teleport
+# proxy_templates_path specifies a path to a proxy templates configuration file
+# which should be used when resolving the Teleport node to connect to. This
+# file must be accessible by the long-lived tbot process running the
+# ssh-multiplexer.
+#
+# If unspecified, proxy templates will not be used.
+proxy_templates_path: /etc/my-proxy-templates.yaml
+# relay_server specifies the Relay service address that tbot should use to route
+# SSH traffic instead of going through the Teleport control plane. When set, all
+# SSH connections are sent via this Relay; only servers reachable through the
+# specified Relay will be accessible and tbot will not fall back to the control
+# plane. Provide either a hostname or host:port (e.g. relay.example.com or
+# relay.example.com:443).
+#
+# Use of the relay_server option requires `tbot` 18.3.0 or later.
+relay_server: relay.example.com
+# name optionally overrides the name of the service used in logs and the `/readyz`
+# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus
+# symbols.
+name: my-service-name +``` + +Once configured, `tbot` will create the following artifacts in the specified +destination: + +- `ssh_config`: an SSH configuration file that will configure OpenSSH to use + the multiplexer and agent. +- `known_hosts`: the known hosts file that will be used by OpenSSH to validate + a server's identity. +- `v1.sock`: the Unix socket that the multiplexer listens on. +- `agent.sock`: the Unix socket that the SSH agent listens on. + +The `ssh-multiplexer` service will not start if `tbot` has been configured +to run in one-shot mode. + +#### Using the SSH multiplexer programmatically + +To use the SSH multiplexer programmatically, your SSH client library will need +to support one of two things: + +- The ability to use a ProxyCommand with FDPass. If so, you can use the + `ssh_config` file generated by `tbot` to configure the SSH client. +- The ability to accept an open socket to use as the connection to the SSH + server. You will then need to manually connect to the socket and send the + multiplexer request. + +The `v1.sock` Unix Domain Socket implements the V1 Teleport SSH multiplexer +protocol. The client must first send a short request message to indicate the +desired target host and port, terminated with a null byte. The multiplexer will +then begin to forward traffic to the target host and port. The client can then +make an SSH connection. + +
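To make the framing above concrete: the request is simply the target `host:port`, encoded as UTF-8 and terminated by a single null byte. A minimal sketch (the `mux_request` helper is illustrative, not part of `tbot`):

```python
def mux_request(host: str, port: int) -> bytes:
    # Build a V1 multiplexer request: the target "host:port",
    # terminated by a single null byte, as described above.
    return f"{host}:{port}\x00".encode("utf-8")

# The bytes sent for host "example.com", port 3022:
print(mux_request("example.com", 3022))  # b'example.com:3022\x00'
```

Both examples below send exactly this byte sequence before handing the socket to the SSH library.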
+Example in Python (Paramiko) +```python +import os +import paramiko +import socket + +host = "ubuntu.example.teleport.sh" +username = "root" +port = 3022 +directory_destination = "/opt/machine-id" + +# Connect to Mux Unix Domain Socket +sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) +sock.connect(os.path.join(directory_destination, "v1.sock")) +# Send the connection request specifying the server you wish to connect to +sock.sendall(f"{host}:{port}\x00".encode("utf-8")) + +# We must set the env var as Paramiko does not make this configurable... +os.environ["SSH_AUTH_SOCK"] = os.path.join(directory_destination, "agent.sock") + +ssh_config = paramiko.SSHConfig() +with open(os.path.join(directory_destination, "ssh_config")) as f: + ssh_config.parse(f) + +ssh_client = paramiko.SSHClient() + +# Paramiko does not support known_hosts with CAs: https://github.com/paramiko/paramiko/issues/771 +# Therefore, we must disable host key checking +ssh_client.set_missing_host_key_policy(paramiko.WarningPolicy()) + +ssh_client.connect( + hostname=host, + port=port, + username=username, + sock=sock +) + +stdin, stdout, stderr = ssh_client.exec_command("hostname") +print(stdout.read().decode()) +``` +
+ +
+Example in Go +```go +package main + +import ( + "fmt" + "net" + "path/filepath" + + "golang.org/x/crypto/ssh" + "golang.org/x/crypto/ssh/agent" + "golang.org/x/crypto/ssh/knownhosts" +) + +func main() { + host := "ubuntu.example.teleport.sh" + username := "root" + directoryDestination := "/opt/machine-id" + + // Setup Agent and Known Hosts + agentConn, err := net.Dial( + "unix", filepath.Join(directoryDestination, "agent.sock"), + ) + if err != nil { + panic(err) + } + defer agentConn.Close() + agentClient := agent.NewClient(agentConn) + hostKeyCallback, err := knownhosts.New( + filepath.Join(directoryDestination, "known_hosts"), + ) + if err != nil { + panic(err) + } + + // Create SSH Config + sshConfig := &ssh.ClientConfig{ + Auth: []ssh.AuthMethod{ + ssh.PublicKeysCallback(agentClient.Signers), + }, + User: username, + HostKeyCallback: hostKeyCallback, + } + + // Dial Unix Domain Socket and send multiplexing request + conn, err := net.Dial( + "unix", filepath.Join(directoryDestination, "v1.sock"), + ) + if err != nil { + panic(err) + } + defer conn.Close() + _, err = fmt.Fprint(conn, fmt.Sprintf("%s:0\x00", host)) + if err != nil { + panic(err) + } + + sshConn, sshChan, sshReq, err := ssh.NewClientConn( + conn, + // Port here doesn't matter because Multiplexer has already established + // connection. + fmt.Sprintf("%s:22", host), + sshConfig, + ) + if err != nil { + panic(err) + } + sshClient := ssh.NewClient(sshConn, sshChan, sshReq) + defer sshClient.Close() + + sshSess, err := sshClient.NewSession() + if err != nil { + panic(err) + } + defer sshSess.Close() + + out, err := sshSess.CombinedOutput("hostname") + if err != nil { + panic(err) + } + fmt.Println(string(out)) +} +``` +
+
+### `application-proxy`
+
+The `application-proxy` service opens a listener serving an HTTP proxy that
+forwards requests to HTTP applications enrolled in Teleport. It handles the
+process of attaching the necessary client certificate to the upstream connection
+on behalf of the client.
+
+Unlike the `application-tunnel` service, which is explicitly bound to a specific
+application, the `application-proxy` service dynamically routes requests to the
+correct application. This makes it more suitable for use-cases where a client
+must connect to a large number of applications enrolled in Teleport,
+for example, for scraping Prometheus metric endpoints through Teleport.
+
+The proxy authenticates connections for the client, meaning that any client that
+connects to the listener will be able to access applications through Teleport.
+For this reason, ensure that the listener is only accessible by the intended
+clients by using the Unix socket listener or binding to `127.0.0.1`.
+
+```yaml
+# type specifies the type of the service. For the application proxy service,
+# this will always be `application-proxy`.
+type: application-proxy
+# listen specifies the address that the service should listen on.
+#
+# Two types of listener are supported:
+# - TCP: `tcp://<address>
:<port>`
+# - Unix socket: `unix:///<path>`
+listen: tcp://127.0.0.1:8080
+# name optionally overrides the name of the service used in logs and the `/readyz`
+# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus
+# symbols.
+name: my-service-name
+```
+
+The `application-proxy` service will not start if `tbot` has been configured
+to run in one-shot mode.
+
+#### Limitations
+
+There are a number of limitations in the initial implementation of the
+`application-proxy` to be aware of:
+
+- Only HTTP applications are supported. TCP applications are not supported at
+  this time.
+- Only HTTP/1 and HTTP/1.1 are supported. HTTP/2 is not supported at this time.
+
+If these limitations are problematic, consider using the `application-tunnel`
+service instead, or reach out to the Teleport team to discuss your use-case.
+
+#### Using the application proxy
+
+The listener exposed by the `application-proxy` service is compatible with a
+wide range of HTTP clients and libraries. How this is configured will depend on
+the client or library being used. For many clients, this is possible using the
+`http_proxy` environment variable.
+
+When using the `application-proxy`, either the `Host` header or the authority
+specified within a request's target URI must match the name of the application
+as enrolled within Teleport.
+
+For example, using `curl` to access the HTTP application enrolled in Teleport
+called `my-app`:
+
+```bash
+curl --proxy localhost:8080 http://my-app/example
+```
+
+## Destinations
+
+A destination is somewhere that `tbot` can read and write artifacts.
+
+Destinations are used in two places in the `tbot` configuration:
+
+- Specifying where `tbot` should store its internal state.
+- Specifying where an output should write its generated artifacts.
+
+Destinations come in multiple types. Usually, the `directory` type is the most
+appropriate.
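To illustrate both places, here is a minimal sketch of a `tbot` configuration (paths are illustrative; the top-level `storage` section is assumed from the wider `tbot` configuration, while the service entry mirrors the `ssh-multiplexer` example above):

```yaml
# Destination for tbot's internal state.
storage:
  type: directory
  path: /var/lib/teleport/bot
services:
  - type: ssh-multiplexer
    # Destination for the artifacts this service generates.
    destination:
      type: directory
      path: /opt/machine-id
```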
+ +### `directory` + +The `directory` destination type stores artifacts as files in a specified +directory. + +```yaml +# type specifies the type of the destination. For the directory destination, +# this will always be `directory`. +type: directory + +# path specifies the path to the directory that this destination should write +# to. This directory should already exist, or `tbot init` should be used to +# create it with the correct permissions. +path: /opt/machine-id + +# symlinks configures the behaviour of symlink attack prevention. +# Requires Linux 5.6+. +# Supported values: +# * try-secure (default): Attempt to securely read and write certificates +# without symlinks, but fall back (with a warning) to insecure read +# and write if the host doesn't support this. +# * secure: Attempt to securely read and write certificates, with a hard error +# if unsupported. +# * insecure: Quietly allow symlinks in paths. +symlinks: try-secure + +# acls configures whether Linux Access Control List (ACL) setup should occur for +# this destination. +# Requires Linux with a file system that supports ACLs. +# Supported values: +# * try (default on Linux): Attempt to use ACLs, warn at runtime if ACLs +# are configured but invalid. +# * off (default on non-Linux): Do not attempt to use ACLs. +# * required: Always use ACLs, produce a hard error at runtime if ACLs +# are invalid. +acls: try + +# readers is a list of users and groups that will be allowed by ACL to access +# this directory output. The `acls` parameter must be either `try` or +# `required`. File ACLs will be monitored and corrected at runtime to ensure +# they match this configuration. +# Individual entries may either specify `user` or `group`, but not both. `user` +# accepts an existing named user or a UID, and `group` accepts an existing named +# group or GID. UIDs and GIDs do not necessarily need to exist on the local +# system. +# An empty list of readers disables runtime ACL management. 
+readers:
+- user: teleport
+- user: 123
+- group: teleport
+- group: 456
+```
+
+### `memory`
+
+The `memory` destination type stores artifacts in the process memory. When
+the process exits, nothing is persisted. This destination type
+is most suitable for ephemeral environments, but can also be used for testing.
+
+Configuration:
+
+```yaml
+# type specifies the type of the destination. For the memory destination, this
+# will always be `memory`.
+type: memory
+```
+
+### `kubernetes_secret`
+
+The `kubernetes_secret` destination type stores artifacts in a Kubernetes
+secret. This allows them to be mounted into other containers deployed in
+Kubernetes.
+
+Prerequisites:
+
+- `tbot` must be running in Kubernetes with at most one replica. If using a
+  `deployment`, then the `Recreate` strategy must be used to ensure only one
+  instance exists at any time. This is because multiple `tbot` agents configured
+  with the same secret will compete to write to it, which may leave the secret
+  in an inconsistent state or cause the writes to fail.
+- The `tbot` pod must be configured with a service account that allows it to
+  read and write the configured secret.
+- The `POD_NAMESPACE` environment variable must be configured with the
+  name of the namespace that `tbot` is running in. This is best achieved with
+  the [Downward API](https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/).
+
+There is no requirement that the secret already exists; one will be created
+if it does not. If a secret already exists, `tbot` will overwrite any
+other keys within the secret.
+
+Configuration:
+
+```yaml
+# type specifies the type of the destination. For the kubernetes_secret
+# destination, this will always be `kubernetes_secret`.
+type: kubernetes_secret
+# name specifies the name of the Kubernetes Secret to write the artifacts to.
+name: my-secret +# namespace specifies the Kubernetes namespace that the secret should be written +# to. If unspecified, this defaults to the value of the `POD_NAMESPACE` +# environment variable. +# +# When using the Helm chart, and specifying a namespace other than the one that +# `tbot` is running in, you must manually grant the `tbot` service account +# privileges to read and write to secrets in that namespace. +namespace: default +# labels specifies the labels to apply to the Kubernetes Secret. This field is +# optional. +labels: + example: "foo" +``` + +## Bot resource + +The `bot` resource is used to manage Machine & Workload Identity Bots. It is +used to configure the access that is granted to a Bot. + +(!docs/pages/includes/machine-id/bot-spec.mdx!) + +You can apply a file containing YAML that defines a `bot` resource using +`tctl create -f ./bot.yaml`. diff --git a/docs/pages/reference/machine-workload-identity/diagnostics-service.mdx b/docs/pages/reference/machine-workload-identity/diagnostics-service.mdx new file mode 100644 index 0000000000000..588eefb7fe0b1 --- /dev/null +++ b/docs/pages/reference/machine-workload-identity/diagnostics-service.mdx @@ -0,0 +1,184 @@ +--- +title: Diagnostics Service +description: Reference information for the `tbot` diagnostics service. +tags: + - conceptual + - platform-wide +--- + +The `tbot` process can optionally expose a diagnostics service. This is +disabled by default, but once enabled, allows useful information about the +running `tbot` process to be queried via HTTP. + +## Configuration + +To enable the diagnostics service, you must specify an address and port for +it to listen on. + +For security reasons, you should ensure that access to this listener is +restricted. In most cases, the most secure thing to do is to bind the listener +to `127.0.0.1`, which will only allow access from the local machine. 
+ +You can configure the diagnostics service using the `--diag-addr` CLI parameter: + +```code +$ tbot start -c my-config.yaml --diag-addr 127.0.0.1:3001 +``` + +Or directly within the configuration file using `diag_addr`: + +```yaml +diag_addr: 127.0.0.1:3001 +``` + +## Endpoints + +The diagnostics service exposes the following HTTP endpoints. + +### `/livez` + +The `/livez` endpoint always returns with a 200 status code. This can be used +to determine if the `tbot` process is running and has not crashed or hung. + +If deploying to Kubernetes, we recommend this endpoint is used for your +Liveness Probe. + +### `/readyz` and `/readyz/{service}` + +The `/readyz` endpoint returns the overall health of `tbot`, including all of +its internal and user-defined services. If all services are healthy, it will +respond with a 200 status code. If any service is unhealthy, it will respond +with a 503 status code. + +```code +$ curl -v http://127.0.0.1:3001/readyz + +HTTP/1.1 503 Service Unavailable +Content-Type: application/json + +{ + "status": "unhealthy", + "services": { + "ca-rotation": { + "status": "healthy" + }, + "heartbeat": { + "status": "healthy" + }, + "identity": { + "status": "healthy" + }, + "aws-roles-anywhere": { + "status": "unhealthy", + "reason": "access denied to perform action \"read\" on \"workload_identity\"" + } + }, + "pid": 42344 +} +``` + +If deploying to Kubernetes, we recommend this endpoint is used for your +Readiness Probe. + +You can also use the `/readyz/{service}` endpoint to query the health of a +specific service. + +```code +$ curl -v http://127.0.0.1:3001/readyz/aws-roles-anywhere + +HTTP/1.1 200 OK +Content-Type: application/json + +{ + "status": "healthy" +} +``` + +By default, `tbot` generates service names based on their type (e.g. +`application-output-1`). You can override this by providing your own name in the +`tbot` configuration file. 
+ +```yaml +services: + - type: identity + name: my-service-123 +``` + +### `/metrics` + +The `/metrics` endpoint returns a Prometheus-compatible metrics snapshot. + +See [Prometheus Metrics](#prometheus-metrics) below for more information. + +### `/debug/pprof` + +These endpoints allow the collection of pprof profiles for debugging purposes. +You may be asked by a Teleport engineer to collect these if you are experiencing +performance issues. + +They will only be enabled if the `-d`/`--debug` flag is provided when starting +`tbot`. This is known as **debug mode**. + +## Prometheus metrics + +The `tbot` process exposes a number of Prometheus metrics via the `/metrics` +endpoint of the diagnostics service. + +In addition to exporting the standard Go runtime metrics, `tbot` also exports +custom metrics that reflect the health and performance of the various +configurable services. + +## Advice + +When monitoring the health of `tbot`, there are three categories of metrics you +should consider: + +- The health of the `tbot` process itself. For example, how much CPU time and + memory is it using? These can be strong indicators of overall health and + provide early warning signs of potential issues (e.g. memory leaks). +- The health of the internal services that `tbot` relies on. For example, has + `tbot` been able to successfully renew its internal identity? If these + internal services have become unhealthy, then it is likely that user-defined + services within `tbot` will also become unhealthy. +- The health of the services you configured within `tbot`. This will indicate + whether `tbot` has been able to successfully perform its intended functions. + +For monitoring the health of the `tbot` process itself, a large number of +metrics are provided by the Go runtime. + +For monitoring the health of internal and user-defined services, there are +two key metrics: + +- `tbot_task_iterations_failed`: the total number of task iterations that have + failed. 
This will have a `service` label indicating which service within the + `tbot` process the task belongs to. +- `tbot_task_iterations_successful`: the total number of task iterations that + have succeeded. This will also have a `service` label. This metric is a + histogram, and will also indicate the number of retries that were required + before the task succeeded. For a perfectly healthy service, you would expect + this number of retries to be zero, or close to zero. + +## Metrics + +### Generic + +These metrics are generated by more than one service within `tbot` or may be +generated by the core supervisor within `tbot` itself. + +| Name | Description | +|-----------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `tbot_task_iterations_total` | The total number of task iterations that have been performed. This will have a `service` and `name` label to specify which task. | +| `tbot_task_iterations_failed` | The total number of task iterations that have failed. This will have a `service` and `name` label to specify which task. | +| `tbot_task_iterations_successful` | The total number of task iterations that have succeeded. This will have a `service` and `name` label to specify which task. This metric is a histogram, and will also indicate the number of retries that were required before the task succeeded. | +| `tbot_task_iterations_duration_seconds` | The duration of the time taken to perform an iteration of the task. This will have a `service` and `name` label to specify which task. This metric is a histogram. | + +### `ssh-multiplexer` + +These metrics are generated by the SSH multiplexer service. 
+ +| Name | Description | +|-----------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `tbot_ssh_multiplexer_requests_started_total` | The total number of SSH multiplexing requests that have been started. | +| `tbot_ssh_multiplexer_requests_handled_total` | The total number of SSH multiplexing requests that have completed. The `status` label indicates whether the request completed successfully (`OK`) or with an error (`ERROR`). | +| `tbot_ssh_multiplexer_requests_in_flight` | The number of SSH multiplexing requests that are currently in progress. | + diff --git a/docs/pages/reference/machine-workload-identity/machine-workload-identity.mdx b/docs/pages/reference/machine-workload-identity/machine-workload-identity.mdx new file mode 100644 index 0000000000000..70c77af0f6aff --- /dev/null +++ b/docs/pages/reference/machine-workload-identity/machine-workload-identity.mdx @@ -0,0 +1,10 @@ +--- +title: 'Machine & Workload Identity References' +sidebar_label: Machine & Workload Identity +description: 'Includes comprehensive guides to tools for managing Teleport Machine & Workload Identity' +--- + +This section includes comprehensive reference documentation for the tools you +can use to manage Teleport Machine & Workload Identity. 
+
+
diff --git a/docs/pages/reference/machine-id/telemetry.mdx b/docs/pages/reference/machine-workload-identity/telemetry.mdx
similarity index 76%
rename from docs/pages/reference/machine-id/telemetry.mdx
rename to docs/pages/reference/machine-workload-identity/telemetry.mdx
index 510f7a38d63ef..a4ea25d59e688 100644
--- a/docs/pages/reference/machine-id/telemetry.mdx
+++ b/docs/pages/reference/machine-workload-identity/telemetry.mdx
@@ -1,19 +1,22 @@
 ---
 title: Telemetry
-description: An explanation of the telemetry collected by Machine ID
+description: An explanation of the telemetry collected by Machine & Workload Identity
+tags:
+  - conceptual
+  - mwi
 ---
 
-This document explains what telemetry is collected by the Machine ID `tbot`
-agent, why we want to collect this telemetry and, how to opt in or out.
+This document explains what telemetry is collected by the `tbot` agent, why we
+want to collect this telemetry, and how to opt in or out.
 
 ## Why?
 
-Machine ID is an emerging part of the Teleport product and it's helpful for us
-to be able to identify the kinds of use-cases people have. This allows us to
-prioritise more common usages. Whilst we try to collect this sort of information
-by talking to users directly, having a more general overview of the product in
-the wild helps us make even more informed decisions and avoid our decisions
-being solely influenced by a select few users.
+Machine & Workload Identity is an emerging part of the Teleport product and it's
+helpful for us to be able to identify the kinds of use-cases people have. This
+allows us to prioritise more common usages. Whilst we try to collect this sort
+of information by talking to users directly, having a more general overview of
+the product in the wild helps us make even more informed decisions and avoid our
+decisions being solely influenced by a select few users.
 ## Anonymous telemetry
 
@@ -25,9 +28,9 @@ that the collected data does not include anything which identifies:
 - the hosts, applications, databases and Kubernetes clusters `tbot` connects to
 - the user that has configured `tbot`
 
-If we introduce further events to Machine ID's anonymous telemetry in future,
-we will abide by the above guidelines and ensure that changes are explicitly
-included in changelogs where new information is gathered.
+If we introduce further events to Machine & Workload Identity's anonymous
+telemetry in future, we will abide by the above guidelines and ensure that
+changes are explicitly included in changelogs where new information is gathered.
 
 Whilst we do not collect data which uniquely identifies the specific machine
 `tbot` is running on, we may collect general information about the architecture
diff --git a/docs/pages/reference/machine-workload-identity/terraform-provider-mwi/data-sources/data-sources.mdx b/docs/pages/reference/machine-workload-identity/terraform-provider-mwi/data-sources/data-sources.mdx
new file mode 100644
index 0000000000000..db7ba0c3f8a7e
--- /dev/null
+++ b/docs/pages/reference/machine-workload-identity/terraform-provider-mwi/data-sources/data-sources.mdx
@@ -0,0 +1,31 @@
+---
+title: "MWI Terraform data-sources index"
+description: "Index of all the data-sources supported by the Teleport MWI Terraform Provider"
+tags:
+  - infrastructure-as-code
+  - reference
+  - mwi
+---
+
+{/*Auto-generated file. Do not edit.*/}
+{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/}
+
+{/*
+  This file will be renamed data-sources.mdx during build time.
+  The template name is reserved by tfplugindocs so we suffix with -index.
+*/}
+
+
+This reference page is for the Teleport MWI Terraform provider, which
+generates short-lived credentials using Teleport Machine & Workload Identity.
+These short-lived credentials can be used to grant other Terraform providers
+access to resources through Teleport.
+
+If you are looking to manage the configuration of Teleport itself, use the
+[Teleport Terraform Provider](../../../infrastructure-as-code/terraform-provider/terraform-provider.mdx)
+instead.
+
+
+The Teleport MWI Terraform provider supports the following data-sources:
+
+ - [`teleportmwi_kubernetes`](./kubernetes.mdx)
diff --git a/docs/pages/reference/machine-workload-identity/terraform-provider-mwi/data-sources/kubernetes.mdx b/docs/pages/reference/machine-workload-identity/terraform-provider-mwi/data-sources/kubernetes.mdx
new file mode 100644
index 0000000000000..f00542922c377
--- /dev/null
+++ b/docs/pages/reference/machine-workload-identity/terraform-provider-mwi/data-sources/kubernetes.mdx
@@ -0,0 +1,71 @@
+---
+title: Reference for the teleportmwi_kubernetes Terraform data-source
+sidebar_label: kubernetes
+description: This page describes the supported values of the teleportmwi_kubernetes data-source of the Teleport MWI Terraform provider.
+tags:
+  - infrastructure-as-code
+  - reference
+  - mwi
+---
+
+{/*Auto-generated file. Do not edit.*/}
+{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/}
+
+The Kubernetes data source provides credentials to allow other providers to access a Kubernetes cluster through Teleport Machine & Workload Identity.
+
+## Example Usage
+
+```hcl
+// Warning: The teleportmwi_kubernetes data source will not function correctly
+// when the Teleport cluster is fronted by an L7 load balancer that terminates
+// TLS.
+data "teleportmwi_kubernetes" "my_cluster" { + selector = { + name = "my-k8s-cluster" + } + credential_ttl = "1h" +} + + +// https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs +provider "kubernetes" { + host = data.teleportmwi_kubernetes.my_cluster.output.host + tls_server_name = data.teleportmwi_kubernetes.my_cluster.output.tls_server_name + client_certificate = data.teleportmwi_kubernetes.my_cluster.output.client_certificate + client_key = data.teleportmwi_kubernetes.my_cluster.output.client_key + cluster_ca_certificate = data.teleportmwi_kubernetes.my_cluster.output.cluster_ca_certificate +} +``` + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `selector` (Attributes) Selects the Kubernetes cluster to connect to. (see [below for nested schema](#nested-schema-for-selector)) + +### Optional + +- `credential_ttl` (String) How long the issued credentials should be valid for. Defaults to 30 minutes. + +### Read-Only + +- `output` (Attributes) (see [below for nested schema](#nested-schema-for-output)) + +### Nested Schema for `selector` + +Required: + +- `name` (String) The name of the Kubernetes cluster to connect to. + + +### Nested Schema for `output` + +Read-Only: + +- `client_certificate` (String) Compatible with the `client_certificate` argument of the `kubernetes` provider. +- `client_key` (String, Sensitive) Compatible with the `client_key` argument of the `kubernetes` provider. +- `cluster_ca_certificate` (String) Compatible with the `cluster_ca_certificate` argument of the `kubernetes` provider. +- `host` (String) Compatible with the `host` argument of the `kubernetes` provider. +- `tls_server_name` (String) Compatible with the `tls_server_name` argument of the `kubernetes` provider. 
+
diff --git a/docs/pages/reference/machine-workload-identity/terraform-provider-mwi/ephemeral-resources/ephemeral-resources.mdx b/docs/pages/reference/machine-workload-identity/terraform-provider-mwi/ephemeral-resources/ephemeral-resources.mdx
new file mode 100644
index 0000000000000..02f664c9207a3
--- /dev/null
+++ b/docs/pages/reference/machine-workload-identity/terraform-provider-mwi/ephemeral-resources/ephemeral-resources.mdx
@@ -0,0 +1,31 @@
+---
+title: "MWI Terraform ephemeral resources index"
+description: "Index of all the resources supported by the Teleport MWI Terraform Provider"
+tags:
+  - infrastructure-as-code
+  - reference
+  - mwi
+---
+
+{/*Auto-generated file. Do not edit.*/}
+{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/}
+
+{/*
+  This file will be renamed ephemeral-resources.mdx during build time.
+  The template name is reserved by tfplugindocs so we suffix with -index.
+*/}
+
+
+This reference page is for the Teleport MWI Terraform provider, which
+generates short-lived credentials using Teleport Machine & Workload Identity.
+These short-lived credentials can be used to grant other Terraform providers
+access to resources through Teleport.
+
+If you are looking to manage the configuration of Teleport itself, use the
+[Teleport Terraform Provider](../../../infrastructure-as-code/terraform-provider/terraform-provider.mdx)
+instead.
+
+
+The Teleport MWI Terraform provider supports the following ephemeral resources:
+
+ - [`teleportmwi_kubernetes`](./kubernetes.mdx)
diff --git a/docs/pages/reference/machine-workload-identity/terraform-provider-mwi/ephemeral-resources/kubernetes.mdx b/docs/pages/reference/machine-workload-identity/terraform-provider-mwi/ephemeral-resources/kubernetes.mdx
new file mode 100644
index 0000000000000..f5b383241d785
--- /dev/null
+++ b/docs/pages/reference/machine-workload-identity/terraform-provider-mwi/ephemeral-resources/kubernetes.mdx
@@ -0,0 +1,73 @@
+---
+title: Reference for the teleportmwi_kubernetes Terraform ephemeral resource
+sidebar_label: kubernetes
+description: This page describes the supported values of the teleportmwi_kubernetes ephemeral resource of the Teleport MWI Terraform provider.
+tags:
+  - infrastructure-as-code
+  - reference
+  - mwi
+---
+
+{/*Auto-generated file. Do not edit.*/}
+{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/}
+
+This page describes the supported values of the teleportmwi_kubernetes ephemeral resource of the Teleport MWI Terraform provider.
+
+
+The Kubernetes Ephemeral Resource provides credentials to allow other providers to access a Kubernetes cluster through Teleport Machine & Workload Identity.
+
+## Example Usage
+
+```hcl
+// Warning: The teleportmwi_kubernetes ephemeral resource will not function
+// correctly when the Teleport cluster is fronted by an L7 load balancer that
+// terminates TLS.
+ephemeral "teleportmwi_kubernetes" "my_cluster" { + selector = { + name = "my-k8s-cluster" + } + credential_ttl = "1h" +} + + +// https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs +provider "kubernetes" { + host = ephemeral.teleportmwi_kubernetes.my_cluster.output.host + tls_server_name = ephemeral.teleportmwi_kubernetes.my_cluster.output.tls_server_name + client_certificate = ephemeral.teleportmwi_kubernetes.my_cluster.output.client_certificate + client_key = ephemeral.teleportmwi_kubernetes.my_cluster.output.client_key + cluster_ca_certificate = ephemeral.teleportmwi_kubernetes.my_cluster.output.cluster_ca_certificate +} +``` + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `selector` (Attributes) Selects the Kubernetes cluster to connect to. (see [below for nested schema](#nested-schema-for-selector)) + +### Optional + +- `credential_ttl` (String) How long the issued credentials should be valid for. Defaults to 30 minutes. + +### Read-Only + +- `output` (Attributes) (see [below for nested schema](#nested-schema-for-output)) + +### Nested Schema for `selector` + +Required: + +- `name` (String) The name of the Kubernetes cluster to connect to. + + +### Nested Schema for `output` + +Read-Only: + +- `client_certificate` (String) Compatible with the `client_certificate` argument of the `kubernetes` provider. +- `client_key` (String, Sensitive) Compatible with the `client_key` argument of the `kubernetes` provider. +- `cluster_ca_certificate` (String) Compatible with the `cluster_ca_certificate` argument of the `kubernetes` provider. +- `host` (String) Compatible with the `host` argument of the `kubernetes` provider. +- `tls_server_name` (String) Compatible with the `tls_server_name` argument of the `kubernetes` provider. 
diff --git a/docs/pages/reference/machine-workload-identity/terraform-provider-mwi/terraform-provider-mwi.mdx b/docs/pages/reference/machine-workload-identity/terraform-provider-mwi/terraform-provider-mwi.mdx new file mode 100644 index 0000000000000..1e4368ff954e3 --- /dev/null +++ b/docs/pages/reference/machine-workload-identity/terraform-provider-mwi/terraform-provider-mwi.mdx @@ -0,0 +1,115 @@ +--- +title: "Teleport MWI Terraform Provider" +description: Reference documentation of the Teleport MWI Terraform provider. +tags: + - infrastructure-as-code + - reference + - mwi +--- + +{/*Auto-generated file. Do not edit.*/} +{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} + +The Teleport MWI Terraform provider generates short-lived credentials using +[Teleport Machine & Workload Identity (MWI)](../../../machine-workload-identity/machine-workload-identity.mdx) +that can be used to grant other Terraform providers access to resources. + +To manage the configuration of Teleport itself, use the [Teleport Terraform +Provider](../../infrastructure-as-code/terraform-provider/terraform-provider.mdx) instead. + +{/* Note: the awkward `resource-index` file names are here because `data-sources` +is reserved by the generator for the catch-all resource template */} + +- [list of supported ephemeral resources](./ephemeral-resources/ephemeral-resources.mdx) +- [list of supported data sources](./data-sources/data-sources.mdx) + +## Example usage + +In this example, we will use the Teleport MWI Terraform provider to grant the +Kubernetes provider access to a Kubernetes cluster through Teleport. + +Our Kubernetes cluster is enrolled in Teleport under the name `my-cluster`, and +we've set up a Bot and Join Token to allow our Terraform provider to authenticate +to Teleport.
+ +```hcl +terraform { + required_providers { + teleportmwi = { + source = "terraform.releases.teleport.dev/gravitational/teleportmwi" + version = "~> (=teleport.major_version=).0" + } + kubernetes = { + source = "hashicorp/kubernetes" + } + } +} + +provider "teleportmwi" { + join_method = "gitlab" + join_token = "my-join-token" + proxy_server = "example.teleport.sh:443" +} + +ephemeral "teleportmwi_kubernetes" "my_cluster" { + selector = { + name = "my-cluster" + } +} + + +// https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs +provider "kubernetes" { + host = ephemeral.teleportmwi_kubernetes.my_cluster.output.host + tls_server_name = ephemeral.teleportmwi_kubernetes.my_cluster.output.tls_server_name + client_certificate = ephemeral.teleportmwi_kubernetes.my_cluster.output.client_certificate + client_key = ephemeral.teleportmwi_kubernetes.my_cluster.output.client_key + cluster_ca_certificate = ephemeral.teleportmwi_kubernetes.my_cluster.output.cluster_ca_certificate +} + +resource "kubernetes_namespace" "ns" { + metadata { + name = "example-namespace" + } +} +``` + +## Ephemeral resources vs data sources + +The MWI Terraform provider exposes functionality using two different kinds of +entity in the Terraform ecosystem: **ephemeral resources** and **data sources**. + +Ephemeral resources are supported by Terraform 1.10 and later, and should be +the preferred way to use the MWI Terraform provider. They have the following +benefits over data sources: + +- The short-lived credentials generated by the MWI Terraform provider will not + be persisted within the Terraform state. +- Fresh short-lived credentials will be generated in the apply phase, allowing + you to grant read-only privileges to plan runs and read-write privileges to + apply runs. + +When using a version of Terraform that does not support ephemeral resources, +you can use the data source variants instead. 
When using the data sources, +keep the following in mind: + +- The short-lived secrets generated by the MWI Terraform provider will be + persisted within the Terraform state. The secrets will be generated in the + plan phase and reused in the apply phase. We therefore highly recommend that + you encrypt your Terraform state file. +- You will need to configure a `credential_ttl` that will ensure that credentials + generated during the plan phase will still be valid during the apply phase. + +{/* schema generated by tfplugindocs */} +## Schema + +### Required + +- `join_method` (String) The join method to use to authenticate to the Teleport cluster. +- `join_token` (String) The name of the join token to use to authenticate to the Teleport cluster. +- `proxy_server` (String) The address of the Teleport Proxy service. This should exclude the scheme but should include the port. + +### Optional + +- `insecure` (Boolean) When enabled, the certificates of the Proxy will not be verified. This is not recommended for production use. + diff --git a/docs/pages/reference/workload-identity/attributes.mdx b/docs/pages/reference/machine-workload-identity/workload-identity/attributes.mdx similarity index 99% rename from docs/pages/reference/workload-identity/attributes.mdx rename to docs/pages/reference/machine-workload-identity/workload-identity/attributes.mdx index 04e668679e0be..a6f45e8c6e763 100644 --- a/docs/pages/reference/workload-identity/attributes.mdx +++ b/docs/pages/reference/machine-workload-identity/workload-identity/attributes.mdx @@ -1,6 +1,11 @@ --- title: Workload Identity Attributes +sidebar_label: Attributes description: Information about the attributes that can be used in templating and rules in the WorkloadIdentity resource. 
+tags: + - reference + - mwi + - infrastructure-identity --- Attributes are features of an identity which you can use with the diff --git a/docs/pages/reference/workload-identity/configuration-resource-migration.mdx b/docs/pages/reference/machine-workload-identity/workload-identity/configuration-resource-migration.mdx similarity index 93% rename from docs/pages/reference/workload-identity/configuration-resource-migration.mdx rename to docs/pages/reference/machine-workload-identity/workload-identity/configuration-resource-migration.mdx index 295b629273588..897edb34b1c31 100644 --- a/docs/pages/reference/workload-identity/configuration-resource-migration.mdx +++ b/docs/pages/reference/machine-workload-identity/workload-identity/configuration-resource-migration.mdx @@ -1,6 +1,10 @@ --- title: WorkloadIdentity Configuration Resource migration description: Migrating to the new WorkloadIdentity resource configuration +tags: + - conceptual + - mwi + - infrastructure-identity --- The way that you configure Teleport Workload Identity is changing. If you are @@ -46,7 +50,7 @@ The following new additional CLI commands have been introduced: - `tbot start workload-identity-jwt` to issue a JWT SVID. You can read more about the new CLI commands in the -[`tbot` CLI reference](../cli/tbot.mdx). +[`tbot` CLI reference](../../cli/tbot.mdx). The following service types have been replaced: @@ -58,4 +62,4 @@ The following new additional service types have been introduced: - `workload-identity-jwt` to issue JWT SVIDs. You can read more about the new service types in the -[`tbot` configuration reference](../machine-id/configuration.mdx). \ No newline at end of file +[`tbot` configuration reference](../configuration.mdx). 
diff --git a/docs/pages/reference/workload-identity/issuer-override.mdx b/docs/pages/reference/machine-workload-identity/workload-identity/issuer-override.mdx similarity index 91% rename from docs/pages/reference/workload-identity/issuer-override.mdx rename to docs/pages/reference/machine-workload-identity/workload-identity/issuer-override.mdx index 890a520f3489c..aa52f501e2855 100644 --- a/docs/pages/reference/workload-identity/issuer-override.mdx +++ b/docs/pages/reference/machine-workload-identity/workload-identity/issuer-override.mdx @@ -1,6 +1,12 @@ --- title: Workload Identity X.509 Issuer Override Resource +sidebar_label: X.509 Issuer Override Resource description: Provides information about the `workload_identity_x509_issuer_override` resource. +tags: + - conceptual + - mwi + - infrastructure-identity +enterprise: Issuer override --- The X.509 issuer override functionality provides a way to replace the self-signed X.509 certificate used by Teleport as the issuer SPIFFE X509-SVID credentials with an issuing certificate of your choosing, together with an optional certificate chain. After configuring a default `workload_identity_x509_issuer_override` resource, all X509-SVID credentials issued to `tbot` or `tsh` will be issued by one of the issuers specified in the resource, and will include the appropriate certificate chain. @@ -55,6 +61,8 @@ SERIALNUMBER=234567890123456789012345678901234567890,CN=clustername,O=clusternam Use of this command requires `create` permissions for the `workload_identity_x509_issuer_override_csr` resource kind in one of the roles associated with the identity running the command. +In clusters that make use of Hardware Security Modules (HSMs) it's possible that no single Teleport Auth Service instance is capable of generating signatures for all the keys that make up the SPIFFE certificate authority at once. 
In such situations, it's possible to use the `--force` option with the `sign-csrs` command on each machine running the Auth Service, to gather CSRs for keys managed by the different HSMs. + ## Using `tctl` to create a `workload_identity_x509_issuer_override` from certificate chain PEM files The `tctl workload-identity x509-issuer-overrides create` command can be used to build a `workload_identity_x509_issuer_override` resource out of one or more PEM files containing a certificate chain each, and to create or forcibly overwrite an existing resource in the cluster. The command will check that the first certificates in the specified chains have different public keys, and that they match 1:1 with the trusted X.509 certificates in the SPIFFE certificate authority in the Teleport cluster. diff --git a/docs/pages/reference/workload-identity/revocations.mdx b/docs/pages/reference/machine-workload-identity/workload-identity/revocations.mdx similarity index 97% rename from docs/pages/reference/workload-identity/revocations.mdx rename to docs/pages/reference/machine-workload-identity/workload-identity/revocations.mdx index 5b540ff7abc20..f84a1fae0d7ba 100644 --- a/docs/pages/reference/workload-identity/revocations.mdx +++ b/docs/pages/reference/machine-workload-identity/workload-identity/revocations.mdx @@ -1,6 +1,11 @@ --- title: Workload Identity Revocations +sidebar_label: Revocations description: Information about performing revocations for issued workload identity credentials +tags: + - conceptual + - mwi + - resiliency --- The revocations mechanism provides a way to mark an issued X509 workload @@ -88,4 +93,4 @@ tFJEdB/d5SoDzpGXC394eeRmFml77+L0XfZmbmcXE00sRBi0Xr5MAa1PGjw/wS9a ``` To directly write this to a file, you can provide the `--out` flag and a path -to which to write the file. \ No newline at end of file +to which to write the file. 
diff --git a/docs/pages/reference/workload-identity/sigstore-attestation.mdx b/docs/pages/reference/machine-workload-identity/workload-identity/sigstore-attestation.mdx similarity index 98% rename from docs/pages/reference/workload-identity/sigstore-attestation.mdx rename to docs/pages/reference/machine-workload-identity/workload-identity/sigstore-attestation.mdx index 1e326b0109345..3725d286216e8 100644 --- a/docs/pages/reference/workload-identity/sigstore-attestation.mdx +++ b/docs/pages/reference/machine-workload-identity/workload-identity/sigstore-attestation.mdx @@ -1,6 +1,11 @@ --- title: Sigstore Workload Attestation description: Using Teleport's integration with Sigstore to ensure workload supply chain security +tags: + - conceptual + - mwi + - infrastructure-identity +enterprise: Sigstore workload attestation --- By using Teleport's integration with [Sigstore](https://sigstore.dev), you can @@ -15,7 +20,7 @@ signed container images, reducing the scope for ## How it works -![Sigstore Integration Diagram](../../../img/workload-identity/sigstore-integration.png) +![Sigstore Integration Diagram](../../../../img/workload-identity/sigstore-integration.png) ### Signing diff --git a/docs/pages/reference/workload-identity/workload-identity-api-and-workload-attestation.mdx b/docs/pages/reference/machine-workload-identity/workload-identity/workload-identity-api-and-workload-attestation.mdx similarity index 99% rename from docs/pages/reference/workload-identity/workload-identity-api-and-workload-attestation.mdx rename to docs/pages/reference/machine-workload-identity/workload-identity/workload-identity-api-and-workload-attestation.mdx index aa885807c2790..5e9b65a5aa5be 100644 --- a/docs/pages/reference/workload-identity/workload-identity-api-and-workload-attestation.mdx +++ b/docs/pages/reference/machine-workload-identity/workload-identity/workload-identity-api-and-workload-attestation.mdx @@ -1,6 +1,11 @@ --- title: Workload Identity API & Workload Attestation 
+sidebar_label: API Service description: Information about the `tbot` Workload Identity API service and Workload Attestation functionality +tags: + - conceptual + - mwi + - infrastructure-identity --- The Workload Identity API service (`workload-identity-api`) is a configurable diff --git a/docs/pages/reference/workload-identity/workload-identity-resource.mdx b/docs/pages/reference/machine-workload-identity/workload-identity/workload-identity-resource.mdx similarity index 96% rename from docs/pages/reference/workload-identity/workload-identity-resource.mdx rename to docs/pages/reference/machine-workload-identity/workload-identity/workload-identity-resource.mdx index 28d17f994386a..574ba8ed0ca77 100644 --- a/docs/pages/reference/workload-identity/workload-identity-resource.mdx +++ b/docs/pages/reference/machine-workload-identity/workload-identity/workload-identity-resource.mdx @@ -1,6 +1,10 @@ --- title: WorkloadIdentity Resource description: Information about the WorkloadIdentity resource +tags: + - reference + - mwi + - infrastructure-identity --- The WorkloadIdentity resource is used to define the structure of an identity @@ -166,7 +170,7 @@ You can find a full list of the supported attributes on the [Attributes reference](./attributes.mdx) page. The WorkloadIdentity resource's fields use Teleport's -[Predicate Language](../predicate-language.mdx) for templating, allowing you to +[Predicate Language](../../access-controls/predicate-language.mdx) for templating, allowing you to apply text manipulation functions like `strings.lower` and `regex.replace`. ## Rules @@ -178,7 +182,7 @@ the labels on the WorkloadIdentity resource. However, you can further restrict the issuance of credentials based on the attributes of the workload using the rules mechanism. -Each rule consists of either a [Predicate Language](../predicate-language.mdx) +Each rule consists of either a [Predicate Language](../../access-controls/predicate-language.mdx) expression or a set of conditions. 
When using the conditions form, all conditions within that rule must pass in order for the rule to be considered a pass. If you specify multiple rules, then at least one rule must pass in order @@ -312,7 +316,7 @@ When using the conditions form and comparing attributes which are not strings (e.g. a boolean or number), the attribute values will be converted to a string representation. -When using the [Predicate Language](../predicate-language.mdx), the attribute +When using the [Predicate Language](../../access-controls/predicate-language.mdx), the attribute values will be compared as-is. ## Infrastructure as Code @@ -323,4 +327,4 @@ tools. For further information see: -- [Terraform provider reference: teleport_workload_identity](../terraform-provider/resources/workload_identity.mdx) +- [Terraform provider reference: teleport_workload_identity](../../infrastructure-as-code/terraform-provider/resources/workload_identity.mdx) diff --git a/docs/pages/reference/machine-workload-identity/workload-identity/workload-identity.mdx b/docs/pages/reference/machine-workload-identity/workload-identity/workload-identity.mdx new file mode 100644 index 0000000000000..c116be15276a4 --- /dev/null +++ b/docs/pages/reference/machine-workload-identity/workload-identity/workload-identity.mdx @@ -0,0 +1,10 @@ +--- +title: Workload Identity References +sidebar_label: Workload Identity +description: Configuration and CLI reference for Teleport Workload Identity +tags: + - mwi + - infrastructure-identity +--- + + diff --git a/docs/pages/reference/monitoring/audit.mdx b/docs/pages/reference/monitoring/audit.mdx deleted file mode 100644 index 5d1a79eb1cca9..0000000000000 --- a/docs/pages/reference/monitoring/audit.mdx +++ /dev/null @@ -1,236 +0,0 @@ ---- -title: Audit Events and Records -description: Reference of Teleport Audit Events and Session Records ---- - -Teleport logs cluster activity by emitting various events into its audit log. 
-There are two components of the audit log: - - - - -- **Cluster Events:** Teleport logs events like successful user logins along - with metadata like remote IP address, time, and the session ID. -- **Recorded Sessions:** Every SSH, desktop, or Kubernetes shell session is recorded and - can be replayed later. By default, the recording is done by Teleport Nodes, - but can be configured to be done by the proxy. - - - - -- **Cluster Events:** Teleport logs events like successful user logins along - with metadata like remote IP address, time, and the session ID. -- **Recorded Sessions:** Every SSH, desktop, or Kubernetes shell session is recorded and - can be replayed later. Teleport Cloud manages the storage of session - recording data. - - - - - - -You can use -[Enhanced Session Recording with BPF](../../enroll-resources/server-access/guides/bpf-session-recording.mdx) -to get even more comprehensive audit logs with advanced security. - - - -## Events - - - - -Teleport supports multiple storage backends for storing audit events. The `dir` -backend uses the local filesystem of an Auth Service host. When this backend is -used, events are written to the filesystem in JSON format. The `dir` backend rotates -the event file approximately once every 24 hours, but never deletes captured events. - -For High Availability configurations, users can refer to our -[Athena](../backends.mdx), [DynamoDB](../backends.mdx) or -[Firestore](../backends.mdx) chapters for information on how to -configure the SSH events and recorded sessions to be stored on network storage. -When these backends are in use, audit events will eventually expire and be -removed from the log. The default retention period is 1 year, but this can be -overridden using the `retention_period` configuration parameter. - -It is even possible to store audit logs in multiple places at the same time. 
For -more information on how to configure the audit log, refer to the `storage` -section of the example configuration file in the -[Teleport Configuration Reference](../config.mdx). - -Let's examine the Teleport audit log using the `dir` backend. Teleport Auth -Service instances write their logs to a subdirectory of Teleport's configured -data directory that is named based on the service's UUID. - -Each day is represented as a file: - -```code -$ ls -l /var/lib/teleport/log/bbdfe5be-fb97-43af-bf3b-29ef2e302941 - -# total 104 -# -rw-r----- 1 root root 31638 Jan 22 20:00 2022-01-23.00:00:00.log -# -rw-r----- 1 root root 91256 Jan 31 21:00 2022-02-01.00:00:00.log -# -rw-r----- 1 root root 15815 Feb 32 22:54 2022-02-03.00:00:00.log -``` - - - - -Teleport Enterprise Cloud manages the storage of audit logs for you. You can -access your audit logs via the Teleport Web UI by clicking: - -**Audit** > **Audit Log** - - - - -Audit logs use JSON format. They are human readable but can also be -programmatically parsed. Each line represents an event and has the following -format: - -```javascript -{ - // Event type. See below for the list of all possible event types. - "event": "session.start", - // A unique ID for the event log. Useful for deduplication. - "uid": "59cf8d1b-7b36-4894-8e90-9d9713b6b9ef", - // Teleport user name - "user": "ekontsevoy", - // OS login - "login": "root", - // Server namespace. This field is reserved for future use. - "namespace": "default", - // Unique server ID - "server_id": "f84f7386-5e22-45ff-8f7d-b8079742e63f", - // Server Labels - "server_labels": { - "datacenter": "us-east-1", - "label-b": "x" - } - // Session ID. Can be used to replay the session. 
- "sid": "8d3895b6-e9dd-11e6-94de-40167e68e931", - // Address of the SSH node - "addr.local": "10.5.l.15:3022", - // Address of the connecting client (user) - "addr.remote": "73.223.221.14:42146", - // Terminal size - "size": "80:25", - // Timestamp - "time": "2017-02-03T06:54:05Z" -} -``` - -## Event types - -Below are some possible types of audit events. - - - -This list is not comprehensive. We recommend exporting audit events to a -platform that automatically parses event payloads so you can group and filter -them by their `event` key and discover trends. To set up audit event exporting, -read [Exporting Teleport Audit Events](../../admin-guides/management/export-audit-events/export-audit-events.mdx). - - - -| Event Type | Description | -| - | - | -| auth | Authentication attempt. Adds the following fields: `{"success": "false", "error": "access denied"}` | -| session.start | Started an interactive shell session. | -| session.end | An interactive shell session has ended. | -| session.join | A new user has joined the existing interactive shell session. | -| session.leave | A user has left the session. | -| session.disk | A list of files opened during the session. *Requires Enhanced Session Recording*. | -| session.network | A list of network connections made during the session. *Requires Enhanced Session Recording*. | -| session.command | A list of commands ran during the session. *Requires Enhanced Session Recording*. | -| session.recording.access | A session recording has been accessed. | -| exec | Remote command has been executed via SSH, like `tsh ssh root@node ls /`. The following fields will be logged: `{"command": "ls /", "exitCode": 0, "exitError": ""}` | -| scp | Remote file copy has been executed. The following fields will be logged: `{"path": "/path/to/file.txt", "len": 32344, "action": "read" }` | -| resize | Terminal has been resized. | -| user.login | A user logged into web UI or via tsh. 
The following fields will be logged: `{"user": "alice@example.com", "method": "local"}` . | -| app.session.start | A user accessed an application | -| app.session.chunk | A record of activity during an app session | -| join_token.create | A new join token has been created. Adds the following fields: `{"roles": ["Node", "Db"], "join_method": "token"}` | - -## Recorded sessions - -In addition to logging start and end events, Teleport can also record the entire session. -For SSH or Kubernetes sessions this captures the entire stream of bytes from the PTY. -For desktop sessions the recording includes the contents of the screen. - - - - -Teleport can store the recorded sessions in an [AWS S3 bucket](../backends.mdx) -or in a local filesystem (including NFS). - -The recorded sessions are stored as raw bytes in the `sessions` directory under -`log`. Each session is a protobuf-encoded stream of binary data. - -You can replay recorded sessions using the [`tsh play`](../cli/tsh.mdx) -command or the Web UI. - -For example, replay a session via CLI: - -```code -$ tsh play 4c146ec8-eab6-11e6-b1b3-40167e68e931 -``` - -Print the session events in JSON to stdout: - -```code -$ tsh play 4c146ec8-eab6-11e6-b1b3-40167e68e931 --format=json -``` - - - - -Teleport Enterprise Cloud automatically stores recorded sessions. - -You can replay recorded sessions using the [`tsh play`](../cli/tsh.mdx) -command or the Web UI. - -For example, replay a session via CLI: - -```code -$ tsh play 4c146ec8-eab6-11e6-b1b3-40167e68e931 -``` - -Print the session events in JSON to stdout: - -```code -$ tsh play 4c146ec8-eab6-11e6-b1b3-40167e68e931 --format=json -``` - - - - -### Modes - - -Available only for SSH sessions and when Teleport is configured with -`auth_service.session_recording: node`. - - -Modes define how Teleport deals with recording failures, such as a full disk -error. They are configured per-service at the role level, where the strictest -value takes precedence. 
The available modes are: - -|Mode|After a recording failure| -|----|-------------------------| -|Best effort (`best_effort`)|Disables recording without terminating the session.| -|Strict (`strict`)|Immediately terminates the session.| - -If the user role doesn’t specify a recording mode, `best_effort` will be used. Here -is an example of a role configured to use strict mode for SSH sessions: - -```yaml -kind: role -version: v5 -metadata: - name: ssh-strict -spec: - options: - record_session: - ssh: strict -``` diff --git a/docs/pages/reference/monitoring/metrics.mdx b/docs/pages/reference/monitoring/metrics.mdx deleted file mode 100644 index e0587f820621b..0000000000000 --- a/docs/pages/reference/monitoring/metrics.mdx +++ /dev/null @@ -1,22 +0,0 @@ ---- -title: Teleport Metrics -description: Comprehensive list of all metrics exposed by Teleport. ---- - - - -Teleport Cloud does not expose monitoring endpoints for the Auth Service and Proxy Service. - - - -Teleport metrics are intended for performance monitoring. If you'd like to -monitor Teleport usage, consider utilizing our Event Handler plugin to push Audit Events into your -preferred logging aggregation system (Elastic, Splunk, Sumo Logic, etc). - -- [Audit Events and Records](audit.mdx) -- [Forwarding events with Fluentd](../../admin-guides/management/export-audit-events/fluentd.mdx) -- [Monitor Teleport Audit Events with the Elastic Stack](../../admin-guides/management/export-audit-events/elastic-stack.mdx) - -The following metrics are available: - -(!docs/pages/includes/metrics.mdx!) diff --git a/docs/pages/reference/monitoring/monitoring.mdx b/docs/pages/reference/monitoring/monitoring.mdx deleted file mode 100644 index 28233d6f37193..0000000000000 --- a/docs/pages/reference/monitoring/monitoring.mdx +++ /dev/null @@ -1,7 +0,0 @@ ---- -title: Teleport Monitoring -h1: Teleport Monitoring References -description: Provides comprehensive guides to monitoring data available from Teleport. 
---- - - diff --git a/docs/pages/reference/operator-resources/operator-resources.mdx b/docs/pages/reference/operator-resources/operator-resources.mdx deleted file mode 100644 index 88a1660ad2ab2..0000000000000 --- a/docs/pages/reference/operator-resources/operator-resources.mdx +++ /dev/null @@ -1,6 +0,0 @@ ---- -title: "Teleport Kubernetes Operator Resource Reference Guides" -description: "Comprehensive guides to fields available in Kubernetes resources you can apply to manage Teleport resources with the Teleport Kubernetes operator" ---- - - diff --git a/docs/pages/reference/operator-resources/resources-teleport-dev-oidcconnectors.mdx b/docs/pages/reference/operator-resources/resources-teleport-dev-oidcconnectors.mdx deleted file mode 100644 index 44905c4dda8f9..0000000000000 --- a/docs/pages/reference/operator-resources/resources-teleport-dev-oidcconnectors.mdx +++ /dev/null @@ -1,74 +0,0 @@ ---- -title: TeleportOIDCConnector -description: Provides a comprehensive list of fields in the TeleportOIDCConnector resource available through the Teleport Kubernetes operator -tocDepth: 3 ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/operator and run "make crd-docs".*/} - -This guide is a comprehensive reference to the fields in the `TeleportOIDCConnector` -resource, which you can apply after installing the Teleport Kubernetes operator. - - -## resources.teleport.dev/v3 - -**apiVersion:** resources.teleport.dev/v3 - -|Field|Type|Description| -|---|---|---| -|apiVersion|string|APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources| -|kind|string|Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. 
Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds| -|metadata|object|| -|spec|[object](#spec)|OIDCConnector resource definition v3 from Teleport| - -### spec - -|Field|Type|Description| -|---|---|---| -|acr_values|string|ACR is an Authentication Context Class Reference value. The meaning of the ACR value is context-specific and varies for identity providers.| -|allow_unverified_email|boolean|AllowUnverifiedEmail tells the connector to accept OIDC users with unverified emails.| -|claims_to_roles|[][object](#specclaims_to_roles-items)|ClaimsToRoles specifies a dynamic mapping from claims to roles.| -|client_id|string|ClientID is the id of the authentication client (Teleport Auth Service).| -|client_redirect_settings|[object](#specclient_redirect_settings)|ClientRedirectSettings defines which client redirect URLs are allowed for non-browser SSO logins other than the standard localhost ones.| -|client_secret|string|ClientSecret is used to authenticate the client. This field supports secret lookup. See the operator documentation for more details.| -|display|string|Display is the friendly name for this provider.| -|google_admin_email|string|GoogleAdminEmail is the email of a google admin to impersonate.| -|google_service_account|string|GoogleServiceAccount is a string containing google service account credentials.| -|google_service_account_uri|string|GoogleServiceAccountURI is a path to a google service account uri.| -|issuer_url|string|IssuerURL is the endpoint of the provider, e.g. https://accounts.google.com.| -|max_age|string|MaxAge is the amount of time that user logins are valid for. 
If a user logs in, but then does not login again within this time period, they will be forced to re-authenticate.| -|mfa|[object](#specmfa)|MFASettings contains settings to enable SSO MFA checks through this auth connector.| -|pkce_mode|string|PKCEMode represents the configuration state for PKCE (Proof Key for Code Exchange). It can be "enabled" or "disabled"| -|prompt|string|Prompt is an optional OIDC prompt. An empty string omits prompt. If not specified, it defaults to select_account for backwards compatibility.| -|provider|string|Provider is the external identity provider.| -|redirect_url|[]string|RedirectURLs is a list of callback URLs which the identity provider can use to redirect the client back to the Teleport Proxy to complete authentication. This list should match the URLs on the provider's side. The URL used for a given auth request will be chosen to match the requesting Proxy's public address. If there is no match, the first url in the list will be used.| -|scope|[]string|Scope specifies additional scopes set by provider.| -|username_claim|string|UsernameClaim specifies the name of the claim from the OIDC connector to be used as the user's username.| - -### spec.claims_to_roles items - -|Field|Type|Description| -|---|---|---| -|claim|string|Claim is a claim name.| -|roles|[]string|Roles is a list of static teleport roles to match.| -|value|string|Value is a claim value to match.| - -### spec.client_redirect_settings - -|Field|Type|Description| -|---|---|---| -|allowed_https_hostnames|[]string|a list of hostnames allowed for https client redirect URLs| -|insecure_allowed_cidr_ranges|[]string|a list of CIDRs allowed for HTTP or HTTPS client redirect URLs| - -### spec.mfa - -|Field|Type|Description| -|---|---|---| -|acr_values|string|AcrValues are Authentication Context Class Reference values. The meaning of the ACR value is context-specific and varies for identity providers. 
Some identity providers support MFA specific contexts, such Okta with its "phr" (phishing-resistant) ACR.| -|client_id|string|ClientID is the OIDC OAuth app client ID.| -|client_secret|string|ClientSecret is the OIDC OAuth app client secret.| -|enabled|boolean|Enabled specified whether this OIDC connector supports MFA checks. Defaults to false.| -|max_age|string|MaxAge is the amount of time in nanoseconds that an IdP session is valid for. Defaults to 0 to always force re-authentication for MFA checks. This should only be set to a non-zero value if the IdP is setup to perform MFA checks on top of active user sessions.| -|prompt|string|Prompt is an optional OIDC prompt. An empty string omits prompt. If not specified, it defaults to select_account for backwards compatibility.| - diff --git a/docs/pages/reference/predicate-language.mdx b/docs/pages/reference/predicate-language.mdx deleted file mode 100644 index adeda2509b85d..0000000000000 --- a/docs/pages/reference/predicate-language.mdx +++ /dev/null @@ -1,119 +0,0 @@ ---- -title: Predicate Language -description: How to use Teleport's predicate language to define filter conditions. ---- - -Teleport's predicate language is used to define conditions for filtering in dynamic configuration resources. -It is also used as a query language to filter and search through a [list of select resources](#resource-filtering). 
-
-The predicate language uses a slightly different syntax depending on whether it is used in:
-
-- [Role resources](#scoping-allowdeny-rules-in-role-resources)
-- [Resource filtering](#resource-filtering)
-- [Label expressions](#label-expressions)
-
-## Scoping allow/deny rules in role resources
-
-Some fields in Teleport's role resources use the predicate language to define
-the scope of a role's permissions:
-
-- [Dynamic Impersonation](../admin-guides/access-controls/guides/impersonation.mdx)
-- [RBAC for sessions](access-controls/roles.mdx)
-
-When used in role resources, the predicate language supports the following operators:
-
-| Operator | Meaning | Example |
-|----------|--------------------------------------------------|----------------------------------------------------------|
-| && | and (all conditions must match) | `contains(field1, field2) && equals(field2, "val")` |
-| \|\| | or (any one condition should match) | `contains(field1, field2) \|\| contains(field1, "val2")` |
-| ! | not (used with functions, more about this below) | `!equals(field1, field2)` |
-
-The language also supports the following functions:
-
-| Functions | Description |
-|--------------------------------|---------------------------------------------------------------------------------------|
-| `contains(<field1>, <field2>)` | checks if the value from `<field2>` is included in the list of strings from `<field1>` |
-| `contains(<field>, "<value>")` | checks if `<value>` is included in the list of strings from `<field>` |
-| `equals(<field1>, <field2>)` | checks if the value from `<field1>` is equal to the value from `<field2>` |
-| `equals(<field>, "<value>")` | checks if the value from `<field>` is equal to `<value>` |
-
-## Resource filtering
-
-Both the [`tsh`](cli/tsh.mdx) and [`tctl`](cli/tctl.mdx) CLI tools allow you to filter nodes,
-applications, databases, and Kubernetes resources using the `--query` flag. The `--query` flag allows you to
-perform more sophisticated searches using the predicate language.
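-
-For example, operators and functions can be combined in a single query (the
-label keys and values below are illustrative):
-
-```code
-# List SSH nodes labeled env=prod, excluding macOS hosts:
-$ tsh ls --query 'labels["env"] == "prod" && labels["os"] != "mac"'
-
-# List databases that carry an "env" label, whatever its value:
-$ tsh db ls --query 'exists(labels["env"])'
-```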
-
-For common resource fields, we define shortened field names that you can use in place of the full field path:
-
-| Short Field | Actual Field Equivalent | Example |
-|-------------------|----------------------------------------------------------------------------------------|------------------------------|
-| `labels["<key>"]` | `resource.metadata.labels` + `resource.spec.dynamic_labels` | `labels["env"] == "staging"` |
-| `name` | `resource.spec.hostname` (only applies to server resource) or `resource.metadata.name` | `name == "jenkins"` |
-
-The language supports the following operators:
-
-| Operator | Meaning | Example |
-|----------|--------------------------------------|--------------------------------------------------------|
-| == | equal to | `labels["env"] == "prod"` or ``labels[`env`] == "prod"`` |
-| != | not equal to | `labels["env"] != "prod"` |
-| && | and (all conditions must match) | `labels["env"] == "prod" && labels["os"] == "mac"` |
-| \|\| | or (any one condition should match) | `labels["env"] == "dev" \|\| labels["env"] == "qa"` |
-| !
| not (used with functions) | `!equals(labels["env"], "prod")` | - -The language also supports the following functions: - -| Functions (with examples) | Description | -|----------------------------------------------|------------------------------------------------------------| -| `equals(labels["env"], "prod")` | resources with label key `env` equal to label value `prod` | -| `exists(labels["env"])` | resources with a label key `env`; label value unchecked | -| `!exists(labels["env"])` | resources without a label key `env`; label value unchecked | -| `search("foo", "bar", "some phrase")` | fuzzy match against common resource fields | -| `hasPrefix(name, "foo")` | resources with a name that starts with the prefix `foo` | -| `split(labels["foo"], ",")` | converts a delimited string into a list | -| `contains(split(labels["foo"], ","), "bar")` | determines if a value exists in a list | - -See some [examples](cli/cli.mdx) of the different ways you can filter resources. - -## Label expressions - -Label expressions can be used in Teleport roles to define access to resources -with custom logic. -Check out the Access Controls -[reference page](access-controls/roles.mdx) -for an overview of label expressions and where they can be used. - -Label expressions support a predicate language with the following fields -available: - -| Field | Type | Description | -|--------------------|-------------------------|-------------| -| `labels` | `map[string]string` | Combined static and dynamic labels of the resource (server, application, etc.) being accessed. | -| `user.spec.traits` | `map[string][]string` | All traits of the user accessing the resource (referred to as `external` or `internal` in role template expressions). 
|
-
-The language supports the following functions:
-
-| Syntax | Return type | Description | Example |
-|--------|-------------|-------------|---------|
-| `contains(list, item)` | Boolean | Returns true if `list` contains an exact match for `item` | `contains(user.spec.traits["teams"], labels["team"])` |
-| `regexp.match(list, re)` | Boolean | Returns true if `list` contains a match for `re` | `regexp.match(labels["team"], "dev-team-\d+$")` |
-| `regexp.replace(list,` `re, replacement)` | `[]string` | Replaces all matches of `re` with replacement for all items in `list` | `contains(regexp.replace(user.spec.traits["allowed-env"],` `"^env-(.*)$", "$1"), labels["env"])` |
-| `email.local(list)` | `[]string` | Returns the local part of each email in `list`, or an error if any email fails to parse | `contains(email.local(user.spec.traits["email"]),` `labels["owner"])` |
-| `strings.upper(list)` | `[]string` | Converts all items of the list to uppercase | `contains(strings.upper(user.spec.traits["username"]),` `labels["owner"])` |
-| `strings.lower(list)` | `[]string` | Converts all items of the list to lowercase | `contains(strings.lower(user.spec.traits["username"]),` `labels["owner"])` |
-| `labels_matching(re)` | `[]string` | Returns the aggregate of all label values with keys matching `re`, which can be a glob or a regular expression | `contains(labels_matching("^project-(team\|label)$"),` `"security")` |
-| `contains_any(list, items)` | Boolean | Returns true if `list` contains an exact match for any element of `items` | `contains_any(user.spec.traits["projects"],` `labels_matching("project-*"))` |
-| `contains_all(list, items)` | Boolean | Returns true if `list` contains an exact match for all elements of `items` | `contains_all(user.spec.traits["projects"],` `labels_matching("project-*"))` |
-
-Above, any argument named `list` can accept a list of values (like the list of
-values for a specific user trait) or a single value (like the value of a
-resource label or a string
literal). - -The language also supports the following operators: - -| Operator | Meaning | Example | -|----------|-------------------------------------|---------| -| == | equal to | `labels["env"] == "staging"` | -| != | not equal to | `labels["env"] != "production"` | -| \|\| | or (any one condition should match) | `labels["env"] == "staging" \|\| labels["env"] == "test"` | -| && | and (all conditions must match) | `labels["env"] == "staging" && labels["team"] == "dev"` | -| ! | not (logical negation) | `!regexp.match(user.spec.traits["teams"], "contractor")` | diff --git a/docs/pages/reference/public-certificates.mdx b/docs/pages/reference/public-certificates.mdx new file mode 100644 index 0000000000000..cfa95428ddfe0 --- /dev/null +++ b/docs/pages/reference/public-certificates.mdx @@ -0,0 +1,198 @@ +--- +title: "Public Certificates & Encryption Keys" +--- + +We use the following certificates and public keys to sign our software. Many of +these keys and certificates use our legal business name Gravitational, Inc. and +our former domain gravitational.com. Don’t worry – [Gravitational is +Teleport](https://goteleport.com/blog/gravitational-is-teleport/). + +## APT, YUM, & Zypper signing keys + +We sign our [APT, YUM and Zypper +repositories](../installation/linux.mdx#package-repositories) +with the following PGP key: + +* ID `C87ED53A6282C411` +* Fingerprint `0C5E 8BA5 658E 320D 1B03 1179 C87E D53A 6282 C411` + +The key is available for download at: +* [https://apt.releases.teleport.dev/gpg](https://apt.releases.teleport.dev/gpg) +* [https://yum.releases.teleport.dev/gpg](https://yum.releases.teleport.dev/gpg) +* [https://zypper.releases.teleport.dev/gpg](https://zypper.releases.teleport.dev/gpg) + +## Apple signing certificates + +Our Apple packages and binaries are [code +signed](https://developer.apple.com/support/code-signing/) by "Developer ID +QH8AA5B8UP Gravitational Inc." 
with the following certificate: +* SHA256 Fingerprint: `78 2F E1 18 5F A1 AD 68 AD 25 0B A9 4D 21 DC BB 0D 8E 47 + C6 E4 1D FE FB AB 05 41 33 4C 33 1D 43` +* SHA1 Fingerprint: `82 B6 25 AD 32 7C 24 1B 37 8A 54 B4 B2 54 BB 08 CE 71 B5 DF` + +Packages published prior to September 14, 2021 are signed with an older certificate for the same Developer ID (QH8AA5B8UP): +* SHA256 Fingerprint: `78 05 14 69 20 59 21 D1 EE 96 42 01 5A 28 35 FB E1 D4 38 5E 2A 23 5D 62 73 A4 D1 27 8A 33 BA 34` +* SHA1 Fingerprint: `D2 70 EA 0C F2 0E CB 17 28 B2 21 E1 D5 B6 7C FE 50 FF AB 62` + +Verify the Developer ID and fingerprint match on package downloads with the +pkgutil tool: +```console +$ pkgutil --check-signature teleport-(=teleport.version=).pkg +Package "teleport-(=teleport.version=).pkg": + Status: signed by a developer certificate issued by Apple for distribution + Notarization: trusted by the Apple notary service + Signed with a trusted timestamp on: 2024-02-16 21:42:52 +0000 + Certificate Chain: + 1. Developer ID Installer: Gravitational Inc. (QH8AA5B8UP) + Expires: 2026-07-27 18:27:29 +0000 + SHA256 Fingerprint: + 78 2F E1 18 5F A1 AD 68 AD 25 0B A9 4D 21 DC BB 0D 8E 47 C6 E4 1D + FE FB AB 05 41 33 4C 33 1D 43 + ------------------------------------------------------------------------ + 2. Developer ID Certification Authority + Expires: 2027-02-01 22:12:15 +0000 + SHA256 Fingerprint: + 7A FC 9D 01 A6 2F 03 A2 DE 96 37 93 6D 4A FE 68 09 0D 2D E1 8D 03 + F2 9C 88 CF B0 B1 BA 63 58 7F + ------------------------------------------------------------------------ + 3. Apple Root CA + Expires: 2035-02-09 21:40:36 +0000 + SHA256 Fingerprint: + B0 B1 73 0E CB C7 FF 45 05 14 2C 49 F1 29 5E 6E DA 6B CA ED 7E 2C + 68 C5 BE 91 B5 A1 10 01 F0 24 +``` + +The codesign tool can be used to perform the verification on individual +binaries: +```console +$ codesign --verify -d --verbose=2 /usr/local/bin/tsh +... +Authority=Developer ID Application: Gravitational Inc. 
(QH8AA5B8UP)
+Authority=Developer ID Certification Authority
+Authority=Apple Root CA
+Timestamp=Jun 29, 2024 at 11:02:15 PM
+Info.plist=not bound
+TeamIdentifier=QH8AA5B8UP
+...
+```
+
+The Teleport package in Homebrew is not maintained or signed by Teleport. We
+recommend the use of [our Teleport packages](https://goteleport.com/download/).
+
+## Windows signing certificates
+
+Our Windows binaries are signed with the following certificate:
+* Issued to: Gravitational, Inc.
+* Thumbprint: `C644BAB07912F5BD09BDB3C2D9AE6A724F9B2391`
+
+Verify the binary using the following PowerShell command:
+```console
+Get-AuthenticodeSignature -FilePath .\tsh.exe
+
+    Directory: C:\Users\ExampleUser
+
+SignerCertificate                         Status  Path
+-----------------                         ------  ----
+C644BAB07912F5BD09BDB3C2D9AE6A724F9B2391  Valid   tsh.exe
+```
+
+Ensure that the `SignerCertificate` matches the thumbprint shown above, and that
+the `Status` field is `Valid`.
+
+To further inspect the certificate, run the following PowerShell command:
+```console
+(Get-AuthenticodeSignature -FilePath .\tsh.exe).SignerCertificate | Format-List
+
+Subject      : CN="Gravitational, Inc.", O="Gravitational, Inc.", L=Oakland,
+               S=California, C=US, SERIALNUMBER=5720258,
+               OID.2.5.4.15=Private Organization,
+               OID.1.3.6.1.4.1.311.60.2.1.2=Delaware,
+               OID.1.3.6.1.4.1.311.60.2.1.3=US
+Issuer       : CN=DigiCert Trusted G4 Code Signing RSA4096 SHA384 2021 CA1,
+               O="DigiCert, Inc.", C=US
+Thumbprint   : C644BAB07912F5BD09BDB3C2D9AE6A724F9B2391
+FriendlyName :
+NotBefore    : 11/2/2023 12:00:00 AM
+NotAfter     : 10/16/2026 11:59:59 PM
+Extensions   : {System.Security.Cryptography.Oid,
+               System.Security.Cryptography.Oid,
+               System.Security.Cryptography.Oid,
+               System.Security.Cryptography.Oid...}
+```
+
+Alternatively, Windows binaries may be inspected graphically via Windows
+Explorer with the following steps:
+1. Right-click the binary in question, for example `tsh.exe`.
+2. Select “Properties”.
+3. 
On the resulting “tsh.exe Properties” dialog, select the “Digital Signatures”
+   tab.
+4. Select the “Gravitational Inc.” signer from the list.
+5. Select the “Details” button.
+6. On the resulting “Digital Signature Details” dialog, ensure that the header
+   states “This digital signature is OK.”
+7. Select the “View Certificate” button.
+8. On the resulting “Certificate” dialog, select the “Details” tab.
+9. Select the “Thumbprint” item from the list, and compare its value to the
+   thumbprint listed above.
+
+## OCI container images
+
+All of our distroless OCI container images are signed with `cosign`. The public
+key is:
+```
+-----BEGIN PUBLIC KEY-----
+MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAx+9UZboMl9ibwu/IWqbX
++wEJeKJqVpaLEsy1ODRpzIgcgaMh2n3BWtFEIoEszR3ZNlGdfqoPmb0nNnWx/qSf
+eEsoSXievXa63M/gAUBB+jecbGEJH+SNaJPMVuvjabPqKtoMT2Spw3cacqpINzq1
+rkWU8IawY333gXbwzgsuK7izT7ymgOLPO9qPuX7Q3EBaGw3EvY7u6UKtqhvSGdyr
+MirEErOERQ8EP8TrkCcJk0UfPAukzIcj91uHlXaqYBD/IyNYiC70EOlSLoN5/EeA
+I4jQnGRfaKF6H6K+WieX9tP9k8/02S+1EVJW592pdQZhJZEq1B/dMc8UR3IjPMMC
+qCT2xT6TsinaVzDaAbaRf0hvp311GxwrckNofGm/OSLn1+HqM6q4/A7qHubeRXGO
+byabRr93CHSLegZ7OBMswHqqnu6/DuXjc6gOsQkH09dVTFeh34rQy4GKrvnpmOwj
+Er1ccxzKcF/pw+lxi07hkpihR/uHUPxFboA/Wl7H2Jub21MFwIFQrDJv7z8yQgxJ
+EuIXJJox2oAL7NzdSi9VIUYnEnx+2EtkU/spAFRR6i1BnT6aoIy3521B76wnmRr9
+atCSKjt6MdRxgj4htCjBWWJAGM9Z/avF4CYFmK7qiVxgpdrSM8Esbt2Ta+Lu3QMJ
+T8LjqFu3u3dxVOo9RuLk+BkCAwEAAQ==
+-----END PUBLIC KEY-----
+```
+
+Signatures can be validated against the Teleport OCI image signing key:
+```console
+$ cosign verify --key teleport-oci-key.pub \
+    public.ecr.aws/gravitational/teleport-distroless-debug:(=teleport.version=)
+
+Verification for public.ecr.aws/gravitational/teleport-distroless-debug:(=teleport.version=) --
+The following checks were performed on each of these signatures:
+  - The cosign claims were validated
+  - The signatures were verified against the specified public key
+
+[
+  {
+    "critical": {
+      "identity": {
+        "docker-reference": 
"public.ecr.aws/gravitational/teleport-distroless-debug" + }, + "image": { + "docker-manifest-digest": "sha256:02093593bf129dc304b79854b01b0b911674e9bd6b9049cac14b6e1b116c58e5" + }, + "type": "cosign container image signature" + }, + "optional": ... + } +] +``` + +## Pre-releases and custom builds + +At times, we share pre-releases or one-off custom builds with customers. Those builds are available +from cdn.cloud.gravitational.io and are signed with a different set of certificates. + +For Apple devices: + +* Developer ID: K497G57PDJ +* SHA256 Fingerprint: `B0 34 81 61 82 B6 C6 3B 5B 4C C2 47 4E 9F EE 3F 12 AE 29 9A DE 70 BB 31 6F 2A + 25 DC 23 46 7D 26` + +For Windows devices: + +* Issued to: Gravitational, Inc. +* Thumbprint: `3C14BE378BE889E35B64649ED18A1EE362DA63D5` diff --git a/docs/pages/reference/reference.mdx b/docs/pages/reference/reference.mdx index 8c32fc711b1d9..8f10bb5140a01 100644 --- a/docs/pages/reference/reference.mdx +++ b/docs/pages/reference/reference.mdx @@ -1,6 +1,9 @@ --- title: Teleport Reference Guides +sidebar_label: References description: Provides comprehensive information on configuration fields, Teleport commands, and other ways of interacting with Teleport. +tags: + - platform-wide --- diff --git a/docs/pages/reference/resources.mdx b/docs/pages/reference/resources.mdx deleted file mode 100644 index 277ea89d5c9a2..0000000000000 --- a/docs/pages/reference/resources.mdx +++ /dev/null @@ -1,644 +0,0 @@ ---- -title: Teleport Resources -h1: Teleport Resource Reference -description: Reference documentation for Teleport resources ---- - -This reference guide lists dynamic resources you can manage with Teleport. For -more information on dynamic resources, see our guide to [Using Dynamic -Resources](../admin-guides/infrastructure-as-code/infrastructure-as-code.mdx). 
- -Examples of applying dynamic resources with `tctl`: - -```code -# List all connectors: -$ tctl get connectors - -# Dump a SAML connector called "okta": -$ tctl get saml/okta - -# Delete a SAML connector called "okta": -$ tctl rm saml/okta - -# Delete an OIDC connector called "gworkspace": -$ tctl rm oidc/gworkspace - -# Delete a github connector called "myteam": -$ tctl rm github/myteam - -# Delete a local user called "admin": -$ tctl rm users/admin - -# Show all devices: -$ tctl get devices - -# Fetch a specific device: -$ tctl get devices/ - -# Fetch the cluster auth preferences -$ tctl get cluster_auth_preference -``` - - - Although `tctl get connectors` will show you every connector, when working with an individual connector you must use the correct `kind`, such as `saml` or `oidc`. You can see each connector's `kind` at the top of its YAML output from `tctl get connectors`. - - -## List of dynamic resources - -Here's the list of resources currently exposed via [`tctl`](./cli/tctl.mdx): - -| Resource Kind | Description | -| - | - | -| [user](#user) | A user record in the internal Teleport user DB. | -| [role](#role) | A role assumed by interactive and non-interactive users. | -| connector | Authentication connectors for [Single Sign-On](../admin-guides/access-controls/sso/sso.mdx) (SSO) for SAML, OIDC and GitHub. | -| node | A registered SSH node. The same record is displayed via `tctl nodes ls`. | -| windows_desktop | A registered Windows desktop. | -| cluster | A trusted cluster. See [here](../admin-guides/management/admin/trustedclusters.mdx) for more details on connecting clusters together. | -| [login_rule](#login-rules) | A Login Rule, see the [Login Rules guide](../admin-guides/access-controls/login-rules/login-rules.mdx) for more info. | -| [device](#device) | A Teleport Trusted Device, see the [Device Trust guide](../admin-guides/access-controls/device-trust/guide.mdx) for more info. 
|
-|[ui_config](#ui-config)|Configuration for the Web UI served by the Proxy Service.|
-|[vnet_config](#vnet-config)|Configuration for the cluster's VNet options.|
-|[cluster_auth_preference](#cluster-auth-preferences)|Configuration for the cluster's auth preferences.|
-|[database_object_import_rule](#database-object-import-rule)|Database object import rules.|
-|[autoupdate_config](#auto-update-config)|Client tools auto-update configuration|
-|[autoupdate_version](#auto-update-version)|Client tools auto-update target version configuration|
-|[access_monitoring_rule](#access-monitoring-rule)|Access monitoring rules.|
-
-## User
-
-Teleport supports interactive local users, non-interactive local users (bots),
-and single sign-on users that are represented as resources.
-
-```yaml
-kind: user
-version: v2
-metadata:
-  name: joe
-spec:
-  # roles is a list of roles assigned to this user
-  roles:
-  - admin
-  # status can temporarily lock the user out of the Teleport system, for
-  # example when the user exceeds a predefined number of failed login attempts
-  status:
-    is_locked: false
-    lock_expires: 0001-01-01T00:00:00Z
-    locked_time: 0001-01-01T00:00:00Z
-  # traits are pairs of a key and a list of values assigned to a user resource.
-  # Traits can be used in role templates as variables.
-  traits:
-    logins:
-    - joe
-    - root
-  # expires, if not empty, sets automatic expiry of the resource
-  expires: 0001-01-01T00:00:00Z
-  # created_by is a system property that tracks
-  # identity of the author of this user resource.
-  created_by:
-    time: 0001-01-01T00:00:00Z
-    user:
-      name: builtin-Admin
-```
-
-## Role
-
-Interactive and non-interactive users (bots) assume one or many roles.
-
-Roles govern access to databases, SSH servers, Kubernetes clusters, web services and applications, and Windows desktops.
-
-(!docs/pages/includes/role-spec.mdx!)
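-
-As a minimal illustration (the role name, login, and label are hypothetical),
-a v7 role granting SSH access to `staging` nodes could look like:
-
-```yaml
-kind: role
-version: v7
-metadata:
-  name: staging-ssh
-spec:
-  allow:
-    logins: ["ubuntu"]
-    node_labels:
-      env: "staging"
-```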
-
-### Role versions
-
-Versions 5, 6 and 7 of the Teleport role resource have different behaviors when
-accessing Kubernetes resources.
-{/*lint ignore messaging*/}
-Roles not [granting Kubernetes access](../enroll-resources/kubernetes-access/introduction.mdx) are
-equivalent in the three versions.
-
-Roles v5 and v6 can only restrict actions on pods (e.g. executing in them).
-Role v7 supports restricting all common resource kinds (
-see [the `kubernetes_resource` documentation](../enroll-resources/kubernetes-access/controls.mdx#kubernetes_resources)
-for a complete list).
-
-When no `kubernetes_resource` is set:
-- Roles v5 and v7 grant all access by default
-- Role v6 blocks pod execution by default; this was reverted in role v7 to improve the user experience.
-
-{/* This table is cursed. Our current docs engine doesn't support HTML tables
-(due to SSR and the rehydration process). We have to do everything inlined in
-markdown. Some HTML character codes are used to render specific chars like {}
-or to avoid line breaks in the middle of the YAML. Spaces before br tags
-are required.*/}
-
-| Allow rule | Role v5 | Role v6 | Role v7 |
-|------------|---------|---------|---------|
-|
kubernetes_groups: 
- "system:masters"
kubernetes_labels: {}
kubernetes_resources: []
| ❌ no access | ❌ no access | ❌ no access | -|
kubernetes_groups: 
- "system:masters"
kubernetes_labels:
env: ["dev"]
kubernetes_resources: []
| ✅ full access to `dev` clusters | ❌ cannot exec in pods
✅ can access other
resources like `secrets` | ✅ full access to `dev` clusters | -|
kubernetes_groups: 
- "system:masters"
kubernetes_labels:
env: ["dev"]
kubernetes_resources:
- name: "*"
kind: pod
namespace: "foo"
| ✅ can exec in pods in `foo`
✅ can access `secrets` in all namespaces.
❌ cannot exec in other namespaces | ✅ can exec in pods in `foo`
✅ can access `secrets` in all namespaces.
❌ cannot exec in other namespaces | ✅ can exec in pods in `foo`
❌ cannot access `secrets` in all namespaces
❌ cannot exec in other namespaces | -|
kubernetes_groups: 
- "system:masters"
kubernetes_labels:
env: ["dev"]
kubernetes_resources:
- name: "*"
kind: pod
namespace: "foo"
- name: "*"
kind: secret
namespace: "foo"
| ⚠️ not supported | ⚠️ not supported | ✅ can exec in pods in `foo`
✅ can access `secrets` in `foo`
❌ cannot exec in other namespaces
❌ cannot access `secrets` in other namespaces
❌ cannot access `configmaps` in `foo` | -|
kubernetes_groups: 
- "system:masters"
kubernetes_labels:
env: ["dev"]
kubernetes_resources:
- kind: "namespace"
name: "foo"
| ⚠️ not supported | ⚠️ not supported | ✅ full access in namespace `foo`
❌ cannot access other namespaces |
-
-
-## Windows desktops
-
-In most cases, Teleport will register `windows_desktop` resources automatically
-based on static hosts in your configuration file or via LDAP-based discovery.
-
-You can also use [dynamic
-registration](../enroll-resources/desktop-access/dynamic-registration.mdx) using
-`dynamic_windows_desktop` resources. This can be useful for managing inventories
-of hosts that are not joined to an Active Directory domain.
-
-There are a few important considerations to keep in mind when registering
-desktops this way:
-
-1. The desktop's `addr` can be a hostname or IP address, and should include
-   the RDP port (typically 3389).
-1. If you intend to log in to the desktop with local Windows users, you must set
-   `non_ad: true`. If you intend to log in with Active Directory users, leave
-   `non_ad` unset (or false), and specify the Active Directory domain in the
-   `domain` field.
-
-## Login Rules
-
-Login rules contain logic to transform SSO user traits during login.
-
-(!docs/pages/includes/login-rule-spec.mdx!)
-
-## Database object import rule
-
-Database object import rules define the labels to be applied to database objects imported into Teleport.
-
-See [Database Access Controls](../enroll-resources/database-access/rbac.mdx) for more details.
-
-(!docs/pages/includes/database-access/auto-user-provisioning/database-object-import-rule-spec.mdx!)
-
-## Device
-
-Device contains information identifying a trusted device.
-
-(!docs/pages/includes/device-spec.mdx!)
-
-## UI config
-
-Global configuration options for the Web UI served by the Proxy Service. This resource is not set by default, which means a `tctl get ui` will result in an error if used before this resource has been set.
-
-(!docs/pages/includes/ui-config-spec.mdx!)
-
-## VNet config
-
-Cluster-specific options that VNet should use when setting up connections to resources from this cluster.
- -See [VNet](../enroll-resources/application-access/guides/vnet.mdx) for more details. - -```yaml -kind: vnet_config -version: v1 -metadata: - name: vnet-config -spec: - # The range to use when assigning IP addresses to resources. - # It can be changed in case of conflicts with other software - # deployed on end user devices. Defaults to "100.64.0.0/10". - ipv4_cidr_range: "100.64.0.0/10" - # Extra DNS zones that VNet should capture DNS queries for. - # Set them if your TCP apps use custom public_addr. - # Requires DNS TXT record to be set on the domains, - # see the guide linked above. Empty by default. - custom_dns_zones: - - suffix: company.test -``` - -## Cluster maintenance config - -Global configuration options for the agents enrolled into automatic updates (v1). - - -`cluster_maintenance_config` configures Managed Updates v1, -which are currently supported but will be superseded by Managed Updates v2. -The `autoupdate_config` and `autoupdate_version` resources further down -configure Managed Updates v2. - - -(!docs/pages/includes/cluster-maintenance-config-spec.mdx!) - -## Cluster auth preferences - -Global cluster configuration options for authentication. - -```yaml -metadata: - name: cluster-auth-preference -spec: - # Sets the list of allowed second factors for the cluster. - # Possible values: "otp", "webauthn", and "sso". - # Defaults to ["otp"]. - second_factors: ["webauthn", "otp"] - - # second_factors is the list of allowed second factors for the cluster. - # Possible values: "on", "otp" and "webauthn" - # If "on" is set, all MFA protocols are supported. - # - # Prefer setting second_factors instead. - #second_factor: "webauthn" - - # The name of the OIDC or SAML connector. if this is not set, the first connector in the backend is used. - connector_name: "" - - # webauthn is the settings for server-side Web authentication support. - webauthn: - # rp_id is the ID of the Relying Party. 
- # It should be set to the domain name of the Teleport installation.
-    #
-    # IMPORTANT: rp_id must never change in the lifetime of the cluster, because
-    # it's recorded in the registration data on the WebAuthn device. If the
-    # rp_id changes, all existing WebAuthn key registrations will become invalid
-    # and all users who use WebAuthn as the second factor will need to
-    # re-register.
-    rp_id: teleport.example.com
-    # Allow list of device attestation CAs in PEM format.
-    # If present, only devices whose attestation certificates match the
-    # certificates specified here may be registered (existing registrations are
-    # unchanged).
-    # If supplied in conjunction with `attestation_denied_cas`, then both
-    # conditions need to be true for registration to be allowed (the device
-    # MUST match an allowed CA and MUST NOT match a denied CA).
-    # By default all devices are allowed.
-    attestation_allowed_cas: []
-    # Deny list of device attestation CAs in PEM format.
-    # If present, only devices whose attestation certificates don't match the
-    # certificates specified here may be registered (existing registrations are
-    # unchanged).
-    attestation_denied_cas: []
-
-  # Enforce per-session MFA or PIV-hardware key restrictions on user login sessions.
-  # Possible values: true, false, "hardware_key", "hardware_key_touch".
-  # Defaults to false.
-  require_session_mfa: false
-
-  # Sets whether connections with expired client certificates will be disconnected.
-  disconnect_expired_cert: false
-
-  # Sets whether headless authentication is allowed.
-  # Headless authentication requires WebAuthn.
-  # Defaults to true if webauthn is configured.
-  allow_headless: false
-
-  # Sets whether local auth is enabled alongside any other authentication
-  # type.
-  allow_local_auth: true
-
-  # Sets whether passwordless authentication is allowed.
-  # Requires Webauthn to work.
-  allow_passwordless: false
-
-  # Sets the message of the day for the cluster.
- message_of_the_day: "" - - # idp is a set of options related to accessing IdPs within Teleport. Requires Teleport Enterprise - idp: - # options related to the Teleport SAML IdP. - saml: - # enables access to the Teleport SAML IdP. - enabled: true - - # locking_mode is the cluster-wide locking mode default. - # Possible values: "strict" or "best_effort" - locking_mode: best_effort - - # default_session_ttl defines the default TTL (time to live) of certificates - # issued to the users on this cluster. - default_session_ttl: "12h" - - # The type of authentication to use for this cluster. - # Possible values: "local", "oidc", "saml" and "github" - type: local - - stable_unix_user_config: - # If set to true, SSH instances will use the same UID for each given - # username when automatically creating users. - enabled: false - - # The range of UIDs (including both ends) used for automatic UID assignment. - first_uid: 90000 - last_uid: 95000 - -version: v2 -``` - -## Bot - -Bot resources define a Machine ID Bot identity and its access. - -Find out more on the -[Machine ID configuration reference](machine-id/configuration.mdx). - -(!docs/pages/includes/machine-id/bot-spec.mdx!) - -## Auto-update config - -Configuration options for Teleport Agent and client tools Managed Updates (v2). - - -The `autoupdate_config` and `autoupdate_version` resources configure Managed -Updates v2 and Managed Updates v1. - -`cluster_maintenance_config` above configures only Managed Updates v1 -which are currently supported but will be deprecated in a future version. - - -```yaml -kind: autoupdate_config -metadata: - name: autoupdate-config -spec: - agents: - # mode allows users to enable, disable, or suspend agent updates at the - # cluster level. Disable agent automatic updates only if self-managed - # updates are in place. This value may also be set in autoupdate_version. - # If set in both places, disabled overrides suspended, which overrides enabled. 
# Possible values: "enabled", "disabled", "suspended"
-    # Default: "disabled" (unless specified in autoupdate_version)
-    mode: enabled
-
-    # strategy used to roll out updates to agents.
-    # The halt-on-error strategy ensures that groups earlier in the schedule are
-    # given the opportunity to update to the target_version before groups that are
-    # later in the schedule. (Currently, the schedule must be stopped manually by
-    # setting the mode to "suspended" or "disabled". In the future, errors will be
-    # detected automatically).
-    # The time-based strategy ensures that each group updates within a defined
-    # time window, with no dependence between groups.
-    # Possible values: "halt-on-error" or "time-based"
-    # Default: "halt-on-error"
-    strategy: halt-on-error
-
-    # maintenance_window_duration configures the duration after the start_hour
-    # when updates may occur. Only valid for the time-based strategy.
-    # maintenance_window_duration: 1h
-
-    # schedules define groups of agents with different update times.
-    # Currently, only the regular schedule is configurable.
-    schedules:
-      regular:
-
-      # name of each group, configured locally via "teleport-update enable --group"
-      - name: staging
-
-        # start_hour of the update, in UTC
-        start_hour: 4
-
-        # days that the update may occur on
-        # Days are not configurable for most Enterprise cloud-hosted users.
-        # Possible values: "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun", and "*"
-        # Default: [ "Mon", "Tue", "Wed", "Thu" ]
-        days: [ "Mon", "Tue", "Wed", "Thu" ]
-
-      - name: production
-        start_hour: 5
-
-        # wait_hours ensures that the group executes at least a specific number of hours
-        # after the previous group. Only valid for the halt-on-error schedule.
-        # Default: 0
-        wait_hours: 24
-
-  tools:
-    # mode allows users to enable or disable client tool updates at the
-    # cluster level. Disable client tool automatic updates only if self-managed
-    # updates are in place.
- # Possible values: "enabled" or "disabled" - # Default: "disabled" - mode: enabled -``` - -See [Teleport Client Tool Managed Updates](../upgrading/client-tools-autoupdate.mdx) and -[Teleport Agent Managed Updates](../upgrading/agent-managed-updates.mdx) for more details. - -## Auto-update version - -Allows cluster administrators to manage versions used for agent and client tools Managed Updates. - - -The `autoupdate_config` and `autoupdate_version` resources configure Managed -Updates v2. - -`cluster_maintenance_config` above configures only Managed Updates v1, -which is currently supported but will be deprecated in a future version. - - -This resource is not editable for cloud-managed Teleport Enterprise to ensure that all of -your clients receive security patches and remain compatible with your cluster. - -```yaml -kind: autoupdate_version -metadata: - name: autoupdate-version -spec: - agents: - # start_version is the version used to install new agents before their - # group's scheduled update time. Agents never update to the start_version - # automatically, but may be required to via "teleport-update update --now". - start_version: v17.2.0 - - # target_version is the version that agents update to during their group's - # scheduled update time. New agents also use this version after their group's - # scheduled update time. - target_version: v17.2.1 - - # schedule used to roll out updates. - # The regular schedule is defined in the autoupdate_config resource. - # The immediate schedule updates all agents to target_version immediately. - # Possible values: "regular" or "immediate" - # Default: "regular" - schedule: regular - - # mode allows users to enable, disable, or suspend agent updates at the - # cluster level. Disable agent automatic updates only if self-managed - # updates are in place. This value may also be set in autoupdate_config. - # If set in both places, disabled overrides suspended, which overrides enabled.
- # Possible values: "enabled", "disabled", "suspended" - # Default: "disabled" (unless specified in autoupdate_config) - mode: enabled - - tools: - # target_version is the semver version of client tools the cluster will advertise. - # Client tools such as tsh and tctl will update to this version at login if the - # mode set in autoupdate_config is set to enabled. - # Default: cluster version - target_version: v17.2.1 -``` - -See [Teleport Client Tool Managed Updates](../upgrading/client-tools-autoupdate.mdx) and -[Teleport Agent Managed Updates](../upgrading/agent-managed-updates.mdx) for more details. - -## Auto-update agent rollout - -Allows cluster administrators to view the current agent rollout for Teleport Agent Managed Updates (v2). - -This resource should not be edited by humans. - -```yaml -kind: autoupdate_agent_rollout -metadata: - name: autoupdate-agent-rollout -spec: - # start_version is the version used to install new agents before their - # group's scheduled update time. Agents never update to the start_version - # automatically, but may be required to via "teleport-update update --now". - start_version: v17.2.0 - - # target_version is the version that agents update to during their group's - # scheduled update time. New agents also use this version after their group's - # scheduled update time. - target_version: v17.2.1 - - # schedule used to roll out updates. - # The regular schedule is defined in the autoupdate_config resource. - # The immediate schedule updates all agents to target_version immediately. - # Possible values: "regular" or "immediate" - schedule: regular - - # autoupdate_mode allows users to enable, disable, or suspend agent updates at the - # cluster level. Disable agent automatic updates only if self-managed - # updates are in place. This value may also be set in autoupdate_config. - # If set in both places, disabled overrides suspended, which overrides enabled. 
- # Possible values: "enabled", "disabled", "suspended" - autoupdate_mode: enabled - - # strategy used to roll out updates to agents. - # The halt-on-error strategy ensures that groups earlier in the schedule are - # given the opportunity to update to the target_version before groups that are - # later in the schedule. (Currently, the schedule must be stopped manually by - # setting the mode to "suspended" or "disabled". In the future, errors will be - # detected automatically). - # The time-based strategy ensures that each group updates within a defined - # time window, with no dependence between groups. - # Possible values: "halt-on-error" or "time-based" - # Default: "halt-on-error" - strategy: halt-on-error - - # maintenance_window_duration configures the duration after the start_hour - # when updates may occur. Only valid for the time-based strategy. - # maintenance_window_duration: 1h - -status: - - # groups contains the status for each group in the currently executing schedule. - groups: - - # name of each group, configured locally via "teleport-update enable --group" - - name: staging - - # start_time of the group - start_time: 0001-01-01T00:00:00Z - - # state of the group - # Possible values: unstarted, active, done, rolledback - state: active - - # last_update_time of this group's status - last_update_time: 0001-01-01T00:00:00Z - - # last_update_reason of this group's status - last_update_reason: "new version" - - # days that the update may occur on, from autoupdate_config - # Possible values: "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun", and "*" - config_days: [ "Mon", "Tue", "Wed", "Thu" ] - - # start_hour of the update, in UTC, from autoupdate_config - config_start_hour: 4 - - - name: production - - # ... - - # config_wait_hours is the minimum number of hours after the previous group that this - # group may execute, from autoupdate_config.
- config_wait_hours: 24 - - # start_time of the rollout - start_time: 0001-01-01T00:00:00Z - - # state of the entire rollout - # Possible values: unstarted, active, done, rolledback - state: active - -``` - -See [Teleport Agent Managed Updates](../upgrading/agent-managed-updates.mdx) for more details. - -## Access monitoring rule - -Access monitoring rules allow cluster administrators to monitor Access Requests -and apply notification routing and automatic review rules. - -```yaml -kind: access_monitoring_rule -version: v1 -metadata: - name: example_rule -spec: - # subjects specifies the kinds of subjects to monitor. - # Possible values: "access_request" - subjects: - - access_request - - # condition specifies the conditions that should be met to apply the access - # monitoring rule. The condition accepts a predicate expression which must - # evaluate to a boolean value. - # - # This condition would be satisfied by an access request if the `access` role - # is requested, and if the requesting user has the `team: dev` user trait. - condition: |- - contains_all(set("access"), access_request.spec.roles) && - contains_any(user.traits["team"], set("dev")) - - # Optional: desired_state specifies the desired reconciled state of the access - # request after the rule is applied. This field must be set to "reviewed" to - # enable automatic reviews. - # Possible values: "reviewed". - desired_state: reviewed - - # Optional: automatic_review configures the automatic review rules. - automatic_review: - # integration specifies the name of an external integration source used to - # help determine if a requesting user satisfies the rule conditions. - # Use "builtin" to specify no external integration. - # Possible values: "builtin" - integration: builtin - - # decision determines whether to automatically approve or deny the - # access request. - # Possible values: "APPROVED" or "DENIED" - decision: APPROVED - - # Optional: notification configures notification routing rules.
- notification: - # name specifies the external integration to which the notifications should - # be routed. - # Possible values: "email", "discord", "slack", "pagerduty", "jira", - # "mattermost", "msteams", "opsgenie", "servicenow", "datadog" - name: email - - # recipients specifies the list of recipients to be notified when the - # access monitoring rule is applied. - recipients: - - example@goteleport.com -``` - -Accepted fields within the condition predicate expression: -| Field | Description | -| --------------------------------------- | ---------------------------------------------- | -| access_request.spec.roles | The set of roles requested. | -| access_request.spec.suggested_reviewers | The set of reviewers specified in the request. | -| access_request.spec.system_annotations | A map of system annotations on the request. | -| access_request.spec.user | The requesting user. | -| access_request.spec.request_reason | The request reason. | -| access_request.spec.creation_time | The creation time of the request. | -| access_request.spec.expiry | The expiry time of the request. | -| user.traits | A map of traits of the requesting user. | - -See [Predicate Language](./predicate-language.mdx) for more details. diff --git a/docs/pages/reference/terraform-provider/data-sources/access_list.mdx b/docs/pages/reference/terraform-provider/data-sources/access_list.mdx deleted file mode 100644 index 69bad80b38e0d..0000000000000 --- a/docs/pages/reference/terraform-provider/data-sources/access_list.mdx +++ /dev/null @@ -1,166 +0,0 @@ ---- -title: Reference for the teleport_access_list Terraform data-source -sidebar_label: access_list -description: This page describes the supported values of the teleport_access_list data-source of the Teleport Terraform provider. ---- - -{/*Auto-generated file. 
Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - - - - - -{/* schema generated by tfplugindocs */} -## Schema - -### Optional - -- `header` (Attributes) header is the header for the resource. (see [below for nested schema](#nested-schema-for-header)) -- `spec` (Attributes) spec is the specification for the Access List. (see [below for nested schema](#nested-schema-for-spec)) - -### Nested Schema for `header` - -Required: - -- `version` (String) Version is the API version used to create the resource. It must be specified. Based on this version, Teleport will apply different defaults on resource creation or deletion. It must be an integer prefixed by "v". For example: `v1` - -Optional: - -- `kind` (String) kind is a resource kind. -- `metadata` (Attributes) metadata is resource metadata. (see [below for nested schema](#nested-schema-for-headermetadata)) -- `sub_kind` (String) sub_kind is an optional resource sub kind, used in some resources. - -### Nested Schema for `header.metadata` - -Required: - -- `name` (String) name is an object name. - -Optional: - -- `description` (String) description is object description. -- `expires` (String) expires is a global expiry time header can be set on any resource in the system. -- `labels` (Map of String) labels is a set of labels. -- `namespace` (String) namespace is object namespace. The field should be called "namespace" when it returns in Teleport 2.4. -- `revision` (String) revision is an opaque identifier which tracks the versions of a resource over time. Clients should ignore and not alter its value but must return the revision in any updates of a resource. - - - -### Nested Schema for `spec` - -Required: - -- `audit` (Attributes) audit describes the frequency that this Access List must be audited. (see [below for nested schema](#nested-schema-for-specaudit)) -- `grants` (Attributes) grants describes the access granted by membership to this Access List. 
(see [below for nested schema](#nested-schema-for-specgrants)) -- `owners` (Attributes List) owners is a list of owners of the Access List. (see [below for nested schema](#nested-schema-for-specowners)) - -Optional: - -- `description` (String) description is an optional plaintext description of the Access List. -- `membership_requires` (Attributes) membership_requires describes the requirements for a user to be a member of the Access List. For a membership to an Access List to be effective, the user must meet the requirements of Membership_requires and must be in the members list. (see [below for nested schema](#nested-schema-for-specmembership_requires)) -- `owner_grants` (Attributes) owner_grants describes the access granted by owners to this Access List. (see [below for nested schema](#nested-schema-for-specowner_grants)) -- `ownership_requires` (Attributes) ownership_requires describes the requirements for a user to be an owner of the Access List. For ownership of an Access List to be effective, the user must meet the requirements of ownership_requires and must be in the owners list. (see [below for nested schema](#nested-schema-for-specownership_requires)) -- `title` (String) title is a plaintext short description of the Access List. - -### Nested Schema for `spec.audit` - -Required: - -- `recurrence` (Attributes) recurrence is the recurrence definition (see [below for nested schema](#nested-schema-for-specauditrecurrence)) - -Optional: - -- `next_audit_date` (String) next_audit_date is when the next audit date should be done by. -- `notifications` (Attributes) notifications is the configuration for notifying users. (see [below for nested schema](#nested-schema-for-specauditnotifications)) - -### Nested Schema for `spec.audit.recurrence` - -Required: - -- `frequency` (Number) frequency is the frequency of reviews. This represents the period in months between two reviews. Supported values are 0, 1, 3, 6, and 12. 
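A concrete example can make the recurrence schema above easier to read. The following is a sketch of the corresponding Teleport `access_list` resource in YAML (the name, title, role, and owner are illustrative assumptions, not taken from this page); it schedules a review every six months on the 15th:

```yaml
kind: access_list
version: v1
metadata:
  name: example-access-list        # illustrative name
spec:
  title: "Example Access List"     # illustrative title
  audit:
    recurrence:
      frequency: 6                 # months between reviews: 0, 1, 3, 6, or 12
      day_of_month: 15             # day reviews are scheduled on: 0, 1, 15, or 31
  grants:
    roles:
      - access                     # illustrative granted role
  owners:
    - name: alice                  # illustrative owner
```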
- -Optional: - -- `day_of_month` (Number) day_of_month is the day of month that reviews will be scheduled on. Supported values are 0, 1, 15, and 31. - - -### Nested Schema for `spec.audit.notifications` - -Optional: - -- `start` (String) start specifies when to start notifying users that the next audit date is coming up. - - - -### Nested Schema for `spec.grants` - -Optional: - -- `roles` (List of String) roles are the roles that are granted to users who are members of the Access List. -- `traits` (Attributes List) traits are the traits that are granted to users who are members of the Access List. (see [below for nested schema](#nested-schema-for-specgrantstraits)) - -### Nested Schema for `spec.grants.traits` - -Optional: - -- `key` (String) key is the name of the trait. -- `values` (List of String) values is the list of trait values. - - - -### Nested Schema for `spec.owners` - -Optional: - -- `description` (String) description is the plaintext description of the owner and why they are an owner. -- `membership_kind` (Number) membership_kind describes the type of membership, either `MEMBERSHIP_KIND_USER` or `MEMBERSHIP_KIND_LIST`. -- `name` (String) name is the username of the owner. - - -### Nested Schema for `spec.membership_requires` - -Optional: - -- `roles` (List of String) roles are the user roles that must be present for the user to obtain access. -- `traits` (Attributes List) traits are the traits that must be present for the user to obtain access. (see [below for nested schema](#nested-schema-for-specmembership_requirestraits)) - -### Nested Schema for `spec.membership_requires.traits` - -Optional: - -- `key` (String) key is the name of the trait. -- `values` (List of String) values is the list of trait values. - - - -### Nested Schema for `spec.owner_grants` - -Optional: - -- `roles` (List of String) roles are the roles that are granted to users who are members of the Access List. 
-- `traits` (Attributes List) traits are the traits that are granted to users who are members of the Access List. (see [below for nested schema](#nested-schema-for-specowner_grantstraits)) - -### Nested Schema for `spec.owner_grants.traits` - -Optional: - -- `key` (String) key is the name of the trait. -- `values` (List of String) values is the list of trait values. - - - -### Nested Schema for `spec.ownership_requires` - -Optional: - -- `roles` (List of String) roles are the user roles that must be present for the user to obtain access. -- `traits` (Attributes List) traits are the traits that must be present for the user to obtain access. (see [below for nested schema](#nested-schema-for-specownership_requirestraits)) - -### Nested Schema for `spec.ownership_requires.traits` - -Optional: - -- `key` (String) key is the name of the trait. -- `values` (List of String) values is the list of trait values. - diff --git a/docs/pages/reference/terraform-provider/data-sources/access_monitoring_rule.mdx b/docs/pages/reference/terraform-provider/data-sources/access_monitoring_rule.mdx deleted file mode 100644 index 977e148109dcf..0000000000000 --- a/docs/pages/reference/terraform-provider/data-sources/access_monitoring_rule.mdx +++ /dev/null @@ -1,69 +0,0 @@ ---- -title: Reference for the teleport_access_monitoring_rule Terraform data-source -sidebar_label: access_monitoring_rule -description: This page describes the supported values of the teleport_access_monitoring_rule data-source of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - - - - - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `spec` (Attributes) Spec is an AccessMonitoringRule specification (see [below for nested schema](#nested-schema-for-spec)) -- `version` (String) version is the resource version - -### Optional - -- `metadata` (Attributes) metadata is the rule's metadata.
(see [below for nested schema](#nested-schema-for-metadata)) -- `sub_kind` (String) sub_kind is an optional resource sub kind, used in some resources - -### Nested Schema for `spec` - -Required: - -- `subjects` (List of String) subjects the rule operates on, can be a resource kind or a particular resource property. - -Optional: - -- `automatic_review` (Attributes) automatic_review defines automatic review configurations for access requests. Both notification and automatic_review may be set within the same access_monitoring_rule. If both fields are set, the rule will trigger both notifications and automatic reviews for the same set of access events. Separate plugins may be used if both notifications and automatic_reviews are set. (see [below for nested schema](#nested-schema-for-specautomatic_review)) -- `condition` (String) condition is a predicate expression that operates on the specified subject resources, and determines whether the subject will be moved into desired state. -- `desired_state` (String) desired_state defines the desired state of the subject. For access request subjects, the desired_state may be set to `reviewed` to indicate that the access request should be automatically reviewed. -- `notification` (Attributes) notification defines the plugin configuration for notifications if rule is triggered. Both notification and automatic_review may be set within the same access_monitoring_rule. If both fields are set, the rule will trigger both notifications and automatic reviews for the same set of access events. Separate plugins may be used if both notifications and automatic_reviews are set. (see [below for nested schema](#nested-schema-for-specnotification)) -- `states` (List of String) states are the desired state which the monitoring rule is attempting to bring the subjects matching the condition to. - -### Nested Schema for `spec.automatic_review` - -Optional: - -- `decision` (String) decision specifies the proposed state of the access review.
This can be either 'APPROVED' or 'DENIED'. -- `integration` (String) integration is the name of the integration that is responsible for monitoring the rule. Set this value to `builtin` to monitor the rule with Teleport. - - -### Nested Schema for `spec.notification` - -Optional: - -- `name` (String) name is the name of the plugin to which this configuration should apply. -- `recipients` (List of String) recipients is the list of recipients the plugin should notify. - - - -### Nested Schema for `metadata` - -Required: - -- `name` (String) name is an object name. - -Optional: - -- `description` (String) description is object description. -- `expires` (String) expires is a global expiry time header that can be set on any resource in the system. -- `labels` (Map of String) labels is a set of labels. - diff --git a/docs/pages/reference/terraform-provider/data-sources/app.mdx b/docs/pages/reference/terraform-provider/data-sources/app.mdx deleted file mode 100644 index ae1ecc738108d..0000000000000 --- a/docs/pages/reference/terraform-provider/data-sources/app.mdx +++ /dev/null @@ -1,136 +0,0 @@ ---- -title: Reference for the teleport_app Terraform data-source -sidebar_label: app -description: This page describes the supported values of the teleport_app data-source of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - - - - - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `version` (String) Version is the resource version. It must be specified. Supported values are: `v3`. - -### Optional - -- `metadata` (Attributes) Metadata is the app resource metadata. (see [below for nested schema](#nested-schema-for-metadata)) -- `spec` (Attributes) Spec is the app resource spec. (see [below for nested schema](#nested-schema-for-spec)) -- `sub_kind` (String) SubKind is an optional resource subkind.
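To ground the schema above, here is a minimal sketch of the corresponding Teleport `app` resource in YAML (the app name, label, and addresses are illustrative assumptions, not taken from this page):

```yaml
kind: app
version: v3                                     # the only supported version per the schema
metadata:
  name: grafana                                 # illustrative app name
  labels:
    env: dev                                    # illustrative label
spec:
  uri: "http://localhost:3000"                  # the web app endpoint
  public_addr: "grafana.teleport.example.com"   # illustrative public address
```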
- -### Nested Schema for `metadata` - -Required: - -- `name` (String) Name is an object name - -Optional: - -- `description` (String) Description is object description -- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. -- `labels` (Map of String) Labels is a set of labels - - -### Nested Schema for `spec` - -Optional: - -- `aws` (Attributes) AWS contains additional options for AWS applications. (see [below for nested schema](#nested-schema-for-specaws)) -- `cloud` (String) Cloud identifies the cloud instance the app represents. -- `cors` (Attributes) CORSPolicy defines the Cross-Origin Resource Sharing settings for the app. (see [below for nested schema](#nested-schema-for-speccors)) -- `dynamic_labels` (Attributes Map) DynamicLabels are the app's command labels. (see [below for nested schema](#nested-schema-for-specdynamic_labels)) -- `identity_center` (Attributes) IdentityCenter encapsulates AWS identity-center specific information. Only valid for Identity Center account apps. (see [below for nested schema](#nested-schema-for-specidentity_center)) -- `insecure_skip_verify` (Boolean) InsecureSkipVerify disables app's TLS certificate verification. -- `integration` (String) Integration is the integration name that must be used to access this Application. Only applicable to AWS App Access. If present, the Application must use the Integration's credentials instead of ambient credentials to access Cloud APIs. -- `public_addr` (String) PublicAddr is the public address the application is accessible at. -- `required_app_names` (List of String) RequiredAppNames is a list of app names that are required for this app to function. Any app listed here will be part of the authentication redirect flow and authenticate alongside this app. -- `rewrite` (Attributes) Rewrite is a list of rewriting rules to apply to requests and responses.
(see [below for nested schema](#nested-schema-for-specrewrite)) -- `tcp_ports` (Attributes List) TCPPorts is a list of ports and port ranges that an app agent can forward connections to. Only applicable to TCP App Access. If this field is not empty, URI is expected to contain no port number and start with the tcp protocol. (see [below for nested schema](#nested-schema-for-spectcp_ports)) -- `uri` (String) URI is the web app endpoint. -- `use_any_proxy_public_addr` (Boolean) UseAnyProxyPublicAddr will rebuild this app's fqdn based on the proxy public addr that the request originated from. This should be true if your proxy has multiple proxy public addrs and you want the app to be accessible from any of them. If `public_addr` is explicitly set in the app spec, setting this value to true will overwrite that public address in the web UI. -- `user_groups` (List of String) UserGroups are a list of user group IDs that this app is associated with. - -### Nested Schema for `spec.aws` - -Optional: - -- `external_id` (String) ExternalID is the AWS External ID used when assuming roles in this app. -- `roles_anywhere_profile` (Attributes) RolesAnywhereProfile contains the IAM Roles Anywhere fields associated with this Application. These fields are set when performing the synchronization of AWS IAM Roles Anywhere Profiles into Teleport Apps. (see [below for nested schema](#nested-schema-for-specawsroles_anywhere_profile)) - -### Nested Schema for `spec.aws.roles_anywhere_profile` - -Optional: - -- `accept_role_session_name` (Boolean) Whether this Roles Anywhere Profile accepts a custom role session name. When not supported, the AWS Session Name will be the X.509 certificate's serial number. When supported, the AWS Session Name will be the identity's username. 
This value comes from: https://docs.aws.amazon.com/rolesanywhere/latest/APIReference/API_ProfileDetail.html / acceptRoleSessionName -- `profile_arn` (String) ProfileARN is the AWS IAM Roles Anywhere Profile ARN that originated this Teleport App. - - - -### Nested Schema for `spec.cors` - -Optional: - -- `allow_credentials` (Boolean) allow_credentials indicates whether credentials are allowed. -- `allowed_headers` (List of String) allowed_headers specifies which headers can be used when accessing the app. -- `allowed_methods` (List of String) allowed_methods specifies which methods are allowed when accessing the app. -- `allowed_origins` (List of String) allowed_origins specifies which origins are allowed to access the app. -- `exposed_headers` (List of String) exposed_headers indicates which headers are made available to scripts via the browser. -- `max_age` (Number) max_age indicates how long (in seconds) the results of a preflight request can be cached. - - -### Nested Schema for `spec.dynamic_labels` - -Optional: - -- `command` (List of String) Command is a command to run -- `period` (String) Period is a time between command runs -- `result` (String) Result captures standard output - - -### Nested Schema for `spec.identity_center` - -Optional: - -- `account_id` (String) Account ID is the AWS-assigned ID of the account -- `permission_sets` (Attributes List) PermissionSets lists the available permission sets on the given account (see [below for nested schema](#nested-schema-for-specidentity_centerpermission_sets)) - -### Nested Schema for `spec.identity_center.permission_sets` - -Optional: - -- `arn` (String) ARN is the fully-formed ARN of the Permission Set. -- `assignment_name` (String) AssignmentID is the ID of the Teleport Account Assignment resource that represents this permission being assigned on the enclosing Account. -- `name` (String) Name is the human-readable name of the Permission Set.
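The CORS settings described above map one-for-one onto the app spec. A hedged YAML fragment (the origin, methods, headers, and values are illustrative assumptions):

```yaml
spec:
  cors:
    allowed_origins:
      - "https://example.com"   # illustrative origin
    allowed_methods:
      - "GET"
      - "POST"
    allowed_headers:
      - "Content-Type"
    allow_credentials: true
    max_age: 600                # cache preflight results for 10 minutes
```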
- - - -### Nested Schema for `spec.rewrite` - -Optional: - -- `headers` (Attributes List) Headers is a list of headers to inject when passing the request over to the application. (see [below for nested schema](#nested-schema-for-specrewriteheaders)) -- `jwt_claims` (String) JWTClaims configures whether roles/traits are included in the JWT token. -- `redirect` (List of String) Redirect defines a list of hosts which will be rewritten to the public address of the application if they occur in the "Location" header. - -### Nested Schema for `spec.rewrite.headers` - -Optional: - -- `name` (String) Name is the http header name. -- `value` (String) Value is the http header value. - - - -### Nested Schema for `spec.tcp_ports` - -Optional: - -- `end_port` (Number) EndPort describes the end of the range, inclusive. If set, it must be between 2 and 65535 and be greater than Port when describing a port range. When omitted or set to zero, it signifies that the port range defines a single port. -- `port` (Number) Port describes the start of the range. It must be between 1 and 65535. - diff --git a/docs/pages/reference/terraform-provider/data-sources/auth_preference.mdx b/docs/pages/reference/terraform-provider/data-sources/auth_preference.mdx deleted file mode 100644 index 36b821618c116..0000000000000 --- a/docs/pages/reference/terraform-provider/data-sources/auth_preference.mdx +++ /dev/null @@ -1,134 +0,0 @@ ---- -title: Reference for the teleport_auth_preference Terraform data-source -sidebar_label: auth_preference -description: This page describes the supported values of the teleport_auth_preference data-source of the Teleport Terraform provider. ---- - -{/*Auto-generated file. 
Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - - - - - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `spec` (Attributes) Spec is an AuthPreference specification (see [below for nested schema](#nested-schema-for-spec)) -- `version` (String) Version is the resource version. It must be specified. Supported values are: `v2`. - -### Optional - -- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) -- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources - -### Nested Schema for `spec` - -Optional: - -- `allow_headless` (Boolean) AllowHeadless enables/disables headless support. Headless authentication requires Webauthn to work. Defaults to true if the Webauthn is configured, defaults to false otherwise. -- `allow_local_auth` (Boolean) AllowLocalAuth is true if local authentication is enabled. -- `allow_passwordless` (Boolean) AllowPasswordless enables/disables passwordless support. Passwordless requires Webauthn to work. Defaults to true if the Webauthn is configured, defaults to false otherwise. -- `connector_name` (String) ConnectorName is the name of the OIDC or SAML connector. If this value is not set the first connector in the backend will be used. -- `default_session_ttl` (String) DefaultSessionTTL is the TTL to use for user certs when an explicit TTL is not requested. -- `device_trust` (Attributes) DeviceTrust holds settings related to trusted device verification. Requires Teleport Enterprise. (see [below for nested schema](#nested-schema-for-specdevice_trust)) -- `disconnect_expired_cert` (Boolean) DisconnectExpiredCert provides disconnect expired certificate setting - if true, connections with expired client certificates will get disconnected -- `hardware_key` (Attributes) HardwareKey are the settings for hardware key support. 
(see [below for nested schema](#nested-schema-for-spechardware_key)) -- `idp` (Attributes) IDP is a set of options related to accessing IdPs within Teleport. Requires Teleport Enterprise. (see [below for nested schema](#nested-schema-for-specidp)) -- `locking_mode` (String) LockingMode is the cluster-wide locking mode default. -- `message_of_the_day` (String) -- `okta` (Attributes) Okta is a set of options related to the Okta service in Teleport. Requires Teleport Enterprise. (see [below for nested schema](#nested-schema-for-specokta)) -- `require_session_mfa` (Number) RequireMFAType is the type of MFA requirement enforced for this cluster. 0 is "OFF", 1 is "SESSION", 2 is "SESSION_AND_HARDWARE_KEY", 3 is "HARDWARE_KEY_TOUCH", 4 is "HARDWARE_KEY_PIN", 5 is "HARDWARE_KEY_TOUCH_AND_PIN". -- `second_factor` (String) SecondFactor is the type of multi-factor. Deprecated: Prefer using SecondFactors instead. -- `second_factors` (List of Number) SecondFactors is a list of supported multi-factor types. 1 is "otp", 2 is "webauthn", 3 is "sso". If unspecified, the current default value is [1], or ["otp"]. -- `signature_algorithm_suite` (Number) SignatureAlgorithmSuite is the configured signature algorithm suite for the cluster. If unspecified, the current default value is "legacy". 1 is "legacy", 2 is "balanced-v1", 3 is "fips-v1", 4 is "hsm-v1". -- `stable_unix_user_config` (Attributes) StableUnixUserConfig contains the cluster-wide configuration for stable UNIX users. (see [below for nested schema](#nested-schema-for-specstable_unix_user_config)) -- `type` (String) Type is the type of authentication. -- `u2f` (Attributes) U2F are the settings for the U2F device. (see [below for nested schema](#nested-schema-for-specu2f)) -- `webauthn` (Attributes) Webauthn are the settings for server-side Web Authentication support.
(see [below for nested schema](#nested-schema-for-specwebauthn)) - -### Nested Schema for `spec.device_trust` - -Optional: - -- `auto_enroll` (Boolean) Enable device auto-enroll. Auto-enroll lets any user issue a device enrollment token for a known device that is not already enrolled. `tsh` takes advantage of auto-enroll to automatically enroll devices on user login, when appropriate. The effective cluster Mode still applies: AutoEnroll=true is meaningless if Mode="off". -- `ekcert_allowed_cas` (List of String) Allow list of EKCert CAs in PEM format. If present, only TPM devices that present an EKCert that is signed by a CA specified here may be enrolled (existing enrollments are unchanged). If not present, then the CA of TPM EKCerts will not be checked during enrollment, this allows any device to enroll. -- `mode` (String) Mode of verification for trusted devices. The following modes are supported: - "off": disables both device authentication and authorization. - "optional": allows both device authentication and authorization, but doesn't enforce the presence of device extensions for sensitive endpoints. - "required": enforces the presence of device extensions for sensitive endpoints. Mode is always "off" for OSS. Defaults to "optional" for Enterprise. - - -### Nested Schema for `spec.hardware_key` - -Optional: - -- `pin_cache_ttl` (String) PinCacheTTL is the amount of time in nanoseconds that Teleport clients will cache the user's PIV PIN when hardware key PIN policy is enabled. -- `piv_slot` (String) PIVSlot is a PIV slot that Teleport clients should use instead of the default based on private key policy. For example, "9a" or "9e". -- `serial_number_validation` (Attributes) SerialNumberValidation holds settings for hardware key serial number validation. By default, serial number validation is disabled. 
(see [below for nested schema](#nested-schema-for-spechardware_keyserial_number_validation)) - -### Nested Schema for `spec.hardware_key.serial_number_validation` - -Optional: - -- `enabled` (Boolean) Enabled indicates whether hardware key serial number validation is enabled. -- `serial_number_trait_name` (String) SerialNumberTraitName is an optional custom user trait name for hardware key serial numbers to replace the default: "hardware_key_serial_numbers". Note: Values for this user trait should be a comma-separated list of serial numbers, or a list of comm-separated lists. e.g ["123", "345,678"] - - - -### Nested Schema for `spec.idp` - -Optional: - -- `saml` (Attributes) SAML are options related to the Teleport SAML IdP. (see [below for nested schema](#nested-schema-for-specidpsaml)) - -### Nested Schema for `spec.idp.saml` - -Optional: - -- `enabled` (Boolean) Enabled is set to true if this option allows access to the Teleport SAML IdP. - - - -### Nested Schema for `spec.okta` - -Optional: - -- `sync_period` (String) SyncPeriod is the duration between synchronization calls in nanoseconds. - - -### Nested Schema for `spec.stable_unix_user_config` - -Optional: - -- `enabled` (Boolean) Enabled signifies that (UNIX) Teleport SSH hosts should obtain a UID from the control plane if they're about to provision a host user with no other configured UID. -- `first_uid` (Number) FirstUid is the start of the range of UIDs for autoprovisioned host users. The range is inclusive on both ends, so the specified UID can be assigned. -- `last_uid` (Number) LastUid is the end of the range of UIDs for autoprovisioned host users. The range is inclusive on both ends, so the specified UID can be assigned. - - -### Nested Schema for `spec.u2f` - -Optional: - -- `app_id` (String) AppID returns the application ID for universal mult-factor. -- `device_attestation_cas` (List of String) DeviceAttestationCAs contains the trusted attestation CAs for U2F devices. 
-- `facets` (List of String) Facets returns the facets for universal mult-factor. Deprecated: Kept for backwards compatibility reasons, but Facets have no effect since Teleport v10, when Webauthn replaced the U2F implementation.
-
-
-### Nested Schema for `spec.webauthn`
-
-Optional:
-
-- `attestation_allowed_cas` (List of String) Allow list of device attestation CAs in PEM format. If present, only devices whose attestation certificates match the certificates specified here may be registered (existing registrations are unchanged). If supplied in conjunction with AttestationDeniedCAs, then both conditions need to be true for registration to be allowed (the device MUST match an allowed CA and MUST NOT match a denied CA). By default all devices are allowed.
-- `attestation_denied_cas` (List of String) Deny list of device attestation CAs in PEM format. If present, only devices whose attestation certificates don't match the certificates specified here may be registered (existing registrations are unchanged). If supplied in conjunction with AttestationAllowedCAs, then both conditions need to be true for registration to be allowed (the device MUST match an allowed CA and MUST NOT match a denied CA). By default no devices are denied.
-- `rp_id` (String) RPID is the ID of the Relying Party. It should be set to the domain name of the Teleport installation. IMPORTANT: RPID must never change in the lifetime of the cluster, because it's recorded in the registration data on the WebAuthn device. If the RPID changes, all existing WebAuthn key registrations will become invalid and all users who use WebAuthn as the multi-factor will need to re-register.
-
-
-
-### Nested Schema for `metadata`
-
-Optional:
-
-- `description` (String) Description is object description
-- `expires` (String) Expires is a global expiry time header can be set on any resource in the system.
-- `labels` (Map of String) Labels is a set of labels
-
diff --git a/docs/pages/reference/terraform-provider/data-sources/autoupdate_config.mdx b/docs/pages/reference/terraform-provider/data-sources/autoupdate_config.mdx
deleted file mode 100644
index 7d8feb1e83ce9..0000000000000
--- a/docs/pages/reference/terraform-provider/data-sources/autoupdate_config.mdx
+++ /dev/null
@@ -1,77 +0,0 @@
----
-title: Reference for the teleport_autoupdate_config Terraform data-source
-sidebar_label: autoupdate_config
-description: This page describes the supported values of the teleport_autoupdate_config data-source of the Teleport Terraform provider.
----
-
-{/*Auto-generated file. Do not edit.*/}
-{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/}
-
-
-
-
-
-{/* schema generated by tfplugindocs */}
-## Schema
-
-### Required
-
-- `spec` (Attributes) (see [below for nested schema](#nested-schema-for-spec))
-- `version` (String)
-
-### Optional
-
-- `metadata` (Attributes) (see [below for nested schema](#nested-schema-for-metadata))
-- `sub_kind` (String)
-
-### Nested Schema for `spec`
-
-Optional:
-
-- `agents` (Attributes) (see [below for nested schema](#nested-schema-for-specagents))
-- `tools` (Attributes) (see [below for nested schema](#nested-schema-for-spectools))
-
-### Nested Schema for `spec.agents`
-
-Optional:
-
-- `maintenance_window_duration` (String) maintenance_window_duration is the maintenance window duration. This can only be set if `strategy` is "time-based". Once the window is over, the group transitions to the done state. Existing agents won't be updated until the next maintenance window.
-- `mode` (String) mode specifies whether agent autoupdates are enabled, disabled, or paused.
-- `schedules` (Attributes) schedules specifies schedules for updates of grouped agents. (see [below for nested schema](#nested-schema-for-specagentsschedules))
-- `strategy` (String) strategy to use for updating the agents.
-
-### Nested Schema for `spec.agents.schedules`
-
-Optional:
-
-- `regular` (Attributes List) regular schedules for non-critical versions. (see [below for nested schema](#nested-schema-for-specagentsschedulesregular))
-
-### Nested Schema for `spec.agents.schedules.regular`
-
-Optional:
-
-- `days` (List of String) days when the update can run. Supported values are "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun" and "*"
-- `name` (String) name of the group
-- `start_hour` (Number) start_hour to initiate update
-- `wait_hours` (Number) wait_hours after last group succeeds before this group can run. This can only be used when the strategy is "halt-on-failure". This field must be positive.
-
-
-
-
-### Nested Schema for `spec.tools`
-
-Optional:
-
-- `mode` (String) Mode defines state of the client tools auto update.
-
-
-
-### Nested Schema for `metadata`
-
-Optional:
-
-- `description` (String) description is object description.
-- `expires` (String) expires is a global expiry time header can be set on any resource in the system.
-- `labels` (Map of String) labels is a set of labels.
-- `name` (String) name is an object name.
-
diff --git a/docs/pages/reference/terraform-provider/data-sources/autoupdate_version.mdx b/docs/pages/reference/terraform-provider/data-sources/autoupdate_version.mdx
deleted file mode 100644
index 4737c8dbb4f8f..0000000000000
--- a/docs/pages/reference/terraform-provider/data-sources/autoupdate_version.mdx
+++ /dev/null
@@ -1,60 +0,0 @@
----
-title: Reference for the teleport_autoupdate_version Terraform data-source
-sidebar_label: autoupdate_version
-description: This page describes the supported values of the teleport_autoupdate_version data-source of the Teleport Terraform provider.
----
-
-{/*Auto-generated file. Do not edit.*/}
-{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/}
-
-
-
-
-
-{/* schema generated by tfplugindocs */}
-## Schema
-
-### Required
-
-- `spec` (Attributes) (see [below for nested schema](#nested-schema-for-spec))
-- `version` (String)
-
-### Optional
-
-- `metadata` (Attributes) (see [below for nested schema](#nested-schema-for-metadata))
-- `sub_kind` (String)
-
-### Nested Schema for `spec`
-
-Optional:
-
-- `agents` (Attributes) (see [below for nested schema](#nested-schema-for-specagents))
-- `tools` (Attributes) (see [below for nested schema](#nested-schema-for-spectools))
-
-### Nested Schema for `spec.agents`
-
-Optional:
-
-- `mode` (String) autoupdate_mode to use for the rollout
-- `schedule` (String) schedule to use for the rollout
-- `start_version` (String) start_version is the version to update from.
-- `target_version` (String) target_version is the version to update to.
-
-
-### Nested Schema for `spec.tools`
-
-Optional:
-
-- `target_version` (String) TargetVersion specifies the semantic version required for tools to establish a connection with the cluster. Client tools after connection to the cluster going to be updated to this version automatically.
-
-
-
-### Nested Schema for `metadata`
-
-Optional:
-
-- `description` (String) description is object description.
-- `expires` (String) expires is a global expiry time header can be set on any resource in the system.
-- `labels` (Map of String) labels is a set of labels.
-- `name` (String) name is an object name.
-
diff --git a/docs/pages/reference/terraform-provider/data-sources/cluster_networking_config.mdx b/docs/pages/reference/terraform-provider/data-sources/cluster_networking_config.mdx
deleted file mode 100644
index cc5ac33f000fe..0000000000000
--- a/docs/pages/reference/terraform-provider/data-sources/cluster_networking_config.mdx
+++ /dev/null
@@ -1,70 +0,0 @@
----
-title: Reference for the teleport_cluster_networking_config Terraform data-source
-sidebar_label: cluster_networking_config
-description: This page describes the supported values of the teleport_cluster_networking_config data-source of the Teleport Terraform provider.
----
-
-{/*Auto-generated file. Do not edit.*/}
-{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/}
-
-
-
-
-
-{/* schema generated by tfplugindocs */}
-## Schema
-
-### Optional
-
-- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata))
-- `spec` (Attributes) Spec is a ClusterNetworkingConfig specification (see [below for nested schema](#nested-schema-for-spec))
-- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources
-- `version` (String) Version is the resource version. It must be specified. Supported values are:`v2`.
-
-### Nested Schema for `metadata`
-
-Optional:
-
-- `description` (String) Description is object description
-- `expires` (String) Expires is a global expiry time header can be set on any resource in the system.
-- `labels` (Map of String) Labels is a set of labels
-
-
-### Nested Schema for `spec`
-
-Optional:
-
-- `assist_command_execution_workers` (Number) AssistCommandExecutionWorkers determines the number of workers that will execute arbitrary Assist commands on servers in parallel
-- `case_insensitive_routing` (Boolean) CaseInsensitiveRouting causes proxies to use case-insensitive hostname matching.
-- `client_idle_timeout` (String) ClientIdleTimeout sets global cluster default setting for client idle timeouts.
-- `idle_timeout_message` (String) ClientIdleTimeoutMessage is the message sent to the user when a connection times out.
-- `keep_alive_count_max` (Number) KeepAliveCountMax is the number of keep-alive messages that can be missed before the server disconnects the connection to the client.
-- `keep_alive_interval` (String) KeepAliveInterval is the interval at which the server sends keep-alive messages to the client.
-- `proxy_listener_mode` (Number) ProxyListenerMode is proxy listener mode used by Teleport Proxies. 0 is "separate"; 1 is "multiplex".
-- `proxy_ping_interval` (String) ProxyPingInterval defines in which interval the TLS routing ping message should be sent. This is applicable only when using ping-wrapped connections, regular TLS routing connections are not affected.
-- `routing_strategy` (Number) RoutingStrategy determines the strategy used to route to nodes. 0 is "unambiguous_match"; 1 is "most_recent".
-- `session_control_timeout` (String) SessionControlTimeout is the session control lease expiry and defines the upper limit of how long a node may be out of contact with the auth server before it begins terminating controlled sessions.
-- `ssh_dial_timeout` (String) SSHDialTimeout is a custom dial timeout used when establishing SSH connections. If not set, the default timeout of 30s will be used.
-- `tunnel_strategy` (Attributes) TunnelStrategyV1 determines the tunnel strategy used in the cluster. (see [below for nested schema](#nested-schema-for-spectunnel_strategy))
-- `web_idle_timeout` (String) WebIdleTimeout sets global cluster default setting for the web UI idle timeouts.
-
-### Nested Schema for `spec.tunnel_strategy`
-
-Optional:
-
-- `agent_mesh` (Attributes) (see [below for nested schema](#nested-schema-for-spectunnel_strategyagent_mesh))
-- `proxy_peering` (Attributes) (see [below for nested schema](#nested-schema-for-spectunnel_strategyproxy_peering))
-
-### Nested Schema for `spec.tunnel_strategy.agent_mesh`
-
-Optional:
-
-- `active` (Boolean) Automatically generated field preventing empty message errors
-
-
-### Nested Schema for `spec.tunnel_strategy.proxy_peering`
-
-Optional:
-
-- `agent_connection_count` (Number)
-
diff --git a/docs/pages/reference/terraform-provider/data-sources/data-sources.mdx b/docs/pages/reference/terraform-provider/data-sources/data-sources.mdx
deleted file mode 100644
index 68a856a5746df..0000000000000
--- a/docs/pages/reference/terraform-provider/data-sources/data-sources.mdx
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title: "Terraform data-sources index"
-description: "Index of all the data-sources supported by the Teleport Terraform Provider"
----
-
-{/*Auto-generated file. Do not edit.*/}
-{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/}
-
-{/*
-  This file will be renamed data-sources.mdx during build time.
-  The template name is reserved by tfplugindocs so we suffix with -index.
-*/}
-
-The Teleport Terraform provider supports the following data-sources:
-
-  - [`teleport_access_list`](./access_list.mdx)
-  - [`teleport_access_monitoring_rule`](./access_monitoring_rule.mdx)
-  - [`teleport_app`](./app.mdx)
-  - [`teleport_auth_preference`](./auth_preference.mdx)
-  - [`teleport_autoupdate_config`](./autoupdate_config.mdx)
-  - [`teleport_autoupdate_version`](./autoupdate_version.mdx)
-  - [`teleport_cluster_maintenance_config`](./cluster_maintenance_config.mdx)
-  - [`teleport_cluster_networking_config`](./cluster_networking_config.mdx)
-  - [`teleport_database`](./database.mdx)
-  - [`teleport_dynamic_windows_desktop`](./dynamic_windows_desktop.mdx)
-  - [`teleport_github_connector`](./github_connector.mdx)
-  - [`teleport_health_check_config`](./health_check_config.mdx)
-  - [`teleport_installer`](./installer.mdx)
-  - [`teleport_login_rule`](./login_rule.mdx)
-  - [`teleport_oidc_connector`](./oidc_connector.mdx)
-  - [`teleport_okta_import_rule`](./okta_import_rule.mdx)
-  - [`teleport_provision_token`](./provision_token.mdx)
-  - [`teleport_role`](./role.mdx)
-  - [`teleport_saml_connector`](./saml_connector.mdx)
-  - [`teleport_session_recording_config`](./session_recording_config.mdx)
-  - [`teleport_static_host_user`](./static_host_user.mdx)
-  - [`teleport_trusted_cluster`](./trusted_cluster.mdx)
-  - [`teleport_trusted_device`](./trusted_device.mdx)
-  - [`teleport_user`](./user.mdx)
-  - [`teleport_workload_identity`](./workload_identity.mdx)
diff --git a/docs/pages/reference/terraform-provider/data-sources/database.mdx b/docs/pages/reference/terraform-provider/data-sources/database.mdx
deleted file mode 100644
index e00fdf02b8955..0000000000000
--- a/docs/pages/reference/terraform-provider/data-sources/database.mdx
+++ /dev/null
@@ -1,251 +0,0 @@
----
-title: Reference for the teleport_database Terraform data-source
-sidebar_label: database
-description: This page describes the supported values of the teleport_database data-source of the Teleport Terraform provider.
----
-
-{/*Auto-generated file. Do not edit.*/}
-{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/}
-
-
-
-
-
-{/* schema generated by tfplugindocs */}
-## Schema
-
-### Required
-
-- `version` (String) Version is the resource version. It must be specified. Supported values are: `v3`.
-
-### Optional
-
-- `metadata` (Attributes) Metadata is the database metadata. (see [below for nested schema](#nested-schema-for-metadata))
-- `spec` (Attributes) Spec is the database spec. (see [below for nested schema](#nested-schema-for-spec))
-- `sub_kind` (String) SubKind is an optional resource subkind.
-
-### Nested Schema for `metadata`
-
-Required:
-
-- `name` (String) Name is an object name
-
-Optional:
-
-- `description` (String) Description is object description
-- `expires` (String) Expires is a global expiry time header can be set on any resource in the system.
-- `labels` (Map of String) Labels is a set of labels
-
-
-### Nested Schema for `spec`
-
-Required:
-
-- `protocol` (String) Protocol is the database protocol: postgres, mysql, mongodb, etc.
-- `uri` (String) URI is the database connection endpoint.
-
-Optional:
-
-- `ad` (Attributes) AD is the Active Directory configuration for the database. (see [below for nested schema](#nested-schema-for-specad))
-- `admin_user` (Attributes) AdminUser is the database admin user for automatic user provisioning. (see [below for nested schema](#nested-schema-for-specadmin_user))
-- `aws` (Attributes) AWS contains AWS specific settings for RDS/Aurora/Redshift databases. (see [below for nested schema](#nested-schema-for-specaws))
-- `azure` (Attributes) Azure contains Azure specific database metadata. (see [below for nested schema](#nested-schema-for-specazure))
-- `ca_cert` (String) CACert is the PEM-encoded database CA certificate. DEPRECATED: Moved to TLS.CACert. DELETE IN 10.0.
-- `dynamic_labels` (Attributes Map) DynamicLabels is the database dynamic labels. (see [below for nested schema](#nested-schema-for-specdynamic_labels))
-- `gcp` (Attributes) GCP contains parameters specific to GCP Cloud SQL databases. (see [below for nested schema](#nested-schema-for-specgcp))
-- `mongo_atlas` (Attributes) MongoAtlas contains Atlas metadata about the database. (see [below for nested schema](#nested-schema-for-specmongo_atlas))
-- `mysql` (Attributes) MySQL is an additional section with MySQL database options. (see [below for nested schema](#nested-schema-for-specmysql))
-- `oracle` (Attributes) Oracle is an additional Oracle configuration options. (see [below for nested schema](#nested-schema-for-specoracle))
-- `tls` (Attributes) TLS is the TLS configuration used when establishing connection to target database. Allows to provide custom CA cert or override server name. (see [below for nested schema](#nested-schema-for-spectls))
-
-### Nested Schema for `spec.ad`
-
-Optional:
-
-- `domain` (String) Domain is the Active Directory domain the database resides in.
-- `kdc_host_name` (String) KDCHostName is the host name for a KDC for x509 Authentication.
-- `keytab_file` (String) KeytabFile is the path to the Kerberos keytab file.
-- `krb5_file` (String) Krb5File is the path to the Kerberos configuration file. Defaults to /etc/krb5.conf.
-- `ldap_cert` (String) LDAPCert is a certificate from Windows LDAP/AD, optional; only for x509 Authentication.
-- `ldap_service_account_name` (String) LDAPServiceAccountName is the name of service account for performing LDAP queries. Required for x509 Auth / PKINIT.
-- `ldap_service_account_sid` (String) LDAPServiceAccountSID is the SID of service account for performing LDAP queries. Required for x509 Auth / PKINIT.
-- `spn` (String) SPN is the service principal name for the database.
-
-
-### Nested Schema for `spec.admin_user`
-
-Optional:
-
-- `default_database` (String) DefaultDatabase is the database that the privileged database user logs into by default. Depending on the database type, this database may be used to store procedures or data for managing database users.
-- `name` (String) Name is the username of the privileged database user.
-
-
-### Nested Schema for `spec.aws`
-
-Optional:
-
-- `account_id` (String) AccountID is the AWS account ID this database belongs to.
-- `assume_role_arn` (String) AssumeRoleARN is an optional AWS role ARN to assume when accessing a database. Set this field and ExternalID to enable access across AWS accounts.
-- `docdb` (Attributes) DocumentDB contains AWS DocumentDB specific metadata. (see [below for nested schema](#nested-schema-for-specawsdocdb))
-- `elasticache` (Attributes) ElastiCache contains AWS ElastiCache Redis specific metadata. (see [below for nested schema](#nested-schema-for-specawselasticache))
-- `external_id` (String) ExternalID is an optional AWS external ID used to enable assuming an AWS role across accounts.
-- `iam_policy_status` (Number) IAMPolicyStatus indicates whether the IAM Policy is configured properly for database access. If not, the user must update the AWS profile identity to allow access to the Database. Eg for an RDS Database: the underlying AWS profile allows for `rds-db:connect` for the Database.
-- `memorydb` (Attributes) MemoryDB contains AWS MemoryDB specific metadata. (see [below for nested schema](#nested-schema-for-specawsmemorydb))
-- `opensearch` (Attributes) OpenSearch contains AWS OpenSearch specific metadata. (see [below for nested schema](#nested-schema-for-specawsopensearch))
-- `rds` (Attributes) RDS contains RDS specific metadata. (see [below for nested schema](#nested-schema-for-specawsrds))
-- `rdsproxy` (Attributes) RDSProxy contains AWS Proxy specific metadata. (see [below for nested schema](#nested-schema-for-specawsrdsproxy))
-- `redshift` (Attributes) Redshift contains Redshift specific metadata. (see [below for nested schema](#nested-schema-for-specawsredshift))
-- `redshift_serverless` (Attributes) RedshiftServerless contains AWS Redshift Serverless specific metadata. (see [below for nested schema](#nested-schema-for-specawsredshift_serverless))
-- `region` (String) Region is a AWS cloud region.
-- `secret_store` (Attributes) SecretStore contains secret store configurations. (see [below for nested schema](#nested-schema-for-specawssecret_store))
-- `session_tags` (Map of String) SessionTags is a list of AWS STS session tags.
-
-### Nested Schema for `spec.aws.docdb`
-
-Optional:
-
-- `cluster_id` (String) ClusterID is the cluster identifier.
-- `endpoint_type` (String) EndpointType is the type of the endpoint.
-- `instance_id` (String) InstanceID is the instance identifier.
-
-
-### Nested Schema for `spec.aws.elasticache`
-
-Optional:
-
-- `endpoint_type` (String) EndpointType is the type of the endpoint.
-- `replication_group_id` (String) ReplicationGroupID is the Redis replication group ID.
-- `transit_encryption_enabled` (Boolean) TransitEncryptionEnabled indicates whether in-transit encryption (TLS) is enabled.
-- `user_group_ids` (List of String) UserGroupIDs is a list of user group IDs.
-
-
-### Nested Schema for `spec.aws.memorydb`
-
-Optional:
-
-- `acl_name` (String) ACLName is the name of the ACL associated with the cluster.
-- `cluster_name` (String) ClusterName is the name of the MemoryDB cluster.
-- `endpoint_type` (String) EndpointType is the type of the endpoint.
-- `tls_enabled` (Boolean) TLSEnabled indicates whether in-transit encryption (TLS) is enabled.
-
-
-### Nested Schema for `spec.aws.opensearch`
-
-Optional:
-
-- `domain_id` (String) DomainID is the ID of the domain.
-- `domain_name` (String) DomainName is the name of the domain.
-- `endpoint_type` (String) EndpointType is the type of the endpoint.
-
-
-### Nested Schema for `spec.aws.rds`
-
-Optional:
-
-- `cluster_id` (String) ClusterID is the RDS cluster (Aurora) identifier.
-- `iam_auth` (Boolean) IAMAuth indicates whether database IAM authentication is enabled.
-- `instance_id` (String) InstanceID is the RDS instance identifier.
-- `resource_id` (String) ResourceID is the RDS instance resource identifier (db-xxx).
-- `security_groups` (List of String) SecurityGroups is a list of attached security groups for the RDS instance.
-- `subnets` (List of String) Subnets is a list of subnets for the RDS instance.
-- `vpc_id` (String) VPCID is the VPC where the RDS is running.
-
-
-### Nested Schema for `spec.aws.rdsproxy`
-
-Optional:
-
-- `custom_endpoint_name` (String) CustomEndpointName is the identifier of an RDS Proxy custom endpoint.
-- `name` (String) Name is the identifier of an RDS Proxy.
-- `resource_id` (String) ResourceID is the RDS instance resource identifier (prx-xxx).
-
-
-### Nested Schema for `spec.aws.redshift`
-
-Optional:
-
-- `cluster_id` (String) ClusterID is the Redshift cluster identifier.
-
-
-### Nested Schema for `spec.aws.redshift_serverless`
-
-Optional:
-
-- `endpoint_name` (String) EndpointName is the VPC endpoint name.
-- `workgroup_id` (String) WorkgroupID is the workgroup ID.
-- `workgroup_name` (String) WorkgroupName is the workgroup name.
-
-
-### Nested Schema for `spec.aws.secret_store`
-
-Optional:
-
-- `key_prefix` (String) KeyPrefix specifies the secret key prefix.
-- `kms_key_id` (String) KMSKeyID specifies the AWS KMS key for encryption.
-
-
-
-### Nested Schema for `spec.azure`
-
-Optional:
-
-- `is_flexi_server` (Boolean) IsFlexiServer is true if the database is an Azure Flexible server.
-- `name` (String) Name is the Azure database server name.
-- `redis` (Attributes) Redis contains Azure Cache for Redis specific database metadata. (see [below for nested schema](#nested-schema-for-specazureredis))
-- `resource_id` (String) ResourceID is the Azure fully qualified ID for the resource.
-
-### Nested Schema for `spec.azure.redis`
-
-Optional:
-
-- `clustering_policy` (String) ClusteringPolicy is the clustering policy for Redis Enterprise.
-
-
-
-### Nested Schema for `spec.dynamic_labels`
-
-Optional:
-
-- `command` (List of String) Command is a command to run
-- `period` (String) Period is a time between command runs
-- `result` (String) Result captures standard output
-
-
-### Nested Schema for `spec.gcp`
-
-Optional:
-
-- `instance_id` (String) InstanceID is the Cloud SQL instance ID.
-- `project_id` (String) ProjectID is the GCP project ID the Cloud SQL instance resides in.
-
-
-### Nested Schema for `spec.mongo_atlas`
-
-Optional:
-
-- `name` (String) Name is the Atlas database instance name.
-
-
-### Nested Schema for `spec.mysql`
-
-Optional:
-
-- `server_version` (String) ServerVersion is the server version reported by DB proxy if the runtime information is not available.
-
-
-### Nested Schema for `spec.oracle`
-
-Optional:
-
-- `audit_user` (String) AuditUser is the Oracle database user privilege to access internal Oracle audit trail.
-
-
-### Nested Schema for `spec.tls`
-
-Optional:
-
-- `ca_cert` (String) CACert is an optional user provided CA certificate used for verifying database TLS connection.
-- `mode` (Number) Mode is a TLS connection mode. 0 is "verify-full"; 1 is "verify-ca", 2 is "insecure".
-- `server_name` (String) ServerName allows to provide custom hostname. This value will override the servername/hostname on a certificate during validation.
-- `trust_system_cert_pool` (Boolean) TrustSystemCertPool allows Teleport to trust certificate authorities available on the host system. If not set (by default), Teleport only trusts self-signed databases with TLS certificates signed by Teleport's Database Server CA or the ca_cert specified in this TLS setting. For cloud-hosted databases, Teleport downloads the corresponding required CAs for validation.
-
diff --git a/docs/pages/reference/terraform-provider/data-sources/github_connector.mdx b/docs/pages/reference/terraform-provider/data-sources/github_connector.mdx
deleted file mode 100644
index 662a0991480b2..0000000000000
--- a/docs/pages/reference/terraform-provider/data-sources/github_connector.mdx
+++ /dev/null
@@ -1,84 +0,0 @@
----
-title: Reference for the teleport_github_connector Terraform data-source
-sidebar_label: github_connector
-description: This page describes the supported values of the teleport_github_connector data-source of the Teleport Terraform provider.
----
-
-{/*Auto-generated file. Do not edit.*/}
-{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/}
-
-
-
-
-
-{/* schema generated by tfplugindocs */}
-## Schema
-
-### Required
-
-- `spec` (Attributes) Spec is an Github connector specification. (see [below for nested schema](#nested-schema-for-spec))
-- `version` (String) Version is the resource version. It must be specified. Supported values are: `v3`.
-
-### Optional
-
-- `metadata` (Attributes) Metadata holds resource metadata. (see [below for nested schema](#nested-schema-for-metadata))
-- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources.
-
-### Nested Schema for `spec`
-
-Required:
-
-- `client_id` (String) ClientID is the Github OAuth app client ID.
-- `client_secret` (String, Sensitive) ClientSecret is the Github OAuth app client secret.
-
-Optional:
-
-- `api_endpoint_url` (String) APIEndpointURL is the URL of the API endpoint of the Github instance this connector is for.
-- `client_redirect_settings` (Attributes) ClientRedirectSettings defines which client redirect URLs are allowed for non-browser SSO logins other than the standard localhost ones. (see [below for nested schema](#nested-schema-for-specclient_redirect_settings))
-- `display` (String) Display is the connector display name.
-- `endpoint_url` (String) EndpointURL is the URL of the GitHub instance this connector is for.
-- `redirect_url` (String) RedirectURL is the authorization callback URL.
-- `teams_to_logins` (Attributes List) TeamsToLogins maps Github team memberships onto allowed logins/roles. DELETE IN 11.0.0 Deprecated: use GithubTeamsToRoles instead. (see [below for nested schema](#nested-schema-for-specteams_to_logins))
-- `teams_to_roles` (Attributes List) TeamsToRoles maps Github team memberships onto allowed roles. (see [below for nested schema](#nested-schema-for-specteams_to_roles))
-
-### Nested Schema for `spec.client_redirect_settings`
-
-Optional:
-
-- `allowed_https_hostnames` (List of String) a list of hostnames allowed for https client redirect URLs
-- `insecure_allowed_cidr_ranges` (List of String) a list of CIDRs allowed for HTTP or HTTPS client redirect URLs
-
-
-### Nested Schema for `spec.teams_to_logins`
-
-Optional:
-
-- `kubernetes_groups` (List of String) KubeGroups is a list of allowed kubernetes groups for this org/team.
-- `kubernetes_users` (List of String) KubeUsers is a list of allowed kubernetes users to impersonate for this org/team.
-- `logins` (List of String) Logins is a list of allowed logins for this org/team.
-- `organization` (String) Organization is a Github organization a user belongs to.
-- `team` (String) Team is a team within the organization a user belongs to.
-
-
-### Nested Schema for `spec.teams_to_roles`
-
-Optional:
-
-- `organization` (String) Organization is a Github organization a user belongs to.
-- `roles` (List of String) Roles is a list of allowed logins for this org/team.
-- `team` (String) Team is a team within the organization a user belongs to.
-
-
-
-### Nested Schema for `metadata`
-
-Required:
-
-- `name` (String) Name is an object name
-
-Optional:
-
-- `description` (String) Description is object description
-- `expires` (String) Expires is a global expiry time header can be set on any resource in the system.
-- `labels` (Map of String) Labels is a set of labels - diff --git a/docs/pages/reference/terraform-provider/data-sources/health_check_config.mdx b/docs/pages/reference/terraform-provider/data-sources/health_check_config.mdx deleted file mode 100644 index bdd5769e5d3c5..0000000000000 --- a/docs/pages/reference/terraform-provider/data-sources/health_check_config.mdx +++ /dev/null @@ -1,66 +0,0 @@ ---- -title: Reference for the teleport_health_check_config Terraform data-source -sidebar_label: health_check_config -description: This page describes the supported values of the teleport_health_check_config data-source of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - - - - - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `metadata` (Attributes) Metadata is the health check config resource's metadata. (see [below for nested schema](#nested-schema-for-metadata)) -- `spec` (Attributes) Spec is the health check config specification. (see [below for nested schema](#nested-schema-for-spec)) -- `version` (String) Version is the health check config version. - -### Optional - -- `sub_kind` (String) SubKind is an optional resource sub kind. - -### Nested Schema for `metadata` - -Required: - -- `name` (String) name is an object name. - -Optional: - -- `description` (String) description is object description. -- `expires` (String) expires is a global expiry time header can be set on any resource in the system. -- `labels` (Map of String) labels is a set of labels. - - -### Nested Schema for `spec` - -Required: - -- `match` (Attributes) Match is used to select resources that these settings apply to. (see [below for nested schema](#nested-schema-for-specmatch)) - -Optional: - -- `healthy_threshold` (Number) HealthyThreshold is the number of consecutive passing health checks after which a target's health status becomes "healthy". 
-- `interval` (String) Interval is the time between each health check. -- `timeout` (String) Timeout is the health check connection establishment timeout. An attempt that times out is a failed attempt. -- `unhealthy_threshold` (Number) UnhealthyThreshold is the number of consecutive failing health checks after which a target's health status becomes "unhealthy". - -### Nested Schema for `spec.match` - -Optional: - -- `db_labels` (Attributes List) DBLabels matches database labels. An empty value is ignored. The match result is logically ANDed with DBLabelsExpression, if both are non-empty. (see [below for nested schema](#nested-schema-for-specmatchdb_labels)) -- `db_labels_expression` (String) DBLabelsExpression is a label predicate expression to match databases. An empty value is ignored. The match result is logically ANDed with DBLabels, if both are non-empty. - -### Nested Schema for `spec.match.db_labels` - -Optional: - -- `name` (String) The name of the label. -- `values` (List of String) The values associated with the label. - diff --git a/docs/pages/reference/terraform-provider/data-sources/login_rule.mdx b/docs/pages/reference/terraform-provider/data-sources/login_rule.mdx deleted file mode 100644 index 0456955695f92..0000000000000 --- a/docs/pages/reference/terraform-provider/data-sources/login_rule.mdx +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: Reference for the teleport_login_rule Terraform data-source -sidebar_label: login_rule -description: This page describes the supported values of the teleport_login_rule data-source of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - - - - - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `priority` (Number) Priority is the priority of the login rule relative to other login rules in the same cluster. Login rules with a lower numbered priority will be evaluated first. 
-- `version` (String) Version is the resource version. - -### Optional - -- `metadata` (Attributes) Metadata is resource metadata. (see [below for nested schema](#nested-schema-for-metadata)) -- `traits_expression` (String) TraitsExpression is a predicate expression which should return the desired traits for the user upon login. -- `traits_map` (Attributes Map) TraitsMap is a map of trait keys to lists of predicate expressions which should evaluate to the desired values for that trait. (see [below for nested schema](#nested-schema-for-traits_map)) - -### Nested Schema for `metadata` - -Required: - -- `name` (String) Name is an object name - -Optional: - -- `description` (String) Description is object description -- `expires` (String) Expires is a global expiry time header can be set on any resource in the system. -- `labels` (Map of String) Labels is a set of labels - - -### Nested Schema for `traits_map` - -Optional: - -- `values` (List of String) - diff --git a/docs/pages/reference/terraform-provider/data-sources/oidc_connector.mdx b/docs/pages/reference/terraform-provider/data-sources/oidc_connector.mdx deleted file mode 100644 index 3c05ed35fc097..0000000000000 --- a/docs/pages/reference/terraform-provider/data-sources/oidc_connector.mdx +++ /dev/null @@ -1,92 +0,0 @@ ---- -title: Reference for the teleport_oidc_connector Terraform data-source -sidebar_label: oidc_connector -description: This page describes the supported values of the teleport_oidc_connector data-source of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - - - - - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `spec` (Attributes) Spec is an OIDC connector specification. (see [below for nested schema](#nested-schema-for-spec)) -- `version` (String) Version is the resource version. It must be specified. Supported values are: `v3`. 
- -### Optional - -- `metadata` (Attributes) Metadata holds resource metadata. (see [below for nested schema](#nested-schema-for-metadata)) -- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources. - -### Nested Schema for `spec` - -Optional: - -- `acr_values` (String) ACR is an Authentication Context Class Reference value. The meaning of the ACR value is context-specific and varies for identity providers. -- `allow_unverified_email` (Boolean) AllowUnverifiedEmail tells the connector to accept OIDC users with unverified emails. -- `claims_to_roles` (Attributes List) ClaimsToRoles specifies a dynamic mapping from claims to roles. (see [below for nested schema](#nested-schema-for-specclaims_to_roles)) -- `client_id` (String) ClientID is the id of the authentication client (Teleport Auth Service). -- `client_redirect_settings` (Attributes) ClientRedirectSettings defines which client redirect URLs are allowed for non-browser SSO logins other than the standard localhost ones. (see [below for nested schema](#nested-schema-for-specclient_redirect_settings)) -- `client_secret` (String, Sensitive) ClientSecret is used to authenticate the client. -- `display` (String) Display is the friendly name for this provider. -- `google_admin_email` (String) GoogleAdminEmail is the email of a google admin to impersonate. -- `google_service_account` (String, Sensitive) GoogleServiceAccount is a string containing google service account credentials. -- `google_service_account_uri` (String) GoogleServiceAccountURI is a path to a google service account uri. -- `issuer_url` (String) IssuerURL is the endpoint of the provider, e.g. https://accounts.google.com. -- `max_age` (String) -- `mfa` (Attributes) MFASettings contains settings to enable SSO MFA checks through this auth connector. (see [below for nested schema](#nested-schema-for-specmfa)) -- `pkce_mode` (String) PKCEMode represents the configuration state for PKCE (Proof Key for Code Exchange). 
It can be "enabled" or "disabled" -- `prompt` (String) Prompt is an optional OIDC prompt. An empty string omits prompt. If not specified, it defaults to select_account for backwards compatibility. -- `provider` (String) Provider is the external identity provider. -- `redirect_url` (List of String) RedirectURLs is a list of callback URLs which the identity provider can use to redirect the client back to the Teleport Proxy to complete authentication. This list should match the URLs on the provider's side. The URL used for a given auth request will be chosen to match the requesting Proxy's public address. If there is no match, the first url in the list will be used. -- `scope` (List of String) Scope specifies additional scopes set by provider. -- `username_claim` (String) UsernameClaim specifies the name of the claim from the OIDC connector to be used as the user's username. - -### Nested Schema for `spec.claims_to_roles` - -Optional: - -- `claim` (String) Claim is a claim name. -- `roles` (List of String) Roles is a list of static teleport roles to match. -- `value` (String) Value is a claim value to match. - - -### Nested Schema for `spec.client_redirect_settings` - -Optional: - -- `allowed_https_hostnames` (List of String) a list of hostnames allowed for https client redirect URLs -- `insecure_allowed_cidr_ranges` (List of String) a list of CIDRs allowed for HTTP or HTTPS client redirect URLs - - -### Nested Schema for `spec.mfa` - -Optional: - -- `acr_values` (String) AcrValues are Authentication Context Class Reference values. The meaning of the ACR value is context-specific and varies for identity providers. Some identity providers support MFA specific contexts, such as Okta with its "phr" (phishing-resistant) ACR. -- `client_id` (String) ClientID is the OIDC OAuth app client ID. -- `client_secret` (String) ClientSecret is the OIDC OAuth app client secret. -- `enabled` (Boolean) Enabled specifies whether this OIDC connector supports MFA checks. Defaults to false.
-- `max_age` (String) MaxAge is the amount of time in nanoseconds that an IdP session is valid for. Defaults to 0 to always force re-authentication for MFA checks. This should only be set to a non-zero value if the IdP is set up to perform MFA checks on top of active user sessions. -- `prompt` (String) Prompt is an optional OIDC prompt. An empty string omits prompt. If not specified, it defaults to select_account for backwards compatibility. - - - -### Nested Schema for `metadata` - -Required: - -- `name` (String) Name is an object name - -Optional: - -- `description` (String) Description is object description -- `expires` (String) Expires is a global expiry time header can be set on any resource in the system. -- `labels` (Map of String) Labels is a set of labels - diff --git a/docs/pages/reference/terraform-provider/data-sources/provision_token.mdx b/docs/pages/reference/terraform-provider/data-sources/provision_token.mdx deleted file mode 100644 index 9e442133ac75c..0000000000000 --- a/docs/pages/reference/terraform-provider/data-sources/provision_token.mdx +++ /dev/null @@ -1,358 +0,0 @@ ---- -title: Reference for the teleport_provision_token Terraform data-source -sidebar_label: provision_token -description: This page describes the supported values of the teleport_provision_token data-source of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - - - - - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `spec` (Attributes) Spec is a provisioning token V2 spec (see [below for nested schema](#nested-schema-for-spec)) -- `version` (String) Version is the resource version. It must be specified. Supported values are: `v2`.
- -### Optional - -- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) -- `status` (Attributes) Status is extended status information, depending on token type. It is not user writable. (see [below for nested schema](#nested-schema-for-status)) -- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources - -### Nested Schema for `spec` - -Required: - -- `roles` (List of String) Roles is a list of roles associated with the token, that will be converted to metadata in the SSH and X509 certificates issued to the user of the token - -Optional: - -- `allow` (Attributes List) Allow is a list of TokenRules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specallow)) -- `aws_iid_ttl` (String) AWSIIDTTL is the TTL to use for AWS EC2 Instance Identity Documents used to join the cluster with this token. -- `azure` (Attributes) Azure allows the configuration of options specific to the "azure" join method. (see [below for nested schema](#nested-schema-for-specazure)) -- `azure_devops` (Attributes) AzureDevops allows the configuration of options specific to the "azure_devops" join method. (see [below for nested schema](#nested-schema-for-specazure_devops)) -- `bitbucket` (Attributes) Bitbucket allows the configuration of options specific to the "bitbucket" join method. (see [below for nested schema](#nested-schema-for-specbitbucket)) -- `bot_name` (String) BotName is the name of the bot this token grants access to, if any -- `bound_keypair` (Attributes) BoundKeypair allows the configuration of options specific to the "bound_keypair" join method. (see [below for nested schema](#nested-schema-for-specbound_keypair)) -- `circleci` (Attributes) CircleCI allows the configuration of options specific to the "circleci" join method. 
(see [below for nested schema](#nested-schema-for-speccircleci)) -- `gcp` (Attributes) GCP allows the configuration of options specific to the "gcp" join method. (see [below for nested schema](#nested-schema-for-specgcp)) -- `github` (Attributes) GitHub allows the configuration of options specific to the "github" join method. (see [below for nested schema](#nested-schema-for-specgithub)) -- `gitlab` (Attributes) GitLab allows the configuration of options specific to the "gitlab" join method. (see [below for nested schema](#nested-schema-for-specgitlab)) -- `join_method` (String) JoinMethod is the joining method required in order to use this token. Supported joining methods include: azure, circleci, ec2, gcp, github, gitlab, iam, kubernetes, spacelift, token, tpm -- `kubernetes` (Attributes) Kubernetes allows the configuration of options specific to the "kubernetes" join method. (see [below for nested schema](#nested-schema-for-speckubernetes)) -- `oracle` (Attributes) Oracle allows the configuration of options specific to the "oracle" join method. (see [below for nested schema](#nested-schema-for-specoracle)) -- `spacelift` (Attributes) Spacelift allows the configuration of options specific to the "spacelift" join method. (see [below for nested schema](#nested-schema-for-specspacelift)) -- `suggested_agent_matcher_labels` (Map of List of String) SuggestedAgentMatcherLabels is a set of labels to be used by agents to match on resources. When an agent uses this token, the agent should monitor resources that match those labels. For databases, this means adding the labels to `db_service.resources.labels`. Currently, only node-join scripts create a configuration according to the suggestion. -- `suggested_labels` (Map of List of String) SuggestedLabels is a set of labels that resources should set when using this token to enroll themselves in the cluster. Currently, only node-join scripts create a configuration according to the suggestion. 
-- `terraform_cloud` (Attributes) TerraformCloud allows the configuration of options specific to the "terraform_cloud" join method. (see [below for nested schema](#nested-schema-for-specterraform_cloud)) -- `tpm` (Attributes) TPM allows the configuration of options specific to the "tpm" join method. (see [below for nested schema](#nested-schema-for-spectpm)) - -### Nested Schema for `spec.allow` - -Optional: - -- `aws_account` (String) AWSAccount is the AWS account ID. -- `aws_arn` (String) AWSARN is used for the IAM join method, the AWS identity of joining nodes must match this ARN. Supports wildcards "*" and "?". -- `aws_regions` (List of String) AWSRegions is used for the EC2 join method and is a list of AWS regions a node is allowed to join from. -- `aws_role` (String) AWSRole is used for the EC2 join method and is the ARN of the AWS role that the Auth Service will assume in order to call the ec2 API. - - -### Nested Schema for `spec.azure` - -Optional: - -- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specazureallow)) - -### Nested Schema for `spec.azure.allow` - -Optional: - -- `resource_groups` (List of String) ResourceGroups is a list of Azure resource groups the node is allowed to join from. -- `subscription` (String) Subscription is the Azure subscription. - - - -### Nested Schema for `spec.azure_devops` - -Optional: - -- `allow` (Attributes List) Allow is a list of TokenRules, nodes using this token must match one allow rule to use this token. At least one allow rule must be specified. (see [below for nested schema](#nested-schema-for-specazure_devopsallow)) -- `organization_id` (String) OrganizationID specifies the UUID of the Azure DevOps organization that this join token will grant access to. This is used to identify the correct issuer for verification of the ID token. This is a required field.
- -### Nested Schema for `spec.azure_devops.allow` - -Optional: - -- `definition_id` (String) The ID of the AZDO pipeline definition. Example: `1` Mapped from the `def_id` claim. -- `pipeline_name` (String) The name of the AZDO pipeline. Example: `my-pipeline`. Mapped out of the `sub` claim. -- `project_id` (String) The ID of the AZDO project. Example: `271ef6f7-0000-0000-0000-4b54d9129990` Mapped from the `prj_id` claim. -- `project_name` (String) The name of the AZDO project. Example: `my-project`. Mapped out of the `sub` claim. -- `repository_ref` (String) The reference of the repository the pipeline is using. Example: `refs/heads/main`. Mapped from the `rpo_ref` claim. -- `repository_uri` (String) The URI of the repository the pipeline is using. Example: `https://github.com/gravitational/teleport.git`. Mapped from the `rpo_uri` claim. -- `repository_version` (String) The individual commit of the repository the pipeline is using. Example: `e6b9eb29a288b27a3a82cc19c48b9d94b80aff36`. Mapped from the `rpo_ver` claim. -- `sub` (String) Sub also known as Subject is a string that roughly uniquely identifies the workload. Example: `p://my-organization/my-project/my-pipeline` Mapped from the `sub` claim. - - - -### Nested Schema for `spec.bitbucket` - -Optional: - -- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specbitbucketallow)) -- `audience` (String) Audience is a Bitbucket-specified audience value for this token. It is unique to each Bitbucket repository, and must be set to the value as written in the Pipelines -> OpenID Connect section of the repository settings. -- `identity_provider_url` (String) IdentityProviderURL is a Bitbucket-specified issuer URL for incoming OIDC tokens. It is unique to each Bitbucket repository, and must be set to the value as written in the Pipelines -> OpenID Connect section of the repository settings.
- -### Nested Schema for `spec.bitbucket.allow` - -Optional: - -- `branch_name` (String) BranchName is the name of the branch on which this pipeline executed. -- `deployment_environment_uuid` (String) DeploymentEnvironmentUUID is the UUID of the deployment environment targeted by this pipeline's run, if any. These values may be found in the "Pipelines -> OpenID Connect -> Deployment environments" section of the repository settings. -- `repository_uuid` (String) RepositoryUUID is the UUID of the repository for which this token was issued. Bitbucket UUIDs must begin and end with braces, e.g. `{...}`. This value may be found in the Pipelines -> OpenID Connect section of the repository settings. -- `workspace_uuid` (String) WorkspaceUUID is the UUID of the workspace for which this token was issued. Bitbucket UUIDs must begin and end with braces, e.g. `{...}`. This value may be found in the Pipelines -> OpenID Connect section of the repository settings. - - - -### Nested Schema for `spec.bound_keypair` - -Optional: - -- `onboarding` (Attributes) Onboarding contains parameters related to initial onboarding and keypair registration. (see [below for nested schema](#nested-schema-for-specbound_keypaironboarding)) -- `recovery` (Attributes) Recovery contains parameters related to recovery after identity expiration. (see [below for nested schema](#nested-schema-for-specbound_keypairrecovery)) -- `rotate_after` (String) RotateAfter is an optional timestamp that forces clients to perform a keypair rotation on the next join or recovery attempt after the given date. If `LastRotatedAt` is unset or before this timestamp, a rotation will be requested. It is recommended to set this value to the current timestamp if a rotation should be triggered on the next join attempt. - -### Nested Schema for `spec.bound_keypair.onboarding` - -Optional: - -- `initial_public_key` (String) InitialPublicKey is used to preregister a public key generated by `tbot keypair create`.
When set, no initial join secret is generated or made available for use, and clients must have the associated private key available to join. If set, `initial_join_secret` and `must_register_before` are ignored. This value is written in SSH authorized_keys format. -- `must_register_before` (String) MustRegisterBefore is an optional time before which registration via initial join secret must be performed. Attempts to register using an initial join secret after this timestamp will not be allowed. This may be modified after creation if necessary to allow the initial registration to take place. This value is ignored if `initial_public_key` is set. -- `registration_secret` (String) RegistrationSecret is a secret joining clients may use to register their public key on first join, which may be used instead of preregistering a public key with `initial_public_key`. If `initial_public_key` is set, this value is ignored. Otherwise, if set, this value will be used to populate `.status.bound_keypair.intitial_join_secret`. If unset and no `initial_public_key` is provided, a random secure value will be generated server-side to populate the status field. - - -### Nested Schema for `spec.bound_keypair.recovery` - -Optional: - -- `limit` (Number) Limit is the maximum number of allowed recovery attempts. This value may be raised or lowered after creation to allow additional recovery attempts should the initial limit be exhausted. If `mode` is set to `standard`, recovery attempts will only be allowed if `.status.bound_keypair.recovery_count` is less than this limit. This limit is not enforced if `mode` is set to `relaxed` or `insecure`. This value must be at least 1 to allow for the initial join during onboarding, which counts as a recovery. -- `mode` (String) Mode sets the recovery rule enforcement mode. It may be one of these values: - standard (or unset): all configured rules enforced. The recovery limit and client join state are required and verified.
This is the most secure recovery mode. - relaxed: recovery limit is not enforced, but client join state is still required. This effectively allows unlimited recovery attempts, but client join state still helps mitigate stolen credentials. - insecure: neither the recovery limit nor client join state are enforced. This allows any client with the private key to join freely. This is less secure, but can be useful in certain situations, like in otherwise unsupported CI/CD providers. This mode should be used with care, and RBAC rules should be configured to heavily restrict which resources this identity can access. - - - -### Nested Schema for `spec.circleci` - -Optional: - -- `allow` (Attributes List) Allow is a list of TokenRules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-speccircleciallow)) -- `organization_id` (String) - -### Nested Schema for `spec.circleci.allow` - -Optional: - -- `context_id` (String) -- `project_id` (String) - - - -### Nested Schema for `spec.gcp` - -Optional: - -- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specgcpallow)) - -### Nested Schema for `spec.gcp.allow` - -Optional: - -- `locations` (List of String) Locations is a list of regions (e.g. "us-west1") and/or zones (e.g. "us-west1-b"). -- `project_ids` (List of String) ProjectIDs is a list of project IDs (e.g. ``). -- `service_accounts` (List of String) ServiceAccounts is a list of service account emails (e.g. `-compute@developer.gserviceaccount.com`). - - - -### Nested Schema for `spec.github` - -Optional: - -- `allow` (Attributes List) Allow is a list of TokenRules, nodes using this token must match one allow rule to use this token. 
(see [below for nested schema](#nested-schema-for-specgithuballow)) -- `enterprise_server_host` (String) EnterpriseServerHost allows joining from runners associated with a GitHub Enterprise Server instance. When unconfigured, tokens will be validated against github.com, but when configured to the host of a GHES instance, then the tokens will be validated against that host. This value should be the hostname of the GHES instance, and should not include the scheme or a path. The instance must be accessible over HTTPS at this hostname and the certificate must be trusted by the Auth Service. -- `enterprise_slug` (String) EnterpriseSlug allows the slug of a GitHub Enterprise organisation to be included in the expected issuer of the OIDC tokens. This is for compatibility with the `include_enterprise_slug` option in GHE. This field should be set to the slug of your enterprise if this is enabled. If this is not enabled, then this field must be left empty. This field cannot be specified if `enterprise_server_host` is specified. See https://docs.github.com/en/enterprise-cloud@latest/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect#customizing-the-issuer-value-for-an-enterprise for more information about customized issuer values. -- `static_jwks` (String) StaticJWKS disables fetching of the GHES signing keys via the JWKS/OIDC endpoints, and allows them to be directly specified. This allows joining from GitHub Actions in GHES instances that are not reachable by the Teleport Auth Service. - -### Nested Schema for `spec.github.allow` - -Optional: - -- `actor` (String) The personal account that initiated the workflow run. -- `environment` (String) The name of the environment used by the job. -- `ref` (String) The git ref that triggered the workflow run. -- `ref_type` (String) The type of ref, for example: "branch". -- `repository` (String) The repository from where the workflow is running.
This includes the name of the owner e.g `gravitational/teleport` -- `repository_owner` (String) The name of the organization in which the repository is stored. -- `sub` (String) Sub also known as Subject is a string that roughly uniquely identifies the workload. The format of this varies depending on the type of github action run. -- `workflow` (String) The name of the workflow. - - - -### Nested Schema for `spec.gitlab` - -Optional: - -- `allow` (Attributes List) Allow is a list of TokenRules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specgitlaballow)) -- `domain` (String) Domain is the domain of your GitLab instance. This will default to `gitlab.com` - but can be set to the domain of your self-hosted GitLab e.g `gitlab.example.com`. -- `static_jwks` (String) StaticJWKS disables fetching of the GitLab signing keys via the JWKS/OIDC endpoints, and allows them to be directly specified. This allows joining from GitLab CI instances that are not reachable by the Teleport Auth Service. - -### Nested Schema for `spec.gitlab.allow` - -Optional: - -- `ci_config_ref_uri` (String) CIConfigRefURI is the ref path to the top-level pipeline definition, for example, gitlab.example.com/my-group/my-project//.gitlab-ci.yml@refs/heads/main. -- `ci_config_sha` (String) CIConfigSHA is the git commit SHA for the ci_config_ref_uri. -- `deployment_tier` (String) DeploymentTier is the deployment tier of the environment the job specifies -- `environment` (String) Environment limits access by the environment the job deploys to (if one is associated) -- `environment_protected` (Boolean) EnvironmentProtected is true if the Git ref is protected, false otherwise. -- `namespace_path` (String) NamespacePath is used to limit access to jobs in a group or user's projects. Example: `mygroup` This field supports "glob-style" matching: - Use '*' to match zero or more characters. - Use '?' to match any single character. 
-- `pipeline_source` (String) PipelineSource limits access by the job pipeline source type. https://docs.gitlab.com/ee/ci/jobs/job_control.html#common-if-clauses-for-rules Example: `web` -- `project_path` (String) ProjectPath is used to limit access to jobs belonging to an individual project. Example: `mygroup/myproject` This field supports "glob-style" matching: - Use '*' to match zero or more characters. - Use '?' to match any single character. -- `project_visibility` (String) ProjectVisibility is the visibility of the project where the pipeline is running. Can be internal, private, or public. -- `ref` (String) Ref allows access to be limited to jobs triggered by a specific git ref. Ensure this is used in combination with ref_type. This field supports "glob-style" matching: - Use '*' to match zero or more characters. - Use '?' to match any single character. -- `ref_protected` (Boolean) RefProtected is true if the Git ref is protected, false otherwise. -- `ref_type` (String) RefType allows access to be limited to jobs triggered by a specific git ref type. Example: `branch` or `tag` -- `sub` (String) Sub roughly uniquely identifies the workload. Example: `project_path:mygroup/my-project:ref_type:branch:ref:main` project_path:GROUP/PROJECT:ref_type:TYPE:ref:BRANCH_NAME This field supports "glob-style" matching: - Use '*' to match zero or more characters. - Use '?' to match any single character. -- `user_email` (String) UserEmail is the email of the user executing the job -- `user_id` (String) UserID is the ID of the user executing the job -- `user_login` (String) UserLogin is the username of the user executing the job - - - -### Nested Schema for `spec.kubernetes` - -Optional: - -- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-speckubernetesallow)) -- `static_jwks` (Attributes) StaticJWKS is the configuration specific to the `static_jwks` type. 
(see [below for nested schema](#nested-schema-for-speckubernetesstatic_jwks)) -- `type` (String) Type controls which behavior should be used for validating the Kubernetes Service Account token. Supported values: - `in_cluster` - `static_jwks` If unset, this defaults to `in_cluster`. - -### Nested Schema for `spec.kubernetes.allow` - -Optional: - -- `service_account` (String) ServiceAccount is the namespaced name of the Kubernetes service account. Its format is "namespace:service-account". - - -### Nested Schema for `spec.kubernetes.static_jwks` - -Optional: - -- `jwks` (String) JWKS should be the JSON Web Key Set formatted public keys that the Kubernetes Cluster uses to sign service account tokens. This can be fetched from /openid/v1/jwks on the Kubernetes API Server. - - - -### Nested Schema for `spec.oracle` - -Optional: - -- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specoracleallow)) - -### Nested Schema for `spec.oracle.allow` - -Optional: - -- `parent_compartments` (List of String) ParentCompartments is a list of the OCIDs of compartments an instance is allowed to join from. Only direct parents are allowed, i.e. no nested compartments. If empty, any compartment is allowed. -- `regions` (List of String) Regions is a list of regions an instance is allowed to join from. Both full region names ("us-phoenix-1") and abbreviations ("phx") are allowed. If empty, any region is allowed. -- `tenancy` (String) Tenancy is the OCID of the instance's tenancy. Required. - - - -### Nested Schema for `spec.spacelift` - -Optional: - -- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specspaceliftallow)) -- `hostname` (String) Hostname is the hostname of the Spacelift tenant that tokens will originate from.
E.g. `example.app.spacelift.io` - -### Nested Schema for `spec.spacelift.allow` - -Optional: - -- `caller_id` (String) CallerID is the ID of the caller, i.e. the stack or module that generated the run. -- `caller_type` (String) CallerType is the type of the caller, i.e. the entity that owns the run - either `stack` or `module`. -- `scope` (String) Scope is the scope of the token - either `read` or `write`. See https://docs.spacelift.io/integrations/cloud-providers/oidc/#about-scopes -- `space_id` (String) SpaceID is the ID of the space in which the run that owns the token was executed. - - - -### Nested Schema for `spec.terraform_cloud` - -Optional: - -- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specterraform_cloudallow)) -- `audience` (String) Audience is the JWT audience as configured in the TFC_WORKLOAD_IDENTITY_AUDIENCE(_$TAG) variable in Terraform Cloud. If unset, defaults to the Teleport cluster name. For example, if `TFC_WORKLOAD_IDENTITY_AUDIENCE_TELEPORT=foo` is set in Terraform Cloud, this value should be `foo`. If the variable is set to match the cluster name, it does not need to be set here. -- `hostname` (String) Hostname is the hostname of the Terraform Enterprise instance expected to issue JWTs allowed by this token. This may be unset for regular Terraform Cloud use, in which case it will be assumed to be `app.terraform.io`. Otherwise, it must both match the `iss` (issuer) field included in JWTs, and provide standard JWKS endpoints. - -### Nested Schema for `spec.terraform_cloud.allow` - -Optional: - -- `organization_id` (String) OrganizationID is the ID of the HCP Terraform organization. At least one organization value is required, either ID or name. -- `organization_name` (String) OrganizationName is the human-readable name of the HCP Terraform organization. At least one organization value is required, either ID or name.
-- `project_id` (String) ProjectID is the ID of the HCP Terraform project. At least one project or workspace value is required, either ID or name. -- `project_name` (String) ProjectName is the human-readable name for the HCP Terraform project. At least one project or workspace value is required, either ID or name. -- `run_phase` (String) RunPhase is the phase of the run the token was issued for, e.g. `plan` or `apply` -- `workspace_id` (String) WorkspaceID is the ID of the HCP Terraform workspace. At least one project or workspace value is required, either ID or name. -- `workspace_name` (String) WorkspaceName is the human-readable name of the HCP Terraform workspace. At least one project or workspace value is required, either ID or name. - - - -### Nested Schema for `spec.tpm` - -Optional: - -- `allow` (Attributes List) Allow is a list of Rules, the presented delegated identity must match one allow rule to permit joining. (see [below for nested schema](#nested-schema-for-spectpmallow)) -- `ekcert_allowed_cas` (List of String) EKCertAllowedCAs is a list of CA certificates that will be used to validate TPM EKCerts. When specified, joining TPMs must present an EKCert signed by one of the specified CAs. TPMs that do not present an EKCert will not be permitted to join. When unspecified, TPMs will be allowed to join with either an EKCert or an EKPubHash. - -### Nested Schema for `spec.tpm.allow` - -Optional: - -- `description` (String) Description is a human-readable description of the rule. It has no bearing on whether or not a TPM is allowed to join, but can be used to associate a rule with a specific host (e.g. the asset tag of the server in which the TPM resides). Example: "build-server-100" -- `ek_certificate_serial` (String) EKCertificateSerial is the serial number of the EKCert in hexadecimal with colon-separated nibbles. This value will not be checked when a TPM does not have an EKCert configured.
Example: 73:df:dc:bd:af:ef:8a:d8:15:2e:96:71:7a:3e:7f:a4 -- `ek_public_hash` (String) EKPublicHash is the SHA256 hash of the EKPub marshaled in PKIX format and encoded in hexadecimal. This value will also be checked when a TPM has submitted an EKCert, and the public key in the EKCert will be used for this check. Example: d4b45864d9d6fabfc568d74f26c35ababde2105337d7af9a6605e1c56c891aa6 - - - - -### Nested Schema for `metadata` - -Optional: - -- `description` (String) Description is object description -- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. -- `labels` (Map of String) Labels is a set of labels -- `name` (String, Sensitive) Name is an object name - - -### Nested Schema for `status` - -Optional: - -- `bound_keypair` (Attributes) BoundKeypair contains status information related to bound_keypair type tokens. (see [below for nested schema](#nested-schema-for-statusbound_keypair)) - -### Nested Schema for `status.bound_keypair` - -Optional: - -- `bound_bot_instance_id` (String) BoundBotInstanceID is the ID of the currently associated bot instance. A new bot instance is issued on each join; the new bot instance will have a `previous_bot_instance` set to this value, if any. -- `bound_public_key` (String) BoundPublicKey contains the currently bound public key. If `.spec.bound_keypair.onboarding.initial_public_key` is set, that value will be copied here on creation, otherwise it will be populated as part of the public key registration process. This value will be updated over time if keypair rotation takes place, and will always reflect the currently trusted public key. This value is written in SSH authorized_keys format. -- `last_recovered_at` (String) LastRecoveredAt contains a timestamp of the last successful recovery attempt. Note that normal renewals do not count as a recovery attempt, however onboarding does, either with a preregistered key or registration secret.
This corresponds with the last time `bound_bot_instance_id` was updated. -- `last_rotated_at` (String) LastRotatedAt contains a timestamp of the last time the keypair was rotated, if any. This is not set at initial join. -- `recovery_count` (Number) RecoveryCount is a count of the total number of recoveries performed using this token. It is incremented for every successful join or rejoin. Recovery is only allowed if this value is less than `.spec.bound_keypair.recovery.limit`, or if the recovery mode is `relaxed` or `insecure`. -- `registration_secret` (String) RegistrationSecret contains a secret value that may be used for public key registration during the initial join process if no public key is preregistered. If `.spec.bound_keypair.onboarding.initial_public_key` is set, this field will remain empty. Otherwise, if `.spec.bound_keypair.onboarding.registration_secret` is set, that value will be copied here. If that field is unset, a value will be randomly generated. - diff --git a/docs/pages/reference/terraform-provider/data-sources/role.mdx b/docs/pages/reference/terraform-provider/data-sources/role.mdx deleted file mode 100644 index 594ce5ccb8d6c..0000000000000 --- a/docs/pages/reference/terraform-provider/data-sources/role.mdx +++ /dev/null @@ -1,536 +0,0 @@ ---- -title: Reference for the teleport_role Terraform data-source -sidebar_label: role -description: This page describes the supported values of the teleport_role data-source of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - - - - - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `version` (String) Version is the resource version. It must be specified. Supported values are: `v3`, `v4`, `v5`, `v6`, `v7`.
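The `teleport_role` schema above can be exercised with a minimal `data` block. A usage sketch, assuming the provider is already configured; the role name `access` is illustrative and should be replaced with a role that exists in your cluster:

```hcl
# Minimal usage sketch for the teleport_role data source.
# "access" is an illustrative role name; substitute one that
# exists in your cluster. version is the only required argument,
# and metadata.name identifies the role to read.
data "teleport_role" "example" {
  version = "v7"
  metadata = {
    name = "access"
  }
}
```

Once read, the role's nested `spec` attributes can be referenced elsewhere in the configuration, e.g. `data.teleport_role.example.spec.allow.logins`.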
- -### Optional - -- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) -- `spec` (Attributes) Spec is a role specification (see [below for nested schema](#nested-schema-for-spec)) -- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources - -### Nested Schema for `metadata` - -Required: - -- `name` (String) Name is an object name - -Optional: - -- `description` (String) Description is object description -- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. -- `labels` (Map of String) Labels is a set of labels - - -### Nested Schema for `spec` - -Optional: - -- `allow` (Attributes) Allow is the set of conditions evaluated to grant access. (see [below for nested schema](#nested-schema-for-specallow)) -- `deny` (Attributes) Deny is the set of conditions evaluated to deny access. Deny takes priority over allow. (see [below for nested schema](#nested-schema-for-specdeny)) -- `options` (Attributes) Options is for OpenSSH options like agent forwarding. (see [below for nested schema](#nested-schema-for-specoptions)) - -### Nested Schema for `spec.allow` - -Optional: - -- `account_assignments` (Attributes List) AccountAssignments holds the list of account assignments affected by this condition. (see [below for nested schema](#nested-schema-for-specallowaccount_assignments)) -- `app_labels` (Map of List of String) AppLabels is a map of labels used as part of the RBAC system. -- `app_labels_expression` (String) AppLabelsExpression is a predicate expression used to allow/deny access to Apps. -- `aws_role_arns` (List of String) AWSRoleARNs is a list of AWS role ARNs this role is allowed to assume. -- `azure_identities` (List of String) AzureIdentities is a list of Azure identities this role is allowed to assume.
-- `cluster_labels` (Map of List of String) ClusterLabels is a map of node labels (used to dynamically grant access to clusters). -- `cluster_labels_expression` (String) ClusterLabelsExpression is a predicate expression used to allow/deny access to remote Teleport clusters. -- `db_labels` (Map of List of String) DatabaseLabels are used in RBAC system to allow/deny access to databases. -- `db_labels_expression` (String) DatabaseLabelsExpression is a predicate expression used to allow/deny access to Databases. -- `db_names` (List of String) DatabaseNames is a list of database names this role is allowed to connect to. -- `db_permissions` (Attributes List) DatabasePermissions specifies a set of permissions that will be granted to the database user when using automatic database user provisioning. (see [below for nested schema](#nested-schema-for-specallowdb_permissions)) -- `db_roles` (List of String) DatabaseRoles is a list of database roles for automatic user creation. -- `db_service_labels` (Map of List of String) DatabaseServiceLabels are used in RBAC system to allow/deny access to Database Services. -- `db_service_labels_expression` (String) DatabaseServiceLabelsExpression is a predicate expression used to allow/deny access to Database Services. -- `db_users` (List of String) DatabaseUsers is a list of database users this role is allowed to connect as. -- `desktop_groups` (List of String) DesktopGroups is a list of groups for created desktop users to be added to -- `gcp_service_accounts` (List of String) GCPServiceAccounts is a list of GCP service accounts this role is allowed to assume. -- `github_permissions` (Attributes List) GitHubPermissions defines GitHub integration related permissions. (see [below for nested schema](#nested-schema-for-specallowgithub_permissions)) -- `group_labels` (Map of List of String) GroupLabels is a map of labels used as part of the RBAC system.
-- `group_labels_expression` (String) GroupLabelsExpression is a predicate expression used to allow/deny access to user groups. -- `host_groups` (List of String) HostGroups is a list of groups for created users to be added to -- `host_sudoers` (List of String) HostSudoers is a list of entries to include in a user's sudoers file -- `impersonate` (Attributes) Impersonate specifies what users and roles this role is allowed to impersonate by issuing certificates or other possible means. (see [below for nested schema](#nested-schema-for-specallowimpersonate)) -- `join_sessions` (Attributes List) JoinSessions specifies policies to allow users to join other sessions. (see [below for nested schema](#nested-schema-for-specallowjoin_sessions)) -- `kubernetes_groups` (List of String) KubeGroups is a list of kubernetes groups -- `kubernetes_labels` (Map of List of String) KubernetesLabels is a map of kubernetes cluster labels used for RBAC. -- `kubernetes_labels_expression` (String) KubernetesLabelsExpression is a predicate expression used to allow/deny access to kubernetes clusters. -- `kubernetes_resources` (Attributes List) KubernetesResources is the Kubernetes Resources this Role grants access to. (see [below for nested schema](#nested-schema-for-specallowkubernetes_resources)) -- `kubernetes_users` (List of String) KubeUsers is an optional list of kubernetes users to impersonate -- `logins` (List of String) Logins is a list of *nix system logins. -- `node_labels` (Map of List of String) NodeLabels is a map of node labels (used to dynamically grant access to nodes). -- `node_labels_expression` (String) NodeLabelsExpression is a predicate expression used to allow/deny access to SSH nodes. -- `request` (Attributes) (see [below for nested schema](#nested-schema-for-specallowrequest)) -- `require_session_join` (Attributes List) RequireSessionJoin specifies policies for required users to start a session.
(see [below for nested schema](#nested-schema-for-specallowrequire_session_join)) -- `review_requests` (Attributes) ReviewRequests defines conditions for submitting access reviews. (see [below for nested schema](#nested-schema-for-specallowreview_requests)) -- `rules` (Attributes List) Rules is a list of rules and their access levels. Rules are a high level construct used for access control. (see [below for nested schema](#nested-schema-for-specallowrules)) -- `spiffe` (Attributes List) SPIFFE is used to allow or deny access to a role holder to generate a SPIFFE SVID. (see [below for nested schema](#nested-schema-for-specallowspiffe)) -- `windows_desktop_labels` (Map of List of String) WindowsDesktopLabels are used in the RBAC system to allow/deny access to Windows desktops. -- `windows_desktop_labels_expression` (String) WindowsDesktopLabelsExpression is a predicate expression used to allow/deny access to Windows desktops. -- `windows_desktop_logins` (List of String) WindowsDesktopLogins is a list of desktop login names allowed/denied for Windows desktops. -- `workload_identity_labels` (Map of List of String) WorkloadIdentityLabels controls whether or not specific WorkloadIdentity resources can be invoked. Further authorization controls exist on the WorkloadIdentity resource itself. -- `workload_identity_labels_expression` (String) WorkloadIdentityLabelsExpression is a predicate expression used to allow/deny access to issuing a WorkloadIdentity. - -### Nested Schema for `spec.allow.account_assignments` - -Optional: - -- `account` (String) -- `permission_set` (String) - - -### Nested Schema for `spec.allow.db_permissions` - -Optional: - -- `match` (Map of List of String) Match is a list of object labels that must be matched for the permission to be granted. -- `permissions` (List of String) Permission is the list of string representations of the permission to be given, e.g. SELECT, INSERT, UPDATE, ...
- - -### Nested Schema for `spec.allow.github_permissions` - -Optional: - -- `orgs` (List of String) - - -### Nested Schema for `spec.allow.impersonate` - -Optional: - -- `roles` (List of String) Roles is a list of resources this role is allowed to impersonate -- `users` (List of String) Users is a list of resources this role is allowed to impersonate, could be an empty list or a Wildcard pattern -- `where` (String) Where specifies optional advanced matcher - - -### Nested Schema for `spec.allow.join_sessions` - -Optional: - -- `kinds` (List of String) Kinds are the session kinds this policy applies to. -- `modes` (List of String) Modes is a list of permitted participant modes for this policy. -- `name` (String) Name is the name of the policy. -- `roles` (List of String) Roles is a list of roles that you can join the session of. - - -### Nested Schema for `spec.allow.kubernetes_resources` - -Optional: - -- `api_group` (String) APIGroup specifies the Kubernetes API group of the Kubernetes resource. It supports wildcards. -- `kind` (String) Kind specifies the Kubernetes Resource type. -- `name` (String) Name is the resource name. It supports wildcards. -- `namespace` (String) Namespace is the resource namespace. It supports wildcards. -- `verbs` (List of String) Verbs are the allowed Kubernetes verbs for the following resource. - - -### Nested Schema for `spec.allow.request` - -Optional: - -- `annotations` (Map of List of String) Annotations is a collection of annotations to be programmatically appended to pending Access Requests at the time of their creation. These annotations serve as a mechanism to propagate extra information to plugins. Since these annotations support variable interpolation syntax, they also offer a mechanism for forwarding claims from an external identity provider, to a plugin via `{{external.trait_name}}` style substitutions. -- `claims_to_roles` (Attributes List) ClaimsToRoles specifies a mapping from claims (traits) to teleport roles. 
(see [below for nested schema](#nested-schema-for-specallowrequestclaims_to_roles)) -- `kubernetes_resources` (Attributes List) kubernetes_resources can optionally enforce a requester to request only certain kinds of kube resources. E.g. users can request either the resource kind "kube_cluster" or any of its subresources, like "namespaces". This field can be defined such that it prevents a user from requesting "kube_cluster" and enforces requesting any of its subresources. (see [below for nested schema](#nested-schema-for-specallowrequestkubernetes_resources)) -- `max_duration` (String) MaxDuration is the amount of time the access will be granted for. If this is zero, the default duration is used. -- `reason` (Attributes) Reason defines settings for the reason for the access provided by the user. (see [below for nested schema](#nested-schema-for-specallowrequestreason)) -- `roles` (List of String) Roles is the name of roles which will match the request rule. -- `search_as_roles` (List of String) SearchAsRoles is a list of extra roles which should apply to a user while they are searching for resources as part of a Resource Access Request, and defines the underlying roles which will be requested as part of any Resource Access Request. -- `suggested_reviewers` (List of String) SuggestedReviewers is a list of reviewer suggestions. These can be teleport usernames, but that is not a requirement. -- `thresholds` (Attributes List) Thresholds is a list of thresholds, one of which must be met in order for reviews to trigger a state-transition. If no thresholds are provided, a default threshold of 1 for approval and denial is used. (see [below for nested schema](#nested-schema-for-specallowrequestthresholds)) - -### Nested Schema for `spec.allow.request.claims_to_roles` - -Optional: - -- `claim` (String) Claim is a claim name. -- `roles` (List of String) Roles is a list of static teleport roles to match. -- `value` (String) Value is a claim value to match.
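The access-request conditions above can be inspected from Terraform once the role has been read. A minimal sketch, assuming a role named `requester` exists in the cluster (the role name and output names are illustrative):

```hcl
# Read a role whose spec.allow.request conditions we want to inspect.
data "teleport_role" "requester" {
  version = "v7"
  metadata = {
    name = "requester"
  }
}

# Roles the holder may request via Access Requests.
output "requestable_roles" {
  value = data.teleport_role.requester.spec.allow.request.roles
}

# Claim-to-role mappings applied when requests are created.
output "claims_to_roles" {
  value = data.teleport_role.requester.spec.allow.request.claims_to_roles
}
```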
- - -### Nested Schema for `spec.allow.request.kubernetes_resources` - -Optional: - -- `api_group` (String) APIGroup specifies the Kubernetes Resource API group. -- `kind` (String) kind specifies the Kubernetes Resource type. - - -### Nested Schema for `spec.allow.request.reason` - -Optional: - -- `mode` (String) Mode can be either "required" or "optional". Empty string is treated as "optional". If a role has the request reason mode set to "required", then reason is required for all Access Requests requesting roles or resources allowed by this role. It applies only to users who have this role assigned. - - -### Nested Schema for `spec.allow.request.thresholds` - -Optional: - -- `approve` (Number) Approve is the number of matching approvals needed for state-transition. -- `deny` (Number) Deny is the number of denials needed for state-transition. -- `filter` (String) Filter is an optional predicate used to determine which reviews count toward this threshold. -- `name` (String) Name is the optional human-readable name of the threshold. - - - -### Nested Schema for `spec.allow.require_session_join` - -Optional: - -- `count` (Number) Count is the amount of people that need to be matched for this policy to be fulfilled. -- `filter` (String) Filter is a predicate that determines what users count towards this policy. -- `kinds` (List of String) Kinds are the session kinds this policy applies to. -- `modes` (List of String) Modes is the list of modes that may be used to fulfill this policy. -- `name` (String) Name is the name of the policy. -- `on_leave` (String) OnLeave is the behaviour that's used when the policy is no longer fulfilled for a live session. - - -### Nested Schema for `spec.allow.review_requests` - -Optional: - -- `claims_to_roles` (Attributes List) ClaimsToRoles specifies a mapping from claims (traits) to teleport roles. 
(see [below for nested schema](#nested-schema-for-specallowreview_requestsclaims_to_roles)) -- `preview_as_roles` (List of String) PreviewAsRoles is a list of extra roles which should apply to a reviewer while they are viewing a Resource Access Request for the purposes of viewing details such as the hostname and labels of requested resources. -- `roles` (List of String) Roles is the name of roles which may be reviewed. -- `where` (String) Where is an optional predicate which further limits which requests are reviewable. - -### Nested Schema for `spec.allow.review_requests.claims_to_roles` - -Optional: - -- `claim` (String) Claim is a claim name. -- `roles` (List of String) Roles is a list of static teleport roles to match. -- `value` (String) Value is a claim value to match. - - - -### Nested Schema for `spec.allow.rules` - -Optional: - -- `actions` (List of String) Actions specifies optional actions taken when this rule matches -- `resources` (List of String) Resources is a list of resources -- `verbs` (List of String) Verbs is a list of verbs -- `where` (String) Where specifies optional advanced matcher - - -### Nested Schema for `spec.allow.spiffe` - -Optional: - -- `dns_sans` (List of String) DNSSANs specifies matchers for the SPIFFE ID DNS SANs. Each requested DNS SAN is compared against all matchers configured and if any match, the condition is considered to be met. The matcher by default allows '*' to be used to indicate zero or more of any character. Prepend '^' and append '$' to instead switch to matching using the Go regex syntax. Example: *.example.com would match foo.example.com -- `ip_sans` (List of String) IPSANs specifies matchers for the SPIFFE ID IP SANs. Each requested IP SAN is compared against all matchers configured and if any match, the condition is considered to be met. The matchers should be specified using CIDR notation, it supports IPv4 and IPv6. 
Examples: - 10.0.0.0/24 would match 10.0.0.0 to 10.255.255.255 - 10.0.0.42/32 would match only 10.0.0.42 -- `path` (String) Path specifies a matcher for the SPIFFE ID path. It should not include the trust domain and should start with a leading slash. The matcher by default allows '*' to be used to indicate zero or more of any character. Prepend '^' and append '$' to instead switch to matching using the Go regex syntax. Example: - /svc/foo/*/bar would match /svc/foo/baz/bar - ^\/svc\/foo\/.*\/bar$ would match /svc/foo/baz/bar - - - -### Nested Schema for `spec.deny` - -Optional: - -- `account_assignments` (Attributes List) AccountAssignments holds the list of account assignments affected by this condition. (see [below for nested schema](#nested-schema-for-specdenyaccount_assignments)) -- `app_labels` (Map of List of String) AppLabels is a map of labels used as part of the RBAC system. -- `app_labels_expression` (String) AppLabelsExpression is a predicate expression used to allow/deny access to Apps. -- `aws_role_arns` (List of String) AWSRoleARNs is a list of AWS role ARNs this role is allowed to assume. -- `azure_identities` (List of String) AzureIdentities is a list of Azure identities this role is allowed to assume. -- `cluster_labels` (Map of List of String) ClusterLabels is a map of node labels (used to dynamically grant access to clusters). -- `cluster_labels_expression` (String) ClusterLabelsExpression is a predicate expression used to allow/deny access to remote Teleport clusters. -- `db_labels` (Map of List of String) DatabaseLabels are used in RBAC system to allow/deny access to databases. -- `db_labels_expression` (String) DatabaseLabelsExpression is a predicate expression used to allow/deny access to Databases. -- `db_names` (List of String) DatabaseNames is a list of database names this role is allowed to connect to. 
-- `db_permissions` (Attributes List) DatabasePermissions specifies a set of permissions that will be granted to the database user when using automatic database user provisioning. (see [below for nested schema](#nested-schema-for-specdenydb_permissions)) -- `db_roles` (List of String) DatabaseRoles is a list of database roles for automatic user creation. -- `db_service_labels` (Map of List of String) DatabaseServiceLabels are used in RBAC system to allow/deny access to Database Services. -- `db_service_labels_expression` (String) DatabaseServiceLabelsExpression is a predicate expression used to allow/deny access to Database Services. -- `db_users` (List of String) DatabaseUsers is a list of database users this role is allowed to connect as. -- `desktop_groups` (List of String) DesktopGroups is a list of groups for created desktop users to be added to -- `gcp_service_accounts` (List of String) GCPServiceAccounts is a list of GCP service accounts this role is allowed to assume. -- `github_permissions` (Attributes List) GitHubPermissions defines GitHub integration related permissions. (see [below for nested schema](#nested-schema-for-specdenygithub_permissions)) -- `group_labels` (Map of List of String) GroupLabels is a map of labels used as part of the RBAC system. -- `group_labels_expression` (String) GroupLabelsExpression is a predicate expression used to allow/deny access to user groups. -- `host_groups` (List of String) HostGroups is a list of groups for created users to be added to -- `host_sudoers` (List of String) HostSudoers is a list of entries to include in a user's sudoers file -- `impersonate` (Attributes) Impersonate specifies what users and roles this role is allowed to impersonate by issuing certificates or other possible means. (see [below for nested schema](#nested-schema-for-specdenyimpersonate)) -- `join_sessions` (Attributes List) JoinSessions specifies policies to allow users to join other sessions.
(see [below for nested schema](#nested-schema-for-specdenyjoin_sessions)) -- `kubernetes_groups` (List of String) KubeGroups is a list of kubernetes groups -- `kubernetes_labels` (Map of List of String) KubernetesLabels is a map of kubernetes cluster labels used for RBAC. -- `kubernetes_labels_expression` (String) KubernetesLabelsExpression is a predicate expression used to allow/deny access to kubernetes clusters. -- `kubernetes_resources` (Attributes List) KubernetesResources is the Kubernetes Resources this Role grants access to. (see [below for nested schema](#nested-schema-for-specdenykubernetes_resources)) -- `kubernetes_users` (List of String) KubeUsers is an optional list of kubernetes users to impersonate -- `logins` (List of String) Logins is a list of *nix system logins. -- `node_labels` (Map of List of String) NodeLabels is a map of node labels (used to dynamically grant access to nodes). -- `node_labels_expression` (String) NodeLabelsExpression is a predicate expression used to allow/deny access to SSH nodes. -- `request` (Attributes) (see [below for nested schema](#nested-schema-for-specdenyrequest)) -- `require_session_join` (Attributes List) RequireSessionJoin specifies policies for required users to start a session. (see [below for nested schema](#nested-schema-for-specdenyrequire_session_join)) -- `review_requests` (Attributes) ReviewRequests defines conditions for submitting access reviews. (see [below for nested schema](#nested-schema-for-specdenyreview_requests)) -- `rules` (Attributes List) Rules is a list of rules and their access levels. Rules are a high level construct used for access control. (see [below for nested schema](#nested-schema-for-specdenyrules)) -- `spiffe` (Attributes List) SPIFFE is used to allow or deny access to a role holder to generate a SPIFFE SVID.
(see [below for nested schema](#nested-schema-for-specdenyspiffe)) -- `windows_desktop_labels` (Map of List of String) WindowsDesktopLabels are used in the RBAC system to allow/deny access to Windows desktops. -- `windows_desktop_labels_expression` (String) WindowsDesktopLabelsExpression is a predicate expression used to allow/deny access to Windows desktops. -- `windows_desktop_logins` (List of String) WindowsDesktopLogins is a list of desktop login names allowed/denied for Windows desktops. -- `workload_identity_labels` (Map of List of String) WorkloadIdentityLabels controls whether or not specific WorkloadIdentity resources can be invoked. Further authorization controls exist on the WorkloadIdentity resource itself. -- `workload_identity_labels_expression` (String) WorkloadIdentityLabelsExpression is a predicate expression used to allow/deny access to issuing a WorkloadIdentity. - -### Nested Schema for `spec.deny.account_assignments` - -Optional: - -- `account` (String) -- `permission_set` (String) - - -### Nested Schema for `spec.deny.db_permissions` - -Optional: - -- `match` (Map of List of String) Match is a list of object labels that must be matched for the permission to be granted. -- `permissions` (List of String) Permission is the list of string representations of the permission to be given, e.g. SELECT, INSERT, UPDATE, ... - - -### Nested Schema for `spec.deny.github_permissions` - -Optional: - -- `orgs` (List of String) - - -### Nested Schema for `spec.deny.impersonate` - -Optional: - -- `roles` (List of String) Roles is a list of resources this role is allowed to impersonate -- `users` (List of String) Users is a list of resources this role is allowed to impersonate, could be an empty list or a Wildcard pattern -- `where` (String) Where specifies optional advanced matcher - - -### Nested Schema for `spec.deny.join_sessions` - -Optional: - -- `kinds` (List of String) Kinds are the session kinds this policy applies to. 
-- `modes` (List of String) Modes is a list of permitted participant modes for this policy. -- `name` (String) Name is the name of the policy. -- `roles` (List of String) Roles is a list of roles that you can join the session of. - - -### Nested Schema for `spec.deny.kubernetes_resources` - -Optional: - -- `api_group` (String) APIGroup specifies the Kubernetes API group of the Kubernetes resource. It supports wildcards. -- `kind` (String) Kind specifies the Kubernetes Resource type. -- `name` (String) Name is the resource name. It supports wildcards. -- `namespace` (String) Namespace is the resource namespace. It supports wildcards. -- `verbs` (List of String) Verbs are the allowed Kubernetes verbs for the following resource. - - -### Nested Schema for `spec.deny.request` - -Optional: - -- `annotations` (Map of List of String) Annotations is a collection of annotations to be programmatically appended to pending Access Requests at the time of their creation. These annotations serve as a mechanism to propagate extra information to plugins. Since these annotations support variable interpolation syntax, they also offer a mechanism for forwarding claims from an external identity provider, to a plugin via `{{external.trait_name}}` style substitutions. -- `claims_to_roles` (Attributes List) ClaimsToRoles specifies a mapping from claims (traits) to teleport roles. (see [below for nested schema](#nested-schema-for-specdenyrequestclaims_to_roles)) -- `kubernetes_resources` (Attributes List) kubernetes_resources can optionally enforce a requester to request only certain kinds of kube resources. E.g. users can request either the resource kind "kube_cluster" or any of its subresources, like "namespaces". This field can be defined such that it prevents a user from requesting "kube_cluster" and enforces requesting any of its subresources.
(see [below for nested schema](#nested-schema-for-specdenyrequestkubernetes_resources)) -- `max_duration` (String) MaxDuration is the amount of time the access will be granted for. If this is zero, the default duration is used. -- `reason` (Attributes) Reason defines settings for the reason for the access provided by the user. (see [below for nested schema](#nested-schema-for-specdenyrequestreason)) -- `roles` (List of String) Roles is the name of roles which will match the request rule. -- `search_as_roles` (List of String) SearchAsRoles is a list of extra roles which should apply to a user while they are searching for resources as part of a Resource Access Request, and defines the underlying roles which will be requested as part of any Resource Access Request. -- `suggested_reviewers` (List of String) SuggestedReviewers is a list of reviewer suggestions. These can be teleport usernames, but that is not a requirement. -- `thresholds` (Attributes List) Thresholds is a list of thresholds, one of which must be met in order for reviews to trigger a state-transition. If no thresholds are provided, a default threshold of 1 for approval and denial is used. (see [below for nested schema](#nested-schema-for-specdenyrequestthresholds)) - -### Nested Schema for `spec.deny.request.claims_to_roles` - -Optional: - -- `claim` (String) Claim is a claim name. -- `roles` (List of String) Roles is a list of static teleport roles to match. -- `value` (String) Value is a claim value to match. - - -### Nested Schema for `spec.deny.request.kubernetes_resources` - -Optional: - -- `api_group` (String) APIGroup specifies the Kubernetes Resource API group. -- `kind` (String) kind specifies the Kubernetes Resource type. - - -### Nested Schema for `spec.deny.request.reason` - -Optional: - -- `mode` (String) Mode can be either "required" or "optional". Empty string is treated as "optional". 
If a role has the request reason mode set to "required", then reason is required for all Access Requests requesting roles or resources allowed by this role. It applies only to users who have this role assigned. - - -### Nested Schema for `spec.deny.request.thresholds` - -Optional: - -- `approve` (Number) Approve is the number of matching approvals needed for state-transition. -- `deny` (Number) Deny is the number of denials needed for state-transition. -- `filter` (String) Filter is an optional predicate used to determine which reviews count toward this threshold. -- `name` (String) Name is the optional human-readable name of the threshold. - - - -### Nested Schema for `spec.deny.require_session_join` - -Optional: - -- `count` (Number) Count is the amount of people that need to be matched for this policy to be fulfilled. -- `filter` (String) Filter is a predicate that determines what users count towards this policy. -- `kinds` (List of String) Kinds are the session kinds this policy applies to. -- `modes` (List of String) Modes is the list of modes that may be used to fulfill this policy. -- `name` (String) Name is the name of the policy. -- `on_leave` (String) OnLeave is the behaviour that's used when the policy is no longer fulfilled for a live session. - - -### Nested Schema for `spec.deny.review_requests` - -Optional: - -- `claims_to_roles` (Attributes List) ClaimsToRoles specifies a mapping from claims (traits) to teleport roles. (see [below for nested schema](#nested-schema-for-specdenyreview_requestsclaims_to_roles)) -- `preview_as_roles` (List of String) PreviewAsRoles is a list of extra roles which should apply to a reviewer while they are viewing a Resource Access Request for the purposes of viewing details such as the hostname and labels of requested resources. -- `roles` (List of String) Roles is the name of roles which may be reviewed. -- `where` (String) Where is an optional predicate which further limits which requests are reviewable. 
- -### Nested Schema for `spec.deny.review_requests.claims_to_roles` - -Optional: - -- `claim` (String) Claim is a claim name. -- `roles` (List of String) Roles is a list of static teleport roles to match. -- `value` (String) Value is a claim value to match. - - - -### Nested Schema for `spec.deny.rules` - -Optional: - -- `actions` (List of String) Actions specifies optional actions taken when this rule matches -- `resources` (List of String) Resources is a list of resources -- `verbs` (List of String) Verbs is a list of verbs -- `where` (String) Where specifies optional advanced matcher - - -### Nested Schema for `spec.deny.spiffe` - -Optional: - -- `dns_sans` (List of String) DNSSANs specifies matchers for the SPIFFE ID DNS SANs. Each requested DNS SAN is compared against all matchers configured and if any match, the condition is considered to be met. The matcher by default allows '*' to be used to indicate zero or more of any character. Prepend '^' and append '$' to instead switch to matching using the Go regex syntax. Example: *.example.com would match foo.example.com -- `ip_sans` (List of String) IPSANs specifies matchers for the SPIFFE ID IP SANs. Each requested IP SAN is compared against all matchers configured and if any match, the condition is considered to be met. The matchers should be specified using CIDR notation, it supports IPv4 and IPv6. Examples: - 10.0.0.0/24 would match 10.0.0.0 to 10.255.255.255 - 10.0.0.42/32 would match only 10.0.0.42 -- `path` (String) Path specifies a matcher for the SPIFFE ID path. It should not include the trust domain and should start with a leading slash. The matcher by default allows '*' to be used to indicate zero or more of any character. Prepend '^' and append '$' to instead switch to matching using the Go regex syntax. 
Example: - /svc/foo/*/bar would match /svc/foo/baz/bar - ^\/svc\/foo\/.*\/bar$ would match /svc/foo/baz/bar - - - -### Nested Schema for `spec.options` - -Optional: - -- `cert_extensions` (Attributes List) CertExtensions specifies the key/values (see [below for nested schema](#nested-schema-for-specoptionscert_extensions)) -- `cert_format` (String) CertificateFormat defines the format of the user certificate to allow compatibility with older versions of OpenSSH. -- `client_idle_timeout` (String) ClientIdleTimeout controls whether idle clients are disconnected: 0 means do not disconnect; any other value is the idle duration after which clients are disconnected. -- `create_db_user` (Boolean) CreateDatabaseUser enables automatic database user creation. -- `create_db_user_mode` (Number) CreateDatabaseUserMode allows users to be automatically created on a database when not set to off. 0 is "unspecified", 1 is "off", 2 is "keep", 3 is "best_effort_drop". -- `create_desktop_user` (Boolean) CreateDesktopUser allows users to be automatically created on a Windows desktop. -- `create_host_user` (Boolean) Deprecated: use CreateHostUserMode instead. -- `create_host_user_default_shell` (String) CreateHostUserDefaultShell is used to configure the default shell for newly provisioned host users. -- `create_host_user_mode` (Number) CreateHostUserMode allows users to be automatically created on a host when not set to off. 0 is "unspecified"; 1 is "off"; 2 is "drop" (removed for v15 and above), 3 is "keep"; 4 is "insecure-drop". -- `desktop_clipboard` (Boolean) DesktopClipboard indicates whether clipboard sharing is allowed between the user's workstation and the remote desktop. It defaults to true unless explicitly set to false. -- `desktop_directory_sharing` (Boolean) DesktopDirectorySharing indicates whether directory sharing is allowed between the user's workstation and the remote desktop. It defaults to false unless explicitly set to true.
-- `device_trust_mode` (String) DeviceTrustMode is the device authorization mode used for the resources associated with the role. See DeviceTrust.Mode. -- `disconnect_expired_cert` (Boolean) DisconnectExpiredCert sets disconnect clients on expired certificates. -- `enhanced_recording` (List of String) BPF defines what events to record for the BPF-based session recorder. -- `forward_agent` (Boolean) ForwardAgent is SSH agent forwarding. -- `idp` (Attributes) IDP is a set of options related to accessing IdPs within Teleport. Requires Teleport Enterprise. (see [below for nested schema](#nested-schema-for-specoptionsidp)) -- `lock` (String) Lock specifies the locking mode (strict|best_effort) to be applied with the role. -- `max_connections` (Number) MaxConnections defines the maximum number of concurrent connections a user may hold. -- `max_kubernetes_connections` (Number) MaxKubernetesConnections defines the maximum number of concurrent Kubernetes sessions a user may hold. -- `max_session_ttl` (String) MaxSessionTTL defines how long a SSH session can last for. -- `max_sessions` (Number) MaxSessions defines the maximum number of concurrent sessions per connection. -- `mfa_verification_interval` (String) MFAVerificationInterval optionally defines the maximum duration that can elapse between successive MFA verifications. This variable is used to ensure that users are periodically prompted to verify their identity, enhancing security by preventing prolonged sessions without re-authentication when using tsh proxy * derivatives. It's only effective if the session requires MFA. If not set, defaults to `max_session_ttl`. -- `permit_x11_forwarding` (Boolean) PermitX11Forwarding authorizes use of X11 forwarding. 
-- `pin_source_ip` (Boolean) PinSourceIP forces the same client IP for certificate generation and usage -- `port_forwarding` (Boolean) Deprecated: Use SSHPortForwarding instead -- `record_session` (Attributes) RecordDesktopSession indicates whether desktop access sessions should be recorded. It defaults to true unless explicitly set to false. (see [below for nested schema](#nested-schema-for-specoptionsrecord_session)) -- `request_access` (String) RequestAccess defines the request strategy (optional|reason|always) where optional is the default. -- `request_prompt` (String) RequestPrompt is an optional message which tells users what they ought to request. -- `require_session_mfa` (Number) RequireMFAType is the type of MFA requirement enforced for this user. 0 is "OFF", 1 is "SESSION", 2 is "SESSION_AND_HARDWARE_KEY", 3 is "HARDWARE_KEY_TOUCH", 4 is "HARDWARE_KEY_PIN", 5 is "HARDWARE_KEY_TOUCH_AND_PIN". -- `ssh_file_copy` (Boolean) SSHFileCopy indicates whether remote file operations via SCP or SFTP are allowed over an SSH session. It defaults to true unless explicitly set to false. -- `ssh_port_forwarding` (Attributes) SSHPortForwarding configures what types of SSH port forwarding are allowed by a role. (see [below for nested schema](#nested-schema-for-specoptionsssh_port_forwarding)) - -### Nested Schema for `spec.options.cert_extensions` - -Optional: - -- `mode` (Number) Mode is the type of extension to be used -- currently critical-option is not supported. 0 is "extension". -- `name` (String) Name specifies the key to be used in the cert extension. -- `type` (Number) Type represents the certificate type being extended, only ssh is supported at this time. 0 is "ssh". -- `value` (String) Value specifies the value to be used in the cert extension. - - -### Nested Schema for `spec.options.idp` - -Optional: - -- `saml` (Attributes) SAML are options related to the Teleport SAML IdP.
(see [below for nested schema](#nested-schema-for-specoptionsidpsaml)) - -### Nested Schema for `spec.options.idp.saml` - -Optional: - -- `enabled` (Boolean) Enabled is set to true if this option allows access to the Teleport SAML IdP. - - - -### Nested Schema for `spec.options.record_session` - -Optional: - -- `default` (String) Default indicates the default value for the services. -- `desktop` (Boolean) Desktop indicates whether desktop sessions should be recorded. It defaults to true unless explicitly set to false. -- `ssh` (String) SSH indicates the session mode used on SSH sessions. - - -### Nested Schema for `spec.options.ssh_port_forwarding` - -Optional: - -- `local` (Attributes) Allow local port forwarding. (see [below for nested schema](#nested-schema-for-specoptionsssh_port_forwardinglocal)) -- `remote` (Attributes) Allow remote port forwarding. (see [below for nested schema](#nested-schema-for-specoptionsssh_port_forwardingremote)) - -### Nested Schema for `spec.options.ssh_port_forwarding.local` - -Optional: - -- `enabled` (Boolean) - - -### Nested Schema for `spec.options.ssh_port_forwarding.remote` - -Optional: - -- `enabled` (Boolean) - diff --git a/docs/pages/reference/terraform-provider/data-sources/saml_connector.mdx b/docs/pages/reference/terraform-provider/data-sources/saml_connector.mdx deleted file mode 100644 index d1534f6571f45..0000000000000 --- a/docs/pages/reference/terraform-provider/data-sources/saml_connector.mdx +++ /dev/null @@ -1,112 +0,0 @@ ---- -title: Reference for the teleport_saml_connector Terraform data-source -sidebar_label: saml_connector -description: This page describes the supported values of the teleport_saml_connector data-source of the Teleport Terraform provider. ---- - -{/*Auto-generated file. 
Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - - - - - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `spec` (Attributes) Spec is an SAML connector specification. (see [below for nested schema](#nested-schema-for-spec)) -- `version` (String) Version is the resource version. It must be specified. Supported values are: `v2`. - -### Optional - -- `metadata` (Attributes) Metadata holds resource metadata. (see [below for nested schema](#nested-schema-for-metadata)) -- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources. - -### Nested Schema for `spec` - -Required: - -- `acs` (String) AssertionConsumerService is a URL for assertion consumer service on the service provider (Teleport's side). -- `attributes_to_roles` (Attributes List) AttributesToRoles is a list of mappings of attribute statements to roles. (see [below for nested schema](#nested-schema-for-specattributes_to_roles)) - -Optional: - -- `allow_idp_initiated` (Boolean) AllowIDPInitiated is a flag that indicates if the connector can be used for IdP-initiated logins. -- `assertion_key_pair` (Attributes) EncryptionKeyPair is a key pair used for decrypting SAML assertions. (see [below for nested schema](#nested-schema-for-specassertion_key_pair)) -- `audience` (String) Audience uniquely identifies our service provider. -- `cert` (String, Sensitive) Cert is the identity provider certificate PEM. IDP signs `` responses using this certificate. -- `client_redirect_settings` (Attributes) ClientRedirectSettings defines which client redirect URLs are allowed for non-browser SSO logins other than the standard localhost ones. (see [below for nested schema](#nested-schema-for-specclient_redirect_settings)) -- `display` (String) Display controls how this connector is displayed. -- `entity_descriptor` (String, Sensitive) EntityDescriptor is XML with descriptor. 
It can be used to supply configuration parameters in one XML file rather than supplying them in the individual elements. -- `entity_descriptor_url` (String) EntityDescriptorURL is a URL that supplies a configuration XML. -- `force_authn` (Number) ForceAuthn specifies whether re-authentication should be forced on login. UNSPECIFIED is treated as NO. -- `issuer` (String) Issuer is the identity provider issuer. -- `mfa` (Attributes) MFASettings contains settings to enable SSO MFA checks through this auth connector. (see [below for nested schema](#nested-schema-for-specmfa)) -- `preferred_request_binding` (String) PreferredRequestBinding is a preferred SAML request binding method. Value must be either "http-post" or "http-redirect". In general, the SAML identity provider lists the request binding methods it supports, and the SAML service provider uses whichever supported method it prefers. However, we never honored the request binding value provided by the IdP and always used http-redirect binding as the default. Setting the PreferredRequestBinding value preserves existing auth connector behavior and only uses http-post binding if it is explicitly configured. -- `provider` (String) Provider is the external identity provider. -- `service_provider_issuer` (String) ServiceProviderIssuer is the issuer of the service provider (Teleport). -- `signing_key_pair` (Attributes) SigningKeyPair is an x509 key pair used to sign AuthnRequest. (see [below for nested schema](#nested-schema-for-specsigning_key_pair)) -- `single_logout_url` (String) SingleLogoutURL is the SAML Single log-out URL to initiate SAML SLO (single log-out). If this is not provided, SLO is disabled. -- `sso` (String) SSO is the URL of the identity provider's SSO service. - -### Nested Schema for `spec.attributes_to_roles` - -Optional: - -- `name` (String) Name is an attribute statement name. -- `roles` (List of String) Roles is a list of static teleport roles to map to.
-- `value` (String) Value is an attribute statement value to match. - - -### Nested Schema for `spec.assertion_key_pair` - -Optional: - -- `cert` (String) Cert is a PEM-encoded x509 certificate. -- `private_key` (String, Sensitive) PrivateKey is a PEM encoded x509 private key. - - -### Nested Schema for `spec.client_redirect_settings` - -Optional: - -- `allowed_https_hostnames` (List of String) a list of hostnames allowed for https client redirect URLs -- `insecure_allowed_cidr_ranges` (List of String) a list of CIDRs allowed for HTTP or HTTPS client redirect URLs - - -### Nested Schema for `spec.mfa` - -Optional: - -- `cert` (String) Cert is the identity provider certificate PEM. IDP signs `` responses using this certificate. -- `enabled` (Boolean) Enabled specifies whether this SAML connector supports MFA checks. Defaults to false. -- `entity_descriptor` (String) EntityDescriptor is XML with descriptor. It can be used to supply configuration parameters in one XML file rather than supplying them in the individual elements. Usually set from EntityDescriptorUrl. -- `entity_descriptor_url` (String) EntityDescriptorUrl is a URL that supplies a configuration XML. -- `force_authn` (Number) ForceAuthn specifies whether re-authentication should be forced for MFA checks. UNSPECIFIED is treated as YES to always require re-authentication for MFA checks. This should only be set to NO if the IdP is set up to perform MFA checks on top of active user sessions. -- `issuer` (String) Issuer is the identity provider issuer. Usually set from EntityDescriptor. -- `sso` (String) SSO is the URL of the identity provider's SSO service. Usually set from EntityDescriptor. - - -### Nested Schema for `spec.signing_key_pair` - -Optional: - -- `cert` (String) Cert is a PEM-encoded x509 certificate. -- `private_key` (String, Sensitive) PrivateKey is a PEM encoded x509 private key.
- - - -### Nested Schema for `metadata` - -Required: - -- `name` (String) Name is an object name - -Optional: - -- `description` (String) Description is object description -- `expires` (String) Expires is a global expiry time header can be set on any resource in the system. -- `labels` (Map of String) Labels is a set of labels - diff --git a/docs/pages/reference/terraform-provider/data-sources/session_recording_config.mdx b/docs/pages/reference/terraform-provider/data-sources/session_recording_config.mdx deleted file mode 100644 index 6227a418d8daf..0000000000000 --- a/docs/pages/reference/terraform-provider/data-sources/session_recording_config.mdx +++ /dev/null @@ -1,42 +0,0 @@ ---- -title: Reference for the teleport_session_recording_config Terraform data-source -sidebar_label: session_recording_config -description: This page describes the supported values of the teleport_session_recording_config data-source of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - - - - - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `version` (String) Version is the resource version. It must be specified. Supported values are:`v2`. - -### Optional - -- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) -- `spec` (Attributes) Spec is a SessionRecordingConfig specification (see [below for nested schema](#nested-schema-for-spec)) -- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources - -### Nested Schema for `metadata` - -Optional: - -- `description` (String) Description is object description -- `expires` (String) Expires is a global expiry time header can be set on any resource in the system. 
-- `labels` (Map of String) Labels is a set of labels - - -### Nested Schema for `spec` - -Optional: - -- `mode` (String) Mode controls where (or if) the session is recorded. -- `proxy_checks_host_keys` (Boolean) ProxyChecksHostKeys is used to control if the proxy will check host keys when in recording mode. - diff --git a/docs/pages/reference/terraform-provider/data-sources/trusted_device.mdx b/docs/pages/reference/terraform-provider/data-sources/trusted_device.mdx deleted file mode 100644 index 8b2d90d1101a6..0000000000000 --- a/docs/pages/reference/terraform-provider/data-sources/trusted_device.mdx +++ /dev/null @@ -1,53 +0,0 @@ ---- -title: Reference for the teleport_trusted_device Terraform data-source -sidebar_label: trusted_device -description: This page describes the supported values of the teleport_trusted_device data-source of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - - - - - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `version` (String) Version is the API version used to create the resource. It must be specified. Based on this version, Teleport will apply different defaults on resource creation or deletion. It must be an integer prefixed by "v". For example: `v1` - -### Optional - -- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) -- `spec` (Attributes) Specification of the device. 
(see [below for nested schema](#nested-schema-for-spec)) - -### Nested Schema for `metadata` - -Optional: - -- `labels` (Map of String) Labels is a set of labels -- `name` (String) Name is an object name - - -### Nested Schema for `spec` - -Required: - -- `asset_tag` (String) -- `os_type` (String) - -Optional: - -- `enroll_status` (String) -- `owner` (String) -- `source` (Attributes) (see [below for nested schema](#nested-schema-for-specsource)) - -### Nested Schema for `spec.source` - -Optional: - -- `name` (String) -- `origin` (String) - diff --git a/docs/pages/reference/terraform-provider/data-sources/user.mdx b/docs/pages/reference/terraform-provider/data-sources/user.mdx deleted file mode 100644 index ddfdba981231b..0000000000000 --- a/docs/pages/reference/terraform-provider/data-sources/user.mdx +++ /dev/null @@ -1,89 +0,0 @@ ---- -title: Reference for the teleport_user Terraform data-source -sidebar_label: user -description: This page describes the supported values of the teleport_user data-source of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - - - - - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `version` (String) Version is the resource version. It must be specified. Supported values are: `v2`. 
- -### Optional - -- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) -- `spec` (Attributes) Spec is a user specification (see [below for nested schema](#nested-schema-for-spec)) -- `status` (Attributes) (see [below for nested schema](#nested-schema-for-status)) -- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources - -### Nested Schema for `metadata` - -Required: - -- `name` (String) Name is an object name - -Optional: - -- `description` (String) Description is object description -- `expires` (String) Expires is a global expiry time header can be set on any resource in the system. -- `labels` (Map of String) Labels is a set of labels - - -### Nested Schema for `spec` - -Optional: - -- `github_identities` (Attributes List) GithubIdentities lists associated Github OAuth2 identities that let the user log in using an externally verified identity (see [below for nested schema](#nested-schema-for-specgithub_identities)) -- `oidc_identities` (Attributes List) OIDCIdentities lists associated OpenID Connect identities that let the user log in using an externally verified identity (see [below for nested schema](#nested-schema-for-specoidc_identities)) -- `roles` (List of String) Roles is a list of roles assigned to user -- `saml_identities` (Attributes List) SAMLIdentities lists associated SAML identities that let the user log in using an externally verified identity (see [below for nested schema](#nested-schema-for-specsaml_identities)) -- `traits` (Map of List of String) Traits are key/value pairs received from an identity provider (through OIDC claims or SAML assertions) or from a system administrator for local accounts. Traits are used to populate role variables. -- `trusted_device_ids` (List of String) TrustedDeviceIDs contains the IDs of trusted devices enrolled by the user.
Note that SSO users are transient and thus may contain an empty TrustedDeviceIDs field, even though the user->device association exists under the Device Trust subsystem. Do not rely on this field to determine device associations or ownership, it exists for legacy/informative purposes only. Managed by the Device Trust subsystem, avoid manual edits. - -### Nested Schema for `spec.github_identities` - -Optional: - -- `connector_id` (String) ConnectorID is the ID of the registered OIDC connector, e.g. 'google-example.com' -- `samlSingleLogoutUrl` (String) SAMLSingleLogoutURL is the SAML Single log-out URL to initiate SAML SLO (single log-out), if applicable. -- `user_id` (String) UserID is the ID of the identity. Some connectors like GitHub have a unique ID apart from the username. -- `username` (String) Username is the username supplied by the external identity provider - - -### Nested Schema for `spec.oidc_identities` - -Optional: - -- `connector_id` (String) ConnectorID is the ID of the registered OIDC connector, e.g. 'google-example.com' -- `samlSingleLogoutUrl` (String) SAMLSingleLogoutURL is the SAML Single log-out URL to initiate SAML SLO (single log-out), if applicable. -- `user_id` (String) UserID is the ID of the identity. Some connectors like GitHub have a unique ID apart from the username. -- `username` (String) Username is the username supplied by the external identity provider - - -### Nested Schema for `spec.saml_identities` - -Optional: - -- `connector_id` (String) ConnectorID is the ID of the registered OIDC connector, e.g. 'google-example.com' -- `samlSingleLogoutUrl` (String) SAMLSingleLogoutURL is the SAML Single log-out URL to initiate SAML SLO (single log-out), if applicable. -- `user_id` (String) UserID is the ID of the identity. Some connectors like GitHub have a unique ID apart from the username.
-- `username` (String) Username is username supplied by external identity provider - - - -### Nested Schema for `status` - -Optional: - -- `mfa_weakest_device` (Number) mfa_weakest_device reflects what the system knows about the user's weakest MFA device. Note that this is a "best effort" property, in that it can be UNSPECIFIED. -- `password_state` (Number) password_state reflects what the system knows about the user's password. Note that this is a "best effort" property, in that it can be UNSPECIFIED for users who were created before this property was introduced and didn't perform any password-related activity since then. See RFD 0159 for details. Do NOT use this value for authentication purposes! - diff --git a/docs/pages/reference/terraform-provider/resources/access_list.mdx b/docs/pages/reference/terraform-provider/resources/access_list.mdx deleted file mode 100644 index bb33d60362040..0000000000000 --- a/docs/pages/reference/terraform-provider/resources/access_list.mdx +++ /dev/null @@ -1,210 +0,0 @@ ---- -title: Reference for the teleport_access_list Terraform resource -sidebar_label: access_list -description: This page describes the supported values of the teleport_access_list resource of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - -This page describes the supported values of the teleport_access_list resource of the Teleport Terraform provider. - - - - -## Example Usage - -```hcl -resource "teleport_access_list" "crane-operation" { - header = { - metadata = { - name = "crane-operation" - labels = { - example = "yes" - } - } - } - spec = { - description = "Used to grant access to the crane." - owners = [ - { - name = "gru" - description = "The supervillain." 
- } - ] - membership_requires = { - roles = ["minion"] - } - ownership_requires = { - roles = ["supervillain"] - } - grants = { - roles = ["crane-operator"] - traits = [{ - key = "allowed-machines" - values = ["crane", "forklift"] - }] - } - title = "Crane operation" - audit = { - recurrence = { - frequency = 3 # audit every 3 months - day_of_month = 15 # audits happen on the 15th day of the month. Possible values are 1, 15, and 31. - } - } - } -} -``` - -{/* schema generated by tfplugindocs */} -## Schema - -### Optional - -- `header` (Attributes) header is the header for the resource. (see [below for nested schema](#nested-schema-for-header)) -- `spec` (Attributes) spec is the specification for the Access List. (see [below for nested schema](#nested-schema-for-spec)) - -### Nested Schema for `header` - -Required: - -- `version` (String) Version is the API version used to create the resource. It must be specified. Based on this version, Teleport will apply different defaults on resource creation or deletion. It must be an integer prefixed by "v". For example: `v1` - -Optional: - -- `kind` (String) kind is a resource kind. -- `metadata` (Attributes) metadata is resource metadata. (see [below for nested schema](#nested-schema-for-headermetadata)) -- `sub_kind` (String) sub_kind is an optional resource sub kind, used in some resources. - -### Nested Schema for `header.metadata` - -Required: - -- `name` (String) name is an object name. - -Optional: - -- `description` (String) description is object description. -- `expires` (String) expires is a global expiry time header can be set on any resource in the system. -- `labels` (Map of String) labels is a set of labels. -- `namespace` (String) namespace is object namespace. The field should be called "namespace" when it returns in Teleport 2.4. -- `revision` (String) revision is an opaque identifier which tracks the versions of a resource over time.
Clients should ignore and not alter its value but must return the revision in any updates of a resource. - - - -### Nested Schema for `spec` - -Required: - -- `audit` (Attributes) audit describes the frequency that this Access List must be audited. (see [below for nested schema](#nested-schema-for-specaudit)) -- `grants` (Attributes) grants describes the access granted by membership to this Access List. (see [below for nested schema](#nested-schema-for-specgrants)) -- `owners` (Attributes List) owners is a list of owners of the Access List. (see [below for nested schema](#nested-schema-for-specowners)) - -Optional: - -- `description` (String) description is an optional plaintext description of the Access List. -- `membership_requires` (Attributes) membership_requires describes the requirements for a user to be a member of the Access List. For a membership to an Access List to be effective, the user must meet the requirements of Membership_requires and must be in the members list. (see [below for nested schema](#nested-schema-for-specmembership_requires)) -- `owner_grants` (Attributes) owner_grants describes the access granted by owners to this Access List. (see [below for nested schema](#nested-schema-for-specowner_grants)) -- `ownership_requires` (Attributes) ownership_requires describes the requirements for a user to be an owner of the Access List. For ownership of an Access List to be effective, the user must meet the requirements of ownership_requires and must be in the owners list. (see [below for nested schema](#nested-schema-for-specownership_requires)) -- `title` (String) title is a plaintext short description of the Access List. - -### Nested Schema for `spec.audit` - -Required: - -- `recurrence` (Attributes) recurrence is the recurrence definition (see [below for nested schema](#nested-schema-for-specauditrecurrence)) - -Optional: - -- `next_audit_date` (String) next_audit_date is when the next audit date should be done by. 
-- `notifications` (Attributes) notifications is the configuration for notifying users. (see [below for nested schema](#nested-schema-for-specauditnotifications)) - -### Nested Schema for `spec.audit.recurrence` - -Required: - -- `frequency` (Number) frequency is the frequency of reviews. This represents the period in months between two reviews. Supported values are 0, 1, 3, 6, and 12. - -Optional: - -- `day_of_month` (Number) day_of_month is the day of month that reviews will be scheduled on. Supported values are 0, 1, 15, and 31. - - -### Nested Schema for `spec.audit.notifications` - -Optional: - -- `start` (String) start specifies when to start notifying users that the next audit date is coming up. - - - -### Nested Schema for `spec.grants` - -Optional: - -- `roles` (List of String) roles are the roles that are granted to users who are members of the Access List. -- `traits` (Attributes List) traits are the traits that are granted to users who are members of the Access List. (see [below for nested schema](#nested-schema-for-specgrantstraits)) - -### Nested Schema for `spec.grants.traits` - -Optional: - -- `key` (String) key is the name of the trait. -- `values` (List of String) values is the list of trait values. - - - -### Nested Schema for `spec.owners` - -Optional: - -- `description` (String) description is the plaintext description of the owner and why they are an owner. -- `membership_kind` (Number) membership_kind describes the type of membership, either `MEMBERSHIP_KIND_USER` or `MEMBERSHIP_KIND_LIST`. -- `name` (String) name is the username of the owner. - - -### Nested Schema for `spec.membership_requires` - -Optional: - -- `roles` (List of String) roles are the user roles that must be present for the user to obtain access. -- `traits` (Attributes List) traits are the traits that must be present for the user to obtain access. 
(see [below for nested schema](#nested-schema-for-specmembership_requirestraits)) - -### Nested Schema for `spec.membership_requires.traits` - -Optional: - -- `key` (String) key is the name of the trait. -- `values` (List of String) values is the list of trait values. - - - -### Nested Schema for `spec.owner_grants` - -Optional: - -- `roles` (List of String) roles are the roles that are granted to users who are members of the Access List. -- `traits` (Attributes List) traits are the traits that are granted to users who are members of the Access List. (see [below for nested schema](#nested-schema-for-specowner_grantstraits)) - -### Nested Schema for `spec.owner_grants.traits` - -Optional: - -- `key` (String) key is the name of the trait. -- `values` (List of String) values is the list of trait values. - - - -### Nested Schema for `spec.ownership_requires` - -Optional: - -- `roles` (List of String) roles are the user roles that must be present for the user to obtain access. -- `traits` (Attributes List) traits are the traits that must be present for the user to obtain access. (see [below for nested schema](#nested-schema-for-specownership_requirestraits)) - -### Nested Schema for `spec.ownership_requires.traits` - -Optional: - -- `key` (String) key is the name of the trait. -- `values` (List of String) values is the list of trait values. diff --git a/docs/pages/reference/terraform-provider/resources/access_monitoring_rule.mdx b/docs/pages/reference/terraform-provider/resources/access_monitoring_rule.mdx deleted file mode 100644 index 77a621fde8df0..0000000000000 --- a/docs/pages/reference/terraform-provider/resources/access_monitoring_rule.mdx +++ /dev/null @@ -1,93 +0,0 @@ ---- -title: Reference for the teleport_access_monitoring_rule Terraform resource -sidebar_label: access_monitoring_rule -description: This page describes the supported values of the teleport_access_monitoring_rule resource of the Teleport Terraform provider. ---- - -{/*Auto-generated file. 
Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - -This page describes the supported values of the teleport_access_monitoring_rule resource of the Teleport Terraform provider. - - - - -## Example Usage - -```hcl -resource "teleport_access_monitoring_rule" "test" { - version = "v1" - metadata = { - name = "test" - } - spec = { - subjects = ["access_request"] - condition = "access_request.spec.roles.contains(\"your_role_name\")" - desired_state = "reviewed" - notification = { - name = "slack" - recipients = ["your-slack-channel"] - } - automatic_review = { - integration = "builtin" - decision = "APPROVED" - } - } -} -``` - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `spec` (Attributes) Spec is an AccessMonitoringRule specification (see [below for nested schema](#nested-schema-for-spec)) -- `version` (String) version is the resource version. - -### Optional - -- `metadata` (Attributes) metadata is the rule's metadata. (see [below for nested schema](#nested-schema-for-metadata)) -- `sub_kind` (String) sub_kind is an optional resource sub kind, used in some resources - -### Nested Schema for `spec` - -Required: - -- `subjects` (List of String) subjects the rule operates on; can be a resource kind or a particular resource property. - -Optional: - -- `automatic_review` (Attributes) automatic_review defines automatic review configurations for access requests. Both notification and automatic_review may be set within the same access_monitoring_rule. If both fields are set, the rule will trigger both notifications and automatic reviews for the same set of access events. Separate plugins may be used if both notifications and automatic_reviews are set. (see [below for nested schema](#nested-schema-for-specautomatic_review)) -- `condition` (String) condition is a predicate expression that operates on the specified subject resources, and determines whether the subject will be moved into desired state. 
- `desired_state` (String) desired_state defines the desired state of the subject. For access request subjects, the desired_state may be set to `reviewed` to indicate that the access request should be automatically reviewed. -- `notification` (Attributes) notification defines the plugin configuration for notifications if the rule is triggered. Both notification and automatic_review may be set within the same access_monitoring_rule. If both fields are set, the rule will trigger both notifications and automatic reviews for the same set of access events. Separate plugins may be used if both notifications and automatic_reviews are set. (see [below for nested schema](#nested-schema-for-specnotification)) -- `states` (List of String) states are the desired states to which the monitoring rule attempts to bring the subjects matching the condition. - -### Nested Schema for `spec.automatic_review` - -Optional: - -- `decision` (String) decision specifies the proposed state of the access review. This can be either 'APPROVED' or 'DENIED'. -- `integration` (String) integration is the name of the integration that is responsible for monitoring the rule. Set this value to `builtin` to monitor the rule with Teleport. - - -### Nested Schema for `spec.notification` - -Optional: - -- `name` (String) name is the name of the plugin to which this configuration should apply. -- `recipients` (List of String) recipients is the list of recipients the plugin should notify. - - - -### Nested Schema for `metadata` - -Required: - -- `name` (String) name is an object name. - -Optional: - -- `description` (String) description is object description. -- `expires` (String) expires is a global expiry time header can be set on any resource in the system. -- `labels` (Map of String) labels is a set of labels. 
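Automatic reviews are optional. As a minimal sketch (the rule name, role name, and Slack channel below are illustrative, not taken from the example above), a notification-only rule simply omits the `automatic_review` block:

```hcl
# Hypothetical notification-only rule: without an automatic_review block,
# matching access requests trigger notifications but are not auto-reviewed.
resource "teleport_access_monitoring_rule" "notify_only" {
  version = "v1"
  metadata = {
    name = "notify-only" # illustrative name
  }
  spec = {
    subjects  = ["access_request"]
    condition = "access_request.spec.roles.contains(\"editor\")" # illustrative role
    notification = {
      name       = "slack"
      recipients = ["approvals-channel"] # illustrative recipient
    }
  }
}
```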
diff --git a/docs/pages/reference/terraform-provider/resources/app.mdx b/docs/pages/reference/terraform-provider/resources/app.mdx deleted file mode 100644 index 7d7e7cadc9bba..0000000000000 --- a/docs/pages/reference/terraform-provider/resources/app.mdx +++ /dev/null @@ -1,157 +0,0 @@ ---- -title: Reference for the teleport_app Terraform resource -sidebar_label: app -description: This page describes the supported values of the teleport_app resource of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - -This page describes the supported values of the teleport_app resource of the Teleport Terraform provider. - - - - -## Example Usage - -```hcl -# Teleport App - -resource "teleport_app" "example" { - version = "v3" - metadata = { - name = "example" - description = "Test app" - labels = { - "teleport.dev/origin" = "dynamic" // This label is added on Teleport side by default - } - } - - spec = { - uri = "localhost:3000" - } -} -``` - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `version` (String) Version is the resource version. It must be specified. Supported values are:`v3`. - -### Optional - -- `metadata` (Attributes) Metadata is the app resource metadata. (see [below for nested schema](#nested-schema-for-metadata)) -- `spec` (Attributes) Spec is the app resource spec. (see [below for nested schema](#nested-schema-for-spec)) -- `sub_kind` (String) SubKind is an optional resource subkind. - -### Nested Schema for `metadata` - -Required: - -- `name` (String) Name is an object name - -Optional: - -- `description` (String) Description is object description -- `expires` (String) Expires is a global expiry time header can be set on any resource in the system. -- `labels` (Map of String) Labels is a set of labels - - -### Nested Schema for `spec` - -Optional: - -- `aws` (Attributes) AWS contains additional options for AWS applications. 
(see [below for nested schema](#nested-schema-for-specaws)) -- `cloud` (String) Cloud identifies the cloud instance the app represents. -- `cors` (Attributes) CORSPolicy defines the Cross-Origin Resource Sharing settings for the app. (see [below for nested schema](#nested-schema-for-speccors)) -- `dynamic_labels` (Attributes Map) DynamicLabels are the app's command labels. (see [below for nested schema](#nested-schema-for-specdynamic_labels)) -- `identity_center` (Attributes) IdentityCenter encapsulates AWS identity-center specific information. Only valid for Identity Center account apps. (see [below for nested schema](#nested-schema-for-specidentity_center)) -- `insecure_skip_verify` (Boolean) InsecureSkipVerify disables app's TLS certificate verification. -- `integration` (String) Integration is the integration name that must be used to access this Application. Only applicable to AWS App Access. If present, the Application must use the Integration's credentials instead of ambient credentials to access Cloud APIs. -- `public_addr` (String) PublicAddr is the public address the application is accessible at. -- `required_app_names` (List of String) RequiredAppNames is a list of app names that are required for this app to function. Any app listed here will be part of the authentication redirect flow and authenticate alongside this app. -- `rewrite` (Attributes) Rewrite is a list of rewriting rules to apply to requests and responses. (see [below for nested schema](#nested-schema-for-specrewrite)) -- `tcp_ports` (Attributes List) TCPPorts is a list of ports and port ranges that an app agent can forward connections to. Only applicable to TCP App Access. If this field is not empty, URI is expected to contain no port number and start with the tcp protocol. (see [below for nested schema](#nested-schema-for-spectcp_ports)) -- `uri` (String) URI is the web app endpoint. 
- `use_any_proxy_public_addr` (Boolean) UseAnyProxyPublicAddr will rebuild this app's fqdn based on the proxy public addr that the request originated from. This should be true if your proxy has multiple proxy public addrs and you want the app to be accessible from any of them. If `public_addr` is explicitly set in the app spec, setting this value to true will overwrite that public address in the web UI. -- `user_groups` (List of String) UserGroups are a list of user group IDs that this app is associated with. - -### Nested Schema for `spec.aws` - -Optional: - -- `external_id` (String) ExternalID is the AWS External ID used when assuming roles in this app. -- `roles_anywhere_profile` (Attributes) RolesAnywhereProfile contains the IAM Roles Anywhere fields associated with this Application. These fields are set when performing the synchronization of AWS IAM Roles Anywhere Profiles into Teleport Apps. (see [below for nested schema](#nested-schema-for-specawsroles_anywhere_profile)) - -### Nested Schema for `spec.aws.roles_anywhere_profile` - -Optional: - -- `accept_role_session_name` (Boolean) Whether this Roles Anywhere Profile accepts a custom role session name. When not supported, the AWS Session Name will be the X.509 certificate's serial number. When supported, the AWS Session Name will be the identity's username. This value comes from: https://docs.aws.amazon.com/rolesanywhere/latest/APIReference/API_ProfileDetail.html / acceptRoleSessionName -- `profile_arn` (String) ProfileARN is the AWS IAM Roles Anywhere Profile ARN that originated this Teleport App. - - - -### Nested Schema for `spec.cors` - -Optional: - -- `allow_credentials` (Boolean) allow_credentials indicates whether credentials are allowed. -- `allowed_headers` (List of String) allowed_headers specifies which headers can be used when accessing the app. -- `allowed_methods` (List of String) allowed_methods specifies which methods are allowed when accessing the app. 
-- `allowed_origins` (List of String) allowed_origins specifies which origins are allowed to access the app. -- `exposed_headers` (List of String) exposed_headers indicates which headers are made available to scripts via the browser. -- `max_age` (Number) max_age indicates how long (in seconds) the results of a preflight request can be cached. - - -### Nested Schema for `spec.dynamic_labels` - -Optional: - -- `command` (List of String) Command is a command to run -- `period` (String) Period is a time between command runs -- `result` (String) Result captures standard output - - -### Nested Schema for `spec.identity_center` - -Optional: - -- `account_id` (String) Account ID is the AWS-assigned ID of the account -- `permission_sets` (Attributes List) PermissionSets lists the available permission sets on the given account (see [below for nested schema](#nested-schema-for-specidentity_centerpermission_sets)) - -### Nested Schema for `spec.identity_center.permission_sets` - -Optional: - -- `arn` (String) ARN is the fully-formed ARN of the Permission Set. -- `assignment_name` (String) AssignmentID is the ID of the Teleport Account Assignment resource that represents this permission being assigned on the enclosing Account. -- `name` (String) Name is the human-readable name of the Permission Set. - - - -### Nested Schema for `spec.rewrite` - -Optional: - -- `headers` (Attributes List) Headers is a list of headers to inject when passing the request over to the application. (see [below for nested schema](#nested-schema-for-specrewriteheaders)) -- `jwt_claims` (String) JWTClaims configures whether roles/traits are included in the JWT token. -- `redirect` (List of String) Redirect defines a list of hosts which will be rewritten to the public address of the application if they occur in the "Location" header. - -### Nested Schema for `spec.rewrite.headers` - -Optional: - -- `name` (String) Name is the http header name. -- `value` (String) Value is the http header value. 
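Putting the `rewrite` fields together, a minimal sketch of an app that injects a static header into proxied requests might look like the following (the app name, header name, and header value are illustrative):

```hcl
# Hypothetical app that injects a static header into proxied requests.
resource "teleport_app" "with_rewrite" {
  version = "v3"
  metadata = {
    name = "internal-dashboard" # illustrative name
  }
  spec = {
    uri = "localhost:8080"
    rewrite = {
      headers = [{
        name  = "X-Internal-App" # illustrative header name
        value = "dashboard"      # illustrative header value
      }]
    }
  }
}
```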
- - - -### Nested Schema for `spec.tcp_ports` - -Optional: - -- `end_port` (Number) EndPort describes the end of the range, inclusive. If set, it must be between 2 and 65535 and be greater than Port when describing a port range. When omitted or set to zero, it signifies that the port range defines a single port. -- `port` (Number) Port describes the start of the range. It must be between 1 and 65535. diff --git a/docs/pages/reference/terraform-provider/resources/auth_preference.mdx b/docs/pages/reference/terraform-provider/resources/auth_preference.mdx deleted file mode 100644 index 108b43f4034a2..0000000000000 --- a/docs/pages/reference/terraform-provider/resources/auth_preference.mdx +++ /dev/null @@ -1,155 +0,0 @@ ---- -title: Reference for the teleport_auth_preference Terraform resource -sidebar_label: auth_preference -description: This page describes the supported values of the teleport_auth_preference resource of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - -This page describes the supported values of the teleport_auth_preference resource of the Teleport Terraform provider. - - - - -## Example Usage - -```hcl -# AuthPreference resource - -resource "teleport_auth_preference" "example" { - version = "v2" - metadata = { - description = "Auth preference" - labels = { - "example" = "yes" - "teleport.dev/origin" = "dynamic" // This label is added on Teleport side by default - } - } - - spec = { - disconnect_expired_cert = true - } -} -``` - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `spec` (Attributes) Spec is an AuthPreference specification (see [below for nested schema](#nested-schema-for-spec)) -- `version` (String) Version is the resource version. It must be specified. Supported values are: `v2`. 
- -### Optional - -- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) -- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources - -### Nested Schema for `spec` - -Optional: - -- `allow_headless` (Boolean) AllowHeadless enables/disables headless support. Headless authentication requires Webauthn to work. Defaults to true if Webauthn is configured, false otherwise. -- `allow_local_auth` (Boolean) AllowLocalAuth is true if local authentication is enabled. -- `allow_passwordless` (Boolean) AllowPasswordless enables/disables passwordless support. Passwordless requires Webauthn to work. Defaults to true if Webauthn is configured, false otherwise. -- `connector_name` (String) ConnectorName is the name of the OIDC or SAML connector. If this value is not set, the first connector in the backend will be used. -- `default_session_ttl` (String) DefaultSessionTTL is the TTL to use for user certs when an explicit TTL is not requested. -- `device_trust` (Attributes) DeviceTrust holds settings related to trusted device verification. Requires Teleport Enterprise. (see [below for nested schema](#nested-schema-for-specdevice_trust)) -- `disconnect_expired_cert` (Boolean) DisconnectExpiredCert provides disconnect expired certificate setting - if true, connections with expired client certificates will get disconnected -- `hardware_key` (Attributes) HardwareKey are the settings for hardware key support. (see [below for nested schema](#nested-schema-for-spechardware_key)) -- `idp` (Attributes) IDP is a set of options related to accessing IdPs within Teleport. Requires Teleport Enterprise. (see [below for nested schema](#nested-schema-for-specidp)) -- `locking_mode` (String) LockingMode is the cluster-wide locking mode default. -- `message_of_the_day` (String) -- `okta` (Attributes) Okta is a set of options related to the Okta service in Teleport. 
Requires Teleport Enterprise. (see [below for nested schema](#nested-schema-for-specokta)) -- `require_session_mfa` (Number) RequireMFAType is the type of MFA requirement enforced for this cluster. 0 is "OFF", 1 is "SESSION", 2 is "SESSION_AND_HARDWARE_KEY", 3 is "HARDWARE_KEY_TOUCH", 4 is "HARDWARE_KEY_PIN", 5 is "HARDWARE_KEY_TOUCH_AND_PIN". -- `second_factor` (String) SecondFactor is the type of multi-factor. Deprecated: Prefer using SecondFactors instead. -- `second_factors` (List of Number) SecondFactors is a list of supported multi-factor types. 1 is "otp", 2 is "webauthn", 3 is "sso". If unspecified, the current default value is [1], or ["otp"]. -- `signature_algorithm_suite` (Number) SignatureAlgorithmSuite is the configured signature algorithm suite for the cluster. If unspecified, the current default value is "legacy". 1 is "legacy", 2 is "balanced-v1", 3 is "fips-v1", 4 is "hsm-v1". -- `stable_unix_user_config` (Attributes) StableUnixUserConfig contains the cluster-wide configuration for stable UNIX users. (see [below for nested schema](#nested-schema-for-specstable_unix_user_config)) -- `type` (String) Type is the type of authentication. -- `u2f` (Attributes) U2F are the settings for the U2F device. (see [below for nested schema](#nested-schema-for-specu2f)) -- `webauthn` (Attributes) Webauthn are the settings for server-side Web Authentication support. (see [below for nested schema](#nested-schema-for-specwebauthn)) - -### Nested Schema for `spec.device_trust` - -Optional: - -- `auto_enroll` (Boolean) Enable device auto-enroll. Auto-enroll lets any user issue a device enrollment token for a known device that is not already enrolled. `tsh` takes advantage of auto-enroll to automatically enroll devices on user login, when appropriate. The effective cluster Mode still applies: AutoEnroll=true is meaningless if Mode="off". -- `ekcert_allowed_cas` (List of String) Allow list of EKCert CAs in PEM format. 
If present, only TPM devices that present an EKCert that is signed by a CA specified here may be enrolled (existing enrollments are unchanged). If not present, then the CA of TPM EKCerts will not be checked during enrollment; this allows any device to enroll. -- `mode` (String) Mode of verification for trusted devices. The following modes are supported: - "off": disables both device authentication and authorization. - "optional": allows both device authentication and authorization, but doesn't enforce the presence of device extensions for sensitive endpoints. - "required": enforces the presence of device extensions for sensitive endpoints. Mode is always "off" for OSS. Defaults to "optional" for Enterprise. - - -### Nested Schema for `spec.hardware_key` - -Optional: - -- `pin_cache_ttl` (String) PinCacheTTL is the amount of time in nanoseconds that Teleport clients will cache the user's PIV PIN when hardware key PIN policy is enabled. -- `piv_slot` (String) PIVSlot is a PIV slot that Teleport clients should use instead of the default based on private key policy. For example, "9a" or "9e". -- `serial_number_validation` (Attributes) SerialNumberValidation holds settings for hardware key serial number validation. By default, serial number validation is disabled. (see [below for nested schema](#nested-schema-for-spechardware_keyserial_number_validation)) - -### Nested Schema for `spec.hardware_key.serial_number_validation` - -Optional: - -- `enabled` (Boolean) Enabled indicates whether hardware key serial number validation is enabled. -- `serial_number_trait_name` (String) SerialNumberTraitName is an optional custom user trait name for hardware key serial numbers to replace the default: "hardware_key_serial_numbers". Note: Values for this user trait should be a comma-separated list of serial numbers, or a list of comma-separated lists. 
e.g. ["123", "345,678"] - - - -### Nested Schema for `spec.idp` - -Optional: - -- `saml` (Attributes) SAML are options related to the Teleport SAML IdP. (see [below for nested schema](#nested-schema-for-specidpsaml)) - -### Nested Schema for `spec.idp.saml` - -Optional: - -- `enabled` (Boolean) Enabled is set to true if this option allows access to the Teleport SAML IdP. - - - -### Nested Schema for `spec.okta` - -Optional: - -- `sync_period` (String) SyncPeriod is the duration between synchronization calls in nanoseconds. - - -### Nested Schema for `spec.stable_unix_user_config` - -Optional: - -- `enabled` (Boolean) Enabled signifies that (UNIX) Teleport SSH hosts should obtain a UID from the control plane if they're about to provision a host user with no other configured UID. -- `first_uid` (Number) FirstUid is the start of the range of UIDs for autoprovisioned host users. The range is inclusive on both ends, so the specified UID can be assigned. -- `last_uid` (Number) LastUid is the end of the range of UIDs for autoprovisioned host users. The range is inclusive on both ends, so the specified UID can be assigned. - -### Nested Schema for `spec.u2f` - -Optional: - -- `app_id` (String) AppID returns the application ID for universal multi-factor. -- `device_attestation_cas` (List of String) DeviceAttestationCAs contains the trusted attestation CAs for U2F devices. -- `facets` (List of String) Facets returns the facets for universal multi-factor. Deprecated: Kept for backwards compatibility reasons, but Facets have no effect since Teleport v10, when Webauthn replaced the U2F implementation. 
If supplied in conjunction with AttestationDeniedCAs, then both conditions need to be true for registration to be allowed (the device MUST match an allowed CA and MUST NOT match a denied CA). By default all devices are allowed. -- `attestation_denied_cas` (List of String) Deny list of device attestation CAs in PEM format. If present, only devices whose attestation certificates don't match the certificates specified here may be registered (existing registrations are unchanged). If supplied in conjunction with AttestationAllowedCAs, then both conditions need to be true for registration to be allowed (the device MUST match an allowed CA and MUST NOT match a denied CA). By default no devices are denied. -- `rp_id` (String) RPID is the ID of the Relying Party. It should be set to the domain name of the Teleport installation. IMPORTANT: RPID must never change in the lifetime of the cluster, because it's recorded in the registration data on the WebAuthn device. If the RPID changes, all existing WebAuthn key registrations will become invalid and all users who use WebAuthn as the multi-factor will need to re-register. - - - -### Nested Schema for `metadata` - -Optional: - -- `description` (String) Description is object description -- `expires` (String) Expires is a global expiry time header can be set on any resource in the system. -- `labels` (Map of String) Labels is a set of labels diff --git a/docs/pages/reference/terraform-provider/resources/autoupdate_config.mdx b/docs/pages/reference/terraform-provider/resources/autoupdate_config.mdx deleted file mode 100644 index 4c357def001cf..0000000000000 --- a/docs/pages/reference/terraform-provider/resources/autoupdate_config.mdx +++ /dev/null @@ -1,114 +0,0 @@ ---- -title: Reference for the teleport_autoupdate_config Terraform resource -sidebar_label: autoupdate_config -description: This page describes the supported values of the teleport_autoupdate_config resource of the Teleport Terraform provider. 
---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - -This page describes the supported values of the teleport_autoupdate_config resource of the Teleport Terraform provider. - - - - -## Example Usage - -```hcl -resource "teleport_autoupdate_config" "test" { - version = "v1" - spec = { - tools = { - mode = "enabled" - } - agents = { - mode = "enabled" - strategy = "halt-on-error" - schedules = { - regular = [ - { - name = "dev" - days = ["Mon", "Tue", "Wed", "Thu"] - start_hour : 4 - }, - { - name = "staging" - days = ["Mon", "Tue", "Wed", "Thu"] - start_hour : 14 - }, - { - name = "prod" - days = ["Mon", "Tue", "Wed", "Thu"] - start_hour : 14 - wait_hours : 24 - }, - ] - } - } - } -} -``` - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `spec` (Attributes) (see [below for nested schema](#nested-schema-for-spec)) -- `version` (String) - -### Optional - -- `metadata` (Attributes) (see [below for nested schema](#nested-schema-for-metadata)) -- `sub_kind` (String) - -### Nested Schema for `spec` - -Optional: - -- `agents` (Attributes) (see [below for nested schema](#nested-schema-for-specagents)) -- `tools` (Attributes) (see [below for nested schema](#nested-schema-for-spectools)) - -### Nested Schema for `spec.agents` - -Optional: - -- `maintenance_window_duration` (String) maintenance_window_duration is the maintenance window duration. This can only be set if `strategy` is "time-based". Once the window is over, the group transitions to the done state. Existing agents won't be updated until the next maintenance window. -- `mode` (String) mode specifies whether agent autoupdates are enabled, disabled, or paused. -- `schedules` (Attributes) schedules specifies schedules for updates of grouped agents. (see [below for nested schema](#nested-schema-for-specagentsschedules)) -- `strategy` (String) strategy to use for updating the agents. 
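As a sketch of the `time-based` strategy mentioned above (group names, days, and hours are illustrative), `maintenance_window_duration` is set and `wait_hours` is omitted, since each group updates inside its own fixed window rather than waiting on the previous group:

```hcl
# Hypothetical time-based agent rollout: each group updates inside a
# fixed maintenance window instead of waiting on the previous group.
resource "teleport_autoupdate_config" "time_based" {
  version = "v1"
  spec = {
    agents = {
      mode                        = "enabled"
      strategy                    = "time-based"
      maintenance_window_duration = "1h"
      schedules = {
        regular = [
          { name = "dev", days = ["Mon", "Tue"], start_hour = 4 },  # illustrative group
          { name = "prod", days = ["Wed", "Thu"], start_hour = 4 }, # illustrative group
        ]
      }
    }
  }
}
```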
- -### Nested Schema for `spec.agents.schedules` - -Optional: - -- `regular` (Attributes List) regular schedules for non-critical versions. (see [below for nested schema](#nested-schema-for-specagentsschedulesregular)) - -### Nested Schema for `spec.agents.schedules.regular` - -Optional: - -- `days` (List of String) days when the update can run. Supported values are "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun" and "*" -- `name` (String) name of the group -- `start_hour` (Number) start_hour to initiate update -- `wait_hours` (Number) wait_hours after the last group succeeds before this group can run. This can only be used when the strategy is "halt-on-error". This field must be positive. - - - - -### Nested Schema for `spec.tools` - -Optional: - -- `mode` (String) Mode defines state of the client tools auto update. - - - -### Nested Schema for `metadata` - -Optional: - -- `description` (String) description is object description. -- `expires` (String) expires is a global expiry time header can be set on any resource in the system. -- `labels` (Map of String) labels is a set of labels. -- `name` (String) name is an object name. diff --git a/docs/pages/reference/terraform-provider/resources/autoupdate_version.mdx b/docs/pages/reference/terraform-provider/resources/autoupdate_version.mdx deleted file mode 100644 index 8df2a2d2a13b5..0000000000000 --- a/docs/pages/reference/terraform-provider/resources/autoupdate_version.mdx +++ /dev/null @@ -1,79 +0,0 @@ ---- -title: Reference for the teleport_autoupdate_version Terraform resource -sidebar_label: autoupdate_version -description: This page describes the supported values of the teleport_autoupdate_version resource of the Teleport Terraform provider. 
- - - -## Example Usage - -```hcl -resource "teleport_autoupdate_version" "test" { - version = "v1" - spec = { - tools = { - target_version = "1.2.3" - } - agents = { - start_version = "1.2.3" - target_version = "1.2.4" - schedule = "regular" - mode = "enabled" - } - } -} -``` - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `spec` (Attributes) (see [below for nested schema](#nested-schema-for-spec)) -- `version` (String) - -### Optional - -- `metadata` (Attributes) (see [below for nested schema](#nested-schema-for-metadata)) -- `sub_kind` (String) - -### Nested Schema for `spec` - -Optional: - -- `agents` (Attributes) (see [below for nested schema](#nested-schema-for-specagents)) -- `tools` (Attributes) (see [below for nested schema](#nested-schema-for-spectools)) - -### Nested Schema for `spec.agents` - -Optional: - -- `mode` (String) autoupdate_mode to use for the rollout -- `schedule` (String) schedule to use for the rollout -- `start_version` (String) start_version is the version to update from. -- `target_version` (String) target_version is the version to update to. - - -### Nested Schema for `spec.tools` - -Optional: - -- `target_version` (String) TargetVersion specifies the semantic version required for tools to establish a connection with the cluster. After connecting to the cluster, client tools are updated to this version automatically. - - - -### Nested Schema for `metadata` - -Optional: - -- `description` (String) description is object description. -- `expires` (String) expires is a global expiry time header can be set on any resource in the system. -- `labels` (Map of String) labels is a set of labels. -- `name` (String) name is an object name. 
diff --git a/docs/pages/reference/terraform-provider/resources/bot.mdx b/docs/pages/reference/terraform-provider/resources/bot.mdx deleted file mode 100644 index 7cff2b1a7483b..0000000000000 --- a/docs/pages/reference/terraform-provider/resources/bot.mdx +++ /dev/null @@ -1,71 +0,0 @@ ---- -title: Reference for the teleport_bot Terraform resource -sidebar_label: bot -description: This page describes the supported values of the teleport_bot resource of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - -This page describes the supported values of the teleport_bot resource of the Teleport Terraform provider. - - - - -## Example Usage - -```hcl -# Teleport Machine ID Bot creation example - -locals { - bot_name = "example" -} - -resource "random_password" "bot_token" { - length = 32 - special = false -} - -resource "time_offset" "bot_example_token_expiry" { - offset_hours = 1 -} - -resource "teleport_provision_token" "bot_example" { - metadata = { - expires = time_offset.bot_example_token_expiry.rfc3339 - description = "Bot join token for ${local.bot_name} generated by Terraform" - - name = random_password.bot_token.result - } - - spec = { - roles = ["Bot"] - bot_name = local.bot_name - join_method = "token" - } -} - -resource "teleport_bot" "example" { - name = local.bot_name - roles = ["access"] -} -``` - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `name` (String) The name of the bot, i.e. the unprefixed User name -- `roles` (List of String) A list of roles the created bot should be allowed to assume via role impersonation. - -### Optional - -- `token_id` (String, Sensitive) Deprecated. This field is not required anymore and has no effect. -- `token_ttl` (String) Deprecated. This field is not required anymore and has no effect. 
-- `traits` (Map of List of String) - -### Read-Only - -- `role_name` (String) The name of the generated bot role -- `user_name` (String) The name of the generated bot user diff --git a/docs/pages/reference/terraform-provider/resources/cluster_networking_config.mdx b/docs/pages/reference/terraform-provider/resources/cluster_networking_config.mdx deleted file mode 100644 index 3977596ef338c..0000000000000 --- a/docs/pages/reference/terraform-provider/resources/cluster_networking_config.mdx +++ /dev/null @@ -1,91 +0,0 @@ ---- -title: Reference for the teleport_cluster_networking_config Terraform resource -sidebar_label: cluster_networking_config -description: This page describes the supported values of the teleport_cluster_networking_config resource of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - -This page describes the supported values of the teleport_cluster_networking_config resource of the Teleport Terraform provider. - - - - -## Example Usage - -```hcl -# Teleport Cluster Networking config - -resource "teleport_cluster_networking_config" "example" { - version = "v2" - metadata = { - description = "Networking config" - labels = { - "example" = "yes" - "teleport.dev/origin" = "dynamic" // This label is added on Teleport side by default - } - } - - spec = { - client_idle_timeout = "1h" - } -} -``` - -{/* schema generated by tfplugindocs */} -## Schema - -### Optional - -- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) -- `spec` (Attributes) Spec is a ClusterNetworkingConfig specification (see [below for nested schema](#nested-schema-for-spec)) -- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources -- `version` (String) Version is the resource version. It must be specified. Supported values are:`v2`. 
- -### Nested Schema for `metadata` - -Optional: - -- `description` (String) Description is object description -- `expires` (String) Expires is a global expiry time header can be set on any resource in the system. -- `labels` (Map of String) Labels is a set of labels - - -### Nested Schema for `spec` - -Optional: - -- `assist_command_execution_workers` (Number) AssistCommandExecutionWorkers determines the number of workers that will execute arbitrary Assist commands on servers in parallel -- `case_insensitive_routing` (Boolean) CaseInsensitiveRouting causes proxies to use case-insensitive hostname matching. -- `client_idle_timeout` (String) ClientIdleTimeout sets global cluster default setting for client idle timeouts. -- `idle_timeout_message` (String) ClientIdleTimeoutMessage is the message sent to the user when a connection times out. -- `keep_alive_count_max` (Number) KeepAliveCountMax is the number of keep-alive messages that can be missed before the server disconnects the connection to the client. -- `keep_alive_interval` (String) KeepAliveInterval is the interval at which the server sends keep-alive messages to the client. -- `proxy_listener_mode` (Number) ProxyListenerMode is proxy listener mode used by Teleport Proxies. 0 is "separate"; 1 is "multiplex". -- `proxy_ping_interval` (String) ProxyPingInterval defines in which interval the TLS routing ping message should be sent. This is applicable only when using ping-wrapped connections, regular TLS routing connections are not affected. -- `routing_strategy` (Number) RoutingStrategy determines the strategy used to route to nodes. 0 is "unambiguous_match"; 1 is "most_recent". -- `session_control_timeout` (String) SessionControlTimeout is the session control lease expiry and defines the upper limit of how long a node may be out of contact with the auth server before it begins terminating controlled sessions. 
-- `ssh_dial_timeout` (String) SSHDialTimeout is a custom dial timeout used when establishing SSH connections. If not set, the default timeout of 30s will be used. -- `tunnel_strategy` (Attributes) TunnelStrategyV1 determines the tunnel strategy used in the cluster. (see [below for nested schema](#nested-schema-for-spectunnel_strategy)) -- `web_idle_timeout` (String) WebIdleTimeout sets global cluster default setting for the web UI idle timeouts. - -### Nested Schema for `spec.tunnel_strategy` - -Optional: - -- `agent_mesh` (Attributes) (see [below for nested schema](#nested-schema-for-spectunnel_strategyagent_mesh)) -- `proxy_peering` (Attributes) (see [below for nested schema](#nested-schema-for-spectunnel_strategyproxy_peering)) - -### Nested Schema for `spec.tunnel_strategy.agent_mesh` - -Optional: - -- `active` (Boolean) Automatically generated field preventing empty message errors - - -### Nested Schema for `spec.tunnel_strategy.proxy_peering` - -Optional: - -- `agent_connection_count` (Number) diff --git a/docs/pages/reference/terraform-provider/resources/database.mdx b/docs/pages/reference/terraform-provider/resources/database.mdx deleted file mode 100644 index 944f0584c6693..0000000000000 --- a/docs/pages/reference/terraform-provider/resources/database.mdx +++ /dev/null @@ -1,274 +0,0 @@ ---- -title: Reference for the teleport_database Terraform resource -sidebar_label: database -description: This page describes the supported values of the teleport_database resource of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - -This page describes the supported values of the teleport_database resource of the Teleport Terraform provider. 
-Follow [the database dynamic registration guide](../../../enroll-resources/database-access/guides/dynamic-registration.mdx) -to complete deploying a database_service and access the database resource - - - -## Example Usage - -```hcl -# Teleport Database - -resource "teleport_database" "example" { - version = "v3" - metadata = { - name = "example" - description = "Test database" - labels = { - "teleport.dev/origin" = "dynamic" // This label is added on Teleport side by default - } - } - - spec = { - protocol = "postgres" - uri = "localhost" - } -} -``` - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `version` (String) Version is the resource version. It must be specified. Supported values are: `v3`. - -### Optional - -- `metadata` (Attributes) Metadata is the database metadata. (see [below for nested schema](#nested-schema-for-metadata)) -- `spec` (Attributes) Spec is the database spec. (see [below for nested schema](#nested-schema-for-spec)) -- `sub_kind` (String) SubKind is an optional resource subkind. - -### Nested Schema for `metadata` - -Required: - -- `name` (String) Name is an object name - -Optional: - -- `description` (String) Description is object description -- `expires` (String) Expires is a global expiry time header can be set on any resource in the system. -- `labels` (Map of String) Labels is a set of labels - - -### Nested Schema for `spec` - -Required: - -- `protocol` (String) Protocol is the database protocol: postgres, mysql, mongodb, etc. -- `uri` (String) URI is the database connection endpoint. - -Optional: - -- `ad` (Attributes) AD is the Active Directory configuration for the database. (see [below for nested schema](#nested-schema-for-specad)) -- `admin_user` (Attributes) AdminUser is the database admin user for automatic user provisioning. (see [below for nested schema](#nested-schema-for-specadmin_user)) -- `aws` (Attributes) AWS contains AWS specific settings for RDS/Aurora/Redshift databases. 
(see [below for nested schema](#nested-schema-for-specaws)) -- `azure` (Attributes) Azure contains Azure specific database metadata. (see [below for nested schema](#nested-schema-for-specazure)) -- `ca_cert` (String) CACert is the PEM-encoded database CA certificate. DEPRECATED: Moved to TLS.CACert. DELETE IN 10.0. -- `dynamic_labels` (Attributes Map) DynamicLabels is the database dynamic labels. (see [below for nested schema](#nested-schema-for-specdynamic_labels)) -- `gcp` (Attributes) GCP contains parameters specific to GCP Cloud SQL databases. (see [below for nested schema](#nested-schema-for-specgcp)) -- `mongo_atlas` (Attributes) MongoAtlas contains Atlas metadata about the database. (see [below for nested schema](#nested-schema-for-specmongo_atlas)) -- `mysql` (Attributes) MySQL is an additional section with MySQL database options. (see [below for nested schema](#nested-schema-for-specmysql)) -- `oracle` (Attributes) Oracle is an additional Oracle configuration options. (see [below for nested schema](#nested-schema-for-specoracle)) -- `tls` (Attributes) TLS is the TLS configuration used when establishing connection to target database. Allows to provide custom CA cert or override server name. (see [below for nested schema](#nested-schema-for-spectls)) - -### Nested Schema for `spec.ad` - -Optional: - -- `domain` (String) Domain is the Active Directory domain the database resides in. -- `kdc_host_name` (String) KDCHostName is the host name for a KDC for x509 Authentication. -- `keytab_file` (String) KeytabFile is the path to the Kerberos keytab file. -- `krb5_file` (String) Krb5File is the path to the Kerberos configuration file. Defaults to /etc/krb5.conf. -- `ldap_cert` (String) LDAPCert is a certificate from Windows LDAP/AD, optional; only for x509 Authentication. -- `ldap_service_account_name` (String) LDAPServiceAccountName is the name of service account for performing LDAP queries. Required for x509 Auth / PKINIT. 
-- `ldap_service_account_sid` (String) LDAPServiceAccountSID is the SID of service account for performing LDAP queries. Required for x509 Auth / PKINIT. -- `spn` (String) SPN is the service principal name for the database. - - -### Nested Schema for `spec.admin_user` - -Optional: - -- `default_database` (String) DefaultDatabase is the database that the privileged database user logs into by default. Depending on the database type, this database may be used to store procedures or data for managing database users. -- `name` (String) Name is the username of the privileged database user. - - -### Nested Schema for `spec.aws` - -Optional: - -- `account_id` (String) AccountID is the AWS account ID this database belongs to. -- `assume_role_arn` (String) AssumeRoleARN is an optional AWS role ARN to assume when accessing a database. Set this field and ExternalID to enable access across AWS accounts. -- `docdb` (Attributes) DocumentDB contains AWS DocumentDB specific metadata. (see [below for nested schema](#nested-schema-for-specawsdocdb)) -- `elasticache` (Attributes) ElastiCache contains AWS ElastiCache Redis specific metadata. (see [below for nested schema](#nested-schema-for-specawselasticache)) -- `external_id` (String) ExternalID is an optional AWS external ID used to enable assuming an AWS role across accounts. -- `iam_policy_status` (Number) IAMPolicyStatus indicates whether the IAM Policy is configured properly for database access. If not, the user must update the AWS profile identity to allow access to the Database. Eg for an RDS Database: the underlying AWS profile allows for `rds-db:connect` for the Database. -- `memorydb` (Attributes) MemoryDB contains AWS MemoryDB specific metadata. (see [below for nested schema](#nested-schema-for-specawsmemorydb)) -- `opensearch` (Attributes) OpenSearch contains AWS OpenSearch specific metadata. (see [below for nested schema](#nested-schema-for-specawsopensearch)) -- `rds` (Attributes) RDS contains RDS specific metadata. 
(see [below for nested schema](#nested-schema-for-specawsrds)) -- `rdsproxy` (Attributes) RDSProxy contains AWS Proxy specific metadata. (see [below for nested schema](#nested-schema-for-specawsrdsproxy)) -- `redshift` (Attributes) Redshift contains Redshift specific metadata. (see [below for nested schema](#nested-schema-for-specawsredshift)) -- `redshift_serverless` (Attributes) RedshiftServerless contains AWS Redshift Serverless specific metadata. (see [below for nested schema](#nested-schema-for-specawsredshift_serverless)) -- `region` (String) Region is a AWS cloud region. -- `secret_store` (Attributes) SecretStore contains secret store configurations. (see [below for nested schema](#nested-schema-for-specawssecret_store)) -- `session_tags` (Map of String) SessionTags is a list of AWS STS session tags. - -### Nested Schema for `spec.aws.docdb` - -Optional: - -- `cluster_id` (String) ClusterID is the cluster identifier. -- `endpoint_type` (String) EndpointType is the type of the endpoint. -- `instance_id` (String) InstanceID is the instance identifier. - - -### Nested Schema for `spec.aws.elasticache` - -Optional: - -- `endpoint_type` (String) EndpointType is the type of the endpoint. -- `replication_group_id` (String) ReplicationGroupID is the Redis replication group ID. -- `transit_encryption_enabled` (Boolean) TransitEncryptionEnabled indicates whether in-transit encryption (TLS) is enabled. -- `user_group_ids` (List of String) UserGroupIDs is a list of user group IDs. - - -### Nested Schema for `spec.aws.memorydb` - -Optional: - -- `acl_name` (String) ACLName is the name of the ACL associated with the cluster. -- `cluster_name` (String) ClusterName is the name of the MemoryDB cluster. -- `endpoint_type` (String) EndpointType is the type of the endpoint. -- `tls_enabled` (Boolean) TLSEnabled indicates whether in-transit encryption (TLS) is enabled. 
- - -### Nested Schema for `spec.aws.opensearch` - -Optional: - -- `domain_id` (String) DomainID is the ID of the domain. -- `domain_name` (String) DomainName is the name of the domain. -- `endpoint_type` (String) EndpointType is the type of the endpoint. - - -### Nested Schema for `spec.aws.rds` - -Optional: - -- `cluster_id` (String) ClusterID is the RDS cluster (Aurora) identifier. -- `iam_auth` (Boolean) IAMAuth indicates whether database IAM authentication is enabled. -- `instance_id` (String) InstanceID is the RDS instance identifier. -- `resource_id` (String) ResourceID is the RDS instance resource identifier (db-xxx). -- `security_groups` (List of String) SecurityGroups is a list of attached security groups for the RDS instance. -- `subnets` (List of String) Subnets is a list of subnets for the RDS instance. -- `vpc_id` (String) VPCID is the VPC where the RDS is running. - - -### Nested Schema for `spec.aws.rdsproxy` - -Optional: - -- `custom_endpoint_name` (String) CustomEndpointName is the identifier of an RDS Proxy custom endpoint. -- `name` (String) Name is the identifier of an RDS Proxy. -- `resource_id` (String) ResourceID is the RDS instance resource identifier (prx-xxx). - - -### Nested Schema for `spec.aws.redshift` - -Optional: - -- `cluster_id` (String) ClusterID is the Redshift cluster identifier. - - -### Nested Schema for `spec.aws.redshift_serverless` - -Optional: - -- `endpoint_name` (String) EndpointName is the VPC endpoint name. -- `workgroup_id` (String) WorkgroupID is the workgroup ID. -- `workgroup_name` (String) WorkgroupName is the workgroup name. - - -### Nested Schema for `spec.aws.secret_store` - -Optional: - -- `key_prefix` (String) KeyPrefix specifies the secret key prefix. -- `kms_key_id` (String) KMSKeyID specifies the AWS KMS key for encryption. - - - -### Nested Schema for `spec.azure` - -Optional: - -- `is_flexi_server` (Boolean) IsFlexiServer is true if the database is an Azure Flexible server. 
-- `name` (String) Name is the Azure database server name. -- `redis` (Attributes) Redis contains Azure Cache for Redis specific database metadata. (see [below for nested schema](#nested-schema-for-specazureredis)) -- `resource_id` (String) ResourceID is the Azure fully qualified ID for the resource. - -### Nested Schema for `spec.azure.redis` - -Optional: - -- `clustering_policy` (String) ClusteringPolicy is the clustering policy for Redis Enterprise. - - - -### Nested Schema for `spec.dynamic_labels` - -Optional: - -- `command` (List of String) Command is a command to run -- `period` (String) Period is a time between command runs -- `result` (String) Result captures standard output - - -### Nested Schema for `spec.gcp` - -Optional: - -- `instance_id` (String) InstanceID is the Cloud SQL instance ID. -- `project_id` (String) ProjectID is the GCP project ID the Cloud SQL instance resides in. - - -### Nested Schema for `spec.mongo_atlas` - -Optional: - -- `name` (String) Name is the Atlas database instance name. - - -### Nested Schema for `spec.mysql` - -Optional: - -- `server_version` (String) ServerVersion is the server version reported by DB proxy if the runtime information is not available. - - -### Nested Schema for `spec.oracle` - -Optional: - -- `audit_user` (String) AuditUser is the Oracle database user privilege to access internal Oracle audit trail. - - -### Nested Schema for `spec.tls` - -Optional: - -- `ca_cert` (String) CACert is an optional user provided CA certificate used for verifying database TLS connection. -- `mode` (Number) Mode is a TLS connection mode. 0 is "verify-full"; 1 is "verify-ca", 2 is "insecure". -- `server_name` (String) ServerName allows to provide custom hostname. This value will override the servername/hostname on a certificate during validation. -- `trust_system_cert_pool` (Boolean) TrustSystemCertPool allows Teleport to trust certificate authorities available on the host system. 
If not set (by default), Teleport only trusts self-signed databases with TLS certificates signed by Teleport's Database Server CA or the ca_cert specified in this TLS setting. For cloud-hosted databases, Teleport downloads the corresponding required CAs for validation. diff --git a/docs/pages/reference/terraform-provider/resources/github_connector.mdx b/docs/pages/reference/terraform-provider/resources/github_connector.mdx deleted file mode 100644 index 79ccf2b9e8ba2..0000000000000 --- a/docs/pages/reference/terraform-provider/resources/github_connector.mdx +++ /dev/null @@ -1,118 +0,0 @@ ---- -title: Reference for the teleport_github_connector Terraform resource -sidebar_label: github_connector -description: This page describes the supported values of the teleport_github_connector resource of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - -This page describes the supported values of the teleport_github_connector resource of the Teleport Terraform provider. - - - - -## Example Usage - -```hcl -# Terraform Github connector - -variable "github_secret" {} - -resource "teleport_github_connector" "github" { - version = "v3" - # This section tells Terraform that role example must be created before the GitHub connector - depends_on = [ - teleport_role.example - ] - - metadata = { - name = "example" - labels = { - example = "yes" - } - } - - spec = { - client_id = "client" - client_secret = var.github_secret - - teams_to_roles = [{ - organization = "gravitational" - team = "devs" - roles = ["example"] - }] - } -} -``` - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `spec` (Attributes) Spec is an Github connector specification. (see [below for nested schema](#nested-schema-for-spec)) -- `version` (String) Version is the resource version. It must be specified. Supported values are: `v3`. 
- -### Optional - -- `metadata` (Attributes) Metadata holds resource metadata. (see [below for nested schema](#nested-schema-for-metadata)) -- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources. - -### Nested Schema for `spec` - -Required: - -- `client_id` (String) ClientID is the Github OAuth app client ID. -- `client_secret` (String, Sensitive) ClientSecret is the Github OAuth app client secret. - -Optional: - -- `api_endpoint_url` (String) APIEndpointURL is the URL of the API endpoint of the Github instance this connector is for. -- `client_redirect_settings` (Attributes) ClientRedirectSettings defines which client redirect URLs are allowed for non-browser SSO logins other than the standard localhost ones. (see [below for nested schema](#nested-schema-for-specclient_redirect_settings)) -- `display` (String) Display is the connector display name. -- `endpoint_url` (String) EndpointURL is the URL of the GitHub instance this connector is for. -- `redirect_url` (String) RedirectURL is the authorization callback URL. -- `teams_to_logins` (Attributes List) TeamsToLogins maps Github team memberships onto allowed logins/roles. DELETE IN 11.0.0 Deprecated: use GithubTeamsToRoles instead. (see [below for nested schema](#nested-schema-for-specteams_to_logins)) -- `teams_to_roles` (Attributes List) TeamsToRoles maps Github team memberships onto allowed roles. (see [below for nested schema](#nested-schema-for-specteams_to_roles)) - -### Nested Schema for `spec.client_redirect_settings` - -Optional: - -- `allowed_https_hostnames` (List of String) a list of hostnames allowed for https client redirect URLs -- `insecure_allowed_cidr_ranges` (List of String) a list of CIDRs allowed for HTTP or HTTPS client redirect URLs - - -### Nested Schema for `spec.teams_to_logins` - -Optional: - -- `kubernetes_groups` (List of String) KubeGroups is a list of allowed kubernetes groups for this org/team. 
-- `kubernetes_users` (List of String) KubeUsers is a list of allowed kubernetes users to impersonate for this org/team. -- `logins` (List of String) Logins is a list of allowed logins for this org/team. -- `organization` (String) Organization is a Github organization a user belongs to. -- `team` (String) Team is a team within the organization a user belongs to. - - -### Nested Schema for `spec.teams_to_roles` - -Optional: - -- `organization` (String) Organization is a Github organization a user belongs to. -- `roles` (List of String) Roles is a list of allowed logins for this org/team. -- `team` (String) Team is a team within the organization a user belongs to. - - - -### Nested Schema for `metadata` - -Required: - -- `name` (String) Name is an object name - -Optional: - -- `description` (String) Description is object description -- `expires` (String) Expires is a global expiry time header can be set on any resource in the system. -- `labels` (Map of String) Labels is a set of labels diff --git a/docs/pages/reference/terraform-provider/resources/health_check_config.mdx b/docs/pages/reference/terraform-provider/resources/health_check_config.mdx deleted file mode 100644 index 1cbaa1d7ecc95..0000000000000 --- a/docs/pages/reference/terraform-provider/resources/health_check_config.mdx +++ /dev/null @@ -1,97 +0,0 @@ ---- -title: Reference for the teleport_health_check_config Terraform resource -sidebar_label: health_check_config -description: This page describes the supported values of the teleport_health_check_config resource of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - -This page describes the supported values of the teleport_health_check_config resource of the Teleport Terraform provider. 
- - - - -## Example Usage - -```hcl -resource "teleport_health_check_config" "example" { - metadata = { - name = "example" - description = "Example health check config" - labels = { - foo = "bar" - } - } - version = "v1" - spec = { - interval = "60s" - timeout = "5s" - healthy_threshold = 3 - unhealthy_threshold = 2 - match = { - db_labels = [{ - name = "env" - values = [ - "foo", - "bar", - ] - }] - db_labels_expression = "labels.foo == `bar`" - } - } -} -``` - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `metadata` (Attributes) Metadata is the health check config resource's metadata. (see [below for nested schema](#nested-schema-for-metadata)) -- `spec` (Attributes) Spec is the health check config specification. (see [below for nested schema](#nested-schema-for-spec)) -- `version` (String) Version is the health check config version. - -### Optional - -- `sub_kind` (String) SubKind is an optional resource sub kind. - -### Nested Schema for `metadata` - -Required: - -- `name` (String) name is an object name. - -Optional: - -- `description` (String) description is object description. -- `expires` (String) expires is a global expiry time header can be set on any resource in the system. -- `labels` (Map of String) labels is a set of labels. - - -### Nested Schema for `spec` - -Required: - -- `match` (Attributes) Match is used to select resources that these settings apply to. (see [below for nested schema](#nested-schema-for-specmatch)) - -Optional: - -- `healthy_threshold` (Number) HealthyThreshold is the number of consecutive passing health checks after which a target's health status becomes "healthy". -- `interval` (String) Interval is the time between each health check. -- `timeout` (String) Timeout is the health check connection establishment timeout. An attempt that times out is a failed attempt. 
-- `unhealthy_threshold` (Number) UnhealthyThreshold is the number of consecutive failing health checks after which a target's health status becomes "unhealthy". - -### Nested Schema for `spec.match` - -Optional: - -- `db_labels` (Attributes List) DBLabels matches database labels. An empty value is ignored. The match result is logically ANDed with DBLabelsExpression, if both are non-empty. (see [below for nested schema](#nested-schema-for-specmatchdb_labels)) -- `db_labels_expression` (String) DBLabelsExpression is a label predicate expression to match databases. An empty value is ignored. The match result is logically ANDed with DBLabels, if both are non-empty. - -### Nested Schema for `spec.match.db_labels` - -Optional: - -- `name` (String) The name of the label. -- `values` (List of String) The values associated with the label. diff --git a/docs/pages/reference/terraform-provider/resources/login_rule.mdx b/docs/pages/reference/terraform-provider/resources/login_rule.mdx deleted file mode 100644 index a843324a05822..0000000000000 --- a/docs/pages/reference/terraform-provider/resources/login_rule.mdx +++ /dev/null @@ -1,77 +0,0 @@ ---- -title: Reference for the teleport_login_rule Terraform resource -sidebar_label: login_rule -description: This page describes the supported values of the teleport_login_rule resource of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - -This page describes the supported values of the teleport_login_rule resource of the Teleport Terraform provider. 
- - - - -## Example Usage - -```hcl -# Teleport Login Rule resource - -resource "teleport_login_rule" "example" { - metadata = { - description = "Example Login Rule" - labels = { - "example" = "yes" - } - } - - version = "v1" - priority = 0 - traits_map = { - "logins" = { - values = [ - "external.logins", - "external.username", - ] - } - "groups" = { - values = [ - "external.groups", - ] - } - } -} -``` - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `priority` (Number) Priority is the priority of the login rule relative to other login rules in the same cluster. Login rules with a lower numbered priority will be evaluated first. -- `version` (String) Version is the resource version. - -### Optional - -- `metadata` (Attributes) Metadata is resource metadata. (see [below for nested schema](#nested-schema-for-metadata)) -- `traits_expression` (String) TraitsExpression is a predicate expression which should return the desired traits for the user upon login. -- `traits_map` (Attributes Map) TraitsMap is a map of trait keys to lists of predicate expressions which should evaluate to the desired values for that trait. (see [below for nested schema](#nested-schema-for-traits_map)) - -### Nested Schema for `metadata` - -Required: - -- `name` (String) Name is an object name - -Optional: - -- `description` (String) Description is object description -- `expires` (String) Expires is a global expiry time header can be set on any resource in the system. 
-- `labels` (Map of String) Labels is a set of labels - - -### Nested Schema for `traits_map` - -Optional: - -- `values` (List of String) diff --git a/docs/pages/reference/terraform-provider/resources/oidc_connector.mdx b/docs/pages/reference/terraform-provider/resources/oidc_connector.mdx deleted file mode 100644 index c571bef9912ad..0000000000000 --- a/docs/pages/reference/terraform-provider/resources/oidc_connector.mdx +++ /dev/null @@ -1,124 +0,0 @@ ---- -title: Reference for the teleport_oidc_connector Terraform resource -sidebar_label: oidc_connector -description: This page describes the supported values of the teleport_oidc_connector resource of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - -This page describes the supported values of the teleport_oidc_connector resource of the Teleport Terraform provider. - - - - -## Example Usage - -```hcl -# Teleport OIDC connector -# -# Please note that the OIDC connector will work in Teleport Enterprise only. - -variable "oidc_secret" {} - -resource "teleport_oidc_connector" "example" { - version = "v3" - metadata = { - name = "example" - labels = { - test = "yes" - } - } - - spec = { - client_id = "client" - client_secret = var.oidc_secret - - claims_to_roles = [{ - claim = "test" - roles = ["terraform"] - }] - - redirect_url = ["https://example.com/redirect"] - } -} -``` - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `spec` (Attributes) Spec is an OIDC connector specification. (see [below for nested schema](#nested-schema-for-spec)) -- `version` (String) Version is the resource version. It must be specified. Supported values are: `v3`. - -### Optional - -- `metadata` (Attributes) Metadata holds resource metadata. (see [below for nested schema](#nested-schema-for-metadata)) -- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources. 
- -### Nested Schema for `spec` - -Optional: - -- `acr_values` (String) ACR is an Authentication Context Class Reference value. The meaning of the ACR value is context-specific and varies for identity providers. -- `allow_unverified_email` (Boolean) AllowUnverifiedEmail tells the connector to accept OIDC users with unverified emails. -- `claims_to_roles` (Attributes List) ClaimsToRoles specifies a dynamic mapping from claims to roles. (see [below for nested schema](#nested-schema-for-specclaims_to_roles)) -- `client_id` (String) ClientID is the id of the authentication client (Teleport Auth Service). -- `client_redirect_settings` (Attributes) ClientRedirectSettings defines which client redirect URLs are allowed for non-browser SSO logins other than the standard localhost ones. (see [below for nested schema](#nested-schema-for-specclient_redirect_settings)) -- `client_secret` (String, Sensitive) ClientSecret is used to authenticate the client. -- `display` (String) Display is the friendly name for this provider. -- `google_admin_email` (String) GoogleAdminEmail is the email of a google admin to impersonate. -- `google_service_account` (String, Sensitive) GoogleServiceAccount is a string containing google service account credentials. -- `google_service_account_uri` (String) GoogleServiceAccountURI is a path to a google service account uri. -- `issuer_url` (String) IssuerURL is the endpoint of the provider, e.g. https://accounts.google.com. -- `max_age` (String) -- `mfa` (Attributes) MFASettings contains settings to enable SSO MFA checks through this auth connector. (see [below for nested schema](#nested-schema-for-specmfa)) -- `pkce_mode` (String) PKCEMode represents the configuration state for PKCE (Proof Key for Code Exchange). It can be "enabled" or "disabled" -- `prompt` (String) Prompt is an optional OIDC prompt. An empty string omits prompt. If not specified, it defaults to select_account for backwards compatibility. 
-- `provider` (String) Provider is the external identity provider. -- `redirect_url` (List of String) RedirectURLs is a list of callback URLs which the identity provider can use to redirect the client back to the Teleport Proxy to complete authentication. This list should match the URLs on the provider's side. The URL used for a given auth request will be chosen to match the requesting Proxy's public address. If there is no match, the first url in the list will be used. -- `scope` (List of String) Scope specifies additional scopes set by provider. -- `username_claim` (String) UsernameClaim specifies the name of the claim from the OIDC connector to be used as the user's username. - -### Nested Schema for `spec.claims_to_roles` - -Optional: - -- `claim` (String) Claim is a claim name. -- `roles` (List of String) Roles is a list of static teleport roles to match. -- `value` (String) Value is a claim value to match. - - -### Nested Schema for `spec.client_redirect_settings` - -Optional: - -- `allowed_https_hostnames` (List of String) a list of hostnames allowed for https client redirect URLs -- `insecure_allowed_cidr_ranges` (List of String) a list of CIDRs allowed for HTTP or HTTPS client redirect URLs - - -### Nested Schema for `spec.mfa` - -Optional: - -- `acr_values` (String) AcrValues are Authentication Context Class Reference values. The meaning of the ACR value is context-specific and varies for identity providers. Some identity providers support MFA specific contexts, such as Okta with its "phr" (phishing-resistant) ACR. -- `client_id` (String) ClientID is the OIDC OAuth app client ID. -- `client_secret` (String) ClientSecret is the OIDC OAuth app client secret. -- `enabled` (Boolean) Enabled specifies whether this OIDC connector supports MFA checks. Defaults to false. -- `max_age` (String) MaxAge is the amount of time in nanoseconds that an IdP session is valid for. Defaults to 0 to always force re-authentication for MFA checks. 
This should only be set to a non-zero value if the IdP is set up to perform MFA checks on top of active user sessions. -- `prompt` (String) Prompt is an optional OIDC prompt. An empty string omits prompt. If not specified, it defaults to select_account for backwards compatibility. - - - -### Nested Schema for `metadata` - -Required: - -- `name` (String) Name is an object name - -Optional: - -- `description` (String) Description is object description -- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. -- `labels` (Map of String) Labels is a set of labels diff --git a/docs/pages/reference/terraform-provider/resources/provision_token.mdx b/docs/pages/reference/terraform-provider/resources/provision_token.mdx deleted file mode 100644 index 524633f7e1a60..0000000000000 --- a/docs/pages/reference/terraform-provider/resources/provision_token.mdx +++ /dev/null @@ -1,394 +0,0 @@ ---- -title: Reference for the teleport_provision_token Terraform resource -sidebar_label: provision_token -description: This page describes the supported values of the teleport_provision_token resource of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - -This page describes the supported values of the teleport_provision_token resource of the Teleport Terraform provider. 
- - - - -## Example Usage - -```hcl -# Teleport Provision Token resource - -resource "teleport_provision_token" "example" { - metadata = { - expires = "2022-10-12T07:20:51Z" - description = "Example token" - - labels = { - example = "yes" - "teleport.dev/origin" = "dynamic" // This label is added on Teleport side by default - } - } - - spec = { - roles = ["Node", "Auth"] - } -} - -resource "teleport_provision_token" "iam-token" { - metadata = { - name = "iam-token" - } - spec = { - roles = ["Bot"] - bot_name = "mybot" - join_method = "iam" - allow = [{ - aws_account = "123456789012" - }] - } -} -``` - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `spec` (Attributes) Spec is a provisioning token V2 spec (see [below for nested schema](#nested-schema-for-spec)) -- `version` (String) Version is the resource version. It must be specified. Supported values are:`v2`. - -### Optional - -- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) -- `status` (Attributes) Status is extended status information, depending on token type. It is not user writable. (see [below for nested schema](#nested-schema-for-status)) -- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources - -### Nested Schema for `spec` - -Required: - -- `roles` (List of String) Roles is a list of roles associated with the token, that will be converted to metadata in the SSH and X509 certificates issued to the user of the token - -Optional: - -- `allow` (Attributes List) Allow is a list of TokenRules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specallow)) -- `aws_iid_ttl` (String) AWSIIDTTL is the TTL to use for AWS EC2 Instance Identity Documents used to join the cluster with this token. -- `azure` (Attributes) Azure allows the configuration of options specific to the "azure" join method. 
(see [below for nested schema](#nested-schema-for-specazure)) -- `azure_devops` (Attributes) AzureDevops allows the configuration of options specific to the "azure_devops" join method. (see [below for nested schema](#nested-schema-for-specazure_devops)) -- `bitbucket` (Attributes) Bitbucket allows the configuration of options specific to the "bitbucket" join method. (see [below for nested schema](#nested-schema-for-specbitbucket)) -- `bot_name` (String) BotName is the name of the bot this token grants access to, if any -- `bound_keypair` (Attributes) BoundKeypair allows the configuration of options specific to the "bound_keypair" join method. (see [below for nested schema](#nested-schema-for-specbound_keypair)) -- `circleci` (Attributes) CircleCI allows the configuration of options specific to the "circleci" join method. (see [below for nested schema](#nested-schema-for-speccircleci)) -- `gcp` (Attributes) GCP allows the configuration of options specific to the "gcp" join method. (see [below for nested schema](#nested-schema-for-specgcp)) -- `github` (Attributes) GitHub allows the configuration of options specific to the "github" join method. (see [below for nested schema](#nested-schema-for-specgithub)) -- `gitlab` (Attributes) GitLab allows the configuration of options specific to the "gitlab" join method. (see [below for nested schema](#nested-schema-for-specgitlab)) -- `join_method` (String) JoinMethod is the joining method required in order to use this token. Supported joining methods include: azure, circleci, ec2, gcp, github, gitlab, iam, kubernetes, spacelift, token, tpm -- `kubernetes` (Attributes) Kubernetes allows the configuration of options specific to the "kubernetes" join method. (see [below for nested schema](#nested-schema-for-speckubernetes)) -- `oracle` (Attributes) Oracle allows the configuration of options specific to the "oracle" join method. 
(see [below for nested schema](#nested-schema-for-specoracle)) -- `spacelift` (Attributes) Spacelift allows the configuration of options specific to the "spacelift" join method. (see [below for nested schema](#nested-schema-for-specspacelift)) -- `suggested_agent_matcher_labels` (Map of List of String) SuggestedAgentMatcherLabels is a set of labels to be used by agents to match on resources. When an agent uses this token, the agent should monitor resources that match those labels. For databases, this means adding the labels to `db_service.resources.labels`. Currently, only node-join scripts create a configuration according to the suggestion. -- `suggested_labels` (Map of List of String) SuggestedLabels is a set of labels that resources should set when using this token to enroll themselves in the cluster. Currently, only node-join scripts create a configuration according to the suggestion. -- `terraform_cloud` (Attributes) TerraformCloud allows the configuration of options specific to the "terraform_cloud" join method. (see [below for nested schema](#nested-schema-for-specterraform_cloud)) -- `tpm` (Attributes) TPM allows the configuration of options specific to the "tpm" join method. (see [below for nested schema](#nested-schema-for-spectpm)) - -### Nested Schema for `spec.allow` - -Optional: - -- `aws_account` (String) AWSAccount is the AWS account ID. -- `aws_arn` (String) AWSARN is used for the IAM join method, the AWS identity of joining nodes must match this ARN. Supports wildcards "*" and "?". -- `aws_regions` (List of String) AWSRegions is used for the EC2 join method and is a list of AWS regions a node is allowed to join from. -- `aws_role` (String) AWSRole is used for the EC2 join method and is the ARN of the AWS role that the Auth Service will assume in order to call the ec2 API. - - -### Nested Schema for `spec.azure` - -Optional: - -- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. 
 (see [below for nested schema](#nested-schema-for-specazureallow)) - -### Nested Schema for `spec.azure.allow` - -Optional: - -- `resource_groups` (List of String) ResourceGroups is a list of Azure resource groups the node is allowed to join from. -- `subscription` (String) Subscription is the Azure subscription. - - - -### Nested Schema for `spec.azure_devops` - -Optional: - -- `allow` (Attributes List) Allow is a list of TokenRules, nodes using this token must match one allow rule to use this token. At least one allow rule must be specified. (see [below for nested schema](#nested-schema-for-specazure_devopsallow)) -- `organization_id` (String) OrganizationID specifies the UUID of the Azure DevOps organization that this join token will grant access to. This is used to identify the correct issuer verification of the ID token. This is a required field. - -### Nested Schema for `spec.azure_devops.allow` - -Optional: - -- `definition_id` (String) The ID of the AZDO pipeline definition. Example: `1` Mapped from the `def_id` claim. -- `pipeline_name` (String) The name of the AZDO pipeline. Example: `my-pipeline`. Mapped out of the `sub` claim. -- `project_id` (String) The ID of the AZDO project. Example: `271ef6f7-0000-0000-0000-4b54d9129990` Mapped from the `prj_id` claim. -- `project_name` (String) The name of the AZDO project. Example: `my-project`. Mapped out of the `sub` claim. -- `repository_ref` (String) The reference of the repository the pipeline is using. Example: `refs/heads/main`. Mapped from the `rpo_ref` claim. -- `repository_uri` (String) The URI of the repository the pipeline is using. Example: `https://github.com/gravitational/teleport.git`. Mapped from the `rpo_uri` claim. -- `repository_version` (String) The individual commit of the repository the pipeline is using. Example: `e6b9eb29a288b27a3a82cc19c48b9d94b80aff36`. Mapped from the `rpo_ver` claim. 
-- `sub` (String) Sub, also known as Subject, is a string that roughly uniquely identifies the workload. Example: `p://my-organization/my-project/my-pipeline` Mapped from the `sub` claim. - - - -### Nested Schema for `spec.bitbucket` - -Optional: - -- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specbitbucketallow)) -- `audience` (String) Audience is a Bitbucket-specified audience value for this token. It is unique to each Bitbucket repository, and must be set to the value as written in the Pipelines -> OpenID Connect section of the repository settings. -- `identity_provider_url` (String) IdentityProviderURL is a Bitbucket-specified issuer URL for incoming OIDC tokens. It is unique to each Bitbucket repository, and must be set to the value as written in the Pipelines -> OpenID Connect section of the repository settings. - -### Nested Schema for `spec.bitbucket.allow` - -Optional: - -- `branch_name` (String) BranchName is the name of the branch on which this pipeline executed. -- `deployment_environment_uuid` (String) DeploymentEnvironmentUUID is the UUID of the deployment environment targeted by this pipeline run, if any. These values may be found in the "Pipelines -> OpenID Connect -> Deployment environments" section of the repository settings. -- `repository_uuid` (String) RepositoryUUID is the UUID of the repository for which this token was issued. Bitbucket UUIDs must begin and end with braces, e.g. `{...}`. This value may be found in the Pipelines -> OpenID Connect section of the repository settings. -- `workspace_uuid` (String) WorkspaceUUID is the UUID of the workspace for which this token was issued. Bitbucket UUIDs must begin and end with braces, e.g. `{...}`. This value may be found in the Pipelines -> OpenID Connect section of the repository settings. 
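As a sketch of how the `spec.bitbucket` fields above fit together, a provision token for the "bitbucket" join method might look like the following. All UUIDs, the audience string, and the identity provider URL below are placeholders; the real values must be copied from the Pipelines -> OpenID Connect section of the Bitbucket repository settings.

```hcl
# Hypothetical example, not from the generated reference: a provision token
# for Bitbucket Pipelines. Every identifier below is a placeholder.
resource "teleport_provision_token" "bitbucket_ci" {
  version = "v2"
  metadata = {
    name = "bitbucket-ci"
  }
  spec = {
    roles       = ["Bot"]
    bot_name    = "ci-bot"
    join_method = "bitbucket"

    bitbucket = {
      # Copy both values from Pipelines -> OpenID Connect in the repo settings.
      audience              = "ari:cloud:bitbucket::workspace/00000000-0000-0000-0000-000000000000"
      identity_provider_url = "https://api.bitbucket.org/2.0/workspaces/example/pipelines-config/identity/oidc"

      allow = [{
        # Bitbucket UUIDs must include the surrounding braces.
        workspace_uuid  = "{11111111-1111-1111-1111-111111111111}"
        repository_uuid = "{22222222-2222-2222-2222-222222222222}"
        branch_name     = "main"
      }]
    }
  }
}
```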
- - - -### Nested Schema for `spec.bound_keypair` - -Optional: - -- `onboarding` (Attributes) Onboarding contains parameters related to initial onboarding and keypair registration. (see [below for nested schema](#nested-schema-for-specbound_keypaironboarding)) -- `recovery` (Attributes) Recovery contains parameters related to recovery after identity expiration. (see [below for nested schema](#nested-schema-for-specbound_keypairrecovery)) -- `rotate_after` (String) RotateAfter is an optional timestamp that forces clients to perform a keypair rotation on the next join or recovery attempt after the given date. If `LastRotatedAt` is unset or before this timestamp, a rotation will be requested. It is recommended to set this value to the current timestamp if a rotation should be triggered on the next join attempt. - -### Nested Schema for `spec.bound_keypair.onboarding` - -Optional: - -- `initial_public_key` (String) InitialPublicKey is used to preregister a public key generated by `tbot keypair create`. When set, no initial join secret is generated or made available for use, and clients must have the associated private key available to join. If set, `initial_join_secret` and `must_register_before` are ignored. This value is written in SSH authorized_keys format. -- `must_register_before` (String) MustRegisterBefore is an optional time before which registration via initial join secret must be performed. Attempts to register using an initial join secret after this timestamp will not be allowed. This may be modified after creation if necessary to allow the initial registration to take place. This value is ignored if `initial_public_key` is set. -- `registration_secret` (String) RegistrationSecret is a secret joining clients may use to register their public key on first join, which may be used instead of preregistering a public key with `initial_public_key`. If `initial_public_key` is set, this value is ignored. 
Otherwise, if set, this value will be used to populate `.status.bound_keypair.initial_join_secret`. If unset and no `initial_public_key` is provided, a random secure value will be generated server-side to populate the status field. - - -### Nested Schema for `spec.bound_keypair.recovery` - -Optional: - -- `limit` (Number) Limit is the maximum number of allowed recovery attempts. This value may be raised or lowered after creation to allow additional recovery attempts should the initial limit be exhausted. If `mode` is set to `standard`, recovery attempts will only be allowed if `.status.bound_keypair.recovery_count` is less than this limit. This limit is not enforced if `mode` is set to `relaxed` or `insecure`. This value must be at least 1 to allow for the initial join during onboarding, which counts as a recovery. -- `mode` (String) Mode sets the recovery rule enforcement mode. It may be one of these values: - standard (or unset): all configured rules enforced. The recovery limit and client join state are required and verified. This is the most secure recovery mode. - relaxed: recovery limit is not enforced, but client join state is still required. This effectively allows unlimited recovery attempts, but client join state still helps mitigate stolen credentials. - insecure: neither the recovery limit nor client join state are enforced. This allows any client with the private key to join freely. This is less secure, but can be useful in certain situations, like in otherwise unsupported CI/CD providers. This mode should be used with care, and RBAC rules should be configured to heavily restrict which resources this identity can access. - - - -### Nested Schema for `spec.circleci` - -Optional: - -- `allow` (Attributes List) Allow is a list of TokenRules, nodes using this token must match one allow rule to use this token. 
(see [below for nested schema](#nested-schema-for-speccircleciallow)) -- `organization_id` (String) - -### Nested Schema for `spec.circleci.allow` - -Optional: - -- `context_id` (String) -- `project_id` (String) - - - -### Nested Schema for `spec.gcp` - -Optional: - -- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specgcpallow)) - -### Nested Schema for `spec.gcp.allow` - -Optional: - -- `locations` (List of String) Locations is a list of regions (e.g. "us-west1") and/or zones (e.g. "us-west1-b"). -- `project_ids` (List of String) ProjectIDs is a list of project IDs (e.g. ``). -- `service_accounts` (List of String) ServiceAccounts is a list of service account emails (e.g. `-compute@developer.gserviceaccount.com`). - - - -### Nested Schema for `spec.github` - -Optional: - -- `allow` (Attributes List) Allow is a list of TokenRules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specgithuballow)) -- `enterprise_server_host` (String) EnterpriseServerHost allows joining from runners associated with a GitHub Enterprise Server instance. When unconfigured, tokens will be validated against github.com, but when configured to the host of a GHES instance, then the tokens will be validated against host. This value should be the hostname of the GHES instance, and should not include the scheme or a path. The instance must be accessible over HTTPS at this hostname and the certificate must be trusted by the Auth Service. -- `enterprise_slug` (String) EnterpriseSlug allows the slug of a GitHub Enterprise organisation to be included in the expected issuer of the OIDC tokens. This is for compatibility with the `include_enterprise_slug` option in GHE. This field should be set to the slug of your enterprise if this is enabled. If this is not enabled, then this field must be left empty. 
This field cannot be specified if `enterprise_server_host` is specified. See https://docs.github.com/en/enterprise-cloud@latest/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect#customizing-the-issuer-value-for-an-enterprise for more information about customized issuer values. -- `static_jwks` (String) StaticJWKS disables fetching of the GHES signing keys via the JWKS/OIDC endpoints, and allows them to be directly specified. This allows joining from GitHub Actions in GHES instances that are not reachable by the Teleport Auth Service. - -### Nested Schema for `spec.github.allow` - -Optional: - -- `actor` (String) The personal account that initiated the workflow run. -- `environment` (String) The name of the environment used by the job. -- `ref` (String) The git ref that triggered the workflow run. -- `ref_type` (String) The type of ref, for example: "branch". -- `repository` (String) The repository from where the workflow is running. This includes the name of the owner e.g `gravitational/teleport` -- `repository_owner` (String) The name of the organization in which the repository is stored. -- `sub` (String) Sub also known as Subject is a string that roughly uniquely identifies the workload. The format of this varies depending on the type of github action run. -- `workflow` (String) The name of the workflow. - - - -### Nested Schema for `spec.gitlab` - -Optional: - -- `allow` (Attributes List) Allow is a list of TokenRules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specgitlaballow)) -- `domain` (String) Domain is the domain of your GitLab instance. This will default to `gitlab.com` - but can be set to the domain of your self-hosted GitLab e.g `gitlab.example.com`. -- `static_jwks` (String) StaticJWKS disables fetching of the GitLab signing keys via the JWKS/OIDC endpoints, and allows them to be directly specified. 
This allows joining from GitLab CI instances that are not reachable by the Teleport Auth Service. - -### Nested Schema for `spec.gitlab.allow` - -Optional: - -- `ci_config_ref_uri` (String) CIConfigRefURI is the ref path to the top-level pipeline definition, for example, gitlab.example.com/my-group/my-project//.gitlab-ci.yml@refs/heads/main. -- `ci_config_sha` (String) CIConfigSHA is the git commit SHA for the ci_config_ref_uri. -- `deployment_tier` (String) DeploymentTier is the deployment tier of the environment the job specifies -- `environment` (String) Environment limits access by the environment the job deploys to (if one is associated) -- `environment_protected` (Boolean) EnvironmentProtected is true if the Git ref is protected, false otherwise. -- `namespace_path` (String) NamespacePath is used to limit access to jobs in a group or user's projects. Example: `mygroup` This field supports "glob-style" matching: - Use '*' to match zero or more characters. - Use '?' to match any single character. -- `pipeline_source` (String) PipelineSource limits access by the job pipeline source type. https://docs.gitlab.com/ee/ci/jobs/job_control.html#common-if-clauses-for-rules Example: `web` -- `project_path` (String) ProjectPath is used to limit access to jobs belonging to an individual project. Example: `mygroup/myproject` This field supports "glob-style" matching: - Use '*' to match zero or more characters. - Use '?' to match any single character. -- `project_visibility` (String) ProjectVisibility is the visibility of the project where the pipeline is running. Can be internal, private, or public. -- `ref` (String) Ref allows access to be limited to jobs triggered by a specific git ref. Ensure this is used in combination with ref_type. This field supports "glob-style" matching: - Use '*' to match zero or more characters. - Use '?' to match any single character. -- `ref_protected` (Boolean) RefProtected is true if the Git ref is protected, false otherwise. 
-- `ref_type` (String) RefType allows access to be limited to jobs triggered by a specific git ref type. Example: `branch` or `tag` -- `sub` (String) Sub roughly uniquely identifies the workload. Example: `project_path:mygroup/my-project:ref_type:branch:ref:main` project_path:GROUP/PROJECT:ref_type:TYPE:ref:BRANCH_NAME This field supports "glob-style" matching: - Use '*' to match zero or more characters. - Use '?' to match any single character. -- `user_email` (String) UserEmail is the email of the user executing the job -- `user_id` (String) UserID is the ID of the user executing the job -- `user_login` (String) UserLogin is the username of the user executing the job - - - -### Nested Schema for `spec.kubernetes` - -Optional: - -- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-speckubernetesallow)) -- `static_jwks` (Attributes) StaticJWKS is the configuration specific to the `static_jwks` type. (see [below for nested schema](#nested-schema-for-speckubernetesstatic_jwks)) -- `type` (String) Type controls which behavior should be used for validating the Kubernetes Service Account token. Supported values: - `in_cluster` - `static_jwks` If unset, this defaults to `in_cluster`. - -### Nested Schema for `spec.kubernetes.allow` - -Optional: - -- `service_account` (String) ServiceAccount is the namespaced name of the Kubernetes service account. Its format is "namespace:service-account". - - -### Nested Schema for `spec.kubernetes.static_jwks` - -Optional: - -- `jwks` (String) JWKS should be the JSON Web Key Set formatted public keys that the Kubernetes Cluster uses to sign service account tokens. This can be fetched from /openid/v1/jwks on the Kubernetes API Server. - - - -### Nested Schema for `spec.oracle` - -Optional: - -- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. 
(see [below for nested schema](#nested-schema-for-specoracleallow)) - -### Nested Schema for `spec.oracle.allow` - -Optional: - -- `parent_compartments` (List of String) ParentCompartments is a list of the OCIDs of compartments an instance is allowed to join from. Only direct parents are allowed, i.e. no nested compartments. If empty, any compartment is allowed. -- `regions` (List of String) Regions is a list of regions an instance is allowed to join from. Both full region names ("us-phoenix-1") and abbreviations ("phx") are allowed. If empty, any region is allowed. -- `tenancy` (String) Tenancy is the OCID of the instance's tenancy. Required. - - - -### Nested Schema for `spec.spacelift` - -Optional: - -- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. (see [below for nested schema](#nested-schema-for-specspaceliftallow)) -- `hostname` (String) Hostname is the hostname of the Spacelift tenant that tokens will originate from. E.g `example.app.spacelift.io` - -### Nested Schema for `spec.spacelift.allow` - -Optional: - -- `caller_id` (String) CallerID is the ID of the caller, ie. the stack or module that generated the run. -- `caller_type` (String) CallerType is the type of the caller, ie. the entity that owns the run - either `stack` or `module`. -- `scope` (String) Scope is the scope of the token - either `read` or `write`. See https://docs.spacelift.io/integrations/cloud-providers/oidc/#about-scopes -- `space_id` (String) SpaceID is the ID of the space in which the run that owns the token was executed. - - - -### Nested Schema for `spec.terraform_cloud` - -Optional: - -- `allow` (Attributes List) Allow is a list of Rules, nodes using this token must match one allow rule to use this token. 
(see [below for nested schema](#nested-schema-for-specterraform_cloudallow)) -- `audience` (String) Audience is the JWT audience as configured in the TFC_WORKLOAD_IDENTITY_AUDIENCE(_$TAG) variable in Terraform Cloud. If unset, defaults to the Teleport cluster name. For example, if `TFC_WORKLOAD_IDENTITY_AUDIENCE_TELEPORT=foo` is set in Terraform Cloud, this value should be `foo`. If the variable is set to match the cluster name, it does not need to be set here. -- `hostname` (String) Hostname is the hostname of the Terraform Enterprise instance expected to issue JWTs allowed by this token. This may be unset for regular Terraform Cloud use, in which case it will be assumed to be `app.terraform.io`. Otherwise, it must both match the `iss` (issuer) field included in JWTs, and provide standard JWKS endpoints. - -### Nested Schema for `spec.terraform_cloud.allow` - -Optional: - -- `organization_id` (String) OrganizationID is the ID of the HCP Terraform organization. At least one organization value is required, either ID or name. -- `organization_name` (String) OrganizationName is the human-readable name of the HCP Terraform organization. At least one organization value is required, either ID or name. -- `project_id` (String) ProjectID is the ID of the HCP Terraform project. At least one project or workspace value is required, either ID or name. -- `project_name` (String) ProjectName is the human-readable name for the HCP Terraform project. At least one project or workspace value is required, either ID or name. -- `run_phase` (String) RunPhase is the phase of the run the token was issued for, e.g. `plan` or `apply` -- `workspace_id` (String) WorkspaceID is the ID of the HCP Terraform workspace. At least one project or workspace value is required, either ID or name. -- `workspace_name` (String) WorkspaceName is the human-readable name of the HCP Terraform workspace. At least one project or workspace value is required, either ID or name. 
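To illustrate how the `spec.terraform_cloud` fields above combine, a token allowing `apply` runs from one HCP Terraform workspace might be sketched as follows. The organization, workspace, bot, and audience names are placeholders, not values from the generated reference.

```hcl
# Hypothetical example: a provision token for the "terraform_cloud" join
# method. All names below are placeholders; at least one organization value
# and one project/workspace value must identify real HCP Terraform resources.
resource "teleport_provision_token" "tfc" {
  version = "v2"
  metadata = {
    name = "terraform-cloud"
  }
  spec = {
    roles       = ["Bot"]
    bot_name    = "terraform-bot"
    join_method = "terraform_cloud"

    terraform_cloud = {
      # Only needed if TFC_WORKLOAD_IDENTITY_AUDIENCE differs from the
      # Teleport cluster name.
      audience = "example.teleport.sh"

      allow = [{
        organization_name = "example-org"
        workspace_name    = "teleport-infra"
        run_phase         = "apply"
      }]
    }
  }
}
```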
- - - -### Nested Schema for `spec.tpm` - -Optional: - -- `allow` (Attributes List) Allow is a list of Rules, the presented delegated identity must match one allow rule to permit joining. (see [below for nested schema](#nested-schema-for-spectpmallow)) -- `ekcert_allowed_cas` (List of String) EKCertAllowedCAs is a list of CA certificates that will be used to validate TPM EKCerts. When specified, joining TPMs must present an EKCert signed by one of the specified CAs. TPMs that do not present an EKCert will not be permitted to join. When unspecified, TPMs will be allowed to join with either an EKCert or an EKPubHash. - -### Nested Schema for `spec.tpm.allow` - -Optional: - -- `description` (String) Description is a human-readable description of the rule. It has no bearing on whether or not a TPM is allowed to join, but can be used to associate a rule with a specific host (e.g the asset tag of the server in which the TPM resides). Example: "build-server-100" -- `ek_certificate_serial` (String) EKCertificateSerial is the serial number of the EKCert in hexadecimal with colon separated nibbles. This value will not be checked when a TPM does not have an EKCert configured. Example: 73:df:dc:bd:af:ef:8a:d8:15:2e:96:71:7a:3e:7f:a4 -- `ek_public_hash` (String) EKPublicHash is the SHA256 hash of the EKPub marshaled in PKIX format and encoded in hexadecimal. This value will also be checked when a TPM has submitted an EKCert, and the public key in the EKCert will be used for this check. Example: d4b45864d9d6fabfc568d74f26c35ababde2105337d7af9a6605e1c56c891aa6 - - - - -### Nested Schema for `metadata` - -Optional: - -- `description` (String) Description is object description -- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. 
-- `labels` (Map of String) Labels is a set of labels -- `name` (String, Sensitive) Name is an object name - - -### Nested Schema for `status` - -Optional: - -- `bound_keypair` (Attributes) BoundKeypair contains status information related to bound_keypair type tokens. (see [below for nested schema](#nested-schema-for-statusbound_keypair)) - -### Nested Schema for `status.bound_keypair` - -Optional: - -- `bound_bot_instance_id` (String) BoundBotInstanceID is the ID of the currently associated bot instance. A new bot instance is issued on each join; the new bot instance will have a `previous_bot_instance` set to this value, if any. -- `bound_public_key` (String) BoundPublicKey contains the currently bound public key. If `.spec.bound_keypair.onboarding.initial_public_key` is set, that value will be copied here on creation, otherwise it will be populated as part of public key registration process. This value will be updated over time if keypair rotation takes place, and will always reflect the currently trusted public key. This value is written in SSH authorized_keys format. -- `last_recovered_at` (String) LastRecoveredAt contains a timestamp of the last successful recovery attempt. Note that normal renewals do not count as a recovery attempt, however onboarding does, either with a preregistered key or registration secret. This corresponds with the last time `bound_bot_instance_id` was updated. -- `last_rotated_at` (String) LastRotatedAt contains a timestamp of the last time the keypair was rotated, if any. This is not set at initial join. -- `recovery_count` (Number) RecoveryCount is a count of the total number of recoveries performed using this token. It is incremented for every successful join or rejoin. Recovery is only allowed if this value is less than `.spec.bound_keypair.recovery.limit`, or if the recovery mode is `relaxed` or `insecure`. 
-- `registration_secret` (String) RegistrationSecret contains a secret value that may be used for public key registration during the initial join process if no public key is preregistered. If `.spec.bound_keypair.onboarding.initial_public_key` is set, this field will remain empty. Otherwise, if `.spec.bound_keypair.onboarding.registration_secret` is set, that value will be copied here. If that field is unset, a value will be randomly generated. diff --git a/docs/pages/reference/terraform-provider/resources/resources.mdx b/docs/pages/reference/terraform-provider/resources/resources.mdx deleted file mode 100644 index 184a9ef06811a..0000000000000 --- a/docs/pages/reference/terraform-provider/resources/resources.mdx +++ /dev/null @@ -1,42 +0,0 @@ ---- -title: "Terraform resources index" -description: "Index of all the datasources supported by the Teleport Terraform Provider" ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - -{/* - This file will be renamed data-sources.mdx during build time. - The template name is reserved by tfplugindocs so we suffix with -index. 
-*/} - -The Teleport Terraform provider supports the following resources: - - - [`teleport_access_list`](./access_list.mdx) - - [`teleport_access_monitoring_rule`](./access_monitoring_rule.mdx) - - [`teleport_app`](./app.mdx) - - [`teleport_auth_preference`](./auth_preference.mdx) - - [`teleport_autoupdate_config`](./autoupdate_config.mdx) - - [`teleport_autoupdate_version`](./autoupdate_version.mdx) - - [`teleport_bot`](./bot.mdx) - - [`teleport_cluster_maintenance_config`](./cluster_maintenance_config.mdx) - - [`teleport_cluster_networking_config`](./cluster_networking_config.mdx) - - [`teleport_database`](./database.mdx) - - [`teleport_dynamic_windows_desktop`](./dynamic_windows_desktop.mdx) - - [`teleport_github_connector`](./github_connector.mdx) - - [`teleport_health_check_config`](./health_check_config.mdx) - - [`teleport_installer`](./installer.mdx) - - [`teleport_login_rule`](./login_rule.mdx) - - [`teleport_oidc_connector`](./oidc_connector.mdx) - - [`teleport_okta_import_rule`](./okta_import_rule.mdx) - - [`teleport_provision_token`](./provision_token.mdx) - - [`teleport_role`](./role.mdx) - - [`teleport_saml_connector`](./saml_connector.mdx) - - [`teleport_server`](./server.mdx) - - [`teleport_session_recording_config`](./session_recording_config.mdx) - - [`teleport_static_host_user`](./static_host_user.mdx) - - [`teleport_trusted_cluster`](./trusted_cluster.mdx) - - [`teleport_trusted_device`](./trusted_device.mdx) - - [`teleport_user`](./user.mdx) - - [`teleport_workload_identity`](./workload_identity.mdx) diff --git a/docs/pages/reference/terraform-provider/resources/role.mdx b/docs/pages/reference/terraform-provider/resources/role.mdx deleted file mode 100644 index 35721e10a9745..0000000000000 --- a/docs/pages/reference/terraform-provider/resources/role.mdx +++ /dev/null @@ -1,600 +0,0 @@ ---- -title: Reference for the teleport_role Terraform resource -sidebar_label: role -description: This page describes the supported values of the teleport_role 
resource of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - -This page describes the supported values of the teleport_role resource of the Teleport Terraform provider. - - - - -## Example Usage - -```hcl -# Teleport Role resource - -resource "teleport_role" "example" { - version = "v7" - metadata = { - name = "example" - description = "Example Teleport Role" - expires = "2022-10-12T07:20:51Z" - labels = { - example = "yes" - } - } - - spec = { - options = { - forward_agent = false - max_session_ttl = "7m" - ssh_port_forwarding = { - remote = { - enabled = false - } - - local = { - enabled = false - } - } - client_idle_timeout = "1h" - disconnect_expired_cert = true - permit_x11_forwarding = false - request_access = "optional" - } - - allow = { - logins = ["example"] - - rules = [{ - resources = ["user", "role"] - verbs = ["list"] - }] - - request = { - roles = ["example"] - claims_to_roles = [{ - claim = "example" - value = "example" - roles = ["example"] - }] - } - - node_labels = { - example = ["yes"] - } - } - - deny = { - logins = ["anonymous"] - } - } -} -``` - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `version` (String) Version is the resource version. It must be specified. Supported values are: `v3`, `v4`, `v5`, `v6`, `v7`. - -### Optional - -- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) -- `spec` (Attributes) Spec is a role specification (see [below for nested schema](#nested-schema-for-spec)) -- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources - -### Nested Schema for `metadata` - -Required: - -- `name` (String) Name is an object name - -Optional: - -- `description` (String) Description is object description -- `expires` (String) Expires is a global expiry time header can be set on any resource in the system. 
-- `labels` (Map of String) Labels is a set of labels - - -### Nested Schema for `spec` - -Optional: - -- `allow` (Attributes) Allow is the set of conditions evaluated to grant access. (see [below for nested schema](#nested-schema-for-specallow)) -- `deny` (Attributes) Deny is the set of conditions evaluated to deny access. Deny takes priority over allow. (see [below for nested schema](#nested-schema-for-specdeny)) -- `options` (Attributes) Options is for OpenSSH options like agent forwarding. (see [below for nested schema](#nested-schema-for-specoptions)) - -### Nested Schema for `spec.allow` - -Optional: - -- `account_assignments` (Attributes List) AccountAssignments holds the list of account assignments affected by this condition. (see [below for nested schema](#nested-schema-for-specallowaccount_assignments)) -- `app_labels` (Map of List of String) AppLabels is a map of labels used as part of the RBAC system. -- `app_labels_expression` (String) AppLabelsExpression is a predicate expression used to allow/deny access to Apps. -- `aws_role_arns` (List of String) AWSRoleARNs is a list of AWS role ARNs this role is allowed to assume. -- `azure_identities` (List of String) AzureIdentities is a list of Azure identities this role is allowed to assume. -- `cluster_labels` (Map of List of String) ClusterLabels is a map of node labels (used to dynamically grant access to clusters). -- `cluster_labels_expression` (String) ClusterLabelsExpression is a predicate expression used to allow/deny access to remote Teleport clusters. -- `db_labels` (Map of List of String) DatabaseLabels are used in RBAC system to allow/deny access to databases. -- `db_labels_expression` (String) DatabaseLabelsExpression is a predicate expression used to allow/deny access to Databases. -- `db_names` (List of String) DatabaseNames is a list of database names this role is allowed to connect to. 
-- `db_permissions` (Attributes List) DatabasePermissions specifies a set of permissions that will be granted to the database user when using automatic database user provisioning. (see [below for nested schema](#nested-schema-for-specallowdb_permissions)) -- `db_roles` (List of String) DatabaseRoles is a list of databases roles for automatic user creation. -- `db_service_labels` (Map of List of String) DatabaseServiceLabels are used in RBAC system to allow/deny access to Database Services. -- `db_service_labels_expression` (String) DatabaseServiceLabelsExpression is a predicate expression used to allow/deny access to Database Services. -- `db_users` (List of String) DatabaseUsers is a list of databases users this role is allowed to connect as. -- `desktop_groups` (List of String) DesktopGroups is a list of groups for created desktop users to be added to -- `gcp_service_accounts` (List of String) GCPServiceAccounts is a list of GCP service accounts this role is allowed to assume. -- `github_permissions` (Attributes List) GitHubPermissions defines GitHub integration related permissions. (see [below for nested schema](#nested-schema-for-specallowgithub_permissions)) -- `group_labels` (Map of List of String) GroupLabels is a map of labels used as part of the RBAC system. -- `group_labels_expression` (String) GroupLabelsExpression is a predicate expression used to allow/deny access to user groups. -- `host_groups` (List of String) HostGroups is a list of groups for created users to be added to -- `host_sudoers` (List of String) HostSudoers is a list of entries to include in a users sudoer file -- `impersonate` (Attributes) Impersonate specifies what users and roles this role is allowed to impersonate by issuing certificates or other possible means. (see [below for nested schema](#nested-schema-for-specallowimpersonate)) -- `join_sessions` (Attributes List) JoinSessions specifies policies to allow users to join other sessions. 
(see [below for nested schema](#nested-schema-for-specallowjoin_sessions)) -- `kubernetes_groups` (List of String) KubeGroups is a list of kubernetes groups -- `kubernetes_labels` (Map of List of String) KubernetesLabels is a map of kubernetes cluster labels used for RBAC. -- `kubernetes_labels_expression` (String) KubernetesLabelsExpression is a predicate expression used to allow/deny access to kubernetes clusters. -- `kubernetes_resources` (Attributes List) KubernetesResources is the Kubernetes Resources this Role grants access to. (see [below for nested schema](#nested-schema-for-specallowkubernetes_resources)) -- `kubernetes_users` (List of String) KubeUsers is an optional kubernetes users to impersonate -- `logins` (List of String) Logins is a list of *nix system logins. -- `node_labels` (Map of List of String) NodeLabels is a map of node labels (used to dynamically grant access to nodes). -- `node_labels_expression` (String) NodeLabelsExpression is a predicate expression used to allow/deny access to SSH nodes. -- `request` (Attributes) (see [below for nested schema](#nested-schema-for-specallowrequest)) -- `require_session_join` (Attributes List) RequireSessionJoin specifies policies for required users to start a session. (see [below for nested schema](#nested-schema-for-specallowrequire_session_join)) -- `review_requests` (Attributes) ReviewRequests defines conditions for submitting access reviews. (see [below for nested schema](#nested-schema-for-specallowreview_requests)) -- `rules` (Attributes List) Rules is a list of rules and their access levels. Rules are a high level construct used for access control. (see [below for nested schema](#nested-schema-for-specallowrules)) -- `spiffe` (Attributes List) SPIFFE is used to allow or deny access to a role holder to generating a SPIFFE SVID. 
(see [below for nested schema](#nested-schema-for-specallowspiffe)) -- `windows_desktop_labels` (Map of List of String) WindowsDesktopLabels are used in the RBAC system to allow/deny access to Windows desktops. -- `windows_desktop_labels_expression` (String) WindowsDesktopLabelsExpression is a predicate expression used to allow/deny access to Windows desktops. -- `windows_desktop_logins` (List of String) WindowsDesktopLogins is a list of desktop login names allowed/denied for Windows desktops. -- `workload_identity_labels` (Map of List of String) WorkloadIdentityLabels controls whether or not specific WorkloadIdentity resources can be invoked. Further authorization controls exist on the WorkloadIdentity resource itself. -- `workload_identity_labels_expression` (String) WorkloadIdentityLabelsExpression is a predicate expression used to allow/deny access to issuing a WorkloadIdentity. - -### Nested Schema for `spec.allow.account_assignments` - -Optional: - -- `account` (String) -- `permission_set` (String) - - -### Nested Schema for `spec.allow.db_permissions` - -Optional: - -- `match` (Map of List of String) Match is a list of object labels that must be matched for the permission to be granted. -- `permissions` (List of String) Permission is the list of string representations of the permission to be given, e.g. SELECT, INSERT, UPDATE, ... - - -### Nested Schema for `spec.allow.github_permissions` - -Optional: - -- `orgs` (List of String) - - -### Nested Schema for `spec.allow.impersonate` - -Optional: - -- `roles` (List of String) Roles is a list of resources this role is allowed to impersonate -- `users` (List of String) Users is a list of resources this role is allowed to impersonate, could be an empty list or a Wildcard pattern -- `where` (String) Where specifies optional advanced matcher - - -### Nested Schema for `spec.allow.join_sessions` - -Optional: - -- `kinds` (List of String) Kinds are the session kinds this policy applies to. 
-- `modes` (List of String) Modes is a list of permitted participant modes for this policy. -- `name` (String) Name is the name of the policy. -- `roles` (List of String) Roles is a list of roles that you can join the session of. - - -### Nested Schema for `spec.allow.kubernetes_resources` - -Optional: - -- `api_group` (String) APIGroup specifies the Kubernetes API group of the Kubernetes resource. It supports wildcards. -- `kind` (String) Kind specifies the Kubernetes Resource type. -- `name` (String) Name is the resource name. It supports wildcards. -- `namespace` (String) Namespace is the resource namespace. It supports wildcards. -- `verbs` (List of String) Verbs are the allowed Kubernetes verbs for the following resource. - - -### Nested Schema for `spec.allow.request` - -Optional: - -- `annotations` (Map of List of String) Annotations is a collection of annotations to be programmatically appended to pending Access Requests at the time of their creation. These annotations serve as a mechanism to propagate extra information to plugins. Since these annotations support variable interpolation syntax, they also offer a mechanism for forwarding claims from an external identity provider, to a plugin via `{{external.trait_name}}` style substitutions. -- `claims_to_roles` (Attributes List) ClaimsToRoles specifies a mapping from claims (traits) to teleport roles. (see [below for nested schema](#nested-schema-for-specallowrequestclaims_to_roles)) -- `kubernetes_resources` (Attributes List) kubernetes_resources can optionally enforce a requester to request only certain kinds of kube resources. Eg: Users can make request to either a resource kind "kube_cluster" or any of its subresources like "namespaces". This field can be defined such that it prevents a user from requesting "kube_cluster" and enforce requesting any of its subresources. 
(see [below for nested schema](#nested-schema-for-specallowrequestkubernetes_resources)) -- `max_duration` (String) MaxDuration is the amount of time the access will be granted for. If this is zero, the default duration is used. -- `reason` (Attributes) Reason defines settings for the reason for the access provided by the user. (see [below for nested schema](#nested-schema-for-specallowrequestreason)) -- `roles` (List of String) Roles is the name of roles which will match the request rule. -- `search_as_roles` (List of String) SearchAsRoles is a list of extra roles which should apply to a user while they are searching for resources as part of a Resource Access Request, and defines the underlying roles which will be requested as part of any Resource Access Request. -- `suggested_reviewers` (List of String) SuggestedReviewers is a list of reviewer suggestions. These can be teleport usernames, but that is not a requirement. -- `thresholds` (Attributes List) Thresholds is a list of thresholds, one of which must be met in order for reviews to trigger a state-transition. If no thresholds are provided, a default threshold of 1 for approval and denial is used. (see [below for nested schema](#nested-schema-for-specallowrequestthresholds)) - -### Nested Schema for `spec.allow.request.claims_to_roles` - -Optional: - -- `claim` (String) Claim is a claim name. -- `roles` (List of String) Roles is a list of static teleport roles to match. -- `value` (String) Value is a claim value to match. - - -### Nested Schema for `spec.allow.request.kubernetes_resources` - -Optional: - -- `api_group` (String) APIGroup specifies the Kubernetes Resource API group. -- `kind` (String) kind specifies the Kubernetes Resource type. - - -### Nested Schema for `spec.allow.request.reason` - -Optional: - -- `mode` (String) Mode can be either "required" or "optional". Empty string is treated as "optional". 
If a role has the request reason mode set to "required", then reason is required for all Access Requests requesting roles or resources allowed by this role. It applies only to users who have this role assigned. - - -### Nested Schema for `spec.allow.request.thresholds` - -Optional: - -- `approve` (Number) Approve is the number of matching approvals needed for state-transition. -- `deny` (Number) Deny is the number of denials needed for state-transition. -- `filter` (String) Filter is an optional predicate used to determine which reviews count toward this threshold. -- `name` (String) Name is the optional human-readable name of the threshold. - - - -### Nested Schema for `spec.allow.require_session_join` - -Optional: - -- `count` (Number) Count is the amount of people that need to be matched for this policy to be fulfilled. -- `filter` (String) Filter is a predicate that determines what users count towards this policy. -- `kinds` (List of String) Kinds are the session kinds this policy applies to. -- `modes` (List of String) Modes is the list of modes that may be used to fulfill this policy. -- `name` (String) Name is the name of the policy. -- `on_leave` (String) OnLeave is the behaviour that's used when the policy is no longer fulfilled for a live session. - - -### Nested Schema for `spec.allow.review_requests` - -Optional: - -- `claims_to_roles` (Attributes List) ClaimsToRoles specifies a mapping from claims (traits) to teleport roles. (see [below for nested schema](#nested-schema-for-specallowreview_requestsclaims_to_roles)) -- `preview_as_roles` (List of String) PreviewAsRoles is a list of extra roles which should apply to a reviewer while they are viewing a Resource Access Request for the purposes of viewing details such as the hostname and labels of requested resources. -- `roles` (List of String) Roles is the name of roles which may be reviewed. -- `where` (String) Where is an optional predicate which further limits which requests are reviewable. 
- -### Nested Schema for `spec.allow.review_requests.claims_to_roles` - -Optional: - -- `claim` (String) Claim is a claim name. -- `roles` (List of String) Roles is a list of static teleport roles to match. -- `value` (String) Value is a claim value to match. - - - -### Nested Schema for `spec.allow.rules` - -Optional: - -- `actions` (List of String) Actions specifies optional actions taken when this rule matches -- `resources` (List of String) Resources is a list of resources -- `verbs` (List of String) Verbs is a list of verbs -- `where` (String) Where specifies optional advanced matcher - - -### Nested Schema for `spec.allow.spiffe` - -Optional: - -- `dns_sans` (List of String) DNSSANs specifies matchers for the SPIFFE ID DNS SANs. Each requested DNS SAN is compared against all matchers configured and if any match, the condition is considered to be met. The matcher by default allows '*' to be used to indicate zero or more of any character. Prepend '^' and append '$' to instead switch to matching using the Go regex syntax. Example: *.example.com would match foo.example.com -- `ip_sans` (List of String) IPSANs specifies matchers for the SPIFFE ID IP SANs. Each requested IP SAN is compared against all matchers configured and if any match, the condition is considered to be met. The matchers should be specified using CIDR notation, it supports IPv4 and IPv6. Examples: - 10.0.0.0/24 would match 10.0.0.0 to 10.255.255.255 - 10.0.0.42/32 would match only 10.0.0.42 -- `path` (String) Path specifies a matcher for the SPIFFE ID path. It should not include the trust domain and should start with a leading slash. The matcher by default allows '*' to be used to indicate zero or more of any character. Prepend '^' and append '$' to instead switch to matching using the Go regex syntax. 
Example: - /svc/foo/*/bar would match /svc/foo/baz/bar - ^\/svc\/foo\/.*\/bar$ would match /svc/foo/baz/bar - - - -### Nested Schema for `spec.deny` - -Optional: - -- `account_assignments` (Attributes List) AccountAssignments holds the list of account assignments affected by this condition. (see [below for nested schema](#nested-schema-for-specdenyaccount_assignments)) -- `app_labels` (Map of List of String) AppLabels is a map of labels used as part of the RBAC system. -- `app_labels_expression` (String) AppLabelsExpression is a predicate expression used to allow/deny access to Apps. -- `aws_role_arns` (List of String) AWSRoleARNs is a list of AWS role ARNs this role is allowed to assume. -- `azure_identities` (List of String) AzureIdentities is a list of Azure identities this role is allowed to assume. -- `cluster_labels` (Map of List of String) ClusterLabels is a map of node labels (used to dynamically grant access to clusters). -- `cluster_labels_expression` (String) ClusterLabelsExpression is a predicate expression used to allow/deny access to remote Teleport clusters. -- `db_labels` (Map of List of String) DatabaseLabels are used in RBAC system to allow/deny access to databases. -- `db_labels_expression` (String) DatabaseLabelsExpression is a predicate expression used to allow/deny access to Databases. -- `db_names` (List of String) DatabaseNames is a list of database names this role is allowed to connect to. -- `db_permissions` (Attributes List) DatabasePermissions specifies a set of permissions that will be granted to the database user when using automatic database user provisioning. (see [below for nested schema](#nested-schema-for-specdenydb_permissions)) -- `db_roles` (List of String) DatabaseRoles is a list of databases roles for automatic user creation. -- `db_service_labels` (Map of List of String) DatabaseServiceLabels are used in RBAC system to allow/deny access to Database Services. 
-- `db_service_labels_expression` (String) DatabaseServiceLabelsExpression is a predicate expression used to allow/deny access to Database Services. -- `db_users` (List of String) DatabaseUsers is a list of databases users this role is allowed to connect as. -- `desktop_groups` (List of String) DesktopGroups is a list of groups for created desktop users to be added to -- `gcp_service_accounts` (List of String) GCPServiceAccounts is a list of GCP service accounts this role is allowed to assume. -- `github_permissions` (Attributes List) GitHubPermissions defines GitHub integration related permissions. (see [below for nested schema](#nested-schema-for-specdenygithub_permissions)) -- `group_labels` (Map of List of String) GroupLabels is a map of labels used as part of the RBAC system. -- `group_labels_expression` (String) GroupLabelsExpression is a predicate expression used to allow/deny access to user groups. -- `host_groups` (List of String) HostGroups is a list of groups for created users to be added to -- `host_sudoers` (List of String) HostSudoers is a list of entries to include in a users sudoer file -- `impersonate` (Attributes) Impersonate specifies what users and roles this role is allowed to impersonate by issuing certificates or other possible means. (see [below for nested schema](#nested-schema-for-specdenyimpersonate)) -- `join_sessions` (Attributes List) JoinSessions specifies policies to allow users to join other sessions. (see [below for nested schema](#nested-schema-for-specdenyjoin_sessions)) -- `kubernetes_groups` (List of String) KubeGroups is a list of kubernetes groups -- `kubernetes_labels` (Map of List of String) KubernetesLabels is a map of kubernetes cluster labels used for RBAC. -- `kubernetes_labels_expression` (String) KubernetesLabelsExpression is a predicate expression used to allow/deny access to kubernetes clusters. -- `kubernetes_resources` (Attributes List) KubernetesResources is the Kubernetes Resources this Role grants access to. 
(see [below for nested schema](#nested-schema-for-specdenykubernetes_resources)) -- `kubernetes_users` (List of String) KubeUsers is an optional kubernetes users to impersonate -- `logins` (List of String) Logins is a list of *nix system logins. -- `node_labels` (Map of List of String) NodeLabels is a map of node labels (used to dynamically grant access to nodes). -- `node_labels_expression` (String) NodeLabelsExpression is a predicate expression used to allow/deny access to SSH nodes. -- `request` (Attributes) (see [below for nested schema](#nested-schema-for-specdenyrequest)) -- `require_session_join` (Attributes List) RequireSessionJoin specifies policies for required users to start a session. (see [below for nested schema](#nested-schema-for-specdenyrequire_session_join)) -- `review_requests` (Attributes) ReviewRequests defines conditions for submitting access reviews. (see [below for nested schema](#nested-schema-for-specdenyreview_requests)) -- `rules` (Attributes List) Rules is a list of rules and their access levels. Rules are a high level construct used for access control. (see [below for nested schema](#nested-schema-for-specdenyrules)) -- `spiffe` (Attributes List) SPIFFE is used to allow or deny access to a role holder to generating a SPIFFE SVID. (see [below for nested schema](#nested-schema-for-specdenyspiffe)) -- `windows_desktop_labels` (Map of List of String) WindowsDesktopLabels are used in the RBAC system to allow/deny access to Windows desktops. -- `windows_desktop_labels_expression` (String) WindowsDesktopLabelsExpression is a predicate expression used to allow/deny access to Windows desktops. -- `windows_desktop_logins` (List of String) WindowsDesktopLogins is a list of desktop login names allowed/denied for Windows desktops. -- `workload_identity_labels` (Map of List of String) WorkloadIdentityLabels controls whether or not specific WorkloadIdentity resources can be invoked. 
Further authorization controls exist on the WorkloadIdentity resource itself. -- `workload_identity_labels_expression` (String) WorkloadIdentityLabelsExpression is a predicate expression used to allow/deny access to issuing a WorkloadIdentity. - -### Nested Schema for `spec.deny.account_assignments` - -Optional: - -- `account` (String) -- `permission_set` (String) - - -### Nested Schema for `spec.deny.db_permissions` - -Optional: - -- `match` (Map of List of String) Match is a list of object labels that must be matched for the permission to be granted. -- `permissions` (List of String) Permission is the list of string representations of the permission to be given, e.g. SELECT, INSERT, UPDATE, ... - - -### Nested Schema for `spec.deny.github_permissions` - -Optional: - -- `orgs` (List of String) - - -### Nested Schema for `spec.deny.impersonate` - -Optional: - -- `roles` (List of String) Roles is a list of resources this role is allowed to impersonate -- `users` (List of String) Users is a list of resources this role is allowed to impersonate, could be an empty list or a Wildcard pattern -- `where` (String) Where specifies optional advanced matcher - - -### Nested Schema for `spec.deny.join_sessions` - -Optional: - -- `kinds` (List of String) Kinds are the session kinds this policy applies to. -- `modes` (List of String) Modes is a list of permitted participant modes for this policy. -- `name` (String) Name is the name of the policy. -- `roles` (List of String) Roles is a list of roles that you can join the session of. - - -### Nested Schema for `spec.deny.kubernetes_resources` - -Optional: - -- `api_group` (String) APIGroup specifies the Kubernetes API group of the Kubernetes resource. It supports wildcards. -- `kind` (String) Kind specifies the Kubernetes Resource type. -- `name` (String) Name is the resource name. It supports wildcards. -- `namespace` (String) Namespace is the resource namespace. It supports wildcards. 
-- `verbs` (List of String) Verbs are the allowed Kubernetes verbs for the following resource. - - -### Nested Schema for `spec.deny.request` - -Optional: - -- `annotations` (Map of List of String) Annotations is a collection of annotations to be programmatically appended to pending Access Requests at the time of their creation. These annotations serve as a mechanism to propagate extra information to plugins. Since these annotations support variable interpolation syntax, they also offer a mechanism for forwarding claims from an external identity provider, to a plugin via `{{external.trait_name}}` style substitutions. -- `claims_to_roles` (Attributes List) ClaimsToRoles specifies a mapping from claims (traits) to teleport roles. (see [below for nested schema](#nested-schema-for-specdenyrequestclaims_to_roles)) -- `kubernetes_resources` (Attributes List) kubernetes_resources can optionally enforce a requester to request only certain kinds of kube resources. Eg: Users can make request to either a resource kind "kube_cluster" or any of its subresources like "namespaces". This field can be defined such that it prevents a user from requesting "kube_cluster" and enforce requesting any of its subresources. (see [below for nested schema](#nested-schema-for-specdenyrequestkubernetes_resources)) -- `max_duration` (String) MaxDuration is the amount of time the access will be granted for. If this is zero, the default duration is used. -- `reason` (Attributes) Reason defines settings for the reason for the access provided by the user. (see [below for nested schema](#nested-schema-for-specdenyrequestreason)) -- `roles` (List of String) Roles is the name of roles which will match the request rule. -- `search_as_roles` (List of String) SearchAsRoles is a list of extra roles which should apply to a user while they are searching for resources as part of a Resource Access Request, and defines the underlying roles which will be requested as part of any Resource Access Request. 
-- `suggested_reviewers` (List of String) SuggestedReviewers is a list of reviewer suggestions. These can be teleport usernames, but that is not a requirement. -- `thresholds` (Attributes List) Thresholds is a list of thresholds, one of which must be met in order for reviews to trigger a state-transition. If no thresholds are provided, a default threshold of 1 for approval and denial is used. (see [below for nested schema](#nested-schema-for-specdenyrequestthresholds)) - -### Nested Schema for `spec.deny.request.claims_to_roles` - -Optional: - -- `claim` (String) Claim is a claim name. -- `roles` (List of String) Roles is a list of static teleport roles to match. -- `value` (String) Value is a claim value to match. - - -### Nested Schema for `spec.deny.request.kubernetes_resources` - -Optional: - -- `api_group` (String) APIGroup specifies the Kubernetes Resource API group. -- `kind` (String) kind specifies the Kubernetes Resource type. - - -### Nested Schema for `spec.deny.request.reason` - -Optional: - -- `mode` (String) Mode can be either "required" or "optional". Empty string is treated as "optional". If a role has the request reason mode set to "required", then reason is required for all Access Requests requesting roles or resources allowed by this role. It applies only to users who have this role assigned. - - -### Nested Schema for `spec.deny.request.thresholds` - -Optional: - -- `approve` (Number) Approve is the number of matching approvals needed for state-transition. -- `deny` (Number) Deny is the number of denials needed for state-transition. -- `filter` (String) Filter is an optional predicate used to determine which reviews count toward this threshold. -- `name` (String) Name is the optional human-readable name of the threshold. - - - -### Nested Schema for `spec.deny.require_session_join` - -Optional: - -- `count` (Number) Count is the amount of people that need to be matched for this policy to be fulfilled. 
-- `filter` (String) Filter is a predicate that determines what users count towards this policy. -- `kinds` (List of String) Kinds are the session kinds this policy applies to. -- `modes` (List of String) Modes is the list of modes that may be used to fulfill this policy. -- `name` (String) Name is the name of the policy. -- `on_leave` (String) OnLeave is the behaviour that's used when the policy is no longer fulfilled for a live session. - - -### Nested Schema for `spec.deny.review_requests` - -Optional: - -- `claims_to_roles` (Attributes List) ClaimsToRoles specifies a mapping from claims (traits) to teleport roles. (see [below for nested schema](#nested-schema-for-specdenyreview_requestsclaims_to_roles)) -- `preview_as_roles` (List of String) PreviewAsRoles is a list of extra roles which should apply to a reviewer while they are viewing a Resource Access Request for the purposes of viewing details such as the hostname and labels of requested resources. -- `roles` (List of String) Roles is the name of roles which may be reviewed. -- `where` (String) Where is an optional predicate which further limits which requests are reviewable. - -### Nested Schema for `spec.deny.review_requests.claims_to_roles` - -Optional: - -- `claim` (String) Claim is a claim name. -- `roles` (List of String) Roles is a list of static teleport roles to match. -- `value` (String) Value is a claim value to match. - - - -### Nested Schema for `spec.deny.rules` - -Optional: - -- `actions` (List of String) Actions specifies optional actions taken when this rule matches -- `resources` (List of String) Resources is a list of resources -- `verbs` (List of String) Verbs is a list of verbs -- `where` (String) Where specifies optional advanced matcher - - -### Nested Schema for `spec.deny.spiffe` - -Optional: - -- `dns_sans` (List of String) DNSSANs specifies matchers for the SPIFFE ID DNS SANs. 
Each requested DNS SAN is compared against all matchers configured and if any match, the condition is considered to be met. The matcher by default allows '*' to be used to indicate zero or more of any character. Prepend '^' and append '$' to instead switch to matching using the Go regex syntax. Example: *.example.com would match foo.example.com -- `ip_sans` (List of String) IPSANs specifies matchers for the SPIFFE ID IP SANs. Each requested IP SAN is compared against all matchers configured and if any match, the condition is considered to be met. The matchers should be specified using CIDR notation; both IPv4 and IPv6 are supported. Examples: - 10.0.0.0/24 would match 10.0.0.0 to 10.0.0.255 - 10.0.0.42/32 would match only 10.0.0.42 -- `path` (String) Path specifies a matcher for the SPIFFE ID path. It should not include the trust domain and should start with a leading slash. The matcher by default allows '*' to be used to indicate zero or more of any character. Prepend '^' and append '$' to instead switch to matching using the Go regex syntax. Example: - /svc/foo/*/bar would match /svc/foo/baz/bar - ^\/svc\/foo\/.*\/bar$ would match /svc/foo/baz/bar - - - -### Nested Schema for `spec.options` - -Optional: - -- `cert_extensions` (Attributes List) CertExtensions specifies the key/values (see [below for nested schema](#nested-schema-for-specoptionscert_extensions)) -- `cert_format` (String) CertificateFormat defines the format of the user certificate to allow compatibility with older versions of OpenSSH. -- `client_idle_timeout` (String) ClientIdleTimeout sets the idle timeout disconnect behavior for clients: if set to 0, idle clients are not disconnected; otherwise clients are disconnected after the idle duration. -- `create_db_user` (Boolean) CreateDatabaseUser enables automatic database user creation. -- `create_db_user_mode` (Number) CreateDatabaseUserMode allows users to be automatically created on a database when not set to off. 0 is "unspecified", 1 is "off", 2 is "keep", 3 is "best_effort_drop".
-- `create_desktop_user` (Boolean) CreateDesktopUser allows users to be automatically created on a Windows desktop -- `create_host_user` (Boolean) Deprecated: use CreateHostUserMode instead. -- `create_host_user_default_shell` (String) CreateHostUserDefaultShell is used to configure the default shell for newly provisioned host users. -- `create_host_user_mode` (Number) CreateHostUserMode allows users to be automatically created on a host when not set to off. 0 is "unspecified"; 1 is "off"; 2 is "drop" (removed for v15 and above), 3 is "keep"; 4 is "insecure-drop". -- `desktop_clipboard` (Boolean) DesktopClipboard indicates whether clipboard sharing is allowed between the user's workstation and the remote desktop. It defaults to true unless explicitly set to false. -- `desktop_directory_sharing` (Boolean) DesktopDirectorySharing indicates whether directory sharing is allowed between the user's workstation and the remote desktop. It defaults to false unless explicitly set to true. -- `device_trust_mode` (String) DeviceTrustMode is the device authorization mode used for the resources associated with the role. See DeviceTrust.Mode. -- `disconnect_expired_cert` (Boolean) DisconnectExpiredCert sets disconnect clients on expired certificates. -- `enhanced_recording` (List of String) BPF defines what events to record for the BPF-based session recorder. -- `forward_agent` (Boolean) ForwardAgent is SSH agent forwarding. -- `idp` (Attributes) IDP is a set of options related to accessing IdPs within Teleport. Requires Teleport Enterprise. (see [below for nested schema](#nested-schema-for-specoptionsidp)) -- `lock` (String) Lock specifies the locking mode (strict|best_effort) to be applied with the role. -- `max_connections` (Number) MaxConnections defines the maximum number of concurrent connections a user may hold. -- `max_kubernetes_connections` (Number) MaxKubernetesConnections defines the maximum number of concurrent Kubernetes sessions a user may hold. 
-- `max_session_ttl` (String) MaxSessionTTL defines how long an SSH session can last. -- `max_sessions` (Number) MaxSessions defines the maximum number of concurrent sessions per connection. -- `mfa_verification_interval` (String) MFAVerificationInterval optionally defines the maximum duration that can elapse between successive MFA verifications. This variable is used to ensure that users are periodically prompted to verify their identity, enhancing security by preventing prolonged sessions without re-authentication when using tsh proxy * derivatives. It's only effective if the session requires MFA. If not set, defaults to `max_session_ttl`. -- `permit_x11_forwarding` (Boolean) PermitX11Forwarding authorizes use of X11 forwarding. -- `pin_source_ip` (Boolean) PinSourceIP forces the same client IP for certificate generation and usage. -- `port_forwarding` (Boolean) Deprecated: Use SSHPortForwarding instead. -- `record_session` (Attributes) RecordDesktopSession indicates whether desktop access sessions should be recorded. It defaults to true unless explicitly set to false. (see [below for nested schema](#nested-schema-for-specoptionsrecord_session)) -- `request_access` (String) RequestAccess defines the request strategy (optional|reason|always) where optional is the default. -- `request_prompt` (String) RequestPrompt is an optional message which tells users what they ought to request. -- `require_session_mfa` (Number) RequireMFAType is the type of MFA requirement enforced for this user. 0 is "OFF", 1 is "SESSION", 2 is "SESSION_AND_HARDWARE_KEY", 3 is "HARDWARE_KEY_TOUCH", 4 is "HARDWARE_KEY_PIN", 5 is "HARDWARE_KEY_TOUCH_AND_PIN". -- `ssh_file_copy` (Boolean) SSHFileCopy indicates whether remote file operations via SCP or SFTP are allowed over an SSH session. It defaults to true unless explicitly set to false. -- `ssh_port_forwarding` (Attributes) SSHPortForwarding configures what types of SSH port forwarding are allowed by a role.
(see [below for nested schema](#nested-schema-for-specoptionsssh_port_forwarding)) - -### Nested Schema for `spec.options.cert_extensions` - -Optional: - -- `mode` (Number) Mode is the type of extension to be used -- currently critical-option is not supported. 0 is "extension". -- `name` (String) Name specifies the key to be used in the cert extension. -- `type` (Number) Type represents the certificate type being extended, only ssh is supported at this time. 0 is "ssh". -- `value` (String) Value specifies the value to be used in the cert extension. - - -### Nested Schema for `spec.options.idp` - -Optional: - -- `saml` (Attributes) SAML are options related to the Teleport SAML IdP. (see [below for nested schema](#nested-schema-for-specoptionsidpsaml)) - -### Nested Schema for `spec.options.idp.saml` - -Optional: - -- `enabled` (Boolean) Enabled is set to true if this option allows access to the Teleport SAML IdP. - - - -### Nested Schema for `spec.options.record_session` - -Optional: - -- `default` (String) Default indicates the default value for the services. -- `desktop` (Boolean) Desktop indicates whether desktop sessions should be recorded. It defaults to true unless explicitly set to false. -- `ssh` (String) SSH indicates the session mode used on SSH sessions. - - -### Nested Schema for `spec.options.ssh_port_forwarding` - -Optional: - -- `local` (Attributes) Allow local port forwarding. (see [below for nested schema](#nested-schema-for-specoptionsssh_port_forwardinglocal)) -- `remote` (Attributes) Allow remote port forwarding. 
(see [below for nested schema](#nested-schema-for-specoptionsssh_port_forwardingremote)) - -### Nested Schema for `spec.options.ssh_port_forwarding.local` - -Optional: - -- `enabled` (Boolean) - - -### Nested Schema for `spec.options.ssh_port_forwarding.remote` - -Optional: - -- `enabled` (Boolean) diff --git a/docs/pages/reference/terraform-provider/resources/saml_connector.mdx b/docs/pages/reference/terraform-provider/resources/saml_connector.mdx deleted file mode 100644 index 4f5ec67454e15..0000000000000 --- a/docs/pages/reference/terraform-provider/resources/saml_connector.mdx +++ /dev/null @@ -1,160 +0,0 @@ ---- -title: Reference for the teleport_saml_connector Terraform resource -sidebar_label: saml_connector -description: This page describes the supported values of the teleport_saml_connector resource of the Teleport Terraform provider. ---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - -This page describes the supported values of the teleport_saml_connector resource of the Teleport Terraform provider. - - - - -## Example Usage - -```hcl -# Teleport SAML connector -# -# Please note that the SAML connector will work in Teleport Enterprise only. - -resource "teleport_saml_connector" "example" { - version = "v2" - # This block tells Terraform to never update the private key from our side if the keys are managed - # outside of Terraform.
- - # lifecycle { - # ignore_changes = [ - # spec[0].signing_key_pair[0].cert, - # spec[0].signing_key_pair[0].private_key, - # spec[0].assertion_key_pair[0].cert, - # spec[0].assertion_key_pair[0].private_key, - # ] - # } - - # This section tells Terraform that the role example must be created before the SAML connector - depends_on = [ - teleport_role.example - ] - - metadata = { - name = "example" - } - - spec = { - attributes_to_roles = [{ - name = "groups" - roles = ["example"] - value = "okta-admin" - }, - { - name = "groups" - roles = ["example"] - value = "okta-dev" - }] - - acs = "https://localhost:3025/v1/webapi/saml/acs" - entity_descriptor = "" - } -} -``` - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `spec` (Attributes) Spec is a SAML connector specification. (see [below for nested schema](#nested-schema-for-spec)) - `version` (String) Version is the resource version. It must be specified. Supported values are: `v2`. - -### Optional - -- `metadata` (Attributes) Metadata holds resource metadata. (see [below for nested schema](#nested-schema-for-metadata)) -- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources. - -### Nested Schema for `spec` - -Required: - -- `acs` (String) AssertionConsumerService is a URL for the assertion consumer service on the service provider (Teleport's side). -- `attributes_to_roles` (Attributes List) AttributesToRoles is a list of mappings of attribute statements to roles. (see [below for nested schema](#nested-schema-for-specattributes_to_roles)) - -Optional: - -- `allow_idp_initiated` (Boolean) AllowIDPInitiated is a flag that indicates if the connector can be used for IdP-initiated logins. -- `assertion_key_pair` (Attributes) EncryptionKeyPair is a key pair used for decrypting SAML assertions. (see [below for nested schema](#nested-schema-for-specassertion_key_pair)) -- `audience` (String) Audience uniquely identifies our service provider.
-- `cert` (String, Sensitive) Cert is the identity provider certificate PEM. IDP signs `` responses using this certificate. -- `client_redirect_settings` (Attributes) ClientRedirectSettings defines which client redirect URLs are allowed for non-browser SSO logins other than the standard localhost ones. (see [below for nested schema](#nested-schema-for-specclient_redirect_settings)) -- `display` (String) Display controls how this connector is displayed. -- `entity_descriptor` (String, Sensitive) EntityDescriptor is XML with descriptor. It can be used to supply configuration parameters in one XML file rather than supplying them in the individual elements. -- `entity_descriptor_url` (String) EntityDescriptorURL is a URL that supplies a configuration XML. -- `force_authn` (Number) ForceAuthn specifies whether re-authentication should be forced on login. UNSPECIFIED is treated as NO. -- `issuer` (String) Issuer is the identity provider issuer. -- `mfa` (Attributes) MFASettings contains settings to enable SSO MFA checks through this auth connector. (see [below for nested schema](#nested-schema-for-specmfa)) -- `preferred_request_binding` (String) PreferredRequestBinding is a preferred SAML request binding method. Value must be either "http-post" or "http-redirect". In general, the SAML identity provider lists the request binding methods it supports, and the SAML service provider uses the IdP-supported request binding method that it prefers. But we never honored the request binding value provided by the IdP and always used http-redirect binding as a default. Setting the PreferredRequestBinding value lets us preserve the existing auth connector behavior and only use http-post binding if it is explicitly configured. -- `provider` (String) Provider is the external identity provider. -- `service_provider_issuer` (String) ServiceProviderIssuer is the issuer of the service provider (Teleport).
-- `signing_key_pair` (Attributes) SigningKeyPair is an x509 key pair used to sign AuthnRequest. (see [below for nested schema](#nested-schema-for-specsigning_key_pair)) -- `single_logout_url` (String) SingleLogoutURL is the SAML Single log-out URL to initiate SAML SLO (single log-out). If this is not provided, SLO is disabled. -- `sso` (String) SSO is the URL of the identity provider's SSO service. - -### Nested Schema for `spec.attributes_to_roles` - -Optional: - -- `claim` (String) Claim is a claim name. - `name` (String) Name is an attribute statement name. -- `roles` (List of String) Roles is a list of static teleport roles to map to. -- `value` (String) Value is an attribute statement value to match. - - -### Nested Schema for `spec.assertion_key_pair` - -Optional: - -- `cert` (String) Cert is a PEM-encoded x509 certificate. -- `private_key` (String, Sensitive) PrivateKey is a PEM-encoded x509 private key. - - -### Nested Schema for `spec.client_redirect_settings` - -Optional: - -- `allowed_https_hostnames` (List of String) a list of hostnames allowed for https client redirect URLs -- `insecure_allowed_cidr_ranges` (List of String) a list of CIDRs allowed for HTTP or HTTPS client redirect URLs - - -### Nested Schema for `spec.mfa` - -Optional: - -- `cert` (String) Cert is the identity provider certificate PEM. IDP signs `` responses using this certificate. -- `enabled` (Boolean) Enabled specifies whether this SAML connector supports MFA checks. Defaults to false. -- `entity_descriptor` (String) EntityDescriptor is XML with descriptor. It can be used to supply configuration parameters in one XML file rather than supplying them in the individual elements. Usually set from EntityDescriptorUrl. -- `entity_descriptor_url` (String) EntityDescriptorUrl is a URL that supplies a configuration XML. -- `force_authn` (Number) ForceAuthn specifies whether re-authentication should be forced for MFA checks. UNSPECIFIED is treated as YES, to always require re-authentication for MFA checks.
This should only be set to NO if the IdP is set up to perform MFA checks on top of active user sessions. -- `issuer` (String) Issuer is the identity provider issuer. Usually set from EntityDescriptor. -- `sso` (String) SSO is the URL of the identity provider's SSO service. Usually set from EntityDescriptor. - - -### Nested Schema for `spec.signing_key_pair` - -Optional: - -- `cert` (String) Cert is a PEM-encoded x509 certificate. -- `private_key` (String, Sensitive) PrivateKey is a PEM-encoded x509 private key. - - - -### Nested Schema for `metadata` - -Required: - -- `name` (String) Name is an object name - -Optional: - -- `description` (String) Description is object description -- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. -- `labels` (Map of String) Labels is a set of labels diff --git a/docs/pages/reference/terraform-provider/resources/session_recording_config.mdx b/docs/pages/reference/terraform-provider/resources/session_recording_config.mdx deleted file mode 100644 index 39c83b9a4c2cf..0000000000000 --- a/docs/pages/reference/terraform-provider/resources/session_recording_config.mdx +++ /dev/null @@ -1,63 +0,0 @@ ---- -title: Reference for the teleport_session_recording_config Terraform resource -sidebar_label: session_recording_config -description: This page describes the supported values of the teleport_session_recording_config resource of the Teleport Terraform provider.
- - - -## Example Usage - -```hcl -# Teleport session recording config - -resource "teleport_session_recording_config" "example" { - version = "v2" - metadata = { - description = "Session recording config" - labels = { - "example" = "yes" - "teleport.dev/origin" = "dynamic" // This label is added on Teleport side by default - } - } - - spec = { - proxy_checks_host_keys = true - } -} -``` - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `version` (String) Version is the resource version. It must be specified. Supported values are: `v2`. - -### Optional - -- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) -- `spec` (Attributes) Spec is a SessionRecordingConfig specification (see [below for nested schema](#nested-schema-for-spec)) -- `sub_kind` (String) SubKind is an optional resource sub kind, used in some resources - -### Nested Schema for `metadata` - -Optional: - -- `description` (String) Description is object description -- `expires` (String) Expires is a global expiry time header that can be set on any resource in the system. -- `labels` (Map of String) Labels is a set of labels - - -### Nested Schema for `spec` - -Optional: - -- `mode` (String) Mode controls where (or if) the session is recorded. -- `proxy_checks_host_keys` (Boolean) ProxyChecksHostKeys is used to control if the proxy will check host keys when in recording mode. diff --git a/docs/pages/reference/terraform-provider/resources/trusted_device.mdx b/docs/pages/reference/terraform-provider/resources/trusted_device.mdx deleted file mode 100644 index 760e1687e6331..0000000000000 --- a/docs/pages/reference/terraform-provider/resources/trusted_device.mdx +++ /dev/null @@ -1,66 +0,0 @@ ---- -title: Reference for the teleport_trusted_device Terraform resource -sidebar_label: trusted_device -description: This page describes the supported values of the teleport_trusted_device resource of the Teleport Terraform provider.
---- - -{/*Auto-generated file. Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - -This page describes the supported values of the teleport_trusted_device resource of the Teleport Terraform provider. - - - - -## Example Usage - -```hcl -# Trusted device resource - -resource "teleport_trusted_device" "TESTDEVICE1" { - spec = { - asset_tag = "TESTDEVICE1" - os_type = "macos" - } -} -``` - -{/* schema generated by tfplugindocs */} -## Schema - -### Required - -- `version` (String) Version is the API version used to create the resource. It must be specified. Based on this version, Teleport will apply different defaults on resource creation or deletion. It must be an integer prefixed by "v". For example: `v1` - -### Optional - -- `metadata` (Attributes) Metadata is resource metadata (see [below for nested schema](#nested-schema-for-metadata)) -- `spec` (Attributes) Specification of the device. (see [below for nested schema](#nested-schema-for-spec)) - -### Nested Schema for `metadata` - -Optional: - -- `labels` (Map of String) Labels is a set of labels -- `name` (String) Name is an object name - - -### Nested Schema for `spec` - -Required: - -- `asset_tag` (String) -- `os_type` (String) - -Optional: - -- `enroll_status` (String) -- `owner` (String) -- `source` (Attributes) (see [below for nested schema](#nested-schema-for-specsource)) - -### Nested Schema for `spec.source` - -Optional: - -- `name` (String) -- `origin` (String) diff --git a/docs/pages/reference/terraform-provider/terraform-provider.mdx b/docs/pages/reference/terraform-provider/terraform-provider.mdx deleted file mode 100644 index 5c72f911076ce..0000000000000 --- a/docs/pages/reference/terraform-provider/terraform-provider.mdx +++ /dev/null @@ -1,190 +0,0 @@ ---- -title: "Teleport Terraform Provider" -description: Reference documentation of the Teleport Terraform provider. ---- - -{/*Auto-generated file. 
Do not edit.*/} -{/*To regenerate, navigate to integrations/terraform and run `make docs`.*/} - -The Teleport Terraform provider allows Terraform users to configure Teleport -from Terraform. - -This section is the Teleport Terraform Provider reference. -It lists all the supported resources and their fields. - - -To get started with the Terraform provider, you must start with [the installation -guide](../../admin-guides/infrastructure-as-code/terraform-provider/terraform-provider.mdx). -Once you have a working provider, we recommend following the -["Managing users and roles with IaC"]( -../../admin-guides/infrastructure-as-code/managing-resources/user-and-role.mdx) guide. - - -The provider exposes Teleport resources both as Terraform data-sources and Terraform resources. -Data-sources are used by Terraform to fetch information from Teleport, while resources are used -to create resources in Teleport. - -{/* Note: the awkward `resource-index` file names are here because `data-sources` -is reserved by the generator for the catch-all resource template */} - -- [list of supported resources](./resources/resources.mdx) -- [list of supported data-sources](./data-sources/data-sources.mdx) - -## Example Usage - - - - -```hcl -terraform { - required_providers { - teleport = { - source = "terraform.releases.teleport.dev/gravitational/teleport" - version = "~> (=teleport.major_version=).0" - } - } -} - -provider "teleport" { - # Update addr to point to Teleport Auth/Proxy - # addr = "auth.example.com:3025" - addr = "proxy.example.com:443" - identity_file_path = "terraform-identity/identity" -} -``` - - - - -```hcl -terraform { - required_providers { - teleport = { - source = "terraform.releases.teleport.dev/gravitational/teleport" - version = "~> (=cloud.major_version=).0" - } - } -} - -provider "teleport" { - # Update addr to point to your Teleport Enterprise (managed) tenant URL's host:port - addr = "mytenant.teleport.sh:443" - identity_file_path = 
"terraform-identity/identity" -} -``` - - - - -## Connection methods - -This section lists the different ways of passing credentials to the Terraform provider. -You can find which method fits your use case in -the [Teleport Terraform provider setup -page](../../admin-guides/infrastructure-as-code/terraform-provider/terraform-provider.mdx) - -### With an identity file - -With this connection method, you must provide an identity file.This file allows Terraform to connect both via the Proxy -Service (ports 443 or 3080) and via the Auth Service (port 3025). This is the recommended way of passing credentials to -the Terraform provider. - -The identity file can be obtained via several ways: - -### Obtaining an identity file locally with `tctl` - -Since 16.2, you can use `tctl` and your local credentials to create a temporary bot and load its identity -in your shell's environment variables: - -```code -$ eval "$(tctl terraform env)" -🔑 Detecting if MFA is required -This is an admin-level action and requires MFA to complete -Tap any security key -Detected security key tap -⚙️ Creating temporary bot "tctl-terraform-env-82ab1a2e" and its token -🤖 Using the temporary bot to obtain certificates -🚀 Certificates obtained, you can now use Terraform in this terminal for 1h0m0s -``` - -You can find more information in -the ["Run the Terraform provider locally" guide](../../admin-guides/infrastructure-as-code/terraform-provider/local.mdx) - -#### Obtaining an identity file via `tbot` - -`tbot` relies on [MachineID](../../enroll-resources/machine-id/introduction.mdx) to obtain and automatically renew -short-lived credentials. Such credentials are harder to exfiltrate, and you can control more precisely who has access to -which roles (e.g. you can allow only GitHub Actions pipelines targeting the `prod` environment to get certificates). 
- -You can follow [the Terraform Provider -guide](../../admin-guides/infrastructure-as-code/terraform-provider/terraform-provider.mdx) to set up `tbot` -and have Terraform use its identity. - -#### Obtaining an identity file via `tctl auth sign` - -You can obtain an identity file with the command: - -```code -$ tctl auth sign --user terraform --format file -o identity.pem -``` - -This auth method has the following limitations: -- Such credentials are high-privileged and long-lived. They must be protected and rotated. -- This auth method does not work against Teleport clusters with MFA set to `webauthn`. - On such clusters, Teleport will reject any long-lived certificate and require - [an additional MFA challenge for administrative actions](../../admin-guides/access-controls/guides/mfa-for-admin-actions.mdx). - -### With a token (native MachineID) - -Starting with 16.2, the Teleport Terraform provider can natively use MachineID (without `tbot`) to join a Teleport -cluster. The Terraform Provider will rely on its runtime (AWS, GCP, Kubernetes, CI/CD system) to prove its identity to -Teleport. - -You can use any [delegated join method](../join-methods.mdx#delegated-join-methods) by setting -both `join_method` and `join_token` in the provider configuration. - -This setup is described in more detail in -the ["Run the Teleport Terraform provider in CI or Cloud" guide](../../admin-guides/infrastructure-as-code/terraform-provider/ci-or-cloud.mdx). - -### With key, certificate, and CA certificate - -With this connection method, you must provide a TLS key, a TLS certificate, and the Teleport Auth Service TLS CA certificates. -Those can be obtained with the command: - -```code -$ tctl auth sign --user terraform --format=tls -o terraform.pem -``` - -This auth method has the following limitations: -- The provider can only connect to the Auth Service directly (port 3025). On most clusters, only the proxy is publicly exposed. -- Such credentials are high-privileged and long-lived.
They must be protected and rotated. -- This auth method does not work against Teleport clusters with MFA set to `webauthn`. - On such clusters, Teleport will reject any long-lived certificate and require - [an additional MFA challenge for administrative actions](../../admin-guides/access-controls/guides/mfa-for-admin-actions.mdx). - -{/* schema generated by tfplugindocs */} -## Schema - -### Optional - -- `addr` (String) host:port of the Teleport address. This can be the Teleport Proxy Service address (port 443 or 3080) or the Teleport Auth Service address (port 3025). This can also be set with the environment variable `TF_TELEPORT_ADDR`. -- `audience_tag` (String) Name of the optional audience tag used for native Machine ID joining with the `terraform` method. This can also be set with the environment variable `TF_TELEPORT_JOIN_AUDIENCE_TAG`. -- `cert_base64` (String) Base64 encoded TLS auth certificate. This can also be set with the environment variable `TF_TELEPORT_CERT_BASE64`. -- `cert_path` (String) Path to Teleport auth certificate file. This can also be set with the environment variable `TF_TELEPORT_CERT`. -- `dial_timeout_duration` (String) DialTimeout sets the timeout when trying to connect to the server. This can also be set with the environment variable `TF_TELEPORT_DIAL_TIMEOUT_DURATION`. -- `gitlab_id_token_env_var` (String) Environment variable used to fetch the ID token issued by GitLab for the `gitlab` join method. If unset, this defaults to `TBOT_GITLAB_JWT`. This can also be set with the environment variable `TF_TELEPORT_GITLAB_ID_TOKEN_ENV_VAR`.
This can also be set with the environment variable `TF_TELEPORT_IDENTITY_FILE_PATH`. -- `join_method` (String) Enables the native Terraform MachineID support. When set, Terraform uses MachineID to securely join the Teleport cluster and obtain credentials. See [the join method reference](../join-methods.mdx) for possible values. You must use [a delegated join method](../join-methods.mdx#secret-vs-delegated). This can also be set with the environment variable `TF_TELEPORT_JOIN_METHOD`. -- `join_token` (String) Name of the token used for the native MachineID joining. This value is not sensitive for [delegated join methods](../join-methods.mdx#secret-vs-delegated). This can also be set with the environment variable `TF_TELEPORT_JOIN_TOKEN`. -- `key_base64` (String, Sensitive) Base64 encoded TLS auth key. This can also be set with the environment variable `TF_TELEPORT_KEY_BASE64`. -- `key_path` (String) Path to Teleport auth key file. This can also be set with the environment variable `TF_TELEPORT_KEY`. -- `profile_dir` (String) Teleport profile path. This can also be set with the environment variable `TF_TELEPORT_PROFILE_PATH`. -- `profile_name` (String) Teleport profile name. This can also be set with the environment variable `TF_TELEPORT_PROFILE_NAME`. -- `retry_base_duration` (String) Retry algorithm when the API returns 'not found': base duration between retries (https://pkg.go.dev/time#ParseDuration). This can also be set with the environment variable `TF_TELEPORT_RETRY_BASE_DURATION`. -- `retry_cap_duration` (String) Retry algorithm when the API returns 'not found': max duration between retries (https://pkg.go.dev/time#ParseDuration). This can also be set with the environment variable `TF_TELEPORT_RETRY_CAP_DURATION`. -- `retry_max_tries` (String) Retry algorithm when the API returns 'not found': max tries. This can also be set with the environment variable `TF_TELEPORT_RETRY_MAX_TRIES`. -- `root_ca_base64` (String) Base64 encoded Root CA. 
This can also be set with the environment variable `TF_TELEPORT_CA_BASE64`. -- `root_ca_path` (String) Path to Teleport Root CA. This can also be set with the environment variable `TF_TELEPORT_ROOT_CA`. - diff --git a/docs/pages/reference/workload-identity/workload-identity.mdx b/docs/pages/reference/workload-identity/workload-identity.mdx deleted file mode 100644 index e82f4221df44b..0000000000000 --- a/docs/pages/reference/workload-identity/workload-identity.mdx +++ /dev/null @@ -1,6 +0,0 @@ ---- -title: Workload Identity References -description: Configuration and CLI reference for Teleport Workload Identity ---- - - diff --git a/docs/pages/upcoming-releases.mdx b/docs/pages/upcoming-releases.mdx index d60122c14c809..624cb7cd30d18 100644 --- a/docs/pages/upcoming-releases.mdx +++ b/docs/pages/upcoming-releases.mdx @@ -1,117 +1,91 @@ --- title: Teleport Upcoming Releases description: A timeline of upcoming Teleport releases. -h1: Upcoming Teleport Releases +tags: + - reference + - platform-wide --- -The Teleport team delivers a new major release roughly every 3 months. +Teleport releases a new major version once per year, and provides +security-critical support for the current and previous major version. We +support each major version for 24 months. -## Teleport - -### Teleport 12.3.0 - -#### Automatic user creation for desktop access - -Teleport's passwordless access for local users will provide the ability to -create Windows users on demand, similar to how host-user creation works for SSH -servers. - -#### SFTP UX enhancements - -Will include fixes for wildcard patterns, excessive audit logging and PyCharm -compatibility. 
[#22748](https://github.com/gravitational/teleport/issues/22748) -[#21518](https://github.com/gravitational/teleport/issues/21518) -[#22982](https://github.com/gravitational/teleport/issues/22982) -[#22263](https://github.com/gravitational/teleport/issues/22263) -[#20863](https://github.com/gravitational/teleport/issues/20863) -[#23593](https://github.com/gravitational/teleport/issues/23593) - -| Version | Date | -|---------|---------------| -| 12.3.0 | May 1st, 2023 | - -### Teleport 13.0.0 +The most recent major version of Teleport, referred to as the **current version**, +is the only major version of Teleport that will receive new features. The +previous major version, referred to as the **stable version**, will only receive +bug fixes and security patches. -| Version | Date | -|----------------|------------------| -| 13.0.0-alpha.1 | April 17th, 2023 | -| 13.0.0 | May 8th, 2023 | - -#### Automatic updates - -Users will be able to configure Teleport Agents to upgrade automatically. - -#### TLS routing through ALB for server access and Kubernetes access - -Teleport will support server access and Kubernetes access through load -balancers in TLS routing mode. - -#### Ability to import applications and groups from Okta +## Teleport -Teleport will be able to imports Okta apps and groups and users will be able to -use Access Requests with Okta apps and groups. +### Supported releases -#### AWS OpenSearch support for database access +We continue to support the following major versions of Teleport: -Database access will add support for connecting to AWS OpenSearch. 
+| Version | Release | Release Date | EOL | Minimum `tsh` version | +|---------|---------|-------------------|------------------|-----------------------| +| Current | v18.x | July 3, 2025 | August 2027 | v17.0.0 | +| Stable | v17.x | November 16, 2024 | August 2026 | v16.0.0 | -#### Cross-Cluster Search for Teleport Connect +### Upcoming releases -Teleport Connect will include a new search experience, allowing you to search -for and connect to resources across all logged-in clusters. +We plan to release the following versions in the coming months: -#### Kubernetes access performance improvements +| Version | Date | +|---------|----------------------------| +| 18.7.0 | Week of January 26, 2026 | +| 19.0.0 | August 2026 | -Users will experience better performance when interacting with Kubernetes clusters. +### 18.7.0 -#### Universal binaries (including Apple Silicon) for macOS +#### VNet for Linux -Teleport will run natively on ARM Macs, without the need for Rosetta emulation. +Teleport VNet support will be extended to Linux workstations. -#### Simplified RDS onboarding flow in Access Management UI +#### Session timeline view for Identity Security -Teleport Access Management UI will provide ability to connect to AWS account -and select RDS databases for onboarding. +The session player for Identity Security users will receive an enhanced timeline view with +a per-command session breakdown. -#### Light theme for Web UI +#### Improvements to access list creation UX -Teleport's web UI will ship with an optional light theme. +Teleport will provide a guided in-product UX for creating common types of access lists, +centered on granting users permissions to resources and permissions to request +access to resources. -#### Other +#### Terraform-native flow for configuration of AWS EC2 auto-discovery -In addition, the following improvements +Teleport will provide an in-product UX for configuring EC2 auto-discovery in a single AWS +account using a Terraform module. 
-- Improved SQL Server PKINIT Auth support -- Session recording video export support for Teleport-protected desktops -- Add support for locking to Web UI -- Device listing in Web UI +#### Static labels for auto-discovered Windows desktops -### 13.1.0 +Teleport can now be configured to apply a set of static labels to Windows +desktops that it discovers via LDAP. This is an alternative to setting labels +based on the value of LDAP attributes. -#### GCP Auto-Discovery for server access +#### Access requests privilege escalation UX for AWS -Teleport will be able to automatically enroll GCP VM for server access. +Teleport users will be able to see the specific IAM roles available to them when requesting +elevated access to the AWS CLI or console. Future releases will extend support for specific +principal selection to access requests for other resource types as well. -#### OpsGenie Access Request Plugin +#### RoleV8 support in the Terraform provider -Teleport will include OpsGenie access plugin. +The Teleport Terraform provider will support RoleV8 out of the box. -*Delayed from Teleport 13.0.0.* +#### Entra ID integration status page -| Version | Date | -|---------|----------------| -| 12.3.0 | June 1st, 2023 | +Teleport users will be able to see the status of the configured Entra ID integration in the +web UI. ## Teleport Cloud The key deliverables for Teleport Cloud in the next quarter: -| Week of | Description | -|------------------|------------------------------------------------------| -| April 10th, 2023 | Teleport 12.2 begins to roll out for Cloud customers | -| May 8th, 2023 | Add support for hosted Access Request plugins | -| June 5th, 2023 | Teleport 13.1 begins to roll out for Cloud customers | -| June 5th, 2023 | Advanced Audit Log rollout begins | +| Week of | Description | +|--------------------|--------------------------------------------------------------| +| December 29, 2025 | Teleport 18.6 will begin rollout on Cloud. 
| +| December 29, 2025 | Teleport 18.6 agents will begin rollout to eligible tenants. | ## Production readiness @@ -152,18 +126,6 @@ minor release, such as (=teleport.major_version=).1.0. Patch releases contain small bug fixes and can typically be deployed directly to production. -## Preview Policy - -A product or feature may be marked as "Preview" if any of the following is -true. - -- The product or feature is missing key functionality. For example, Microsoft - SQL Server support for database access initially shipped without audit - logging support. -- The product or feature has NOT been adopted by at least 2 existing customers. -- The product or feature has NOT been through a security audit. -- The documentation for the product or feature is incomplete. -- Assets (e.g. binaries) associated with the feature are not published to our - official downloads page. +## Version compatibility -Use preview features in production if the above issues are not a concern. +(!docs/pages/includes/compatibility.mdx!) diff --git a/docs/pages/upgrading/agent-managed-updates-v1.mdx b/docs/pages/upgrading/agent-managed-updates-v1.mdx index 397d473d56660..e193c6219fe58 100644 --- a/docs/pages/upgrading/agent-managed-updates-v1.mdx +++ b/docs/pages/upgrading/agent-managed-updates-v1.mdx @@ -1,23 +1,29 @@ --- -title: Managed Updates for Teleport Agents (v1) +title: Managed Updates for Teleport Agents (legacy) description: Describes how to set up Managed Updates for Teleport Agents (v1) +tags: + - how-to + - platform-wide --- -This document describes Managed Updates for Agents (v1), which is -currently supported but will be deprecated after [Managed Updates for -Agents (v2)](./agent-managed-updates.mdx) is generally available. +This document describes Managed Updates for Agents v1, which +is a legacy system that may be removed in future versions of Teleport. + +For Managed Updates v2 instructions, see [Managed Updates for +Agents (v2)](./agent-managed-updates/agent-managed-updates.mdx). 
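Before the step-by-step setup, it may help to see the shape of the `cluster_maintenance_config` resource that drives the v1 update schedule described below. This is a minimal sketch only — the `agent_upgrades` field names follow the v1 documentation, but verify the exact schema against your cluster with `tctl get cluster_maintenance_config`:

```yaml
kind: cluster_maintenance_config
metadata:
  name: cluster-maintenance-config
spec:
  # Windows during which agents running the v1 updater may apply updates.
  agent_upgrades:
    utc_start_hour: 2
    weekdays: ["Mon", "Wed", "Fri"]
```

Applying a resource like this limits v1 agent updates to the listed windows; in Managed Updates v2 the equivalent settings live in the `autoupdate_config` resource instead.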
Managed Updates v1 uses a script called `teleport-upgrade` that is provided by the `teleport-ent-updater` package and configured by the `cluster_maintenance_config` Teleport resource. Managed Updates v2 uses a binary called `teleport-update` that is provided by the `teleport` package and configured by the `autoupdate_version` and -`autoupdate_config`. The original updater and resource are described in this document. +`autoupdate_config` resources. The original updater and resource are described in +this document. Only Enterprise versions of Teleport can use Managed Updates v1. -Please consider using [Managed Updates for Agents (v2)](./agent-managed-updates.mdx), +Please consider using [Managed Updates for Agents v2](./agent-managed-updates/agent-managed-updates.mdx), as it provides a safer, simpler, more flexible, compatible, and reliable update experience compared to Managed Updates v1. @@ -67,13 +73,10 @@ regular maintenance window. ## Prerequisites -- A Teleport Enterprise cluster. If you do not have one, [sign - up](https://goteleport.com/signup) for a free trial or consult the [Update - Reference](upgrading-reference.mdx) to read about manual updates. - Familiarity with the [Upgrading Compatibility Overview](./overview.mdx) guide, which describes the sequence in which to upgrade components in your cluster. - Teleport Agents that are not yet enrolled in Managed Updates. -- (!docs/pages/includes/tctl-tsh-prerequisite.mdx!) +- (!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!) - (!docs/pages/includes/tctl.mdx!) ## Step 1/4. Enable Managed Agent Updates @@ -119,7 +122,7 @@ Teleport cluster that agents use to determine when to check for upgrades. 1. Add the role to your Teleport user: - (!docs/pages/includes/add-role-to-user.mdx role="cmc-editor"!) + (!docs/pages/includes/add-role-to-user.mdx role="cmc-editor"!) 1. Create a cluster maintenance config in a file called `cmc.yaml`. 
The following example allows maintenance on Monday, Wednesday and Friday between @@ -198,9 +201,9 @@ Server ID Hostname Services Version Upgrader 1. Determine the Teleport version to install by querying the Teleport Proxy Service. This way, the Teleport installation has the same major version as - the automatic updater. + the updater. - Replace with the name of your automatic update + Replace with the name of your update channel. For cloud-hosted Teleport Enterprise accounts, this is always `stable/cloud`: @@ -224,7 +227,7 @@ Server ID Hostname Services Version Upgrader 1. Follow the instructions in the Teleport [installation - guide](../installation.mdx#package-repositories) to install the `teleport` + guide](../installation/linux.mdx#package-repositories) to install the `teleport` binary on your Linux server for your package manager. 1. Using your package manager, install `teleport-ent-updater` on the @@ -240,7 +243,7 @@ Server ID Hostname Services Version Upgrader The installation script detects the package manager on your Linux server and uses it to install Teleport binaries. To customize your installation, learn about the Teleport package repositories in the [installation - guide](../installation.mdx#linux). + guide](../installation/linux.mdx). 1. Confirm that the version of the `teleport` binary is the one you expect: @@ -248,19 +251,19 @@ Server ID Hostname Services Version Upgrader $ teleport version ``` -
-Running the agent as a non-root user +
+ Running the agent as a non-root user -If you changed the agent user to run as non-root, create -`/etc/teleport-upgrade.d/schedule` and grant ownership to your Teleport user: + If you changed the agent user to run as non-root, create + `/etc/teleport-upgrade.d/schedule` and grant ownership to your Teleport user: -```code -$ sudo mkdir -p /etc/teleport-upgrade.d/ -$ sudo touch /etc/teleport-upgrade.d/schedule -$ sudo chown your-teleport-user /etc/teleport-upgrade.d/schedule -``` + ```code + $ sudo mkdir -p /etc/teleport-upgrade.d/ + $ sudo touch /etc/teleport-upgrade.d/schedule + $ sudo chown your-teleport-user /etc/teleport-upgrade.d/schedule + ``` -
+
1. Verify that the upgrader can see your version endpoint by checking for upgrades: @@ -409,3 +412,152 @@ until the next reconciliation, you can trigger an event. $ sudo systemctl enable --now teleport-upgrade.timer ``` +## Working with the teleport-upgrade tool + +This section explains advanced usage of the v1 `teleport-upgrade` script for Linux. +Note that this tool is no longer actively developed, and the Managed Updates v2 +`teleport-update` binary should be used instead. +For Managed Updates v2 instructions, see [Managed Updates for +Agents (v2)](./agent-managed-updates/agent-managed-updates.mdx). + +### `teleport-upgrade` commands + +The `teleport-upgrade` tool provides some basic commands to verify and perform an +update of the Teleport Agent. + +```code +$ teleport-upgrade help +USAGE: /usr/sbin/teleport-upgrade + +Tool for automatic upgrades of Teleport Agents. + +Commands: + run check for and potentially apply a teleport upgrade. + dry-run check for new teleport version but do not upgrade. + force performs an upgrade if an upgrade is available. + version print the current version of /usr/sbin/teleport-upgrade. + help show this help text. +``` + +The `dry-run` command can be used to check for an available update without performing +an update. +```code +# Example output when teleport is already on the latest compatible version. +$ teleport-upgrade dry-run +[i] no upgrades available (14.3.14 == 14.3.14) [ 582 ] + +# Example output when an update is available. +$ teleport-upgrade dry-run +[i] an upgrade is available (13.4.14 -> 14.3.14) [ 585 ] +[i] within maintenance window, upgrade will be attempted. [ 596 ] +``` + +The `run` command performs an update if available. +```code +# Successful teleport update from 13.4.14 to 14.3.14. +$ teleport-upgrade run +[i] an upgrade is available (13.4.14 -> 14.3.14) [ 585 ] +[i] within maintenance window, upgrade will be attempted. [ 596 ] +[i] attempting apt install teleport-ent=14.3.14... [ 480 ] +[...] 
+[i] gracefully restarting Teleport (if already running) [ 449 ] + +# Teleport updates are not attempted when outside the maintenance window. +$ teleport-upgrade run +[i] an upgrade is available (13.4.14 -> 14.3.14) [ 585 ] +[i] upgrade is non-critical and we are outside of maintenance window, not attempting. [ 618 ] +``` + +The `force` command performs an update immediately even when outside the maintenance +window. +```code +$ teleport-upgrade force +[i] an upgrade is available (13.4.14 -> 14.3.14) [ 585 ] +[i] attempting apt install teleport-ent=14.3.14... [ 480 ] +[...] +[i] gracefully restarting Teleport (if already running) [ 449 ] +``` + +### Configuring the `teleport-upgrade` tool + +1. Create the upgrade configuration directory: + + ```code + $ sudo mkdir -p /etc/teleport-upgrade.d/ + ``` + +1. If you changed the agent user to run as non-root, create + `/etc/teleport-upgrade.d/schedule` and grant ownership to your Teleport user, + assigning to the name of your Teleport + user. Otherwise, you can skip this step: + + ```code + $ sudo touch /etc/teleport-upgrade.d/schedule + $ sudo chown /etc/teleport-upgrade.d/schedule + ``` + + 1. Configure the upgrader to connect to your version server and subscribe to + the right release channel: + + ```code + $ echo "/v1/webapi/automaticupgrades/channel/default" | sudo tee /etc/teleport-upgrade.d/endpoint + ``` + + Make sure not to include `https://` as a prefix to the server address, nor + suffix the endpoint with `/version`. + +### Choosing a release channel + +When [configuring the updater](#configuring-the-teleport-upgrade-tool), you can +select a release channel. + +The following channels are available for APT, YUM, and Zypper repos: + +| Channel name | Description | +|-------------------|--------------------------------------------------------------------------------------------| +| `stable/` | Receives releases for the specified major release line, i.e. 
`v(=teleport.major_version=)` | +| `stable/cloud` | Rolling channel that receives releases compatible with the current Cloud version | +| `stable/rolling` | Rolling channel that receives all published Teleport releases | + +### Updating the updater + +The updater is designed to be minimal and relatively stable, but it will +receive updates on occasion. Currently, the updater cannot update itself, so updates +to the `teleport-ent-updater` package must be performed manually. + +The `teleport-ent-updater` package will remain backwards compatible with older versions of Teleport, +so you can always update the `teleport-ent-updater` package to the latest available +version. + +### Version locking + +As of Teleport `15.1.10`, the updater enables a version-locking mechanism. +This mechanism pins the version of the Teleport package and prevents manual updates, +protecting against unintentional updates during routine system maintenance and +accidental updates by users. The updater retains the ability to update the +Teleport package, and all Teleport updates are expected to be performed by the +updater. + +The version lock is implemented using native package manager features. +To update the Teleport version manually, bypass the lock with the following commands. + +With the `apt` package manager CLI, the `--allow-change-held-packages` flag must be provided +to bypass the version lock: + +```code +$ apt-get install --allow-change-held-packages "teleport-ent=" +``` + +With the `yum` package manager CLI, the `--disableexcludes="teleport"` flag must be provided +to bypass the version lock: + +```code +$ yum install --disablerepo="*" --enablerepo="teleport" --disableexcludes="teleport" "teleport-ent-" +``` + +With the `zypper` package manager CLI, the lock must be disabled and then re-enabled +after the update: + 
+```code +$ zypper removelock "teleport-ent" +$ zypper install --repo="teleport" "teleport-ent-" +$ zypper addlock "teleport-ent" +``` diff --git a/docs/pages/upgrading/agent-managed-updates.mdx b/docs/pages/upgrading/agent-managed-updates.mdx deleted file mode 100644 index 7ba2ddba74f75..0000000000000 --- a/docs/pages/upgrading/agent-managed-updates.mdx +++ /dev/null @@ -1,561 +0,0 @@ ---- -title: Managed Updates (v2) for Teleport Agents -description: Describes how to set up Managed Updates (v2) for Teleport Agents ---- - - -This document describes Managed Updates for Agents (v2), which is gradually being rolled out to Cloud. - -For Managed Updates v1 instructions, see -[Managed Updates for Agents (v1)](./agent-managed-updates-v1.mdx). - - -In Managed Updates v2, a binary called `teleport-update` is distributed in -all Teleport packages, alongside the `teleport` binary. Admins configure updates -by managing the `autoupdate_version` and `autoupdate_config` dynamic resources. - -This document covers how to use `teleport-update` and the `autoupdate_*` -resources to manage your agent updates from Teleport. 
It describes: -- [The agent architecture](#how-it-works) -- [How to enroll existing agents](#quick-setup-for-existing-connected-linux-servers) -- [How to enroll new agents](#quick-setup-for-new-linux-servers) -- [How to configure Managed Updates v2](#configuring-managed-agent-updates) ( - [when updates happen](#configuring-the-schedule) and for self-hosted users, - [which version to update to](#setting-the-version-self-hosted-only)) -- [How to migrate to Managed Updates v2](#migrating-agents-on-linux-servers-to-managed-updates) - -`teleport-update` supports: -- Both Teleport Enterprise and Teleport Community Edition -- Both cloud and self-hosted Teleport Enterprise deployments -- Regular and FIPS variants of Teleport -- amd64 and arm64 CPU architectures -- systemd-based operating systems, regardless of the package manager used - - -The Managed Updates v2 `teleport-update` binary is backwards-compatible with the -`cluster_maintenance_config` resource. The Managed Updates v1 `teleport-upgrade` script -is forwards-compatible with the `autoupdate_config` and `autoupdate_version` resources. -Agents connected to the same cluster will all update to the same version. - -If the `autoupdate_config` resource is configured, it takes precedence over -`cluster_maintenance_config`. This allows for a safe, non-breaking, incremental -migration between Managed Updates v1 and v2. If `autoupdate_config` is not present -and `autoupdate_version` is present, the `autoupdate_config` settings are implicitly -derived from `cluster_maintenance_config`. - -Users of cloud-hosted Teleport Enterprise will be migrated to Managed Updates v2 -in the first half of 2025 and should plan to migrate their agents to `teleport-update`. - - -## How it works - -When Managed Updates are enabled, a Teleport updater is installed alongside -each new Teleport Agent. The updater communicates with the Teleport Proxy Service to -determine when an update is available and if it should perform the update now. 
- -Each agent belongs to an update group. The update schedule specifies when each -group is updated. The schedule is stored in the `autoupdate_config` resource and -can be edited via `tctl`. - -For Linux server-based installations, `teleport-update` command configures -Managed Updates locally on the server. - -For Kubernetes-based installations, the `teleport-kube-agent` Helm chart -deploys a controller that automatically updates the main Teleport container. - -Existing agents must be manually enrolled into Managed Updates. - -## Prerequisites - -- A Teleport cluster. If you do not have one, [sign - up](https://goteleport.com/signup) for a free trial or consult the - [Teleport Installation](../installation.mdx) page. -- Familiarity with the [Upgrading Compatibility Overview](./overview.mdx) guide, - which describes the sequence in which to upgrade components in your cluster. -- Teleport Agents that are not yet enrolled in Managed Updates. -- (!docs/pages/includes/tctl-tsh-prerequisite.mdx!) -- (!docs/pages/includes/tctl.mdx!) - -## Quick setup for existing connected Linux servers - -Users can enable Managed Updates v2 on Linux servers that are already running -a Teleport Agent by running the following command on every server: - -```code -$ sudo teleport-update enable -``` - - -If this command is not available, update the `teleport` package -to the latest version that is supported by your cluster. - - -The `teleport-update enable` command will disable (but not remove) -the v1 updater if present. No other action is necessary. 
- -If everything is working, the v1 updater package can be removed: - -```code -$ sudo apt remove teleport-ent-updater -``` - -If the v2 updater does not work, your installation can be reverted -back to manual updates or the v1 updater (if it has not been removed): - -```code -$ sudo teleport-update uninstall -``` - -If Teleport was installed via the apt or yum package, -`teleport-update uninstall` will revert the running version of Teleport back to -the version provided by the package. - -## Quick setup for new Linux servers - -The [Install Script](../installation.mdx#one-line-installation-script) is the -fastest way to onboard new Linux servers. However, you may also use -`teleport-update` by itself to set up a Teleport Agent manually. - -Users can create a new installation of Teleport using any version of the -`teleport-update` binary. First, download copy of the Teleport tarball from -the downloads page. Next, invoke `teleport-update` to install the correct version -for your cluster. - -```code -$ tar xf teleport-[version].tgz -$ cd teleport-[version] -$ sudo ./teleport-update enable --proxy example.teleport.sh -``` - -After Teleport is installed, you can create `/etc/teleport.yaml`, either manually -or using `teleport configure`. After, the Teleport Agent can be enabled and -started via the `systemctl` command: - -```code -$ sudo systemctl enable teleport --now -``` - -## Configuring managed agent updates - -Managed agent updates are configured via two Teleport resources: -- `autoupdate_config` controls the update schedule -- `autoupdate_version` controls the desired version - -Self-hosted Teleport users must configure both `autoupdate_config` and -`autoupdate_version`. - -Cloud-hosted Teleport Enterprise users can configure the `autoupdate_config`, while the -`autoupdate_version` is managed by Teleport Cloud. Updates will roll out -automatically during the first chosen maintenance window that is at least 36 -hours after the cluster version is updated. 
- -To configure Managed Updates in your cluster, you must have access to -the `autoupdate_config` and `autoupdate_version` resources. By default, -the `editor` role can modify both resources. - -### Configuring the schedule - -For both cloud-hosted and self-hosted editions of Teleport, an update schedule -may be set with the `autoupdate_config` resource. The default resource looks -like this: - -```yaml -kind: autoupdate_config -metadata: - name: autoupdate-config -spec: - agents: - mode: enabled - strategy: halt-on-error - schedules: - regular: - - name: default - days: [ "Mon", "Tue", "Wed", "Thu" ] - # start_hour is in UTC - start_hour: 16 -``` - -For example, a Teleport user with staging and production -environments might create a custom schedule that looks like this: - -```yaml -kind: autoupdate_config -metadata: - name: autoupdate-config -spec: - agents: - mode: enabled - strategy: halt-on-error - schedules: - regular: - - name: staging - days: [ "Mon", "Tue", "Wed", "Thu" ] - start_hour: 4 - - name: production - days: [ "Mon", "Tue", "Wed", "Thu" ] - start_hour: 5 - wait_hours: 24 -``` - -This schedule would update agents in the `staging` group at 4 UTC, and then update -the `production` group at 5 UTC the next day. The `production` group will not execute -update until the staging group has updated. The `wait_hours` field sets a minimum -duration between groups, ensuring that `production` happens the day after `staging`, -and not one hour after. - - -While failed installations will revert automatically on the client-side, -server-side healthchecks are still in development. To prevent the `production` -group above from updating after `staging` has failed, you must manually suspend -the schedule by setting the `spec.agents.mode` to `suspended`. - - -You may wish to schedule groups of agents to update without any dependence between -them. For example, groups may represent geographic areas and not environments. 
-To accomplish this, you can change the default `halt-on-error` strategy to the -`time-based` strategy: - -```yaml -kind: autoupdate_config -metadata: - name: autoupdate-config -spec: - agents: - strategy: time-based - maintenance_window_duration: 1h - schedules: - regular: - - name: nyc - days: [ "Mon", "Tue", "Wed", "Thu" ] - start_hour: 4 - - name: sj - days: [ "Mon", "Tue", "Wed", "Thu" ] - start_hour: 20 -``` - -With this strategy, updates to `sj` may occur before `nyc`, depending on when -new versions become available. The `maintenance_window_duration` restricts -updates to the specified duration after the `start_hour`. This ensures that -disruptions do not occur outside a known window. - -The time-based strategy does not support the `wait_days` option. - -To add agents to groups, run `teleport-update enable --group group-name`. -You may execute `teleport-update enable` repeatedly to change the group -(or other Managed Update settings). - -If no agent update group is configured, or if the group name does not match a -group defined in `autoupdate_config`, the agent will be part of the last -`autoupdate_config` group. - -Except for `autoupdate_config.agents.mode`, changes to `autoupdate_config` fields -take effect during the next version rollout. A new rollout happens when -`autoupdate_version` is changed and targets a new version. -Version is automatically updated for Cloud-hosted Teleport clusters; for -self-hosted ones you have to update the version manually, see -[the dedicated guide section](#setting-the-version-self-hosted-only). - - -For cloud-hosted Teleport Enterprise, the `days` are not configurable for most -customers, and the `start_hour` is defaulted to your selected maintenance window. - -Cloud-hosted Teleport clusters also have a maximum of 5 update groups by default, -and a full update schedule must not be longer than 4 days. 
Those limitations -ensure that all your agents are updated weekly and that they stay -compatible with the Teleport cluster's version. - - -### Setting the version (self-hosted only) - -For cloud-hosted Teleport Enterprise, Managed Updates are enabled by default. -The `autoupdate_version` resource is managed for you and cannot be edited. -This ensures your agents are always up-to-date and running the best version -for your Teleport cluster. - -Self-hosted Teleport users must specify which version their agents should update -to via the `autoupdate_version` resource. - -Create a file called `autoupdate_version.yaml` containing: - -```yaml -kind: autoupdate_version -metadata: - name: autoupdate-version -spec: - agents: - start_version: 17.2.0 - target_version: 17.2.1 - schedule: regular - mode: enabled -``` - -This resource is used to deploy new versions of Teleport to your agents. -The cluster will update agents from `start_version` to `target_version` -according to the update schedule specified in the `autoupdate_config`. - -The `schedule` may be changed from `regular` to `immediate` to force all -agents to update to the `target_version` immediately. - -The `mode` is used to enable, disable, or suspend Managed Updates. -The `mode` may be set in both `autoupdate_version` or `autoupdate_config`, -such that `disabled` overrides `suspended`, which overrides `enabled` on either -side. The `mode` being specified in two places is useful when -`autoupdate_version` and `autoupdate_config` are not managed by the same team. - -Run the following command to create or update the resource: - -```code -$ tctl create autoupdate_version.yaml -``` - -Changes to `autoupdate_version` can take up to a minute to create a new rollout. 
-You can observe the current rollout state with the command: - -```code -$ tctl autoupdate agents status -Agent autoupdate mode: enabled -Rollout creation date: 2025-03-10 15:01:45 -Start version: 1.2.3 -Target version: 1.2.4 -Rollout state: Active -Strategy: halt-on-error - -Group Name State Start Time State Reason ----------- --------- ------------------- ------------------------ -dev Active 2025-03-11 12:00:10 can_start -stage Unstarted previous_groups_not_done -prod Unstarted previous_groups_not_done -``` - -## Migrating agents on Linux servers to Managed Updates - -### Finding unmanaged agents - -Use the `tctl inventory ls` command to list connected agents along with their current -version. Use the `--upgrader=none` flag to list agents that are not enrolled in managed -updates. - -```code -$ tctl inventory ls --upgrader=none -Server ID Hostname Services Version Upgrader ------------------------------------- ------------- -------- ------- -------- -00000000-0000-0000-0000-000000000000 ip-10-1-6-130 Node v14.4.5 none -... -``` - -Use the `--upgrader=unit` flag to list agents that are using Managed Updates v1 -and should be updated to Managed Updates v2: - -```code -$ tctl inventory ls --upgrader=unit -Server ID Hostname Services Version Upgrader ------------------------------------- ------------- -------- ------- -------- -00000000-0000-0000-0000-000000000000 ip-10-1-6-131 Node v14.4.5 unit -... -``` - -Agents enrolled into Managed Updates v2 can be queried with the -`--upgrader=binary` flag. - -### Enrolling unmanaged agents - -1. For each agent ID returned by the `tctl inventory ls` command, copy the ID - and run the following `tctl` command to access the host via `tsh`: - - ```code - $ HOST=00000000-0000-0000-0000-000000000000 - $ USER=root - $ tsh ssh "${USER?}@${HOST?}" - ``` - -1. Run `teleport-update enable` on each agent you would like - to enroll into Managed Updates v2: - - ```code - $ sudo teleport-update enable - ``` - -1. 
Confirm that the version of the `teleport` binary is the one you expect: - - ```code - $ teleport version - ``` - -1. Remove the Managed Updates v1 updater if present: - - - - - ```code - $ sudo apt remove teleport-ent-updater - ``` - - - - - ```code - $ sudo yum remove teleport-ent-updater - ``` - - - - -
-Running the agent as a non-root user - -If you changed the agent user to run as non-root, create -`/etc/teleport-upgrade.d/schedule` and grant ownership to your Teleport user: - -```code -$ sudo mkdir -p /etc/teleport-upgrade.d/ -$ sudo touch /etc/teleport-upgrade.d/schedule -$ sudo chown your-teleport-user /etc/teleport-upgrade.d/schedule -``` - -While `teleport-update` does not read this file, `teleport` will warn if it -cannot disable the Managed Update v1 updater using this file. - -
- -## Enroll Kubernetes agents in Managed Updates - -This section assumes that the name of your `teleport-kube-agent` release is -`teleport-agent`, and that you have installed it in the `teleport` namespace. - -1. Add the following chart values to the values file for the - `teleport-kube-agent` chart: - - ```yaml - updater: - enabled: true - ``` - -1. Update the Teleport Helm repository to include any new versions of the - `teleport-kube-agent` chart: - - ```code - $ helm repo update teleport - ``` - -1. Update the Helm chart release with the new values: - - - - - ```code - $ helm -n upgrade teleport/teleport-kube-agent \ - --values=values.yaml \ - --version="(=cloud.version=)" - ``` - - - - ```code - $ helm -n upgrade teleport/teleport-kube-agent \ - --values=values.yaml \ - --version="(=teleport.version=)" - ``` - - - -1. You can validate the updater is running properly by checking if its pod is - ready: - - ```code - $ kubectl -n teleport-agent get pods - NAME READY STATUS RESTARTS AGE - -0 1/1 Running 0 14m - -1 1/1 Running 0 14m - -2 1/1 Running 0 14m - -updater-d9f97f5dd-v57g9 1/1 Running 0 16m - ``` - -1. Check for any deployment issues by checking the updater logs: - - ```code - $ kubectl -n logs deployment/-updater - 2023-04-28T13:13:30Z INFO StatefulSet is already up-to-date, not updating. 
{"controller": "statefulset", "controllerGroup": "apps", "controllerKind": "StatefulSet", "StatefulSet": {"name":"my-agent","namespace":"agent"}, "namespace": "agent", "name": "my-agent", "reconcileID": "10419f20-a4c9-45d4-a16f-406866b7fc05", "namespacedname": "agent/my-agent", "kind": "StatefulSet", "err": "no new version (current: \"v12.2.3\", next: \"v12.2.3\")"} - ``` - -## Troubleshooting - -You can inspect the current agent autoupdate status by running: -```code -$ tctl autoupdate agents status - -Agent autoupdate mode: enabled -Rollout creation date: 2025-02-24 16:01:44 -Start version: 17.2.0 -Target version: 17.2.1 -Rollout state: Unstarted -Strategy: time-based - -Group Name State Start Time State Reason ----------- --------- ---------- -------------- -default Unstarted outside_window -``` - -This rollout state is computed by each Auth Service instance every minute. An `autoupdate_config` or `autoupdate_version` -change might take up to a minute to be reflected and applied. - -Teleport Agents are not updated immediately when a new version of Teleport is -released, and agent updates can lag behind the cluster by a few days. - -If the Teleport Agent has not been automatically updating for several weeks, you -can consult the updater logs to help troubleshoot the problem: - -### Troubleshooting managed agent upgrades on Kubernetes - -The updater is a controller that periodically reconciles expected Kubernetes -resources with those in the cluster. The updater executes a reconciliation loop -every 30 minutes or in response to a Kubernetes event. If you don't want to wait -until the next reconciliation, you can trigger an event. - -1. Any deployment update will send an event, so you can trigger the upgrader by - annotating the resource: - - ```code - $ kubectl -n annotate statefulset/ 'debug.teleport.dev/trigger-event=1' - ``` - -1. 
To suspend Managed Updates for an agent, annotate the agent deployment - with `teleport.dev/skipreconcile: "true"`, either by setting the - `annotations.deployment` value in Helm, or by patching the deployment - directly with `kubectl`. - -### Troubleshooting managed agent upgrades on Linux - -1. You can query the updater status by running: - - ```code - $ teleport-update status - proxy: teleport.example.com:443 - path: /usr/local/bin - base_url: https://cdn.teleport.dev - enabled: true - pinned: false - active: - version: 17.2.0 - flags: [Enterprise] - target: - version: 17.2.1 - flags: [Enterprise] - in_window: false - jitter: 1m0s - ``` - - Here, the local active version is 17.2.0. The cluster's target version is - 17.2.1, but we are not in an update window, so the agent is not immediately - updated. - -```code -$ journalctl -u teleport-update -``` - -1. If an agent is not automatically updated, you can invoke the updater - manually and look at its logs: - - ```code - $ sudo teleport-update update --now - ``` - diff --git a/docs/pages/upgrading/agent-managed-updates/agent-managed-updates.mdx b/docs/pages/upgrading/agent-managed-updates/agent-managed-updates.mdx new file mode 100644 index 0000000000000..fff047929fe83 --- /dev/null +++ b/docs/pages/upgrading/agent-managed-updates/agent-managed-updates.mdx @@ -0,0 +1,764 @@ +--- +title: Managed Updates for Teleport Agents and Bots +description: Describes how to set up Managed Updates for Teleport Agents and Bots +tags: + - conceptual + - platform-wide +--- + + +This document describes Managed Updates for Agents and Bots (v2), +which replaces Managed Updates for Agents (v1). + +For Managed Updates v1 instructions, see +[Managed Updates for Agents (v1)](../agent-managed-updates-v1.mdx). + + +For Managed Updates, a binary called `teleport-update` is distributed in +all Teleport packages, alongside the `teleport`, `tbot`, and other binaries. 
+Admins configure updates by managing the `autoupdate_version` and +`autoupdate_config` dynamic resources. + +This document covers how to use `teleport-update` and the `autoupdate_*` +resources to manage automated agent and bot updates from Teleport. It describes: + +- [The agent architecture](#how-it-works) +- [How to enroll existing agents](#quick-setup-for-existing-linux-agent-and-bot-installations) +- [How to enroll new agents](#quick-setup-for-new-linux-agents-and-bot-installations) +- [How to configure Managed Updates v2](#configuring-managed-agent-and-tbot-updates) ( + [when updates happen](#configuring-the-schedule) and for self-hosted users, + [which version to update to](#setting-the-version-self-hosted-only)) +- [How to migrate to Managed Updates v2](#migrating-agents-on-linux-servers-to-managed-updates) + +`teleport-update` supports: + +- Teleport Enterprise and Teleport Community Edition +- Both cloud and self-hosted Teleport Enterprise deployments +- Regular and FIPS variants of Teleport +- amd64, arm64, and other supported CPU architectures +- systemd-based operating systems, regardless of the package manager used + + +The Managed Updates v2 `teleport-update` binary is backwards-compatible with the +`cluster_maintenance_config` resource. The Managed Updates v1 `teleport-upgrade` script +is forwards-compatible with the `autoupdate_config` and `autoupdate_version` resources. +Agents connected to the same cluster will all update to the same version. + +If the `autoupdate_config` resource is configured, it takes precedence over +`cluster_maintenance_config`. This allows for a safe, non-breaking, incremental +migration between Managed Updates v1 and v2. If `autoupdate_config` is not present +and `autoupdate_version` is present, the `autoupdate_config` settings are implicitly +derived from `cluster_maintenance_config`. 
+
+Regardless of how the cluster is configured, `teleport-update` is capable of managing
+both Teleport Agent and tbot installations, while `teleport-upgrade` is only capable
+of managing Teleport Agents.
+
+Users of cloud-hosted Teleport Enterprise have been migrated to Managed Updates v2
+and should migrate their agents to `teleport-update` as soon as possible.
+
+
+## How it works
+
+Managed Updates for Agents and Bots are designed to manage long-running, unattended Teleport
+clients, such as Teleport Agents and tbot. This is different from Managed Updates for
+Client Tools, which are designed to manage interactive Teleport clients, such as `tsh` and `tctl`.
+
+When Managed Updates are enabled, a Teleport updater is installed alongside
+each new Teleport Agent or tbot. The updater communicates with the Teleport Proxy
+Service to determine when an update is available and whether it should perform the update now.
+
+Each installation belongs to an update group. The update schedule specifies when each
+group is updated. The schedule is stored in the `autoupdate_config` resource and
+can be edited via `tctl`. The `tctl autoupdate agents` subcommands are used to interact
+with the rollout for both Teleport Agents and long-running tbot installations.
+
+For Linux server-based installations, the `teleport-update` command configures
+Managed Updates for Teleport Agents and tbot locally on the server.
+
+For Kubernetes-based installations, the `teleport-kube-agent` Helm chart
+deploys a controller that automatically updates the main Teleport container.
+
+Agents and bots that were installed before Managed Updates was enabled on the
+cluster usually need to be manually enrolled into Managed Updates.
+
+## Prerequisites
+
+- Familiarity with the [Upgrading Compatibility Overview](../overview.mdx) guide,
+  which describes the sequence in which to upgrade components in your cluster.
+- Teleport Agent or tbot installations that are not yet enrolled in Managed Updates.
+- (!docs/pages/includes/edition-prereqs-tabs.mdx!)
+- (!docs/pages/includes/tctl.mdx!)
+
+## Quick setup for existing Linux Agent and Bot installations
+
+Users can enable Managed Updates v2 on Linux servers that are already running
+a Teleport Agent by running the following command on every server:
+
+```code
+$ sudo teleport-update enable
+```
+
+
+If this command is not available, update the `teleport` package
+to the latest version that is supported by your cluster.
+
+
+The `teleport-update enable` command will disable (but not remove)
+the v1 updater if present. No other action is necessary.
+
+If everything is working, the v1 updater package can be removed:
+
+```code
+$ sudo apt remove teleport-ent-updater
+```
+
+If the v2 updater does not work, your installation can be reverted
+to manual updates or the v1 updater (if it has not been removed):
+
+```code
+$ sudo teleport-update uninstall
+```
+
+If Teleport was installed via an apt or yum package,
+`teleport-update uninstall` will revert the running version of Teleport back to
+the version provided by the package.
+
+### Migrating Bots
+
+Existing tbot installations require additional steps to be converted to Managed Updates.
+
+When `teleport-update enable` is run, a disabled systemd service is created at `/etc/systemd/system/tbot.service`
+if a service does not already exist at that location.
+
+If a custom tbot systemd service is already installed at `/etc/systemd/system/tbot.service`,
+a warning will be displayed when `teleport-update enable` is run.
To overwrite that custom service
+and replace it with an updater-managed service, run the following command:
+
+```code
+$ sudo teleport-update enable --overwrite
+```
+
+If a custom tbot systemd service is installed with a different name (e.g., `/etc/systemd/system/machineid.service`),
+it must be stopped if it shares the same configuration and data directories as the updater-managed service:
+
+```code
+$ sudo systemctl disable machineid --now
+```
+
+After `teleport-update enable` has successfully created the service, its output will recommend that you run
+the following command to enable `tbot.service`:
+
+```code
+$ sudo systemctl enable tbot --now
+```
+
+Note that you must have a valid `/etc/tbot.yaml` file to use tbot.
+See [Deploying tbot on Linux](../../machine-workload-identity/deployment/linux.mdx) for more information.
+
+## Quick setup for new Linux Agents and Bot installations
+
+The [Web UI onboarding and Install Script](../../installation/linux.mdx) are the
+fastest ways to onboard new Linux servers. However, you may also use
+`teleport-update` by itself to set up a Teleport Agent and/or tbot manually.
+Note that web-based agent enrollment does not automatically configure tbot.
+See the end of this section for information on how to configure and enable tbot.
+
+Users can create a new installation of Teleport using any version of the
+`teleport-update` binary. First, download a copy of the `teleport-update` tarball from
+the [Agent Installer & Updater section](https://goteleport.com/download/all-downloads/?kind=agentInstaller&os=linux) of
+the downloads page.
+Next, invoke `teleport-update` to install the correct version for your cluster.
+
+```code
+$ tar xf teleport-update-[version].tgz
+$ cd teleport-update-[version]
+$ sudo ./teleport-update enable --proxy example.teleport.sh
+```
+
+After Teleport is installed, you can create `/etc/teleport.yaml`, either manually
+or using `teleport configure`.
Afterward, the Teleport Agent can be enabled and
+started via the `systemctl` command:
+
+```code
+$ sudo systemctl enable teleport --now
+```
+
+Similarly, you can create an `/etc/tbot.yaml` file, either manually or using `tbot configure`.
+See [Deploying tbot on Linux](../../machine-workload-identity/deployment/linux.mdx) for more information.
+
+Afterward, tbot can be enabled and started via the `systemctl` command:
+
+```code
+$ sudo systemctl enable tbot --now
+```
+
+## Configuring managed agent and tbot updates
+
+Managed agent and bot updates are configured via two Teleport resources:
+
+- `autoupdate_config` controls the update schedule
+- `autoupdate_version` controls the desired version
+
+Self-hosted Teleport users must configure both `autoupdate_config` and
+`autoupdate_version`.
+
+Cloud-hosted Teleport Enterprise users can configure the `autoupdate_config`, while the
+`autoupdate_version` is managed by Teleport Cloud. Updates will roll out
+automatically during the first chosen maintenance window that is at least 36
+hours after the cluster version is updated.
+
+To configure Managed Updates in your cluster, you must have access to
+the `autoupdate_config` and `autoupdate_version` resources. By default,
+the `editor` role can modify both resources.
+
+### Configuring the schedule
+
+For both cloud-hosted and self-hosted editions of Teleport, an update schedule
+may be set with the `autoupdate_config` resource. The default resource looks
+like this:
+
+```yaml
+kind: autoupdate_config
+metadata:
+  name: autoupdate-config
+spec:
+  agents:
+    mode: enabled
+    strategy: halt-on-error
+    schedules:
+      regular:
+        - name: default
+          days: [ "Mon", "Tue", "Wed", "Thu" ]
+          # start_hour is in UTC
+          start_hour: 16
+```
+
+This example configures a single group named "default" for all agents.
+All agents will be placed in this group, as agents with missing
+or unknown groups are always placed in the last listed group.
+
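To apply a schedule like the one above, save it to a file and create or update the
resource with `tctl` (this mirrors the `autoupdate_version` workflow described later
in this guide; the `-f` flag overwrites the resource if it already exists):

```code
$ tctl create -f autoupdate_config.yaml
```
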
+Currently, only the "regular" schedule is user-configurable.
+
+For example, a Teleport user with staging and production
+environments might create a custom schedule that looks like this:
+
+```yaml
+kind: autoupdate_config
+metadata:
+  name: autoupdate-config
+spec:
+  agents:
+    mode: enabled
+    strategy: halt-on-error
+    schedules:
+      regular:
+        - name: staging
+          days: [ "Mon", "Tue", "Wed", "Thu" ]
+          start_hour: 4
+        - name: production
+          days: [ "Mon", "Tue", "Wed", "Thu" ]
+          start_hour: 5
+          wait_hours: 24
+```
+
+This schedule would update agents and bots in the `staging` group at 4 UTC, and then update
+the `production` group at 5 UTC the next day. The `production` group will not begin
+updating until the `staging` group has finished updating. The `wait_hours` field sets a minimum
+duration between groups, ensuring that `production` happens the day after `staging`,
+and not one hour after.
+
+Two update rollout strategies are available:
+
+- The `halt-on-error` strategy provides predictable, sequential updates
+  across environments. It's ideal for traditional development pipelines where
+  you want to ensure that development environments are successfully updated
+  before proceeding to staging and production.
+- The `time-based` strategy is designed for environments where update groups
+  are independent of each other, such as geographical regions or different teams.
+  It allows updates to occur whenever the specified maintenance window is active
+  for a group, regardless of the status of other groups. This strategy does not
+  provide ordering guarantees across groups.
+
+With the `halt-on-error` strategy, the `canary_count` field can be set on each group to specify
+a number of randomly selected agents (fewer than five) to update and verify before
+proceeding to the rest of the agents in the group. This can be used to reduce the impact
+of a failed update that might not be caught by earlier groups due to environment differences.
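As an illustration, the staging/production schedule above could be extended with
canaries. This sketch assumes `canary_count` is set at the group level, per the
description above; consult the resource reference for the authoritative schema:

```yaml
kind: autoupdate_config
metadata:
  name: autoupdate-config
spec:
  agents:
    mode: enabled
    strategy: halt-on-error
    schedules:
      regular:
        - name: staging
          days: [ "Mon", "Tue", "Wed", "Thu" ]
          start_hour: 4
          # Update and verify 2 random agents before the rest of the group.
          canary_count: 2
        - name: production
          days: [ "Mon", "Tue", "Wed", "Thu" ]
          start_hour: 5
          wait_hours: 24
          canary_count: 2
```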
+
+You can find more information in
+[the Managed Updates v2 resource reference](../../reference/deployment/managed-updates-v2.mdx).
+
+Except for `autoupdate_config.agents.mode`, changes to `autoupdate_config` fields
+take effect during the next version rollout. A new rollout happens when
+`autoupdate_version` is changed and targets a new version.
+The version is updated automatically for cloud-hosted Teleport clusters; for
+self-hosted clusters, you must update the version manually, as described in
+[the dedicated guide section](#setting-the-version-self-hosted-only).
+
+### Setting the version (self-hosted only)
+
+For cloud-hosted Teleport Enterprise, Managed Updates are enabled by default.
+The `autoupdate_version` resource is managed for you and cannot be edited.
+This ensures your agents are always up-to-date and running the best version
+for your Teleport cluster.
+
+
+Self-hosted Teleport users must specify which version their agents and bots should
+update to via the `autoupdate_version` resource.
+If the resource does not exist, agents and bots will not update.
+
+
+Create a file called `autoupdate_version.yaml` containing:
+
+```yaml
+kind: autoupdate_version
+metadata:
+  name: autoupdate-version
+spec:
+  agents:
+    start_version: 17.2.0
+    target_version: 17.2.1
+    schedule: regular
+    mode: enabled
+```
+
+This resource is used to deploy new versions of Teleport to your agents and bots.
+The cluster will update agents and bots to `target_version` according to the update
+schedule specified in the `autoupdate_config`.
+
+The `start_version` is only used to determine the version used for newly
+connected agents and bots when their update window has not occurred yet.
+This is useful to prevent version drift within groups, but some users may
+prefer to set both version fields to the same version.
+
+Run the following command to create or update the resource:
+
+```code
+$ tctl create -f autoupdate_version.yaml
+```
+
+Changes to `autoupdate_version` can take up to a minute to create a new rollout.
+You can observe the current rollout state with the command:
+
+```code
+$ tctl autoupdate agents status
+Agent autoupdate mode: enabled
+Rollout creation date: 2025-03-10 15:01:45
+Start version: 1.2.3
+Target version: 1.2.4
+Rollout state: Active
+Strategy: halt-on-error
+
+Group Name State     Start Time          State Reason
+---------- --------- ------------------- ------------------------
+dev        Active    2025-03-11 12:00:10 can_start
+stage      Unstarted                     previous_groups_not_done
+prod       Unstarted                     previous_groups_not_done
+```
+
+### Monitoring tbot updates
+
+Unlike the Teleport Agent, tbot does not have a persistent connection to the cluster and cannot be monitored directly
+during upgrades.
+
+However, tbot installation failures are still tracked if tbot is installed alongside a running Teleport Agent.
+
+If a tbot upgrade fails and tbot is installed alongside an agent, both tbot and the agent will be rolled back to the
+previous version, and the update group may be marked as failed.
+If tbot is installed without an agent, tbot will still be rolled back to the previous version, but the upgrade may still
+progress to further groups.
+
+Similarly, tbot installations are only considered candidates for canary installations if they are deployed alongside a
+running Teleport Agent.
+
+## Managing rollouts
+
+Managed Update groups progress automatically as configured by the `autoupdate_config` resource.
+However, it is possible to manually trigger or roll back updates for a selection of groups using `tctl autoupdate agents` commands:
+
+```code
+Commands:
+  autoupdate agents status       Prints agents auto update status.
+  autoupdate agents report       Aggregates the agent autoupdate reports and displays agent count per version and per update group.
+  autoupdate agents start-update Starts updating one or many groups.
+  autoupdate agents mark-done    Marks one or many groups as done updating.
+  autoupdate agents rollback     Rolls back one or many groups.
+```
+
+For example, if an earlier group cannot be started, the other groups can be triggered manually:
+```code
+$ tctl autoupdate agents start-update stage prod
+Started updating agents groups: [stage prod].
+New agent rollout status:
+
+Group Name State     Start Time          State Reason
+---------- --------- ------------------- --------------
+dev        Unstarted                     cannot_start
+stage      Active    2025-03-10 15:04:16 manual_trigger
+prod       Active    2025-03-10 15:04:16 manual_trigger
+```
+
+While individual agents will automatically and immediately roll back if they fail health checks during the update,
+regressions or breaking changes in new versions could make it desirable to roll back an agent update to an earlier version.
+The `tctl autoupdate agents rollback` command can be used to roll back one or more groups to the cluster's `start_version`.
+Rollbacks are immediate and do not wait for canaries to complete.
+
+## Migrating agents on Linux servers to Managed Updates
+
+### Finding unmanaged agents
+
+Use the `tctl inventory ls` command to list connected agents along with their current
+version. Use the `--upgrader=none` flag to list agents that are not enrolled in managed
+updates.
+
+```code
+$ tctl inventory ls --upgrader=none
+Server ID                            Hostname      Services Version Upgrader
+------------------------------------ ------------- -------- ------- --------
+00000000-0000-0000-0000-000000000000 ip-10-1-6-130 Node     v14.4.5 none
+...
+```
+
+Use the `--upgrader=unit` flag to list agents that are using Managed Updates v1
+and should be updated to Managed Updates v2:
+
+```code
+$ tctl inventory ls --upgrader=unit
+Server ID                            Hostname      Services Version Upgrader
+------------------------------------ ------------- -------- ------- --------
+00000000-0000-0000-0000-000000000000 ip-10-1-6-131 Node     v14.4.5 unit
+...
+```
+
+Agents enrolled into Managed Updates v2 can be queried with the
+`--upgrader=binary` flag.
+
+Note that it may take several minutes for newly upgraded agents to be reflected
+in the inventory output.
+
+### Enrolling unmanaged agents
+
+1. For each agent ID returned by the `tctl inventory ls` command, copy the ID
+   and run the following `tsh` command to access the host:
+
+   ```code
+   $ HOST=00000000-0000-0000-0000-000000000000
+   $ USER=root
+   $ tsh ssh "${USER?}@${HOST?}"
+   ```
+
+1. Run `teleport-update enable` on each agent you would like
+   to enroll into Managed Updates v2:
+
+   ```code
+   $ sudo teleport-update enable
+   ```
+
+1. Confirm that the version of the `teleport` binary is the one you expect:
+
+   ```code
+   $ teleport version
+   ```
+
+1. Remove the Managed Updates v1 updater if present:
+
+
+
+   ```code
+   $ sudo apt remove teleport-ent-updater
+   ```
+
+
+
+   ```code
+   $ sudo yum remove teleport-ent-updater
+   ```
+
+
+
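If many hosts need to be enrolled, the per-host steps above can be scripted. A
minimal sketch (the host IDs and user are placeholders; it assumes `tsh ssh` can
reach each host non-interactively):

```code
$ USER=root
$ for HOST in 00000000-0000-0000-0000-000000000000 11111111-1111-1111-1111-111111111111; do
    tsh ssh "${USER?}@${HOST?}" "sudo teleport-update enable"
  done
```

Verify the result afterward with `tctl inventory ls --upgrader=binary`.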
+Running the agent as a non-root user
+
+If you changed the agent user to run as non-root, create
+`/etc/teleport-upgrade.d/schedule` and grant ownership to your Teleport user:
+
+```code
+$ sudo mkdir -p /etc/teleport-upgrade.d/
+$ sudo touch /etc/teleport-upgrade.d/schedule
+$ sudo chown your-teleport-user /etc/teleport-upgrade.d/schedule
+```
+
+While `teleport-update` does not read this file, `teleport` will warn if it
+cannot disable the Managed Updates v1 updater using this file.
+
+
+## Enroll Kubernetes agents in Managed Updates
+
+This section assumes that the name of your `teleport-kube-agent` release is
+`teleport-agent`, and that you have installed it in the `teleport` namespace.
+
+1. Add the following chart values to the values file for the
+   `teleport-kube-agent` chart:
+
+   ```yaml
+   updater:
+     enabled: true
+   ```
+
+1. Update the Teleport Helm repository to include any new versions of the
+   `teleport-kube-agent` chart:
+
+   ```code
+   $ helm repo update teleport
+   ```
+
+1. Update the Helm chart release with the new values:
+
+
+
+   ```code
+   $ helm -n teleport upgrade teleport-agent teleport/teleport-kube-agent \
+     --values=values.yaml \
+     --version="(=cloud.version=)"
+   ```
+
+
+
+   ```code
+   $ helm -n teleport upgrade teleport-agent teleport/teleport-kube-agent \
+     --values=values.yaml \
+     --version="(=teleport.version=)"
+   ```
+
+
+
+1. You can validate the updater is running properly by checking if its pod is
+   ready:
+
+   ```code
+   $ kubectl -n teleport get pods
+   NAME                                   READY STATUS  RESTARTS AGE
+   teleport-agent-0                       1/1   Running 0        14m
+   teleport-agent-1                       1/1   Running 0        14m
+   teleport-agent-2                       1/1   Running 0        14m
+   teleport-agent-updater-d9f97f5dd-v57g9 1/1   Running 0        16m
+   ```
+
+1. Check for any deployment issues by checking the updater logs:
+
+   ```code
+   $ kubectl -n teleport logs deployment/teleport-agent-updater
+   2023-04-28T13:13:30Z INFO StatefulSet is already up-to-date, not updating. {"controller": "statefulset", "controllerGroup": "apps", "controllerKind": "StatefulSet", "StatefulSet": {"name":"my-agent","namespace":"agent"}, "namespace": "agent", "name": "my-agent", "reconcileID": "10419f20-a4c9-45d4-a16f-406866b7fc05", "namespacedname": "agent/my-agent", "kind": "StatefulSet", "err": "no new version (current: \"v12.2.3\", next: \"v12.2.3\")"}
+   ```
+
+## GitOps tools
+
+Managed updates for Kubernetes agents require workarounds when used with GitOps tools for
The `teleport-kube-agent` Helm chart owns the version of the +`teleport-agent` resource, so when the `teleport-agent-updater` modifies the +image version of the `teleport-agent` resource, the GitOps tool will detect a +drift or a diff in the `teleport-agent` resource. + +The sections below describe workarounds for various GitOps tools. + +### ArgoCD deployments + +After a managed update, ArgoCD reports the `teleport-agent` resource as `OutOfSync`. +As a workaround to this problem use +a [Diff Customization](https://argo-cd.readthedocs.io/en/stable/user-guide/diffing/#diffing-customization) +to ignore the difference in image version. Here is an example deployment using the +name `teleport-agent` and namespace `teleport`. + +```yaml +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: teleport-agent + namespace: teleport +spec: + ignoreDifferences: + - group: apps + kind: StatefulSet + name: teleport-agent + namespace: teleport + jqPathExpressions: + - .spec.template.spec.containers[] | select(.name == "teleport").image +... +``` + +### FluxCD deployments + +After a managed update, FluxCD reports a `DriftDetected` event. As a workaround +to this problem modify the [drift detection](https://fluxcd.io/flux/components/helm/helmreleases/#drift-detection) +configuration to ignore the difference in image version. Here is an example deployment +using the name `teleport-agent` and namespace `teleport`. + +```yaml +apiVersion: helm.toolkit.fluxcd.io/v2beta2 +kind: HelmRelease +metadata: + name: teleport-agent + namespace: teleport +spec: + driftDetection: + mode: enabled + ignore: + - paths: [ "/spec/template/spec/containers/0/image" ] + target: + kind: StatefulSet + name: teleport-agent + namespace: teleport +... 
+```
+
+## Troubleshooting
+
+You can inspect the current autoupdate status by running:
+
+```code
+$ tctl autoupdate agents status
+
+Agent autoupdate mode: enabled
+Rollout creation date: 2025-02-24 16:01:44
+Start version: 17.2.0
+Target version: 17.2.1
+Rollout state: Unstarted
+Strategy: time-based
+
+Group Name State      Start Time State Reason
+---------- ---------- ---------- --------------
+default    Unstarted             outside_window
+```
+
+This rollout state is computed by each Auth Service instance every minute. An
+`autoupdate_config` or `autoupdate_version` change might take up to a minute to
+be reflected and applied.
+
+Teleport Agents are not updated immediately when a new version of Teleport is
+released, and agent updates can lag behind the cluster by a few days.
+
+If the Teleport Agent has not been automatically updating for several weeks, you
+can consult the updater logs as described above to help troubleshoot the problem.
+
+The `teleport-update status` command provides the best UX for determining how an
+agent is being instructed to update. However, if `teleport-update` is not available,
+this information can also be queried directly from the Teleport cluster using `curl`:
+
+```code
+$ curl -s https://teleport.example.com/webapi/find | jq .auto_update
+{
+  ...
+  "agent_version": "18.2.7",          # version that the agent should update to (also used for new agents)
+  "agent_auto_update": false,         # true if in window and the agent should update now
+  "agent_update_jitter_seconds": 60   # jitter to reduce load on the cluster and CDN
+}
+```
+
+This may be tuned for a specific agent using the `group` and `update_id` params:
+```code
+$ curl -s "https://teleport.example.com/webapi/find?group=staging&update_id=$(...)" | jq .auto_update
+```
+
+### Troubleshooting managed agent upgrades on Kubernetes
+
+The updater is a controller that periodically reconciles expected Kubernetes
+resources with those in the cluster. The updater executes a reconciliation loop
+every 30 minutes or in response to a Kubernetes event. If you don't want to wait
+until the next reconciliation, you can trigger an event.
+
+1. Any deployment update will send an event, so you can trigger the upgrader by
+   annotating the resource:
+
+   ```code
+   $ kubectl -n teleport annotate statefulset/teleport-agent 'debug.teleport.dev/trigger-event=1'
+   ```
+
+1.
To suspend Managed Updates for an agent, annotate the agent deployment + with `teleport.dev/skipreconcile: "true"`, either by setting the + `annotations.deployment` value in Helm, or by patching the deployment + directly with `kubectl`. + +### Troubleshooting managed agent upgrades on Linux + +1. You can query the updater status by running: + + ```code + $ teleport-update status + proxy: teleport.example.com:443 + path: /usr/local/bin + base_url: https://cdn.teleport.dev + enabled: true + pinned: false + active: + version: 17.2.0 + flags: [Enterprise] + target: + version: 17.2.1 + flags: [Enterprise] + in_window: false + jitter: 1m0s + ``` + + Here, the local active version is 17.2.0. The cluster's target version is + 17.2.1, but we are not in an update window, so the agent is not immediately + updated. + +1. If an agent is not automatically updated, you can invoke the updater + manually and look at its logs: + + ```code + $ sudo teleport-update update --now + ``` + +### Using a different CDN URL + +If your agents cannot reach the default Teleport CDN URL (`cdn.teleport.dev`), they will be unable to download updates. + +Here are a couple of potential solutions to this issue: + +#### Use an HTTP CONNECT proxy + +If you configure the `HTTPS_PROXY` variable in the `teleport-update` process's environment, it will use this proxy to +pull updates. + +The easiest way to configure a proxy with a default install is to add this variable to +`/etc/systemd/system/teleport-update.service.d/override.conf`: + +```bash +$ sudo mkdir -p /etc/systemd/system/teleport-update.service.d +$ sudo tee -a /etc/systemd/system/teleport-update.service.d/override.conf > /dev/null <<'EOF' +[Service] +Environment=HTTPS_PROXY=http://proxy-url:3128 +EOF +``` + +You can view the `teleport-update` process logs with `sudo journalctl -u teleport-update.service`. 
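Because the override is a systemd drop-in, systemd must reload its configuration
before the change takes effect. A sketch, assuming the default updater unit names
created by `teleport-update enable` (verify the exact names on your host with
`systemctl list-units 'teleport-update*'`):

```code
$ sudo systemctl daemon-reload
$ sudo systemctl restart teleport-update.timer
```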
+
+#### Mirror the Teleport tarball packages and change the base-url
+
+If you can mirror the Teleport tarball installers somewhere that your agents are
+able to access, you can change the `base-url` used by Teleport updaters so they
+can pull them directly.
+
+To change the `base-url`, add the `-b` or `--base-url` flag to the
+`teleport-update enable` command:
+
+```bash
+$ sudo teleport-update enable --base-url https://teleport.artifactory.company.local
+```
+
+It is safe to re-run `sudo teleport-update enable` to modify the base URL.
+Existing updater settings will be preserved if not explicitly overridden by flags.
+
+More information about flags that can be used with `teleport-update enable` can be
+found in [the `teleport-update` CLI reference](../../reference/cli/teleport-update.mdx#teleport-update-enable).
diff --git a/docs/pages/upgrading/agent-managed-updates/bootstrap-ec2-agent.mdx b/docs/pages/upgrading/agent-managed-updates/bootstrap-ec2-agent.mdx
new file mode 100644
index 0000000000000..46ffd6fdd2c02
--- /dev/null
+++ b/docs/pages/upgrading/agent-managed-updates/bootstrap-ec2-agent.mdx
@@ -0,0 +1,385 @@
+---
+title: Managed Updates v2 in EC2 Agents
+description: Describes how to set up an EC2 instance with a cloud-init script and install Teleport with "teleport-update enable"
+tags:
+  - get-started
+  - how-to
+---
+
+This guide shows two cloud-init patterns for bringing Linux EC2 instances under Managed Updates v2
+and assigning them to an update group with `teleport-update`, by defining the group and proxy address.
+
+## How it works
+
+This guide assumes you'll join the agent to your cluster during first boot with the delegated join method
+and that your cluster has Managed Updates v2 enabled.
+
+Delegated joins avoid shipping any secret token in user data. You create a named token resource that
+encodes context rules (your AWS account, role ARNs, regions, etc.), then the agent proves its identity
+to the Auth Service using cloud-issued credentials.
Your agent can fetch IAM credentials
+(e.g., EC2 instance profile, IRSA, or env vars).
+
+In more detail, here is how IAM join works:
+- The Node signs an AWS STS `GetCallerIdentity` request using its own IAM credentials (e.g., EC2 instance role).
+- The Node sends this pre-signed request to the Teleport Auth Service as part of the join handshake.
+- The Auth Service does not call AWS APIs directly with its own credentials. Instead, it simply executes that pre-signed `GetCallerIdentity` request over HTTPS. AWS STS returns the identity information (Account ID, ARN, etc.).
+- Teleport validates that identity against your token's allow rules.
+
+In this guide, we use the `teleport-update` binary for installation.
+The `teleport-update enable` command installs Teleport at the version advertised by the cluster and enables
+Managed Updates v2 on the host. It also creates the necessary systemd units for Teleport, along with a
+`teleport-update` timer that periodically runs `teleport-update update`. In addition, it saves most of the
+flags you pass in (such as `-g`, `-p`, or `-b`), so that running the command again updates the stored settings
+rather than requiring you to re-enter them.
+
+For full Managed Updates v2 instructions, see [Managed Updates for Agents (v2)](./agent-managed-updates.mdx).
+
+Every agent belongs to an update group (e.g., development, staging, prod); if no group is configured, `default` is used.
+The cluster-side `autoupdate_config` resource defines when each group may update.
+Cloud and self-hosted clusters use the same schedule model. Self-hosted clusters also set the desired
+version via the `autoupdate_version` resource.
+
+## Step 1/5. Create join tokens
+
+See [the joining documentation](../../reference/deployment/join-methods.mdx) for a detailed explanation
+of the joining process and the supported join methods.
+
+The IAM join method is the recommended way of joining EC2 instances.
It offers stronger security guarantees,
+more granular control over who can join, and is easier to use (the token doesn't need to be short-lived or rotated).
+
+Create a file named `token.yaml`:
+
+```yaml
+# token.yaml
+kind: token
+version: v2
+metadata:
+  name: iam-join
+spec:
+  roles: [Node]
+  join_method: iam
+  allow:
+    # Allow specific AWS accounts (or restrict by ARN)
+    - aws_account: "123456789012"
+    - aws_account: "999998880000"
+      aws_arn: "arn:aws:sts::999998880000:assumed-role/teleport-node-role/i-*"
+```
+
+Run the following command to create or update the resource:
+
+```code
+$ tctl create -f token.yaml
+provision_token "iam-join" has been created
+```
+
+This defines which AWS accounts and roles can join via IAM. You can use `aws_account` alone
+(to allow all roles in that account) or add an `aws_arn` filter for stricter control.
+
+On AWS, prepare an IAM role that your EC2 instances will assume to prove their identity.
+No IAM permissions are required; the join verification only calls `sts:GetCallerIdentity`.
+
+Example trust policy for EC2:
+
+```json
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Principal": { "Service": "ec2.amazonaws.com" },
+      "Action": "sts:AssumeRole"
+    }
+  ]
+}
+```
+
+This policy is required and must be created using the [AWS Management Console or the AWS CLI](#step-45-create-an-ec2-instance).
+
+## Step 2/5. Define groups and schedules
+
+To enable managed updates, you need to create an `autoupdate_config` resource with the `enabled` mode.
+Additionally, in `autoupdate_config`, you can define update groups (each agent belongs to one) and configure update schedules.
+If no group or schedule is defined, the default one is used.
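Before moving on to schedules, it can help to have a mental model of how the Auth Service applies the token's allow rules from Step 1. The sketch below is illustrative only (the real matching is done inside the Auth Service); shell-style glob matching for `aws_arn` is an approximation:

```python
# Illustrative sketch of IAM join allow-rule matching. Not Teleport source code:
# glob matching via fnmatch approximates the real aws_arn wildcard behavior.
import fnmatch

def identity_allowed(identity: dict, allow: list) -> bool:
    """Return True if the STS-reported identity matches any allow rule."""
    for rule in allow:
        if rule.get("aws_account") != identity["account"]:
            continue  # account must match first
        arn_pattern = rule.get("aws_arn")
        # No aws_arn filter means any role in the account is allowed.
        if arn_pattern is None or fnmatch.fnmatch(identity["arn"], arn_pattern):
            return True
    return False

# Rules mirroring the token.yaml example above.
allow = [
    {"aws_account": "123456789012"},
    {"aws_account": "999998880000",
     "aws_arn": "arn:aws:sts::999998880000:assumed-role/teleport-node-role/i-*"},
]
assert identity_allowed(
    {"account": "999998880000",
     "arn": "arn:aws:sts::999998880000:assumed-role/teleport-node-role/i-0abc"},
    allow)
assert not identity_allowed(
    {"account": "111111111111",
     "arn": "arn:aws:sts::111111111111:assumed-role/x/i-0abc"},
    allow)
```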
+
+
+
+Create a file called `autoupdate_config.yaml` containing:
+
+```yaml
+# autoupdate_config.yaml
+kind: autoupdate_config
+metadata:
+  name: autoupdate-config
+spec:
+  agents:
+    mode: enabled
+    strategy: halt-on-error
+```
+
+
+
+
+
+Pick simple, meaningful groups like `development`, `staging`, `production`. Then model your rollout windows
+and sequencing in `autoupdate_config`:
+
+```yaml
+# autoupdate_config.yaml
+kind: autoupdate_config
+metadata:
+  name: autoupdate-config
+spec:
+  agents:
+    mode: enabled
+    strategy: halt-on-error
+    schedules:
+      regular:
+        - name:
+          days: ["Mon","Tue","Wed","Thu"]
+          start_hour: 4 # UTC
+        - name: staging
+          days: ["Mon","Tue","Wed","Thu"]
+          start_hour: 5
+          wait_hours: 24 # run a day later than
+```
+
+This resource controls the update schedule by defining the update groups, their sequence, and the upgrade windows.
+
+In the example, we define two update groups: `` and `staging`. Upgrades are allowed on Monday, Tuesday,
+Wednesday, and Thursday.
+
+The first group, ``, starts the upgrade process at 04:00 UTC. Once it is completed and all
+agents in the `` group are upgraded, the next group, `staging`, begins the upgrade process with
+a 24-hour delay at 05:00 UTC.
+
+If any agent in the `` group fails, the upgrade process stops. This behavior is controlled by
+the `halt-on-error` strategy value.
+
+
+
+Run the following command to create or update the resource:
+
+```code
+$ tctl create -f autoupdate_config.yaml
+autoupdate_config has been created
+```
+
+
+  Changes to the schedule configuration will take effect for the next version change.
+
+
+## Step 3/5. Define the version (self-hosted only)
+
+
+  Self-hosted Teleport users must specify which version their agents should update
+  to via the `autoupdate_version` resource. Cloud-hosted Teleport Enterprise users should
+  skip this step, as it is managed by the Cloud team.
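The group sequencing defined by the schedule in the previous step can be sketched as follows. This is illustrative pseudologic only (the group names are examples, and Teleport's real scheduler also accounts for windows and wait hours), not the cluster's actual implementation:

```python
# Illustrative sketch of halt-on-error rollout sequencing: groups update in
# declared order, and any failure halts all later groups. Group names here
# are examples; not Teleport's actual scheduler code.
def next_group_to_update(groups, state):
    """groups: ordered list of {'name': ...}; state: name -> 'done'|'failed'|'pending'."""
    for group in groups:
        status = state.get(group["name"], "pending")
        if status == "failed":
            return None            # halt-on-error: stop the rollout entirely
        if status == "pending":
            return group["name"]   # first unfinished group goes next
    return None                    # all groups are done

groups = [{"name": "development"}, {"name": "staging"}]
assert next_group_to_update(groups, {"development": "done"}) == "staging"
assert next_group_to_update(groups, {"development": "failed"}) is None
```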
+
+
+By creating the `autoupdate_version` resource, we define the Teleport version that is initially installed
+during EC2 instance bootstrap.
+
+```yaml
+# autoupdate_version.yaml
+kind: autoupdate_version
+metadata:
+  name: autoupdate-version
+spec:
+  agents:
+    start_version: (=teleport.version=)
+    target_version: (=teleport.version=)
+    schedule: regular
+    mode: enabled
+```
+
+Create or update the `autoupdate_version` resource:
+
+```code
+$ tctl create -f autoupdate_version.yaml
+autoupdate_version has been created
+```
+
+## Step 4/5. Create an EC2 Instance
+
+
+
+To create the required IAM role and instance profile and launch the EC2 instance, follow these steps:
+
+1. Create an IAM role with a trust policy that allows EC2 to assume it:
+
+```code
+$ cat > trust-policy.json <<'EOF'
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Principal": { "Service": "ec2.amazonaws.com" },
+      "Action": "sts:AssumeRole"
+    }
+  ]
+}
+EOF
+$ aws iam create-role \
+  --role-name teleport-node-role \
+  --assume-role-policy-document file://trust-policy.json
+```
+
+1. Create an instance profile and add the role to it:
+
+```code
+$ aws iam create-instance-profile --instance-profile-name EC2TeleportInstanceProfile
+$ aws iam add-role-to-instance-profile \
+  --instance-profile-name EC2TeleportInstanceProfile \
+  --role-name teleport-node-role
+```
+
+1. Create a `cloud-init.yaml` file with the user data that installs and configures Teleport:
+
+```yaml
+#cloud-config
+packages:
+- curl
+
+runcmd:
+  # Install Teleport using your cluster's script and define the group by query parameter.
+  # Script persists your updater flags (group, proxy, etc.).
+  - curl "https:///scripts/install.sh?group=" | sudo bash
+
+  # Write agent config (join token) before starting the service.
+  - teleport configure --roles node --proxy --join-method iam --token iam-join > /etc/teleport.yaml
+
+  # Enable and start Teleport service.
+  - systemctl enable --now teleport
+```
+
+1. Launch the EC2 instance with the instance profile:
+
+```code
+# Specify the AMI image ID for the instance and a security group that allows at least outbound traffic.
+$ aws ec2 run-instances \
+  --image-id ami-xxxx \
+  --instance-type t3.micro \
+  --iam-instance-profile Name=EC2TeleportInstanceProfile \
+  --security-group-ids sg-xxxx \
+  --region us-west-2 \
+  --user-data file://cloud-init.yaml
+```
+
+
+
+Follow the procedure for [launching an instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-instance-wizard.html)
+and [providing user data](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html),
+which allow you to supply cloud-init directives at launch, for example by entering them in the User data field of the
+AWS Management Console's Advanced details section.
+
+In the example below, the directives create and configure a Teleport node on Amazon Linux 2.
The script installs Teleport and the v2 updater and enables the SSH service only.
+The `#cloud-config` line at the top is required in order to identify the commands as cloud-init directives.
+
+User-data example with a previously created join token:
+
+```yaml
+#cloud-config
+packages:
+- curl
+
+runcmd:
+  # Install Teleport using your cluster's script and define the group by query parameter.
+  # Script persists your updater flags (group, proxy, etc.).
+  - curl "https:///scripts/install.sh?group=" | sudo bash
+
+  # Write agent config (join token) before starting the service.
+  - teleport configure --roles node --proxy --join-method iam --token iam-join > /etc/teleport.yaml
+
+  # Enable and start Teleport service.
+  - systemctl enable --now teleport
+```
+
+
+
+## Step 5/5. Verify on the instance
+
+Once the EC2 instance has started, SSH into it to check the updater status:
+
+```code
+$ teleport-update status
+
+proxy:
+path: /usr/local/bin
+base_url: https://cdn.teleport.dev
+enabled: true
+pinned: false
+active:
+  version: (=teleport.version=)
+  flags: [Enterprise]
+target:
+  version: (=teleport.version=)
+  flags: [Enterprise]
+in_window: false
+```
+
+The output shows the proxy, group, active and target versions, and whether you're in an update window.
+
+Check services:
+
+```code
+$ systemctl status teleport
+● teleport.service - Teleport Service
+     Loaded: loaded (/usr/lib/systemd/system/teleport.service; enabled; preset: enabled)
+    Drop-In: /etc/systemd/system/teleport.service.d
+             └─teleport-update.conf
+     Active: active (running)
+ Invocation: 1725c68591634876afd805b417cf9801
+   Main PID: 40848 (teleport)
+      Tasks: 17 (limit: 11011)
+     Memory: 79.5M (peak: 81.7M, swap: 27.5M, swap peak: 27.5M)
+        CPU: 19min 2.548s
+     CGroup: /system.slice/teleport.service
+             └─40848 /usr/local/bin/teleport start --config /etc/teleport.yaml --pid-file=/run/teleport.pid
+
+$ journalctl -u teleport -n100 -f
+teleport[2246081]: 2025-10-21T20:51:44.154Z INFO [PROC:1] Found an instance metadata service.
Teleport will import labels from this cloud instance. pid:2246081.1 type:EC2 service/service.go:1186
+teleport[2246081]: 2025-10-21T20:51:44.156Z INFO [PROC:1] Service is creating new listener. pid:2246081.1 type:debug address:/var/lib/teleport/debug.sock service/signals.go:242
+teleport[2246081]: 2025-10-21T20:51:44.159Z INFO [PROC:1] Generating new host UUID pid:2246081.1 host_uuid:1fcaf1e0-0fbf-454d-ac13-49965918dc39 storage/storage.go:356
+teleport[2246081]: 2025-10-21T20:51:44.166Z INFO [PROC:1] Joining the cluster with a secure token. pid:2246081.1 service/connect.go:532
+teleport[2246081]: 2025-10-21T20:51:44.166Z INFO Attempting registration. method:via proxy server join/join.go:388
+teleport[2246081]: 2025-10-21T20:51:44.168Z WARN [CLOUD] Could not fetch EC2 instance's tags, please ensure 'allow instance tags in metadata' is enabled on the instance. labels/cloud.go:147
+teleport[2246081]: 2025-10-21T20:51:44.541Z INFO Attempting to register with IAM method using region STS endpoint. role:Instance join/join.go:785
+teleport[2246081]: 2025-10-21T20:51:44.697Z INFO Successfully registered with IAM method using regional STS endpoint. role:Instance join/join.go:807
+teleport[2246081]: 2025-10-21T20:51:44.697Z INFO Successfully registered. method:via proxy server join/join.go:395
+teleport[2246081]: 2025-10-21T20:51:44.698Z INFO [PROC:1] Successfully obtained credentials to connect to the cluster. pid:2246081.1 identity:Instance service/connect.go:383
+
+$ journalctl -u teleport-update -n100 -f
+systemd[1]: Starting teleport-update.service - Teleport auto-update service...
+teleport-update[160893]: INFO [UPDATER] Teleport is up-to-date. Update window is active, but no action is needed. active_version:(=teleport.version=) agent/updater.go:877
+systemd[1]: teleport-update.service: Deactivated successfully.
+systemd[1]: Finished teleport-update.service - Teleport auto-update service.
+```
+
+Next, verify the agent from an admin workstation.
+ +Login to the remote cluster: + +```code +$ tsh login --proxy +``` + +Confirm that the node is joined (it might take up to 15 min to be synced). + +```code +$ tctl inventory ls +Server ID Hostname Services Agent Version Upgrader Upgrader Version Update Group +------------------------------------ ------------------------------------------- -------- ------------- -------- ---------------- ------------ +1fcaf1e0-0fbf-454d-ac13-49965918dc39 ip-172-31-44-126.us-west-2.compute.internal Node v(=teleport.version=) binary v(=teleport.version=) default +``` + +Check the Managed Update v2 status: + +```code +$ tctl autoupdate agents status +Group Name State Start Time State Reason Agent Count Up-to-date +------------------- ----- ------------------- --------------- ----------- ---------- +default (catch-all) Done XXXX-XX-XX 00:00:00 update_complete 1 1 +``` \ No newline at end of file diff --git a/docs/pages/upgrading/client-tools-autoupdate.mdx b/docs/pages/upgrading/client-tools-autoupdate.mdx deleted file mode 100644 index a824acd861a7d..0000000000000 --- a/docs/pages/upgrading/client-tools-autoupdate.mdx +++ /dev/null @@ -1,158 +0,0 @@ ---- -title: Teleport Client Tool Automatic Updates -description: Explains how to use Teleport client tools (`tsh` and `tctl`) auto-updates. ---- - -This documentation explains how to keep Teleport client tools like `tsh` and `tctl` up-to-date. -Updates can be automatic or self-managed, ensuring tools are secure, free from bugs, -and compatible with your Teleport cluster. Available in versions: 17.0.1, 16.4.10, and 15.4.24. - -Why keep client tools updated? - -- **Security**: Updates deliver patches for known vulnerabilities. -- **Bug Fixes**: Resolved issues are pushed to endpoints. -- **Compatibility**: Avoid manual understanding of [Teleport component compatibility rules](overview.mdx#component-compatibility). - -## How it works - -When you run `tsh login`, the tsh tool will check if updates are enabled for your cluster. 
-If your client version differs from the cluster's required version, it will: - -1. Download the updated version. -2. Store it in `~/.tsh/bin`. -3. Validate the binary with a checksum. -4. Re-execute using the updated version (with the same environment variables). - -### Key features - -**Binary Management**: Original binaries are preserved, and updates are stored separately. -Updates are installed in the `$TELEPORT_HOME/.tsh/bin/` folder (if `TELEPORT_HOME` is not defined, the home folder is used). -When client tools (`tctl` or `tsh`) are executed from any other path, they consistently check for binaries in the update -folder and re-execute them if found. - -**Validation**: Downloaded packages are verified with a hash sum to ensure integrity. -Package downloads are directed to the `cdn.teleport.dev` endpoint and depend on the operating system, -platform, and edition. The edition must be identified by the original client tools binary. -The URL pattern is as follows: -- `https://cdn.teleport.dev/teleport-{ent-}vX.Y.Z-{linux,darwin,windows}-{amd64,arm64,arm,386}-{fips-}bin.{tar.gz,pkg,zip}` -- `https://cdn.teleport.dev/teleport-{ent-}vX.Y.Z-{linux,darwin,windows}-{amd64,arm64,arm,386}-{fips-}bin.{tar.gz,pkg,zip}.sha256` - -**Concurrency**: Tools use a locking mechanism to enable smooth operation during updates. -Only one process can acquire the lock to update client tools, while other processes wait for the lock to be released. -If the first process cancels the update, the next process in line will initiate the update. - -## Configuring client tool automatic updates - -### Using environment variables -Values: -- `X.Y.Z`: Use a specific version. -- `off`: Disable updates. - -An environment variable `TELEPORT_TOOLS_VERSION` can be used as an emergency workaround for a known issue, -pinning to a specific version in CI/CD, for debugging, or for manual updates. - -During re-execution, child process will inherit all environment variables and flags. 
To prevent infinite loops -only version environment variable will be overridden to `TELEPORT_TOOLS_VERSION=off`. - -Example of self-managed auto-update by setting the version with environment variable: -```bash -$ TELEPORT_TOOLS_VERSION=17.0.5 tctl version -Update progress: [▒▒▒▒▒▒▒▒▒▒] (Ctrl-C to cancel update) -Teleport v17.0.5 git:v17.0.5-0-g7cc4c2a go1.23.4 -```` - -### Using `tctl` - -To enable or disable client tools automatic updates in the cluster, use the following command: - -```code -$ tctl autoupdate client-tools enable -client tools auto update mode has been changed - -$ tctl autoupdate client-tools disable -client tools auto update mode has been changed -``` - -To set or remove the target version for automatic updates for all client tools: - -```code -$ tctl autoupdate client-tools target X.Y.Z -client tools auto update target version has been set - -$ tctl autoupdate client-tools target --clear -client tools auto update target version has been cleared -``` - -If the target version is cleared, the cluster version will be used automatically, eliminating the need for manual -updates each time the cluster version is upgraded. - -The `status` command retrieves the target version and mode configured for the logged-in cluster. -To use an unauthenticated endpoint for this configuration, include the `--proxy` flag. 
- -```code -$ tctl autoupdate client-tools status --format json -{ - "mode": "enabled", - "target_version": "X.Y.Z" -} - -$ tctl autoupdate client-tools status --proxy proxy.example.com --format json -{ - "mode": "enabled", - "target_version": "X.Y.Z" -} -``` - -### Using resource definitions - -To enable client tools automatic updates in cluster, first create a file named `autoupdate_config.yaml` with the following content: - -```yaml -kind: autoupdate_config -metadata: - name: autoupdate-config -spec: - tools: - mode: enabled -``` - -And write resource data to the cluster `tctl create -f autoupdate_config.yaml`, after that any new `tsh` login must -check the target version and initiate downloading desired version to install in Teleport home folder. - -The next resource is responsible for setting target version `autoupdate_version.yaml`. - -```yaml -kind: autoupdate_version -metadata: - name: autoupdate-version -spec: - tools: - target_version: X.Y.Z -``` - -Create the resource using `tctl create -f autoupdate_version.yaml`. -If the `autoupdate_version` resource hasn't been created yet, the cluster version will be used as the default target version. - - - - For self-hosted clusters, automatic updates are disabled by default but can be enabled. - - Cloud clusters are automatically enrolled in updates, managed by the Teleport Cloud team. - - For clusters with multiple root versions, use self-managed updates to avoid frequent version switching. - - -## Determining a client tool version - -To determine the version required to operate with the cluster, during the login process, `tsh` queries from the -unauthenticated proxy discovery `/v1/webapi/find` endpoint. If `.auto_update.tools_auto_update` is enabled, the -client tools must initiate the installation of the version specified in `.auto_update.tools_version`. 
- -For manual updates, when scheduling updates at specific times or using custom CDN mirrors or with self-build packages, -you can disable auto-update via configuration. In this case, you can monitor the tool's version separately -or pair it with the `TELEPORT_TOOLS_VERSION=off` environment variable. - -```bash -$ curl https://proxy.example.com/v1/webapi/find | jq .auto_update -{ - "tools_auto_update": true, - "tools_version": "X.Y.Z", -} -``` diff --git a/docs/pages/upgrading/client-tools-managed-updates.mdx b/docs/pages/upgrading/client-tools-managed-updates.mdx new file mode 100644 index 0000000000000..c41e6c41117b5 --- /dev/null +++ b/docs/pages/upgrading/client-tools-managed-updates.mdx @@ -0,0 +1,277 @@ +--- +title: Teleport Client Tool Managed Updates +description: Explains how to use Teleport client tools (`tsh` and `tctl`) managed updates. +tags: + - conceptual + - platform-wide +--- + +This documentation explains how to keep Teleport client tools like `tsh` and `tctl` up-to-date. +Updates can be managed by cluster or self-managed, ensuring tools are secure, free from bugs, +and compatible with your Teleport cluster. + +For detailed information on managed updates in Teleport Connect, see [Teleport Connect Managed updates](../connect-your-client/teleport-clients/teleport-connect.mdx#managed-updates). + +Why keep client tools updated? + +- **Security**: Updates deliver patches for known vulnerabilities. +- **Bug Fixes**: Resolved issues are pushed to endpoints. +- **Compatibility**: Avoid manual understanding of [Teleport component compatibility rules](overview.mdx#component-compatibility). + +## How it works + +When you run `tsh login`, the tsh tool will check if updates are enabled for your cluster. +If your client version differs from the cluster's required version, it will: + +1. Download the updated version. +2. Validate the package with a checksum. +3. Extract the package and store the binaries in `~/.tsh/bin` (`$TELEPORT_HOME/bin`). +4. 
Record the client tools version advertised by the cluster in `~/.tsh/bin/.config.json` (`$TELEPORT_HOME/bin/.config.json`).
+5. Re-execute using the updated version (with the same environment variables).
+
+### Key features
+
+**Binary Management**: Original binaries are preserved, and updates are stored separately.
+Updates are installed in the `$TELEPORT_HOME/bin` folder (if `TELEPORT_HOME` is not defined, `$HOME/.tsh/bin` is used).
+When client tools (`tctl` or `tsh`) are executed from any other path, they consistently check for binaries in the update
+folder and re-execute them if found.
+
+**Validation**: Downloaded packages are verified with a hash sum to ensure integrity.
+Package downloads are directed to the `cdn.teleport.dev` endpoint and depend on the operating system,
+platform, and edition. The edition must be identified by the original client tools binary.
+The URL pattern is as follows:
+- `https://cdn.teleport.dev/teleport-{ent-}vX.Y.Z-{linux,darwin,windows}-{amd64,arm64,arm,386}-{fips-}bin.{tar.gz,pkg,zip}`
+- `https://cdn.teleport.dev/teleport-{ent-}vX.Y.Z-{linux,darwin,windows}-{amd64,arm64,arm,386}-{fips-}bin.{tar.gz,pkg,zip}.sha256`
+
+The base URL of the CDN can be overridden using the `TELEPORT_CDN_BASE_URL` environment variable.
+This allows mirroring the CDN in a private network or using custom builds.
+
+**Concurrency**: Tools use a locking mechanism to enable smooth operation during updates.
+Only one process can acquire the lock to update client tools, while other processes wait for the lock to be released.
+If the first process cancels the update, the next process in line will initiate the update.
+
+### Multi-cluster support and multi-version caching
+
+Client tools managed updates support multi-cluster environments. When you log in to different clusters that use
+different versions (including different major versions), the client makes a request to the `/v1/webapi/find` endpoint
+before executing the login command.
This request retrieves the client tools version required by the target cluster. +The specified version is then downloaded, extracted to the tools directory, and used to re-execute the command. +As a result, the login command runs using the version required by that cluster. + +The required version for each cluster is tracked in `$TELEPORT_HOME/bin/.config.json` (defaults to `~/.tsh/bin/.config.json`). +This file stores the proxy address, the required version for each cluster, and the update mode. +If managed updates are disabled for a cluster (i.e., the mode is set to "disable"), re-execution +is skipped and the currently installed version is used instead. + +Example of managed updates configuration file: +```json +{ + "version": "v1", + "configs": { + "proxy1.example.com": {"version": "18.1.0","disabled": false}, + "proxy2.example.com": {"version": "17.5.5","disabled": false}, + "proxy3.example.com": {"version": "16.5.5","disabled": false}, + "proxy4.example.com": {"version": "16.5.5","disabled": true} + } +} +``` + +By default, client tools managed updates store up to three different versions in the `$TELEPORT_HOME/bin` directory. +If multiple clusters require the same version, a single shared copy is used. Once the update process is complete, +the relative path to the client tools is recorded in the configuration file. + +```json +{ + "version": "v1", + "configs": { + "proxy1.example.com": {"version": "18.1.0","disabled": false} + }, + "tools": [ + { + "version": "18.1.0", + "path": { + "tctl": "1921b970-807f-4d36-a769-fbda149d8970-update-pkg-v2/tctl-18.1.0.pkg/Payload/tctl.app/Contents/MacOS/tctl", + "tsh": "1921b970-807f-4d36-a769-fbda149d8970-update-pkg-v2/tsh-18.1.0.pkg/Payload/tsh.app/Contents/MacOS/tsh" + } + } + ], + "max_tools": 3 +} +``` + +After each `tctl` or `tsh` command is re-executed using a specific version from the tools list, +the list is reordered to move the most recently used version to the top. 
If the number of cached
+versions exceeds the allowed limit, the least recently used version (at the bottom of the list)
+will be removed during the next update.
+
+To reduce the need for repeated downloads, you can increase the number of stored versions by
+manually setting the `max_tools` value to the desired limit.
+
+### Managed updates version checks
+
+Client tools make a request to the Teleport Proxy Service discovery endpoint `/v1/webapi/find` during every login
+in order to retrieve the required version of Teleport client tools and check the update mode for the target cluster.
+
+To enable version checks for `tsh ssh` or `tsh proxy ssh` commands when a login is not required,
+set the `TELEPORT_TOOLS_CHECK_UPDATE=t` environment variable to initiate this check. For example:
+
+```code
+$ TELEPORT_TOOLS_CHECK_UPDATE=t tsh ssh user@host
+Update progress: [▒▒▒▒▒▒▒▒▒▒] (Ctrl-C to cancel update)
+user@host ~ #
+```
+
+Local managed update checks are triggered during the execution of any command. If there is an active
+session with a cluster and the cluster advertises a required client tools version,
+both `tsh` and `tctl` will be re-executed using that version. If the version is not already present
+in the tools directory, it will be downloaded, extracted, and executed automatically.
+
+If there is no active session, but a versioned binary exists in the tools directory
+(e.g., `~/.tsh/bin/tsh` or `~/.tsh/bin/tctl`), the corresponding tool will be re-executed with that version
+(for backward compatibility with older versions of managed updates).
+
+## Configuring client tool managed updates
+
+### Cluster-wide using `tctl`
+
+To configure client tool managed updates in your cluster, you must have access to the
+`autoupdate_config` and `autoupdate_version` resources. By default, the `editor` role can modify both resources.
+Note that `autoupdate_version` is managed for you on Cloud clusters, and cannot be edited.
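Before moving on to configuration: the most-recently-used caching described in the previous section can be sketched as follows. This is an illustrative model of the eviction behavior, not the actual `tsh` implementation:

```python
# Illustrative sketch of the multi-version tools cache: the most recently used
# version moves to the front of the list, and entries beyond max_tools are
# evicted least-recently-used first. Not the actual tsh code.
def touch(tools: list, version: str, max_tools: int = 3) -> list:
    """Record use of `version` and enforce the cache limit."""
    tools = [t for t in tools if t != version]
    tools.insert(0, version)      # most recently used goes to the top
    return tools[:max_tools]      # least recently used falls off the bottom

cache = []
for v in ["18.1.0", "17.5.5", "16.5.5", "18.1.0", "15.4.24"]:
    cache = touch(cache, v)
# 17.5.5 was the least recently used version, so it was evicted.
assert cache == ["15.4.24", "18.1.0", "16.5.5"]
```

Raising `max_tools` in the sketch corresponds to manually increasing the `max_tools` value in `.config.json`.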
+
+To enable or disable client tools managed updates in the cluster, use the following command:
+
+```code
+$ tctl autoupdate client-tools enable
+client tools auto update mode has been changed
+
+$ tctl autoupdate client-tools disable
+client tools auto update mode has been changed
+```
+
+To set or remove the target version for automatic updates for all client tools:
+
+```code
+$ tctl autoupdate client-tools target X.Y.Z
+client tools auto update target version has been set
+
+$ tctl autoupdate client-tools target --clear
+client tools auto update target version has been cleared
+```
+
+If the target version is cleared, the cluster version will be used automatically, eliminating the need for manual
+updates each time the cluster version is upgraded.
+
+The `status` command retrieves the target version and mode configured for the logged-in cluster.
+To use an unauthenticated endpoint for this configuration, include the `--proxy` flag.
+
+```code
+$ tctl autoupdate client-tools status --format json
+{
+    "mode": "enabled",
+    "target_version": "X.Y.Z"
+}
+
+$ tctl autoupdate client-tools status --proxy proxy.example.com --format json
+{
+    "mode": "enabled",
+    "target_version": "X.Y.Z"
+}
+```
+
+### On a per-client basis using environment variables
+
+Values:
+- `X.Y.Z`: Use a specific version.
+- `off`: Disable updates.
+
+The `TELEPORT_TOOLS_VERSION` environment variable can be used for pinning a specific version, for debugging, or for manual updates.
+
+During re-execution, the child process inherits all environment variables and flags. To prevent infinite loops,
+only the version environment variable is overridden to `TELEPORT_TOOLS_VERSION=off`.
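The loop-prevention behavior just described can be sketched as follows (an illustrative model, not the actual re-execution code):

```python
# Illustrative sketch of the re-execution loop guard: the child process
# inherits the parent's environment, but TELEPORT_TOOLS_VERSION is forced to
# "off" so the re-executed binary does not try to update itself again.
def child_env(parent_env: dict) -> dict:
    env = dict(parent_env)                 # inherit everything else unchanged
    env["TELEPORT_TOOLS_VERSION"] = "off"  # prevent an infinite update loop
    return env

env = child_env({"TELEPORT_TOOLS_VERSION": "18.1.0", "HOME": "/home/alice"})
assert env["TELEPORT_TOOLS_VERSION"] == "off"
assert env["HOME"] == "/home/alice"
```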
+
+Example of a self-managed update by setting the version with an environment variable:
+
+```bash
+$ TELEPORT_TOOLS_VERSION=18.1.0 tctl version
+Update progress: [▒▒▒▒▒▒▒▒▒▒] (Ctrl-C to cancel update)
+Teleport v18.1.0 git:v18.1.0-0-g8cdb161 go1.24.5
+
+# If the environment variable is not set, the currently installed version is used.
+$ tctl version
+Teleport v17.6.0 git:v17.6.0-0-g4c3b13b go1.23.11
+```
+
+The environment variable must remain set for the pinned version to be used.
+If the variable is set during the login command, this version takes priority over the one
+provided by the cluster configuration. Even if the cluster indicates that client tools
+managed updates are disabled, setting the environment variable will force them to be enabled.
+Any `tsh` or `tctl` command executed with an active profile for this cluster will be re-executed
+using the recorded version.
+
+```bash
+$ TELEPORT_TOOLS_VERSION=18.1.0 tsh login --proxy proxy.example.com
+Update progress: [▒▒▒▒▒▒▒▒▒▒] (Ctrl-C to cancel update)
+...
+
+# The session is still active, and the profile at ~/.tsh/current-profile is set to proxy.example.com.
+$ tctl version
+Teleport v18.1.0 git:v18.1.0-0-g8cdb161 go1.24.5
+
+$ cat ~/.tsh/bin/.config.json
+{
+  "version": "v1",
+  "configs": {
+    "proxy.example.com": {"version": "18.1.0","disabled": false}
+  }
+}
+```
+
+### Using resource definitions
+
+To enable client tools managed updates in the cluster, first create a file named `autoupdate_config.yaml` with the following content:
+
+```yaml
+kind: autoupdate_config
+metadata:
+  name: autoupdate-config
+spec:
+  tools:
+    mode: enabled
+```
+
+Then write the resource to the cluster with `tctl create -f autoupdate_config.yaml`. After that, any new `tsh` login
+checks the target version and initiates the download of the desired version into the Teleport home folder.
+
+The next resource, `autoupdate_version.yaml`, is responsible for setting the target version.
+
+```yaml
+kind: autoupdate_version
+metadata:
+  name: autoupdate-version
+spec:
+  tools:
+    target_version: X.Y.Z
+```
+
+Create the resource using `tctl create -f autoupdate_version.yaml`.
+If the `autoupdate_version` resource hasn't been created yet, the cluster version will be used as the default target version.
+
+
+  - For self-hosted clusters, managed updates are disabled by default but can be enabled.
+  - Cloud clusters are automatically enrolled in updates, managed by the Teleport Cloud team.
+  - For clusters with multiple root versions, use self-managed updates to avoid frequent version switching.
+
+
+## Determining a client tool version
+
+To determine the version required to operate with the cluster, during the login process `tsh` queries the
+unauthenticated proxy discovery endpoint `/v1/webapi/find`. If `.auto_update.tools_auto_update` is enabled, the
+client tools must initiate the installation of the version specified in `.auto_update.tools_version`.
+
+For manual updates (scheduling updates at specific times, using custom CDN mirrors, or shipping self-built packages),
+you can disable managed updates via configuration. In this case, you can monitor the tool's version separately
+or pair it with the `TELEPORT_TOOLS_VERSION=off` environment variable.
+
+```bash
+$ curl https://proxy.example.com/v1/webapi/find | jq .auto_update
+{
+  "tools_auto_update": true,
+  "tools_version": "X.Y.Z"
+}
+```
diff --git a/docs/pages/upgrading/cloud-cluster-updates.mdx b/docs/pages/upgrading/cloud-cluster-updates.mdx
index 3771aeeb4fe50..85dced6396269 100644
--- a/docs/pages/upgrading/cloud-cluster-updates.mdx
+++ b/docs/pages/upgrading/cloud-cluster-updates.mdx
@@ -1,6 +1,9 @@
 ---
 title: Cloud Cluster Updates
 description: Provides a high-level overview of Teleport cluster updates on Cloud.
+tags: + - conceptual + - platform-wide --- On Teleport Enterprise (Cloud) clusters, the Auth Service and Proxy Service are automatically @@ -11,7 +14,7 @@ described in [Teleport Upcoming Releases](../upcoming-releases.mdx). release of a new major version. Minor version updates and patches occur more regularly. - Major version updates will not occur if any connected Teleport Agents are more than one major version behind. - Updates only occur during your scheduled maintenance window. -- Teleport Agents are only updated automatically if you enroll them in [managed updates](agent-managed-updates.mdx). +- Teleport Agents are only updated automatically if you enroll them in [managed updates](agent-managed-updates/agent-managed-updates.mdx). ## Notifications diff --git a/docs/pages/upgrading/overview.mdx b/docs/pages/upgrading/overview.mdx index 5556fbfbbb3b1..a6c227d9fee3a 100644 --- a/docs/pages/upgrading/overview.mdx +++ b/docs/pages/upgrading/overview.mdx @@ -1,6 +1,10 @@ --- title: Upgrading Compatibility Overview description: Provides a high-level description of how to upgrade the components in your Teleport cluster. Read this guide before following upgrade instructions for your environment. +sidebar_position: 1 +tags: + - conceptual + - platform-wide --- Since Teleport is a distributed system with a number of services that run on @@ -14,13 +18,22 @@ Teleport cluster while preserving compatibility. (!docs/pages/includes/compatibility.mdx!) +You can think of Teleport cluster components in three tiers: +1. The Auth Service is expected to be the newest component in the cluster. + For this reason it should always be the first component that is upgraded. +2. The Proxy Service is expected to run a version less than or equal to the + Auth Service. The Proxy Service should never run a version newer than the + Auth Service. +3. Other Teleport agents (SSH Service, Database Service, etc.) should always + run a version less than or equal to the Proxy Service. 
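The three tiers above boil down to a simple invariant: Auth Service version >= Proxy Service version >= agent version. The sketch below is illustrative only, comparing full versions numerically, and is not Teleport's actual compatibility check:

```python
# Illustrative check of the tiered upgrade rule: Auth >= Proxy >= agents.
# Versions are compared as full numeric tuples, not just major versions.
def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def order_ok(auth: str, proxy: str, agent: str) -> bool:
    return parse(auth) >= parse(proxy) >= parse(agent)

assert order_ok("18.1.0", "18.1.0", "17.4.2")
assert not order_ok("18.0.0", "18.1.0", "17.0.0")  # Proxy ahead of Auth
```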
+
 In Teleport Enterprise Cloud, we manage the Auth Service and Proxy Service for
 you. You can determine the current version of these services by running the
 following command, where `mytenant` is the name of your Teleport Enterprise
 Cloud tenant:
 
 ```code
-$ curl -s https://mytenant.teleport.sh/webapi/ping | jq '.server_version'
+$ curl -s https://mytenant.teleport.sh/webapi/find | jq '.server_version'
 ```
 
 ## Upgrading a self-hosted Teleport cluster
 
@@ -32,27 +45,32 @@ Teleport cluster.
 
   If you are upgrading more than one major version, you must repeat the
   following steps for each major version until you reach your target version.
 
-  For example, if your cluster is on v10 and you wish to upgrade to v13, you
-  must first follow the sequence below for v11, then v12, before finally upgrading
-  to v13. You must not upgrade directly from v10 to v13.
+  For example, if your cluster is on v15 and you wish to upgrade to v18, you
+  must first follow the sequence below for v16, then v17, before finally upgrading
+  to v18. You must not upgrade directly from v15 to v18.
 
 1. Back up your Teleport cluster state as a precaution against any unforeseen
    incidents while upgrading the Auth Service, which may perform data
    migrations on its backend. Follow the instructions in [Backup and
-   Restore](../admin-guides/management/operations/backup-restore.mdx).
+   Restore](../zero-trust-access/deploy-a-cluster/backup-restore.mdx).
 
 1. Upgrade all **Auth Service** instances to the **target version first**. Auth
    Service instances may be upgraded in any sequence or at the same time. After
    the upgrade **confirm** that the cluster is in a healthy state before continuing.
-1. Upgrade Proxy Service instances to the same version as the Auth
-   Service. Proxy Service instances are stateless and can be upgraded in any
-   sequence or at the same time. After the upgrade **confirm** that the cluster
-   is in a healthy state before continuing.
-1. 
Upgrade your Teleport Agents to the same version as the Auth Service.
-   You can upgrade resource agents in any sequence or at the same time. After the
+1. After all Auth Service instances are running the target version, upgrade Proxy Service instances
+   to the same version as the Auth Service. Proxy Service instances are
+   stateless and can be upgraded in any sequence or at the same time. After the
    upgrade **confirm** that the cluster is in a healthy state before continuing.
+1. After all Proxy Service instances are running the target version, upgrade
+   your Teleport Agents to the same version as the Auth Service. You can upgrade
+   agents in any sequence or at the same time. After the upgrade **confirm**
+   that the cluster is in a healthy state before continuing.
 
 1. Upgrade your Teleport clients and plugins (tctl, tsh, tbot,
    terraform-provider, event-handler, etc.).
 
+By following this process, your agents will never get ahead of your control
+plane components (even within the same major/patch version), ensuring that all
+expected APIs and migrations will be available by the time the agents upgrade.
+
 ## Upgrading multiple Teleport clusters
 
 When upgrading multiple Teleport clusters with a trust relationship, you must
@@ -68,6 +86,18 @@ upgrade from v10 to v11.
 1. Verify the upgrade was successful.
 1. Upgrade the trusted leaf clusters.
 
+## Downgrading major versions
+
+To safely downgrade from one major version to another, e.g. from v17 to v16,
+you must restore the Teleport backend to the backup taken prior to upgrading. First,
+roll back all components to the exact version they were running prior to the upgrade, in
+reverse order of the upgrade sequence above, until only the Auth Service instances are running
+the new major version. Then stop the Auth Service and follow the
+[Backup and Restore](../zero-trust-access/deploy-a-cluster/backup-restore.mdx) guidance
+to restore the backend to a point in time prior to the upgrade.
Once the backup has been +restored, downgrade the Auth Service instances to the exact version of Teleport +they were running prior to the upgrade attempt, then start them again. + ## Next steps Return to the [Upgrading Introduction](upgrading.mdx) for how to upgrade diff --git a/docs/pages/upgrading/upgrading-manual.mdx b/docs/pages/upgrading/upgrading-manual.mdx new file mode 100644 index 0000000000000..32808f9a6806c --- /dev/null +++ b/docs/pages/upgrading/upgrading-manual.mdx @@ -0,0 +1,172 @@ +--- +title: Manual Upgrades +description: Provides detailed information on upgrading Teleport without Managed Updates. +tags: + - conceptual + - platform-wide +--- + +To ensure that your Teleport cluster remains up to date with the lowest amount +of manual overhead, we recommend [signing up](https://goteleport.com/signup) for +a cloud-hosted Teleport Enterprise account and following [Enroll Agents in +Managed Updates](./agent-managed-updates/agent-managed-updates.mdx). + +Before reading this guide, become familiar with the [Upgrading Compatibility +Overview](./overview.mdx) guide, which describes the sequence in which to +upgrade components in your cluster. + +This guide shows you how to upgrade Teleport manually. You can perform manual +upgrades on Teleport Auth Service and Proxy Service instances running in +self-hosted clusters, as well as all Teleport Agents. + +## Teleport Agents + +Note that all Linux servers with SystemD should use Managed Updates for Agents +instead of this workflow, including on self-hosted clusters. Otherwise, agents +may be disconnected when the cluster is upgraded. + +1. Identify the latest compatible Teleport Agent version by querying the + `webapi` endpoint of the Teleport Proxy Service, replacing + with the host and port of your + Teleport account or Teleport Proxy Service: + + ```code + $ curl https:///webapi/find + ... 
+ "auto_update": { + "tools_version": "(=teleport.version=)", + "tools_auto_update": true, + "agent_version": "(=teleport.version=)", + "agent_auto_update": true, + "agent_update_jitter_seconds": 60 + }, + ... + ``` + +1. Use the `tctl inventory ls` command to list connected agents along with their + current version. Use the `--older-than` flag to list agents that are + upgradable: + + ```code + $ tctl inventory ls --older-than=v(=teleport.version=) + Server ID Hostname Services Version Upgrader + ------------------------------------ -------------- -------------- ------- -------- + 00000000-0000-0000-0000-000000000000 ip-10-1-6-130 Node v14.4.5 none + 00000000-0000-0000-0000-000000000001 teleport-proxy Proxy v15.2.0 none + 00000000-0000-0000-0000-000000000002 teleport-auth Auth,Discovery v15.2.0 none + ... + ``` + +1. For each agent ID returned by the `tctl inventory ls` command, copy the ID + and run the following `tctl` command to access the host via `tsh`: + + ```code + $ HOST=00000000-0000-0000-0000-000000000000 + $ USER=root + $ tsh ssh "${USER?}@${HOST?}" + ``` + +1. On each Linux server, follow the instructions in the [next + section](#single-teleport-binaries-on-linux-servers) to install the new + version of the `teleport` binary. + +1. If you have deployed any agents on Kubernetes using the `teleport-kube-agent` + Helm chart, [follow the instructions](#teleport-agents-running-on-kubernetes) + to upgrade the Helm release. + +## Single Teleport binaries on Linux servers + +You can upgrade a single Teleport binary running on a Linux host by running the +one-line installation script with a higher version than the current one. + +Before upgrading Teleport across a self-hosted cluster, read the [Compatibility +Overview](./overview.mdx) to ensure you are upgrading components in +the correct order. + +Complete the following steps on all servers that run the Auth Service and Proxy +Service, then on each of your agents: + +1. 
Get the current version: + + ```code + $ teleport version + ``` + +1. Assign to one of the following, depending on your + Teleport edition: + + | Edition | Value | + |-----------------------------------|--------------| + | Teleport Enterprise (Cloud) | `cloud` | + | Teleport Enterprise (Self-Hosted) | `enterprise` | + | Teleport Community Edition | `oss` | + +1. Assign to the version you want to install. + +1. Install the new Teleport version on your Linux server: + + ```code + $ curl (=teleport.teleport_install_script_url=) | bash -s + ``` + + The installation script detects the package manager on your Linux server and + uses it to install Teleport binaries. To customize your installation, learn + about the Teleport package repositories in the [installation + guide](../installation/linux.mdx). + +1. Confirm that the version of the `teleport` binary is the one you expect: + + ```code + $ teleport version + ``` + +1. Now that you have installed a more recent `teleport` binary on your Auth + Service and Proxy Service instances, restart Teleport on these servers to run + the new version. + + (!docs/pages/includes/start-teleport.mdx!) + +## Self-hosted Teleport clusters on Kubernetes + +The instructions in this section assume that you have configured the +`teleport-cluster` Helm chart with a values file called `values.yaml`, and that +your `teleport-cluster` release is called `teleport-cluster`. The Auth Service instances +are restarted simultaneously during the upgrade so there is no need to shrink +the number of replicas. + +1. Update the Teleport Helm chart repository so you can install the latest + version of the `teleport-cluster` chart: + + (!docs/pages/includes/kubernetes-access/helm/helm-repo-add.mdx!) + +1. 
Upgrade the `teleport-cluster` Helm release: + + ```code + $ helm upgrade teleport-cluster teleport/teleport-cluster \ + --version= \ + --values=values.yaml + ``` + + The `teleport-cluster` Helm chart automatically waits for the previous + version of the Proxy Service to stop responding to requests before running a + new version of the Auth Service. + +## Teleport Agents running on Kubernetes + +The instructions in this section assume that you have configured the +`teleport-kube-agent` Helm chart with a values file called `values.yaml`, and +that your `teleport-kube-agent` release is called `teleport-agent`. + +1. Update the Teleport Helm chart repository so you can install the latest + version of the `teleport-kube-agent` chart: + + (!docs/pages/includes/kubernetes-access/helm/helm-repo-add.mdx!) + +1. Upgrade the Helm release: + + ```code + $ helm -n "teleport" upgrade teleport-agent teleport/teleport-kube-agent \ + --values=values.yaml \ + --version= + ``` + diff --git a/docs/pages/upgrading/upgrading-reference.mdx b/docs/pages/upgrading/upgrading-reference.mdx deleted file mode 100644 index 1c368abd302e0..0000000000000 --- a/docs/pages/upgrading/upgrading-reference.mdx +++ /dev/null @@ -1,402 +0,0 @@ ---- -title: Upgrading Reference -description: Provides detailed information on upgrading Teleport in various situations. -tocDepth: 3 ---- - - -This document describes the Managed Updates v1 Teleport Agent updater, which is -currently supported but will be deprecated after the [Managed update v2 updater -](./agent-managed-updates.mdx) is generally available. - - -This guide explains how to upgrade components of a Teleport cluster in -non-standard situations. - -To ensure that your Teleport cluster remains up to date with the lowest amount -of manual overhead, we recommend [signing up](https://goteleport.com/signup) for -a cloud-hosted Teleport Enterprise account and following [Enroll Agents in -Automatic Upgrades](./agent-managed-updates.mdx). 
- -If your infrastructure does not support automatic agent updates, follow this -guide to determine the best approach for keeping your Teleport cluster up to -date. - -Before reading this guide, become familiar with the [Upgrading Compatibility -Overview](./overview.mdx) guide, which describes the sequence in which to -upgrade components in your cluster. - -## Working with the automatic updater - -This section explains how to manage the automatic updater. - -On Kubernetes deployments, the updater is a controller that periodically -reconciles expected Kubernetes resources with those in the cluster. On a Linux -server, the updater is an executable script called `teleport-upgrade`. - -### `teleport-upgrade` commands - -The `teleport-upgrade` tool provides some basic commands to verify and perform an -update of the Teleport Agent. - -```code -$ teleport-upgrade help -USAGE: /usr/sbin/teleport-upgrade - -Tool for automatic upgrades of Teleport Agents. - -Commands: - run check for and potentially apply a teleport upgrade. - dry-run check for new teleport version but do not upgrade. - force performs an upgrade if an upgrade is available. - version print the current version of /usr/sbin/teleport-upgrade. - help show this help text. -``` - -The `dry-run` command can be used to check for an available update without performing -an update. -```code -# Example output when teleport is already on the latest compatible version. -$ teleport-upgrade dry-run -[i] no upgrades available (14.3.14 == 14.3.14) [ 582 ] - -# Example output when an update is available. -$ teleport-upgrade dry-run -[i] an upgrade is available (13.4.14 -> 14.3.14) [ 585 ] -[i] within maintenance window, upgrade will be attempted. [ 596 ] -``` - -The `run` command performs an update if available. -```code -# Successful teleport update from 13.4.14 to 14.3.14. -$ teleport-upgrade run -[i] an upgrade is available (13.4.14 -> 14.3.14) [ 585 ] -[i] within maintenance window, upgrade will be attempted. 
[ 596 ] -[i] attempting apt install teleport-ent=14.3.14... [ 480 ] -[...] -[i] gracefully restarting Teleport (if already running) [ 449 ] - -# Teleport updates are not attempted when outside the maintenance window. -$ teleport-upgrade run -[i] an upgrade is available (13.4.14 -> 14.3.14) [ 585 ] -[i] upgrade is non-critical and we are outside of maintenance window, not attempting. [ 618 ] -``` - -The `force` command performs an update immediately even when outside the maintenance -window. -```code -$ teleport-upgrade force -[i] an upgrade is available (13.4.14 -> 14.3.14) [ 585 ] -[i] attempting apt install teleport-ent=14.3.14... [ 480 ] -[...] -[i] gracefully restarting Teleport (if already running) [ 449 ] -``` - -### Configuring the `teleport-upgrade` tool - -1. Create the upgrade configuration directory: - - ```code - $ sudo mkdir -p /etc/teleport-upgrade.d/ - ``` - -1. If you changed the agent user to run as non-root, create - `/etc/teleport-upgrade.d/schedule` and grant ownership to your Teleport user, - assigning to the name of your Teleport - user. Otherwise, you can skip this step: - - ```code - $ sudo touch /etc/teleport-upgrade.d/schedule - $ sudo chown /etc/teleport-upgrade.d/schedule - ``` - - 1. Configure the upgrader to connect to your version server and subscribe to - the right release channel: - - ```code - $ echo "/v1/webapi/automaticupgrades/channel/default" | sudo tee /etc/teleport-upgrade.d/endpoint - ``` - - Make sure not to include `https://` as a prefix to the server address, nor - suffix the endpoint with `/version`. - -### Choosing a release channel - -When [configuring the updater](#configuring-the-teleport-upgrade-tool), you can -select a release channel. 
- -The following channels are available for APT, YUM, and Zypper repos: - -| Channel name | Description | -|-------------------|--------------------------------------------------------------------------------------------| -| `stable/` | Receives releases for the specified major release line, i.e. `v(=teleport.major_version=)` | -| `stable/cloud` | Rolling channel that receives releases compatible with current Cloud version | -| `stable/rolling` | Rolling channel that receives all published Teleport releases | -### Updating the updater - -The updater is designed to be minimal and relatively stable, but the updater will -receive updates on occasion. Currently, the updater does not have the capability -to update itself. Updates to the `teleport-ent-updater` package must be done manually. - -The `teleport-ent-updater` will be backwards compatible with older versions of Teleport, -so you can always update the `teleport-ent-updater` package to the latest available -version. - -### Version locking - -As of Teleport `15.1.10`, a version locking mechanism is enabled by the updater. -This mechanism locks the version of Teleport and prevents manual updates of the Teleport -package. This prevents unintentional updates during routine system maintenance, or -an accidental update by a user. The updater still has the capability to update the -Teleport package, and all updates of Teleport are expected to be performed by the -updater. - -The version locking mechanism is implemented using the features of the package managers. -In case a user would like to manually update the Teleport version, this can be done -with the following commands. - -With the `apt` package manager CLI, the `--allow-change-held-packages` flag must be provided -to bypass the version lock. -```code -$ apt-get install --allow-change-held-packages "teleport-ent=" -``` - -With the `yum` package manager CLI, the `--disableexcludes="teleport"` flag must be provided -to bypass the version lock. 
-```code -$ yum install --disablerepo="*" --enablerepo="teleport" --disableexcludes="teleport" "teleport-ent-" -``` - -With the `zypper` package manager CLI, the lock must be disabled and then re-enabled -after the update. -```code -$ zypper removelock "teleport-ent" -$ zypper install --repo="teleport" "teleport-ent-" -$ zypper addlock "teleport-ent" -``` - -## Automatic update limitations - -Automatic updates are not available in all Teleport editions and installation -methods. If you cannot use automatic updates, read [Manual -updates](#manual-updates) for possible steps. - -### Automatic updates with Teleport Community Edition - -Automatic updates is not currently supported with the community editions of Teleport. -Ensure that you are using the Teleport Enterprise edition of the `teleport-kube-agent` -chart. You should see the following when you query your `teleport-kube-agent` release: - -```code -$ helm -n "teleport" get values "teleport-agent" -o json | jq '.enterprise' -true -``` - -### Automatic updates with direct binary installation - -Automatic updates is not currently supported with the direct binary installation method. -Automatic updates is only supported with installations via the `apt`, `yum`, and -`zypper` package managers. - -## Automatic updates with GitOps tools - -Automatic updates are incompatible with some GitOps tools used for continuous -deployment. The `teleport-kube-agent` Helm chart owns the version of the -`teleport-agent` resource, so when the `teleport-agent-updater` modifies the -image version of the `teleport-agent` resource, the GitOps tool will detect a -drift or a diff in the `teleport-agent` resource. - -### ArgoCD deployments - -After an automatic update, ArgoCD reports the `teleport-agent` resource as `OutOfSync`. -As a workaround to this problem use a [Diff Customization](https://argo-cd.readthedocs.io/en/stable/user-guide/diffing/#diffing-customization) -to ignore the difference in image version. 
Here is an example deployment using the -name `teleport-agent` and namespace `teleport`. - -```yaml -apiVersion: argoproj.io/v1alpha1 -kind: Application -metadata: - name: teleport-agent - namespace: teleport -spec - ignoreDifferences: - - group: apps - kind: StatefulSet - name: teleport-agent - namespace: teleport - jqPathExpressions: - - .spec.template.spec.containers[] | select(.name == "teleport").image -... -``` - -### FluxCD deployments - -After an automatic update, FluxCD reports a `DriftDetected` event. As a workaround -to this problem modify the [drift detection](https://fluxcd.io/flux/components/helm/helmreleases/#drift-detection) -configuration to ignore the difference in image version. Here is an example deployment -using the name `teleport-agent` and namespace `teleport`. - -```yaml -apiVersion: helm.toolkit.fluxcd.io/v2beta2 -kind: HelmRelease -metadata: - name: teleport-agent - namespace: teleport -spec - driftDetection: - mode: enabled - ignore: - - paths: ["/spec/template/spec/containers/0/image"] - target: - kind: StatefulSet - name: teleport-agent - namespace: teleport -... -``` - -## Manual updates - -This section shows you how to upgrade Teleport manually. You can perform manual -upgrades on Teleport Auth Service and Proxy Service instances running in -self-hosted clusters, as well as all Teleport Agents. - -### Teleport Agents - -1. Identify the latest compatible Teleport Agent version by querying the - `webapi` endpoint of the Teleport Proxy Service, replacing - with the host and port of your - Teleport account or Teleport Proxy Service: - - ```code - $ curl https:///webapi/automaticupgrades/channel/stable/cloud/version - v15.2.1 - ``` - -1. Use the `tctl inventory ls` command to list connected agents along with their - current version. 
Use the `--older-than` flag to list agents that are - upgradable: - - ```code - $ tctl inventory ls --older-than=v15.2.1 - Server ID Hostname Services Version Upgrader - ------------------------------------ -------------- -------------- ------- -------- - 00000000-0000-0000-0000-000000000000 ip-10-1-6-130 Node v14.4.5 none - 00000000-0000-0000-0000-000000000001 teleport-proxy Proxy v15.2.0 none - 00000000-0000-0000-0000-000000000002 teleport-auth Auth,Discovery v15.2.0 none - ... - ``` - -1. For each agent ID returned by the `tctl inventory ls` command, copy the ID - and run the following `tctl` command to access the host via `tsh`: - - ```code - $ HOST=00000000-0000-0000-0000-000000000000 - $ USER=root - $ tsh ssh "${USER?}@${HOST?}" - ``` - -1. On each Linux server, follow the instructions in the [next - section](#single-teleport-binaries-on-linux-servers) to install the new - version of the `teleport` binary. - -1. If you have deployed any agents on Kubernetes using the `teleport-kube-agent` - Helm chart, [follow the instructions](#teleport-agents-running-on-kubernetes) - to upgrade the Helm release. - -### Single Teleport binaries on Linux servers - -You can upgrade a single Teleport binary running on a Linux host by running the -one-line installation script with a higher version than the current one. - -Before upgrading Teleport across a self-hosted cluster, read the [Compatibility -Overview](./overview.mdx) to ensure you are upgrading components in -the correct order. - -Complete the following steps on all servers that run the Auth Service and Proxy -Service, then on each of your agents: - -1. Get the current version: - - ```code - $ teleport version - ``` - -1. Assign to one of the following, depending on your - Teleport edition: - - | Edition | Value | - |-----------------------------------|--------------| - | Teleport Enterprise (Cloud) | `cloud` | - | Teleport Enterprise (Self-Hosted) | `enterprise` | - | Teleport Community Edition | `oss` | - -1. 
Assign to the version you want to install. - -1. Install the new Teleport version on your Linux server: - - ```code - $ curl (=teleport.teleport_install_script_url=) | bash -s - ``` - - The installation script detects the package manager on your Linux server and - uses it to install Teleport binaries. To customize your installation, learn - about the Teleport package repositories in the [installation - guide](../installation.mdx#linux). - -1. Confirm that the version of the `teleport` binary is the one you expect: - - ```code - $ teleport version - ``` - -1. Now that you have installed a more recent `teleport` binary on your Auth - Service and Proxy Service instances, restart Teleport on these servers to run - the new version. - - (!docs/pages/includes/start-teleport.mdx!) - -### Self-hosted Teleport clusters on Kubernetes - -The instructions in this section assume that you have configured the -`teleport-cluster` Helm chart with a values file called `values.yaml`, and that -your `teleport-cluster` release is called `teleport-cluster`. The Auth Service instances -are restarted simultaneously during the upgrade so there is no need to shrink -the number of replicas. - -1. Update the Teleport Helm chart repository so you can install the latest - version of the `teleport-cluster` chart: - - (!docs/pages/includes/kubernetes-access/helm/helm-repo-add.mdx!) - -1. Upgrade the `teleport-cluster` Helm release: - - ```code - $ helm upgrade teleport-cluster teleport/teleport-cluster \ - --version= \ - --values=values.yaml - ``` - - The `teleport-cluster` Helm chart automatically waits for the previous - version of the Proxy Service to stop responding to requests before running a - new version of the Auth Service. - -### Teleport Agents running on Kubernetes - -The instructions in this section assume that you have configured the -`teleport-kube-agent` Helm chart with a values file called `values.yaml`, and -that your `teleport-kube-agent` release is called `teleport-agent`. 
- -1. Update the Teleport Helm chart repository so you can install the latest - version of the `teleport-kube-agent` chart: - - (!docs/pages/includes/kubernetes-access/helm/helm-repo-add.mdx!) - -1. Upgrade the Helm release: - - ```code - $ helm -n "teleport" upgrade teleport-agent teleport/teleport-kube-agent \ - --values=values.yaml \ - --version= - ``` - diff --git a/docs/pages/upgrading/upgrading.mdx b/docs/pages/upgrading/upgrading.mdx index 4efa80e4f08cf..5d8136d1ffd21 100644 --- a/docs/pages/upgrading/upgrading.mdx +++ b/docs/pages/upgrading/upgrading.mdx @@ -1,6 +1,9 @@ --- title: Upgrading Teleport description: Explains how to upgrade Teleport depending on your environment and edition. +tags: + - conceptual + - platform-wide --- The guides in this section show you how to upgrade Teleport to a more recent @@ -13,16 +16,16 @@ compatibility between all components. For Teleport Enterprise (Cloud) users, se managed for you. If you have a Teleport Enterprise (Cloud) account, you **must** [set up Managed -Teleport Agent Updates](agent-managed-updates.mdx) to ensure that +Teleport Agent Updates](agent-managed-updates/agent-managed-updates.mdx) to ensure that the version of Teleport running on agents is always compatible with that of the Teleport cluster. You can also set up automatic agent upgrades in a self-hosted Enterprise cluster. -For more information about upgrading, for example, to upgrade manually, read the -[Upgrading Reference](upgrading-reference.mdx). +For more information about upgrading Teleport manually, read +[Upgrading Manually](upgrading-manual.mdx). -For more information about client tools auto-update, read the -[Teleport Client Tools Automatic Updates](client-tools-autoupdate.mdx). +For more information about client tools managed updates, read the +[Teleport Client Tools Managed Updates](client-tools-managed-updates.mdx). 
You can find more information regarding the automatic updates architecture in the [Agent Update Management](../reference/architecture/agent-update-management.mdx) page. diff --git a/docs/pages/usage-billing.mdx b/docs/pages/usage-billing.mdx index c5e7e304ae8ca..3fb863ab1164a 100644 --- a/docs/pages/usage-billing.mdx +++ b/docs/pages/usage-billing.mdx @@ -1,7 +1,9 @@ --- title: Usage Reporting and Billing description: Provides a detailed breakdown of Teleport usage reporting and billing. -tocDepth: 3 +tags: + - conceptual + - platform-wide --- Commercial editions of Teleport send anonymized usage data to Teleport so we can @@ -59,7 +61,7 @@ The code that aggregates and anonymizes this data can be found in our [GitHub repository](https://github.com/gravitational/teleport/tree/master/lib/usagereporter/teleport/aggregating). For a restricted network environment you can configure Teleport Auth Service instances -to send usage data through a proxy for version 16.0.4/15.4.7/14.3.21 or later. +to send usage data through a proxy for version 16.0.4/15.4.7 or later. Set the `TELEPORT_REPORTING_HTTPS_PROXY` and `TELEPORT_REPORTING_HTTP_PROXY` environment variables to your proxy address. That will apply as the HTTP connect proxy setting overriding `HTTPS_PROXY` and `HTTP_PROXY` just for outbound usage reporting. @@ -72,6 +74,7 @@ calculate three types of billing metrics: - Monthly Active Users - Teleport Protected Resources - Machine & Workload Identities +- Identity Governance Monthly Active Users ### Usage metrics in the Web UI @@ -135,7 +138,7 @@ for the entire hour, Teleport would report 30 protected servers. 
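The hourly averaging used for billing counts can be illustrated with a short sketch. This is a simplified, unofficial calculation: the `hourlyAverage` helper and the sample counts are made up for the example, and Teleport's real billing pipeline runs server-side.

```go
package main

import "fmt"

// hourlyAverage averages per-sample resource counts observed during one hour,
// mirroring the "average over the hour" description of the protected-resources
// metric: a resource connected for only part of the hour contributes a
// proportionally smaller count.
func hourlyAverage(samples []int) float64 {
	if len(samples) == 0 {
		return 0
	}
	total := 0
	for _, n := range samples {
		total += n
	}
	return float64(total) / float64(len(samples))
}

func main() {
	// 30 servers connected for the entire hour: every sample is 30,
	// so the reported count is 30.
	steady := make([]int, 60)
	for i := range steady {
		steady[i] = 30
	}
	fmt.Println(hourlyAverage(steady))

	// 30 servers connected for only half the hour: the hourly average
	// drops to 15.
	half := make([]int, 60)
	for i := 0; i < 30; i++ {
		half[i] = 30
	}
	fmt.Println(hourlyAverage(half))
}
```

The same averaging is then applied again across days to produce the monthly figures used for billing.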
The Machine and Workload Identity (MWI) metric is an aggregate of * [Bots](reference/architecture/machine-id-architecture.mdx#what-is-a-bot) * [Bot Instances](reference/architecture/machine-id-architecture.mdx#what-is-a-bot) -* [SPIFFE IDs](enroll-resources/workload-identity/spiffe.mdx#spiffe-ids-and-trust-domains) +* [SPIFFE IDs](machine-workload-identity/workload-identity/spiffe.mdx#spiffe-ids-and-trust-domains) We aggregate Bots, Bot Instances and unique SPIFFE IDs during each day on an hourly basis, and take an hourly average to compute a daily Bot, Bot Instance and SPIFFE ID count. Then we average the daily @@ -155,6 +158,21 @@ as one SPIFFE ID for billing. The sum of your calculated Bots, Bot Instances, and unique SPIFFE IDs is your total MWI for the billing period. +### Identity Governance Monthly Active Users + +Identity Governance Monthly Active Users (IG MAU) is the aggregate number of unique users performing Identity Governance-related activities in Teleport. + +We aggregate IG MAU over each monthly period starting on the subscription start date and ending on each monthly anniversary thereafter. + +Identity Governance "active user" means a user having performed any of the following activities: +- Submitting an Access Request +- Reviewing an Access Request +- Getting roles assigned during login through Access List membership +- Reviewing an Access List +- Logging in to a service provider application via Teleport SAML IdP + +IG MAU is calculated separately from standard MAU and represents users specifically engaging with Teleport's Identity Governance features. A user can contribute to both standard MAU and IG MAU billing metrics within the same period if they perform activities that qualify for both metrics. + ## Usage measurement for billing We aggregate all counts of the billing metrics on a monthly basis starting on @@ -205,10 +223,10 @@ usage numbers. 
### SSO users In Teleport, single sign-on (SSO) users are -[ephemeral](reference/user-types.mdx#temporary-users). Teleport deletes an SSO user +[ephemeral](reference/access-controls/user-types.mdx#temporary-users). Teleport deletes an SSO user when its session expires. To count the number of SSO users in your cluster, you can examine Teleport audit events for unique SSO users that have authenticated to Teleport during a given time period. The Teleport documentation includes -[how-to guides](./admin-guides/management/export-audit-events/export-audit-events.mdx) for +[how-to guides](zero-trust-access/export-audit-events/export-audit-events.mdx) for exporting audit events to common log management solutions so you can identify users that have authenticated using an SSO provider. diff --git a/docs/pages/admin-guides/api/api.mdx b/docs/pages/zero-trust-access/api/api.mdx similarity index 88% rename from docs/pages/admin-guides/api/api.mdx rename to docs/pages/zero-trust-access/api/api.mdx index c376b1271bfb0..2122ce145e06e 100644 --- a/docs/pages/admin-guides/api/api.mdx +++ b/docs/pages/zero-trust-access/api/api.mdx @@ -1,7 +1,9 @@ --- title: Using the Teleport API description: Guides to writing a client application for the Teleport gRPC API, which makes it possible to programmatically manage dynamic resources. -layout: tocless-doc +template: "no-toc" +tags: + - zero-trust --- The Teleport Auth Service provides a gRPC API for remotely interacting with your @@ -17,7 +19,7 @@ the same API. Here is what you can do with the Go Client: - Integrate with external tools, e.g., to write an [Access Request - plugin](../access-controls/access-request-plugins/access-request-plugins.mdx). Teleport + plugin](../../identity-governance/access-requests/plugins/plugins.mdx). Teleport maintains Access Request plugins for tools like Slack, Jira, and Mattermost. - Perform CRUD actions on resources, such as roles, authentication connectors, and provisioning tokens. 
@@ -41,6 +43,6 @@ Teleport's API client libraries: - [Automatically generate Teleport roles](./rbac.mdx) from an external RBAC system, making it easier to get started with Teleport-based RBAC and keep your Teleport roles up to date. -- [Write an Access Request Plugin](./access-plugin.mdx): Follow this guide for a +- [Write an Access Request Plugin](../../identity-governance/access-requests/plugins/how-to-build.mdx): Follow this guide for a minimal working example of a plugin that you can use to manage Access Requests through your organization's unique communication workflows. diff --git a/docs/pages/admin-guides/api/automatically-register-agents.mdx b/docs/pages/zero-trust-access/api/automatically-register-agents.mdx similarity index 98% rename from docs/pages/admin-guides/api/automatically-register-agents.mdx rename to docs/pages/zero-trust-access/api/automatically-register-agents.mdx index 56d3e50ffad35..87d797aac2d94 100644 --- a/docs/pages/admin-guides/api/automatically-register-agents.mdx +++ b/docs/pages/zero-trust-access/api/automatically-register-agents.mdx @@ -1,6 +1,11 @@ --- title: Automatically Register Resources with Teleport +sidebar_label: Custom Auto-Discovery description: Learn how to use the Teleport API to start agents automatically when you add resources to your infrastructure. +tags: + - how-to + - zero-trust + - infrastructure-identity --- You can use Teleport's API to automatically register resources in your @@ -9,7 +14,7 @@ infrastructure with your Teleport cluster. Teleport already supports the automatic discovery of [Kubernetes clusters](../../enroll-resources/auto-discovery/kubernetes/kubernetes.mdx) in AWS, Azure, and Google Cloud, as well as -[servers](../../enroll-resources/auto-discovery/servers/ec2-discovery.mdx) on +[servers](../../enroll-resources/auto-discovery/servers/ec2-discovery/ec2-discovery.mdx) on Amazon EC2. To support other resources and cloud providers, you can use the API to write your own workflow. 
@@ -973,7 +978,6 @@ and other tasks.

In this example, we used the `tctl auth sign` command to fetch credentials for
the program you wrote. For production usage, we recommend provisioning
-short-lived credentials via Machine ID, which reduces the risk of these
-credentials becoming stolen. View our [Machine ID
-documentation](../../enroll-resources/machine-id/introduction.mdx) to learn more.
-
+short-lived credentials via Machine & Workload Identity, which reduces the risk
+of these credentials being stolen. View our [Machine & Workload Identity
+documentation](../../machine-workload-identity/introduction.mdx) to learn more.
diff --git a/docs/pages/zero-trust-access/api/getting-started.mdx b/docs/pages/zero-trust-access/api/getting-started.mdx
new file mode 100644
index 0000000000000..7cb30ba79860a
--- /dev/null
+++ b/docs/pages/zero-trust-access/api/getting-started.mdx
@@ -0,0 +1,145 @@
+---
+title: API Getting Started Guide
+sidebar_label: Get Started
+description: Get started working with the Teleport API programmatically using Go.
+tags:
+  - get-started
+  - mwi
+---
+
+In this getting started guide, we will use the Teleport API Go client to connect
+to a Teleport Auth Service.
+
+Here are the steps we'll walk through:
+
+- Create an API user using a simple role-based authentication method.
+- Generate credentials for that user.
+- Create and connect a Go client to interact with Teleport's API.
+
+## How it works
+
+The Teleport Auth Service exposes a gRPC API that allows client tools to manage
+backend resources. `tctl`, the Teleport Kubernetes Operator, and the Teleport
+Terraform provider use this API, and you can write custom tools to manage API
+resources or subscribe to Teleport audit events.
+
+Teleport API clients authenticate to Teleport using TLS credentials. In this
+guide, we show you how to load the TLS credentials that the Auth Service
+provides to you after you log in using `tsh`.
+
+## Prerequisites
+
+- Install [Go](https://golang.org/doc/install) (=teleport.golang=)+ and a Go development environment.
+
+(!docs/pages/includes/edition-prereqs-tabs.mdx!)
+
+- (!docs/pages/includes/tctl.mdx!)
+
+## Step 1/3. Create a user
+
+(!docs/pages/includes/permission-warning.mdx!)
+
+
+  Read [API authorization](../../reference/architecture/api-architecture.mdx) to learn more about defining custom roles for your API client.
+
+
+Create a user `api-admin` with the built-in role `editor`:
+
+```code
+$ tctl users add api-admin --roles=editor
+```
+
+## Step 2/3. Generate client credentials
+
+Log in as the newly created user with `tsh`.
+
+```code
+# generate tsh profile
+$ tsh login --user=api-admin --proxy=tele.example.com
+```
+
+The [Profile Credentials loader](https://pkg.go.dev/github.com/gravitational/teleport/api/client#LoadProfile)
+will automatically retrieve Credentials from the current profile in the next step.
+
+## Step 3/3. Create a Go project
+
+Set up a new [Go module](https://golang.org/doc/tutorial/create-module) and import the `client` package:
+
+```code
+$ mkdir client-demo && cd client-demo
+$ go mod init client-demo
+$ go get github.com/gravitational/teleport/api/client
+```
+
+
+To ensure compatibility, you should use a version of Teleport's API library that matches
+the major version of Teleport running in your cluster.
+ +To find the pseudoversion appropriate for a go.mod file for a specific git tag, +run the following command from the `teleport` repository: + +```code +$ go list -f '{{.Version}}' -m "github.com/gravitational/teleport/api@$(git rev-parse v12.1.0)" +v0.0.0-20230307032901-49a6de744a3a +``` + + +Create a file called `main.go`, modifying the `Addrs` strings as needed: + +```go +package main + +import ( + "context" + "log" + + "github.com/gravitational/teleport/api/client" +) + +func main() { + ctx := context.Background() + + clt, err := client.New(ctx, client.Config{ + Addrs: []string{ + // Teleport Cloud customers should use .teleport.sh + "tele.example.com:443", + "tele.example.com:3025", + "tele.example.com:3024", + "tele.example.com:3080", + }, + Credentials: []client.Credentials{ + client.LoadProfile("", ""), + }, + }) + + if err != nil { + log.Fatalf("failed to create client: %v", err) + } + + defer clt.Close() + resp, err := clt.Ping(ctx) + if err != nil { + log.Fatalf("failed to ping server: %v", err) + } + + log.Printf("Example success!") + log.Printf("Example server response: %v", resp) + log.Printf("Server version: %s", resp.ServerVersion) +} +``` + +Now you can run the program and connect the client to the Teleport Auth Service to fetch the server version. + +```code +$ go run main.go +``` + +## Next steps + +- Learn about [pkg.go.dev](https://pkg.go.dev/github.com/gravitational/teleport/api/client) +- Learn how to use [the client](https://pkg.go.dev/github.com/gravitational/teleport/api/client#Client) +- Learn how to [work with credentials](https://pkg.go.dev/github.com/gravitational/teleport/api/client#Credentials) +- Read about Teleport [API architecture](../../reference/architecture/api-architecture.mdx) for an in-depth overview of the API and API clients. +- Read [API authorization](../../reference/architecture/api-architecture.mdx) to learn more about defining custom roles for your API client. 
+- Review the `client` [pkg.go reference documentation](https://pkg.go.dev/github.com/gravitational/teleport/api/client) for more information about working with the Teleport API programmatically. +- Familiarize yourself with the [admin manual](../../index.mdx) to make the best use of the API. diff --git a/docs/pages/zero-trust-access/api/rbac.mdx b/docs/pages/zero-trust-access/api/rbac.mdx new file mode 100644 index 0000000000000..537343423d10c --- /dev/null +++ b/docs/pages/zero-trust-access/api/rbac.mdx @@ -0,0 +1,1000 @@ +--- +title: Generate Teleport Roles from an External RBAC System +sidebar_label: External RBAC Sync +description: Use Teleport's API to automatically generate Teleport roles based on third-party RBAC policies +tags: + - how-to + - zero-trust + - privileged-access +--- + +You can use the Teleport gRPC API to generate roles automatically based on an +external role-based access control (RBAC) system, such as GitHub or AWS Identity +and Access Management. + +This is especially useful for: + +- Setting up a new Teleport cluster, since you can preserve your existing + authorization levels or categories while letting Teleport handle access + control. +- Ensuring that your Teleport cluster stays up to date with the RBAC systems of + the infrastructure it manages access to. This way, Teleport roles do not + unexpectedly gain or lose permissions if your teams reconfigure your external + RBAC systems. + +## How it works + +In this guide, we will build a small demo application to show you how to +generate Teleport roles using Teleport's API client library. + +The application authenticates to the Teleport Auth Service gRPC API as well as +your Kubernetes API server, and loads the role bindings and cluster role +bindings from the Kubernetes API server. For each role binding and cluster role +binding, the application generates a Teleport role using logic that maps the +fields in the former to the fields in the latter. 
+ + + +The program we will build in this guide is intended as a learning tool. **Do not +connect it to your production Teleport cluster.** Use a demo cluster instead. + + + +## Prerequisites + +(!docs/pages/includes/edition-prereqs-tabs.mdx!) + +- Go version (=teleport.golang=) or above installed on your workstation. See the + [Go download page](https://go.dev/dl/). You will not need to be familiar with + Go to complete this guide, though Go knowledge is required if you want to + build a production-ready Teleport client application. + +In a production scenario, you will already have a third-party RBAC solution to +use as a basis for generating Teleport roles. In this guide, we will simulate +this by deploying a local Kubernetes cluster using `minikube` and setting up +some RBAC resources. We will then use this Kubernetes cluster to generate +Teleport roles. + +To run the local demo environment, ensure that you have the following tools +installed on your workstation: + +| Tool | Purpose | Installation link | +|----------|----------------------------------|---------------------------------------------------------------| +| minikube | Local Kubernetes deployment tool | [Install minikube](https://minikube.sigs.k8s.io/docs/start/) | +| Helm | Kubernetes package manager | [Install Helm](https://helm.sh/docs/intro/install/) | +| kubectl | Kubernetes admin CLI | [Install kubectl](https://kubernetes.io/docs/tasks/tools/) | +| Docker | Required minikube driver | [Get Started With Docker](https://www.docker.com/get-started) | + + +Even if you do not plan to set up the demo project, you can follow this guide to +see which libraries, types, and functions you can use to automatically generate +Teleport roles based on an external RBAC system. + + + +- (!docs/pages/includes/tctl.mdx!) + +## Step 1/4. Set up your Kubernetes cluster + +In this step, we will launch a local Kubernetes cluster and set up role-based +access controls within it. 
We will then use this Kubernetes cluster as a basis +for generating Teleport roles. + +### Start minikube + +Start minikube with the Docker driver, which boots a local Kubernetes cluster on +a single Docker container: + +```code +$ minikube start --driver=docker +``` + +This command should start a local Kubernetes cluster and set your context (i.e., +the Kubernetes cluster you are currently interacting with) to `minikube`. To +verify this, run the following command: + +```code +$ kubectl config current-context +minikube +``` + +### Define demo Kubernetes RBAC resources + +Next, we will set up RBAC resources in your local `minikube` cluster to use as a +basis for generating Teleport roles. + +In Kubernetes, you can divide a cluster into logically isolated **namespaces**. +A **role** defines a set of permissions for manipulating resources in a specific +namespace. A **cluster role** is a role that applies to all namespaces in a +cluster. You can use a **role binding** or **cluster role binding** to attach a +role or cluster role to Kubernetes users and groups. + +Define a Kubernetes role and role binding that allows users in the +`app-developer` group to read and list pods in the `app` namespace. 
Add the +following to a file called `pod-reader.yaml`: + +```yaml +apiVersion: v1 +kind: Namespace +metadata: + name: app +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + namespace: app + name: pod-reader +rules: +- apiGroups: [""] + resources: ["pods"] + verbs: ["get", "list"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: read-pods + namespace: app + annotations: + 'create-teleport-role': 'true' +subjects: +- kind: Group + name: app-developer + apiGroup: rbac.authorization.k8s.io +roleRef: + kind: Role + name: pod-reader + apiGroup: rbac.authorization.k8s.io +``` + +Create the resources: + +```code +$ kubectl apply -f pod-reader.yaml +namespace/app created +role.rbac.authorization.k8s.io/pod-reader created +rolebinding.rbac.authorization.k8s.io/read-pods created +``` + +(!docs/pages/includes/create-role-using-web.mdx!) + +Next, define a cluster role and cluster role binding that allow users in the +`ops` group to read, create, and execute commands on pods in all namespaces. Add +the following to a file called `pod-ops.yaml`: + +```yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: pod-ops +rules: +- apiGroups: [""] + resources: ["pods"] + verbs: ["get", "watch", "list", "create", "exec", "logs"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: pod-ops + annotations: + 'create-teleport-role': 'true' +subjects: +- kind: Group + name: ops + apiGroup: rbac.authorization.k8s.io +roleRef: + kind: ClusterRole + name: pod-ops + apiGroup: rbac.authorization.k8s.io +``` + +Create the resources: + +```code +$ kubectl apply -f pod-ops.yaml +clusterrole.rbac.authorization.k8s.io/pod-ops created +clusterrolebinding.rbac.authorization.k8s.io/pod-ops created +``` + +Later in this guide, we will show you how to automatically generate Teleport +roles based on the Kubernetes RBAC resources you created. 
+ +### Define RBAC resources for the client application + +Next, ensure that your API client can read the RBAC resources you created. +Create a file called `rbac-sync.yaml` with the following content: + +```yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: rbac-sync +rules: +- apiGroups: ["rbac.authorization.k8s.io"] + resources: ["roles", "clusterroles", "rolebindings", "clusterrolebindings"] + verbs: ["get", "list"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: rbac-sync +subjects: +- kind: User + name: sync-kubernetes-rbac + apiGroup: rbac.authorization.k8s.io +roleRef: + kind: ClusterRole + name: rbac-sync + apiGroup: rbac.authorization.k8s.io +``` + +Apply the changes: + +```code +$ kubectl apply -f rbac-sync.yaml +clusterrole.rbac.authorization.k8s.io/rbac-sync created +clusterrolebinding.rbac.authorization.k8s.io/rbac-sync created +``` + +## Step 2/4. Set up Teleport + +In this step, you will configure Teleport to enable your API client application +to interact with your Kubernetes cluster. + +### Create a user and role for the client application + +Give the client application a Teleport user and role that can retrieve +information about a Kubernetes cluster that is registered with Teleport, +authenticate to the cluster, and create or update Teleport roles. 
+ +Create a file called `sync-kubernetes-rbac.yaml` with the following content: + +```yaml +kind: role +version: v7 +metadata: + name: sync-kubernetes-rbac +spec: + allow: + kubernetes_labels: + '*': '*' + kubernetes_users: + - sync-kubernetes-rbac + kubernetes_resources: + - kind: pod + name: '*' + namespace: '*' + rules: + - resources: ['kubernetes_cluster'] + verbs: ['read'] + - resources: ['role'] + verbs: ['create', 'update'] +--- +kind: user +metadata: + name: sync-kubernetes-rbac +spec: + roles: ['sync-kubernetes-rbac'] +version: v2 +``` + +Create the user and role: + +```code +$ tctl create -f sync-kubernetes-rbac.yaml +role 'sync-kubernetes-rbac' has been created +user "sync-kubernetes-rbac" has been created +``` + +### Enable impersonation of the client application + +As with all Teleport users, the Teleport Auth Service authenticates the +`sync-kubernetes-rbac` user by issuing short-lived TLS credentials. In this +case, we will request the credentials manually by *impersonating* the +`sync-kubernetes-rbac` role and user. + +If you are running a self-hosted Teleport Enterprise deployment and are using +`tctl` from the Auth Service host, you will already have impersonation +privileges. + +To grant your user impersonation privileges for `sync-kubernetes-rbac`, define a role +called `sync-kubernetes-rbac-impersonator` by pasting the following YAML document into +a file called `sync-kubernetes-rbac-impersonator.yaml`: + +```yaml +kind: role +version: v5 +metadata: + name: sync-kubernetes-rbac-impersonator +spec: + allow: + impersonate: + roles: + - sync-kubernetes-rbac + users: + - sync-kubernetes-rbac +``` + +Create the `sync-kubernetes-rbac-impersonator` role: + +```code +$ tctl create -f sync-kubernetes-rbac-impersonator.yaml +``` + +(!docs/pages/includes/add-role-to-user.mdx role="sync-kubernetes-rbac-impersonator"!) + +You will now be able to generate signed certificates for the `sync-kubernetes-rbac` +role and user. 
+ +### Install the Teleport Kubernetes Service + +We will enable your client application to communicate with your Kubernetes +cluster via the Teleport Kubernetes Service, which forwards requests after +authorizing them. While this step is not strictly necessary with a local +`minikube` cluster, it demonstrates one way to use Teleport to securely access +your external RBAC system's API. + +(!docs/pages/includes/kubernetes-access/helm/helm-repo-add.mdx!) + +Request a token that the Kubernetes Service will use to join your Teleport +cluster: + +```code +$ tctl tokens add --type=kube,app,discovery --format=text +``` + +Copy this token so you can use it when running the Teleport Kubernetes Service. + +Ensure that you are connected to the right Kubernetes cluster (logging into +Teleport earlier will have changed your Kubernetes context): + +```code +$ kubectl config use-context minikube +Switched to context "minikube". +``` + +Install the Teleport Kubernetes Service in your cluster, assigning to the host **and port** of your Teleport Proxy Service +(e.g., `mytenant.teleport.sh:443`) and to the token you +requested earlier: + +```code +$ helm install teleport-agent teleport/teleport-kube-agent \ + --set kubeClusterName=minikube \ + --set roles="kube\,app\,discovery" \ + --set proxyAddr= \ + --set authToken= \ + --create-namespace \ + --namespace=teleport-agent \ + --set labels.environment=demo \ + --version (=teleport.version=) +``` + +After a few seconds, verify that you have deployed the Teleport Kubernetes +Service by running the following command: + +```code +$ kubectl -n teleport-agent get pods +``` + +This should show that the Kubernetes Service is running: + +```text +NAME READY STATUS RESTARTS AGE +teleport-agent-0 1/1 Running 0 22s +``` + +`tsh` should indicate that the cluster has registered with Teleport: + +```code +$ tsh kube ls +Kube Cluster Name Labels Selected +----------------- ---------------- -------- +minikube environment=demo +``` + +## Step 3/4. 
Write the client application + +At this point, we have set up an external RBAC system to use for generating +Teleport roles and configured Teleport to allow our API client to interact with +our Kubernetes cluster and Teleport cluster. In this step, we will write our +client application. + +### Set up your Go project + +Download the source code for the API client application: + +```code +$ git clone --depth=1 https://github.com/gravitational/teleport -b branch/v(=teleport.major_version=) +$ cd teleport/examples/api-sync-roles +``` + +For the rest of this guide, we will show you how to set up the client +application and explore the ways it uses Teleport's API to automatically +generate Teleport roles. + +### Export identity files for the client application + +The `sync-kubernetes-rbac` user needs signed credentials in order to connect to +your Teleport cluster as well as your Kubernetes cluster. You will use the `tctl +auth sign` command to request these credentials for your API client. + +#### Connecting to your Teleport cluster + +The following `tctl auth sign` command impersonates the `sync-kubernetes-rbac` +user, generates signed credentials, and writes an identity file to the local +directory: + +```code +$ tctl auth sign --user=sync-kubernetes-rbac --out=auth.pem +``` + +The identity file, `auth.pem`, includes both TLS and SSH credentials. Your +client application uses the SSH credentials to connect to the Proxy Service, +which establishes a reverse tunnel connection to the Auth Service. The client +application uses this reverse tunnel, along with your TLS credentials, to +connect to the Auth Service's gRPC endpoint. + +#### Connecting to the Kubernetes cluster + +You will also need to give the client application a way to authenticate to your +Kubernetes cluster. To do this, use Teleport's certificate authority to sign +credentials for the `sync-kubernetes-rbac` user. 
Your API client will present +these credentials to authenticate to the Teleport Kubernetes Service, which will +proxy requests to the Kubernetes cluster. + +Run the following command, ensuring that includes +the host and port of your Proxy Service: + +```code +$ tctl auth sign --user=sync-kubernetes-rbac \ + --kube-cluster-name=minikube \ + --format=kubernetes \ + --proxy=https:// \ + --out=kubeconfig +``` + +### Imports + +In the `api-sync-roles` directory, open `main.go`, which contains the API client +program we demonstrate in this guide. + +Here are the packages our client application imports from Go's standard library: + +|Package|Description| +|---|---| +| `context`|Includes the `context.Context` type. `context.Context` is an abstraction for controlling long-running routines, such as connections to external services, that might fail or time out. Programs can cancel contexts or assign them timeouts and metadata.| +|`fmt`|Formatting data for printing, strings, or errors.| +|`io`|Dealing with I/O operations, e.g., reading files or network sockets.| +|`os`|Interacting with the operating system, e.g., to open files.| +|`time`|Dealing with time. 
We will use this to define a timeout for connecting to the Auth Service along with a ticker for executing our discovery logic in a loop.|
+
+The client imports the following third-party code:
+
+|Package|Description|
+|---|---|
+|`github.com/gravitational/teleport/api/client`|A library for authenticating to the Auth Service's gRPC API and making requests, aliased as `teleport`.|
+|`github.com/gravitational/teleport/api/types`|Types used in the Auth Service API, e.g., Application Service records.|
+|`github.com/gravitational/trace`|Presenting errors with more useful detail than the standard library provides.|
+|`google.golang.org/grpc`|The gRPC client and server library.|
+|`k8s.io/api/rbac/v1`|The Kubernetes RBAC API client library.|
+|`k8s.io/apimachinery/pkg/apis/meta/v1`|Code common to Kubernetes' API client libraries.|
+|`k8s.io/client-go/kubernetes`|Setting up a general-purpose Kubernetes client.|
+|`k8s.io/client-go/kubernetes/typed/rbac/v1`|Types for the Kubernetes RBAC API.|
+|`k8s.io/client-go/tools/clientcmd`|Another general-purpose Kubernetes client library.|
+
+### Constants
+
+The program defines constants in a visible location so, later on, it's easier to
+make them configurable outside the program:
+
+```go
+const (
+	proxyAddr         string = ""
+	initTimeout              = time.Duration(30) * time.Second
+	identityFilePath  string = "auth.pem"
+	kubeconfigPath    string = "kubeconfig"
+	clusterName       string = "minikube"
+	roleAnnotationKey string = "create-teleport-role"
+)
+```
+
+We will use these constants later in the program. They define some values we may
+want to change later, including:
+
+|Constant|Description|
+|---|---|
+|`proxyAddr`|The host and port of the Teleport Proxy Service, e.g., `mytenant.teleport.sh:443`, which we will use to connect the client to your cluster. **Assign this to your own Proxy Service's host and port:** |
+|`initTimeout`|The timeout for connecting to the Teleport cluster.
We have defined this as 30 seconds.|
+|`identityFilePath`|The path to the Teleport identity file you created earlier.|
+|`kubeconfigPath`|The path to the Kubernetes credentials file you created earlier.|
+|`clusterName`|The name of the Kubernetes cluster you will fetch RBAC resources from. In this guide, the cluster's name is `minikube`.|
+|`roleAnnotationKey`|In Kubernetes, annotations are arbitrary key/value pairs that you can add to resources. The role and cluster role bindings we created earlier have the annotation key we specify here so our client application can fetch them.|
+
+### Initializing a Kubernetes RBAC client
+
+To contact the Kubernetes API, we will need to set up an HTTP client. The client
+authenticates to the API using mutual TLS, loading a client certificate,
+certificate authority, and private key from the file at `kubeconfigPath`.
+Earlier in the guide, we requested this from the Teleport Auth Service.
+
+The program sets up a Kubernetes API client with the `getRBACClient` function:
+
+```go
+func getRBACClient() (v1.RbacV1Interface, error) {
+	f, err := os.Open(kubeconfigPath)
+	if err != nil {
+		return nil, trace.Wrap(err)
+	}
+
+	kc, err := io.ReadAll(f)
+	if err != nil {
+		return nil, trace.Wrap(err)
+	}
+	n, err := clientcmd.RESTConfigFromKubeConfig(kc)
+	if err != nil {
+		return nil, trace.Wrap(err)
+	}
+
+	c, err := kubernetes.NewForConfig(n)
+	if err != nil {
+		return nil, trace.Wrap(err)
+	}
+
+	return c.RbacV1(), nil
+}
+```
+
+`getRBACClient` opens and reads the Kubernetes credentials file at
+`kubeconfigPath`, then uses the file to set up a Kubernetes API client
+configuration (`clientcmd.RESTConfigFromKubeConfig(kc)`) and, with that, an HTTP
+client (`kubernetes.NewForConfig(n)`).
+
+Finally, it returns an interface to the Kubernetes API dedicated to role-based
+access controls, which the rest of the program uses to interact with your
+Kubernetes cluster.
+
+### Creating a Teleport role from a Kubernetes cluster role binding
+
+The `createTeleportRoleFromClusterRoleBinding` function creates a Teleport role
+from a Kubernetes cluster role binding by populating fields in the former based
+on fields in the latter:
+
+```go
+func createTeleportRoleFromClusterRoleBinding(teleport *client.Client, k types.KubeCluster, r rbacv1.ClusterRoleBinding) error {
+	if e, ok := r.Annotations[roleAnnotationKey]; !ok || e != "true" {
+		return nil
+	}
+
+	role := types.RoleV6{}
+	role.SetMetadata(types.Metadata{
+		Name: k.GetName() + "-" + r.RoleRef.Name + "-" + "cluster",
+	})
+
+	b := k.GetStaticLabels()
+	labels := make(types.Labels)
+	for k, v := range b {
+		labels[k] = []string{v}
+	}
+	role.SetKubernetesLabels(types.Allow, labels)
+	role.SetKubeResources(types.Allow, []types.KubernetesResource{
+		types.KubernetesResource{
+			Kind:      "pod",
+			Namespace: "*",
+			Name:      "*",
+		},
+	})
+
+	var g []string
+	var u []string
+	for _, s := range r.Subjects {
+		if s.Kind == "User" || s.Kind == "ServiceAccount" {
+			u = append(u, s.Name)
+			continue
+		}
+		if s.Kind == "Group" {
+			g = append(g, s.Name)
+			continue
+		}
+	}
+	role.SetKubeGroups(types.Allow, g)
+	role.SetKubeUsers(types.Allow, u)
+	if err := teleport.UpsertRole(
+		context.Background(),
+		&role,
+	); err != nil {
+		return trace.Wrap(err)
+	}
+	fmt.Println("Upserted Teleport role:", role.GetName())
+	return nil
+}
+```
+
+To avoid unexpected behavior, this function ignores Kubernetes-managed roles and
+roles for internal systems like the Teleport Kubernetes Service. This function
+checks the cluster role binding's metadata for an annotation with a specific
+key, `roleAnnotationKey`, and ignores any resource where this key is not set to
+`"true"`.
+
+We also want a quick way to identify roles we created with this program.
To do
+so, this function names all roles it generates based on cluster role bindings
+according to the following attributes:
+
+- Kubernetes cluster name
+- Kubernetes role name
+- The suffix `-cluster`
+
+In our demo application, this function will create a Teleport role called
+`minikube-pod-ops-cluster`.
+
+The rest of the function assigns fields to a `types.RoleV6`, the Teleport API
+client's role type, based on the cluster role binding:
+
+|Role field|Purpose|How we assign it|
+|---|---|---|
+|`allow.kubernetes_labels`|Labels for Teleport-registered Kubernetes clusters that a user with this role is allowed to access.|Based on the Teleport-registered Kubernetes cluster that the cluster role binding belongs to.|
+|`allow.kubernetes_resources`|Kubernetes pods in specific namespaces that a user with this role is allowed to access.|Allow access to all namespaces, since cluster role bindings are not restricted by namespace.|
+|`allow.kubernetes_users` and `allow.kubernetes_groups`|The Kubernetes groups and users that a Teleport user with this role will assume when interacting with the Kubernetes cluster.|Supply the names of any users or groups connected to the cluster role binding.|
+
+### Creating a Teleport role from a Kubernetes role binding
+
+As with cluster role bindings, this program will also create Teleport roles
+based on Kubernetes role bindings:
+
+```go
+func createTeleportRoleFromRoleBinding(teleport *client.Client, k types.KubeCluster, r rbacv1.RoleBinding) error {
+	if e, ok := r.Annotations[roleAnnotationKey]; !ok || e != "true" {
+		return nil
+	}
+
+	role := types.RoleV6{}
+	role.SetMetadata(types.Metadata{
+		Name: k.GetName() + "-" + r.RoleRef.Name + "-" + r.Namespace,
+	})
+
+	b := k.GetStaticLabels()
+	labels := make(types.Labels)
+	for k, v := range b {
+		labels[k] = []string{v}
+	}
+	role.SetKubernetesLabels(types.Allow, labels)
+	role.SetKubeResources(types.Allow, []types.KubernetesResource{
+		types.KubernetesResource{
+			Kind:      "pod",
Namespace: r.Namespace, + Name: "*", + }, + }) + var g []string + var u []string + for _, s := range r.Subjects { + if s.Kind == "User" || s.Kind == "ServiceAccount" { + u = append(u, s.Name) + continue + } + if s.Kind == "Group" { + g = append(g, s.Name) + continue + } + } + role.SetKubeGroups(types.Allow, g) + role.SetKubeUsers(types.Allow, u) + + if err := teleport.UpsertRole( + context.Background(), + &role, + ); err != nil { + return trace.Wrap(err) + } + fmt.Println("Upserted Teleport role:", role.GetName()) + return nil +} +``` + +While the overall behavior of this function is the same as +`createTeleportRoleFromClusterRoleBinding`, Kubernetes role bindings require +some differences in how we assign fields to Teleport roles: + +- When setting the name of the role, we use the role binding's namespace as the + suffix, rather than `-cluster`, to indicate the namespace that this role + applies to. +- In the role's `kubernetes_resources` field, the value has the same namespace + as the role binding, rather than applying to all namespaces. 
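The naming convention used by both functions can be sketched as a small helper. This helper is hypothetical (the demo program builds the names inline), but it produces the same results for the bindings in this guide:

```go
package main

import "fmt"

// teleportRoleName builds a generated role's name from the Kubernetes
// cluster name, the bound role's name, and either the binding's
// namespace (role bindings) or the literal suffix "cluster"
// (cluster role bindings, which are not namespaced).
func teleportRoleName(cluster, role, namespace string) string {
	suffix := namespace
	if suffix == "" {
		suffix = "cluster"
	}
	return cluster + "-" + role + "-" + suffix
}

func main() {
	// The two roles generated in this guide's demo environment.
	fmt.Println(teleportRoleName("minikube", "pod-ops", ""))       // minikube-pod-ops-cluster
	fmt.Println(teleportRoleName("minikube", "pod-reader", "app")) // minikube-pod-reader-app
}
```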
+ +### Creating Teleport roles based on Kubernetes resources + +Now that we have functions to create Teleport roles based on individual +Kubernetes RBAC resources, we can fetch all RBAC resources from our Kubernetes +cluster and call these functions: + +```go +func createTeleportRolesForKubeCluster(teleport *client.Client, k types.KubeCluster) error { + rbac, err := getRBACClient() + if err != nil { + return trace.Wrap(err) + } + + crb, err := rbac.ClusterRoleBindings().List( + context.Background(), + metav1.ListOptions{}, + ) + if err != nil { + return trace.Wrap(err) + } + + for _, i := range crb.Items { + if err := createTeleportRoleFromClusterRoleBinding(teleport, k, i); err != nil { + return trace.Wrap(err) + } + } + + rb, err := rbac.RoleBindings("").List( + context.Background(), + metav1.ListOptions{}, + ) + if err != nil { + return trace.Wrap(err) + } + + for _, i := range rb.Items { + if err := createTeleportRoleFromRoleBinding(teleport, k, i); err != nil { + return trace.Wrap(err) + } + } + return nil +} +``` + +`createTeleportRolesForKubeCluster` takes a Teleport client and a +Teleport-registered Kubernetes cluster. It calls the `getRBACClient` function we +defined earlier to set up a client for the Kubernetes cluster. It then: + +- Lists Kubernetes cluster role bindings and creates a Teleport role for each one. +- Lists Kubernetes role bindings and creates a Teleport role for each one. 
+ +### Initializing clients and starting the application + +The functions we declared earlier require a Teleport API client and a +Teleport-registered Kubernetes cluster, and we initialize these in the +entrypoint of the program, the `main` function: + +```go +func main() { + ctx, cancel := context.WithTimeout(context.Background(), initTimeout) + defer cancel() + creds := client.LoadIdentityFile(identityFilePath) + + teleport, err := client.New(ctx, client.Config{ + Addrs: []string{proxyAddr}, + Credentials: []client.Credentials{creds}, + }) + if err != nil { + panic(err) + } + fmt.Println("Connected to Teleport") + + ks, err := teleport.GetKubernetesServers(context.Background()) + if err != nil { + panic(err) + } + for _, k := range ks { + if k.GetCluster().GetName() != clusterName { + continue + } + fmt.Println("Retrieved Kubernetes cluster", clusterName) + + if err := createTeleportRolesForKubeCluster(teleport, k.GetCluster()); err != nil { + panic(err) + } + fmt.Println("Created roles for Kubernetes cluster", clusterName) + return + } + panic("Unable to locate a Kubernetes Service instance for " + clusterName) +} +``` + +`client` is Teleport's library for setting up an API client. Our program +initializes a Teleport client by calling `client.LoadIdentityFile` to obtain a +`client.Credentials`. It then uses the `client.Credentials` to call +`client.New`, which connects to the Teleport Proxy Service specified in the +`Addrs` field using the provided identity file. + + + +This program does not validate your credentials or Teleport cluster address.
+Make sure that: + +- The identity file you exported earlier does not have an expired TTL +- The value you supplied to the `Addrs` field in `client.Config` includes both + the host **and** the web port of your Teleport Proxy Service, e.g., + `mytenant.teleport.sh:443` + + + +After initializing a Teleport client, the `main` function fetches all Kubernetes +servers registered with Teleport (`teleport.GetKubernetesServers`) and checks if +there is a registered Kubernetes cluster that matches the one you specified. + +If a matching Kubernetes cluster exists, the code calls the +`createTeleportRolesForKubeCluster` function we defined earlier. If not, the +program prints an error message and a stack trace by calling Go's built-in +`panic` function. + +## Step 4/4. Test your client application + +To test the client application, start it up from within its project directory: + +```code +$ go run main.go +``` + +You should see the following output: + +```text +Connected to Teleport +Retrieved Kubernetes cluster minikube +Upserted Teleport role: minikube-pod-ops-cluster +Upserted Teleport role: minikube-pod-reader-app +Created roles for Kubernetes cluster minikube +``` + +Examine the new `minikube-pod-ops-cluster` role by running the command below: + +```code +$ tctl get roles/minikube-pod-ops-cluster +``` + +You should see output similar to the following: + +```yaml +kind: role +metadata: + id: 1678732494974032643 + name: minikube-pod-ops-cluster +spec: + allow: + kubernetes_groups: + - ops + kubernetes_labels: + environment: demo + kubernetes_resources: + - kind: pod + name: '*' + namespace: '*' + deny: {} + options: + cert_format: standard + create_host_user: false + desktop_clipboard: true + desktop_directory_sharing: true + enhanced_recording: + - command + - network + forward_agent: false + idp: + saml: + enabled: true + max_session_ttl: 30h0m0s + pin_source_ip: false + ssh_port_forwarding: + remote: + enabled: true + local: + enabled: true + record_session: +
default: best_effort + desktop: true + ssh_file_copy: true +version: v7 +``` + +Compare this with the `minikube-pod-reader-app` role, which you can retrieve +with the following command: + +```code +$ tctl get roles/minikube-pod-reader-app +``` + +Here is the role we created: + +```yaml +kind: role +metadata: + id: 1678732495284493075 + name: minikube-pod-reader-app +spec: + allow: + kubernetes_groups: + - app-developer + kubernetes_labels: + environment: demo + kubernetes_resources: + - kind: pod + name: '*' + namespace: app + deny: {} + options: + cert_format: standard + create_host_user: false + desktop_clipboard: true + desktop_directory_sharing: true + enhanced_recording: + - command + - network + forward_agent: false + idp: + saml: + enabled: true + max_session_ttl: 30h0m0s + pin_source_ip: false + ssh_port_forwarding: + remote: + enabled: true + local: + enabled: true + record_session: + default: best_effort + desktop: true + ssh_file_copy: true +version: v7 +``` + +Since role bindings are namespaced, this role only allows access to pods in the +`app` namespace, where this role binding was applied. The Kubernetes Service +forwards traffic from users with this role using the `app-developer` Kubernetes +group. + +## Next steps + +We have implemented a Teleport API client that generates Teleport roles based on +the Kubernetes RBAC system. You can use Teleport's API to build similar +applications that interact with other RBAC systems, such as GitHub teams or +groups within your database management system. + +Here are some starting points for building out your client application. + +### Learn more about Teleport roles + +To write an effective client application that generates Teleport roles from an +external RBAC solution, you should understand the role fields that apply to +infrastructure resources you want to manage access to. 
+ +See the links below for guides to fields related to different infrastructure +resources: + +- [Servers](../../enroll-resources/server-access/rbac.mdx) +- [Databases](../../enroll-resources/database-access/rbac.mdx) +- [Kubernetes clusters](../../enroll-resources/kubernetes-access/controls.mdx) +- [Windows Desktops](../../enroll-resources/desktop-access/rbac.mdx) +- [Applications](../../enroll-resources/application-access/configuration/controls.mdx) + +For general guidance, read our [Access Controls +Reference](../../reference/access-controls/roles.mdx). + +### Register your cloud provider with Teleport + +You can protect cloud provider APIs with Teleport and instruct your API client +applications to connect to these APIs via the Teleport Application Service. +Using Teleport-protected cloud provider APIs, you can generate Teleport roles +based on your cloud provider's RBAC solution. + +Read our guides for how to set up the Teleport Application Service for cloud +provider APIs: + +- [AWS](../../enroll-resources/application-access/cloud-apis/aws-console.mdx) +- [Google Cloud](../../enroll-resources/application-access/cloud-apis/google-cloud.mdx) +- [Azure](../../enroll-resources/application-access/cloud-apis/azure.mdx) + +### Consult examples + +The Teleport code repository contains [examples of production-ready Teleport API +clients](https://github.com/gravitational/teleport/tree/v(=teleport.version=)/examples/). +While we currently do not maintain plugins that generate Teleport +roles, you can use these examples to see how to implement configuration +parsing, retries, and other tasks. + +### Provision the client application with short-lived credentials + +In this example, we used the `tctl auth sign` command to fetch credentials for +the program you wrote. For production usage, we recommend provisioning +short-lived credentials via Machine & Workload Identity, which reduces the risk +of these credentials becoming stolen. 
View our [Machine & Workload Identity +documentation](../../machine-workload-identity/introduction.mdx) to learn more. diff --git a/docs/pages/zero-trust-access/authentication/authentication.mdx b/docs/pages/zero-trust-access/authentication/authentication.mdx new file mode 100644 index 0000000000000..9dbe6df9dee6a --- /dev/null +++ b/docs/pages/zero-trust-access/authentication/authentication.mdx @@ -0,0 +1,19 @@ +--- +title: Authentication and Session Joining +description: Provides information on configuring the way users authenticate to your Teleport cluster or join an existing session. +sidebar_position: 3 +template: "no-toc" +tags: + - zero-trust +--- + +Teleport gives you control over how and when users authenticate to your cluster. +For example, you can configure Teleport to allow passwordless authentication +with a hardware key or trust an external identity provider. You can also +configure whether a user can join an existing session with a Kubernetes cluster +or SSH server, and whether the user must authenticate again in order to do so. + +The guides in this section show you how to configure Teleport authentication for +your organization's needs: + + diff --git a/docs/pages/admin-guides/access-controls/guides/hardware-key-support.mdx b/docs/pages/zero-trust-access/authentication/hardware-key-support.mdx similarity index 96% rename from docs/pages/admin-guides/access-controls/guides/hardware-key-support.mdx rename to docs/pages/zero-trust-access/authentication/hardware-key-support.mdx index 7e2d1f524119e..1b4f6e00cf4d6 100644 --- a/docs/pages/admin-guides/access-controls/guides/hardware-key-support.mdx +++ b/docs/pages/zero-trust-access/authentication/hardware-key-support.mdx @@ -1,8 +1,18 @@ --- title: Hardware Key Support description: Hardware Key Support +tags: + - how-to + - zero-trust + - privileged-access +enterprise: Hardware key support --- +This guide explains how to configure Teleport authentication using +hardware-based private keys. 
+ +## How it works + Hardware Key Support requires Teleport Enterprise. @@ -64,7 +74,7 @@ like `tctl edit`. With touch required, hardware key support provides better secu ## Prerequisites -(!docs/pages/includes/commercial-prereqs-tabs.mdx!) +(!docs/pages/includes/edition-prereqs-tabs.mdx edition="Teleport Enterprise"!) - A series 5+ YubiKey @@ -233,7 +243,7 @@ While Connect is running, touch and PIN prompts for `tsh` and `tctl` commands wi displayed in Teleport Connect instead. These prompts will be brought to the foreground to raise attention to the user. -![Agent](../../../../img/access-controls/hardware-key-support/agent.png) +![Agent](../../../img/access-controls/hardware-key-support/agent.png) ## PIN Caching @@ -244,7 +254,7 @@ common use cases: * Proxying kubectl commands, database queries, or app requests through a Teleport local proxy (`tsh proxy kube|db|app`). * Running automated scripts which run `tsh` commands in bulk. -* Connecting to TCP applications with [VNet](../../../connect-your-client/vnet.mdx). +* Connecting to TCP applications with [VNet](../../connect-your-client/teleport-clients/vnet.mdx). * General Teleport Connect usage. Instead, you can set the cluster-wide `hardware_key.pin_cache_ttl` configuration option to enable diff --git a/docs/pages/zero-trust-access/authentication/headless.mdx b/docs/pages/zero-trust-access/authentication/headless.mdx new file mode 100644 index 0000000000000..c7b0f16941e16 --- /dev/null +++ b/docs/pages/zero-trust-access/authentication/headless.mdx @@ -0,0 +1,257 @@ +--- +title: Headless Authentication +description: Headless Authentication +tags: + - how-to + - zero-trust + - privileged-access +--- + +Headless Authentication provides a secure way to authenticate with Teleport on +a machine that does not have the ability to authenticate directly with the +required mechanisms. 
+ +For example: + +- Authenticating with [WebAuthn](../management/security/idp-compromise.mdx#set-up-cluster-wide-webauthn) or [SSO MFA](../sso/sso-for-mfa.mdx) from a remote dev box +- Authenticating with WebAuthn on a machine without a WebAuthn-compatible browser +- Authenticating with SSO MFA from a browser that is not supported by your SSO provider + + + Headless Authentication only supports the following `tsh` commands: + + - `tsh ls` + - `tsh ssh` + - `tsh scp` + - `tsh proxy kube` + + In the future, Headless Authentication will be extended to other `tsh` commands. + + +## How it works + +In the Headless Authentication flow, a user on a remote machine requests headless +authentication when running `tsh` commands. `tsh` sends a request to an API path +on the Teleport Proxy Service, `/webapi/login/headless`, and the Teleport Proxy +Service sends a request to the Teleport Auth Service to store a headless +authentication request on its backend. `tsh` then obtains the ID of the request +and prints a URL containing the ID in the user's terminal. The user accesses the +URL in their browser and completes an MFA flow with the Teleport Auth Service. + +Once the Teleport Auth Service authenticates the user, `tsh` generates a new +private key in memory and shares only its public key in order to obtain user +certificates. `tsh` then holds user certificates in memory with a one-minute TTL +to reduce the impact of exfiltration. + +## Prerequisites + +- A Teleport cluster with [WebAuthn](../management/security/idp-compromise.mdx#set-up-cluster-wide-webauthn) or [SSO MFA](../sso/sso-for-mfa.mdx) configured. +- Machines used for Headless Authentication have the [Linux](../../installation/installation.mdx), [macOS](../../installation/installation.mdx), or [Windows](../../installation/installation.mdx) `tsh` binary installed.
+- Machines used to approve Headless Authentication requests have a web browser with [WebAuthn support]( + https://developers.yubico.com/WebAuthn/WebAuthn_Browser_Support/) or the `tsh` binary installed. +- Optional: Teleport Connect for [seamless Headless Authentication approval](#optional-teleport-connect). + +## Step 1/3. Configuration + +A Teleport cluster capable of WebAuthn or SSO MFA is automatically capable of +Headless Authentication without any additional configuration. + +
+Optional: make Headless Authentication the default auth connector + +To make Headless Authentication the default authentication method for your Teleport +Cluster, add `connector_name: headless` to your cluster configuration. + +Create a `cap.yaml` file or get the existing configuration using +`tctl get cluster_auth_preference`: + +```yaml +kind: cluster_auth_preference +version: v2 +metadata: + name: cluster-auth-preference +spec: + type: local + second_factors: ["webauthn"] + webauthn: + rp_id: example.com + connector_name: headless # headless by default +``` + +Update the configuration: + +```code +$ tctl create -f cap.yaml +# cluster auth preference has been updated +``` +
+ +
+Alternative: disable Headless Authentication + +Headless Authentication is enabled automatically when WebAuthn or SSO MFA is configured. +If you want to forbid Headless Authentication in your cluster, add `headless: false` to your +configuration. + +Create a `cap.yaml` file or get the existing configuration using +`tctl get cluster_auth_preference`: + +```yaml +kind: cluster_auth_preference +version: v2 +metadata: + name: cluster-auth-preference +spec: + type: local + second_factors: ["webauthn"] + webauthn: + rp_id: example.com + headless: false # disable Headless Authentication +``` + +Update the configuration: + +```code +$ tctl create -f cap.yaml +# cluster auth preference has been updated +``` + +
+ +## Step 2/3. Initiate Headless Authentication + +Run a headless `tsh` command with the `--headless` flag. This will initiate +headless authentication, printing a URL and a `tsh` command. + +```code +$ tsh ls --headless --proxy=proxy.example.com --user=alice +# Complete headless authentication in your local web browser: +# +# https://proxy.example.com:3080/web/headless/86172f78-af7c-5935-a7c1-ed06b94f17dc +# +# or execute this command in your local terminal: +# +# tsh headless approve --user=alice --proxy=proxy.example.com 86172f78-af7c-5935-a7c1-ed06b94f17dc +``` + +## Step 3/3. Approve Headless Authentication + +To approve the headless authentication, click or copy and paste the URL printed by +`tsh` in your local web browser. You will be prompted to approve the login with +MFA verification. Once approved, your initial `tsh --headless` command +should continue as if you had logged in locally. + +Unlike a standard login session, headless sessions are only available for the +lifetime of a single `tsh` request.
This means that for each `tsh --headless` +command, you will need to go through the Headless Authentication flow: + +### Example: Listing SSH servers +```code +$ tsh ls --headless --proxy=proxy.example.com --user=alice +# Complete headless authentication in your local web browser: +# +# https://proxy.example.com:3080/web/headless/86172f78-af7c-5935-a7c1-ed06b94f17dc +# +# or execute this command in your local terminal: +# +# tsh headless approve --user=alice --proxy=proxy.example.com 86172f78-af7c-5935-a7c1-ed06b94f17dc +# # User approves through link +# Node Name Address Labels +# --------- -------------- ----------- +# server01 127.0.0.1:3022 arch=x86_64 +``` + +### Example: Initiating an SSH session +```code +$ tsh ssh --headless --proxy=proxy.example.com --user=alice alice@server01 +# Complete headless authentication in your local web browser: +# +# https://proxy.example.com:3080/web/headless/864cccd9-2425-46d9-a9f2-636387e66ebf +# +# or execute this command in your local terminal: +# +# tsh headless approve --user=alice --proxy=proxy.example.com 864cccd9-2425-46d9-a9f2-636387e66ebf +# # User approves through link and an SSH terminal starts +alice@server01 $ +``` + + + The Teleport user (the `--user` parameter) is the Teleport user requesting the Headless Authentication activity. + If the `--user` parameter is not provided and no environment variable sets it, the OS user of the machine's terminal is used. + + The login username (the `--login` parameter, or the `login` in `login@hostname`) for `tsh ssh` commands is the user + to open an SSH session as. If no login username for the SSH session is set, the OS terminal username is used. + A Teleport user must have access to that login user on that server or they will receive + an access denied message. A user can still receive an access denied message after their Headless Authentication + activity is approved, since the same access rights are granted or denied as when running from + your local terminal.
+ + +### Example: Kubernetes +```code +$ tsh proxy kube --headless --proxy=proxy.example.com --user=alice example-cluster +# Complete headless authentication in your local web browser: +# +# https://proxy.example.com:3080/web/headless/7f7e1369-e45b-5b7b-bb17-84360873acaf +# +# or execute this command in your local terminal: +# +# tsh headless approve --user=alice --proxy=proxy.example.com 7f7e1369-e45b-5b7b-bb17-84360873acaf +# # User approves through link and the local proxy is created: + +Preparing the following Teleport Kubernetes clusters: +Teleport Cluster Name Kube Cluster Name +--------------------- ----------------- +teleport-cluster example-cluster + +Started local proxy for Kubernetes on 127.0.0.1:1234 in the background +and kubectl is set up to work with it. Try issuing a command, for example "kubectl get namespaces" + +$ kubectl get pods +NAMESPACE NAME READY STATUS RESTARTS AGE +teleport-agent teleport-agent-0 1/1 Running 0 2d12h +``` + +## Optional: Teleport Connect + +Teleport Connect can also be used to approve Headless Authentication logins. Teleport +Connect will automatically detect the Headless Authentication login attempt and allow +you to approve or cancel the request. + +![Headless Confirmation](../../../img/headless/confirmation.png) + +You will be prompted to tap your MFA key to complete the approval process. + +![Headless Authentication Approval](../../../img/headless/approval.png) + +## Troubleshooting + +### "WARN: Failed to lock system memory for headless login: ..." + +When using Headless Authentication, `tsh` does not write private key and certificate data +to disk (`~/.tsh`). Instead, `tsh` holds these secrets in memory for the duration of +the request. Additionally, it will try to lock the process memory to further protect +the secrets from being stolen by other users on a shared machine.
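To see whether the memory lock is likely to succeed on your machine before running a headless command, you can inspect the current limit in your shell (a quick check; `ulimit -l` reports the maximum locked-memory size in kilobytes, or `unlimited`):

```shell
# Print this shell's max locked-memory limit (kilobytes, or "unlimited").
ulimit -l
```

If this prints a small number, `tsh` may emit one of the warnings covered next.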
+ +Below are some of the specific warning messages you may run into and how to fix them: + +#### "operation not permitted" OR "cannot allocate memory" + +In order to lock the process memory, your OS user must have permission to lock +the amount of memory needed. Use `ulimit -l` to check your OS user's current limit. +The exact amount of memory needed may vary from system to system, so we recommend +updating your ulimit to unlimited, either by running `ulimit -l unlimited` or by adding +the line `* hard memlock unlimited` to your `/etc/security/limits.conf`. + +#### "memory locking is not supported on non-linux operating systems" + +The `mlockall` syscall is only supported on Linux operating systems. This means +that on other operating systems, the memory lock attempt will always fail and +output the warning. We recommend only using Headless Authentication on Linux machines +for the best level of security on shared machines. + +#### Disable mlock + +If the above solutions are not feasible in your environment, you can also disable +the memory locking requirement by setting the `--mlock` flag or `TELEPORT_MLOCK_MODE` +environment variable to `off` or `best_effort`. This is not recommended in production +environments on shared systems where a memory swap attack is possible. diff --git a/docs/pages/admin-guides/access-controls/guides/impersonation.mdx b/docs/pages/zero-trust-access/authentication/impersonation.mdx similarity index 93% rename from docs/pages/admin-guides/access-controls/guides/impersonation.mdx rename to docs/pages/zero-trust-access/authentication/impersonation.mdx index 84c9bd157c3c5..aa1b925cc0b82 100644 --- a/docs/pages/admin-guides/access-controls/guides/impersonation.mdx +++ b/docs/pages/zero-trust-access/authentication/impersonation.mdx @@ -1,6 +1,10 @@ --- title: Impersonating Teleport Users +sidebar_label: Impersonation description: How to issue short-lived certs on behalf of Teleport users using impersonation.
+tags: + - how-to + - zero-trust --- Sometimes users need to create short-lived certificates for non-interactive @@ -11,6 +15,16 @@ users and robots to create short-lived certs for other users and roles. Let's explore how interactive user Alice can create credentials for a non-interactive CI/CD user Jenkins and a security scanner. +## How it works + +A Teleport role allows a Teleport user to impersonate other roles or users. When +a Teleport user authenticates with a role that allows impersonation, they can +execute the `tctl auth sign` command to instruct the Teleport Auth Service to +sign a certificate for another Teleport user. As with all Teleport user +certificates, the certificate written by `tctl auth sign` includes the names of +a Teleport user and that user's roles. TLS or SSH clients can then load the +certificate in order to authenticate to Teleport as the impersonated user. + ## Prerequisites (!docs/pages/includes/edition-prereqs-tabs.mdx!) @@ -334,6 +348,6 @@ Here is an explanation of the fields used in the `where` conditions within this | `impersonate_role.metadata.labels["